{"text": "# Loops\n\n## for Loops\n\nA [for loop](https://docs.python.org/3/reference/compound_stmts.html#for) allows us to execute a block of code multiple times with some parameters updated each time through the loop. A `for` loop begins with the `for` statement:\n\n\n```python\niterable = [1,2,3]\nfor item in iterable:\n # code block indented 4 spaces\n print(item)\n```\n\n 1\n 2\n 3\n\n\nThe main points to observe are:\n\n* `for` and `in` keywords\n* `iterable` is a sequence object such as a list, tuple or range\n* `item` is a variable which takes each value in `iterable`\n* end `for` statement with a colon `:`\n* code block indented 4 spaces which executes once for each value in `iterable`\n\nFor example, let's print $n^2$ for $n$ from 0 to 5:\n\n\n```python\nfor n in [0,1,2,3,4,5]:\n square = n**2\n print(n,'squared is',square)\nprint('The for loop is complete!')\n```\n\n 0 squared is 0\n 1 squared is 1\n 2 squared is 4\n 3 squared is 9\n 4 squared is 16\n 5 squared is 25\n The for loop is complete!\n\n\nCopy and paste this code and any of the examples below into the [Python visualizer](http://www.pythontutor.com/visualize.html#mode=edit) to see each step in a `for` loop!\n\n## while Loops\n\nWhat if we want to execute a block of code multiple times but we don't know exactly how many times? We can't write a `for` loop because this requires us to set the length of the loop in advance. This is a situation when a [while loop](https://en.wikipedia.org/wiki/While_loop#Python) is useful.\n\nThe following example illustrates a [while loop](https://docs.python.org/3/tutorial/introduction.html#first-steps-towards-programming):\n\n\n```python\nn = 5\nwhile n > 0:\n print(n)\n n = n - 1\n```\n\n 5\n 4\n 3\n 2\n 1\n\n\nThe main points to observe are:\n\n* `while` keyword\n* a logical expression followed by a colon `:`\n* loop executes its code block if the logical expression evaluates to `True`\n* update the variable in the logical expression each time through the loop\n* **BEWARE!** If the logical expression *always* evaluates to `True`, then you get an [infinite loop](https://en.wikipedia.org/wiki/While_loop#Python)!\n\nWe prefer `for` loops over `while` loops because of the last point. A `for` loop will never result in an infinite loop. If a loop can be constructed with `for` or `while`, we'll always choose `for`.\n\n## Constructing Sequences\n\nThere are several ways to construct a sequence of values and to save them as a Python list. We have already seen Python's list comprehension syntax. There is also the `append` list method described below.\n\n### Sequences by a Formula\n\nIf a sequence is given by a formula then we can use a list comprehension to construct it. For example, the sequence of squares from 1 to 100 can be constructed using a list comprehension:\n\n\n```python\nsquares = [d**2 for d in range(1,11)]\nprint(squares)\n```\n\n [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]\n\n\nHowever, we can achieve the same result with a `for` loop and the `append` method for lists:\n\n\n```python\n# Intialize an empty list\nsquares = []\nfor d in range(1,11):\n # Append the next square to the list\n squares.append(d**2)\nprint(squares)\n```\n\n [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]\n\n\nIn fact, the two examples above are equivalent. The purpose of list comprehensions is to simplify and compress the syntax into a one-line construction.\n\n### Recursive Sequences\n\nWe can only use a list comprehension to construct a sequence when the sequence values are defined by a formula. 
But what if we want to construct a sequence where the next value depends on previous values? This is called a [recursive sequence](https://en.wikipedia.org/wiki/Recursion).\n\nFor example, consider the [Fibonacci sequence](https://en.wikipedia.org/wiki/Fibonacci_number):\n\n$$\nx_1 = 1, x_2 = 1, x_3 = 2, x_4 = 3, x_5 = 5, ...\n$$\n\nwhere\n\n$$\nx_{n} = x_{n-1} + x_{n-2}\n$$\n\nWe can't use a list comprehension to build the list of Fibonacci numbers, and so we must use a `for` loop with the `append` method instead. For example, the first 15 Fibonacci numbers are:\n\n\n```python\nfibonacci_numbers = [1,1]\nfor n in range(2,15):\n fibonacci_n = fibonacci_numbers[n-1] + fibonacci_numbers[n-2]\n fibonacci_numbers.append(fibonacci_n)\n print(fibonacci_numbers)\n```\n\n [1, 1, 2]\n [1, 1, 2, 3]\n [1, 1, 2, 3, 5]\n [1, 1, 2, 3, 5, 8]\n [1, 1, 2, 3, 5, 8, 13]\n [1, 1, 2, 3, 5, 8, 13, 21]\n [1, 1, 2, 3, 5, 8, 13, 21, 34]\n [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]\n [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]\n [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]\n [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233]\n [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377]\n [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610]\n\n\n## Computing Sums\n\nSuppose we want to compute the sum of a sequence of numbers $x_0$, $x_1$, $x_2$, $x_3$, $\\dots$, $x_n$. There are at least two approaches:\n\n1. Compute the entire sequence, store it as a list $[x_0,x_1,x_2,\\dots,x_n]$ and then use the built-in function `sum`.\n2. Initialize a variable with value 0 (and name it `result` for example), create and add each element in the sequence to `result` one at a time.\n\nThe advantage of the second approach is that we don't need to store all the values at once. For example, here are two ways to write a function which computes the sum of squares.\n\nFor the first approach, use a list comprehension:\n\n\n```python\ndef sum_of_squares_1(N):\n \"Compute the sum of squares 1**2 + 2**2 + ... + N**2.\"\n return sum([n**2 for n in range(1,N + 1)])\n```\n\n\n```python\nsum_of_squares_1(4)\n```\n\n\n\n\n 30\n\n\n\nFor the second approach, use a `for` loop with the initialize-and-update construction:\n\n\n```python\ndef sum_of_squares_2(N):\n \"Compute the sum of squares 1**2 + 2**2 + ... + N**2.\"\n # Initialize the output value to 0\n result = 0\n for n in range(1,N + 1):\n # Update the result by adding the next term\n result = result + n**2\n return result\n```\n\n\n```python\nsum_of_squares_2(4)\n```\n\n\n\n\n 30\n\n\n\nAgain, both methods yield the same result however the second uses less memory!\n\n## Computing Products\n\nThere is no built-in function to compute products of sequences therefore we'll use an initialize-and-update construction similar to the example above for computing sums.\n\nWrite a function called `factorial` which takes a positive integer $N$ and return the factorial $N!$.\n\n\n```python\ndef factorial(N):\n \"Compute N! = N(N-1) ... 
(2)(1) for N >= 1.\"\n # Initialize the output variable to 1\n product = 1\n for n in range(2,N + 1):\n # Update the output variable\n product = product * n\n return product\n```\n\nLet's test our function for input values for which we know the result:\n\n\n```python\nfactorial(2)\n```\n\n\n\n\n 2\n\n\n\n\n```python\nfactorial(5)\n```\n\n\n\n\n 120\n\n\n\nWe can use our function to approximate $e$ using the Taylor series for $e^x$:\n\n$$\ne^x = \\sum_{k=0}^{\\infty} \\frac{x^k}{k!}\n$$\n\nFor example, let's compute the 100th partial sum of the series with $x=1$:\n\n\n```python\nsum([1/factorial(k) for k in range(0,101)])\n```\n\n\n\n\n 2.7182818284590455\n\n\n\n## Searching for Solutions\n\nWe can use `for` loops to search for integer solutions of equations. For example, suppose we would like to find all representations of a positive integer $N$ as a [sum of two squares](https://en.wikipedia.org/wiki/Sum_of_two_squares_theorem). In other words, we want to find all integer solutions $(x,y)$ of the equation:\n\n$$\nx^2 + y^2 = N\n$$\n\nWrite a function called `reps_sum_squares` which takes an integer $N$ and finds all representations of $N$ as a sum of squares $x^2 + y^2 = N$ for $0 \\leq x \\leq y$. The function returns the representations as a list of tuples. For example, if $N = 50$ then $1^2 + 7^2 = 50$ and $5^2 + 5^2 = 50$ and the function returns the list `[(1, 7),(5, 5)]`.\n\nLet's outline our approach before we write any code:\n\n1. Given $x \\leq y$, the largest possible value for $x$ is $\\sqrt{\\frac{N}{2}}$\n2. For $x \\leq \\sqrt{\\frac{N}{2}}$, the pair $(x,y)$ is a solution if $N - x^2$ is a square\n3. Define a helper function called `is_square` to test if an integer is square\n\n\n```python\ndef is_square(n):\n \"Determine if the integer n is a square.\"\n if round(n**0.5)**2 == n:\n return True\n else:\n return False\n\ndef reps_sum_squares(N):\n '''Find all representations of N as a sum of squares x**2 + y**2 = N.\n\n Parameters\n ----------\n N : integer\n\n Returns\n -------\n reps : list of tuples of integers\n List of tuples (x,y) of positive integers such that x**2 + y**2 = N.\n\n Examples\n --------\n >>> reps_sum_squares(1105)\n [(4, 33), (9, 32), (12, 31), (23, 24)]\n '''\n reps = []\n if is_square(N/2):\n # If N/2 is a square, search up to x = (N/2)**0.5\n max_x = round((N/2)**0.5)\n else:\n # If N/2 is not a square, search up to x = floor((N/2)**0.5)\n max_x = int((N/2)**0.5)\n for x in range(0,max_x + 1):\n y_squared = N - x**2\n if is_square(y_squared):\n y = round(y_squared**0.5)\n # Append solution (x,y) to list of solutions\n reps.append((x,y))\n return reps\n```\n\n\n```python\nreps_sum_squares(1105)\n```\n\n\n\n\n [(4, 33), (9, 32), (12, 31), (23, 24)]\n\n\n\nWhat is the smallest integer which can be expressed as the sum of squares in 5 different ways?\n\n\n```python\nN = 1105\nnum_reps = 4\nwhile num_reps < 5:\n N = N + 1\n reps = reps_sum_squares(N)\n num_reps = len(reps)\nprint(N,':',reps_sum_squares(N))\n```\n\n 4225 : [(0, 65), (16, 63), (25, 60), (33, 56), (39, 52)]\n\n\n## Examples\n\n### Prime Numbers\n\nA positive integer is [prime](https://en.wikipedia.org/wiki/Prime_number) if it is divisible only by 1 and itself. Write a function called `is_prime` which takes an input parameter `n` and returns `True` or `False` depending on whether `n` is prime or not.\n\nLet's outline our approach before we write any code:\n\n1. An integer $d$ divides $n$ if there is no remainder of $n$ divided by $d$.\n2. 
Use the modulus operator `%` to compute the remainder.\n3. If $d$ divides $n$ then $n = d q$ for some integer $q$ and either $d \\leq \\sqrt{n}$ or $q \\leq \\sqrt{n}$ (and not both), therefore we need only test if $d$ divides $n$ for integers $d \\leq \\sqrt{n}$\n\n\n```python\ndef is_prime(n):\n \"Determine whether or not n is a prime number.\"\n if n <= 1:\n return False\n # Test if d divides n for d <= n**0.5\n for d in range(2,round(n**0.5) + 1):\n if n % d == 0:\n # n is divisible by d and so n is not prime\n return False\n # If we exit the for loop, then n is not divisible by any d\n # and therefore n is prime\n return True\n```\n\nLet's test our function on the first 30 numbers:\n\n\n```python\nfor n in range(0,31):\n if is_prime(n):\n print(n,'is prime!')\n```\n\n 2 is prime!\n 3 is prime!\n 5 is prime!\n 7 is prime!\n 11 is prime!\n 13 is prime!\n 17 is prime!\n 19 is prime!\n 23 is prime!\n 29 is prime!\n\n\nOur function works! Let's find all the primes between 20,000 and 20,100.\n\n\n```python\nfor n in range(20000,20100):\n if is_prime(n):\n print(n,'is prime!')\n```\n\n 20011 is prime!\n 20021 is prime!\n 20023 is prime!\n 20029 is prime!\n 20047 is prime!\n 20051 is prime!\n 20063 is prime!\n 20071 is prime!\n 20089 is prime!\n\n\n### Divisors\n\nLet's write a function called `divisors` which takes a positive integer $N$ and returns the list of positive integers which divide $N$.\n\n\n```python\ndef divisors(N):\n \"Return the list of divisors of N.\"\n # Initialize the list of divisors (which always includes 1)\n divisor_list = [1]\n # Check division by d for d <= N/2\n for d in range(2,N // 2 + 1):\n if N % d == 0:\n divisor_list.append(d)\n # N divides itself and so we append N to the list of divisors\n divisor_list.append(N)\n return divisor_list\n```\n\nLet's test our function:\n\n\n```python\ndivisors(10)\n```\n\n\n\n\n [1, 2, 5, 10]\n\n\n\n\n```python\ndivisors(100)\n```\n\n\n\n\n [1, 2, 4, 5, 10, 20, 25, 50, 100]\n\n\n\n\n```python\ndivisors(59)\n```\n\n\n\n\n [1, 59]\n\n\n\n### Collatz Conjecture\n\nLet $a$ be a positive integer and consider the recursive sequence where $x_0 = a$ and\n\n$$\nx_{n+1} = \\left\\\\{ \\begin{array}{cl} x_n/2 & \\text{if } x_n \\text{ is even} \\\\\\\\ 3x_n+1 & \\text{if } x_n \\text{ is odd} \\end{array} \\\\right.\n$$\n\nThe [Collatz conjecture](https://en.wikipedia.org/wiki/Collatz_conjecture) states that this sequence will *always* reach 1. For example, if $a = 10$ then $x_0 = 10$, $x_1 = 5$, $x_2 = 16$, $x_3 = 8$, $x_4 = 4$, $x_5 = 2$ and $x_6 = 1$.\n\nWrite a function called `collatz` which takes one input parameter `a` and returns the sequence of integers defined above and ending with the first occurrence $x_n=1$.\n\n\n```python\ndef collatz(a):\n \"Compute the Collatz sequence starting at a and ending at 1.\"\n # Initialize list with first value a\n sequence = [a]\n # Compute values until we reach 1\n while sequence[-1] > 1:\n # Check if the last element in the list is even\n if sequence[-1] % 2 == 0:\n # Compute and append the new value\n sequence.append(sequence[-1] // 2)\n else:\n # Compute and append the new value\n sequence.append(3*sequence[-1] + 1)\n return sequence\n```\n\nLet's test our function:\n\n\n```python\nprint(collatz(10))\n```\n\n [10, 5, 16, 8, 4, 2, 1]\n\n\n\n```python\ncollatz(22)\n```\n\n\n\n\n [22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1]\n\n\n\nThe Collatz conjecture is quite amazing. 
No matter where we start, the sequence always terminates at 1!\n\n\n```python\na = 123456789\nseq = collatz(a)\nprint(\"Collatz sequence for a =\",a)\nprint(\"begins with\",seq[:5])\nprint(\"ends with\",seq[-5:])\nprint(\"and has\",len(seq),\"terms.\")\n```\n\n Collatz sequence for a = 123456789\n begins with [123456789, 370370368, 185185184, 92592592, 46296296]\n ends with [16, 8, 4, 2, 1]\n and has 178 terms.\n\n\nWhich $a < 1000$ produces the longest sequence?\n\n\n```python\nmax_length = 1\na_max = 1\nfor a in range(1,1001):\n seq_length = len(collatz(a))\n if seq_length > max_length:\n max_length = seq_length\n a_max = a\nprint('Longest sequence begins with a =',a_max,'and has length',max_length)\n```\n\n Longest sequence begins with a = 871 and has length 179\n\n\n## Exercises\n\n1. [Fermat's theorem on the sum of two squares](https://en.wikipedia.org/wiki/Fermat%27s_theorem_on_sums_of_two_squares) states that every prime number $p$ of the form $4k+1$ can be expressed as the sum of two squares. For example, $5 = 2^2 + 1^2$ and $13 = 3^2 + 2^2$. Find the smallest prime greater than $2019$ of the form $4k+1$ and write it as a sum of squares. (Hint: Use the functions `is_prime` and `reps_sum_squares` from this section.)\n\n2. What is the smallest prime number which can be represented as a sum of squares in 2 different ways?\n\n3. What is the smallest integer which can be represented as a sum of squares in 3 different ways?\n\n4. Write a function called `primes_between` which takes two integer inputs $a$ and $b$ and returns the list of primes in the closed interval $[a,b]$.\n\n5. Write a function called `primes_d_mod_N` which takes four integer inputs $a$, $b$, $d$ and $N$ and returns the list of primes in the closed interval $[a,b]$ which are congruent to $d$ mod $N$ (this means that the prime has remainder $d$ after division by $N$). This kind of list is called [primes in an arithmetic progression](https://en.wikipedia.org/wiki/Dirichlet%27s_theorem_on_arithmetic_progressions).\n\n6. Write a function called `reciprocal_recursion` which takes three positive integers $x_0$, $x_1$ and $N$ and returns the sequence $[x_0,x_1,x_2,\\dots,x_N]$ where\n\n $$\n x_n = \\frac{1}{x_{n-1}} + \\frac{1}{x_{n-2}}\n $$\n\n7. Write a function called `root_sequence` which takes input parameters $a$ and $N$, both positive integers, and returns the $N$th term $x_N$ in the sequence:\n\n $$\n \\begin{align}\n x_0 &= a \\\\\\\n x_n &= 1 + \\sqrt{x_{n-1}}\n \\end{align}\n $$\n\n Does the sequence converge to different values for different starting values $a$?\n\n8. Write a function called `fib_less_than` which takes one input $N$ and returns the list of Fibonacci numbers less than $N$.\n\n9. Write a function called `fibonacci_primes` which takes an input parameter $N$ and returns the list of Fibonacci numbers less than $N$ which are also prime numbers.\n\n10. Let $w(N)$ be the number of ways $N$ can be expressed as a sum of two squares $x^2 + y^2 = N$ with $1 \\leq x \\leq y$. Then\n\n $$\n \\lim_{N \\to \\infty} \\frac{1}{N} \\sum_{n=1}^{N} w(n) = \\frac{\\pi}{8}\n $$\n\n Compute the left side of the formula for $N=100$ and compare the result to $\\pi / 8$.\n\n11. A list of positive integers $[a,b,c]$ (with $1 \\leq a < b$) are a [Pythagorean triple](https://en.wikipedia.org/wiki/Pythagorean_triple) if $a^2 + b^2 = c^2$. 
Write a function called `py_triples` which takes an input parameter $N$ and returns the list of Pythagorean triples `[a,b,c]` with $c \\leq N$.\n", "meta": {"hexsha": "1c990981923cffe752b41ae7f7ebe8f325b28a68", "size": 28148, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "computing-primer/python/loops.ipynb", "max_stars_repo_name": "osterEWU/mathemathemical_computing_with_Python", "max_stars_repo_head_hexsha": "a0bfe3bed2df0a02d26c4c1cf5084ce89f094e19", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-03-29T22:06:11.000Z", "max_stars_repo_stars_event_max_datetime": "2021-03-29T23:53:52.000Z", "max_issues_repo_path": "computing-primer/python/loops.ipynb", "max_issues_repo_name": "osterEWU/mathemathemical_computing_with_Python", "max_issues_repo_head_hexsha": "a0bfe3bed2df0a02d26c4c1cf5084ce89f094e19", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "computing-primer/python/loops.ipynb", "max_forks_repo_name": "osterEWU/mathemathemical_computing_with_Python", "max_forks_repo_head_hexsha": "a0bfe3bed2df0a02d26c4c1cf5084ce89f094e19", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 28.2610441767, "max_line_length": 454, "alphanum_fraction": 0.5246198664, "converted": true, "num_tokens": 5326, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9059898127684335, "lm_q2_score": 0.9381240142763573, "lm_q1q2_score": 0.8499308000478082}} {"text": "```python\n%matplotlib notebook\n\nimport pandas as pd\nimport numpy as np\nimport sklearn.datasets\n\nimport matplotlib.pyplot as plt\n\nplt.rcParams['figure.figsize'] = [8, 4]\n```\n\n\n```python\niris = sklearn.datasets.load_iris()\n\nX = pd.DataFrame(iris.data, columns=iris.feature_names)\n\nX_zscaled = (X - X.mean()) / X.std(ddof=1)\n\nY = pd.DataFrame(iris.target, columns=['target'])\nY['species'] = Y.apply(lambda r: iris.target_names[r])\n```\n\n## Multicollinearity Check\n\nUsing $corr(X)$, check to see that we some level of multicollinearity in the data, enough to warrant using PCA of visualization in reduced-rank principal component space. As a rule of thumb, the off-diagonal correlation values (either in the upper- or lower-triangle) should have absolute values of around 0.30 or so.\n\n\n```python\ncorr = X_zscaled.corr()\n\ntmp = pd.np.triu(corr) - np.eye(corr.shape[0]) \ntmp = tmp.flatten()\ntmp = tmp[np.nonzero(tmp)]\ntmp = pd.Series(np.abs(tmp))\n\nprint('Correlation matrix:\\n\\n{}\\n\\n'.format(corr.values))\n\nprint('Multicollinearity check using off-diagonal values:\\n\\n{}'.format(tmp.describe()))\n```\n\n Correlation matrix:\n \n [[ 1. -0.10936925 0.87175416 0.81795363]\n [-0.10936925 1. -0.4205161 -0.35654409]\n [ 0.87175416 -0.4205161 1. 0.9627571 ]\n [ 0.81795363 -0.35654409 0.9627571 1. 
]]\n \n \n Multicollinearity check using off-diagonal values:\n \n count 6.000000\n mean 0.589816\n std 0.341915\n min 0.109369\n 25% 0.372537\n 50% 0.619235\n 75% 0.858304\n max 0.962757\n dtype: float64\n\n\n## PCA via Eigen-Decomposition\n\n### Obtain eigenvalues, eigenvectors of $cov(X)$ via eigen-decomposition\n\nFrom the factorization of symmetric matrix $S$ into orthogonal matrix $Q$ of the eigenvectors and diagonal matrix $\\Lambda$ of the eigenvalues, we can likewise decompose $cov(X)$ (or in our case $corr(X)$ since we standardized our data).\n\n\\begin{align}\n S &= Q \\Lambda Q^\\intercal \\\\\n \\Rightarrow X^\\intercal X &= V \\Lambda V^\\intercal \\\\\n\\end{align}\n\nwhere $V$ are the orthonormal eigenvectors of $X X^\\intercal$.\n\nWe can normalize the eigenvalues to see how much variance is captured per each respective principal component. We will also calculate the cumulative variance explained. This will help inform our decision of how many principal components to keep when reducing dimensions in visualizing the data.\n\n\n```python\neigenvalues, eigenvectors = np.linalg.eig(X_zscaled.cov())\n\neigenvalues_normalized = eigenvalues / eigenvalues.sum()\n\ncumvar_explained = np.cumsum(eigenvalues_normalized)\n```\n\n### Reduce dimensions and visualize\n\nTo project the original data into principal component space, we obtain score matrix $T$ by taking the dot product of $X$ and the eigenvectors $V$.\n\n\\begin{align}\n T &= X V \\\\\n\\end{align}\n\n\n```python\nT = pd.DataFrame(X_zscaled.dot(eigenvectors))\n\n# set column names\nT.columns = ['pc1', 'pc2', 'pc3', 'pc4']\n\n# also add the species label as \nT = pd.concat([T, Y.species], axis=1)\n```\n\nWe can visualize the original, 4D iris data of $X$ by using the first $k$ eigenvectors of $X$, projecting the original data into a reduced-rank $k$-dimensional principal component space.\n\n\\begin{align}\n T_{rank=k} &= X V_{rank=k}\n\\end{align}\n\n\n```python\n# let's try using the first 2 principal components\nk = 2\n\n# divide T by label\nirises = [T[T.species=='setosa'], \n T[T.species=='versicolor'], \n T[T.species=='virginica']]\n\n# define a color-blind friendly set of quantative colors\n# for each species\ncolors = ['#1b9e77', '#d95f02', '#7570b3']\n\n_, (ax1, ax2) = plt.subplots(1, 2, sharey=False)\n\n# plot principal component vis-a-vis total variance in data\nax1.plot([1,2,3,4],\n eigenvalues_normalized,\n '-o',\n color='#8da0cb',\n label='eigenvalue (normalized)',\n alpha=0.8,\n zorder=1000)\n\nax1.plot([1,2,3,4],\n cumvar_explained,\n '-o',\n color='#fc8d62',\n label='cum. variance explained',\n alpha=0.8,\n zorder=1000)\n\nax1.set_xlim(0.8, 4.2)\nax1.set_xticks([1,2,3,4])\nax1.set_xlabel('Principal component')\nax1.set_ylabel('Variance')\nax1.legend(loc='center right', fontsize=7)\nax1.grid(color='#fdfefe')\nax1.set_facecolor('#f4f6f7')\n\n# plot the reduced-rank score matrix representation\nfor group, color in zip(irises, colors):\n ax2.scatter(group.pc1,\n group.pc2,\n marker='^',\n color=color,\n label=group.species,\n alpha=0.5,\n zorder=1000)\nax2.set_xlabel(r'PC 1')\nax2.set_ylabel(r'PC 2')\nax2.grid(color='#fdfefe')\nax2.set_facecolor('#f4f6f7')\nax2.legend(labels=iris.target_names, fontsize=7)\n\nplt.suptitle(r'Fig. 
1: PCA via eigen-decomposition, 2D visualization')\nplt.tight_layout(pad=3.0)\nplt.show() \n```\n\n\n \n\n\n\n\n\n\n### Relative weights of the original features in principal component space\n\n* Each row of $V$ is a vector representing the relative weights for each of the original features for each principal component.\n* Given reduced-rank $k$, the columns of $V_k$ are the weights for principal components $1, \\dots, k$.\n* Calculate the norm for each row of $V_k$, and normalize the results to obtain the relative weights for each original feature in principal component space.\n\n\n```python\nfeature_norms = np.linalg.norm(eigenvectors[:, 0:k], axis=1)\nfeature_weights = feature_norms / feature_norms.sum()\n\nmsg = ('Using {} principal components, '\n 'the original features are represented with the following weights:')\nprint(msg.format(k))\nfor feature, weight in zip(iris.feature_names, feature_weights):\n print('- {}: {:0.3f}'.format(feature, weight))\n```\n\n Using 2 principal components, the original features are represented with the following weights:\n - sepal length (cm): 0.233\n - sepal width (cm): 0.349\n - petal length (cm): 0.211\n - petal width (cm): 0.207\n\n\n## PCA via Singular Value Decomposition\n\nSingular value decomposition factors any matrix $A$ into right-singular vector matrix $U$; diagonal matrix of singular values $\\Sigma$; and right-singular vector matrix $V$.\n\n\\begin{align}\n A &= U \\Sigma V^\\intercal \\\\\n\\end{align}\n\nIf we start with \n\n\\begin{align}\n X &= U \\Sigma V^\\intercal \\\\\n \\\\\n X^\\intercal X &= (U \\Sigma V^\\intercal)^\\intercal U \\Sigma V^\\intercal \\\\\n &= V \\Sigma^\\intercal U^\\intercal U \\Sigma V^\\intercal \\\\\n &= V \\Sigma^\\intercal \\Sigma V^\\intercal \\\\\n \\\\\n \\Rightarrow \\Sigma^\\intercal \\Sigma &= \\Lambda\n\\end{align}\n\n\n```python\nU, S, Vt = np.linalg.svd(X_zscaled)\n```\n\n\n```python\nprint(eigenvectors)\n```\n\n [[ 0.52237162 -0.37231836 -0.72101681 0.26199559]\n [-0.26335492 -0.92555649 0.24203288 -0.12413481]\n [ 0.58125401 -0.02109478 0.14089226 -0.80115427]\n [ 0.56561105 -0.06541577 0.6338014 0.52354627]]\n\n\n\n```python\nprint(Vt.T)\n```\n\n [[ 0.52237162 -0.37231836 0.72101681 0.26199559]\n [-0.26335492 -0.92555649 -0.24203288 -0.12413481]\n [ 0.58125401 -0.02109478 -0.14089226 -0.80115427]\n [ 0.56561105 -0.06541577 -0.6338014 0.52354627]]\n\n\n\n```python\nVt.T.shape\n```\n\n\n\n\n (4, 4)\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "a3906fc57558b6a2eebc7c7c6c85fd6cf8ef5915", "size": 117596, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "PCA.ipynb", "max_stars_repo_name": "buruzaemon/svd", "max_stars_repo_head_hexsha": "b600b7078a63537899df62bbfdf3a02d090c1a7b", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2017-05-10T17:02:24.000Z", "max_stars_repo_stars_event_max_datetime": "2017-05-10T17:02:24.000Z", "max_issues_repo_path": "PCA.ipynb", "max_issues_repo_name": "buruzaemon/svd", "max_issues_repo_head_hexsha": "b600b7078a63537899df62bbfdf3a02d090c1a7b", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "PCA.ipynb", "max_forks_repo_name": "buruzaemon/svd", "max_forks_repo_head_hexsha": "b600b7078a63537899df62bbfdf3a02d090c1a7b", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": 
"2018-03-18T01:41:49.000Z", "max_forks_repo_forks_event_max_datetime": "2020-03-31T02:36:33.000Z", "avg_line_length": 98.9032800673, "max_line_length": 71169, "alphanum_fraction": 0.7768886697, "converted": true, "num_tokens": 2130, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9230391579526935, "lm_q2_score": 0.9207896824119663, "lm_q1q2_score": 0.8499249331050696}} {"text": "```python\nimport sympy as sm\n```\n\n\n```python\nfrom sympy.physics.vector import init_vprinting\ninit_vprinting(use_latex='mathjax', pretty_print=False)\n```\n\n\n```python\nfrom IPython.display import Image\nImage('fig/2rp_new.png', width=300)\n```\n\n\n```python\nfrom sympy.physics.mechanics import dynamicsymbols\n```\n\n\n```python\ntheta1, theta2, l1, l2 = dynamicsymbols('theta1 theta2 l1 l2')\ntheta1, theta2, l1, l2\n```\n\n\n\n\n$\\displaystyle \\left( \\theta_{1}, \\ \\theta_{2}, \\ l_{1}, \\ l_{2}\\right)$\n\n\n\n\n```python\npx = l1*sm.cos(theta1) + l2*sm.cos(theta1 + theta2) # tip psition in x-direction\npy = l1*sm.sin(theta1) + l2*sm.sin(theta1 + theta2) # tip position in y-direction\n```\n\n\n```python\n# evaluating the jacobian matrix \na11 = sm.diff(px, theta1) # differentiate px with theta_1\na12 = sm.diff(px, theta2) # differentiate px with theta_2\n\na21 = sm.diff(py, theta1) # differentiate py with theta_1\na22 = sm.diff(py, theta2) # differentiate py with theta_2\n\nJ = sm.Matrix([[a11, a12], [a21, a22]]) # assemble into matix form\nJsim = sm.simplify(J) # simplified result\nJsim\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}- l_{1} \\sin{\\left(\\theta_{1} \\right)} - l_{2} \\sin{\\left(\\theta_{1} + \\theta_{2} \\right)} & - l_{2} \\sin{\\left(\\theta_{1} + \\theta_{2} \\right)}\\\\l_{1} \\cos{\\left(\\theta_{1} \\right)} + l_{2} \\cos{\\left(\\theta_{1} + \\theta_{2} \\right)} & l_{2} \\cos{\\left(\\theta_{1} + \\theta_{2} \\right)}\\end{matrix}\\right]$\n\n\n\n\n```python\n# Manipulator singularities\n\nJdet = sm.det(Jsim) #determinant of the jacobian matrix\ndetJ = sm.simplify(Jdet)\ndetJ\n```\n\n\n\n\n$\\displaystyle l_{1} l_{2} \\sin{\\left(\\theta_{2} \\right)}$\n\n\n\n\n```python\nsm.solve(detJ, (theta2)) # slove detJ for theta_2\n```\n\n\n\n\n$\\displaystyle \\left[ 0, \\ \\pi\\right]$\n\n\n\n\n```python\n# This means the manipulator will be in singular configuration when the angle \u03b82 is either zero or it is \u00b1\u03c0 ,\nImage('fig/2rp_sing_config1.png', width=300) # \u03b82=0 \n```\n\n\n```python\nImage('fig/2rp_sing_config2.png', width=300) # \u03b82=\u00b1\u03c0 \n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "dc04763a33b59fe50c6495e74aeeb38157e1e830", "size": 213195, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Jacobian--Singularity--2RP.ipynb", "max_stars_repo_name": "Eddy-Morgan/Jacobian--Singularity--2RP", "max_stars_repo_head_hexsha": "37c86725c9b5cc87b926b20ef7eee5b68f9b4fc5", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Jacobian--Singularity--2RP.ipynb", "max_issues_repo_name": "Eddy-Morgan/Jacobian--Singularity--2RP", "max_issues_repo_head_hexsha": "37c86725c9b5cc87b926b20ef7eee5b68f9b4fc5", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Jacobian--Singularity--2RP.ipynb", "max_forks_repo_name": 
"Eddy-Morgan/Jacobian--Singularity--2RP", "max_forks_repo_head_hexsha": "37c86725c9b5cc87b926b20ef7eee5b68f9b4fc5", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 795.5037313433, "max_line_length": 70612, "alphanum_fraction": 0.9518609723, "converted": true, "num_tokens": 687, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9546474246069458, "lm_q2_score": 0.8902942283051332, "lm_q1q2_score": 0.8499170921939236}} {"text": "```python\nimport sympy\nfrom sympy import I, pi, oo\nimport numpy as np\nsympy.init_printing(backcolor=\"Transparent\")\n```\n\n\n```python\nx = sympy.Symbol(\"x\")\nx.is_real is None\n```\n\n\n\n\n True\n\n\n\n\n```python\ny = sympy.Symbol(\"y\", real=True)\ny.is_real\n```\n\n\n\n\n True\n\n\n\n\n```python\nz = sympy.Symbol(\"z\", imaginary=True)\nz.is_real\n```\n\n\n\n\n False\n\n\n\n\n```python\nx = sympy.Symbol(\"x\")\ny = sympy.Symbol(\"y\", positive=True)\n```\n\n\n```python\nsympy.sqrt(x ** 2)\n```\n\n\n```python\nsympy.sqrt(y ** 2)\n```\n\n\n```python\nn1 = sympy.Symbol(\"n\")\nn2 = sympy.Symbol(\"n\", integer=True)\nn3 = sympy.Symbol(\"n\", odd=True)\n```\n\n\n```python\nsympy.cos(n1 * pi)\n```\n\n\n```python\nsympy.cos(n2 * pi)\n```\n\n\n```python\nsympy.cos(n3 * pi)\n```\n\n\n```python\na, b, c = sympy.symbols(\"a, b, c\", negative=True)\nd, e, f = sympy.symbols(\"d, e, f\", positive=True)\n```\n\n### Numbers\n\n\n```python\ni = sympy.Integer(19)\ntype(i)\n```\n\n\n\n\n sympy.core.numbers.Integer\n\n\n\n\n```python\ni.is_Integer, i.is_real, i.is_odd\n```\n\n\n\n\n (True, True, True)\n\n\n\n\n```python\nf = sympy.Float(2.3)\ntype(f)\n```\n\n\n\n\n sympy.core.numbers.Float\n\n\n\n\n```python\nf.is_Integer, f.is_real, f.is_odd\n```\n\n\n\n\n (False, True, False)\n\n\n\n\n```python\ni, f = sympy.sympify(19), sympy.sympify(2.3)\ntype(i), type(f)\n```\n\n\n\n\n (sympy.core.numbers.Integer, sympy.core.numbers.Float)\n\n\n\n\n```python\nn = sympy.Symbol(\"n\", integer=True)\n```\n\n\n```python\nn.is_integer, n.is_Integer, n.is_positive, n.is_Symbol\n```\n\n\n\n\n (True, False, None, True)\n\n\n\n\n```python\ni = sympy.Integer(19)\n```\n\n\n```python\ni.is_integer, i.is_Integer, i.is_positive, i.is_Symbol\n```\n\n\n\n\n (True, True, True, False)\n\n\n\n\n```python\nsympy.factorial(100)\n```\n\n\n```python\nsympy.Float(0.3, 25)\n```\n\n\n```python\nsympy.Rational(11, 13)\n```\n\n\n```python\nr1 = sympy.Rational(2, 3)\nr2 = sympy.Rational(4, 5)\n```\n\n\n```python\nr1 * r2\n```\n\n\n```python\nr1 / r2\n```\n\n### Functions\n\n\n```python\nx, y, z = sympy.symbols(\"x, y, z\")\nf = sympy.Function(\"f\")\ntype(f)\n```\n\n\n\n\n sympy.core.function.UndefinedFunction\n\n\n\n\n```python\nf(x)\n```\n\n\n```python\ng = sympy.Function(\"g\")(x, y, z)\ng\n```\n\n\n```python\ng.free_symbols\n```\n\n\n```python\nsympy.sin\n```\n\n\n\n\n sin\n\n\n\n\n```python\nsympy.sin(x)\n```\n\n\n```python\nsympy.sin(pi * 1.5)\n```\n\n\n```python\nn = sympy.Symbol(\"n\", integer=True)\nsympy.sin(pi * n)\n```\n\n\n```python\nh = sympy.Lambda(x, x**2)\nh\n```\n\n\n```python\nh(5)\n```\n\n\n```python\nh(1 + x)\n```\n\n\n```python\nx = sympy.Symbol(\"x\")\nexpr = 1 + 2 * x**2 + 3 * x**3\nexpr\n```\n\n### Simplify\n\n\n```python\nexpr = 2 * (x**2 - x) - x * (x + 1)\nexpr\n```\n\n\n```python\nsympy.simplify(expr)\n```\n\n\n```python\nexpr.simplify()\n```\n\n\n```python\nexpr\n```\n\n\n```python\nexpr = 2 * sympy.cos(x) * 
sympy.sin(x)\nexpr\n```\n\n\n```python\nsympy.simplify(expr)\n```\n\n\n```python\nexpr = sympy.exp(x) * sympy.exp(y)\nexpr\n```\n\n\n```python\nsympy.simplify(expr)\n```\n\n### Expand\n\n\n```python\nexpr = (x + 1) * (x + 2)\nsympy.expand(expr)\n```\n\n\n```python\nsympy.sin(x + y).expand(trig=True)\n```\n\n\n```python\na, b = sympy.symbols(\"a, b\", positive=True)\nsympy.log(a * b).expand(log=True)\n```\n\n\n```python\nsympy.exp(I*a + b).expand(complex=True)\n```\n\n\n```python\nsympy.expand((a * b)**x, power_base=True)\n```\n\n\n```python\nsympy.exp((a-b)*x).expand(power_exp=True)\n```\n\n### Factor, Collect and Combine\n\n\n```python\nsympy.factor(x**2 - 1)\n```\n\n\n```python\nsympy.factor(x * sympy.cos(y) + sympy.sin(z) * x)\n```\n\n\n```python\nsympy.logcombine(sympy.log(a) - sympy.log(b))\n```\n\n\n```python\nexpr = x + y + x * y * z\nexpr.collect(x)\n```\n\n\n```python\nexpr.collect(y)\n```\n\n\n```python\nexpr = sympy.cos(x + y) + sympy.sin(x - y)\nexpr.expand(trig=True).collect([\n sympy.cos(x),sympy.sin(x)\n]).collect(sympy.cos(y) - sympy.sin(y))\n```\n\n\n```python\nsympy.apart(1/(x**2 + 3*x + 2), x)\n```\n\n\n```python\nsympy.together(1 / (y * x + y) + 1 / (1+x))\n```\n\n\n```python\nsympy.cancel(y / (y * x + y))\n```\n\n\n```python\n(x + y).subs(x, y)\n```\n\n\n```python\nsympy.sin(x * sympy.exp(x)).subs(x, y)\n```\n\n\n```python\nsympy.sin(x * z).subs({z: sympy.exp(y), x: y, sympy.sin: sympy.cos})\n```\n\n\n```python\nsympy.sin(x * z).subs({z: sympy.exp(y), x: y, sympy.sin: sympy.cos})\n```\n\n\n```python\nexpr = x * y + z**2 *x\nvalues = {x: 1.25, y: 0.4, z: 3.2}\nexpr.subs(values)\n```\n\n### Numerical evaluation\n\n\n```python\nsympy.N(1 + pi)\n```\n\n\n```python\nsympy.N(pi, 50)\n```\n\n\n```python\n(x + 1/pi).evalf(10)\n```\n\n\n```python\nexpr = sympy.sin(pi * x * sympy.exp(x))\n[expr.subs(x, xx).evalf(3) for xx in range(0, 10)]\n```\n\n\n```python\nexpr_func = sympy.lambdify(x, expr)\nexpr_func(1.0)\n```\n\n\n```python\nexpr_func = sympy.lambdify(x, expr, 'numpy')\nxvalues = np.arange(0, 10)\nexpr_func(xvalues)\n```\n\n\n\n\n array([ 0. 
, 0.77394269, 0.64198244, 0.72163867, 0.94361635,\n 0.20523391, 0.97398794, 0.97734066, -0.87034418, -0.69512687])\n\n\n\n### Calculus\n\n\n```python\nf = sympy.Function('f')(x)\nsympy.diff(f, x)\n```\n\n\n```python\nsympy.diff(f, x, x)\n```\n\n\n```python\nsympy.diff(f, x, 3)\n```\n\n\n```python\ng = sympy.Function('g')(x, y)\ng.diff(x, y)\n```\n\n\n```python\ng.diff(x, 3, y, 2)\n```\n\n\n```python\nexpr = x**4 + x**3 + x**2 + x + 1\nexpr.diff(x)\n```\n\n\n```python\nexpr.diff(x, x)\n```\n\n\n```python\nexpr = (x + 1)**3 * y ** 2 * (z - 1)\nexpr.diff(x, y, z)\n```\n\n\n```python\nexpr = sympy.sin(x * y) * sympy.cos(x / 2)\nexpr.diff(x)\n```\n\n\n```python\nexpr = sympy.functions.special.polynomials.hermite(x, 0)\nexpr.diff(x).doit()\n```\n\n\n```python\nd = sympy.Derivative(sympy.exp(sympy.cos(x)), x)\nd\n```\n\n\n```python\nd.doit()\n```\n\n\n```python\na, b, x, y = sympy.symbols(\"a, b, x, y\")\nf = sympy.Function(\"f\")(x)\n```\n\n\n```python\nsympy.integrate(f)\n```\n\n\n```python\nsympy.integrate(f, (x, a, b))\n```\n\n\n```python\nsympy.integrate(sympy.sin(x))\n```\n\n\n```python\nsympy.integrate(sympy.sin(x), (x, a, b))\n```\n\n\n```python\nsympy.integrate(sympy.exp(-x**2), (x, 0, oo))\n```\n\n\n```python\na, b, c = sympy.symbols(\"a, b, c\", positive=True)\n```\n\n\n```python\nsympy.integrate(a * sympy.exp(-((x-b)/c)**2), (x, -oo, oo))\n```\n\n\n```python\nsympy.integrate(sympy.sin(x * sympy.cos(x)))\n```\n\n\n```python\nexpr = sympy.sin(x*sympy.exp(y))\nsympy.integrate(expr, x)\n```\n\n\n```python\nexpr = (x + y)**2\nsympy.integrate(expr, x)\n```\n\n\n```python\nsympy.integrate(expr, (x, 0, 1), (y, 0, 1))\n```\n\n\n```python\nx, y = sympy.symbols(\"x, y\")\nf = sympy.Function(\"f\")(x)\n```\n\n\n```python\nsympy.series(f, x)\n```\n\n\n\n\n$\\displaystyle f{\\left(0 \\right)} + x \\left. \\frac{d}{d \\xi} f{\\left(\\xi \\right)} \\right|_{\\substack{ \\xi=0 }} + \\frac{x^{2} \\left. \\frac{d^{2}}{d \\xi^{2}} f{\\left(\\xi \\right)} \\right|_{\\substack{ \\xi=0 }}}{2} + \\frac{x^{3} \\left. \\frac{d^{3}}{d \\xi^{3}} f{\\left(\\xi \\right)} \\right|_{\\substack{ \\xi=0 }}}{6} + \\frac{x^{4} \\left. \\frac{d^{4}}{d \\xi^{4}} f{\\left(\\xi \\right)} \\right|_{\\substack{ \\xi=0 }}}{24} + \\frac{x^{5} \\left. \\frac{d^{5}}{d \\xi^{5}} f{\\left(\\xi \\right)} \\right|_{\\substack{ \\xi=0 }}}{120} + O\\left(x^{6}\\right)$\n\n\n\n\n```python\nx0 = sympy.Symbol(\"{x_0}\")\nf.series(x, x0, n=2)\n```\n\n\n\n\n$\\displaystyle f{\\left({x_0} \\right)} + \\left(x - {x_0}\\right) \\left. \\frac{d}{d \\xi_{1}} f{\\left(\\xi_{1} \\right)} \\right|_{\\substack{ \\xi_{1}={x_0} }} + O\\left(\\left(x - {x_0}\\right)^{2}; x\\rightarrow {x_0}\\right)$\n\n\n\n\n```python\nf.series(x, x0, n=2).removeO()\n```\n\n\n\n\n$\\displaystyle \\left(x - {x_0}\\right) \\left. 
\\frac{d}{d \\xi_{1}} f{\\left(\\xi_{1} \\right)} \\right|_{\\substack{ \\xi_{1}={x_0} }} + f{\\left({x_0} \\right)}$\n\n\n\n\n```python\nsympy.cos(x).series()\n```\n\n\n```python\nsympy.sin(x).series()\n```\n\n\n```python\nsympy.exp(x).series()\n```\n\n\n```python\n(1/(1+x)).series()\n```\n\n\n```python\nexpr = sympy.cos(x) / (1 + sympy.sin(x * y))\nexpr.series(x, n=4)\n```\n\n\n```python\nexpr.series(y, n=4)\n```\n\n\n```python\nsympy.limit(sympy.sin(x) / x, x, 0)\n```\n\n\n```python\nf = sympy.Function('f')\nx, h = sympy.symbols(\"x, h\")\ndiff_limit = (f(x + h) - f(x))/h\nsympy.limit(diff_limit.subs(f, sympy.cos), h, 0)\n```\n\n\n```python\nsympy.limit(diff_limit.subs(f, sympy.sin), h, 0)\n```\n\n\n```python\nexpr = (x**2 - 3*x) / (2*x - 2)\np = sympy.limit(expr/x, x, sympy.oo)\nq = sympy.limit(expr - p*x, x, sympy.oo)\np, q\n```\n\n\n```python\nn = sympy.symbols(\"n\", integer=True)\nx = sympy.Sum(1/(n**2), (n, 1, oo))\nx\n```\n\n\n```python\nx.doit()\n```\n\n\n```python\nx = sympy.Product(n, (n, 1, 7))\nx\n```\n\n\n```python\nx.doit()\n```\n\n\n```python\nx = sympy.Symbol(\"x\")\nsympy.Sum((x)**n/(sympy.factorial(n)), (n, 1, oo)).doit().simplify()\n```\n\n\n```python\nsympy.solve(x**2 + 2*x - 3)\n```\n\n\n```python\na, b, c = sympy.symbols(\"a, b, c\")\nsympy.solve(a * x**2 + b * x + c, x)\n```\n\n\n```python\nsympy.solve(sympy.sin(x) - sympy.cos(x), x)\n```\n\n\n```python\nsympy.solve(sympy.exp(x) + 2 * x, x)\n```\n\n\n```python\nsympy.solve(x**5 - x**2 + 1, x)\n```\n\n\n```python\neq1 = x + 2 * y - 1\neq2 = x - y + 1\nsympy.solve([eq1, eq2], [x, y], dict=True)\n\n```\n\n\n```python\neq1 = x**2 - y\neq2 = y**2 - x\nsols = sympy.solve([eq1, eq2], [x, y], dict=True)\nsols\n```\n\n\n```python\n[eq1.subs(sol).simplify() == 0 and eq2.subs(sol).simplify() == 0 for sol in sols]\n```\n\n\n\n\n [True, True, True, True]\n\n\n\n### Matrix\n\n\n```python\nsympy.Matrix([1, 2])\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1\\\\2\\end{matrix}\\right]$\n\n\n\n\n```python\nsympy.Matrix([[1, 2]])\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 2\\end{matrix}\\right]$\n\n\n\n\n```python\nsympy.Matrix([[1, 2], [3, 4]])\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 2\\\\3 & 4\\end{matrix}\\right]$\n\n\n\n\n```python\nsympy.Matrix(3, 4, lambda m, n: 10 * m + n)\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}0 & 1 & 2 & 3\\\\10 & 11 & 12 & 13\\\\20 & 21 & 22 & 23\\end{matrix}\\right]$\n\n\n\n\n```python\na, b, c, d = sympy.symbols(\"a, b, c, d\")\nM = sympy.Matrix([[a, b], [c, d]])\nM\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}a & b\\\\c & d\\end{matrix}\\right]$\n\n\n\n\n```python\nM * M\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}a^{2} + b c & a b + b d\\\\a c + c d & b c + d^{2}\\end{matrix}\\right]$\n\n\n\n\n```python\nx = sympy.Matrix(sympy.symbols(\"x_1, x_2\"))\nM * x\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}a x_{1} + b x_{2}\\\\c x_{1} + d x_{2}\\end{matrix}\\right]$\n\n\n\n\n```python\np, q = sympy.symbols(\"p, q\")\nM = sympy.Matrix([[1, p], [q, 1]])\nM\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & p\\\\q & 1\\end{matrix}\\right]$\n\n\n\n\n```python\nb = sympy.Matrix(sympy.symbols(\"b_1, b_2\"))\nb\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}b_{1}\\\\b_{2}\\end{matrix}\\right]$\n\n\n\n\n```python\nx = M.LUsolve(b)\nx\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}b_{1} - \\frac{p \\left(- b_{1} q + b_{2}\\right)}{- p q + 1}\\\\\\frac{- b_{1} q + b_{2}}{- p q + 1}\\end{matrix}\\right]$\n\n\n\n\n```python\nx = M.inv() 
* b\nx\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}\\frac{b_{1}}{- p q + 1} - \\frac{b_{2} p}{- p q + 1}\\\\- \\frac{b_{1} q}{- p q + 1} + \\frac{b_{2}}{- p q + 1}\\end{matrix}\\right]$\n\n\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "4cd66c7fb4cade22421f9a265cd6c01522cc91e1", "size": 247216, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "NumericalPython/2.SymPy.ipynb", "max_stars_repo_name": "nickovchinnikov/Computational-Science-and-Engineering", "max_stars_repo_head_hexsha": "45620e432c97fce68a24e2ade9210d30b341d2e4", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 8, "max_stars_repo_stars_event_min_datetime": "2021-01-14T08:00:23.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-31T14:00:11.000Z", "max_issues_repo_path": "NumericalPython/2.SymPy.ipynb", "max_issues_repo_name": "nickovchinnikov/Computational-Science-and-Engineering", "max_issues_repo_head_hexsha": "45620e432c97fce68a24e2ade9210d30b341d2e4", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "NumericalPython/2.SymPy.ipynb", "max_forks_repo_name": "nickovchinnikov/Computational-Science-and-Engineering", "max_forks_repo_head_hexsha": "45620e432c97fce68a24e2ade9210d30b341d2e4", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-01-25T15:21:40.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-25T15:21:40.000Z", "avg_line_length": 77.692017599, "max_line_length": 7944, "alphanum_fraction": 0.8141989192, "converted": true, "num_tokens": 4162, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9353465152482724, "lm_q2_score": 0.9086179000259899, "lm_q1q2_score": 0.8498725864815128}} {"text": "# Linear Regression Derived\n\n#### Cost function\n\n$$ C = \\sum_i (y_i - mx_i - c)^2 $$\n\n#### Calculate the partial derivative with respect to $m$\n\n$$\n\\begin{align}\n\\frac{\\partial C}{\\partial m} &= \\sum_i 2(y_i - m x_i -c)(-x_i) \\\\\n&= -2 \\sum_i x_i (y_i - m x_i -c) \\\\\n\\end{align}\n$$\n\n#### Set derivative to zero\n\n$$\n\\begin{align}\n& \\frac{\\partial C}{\\partial m} = 0 \\\\\n\\Rightarrow & -2 \\sum_i x_i (y_i - m x_i -c) = 0 \\\\\n\\Rightarrow & \\sum_i x_i y_i - m x_i x_i - x_i c = 0 \\\\\n\\Rightarrow & \\sum_i x_i y_i - \\sum_i m x_i x_i - \\sum_i x_i c = 0 \\\\\n\\Rightarrow & \\sum_i x_i y_i - \\sum_i x_i c = \\sum_i x_i x_i \\\\\n\\Rightarrow & \\sum_i x_i y_i - c \\sum_i x_i = m \\sum_i x_i x_i \\\\\n\\Rightarrow & m = \\frac{\\sum_i x_i y_i - c \\sum_i x_i}{\\sum_i x_i x_i}\n\\end{align}\n$$\n\n#### Calculate the partial derivative with respect to $c$\n\n$$\n\\begin{align}\n\\frac{\\partial C}{\\partial c} &= \\sum_i 2(y_i - m x_i -c)(-1) \\\\\n&= -2 \\sum_i (y_i - m x_i -c) \\\\\n\\end{align}\n$$\n\n#### Set the derivative to zero\n\n$$\n\\begin{align}\n& \\frac{\\partial C}{\\partial c} = 0 \\\\\n\\Rightarrow & -2 \\sum_i (y_i - m x_i - c) = 0 \\\\\n\\Rightarrow & \\sum_i (y_i - m x_i - c) = 0 \\\\\n\\Rightarrow & \\sum_i y_i - \\sum_i m x_i - \\sum_i c = 0 \\\\\n\\Rightarrow & \\sum_i y_i - m \\sum_i x_i - c \\sum_i 1 = 0 \\\\\n\\Rightarrow & \\sum_i y_i - m \\sum_i x_i = c \\sum_i 1 \\\\\n\\Rightarrow & c = \\frac{\\sum_i y_i - m \\sum_i x_i}{\\sum_i 1} \\\\\n\\Rightarrow & c = \\frac{\\sum_i y_i - m \\sum_i x_i}{\\sum_i 1} \\\\\n\\Rightarrow & c = \\frac{\\sum_i y_i}{\\sum_i 1} - m \\frac{\\sum_i x_i}{\\sum_i 1} \\\\\n\\Rightarrow & c = \\bar{y} - m \\bar{x} \\\\\n\\end{align}\n$$\n\n#### Combine the estimates\n\n$$\n\\begin{align}\n& m = \\frac{\\sum_i x_i y_i - c \\sum_i x_i}{\\sum_i x_i x_i} \\\\\n& c = \\bar{y} - m \\bar{x} \\\\\n& \\Rightarrow m = \\frac{\\sum_i x_i y_i - (\\bar{y} - m \\bar{x}) \\sum_i x_i}{\\sum_i x_i x_i} \\\\\n& \\Rightarrow m = \\frac{\\sum_i x_i y_i - \\bar{y} \\sum_i x_i - m \\bar{x} \\sum_i x_i}{\\sum_i x_i x_i} \\\\\n& \\Rightarrow m = \\frac{\\sum_i x_i y_i - \\bar{y} \\sum_i x_i}{\\sum_i x_i x_i} - m \\frac{\\bar{x} \\sum_i x_i}{\\sum_i x_i x_i} \\\\\n& \\Rightarrow m + m \\frac{\\bar{x} \\sum_i x_i}{\\sum_i x_i x_i} = \\frac{\\sum_i x_i y_i - \\bar{y} \\sum_i x_i}{\\sum_i x_i x_i} \\\\\n& \\Rightarrow m(1 + \\frac{\\bar{x} \\sum_i x_i}{\\sum_i x_i x_i}) = \\frac{\\sum_i x_i y_i - \\bar{y} \\sum_i x_i}{\\sum_i x_i x_i} \\\\\n& \\Rightarrow m(\\frac{\\sum_i x_i x_i + \\bar{x} \\sum_i x_i}{\\sum_i x_i x_i}) = \\frac{\\sum_i x_i y_i - \\bar{y} \\sum_i x_i}{\\sum_i x_i x_i} \\\\\n& \\Rightarrow m(\\sum_i x_i x_i + \\bar{x} \\sum_i x_i) = \\sum_i x_i y_i - \\bar{y} \\sum_i x_i \\\\\n& \\Rightarrow m = \\frac{\\sum_i x_i y_i - \\bar{y} \\sum_i x_i}{\\sum_i x_i x_i + \\bar{x} \\sum_i x_i} \\\\\n\\end{align}\n$$\n\n#### End\n\n\n```python\nimport numpy as np\n\nw = np.arange(1.0, 16.0, 1.0)\nd = 5.0 * w + 10.0 + np.random.normal(0.0, 5.0, w.size)\n\n# Calculate the best values for m and c.\nw_avg = np.mean(w)\nd_avg = np.mean(d)\n\nw_zero = w - w_avg\nd_zero = d - d_avg\n\nm = np.sum(w_zero * d_zero) / np.sum(w_zero * w_zero)\nc = d_avg - m * w_avg\n\nprint(\"m is %8.6f and c is %6.6f.\" % (m, c))\n\n\nx, y, x_avg, y_avg = w, d, w_avg, d_avg\nm2 = (np.sum(x * y) - y_avg * np.sum(x)) / (np.sum(x * x) - 
x_avg * np.sum(x))\nc2 = y_avg - m2 * x_avg\nm2, c2\n```\n\n    m is 4.728824 and c is 12.756653.\n\n\n\n\n\n    (4.7288241778626832, 12.756652779225512)\n\n\n\n\n```python\nq1 = (np.sum(x * y) - y_avg * np.sum(x)) / (np.sum(x*x) - x_avg * np.sum(x))\nq2 = (np.sum(x*y) - y_avg * np.sum(x) - x_avg * np.sum(y) + len(x) * x_avg * y_avg) / \\\n     (np.sum(x * x) - x_avg * np.sum(x) - x_avg * np.sum(x) + len(x) * x_avg * x_avg)\nq3 = (np.sum(x * y) - (x.size * x_avg * y_avg)) / (np.sum(x * x) - x.size * x_avg * x_avg)\n\nq1, q2, q3\n```\n\n\n\n\n    (4.7288241778626832, 4.7288241778626832, 4.7288241778626832)\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "bf227685bbd226e1f5be8944b02ece78e2f0602f", "size": 6981, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "linear-regression-derived.ipynb", "max_stars_repo_name": "sean-meade/jupyter-teaching-notebooks", "max_stars_repo_head_hexsha": "2f2a2bf6925fba479d1a73481122faebd201bdba", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "linear-regression-derived.ipynb", "max_issues_repo_name": "sean-meade/jupyter-teaching-notebooks", "max_issues_repo_head_hexsha": "2f2a2bf6925fba479d1a73481122faebd201bdba", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "linear-regression-derived.ipynb", "max_forks_repo_name": "sean-meade/jupyter-teaching-notebooks", "max_forks_repo_head_hexsha": "2f2a2bf6925fba479d1a73481122faebd201bdba", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 27.3764705882, "max_line_length": 161, "alphanum_fraction": 0.4626844292, "converted": true, "num_tokens": 1561, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9579122720843811, "lm_q2_score": 0.8872045952083047, "lm_q1q2_score": 0.8498641695996908}} {"text": "# Exploring the Discrete Fourier Transform \n\n### George Tzanetakis, University of Victoria \n\nIn this notebook we will explore the Discrete Fourier Transform, which is a fundamental algorithm in Digital Signal Processing. First let's look at the mathematical notation: \n\n\\begin{equation} \nX_k = \\left\\langle x, e^{jtk2\\pi/N} \\right\\rangle = \\sum_{t=0}^{N-1} x_t e^{-j t k 2 \\pi/N} \n\\end{equation} \n\nWe start with a finite, length-N segment of a digital signal $x_0, x_1, \\dots, x_{N-1}$. We define the inner product as expected: \n\\begin{equation} \n\\left\\langle x, y \\right\\rangle = \\sum_{t=0}^{N-1} x_t y_t^*\n\\end{equation}\nand the corresponding basis is N frequency phasors equally spaced in the desired range from zero to the sampling frequency: \n\\begin{equation} \n{e^{jtk2\\pi/N}} = {cos(tk2\\pi/N) + j sin(tk2\\pi/N)}\\;\\;\\; \\text{for} \\;\\; 0 \\leq k \\leq N-1\n\\end{equation} \n\n\n\n\n```python\n%matplotlib inline \nimport matplotlib.pyplot as plt\nimport numpy as np\nimport IPython.display as ipd\n```\n\n\nLet's plot the real part of one of these basis functions - let's say for $k=4$. 
\n\n\n```python\ndef plot_basis(t, x_re, x_im): \n    # plot the real part (left) and imaginary part (right) side by side\n    plt.figure(figsize=(20,5))\n    plt.subplot(121)\n    plt.plot(t, x_re, lw=2, color='blue')\n    plt.subplot(122)\n    plt.plot(t, x_im, lw=2, color='red')\n\nN = 512 \nt = np.arange(0,N)\n\n# basis function real and imaginary parts for k=4\nk = 4\nx_re = np.cos(k*t*2*np.pi/N)\nx_im = np.sin(k*t*2*np.pi/N)\n\nplot_basis(t, x_re, x_im) \n\n```\n\nLet's plot the real and imaginary parts of a basis function with a higher frequency, $k=10$. 
\n\n\n```python\nk=4\na1 = 0.8 \nx1 = a1 * np.sin(k*t*2*np.pi/N)\na2 = 0.4 \nk = 10 \nx2 = a2 * np.sin(k*t*2*np.pi/N)\nx = x1+x2\nplot_product(t,x,x_re)\nplot_product(t,x,x_im)\n```\n\nFinally lets look at computing the Discrete Fourier Transform directly from the equation and viewing the magnitude spectrum for the mixture signal we examined above. The code is written so that the connection to measuring amplitude and phases using inner products with the real and imaginary parts of each basis element is emphasized and the code does not utilize the complex number type. Notice the normalization by 2/N so that the mangitude spectrum shows the estimated amplitudes of the input signal. \n\n\n```python\ndef pedagogical_dft(x, N): \n X_re = np.zeros(N) # array holding the real parts of the spectrum \n X_im = np.zeros(N) # array holding the imaginary values of the spectrum \n for k in np.arange(0,N): \n for t in np.arange(0,N): \n X_re[k] += x[t] * np.cos(t * k * 2 * np.pi / N) # inner product with real basis k \n X_im[k] += x[t] * np.sin(t * k * 2 * np.pi / N) # inner product with imaginary basis k\n return (X_re, X_im)\n\ndef plot_mag_spectrum(Xmag): \n plt.figure(figsize=(20,5))\n n = np.arange(0,len(Xmag))\n plt.plot(n,Xmag)\n\nN = 512 \nn = np.arange(0,N)\n\n# Single sinusoid \nk=50\nx1 = 0.5 * np.sin(k*n*2*np.pi/N)\n\n(X_re, X_im) = pedagogical_dft(x1, N)\nXmag = 2 * np.sqrt(X_re * X_re + X_im * X_im) /N\nplot_mag_spectrum(Xmag)\n```\n\n\n```python\n# Mixture sinusoid input \nx2 = np.sin(50*n*2*np.pi/N) + 0.5 * np.sin(100 * n * 2 * np.pi/N) + 0.3 * np.sin(300 * n * 2 * np.pi/N)\n\n(X_re, X_im) = pedagogical_dft(x2, N)\nXmag = 2 * np.sqrt(X_re * X_re + X_im * X_im) / N \nplot_mag_spectrum(Xmag)\n \n```\n\nFinally let's check that we get similar results with the library implementation of the Fast Fourier Transform - notice the computation is much faster. 
\n\n\n```python\nX = np.fft.fft(x1)\nXmag = 2 * np.abs(X) / N \nplot_mag_spectrum(Xmag)\n\nX = np.fft.fft(x2)\nXmag = 2 * np.abs(X) / N \nplot_mag_spectrum(Xmag)\n```\n\n\n```python\n%%time\nimport numpy as np\nimport timeit \n\n\ndef pedagogical_dft(x, N): \n X_re = np.zeros(N) # array holding the real parts of the spectrum \n X_im = np.zeros(N) # array holding the imaginary values of the spectrum \n for k in np.arange(0,N):\n \n for t in np.arange(0,N): \n X_re[k] += x[t] * np.cos(t * k * 2 * np.pi / N) # inner product with real basis k \n X_im[k] += x[t] * np.sin(t * k * 2 * np.pi / N) # inner product with imaginary basis k\n return (X_re, X_im)\n\ndef test(): \n x = np.zeros(512)\n pedagogical_dft(x,512)\n\ntest()\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "87f8c2f2a5f2e590c89410661a076d5e76a54f69", "size": 576465, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "course1/session2/kadenze_mir_c1_s2_5_discrete_fourier_transform.ipynb", "max_stars_repo_name": "Achilleasein/mir_program_kadenze", "max_stars_repo_head_hexsha": "adc204f82dff565fe615e20681b84c94c2cff10d", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 19, "max_stars_repo_stars_event_min_datetime": "2021-03-16T00:00:29.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-01T05:03:45.000Z", "max_issues_repo_path": "course1/session2/kadenze_mir_c1_s2_5_discrete_fourier_transform.ipynb", "max_issues_repo_name": "femiogunbode/mir_program_kadenze", "max_issues_repo_head_hexsha": "7c3087acf1623b3b8d9742f1d50cd5dd53135020", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "course1/session2/kadenze_mir_c1_s2_5_discrete_fourier_transform.ipynb", "max_forks_repo_name": "femiogunbode/mir_program_kadenze", "max_forks_repo_head_hexsha": "7c3087acf1623b3b8d9742f1d50cd5dd53135020", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 9, "max_forks_repo_forks_event_min_datetime": "2021-03-16T03:07:45.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-12T04:29:03.000Z", "avg_line_length": 1223.9171974522, "max_line_length": 67616, "alphanum_fraction": 0.9535652642, "converted": true, "num_tokens": 2138, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9481545333502202, "lm_q2_score": 0.8962513689768735, "lm_q1q2_score": 0.8497847985167635}} {"text": "# Unweighted and Weighted Means\n\n\n```python\nimport numpy as np\nimport scipy.stats as stats\n\nimport matplotlib.pyplot as plt\nimport matplotlib.patches as mpathces\n```\n\n## Maximum Likelihood Estimator motivated \"derivations\"\n\n### Unweighted Means\n\nIf we make $n$ identical statistically independent (isi) measurements of a random variable $x$, such that the measurements collected form data $\\vec{x} = \\left\\{x_i, \\cdots, x_n\\right\\}$, from a Gaussian (Normal) distribution,\n\n$$\n\\begin{equation}\nL\\left(\\vec{x}; \\vec{\\theta}\\right) = \\prod_{i=1}^{n} f(x_i; \\mu, \\sigma) = \\frac{1}{(2\\pi)^{n/2} \\sigma^{n}} \\exp\\left(-\\frac{1}{2\\sigma^2} \\sum_{i=1}^{n} \\left(x_i - \\mu\\right)^2 \\right)\n\\end{equation}\n$$\n\nthen\n\n$$\n\\begin{equation}\n-\\ln L = \\frac{n}{2} \\ln\\left(2\\pi\\right) + n \\ln \\sigma + \\frac{1}{2\\sigma^2} \\sum_{i=1}^{n}\\left(x_i - \\mu\\right)^2\n\\end{equation}\n$$\n\nand so $L$ is maximized with respect to a variable $\\alpha$ when $-\\ln L$ is minimized,\n\n$$\n\\begin{equation*}\n\\frac{\\partial \\left(-\\ln L\\right)}{\\partial \\alpha} = 0.\n\\end{equation*}\n$$\n\nThus, $L$ is maximized when\n\n$$\n\\begin{equation*}\n\\frac{\\partial \\left(-\\ln L\\right)}{\\partial \\mu} = -\\frac{1}{\\sigma^2} \\sum_{i=1}^{n}\\left(x_i - \\mu\\right) = 0,\n\\end{equation*}\n$$\n\nwhich occurs for\n\n$$\n\\begin{equation*}\n\\sum_{i=1}^{n} x_i = n \\mu,\n\\end{equation*}\n$$\n\nsuch that the best estimate for true parameter $\\mu$ is\n\n$$\n\\begin{equation}\n\\boxed{\\hat{\\mu} = \\frac{1}{n} \\sum_{i=1}^{n} x_i = \\bar{x}\\,}\\,,\n\\end{equation}\n$$\n\nand $L$ is maximized when\n\n$$\n\\begin{equation*}\n\\frac{\\partial \\left(-\\ln L\\right)}{\\partial \\sigma} = \\frac{n}{\\sigma} - \\frac{1}{\\sigma^3} \\sum_{i=1}^{n} \\left(x_i - \\mu\\right) = 0,\n\\end{equation*}\n$$\n\nwhich occurs for\n\n$$\n\\begin{equation*}\nn\\sigma^2 = \\sum_{i=1}^{n} \\left(x_i - \\mu\\right)^2,\n\\end{equation*}\n$$\n\nwhich is\n\n$$\n\\begin{equation*}\n\\sigma = \\sqrt{\\frac{1}{n}\\sum_{i=1}^{n} \\left(x_i - \\mu\\right)^2}.\n\\end{equation*}\n$$\n\nHowever, $\\mu$ is an unknown true parameter, and the best estimate of it is $\\hat{\\mu}$, which is in no\nmanner required to be equal to $\\mu$. 
Thus, the best estimate of $\\sigma$ is\n\n$$\n\\begin{equation}\n\\boxed{\\hat{\\sigma}_{\\hat{\\mu}} = \\sqrt{\\frac{1}{n}\\sum_{i=1}^{n} \\left(x_i - \\hat{\\mu}\\right)^2} = \\sqrt{\\frac{1}{n}\\sum_{i=1}^{n} \\left(x_i - \\bar{x}\\,\\right)^2}\\,}\\,.\n\\end{equation}\n$$\n\nIf the separation from the mean of each observation, $\\left(x_i - \\bar{x}\\right) = \\delta x = \\text{constant}$, are the same then the uncertainty on the mean is found to be\n\n$$\n\\begin{equation*}\n\\sigma_{\\hat{\\mu}} = \\frac{\\delta x}{\\sqrt{n}},\n\\end{equation*}\n$$\n\nwhich is often referred to as the \"standard error\".\n\n---\nSo, for a population of measurements sampled from a distribution, it can be said that the sample mean is\n\n$$\\mu = \\frac{1}{n} \\sum_{i=1}^{n} x_i = \\bar{x},$$\n\nand the standard deviation of the sample is\n\n$$\n\\begin{equation*}\n\\sigma = \\sqrt{\\frac{1}{n}\\sum_{i=1}^{n} \\left(x_i - \\bar{x}\\,\\right)^2}.\n\\end{equation*}\n$$\n\n---\n\n### Weighted Means\n\nAssume that $n$ individual measurements $x_i$ are spread around (unknown) true value $\\theta$ according to a Gaussian distribution, each with known width $\\sigma_i$.\n\nThis then leads to the likelihood function\n\n$$\n\\begin{equation*}\nL(\\theta) = \\prod_{i=1}^{n} \\frac{1}{\\sqrt{2\\pi}\\sigma_i} \\exp\\left(-\\frac{\\left(x_i - \\theta\\right)^2}{2\\sigma_i^2} \\right)\n\\end{equation*}\n$$\n\nand so negative log-likelihood\n\n$$\n\\begin{equation}\n-\\ln L = \\frac{1}{2} \\ln\\left(2\\pi\\right) + \\ln \\sigma_i + \\frac{1}{2\\sigma_i^2} \\sum_{i=1}^{n}\\left(x_i - \\theta\\right)^2.\n\\end{equation}\n$$\n\nAs before, $L$ is maximized with respect to a variable $\\alpha$ when $-\\ln L$ is minimized,\n\n$$\n\\begin{equation*}\n\\frac{\\partial \\left(-\\ln L\\right)}{\\partial \\alpha} = 0,\n\\end{equation*}\n$$\n\nand so $L$ is maximized with respect to $\\theta$ when\n\n$$\n\\begin{equation*}\n\\frac{\\partial \\left(-\\ln L\\right)}{\\partial \\theta} = -\\sum_{i=1}^{n} \\frac{x_i - \\theta}{\\sigma_i^2} = 0,\n\\end{equation*}\n$$\n\nwhich occurs for\n\n$$\n\\begin{equation*}\n\\sum_{i=1}^{n} \\frac{x_i}{\\sigma_i^2} = \\theta \\sum_{i=1}^{n} \\frac{1}{\\sigma_i^2},\n\\end{equation*}\n$$\n\nwhich is\n\n$$\n\\begin{equation}\n\\hat{\\theta} = \\frac{\\displaystyle\\sum_{i=1}^{n} \\frac{x_i}{\\sigma_i^2}}{\\displaystyle\\sum_{i=1}^{n}\\frac{1}{\\sigma_i^2}}.\n\\end{equation}\n$$\n\nNote that by defining \"weights\" to be\n\n$$\n\\begin{equation*}\nw_i = \\frac{1}{\\sigma_1^2},\n\\end{equation*}\n$$\n\nthis can be expressed as\n\n$$\n\\begin{equation}\n\\boxed{\\hat{\\theta} = \\frac{\\displaystyle\\sum_{i=1}^{n} w_i\\, x_i}{\\displaystyle\\sum_{i=1}^{n}w_i}}\\,,\n\\end{equation}\n$$\n\nmaking the term \"weighted mean\" very transparent.\n\nTo find the standard deviation on the weighted mean, we first look to the variance, $\\sigma^2$. 
[4]\n\n$$\n\\begin{align*}\n\\sigma^2 &= \\text{E}\\left[\\left(\\hat{\\theta} - \\text{E}\\left[\\hat{\\theta}\\right]\\right)^2\\right] \\\\\n &= \\text{E}\\left[\\left(\\frac{\\displaystyle\\sum_{i=1}^{n} w_i\\, x_i}{\\displaystyle\\sum_{i=1}^{n}w_i} - \\text{E}\\left[\\frac{\\displaystyle\\sum_{i=1}^{n} w_i\\, x_i}{\\displaystyle\\sum_{i=1}^{n}w_i}\\right]\\,\\right)^2\\right] \\\\\n &= \\frac{1}{\\displaystyle\\left(\\sum_{i=1}^{n} w_i\\right)^2} \\text{E} \\left[ \\displaystyle\\left(\\sum_{i=1}^{n} w_i\\,x_i\\right)^2 - 2 \\displaystyle\\left(\\sum_{i=1}^{n} w_i\\,x_i\\right) \\displaystyle\\left(\\sum_{i=j}^{n} w_j\\, \\text{E}\\left[x_j\\right]\\right) + \\displaystyle\\left(\\sum_{i=1}^{n} w_i\\, \\text{E}\\left[x_i\\right]\\right)^2 \\right] \\\\\n &= \\frac{1}{\\displaystyle\\left(\\sum_{i=1}^{n} w_i\\right)^2} \\text{E} \\left[ \\sum_{i,j}^{n} w_i\\, x_i w_j\\, x_j - 2 \\sum_{i,j}^{n} w_i\\, x_i w_j\\, \\text{E}\\left[x_j\\right] + \\sum_{i,j}^{n} w_i\\, \\text{E}\\left[x_i\\right] w_j\\, \\text{E}\\left[x_j\\right] \\right] \\\\\n &= \\frac{1}{\\displaystyle\\left(\\sum_{i=1}^{n} w_i\\right)^2} \\sum_{i,j}^{n} w_i w_j \\left( \\text{E}\\left[ x_i x_j \\right] - 2 \\text{E}\\left[ x_i \\right]\\text{E}\\left[ x_j \\right] + \\text{E}\\left[ x_i \\right]\\text{E}\\left[ x_j \\right] \\right) \\\\\n &= \\frac{1}{\\displaystyle\\left(\\sum_{i=1}^{n} w_i\\right)^2} \\sum_{i,j}^{n} w_i w_j \\left( \\text{E}\\left[ x_i x_j \\right] - \\text{E}\\left[ x_i \\right]\\text{E}\\left[ x_j \\right] \\right) \\\\\n &= \\frac{1}{\\displaystyle\\left(\\sum_{i=1}^{n} w_i\\right)^2} \\sum_{i,j}^{n} w_i w_j \\,\\text{Cov}\\left( x_i, x_j \\right) = \\left\\{\n\\begin{array}{ll}\n\\frac{\\displaystyle1}{\\displaystyle\\left(\\sum_{i=1}^{n} w_i\\right)^2} \\displaystyle\\sum_{i}^{n} \\left( w_i \\sigma_i \\right)^2\\,, & x_i \\text{ and } x_j \\text{ statistically independent}, \\\\\n0\\,, &\\text{ otherwise},\n\\end{array}\n\\right. \\\\\n &= \\frac{\\displaystyle\\sum_{i}^{n} \\left( \\sigma_i^{-2} \\sigma_i \\right)^2}{\\displaystyle\\left(\\sum_{i=1}^{n} w_i\\right)^2} = \\frac{\\displaystyle\\sum_{i}^{n} w_i}{\\displaystyle\\left(\\sum_{i=1}^{n} w_i\\right)^2} \\\\\n &= \\frac{\\displaystyle 1}{\\displaystyle\\sum_{i=1}^{n} w_i}\n\\end{align*}\n$$\n\nThus, it is seen that the standard deviation on the weighted mean is\n\n$$\n\\begin{equation}\n\\boxed{\\sigma_{\\hat{\\theta}} = \\sqrt{\\frac{\\displaystyle 1}{\\displaystyle\\sum_{i=1}^{n} w_i}} = \\left(\\displaystyle\\sum_{i=1}^{n} \\frac{1}{\\sigma_i^2}\\right)^{-1/2}}\\,.\n\\end{equation}\n$$\n\nNotice that in the event that the uncertainties are uniform for each observation, $\\sigma_i = \\delta x$, the above yields the same result as the unweighted mean. 
$\\checkmark$\n\nAfter this aside it is worth pointing out that [1] have a very elegant demonstration that\n\n$$\n\\begin{equation*}\n\\sigma_{\\hat{\\theta}} = \\left(\\frac{\\partial^2\\left(- \\ln L\\right)}{\\partial\\, \\theta^2}\\right)^{-1/2} = \\left(\\displaystyle\\sum_{i=1}^{n} \\frac{1}{\\sigma_i^2}\\right)^{-1/2}.\n\\end{equation*}\n$$\n\n---\nSo, the average of $n$ measurements of quantity $\\theta$, with individual measurements, $x_i$, Gaussianly distributed about (unknown) true value $\\theta$ with known width $\\sigma_i$, is the weighted mean\n\n$$\n\\begin{equation*}\n\\hat{\\theta} = \\frac{\\displaystyle\\sum_{i=1}^{n} w_i\\, x_i}{\\displaystyle\\sum_{i=1}^{n}w_i},\n\\end{equation*}\n$$\n\nwith weights $w_i = \\sigma_i^{-2}$, with standard deviation on the weighted mean\n\n$$\n\\begin{equation*}\n\\sigma_{\\hat{\\theta}} = \\sqrt{\\frac{\\displaystyle 1}{\\displaystyle\\sum_{i=1}^{n} w_i}} = \\left(\\displaystyle\\sum_{i=1}^{n} \\frac{1}{\\sigma_i^2}\\right)^{-1/2}.\n\\end{equation*}\n$$\n\n---\n\n## Specific Examples\n\nGiven the measurements\n\n$$\n\\vec{x} = \\left\\{10, 9, 11\\right\\}\n$$\n\nwith uncertanties\n\n$$\\vec{\\sigma_x} = \\left\\{1, 2, 3\\right\\}$$\n\n\n```python\nx_data = [10, 9, 11]\nx_uncertainty = [1, 2, 3]\n```\n\n\n```python\nnumerator = sum(x / (sigma_x ** 2) for x, sigma_x in zip(x_data, x_uncertainty))\ndenominator = sum(1 / (sigma_x ** 2) for sigma_x in x_uncertainty)\n\nprint(f\"hand calculated weighted mean: {numerator / denominator}\")\n```\n\nUsing [NumPy's `average` method](https://docs.scipy.org/doc/numpy/reference/generated/numpy.average.html)\n\n\n```python\n# unweighted mean\nnp.average(x_data)\n```\n\n\n```python\nx_weights = [1 / (uncert ** 2) for uncert in x_uncertainty]\n# weighted mean\nweighted_mean = np.average(x_data, weights=x_weights)\nprint(weighted_mean)\n```\n\n\n```python\n# no method to do this in NumPy!?\nsigma = np.sqrt(1 / np.sum(x_weights))\nprint(f\"hand calculated uncertaintiy on weighted mean: {sigma}\")\n```\n\n\n```python\n# A second way to find the uncertainty on the weighted mean\nsummand = sum((x * w) for x, w in zip(x_data, x_weights))\nnp.sqrt(np.average(x_data, weights=x_weights) / summand)\n```\n\nLet's plot the data now and take a look at the results\n\n\n```python\ndef draw_weighted_mean(data, errors, w_mean, w_uncert):\n plt.figure(1)\n\n # the data to be plotted\n x = [i + 1 for i in range(len(data))]\n\n x_min = x[x.index(min(x))]\n x_max = x[x.index(max(x))]\n\n y = data\n y_min = y[y.index(min(y))]\n y_max = y[y.index(max(y))]\n\n err_max = errors[errors.index(max(errors))]\n\n # plot data\n plt.errorbar(x, y, xerr=0, yerr=errors, fmt=\"o\", color=\"black\")\n # plot weighted mean\n plt.plot((x_min, x_max), (w_mean, w_mean), color=\"blue\")\n # plot uncertainty on weighted mean\n plt.plot(\n (x_min, x_max),\n (w_mean - w_uncert, w_mean - w_uncert),\n color=\"gray\",\n linestyle=\"--\",\n )\n plt.plot(\n (x_min, x_max),\n (w_mean + w_uncert, w_mean + w_uncert),\n color=\"gray\",\n linestyle=\"--\",\n )\n\n # Axes\n plt.xlabel(\"Individual measurements\")\n plt.ylabel(\"Value of measruement\")\n # view range\n epsilon = 0.1\n plt.xlim(x_min - epsilon, x_max + epsilon)\n plt.ylim([y_min - err_max, 1.5 * y_max + err_max])\n\n # ax = figure().gca()\n # ax.xaxis.set_major_locator(MaxNLocator(integer=True))\n\n # Legends\n wmean_patch = mpathces.Patch(\n color=\"blue\", label=fr\"Weighted mean: $\\mu={w_mean:0.3f}$\"\n )\n uncert_patch = mpathces.Patch(\n color=\"gray\",\n label=fr\"Uncertainty on the 
weighted mean: $\\pm{w_uncert:0.3f}$\",\n )\n plt.legend(handles=[wmean_patch, uncert_patch])\n\n plt.show()\n```\n\n\n```python\ndraw_weighted_mean(x_data, x_uncertainty, weighted_mean, sigma)\n```\n\nNow let's do this again, but with data that are Normally distributed about a mean value.\n\n\n```python\ntrue_mu = np.random.uniform(3, 9)\ntrue_sigma = np.random.uniform(0.1, 2.0)\nn_samples = 20\n\nsamples = np.random.normal(true_mu, true_sigma, n_samples).tolist()\ngauss_errs = np.random.normal(2, 0.4, n_samples).tolist()\n\nweights = [1 / (uncert ** 2) for uncert in gauss_errs]\n\ndraw_weighted_mean(\n    samples,\n    gauss_errs,\n    np.average(samples, weights=weights),\n    np.sqrt(1 / np.sum(weights)),\n)\n```\n\n## References\n\n1. [_Data Analysis in High Energy Physics_](http://eu.wiley.com/WileyCDA/WileyTitle/productCd-3527410589.html), Behnke et al., 2013, $\\S$ 2.3.3.1\n2. [_Statistical Data Analysis_](http://www.pp.rhul.ac.uk/~cowan/sda/), Glen Cowan, 1998\n3. University of Maryland, Physics 261, [Notes on Error Propagation](http://www.physics.umd.edu/courses/Phys261/F06/ErrorPropagation.pdf)\n4. Physics Stack Exchange, [_How do you find the uncertainty of a weighted average?_](https://physics.stackexchange.com/questions/15197/how-do-you-find-the-uncertainty-of-a-weighted-average)\n
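\n\nAs a quick numerical cross-check of the closed-form result $\\sigma_{\\hat{\\theta}} = \\left(\\sum_{i} \\sigma_i^{-2}\\right)^{-1/2}$ derived above, we can run a small Monte Carlo experiment: repeatedly draw pseudo-measurements with the example uncertainties $\\{1, 2, 3\\}$, form the weighted mean of each pseudo-experiment, and compare the spread of those weighted means with the formula. This is only an illustrative sketch; the value of `mc_true_value` and the number of trials are arbitrary choices made here.\n\n\n```python\nimport numpy as np\n\nrng = np.random.default_rng(0)\n\nmc_true_value = 10.0                   # hypothetical true value, used only for this check\nmc_sigmas = np.array([1.0, 2.0, 3.0])  # per-measurement uncertainties from the example above\nmc_weights = 1 / mc_sigmas ** 2\nn_trials = 100_000\n\n# Each row is one pseudo-experiment of three measurements\npseudo_data = rng.normal(mc_true_value, mc_sigmas, size=(n_trials, mc_sigmas.size))\n\n# Weighted mean of every pseudo-experiment\nmc_weighted_means = pseudo_data @ mc_weights / mc_weights.sum()\n\nprint(f\"Monte Carlo spread of weighted means: {mc_weighted_means.std(ddof=1):.4f}\")\nprint(f\"Closed-form (sum of weights)^(-1/2):  {1 / np.sqrt(mc_weights.sum()):.4f}\")\n```\n\nThe two numbers should agree to within Monte Carlo precision, which is a handy sanity check whenever the weights are taken to be $1/\\sigma_i^2$.\n\n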
YES", "lm_q1_score": 0.948154531885212, "lm_q2_score": 0.8962513627417531, "lm_q1q2_score": 0.8497847912918903}} {"text": "- The prime 41, can be written as the sum of six consecutive primes:\n41 = 2 + 3 + 5 + 7 + 11 + 13\n\n- This is the longest sum of consecutive primes that adds to a prime below one-hundred.\n\n- The longest sum of consecutive primes below one-thousand that adds to a prime, \ncontains 21 terms, and is equal to 953.\n\n- **Which prime, below one-million, can be written as the sum of the most consecutive primes?**\n\n\n```python\nfrom tqdm import tqdm\nimport numpy as np\nfrom sympy import sieve\n\nN = 1e6\nsieve._reset()\nsieve.extend(N)\n\nprimes = np.array(sieve._list)\nprime_set = set(primes)\ncs = np.cumsum(primes)\n\nprimes.shape\n```\n\n$f(i,n) = \\sum_{i}^{i+n}P_i$\n\n$S = \\texttt{primes} < 1000000$\n\n$\\exists \\ i^*, n^*: f(i^*, n^*) \\in S \\wedge \\not \\exists \\ i', n' > n^*: \\ f(i', n') \\in S$\n\n\n```python\nd = -np.infty\np = None\n\nfor i in tqdm(range(len(primes))):\n for j in range(i, -1, -1):\n p_ = cs[i] - cs[j]\n if p_ > N: break\n \n if p_ in prime_set and (i - j) > d:\n d, p = i-j, p_\nd, p\n```\n", "meta": {"hexsha": "62568c8ae385e65a3425cdbdb8bb9eaef81d7b30", "size": 2163, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "src/p50.ipynb", "max_stars_repo_name": "alexandru-dinu/project-euler", "max_stars_repo_head_hexsha": "10afd9e204203dd8d5c827b33659a5a2b3090532", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/p50.ipynb", "max_issues_repo_name": "alexandru-dinu/project-euler", "max_issues_repo_head_hexsha": "10afd9e204203dd8d5c827b33659a5a2b3090532", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2021-10-13T19:26:01.000Z", "max_issues_repo_issues_event_max_datetime": "2021-10-13T22:18:23.000Z", "max_forks_repo_path": "src/p50.ipynb", "max_forks_repo_name": "alexandru-dinu/project-euler", "max_forks_repo_head_hexsha": "10afd9e204203dd8d5c827b33659a5a2b3090532", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 23.5108695652, "max_line_length": 108, "alphanum_fraction": 0.4868238558, "converted": true, "num_tokens": 343, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9702399051935108, "lm_q2_score": 0.8757869867849166, "lm_q1q2_score": 0.849723483027908}} {"text": "# 3. The Central Dogma of Molecular Biology\n\n\nThe __Central Dogma of Molecular Biology__ is the process by which the instructions in DNA are converted into a functional product. It its most simplest definition, it involves the following processes:\n\n- _Replication_: DNA is copied during cell division so the two daughter cells inherit the same genetic information. \n\n- _Transcription_: the information in the DNA of every cell is converted into small, portable RNA messages.\n\n- _Translation_: During translation, these messages travel from where the DNA is in the cell nucleus to the ribosomes where they are \u2018read\u2019 to make specific proteins.\n\nWe will write a mathematical simplified version of the Central Dogma using the differential form of the Mass action Law. 
To do that, we will consider a gene that is regulated by a transcription factor $T$, and let M denote the RNA molecular species resulting from this gene transcription. Under the assumption that chemical kinetics can be used to model gene expression and gene regulation, we can write the following reaction:\n\n\\begin{align}\nT+D &\\overset{k_M}{\\longrightarrow} M + T + D \\tag{1} \\\\ \nM &\\overset{\\gamma_M}{\\longrightarrow} \\emptyset \\tag{2}\n\\end{align}\n\n- $[M]$ is the concentration of mRNA \n- $[D]$ is the concentration of gene copies\n- $[T]$ is the transcription factor\n- $k_M$ is the maximum transcription rate of a gene copy \n- $\\gamma_M$ is the RNA degradation rate \n\nBased on the Mass Action Law, and assuming that both `D` and `T` are constant, the equation governing the dynamics of the RNA concentration [M] is:\n\n$$\n\\frac{\\mathrm{d} [M]}{\\mathrm{d} t}= k_M [D] [T]- \\gamma_M [M] \\tag{3}\n$$\n\nTo solve this equation, we first separate the variables in both sides of the equation as: \n\n$$\\begin{align*}\n \\frac{\\mathrm{d} [M]}{\\mathrm{d} t} + \\gamma_M [M] &= k_M [D] [T] \\tag{4}\\\\\n \\end{align*}$$ \n \n We need to calculate the integrating factor, $e^{ \\int p(x)dx }$, which in this case is $e^{\\gamma_M \\cdot t}$. We then multiply both terms in the previous equation by the integrating factor. In this case is simply \n $$\\begin{align*}\n e^{\\gamma_M \\cdot t} \\frac{\\mathrm{d} [M]}{\\mathrm{d} t} + \\gamma_M [M] e^{\\gamma_M \\cdot t} &= k_M [D] [T] e^{\\gamma_M \\cdot t}\\tag{5}\n \\end{align*}$$ \n the first term of the equation is simply\n \n $$\\begin{align*}\n \\frac{\\mathrm{d} ([M] e^{\\gamma_M \\cdot t})}{\\mathrm{d} t} &= k_M [D] [T] e^{\\gamma_M \\cdot t} \\tag{6}\n \\end{align*}$$ \n \n we change the `dt` to the left side and then integrate both sides of the equation:\n $$\\begin{align*}\n \\int \\mathrm{d} ([M] e^{\\gamma_M \\cdot t}) &= \\int k_M [D] [T] e^{\\gamma_M \\cdot t} \\mathrm{d} t\\tag{7}\\\\\n [M] e^{\\gamma_M \\cdot t} &= \\frac{k_M [D] [T]}{\\gamma_M} e^{ \\gamma_M \\cdot t} + C \\tag{8}\\\\\n \\end{align*}$$ \n \n that rearranging terms becomes \n\n $$\\begin{align*}\n [M] &= \\frac{k_M [D] [T]}{\\gamma_M} + C \\cdot e^{-\\gamma_M \\cdot t}\\tag{9}\\\\\n \\end{align*}$$ \n \n to evaluate the integration constant, we use the initial value of `M`:\n \n $$\\begin{align*}\n [M(0)] &= \\frac{k_M [D] [T]}{\\gamma_M} + C \\tag{10}\\\\\n C &= [M(0)] - \\frac{k_M [D] [T]}{\\gamma_M} \\tag{11}\\\\\n \\end{align*}$$ \n \n therefore\n\n $$\\begin{align*}\n [M(t)] &= \\frac{k_M [D] [T]}{\\gamma_M} + [M(0)] \\cdot e^{-\\gamma_M \\cdot t} - \\frac{k_M [D] [T]}{\\gamma_M} \\cdot e^{-\\gamma_M \\cdot t}\\tag{12}\\\\\n [M(t)] &= [M(0)] \\cdot e^{-\\gamma_M \\cdot t} + \\frac{k_M [D] [T]}{\\gamma_M} (1 - e^{-\\gamma_M \\cdot t})\\tag{13}\\\\\n \\end{align*}$$ \n \nTherefore, the dynamics of `mRNA` looks like this:\n\n\n```julia\nusing Plots\ngr()\n```\n\n\n\n\n Plots.GRBackend()\n\n\n\n\n```julia\nk_M=1\n\u03b3_M=1\nT=1\nD=1\nM\u2080=0.2\nt=collect(0:0.1:10);\n```\n\n\n```julia\nplot(t,t-> M\u2080*exp(-\u03b3_M*t)+k_M*D*T/\u03b3_M*(1-exp(-\u03b3_M*t)),label=\"\\\\ M\",seriestype=:line,ylims = (0,1))\ntitle!(\"Central Dogma\")\nxaxis!(\"Time\")\nyaxis!(\"Concentration\")\n```\n\n\n\n\n \n\n \n\n\n\n## Promoter leakining\n\nThis simplified approximation does not take into acount that promoters are not perfect, and that there is often a basal production of mRNA that is not negligible. 
The next set of inetactions includes the basal production of `M` in abseence of transcription factor `T` with a rate $\\alpha_0$:\n\n$$\n\\begin{align}\n\\emptyset &\\overset{\\alpha_0}{\\longrightarrow} M \\tag{14}\\\\ \nT+D &\\overset{k_M}{\\longrightarrow} M + T + D \\tag{15}\\\\ \nM &\\overset{\\gamma_M}{\\longrightarrow} \\emptyset \\tag{16}\\\\\n\\end{align}\n$$\n\nNow, the equation governing the dynamics of the mRNA concentration [M] is:\n\n$$\n\\frac{\\mathrm{d} [M]}{\\mathrm{d} t}= \\alpha_0 + k_M [D] [T]- \\gamma_M [M] \\tag{17}\n$$\nFollowing the same steps as the previous equation, we obatin the following solution:\n\n $$\\begin{align*}\n [M] &= \\frac{\\alpha_0 + k_M [D] [T]}{\\gamma_M} + C \\cdot e^{-\\gamma_M \\cdot t}\\tag{18}\\\\\n \\end{align*}$$ \n \n to evaluate the integration constant, we use the initial value of `M`:\n \n $$\\begin{align*}\n [M(0)] &= \\frac{\\alpha_0 + k_M [D] [T]}{\\gamma_M} + C \\tag{19}\\\\\n C &= [M(0)] - \\frac{\\alpha_0 + k_M [D] [T]}{\\gamma_M} \\tag{20}\\\\\n \\end{align*}$$ \n \n therefore the final equation is\n\n $$\\begin{align*}\n [M(t)] &= \\frac{\\alpha_0 + k_M [D] [T]}{\\gamma_M} + [M(0)] \\cdot e^{-\\gamma_M \\cdot t} - \\frac{\\alpha_0 + k_M [D] [T]}{\\gamma_M} \\cdot e^{-\\gamma_M \\cdot t}\\tag{21}\\\\\n [M(t)] &= [M(0)] \\cdot e^{-\\gamma_M \\cdot t} + \\frac{\\alpha_0 + k_M [D] [T]}{\\gamma_M} (1 - e^{-\\gamma_M \\cdot t})\\tag{22}\\\\\n \\end{align*}$$ \n \nTherefore, with promoter leaking the dynamics of `mRNA` looks like this:\n\n\n```julia\n\u03b1_0=0.5\nplot(t,t-> M\u2080*exp(-\u03b3_M*t)+(\u03b1_0 + k_M*D*T)/\u03b3_M*(1-exp(-\u03b3_M*t)),label=\"\\\\ M with \\\\alpha_0\",seriestype=:line,ylims = (0,1.5))\nplot!(t,t-> M\u2080*exp(-\u03b3_M*t)+( k_M*D*T)/\u03b3_M*(1-exp(-\u03b3_M*t)),label=\"\\\\ M without \\\\alpha_0\",seriestype=:line,ylims = (0,1.5))\n\ntitle!(\"Central Dogma with Promoter Leaking\")\nxaxis!(\"Time\")\nyaxis!(\"Concentration\")\n```\n\n\n\n\n \n\n \n\n\n\n## Hill Function and cooperativity\n\nThis first version of the model assumes that one molecule of transcription factor `T` results in one molecule of `mRNA`. This is not the case in transcription, since a molecule of transcription factor is able to recruit many molecules of `pRNA` and activate transcription of many `mRNA` molecules. Therefore, more molecules of transcition factor means more activation of the transcription of `M`. This way, we write the rate of production of `M` depending on the number of molecules of transciption factor `T` as a function: \n\n$$\n\\frac{\\mathrm{d} [M]}{\\mathrm{d} t}= \\alpha_0 + k_M [D] \\Phi ([T])- \\gamma_M [M] \\tag{23}\n$$\n\n$\\Phi ([T])$ can be interpreted as the probability that a gene copy is transcribed at a given time, as a function of $[T]$. This dependence allows us to introduce the concept of a transcription factor as activators (its presence results in activation of transcription) or as repressors (its presence results in repression of transcription). \nDepending on this, the regulatory function $\\Phi([T])$ will increase or decrease with `T`. As a first approximation, $\\Phi([T])$ is often assumed as a step function., i.e, the gene is transcribed (or repressed) when the transcription factor is bounded. 
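\n\nWritten out explicitly, this step-function assumption can be stated as below, where $K$ is the threshold concentration of transcription factor; the plots that follow approximate these steps with a Hill function of very large exponent ($n=1000$):\n\n$$\n\\Phi_{act}([T]) = \\begin{cases} 1, & [T] \\geq K \\\\ 0, & [T] < K \\end{cases}\n\\qquad\n\\Phi_{rep}([T]) = \\begin{cases} 0, & [T] \\geq K \\\\ 1, & [T] < K \\end{cases}\n$$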
\n\n\n```julia\nT_vector= LinRange(0,2,100)\nn=1000\nK=1\n\u03d5=D.*T_vector.^n./(K.^n.+T_vector.^n)\nP1=plot(T_vector,\u03d5,lab=\"\")\ntitle!(\"Step function activator\")\nxlabel!(\"Concentration of Transcription factor\")\nylabel!(\"function \u00f8\")\n\n\u03d5=D.*K.^n./(K.^n.+T_vector.^n)\nP2=plot(T_vector,\u03d5,lab=\"\")\ntitle!(\"Step function repressor\")\nxlabel!(\"Concentration of Transcription factor\")\nylabel!(\"function \u00f8\")\n\n\nplot(P1,P2,layout=(1,2),legend=true,size = (800, 500))\n```\n\n\n\n\n \n\n \n\n\n\n The analytical soultion will look quite similar to the previous equation, just replacing [$T$] by [$\\Phi ([T])$]\n\n$$\\begin{align*}\n [M(t)] &= [M(0)] \\cdot e^{-\\gamma_M \\cdot t} + \\frac{\\alpha_0 + k_M [D] \\Phi ([T])}{\\gamma_M} (1 - e^{-\\gamma_M \\cdot t})\\tag{24}\\\\\n \\end{align*}$$ \n \n and the solution will depend on whether there is a value of `T` higher or lower than 1:\n\n\n```julia\nP1=plot(t,t-> M\u2080*exp(-\u03b3_M*t)+(\u03b1_0 + k_M*D*0.)/\u03b3_M*(1-exp(-\u03b3_M*t)),label=\"\\\\ M with T=0\",seriestype=:line,ylims = (0,1.5))\nplot!(t,t-> M\u2080*exp(-\u03b3_M*t)+(\u03b1_0 + k_M*D*1.)/\u03b3_M*(1-exp(-\u03b3_M*t)),label=\"\\\\ M with T=1\",seriestype=:line,ylims = (0,1.5))\n\ntitle!(\"Central Dogma Activator Step Funct\")\nxaxis!(\"Time\")\nyaxis!(\"Concentration\")\n\nP2=plot(t,t-> M\u2080*exp(-\u03b3_M*t)+(\u03b1_0 + k_M*D*1.)/\u03b3_M*(1-exp(-\u03b3_M*t)),label=\"\\\\ M with T=0\",seriestype=:line,ylims = (0,1.5))\nplot!(t,t-> M\u2080*exp(-\u03b3_M*t)+(\u03b1_0 + k_M*D*0.)/\u03b3_M*(1-exp(-\u03b3_M*t)),label=\"\\\\ M with T=1\",seriestype=:line,ylims = (0,1.5))\n\ntitle!(\"Central Dogma Repressor Step Func\")\nxaxis!(\"Time\")\nyaxis!(\"Concentration\")\n\nplot(P1,P2,layout=(1,2),legend=true,size = (800, 500))\n```\n\n\n\n\n \n\n \n\n\n\nWe can see now that, when concentration of `T` is above a certain value, transcription is activated (for `T` as activator) or repressed (for `T`as repressors). Binding and unbinding of`T` to the promoter is a highly dynamic procces, and more realistic scenario is that the probability of transcription of a gene `\u00d8([T],[D])` depends on the time the trascription factor `T` is bound to the DNA `D`. This bindng and unbinding can be written as a chemical interaction as\n\n$$ [D_{inactive}] + [T] \\overset{k_1}{\\underset{k_2}{\\longleftrightarrow}} [D_{active}] \\tag{25}$$\n\nTo obtain a more general approximation, we will introduce the possibility that several molecuels of the transcription factor are required to activate transcription. This is known as `cooperativity` and is represented in the following scheme of interaction. \n\n$$ [D_{inactive}] + n[T] \\overset{k_1}{\\underset{k_2}{\\longleftrightarrow}} [D_{Active}] \\tag{26}$$\n\nThe dynamics of binding and unbinding of transcription factor to the DNA is much faster that the dynamics of transcription. Therefore, using separation of scales, we can consider that equilibrium is reached verty fast. Based on this assumption and on Mass Action Law, the concetrations at equilibrium satisfy the following relation: \n \n$$[D_{inactive}] \\cdot [T]^n= \\frac{k_2}{k_1} [D_{Active}]= K_D \\cdot [D_{Active}] \\tag{27}$$\n\nwhere $K_D$ is the chemical equilibrium constant. 
We have also an extra constraint given by the fact that the total number of DNA copies `[D]` is fixed, therefore:\n\n$$[D_{Active}] + [D_{Inactive}] = [D] \\tag{28}$$\n\nCombining the equations 27 and 28, we can write the ratio of active copies of the transcriotion factor as:\n\n$$\n\\begin{align}\n\\frac{[D_{Active}]}{[D]}&=\\frac{[T]^n [D_{Inactive}]}{K_D}\\cdot \\frac{1}{[D_{Active}]+[D_{Inactive}]} \\tag{29}\\\\\n\\\\\n\\frac{[D_{Active}]}{[D]}&=\\frac{[T]^n }{K_D \\frac{[D_{Inactive}]+[D_{Active}]}{[D_{Inactive}]}} \\tag{30}\\\\\n\\\\\n\\frac{[D_{Active}]}{[D]}&=\\frac{[T]^n }{ K_D(\\frac{[D_{Active}]}{[D_{Inactive}]}+1)} \\tag{31}\\\\\n\\\\\n\\frac{[D_{Active}]}{[D]}&=\\frac{[T]^n}{\\frac{K_D \\cdot [D_{Active}]}{[D_{Inactive}]}+K_D} \\tag{32}\\\\\n\\end{align}$$\n\n\nand using again the relation in Eq. 27, we obtain\n\n$$\\frac{[D_{Active}]}{[D]}=\\frac{[T]^n}{K_D+[T]^n} \\tag{33}$$\n\nif we rewrite the equilibrium constant $K_D$ as a new constant to the power of `n`, we obatin:\n\n$$h^{(1)}=\\frac{[D_{Active}]}{[D]}=\\frac{[T]^n}{K^n+[T]^n} \\tag{34}$$\n\nThis equation $h^{(1)}$ is the well known `Hill function`. It satisfies the following properties:\n\n1. $h^{(1)}(0)=0 \\tag{35}$\n\n2. $h^{(1)}(T=K)=\\frac{1}{2} \\tag{36}$\n\n3. $\\lim_{T\\to \\infty} h^{(1)}(T) = 1 \\tag{37}$\n\n4. The maximum slope controlled by the parameter `n`, with is calles sigmoidicity. To study the value of the slope and its correlation with the value of `n`, we divide numerator and denominator by $K^n$ and define the variable x as $x= T/K$. This way the normalized version of the Hill function is: \n\n$$\nh^{(1)}=\\frac{x^n}{1+x^n}\\tag{38}\n$$\n\nwe then calculate the slope by caluclating the value of the derivative of $h^{(1)}$ at the point $x=1$: \n\n$$\\begin{align}\n\\frac{\\mathrm{d} h^{(1)}}{\\mathrm{d} x}&=\\frac{n x^{n-1} (1+x^n)-x^n \\cdot n x^{-1}}{(1+x^n)^2} \\tag{39}\\\\\n\\frac{\\mathrm{d} h^{(1)}}{\\mathrm{d} x}&=\\frac{n x^{n-1} +x^n \\cdot n x^{n-1}-x^n \\cdot n x^{-1}}{(1+x^n)^2} \\tag{40}\\\\\n\\frac{\\mathrm{d} h^{(1)}}{\\mathrm{d} x}&=\\frac{n x^{n-1} +x^n \\cdot n x^{n-1}-x^n \\cdot n x^{-1}}{(1+x^n)^2} \\tag{41}\\\\\n\\frac{\\mathrm{d} h^{(1)}}{\\mathrm{d} x}&=\\frac{n x^{n-1} }{(1+x^n)^2} \\tag{42}\\\\\n\\frac{\\mathrm{d} h^{(1)}}{\\mathrm{d} x}\\big|_{x=1}&=\\frac{n 1^{n-1} }{(1+1^n)^2} \\tag{43}\\\\\n\\frac{\\mathrm{d} h^{(1)}}{\\mathrm{d} x}\\big|_{x=1}&=\\frac{n}{4} \\tag{44}\n\\end{align}$$\n\n\n\n\n```julia\nn=5\n\u03d5=T_vector.^n./(K.^n.+T_vector.^n)\nP1=plot(T_vector,\u03d5,lab=\"Hill Function activator\")\n\nn=1000\n\u03d5=T_vector.^n./(K.^n.+T_vector.^n)\nP1=plot!(T_vector,\u03d5,lab=\"Step Function activator\")\n\ntitle!(\"Hill Function vs. 
Step function Activator\")\nxlabel!(\"Concentration of Transcription Factor\")\nylabel!(\"fraction of active promoters\")\n```\n\n\n\n\n \n\n \n\n\n\nFor repressors, \n\n$$ [D_{Active}] + n[T] \\overset{k_1}{\\underset{k_2}{\\longleftrightarrow}} [D_{Inactive}] \\tag{45}$$\n\nthe equilibrium is now\n\n$$[D_{Active}] \\cdot [T]^n= \\frac{k_2}{k_1} [D_{Inactive}]= K_D \\cdot [D_{Inactive}] \\tag{46}$$\n and using the same conservation of of `D` we have\n \n $$\n\\begin{align}\n\\frac{[D_{Active}]}{[D]}&=\\frac{K_D [D_{Inactive}]}{[T]^n}\\cdot \\frac{1}{[D_{Active}]+[D_{Inactive}]} \\tag{47}\\\\\n\\\\\n\\frac{[D_{Active}]}{[D]}&=\\frac{K_D }{[T]^n \\frac{[D_{Inactive}]+[D_{Active}]}{[D_{Inactive}]}} \\tag{48}\\\\\n\\\\\n\\frac{[D_{Active}]}{[D]}&=\\frac{K_D }{[T]^n (\\frac{[D_{Active}]}{[D_{Inactive}]}+1)} \\tag{49}\\\\\n\\\\\n\\frac{[D_{Active}]}{[D]}&=\\frac{K_D }{\\frac{[T]^n \\cdot [D_{Active}]}{[D_{Inactive}]}+[T]^n} \\tag{50}\\\\\n\\end{align}$$\n\nand using again the relation in Eq 24, we obtain\n\n$$\\frac{[D_{Active}]}{[D]}=\\frac{[K_D}{K_D+[T]^n} \\tag{51}$$\n\nif we rewrite again the equilibrium constant $K_D$ as a new constant to the power of `n`, we obatin:\n\n$$h^{(2)}=\\frac{[D_{Active}]}{[D]}=\\frac{K^n}{K^n+[T]^n} \\tag{52}$$\n\nThis equation $h^{(1)}$ is now the `Hill function` for repressor molecules. It satisfies the following properties:\n\n1. $h^{(2)}(0)=1 \\tag{53}$\n\n2. $h^{(2)}(K)=\\frac{D}{2} \\tag{54}$\n\n3. $\\lim_{T\\to \\infty} h^{(2)}(T) = 0 \\tag{55}$\n\n4. The maximum slope controlled by the parameter `n`, with is calles sigmoidicity. To study the value of the slope and its correlation with the value of `n`, we divide numerator and denominator by $K^n$ and define the variable x as $x= T/K$. This way the normalized version of the Hill function is: \n\n$$\nh^{(2)}=\\frac{1}{1+x^n}\\tag{56}\n$$\n\nwe then calculate the slope by caluclating the value of the derivative of $h^{(2)}$ at the point $x=1$: \n\n\n$$\\begin{align}\n\\frac{\\mathrm{d} h^{(2)}}{\\mathrm{d} x}&=\\frac{0 \\cdot (1+x^n) - 1 \\cdot n x^{-1}}{(1+x^n)^2} \\tag{57}\\\\\n\\frac{\\mathrm{d} h^{(2)}}{\\mathrm{d} x}&=- \\frac{ n x^{-1}}{(1+x^n)^2} \\tag{58}\\\\\n\\frac{\\mathrm{d} h^{(2)}}{\\mathrm{d} x}\\big|_{x=1}&=- \\frac{n 1^{n-1} }{(1+1^n)^2} \\tag{59}\\\\\n\\frac{\\mathrm{d} h^{(2)}}{\\mathrm{d} x}\\big|_{x=1}&=-\\frac{n}{4} \\tag{60}\n\\end{align}$$\n\n\n\n\n```julia\nn=5\n\u03d5=K.^n./(K.^n.+T_vector.^n)\nP2=plot(T_vector,\u03d5,lab=\"Hill Function repressor\")\n\nn=1000\n\u03d5=K.^n./(K.^n.+T_vector.^n)\nP2=plot!(T_vector,\u03d5,lab=\"step Function repressor\")\ntitle!(\"Hill Function vs. Step function repressor\")\nxlabel!(\"Concentration of Transcription factor\")\nylabel!(\"fraction of active promoters\")\n```\n\n\n\n\n \n\n \n\n\n\nThe Hill functions as activator or repressor are a more realistic approach to introduce the effect of a transcription factor into the Central Dogma Model. This way, the dynamics of production of `mRNA` for activator is like this: \n\n$$\n\\frac{\\mathrm{d} [M]}{\\mathrm{d} t}= \\alpha_0 + k_M \\frac{[T^n][D]}{K^n+[T^n]}-\\gamma_M[M]= \\alpha_0 +\\alpha_M \\frac{[T^n]}{K^n+[T^n]}-\\gamma_M[M] \\tag{61}\n$$\nand for repression, we have\n\n$$\n\\frac{\\mathrm{d} [M]}{\\mathrm{d} t}= \\alpha_0 + k_M \\frac{K^n[D]}{K^n+[T^n]}-\\gamma_M[M]= \\alpha_0 + \\alpha_M \\frac{K^n}{K^n+[T^n]}-\\gamma_M[M] \\tag{62}\n$$\n\nwhere we defined $\\alpha_M= k_M [D]$ as the maximum transcription rate of mRNA `M`. 
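\n\nThe two slope results above ($+n/4$ for the activator form and $-n/4$ for the repressor form, evaluated at $x = T/K = 1$) are easy to verify numerically. The short sketch below is a standalone check written in Python/NumPy rather than in the Julia used by this notebook; all names in it are chosen here purely for illustration, and the derivative is estimated with a central finite difference.\n\n\n```python\nimport numpy as np\n\ndef hill_act(x, n):\n    # Normalized activating Hill function, h = x^n / (1 + x^n)\n    return x**n / (1.0 + x**n)\n\ndef hill_rep(x, n):\n    # Normalized repressing Hill function, h = 1 / (1 + x^n)\n    return 1.0 / (1.0 + x**n)\n\neps = 1e-6\nfor n in (1, 2, 4, 8):\n    slope_act = (hill_act(1 + eps, n) - hill_act(1 - eps, n)) / (2 * eps)\n    slope_rep = (hill_rep(1 + eps, n) - hill_rep(1 - eps, n)) / (2 * eps)\n    print(f\"n={n}: activator slope {slope_act:.4f} (expected {n/4:.2f}), \"\n          f\"repressor slope {slope_rep:.4f} (expected {-n/4:.2f})\")\n```\n\nAgreement between the finite-difference estimates and $\\pm n/4$ confirms that the sigmoidicity parameter $n$ directly sets the maximal steepness of the regulation, which is the property exploited in the rest of this section.\n\n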
The set of interactions is then generalized as:\n\n$$\n\\begin{align}\n\\emptyset &\\overset{\\alpha_0}{\\longrightarrow} M \\tag{63}\\\\ \n\\Psi (T,K) &\\overset{\u03b1_M}{\\longrightarrow} M \\tag{64}\\\\ \nM &\\overset{\\gamma_M}{\\longrightarrow} \\emptyset \\tag{65}\n\\end{align}\n$$\n\nillustrating that the production of `M` is defined by a function of the amount of transcription factor `T` and the constant of the Hill funcion `K` that corresponds to the half maximal concetration of transcripton factor. This function $\\Psi$ can refer to the Hill function for activator or for repressor. The analytical solution can be dertived similarly as before, and the resulting equation is:\n\n$$\\begin{align*}\n [M(t)] &= [M(0)] \\cdot e^{-\\gamma_M \\cdot t} + \\frac{\\alpha_0 + \\alpha_M \\frac{[T^n]}{K^n+[T^n]}}{\\gamma_M} (1 - e^{-\\gamma_M \\cdot t})\\tag{66}\\\\\n [M(t)] &= [M(0)] \\cdot e^{-\\gamma_M \\cdot t} + \\frac{\\alpha_0 + \\alpha_M \\frac{[K^n]}{K^n+[T^n]}}{\\gamma_M} (1 - e^{-\\gamma_M \\cdot t})\\tag{67}\n \\end{align*}$$ \n \n for activators and repressors respectively:\n\n\n```julia\n\u03b1_M=k_M*D\nK=1.\nn=2.\nT=0.5\n\u03d5=T.^n./(K.^n.+T.^n)\nP1=plot(t,t-> M\u2080*exp(-\u03b3_M*t)+(\u03b1_0 + \u03b1_M * \u03d5)/\u03b3_M*(1-exp(-\u03b3_M*t)),label=\"\\\\ T=0.5\",seriestype=:line,ylims = (0,1.5))\nT=1\n\u03d5=T.^n./(K.^n.+T.^n)\nplot!(t,t-> M\u2080*exp(-\u03b3_M*t)+(\u03b1_0 + \u03b1_M * \u03d5)/\u03b3_M*(1-exp(-\u03b3_M*t)),label=\"\\\\ T=1\",seriestype=:line,ylims = (0,1.5))\nT=2\n\u03d5=T.^n./(K.^n.+T.^n)\nplot!(t,t-> M\u2080*exp(-\u03b3_M*t)+(\u03b1_0 + \u03b1_M * \u03d5)/\u03b3_M*(1-exp(-\u03b3_M*t)),label=\"\\\\ T=2\",seriestype=:line,ylims = (0,1.5))\nT=10\n\u03d5=T.^n./(K.^n.+T.^n)\nplot!(t,t-> M\u2080*exp(-\u03b3_M*t)+(\u03b1_0 + \u03b1_M * \u03d5)/\u03b3_M*(1-exp(-\u03b3_M*t)),label=\"\\\\ T=10\",seriestype=:line,ylims = (0,1.5))\nT=50\n\u03d5=T.^n./(K.^n.+T.^n)\nplot!(t,t-> M\u2080*exp(-\u03b3_M*t)+(\u03b1_0 + \u03b1_M * \u03d5)/\u03b3_M*(1-exp(-\u03b3_M*t)),label=\"\\\\ T=50\",seriestype=:line,ylims = (0,1.5))\n\ntitle!(\"Central Dogma Activator Hill\")\nxaxis!(\"Time\")\nyaxis!(\"Concentration\")\n\nT=0.5\n\u03d5=K.^n./(K.^n.+T.^n)\nP2=plot(t,t-> M\u2080*exp(-\u03b3_M*t)+(\u03b1_0 + \u03b1_M * \u03d5)/\u03b3_M*(1-exp(-\u03b3_M*t)),label=\"\\\\ T=0.5\",seriestype=:line,ylims = (0,1.5))\nT=1\n\u03d5=K.^n./(K.^n.+T.^n)\nplot!(t,t-> M\u2080*exp(-\u03b3_M*t)+(\u03b1_0 + \u03b1_M * \u03d5)/\u03b3_M*(1-exp(-\u03b3_M*t)),label=\"\\\\ T=1\",seriestype=:line,ylims = (0,1.5))\nT=2\n\u03d5=K.^n./(K.^n.+T.^n)\nplot!(t,t-> M\u2080*exp(-\u03b3_M*t)+(\u03b1_0 + \u03b1_M * \u03d5)/\u03b3_M*(1-exp(-\u03b3_M*t)),label=\"\\\\ T=2\",seriestype=:line,ylims = (0,1.5))\nT=10\n\u03d5=K.^n./(K.^n.+T.^n)\nplot!(t,t-> M\u2080*exp(-\u03b3_M*t)+(\u03b1_0 + \u03b1_M * \u03d5)/\u03b3_M*(1-exp(-\u03b3_M*t)),label=\"\\\\ T=10\",seriestype=:line,ylims = (0,1.5))\nT=50\n\u03d5=K.^n./(K.^n.+T.^n)\nplot!(t,t-> M\u2080*exp(-\u03b3_M*t)+(\u03b1_0 + \u03b1_M * \u03d5)/\u03b3_M*(1-exp(-\u03b3_M*t)),label=\"\\\\ T=50\",seriestype=:line,ylims = (0,1.5))\ntitle!(\"Central Dogma Repressor Hill\")\nxaxis!(\"Time\")\nyaxis!(\"Concentration\")\n\n\nplot(P1,P2,layout=(1,2),legend=true,size = (800, 500))\n```\n\n\n\n\n \n\n \n\n\n\nThe Hill function combines the fact that one mRNA can produce may proteins, but also the dependence on the amount of trancription factor `T`. It also adds a saturation term for high concentrations of trancription factor, compared to the value of the Hill constant `K`. 
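\n\nBefore moving on to the protein dynamics, it is useful to sanity-check the closed-form solution in Eq. 66 against a direct numerical integration of Eq. 61. The sketch below does this in Python with SciPy rather than in the Julia used elsewhere in this notebook; it is only an independent cross-check, and the parameter values are arbitrary choices made for the test.\n\n\n```python\nimport numpy as np\nfrom scipy.integrate import solve_ivp\n\n# Arbitrary test parameters (chosen only for this check)\nalpha_0, alpha_M, gamma_M = 0.1, 1.0, 1.0\nK, n, T = 1.0, 2.0, 2.0\nM0 = 0.2\n\nhill = T**n / (K**n + T**n)      # activating Hill function\nW = alpha_0 + alpha_M * hill     # effective transcription rate\n\ndef dMdt(t, M):\n    # Right-hand side of Eq. 61 with [D] absorbed into alpha_M\n    return W - gamma_M * M\n\nt_eval = np.linspace(0.0, 10.0, 101)\nnumeric = solve_ivp(dMdt, (0.0, 10.0), [M0], t_eval=t_eval).y[0]\n\n# Closed-form solution, Eq. 66\nanalytic = M0 * np.exp(-gamma_M * t_eval) + (W / gamma_M) * (1 - np.exp(-gamma_M * t_eval))\n\nprint(\"max |numeric - analytic| =\", np.max(np.abs(numeric - analytic)))\n```\n\nThe maximum discrepancy should be at the level of the integrator tolerance; the repressor case can be checked the same way by replacing the Hill term with $K^n/(K^n+[T]^n)$.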
\n\n## Protein dynamics\n\nAfter analyzing the effect induced by the Hill function, the next step is to add the dynamics of the protein. The reactions that take place can be illustrated as:\n\n$$\n\\begin{align}\n\\emptyset &\\overset{\\alpha_0}{\\longrightarrow} M \\tag{68}\\\\ \n\\Psi (T,K) &\\overset{\u03b1_M}{\\longrightarrow} M \\tag{69}\\\\ \nM &\\overset{\\gamma_M}{\\longrightarrow} \\emptyset \\tag{70}\\\\\nM &\\overset{\\alpha_P}{\\longrightarrow} P \\tag{71}\\\\\nP &\\overset{\\gamma_P}{\\longrightarrow} \\emptyset \\tag{72}\n\\end{align}\n$$\n\n\nwhere `P` is the concentration of protein, $\\alpha_P$ and $\\gamma_P$ correspond to syntehsis and degradation of `P`. Explicitely, for activators, the set of differential equations is:\n\n\n$$\\begin{align*}\n [M(t)] &= [M(0)] \\cdot e^{-\\gamma_M \\cdot t} + \\frac{\\alpha_0 + \\alpha_M \\frac{[T^n]}{K^n+[T^n]}}{\\gamma_M} (1 - e^{-\\gamma_M \\cdot t})\\tag{73}\\\\\n \\frac{\\mathrm{d} [P]}{\\mathrm{d} t} &= \\alpha_P [M]-\\gamma_P[P] \\tag{74}\n \\end{align*}$$ \n\nand for repressors: \n\n\n$$\\begin{align*}\n [M(t)] &= [M(0)] \\cdot e^{-\\gamma_M \\cdot t} + \\frac{\\alpha_0 + \\alpha_M \\frac{[K^n]}{K^n+[T^n]}}{\\gamma_M} (1 - e^{-\\gamma_M \\cdot t})\\tag{75}\\\\\n \\frac{\\mathrm{d} [P]}{\\mathrm{d} t} &= \\alpha_P [M]-\\gamma_P[P] \\tag{76}\n \\end{align*}$$ \n\n\nTo solve analytically, we will proceed with separation of variables in eq. 76:\n\n$$\\begin{align*}\n \\frac{\\mathrm{d} [P]}{\\mathrm{d} t} + \\gamma_P[P]&= \\alpha_P [M] \\tag{77}\n \\end{align*}$$ \n \n We need to calculate the integrating factor, $e^{ \\int p(x)dx }$, which in this case is $e^{ \\int k_2dt }=e^{\\gamma_P \\cdot t}$. We then multiply both terms in the previous equation by the integrating factor. In this case is simply \n $$\\begin{align*}\n \\frac{\\mathrm{d} [P]}{\\mathrm{d} t} e^{\\gamma_P \\cdot t} + \\gamma_P[P] e^{\\gamma_P \\cdot t}&= \\alpha_P [M] e^{\\gamma_P \\cdot t}\\tag{78}\n \\end{align*}$$ \n \n the first term of the equation is simply\n\n $$\\begin{align*}\n \\frac{\\mathrm{d} ([P]e^{\\gamma_P \\cdot t})}{\\mathrm{d} t} &= \\alpha_P [M] e^{\\gamma_P \\cdot t}\\tag{78}\n \\end{align*}$$\n \n now we substitute the solution for `M(t)` in Eq 66 and 67 into Eq 78:\n $$\\begin{align*}\n \\frac{\\mathrm{d} ([P]e^{\\gamma_P \\cdot t})}{\\mathrm{d} t} &= \\alpha_P \\big[ M(0) \\cdot e^{-\\gamma_M \\cdot t} + \\frac{W}{\\gamma_M} (1 - e^{-\\gamma_M \\cdot t}) e^{\\gamma_P \\cdot t}\\big] \\tag{78}\n \\end{align*}$$\n \n where variable W is simply \n $$W=\\alpha_0 + \\alpha_M \\cdot \\Psi(T,K)$$\n \n This function $\\Psi(T,K)$ can refer to the Hill function for activator or for repressor. 
Now we solve the differential equation:\n \n $$\\begin{align*}\n \\int \\mathrm{d} ([P]e^{\\gamma_P \\cdot t}) &= \\alpha_P \\int e^{\\gamma_P \\cdot t} \\big[M(0) \\cdot e^{-\\gamma_M \\cdot t} dt + \\int \\frac{W}{\\gamma_M} (1 - e^{-\\gamma_M \\cdot t}) dt \\big]\\tag{78}\n \\end{align*}$$\n \nWe reorganize the second term:\n \n $$\\begin{align*}\n \\int \\mathrm{d} ([P]e^{\\gamma_P \\cdot t}) &= \\alpha_P \\big[\\int e^{\\gamma_P \\cdot t} M(0) \\cdot e^{-\\gamma_M \\cdot t} dt + \\int \\frac{W}{\\gamma_M} e^{\\gamma_P \\cdot t} dt - \\int e^{\\gamma_P \\cdot t} e^{-\\gamma_M \\cdot t} dt \\big]\\tag{78}\\\\\n \\int \\mathrm{d} ([P]e^{\\gamma_P \\cdot t}) &= \\alpha_P \\big[M(0) \\int e^{(\\gamma_P-\\gamma_M) t} dt + \\frac{W}{\\gamma_M} \\int e^{\\gamma_P \\cdot t} dt - \\frac{W}{\\gamma_M} \\int e^{(\\gamma_P-\\gamma_M) t} dt \\big]\\tag{79}\n \\end{align*}$$\n \n We solve the integral\n \n $$\\begin{align*}\n P(t)e^{\\gamma_P \\cdot t} &= \\alpha_P \\big[ \\frac{e^{(\\gamma_P-\\gamma_M)\\cdot t}M(0)}{\\gamma_P-\\gamma_M} + \\frac{W}{\\gamma_M} (\\frac{e^{\\gamma_P \\cdot t}}{\\gamma_P}- \\frac{e^{(\\gamma_P-\\gamma_M)\\cdot t}}{\\gamma_P-\\gamma_M}) \\big] +C \\tag{80}\\\\\n P(t) &= \\alpha_P \\big[ \\frac{e^{-\\gamma_M\\cdot t}M(0)}{\\gamma_P-\\gamma_M} + \\frac{W}{\\gamma_M} (\\frac{1}{\\gamma_P}- \\frac{e^{-\\gamma_M\\cdot t}}{\\gamma_P-\\gamma_M}) \\big] +C \\cdot e^{-\\gamma_P \\cdot t} \\tag{81}\n \\end{align*}$$\n \n Now we calculate the integration constant using the initial condition for the protein $P(t=0)=P[0]$:\n $$\\begin{align*}\n P(0) &= \\alpha_P \\big[ \\frac{M(0)}{\\gamma_P-\\gamma_M} + \\frac{W}{\\gamma_M} (\\frac{1}{\\gamma_P}- \\frac{1}{\\gamma_P-\\gamma_M}) \\big] +C \\tag{81}\n \\end{align*}$$\n which rearranging terms becomes:\n $$\\begin{align*}\n P(0) &= \\alpha_P \\big[ \\frac{M(0)}{\\gamma_P-\\gamma_M} + \\frac{W}{\\gamma_M} (\\frac{\\gamma_P-\\gamma_M-\\gamma_P}{\\gamma_P(\\gamma_P-\\gamma_M)}) \\big] +C \\tag{82}\\\\\n P(0) &= \\alpha_P \\big[ \\frac{M(0)}{\\gamma_P-\\gamma_M} + \\frac{W}{\\gamma_M} (\\frac{-\\gamma_M}{\\gamma_P(\\gamma_P-\\gamma_M)}) \\big] +C \\tag{83}\\\\ \n P(0) &= \\alpha_P \\big[ \\frac{M(0)}{\\gamma_P-\\gamma_M} - \\frac{W}{\\gamma_P(\\gamma_P-\\gamma_M)} \\big] +C \\tag{84}\\\\ \n \\end{align*}$$\n \n so\n $$\\begin{align*}\n C &= P(0)- \\frac{ \\alpha_P M(0)}{\\gamma_P-\\gamma_M} + \\frac{\\alpha_P W}{\\gamma_P(\\gamma_P-\\gamma_M)} \\tag{85}\\\\\n C &= P(0)+ \\frac{\\alpha_P}{\\gamma_P-\\gamma_M} \\big[ \\frac{W}{\\gamma_P} - M(0)\\big] \\tag{85}\\\\\n \\end{align*}$$\n \nFinally, the set of equations is:\n \n $$\\begin{align*}\n [M(t)] &= [M(0)] \\cdot e^{-\\gamma_M \\cdot t} + \\frac{W}{\\gamma_M} (1 - e^{-\\gamma_M \\cdot t})\\tag{86}\\\\\n P(t) &= \\alpha_P \\big[ \\frac{e^{-\\gamma_M\\cdot t}M(0)}{\\gamma_P-\\gamma_M} + \\frac{W}{\\gamma_M} (\\frac{1}{\\gamma_P}- \\frac{e^{-\\gamma_M\\cdot t}}{\\gamma_P-\\gamma_M}) \\big] + \\big[P(0)+ \\frac{\\alpha_P}{\\gamma_P-\\gamma_M} \\big[ \\frac{W}{\\gamma_P} - M(0)\\big] \\big]\\cdot e^{-\\gamma_P \\cdot t} \\tag{87}\n \\end{align*}$$\n\n\n\n```julia\n\u03b1_P=0.6*2\n\u03b1_M=0.6\n\u03b3_M=1\n\u03b3_P=0.5\nT=1.0*0.5\nK=2\n\u03b1_0=0.3*0\nM\u2080=0.4*2\nP\u2080=0.3\nn=2\nt=collect(0:0.1:10);\n\u03d5=T.^n./(K.^n.+T.^n)\n\u03b1_M=k_M*D\nW=\u03b1_0 + \u03b1_M * \u03d5\nC=P\u2080 + \u03b1_P / (\u03b3_P-\u03b3_M) * ((W/\u03b3_P)-M\u2080)\nP1=plot(t,t-> M\u2080*exp(-\u03b3_M*t)+W/\u03b3_M*(1-exp(-\u03b3_M*t)),label=\"\\\\ M\",seriestype=:line,ylims = (0,1))\nplot!(t,t-> \u03b1_P * 
((M\u2080*exp(-\u03b3_M*t)/(\u03b3_P-\u03b3_M))+W/\u03b3_M*((1/\u03b3_P)-(exp(-\u03b3_M*t)/(\u03b3_P-\u03b3_M))))+C*exp(-\u03b3_P*t),label=\"\\\\ P\",seriestype=:line)\n\ntitle!(\"Central Dogma Activator Hill\")\nxaxis!(\"Time\")\nyaxis!(\"Concentration\")\n\n\u03d5=K.^n./(K.^n.+T.^n)\n\u03b1_M=k_M*D\nW=\u03b1_0 + \u03b1_M * \u03d5\nC=P\u2080 + \u03b1_P / (\u03b3_P-\u03b3_M) * ((W/\u03b3_P)-M\u2080)\nP2=plot(t,t-> M\u2080*exp(-\u03b3_M*t)+W/\u03b3_M*(1-exp(-\u03b3_M*t)),label=\"\\\\ M\",seriestype=:line,ylims = (0,1))\nplot!(t,t-> \u03b1_P * ((M\u2080*exp(-\u03b3_M*t)/(\u03b3_P-\u03b3_M))+W/\u03b3_M*((1/\u03b3_P)-(exp(-\u03b3_M*t)/(\u03b3_P-\u03b3_M))))+C*exp(-\u03b3_P*t),label=\"\\\\ P\",seriestype=:line)\n\n\ntitle!(\"Central Dogma Repressor Hill\")\nxaxis!(\"Time\")\nyaxis!(\"Concentration\")\n\n\n\nplot(P1,P2,layout=(1,2),legend=true,size = (800, 500))\n\n```\n\n\n\n\n \n\n \n\n\n\n\n```julia\nT=0.2\n\u03d5=T.^n./(K.^n.+T.^n)\nW=\u03b1_0 + \u03b1_M * \u03d5\nC=P\u2080 + \u03b1_P / (\u03b3_P-\u03b3_M) * ((W/\u03b3_P)-M\u2080)\nP1=plot(t,t-> \u03b1_P * ((M\u2080*exp(-\u03b3_M*t)/(\u03b3_P-\u03b3_M))+W/\u03b3_M*((1/\u03b3_P)-(exp(-\u03b3_M*t)/(\u03b3_P-\u03b3_M))))+C*exp(-\u03b3_P*t),label=\"\\\\ T=0.2\",seriestype=:line)\nT=1\n\u03d5=T.^n./(K.^n.+T.^n)\nW=\u03b1_0 + \u03b1_M * \u03d5\nC=P\u2080 + \u03b1_P / (\u03b3_P-\u03b3_M) * ((W/\u03b3_P)-M\u2080)\nP1=plot!(t,t-> \u03b1_P * ((M\u2080*exp(-\u03b3_M*t)/(\u03b3_P-\u03b3_M))+W/\u03b3_M*((1/\u03b3_P)-(exp(-\u03b3_M*t)/(\u03b3_P-\u03b3_M))))+C*exp(-\u03b3_P*t),label=\"\\\\ T=1\",seriestype=:line)\nT=2\n\u03d5=T.^n./(K.^n.+T.^n)\nW=\u03b1_0 + \u03b1_M * \u03d5\nC=P\u2080 + \u03b1_P / (\u03b3_P-\u03b3_M) * ((W/\u03b3_P)-M\u2080)\nP1=plot!(t,t-> \u03b1_P * ((M\u2080*exp(-\u03b3_M*t)/(\u03b3_P-\u03b3_M))+W/\u03b3_M*((1/\u03b3_P)-(exp(-\u03b3_M*t)/(\u03b3_P-\u03b3_M))))+C*exp(-\u03b3_P*t),label=\"\\\\ T=2\",seriestype=:line)\nT=3\n\u03d5=T.^n./(K.^n.+T.^n)\nW=\u03b1_0 + \u03b1_M * \u03d5\nC=P\u2080 + \u03b1_P / (\u03b3_P-\u03b3_M) * ((W/\u03b3_P)-M\u2080)\nP1=plot!(t,t-> \u03b1_P * ((M\u2080*exp(-\u03b3_M*t)/(\u03b3_P-\u03b3_M))+W/\u03b3_M*((1/\u03b3_P)-(exp(-\u03b3_M*t)/(\u03b3_P-\u03b3_M))))+C*exp(-\u03b3_P*t),label=\"\\\\ T=3\",seriestype=:line)\n\ntitle!(\"Central Dogma Activator Hill\")\nxaxis!(\"Time\")\nyaxis!(\"Concentration\")\n\nT=0.2\n\u03d5=K.^n./(K.^n.+T.^n)\nW=\u03b1_0 + \u03b1_M * \u03d5\nC=P\u2080 + \u03b1_P / (\u03b3_P-\u03b3_M) * ((W/\u03b3_P)-M\u2080)\nP2=plot(t,t-> \u03b1_P * ((M\u2080*exp(-\u03b3_M*t)/(\u03b3_P-\u03b3_M))+W/\u03b3_M*((1/\u03b3_P)-(exp(-\u03b3_M*t)/(\u03b3_P-\u03b3_M))))+C*exp(-\u03b3_P*t),label=\"\\\\ T=0.2\",seriestype=:line)\nT=1\n\u03d5=K.^n./(K.^n.+T.^n)\nW=\u03b1_0 + \u03b1_M * \u03d5\nC=P\u2080 + \u03b1_P / (\u03b3_P-\u03b3_M) * ((W/\u03b3_P)-M\u2080)\nP2=plot!(t,t-> \u03b1_P * ((M\u2080*exp(-\u03b3_M*t)/(\u03b3_P-\u03b3_M))+W/\u03b3_M*((1/\u03b3_P)-(exp(-\u03b3_M*t)/(\u03b3_P-\u03b3_M))))+C*exp(-\u03b3_P*t),label=\"\\\\ T=1\",seriestype=:line)\nT=2\n\u03d5=K.^n./(K.^n.+T.^n)\nW=\u03b1_0 + \u03b1_M * \u03d5\nC=P\u2080 + \u03b1_P / (\u03b3_P-\u03b3_M) * ((W/\u03b3_P)-M\u2080)\nP2=plot!(t,t-> \u03b1_P * ((M\u2080*exp(-\u03b3_M*t)/(\u03b3_P-\u03b3_M))+W/\u03b3_M*((1/\u03b3_P)-(exp(-\u03b3_M*t)/(\u03b3_P-\u03b3_M))))+C*exp(-\u03b3_P*t),label=\"\\\\ T=2\",seriestype=:line)\nT=3\n\u03d5=K.^n./(K.^n.+T.^n)\nW=\u03b1_0 + \u03b1_M * \u03d5\nC=P\u2080 + \u03b1_P / (\u03b3_P-\u03b3_M) * ((W/\u03b3_P)-M\u2080)\nP2=plot!(t,t-> \u03b1_P * 
((M\u2080*exp(-\u03b3_M*t)/(\u03b3_P-\u03b3_M))+W/\u03b3_M*((1/\u03b3_P)-(exp(-\u03b3_M*t)/(\u03b3_P-\u03b3_M))))+C*exp(-\u03b3_P*t),label=\"\\\\ T=3\",seriestype=:line)\n\ntitle!(\"Central Dogma Activator Hill\")\nxaxis!(\"Time\")\nyaxis!(\"Concentration\")\n\nplot(P1,P2,layout=(1,2),legend=true,size = (800, 500))\n```\n\n\n\n\n \n\n \n\n\n\n for the special case that no initial protein or mRNA:\n \n $$\\begin{align*}\n [M(t)] &= \\frac{W}{\\gamma_M} (1 - e^{-\\gamma_M \\cdot t})\\tag{88}\\\\\n P(t) &= \\alpha_P \\big[\\frac{W}{\\gamma_M} (\\frac{\\gamma_P-\\gamma_M -\\gamma_P e^{-\\gamma_M\\cdot t}}{\\gamma_P(\\gamma_P-\\gamma_M)}) \\big] + \\big[ \\frac{\\alpha_P W}{(\\gamma_P-\\gamma_M)\\gamma_P} \\big]\\cdot e^{-\\gamma_P \\cdot t} \\tag{89}\\\\\n P(t) &= \\frac{\\alpha_P W}{\\gamma_P(\\gamma_P-\\gamma_M)} \\big[\\frac{ \\gamma_P-\\gamma_M -\\gamma_P e^{-\\gamma_M\\cdot t}}{\\gamma_M} + e^{-\\gamma_P \\cdot t}\\big]\\cdot \\tag{90}\\\\\n P(t) &= \\frac{\\alpha_P W}{\\gamma_P(\\gamma_P-\\gamma_M)} \\big[\\frac{ \\gamma_P(1- e^{-\\gamma_M\\cdot t})-\\gamma_M}{\\gamma_M} + e^{-\\gamma_P \\cdot t}\\big] \\tag{91}\\\\\n P(t) &= \\frac{\\alpha_P W}{\\gamma_P-\\gamma_M} \\big[\\frac{ 1- e^{-\\gamma_M\\cdot t}}{\\gamma_M} + \\frac{e^{-\\gamma_P \\cdot t} -1}{\\gamma_P}\\big] \\tag{92}\\\\\n P(t) &= \\frac{\\alpha_P W}{\\gamma_P-\\gamma_M} \\big[\\frac{ 1- e^{-\\gamma_M\\cdot t}}{\\gamma_M} - \\frac{1- e^{-\\gamma_P \\cdot t}}{\\gamma_P}\\big] \\tag{93}\n \\end{align*}$$\n \n\n\n```julia\n\u03b1_P=0.6\n\u03b1_M=0.6\n\u03b3_M=3\n\u03b3_P=0.5\nT=1.\nK=2\n\u03b1_0=0.\nM\u2080=0.0\nP\u2080=0.\nn=2\nt=collect(0:0.1:10);\n\u03d5=T.^n./(K.^n.+T.^n)\n\u03b1_M=k_M*D\nW=\u03b1_0 + \u03b1_M * \u03d5\nC=P\u2080 + \u03b1_P / (\u03b3_P-\u03b3_M) * ((W/\u03b3_P)-M\u2080)\nP1=plot(t,t-> W/\u03b3_M*(1-exp(-\u03b3_M*t)),label=\"\\\\ M\",seriestype=:line,ylims = (0,0.5))\nplot!(t,t-> \u03b1_P * W/(\u03b3_P-\u03b3_M)*(((1-exp(-\u03b3_M*t))/\u03b3_M)-((1-exp(-\u03b3_P*t))/\u03b3_P)),label=\"\\\\ P\",seriestype=:line)\n\ntitle!(\"Central Dogma Activator Hill\")\nxaxis!(\"Time\")\nyaxis!(\"Concentration\")\n\n\u03d5=K.^n./(K.^n.+T.^n)\n\u03b1_M=k_M*D\nW=\u03b1_0 + \u03b1_M * \u03d5\nC=P\u2080 + \u03b1_P / (\u03b3_P-\u03b3_M) * ((W/\u03b3_P)-M\u2080)\nP2=plot(t,t-> W/\u03b3_M*(1-exp(-\u03b3_M*t)),label=\"\\\\ M\",seriestype=:line,ylims = (0,0.5))\nplot!(t,t-> \u03b1_P * W/(\u03b3_P-\u03b3_M)*(((1-exp(-\u03b3_M*t))/\u03b3_M)-((1-exp(-\u03b3_P*t))/\u03b3_P)),label=\"\\\\ P\",seriestype=:line)\n\ntitle!(\"Central Dogma Repressor Hill\")\nxaxis!(\"Time\")\nyaxis!(\"Concentration\")\n\nplot(P1,P2,layout=(1,2),legend=true,size = (800, 500))\n```\n\n\n\n\n \n\n \n\n\n\nHaving these two analytical solutions, one can ask many different questions, such as: \n - How the amount protein changes with the cooperativity ? 
\n - How the amount of protein changes with the Hill constant?\n\nsince $\u03b3_M >> \u03b3_P$, quite often, people tend to simplify the term that accounts for mRNA degradation \n\n\n```julia\n\u03b3_M=100\n\u03b3_P=1\nplot(t,t-> (1-exp(-\u03b3_M*t))/\u03b3_M,label=\"\\\\ mRNA term\",seriestype=:line,ylims = (0,3))\nplot!(t,t-> (1-exp(-\u03b3_P*t))/\u03b3_P,label=\"\\\\ Protein term\",seriestype=:line,ylims = (0,3))\n```\n\n\n\n\n \n\n \n\n\n\nTherefore, we can simplify the Eq 93 as:\n\n $$\\begin{align*}\n P(t) &= \\frac{\\alpha_P W}{-\\gamma_M} \\big[ - \\frac{1- e^{-\\gamma_P \\cdot t}}{\\gamma_P}\\big] \\tag{94}\\\\\n P(t) &= \\frac{\\alpha_P W}{\\gamma_M \\gamma_P} \\big[1- e^{-\\gamma_P \\cdot t}\\big] \\tag{95}\n \\end{align*}$$\n \nTherefore, the Central Dogma of Molecular Biology can be written in terms of two simple equations: \n\n$$\\begin{align*}\n M(t) &= \\frac{\\alpha_0 + \\alpha_M \\frac{[K^n]}{K^n+[T^n]}}{\\gamma_M} \\big[1 - e^{-\\gamma_M \\cdot t}\\big]\\tag{96}\\\\\n P(t) &= \\alpha_P \\frac{\\alpha_0 + \\alpha_M \\frac{[K^n]}{K^n+[T^n]}}{\\gamma_M \\gamma_P} \\big[1- e^{-\\gamma_P \\cdot t}\\big] \\tag{97}\n \\end{align*}$$\n\n\nTo finalize, the fact that the cell is growing while mRNA and Protein are being produced introduces an extra dilution factor $\\mu$ that reduces the concetration of `M`and `P`. If the volume is assumed to grow linearly,we can simply write this dilution factor as: \n\n$$\\begin{align*}\n\\frac{\\mathrm{d} [M]}{\\mathrm{d} t} &= k_M [D] [T]- \\gamma_M [M] - \\mu [M] = k_M [D] [T]- (\\gamma_M+\\mu) [M] \\tag{98}\\\\\n \\frac{\\mathrm{d} [P]}{\\mathrm{d} t} &= \\alpha_P [M]-\\gamma_P[P] - \\mu [P] = \\alpha_P [M]-(\\gamma_P +\\mu )[P] \\tag{99}\n \\end{align*}$$\n \n The final analyitical solutions can be obtained following the same strategy:\n\n$$\\begin{align*}\n M(t) &= \\frac{\\alpha_0 + \\alpha_M \\frac{[K^n]}{K^n+[T^n]}}{\\gamma_M+\\mu} \\big[1 - e^{-(\\gamma_M+\\mu) \\cdot t}\\big]\\tag{100}\\\\\n P(t) &= \\alpha_P \\frac{\\alpha_0 + \\alpha_M \\frac{[K^n]}{K^n+[T^n]}}{(\\gamma_M+\\mu) (\\gamma_P+\\mu)} \\big[1- e^{-(\\gamma_P+\\mu) \\cdot t}\\big] \\tag{101}\n \\end{align*}$$\n \n In real systems `mRNA` natural degradation is orders of magnitude larger that the effect of dilution. 
Therefore $\\gamma_M+\\mu \\approx \\gamma_M$, and the final equations are: \n \n $$\\begin{align*}\n M(t) &= \\frac{\\alpha_0 + \\alpha_M \\frac{[K^n]}{K^n+[T^n]}}{\\gamma_M} \\big[1 - e^{-\\gamma_M \\cdot t}\\big]\\tag{102}\\\\\n P(t) &= \\alpha_P \\frac{\\alpha_0 + \\alpha_M \\frac{[K^n]}{K^n+[T^n]}}{\\gamma_M (\\gamma_P+\\mu)} \\big[1- e^{-(\\gamma_P+\\mu) \\cdot t}\\big] \\tag{103}\n \\end{align*}$$\n\n\n\n```julia\n\u03b1_P=0.6\n\u03b1_M=0.6\n\u03b3_M=3\n\u03b3_P=0.5\n\u03bc=0.5\nT=1.\nK=2\n\u03b1_0=0.\nM\u2080=0.0\nP\u2080=0.\nn=2\nt=collect(0:0.1:10);\n\u03d5=T.^n./(K.^n.+T.^n)\n\u03b1_M=k_M*D\nW=\u03b1_0 + \u03b1_M * \u03d5\nC=P\u2080 + \u03b1_P / (\u03b3_P-\u03b3_M) * ((W/\u03b3_P)-M\u2080)\nP1=plot(t,t-> W/\u03b3_M*(1-exp(-\u03b3_M*t)),label=\"\\\\ M\",seriestype=:line,ylims = (0,0.5))\nplot!(t,t-> \u03b1_P * W/(\u03b3_M*(\u03b3_P+\u03bc))*(1-exp(-\u03b3_P*t)),label=\"\\\\ P\",seriestype=:line)\n\ntitle!(\"Central Dogma Activator Hill\")\nxaxis!(\"Time\")\nyaxis!(\"Concentration\")\n\n\u03d5=K.^n./(K.^n.+T.^n)\n\u03b1_M=k_M*D\nW=\u03b1_0 + \u03b1_M * \u03d5\nC=P\u2080 + \u03b1_P / (\u03b3_P-\u03b3_M) * ((W/\u03b3_P)-M\u2080)\nP2=plot(t,t-> W/\u03b3_M*(1-exp(-\u03b3_M*t)),label=\"\\\\ M\",seriestype=:line,ylims = (0,0.5))\nplot!(t,t-> \u03b1_P * W/(\u03b3_M*(\u03b3_P+\u03bc))*(1-exp(-\u03b3_P*t)),label=\"\\\\ P\",seriestype=:line)\n\ntitle!(\"Central Dogma Repressor Hill\")\nxaxis!(\"Time\")\nyaxis!(\"Concentration\")\n\nplot(P1,P2,layout=(1,2),legend=true,size = (800, 500))\n```\n\n\n\n\n \n\n \n\n\n\n## Numerical solution of the Central Dogma\n\nAnother way to solve this type of models when they become too complex is using nuymerical simulations\n\n\n```julia\nusing DifferentialEquations\nusing ParameterizedFunctions\ntspan = (0.0,10)\n```\n\n\n\n\n (0.0, 10)\n\n\n\n\n```julia\nCentralDogma4_DSL! = @ode_def ab begin\n dM = \u03b1_0-\u03b3_M*M+\u03b1_M*T^n/(K^n +T^n)\n dP = \u03b1_P * M - \u03b3_P * P\n end \u03b1_0 \u03b1_M \u03b1_P \u03b3_M \u03b3_P T K n\n\nCentralDogma5_DSL! = @ode_def ab begin\n dM = \u03b1_0-\u03b3_M*M+\u03b1_M*K^n/(K^n +T^n)\n dP = \u03b1_P * M - \u03b3_P * P\n end \u03b1_0 \u03b1_M \u03b1_P \u03b3_M \u03b3_P T K n\n```\n\n\n\n\n (::ab{var\"#89#93\",var\"#90#94\",var\"#91#95\",Nothing,Nothing,var\"#92#96\",Expr,Expr}) (generic function with 2 methods)\n\n\n\n\n```julia\np=[\u03b1_0,\u03b1_M,\u03b1_P,\u03b3_M,\u03b3_P,T,K,n];\nu\u2080 = [M\u2080,P\u2080]\nprob3 = ODEProblem(CentralDogma4_DSL!,u\u2080,tspan,p)\nprob4 = ODEProblem(CentralDogma5_DSL!,u\u2080,tspan,p)\n```\n\n\n\n\n \u001b[36mODEProblem\u001b[0m with uType \u001b[36mArray{Float64,1}\u001b[0m and tType \u001b[36mFloat64\u001b[0m. 
In-place: \u001b[36mtrue\u001b[0m\n timespan: (0.0, 10.0)\n u0: [0.0, 0.0]\n\n\n\n\n```julia\nsol3 = solve(prob3)\nsol4 = solve(prob4)\nP3=plot(sol3,label=[\"mRNA\",\"Protein\"],ylims = (0,0.5))\ntitle!(\"Central Dogma activator Hill function\")\nxlabel!(\"Time [s]\")\nylabel!(\"Concentration [nM]\")\nP4=plot(sol4,label=[\"mRNA\",\"Protein\"],ylims = (0,0.5))\ntitle!(\"Central Dogma repressor Hill function\")\nxlabel!(\"Time [s]\")\nylabel!(\"Concentration [nM]\")\nplot(P3,P4,layout=(1,2),legend=true,size = (800, 500))\n```\n\n\n\n\n \n\n \n\n\n\nwhich are equivalent to the dynamcis predicte in the analytical soloution\n\n\n```julia\nfunction CentralDogma_activator_parameters(\u03b1_0,\u03b1_M,\u03b1_P,\u03b3_M,\u03b3_P,T,K,n)\n p=[\u03b1_0,\u03b1_M,\u03b1_P,\u03b3_M,\u03b3_P,T,K,n];\n prob3 = ODEProblem(CentralDogma4_DSL!,u\u2080,tspan,p)\n sol3 = solve(prob3)\n x=(\"T = $(T)\")\n plot!(sol3,vars=(2),label=x)\nend\n\nfunction CentralDogma_repressor_parameters(\u03b1_0,\u03b1_M,\u03b1_P,\u03b3_M,\u03b3_P,T,K,n)\n p=[\u03b1_0,\u03b1_M,\u03b1_P,\u03b3_M,\u03b3_P,T,K,n];\n prob4 = ODEProblem(CentralDogma5_DSL!,u\u2080,tspan,p)\n sol4 = solve(prob4)\n x=(\"T = $(T)\")\n plot!(sol4,vars=(2),label=x)\nend\n```\n\n\n\n\n CentralDogma_repressor_parameters (generic function with 1 method)\n\n\n\n\n```julia\nplot()\nfor T in [0.2,1,2,3]\n CentralDogma_activator_parameters(\u03b1_0,\u03b1_M,\u03b1_P,\u03b3_M,\u03b3_P,T,K,n)\nend\ntitle!(\"Central Dogma activator different T concentrations\")\nxlabel!(\"Time [s]\")\nylabel!(\"Concentration [a.u]\")\n```\n\n\n\n\n \n\n \n\n\n\n\n```julia\nplot()\nfor T in [0.2,1,2,3]\n CentralDogma_repressor_parameters(\u03b1_0,\u03b1_M,\u03b1_P,\u03b3_M,\u03b3_P,T,K,n)\nend\ntitle!(\"Central Dogma Repressor different T concentrations\")\nxlabel!(\"Time [s]\")\nylabel!(\"Concentration [a.u]\")\n```\n\n\n\n\n \n\n \n\n\n\n## Conclusions\n\nIn conclusion, both the activator form and the repressor forms of the Hill function can be used to model numerically the transctiption and translation of mRNA into protein regulated by transcriptional activators and represors, respectively. The dynamics is similar, but the dependence on the amount of transcription factor `T` is reversed. \n\nThe Hill functions are widely used in Biochemistry, Physiology, pharmacology and Genetics. \n\nIt is physically unrealistic: \nall ligands have to bind simultaneously \nexact in conditions of extremely high cooperativity \nit is based on equilibrium considerations (Biology is far from equilibrium)\n\nSequential or independent binding are more realistic, but they represent often a small correction versus the highly simple Hill equation. 
\n\nTherefore, the simplicity of the Hill equation wins and it is widely used in many biological contexts (as we will see\u2026)\n\n", "meta": {"hexsha": "35dc1e682e042fa72ed2408324569eb3eaf58b95", "size": 602377, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "5_CentralDogma.ipynb", "max_stars_repo_name": "davidgmiguez/julia_notebooks", "max_stars_repo_head_hexsha": "b395fac8f73bf8d9d366d6354a561c722f37ce66", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "5_CentralDogma.ipynb", "max_issues_repo_name": "davidgmiguez/julia_notebooks", "max_issues_repo_head_hexsha": "b395fac8f73bf8d9d366d6354a561c722f37ce66", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "5_CentralDogma.ipynb", "max_forks_repo_name": "davidgmiguez/julia_notebooks", "max_forks_repo_head_hexsha": "b395fac8f73bf8d9d366d6354a561c722f37ce66", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 92.1347506883, "max_line_length": 534, "alphanum_fraction": 0.6217219449, "converted": true, "num_tokens": 13968, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9449947163538935, "lm_q2_score": 0.8991213745668094, "lm_q1q2_score": 0.8496649483264849}} {"text": "# 2D Structure Extraction (Hough Transform)\nIn this exercise, we will implement a Hough transform in order to detect parametric curves, such as lines or circles.\nIn the following, we shortly review the motivation for this technique.\n\nConsider the point $p=(\\mathtt{x},\\mathtt{y})$ and the equation for a line $y = mx+c$. 
What are the lines that could pass through $p$?\nThe answer is simple: all the lines for which $m$ and $c$ satisfy $\\mathtt{y} = m\\mathtt{x}+c$.\nRegarding $(\\mathtt{x},\\mathtt{y})$ as fixed, the last equation is that of a line in $(m,c)$-space.\nRepeating this reasoning, a second point $p'=(\\mathtt{x}',\\mathtt{y}')$ will also have an associated line in parameter space, and the two lines will intersect at the point $(\\tilde{m},\\tilde{c})$, which corresponds to the line connecting $p$ and $p'$.\n\nIn order to find lines in the input image, we can thus pursue the following approach.\nWe start with an empty accumulator array quantizing the parameter space for $m$ and $c$.\nFor each edge pixel in the input image, we then draw a line in the accumulator array and increment the corresponding cells.\nEdge pixels on the same line in the input image will produce intersecting lines in $(m,c)$-space and will thus reinforce the intersection point.\nMaxima in this array thus correspond to lines in the input image that many edge pixels agree on.\n\nIn practice, the parametrization in terms of $m$ and $c$ is problematic, since the slope $m$ may become infinite.\nInstead, we use the following parametrization in polar coordinates:\n\\begin{equation}\n\t\\mathtt{x}\\cos\\theta + \\mathtt{y}\\sin\\theta = \\rho \\label{eq:hough_line}\n\\end{equation}\nThis produces a sinusoidal curve in $(\\rho,\\theta)$-space, but otherwise the procedure is unchanged.\n\nThe following sub-questions will guide you through the steps of building a Hough transform.\n\n\n```python\n%matplotlib notebook\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport imageio\nimport cv2\n```\n\n## Part a\nBuild up an accumulator array ``acc`` for the parameter space $(\\rho, \\theta)$. $\\theta$ ranges from $-\\pi/2$ to $\\pi/2$, and $\\rho$ ranges from $-D$ to $D$, where $D$ denotes the length of the image diagonal.\nUse ``n_bins_rho`` and ``n_bins_theta`` as the number of bins in each direction.\nInitially, the array should be filled with zeros.\n\nFor each edge pixel in the input image, create the corresponding curve in $(\\rho, \\theta)$ space by evaluating above line equation for all values of $\\theta$ and increment the corresponding cells of the accumulator array.\n\n\n```python\ndef hough_transform(edge_image, n_bins_rho, n_bins_theta):\n # Vote accumulator\n votes = np.zeros((n_bins_rho, n_bins_theta), dtype=np.int) \n \n # Create bins\n diag = np.linalg.norm(edge_image.shape) # Length of image diagonal\n theta_bins = np.linspace(-np.pi / 2, np.pi / 2, n_bins_theta)\n rho_bins = np.linspace(-diag, diag, n_bins_rho)\n \n # YOUR CODE HERE\n raise NotImplementedError()\n return votes, rho_bins, theta_bins\n```\n\nTest the implementation on an example image. 
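One way to sanity-check `hough_transform` before running it on a real photo (a small sketch, assuming the function above has been filled in; the image size and bin counts here are arbitrary choices): a synthetic edge image containing a single straight line should produce one dominant peak in the accumulator.

```python
import numpy as np

# Synthetic edge image with one straight line along the main diagonal
test_edges = np.zeros((50, 50), dtype=np.uint8)
for i in range(5, 45):
    test_edges[i, i] = 255

# Run the transform and locate the strongest cell in the accumulator
votes, rho_bins, theta_bins = hough_transform(test_edges, n_bins_rho=100, n_bins_theta=100)
rho_idx, theta_idx = np.unravel_index(np.argmax(votes), votes.shape)
print("strongest line: rho = %.2f, theta = %.2f rad"
      % (rho_bins[rho_idx], theta_bins[theta_idx]))
```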
Visualize the resulting Hough space by displaying it as a 2D image.\n\n\n```python\ncolor_im = imageio.imread('gantrycrane.png')\ngray_im = cv2.cvtColor(color_im, cv2.COLOR_RGB2GRAY)\n\n# Get edges using Canny\nblurred = cv2.GaussianBlur(gray_im, None, sigmaX=2)\nedges = cv2.Canny(blurred, threshold1=30, threshold2=90) # 30, 90 are manually tunned\n\nhough_space, rho_bins, theta_bins = hough_transform(edges, n_bins_rho=300, n_bins_theta=300)\n\nfig, axes = plt.subplots(1, 3, figsize=(12, 4))\naxes[0].imshow(color_im)\naxes[0].set_title('Image')\naxes[1].imshow(edges, cmap='gray')\naxes[1].set_title('Edges')\naxes[2].imshow(hough_space)\naxes[2].set_title('Hough space')\naxes[2].set_xlabel('theta (index)')\naxes[2].set_ylabel('rho (index)')\nfig.tight_layout()\n```\n\n## Part b\nWrite a function ``nms2d`` which suppresses all points in the Hough space that are not local maxima.\nThis can be achieved by looking at the 8 direct neighbors of each pixel and keeping only pixels whose value is greater than all its neighbors.\nThis function is simpler than the non-maximum suppression from the Canny Edge Detector since it does not take into account local gradients.\n\n\n```python\ndef nms2d(hough_array):\n hough_array_out = np.zeros_like(hough_array)\n \n # YOUR CODE HERE\n raise NotImplementedError()\n \n return hough_array_out\n```\n\nWrite a function ``find_hough_peaks`` that takes the result of ``hough_transform`` as an argument, finds the extrema in Hough space using ``nms2d`` and returns the index of all points $(\\rho_i, \\theta_i)$ for which the corresponding Hough value is greater than ``threshold``.\n\n\n```python\ndef find_hough_peaks(hough_space, threshold):\n # YOUR CODE HERE\n raise NotImplementedError()\n```\n\n\n```python\ndef plot_hough_lines(image, rho, theta):\n # compute start and ending point of the line x*cos(theta)+y*sin(theta)=rho\n x0, x1 = 0, image.shape[1] - 1\n y0 = rho / np.sin(theta)\n y1 = (rho - x1 * np.cos(theta)) / np.sin(theta)\n\n # Check out this page for more drawing function in OpenCV:\n # https://docs.opencv.org/3.1.0/dc/da5/tutorial_py_drawing_functions.html\n for yy0, yy1 in zip(y0, y1):\n cv2.line(image, (x0, int(yy0)), (x1, int(yy1)), color=(255, 0, 0), thickness=1)\n\n return image\n```\n\nTry your implementation on the images ``gantrycrane.png`` and ``circuit.png``.\nDo you find all the lines?\n\nYOUR ANSWER HERE\n\n\n```python\n# Find maximum\nrho_max_idx, theta_max_idx = find_hough_peaks(hough_space, 200)\nprint(f'gantrycrane.png: found {len(rho_max_idx)} lines in the image.')\nrho_max, theta_max = rho_bins[rho_max_idx], theta_bins[theta_max_idx]\n\ncolor_image = imageio.imread('gantrycrane.png')\nimage_with_lines = plot_hough_lines(color_image, rho_max, theta_max)\n\n# Plot\nfig, ax = plt.subplots(figsize=(8, 4))\nax.imshow(image_with_lines)\n```\n\n\n```python\n# Try another image\nim = imageio.imread('circuit.png')\n\nblurred = cv2.GaussianBlur(im, None, sigmaX=2)\nedge = cv2.Canny(blurred, threshold1=30, threshold2=90)\nhough_space, rho_bins, theta_bins = hough_transform(edge, n_bins_rho=300, n_bins_theta=300)\n\n# Find maximum\nrho_max_idx, theta_max_idx = find_hough_peaks(hough_space, 100)\nprint(f'circuit.png: found {len(rho_max_idx)} lines in the image.')\nrho_max, theta_max = rho_bins[rho_max_idx], theta_bins[theta_max_idx]\ncolor_image = cv2.cvtColor(im, cv2.COLOR_GRAY2RGB)\nimage_with_lines = plot_hough_lines(color_image, rho_max, theta_max)\n\n# Plot\nfig, ax = plt.subplots(figsize=(8, 
4))\nax.imshow(image_with_lines)\nfig.tight_layout()\n```\n\n## Part c (bonus)\n\nThe Hough transform is a general technique that can not only be applied to lines, but also to other parametric curves, such as circles.\nIn the following, we will show how the implementation can be extended to finding circles.\n\nA circle can be parameterized by the following equation:\n$$\t\n (\\mathtt{x}-a)^2 + (\\mathtt{y}-b)^2 = r^2. \\label{eq:hough_circle}\n$$\n\nUnfortunately, the computation and memory requirements for the Hough transform increase exponentially with the number of parameters.\nWhile a 3D search space is still just feasible, we can dramatically reduce the amount of computation by integrating the gradient direction in the algorithm.\n\nWithout gradient information, all values $a, b$ lying on the cone given by above equation are incremented.\nWith the gradient information, we only need to increment points on an arc centered at $(a, b)$:\n$$\n\\begin{eqnarray}\n\ta &=& x + r\\cos\\phi\\\\\n\tb &=& y + r\\sin\\phi,\n\\end{eqnarray}\n$$\nwhere $\\phi$ is the gradient angle returned by the edge operator.\n\nCreate a function ``hough_circle`` which implements the Hough transform for circles.\nTry your implementation for a practical application of counting coins in an image.\nYou can use the images ``coins1.png`` and ``coins2.png`` for testing.\n\n\n```python\n# YOUR CODE HERE\nraise NotImplementedError()\n```\n\n## Pard d (bonus)\nThe same trick (as in **Part c**) of using the image gradient can be used for lines.\nModify the code from **Part a** to only vote for one line per edge pixel, instead of all the lines running through this pixel.\n\n\n```python\n# YOUR CODE HERE\nraise NotImplementedError()\n```\n\n## Part e (bonus)\nCan you build an online coin classification and counting system?\n\nYou can take a look at the ``Haribo classification`` demo (MATLAB) in the Moodle for some ideas. Use the functions you wrote in the previous questions.\n(Hint: you may need to include a reference shape in the picture in order to obtain the absolute scale).\n\n\n```python\n# YOUR CODE HERE\nraise NotImplementedError()\n```\n", "meta": {"hexsha": "1b86509759f480f6fae121956c9e21d9b7ac4906", "size": 14405, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "exercise1/05_hough_transform.ipynb", "max_stars_repo_name": "danikhani/CV1-2020", "max_stars_repo_head_hexsha": "80b77776763dbd30f68bc2966e51e7ad592a0373", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "exercise1/05_hough_transform.ipynb", "max_issues_repo_name": "danikhani/CV1-2020", "max_issues_repo_head_hexsha": "80b77776763dbd30f68bc2966e51e7ad592a0373", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "exercise1/05_hough_transform.ipynb", "max_forks_repo_name": "danikhani/CV1-2020", "max_forks_repo_head_hexsha": "80b77776763dbd30f68bc2966e51e7ad592a0373", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.8145539906, "max_line_length": 283, "alphanum_fraction": 0.6045817425, "converted": true, "num_tokens": 2237, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9111797075998823, "lm_q2_score": 0.9324533088603708, "lm_q1q2_score": 0.8496325333179354}} {"text": "# Optimization\n\nAlmost every problem in machine learning and data science starts\nwith a dataset $X$, a model $g(\\theta)$, which is a function of the parameters $\\theta$ and a cost function $C(X, g(\\theta))$ that allows us to judge how well the\nmodel $g(\\theta)$ explains the observations $X$. The model is fit by finding the values of $\\theta$ that minimize the cost function.\n\nWe will look at a class of methods for computing minima of functions known as gradient descent and its generalizations. \n\n# Gradient Descent\n\nThe basic idea of gradient descent is that a function $F(\\mathbf{x})$, $ \\mathbf{x} \\equiv (x_1,\\cdots,x_n)$, decreases fastest if one goes from $\\bf {x}$ in the direction of the negative gradient $-\\nabla F(\\mathbf{x})$.\n\nIt follows that, if \n\\begin{equation}\n\\mathbf{x}_{k+1} = \\mathbf{x}_k - \\gamma \\nabla F(\\mathbf{x}_k), \\ \\ \\gamma > 0\n\\end{equation}\nfor $\\gamma$ small enough, then $F(\\mathbf{x}_{k+1}) \\leq F(\\mathbf{x}_k)$. This means that for a sufficiently small $\\gamma$ we moves towards smaller function values, i.e a minimum.\n\nThis observation is the basis of the gradient descent (GD) method. One starts with an initial guess $\\mathbf{x}_0$ for a minimum of $F$ and compute new approximations according to\n\n\\begin{equation}\n\\mathbf{x}_{k+1} = \\mathbf{x}_k - \\gamma \\nabla F(\\mathbf{x}_k), \\ \\ k \\geq 0.\n\\end{equation}\n\nIdeally the sequence $\\{ \\mathbf{x}_k \\}_{k=0}$ converges to a __global__ minimum of the function $F$. In general we do not know if we are in a global or local minimum. In the special case when $F$ is a convex function, all local minima are also global minima, so in this case gradient descent can converge to the global solution. We will explore this further in the exercises.\n\nIn a practical implementation we have to choose a criterion for when to stop the iteration. One such condition would be to stop when \n\n\\begin{equation}\n||\\nabla F(\\mathbf{x})|| < \\epsilon,\n\\end{equation}\nwhere $\\epsilon$ is some small number. \n\n\nThe parameter $\\gamma$ is referred to as the step size and is constant in the simplest Gradient Descent scheme. We will consider generalizations later where $\\gamma$ is adaptet during each iteration in order to obtain faster convergence or esacpe local minima.\n\nAnother point worth noticing is that in order to use gradient descent all we need to know is the function and its gradient. Computing the gradient may be difficult depending on the complexity of the function we want to minimize. \n\n## Exercise 1\nWe start by considering functions of a single variable since they are easy to work with and we can compute exact minima to compare our implementation. Furthermore, single variable functions captures much of the problems we experience computing minima of more complex, multi-variable functions.\n\na) Consider the function $f(x) = x^2+1$. Show that $f(x_{k+1}) \\leq f(x_k)$ when $0 \\leq \\gamma \\leq 1$.\n\nHint: Use that (GD step) $x_{k+1} = x_k - \\gamma f'(x_k)$ and solve the resulting inequality.\n\nSolution: \n\n\\begin{align}\nx_k^2(1-2\\gamma)^2 + 1 &\\leq x_k^2+1 \\\\\n(1-2\\gamma)^2 &\\leq 1 \\\\\n\\gamma(1-\\gamma) &\\leq 0 \\\\\n\\Rightarrow 0 \\leq \\gamma &\\leq 1.\n\\end{align}\n\nb) Show/convince yourself that f has its minimum at $x=0$ with $f(0) = 1$. 
\n\nWrite a function that computes the minimum using the GD method, with $x_0 = -3$ as initial guess. Try to write the function in a general way such that you in principle can use it on any function.\n\nExperiment with different step sizes in the range $0 < \\gamma < 1$ and check how many iterations you need in order for the solution to converge within a precision of $10^{-6}$. \n\nSince the derivative has to be zero at a minimum you can use $$|f'(x_k)| < \\epsilon,$$ with $\\epsilon = 10^{-6}$, as a criterion for convergence. (The choice of $10^{-6}$ as precision is arbitrary.)\n\nWhat happens if you choose $\\gamma$ exactly equal to 1? What happens if $\\gamma$ is slightly larger than 1? One way to get a better visual understanding of what is going on is to plot the function and the function values at each iteration, $f(x_k)$, within the same plot.\n\n\n```python\n#Program that solves exercise 1b.\n%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndef gradient_descent(xk,dx_f,gamma):\n    return xk-gamma*dx_f\n\ndef quadratic(a,b,c,x):\n    return a*x**2+b*x+c\n\ndef dx_quadratic(a,b,x):\n    return 2*a*x+b\n\n#One variable examples\na,b,c = 1,0,1\nx = np.linspace(-5,5,101)\nquad = quadratic(a,b,c,x)\ndx_quad = dx_quadratic(a,b,x)\n\nxk = -3\nxk_vec = [xk]\nfxk_vec = [quadratic(a,b,c,xk)]\ngamma = 0.3\niters = 0\nmax_iters = 1000\nconverged = False\n\nwhile(abs(dx_quadratic(a,b,xk)) > 1e-6 and iters < max_iters):\n    xk = gradient_descent(xk,dx_quadratic(a,b,xk),gamma)\n    xk_vec.append(xk)\n    fxk_vec.append(quadratic(a,b,c,xk))\n    iters += 1\n\nif(iters < max_iters):\n    converged = True\n\nprint (\"Converged: %s\" % converged)\nprint (\"Number of iterations for convergence: %d\" % iters)\n\nplt.figure(1)\nplt.plot(x,quad)\nplt.plot(xk_vec,fxk_vec,'o')\nplt.legend([\"f(x)\",\"GD-iterates\"])\nplt.show()\n```\n\nc) Quadratic functions like the one in exercise b) are particularly forgiving to work with since they only have one minimum/maximum, which in turn is global. A third order polynomial can have a maximum and a minimum or a saddle point. A fourth order polynomial may have two local minima. The point of the following exercise is to investigate how GD depends on the initial guess, $x_0$.\n\nConsider the function\n\\begin{equation}\nf(x) = \\frac{(x+4)(x+1)(x-1)(x-3)}{14} + \\frac{1}{2},\n\\end{equation}\nwith derivative \n\\begin{equation}\nf'(x) = \\frac{1}{14}\\left( 4x^3 + 3x^2 - 26x -1 \\right).\n\\end{equation}\n\nMake a plot of the function for $x \\in [-5,4]$. The function has a global minimum at $x \\approx -2.9354$, a local maximum at $x \\approx -0.038301$ and a local minimum at $x \\approx 2.2237$. Choose $\\gamma = 0.1$ as step length and use GD to compute the minimum. \n\n* Experiment with different initial values $x_0 \\in [-5,-0.1]$ and $x_0 \\in [0.1, 4]$. \n* What happens if you choose $x_0 = -0.038301$ (i.e. at the local maximum)? Explain why this happens. \n* What happens if you choose $x_0$ slightly smaller/larger than $-0.038301$? 
\n* Furthermore you can experiment with different step sizes.\n\n\n```python\ndef quartic(x):\n return (x+4)*(x+1)*(x-1)*(x-3)/14.0 + 0.5\ndef dx_quartic(x):\n return (1.0/14.0)*(4*x**3 + 3*x**2 - 26*x - 1)\n\nx = np.linspace(-5,4,101)\nquart = quartic(x)\ndx_quart = dx_quartic(x)\n\nxk = -0.02\nxk_vec = [xk]\nfxk_vec = [quartic(xk)]\ngamma = 0.1\niters = 0\nmax_iters = 200\nconverged = False\n\nwhile(abs(dx_quartic(xk)) > 1e-6 and iters < max_iters):\n xk = gradient_descent(xk,dx_quartic(xk),gamma)\n xk_vec.append(xk)\n fxk_vec.append(quartic(xk))\n iters += 1\n\nif(iters < max_iters):\n converged = True\n\nprint (\"Converged: %s\" % converged)\nprint (\"Number of iterations for convergence: %d\" % iters)\n\nplt.figure(1)\nplt.plot(x,quart)\nplt.plot(xk_vec,fxk_vec,'o')\nplt.legend([\"f(x)\",\"GD-iterates\"])\nplt.show()\n```\n\nd) In this exercise we will look at function of two variables. \n\nConsider the function \n\\begin{equation}\nz(x,y) = x^2+10y^2-1.\n\\end{equation}\n\n* Compute the gradient and show that $z(x,y)$ has its minimum at $x=0, y=0$.\n* Extend your program such that it can compute the minimum of a multivariate function. The main difference now is that the points and derivative/gradient now are vectors/arrays and not just scalars. Also try to make relevant visualizations, such as surface or contour plots.\n* As before, experiment with different step sizes $\\gamma$ and intitial values $\\mathbf{x}_0 = (x_0,y_0)$.\n\n\n\n```python\nfrom mpl_toolkits.mplot3d import Axes3D\nfrom matplotlib import cm\n\n#Two variable example\ndef Z(x,y):\n return x**2+10*y**2-1\n\ndef grad_Z(x,y):\n gZ = np.zeros(2)\n gZ[0] = 2*x\n gZ[1] = 20*y\n return gZ\n\nX = np.arange(-4,4,0.1)\nY = np.arange(-5,5,0.1)\n\nX_, Y_ = np.meshgrid(X, Y)\nZ_ = Z(X_,Y_)\nfig = plt.figure(2)\nax = fig.gca(projection='3d')\nsurf = ax.plot_surface(X_, Y_, Z_, cmap=cm.coolwarm,linewidth=0, antialiased=False)\nplt.show()\nplt.figure(3)\nplt.contour(X_,Y_,Z_,corner_mask=0)\nplt.show()\n\nxk = np.zeros(2)\nxk[0] = 1.5\nxk[1] = 2.3\n\ngamma = 0.05\niters = 0\nmax_iters = 200\nconverged = False\nwhile(abs(np.linalg.norm(grad_Z(xk[0],xk[1]))) > 1e-6 and iters < max_iters):\n xk = xk - gamma*grad_Z(xk[0],xk[1])\n iters += 1\nprint (\"Number of iterations for convergence: %d\" % iters)\nprint(xk)\n```\n\n## Exercise 2 (Linear regression)\n\nIn this exercise we will work out how Gradient Descent is used in the context of linear regression. The method is unchanged, however we have to minimize a different function, namely the cost function. 
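Before writing down the model it is worth stating explicitly which cost function is minimized here. Judging from the gradient $\frac{2}{N}X^T(X\theta - y)$ used in the code further down, it is the mean squared error (with the $N$ data points, design matrix $X$ and parameters $\theta$ introduced below):

$$
C(\theta) = \frac{1}{N}\sum_{i=1}^{N}\big(\hat{y}_i - y_i\big)^2 = \frac{1}{N}\lVert X\theta - y\rVert^2,
\qquad
\nabla_\theta C(\theta) = \frac{2}{N}X^T\big(X\theta - y\big).
$$

The linear model itself is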
\n\n\n\\begin{equation}\n\\hat{y} = \\theta_0x_0 + \\sum_{i=1}^n \\theta_i x_i, \\ \\ \\hat{y} = \\theta^T \\cdot \\bar{x}\n\\end{equation}\nwhere $x_0 \\equiv 1$ by convention (?)\n\n\nThe normal equation\n\\begin{equation}\n\\hat{\\theta} = (X^TX)^{-1} X^Ty\n\\end{equation}\n\n\n\n```python\n%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n#One variable example\nN = 100\nx0 = np.ones(N)\nx1 = 2*np.random.rand(N)\ny = 4 + 3*x1 + np.random.randn(N)\n\n\n#Compute theta and predicted y using normal equations\nX = np.c_[x0,x1]\nXt_X_inv = np.linalg.inv(np.dot(X.transpose(),X))\nXt_y = np.dot(X.transpose(),y)\ntheta_normeqs = np.dot(Xt_X_inv,Xt_y)\n\n\n#Compute theta using gradient descent\neta = 0.1 \nmax_iters = 100\ntheta = np.random.randn(2)\n\ndiff = 100\niters = 0\nwhile(diff > 1e-10):\n gradient = 2.0/float(N) * np.dot(X.transpose(), np.dot(X,theta) - y)\n theta = theta-eta*gradient\n diff = np.linalg.norm(gradient)\n iters += 1\n\n#Output number of iterations before convergence and compare theta computed with GD with theta computed \n#using the Normal equations\nprint(\"Number of iterations before convergence: %d\" % iters)\nprint(abs(theta_normeqs-theta))\nprint(theta_normeqs)\nprint(theta)\n\n#Plot true y and y_predicted\nplt.figure(4)\ny_pred = theta[0] + theta[1]*x1\nplt.plot(x1,y,'ro')\nplt.plot(x1,y_pred,'-b')\nplt.show()\n```\n\n\n```python\n#Two variable example\nN = 100\nx0 = np.ones(N)\nx1 = 2*np.random.rand(N)\nx2 = 2*np.random.rand(N)\n\nnoise_scale = 2\ng_noise = noise_scale*np.random.randn(N)\n\ny = 4+3*x1+2*x2+g_noise\n\n#Compute theta using normal equations\nX = np.c_[x0,x1,x2]\nXt_X_inv = np.linalg.inv(np.dot(X.transpose(),X))\nXt_y = np.dot(X.transpose(),y)\ntheta_normeqs = np.dot(Xt_X_inv,Xt_y)\n\n\n#Compute theta using gradient descent\neta = 0.1 \nmax_iters = 100\ntheta = np.random.randn(3)\n\ndiff = 100\niters = 0\nwhile(diff > 1e-10):\n gradient = 2.0/float(N) * np.dot(X.transpose(), np.dot(X,theta) - y)\n theta = theta-eta*gradient\n diff = np.linalg.norm(gradient)\n iters += 1\n\n#Output number of iterations before convergence and compare theta computed with GD with theta computed \n#using the Normal equations\nprint(\"Number of iterations before convergence: %d\" % iters)\nprint(abs(theta_normeqs-theta))\nprint(theta_normeqs)\nprint(theta)\n\n#Plot true y and y_predicted\ny_pred = theta[0] + theta[1]*x1+theta[2]*x2\nfig = plt.figure(5)\nax = fig.gca(projection='3d')\nscatter1 = ax.scatter(x1, x2, y,marker='^',c='r')\nscatter2 = ax.scatter(x1, x2, y_pred,marker='o',c='b')\nplt.show()\n```\n\n## Short summary so far\n\nSummary...\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "49a7598973be7d89c77a6a6390ce60a663996791", "size": 178191, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "doc/src/GradientOptim/GradientMethods.ipynb", "max_stars_repo_name": "kylegodbey/MachineLearningMSU", "max_stars_repo_head_hexsha": "a6580d3a61e9b8c332683e5e14ed3bdd7a1ecff0", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 59, "max_stars_repo_stars_event_min_datetime": "2019-12-06T09:24:50.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-17T03:27:28.000Z", "max_issues_repo_path": "doc/src/GradientOptim/GradientMethods.ipynb", "max_issues_repo_name": "kylegodbey/MachineLearningMSU", "max_issues_repo_head_hexsha": "a6580d3a61e9b8c332683e5e14ed3bdd7a1ecff0", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 5, "max_issues_repo_issues_event_min_datetime": "2020-06-16T18:24:24.000Z", 
"max_issues_repo_issues_event_max_datetime": "2020-07-08T21:13:56.000Z", "max_forks_repo_path": "doc/src/GradientOptim/GradientMethods.ipynb", "max_forks_repo_name": "kylegodbey/MachineLearningMSU", "max_forks_repo_head_hexsha": "a6580d3a61e9b8c332683e5e14ed3bdd7a1ecff0", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 43, "max_forks_repo_forks_event_min_datetime": "2019-11-30T00:37:00.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-01T21:30:09.000Z", "avg_line_length": 329.3733826248, "max_line_length": 50948, "alphanum_fraction": 0.9217412776, "converted": true, "num_tokens": 3482, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9324533069832974, "lm_q2_score": 0.9111797039819735, "lm_q1q2_score": 0.8496325282340532}} {"text": "# Merge Sort\n\nMany problems require sorting a collection of items by their keys, which are assumed to be comparable. Simple algorithums such as bubble sort and insertion sort can do this in $O(n^2)$ time, however using a divide and conquer approach, merge sort acheives $O(n \\lg n)$.\n\nOther than runtime, their are two additional characteristics we may be intrested in when comparing sorting algorithums\n- __in-place__: Only $O(1)$ extra space is required to sort the array\n- __stable__: For every $i$, $j$ where $i < j$ and $A[i].key = A[j].key$, $A[i]$ is guarenteed to proceed $A[j]$, \n(the ordering of equal keys remains the same).\n\n## Algorithum\n\nMerge sort works by recursivly splitting the array into two, until we reach arrays of size 1, which by definition are sorted.\n\n```\nAlgorithum mergeSort(A, i, j)\n mid <- floor((i + j) / 2)\n mergeSort(A, i, mid)\n mergeSort(A, mid+1, j)\n merge(A, i, mid, j)\n```\n\nMerging two sorted arrays is simple. Create a scratch array $B$, then look at the first element in each of the two halves. Copy the smallest element and repeat until all elements have been copyed. Finnaly copy array $B$ back into array $A$.\n\n```\nAlgorithum merge(A, i, mid, j)\n initialize array B with length j - i + 1\n k <- i\n l <- mid + 1\n m <- 0\n \n while k <= mid and l <= j do\n if A[k].key <= A[l].key then\n B[m] <- A[l].key\n k <- k + 1\n else\n B[m] <- A[l]\n l <- l + 1\n m <- m + 1\n \n while k <= mid do\n B[m] <- A[l]\n k <- k + 1\n m <- m + 1\n \n while l <= j do\n B[m] <- A[l]\n l <- l + 1\n m <- m + 1\n \n for m = 0 to j - i do\n A[m + i] <- B[m]\n```\n\nClearly mergo sort is not _in-place_ since it uses an auxillary array $B$, however it is _stable_ since we always merge $A$ first if the keys are equal, and since all elements of $A$ came before $B$ `mergeSort` will never swap equal keys.\n\n## Runtime\n\nThe runtime of `mergeSort` is simple, some constant amount of work for the mid point, then the sum of the three function calls.\n\n$$\nT_{mergeSort} = \\begin{cases}\n O(1) & n \\leq 1 \\\\\n O(1) + T_{mergeSort}(\\lfloor{\\frac{n}{2}}\\rfloor) + T_{mergeSort}(\\lceil{\\frac{n}{2}}\\rceil) + T_{merge}(n) & n \\geq 2 \\\\\n\\end{cases}\n$$\n\nThe worst case runtime of `merge` is $O(n)$ since their are no nested loops and each loop runs at most $n$ times. 
The last loop always runs $n$ times thus it is also $\\Omega(n)$, thus `merge` us $\\Theta(n)$.\n\nLets assume $n = 2^k$ for some integer $k$, thus we dont need to consider the ceil and floor's, now to solve the rest of the recurence we can expand and collect terms:\n\n$$\n\\begin{align}\nT_{mergeSort}(n) &= 2 T_{mergeSort}\\left(\\frac{n}{2}\\right) + \\Theta(n) \\\\\n &= 2 \\left(2 T_{mergeSort}\\left(\\frac{n}{2^2}\\right) + \\Theta\\left(\\frac{n}{2}\\right)\\right) + \\Theta(n) \\\\\n &= 4 T_{mergeSort}\\left(\\frac{n}{2^2}\\right) + 2 \\Theta\\left(\\frac{n}{2}\\right) + \\Theta(n) \\\\\n &= 4 \\left( 2 T_{mergeSort}\\left(\\frac{n}{2^3}\\right) + \\Theta\\left(\\frac{n}{2^2}\\right) \\right) + 2 \\Theta\\left(\\frac{n}{2}\\right) + \\Theta(n) \\\\\n &= 8 T_{mergeSort}\\left(\\frac{n}{2^3}\\right) + 4 \\Theta\\left(\\frac{n}{2^2}\\right) + 2 \\Theta\\left(\\frac{n}{2}\\right) + \\Theta(n) \\\\\n &\\vdots \\\\\n &= 2^k T_{mergeSort}\\left(\\frac{n}{2^k}\\right) + \\sum_{i=0}^{k-1}{2^i \\Theta{\\left(\\frac{n}{2^i}\\right)}} \\\\\n &= n T_{mergeSort}\\left(1\\right) + \\sum_{i=0}^{k-1}{2^i \\Theta{\\left(\\frac{n}{2^i}\\right)}} \\\\\n &= n \\Theta(1) + \\sum_{i=0}^{k-1}{2^i \\Theta{\\left(\\frac{n}{2^i}\\right)}} \\\\\n &= \\Theta(n) + \\sum_{i=0}^{\\lg n -1}{2^i \\Theta{\\left(\\frac{n}{2^i}\\right)}} \\\\\n &= \\Theta(n) + \\Theta(n \\lg n) \\\\\n &= \\Theta(n \\lg n) \\\\\n\\end{align}\n$$\n\nTo complete this proof we would use induction to fill in the \"gap\" when unwinding the recurance. Also, to show this works for any $n$ we use induction to show $T(n) \\leq T(n+1)$ then using the facts that $n \\leq 2^k < 2n$ for some $k$ and $\\frac{n}{2} \\leq 2^{k'} < n$ for some $k'$ we get \n\n$$\nT(n) \\leq T(2^k) = O(2^k \\lg 2^k) = O(2n \\lg 2n) = O(n \\lg n)\n$$\n\nand\n\n$$\nT(n) = \\Omega(2^{k'} \\lg 2^{k'}) = \\Omega\\left(\\frac{n}{2} \\lg \\frac{n}{2}\\right) = \\Omega(n \\lg n)\n$$\n\nthus $T(n) = \\Omega(n \\lg n)$\n\n## Masters Theorem\n\nThe proof above was from first princibles, but we could instead use _masters theorem_.\n\nLet $n_0 \\in \\mathbb{N}$, $k \\in \\mathbb{N}$, and $a, b \\in \\mathbb{R}$ with $a > 0$ and $b > 1$, let $T: \\mathbb{N} \\rightarrow \\mathbb{N}$ satisfy the following recurrence:\n\n$$\nT(n) = \\begin{cases}\n\\Theta(1) & n < n_0 \\\\\na \\Theta\\left(\\frac{n}{b}\\right) + \\Theta(n^k) & n \\geq n_0\n\\end{cases}\n$$\n\nLet the critical component $e = \\log_b(a)$, then:\n\n$$\nT(n) = \\begin{cases}\n\\Theta(n^e) & k < e \\\\\n\\Theta(n^e \\lg n) & k = e \\\\\n\\Theta(n^k) & k > e \\\\\n\\end{cases}\n$$\n\nFor merge sort we have the terms $a = 2$, $b = 2$, $k = 1$, $n_0 = 2$, thus $e = \\log_2(2) = 1$. 
$k = e$ thus by case $II$ of Masters Theorem merge sort is $\\Theta(n \\lg n)$.\n", "meta": {"hexsha": "397e099910017317f6ac7799d4511aae85d0b626", "size": 7189, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Merge Sort.ipynb", "max_stars_repo_name": "bens-notes/ads-notes", "max_stars_repo_head_hexsha": "fdd273dd25e778551039a9f69740a8e1eaa641d8", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Merge Sort.ipynb", "max_issues_repo_name": "bens-notes/ads-notes", "max_issues_repo_head_hexsha": "fdd273dd25e778551039a9f69740a8e1eaa641d8", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Merge Sort.ipynb", "max_forks_repo_name": "bens-notes/ads-notes", "max_forks_repo_head_hexsha": "fdd273dd25e778551039a9f69740a8e1eaa641d8", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.316091954, "max_line_length": 305, "alphanum_fraction": 0.4826818751, "converted": true, "num_tokens": 1776, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9073122238669025, "lm_q2_score": 0.9362850097482405, "lm_q1q2_score": 0.8495028343679205}} {"text": "# Lecture 14, Algebraic Modeling Languages\n\nAlgebraic Modeling Languages (AML) are high-level computer programming languages for describing and solving high complexity problems for large scale mathematical computation (i.e. large scale optimization type problems). Their syntax mimics the mathematical notation of optimization problems, which allows one to express optimization problems in a familiar, concise and readable way. \n\n**AMLs do not directly solve the problem, but they call appropriate external solvers to find the solution.**\n\nExamples of AMLs are\n* A Mathematical Programming Language (AMPL),\n* General Algebraic Modeling System (GAMS),\n* Optimization Programming Language (OPL),\n* Advanced Interactive Multidimensional Modeling System (AIMMS), and\n* Pyomo.\n\nIn addition to the ease of modelling, one of the advantages of AMLs is that you can model the problem once and then solve it with multiple solvers.\n\n## Pyomo\n\nOn this course, we use Pyomo as an example of AMLs. 
Pyomo is a Python-based, open-source optimization modeling language with a diverse set of optimization capabilities.\n\nPyomo may not be a completely typical AML, because Pyomo's modeling objects are embedded within a full-featured high-level programming language providing a rich set of supporting libraries, which distinguishes Pyomo from other AMLs.\n\nPyomo supports a wide range of problem types, including:\n* Linear programming\n* Quadratic programming\n* Nonlinear programming\n* Mixed-integer linear programming\n* Mixed-integer quadratic programming\n* Mixed-integer nonlinear programming\n* Stochastic programming\n* Generalized disjunctive programming\n* Differential algebraic equations\n* Bilevel programming\n* Mathematical programs with equilibrium constraints\n\n# Installing Pyomo\n\nThe easiest way to install Pyomo is to call\n```\npip install pyomo\n```\nwhen pip has been installed on your machine.\n\n## Example 1, linear optimization\n\nLet us start with a very simple linear problem\n$$\n\\begin{align}\n\\min &\\qquad 2x_1+3x_2\\\\\n\\text{s.t. }& \\qquad 3x_1+4x_2\\geq 1\\\\\n& \\qquad x_1,x_2\\geq 0.\n\\end{align}\n$$\n\n\n```python\nfrom pyomo.environ import *\n\n\nmodel = ConcreteModel()\n\nmodel.x = Var([1,2], domain=NonNegativeReals) #Non-negative variables x[1] and x[2]\n\nmodel.OBJ = Objective(expr = 2*model.x[1] + 3*model.x[2]) #Objective function\n\nmodel.Constraint1 = Constraint(expr = 3*model.x[1] + 4*model.x[2] >= 1) #Constraint\n\n```\n\nOnce we have defined the problem, we can solve it. Let us start by using glpk, which is an open source linear programming program.\n\nYou need to have glpk installed on your system. For details, see https://www.gnu.org/software/glpk/#TOCdownloading. For many Linux distributions, you can install glpk from the repositories by typing\n```\nsudo yum install glpk\n```\n```\nsudo apt-get install glpk,\n```\nor whatever your distribution needs.\n\n\n\n```python\nfrom pyomo.opt import SolverFactory #Import interfaces to solvers\nopt = SolverFactory(\"glpk\") #Use glpk\nres = opt.solve(model, tee=True) #Solve the problem and print the output\nprint \"Solution:\"\nprint \"=========\"\nmodel.x.display() #Print values of x\n```\n\n GLPSOL: GLPK LP/MIP Solver, v4.52\n Parameter(s) specified in the command line:\n --write /tmp/tmpogS1Ue.glpk.raw --wglp /tmp/tmpA5Hc_F.glpk.glp --cpxlp /tmp/tmpkKjSRP.pyomo.lp\n Reading problem data from '/tmp/tmpkKjSRP.pyomo.lp'...\n 2 rows, 3 columns, 3 non-zeros\n 21 lines were read\n Writing problem data to `/tmp/tmpA5Hc_F.glpk.glp'...\n 15 lines were written\n GLPK Simplex Optimizer, v4.52\n 2 rows, 3 columns, 3 non-zeros\n Preprocessing...\n 1 row, 2 columns, 2 non-zeros\n Scaling...\n A: min|aij| = 3.000e+00 max|aij| = 4.000e+00 ratio = 1.333e+00\n Problem data seem to be well scaled\n Constructing initial basis...\n Size of triangular part is 1\n 0: obj = 0.000000000e+00 infeas = 1.000e+00 (0)\n * 1: obj = 7.500000000e-01 infeas = 0.000e+00 (0)\n * 2: obj = 6.666666667e-01 infeas = 0.000e+00 (0)\n OPTIMAL LP SOLUTION FOUND\n Time used: 0.0 secs\n Memory used: 0.0 Mb (40408 bytes)\n Writing basic solution to `/tmp/tmpogS1Ue.glpk.raw'...\n 7 lines were written\n Solution:\n =========\n x : Size=2, Index=x_index, Domain=NonNegativeReals\n Key : Lower : Value : Upper : Fixed : Stale\n 1 : 0 : 0.333333333333 : None : False : False\n 2 : 0 : 0.0 : None : False : False\n\n\nNow, if you have other linear solvers installed on your system, you can use them too. 
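If you are unsure which solvers Pyomo can actually reach on your machine, one quick check is to ask each interface whether it is available (a small sketch; the list of solver names is just an example):

```python
from pyomo.opt import SolverFactory

# Ask each solver interface whether a working executable/library was found
for name in ["glpk", "cbc", "cplex", "gurobi", "ipopt"]:
    available = SolverFactory(name).available(exception_flag=False)
    print("%s available: %s" % (name, available))
```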
Let us use Cplex, which is a commercial solvers (academic license available).\n\n\n```python\nopt = SolverFactory(\"cplex\")\nres = opt.solve(model, tee=True)\nprint \"Solution:\"\nmodel.x.display()\n```\n\n \n Welcome to IBM(R) ILOG(R) CPLEX(R) Interactive Optimizer 12.6.2.0\n with Simplex, Mixed Integer & Barrier Optimizers\n 5725-A06 5725-A29 5724-Y48 5724-Y49 5724-Y54 5724-Y55 5655-Y21\n Copyright IBM Corp. 1988, 2015. All Rights Reserved.\n \n Type 'help' for a list of available commands.\n Type 'help' followed by a command name for more\n information on commands.\n \n CPLEX> Logfile 'cplex.log' closed.\n Logfile '/tmp/tmpnxw9d9.cplex.log' open.\n CPLEX> Problem '/tmp/tmpEdgLVy.pyomo.lp' read.\n Read time = 0.00 sec. (0.00 ticks)\n CPLEX> Problem name : /tmp/tmpEdgLVy.pyomo.lp\n Objective sense : Minimize\n Variables : 3\n Objective nonzeros : 2\n Linear constraints : 2 [Greater: 1, Equal: 1]\n Nonzeros : 3\n RHS nonzeros : 2\n \n Variables : Min LB: 0.000000 Max UB: all infinite \n Objective nonzeros : Min : 2.000000 Max : 3.000000 \n Linear constraints :\n Nonzeros : Min : 1.000000 Max : 4.000000 \n RHS nonzeros : Min : 1.000000 Max : 1.000000 \n CPLEX> Tried aggregator 1 time.\n LP Presolve eliminated 2 rows and 3 columns.\n All rows and columns eliminated.\n Presolve time = 0.00 sec. (0.00 ticks)\n \n Dual simplex - Optimal: Objective = 6.6666666667e-01\n Solution time = 0.00 sec. Iterations = 0 (0)\n Deterministic time = 0.00 ticks (2.72 ticks/sec)\n \n CPLEX> Solution written to file '/tmp/tmpjWjvnD.cplex.sol'.\n CPLEX> Solution:\n x : Size=2, Index=x_index, Domain=NonNegativeReals\n Key : Lower : Value : Upper : Fixed : Stale\n 1 : 0 : 0.333333333333 : None : False : False\n 2 : 0 : 0.0 : None : False : False\n\n\nWe can use also gurobi, which is another commercial solver with academic license.\n\n\n```python\nopt = SolverFactory(\"gurobi\")\nres = opt.solve(model, tee=True)\nprint \"Solution:\"\nmodel.x.display()\n```\n\n Optimize a model with 2 rows, 3 columns and 3 nonzeros\n Coefficient statistics:\n Matrix range [1e+00, 4e+00]\n Objective range [2e+00, 3e+00]\n Bounds range [0e+00, 0e+00]\n RHS range [1e+00, 1e+00]\n Presolve removed 2 rows and 3 columns\n Presolve time: 0.00s\n Presolve: All rows and columns removed\n Iteration Objective Primal Inf. Dual Inf. Time\n 0 6.6666667e-01 0.000000e+00 0.000000e+00 0s\n \n Solved in 0 iterations and 0.00 seconds\n Optimal objective 6.666666667e-01\n Solution:\n x : Size=2, Index=x_index, Domain=NonNegativeReals\n Key : Lower : Value : Upper : Fixed : Stale\n 1 : 0 : 0.333333333333 : None : False : False\n 2 : 0 : 0.0 : None : False : False\n\n\n## Example 2, nonlinear optimization\n\nLet use define optimization problem\n$$\n\\begin{align}\n\\max &\\qquad c_b\\\\\n\\text{s.t. 
}& \\qquad c_{af}s_v - s_vc_a-k_1c_a=0\\\\\n&\\qquad s_vc_b+k_1c_a-k_2c_b-2k_3c_a^2=0\\\\\n&\\qquad s_vc_c+k_2c_b=0\\\\\n&\\qquad s_vc_d+k_3c_a^2=0,\\\\\n&\\qquad s_v,c_a,c_b,c_c,c_d\\geq0\n\\end{align}\n$$\nwhere $k_1=5/6$, $k_2=5/3$, $k_3=1/6000$, and $c_{af}=10000$.\n\n\n```python\nfrom pyomo.environ import *\n# create the concrete model\nmodel = ConcreteModel()\n# set the data \nk1 = 5.0/6.0 \nk2 = 5.0/3.0 \nk3 = 1.0/6000.0 \ncaf = 10000.0 \n# create the variables\nmodel.sv = Var(initialize = 1.0, within=PositiveReals)\nmodel.ca = Var(initialize = 5000.0, within=PositiveReals)\nmodel.cb = Var(initialize = 2000.0, within=PositiveReals)\nmodel.cc = Var(initialize = 2000.0, within=PositiveReals)\nmodel.cd = Var(initialize = 1000.0, within=PositiveReals)\n\n# create the objective\nmodel.obj = Objective(expr = model.cb, sense=maximize)\n# create the constraints\nmodel.ca_bal = Constraint(expr = (0 == model.sv * caf \\\n - model.sv * model.ca - k1 * model.ca \\\n - 2.0 * k3 * model.ca ** 2.0))\nmodel.cb_bal = Constraint(expr=(0 == -model.sv * model.cb \\\n + k1 * model.ca - k2 * model.cb))\nmodel.cc_bal = Constraint(expr=(0 == -model.sv * model.cc \\\n + k2 * model.cb))\nmodel.cd_bal = Constraint(expr=(0 == -model.sv * model.cd \\\n + k3 * model.ca ** 2.0))\n```\n\n## Solving with Ipopt\n\nInstall IPopt following http://www.coin-or.org/Ipopt/documentation/node10.html.\n\n\n```python\nopt = SolverFactory(\"ipopt\",solver_io=\"nl\")\n\nopt.solve(model,tee=True)\n\nprint \"Solution is \"\nmodel.sv.display()\nmodel.ca.display()\nmodel.cb.display()\nmodel.cc.display()\nmodel.cd.display()\n```\n\n \n \n ******************************************************************************\n This program contains Ipopt, a library for large-scale nonlinear optimization.\n Ipopt is released as open source code under the Eclipse Public License (EPL).\n For more information visit http://projects.coin-or.org/Ipopt\n ******************************************************************************\n \n This is Ipopt version 3.12, running with linear solver mumps.\n NOTE: Other linear solvers might be more efficient (see Ipopt documentation).\n \n Number of nonzeros in equality constraint Jacobian...: 11\n Number of nonzeros in inequality constraint Jacobian.: 0\n Number of nonzeros in Lagrangian Hessian.............: 5\n \n Total number of variables............................: 5\n variables with only lower bounds: 5\n variables with lower and upper bounds: 0\n variables with only upper bounds: 0\n Total number of equality constraints.................: 4\n Total number of inequality constraints...............: 0\n inequality constraints with only lower bounds: 0\n inequality constraints with lower and upper bounds: 0\n inequality constraints with only upper bounds: 0\n \n iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls\n 0 -2.0000000e+03 7.50e+03 6.25e-01 -1.0 0.00e+00 - 0.00e+00 0.00e+00 0\n 1 -1.0801475e+03 5.54e+02 2.46e+00 -1.0 1.39e+03 - 5.54e-01 1.00e+00h 1\n 2 -1.0763574e+03 8.86e+01 1.87e+02 -1.0 3.66e+02 - 9.31e-01 1.00e+00h 1\n 3 -1.0727252e+03 1.07e+01 5.26e+00 -1.0 9.35e+01 - 9.71e-01 1.00e+00h 1\n 4 -1.0726714e+03 4.01e+00 1.22e-01 -1.0 6.03e+01 - 9.68e-01 1.00e+00h 1\n 5 -1.0724371e+03 3.44e-04 2.93e-05 -2.5 4.19e-01 - 1.00e+00 1.00e+00h 1\n 6 -1.0724372e+03 1.63e-08 3.93e-09 -3.8 3.85e-03 - 1.00e+00 1.00e+00h 1\n 7 -1.0724372e+03 5.68e-11 2.68e-11 -5.7 2.24e-04 - 1.00e+00 1.00e+00h 1\n 8 -1.0724372e+03 4.55e-13 2.51e-14 -8.6 2.78e-06 - 1.00e+00 1.00e+00h 1\n \n Number of 
Iterations....: 8\n \n (scaled) (unscaled)\n Objective...............: -1.0724372001086319e+03 -1.0724372001086319e+03\n Dual infeasibility......: 2.5080209371475748e-14 2.5080209371475748e-14\n Constraint violation....: 1.1368683772161604e-14 4.5474735088646412e-13\n Complementarity.........: 2.5059065225787342e-09 2.5059065225787342e-09\n Overall NLP error.......: 2.5059065225787342e-09 2.5059065225787342e-09\n \n \n Number of objective function evaluations = 9\n Number of objective gradient evaluations = 9\n Number of equality constraint evaluations = 9\n Number of inequality constraint evaluations = 0\n Number of equality constraint Jacobian evaluations = 9\n Number of inequality constraint Jacobian evaluations = 0\n Number of Lagrangian Hessian evaluations = 8\n Total CPU secs in IPOPT (w/o function evaluations) = 0.002\n Total CPU secs in NLP function evaluations = 0.000\n \n EXIT: Optimal Solution Found.\n \n Ipopt 3.12: Optimal Solution Found\n Solution is \n sv : Size=1, Index=None, Domain=PositiveReals\n Key : Lower : Value : Upper : Fixed : Stale\n None : 0 : 1.34381176107 : None : False : False\n ca : Size=1, Index=None, Domain=PositiveReals\n Key : Lower : Value : Upper : Fixed : Stale\n None : 0 : 3874.25886723 : None : False : False\n cb : Size=1, Index=None, Domain=PositiveReals\n Key : Lower : Value : Upper : Fixed : Stale\n None : 0 : 1072.43720011 : None : False : False\n cc : Size=1, Index=None, Domain=PositiveReals\n Key : Lower : Value : Upper : Fixed : Stale\n None : 0 : 1330.09353341 : None : False : False\n cd : Size=1, Index=None, Domain=PositiveReals\n Key : Lower : Value : Upper : Fixed : Stale\n None : 0 : 1861.60519963 : None : False : False\n\n\n# Black-box optimization using scipy.optimize\n\nOften cases, you do not have algebraic formulations of the objective functions, but instead, you have an executable, which gives you the values and you do not know what is happening inside there.\n\n### Example\n\nExecutable 'prob4' (which you need to compile before using) includes a script for two variable problem with three inequality constraints (where two of them involve only one variable). The problem is of the form\n$$\n\\min \\ f(x)\n\\\\ \\text{s.t. 
}g(x) <= 2\n\\\\ x\\geq 0.\n$$\nThe executable reads in a file 'input.txt', which contains variable values on top of each other and outputs a file \"output.txt\", which contains on top of each other value of f, value of g, value of $x_1$, value of $x_2$, gradient of f and gradient of the contraints.\n\nLet us solve this problem using *scipy.optimize*\n\n\n\n```python\nimport csv\ndef evaluate_prob4(x):\n with open('input.txt','w') as f:\n f.write('%f\\n%f'%(x[0],x[1])) #Write x[0] and x[1] to the input.txt file\n !./prob4 #Execute prob4\n val = []\n with open('output.txt','r') as f: \n valuereader = csv.reader(f)\n for row in valuereader:\n val.extend([float(i) for i in row])\n f_val = val[0]\n g_val = [0]*3\n g_val[0] = 2-val[1]\n g_val[1]=val[2]\n g_val[2]=val[3]\n grad_f=[val[4],val[5]]\n grad_g = [[0,0],[0,0],[0,0]]\n grad_g[0] = [-val[6],-val[7]]\n grad_g[1] = [val[8],val[9]]\n grad_g[2] = [val[10],val[11]]\n return f_val,g_val,grad_f,grad_g\n \n```\n\n\n```python\nimport math\nevaluate_prob4([2,0.])\n```\n\n\n\n\n (3.0, [-2.0, 2.0, 0.0], [-6.0, 2.0], [[-4.0, -0.0], [1.0, 1.0], [1.0, 1.0]])\n\n\n\n\n```python\nfrom scipy.optimize import minimize\n\nconstraint_tuple=(\n {'type':'ineq','fun':lambda x:evaluate_prob4(x)[1][0],\\\n 'jac':lambda x:evaluate_prob4(x)[3][0]},\n {'type':'ineq','fun':lambda x:evaluate_prob4(x)[1][1],\\\n 'jac':lambda x:evaluate_prob4(x)[3][1]},\n {'type':'ineq','fun':lambda x:evaluate_prob4(x)[1][2],\\\n 'jac':lambda x:evaluate_prob4(x)[3][2]}\n)\n```\n\n\n```python\nminimize(lambda x: evaluate_prob4(x)[0], [0,0], method='SLSQP'\n , jac=lambda x: evaluate_prob4(x)[2], \n constraints = constraint_tuple,options = {'disp':True})\n```\n\n# Example 3, Nonlinear multiobjective optimization\n\nLet us study optimization problem\n$$\n\\begin{align}\n\\min \\ & \\left(\\sum_{i=1}^{48}\\frac{\\sqrt{1+x_i^2}}{v_i},\\sum_{i=1}^{48}\\left(\\left(\\frac{x_iv_i}{\\sqrt{1+x_i^2}}+v_w\\right)^2+\\frac{v_i^2}{1+x_i^2}\\right)\\right., \\\\\n&\\qquad\\left.\\sum_{i=1}^{47}\\big|(x_{i+1}-x_i\\big|\\right)\\\\\n\\text{s.t. } & \\sum_{i=1}^{j}x_i\\leq -1\\text{ for all }j=10,11,12,13,14\\\\\n& \\left|\\sum_{i=1}^{j}x_i\\right|\\geq 2\\text{ for all }j=20,21,22,23,24\\\\\n& \\sum_{i=1}^{j}x_i\\geq 1\\text{ for all }j=30,31,32,33,34\\\\\n&\\sum_{i=1}^{48}\\frac{\\sqrt{1+x_i^2}}{v_i} \\leq 5\\\\\n&\\sum_{i=1}^{48}x_i=0\\\\\n&-10\\leq\\sum_{i=1}^{j}x_i\\leq10\\text{ for all }j=1,\\ldots,48\n&0\\leq v_i\\leq 25\\text{ for all }i=1,\\ldots,48\\\\\n&-10\\leq x_i\\leq 10\\text{ for all }i=1,\\ldots,48\\\\\n\\end{align}\n$$\n\n\n```python\n\nfrom pyomo.environ import *\n# create the concrete model9\ndef solve_ach(reference,lb,ub):\n model = ConcreteModel()\n\n vwind = 5.0\n min_speed = 0.01\n\n\n #f1, time used\n def f1(model):\n return sum([sqrt(1+model.y[i]**2)/model.v[i] for i in range(48)])\n #f2, wind drag, directly proportional to square of speed wrt. 
wind\n def f2(model):\n return sum([((model.y[i]*model.v[i])/sqrt(1+model.y[i]**2)+vwind)**2/\n +model.v[i]**2*((1+model.y[i])**2) for i in range(48)])\n #f3, maximal course changes\n def f3(model):\n return sum([abs(model.y[i+1]-model.y[i]) for i in range(47)])\n\n def h1_rule(model,i):\n return sum(model.y[j] for j in range(i))<=-1\n def h2_rule(model,i):\n return abs(sum(model.y[j] for j in range(i)))>=2\n def h3_rule(model,i):\n return sum(model.y[j] for j in range(i))>=1\n def h4_rule(model):\n return sum([sqrt(1+model.y[i]**2)/model.v[i] for i in range(48)])<=25\n def h5_rule(model):\n return sum(model.y[i] for i in range(48))==0\n\n def f_rule(model):\n return t\n\n def y_init(model,i):\n if i==0:\n return -1\n if i==18:\n return -1\n if i==24:\n return 1\n if i==25:\n return 1\n if i==26:\n return 1\n if i==34:\n return -1\n return 0\n model.y = Var(range(48),bounds = (-10,10),initialize=y_init)\n model.v = Var(range(48),domain=NonNegativeReals,bounds=(min_speed,25),initialize=25)\n model.t = Var()\n model.h1=Constraint(range(9,14),rule=h1_rule)\n model.h2=Constraint(range(19,24),rule=h2_rule)\n model.h3=Constraint(range(29,34),rule=h3_rule)\n model.h4=Constraint(rule=h4_rule)\n model.h5=Constraint(rule=h5_rule)\n \n def h6_rule(model,i):\n return -10<=sum([model.y[j] for j in range(i)])<=10\n \n model.h6 = Constraint(range(1,48),rule=h6_rule)\n def t_con_f1_rule(model):\n return model.t>=(f1(model)-reference[0]-lb[0])/(ub[0]-lb[0])\n model.t_con_f1 = Constraint(rule = t_con_f1_rule)\n def t_con_f2_rule(model):\n return model.t>=(f2(model)-reference[1]-lb[1])/(ub[1]-lb[1])\n model.t_con_f2 = Constraint(rule = t_con_f2_rule)\n def t_con_f3_rule(model):\n return model.t>=(f3(model)-reference[2]-lb[2])/(ub[2]-lb[2])\n model.t_con_f3 = Constraint(rule = t_con_f3_rule)\n model.f = Objective(expr = model.t+1e-10*(f1(model)+f2(model)+f3(model)))\n tee =False\n opt = SolverFactory(\"ipopt\",solver_io=\"nl\")\n opt.options.max_iter=100000\n #opt.options.constr_viol_tol=0.01\n #opt.options.halt_on_ampl_error = \"yes\"\n\n opt.solve(model,tee=tee)\n return [[value(f1(model)),value(f2(model)),value(f3(model))],[model.y,model.v]]\n\n```\n\n\n```python\nlb_ = [0,0,0]\nub_ = [1,1,1]\nvalues =[]\nfor i in range(3):\n reference = [1e10,1e10,1e10]\n reference[i]=0\n values.append(solve_ach(reference,ub_,lb_)[0])\nprint values\n```\n\n WARNING - Loading a SolverResults object with a warning status into model=unknown; message from solver=Ipopt 3.12\\x3a Maximum Number of Iterations Exceeded.\n WARNING - Loading a SolverResults object with a warning status into model=unknown; message from solver=Ipopt 3.12\\x3a Maximum Number of Iterations Exceeded.\n [[25133.93493837363, 759264160.1226324, 860.0], [25.000000247539568, 11188.12151236145, 147.55358533550918], [12673.93579526275, 289156195.6539606, 151.2406436251979]]\n\n\n\n```python\nlb = [0,0,0]\nub = [1,1,1]\nfor i in range(3):\n lb[i] = min([values[j][i] for j in range(3)])\n ub[i] = max([values[j][i] for j in range(3)])\nprint lb\nprint ub\n```\n\n\n```python\n[f,x] = solve_ach([(a+b)/2 for (a,b) in zip(lb,ub)],lb,ub) #Compromise solution\n#[f,x] = solve_ach([1e10,1e10,0],lb,ub) #Minimize the third objective\n\n```\n\n\n```python\nimport matplotlib.pyplot as plt\nfrom matplotlib.patches import Rectangle\nplt.plot([sum(value(x[0][j]) for j in range(i)) for i in range(49)])\ncurrentAxis = plt.gca()\ncurrentAxis.add_patch(Rectangle((10, -1),4,10))\ncurrentAxis.add_patch(Rectangle((20, -2),4,4))\ncurrentAxis.add_patch(Rectangle((30, 
-10),4,11))\nplt.show()\n```\n", "meta": {"hexsha": "b4d5c746201f9f566f2139c7a9949f6d030a4344", "size": 33676, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Lecture 10, Algebraic Modeling Languages, especially Pyomo.ipynb", "max_stars_repo_name": "maeehart/TIES483", "max_stars_repo_head_hexsha": "cce5c779aeb0ade5f959a2ed5cca982be5cf2316", "max_stars_repo_licenses": ["CC-BY-3.0"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2019-04-26T12:46:14.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-23T03:38:59.000Z", "max_issues_repo_path": "Lecture 10, Algebraic Modeling Languages, especially Pyomo.ipynb", "max_issues_repo_name": "maeehart/TIES483", "max_issues_repo_head_hexsha": "cce5c779aeb0ade5f959a2ed5cca982be5cf2316", "max_issues_repo_licenses": ["CC-BY-3.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Lecture 10, Algebraic Modeling Languages, especially Pyomo.ipynb", "max_forks_repo_name": "maeehart/TIES483", "max_forks_repo_head_hexsha": "cce5c779aeb0ade5f959a2ed5cca982be5cf2316", "max_forks_repo_licenses": ["CC-BY-3.0"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2016-01-08T16:28:11.000Z", "max_forks_repo_forks_event_max_datetime": "2021-04-10T05:18:10.000Z", "avg_line_length": 37.5848214286, "max_line_length": 2057, "alphanum_fraction": 0.5428792018, "converted": true, "num_tokens": 6720, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.962673111584966, "lm_q2_score": 0.8824278556326344, "lm_q1q2_score": 0.8494895695311173}} {"text": "# SymPy\n \n \n\n[SymPy](https://es.wikipedia.org/wiki/SymPy) es una biblioteca de Python que permite realizar c\u00e1lculos simb\u00f3licos.\nNos ofrece las capacidades de \u00e1lgebra computacional, y se puede usar en l\u00ednea a trav\u00e9s de [SymPy Live](http://live.sympy.org/) o [SymPy Gamma](http://www.sympygamma.com/), este \u00faltimo es similar a\n[Wolfram Alpha](https://www.wolframalpha.com/).\n\nSi usas Anaconda este paquete ya viene instalado por defecto pero si se usa miniconda o pip debe instalarse.\n\n````python\nconda install sympy # Usando el gestor conda de Anaconda/Miniconda\npip install sympy # Usando el gestor pip (puede requerir instalar m\u00e1s paquetes)\n````\n\n\nLo primero que debemos hacer, antes de usarlo, es importar el m\u00f3dulo, como con cualquier\notra biblioteca de Python.\n\nSi deseamos usar SymPy de forma interactiva usamos\n\n```python\nfrom sympy import *\ninit_printing()\n```\n\nPara scripting es mejor importar la biblioteca de la siguiente manera\n\n```python\nimport sympy as sym\n```\n\nY llamar las funciones de la siguiente manera\n\n```python\nx = sym.Symbols(\"x\")\nexpr = sym.cos(x)**2 + 3*x\nderiv = expr.diff(x)\n```\n\nen donde calculamos la derivada de $\\cos^2(x) + 3x$,\nque debe ser $-2\\sin(x)\\cos(x) + 3$.\n\n\n```python\n%matplotlib notebook\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom sympy import *\n```\n\n\n```python\ninit_printing()\n```\n\nDefinamos la variable $x$ como un s\u00edmbolo matem\u00e1tico. Esto nos permite hacer uso de\nesta variable en SymPy.\n\n\n```python\nx = symbols(\"x\")\n```\n\nEmpecemos con c\u00e1lculos simples. 
Abajo, tenemos una _celda de c\u00f3digo_ con una suma.\nUbica el cursor en ella y presiona SHIFT + ENTER para evaluarla.\n\n\n\n```python\n1 + 3\n```\n\nRealicemos algunos c\u00e1lculos.\n\n\n```python\nfactorial(5)\n```\n\n\n```python\n1 // 3\n```\n\n\n```python\n1 / 3\n```\n\n\n```python\nS(1) / 3\n```\n\nPodemos evaluar esta expresi\u00f3n a su versi\u00f3n en punto flotante\n\n\n```python\nsqrt(2*pi)\n```\n\n\n```python\nfloat(sqrt(2*pi))\n```\n\nTambi\u00e9n podemos almacenar expresiones como variables, como cualquier variable de Python.\n\n\n\n```python\nradius = 10\nheight = 100\narea = pi * radius**2\nvolume = area * height\n```\n\n\n```python\nvolume\n```\n\n\n```python\nfloat(volume)\n```\n\nHasta ahora, hemos usado SymPy como una calculadora. Intentemos\nalgunos c\u00e1lculos m\u00e1s avanzados. Por ejemplo, algunas integrales.\n\n\n\n```python\nintegrate(sin(x), x)\n```\n\n\n```python\nintegrate(sin(x), (x, 0, pi))\n```\n\nPodemos definir una funci\u00f3n, e integrarla\n\n\n```python\nf = lambda x: x**2 + 5\n```\n\n\n```python\nf(5)\n```\n\n\n```python\nintegrate(f(x), x)\n```\n\n\n```python\ny = symbols(\"y\")\nintegrate(1/(x**2 + y), x)\n```\n\nSi asumimos que el denominador es positivo, esta expresi\u00f3n se puede simplificar a\u00fan m\u00e1s\n\n\n```python\na = symbols(\"a\", positive=True)\nintegrate(1/(x**2 + a), x)\n```\n\nHasta ahora, aprendimos lo m\u00e1s b\u00e1sico. Intentemos algunos ejemplos\nm\u00e1s complicados ahora.\n\n**Nota:** Si quieres saber m\u00e1s sobre una funci\u00f3n espec\u00edfica se puede usar\nla funci\u00f3n ``help()`` o el com\u00e1ndo _m\u00e1gico_ de IPython ``??``\n\n\n```python\nhelp(integrate)\n```\n\n\n```python\nintegrate??\n```\n\n## Ejemplos\n\n### Soluci\u00f3n de ecuaciones algebraicas\n\nPara resolver sistemas de ecuaciones algebraicos podemos usar: \n[``solveset`` and ``solve``](http://docs.sympy.org/latest/tutorial/solvers.html).\nEl m\u00e9todo preferido es ``solveset``, sin embargo, hay sistemas que\nse pueden resolver usando ``solve`` y no ``solveset``.\n\nPara resolver sistemas usando ``solveset``:\n\n\n```python\na, b, c = symbols(\"a b c\")\nsolveset(a*x**2 + b*x + c, x)\n```\n\nDebemos ingresar la expresi\u00f3n igualada a 0, o como una ecuaci\u00f3n\n\n\n```python\nsolveset(Eq(a*x**2 + b*x, -c), x)\n```\n\n``solveset`` no permite resolver sistemas de ecuaciones no lineales, por ejemplo\n\n\n\n```python\nsolve([x*y - 1, x - 2], x, y)\n```\n\n### \u00c1lgebra lineal\n\nUsamos ``Matrix`` para crear matrices. 
Las matrices pueden contener variables y expresiones matem\u00e1ticas.\n\nUsamos el m\u00e9todo ``.inv()`` para calcular la inversa, y ``*`` para multiplicar matrices.\n\n\n```python\nA = Matrix([\n [1, -1],\n [1, sin(c)]\n ])\ndisplay(A)\n```\n\n\n```python\nB = A.inv()\ndisplay(B)\n```\n\n\n```python\nA * B\n```\n\nEsta expresi\u00f3n deber\u00eda ser la matriz identidad, simplifiquemos la expresi\u00f3n.\nExisten varias formas de simplificar expresiones, y ``simplify`` es la m\u00e1s general.\n\n\n```python\nsimplify(A * B)\n```\n\n### Graficaci\u00f3n\n\nSymPy permite realizar gr\u00e1ficos 2D y 3D\n\n\n```python\nfrom sympy.plotting import plot3d\n```\n\n\n```python\nplot(sin(x), (x, -pi, pi));\n```\n\n\n```python\nmonkey_saddle = x**3 - 3*x*y**2\np = plot3d(monkey_saddle, (x, -2, 2), (y, -2, 2))\n```\n\n### Derivadas y ecuaciones diferenciales\n\nPodemos usar la funci\u00f3n ``diff`` o el m\u00e9todo ``.diff()`` para calcular derivadas.\n\n\n```python\nf = lambda x: x**2\n```\n\n\n```python\ndiff(f(x), x)\n```\n\n\n```python\nf(x).diff(x)\n```\n\n\n```python\ng = lambda x: sin(x)\n```\n\n\n```python\ndiff(g(f(x)), x)\n```\n\nY s\u00ed, \u00a1SymPy sabe sobre la regla de la cadena!\n\nPara terminar, resolvamos una ecuaci\u00f3n diferencial de segundo orden\n\n$$ u''(t) + \\omega^2 u(t) = 0$$\n\n\n```python\nt = symbols(\"t\")\nu = symbols(\"u\", cls=Function)\nomega = symbols(\"omega\", positive=True)\n```\n\n\n```python\node = u(t).diff(t, 2) + omega**2 * u(t)\ndsolve(ode, u(t))\n```\n\n## Convertir expresiones de SymPy en funciones de NumPy\n\n``lambdify`` permite convertir expresiones de sympy en funciones para hacer c\u00e1lculos usando NumPy.\n\nVeamos c\u00f3mo.\n\n\n```python\nf = lambdify(x, x**2, \"numpy\")\nf(3)\n```\n\n\n```python\nf(np.array([1, 2, 3]))\n```\n\nIntentemos un ejemplo m\u00e1s complejo\n\n\n```python\nfun = diff(sin(x)*cos(x**3) - sin(x)/x, x)\nfun\n```\n\n\n```python\nfun_numpy = lambdify(x, fun, \"numpy\")\n```\n\ny eval\u00faemoslo en alg\u00fan intervalo, por ejemplo, $[0, 5]$.\n\n\n```python\npts = np.linspace(0, 5, 1000)\nfun_pts = fun_numpy(pts + 1e-6) # Para evitar divisi\u00f3n por 0\n```\n\n\n```python\nplt.figure()\nplt.plot(pts, fun_pts)\n```\n\n## Ejercicios\n\n1. Calcule el l\u00edmite\n\n $$ \\lim_{x \\rightarrow 0} \\frac{\\sin(x)}{x}\\, .$$\n\n2. Resuelva la ecuaci\u00f3n diferencial de Bernoulli\n\n $$x \\frac{\\mathrm{d} u(x)}{\\mathrm{d}x} + u(x) - u(x)^2 = 0\\, .$$\n\n\n## Recursos adicionales\n\n- Equipo de desarrollo de SymPy. [SymPy Tutorial](http://docs.sympy.org/latest/tutorial/index.html), (2018). Consultado: Julio 23, 2018\n- Ivan Savov. [Taming math and physics using SymPy](https://minireference.com/static/tutorials/sympy_tutorial.pdf), (2017). 
Consultado: Julio 23, 2018\n\n\n```python\n\n```\n", "meta": {"hexsha": "c40fe67272cce528f04b4e72247a83235798976e", "size": 15089, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "sympy_basico.ipynb", "max_stars_repo_name": "carlosalvarezh/FEM", "max_stars_repo_head_hexsha": "e38f4fe5cfba208243687453af207ae9c0e4c55b", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "sympy_basico.ipynb", "max_issues_repo_name": "carlosalvarezh/FEM", "max_issues_repo_head_hexsha": "e38f4fe5cfba208243687453af207ae9c0e4c55b", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2022-01-25T15:10:49.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-25T15:10:49.000Z", "max_forks_repo_path": "sympy_basico.ipynb", "max_forks_repo_name": "carlosalvarezh/FEM", "max_forks_repo_head_hexsha": "e38f4fe5cfba208243687453af207ae9c0e4c55b", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-10-13T02:08:02.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-11T04:20:33.000Z", "avg_line_length": 20.5013586957, "max_line_length": 205, "alphanum_fraction": 0.5134203725, "converted": true, "num_tokens": 1964, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9149009549929797, "lm_q2_score": 0.9284088035267047, "lm_q1q2_score": 0.8494021009704719}} {"text": "# Gibbs sampling in 2D\n\nThis is BONUS content related to Day 22, where we introduce Gibbs sampling\n\n## Random variables\n\n(We'll use 0-indexing so we have close alignment between math and python code)\n\n* 2D random variable $z = [z_0, z_1]$\n* each entry $z_d$ is a real scalar: $z_d \\in \\mathbb{R}$\n\n## Target distribution\n\n\\begin{align}\np^*(z_0, z_1) = \\mathcal{N}\\left(\n \\left[ \\begin{array}{c}\n 0 \\\\ 0\n \\end{array} \\right],\n \\left[\n \\begin{array}{c c}\n 1 & 0.8 \\\\\n 0.8 & 2\n \\end{array} \\right] \\right)\n\\end{align}\n\n## Key takeaways\n\n* New concept: 'Gibbs sampling', which just iterates between two conditional sampling distributions:\n\n\\begin{align}\n z^{t+1}_0 &\\sim p^* (z_0 | z_1 = z^t_1) \\\\\n z^{t+1}_1 &\\sim p^* (z_1 | z_0 = z^{t+1}_0)\n\\end{align}\n\n## Things to remember\n\nThis is a simple example to illustrate the idea of how Gibbs sampling works.\n\nThere are other \"better\" ways of sampling from a 2d normal.\n\n\n# Setup\n\n\n```python\nimport numpy as np\n\n```\n\n\n```python\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set_style(\"whitegrid\")\nsns.set_context(\"notebook\", font_scale=2.0)\n```\n\n# Step 1: Prepare for Gibbs sampling\n\n## Define functions to sample from target's conditionals\n\n\n\n```python\ndef draw_z0_given_z1(z1, random_state):\n ## First, use Bishop textbook formulas to compute the conditional mean/var\n mean_01 = 0.4 * z1\n var_01 = 0.68\n \n ## Then, use simple transform to obtain a sample from this conditional\n ## Remember, if u ~ Normal(0, 1), a \"standard\" normal with mean 0 variance 1,\n ## then using transform: x <- T(u), with T(u) = \\mu + \\sigma * u\n ## we can say x ~ Normal(\\mu, \\sigma^2)\n u_samp = random_state.randn()\n z0_samp = mean_01 + np.sqrt(var_01) * u_samp\n return z0_samp\n```\n\n\n```python\ndef draw_z1_given_z0(z0, random_state):\n ## First, use Bishop textbook formulas to compute conditional mean/var\n mean_10 = 0.8 * z0\n var_10 = 1.36\n 
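    # The constants above follow from the target covariance [[1, 0.8], [0.8, 2.0]]:\n    # conditioning the bivariate normal gives mean_10 = (0.8/1.0)*z0 and\n    # var_10 = 2.0 - 0.8**2/1.0 = 1.36; the analogous values in draw_z0_given_z1\n    # are 0.4*z1 and 1 - 0.8**2/2.0 = 0.68.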
\n ## Then, use simple transform to obtain a sample from this conditional\n ## Remember, if u ~ Normal(0, 1), a \"standard\" normal with mean 0 variance 1,\n ## then using transform: x <- T(u), with T(u) = \\mu + \\sigma * u\n ## we can say x ~ Normal(\\mu, \\sigma^2)\n u_samp = random_state.randn()\n z1_samp = mean_10 + np.sqrt(var_10) * u_samp\n return z1_samp\n```\n\n# Step 2: Execute the Gibbs sampling algorithm\n\nPerform 6000 iterations.\n\nDiscard the first 1000 as \"not yet burned in\".\n\n\n```python\nS = 6000\nsample_list = list()\nz_D = np.zeros(2)\n\nrandom_state = np.random.RandomState(0) # reproducible random seeds\n\nfor t in range(S):\n z_D[0] = draw_z0_given_z1(z_D[1], random_state)\n z_D[1] = draw_z1_given_z0(z_D[0], random_state)\n \n if t > 1000:\n sample_list.append(z_D.copy()) # save copies so we get different vectors\n```\n\n\n```python\nz_samples_SD = np.vstack(sample_list)\n```\n\n## Step 3: Compare to samples from built-in routines for 2D MVNormal sampling\n\n\n```python\nCov_22 = np.asarray([[1.0, 0.8], [0.8, 2.0]])\ntrue_samples_SD = random_state.multivariate_normal(np.zeros(2), Cov_22, size=S-1000)\n```\n\n\n```python\nfig, ax_grid = plt.subplots(nrows=1, ncols=2, sharex=True, sharey=True, figsize=(10,4))\n\nax_grid[0].plot(z_samples_SD[:,0], z_samples_SD[:,1], 'k.')\nax_grid[0].set_title('Gibbs sampler')\nax_grid[0].set_aspect('equal', 'box');\n\nax_grid[1].plot(true_samples_SD[:,0], true_samples_SD[:,1], 'k.')\nax_grid[1].set_title('np.random.multivariate_normal')\nax_grid[1].set_aspect('equal', 'box');\nax_grid[1].set_xlim([-6, 6]);\nax_grid[1].set_ylim([-6, 6]);\n\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "1f7380ced705a12595b94e44213102fbd0e0788a", "size": 50341, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/GibbsSampling.ipynb", "max_stars_repo_name": "tufts-ml-courses/comp136-spr-20s-assignments-", "max_stars_repo_head_hexsha": "c53cce8e376862eeef395aa0b55eca8b284a0115", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-04-18T21:03:04.000Z", "max_stars_repo_stars_event_max_datetime": "2020-04-18T21:03:04.000Z", "max_issues_repo_path": "notebooks/GibbsSampling.ipynb", "max_issues_repo_name": "tufts-ml-courses/comp136-spr-20s-assignments-", "max_issues_repo_head_hexsha": "c53cce8e376862eeef395aa0b55eca8b284a0115", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/GibbsSampling.ipynb", "max_forks_repo_name": "tufts-ml-courses/comp136-spr-20s-assignments-", "max_forks_repo_head_hexsha": "c53cce8e376862eeef395aa0b55eca8b284a0115", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2020-01-28T22:47:07.000Z", "max_forks_repo_forks_event_max_datetime": "2020-04-12T23:56:18.000Z", "avg_line_length": 208.020661157, "max_line_length": 43948, "alphanum_fraction": 0.9089012932, "converted": true, "num_tokens": 1145, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9284087985746092, "lm_q2_score": 0.9149009578933863, "lm_q1q2_score": 0.8494020991325579}} {"text": "```\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy import stats as st\n```\n\n## Univariate normal distribution\n\nThe normal distribution, also known as the **Gaussian distribution**, is so called because its based on the Gaussian function . 
This distribution is defined by two parameters: the **mean** $\\mu$, which is the **expected value of the distribution**, and the **standard deviation** $\\sigma$, which corresponds to the **expected deviation from the mean**. The square of the standard deviation is typically referred to as the **variance** $\\sigma^2$. We denote this distribution as:\n\n\\begin{equation}\n$\\mathcal{N}=(\\mu,\\sigma)$\n\\end{equation}\n\nGiven this mean and variance we can calculate the **probility densitiy function** (**pdf**) of the normal distribution with the normalised Gaussian function. For a value $x$ the density is:\n\n\\begin{equation}\np(x \\mid \\mu, \\sigma) = \\frac{1}{\\sqrt{2\\pi\\sigma^2}} \\exp{ \\left( -\\frac{(x - \\mu)^2}{2\\sigma^2}\\right)}\n\\end{equation}\n\nWe call this distribution the **univariate** normal because it consists of **only one random normal variable**. Three examples of univariate normal distributions with different mean and variance are plotted in the next figure:\n\n\n```\ndef univariate_normal(x, mean, std):\n \"\"\"pdf of the univariate normal distribution.\"\"\"\n return ((1. / np.sqrt(2 * np.pi * std**2)) * np.exp(-(x - mean)**2 / (2 * std**2)))\n```\n\n\n```\n# Plot different Univariate Normals\nx = np.linspace(-5, 5, num=150)\nfig = plt.figure(figsize=(10, 5))\nplt.plot(x, univariate_normal(x, mean=0, std=1),label=\"$\\mathcal{N}(0, 1)$\")\nplt.xlabel('$x$', fontsize=13)\nplt.ylabel('density: $p(x)$', fontsize=13)\nplt.title('Univariate normal distributions')\nplt.ylim([0, 1])\nplt.xlim([-5, 5])\nplt.legend(loc=1)\n\nplt.show()\n```\n\n\n```\n# Plot different Univariate Normals\nx = np.linspace(-5, 5, num=150)\nfig = plt.figure(figsize=(10, 7))\nplt.plot(x, univariate_normal(x, mean=0, std=2),label=\"$\\mathcal{N}(0, 2)$\")\nplt.plot(x, univariate_normal(x, mean=0, std=0.5),label=\"$\\mathcal{N}(0, 0.5)$\")\nplt.plot(x, univariate_normal(x, mean=2, std=2),label=\"$\\mathcal{N}(2, 2)$\")\nplt.plot(x, univariate_normal(x, mean=2, std=0.4),label=\"$\\mathcal{N}(2, 0.4)$\")\nplt.xlabel('$x$', fontsize=13)\nplt.ylabel('density: $p(x)$', fontsize=13)\nplt.title('Univariate normal distributions')\nplt.ylim([0, 1])\nplt.xlim([-5, 5])\nplt.legend(loc=1)\n\nplt.show()\n```\n\n\n```\n# Plot different Univariate Normals\nx = np.linspace(-5, 5, num=150)\nfig = plt.figure(figsize=(10, 7))\nplt.plot(x, st.norm.pdf(x, loc=0, scale=1),label=\"$\\mathcal{N}(0, 1)$\")\nplt.plot(x, st.norm.pdf(x, loc=2, scale=2),label=\"$\\mathcal{N}(2, 3)$\")\nplt.plot(x, st.norm.pdf(x, loc=0, scale=0.5),label=\"$\\mathcal{N}(0, 0.2)$\")\nplt.xlabel('$x$', fontsize=13)\nplt.ylabel('density: $p(x)$', fontsize=13)\nplt.title('Univariate normal distributions')\nplt.ylim([0, 1])\nplt.xlim([-5, 5])\nplt.legend(loc=1)\nplt.show()\n\n\n```\n", "meta": {"hexsha": "2a620affd322b47130ea660592f631405dbf3d6f", "size": 111943, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "L06_gaussian distribuition/L06_1D gaussian.ipynb", "max_stars_repo_name": "pedrogomes-dev/MA28CP-Intro-to-Machine-Learning", "max_stars_repo_head_hexsha": "fd24017b8195a0d9ec9511071d4f8842dd596861", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 21, "max_stars_repo_stars_event_min_datetime": "2020-08-24T20:23:24.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-17T13:47:59.000Z", "max_issues_repo_path": "L06_gaussian distribuition/L06_1D gaussian.ipynb", "max_issues_repo_name": "pedrogomes-dev/MA28CP-Intro-to-Machine-Learning", "max_issues_repo_head_hexsha": 
"fd24017b8195a0d9ec9511071d4f8842dd596861", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "L06_gaussian distribuition/L06_1D gaussian.ipynb", "max_forks_repo_name": "pedrogomes-dev/MA28CP-Intro-to-Machine-Learning", "max_forks_repo_head_hexsha": "fd24017b8195a0d9ec9511071d4f8842dd596861", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 22, "max_forks_repo_forks_event_min_datetime": "2020-08-29T14:30:40.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-24T13:42:17.000Z", "avg_line_length": 111943.0, "max_line_length": 111943, "alphanum_fraction": 0.9527080746, "converted": true, "num_tokens": 894, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9736446471538803, "lm_q2_score": 0.8723473647220787, "lm_q1q2_score": 0.8493563421204458}} {"text": "#### Jupyter notebooks\n\nThis is a [Jupyter](http://jupyter.org/) notebook using Python. You can install Jupyter locally to edit and interact with this notebook.\n\n# Higher order finite difference methods\n\n## Lagrange Interpolating Polynomials\n\nSuppose we are given function values $u_0, \\dotsc, u_n$ at the distinct points $x_0, \\dotsc, x_n$ and we would like to build a polynomial of degree $n$ that goes through all these points. This explicit construction is attributed to Lagrange (though he was not first):\n\n$$ p(x) = \\sum_{i=0}^n u_i \\prod_{j \\ne i} \\frac{x - x_j}{x_i - x_j} $$\n\n* What is the degree of this polynomial?\n* Why is $p(x_i) = u_i$?\n* How expensive (in terms of $n$) is it to evaluate $p(x)$?\n* How expensive (in terms of $n$) is it to convert to standard form $p(x) = \\sum_{i=0}^n a_i x^i$?\n* Can we easily evaluate the derivative $p'(x)$?\n* What can go wrong? Is this formulation numerically stable?\n\nA general derivation of finite difference methods for approximating $p^{(k)}(x)$ using function values $u(x_i)$ is to construct the Lagrange interpolating polynomial $p(x)$ from the function values $u_i = u(x_i)$ and evaluate it or its derivatives at the target point $x$. We can do this directly from the formula above, but a more linear algebraic approach will turn out to be more reusable.\n\n#### Uniqueness\n\nIs the polynomial $p(x)$ of degree $m$ that interpolates $m+1$ points unique? Why?\n\n### Vandermonde matrices\n\nWe can compute a polynomial\n\n$$ p(x) = c_0 + c_1 x + c_2 x^2 + \\dotsb $$\n\nthat assumes function values $p(x_i) = u_i$ by solving a linear system with the Vandermonde matrix.\n\n$$ \\underbrace{\\begin{bmatrix} 1 & x_0 & x_0^2 & \\dotsb \\\\\n 1 & x_1 & x_1^2 & \\dotsb \\\\\n 1 & x_2 & x_2^2 & \\dotsb \\\\\n \\vdots & & & \\ddots \\end{bmatrix}}_V \\begin{bmatrix} c_0 \\\\ c_1 \\\\ c_2 \\\\ \\vdots \\end{bmatrix} = \\begin{bmatrix} u_0 \\\\ u_1 \\\\ u_2 \\\\ \\vdots \\end{bmatrix} .$$\n\n\n```python\n%matplotlib inline\nimport numpy\nfrom matplotlib import pyplot\npyplot.style.use('ggplot')\n\nx = numpy.linspace(-2,2,4)\nu = numpy.sin(x)\nxx = numpy.linspace(-3,3,40)\nc = numpy.linalg.solve(numpy.vander(x), u)\npyplot.plot(x, u, '*')\npyplot.plot(xx, numpy.vander(xx, 4).dot(c), label='p(x)')\npyplot.plot(xx, numpy.sin(xx), label='sin(x)')\npyplot.legend(loc='upper left');\n```\n\nGiven the coefficients $c = V^{-1} u$, we find\n\n$$ \\begin{align} p(0) &= c_0 \\\\ p'(0) &= c_1 \\\\ p''(0) &= c_2 \\cdot 2! \\\\ p^{(k)}(0) &= c_k \\cdot k! . 
\\end{align} $$\n\nTo compute the stencil coefficients $s_i^0$ for interpolation to $x=0$,\n$$ p(0) = s_0^0 u_0 + s_1^0 u_1 + \\dotsb = \\sum_i s_i^0 u_i $$\nwe can write\n$$ p(0) = e_0^T \\underbrace{V^{-1} u}_c = \\underbrace{e_0^T V^{-1}}_{(s^0)^T} u $$\nwhere $e_0$ is the first column of the identity. Evidently $s^0$ can also be expressed as\n$$ s^0 = V^{-T} e_0 . $$\nWe can compute stencil coefficients for any order derivative $p^{(k)}(0) = (s^k)^T u$ by solving the linear system\n$$ s^k = V^{-T} e_k \\cdot k! . $$\nAlternatively, invert the Vandermonde matrix $V$ and scale row $k$ of $V^{-1}$ by $k!$.\n\n\n```python\ndef fdstencil(z, x):\n x = numpy.array(x)\n V = numpy.vander(x - z, increasing=True)\n scaling = numpy.array([numpy.math.factorial(i) for i in range(len(x))])\n return (numpy.linalg.inv(V).T * scaling).T\n\nx = numpy.linspace(0,3,4)\nS = fdstencil(0, x)\nprint(S)\n\nhs = 2.**(-numpy.arange(6))\nerrors = numpy.zeros((3,len(hs)))\nfor i,h in enumerate(hs):\n z = 1 + .3*h\n S = fdstencil(z, 1+x*h)\n u = numpy.sin(1+x*h)\n errors[:,i] = S[:3].dot(u) - numpy.array([numpy.sin(z), numpy.cos(z), -numpy.sin(z)])\n\npyplot.loglog(hs, numpy.abs(errors[0]), 'o', label=\"$p(0)$\")\npyplot.loglog(hs, numpy.abs(errors[1]), '<', label=\"$p'(0)$\")\npyplot.loglog(hs, numpy.abs(errors[2]), 's', label=\"$p''(0)$\")\nfor k in (1,2,3):\n pyplot.loglog(hs, hs**k, label='$h^{%d}$' % k)\npyplot.legend(loc='upper left');\n```\n\n### Notes on accuracy\n\n* When using three points, we fit a polynomial of degree 2. The leading error term for interpolation $p(0)$ is thus $O(h^3)$.\n* Each derivative gives up one order of accuracy, therefore differencing to a general (non-centered or non-uniform grid) point is $O(h^2)$ for the first derivative and $O(h)$ for the second derivative.\n* Centered differences on uniform grids can provide cancelation, raising the order of accuracy by one. So our standard 3-point centered second derivative is $O(h^2)$ as we have seen in the Taylor analysis and numerically.\n* The Vandermonde matrix is notoriously ill-conditioned when using many points. For such cases, we recommend using a more numerically stable method from [Fornberg](https://doi.org/10.1137/S0036144596322507).\n\n\n```python\n-fdstencil(0, numpy.linspace(-1,4,6))[2]\n```\n\n\n\n\n array([-0.83333333, 1.25 , 0.33333333, -1.16666667, 0.5 ,\n -0.08333333])\n\n\n\n### Solving BVPs\n\nThis `fdstencil` gives us a way to compute derivatives of arbitrary accuracy on arbitrary grids. We will need to use uncentered rules near boundaries, usually with more points to maintain order of accuracy. This will usually cost us symmetry. Implementation of boundary conditions is the bane of high order finite difference methods.\n\n### Discretization stability measures: $h$-ellipticity\n\nConsider the test function $\\phi(\\theta, x) = e^{i\\theta x}$ and apply the difference stencil centered at an arbitrary point $x$ with element size $h=1$:\n\n$$ \\begin{bmatrix} -1 & 2 & -1 \\end{bmatrix} \\begin{bmatrix} e^{i \\theta (x - 1)} \\\\ e^{i \\theta x} \\\\ e^{i \\theta (x+1)} \\end{bmatrix}\n= \\big( 2 - (e^{i\\theta} + e^{-i\\theta}) \\big) e^{i\\theta x}= 2 (1 - \\cos \\theta) e^{i \\theta x} . $$\n\nEvidently $\\phi(\\theta,x) = e^{i \\theta x}$ is an eigenfunction of the discrete differencing operator on an infinite grid and the corresponding eigenvalue is\n$$ L(\\theta) = 2 (1 - \\cos \\theta), $$\nalso known as the \"symbol\" of the operator. 
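As a quick numerical spot-check (a minimal sketch on an integer grid, so $h=1$), applying the $[-1, 2, -1]$ stencil to samples of $e^{i\\theta x}$ reproduces this factor at every interior point:\n\n\n```python\nimport numpy\n\ntheta = 0.7\nx = numpy.arange(10)\nphi = numpy.exp(1j * theta * x)\n# apply the [-1, 2, -1] stencil at the interior points\nLphi = -phi[:-2] + 2*phi[1:-1] - phi[2:]\nprint(numpy.allclose(Lphi / phi[1:-1], 2*(1 - numpy.cos(theta))))  # True\n```\n\n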
That $\\phi(\\theta,x)$ is an eigenfunction of the discrete differencing formula will generally be true for uniform grids.\n\nThe highest frequency that is distinguishable using this stencil is $\\theta_{\\max} = \\pi$ which results in a wave at the Nyquist frequency. If a higher frequency wave is sampled onto this grid, it will be aliased back into the interval $[-\\pi, \\pi)$.\n\n\n```python\nx = numpy.linspace(-1, 1, 3)\ns2 = -fdstencil(0, x)[2]\nprint(s2)\ntheta = numpy.linspace(-numpy.pi, numpy.pi)\nphi = numpy.exp(1j*numpy.outer(x, theta))\npyplot.plot(theta, numpy.abs(s2.dot(phi)), '.')\npyplot.plot(theta, 2*(1-numpy.cos(theta)))\npyplot.plot(theta, theta**2);\n```\n\nA measure of internal stability known as $h$-ellipticity is defined by\n\n$$ E^h(L) = \\frac{\\min_{\\pi/2 \\le |\\theta| \\le \\pi} L(\\theta)}{\\max_{|\\theta| \\le \\pi} L(\\theta)} . $$\n\n* What is $E^h(L)$ for the second order \"version 2\" stencil?\n* How about for uncentered formulas and higher order?\n\n# Spectral collocation\n\nSuppose that instead of using only a fixed number of neighbors in our differencing stencil, we use all points in the domain?\n\n\n```python\nn = 10\nx = numpy.linspace(-1, 1, n)\nL = numpy.zeros((n,n))\nfor i in range(n):\n L[i] = -fdstencil(x[i], x)[2]\n\nu = numpy.cos(3*x)\npyplot.plot(x, L.dot(u), 'o')\npyplot.plot(x, 9*u);\n```\n\nWe are suffering from two problems here. The first is that the monomial basis is very ill-conditioned when using many terms. This is true as continuous functions, not just when sampled onto a particular grid.\n\n\n```python\nx = numpy.linspace(-1, 1, 50)\nV = numpy.vander(x, 15)\npyplot.plot(x, V)\nnumpy.linalg.cond(V)\n```\n\n## Chebyshev polynomials\n\nDefine $$ T_n(x) = \\cos (n \\arccos(x)) .$$\nThis turns out to be a polynomial, though it may not be obvious why.\nRecall $$ \\cos(a + b) = \\cos a \\cos b - \\sin a \\sin b .$$\nLet $y = \\arccos x$ and check\n$$ \\begin{split}\n T_{n+1}(x) &= \\cos (n+1) y = \\cos ny \\cos y - \\sin ny \\sin y \\\\\n T_{n-1}(x) &= \\cos (n-1) y = \\cos ny \\cos y + \\sin ny \\sin y\n\\end{split}$$\nAdding these together produces a similar recurrence:\n$$\\begin{split}\nT_0(x) &= 1 \\\\\nT_1(x) &= x \\\\\nT_{n+1}(x) &= 2 x T_n(x) - T_{n-1}(x)\n\\end{split}$$\nwhich we can also implement in code\n\n\n```python\ndef vander_chebyshev(x, n=None):\n if n is None:\n n = len(x)\n T = numpy.ones((len(x), n))\n if n > 1:\n T[:,1] = x\n for k in range(2,n):\n T[:,k] = 2 * x * T[:,k-1] - T[:,k-2]\n return T\n\nx = numpy.linspace(-1, 1)\nV = vander_chebyshev(x, 5)\npyplot.plot(x, V)\nnumpy.linalg.cond(V)\n```\n\n\n```python\n# We can use the Chebyshev basis for interpolation\nx = numpy.linspace(-2, 2, 4)\nu = numpy.sin(x)\nc = numpy.linalg.solve(vander_chebyshev(x), u)\npyplot.plot(x, u, '*')\npyplot.plot(xx, vander_chebyshev(xx, 4).dot(c), label='p(x)')\npyplot.plot(xx, numpy.sin(xx), label='sin(x)')\npyplot.legend(loc='upper left');\n```\n\n### Differentiation\n\nWe can differentiate Chebyshev polynomials using the recurrence\n\n$$ \\frac{T_n'(x)}{n} = 2 T_{n-1}(x) + \\frac{T_{n-2}'(x)}{n-2} $$\n\nwhich we can differentiate to evaluate higher derivatives.\n\n\n```python\ndef chebeval(z, n=None):\n \"\"\"Build matrices to evaluate the n-term Chebyshev expansion and its derivatives at point(s) z\"\"\"\n z = numpy.array(z, ndmin=1)\n if n is None:\n n = len(z)\n Tz = vander_chebyshev(z, n)\n dTz = numpy.zeros_like(Tz)\n dTz[:,1] = 1\n dTz[:,2] = 4*z\n ddTz = numpy.zeros_like(Tz)\n ddTz[:,2] = 4\n for n in range(3,n):\n dTz[:,n] = n 
* (2*Tz[:,n-1] + dTz[:,n-2]/(n-2))\n ddTz[:,n] = n * (2*dTz[:,n-1] + ddTz[:,n-2]/(n-2))\n return [Tz, dTz, ddTz]\n\nn = 44\nx = numpy.linspace(-1, 1, n)\nT = vander_chebyshev(x)\nprint('cond = {:e}'.format(numpy.linalg.cond(T)))\nTinv = numpy.linalg.inv(T)\nL = numpy.zeros((n,n))\nfor i in range(n):\n L[i] = chebeval(x[i], n)[2].dot(Tinv)\n\nu = numpy.cos(3*x)\npyplot.plot(x, L.dot(u), 'o')\nxx = numpy.linspace(-1, 1, 100)\npyplot.plot(xx, -9*numpy.cos(3*xx));\n```\n\n### Runge Effect\n\nPolynomial interpolation on equally spaced points is very ill-conditioned as the number of points grows. We've seen that in the growth of the condition number of the Vandermonde matrix, both for monomials and Chebyshev polynomials, but it's also true if the polynomials are measured in a different norm, such as pointwise values or merely the eyeball norm.\n\n\n```python\ndef chebyshev_interp_and_eval(x, xx):\n \"\"\"Matrix mapping from values at points x to values\n of Chebyshev interpolating polynomial at points xx\"\"\"\n A = vander_chebyshev(x)\n B = vander_chebyshev(xx, len(x))\n return B.dot(numpy.linalg.inv(A))\n\ndef runge1(x):\n return 1 / (1 + 10*x**2)\nx = numpy.linspace(-1,1,20)\nxx = numpy.linspace(-1,1,100)\npyplot.plot(x, runge1(x), 'o')\npyplot.plot(xx, chebyshev_interp_and_eval(x, xx).dot(runge1(x)));\n```\n\n\n```python\nx = cosspace(-1,1,8)\npyplot.plot(xx, chebyshev_interp_and_eval(x,xx))\n```\n\n\n```python\nnumpy.outer(numpy.arange(4), [10,20])\n```\n\n\n```python\nns = numpy.arange(5,20)\nconds = [numpy.linalg.cond(chebyshev_interp_and_eval(numpy.linspace(-1,1,n),\n numpy.linspace(-1,1,100)))\n for n in ns]\npyplot.semilogy(ns, conds);\n```\n\nThis ill-conditioning cannot be fixed when using polynomial *interpolation* on equally spaced grids.\n\n### Chebyshev nodes\n\nThe Chebyshev polynomials assume their maximum value of 1 at points where their derivatives are zero (plus the endpoints). Choosing the roots of $T_n'(x)$ (plus endpoints) will control the polynomials and should lead to a well-conditioned formulation.\n\n\n```python\ndef cosspace(a, b, n=50):\n return (a + b)/2 + (b - a)/2 * (numpy.cos(numpy.linspace(-numpy.pi, 0, n)))\n\nconds = [numpy.linalg.cond(chebyshev_interp_and_eval(cosspace(-1,1,n),\n numpy.linspace(-1,1,100)))\n for n in ns]\npyplot.figure()\npyplot.plot(ns, conds);\n```\n\n\n```python\nx = cosspace(-1, 1, 7)\npyplot.plot(x, 0*x, 'o')\npyplot.plot(xx, chebeval(xx, 7)[1]);\n```\n\n\n```python\nx = cosspace(-1,1,20)\nxx = numpy.linspace(-1,1,100)\npyplot.figure()\npyplot.plot(x, runge1(x), 'o')\npyplot.plot(xx, chebyshev_interp_and_eval(x, xx).dot(runge1(x)));\n```\n\n## Chebyshev solution of Boundary Value Problems\n\nIf instead of an equally (or arbitrarily) spaced grid, we choose the Chebyshev nodes and compute derivatives in a stable way (e.g., via interpolating into the Chebyshev basis), we should have a very accurate method. 
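As a quick illustration of that accuracy (a minimal sketch using NumPy's own `numpy.polynomial.chebyshev` helpers rather than the functions defined above), differentiating $\\cos(3x)$ through a Chebyshev interpolant on the extreme points converges very rapidly as points are added:\n\n\n```python\nimport numpy\nfrom numpy.polynomial import chebyshev as C\n\nfor n in (8, 16, 32):\n    xc = C.chebpts2(n)                          # Chebyshev extreme points on [-1, 1]\n    c = C.chebfit(xc, numpy.cos(3*xc), n - 1)   # interpolating expansion coefficients\n    dc = C.chebder(c)                           # coefficients of the derivative\n    err = numpy.max(numpy.abs(C.chebval(xc, dc) + 3*numpy.sin(3*xc)))\n    print(n, err)\n```\n\nThe error drops to roughly machine precision well before $n = 32$.\n\n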
Let's return to our test equation\n\n$$ -u''(x) = f(x) $$\n\nsubject to some combination of Neumann and Dirichlet boundary conditions.\n\n\n```python\ndef laplacian_cheb(n, rhsfunc, left, right):\n \"\"\"Solve the Laplacian boundary value problem on (-1,1) using n elements with rhsfunc(x) forcing.\n The left and right boundary conditions are specified as a pair (deriv, func) where\n * deriv=0 for Dirichlet u(x_endpoint) = func(x_endpoint)\n * deriv=1 for Neumann u'(x_endpoint) = func(x_endpoint)\"\"\"\n x = cosspace(-1, 1, n+1) # n+1 points is n \"elements\"\n T = chebeval(x)\n L = -T[2]\n rhs = rhsfunc(x)\n for i,deriv,func in [(0, *left), (-1, *right)]:\n L[i] = T[deriv][i]\n rhs[i] = func(x[i])\n return x, L.dot(numpy.linalg.inv(T[0])), rhs\n\nclass exact_tanh:\n def __init__(self, k=1, x0=0):\n self.k = k\n self.x0 = x0\n def u(self, x):\n return numpy.tanh(self.k*(x - self.x0))\n def du(self, x):\n return self.k * numpy.cosh(self.k*(x - self.x0))**(-2)\n def ddu(self, x):\n return -2 * self.k**2 * numpy.tanh(self.k*(x - self.x0)) * numpy.cosh(self.k*(x - self.x0))**(-2)\n \nex = exact_tanh(5, 0.3)\nx, L, rhs = laplacian_cheb(50, lambda x: -ex.ddu(x),\n left=(0,ex.u), right=(0,ex.u))\nuu = numpy.linalg.solve(L, rhs)\npyplot.plot(x, uu, 'o')\npyplot.plot(xx, ex.u(xx))\npyplot.plot(xx, chebeval(xx)[0][:,:51].dot(numpy.linalg.solve(chebeval(x)[0], uu)))\nprint(numpy.linalg.norm(numpy.linalg.solve(L, rhs) - ex.u(x), numpy.inf))\n```\n\n\n```python\ndef mms_error(n, discretize, sol):\n x, L, f = discretize(n, lambda x: -sol.ddu(x), left=(0,sol.u), right=(1,sol.du))\n u = numpy.linalg.solve(L, f)\n return numpy.linalg.norm(u - sol.u(x), numpy.inf)\n\nns = numpy.arange(10,60,2)\nerrors = [mms_error(n, laplacian_cheb, ex) for n in ns]\npyplot.figure()\npyplot.semilogy(ns, errors, 'o', label='numerical')\nfor p in range(1,5):\n pyplot.semilogy(ns, 1/ns**(p), label='$n^{-%d}$'%p)\npyplot.xlabel('n')\npyplot.ylabel('error')\n \npyplot.legend(loc='lower left');\n```\n\n# Homework 1: Due 2017-09-25\n\nUse a Chebyshev method to solve the second order ordinary differential equation\n\n$$ u''(t) + a u'(t) + b u(t) = f(t) $$\n\nfrom $t=0$ to $t=1$ with initial conditions $u(0) = 1$ and $u'(0) = 0$.\n\n1. 
Do a grid convergence study to test the accuracy of your method.\n* Setting $f(t)=0$, experiment with the values $a$ and $b$ to identify two regimes with qualitatively different dynamics.\n", "meta": {"hexsha": "535e72c8fe1205a09e06a40dc7f9d0e5d9a10c6b", "size": 276939, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "FDHighOrder.ipynb", "max_stars_repo_name": "reycronin/numpde", "max_stars_repo_head_hexsha": "499e762ffa6f91592fc444523e87f93e0693c362", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 8, "max_stars_repo_stars_event_min_datetime": "2017-11-18T00:48:54.000Z", "max_stars_repo_stars_event_max_datetime": "2018-01-23T15:25:43.000Z", "max_issues_repo_path": "FDHighOrder.ipynb", "max_issues_repo_name": "reycronin/numpde", "max_issues_repo_head_hexsha": "499e762ffa6f91592fc444523e87f93e0693c362", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "FDHighOrder.ipynb", "max_forks_repo_name": "reycronin/numpde", "max_forks_repo_head_hexsha": "499e762ffa6f91592fc444523e87f93e0693c362", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 16, "max_forks_repo_forks_event_min_datetime": "2017-08-28T16:13:41.000Z", "max_forks_repo_forks_event_max_datetime": "2018-08-08T15:37:46.000Z", "avg_line_length": 364.8735177866, "max_line_length": 53544, "alphanum_fraction": 0.9142699295, "converted": true, "num_tokens": 4770, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9099070084811306, "lm_q2_score": 0.9334308082517252, "lm_q1q2_score": 0.8493352343604511}} {"text": "# Introduction to the exponential distribution\n\n\n```python\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set()\n```\n\nIn queuing systems it is typically assumed that \n\n1. customers arrive at random\n2. service time varies randomly.\n\nThe notion of *random* is closely associated with the **exponential distribution** (note this sometimes called the *negative* exponential distribution). Its density function is given by (1).\n\n\\begin{equation}\nf(t) = \\lambda e^{-\\lambda t}\n\\tag{1}\n\\end{equation}\n\nwhere \n* $1/\\lambda$ is the mean time between events. \n* $\\lambda$ is the rate at which events occur (the expected no. of events per time unit)\n\nTo put it another way $f(t)$ represents the probability of an event occuring in the next t time units.\n\n## Illustration: $\\lambda = 1.0$\n\nThe chart below plot $f(t)$ for $\\lambda = 1.0$. It can be observed that the most likely times are the small values close to zero. The longer the time the less likely it is to occur. In other words, exponentially distributed times are most likely to be small and below the mean, but there still will be the occational long time.\n\n\n```python\nLAMBDA = 1.0\n```\n\n\n```python\n#create 100 equally spaced time points between 0 and 5\nt = np.linspace(0, 5, 100)\n```\n\n\n```python\n#calculate f_t\nf_t = LAMBDA * np.power(np.e, -LAMBDA * t)\n```\n\n\n```python\n#plot\nax = pd.DataFrame(f_t, index=t).plot(figsize=(9, 6))\nax.legend(['$\\lambda = 1.0$']);\nax.figure.savefig('exponential_1.png', bbox_inches='tight', dpi=300)\n```\n\n# Varying $\\lambda$ changes the shape of the distribution.\n\nThe area under each density curve must equal 1.0. 
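We can confirm this numerically (a small illustrative check using SciPy's `quad`, which is not otherwise imported in this notebook): integrating $f(t) = \\lambda e^{-\\lambda t}$ from 0 to $\\infty$ gives 1 for each rate.\n\n\n```\nfrom scipy.integrate import quad\n\n# the exponential pdf integrates to 1 regardless of the rate parameter\nfor _lambda in (1.0, 2.0, 3.0):\n    area, _ = quad(lambda t, lam=_lambda: lam * np.exp(-lam * t), 0, np.inf)\n    print(_lambda, area)\n```\n\n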
This has consequences for the shape of the distribution as $\\lambda$ varies in size.\nThe code below plots $f(t)$ while varying the parameter $\\lambda = 1.0, 2.0, 3.0$. We observe the following behaviour:\n\n* A larger value of $\\lambda$ leads to a more rapid decrease an asymptotic decrease to zero.\n\n\n```python\nfig, ax = plt.subplots(1, 1, figsize=(9, 6))\n\nlabels = []\nfor _lambda in range(1, 4):\n f_t = _lambda * np.power(np.e, -_lambda * t)\n ax.plot(pd.DataFrame(f_t, index=t))\n labels.append(f'$\\lambda = {float(_lambda)}$')\n\nax.set_xlabel('t')\nax.set_ylabel('f(t)')\nax.legend(labels);\nfig.savefig('exponential.png', bbox_inches='tight', dpi=300)\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "1e8f3800502e82e7ccd3801b5c6b997114910f5c", "size": 51045, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "arrival_and_service_patterns.ipynb", "max_stars_repo_name": "TomMonks/jackson-network", "max_stars_repo_head_hexsha": "bbb6fce334803b771fa57cc055fe06c5e93e3cdb", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "arrival_and_service_patterns.ipynb", "max_issues_repo_name": "TomMonks/jackson-network", "max_issues_repo_head_hexsha": "bbb6fce334803b771fa57cc055fe06c5e93e3cdb", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "arrival_and_service_patterns.ipynb", "max_forks_repo_name": "TomMonks/jackson-network", "max_forks_repo_head_hexsha": "bbb6fce334803b771fa57cc055fe06c5e93e3cdb", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 283.5833333333, "max_line_length": 27896, "alphanum_fraction": 0.9284552846, "converted": true, "num_tokens": 634, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.941654159388319, "lm_q2_score": 0.9019206692796966, "lm_q1q2_score": 0.8492973496655227}} {"text": "# Series Expansions: Fourier\n\nSeries expansions are used in a variety of circumstances:\n- When we need a tractable approximation to some ugly equation\n- To transform between equivalent ways of looking at a problem (e.g. time domain vs frequency domain)\n- When they are (part of) a solution to a particular class of differential equation\n\nFor approximations, there is an important divide between getting the best fit *near a point* (e.g. Taylor series) and getting the best fit *over an interval* (e.g. Fourier series). This notebook deals with the latter; there is a separate notebook for Taylor expansions, and others for Bessel, Legendre, etc.\n\n## Fitting over an interval\n\nWhat is the best (tractable) series approximating my function across some range of values? What matters is an overall best fit (e.g. least-squares deviation) across the range, and we can't tolerate wild divergences as with the Taylor series.\n\nThere are various series which are useful in different contexts, but a common property is that the terms are *orthogonal* over some interval $[-L,L]$. If $f(t)$ is a real-valued function their *inner product* is defined as\n\n$$ \\langle f(m t),f(n t) \\rangle \\colon =\\int _{-L}^L f(m t) f(n t) \\, dt $$\n\nFor orthogonal functions, this is non-zero if $m=n$ and zero if $m \\ne n$. 
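For example (a small illustrative check), taking $f$ to be the sine function and $L = \\pi$, SymPy confirms that the inner product vanishes whenever $m \\ne n$:\n\n\n```python\nfrom sympy import integrate, pi, sin, cos\nfrom sympy.abc import t\n\nprint(integrate(sin(2*t)*sin(3*t), (t, -pi, pi)))   # 0: orthogonal (m != n)\nprint(integrate(sin(3*t)*sin(3*t), (t, -pi, pi)))   # pi: non-zero (m == n)\nprint(integrate(cos(2*t)*cos(5*t), (t, -pi, pi)))   # 0: cosines are orthogonal too\n```\n\nThe self inner product is $\\pi$ rather than 1, so these sines are orthogonal but not normalised.\n\n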
If the inner product is $\\delta_{mn}$ (the Kronecker delta), the fuctions are said to be orthonormal.\n\n# Differential Equations\n\nThe ODE $y'' + n^2 y = 0$ is solved by all the period functions $\\sin(n x)$, $\\cos(n x)$ and $e^{\\pm i n x}$. There is thus a close analogy to functions such as Bessel and Legendre, though sine and cosine have become much more familar to most of us for other (geometric) reasons. They have the nice property of evenly-spaced zeros, unlike Bessel functions. For example, $\\sin(x)=0$ for $x = n \\pi$ where n is any integer.\n\nThe use of sin/cos or complex exponentials is also exceptionally familiar in series expansions, mainly because they are so useful in engineering and communications.\n\n## Fourier Series and Fourier Analysis\n\nA periodic function $f$ of period $2L$ can be approximated by a Fourier Series of sines and cosines:\n\n$$ f(t) = \\frac{a_0}{2} + \\sum _{n \\ge 1} a_ n \\cos \\frac{n \\pi t}{L} + \\sum _{n \\ge 1} b_ n \\sin \\frac{n \\pi t}{L} $$\n\nTo find the coefficients:\n$$\n\\begin{align*}\n\t\\frac{a_0}{2} &= \\displaystyle \\frac{1}{2L} \\int _{-L}^{L} f(t) \\, dt = \\frac{\\langle f(t), 1 \\rangle }{\\langle 1, 1\\rangle }\\\\[6pt]\n\ta_ n& = \\frac{1}{L} \\int _{-L}^{L} f(t) \\cos \\frac{n \\pi t}{L} \\, dt = \\frac{\\langle f(t),\\cos \\left(\\frac{n \\pi }{L} t\\right)\\rangle }{\\langle \\cos \\left(\\frac{n \\pi }{L} t\\right), \\cos \\left(\\frac{n \\pi }{L} t\\right)\\rangle } \\\\[10pt]\n\tb_ n &= \\displaystyle \\frac{1}{L} \\int _{-L}^{L} f(t) \\sin \\frac{n \\pi t}{L} \\, dt = \\frac{\\langle f(t),\\sin \\left(\\frac{n \\pi }{L} t\\right)\\rangle }{\\langle \\sin \\left(\\frac{n \\pi }{L} t\\right), \\sin \\left(\\frac{n \\pi }{L} t\\right)\\rangle }\n\\end{align*}\n$$\n\nEquivalently, we can express the Fourier Series as complex exponentials:\n\n$$ f\\left(t\\right) = \\sum _{n = -\\infty }^{\\infty } c_{n} e^{i n t}, \\qquad c_{n} \\colon =\\frac{a_{n} - i b_{n}}{2} \\quad \\text{ and } \\quad c_{-n} \\colon =\\bar{c}_{n} = \\frac{a_{n} + i b_{n}}{2} $$\n\nReal-world situations tend not to give infinitely periodic functions, so Fourier Analysis can be thought of as the limit as $L$ goes to infinity of a periodic signal of period $2L$. As $L$ increases, the spacing between the frequencies in our sum are approaching zero. This turns the sum into an integral in the limit, and we have the equations:\n \n$$ f(t) = \\int _{-\\infty }^{\\infty } \\widehat{f}\\left(k\\right)e^{ i k t} \\, dk \\quad \\text{where} \\quad \\widehat{f} = \\frac{1}{2\\pi }\\int _{-\\infty }^{\\infty } f\\left(t\\right)e^{- i k t} \\, dt $$\n\nWe call $\\widehat{f}$ the **Fourier transform** of $f(t)$.\n\nStart with quite a lot of imports ready for calculation and plotting:\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom ipywidgets import interact, interactive, fixed, interact_manual, Layout\nimport ipywidgets as w\n\nplt.rcParams.update({'font.size': 16})\n\nfrom sympy import fourier_series, pi, init_printing, lambdify, integrate\ninit_printing()\nfrom sympy.functions import sin, cos\nfrom sympy.abc import x\n\nfrom sympy.parsing.sympy_parser import parse_expr\nfrom sympy.parsing.sympy_parser import standard_transformations, implicit_multiplication_application\ntransformations = (standard_transformations + (implicit_multiplication_application,))\n```\n\nSymPy has a FourierSeries class which looked like it would do what we need, very simply. 
The results were disappointing: `fourier_series()` returns a very complicated result for even simple functions, and attempts to lambdify this never terminated.\n\nThe `fourier_coeff()` function below calculates $a_n$ and $b_n$ by integration and returns them as NumPy arrays to 5 figure accuracy. Note that this is only for illustrating the effect of adding more terms: the calculation is slow and clunky, and nobody would do Fourier transforms this way for real problems.\n\n\n```python\n# Fourier series, to order n\ndef fourier_coeff(f, L, n):\n # f is a SymPy function of x\n a_n = []\n b_n = [0,]\n a_n.append((1/(2*L)*integrate(f, (x, -L, L))).evalf(5))\n for i in range(1, n+1):\n a_n.append((1/L*integrate(f*cos(i*pi/L*x), (x, -L, L))).evalf(5))\n b_n.append((1/L*integrate(f*sin(i*pi/L*x), (x, -L, L))).evalf(5))\n \n # SymPy is VERY reluctant to give us simple numbers rather than SymPy objects\n # Force the conversion to NumPy floats with np.array().astype()\n return (np.array(a_n).astype(np.float64), np.array(b_n).astype(np.float64))\n```\n\n\n```python\n# Plot results\ndef plotFourier(f_sympy, L):\n \n max_terms = 20\n \n # get a NumPy-style function from the SymPy version\n f_np = lambdify(x, f_sympy, 'numpy')\n display(f_sympy) # display shows LaTex, print wouldn't\n \n # plot the starting function\n x_lims = [-L,L]\n x1 = np.linspace(x_lims[0], x_lims[1], 100)\n fig = plt.figure(figsize=(9, 20))\n \n ax1 = fig.add_subplot(311)\n ax1.plot(x1, f_np(x1), 'k.', label=f_sympy)\n \n # get some terms of a Fourier series \n# f_fourier = fourier_series(f_sympy, (x, -L, L))\n# f_fourier.truncate(4)\n# display(f_fourier)\n\n a_n, b_n = fourier_coeff(f_sympy, L, max_terms)\n \n ax2 = fig.add_subplot(312)\n x_int = range(0, len(a_n))\n# ax2.stem(a_n, 'k.', label='a_n')\n ax2.stem(x_int, a_n, markerfmt='C0o', label='a_n')\n ax2.stem(x_int, b_n, markerfmt='C1o', label='b_n')\n ax2.set_xlim(left=-0.5)\n ax2.set_ylabel('coefficients')\n# ax2.xaxis.set_major_locator(MaxNLocator(integer=True)) # fails\n ax2.legend()\n \n # plot the successive approximations\n for n in [0,1,2,3,5,10,20]:\n if n > max_terms:\n break\n y = np.zeros(len(x1))\n for i in range(n+1):\n cos_term = a_n[i]*np.cos(i*np.pi/L*x1)\n sin_term = b_n[i]*np.sin(i*np.pi/L*x1)\n y += (cos_term + sin_term)\n ax1.plot(x1, y, label='order ' + str(n))\n\n # graph housekeeping\n ax1.set_xlim(x_lims)\n ax1.set_xlabel('x')\n ax1.set_ylabel('y')\n ax1.legend()\n ax1.grid(True)\n plt.title('Fourier series approximation of ' + str(f_sympy))\n```\n\n\n```python\ndef parse_input(f_txt, L):\n f_sympy = parse_expr(f_txt, transformations=transformations)\n plotFourier(f_sympy, L)\n```\n\nPlease wait patiently for each new calculation, which will take several seconds at best.\n\n\n```python\nstyle = {'description_width': 'initial'} # to avoid the labels getting truncated\ninteract(parse_input, \n f_txt = w.Text(description='f(x):',\n layout=Layout(width='80%'),\n continuous_update=False,\n value='x**2 - x**3'),\n L = w.FloatSlider(description=\"Limits $\\pm L$\", style=style,\n layout=Layout(width='80%'),\n continuous_update=False,\n min=1, max=10, \n value=np.pi),\n);\n```\n\n\n interactive(children=(Text(value='x**2 - x**3', continuous_update=False, description='f(x):', layout=Layout(wi\u2026\n\n\n\n```python\n\n```\n\n### Discrete Fourier transforms\n\nThe mathematics of Fourier analysis goes back to the early 19th century, but its use has exploded in the last few decades. 
A couple of factors collided to drive this:\n\n- An efficient Fast Fourier Transform (FFT) algorithm, developed in the 1960s and implemented in both software and specialist hardware\n- The spread of digital technology, for audio, video and many other sorts of discretized signals. These are all perfect inputs for FFT.\n\nFFT gets away from complicated integrals and replaces them with a series of simple multiplications and additions. This gives a computation time of $\\mathcal{O}(N \\log N)$ for a signal with $N$ data points. Fast, as the name suggests! And your cellphone is doing millions of these calculations whenever you use it (for anything at all).\n\nThere are many FFT functions in the `numpy.fft` module, but for real input we will use the `rfft()` function and its inverse, `irfft()`. This avoids calculating and storing half the coefficients, which for real input are just complex conjugates of the other half.\n\nZeroing all but the first few coefficients before the inverse FFT simulates the effect of using few terms in the Fourier series.\n\n\n```python\ndef plotFFT(f_sympy, L):\n \n n_pts = 100\n \n # get a NumPy-style function from the SymPy version\n f_np = lambdify(x, f_sympy, 'numpy')\n \n # discretize the function (for fft, not just for plotting)\n x_lims = [-L,L]\n x1 = np.linspace(x_lims[0], x_lims[1], n_pts)\n y = f_np(x1)\n \n # plot the starting function\n fig = plt.figure(figsize=(9, 20))\n ax1 = fig.add_subplot(311)\n ax1.plot(x1, f_np(x1), 'k.', label=f_sympy)\n \n display(f_sympy) # display shows LaTex, print wouldn't\n \n f_fft = np.fft.rfft(y)\n\n # get a_n and b_n from complex coefficients in f_fft\n# print(f_fft.shape)\n# print(f_fft)\n a_n = np.real(f_fft[:50] + np.conj(f_fft[:50]))\n a_n[0] = a_n[0]/2\n b_n = np.imag(f_fft[:50] - np.conj(f_fft[:50]))\n \n ax2 = fig.add_subplot(312)\n x_int = range(0, len(a_n))\n ax2.stem(x_int, a_n, markerfmt='C0o', label='real')\n ax2.stem(x_int, b_n, markerfmt='C1o', label='imag')\n# ax2.set_xlim(left=-0.5)\n ax2.set_ylabel('coefficients')\n# ax2.xaxis.set_major_locator(MaxNLocator(integer=True)) # fails\n ax2.legend()\n\n# # plot the successive approximations - FAILS\n for i in [0,1,2,3,10,20]:\n fft_i = np.zeros(len(f_fft), dtype=np.cfloat)\n np.put(fft_i, range(i+1), f_fft[:i+1])\n y_i = np.real(np.fft.irfft(fft_i))\n ax1.plot(x1, y_i, label='order ' + str(i))\n \n y_i = np.real(np.fft.irfft(f_fft))\n ax1.plot(x1, y_i, label='full inverse FFT')\n \n # graph housekeeping\n ax1.set_xlim(x_lims)\n# plt.ylim([-3,3])\n ax1.set_xlabel('x')\n ax1.set_ylabel('y')\n ax1.legend()\n ax1.grid(True)\n plt.title('FFT series approximation of ' + str(f_sympy))\n```\n\n\n```python\ndef parse_input_fft(f_txt, L):\n # we could just run eval(f_txt), \n # but parse_expr() followed by lambdify() is probably safer\n f_sympy = parse_expr(f_txt, transformations=transformations)\n plotFFT(f_sympy, L)\n```\n\n\n```python\nstyle = {'description_width': 'initial'} # to avoid the labels getting truncated\ninteract(parse_input_fft, \n f_txt = w.Text(description='f(x):',\n layout=Layout(width='80%'),\n continuous_update=False,\n value='x**2 - x**3'),\n L = w.FloatSlider(description=\"Limits $\\pm L$\", style=style,\n layout=Layout(width='80%'),\n continuous_update=False,\n min=1, max=10, \n value=np.pi),\n );\n```\n\n\n interactive(children=(Text(value='x**2 - x**3', continuous_update=False, description='f(x):', layout=Layout(wi\u2026\n\n\n\n```python\n\n```\n\n\n```python\ndef plotFFT_discontinuous(waveform='square', L=np.pi):\n \n n_pts = 100\n \n # 
discretize the function (for fft, not just for plotting)\n x_lims = [-L,L]\n x1 = np.linspace(x_lims[0], x_lims[1], n_pts)\n \n if waveform == 'square':\n y = np.sign(x1)\n elif waveform=='triangle':\n y = np.abs(x1)\n \n # plot the starting function\n fig = plt.figure(figsize=(9, 20))\n ax1 = fig.add_subplot(311)\n ax1.plot(x1, y, 'r.', label=waveform)\n \n f_fft = np.fft.rfft(y)\n\n # get a_n and b_n from complex coefficients in f_fft\n# print(f_fft.shape)\n# print(f_fft)\n a_n = np.real(f_fft[:50] + np.conj(f_fft[:50]))\n a_n[0] = a_n[0]/2\n b_n = np.imag(f_fft[:50] - np.conj(f_fft[:50]))\n \n ax2 = fig.add_subplot(312)\n x_int = range(0, len(a_n))\n ax2.stem(x_int, a_n, markerfmt='C0o', label='a_n')\n ax2.stem(x_int, b_n, markerfmt='C1o', label='b_n')\n# ax2.set_xlim(left=-0.5)\n ax2.set_ylabel('coefficients')\n# ax2.xaxis.set_major_locator(MaxNLocator(integer=True)) # fails\n ax2.legend()\n\n y_i = np.real(np.fft.irfft(f_fft))\n ax1.plot(x1, y_i, label='full inverse FFT')\n \n # graph housekeeping\n ax1.set_xlim(x_lims)\n# plt.ylim([-3,3])\n ax1.set_xlabel('x')\n ax1.set_ylabel('y')\n ax1.legend()\n ax1.grid(True)\n plt.title('FFT approximation of ' + waveform + ' wave')\n```\n\n\n```python\nplotFFT_discontinuous('triangle')\n```\n\n\n```python\nplotFFT_discontinuous('square')\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n\n## References\n\nBoas, \"Mathematical methods in the physical sciences\"\n\n\n```python\n\n```\n", "meta": {"hexsha": "d1de751cdee70da2c76b78827c2e57b79423ef27", "size": 92431, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "math/Fourier.ipynb", "max_stars_repo_name": "colinleach/astro-Jupyter", "max_stars_repo_head_hexsha": "8d7618068f0460ff0c514075ce84d2bda31870b6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-06-06T15:35:35.000Z", "max_stars_repo_stars_event_max_datetime": "2020-06-06T15:35:35.000Z", "max_issues_repo_path": "math/Fourier.ipynb", "max_issues_repo_name": "colinleach/astro-Jupyter", "max_issues_repo_head_hexsha": "8d7618068f0460ff0c514075ce84d2bda31870b6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-06-08T11:44:15.000Z", "max_issues_repo_issues_event_max_datetime": "2019-06-10T17:42:32.000Z", "max_forks_repo_path": "math/Fourier.ipynb", "max_forks_repo_name": "colinleach/astro-Jupyter", "max_forks_repo_head_hexsha": "8d7618068f0460ff0c514075ce84d2bda31870b6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-04-14T15:28:43.000Z", "max_forks_repo_forks_event_max_datetime": "2021-04-14T15:28:43.000Z", "avg_line_length": 163.3056537102, "max_line_length": 42268, "alphanum_fraction": 0.8690915386, "converted": true, "num_tokens": 3964, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9416541544761566, "lm_q2_score": 0.9019206699387734, "lm_q1q2_score": 0.8492973458557644}} {"text": "# Code Samples\n\n\n```python\nimport sympy as sp\nimport numpy\nimport pandas\n```\n\n\n```python\nsp.init_printing()\n```\n\n\n```python\nx, y, z, k1, k2, k3 = sp.symbols(\"x, y, z, k1, k2, k3\")\n```\n\n\n```python\nsp.solveset(sp.sin(x) - 1, x)\n```\n\n\n```python\nmatrix = sp.Matrix([sp.sin(x) -1, sp.cos(y) -1 ])\nmatrix\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}\\sin{\\left(x \\right)} - 1\\\\\\cos{\\left(y \\right)} - 1\\end{matrix}\\right]$\n\n\n\n\n```python\nsp.solve(matrix)\n```\n\n\n```python\nkinetics = sp.Matrix([k1*x*y - 3, k2*x/(1 -x) - 4])\n```\n\n\n```python\nsp.nonlinsolve(kinetics, [x,y])\n```\n\n\n```python\nsp.plot(2*x**2 + 3*x)\n```\n\n\n```python\nfrom sympy.plotting import plot3d_parametric_line\n\nt = sp.symbols('t')\nalpha = [sp.cos(t), sp.sin(t), t]\nplot3d_parametric_line(*alpha)\n\n```\n\n\n```python\n# Plots for the reaction flux\n# x + y -> z; k1*x*y\nflux = sp.Matrix([x, y, k1*x*y])\nflux_plot = flux.subs({k1: 3})\nplot3d_parametric_line(x, x**2, 3*x**3)\n```\n\n\n```python\nf, g = sp.symbols('f g', cls=sp.Function)\ndiffeq = sp.Eq(f(x).diff(x, x) - 2*f(x).diff(x) + f(x), sp.sin(x))\ndiffeq\n```\n\n\n```python\nresult = sp.dsolve(diffeq, f(x))\n```\n\n\n```python\nsyms = list(result.free_symbols)\nsyms[0]\nresult1 = result.subs({syms[0]: 1, syms[1]: 1})\nsp.plot(result1.rhs)\n```\n\n\n```python\nsp.solve(x**2 - 2*x + 1, x)\n```\n\n\n```python\nresult1.rhs\n```\n\n# Workflow for Solving LTI Systems\n1. Given $A, B, C$, find \n 1. $e^{At}$\n 1. $\\int_0^t e^{A(t - \\tau)} u(\\tau) d \\tau$ for \n $u(\\tau) \\in \\{ \\delta(t), 1(t), t \\} $\n 1. $x(t)$\n 1. $y(t)$\n\n1. Plot $x$, $y$\n\n1. Solve for observability, controllability\n\n# Workflow for Reaction Networks\n1. Simulate the original model\n1. Convert model to sympy\n1. Get symbolic Jaccobian\n1. Construct LTI models for different points in state space\n", "meta": {"hexsha": "8de1223ec4f07121645e6f249abbf1faad9e9b5f", "size": 184863, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "save/MatrixExponential.ipynb", "max_stars_repo_name": "joseph-hellerstein/advanced-controls-lectures", "max_stars_repo_head_hexsha": "dc43f6c3517616da3b0ea7c93192d911414ee202", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "save/MatrixExponential.ipynb", "max_issues_repo_name": "joseph-hellerstein/advanced-controls-lectures", "max_issues_repo_head_hexsha": "dc43f6c3517616da3b0ea7c93192d911414ee202", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "save/MatrixExponential.ipynb", "max_forks_repo_name": "joseph-hellerstein/advanced-controls-lectures", "max_forks_repo_head_hexsha": "dc43f6c3517616da3b0ea7c93192d911414ee202", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 366.0653465347, "max_line_length": 71514, "alphanum_fraction": 0.9269188534, "converted": true, "num_tokens": 663, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9458012655937034, "lm_q2_score": 0.897695298265595, "lm_q1q2_score": 0.8490413492171168}} {"text": "# Fisher Discriminant Analysis\n\nFirst off we import necessary libraries.\n\n\n```python\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nimport random\n\nfrom numpy.linalg import inv, eig\n```\n\nWe define the number of points in each class, covariance matrix and mean of every class in accordance with the requirements of the exercise.\n\n\n```python\nn = 30\ncov = np.array([[1, 0], [0, 1]])\nmeans = {'A': np.array([-1, 1]), 'B': np.array([2, 4]), 'C': np.array([-2, 2])}\n```\n\nWe generate the data from multivariate normal distribution and insert it into the dataframe\n\n\n```python\ndata = pd.DataFrame(index=range(n * len(means)), columns=['x', 'y', 'label'])\nfor i, (label, mean) in enumerate(means.items()):\n data.loc[i*n:((i+1)*n)-1, ['x', 'y']] = np.random.multivariate_normal(mean, cov, n)\n data.loc[i*n:((i+1)*n)-1, ['label']] = label\n```\n\nNext we define a function that plots the points from the dataframe.\n\n\n```python\ndef plot_points(data):\n fig = plt.figure(figsize=(8, 6))\n sns.set_style('white')\n sns.set_palette('muted')\n sns.scatterplot(data=data, x='x', y='y', hue='label', legend=False, edgecolor='black').set(xlim=(-5, 5), ylim=(-2, 6))\n```\n\n\n```python\nplot_points(data)\n```\n\n## Determination of optimal direction\n\nBefore we move on to computing of optimal direction, we should update means of the classes and covariance matrix corresponding to the generated points.\n\n\n```python\nmeans = {label: np.array(np.mean(data.loc[data['label'] == label])) for label in means.keys()}\ncov = {label: np.cov(data.loc[data['label'] == label][['x', 'y']].values.T.astype(float)) for label in means.keys()}\n```\n\nWe compute a mean of all the points without distinguishing classes. 
Next we compute between class covariance matrix based on the equation:\n\n\\begin{equation}\n\\boldsymbol{B} = \\frac{1}{g - 1}\\sum_{k=1}^{g}n_{k}(\\boldsymbol{m_{k}} - \\boldsymbol{m})(\\boldsymbol{m_{k}} - \\boldsymbol{m})^{T}\n\\end{equation}\n\nand within class covariance matrix based on the equation:\n\n\\begin{equation}\n\\boldsymbol{W} = \\frac{1}{n - g}\\sum_{k=1}^{g}(n_{k} - 1)\\boldsymbol{S_{k}}.\n\\end{equation}\n\nHowever we can simplify the above expression into the form:\n\n\\begin{equation}\n\\boldsymbol{W} = \\frac{1}{g}\\sum_{k=1}^{g}\\boldsymbol{S_{k}}\n\\end{equation}\n\nbecause number of points is identical for every class.\n\n\n```python\nmean = np.mean(np.array([means[i] for i in means]), axis=0)\nouter_products = [np.outer(value - mean, value - mean) for value in means.values()]\nbc_cov = sum(n * outer_products) / (len(means) - 1)\nwc_cov = np.mean(np.array([cov[i] for i in cov]), axis=0)\n```\n\nWe define the matrix $\\boldsymbol{U}$:\n\n\\begin{equation}\n\\boldsymbol{U} = \\boldsymbol{W^{-1}}\\boldsymbol{B},\n\\end{equation}\n\ndetermine eigenvalues and eigenvectors of that matrix and choose the eigenvector corresponding to the maximum eigenvalue.\n\n\n```python\nU = np.dot(inv(wc_cov), bc_cov)\neig_values, eig_vectors = eig(U)\na = eig_vectors[:, np.argmax(eig_values)]\n```\n\n## Projection of points on the line and decision boundary\n\nHaving vector $\\boldsymbol{a}$ that maximizes the expression:\n\n\\begin{equation}\n\\boldsymbol{J} = \\frac{\\boldsymbol{a^{T}Ba}}{\\boldsymbol{a^{T}Wa}}\n\\end{equation}\n\nit's possible to figure out the parameters of discriminant hyperplane.\n\n\n```python\nslope = a[1] / a[0]\nintercept = -0.5 * np.dot(a.T, np.sum(np.array([means[i] for i in means]), axis=0))\n```\n\nNow we can plot the line which the points will be projected on.\n\n\n```python\nx = np.arange(-5, 5, 0.1)\ny = slope * x\n\nplot_points(data)\nsns.lineplot(x, y, color='red', linewidth=0.75);\n```\n\nAfter that we define a function that projects the points on the line.\n\n\n```python\ndef projection(data, slope):\n data_cp = data.copy()\n data_cp['x'] = (data['x'] + slope * data['y']) / (slope**2 + 1)\n data_cp['y'] = slope * data_cp['x']\n return data_cp\n```\n\n\n```python\nproj_data = projection(data, slope)\n```\n\nFinally we can plot the projected points and the decision boundary that satisfies the equation:\n\n\\begin{equation}\na_{0}x + a_{1}y + b = 0,\n\\end{equation}\n\nwhere $a_{0}$ and $a_{1}$ are the elements of vector $\\boldsymbol{a}$ and $b$ is the intercept.\n\n\n```python\nplot_points(data)\nsns.lineplot(x, y, color='red', linewidth=0.75)\nsns.scatterplot(data=proj_data, x='x', y='y', hue='label', legend=False, marker='P', s=75, edgecolor='black')\nsns.lineplot(x, -1 / slope * x - intercept / a[1], color='black', linewidth=2);\n```\n\nAs we can see it's impossible to separate all three classes satisfying the requirements of the exercise with Fisher Discriminant Analysis method.\n", "meta": {"hexsha": "21d8cb46bef022a8d8071ab508fd61800252669c", "size": 122163, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Exercise 0/Fisher Discriminant Analysis.ipynb", "max_stars_repo_name": "mickuz/lsed", "max_stars_repo_head_hexsha": "c7aa1dc0544e971dcc405425cf131d180b6721de", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Exercise 0/Fisher Discriminant Analysis.ipynb", "max_issues_repo_name": 
"mickuz/lsed", "max_issues_repo_head_hexsha": "c7aa1dc0544e971dcc405425cf131d180b6721de", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Exercise 0/Fisher Discriminant Analysis.ipynb", "max_forks_repo_name": "mickuz/lsed", "max_forks_repo_head_hexsha": "c7aa1dc0544e971dcc405425cf131d180b6721de", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 347.0539772727, "max_line_length": 50420, "alphanum_fraction": 0.9338588607, "converted": true, "num_tokens": 1330, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9504109784205503, "lm_q2_score": 0.8933094131553265, "lm_q1q2_score": 0.8490110733892415}} {"text": "# Transformations, Eigenvectors, and Eigenvalues\n\nMatrices and vectors are used together to manipulate spatial dimensions. This has a lot of applications, including the mathematical generation of 3D computer graphics, geometric modeling, and the training and optimization of machine learning algorithms. We're not going to cover the subject exhaustively here; but we'll focus on a few key concepts that are useful to know when you plan to work with machine learning.\n\n## Linear Transformations\nYou can manipulate a vector by multiplying it with a matrix. The matrix acts a function that operates on an input vector to produce a vector output. Specifically, matrix multiplications of vectors are *linear transformations* that transform the input vector into the output vector.\n\nFor example, consider this matrix ***A*** and vector ***v***:\n\n$$ A = \\begin{bmatrix}2 & 3\\\\5 & 2\\end{bmatrix} \\;\\;\\;\\; \\vec{v} = \\begin{bmatrix}1\\\\2\\end{bmatrix}$$\n\nWe can define a transformation ***T*** like this:\n\n$$ T(\\vec{v}) = A\\vec{v} $$\n\nTo perform this transformation, we simply calculate the dot product by applying the *RC* rule; multiplying each row of the matrix by the single column of the vector:\n\n$$\\begin{bmatrix}2 & 3\\\\5 & 2\\end{bmatrix} \\cdot \\begin{bmatrix}1\\\\2\\end{bmatrix} = \\begin{bmatrix}8\\\\9\\end{bmatrix}$$\n\nHere's the calculation in Python:\n\n\n```python\nimport numpy as np\n\nv = np.array([1,2])\nA = np.array([[2,3],\n [5,2]])\n\nt = A@v\nprint (t)\n```\n\nIn this case, both the input vector and the output vector have 2 components - in other words, the transformation takes a 2-dimensional vector and produces a new 2-dimensional vector; which we can indicate like this:\n\n$$ T: \\rm I\\!R^{2} \\to \\rm I\\!R^{2} $$\n\nNote that the output vector may have a different number of dimensions from the input vector; so the matrix function might transform the vector from one space to another - or in notation, ${\\rm I\\!R}$n -> ${\\rm I\\!R}$m.\n\nFor example, let's redefine matrix ***A***, while retaining our original definition of vector ***v***:\n\n$$ A = \\begin{bmatrix}2 & 3\\\\5 & 2\\\\1 & 1\\end{bmatrix} \\;\\;\\;\\; \\vec{v} = \\begin{bmatrix}1\\\\2\\end{bmatrix}$$\n\nNow if we once again define ***T*** like this:\n\n$$ T(\\vec{v}) = A\\vec{v} $$\n\nWe apply the transformation like this:\n\n$$\\begin{bmatrix}2 & 3\\\\5 & 2\\\\1 & 1\\end{bmatrix} \\cdot \\begin{bmatrix}1\\\\2\\end{bmatrix} = \\begin{bmatrix}8\\\\9\\\\3\\end{bmatrix}$$\n\nSo now, our transformation transforms the vector from 2-dimensional space to 3-dimensional space:\n\n$$ T: \\rm I\\!R^{2} \\to \\rm 
I\\!R^{3} $$\n\nHere it is in Python:\n\n\n```python\nimport numpy as np\nv = np.array([1,2])\nA = np.array([[2,3],\n [5,2],\n [1,1]])\n\nt = A@v\nprint (t)\n```\n\n\n```python\nimport numpy as np\nv = np.array([1,2])\nA = np.array([[1,2],\n [2,1]])\n\nt = A@v\nprint (t)\n```\n\n## Transformations of Magnitude and Amplitude\n\nWhen you multiply a vector by a matrix, you transform it in at least one of the following two ways:\n* Scale the length (*magnitude*) of the vector to make it longer or shorter\n* Change the direction (*amplitude*) of the vector\n\nFor example, consider the following matrix and vector:\n\n$$ A = \\begin{bmatrix}2 & 0\\\\0 & 2\\end{bmatrix} \\;\\;\\;\\; \\vec{v} = \\begin{bmatrix}1\\\\0\\end{bmatrix}$$\n\nAs before, we transform the vector ***v*** by multiplying it with the matrix ***A***:\n\n\\begin{equation}\\begin{bmatrix}2 & 0\\\\0 & 2\\end{bmatrix} \\cdot \\begin{bmatrix}1\\\\0\\end{bmatrix} = \\begin{bmatrix}2\\\\0\\end{bmatrix}\\end{equation}\n\nIn this case, the resulting vector has changed in length (*magnitude*), but has not changed its direction (*amplitude*).\n\nLet's visualize that in Python:\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nv = np.array([1,0])\nA = np.array([[2,0],\n [0,2]])\n\nt = A@v\nprint (t)\n\n# Plot v and t\nvecs = np.array([t,v])\norigin = [0], [0]\nplt.axis('equal')\nplt.grid()\nplt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))\nplt.quiver(*origin, vecs[:,0], vecs[:,1], color=['blue', 'orange'], scale=10)\nplt.show()\n```\n\nThe original vector ***v*** is shown in orange, and the transformed vector ***t*** is shown in blue - note that ***t*** has the same direction (*amplitude*) as ***v*** but a greater length (*magnitude*).\n\nNow let's use a different matrix to transform the vector ***v***:\n\\begin{equation}\\begin{bmatrix}0 & -1\\\\1 & 0\\end{bmatrix} \\cdot \\begin{bmatrix}1\\\\0\\end{bmatrix} = \\begin{bmatrix}0\\\\1\\end{bmatrix}\\end{equation}\n\nThis time, the resulting vector has been changed to a different amplitude, but has the same magnitude.\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nv = np.array([1,0])\nA = np.array([[0,-1],\n [1,0]])\n\nt = A@v\nprint (t)\n\n# Plot v and t\nvecs = np.array([v,t])\norigin = [0], [0]\nplt.axis('equal')\nplt.grid()\nplt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))\nplt.quiver(*origin, vecs[:,0], vecs[:,1], color=['orange', 'blue'], scale=10)\nplt.show()\n```\n\nNow let's change the matrix one more time:\n\\begin{equation}\\begin{bmatrix}2 & 1\\\\1 & 2\\end{bmatrix} \\cdot \\begin{bmatrix}1\\\\0\\end{bmatrix} = \\begin{bmatrix}2\\\\1\\end{bmatrix}\\end{equation}\n\nNow our resulting vector has been transformed to a new amplitude *and* magnitude - the transformation has affected both direction and scale.\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nv = np.array([1,0])\nA = np.array([[2,1],\n [1,2]])\n\nt = A@v\nprint (t)\n\n# Plot v and t\nvecs = np.array([v,t])\norigin = [0], [0]\nplt.axis('equal')\nplt.grid()\nplt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))\nplt.quiver(*origin, vecs[:,0], vecs[:,1], color=['orange', 'blue'], scale=10)\nplt.show()\n```\n\n### Affine Transformations\nAn Affine transformation multiplies a vector by a matrix and adds an offset vector, sometimes referred to as *bias*; like this:\n\n$$T(\\vec{v}) = A\\vec{v} + \\vec{b}$$\n\nFor example:\n\n\\begin{equation}\\begin{bmatrix}5 & 2\\\\3 & 1\\end{bmatrix} \\cdot \\begin{bmatrix}1\\\\1\\end{bmatrix} + \\begin{bmatrix}-2\\\\-6\\end{bmatrix} = \\begin{bmatrix}5\\\\-2\\end{bmatrix}\\end{equation}\n\nThis kind of transformation is actually the basis of linear regression, which is a core foundation for machine learning. The matrix defines the *features*, the first vector is the *coefficients*, and the bias vector is the *intercept*.\n\nHere's an example of an Affine transformation in Python:\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nv = np.array([1,1])\nA = np.array([[5,2],\n [3,1]])\nb = np.array([-2,-6])\n\nt = A@v + b\nprint (t)\n\n# Plot v and t\nvecs = np.array([v,t])\norigin = [0], [0]\nplt.axis('equal')\nplt.grid()\nplt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))\nplt.quiver(*origin, vecs[:,0], vecs[:,1], color=['orange', 'blue'], scale=15)\nplt.show()\n```\n\n## Eigenvectors and Eigenvalues\nSo we can see that when you transform a vector using a matrix, you change its direction, length, or both. When the transformation only affects scale (in other words, the output vector has a different magnitude but the same amplitude as the input vector), the matrix multiplication for the transformation is equivalent to some scalar multiplication of the vector.\n\nFor example, earlier we examined the following transformation that dot-multiplies a vector by a matrix:\n\n$$\\begin{bmatrix}2 & 0\\\\0 & 2\\end{bmatrix} \\cdot \\begin{bmatrix}1\\\\0\\end{bmatrix} = \\begin{bmatrix}2\\\\0\\end{bmatrix}$$\n\nYou can achieve the same result by multiplying the vector by the scalar value ***2***:\n\n$$2 \\times \\begin{bmatrix}1\\\\0\\end{bmatrix} = \\begin{bmatrix}2\\\\0\\end{bmatrix}$$\n\nThe following Python code performs both of these calculations and shows the results, which are identical.\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nv = np.array([1,0])\nA = np.array([[2,0],\n [0,2]])\n\nt1 = A@v\nprint (t1)\nt2 = 2*v\nprint (t2)\n\nfig = plt.figure()\na=fig.add_subplot(1,1,1)\n# Plot v and t1\nvecs = np.array([t1,v])\norigin = [0], [0]\nplt.axis('equal')\nplt.grid()\nplt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))\nplt.quiver(*origin, vecs[:,0], vecs[:,1], color=['blue', 'orange'], scale=10)\nplt.show()\na=fig.add_subplot(1,2,1)\n# Plot v and t2\nvecs = np.array([t2,v])\norigin = [0], [0]\nplt.axis('equal')\nplt.grid()\nplt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))\nplt.quiver(*origin, vecs[:,0], vecs[:,1], color=['blue', 'orange'], scale=10)\nplt.show()\n```\n\nIn cases like these, where a matrix transformation is the equivalent of a scalar-vector multiplication, the scalar-vector pairs that correspond to the matrix are known respectively as eigenvalues and eigenvectors. We generally indicate eigenvalues using the Greek letter lambda (λ), and the formula that defines eigenvalues and eigenvectors with respect to a transformation is:\n\n$$ T(\\vec{v}) = \\lambda\\vec{v}$$\n\nWhere the vector ***v*** is an eigenvector and the value ***λ*** is an eigenvalue for transformation ***T***.\n\nWhen the transformation ***T*** is represented as a matrix multiplication, as in this case where the transformation is represented by matrix ***A***:\n\n$$ T(\\vec{v}) = A\\vec{v} = \\lambda\\vec{v}$$\n\nThen ***v*** is an eigenvector and ***λ*** is an eigenvalue of ***A***.\n\nA matrix can have multiple eigenvector-eigenvalue pairs, and you can calculate them manually. 
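To see what the manual calculation involves, note that the eigenvalues are the roots of the *characteristic equation* $\\det(A - \\lambda I) = 0$, and each eigenvector then solves $(A - \\lambda I)\\vec{v} = 0$; for a 2x2 matrix this is just a quadratic (a sketch added here for illustration):\n\n$$\\det\\left(\\begin{bmatrix}a & b\\\\c & d\\end{bmatrix} - \\lambda \\begin{bmatrix}1 & 0\\\\0 & 1\\end{bmatrix}\\right) = (a - \\lambda)(d - \\lambda) - bc = 0$$\n\n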
However, it's generally easier to use a tool or programming language. For example, in Python you can use the ***linalg.eig*** function, which returns an array of eigenvalues and a matrix of the corresponding eigenvectors for the specified matrix.\n\nHere's an example that returns the eigenvalue and eigenvector pairs for the following matrix:\n\n$$A=\\begin{bmatrix}2 & 0\\\\0 & 3\\end{bmatrix}$$\n\n\n```python\nimport numpy as np\nA = np.array([[2,0],\n [0,3]])\neVals, eVecs = np.linalg.eig(A)\nprint(eVals)\nprint(eVecs)\n```\n\nSo there are two eigenvalue-eigenvector pairs for this matrix, as shown here:\n\n$$ \\lambda_{1} = 2, \\vec{v_{1}} = \\begin{bmatrix}1 \\\\ 0\\end{bmatrix} \\;\\;\\;\\;\\;\\; \\lambda_{2} = 3, \\vec{v_{2}} = \\begin{bmatrix}0 \\\\ 1\\end{bmatrix} $$\n\nLet's verify that multiplying each eigenvalue-eigenvector pair corresponds to the dot-product of the eigenvector and the matrix. Here's the first pair:\n\n$$ 2 \\times \\begin{bmatrix}1 \\\\ 0\\end{bmatrix} = \\begin{bmatrix}2 \\\\ 0\\end{bmatrix} \\;\\;\\;and\\;\\;\\; \\begin{bmatrix}2 & 0\\\\0 & 3\\end{bmatrix} \\cdot \\begin{bmatrix}1 \\\\ 0\\end{bmatrix} = \\begin{bmatrix}2 \\\\ 0\\end{bmatrix} $$\n\nSo far so good. Now let's check the second pair:\n\n$$ 3 \\times \\begin{bmatrix}0 \\\\ 1\\end{bmatrix} = \\begin{bmatrix}0 \\\\ 3\\end{bmatrix} \\;\\;\\;and\\;\\;\\; \\begin{bmatrix}2 & 0\\\\0 & 3\\end{bmatrix} \\cdot \\begin{bmatrix}0 \\\\ 1\\end{bmatrix} = \\begin{bmatrix}0 \\\\ 3\\end{bmatrix} $$\n\nSo our eigenvalue-eigenvector scalar multiplications do indeed correspond to our matrix-eigenvector dot-product transformations.\n\nHere's the equivalent code in Python, using the ***eVals*** and ***eVecs*** variables you generated in the previous code cell:\n\n\n```python\nvec1 = eVecs[:,0]\nlam1 = eVals[0]\n\nprint('Matrix A:')\nprint(A)\nprint('-------')\n\nprint('lam1: ' + str(lam1))\nprint ('v1: ' + str(vec1))\nprint ('Av1: ' + str(A@vec1))\nprint ('lam1 x v1: ' + str(lam1*vec1))\n\nprint('-------')\n\nvec2 = eVecs[:,1]\nlam2 = eVals[1]\n\nprint('lam2: ' + str(lam2))\nprint ('v2: ' + str(vec2))\nprint ('Av2: ' + str(A@vec2))\nprint ('lam2 x v2: ' + str(lam2*vec2))\n```\n\nYou can use the following code to visualize these transformations:\n\n\n```python\nt1 = lam1*vec1\nprint (t1)\nt2 = lam2*vec2\nprint (t2)\n\nfig = plt.figure()\na=fig.add_subplot(1,1,1)\n# Plot v and t1\nvecs = np.array([t1,vec1])\norigin = [0], [0]\nplt.axis('equal')\nplt.grid()\nplt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))\nplt.quiver(*origin, vecs[:,0], vecs[:,1], color=['blue', 'orange'], scale=10)\nplt.show()\na=fig.add_subplot(1,2,1)\n# Plot v and t2\nvecs = np.array([t2,vec2])\norigin = [0], [0]\nplt.axis('equal')\nplt.grid()\nplt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))\nplt.quiver(*origin, vecs[:,0], vecs[:,1], color=['blue', 'orange'], scale=10)\nplt.show()\n```\n\nSimilarly, earlier we examined the following matrix transformation:\n\n$$\\begin{bmatrix}2 & 0\\\\0 & 2\\end{bmatrix} \\cdot \\begin{bmatrix}1\\\\0\\end{bmatrix} = \\begin{bmatrix}2\\\\0\\end{bmatrix}$$\n\nAnd we saw that you can achieve the same result by mulitplying the vector by the scalar value ***2***:\n\n$$2 \\times \\begin{bmatrix}1\\\\0\\end{bmatrix} = \\begin{bmatrix}2\\\\0\\end{bmatrix}$$\n\nThis works because the scalar value 2 and the vector (1,0) are an eigenvalue-eigenvector pair for this matrix.\n\nLet's use Python to determine the eigenvalue-eigenvector pairs for this matrix:\n\n\n```python\nimport numpy as np\nA = 
np.array([[2,0],\n [0,2]])\neVals, eVecs = np.linalg.eig(A)\nprint(eVals)\nprint(eVecs)\n```\n\nSo once again, there are two eigenvalue-eigenvector pairs for this matrix, as shown here:\n\n$$ \\lambda_{1} = 2, \\vec{v_{1}} = \\begin{bmatrix}1 \\\\ 0\\end{bmatrix} \\;\\;\\;\\;\\;\\; \\lambda_{2} = 2, \\vec{v_{2}} = \\begin{bmatrix}0 \\\\ 1\\end{bmatrix} $$\n\nLet's verify that multiplying each eigenvalue-eigenvector pair corresponds to the dot-product of the eigenvector and the matrix. Here's the first pair:\n\n$$ 2 \\times \\begin{bmatrix}1 \\\\ 0\\end{bmatrix} = \\begin{bmatrix}2 \\\\ 0\\end{bmatrix} \\;\\;\\;and\\;\\;\\; \\begin{bmatrix}2 & 0\\\\0 & 2\\end{bmatrix} \\cdot \\begin{bmatrix}1 \\\\ 0\\end{bmatrix} = \\begin{bmatrix}2 \\\\ 0\\end{bmatrix} $$\n\nWell, we already knew that. Now let's check the second pair:\n\n$$ 2 \\times \\begin{bmatrix}0 \\\\ 1\\end{bmatrix} = \\begin{bmatrix}0 \\\\ 2\\end{bmatrix} \\;\\;\\;and\\;\\;\\; \\begin{bmatrix}2 & 0\\\\0 & 2\\end{bmatrix} \\cdot \\begin{bmatrix}0 \\\\ 1\\end{bmatrix} = \\begin{bmatrix}0 \\\\ 2\\end{bmatrix} $$\n\nNow let's use Pythonto verify and plot these transformations:\n\n\n```python\nvec1 = eVecs[:,0]\nlam1 = eVals[0]\n\nprint('Matrix A:')\nprint(A)\nprint('-------')\n\nprint('lam1: ' + str(lam1))\nprint ('v1: ' + str(vec1))\nprint ('Av1: ' + str(A@vec1))\nprint ('lam1 x v1: ' + str(lam1*vec1))\n\nprint('-------')\n\nvec2 = eVecs[:,1]\nlam2 = eVals[1]\n\nprint('lam2: ' + str(lam2))\nprint ('v2: ' + str(vec2))\nprint ('Av2: ' + str(A@vec2))\nprint ('lam2 x v2: ' + str(lam2*vec2))\n\n\n# Plot the resulting vectors\nt1 = lam1*vec1\nt2 = lam2*vec2\n\nfig = plt.figure()\na=fig.add_subplot(1,1,1)\n# Plot v and t1\nvecs = np.array([t1,vec1])\norigin = [0], [0]\nplt.axis('equal')\nplt.grid()\nplt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))\nplt.quiver(*origin, vecs[:,0], vecs[:,1], color=['blue', 'orange'], scale=10)\nplt.show()\na=fig.add_subplot(1,2,1)\n# Plot v and t2\nvecs = np.array([t2,vec2])\norigin = [0], [0]\nplt.axis('equal')\nplt.grid()\nplt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))\nplt.quiver(*origin, vecs[:,0], vecs[:,1], color=['blue', 'orange'], scale=10)\nplt.show()\n```\n\nLet's take a look at one more, slightly more complex example. 
Here's our matrix:\n\n$$\\begin{bmatrix}2 & 1\\\\1 & 2\\end{bmatrix}$$\n\nLet's get the eigenvalue and eigenvector pairs:\n\n\n```python\nimport numpy as np\n\nA = np.array([[2,1],\n [1,2]])\n\neVals, eVecs = np.linalg.eig(A)\nprint(eVals)\nprint(eVecs)\n```\n\nThis time the eigenvalue-eigenvector pairs are:\n\n$$ \\lambda_{1} = 3, \\vec{v_{1}} = \\begin{bmatrix}0.70710678 \\\\ 0.70710678\\end{bmatrix} \\;\\;\\;\\;\\;\\; \\lambda_{2} = 1, \\vec{v_{2}} = \\begin{bmatrix}-0.70710678 \\\\ 0.70710678\\end{bmatrix} $$\n\nSo let's check the first pair:\n\n$$ 3 \\times \\begin{bmatrix}0.70710678 \\\\ 0.70710678\\end{bmatrix} = \\begin{bmatrix}2.12132034 \\\\ 2.12132034\\end{bmatrix} \\;\\;\\;and\\;\\;\\; \\begin{bmatrix}2 & 1\\\\1 & 2\\end{bmatrix} \\cdot \\begin{bmatrix}0.70710678 \\\\ 0.70710678\\end{bmatrix} = \\begin{bmatrix}2.12132034 \\\\ 2.12132034\\end{bmatrix} $$\n\nNow let's check the second pair:\n\n$$ 1 \\times \\begin{bmatrix}-0.70710678 \\\\ 0.70710678\\end{bmatrix} = \\begin{bmatrix}-0.70710678\\\\0.70710678\\end{bmatrix} \\;\\;\\;and\\;\\;\\; \\begin{bmatrix}2 & 1\\\\1 & 2\\end{bmatrix} \\cdot \\begin{bmatrix}-0.70710678 \\\\ 0.70710678\\end{bmatrix} = \\begin{bmatrix}-0.70710678\\\\0.70710678\\end{bmatrix} $$\n\nWith more complex examples like this, it's generally easier to do it with Python:\n\n\n```python\nvec1 = eVecs[:,0]\nlam1 = eVals[0]\n\nprint('Matrix A:')\nprint(A)\nprint('-------')\n\nprint('lam1: ' + str(lam1))\nprint ('v1: ' + str(vec1))\nprint ('Av1: ' + str(A@vec1))\nprint ('lam1 x v1: ' + str(lam1*vec1))\n\nprint('-------')\n\nvec2 = eVecs[:,1]\nlam2 = eVals[1]\n\nprint('lam2: ' + str(lam2))\nprint ('v2: ' + str(vec2))\nprint ('Av2: ' + str(A@vec2))\nprint ('lam2 x v2: ' + str(lam2*vec2))\n\n\n# Plot the results\nt1 = lam1*vec1\nt2 = lam2*vec2\n\nfig = plt.figure()\na=fig.add_subplot(1,1,1)\n# Plot v and t1\nvecs = np.array([t1,vec1])\norigin = [0], [0]\nplt.axis('equal')\nplt.grid()\nplt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))\nplt.quiver(*origin, vecs[:,0], vecs[:,1], color=['blue', 'orange'], scale=10)\nplt.show()\na=fig.add_subplot(1,2,1)\n# Plot v and t2\nvecs = np.array([t2,vec2])\norigin = [0], [0]\nplt.axis('equal')\nplt.grid()\nplt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))\nplt.quiver(*origin, vecs[:,0], vecs[:,1], color=['blue', 'orange'], scale=10)\nplt.show()\n```\n\n## Eigendecomposition\nSo we've learned a little about eigenvalues and eigenvectors; but you may be wondering what use they are. Well, one use for them is to help decompose transformation matrices.\n\nRecall that previously we found that a matrix transformation of a vector changes its magnitude, amplitude, or both. Without getting too technical about it, we need to remember that vectors can exist in any spatial orientation, or *basis*; and the same transformation can be applied in different *bases*.\n\nWe can decompose a matrix using the following formula:\n\n$$A = Q \\Lambda Q^{-1}$$\n\nWhere ***A*** is a transformation that can be applied to a vector in its current base, ***Q*** is a matrix of eigenvectors that defines a change of basis, and ***Λ*** is a matrix with eigenvalues on the diagonal that defines the same linear transformation as ***A*** in the base defined by ***Q***.\n\nLet's look at these in some more detail. 
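Before stepping through each piece, here is a minimal self-contained check (an added sketch, using the same matrix that is examined in detail next) that the three factors really do multiply back to the original matrix:\n\n\n```python\nimport numpy as np\n\nM = np.array([[3.0, 2.0],\n [1.0, 0.0]])\nlam, Q = np.linalg.eig(M)\nL = np.diag(lam)\n\n# Q @ L @ inv(Q) should reproduce M (up to floating-point rounding).\nprint(np.allclose(Q @ L @ np.linalg.inv(Q), M))\n```\n\n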
Consider this matrix:\n\n$$A=\\begin{bmatrix}3 & 2\\\\1 & 0\\end{bmatrix}$$\n\n***Q*** is a matrix in which each column is an eigenvector of ***A***; which as we've seen previously, we can calculate using Python:\n\n\n```python\nimport numpy as np\n\nA = np.array([[3,2],\n [1,0]])\n\nl, Q = np.linalg.eig(A)\nprint(Q)\n```\n\nSo for matrix ***A***, ***Q*** is the following matrix:\n\n$$Q=\\begin{bmatrix}0.96276969 & -0.48963374\\\\0.27032301 & 0.87192821\\end{bmatrix}$$\n\n***Λ*** is a matrix that contains the eigenvalues for ***A*** on the diagonal, with zeros in all other elements; so for a 2x2 matrix, Λ will look like this:\n\n$$\\Lambda=\\begin{bmatrix}\\lambda_{1} & 0\\\\0 & \\lambda_{2}\\end{bmatrix}$$\n\nIn our Python code, we've already used the ***linalg.eig*** function to return the array of eigenvalues for ***A*** into the variable ***l***, so now we just need to format that as a matrix:\n\n\n```python\nL = np.diag(l)\nprint (L)\n```\n\nSo ***Λ*** is the following matrix:\n\n$$\\Lambda=\\begin{bmatrix}3.56155281 & 0\\\\0 & -0.56155281\\end{bmatrix}$$\n\nNow we just need to find ***Q-1***, which is the inverse of ***Q***:\n\n\n```python\nQinv = np.linalg.inv(Q)\nprint(Qinv)\n```\n\nThe inverse of ***Q*** then, is:\n\n$$Q^{-1}=\\begin{bmatrix}0.89720673 & 0.50382896\\\\-0.27816009 & 0.99068183\\end{bmatrix}$$\n\nSo what does that mean? Well, it means that we can decompose the transformation of *any* vector multiplied by matrix ***A*** into the separate operations ***QΛQ-1***:\n\n$$A\\vec{v} = Q \\Lambda Q^{-1}\\vec{v}$$\n\nTo prove this, let's take vector ***v***:\n\n$$\\vec{v} = \\begin{bmatrix}1\\\\3\\end{bmatrix} $$\n\nOur matrix transformation using ***A*** is:\n\n$$\\begin{bmatrix}3 & 2\\\\1 & 0\\end{bmatrix} \\cdot \\begin{bmatrix}1\\\\3\\end{bmatrix} $$\n\nSo let's show the results of that using Python:\n\n\n```python\nv = np.array([1,3])\nt = A@v\n\nprint(t)\n\n# Plot v and t\nvecs = np.array([v,t])\norigin = [0], [0]\nplt.axis('equal')\nplt.grid()\nplt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))\nplt.quiver(*origin, vecs[:,0], vecs[:,1], color=['orange', 'b'], scale=20)\nplt.show()\n```\n\nAnd now, let's do the same thing using the ***QΛQ-1*** sequence of operations:\n\n\n```python\nimport math\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nt = (Q@(L@(Qinv)))@v\n\n# Plot v and t\nvecs = np.array([v,t])\norigin = [0], [0]\nplt.axis('equal')\nplt.grid()\nplt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))\nplt.quiver(*origin, vecs[:,0], vecs[:,1], color=['orange', 'b'], scale=20)\nplt.show()\n```\n\nSo ***A*** and ***QΛQ-1*** are equivalent.\n\nIf we view the intermediary stages of the decomposed transformation, you can see the transformation using ***A*** in the original base for ***v*** (orange to blue) and the transformation using ***Λ*** in the change of basis decribed by ***Q*** (red to magenta):\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nt1 = Qinv@v\nt2 = L@t1\nt3 = Q@t2\n\n# Plot the transformations\nvecs = np.array([v,t1, t2, t3])\norigin = [0], [0]\nplt.axis('equal')\nplt.grid()\nplt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))\nplt.quiver(*origin, vecs[:,0], vecs[:,1], color=['orange', 'red', 'magenta', 'blue'], scale=20)\nplt.show()\n```\n\nSo from this visualization, it should be apparent that the transformation ***Av*** can be performed by changing the basis for ***v*** using ***Q*** (from orange to red in the above plot) applying the 
equivalent linear transformation in that base using ***Λ*** (red to magenta), and switching back to the original base using ***Q-1*** (magenta to blue).\n\n## Rank of a Matrix\n\nThe **rank** of a square matrix is the number of non-zero eigenvalues of the matrix. A **full rank** matrix has the same number of non-zero eigenvalues as the dimension of the matrix. A **rank-deficient** matrix has fewer non-zero eigenvalues as dimensions. The inverse of a rank deficient matrix is singular and so does not exist (this is why in a previous notebook we noted that some matrices have no inverse).\n\nConsider the following matrix ***A***:\n\n$$A=\\begin{bmatrix}1 & 2\\\\4 & 3\\end{bmatrix}$$\n\nLet's find its eigenvalues (***Λ***):\n\n\n```python\nimport numpy as np\nA = np.array([[1,2],\n [4,3]])\nl, Q = np.linalg.eig(A)\nL = np.diag(l)\nprint(L)\n```\n\n$$\\Lambda=\\begin{bmatrix}-1 & 0\\\\0 & 5\\end{bmatrix}$$\n\nThis matrix has full rank. The dimensions of the matrix is 2. There are two non-zero eigenvalues. \n\nNow consider this matrix:\n\n$$B=\\begin{bmatrix}3 & -3 & 6\\\\2 & -2 & 4\\\\1 & -1 & 2\\end{bmatrix}$$\n\nNote that the second and third columns are just scalar multiples of the first column.\n\nLet's examine it's eigenvalues:\n\n\n```python\nB = np.array([[3,-3,6],\n [2,-2,4],\n [1,-1,2]])\nlb, Qb = np.linalg.eig(B)\nLb = np.diag(lb)\nprint(Lb)\n```\n\n$$\\Lambda=\\begin{bmatrix}3 & 0& 0\\\\0 & -6\\times10^{-17} & 0\\\\0 & 0 & 3.6\\times10^{-16}\\end{bmatrix}$$\n\nNote that matrix has only 1 non-zero eigenvalue. The other two eigenvalues are so extremely small as to be effectively zero. This is an example of a rank-deficient matrix; and as such, it has no inverse.\n\n## Inverse of a Square Full Rank Matrix\nYou can calculate the inverse of a square full rank matrix by using the following formula:\n\n$$A^{-1} = Q \\Lambda^{-1} Q^{-1}$$\n\nLet's apply this to matrix ***A***:\n\n$$A=\\begin{bmatrix}1 & 2\\\\4 & 3\\end{bmatrix}$$\n\nLet's find the matrices for ***Q***, ***Λ-1***, and ***Q-1***:\n\n\n```python\nimport numpy as np\nA = np.array([[1,2],\n [4,3]])\n\nl, Q = np.linalg.eig(A)\nL = np.diag(l)\nprint(Q)\nLinv = np.linalg.inv(L)\nQinv = np.linalg.inv(Q)\nprint(Linv)\nprint(Qinv)\n```\n\nSo:\n\n$$A^{-1}=\\begin{bmatrix}-0.70710678 & -0.4472136\\\\0.70710678 & -0.89442719\\end{bmatrix}\\cdot\\begin{bmatrix}-1 & -0\\\\0 & 0.2\\end{bmatrix}\\cdot\\begin{bmatrix}-0.94280904 & 0.47140452\\\\-0.74535599 & -0.74535599\\end{bmatrix}$$\n\nLet's calculate that in Python:\n\n\n```python\nAinv = (Q@(Linv@(Qinv)))\nprint(Ainv)\n```\n\nThat gives us the result:\n\n$$A^{-1}=\\begin{bmatrix}-0.6 & 0.4\\\\0.8 & -0.2\\end{bmatrix}$$\n\nWe can apply the ***np.linalg.inv*** function directly to ***A*** to verify this:\n\n\n```python\nprint(np.linalg.inv(A))\n```\n", "meta": {"hexsha": "9d82599ed391b15e116e045e155f801fae22cf37", "size": 35597, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Vector and Matrices by Hiren/03-05-Transformations Eigenvectors and Eigenvalues.ipynb", "max_stars_repo_name": "awesome-archive/Basic-Mathematics-for-Machine-Learning", "max_stars_repo_head_hexsha": "b6699a9c29ec070a0b1615c46952cb0deeb73b54", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 401, "max_stars_repo_stars_event_min_datetime": "2018-08-29T04:55:26.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-29T11:03:39.000Z", "max_issues_repo_path": "Vector and Matrices by Hiren/03-05-Transformations Eigenvectors and Eigenvalues.ipynb", "max_issues_repo_name": 
"aligeekk/Basic-Mathematics-for-Machine-Learning", "max_issues_repo_head_hexsha": "8662076d60e89f58a6e81e4ca1377569472760a2", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-09-28T13:52:53.000Z", "max_issues_repo_issues_event_max_datetime": "2020-09-28T18:13:53.000Z", "max_forks_repo_path": "Vector and Matrices by Hiren/03-05-Transformations Eigenvectors and Eigenvalues.ipynb", "max_forks_repo_name": "aligeekk/Basic-Mathematics-for-Machine-Learning", "max_forks_repo_head_hexsha": "8662076d60e89f58a6e81e4ca1377569472760a2", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 135, "max_forks_repo_forks_event_min_datetime": "2018-08-29T05:04:00.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-30T07:04:25.000Z", "avg_line_length": 34.4264990329, "max_line_length": 425, "alphanum_fraction": 0.5381071439, "converted": true, "num_tokens": 7828, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9504109770159682, "lm_q2_score": 0.8933093954028816, "lm_q1q2_score": 0.8490110552623965}} {"text": "# Explicit Methods for the Model Hyperbolic PDE\n\nThe one-dimensional wave equation or linear convection equation is given by the following partial differential equation.\n\n$$\n\\frac{\\partial u}{\\partial t} + c \\frac{\\partial u}{\\partial x} = 0\n$$\n\nWhen we solve this PDE numerically, we divide the spatial and temporal domains into a series of mesh points and time points. Assume our problem domain is of length $L$, and that we want to compute the solution of $u(x,t)$ between time equals to zero to some final time, $t_f$ for all values of $x$ between zero and $L$. The first step in solving our PDE numerically is to divide the domain, $0 \\leq x \\leq L$, into a discrete number of mesh points or nodes, $N_x$. Likewise, we also divide the time domain, $0 \\leq t \\leq t_f$, into a discrete number of time steps, $N_t$. If we do so uniformally in time and space, then the distance bewteen each mesh point is $\\Delta x$, and the distance between each time step is $\\Delta t$. \n\nIn other words, for a uniform mesh size and a uniform time step, the value of $x$ for the i-th node is \n\n$$\nx_i = i \\Delta x\n$$\n\nand the time at each time step, $n$, is\n\n$$\nt_n = n \\Delta t\n$$\n\nThe goal is to approximate this PDE as a difference equation, meaning that we want to represent the PDE using only a discrete number of mesh points, $N_x$, and time points, $N_t$. To do this requires using a Taylor series expansion about each point in the mesh, both in time and space, to approximate the time and space derivatives of $u$ in terms of a difference formula. Since in our Taylor series expansion we choose only a finite number of terms, these difference formulas are known as finite-difference approximations.\n\nSince there are many different finite-difference formulas, let use define a common nomenclature. Let the symbol $\\mathcal{D}$ represent a difference approximation. The subscript of $\\mathcal{D}$ shall represent the direction of the finite-difference, forward (+), backward (-), or central (0). Last, let the denominator represent the domain over which the finite-difference formula is used. For the Cartesian domain, let us use the $\\Delta x$ to represent a finite-difference approximation in the $x$-direction. 
Likewise, $\\Delta y$ would represent a finite-difference approximation in the $y$-direction.\n\n\n\n\n## First-Derivative Approximations\n\nUsing the nomenclature described above, let us define a set of finite-difference approximations for the first derivative of the variable $\\phi$ with respect to $x$. In other words, what are the available finite-difference formulae for \n\n$$\n\\frac{ \\partial \\phi }{ \\partial x} = \\textrm{Finite-difference Approximation} + \\textrm{Truncation Error}\n$$\n\nThese expressions can be derived using a Taylor series expansion. By keeping track of the order of the higher-order terms neglected or truncated from the Taylor series expansion, we can also provide an estimate of the truncation error. We say that a finite-difference approximation is first-order accurate if the truncation error is of order $\\mathcal{O} (\\Delta x)$. It is second-order accurate if the truncation error is of order $\\mathcal{O} (\\Delta x^2)$. \n\n**Forward difference**, first-order accurate, $\\mathcal{O} (\\Delta x)$\n\n- Equally spaced\n\n$$\n\\frac{\\mathcal{D}_{+}\\cdot}{\\Delta x} \\phi_i = \\frac{\\phi_{i+1} - \\phi_i}{\\Delta x}\n$$\n\n- Nonequally spaced\n\n$$\n\\frac{\\mathcal{D}_{+}\\cdot}{\\Delta x} \\phi_i = \\frac{\\phi_{i+1} - \\phi_i}{x_{i+1} - x_i}\n$$\n\n\n**Backward difference**, first-order accurate, $\\mathcal{O} (\\Delta x)$\n\n- Equally spaced\n\n$$\n\\frac{\\mathcal{D}_{-}\\cdot}{\\Delta x} \\phi_i = \\frac{\\phi_{i} - \\phi_{i-1}}{\\Delta x}\n$$\n\n- Nonequally spaced\n\n$$\n\\frac{\\mathcal{D}_{-}\\cdot}{\\Delta x} \\phi_i = \\frac{\\phi_{i} - \\phi_{i-1}}{x_{i} - x_{i-1}}\n$$\n\n**Central difference**, second-order accurate, $\\mathcal{O} (\\Delta x^2)$\n\n- Equally spaced\n\n$$\n\\frac{\\mathcal{D}_{0}\\cdot}{\\Delta x} \\phi_i = \\frac{\\phi_{i+1} - \\phi_{i-1}}{2\\Delta x}\n$$\n\n- Nonequally spaced\n\n$$\n\\frac{\\mathcal{D}_{0}\\cdot}{\\Delta x} \\phi_i = \\frac{\\phi_{i+1} - \\phi_{i-1}}{x_{i+1} - x_{i-1}}\n$$\n\n## Difference Equations for the Model Hyperbolic PDE\n\nLet us use the above finite-difference approximations to derive several difference equations for the model hyperbolic PDE. We have free rein in this choice, and most importantly, we are certainly not limited to those provided above. It is possible to derive third- and fourth-order finite-difference approximations. These higher-order methods can become increasingly complex. Note, however, that while we can freely choose the difference approximation, we still need to check that this difference equation is not only consistent but also stable. Different combinations of time and spatial difference approximations result in different numerical stability.\n\n### Forward-Time, Backward Space\n\nUsing a forward-time, backward-space difference approximation, the partial differential equation, \n\n$$\n\\frac{\\partial u}{\\partial t} + c \\frac{\\partial u}{\\partial x} = 0\n$$\n\nis transformed into the following difference equation, \n\n$$\n\\frac{\\mathcal{D}_{+}}{\\Delta t} \\Big( u^n_i \\Big) + c \\frac{\\mathcal{D}_{-}}{\\Delta x} \\Big( u^n_i \\Big) = 0,\n$$\n\nwhich we can then expand as \n\n$$\n\\frac{ u^{n+1}_i - u^n_i }{\\Delta t} + c \\frac{ u^n_i - u^n_{i-1} }{\\Delta x} = 0.\n$$\n\nFrom the truncation error of the difference approximations, we can say that this method is first-order accurate in time and space, in other words the truncation error is of the order $\\mathcal{O}(\\Delta t, \\Delta x)$. 
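To make the update rule concrete before looking at the stencil diagram, here is a minimal sketch of a single FTBS time step (added for illustration; the values of $c$, $\\Delta t$ and $\\Delta x$ are made up, chosen so that $c \\Delta t / \\Delta x \\le 1$):\n\n\n```python\nimport numpy as np\n\nc, dx, dt = 1.0, 0.1, 0.05   # illustrative values only\nnu = c * dt / dx             # CFL number, 0.5 here\n\nx = np.arange(0.0, 2.0 + dx, dx)\nu = np.exp(-40.0 * (x - 0.5)**2)   # an arbitrary smooth initial profile\n\n# One forward-time, backward-space step: u_i^{n+1} = u_i^n - nu * (u_i^n - u_{i-1}^n)\nu_new = u.copy()\nu_new[1:] = u[1:] - nu * (u[1:] - u[:-1])   # u[0] is left unchanged as a simple inflow condition\nu = u_new\n```\n\n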
We can represent this numerical method using the following diagram.\n\n\n\nThe *stencil* diagram visually shows the mesh points and time points that are used in the difference equation.\n\n\n### Forward-Time, Forward Space\n\nUsing a forward-time, forward-space difference approximation, the partial differential equation, \n\n$$\n\\frac{\\partial u}{\\partial t} + c \\frac{\\partial u}{\\partial x} = 0\n$$\n\nis transformed into the following difference equation, \n\n$$\n\\frac{\\mathcal{D}_{+}}{\\Delta t} \\Big( u^n_i \\Big) + c \\frac{\\mathcal{D}_{+}}{\\Delta x} \\Big( u^n_i \\Big) = 0,\n$$\n\nwhich we can then expand as \n\n$$\n\\frac{ u^{n+1}_i - u^n_i }{\\Delta t} + c \\frac{ u^n_{i+1} - u^n_{i} }{\\Delta x} = 0.\n$$\n\nFrom the truncation error of the difference approximations, we can say that this method is first-order accurate in time and space, in other words the truncation error is of the order $\\mathcal{O}(\\Delta t, \\Delta x)$. We can represent this numerical method using the following diagram.\n\n\n\nThe *stencil* diagram visually shows the mesh points and time points that are used in the difference equation.\n\n\n\n### Forward-Time, Central Space\n\nUsing a forward-time, central-space difference approximation, the partial differential equation, \n\n$$\n\\frac{\\partial u}{\\partial t} + c \\frac{\\partial u}{\\partial x} = 0\n$$\n\nis transformed into the following difference equation, \n\n$$\n\\frac{\\mathcal{D}_{+}}{\\Delta t} \\Big( u^n_i \\Big) + c \\frac{\\mathcal{D}_{0}}{\\Delta x} \\Big( u^n_i \\Big) = 0,\n$$\n\nwhich we can then expand as \n\n$$\n\\frac{ u^{n+1}_i - u^n_i }{\\Delta t} + c \\frac{ u^n_{i+1} - u^n_{i-1} }{2 \\Delta x} = 0.\n$$\n\nFrom the truncation error of the difference approximations, we can say that this method is first-order accurate in time and second-order in space, in other words the truncation error is of the order $\\mathcal{O}(\\Delta t, \\Delta x^2)$. We can represent this numerical method using the following diagram.\n\n\n\nThe *stencil* diagram visually shows the mesh points and time points that are used in the difference equation.\n\n\n\n\n## Are all these methods stable? 
\n\n***We need to apply von Neumann stability analysis to each of these difference equations to determine under what conditions, i.e., for what values of $\\Delta t$ and $\\Delta x$, is the numerical method stable?***\n\n# Summary of Numerical Methods\n\n## Explicit, Forward-Time, Backward-Space (FTBS)\n\n$$\n\\begin{align}\n\\textrm{Method :}\\quad & u^{n+1}_i = u^n_i - c \\Delta t \\frac{\\mathcal{D}_{-} \\cdot}{\\Delta x} u_i^n \\\\\n\\textrm{Stability Criteria :}\\quad & c > 0, \\quad \\Delta t \\le \\frac{\\Delta x}{|c|} \\\\\n\\textrm{Order of Accuracy :}\\quad & \\mathcal{O}\\left(\\Delta t, \\Delta x\\right)\n\\end{align}\n$$\n\n## Explicit, Forward-Time, Forward-Space (FTFS)\n\n$$\n\\begin{align}\n\\textrm{Method :}\\quad & u^{n+1}_i = u^n_i - c \\Delta t \\frac{\\mathcal{D}_{+} \\cdot}{\\Delta x} u_i^n \\\\\n\\textrm{Stability Criteria :}\\quad & c < 0, \\quad \\Delta t \\le \\frac{\\Delta x}{|c|} \\\\\n\\textrm{Order of Accuracy :}\\quad & \\mathcal{O}\\left(\\Delta t, \\Delta x\\right)\n\\end{align}\n$$\n\nNote that this method is only stable for $c < 0$.\n\n## Explicit, Forward-Time, Central-Space (FTCS)\n\n$$\n\\begin{align}\n\\textrm{Method :}\\quad & u^{n+1}_i = u^n_i - c \\Delta t \\frac{\\mathcal{D}_{0} \\cdot}{\\Delta x} u_i^n \\\\\n\\textrm{Stability Criteria :}\\quad & \\textrm{Always unstable} \\\\\n\\textrm{Order of Accuracy :}\\quad & \\mathcal{O}\\left(\\Delta t, \\Delta x^2\\right)\n\\end{align}\n$$\n\nThis method is *always* unstable.\n\n## Lax\n\n\n$$\n\\begin{align}\n\\textrm{Method :}\\quad & u^{n+1}_i = \\frac{u^n_{i-1} + u^n_{i+1}}{2} - c \\Delta t \\frac{\\mathcal{D}_{0} \\cdot}{\\Delta x} u_i^n \\\\\n\\textrm{Stability Criteria :}\\quad & \\Delta t \\le \\frac{\\Delta x}{|c|} \\\\\n\\textrm{Order of Accuracy :}\\quad & \\mathcal{O}\\left(\\Delta t, \\frac{\\Delta x^2 } { \\Delta t}, \\Delta x^2\\right)\n\\end{align}\n$$\n\nThe Lax algorithm is an *inconsistent* difference equation because the truncation error is not gaurenteed to go to zero. 
It only does so if $\\Delta x^2$ goes to zero faster than $\\Delta t$.\n\n## Lax-Wendroff\n\n\n$$\n\\begin{align}\n\\textrm{Method :}\\quad & u^{n+1}_i = u^n_i - c \\Delta t \\frac{\\mathcal{D}_{0} \\cdot}{\\Delta x} u_i^n + \\frac{1}{2}c^2 \\Delta t^2 \\frac{\\mathcal{D}_{+} \\cdot}{\\Delta x} \\frac{\\mathcal{D}_{-} \\cdot}{\\Delta x} u_i^n \\\\\n\\textrm{Stability Criteria :}\\quad & \\Delta t \\le \\frac{\\Delta x}{|c|} \\\\\n\\textrm{Order of Accuracy :}\\quad & \\mathcal{O}\\left(\\Delta t^2, \\Delta x^2\\right)\n\\end{align}\n$$\n\n## MacCormack\n\n\n$$\n\\begin{align}\n\\textrm{Method :}\\quad & u^{\\overline{n+1}}_i = u^n_i - c \\Delta t \\frac{\\mathcal{D}_{+} \\cdot}{\\Delta x} u_i^n \\\\\n & u^{n+1}_i = \\frac{1}{2} \\left[u^n_i + u^{\\overline{n+1}}_i - c \\Delta t \\frac{\\mathcal{D}_{-} \\cdot}{\\Delta x} u^{\\overline{n+1}}_i \\right] \\\\\n\\textrm{Stability Criteria :}\\quad & \\Delta t \\le \\frac{\\Delta x}{|c|} \\\\\n\\textrm{Order of Accuracy :}\\quad & \\mathcal{O}\\left(\\Delta t^2, \\Delta x^2\\right)\n\\end{align}\n$$\n\n## Jameson\n\n\n$$\n\\begin{align}\n\\textrm{Method :}\\quad & u^{(0)} = u^n_i \\\\ \n & u^{(k)} = u^n_i - \\alpha_k c \\Delta t \\frac{\\mathcal{D}_{0} \\cdot}{\\Delta x} u_i^{(k-1)} \\\\\n & \\qquad \\textrm{where} \\,\\, \\alpha_k = \\frac{1}{5 - k}, \\,\\, k = 1, 2, 3, 4 \\\\\n & u^{n+1}_i = u^{(4)}_i \\\\\n\\textrm{Stability Criteria :}\\quad & \\Delta t \\le \\frac{2 \\sqrt{2} \\Delta x}{|c|} \\\\\n\\textrm{Order of Accuracy :}\\quad & \\mathcal{O}\\left(\\Delta t^4, \\Delta x^2\\right)\n\\end{align}\n$$\n\n## Warming-Beam\n\n\n$$\n\\begin{align}\n\\textrm{Method :}\\quad & u^{n + 1/2}_i = u^n_i - \\frac{ c \\Delta t }{2} \\frac{\\mathcal{D}_{-} \\cdot}{\\Delta x} u_i^n \\\\\n & u^{n+1}_i = u^n_i - c \\Delta t \\frac{\\mathcal{D}_{-} \\cdot}{\\Delta x} \n \\left[ u^{n+1/2}_i + \\frac{\\Delta x}{2} \\frac{\\mathcal{D}_{-} \\cdot}{\\Delta x} u^n_i \\right] \\\\\n\\textrm{Stability Criteria :}\\quad & \\Delta t \\le \\frac{2 \\Delta x}{|c|} \\\\\n\\textrm{Order of Accuracy :}\\quad & \\mathcal{O}\\left(\\Delta t^2, \\Delta x^2\\right)\n\\end{align}\n$$\n\n## More difference equations\n\nLet us look more closely at the Lax-Wendroff algorithm. We see the term like the following\n\n$$\n\\frac{\\mathcal{D}_{+} \\cdot}{\\Delta x} \\frac{\\mathcal{D}_{-} \\cdot}{\\Delta x} u_i^n \n$$\n\nwhat does mean? How do we apply two difference approximations? *This operators can be linearly combined,* so we just need to apply the first difference approximation, and then apply the second difference approximation on each of the remaing terms. 
Here is what that looks like for the above expression.\n\n$$\n\\frac{\\mathcal{D}_{+} \\cdot}{\\Delta x} \\left[ \\frac{\\mathcal{D}_{-} \\cdot}{\\Delta x} u_i^n \\right] = \n\\frac{\\mathcal{D}_{+} \\cdot}{\\Delta x} \\left[ \\frac{u^n_i - u^n_{i-1}}{\\Delta x} \\right] = \n\\frac{1}{\\Delta x} \\left[ \\frac{\\mathcal{D}_{+} \\cdot}{\\Delta x}\\Big( u^n_i \\Big) - \\frac{\\mathcal{D}_{+} \\cdot}{\\Delta x} \\Big( u^n_{i-1} \\Big) \\right] \n$$\n\nNow applying the second difference operator, results in \n\n$$\n\\frac{\\mathcal{D}_{+} \\cdot}{\\Delta x} \\frac{\\mathcal{D}_{-} \\cdot}{\\Delta x} u_i^n = \n\\frac{1}{\\Delta x} \\left[ \\frac{\\mathcal{D}_{+} \\cdot}{\\Delta x}\\Big( u^n_i \\Big) - \\frac{\\mathcal{D}_{+} \\cdot}{\\Delta x} \\Big( u^n_{i-1} \\Big) \\right] =\n\\frac{1}{\\Delta x} \\left[ \\frac{u^n_{i+1} - u^n_i}{\\Delta x} - \\frac{u^n_{i} - u^n_{i-1}}{\\Delta x} \\right] =\n\\frac{u^n_{i+1} - 2 u^n_i + u^n_{i-1} }{\\Delta x^2} \n$$\n\n\n\n\n", "meta": {"hexsha": "0d3b42f5c5a58015e1efdbc00bf907fca0530b29", "size": 17656, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Notebooks/LinearConvection/5-LinearConvection-ExplicitMethods.ipynb", "max_stars_repo_name": "jcschulz/ae269", "max_stars_repo_head_hexsha": "5c467a6e70808bb00e27ffdb8bb0495e0c820ca0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Notebooks/LinearConvection/5-LinearConvection-ExplicitMethods.ipynb", "max_issues_repo_name": "jcschulz/ae269", "max_issues_repo_head_hexsha": "5c467a6e70808bb00e27ffdb8bb0495e0c820ca0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Notebooks/LinearConvection/5-LinearConvection-ExplicitMethods.ipynb", "max_forks_repo_name": "jcschulz/ae269", "max_forks_repo_head_hexsha": "5c467a6e70808bb00e27ffdb8bb0495e0c820ca0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 44.2506265664, "max_line_length": 743, "alphanum_fraction": 0.5553919348, "converted": true, "num_tokens": 4058, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9304582477806522, "lm_q2_score": 0.912436153333645, "lm_q1q2_score": 0.8489837444425419}} {"text": "# Simple Linear Regression with NumPy\n\nIn school, students are taught to draw lines like the following.\n\n$$ y = 2 x + 1$$\n\nThey're taught to pick two values for $x$ and calculate the corresponding values for $y$ using the equation.\nThen they draw a set of axes, plot the points, and then draw a line extending through the two dots on their axes.\n\n\n```python\n# Import matplotlib.\nimport matplotlib.pyplot as plt\n```\n\n\n```python\n# Draw some axes.\nplt.plot([-1, 10], [0, 0], 'k-')\nplt.plot([0, 0], [-1, 10], 'k-')\n\n# Plot the red, blue and green lines.\nplt.plot([1, 1], [-1, 3], 'b:')\nplt.plot([-1, 1], [3, 3], 'r:')\n\n# Plot the two points (1,3) and (2,5).\nplt.plot([1, 2], [3, 5], 'ko')\n# Join them with an (extending) green lines.\nplt.plot([-1, 10], [-1, 21], 'g-')\n\n# Set some reasonable plot limits.\nplt.xlim([-1, 10])\nplt.ylim([-1, 10])\n\n# Show the plot.\nplt.show()\n```\n\nSimple linear regression is about the opposite problem - what if you have some points and are looking for the equation?\nIt's easy when the points are perfectly on a line already, but usually real-world data has some noise.\nThe data might still look roughly linear, but aren't exactly so.\n\n***\n\n## Example (contrived and simulated)\n\n\n\n#### Scenario\nSuppose you are trying to weigh your suitcase to avoid an airline's extra charges.\nYou don't have a weighing scales, but you do have a spring and some gym-style weights of masses 7KG, 14KG and 21KG.\nYou attach the spring to the wall hook, and mark where the bottom of it hangs.\nYou then hang the 7KG weight on the end and mark where the bottom of the spring is.\nYou repeat this with the 14KG weight and the 21KG weight.\nFinally, you place your case hanging on the spring, and the spring hangs down halfway between the 7KG mark and the 14KG mark.\nIs your case over the 10KG limit set by the airline?\n\n#### Hypothesis\nWhen you look at the marks on the wall, it seems that the 0KG, 7KG, 14KG and 21KG marks are evenly spaced.\nYou wonder if that means your case weighs 10.5KG.\nThat is, you wonder if there is a *linear* relationship between the distance the spring's hook is from its resting position, and the mass on the end of it.\n\n#### Experiment\nYou decide to experiment.\nYou buy some new weights - a 1KG, a 2KG, a 3Kg, all the way up to 20KG.\nYou place them each in turn on the spring and measure the distance the spring moves from the resting position.\nYou tabulate the data and plot them.\n\n#### Analysis\nHere we'll import the Python libraries we need for or investigations below.\n\n\n```python\n# Make matplotlib show interactive plots in the notebook.\n%matplotlib inline\n```\n\n\n```python\n# numpy efficiently deals with numerical multi-dimensional arrays.\nimport numpy as np\n\n# matplotlib is a plotting library, and pyplot is its easy-to-use module.\nimport matplotlib.pyplot as plt\n\n# This just sets the default plot size to be bigger.\nplt.rcParams['figure.figsize'] = (8, 6)\n```\n\nIgnore the next couple of lines where I fake up some data. I'll use the fact that I faked the data to explain some results later. 
Just pretend that w is an array containing the weight values and d are the corresponding distance measurements.\n\n\n```python\nw = np.arange(0.0, 21.0, 1.0)\nd = 5.0 * w + 10.0 + np.random.normal(0.0, 5.0, w.size)\n```\n\n\n```python\n# Let's have a look at w.\nw\n```\n\n\n\n\n array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12.,\n 13., 14., 15., 16., 17., 18., 19., 20.])\n\n\n\n\n```python\n# Let's have a look at d.\nd\n```\n\n\n\n\n array([ 7.87555482, 21.59953558, 26.98707474, 23.32362417,\n 28.61198227, 38.81904106, 40.87544464, 47.68660422,\n 53.36700771, 52.8709924 , 54.8670291 , 67.09374109,\n 72.57056963, 79.88756871, 83.51318718, 76.50407315,\n 93.042945 , 97.96230846, 94.0586369 , 100.01267012,\n 109.05819117])\n\n\n\nLet's have a look at the data from our experiment.\n\n\n```python\n# Create the plot.\n\nplt.plot(w, d, 'k.')\n\n# Set some properties for the plot.\nplt.xlabel('Weight (KG)')\nplt.ylabel('Distance (CM)')\n\n# Show the plot.\nplt.show()\n```\n\n#### Model\nIt looks like the data might indeed be linear.\nThe points don't exactly fit on a straight line, but they are not far off it.\nWe might put that down to some other factors, such as the air density, or errors, such as in our tape measure.\nThen we can go ahead and see what would be the best line to fit the data. \n\n#### Straight lines\nAll straight lines can be expressed in the form $y = mx + c$.\nThe number $m$ is the slope of the line.\nThe slope is how much $y$ increases by when $x$ is increased by 1.0.\nThe number $c$ is the y-intercept of the line.\nIt's the value of $y$ when $x$ is 0.\n\n#### Fitting the model\nTo fit a straight line to the data, we just must pick values for $m$ and $c$.\nThese are called the parameters of the model, and we want to pick the best values possible for the parameters.\nThat is, the best parameter values *given* the data observed.\nBelow we show various lines plotted over the data, with different values for $m$ and $c$.\n\n\n```python\n# Plot w versus d with black dots.\nplt.plot(w, d, 'k.', label=\"Data\")\n\n# Overlay some lines on the plot.\nx = np.arange(0.0, 21.0, 1.0)\nplt.plot(x, 5.0 * x + 10.0, 'r-', label=r\"$5x + 10$\")\nplt.plot(x, 6.0 * x + 5.0, 'g-', label=r\"$6x + 5$\")\nplt.plot(x, 5.0 * x + 15.0, 'b-', label=r\"$5x + 15$\")\n\n# Add a legend.\nplt.legend()\n\n# Add axis labels.\nplt.xlabel('Weight (KG)')\nplt.ylabel('Distance (CM)')\n\n# Show the plot.\nplt.show()\n```\n\n#### Calculating the cost\nYou can see that each of these lines roughly fits the data.\nWhich one is best, and is there another line that is better than all three?\nIs there a \"best\" line?\n\nIt depends how you define the word best.\nLuckily, everyone seems to have settled on what the best means.\nThe best line is the one that minimises the following calculated value.\n\n$$ \\sum_i (y_i - mx_i - c)^2 $$\n\nHere $(x_i, y_i)$ is the $i^{th}$ point in the data set and $\\sum_i$ means to sum over all points. 
\nThe values of $m$ and $c$ are to be determined.\nWe usually denote the above as $Cost(m, c)$.\n\nWhere does the above calculation come from?\nIt's easy to explain the part in the brackets $(y_i - mx_i - c)$.\nThe corresponding value to $x_i$ in the dataset is $y_i$.\nThese are the measured values.\nThe value $m x_i + c$ is what the model says $y_i$ should have been.\nThe difference between the value that was observed ($y_i$) and the value that the model gives ($m x_i + c$), is $y_i - mx_i - c$.\n\nWhy square that value?\nWell note that the value could be positive or negative, and you sum over all of these values.\nIf we allow the values to be positive or negative, then the positive could cancel the negatives.\nSo, the natural thing to do is to take the absolute value $\\mid y_i - m x_i - c \\mid$.\nWell it turns out that absolute values are a pain to deal with, and instead it was decided to just square the quantity instead, as the square of a number is always positive.\nThere are pros and cons to using the square instead of the absolute value, but the square is used.\nThis is usually called *least squares* fitting.\n\n\n```python\n# Calculate the cost of the lines above for the data above.\ncost = lambda m,c: np.sum([(d[i] - m * w[i] - c)**2 for i in range(w.size)])\n\nprint(\"Cost with m = %5.2f and c = %5.2f: %8.2f\" % (5.0, 10.0, cost(5.0, 10.0)))\nprint(\"Cost with m = %5.2f and c = %5.2f: %8.2f\" % (6.0, 5.0, cost(6.0, 5.0)))\nprint(\"Cost with m = %5.2f and c = %5.2f: %8.2f\" % (5.0, 15.0, cost(5.0, 15.0)))\n```\n\n Cost with m = 5.00 and c = 10.00: 364.91\n Cost with m = 6.00 and c = 5.00: 1911.26\n Cost with m = 5.00 and c = 15.00: 784.03\n\n\n#### Minimising the cost\nWe want to calculate values of $m$ and $c$ that give the lowest value for the cost value above.\nFor our given data set we can plot the cost value/function.\nRecall that the cost is:\n\n$$ Cost(m, c) = \\sum_i (y_i - mx_i - c)^2 $$\n\nThis is a function of two variables, $m$ and $c$, so a plot of it is three dimensional.\nSee the **Advanced** section below for the plot.\n\nIn the case of fitting a two-dimensional line to a few data points, we can easily calculate exactly the best values of $m$ and $c$.\nSome of the details are discussed in the **Advanced** section, as they involve calculus, but the resulting code is straight-forward.\nWe first calculate the mean (average) values of our $x$ values and that of our $y$ values.\nThen we subtract the mean of $x$ from each of the $x$ values, and the mean of $y$ from each of the $y$ values.\nThen we take the *dot product* of the new $x$ values and the new $y$ values and divide it by the dot product of the new $x$ values with themselves.\nThat gives us $m$, and we use $m$ to calculate $c$.\n\nRemember that in our dataset $x$ is called $w$ (for weight) and $y$ is called $d$ (for distance).\nWe calculate $m$ and $c$ below.\n\n\n```python\n# Calculate the best values for m and c.\n\n# First calculate the means (a.k.a. 
averages) of w and d.\nw_avg = np.mean(w)\nd_avg = np.mean(d)\n\n# Subtract means from w and d.\nw_zero = w - w_avg\nd_zero = d - d_avg\n\n# The best m is found by the following calculation.\nm = np.sum(w_zero * d_zero) / np.sum(w_zero * w_zero)\n# Use m from above to calculate the best c.\nc = d_avg - m * w_avg\n\nprint(\"m is %8.6f and c is %6.6f.\" % (m, c))\n```\n\n m is 4.768029 and c is 12.823887.\n\n\nNote that numpy has a function that will perform this calculation for us, called polyfit.\nIt can also be used to fit higher-degree polynomials, not just straight lines.\n\n\n```python\nnp.polyfit(w, d, 1)\n```\n\n\n\n\n array([ 4.76802927, 12.82388744])\n\n\n\n#### Best fit line\nSo, the best values for $m$ and $c$ given our data and using least squares fitting are about $4.77$ for $m$ and about $12.82$ for $c$.\nWe plot this line on top of the data below.\n\n\n```python\n# Plot the best fit line.\nplt.plot(w, d, 'k.', label='Original data')\nplt.plot(w, m * w + c, 'b-', label='Best fit line')\n\n# Add axis labels and a legend.\nplt.xlabel('Weight (KG)')\nplt.ylabel('Distance (CM)')\nplt.legend()\n\n# Show the plot.\nplt.show()\n```\n\nNote that the $Cost$ of the best $m$ and best $c$ is not zero in this case.\n\n\n```python\nprint(\"Cost with m = %5.2f and c = %5.2f: %8.2f\" % (m, c, cost(m, c)))\n```\n\n Cost with m = 4.77 and c = 12.82: 318.14\n\n\n### Summary\nIn this notebook we:\n1. Investigated the data.\n2. Picked a model.\n3. Picked a cost function.\n4. Estimated the model parameter values that minimised our cost function.\n\n### Advanced\nIn the following sections we cover some of the more advanced concepts involved in fitting the line.\n\n#### Simulating data\nEarlier in the notebook we glossed over something important: we didn't actually do the weighing and measuring - we faked the data.\nA better term for this is *simulation*, which is an important tool in research, especially when testing methods such as simple linear regression.\n\nWe ran the following two commands to do this:\n\n```python\nw = np.arange(0.0, 21.0, 1.0)\nd = 5.0 * w + 10.0 + np.random.normal(0.0, 5.0, w.size)\n```\n\nThe first command creates a numpy array containing all values between 0.0 and 21.0 (including 0.0 but not including 21.0) in steps of 1.0.\n\n\n```python\n np.arange(0.0, 21.0, 1.0)\n```\n\n\n\n\n array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12.,\n 13., 14., 15., 16., 17., 18., 19., 20.])\n\n\n\nThe second command is more complex.\nFirst it takes the values in the `w` array, multiplies each by 5.0 and then adds 10.0.\n\n\n```python\n5.0 * w + 10.0\n```\n\n\n\n\n array([ 10., 15., 20., 25., 30., 35., 40., 45., 50., 55., 60.,\n 65., 70., 75., 80., 85., 90., 95., 100., 105., 110.])\n\n\n\nIt then adds an array of the same length containing random values.\nThe values are taken from what is called the normal distribution with mean 0.0 and standard deviation 5.0.\n\n\n```python\nnp.random.normal(0.0, 5.0, w.size)\n```\n\n\n\n\n array([ 2.00118422, 8.76268533, 7.24193421, -2.89217859, 4.38408328,\n 6.27869981, -6.30259698, 0.40051845, 3.62034207, 4.924415 ,\n 1.57441724, 4.17091877, -0.3527485 , 4.48634788, -4.92792773,\n -0.46885963, -4.57679121, 2.23673805, -2.89051011, 4.45334771,\n 8.2654568 ])\n\n\n\nThe normal distribution follows a bell shaped curve.\nThe curve is centred on the mean (0.0 in this case) and its general width is determined by the standard deviation (5.0 in this case).\n\n\n```python\n# Plot the normal distribution.\nnormpdf = lambda mu, s, x: (1.0 / np.sqrt(2.0 * np.pi * s**2)) * np.exp(-((x - 
mu)**2)/(2 * s**2))\n\nx = np.linspace(-20.0, 20.0, 100)\ny = normpdf(0.0, 5.0, x)\nplt.plot(x, y)\n\nplt.show()\n```\n\nThe idea here is to add a little bit of randomness to the measurements of the distance.\nThe random values are entered around 0.0, with a greater than 99% chance they're within the range -15.0 to 15.0.\nThe normal distribution is used because of the [Central Limit Theorem](https://en.wikipedia.org/wiki/Central_limit_theorem) which basically states that when a bunch of random effects happen together the outcome looks roughly like the normal distribution. (Don't quote me on that!)\n\n#### Plotting the cost function\nWe can plot the cost function for a given set of data points.\nRecall that the cost function involves two variables: $m$ and $c$, and that it looks like this:\n\n$$ Cost(m,c) = \\sum_i (y_i - mx_i - c)^2 $$\n\nTo plot a function of two variables we need a 3D plot.\nIt can be difficult to get the viewing angle right in 3D plots, but below you can just about make out that there is a low point on the graph around the $(m, c) = (\\approx 5.0, \\approx 10.0)$ point. \n\n\n```python\n# This code is a little bit involved - don't worry about it.\n# Just look at the plot below.\n\nfrom mpl_toolkits.mplot3d import Axes3D\n\n# Ask pyplot a 3D set of axes.\nax = plt.figure().gca(projection='3d')\n\n# Make data.\nmvals = np.linspace(4.5, 5.5, 100)\ncvals = np.linspace(0.0, 20.0, 100)\n\n# Fill the grid.\nmvals, cvals = np.meshgrid(mvals, cvals)\n\n# Flatten the meshes for convenience.\nmflat = np.ravel(mvals)\ncflat = np.ravel(cvals)\n\n# Calculate the cost of each point on the grid.\nC = [np.sum([(d[i] - m * w[i] - c)**2 for i in range(w.size)]) for m, c in zip(mflat, cflat)]\nC = np.array(C).reshape(mvals.shape)\n\n# Plot the surface.\nsurf = ax.plot_surface(mvals, cvals, C)\n\n# Set the axis labels.\nax.set_xlabel('$m$', fontsize=16)\nax.set_ylabel('$c$', fontsize=16)\nax.set_zlabel('$Cost$', fontsize=16)\n\n# Show the plot.\nplt.show()\n```\n\n#### Coefficient of determination\nEarlier we used a cost function to determine the best line to fit the data.\nUsually the data do not perfectly fit on the best fit line, and so the cost is greater than 0.\nA quantity closely related to the cost is the *coefficient of determination*, also known as the *R-squared* value.\nThe purpose of the R-squared value is to measure how much of the variance in $y$ is determined by $x$.\n\nFor instance, in our example the main thing that affects the distance the spring is hanging down is the weight on the end.\nIt's not the only thing that affects it though.\nThe room temperature and density of the air at the time of measurment probably affect it a little.\nThe age of the spring, and how many times it has been stretched previously probably also have a small affect.\nThere are probably lots of unknown factors affecting the measurment.\n\nThe R-squared value estimates how much of the changes in the $y$ value is due to the changes in the $x$ value compared to all of the other factors affecting the $y$ value.\nIt is calculated as follows:\n\n$$ R^2 = 1 - \\frac{\\sum_i (y_i - m x_i - c)^2}{\\sum_i (y_i - \\bar{y})^2} $$\n\nNote that sometimes the [*Pearson correlation coefficient*](https://en.wikipedia.org/wiki/Pearson_correlation_coefficient) is used instead of the R-squared value.\nYou can just square the Pearson coefficient to get the R-squred value.\n\n\n```python\n# Calculate the R-squared value for our data set.\nrsq = 1.0 - (np.sum((d - m * w - c)**2)/np.sum((d - d_avg)**2))\n\nprint(\"The 
R-squared value is %6.4f\" % rsq)\n```\n\n The R-squared value is 0.9822\n\n\n\n```python\n# The same value using numpy.\nnp.corrcoef(w, d)[0][1]**2\n```\n\n\n\n\n 0.9821506875952921\n\n\n\n#### The minimisation calculations\nEarlier we used the following calculation to find $m$ and $c$ for the line of best fit.\nThe code was:\n\n```python\nw_zero = w - np.mean(w)\nd_zero = d - np.mean(d)\n\nm = np.sum(w_zero * d_zero) / np.sum(w_zero * w_zero)\nc = np.mean(d) - m * np.mean(w)\n```\n\nIn mathematical notation we write this as:\n\n$$ m = \\frac{\\sum_i (x_i - \\bar{x}) (y_i - \\bar{y})}{\\sum_i (x_i - \\bar{x})^2} \\qquad \\textrm{and} \\qquad c = \\bar{y} - m \\bar{x} $$\n\nwhere $\\bar{x}$ is the mean of $x$ and $\\bar{y}$ that of $y$.\n\nWhere did these equations come from?\nThey were derived using calculus.\nWe'll give a brief overview of it here, but feel free to gloss over this section if it's not for you.\nIf you can understand the first part, where we calculate the partial derivatives, then great!\n\nThe calculations look complex, but if you know basic differentiation, including the chain rule, you can easily derive them.\nFirst, we differentiate the cost function with respect to $m$ while treating $c$ as a constant; this is called a partial derivative.\nWe write this as $\\frac{\\partial Cost}{\\partial m}$, using $\\partial$ as opposed to $d$ to signify that we are treating the other variable as a constant.\nWe then do the same with respect to $c$ while treating $m$ as a constant.\nWe set both equal to zero, and then solve them as two simultaneous equations in two variables.\n\n###### Calculate the partial derivatives\n$$\n\\begin{align}\nCost(m, c) &= \\sum_i (y_i - mx_i - c)^2 \\\\[1cm]\n\\frac{\\partial Cost}{\\partial m} &= \\sum 2(y_i - m x_i -c)(-x_i) \\\\\n &= -2 \\sum x_i (y_i - m x_i -c) \\\\[0.5cm]\n\\frac{\\partial Cost}{\\partial c} & = \\sum 2(y_i - m x_i -c)(-1) \\\\\n & = -2 \\sum (y_i - m x_i -c) \\\\\n\\end{align}\n$$\n\n###### Set to zero\n$$\n\\begin{align}\n& \\frac{\\partial Cost}{\\partial m} = 0 \\\\[0.2cm]\n& \\Rightarrow -2 \\sum x_i (y_i - m x_i -c) = 0 \\\\\n& \\Rightarrow \\sum (x_i y_i - m x_i x_i - x_i c) = 0 \\\\\n& \\Rightarrow \\sum x_i y_i - \\sum_i m x_i x_i - \\sum x_i c = 0 \\\\\n& \\Rightarrow m \\sum x_i x_i = \\sum x_i y_i - c \\sum x_i \\\\[0.2cm]\n& \\Rightarrow m = \\frac{\\sum x_i y_i - c \\sum x_i}{\\sum x_i x_i} \\\\[0.5cm]\n& \\frac{\\partial Cost}{\\partial c} = 0 \\\\[0.2cm]\n& \\Rightarrow -2 \\sum (y_i - m x_i - c) = 0 \\\\\n& \\Rightarrow \\sum y_i - \\sum_i m x_i - \\sum c = 0 \\\\\n& \\Rightarrow \\sum y_i - m \\sum_i x_i = c \\sum 1 \\\\\n& \\Rightarrow c = \\frac{\\sum y_i - m \\sum x_i}{\\sum 1} \\\\\n& \\Rightarrow c = \\frac{\\sum y_i}{\\sum 1} - m \\frac{\\sum x_i}{\\sum 1} \\\\[0.2cm]\n& \\Rightarrow c = \\bar{y} - m \\bar{x} \\\\\n\\end{align}\n$$\n\n###### Solve the simultaneous equations\nHere we let $n$ be the length of $x$, which is also the length of $y$.\n\n$$\n\\begin{align}\n& m = \\frac{\\sum_i x_i y_i - c \\sum_i x_i}{\\sum_i x_i x_i} \\\\[0.2cm]\n& \\Rightarrow m = \\frac{\\sum x_i y_i - (\\bar{y} - m \\bar{x}) \\sum x_i}{\\sum x_i x_i} \\\\\n& \\Rightarrow m \\sum x_i x_i = \\sum x_i y_i - \\bar{y} \\sum x_i + m \\bar{x} \\sum x_i \\\\\n& \\Rightarrow m \\sum x_i x_i - m \\bar{x} \\sum x_i = \\sum x_i y_i - \\bar{y} \\sum x_i \\\\[0.3cm]\n& \\Rightarrow m = \\frac{\\sum x_i y_i - \\bar{y} \\sum x_i}{\\sum x_i x_i - \\bar{x} \\sum x_i} \\\\[0.2cm]\n& \\Rightarrow m = \\frac{\\sum (x_i y_i) - n \\bar{y} 
\\bar{x}}{\\sum (x_i x_i) - n \\bar{x} \\bar{x}} \\\\\n& \\Rightarrow m = \\frac{\\sum (x_i y_i) - n \\bar{y} \\bar{x} - n \\bar{y} \\bar{x} + n \\bar{y} \\bar{x}}{\\sum (x_i x_i) - n \\bar{x} \\bar{x} - n \\bar{x} \\bar{x} + n \\bar{x} \\bar{x}} \\\\\n& \\Rightarrow m = \\frac{\\sum (x_i y_i) - \\sum y_i \\bar{x} - \\sum \\bar{y} x_i + n \\bar{y} \\bar{x}}{\\sum (x_i x_i) - \\sum x_i \\bar{x} - \\sum \\bar{x} x_i + n \\bar{x} \\bar{x}} \\\\\n& \\Rightarrow m = \\frac{\\sum_i (x_i - \\bar{x}) (y_i - \\bar{y})}{\\sum_i (x_i - \\bar{x})^2} \\\\\n\\end{align}\n$$\n\n
\n\n#### Using sklearn neural networks\n\n***\n\n\n```python\nimport sklearn.neural_network as sknn\n\n# Expects a 2D array of inputs.\nw2d = w.reshape(-1, 1)\n\n# Train the neural network.\nregr = sknn.MLPRegressor(max_iter=10000).fit(w2d, d)\n\n# Show the predictions.\nnp.array([d, regr.predict(w2d)]).T\n```\n\n\n\n\n array([[ 7.87555482, 8.17960683],\n [ 21.59953558, 18.50355393],\n [ 26.98707474, 23.1972181 ],\n [ 23.32362417, 27.89146122],\n [ 28.61198227, 32.58570433],\n [ 38.81904106, 37.27994745],\n [ 40.87544464, 41.97419056],\n [ 47.68660422, 46.66843368],\n [ 53.36700771, 51.36216639],\n [ 52.8709924 , 56.05536619],\n [ 54.8670291 , 60.74856598],\n [ 67.09374109, 65.44176577],\n [ 72.57056963, 70.13496556],\n [ 79.88756871, 74.82816534],\n [ 83.51318718, 79.52119914],\n [ 76.50407315, 84.21423294],\n [ 93.042945 , 88.90726673],\n [ 97.96230846, 93.60030053],\n [ 94.0586369 , 98.29333432],\n [100.01267012, 102.99082487],\n [109.05819117, 107.97221093]])\n\n\n\n\n```python\n# The score.\nregr.score(w2d, d)\n```\n\n\n\n\n 0.9838518616697404\n\n\n\n#### End\n", "meta": {"hexsha": "1b50797d5fe3ae652e16517f73401c917f240806", "size": 259494, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "simple-linear-regression.ipynb", "max_stars_repo_name": "angela1C/jupyter-teaching-notebooks", "max_stars_repo_head_hexsha": "7494cab4702b8bfb95f716bf66e9ddf62a67b408", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "simple-linear-regression.ipynb", "max_issues_repo_name": "angela1C/jupyter-teaching-notebooks", "max_issues_repo_head_hexsha": "7494cab4702b8bfb95f716bf66e9ddf62a67b408", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "simple-linear-regression.ipynb", "max_forks_repo_name": "angela1C/jupyter-teaching-notebooks", "max_forks_repo_head_hexsha": "7494cab4702b8bfb95f716bf66e9ddf62a67b408", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 245.7329545455, "max_line_length": 103856, "alphanum_fraction": 0.9102214309, "converted": true, "num_tokens": 6734, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9230391664210671, "lm_q2_score": 0.9196425306271939, "lm_q1q2_score": 0.8488660748754857}} {"text": "#Solving circuits with sympy\n\nThis documents explains how to solve electrical circuits using the **sympy** symbolic math module.\n\n# Imports\n\nFirst we need to import the sympy module\n\n\n```python\n# Import the sympy module\nimport sympy\n```\n\n## Example DC circuit\n\nThe following circuit includes one voltage source, one current source and two resistors.\n\nThe objective is to obtain the output voltage **Vo** as function of the components.\n\n\n\nThe circuit will be solved using the **nodal** method.\n\nFirst we need to locate the circuit **nodes**, assign one as **ground** and assign a number for the rest of them. As the circuit has three nodes and one is ground, we have two nodes left: 1 and 2.\n\nWe will first generate a set of sympy symbols. 
There will be:\n\n* One symbol for each component: Vs, R1, R2, Is\n\n* One current symbol for each power sypply: iVs\n\n* One symbol for each measurement we want to obtain: Vo\n\n* One symbol for each node voltage that is not ground: V1, V2\n\n\n```python\n# Create the circuit symbols\nVs,iVs,R1,R2,Is,Vo, V1, V2 = sympy.symbols('Vs,iVs,R1,R2,Is,Vo,V1,V2')\n```\n\nThen we can define the current equations on each node except ground.\n\nThe current equations add all the currents in the node from each component.\n\nAll equations we add, are supposed to have a result of **zero**.\n\n\n```python\n# Create an empty list of equations\nequations = []\n\n# Nodal equations\nequations.append(iVs-(V1-V2)/R1) # Node 1\nequations.append(Is-(V2-V1)/R1-V2/R2) # Node 2\n```\n\nThen we add one equation for each voltage source that associates its voltage with the node voltages.\n\nIf we want to use two sided equations, we can use the sympy **Eq** function that equates the two sides.\n\n\n```python\n# Voltage source equations\nequations.append(sympy.Eq(Vs,V1))\n```\n\nFinally we add one equation for each measurement we want to obtain\n\n\n```python\n# Measurement equations\nequations.append(sympy.Eq(Vo,V2))\n```\n\nNow we can define the unknows for the circuit.\n\nThe number of unknowns shall be equal to the number of equations. \n\nThe list includes:\n\n* The node voltages: V1, V2\n\n* The current on voltage sources: iVs\n\n* The measurement values: Vo\n\n\n```python\nunknowns = [V1,V2,iVs,Vo]\n```\n\nWe can see the equations and unknows before solving the circuit.\n\nTo ease reusing the code, we will define a **showCircuit** function that shows equations and unknowns\n\n\n```python\n# Define the function\ndef showCircuit():\n print('Equations')\n for eq in equations:\n print(' ',eq)\n print()\n print('Unknowns:',unknowns)\n print()\n \n# Use the function\nshowCircuit()\n```\n\n Equations\n iVs - (V1 - V2)/R1\n Is - V2/R2 - (-V1 + V2)/R1\n Eq(Vs, V1)\n Eq(Vo, V2)\n \n Unknowns: [V1, V2, iVs, Vo]\n \n\n\nNow, we can solve the circuit.\n\nThe sympy **solve** function gets a list of equations and unknowns and return a **dictionary** with solved unknowns\n\nThe following code solves the circuit and list the solutions\n\n\n```python\n# Solve the circuit\nsolution = sympy.solve(equations,unknowns)\n\n# List the solutions\nprint('Solutions')\nfor sol in solution:\n print(' ',sol,'=',solution[sol])\n```\n\n Solutions\n V1 = Vs\n V2 = R2*(Is*R1 + Vs)/(R1 + R2)\n iVs = (-Is*R2 + Vs)/(R1 + R2)\n Vo = R2*(Is*R1 + Vs)/(R1 + R2)\n\n\nNote that, in this case, the equation that includes **iVs** is only needed to obtain this unknown, so we can eliminate its equation and the **iVs** unknown if we don't need the **iVs** solution.\n\nNote also that we can easily identify **V1** as **Vs**, so we can also eliminate the **Vs** equation if we use **Vs** as the voltage on node **V1**\n\nFinally note that we can also identify **V2** as **Vo** so we can also eliminate the **Vo** equation.\n\nThe following code solves the circuit using only one equations\n\n\n```python\nsolution = sympy.solve(Is-(Vo-Vs)/R1-Vo/R2,Vo)\nprint('Vo =',solution[0])\n```\n\n Vo = R2*(Is*R1 + Vs)/(R1 + R2)\n\n\n## Solve using the loop current method\n\nInstead of using the **nodal** method, we could also use the complementary **loop current** method\n\n\n\nIn this method we assign a current to each loop in the circuit\n\nWe will first generate a set of sympy symbols. 
There will be:\n\n* One symbol for each component: Vs, R1, R2, Is\n\n* One voltage symbol for each current sypply: Vo\n\n* One symbol for each measurement we want to obtain: Vo\n\n* One symbol for each loop current: I1, I2\n\nNote that, in this circuit, **Vo** appears two times, as the voltage on the **Is** source and as the value to measure. Logically we only define one **Vo** symbol.\n\n\n```python\n# Create the circuit symbols\nVs,R1,R2,Is,Vo,I1,I2 = sympy.symbols('Vs,R1,R2,Is,Vo,I1,I2')\n```\n\nThen we create a list of equations and add one equation for each loop that adds all voltages on the loop\n\n\n```python\n# New list of equations\nequations = []\n\n# Loop current equations\nequations.append(Vs-R1*I1-R2*(I1-I2)) # Loop current 1\nequations.append(-R2*(I2-I1)-Vo) # Loop current 2\n```\n\nThen we create one equation for each current supply that relates it to the loop currents\n\n\n```python\n# Current source equations\nequations.append(sympy.Eq(Is,-I2)) # Current source Is\n```\n\nNow we can define the unknows for the circuit.\n\nThe number of unknowns shall be equal to the number of equations. \n\nThe list includes:\n\n* The loop currents: I1, I2\n\n* The voltage on current sources: Vo\n\n* The measurement values: Vo\n\n\n```python\n# Unknowns list\nunknowns = [I1,I2,Vo]\n```\n\nWe can see the equations and unknows before solving the circuit\n\n\n```python\nshowCircuit()\n```\n\n Equations\n -I1*R1 - R2*(I1 - I2) + Vs\n -R2*(-I1 + I2) - Vo\n Eq(Is, -I2)\n \n Unknowns: [I1, I2, Vo]\n \n\n\nNow we can obtain the solution\n\n\n```python\n# Obtain solution\nsolution = sympy.solve(equations,unknowns)\n\nprint('Vo =',solution[Vo])\n```\n\n Vo = R2*(Is*R1 + Vs)/(R1 + R2)\n\n\nAs in the **nodal** case, you could have used less equations. For instance, you could have used the **Is** current for the second loop.\n\n\n```python\n# Create the circuit symbols\nVs,R1,R2,Is,Vo,I1 = sympy.symbols('Vs,R1,R2,Is,Vo,I1')\n\n# New list of equations\nequations = []\n\n# Loop current equations\nequations.append(Vs-R1*I1-R2*(I1+Is)) # Loop current 1\nequations.append(R2*(Is+I1)-Vo) # Loop current 2\n\n# Unknowns list\nunknowns = [I1,Vo]\n\n# Show equations and unknowns\nshowCircuit()\n\n# Obtain solution\nsolution = sympy.solve(equations,unknowns)\n\nprint('Vo =',solution[Vo])\n```\n\n Equations\n -I1*R1 - R2*(I1 + Is) + Vs\n R2*(I1 + Is) - Vo\n \n Unknowns: [I1, Vo]\n \n Vo = R2*(Is*R1 + Vs)/(R1 + R2)\n\n\n## Circuit with current measurements\n\nThe following example is special because the circuit has current measurements **I1** and **I2** that we want to obtain\n\n\n\nThe circuit can be solved using four different methods\n\n### Method #1 : Use nodal method and get currents from resistors\n\nIn this method we will just use the normal nodal methods and we will compute the currents using Ohm's law\n\nNote that we don't need the current on **Vs** so there is no point in obtaining the equation on node 1\n\n\n```python\n# Symbols for the circuit\nVs,R1,R2,R3,V2,I1,I2 = sympy.symbols('Vs,R1,R2,R3,V2,I1,I2')\n\n# Nodal equation only on node 2\nequations = []\nequations.append(-(V2-Vs)/R1-V2/R2-V2/R3)\n\n# Equations for the currents using Ohm's law\nequations.append(sympy.Eq(I1,V2/R2))\nequations.append(sympy.Eq(I2,V2/R3))\n\n# Unknowns\nunknowns = [V2,I1,I2]\n\n# Show equations and unknowns\nshowCircuit()\n\n# Solve the circuit\nsolution = sympy.solve(equations,unknowns)\n\nprint('I1 =',solution[I1])\nprint('I2 =',solution[I2])\n```\n\n Equations\n -V2/R3 - V2/R2 + (-V2 + Vs)/R1\n Eq(I1, V2/R2)\n Eq(I2, 
V2/R3)\n \n Unknowns: [V2, I1, I2]\n \n I1 = R3*Vs/(R1*R2 + R1*R3 + R2*R3)\n I2 = R2*Vs/(R1*R2 + R1*R3 + R2*R3)\n\n\n### Method #2 : Use four nodes plus ground\n\nWe can split the node 2 in three nodes: 2, 3 and 4\n\nThat way we can use the current equations to obtain I1 and I2\n\nAs all three nodes have the same voltage, we can set them equal using two equations\n\nAs in the previous case, we don't need the equation in node 1 \n\n\n```python\n# Symbols for the circuit\nVs,R1,R2,R3,V2,V3,V4,I1,I2 = sympy.symbols('Vs,R1,R2,R3,V2,V3,V4,I1,I2')\n\n# Node equations\nequations = []\nequations.append(-(V2-Vs)/R1-I1-I2) # Node 2\nequations.append(I1-V3/R2) # Node 3\nequations.append(I2-V4/R3) # Node 4\n\n# In fact, nodes 2, 3 and 4 are the same\nequations.append(sympy.Eq(V2,V3)) \nequations.append(sympy.Eq(V2,V4))\n\n# Unknowns\nunknowns = [V2,V3,V4,I1,I2]\n\n# Show equations and unknowns\nshowCircuit()\n\n# Solve the circuit\nsolution = sympy.solve(equations,unknowns)\n\nprint('I1 =',solution[I1])\nprint('I2 =',solution[I2])\n```\n\n Equations\n -I1 - I2 + (-V2 + Vs)/R1\n I1 - V3/R2\n I2 - V4/R3\n Eq(V2, V3)\n Eq(V2, V4)\n \n Unknowns: [V2, V3, V4, I1, I2]\n \n I1 = R3*Vs/(R1*R2 + R1*R3 + R2*R3)\n I2 = R2*Vs/(R1*R2 + R1*R3 + R2*R3)\n\n\n### Method #3 : Use the loop current method\n\nIn this case we will define two loop currents \n\n* Ia goes on the first loop: Vs -> R1 -> R2\n* I2 goes on the second loop: R2 -> R3\n\n\n```python\n# Symbols for the circuit\nVs,R1,R2,R3,Ia,I1,I2 = sympy.symbols('Vs,R1,R2,R3,Ia,I1,I2')\n\n# Loop equations\nequations = []\nequations.append(Vs-R1*Ia-R2*(Ia-I2)) # Loop Ia\nequations.append(-R2*(I2-Ia)-R3*I2) # Loop I2\n\n# Define I1 from loop currents\nequations.append(sympy.Eq(I1,Ia-I2))\n\n# Unknowns\nunknowns = [Ia,I1,I2]\n\n# Show equations and unknowns\nshowCircuit()\n\n# Solve the circuit\nsolution = sympy.solve(equations,unknowns)\n\nprint('I1 =',solution[I1])\nprint('I2 =',solution[I2])\n```\n\n Equations\n -Ia*R1 - R2*(-I2 + Ia) + Vs\n -I2*R3 - R2*(I2 - Ia)\n Eq(I1, -I2 + Ia)\n \n Unknowns: [Ia, I1, I2]\n \n I1 = R3*Vs/(R1*R2 + R1*R3 + R2*R3)\n I2 = R2*Vs/(R1*R2 + R1*R3 + R2*R3)\n\n\n### Method #4 : Use a modified loop current method\n\nIn this method we will use **I1** and **I2** as loop currents\n\n* I1 goes Vs -> R1 -> R2\n* I2 goes Vs -> R1 -> R3\n\n\n```python\n# Symbols for the circuit\nVs,R1,R2,R3,I1,I2 = sympy.symbols('Vs,R1,R2,R3,I1,I2')\n\n# Loop equations\nequations = []\nequations.append(Vs-R1*(I1+I2)-R2*I1) # Loop I1\nequations.append(Vs-R1*(I1+I2)-R3*I2) # Loop I2\n\n# Unknowns\nunknowns = [I1,I2]\n\n# Show equations and unknowns\nshowCircuit()\n\n# Solve the circuit\nsolution = sympy.solve(equations,unknowns)\n\nprint('I1 =',solution[I1])\nprint('I2 =',solution[I2])\n```\n\n Equations\n -I1*R2 - R1*(I1 + I2) + Vs\n -I2*R3 - R1*(I1 + I2) + Vs\n \n Unknowns: [I1, I2]\n \n I1 = R3*Vs/(R1*R2 + R1*R3 + R2*R3)\n I2 = R2*Vs/(R1*R2 + R1*R3 + R2*R3)\n\n\n## Controlled voltage source\n\nThe following circuit includes voltage controlled source\n\n\n\nIn this case we will treat the controlled voltage source as an independent voltage source, but, we will use $k \\cdot V_m$ as its value.\n\nThat means that we will need to add an equation to add $V_m$ to the set of symbols\n\nThe following code defines and solves the circuit using the loop current method.\n\n\n\n```python\n# Symbols for the circuit\nVs,R1,R2,k,R3,I1,I2,Vm = sympy.symbols('Vs,R1,R2,k,R3,I1,I2,Vm')\n\n# Loop equations\nequations = []\nequations.append(Vs-I1*R1-I1*R2) # Loop 
I1\nequations.append(k*Vm-I2*R3) # Loop I2\n\n# Equations for Vm and Vo\nequations.append(sympy.Eq(Vm,I1*R2))\nequations.append(sympy.Eq(Vo,I2*R3))\n\n# Unknowns\nunknowns = [I1,I2,Vm,Vo]\n\n# Show equations and unknowns\nshowCircuit()\n\n# Solve the circuit\nsolution = sympy.solve(equations,unknowns)\n\nprint('Vo =',solution[Vo])\n```\n\n Equations\n -I1*R1 - I1*R2 + Vs\n -I2*R3 + Vm*k\n Eq(Vm, I1*R2)\n Eq(Vo, I2*R3)\n \n Unknowns: [I1, I2, Vm, Vo]\n \n Vo = R2*Vs*k/(R1 + R2)\n\n\nThe circuit could also be solved using the nodal method.\n\nRemember that we don't need the nodal equations for nodes on grounded supplies if we don't need the supply current\n\n\n```python\n# Symbols for the circuit\nVs,Vm,Vo,R1,R2,k = sympy.symbols('Vs,Vm,Vo,R1,R2,k')\n\n# Node equation\nequations = []\nequations.append(-(Vm-Vs)/R1-Vm/R2)\n\n# Equation for Vo\nequations.append(sympy.Eq(Vo,k*Vm))\n\n# Unknowns\nunknowns = [Vm,Vo]\n\n# Show equations and unknowns\nshowCircuit()\n\n# Solve the circuit\nsolution = sympy.solve(equations,unknowns)\n\nprint('Vo =',solution[Vo])\n```\n\n Equations\n -Vm/R2 + (-Vm + Vs)/R1\n Eq(Vo, Vm*k)\n \n Unknowns: [Vm, Vo]\n \n Vo = R2*Vs*k/(R1 + R2)\n\n\n## RC Circuit\n\nNow we can also solve circuits with capacitor or inductors\n\nYou just need to define the **capacitors** currents and voltages as function of the **s** variable:\n\n$\\qquad i_C = V_C \\cdot C \\cdot s \\qquad v_C = \\frac{i_C}{C \\cdot s}$\n\nAlso, for **inductors**:\n\n$\\qquad i_L = \\frac{V_L}{L \\cdot s} \\qquad v_L = i_L \\cdot L \\cdot s$\n\nWe will use the following circuit as an example\n\n\n\nThe following code describes and solves the circuit using the current loop method\n\n\n```python\n# Symbols for the circuit\nVs,R1,R3,C1,C2,Vo,I1,I2,s = sympy.symbols('Vs,R1,R3,C1,C2,Vo,I1,I2,s')\n\n# Loop equations\nequations = []\nequations.append(Vs-R1*I1-(I1-I2)/(C1*s))\nequations.append(-(I2-I1)/(C1*s)-R3*I2-I2/(C2*s))\n\n# Equation for Vo\nequations.append(sympy.Eq(Vo,I2/(C2*s)))\n\n# Unknowns\nunknowns = [I1,I2,Vo]\n\n# Show equations and unknowns\nshowCircuit()\n\n# Solve the circuit\nsolution = sympy.solve(equations,unknowns)\n\nVo_s = solution[Vo]\nprint('Solution')\nprint(' Vo =',Vo_s)\n```\n\n Equations\n -I1*R1 + Vs - (I1 - I2)/(C1*s)\n -I2*R3 - I2/(C2*s) + (I1 - I2)/(C1*s)\n Eq(Vo, I2/(C2*s))\n \n Unknowns: [I1, I2, Vo]\n \n Solution\n Vo = -C1*Vs/(C2 - (C1*R1*s + 1)*(C1*C2*R3*s + C1 + C2))\n\n\nWe can obtain a better solution equation using **simplify**, **expand** and **collect** from the sympy module\n\n\n```python\n# Use simplify, expand and collect to give a prettier equation\nVo_s = Vo_s.simplify().expand() # Eliminate quotients of quotients\nVo_s = sympy.collect(Vo_s,s).simplify() # Group s symbols and simplify\n\nprint('Prettier solution')\nprint(' Vo =',Vo_s)\n```\n\n Prettier solution\n Vo = Vs/(C1*C2*R1*R3*s**2 + s*(C1*R1 + C2*R1 + C2*R3) + 1)\n\n\nWe can obtian a particular solution using numbers to substitute the literals\n\nTo substitute one symbol you can use:\n\n>`expr.subs(oldSymbol,newSymbol)`\n\nBut, in order to substitute several symbols at once you can use a substitution dictionary:\n\n>`expr.subs({old1:new1,old2:new2,....})`\n\nSubstituting in our example we get a **particular** solution\n\n\n```python\nH_s = Vo_s.subs({Vs:1,R1:1000,R3:100,C1:1e-6,C2:100e-9})\nH_s = H_s.simplify()\n\nprint('Particular solution')\nprint(' H(s) = Vo(s)/Vs(s) =',H_s)\n```\n\n Particular solution\n H(s) = Vo(s)/Vs(s) = 1/(1.0e-8*s**2 + 0.00111*s + 1)\n\n\nWe can also get the **poles**, the **zeros** 
and the **DC gain**\n\n\n```python\nnumer,denom =H_s.as_numer_denom()\nprint('Num =',numer)\nprint('Den =',denom)\nprint()\n\nzeros = sympy.roots(numer,s)\npoles = sympy.roots(denom,s)\nprint('Zeros =',zeros)\nprint('Poles =',poles)\nprint()\n\nprint('DC gain =',H_s.subs(s,0).evalf())\n```\n\n Num = 1\n Den = 1.0e-8*s**2 + 0.00111*s + 1\n \n Zeros = {}\n Poles = {-110091.666030631: 1, -908.333969368548: 1}\n \n DC gain = 1.00000000000000\n\n\n## Opamp circuit\n\nWe can also solve operational amplifier circuits\n\n\n\nThe easiest solution is obtained if we can guarantee that the **virtual short circuit** holds\n\nIn this case, the opamp output is an unknown and the two input voltages are equal:\n\n$\qquad V_{(+)}=V_{(-)}$ \n\nWe will use the nodal method.\n\nRemember that we don't need the node equations in nodes 1 and 3 because we don't need the source currents\n\n\n```python\n# Symbols for the circuit\nVs,Ri,Rf,Vo,V2 = sympy.symbols('Vs,Ri,Rf,Vo,V2')\n\n# Node equation\nequations = []\nequations.append(-(V2-Vs)/Ri-(V2-Vo)/Rf)\n\n# Virtual short circuit\nequations.append(V2) # V2 = V(+) = 0\n\n# Unknowns\nunknowns = [V2,Vo]\n\n# Show equations and unknowns\nshowCircuit()\n\n# Solve the circuit\nsolution = sympy.solve(equations,unknowns)\n\nprint('Vo =',solution[Vo])\n```\n\n Equations\n (-V2 + Vs)/Ri - (V2 - Vo)/Rf\n V2\n \n Unknowns: [V2, Vo]\n \n Vo = -Rf*Vs/Ri\n\n\n### Finite gain solution\n\nIf we don't want to use the **virtual short circuit** we can solve the opamp as a voltage controlled voltage source\n\nIn this case the opamp can be defined with two equations:\n\n$\qquad V_d = V_{(+)}-V_{(-)}$\n\n$\qquad V_O = A \cdot V_d $\n\nThen, the ideal case will be given when $A \rightarrow \infty$\n\n\n```python\n# Symbols for the circuit\nVs,Ri,Rf,Vo,V2,Vd,A = sympy.symbols('Vs,Ri,Rf,Vo,V2,Vd,A')\n\n# Node equation\nequations = []\nequations.append(-(V2-Vs)/Ri-(V2-Vo)/Rf)\n\n# Opamp equations\nequations.append(sympy.Eq(Vd,0-V2))\nequations.append(sympy.Eq(Vo,A*Vd))\n\n# Unknowns\nunknowns = [V2,Vo,Vd]\n\n# Show equations and unknowns\nshowCircuit()\n\n# Solve the circuit\nsolution = sympy.solve(equations,unknowns)\n\nprint('Solution as function of A')\nprint()\nprint(' Vo =',solution[Vo])\nprint()\nprint('Solution for A -> oo')\nprint()\nprint(' Vo =',sympy.limit(solution[Vo],A,sympy.oo))\n```\n\n Equations\n (-V2 + Vs)/Ri - (V2 - Vo)/Rf\n Eq(Vd, -V2)\n Eq(Vo, A*Vd)\n \n Unknowns: [V2, Vo, Vd]\n \n Solution as function of A\n \n Vo = -A*Rf*Vs/(A*Ri + Rf + Ri)\n \n Solution for A -> oo\n \n Vo = -Rf*Vs/Ri\n\n\n### Dominant pole solution\n\nHaving a solution as a function of **A** enables us to obtain the response of the circuit using a dominant pole model for the operational amplifier.\n\nJust replace **A** in the solution with the one pole model of the opamp\n\n$\qquad A = \frac{Ao \cdot p1}{s+p1}$\n\n\n```python\n# New A, p1 and s symbols\nAo,p1,s = sympy.symbols('Ao,p1,s')\n\nVo_s = solution[Vo].subs(A,Ao*p1/(s+p1))\n\n# Use simplify, expand and collect to give a prettier equation\nVo_s = Vo_s.simplify().expand() # Eliminate quotients of quotients\nVo_s = sympy.collect(Vo_s,s) # Group s symbols\nVo_s = sympy.collect(Vo_s,Ri) # Group Ri symbols\n\nprint('Vo(s) =',Vo_s)\n```\n\n Vo(s) = -Ao*Rf*Vs*p1/(Rf*p1 + Ri*(Ao*p1 + p1) + s*(Rf + Ri))\n\n\nWe can obtain, as in a previous example, a particular solution using numbers to substitute the literals\n\nIn our opamp circuit solution we substitute:\n\n* Circuit resistors: Ri, Rf\n\n* Opamp model: Ao, p1\n\nWe also set Vs to 1 to 
obtain the transfer function $H(s)$\n\n$\\qquad H(s)=\\frac{V_O(s)}{V_s(s)}$\n\n\n```python\nH_s = Vo_s.subs({Vs:1,Ao:100000,Rf:100000,Ri:10000,p1:16})\nprint('H(s) =',H_s)\n```\n\n H(s) = -160000000000/(110000*s + 16001760000)\n\n\nNow you can also obtain the **poles** and **zeros** of $H(s)$\n\nAlso we can get the DC gain\n\nWe will use the **evalf()** method that evaluates a **sympy** expression to a floating point number\n\n\n```python\nnumer,denom =H_s.as_numer_denom()\nprint('Num =',numer)\nprint('Den =',denom)\nprint()\n\nzeros = sympy.roots(numer,s)\npoles = sympy.roots(denom,s)\nprint('Zeros =',zeros)\nprint('Poles =',poles)\nprint()\n\nprint('DC gain =',H_s.subs(s,0).evalf())\n```\n\n Num = -160000000000\n Den = 110000*s + 16001760000\n \n Zeros = {}\n Poles = {-1600176/11: 1}\n \n DC gain = -9.99890012098669\n\n\n
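As a small extra illustration (an addition to the original notebook), we can also estimate the closed loop bandwidth of this amplifier from the dominant pole we just obtained. The sketch below assumes the `poles` dictionary and the `sympy` import from the previous cells are still available.\n\n\n```python\n# Closed loop -3 dB frequency from the dominant pole (sketch)\npole = list(poles.keys())[0] # the only pole, in rad/s\nf_3dB = sympy.Abs(pole)/(2*sympy.pi) # convert from rad/s to Hz\nprint('f_3dB =',f_3dB.evalf(),'Hz')\n```\n\nFor this single pole feedback amplifier the result is approximately $\frac{Ao \cdot p1}{2 \pi (1 + Rf/Ri)}$, the opamp gain-bandwidth product divided by the closed loop noise gain.\n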





\n\n## Document information\n\nCopyright \u00a9 Vicente Jim\u00e9nez (2018)\n\nLast update: 26/3/2018\n\nThis work is licensed under a [Creative Common Attribution-ShareAlike 4.0 International license](http://creativecommons.org/licenses/by-sa/4.0/). \n\nYou can find the module [here](https://github.com/R6500/Python-bits/tree/master/Modules)\n\nSee my blogs [AIM65](http://aim65.blogspot.com.es/) (in spanish) and [R6500](http://r6500.blogspot.com.es/) (in english)\n\n\n```python\n\n```\n", "meta": {"hexsha": "9abdfdb1a23a795342af855c1210a656867e1b1a", "size": 47816, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/sympy-circuit-sandbox.ipynb", "max_stars_repo_name": "IronwoodLabs/ambrose", "max_stars_repo_head_hexsha": "b5f3570b8a61ad22cbd9bf95b7217bcec118e8f4", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notebooks/sympy-circuit-sandbox.ipynb", "max_issues_repo_name": "IronwoodLabs/ambrose", "max_issues_repo_head_hexsha": "b5f3570b8a61ad22cbd9bf95b7217bcec118e8f4", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/sympy-circuit-sandbox.ipynb", "max_forks_repo_name": "IronwoodLabs/ambrose", "max_forks_repo_head_hexsha": "b5f3570b8a61ad22cbd9bf95b7217bcec118e8f4", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 25.5700534759, "max_line_length": 205, "alphanum_fraction": 0.5283377949, "converted": true, "num_tokens": 6327, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9230391621868805, "lm_q2_score": 0.9196425317283919, "lm_q1q2_score": 0.8488660719979966}} {"text": "# Interpolation and reconstruction\n\nIt's not obvious how to represent a continuous-time signal $x(t)$ in a way that can be manipulated by a computer. The time index is real-valued, so even if we only care about the signal value over a finite interval there are infinitely many function values to consider and store. \n\nOne approach is to assume that $x(t)$ varies slowly, and hope that sampling $x(t)$ at $t=nT$ for $n$ integer and $T$ small is sufficient to characterise the signal. This leads to the discrete-time signal $x[n] = x(nT)$. When this assumption is formalised and holds then the method works, and we use it a lot. The first part of this workbook explores how a discrete signal $x[n]$ can be used to represent or parameterise a continuous-time $x(t)$. Understanding this link and its limitations lets us process \"analog\" signals $x(t)$ using digital processing of the corresponding $x[n]$. See https://en.wikipedia.org/wiki/Digital_signal_processing.\n\nA more general option is to represent the continuous-time signal $x(t)$ as a linear combination of a fixed set of known basis functions $b_n(t)$:\n$$x(t) = \\sum_{n=0}^{N-1} c_n b_n(t).$$\nHere the signal $x(t)$ is defined for all $t \\in \\mathbb{R}$, but as written it is completely specified by the finite and discrete set of values $c_n, n=0, \\ldots, N-1$. Not every signal $x(t)$ can be written in this way, and those that can depends on the choice of functions $b_n(t)$.\n\nOne simple example uses a polynomial basis with $b_n(t) = t^n$. 
The signals parameters $c_n$ are then just the coefficients for each polynomial order. Another is the cosine basis $b_n(t) = \\cos(n \\omega_0 t)$, which can represent all periodic even functions $x(t)$ if $N$ is large enough. Instead of summing from $0$ to $N-1$ in the representation we might also have infinite bounds like $0$ to $\\infty$, or $-\\infty$ to $\\infty$. Even then, the coefficients $c_n$ are still a countable set and can be carefully used to represent or process the continuous-time $x(t)$ that they represent. The second part of this workbook explores this concept.\n\n## Reconstruction from discrete samples\n\nSuppose that $b_0(t)$ is an even function centered on the origin $t=0$, and for each $n$ the basis function $b_n(t)$ in the representation equation above is $b_0(t)$ shifted until it is centered on some point $t=nT$: \n$$x(t) = \\sum_{n=0}^{N-1} c_n b_0(t-nT).$$\nThis representation is particularly useful if we add the requirement\n$$b_0(t) = \\begin{cases}\n 1 \\qquad & t=0 \\\\\n 0 \\qquad & t=nT \\quad \\text{for integer $n$}.\n \\end{cases}\n$$\n\nSuppose now that we have access to a discrete set of regular samples $x[n] = x(nT)$ from $x(t)$, and want to determine the corresponding coefficients $c_n$. Note that\n$$x(kT) = \\sum_{n=0}^{N-1} c_n b_0(kT-nT) = \\sum_{n=0}^{N-1} c_n b_0((k-n)T) = c_k.$$\nThe last step above follows because $b_0((k-n)T)=1$ only if $k-n=0$, so all the terms in the sum are zero except one. You could also note that $b_0((k-n)T) = \\delta(k-n)$ and use the sifting property. Either way $x[k] = x(kT) = c_k$, and the expansion takes the form\n$$x(t) = \\sum_{n=0}^{N-1} x[n] b_0(t-nT).$$\n\nWe can view this as a reconstruction formula. It takes as input the sample values in the form of a discrete signal $x[n]$ for integer $n$, and produces the interpolated or reconstructed continuous-time signal $x(t)$. The nature of the reconstruction depends on the prototype basis function $b_0(t)$, and different choices lead to interpolation with different properties.\n\n### Basic code\n\nWe start off with an instance of a continuous-time signal and sample it according to $x[n] = x(nT)$ for some $T$. The first subplot below shows a continuous-time signal $x(t)$, along with dots indicating a set of discrete samples of the signal. The second subplot shows the corresponding discrete signal $x[n]$.\n\n\n```python\nimport numpy as np\nimport sympy as sp\nimport matplotlib.pyplot as plt\n%matplotlib notebook\n\n# Set up symbolic signal to approximate\nt = sp.symbols('t');\n#x = sp.cos((t-5)/5) - ((t-5)/5)**3;\nx = sp.exp(sp.cos(t)+t/10)\nlam_x = sp.lambdify(t, x, modules=['numpy']);\n\n# Discrete samples of signal over required range\nT = 2; tmax = 20;\nnm = np.floor(tmax/T);\ntnv = T*np.arange(0,nm+1);\nxnv = lam_x(tnv);\n\n# Dense set of points for plotting \"continuous\" signals\ntv = np.linspace(-2, tmax+2, 2500);\nxv = lam_x(tv);\n \n# Plots\nfh, ax = plt.subplots(2);\nax[0].plot(tv,xv,'r-'); ax[0].scatter(tnv, xnv, c='r'); \nax[0].set_ylabel('$x(t)$ and samples'); ax[0].set_xlabel('$t$');\nax[1].stem(tnv, xnv); ax[1].set_ylabel('$x[n]$'); ax[1].set_xlabel('$n$');\n```\n\n\n \n\n\n\n\n\n\n /Users/nicolls/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:26: UserWarning: In Matplotlib 3.3 individual lines on a stem plot will be added as a LineCollection instead of individual lines. This significantly improves the performance of a stem plot. 
To remove this warning and switch to the new behaviour, set the \"use_line_collection\" keyword argument to True.\n\n\nThe code below defines and plots a triangular reconstruction or interpolation kernel $b_0(t)$ that satisfies the required conditions, namely that $b_0(nT) = \\delta(n)$ for integer $n$. Following that is the entire set of basis functions shown over the interval of interest.\n\n\n```python\n# Interpolation kernel\n#b0 = sp.Piecewise( (0, t/T<=-1/2), (1, t/T<1/2), (0, True)); # rectangular\n#b0 = sp.Piecewise( (0, t/T<=-1), (1+t/T, t<0), (1-t/T, t/T<1), (0, True)); # triangular\n#at = sp.Abs(t/T); a = -0.5; b0 = sp.Piecewise( ((a+2)*at**3 - (a+3)*at**2 + 1 , at<1), (a*at**3 - 5*a*at**2 + 8*a*at - 4*a, at<2), (0, True)); # cubic\nat = sp.Abs(t/T); w = 3; b0 = sp.Piecewise( (sp.sin(sp.pi*at)/(sp.pi*at), at<=w), (0, True) ); # windowed sinc\nlam_b0 = sp.lambdify(t, b0.simplify(), modules=['numpy']);\n\n# Plot kernel and reconstruction basis\ntbv = np.linspace(-4*T, 4*T, 1500);\nb0v = lam_b0(tbv);\nfh, ax = plt.subplots(2);\nax[0].plot(tbv, b0v, 'r-');\nfor tn in tnv:\n ax[1].plot(tv, lam_b0(tv-tn));\nax[1].set_xlabel('t');\n```\n\nThe first subplot below shows a continuous-time signal $x(t)$, the dots indicating a set of discrete samples of the signal, and the signal reconstructed from the dots using the reconstruction kernel above. The second subplot shows the separate scaled components that are summed to form the reconstruction. We observe that the reconstruction corresponds to first-order linear interpolation between the sample points, so $b_0(t)$ in this case is the linear interpolation reconstruction kernel.\n\n\n```python\n# Numerically evaluate linear interpolation approximation\nxav = np.zeros(tv.shape);\nfor i in range(0,len(xnv)): \n xav = xav + xnv[i]*lam_b0(tv - tnv[i]);\n \n# Plots\nfh, ax = plt.subplots(2);\nax[0].plot(tv,xv,'r-',tv,xav,'b-');\nax[0].scatter(tnv, xnv, c='r');\nfor i in range(0,len(xnv)): \n ax[1].plot(tv,xnv[i]*lam_b0(tv - tnv[i]));\nax[1].scatter(tnv, xnv, c='r'); ax[1].set_xlabel('t');\n```\n\nThe reconstruction above is quite good, but the condition that $b_0(nT) = \\delta(n)$ for integer $n$ is not sufficient for it to be a good reconstruction kernel. The code below generates a reconstruction using another basis.\n\n\n```python\n# Bad interpolation\nb0a = sp.Piecewise( (0, t<=-T), (1+t/T, t<0), (1-t/T, t\n\n\n\n\n\n\nFinding the required values for $c_n$ given a function $x(t)$ to be approximated depends on the particular problem. In this case we again simply require that the function $x(t)$ and the approximation agree on their values at $t=kT$ for some set of integers $k$. As long as this set has more than $N$ elements then we have more than $N$ equations in $N$ unknowns, which can be solved if the resulting system is not underdetermined. \n\nThe code below calculates the coefficients for the function\n$$x(t) = \\cos\\left( \\frac{t-5}{5} \\right) - \\left( \\frac{t-5}{5} \\right)^3$$\nby formulating a matrix equation and solving. 
Each value of $k$ leads to a single sample point and an equation of the form\n$$x(kT) = \\sum_{n=0}^{N-1} c_n b_0(kT - t_n)\n= \\begin{pmatrix} b_0(kT-t_0) & \\cdots & b_0(kT-t_{N-1}) \\end{pmatrix}\n\\begin{pmatrix} c_0 \\\\ \\vdots \\\\ c_{N-1} \\end{pmatrix}.$$\nIf we consider $k=0, \\ldots, M-1$ this leads to the system of equations\n$$\\begin{pmatrix} x(0T) \\\\ \\vdots \\\\ x((M-1)T) \\end{pmatrix}\n= \\begin{pmatrix} \nb_0(0T-t_0) & \\cdots & b_0(0T-t_{N-1}) \\\\\n\\vdots & \\ddots & \\vdots \\\\\nb_0((M-1)T-t_0) & \\cdots & b_0((M-1)T-t_{N-1})\n\\end{pmatrix}\n\\begin{pmatrix} c_0 \\\\ \\vdots \\\\ c_{N-1} \\end{pmatrix},$$\nwhich is of the form $\\mathbf{b} = \\mathbf{B} \\mathbf{c}$. In general if this system is overdetermined (which is usually true if $M>N$) then the least squares solution is often appropriate:\n$$\\mathbf{c} = (\\mathbf{B}^T \\mathbf{B})^{-1} \\mathbf{B}^T \\mathbf{b}.$$\nSee https://en.wikipedia.org/wiki/Least_squares for details if interested.\n\n\n```python\nfrom numpy.linalg import pinv\n\n# Set up signal to approximate\nx = sp.cos((t-5)/5) - ((t-5)/5)**3;\n#x = sp.sin(t+0.5) + 5*(1 - sp.exp(-(t+0.5))); # reconstruction points\nlam_x = sp.lambdify(t, x, modules=['numpy']);\n\n# Evaluate basis matrix and value for periodic observations\nM = int(1.0*len(tnv)); # M = 2*len(tnv);\nT = tmax/(M-1);\nB = np.zeros((M,len(tnv))); b = np.zeros(M);\nfor k in range(0,M):\n b[k] = lam_x(k*T);\n for n in range(0,len(tnv)):\n B[k,n] = lam_b0r(k*T - tnv[n]);\n \n# Solve for coefficients\ncv = pinv(B).dot(b);\nprint(cv);\n```\n\n [ 1.35834594 0.46235455 0.64432842 0.5208424 0.56564884 0.56715441\n 0.55934324 0.50895771 0.34921072 0.18851101 -0.36759327 -0.33474544\n -2.44417749]\n\n\nThe coefficient vector `cv` now specifies the particular instance of the approximation function. The code below plots both the approximation and the actual function for a dense set of time points, and then shows the scaled basis contributions.\n\n\n```python\n# Numerically evaluate approximation (same as before)\nxav = np.zeros(tv.shape);\nfor n in range(0,len(tnv)): \n xav = xav + cv[n]*lam_b0r(tv - tnv[n]);\n \n# Plot\nfh, ax = plt.subplots(2);\nax[0].plot(tv,lam_x(tv),'k-',tv,xav,'r-');\nax[0].scatter(np.arange(0,M)*T, b, c='r');\nfor n in range(0,len(tnv)): \n ax[1].plot(tv,cv[n]*lam_b0r(tv - tnv[n]));\n```\n\n\n \n\n\n\n\n\n\nFor the case where $M=N$ the series approximation will match the actual function values at all the points used to estimate the coefficients. However, we have no control of the quality of the approximation between these points. Also observe that if we increase $\\sigma$ then the approximation becomes smoother, and vice versa. Without knowing the degree of smoothness of $x(t)$ it is however difficult to know what value is appropriate, and in general we might not even have an expression for it that we can plot.\n\nThe radial basis function approach shown here is a local representation: each basis function only influences on a small neighbourhood of points around its center. This is convenient but isn't required. A polynomial basis, for example, does not have this property, and neither does a sinusoidal basis. \n\nFinding the coefficients for a general basis, even for the sampling-based method presented above which is not without flaws, is computationally complex: for a $N$ element basis we have to invert a $N \\times N$ matrix. With a different basis we can improve on this situation. 
For example, the requirement that $b_0(nT) = \\delta(n)$ used earlier in this workbook imposes an *orthogonality* condition on the problem that makes it easier to find the coefficients.\n\n# Tasks\n\nThese tasks involve writing code, or modifying existing code, to meet the objectives described.\n\n1. Suppose we have samples $x[n] = x(nT)$ of the signal\n$$x(t) = \\cos\\left( \\frac{t-5}{5} \\right) - \\left( \\frac{t-5}{5} \\right)^3$$\nfor $n=0, \\ldots, N-1$, with $N=10$ and $T = 1.2$. Consider the reconstruction equation\n$$x_r(t) = \\sum_{n=0}^{N-1} x[n] b_0(t-nT).$$\nOn the same set of axes plot both $x(t)$ and $x_r(t)$ over the range $t = 0$ to $t=(N-1)T$ for the case of a box filter interpolant $b_0(t) = p_T(t) = p_1(t/T)$, where $p_T(t)$ is the unit pulse of total width $T$ centered on the origin.

\n\n2. Generate two new plots with the same specifications as for the previous task, but using these interpolation kernels:\n\n A. Cubic kernel $b_0(tT) = \n \\begin{cases}\n (a+2)|t|^3 - (a+3)|t|^2 + 1 \\qquad & 0 \\leq |t| < 1 \\\\\n a|t|^3 - 5a|t|^2 + 8a|t| - 4a \\qquad & 1 \\leq |t| < 2 \\\\\n 0 \\qquad & 2 \\leq |t|\n \\end{cases}\n $
\n with $a=-1/2$.\n\n B. Truncated or windowed sinc function $b_0(tT) = \\frac{\\sin(\\pi t)}{\\pi t} p_{2w}(t)$ for $w=3$.

\n \n3. Use a representation of the form\n$$x_r(t) = \\sum_{n=0}^{N-1} c_n b_n(t)$$\nwith a polynomial basis $b_n(t) = t^n$ to approximate the signal $x(t) = e^{(t/10 + cos(t))}$. The coefficients should be calculated using least squares from the samples $x[n] = x(nT)$ for $T=1$ and $n=0, \\ldots, M-1$. On a single set of axes show the signal $x(t)$, the reconstruction $x_r(t)$, and the sampled points, for the case of $N=9$ and $M=11$. The domain of the plot should be from $t=0$ to $t=10$.

\n\n4. (optional) Repeat the previous task using a cosine basis $b_n(t) = \\cos(\\omega_0 n t)$ with $\\omega_0 = 2\\pi/10$ and $N = 6$.\n\n# Hints and guidance\n\nThe remainder of this notebook provides a more detailed walkthrough of possible ways of addressing the given tasks. You may not need them.\n\n## Task 1\n\n\n```python\n# 1.A) Set up symbolic signal x\n# Create a symbolic variable 't', and specify the given x in terms of t. Use 'lambdify' to create a function 'lam_x'\n# that takes t as an input and gives x(t) out.\n```\n\n\n```python\n# 1.B) Get the continuous x(t)\n# Create an array tv of 2500 time points over the range you're interested in. Get xv, the continuous approximation of the signal\n# x, by applying the lambda function you just created to each value in tv.\n```\n\n\n```python\n# 1.C) Get the discrete x[n]\n# Create the an array 'tnv' that contains the values of nT for n = 0 to n = N. Sample x by applying the lam_x function to each \n# value in tnv. Call the resulting array xnv.\n```\n\n\n```python\n# 1.D) Get b0\n# Use 'piecewise' to create the rectangular pulse b0. (Hint: the line of code you'll need should be in the 'Interpolation\n# kernel' cell.) Create a lambda function lam_b0 that takes t as the input and returns b0(t).\n\n# OPTIONAL: create an array 'b0v' by applying lam_b0 to a time array (tv or some new array, tbv, over a preferable range.)\n# plot b0v to see what each individual basis function looks like. You can also use a for loop to plot all the basis functions\n# over the range of interest.\n```\n\n\n```python\n# 1.E) Reconstruct\n# Create an array 'xav' to hold the reconstructed signal. It should be the same size as tv. Use a for loop to add each scaled\n# basis function to the reconstruction.\n```\n\n\n```python\n# 1.F) Output\n# Plot the original signal xv and the reconstructed signal xav on the same set of axes vs tv. If you like, you can use subplot\n# to create a second set of axes and plot each basis function on it using a for loop to better visualize what's happening.\n```\n\n## Task 2.A\n\n\n```python\n# 2.A.A) Get b0\n# You've already set up the signal and created continuous and discrete versions of it in task 1.\n# Use 'piecewise' to create the cubic interpolation function b0 and lambdify it. (Again, Prof's done it for you. You just have to\n# uncomment it and change the parameters.)\n\n# OPTIONAL: plot b0 as described in 1.D\n```\n\n\n```python\n# 2.A.B) Reconstruct and output\n# Create the reconstructed signal xav and plot it on the same set of axes as the original signal, as described in 1.E and 1.F\n```\n\n## Task 2.B\n\n\n```python\n# 2.B.A) Get b0\n# Use 'piecewise' to create the sinc interpolation function b0 and lambdify it.\n\n# OPTIONAL: plot b0 as described in 1.D\n```\n\n\n```python\n# 2.B.B) Reconstruct and output\n# Create the reconstructed signal xav and plot it on the same set of axes as the original signal, as described in 1.E and 1.F\n```\n\n## Task 3\n\nThe idea of reconstruction from basis functions and the way this task is formulated probably seem unfamiliar to you, but if you expand out that sum, you get $x(t) = c_0 b_0 + c_1 b_1 + ... + c_N b_N $\n\nSubstituting in $b_n = t^n$, you get $x(t) = c_0 + c_1 t + c_2 t^2 ... 
+ c_N t^N $ so all we're actually trying to do here is come up with an Nth order polynomial we can use to approximate $x(t)$\n\n\n```python\n# 3.A) Set up symbolic x\n# Specify the signal x in terms of the symbolic variable t and lambdify it to create lam_x\n# Create the continuous approximation x(t) as described in 1.B\n\nt = sp.symbols('t');\nx = sp.exp(sp.cos(t)+t/10)\nlam_x = sp.lambdify(t, x, modules=['numpy']);t = sp\n\ntv = np.linspace(0,10,2500);\nxv = lam_x(tv);\n```\n\n## 3.B Finding coefficients with least squares\nWe're going to use samples of $x(t)$ at $M$ points to evaluate the coefficients.\n\nAt each individual sample, $x(kT) = \\sum_{n=0}^{N-1} c_n b_n(kT) = c_0 + c_1 (kT) + c_2 (kT)^2 + ... c_N (kT)^N$, which can be written as the matrix expression: $$x(kT)\n= \\begin{pmatrix} 1 & kT & (kT)^2 & \\cdots & (kT)^N) \\end{pmatrix}\n\\begin{pmatrix} c_0 \\\\ c_1 \\\\ c_2\\\\ \\vdots \\\\ c_N \\end{pmatrix}.$$\n\nBy repeating this for k values from 0 through M, we get $$\\begin{pmatrix} x(0T) \\\\ x(1T) \\\\ \\vdots \\\\ x(MT) \\end{pmatrix}\n= \\begin{pmatrix} 1 & 0T & \\cdots & (0T)^N\n\\\\ 1 & 1T & \\cdots & (1T)^N\n\\\\ \\vdots\n\\\\ 1 & MT & \\cdots & (MT)^N\n\\end{pmatrix}\n\\begin{pmatrix} c_0 \\\\ c_1 \\\\ \\vdots \\\\ c_N \\end{pmatrix}$$ or $b = Bc$\n\nWe can calculate each term in $b$ using the lam_x function. The matrix $B$ is also simple to evaluate. From there, we just have to invert $B$ and multiply it by $b$ to get the coefficient matrix $c$ out.\n\n\n```python\n# 3.B)\n# Use 'zeros' to create the M x 1 matrix 'b' and the M x N matrix 'B'. Populate these matrices with the values described above\n# using for loops. Solve for the coefficient matrix cv by taking the dot product of the inverse of B and b.\n# (Hint: you can get an idea of how to do each of these tasks by looking at the least squares code.)\n```\n\n\n```python\n# 3.C) Reconstruct and output\n# Create an array xav, the same size as tv, to use for your polynomial approximation. Use a for loop to build the polynomial by\n# adding each c_n*t**n term from 0 up to N.\n# plot the original signal and the polynomial on the same set of axes.\n```\n", "meta": {"hexsha": "5d8e4923f77793addf6872ed95f5d0318341e580", "size": 580529, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "lab_reconstruct.ipynb", "max_stars_repo_name": "maxnvdm/notebooks", "max_stars_repo_head_hexsha": "c719a43d02e330bdc25dceea33d5e6c2b156e02d", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2019-07-17T09:03:42.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-01T05:28:21.000Z", "max_issues_repo_path": "lab_reconstruct.ipynb", "max_issues_repo_name": "maxnvdm/notebooks", "max_issues_repo_head_hexsha": "c719a43d02e330bdc25dceea33d5e6c2b156e02d", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "lab_reconstruct.ipynb", "max_forks_repo_name": "maxnvdm/notebooks", "max_forks_repo_head_hexsha": "c719a43d02e330bdc25dceea33d5e6c2b156e02d", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 20, "max_forks_repo_forks_event_min_datetime": "2017-08-21T12:06:52.000Z", "max_forks_repo_forks_event_max_datetime": "2020-10-02T16:52:18.000Z", "avg_line_length": 186.7853925354, "max_line_length": 190519, "alphanum_fraction": 0.8550632268, "converted": true, "num_tokens": 7078, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. 
YES\n2. YES", "lm_q1_score": 0.923039160069787, "lm_q2_score": 0.9196425278741989, "lm_q1q2_score": 0.8488660664934562}} {"text": "**Create Plots**\n\n**Plot with Symbolic Plotting Functions**\n\nMATLAB\u00ae provides many techniques for plotting numerical data. Graphical capabilities of MATLAB include plotting tools, standard plotting functions, graphic manipulation and data exploration tools, and tools for printing and exporting graphics to standard formats. Symbolic Math Toolbox\u2122 expands these graphical capabilities and lets you plot symbolic functions using:\n\n- fplot to create 2-D plots of symbolic expressions, equations, or functions in Cartesian coordinates.\n- fplot3 to create 3-D parametric plots.\n- ezpolar to create plots in polar coordinates.\n- fsurf to create surface plots.\n- fcontour to create contour plots.\n- fmesh to create mesh plots.\n\nPlot the symbolic expression $sin(6x)$ by using **fplot**. By default, **fplot** uses the range $\u22125fplot3 creates 3-D parameterized line plots.\n- fsurf creates 3-D surface plots.\n- fmesh creates 3-D mesh plots.\n\nCreate a spiral plot by using **fplot3** to plot the parametric line\n$$ x=(1-t)sin(100t)$$\n$$ y=(1-t)cos(100t)$$\n$$ z=\\sqrt{1-x^2-y^2}$$\n\n\n```python\nt = symbols('t')\nx = (1-t)*sin(100*t)\ny = (1-t)*cos(100*t)\nz = sqrt(1-x**2-y**2)\neqfx = lambdify(t,x)\neqfy = lambdify(t,y)\neqfz = lambdify(t,z)\nX = eqfx(np.arange(0,1,1/1000))\nY = eqfy(np.arange(0,1,1/1000))\nZ = eqfz(np.arange(0,1,1/1000))\n\nfig = plt.figure()\nax = fig.add_subplot(projection='3d')\nax.plot(X,Y,Z,linewidth=0.6)\nax.set_title('Symbolic 3-D Parametric Line')\n```\n\nSuperimpose a plot of a sphere with radius 1 and center at $(0, 0, 0)$. Find points on the sphere numerically by using **sphere**. Plot the sphere by using **mesh**. 
The resulting plot shows the symbolic parametric line wrapped around the top hemisphere.\n\n\n```python\n#hold on\n#[X,Y,Z] = sphere;\n#mesh(X, Y, Z)\n#colormap(gray)\n#title('Symbolic Parametric Plot and a Sphere')\n#hold off\ntheta,phi = np.meshgrid(np.linspace(0,2*np.pi,30),np.linspace(0,np.pi,30))\nX_sphere = np.sin(phi)*np.cos(theta)\nY_sphere = np.sin(phi)*np.sin(theta)\nZ_sphere = np.cos(phi)\n\nfig = plt.figure()\nax = fig.add_subplot(projection='3d')\nax.plot_wireframe(X_sphere,Y_sphere,Z_sphere,linewidth=0.2,color='black')\nax.plot(X,Y,Z)\n```\n", "meta": {"hexsha": "0e9fe4386891d3ea496525e4fd41c07af53db7eb", "size": 446631, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Symbolic Math Toolbox/Create Plots.ipynb", "max_stars_repo_name": "Zebedee2021/Signal_Processing_Toolbox_Python", "max_stars_repo_head_hexsha": "b98852b9a93d4a89eaa389061e61914a18b0dc5c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2022-03-15T13:37:10.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-17T02:48:44.000Z", "max_issues_repo_path": "Symbolic Math Toolbox/Create Plots.ipynb", "max_issues_repo_name": "Zebedee2021/Signal_Processing_Toolbox_Python", "max_issues_repo_head_hexsha": "b98852b9a93d4a89eaa389061e61914a18b0dc5c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Symbolic Math Toolbox/Create Plots.ipynb", "max_forks_repo_name": "Zebedee2021/Signal_Processing_Toolbox_Python", "max_forks_repo_head_hexsha": "b98852b9a93d4a89eaa389061e61914a18b0dc5c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2022-03-13T07:46:53.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-17T02:41:11.000Z", "avg_line_length": 722.7038834951, "max_line_length": 85284, "alphanum_fraction": 0.951241629, "converted": true, "num_tokens": 2181, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.918480252950991, "lm_q2_score": 0.9241418121440552, "lm_q1q2_score": 0.8488060053806591}} {"text": "# 15.6. 
Finding a Boolean propositional formula from a truth table\n\n\n```python\nfrom sympy import *\ninit_printing()\n```\n\n\n```python\nvar('x y z')\n```\n\n\n```python\nP = x & (y | ~z)\nP\n```\n\n\n```python\nP.subs({x: True, y: False, z: True})\n```\n\n\n```python\nminterms = [[1, 0, 1], [1, 0, 0], [0, 0, 0]]\ndontcare = [[1, 1, 1], [1, 1, 0]]\n```\n\n\n```python\nQ = SOPform(['x', 'y', 'z'], minterms, dontcare)\nQ\n```\n\n\n```python\nQ.subs({x: True, y: False, z: False}), Q.subs(\n {x: False, y: True, z: True})\n```\n", "meta": {"hexsha": "3c54f98e34c9b985cb66eeb145e6642125ad184e", "size": 2101, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "001-Jupyter/001-Tutorials/002-IPython-Cookbook/chapter15_symbolic/06_logic.ipynb", "max_stars_repo_name": "jhgoebbert/jupyter-jsc-notebooks", "max_stars_repo_head_hexsha": "bcd08ced04db00e7a66473b146f8f31f2e657539", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "001-Jupyter/001-Tutorials/002-IPython-Cookbook/chapter15_symbolic/06_logic.ipynb", "max_issues_repo_name": "jhgoebbert/jupyter-jsc-notebooks", "max_issues_repo_head_hexsha": "bcd08ced04db00e7a66473b146f8f31f2e657539", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "001-Jupyter/001-Tutorials/002-IPython-Cookbook/chapter15_symbolic/06_logic.ipynb", "max_forks_repo_name": "jhgoebbert/jupyter-jsc-notebooks", "max_forks_repo_head_hexsha": "bcd08ced04db00e7a66473b146f8f31f2e657539", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-01-13T18:49:12.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-13T18:49:12.000Z", "avg_line_length": 17.3636363636, "max_line_length": 72, "alphanum_fraction": 0.4626368396, "converted": true, "num_tokens": 203, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9566342024724487, "lm_q2_score": 0.8872046041554923, "lm_q1q2_score": 0.8487302689261739}} {"text": "---\nauthor: Nathan Carter (ncarter@bentley.edu)\n---\n\nThis answer assumes you have imported SymPy as follows.\n\n\n```python\nfrom sympy import * # load all math functions\ninit_printing( use_latex='mathjax' ) # use pretty math output\n```\n\nLet's create an equation with many variables.\n\n\n```python\nvar('P V n R T')\nideal_gas_law = Eq( P*V, n*R*T )\nideal_gas_law\n```\n\n\n\n\n$\\displaystyle P V = R T n$\n\n\n\nTo isolate one variable, call the `solve` function, and pass that variable\nas the second argument.\n\n\n```python\nsolve( ideal_gas_law, R )\n```\n\n\n\n\n$\\displaystyle \\left[ \\frac{P V}{T n}\\right]$\n\n\n\nThe brackets surround a list of all solutions---in this case, just one.\nThat solution is that $R=\\frac{PV}{Tn}$.\n", "meta": {"hexsha": "45165b5930c81a8b6f1b0f6116a1dc189f537dc9", "size": 2404, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "database/tasks/How to isolate one variable in an equation/Python, using SymPy.ipynb", "max_stars_repo_name": "nathancarter/how2data", "max_stars_repo_head_hexsha": "7d4f2838661f7ce98deb1b8081470cec5671b03a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "database/tasks/How to isolate one variable in an equation/Python, using SymPy.ipynb", "max_issues_repo_name": "nathancarter/how2data", "max_issues_repo_head_hexsha": "7d4f2838661f7ce98deb1b8081470cec5671b03a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "database/tasks/How to isolate one variable in an equation/Python, using SymPy.ipynb", "max_forks_repo_name": "nathancarter/how2data", "max_forks_repo_head_hexsha": "7d4f2838661f7ce98deb1b8081470cec5671b03a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-07-18T19:01:29.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-29T06:47:11.000Z", "avg_line_length": 20.9043478261, "max_line_length": 99, "alphanum_fraction": 0.5141430948, "converted": true, "num_tokens": 200, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9653811601648193, "lm_q2_score": 0.8791467595934565, "lm_q1q2_score": 0.8487117187314726}} {"text": "# Brusselator\n\nThe Brusselator is a simplified model for chemical reactions that oscillate. The reaction scheme is as follows:\n$$\n\\begin{align}\nA &\\overset{k_1}\\longrightarrow X \\tag{1} \\\\\nB + X &\\overset{k_2}\\longrightarrow Y + D \\tag{2}\\\\\n2X + Y &\\overset{k_3}\\longrightarrow 3X \\tag{3}\\\\\nX &\\overset{k_4}\\longrightarrow E \\tag{4}\\\\\n\\end{align}\n $$\n\n`A, B, D, E, X, Y` are the reactants species. \nWe will use the general formulation of the mass action in differential form to derive the differentiatl equations that govern the dynamics of the system. 
\n$$\n\\begin{align}\n\\frac{\\mathrm{d} X}{\\mathrm{d} t}&= (B-A)^T \\cdot K \\cdot X^A \\tag{53}\\\\\n\\end{align}\n$$\nwhere `A` and `B` are the matrices with the stoichiometric coefficients, `X` is the state vector $X=[X_1,X_2,X_3]^T$, and `K` is a matrix in the form:\n\n$$\nA=\\begin{pmatrix}\n 1 & 0 & 0 & 0 & 0 \\\\ \n 0 & 1 & 1 & \\dots & 0\\\\ \n 0 & 0 & k_3 &\\dots &0 \\\\ \n \\vdots & \\vdots & \\vdots & \\vdots & \\vdots \\tag{54}\\\\ \n 0 & 0 & 0 & \\dots & k_r \n\\end{pmatrix}\n$$\n\n\nThe system is a open reactor where we maintain a constant concentration of the reactants `A,B,D,E`.\n\nTherefore, the equations for the evolution of `[X]` and `[Y]` are as follows:\n\n$$\\begin{align} \n \\frac{ d[X] }{dt} &= k_1[A] \u2212 k_2[B][X] + k_3[X]^2[Y] \u2212 k_4[X]\\tag{5} \\\\ \n \\frac{ d[Y] }{dt} &= k_2[B][X] \u2212 k_3[X]^2[Y] \\tag{6} \n \\end{align} $$\n\nWe assume as initial conditions:\n\n$$\\begin{align} \n [X (0)] &= 0 \\tag{7} \\\\ \n [Y (0)] &= 0 \\tag{8} \n \\end{align} $$\n\nTo calculate the equilibrium, we just set eqs. 5 and 6 to zero and solve for `X` and `Y`. \n\n$$\\begin{align} \n [X]_{eq} &= \\frac{k_1 [A]}{k_4}\\tag{5} \\\\ \n [Y]_{eq} &= \\frac{k_4 k_2 [B]}{k_3 k_1 [A]} \\tag{6} \n \\end{align} $$\n \n \nTo evaluate stability, we will evaluate the Jacobian at the stationary state $([X]_{eq},[Y]_{eq})$. \n\nThe parameters sets to try are:\n\n$$\nk_1=1\\\\\nk_2=1\\\\\nk_3=1\\\\\nk_4=1\\\\\nA=1\\\\\nB=3\n$$\n\n\n```julia\nusing DifferentialEquations\n```\n\n\n```julia\nusing Plots; gr()\n```\n\n\n\n\n Plots.GRBackend()\n\n\n\n\n```julia\nbrusselator! = @ode_def BR begin\n dX = k_1 * A - k_2 * B * X + k_3 * X^2 * Y - k_4 * X\n dY = k_2 * B * X - k_3 * X^2 * Y\n end k_1 k_2 k_3 k_4 A B\n```\n\n\n\n\n (::BR{getfield(Main, Symbol(\"##3#7\")),getfield(Main, Symbol(\"##4#8\")),getfield(Main, Symbol(\"##5#9\")),Nothing,Nothing,getfield(Main, Symbol(\"##6#10\")),Expr,Expr}) (generic function with 2 methods)\n\n\n\n\n```julia\n\n```\n\n\n```julia\ntspan = (0.0,50.0)\nk_1=1.\nk_2=1.\nk_3=1.\nk_4=1.\nA=1.\nB=3.\nu\u2080=[0.0,0.0]\np=[k_1,k_2,k_3,k_4,A,B];\n```\n\n\n```julia\nprob1 = ODEProblem(brusselator!,u\u2080,tspan,p)\nsol1 = solve(prob1)\nplot(sol1,label=[\"X\",\"Y\"])\ntitle!(\"Brusselator\")\nxlabel!(\"Time [s]\")\nylabel!(\"Concentration [a.u]\")\n```\n\n\n\n\n \n\n \n\n\n\n\n```julia\nplot(sol1,vars=(1,2),label=[\"limit cycle plot\"])\ntitle!(\"Brusselator \")\nxlabel!(\"X [a.u.]\")\nylabel!(\"Y [a.u.]\")\n```\n\n\n\n\n \n\n \n\n\n\n\n```julia\n\n```\n", "meta": {"hexsha": "abf8255b6cc7be886b12ef7943a1287b0511f967", "size": 101035, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "7_Brusselator.ipynb", "max_stars_repo_name": "davidgmiguez/julia_notebooks", "max_stars_repo_head_hexsha": "b395fac8f73bf8d9d366d6354a561c722f37ce66", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "7_Brusselator.ipynb", "max_issues_repo_name": "davidgmiguez/julia_notebooks", "max_issues_repo_head_hexsha": "b395fac8f73bf8d9d366d6354a561c722f37ce66", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "7_Brusselator.ipynb", "max_forks_repo_name": "davidgmiguez/julia_notebooks", "max_forks_repo_head_hexsha": "b395fac8f73bf8d9d366d6354a561c722f37ce66", "max_forks_repo_licenses": 
["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 105.5747126437, "max_line_length": 241, "alphanum_fraction": 0.6483693769, "converted": true, "num_tokens": 1162, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9532750373915658, "lm_q2_score": 0.8902942268497306, "lm_q1q2_score": 0.8486952623896721}} {"text": "\n\n# Box-Cox Transformation \n\nNormality is an important assumption for many statistical techniques. A **Box-Cox transformation** is a transformation of non-normal dependent variables into a normal shape, which depends on the parameter $\\lambda$ and is defined as follow : \n\n\\begin{equation}\nw_t=\n\\begin{cases}\n \\displaystyle \\ln(y_t) \\text{ if } \\lambda=0 \\text{ , } \\\\\n \\displaystyle \\frac{(y_t^\\lambda-1)}{\\lambda} \\text{ otherwise.}\n\\end{cases}\n\\end{equation}\n\n* For all $\\lambda \\neq 1$, the time series will change shape.\n\n* If $\\lambda=0$, natural logarithms are used. \n\n* If $\\lambda=1$, then $w_t=y_t-1$, so the transformed data is shifted downwards but there is no change in the shape of the time series. \n\n* Having chosen a transformation, we need to forecast the transformed data. Then, we need to reverse the transformation (or back-transform) to obtain forecasts on the original scale.\n\n* Features of power transformations \n * Often no transformation is needed.\n * The forecasting results are relatively insensitive(\u611f\u89ba\u9072\u920d\u7684) to the value of $\\lambda$. \n * Transformations sometimes make little difference to the forecasts but **have a large effect on prediction intervals**. \n\nReference : [Mathematical transformations](https://otexts.com/fpp2/transformations.html#mathematical-transformations)\n\n\n", "meta": {"hexsha": "c258138925409a2b80ffc2ba62845d89af9b1e2f", "size": 2941, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Materials/Box_Cox_Transformation.ipynb", "max_stars_repo_name": "YenLinWu/Time_Series_Model", "max_stars_repo_head_hexsha": "c8bbbc2e5121be4f646b29742ebd525683ab90ff", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Materials/Box_Cox_Transformation.ipynb", "max_issues_repo_name": "YenLinWu/Time_Series_Model", "max_issues_repo_head_hexsha": "c8bbbc2e5121be4f646b29742ebd525683ab90ff", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Materials/Box_Cox_Transformation.ipynb", "max_forks_repo_name": "YenLinWu/Time_Series_Model", "max_forks_repo_head_hexsha": "c8bbbc2e5121be4f646b29742ebd525683ab90ff", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.2133333333, "max_line_length": 349, "alphanum_fraction": 0.5651139068, "converted": true, "num_tokens": 420, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9263037262250327, "lm_q2_score": 0.9161096164524095, "lm_q1q2_score": 0.8485957513504524}} {"text": "# Optimization Exercise 1\n\n## Imports\n\n\n```python\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport scipy.optimize as opt\n```\n\n## Hat potential\n\nThe following potential is often used in Physics and other fields to describe symmetry breaking and is often known as the \"hat potential\":\n\n$$ V(x) = -a x^2 + b x^4 $$\n\nWrite a function `hat(x,a,b)` that returns the value of this function:\n\n\n```python\n# YOUR CODE HERE\ndef hat(x, a, b):\n return (-a * x**2) + (b * x**4)\n```\n\n\n```python\nassert hat(0.0, 1.0, 1.0)==0.0\nassert hat(0.0, 1.0, 1.0)==0.0\nassert hat(1.0, 10.0, 1.0)==-9.0\n```\n\nPlot this function over the range $x\\in\\left[-3,3\\right]$ with $b=1.0$ and $a=5.0$:\n\n\n```python\na = 5.0\nb = 1.0\nx = np.linspace(-3, 3, 100)\nplt.plot(x, hat(x, a, b))\n```\n\n\n```python\n# YOUR CODE HERE\nraise NotImplementedError()\n```\n\n\n```python\nassert True # leave this to grade the plot\n```\n\nWrite code that finds the two local minima of this function for $b=1.0$ and $a=5.0$.\n\n* Use `scipy.optimize.minimize` to find the minima. You will have to think carefully about how to get this function to find both minima.\n* Print the x values of the minima.\n* Plot the function as a blue line.\n* On the same axes, show the minima as red circles.\n* Customize your visualization to make it beatiful and effective.\n\n\n```python\n# YOUR CODE HERE\n#specifies a number of divisions which the function will look for a minimum on each.\n#more divisions = more accurate\ndef minima(divisions, function, a, b):\n critpoints = []\n for n in range(0, divisions):\n sectionx = np.linspace(n*(6/divisions) - 3, (n+1)*(6/divisions) - 3, 100)\n sectiony = function(np.linspace(n*(6/divisions) - 3, (n+1)*(6/divisions) - 3, 100), a, b)\n #make sure the minimum is not on the ends\n if np.amin(sectiony) != sectiony[0] and np.amin(sectiony) != sectiony[-1]:\n minpt = np.argmin(sectiony)\n critpoints.append(sectionx[minpt])\n \n return critpoints\n \n```\n\n\n```python\nminpts = minima(100, hat, a, b)\n```\n\n\n```python\nx = np.linspace(-3, 3, 100)\nplt.plot(x, hat(x, a, b))\nplt.scatter(minpts[0], hat(minpts[0], a, b), color = \"r\")\nplt.scatter(minpts[1], hat(minpts[1], a, b), color = \"r\")\nplt.xlabel(\"X\")\nplt.ylabel(\"V(x)\")\nprint(\"Minimums: \", minpts)\n```\n\n\n```python\nassert True # leave this for grading the plot\n```\n\nTo check your numerical results, find the locations of the minima analytically. Show and describe the steps in your derivation using LaTeX equations. 
Evaluate the location of the minima using the above parameters.\n\nTo find the minima, we can take the derivative of the function and set it to zero, finding the critical points.\n\n\\begin{equation}\nV'(x) = -2ax + 4bx^3 = 0\n\\end{equation}\n\nDividing both sides by x gives us a quadratic, and we can put in the given values of a and b:\n\n\\begin{equation}\n-10 + 4x^2 = 0\n\\end{equation}\n\nSolving this gives us the intercepts, where V'(x) = 0.\n\n\\begin{equation}\nx = \\pm \\sqrt{\\frac{5}{2}}\n\\end{equation}\n\nWhich approximates to:\n\\begin{equation}\nx = \\pm 1.5811\n\\end{equation}\n\nAnd this agrees with the program's results.\n\n\n```python\n\n```\n", "meta": {"hexsha": "dad9934b36477e08755d1a8eda6358c7709e383c", "size": 31686, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "assignments/assignment11/OptimizationEx01.ipynb", "max_stars_repo_name": "SJSlavin/phys202-2015-work", "max_stars_repo_head_hexsha": "60311479d6a27ca4c530b057036a326e87805b61", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "assignments/assignment11/OptimizationEx01.ipynb", "max_issues_repo_name": "SJSlavin/phys202-2015-work", "max_issues_repo_head_hexsha": "60311479d6a27ca4c530b057036a326e87805b61", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "assignments/assignment11/OptimizationEx01.ipynb", "max_forks_repo_name": "SJSlavin/phys202-2015-work", "max_forks_repo_head_hexsha": "60311479d6a27ca4c530b057036a326e87805b61", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 84.0477453581, "max_line_length": 11464, "alphanum_fraction": 0.8296092912, "converted": true, "num_tokens": 966, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8962513703624558, "lm_q2_score": 0.9465966734015935, "lm_q1q2_score": 0.8483885657167202}} {"text": "If $x_1, ..., x_n$ are an iid sample from the double-exponential (Laplace) distribution with density:\n\n$$p_{\\theta}(x) = (2 \\theta)^{-1} exp (-|x|/\\theta)$$\n\nthen from the factorization theorem it is obvious that $$t(x) = \\sum_i |x_i|$$ is sufficient.\n\nWe want to show that the conditional distribution of the data given T = t is free of $\\theta$. According to the following discussion, it seems that we can infer that the [distribution of T](https://math.stackexchange.com/questions/199460/distribution-of-sum-of-absolute-value-of-random-variable) is given by $T \\sim Gamma(n \\theta, n)$. 
Then using Bayes theorem we have:\n\n\n$$\n\\begin{align}\nP(X_1 = x_1, ..., X_n = x_n | T = t) & = \\frac{P(X_1 = x_1, ..., X_n = x_n, T = t)}{P(T = t)} \\\\\n& = \\frac{P(X_1 = x_1, ..., X_n = x_n)}{P(T = t)} \\\\\n& = \\frac{(2 \\theta)^{-n} exp (-t/\\theta)}{\\Gamma (n)^{-1} (n \\theta)^{-n} t^{n - 1 } exp (-t/\\theta)} \\\\\n& = \\frac{\\Gamma (n)}{t^{n - 1}} \\bigg( \\frac{2}{n} \\bigg)^{-n}\n\\end{align}\n$$\n\nwhich is free of $\\theta$ as expected.\n", "meta": {"hexsha": "93fb9ca3dfd3161ee11c1acae32b7007a469918a", "size": 1732, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "python/chapter-3/exercises/EX3-5.ipynb", "max_stars_repo_name": "covuworie/in-all-likelihood", "max_stars_repo_head_hexsha": "6638bec8bb4dde7271adb5941d1c66e7fbe12526", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "python/chapter-3/exercises/EX3-5.ipynb", "max_issues_repo_name": "covuworie/in-all-likelihood", "max_issues_repo_head_hexsha": "6638bec8bb4dde7271adb5941d1c66e7fbe12526", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2020-03-24T17:53:04.000Z", "max_issues_repo_issues_event_max_datetime": "2021-08-23T20:16:17.000Z", "max_forks_repo_path": "python/chapter-3/exercises/EX3-5.ipynb", "max_forks_repo_name": "covuworie/in-all-likelihood", "max_forks_repo_head_hexsha": "6638bec8bb4dde7271adb5941d1c66e7fbe12526", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-11-21T10:24:59.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-21T10:24:59.000Z", "avg_line_length": 33.9607843137, "max_line_length": 382, "alphanum_fraction": 0.5311778291, "converted": true, "num_tokens": 353, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9496693645535725, "lm_q2_score": 0.893309405344251, "lm_q1q2_score": 0.8483485753230046}} {"text": "# Linear regression\n- It's a way to model relationship between 2 models\n- Also known as the slope formula\n- equation form: \\begin{align}Y = a + bX\\end{align}\n - where X is the independent variable and Y is dependent\n - b = slope\n - a = y-intercept\n\n#### Equations\n\\begin{align}\na = \\dfrac{(\\sum y)(\\sum x^2)-(\\sum x)(\\sum xy)}{n(\\sum x^2) - (\\sum x)^2}\n\\end{align}\n\n\\begin{align}\nb = \\dfrac{n(\\sum xy)-(\\sum x)(\\sum y)}{n(\\sum x^2) - (\\sum x)^2}\n\\end{align}\n\n\n\n```python\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\n```\n\n\n```python\nimport os\npath = os.getcwd() + '\\data\\ex1data1.txt'\ndata = pd.read_csv(path, header=None, names=['Population', 'Profit'])\ndata.head()\n```\n\n\n\n\n
       Population   Profit\n    0      6.1101  17.5920\n    1      5.5277   9.1302\n    2      8.5186  13.6620\n    3      7.0032  11.8540\n    4      5.8598   6.8233\n
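\nAs a quick, hand-checkable illustration of the closed-form formulas in the Equations section above, we can evaluate the intercept $a$ and slope $b$ directly on the two columns just loaded. This is only a hedged sketch: it assumes `data` is the DataFrame created above, and the names `pop`, `profit`, `a_hat` and `b_hat` are introduced here purely for illustration.\n\n\n```python\n# Hedged sketch: plug the loaded columns into the closed-form least-squares\n# formulas for the intercept (a) and slope (b) from the Equations section.\npop = data['Population']\nprofit = data['Profit']\nn = len(data)\ndenom = n * (pop**2).sum() - pop.sum()**2\na_hat = (profit.sum() * (pop**2).sum() - pop.sum() * (pop * profit).sum()) / denom\nb_hat = (n * (pop * profit).sum() - pop.sum() * profit.sum()) / denom\nprint(a_hat, b_hat)\n```\n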
\n\n\n\n\n```python\ndata.describe()\n```\n\n\n\n\n
           Population     Profit\n    count   97.000000  97.000000\n    mean     8.159800   5.839135\n    std      3.869884   5.510262\n    min      5.026900  -2.680700\n    25%      5.707700   1.986900\n    50%      6.589400   4.562300\n    75%      8.578100   7.046700\n    max     22.203000  24.147000\n
\n\n\n\n> Let's plot it together to get a better idea of what the data looks like\n\n\n```python\nplt.scatter(x=data['Population'], y=data['Profit'])\nplt.show()\n```\n\n> Implementing linear regression using gradient descent to minimize the cost function\n\n> Creating a function to compute the cost of a given solution\n\nThe function below implements the squared-error cost $J(\theta) = \frac{1}{2m}\sum_{i=1}^{m}\left(x^{(i)}\theta^T - y^{(i)}\right)^2$, where $m$ is the number of training samples; gradient descent will try to drive this value down.\n\n\n```python\ndef compute_cost(X, y, theta):\n    inner = np.power(((X * theta.T) - y), 2)\n    return np.sum(inner) / (2 * len(X))\n```\n\nInserting a column of ones so that the intercept term is handled by the same vectorized computation of the cost and gradients.\n\n\n```python\ndata.insert(0, 'Ones', 1)\n```\n\nChecking the new table\n\n\n```python\ndata.head()\n```\n\n\n\n\n
       Ones  Population   Profit\n    0     1      6.1101  17.5920\n    1     1      5.5277   9.1302\n    2     1      8.5186  13.6620\n    3     1      7.0032  11.8540\n    4     1      5.8598   6.8233\n
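\nThe reason for the extra column becomes clearer if we look at what the matrix product does: with a leading 1 in every row, $X\theta^T$ evaluates $\theta_0 \cdot 1 + \theta_1 \cdot x$ for each sample, so the intercept is handled by the same vectorized dot product as the slope. Below is a small hedged sketch; the names `X_demo` and `theta_demo` are made up for illustration only.\n\n\n```python\n# Hedged sketch: the ones column folds the intercept into the matrix product.\nimport numpy as np\n\nX_demo = np.matrix(data[['Ones', 'Population']].values[:3])  # first three rows\ntheta_demo = np.matrix([[-1.0, 2.0]])  # hypothetical intercept and slope\n\n# Each entry equals -1.0 + 2.0 * Population for that row.\nprint(X_demo * theta_demo.T)\n```\n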
\n\n\n\nVariable initialization\n\n\n```python\n# set X (training data) and y (target variable)\ncols = data.shape[1]\nX = data.iloc[:, 0:cols-1]\ny = data.iloc[:, cols-1:cols]\n```\n\nChecking values\n\n\n```python\nX.head()\n```\n\n\n\n\n
       Ones  Population\n    0     1      6.1101\n    1     1      5.5277\n    2     1      8.5186\n    3     1      7.0032\n    4     1      5.8598\n
\n\n\n\n\n```python\ny.head()\n```\n\n\n\n\n
        Profit\n    0  17.5920\n    1   9.1302\n    2  13.6620\n    3  11.8540\n    4   6.8233\n
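\nBefore running anything on the full data set, it is worth sanity-checking `compute_cost` on numbers small enough to verify by hand. The block below is a hedged sketch; `X_tiny`, `y_tiny` and the theta values are made up only for this check.\n\n\n```python\n# Hedged sketch: verify compute_cost on a tiny, hand-checkable example.\nimport numpy as np\n\nX_tiny = np.matrix([[1.0, 2.0], [1.0, 4.0]])  # two samples, with a ones column\ny_tiny = np.matrix([[5.0], [9.0]])\n\n# Predictions 1 + 2*2 = 5 and 1 + 2*4 = 9 match y exactly, so the cost is 0.\nprint(compute_cost(X_tiny, y_tiny, np.matrix([[1.0, 2.0]])))\n\n# Predictions 4 and 8 give errors of -1 each: cost = (1 + 1) / (2 * 2) = 0.5\nprint(compute_cost(X_tiny, y_tiny, np.matrix([[0.0, 2.0]])))\n```\n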
\n\n\n\nConvert to numpy matrices\n\n\n```python\nX = np.matrix(X.values)\ny = np.matrix(y.values)\ntheta = np.matrix(np.array([0, 0]))\n```\n\n\n```python\ntheta\n```\n\n\n\n\n matrix([[0, 0]])\n\n\n\nShape of matrices\n\n\n```python\nX.shape, y.shape, theta.shape\n```\n\n\n\n\n ((97, 2), (97, 1), (1, 2))\n\n\n\nCompute cost for initial solution (0 values for theta)\n\n\n```python\ncompute_cost(X, y, theta)\n```\n\n\n\n\n 32.072733877455676\n\n\n\n---\nDefining a function to perform gradient descent on paramaters theta using the update rules defined in the text\n\n\n```python\ndef gradient_descent(X, y, theta, alpha, iters):\n temp = np.matrix(np.zeros(theta.shape))\n parameters = int(theta.ravel().shape[1])\n cost = np.zeros(iters)\n \n for i in range(iters):\n error = (X * theta.T) - y\n \n for j in range(parameters):\n term = np.multiply(error, X[:, j])\n temp[0, j] = theta[0, j] - ((alpha / len(X)) * np.sum(term))\n \n theta = temp\n cost[i] = compute_cost(X, y, theta)\n \n return theta, cost\n```\n\nInit some additional variables: learning rate $\\alpha$ (alpha), and number of iterations to perform\n\n\n```python\nalpha = 0.01\niters = 1000\n```\n\nRunning the gradient descent algo to ift our parameters theta to the training set\n\n\n```python\ng, cost = gradient_descent(X, y, theta, alpha, iters)\ng\n```\n\n\n\n\n matrix([[-3.24140214, 1.1272942 ]])\n\n\n\nNow we can compute our cost (error) of trained model using our fitted parameters.\n\n\n```python\ncompute_cost(X, y, g)\n```\n\n\n\n\n 4.5159555030789118\n\n\n\nLet's plot the linear model along with the data to see how it fits\n\n\n```python\nx = np.linspace(data.Population.min(), data.Population.max(), 100)\nf = g[0, 0] + (g[0, 1] * x)\n\nfig, ax = plt.subplots(figsize=(12, 8))\nax.plot(x, f, 'r', label='Prediction')\nax.scatter(data.Population, data.Profit, label='Traning Data')\nax.legend(loc=2)\nax.set_xlabel('Population')\nax.set_ylabel('Profit')\nax.set_title('Predicted Profit vs. Population Size')\nfig\n```\n\nObservations:\n- The cost always decreases. This is an example of convex optimization problem\n- function outputs a vector with the cost at each iteration, so we can plot that as well\n\n\n```python\nfig, ax = plt.subplots(figsize=(12,8))\nax.plot(np.arange(iters), cost, 'r')\nax.set_xlabel('Iterations')\nax.set_xlim([0, iters])\nax.set_ylabel('Cost')\nax.set_title('Error vs. Training Epoch')\nfig\n```\n\n---\n# Mulitple variables\n---\n\nLet's try 2 variables. \n\nExample: a housing price data set with 2 variables \n- size of house, no. of bedrooms and \n- target price of house.\n\n\n```python\npath = os.getcwd() + '\\data\\ex1data2.txt'\ndata2 = pd.read_csv(path, header=None, names=['Size', 'Bedrooms', 'Price'])\ndata2.head()\n```\n\n\n\n\n
       Size  Bedrooms   Price\n    0  2104         3  399900\n    1  1600         3  329900\n    2  2400         3  369000\n    3  1416         2  232000\n    4  3000         4  539900\n
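\nNotice how different the raw feature scales are: house sizes run into the thousands while bedroom counts are single digits, so a single learning rate would have to serve features whose magnitudes differ by about three orders of magnitude. A quick hedged look at the per-column spread (assuming `data2` is the DataFrame loaded above) makes this concrete and motivates the normalization step that follows.\n\n\n```python\n# Hedged sketch: compare the scale of each raw feature before normalizing.\nprint(data2.std())                # per-column standard deviation\nprint(data2.max() - data2.min())  # per-column range\n```\n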
\n\n\n\n> add another step: normalizing the features\n\n\n```python\ndata2 = (data2 - data2.mean()) / data2.std()\ndata2.head()\n```\n\n\n\n\n
           Size  Bedrooms     Price\n    0  0.130010 -0.223675  0.475747\n    1 -0.504190 -0.223675 -0.084074\n    2  0.502476 -0.223675  0.228626\n    3 -0.735723 -1.537767 -0.867025\n    4  1.257476  1.090417  1.595389\n
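\nSince every column was rescaled by its own mean and standard deviation, a quick check that the transformed columns are centred near zero with unit spread confirms the normalization did what we expect. This is a hedged sketch that simply reuses `data2` from the cell above.\n\n\n```python\n# Hedged sketch: after z-score normalization each column should have\n# (approximately) zero mean and unit standard deviation.\nprint(data2.mean())\nprint(data2.std())\n```\n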
\n\n\n\nRepeating steps from previous part\n\n\n```python\n# Add ones column\ndata2.insert(0, 'Ones', 1)\n```\n\n\n```python\n# set X(training data) and y(target)\ncols = data2.shape[1]\nX2 = data2.iloc[:, 0:cols-1]\ny2 = data2.iloc[:, cols-1:cols]\n```\n\n\n```python\n# convert to np array\nX2 = np.matrix(X2.values)\ny2 = np.matrix(y2.values)\ntheta2 = np.matrix(np.array([0,0,0]))\n```\n\n\n```python\n# perform linear regression\ng2, cost2 = gradient_descent(X2, y2, theta2, alpha, iters)\n\n# compute cost\ncompute_cost(X2, y2, g2)\n```\n\n\n\n\n 0.13070336960771892\n\n\n\n\n```python\nfig, ax = plt.subplots(figsize=(12,8))\nax.plot(np.arange(iters), cost2, 'r')\nax.set_xlabel('Iterations')\nax.set_ylabel('Cost')\nax.set_title('Error vs. Training Epoch')\nfig\n```\n\n---\n# Using scikit learn's linear regression function\n---\n\n\n```python\nfrom sklearn import linear_model\n```\n\n\n```python\nmodel = linear_model.LinearRegression()\nmodel.fit(X, y)\n```\n\n\n\n\n LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=False)\n\n\n\n\n```python\nx = np.array(X[:, 1].A1)\nf = model.predict(X).flatten()\n\nfig, ax = plt.subplots(figsize=(12,8))\nax.plot(x, f, 'r', label='Prediction')\nax.scatter(data.Population, data.Profit, label='Traning Data')\nax.legend(loc=2)\nax.set_xlabel('Population')\nax.set_ylabel('Profit')\nax.set_title('Predicted Profit vs. Population Size')\nfig\n```\n", "meta": {"hexsha": "f41b1ce13adf64bbefa6764fd25187526413b278", "size": 132467, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "ML/classification/Linear Regression Python.ipynb", "max_stars_repo_name": "manparvesh/ml-dl", "max_stars_repo_head_hexsha": "d1eae6b4d8346582303718601394557b5be744c0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "ML/classification/Linear Regression Python.ipynb", "max_issues_repo_name": "manparvesh/ml-dl", "max_issues_repo_head_hexsha": "d1eae6b4d8346582303718601394557b5be744c0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ML/classification/Linear Regression Python.ipynb", "max_forks_repo_name": "manparvesh/ml-dl", "max_forks_repo_head_hexsha": "d1eae6b4d8346582303718601394557b5be744c0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 108.31316435, "max_line_length": 30928, "alphanum_fraction": 0.8374689545, "converted": true, "num_tokens": 4046, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9449947117065458, "lm_q2_score": 0.8976952989498449, "lm_q1q2_score": 0.8483173102314301}} {"text": "# Gaussian Process\n\nIn this tutorial, we expose what gaussian processes are, and how to use the [GPy library](http://sheffieldml.github.io/GPy/). 
We first provide a gentle reminder about Gaussian distributions and their properties.\n\n\n```python\n# Import all the important libraries\nimport math\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.stats import norm\nfrom scipy.stats import multivariate_normal\nfrom mpl_toolkits.mplot3d import Axes3D\nimport matplotlib.mlab as mlab\nfrom ipywidgets import widgets as wg\nfrom matplotlib import cm\n\nimport GPy\n#%matplotlib inline\n%matplotlib notebook\n```\n\n## 1D Gaussian distribution\n\n\n```python\n# Plot\ndef plot_gaussian(mu=0, sigma=1):\n x = np.linspace(-3, 3, 100)\n plt.plot(x, norm.pdf(x, mu, sigma))\n plt.xlabel('x')\n plt.ylabel('p(x)')\n\nwg.interact(plot_gaussian, mu=(-2,2,0.1), sigma=(-2,2,0.1))\nplt.show()\n```\n\n## Multivariate Gaussian distribution (2D)\n\nThe multivariable Gaussian distribution is a generalization of the Gaussian distribution to vectors. See [wikipedia](https://en.wikipedia.org/wiki/Multivariate_normal_distribution) for more info.\n\n\n```python\n# moments\nmu = np.array([0,0])\nSigma = np.array([[1,0], \n [0,1]])\nSigma1 = np.array([[1,0.5],\n [0.5,1]])\nSigma2 = np.array([[1,-0.5],\n [-0.5,1]])\nSigmas = [Sigma, Sigma1, Sigma2]\n\npts = []\nfor S in Sigmas:\n pts.append(np.random.multivariate_normal(mu, S, 1000).T)\n\n# Plotting\nwidth = 16\nheight = 4\nplt.figure(figsize=(width, height))\n\n# make plot\nfor i in range(len(Sigmas)):\n plt.subplot(1,3,i+1)\n plt.title('Plot '+str(i+1))\n plt.ylim(-4,4)\n plt.xlim(-4,4)\n plt.xlabel('x1')\n plt.ylabel('x2')\n plt.plot(pts[i][0], pts[i][1],'o')\n #plt.scatter(pts[i][0], pts[i][1])\n \nplt.show()\n```\n\nThe 1st plot above is described by:\n\\begin{equation}\n \\left[ \\begin{array}{c} x_1\\\\x_2 \\end{array}\\right] \\sim \\mathcal{N} \\left(\\left[ \\begin{array}{c} 0\\\\0 \\end{array}\\right], \\left[ \\begin{array}{cc} 1 & 0\\\\ 0 & 1 \\end{array}\\right] \\right)\n\\end{equation}\n\nThe 2nd plot is given by:\n\\begin{equation}\n \\left[ \\begin{array}{c} x_1\\\\x_2 \\end{array}\\right] \\sim \\mathcal{N}\\left(\\left[ \\begin{array}{c} 0\\\\0 \\end{array}\\right], \\left[ \\begin{array}{cc} 1 & 0.5\\\\ 0.5 & 1 \\end{array}\\right]\\right)\n\\end{equation}\n\nFinally, the 3rd plot is given by:\n\\begin{equation}\n \\left[ \\begin{array}{c} x_1\\\\x_2 \\end{array}\\right] \\sim \\mathcal{N}\\left(\\left[ \\begin{array}{c} 0\\\\0 \\end{array}\\right], \\left[ \\begin{array}{cc} 1 & -0.5\\\\ -0.5 & 1 \\end{array}\\right]\\right)\n\\end{equation}\n\nThe covariance (and the dot product) measures the similarity.\n\nFor the 2nd and 3rd plots, $x_1$ is **correlated** with $x_2$, i.e. 
knowing $x_1$ gives us information about $x_2$.\n\n### Joint distribution $p(x_1,x_2)$\n\nThe joint distribution $p(x_1, x_2)$ is given by:\n\n\\begin{equation}\n \\left[ \\begin{array}{c} x_1\\\\x_2 \\end{array}\\right] \\sim \\mathcal{N}\\left(\\left[ \\begin{array}{c} \\mu_1 \\\\ \\mu_2 \\end{array}\\right], \\left[ \\begin{array}{cc} \\Sigma_{11} & \\Sigma_{12} \\\\ \\Sigma_{21} & \\Sigma_{22} \\end{array}\\right] \\right) = \\mathcal{N}(\\pmb{\\mu}, \\pmb{\\Sigma})\n\\end{equation}\n\n\n```python\n# Reference: http://stackoverflow.com/questions/38698277/plot-normal-distribution-in-3d\n\n# moments\nmu = np.array([0,0])\nSigma = np.array([[1,0], [0,1]])\n\n# Create grid and multivariate normal\nstep = 500\nbound = 10\nx = np.linspace(-bound,bound,step)\ny = np.linspace(-bound,bound,step)\nX, Y = np.meshgrid(x,y)\npos = np.empty(X.shape + (2,))\npos[:, :, 0] = X; pos[:, :, 1] = Y\npdf = multivariate_normal(mu, Sigma).pdf(pos)\n\n# Plot\nfig = plt.figure(figsize=plt.figaspect(0.5)) # Twice as wide as it is tall.\n\n# 1st subplot (3D)\nax = fig.add_subplot(1, 2, 1, projection='3d')\nax.plot_surface(X, Y, pdf, cmap='viridis', linewidth=0)\nax.set_xlabel('x1')\nax.set_ylabel('x2')\nax.set_zlabel('p(x1, x2)')\n\n# 2nd subplot (2D)\nax = fig.add_subplot(1, 2, 2)\nax.contourf(x, y, pdf)\n#ax.colorbar()\nax.set_xlabel('x1')\nax.set_ylabel('x2')\n\nfig.tight_layout()\n\nplt.show()\n```\n\n### Normalization\n\nIn order to be a valid probability distribution, the volume under the surface should equal to 1.\n\n\\begin{equation}\n \\int \\int p(x_1,x_2) dx_1 dx_2 = 1\n\\end{equation}\n\n\n```python\n# p(x1,x2) = pdf\n# dx = 2.*bound/step\n# dx1 dx2 = (2.*bound/step)**2\nprint(\"Summation: {}\".format((2.*bound/step)**2 * pdf.sum()))\n```\n\n### Conditional distribution $p(x_2|x_1)$\n\nWhat is the mean $\\mu_{2|1}$ and the variance $\\Sigma_{2|1}$ of the conditional distribution $p(x_2|x_1) = \\mathcal{N}(\\mu_{2|1}, \\Sigma_{2|1})$?\n\nWe know the mean $\\pmb{\\mu}$ and the covariance $\\pmb{\\Sigma}$ of the joint distribution $p(x_1,x_2)$. Using the [Schur complement](https://en.wikipedia.org/wiki/Schur_complement), we obtain:\n\n\\begin{align}\n \\mu_{2|1} &= \\mu_{2} + \\Sigma_{21}\\Sigma_{22}^{-1}(x_2 - \\mu_2) \\\\\n \\Sigma_{2|1} &= \\Sigma_{22} - \\Sigma_{21}\\Sigma_{22}^{-1}\\Sigma_{12}\n\\end{align}\n\nFor the demo, check Murphy's book \"Machine Learning: A Probabilistic Perspective\", section 4.3.4\n\n\n```python\nfig = plt.figure(figsize=plt.figaspect(0.5)) # Twice as wide as it is tall.\n\nx1_value = 0\nz_max = pdf.max()\n\n# 1st subplot\nax = fig.add_subplot(1, 2, 1, projection='3d')\nax.plot_surface(X, Y, pdf, cmap='viridis', linewidth=0)\nax.set_xlabel('x1')\nax.set_ylabel('x2')\nax.set_zlabel('p(x1, x2)')\ny1 = np.linspace(-bound,bound,2)\nz = np.linspace(0,z_max,2)\nY1, Z = np.meshgrid(y1,z)\nax.plot_surface(x1_value, Y1, Z, color='red', alpha=0.2)\n#cset = ax.contourf(X, Y, pdf, zdir='x', offset=-bound, cmap=cm.coolwarm)\n\n# 2nd subplot\nax = fig.add_subplot(1, 2, 2)\nax.plot(x, pdf[step//2 + x1_value*step//(2*bound)])\nax.set_xlabel('x2')\nax.set_ylabel('p(x2|x1)')\n\nfig.tight_layout()\n\nplt.show()\n```\n\n### Marginal distribution $p(x_1)$ and $p(x_2)$\n\n\\begin{align}\n p(x_1) &= \\int p(x_1, x_2) dx_2 = \\mathcal{N}(\\mu_1, \\Sigma_{11}) \\\\\n p(x_2) &= \\int p(x_1, x_2) dx_1 = \\mathcal{N}(\\mu_2, \\Sigma_{22})\n\\end{align}\n\n\n```python\nfig = plt.figure(figsize=plt.figaspect(0.5))\nplt.subplot(1,2,1)\nplt.title('By summing')\ndx = 2. 
* bound / step\nplt.plot(x, pdf.sum(0) * dx, color='blue')\nplt.xlabel('x2')\nplt.ylabel('p(x2)')\n\nplt.subplot(1,2,2)\nplt.title('by using the normal distribution')\nplt.plot(x, norm.pdf(x, mu[1], Sigma[1,1]), color='red')\nplt.xlabel('x2')\nplt.ylabel('p(x2)')\n\nfig.tight_layout()\nplt.show()\n```\n\n## Gaussian Processes (GPs)\n\nA Gaussian process is a Gaussian distribution over functions. That is, it is a generalization of the multivariable Gaussian distribution to infinite vectors.\n\nIt will become clearer with an example.\n\n\n```python\nx = [0.5,0.8,1.4]\nf = [1,2,6]\n\nplt.plot(x,f,'o')\nfor i in range(len(x)):\n plt.annotate('f'+str(i+1), (x[i],f[i]))\nplt.xlim(0,2)\nplt.ylim(0,6.5)\nplt.ylabel('f(x)')\nplt.xlabel('x')\nplt.xticks(x, ['x'+str(i+1) for i in range(len(x))])\nplt.show()\n```\n\n\\begin{align}\n \\left[ \\begin{array}{c} f_1 \\\\ f_2 \\\\ f_3 \\end{array}\\right] \n &\\sim \\mathcal{N}\\left( \\left[ \\begin{array}{c} 0 \\\\ 0 \\\\ 0 \\end{array}\\right], \\left[ \\begin{array}{ccc} K_{11} & K_{12} & K_{13} \\\\ K_{21} & K_{22} & K_{23} \\\\ K_{31} & K_{32} & K_{33} \\end{array}\\right] \\right) \\\\\n &\\sim \\mathcal{N}\\left( \\left[ \\begin{array}{c} 0 \\\\ 0 \\\\ 0 \\end{array}\\right], \\left[ \\begin{array}{ccc} 1 & 0.7 & 0.2 \\\\ 0.7 & 1 & 0.6 \\\\ 0.2 & 0.6 & 1 \\end{array}\\right] \\right)\n\\end{align}\n\nSimilarity measure: $K_{ij} = \\exp(- ||x_i - x_j||^2) = \\left\\{ \\begin{array}{ll} 0 & ||x_i - x_j|| \\rightarrow \\infty \\\\ 1 & x_i = x_j \\end{array} \\right.$\n\nPrediction (noiseless GP regression): given data $\\mathcal{D} = \\{(x_1,f_1), (x_2,f_2), (x_3,f_3)\\}$, and new point $x_*$ (e.g. $x_*$=1.4), what is the value of $f_*$?\n\n\\begin{equation}\n \\pmb{f} \\sim \\mathcal{N}(\\pmb{0}, \\pmb{K}) \\qquad \\mbox{and} \\qquad f_* \\sim \\mathcal{N}(0, K(x_*,x_*)) = \\mathcal{N}(0, K_{**})\n\\end{equation}\n\nIn this case, $K_{**} = K(x_*,x_*) = \\exp(- ||x_* - x_*||^2) = 1$.\n\nNow, we can write:\n\n\\begin{equation}\n \\left[ \\begin{array}{c} \\pmb{f} \\\\ f_* \\end{array} \\right] \\sim \\mathcal{N}\\left(\\pmb{0}, \\left[\\begin{array}{cc} \\left[ \\begin{array}{ccc} K_{11} & K_{12} & K_{13} \\\\ K_{21} & K_{22} & K_{23} \\\\ K_{31} & K_{32} & K_{33} \\end{array} \\right] & \\left[ \\begin{array}{c} K_{1*} \\\\ K_{2*} \\\\ K_{3*} \\end{array} \\right] \\\\ \\left[ \\begin{array}{ccc} K_{*1} & K_{*2} & K_{*3} \\end{array} \\right] & \\left[\\begin{array}{c} K_{**} \\end{array} \\right] \\end{array} \\right] \\right) = \\mathcal{N}\\left(\\pmb{0}, \\left[\\begin{array}{cc} \\pmb{K} & \\pmb{K}_* \\\\ \\pmb{K}_*^T & \\pmb{K}_{**} \\end{array}\\right]\\right)\n\\end{equation}\n\nUsing the formula for the conditional probability $p(f_*|f)$, we have:\n\n\\begin{align}\n \\mu_* &= \\mathbb{E}[f_*] = \\pmb{K}_*^T \\pmb{K}^{-1}\\pmb{f} \\\\\n c_* &= K_{**} - \\pmb{K}_*^T \\pmb{K}^{-1}\\pmb{K}_*\n\\end{align}\n\nWe can thus predict the mean $\\mu_*$ and the variance $c_*$ for the test point $x_*$.\n\n\n```python\nx = [0.5,0.8,1.4]\nf = [1,2,6]\n\nx_new = 1.3\nf_new = 5.2\n\nplt.plot(x+[x_new],f+[f_new],'o')\nfor i in range(len(x)):\n plt.annotate('f'+str(i+1), (x[i],f[i]))\nplt.errorbar(x_new, f_new, yerr=1)\nplt.annotate('f*', (x_new+0.02, f_new))\nplt.xlim(0,2)\nplt.ylim(0,6.5)\nplt.ylabel('f(x)')\nplt.xlabel('x')\nplt.xticks(x+[x_new], ['x'+str(i+1) for i in range(len(x))]+['x*'])\nplt.show()\n```\n\n### Generalization\n\nA GP defines a distribution over functions $p(f)$ (i.e. 
it is the joint distribution over all the infinite function values).\n\nDefinition: $p(f)$ is a GP if for any finite subset $\\{x_1,...,x_n\\} \u2282 X$, the marginal distribution over that finite subset $p(f)$ has a multivariate Gaussian distribution.\n\nPrior on $f$:\n\\begin{equation}\n \\pmb{f}|\\pmb{x} \\sim \\mathcal{GP}(\\pmb{\\mu}(\\pmb{x}), \\pmb{K}(\\pmb{x}, \\pmb{x}))\n\\end{equation}\nwith\n\\begin{align*}\n \\pmb{\\mu}(\\pmb{x}) &= \\mathbb{E}_f \\lbrack \\pmb{x} \\rbrack \\\\\n k(\\pmb{x}, \\pmb{x'}) &= \\mathbb{E}_f \\lbrack (\\pmb{x} - \\pmb{\\mu}(\\pmb{x})) (\\pmb{x'} - \\pmb{\\mu}(\\pmb{x'})) \\rbrack\n\\end{align*}\n\nOften written as:\n\\begin{equation}\n \\pmb{f} \\sim \\mathcal{GP}(\\pmb{0}, \\pmb{K})\n\\end{equation}\n\nConcretely, assume $\\pmb{x} \\in \\mathbb{R}^{50}$, then $\\pmb{K}(\\pmb{x}, \\pmb{x}) \\in \\mathbb{R}^{50 \\times 50}$, then $\\pmb{f} \\sim \\mathcal{GP}(\\pmb{0}, \\pmb{K})$ means:\n\n\\begin{equation}\n \\left[ \\begin{array}{c} f_1 \\\\ \\vdots \\\\ f_{50} \\end{array}\\right] := \\left[ \\begin{array}{c} f(x_1) \\\\ \\vdots \\\\ f(x_{50}) \\end{array}\\right] \\sim \\mathcal{N}\\left( \\left[ \\begin{array}{c} 0 \\\\ \\vdots \\\\ 0 \\end{array}\\right], \\left[ \\begin{array}{ccc} k(x_1,x_1) & \\cdots & k(x_1, x_{50}) \\\\ \\vdots & \\ddots & \\vdots \\\\ k(x_{50},x_1) & \\cdots & k(x_{50}, x_{50}) \\end{array} \\right] \\right)\n\\end{equation}\n\n#### RBF kernel\n\nLet's choose a RBF (a.k.a Squared Exponential, Gaussian) kernel:\n\n\\begin{equation}\n\\pmb{K} = \\left[ \\begin{array}{ccc} k(x_1,x_1) & \\cdots & k(x_1, x_d) \\\\ \\vdots & \\ddots & \\vdots \\\\ k(x_d,x_1) & \\cdots & k(x_d, x_d) \\end{array} \\right]\n\\end{equation}\nwith\n\\begin{equation}\n k(x_i, x_j) = \\alpha^2 \\exp \\left( - \\frac{(x_i - x_j)^2}{2l} \\right) \\qquad \\mbox{ and hyperparameters } \\pmb{\\Phi} = \\left\\{ \\begin{array}{l} \\alpha \\mbox{: amplitude} \\\\ l \\mbox{: the lengthscale} \\end{array} \\right.\n\\end{equation}\n\nThis function $k$ is infinitely differentiable.\n\n\n```python\n# Reference: https://www.youtube.com/watch?v=4vGiHC35j9s&t=51s\n\n# Hyperparameters\nalpha = 1\nl = 2\n\n# Parameters\nn = 50 # nb of points\nn_func = 10 # nb of fct to draw\nx_bound = 5 # bound on the x axis\n\ndef RBF_kernel(a,b):\n sqdist = np.sum(a**2,1).reshape(-1,1) + np.sum(b**2,1) - 2*np.dot(a,b.T)\n return alpha**2 * np.exp(-1/l * sqdist)\n\nn = 50\nX = np.linspace(-x_bound, x_bound, n).reshape(-1,1)\nK = RBF_kernel(X, X) # dim(K) = n x n\n\nL = np.linalg.cholesky(K + 1e-6 * np.eye(n))\nf_prior = np.dot(L, np.random.normal(size=(n, n_func)))\n\n# Plotting\nwidth = 16\nheight = 4\nplt.figure(figsize=(width, height))\n\n# plot f_prior\nplt.subplot(1,3,1)\nplt.title('GP: prior on f')\nplt.plot(X, f_prior)\nplt.plot(X, f_prior.mean(1), linewidth=3, color='black')\nplt.ylabel('f(x)')\nplt.xlabel('x')\n\n# plot Kernel\nplt.subplot(1,3,2)\nplt.title('Kernel matrix')\nplt.pcolor(K[::-1])\nplt.colorbar()\n\nplt.subplot(1,3,3)\nplt.title('Kernel function')\nplt.plot(X, RBF_kernel(X, np.array([[1.0]])))\nplt.show()\n```\n\n### Kernel (prior knowledge)\n\nBy choosing a specific kernel, we can incorporate prior knowledge that we have about the function $f$, such as, if the function is:\n* periodic\n* smooth\n* symmetric\n* etc.\n\nThe hyperparameters for each kernel are also very intuitive/interpretable.\n\nNote: kernels can be combined!\n\nIndeed, if $k(x,y)$, $k_1(x,y)$ and $k_2(x,y)$ are valid kernels then:\n* $\\alpha k(x,y) $ with $\\alpha \\geq 0$\n* $k_1(x,y) + 
k_2(x,y)$\n* $k_1(x,y) k_2(x,y)$\n* $p(k(x,y))$ with $p$ being a polynomial function with non-negative coefficients\n* $exp(k(x,y))$\n* $f(x) k(x,y) \\overline{f(y)}$ with $\\overline{f} = $ complex conjugate\n* $k(\\phi(x),\\phi(y))$\n\nare all valid kernels!\n\n##### Periodic Exponential kernel\n\n\n```python\nvariance = 1.\nlengthscale = 1.\nperiod = 2.*np.pi\n\n#K = periodic_kernel(X, X) # dim(K) = n x n\nkern = GPy.kern.PeriodicExponential(variance=variance, lengthscale=lengthscale, period=period)\nK1 = kern.K(X)\n\nL = np.linalg.cholesky(K1 + 1e-6 * np.eye(n))\nf_prior = np.dot(L, np.random.normal(size=(n, 1)))\n\n# Plotting\nwidth = 16\nheight = 4\nplt.figure(figsize=(width, height))\n\n# plot f_prior\nplt.subplot(1,3,1)\nplt.title('GP: prior on f')\nplt.plot(X, f_prior)\nplt.plot(X, f_prior.mean(1), linewidth=3, color='black')\nplt.ylabel('f(x)')\nplt.xlabel('x')\n\n# plot Kernel\nplt.subplot(1,3,2)\nplt.title('Kernel matrix')\nplt.pcolor(K1[::-1])\nplt.colorbar()\n\nplt.subplot(1,3,3)\nplt.title('Kernel function')\nplt.plot(X, kern.K(X, np.array([[1.0]])))\nplt.show()\n```\n\n##### addition and multiplication of 2 kernels (SE and PE)\n\n\n```python\nK_add = K + K1\n\nL = np.linalg.cholesky(K_add + 1e-6 * np.eye(n))\nf_prior = np.dot(L, np.random.normal(size=(n, n_func)))\n\n# Plotting\nwidth = 16\nheight = 8\nplt.figure(figsize=(width, height))\n\n# plot f_prior\nplt.subplot(2,2,1)\nplt.title('GP: prior on f with K_add')\nplt.plot(X, f_prior)\nplt.plot(X, f_prior.mean(1), linewidth=3, color='black')\nplt.ylabel('f(x)')\nplt.xlabel('x')\n\n# plot Kernel\nplt.subplot(2,2,2)\nplt.title('Kernel matrix: K_add')\nplt.pcolor(K_add[::-1])\nplt.colorbar()\n\nK_prod = K * K1\n\nL = np.linalg.cholesky(K_prod + 1e-6 * np.eye(n))\nf_prior = np.dot(L, np.random.normal(size=(n, n_func)))\n\n# plot f_prior\nplt.subplot(2,2,3)\nplt.title('GP: prior on f with K_prod')\nplt.plot(X, f_prior)\nplt.plot(X, f_prior.mean(1), linewidth=3, color='black')\nplt.ylabel('f(x)')\nplt.xlabel('x')\n\n# plot Kernel\nplt.subplot(2,2,4)\nplt.title('Kernel matrix: K_prod')\nplt.pcolor(K_prod[::-1])\nplt.colorbar()\nplt.show()\n```\n\n### GP Posterior\n\nGiven $\\mathcal{D}=\\{(x_i, y_i)\\}_{i=1}^{i=N} = (\\pmb{X}, \\pmb{y})$, we have:\n\n\\begin{equation}\n p(f|\\mathcal{D}) = \\frac{p(\\mathcal{D}|f)p(f)}{p(\\mathcal{D})}\n\\end{equation}\n\n### GP Regression\n\n\\begin{equation}\n y_i = f(\\pmb{x}_i) + \\epsilon_i \\qquad \n\t\\left\\{ \\begin{array}{l}\n\t\tf \\sim \\mathcal{GP}(\\pmb{0}, \\pmb{K}) \\\\\n\t\t\\epsilon_i \\sim \\mathcal{N}(0, \\sigma^2)\n\t\\end{array} \\right.\n\\end{equation}\n\n* Prior $f$ is a GP $\\Leftrightarrow p(\\pmb{f}|\\pmb{X}) = \\mathcal{N}(\\pmb{0}, \\pmb{K})$\n* Likelihood is Gaussian $\\Leftrightarrow p(\\pmb{y}|\\pmb{X},\\pmb{f}) = \\mathcal{N}(\\pmb{f}, \\sigma^2\\pmb{I})$\n* $\\rightarrow p(f|\\mathcal{D})$ is also a GP.\n\n#### Predictive distribution: \n$$p(\\pmb{y}_*|\\pmb{x}_*, \\pmb{X}, \\pmb{y}) = \\int p(\\pmb{y}_{*}| \\pmb{x}_{*}, \\pmb{f}, \\pmb{X}, \\pmb{y}) p(f|\\pmb{X}, \\pmb{y}) d\\pmb{f} = \\mathcal{N}(\\pmb{\\mu}_*, \\pmb{\\Sigma}_*)$$\n\\begin{align}\n \\pmb{\\mu}_* &= \\pmb{K}_{*N} (\\pmb{K}_N + \\sigma^2 \\pmb{I})^{-1} \\pmb{y} \\\\\n \\pmb{\\Sigma}_* &= \\pmb{K}_{**} - \\pmb{K}_{*N} (\\pmb{K}_N + \\sigma^2 \\pmb{I})^{-1} \\pmb{K}_{N*}\n\\end{align}\n\n### Learning a GP\n#### Marginal likelihood:\n\\begin{equation}\n p(\\pmb{y}|\\pmb{X}) = \\int p(\\pmb{y}|\\pmb{f},\\pmb{X}) p(\\pmb{f}|\\pmb{X}) d\\pmb{f} = \\mathcal{N}(\\pmb{0}, \\pmb{K} + 
\\sigma^2\\pmb{I})\n\\end{equation}\n\nBy taking the logarithm, and setting $\\pmb{K}_y = (\\pmb{K} + \\sigma^2\\pmb{I})$, we have:\n\n\\begin{equation}\n \\mathcal{L} = \\log p(\\pmb{y}|\\pmb{X}; \\pmb{\\Phi}) = \\underbrace{-\\frac{1}{2} \\pmb{y}^T \\pmb{K}_y^{-1} \\pmb{y}}_{\\mbox{data fit}} \\underbrace{-\\frac{1}{2} \\log |\\pmb{K}_y^{-1}|}_{\\mbox{complexity penalty}} - \\frac{n}{2} \\log 2\\pi\n\\end{equation}\n\nThe marginal likelihood (i.e. ML-II) is used to optimize the hyperparameters $\\pmb{\\Phi}$ that defines the covariance function and thus the GP.\n\n\\begin{equation}\n \\pmb{\\Phi}^* = argmax_{\\pmb{\\Phi}} \\log p(\\pmb{y}|\\pmb{X}; \\pmb{\\Phi})\n\\end{equation}\n\nOptimizing the marginal likelihood is more robust than the likelihood as it tries to optimize the complexity of the model, and the fitting of this last one to the observed data.\n\n### GPy\n\n\n```python\n# GP Regression\n# Based on the tutorial: https://github.com/SheffieldML/notebook/blob/master/GPy/GPyCrashCourse.ipynb\n\n# Create dataset\nX = np.random.uniform(-3.0, 3.0, (20,1))\nY = np.sin(X) + np.random.randn(20,1) * 0.05 \n\n# Create the kernel\n# Reminder 1: The sum of valid kernels gives a valid kernel.\n# Reminder 2: The product of valid kernels gives a valid kernel.\n# Available kernels: RBF, Exponential, Matern32, Matern52, Brownian, Bias, Linear, PeriodicExponential, White.\nkernel = GPy.kern.RBF(input_dim=1, variance=1.0, lengthscale=1.0)\n\n# Create the model\ngp_model = GPy.models.GPRegression(X, Y, kernel)\n\n# Display and plot\nprint(\"Before optimization: \", gp_model)\ngp_model.plot()\nplt.show()\n\n# Optimize the model (that is find the 'best' hyperparameters of the kernel matrix)\n# By default, the optimizer is a 2nd order algo: lbfgsb. Others are available such as the scg, ...\ngp_model.optimize(messages=False)\n\n# Display and plot\nprint(\"After optimization: \", gp_model)\ngp_model.plot()\nplt.show()\n```\n\n### Gaussian Process Latent Variable Model (GP-LVM)\n\n\n```python\n# GPLVM\n# Based on the tutorials: \n# http://nbviewer.jupyter.org/github/SheffieldML/notebook/blob/master/GPy/MagnificationFactor.ipynb\n# https://github.com/SheffieldML/notebook/blob/master/lab_classes/gprs/lab4-Copy0.ipynb\n\n# Create dataset\nN = 100\nk1 = GPy.kern.RBF(5, variance=1, lengthscale=1./np.random.dirichlet(np.r_[10,10,10,0.1,0.1]), ARD=True)\nk2 = GPy.kern.RBF(5, variance=1, lengthscale=1./np.random.dirichlet(np.r_[0.1,10,10,10,0.1]), ARD=True)\nX = np.random.normal(0, 1, (N,5))\nA = np.random.multivariate_normal(np.zeros(N), k1.K(X), 10).T\nB = np.random.multivariate_normal(np.zeros(N), k2.K(X), 10).T\n\nY = np.vstack((A,B))\n\n# latent space dimension\nlatent_dim = 2\n\n# Create the kernel\nkernel = GPy.kern.RBF(input_dim=latent_dim, variance=1.0, lengthscale=1.0)\n\n# Create the GPLVM model\ngplvm_model = GPy.models.GPLVM(Y, latent_dim, init='PCA', kernel=kernel)\n\n# Display and plot\nprint(\"Before optimization: \", gplvm_model)\ngplvm_model.plot_latent()\nplt.show()\n\n# Optimize the model (that is find the 'best' hyperparameters of the kernel matrix)\n# By default, the optimizer is a 2nd order algo: lbfgsb. 
Others are available such as the scg, ...\ngplvm_model.optimize(messages=False)\n\n# Display and plot\nprint(\"After optimization: \", gplvm_model)\ngplvm_model.plot_latent()\nplt.show()\n```\n", "meta": {"hexsha": "10263ca151b0479cf16e5353437d9130d0aec59b", "size": 27993, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "tutorials/machine_learning/GP.ipynb", "max_stars_repo_name": "Pandinosaurus/pyrobolearn", "max_stars_repo_head_hexsha": "9cd7c060723fda7d2779fa255ac998c2c82b8436", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-01-21T21:08:30.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-29T16:45:49.000Z", "max_issues_repo_path": "tutorials/machine_learning/GP.ipynb", "max_issues_repo_name": "Pandinosaurus/pyrobolearn", "max_issues_repo_head_hexsha": "9cd7c060723fda7d2779fa255ac998c2c82b8436", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tutorials/machine_learning/GP.ipynb", "max_forks_repo_name": "Pandinosaurus/pyrobolearn", "max_forks_repo_head_hexsha": "9cd7c060723fda7d2779fa255ac998c2c82b8436", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-09-29T21:25:39.000Z", "max_forks_repo_forks_event_max_datetime": "2020-09-29T21:25:39.000Z", "avg_line_length": 34.6019777503, "max_line_length": 668, "alphanum_fraction": 0.5129139428, "converted": true, "num_tokens": 7105, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9390248191350352, "lm_q2_score": 0.9032941995446778, "lm_q1q2_score": 0.8482156723531675}} {"text": "# Solving differential equations in Julia\n\n## Define your model and find parameters\n\nThe concentration of a decaying nuclear isotope could be described as an exponential decay:\n\n$$\n\\frac{d}{dt}C(t) = - \\lambda C(t)\n$$\n\n**State variable**\n- $C(t)$: The concentration of a decaying nuclear isotope.\n\n**Parameter**\n- $\\lambda$: The rate constant of decay. The half-life $t_{\\frac{1}{2}} = \\frac{ln2}{\\lambda}$\n\nTake a more complex model, the spreading of an contagious disease can be described by the [SIR model](https://www.maa.org/press/periodicals/loci/joma/the-sir-model-for-spread-of-disease-the-differential-equation-model):\n\n$$\n\\begin{align}\n\\frac{d}{dt}S(t) &= - \\beta S(t)I(t) \\\\\n\\frac{d}{dt}I(t) &= \\beta S(t)I(t) - \\gamma I(t) \\\\\n\\frac{d}{dt}R(t) &= \\gamma I(t)\n\\end{align}\n$$\n\n**State variables**\n\n- $S(t)$ : the fraction of susceptible people\n- $I(t)$ : the fraction of infectious people\n- $R(t)$ : the fraction of recovered (or removed) people\n\n**Parameters**\n\n- $\\beta$ : the rate of infection when susceptible and infectious people meet\n- $\\gamma$ : the rate of recovery of infectious people\n\n## Make a solver by yourself\n\n### Forward Euler method\n\nThe most straightforward approach to numerically solve differential equations is the forward Euler's (FE) method[^Euler].\n\nIn each step, the next state variables ($\\vec{u}_{n+1}$) is accumulated by the product of the size of time step (dt) and the derivative at the current state ($\\vec{u}_{n}$):\n\n$$ \n\\vec{u}_{n+1} = \\vec{u}_{n} + dt \\cdot f(\\vec{u}_{n}, t_{n})\n$$\n\n\n```julia\n# The ODE model. 
Exponential decay in this example\n# The input/output format is compatible to Julia DiffEq ecosystem\nexpdecay(u, p, t) = p * u\n\n# Forward Euler stepper \nstep_euler(model, u, p, t, dt) = u .+ dt .* model(u, p, t)\n\n# In house ODE solver\nfunction mysolve(model, u0, tspan, p; dt=0.1, stepper=step_euler)\n # Time points\n ts = tspan[1]:dt:tspan[end]\n # State variable at those time points\n us = zeros(length(ts), length(u0))\n # Initial conditions\n us[1, :] .= u0\n # Iterations\n for i in 1:length(ts)-1\n us[i+1, :] .= stepper(model, us[i, :], p, ts[i], dt)\n end\n # Results\n return (t = ts, u = us)\nend\n\ntspan = (0.0, 2.0)\np = -1.0\nu0 = 1.0\n\nsol = mysolve(expdecay, u0, tspan, p, dt=0.1, stepper=step_euler)\n\n# Visualization\nusing Plots\nPlots.gr(lw=2)\n\n# Numericalsolution\nplot(sol.t, sol.u, label=\"FE method\")\n\n# True solution\nplot!(x -> exp(-x), 0.0, 2.0, label=\"Analytical solution\")\n```\n\n\n```julia\n# SIR model\nfunction sir(u, p ,t)\n\ts, i, r = u\n\t\u03b2, \u03b3 = p\n\tv1 = \u03b2 * s * i\n\tv2 = \u03b3 * i\n\treturn [-v1, v1-v2, v2]\nend\n\n\np = (\u03b2 = 1.0, \u03b3 = 0.3)\nu0 = [0.99, 0.01, 0.00] # s, i, r\ntspan = (0.0, 20.0)\n\nsol = mysolve(sir, u0, tspan, p, dt=0.5, stepper=step_euler)\n\nplot(sol.t, sol.u, label=[\"S\" \"I\" \"R\"], legend=:right)\n```\n\n### The fourth order Runge-Kutta (RK4) method\n\nOne of the most popular ODE-solving methods is the fourth order Runge-Kutta ([RK4](https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods)) method.\n\nIn each step, the next state is calculated in 5 steps, 4 of which are intermediate steps.\n\n$$\n\\begin{align}\nk_1 &= dt \\cdot f(\\vec{u}_{n}, t_n) \\\\\nk_2 &= dt \\cdot f(\\vec{u}_{n} + 0.5k_1, t_n + 0.5dt) \\\\\nk_3 &= dt \\cdot f(\\vec{u}_{n} + 0.5k_2, t_n + 0.5dt) \\\\\nk_4 &= dt \\cdot f(\\vec{u}_{n} + k_3, t_n + dt) \\\\\nu_{n+1} &= \\vec{u}_{n} + \\frac{1}{6}(k_1 + 2k_2 + 2k_3 + k_4)\n\\end{align}\n$$\n\nIn homework 1, you are going to replace the Euler stepper with the RK4 one:\n\n```julia\nstep_rk4(f, u, p, t, dt) = \"\"\"TODO\"\"\"\n```\n\n## Using DifferentialEquations.jl\n\nDocumentation: \n\n\n```julia\nusing Plots, DifferentialEquations\nPlots.gr(linewidth=2)\n```\n\n### Exponential decay model\n\n\n```julia\n# Parameter of exponential decay\np = -1.0\nu0 = 1.0\ntspan = (0.0, 2.0)\n\n# Define a problem\nprob = ODEProblem(expdecay, u0, tspan, p)\n\n# Solve the problem\nsol = solve(prob)\n\n# Visualize the solution\nplot(sol, legend=:right)\n```\n\n### SIR model\n\n\n```julia\n# Parameters of the SIR model\np = (\u03b2 = 1.0, \u03b3 = 0.3)\nu0 = [0.99, 0.01, 0.00] # s, i, r\ntspan = (0.0, 20.0)\n\n# Define a problem\nprob = ODEProblem(sir, u0, tspan, p)\n\n# Solve the problem\nsol = solve(prob)\n\n# Visualize the solution\nplot(sol, label=[\"S\" \"I\" \"R\"], legend=:right)\n```\n\n\n```julia\nplot(sol, vars=(0, 2), legend=:right)\n```\n\n\n```julia\nplot(sol, vars=(1, 2), legend=:right)\n```\n\n## Using ModelingToolkit.jl\n\n[ModelingToolkit.jl](https://mtk.sciml.ai/dev/) is a high-level package for symbolic-numeric modeling and simulation ni the Julia DiffEq ecosystem.\n\n\n```julia\nusing DifferentialEquations\nusing ModelingToolkit\nusing Plots\nPlots.gr(linewidth=2)\n```\n\n### Exponential decay model\n\n\n```julia\n@parameters \u03bb # Decaying rate constant\n@variables t C(t) # Time and concentration\n\nD = Differential(t) # Differential operator\n\n# Make an ODE system\n@named expdecaySys = ODESystem([D(C) ~ -\u03bb*C ])\n```\n\n\n```julia\nu0 = [C => 1.0]\np = [\u03bb => 
1.0]\ntspan = (0.0, 2.0)\n\nprob = ODEProblem(expdecaySys, u0, tspan, p)\nsol = solve(prob)\n\nplot(sol)\n```\n\n### SIR model\n\n\n```julia\n@parameters \u03b2 \u03b3\n@variables t s(t) i(t) r(t)\n\nD = Differential(t) # Differential operator\n\n# Make an ODE system\n@named sirSys = ODESystem(\n [D(s) ~ -\u03b2 * s * i,\n D(i) ~ \u03b2 * s * i - \u03b3 * i,\n D(r) ~ \u03b3 * i])\n```\n\n\n```julia\n# Parameters of the SIR model\np = [\u03b2 => 1.0, \u03b3 => 0.3]\nu0 = [s => 0.99, i => 0.01, r => 0.00]\ntspan = (0.0, 20.0)\n\nprob = ODEProblem(sirSys, u0, tspan, p)\nsol = solve(prob)\n\nplot(sol)\n```\n\n## Using Catalyst.jl\n\n[Catalyst.jl](https://github.com/SciML/Catalyst.jl) is a domain-specific language (DSL) package to solve \"law of mass action\" problems.\n\n\n```julia\nusing Catalyst\nusing DifferentialEquations\nusing Plots\nPlots.gr(linewidth=2)\n```\n\n### Exponential decay model\n\n\n```julia\ndecayModel = @reaction_network begin\n \u03bb, C --> 0\nend \u03bb\n```\n\n\n```julia\np = [1.0]\nu0 = [1.0]\ntspan = (0.0, 2.0)\n\nprob = ODEProblem(decayModel, u0, tspan, p)\nsol = solve(prob)\n\nplot(sol)\n```\n\n### SIR model\n\n\n```julia\nsirModel = @reaction_network begin\n \u03b2, S + I --> 2I\n \u03b3, I --> R\nend \u03b2 \u03b3\n```\n\n\n```julia\n# Parameters of the SIR model\np = (1.0, 0.3)\nu0 = [0.99, 0.01, 0.00]\ntspan = (0.0, 20.0)\n\nprob = ODEProblem(sirModel, u0, tspan, p)\nsol = solve(prob)\n\nplot(sol, legend=:right)\n```\n", "meta": {"hexsha": "b6bb388c68a2d91cd6e540f2b7a886361aa5ae7a", "size": 11337, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "docs/intro/03-diffeq.ipynb", "max_stars_repo_name": "NTUMitoLab/mmsb-bebi-5009", "max_stars_repo_head_hexsha": "5ab98e5a11bc3c1e5c4df1aab9ab94f05acc4062", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "docs/intro/03-diffeq.ipynb", "max_issues_repo_name": "NTUMitoLab/mmsb-bebi-5009", "max_issues_repo_head_hexsha": "5ab98e5a11bc3c1e5c4df1aab9ab94f05acc4062", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 25, "max_issues_repo_issues_event_min_datetime": "2021-10-04T14:28:01.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-15T08:29:54.000Z", "max_forks_repo_path": "docs/intro/03-diffeq.ipynb", "max_forks_repo_name": "ntumitolab/mmsb-bebi-5009", "max_forks_repo_head_hexsha": "813610f812b23970f26d473e55e33fc0088e7d9f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 24.4859611231, "max_line_length": 228, "alphanum_fraction": 0.4905177737, "converted": true, "num_tokens": 2223, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9433475715065793, "lm_q2_score": 0.8991213806488609, "lm_q1q2_score": 0.8481839709247456}} {"text": "# For today's code challenge you will be reviewing yesterdays lecture material. Have fun!\n\n### if you get done early check out [these videos](https://www.3blue1brown.com/neural-networks).\n\n# The Perceptron\n\nThe first and simplest kind of neural network that we could talk about is the perceptron. A perceptron is just a single node or neuron of a neural network with nothing else. It can take any number of inputs and spit out an output. 
What a neuron does is it takes each of the input values, multplies each of them by a weight, sums all of these products up, and then passes the sum through what is called an \"activation function\" the result of which is the final value.\n\nI really like figure 2.1 found in this [pdf](http://www.uta.fi/sis/tie/neuro/index/Neurocomputing2.pdf) even though it doesn't have bias term represented there.\n\n\n\nIf we were to write what is happening in some verbose mathematical notation, it might look something like this:\n\n\\begin{align}\n y = sigmoid(\\sum(weight_{1}input_{1} + weight_{2}input_{2} + weight_{3}input_{3}) + bias)\n\\end{align}\n\nUnderstanding what happens with a single neuron is important because this is the same pattern that will take place for all of our networks. \n\nWhen imagining a neural network I like to think about the arrows as representing the weights, like a wire that has a certain amount of resistance and only lets a certain amount of current through. And I like to think about the node itselef as containing the prescribed activation function that neuron will use to decide how much signal to pass onto the next layer.\n\n# Activation Functions (transfer functions)\n\nIn Neural Networks, each node has an activation function. Each node in a given layer typically has the same activation function. These activation functions are the biggest piece of neural networks that have been inspired by actual biology. The activation function decides whether a cell \"fires\" or not. Sometimes it is said that the cell is \"activated\" or not. In Artificial Neural Networks activation functions decide how much signal to pass onto the next layer. This is why they are sometimes referred to as transfer functions because they determine how much signal is transferred to the next layer.\n\n## Common Activation Functions:\n\n\n\n# Implementing a Perceptron from scratch in Python\n\n### Establish training data\n\n\n```python\nimport numpy as np\n\nnp.random.seed(812)\n\ninputs = np.array([\n [0, 0, 1],\n [1, 1, 1],\n [1, 0, 1],\n [0, 1, 1]\n])\n\ncorrect_outputs = [[0], [1], [1], [0]]\n```\n\n### Sigmoid activation function and its derivative for updating weights\n\n\n```python\ndef sigmoid(x):\n return 1 / (1 + np.exp(-x))\n\ndef sigmoid_derivative(x):\n sx = sigmoid(x)\n return sx * (1 - sx)\n```\n\n## Updating weights with derivative of sigmoid function:\n\n\n\n### Initialize random weights for our three inputs\n\n\n```python\n\n```\n\n### Calculate weighted sum of inputs and weights\n\n\n```python\n\n```\n\n### Output the activated value for the end of 1 training epoch\n\n\n```python\n\n```\n\n### take difference of output and true values to calculate error\n\n\n```python\n\n```\n\n### Put it all together\n\n\n```python\n\n```\n", "meta": {"hexsha": "5d1e7541e33d8159bbb0ec91acb14bab19623193", "size": 6775, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Tuesday's_Challenge.ipynb", "max_stars_repo_name": "eyvonne/Neural_network_foundations_code_challenges", "max_stars_repo_head_hexsha": "fa9ef104cb4e5dc43e3b6649659778ce10ff17b3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Tuesday's_Challenge.ipynb", "max_issues_repo_name": "eyvonne/Neural_network_foundations_code_challenges", "max_issues_repo_head_hexsha": "fa9ef104cb4e5dc43e3b6649659778ce10ff17b3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, 
"max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Tuesday's_Challenge.ipynb", "max_forks_repo_name": "eyvonne/Neural_network_foundations_code_challenges", "max_forks_repo_head_hexsha": "fa9ef104cb4e5dc43e3b6649659778ce10ff17b3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 10, "max_forks_repo_forks_event_min_datetime": "2019-11-05T16:41:55.000Z", "max_forks_repo_forks_event_max_datetime": "2019-11-05T17:09:13.000Z", "avg_line_length": 27.1, "max_line_length": 614, "alphanum_fraction": 0.5870110701, "converted": true, "num_tokens": 713, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9559813551535004, "lm_q2_score": 0.8872045922259088, "lm_q1q2_score": 0.8481510483745329}} {"text": "# Solution {-}\nThe random variable $X$ is completely described by its mean and covariance matrix:\n\\begin{equation*}\n \\mu=\n \\begin{bmatrix}\n 1 \\\\\n 2 \\\\\n \\end{bmatrix} \\quad\n C_X=\n \\begin{bmatrix}\n 4 &1\\\\\n 1 &1\\\\\n \\end{bmatrix}\n\\end{equation*}\n\nNow consider another random variable $Y$ that is functionally related to $X$ by:\n\\begin{equation*}\n y=Ax+b\n\\end{equation*}\n\nwhere\n\\begin{equation*}\n A=\n \\begin{bmatrix}\n 2 &1\\\\\n 1 &-1\\\\\n \\end{bmatrix} \\quad\n b=\n \\begin{bmatrix}\n 1\\\\\n 1\\\\\n \\end{bmatrix}\n\\end{equation*}\n\nFind the mean and the covariance matrix for $Y$:\n\n\n```python\nfrom sympy import Matrix\n\nmx = Matrix([[1],\n [2]])\nCx = Matrix([[4, 1],\n [1, 1]])\n\nA = Matrix([[2, 1],\n [1, -1]])\nb = Matrix([[1],\n [1]])\n \nmy = A@mx + b\nCy = A@Cx@A.T\n\ndisplay(my)\ndisplay(Cy)\n```\n\n\n$\\displaystyle \\left[\\begin{matrix}5\\\\0\\end{matrix}\\right]$\n\n\n\n$\\displaystyle \\left[\\begin{matrix}21 & 6\\\\6 & 3\\end{matrix}\\right]$\n\n", "meta": {"hexsha": "caea3667b52b52ecc4a440d5255db71a3b3b9c95", "size": 2557, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Problem 1.28.ipynb", "max_stars_repo_name": "mfkiwl/GMPE340", "max_stars_repo_head_hexsha": "3602b8ba859a2c7db2cab96862472597dc1ac793", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-09-07T09:36:36.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-07T09:36:36.000Z", "max_issues_repo_path": "Problem 1.28.ipynb", "max_issues_repo_name": "mfkiwl/GMPE340", "max_issues_repo_head_hexsha": "3602b8ba859a2c7db2cab96862472597dc1ac793", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Problem 1.28.ipynb", "max_forks_repo_name": "mfkiwl/GMPE340", "max_forks_repo_head_hexsha": "3602b8ba859a2c7db2cab96862472597dc1ac793", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-09-20T18:48:20.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-20T18:48:20.000Z", "avg_line_length": 21.132231405, "max_line_length": 91, "alphanum_fraction": 0.4118107157, "converted": true, "num_tokens": 361, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9683812345563904, "lm_q2_score": 0.8757869981319863, "lm_q1q2_score": 0.8480956944594881}} {"text": "# Computing the Z-Normalized Euclidean Distance from Dot Products\n\nIn the [Matrix Profile I](https://www.cs.ucr.edu/~eamonn/STOMP_GPU_final_submission_camera_ready.pdf) and [Matrix Profile II](https://www.cs.ucr.edu/~eamonn/PID4481997_extend_Matrix%20Profile_I.pdf) papers, the Z-normalized Euclidean distance between a query subsequence, $Q_{i,m}=(q_i, q_{i+1}, q_{i+2}\\ldots, q_{i+m-1})$, and the $i^{th}$ subsequence, $T_{i,m}=(t_i, t_{i+1}, t_{i+2}, \\ldots, t_{i+m-1})$, with window size, $m$, in the time series, $T$, can be computed following:\n\n\\begin{align}\n D(Q_{i,m}, T_{i,m}) ={}&\n \\sqrt{\n \\sum \\limits _{0 \\leq {j} \\lt m}\n \\left(\n \\frac{t_{i+j}-M_{T_{i,m}}}{\\Sigma_{T_{i,m}}}\n - \n \\frac{q_{i+j}-\\mu_{Q_{i,m}}}{\\sigma_{Q_{i,m}}}\n \\right)^2\n }\n \\\\\n ={}&\n \\sqrt{\n \\sum \\limits _{0 \\leq {j} \\lt m}\n \\left[\n \\left(\n \\frac{t_{i+j}-M_{T_{i,m}}}{\\Sigma_{T_{i,m}}}\n \\right)\n -\n 2\n \\left(\n \\frac{t_{i+j}-M_{T_{i,m}}}{\\Sigma_{T_{i,m}}}\n \\right)\n \\left(\n \\frac{q_{i+j}-\\mu_{Q_{i,m}}}{\\sigma_{Q_{i,m}}}\n \\right)\n +\n \\left(\n \\frac{q_{i+j}-\\mu_{Q_{i,m}}}{\\sigma_{Q_{i,m}}}\n \\right)^2\n \\right]\n }\n \\\\\n ={}&\n \\sqrt{\n \\sum \\limits _{0 \\leq {j} \\lt m}\n \\left(\n \\frac{t_{i+j}-M_{T_{i,m}}}{\\Sigma_{T_{i,m}}}\n \\right)^2\n -\n \\sum \\limits _{0 \\leq {j} \\lt m}\n 2\n \\left(\n \\frac{t_{i+j}-M_{T_{i,m}}}{\\Sigma_{T_{i,m}}}\n \\right)\n \\left(\n \\frac{q_j-\\mu_{Q_{i,m}}}{\\sigma_{Q_{i,m}}}\n \\right)\n +\n \\sum \\limits _{0 \\leq {j} \\lt m}\n \\left(\n \\frac{q_{i+j}-\\mu_{Q_{i,m}}}{\\sigma_{Q_{i,m}}}\n \\right)^2\n }\n \\\\\n ={}&\n \\sqrt{\n m\n -\n \\sum \\limits _{0 \\leq {j} \\lt m}\n 2\n \\left(\n \\frac{t_{i+j}-M_{T_{i,m}}}{\\Sigma_{T_{i,m}}}\n \\right)\n \\left(\n \\frac{q_{i+j}-\\mu_{Q_{i,m}}}{\\sigma_{Q_{i,m}}}\n \\right)\n +\n m\n }\n \\\\\n ={}&\n \\sqrt{\n 2m\n -\n 2\n \\sum \\limits _{0 \\leq {j} \\lt m}\n \\left(\n \\frac{t_{i+j}-M_{T_{i,m}}}{\\Sigma_{T_{i,m}}}\n \\right)\n \\left(\n \\frac{q_{i+j}-\\mu_{Q_m}}{\\sigma_{Q_{i,m}}}\n \\right)\n }\n \\\\\n ={}&\n \\sqrt{\n 2m\n \\left[\n 1\n -\n \\frac{1}{m}\n \\sum \\limits _{0 \\leq {j} \\lt m}\n \\left(\n \\frac{t_{i+j}-M_{T_{i,m}}}{\\Sigma_{T_{i,m}}}\n \\right)\n \\left(\n \\frac{q_{i+j}-\\mu_{Q_{i,m}}}{\\sigma_{Q_{i,m}}}\n \\right)\n \\right]\n }\n \\\\\n ={}&\n \\sqrt{\n 2m\n \\left[\n 1\n -\n \\sum \\limits _{0 \\leq {j} \\lt m}\n \\frac{\n \\left(\n t_{i+j}-M_{T_{i,m}}\n \\right)\n \\left(\n q_{i+j}-\\mu_{Q_{i,m}}\n \\right)\n }{\n m \\sigma_{Q_{i,m}} \\Sigma_{T_{i,m}}\n }\n \\right]\n }\n \\\\\n ={}&\n \\sqrt{\n 2m\n \\left[\n 1\n -\n \\sum \\limits _{0 \\leq {j} \\lt m}\n \\frac{\n t_{i+j}q_j\n -t_{i+j}\\mu_{Q_{i,m}}\n -M_{T_{i,m}}q_{i+j}\n +M_{T_{i,m}}\\mu_{Q_{i+m}}\n }{\n m \\sigma_{Q_{i+m}} \\Sigma_{T_{i,m}}\n }\n \\right]\n }\n \\\\\n ={}&\n \\sqrt{\n 2m\n \\left[\n 1\n -\n \\frac{\n \\sum \\limits _{0 \\leq {j} \\lt m}\n q_{i+j}t_{i+j}\n -\n \\sum \\limits _{0 \\leq {j} \\lt m}\n t_{i+j}\\mu_{Q_{i,m}}\n -\n \\sum \\limits _{0 \\leq {j} \\lt m}\n \\left(\n M_{T_{i,m}}q_{i+j}\n -M_{T_{i,m}}{\\mu_{Q_{i,m}}}\n \\right)\n }{\n m \\sigma_{Q_{i,m}} \\Sigma_{T_{i,m}}\n }\n \\right]\n }\n \\\\\n ={}&\n \\sqrt{\n 2m\n \\left[\n 1\n -\n \\frac{\n Q_{i,m}\\cdot{T_{i,m}}\n -\n \\sum \\limits _{0 \\leq {j} \\lt m}\n t_{i+j}\\mu_{Q_{i,m}}\n -\n M_{T_{i,m}}\n \\sum \\limits _{0 \\leq {j} \\lt m}\n \\left(\n q_{i+j}-{\\mu_{Q_{i,m}}}\n 
\\right)\n }{\n m \\sigma_{Q_{i,m}} \\Sigma_{T_{i,m}}\n }\n \\right]\n }\n \\\\\n ={}&\n \\sqrt{\n 2m\n \\left[\n 1\n -\n \\frac{\n Q_{i,m}\\cdot{T_{i,m}}\n -\n \\mu_{Q_{i,m}}\n \\sum \\limits _{0 \\leq {j} \\lt m}\n t_{i+j}\n -\n 0\n }{\n m \\sigma_{Q_{i,m}} \\Sigma_{T_{i,m}}\n }\n \\right]\n }\n \\\\\n ={}&\n \\sqrt{\n 2m\n \\left[\n 1\n -\n \\frac{\n Q_{i,m}\\cdot{T_{i,m}}\n -\n \\mu_{Q_{i,m}}\n \\sum \\limits _{0 \\leq {j} \\lt m}\n t_{i+j}\n }{\n m \\sigma_{Q_{i,m}} \\Sigma_{T_{i,m}}\n }\n \\right]\n }\n \\\\\n ={}&\n \\sqrt{\n 2m\n \\left[\n 1\n -\n \\frac{\n Q_{i,m}\\cdot{T_{i,m}}\n -\n \\mu_{Q{i,m}}m\n \\sum \\limits _{0 \\leq {j} \\lt m}\n \\frac{t_{i+j}}{m}\n }{\n m \\sigma_{Q_{i,m}} \\Sigma_{T_{i,m}}\n }\n \\right]\n }\n \\\\\n ={}&\n \\sqrt{\n 2m\n \\left[\n 1\n -\n \\frac{\n Q_{i,m}\\cdot{T_{i,m}}\n -\n m\\mu_{Q_{i,m}}M_{T_{i,m}}\n }{\n m \\sigma_{Q_{i,m}} \\Sigma_{T_{i,m}}\n }\n \\right]\n }\n \\\\ \n\\end{align}\n\n# Computing the Z-Normalized Euclidean Distance from the Pearson Correlation\n\nBased on the fact that the Pearson Correlation, $\\rho$, can be written as (see Equation 4 in [this paper](https://www.cs.unm.edu/~mueen/Projects/JOCOR/joinICDM.pdf) or Equation 3 in [this paper](https://arxiv.org/pdf/1601.02213.pdf)):\n\n\\begin{align}\n \\rho(Q_{i,m}, T_{i,m}) ={}& \\frac{E\n \\left[\n \\left(\n Q_{i,m}-\\mu_{Q_{i,m}}\n \\right)\n \\left(\n T_{i,m}-M_{T_{i,m}}\n \\right)\n \\right]\n }{\\sigma_{Q_{i,m}}\\Sigma_{T_{i,m}}}\n \\\\\n ={}& \n \\frac{\n \\langle\n \\left(\n Q_{i,m}-\\mu_{Q_{i,m}}\n \\right)\n ,\n \\left(\n T_{i,m}-M_{T_{i,m}}\n \\right)\n \\rangle\n }{\\sigma_{Q_{i,m}}\\Sigma_{T_{i,m}}}\n \\\\\n ={}&\n \\frac{1}{m}\n \\sum \\limits _{0 \\leq j \\lt m}\n \\frac{\n \\left(\n q_{i+j}-\\mu_{Q_{i,m}}\n \\right)\n \\left(\n t_{i+j}-M_{T_{i,m}}\n \\right)\n }{\\sigma_{Q_{i,m}}\\Sigma_{T_{i,m}}}\n \\\\\n ={}&\n \\frac{1}{m}\n \\sum \\limits _{0 \\leq j \\lt m}\n \\left(\n \\frac{\n q_{i+j}-\\mu_{Q_{i,m}}\n }{\\sigma_{Q_{i,m}}}\n \\right)\n \\left(\n \\frac{ \n t_{i+j}-M_{T_{i,m}}\n }{\\Sigma_{T_{i,m}}}\n \\right)\n \\\\\n\\end{align}\n\nSimilar to above, the Z-normalized Euclidean distance can be computed from $\\rho$ following:\n\n\\begin{align}\n D(Q_{i,m}, T_{i,m}) ={}&\n \\sqrt{\n \\sum \\limits _{0 \\leq {j} \\lt m}\n \\left(\n \\frac{t_{i+j}-M_{T_{i,m}}}{\\Sigma_{T_{i,m}}}\n - \n \\frac{q_{i+j}-\\mu_{Q_{i,m}}}{\\sigma_{Q_{i,m}}}\n \\right)^2\n }\n \\\\\n \\vdots\n \\\\\n ={}&\n \\sqrt{\n 2m\n \\left[\n 1\n -\n \\frac{1}{m}\n \\sum \\limits _{0 \\leq {j} \\lt m}\n \\left(\n \\frac{t_{i+j}-M_{T_{i,m}}}{\\Sigma_{T_{i,m}}}\n \\right)\n \\left(\n \\frac{q_{i+j}-\\mu_{Q_{i,m}}}{\\sigma_{Q_{i,m}}}\n \\right)\n \\right]\n }\n \\\\\n ={}&\n \\sqrt{\n 2m\n \\left[\n 1\n -\n \\rho(Q_{i,m},T_{i,m})\n \\right]\n }\n \\\\\n\\end{align}\n\nThus, by employing the most efficient way to compute $\\rho(Q_{i,m},T_{i,m})$, then we'd also have an efficient way to directly compute $D(Q_{i,m},T_{i,m})$. Recall that:\n\n\\begin{align}\n \\rho(Q_{i,m},T_{i,m}) = \\frac{cov(Q_{i,m},T_{i,m})}{\\sigma_{Q_{i,m}}\\Sigma_{T_{i,m}}}\n\\end{align}\n\nThus, it follows that finding the most efficient way to compute the covariance matrix, $cov(Q_{i,m},T_{i,m})$ would result in the most efficient way to compute the distance. 
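\n\nAs a quick numerical sanity check of the identity derived above, $D(Q_{i,m}, T_{i,m}) = \\sqrt{2m[1 - \\rho(Q_{i,m},T_{i,m})]}$ (the check below is an added illustration, not part of the derivation; the series, window size, and subsequence index are arbitrary), a few lines of NumPy are enough:\n\n\n```python\nimport numpy as np\n\nnp.random.seed(0)\nm = 64                      # arbitrary window size\nT = np.random.randn(1000)   # arbitrary time series\nQ = np.random.randn(m)      # arbitrary query subsequence\ni = 123\nT_i = T[i:i + m]            # the i-th subsequence of T\n\n# z-normalize both windows (population standard deviation)\nq = (Q - Q.mean()) / Q.std()\nt = (T_i - T_i.mean()) / T_i.std()\n\n# direct z-normalized Euclidean distance\nD_direct = np.linalg.norm(q - t)\n\n# the same distance computed from the Pearson correlation rho\nrho = np.mean(q * t)\nD_from_rho = np.sqrt(2 * m * (1 - rho))\n\nprint(np.allclose(D_direct, D_from_rho))   # True\n```\n\n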
Also, remember that we would like to traverse our distance matrix along each diagonal rather than along each row/column.\n\n# Covariance\n\nRecall that the covariance, $cov(Q_{i,m},T_{i,m})$, can be written as:\n\n\\begin{align}\n cov(Q_{i,m},T_{i,m}) ={}& E\n \\left[\n \\left(\n Q-\\mu_{Q_{i,m}}\n \\right)\n \\left(\n T_{i,m}-M_{T_{i,m}}\n \\right)\n \\right]\n \\\\\n ={}& \n \\langle\n \\left(\n Q-\\mu_{Q_{i,m}}\n \\right)\n ,\n \\left(\n T_{i,m}-M_{T_{i,m}}\n \\right)\n \\rangle\n \\\\\n ={}&\n \\frac{1}{m}\n \\sum \\limits _{0 \\leq j \\lt m}\n \\left(\n q_{i+j}-\\mu_{Q_{i,m}}\n \\right)\n \\left(\n t_{i+j}-M_{T_{i,m}}\n \\right)\n \\\\\n\\end{align}\n\nNote that we've explicitly called out the fact that the means, $\\mu_{Q_{{i,m}}}$ and $M_{T_{i,m}}$, are computed with the subsequences of length $m$. Additionally, according to Welford, we can express these means with respect to the means of the same subsequences that have their last elements removed (i.e., $\\mu_{Q_{i,m-1}}$ and $M_{T_{i,m-1}}$).\n\n\\begin{align}\n cov(Q_{i,m},T_{i,m}) \n ={}&\n \\frac{1}{m}\n \\sum \\limits _{0 \\leq j \\lt m}\n \\left(\n q_{i+j}-\\mu_{Q_{i,m}}\n \\right)\n \\left(\n t_{i+j}-M_{T_{i,m}}\n \\right)\n \\\\\n ={}&\n \\frac{\n S(Q_{i,m-1}, T_{i,m-1})\n +\n \\left(\n \\frac{m-1}{m}\n \\right) \n \\left(\n q_{i+m-1} - \\mu_{Q_{i,m-1}}\n \\right)\n \\left(\n t_{i+m-1} - M_{T_{i,m-1}}\n \\right)\n }{m}\n \\\\\n ={}&\n \\frac{\n \\frac{m-1}{m-1}S(Q_{i,m-1}, T_{i,m-1})\n +\n \\left(\n \\frac{m-1}{m}\n \\right) \n \\left(\n q_{i+m-1} - \\mu_{Q_{i,m-1}}\n \\right)\n \\left(\n t_{i+m-1} - M_{T_{i,m-1}}\n \\right)\n }{m}\n \\\\\n ={}&\n \\frac{\n cov(Q_{i,m-1},T_{i,m-1}) (m-1)\n +\n \\left(\n \\frac{m-1}{m}\n \\right) \n \\left(\n q_{i+m-1} - \\mu_{Q_{i,m-1}}\n \\right)\n \\left(\n t_{i+m-1} - M_{T_{i,m-1}}\n \\right)\n }{m}\n \\\\\n ={}&\n \\frac{m-1}{m} \n \\left[\n cov(Q_{i,m-1},T_{i,m-1})\n +\n \\frac{\n \\left(\n q_{i+m-1} - \\mu_{Q_{i,m-1}}\n \\right)\n \\left(\n t_{i+m-1} - M_{T_{i,m-1}}\n \\right)\n }{m}\n \\right]\n \\\\\n\\end{align}\n\nSimilarly, $cov(Q_{i-1,m},T_{i-1,m})$ can also be expressed with respect to $cov(Q_{i,m-1},T_{i,m-1})$:\n\n\\begin{align}\n cov(Q_{i-1,m},T_{i-1,m}) \n ={}&\n \\frac{1}{m}\n \\sum \\limits _{0 \\leq j \\lt m}\n \\left(\n q_{i+j-1}-\\mu_{Q_{i-1,m}}\n \\right)\n \\left(\n t_{i+j-1}-M_{T_{i-1,m}}\n \\right)\n \\\\\n ={}&\n \\frac{\n S(Q_{i,m-1},T_{i,m-1})\n +\n \\frac{m-1}{m}\n \\left(\n q_{i-1} \n - \n \\mu_{Q_{i,m-1}} \n \\right)\n \\left(\n t_{i-1}\n -\n M_{T_{i,m-1}} \n \\right)\n }{m}\n \\\\\n ={}&\n \\frac{\n \\frac{m-1}{m-1}S(Q_{i,m-1},T_{i,m-1})\n +\n \\frac{m-1}{m}\n \\left(\n q_{i-1} \n - \n \\mu_{Q_{i,m-1}} \n \\right)\n \\left(\n t_{i-1}\n -\n M_{T_{i,m-1}} \n \\right)\n }{m}\n \\\\\n ={}&\n \\frac{\n cov(Q_{i,m-1},T_{i,m-1}) (m-1)\n +\n \\left(\n \\frac{m-1}{m}\n \\right) \n \\left(\n q_{i-1} \n - \n \\mu_{Q_{i,m-1}} \n \\right)\n \\left(\n t_{i-1}\n -\n M_{T_{i,m-1}} \n \\right)\n }{m}\n \\\\\n ={}&\n \\frac{m-1}{m} \n \\left[\n cov(Q_{i,m-1},T_{i,m-1})\n +\n \\frac{\n \\left(\n q_{i-1} \n - \n \\mu_{Q_{i,m-1}} \n \\right)\n \\left(\n t_{i-1}\n -\n M_{T_{i,m-1}} \n \\right)\n }{m}\n \\right]\n \\\\\n\\end{align}\n\nNow, we can rearrange write this and write represent $cov(Q_{i,m-1},T_{i,m-1})$ as a function of $cov(Q_{i-1,m},T_{i-1,m})$:\n\n\\begin{align}\n cov(Q_{i-1,m},T_{i-1,m})\n ={}&\n \\frac{m-1}{m} \n \\left[\n cov(Q_{i,m-1},T_{i,m-1})\n +\n \\frac{\n \\left(\n q_{i-1} \n - \n \\mu_{Q_{i,m-1}} \n \\right)\n \\left(\n t_{i-1}\n -\n M_{T_{i,m-1}} \n 
\\right)\n }{m}\n \\right]\n \\\\\n \\frac{m}{m-1} \n cov(Q_{i-1,m},T_{i-1,m})\n -\n \\frac{\n \\left(\n q_{i-1} \n - \n \\mu_{Q_{i,m-1}} \n \\right)\n \\left(\n t_{i-1}\n -\n M_{T_{i,m-1}} \n \\right)\n }{m}\n ={}&\n cov(Q_{i,m-1},T_{i,m-1})\n \\\\\n\\end{align}\n\nAnd we can then substitute this representation of $cov(Q_{i,m-1},T_{i,m-1})$ into our $cov(Q_{i,m},T_{i,m})$ equation from above and get:\n\n\\begin{align}\n cov(Q_{i,m},T_{i,m})\n ={}&\n \\frac{m-1}{m} \n \\left[\n cov(Q_{i,m-1},T_{i,m-1})\n +\n \\frac{\n \\left(\n q_{i+m-1} - \\mu_{Q_{i,m-1}}\n \\right)\n \\left(\n t_{i+m-1} - M_{T_{i,m-1}}\n \\right)\n }{m}\n \\right]\n \\\\\n ={}&\n \\frac{m-1}{m} \n \\left[\n \\frac{m}{m-1}\n cov(Q_{i-1,m},T_{i-1,m})\n -\n \\frac{\n \\left(\n q_{i-1} \n - \n \\mu_{Q_{i,m-1}} \n \\right)\n \\left(\n t_{i-1}\n -\n M_{T_{i,m-1}} \n \\right)\n }{m}\n +\n \\frac{\n \\left(\n q_{i+m-1} - \\mu_{Q_{i,m-1}}\n \\right)\n \\left(\n t_{i+m-1} - M_{T_{i,m-1}}\n \\right)\n }{m}\n \\right]\n \\\\\n ={}&\n cov(Q_{i-1,m},T_{i-1,m})\n +\n \\frac{m-1}{m^2}\n \\left[\n \\left(\n q_{i+m-1} - \\mu_{Q_{i,m-1}}\n \\right)\n \\left(\n t_{i+m-1} - M_{T_{i,m-1}}\n \\right)\n -\n \\left(\n q_{i-1} \n - \n \\mu_{Q_{i,m-1}} \n \\right)\n \\left(\n t_{i-1}\n -\n M_{T_{i,m-1}} \n \\right)\n \\right]\n \\\\\n\\end{align}\n\n# Pearson Correlation\n\n\\begin{align}\n \\rho(Q_{i,m},T_{i,m}) \n &{}= \n \\frac{cov(Q_{i,m},T_{i,m})}{\\sigma_{Q_{i,m}}\\Sigma_{T_{i,m}}}\n \\\\\n &{}=\n \\frac{\n cov(Q_{i-1,m},T_{i-1,m})\n +\n \\frac{m-1}{m^2}\n \\left[\n \\left(\n q_{i+m-1} - \\mu_{Q_{i,m-1}}\n \\right)\n \\left(\n t_{i+m-1} - M_{T_{i,m-1}}\n \\right)\n -\n \\left(\n q_{i-1} \n - \n \\mu_{Q_{i,m-1}} \n \\right)\n \\left(\n t_{i-1}\n -\n M_{T_{i,m-1}} \n \\right)\n \\right]\n }{\\sigma_{Q_{i,m}}\\Sigma_{T_{i,m}}}\n \\\\\n\\end{align}\n\n# Z-Normalized Distance\n\n\n```python\n\n```\n", "meta": {"hexsha": "00ac95e310771e337d0487ccc9aa7c3242cddf73", "size": 27972, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "docs/Matrix_Profile_Derivation.ipynb", "max_stars_repo_name": "profintegra/stumpy", "max_stars_repo_head_hexsha": "66b3402d91820005b466e1da6fe353b61e6246c5", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 2296, "max_stars_repo_stars_event_min_datetime": "2019-05-03T19:26:39.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T20:42:08.000Z", "max_issues_repo_path": "docs/Matrix_Profile_Derivation.ipynb", "max_issues_repo_name": "vishalbelsare/stumpy", "max_issues_repo_head_hexsha": "5f192a0a41fbb44f144cc4b676d525f19aaeaa98", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 436, "max_issues_repo_issues_event_min_datetime": "2019-05-06T14:14:01.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T20:39:31.000Z", "max_forks_repo_path": "docs/Matrix_Profile_Derivation.ipynb", "max_forks_repo_name": "vishalbelsare/stumpy", "max_forks_repo_head_hexsha": "5f192a0a41fbb44f144cc4b676d525f19aaeaa98", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 318, "max_forks_repo_forks_event_min_datetime": "2019-05-04T01:36:05.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-31T20:31:11.000Z", "avg_line_length": 32.2258064516, "max_line_length": 490, "alphanum_fraction": 0.2579365079, "converted": true, "num_tokens": 5546, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9702399034724605, "lm_q2_score": 0.8740772269642949, "lm_q1q2_score": 0.8480646043173135}} {"text": "
\n\n### simple *sqrt*\n\n\n```python\nimport math\nimport sympy \n\nmath.sqrt(9)\nmath.sqrt(8)\n\nsympy.sqrt(9)\nsympy.sqrt(8)\nsympy.sqrt(3)\n```\n\n### actual computation\n\n\n```python\nfrom sympy import *\n\nx,t,z,nu = symbols('x t z nu')\ninit_printing(use_unicode=True)\n```\n\n\n```python\nlimit(sin(x)/x,x,0),\nlimit((3*x**2)/(x-3),x,3)\n```\n\n\n```python\ndiff(sin(x)),\ndiff(5*x**3,x),\ndiff(5*x**3,x,2)\n```\n\n\n```python\nsolve(x**2-10,x)\n```\n\n### *expand* / *factor*\n\n\n```python\nfrom sympy import symbols\nfrom sympy import expand,factor\n\nx,y = symbols('x y')\nexpr = x**2 + 2*y\n```\n\n\n```python\nexpr\nexpr + 1\nexpr - x\n```\n\n\n```python\nexpand(x*expr),\nfactor(x*expr)\n```\n\n### about *symbols*\n\n\n```python\nfrom sympy import *\n\nx = symbols('x')\ny,z = symbols('y z')\n\nx,y,z\n```\n\n\n```python\nh = symbols('h')\nexpr = h ** 2\n\n# a python variable \nh = 5\n\n# the sympy symbol 'h'\nexpr * 2 \n```\n\n\n```python\nx = symbols('x')\nexpr = x + 1\n\nexpr.subs(x,1)\nexpr\n```\n\n\n```python\n# both '=' and '==' cannot be used in here!\nEq(1,1)\nEq(x**2+y**2,z**2)\nEq((x+1)**2,x**2+2*x+1)\n```\n\n\n```python\na = (x+1)**2\nb = x**2 + 2*x + 1\nc = x**2 - 2*x + 1\n\nsimplify(a-b)\nsimplify(a-c)\nsimplify(b-c)\n```\n\n\n```python\na = cos(x)**2 + sin(x)**2\nb = a/a\n\na.equals(b)\na.equals(b+1)\n```\n\n\n\n\n True\n\n\n\n\n\n\n False\n\n\n\n### *type* thing\n\n\n```python\ntype(1+1)\ntype(Integer(1)+1)\n\n1+1 == Integer(1)+1\n```\n\n\n\n\n int\n\n\n\n\n\n\n sympy.core.numbers.Integer\n\n\n\n\n\n\n True\n\n\n\n\n```python\n# Nah\n1/3 , 1//3\n\n# Oh yeah\n1/Integer(3), 1//Integer(3)\n\nRational(1,3) == Integer(1)/3\n```\n\n
\n\n### basic opts\n\nmostly subs\n\n\n```python\nfrom sympy import *\nx,y,z = symbols('x y z')\n```\n\n\n```python\nexpr = cos(x) + 1\n\nexpr.subs(x,sin(0))\nexpr.subs(x,0)\nexpr.subs(x,expr.subs(x,0))\n```\n\n\n```python\nexpr = sin(2*x) + cos(2*x)\n\nexpand(expr)\nexpand_trig(expr)\n\nexpr.subs(sin(2*x),2*sin(x)*cos(x))\n```\n\n\n```python\nexpr = x**2 + y**2\n\nexpr.subs([(x,3),(y,4)])\nexpr.subs([(x,30),(y,40)])\n```\n\n\n```python\nexpr = x**4 - 4*x**3 + 4*x**2 - 2*x +3\n\n# replace all instances of x that have an even power \n# x**2 -> y**2\n# x**4 -> y**4 \nreplace_evenpow_as_y = [ (x**i,y**i) for i in range(5) if i % 2 == 0 ]\n\nexpr\nexpr.subs(replace_evenpow_as_y)\n```\n\n\n```python\n# not to be confused with 'simplify' (below)\nsympify(\"x**2+y**2\")\nsympify(\"x**2 + y**2\").subs([(x,3),(y,4)])\n\nsimplify(\"(x+1)**2 + (x**2 + 2*x + 1)\")\n```\n\n
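\n\nOne more *subs* detail (this cell is an addition, not one of the original cells): substituting with a plain list of pairs happens sequentially, so the second rule also hits the result of the first; passing `simultaneous=True` does a real swap.\n\n\n```python\nexpr = x**2 + 2*y\n\n# sequential: x -> y first, then every y -> x\nexpr.subs([(x,y),(y,x)])\n\n# simultaneous: a true swap of x and y\nexpr.subs([(x,y),(y,x)], simultaneous=True)\n```\n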
\n\n\n```python\nexpr = sqrt(8)\n\nexpr\nexpr.evalf()\nexpr.evalf(20)\n```\n\n\n```python\nexpr = cos(2*x)\n\nexpr.evalf(subs={x:0})\nexpr.evalf(subs={x:pi/2})\nexpr.evalf(subs={x:pi/6})\n```\n\n\n```python\none = cos(x)**2\n\none.evalf(subs={x:pi/2})\none.evalf(subs={x:pi/2},chop=True)\n```\n\n\n```python\nimport numpy\na = numpy.arange(10)\n```\n\n\n```python\nf = lambdify(x,sin(x))\n\nf(-30)\nf(30)\n```\n\n\n```python\nf1 = lambdify(x,cos(x),\"numpy\")\nf2 = lambdify(x,sin(x),\"math\")\n\nf1(a)\n\nf2(pi/6)\nf2(pi/4)\nf2(pi/3)\n```\n\n
\n\n### Printing\n\n\n```python\nfrom sympy import init_printing\n\n# It'll enable the best printer available\ninit_printing(use_unicode=True)\n```\n\n\n```python\n# Or you're in an interactive shell (ipython or qtconsole)\n# >> from sympy import init_session\n# >> init_session()\n# >> init_printing()\n\n# See more at sympy.org\n```\n", "meta": {"hexsha": "eef64102fd266f4f5304d8b3a56b69ca014b0650", "size": 88918, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "sympy-part01-intro.ipynb", "max_stars_repo_name": "codingEzio/code_python_learn_math", "max_stars_repo_head_hexsha": "bd7869d05e1b4ec250cc5fa13470a960b299654e", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "sympy-part01-intro.ipynb", "max_issues_repo_name": "codingEzio/code_python_learn_math", "max_issues_repo_head_hexsha": "bd7869d05e1b4ec250cc5fa13470a960b299654e", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "sympy-part01-intro.ipynb", "max_forks_repo_name": "codingEzio/code_python_learn_math", "max_forks_repo_head_hexsha": "bd7869d05e1b4ec250cc5fa13470a960b299654e", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 65.4289919058, "max_line_length": 2988, "alphanum_fraction": 0.8085651949, "converted": true, "num_tokens": 1185, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9294404038127071, "lm_q2_score": 0.9124361664296878, "lm_q1q2_score": 0.8480550389797274}} {"text": "**Problem 1** (7 pts)\n\nIn the finite space all norms are equivalent. \nThis means that given any two norms $\\|\\cdot\\|_*$ and $\\|\\cdot\\|_{**}$ over $\\mathbb{C}^{n\\times 1}$, inequality\n$$\n c_1 \\Vert x \\Vert_* \\leq \\Vert x \\Vert_{**} \\leq c_2 \\Vert x \\Vert_*\n$$\nholds for every $x\\in \\mathbb{C}^{n\\times 1}$ for some constants $c_1, c_2$ which in general depend on the vector size $n$: $c_1 \\equiv c_1(n)$, $c_2 \\equiv c_2(n)$.\nNorms equivalence means that for a certain process convergence in $\\Vert \\cdot \\Vert_{**}$ is followed by $\\Vert \\cdot \\Vert_{*}$ and vice versa. 
Note that practically convergence in a certain norm may be better than in another due to the strong dependence on $n$.\n\nConsider \n\\begin{equation}\n\\begin{split}\nc_1(n) \\Vert x \\Vert_\\infty &\\leqslant \\Vert x \\Vert_1 \\leqslant c_2(n) \\Vert x \\Vert_\\infty \\\\\nc_1(n) \\Vert x \\Vert_\\infty &\\leqslant \\Vert x \\Vert_2 \\leqslant c_2(n)\\Vert x \\Vert_\\infty \\\\\nc_1(n) \\Vert x \\Vert_2\\ &\\leqslant \\Vert x \\Vert_1 \\leqslant c_2(n) \\Vert x \\Vert_2\n\\end{split}\n\\end{equation}\n\n- Generate random vectors and plot optimal constants $c_1$ and $c_2$ from inequalities above as a function of $n$.\n- Find these optimal constants analytically and plot them together with constants found numerically \n\n\n```\n\n```\n\n**Problem 2** (7 pts) \n\nGiven $A = [a_{ij}] \\in\\mathbb{C}^{n\\times m}$\n\n- prove that for operator matrix norms $\\Vert \\cdot \\Vert_{1}$, $\\Vert \\cdot \\Vert_{\\infty}$ hold\n$$ \\Vert A \\Vert_{1} = \\max_{1\\leqslant j \\leqslant m} \\sum_{i=1}^n |a_{ij}|, \\quad \\Vert A \\Vert_{\\infty} = \\max_{1\\leqslant i \\leqslant n} \\sum_{j=1}^m |a_{ij}|.\n$$\n**Hint**: show that \n$$\n\\Vert Ax\\Vert_{1} \\leqslant \\left(\\max_{1\\leqslant j \\leqslant m} \\sum_{i=1}^n |a_{ij}|\\right) \\Vert x\\Vert_1\n$$\nand find such $x$ that this inequality becomes equality (almost the same hint is for $ \\Vert A \\Vert_\\infty$).\n- check that for randomly generated $x$ and for given analytical expressions for $\\Vert \\cdot \\Vert_{1}$, $\\Vert \\cdot \\Vert_{\\infty}$ always hold $\\|A\\| \\geqslant \\|Ax\\|/\\|x\\|$ (choose matrix $A$ randomly)\n- prove that $ \\Vert A \\Vert_F = \\sqrt{\\text{trace}(A^{*} A)}$\n\n\n```\n\n```\n\n**Problem 3** (6 pts)\n- Prove Cauchy-Schwarz (Cauchy-Bunyakovsky) inequality $(x, y) \\leqslant \\Vert x \\Vert \\Vert y \\Vert $, where $(\\cdot, \\cdot)$ is a dot product that induces norm $ \\Vert x \\Vert = (x,x)^{1/2}$. \n- Show that vector norm $\\|\\cdot \\|_2$ is unitary ivariant: $\\|Ux\\|_2\\equiv \\|x\\|_2$, where $U$ is unitary\n- Prove that matrix norms $\\|\\cdot \\|_2$ and $\\|\\cdot \\|_F$ are unitary invariant: $\\|UAV\\|_2 = \\|A\\|_2$ and $\\|UAV\\|_F = \\|A\\|_F$, where $U$ and $V$ are unitary\n\n\n```\n\n```\n\n**Problem 4** (5 pts)\n\n- Download [Lenna image](http://www.ece.rice.edu/~wakin/images/lenaTest3.jpg) and import it in Python as a 2D real array\n- Find its SVD and plot singular values (use logarithmic scale)\n- Plot compressed images for several accuracies (use $\\verb|plt.subplots|$). Specify their compression rates\n\n\n```\n\n```\n\n**Problem 5** (bonus tasks)\n- The norm is called absolute if $\\|x\\|=\\| \\lvert x \\lvert \\|$ for any vector $x$, where $x=(x_1,\\dots,x_n)^T$ and $\\lvert x \\lvert = (\\lvert x_1 \\lvert,\\dots, \\lvert x_n \\lvert)^T$. 
Give an example of a norm which is not absolute.\n- Prove that Frobenius norm is not an operator norm\n\n\n```\n\n```\n", "meta": {"hexsha": "6057a7498fa78bf10bd942e04e6f5246623f2db3", "size": 5309, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "problems/Pset2.ipynb", "max_stars_repo_name": "oseledets/NLA", "max_stars_repo_head_hexsha": "d16d47bc8e20df478d98b724a591d33d734ec74b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 14, "max_stars_repo_stars_event_min_datetime": "2015-01-20T13:24:38.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-03T05:54:09.000Z", "max_issues_repo_path": "problems/Pset2.ipynb", "max_issues_repo_name": "oseledets/NLA", "max_issues_repo_head_hexsha": "d16d47bc8e20df478d98b724a591d33d734ec74b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "problems/Pset2.ipynb", "max_forks_repo_name": "oseledets/NLA", "max_forks_repo_head_hexsha": "d16d47bc8e20df478d98b724a591d33d734ec74b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2015-09-10T09:14:10.000Z", "max_forks_repo_forks_event_max_datetime": "2019-10-09T04:36:07.000Z", "avg_line_length": 40.2196969697, "max_line_length": 281, "alphanum_fraction": 0.5307967602, "converted": true, "num_tokens": 1183, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9294404077216355, "lm_q2_score": 0.9124361533336451, "lm_q1q2_score": 0.8480550303743839}} {"text": "# 4.3 Linear Discriminant Analysis\n\nSuppose $f_k(x)$ is the class-conditional density of X, and let $\\pi_k$ be the prior-probability, with $\\sum \\pi_k = 1$. \nThe Bayes theorem gives us (4.7): \n\n$$\nPr(G = k, X = x) = \\cfrac{f_k(x)\\pi_k}{\\sum_{l=1}^K f_l(x)\\pi_l}\n$$\n\nSuppose that we model each class density as multivariate Gaussian (4.8):\n\n$$\nf_k(x) = \\cfrac{1}{(2\\pi)^{p/2} |\\Sigma_k|^{1/2}} e^{-\\frac{1}{2}(x-\\mu_k)^T\\Sigma_k^{-1}(x-\\mu_k)}\n$$\n\nLinear discriminant analysis (LDA) arises when $\\Sigma_k = \\Sigma \\text{ }\\forall k$. The log-ration between two classes $k \\text{ and } l$ is (4.9):\n\n$$\n\\begin{align}\nlog \\cfrac{PR(G=k|X=x)}{PR(G=l|X=x)} &= log \\cfrac{f_k(x)}{f_l(x)} + log \\cfrac{\\pi_k}{\\pi_l}\\\\\n&= log \\cfrac{\\pi_k}{\\pi_l} - \\frac{1}{2}(\\mu_k+\\mu_l)^T\\Sigma^{-1}(\\mu_k - \\mu_l)\n+ x^T\\Sigma^{-1}(\\mu_k-\\mu_l),\n\\end{align}\n$$\n\nan equation is linear in x. \n\nFrom (4.9) we see that the linear discriminant functions (4.10):\n\n$$\n\\delta_k(x) = x^T\\Sigma^{-1}\\mu_k - \\frac{1}{2}\\mu_k^T\\Sigma^{-1}\\mu_k + log \\pi_k\n$$\n\nare an equivalent description of the decision rule, with $G(x) = argmax_k \\delta_k(x)$.\n\nIn practice we do not know the parameters of the Gaussian distributions, and will need to estimate them using the training data:\n\n- $\\hat{\\pi_k} = N_k / N, N_k$ is the number of class-k observations;\n\n- $\\hat{\\mu}_k = \\sum_{g_i = k} x_i / N_k$\n\n- $\\hat{\\Sigma} = \\sum_{k=1}^K\\sum_{g_i=k} (x_i - \\hat{\\mu}_k)(x_i-\\hat{\\mu}_k)^T / (N - K)$\n\nWith two classes, the LDA rule classifies to class 2 if (4.11):\n\n$$\nx^T\\hat{\\Sigma}^{-1}(\\hat{\\mu}_2 - \\hat{\\mu}_1) > \n\\frac{1}{2}\\hat{\\mu}_2^T\\hat{\\Sigma}^{-1}\\hat{\\mu}_2 \n- \\frac{1}{2}\\hat{\\mu}_1^T\\hat{\\Sigma}^{-1}\\hat{\\mu}_1\n+ log(N_1/N)\n- log(N_2/N)\n$$\n\nSuppose we code the targets in the 2-classes as +1 and -1. 
It is easy to show that the coefficient vector from least squares is proportional to the LDA direction given in (4.11). However unless $N_1 = N_2$ the intercepts are different.\n(**TODO**: solve exercise 4.11)\n\nSince LDA direction via least squares does not use a Gaussian assumption, except the derivation of the intercept or cut-point via (4.11). Thus it makes sense to choose the cut-point that minimizes the training error.\n\nWith more than two classes, LDA is not the same as linear regression and it avoids the masking problems.\n\n**Quadratic discriminant functions**\n\nIf $\\Sigma_k$ are assumed to be equal, then we get *quadratic discriminant functions (QDA)* (4.12):\n$$\n\\delta_k(x)=\\cfrac{1}{2}log|\\Sigma_k| - \\cfrac{1}{2}(x-\\mu_k)^T\\Sigma_k^{-1}(x-\\mu_k)+log \\pi_k\n$$\n", "meta": {"hexsha": "10b6d019378b84e97acb918a7c0860f0830e9838", "size": 3871, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "chapter-04/4.3-linear-discriminant-analysis.ipynb", "max_stars_repo_name": "leduran/ESL", "max_stars_repo_head_hexsha": "fcb6c8268d6a64962c013006d9298c6f5a7104fe", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 360, "max_stars_repo_stars_event_min_datetime": "2019-01-28T14:05:02.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-27T00:11:21.000Z", "max_issues_repo_path": "chapter-04/4.3-linear-discriminant-analysis.ipynb", "max_issues_repo_name": "leduran/ESL", "max_issues_repo_head_hexsha": "fcb6c8268d6a64962c013006d9298c6f5a7104fe", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-07-06T16:51:40.000Z", "max_issues_repo_issues_event_max_datetime": "2020-07-06T16:51:40.000Z", "max_forks_repo_path": "chapter-04/4.3-linear-discriminant-analysis.ipynb", "max_forks_repo_name": "leduran/ESL", "max_forks_repo_head_hexsha": "fcb6c8268d6a64962c013006d9298c6f5a7104fe", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 79, "max_forks_repo_forks_event_min_datetime": "2019-03-21T23:48:35.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-31T13:05:10.000Z", "avg_line_length": 35.8425925926, "max_line_length": 244, "alphanum_fraction": 0.5355205373, "converted": true, "num_tokens": 914, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9416541643004809, "lm_q2_score": 0.9005297947939938, "lm_q1q2_score": 0.8479876313444217}} {"text": "# Interpolation for Squaring\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport os\nimport sys\nsys.path.append(os.path.join(os.getcwd(), '../build/'))\nimport pydeft as deft\n\nplt.rc('font', family='serif')\nplt.rc('xtick', labelsize='x-small')\nplt.rc('ytick', labelsize='x-small')\n```\n\nConsider samples of a continuous function\n\\begin{equation}\n w(\\mathbf{r}) = \\cos{p x},\n\\end{equation}\nsuch that we have enough samples to reconstruct the continuous function using Fourier-based interpolations. \n\nIf we are interested in the quantitiy $w^2$, just squaring the values of the sampled points may not be enough to fully reconstruct the square of the continuous function. This is because larger frequency components have been introduced during the squaring, which might not be adequately described by the sampled points. Hence, the same set of sampled points that could successfully reconstruct $w$ may not adequately reconstruct $w^2$. This can be rectified by first interpolating the sampled points in $w$ before squaring the values on a denser grid. 
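\n\nAs a one-dimensional illustration of this point, the cell below (an added sketch that uses plain NumPy zero-padding of the spectrum, independent of the deft helpers used in the rest of this notebook; the grid sizes simply mirror the 7 sparse points and densification factor of 11 used below) shows that squaring the coarse samples first aliases the doubled frequency, while interpolating first and then squaring reproduces $w^2$.\n\n\n```python\nimport numpy as np\n\ndef fourier_interpolate_1d(samples, n_dense):\n    # zero-pad the centered spectrum, transform back, and rescale\n    n = len(samples)\n    spectrum = np.fft.fftshift(np.fft.fft(samples))\n    pad = (n_dense - n) // 2\n    dense = np.pad(spectrum, (pad, n_dense - n - pad))\n    return np.fft.ifft(np.fft.ifftshift(dense)).real * n_dense / n\n\np = 4 * np.pi\nx_sparse = np.arange(7) / 7      # 7 samples are enough for cos(4*pi*x) itself\nx_dense = np.arange(77) / 77\nw = np.cos(p * x_sparse)\n\ninterp_then_square = fourier_interpolate_1d(w, 77)**2\nsquare_then_interp = fourier_interpolate_1d(w**2, 77)\nexact = np.cos(p * x_dense)**2\n\nprint(np.max(np.abs(interp_then_square - exact)))   # ~1e-15: w**2 is recovered\nprint(np.max(np.abs(square_then_interp - exact)))   # O(1): the squared samples are aliased\n```\n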
\n\n\n```python\n# create box and data array\n\ndense_factor = 11\ngrd_pts = 7\ndense_grd_pts = grd_pts*dense_factor\n\nbox_vectors = np.eye(3)\nbox = deft.Box(box_vectors)\n\nx_sparse = np.arange(grd_pts)/grd_pts\nx_dense = np.arange(dense_grd_pts)/dense_grd_pts\nx,y,z = np.meshgrid(x_sparse, np.arange(5)/5, np.arange(5)/5,indexing='ij')\np = np.pi*4\ndata = deft.Double3D([grd_pts, 5, 5])\ndata[...] = np.cos(p*x)\n\ndata_dense = deft.fourier_interpolate(data, [dense_grd_pts, 5, 5]) \n\n# font\nplt.rc('font', size=14)\nfig = plt.figure(figsize=[9,3.5])\nplt.plot(x_dense, np.cos(p*x_dense), color='0.5', ls = 'solid', linewidth = 2)\nplt.plot(x_sparse, data[:,0,0], 'k+', markersize = 18)\nplt.plot(x_dense, data_dense[:,0,0], 'k.', markersize = 8)\nplt.legend(['$\\cos(4\\pi x)$', 'sampled points', 'interpolated points'], loc=(1.02,0.25))\nplt.tight_layout()\nplt.show()\n```\n\n\n```python\nsparse_squared = deft.fourier_interpolate(data*data, [dense_grd_pts, 5, 5]) \n \n# plot\nplt.rc('font', size=14)\nfig = plt.figure(figsize=[9,3.5])\nplt.plot(x_dense, np.cos(p*x_dense)**2, color='0.5', ls = 'solid', linewidth = 2)\nplt.plot(x_dense, data_dense[:,0,0]**2, 'k.', markersize = 8)\nplt.plot(x_sparse, data[:,0,0]**2, 'k+', markersize = 18)\nplt.plot(x_dense, sparse_squared[:,0,0], color='0.2', ls = 'dashed', linewidth = 1)\nplt.set_xlabel('$x$-axis [one lattice cell]')\nplt.legend(['$\\cos^2(4\\pi x)$', 'interpolated before squaring', 'sampled points squared', 'squared before interpolating'], loc=(1.02,0.25))\nplt.tight_layout()\nplt.show()\n```\n", "meta": {"hexsha": "1ce76ea76ac2f947dfb5e5c92895c64d684b903e", "size": 4057, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "example/interpolation_squaring.ipynb", "max_stars_repo_name": "cw-tan/deft", "max_stars_repo_head_hexsha": "abb4d23fa0bb53031c13daef9942bceba4afd655", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "example/interpolation_squaring.ipynb", "max_issues_repo_name": "cw-tan/deft", "max_issues_repo_head_hexsha": "abb4d23fa0bb53031c13daef9942bceba4afd655", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "example/interpolation_squaring.ipynb", "max_forks_repo_name": "cw-tan/deft", "max_forks_repo_head_hexsha": "abb4d23fa0bb53031c13daef9942bceba4afd655", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.9837398374, "max_line_length": 557, "alphanum_fraction": 0.5814641361, "converted": true, "num_tokens": 776, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9416541561135442, "lm_q2_score": 0.9005297881200701, "lm_q1q2_score": 0.8479876176873133}} {"text": "# Linear Algebra using SymPy\n\n\n## Introduction\n\nThis notebook is a short tutorial of Linear Algebra calculation using SymPy. 
For further information refer to SymPy official [tutorial](http://docs.sympy.org/latest/tutorial/index.html).\n\nYou can also check the [SymPy in 10 minutes](./SymPy_in_10_minutes.ipynb) tutorial.\n\n\n```python\nfrom sympy import *\ninit_session()\n```\n\n IPython console for SymPy 1.0 (Python 2.7.13-64-bit) (ground types: python)\n \n These commands were executed:\n >>> from __future__ import division\n >>> from sympy import *\n >>> x, y, z, t = symbols('x y z t')\n >>> k, m, n = symbols('k m n', integer=True)\n >>> f, g, h = symbols('f g h', cls=Function)\n >>> init_printing()\n \n Documentation can be found at http://docs.sympy.org/1.0/\n\n\nA matrix $A \\in \\mathbb{R}^{m\\times n}$ is a rectangular array of real number with $m$ rows and $n$ columns. To specify a matrix $A$, we specify the values for its components as a list of lists:\n\n\n```python\nA = Matrix([\n [3, 2, -1, 1],\n [2, -2, 4, -2],\n [-1, S(1)/2, -1, 0]])\ndisplay(A)\n```\n\n\n$$\\left[\\begin{matrix}3 & 2 & -1 & 1\\\\2 & -2 & 4 & -2\\\\-1 & \\frac{1}{2} & -1 & 0\\end{matrix}\\right]$$\n\n\nWe can access the matrix elements using square brackets, we can also use it for submatrices\n\n\n```python\nA[0, 1] # row 0, column 1\n```\n\n\n```python\nA[0:2, 0:3] # top-left 2x3 submatrix\n```\n\n\n\n\n$$\\left[\\begin{matrix}3 & 2 & -1\\\\2 & -2 & 4\\end{matrix}\\right]$$\n\n\n\nWe can also create some common matrices. Let us create an identity matrix\n\n\n```python\neye(2)\n```\n\n\n\n\n$$\\left[\\begin{matrix}1 & 0\\\\0 & 1\\end{matrix}\\right]$$\n\n\n\n\n```python\nzeros(2, 3)\n```\n\n\n\n\n$$\\left[\\begin{matrix}0 & 0 & 0\\\\0 & 0 & 0\\end{matrix}\\right]$$\n\n\n\nWe can use algebraic operations like addition $+$, substraction $-$, multiplication $*$, and exponentiation $**$ with ``Matrix`` objects.\n\n\n```python\nB = Matrix([\n [2, -3, -8],\n [-2, -1, 2],\n [1, 0, -3]])\nC = Matrix([\n [sin(x), exp(x**2), 1],\n [0, cos(x), 1/x],\n [1, 0, 2]])\n```\n\n\n```python\nB + C\n```\n\n\n\n\n$$\\left[\\begin{matrix}\\sin{\\left (x \\right )} + 2 & e^{x^{2}} - 3 & -7\\\\-2 & \\cos{\\left (x \\right )} - 1 & 2 + \\frac{1}{x}\\\\2 & 0 & -1\\end{matrix}\\right]$$\n\n\n\n\n```python\nB ** 2\n```\n\n\n\n\n$$\\left[\\begin{matrix}2 & -3 & 2\\\\0 & 7 & 8\\\\-1 & -3 & 1\\end{matrix}\\right]$$\n\n\n\n\n```python\nC ** 2\n```\n\n\n\n\n$$\\left[\\begin{matrix}\\sin^{2}{\\left (x \\right )} + 1 & e^{x^{2}} \\sin{\\left (x \\right )} + e^{x^{2}} \\cos{\\left (x \\right )} & \\sin{\\left (x \\right )} + 2 + \\frac{e^{x^{2}}}{x}\\\\\\frac{1}{x} & \\cos^{2}{\\left (x \\right )} & \\frac{1}{x} \\cos{\\left (x \\right )} + \\frac{2}{x}\\\\\\sin{\\left (x \\right )} + 2 & e^{x^{2}} & 5\\end{matrix}\\right]$$\n\n\n\n\n```python\ntan(x) * B ** 5\n```\n\n\n\n\n$$\\left[\\begin{matrix}52 \\tan{\\left (x \\right )} & 27 \\tan{\\left (x \\right )} & - 28 \\tan{\\left (x \\right )}\\\\- 2 \\tan{\\left (x \\right )} & - \\tan{\\left (x \\right )} & - 78 \\tan{\\left (x \\right )}\\\\11 \\tan{\\left (x \\right )} & 30 \\tan{\\left (x \\right )} & 57 \\tan{\\left (x \\right )}\\end{matrix}\\right]$$\n\n\n\nAnd the ``transpose`` of the matrix, that flips the matrix through its main diagonal:\n\n\n```python\nA.transpose() # the same as A.T\n```\n\n\n\n\n$$\\left[\\begin{matrix}3 & 2 & -1\\\\2 & -2 & \\frac{1}{2}\\\\-1 & 4 & -1\\\\1 & -2 & 0\\end{matrix}\\right]$$\n\n\n\n## Row operations\n\n\n```python\nM = eye(4)\n```\n\n\n```python\nM[1, :] = M[1, :] + 5*M[0, :]\n```\n\n\n```python\nM\n```\n\n\n\n\n$$\\left[\\begin{matrix}1 & 0 & 0 & 0\\\\5 & 1 & 0 & 0\\\\0 & 
0 & 1 & 0\\\\0 & 0 & 0 & 1\\end{matrix}\\right]$$\n\n\n\nThe notation ``M[1, :]`` refers to entire rows of the matrix. The first argument specifies the 0-based row index, for example the first row of ``M`` is ``M[0, :]``. The code example above implements the row operation $R_2 \\leftarrow R_2 + 5R_1$. To scale a row by a constant $c$, use the ``M[1, :] = c*M[1, :]``. To swap rows $1$ and $j$, we can use the Python tuple-assignment syntax ``M[1, :], M[j, :] = M[j, :], M[1, :]``.\n\n## Reduced row echelon form\n\nThe Gauss-Jordan elimination procedure is a sequence of row operations that can be performed on any matrix to bring it to its _reduced row echelon form_ (RREF). In Sympy, matrices have a ``rref`` method that compute it:\n\n\n```python\nA.rref()\n```\n\n\n\n\n$$\\left ( \\left[\\begin{matrix}1 & 0 & 0 & 1\\\\0 & 1 & 0 & -2\\\\0 & 0 & 1 & -2\\end{matrix}\\right], \\quad \\left [ 0, \\quad 1, \\quad 2\\right ]\\right )$$\n\n\n\nIt return a tuple, the first value is the RREF of the matrix $A$, and the second tells the location of the leading ones (pivots). If we just want the RREF, we can just get the first entry of the matrix, i.e.\n\n\n```python\nA.rref()[0]\n```\n\n\n\n\n$$\\left[\\begin{matrix}1 & 0 & 0 & 1\\\\0 & 1 & 0 & -2\\\\0 & 0 & 1 & -2\\end{matrix}\\right]$$\n\n\n\n## Matrix fundamental spaces\n\nConsider the matrix $A \\in \\mathbb{R}^{m\\times n}$. The fundamental spaces of a matrix are its column space $\\mathcal{C}(A)$, its null space $\\mathcal{N}(A)$, and its row space $\\mathcal{R}(A)$. These vector spaces are importan when we consider the matrix product $A\\mathbf{x} = \\mathbf{y}$ as a linear transformation $T_A:\\mathbb{R}^n\\rightarrow \\mathbb{R}^n$ of the input vector $\\mathbf{x}\\in\\mathbb{R}^n$ to produce an output vector $\\mathbf{y} \\in \\mathbb{R}^m$.\n\n**Linear transformations** $T_A: \\mathbb{R}^n \\rightarrow \\mathbb{R}^m$ can be represented as $m\\times n$ matrices. The fundamental spaces of a matrix $A$ gives us information about the domain and image of the linear transformation $T_A$. The column space $\\mathcal{C}(A)$ is the same as the image space $\\mathrm{Im}(T_A)$ (the set of all possible outputs). The null space $\\mathcal{N}(A)$ is also called kernel $\\mathrm{Ker}(T_A)$, and is the set of all input vectors that are mapped to the zero vector. The row space $\\mathcal{R}(A)$ is the orthogonal complement of the null space, i.e., the vectors that are mapped to vectors different from zero. 
Input vectors in the row space of $A$ are in a one-to-one correspondence with the output vectors in the column space of $A$.\n\nLet us see how to compute these spaces, or a base for them!\n\nThe non-zero rows in the reduced row echelon form $A$ are a basis for its row space, i.e.\n\n\n```python\n[A.rref()[0][row, :] for row in A.rref()[1]]\n```\n\n\n\n\n$$\\left [ \\left[\\begin{matrix}1 & 0 & 0 & 1\\end{matrix}\\right], \\quad \\left[\\begin{matrix}0 & 1 & 0 & -2\\end{matrix}\\right], \\quad \\left[\\begin{matrix}0 & 0 & 1 & -2\\end{matrix}\\right]\\right ]$$\n\n\n\nThe column space of $A$ is the span of the columns of $A$ that contain the pivots.\n\n\n```python\n[A[:, col] for col in A.rref()[1]]\n```\n\n\n\n\n$$\\left [ \\left[\\begin{matrix}3\\\\2\\\\-1\\end{matrix}\\right], \\quad \\left[\\begin{matrix}2\\\\-2\\\\\\frac{1}{2}\\end{matrix}\\right], \\quad \\left[\\begin{matrix}-1\\\\4\\\\-1\\end{matrix}\\right]\\right ]$$\n\n\n\nWe can also use the ``columnspace`` method\n\n\n```python\nA.columnspace()\n```\n\n\n\n\n$$\\left [ \\left[\\begin{matrix}3\\\\2\\\\-1\\end{matrix}\\right], \\quad \\left[\\begin{matrix}2\\\\-2\\\\\\frac{1}{2}\\end{matrix}\\right], \\quad \\left[\\begin{matrix}-1\\\\4\\\\-1\\end{matrix}\\right]\\right ]$$\n\n\n\nNote that we took columns from the original matrix and not from its RREF.\n\nTo find (a base for) the null space of $A$ we use the ``nullspace`` method:\n\n\n```python\nA.nullspace()\n```\n\n\n\n\n$$\\left [ \\left[\\begin{matrix}-1\\\\2\\\\2\\\\1\\end{matrix}\\right]\\right ]$$\n\n\n\n## Determinants\n\nThe determinant of a matrix, denoted by $\\det(A)$ or $|A|$, isis a useful value that can be computed from the elements of a square matrix. It can be viewed as the scaling factor of the transformation described by the matrix.\n\n\n```python\nM = Matrix([\n [1, 2, 2],\n [4, 5, 6],\n [7, 8, 9]])\n```\n\n\n```python\nM.det()\n```\n\n## Matrix inverse\n\nFor invertible matrices (those with $\\det(A)\\neq 0$), there is an inverse matrix $A^{-1}$ that have the _inverse_ effect (if we are thinking about linear transformations).\n\n\n```python\nA = Matrix([\n [1, -1, -1],\n [0, 1, 0],\n [1, -2, 1]])\n```\n\n\n```python\nA.inv()\n```\n\n\n\n\n$$\\left[\\begin{matrix}\\frac{1}{2} & \\frac{3}{2} & \\frac{1}{2}\\\\0 & 1 & 0\\\\- \\frac{1}{2} & \\frac{1}{2} & \\frac{1}{2}\\end{matrix}\\right]$$\n\n\n\n\n```python\nA.inv() * A\n```\n\n\n\n\n$$\\left[\\begin{matrix}1 & 0 & 0\\\\0 & 1 & 0\\\\0 & 0 & 1\\end{matrix}\\right]$$\n\n\n\n\n```python\nA * A.inv()\n```\n\n\n\n\n$$\\left[\\begin{matrix}1 & 0 & 0\\\\0 & 1 & 0\\\\0 & 0 & 1\\end{matrix}\\right]$$\n\n\n\n## Eigenvectors and Eigenvalues\n\nTo find the eigenvalues of a matrix, use ``eigenvals``. ``eigenvals`` returns a dictionary of ``eigenvalue:algebraic multiplicity``.\n\n\n```python\nM = Matrix([\n [3, -2, 4, -2],\n [5, 3, -3, -2],\n [5, -2, 2, -2],\n [5, -2, -3, 3]])\n\nM\n```\n\n\n\n\n$$\\left[\\begin{matrix}3 & -2 & 4 & -2\\\\5 & 3 & -3 & -2\\\\5 & -2 & 2 & -2\\\\5 & -2 & -3 & 3\\end{matrix}\\right]$$\n\n\n\n\n```python\nM.eigenvals()\n```\n\nThis means that ``M`` has eigenvalues -2, 3, and 5, and that the eigenvalues -2 and 3 have algebraic multiplicity 1 and that the eigenvalue 5 has algebraic multiplicity 2.\n\nTo find the eigenvectors of a matrix, use ``eigenvects``. 
``eigenvects`` returns a list of tuples of the form ``(eigenvalue:algebraic multiplicity, [eigenvectors])``.\n\n\n```python\nM.eigenvects()\n```\n\n\n\n\n$$\\left [ \\left ( -2, \\quad 1, \\quad \\left [ \\left[\\begin{matrix}0\\\\1\\\\1\\\\1\\end{matrix}\\right]\\right ]\\right ), \\quad \\left ( 3, \\quad 1, \\quad \\left [ \\left[\\begin{matrix}1\\\\1\\\\1\\\\1\\end{matrix}\\right]\\right ]\\right ), \\quad \\left ( 5, \\quad 2, \\quad \\left [ \\left[\\begin{matrix}1\\\\1\\\\1\\\\0\\end{matrix}\\right], \\quad \\left[\\begin{matrix}0\\\\-1\\\\0\\\\1\\end{matrix}\\right]\\right ]\\right )\\right ]$$\n\n\n\nThis shows us that, for example, the eigenvalue 5 also has geometric multiplicity 2, because it has two eigenvectors. Because the algebraic and geometric multiplicities are the same for all the eigenvalues, ``M`` is diagonalizable.\n\nTo diagonalize a matrix, use diagonalize. diagonalize returns a tuple $(P,D)$, where $D$ is diagonal and $M=PDP^{\u22121}$.\n\n\n```python\nP, D = M.diagonalize()\n```\n\n\n```python\nP\n```\n\n\n\n\n$$\\left[\\begin{matrix}0 & 1 & 1 & 0\\\\1 & 1 & 1 & -1\\\\1 & 1 & 1 & 0\\\\1 & 1 & 0 & 1\\end{matrix}\\right]$$\n\n\n\n\n```python\nD\n```\n\n\n\n\n$$\\left[\\begin{matrix}-2 & 0 & 0 & 0\\\\0 & 3 & 0 & 0\\\\0 & 0 & 5 & 0\\\\0 & 0 & 0 & 5\\end{matrix}\\right]$$\n\n\n\n\n```python\nP * D * P.inv()\n```\n\n\n\n\n$$\\left[\\begin{matrix}3 & -2 & 4 & -2\\\\5 & 3 & -3 & -2\\\\5 & -2 & 2 & -2\\\\5 & -2 & -3 & 3\\end{matrix}\\right]$$\n\n\n\n\n```python\nP * D * P.inv() == M\n```\n\n\n\n\n True\n\n\n\nNote that since ``eigenvects`` also includes the ``eigenvalues``, you should use it instead of ``eigenvals`` if you also want the ``eigenvectors``. However, as computing the eigenvectors may often be costly, ``eigenvals`` should be preferred if you only wish to find the eigenvalues.\n\nIf all you want is the characteristic polynomial, use ``charpoly``. This is more efficient than ``eigenvals``, because sometimes symbolic roots can be expensive to calculate.\n\n\n```python\nlamda = symbols('lamda')\np = M.charpoly(lamda)\nfactor(p)\n```\n\n**Note:** ``lambda`` is a reserved keyword in Python, so to create a Symbol called \u03bb, while using the same names for SymPy Symbols and Python variables, use ``lamda`` (without the b). It will still pretty print as \u03bb.\n\nNon-square matrices don\u2019t have eigenvectors and therefore don\u2019t\nhave an eigendecomposition. Instead, we can use the singular value\ndecomposition to break up a non-square matrix A into left singular\nvectors, right singular vectors, and a diagonal matrix of singular\nvalues. Use the singular_values method on any matrix to find its\nsingular values.\n\n\n```python\nA\n```\n\n\n\n\n$$\\left[\\begin{matrix}1 & -1 & -1\\\\0 & 1 & 0\\\\1 & -2 & 1\\end{matrix}\\right]$$\n\n\n\n\n```python\nA.singular_values()\n```\n\n## References\n\n1. SymPy Development Team (2016). [Sympy Tutorial: Matrices](http://docs.sympy.org/latest/tutorial/matrices.html)\n2. Ivan Savov (2016). 
[Taming math and physics using SymPy](https://minireference.com/static/tutorials/sympy_tutorial.pdf)\n\nThe following cell change the style of the notebook.\n\n\n```python\nfrom IPython.core.display import HTML\ndef css_styling():\n styles = open('./styles/custom_barba.css', 'r').read()\n return HTML(styles)\ncss_styling()\n```\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "503dab8963b7b367a390b5dbaecd26d88378b788", "size": 41577, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/sympy/linear_algebra.ipynb", "max_stars_repo_name": "nicoguaro/AdvancedMath", "max_stars_repo_head_hexsha": "2749068de442f67b89d3f57827367193ce61a09c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 26, "max_stars_repo_stars_event_min_datetime": "2017-06-29T17:45:20.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-06T20:14:29.000Z", "max_issues_repo_path": "notebooks/sympy/linear_algebra.ipynb", "max_issues_repo_name": "nicoguaro/AdvancedMath", "max_issues_repo_head_hexsha": "2749068de442f67b89d3f57827367193ce61a09c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/sympy/linear_algebra.ipynb", "max_forks_repo_name": "nicoguaro/AdvancedMath", "max_forks_repo_head_hexsha": "2749068de442f67b89d3f57827367193ce61a09c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 13, "max_forks_repo_forks_event_min_datetime": "2019-04-22T08:08:56.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-27T08:15:53.000Z", "avg_line_length": 31.5934650456, "max_line_length": 2346, "alphanum_fraction": 0.5162469635, "converted": true, "num_tokens": 4632, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9416541528387691, "lm_q2_score": 0.9005297907896396, "lm_q1q2_score": 0.847987617252092}} {"text": "# Simon's Algorithm\n\n## Problem statement:\n\nGiven: a function $f$ acting on bit strings $f:\\{0,1\\}^n \\rightarrow \\{0,1\\}^n$ and a promise that $f(x)=f(x \\oplus s)$ for all $x$ (addition mod 2). The goal is to use Simon's algorithm to find the unknown string $s$.\n\n\n\n\n### Example:\n\nFor example, if $n = 3$, then the following function is an example of a function that satisfies the required and just mentioned property:\n\n|$x$| $$f(x)$$|\n|---|------|\n|000|\t101|\n|001|\t010|\n|010|\t000|\n|011|\t110|\n|100|\t000|\n|101|\t110|\n|110|\t101|\n|111|\t010|\n\n\n\nGiven $f$ is a two-to-one function i.e. it maps exactly two inputs to every unique output, and we find 2 values of input $x$ that have the same output, $f(x_1)=f(x_2)$ then it is guaranteed that $x_1 \\oplus x_2 = s$\n\nFor example, the input strings $011$ and $101$ are both mapped by $f$ to the same output string $110$. If we XOR $011$ and $101$ we obtain $s$, that is:\n\n$$011 \\oplus 101 = 110$$\n\nso for this example $s = 110$\n\n## Problem hardness\n\nTo solve this classically, you need to find two different inputs $x$ and $y$ for which $f(x)=f(y)$. Given $f$ is a blackbox, we can discover something about $f$ (or what it does) only when, for two different inputs, we obtain the same output. 
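\n\nA minimal classical sketch of this strategy (an added illustration, not part of Simon's algorithm itself) just keeps querying the black box on random inputs until a collision shows up and then XORs the two colliding inputs; the toy oracle below is the 3-bit example table above, with $s = 110$.\n\n\n```python\nimport random\n\ndef classical_find_s(f, n):\n    # query f on random n-bit strings until two different inputs collide,\n    # then XOR the colliding inputs to recover s (assumes s is not all zeros)\n    seen = {}\n    while True:\n        x = ''.join(random.choice('01') for _ in range(n))\n        y = f(x)\n        if y in seen and seen[y] != x:\n            return ''.join('1' if a != b else '0' for a, b in zip(x, seen[y]))\n        seen[y] = x\n\n# the example table from above, with s = 110\ntable = {'000': '101', '001': '010', '010': '000', '011': '110',\n         '100': '000', '101': '110', '110': '101', '111': '010'}\nprint(classical_find_s(lambda x: table[x], 3))   # '110'\n```\n\n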
In any case, we would need to guess $ \\Omega ({\\sqrt {2^{n}}})$ different inputs before being likely to find a pair on which $f$ takes the same output.\n\n## Simon's Algorithm\n\nThe high-level idea behind Simon's algorithm is to \"probe\" a quantum circuit \"enough times\" to find $n-1$ (linearly independent) n-bit strings, that is\n\n$$ y_{1},y_{2},\\dots ,y_{n-1}\\in \\{0,1\\}^{n} $$\n\nsuch that the following equations are satisfied\n\n$$ \\begin{aligned}y_{1}\\cdot s&=0\\\\y_{2}\\cdot s&=0\\\\&\\,\\,\\,\\vdots \\\\y_{n-1}\\cdot s&=0\\end{aligned}$$ \n\n\nwhere $ y_{i}\\cdot s$ is the modulo-2 dot product; that is, $ y_{i}\\cdot s=y_{i1}s_{1}\\oplus y_{i2}s_{2}\\oplus \\dots \\oplus y_{in}s_{n} $\n\nSo, this linear system contains $n-1$ linear equations in $n$ unknowns (i.e. the bits of $s$, and the goal is to solve it to obtain $s$, and $s$ is fixed for a given function $f$.\n\n\n### Simon's quantum circuit\n\nThe quantum circuit below is the implementation (and visualization) of the quantum part of Simon's algorithm.\n\n\n\nThe circuit acts on $2n$ qubits (where $n$ is the length of the bit string in question (i.e., $n=3$ for our example). Apply a Hadamard gate to the first $n$ qubits, then apply $U_f$ - which is an oracle (or \"black box\"), which knows how to compute $f$ , then apply a Hadamard gate to the first $n$ qubits.\n\nFor more details on Simon's algorithm refer to [Wikipedia]()\n\n\n## Import related lib\n\n\n```python\nfrom braket.devices import LocalSimulator\nfrom braket.circuits import Circuit\nimport random\nimport matplotlib.pyplot as plt\n```\n\n\n```python\n# the following are imported for matrix computing to resolve the final equation set.\nfrom sympy import Matrix, pprint, MatrixSymbol, expand, mod_inverse\n```\n\n\n```python\n# define the quantum device.\n# use LocalSimulator so that it can be run locally with Braket SDK installed.\ndevice = LocalSimulator()\n```\n\n\n```python\ndef generate_oracle(secret_s):\n \n # validating input secret s:\n first_1_bit_location = -1\n other_1_bit_location_list = list()\n \n for index,bit_value in enumerate(secret_s):\n if (bit_value != '0' and bit_value != '1'):\n raise Exception ('Incorrect char \\'' + bit_value + '\\' in secret string S:' + secret_s)\n else:\n if (bit_value == '1'):\n if (first_1_bit_location == -1):\n first_1_bit_location = index\n else:\n other_1_bit_location_list.append(index)\n \n if (first_1_bit_location == -1):\n raise Exception (' All 0 in secret string S')\n \n n = len(str(secret_s))\n \n oracle_circuit = Circuit()\n\n \n oracle_circuit.cnot(first_1_bit_location, first_1_bit_location+n)\n \n for other_1_bit_location in other_1_bit_location_list:\n oracle_circuit.cnot(first_1_bit_location, other_1_bit_location)\n \n for i in range(n):\n oracle_circuit.cnot(i, n+i)\n \n for other_1_bit_location in other_1_bit_location_list:\n oracle_circuit.cnot(first_1_bit_location, other_1_bit_location)\n \n return oracle_circuit\n```\n\n\n```python\ndef generate_input_circuit(source_list):\n \n input_circuit_list = list()\n \n for input_index, digit_string in enumerate(source_list):\n cur_circuit = Circuit()\n for reg_index, digit_value in enumerate(digit_string):\n if (digit_value == '0'):\n cur_circuit.i(reg_index)\n elif (digit_value == '1'):\n cur_circuit.x(reg_index)\n else:\n raise Exception('incorrect input value: \\'' + digit_value + '\\' in: ' + digit_string )\n \n input_circuit_list.append(cur_circuit)\n \n return input_circuit_list\n```\n\n\n```python\n\n```\n\n\n```python\ndef 
generate_simon_circuit(secret_s):\n \n bit_number = len(secret_s)\n \n oracle_circuit = generate_oracle(secret_s)\n \n input_circuit = Circuit()\n \n for i in range(bit_number):\n input_circuit.h(i)\n \n output_circuit = Circuit()\n \n for i in range(bit_number):\n output_circuit.h(i)\n \n simon_circuit = input_circuit + oracle_circuit + output_circuit\n \n return simon_circuit\n \n \n \n \n```\n\n\n```python\ndef generate_full_bit_string(bit_number):\n zero_string = '0' * bit_number\n result_list = list()\n for i in range(pow(2, bit_number)):\n cur_string = (zero_string + bin(i)[2:])[-bit_number:]\n result_list.append(cur_string)\n return result_list\n```\n\n\n```python\ndef generate_random_secret_s(start=2, end=8):\n random_bit_number = random.randint(start, end)\n \n secret_s_list = generate_full_bit_string(random_bit_number)[1:]\n \n candidate_number = len(secret_s_list)\n \n random_index = random.randint(0,candidate_number-1)\n \n selected_secret_s = secret_s_list[random_index]\n \n return selected_secret_s\n```\n\n\n```python\nsecret_s = generate_random_secret_s()\n```\n\n\n```python\nsecret_s\n```\n\n\n\n\n '10101010'\n\n\n\n\n```python\nbit_number = len(secret_s)\n```\n\n\n```python\nsimon_circuit = generate_simon_circuit(secret_s)\n```\n\n\n```python\nprint(simon_circuit)\n```\n\n T : |0| 1 | 2 | 3 | 4 | 5 |6| 7 | 8 |9|\n \n q0 : -H-C---------C---C---C---C---C-C---C---H-\n | | | | | | | | \n q1 : -H-|-C-------|-H-|---|---|---|-|---|-----\n | | | | | | | | | \n q2 : -H-|-|-------X---|-C-|---|---X-|-H-|-----\n | | | | | | | | \n q3 : -H-|-|-C-----H---|-|-|---|-----|---|-----\n | | | | | | | | | \n q4 : -H-|-|-|---------X-|-|-C-|-----X---|-H---\n | | | | | | | | \n q5 : -H-|-|-|-C---H-----|-|-|-|---------|-----\n | | | | | | | | | \n q6 : -H-|-|-|-|---------|-X-|-|-C-------X---H-\n | | | | | | | | \n q7 : -H-|-|-|-|-C-H-----|---|-|-|-------------\n | | | | | | | | | \n q8 : ---X-|-|-|-|-------|---|-X-|-------------\n | | | | | | | \n q9 : -----X-|-|-|-------|---|---|-------------\n | | | | | | \n q10 : -------|-|-|-------X---|---|-------------\n | | | | | \n q11 : -------X-|-|-----------|---|-------------\n | | | | \n q12 : ---------|-|-----------X---|-------------\n | | | \n q13 : ---------X-|---------------|-------------\n | | \n q14 : -----------|---------------X-------------\n | \n q15 : -----------X-----------------------------\n \n T : |0| 1 | 2 | 3 | 4 | 5 |6| 7 | 8 |9|\n\n\n\n```python\n# run 2 * bit_number times to get the y output\n# according to Simon's algorithm, we only need bit_number-1 independent y to caculate s\n# in real world, we may get y with all zeros or dependent y, \n# running bit_number-1 time is not enough to get bit_number-1 independent y\n# so we run 2 * bit_number times to generate y, \n# the complexity is O(2*bit_number), which is still O(bit_number), aka O(n)\n\ntask = device.run(simon_circuit, shots=bit_number*2)\nresult = task.result()\nprint (result.measurement_counts)\n```\n\n Counter({'0010011001101111': 1, '0001111100101100': 1, '1111010101111000': 1, '0001010001110001': 1, '1110000000100100': 1, '1011111000010011': 1, '1111101000100001': 1, '1011111001110101': 1, '0111100001000010': 1, '1001100000101111': 1, '0101000101001010': 1, '0100111100000011': 1, '1011111001110010': 1, '1000011001011001': 1, '1000100101011100': 1, '0010001001110100': 1})\n\n\n\n```python\n# Simulate partial measurement by seperating out first n bits\nanswer_plot = {}\nfor measresult in result.measurement_counts.keys():\n measresult_input = measresult[:bit_number]\n if 
measresult_input in answer_plot:\n answer_plot[measresult_input] += result.measurement_counts[measresult]\n else:\n answer_plot[measresult_input] = result.measurement_counts[measresult] \n\nprint(f\"measurement_of_input_registers: {answer_plot}\\n\")\nplt.bar(answer_plot.keys(), answer_plot.values())\nplt.show()\n```\n\n\n```python\nlAnswer = [ (k,v) for k,v in answer_plot.items() if k != \"0\"*bit_number ] #excluding the trivial all-zero\nY = []\nfor k, v in lAnswer:\n Y.append( [ int(c) for c in k ] )\n \nprint('The output we got:')\nfor a in Y:\n print (a)\n```\n\n The output we got:\n [0, 0, 1, 0, 0, 1, 1, 0]\n [0, 0, 0, 1, 1, 1, 1, 1]\n [1, 1, 1, 1, 0, 1, 0, 1]\n [0, 0, 0, 1, 0, 1, 0, 0]\n [1, 1, 1, 0, 0, 0, 0, 0]\n [1, 0, 1, 1, 1, 1, 1, 0]\n [1, 1, 1, 1, 1, 0, 1, 0]\n [0, 1, 1, 1, 1, 0, 0, 0]\n [1, 0, 0, 1, 1, 0, 0, 0]\n [0, 1, 0, 1, 0, 0, 0, 1]\n [0, 1, 0, 0, 1, 1, 1, 1]\n [1, 0, 0, 0, 0, 1, 1, 0]\n [1, 0, 0, 0, 1, 0, 0, 1]\n [0, 0, 1, 0, 0, 0, 1, 0]\n\n\n\n```python\ndef mod(x,modulus):\n numer, denom = x.as_numer_denom()\n return numer*mod_inverse(denom,modulus) % modulus\n```\n\n\n```python\nY = Matrix(Y)\nY_transformed = Y.rref(iszerofunc=lambda x: x % 2==0)\nY_new = Y_transformed[0].applyfunc(lambda x: mod(x,2)) #must takecare of negatives and fractional values\nfor row_index in range(Y_new.shape[0]):\n print (Y_new.row(row_index))\n```\n\n Matrix([[1, 0, 0, 0, 0, 0, 1, 0]])\n Matrix([[0, 1, 0, 0, 0, 0, 0, 0]])\n Matrix([[0, 0, 1, 0, 0, 0, 1, 0]])\n Matrix([[0, 0, 0, 1, 0, 0, 0, 0]])\n Matrix([[0, 0, 0, 0, 1, 0, 1, 0]])\n Matrix([[0, 0, 0, 0, 0, 1, 0, 0]])\n Matrix([[0, 0, 0, 0, 0, 0, 0, 1]])\n Matrix([[0, 0, 0, 0, 0, 0, 0, 0]])\n Matrix([[0, 0, 0, 0, 0, 0, 0, 0]])\n Matrix([[0, 0, 0, 0, 0, 0, 0, 0]])\n Matrix([[0, 0, 0, 0, 0, 0, 0, 0]])\n Matrix([[0, 0, 0, 0, 0, 0, 0, 0]])\n Matrix([[0, 0, 0, 0, 0, 0, 0, 0]])\n Matrix([[0, 0, 0, 0, 0, 0, 0, 0]])\n\n\n\n```python\n\n```\n\n\n```python\nprint(\"The hidden bistring s[ 0 ], s[ 1 ]....s[\",bit_number-1,\"] is the one satisfying the following system of linear equations:\")\nrows, cols = Y_new.shape\nresult_s = ['0']*bit_number\nfor r in range(rows):\n \n location_list = list()\n for i,v in enumerate(list(Y_new[r,:])):\n if (v == 1):\n location_list.append(i)\n \n if (len(location_list) == 1):\n print ('s[ ' + str(location_list[0]) +' ] = 0')\n elif (len(location_list) > 1):\n for location in location_list:\n result_s[location] = '1'\n Yr = [ \"s[ \"+str(location)+\" ]\" for location in location_list ]\n tStr = \" + \".join(Yr)\n print(tStr, \"= 0\")\n \nresult_s = ''.join(result_s)\n\nprint()\nprint ('Which is: ' + result_s)\n```\n\n The hidden bistring s[ 0 ], s[ 1 ]....s[ 7 ] is the one satisfying the following system of linear equations:\n s[ 0 ] + s[ 6 ] = 0\n s[ 1 ] = 0\n s[ 2 ] + s[ 6 ] = 0\n s[ 3 ] = 0\n s[ 4 ] + s[ 6 ] = 0\n s[ 5 ] = 0\n s[ 7 ] = 0\n \n Which is: 10101010\n\n\n\n```python\n\n```\n\n\n```python\n# check whether result_s is equal to secret_s:\n\nif (result_s == secret_s):\n print ('We found the answer')\n print ('\\tsecret string:' + secret_s)\n print ('\\tresult string:' + result_s)\nelse:\n print ('Error, the answer is wrong')\n```\n\n We found the answer\n \tsecret string:10101010\n \tresult string:10101010\n\n\n\n```python\n\n```\n\n### Appendix\n\nHow did we design the Oracle circuit.\n\nThe following cells explain how we designed the Oracle circuit, using an specified secret_s: 0110.\n\n#### Basic idea\n\nSay that we have a secret string '0110', we want to generate an Oracle circuit with this secret 
string.\n\nThe bit bumber of '0110' is 4, so the input qbit number should be 4.\n\nFor all the possible inputs from '0000' to '1111', we can seperate them into two groups: x1 and x2, base on the rule of Simon's problem:\n\n$$x1 \\oplus x2 = s$$\n\nPut x1 in column 1, x2 in column 2 and secret string in column 3, we got the following table:\n\n| x1 | x2 | s\n| ---- | ---- |---- |\n| 0000 | 0110 | 0110 |\n| 0001 | 0111 | 0110 |\n| 0010 | 0100 | 0110 |\n| 0011 | 0101 | 0110 |\n| 1000 | 1110 | 0110 |\n| 1001 | 1111 | 0110 |\n| 1010 | 1100 | 0110 |\n| 1011 | 1101 | 0110 |\n\n\nAfter analyzing the table, we found that we can seperate all the inputs base on the second bit of x.\n\nIn column 1, all the second bit of x1 is 0, while in column 2, all the second bit of x2 is 1.\n\n x1[1] = 0\n x2[1] = 1\n\nIn fact, for any location $j$, if $s[j] = 1$, we can use bit number $j$ to seperate all the inputs into two groups which meet the requirement of Simon's Oracle.\n\nIn our example, when s is '0110', we can use the third bit too. Of course, we will seperate the intputs in different way if we use the third bit.\n\n+ this is the first key thing we identified, use the location $j$ where we find the first 1 in secret string s. To seperate the inputs into two groups.\n\n\nAs all elements in x1 (which is column 1 in the table) are different from each other, we can use the x1 as the output of the ciruit, this will meet the requirement of Simon's Oracle cuircuit:\n\n| x1 | x2 | s | y (=x1)\n| ---- | ---- |---- |---- |\n| 0000 | 0110 | 0110 |0000 |\n| 0001 | 0111 | 0110 |0001 |\n| 0010 | 0100 | 0110 |0010 |\n| 0011 | 0101 | 0110 |0011 |\n| 1000 | 1110 | 0110 |1000 |\n| 1001 | 1111 | 0110 |1001 |\n| 1010 | 1100 | 0110 |1010 |\n| 1011 | 1101 | 0110 |1011 |\n\nFrom the requirement: $x1 \\oplus x2 = s$, we know that $x2 \\oplus s = x1$\n\nSo, for all the x2, we can make $ y = x1 = x2 \\oplus s $ \n\nThe basic logic of our circuit is:\n\n j = the location of first 1 we found in secret_s\n if ( input_x[j] == 0):\n y = input_x\n elif (input_x[j] == 1):\n y = input_x[j] XOR secret_s\n \n+ This is the second key thing we indentified: XOR the input $x$ with $s$ if bit $j$ in $s$ is $1$\n\n\n#### Implemetation\n\nThe question left to us is how to implement the basic logic in quantum circuit\n\n\n```python\n# we have the secret string '0110'\nsecret_s = '0110'\n\n# the location j of first 1 found in secret_s is 1 (0 is the first bit, 1 is the second bit)\nj = 1\n```\n\nIf we already know that `x[1] = 0`, then the quantum circuit is very simple, we just need to copy all the input qubit to output qubit.\n\nThe circuit is like this:\n\n\n```python\noracle_circuit = Circuit()\noracle_circuit.cnot(0,4).cnot(1,5).cnot(2,6).cnot(3,7)\nprint (oracle_circuit)\n```\n\n T : | 0 |\n \n q0 : -C-------\n | \n q1 : -|-C-----\n | | \n q2 : -|-|-C---\n | | | \n q3 : -|-|-|-C-\n | | | | \n q4 : -X-|-|-|-\n | | | \n q5 : ---X-|-|-\n | | \n q6 : -----X-|-\n | \n q7 : -------X-\n \n T : | 0 |\n\n\nOn the other hand, if we know that `x[1] = 1`, we need to let the output be $ x \\oplus s$ \n\nWhile, there is no other ways we can input secret s into the quantum circuit.\n\nWe need to caculate the result of $x \\oplus s $ on the fly. 
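As a quick classical illustration of the $x \oplus s$ pairing used throughout this appendix (an added sketch, not part of the original notebook; the helper name `xor_bits` is invented for this example), the x1/x2 table above can be reproduced with plain Python string handling:

```python
def xor_bits(x, s):
    """Bitwise XOR of two equally long bit strings, e.g. '0011' XOR '0110' -> '0101'."""
    return ''.join('1' if a != b else '0' for a, b in zip(x, s))

secret_s = '0110'
n = len(secret_s)
print(' x1    x2    s')
for i in range(2 ** n):
    x1 = format(i, '0{}b'.format(n))
    if x1[1] == '0':                 # keep the group whose bit at location j = 1 is 0, as in the table
        print(x1, xor_bits(x1, secret_s), secret_s)
```

Each printed row is one $(x1, x2)$ pair that the oracle has to map to the same output.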
\n\nLook into the $\\oplus$ operation bit by bit, we know that when `s[i] = 0` we keep `x[i]`, when `s[i] = 1` we flip `x[i]`.\n\nFor the secret s '0110' the circuit is like this:\n\n\n```python\noracle_circuit = Circuit()\noracle_circuit.cnot(0,4).cnot(3,7).x(1).x(2).cnot(1,5).cnot(2,6)\nprint (oracle_circuit)\n```\n\n T : | 0 | 1 |\n \n q0 : -C-------\n | \n q1 : -|-X-C---\n | | \n q2 : -|-X-|-C-\n | | | \n q3 : -|-C-|-|-\n | | | | \n q4 : -X-|-|-|-\n | | | \n q5 : ---|-X-|-\n | | \n q6 : ---|---X-\n | \n q7 : ---X-----\n \n T : | 0 | 1 |\n\n\nNow we have two seperated circuits for two cases: `x[1] = 0` and `x[1] = 1`.\n\nWe need to combine them into one circuit, that means we don't know whether `x[1] = 0` or `x[1] = 1`.\n\nThe key thing is that we need to flip qbit1 and qbit2 if `input_x[1] = 1`, that is what exactly cnot gate can do.\n\nFor qbit2, we can use qbit1 to control the flip.\n\nFor qbit1 itself, there is a little trouble, we do not have cnote gate can flip the control bit itself.\n\nBut if we look into the true table of qbit1, we found that we actually don't need to do anything, we just need to output 0. If the input of this bit is 0, that is `x[1] = 0`, we need to output `x[1]`, which is 0, if the input of this bit 1, that is `x[1] = 1`, we need to flip itself and then copy to output bit, which is still 0.\n\nSo, we don't need to do anything for qbit1, while, when we are designing quantum circuit, we are requested to generate continued qbit, we must assign some gates for this qbit. So, two cnot gate which always generate 0 is a good choice.\n\nNow, the circuit is something like this:\n\n\n\n```python\noracle_circuit = Circuit()\noracle_circuit.cnot(0,4).cnot(1,5).cnot(3,7).cnot(1,2).cnot(1,5).cnot(2,6)\nprint (oracle_circuit)\n```\n\n T : | 0 |1| 2 |\n \n q0 : -C-----------\n | \n q1 : -|-C---C-C---\n | | | | \n q2 : -|-|---X-|-C-\n | | | | \n q3 : -|-|-C---|-|-\n | | | | | \n q4 : -X-|-|---|-|-\n | | | | \n q5 : ---X-|---X-|-\n | | \n q6 : -----|-----X-\n | \n q7 : -----X-------\n \n T : | 0 |1| 2 |\n\n\n\n```python\n\n```\n\nFor quantum circuit, we need to make sure that the input `|x>` is equal to output `|x>`. \n\nSo we add another cnot gate to convert qbit2 to what it is originaly:\n\n\n```python\noracle_circuit = Circuit()\noracle_circuit.cnot(0,4).cnot(1,5).cnot(3,7).cnot(1,2).cnot(1,5).cnot(2,6).cnot(1,2)\nprint (oracle_circuit)\n```\n\n T : | 0 |1| 2 |3|\n \n q0 : -C-------------\n | \n q1 : -|-C---C-C---C-\n | | | | | \n q2 : -|-|---X-|-C-X-\n | | | | \n q3 : -|-|-C---|-|---\n | | | | | \n q4 : -X-|-|---|-|---\n | | | | \n q5 : ---X-|---X-|---\n | | \n q6 : -----|-----X---\n | \n q7 : -----X---------\n \n T : | 0 |1| 2 |3|\n\n\nNow we have an Oracle circuit which meet the requirement of Simon's problem.\n\nBut there is still somthing missing, in this circuit, if `x[j] = 0` then ` f(x) = x`.\n\nIt doesn't look like a real \"function\", and the user of this Oracle circuit can find some clues of secret_s\uff0cthen they may solve the problem in shorter time.\n\nA simple solution is adding some x gates in the output qubit to flip some of the output bits, to make the output look different from the input when `x[j] = 0`. 
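To see concretely that flipping output bits with a *fixed* mask is harmless (an illustrative classical sketch added here, not from the original notebook; the function names are invented), one can model the oracle logic described in this appendix and check Simon's promise $f(x) = f(x \oplus s)$ with and without such a mask:

```python
def xor_bits(a, b):
    """Bitwise XOR of two equally long bit strings."""
    return ''.join('1' if p != q else '0' for p, q in zip(a, b))

def classical_oracle(x, s, mask=None):
    """Classical model of the oracle logic above: copy x when x[j] == 0, output x XOR s when
    x[j] == 1, then optionally flip some output bits with a fixed mask (the extra X gates)."""
    j = s.index('1')                      # location of the first '1' in the secret string
    out = x if x[j] == '0' else xor_bits(x, s)
    return out if mask is None else xor_bits(out, mask)

s, n = '0110', 4
for mask in (None, '1010'):               # no mask, and one arbitrary but fixed X-gate mask
    for i in range(2 ** n):
        x = format(i, '0{}b'.format(n))
        assert classical_oracle(x, s, mask) == classical_oracle(xor_bits(x, s), s, mask)
print('f(x) == f(x XOR s) holds for every x, with or without a fixed output mask')
```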
\n\nOnly if we use same x gates for all shots, it doesn't not impact the result of Simon's algorithm, as the output still meet the requirement of Simon's Oracle.\n\n\n```python\n\n```\n", "meta": {"hexsha": "276957e9b5bc7482e9b260a9c8dfcc7979d36498", "size": 38795, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "quantum_gate/Simons_Algorithm.ipynb", "max_stars_repo_name": "DamonDeng/braket_quantum_computing", "max_stars_repo_head_hexsha": "9b294f8d971af83fba216e6e3671f855497d52ba", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "quantum_gate/Simons_Algorithm.ipynb", "max_issues_repo_name": "DamonDeng/braket_quantum_computing", "max_issues_repo_head_hexsha": "9b294f8d971af83fba216e6e3671f855497d52ba", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "quantum_gate/Simons_Algorithm.ipynb", "max_forks_repo_name": "DamonDeng/braket_quantum_computing", "max_forks_repo_head_hexsha": "9b294f8d971af83fba216e6e3671f855497d52ba", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.9125514403, "max_line_length": 8192, "alphanum_fraction": 0.5703312283, "converted": true, "num_tokens": 6523, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9372107949104866, "lm_q2_score": 0.9046505351008904, "lm_q1q2_score": 0.8478482471181026}} {"text": "#Snippets and Programs from Chapter 5: Playing with Sets and Probability\n\n\n```python\n%matplotlib inline\n```\n\n\n```python\n#P125: Finding the power set of a set\n>>> from sympy import FiniteSet\n>>> s = FiniteSet(1, 2, 3)\n>>> ps = s.powerset()\n>>> ps\n```\n\n\n\n\n {EmptySet(), {1}, {2}, {3}, {1, 2}, {1, 3}, {2, 3}, {1, 2, 3}}\n\n\n\n\n```python\n#P126: Union of two Sets\n>>> from sympy import FiniteSet \n>>> s = FiniteSet(1, 2, 3)\n>>> t = FiniteSet(2, 4, 6)\n>>> s.union(t)\n{1, 2, 3, 4, 6}\n```\n\n\n\n\n {1, 2, 3, 4, 6}\n\n\n\n\n```python\n#P127: Intersection of two Sets\n>>> from sympy import FiniteSet\n>>> s = FiniteSet(1, 2) \n>>> t = FiniteSet(2, 3) \n>>> s.intersect(t)\n```\n\n\n\n\n {2}\n\n\n\n\n```python\n#P127/128: Cartesian product of two Sets\n>>> from sympy import FiniteSet \n>>> s = FiniteSet(1, 2)\n>>> t = FiniteSet(3, 4)\n>>> p = s*t\n>>> for elem in p:\n print(elem)\n```\n\n (1, 3)\n (1, 4)\n (2, 3)\n (2, 4)\n\n\n\n```python\n#P130: Different gravity, different results\nfrom sympy import FiniteSet, pi\ndef time_period(length, g):\n T = 2*pi*(length/g)**0.5\n return T\n\nif __name__ == '__main__':\n L = FiniteSet(15, 18, 21, 22.5, 25)\n g_values = FiniteSet(9.8, 9.78, 9.83)\n print('{0:^15}{1:^15}{2:^15}'.format('Length(cm)', 'Gravity(m/s^2)', 'Time Period(s)'))\n for elem in L*g_values:\n l = elem[0]\n g = elem[1]\n t = time_period(l/100, g)\n print('{0:^15}{1:^15}{2:^15.3f}'.format(float(l), float(g), float(t)))\n\n```\n\n Length(cm) Gravity(m/s^2) Time Period(s) \n 15.0 9.78 0.778 \n 15.0 9.8 0.777 \n 15.0 9.83 0.776 \n 18.0 9.78 0.852 \n 18.0 9.8 0.852 \n 18.0 9.83 0.850 \n 21.0 9.78 0.921 \n 21.0 9.8 0.920 \n 21.0 9.83 0.918 \n 22.5 9.78 0.953 \n 22.5 9.8 0.952 \n 22.5 9.83 0.951 \n 25.0 9.78 1.005 \n 25.0 9.8 1.004 \n 25.0 9.83 1.002 \n\n\n\n```python\n#P132: Probability of a 
Prime number appearing when a 20-sided dice is rolled\ndef probability(space, event):\n return len(event)/len(space)\n\ndef check_prime(number): \n if number != 1:\n for factor in range(2, number):\n if number % factor == 0:\n return False\n else:\n return False\n return True\n\nif __name__ == '__main__':\n space = FiniteSet(*range(1, 21))\n primes = []\n for num in space:\n if check_prime(num):\n primes.append(num)\n event= FiniteSet(*primes)\n p = probability(space, event)\n print('Sample space: {0}'.format(space))\n print('Event: {0}'.format(event))\n print('Probability of rolling a prime: {0:.5f}'.format(p))\n```\n\n Sample space: {1, 2, 3, ..., 18, 19, 20}\n Event: {2, 3, 5, 7, 11, 13, 17, 19}\n Probability of rolling a prime: 0.40000\n\n\n\n```python\n#P134: Probability of event A or event B\n>>> from sympy import FiniteSet\n>>> s = FiniteSet(1, 2, 3, 4, 5, 6) \n>>> a = FiniteSet(2, 3, 5)\n>>> b = FiniteSet(1, 3, 5)\n>>> e = a.union(b) \n>>> len(e)/len(s)\n```\n\n\n\n\n 0.6666666666666666\n\n\n\n\n```python\n#P134: Probability of event A and event B\n>>> from sympy import FiniteSet\n>>> s = FiniteSet(1, 2, 3, 4, 5, 6) \n>>> a = FiniteSet(2, 3, 5)\n>>> b = FiniteSet(1, 3, 5)\n>>> e = a.intersect(b)\n>>> len(e)/len(s)\n```\n\n\n\n\n 0.3333333333333333\n\n\n\n\n```python\n#P135: Can you Roll that score?\n\n'''\nRoll a die until the total score is 20\n'''\nimport matplotlib.pyplot as plt\nimport random\n\ntarget_score = 20\ndef roll():\n return random.randint(1, 6)\n\nif __name__ == '__main__':\n score = 0\n num_rolls = 0\n while score < target_score:\n die_roll = roll()\n num_rolls += 1\n print('Rolled: {0}'.format(die_roll))\n score += die_roll\n print('Score of {0} reached in {1} rolls'.format(score, num_rolls))\n```\n\n Rolled: 5\n Rolled: 4\n Rolled: 2\n Rolled: 5\n Rolled: 2\n Rolled: 6\n Score of 24 reached in 6 rolls\n\n\n\n```python\n#P136: Is the target score possible?\nfrom sympy import FiniteSet\nimport random\ndef find_prob(target_score, max_rolls):\n die_sides = FiniteSet(1, 2, 3, 4, 5, 6)\n # sample space\n s = die_sides**max_rolls\n # Find the event set\n if max_rolls > 1:\n success_rolls = []\n for elem in s:\n if sum(elem) >= target_score:\n success_rolls.append(elem)\n else:\n if target_score > 6:\n success_rolls = []\n else:\n success_rolls = []\n for roll in die_sides:\n if roll >= target_score:\n success_rolls.append(roll)\n e = FiniteSet(*success_rolls)\n # calculate the probability of reaching target score\n return len(e)/len(s)\nif __name__ == '__main__':\n target_score = int(input('Enter the target score: '))\n max_rolls = int(input('Enter the maximum number of rolls allowed: '))\n p = find_prob(target_score, max_rolls)\n print('Probability: {0:.5f}'.format(p))\n```\n\n Enter the target score: 25\n Enter the maximum number of rolls allowed: 5\n Probability: 0.03241\n\n\n\n```python\n#P139: Simulate a fictional ATM\n'''\nSimulate a fictional ATM that dispenses dollar bills\nof various denominations with varying probability\n'''\n\nimport random\nimport matplotlib.pyplot as plt\n\ndef get_index(probability):\n c_probability = 0\n sum_probability = []\n for p in probability:\n c_probability += p\n sum_probability.append(c_probability)\n r = random.random()\n for index, sp in enumerate(sum_probability):\n if r <= sp:\n return index\n return len(probability)-1\n\ndef dispense():\n dollar_bills = [5, 10, 20, 50]\n probability = [1/6, 1/6, 1/3, 1/3]\n bill_index = get_index(probability)\n return dollar_bills[bill_index]\n\n# Simulate a large number of bill 
withdrawls\nif __name__ == '__main__':\n bill_dispensed = []\n for i in range(10000):\n bill_dispensed.append(dispense())\n # plot a histogram \n plt.hist(bill_dispensed)\n plt.show()\n \n```\n\n\n```python\n#P140: Draw a Venn diagram for two sets\n'''\nDraw a Venn diagram for two sets\n'''\nfrom matplotlib_venn import venn2\nimport matplotlib.pyplot as plt\nfrom sympy import FiniteSet\ndef draw_venn(sets):\n venn2(subsets=sets) \n plt.show()\n \nif __name__ == '__main__':\n s1 = FiniteSet(1, 3, 5, 7, 9, 11, 13, 15, 17, 19)\n s2 = FiniteSet(2, 3, 5, 7, 11, 13, 17, 19)\n draw_venn([s1, s2])\n```\n", "meta": {"hexsha": "3fc1c2ab283caeff159c29e24431fc8097b130d6", "size": 29622, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "chapter5/Chapter5.ipynb", "max_stars_repo_name": "hexu1985/Doing.Math.With.Python", "max_stars_repo_head_hexsha": "b6a02805cd450325e794a49f55d2d511f9db15a5", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 109, "max_stars_repo_stars_event_min_datetime": "2015-08-28T10:23:24.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-15T01:39:51.000Z", "max_issues_repo_path": "chapter5/Chapter5.ipynb", "max_issues_repo_name": "hexu1985/Doing.Math.With.Python", "max_issues_repo_head_hexsha": "b6a02805cd450325e794a49f55d2d511f9db15a5", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2015-12-07T19:35:30.000Z", "max_issues_repo_issues_event_max_datetime": "2021-05-01T07:25:42.000Z", "max_forks_repo_path": "chapter5/Chapter5.ipynb", "max_forks_repo_name": "hexu1985/Doing.Math.With.Python", "max_forks_repo_head_hexsha": "b6a02805cd450325e794a49f55d2d511f9db15a5", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 74, "max_forks_repo_forks_event_min_datetime": "2015-10-15T18:09:15.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-30T05:06:21.000Z", "avg_line_length": 60.950617284, "max_line_length": 10200, "alphanum_fraction": 0.7467422861, "converted": true, "num_tokens": 2138, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9372107896491796, "lm_q2_score": 0.9046505351008906, "lm_q1q2_score": 0.8478482423584585}} {"text": "# MS-E2121 Exercise session 9\n### Problem 9.1: Uncapacitated Facility Location (UFL)\n\n**a)** \nLet $N = \\{1,\\dots,n\\}$ be a set of potential facilities and $M = \\{1,\\dots,m\\}$ a set of clients. Let $y_j = 1$ if facility $j$ is opened, and $y_j = 0$ otherwise. Moreover, let $x_{ij}$ be the fraction of client $i$'s demand satisfied from facility $j$. The UFL can be formulated as the mixed-integer problem (MIP): \n\n$$\\begin{align}\n \\text{(UFL-W)} : \\quad &\\min_{x,y} \\sum_{j\\in N} f_jy_j + \\sum_{i\\in M}\\sum_{j\\in N} c_{ij}x_{ij} \\\\\n &\\text{s.t.} \\\\\n &\\quad \\sum_{j\\in N}x_{ij} = 1, &\\forall i \\in M,\\\\\n &\\quad \\sum_{i\\in M}x_{ij} \\leq my_j, &\\forall j \\in N,\\\\\n &\\quad x_{ij} \\geq 0, &\\forall i \\in M, \\forall j \\in N,\\\\\n &\\quad y_j \\in \\{0,1\\}, &\\forall j\\in N,\n\\end{align}$$\n\nwhere $f_j$ is the cost of opening facility $j$, and $c_{ij}$ is the cost of satisfying client $i$'s demand from facility $j$. 
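A side note on the constraint $\sum_{i\in M} x_{ij} \leq m y_j$ (added here for clarity, it is not part of the original exercise text): it is an *aggregated*, big-M-style way of forcing $x_{ij} = 0$ for every client $i$ whenever facility $j$ is closed ($y_j = 0$), while imposing nothing when $y_j = 1$, because $\sum_{i\in M} x_{ij} \leq m$ holds automatically. Part (b) below states the same requirement client by client as $x_{ij} \leq y_j$; summing those $m$ inequalities over $i \in M$ gives back

$$\sum_{i\in M} x_{ij} \leq m y_j,$$

so every point feasible for the disaggregated LP relaxation is also feasible for the aggregated one, and the part (b) relaxation can therefore only produce an equal or larger (i.e. tighter) LP bound for this minimization problem.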
Consider an instance of the UFL with opening costs $f=(4,3,4,4,7)$ and client costs\n\n$$\\begin{align*}\n (c_{ij}) = \\left(\n\t\\begin{array}{ccccc}\n\t\t12 & 13 & 6 & 0 & 1 \\\\\n\t\t8 & 4 & 9 & 1 & 2 \\\\\n\t\t2 & 6 & 6 & 0 & 1 \\\\\n\t\t3 & 5 & 2 & 1 & 8 \\\\\n\t\t8 & 0 & 5 & 10 & 8 \\\\\n\t\t2 & 0 & 3 & 4 & 1\n\t\\end{array}\n \\right)\n\\end{align*}$$\n\nImplement (the model) and solve the problem with Julia using JuMP.\n\n**b)** \nAn alternative formulation of the UFL is of the form\n\n$$\\begin{align}\n \\text{(UFL-S)} : \\quad &\\min_{x,y} \\sum_{j\\in N}f_jy_j + \\sum_{i\\in M}\\sum_{j\\in N}c_{ij}x_{ij}\\\\\n &\\text{s.t.} \\\\\n &\\quad \\sum_{j\\in N}x_{ij} = 1, &\\forall i \\in M,\\\\\n &\\quad x_{ij} \\leq y_j, &\\forall i\\in M, \\forall j \\in N,\\\\\n &\\quad x_{ij} \\geq 0, &\\forall i \\in M, \\forall j \\in N,\\\\\n &\\quad y_j \\in \\{0,1\\}, &\\forall j\\in N.\n\\end{align}$$\n\n\nLinear programming (LP) relaxations of these problems can be obtained by relaxing the binary constraints $y_j\\in \\{0,1\\}$ to $0 \\leq y_j \\leq 1$ for all $j \\in N$. For the same instance as in part (a), solve the LP relaxations of UFL-W and UFL-S with Julia using JuMP, and compare the optimal costs of the LP relaxations against the optimal integer cost obtained in part (a).\n\n\n```julia\nusing JuMP, Cbc\n```\n\nWrite down the problem data\n\n\n```julia\nf = [4 3 4 4 7] # Facility opening costs\nc = [12 13 6 0 1; 8 4 9 1 2; 2 6 6 0 1; 3 5 2 1 8; 8 0 5 10 8; 2 0 3 4 1] # Cost of satisfying demand\n(m, n) = size(c)\nM = 1:m # Set of facilities\nN = 1:n;# Set of clients\n```\n\nImplement the problem in JuMP\n\n\n```julia\nufl_w = Model(Cbc.Optimizer)\n\n@variable(ufl_w, x[M,N] >= 0) # Fraction of demand (client i) satisfied by facility j\n@variable(ufl_w, y[N], Bin) # Facility location\n\n# Minimize total cost\n@objective(ufl_w, Min, sum(f[j]*y[j] for j in N) + sum(c[i,j]*x[i,j] for i in M, j in N)) \n\n# For each client, the demand must be fulfilled\n@constraint(ufl_w, demand[i in M], sum(x[i,j] for j in N) == 1)\n# A big-M style constraint stating that facility j can't send out anything if y[j]==0\n@constraint(ufl_w, supply[j in N], sum(x[i,j] for i in M) <= m*y[j])\n\noptimize!(ufl_w)\n```\n\n\n```julia\nprintln(\"UFL-W MILP:\")\nprintln(\"Optimal value $(objective_value(ufl_w))\")\nprintln(\"with y = $(value.(y).data)\")\n```\n\n\n```julia\nufl_w_rel = Model(Cbc.Optimizer)\n\n@variable(ufl_w_rel, x[M,N] >= 0) # Fraction of demand (client i) satisfied by facility j\n@variable(ufl_w_rel, 0<=y[N]<=1) # Facility location\n\n# Minimize total cost\n@objective(ufl_w_rel, Min, sum(f[j]*y[j] for j in N) + sum(c[i,j]*x[i,j] for i in M, j in N)) \n\n# For each client, the demand must be fulfilled\n@constraint(ufl_w_rel, demand[i in M], sum(x[i,j] for j in N) == 1)\n# A big-M style constraint stating that facility j can't send out anything if y[j]==0\n@constraint(ufl_w_rel, supply[j in N], sum(x[i,j] for i in M) <= m*y[j])\n\noptimize!(ufl_w_rel)\n```\n\n\n```julia\nprintln(\"UFL-W LP:\")\nprintln(\"Optimal value $(objective_value(ufl_w_rel))\")\nprintln(\"with y = $(value.(y).data)\")\n```\n\n\n```julia\nufl_s_rel = Model(Cbc.Optimizer)\n\n@variable(ufl_s_rel, x[M,N] >= 0)\n@variable(ufl_s_rel, 0<=y[N]<=1)\n\n@objective(ufl_s_rel, Min, sum(f[j]*y[j] for j in N) + sum(c[i,j]*x[i,j] for i in M, j in N))\n\n@constraint(ufl_s_rel, demand[i in M], sum(x[i,j] for j in N) == 1)\n# The difference between the models is that UFL-S has m constraints telling that nothing can be sent to client i from 
facility j if y[j]==0\n# In UFL-W, there is a single constraint telling that nothing can be sent from facility j if y[j]==0\n@constraint(ufl_s_rel, supply[i in M, j in N], x[i,j] <= y[j])\n\noptimize!(ufl_s_rel)\n```\n\n\n```julia\nprintln(\"UFL-S LP:\")\nprintln(\"Optimal value $(objective_value(ufl_s_rel))\")\nprintln(\"with y = $(value.(y).data)\")\n```\n\n#### Branching\nWe see that the UFL-S relaxation produces an integer solution, meaning that we have an integer optimal solution and no branching needs to be done. However, if we used UFL-W instead, we would need to do B&B or something else to obtain the integer optimum. In the UFL-W LP relaxation solution (0, 1/3, 0, 2/3, 0), we have two fractional variables $y_2$ and $y_4$, and we can branch on one of them. Let's choose $y_2$ and see what happens if we set it to 0 or 1.\n\n\n```julia\nufl_w_rel_y2_0 = Model(Cbc.Optimizer)\n\n@variable(ufl_w_rel_y2_0, x[M,N] >= 0)\n@variable(ufl_w_rel_y2_0, 0<=y[N]<=1)\n\n@objective(ufl_w_rel_y2_0, Min, sum(f[j]*y[j] for j in N) + sum(c[i,j]*x[i,j] for i in M, j in N))\n\n@constraint(ufl_w_rel_y2_0, demand[i in M], sum(x[i,j] for j in N) == 1)\n@constraint(ufl_w_rel_y2_0, supply[j in N], sum(x[i,j] for i in M) <= m*y[j])\n@constraint(ufl_w_rel_y2_0, y[2] == 0)\n\noptimize!(ufl_w_rel_y2_0)\n```\n\n\n```julia\nufl_w_rel_y2_1 = Model(Cbc.Optimizer)\n\n@variable(ufl_w_rel_y2_1, x[M,N] >= 0)\n@variable(ufl_w_rel_y2_1, 0<=y[N]<=1)\n\n@objective(ufl_w_rel_y2_1, Min, sum(f[j]*y[j] for j in N) + sum(c[i,j]*x[i,j] for i in M, j in N))\n\n@constraint(ufl_w_rel_y2_1, demand[i in M], sum(x[i,j] for j in N) == 1)\n@constraint(ufl_w_rel_y2_1, supply[j in N], sum(x[i,j] for i in M) <= m*y[j])\n@constraint(ufl_w_rel_y2_1, y[2] == 1)\n\noptimize!(ufl_w_rel_y2_1)\n```\n\n\n```julia\nprintln(\"UFL-W LP with y2=0:\")\nprintln(\"Optimal value $(objective_value(ufl_w_rel_y2_0))\")\nprintln(\"with y = $(value.(ufl_w_rel_y2_0[:y]).data)\")\n\nprintln()\n\nprintln(\"UFL-W LP with y2=1:\")\nprintln(\"Optimal value $(objective_value(ufl_w_rel_y2_1))\")\nprintln(\"with y = $(value.(ufl_w_rel_y2_1[:y]).data)\")\n```\n\nBoth branches have fractional solutions, and more branching is thus needed. You can practice that in the next exercise.\n#' \n### Problem 9.3: Solving Branch & Bound (B&B) graphically\n\n*You can do this with pen and paper if you want to do it graphically, or solve the problems using JuMP instead if you don't feel like drawing.*\n\nConsider the following integer programming problem $IP$:\n\n$$\\begin{matrix}\n\\text{max} &x_{1} &+&2x_{2} & \\\\\n\\text{s.t.}&-3x_{1} &+&4x_{2} &\\le 4 \\\\\n&3x_{1} &+&2x_{2} &\\le 11 \\\\\n&2x_{1} &-&x_{2} &\\le 5 \\\\\n&x_{1}, &x_{2} & \\text{integer} &\\\\\n\\end{matrix}$$\n\nPlot (or draw) the feasible region of the linear programming (LP) relaxation of the problem $IP$, then solve the problems using the figure. Recall that the LP relaxation of $IP$ is obtained by replacing the integrality constraints $x_1,x_2\\in \\mathbb{Z}_+$ by linear nonnegativity $x_1,x_2\\geq 0$ and including (possible) upper bounds corresponding to the upper bounds of the integer variables ($x_1,x_2\\leq 1$ for binary variables). \n\n(a) What is the optimal cost $z_{LP}$ of the LP relaxation of the problem $IP$? What is the optimal cost $z$ of the problem $IP$?\n\n(b) Draw the border of the convex hull of the feasible solutions of the problem $IP$. 
Recall that the convex hull represents the *ideal* formulation for the problem $IP$.\n\n(c) Solve the problem $IP$ by LP-relaxation based Branch \\& Bound (B\\&B). You can solve the LP relaxations at each node of the B\\&B tree graphically. Start the B\\&B procedure without any primal bound.\n\nCheck your solutions using JuMP. Make sure to point out the optimal solutions in the figure, as well as giving their numerical values.\n\n\n```julia\n# TODO: add your code here\n```\n\n\n```julia\n# TODO: add your code here\n```\n\n\n```julia\n# TODO: add your code here\n```\n\n\n```julia\n# TODO: add your code here\n```\n\n\n```julia\n# TODO: add your code here\n```\n\n\n```julia\n# TODO: add your code here\n```\n\n\n```julia\n# TODO: add your code here\n```\n\n\n```julia\n# TODO: add your code here\n```\n\n\n```julia\n# TODO: add your code here\n```\n\n\n```julia\n\n```\n", "meta": {"hexsha": "96c41b93117a5f4ccb0c73fbb68ab3c96f547684", "size": 12936, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "exercises/ex-09/exercise9_skeleton.ipynb", "max_stars_repo_name": "nnhjy/lp-julia", "max_stars_repo_head_hexsha": "06c63cf06b6911bc4ab99a8c18b33bca2b61083c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "exercises/ex-09/exercise9_skeleton.ipynb", "max_issues_repo_name": "nnhjy/lp-julia", "max_issues_repo_head_hexsha": "06c63cf06b6911bc4ab99a8c18b33bca2b61083c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "exercises/ex-09/exercise9_skeleton.ipynb", "max_forks_repo_name": "nnhjy/lp-julia", "max_forks_repo_head_hexsha": "06c63cf06b6911bc4ab99a8c18b33bca2b61083c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.1692307692, "max_line_length": 465, "alphanum_fraction": 0.5417439703, "converted": true, "num_tokens": 2985, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.953966093674472, "lm_q2_score": 0.8887588008585925, "lm_q1q2_score": 0.8478457614738795}} {"text": "# Exercise\n\n\n```python\n#%pylab inline\n%pylab notebook\n# pylab Populating the interactive namespace from numpy and matplotlib\n# numpy for numerical computation\n# matplotlib for ploting\nimport numpy as np\nfrom numpy import append, array, diagonal, tril, triu\nfrom numpy.linalg import inv\nfrom scipy.linalg import lu\n#from scipy.linalg import solve\nfrom pprint import pprint\nfrom numpy import array, zeros, diag, diagflat, dot\nfrom sympy import *\nimport sympy as sym\ninit_printing()\n```\n\n Populating the interactive namespace from numpy and matplotlib\n\n\n /opt/conda/lib/python3.5/site-packages/IPython/core/magics/pylab.py:161: UserWarning: pylab import has clobbered these variables: ['Polygon', 'log', 'sqrt', 'source', 'det', 'ones', 'power', 'zeros', 'sinc', 'solve', 'exp', 'interactive', 'product', 'sign', 'tanh', 'sinh', 'plotting', 'test', 'flatten', 'multinomial', 'poly', 'cos', 'Circle', 'cbrt', 'var', 'transpose', 'tan', 'diff', 'roots', 'mod', 'seterr', 'trace', 'plot', 'reshape', 'gamma', 'beta', 'pi', 'eye', 'cosh', 'trunc', 'take', 'diag', 'conjugate', 'floor', 'invert', 'vectorize', 'add', 'prod', 'binomial', 'nan', 'sin']\n `%matplotlib` prevents importing * from pylab and numpy\n \"\\n`%matplotlib` prevents importing * from pylab and numpy\"\n\n\n\n```python\n\n```\n\n2.1 Solve Ax =b\n\n\n```python\nA = array([[54, 14, -11, 2],\n [14, 50, -4, 29],\n [-11, -4, 55, 22],\n [2, 29, 22, 95]])\nb = array([1, 1, 1, 1])\nx0 = [1, 1, 1,1]\n\n```\n\n## (a) L-U decomposition\n\n\n```python\nn = 20\n\np, l, u = lu(A)\n```\n\n\n```python\np\n```\n\n\n\n\n array([[ 1., 0., 0., 0.],\n [ 0., 1., 0., 0.],\n [ 0., 0., 1., 0.],\n [ 0., 0., 0., 1.]])\n\n\n\n\n```python\nl\n```\n\n\n\n\n array([[ 1. , 0. , 0. , 0. ],\n [ 0.25925926, 1. , 0. , 0. ],\n [-0.2037037 , -0.02476038, 1. , 0. ],\n [ 0.03703704, 0.61421725, 0.43831321, 1. ]])\n\n\n\n\n```python\nu\n```\n\n\n\n\n array([[ 54. , 14. , -11. , 2. ],\n [ 0. , 46.37037037, -1.14814815, 28.48148148],\n [ 0. , 0. , 52.73083067, 23.11261981],\n [ 0. , 0. , 0. , 67.30154198]])\n\n\n\n\n```python\nb\n```\n\n\n\n\n array([1, 1, 1, 1])\n\n\n\n\n```python\ny = np.linalg.solve(l, b)\ny \n\n```\n\n\n\n\n array([ 1. , 0.74074074, 1.22204473, -0.02765113])\n\n\n\n\n```python\nx = np.linalg.solve(u, y)\nx\n```\n\n\n\n\n array([ 0.01893441, 0.01680508, 0.02335523, -0.00041085])\n\n\n\n## (b) Gauss-Jacobi iteration\n\n\n```python\ndef gjacobi(A, b, maxit=1000,tol = 1/1000, x0=None):\n \"\"\"\n Solve a linear equation by the gauss jacobi iteration outlined in the book.\n Follows the eq:\n x = inv(D)(b - Rx)\n Where D is the diagonal matrix of A and R is the remainder s.t D + R = A\n \"\"\"\n # If we have not provided an initial array for x make a new one\n if x0==None:\n x = array([1 for _ in range(A.shape[1])])\n else:\n x = x0\n d = np.diag(np.diag(A))\n \n for it in np.arange(maxit):\n dx = inv(d).dot(b - A.dot(x))\n x = x + dx\n if np.linalg.norm(dx)\n\n## Derivation of the general formula\n\nIt is possible to derive a general formula for $F_n$ without computing all the previous numbers in the sequence. If a gemetric series (i.e. 
a series with a constant ratio between consecutive terms $r^n$) is to solve the difference equation, we must have\n\n\\begin{aligned}\n r^n = r^{n-1} + r^{n-2} \\\\\n\\end{aligned}\n\nwhich is equivalent to\n\n\\begin{aligned}\n r^2 = r + 1 \\\\\n\\end{aligned}\n\nThis equation has two unique solutions\n\\begin{aligned}\n \\varphi = & \\frac{1 + \\sqrt{5}}{2} \\approx 1.61803\\cdots \\\\\n \\psi = & \\frac{1 - \\sqrt{5}}{2} = 1 - \\varphi = - {1 \\over \\varphi} \\approx -0.61803\\cdots \\\\\n\\end{aligned}\n\nIn particular the larger root is known as the _golden ratio_\n\\begin{align}\n\\varphi = \\frac{1 + \\sqrt{5}}{2} \\approx 1.61803\\cdots\n\\end{align}\n\nNow, since both roots solve the difference equation for Fibonacci numbers, any linear combination of the two sequences also solves it\n\n\\begin{aligned}\n a \\left(\\frac{1 + \\sqrt{5}}{2}\\right)^n + b \\left(\\frac{1 - \\sqrt{5}}{2}\\right)^n \\\\\n\\end{aligned}\n\nIt's not hard to see that all Fibonacci numbers must be of this general form because we can uniquely solve for $a$ and $b$ such that the initial conditions of $F_1 = 1$ and $F_0 = 0$ are met\n\n\\begin{equation}\n\\left\\{\n\\begin{aligned}\n F_0 = 0 = a \\left(\\frac{1 + \\sqrt{5}}{2}\\right)^0 + b \\left(\\frac{1 - \\sqrt{5}}{2}\\right)^0 \\\\\n F_1 = 1 = a \\left(\\frac{1 + \\sqrt{5}}{2}\\right)^1 + b \\left(\\frac{1 - \\sqrt{5}}{2}\\right)^1 \\\\\n\\end{aligned}\n\\right.\n\\end{equation}\n\nyielding\n\n\\begin{equation}\n\\left\\{\n\\begin{aligned}\n a = \\frac{1}{\\sqrt{5}} \\\\\n b = \\frac{-1}{\\sqrt{5}} \\\\\n\\end{aligned}\n\\right.\n\\end{equation}\n\nWe have therefore derived the general formula for the $n$-th Fibonacci number\n\n\\begin{aligned}\n F_n = \\frac{1}{\\sqrt{5}} \\left(\\frac{1 + \\sqrt{5}}{2}\\right)^n - \\frac{1}{\\sqrt{5}} \\left(\\frac{1 - \\sqrt{5}}{2}\\right)^n \\\\\n\\end{aligned}\n\nSince the second term has an absolute value smaller than $1$, we can see that the ratios of Fibonacci numbers converge to the golden ratio\n\n\\begin{aligned}\n \\lim_{n \\rightarrow \\infty} \\frac{F_n}{F_{n-1}} = \\frac{1 + \\sqrt{5}}{2}\n\\end{aligned}\n\n## Various implementations in Python\n\nWriting a function in Python that outputs the $n$-th Fibonacci number seems simple enough. However even in this simple case one should be aware of some of the computational subtleties in order to avoid common pitfalls and improve efficiency.\n\n### Common pitfall #1: inefficient recursion\n\nHere's a very straight-forward recursive implementation \n\n\n```python\nimport math\nfrom __future__ import print_function\n```\n\n\n```python\ndef fib_recursive(n):\n if n == 0:\n return 0\n elif n == 1:\n return 1\n else:\n return fib_recursive(n-1) + fib_recursive(n-2)\n```\n\n\n```python\nprint([fib_recursive(i) for i in range(20)])\n```\n\n [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181]\n\n\nthis seems to work fine, however the recursion overhead is actually very significant when $n$ is just slightly large. Here I'm computing $F_{34}$ and it takes more than 3 seconds! (on a 2013 model Macbook Air) \n\n\n```python\n%timeit fib_recursive(34)\n```\n\n 1 loops, best of 3: 3.58 s per loop\n\n\nThe overhead incurred by creating a large number of stack frames is tremendous. Python by default does not perform what's known as tail recursion elimination http://stackoverflow.com/questions/13543019/why-is-recursion-in-python-so-slow, and therefore this is a very inefficient implemenation. 
In contrast, if we have an iterative implementation, the speed is dramatically faster\n\n\n```python\ndef fib_iterative(n):\n a, b = 0, 1\n while n > 0:\n a, b = b, a + b\n n -= 1\n return a\n```\n\n\n```python\n%timeit fib_iterative(34)\n```\n\n 100000 loops, best of 3: 4.59 \u00b5s per loop\n\n\nNow, let's see if we can make it even faster by eliminating the loop altogether and just go straight to the general formula we derived earlier\n\n\n```python\ndef fib_formula(n):\n golden_ratio = (1 + math.sqrt(5)) / 2\n val = (golden_ratio**n - (1 - golden_ratio)**n) / math.sqrt(5)\n return int(round(val))\n```\n\n\n```python\n%timeit fib_formula(34)\n```\n\n 1000000 loops, best of 3: 1.36 \u00b5s per loop\n\n\nEven faster, great! And since we are not looping anymore, we should expect to see the computation time to scale better as $n$ increases. That's indeed what we see:\n\n\n```python\nimport pandas as pd\nimport numpy as np\n```\n\n\n```python\n%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom IPython.core.pylabtools import figsize\nfigsize(15, 5)\n```\n\n\n```python\nelapsed = {}\nelapsed['iterative'] = {}\nelapsed['formula'] = {}\nfor i in range(34):\n result = %timeit -n 10000 -q -o fib_iterative(i)\n elapsed['iterative'][i] = result.best\n result = %timeit -n 10000 -q -o fib_formula(i)\n elapsed['formula'][i] = result.best\n```\n\n\n```python\nelapased_ms = pd.DataFrame(elapsed) * 1000\nelapased_ms.plot(title='time taken to compute the n-th Fibonaccis number')\nplt.ylabel('time taken (ms)')\nplt.xlabel('n')\n```\n\nIndeed as we expect, the iterative approach scales linearly, while the formula approach is basically constant time.\n\nHowever we need to be careful with using a numerical formula like this for getting integer results. \n\n### Common pitfall #2: numerical precision\n\nHere we compare the actual values obtained by `fib_iterative()` and `fib_formula()`. Notice that it does not take a very large `n` for us to run into numerical precision issues.\n\nWhen `n` is 71 we are starting to get different results from the two implementations!\n\n\n```python\ndf = {}\ndf['iterative'] = {}\ndf['formula'] = {}\ndf['diff'] = {}\n\nfor i in range(100):\n df['iterative'][i] = fib_iterative(i)\n df['formula'][i] = fib_formula(i)\n df['diff'][i] = df['formula'][i] - df['iterative'][i]\ndf = pd.DataFrame(df, columns=['iterative', 'formula', 'diff'])\ndf.index.name = 'n-th Fibonacci'\ndf.ix[68:74]\n```\n\n\n\n\n
\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
iterativeformuladiff
n-th Fibonacci
68 72723460248141 72723460248141 0
69 117669030460994 117669030460994 0
70 190392490709135 190392490709135 0
71 308061521170129 308061521170130 1
72 498454011879264 498454011879265 1
73 806515533049393 806515533049395 2
74 1304969544928657 1304969544928660 3
\n
\n\n\n\nYou can see that `fib_iterative()` produces the correct result by eyeballing the sum of $F_{69}$ and $F_{70}$, while `fib_formual()` starts to have precision errors as the number gets larger. So, be mindful with precision issues when doing numerical computing. Here's a nice article on this topic http://www.codeproject.com/Articles/25294/Avoiding-Overflow-Underflow-and-Loss-of-Precision\n\nAlso notice that unlike C/C++, in Python there's technically no limit in the precision of its integer representation. In Python 2 any overflowing operation on `int` is automatically converted into `long`, and `long` has arbitrary precision. In Python 3 it is just `int`. More information on Python's arbitrary-precision integers can be found here http://stackoverflow.com/questions/9860588/maximum-value-for-long-integer\n", "meta": {"hexsha": "e44db84d693f5c7af7794fe4b7a5ad73e04f9ae2", "size": 52342, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "blog/Fibonacci_numbers_in_python.ipynb", "max_stars_repo_name": "mortada/notebooks", "max_stars_repo_head_hexsha": "12fd6a5fc1430efee63889f6cb709d8e94bde602", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2015-06-16T23:48:03.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-29T09:39:32.000Z", "max_issues_repo_path": "blog/Fibonacci_numbers_in_python.ipynb", "max_issues_repo_name": "mortada/notebooks", "max_issues_repo_head_hexsha": "12fd6a5fc1430efee63889f6cb709d8e94bde602", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "blog/Fibonacci_numbers_in_python.ipynb", "max_forks_repo_name": "mortada/notebooks", "max_forks_repo_head_hexsha": "12fd6a5fc1430efee63889f6cb709d8e94bde602", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2015-06-18T17:55:08.000Z", "max_forks_repo_forks_event_max_datetime": "2020-04-05T08:59:48.000Z", "avg_line_length": 94.4801444043, "max_line_length": 36806, "alphanum_fraction": 0.8188452868, "converted": true, "num_tokens": 2602, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.907312221360624, "lm_q2_score": 0.9343951607140232, "lm_q1q2_score": 0.8477881488960576}} {"text": "# What is an optimization problem?\n\nA general mathematical formulation for **the optimization problems studied on this course** is\n$$\n\\begin{align} \\\n\\min \\quad &f(x)\\\\\n\\text{s.t.} \\quad & g_j(x) \\geq 0\\text{ for all }j=1,\\ldots,J\\\\\n& h_k(x) = 0\\text{ for all }k=1,\\ldots,K\\\\\n&x\\in \\mathbb R^n.\n\\end{align}\n$$\n\nThe above problem can be expressed as \n>Find an $x\\in \\mathbb R^n$ such that $g_j(x)\\geq 0$ for all $j=1,\\ldots,J$ and $h_k(x)=0$ for all $k=1,\\ldots,K$, and there does not exist $x'\\in \\mathbb R^n$ such that $f(x')0$ such that there does not exist a feasible solution $x'\\in \\operatorname{B}(x^*,r)$ such that $f(x')0$, when $\\|x^*-x^{**}\\|\\leq L$.\n\n\n# Line search\n\nLet us study optimization problem $\\min_{x\\in[a,b]} f(x)$, where $a,b\\in\\mathbb R$. Let us try to find an approximation of a local optimum to this problem. 
\n\n\n```python\n#Example objective function\ndef f(x):\n return 2+(1-x)**2\n```\n\n\n```python\nprint \"The value of the objective function at 0 is \" + str(f(0))\n```\n\n The value of the objective function at 0 is 3\n\n\n## Line search with fixed steps\n**input:** the quality $L>0$ of the approximation of the local optimum. \n**output:** an approximation of the local optimum with quality $L$.\n```\nstart with x as the start point of the interval\nloop until stops:\n if the value of the objective is increasing for x+L from x\n stop, because the approximation of the locally optimal solution is x \n increase x by L\n```\n\n\n```python\ndef fixed_steps_line_search(a,b,f,L):\n x = a\n while f(x)>f(x+L) and x+L0$ of the approximation of the local optimum. \n**output:** an approximation of the local optimum with quality $L$.\n```\nSet x as the start point of interval and y as the end point\nwhile y-x<2*L:\n if the function is increasing at the mid point between x and y:\n set y as the midpoint between y and x, because a local optimum is before the midpoint\n otherwise:\n set x as the midpoint, because a local optimum is after the midpoint\nreturn midpoint between x and y\n```\n\nThe following function is to be completed in class as an exercise\n\n\n```python\ndef bisection_line_search(a,b,f,L,epsilon):\n x = a\n y = b\n while y-x>2*L:\n if f((x+y)/2+epsilon)>f((x+y)/2-epsilon):\n y=(x+y)/2+epsilon\n else:\n x = (x+y)/2-epsilon\n return (x+y)/2\n```\n\nThis is what we should end up. The following function is not shown on the slides.\n\n\n```python\ndef bisection_line_search(a,b,f,L,epsilon):\n x = a\n y = b\n while y-x<2*L:\n if f((x+y)/2-epsilon)0$ of the approximation of the local optimum. \n**output:** an approximation of the local optimum with quality $L$.\n```\nSet x as the start point of interval and y as the end point\nwhile y-x>2*L:\n Divide the interval [x,y] in the golden section from the letf and right and attain two division points\n If the greater of the division points has a greater function value \n set y as the rightmost division point, because a local optimum is before that\n otherwise:\n set x as the leftmost division point, because a local optimum is after that\nreturn midpoint between x and y\n```\n\nThe following function is to be completed in class as an exercise\n\n\n```python\nimport math\ndef golden_section_line_search(a,b,f,L):\n x = a\n y = b\n while y-x>2*L:\n if f(x+(math.sqrt(5.0)-1)/2.0*(y-x))\nNile University
\nAmmar Sherif (email-ID: ASherif)
\n\n## Problem Statement\n\nProve that the following algorithm is correct:\n\n\n```python\ndef maxVal(A):\n \"\"\"\n The Algorithm returns the maximum value in the list\n \n Input constraints: A is not empty\n \"\"\"\n maxV = A[0]\n for v in A:\n if maxV < v:\n maxV = v\n return maxV\n```\n\n\n```python\nmaxVal([1,2,3,40,5,6])\n```\n\n\n\n\n 40\n\n\n\n## Proof\n\nWe first try to try to formulate a hypothesis to help us in proving the correctness. It is as follows:\n\n$$H(i): \\text{ After the completion of the }i^{th} \\text{ iteration,}$$\n$$\\boxed{\\texttt{maxV} = \\max(A[0],A[1],\\cdots,A[i-1])}$$\nWhich is a compact representation of:\n\\begin{equation}\n \\texttt{maxV} \\in \\{ A[0],A[1],\\cdots,A[i-1]\\}\\\\\n \\land ( \\texttt{maxV} \\geq A[0] \\land \\texttt{maxV} \\geq A[1] \\land \\cdots \\land \\texttt{maxV} \\geq A[i-1] )\n\\end{equation}\nTo prove using Induction, we have two steps we have to show that it holds:\n
    \n
  1. Base Case
  2. \n
  3. Inductive Step
  4. \n
\n\n### Base case\n\nAfter the completion of the first iteration, we know that $\\texttt{maxV} = \\max(A[0]) = A[0]$ because of the **initialization** at the first line before the loop. Therefore, we showed that $\\boxed{H(1)}$ holds.\n\n### Inductive step\n\nWe want to show that $$H(k) \\implies H(k+1)$$\nTherefore, assuming $H(k)$ is True:
\n$$H(k) \\implies \\texttt{maxV}\\rvert_k = \\max(A[0],A[1],\\cdots,A[k-1])$$\nFor the next iteration $\\texttt{v} = A[k]$; then, we have a condition as follows\n\\begin{align*}\nH(k) &\\implies\\\\\n\\texttt{maxV}\\rvert_{k+1} &=\n\\begin{cases}\n\\texttt{v} = A[k], \\qquad \\texttt{maxV}\\rvert_{k} < A[k]\\\\\n\\texttt{maxV}\\rvert_{k}, \\qquad \\text{otherwise}\n\\end{cases}\\\\\n&= \\max(\\texttt{maxV}\\rvert_{k},A[k])\\\\\n&= \\max(\\underbrace{\\max(A[0],A[1],\\cdots,A[k-1])}_{H(k)},A[k])\\\\\n&= \\max(A[0],A[1],\\cdots,A[k])\n\\end{align*}\nTherefore, after the end of this iteration the variable\n$$\\underbrace{\\texttt{maxV} = \\max(A[0],A[1],\\cdots,A[k])}_{H(k+1)}$$\n$$\\boxed{H(k) \\implies H(k+1)}$$\n\n### Termination\n\nNow, using the induction proof above: we know that after the last iteration, $m^{th}$ iteration, $H(m)$ holds. Therefore, $$\\texttt{maxV} = \\max(A[0],\\cdots,A[m-1])$$\nThe remaining is that our algorithm terminates after the end of the loop, returning the maximum value, so the algorithm is *correct*.\n", "meta": {"hexsha": "acd911664dc7493afe0fde36bb9741e428578935", "size": 4691, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Module 02: Introduction to Theoretical Analysis/Module 02: Example on Correctness Proofs.ipynb", "max_stars_repo_name": "ammarSherif/Analysis-and-Design-of-Algorithms-Tutorials", "max_stars_repo_head_hexsha": "c45fa87ea20bc35000efb2420e58ed486196df14", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Module 02: Introduction to Theoretical Analysis/Module 02: Example on Correctness Proofs.ipynb", "max_issues_repo_name": "ammarSherif/Analysis-and-Design-of-Algorithms-Tutorials", "max_issues_repo_head_hexsha": "c45fa87ea20bc35000efb2420e58ed486196df14", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Module 02: Introduction to Theoretical Analysis/Module 02: Example on Correctness Proofs.ipynb", "max_forks_repo_name": "ammarSherif/Analysis-and-Design-of-Algorithms-Tutorials", "max_forks_repo_head_hexsha": "c45fa87ea20bc35000efb2420e58ed486196df14", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 25.7747252747, "max_line_length": 219, "alphanum_fraction": 0.5056491153, "converted": true, "num_tokens": 862, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9019206738932334, "lm_q2_score": 0.9399133515091156, "lm_q1q2_score": 0.8477272833943491}} {"text": "```python\nfrom sympy import *\nfrom IPython.display import display\n%matplotlib inline\ninit_printing(use_latex=True)\n```\n\n# Rayleigh Quotient MarkII\n\nWe want to mix the last two functions we saw in the exercise, the shape associated with a load applied to the tip and the shape associated with a uniform distributed load.\n\nWe start by defining a number of variables that point to `Symbol` objects,\n\n\n```python\nz, h , r0, dr, t, E, rho, zeta = symbols('z H r_0 Delta t E rho zeta')\n```\n\nWe define the tip-load function starting from the expression of the bending moment, just a linear function that is 0 for $z=H$... 
we integrate two times and we get the displacements bar the constants of integration that, on the other hand, happen to be both equal to zero due to clamped end at $z=0$, implying that $\\psi_1(0)=0$ and $\\psi'_1(0)=0$\n\n\n```python\nf12 = h-z\nf11 = integrate(f12,z)\nf10 = integrate(f11,z)\n```\n\nWe have no scaling in place... we have to scale correctly our function by evaluating it for $z=H$ \n\n\n```python\nscale_factor = f10.subs(z,h)\n```\n\nDividing our shape function (and its derivatives) by this particular scale factor we have, of course, an unit value of the tip displacement.\n\n\n```python\nf10 /= scale_factor\nf11 /= scale_factor\nf12 /= scale_factor\nf10, f11, f12, f10.subs(z,h)\n```\n\nWe repeat the same procedure to compute the shape function for a constant distributed load, here the constraint on the bending moment is that both the moment and the shear are zero for $z=H$, so the non-normalized expression for $M_b\\propto \\psi_2''$ is\n\n\n```python\nf22 = h*h/2 - h*z + z*z/2\n```\n\nThe rest of the derivation is the same\n\n\n```python\nf21 = integrate(f22,z)\nf20 = integrate(f21,z)\nscale_factor = f20.subs(z,h)\nf20 /= scale_factor\nf21 /= scale_factor\nf22 /= scale_factor\nf20, f21, f22, f20.subs(z,h)\n```\n\nTo combine the two shapes in the _right_ way we write\n\n$$\\psi = \\alpha\\,\\psi_1+(1-\\alpha)\\,\\psi_2$$\n\nso that $\\psi(H)=1$, note that the shape function depends on one parameter, $\\alpha$, and we can minimize the Rayleigh Quotient with respect to $\\alpha$.\n\n\n```python\na = symbols('alpha')\nf0 = a*f10 + (1-a)*f20\nf2 = diff(f0,z,2)\nf0.expand().collect(z), f2.expand().collect(z), f0.subs(z,h)\n```\n\nWorking with symbols we don't need to formally define a Python function, it suffices to bind a name to a symbolic expression. That's done for the different variable quantities that model our problem and using these named expressions we can compute the denominator and the numerator of the Rayleigh Quotient.\n\n\n```python\nre = r0 - dr * z/h\nri = re - t\nA = pi*(re**2-ri**2)\nJ = pi*(re**4-ri**4)/4\nfm = rho*A*f0**2\nfs = E*J*f2**2\nmstar = 80000+integrate(fm,(z,0,h))\nkstar = integrate(fs,(z,0,h))\n```\n\nOur problem is characterized by a set of numerical values for the different basic variables:\n\n\n```python\nvalues = {E:30000000000,\n h:32,\n rho:2500,\n t:Rational(1,4),\n r0:Rational(18,10),\n dr:Rational(6,10)}\n\nvalues\n```\n\nWe can substitute these values in the numerator and denominator of the RQ\n\n\n```python\ndisplay(mstar.subs(values))\ndisplay(kstar.subs(values))\n```\n\nLet's look at the RQ as a function of $\\alpha$, with successive refinements\n\n\n```python\nrq = (kstar/mstar).subs(values)\nplot(rq, (a,-3,3));\n```\n\n\n```python\nplot(rq, (a,1,3));\n```\n\n\n```python\nplot(rq, (a,1.5,2.0));\n```\n\nHere we do the following:\n\n 1. Derive the RQ and obtain a numerical function (rather than a symbolic expression) using the `lambdify` function.\n 2. Using a root finder function (here `bisect` from the `scipy.optimize` collection) we find the location of the minimum of RQ.\n 3. Display the location of the minimum.\n 4. Display the shape function as a function of $\\zeta=z/H$.\n 5. 
Display the minimum value of RQ.\n \nNote that the eigenvalue we have previously found, for $\\psi\\propto1-\\cos\\zeta\\pi/2$ was $\\omega^2= 66.259\\,(\\text{rad/s})^2$ \n\n\n```python\nrqdiff = lambdify(a, rq.diff(a))\nfrom scipy.optimize import bisect\na_0 = bisect(rqdiff, 1.6, 1.9)\ndisplay(a_0)\ndisplay(f0.expand().subs(a,a_0).subs(z,zeta*h))\nrq.subs(a,a_0).evalf()\n```\n\nOh, we have (re)discovered the Ritz method! and we have the better solution so far...\n\n\n```python\n# usual incantation\nfrom IPython.display import HTML\nHTML(open('00_custom.css').read())\n```\n\n\n\n\n\n\n\n\n\n", "meta": {"hexsha": "c0e5749cda66404b21d8755da42331f0b8454537", "size": 93207, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "dati_2015/ha03/04_Rayleigh_MkII.ipynb", "max_stars_repo_name": "shishitao/boffi_dynamics", "max_stars_repo_head_hexsha": "365f16d047fb2dbfc21a2874790f8bef563e0947", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "dati_2015/ha03/04_Rayleigh_MkII.ipynb", "max_issues_repo_name": "shishitao/boffi_dynamics", "max_issues_repo_head_hexsha": "365f16d047fb2dbfc21a2874790f8bef563e0947", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "dati_2015/ha03/04_Rayleigh_MkII.ipynb", "max_forks_repo_name": "shishitao/boffi_dynamics", "max_forks_repo_head_hexsha": "365f16d047fb2dbfc21a2874790f8bef563e0947", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2019-06-23T12:32:39.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-15T18:33:55.000Z", "avg_line_length": 151.556097561, "max_line_length": 17557, "alphanum_fraction": 0.8565343805, "converted": true, "num_tokens": 1745, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.92522995296862, "lm_q2_score": 0.9161096164524095, "lm_q1q2_score": 0.8476120573443634}} {"text": "# $k$ Nearest Neighbors\n\n## Illustration of Nearest Neighbors\n\nArguably one of the simplest classification algorithm is $k$ nearest neighbors (KNN). Below figure portrays the algorithm's simplicity. We see a training set of five red and five blue dots, representing some label 1 and 0, respectively. The two axes represent two features, e.g. income and credit card balance. If we now add a new test data point $x_0$ (green dot), KNN will label this test data according to its $k$ closest neighbors. In below figure we set $k=3$. The three closest neighbors are: one blue dot and two red dots, resulting in estimated probabilities of 2/3 for the red class and 1/3 for the blue class. Hence KNN will predict that the new data point will belong to the red class. More so, based on the training set the algorithm is able to draw a decision boundary. (Note: In a classification settings with $j$ classes a *decision boundary* is a hyperplane that partitions the underlying vector space into $j$ sets, one for each class.). This is shown with the jagged line that separates background colors cyan (blue label) and light blue (red label). Given any possible pair of feature values, KNN labels the response along the drawn decision boundary. With $k=1$ the boundary line is very jagged. Increasing the number of $k$ will smoothen the decision boundary. 
This tells us that small values of $k$ will produce large variance but low bias, meaning that each new added training point might change the decision boundary line significantly but the decision boundary separates the training set (almost) correctly. As $k$ increases, variance decreases but bias increases. This is a manifestation of the Bias-Variance Trade-Off as discussed in the script ([Fortmann-Roe (2012)](http://scott.fortmann-roe.com/docs/BiasVariance.html)) and highlights the importance of selecting an adequate value of $k$ - a topic we will pick up in a future chapter.\n\n\n\n## Mathematical Description of KNN\n\nHaving introduced KNN illustratively, let us now define this in mathematical terms. Let our data set be $\\{(x_1, y_1), (x_2, y_2), \\ldots (x_n, y_n)\\}$ with $x_i \\in \\mathbb{R}^p$ and $y_i \\in \\{0, 1\\}\\; \\forall i \\in \\{1, 2, \\ldots, n\\}$. Based on the $k$ neighbors, KNN estimates the conditional probability for class $j$ as the fraction of points in $\\mathcal{N}(k, x_0)$ (the set of the $k$ closest neighbors of $x_0$) whose response values equals $j$ (Russell and Norvig (2009)):\n\n$$\\begin{equation}\n\\Pr(Y = j | X = x_0) = \\frac{1}{k} \\sum_{i \\in \\mathcal{N}(k, x_0)} \\mathbb{I}(y_i = j).\n\\end{equation}$$\n\nOnce the probability for each class $j$ is calculated, the KNN classifier predicts a class label $\\hat{y}_0$ for the new data point $x_0$ by maximizing the conditional probability (Batista and Silva (2009)).\n\n$$\\begin{equation}\n\\hat{y}_0 = \\arg \\max_{j} \\frac{1}{k} \\sum_{i \\in \\mathcal{N}(k,x_0)} \\mathbb{I}(y_i = j)\n\\end{equation}$$\n\nSelecting a new data point $x_0$'s nearest neighbors requires some notion of **distance measure**. Most researchers chose Minkowski's distance, which is often referred to as $L^m$ norm (Guggenbuehler (2015)). The distance between points $x_a$ and $x_b$ in $\\mathbb{R}^p$ is then defined as follows (Russell and Norvig (2009)):\n\n$$\\begin{equation}\nL^m (x_a, x_b) = \\left(\\sum_{i=1}^p |x_{a, i} - x_{b, i}|^m \\right)^{1/m}\n\\end{equation}$$\n\nUsing $m=2$, above equation simplifies to the well known Euclidean distance and $m=1$ yields the Manhattten distance. Python's `sklearn` package, short for scikit-learn, offers [several other options](http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.DistanceMetric.html) that we will not discuss. \n\nBeyond the pure distance measure, it is also possible to weight training data points relative to their distance from a certain point. In above figure distance is weighted uniformly. Alternatively one could weight points by the inverse of their distance (closer neighbors of a query point will have a greater influence than neighbors which are further away) or any other user-defined weighting function. For [further details check `sklearn`'s documentation](http://scikit-learn.org/stable/modules/neighbors.html\\#classification} for details).\n\n## Application: Predicting Share Price Movement\n\n### Loading Data\n\nThe application of KNN is shown using simple stock market data. The idea is to predict a stock's movement based on simple features such as:\n* `Lag1, Lag2`: log returns of the two previous trading days \n* `SMI`: SMI log return of the previous day\n\nThe response is a binary variable: if a stock closed above the previous day closing price it equals 1, and 0 if it fell. We start by loading the necessary packages and stock data from a csv - a procedure we are well acquainted with by now. 
Thus comments are held short.\n\n\n```python\n%matplotlib inline\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nplt.style.use('seaborn-whitegrid')\nplt.rcParams['font.size'] = 14\n```\n\n\n```python\n# Import daily shares prices and select window\nshsPr = pd.read_csv('Data/SMIDataDaily.csv', sep=',',\n parse_dates=['Date'], dayfirst=True,\n index_col=['Date'])\nshsPr = shsPr['2017-06-30':'2012-06-01']\nshsPr.head()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
ABBNADENBAERCFRCSGNGEBNGIVNLHNLONNNESN...ROGSCMNSGSNSLHNSRENUBSGUHRZURNSIKSMI
Date
2017-06-3023.6872.9050.4579.0013.86447.21918.054.90207.383.45...244.2462.72322.0323.687.6516.24354.1279.16160.08906.89
2017-06-2923.6372.8551.3078.6014.10446.01913.055.00204.783.65...246.9463.92310.0323.387.8016.36353.4276.36145.08944.04
2017-06-2824.0773.5551.6580.0013.82452.51927.056.35207.585.40...251.2464.92342.0325.388.5516.29360.0278.06195.09076.73
2017-06-2724.4073.7051.6580.2013.74453.81931.056.40207.684.30...252.0465.02355.0322.788.8016.00362.0281.76280.09072.92
2017-06-2624.6373.8551.3580.4513.49457.01947.056.25208.285.65...251.7469.32375.0323.989.4015.71367.5284.46310.09121.22
\n

5 rows \u00d7 21 columns

\n
\n\n\n\nHaving the data in a proper dataframe, we are now in a position to create the features and response values.\n\n\n```python\n# Calculate log-returns and label responses: \n# 'direction' equals 1 if stock closed above \n# previous day and 0 if it fell.\ntoday = np.log(shsPr / shsPr.shift(-1))\ndirection = np.where(today >= 0, 1, 0)\n\n# Convert 'direction' to dataframe\ndirection = pd.DataFrame(direction, index=today.index, columns=today.columns)\n\n# Lag1, 2: t-1 and t-2 returns; excl. smi (in last column)\nLag1 = np.log(shsPr.iloc[:, :-1].shift(-1) / shsPr.iloc[:, :-1].shift(-2))\nLag2 = np.log(shsPr.iloc[:, :-1].shift(-2) / shsPr.iloc[:, :-1].shift(-3))\n\n# Previous day return for SMI index\nsmi = np.log(shsPr.iloc[:, -1].shift(-1) / shsPr.iloc[:, -1].shift(-2))\n```\n\n### KNN Algorithm Applied\n\nNow comes the difficult part. What we want to achieve is to run the KNN algorithm for every stock and for different hyperparameter $k$ and see how it performs. For this we do the following steps:\n\n1. Create a feature matrix `X` containing `Lag1`, `Lag2` and `SMI` data for share i\n2. Create a response vector `y` with binary direction values\n3. Split data to training (before 2016-06-30) and test set (after 2016-06-30)\n4. Run KNN for different values of $k$ (loop)\n5. Write test score for given $k$ to matrix `scr`\n6. Once we've run through all $k$'s we proceed with step 1. with share i+1\n\nThis means we need two loops. The first corresponds to the share (e.g. ABB, Adecco, etc.), the second runs the KNN algorithm for different values of $k$. \n\nThe reason for this approach is that we are interested in finding any pattern/structure that would provide a successful trading strategy. There is obviously no free lunch. Predicting share price direction is by no means an easy task and we must be well aware that we are in for a difficult job here. If it were simple, neither one of us would be sitting here but run his own fund. But nonetheless, let us see how KNN performs and how homogeneous (or heterogeneous) the results are.\n\nOur first step is as usual to prepare the ground by loading the necessary package and defining some auxiliary variables. The KNN function we will be using is available through the `sklearn` (short for Scikit-learn) package. We only load the `neighbor` sublibrary which contains the needed KNN function called `KNeigborsClassifier()`. KNN is applied with the default distance metric: Euclidean distance (Minkowski's distance with $m=2$). If we would prefer another distance metric we would have to specify it ([see documentation](http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html)).\n\n\n```python\n# Import relevant functions\nfrom sklearn import neighbors\n\n# k = {1, 3, ..., 200}\nk = np.arange(1, 200, 2)\n\n# Array to store results in. Dimension is [k x m] \n# with m=20 for the 20 companies (excl. SMI)\nscr = np.empty(shape=(len(k), len(shsPr.columns)-1))\n```\n\n\n```python\nfor i in range(len(shsPr.columns)-1):\n \n # 1) Create matrix with feature values of stock i\n X = pd.concat([Lag1.iloc[:, i], Lag2.iloc[:, i], smi], axis=1)\n X = X[:-3] # Drop last three rows with NaN (due to lag)\n \n # 2) Remove last three rows of response dataframe\n # to have equal no. 
of rows for features and response\n y = direction.iloc[:, i]\n y = y[:-3]\n \n # 3) Split data into training set...\n X_train = X['2016-06-30':]\n y_train = y['2016-06-30':]\n # ...and test set.\n X_test = X[:'2016-07-01']\n y_test = y[:'2016-07-01']\n \n # Convert responses to 1xN array\n y_train = y_train.values.ravel()\n y_test = y_test.values.ravel()\n \n for j in range(len(k)):\n \n # 4) Run KNN\n # Instantiate KNN class\n knn = neighbors.KNeighborsClassifier(n_neighbors=k[j])\n # Fit KNN classifier using training set\n knn = knn.fit(X_train, y_train)\n \n # 5) Extract test score for k[j]\n scr[j, i] = knn.score(X_test, y_test)\n```\n\n\n```python\n# Convert data to pandas dataframe\ntickers = shsPr.columns\nscr = pd.DataFrame(scr, index=k, columns=tickers[:-1])\n```\n\n\n```python\nscr.head()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
ABBNADENBAERCFRCSGNGEBNGIVNLHNLONNNESNNOVNROGSCMNSGSNSLHNSRENUBSGUHRZURNSIK
10.4624510.5335970.5217390.5138340.5098810.5335970.4980240.4980240.5059290.4110670.5494070.5533600.5138340.5217390.5098810.4703560.4901190.5494070.4980240.529644
30.4782610.4703560.5296440.4861660.5770750.5335970.5256920.4822130.5256920.4861660.5177870.5731230.4861660.5177870.5415020.4505930.4901190.5573120.5177870.513834
50.4980240.4940710.5375490.4940710.5256920.5731230.5375490.4743080.5019760.4861660.5256920.5138340.4664030.5019760.5217390.4822130.4940710.5533600.5335970.509881
70.5177870.4861660.5059290.4980240.5059290.5533600.5256920.5177870.4743080.5098810.5138340.4822130.4703560.5098810.5256920.4664030.4901190.5612650.5335970.509881
90.5533600.4782610.5059290.4703560.4861660.5335970.5059290.5138340.4584980.4624510.5454550.4861660.4624510.5019760.5019760.4703560.4901190.5177870.4822130.470356
\n
\n\n\n\n### Results & Analysis\n\nNow let's see the results in an overview. \n\n\n```python\nscr.describe()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
ABBNADENBAERCFRCSGNGEBNGIVNLHNLONNNESNNOVNROGSCMNSGSNSLHNSRENUBSGUHRZURNSIK
count100.000000100.000000100.000000100.000000100.000000100.000000100.000000100.000000100.000000100.000000100.000000100.000000100.000000100.000000100.000000100.000000100.000000100.000000100.000000100.000000
mean0.5532020.5067980.5159680.5085770.5174310.4997630.5333200.4920160.5375100.5199600.5468770.5130430.4790510.5169960.4905140.5207910.4892890.5073520.5535570.530040
std0.0311830.0105790.0231870.0246940.0166660.0202660.0134980.0195460.0272720.0208420.0189390.0168820.0199020.0206340.0201500.0178370.0149620.0277360.0202820.028188
min0.4624510.4703560.4624510.4545450.4466400.4584980.4980240.4505930.4584980.4110670.5059290.4743080.4466400.4743080.4505930.4505930.4387350.4545450.4822130.466403
25%0.5296440.5009880.5019760.4930830.5098810.4861660.5256920.4772730.5296440.5098810.5335970.5019760.4624510.5019760.4743080.5167980.4822130.4861660.5454550.512846
50%0.5573120.5059290.5177870.5098810.5177870.4980240.5335970.4901190.5474310.5256920.5454550.5138340.4762850.5217390.4940710.5256920.4901190.5019760.5573120.525692
75%0.5810280.5138340.5335970.5256920.5256920.5138340.5375490.5059290.5533600.5335970.5612650.5256920.4901190.5335970.5019760.5335970.4980240.5256920.5691700.546443
max0.6007910.5335970.5691700.5573120.5770750.5731230.5731230.5415020.5770750.5494070.5928850.5731230.5415020.5612650.5415020.5454550.5217390.5731230.5928850.581028
\n
\n\n\n\nFollowing finance theory, returns should be distributed symmetrically. Thus the simplest guess would be to expect a share price to increase on 50% of the days and to decrease on the remaining 50%. Similar to guessing a coin flip, if we would guess an 'up' movement for every day, we obviously would - in the long run - be correct 50% of the times. This would make for a score of 50%. \n\nLooking in that light at the above summary, we see some very interesting results. For 13 out of 20 stocks KNN produces test scores of > 50% for even the 0.25th percentile. Let's plot the ones with the highest test-scores (ABBN, NOVN, SIK and ZURN) to see at what value of $k$ the best test-score is achieved.\n\n\n```python\nnms = ['ABBN', 'NOVN', 'SIK', 'ZURN']\n\nplt.figure(figsize=(12, 8))\nfor col in nms:\n scr[col].plot(legend=True)\nplt.axhline(0.50, c='k', ls='--');\n```\n\nFor ABB and Novartis the max. score is around $k=100$ while for Sika and Zurich it is between 177 - 185. Furthermore, it seems interesting that for ABB, Novartis and Zurich the test score was barely below 50%. If this is indeed a pattern we would have found a trading strategy, wouldn't we? \n\nTo further assess our results we look into KNN's prediction of ABB stock movements. For this we rerun our KNN classifier algorithm for ABB as before.\n\n\n```python\n# 1) Create matrix with feature values of stock i\nX = pd.concat([Lag1['ABBN'], Lag2['ABBN'], smi], axis=1)\nX = X[:-3] # Drop last three rows with NaN (due to lag)\n```\n\n\n```python\n# 2) Remove last three rows of response dataframe\n# to have equal no. of rows for features and response\ny = direction['ABBN']\ny = y[:-3]\n```\n\n\n```python\n# 3) Split data into training set...\nX_train = X['2016-06-30':]\ny_train = y['2016-06-30':]\n# ...and test set.\nX_test = X[:'2016-07-01']\ny_test = y[:'2016-07-01']\n\n# Covert responses to 1xN array\ny_train = y_train.values.ravel()\ny_test = y_test.values.ravel()\n```\n\nFor ABB the maximum score is reached where $k=119$. You can check this with the `scr['ABBN'].idxmax()` command, which provides the index of the maximum value of the selected column. In our case, the index is equivalent to the value of $k$. Thus we run KNN with $k=119$.\n\n\n```python\n# 4) Run KNN\n# Instantiate KNN class for ABB with k=119\nknn = neighbors.KNeighborsClassifier(n_neighbors=119)\n# Fit KNN classifier using training set\nknn = knn.fit(X_train, y_train)\n\n# 5) Extract test score for ABB\nscr_ABB = knn.score(X_test, y_test)\nscr_ABB\n```\n\n\n\n\n 0.60079051383399207\n\n\n\nThe score of 60.08% is the very same as above. Nothing new so far. (Recall that the score is the total of correctly predicted outcomes.)\n\nHowever, the alert reader should by now raise some questions regarding our assumption that 50% of the returns should have been positive. In the long run, this might be true. But our training sample contained only 1'018 records and of these 535 were positive. \n\n\n```python\n# Percentage of 'up' days in training set\ny_train.sum() / y_train.size\n```\n\n\n\n\n 0.52554027504911593\n\n\n\nTherefore, if we would guess 'up' for every day of our test set and **given the distribution of classes in the test set is exactly as in our training set**, then we would predict the correct movement in 52.55% of the cases. So in that light, the predictive power of our KNN algorithm has to be put in perspective to the 52.55%.\n\nIn summary, our KNN algorithm has a score of 60.08%. Our best guess (based on the training set) would yield a score of 52.55%. 
This still shows that overall our KNN algorithm outperforms our best guess. Nonetheless, the margin is smaller than initially thought. \n\n### Confusion Matrix\n\nThere are more tools to assess the accuracy of an algorithm. We postpone the discussion of these tools to a later chapter and at this stage restrict ourselves to the discussion of a tool called \"confusion matrix\". \n\nA confusion matrix is a convenient way of displaying how our classifier performs. In binary classification (with e.g. response $y \\in \\{0, 1\\}$) there are four prediction categories possible (Ting (2011)):\n\n* **True positive**: True response value is 1, predicted value is 1 (\"hit\")\n* **True negative**: True response value is 0, predicted value is 0 (\"correct rejection\")\n* **False positive**: True response value is 0, predicted value is 1 (\"False alarm\", Type 1 error)\n* **False negative**: True response value is 1, predicted value is 0 (\"Miss\", Type 2 error)\n\nThis information helps us to understand how our (KNN) algorithm performed. There are two different ways of arranging a confusion matrix. James et al. (2013) follow the convention that column labels indicate the true class label and rows the predicted response class. Others have it transposed, such that column labels indicate predicted classes and row labels show true values. We will use the latter approach.\n\n\n\nTo run this in Python, we first predict the response value for each data entry in our test matrix `X_test`. Then we arrange the data in a suitable manner.\n\n\n```python\n# Predict 'up' (=1) or 'down' (=0) for test set\npred = knn.predict(X_test)\n```\n\n\n```python\n# Store data in DataFrame\ncfm = pd.DataFrame({'True direction': y_test,\n                    'Predicted direction': pred})\ncfm.replace(to_replace={0:'Down', 1:'Up'}, inplace=True)\n\n# Arrange data to confusion matrix\nprint(cfm.groupby(['Predicted direction','True direction']) \\\n      .size().unstack('Predicted direction'))\n```\n\n    Predicted direction  Down   Up\n    True direction                \n    Down                   31   83\n    Up                     18  121\n\n\nAs mentioned before, rows represent the true outcome and columns show which class KNN predicted. In 31 cases, the test set's true response was 'down' (in our case represented by 0) and KNN correctly predicted 'down'. 121 times KNN was correct in predicting an 'up' (=1) movement. 18 returns in the test set were positive but KNN predicted a negative return. And in 83 out of 253 cases KNN predicted an 'up' movement whereas in reality the stock price decreased. The KNN score of 60.08% for ABB is the sum of true positives and negatives (31 + 121) in relation to the total number of predictions (253 = 31 + 18 + 83 + 121). The error rate is 1 - score, or (18 + 83)/253.\n\nClass-specific performance is also helpful to better understand results. The related terms are **sensitivity** and **specificity**. In the above case, sensitivity is the percentage of true 'up' movements that are identified: a good 88.1% (= 121 / (18 + 121)). The specificity is the percentage of 'down' movements that are correctly identified, here a poor 27.2% (= 31 / (31 + 83)). More on this in the next chapter.\n\nBecause confusion matrices are important to analyze results, `Scikit-learn` has its own [command to generate it](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.confusion_matrix.html). It is part of the `metrics` sublibrary. The difficulty is that, in contrast to the (manually generated) table above, the function's output provides no labels. Therefore one must be sure to know which values are where. 
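A simple remedy (a small sketch, reusing `y_test` and `pred` from above) is to wrap the returned array in a labelled `DataFrame`; with the label order `[0, 1]`, rows are the true classes and columns the predicted ones, matching the convention used in this notebook. The raw, unlabelled output is shown right below for comparison.

```python
import pandas as pd
from sklearn.metrics import confusion_matrix

cm = confusion_matrix(y_test, pred, labels=[0, 1])
cm_labelled = pd.DataFrame(cm,
                           index=['True Down', 'True Up'],
                           columns=['Pred Down', 'Pred Up'])
print(cm_labelled)
```
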
Here's the code to generate the confusion matrix.\n\n\n```python\nfrom sklearn.metrics import confusion_matrix\n\n# Confusion matrix\nconfusion_matrix(y_test, pred)\n```\n\n\n\n\n array([[ 31, 83],\n [ 18, 121]], dtype=int64)\n\n\n\nOften it is helpful to visualize results. `Sklearn` unfortunately doesn't have a specific plotting function for the confusion matrix. However, on the package's website a [function code is provided](http://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html) that does exactly this. Below their code is applied to our KNN results on ABB's stock price movement. \n\n\n```python\nimport itertools\nplt.style.use('default')\n\ndef plot_confusion_matrix(cm, classes,\n normalize=False,\n title='Confusion matrix',\n cmap=plt.cm.Blues):\n \"\"\"\n This function prints and plots the confusion matrix.\n Normalization can be applied by setting `normalize=True`.\n \"\"\"\n if normalize:\n cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]\n print(\"Normalized confusion matrix\")\n else:\n print('Confusion matrix, without normalization')\n\n print(cm)\n\n plt.imshow(cm, interpolation='nearest', cmap=cmap)\n plt.title(title)\n plt.colorbar()\n tick_marks = np.arange(len(classes))\n plt.xticks(tick_marks, classes, rotation=45)\n plt.yticks(tick_marks, classes)\n\n fmt = '.2f' if normalize else 'd'\n thresh = cm.max() / 2.\n for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):\n plt.text(j, i, format(cm[i, j], fmt),\n horizontalalignment=\"center\",\n color=\"white\" if cm[i, j] > thresh else \"black\")\n\n plt.tight_layout()\n plt.ylabel('True label')\n plt.xlabel('Predicted label')\n```\n\n\n```python\n#plt.style.use('default')\n# Compute confusion matrix\ncfm_matrix = confusion_matrix(y_test, pred)\nnp.set_printoptions(precision=2)\n\n# Plot non-normalized confusion matrix\nplt.figure()\nplot_confusion_matrix(cfm_matrix, classes=['Down', 'Up'],\n title='Confusion matrix, without normalization');\n```\n\n\n```python\n# Plot normalized confusion matrix\nplt.figure()\nplot_confusion_matrix(cfm_matrix, classes=['Down', 'Up'], normalize=True,\n title='Normalized confusion matrix');\n```\n\n# Further Ressources\n\n\nIn writing this notebook, many ressources were consulted. For internet ressources the links are provided within the textflow above and will therefore not be listed again. Beyond these links, the following ressources were consulted and are recommended as further reading on the discussed topics:\n\n* Batista, Gustavo, and Diego Furtado Silva, 2009, How k-nearest neighbor parameters affect its performance, in *Argentine Symposium on Artificial Intelligence*, 1\u201312, sn.\n* Fortmann-Roe, Scott, 2012, Understanding the Bias-Variance Tradeoff from website, http://scott.fortmann-roe.com/docs/BiasVariance.html, 08/15/17.\n* Guggenbuehler, Jan P., 2015, Predicting net new money using machine learning algorithms and newspaper articles, Technical report, University of Zurich, Zurich.\n* James, Gareth, Daniela Witten, Trevor Hastie, and Robert Tibshirani, 2013, *An Introduction to Statistical Learning: With Applications in R* (Springer Science & Business Media, New York, NY).\n* M\u00fcller, Andreas C., and Sarah Guido, 2017, *Introduction to Machine Learning with Python* (O\u2019Reilly Media, Sebastopol, CA).\n* Russell, Stuart, and Peter Norvig, 2009, *Artificial Intelligence: A Modern Approach* (Prentice Hall Press, Upper Saddle River, NJ).\n* Ting, Kai Ming, 2011, Confusion matrix, in Claude Sammut, and Geoffrey I. 
Webb, eds., *Encyclopedia of Machine Learning* (Springer Science & Business Media, New York, NY).\n", "meta": {"hexsha": "ee4567a0059abee563396140b38b5c7e9c9e4b18", "size": 249705, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "0207_kNN.ipynb", "max_stars_repo_name": "mauriciocpereira/ML_in_Finance_UZH", "max_stars_repo_head_hexsha": "d99fa0f56b92f4f81f9bbe024de317a7949f0d38", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "0207_kNN.ipynb", "max_issues_repo_name": "mauriciocpereira/ML_in_Finance_UZH", "max_issues_repo_head_hexsha": "d99fa0f56b92f4f81f9bbe024de317a7949f0d38", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "0207_kNN.ipynb", "max_forks_repo_name": "mauriciocpereira/ML_in_Finance_UZH", "max_forks_repo_head_hexsha": "d99fa0f56b92f4f81f9bbe024de317a7949f0d38", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 176.5947666195, "max_line_length": 138166, "alphanum_fraction": 0.8516409363, "converted": true, "num_tokens": 10494, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9314625107731764, "lm_q2_score": 0.9099070054272775, "lm_q1q2_score": 0.8475442638453942}} {"text": "```python\nimport numpy as np\nfrom sympy import *\ninit_printing(use_latex='mathjax')\n```\n\n\n```python\nx = symbols('x')\nf = x ** 6 / 6 - 3 * x ** 4 - 2 * x ** 3 / 3 + 27 * x ** 2 / 2 + 18 * x - 30\nf\n```\n\n\n\n\n$$\\frac{x^{6}}{6} - 3 x^{4} - \\frac{2 x^{3}}{3} + \\frac{27 x^{2}}{2} + 18 x - 30$$\n\n\n\n\n```python\ndf = diff(f, x)\ndf\n```\n\n\n\n\n$$x^{5} - 12 x^{3} - 2 x^{2} + 27 x + 18$$\n\n\n\n\n```python\n- f.evalf(subs={x:1}) / df.evalf(subs={x:1})\n```\n\n\n\n\n$$0.0625$$\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "9729b823f8d2194511cbe0e42bfa223528155b3b", "size": 2342, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Certification 2/Week5.1 - Newton-Raphson method.ipynb", "max_stars_repo_name": "The-Brains/MathForMachineLearning", "max_stars_repo_head_hexsha": "5cbd9006f166059efaa2f312b741e64ce584aa1f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2018-04-16T02:53:59.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-16T06:51:57.000Z", "max_issues_repo_path": "Certification 2/Week5.1 - Newton-Raphson method.ipynb", "max_issues_repo_name": "The-Brains/MathForMachineLearning", "max_issues_repo_head_hexsha": "5cbd9006f166059efaa2f312b741e64ce584aa1f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Certification 2/Week5.1 - Newton-Raphson method.ipynb", "max_forks_repo_name": "The-Brains/MathForMachineLearning", "max_forks_repo_head_hexsha": "5cbd9006f166059efaa2f312b741e64ce584aa1f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2019-05-20T02:06:55.000Z", "max_forks_repo_forks_event_max_datetime": "2020-05-18T06:21:41.000Z", "avg_line_length": 19.5166666667, "max_line_length": 94, "alphanum_fraction": 0.401793339, "converted": true, "num_tokens": 217, "lm_name": 
"Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9603611631680358, "lm_q2_score": 0.8824278556326344, "lm_q1q2_score": 0.8474494418472323}} {"text": "## Fourier methods\n\nThe Fourier transform (FT) for a well-behaved functions $f$ is defined as:\n\n$$f(k) = \\int e^{-ikx} f(x) ~dx$$\n\nThe inverse FT is then \n\n$$f(x) = \\frac{1}{2\\pi} \\int e^{ikx} f(k) ~dk$$\n\n\n\n## Discrete Fourier transforms (DFTs)\n\n\nIf the function is periodic in real space, $f(x+L) = f(x)$, then the Fourier space is discrete with spacing $\\frac{2\\pi}{L}$. Moreover, if the real space is periodic as well as discrete with the spacing $h$, then the Fourier space is discrete as well as bounded. \n\n$$f(x) = \\sum e^{ikx} f(k) ~dk~~~~~~ \\text{where } k = \\Bigg[ -\\frac{\\pi}{h}, \\frac{\\pi}{h}\\Bigg];~~ \\text{with interval} \\frac{2\\pi}{L} $$\n\n\nThis is very much in line with crystallography with $ [ -\\frac{\\pi}{h}, \\frac{\\pi}{h} ]$ being the first Brillouin zone. So we see that there is a concept of the maximum wavenumber $ k_{max}=\\frac{\\pi}{h} $, we will get back to this later in the notes. Usually in computations we need to find FT of discrete function rather than of a well defined analytic function. Since the real space is discrete and periodic, the Fourier space is also discrete and periodic or bounded. Also, the Fourier space is continuous if the real space is unbounded. If the function is defined at $N$ points in real space and one wants to calculate the function at $N$ points in Fourier space, then **DFT** is defined as \n\n$$f_k = \\sum_{n=0}^{N-1} f_n ~ e^{-i\\frac{2\\pi~n~k}{N}}$$\n\nwhile the inverse transform of this is\n\n$$f_n = \\frac1N \\sum_{n=0}^{N-1} f_k ~ e^{~i\\frac{2\\pi~n~k}{N}}$$\n\nTo calculate each $f_n$ one needs $N$ computations and it has to be done $N$ times, i.e, the algorithm is simply $\\mathcal{O}(N^2)$. This can be implemented numerically as a matrix multiplication, $f_k = M\\cdot f_n$, where $M$ is a $N\\times N$ matrix.\n\n\n## Fast fourier tranforms (FFTs)\n\nThe discussion here is based on the Cooley-Tukey algorithm. FFTs improves on DFTs by exploiting their symmetries.\n\n$$ \\begin{align}\nf_k &= \\sum_{n=0}^{N-1} f_n e^{-i~\\frac{2\\pi~k~n}{N}} \\\\\n&= \\sum_{n=0}^{N/2-1} f_{2n} e^{-i~\\frac{2\\pi~k~2n}{N}} &+ \\sum_{n=0}^{N/2-1} f_{2n + 1} e^{-i~\\frac{2\\pi~k~(n+1)}{N}}\\\\\n&= \\sum_{n=0}^{N/2 - 1} f_{2n} e^{-i~\\frac{2\\pi k~n}{N/2}} &+ e^{-i\\frac{2\\pi k}{N}} \\sum_{n=0}^{N/2 - 1} f_{2n + 1} e^{-i~\\frac{2\\pi~k~n~}{N/2}}\\\\\n&=\\vdots &\\vdots\n\\end{align}$$\n\nWe can use the symmetry property, from the definition, $f_{N+k} = f_k$. Notice that, because of the tree structure, there are $\\ln_2 N$ stages of the calculation. By applying the method of splitting the computation in two halves recursively, the complexity of the problem becomes $\\mathcal{O}(N \\ln N)$ while the naive algorithm is $\\mathcal{O}(N^2)$. 
This is available in standard python packages like numpy and scipy.\n\nWe will now use PyGL to explore its usage to solve physics problems.\n\n\n```python\nimport pygl\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndim, Nx, Ny = 2, 128, 128\ngrid = {\"dim\":dim, \"Nx\":Nx, \"Ny\":Ny}\n\n# now construct the spectral solver\nss = pygl.dms.FourierSpectral(grid)\n```\n\n#### We first demonstrate the translation of blob using momentum operator\n\n\n```python\n# The momentum operator, e^{-ikr}, generates translation!\n\nf = plt.figure(figsize=(20, 5), dpi=80); \nL, N = 128, 128 \nx, y = np.meshgrid(np.linspace(0, L, N), np.linspace(0, L, N))\nrr = np.sqrt( ((x-L/2)*(x-L/2)+(y-L/2)*(y-L/2))*.5 )\nsig = np.fft.fft2(np.exp(-0.1*rr))\n\n\ndef plotFirst(x, y, sig, n_):\n sp = f.add_subplot(1, 3, n_ )\n plt.pcolormesh(x, y, sig, cmap=plt.cm.Blues)\n plt.axis('off'); \n\nxx = ([0, -L/4, -L/2,])\nyy = ([0, -L/3, L/2])\nfor i in range(3):\n kdotr = ss.kx*xx[i] + ss.ky*yy[i]\n sig = sig*np.exp(-1j*kdotr)\n plotFirst(x, y, np.real(np.fft.ifftn(sig)), i+1)\n```\n\n### Sampling: Aliasing error\n\nWe saw that because of the smallest length scale, $h$, in the real space there is a corresponding largest wave-vector, $k_{max}$ in the Fourier space. The error is because of this $k_{max}$ and a signal which has $k>k_{max}$ can not be distinguished on this grid. In the given example, below, we see that if the real space has 10 points that one can not distinguish between $sin(2\\pi x/L)$ and $sin(34 \\pi x/L)$. In general, $sin(k_1 x)$ and $sin(k_2 x)$ can not be distinguished if $k_1 -k_2$ is a multiple of $\\frac{2\\pi}{h}$. This is a manifestation of the gem called the sampling theorem which is defined, as on wikipedia:\n\nIf a function x(t) contains no frequencies higher than B hertz, it is completely determined by giving its ordinates at a series of points spaced 1/(2B) seconds apart.\n\n\n\n```python\nL, N = 1, 16\nx = np.arange(0, L, L/512)\nxx = np.arange(0, L, L/N)\n\ndef ff(k, x):\n return np.sin(k*x)\n\nf = plt.figure(figsize=(17, 6), dpi=80); \n\nplt.plot(x, ff(x, 2*np.pi), color=\"#A60628\", linewidth=2);\nplt.plot(x, ff(x, 34*np.pi), color=\"#348ABD\", linewidth=2);\nplt.plot(xx, ff(xx, 2*np.pi), 'o', color=\"#020e3e\", markersize=8)\nplt.xlabel('x', fontsize=15); plt.ylabel('y(x)', fontsize=15);\nplt.title('Aliasing in sampling of $sin(2\\pi x/L)$ and $sin(34 \\pi x/L)$', fontsize=24);\nplt.axis('off');\n```\n\nTo avoid this error, we truncate higher mode in PyGL using `ss.dealias'\n\n### Differentiation\n \nWe now show the usage of PyGL to compute differentiation matrices. 
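Under the hood this is nothing more than multiplication by $ik$ (or $-k^2$ for the second derivative) in Fourier space, which can be cross-checked with plain `numpy` alone; a minimal sketch on a hypothetical periodic grid:

```python
import numpy as np

L, N = 2 * np.pi, 64                        # periodic domain and number of grid points
x = np.arange(N) * (L / N)                  # uniform grid with spacing h = L/N
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)  # wavenumbers in numpy's FFT ordering

f = np.cos(x)
f_xx = np.real(np.fft.ifft(-k**2 * np.fft.fft(f)))  # spectral second derivative

print(np.max(np.abs(f_xx + np.cos(x))))     # agrees with -cos(x) to machine precision
```

The same computation is repeated with PyGL below.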
We first compute the second derivative of $\\cos x$.\n\n\n```python\nimport pygl\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndim, Nx, Ny = 1, 32, 32\ngrid = {\"dim\":dim, \"Nx\":Nx, \"Ny\":Ny}\n\nss = pygl.dms.FourierSpectral(grid)\n```\n\n\n```python\ndef f1(kk, x):\n return np.cos(kk*x)\n\n\nf = plt.figure(figsize=(10, 5), dpi=80); \n\nL, N = Nx, Nx; x=np.arange(0, N); fac=2*np.pi/L\nk = ss.kx\n\nfk = np.fft.fft(f1(fac, x)) \nf1_kk = -k*k*fk \nf1_xx = np.real(np.fft.ifft(f1_kk))\n\n \nplt.plot(x, -f1(fac, x)*fac*fac, color=\"#348ABD\", label = 'analytical', linewidth=2) \nplt.plot(x, f1_xx, 'o', color=\"#A60628\", label = 'numerical', markersize=6) \nplt.legend(loc = 'best'); plt.xlabel('x', fontsize=15);\n```\n", "meta": {"hexsha": "0b74bf14e8bda57564ef1d0cded43b029dba1794", "size": 174920, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "examples/fourierMethod.ipynb", "max_stars_repo_name": "rajeshrinet/pyGL", "max_stars_repo_head_hexsha": "924aa8b9a0ab39aa7014a8e130b0bd4502380aa0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 12, "max_stars_repo_stars_event_min_datetime": "2020-08-11T12:55:16.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-01T15:49:22.000Z", "max_issues_repo_path": "examples/fourierMethod.ipynb", "max_issues_repo_name": "rajeshrinet/pyGL", "max_issues_repo_head_hexsha": "924aa8b9a0ab39aa7014a8e130b0bd4502380aa0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "examples/fourierMethod.ipynb", "max_forks_repo_name": "rajeshrinet/pyGL", "max_forks_repo_head_hexsha": "924aa8b9a0ab39aa7014a8e130b0bd4502380aa0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-07-11T01:17:42.000Z", "max_forks_repo_forks_event_max_datetime": "2021-10-05T12:52:49.000Z", "avg_line_length": 655.1310861423, "max_line_length": 88400, "alphanum_fraction": 0.9453121427, "converted": true, "num_tokens": 2013, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9518632247867715, "lm_q2_score": 0.8902942188450159, "lm_q1q2_score": 0.8474383261588365}} {"text": "## 3.2 Decentralized Gradient Descent (Adapt-then-combine)\n\n### 3.2.1 Problem and distributed gradient descent\n\nConsider $n$ computing nodes collaborate to solve the problem:\n\n$$\\min_{x\\in \\mathbb{R}^d} \\quad \\frac{1}{n}\\sum_{i=1}^n f_i(x)$$\n\nwhere $f_i(x)$ is a local and private function held by node $i$. Each node $i$ can access its own variable $x$ or gradient $\\nabla f_i(x)$, but it has to communicate to access information from other nodes.\n\nIf each $f_i(x)$ is assumed to be smooth, the leading algorithm to solve the above problem is gradient descent:\n\n$$\\begin{align}\nx^{(k+1)} = \\frac{1}{n}\\sum_{i=1}^n \\Big(x^{(k)} - \\alpha \\nabla f_i(x^{(k)}) \\Big) \\quad \\mbox{(distributed gradient descent)}\n\\end{align}$$\n\n### 3.2.2 Decentralized gradient descent\n\nIn this section we disscuss a new gradient descent that can solve the optimization problem in a decentralized manner. To this end, we first organize all computing nodes with a connected network topology. Next we generate a doubly stochastic combination matrix $W$. 
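As a concrete illustration (independent of any particular BlueFog routine), Metropolis-Hastings weights on a ring of $n$ nodes give one such doubly stochastic matrix; a small numpy sketch:

```python
import numpy as np

def ring_metropolis_weights(n):
    # Each node is connected to its two ring neighbours; with all degrees equal
    # to 2, the Metropolis-Hastings rule w_ij = 1/(1 + max(deg_i, deg_j)) gives 1/3.
    W = np.zeros((n, n))
    for i in range(n):
        W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0 / 3.0
    np.fill_diagonal(W, 1.0 - W.sum(axis=1))
    return W

W = ring_metropolis_weights(8)
print(W.sum(axis=0), W.sum(axis=1))  # both vectors of ones: W is doubly stochastic
```
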
For more details on the network topology and combination matrix, please check Sections [2.2](../Sec-2.2-Network-topology.ipynb)-[2.4](../Sec-2.4-Combination-matrix-over-directed-network.ipynb).\n\nWith combination matrix $W$, the decentralized gradient descent has recursions as follows:\n\n\\begin{align}\ny_i^{(k)} &= x_i^{(k)} - \\alpha \\nabla f_i(x_i^{(k)}) \\hspace{5.5cm} \\mbox{(local update)} \\\\\nx_i^{(k+1)} &= \\sum_{j=1}^n w_{ij} y_j^{(k)} = w_{ii} y_i^{(k)} + \\sum_{j\\in\\mathcal{N}(i)}w_{ij} y_j^{(k)} \\hspace{2.1cm} \\mbox{(combination)}\n\\end{align}\n\nwhere $\\mathcal{N}(i)$ is the set of incoming neighbors of node $i$. It is observed that the combination step incurs partial averaging within neighborhood, which is different from the global averaging used in distributed gradient descent. The partical averaging typically triggers $O(1)$ bandwidth cost and $O(1)$ latency, which is independent of the number of all computing nodes $n$. On a sparse network such as the ring, or the one-peer exponential-two graph \\[Refs\\], decentralized gradient descent can save significant communications than distributed gradient descdent.\n\nDecentralized gradient descent has two major variants. One variant is as the above recursions. Since the recursion has the \"adapt-then-combine\" order, we will refer to it as the ATC-DGD \\[Refs\\]. The other variant will mix the adaptation (i.e., the gradient descent) and combination in the same update, which will be referred to as \"adapt-with-combination (AWC)\" decentralized gradient descent. There are subtle differences between these two variants. We will leave the discussion of AWC-DGD algorithm in the next section.\n\n### 3.2.3 Convergence properties\n\nWe give a brief descrption on the convergence property of the ATC-DGD algorithm. For $L$-smooth and $\\mu$-strongly convex problems, if the step-size $\\alpha$ is sufficiently small, the ATC-DGD algorithm will converge as follows.\n\n\\begin{align}\n\\frac{1}{n}\\sum_{i=1}^n \\|x_i^{(k)} - x^\\star \\|^2 = O\\Big( (1-\\alpha \\mu)^{k} + \\frac{\\alpha^2 \\rho^2 b^2}{(1-\\rho)^2}\\Big) \\hspace{1cm} \\mbox{(DGD-Convergence)}\n\\end{align}\n\nwhere $x^\\star$ is the glboal solution to the optimization problem, $\\rho = \\max\\{|\\lambda_2(W), \\lambda_n(W)|\\}$ and $b^2 = \\frac{1}{n}\\sum_{i=1}^n \\|\\nabla f_i(x^\\star)\\|^2$ denotes the data heterogeneity between nodes. Quantity $1-\\rho$ measures the connectivity of the network topology. It is observed that ATC-DGD cannot converge exactly to the solution $x^\\star$, but to a neighborhood around it. The limiting error is on the order of $O(\\frac{\\alpha^2 b^2}{(1-\\rho)^2})$. When step-size $\\alpha$ is small, or the data heterogeneity $b^2$ is small, or the network is well-connected, i.e., $\\rho \\to 0$, the limiting error can be negligible. \n\n### 3.2.4 An example: least-square problem\n\nIn this section, we will show a demo on how to solve a least-square problem with ATC-DGD using BlueFog. Suppose $n$ computing nodes collaborate to solve the following problem:\n\n$$\\min_x \\quad \\frac{1}{n}\\sum_{i=1}^n \\|A_i x - b_i\\|^2$$\n\nwhere $\\{A_i, b_i\\}$ are local data held in node $i$.\n\n#### 3.2.4.1 Set up BlueFog\n\nIn the following code, you should be able to see the id of your CPUs. 
We use 8 CPUs to conduct the following experiment.\n\n\n```\nimport ipyparallel as ipp\n\nrc = ipp.Client(profile=\"bluefog\")\ndview = rc[:] # A DirectView of all engines\ndview.block = True\nrc.ids\n```\n\n\n\n\n [0, 1, 2, 3, 4, 5, 6, 7]\n\n\n\n\n```\n%%px\nimport numpy as np\nimport bluefog.torch as bf\nimport torch\nfrom bluefog.common import topology_util\nimport networkx as nx\nimport matplotlib.pyplot as plt\n\n%matplotlib inline\n\nbf.init()\nprint(f\"Hello, I am {bf.rank()} among {bf.size()} processes\")\n```\n\n [stdout:0] Hello, I am 2 among 8 processes\n [stdout:1] Hello, I am 4 among 8 processes\n [stdout:2] Hello, I am 7 among 8 processes\n [stdout:3] Hello, I am 6 among 8 processes\n [stdout:4] Hello, I am 3 among 8 processes\n [stdout:5] Hello, I am 0 among 8 processes\n [stdout:6] Hello, I am 5 among 8 processes\n [stdout:7] Hello, I am 1 among 8 processes\n\n\n#### 3.2.4.2 Generate local data $A_i$ and $b_i$\n\n\n```\n%%px\n\n\ndef generate_data(m, n):\n\n A = torch.randn(m, n).to(torch.double)\n x_o = torch.randn(n, 1).to(torch.double)\n ns = 0.1 * torch.randn(m, 1).to(torch.double)\n b = A.mm(x_o) + ns\n\n return A, b\n```\n\n#### 3.2.4.3 Distributed gradient descent method\n\n\n```\n%%px\n\n\ndef distributed_grad_descent(A, b, maxite=5000, alpha=1e-1):\n\n x_opt = torch.zeros(n, 1, dtype=torch.double)\n\n for _ in range(maxite):\n # calculate local gradient\n grad_local = A.t().mm(A.mm(x_opt) - b)\n\n # global gradient\n grad = bf.allreduce(grad_local, name=\"gradient\")\n\n # distributed gradient descent\n x_opt = x_opt - alpha * grad\n\n grad_local = A.t().mm(A.mm(x_opt) - b)\n grad = bf.allreduce(grad_local, name=\"gradient\") # global gradient\n\n # evaluate the convergence of distributed gradient descent\n # the norm of global gradient is expected to 0 (optimality condition)\n global_grad_norm = torch.norm(grad, p=2)\n if bf.rank() == 0:\n print(\n \"[Distributed Grad Descent] Rank {}: global gradient norm: {}\".format(\n bf.rank(), global_grad_norm\n )\n )\n\n return x_opt\n```\n\nIn the following code we run distributed gradient descent to achieve the global solution $x^\\star$ to the optimization problem. To validate whether $x^\\star$ is optimal, it is enough to examine $\\frac{1}{n}\\sum_{i=1}^n \\nabla f_i(x^\\star) = 0$.\n\n\n```\n%%px\n\nm, n = 20, 5\nA, b = generate_data(m, n)\nx_opt = distributed_grad_descent(A, b, maxite=200, alpha=1e-2)\n```\n\n [stdout:5] [Distributed Grad Descent] Rank 0: global gradient norm: 7.236065407667669e-15\n\n\n#### 3.2.4.3 Decentralized gradient descent method\n\nIn this section, we depict the convergence curve of the decentralied gradient descent (the ATC version). We will utilize the $x^\\star$ achieved by distributed gradient descent as the optimal solution. 
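(For readers without an MPI/BlueFog environment, the same adapt-then-combine recursion can also be simulated in a single process. The following self-contained numpy sketch uses hypothetical data and the ring weights from the sketch above; it only mirrors the structure of the distributed code defined next.)

```python
import numpy as np

np.random.seed(0)
n_nodes, m, d, alpha = 8, 20, 5, 3e-3
A = [np.random.randn(m, d) for _ in range(n_nodes)]
b = [Ai @ np.random.randn(d, 1) + 0.1 * np.random.randn(m, 1) for Ai in A]

W = np.zeros((n_nodes, n_nodes))            # doubly stochastic ring weights
for i in range(n_nodes):
    W[i, (i - 1) % n_nodes] = W[i, (i + 1) % n_nodes] = 1.0 / 3.0
np.fill_diagonal(W, 1.0 / 3.0)

x = np.zeros((n_nodes, d, 1))
for _ in range(200):
    # adapt: one local gradient step per node
    y = np.stack([x[i] - alpha * A[i].T @ (A[i] @ x[i] - b[i]) for i in range(n_nodes)])
    # combine: neighbourhood averaging with the weights in W
    x = np.einsum('ij,jkl->ikl', W, y)
```
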
First, we define one step of the ATC-DGD method.\n\n\n```\n%%px\n\n\ndef ATC_DGD_one_step(x, x_opt, A, b, alpha=1e-2):\n\n # one-step ATC-DGD.\n # The combination weights have been determined by the associated combination matrix.\n\n grad_local = A.t().mm(A.mm(x) - b) # compute local grad\n y = x - alpha * grad_local # adapte\n x_new = bf.neighbor_allreduce(y) # combination\n\n # the relative error: |x^k-x_gloval_average|/|x_gloval_average|\n rel_error = torch.norm(x_new - x_opt, p=2) / torch.norm(x_opt, p=2)\n\n return x_new, rel_error\n```\n\nNext we run ATC-DGD algorithm.\n\n\n```\n%%px\n\n# Set topology as exponential-two topology.\nG = topology_util.ExponentialTwoGraph(bf.size())\nbf.set_topology(G)\n\nmaxite = 200\nx = torch.zeros(n, 1, dtype=torch.double) # Initialize x\nrel_error = torch.zeros((maxite, 1))\nfor ite in range(maxite):\n\n if bf.rank() == 0:\n if ite % 10 == 0:\n print(\"Progress {}/{}\".format(ite, maxite))\n\n x, rel_error[ite] = ATC_DGD_one_step(\n x, x_opt, A, b, alpha=3e-3\n ) # you can adjust alpha to different values\n```\n\n [stdout:5] \n Progress 0/200\n Progress 10/200\n Progress 20/200\n Progress 30/200\n Progress 40/200\n Progress 50/200\n Progress 60/200\n Progress 70/200\n Progress 80/200\n Progress 90/200\n Progress 100/200\n Progress 110/200\n Progress 120/200\n Progress 130/200\n Progress 140/200\n Progress 150/200\n Progress 160/200\n Progress 170/200\n Progress 180/200\n Progress 190/200\n\n\nIn the following, we adjust step-size to differnt values to examine its influence on the convergence rate and limiting bias.\n\n\n```\n# collect relative error from node 0 for step-size 1e-2\nrel_error_exp2_alpha1em2 = dview.pull(\"rel_error\", block=True, targets=0)\n```\n\n\n```\n# collect relative error from node 0 for step-size 5e-3\nrel_error_exp2_alpha5em3 = dview.pull(\"rel_error\", block=True, targets=0)\n```\n\n\n```\n# collect relative error from node 0 for step-size 3e-3\nrel_error_exp2_alpha3em3 = dview.pull(\"rel_error\", block=True, targets=0)\n```\n\n\n```\nimport matplotlib.pyplot as plt\n\n%matplotlib inline\n\nplt.semilogy(rel_error_exp2_alpha1em2)\nplt.semilogy(rel_error_exp2_alpha5em3)\nplt.semilogy(rel_error_exp2_alpha3em3)\n\nplt.legend([\"1e-2\", \"5e-3\", \"3e-3\"], fontsize=16)\n\nplt.xlabel(\"Iteration\", fontsize=16)\nplt.ylabel(\"Relative error\", fontsize=16)\n```\n\nIt is observed from the above figures that smaller alpha can lead to more accuate solution, but the convergence rate will get slower. This observation is consistent with Eq. 
(DGD-Convergence).\n", "meta": {"hexsha": "219f04cdc1c7ff679b9cf3b9cb26dc73d8563d1b", "size": 35868, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Section 3/Sec-3.2-Decentralized-gradient-descent-ATC.ipynb", "max_stars_repo_name": "Bluefog-Lib/bluefog-tutorial", "max_stars_repo_head_hexsha": "a9b371376dafd40d4dee8c61e9060f4e2cec773d", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-10-01T07:51:59.000Z", "max_stars_repo_stars_event_max_datetime": "2021-10-01T07:51:59.000Z", "max_issues_repo_path": "Section 3/Sec-3.2-Decentralized-gradient-descent-ATC.ipynb", "max_issues_repo_name": "Bluefog-Lib/bluefog-tutorial", "max_issues_repo_head_hexsha": "a9b371376dafd40d4dee8c61e9060f4e2cec773d", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Section 3/Sec-3.2-Decentralized-gradient-descent-ATC.ipynb", "max_forks_repo_name": "Bluefog-Lib/bluefog-tutorial", "max_forks_repo_head_hexsha": "a9b371376dafd40d4dee8c61e9060f4e2cec773d", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 79.178807947, "max_line_length": 21008, "alphanum_fraction": 0.7995148879, "converted": true, "num_tokens": 2843, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9353465170505204, "lm_q2_score": 0.9059898184796793, "lm_q1q2_score": 0.8474144211982012}} {"text": "#
Automatic Differentiation
\n###
[Dr David Race](dr.david.race@gmail.com)
\n\nThis notebook is specifically designed to provide a quick demonstration of \"autograd\" capabilities. This is designed to be the first in a series on convex function minimization within Machine Learning (ML) environments, so it starts with the basics of differentiation. This notebook uses the \"tensor\" concepts to demonstrate some of the nice methods available with both the HIPS and PyTorch packages. These are both foundations for Deep Learning (DL) environments, but are equally adapt with some standard mathematics.\n\nThere are two main sections:\n1. HIPS\n2. PyTorch\n\nThe examples grow in sophistication as the notebook progresses, so be sure to follow the instructions carefully.\n\nNOTE: This is designed to run in Colaboratory, but is likely to run in most other Jupyter envrionments also. In particular this does not connect to a GPU, so it will run on minimal hardware.\n\n##1. Differentiation with HIPS Autograd\n\nsympy is a great package for symbolic differentiation, but sympy will just be used for simple comarison of results so you can better understand the results. autograd is much more appropriate for larger problems so is more important for Differential Equations and Linear Algebra applications. The reference for autograd is found at [HIPS - Autograd](https://github.com/HIPS/autograd).\n\n### 1.1 - Set Up Environment\n\nThis section installs Autograd into the Colaboratory environment and imports the standard python numeric packages.\n\n\n```\n!pip install autograd\n#python imports\nimport os, sys\n#numpy\nimport numpy as np\nfrom numpy import linspace\nimport scipy as sp\n#sympy\nimport sympy as smp\nfrom sympy import *\nfrom sympy import Function, Symbol\nfrom sympy import Derivative\n#\nimport autograd\n```\n\nThse section sets up the graphics \n\n\n```\n#The general plot capabilities\nimport matplotlib\nimport matplotlib.pyplot as plt\n#Since these are images, turn off the grids\nmatplotlib.rc('axes',**{'grid':False})\n# sympy plotting\nfrom sympy.plotting import plot\n#seaborn\nimport seaborn as sns\n```\n\n### 1.2 -Example 1 - $f(x) = x^2 + 4$\n\nWe start with the basic problem and progress along the knowledge path.\n\n#### 1.2.1 Sympy Implementation\n\nThis section uses the symbolic package to derive the known quantities for our function. This could be done by hand, but is intended to show how the results mesh together.\n\n\n```\n#Define the function\nx = Symbol('x')\nf = Function('f')(x)\nf = x**2 + 4\n#Show the function definition\nprint(\"The function f\")\nsmp.pprint(f)\n#take the derivative\nf_prime = f.diff(x)\nprint('The derivative of f')\nsmp.pprint(f_prime)\n# Plot the function and derivative\np1 = plot(f,xlim=(-3.0,3.0),ylim=(0.0,12.0))\n# Compute the values of f between -3 and 3\nf_n = lambdify(x, f, \"numpy\")\nf_prime_n = lambdify(x,f_prime,\"numpy\")\nx_vals = linspace(-3.0, 3.0)\ny_vals = f_n(x_vals)\ny_prime_vals = f_prime_n(x_vals)\n\nsns.set_style('dark')\nfig, ax = plt.subplots()\nplt.ylim(0.0,12.0)\nplt.yticks(np.arange(1,13))\nax.axvline(0.0, color='k')\nax.axhline(0.0, color='k')\nfn, = ax.plot(x_vals,y_vals, label='$f$')\nfprimen, = ax.plot(x_vals,y_prime_vals, label='$\\\\frac{\\\\partial f}{\\\\partial x}$')\nplt.legend(handles=[fn, fprimen])\nplt.show()\n```\n\nThis is a standard an easily understood problem, so not much effort is put into the plot. 
The main point is generation of the x and y values.\n\n#### 1.2.2 Autograd Implementation\n\nAutograd understands the same type operations, but rather than a focus on symbolic computation the focus is on numeric computation using a similar underlying framework. The main difference is that the gradient (yes, these are the partial derivatives) are taken relative to a scalar \"loss\" value. Therefore when working with tensors of numbers, we need to define the function that will be differentiated in terms of a loss value (NOTE: The use of the loss value stems from Machine Learning.) The following code generates the same example data.\n\nIt may not be obvious, but this provides a way to automatically compute the derivative of a function at many point concurrently. Here is the process:\n\n1. Define your function, $f$, so it inputs a tensor (vector, matrix, etc).\n2. Define your loss function, $loss_f$ to be $np.sum(f(x)) $\n3. Define the gradient of $f$ to be $grad(loss_f)$\n4. Then for clarity, define a function g, that outputs the $f(x)$ and $f^\\prime(x)$\n\nThese steps are shown in the next example:\n\n\n\n```\nimport autograd.numpy as np #This is so the gradient understands the numpy operations\nfrom autograd import grad\n#Follow the steps\n\ndef f(x):\n y = x*x + 4.0\n return y\ndef loss_f(x):\n loss = np.sum(f(x))\n return loss\nf_p = grad(loss_f)\ndef g(x):\n return f(x), f_p(x)\n#Compute points\ny, y_p = g(x_vals)\n#plot\nsns.set_style('dark')\nfig, ax = plt.subplots()\nplt.ylim(0.0,12.0)\nplt.yticks(np.arange(1,13))\nax.axvline(0.0, color='k')\nax.axhline(0.0, color='k')\nfn, = ax.plot(x_vals,y, label='$f$')\nfprimen, = ax.plot(x_vals,y_p, label='$\\\\frac{\\\\partial f}{\\\\partial x}$')\nplt.legend(handles=[fn,fprimen])\nplt.show()\n#\n# check the results\n#\nmax_der_diff = np.max(np.abs(y_p - y_prime_vals))\nprint(\"The max difference in the derivative computation: {:.8f}\".format(max_der_diff))\n```\n\nAs you can see, ther results are exactly the same as expected and performs correctly. Lets, see why:\n\nRecall from above, we defined the loss function as $np.sum(f(x))$, thus $loss_f = \\sum_{i=0}^{N-1} f(x_i)$; therefore,\n\n$\\frac{\\partial f(x_i)}{\\partial x_i} = f^{\\prime}(x_i)$\n\nsince the value of $f(x_i)$ only appears once in the summation.\n\nConsequently, using the $np.sum$ function provides a quick way to compute the derivatives of $f$ for the input $x$ values.\n\nObviously we now want to consider a second derivative. 
This is computed using the following code cell.\n\n\n```\nimport autograd.numpy as np #This is so the gradient understands the numpy operations\nfrom autograd import grad\n#Follow the steps\n\ndef f(x):\n y = x*x + 4.0\n return y\ndef loss_f(x):\n loss = np.sum(f(x))\n return loss\nf_p = grad(loss_f)\ndef loss_fp(x):\n loss = np.sum(f_p(x))\n return loss\nf_pp = grad(loss_fp)\ndef g(x):\n return f(x), f_p(x)\ndef h(x):\n return f(x), f_p(x), f_pp(x)\n\n#Compute points\ny, y_p,y_pp = h(x_vals)\npprint(y_pp)\n```\n\n### Example 1.3 - $f(x,y) = x^2 + y^2 + 4$\n\nIn this example, we will compute the gradient of f, namely of $grad(f) = \\nabla(f) = \\begin{bmatrix} \\frac{\\partial f}{\\partial x} \\\\ \\frac{\\partial f}{\\partial y} \\end{bmatrix} = \\begin{bmatrix}2x \\\\ 2y\\end{bmatrix}$ for a set of random points with $x,y \\in [0,1)$.\n\nThis works much the same as the previous example by leveraging the grad function and providing the appropriate loss function that can operate on multiple inputs concurrently.\n\n\n```\nimport autograd.numpy as np #This is so the gradient understands the numpy operations\nfrom autograd import grad\n#Follow the steps\n\ndef f(xy):\n z = xy[0]*xy[0] + xy[1]*xy[1] + 4.0\n return z\ndef loss_f(z):\n loss = np.sum(f(z))\n return loss\nf_p = grad(loss_f)\ndef g(xy):\n return np.array(f(xy)),np.array(f_p(xy))\n#Define the x and y\nx_vals = np.random.uniform(-1,1,50)\ny_vals = np.random.uniform(-1,1,50)\nxy = [x_vals,y_vals]\n#\n#Compute points\nz, z_p = g(xy)\n#Compute the formula values\nz_p_compute = np.array([2*x_vals,2*y_vals])\n\nmax_err = np.max(np.abs(z_p - z_p_compute))\npprint(\"Max Error: {:8f}\".format(max_err))\n\n```\n\nOnce again, the use of grad allows for easy computation of exactly the values we need for computation.\n\n### 1.4 Conclusion\n\nWith autograd, the computation of gradients is automatic. Even though autograd has its primary use in Machine Learning, this tool can be very powerful for mathematics operations since it supports both GPUs and targets numpy compatibility.\n\n## 2. Differentiation with PyTorch.autograd\n\nThe Autograd is a very nice package, but at this point PyTorch probably has a larger user community and it is also very pythonic. PyTorch has GPU support, but it doesn't overload the numpy packages. Given its sponsors (including Facebook), the implementation for Machine Learning is very robust and it has several pre-trained models that are ready for use in solving problems. 
This series of studies on using gradients generally focuses on PyTorch; however, most of the work can be done within Autograd.\n\nThe documentation for Pytorch can be found at [Docs](https://pytorch.org/docs/stable/index.html).\n\n###2.1 Set up Environment\n\n\n```\n#\n!pip3 install -U torch\n#\nimport torch as torch\nimport torch.tensor as T\nimport torch.autograd as t_autograd #normally I use autograd, but I want to distinguish between autograd and torch\n#\n# Output Environment Information\n#\nhas_cuda = torch.cuda.is_available()\ncurrent_device = torch.cuda.current_device() if has_cuda else -1\ngpu_count = torch.cuda.device_count() if has_cuda else -1\ngpu_name = torch.cuda.get_device_name(current_device) if has_cuda else \"NA\"\nprint(\"Current device {}\".format(current_device))\nprint(\"Number of devices: {:d}\".format(gpu_count))\nprint(\"Current GPU Number: {:d}\".format(current_device))\nprint(\"GPU Name: {:s}\".format(gpu_name))\n#Set the accelerator variable\naccelerator = 'cuda' if has_cuda else 'cpu'\nprint(\"Accelerator: {:s}\".format(accelerator))\n```\n\n### 2.2 -Example 1 - $f(x) = x^2 + 4$\n\nThis section solves the same problem as the previous section, but is written to accomodate a GPU so it includes some of the details to use a GPU.\n\n\n```\n#define setup\n#\nN = 50\ndevice = torch.device(accelerator)\n#\n#Define the function and loss\n#\ndef f(x):\n y = x * x + 2.0\n return y\ndef loss_f(x):\n z = f(x).sum()\n return z\ndef f_p(x):\n z = loss_f(x)\n z.backward()\n return x.grad\nx_val = np.linspace(-3., 3.0, N)\nx = T(x_val, requires_grad = True).to(device)\n#Get the data\nx_vals = x.data.numpy()\ny = f(x).data.numpy()\ny_p = f_p(x).data.numpy()\n#Graph\nsns.set_style('dark')\nfig, ax = plt.subplots()\nplt.ylim(0.0,12.0)\nplt.yticks(np.arange(1,13))\nax.axvline(0.0, color='k')\nax.axhline(0.0, color='k')\nfn, = ax.plot(x_vals,y, label='$f$')\nfprimen, = ax.plot(x_vals,y_p, label='$\\\\frac{\\\\partial f}{\\\\partial x}$')\nplt.legend(handles=[fn,fprimen])\nplt.show()\n#\n# check the results\n#\nmax_der_diff = np.max(np.abs(y_p - y_prime_vals))\nprint(\"The max difference in the derivative computation: {:.8f}\".format(max_der_diff))\n\n```\n\nAs you can see, the computations are similar to using Autograd, but instead of using a grad function this uses a backward function (backward is the word in Machine Learning that computes the derivative relative to the loss) and then grad is a property of the variable that us used for the computation that required the gradient.\n\n### Example 2.3 - $f(x,y) = x^2 + y^2 + 4$\n\n\n```\n\n#Follow the steps\n\ndef f(xy):\n z = xy[0]*xy[0] + xy[1]*xy[1] + 4.0\n return z\ndef loss_f(z):\n loss = f(z).sum()\n return loss\ndef f_p(xy):\n z = loss_f(xy)\n z.backward()\n return xy.grad\ndef g(xy):\n return f(xy).data.numpy(),f_p(xy).data.numpy()\n#Define the x and y\nx_vals = np.random.uniform(-1,1,50)\ny_vals = np.random.uniform(-1,1,50)\nxy = T([x_vals,y_vals], requires_grad = True).to(device)\n#\n#Compute points\nz, z_p = g(xy)\n#Compute the formula values\nz_p_compute = np.array([2*x_vals,2*y_vals])\n\nmax_err = np.max(np.abs(z_p - z_p_compute))\npprint(\"Max Error: {:8f}\".format(max_err))\n```\n\n### 2.4 Conclusion\n\nPyTorch provide both a numpy compatible interface for the numpy functions, so starting with a minimal set of code using numpy, it is easy to scale up to use GPUs and PyTorch.autograd. 
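As a closing aside, the same derivatives can also be obtained functionally through `torch.autograd.grad`, without writing to or reading the `.grad` attribute; a minimal sketch, independent of the code above:

```python
import torch

x = torch.linspace(-3.0, 3.0, 50, requires_grad=True)

def f(x):
    return x * x + 2.0

# gradient of sum(f(x)) w.r.t. x gives the pointwise derivative 2x
(dfdx,) = torch.autograd.grad(f(x).sum(), x)
print(torch.max(torch.abs(dfdx - 2.0 * x)).item())
```
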
The interworking of PyTorch.autograd are exactly as we expect.\n\n## Overall Conclusion\n\nBoth Autograd and PyTorch are environments for Machine Learning, but they provide many benefits to numerical computations and modeling. These free tools coupled with Colaboratory greatly expands the types of mathematic modeling and computations that are available to developers. Autograd appears to have a smaller footprint, but PyTorch appears to have a larger following (especially when considering ML). Using both isn't a bad option depending on the resources available for processing.\n", "meta": {"hexsha": "b476d438483097271a689a620cdfaa95ca263cd4", "size": 17289, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "0-Automatic_Differentiation.ipynb", "max_stars_repo_name": "drdavidrace/Deep_Learning_Environment_Fun", "max_stars_repo_head_hexsha": "dc7da485ad44198ba3c986cfbe55d143711b090d", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "0-Automatic_Differentiation.ipynb", "max_issues_repo_name": "drdavidrace/Deep_Learning_Environment_Fun", "max_issues_repo_head_hexsha": "dc7da485ad44198ba3c986cfbe55d143711b090d", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "0-Automatic_Differentiation.ipynb", "max_forks_repo_name": "drdavidrace/Deep_Learning_Environment_Fun", "max_forks_repo_head_hexsha": "dc7da485ad44198ba3c986cfbe55d143711b090d", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 17289.0, "max_line_length": 17289, "alphanum_fraction": 0.684423622, "converted": true, "num_tokens": 3324, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.903294214513915, "lm_q2_score": 0.9381240155738166, "lm_q1q2_score": 0.8474019957643905}} {"text": "## Taylor expansions\n\nWe first import some prerequisite modules, most importantly Sympy and Matplotlib. \n\n\n```python\ndef uniq(seq): \n # order preserving\n seen = {}\n result = []\n for item in seq:\n if item in seen: \n continue\n seen[item] = 1\n result.append(item)\n return result\n```\n\n\n```python\nimport sympy as sm\nfrom sympy.abc import x\nfrom matplotlib.colors import cnames\n```\n\n\n```python\nimport matplotlib\nfrom matplotlib.colors import rgb2hex\n\nfrom numpy import arange\n```\n\n\n```python\nfrom sympy import Symbol,plot\nimport matplotlib.pyplot as plt\n\ndef move_sympyplot_to_axes(p, ax):\n backend = p.backend(p)\n backend.ax = ax\n backend.process_series()\n backend.ax.spines['right'].set_color('none')\n backend.ax.spines['bottom'].set_position('zero')\n backend.ax.spines['top'].set_color('none')\n plt.close(backend.fig)\n```\n\nThe following routine will plot the function `f` together with its Taylor polynomials up to the degree `degree`, in the square box with sides `[-A,A]`. 
We will use it several times for different functions `f`.\n\n\n```python\ndef taylorplot(f, degree, A):\n L = [f.series(n=i).removeO() for i in range (1, degree+2)] # +1 for the offset of `range`, another +1 for the offset of `.series`\n L = uniq(L)\n L.insert(0,f)\n # Pick colors from the Blues colorscheme\n a = arange(0,1,1/len(L))\n c = matplotlib.cm.Blues(a)[:,:3]\n \n # Plot the original function in red, then add the graphs of Taylor polynomials in different shades of blue\n p1 = sm.plot(L[0], xlim=[-A,A], ylim=[-A,A], line_color = 'r', show=False)\n for i in range(1,len(L)):\n p1.extend(sm.plot(L[i], xlim=[-A,A], ylim=[-A,A], line_color = rgb2hex(c[i,:]), show=False))\n p1.legend = False\n # p1.show()\n \n # Plot the original function alone\n p2 = sm.plot(L[0], xlim=[-A,A], ylim=[-A,A], line_color = 'r', show=False)\n p2.legend = False\n p2.grid = True\n # p2.show()\n\n # Generate the legend by LaTeXing the expressions for the functions involved.\n l =[]\n for i in range(len(L)):\n l.append(sm.latex(L[i], mode='inline'))\n\n # %matplotlib gtk3 # uncomment to use the external plot window\n # Combine the two resulting plots, add legend\n fig, (ax1,ax2) = plt.subplots(ncols=2,figsize=(20,10))\n move_sympyplot_to_axes(p1, ax1)\n move_sympyplot_to_axes(p2, ax2)\n fig.legend(l,loc='lower center', bbox_to_anchor=(0.5, 0.0),\n ncol=3, fancybox=True, shadow=True)\n plt.show()\n```\n\n----\n\nLet us first plot the partial sums of the Maclaurin series of the Bessel function $J_0$, up to the degree 14. We will restrict the graphs to the box `[-10,10]` by `[-10,10]`:\n\n\n```python\nB = sm.besselj(0,x)\ntaylorplot(B, 14, 10)\n```\n\nIn these graphs we show the Taylor polynomials (centered at $a=0$ here), using the darker shade to denote to the higher degree polynomial $T_n(x)$. The plot on the right shows $J_0(x)$ itself.\n\n----\n\nNext, let us do the same with $\\sin(x)$. We plot its Taylor polynomials at 0 up to degree 13 in the box `[-10,10]` by `[-10,10]`.\n\n\n```python\nS = sm.sin(x)\ntaylorplot(S, 13,10)\n```\n\n---\n\nFinally, let us look at two examples in which the Maclaurin series of a function does not converge for all `x`. The first example is $ 1/(1-x) $; we will plot its Taylor polynomials up to degree 10 in the square box `[-6,6]` by `[-6,6]`.\n\n\n```python\nf1 = 1/(1-x)\ntaylorplot(f1,10,6)\n```\n\nObserve how the polynomials $ T_n $ diverge away from the graph of the function `f1` when $x=-1$, even though the function does not have a singularity there. This is of course because the radius of convergence of the geometric series $$ \\sum_{n=0}^\\infty x^n $$ is $1$. Since the interval of convergence of a series must be symmetric, divergence of the series at $ x = 1 $, where the function `f1` has a singularity, means that the series must also diverge at $x = -1$.\n\n---\n\nThe following example is even more subtle. The function we look at here, `f2`$= 1/(1+x^2) $, does not exhibit any singularities at all! On the other hand, its Maclaurin series $$ \\sum_{n=0}^\\infty\\, (-1)^n x^{2n} $$ diverges as soon as $|x|\\geq 1 $. Let us plot the Taylor polynomials at $ a = 0 $ up to the degree $ n = 20$.\n\n\n```python\nf2 = 1/(1+x**2)\ntaylorplot(f2,20,6)\n```\n\n**Outside of Calc2:** It turns out, to understand the reason for such behavior of this series, one has to extend the function `f2`$=1/(1+x^2)$ to *complex numbers*, where for $x = i$, our function `f2` has a singularity (the complex number $i$ is such that $i^2+1 = 0$, turning the denominator of `f2` into $0$). 
In the complex plane, series converge on *discs* instead of intervals, so because the function `f2` has a singularity at the point $0+i\\,1$, its series centered at $a=0$ will have the radius of convergence equal to $1$. We show this relation in the following graph. In it, a point $(x,y)$ in the plane stands for the complex number $z = x+ iy$.\n\n\n```python\n fig, ax = plt.subplots(figsize=(10,10))\n circle = matplotlib.patches.Circle ((0, 0), 1, edgecolor='none',alpha=0.5)\n p = matplotlib.collections.PatchCollection([circle], alpha=0.4)\n ax.add_collection(p);\n ax.scatter([0.0],[1.0],marker='o',c='r',s=40) \n ax.annotate('Singularity of f2 at $0+i$', xy=(0.1, 1.05), xytext=(1, 1.5),\n arrowprops=dict(arrowstyle=\"->\")\n )\n ax.annotate('For complex $z= x + i\\,y$\\nthat lie inside this disc,\\nthe series converges.', xy=(0.5, -0.5), xytext=(0.5, -1.5),\n arrowprops=dict(arrowstyle=\"->\")\n )\n ax.annotate('The series will diverge at the\\n points of the disc boundary', xy=(-1.02, -0.13), xytext=(-2, -1),\n arrowprops=dict(arrowstyle=\"->\")\n )\n # This adjusts the position of the axes\n ax.spines['left'].set_position('center')\n ax.spines['bottom'].set_position('center')\n ax.spines['right'].set_color('none')\n ax.spines['top'].set_color('none')\n ax.xaxis.set_ticks_position('bottom')\n ax.yaxis.set_ticks_position('left')\n \n plt.ylim(-2,2)\n plt.xlim(-2,2)\n plt.show()\n```\n", "meta": {"hexsha": "86ba68b4c905faee75caafef2f71823e3c84b49c", "size": 384117, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "assets/taylor.ipynb", "max_stars_repo_name": "OVlasiuk/ovlasiuk.github.io", "max_stars_repo_head_hexsha": "ca3b6317fcc458945c6cfe3fd16a8b2622e836a2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "assets/taylor.ipynb", "max_issues_repo_name": "OVlasiuk/ovlasiuk.github.io", "max_issues_repo_head_hexsha": "ca3b6317fcc458945c6cfe3fd16a8b2622e836a2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "assets/taylor.ipynb", "max_forks_repo_name": "OVlasiuk/ovlasiuk.github.io", "max_forks_repo_head_hexsha": "ca3b6317fcc458945c6cfe3fd16a8b2622e836a2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 1055.2664835165, "max_line_length": 101840, "alphanum_fraction": 0.9553729723, "converted": true, "num_tokens": 1811, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9086179018818865, "lm_q2_score": 0.932453306514029, "lm_q1q2_score": 0.8472437669676047}} {"text": "# Solutions to 1st-order ODEs\n\n## 1. 
Solution by direct integration\n\nWhen equations are of this form, we can directly integrate:\n\n\\begin{align}\n\\frac{dy}{dx} &= y^{\\prime} = f(x) \\\\\n\\int dy &= \\int f(x) dx \\\\\ny(x) &= \\int f(x) dx + C\n\\end{align}\n\nFor example:\n\\begin{align}\n\\frac{dy}{dx} &= x^2 \\\\\ny(x) &= \\frac{1}{3} x^3 + C\n\\end{align}\n\nWhile these problems look simple, there may not be an obvious closed-form solution to all:\n\n\\begin{align}\n\\frac{dy}{dx} &= e^{-x^2} \\\\\ny(x) &= \\int e^{-x^2} dx + C\n\\end{align}\n\n(You may recognize this as leading to the error function, $\\text{erf}$:\n$\\frac{1}{2} \\sqrt{\\pi} \\text{erf}(x) + C$,\nso the exact solution to the integral over the range $[0,1]$ is 0.7468.)\n\n## 2. Solution by separation of variables\n\nIf the given derivative is a separate function of $x$ and $y$, then we can solve via separation of variables:\n\\begin{align}\n\\frac{dy}{dx} &= f(x) g(y) = \\frac{h(x)}{j(y)} \\\\\n\\int \\frac{1}{g(y)} dy &= \\int f(x) dx\n\\end{align}\n\nFor example, consider this problem:\n\\begin{equation}\ny^{\\prime} = \\frac{dy}{dx} = 1 + y^2 \\\\\n\\end{equation}\nWe can separate this into a problem that looks like $f(y) dy = g(x) dx$, where $dy = \\frac{1}{1+y^2}$ and $g(x) = 1$.\n\\begin{align}\n\\int \\frac{dy}{1 + y^2} &= \\int dx \\\\\n\\arctan y &= x + c \\\\\ny(x) &= \\tan(x+c)\n\\end{align}\n\nUnfortunately, not every separable ODE can be integrated:\n\\begin{align}\n\\frac{dy}{dx} &= \\frac{e^x / 2 + 5}{y^2 + \\cos y} \\\\\n(y^2 + \\cos y) dy &= (e^x / 2 + 5) dx\n\\end{align}\n\n## 3. General solution to linear 1st-order ODEs\n\nGiven a general linear 1st-order ODE of the form\n\\begin{equation}\n\\frac{dy}{dx} + p(x) y = q(x)\n\\end{equation}\nwe can solve by integration factor:\n\\begin{equation}\ny(x) = e^{-\\int p(x) dx} \\left[ \\int e^{\\int p(x) dx} q(x) dx + C \\right]\n\\end{equation}\n\nFor example, in this equation\n\\begin{equation}\ny^{\\prime} + xy - 5 e^x = 0\n\\end{equation}\nafter rearranging to the standard form\n\\begin{equation}\ny^{\\prime} + xy = 5 e^x\n\\end{equation}\nwe see that $p(x) = x$ and $q(x) = 5e^x$.\n\n## 4. Solution to nonlinear 1st-order ODEs\n\nGiven a general nonlinear 1st-order ODE\n\\begin{equation}\n\\frac{dy}{dx} + p(x) y = q(x) y^a \n\\end{equation}\nwhere $a \\neq 1$ and $a$ is a constant. This is known as the Bernoulli equation.\n\nWe can solve by transforming to a linear equation, by changing the dependent variable from $y$ to $z$:\n\\begin{align}\n\\text{let} \\quad z &= y^{1-a} \\\\\n\\frac{dz}{dx} &= (1-a) y^{-a} \\frac{dy}{dx}\n\\end{align}\nMultiply the original equation by $(1-a) y^{-a}$:\n\\begin{align}\n(1-a) y^{-a} \\frac{dy}{dx} + (1-a) y^{-a} p(x) y &= (1-a) y^{-a} q(x) y^a \\\\\n\\frac{dz}{dx} + p(x) (1-a) z &= q(x) (1-a) \\;,\n\\end{align}\nwhich is now a *linear* first-order ODE, that looks like\n\\begin{equation}\n\\frac{dz}{dx} + p(x)^{\\prime} z = q(x)^{\\prime}\n\\end{equation}\nwhere $p(x)^{\\prime} = (1-a) p(x)$ and $q(x)^{\\prime} = (1-a)q(x)$. \n\nWe can solve this using the integrating-factor approach discussed above. 
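As an illustrative aside (SymPy is not otherwise used in this review, so treat this as an optional check rather than part of the derivation), `dsolve` reproduces the integrating-factor result for a simple linear case such as y' + 2y = e^x:

```python
# Optional SymPy check of the integrating-factor formula for y' + 2y = exp(x)
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

ode = sp.Eq(y(x).diff(x) + 2*y(x), sp.exp(x))
print(sp.dsolve(ode, y(x)))  # equivalent to y(x) = C1*exp(-2*x) + exp(x)/3
```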
Then, once we have $z(x)$, we can find $y(x)$:\n\\begin{align}\nz &= y^{1-a} \\\\\ny &= z^{\\frac{1}{1-a}}\n\\end{align}\n", "meta": {"hexsha": "83440f7da4a0327e5567411cf5c8228746a1b3dd", "size": 4832, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "docs/review-first-order.ipynb", "max_stars_repo_name": "kyleniemeyer/ME373", "max_stars_repo_head_hexsha": "db7e78ac21d7a2cc5bd9fc49cdc3614f2f0fe00e", "max_stars_repo_licenses": ["CC-BY-4.0", "MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-11-03T18:09:05.000Z", "max_stars_repo_stars_event_max_datetime": "2019-11-03T18:09:05.000Z", "max_issues_repo_path": "docs/review-first-order.ipynb", "max_issues_repo_name": "kyleniemeyer/ME373", "max_issues_repo_head_hexsha": "db7e78ac21d7a2cc5bd9fc49cdc3614f2f0fe00e", "max_issues_repo_licenses": ["CC-BY-4.0", "MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/review-first-order.ipynb", "max_forks_repo_name": "kyleniemeyer/ME373", "max_forks_repo_head_hexsha": "db7e78ac21d7a2cc5bd9fc49cdc3614f2f0fe00e", "max_forks_repo_licenses": ["CC-BY-4.0", "MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.7894736842, "max_line_length": 128, "alphanum_fraction": 0.4904801325, "converted": true, "num_tokens": 1150, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9407897558991953, "lm_q2_score": 0.9005297894548548, "lm_q1q2_score": 0.8472092008011867}} {"text": "# Pyomo - Getting started\n\nPyomo installation: see http://www.pyomo.org/installation\n\n```\npip install pyomo\n```\n\nSee also these excellent tutorials that details how to install Pyomo and several solvers (GLPK, COIN-OR CBC, COIN-OR Ipopt, ...):\n- https://nbviewer.jupyter.org/github/jckantor/ND-Pyomo-Cookbook/blob/master/notebooks/01.01-Installing-Pyomo.ipynb\n- https://nbviewer.jupyter.org/github/jckantor/ND-Pyomo-Cookbook/blob/master/notebooks/01.02-Running-Pyomo-on-Google-Colab.ipynb\n- https://nbviewer.jupyter.org/github/jckantor/ND-Pyomo-Cookbook/blob/master/notebooks/01.04-Cross-Platform-Installation-of-Pyomo-and-Solvers.ipynb\n\n\n```python\nfrom pyomo.environ import *\n```\n\n## Example 1\n\n\n```python\nmodel = ConcreteModel(name=\"Getting started\")\n\nmodel.x = Var(bounds=(-10, 10))\n\nmodel.obj = Objective(expr=model.x)\n\nmodel.const_1 = Constraint(expr=model.x >= 5)\n\n# @tail:\nopt = SolverFactory('glpk') # \"glpk\" or \"cbc\"\n\nres = opt.solve(model) # solves and updates instance\n\nmodel.display()\n\nprint()\nprint(\"Optimal solution: \", value(model.x))\nprint(\"Cost of the optimal solution: \", value(model.obj))\n# @:tail\n```\n\n## Example 2\n\n$$\n\\begin{align}\n \\max_{x_1,x_2} & \\quad 4 x_1 + 3 x_2 \\\\\n \\text{s.t.} & \\quad x_1 + x_2 \\leq 100 \\\\\n & \\quad 2 x_1 + x_2 \\leq 150 \\\\\n & \\quad 3 x_1 + 4 x_2 \\leq 360 \\\\\n & \\quad x_1, x_2 \\geq 0\n\\end{align}\n$$\n\n```\nOptimal total cost is: 350.0\n\nx_1 = 50.\nx_2 = 50.\n```\n\n\n```python\nmodel = ConcreteModel(name=\"Getting started\")\n\nmodel.x1 = Var(within=NonNegativeReals)\nmodel.x2 = Var(within=NonNegativeReals)\n\nmodel.obj = Objective(expr=4. * model.x1 + 3. * model.x2, sense=maximize)\n\nmodel.ineq_const_1 = Constraint(expr=model.x1 + model.x2 <= 100)\nmodel.ineq_const_2 = Constraint(expr=2. * model.x1 + model.x2 <= 150)\nmodel.ineq_const_3 = Constraint(expr=3. 
* model.x1 + 4. * model.x2 <= 360)\n\n# @tail:\nopt = SolverFactory('glpk') # \"glpk\" or \"cbc\"\n\nresults = opt.solve(model) # solves and updates instance\n\nmodel.display()\n\nprint()\nprint(\"Optimal solution: ({}, {})\".format(value(model.x1), value(model.x2)))\nprint(\"Gain of the optimal solution: \", value(model.obj))\n# @:tail\n```\n", "meta": {"hexsha": "e5086a43cf1ca59397f82e4adc1f1099d94b012d", "size": 4099, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "nb_dev_python/python_pyomo_getting_started_1.ipynb", "max_stars_repo_name": "jdhp-docs/python-notebooks", "max_stars_repo_head_hexsha": "91a97ea5cf374337efa7409e4992ea3f26b99179", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2017-05-03T12:23:36.000Z", "max_stars_repo_stars_event_max_datetime": "2020-10-26T17:30:56.000Z", "max_issues_repo_path": "nb_dev_python/python_pyomo_getting_started_1.ipynb", "max_issues_repo_name": "jdhp-docs/python-notebooks", "max_issues_repo_head_hexsha": "91a97ea5cf374337efa7409e4992ea3f26b99179", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "nb_dev_python/python_pyomo_getting_started_1.ipynb", "max_forks_repo_name": "jdhp-docs/python-notebooks", "max_forks_repo_head_hexsha": "91a97ea5cf374337efa7409e4992ea3f26b99179", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-10-26T17:30:57.000Z", "max_forks_repo_forks_event_max_datetime": "2020-10-26T17:30:57.000Z", "avg_line_length": 25.4596273292, "max_line_length": 153, "alphanum_fraction": 0.5186630886, "converted": true, "num_tokens": 701, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9005297807787537, "lm_q2_score": 0.9407897442783527, "lm_q1q2_score": 0.8472091821738847}} {"text": "# Straight Forward Expansion \n\n\n```python\nimport sympy as sp\nfrom sympy.simplify.fu import TR7, TR8\n```\n\n\n```python\nN = 3\n\n# Define the symbolic parameters \n# epsilon = sp.symbols('epsilon_(1:' + str(N+1) + ')')\nepsilon = sp.symbols('epsilon')\nomega_0 = sp.symbols('omega_0')\nalpha = sp.symbols('alpha_(2:' + str(N+1) + ')')\n\n# Define time variable\nt = sp.symbols('t', real=True)\n\n# Define function variable \nx = sp.Function('x')(t)\nxdot = sp.Derivative(x, t) # first time derivative \nxddot = sp.Derivative(xdot, t) # second time derivative\n\n# EOM\nEOM = xddot + omega_0**2 * x + sum([alpha[i-2] * x**i for i in range(2,N+1)])\nEOM\n```\n\n\n\n\n$\\displaystyle \\alpha_{2} x^{2}{\\left(t \\right)} + \\alpha_{3} x^{3}{\\left(t \\right)} + \\omega_{0}^{2} x{\\left(t \\right)} + \\frac{d^{2}}{d t^{2}} x{\\left(t \\right)}$\n\n\n\n\n```python\n# Exponential expansion of the variable x \nx1 = sp.Function('x_1', real=True)(t)\nx2 = sp.Function('x_2', real=True)(t)\nx3 = sp.Function('x_3', real=True)(t)\n```\n\n\n```python\nx_i = (x1, x2, x3)\nx_e = sum([epsilon**i * x_i[i-1] for i in range(1,N+1)])\nx_e\n```\n\n\n\n\n$\\displaystyle \\epsilon^{3} \\operatorname{x_{3}}{\\left(t \\right)} + \\epsilon^{2} \\operatorname{x_{2}}{\\left(t \\right)} + \\epsilon \\operatorname{x_{1}}{\\left(t \\right)}$\n\n\n\n\n```python\n# Substitute this into the EOM \nEOM = EOM.subs(x, x_e)\nEOM\n```\n\n\n\n\n$\\displaystyle \\alpha_{2} \\left(\\epsilon^{3} \\operatorname{x_{3}}{\\left(t \\right)} + \\epsilon^{2} \\operatorname{x_{2}}{\\left(t \\right)} + \\epsilon \\operatorname{x_{1}}{\\left(t \\right)}\\right)^{2} + \\alpha_{3} \\left(\\epsilon^{3} \\operatorname{x_{3}}{\\left(t \\right)} + \\epsilon^{2} \\operatorname{x_{2}}{\\left(t \\right)} + \\epsilon \\operatorname{x_{1}}{\\left(t \\right)}\\right)^{3} + \\omega_{0}^{2} \\left(\\epsilon^{3} \\operatorname{x_{3}}{\\left(t \\right)} + \\epsilon^{2} \\operatorname{x_{2}}{\\left(t \\right)} + \\epsilon \\operatorname{x_{1}}{\\left(t \\right)}\\right) + \\frac{\\partial^{2}}{\\partial t^{2}} \\left(\\epsilon^{3} \\operatorname{x_{3}}{\\left(t \\right)} + \\epsilon^{2} \\operatorname{x_{2}}{\\left(t \\right)} + \\epsilon \\operatorname{x_{1}}{\\left(t \\right)}\\right)$\n\n\n\n\n```python\nEOM = sp.expand(sp.expand(EOM).doit())\n```\n\n\n```python\n# Collect the coefficients for the epsilons \nepsilon_Eq = sp.collect(EOM, epsilon, evaluate=False)\nepsilon_1_Eq = sp.Eq(epsilon_Eq[epsilon], 0)\nepsilon_1_Eq\n```\n\n\n\n\n$\\displaystyle \\omega_{0}^{2} \\operatorname{x_{1}}{\\left(t \\right)} + \\frac{d^{2}}{d t^{2}} \\operatorname{x_{1}}{\\left(t \\right)} = 0$\n\n\n\n\n```python\nepsilon_2_Eq = sp.Eq(epsilon_Eq[epsilon**2], 0)\nepsilon_2_Eq\n```\n\n\n\n\n$\\displaystyle \\alpha_{2} \\operatorname{x_{1}}^{2}{\\left(t \\right)} + \\omega_{0}^{2} \\operatorname{x_{2}}{\\left(t \\right)} + \\frac{d^{2}}{d t^{2}} \\operatorname{x_{2}}{\\left(t \\right)} = 0$\n\n\n\n\n```python\nepsilon_3_Eq = sp.Eq(epsilon_Eq[epsilon**3], 0)\nepsilon_3_Eq\n```\n\n\n\n\n$\\displaystyle 2 \\alpha_{2} \\operatorname{x_{1}}{\\left(t \\right)} \\operatorname{x_{2}}{\\left(t \\right)} + \\alpha_{3} \\operatorname{x_{1}}^{3}{\\left(t \\right)} + \\omega_{0}^{2} \\operatorname{x_{3}}{\\left(t \\right)} + \\frac{d^{2}}{d t^{2}} \\operatorname{x_{3}}{\\left(t \\right)} = 0$\n\n\n\n\n```python\nsp.dsolve(epsilon_1_Eq, x1)\n```\n\n\n\n\n$\\displaystyle 
\\operatorname{x_{1}}{\\left(t \\right)} = C_{1} e^{- i \\omega_{0} t} + C_{2} e^{i \\omega_{0} t}$\n\n\n\nNot as convenient as working in the polar form\n\n\n```python\na = sp.symbols('a')\nbeta = sp.symbols('beta')\nx1_polar = a * sp.cos(omega_0 * t + beta)\nx1_polar\n```\n\n\n\n\n$\\displaystyle a \\cos{\\left(\\beta + \\omega_{0} t \\right)}$\n\n\n\nUpdated $\\epsilon$-2 equation \n\n\n```python\nepsilon_2_Eq = sp.expand(TR7(epsilon_2_Eq.subs(x1, x1_polar)))\nepsilon_2_Eq\n```\n\n\n\n\n$\\displaystyle \\frac{a^{2} \\alpha_{2} \\cos{\\left(2 \\beta + 2 \\omega_{0} t \\right)}}{2} + \\frac{a^{2} \\alpha_{2}}{2} + \\omega_{0}^{2} \\operatorname{x_{2}}{\\left(t \\right)} + \\frac{d^{2}}{d t^{2}} \\operatorname{x_{2}}{\\left(t \\right)} = 0$\n\n\n\n\n```python\nsp.dsolve(epsilon_2_Eq, x2)\n```\n\n\n\n\n$\\displaystyle \\operatorname{x_{2}}{\\left(t \\right)} = C_{1} e^{- i \\omega_{0} t} + C_{2} e^{i \\omega_{0} t} + \\frac{a^{2} \\alpha_{2} \\cos{\\left(2 \\beta + 2 \\omega_{0} t \\right)}}{6 \\omega_{0}^{2}} - \\frac{a^{2} \\alpha_{2}}{2 \\omega_{0}^{2}}$\n\n\n\nWe only want to keep the particular solution in the above\n\n\n```python\nx2_p = alpha[0] * a**2 * (sp.cos(2 * omega_0 * t + 2*beta) - 3)/6/omega_0**2\nx2_p\n```\n\n\n\n\n$\\displaystyle \\frac{a^{2} \\alpha_{2} \\left(\\cos{\\left(2 \\beta + 2 \\omega_{0} t \\right)} - 3\\right)}{6 \\omega_{0}^{2}}$\n\n\n\nUpdated $\\epsilon$-3 equation\n\n\n```python\nepsilon_3_Eq = TR7(epsilon_3_Eq.subs([\n (x1, x1_polar), (x2, x2_p)\n]))\nepsilon_3_Eq = TR8(epsilon_3_Eq)\nepsilon_3_Eq\n```\n\n\n\n\n$\\displaystyle \\frac{a^{3} \\alpha_{2}^{2} \\left(- 5 \\cos{\\left(\\beta + \\omega_{0} t \\right)} + \\cos{\\left(3 \\beta + 3 \\omega_{0} t \\right)}\\right)}{6 \\omega_{0}^{2}} + a^{3} \\alpha_{3} \\left(\\frac{3 \\cos{\\left(\\beta + \\omega_{0} t \\right)}}{4} + \\frac{\\cos{\\left(3 \\beta + 3 \\omega_{0} t \\right)}}{4}\\right) + \\omega_{0}^{2} \\operatorname{x_{3}}{\\left(t \\right)} + \\frac{d^{2}}{d t^{2}} \\operatorname{x_{3}}{\\left(t \\right)} = 0$\n\n\n\n\n```python\nsp.dsolve(epsilon_3_Eq)\n```\n\n\n\n\n$\\displaystyle \\operatorname{x_{3}}{\\left(t \\right)} = C_{1} e^{- i \\omega_{0} t} + C_{2} e^{i \\omega_{0} t} + \\frac{5 a^{3} \\alpha_{2}^{2} t \\sin{\\left(\\beta + \\omega_{0} t \\right)}}{12 \\omega_{0}^{3}} + \\frac{a^{3} \\alpha_{2}^{2} \\cos{\\left(3 \\beta + 3 \\omega_{0} t \\right)}}{48 \\omega_{0}^{4}} - \\frac{3 a^{3} \\alpha_{3} t \\sin{\\left(\\beta + \\omega_{0} t \\right)}}{8 \\omega_{0}} + \\frac{a^{3} \\alpha_{3} \\cos{\\left(3 \\beta + 3 \\omega_{0} t \\right)}}{32 \\omega_{0}^{2}}$\n\n\n", "meta": {"hexsha": "fed8b6634941b0ef171636c033a14452d8f0500f", "size": 13871, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/straight_forward_expansion.ipynb", "max_stars_repo_name": "smallpondtom/pNLsys", "max_stars_repo_head_hexsha": "ebaea9c2945391fccd076f8006baee7066951097", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2021-12-17T16:44:14.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-22T05:40:43.000Z", "max_issues_repo_path": "notebooks/straight_forward_expansion.ipynb", "max_issues_repo_name": "smallpondtom/pNLsys", "max_issues_repo_head_hexsha": "ebaea9c2945391fccd076f8006baee7066951097", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/straight_forward_expansion.ipynb", 
"max_forks_repo_name": "smallpondtom/pNLsys", "max_forks_repo_head_hexsha": "ebaea9c2945391fccd076f8006baee7066951097", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.9620535714, "max_line_length": 838, "alphanum_fraction": 0.5120034605, "converted": true, "num_tokens": 2173, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9653811631528337, "lm_q2_score": 0.8774767810736693, "lm_q1q2_score": 0.8470995555525034}} {"text": "## Problem 2\n\n\n```python\nimport numpy as np\nimport sympy as sp\nimport scipy as sc\nimport matplotlib.pyplot as plt\nfrom math import e\n%matplotlib inline\n```\n\nThe given equation is $x(t)=A_1e^{s_1t} + A_2e^{s_2t}$ thus we can find the values of $A_1$ and $A_2$ by
substituting $A_2=1-A_1$ and $A_1=\\frac{-s_2}{s_1-s_2}$ using the given initial conditions.
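A small helper that carries out this substitution is sketched below (illustrative only; it assumes the initial conditions behind the quoted formulas are x(0) = 1 and x'(0) = 0):

```python
# Illustrative helper: amplitudes A1, A2 from the characteristic roots s1, s2,
# assuming x(0) = 1 and x'(0) = 0 (the conditions the formulas above correspond to).
def amplitudes(s1, s2):
    A1 = -s2 / (s1 - s2)
    A2 = 1 - A1
    return A1, A2
```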
Then get the values of $s_1$ and $s_2$ from the provided equation for parts A to D using the different values of $a$ and $b$. \n\n#### Part A\n\n\n```python\na=1\nb=0.25\ns1=-a/2 + np.sqrt((a/2)**2 - b**2)\ns2=-a/2 - np.sqrt((a/2)**2 - b**2) \nprint(\"s1=\",s1,\"s2=\",s2)\nA1=-s2/(s1-s2)\nA2=1-A1\n# A1=A2=1\nprint(\"A1=\",A1,\"A2=\",A2)\nt=np.arange(0,2*np.pi,0.1)\nx=A1*(e**(s1*t)) + A2*(e**(s2*t))\n# # plt.ylim(-10,10,0.5)\n# plt.xlim(-6,6,0.5)\nplt.xlabel(\"Period\")\nplt.ylabel(\"x(t)\")\nplt.plot(t,x)\nplt.legend(['for a=1 & b=0.25'])\nplt.show()\n```\n\n#### Part B\n\n\n```python\na=-1\nb=0.25\ns1=-a/2 + np.sqrt((a/2)**2 - b**2)\ns2=-a/2 - np.sqrt((a/2)**2 - b**2) \nprint(\"s1=\",s1,\"s2=\",s2)\nA1=-s2/(s1-s2)\nA2=1-A1\nprint(\"A1=\",A1,\"A2=\",A2)\nt=np.arange(0,2*np.pi,0.1)\nx=A1*(e**(s1*t)) + A2*(e**(s2*t))\nplt.xlabel(\"Period\")\nplt.ylabel(\"x(t)\")\nplt.plot(t,x)\nplt.legend(['for a=-1 & b=0.25'])\nplt.show()\n```\n\n#### Part C\n\n\n```python\na=1\nb=1\ns1=complex(-0.5,0.5)\ns2=complex(-0.5,-0.5)\nprint(\"s1=\",s1,\"s2=\",s2)\nA1=-s1/(s1-s2)\nA2=1-A1\nprint(\"A1=\",A1,\"A2=\",A2)\nt=np.arange(0,2*np.pi,0.1)\nx=A1.real*(e**(s1.real*t)) + A2.real*(e**(s2.real*t)) #ignoring imaginary parts\nplt.xlabel(\"Period\")\nplt.ylabel(\"x(t)\")\nplt.plot(t,x)\nplt.legend(['for a=1 & b=1'])\nplt.show()\n```\n\n#### Part D\n\n\n```python\na=-1\nb=1\ns1=complex(0.5,0.5)\ns2=complex(0.5,-0.5)\nprint(\"s1=\",s1,\"s2=\",s2)\nA1=-s2/(s1-s2)\nA2=1-A1\nprint(\"A1=\",A1,\"A2=\",A2)\nt=np.arange(0,2*np.pi,0.1)\nx=A1.real*(e**(s1.real*t)) + A2.real*(e**(s2.real*t)) #ignoring imaginary parts\nplt.xlabel(\"Period\")\nplt.ylabel(\"x(t)\")\nplt.plot(t,x)\nplt.legend(['for a=-1 & b=1'])\nplt.show()\n```\n\n#### Part E\n\n\n```python\n#Undamped oscillation which is pretty much a straight line. \na=0\nb=1\ns1=complex(0,1)\ns2=complex(0,-1)\nprint(\"s1=\",s1,\"s2=\",s2)\nA1=-s2/(s1-s2)\nA2=1-A1\nprint(\"A1=\",A1,\"A2=\",A2)\nt=np.arange(0,2*np.pi,0.1)\nx=A1.real*(e**(s1.real*t)) + A2.real*(e**(s2.real*t)) #ignoring imaginary parts\nplt.xlabel(\"Period\")\nplt.ylabel(\"x(t)\")\nplt.plot(t,x)\nplt.legend(['for a=0 & b=1'])\nplt.show()\n```\n", "meta": {"hexsha": "130550299e2eac5d418dcdf5ef605898bc60cf26", "size": 70730, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "1. Single Degree of Freedom Oscillation/1. SDOF Code.ipynb", "max_stars_repo_name": "eyobghiday/mechanical-vibration", "max_stars_repo_head_hexsha": "fdd5f554464544dcbca8ff96e430f13da4fcceb4", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2022-02-08T18:08:23.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-13T07:06:01.000Z", "max_issues_repo_path": "1. Single Degree of Freedom Oscillation/1. SDOF Code.ipynb", "max_issues_repo_name": "eyobghiday/mechanical-vibration", "max_issues_repo_head_hexsha": "fdd5f554464544dcbca8ff96e430f13da4fcceb4", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "1. Single Degree of Freedom Oscillation/1. 
SDOF Code.ipynb", "max_forks_repo_name": "eyobghiday/mechanical-vibration", "max_forks_repo_head_hexsha": "fdd5f554464544dcbca8ff96e430f13da4fcceb4", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 216.963190184, "max_line_length": 15472, "alphanum_fraction": 0.9141524106, "converted": true, "num_tokens": 1049, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9481545377452443, "lm_q2_score": 0.8933094032139577, "lm_q1q2_score": 0.8469953642678101}} {"text": "# Essential Math for Machine Learning: Python Edition\n\nCourse source: [LinkedIn Learning](https://www.linkedin.com/learning/essential-math-for-machine-learning-python-edition)\n\n## 1. Equations, Graphs, and Functions\n\n### 1.1 Linear Equations\n\n#### 1.1.1 Solving a Linear Equation\nConsider the following equation:\n\n\\begin{equation}2y + 3 = 3x - 1 \\end{equation}\n\nAfter simplification: \\begin{equation}y = \\frac{3x - 4}{2} \\end{equation}\n\n\n```python\nimport pandas as pd\n\n# Create a dataframe with an x column containing values from -10 to 10\ndf = pd.DataFrame ({'x': range(-10, 11)})\n\n# Add a y column by applying the solved equation to x\ndf['y'] = (3*df['x'] - 4) / 2\n\n#Display the dataframe\ndf\n```\n\n\n\n\n
  <thead>
    <tr style="text-align: right;">
      <th></th>
      <th>x</th>
      <th>y</th>
    </tr>
  </thead>
  <tbody>
    <tr><th>0</th><td>-10</td><td>-17.0</td></tr>
    <tr><th>1</th><td>-9</td><td>-15.5</td></tr>
    <tr><th>2</th><td>-8</td><td>-14.0</td></tr>
    <tr><th>3</th><td>-7</td><td>-12.5</td></tr>
    <tr><th>4</th><td>-6</td><td>-11.0</td></tr>
    <tr><th>5</th><td>-5</td><td>-9.5</td></tr>
    <tr><th>6</th><td>-4</td><td>-8.0</td></tr>
    <tr><th>7</th><td>-3</td><td>-6.5</td></tr>
    <tr><th>8</th><td>-2</td><td>-5.0</td></tr>
    <tr><th>9</th><td>-1</td><td>-3.5</td></tr>
    <tr><th>10</th><td>0</td><td>-2.0</td></tr>
    <tr><th>11</th><td>1</td><td>-0.5</td></tr>
    <tr><th>12</th><td>2</td><td>1.0</td></tr>
    <tr><th>13</th><td>3</td><td>2.5</td></tr>
    <tr><th>14</th><td>4</td><td>4.0</td></tr>
    <tr><th>15</th><td>5</td><td>5.5</td></tr>
    <tr><th>16</th><td>6</td><td>7.0</td></tr>
    <tr><th>17</th><td>7</td><td>8.5</td></tr>
    <tr><th>18</th><td>8</td><td>10.0</td></tr>
    <tr><th>19</th><td>9</td><td>11.5</td></tr>
    <tr><th>20</th><td>10</td><td>13.0</td></tr>
  </tbody>
</table>
\n\n\n\n#### 1.1.2 Visualization\n\n\n```python\n%matplotlib inline\nfrom matplotlib import pyplot as plt\n\nplt.plot(df.x, df.y, color=\"grey\", marker = \"o\")\nplt.xlabel('x')\nplt.ylabel('y')\nplt.grid()\nplt.show()\n```\n\n#### 1.1.3 Intercepts\n\nLet's take a look at the line from our linear equation with the X and Y axis shown through the origin (0,0).\n\n\n```python\nplt.plot(df.x, df.y, color=\"grey\")\nplt.xlabel('x')\nplt.ylabel('y')\nplt.grid()\n\n## add axis lines for 0,0\nplt.axhline()\nplt.axvline()\nplt.show()\n```\n\n\n```python\nplt.plot(df.x, df.y, color=\"grey\")\nplt.xlabel('x')\nplt.ylabel('y')\nplt.grid()\n\n## add axis lines for 0,0\nplt.axhline()\nplt.axvline()\n## After calculation equation x-intercept: 4/3, y-intercept: -2\nplt.annotate('x-intercept',(1.333, 0))\nplt.annotate('y-intercept',(0,-2))\nplt.show()\n```\n\n#### 1.1.4 Slope\n\n\n```python\nplt.plot(df.x, df.y, color=\"grey\")\nplt.xlabel('x')\nplt.ylabel('y')\nplt.grid()\nplt.axhline()\nplt.axvline()\n\n# set the slope\nm = 1.5\n\n# get the y-intercept\nyInt = -2\n\n# plot the slope from the y-intercept for 1x\nmx = [0, 1]\nmy = [yInt, yInt + m]\nplt.plot(mx,my, color='red', lw=5)\n\nplt.show()\n```\n\n##### 1.1.4.1 Slope-Intercept Form\n\nNow that we know the slope and y-intercept for the line that this equation defines, we can rewrite the equation as:\n\n\\begin{equation}y = 1\\frac{1}{2}x + -2 \\end{equation}\n\n\n```python\n# Create a dataframe with an x column containing values from -10 to 10\ndf = pd.DataFrame ({'x': range(-10, 11)})\n\n# Define slope and y-intercept\nm = 1.5\nyInt = -2\n\n# Add a y column by applying the slope-intercept equation to x\ndf['y'] = m*df['x'] + yInt\n\n# Plot the line\nfrom matplotlib import pyplot as plt\n\nplt.plot(df.x, df.y, color=\"grey\")\nplt.xlabel('x')\nplt.ylabel('y')\nplt.grid()\nplt.axhline()\nplt.axvline()\n\n# label the y-intercept\nplt.annotate('y-intercept',(0,yInt))\n\n# plot the slope from the y-intercept for 1x\nmx = [0, 1]\nmy = [yInt, yInt + m]\nplt.plot(mx,my, color='red', lw=5)\n\nplt.show()\n```\n\n### 1.2 Systems of Equations\n\nHere are the equations\n\n\\begin{equation}x + y = 16 \\end{equation}\n\\begin{equation}10x + 25y = 250 \\end{equation}\n\nTaken together, these equations form a *system of equations* that will enable us to determine how many of each chip denomination we have.\n\n#### 1.2.1 Graphing Lines to Find the Intersection Point\n\nOne approach is to determine all possible values for x and y in each equation and plot them.\n\nLet's plot each of these ranges of values as lines on a graph:\n\n\n```python\n# Get the extremes for number\nx1 = [16, 0]\ny1 = [0, 16]\n\n# Get the extremes for values\nx2 = [25,0]\ny2 = [0,10]\n\n# Plot the lines\nplt.plot(x1, y1, color='blue')\nplt.plot(x2, y2, color=\"orange\")\nplt.xlabel('x')\nplt.ylabel('y')\nplt.grid()\n\nplt.show()\n```\n\n### 1.3 Exponentials, Radicals, and Logs\n\n#### 1.3.1 Exponentials\n\n\\begin{equation}2^{2} = 2 \\cdot 2 = 4\\end{equation}\n\n\n```python\nx = 2**2\nprint(x)\n```\n\n 4\n\n\n#### 1.3.2 Radicals (Roots)\n\n\\begin{equation}\\sqrt[3]{64} = 4 \\end{equation}\n\n\n```python\nimport math\n\n# Calculate square root of 25\nx = math.sqrt(25)\nprint (x)\n\n# Calculate cube root of 64\ncr = round(64 ** (1. 
/ 3))\nprint(cr)\n```\n\n 5.0\n 4\n\n\n#### 1.3.3 Logarithms\n\n\\begin{equation}log_{4}(16) = 2 \\end{equation}\n\\begin{equation}log(1000) = 3 \\end{equation}\n\\begin{equation}log_{e}(64) = ln(64) = 4.1589 \\end{equation}\n\n\n```python\n# log of 16 base 4\nprint(math.log(16, 4))\n\n# Common log of 1000\nprint(math.log10(1000))\n\n# Natural log of 64\nprint (math.log(64))\n```\n\n 2.0\n 3.0\n 4.1588830833596715\n\n\n#### 1.3.4 Solving Equations with Exponentials\n\n\\begin{equation}2y = 2x^{4} ( \\frac{x^{2} + 2x^{2}}{x^{3}} ) \\end{equation}\n\n\n\\begin{equation}y = 3x^{3} \\end{equation}\n\nNow we have a solution that defines y in terms of x. We can use Python to plot the line created by this equation for a set of arbitrary *x* and *y* values:\n\n\n```python\n# Create a dataframe with an x column containing values from -10 to 10\ndf = pd.DataFrame ({'x': range(-10, 11)})\n\n# Add a y column by applying the slope-intercept equation to x\ndf['y'] = 3*df['x']**3\n\n#Display the dataframe\nprint(df)\n\nplt.plot(df.x, df.y, color=\"magenta\")\nplt.xlabel('x')\nplt.ylabel('y')\nplt.grid()\nplt.axhline()\nplt.axvline()\nplt.show()\n```\n\nFor Equation:\n\\begin{equation}\ny = 2^{x}\n\\end{equation}\n\n\n```python\n# Create a dataframe with an x column containing values from -10 to 10\ndf = pd.DataFrame ({'x': range(-10, 11)})\n\n# Add a y column by applying the slope-intercept equation to x\ndf['y'] = 2.0**df['x']\n\n#Display the dataframe\nprint(df)\n\nplt.plot(df.x, df.y, color=\"magenta\")\nplt.xlabel('x')\nplt.ylabel('y')\nplt.grid()\nplt.axhline()\nplt.axvline()\nplt.show()\n```\n\n### 1.4 Quadratic Equations\n\nConsider the following equation:\n\n\\begin{equation}y = 2(x - 1)(x + 2)\\end{equation}\n\nIf you multiply out the factored ***x*** expressions, this equates to:\n\n\\begin{equation}y = 2x^{2} + 2x - 4\\end{equation}\n\nNote that the highest ordered term includes a squared variable (x2).\n\nLet's graph this equation for a range of ***x*** values:\n\n\n```python\n# Create a dataframe with an x column containing values to plot\ndf = pd.DataFrame ({'x': range(-9, 9)})\n\n# Add a y column by applying the quadratic equation to x\ndf['y'] = 2*df['x']**2 + 2 *df['x'] - 4\n\nplt.plot(df.x, df.y, color=\"grey\")\nplt.xlabel('x')\nplt.ylabel('y')\nplt.grid()\nplt.axhline()\nplt.axvline()\nplt.show()\n```\n\nFor following **Parabola**\n\n\\begin{equation}y = -2x^{2} + 6x + 7\\end{equation}\n\n\n```python\n# Create dataFrame for x column\ndf = pd.DataFrame({'x': range(-9,13)})\n\n# Add y column\ndf['y'] = -2 * df['x'] ** 2 + 6 * df['x'] - 4\n\nplt.plot(df.x, df.y, color=\"blue\")\nplt.xlabel('x')\nplt.ylabel('y')\nplt.grid()\nplt.axhline()\nplt.axvline()\nplt.show()\n```\n\nAgain, the graph shows a parabola, but this time instead of being open at the top, the parabola is open at the bottom.\n\nEquations that assign a value to ***y*** based on an expression that includes a squared value for ***x*** create parabolas. If the relationship between ***y*** and ***x*** is such that ***y*** is a *positive* multiple of the ***x2*** term, the parabola will be open at the top; when ***y*** is a *negative* multiple of the ***x2*** term, then the parabola will be open at the bottom.\n\nThese kinds of equations are known as *quadratic* equations, and they have some interesting characteristics. 
There are several ways quadratic equations can be written, but the *standard form* for quadratic equation is:\n\n\\begin{equation}y = ax^{2} + bx + c\\end{equation}\n\nWhere ***a***, ***b***, and ***c*** are numeric coefficients or constants.\n\n#### 1.4.1 Parabola Vertex and Line of Symmetry\n\nParabolas are symmetrical, with x and y values converging exponentially towards the highest point (in the case of a downward opening parabola) or lowest point (in the case of an upward opening parabola). The point where the parabola meets the line of symmetry is known as the *vertex*.\n\n\n```python\nimport numpy as np\n\ndef plot_parabola(a, b, c):\n # get the x value for the line of symmetry\n vx = (-1*b)/(2*a)\n \n # get the y value when x is at the line of symmetry\n vy = a*vx**2 + b*vx + c\n\n # Create a dataframe with an x column containing values from x-10 to x+10\n minx = int(vx - 10)\n maxx = int(vx + 11)\n df = pd.DataFrame ({'x': range(minx, maxx)})\n\n # Add a y column by applying the quadratic equation to x\n df['y'] = a*df['x']**2 + b *df['x'] + c\n\n # get min and max y values\n miny = df.y.min()\n maxy = df.y.max()\n\n # Plot the line\n plt.plot(df.x, df.y, color=\"grey\")\n plt.xlabel('x')\n plt.ylabel('y')\n plt.grid()\n plt.axhline()\n plt.axvline()\n\n # plot the line of symmetry\n sx = [vx, vx]\n sy = [miny, maxy]\n plt.plot(sx,sy, color='magenta')\n\n # Annotate the vertex\n plt.scatter(vx,vy, color=\"red\")\n plt.annotate('vertex',(vx, vy), xytext=(vx - 1, (vy + 5)* np.sign(a)))\n\n plt.show()\n```\n\n\n```python\nplot_parabola(2, 2, -4) \nplot_parabola(-2, 3, 5) \n```\n\n#### 1.4.2 Parabola Intercepts\n\nWhen y is 0, x is -2 or 1. Let's plot these points on our parabola:\n\n\n```python\n# Assign the calculated x values\nx1 = -2\nx2 = 1\n\n# Create a dataframe with an x column containing some values to plot\ndf = pd.DataFrame ({'x': range(x1-5, x2+6)})\n\n# Add a y column by applying the quadratic equation to x\ndf['y'] = 2*(df['x'] - 1) * (df['x'] + 2)\n\n# Get x at the line of symmetry (halfway between x1 and x2)\nvx = (x1 + x2) / 2\n\n# Get y when x is at the line of symmetry\nvy = 2*(vx -1)*(vx + 2)\n\n# get min and max y values\nminy = df.y.min()\nmaxy = df.y.max()\n\nplt.plot(df.x, df.y, color=\"grey\")\nplt.xlabel('x')\nplt.ylabel('y')\nplt.grid()\nplt.axhline()\nplt.axvline()\n\n# Plot calculated x values for y = 0\nplt.scatter([x1,x2],[0,0], color=\"green\")\nplt.annotate('x1',(x1, 0))\nplt.annotate('x2',(x2, 0))\n\n# plot the line of symmetry\nsx = [vx, vx]\nsy = [miny, maxy]\nplt.plot(sx,sy, color='magenta')\n\n# Annotate the vertex\nplt.scatter(vx,vy, color=\"red\")\nplt.annotate('vertex',(vx, vy), xytext=(vx - 1, (vy - 5)))\n\nplt.show()\n```\n\n#### 1.4.3 Solving Quadratics Using the Square Root Method\n\nLet's consider this equation:\n\n\\begin{equation}y = 3x^{2} - 12\\end{equation}\n\nLet's restate it so we're solving for ***x*** when ***y*** is 0:\n\n\\begin{equation}3x^{2} - 12 = 0\\end{equation}\n\n\\begin{equation}x = \\pm\\sqrt{4}\\end{equation}\n\nLet's see this in Python, and use the results to calculate and plot the parabola with its line of symmetry and vertex:\n\n\n```python\ny = 0\nx1 = int(- math.sqrt(y + 12 / 3))\nx2 = int(math.sqrt(y + 12 / 3))\n\n# Create a dataframe with an x column containing some values to plot\ndf = pd.DataFrame ({'x': range(x1-10, x2+11)})\n\n# Add a y column by applying the quadratic equation to x\ndf['y'] = 3*df['x']**2 - 12\n\n# Get x at the line of symmetry (halfway between x1 and x2)\nvx = (x1 + x2) / 2\n\n# Get y 
when x is at the line of symmetry\nvy = 3*vx**2 - 12\n\n# get min and max y values\nminy = df.y.min()\nmaxy = df.y.max()\n\nplt.plot(df.x, df.y, color=\"grey\")\nplt.xlabel('x')\nplt.ylabel('y')\nplt.grid()\nplt.axhline()\nplt.axvline()\n\n# Plot calculated x values for y = 0\nplt.scatter([x1,x2],[0,0], color=\"green\")\nplt.annotate('x1',(x1, 0))\nplt.annotate('x2',(x2, 0))\n\n# plot the line of symmetry\nsx = [vx, vx]\nsy = [miny, maxy]\nplt.plot(sx,sy, color='magenta')\n\n# Annotate the vertex\nplt.scatter(vx,vy, color=\"red\")\nplt.annotate('vertex',(vx, vy), xytext=(vx - 1, (vy - 20)))\n\nplt.show()\n```\n\n#### 1.4.4 Solving Quadratics Using the Completing the Square Method\n\nLet's look at an example:\n\n\\begin{equation}y = x^{2} + 6x - 7\\end{equation}\n\nLet's start as we've always done so far by restating the equation to solve ***x*** for a ***y*** value of 0:\n\n\\begin{equation}x^{2} + 6x - 7 = 0\\end{equation}\n\n\\begin{equation}x^{2} + 6x = 7\\end{equation}\n\n\\begin{equation}x^{2} + 6x + 9 = 16\\end{equation}\n\n\\begin{equation}(x + 3)^{2} = 16\\end{equation}\n\n\\begin{equation}x + 3 =\\pm\\sqrt{16}\\end{equation}\n\n\\begin{equation}x = -7, 1\\end{equation}\n\nLet's see what the parabola for this equation looks like in Python:\n\n\n```python\nx1 = int(- math.sqrt(16) - 3)\nx2 = int(math.sqrt(16) - 3)\n\n# Create a dataframe with an x column containing some values to plot\ndf = pd.DataFrame ({'x': range(x1-10, x2+11)})\n\n# Add a y column by applying the quadratic equation to x\ndf['y'] = ((df['x'] + 3)**2) - 16\n\n# Get x at the line of symmetry (halfway between x1 and x2)\nvx = (x1 + x2) / 2\n\n# Get y when x is at the line of symmetry\nvy = ((vx + 3)**2) - 16\n\n# get min and max y values\nminy = df.y.min()\nmaxy = df.y.max()\n\nplt.plot(df.x, df.y, color=\"grey\")\nplt.xlabel('x')\nplt.ylabel('y')\nplt.grid()\nplt.axhline()\nplt.axvline()\n\n# Plot calculated x values for y = 0\nplt.scatter([x1,x2],[0,0], color=\"green\")\nplt.annotate('x1',(x1, 0))\nplt.annotate('x2',(x2, 0))\n\n# plot the line of symmetry\nsx = [vx, vx]\nsy = [miny, maxy]\nplt.plot(sx,sy, color='magenta')\n\n# Annotate the vertex\nplt.scatter(vx,vy, color=\"red\")\nplt.annotate('vertex',(vx, vy), xytext=(vx - 1, (vy - 10)))\n\nplt.show()\n```\n\n#### 1.4.5 Vertex Form\n\nTh *vertex form* of a quadratic equation which is generically described as:\n\n\\begin{equation}y = a(x - h)^{2} + k\\end{equation}\n\nThe neat thing about this form of the equation is that it tells us the coordinates of the vertex - it's at ***h,k***.\n\n\n```python\ndef plot_parabola_from_vertex_form(a, h, k):\n # Create a dataframe with an x column a range of x values to plot\n df = pd.DataFrame ({'x': range(h-10, h+11)})\n\n # Add a y column by applying the quadratic equation to x\n df['y'] = (a*(df['x'] - h)**2) + k\n\n # get min and max y values\n miny = df.y.min()\n maxy = df.y.max()\n\n # calculate y when x is 0 (h+-h)\n y = a*(0 - h)**2 + k\n\n plt.plot(df.x, df.y, color=\"grey\")\n plt.xlabel('x')\n plt.ylabel('y')\n plt.grid()\n plt.axhline()\n plt.axvline()\n\n # Plot calculated y values for x = 0 (h-h and h+h)\n plt.scatter([h-h, h+h],[y,y], color=\"green\")\n plt.annotate(str(h-h) + ',' + str(y),(h-h, y))\n plt.annotate(str(h+h) + ',' + str(y),(h+h, y))\n\n # plot the line of symmetry (x = h)\n sx = [h, h]\n sy = [miny, maxy]\n plt.plot(sx,sy, color='magenta')\n\n # Annotate the vertex (h,k)\n plt.scatter(h,k, color=\"red\")\n plt.annotate('v=' + str(h) + ',' + str(k),(h, k), xytext=(h - 1, (k - 10)))\n\n 
plt.show()\n```\n\n\n```python\n# Call the function for the example discussed above\nplot_parabola_from_vertex_form(2, 4, -30)\n```\n\n\n```python\nplot_parabola_from_vertex_form(3, -1, -1)\n```\n\n#### 1.4.6 Shortcuts for Solving Quadratic Equations\n\n##### 1.4.6.1 The Quadratic Formula\nAnother useful formula to remember is the *quadratic formula*, which makes it easy to calculate values for ***x*** when ***y*** is **0**; or in other words:\n\n\\begin{equation}ax^{2} + bx + c = 0\\end{equation}\n\nHere's the formula:\n\n\\begin{equation}x = \\frac{-b \\pm \\sqrt{b^{2} - 4ac}}{2a}\\end{equation}\n\n\n```python\ndef plot_parabola_from_formula (a, b, c):\n # Get vertex\n print('CALCULATING THE VERTEX')\n print('vx = -b / 2a')\n\n nb = -b\n a2 = 2*a\n print('vx = ' + str(nb) + ' / ' + str(a2))\n\n vx = -b/(2*a)\n print('vx = ' + str(vx))\n\n print('\\nvy = ax^2 + bx + c')\n print('vy = ' + str(a) + '(' + str(vx) + '^2) + ' + str(b) + '(' + str(vx) + ') + ' + str(c))\n\n avx2 = a*vx**2\n bvx = b*vx\n print('vy = ' + str(avx2) + ' + ' + str(bvx) + ' + ' + str(c))\n\n vy = avx2 + bvx + c\n print('vy = ' + str(vy))\n\n print ('\\nv = ' + str(vx) + ',' + str(vy))\n\n # Get +x and -x (showing intermediate calculations)\n print('\\nCALCULATING -x AND +x FOR y=0')\n print('x = -b +- sqrt(b^2 - 4ac) / 2a')\n\n\n b2 = b**2\n ac4 = 4*a*c\n print('x = ' + str(nb) + ' +- sqrt(' + str(b2) + ' - ' + str(ac4) + ') / ' + str(a2))\n\n sr = math.sqrt(b2 - ac4)\n print('x = ' + str(nb) + ' +- ' + str(sr) + ' / ' + str(a2))\n print('-x = ' + str(nb) + ' - ' + str(sr) + ' / ' + str(a2))\n print('+x = ' + str(nb) + ' + ' + str(sr) + ' / ' + str(a2))\n\n posx = (nb + sr) / a2\n negx = (nb - sr) / a2\n print('-x = ' + str(negx))\n print('+x = ' + str(posx))\n\n\n print('\\nPLOTTING THE PARABOLA')\n\n # Create a dataframe with an x column a range of x values to plot\n df = pd.DataFrame ({'x': range(round(vx)-10, round(vx)+11)})\n\n # Add a y column by applying the quadratic equation to x\n df['y'] = a*df['x']**2 + b*df['x'] + c\n\n # get min and max y values\n miny = df.y.min()\n maxy = df.y.max()\n\n plt.plot(df.x, df.y, color=\"grey\")\n plt.xlabel('x')\n plt.ylabel('y')\n plt.grid()\n plt.axhline()\n plt.axvline()\n\n # Plot calculated x values for y = 0\n plt.scatter([negx, posx],[0,0], color=\"green\")\n plt.annotate('-x=' + str(negx) + ',' + str(0),(negx, 0), xytext=(negx - 3, 5))\n plt.annotate('+x=' + str(posx) + ',' + str(0),(posx, 0), xytext=(posx - 3, -10))\n\n # plot the line of symmetry\n sx = [vx, vx]\n sy = [miny, maxy]\n plt.plot(sx,sy, color='magenta')\n\n # Annotate the vertex\n plt.scatter(vx,vy, color=\"red\")\n plt.annotate('v=' + str(vx) + ',' + str(vy),(vx, vy), xytext=(vx - 1, vy - 10))\n\n plt.show()\n```\n\n\n```python\nplot_parabola_from_formula (2, -16, 2)\n```\n\n### 1.5 Functions\n\nFor example, consider the following equation:\n\n\\begin{equation}f(x) = x^{2} + 2\\end{equation}\n\n\n```python\ndef f(x):\n return x**2 + 2\n```\n\n\n```python\n# Create an array of x values from -100 to 100\nx = np.array(range(-100, 101))\n\n# Set up the graph\nplt.xlabel('x')\nplt.ylabel('f(x)')\nplt.grid()\n\n# Plot x against f(x)\nplt.plot(x,f(x), color='purple')\n\nplt.show()\n```\n\n#### 1.5.1 Bounds of a Function\n\nFor example, consider the function ***u*** defined here:\n\n\\begin{equation}u(x) = x + 1\\end{equation}\n\\begin{equation}\\{x \\in \\rm I\\!R\\}\\end{equation}\n\n\n##### 1.5.1.1 Domain of a Function\n\nNow consider the following function ***g***:\n\n\\begin{equation}g(x) = 
(\\frac{12}{2x})^{2}\\end{equation}\n\nThis is interpreted as *Any value for x where x is in the set of real numbers such that x is not equal to 0*, and we can incorporate this into the function's definition like this:\n\n\\begin{equation}g(x) = (\\frac{12}{2x})^{2}, \\{x \\in \\rm I\\!R\\;\\;|\\;\\; x \\ne 0 \\}\\end{equation}\n\nOr more simply:\n\n\\begin{equation}g(x) = (\\frac{12}{2x})^{2},\\;\\; x \\ne 0\\end{equation}\n\n\n```python\n# Define function g\ndef g(x):\n if x != 0:\n return (12/(2*x))**2\n```\n\n\n```python\n# Create an array of x values from -100 to 100\nx = range(-100, 101)\n\n# Get the corresponding y values from the function\ny = [g(a) for a in x]\n\n# Set up the graph\nplt.xlabel('x')\nplt.ylabel('g(x)')\nplt.grid()\n\n# Plot x against g(x)\nplt.plot(x,y, color='purple')\n\n# plot an empty circle to show the undefined point\nplt.plot(0,g(0.0000001), color='purple', marker='o', markerfacecolor='w', markersize=8)\n\nplt.show()\n```\n\nWe can indicate the domain of this function in its definition like this:\n\n\\begin{equation}h(x) = 2\\sqrt{x}, \\{x \\in \\rm I\\!R\\;\\;|\\;\\; x \\ge 0 \\}\\end{equation}\n\n\n```python\ndef h(x):\n if x >= 0:\n return 2 * np.sqrt(x)\n```\n\n\n```python\n# Create an array of x values from -100 to 100\nx = range(-100, 101)\n\n# Get the corresponding y values from the function\ny = [h(a) for a in x]\n\n# Set up the graph\nplt.xlabel('x')\nplt.ylabel('h(x)')\nplt.grid()\n\n# Plot x against h(x)\nplt.plot(x,y, color='purple')\n\n# plot a filled circle at the end to indicate a closed interval\nplt.plot(0, h(0), color='purple', marker='o', markerfacecolor='purple', markersize=8)\n\nplt.show()\n```\n\n##### 1.5.1.2 Range of a Function\n\nFor example, consider the following function:\n\n\\begin{equation}p(x) = x^{2} + 1\\end{equation}\n\nIt's range is:\n\n\\begin{equation}\\{p(x) \\in \\rm I\\!R\\;\\;|\\;\\; p(x) \\ge 1 \\}\\end{equation}\n\nLet's create and plot the function for a range of ***x*** values in Python:\n\n\n```python\n# define a function to return x^2 + 1\ndef p(x):\n return x**2 + 1\n```\n\n\n```python\n# Create an array of x values from -100 to 100\nx = np.array(range(-100, 101))\n\n# Set up the graph\nplt.xlabel('x')\nplt.ylabel('p(x)')\nplt.grid()\n\n# Plot x against f(x)\nplt.plot(x,p(x), color='purple')\n\nplt.show()\n```\n", "meta": {"hexsha": "d76092390a92e99052319336b188ad50dce137fa", "size": 380225, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Codes/Essential Math for Machine Learning/1. Equations, Graphs, and Functions.ipynb", "max_stars_repo_name": "moni-roy/100DaysOfMLCode", "max_stars_repo_head_hexsha": "5c1cfda650db20f3d08145a343b90d44252c8300", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-05-22T02:03:32.000Z", "max_stars_repo_stars_event_max_datetime": "2020-05-22T02:03:32.000Z", "max_issues_repo_path": "Codes/Essential Math for Machine Learning/1. Equations, Graphs, and Functions.ipynb", "max_issues_repo_name": "moni-roy/100DaysOfMLCode", "max_issues_repo_head_hexsha": "5c1cfda650db20f3d08145a343b90d44252c8300", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Codes/Essential Math for Machine Learning/1. 
Equations, Graphs, and Functions.ipynb", "max_forks_repo_name": "moni-roy/100DaysOfMLCode", "max_forks_repo_head_hexsha": "5c1cfda650db20f3d08145a343b90d44252c8300", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 222.6141686183, "max_line_length": 24328, "alphanum_fraction": 0.907147084, "converted": true, "num_tokens": 7532, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9449947132556619, "lm_q2_score": 0.8962513800615313, "lm_q1q2_score": 0.8469528159062379}} {"text": "$\\newcommand{\\mb}[1]{\\mathbf{ #1 }}$\n$\\newcommand{\\bs}[1]{\\boldsymbol{ #1 }}$\n$\\newcommand{\\bb}[1]{\\mathbb{ #1 }}$\n\n$\\newcommand{\\R}{\\bb{R}}$\n\n$\\newcommand{\\ip}[2]{\\left\\langle #1, #2 \\right\\rangle}$\n$\\newcommand{\\norm}[1]{\\left\\Vert #1 \\right\\Vert}$\n\n$\\newcommand{\\der}[2]{\\frac{\\mathrm{d} #1 }{\\mathrm{d} #2 }}$\n$\\newcommand{\\derp}[2]{\\frac{\\partial #1 }{\\partial #2 }}$\n\n# Cart Pole\n\nConsider a cart on a frictionless track. Suppose a pendulum is attached to the cart by a frictionless joint. The cart is modeled as a point mass $m_c$ and the pendulum is modeled as a massless rigid link with point mass $m_p$ a distance $l$ away from the cart.\n\nLet $\\mathcal{I} = (\\mb{i}^1, \\mb{i}^2, \\mb{i}^3)$ denote an inertial frame. Suppose the position of the cart is resolved in the inertial frame as $\\mb{r}_{co}^{\\mathcal{I}} = (x, 0, 0)$. Additionally, suppose the gravitational force acting on the pendulum is resolved in the inertial frame as $\\mb{f}_g^{\\mathcal{I}} = (0, 0, -m_p g)$.\n\nLet $\\mathcal{B} = (\\mb{b}^1, \\mb{b}^2, \\mb{b}^3)$ denote a body reference frame, with $\\mb{b}^2 = \\mb{i}^2$. The position of the pendulum mass relative to the cart is resolved in the body frame as $\\mb{r}_{pc}^\\mathcal{B} = (0, 0, l)$.\n\nThe kinetic energy of the system is:\n\n\\begin{equation}\n \\frac{1}{2} m_c \\norm{\\dot{\\mb{r}}_{co}^\\mathcal{I}}_2^2 + \\frac{1}{2} m_p \\norm{\\dot{\\mb{r}}_{po}^\\mathcal{I}}_2^2\n\\end{equation}\n\nFirst, note that $\\dot{\\mb{r}}_{co}^{\\mathcal{I}} = (\\dot{x}, 0, 0)$.\n\nNext, note that $\\mb{r}_{po}^\\mathcal{I} = \\mb{r}_{pc}^\\mathcal{I} + \\mb{r}_{co}^\\mathcal{I} = \\mb{C}_{\\mathcal{I}\\mathcal{B}}\\mb{r}_{pc}^\\mathcal{B} + \\mb{r}_{co}^\\mathcal{I}$, where $\\mb{C}_{\\mathcal{I}\\mathcal{B}}$ is the direction cosine matrix (DCM) satisfying:\n\n\\begin{equation}\n \\mb{C}_{\\mathcal{I}\\mathcal{B}} = \\begin{bmatrix} \\ip{\\mb{i}_1}{\\mb{b}_1} & \\ip{\\mb{i}_1}{\\mb{b}_2} & \\ip{\\mb{i}_1}{\\mb{b}_3} \\\\ \\ip{\\mb{i}_2}{\\mb{b}_1} & \\ip{\\mb{i}_2}{\\mb{b}_2} & \\ip{\\mb{i}_2}{\\mb{b}_3} \\\\ \\ip{\\mb{i}_3}{\\mb{b}_1} & \\ip{\\mb{i}_3}{\\mb{b}_2} & \\ip{\\mb{i}_3}{\\mb{b}_3} \\end{bmatrix}.\n\\end{equation}\n\nWe parameterize the DCM using $\\theta$, measuring the clockwise angle of the pendulum from upright in radians. In this case, the DCM is:\n\n\\begin{equation}\n \\mb{C}_{\\mathcal{I}\\mathcal{B}} = \\begin{bmatrix} \\cos{\\theta} & 0 & \\sin{\\theta} \\\\ 0 & 1 & 0 \\\\ -\\sin{\\theta} & 0 & \\cos{\\theta} \\end{bmatrix},\n\\end{equation}\n\nfollowing from $\\cos{\\left( \\frac{\\pi}{2} - \\theta \\right)} = \\sin{\\theta}$. 
Therefore:\n\n\\begin{equation}\n \\mb{r}_{po}^\\mathcal{I} = \\begin{bmatrix} x + l\\sin{\\theta} \\\\ 0 \\\\ l\\cos{\\theta} \\end{bmatrix}\n\\end{equation}\n\nWe have $\\dot{\\mb{r}}_{po}^\\mathcal{I} = \\dot{\\mb{C}}_{\\mathcal{I}\\mathcal{B}} \\mb{r}_{pc}^\\mathcal{B} + \\dot{\\mb{r}}_{co}^\\mathcal{I}$, following from $\\dot{\\mb{r}}_{pc}^\\mathcal{B} = \\mb{0}_3$ since the pendulum is rigid. The derivative of the DCM is:\n\n\\begin{equation}\n \\der{{\\mb{C}}_{\\mathcal{I}\\mathcal{B}}}{\\theta} = \\begin{bmatrix} -\\sin{\\theta} & 0 & \\cos{\\theta} \\\\ 0 & 0 & 0 \\\\ -\\cos{\\theta} & 0 & -\\sin{\\theta} \\end{bmatrix},\n\\end{equation}\n\nfinally yielding:\n\n\\begin{equation}\n \\dot{\\mb{r}}_{po}^\\mathcal{I} = \\dot{\\theta} \\der{\\mb{C}_{\\mathcal{I}\\mathcal{B}}}{\\theta} \\mb{r}^{\\mathcal{B}}_{pc} + \\dot{\\mb{r}}_{co}^\\mathcal{I} = \\begin{bmatrix} l\\dot{\\theta}\\cos{\\theta} + \\dot{x} \\\\ 0 \\\\ -l\\dot{\\theta}\\sin{\\theta} \\end{bmatrix}\n\\end{equation}\n\nDefine generalized coordinates $\\mb{q} = (x, \\theta)$ with configuration space $\\mathcal{Q} = \\R \\times \\bb{S}^1$, where $\\bb{S}^1$ denotes the $1$-sphere. The kinetic energy can then be expressed as:\n\n\\begin{align}\n T(\\mb{q}, \\dot{\\mb{q}}) &= \\frac{1}{2} m_c \\begin{bmatrix} \\dot{x} \\\\ \\dot{\\theta} \\end{bmatrix}^\\top \\begin{bmatrix} 1 & 0 \\\\ 0 & 0 \\end{bmatrix} \\begin{bmatrix} \\dot{x} \\\\ \\dot{\\theta} \\end{bmatrix} + \\frac{1}{2} m_p \\begin{bmatrix} \\dot{x} \\\\ \\dot{\\theta} \\end{bmatrix}^\\top \\begin{bmatrix} 1 & l \\cos{\\theta} \\\\ l \\cos{\\theta} & l^2 \\end{bmatrix} \\begin{bmatrix} \\dot{x} \\\\ \\dot{\\theta} \\end{bmatrix}\\\\\n &= \\frac{1}{2} \\dot{\\mb{q}}^\\top\\mb{D}(\\mb{q})\\dot{\\mb{q}},\n\\end{align}\n\nwhere inertia matrix function $\\mb{D}: \\mathcal{Q} \\to \\bb{S}^2_{++}$ is defined as:\n\n\\begin{equation}\n \\mb{D}(\\mb{q}) = \\begin{bmatrix} m_c + m_p & m_p l \\cos{\\theta} \\\\ m_p l \\cos{\\theta} & m_pl^2 \\end{bmatrix}.\n\\end{equation}\n\nNote that:\n\n\\begin{equation}\n \\derp{\\mb{D}}{x} = \\mb{0}_{2 \\times 2},\n\\end{equation}\n\nand:\n\n\\begin{equation}\n \\derp{\\mb{D}}{\\theta} = \\begin{bmatrix} 0 & -m_p l \\sin{\\theta} \\\\ -m_p l \\sin{\\theta} & 0 \\end{bmatrix},\n\\end{equation}\n\nso we can express:\n\n\\begin{equation}\n \\derp{\\mb{D}}{\\mb{q}} = -m_p l \\sin{\\theta} (\\mb{e}_1 \\otimes \\mb{e}_2 \\otimes \\mb{e}_2 + \\mb{e}_2 \\otimes \\mb{e}_1 \\otimes \\mb{e}_2).\n\\end{equation}\n\nThe potential energy of the system is $U: \\mathcal{Q} \\to \\R$ defined as:\n\n\\begin{equation}\n U(\\mb{q}) = -\\ip{\\mb{f}_g^\\mathcal{I}}{\\mb{r}^{\\mathcal{I}}_{po}} = m_p g l \\cos{\\theta}.\n\\end{equation}\n\nDefine $\\mb{G}: \\mathcal{Q} \\to \\R^2$ as:\n \n\\begin{equation}\n \\mb{G}(\\mb{q}) = \\left(\\derp{U}{\\mb{q}}\\right)^\\top = \\begin{bmatrix} 0 \\\\ -m_p g l \\sin{\\theta} \\end{bmatrix}.\n\\end{equation}\n\nAssume a force $(F, 0, 0)$ (resolved in the inertial frame) can be applied to the cart. 
The Euler-Lagrange equation yields:\n\n\\begin{align}\n \\der{}{t} \\left( \\derp{T}{\\dot{\\mb{q}}} \\right)^\\top - \\left( \\derp{T}{\\mb{q}} - \\derp{U}{\\mb{q}} \\right)^\\top &= \\der{}{t} \\left( \\mb{D}(\\mb{q})\\dot{\\mb{q}} \\right) - \\frac{1}{2}\\derp{\\mb{D}}{\\mb{q}}(\\dot{\\mb{q}}, \\dot{\\mb{q}}, \\cdot) + \\mb{G}(\\mb{q})\\\\\n &= \\mb{D}(\\mb{q})\\ddot{\\mb{q}} + \\derp{\\mb{D}}{\\mb{q}}(\\cdot, \\dot{\\mb{q}}, \\dot{\\mb{q}}) - \\frac{1}{2}\\derp{\\mb{D}}{\\mb{q}}(\\dot{\\mb{q}}, \\dot{\\mb{q}}, \\cdot) + \\mb{G}(\\mb{q})\\\\\n &= \\mb{B} F,\n\\end{align}\n\nwith static actuation matrix:\n\n\\begin{equation}\n \\mb{B} = \\begin{bmatrix} 1 \\\\ 0 \\end{bmatrix}.\n\\end{equation}\n\nNote that:\n\n\\begin{align}\n \\derp{\\mb{D}}{\\mb{q}}(\\cdot, \\dot{\\mb{q}}, \\dot{\\mb{q}}) - \\frac{1}{2}\\derp{\\mb{D}}{\\mb{q}}(\\dot{\\mb{q}}, \\dot{\\mb{q}}, \\cdot) &= -m_p l \\sin{\\theta} (\\mb{e}_1 \\dot{\\theta}\\dot{\\theta} + \\mb{e}_2\\dot{x}\\dot{\\theta}) + \\frac{1}{2} m_p l \\sin{\\theta} (\\dot{x}\\dot{\\theta} \\mb{e}_2 + \\dot{\\theta}\\dot{x} \\mb{e}_2)\\\\\n &= \\begin{bmatrix} -m_p l \\dot{\\theta}^2 \\sin{\\theta} \\\\ 0 \\end{bmatrix}\\\\\n &= \\mb{C}(\\mb{q}, \\dot{\\mb{q}})\\dot{\\mb{q}},\n\\end{align}\n\nwith Coriolis terms defined as:\n\n\\begin{equation}\n \\mb{C}(\\mb{q}, \\dot{\\mb{q}}) = \\begin{bmatrix} 0 & -m_p l \\sin{\\theta} \\\\ 0 & 0 \\end{bmatrix}.\n\\end{equation}\n\nFinally, we have:\n\n\\begin{equation}\n \\mb{D}(\\mb{q})\\ddot{\\mb{q}} + \\mb{C}(\\mb{q}, \\dot{\\mb{q}})\\dot{\\mb{q}} + \\mb{G}(\\mb{q}) = \\mb{B}F\n\\end{equation}\n\n\n```python\nfrom numpy import array, concatenate, cos, dot, reshape, sin, zeros\n\nfrom core.dynamics import RoboticDynamics\n\nclass CartPole(RoboticDynamics):\n def __init__(self, m_c, m_p, l, g=9.81):\n RoboticDynamics.__init__(self, 2, 1)\n self.params = m_c, m_p, l, g\n \n def D(self, q):\n m_c, m_p, l, _ = self.params\n _, theta = q\n return array([[m_c + m_p, m_p * l * cos(theta)], [m_p * l * cos(theta), m_p * (l ** 2)]])\n \n def C(self, q, q_dot):\n _, m_p, l, _ = self.params\n _, theta = q\n _, theta_dot = q_dot\n return array([[0, -m_p * l * theta_dot * sin(theta)], [0, 0]])\n \n def U(self, q):\n _, m_p, l, g = self.params\n _, theta = q\n return m_p * g * l * cos(theta)\n \n def G(self, q):\n _, m_p, l, g = self.params\n _, theta = q\n return array([0, -m_p * g * l * sin(theta)])\n \n def B(self, q):\n return array([[1], [0]])\n\nm_c = 0.5\nm_p = 0.25\nl = 0.5\ncart_pole = CartPole(m_c, m_p, l)\n```\n\nWe attempt to stabilize the pendulum upright, that is, drive $\\theta$ to $0$. 
We'll use the normal form transformation:\n\n\\begin{equation}\n \\bs{\\Phi}(\\mb{q}, \\dot{\\mb{q}}) = \\begin{bmatrix} \\bs{\\eta}(\\mb{q}, \\dot{\\mb{q}}) \\\\ \\mb{z}(\\mb{q}, \\dot{\\mb{q}}) \\end{bmatrix} = \\begin{bmatrix} \\theta \\\\ \\dot{\\theta} \\\\ x \\\\ m_p l \\dot{x} \\cos{\\theta} + m_p l^2 \\dot{\\theta} \\end{bmatrix}.\n\\end{equation}\n\n\n```python\nfrom core.dynamics import ConfigurationDynamics\n\nclass CartPoleOutput(ConfigurationDynamics):\n def __init__(self, cart_pole):\n ConfigurationDynamics.__init__(self, cart_pole, 1)\n self.cart_pole = cart_pole\n \n def y(self, q):\n return q[1:]\n \n def dydq(self, q):\n return array([[0, 1]])\n \n def d2ydq2(self, q):\n return zeros((1, 2, 2))\n \noutput = CartPoleOutput(cart_pole)\n```\n\n\n```python\nfrom numpy import identity\n\nfrom core.controllers import FBLinController, LQRController\n\nQ = 10 * identity(2)\nR = identity(1)\nlqr = LQRController.build(output, Q, R)\nfb_lin = FBLinController(output, lqr)\n```\n\n\n```python\nfrom numpy import linspace, pi\n\nx_0 = array([0, pi / 4, 0, 0])\nts = linspace(0, 10, 1000 + 1)\n\nxs, us = cart_pole.simulate(x_0, fb_lin, ts)\n```\n\n\n```python\nfrom matplotlib.pyplot import subplots, show, tight_layout\n```\n\n\n```python\n_, axs = subplots(2, 2, figsize=(8, 8))\nylabels = ['$x$ (m)', '$\\\\theta$ (rad)', '$\\\\dot{x}$ (m / sec)', '$\\\\dot{\\\\theta}$ (rad / sec)']\n\nfor ax, data, ylabel in zip(axs.flatten(), xs.T, ylabels):\n ax.plot(ts, data, linewidth=3)\n ax.set_ylabel(ylabel, fontsize=16)\n ax.grid()\n \nfor ax in axs[-1]:\n ax.set_xlabel('$t$ (sec)', fontsize=16)\n \ntight_layout()\nshow()\n```\n\n\n```python\n_, ax = subplots(figsize=(4, 4))\n\nax.plot(ts[:-1], us, linewidth=3)\nax.grid()\nax.set_xlabel('$t$ (sec)', fontsize=16)\nax.set_ylabel('$F$ (N)', fontsize=16)\n\nshow()\n```\n", "meta": {"hexsha": "beb6ab6f74291eb4159985d1337d242f1c134130", "size": 71882, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "cart-pole.ipynb", "max_stars_repo_name": "ivandariojr/core", "max_stars_repo_head_hexsha": "c4dec054a3e80355ed3812d48ca2bba286584a67", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2021-01-26T21:00:24.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-28T23:57:50.000Z", "max_issues_repo_path": "cart-pole.ipynb", "max_issues_repo_name": "ivandariojr/core", "max_issues_repo_head_hexsha": "c4dec054a3e80355ed3812d48ca2bba286584a67", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 15, "max_issues_repo_issues_event_min_datetime": "2020-01-28T22:49:18.000Z", "max_issues_repo_issues_event_max_datetime": "2021-12-14T08:34:39.000Z", "max_forks_repo_path": "cart-pole.ipynb", "max_forks_repo_name": "ivandariojr/core", "max_forks_repo_head_hexsha": "c4dec054a3e80355ed3812d48ca2bba286584a67", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2019-06-07T21:31:20.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-13T01:00:02.000Z", "avg_line_length": 169.5330188679, "max_line_length": 45868, "alphanum_fraction": 0.8653348543, "converted": true, "num_tokens": 3839, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9449947117065459, "lm_q2_score": 0.8962513641273355, "lm_q1q2_score": 0.8469527994601099}} {"text": "# Investigating propagation of errors\n\nThis notebook was made to investigate the propagation of errors formula.\nWe imagine that we have a function $q(x,y)$ and we want to propagate the\nuncertainty on $x$ and $y$ (denoted $\\sigma_x$ and $\\sigma_y$, respectively) through to the quantity $q$.\n\nThe most straight forward way to do this is just randomly sample $x$ and $y$, evaluate $q$ and look at it's distribution. This is really the definition of what we mean by propagation of uncertianty. It's very easy to do with some simply python code.\n\nThe calculus formula for the propagation of errors is really an approximation. This is the formula for a general $q(x,y)$\n\\begin{equation}\n\\sigma_q^2 = \\left( \\frac{\\partial q}{\\partial x} \\sigma_x \\right)^2 + \\left( \\frac{\\partial q}{\\partial y}\\sigma_y \\right)^2\n\\end{equation}\n\nIn the special case of addition $q(x,y) = x\\pm y$ we have $\\sigma_q^2 = \\sigma_x^2 + \\sigma_y^2$.\n\nIn the special case of multiplication $q(x,y) = x y$ and division $q(x,y) = x / y$ we have $(\\sigma_q/q)^2 = (\\sigma_x/x)^2 + (\\sigma_y/y)^2$, which we can rewrite as $\\sigma_q = (x/y) \\sqrt{(\\sigma_x/x)^2 + (\\sigma_y/y)^2}$\n\nLet's try out these formulas and compare the direct approach of making the distribution to the prediction from these formulas\n\n\n```python\n#%pylab inline --no-import-all\n```\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport scipy.stats as scs\n```\n\n\n```python\nfrom ipywidgets import widgets \nfrom ipywidgets import interact, interactive, fixed\n```\n\n## Setup repeated observations of two variables x, y\n\n\n```python\nmean_x = .8\nstd_x = .15\nmean_y = 3.\nstd_y = .9\nN = 1000\nx = np.random.normal(mean_x, std_x, N)\ny = np.random.normal(mean_y, std_y, N)\n```\n\n## Check propagation of errors under addition \n\n\n```python\nx = np.random.normal(mean_x, std_x, N)\ny = np.random.normal(mean_y, std_y, N)\n\nq_of_x_y = x+y\n\npred_mean_q = mean_x+mean_y\npred_std_q = np.sqrt(std_x**2+std_y**2)\n\ncounts, bins, patches = plt.hist(q_of_x_y, \n bins=np.linspace(pred_mean_q-3*pred_std_q,pred_mean_q+3*pred_std_q,30), \n density=True, alpha=0.3)\n\nplt.plot(bins, scs.norm.pdf(bins, pred_mean_q, pred_std_q), c='r', lw=2)\nplt.legend(('pred','hist'))\nplt.xlabel('q(x)')\nplt.ylabel('p(x)')\n```\n\n### same thing with an interactive widget\n\n\n```python\ndef plot_x_plus_y(mean_x, std_x, mean_y, std_y, N):\n x = np.random.normal(mean_x, std_x, N)\n y = np.random.normal(mean_y, std_y, N)\n\n q_of_x_y = x+y\n\n pred_mean_q = mean_x+mean_y\n pred_std_q = np.sqrt(std_x**2+std_y**2)\n\n counts, bins, patches = plt.hist(q_of_x_y, \n bins=np.linspace(pred_mean_q-3*pred_std_q,pred_mean_q+3*pred_std_q,30), \n density=True, alpha=0.3)\n\n plt.plot(bins, scs.norm.pdf(bins, pred_mean_q, pred_std_q), c='r', lw=2)\n plt.legend(('pred','hist'))\n plt.xlabel('q(x)')\n plt.ylabel('p(x)')\n\n\n# now make the interactive widget\ninteract(plot_x_plus_y,\n mean_x=(0.,3.,.1), std_x=(.0, 2., .1), \n mean_y=(0.,3.,.1), std_y=(.0, 2., .1),\n N=(0,10000,1000))\n```\n\n\n interactive(children=(FloatSlider(value=1.5, description='mean_x', max=3.0), FloatSlider(value=1.0, descriptio\u2026\n\n\n\n\n\n \n\n\n\n## Single variable example: division\n\nAs a warm up, let's consider $q(x) = 1/x$\n\n\n```python\ncounts, bins, patches = plt.hist(x, bins=50, density=True, alpha=0.3)\ngaus_x = scs.norm.pdf(bins, 
mean_x,std_x)\n\nq_for_plot = 1./bins\n\nplt.plot(bins, gaus_x, lw=2)\nplt.plot(bins, q_for_plot, lw=2)\n\nplt.xlabel('x')\nplt.ylabel('q(x)')\n```\n\n\n```python\nplt.xlabel('x')\n\nq_of_x = 1./x\n\npred_mean_q = 1./mean_x\npred_std_q = np.sqrt((std_x/mean_x)**2)/mean_x\n\ncounts, bins, patches = plt.hist(q_of_x, bins=50, density=True, alpha=0.3)\n\nplt.plot(bins, scs.norm.pdf(bins, pred_mean_q, pred_std_q), c='r', lw=2)\nplt.legend(('pred','hist'))\nplt.xlabel('x')\nplt.ylabel('p(x)')\n```\n\n### Now let's do the same thing with an interactive widget!\n\n\n```python\ndef plot_1_over_x(mean_x, std_x, N):\n x = np.random.normal(mean_x, std_x, N)\n\n q_of_x = 1./x\n\n pred_mean_q = 1./mean_x\n pred_std_q = np.sqrt((std_x/mean_x)**2)/mean_x\n\n counts, bins, patches = plt.hist(q_of_x, \n bins=np.linspace(pred_mean_q-3*pred_std_q,pred_mean_q+3*pred_std_q,30), \n density=True, alpha=0.3)\n\n plt.plot(bins, scs.norm.pdf(bins, pred_mean_q, pred_std_q), c='r', lw=2)\n plt.legend(('pred','hist'))\n plt.xlabel('q(x)')\n plt.ylabel('p(x)')\n```\n\n\n```python\n# now make the interactive widget\ninteract(plot_1_over_x,mean_x=(0.,3.,.1), std_x=(.0, 2., .1), N=(0,10000,1000))\n```\n\n\n interactive(children=(FloatSlider(value=1.5, description='mean_x', max=3.0), FloatSlider(value=1.0, descriptio\u2026\n\n\n\n\n\n \n\n\n\n### Check propagation of errors under division \n\n\n```python\ndef plot_x_over_y(mean_x, std_x, mean_y, std_y, N):\n x = np.random.normal(mean_x, std_x, N)\n y = np.random.normal(mean_y, std_y, N)\n\n q_of_x_y = x/y\n\n pred_mean_q = mean_x/mean_y\n pred_std_q = np.sqrt((std_x/mean_x)**2+(std_y/mean_y)**2)*mean_x/mean_y\n\n counts, bins, patches = plt.hist(q_of_x_y, \n bins=np.linspace(pred_mean_q-3*pred_std_q,pred_mean_q+3*pred_std_q,30), \n density=True, alpha=0.3)\n\n\n plt.plot(bins, scs.norm.pdf(bins, pred_mean_q, pred_std_q), c='r', lw=2)\n plt.legend(('pred','hist'))\n plt.xlabel('q(x)')\n plt.ylabel('p(x)')\n\n\n\ninteract(plot_x_over_y,mean_x=(0.,3.,.1), std_x=(.0, 2., .1), mean_y=(0.,3.,.1), std_y=(.0, 2., .1),N=(0,100000,1000))\n```\n\n\n interactive(children=(FloatSlider(value=1.5, description='mean_x', max=3.0), FloatSlider(value=1.0, descriptio\u2026\n\n\n\n\n\n \n\n\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "d5e65bc56c9558005b79e11ebe6357a9f81ad720", "size": 60717, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "book/error-propagation/investigating-propagation-of-errors.ipynb", "max_stars_repo_name": "willettk/stats-ds-book", "max_stars_repo_head_hexsha": "06bc751a7e82f73f9d7419f32fe5882ec5742f2f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 41, "max_stars_repo_stars_event_min_datetime": "2020-08-18T12:14:43.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T16:37:17.000Z", "max_issues_repo_path": "book/error-propagation/investigating-propagation-of-errors.ipynb", "max_issues_repo_name": "willettk/stats-ds-book", "max_issues_repo_head_hexsha": "06bc751a7e82f73f9d7419f32fe5882ec5742f2f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 7, "max_issues_repo_issues_event_min_datetime": "2020-08-19T04:22:24.000Z", "max_issues_repo_issues_event_max_datetime": "2020-12-22T15:18:24.000Z", "max_forks_repo_path": "book/error-propagation/investigating-propagation-of-errors.ipynb", "max_forks_repo_name": "willettk/stats-ds-book", "max_forks_repo_head_hexsha": "06bc751a7e82f73f9d7419f32fe5882ec5742f2f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 13, 
"max_forks_repo_forks_event_min_datetime": "2020-08-19T02:57:47.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-03T15:24:07.000Z", "avg_line_length": 129.185106383, "max_line_length": 18876, "alphanum_fraction": 0.8741209217, "converted": true, "num_tokens": 1879, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9390248174286373, "lm_q2_score": 0.9019206857566126, "lm_q1q2_score": 0.8469259072777144}} {"text": "# Demo series: Controllers & Agents\n\n---\n\n## Active Inference for controlling a driven damped harmonic oscillator\n\nWouter Kouw, 02-08-2021\n\n### System\n\nThis project considers a [harmonic oscillator](https://en.wikipedia.org/wiki/Harmonic_oscillator#Driven_harmonic_oscillators). Consider $y(t)$ as observed displacement, $x(t)$ as the unobserved state, $u(t)$ as the driving force and $v(t), w(t)$ as measurement and process noise, respectively. The continuous-time dynamics of the system are:\n\n$$\\begin{align}\nm \\frac{d^2 x(t)}{dt^2} =&\\ - c \\frac{d x(t)}{dt} - k x(t) + u(t) \\\\\ny(t) =&\\ x(t) + e(t)\n\\end{align}$$\nwhere \n$$\nm = \\text{mass} \\, , \\quad\nc = \\text{damping} \\, , \\quad\nk = \\text{spring stiffness} \\, ,\n$$\nconstitute the physical parameters.\n\nThe process noise is a Wiener process, where the increment is Gaussian distributed $w(t) \\sim \\mathcal{N}(0, \\zeta^{-1}dt)$ with $\\zeta$ representing the precision of the process. The measurement noise is also a Wiener process, with $v(t) \\sim \\mathcal{N}(0, \\xi^{-1}dt)$ and $\\xi$ as precision parameter.\n\nWe cast this to the following discrete-time system. Using a forward finite difference method, i.e.,\n\n$$\\begin{align}\n\\frac{d^2 x(t)}{dt^2} = \\frac{x_{t+1} - 2x_t + x_{t-1}}{(\\Delta t)^2} \\, , \\quad \\text{and} \\quad\n\\frac{d x(t)}{dt} = \\frac{x_{t+1} - x_t}{\\Delta t} \\, ,\n\\end{align}$$\n\nwe get:\n\n$$\\begin{align}\nm \\frac{x_{t+1} - 2x_t + x_{t-1}}{(\\Delta t)^2} =& - c \\frac{x_{t+1} - x_t}{\\Delta t} - k x_t + u_t \\\\\nx_{t+1} =&\\ -\\frac{-2m + k (\\Delta t)^2}{m + c \\Delta t} x_t - \\frac{m - c \\Delta t}{m + c \\Delta t} = \\frac{(\\Delta t)^2}{m + c \\Delta t} u_t \\, .\n\\end{align}$$\n\nIf we substitute variables we get:\n\n$$\\begin{align} \nx_{t+1} =&\\ \\theta_1 x_t + \\theta_2 x_{t-1} + \\eta u_t \\\\ \ny_t =&\\ x_t + v_t \\, .\n\\end{align}$$\n\n\n```julia\nusing Pkg\nPkg.activate(\".\")\nPkg.instantiate();\n```\n\n \u001b[32m\u001b[1m Activating\u001b[22m\u001b[39m environment at `C:\\Users\\kouww\\Research\\actinf-oscillator\\Project.toml`\n\n\n\n```julia\nusing Revise\nusing Plots\npyplot();\n```\n\n\n```julia\n# Sampling step size\n\u0394t = 1.0\n\n# Dynamical parameters\nm = 9.0\nc = 0.3\nk = 1.5\n\n# Measurement noise precision\n\u03be_true = Inf\n\n# Pack parameters\nsys_params = (m, c, k, \u03be_true);\n```\n\n\n```julia\n# State transition coefficients\n\u03b81 = -(-2*m + k*\u0394t^2)/(m + c*\u0394t)\n\u03b82 = -(m - c*\u0394t)/(m + c*\u0394t)\n\u03b8_true = [\u03b81, \u03b82]\n\n# Control coefficient\n\u03b7_true = \u0394t^2/(m + c*\u0394t)\n\n# Pack substituted variables\nsubs_params = (\u03b8_true, \u03b7_true, \u03be_true);\n```\n\n\n```julia\nfunction sim_sys(input, state, params)\n \"Simulate dynamic system\"\n \n # Unpack state\n x_kmin1, x_kmin2 = state\n \n # Unpack parameters\n \u03b8, \u03b7, \u03be = params\n \n # State transition\n x_k = \u03b8[1]*x_kmin1 + \u03b8[2]*x_kmin2 + \u03b7*input\n \n # Generate observation\n y_k = x_k + sqrt(inv(\u03be))*randn(1,)[1]\n \n return y_k, x_k 
\nend;\n```\n\n\n```julia\n# Time \nT = 200\nstop = T-100\n\n# Input signal\ninput = [cos.((1:stop)./ (6*\u03c0)); zeros(length(stop+1:T),)]\n# input = zeros(T,); input[1] = 1.0\n\n# Preallocate arrays\nstates = zeros(T,)\noutput = zeros(T,)\n\nfor k = 4:T\n output[k], states[k] = sim_sys(input[k], states[[k-1, k-2]], subs_params)\nend\n\np100 = plot(1:T, states, linewidth=1, color=\"blue\", label=\"states\", size=(900,300))\nplot!(1:T, input, linewidth=1, color=\"red\", label=\"input\")\nvline!([stop], color=\"green\", label=\"\")\nscatter!(1:T, output, markersize=1, color=\"black\", label=\"output\")\n```\n\n\n```julia\nsavefig(p100, \"figures/example-input-output_seq1.png\")\n```\n\n## Model 1: Latent Auto-Regressive model with eXogenous inputs\n\nWe cast the above system into matrix form:\n\n$$ \\underbrace{\\begin{bmatrix} x_{k} \\\\ x_{k-1} \\end{bmatrix}}_{z_k} = \\Big(\\underbrace{\\begin{bmatrix} 0 & 0 \\\\ 1 & 0 \\end{bmatrix}}_{S} + \\underbrace{\\begin{bmatrix} 1 \\\\ 0 \\end{bmatrix}}_{s} \\begin{bmatrix} \\theta_1 & \\theta_2 \\end{bmatrix} \\Big) \\underbrace{\\begin{bmatrix} x_{k-1} \\\\ x_{k-2} \\end{bmatrix}}_{z_{k-1}} + \\begin{bmatrix} 1 \\\\ 0 \\end{bmatrix} \\eta u_t \\, ,$$\n\nNext, we add white noise with unknown precision to the first element in the model and white noise with a tiny amount of noise to the other elements:\n\n$$w_k \\sim \\mathcal{N}\\big(0, V(\\zeta) \\big) \\, , \\quad \\text{with} \\quad V(\\zeta) = \\begin{bmatrix} \\zeta^-1 & 0 \\\\ 0 & \\epsilon \\end{bmatrix} \\, .$$\n\nThe state transition can thus be written as:\n\n$$ z_k = \\underbrace{\\big(S + s\\theta^{\\top} \\big)}_{A(\\theta)} z_{k-1} + \\underbrace{s\\eta}_{B(\\eta)} u_k + w_k \\, .$$\n\n\n### Likelihood\n\nIntegrating out $w_k$ and $v_t$ lets us formulate the dynamics as a Gaussian state-space model:\n\n$$\\begin{align}\nz_k \\sim&\\ \\mathcal{N}(A(\\theta) z_{k-1} + B(\\eta) u_k , V(\\zeta)) \\\\\ny_k \\sim&\\ \\mathcal{N}(s^{\\top} z_k, \\xi^{-1}) \\, .\n\\end{align}$$\n\n### Priors\n\nWe choose Gaussian priors for the unknown coefficients and Gamma priors for the unknown noise precisions:\n\n$$\\begin{align}\np(\\theta) = \\mathcal{N}(m^{0}_{\\theta}, V^{0}_{\\theta}) \\, , \\quad \np(\\eta) = \\mathcal{N}(m^{0}_{\\eta}, v^{0}_{\\eta}) \\, , \\quad \np(\\zeta)= \\Gamma(a^{0}_\\zeta, b^{0}_\\zeta) \\, , \\quad\np(\\xi)= \\Gamma(a^{0}_\\xi, b^{0}_\\xi) \\, .\n\\end{align}$$\n\nWe also need a prior distribution for the initial state:\n\n$$ p(z_0) = \\mathcal{N}({\\bf 0}, V^{z_0}) \\, .$$\n\n\n### Posteriors\n\nWe can apply the Bayesian filtering equations to obtain recursive posterior estimates:\n\n$$\\begin{align}\\label{eq:filtering}\n \\underbrace{p(z_k | x_{1:t})}_{\\text{state posterior}} &= \\frac{\\overbrace{p(x_t | z_t)}^{\\text{likelihood}}}{\\underbrace{p(x_t | x_{1:t-1})}_{\\text{evidence}} } \\overbrace{\\iint \\underbrace{p(z_t| z_{t-1},u_{t})}_{\\text{state transition}} \\underbrace{q(u_t)}_{\\substack{\\text{control} \\\\ \\text{signal}}} \\underbrace{p(z_{t-1} | x_{1:t-1})}_{\\text{state prior}} \\d z_{t-1} \\d u_t}^{\\text{prior predictive $p(z_t | x_{1:t-1})$}} \\, ,\n\\end{align}$$\n\n### Variational Bayesian inference\n\n#### Recognition model\n\nThe recognition model will follow the generative model:\n\n$$\\begin{align}\nq(\\theta) = \\mathcal{N}(m_{\\theta}, V_{\\theta}) \\ , \\quad \nq(\\eta) = \\mathcal{N}(m_{\\eta}, v_{\\eta}) \\ , \\quad \nq(\\zeta)= \\Gamma(a_\\zeta, b_\\zeta) \\, , \\quad\nq(\\xi)= \\Gamma(a_\\xi, b_\\xi) \\, .\n\\end{align}$$\n\n#### 
Free Energy function\n\n\n```julia\nusing LinearAlgebra\nusing LaTeXStrings\nusing ProgressMeter\nusing Optim\nusing ForneyLab\n```\n\n\n```julia\nusing LARX\n```\n\n\n```julia\ninclude(\"util.jl\");\n```\n\n\n```julia\ngraph1 = FactorGraph()\n\n# Coefficients\n@RV \u03b8 ~ GaussianMeanPrecision(placeholder(:m_\u03b8, dims=(2,)), placeholder(:w_\u03b8, dims=(2,2)))\n@RV \u03b7 ~ GaussianMeanPrecision(placeholder(:m_\u03b7), placeholder(:w_\u03b7))\n\n# Noise precisions\n@RV \u03b6 ~ Gamma(placeholder(:a_\u03b6), placeholder(:b_\u03b6))\n@RV \u03be ~ Gamma(placeholder(:a_\u03be), placeholder(:b_\u03be))\n\n# State prior\n@RV x ~ GaussianMeanPrecision(placeholder(:m_x, dims=(2,)), placeholder(:w_x, dims=(2, 2)), id=:x_k)\n\n# Autoregressive state transition\n@RV z ~ LatentAutoregressiveX(\u03b8, x, \u03b7, placeholder(:u), \u03b6, id=:z_k)\n\n# Likelihood\n@RV y ~ GaussianMeanPrecision(dot([1. , 0.], z), \u03be, id=:y_k)\nplaceholder(y, :y);\n\n# Specify recognition model\nq = PosteriorFactorization(z, x, \u03b8, \u03b7, \u03b6, \u03be, ids=[:z, :x, :\u03b8, :\u03b7, :\u03b6, :\u03be])\n\n# Specify and compile message passing algorithm\nalgo = messagePassingAlgorithm([z, x, \u03b8, \u03b7, \u03b6, \u03be], q)\neval(Meta.parse(algorithmSourceCode(algo)));\n```\n\n### Experiment 1: Online system Identification from known inputs\n\n\n```julia\n# Time horizon\nT = 300\n\n# Design input signal\ninputs = cos.(range(1, stop=T) * \u03c0/6);\n# inputs = [mean(sin.(t./ (0.1:.1:6 .*\u03c0))) for t in 1:T];\n```\n\n\n```julia\nusing LARX\n```\n\n\n```julia\n# Preallocation\nstates = zeros(T,)\noutputs = zeros(T,)\n\n# Initialize constraints\nconstraints = Dict()\n\n# Initialize marginals in factor graph\nmarginals = Dict()\nmarginals[:x] = ProbabilityDistribution(Multivariate, GaussianMeanPrecision, m=randn(2,), w=.01*eye(2))\nmarginals[:z] = ProbabilityDistribution(Multivariate, GaussianMeanPrecision, m=randn(2,), w=.01*eye(2))\nmarginals[:\u03b8] = ProbabilityDistribution(Multivariate, GaussianMeanPrecision, m=randn(2,), w=.01*eye(2))\nmarginals[:\u03b7] = ProbabilityDistribution(Univariate, GaussianMeanPrecision, m=0.0, w=.01)\nmarginals[:\u03b6] = ProbabilityDistribution(Univariate, Gamma, a=1e3, b=1)\nmarginals[:\u03be] = ProbabilityDistribution(Univariate, Gamma, a=1e2, b=1e-2)\n\n# Number of variational updates\nnum_iterations = 10\n\n# Track estimated states\nest_states = (zeros(T,), zeros(T,))\nest_coeffs = (zeros(T,2), zeros(T,2,2))\n\n@showprogress for k = 3:T\n \n # Execute input and observe output\n outputs[k], states[k] = sim_sys(inputs[k], states[[k-1,k-2]], sys_params)\n \n \"Update model params\"\n \n # Update constraints\n constraints = Dict(:y => outputs[k],\n :u => inputs[k],\n :m_x => mn(marginals[:z]),\n :w_x => pc(marginals[:z]),\n :m_\u03b8 => mn(marginals[:\u03b8]),\n :w_\u03b8 => pc(marginals[:\u03b8]),\n :m_\u03b7 => mn(marginals[:\u03b7]),\n :w_\u03b7 => pc(marginals[:\u03b7]),\n :a_\u03b6 => marginals[:\u03b6].params[:a],\n :b_\u03b6 => marginals[:\u03b6].params[:b],\n :a_\u03be => marginals[:\u03be].params[:a],\n :b_\u03be => marginals[:\u03be].params[:b])\n\n # Iterate recognition factor updates\n for ii = 1:num_iterations\n\n # Update parameters\n step\u03b8!(constraints, marginals)\n step\u03b7!(constraints, marginals) \n \n # Update states\n stepz!(constraints, marginals)\n stepx!(constraints, marginals)\n \n # Update noise precisions\n step\u03b6!(constraints, marginals)\n step\u03be!(constraints, marginals)\n \n end \n \n # Store state estimates\n est_states[1][k] = 
mn(marginals[:z])[1]\n est_states[2][k] = sqrt(inv(pc(marginals[:z])[1,1]))\n est_coeffs[1][k,:] = mn(marginals[:\u03b8])\n est_coeffs[2][k,:,:] = inv(pc(marginals[:\u03b8]))\nend\n```\n\n\n```julia\np1 = plot(3:T, inputs[3:T], color=\"red\", label=\"inputs\")\np2 = scatter(3:T, outputs[3:T], markersize=2, color=\"black\", label=\"outputs\")\nplot!(3:T, est_states[1][3:T], color=\"purple\", label=\"inferred\")\nplot!(3:T, est_states[1][3:T], ribbon=[est_states[2][3:T], est_states[2][3:T]], color=\"purple\", label=\"\")\nplot(p1, p2, layout=(2,1), size=(900,400))\n```\n\n\n```julia\np1 = plot(3:T, est_coeffs[1][3:T,1], ribbon=[sqrt.(est_coeffs[2][3:T,1,1]), sqrt.(est_coeffs[2][3:T,1,1])], color=\"blue\", label=\"\u03b81 inferred\")\nplot!(\u03b8_true[1]*ones(T-2,), label=\"\u03b81 true\", color=\"green\")\np2 = plot(3:T, est_coeffs[1][3:T,2], ribbon=[sqrt.(est_coeffs[2][3:T,2,2]), sqrt.(est_coeffs[2][3:T,2,2])], color=\"darkblue\", label=\"\u03b82 inferred\")\nplot!(\u03b8_true[2]*ones(T-2,), label=\"\u03b82 true\", color=\"darkgreen\")\nplot(p1,p2, layout=(2,1), size=(900,300))\n```\n\n\n```julia\nmn(marginals[:\u03b8])\n```\n\n#### Validate identified system\n\nWe validate the identified system by computing simulation error on the validation set.\n\n\n```julia\n# Time horizon\nT_val = 1000\n\n# Input signal\ninputs_val = sin.(range(1, stop=T_val)*\u03c0/6);\n# inputs_val = [mean(sin.(t./ (0.1:.1:6 .*\u03c0))) for t in 1:T_val];\n```\n\n\n```julia\n# # Prediction graph\n# graph = FactorGraph()\n\n# # Autoregressive node\n# @RV z ~ LatentAutoregressiveX(placeholder(:\u03b8, dims=(2,)), placeholder(:x, dims=(2,)), placeholder(:\u03b7), placeholder(:u), placeholder(:\u03b6), id=:z_pred)\n\n# # Inference algorithm\n# q = PosteriorFactorization(z, ids=[:_pred])\n# algo = messagePassingAlgorithm([z], q)\n# eval(Meta.parse(algorithmSourceCode(algo)));\n```\n\n\n```julia\n# Preallocate arrays\nstates_val = zeros(T_val,)\noutputs_val = zeros(T_val,)\n\n# # Initialize marginal\n# marginals[:pred] = ProbabilityDistribution(Multivariate, GaussianMeanPrecision, m=zeros(2,), w=eye(2))\n\n# Preallocate prediction arrays\npreds = (zeros(T_val,), zeros(T_val,))\n\n@showprogress for k = 3:T_val\n \n # Simulate forward\n outputs_val[k], states_val[k] = sim_sys(inputs_val[k], states_val[[k-1,k-2]], sys_params)\n \n # Inferred prior states\n z_kmin1 = preds[1][[k-1,k-2]]\n \n # Posterior predictive\n preds[1][k] = mn(marginals[:\u03b8])'*z_kmin1 + mn(marginals[:\u03b7])*inputs_val[k]\n preds[2][k] = z_kmin1'*pc(marginals[:\u03b8])*z_kmin1 + inputs_val[k]'*pc(marginals[:\u03b7])*inputs_val[k] + inv(mn(marginals[:\u03be]))\n\nend\n```\n\n\n```julia\nzoom = 3:min(100,T_val)\n\n# Plot predictions\np23 = scatter(zoom, outputs_val[zoom], label=\"outputs\", xlabel=\"time (t)\", color=\"black\", size=(900,300), legend=:topleft)\nplot!(zoom, inputs_val[zoom], label=\"inputs\", color=\"red\")\nplot!(zoom, preds[1][zoom], label=\"simulation\", color=\"blue\")\n# plot!(zoom, preds[1][zoom], ribbon=[sqrt.(inv.(preds[2][zoom])), sqrt.(inv.(preds[2][zoom]))], label=\"\", color=\"blue\")\n```\n\n\n```julia\nPlots.savefig(p23, \"figures/simulations.png\")\n```\n\n\n```julia\n# Compute prediction error\nsq_pred_error = (preds[1] .- outputs_val).^2\n\n# Simulation error\nMSE_sim = mean(sq_pred_error)\n\n# Scatter error over time\np24 = scatter(1:T_val, sq_pred_error, color=\"black\", xlabel=\"time (t)\", ylabel=\"Prediction error\", label=\"\")\ntitle!(\"MSE = \"*string(MSE_sim))\n```\n\n\n```julia\nPlots.savefig(p24, 
\"figures/sim-errors.png\")\n```\n\n### Experiment 2: Online system identification with agent-regulated inputs\n\n\n```julia\ngoal_state = (.8, 1e-3)\n```\n\n\n```julia\nfunction EFE_k(action, prior_state, goal_state, model_params)\n\n # Unpack model parameters\n \u03b8, \u03b7, \u03b6, \u03be = model_params\n\n # Process noise\n \u03a3_z = inv(\u03b6) *[1. 0.; 0. 0.]\n\n # Helper matrices\n S = [0. 0.; 1. 0.]\n s = [1.; 0.]\n\n # Unpack prior state \n \u03bc_kmin = prior_state[1]\n \u03a3_kmin = prior_state[2]\n\n # Predicted observation\n y_hat = \u03b8'*\u03bc_kmin + \u03b7*action[1]\n \n # Covariance matrix\n \u03a3_k = \u03a3_kmin + \u03a3_z\n \u03a3_11 = \u03a3_k\n \u03a3_21 = s'*\u03a3_k\n \u03a3_12 = \u03a3_k*s\n \u03a3_22 = s'*\u03a3_k*s .+ inv(\u03be)\n\n # Calculate conditional entropy\n \u03a3_cond = \u03a3_22 - \u03a3_21 * inv(\u03a3_11) * \u03a3_12\n ambiguity = 0.5(log2\u03c0 + log(\u03a3_cond[1]) + 1)\n\n # Risk as KL between marginal and goal prior\n risk = KLDivergence(y_hat, \u03a3_22[1], goal_state[1], goal_state[2])\n\n # Update loss.\n return risk + ambiguity\nend\n```\n\n\n```julia\n# Time horizon\nT = 300\n \n# Goal state (mean and std dev)\ngoal_state = (.8, 1e-4)\n\n# Number of variational updates\nnum_iterations = 4\n\n# Preallocation\ninputs = zeros(T,)\nstates = zeros(T,)\noutputs = zeros(T,)\n\n# Initialize constraints\nconstraints = Dict()\n\n# Initialize marginals in factor graph\nmarginals = Dict()\nmarginals[:x] = ProbabilityDistribution(Multivariate, GaussianMeanPrecision, m=randn(2,), w=.1*eye(2))\nmarginals[:z] = ProbabilityDistribution(Multivariate, GaussianMeanPrecision, m=randn(2,), w=.1*eye(2))\nmarginals[:\u03b8] = ProbabilityDistribution(Multivariate, GaussianMeanPrecision, m=randn(2,), w=.1*eye(2))\nmarginals[:\u03b7] = ProbabilityDistribution(Univariate, GaussianMeanPrecision, m=randn(1,)[1], w=.1)\nmarginals[:\u03b6] = ProbabilityDistribution(Univariate, Gamma, a=1, b=1)\nmarginals[:\u03be] = ProbabilityDistribution(Univariate, Gamma, a=1e2, b=1e-2)\n\n# Track estimated states\nest_states = (zeros(T,), zeros(T,))\n\n@showprogress for k = 3:T\n \n \"Find control\"\n\n # Pack parameters estimated by model\n# model_params = (mn(marginals[:\u03b8]), mn(marginals[:\u03b7]), mn(marginals[:\u03b6]), mn(marginals[:\u03be]))\n model_params = (\u03b8_true, \u03b7_true, 1., \u03be_true)\n\n # State prior variable\n prior_state = (mn(marginals[:z]), inv(pc(marginals[:z])))\n# prior_state = (states[[k-1,k-2]], eye(2))\n\n # Objective function\n G(u_k) = EFE_k(u_k, prior_state, goal_state, model_params)\n\n # Miminimize EFE\n# results = optimize(G, -1, 1, rand(1,), Fminbox(LBFGS()), Optim.Options(iterations=100); autodiff=:forward)\n results = optimize(G, rand(1,), LBFGS(), Optim.Options(iterations=100); autodiff=:forward)\n\n # Select first action in policy\n inputs[k] = Optim.minimizer(results)[1]\n \n \"Execute control\"\n \n # Execute input and observe output\n outputs[k], states[k] = sim_sys(inputs[k], states[[k-1,k-2]], sys_params)\n \n \"Update model params\"\n \n # Update constraints\n constraints = Dict(:y => outputs[k],\n :u => inputs[k],\n :m_x => mn(marginals[:z]),\n :w_x => pc(marginals[:z]),\n :m_\u03b8 => mn(marginals[:\u03b8]),\n :w_\u03b8 => pc(marginals[:\u03b8]),\n :m_\u03b7 => mn(marginals[:\u03b7]),\n :w_\u03b7 => pc(marginals[:\u03b7]),\n :a_\u03b6 => marginals[:\u03b6].params[:a],\n :b_\u03b6 => marginals[:\u03b6].params[:b],\n :a_\u03be => marginals[:\u03be].params[:a],\n :b_\u03be => marginals[:\u03be].params[:b])\n\n # Iterate 
recognition factor updates\n for ii = 1:num_iterations\n\n # Update parameters\n step\u03b7!(constraints, marginals)\n step\u03b8!(constraints, marginals)\n \n # Update states\n stepz!(constraints, marginals)\n stepx!(constraints, marginals)\n \n # Update noise precisions\n# step\u03b6!(constraints, marginals)\n# step\u03be!(constraints, marginals)\n \n end \n \n # Store state estimates\n est_states[1][k] = mn(marginals[:z])[1]\n est_states[2][k] = sqrt(inv(pc(marginals[:z])[1,1]))\n \nend\n```\n\n\n```julia\np1 = plot(3:T, inputs[3:T], color=\"red\", label=\"inputs\")\np2 = plot(3:T, outputs[3:T], color=\"blue\", label=\"outputs\")\nplot!(3:T, est_states[1][3:T], color=\"purple\", label=\"inferred\")\nplot!(3:T, est_states[1][3:T], ribbon=[est_states[2][3:T], est_states[2][3:T]], color=\"purple\", label=\"\")\nplot!(3:T, goal_state[1]*ones(T-2,), color=\"green\", label=\"goal state\")\nplot(p1, p2, layout=(2,1), size=(900,400))\n```\n\n#### Validation\n\n\n```julia\n# Time horizon\nT_val = 1000\n\n# Input signal\ninputs_val = cos.(range(1, stop=T_val)*\u03c0/6);\n# inputs_val = [mean(sin.(t./ (0.1:.1:6 .*\u03c0))) for t in 1:T_val];\n```\n\n\n```julia\n# Preallocate arrays\nstates_val = zeros(T_val,)\noutputs_val = zeros(T_val,)\n\n# Preallocate prediction arrays\npreds = (zeros(T_val,), zeros(T_val,))\n\n@showprogress for k = 3:T_val\n \n # Simulate forward\n outputs_val[k], states_val[k] = sim_sys(inputs_val[k], states_val[[k-1,k-2]], sys_params)\n \n # Inferred prior states\n z_kmin1 = preds[1][[k-1,k-2]]\n \n # Posterior predictive\n preds[1][k] = mn(marginals[:\u03b8])'*z_kmin1 + mn(marginals[:\u03b7])*inputs_val[k]\n preds[2][k] = z_kmin1'*pc(marginals[:\u03b8])*z_kmin1 + inputs_val[k]'*pc(marginals[:\u03b7])*inputs_val[k] + mn(marginals[:\u03be])\n\nend\n```\n\n\n```julia\nzoom = 3:T_val\n\n# Plot predictions\np23 = plot(zoom, outputs_val[zoom], label=\"outputs\", xlabel=\"time (t)\", color=\"black\", size=(900,300))\nplot!(zoom, inputs_val[zoom], label=\"inputs\", color=\"red\")\nplot!(zoom, preds[1][zoom], label=\"simulation\", color=\"blue\")\n# plot!(zoom, preds[1][zoom], ribbon=[sqrt.(inv.(preds[2][zoom])), sqrt.(inv.(preds[2][zoom]))], label=\"\", color=\"blue\")\n```\n\n\n```julia\n# Compute prediction error\nsq_pred_error = (preds[1] .- outputs_val).^2\n\n# Simulation error\nMSE_sim = mean(sq_pred_error)\n\n# Scatter error over time\np24 = scatter(1:T_val, sq_pred_error, color=\"black\", xlabel=\"time (t)\", ylabel=\"Prediction error\", label=\"\")\ntitle!(\"MSE = \"*string(MSE_sim))\n```\n\n\n```julia\n\n```\n\n\n```julia\n\n```\n\n\n```julia\n\n```\n", "meta": {"hexsha": "bb83c1290c9de5f70804ee4769175199aef2a4e7", "size": 69114, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "demo_ActInf-LARX-oscillator.ipynb", "max_stars_repo_name": "wmkouw/actinf-oscillator", "max_stars_repo_head_hexsha": "3cee2e5e49423f431c3a85eebbe17195ed800ebc", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "demo_ActInf-LARX-oscillator.ipynb", "max_issues_repo_name": "wmkouw/actinf-oscillator", "max_issues_repo_head_hexsha": "3cee2e5e49423f431c3a85eebbe17195ed800ebc", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "demo_ActInf-LARX-oscillator.ipynb", "max_forks_repo_name": "wmkouw/actinf-oscillator", 
"max_forks_repo_head_hexsha": "3cee2e5e49423f431c3a85eebbe17195ed800ebc", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 77.3087248322, "max_line_length": 40325, "alphanum_fraction": 0.7718551958, "converted": true, "num_tokens": 6325, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9496693645535724, "lm_q2_score": 0.8918110475945175, "lm_q1q2_score": 0.8469256308709411}} {"text": "## Quadrature\n\n\n```python\nimport numpy as np\nimport matplotlib as mpl\nfrom matplotlib import pyplot as plt\nfrom matplotlib import cm\nimport seaborn as sns\nfrom scipy.integrate import quad, fixed_quad\nsns.set_context(\"talk\", font_scale=1.5, rc={\"lines.linewidth\": 2.5})\nsns.set_style(\"whitegrid\")\nimport sympy as sp\nfrom IPython.display import HTML\nfrom matplotlib import animation\n%matplotlib inline\n\n# Don't tinker, or do\n#%matplotlib nbagg\n# from matplotlib import rcParams\n#rcParams['font.family']='sans-serif' \n#rcParams('font', serif='Helvetica Neue') \n# rcParams['text.usetex']= True \n#rcParams.update({'font.size': 22})\n```\n\n## Numerical integration schemes\nIntegration is a fundamental operation in calculus, and involves finding the area under a certain curve (geometrically). As and when you were first introduced to integration, you realized that certain functions cannot be analytically integrated (the reverse is not true however---differentiation of functions was more natural!). But the definite integral of such functions (indefinite integrals are important, but not central in engineering) can at least be estimated numerically. We look at some schemes/algorithms to do such an estimation. The problem we deal with is then to approximate\n$$ \\int_{a}^{b} f(x) dx \\approx \\sum_{i=1}^{n} \\omega_i f(x_i) $$\ngiven an integrable function $f : [a,b] \\to \\mathbb{R}$ with *nodes* ($x_i$) and *weights* ($\\omega_i$). Schemes that we will be discussing involve careful choices of these nodes and weights. 
\n\nTo get started, let us define a class that helps us implement any quadrature rule easily.\n\n\n```python\nclass SpatialIntegrator(object):\n \"\"\" Class for wrapping a quadrature scheme with other goodies\n \"\"\"\n def __init__(self, i_a, i_b, i_h):\n \"\"\" Initialize the integrator\n \"\"\"\n # What forcing function are we using?\n self.forcing = None\n\n # What integration algorithm are we using?\n self.integrator = None\n\n self.a = i_a\n self.b = i_b\n self.h = i_h\n self.nsteps = int((i_b - i_a)/i_h)\n\n def set_forcing_function(self, t_func):\n \"\"\" Set forcing function (or) integrand to be used \n \"\"\"\n if type(t_func) is not str:\n try:\n # If not string, try and evaluate the function\n t_func(0.0)\n # If the function works, set this function as forcing\n self.forcing = t_func\n except:\n raise RuntimeError('Provided function cannot be evaluated')\n \n def analytic(self):\n \"\"\" For testing convergence, defined as a special function \"\"\"\n if self.forcing.__name__ == \"composite_function\":\n analytical_integral = (self.forcing(self.b)[1] - self.forcing(self.a)[1])\n else:\n def temp_func(x):\n return self.forcing(x)[0]\n analytical_integral, _ = quad(temp_func, self.a, self.b)\n return analytical_integral\n \n def integrate_using(self, integrator, renderer):\n \"\"\" Integrates the function and spits out the relative error\n \"\"\"\n self.integrator = integrator.__name__\n all_starts = np.linspace(self.a, self.b - self.h, self.nsteps)\n all_ends = all_starts + self.h \n integral, start_end_values = integrator(self.forcing, all_starts, all_ends)\n integral = np.sum(integral)\n \n analytical_integral = self.analytic()\n print(\"Integral is {:20.16f}, Analytical integral is {:20.16f}, Relative error is {:20.16f}\".format(integral, analytical_integral, np.abs(integral/analytical_integral - 1)))\n # Pass as (2xn) arrays. While reshaping do T to get (nx2) arrays\n self.draw(renderer, start_end_values)\n return np.abs(integral - analytical_integral)/np.abs(analytical_integral)\n\n def draw(self, renderer, st_end_vals):\n \"\"\" Draw the matplotlib canvas with the portrait we want\n \"\"\"\n # If there is a timestepper, then there is numerical data\n # Plot them\n fine_mesh = np.linspace(self.a, self.b, 1001)\n renderer.plot(fine_mesh, self.forcing(fine_mesh)[0], 'r-', label=r'$f(x)$')\n\n # This step interleaves data present in 0,2,4,6... by transposing\n # them and then reshaping\n all_x_values = st_end_vals[0::2].T\n all_x_values = all_x_values.reshape(-1,)\n\n all_y_values = st_end_vals[1::2].T\n all_y_values = all_y_values.reshape(-1,)\n \n renderer.plot(all_x_values, all_y_values, 'k--')\n renderer.fill_between(all_x_values, all_y_values, alpha=0.2)\n renderer.legend()\n renderer.set_xlabel(r'$x$')\n renderer.set_ylabel(r'$y$')\n renderer.set_title(r'${}$'.format(self.integrator))\n# renderer.set_aspect('equal')\n\n```\n\n## Quadrature\n\nA variety of rules for quadratures exist, and we are going to look at three simple ones : (a) Midpoint (b) Trapezoidal and (c) Simpson rules, although others (Clenshaw--Curtis, Gaussian quadrature) will be discussed on the way. 
We will attempt to compare these methods in terms of their ease (in understanding/implementation), order of accuracy (in comparison to the discretization $h$) and function evaluations for each step $h$.\n\n## What do these rules do?\nThey approximate the area under the curve in a smart way---using local polynomials (in the case of interpolatory quadrature), you approximate the curve and integrate these polynomials instead. This approximation is not arbitrary, it represents a local Taylor series expansion. We'll first look at some schemes and what local polynomials they represent. \n\n\n```python\ndef trapezoidal(func, lhs, rhs):\n \"\"\"Does quadrature for one panel using the trapezoidal\n rule\n \n Parameters\n ----------\n func : ufunc\n A function, f, that takes one argument and\n returns the function value and exact integral\n as a tuple\n lhs : float/array-like\n The lower evaluation limit of the definite integral\n rhs : float/array-like\n The upper evalutation limit of the definite integral\n\n Returns\n -------\n integral : float/array-like\n Approximation of the integral using quadrature\n pts : np.array\n Points and values at which the local polynomials\n are drawn, specified as (x_1,fx_1,x_2,fx_2,...)\n \"\"\"\n tmp1, tmp2 = func(lhs)[0], func(rhs)[0]\n return 0.5*(rhs-lhs)*(tmp1 + tmp2), np.array([lhs, tmp1, rhs, tmp2])\n```\n\n\n```python\ndef midpoint(func, lhs, rhs):\n \"\"\"Does quadrature for one panel using the midpoint\n rule\n \n Parameters\n ----------\n func : ufunc\n A function, f, that takes one argument and\n returns the function value and exact integral\n as a tuple\n lhs : float/array-like\n The lower evaluation limit of the definite integral\n rhs : float/array-like\n The upper evalutation limit of the definite integral\n\n Returns\n -------\n integral : float/array-like\n Approximation of the integral using quadrature\n pts : np.array\n Points and values at which the local polynomials\n are drawn, specified as (x_1,fx_1,x_2,fx_2,...)\n \"\"\"\n tmp = func(0.5*(lhs+rhs))[0]\n return (rhs-lhs)*tmp, np.array([lhs, tmp, rhs, tmp])\n```\n\n\n```python\ndef simpson(func, lhs, rhs):\n \"\"\"Does quadrature for one panel using the Simpson\n rule\n \n Parameters\n ----------\n func : ufunc\n A function, f, that takes one argument and\n returns the function value and exact integral\n as a tuple\n lhs : float/array-like\n The lower evaluation limit of the definite integral\n rhs : float/array-like\n The upper evalutation limit of the definite integral\n\n Returns\n -------\n integral : float/array-like\n Approximation of the integral using quadrature\n pts : np.array\n Points and values at which the local polynomials\n are drawn, specified as (x_1,fx_1,x_2,fx_2,...)\n \"\"\"\n tmp1, tmp2, tmp3 = func(lhs)[0], func(rhs)[0], func(0.5*(lhs+rhs))[0]\n return (rhs-lhs)/6. * (tmp1+tmp2+4*tmp3), np.array([lhs, tmp1, 0.5*(lhs+rhs), tmp3, rhs, tmp2])\n```\n\n## Testing quadrature?\nLet's test these rules out between $[a,b]$ for simple polynomial (constant, linear, quadratic and cubic) curves and draw inferences from them (What's the error and so on...)\n\n\n```python\n# Bounds of integration\na = 0\nb = 1\nh = (b-a)\n\n# Define function to be integrated and\n# its integral\ndef test_func(x):\n# # Constant\n# return 1. + 0.*x, 0. + 1.*x\n\n# # Linear\n# return 0. + 1.*x, 0. + 0.5*x**2\n\n# # Quadratic\n# return 0. + 1.*x**2, 0. + x**3/3.\n\n# # Cubic\n# return 0. + 1.*x**3, 0. + x**4/4.\n\n # Quartic\n return 0. + 1.*x**4, 0. 
+ x**5/5.\n```\n\n\n```python\nfig, ax = plt.subplots(1,1, figsize=(6, 6))\nsi = SpatialIntegrator(a, b, h)\nsi.set_forcing_function(test_func)\n\n# Try out different rules below\n# si.integrate_using(midpoint, ax)\n# si.integrate_using(trapezoidal, ax)\nsi.integrate_using(simpson, ax)\n```\n\n### Things to observe\n- What local polynomial orders do they fit?\n- Is there a relation between the errors of the midpoint and trapezoidal quadrature rule?\n- How do they perform for functions that are more complicated (say, a cubic)?\n\n## Composite quadrature\nWe saw that if the function is not simple, quadrature struggles to give us approximations that are close. When faced with this situtation, we do what we do best---throw in more points and hope for the best. This has a name---\"Composite quadrature\", which fits the curve into many small piecewise polynomials and integrates them instead. We can also rigorously show the error bounds for composite rules too. Let's do that now. \n\n\n```python\n# Bounds of integration\na = 0\nb = 1\n\n# All the change comes in this part only. \n# Instead of simple interval, we pass many\n# such intervals to the quadrature rule\n# and sum them up\nh = (b-a)/10\n# h = (b-a)/50\n# h = (b-a)/100\n\n# Define not so simple function to be integrated\ndef composite_function(x):\n# return np.exp(2*x), 0.5*np.exp(2*x)\n \n# return np.exp(x)*(1-np.cos(2.*np.pi*x)), np.exp(x)*(1. + 4.*np.pi**2 - np.cos(2.*np.pi*x) - 2.*np.pi*np.sin(2.*np.pi*x))/(1. + 4.*np.pi**2)\n\n return np.sqrt(1-x**2), 0.5*( x * np.sqrt(1-x**2) + np.arcsin(x))\n\n# return np.log(2. + np.cos(2.*np.pi*x)), 2*x\n\n# return 1. + np.cos(2.*np.pi*x), x + np.sin(2.*np.pi*x)/(2.*np.pi)\n```\n\n\n```python\nfig, ax = plt.subplots(1,1, figsize=(5, 5))\nsi = SpatialIntegrator(a, b, h)\nsi.set_forcing_function(composite_function)\n\n# Try out different rules below\n# si.integrate_using(midpoint, ax)\n# si.integrate_using(trapezoidal, ax)\nsi.integrate_using(simpson, ax)\n```\n\n## Order of accuracy of composite quadrature rules\nWhat's **order** of accuracy? Order of accuracy quantifies the rate of convergence of a numerical approximation of a integral to the exact integral.\nThe numerical solution ${u}$ is said to be $n^{\\text{th}}$-order accurate if the error, $e(h):=\\lvert\\tilde{{u}}-{u} \\rvert$ is proportional to the discretization-size $ h $, to the $n^{\\text{th}}$ power. That is\n\n$$ e(h)=\\lvert\\tilde{{u}}-{u} \\rvert\\leq C(h)^{n} $$\n\nDetails of this are given in the slides. Here, we focus on integrating a simple function (which has a non-negligible contribution from H.O.T in the Taylor series ) and figure out the order of convergence. 
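Once errors have been measured at two step sizes, the observed order can be backed out of $e(h) \approx C h^{n}$ directly: taking the ratio of the two errors eliminates $C$, giving $n \approx \log(e_1/e_2) / \log(h_1/h_2)$. A small helper (not from the original notebook, with hypothetical numbers for illustration):

```python
# Sketch: estimate the observed order of accuracy from errors at two step sizes,
# assuming e(h) ~ C * h**n.
import numpy as np

def observed_order(h1, e1, h2, e2):
    return np.log(e1 / e2) / np.log(h1 / h2)

# Hypothetical illustration: halving h cuts the error by about 4x => roughly second order
print(observed_order(0.1, 2.0e-3, 0.05, 5.0e-4))  # approximately 2.0
```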
The model problem that we deal with is\n\n$$ \\int_{0}^{1} e^{2x} dx $$\nwhich as we know has the analytical solution $ \\tilde{y} = \\frac{e^{2}}{2} $, and so error can be calculated.\n\n### Gaussian quadrature\nBefore doing tests for convergence, let us also implement Gaussian quadrature using `scipy.integrate.fixed_quad`\n\n\n```python\ndef gauss(func, lhs, rhs):\n \"\"\"Does quadrature for one panel using the Gaussian rule\n \n Parameters\n ----------\n func : ufunc\n A function, f, that takes one argument and\n returns the function value and exact integral\n as a tuple\n lhs : float/array-like\n The lower evaluation limit of the definite integral\n rhs : float/array-like\n The upper evalutation limit of the definite integral\n\n Returns\n -------\n integral : float/array-like\n Approximation of the integral using quadrature\n pts : np.array\n Points and values at which the local polynomials\n are drawn, specified as (x_1,fx_1,x_2,fx_2,...)\n \"\"\"\n def temp_func(x):\n return func(x)[0]\n \n quad_sum = 0.0\n for i in range(lhs.shape[0]):\n quad_sum += fixed_quad(temp_func, lhs[i], rhs[i], n=3)[0] \n return quad_sum, np.array([lhs, 0.0*lhs, rhs, 0.0*rhs])\n```\n\n\n```python\n# Bounds of integration\na = 0\nb = 1\n\n# All functions that you coded up\nimpl_list = [midpoint, trapezoidal, simpson, gauss]\n\n# Time steps and associated errors from 25*1 to 25*12\nh_steps = np.arange(1, 12, dtype=np.int16)\nerrors_list = [[None for i in h_steps] for impl in impl_list]\n\n# Run simulations and collect errors\nfor i_impl, impl in enumerate(impl_list):\n for i_step, step in enumerate(h_steps):\n new_h = (b-a)/(25*(step))\n si = SpatialIntegrator(a, b, new_h)\n si.set_forcing_function(composite_function)\n errors_list[i_impl][i_step] = si.integrate_using(impl, ax)\n```\n\n\n```python\n# Draw error plots in a log-log plot\nfig, ax = plt.subplots(1,1, figsize=(6, 6))\n\n# x axis is time, y axis is error\nfor i_impl, impl in enumerate(impl_list):\n ax.plot((b-a)/(25.*h_steps), errors_list[i_impl], 'o-', label=impl.__name__)\n\n# Draw helpful slope lines to compare\nx_ax = (b-a)/(25.*h_steps)\nax.plot(x_ax, 1.2 * x_ax**2, 'k--')\nax.plot(x_ax, 0.5 * x_ax**2, 'k--')\nax.plot(x_ax, 0.015 * x_ax**4, 'k--') \n\n# Make it readable\nax.set_xlabel(r'$h$')\nax.set_ylabel(r'$e(h)$')\nax.set_title('Order of accuracy')\nax.set_yscale('log')\nax.set_xscale('log')\nax.legend()\n# fig.savefig('ooa.pdf')\n```\n\n### Things to observe\n- What is the order of accuracy in the composite case?\n- How does this compare to our error estimation (through Taylor series expansion) for non-composite and composite cases? \n- Is the error function dependent? (change functions and see). 
If so, why?\n\n## Analysis of the functions above\nSome further analysis is needed to explain the results above.\n\n\n```python\nsp.x = sp.symbols('x')\n# sp_expr = sp.exp(sp.x)*(1-sp.cos(2.*sp.pi*sp.x))\nsp_expr = sp.sqrt(1-sp.x**2)\n```\n\n\n```python\nexpr_list = []\nmax_derivative = 3\nfor i in range(max_derivative):\n new_expr = sp.diff(sp_expr, sp.x, i)\n expr_list.append(sp.lambdify(sp.x, new_expr, 'numpy'))\n```\n\n\n```python\nfig, ax = plt.subplots(1,1, figsize=(10, 10))\nx = np.linspace(0.0, 1.0, 1001)\nfor i_expr, expr in enumerate(expr_list):\n ax.plot(x, expr(x), label=r'{} derivative'.format(i_expr))\nax.legend()\n```\n\n## 2.2 Two-Way Tables and Two Categorical Variables\n\nData science is all about relationships between variables. How do we summarize and visualize the relationship between two categorical variables?\n\nFor example, what can we say about the relationship between gender and survival on the Titanic?\n\n\n```python\nimport pandas as pd\ndata_dir = "http://dlsun.github.io/pods/data/"\ndf_titanic = pd.read_csv(data_dir + "titanic.csv")\n```\n\nWe can summarize each variable individually like we did in the previous lesson.\n\n\n```python\ndf_titanic["gender"].value_counts()\n```\n\n\n```python\ndf_titanic["survived"].value_counts()\n```\n\nBut this does not tell us how gender interacts with survival. To do that, we need to produce a _cross-tabulation_, or "cross-tab" for short. (Statisticians tend to call this a _contingency table_ or a _two-way table_.)\n\n\n```python\npd.crosstab(df_titanic["survived"], df_titanic["gender"])\n```\n\nA cross-tabulation of two categorical variables is a two-dimensional array, with the levels of one variable along the rows and the levels of the other variable along the columns. Each cell in this array contains the number of observations that had a particular combination of levels. So in the Titanic data set, there were 359 females who survived and 1366 males who died. 
From the cross-tabulation, we can see that there were more females who survived than not, while there were more males who died than not. Clearly, gender had a strong influence on survival because of the Titanic's policy of \"women and children first\".\n\nTo get probabilities instead of counts, we specify `normalize=True`.\n\n\n\n```python\njoint_survived_gender = pd.crosstab(df_titanic[\"survived\"], df_titanic[\"gender\"], \n normalize=True)\njoint_survived_gender\n```\n\nNotice that the four probabilities in this table add up to 1.0. Each of these probabilities is called a joint probability and can be notated, for example, as \n\n$$ P(\\text{female}, \\text{died}) = 0.058903.$$\n\nCollectively, these probabilities make up the _joint distribution_ of the variables **survived** and **gender**.\n\n### 2.2.1 Marginal Distributions\n\nIs it possible to recover the distribution of **gender** alone from the joint distribution of **survived** and **gender**? \n\nYes! We simply sum the probabilities for each **gender** over all the possible levels of **survived**.\n\n$$\\begin{align}\nP(\\text{female}) = P(\\text{female}, \\text{died}) + P(\\text{female}, \\text{survived}) &= 0.058903 + 0.162664 = 0.221567 \\\\\nP(\\text{male}) = P(\\text{male}, \\text{died}) + P(\\text{male}, \\text{survived}) &= 0.618940 + 0.159493 = 0.778433\n\\end{align}$$\n\nIn code, this can be achieved by summing the `DataFrame` _over_ one of the dimensions. We can specify which dimension to sum over, using the `axis=` argument to `.sum()`.\n\n- `axis=0` refers to the rows. In the current example, **survived** is the variable along this axis.\n- `axis=1` refers to the columns. In the current example, **gender** is the variable along this axis.\n\nSince we want to sum _over_ the **survived** variable, we specify `.sum(axis=0)`.\n\n\n```python\ngender = joint_survived_gender.sum(axis=0)\ngender\n```\n\nWhen calculated from a joint distribution, the distribution of one variable is called a _marginal distribution_. So the above is the marginal distribution of **gender**. \n\nThe name \"marginal distribution\" comes from the fact that it is customary to write these totals in the _margins_ of the table. 
In fact `pd.crosstab()` has an argument `margins=` that automatically adds these margins to the cross-tabulation.\n\n\n```python\npd.crosstab(df_titanic[\"survived\"], df_titanic[\"gender\"], \n normalize=True, margins=True)\n```\n\nWhile the margins are useful for display purposes, they actually make computations more difficult, since it is easy to mix up which numbers correspond to joint probabilities and which ones correspond to marginal probabilities.\n\nLikewise, to obtain the marginal distribution of **survived**, we sum over the possible levels of **gender** (which is the variable along `axis=1`).\n\n\n```python\nsurvived = joint_survived_gender.sum(axis=1)\nsurvived\n```\n\nWe can check this answer by calculating the distribution of **survived** directly from the original data, using the techniques from the previous lesson.\n\n\n```python\ndf_titanic[\"survived\"].value_counts(normalize=True)\n```\n\n### 2.2.2 Conditional Distributions\n\nLet's take another look at the joint distribution of **survived** and **gender**.\n\n\n```python\njoint_survived_gender\n```\n\nFrom the joint distribution, it is tempting to conclude that females and males did not differ too much in their survival rates, since \n\n$$ P(\\text{female}, \\text{survived}) = 0.162664 $$\n\nis not too different from\n\n$$ P(\\text{male}, \\text{survived}) = 0.159493. $$\n\nThis is because there were 359 women and 352 men who survived, out of 2207 passengers.\n\nBut this is the wrong comparison. The joint probabilities are affected by the baseline gender probabilities, and over three-quarters of the people aboard the Titanic were men. $P(\\text{male}, \\text{survived})$ and $ P(\\text{female}, \\text{survived})$ should not even be close if men were just as likely to survive as women, simply because of the sheer number of men aboard.\n\nA better comparison is between the conditional probabilities. We ought to compare \n\n$$ P(\\text{survived} | \\text{female}) $$\n\nto \n\n$$ P(\\text{survived} | \\text{male}). $$\n\nTo calculate each conditional probability, we simply divide the joint probability by the marginal probability. That is,\n\n$$\\begin{align}\nP(\\text{survived} | \\text{female}) = \\frac{P(\\text{female}, \\text{survived})}{P(\\text{female})} &= \\frac{0.162664}{0.221568} = .7341 \\\\\nP(\\text{survived} | \\text{male}) = \\frac{P(\\text{male}, \\text{survived})}{P(\\text{male})} &= \\frac{0.159493}{0.778432} = .2049\n\\end{align}$$\n\nThe conditional probabilities expose the stark difference in survival rates. One way to think about conditional probabilities is that they _adjust_ for the baseline gender probabilities. By dividing by $P(\\text{male})$ and $P(\\text{female})$, we adjust for the fact that there were more men and fewer women on the Titanic, thus enabling an apples-to-apples comparison.\n\nIn code, this can be achieved by dividing the joint distribution by the marginal distribution (of **gender**). However, we have to be careful:\n\n- The joint distribution is a two-dimensional array. It is stored as a `DataFrame`.\n- The marginal distribution (of **gender**) is a one-dimensional array. It is stored as a `Series`.\n\nHow is it possible to divide a two-dimensional object by a one-dimensional object? Only if we _broadcast_ the one-dimensional object over the other dimension. 
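In plain NumPy terms (a brief aside, not from the original lesson), this is ordinary elementwise division with broadcasting: dividing a 2-D array by a 1-D array with one entry per column divides every row by that 1-D array, which is essentially what `.divide(..., axis=1)` does for a `DataFrame` and a `Series` (pandas additionally aligns on the labels).

```python
# Sketch: broadcasting a 1-D divisor across the rows of a 2-D array.
import numpy as np

joint = np.array([[1.0, 2.0],
                  [3.0, 4.0]])       # stand-in for a 2-D joint table
marginal = np.array([10.0, 100.0])   # stand-in for a 1-D marginal, one entry per column

print(joint / marginal)
# [[0.1  0.02]
#  [0.3  0.04]]
```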
A toy example is illustrated below.\n\n$$\\begin{align}\n\\begin{bmatrix} 1 & 2 \\\\ 3 & 4 \\\\ 5 & 6 \\end{bmatrix} \\Big/ \\begin{bmatrix} 7 \\\\ 8 \\end{bmatrix} &= \\begin{bmatrix} 1 & 2 \\\\ 3 & 4 \\\\ 5 & 6 \\end{bmatrix} \\Big/ \\begin{bmatrix} 7 & 8 \\\\ 7 & 8 \\\\ 7 & 8 \\end{bmatrix} \\\\\n&= \\begin{bmatrix} 1/7 & 2/8 \\\\ 3/7 & 4/8 \\\\ 5/7 & 6/8 \\end{bmatrix}\n\\end{align}$$\n\nTo do this in `pandas`, we use the `.divide()` method, specifying the dimension on which to align the `Series` with the `DataFrame`. Since **gender** is on `axis=1` of `joint_survived_gender`, we align the `DataFrame` and `Series` along `axis=1`.\n\n\n```python\ncond_survived_gender = joint_survived_gender.divide(gender, axis=1)\n# In this case, joint_survived_gender / gender would also haved worked,\n# but better to play it safe and be explicit about the axis.\n\ncond_survived_gender \n```\n\nEvery probability in this table represents a conditional probability of gender given survival status. So from the table, we can read that \n\n$$ P(\\text{survived} | \\text{female}) = 0.734151. $$\n\nNotice that each row sums to $1.0$---as it must, since given the information that a person was female, there are only two possibilities: she either survived or died.\n\nIn other words, we have a distribution of **survived** for each level of **gender**. We might wish to compare these two distributions. When we call `.plot.bar()` on the `DataFrame`, it will plot the values in each column as a set of bars with its own color.\n\n\n```python\ncond_survived_gender.plot.bar()\n```\n\nA different way to visualize a conditional distribution is to use a stacked bar graph. Here, we want one bar for females and another for males, each one divided in proportion to the survival rates for that gender. First, let's take a look at the desired graph.\n\n\n```python\ncond_survived_gender.T.plot.bar(stacked=True)\n```\n\nNow, let's unpack the code that generated this graphic. Recall that `.plot.bar()` plots each column of a `DataFrame` in a different color. Here we want different colors for each level of **survived**, so we need to swap the rows and columns of `cond_survived_gender`. In other words, we need the _transpose_ of the `DataFrame`, which is accomplished using `.T`.\n\n\n```python\ncond_survived_gender.T\n```\n\nWhen we call `.plot.bar()` on this transposed `DataFrame`, with `stacked=True`, we obtain the stacked bar graph above.\n\n### 2.2.3 Exercises\n\nExercises 1-4 ask you to continue working with the Titanic data set explored in this lesson.\n\n1\\. Filter the data to include passengers only. Calculate the joint distribution between a passenger's class and where they embarked. \n\n2\\. Using the joint distribution that you calculated in Exercise 1, calculate the following:\n\n- the conditional distribution of their class given where they embarked\n- the conditional distribution of where they embarked given their class\n\nUse the conditional distributions that you calculate to answer the following questions:\n\n- What proportion of 3rd class passengers embarked at Southampton?\n- What proportion of Southampton passengers were in 3rd class?\n\n3\\. Make a visualization showing the distribution of a passenger's class, given where they embarked.\n\n4\\. Compare the survival rates of crew members versus passengers. Which group appears to survive at higher rates?\n\n(_Hint:_ You will have to transform the **class** variable to a variable that indicates whether a person was a passenger or a crew member. 
Refer to the previous lesson.)\n\nExercises 5-6 ask you to work with the Florida Death Penalty data set, which is available at `https://dlsun.github.io/pods/data/death_penalty.csv`. This data set contains information about the races of the defendant and the victim, as well as whether a death penalty verdict was rendered, in 674 homicide trials in Florida between 1976-1987.\n\n5\\. Use the joint distribution to summarize the relationship between the defendant's and the victim's races in Florida homicides.\n\n6\\. Does there appear to be a relationship between death penalty verdicts and the defendant's race? If so, in what direction?\n", "meta": {"hexsha": "f128dde6d280bdbcc223f677363705e0db6a12b9", "size": 18689, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "TesterBook/02_Categorical_Data/.ipynb_checkpoints/2.2 Two Categorical Variables and Cross-Tabulations-checkpoint.ipynb", "max_stars_repo_name": "bfkwong/Latex-Book-Parser", "max_stars_repo_head_hexsha": "07ed107b785c7297eb4a41bbf337bc006a14f00c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "TesterBook/02_Categorical_Data/.ipynb_checkpoints/2.2 Two Categorical Variables and Cross-Tabulations-checkpoint.ipynb", "max_issues_repo_name": "bfkwong/Latex-Book-Parser", "max_issues_repo_head_hexsha": "07ed107b785c7297eb4a41bbf337bc006a14f00c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2020-07-06T00:12:10.000Z", "max_issues_repo_issues_event_max_datetime": "2020-07-06T01:53:58.000Z", "max_forks_repo_path": "TesterBook/02_Categorical_Data/2.2 Two Categorical Variables and Cross-Tabulations.ipynb", "max_forks_repo_name": "bfkwong/Latex-Book-Parser", "max_forks_repo_head_hexsha": "07ed107b785c7297eb4a41bbf337bc006a14f00c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.5692567568, "max_line_length": 634, "alphanum_fraction": 0.6024399379, "converted": true, "num_tokens": 2687, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9294403979493139, "lm_q2_score": 0.9111797124237604, "lm_q1q2_score": 0.8468872345184812}} {"text": "# Simple Linear Regression with NumPy\n\nIn school, students are taught to draw lines like the following.\n\n$$ y = 2 x + 1$$\n\nThey're taught to pick two values for $x$ and calculate the corresponding values for $y$ using the equation.\nThen they draw a set of axes, plot the points, and then draw a line extending through the two dots on their axes.\n\n\n```python\n# Import matplotlib.\nimport matplotlib.pyplot as plt\n```\n\n\n```python\n# Draw some axes.\nplt.plot([-1, 10], [0, 0], 'k-')\nplt.plot([0, 0], [-1, 10], 'k-')\n\n# Plot the red, blue and green lines.\nplt.plot([1, 1], [-1, 3], 'b:')\nplt.plot([-1, 1], [3, 3], 'r:')\n\n# Plot the two points (1,3) and (2,5).\nplt.plot([1, 2], [3, 5], 'ko')\n# Join them with an (extending) green lines.\nplt.plot([-1, 10], [-1, 21], 'g-')\n\n# Set some reasonable plot limits.\nplt.xlim([-1, 10])\nplt.ylim([-1, 10])\n\n# Show the plot.\nplt.show()\n```\n\nSimple linear regression is about the opposite problem - what if you have some points and are looking for the equation?\nIt's easy when the points are perfectly on a line already, but usually real-world data has some noise.\nThe data might still look roughly linear, but aren't exactly so.\n\n***\n\n## Example (contrived and simulated)\n\n\n\n#### Scenario\nSuppose you are trying to weigh your suitcase to avoid an airline's extra charges.\nYou don't have a weighing scales, but you do have a spring and some gym-style weights of masses 7KG, 14KG and 21KG.\nYou attach the spring to the wall hook, and mark where the bottom of it hangs.\nYou then hang the 7KG weight on the end and mark where the bottom of the spring is.\nYou repeat this with the 14KG weight and the 21KG weight.\nFinally, you place your case hanging on the spring, and the spring hangs down halfway between the 7KG mark and the 14KG mark.\nIs your case over the 10KG limit set by the airline?\n\n#### Hypothesis\nWhen you look at the marks on the wall, it seems that the 0KG, 7KG, 14KG and 21KG marks are evenly spaced.\nYou wonder if that means your case weighs 10.5KG.\nThat is, you wonder if there is a *linear* relationship between the distance the spring's hook is from its resting position, and the mass on the end of it.\n\n#### Experiment\nYou decide to experiment.\nYou buy some new weights - a 1KG, a 2KG, a 3Kg, all the way up to 20KG.\nYou place them each in turn on the spring and measure the distance the spring moves from the resting position.\nYou tabulate the data and plot them.\n\n#### Analysis\nHere we'll import the Python libraries we need for or investigations below.\n\n\n```python\n# Make matplotlib show interactive plots in the notebook.\n%matplotlib inline\n```\n\n\n```python\n# numpy efficiently deals with numerical multi-dimensional arrays.\nimport numpy as np\n\n# matplotlib is a plotting library, and pyplot is its easy-to-use module.\nimport matplotlib.pyplot as plt\n\n# This just sets the default plot size to be bigger.\nplt.rcParams['figure.figsize'] = (8, 6)\n```\n\nIgnore the next couple of lines where I fake up some data. I'll use the fact that I faked the data to explain some results later. 
Just pretend that w is an array containing the weight values and d are the corresponding distance measurements.\n\n\n```python\nw = np.arange(0.0, 21.0, 1.0)\nd = 5.0 * w + 10.0 + np.random.normal(0.0, 5.0, w.size)\n```\n\n\n```python\n# Let's have a look at w.\nw\n```\n\n\n\n\n array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12.,\n 13., 14., 15., 16., 17., 18., 19., 20.])\n\n\n\n\n```python\n# Let's have a look at d.\nd\n```\n\n\n\n\n array([ 7.8941861 , 14.4990969 , 15.54494165, 34.10112763,\n 28.09931362, 32.40950674, 47.95067306, 50.56800243,\n 53.57449031, 57.26477658, 60.90854081, 66.42632473,\n 65.84573316, 65.8210405 , 73.13032259, 81.26976123,\n 93.94740543, 98.1664509 , 102.50090033, 113.22448452,\n 104.44046614])\n\n\n\nLet's have a look at the data from our experiment.\n\n\n```python\n# Create the plot.\n\nplt.plot(w, d, 'k.')\n\n# Set some properties for the plot.\nplt.xlabel('Weight (KG)')\nplt.ylabel('Distance (CM)')\n\n# Show the plot.\nplt.show()\n```\n\n#### Model\nIt looks like the data might indeed be linear.\nThe points don't exactly fit on a straight line, but they are not far off it.\nWe might put that down to some other factors, such as the air density, or errors, such as in our tape measure.\nThen we can go ahead and see what would be the best line to fit the data. \n\n#### Straight lines\nAll straight lines can be expressed in the form $y = mx + c$.\nThe number $m$ is the slope of the line.\nThe slope is how much $y$ increases by when $x$ is increased by 1.0.\nThe number $c$ is the y-intercept of the line.\nIt's the value of $y$ when $x$ is 0.\n\n#### Fitting the model\nTo fit a straight line to the data, we just must pick values for $m$ and $c$.\nThese are called the parameters of the model, and we want to pick the best values possible for the parameters.\nThat is, the best parameter values *given* the data observed.\nBelow we show various lines plotted over the data, with different values for $m$ and $c$.\n\n\n```python\n# Plot w versus d with black dots.\nplt.plot(w, d, 'k.', label=\"Data\")\n\n# Overlay some lines on the plot.\nx = np.arange(0.0, 21.0, 1.0)\nplt.plot(x, 5.0 * x + 10.0, 'r-', label=r\"$5x + 10$\")\nplt.plot(x, 6.0 * x + 5.0, 'g-', label=r\"$6x + 5$\")\nplt.plot(x, 5.0 * x + 15.0, 'b-', label=r\"$5x + 15$\")\n\n# Add a legend.\nplt.legend()\n\n# Add axis labels.\nplt.xlabel('Weight (KG)')\nplt.ylabel('Distance (CM)')\n\n# Show the plot.\nplt.show()\n```\n\n#### Calculating the cost\nYou can see that each of these lines roughly fits the data.\nWhich one is best, and is there another line that is better than all three?\nIs there a \"best\" line?\n\nIt depends how you define the word best.\nLuckily, everyone seems to have settled on what the best means.\nThe best line is the one that minimises the following calculated value.\n\n$$ \\sum_i (y_i - mx_i - c)^2 $$\n\nHere $(x_i, y_i)$ is the $i^{th}$ point in the data set and $\\sum_i$ means to sum over all points. 
\nThe values of $m$ and $c$ are to be determined.\nWe usually denote the above as $Cost(m, c)$.\n\nWhere does the above calculation come from?\nIt's easy to explain the part in the brackets $(y_i - mx_i - c)$.\nThe corresponding value to $x_i$ in the dataset is $y_i$.\nThese are the measured values.\nThe value $m x_i + c$ is what the model says $y_i$ should have been.\nThe difference between the value that was observed ($y_i$) and the value that the model gives ($m x_i + c$), is $y_i - mx_i - c$.\n\nWhy square that value?\nWell note that the value could be positive or negative, and you sum over all of these values.\nIf we allow the values to be positive or negative, then the positive could cancel the negatives.\nSo, the natural thing to do is to take the absolute value $\\mid y_i - m x_i - c \\mid$.\nWell it turns out that absolute values are a pain to deal with, and instead it was decided to just square the quantity instead, as the square of a number is always positive.\nThere are pros and cons to using the square instead of the absolute value, but the square is used.\nThis is usually called *least squares* fitting.\n\n\n```python\n# Calculate the cost of the lines above for the data above.\ncost = lambda m,c: np.sum([(d[i] - m * w[i] - c)**2 for i in range(w.size)])\n\nprint(\"Cost with m = %5.2f and c = %5.2f: %8.2f\" % (5.0, 10.0, cost(5.0, 10.0)))\nprint(\"Cost with m = %5.2f and c = %5.2f: %8.2f\" % (6.0, 5.0, cost(6.0, 5.0)))\nprint(\"Cost with m = %5.2f and c = %5.2f: %8.2f\" % (5.0, 15.0, cost(5.0, 15.0)))\n```\n\n Cost with m = 5.00 and c = 10.00: 525.70\n Cost with m = 6.00 and c = 5.00: 1809.49\n Cost with m = 5.00 and c = 15.00: 974.82\n\n\n#### Minimising the cost\nWe want to calculate values of $m$ and $c$ that give the lowest value for the cost value above.\nFor our given data set we can plot the cost value/function.\nRecall that the cost is:\n\n$$ Cost(m, c) = \\sum_i (y_i - mx_i - c)^2 $$\n\nThis is a function of two variables, $m$ and $c$, so a plot of it is three dimensional.\nSee the **Advanced** section below for the plot.\n\nIn the case of fitting a two-dimensional line to a few data points, we can easily calculate exactly the best values of $m$ and $c$.\nSome of the details are discussed in the **Advanced** section, as they involve calculus, but the resulting code is straight-forward.\nWe first calculate the mean (average) values of our $x$ values and that of our $y$ values.\nThen we subtract the mean of $x$ from each of the $x$ values, and the mean of $y$ from each of the $y$ values.\nThen we take the *dot product* of the new $x$ values and the new $y$ values and divide it by the dot product of the new $x$ values with themselves.\nThat gives us $m$, and we use $m$ to calculate $c$.\n\nRemember that in our dataset $x$ is called $w$ (for weight) and $y$ is called $d$ (for distance).\nWe calculate $m$ and $c$ below.\n\n\n```python\n# Calculate the best values for m and c.\n\n# First calculate the means (a.k.a. 
averages) of w and d.\nw_avg = np.mean(w)\nd_avg = np.mean(d)\n\n# Subtract means from w and d.\nw_zero = w - w_avg\nd_zero = d - d_avg\n\n# The best m is found by the following calculation.\nm = np.sum(w_zero * d_zero) / np.sum(w_zero * w_zero)\n# Use m from above to calculate the best c.\nc = d_avg - m * w_avg\n\nprint(\"m is %8.6f and c is %6.6f.\" % (m, c))\n```\n\n m is 4.958010 and c is 10.781211.\n\n\nNote that numpy has a function that will perform this calculation for us, called polyfit.\nIt can be used to fit lines in many dimensions.\n\n\n```python\nnp.polyfit(w, d, 1)\n```\n\n\n\n\n array([ 4.95801012, 10.78121051])\n\n\n\n#### Best fit line\nSo, the best values for $m$ and $c$ given our data and using least squares fitting are about $4.95$ for $m$ and about $11.13$ for $c$.\nWe plot this line on top of the data below.\n\n\n```python\n# Plot the best fit line.\nplt.plot(w, d, 'k.', label='Original data')\nplt.plot(w, m * w + c, 'b-', label='Best fit line')\n\n# Add axis labels and a legend.\nplt.xlabel('Weight (KG)')\nplt.ylabel('Distance (CM)')\nplt.legend()\n\n# Show the plot.\nplt.show()\n```\n\nNote that the $Cost$ of the best $m$ and best $c$ is not zero in this case.\n\n\n```python\nprint(\"Cost with m = %5.2f and c = %5.2f: %8.2f\" % (m, c, cost(m, c)))\n```\n\n Cost with m = 4.96 and c = 10.78: 521.60\n\n\n### Summary\nIn this notebook we:\n1. Investigated the data.\n2. Picked a model.\n3. Picked a cost function.\n4. Estimated the model parameter values that minimised our cost function.\n\n### Advanced\nIn the following sections we cover some of the more advanced concepts involved in fitting the line.\n\n#### Simulating data\nEarlier in the notebook we glossed over something important: we didn't actually do the weighing and measuring - we faked the data.\nA better term for this is *simulation*, which is an important tool in research, especially when testing methods such as simple linear regression.\n\nWe ran the following two commands to do this:\n\n```python\nw = np.arange(0.0, 21.0, 1.0)\nd = 5.0 * w + 10.0 + np.random.normal(0.0, 5.0, w.size)\n```\n\nThe first command creates a numpy array containing all values between 1.0 and 21.0 (including 1.0 but not including 21.0) in steps of 1.0.\n\n\n```python\n np.arange(0.0, 21.0, 1.0)\n```\n\n\n\n\n array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12.,\n 13., 14., 15., 16., 17., 18., 19., 20.])\n\n\n\nThe second command is more complex.\nFirst it takes the values in the `w` array, multiplies each by 5.0 and then adds 10.0.\n\n\n```python\n5.0 * w + 10.0\n```\n\n\n\n\n array([ 10., 15., 20., 25., 30., 35., 40., 45., 50., 55., 60.,\n 65., 70., 75., 80., 85., 90., 95., 100., 105., 110.])\n\n\n\nIt then adds an array of the same length containing random values.\nThe values are taken from what is called the normal distribution with mean 0.0 and standard deviation 5.0.\n\n\n```python\nnp.random.normal(0.0, 5.0, w.size)\n```\n\n\n\n\n array([ -1.06010546, 6.11989391, 6.50528315, -0.64401265,\n 5.56020188, -9.21568895, 1.73234209, -1.60719357,\n -10.31804015, -1.65038869, -0.70955811, 4.51754854,\n 4.30969704, -4.40688318, 0.70951891, 1.63830816,\n -2.28882717, 4.62862745, 1.30286098, 5.19446648,\n -1.77633161])\n\n\n\nThe normal distribution follows a bell shaped curve.\nThe curve is centred on the mean (0.0 in this case) and its general width is determined by the standard deviation (5.0 in this case).\n\n\n```python\n# Plot the normal distrution.\nnormpdf = lambda mu, s, x: (1.0 / (2.0 * np.pi * s**2)) * np.exp(-((x - 
mu)**2)/(2 * s**2))\n\nx = np.linspace(-20.0, 20.0, 100)\ny = normpdf(0.0, 5.0, x)\nplt.plot(x, y)\n\nplt.show()\n```\n\nThe idea here is to add a little bit of randomness to the measurements of the distance.\nThe random values are entered around 0.0, with a greater than 99% chance they're within the range -15.0 to 15.0.\nThe normal distribution is used because of the [Central Limit Theorem](https://en.wikipedia.org/wiki/Central_limit_theorem) which basically states that when a bunch of random effects happen together the outcome looks roughly like the normal distribution. (Don't quote me on that!)\n\n#### Plotting the cost function\nWe can plot the cost function for a given set of data points.\nRecall that the cost function involves two variables: $m$ and $c$, and that it looks like this:\n\n$$ Cost(m,c) = \\sum_i (y_i - mx_i - c)^2 $$\n\nTo plot a function of two variables we need a 3D plot.\nIt can be difficult to get the viewing angle right in 3D plots, but below you can just about make out that there is a low point on the graph around the $(m, c) = (\\approx 5.0, \\approx 10.0)$ point. \n\n\n```python\n# This code is a little bit involved - don't worry about it.\n# Just look at the plot below.\n\nfrom mpl_toolkits.mplot3d import Axes3D\n\n# Ask pyplot a 3D set of axes.\nax = plt.figure().gca(projection='3d')\n\n# Make data.\nmvals = np.linspace(4.5, 5.5, 100)\ncvals = np.linspace(0.0, 20.0, 100)\n\n# Fill the grid.\nmvals, cvals = np.meshgrid(mvals, cvals)\n\n# Flatten the meshes for convenience.\nmflat = np.ravel(mvals)\ncflat = np.ravel(cvals)\n\n# Calculate the cost of each point on the grid.\nC = [np.sum([(d[i] - m * w[i] - c)**2 for i in range(w.size)]) for m, c in zip(mflat, cflat)]\nC = np.array(C).reshape(mvals.shape)\n\n# Plot the surface.\nsurf = ax.plot_surface(mvals, cvals, C)\n\n# Set the axis labels.\nax.set_xlabel('$m$', fontsize=16)\nax.set_ylabel('$c$', fontsize=16)\nax.set_zlabel('$Cost$', fontsize=16)\n\n# Show the plot.\nplt.show()\n```\n\n#### Coefficient of determination\nEarlier we used a cost function to determine the best line to fit the data.\nUsually the data do not perfectly fit on the best fit line, and so the cost is greater than 0.\nA quantity closely related to the cost is the *coefficient of determination*, also known as the *R-squared* value.\nThe purpose of the R-squared value is to measure how much of the variance in $y$ is determined by $x$.\n\nFor instance, in our example the main thing that affects the distance the spring is hanging down is the weight on the end.\nIt's not the only thing that affects it though.\nThe room temperature and density of the air at the time of measurment probably affect it a little.\nThe age of the spring, and how many times it has been stretched previously probably also have a small affect.\nThere are probably lots of unknown factors affecting the measurment.\n\nThe R-squared value estimates how much of the changes in the $y$ value is due to the changes in the $x$ value compared to all of the other factors affecting the $y$ value.\nIt is calculated as follows:\n\n$$ R^2 = 1 - \\frac{\\sum_i (y_i - m x_i - c)^2}{\\sum_i (y_i - \\bar{y})^2} $$\n\nNote that sometimes the [*Pearson correlation coefficient*](https://en.wikipedia.org/wiki/Pearson_correlation_coefficient) is used instead of the R-squared value.\nYou can just square the Pearson coefficient to get the R-squred value.\n\n\n```python\n# Calculate the R-squared value for our data set.\nrsq = 1.0 - (np.sum((d - m * w - c)**2)/np.sum((d - d_avg)**2))\n\nprint(\"The 
R-squared value is %6.4f\" % rsq)\n```\n\n The R-squared value is 0.9732\n\n\n\n```python\n# The same value using numpy.\nnp.corrcoef(w, d)[0][1]**2\n```\n\n\n\n\n 0.9731819489090523\n\n\n\n#### The minimisation calculations\nEarlier we used the following calculation to calculate $m$ and $c$ for the line of best fit.\nThe code was:\n\n```python\nw_zero = w - np.mean(w)\nd_zero = d - np.mean(d)\n\nm = np.sum(w_zero * d_zero) / np.sum(w_zero * w_zero)\nc = np.mean(d) - m * np.mean(w)\n```\n\nIn mathematical notation we write this as:\n\n$$ m = \\frac{\\sum_i (x_i - \\bar{x}) (y_i - \\bar{y})}{\\sum_i (x_i - \\bar{x})^2} \\qquad \\textrm{and} \\qquad c = \\bar{y} - m \\bar{x} $$\n\nwhere $\\bar{x}$ is the mean of $x$ and $\\bar{y}$ that of $y$.\n\nWhere did these equations come from?\nThey were derived using calculus.\nWe'll give a brief overview of it here, but feel free to gloss over this section if it's not for you.\nIf you can understand the first part, where we calculate the partial derivatives, then great!\n\nThe calculations look complex, but if you know basic differentiation, including the chain rule, you can easily derive them.\nFirst, we differentiate the cost function with respect to $m$ while treating $c$ as a constant, called a partial derivative.\nWe write this as $\\frac{\\partial m}{ \\partial Cost}$, using $\\delta$ as opposed to $d$ to signify that we are treating the other variable as a constant.\nWe then do the same with respect to $c$ while treating $m$ as a constant.\nWe set both equal to zero, and then solve them as two simultaneous equations in two variables.\n\n###### Calculate the partial derivatives\n$$\n\\begin{align}\nCost(m, c) &= \\sum_i (y_i - mx_i - c)^2 \\\\[1cm]\n\\frac{\\partial Cost}{\\partial m} &= \\sum 2(y_i - m x_i -c)(-x_i) \\\\\n &= -2 \\sum x_i (y_i - m x_i -c) \\\\[0.5cm]\n\\frac{\\partial Cost}{\\partial c} & = \\sum 2(y_i - m x_i -c)(-1) \\\\\n & = -2 \\sum (y_i - m x_i -c) \\\\\n\\end{align}\n$$\n\n###### Set to zero\n$$\n\\begin{align}\n& \\frac{\\partial Cost}{\\partial m} = 0 \\\\[0.2cm]\n& \\Rightarrow -2 \\sum x_i (y_i - m x_i -c) = 0 \\\\\n& \\Rightarrow \\sum (x_i y_i - m x_i x_i - x_i c) = 0 \\\\\n& \\Rightarrow \\sum x_i y_i - \\sum_i m x_i x_i - \\sum x_i c = 0 \\\\\n& \\Rightarrow m \\sum x_i x_i = \\sum x_i y_i - c \\sum x_i \\\\[0.2cm]\n& \\Rightarrow m = \\frac{\\sum x_i y_i - c \\sum x_i}{\\sum x_i x_i} \\\\[0.5cm]\n& \\frac{\\partial Cost}{\\partial c} = 0 \\\\[0.2cm]\n& \\Rightarrow -2 \\sum (y_i - m x_i - c) = 0 \\\\\n& \\Rightarrow \\sum y_i - \\sum_i m x_i - \\sum c = 0 \\\\\n& \\Rightarrow \\sum y_i - m \\sum_i x_i = c \\sum 1 \\\\\n& \\Rightarrow c = \\frac{\\sum y_i - m \\sum x_i}{\\sum 1} \\\\\n& \\Rightarrow c = \\frac{\\sum y_i}{\\sum 1} - m \\frac{\\sum x_i}{\\sum 1} \\\\[0.2cm]\n& \\Rightarrow c = \\bar{y} - m \\bar{x} \\\\\n\\end{align}\n$$\n\n###### Solve the simultaneous equations\nHere we let $n$ be the length of $x$, which is also the length of $y$.\n\n$$\n\\begin{align}\n& m = \\frac{\\sum_i x_i y_i - c \\sum_i x_i}{\\sum_i x_i x_i} \\\\[0.2cm]\n& \\Rightarrow m = \\frac{\\sum x_i y_i - (\\bar{y} - m \\bar{x}) \\sum x_i}{\\sum x_i x_i} \\\\\n& \\Rightarrow m \\sum x_i x_i = \\sum x_i y_i - \\bar{y} \\sum x_i + m \\bar{x} \\sum x_i \\\\\n& \\Rightarrow m \\sum x_i x_i - m \\bar{x} \\sum x_i = \\sum x_i y_i - \\bar{y} \\sum x_i \\\\[0.3cm]\n& \\Rightarrow m = \\frac{\\sum x_i y_i - \\bar{y} \\sum x_i}{\\sum x_i x_i - \\bar{x} \\sum x_i} \\\\[0.2cm]\n& \\Rightarrow m = \\frac{\\sum (x_i y_i) - n \\bar{y} 
\\bar{x}}{\\sum (x_i x_i) - n \\bar{x} \\bar{x}} \\\\\n& \\Rightarrow m = \\frac{\\sum (x_i y_i) - n \\bar{y} \\bar{x} - n \\bar{y} \\bar{x} + n \\bar{y} \\bar{x}}{\\sum (x_i x_i) - n \\bar{x} \\bar{x} - n \\bar{x} \\bar{x} + n \\bar{x} \\bar{x}} \\\\\n& \\Rightarrow m = \\frac{\\sum (x_i y_i) - \\sum y_i \\bar{x} - \\sum \\bar{y} x_i + n \\bar{y} \\bar{x}}{\\sum (x_i x_i) - \\sum x_i \\bar{x} - \\sum \\bar{x} x_i + n \\bar{x} \\bar{x}} \\\\\n& \\Rightarrow m = \\frac{\\sum_i (x_i - \\bar{x}) (y_i - \\bar{y})}{\\sum_i (x_i - \\bar{x})^2} \\\\\n\\end{align}\n$$\n\n#### End\n", "meta": {"hexsha": "0c5252eb4e34e80c9b3737303dce01618c695145", "size": 270975, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "simple-linear-regression.ipynb", "max_stars_repo_name": "sean-meade/jupyter-teaching-notebooks", "max_stars_repo_head_hexsha": "2f2a2bf6925fba479d1a73481122faebd201bdba", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "simple-linear-regression.ipynb", "max_issues_repo_name": "sean-meade/jupyter-teaching-notebooks", "max_issues_repo_head_hexsha": "2f2a2bf6925fba479d1a73481122faebd201bdba", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "simple-linear-regression.ipynb", "max_forks_repo_name": "sean-meade/jupyter-teaching-notebooks", "max_forks_repo_head_hexsha": "2f2a2bf6925fba479d1a73481122faebd201bdba", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 277.6383196721, "max_line_length": 115836, "alphanum_fraction": 0.9171768613, "converted": true, "num_tokens": 6260, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9196425289753969, "lm_q2_score": 0.920789678608055, "lm_q1q2_score": 0.8467973486895547}} {"text": "# Extracting Information from Audio Signals \n\n## Measuring amplitude (Session 1.9) - Kadenze \n\n### George Tzanetakis, University of Victoria \n\nIn this notebook we will explore different ways of measuring the amplitude of a sinusoidal signal. The use of the inner product to estimate the amplitude of a sinusoids in the presence of noise and other sinusoids will also be covered. As usual we start by defining a sinusoid generation function. \n\n\n\n```python\n%matplotlib inline \nimport matplotlib.pyplot as plt\nimport numpy as np\nimport IPython.display as ipd\n\n# generate a discrete time sinusoidal signal with a specified frequency and duration\ndef sinusoid(freq=440.0, dur=1.0, srate=44100.0, amp=1.0, phase = 0.0): \n t = np.linspace(0,dur,int(srate*dur))\n data = amp * np.sin(2*np.pi*freq *t+phase)\n return data\n```\n\nOne way of measuring the amplitude of an audio signal is by finding the maximum value. As long the array of samples contains a few cycles of a sinusoidal signal this estimation works well. \n\n\n```python\ndef peak_amplitude(data): \n return np.max(data)\n```\n\nLet's check it out: \n\n\n```python\nfreq = 550 \ndata = sinusoid(freq, 0.5, amp =4.0)\nprint('Peak amplitude = %2.2f ' % peak_amplitude(data))\n```\n\n Peak amplitude = 4.00 \n\n\nNow let's define a function that returns the Root of the Mean Squarred (RMS) amplitude. 
For an array containing a few cycles of a sinusoid signal we can estimate the RMS amplitude as follows: \n\n\n\n```python\ndef rms_amplitude(data): \n rms_sum = np.sum(np.multiply(data,data))\n rms_sum /= len(data)\n return np.sqrt(rms_sum) * np.sqrt(2.0)\n```\n\nLet's check out that this method of estimation also works: \n\n\n```python\nfreq = 550 \ndata = sinusoid(freq, 0.5, amp =8.0)\nprint('Rms amplitude = %2.2f' % rms_amplitude(data))\n```\n\n Rms amplitude = 8.00\n\n\nNow let's look at estimating the amplitude based on taking the dot product of two sinusoids. \nUnlike the peak and RMS methods of estimating amplitude this method requires knowledge of the \nfrequency (and possibly phase) of the underlying sinusoid. However, it has the advantage that it is much more robust when there is interferring noise or other sinusoidal signals with other frequencies. \n\n\n\n```python\ndef dot_amplitude(data1, data2): \n dot_product = np.dot(data1, data2)\n return 2 * (dot_product / len(data1))\n```\n\nFirst lets confirm that this amplitude estimation works for a single sinusoid \n\n\n```python\ndata = sinusoid(300, 0.5, amp =4.0)\nbasis = sinusoid(300, 0.5, amp = 1)\nprint('Dot product amplitude = %2.2f' % dot_amplitude(data, basis))\nplt.figure() \nplt.plot(data[1:1000])\nplt.plot(basis[1:1000])\n```\n\nNow lets add some noise to our signal. Notice that the dot-amplitude estimation works reliably, the RMS still does ok but the peak amplitude gets really affected by the added noise. Notice that the dot product amplitude estimation requires knowledge of the frequency to create the appropriate basis signal \n\n\n```python\nnoise = np.random.normal(0, 1.0, len(data))\nmix = data + noise \nplt.figure() \nplt.plot(data[1:1000])\nplt.plot(noise[1:1000])\nplt.plot(mix[1:1000])\nplt.plot(basis[1:1000])\nprint('Dot product amplitude = %2.2f' % dot_amplitude(mix, basis))\nprint('Peak amplitude = %2.2f' % peak_amplitude(mix))\nprint('RMS amplitude = %2.2f' % rms_amplitude(mix))\n```\n\n\n```python\ndata_other = sinusoid(500, 0.5, amp = 3.0)\nmix = data + data_other \nplt.figure()\n#plt.plot(data_other[1:1000])\n#plt.plot(data[1:1000])\nplt.plot(mix[1:1000])\nplt.plot(basis[1:1000])\nprint('Dot product amplitude = %2.2f' % dot_amplitude(mix, basis))\nprint('Peak amplitude = %2.2f' % peak_amplitude(mix))\nprint('RMS amplitude = %2.2f' % rms_amplitude(mix))\n```\n\nTo summarize, if we know the frequency of the sinusoid we are interested in, we can use the inner product with a sinusoid of the same frequency and phase as a robust way to estimate the amplitude in the presence of interferring noise and/or sinusoidal signals of different frequencies. If we don't know the phase we can use an iterative approach of trying every possible phase and selecting the one that gives the highest amplitude estimate - the brute force approach we talked about in a previous notebook. \n\nHowever, there is a simpler approach to estimating both the amplitude and the phase of a sinusoidal signal of known frequency. \n\nIt is based on the following identity: \n\\begin{equation} a \\sin(x) + b \\cos(x) = R \\sin (x + \\theta) \n\\end{equation}\nwhere \n$ R = \\sqrt{(a^2 + b^2)} \\;\\; \\text{and} \\;\\; \\theta = \\tan^{-1} \\frac{b}{a} $\n\nSo basically we can represent a sinusoidal signal of a particular amplitude and phase as a weighted sum (with appropriate weights $ a \\;\\; \\text{and}\\;\\; b$ of a sine signal and a cosine signal. 
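This follows from the angle-addition formula: expanding the right-hand side of the identity gives\n\n$$\nR \\sin(x + \\theta) = R \\cos(\\theta) \\sin(x) + R \\sin(\\theta) \\cos(x),\n$$\n\nso matching coefficients yields $a = R \\cos(\\theta)$ and $b = R \\sin(\\theta)$, from which $a^2 + b^2 = R^2$ and $\\tan(\\theta) = b/a$.\n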
So to estimate the amplitude and phase of a sinusoid of known frequency we can take the inner product with a pair of sine and cosine signals of the same frequncy. Let's see how this would work. We will see later that these pairs of sines and cosine signals are what are called basis functions of the Discrete Fourier Transform. \n\n\n```python\nsrate = 8000\namplitude = 3.0 \nk = 1000 \nphase = k * (2 * np.pi / srate)\nprint('Original amplitude = %2.2f' % amplitude)\nprint('Original phase = %2.2f' % phase)\n\ndata = sinusoid(300, 0.5, amp =amplitude, phase = phase)\nplt.plot(data[1:1000])\nbasis_sin = sinusoid(300, 0.5, amp = 1)\nbasis_cos = sinusoid(300, 0.5, amp = 1, phase = np.pi/2)\n\na = dot_amplitude(data, basis_sin)\nb = dot_amplitude(data, basis_cos)\nestimated_phase = np.arctan(b/a)\nestimated_magnitude = np.sqrt(a*a+b*b)\nprint('Estimated Magnitude = %2.2f' % estimated_magnitude)\nprint('Estimated Phase = %2.2f' % estimated_phase)\n\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "d5388e3e11af7625457d90f580ede4580157c601", "size": 144311, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "course1/session1/kadenze_mir_c1_s1_9_measuring_amplitude.ipynb", "max_stars_repo_name": "Achilleasein/mir_program_kadenze", "max_stars_repo_head_hexsha": "adc204f82dff565fe615e20681b84c94c2cff10d", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 19, "max_stars_repo_stars_event_min_datetime": "2021-03-16T00:00:29.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-01T05:03:45.000Z", "max_issues_repo_path": "course1/session1/kadenze_mir_c1_s1_9_measuring_amplitude.ipynb", "max_issues_repo_name": "femiogunbode/mir_program_kadenze", "max_issues_repo_head_hexsha": "7c3087acf1623b3b8d9742f1d50cd5dd53135020", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "course1/session1/kadenze_mir_c1_s1_9_measuring_amplitude.ipynb", "max_forks_repo_name": "femiogunbode/mir_program_kadenze", "max_forks_repo_head_hexsha": "7c3087acf1623b3b8d9742f1d50cd5dd53135020", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 9, "max_forks_repo_forks_event_min_datetime": "2021-03-16T03:07:45.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-12T04:29:03.000Z", "avg_line_length": 365.3443037975, "max_line_length": 39396, "alphanum_fraction": 0.9375099611, "converted": true, "num_tokens": 1570, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9511422241476944, "lm_q2_score": 0.8902942363098473, "lm_q1q2_score": 0.8467964400696212}} {"text": "# Arbitrary precision\nTypically numerical optimization is performed using binary 64 bit IEEE 754 floating point numbers (becuase modern processors offer good performance/precision characteristics for this data type). By using a math library which supports arbitrary precision we can achive higher precision (larger number of significant digits). By using ``pyneqsys`` we only need to write our equations once. We will be using [mpmath](http://www.mpmath.org) as our backend library for arbitrary precision. 
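As a quick illustration of what arbitrary precision means in practice (a minimal sketch that only assumes ``mpmath`` is installed), we can request 50 significant digits directly:\n\n\n```python\nimport mpmath\n\nmpmath.mp.dps = 50     # number of significant decimal digits to work with\nprint(mpmath.sqrt(2))  # prints sqrt(2) to 50 significant digits\n```\n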
\n\n\n```python\nimport sympy as sp\nfrom pyneqsys.symbolic import SymbolicSys\nsp.init_printing()\n```\n\n\n```python\ndef f(x):\n return [x[0]**2 + x[1],\n 5*x[0]**2 - 3*x[0] + 2*x[1] - 3]\n\nneqsys = SymbolicSys.from_callback(f, 2)\nneqsys.exprs\n```\n\nLet us now find the roots for this system numerically, if we read the documentation of that library we will see that they recommend that the starting guess is obtained using a conventional solver.\n\nSo we will do just that: we will first solve the problem using SciPy's default solver and then we will perform a \"refinement\" by solving it using ``mpmath``'s solver\n\n\n```python\nx, sol = neqsys.solve([100, 100])\nsol, f(sol['x'])\n```\n\n\n```python\nmp_neqsys = SymbolicSys.from_callback(f, 2, module='mpmath')\nmp_x, mp_sol = mp_neqsys.solve(x, solver='mpmath', dps=50)\nmp_sol, f(mp_sol['x'])\n```\n\nSo that gave us, as requested, close to 50 significant digits\u2012nifty!\n", "meta": {"hexsha": "28782980d0347e29ae33907370259e0119cf2221", "size": 2655, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "examples/multiprecision.ipynb", "max_stars_repo_name": "bjodah/pyneqsys", "max_stars_repo_head_hexsha": "1e8b63ee6e820d33a87b95c057655df03e1e4fc6", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 30, "max_stars_repo_stars_event_min_datetime": "2015-11-12T09:18:36.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-16T23:25:03.000Z", "max_issues_repo_path": "examples/multiprecision.ipynb", "max_issues_repo_name": "bjodah/pyneqsys", "max_issues_repo_head_hexsha": "1e8b63ee6e820d33a87b95c057655df03e1e4fc6", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": 9, "max_issues_repo_issues_event_min_datetime": "2016-01-01T22:37:28.000Z", "max_issues_repo_issues_event_max_datetime": "2018-08-15T11:17:48.000Z", "max_forks_repo_path": "examples/multiprecision.ipynb", "max_forks_repo_name": "bjodah/pyneqsys", "max_forks_repo_head_hexsha": "1e8b63ee6e820d33a87b95c057655df03e1e4fc6", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2018-01-17T02:53:21.000Z", "max_forks_repo_forks_event_max_datetime": "2021-02-02T22:52:56.000Z", "avg_line_length": 27.65625, "max_line_length": 491, "alphanum_fraction": 0.5962335217, "converted": true, "num_tokens": 378, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.951142225532629, "lm_q2_score": 0.8902942144788076, "lm_q1q2_score": 0.8467964205381968}} {"text": "# Introduction\n\nIn this section we learn to do the following:\n\n* Import SymPy and set up pretty printing\n* Use mathematical operations like `sqrt` and `sin`\n* Make SymPy Symbols\n* Take derivatives of expressions\n* Simplify expressions\n\n## Preamble\n\nJust like NumPy and Pandas replace functions like `sin`, `cos`, `exp`, and `log` to powerful numeric implementations, SymPy replaces `sin`, `cos`, `exp` and `log` with powerful mathematical implementations.\n\n\n```python\nfrom sympy import *\ninit_printing() # Set up fancy printing\n```\n\n\n```python\nimport math\nmath.sqrt(2)\n```\n\n\n```python\nsqrt(2) # This `sqrt` comes from SymPy\n```\n\n\n```python\ncos(0)\n```\n\n### Exercise\n\nUse the function `acos` on `-1` to find when cosine equals `-1`. Try this same function with the math library. 
Do you get the same result?\n\n\n```python\n# Call acos on -1 to find where on the circle the x coordinate equals -1\n\n\n```\n\n\n```python\n# Call `math.acos` on -1 to find the same result using the builtin math module. \n# Is the result the same? \n# What does `numpy.arccos` give you?\n\n```\n\n## Symbols\n\nJust like the NumPy `ndarray` or the Pandas `DataFrame`, SymPy has `Symbol`, which represents a mathematical variable.\n\nWe create symbols using the function `symbols`. Operations on these symbols don't do numeric work like with NumPy or Pandas, instead they build up mathematical expressions.\n\n\n```python\nx, y, z = symbols('x,y,z')\nalpha, beta, gamma = symbols('alpha,beta,gamma')\n```\n\n\n```python\nx + 1\n```\n\n\n```python\nlog(alpha**beta) + gamma\n```\n\n\n```python\nsin(x)**2 + cos(x)**2\n```\n\n### Exercise\n\nUse `symbols` to create two variables, `mu` and `sigma`. \n\n\n```python\n?, ? = symbols('?')\n```\n\n### Exercise\n\nUse `exp`, `sqrt` and Python's arithmetic operators like `+, -, *, **` to create the standard bell curve with SymPy objects\n\n$$ e^{\\frac{(x - \\mu)^2}{ \\sigma^2}} $$\n\n\n```python\nexp(?)\n```\n\n## Derivatives\n\nOne of the most commonly requested operations in SymPy is the derivative. To take the derivative of an expression use the `diff` method\n\n\n```python\n(x**2).diff(x)\n```\n\n\n```python\nsin(x).diff(x)\n```\n\n\n```python\n(x**2 + x*y + y**2).diff(x)\n```\n\n\n```python\ndiff(x**2 + x*y + y**2, y) # diff is also available as a function\n```\n\n### Exercise\n\nIn the last section you made a normal distribution\n\n\n```python\nmu, sigma = symbols('mu,sigma')\n```\n\n\n```python\nbell = exp((x - mu)**2 / sigma**2)\nbell\n```\n\nTake the derivative of this expression with respect to $x$\n\n\n```python\n?.diff(?)\n```\n\n### Exercise\n\nThere are three symbols in that expression. We normally are interested in the derivative with respect to `x`, but we could just as easily ask for the derivative with respect to `sigma`. Try this now\n\n\n```python\n# Derivative of bell curve with respect to sigma\n\n\n```\n\n### Exercise\n\nThe second derivative of an expression is just the derivative of the derivative. Chain `.diff( )` calls to find the second and third derivatives of your expression.\n\n\n```python\n# Find the second and third derivative of `bell`\n\n\n```\n\n## Functions\n\nSymPy has a number of useful routines to manipulate expressions. The most commonly used function is `simplify`.\n\n\n```python\nexpr = sin(x)**2 + cos(x)**2\nexpr\n```\n\n\n```python\nsimplify(expr)\n```\n\n### Exercise\n\nIn the last section you found the third derivative of the bell curve\n\n\n```python\nbell.diff(x).diff(x).diff(x)\n```\n\nYou might notice that this expression has lots of shared structure. We can factor out some terms to simplify this expression. \n\nCall `simplify` on this expression and observe the result.\n\n\n```python\n# Call simplify on the third derivative of the bell expression\n\n```\n\n## Sympify\n\nThe `sympify` function transforms Python objects (ints, floats, strings) into SymPy objects (Integers, Reals, Symbols). \n\n*note the difference between `sympify` and `simplify`. 
These are not the same functions.*\n\n\n```python\nsympify('r * cos(theta)^2')\n```\n\nIt's useful whenever you interact with the real world, or for quickly copy-paste an expression from an external source.\n", "meta": {"hexsha": "01fdad76c13f0cfa031cc581446b8ef60954b82d", "size": 9776, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "tutorial_exercises/01-Symbols-Derivatives-Functions.ipynb", "max_stars_repo_name": "gvvynplaine/scipy-2016-tutorial", "max_stars_repo_head_hexsha": "aa417427a1de2dcab2a9640b631b809d525d7929", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 53, "max_stars_repo_stars_event_min_datetime": "2016-06-21T21:11:02.000Z", "max_stars_repo_stars_event_max_datetime": "2021-02-04T07:51:03.000Z", "max_issues_repo_path": "tutorial_exercises/01-Symbols-Derivatives-Functions.ipynb", "max_issues_repo_name": "gvvynplaine/scipy-2016-tutorial", "max_issues_repo_head_hexsha": "aa417427a1de2dcab2a9640b631b809d525d7929", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 11, "max_issues_repo_issues_event_min_datetime": "2016-07-02T20:24:06.000Z", "max_issues_repo_issues_event_max_datetime": "2016-07-11T11:31:44.000Z", "max_forks_repo_path": "tutorial_exercises/01-Symbols-Derivatives-Functions.ipynb", "max_forks_repo_name": "gvvynplaine/scipy-2016-tutorial", "max_forks_repo_head_hexsha": "aa417427a1de2dcab2a9640b631b809d525d7929", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 36, "max_forks_repo_forks_event_min_datetime": "2016-06-25T09:04:24.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-09T06:46:01.000Z", "avg_line_length": 20.4091858038, "max_line_length": 212, "alphanum_fraction": 0.5286415712, "converted": true, "num_tokens": 1037, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9173026663679976, "lm_q2_score": 0.9230391637747005, "lm_q1q2_score": 0.8467062860926197}} {"text": "```python\nfrom sympy import symbols, laplace_transform, \\\n DiracDelta, Heaviside, exp, sin, cos\n\ns, t, a, n, b = symbols('s t a n b')\n\nlaplace_transform(DiracDelta(t), t, s)\n```\n\n\n\n\n (1 - Heaviside(0), -oo, True)\n\n\n\n\n```python\nlaplace_transform(Heaviside(t), t, s, noconds=True)\n```\n\n\n\n\n$\\displaystyle \\frac{1}{s}$\n\n\n\n\n```python\nlaplace_transform(1, t, s, noconds=True)\n```\n\n\n\n\n$\\displaystyle \\frac{1}{s}$\n\n\n\n\n```python\nlaplace_transform(t, t, s, noconds=True)\n```\n\n\n\n\n$\\displaystyle \\frac{1}{s^{2}}$\n\n\n\n\n```python\nlaplace_transform(t**n, t, s, noconds=True)\n```\n\n\n\n\n$\\displaystyle \\frac{s^{- n} \\Gamma\\left(n + 1\\right)}{s}$\n\n\n\n\n```python\nlaplace_transform(exp(-a*t), t, s, noconds=True)\n```\n\n\n\n\n$\\displaystyle \\frac{1}{a + s}$\n\n\n\n\n```python\nlaplace_transform(sin(b*t), t, s, noconds=True)\n```\n\n\n\n\n$\\displaystyle \\frac{b}{b^{2} + s^{2}}$\n\n\n\n\n```python\nlaplace_transform(cos(b*t), t, s, noconds=True)\n```\n\n\n\n\n$\\displaystyle \\frac{s}{b^{2} + s^{2}}$\n\n\n\n\n```python\nlaplace_transform(exp(-a*t)*sin(b*t), t, s, noconds=True)\n```\n\n\n\n\n$\\displaystyle \\frac{b}{b^{2} + \\left(a + s\\right)^{2}}$\n\n\n\n\n```python\nlaplace_transform(exp(-a*t)*cos(b*t), t, s, noconds=True)\n```\n\n\n\n\n$\\displaystyle \\frac{a + s}{b^{2} + \\left(a + s\\right)^{2}}$\n\n\n", "meta": {"hexsha": "21885c04c5fb74ae05ae0b7b38f0d40ea28871cd", "size": 5362, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Laplace transform.ipynb", "max_stars_repo_name": "mfkiwl/GMPE340", "max_stars_repo_head_hexsha": "3602b8ba859a2c7db2cab96862472597dc1ac793", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-09-07T09:36:36.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-07T09:36:36.000Z", "max_issues_repo_path": "Laplace transform.ipynb", "max_issues_repo_name": "mfkiwl/GMPE340", "max_issues_repo_head_hexsha": "3602b8ba859a2c7db2cab96862472597dc1ac793", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Laplace transform.ipynb", "max_forks_repo_name": "mfkiwl/GMPE340", "max_forks_repo_head_hexsha": "3602b8ba859a2c7db2cab96862472597dc1ac793", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-09-20T18:48:20.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-20T18:48:20.000Z", "avg_line_length": 20.0074626866, "max_line_length": 73, "alphanum_fraction": 0.4649384558, "converted": true, "num_tokens": 431, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9559813538993888, "lm_q2_score": 0.8856314783461303, "lm_q1q2_score": 0.8466471797252508}} {"text": "## Mathematics with Python\n\nAll notebooks can be found [https://github.com/drvinceknight/mwp](https://github.com/drvinceknight/mwp).\n\n### Requirements for this workshop\n\n- Python (version 3+)\n- Sympy\n- Numpy\n- Matplotlib\n- Jupyter notebooks (although if you're comfortable using Python in another way do not feel obliged to use notebooks)\n\n### Arithmetic\n\nIt is possible to use Python to carry out arithmetic. 
For example\n\n\n```python\n2 + 2\n```\n\n\n\n\n 4\n\n\n\n\n```python\n538 * 612 / 24\n```\n\n\n\n\n 13719.0\n\n\n\n### Exercises\n\n- Calculate $42 ^ 2$\n- Calculate $56 / 2 \\times 5$\n\n## Symbolic mathematics\n\nMost of mathematics involves the use of symbolic variables. We can use a python library called `sympy` to do this. For example, let us compute:\n\n$$x + x$$\n\n\n```python\nimport sympy as sym\nx = sym.Symbol(\"x\") # Creating a symbolic variable x\nx + x\n```\n\n\n\n\n 2*x\n\n\n\n### Exercises\n\n- Compute $x - 2x$\n- Compute $x + 2y - 3x + y$\n\n## Nicer output\n\n`sympy` can use $\\LaTeX$ to display mathematics when using Jupyter notebooks:\n\n\n```python\nsym.init_printing()\nx + x\n```\n\n## Substituting values in to expressions\n\nIf we need to compute a numerical value, it is possible to do so:\n\n\n```python\nexpr = x + x\nexpr\n```\n\n\n```python\nexpr.subs({x: 4})\n```\n\n### Exercises\n\n- Substitute $x=5, y=7$ in to the expression: $x + 2y$\n\n## Expanding expressions\n\nWe can use `sympy` to verify expressions like the following:\n\n$$(a + b) ^ 2 = a ^ 2 + 2 a b + b ^ 2$$\n\n\n```python\na, b = sym.symbols(\"a, b\") # Short hand: note we're using `sym.symbols` and not `sym.Symbol`\nexpr = (a + b) ** 2\nexpr\n```\n\n\n```python\nexpr.expand()\n```\n\nA `sympy` expression not only retains mathematical information but also the form, indeed the following two expressions are not the same. They have equal values.\n\n\n```python\nexpr == expr.expand()\n```\n\n\n\n\n False\n\n\n\n### Exercises\n\n- Expand $(a + b + c)^2$\n- Expand $(a + b) ^ 3$\n\n## Factorising expressions\n\nWe can also factor expressions like:\n\n$$a ^ 2 - b ^ 2 = (a - b)(a + b)$$\n\n\n```python\nsym.factor(a ** 2 - b ** 2)\n```\n\n### Exercises\n\n- Factorise $a ^ 3 - b ^ 3$\n- Factorise $4x + x ^ 2 - yx$\n\n## Solving equations\n\nWe can use `sympy` to solve algebraic equations. 
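As a warm-up, here is a small sketch that reuses the symbol `x` defined above to solve a linear equation:\n\n\n```python\n# solveset returns the set of all solutions, here {3}\nsym.solveset(2 * x - 6, x)\n```\n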
Let us take a look at the quadratic equation:\n\n$$ax^2 + b x + c = 0$$\n\n\n```python\nc = sym.Symbol('c')\neqn = a * x ** 2 + b * x + c \nsym.solveset(eqn, x)\n```\n\n### Exercises:\n\n- Obtain the solution to: $ax^3+bx^2+cx+d=0$\n- Obtain the solution to $x^2 + 1 = 0$\n", "meta": {"hexsha": "52ae9f955f0ecea150c6ff8a4b964132321a8236", "size": 14295, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "nbs/01-introduction-and-installation.ipynb", "max_stars_repo_name": "drvinceknight/mwp", "max_stars_repo_head_hexsha": "51bb5aa699788445f1dd710ee4c7535121f1d60d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "nbs/01-introduction-and-installation.ipynb", "max_issues_repo_name": "drvinceknight/mwp", "max_issues_repo_head_hexsha": "51bb5aa699788445f1dd710ee4c7535121f1d60d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2018-02-17T16:36:11.000Z", "max_issues_repo_issues_event_max_datetime": "2018-02-19T15:01:53.000Z", "max_forks_repo_path": "nbs/01-introduction-and-installation.ipynb", "max_forks_repo_name": "drvinceknight/mwp", "max_forks_repo_head_hexsha": "51bb5aa699788445f1dd710ee4c7535121f1d60d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2018-02-19T13:48:25.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-29T10:42:08.000Z", "avg_line_length": 32.6369863014, "max_line_length": 2120, "alphanum_fraction": 0.6493878979, "converted": true, "num_tokens": 758, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9559813463747182, "lm_q2_score": 0.8856314662716159, "lm_q1q2_score": 0.8466471615181552}} {"text": "# Automatic differentiation\n\nAutomatic differentiation (AD) is a set of techniques to calculate **exact** derivatives, numerically, in an automatic way. It is neither symbolic differentiation, nor something like finite differences.\n\nThere are two main methods: forward-mode AD and reverse-mode AD. \nEach has its strengths and weaknesses. Forward mode is significantly easier to implement.\n\n# Forward-mode AD\n\nLet's start by thinking about univariate functions $f: \\mathbb{R} \\to \\mathbb{R}$. We would like to calculate the derivative $f'(a)$ at some point $a \\in \\mathbb{R}$.\n\nWe know various rules about how to calculate such derivatives. For example, if we have already managed to calculate $f'(a)$ and $g'(a)$, we can calculate $(f+g)'(a)$ and $(f.g)'(a)$ as\n\n\n\\begin{align}\n(f+g)'(a) &= f'(a) + g'(a)\\\\\n(f.g)'(a) &= f'(a) \\, g(a) + f(a) \\, g'(a)\n\\end{align}\n\nWe also have the chain rule, which plays a crucial role:\n\n$$(f \\circ g)'(a) = f'(g(a)) \\, g'(a)$$\n\nWe see that in general we will need, for each function $f$, both the value $f(a)$ and the derivative $f'(a)$, and this is the *only* information that we require in order to calculate the first derivative of any combination of functions.\n\n## Jets\n\nFormally, we can think of a first-order Taylor polynomial of $f$, called the \n[**jet** of $f$](https://en.wikipedia.org/wiki/Jet_(mathematics) at $a$, denoted $J_a(f)$:\n\n$$(J_a(f))(x) := f(a) + x f'(a)$$\n\n[This can be thought of as representing the **set of all functions** with the same data $f(a)$ and $f'(a)$.]\n\nFormally, it is common to think of this as a \"dual number\", $f + \\epsilon f'$, that we can manipulate, following the rule that $\\epsilon^2 = 0$. 
(Cf. complex numbers, which have the same structure, but with $\\epsilon^2 = -1$.) E.g.\n\n$$(f + \\epsilon f') \\times (g + \\epsilon g') = f \\, g + \\epsilon (f' g + f g')$$\n\nshows how to define the multiplication of two jets.\n\n## Computer representation\n\nAs usual, we can represent a polynomial just by its degree and its coefficients, so we can define a Julia object as follows. We will leave the evaluation point $(a)$ as being implicit, although we could, of course, include it if desired.\n\n\n```julia\nimmutable Jet{T} <: Real\n val::T # value\n der::T # derivative # type \\prime to get \u2032\nend\n```\n\n\n```julia\nimport Base: +, *, -, convert, promote_rule\n```\n\n\n```julia\n+(f::Jet, g::Jet) = Jet(f.val + g.val, f.der + g.der)\n-(f::Jet, g::Jet) = Jet(f.val - g.val, f.der - g.der)\n\n*(f::Jet, g::Jet) = Jet(f.val*g.val, f.der*g.val + f.val*g.der)\n```\n\nWe can now define `Jet`s and manipulate them:\n\n\n```julia\nf = Jet(3, 4) # any function f such that f(a) = 3 and f'(a) = 4, or the set of all such functions\ng = Jet(5, 6) # any function g such that g(a) = 5 and g'(a) = 6\n\nf + g # calculate the value and derivative of (f + g) for any f and g in these sets\n```\n\n\n```julia\nf * g\n```\n\n\n```julia\nf * (g + g)\n```\n\n## Performance\n\nIt seems like we must have introduced quite a lot of computational overhead by creating a relatively complex data structure, and associated methods, to manipulate pairs of numbers. Let's see how the performance is:\n\n\n```julia\nadd(a1, a2, b1, b2) = (a1+b1, a2+b2)\n```\n\n\n```julia\nadd(1, 2, 3, 4)\n@time add(1, 2, 3, 4)\n```\n\n\n```julia\na = Jet(1, 2)\nb = Jet(3, 4)\n\nadd2(j1, j2) = j1 + j2\nadd2(a, b)\n@time add2(a, b)\n```\n\n\n```julia\n\n```\n\n\n```julia\n@code_native add(1, 2, 3, 4)\n```\n\n\n```julia\n@code_native add2(a, b)\n```\n\nWe see that there is only a slight overhead to do with moving the data around. The data structure itself has disappeared, and we basically have a standard Julia tuple.\n\n## Functions on jets: chain rule\n\nWe can also define functions of these objects using the chain rule. For example, if `f` is a jet representing the function $f$, then we would like `exp(f)` to be a jet representing the function $\\exp \\circ f$, i.e. with value $\\exp(f(a))$ and derivative $(\\exp \\circ f)'(a) = \\exp(f(a)) \\, f'(a)$:\n\n\n```julia\nimport Base: exp\n```\n\n\n```julia\nexp(f::Jet) = Jet(exp(f.val), exp(f.val) * f.der)\n```\n\n\n```julia\nf\n```\n\n\n```julia\nexp(f)\n```\n\n## Conversion and promotion\n\nHowever, we can't do e.g. the following:\n\n\n```julia\n# 3 * f\n```\n\nIn order to get this to work, we need to hook into Julia's type promotion and conversion machinery.\nFirst, we specify how to promote a number and a `Jet`:\n\n\n```julia\npromote_rule{T<:Real,S}(::Type{Jet{S}}, ::Type{T}) = Jet{S}\n```\n\nSecond, we specify how to `convert` a (constant) number to a `Jet`. By e.g. $g = f+3$, we mean the function such that $g(x) = f(x) + 3$ for all $x$, i.e. 
$g = f + 3.\\mathbb{1}$, where $\\mathbb{1}$ is the constant function $\\mathbb{1}: x \\mapsto 1$.\n\nThus we think of a constant $c$ as the constant function $c \\, \\mathbb{1}$, with $c(a) = c$ and $c'(a) = 0$, which we encode as the following conversion:\n\n\n```julia\nconvert{T<:Union{AbstractFloat, Integer, Rational},S}(::Type{Jet{S}}, x::T) = Jet{S}(x, 0)\n```\n\n\n```julia\nconvert(Jet{Float64}, 3.1)\n```\n\n\n```julia\npromote(Jet(1,2), 3.0)\n```\n\n\n```julia\npromote(Jet(1,2), 3.1)\n```\n\n\n```julia\nconvert(Jet{Float64}, 3.0)\n```\n\nJulia's machinery now enables us to do what we wanted:\n\n\n```julia\nJet(1.1, 2.3) + 3\n```\n\n## Calculating derivatives of arbitrary functions\n\nHow can we use this to calculate the derivative of an arbitrary function? For example, we wish to differentiate the function\n\n\n```julia\nh(x) = x^2 - 2\n```\n\nat $a = 3$.\n\nWe think of this as a function of $x$, which itself we think of as the identity function $\\iota: x \\mapsto x$, so that\n\n$$h = \\iota^2 - 2.\\mathbb{1}$$\n\nWe represent the identity function as follows:\n\n\n```julia\na = 3\nx = Jet(a, 1) \n```\n\nsince $\\iota'(a) = 1$ for any $a$.\n\nNow we simply evaluate the function `h` at `x`:\n\n\n```julia\nh(x)\n```\n\nThe first component of the resulting `Jet` is the value $h(a)$, and the second component is the derivative, $h'(a)$. \n\nWe can codify this into a function as follows:\n\n\n```julia\nderivative(f, x) = f(Jet(x, one(x))).der\n```\n\n\n```julia\nderivative(x -> 3x^5 + 2, 2)\n```\n\nThis is capable of differentiating any function that involves functions whose derivatives we have specified by defining corresponding rules on `Jet` objects. For example,\n\n\n```julia\ny = [1.,2]\nk(x) = (y'* [x 2; 3 4] * y)[]\n```\n\n\n```julia\nk(3)\n```\n\n\n```julia\nderivative(x->k(x), 10)\n```\n\nThis works since Julia is constructing the following object:\n\n\n```julia\n[Jet(3.0, 1.0) 2; 3 4]\n```\n\n# Higher dimensions\n\nHow can we extend this to higher dimensions? For example, we wish to differentiate the following function $f: \\mathbb{R}^2 \\to \\mathbb{R}$:\n\n\n```julia\nf1(x, y) = x^2 + x*y\n```\n\nAs we learn in calculus, the partial derivative $\\partial f/\\partial x$ is the function obtained by fixing $y$, thinking of the resulting function as a function only of $x$, and then differentiating.\n\nSuppose that we wish to differentiate $f$ at $(a, b)$:\n\n\n```julia\na, b = 3.0, 4.0\n\nf1_x(x) = f1(x, b) # single-variable function \n```\n\nSince we now have a single-variable function, we can differentiate it:\n\n\n```julia\nderivative(f1_x, a)\n```\n\nUnder the hood this is doing\n\n\n```julia\nf1(Jet(a, one(a)), b)\n```\n\nSimilarly, we can differentiate with respect to $y$ by doing\n\n\n```julia\nf1(a, Jet(b, one(b)))\n```\n\nNote that we must do **two separate calculations** to get the two partial derivatives. To calculate a gradient of a function $f:\\mathbb{R}^n \\to \\mathbb{R}$ thus requires $n$ separate calculations.\n\nForward-mode AD is implemented in a clean and efficient way in the `ForwardDiff.jl` package.\n\n# Syntax trees\n\n## Forward-mode\n\nTo understand what forward-mode AD is doing, and its name, it is useful to think of an expression as a **syntax tree**; cf. [this notebook](Syntax trees in Julia.ipynb).\n\nIf we label the nodes in the tree as $v_i$, then forward differentiation fixes a variable, e.g. $y$, and calculates $\\partial v_i / \\partial y$ for each $i$. If e.g. 
$v_1 = v_2 + v_3$, then we have\n\n$$\\frac{\\partial v_1}{\\partial y} = \\frac{\\partial v_2}{\\partial y} + \\frac{\\partial v_3}{\\partial y}.$$\n\nDenoting $v_1' := \\frac{\\partial v_1}{\\partial y}$, we have $v_1' = v_2' + v_3'$, so we need to calculate the derivatives and nodes lower down in the graph first, and propagate the information up. We start at $v_x' = 0$, since $\\frac{\\partial x}{\\partial y} = 0$, and $v_y' = 1$.\n\n\n## Reverse mode\n\nAn alternative method to calculate derivatives is to fix not the variable with which to differentiate, but *what it is* that we differentiate, i.e. to calculate the **adjoint**, $\\bar{v_i} := \\frac{\\partial f}{\\partial v_i}$, for each $i$. \n\nIf $f = v_1 + v_2$, with $v_1 = v_3 + v_4$ and $v_2 = v_3 + v_5$, then\n\n$$\\frac{\\partial f}{\\partial v_3} = \\frac{\\partial f}{\\partial v_1} \\frac{\\partial v_1}{\\partial v_3} + \\frac{\\partial f}{\\partial v_2} \\frac{\\partial v_2}{\\partial v_3},$$\n\ni.e.\n\n$$\\bar{v_3} = \\alpha_{13} \\, \\bar{v_1} + \\alpha_{2,3} \\, \\bar{v_2},$$\n\nwhere $\\alpha_{ij}$ are the coefficients specifying the relationship between the different terms. Thus, the adjoint information propagates **down** the graph, in **reverse** order, hence the name \"reverse-mode\".\n\nFor this reason, reverse mode is much harder to implement. However, it has the advantage that all derivatives $\\partial f / \\partial x_i$ are calculated in a *single pass* of the tree.\n\nJulia has en efficient implementation of reverse-mode AD in https://github.com/JuliaDiff/ReverseDiff.jl\n\n## Example of reverse mode\n\nReverse mode is difficult to implement in a general way, but easy to do by hand. e.g. consider the function\n\n$$f(x,y,z) = x \\, y - \\sin(z)$$\n\nWe decompose this into its tree with labelled nodes, corresponding to the following sequence of elementary operations:\n\n\n```julia\nff(x, y, z) = x*y - 2*sin(x*z)\n\nx, y, z = 1, 2, 3\n\nv\u2081 = x\nv\u2082 = y\nv\u2083 = z\nv\u2084 = v\u2081 * v\u2082\nv\u2085 = v\u2081 * v\u2083\nv\u2086 = sin(v\u2085)\nv\u2087 = v\u2084 - 2v\u2086 # f\n```\n\n\n```julia\nff(x, y, z)\n```\n\nWe have decomposed the **forward pass** into elementary operations. We now proceed to calculate the adjoints. The difficulty is to *find which variables depend on the current variable under question*.\n\n\n```julia\nv\u0304\u2087 = 1\nv\u0304\u2086 = -2 # \u2202f/\u2202v\u2086 = \u2202v\u2087/\u2202v\u2086\nv\u0304\u2085 = v\u0304\u2086 * cos(v\u2085) # \u2202v\u2087/\u2202v\u2086 * \u2202v\u2086/\u2202v\u2085\nv\u0304\u2084 = 1 \nv\u0304\u2083 = v\u0304\u2085 * v\u2081 # \u2202f/\u2202v\u2083 = \u2202f/\u2202v\u2085 . \u2202v\u2085/\u2202v\u2083. 
# This gives \u2202f/\u2202z\nv\u0304\u2082 = v\u0304\u2084 * v\u2081\nv\u0304\u2081 = v\u0304\u2085*v\u2083 + v\u0304\u2084*v\u2082\n```\n\nThus, in a single pass we have calculated the gradient $\\nabla f(1, 2, 3)$:\n\n\n```julia\n(v\u0304\u2081, v\u0304\u2082, v\u0304\u2083)\n```\n\nLet's check that it's correct:\n\n\n```julia\nForwardDiff.gradient(x->ff(x...), [x,y,z])\n```\n\n# Example: optimization\n\nAs an example of the use of AD, consider the following function that we wish to optimize:\n\n\n```julia\nx = rand(3)\ny = rand(3)\n\ndistance(W) = W*x - y\n```\n\n\n```julia\nusing ForwardDiff\n```\n\n\n```julia\nForwardDiff.jacobian(distance, rand(3,3))\n```\n\n\n```julia\nobjective(W) = (a = distance(W); dot(a, a))\n```\n\n\n```julia\nW0 = rand(3, 3)\ngrad = ForwardDiff.gradient(objective, W0)\n```\n\n\n```julia\n2*(W0*x-y)*x' == grad # LHS is the analytical derivative\n```\n\n# Example: Interval arithmetic\n\nHow can we find roots of a function?\n\n\n```julia\nf2(x) = x^2 - 2\n```\n\n## Exclusion of domains\n\nAn idea is to *exclude* regions of $\\mathbb{R}$ by showing that they *cannot* contain a zero, by calculating the image (range) of the function over a given domain.\n\nThis is, in general, a difficult problem, but **interval arithmetic** provides a partial solution, by calculating an **enclosure** of the range, i.e. and interval that is guaranteed to contain the range.\n\n\n```julia\nusing ValidatedNumerics\n```\n\n\n```julia\nX = 3..4\n```\n\n\n```julia\ntypeof(X)\n```\n\nThis is a representation of the set $X = [3, 4] := \\{x\\in \\mathbb{R}: 3 \\le x \\le 4\\}$.\n\nWe can evaluate a Julia function on an `Interval` object `X`. The result is a new `Interval`, which is **guaranteed to contain the true image** $\\mathrm{range}(f; X) := \\{f(x): x \\in X \\}$. This is achieved by defining arithmetic operations on intervals in the correct way, e.g.\n\n$$X + Y = [x_1, x_2] + [y_1, y_2] = [x_1+y_1, x_2+y_2].$$\n\n\n```julia\nf2(X)\n```\n\nSince this result does not contain $0$, we have *proved* that $f$ has no zero in the domain $[3,4]$. We can even use semi-infinite intervals:\n\n\n```julia\nX1 = 3..\u221e # type \\infty\n```\n\n\n```julia\nf2(X1)\n```\n\n\n```julia\nX2 = -\u221e.. -3 # space is required\n```\n\n\n```julia\nf2(X2)\n```\n\nWe have thus exclued two semi-infinite regions, and have proved that any root *must* lie in $[-3,3]$, by two simple calculations. However,\n\n\n```julia\nf2(-3..3)\n```\n\nWe cannot conclude anything from this, since the result is, in general, an over-estimate of the true range, which thus may or may not contain zero. We can proceed by bisecting the interval. E.g. after two bisections, we find\n\n\n```julia\nf2(-3.. -1.5)\n```\n\nso we have excluded another piece.\n\n## Proving existence of roots\n\nTo prove that there *does* exist a root, we need a different approach. It is a standard method to evaluate the function at two end-points of an interval:\n\n\n```julia\nf2(1), f2(2)\n```\n\nSince there is a sign change, there exists at least one root $x^*$ in the interval $[1,2]$, i.e. a point such that $f(x^*) = 0$.\n\nTo prove that it is unique, one method is to prove that $f_2$ is *monotone* in that interval, i.e. that the derivative has a unique sign. To do so, we need to evaluate the derivative *at every point in the interval*, which seems impossible.\n\nAgain, however, interval arithmetic easily gives an *enclosure* of this image. 
To show this, we need to evaluate the derivative using interval arithmetic.\n\nThanks to Julia's parametric types, we get **composability for free**: we can just substitute in an interval to `ForwardDiff` or `Jet`, and it works:\n\n\n```julia\nForwardDiff.derivative(f2, 1..2)\n```\n\nAgain, the reason for this is that Julia creates the object\n\n\n```julia\nJet(x, one(x))\n```\n\nSince an enclosure of the derivative is the interval $[2, 4]$ (and, in fact, in this case this is the true image, but there is no way to know this other than with an analytical calculation), we have **proved** that the image of the derivative function $f'$ over the interval $X = [1,2]$ does *not* contain zero, and hence that the image is monotone.\n\nTo actually find the root within this interval, we can use the [Newton interval method](Interval Newton.ipynb). In general, we should not expect to be able to use intervals in standard numerical methods designed for floats; rather, we will need to modify the numerical method to take *advantage* of intervals.\n\nThe Newton interval method can find, in a guaranteed way, *all* roots of a function in a given interval (or tell you if when it is unable to to so, for example if there are double roots). Although people think that finding roots of a general function is difficult, this is basically a solved problem using these methods.\n", "meta": {"hexsha": "3f15b8b876bf8b1ce2bee251590b67e7d5bdce77", "size": 28840, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "09. Automatic differentiation.ipynb", "max_stars_repo_name": "dpsanders/cincinnati_julia_workshop", "max_stars_repo_head_hexsha": "62773597f85099dd04924e36c35cec17bf5bf1c8", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2018-03-19T09:49:01.000Z", "max_stars_repo_stars_event_max_datetime": "2019-10-30T13:05:34.000Z", "max_issues_repo_path": "09. Automatic differentiation.ipynb", "max_issues_repo_name": "dpsanders/cincinnati_julia_workshop", "max_issues_repo_head_hexsha": "62773597f85099dd04924e36c35cec17bf5bf1c8", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "09. Automatic differentiation.ipynb", "max_forks_repo_name": "dpsanders/cincinnati_julia_workshop", "max_forks_repo_head_hexsha": "62773597f85099dd04924e36c35cec17bf5bf1c8", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 7, "max_forks_repo_forks_event_min_datetime": "2018-03-21T11:12:41.000Z", "max_forks_repo_forks_event_max_datetime": "2019-09-07T00:43:24.000Z", "avg_line_length": 24.1743503772, "max_line_length": 355, "alphanum_fraction": 0.5360263523, "converted": true, "num_tokens": 4637, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9099070060380482, "lm_q2_score": 0.930458262243501, "lm_q1q2_score": 0.8466304916413491}} {"text": "```python\n%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom ipywidgets import interact, fixed\nfrom matplotlib import cm\nplt.rcParams['font.size'] = 14\nplt.rcParams['axes.spines.right'] = False\nplt.rcParams['ytick.right'] = False\nplt.rcParams['axes.spines.top'] = False\nplt.rcParams['xtick.top'] = False\n```\n\n### Regularized optimization problems\nIn practice, we often encounter situations where we would like to estimate lots of parameters but where we don't really have enough data. 
One possible solution is then to use prior knowledge to favor some parameter combinations over others. For example, if we know that many parameters are likely to be zero, then one possibility is to add a regularization term to our objective function that promotes sparse solutions. However, some commonly used regularization terms are not smooth functions, and consequently, we can no longer use simple gradient based methods for finding the optimal parameters. Here, we therefore introduce a method called proximal gradient, which is used for minimizing non-differentiable functions that can be divided into a sum of two convex functions: one smooth and differentiable, and one non-differentiable.\n\n### Objective function with $L^1$ regularization\nBuilding upon our previous Poisson regression example, we note that we can promote sparse solutions by adding an $L^1$ regularization term (also known as lasso-regularization as it shrinks dimensions to zero one at a time). Our new objective function thus takes the form:\n\\begin{align*}\n nll(w_0, w_1) &= \\sum_i^N f(z_i) - y_i\\log(f(z_i)) + \\lambda ||\\mathbf{w}||_{L^1},\\\\\n ||\\mathbf{w}||_{L^1} &= \\sum_j |w_j| = |w_1| + |w_2|, \n\\end{align*}\nwhere $\\lambda$ is a regularization parameter that adjust how heavily we penalize non-zero parameter values. Observe that we still refer to this objective function as the negative log-likelihood ($nll$), as the additional regularization term only corresponds to adding a Laplacean zero mean prior on our parameters. However, the objective function is now composed of two parts, which we can highlight by writing it as:\n\\begin{align*}\n nll(\\mathbf{w}) &= g(\\mathbf{w}) + h(\\mathbf{w}),\\\\\n g(\\mathbf{w}) &=\\sum_i^N f(z_i) - y_i\\log(f(z_i)),\\\\\n h(\\mathbf{w}) &= ||\\mathbf{w}||_{L^1}. \n\\end{align*}\nThe first part ($g$) is convex and differentiable, whereas the second part ($h$) is convex (as it is a norm) but non-differentiable.\n\n\n```python\n# Poisson regression, rectified inverse link function\ndef invLinkFun(X, w):\n z = np.dot(X, w)\n mu = np.zeros(z.shape)\n mu[z<0] = np.exp(z[z<0])\n mu[z>=0] = z[z>=0] + 1\n return mu\n\n# Poisson regression, negative log-likelihood\ngFun = lambda X, y, w: np.sum(invLinkFun(X, w) - y*np.log(invLinkFun(X, w)))\n\n# L1 regularization term\nhFun = lambda w, reg_lambda: reg_lambda*np.linalg.norm(w)\n\n# Negative log-likelihood\nnegLogLikFun = lambda X, y, w, reg_lambda: gFun(X, y, w) + hFun(w, reg_lambda)\n\n# Poisson regression, gradient of the negative log-likelihood\ndef gDerFun(X, y, w):\n z = np.dot(X, w)\n der = np.dot(np.exp(z[z<0])-y[z<0], X[z<0, :])\n der += np.dot(1-y[z>=0]/(z[z>=0]+1), X[z>=0, :])\n return der\n```\n\n### Proximal gradient\nTo introduce the logic behind the proximal gradient method, we begin by performing a quadratic expansion of $g$ around $\\mathbf{w}_k$:\n$$\ng(\\mathbf{w}_k)+\\nabla g(\\mathbf{w}_k)^T(\\mathbf{w}-\\mathbf{w}_k)+\\frac{1}{2\\eta}||\\mathbf{w}-\\mathbf{w}_k||^2_2\n$$\nwhere the Hessian is approximated by $\\frac{1}{\\eta}\\mathbf{I}$. Differentiating the expansion with respect to $\\mathbf{w}$ gives:\n$$\n\\nabla g(\\mathbf{w}_k) + \\frac{1}{\\eta}(\\mathbf{w}-\\mathbf{w}_k),\n$$\nwhich results in $\\mathbf{w}_{k+1}=\\mathbf{w}=\\mathbf{w}_k - \\eta\\nabla g(\\mathbf{w}_k)$ when set to zero. That is, we see that the normal gradient descent rule corresponds to minimizing a quadratic expansion around our current parameter values when the Hessian is approximated as $\\frac{1}{\\eta}\\mathbf{I}$. 
A logical continuation is then to ask what kind of update rule we get if we seek to minimize the quadratic approximation plus the regularization term:\n$$\n\\underset{\\mathbf{w}}{\\operatorname{argmin}} g(\\mathbf{w}_k)+\\nabla g(\\mathbf{w}_k)^T(\\mathbf{w}-\\mathbf{w}_k)+\\frac{1}{2\\eta}||\\mathbf{w}-\\mathbf{w}_k||^2_2 + h(\\mathbf{w}).\n$$\nThis minimization problem can further be simplified as:\n$$\n\\underset{\\mathbf{w}}{\\operatorname{argmin}} \\frac{1}{2\\eta}||\\mathbf{w}-(\\mathbf{w}_k-\\eta\\nabla g(\\mathbf{w}_k))||^2_2 + h(\\mathbf{w}).\n$$\nWe therefore define the proximal operator to be:\n$$\n\\operatorname{prox}_{\\eta}(\\mathbf{x}):=\\underset{\\mathbf{\\mathbf{w}}}{\\operatorname{argmin}} \\frac{1}{2\\eta}||\\mathbf{w}-\\mathbf{x}||_2^2+h(\\mathbf{w})\n$$\nand the proximal gradient update rule as:\n$$\n\\mathbf{w}_{k+1}=\\operatorname{prox}_{\\eta}(\\mathbf{w}_{k}-\\eta\\nabla g(\\mathbf{w}_{k}))\n$$\nAt first sight, it might seem like we have just complicated things by introducing a new optimization problem that will have to be solved at each iteration of the proximal gradient method. However, as we shall see, this new optimization problem has closed form solutions for commonly used regularization functions.\n\n### Proximal mapping for $L^1$ regularization\n\nBy expanding the ${L^1}$ regularization term, we see that the expression for the proximal operator corresponds to:\n$$\n\\operatorname{prox}_{\\eta}(\\mathbf{x})=\\underset{\\mathbf{w}}{\\operatorname{argmin}} \\frac{1}{2\\eta}||\\mathbf{w}-\\mathbf{x}||_{L^2}^2+\\lambda ||\\mathbf{w}||_{L^1}=\\underset{\\mathbf{w}}{\\operatorname{argmin}} \\frac{1}{2}||\\mathbf{w}-\\mathbf{x}||_{L^2}^2+\\lambda \\eta||\\mathbf{w}||_{L^1}.\n$$\nThis optimization problem can be directly solved using subradient, by enforcing the condition that the zero vector ($\\mathbf{0}$) have to be in the set of subgradients:\n$$\n\\mathbf{0}\\in \\partial_\\mathbf{z} \\left( \\frac{1}{2} ||\\mathbf{w}-\\mathbf{x}||_{L^2}^2+\\lambda \\eta ||\\mathbf{w}||_{L^1} \\right)=\\mathbf{w}-\\mathbf{x}+\\lambda \\eta \\ \\partial ||\\mathbf{w}||_{L^1} \\iff \\mathbf{x}-\\mathbf{w} \\in \\lambda \\eta\\ \\partial||\\mathbf{w}||_{L^1}.\n$$\nThe subgradient of the $L^1$-norm can be easily computed component-wise as:\n$$\n[\\partial||\\mathbf{z}||_{L^1}]_i=\\begin{cases} 1 &\\mbox{if } z_i>0 \\\\\n[-1,1]&\\mbox{if } z_i=0\\\\\n-1 & \\mbox{if } z_i<0\n\\end{cases}\n$$\nwhereupon we find that the proximal mapping is given by:\n$$\n\\operatorname{prox}(\\mathbf{x}) = S(\\mathbf{x}, \\eta \\lambda),\n$$\nwhere $S(\\mathbf{x},\\lambda)$ is the soft-thresholding operator, defined component-wise as:\n$$\n[S(\\mathbf{x},\\eta \\lambda)]_i:=\\begin{cases} x_i-\\eta \\lambda &\\mbox{if } x_i> \\eta \\lambda \\\\\n0&\\mbox{if } x_i\\in[-\\eta \\lambda, \\eta \\lambda]\\\\\nx_i+\\eta \\lambda & \\mbox{if } x_i<-\\eta \\lambda\n\\end{cases}\n$$\nOur final update step in each iteration of the proximal gradient method is thus given by:\n$$\n\\mathbf{w}_{k+1}=S(\\mathbf{w}_{k}-\\eta_k\\nabla g(\\mathbf{w}_{k}), \\eta_k \\lambda).\n$$\n\n\n```python\ndef softThresholdingFun(w, th):\n w_th = np.zeros(np.size(w))\n w_th[w > th] = w[w > th] - th\n w_th[w < -th] = w[w < -th] + th\n return w_th\n```\n\n### Example case\nWe generate simulated data using the same example case as in Part 1.\n\n\n```python\n# Parameters\nn = 200;\nwTrue = [1, 0.5]\nregLambda = 100\n\n# Generate example data\nx = np.random.rand(n)*30 - 15\nx = np.sort(x)\nX = np.vstack([np.ones(n), x]).T\nmu = 
invLinkFun(X, wTrue)\ny = np.random.poisson(mu)\n\n# Evaluate the objective function over a grid\nnGrid = 51\nW0, W1 = np.meshgrid(np.linspace(wTrue[0]-2, wTrue[0]+2, nGrid), np.linspace(wTrue[1]-1, wTrue[1]+1, nGrid))\n# Get the log-likelihood for each parameter combination\nnegLogLikVals = np.zeros([nGrid, nGrid])\nfor i in range(nGrid):\n for j in range(nGrid):\n wTmp = np.array([W0[i, j], W1[i, j]])\n negLogLikVals[i, j] = negLogLikFun(X, y, wTmp, regLambda)\n\n# Plotting\ndef plotFun(W0, W1, funVals, X, y):\n fig = plt.figure(figsize=(15, 5))\n # Objective function surface\n ax_left = fig.add_subplot(1, 2, 1)\n contourHandle = ax_left.contourf(W0, W1, funVals, 150, cmap=cm.coolwarm)\n ax_left.set_xlabel('$w_0$')\n ax_left.set_ylabel('$w_1$')\n cBarHandle = plt.colorbar(contourHandle)\n cBarHandle.set_label('Objective function value');\n # Data\n ax_right = fig.add_subplot(1, 2, 2)\n ax_right.plot(x, y, 'k.', label='data')\n ax_right.set_ylabel('Spike count')\n ax_right.set_xlabel('x')\n ax_right.set_xticks([-10, 0, 10]);\n ax_right.set_ylim([-1, 11]);\n return ax_left, ax_right\n\nplotFun(W0, W1, negLogLikVals, X, y);\n```\n\n### Vanilla proximal gradient\n\n\n```python\n# Proximal gradient parameters\neta = 1e-4\nnIterations = 20\nwInit = np.array([1.8, 0.85])\n\nwUpdates = np.zeros([nIterations, 2])\nwUpdates[0, :] = wInit\nobjFunVals=np.zeros(nIterations)\nobjFunVals[0] = negLogLikFun(X, y, wUpdates[0, :], regLambda)\n\n# Proximal gradient loop\nfor i in range(1, nIterations):\n gradTmp = gDerFun(X, y, wUpdates[i-1, :])\n wUpdates[i, :] = softThresholdingFun(wUpdates[i-1, :]- eta*gradTmp, regLambda*eta)\n objFunVals[i] = negLogLikFun(X, y, wUpdates[i, :], regLambda)\n \n# Plotting\nax_left, ax_right = plotFun(W0, W1, negLogLikVals, X, y);\nax_left.plot(wUpdates[:, 0], wUpdates[:, 1], '-o', ms=6, color='white', lw=3)\nax_right.plot(X[:, 1], invLinkFun(X, wUpdates[i, :]), '-', color=0.5*np.ones(3));\n```\n\n### Acceleration and backtracking for proximal gradient\nAcceleration and backtracking can be implemented as for normal gradient descent, but with a slight modification to the backtracking criteria. That is, acceleration is implemented by first taking a step along the previous update direction ($v_{k+1}$), and then a corrective proximal step ($w_{k+1}$):\n\\begin{align}\n \\mathbf{v}_{k+1} &= \\mathbf{w}_k + \\frac{k-2}{k+1}(\\mathbf{w}_k - \\mathbf{w}_{k-1}), \\\\\n \\mathbf{w}_{k+1} &= \\mathbf{v}_{k+1} - \\eta \\nabla nll(\\mathbf{v}_{k+1}). \n\\end{align}\nDuring the corrective proximal step, we again set $\\eta$ through a backtracking procedure, where $\\eta$ is decreased until the following condition is met:\n$$\ng(\\mathbf{w}_{k+1})\\leq g(\\mathbf{v}_{k+1})+\\nabla g(\\mathbf{v}_{k+1})^T(\\mathbf{w}_{k+1}-\\mathbf{v}_{k+1})+\\frac{1}{2\\eta} ||\\mathbf{w}_{k+1}-\\mathbf{v}_{k+1}||^2\n$$\nThe intuition behind this condition is that we do... FIX THIS STILL!!! 
and it simplifies to the condition used for gradient descent if $h$ is removed from the objective function.\n\n\n```python\n# Proximal gradient parameters\neta = 1e-1\nbeta = 0.8\n\nwUpdates = np.zeros([nIterations, 2])\nwUpdates[0:2, :] = wInit\nwTmp = wUpdates[0, :]\nobjFunVals=np.zeros(nIterations)\nobjFunVals[0:2] = negLogLikFun(X, y, wUpdates[0, :], regLambda)\n\n# Proximal gradient with backtracking and accerlation\nfor i in range(2, nIterations):\n gradTmp = gDerFun(X, y, wTmp)\n gTmp = gFun(X, y, wTmp)\n while True:\n wUpdates[i,:] = softThresholdingFun(wTmp - eta*gradTmp, regLambda*eta)\n diff = wUpdates[i,:] - wTmp\n if gFun(X, y, wUpdates[i,:]) > gTmp + np.dot(gradTmp, diff) + np.dot(diff, diff)/(2*eta):\n eta *= beta\n else:\n break\n objFunVals[i] = negLogLikFun(X, y, wUpdates[i,:], regLambda)\n wTmp = wUpdates[i,:] + (i-2)/(i+1)*(wUpdates[i,:] - wUpdates[i-1,:])\n\n# Plotting\nax_left, ax_right = plotFun(W0, W1, negLogLikVals, X, y);\nax_left.plot(wUpdates[:, 0], wUpdates[:, 1], '-o', ms=6, color='white', lw=3)\nax_right.plot(X[:, 1], invLinkFun(X, wUpdates[i, :]), '-', color=0.5*np.ones(3));\n```\n", "meta": {"hexsha": "d6caffca3f4dcbf2718f18c4b904b6ddf34e502b", "size": 164999, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "ConvexOptimization/Part 2, proximal gradient and L1 regularisation.ipynb", "max_stars_repo_name": "ala-laurila-lab/jupyter-notebooks", "max_stars_repo_head_hexsha": "c7fac1ee74af8e61832dad8536b223a205e79bf2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "ConvexOptimization/Part 2, proximal gradient and L1 regularisation.ipynb", "max_issues_repo_name": "ala-laurila-lab/jupyter-notebooks", "max_issues_repo_head_hexsha": "c7fac1ee74af8e61832dad8536b223a205e79bf2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ConvexOptimization/Part 2, proximal gradient and L1 regularisation.ipynb", "max_forks_repo_name": "ala-laurila-lab/jupyter-notebooks", "max_forks_repo_head_hexsha": "c7fac1ee74af8e61832dad8536b223a205e79bf2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-02-21T17:03:39.000Z", "max_forks_repo_forks_event_max_datetime": "2019-02-21T17:03:39.000Z", "avg_line_length": 452.0520547945, "max_line_length": 53704, "alphanum_fraction": 0.930284426, "converted": true, "num_tokens": 3737, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9304582593509315, "lm_q2_score": 0.9099069974872589, "lm_q1q2_score": 0.8466304810532274}} {"text": "```python\n# A regression attempts to fit a of function to observed data to make predictions on new data.\n# A linear regression fits a straight line to observed data, attempting to demonstrate a linear relationship\n# between variables and make predictions on new data yet to be observed.\n```\n\n\n```python\n# Scikit-Learn to perform a basic, unvalidated linear regression on the sample of 10 dogs.\nfrom matplotlib import *\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom sklearn.linear_model import LinearRegression\n\n# Import points\ndf = pd.read_csv('https://bit.ly/3goOAnt', delimiter=\",\")\n\n# Extract input variables (all rows, all columns but last column)\nX = df.values[:, :-1]\n\n# Extract output column (all rows, last column)\nY = df.values[:, -1]\n\n# Fit a line to the points\nfit = LinearRegression().fit(X, Y)\n\n# m = 1.7867224, b = -16.51923513\nm = fit.coef_.flatten()\nb = fit.intercept_.flatten()\nprint(\"m = {0}\".format(m))\nprint(\"b = {0}\".format(b))\n\n# show in chart\nplt.plot(X, Y, 'o') # scatterplot\nplt.plot(X, m*X+b) # line\nplt.show()\n```\n\n\n```python\n# # The residual is the numeric difference between the line and the points\n\n# Another name for residuals are errors, because they reflect how wrong our line is in predicting the data.\n\n# Calculating the residuals for a given line and data\n\n# Import points\npoints = pd.read_csv('https://bit.ly/3goOAnt', delimiter=\",\").itertuples()\n\n# Test with a given line\nm = 1.93939\nb = 4.73333\n\n# calculate sum of squares\nfor p in points:\n y_actual = p.y\n y_predict = m*p.x + b\n residual = y_actual - y_predict\n print(residual)\n```\n\n -1.67272\n 1.3878900000000005\n -0.5515000000000008\n 2.5091099999999997\n -0.4302799999999998\n -1.3696699999999993\n 0.6909400000000012\n -2.2484499999999983\n 2.812160000000002\n -1.1272299999999973\n\n\n\n```python\n# If we are fitting a straight line through our 10 data points, we likely want to minimize these residuals\n# in total so there is as little of a gap as possible between the line and points.\n# But how do we measure the \u201ctotal\u201d? The best approach is to take the sum of squares,\n# which simply squares each residual, or multiplies each residual by itself, \n# and sums them. We take each actual y value and subtract from it the predicted y value taken from the line,\n# then square and sum all those differences.\n```\n\n\n```python\n# You might wonder why we have to square the residuals before summing them.\n# Why not just add them up without squaring?\n# That will not work because the negatives will cancel out the positives.\n# What if we add the absolute values, where we turn all negative values into positive values?\n# That sounds promising but absolute values are mathematically inconvenient.\n# More specifically, absolute values do not work well with Calculus derivatives \n# which we are going to use later for gradient descent. 
\n# This is why we choose the squared residuals as our way of totaling the loss.\n```\n\n\n```python\n# Calculating the sum of squares for a given line and data\n\npoints = pd.read_csv(\"https://bit.ly/2KF29Bd\").itertuples()\n\n# Test with a given line\nm = 1.93939\nb = 4.73333\n\nsum_of_squares = 0.0\n\n# calculate sum of squares\nfor p in points:\n y_actual = p.y\n y_predict = m*p.x + b\n residual_squared = (y_predict - y_actual)**2\n sum_of_squares += residual_squared\n\n \nprint(\"sum of squares = {}\".format(sum_of_squares))\n```\n\n sum of squares = 28.096969704500005\n\n\n\n```python\n# Calculating m and b for a simple linear regression\n\n# Load the data\npoints = list(pd.read_csv('https://bit.ly/2KF29Bd', delimiter=\",\").itertuples())\n\nn = len(points)\n\nm = (n*sum(p.x*p.y for p in points) - sum(p.x for p in points) *\n sum(p.y for p in points)) / (n*sum(p.x**2 for p in points) -\n sum(p.x for p in points)**2)\n\nb = (sum(p.y for p in points) / n) - m * sum(p.x for p in points) / n\n\nprint(m, b)\n```\n\n 1.9393939393939394 4.7333333333333325\n\n\n\n```python\n# Using inverse and transposed matrices to fit a linear regression\n\nimport pandas as pd\nfrom numpy.linalg import inv,qr\nimport numpy as np\n\n# Import points\ndf = pd.read_csv('https://bit.ly/3goOAnt', delimiter=\",\")\n\n# Extract input variables (all rows, all columns but last column)\nX = df.values[:, :-1].flatten()\n\n# Add placeholder \"1\" column to generate intercept\nX_1 = np.vstack([X, np.ones(len(X))]).T\n\n# Extract output column (all rows, last column)\nY = df.values[:, -1]\n\n# Calculate coefficents for slope and intercept\nb = inv(X_1.transpose() @ X_1) @ (X_1.transpose() @ Y)\nprint(b) # [1.93939394 4.73333333]\n\n# Predict against the y-values\ny_predict = X_1.dot(b)\n\nprint (y_predict)\n```\n\n [1.93939394 4.73333333]\n [ 6.67272727 8.61212121 10.55151515 12.49090909 14.43030303 16.36969697\n 18.30909091 20.24848485 22.18787879 24.12727273]\n\n\n\n```python\n# Using QR decomposition to perform a linear regression\n\n# Import points\ndf = pd.read_csv('https://bit.ly/3goOAnt', delimiter=\",\")\n\n# Extract input variables (all rows, all columns but last column)\nX = df.values[:, :-1].flatten()\n\n# Add placeholder \"1\" column to generate intercept\nX_1 = np.vstack([X, np.ones(len(X))]).transpose()\n\n# Extract output column (all rows, last column)\nY = df.values[:, -1]\n\n# calculate coefficents for slope and intercept\n# using QR decomposition\nQ, R = qr(X_1)\nb = inv(R).dot(Q.transpose()).dot(Y)\n\nprint(b)\n\n```\n\n [1.93939394 4.73333333]\n\n\n\n```python\n# Gradient descent is an optimization technique that uses derivatives and\n# iterations to minimize/maximize a set of parameters against an objective.\n\n# Using gradient descent to find the minimum of a parabola\n\nimport random\n\n\ndef f(x):\n return (x - 3) ** 2 + 4\n\ndef dx_f(x):\n return 2*(x - 3)\n\n# The learning rate\nL = 0.001\n\n# The number of iterations to perform gradient descent\niterations = 100_000\n\n # start at a random x\nx = random.randint(-15,15)\n\nfor i in range(iterations):\n\n # get slope\n d_x = dx_f(x)\n\n # update x by subtracting the (learning rate) * (slope)\n x -= L * d_x\n\nprint(x, f(x))\n```\n\n 3.000000000000111 4.0\n\n\n\n```python\n# Performing gradient descent for a linear regression\n\n# Import points from CSV\npoints = list(pd.read_csv(\"https://bit.ly/2KF29Bd\").itertuples())\n\n# Building the model\nm = 0.0\nb = 0.0\n\n# The learning Rate\nL = .001\n\n# The number of iterations\niterations = 
100_000\n\nn = float(len(points)) # Number of elements in X\n\n# Perform Gradient Descent\nfor i in range(iterations):\n\n # slope with respect to m\n D_m = sum(2 * p.x * ((m * p.x + b) - p.y) for p in points)\n\n # slope with respect to b\n D_b = sum(2 * ((m * p.x + b) - p.y) for p in points)\n\n # update m and b\n m -= L * D_m\n b -= L * D_b\n\nprint(\"y = {0}x + {1}\".format(m, b))\n```\n\n y = 1.9393939393939548x + 4.733333333333227\n\n\n\n```python\n# Calculating partial derivatives for m and b\n\nfrom sympy import *\n\nm, b, i, n = symbols('m b i n')\nx, y = symbols('x y', cls=Function)\n\nsum_of_squares = Sum((m*x(i) + b - y(i)) ** 2, (i, 0, n))\n\nd_m = diff(sum_of_squares, m)\nd_b = diff(sum_of_squares, b)\nprint(d_m)\nprint(d_b)\n```\n\n Sum(2*(b + m*x(i) - y(i))*x(i), (i, 0, n))\n Sum(2*b + 2*m*x(i) - 2*y(i), (i, 0, n))\n\n\n\n```python\n# Performing stochastic gradient descent for a linear regression\n\n\n# Input data\ndata = pd.read_csv('https://bit.ly/2KF29Bd', header=0)\n\nX = data.iloc[:, 0].values\nY = data.iloc[:, 1].values\n\nn = data.shape[0] # rows\n\n# Building the model\nm = 0.0\nb = 0.0\n\nsample_size = 1 # sample size\nL = .0001 # The learning Rate\nepochs = 1_000_000 # The number of iterations to perform gradient descent\n\n# Performing Stochastic Gradient Descent\nfor i in range(epochs):\n idx = np.random.choice(n, sample_size, replace=False)\n x_sample = X[idx]\n y_sample = Y[idx]\n\n # The current predicted value of Y\n Y_pred = m * x_sample + b\n\n # d/dm derivative of loss function\n D_m = (-2 / sample_size) * sum(x_sample * (y_sample - Y_pred))\n\n # d/db derivative of loss function\n D_b = (-2 / sample_size) * sum(y_sample - Y_pred)\n m = m - L * D_m # Update m\n b = b - L * D_b # Update b\n\n # print progress\n if i % 10000 == 0:\n print(i, m, b)\n\nprint(\"y = {0}x + {1}\".format(m, b))\n```\n\n 0 0.006 0.002\n 10000 2.36326760682638 1.8835518624103165\n 20000 2.208478682839455 2.860757272083633\n 30000 2.0990271430034566 3.5015104981731477\n 40000 2.059718240314351 3.918670691928773\n 50000 2.0254319787495616 4.228272179662717\n 60000 2.0128706983709828 4.3987557816635325\n 70000 1.9588699155518574 4.511397889311558\n 80000 1.9572242882073074 4.573105411192914\n 90000 1.9603079930214902 4.646560110817617\n 100000 1.915286401953193 4.679197268853813\n 110000 1.9403467582673453 4.694118062444541\n 120000 1.9599274086298113 4.725943861094068\n 130000 1.9368722399981075 4.7143432848607825\n 140000 1.963520934610125 4.71478629302077\n 150000 1.9360986672130869 4.728305205329085\n 160000 1.9324433155776284 4.741555567620757\n 170000 1.926229705774608 4.7507334297890695\n 180000 1.9432750003962185 4.731457564467172\n 190000 1.9652083911680416 4.750695486563088\n 200000 1.924857106440728 4.741023799177265\n 210000 1.949168382522107 4.742302871466551\n 220000 1.9482919821971325 4.732498280457749\n 230000 1.9436648255769589 4.744293925905611\n 240000 1.9705030140252286 4.742529291867208\n 250000 1.9353429418031627 4.739497162984329\n 260000 1.9562758398526634 4.738933645814807\n 270000 1.9382884118185948 4.728545301878929\n 280000 1.93255522206419 4.720959425792094\n 290000 1.9410867423424232 4.719485579197047\n 300000 1.928348444769958 4.709229998935416\n 310000 1.944704859029115 4.734245656426094\n 320000 1.9315489900068277 4.7164345649351676\n 330000 1.939995611167781 4.709338225607842\n 340000 1.961142019124275 4.705429525526002\n 350000 1.9216025853518108 4.723954582852459\n 360000 1.944900362530313 4.7252435418819125\n 370000 1.9399721319259282 
4.734938065834019\n 380000 1.9497781564660994 4.742384445451385\n 390000 1.9805381395468729 4.731931988681261\n 400000 1.9454482978428054 4.730618875951833\n 410000 1.9703383827976337 4.733665784535996\n 420000 1.981650785600547 4.742295189046688\n 430000 1.972530745570693 4.74071158972351\n 440000 1.9111024810843704 4.726256650760091\n 450000 1.9082781943786304 4.731152175860961\n 460000 1.9204477114790428 4.739821573687611\n 470000 1.9644981269863893 4.745855875675362\n 480000 1.9263137432887223 4.743643371721059\n 490000 1.91711757641405 4.74294635046912\n 500000 1.930934583192176 4.74212184317665\n 510000 1.967479133941378 4.736673843592278\n 520000 1.9505228605459906 4.728075419769489\n 530000 1.9516554472517902 4.728522913632032\n 540000 1.9417919849569742 4.749508598760058\n 550000 1.9611385059437294 4.766887931996634\n 560000 1.9267487723441967 4.745625753029247\n 570000 1.9160704416577639 4.735761047329838\n 580000 1.929617903629193 4.721682821898547\n 590000 1.9149289269251695 4.728326533479536\n 600000 1.931097863515108 4.706517870443267\n 610000 1.938513500360612 4.72158595083187\n 620000 1.9654752270010292 4.727653810731125\n 630000 1.9124274374663552 4.714167893880229\n 640000 1.9460836587073391 4.721447016138792\n 650000 1.9548197285647888 4.714751503819112\n 660000 1.946696243788322 4.7234709202203105\n 670000 1.943371459630136 4.722904788603818\n 680000 1.9573802094954733 4.7256764624636\n 690000 1.931115907669294 4.718575794959121\n 700000 1.9262178442406925 4.7242553744409905\n 710000 1.9584890714298269 4.707751870061937\n 720000 1.91096125496631 4.732834014230203\n 730000 1.9638450307901556 4.730820586569408\n 740000 1.9376318945047226 4.742484193616451\n 750000 1.9764575310410655 4.721173739823862\n 760000 1.959201460128207 4.728725027441086\n 770000 1.9616063379508497 4.745386968448955\n 780000 1.9303406405244505 4.7324681138365285\n 790000 1.9424228463604172 4.728831976971964\n 800000 1.9043294280432401 4.69159602664981\n 810000 1.9336889001186668 4.7071650308872695\n 820000 1.9441607414771653 4.712772819341543\n 830000 1.959822283009035 4.73382719635445\n 840000 1.9453503711741207 4.724613103589071\n 850000 1.9587792297040645 4.737822839830986\n 860000 1.9157005293366143 4.722802115138588\n 870000 1.9464146899956396 4.719103639727119\n 880000 1.9111757048916815 4.714647715026774\n 890000 1.9257806715010586 4.731873835908689\n 900000 1.9534173041959169 4.722614847513641\n 910000 1.9505340945508731 4.73813717689114\n 920000 1.9330475833082221 4.73912605385538\n 930000 1.9594103111947985 4.752444875710609\n 940000 1.9415876754302852 4.754499178310029\n 950000 1.9306788443386962 4.752825434666956\n 960000 1.921548945614283 4.76041767073736\n 970000 1.962487828671824 4.772474403778124\n 980000 1.9865264430013605 4.754946770865421\n 990000 1.9421834301828678 4.7535420839260265\n y = 1.9502363279513883x + 4.748196684882565\n\n\n\n```python\n# correlation coefficient, also called the Pearson correlation,\n# which measures the strength of the relationship between two variables as a value \n# between -1 and 1. 
A correlation coefficient closer to 0 indicates there is no correlation.\n# A correlation coefficient closer to 1 indicates a strong positive correlation,\n# meaning when one variable increases the other proportionally increases.\n# If it is closer to -1 then it indicates a strong negative correlation,\n# which means as one variable increases the other proportionally decreases.\n```\n\n\n```python\n# Using Pandas to see the correlation coefficent between every pair of variables\n\n# Read data into Pandas dataframe\ndf = pd.read_csv('https://bit.ly/2KF29Bd', delimiter=\",\")\n\n# Print correlations between variables\ncorrelations = df.corr(method='pearson')\nprint(correlations)\n\n```\n\n x y\n x 1.000000 0.957586\n y 0.957586 1.000000\n\n\n\n```python\n# Calculating correlation coefficient from scratch in Python\n\nfrom math import sqrt\n\n# Import points from CSV\npoints = list(pd.read_csv(\"https://bit.ly/2KF29Bd\").itertuples())\nn = len(points)\n\nnumerator = n * sum(p.x * p.y for p in points) - \\\n sum(p.x for p in points) * sum(p.y for p in points)\n\ndenominator = sqrt(n*sum(p.x**2 for p in points) - sum(p.x for p in points)**2) \\\n * sqrt(n*sum(p.y**2 for p in points) - sum(p.y for p in points)**2)\n\ncorr = numerator / denominator\n\nprint(corr)\n```\n\n 0.9575860952087218\n\n\n\n```python\n# Calculating the critical value from a T-distribution\n\nfrom scipy.stats import t\n\nn = 10\nlower_cv = t(n-1).ppf(.025)\nupper_cv = t(n-1).ppf(.975)\n\nprint(lower_cv, upper_cv)\n```\n\n -2.262157162740992 2.2621571627409915\n\n\n\n```python\n# Testing significance for linear-looking data\n\nfrom scipy.stats import t\nfrom math import sqrt\n\n# sample size\nn = 10\n\nlower_cv = t(n-1).ppf(.025)\nupper_cv = t(n-1).ppf(.975)\n\n# correlation coefficient\n# derived from data https://bit.ly/2KF29Bd\nr = 0.957586\n\n# Perform the test\ntest_value = r / sqrt((1-r**2) / (n-2))\n\nprint(\"TEST VALUE: {}\".format(test_value))\nprint(\"CRITICAL RANGE: {}, {}\".format(lower_cv, upper_cv))\n\nif test_value < lower_cv or test_value > upper_cv:\n print(\"CORRELATION PROVEN, REJECT H0\")\nelse:\n print(\"CORRELATION NOT PROVEN, FAILED TO REJECT H0 \")\n\n# Calculate p-value\nif test_value > 0:\n p_value = 1.0 - t(n-1).cdf(test_value)\nelse:\n p_value = t(n-1).cdf(test_value)\n\n# Two-tailed, so multiply by 2\np_value = p_value * 2\nprint(\"P-VALUE: {}\".format(p_value))\n```\n\n TEST VALUE: 9.399564671312076\n CRITICAL RANGE: -2.262157162740992, 2.2621571627409915\n CORRELATION PROVEN, REJECT H0\n P-VALUE: 5.9763860877914965e-06\n\n\n\n```python\n# Creating a correlation matrix in Pandas\n\n# Read data into Pandas dataframe\ndf = pd.read_csv('https://bit.ly/2KF29Bd', delimiter=\",\")\n\n# Print correlations between variables\ncoeff_determination = df.corr(method='pearson') ** 2\nprint(coeff_determination)\n```\n\n x y\n x 1.000000 0.916971\n y 0.916971 1.000000\n\n\n\n```python\n# Calculating a prediction interval of vet visits for a dog that\u2019s 8.5 years old\n\nfrom math import sqrt\n\n# Load the data\npoints = list(pd.read_csv('https://bit.ly/2KF29Bd', delimiter=\",\").itertuples())\n\nn = len(points)\n\n# Linear Regression Line\nm = 1.939\nb = 4.733\n\n# Calculate Prediction Interval for x = 8.5\nx_0 = 8.5\nx_mean = sum(p.x for p in points) / len(points)\n\nt_value = t(n - 2).ppf(.975)\n\nstandard_error = sqrt(sum((p.y - (m * p.x + b)) ** 2 for p in points) / (n - 2))\n\nmargin_of_error = t_value * standard_error * \\\n sqrt(1 + (1 / n) + (n * (x_0 - x_mean) ** 2) / \\\n (n * sum(p.x ** 2 for p in 
points) - sum(p.x for p in points) ** 2))\n\npredicted_y = m*x_0 + b\n\n# Calculate prediction interval\nprint(predicted_y - margin_of_error, predicted_y + margin_of_error)\n```\n\n 16.462516875955465 25.966483124044537\n\n\n\n```python\n# Doing a train/test split on linear regression\n\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.model_selection import train_test_split\n\n# Load the data\ndf = pd.read_csv('https://bit.ly/3cIH97A', delimiter=\",\")\n\n# Extract input variables (all rows, all columns but last column)\nX = df.values[:, :-1]\n\n# Extract output column (all rows, last column)\nY = df.values[:, -1]\n\n# Separate training and testing data\n# This leaves a third of the data out for testing\nX_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=1/3)\n\nmodel = LinearRegression()\nmodel.fit(X_train, Y_train)\nresult = model.score(X_test, Y_test)\nprint(\"R^2: %.3f\" % result)\n```\n\n R^2: 0.993\n\n\n\n```python\n# Using 3-fold cross validation for a linear regression\n\n\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.model_selection import KFold, cross_val_score\n\ndf = pd.read_csv('https://bit.ly/3cIH97A', delimiter=\",\")\n\n# Extract input variables (all rows, all columns but last column)\nX = df.values[:, :-1]\n\n# Extract output column (all rows, last column)\\\nY = df.values[:, -1]\n\n# Perform a simple linear regression\nkfold = KFold(n_splits=3, random_state=7, shuffle=True)\nmodel = LinearRegression()\nresults = cross_val_score(model, X, Y, cv=kfold)\nprint(results)\nprint(\"MSE: mean=%.3f (stdev-%.3f)\" % (results.mean(), results.std()))\n```\n\n [0.99337354 0.99345032 0.99251425]\n MSE: mean=0.993 (stdev-0.000)\n\n\n\n```python\n# Using a random-fold validation for a linear regression\n\n\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.model_selection import KFold, cross_val_score\n\ndf = pd.read_csv('https://bit.ly/3cIH97A', delimiter=\",\")\n\n# Extract input variables (all rows, all columns but last column)\nX = df.values[:, :-1]\n\n# Extract output column (all rows, last column)\\\nY = df.values[:, -1]\n\n# Perform a simple linear regression\nkfold = KFold(n_splits=3, random_state=7, shuffle=True)\nmodel = LinearRegression()\nresults = cross_val_score(model, X, Y, cv=kfold)\nprint(results)\nprint(\"MSE: mean=%.3f (stdev-%.3f)\" % (results.mean(), results.std()))\n```\n\n [0.99337354 0.99345032 0.99251425]\n MSE: mean=0.993 (stdev-0.000)\n\n\n\n```python\n# A linear regressoin with two input variables\n\n# Load the data\ndf = pd.read_csv('https://bit.ly/2X1HWH7', delimiter=\",\")\n\n# Extract input variables (all rows, all columns but last column)\nX = df.values[:, :-1]\n\n# Extract output column (all rows, last column)\\\nY = df.values[:, -1]\n\n# Training\nfit = LinearRegression().fit(X, Y)\n\n# Print coefficients\nprint(\"Coefficients = {0}\".format(fit.coef_))\nprint(\"Intercept = {0}\".format(fit.intercept_))\nprint(\"z = {0} + {1}x + {2}y\".format(fit.intercept_, fit.coef_[0], fit.coef_[1]))\n```\n\n Coefficients = [2.00672647 3.00203798]\n Intercept = 20.109432820035963\n z = 20.109432820035963 + 2.0067264725128062x + 3.002037976646693y\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "5272cf2d52311bb77a499409d367cbec74255816", "size": 41806, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Maths/Linear Regression/Linear Regression.ipynb", "max_stars_repo_name": "rishi9504/Data-Science", "max_stars_repo_head_hexsha": "10344bf641c601bf16451ddd9eaa28ab4c0fc75b", 
"max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Maths/Linear Regression/Linear Regression.ipynb", "max_issues_repo_name": "rishi9504/Data-Science", "max_issues_repo_head_hexsha": "10344bf641c601bf16451ddd9eaa28ab4c0fc75b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Maths/Linear Regression/Linear Regression.ipynb", "max_forks_repo_name": "rishi9504/Data-Science", "max_forks_repo_head_hexsha": "10344bf641c601bf16451ddd9eaa28ab4c0fc75b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.0102880658, "max_line_length": 12084, "alphanum_fraction": 0.6788020858, "converted": true, "num_tokens": 6652, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9241418137109955, "lm_q2_score": 0.9161096084360388, "lm_q1q2_score": 0.8466151950981508}} {"text": "# Linear Vs. Non-Linear Functions\n** October 2017 **\n\n** Andrew Riberio @ [AndrewRib.com](http://www.andrewrib.com) **\n\nResources\n* https://en.wikipedia.org/wiki/Linear_function\n* https://www.montereyinstitute.org/courses/Algebra1/COURSE_TEXT_RESOURCE/U03_L2_T5_text_final.html\n* https://en.wikipedia.org/wiki/Linear_combination\n\n## Libraries\n\n\n```python\nimport numpy as np\nimport sympy as sp\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\n```\n\n## Linear Functions \n\nA linear function of one variable has the geometric interpretation as a line when we plot the space of inputs against the output space: y = f(x). In three dimensions, in 3d space, we can do a similar thing z = f(x,y), but we arrive at the generalization of a line in 3d space called a *plane.*\n\n\n```python\n# There are no functions in one dimension since we cannot deliniate inputs/outputs. \n# A single dimension is just a constant value of existence with no notion of difference. \n\n# Two dimensional funciton\ndef f2(x):\n return x\n\n# Three dimensional funciton. Output is z.\ndef f3(x,y):\n return x + y\n\n# Four dimensional function. We cannot visualize this easily. \ndef f4(x,y,z):\n return x + y + z\n\nt1 = np.arange(0.0, 5.0, 0.1)\n\nplt.figure(1)\nplt.subplot(211)\nplt.plot(t1, f2(t1))\n\nX,Y = np.meshgrid(t1,t1)\n\nplt.subplot(212,projection=\"3d\")\n\nplt.plot(X,Y, f3(t1,t1))\n\nplt.show()\n```\n\n**Theorem:** The derivitive of every linear function is a constant.\n\n\n```python\nx_sym, y_sym = sp.symbols('x y')\nprint(\"dy/dx f2 = {0}\".format(sp.diff(f2(x_sym))))\nprint(\"\u2202z/\u2202x f3 = {0}\".format(sp.diff(f3(x_sym,y_sym),x_sym)))\n```\n\n dy/dx f2 = 1\n \u2202z/\u2202x f3 = 1\n\n\n**Definition:** A *linear combination* of varibles (x1, x2, ... , xn) = \u22021x1 + \u22022x2+ ... + \u2202nxn where \u2202 are constants.\n\n**Theorem:** All linear functions can be represented as a linear combination. \n\n\n## Non-Linear Functions\n\nNon-linear functions are more unpredictable than linear functions because the derivitive of a non-linear function is always a function, not a constant. The geometric interpretation of non-linear functions encompasses the diverse space of curves. \n\n\n```python\n# Two dimensional funciton\ndef f2(x):\n return x**2\n\n# Three dimensional funciton. 
Output is z.\ndef f3(x,y):\n return x * y\n\nt1 = np.arange(0.0, 5.0, 0.1)\n\nplt.figure(1)\nplt.subplot(211)\nplt.plot(t1, f2(t1))\n\nplt.subplot(212,projection=\"3d\")\nX,Y = np.meshgrid(t1,t1)\nplt.plot(X,Y, f3(t1,t1))\nplt.show()\n```\n\n**Theorem:** The derivitive of every non-linear function is **not a constant**.\n\n\n\n```python\nx_sym, y_sym = sp.symbols('x y')\nprint(\"dy/dx f2 = {0}\".format(sp.diff(f2(x_sym))))\nprint(\"\u2202z/\u2202x f3 = {0}\".format(sp.diff(f3(x_sym,y_sym),x_sym)))\n```\n\n dy/dx f2 = 2*x\n \u2202z/\u2202x f3 = y\n\n\n**Theorem:** Non-linear functions cannot be represented by a linear combination.\n\n", "meta": {"hexsha": "64a4f5c25499c5496728d5c87a37a5e551d60934", "size": 151088, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Notebooks/Linear Vs. Non-Linear Functions.ipynb", "max_stars_repo_name": "Andrewnetwork/WorkshopScipy", "max_stars_repo_head_hexsha": "739d24b9078fffb84408e7877862618d88d947dc", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 433, "max_stars_repo_stars_event_min_datetime": "2017-12-16T20:50:07.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-08T13:05:57.000Z", "max_issues_repo_path": "Notebooks/Linear Vs. Non-Linear Functions.ipynb", "max_issues_repo_name": "Andrewnetwork/WorkshopScipy", "max_issues_repo_head_hexsha": "739d24b9078fffb84408e7877862618d88d947dc", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2017-12-17T06:10:28.000Z", "max_issues_repo_issues_event_max_datetime": "2018-11-14T15:50:10.000Z", "max_forks_repo_path": "Notebooks/Linear Vs. Non-Linear Functions.ipynb", "max_forks_repo_name": "Andrewnetwork/WorkshopScipy", "max_forks_repo_head_hexsha": "739d24b9078fffb84408e7877862618d88d947dc", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 47, "max_forks_repo_forks_event_min_datetime": "2017-12-06T20:40:09.000Z", "max_forks_repo_forks_event_max_datetime": "2019-06-01T11:33:57.000Z", "avg_line_length": 614.1788617886, "max_line_length": 72682, "alphanum_fraction": 0.9375, "converted": true, "num_tokens": 893, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9724147209709197, "lm_q2_score": 0.8705972667296309, "lm_q1q2_score": 0.8465815982049394}} {"text": "# Examples of image reconstruction using PCA \n\nData classification in high dimensional spaces can be challenging and the results often lack robustness. \nThis well-known problem has its own name; the curse of dimensionality. Principal Component \nAnalysis is a popular method for dimensionality reduction. It can also be used as a \nvisualisation tool to project data into lower dimensional spaces, which improves data exploration and comprehension.\n\nIn this script, we will use PCA to study a few characteristics of the MNIST test dataset. It contains 10 K images of \nhand-written digits from 0 to 9. It is a good introduction to PCA in image analysis. \n\n## The PCA transform\nThis linear transform allows us to move from the natural/original space $\\cal{X}$ into the component space $\\cal{Z}$ and is defined as \n
$\\bf{z} = \\bf{W}^{T}(\\bf{x}-\\bf{\\mu})$
\n\nwith \n
\n$\\begin{align}\n\\bf{x} &= [x_{1} x_{2} \\cdots x_{N}]^\\top \\\\\n\\bf{\\mu} &= [\\mu_{1} \\mu_{2} \\cdots \\mu_{N}]^\\top \\\\\n\\bf{z} &= [z_{1} z_{2} \\cdots z_{N}]^\\top \\\\\n\\end{align}$\n
\n\nwhere N is the dimension of the space and $\\bf{\\mu}$ is the mean of the N-dimensional data X. \n\nThe $\\bf{W}$ matrix is made of the N eigenvectors $\\bf{w}_{i}$ of the covariance matrix $\\Sigma$. They are stacked together as its columns\n\n
      
\n$\\begin{align}\n\\bf{W} &= \\begin{pmatrix} \\bf{w}_{1} & \\bf{w}_{2} & \\dotsb & \\bf{w}_{N} \\end{pmatrix} \\\\\n&= \\begin{pmatrix} w_{1,1} & w_{2,1} & \\dotsb & w_{N,1} \\\\\nw_{1,2} & w_{2,2} & \\dotsb & w_{N,2} \\\\\n\\vdots & \\vdots & \\dotsb & \\vdots \\\\\nw_{1,N} & w_{2,N} & \\dotsb & w_{N,N} \\end{pmatrix} \n\\end{align}$ \n
\n\nAs for the covariance matrix $\\Sigma$, it is computed from the N-dimensional data distribution X \n\n
\n$\n\\begin{align}\n\\bf{\\Sigma} &= E\\{(\\bf{x}-\\bf{\\mu})(\\bf{x}-\\bf{\\mu})^{T} \\} \\\\\n&= \\begin{pmatrix} \\sigma_{1}^2 & \\sigma_{1,2} & \\cdots & \\sigma_{1,N} \\\\ \n\\sigma_{1,2} & \\sigma_{2}^2 & \\cdots & \\sigma_{2,N} \\\\\n\\vdots & \\vdots & \\ddots & \\vdots \\\\\n\\sigma_{1,N} & \\sigma_{2,N} & \\cdots & \\sigma_{N}^2 \\end{pmatrix}\n\\end{align}\n$ \n
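      \n\nAs a quick numerical illustration of these definitions (a minimal sketch on small random data, not on the MNIST images used later; names such as X_demo are purely illustrative), the mean, the covariance matrix, the matrix W and the components z can be computed directly with NumPy:\n\n\n```python\nimport numpy as np\n\n# Illustrative data matrix: 100 observations of dimension N = 5\nrng = np.random.default_rng(0)\nX_demo = rng.normal(size=(100, 5))\n\n# Mean vector and covariance matrix (features along the columns)\nmu_demo = X_demo.mean(axis=0)\nSigma_demo = np.cov(X_demo, rowvar=False)\n\n# Eigen-decomposition; eigh returns the eigenvalues in increasing order\neigvals, eigvecs = np.linalg.eigh(Sigma_demo)\n\n# Sort by decreasing eigenvalue to obtain the columns of W\norder = np.argsort(eigvals)[::-1]\nlambdas = eigvals[order]\nW_demo = eigvecs[:, order]\n\n# PCA transform of one observation: z = W^T (x - mu)\nx_demo = X_demo[0]\nz_demo = W_demo.T @ (x_demo - mu_demo)\n```
      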
\n\nThe $\\it{inverse}$ PCA transform allows us to move from the component space $\\cal{Z}$ back to the natural/original space $\\cal{X}$ \nand is defined as \n\n
$\\bf{x} = \\bf{W}\\bf{z} + \\bf{\\mu}$
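      \n\nA small sanity check of this round trip (a sketch with scikit-learn on the same kind of illustrative random data, not on the MNIST images; when all N components are kept, inverse_transform applies exactly this equation and recovers the original vectors up to floating-point error):\n\n\n```python\nimport numpy as np\nfrom sklearn.decomposition import PCA\n\nrng = np.random.default_rng(0)\nX_demo = rng.normal(size=(100, 5))\n\n# Keep all the components: the reconstruction x = W z + mu is then exact\npca_demo = PCA()\nZ_demo = pca_demo.fit_transform(X_demo)      # rows of Z_demo are the component vectors z\nX_back = pca_demo.inverse_transform(Z_demo)  # rows are W z + mu\n\nprint(np.allclose(X_demo, X_back))           # True up to numerical precision\n```
      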
\n\nThe principal components z are sorted in decreasing order of variance $var(z_{i})=\\lambda_{i}$ where the $\\lambda_{i}$ are \nthe eigenvalues of the covariance matrix $\\bf{\\Sigma}$. \n\nN.B. The principal components usually come in one of the two popular notations: $z_{i}$ or $PCA_{i}$.\n\n## PCA as a tool for dimensionality reduction\n\nThe last equation shows that we can reconstruct the vector $\\bf{x}$ exactly. We usually drop the \nleast significant principal components (the last elements in z and the last columns in $\\bf{W}$) \nbecause they only contain noise. This produces an approximate reconstruction of x \n\n
$\\bf{x} \\approx \\bf{\\tilde{W}}\\bf{\\tilde{z}} + \\bf{\\mu}$
\n\nwith the modified arrays\n\n
$\\bf{\\tilde{z}} = \\begin{pmatrix} z_{1} \\ z_{2} \\ \\dotsb \\ z_{M} \\end{pmatrix}^{T} $
\n\nand\n\n
\n$\n\\begin{align}\n\\bf{\\tilde{W}} &= \\begin{pmatrix} \\bf{w}_{1} & \\bf{w}_{2} & \\dotsb & \\bf{w}_{M} \\end{pmatrix} \\\\\n&= \\begin{pmatrix} w_{1,1} & w_{2,1} & \\dotsb & w_{M,1} \\\\\nw_{1,2} & w_{2,2} & \\dotsb & w_{M,2} \\\\\n\\vdots & \\vdots & \\ddots & \\vdots \\\\\nw_{1,N} & w_{2,N} & \\dotsb & w_{M,N} \\end{pmatrix} \n\\end{align}\n$ \n
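      \n\nA minimal sketch of this truncated reconstruction (illustrative random data again; with the MNIST images the same lines apply, as done later in this notebook with M = 1, 2, 20 and 200 components):\n\n\n```python\nimport numpy as np\n\nrng = np.random.default_rng(0)\nX_demo = rng.normal(size=(100, 5))\n\n# Columns of W sorted by decreasing eigenvalue of the covariance matrix\nmu_demo = X_demo.mean(axis=0)\neigvals, eigvecs = np.linalg.eigh(np.cov(X_demo, rowvar=False))\nW_demo = eigvecs[:, np.argsort(eigvals)[::-1]]\n\nM = 2                                    # number of components kept\nW_tilde = W_demo[:, :M]                  # first M columns of W\n\nx_demo = X_demo[0]\nz_tilde = W_tilde.T @ (x_demo - mu_demo) # truncated component vector\nx_approx = W_tilde @ z_tilde + mu_demo   # approximate reconstruction of x\n\nprint(np.linalg.norm(x_demo - x_approx)) # error carried by the dropped components\n```
      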
      \n\nThe scree plot, which displays $\\lambda$ $\\it{versus}$ the component number, is a practical tool for finding \nthe number of relevant components, i.e. those that contain information rather than noise.\n\n## PCA as a tool for visualisation\n\nThe MNIST dataset contains 10 K images. A PCA analysis generates a set of eigenvectors $\\bf{w}_{i}$ \nthat can be reshaped into images. Each original image is a linear combination of the eigenvectors. As \nwe will see below, the first principal components $z_{i}$ are the most important ones. If we keep only \nthe first two, each image $\\bf{x}$ can be approximated as \n\n
      
\n$\n\\begin{align}\n\\bf{x} &= \\sum_{i=1}^N z_{i} \\bf{w}_{i} + \\bf{\\mu} \\\\\n& \\approx z_{1} \\bf{w}_{1} + z_{2} \\bf{w}_{2} + \\bf{\\mu}\n\\end{align}\n$ \n
\n\nThis means that each image can be 'summarized' by only two numbers $z_{1}$ and $z_{2}$. Thus, the \n10 K images can be represented by as many points in the 2-D PCA space of \ncoordinates ($z_{1}$, $z_{2}$) or equivalently ($PCA_{1}$, $PCA_{2}$). This representation \nmakes image comparison easy as we will see below. \n\n\n\n```python\nprint(__doc__)\n\n# Author: Pierre Gravel \n# License: BSD\n\n%matplotlib inline\n\nimport os\nimport numpy as np\nimport pandas as pd\n\nimport matplotlib.pyplot as plt\nfrom matplotlib.offsetbox import OffsetImage, AnnotationBbox\n\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.decomposition import PCA\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay\n\nimport seaborn as sns\nsns.set(color_codes=True)\n\n# Used for reproductibility of the results\nnp.random.seed(43)\n```\n\n Automatically created module for IPython interactive environment\n\n\n# Data preprocessing\n\n### Load the MNIST test dataset. \n\nThe dataset contains 10 K images of size 28 x 28, each stored into a line of 784 elements. The X data is an array of\nshape 10K x 784; each line corresponds to an image (an observation) and each column corresponds to a feature. \nThe y data is an array of 10 K elements that contains the image labels $[0,\\cdots ,9]$\n\nThe test dataset can be downloaded from the Kaggle website: https://www.kaggle.com/oddrationale/mnist-in-csv\n \nIn what follows, we assume that you downloaded the dataset into the current directory.\n\n\n```python\nfilename = 'mnist_test.csv'\ndf = pd.read_csv(filename)\n\n# Extract from the Panda dataframe the features X and the labels y\nX = df.drop(['label'], axis=1).values\ny = df[['label']].values\n\n```\n\nDisplay one (inverted) image for each class. N.B. The original images are white on a black background.\n\n\n```python\nfig, ax = plt.subplots(2,5, figsize=(10,6))\nfor i in range(10):\n idx = np.where(y==i)\n idx = idx[0][0]\n plt.subplot(2,5, i + 1)\n plt.imshow(255 - X[idx,:].reshape(28, 28), cmap='gray', interpolation='nearest')\n plt.xticks(())\n plt.yticks(())\nfig.tight_layout() \n \nplt.savefig('13.1.1_Examples_from_MNIST_dataset.png')\nplt.savefig('13.1.1_Examples_from_MNIST_dataset.pdf')\n```\n\n### Normalise the data\n\n\n```python\nsc = StandardScaler().fit(X)\nX_s = sc.transform(X)\n```\n\n# PCA analysis\n\n### Compute the PCA transform using all the images\n\n\n\n```python\npca = PCA()\npca.fit(X_s);\n```\n\n### Display the PCA scree plot \n\nThe scree plot shows the eigenvalues $\\lambda$ in a decreasing order. The most relevant eigenvectors are on the left and \nthe least relevant ones on the right. \n\nThe second panel shows the fraction of the image variance already explained by the first n principal components. 
\nFor instance, the first 50 principal components explain \nabout 60% of the information in the dataset, whereas the first 200 account for 90% of it.\n\n\n\n```python\n(n,m) = X_s.shape\nn_components = np.arange(1,m+1)\n\nfig, ax = plt.subplots(2,1, figsize=(10,6))\nax[0].plot(n_components, pca.singular_values_) \nax[0].set_xlabel('Eigenvectors', fontsize=16)\nax[0].set_ylabel('Eigenvalues', fontsize=16)\nax[0].set_title('MNIST Scree plot', fontsize=16)\n\nax[1].plot(n_components, 100*np.cumsum(pca.explained_variance_ratio_)) \nax[1].set_xlabel('Eigenvectors', fontsize=16)\nax[1].set_ylabel('Ratio (%)', fontsize=16)\nax[1].set_title('Proportion of variance explained', fontsize=16)\n\nfig.tight_layout() \n\nplt.savefig('13.1.2_MNIST_scree_plot.png')\nplt.savefig('13.1.2_MNIST_scree_plot.pdf')\n```\n\n### Show examples of eigenvectors (reshaped as images)\n\nThe first eigenvectors (first row) are the most important ones as they contain coherent structures. \nThe last eigenvectors (second row) usually contain only noise and are generally discarded.\n\n\n```python\nindx = [1, 2, 3, 4, 300, 400, 500, 600]\n\nfig, ax = plt.subplots(2,4, figsize=(10,6))\nfor i in range(8):\n im = pca.components_[indx[i]-1,:].reshape(28, 28)\n plt.subplot(2,4, i + 1)\n plt.imshow(im, cmap='gray', interpolation='nearest')\n plt.title('$Eigenvector_{%d}$' % (indx[i]), fontsize=16)\n plt.xticks(())\n plt.yticks(())\nfig.tight_layout() \n \nplt.savefig('13.1.3_Examples_of_eigenvectors.png')\nplt.savefig('13.1.3_Examples_of_eigenvectors.pdf')\n```\n\n# Image similarities\n\n### Project the image data into a 2-D space defined by the first two principal components.\n\nCompute the principal components of the image data and make a plot where each image is represented as a \npoint in 2-D PCA space of coordinates ($PCA_{1}$, $PCA_{2}$) or equivalently ($z_{1}$, $z_{2}$). \nSuperpose on it the labels for five images in each class.\n\nNotice how the classes are clustered; the '1' images are on the left, the '0' images are on the right, the '7' on the top, etc. \nThis is one of the reasons why PCA is so much used in data analysis. \n\nThe classes are not perfectly separated however since there is some visible overlap between them.\n\n\n```python\nZ = pca.transform(X_s)\n```\n\n\n```python\nsns.set_style('white')\nfig, ax = plt.subplots(figsize = (10, 10))\nax.scatter(Z[:,0],Z[:,1], c = 'c', s=1)\nax.set_xlabel('$PCA_{1}$', fontsize=18)\nax.set_ylabel('$PCA_{2}$', fontsize=18)\nax.set_title('2-D PCA space for MNIST', fontsize=16)\n\nfor i in range(10):\n idx = np.where(y==i)\n idx = idx[0][0:5]\n ax.scatter(Z[idx,0], Z[idx,1],c='k', marker=r\"$ {} $\".format(i), edgecolors='none', s=150 )\n \n\nax.grid(color='k', linestyle='--', linewidth=.2)\nsns.despine()\n\nplt.savefig('13.1.4_2D_PCA_space_for_MNIST.png')\nplt.savefig('13.1.4_2D_PCA_space_for_MNIST.pdf')\n```\n\nIn the next figure, we replace the label markers with their corresponding images. Different labels overlap because\ntheir handwritten images share similarities. 
For instance, a curved '7' may look like a '9', a squashed '3' may \nlooked like an '8', etc.\n\n\n```python\nsns.set_style('white')\nimage_shape = (28, 28)\nfig, ax = plt.subplots(figsize = (10, 10))\nax.scatter(Z[:,0],Z[:,1], c = 'b', s=1)\nax.set_xlabel('$PCA_{1}$', fontsize=18)\nax.set_ylabel('$PCA_{2}$', fontsize=18)\nax.set_title('2-D PCA space for MNIST', fontsize=16)\n\nz = np.zeros((28, 28))\n\nfor i in range(10):\n idx = np.where(y==i)\n idx = idx[0]\n \n for j in range(5):\n J = 1.-X[idx[j],:].reshape(image_shape)\n I = (np.dstack((J,J,z)) * 255.999) .astype(np.uint8) \n \n imagebox = OffsetImage(I, zoom=.5);\n ab = AnnotationBbox(imagebox, (Z[idx[j],0], Z[idx[j],1]), frameon=True, pad=0);\n ax.add_artist(ab);\n\nax.grid(color='k', linestyle='--', linewidth=.2)\nsns.despine()\n\nplt.savefig('13.1.5_2D_PCA_space_for_MNIST_with_images.png')\nplt.savefig('13.1.5_2D_PCA_space_for_MNIST_with_images.pdf')\n```\n\n# Find the most similar image classes\n\nIf images from two different classes look alike, chances are, we will make classification errors when looking at them. The \nexample of '1' and '7' images is well known. This is why we often put an horizontal bar in the middle of the '7' to \nmake differences between them more visible. Do classifiers make the same errors?\n\nIn what follows, we will split the dataset into a training and a test datasets. A K-Nearest-Neighbor \nclassifier (KNN) will first be trained on the training dataset. Then, it will be used to make class predictions on the test dataset. \n\nThe confusion matrix between the predicted and the true test labels will help to identify the most \ncommon errors. This will tell us what are the most lookalike image classes from the classifier standpoint.\n\n### Split the dataset into a training and a test datasets\n\n\n```python\nX_train, X_test, y_train, y_test = train_test_split(X_s, y, random_state=0,train_size=0.8)\ny_train = y_train.ravel()\ny_test = y_test.ravel()\n\n```\n\nTrain a KNN classifier on the training dataset (using 5 neighbors) and use it to classify the test \ndataset. \n\nWarning: the next cell may take a few seconds to a minute to compute. In a few years from now, this warning will be pointless \ngiven the increasing speeds of hardware and software!\n\n\n```python\nclf = KNeighborsClassifier(n_neighbors=5)\nclf.fit(X_train, y_train)\n\ny_pred = clf.predict(X_test)\n\n```\n\nCompute and display the confusion matrix between the true and the predicted labels for the test dataset. The confusion matrix \ntells us what are the most common classification mistakes. For instance, images of '8' were confused 13 times \nwith images of '5'. The most common mistakes were found between (8,5) and (7,9) pairs. Surprisingly, the (1,7) \npair was not the most prevalent source of confusion. \n\n\n```python\nfig, ax = plt.subplots(figsize = (7, 7))\n\ncm = confusion_matrix(y_test, y_pred)\nConfusionMatrixDisplay(cm).plot(ax=ax)\n\nfig.savefig('13.1.6_confusion_matrix.png')\nfig.savefig('13.1.6_confusion_matrix.pdf')\n```\n\n# Examples of image reconstructions and comparisons\n\nThe following section will explain several observations we made about the image class distributions in the 2-D PCA space. \nIt is important to mention that the results could have been different if we had trained the KNN classifier with more \nneighbors (5) and more principal components per image (2). \n\n### A few useful functions\n\nWe define functions that will be used to \n\n
- Show the reconstruction performance as the number of principal components increases
- Compare the images reconstructed from the first two components $PCA_{1}$ and $PCA_{2}$
\n\nThe first function computes PCA approximations of images of labels i and j using the first n principal components.\n\n\n```python\ndef PCA_approximations(n_comp, X_s, y, sc, image_shape, i, j):\n\n # Find one image with label i and one image with label j\n idx = np.where(y==i)\n idx = idx[0][0]\n image_i = X_s[idx,:].reshape(1, -1)\n \n idx = np.where(y==j)\n idx = idx[0][0]\n image_j = X_s[idx,:].reshape(1, -1)\n\n \n # Compute the PCA transform using only the first n principal components \n pca = PCA(n_components=n_comp)\n pca.fit(X_s)\n\n\n # Transform and reconstruct the image i using the first n principal components \n Z = pca.transform(image_i) \n x_i = pca.inverse_transform(Z)\n\n # Remove the normalisation transform\n x_i = sc.inverse_transform(x_i)\n x_i = x_i.reshape(image_shape)\n\n \n # Transform and reconstruct the image using the first n principal components \n Z = pca.transform(image_j) \n x_j = pca.inverse_transform(Z)\n\n # Remove the normalisation transform\n x_j = sc.inverse_transform(x_j)\n x_j = x_j.reshape(image_shape)\n\n \n # Remove the normalisation transform from the corresponding original images\n X_i = sc.inverse_transform(image_i) \n X_i = X_i.reshape(image_shape) \n X_j = sc.inverse_transform(image_j) \n X_j = X_j.reshape(image_shape) \n \n return (x_i, x_j, X_i, X_j)\n```\n\nThe second function displays a mosaic of the original images with their PCA approximations with 1, 2, 20 \nand 200 principal components.\n\n\n```python\ndef display_PCA_reconstructions(X_s, y, sc, image_shape, i, j):\n # Number of principal components used for the reconstruction of each image\n ncomp = [1, 2, 20, 200]\n \n fig, ax = plt.subplots(2,len(ncomp)+1, figsize=(10,6))\n for k in range(len(ncomp)):\n # Reconstruct both images with the same number of components\n (x_i, x_j, X_i, X_j) = PCA_approximations(ncomp[k], X_s, y, sc, image_shape, i, j)\n \n plt.subplot(2,5, k+2)\n plt.imshow(255-x_i, cmap='gray', interpolation='nearest')\n plt.title('M = %d' % ncomp[k], fontsize=16)\n plt.xticks(())\n plt.yticks(())\n \n plt.subplot(2,5, k + 7)\n plt.imshow(255-x_j, cmap='gray', interpolation='nearest')\n plt.xticks(())\n plt.yticks(())\n\n\n # Original images for reference\n plt.subplot(2,5, 1)\n plt.imshow(255-X_i, cmap='gray', interpolation='nearest')\n plt.title('Original', fontsize=16)\n plt.xticks(())\n plt.yticks(())\n \n plt.subplot(2,5, 6)\n plt.imshow(255-X_j, cmap='gray', interpolation='nearest')\n plt.xticks(())\n plt.yticks(())\n\n fig.tight_layout() \n\n```\n\n## Example I: 7 versus 9\n\nThe figure below shows image reconstructions for an increasing number of principal components.\nNotice how the reconstructions of '7' and '9' images become easily recognizable when 200 principal components are used. \nThis is not too surprising since, as mentionned before, the first 200 components account for 90% of the total image variance.\n\nNotice also how the first two reconstructions (M = 1,2) are very similar for both '7' and '9' images. Hence, they share \nsimilar values of $z_{1}$ and $z_{2}$. As a result, the '7' and '9' images should be neighbors in the 2-D PCA space \n($z_{1}$, $z_{2}$) or equivalently ($PCA_{1}$, $PCA_{2}$). This is the case; their distributions overlap in the \ntop of the 2-D PCA space (see corresponding figures above). 
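Before generating the figure, here is a small optional check (a sketch, not part of the original notebook) of the variance figures quoted above. It assumes the full PCA fit `pca` and NumPy from the earlier cells are still in scope:

```python
import numpy as np

# Cumulative fraction of variance explained by the first k components
# (assumes `pca` was fitted above with all components retained)
cum_var = np.cumsum(pca.explained_variance_ratio_)

print('First  50 components explain about %.0f%% of the variance' % (100*cum_var[49]))
print('First 200 components explain about %.0f%% of the variance' % (100*cum_var[199]))
```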
\n\n\n```python\ni = 7\nj = 9\n\ndisplay_PCA_reconstructions(X_s, y, sc, image_shape, i, j)\n\nplt.savefig('13.1.7_Examples_image_reconstructions_7_and_9.png')\nplt.savefig('13.1.7_Examples_image_reconstructions_7_and_9.pdf')\n```\n\n## Example II: 1 versus 7\n\nThis new example is counter intuitive. The reconstructions with M = 2 are now quite different; the '1' and '7' images \ndo not share similar values of $z_{1}$ and $z_{2}$. As a result, the '1' and '7' images are not close neighbors in \nthe 2-D PCA space. The '1' are found on the left of the 2-D PCA space whereas the '7' are found at the top. Their \ndistributions barely overlap.\n\n\n\n```python\ni = 1\nj = 7\ndisplay_PCA_reconstructions(X_s, y, sc, image_shape, i, j)\n\nplt.savefig('13.1.8_Examples_image_reconstructions_1_and_7.png')\nplt.savefig('13.1.8_Examples_image_reconstructions_1_and_7.pdf')\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "7451263beba85f8bafd8c4413d4143be5e64d0f1", "size": 500780, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "13.1_Generate_image_reconstructions_using_PCA.ipynb", "max_stars_repo_name": "AstroPierre/Scripts-for-figures-courses-GIF-4101-GIF-7005", "max_stars_repo_head_hexsha": "a38ad6f960cc6b8155fad00e4c4562f5e459f248", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "13.1_Generate_image_reconstructions_using_PCA.ipynb", "max_issues_repo_name": "AstroPierre/Scripts-for-figures-courses-GIF-4101-GIF-7005", "max_issues_repo_head_hexsha": "a38ad6f960cc6b8155fad00e4c4562f5e459f248", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "13.1_Generate_image_reconstructions_using_PCA.ipynb", "max_forks_repo_name": "AstroPierre/Scripts-for-figures-courses-GIF-4101-GIF-7005", "max_forks_repo_head_hexsha": "a38ad6f960cc6b8155fad00e4c4562f5e459f248", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 625.975, "max_line_length": 162136, "alphanum_fraction": 0.9445105635, "converted": true, "num_tokens": 5140, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9372107861416413, "lm_q2_score": 0.9032942067038784, "lm_q1q2_score": 0.8465770735821321}} {"text": "# Special Series\n\n\n```python\n%matplotlib inline\nfrom sympy import *\ninit_printing()\n```\n\n\n```python\nx, t = symbols('x, t')\n```\n\nSymPy can compute special series like formal power series and fourier series. This is a new feature released in SymPy 1.0\n\nLet's try computing formal power series of some basic functions.\n\n\n```python\nexp_series = fps(exp(x), x)\nexp_series\n```\n\nThis looks very similar to what ``series`` has to offer, but unlike series a formal power series object returns an infinite expansion.\n\n\n```python\nexp_series.infinite # Infinite representation\n```\n\nWe can easily find out any term of the expansion (no need to recompute the expansion).\n\n\n```python\nexp_series.term(51) # equivalent to exp_series[51]\n```\n\n\n```python\nexp_series.truncate(10) # return a truncated series expansion\n```\n\n# Exercise\n\nTry computing the formal power series of $\\log(1 + x)$. Try to look at the infinite representation. What is the 51st term in this case? 
Compute the expansion about 1.\n\n\n```python\nlog_series = fps(?)\nlog_series\n```\n\n\n```python\n# infinite representation\n\n```\n\n\n```python\n# 51st term\n```\n\n\n```python\n# expansion about 1\n```\n\n# Fourier Series\n\nFourier series for functions can be computed using ``fourier_series`` function.\n\nA sawtooth wave is defined as:\n 1. $$ s(x) = x/\\pi \\in (-\\pi, \\pi) $$\n 2. $$ s(x + 2k\\pi) = s(x) \\in (-\\infty, \\infty) $$\n \nLet's compute the fourier series of the above defined wave.\n\n\n```python\nsawtooth_series = fourier_series(x / pi, (x, -pi, pi))\nsawtooth_series\n```\n\n\n```python\nplot(sawtooth_series.truncate(50)) \n```\n\nSee https://en.wikipedia.org/wiki/Gibbs_phenomenon for why the fourier series has peculiar behavior near jump discontinuties.\n\nJust like formal power series we can index fourier series as well.\n\n\n```python\nsawtooth_series[51]\n```\n\nIt is easy to shift and scale the series using ``shift`` and ``scale`` methods.\n\n\n```python\nsawtooth_series.shift(10).truncate(5)\n```\n\n\n```python\nsawtooth_series.scale(10).truncate(5)\n```\n\n# Exercise\n\nConsider a square wave defined over the range of (0, 1) as:\n 1. $$ f(t) = 1 \\in (0, 1/2] $$\n 2. $$ f(t) = -1 \\in (1/2, 1) $$\n 3. $$ f(t + 1) = f(t) \\in (-\\infty, \\infty) $$\n \nTry computing the fourier series of the above defined function. Also, plot the computed fourier series.\n\n\n```python\nsquare_wave = Piecewise(?)\n```\n\n\n```python\nsquare_series = fourier_series(?)\nsquare_series\n```\n\n\n```python\nplot(?)\n```\n\n# What next?\n\nTry some basic operations like addition, subtraction, etc on formal power series, fourier series and see what happens.\n", "meta": {"hexsha": "5228f76c95f20e8dcabbade4093f3f64c4013f72", "size": 6818, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "tutorial_exercises/Advanced - Special Series.ipynb", "max_stars_repo_name": "gvvynplaine/scipy-2016-tutorial", "max_stars_repo_head_hexsha": "aa417427a1de2dcab2a9640b631b809d525d7929", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 53, "max_stars_repo_stars_event_min_datetime": "2016-06-21T21:11:02.000Z", "max_stars_repo_stars_event_max_datetime": "2021-02-04T07:51:03.000Z", "max_issues_repo_path": "tutorial_exercises/Advanced - Special Series.ipynb", "max_issues_repo_name": "gvvynplaine/scipy-2016-tutorial", "max_issues_repo_head_hexsha": "aa417427a1de2dcab2a9640b631b809d525d7929", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 11, "max_issues_repo_issues_event_min_datetime": "2016-07-02T20:24:06.000Z", "max_issues_repo_issues_event_max_datetime": "2016-07-11T11:31:44.000Z", "max_forks_repo_path": "tutorial_exercises/Advanced - Special Series.ipynb", "max_forks_repo_name": "gvvynplaine/scipy-2016-tutorial", "max_forks_repo_head_hexsha": "aa417427a1de2dcab2a9640b631b809d525d7929", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 36, "max_forks_repo_forks_event_min_datetime": "2016-06-25T09:04:24.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-09T06:46:01.000Z", "avg_line_length": 19.591954023, "max_line_length": 173, "alphanum_fraction": 0.5252273394, "converted": true, "num_tokens": 716, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9372107878954105, "lm_q2_score": 0.9032942034496964, "lm_q1q2_score": 0.8465770721164471}} {"text": "# Linear Equations\nThe equations in the previous lab included one variable, for which you solved the equation to find its value. 
Now let's look at equations with multiple variables. For reasons that will become apparent, equations with two variables are known as linear equations.\n\n## Solving a Linear Equation\nConsider the following equation:\n\n\\begin{equation}2y + 3 = 3x - 1 \\end{equation}\n\nThis equation includes two different variables, **x** and **y**. These variables depend on one another; the value of x is determined in part by the value of y and vice-versa; so we can't solve the equation and find absolute values for both x and y. However, we *can* solve the equation for one of the variables and obtain a result that describes a relative relationship between the variables.\n\nFor example, let's solve this equation for y. First, we'll get rid of the constant on the right by adding 1 to both sides:\n\n\\begin{equation}2y + 4 = 3x \\end{equation}\n\nThen we'll use the same technique to move the constant on the left to the right to isolate the y term by subtracting 4 from both sides:\n\n\\begin{equation}2y = 3x - 4 \\end{equation}\n\nNow we can deal with the coefficient for y by dividing both sides by 2:\n\n\\begin{equation}y = \\frac{3x - 4}{2} \\end{equation}\n\nOur equation is now solved. We've isolated **y** and defined it as 3x-4/2\n\nWhile we can't express **y** as a particular value, we can calculate it for any value of **x**. For example, if **x** has a value of 6, then **y** can be calculated as:\n\n\\begin{equation}y = \\frac{3\\cdot6 - 4}{2} \\end{equation}\n\nThis gives the result 14/2 which can be simplified to 7.\n\nYou can view the values of **y** for a range of **x** values by applying the equation to them using the following Python code:\n\n\n```python\nimport pandas as pd\n\n# Create a dataframe with an x column containing values from -10 to 10\ndf = pd.DataFrame ({'x': range(-10, 11)})\n\n# Add a y column by applying the solved equation to x\ndf['y'] = (3*df['x'] - 4) / 2\n\n#Display the dataframe\ndf\n```\n\n\n\n\n
          x     y
     0  -10 -17.0
     1   -9 -15.5
     2   -8 -14.0
     3   -7 -12.5
     4   -6 -11.0
     5   -5  -9.5
     6   -4  -8.0
     7   -3  -6.5
     8   -2  -5.0
     9   -1  -3.5
    10    0  -2.0
    11    1  -0.5
    12    2   1.0
    13    3   2.5
    14    4   4.0
    15    5   5.5
    16    6   7.0
    17    7   8.5
    18    8  10.0
    19    9  11.5
    20   10  13.0
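As a quick sanity check (an illustrative snippet, not part of the original lab), we can confirm that every (x, y) pair in the table satisfies the original equation $2y + 3 = 3x - 1$:

```python
# Each (x, y) pair produced above should satisfy the original equation 2y + 3 = 3x - 1
for x, y in zip(df['x'], df['y']):
    assert 2*y + 3 == 3*x - 1

print('All points satisfy 2y + 3 = 3x - 1')
```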
\n\n\n\nWe can also plot these values to visualize the relationship between x and y as a line. For this reason, equations that describe a relative relationship between two variables are known as *linear equations*:\n\n\n```python\n%matplotlib inline\nfrom matplotlib import pyplot as plt\n\nplt.plot(df.x, df.y, color=\"grey\", marker = \"o\")\nplt.xlabel('x')\nplt.ylabel('y')\nplt.grid()\nplt.show()\n```\n\nIn a linear equation, a valid solution is described by an ordered pair of x and y values. For example, valid solutions to the linear equation above include:\n- (-10, -17)\n- (0, -2)\n- (9, 11.5)\n\nThe cool thing about linear equations is that we can plot the points for some specific ordered pair solutions to create the line, and then interpolate the x value for any y value (or vice-versa) along the line.\n\n## Intercepts\nWhen we use a linear equation to plot a line, we can easily see where the line intersects the X and Y axes of the plot. These points are known as *intercepts*. The *x-intercept* is where the line intersects the X (horizontal) axis, and the *y-intercept* is where the line intersects the Y (horizontal) axis.\n\nLet's take a look at the line from our linear equation with the X and Y axis shown through the origin (0,0).\n\n\n```python\nplt.plot(df.x, df.y, color=\"grey\")\nplt.xlabel('x')\nplt.ylabel('y')\nplt.grid()\n\n## add axis lines for 0,0\nplt.axhline()\nplt.axvline()\nplt.show()\n```\n\nThe x-intercept is the point where the line crosses the X axis, and at this point, the **y** value is always 0. Similarly, the y-intercept is where the line crosses the Y axis, at which point the **x** value is 0. So to find the intercepts, we need to solve the equation for **x** when **y** is 0.\n\nFor the x-intercept, our equation looks like this:\n\n\\begin{equation}0 = \\frac{3x - 4}{2} \\end{equation}\n\nWhich can be reversed to make it look more familar with the x expression on the left:\n\n\\begin{equation}\\frac{3x - 4}{2} = 0 \\end{equation}\n\nWe can multiply both sides by 2 to get rid of the fraction:\n\n\\begin{equation}3x - 4 = 0 \\end{equation}\n\nThen we can add 4 to both sides to get rid of the constant on the left:\n\n\\begin{equation}3x = 4 \\end{equation}\n\nAnd finally we can divide both sides by 3 to get the value for x:\n\n\\begin{equation}x = \\frac{4}{3} \\end{equation}\n\nWhich simplifies to:\n\n\\begin{equation}x = 1\\frac{1}{3} \\end{equation}\n\nSo the x-intercept is 11/3 (approximately 1.333).\n\nTo get the y-intercept, we solve the equation for y when x is 0:\n\n\\begin{equation}y = \\frac{3\\cdot0 - 4}{2} \\end{equation}\n\nSince 3 x 0 is 0, this can be simplified to:\n\n\\begin{equation}y = \\frac{-4}{2} \\end{equation}\n\n-4 divided by 2 is -2, so:\n\n\\begin{equation}y = -2 \\end{equation}\n\nThis gives us our y-intercept, so we can plot both intercepts on the graph:\n\n\n```python\nplt.plot(df.x, df.y, color=\"grey\")\nplt.xlabel('x')\nplt.ylabel('y')\nplt.grid()\n\n## add axis lines for 0,0\nplt.axhline()\nplt.axvline()\nplt.annotate('x-intercept',(1.333, 0))\nplt.annotate('y-intercept',(0,-2))\nplt.show()\n```\n\nThe ability to calculate the intercepts for a linear equation is useful, because you can calculate only these two points and then draw a straight line through them to create the entire line for the equation.\n\n## Slope\nIt's clear from the graph that the line from our linear equation describes a slope in which values increase as we travel up and to the right along the line. 
It can be useful to quantify the slope in terms of how much **x** increases (or decreases) for a given change in **y**. In the notation for this, we use the greek letter Δ (*delta*) to represent change:\n\n\\begin{equation}slope = \\frac{\\Delta{y}}{\\Delta{x}} \\end{equation}\n\nSometimes slope is represented by the variable ***m***, and the equation is written as:\n\n\\begin{equation}m = \\frac{y_{2} - y_{1}}{x_{2} - x_{1}} \\end{equation}\n\nAlthough this form of the equation is a little more verbose, it gives us a clue as to how we calculate slope. What we need is any two ordered pairs of x,y values for the line - for example, we know that our line passes through the following two points:\n- (0,-2)\n- (6,7)\n\nWe can take the x and y values from the first pair, and label them x1 and y1; and then take the x and y values from the second point and label them x2 and y2. Then we can plug those into our slope equation:\n\n\\begin{equation}m = \\frac{7 - -2}{6 - 0} \\end{equation}\n\nThis is the same as:\n\n\\begin{equation}m = \\frac{7 + 2}{6 - 0} \\end{equation}\n\nThat gives us the result 9/6 which is 11/2 or 1.5 .\n\nSo what does that actually mean? Well, it tells us that for every change of **1** in x, **y** changes by 11/2 or 1.5. So if we start from any point on the line and move one unit to the right (along the X axis), we'll need to move 1.5 units up (along the Y axis) to get back to the line.\n\nYou can plot the slope onto the original line with the following Python code to verify it fits:\n\n\n```python\nplt.plot(df.x, df.y, color=\"grey\")\nplt.xlabel('x')\nplt.ylabel('y')\nplt.grid()\nplt.axhline()\nplt.axvline()\n\n# set the slope\nm = 1.5\n\n# get the y-intercept\nyInt = -2\n\n# plot the slope from the y-intercept for 1x\nmx = [0, 1]\nmy = [yInt, yInt + m]\nplt.plot(mx,my, color='red', lw=5)\n\nplt.show()\n```\n\n### Slope-Intercept Form\nOne of the great things about algebraic expressions is that you can write the same equation in multiple ways, or *forms*. The *slope-intercept form* is a specific way of writing a 2-variable linear equation so that the equation definition includes the slope and y-intercept. The generalised slope-intercept form looks like this:\n\n\\begin{equation}y = mx + b \\end{equation}\n\nIn this notation, ***m*** is the slope and ***b*** is the y-intercept.\n\nFor example, let's look at the solved linear equation we've been working with so far in this section:\n\n\\begin{equation}y = \\frac{3x - 4}{2} \\end{equation}\n\nNow that we know the slope and y-intercept for the line that this equation defines, we can rewrite the equation as:\n\n\\begin{equation}y = 1\\frac{1}{2}x + -2 \\end{equation}\n\nYou can see intuitively that this is true. In our original form of the equation, to find y we multiply x by three, subtract 4, and divide by two - in other words, x is half of 3x - 4; which is 1.5x - 2. So these equations are equivalent, but the slope-intercept form has the advantages of being simpler, and including two key pieces of information we need to plot the line represented by the equation. 
We know the y-intecept that the line passes through (0, -2), and we know the slope of the line (for every x, we add 1.5 to y.\n\nLet's recreate our set of test x and y values using the slope-intercept form of the equation, and plot them to prove that this describes the same line:\n\n\n```python\n%matplotlib inline\n\nimport pandas as pd\nfrom matplotlib import pyplot as plt\n\n# Create a dataframe with an x column containing values from -10 to 10\ndf = pd.DataFrame ({'x': range(-10, 11)})\n\n# Define slope and y-intercept\nm = 1.5\nyInt = -2\n\n# Add a y column by applying the slope-intercept equation to x\ndf['y'] = m*df['x'] + yInt\n\n# Plot the line\nfrom matplotlib import pyplot as plt\n\nplt.plot(df.x, df.y, color=\"grey\")\nplt.xlabel('x')\nplt.ylabel('y')\nplt.grid()\nplt.axhline()\nplt.axvline()\n\n# label the y-intercept\nplt.annotate('y-intercept',(0,yInt))\n\n# plot the slope from the y-intercept for 1x\nmx = [0, 1]\nmy = [yInt, yInt + m]\nplt.plot(mx,my, color='red', lw=5)\n\nplt.show()\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "27f39bd77b5d1c587b5144ce70f8f1fd22b518a3", "size": 83186, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "MathsToML/Module01-Equations, Graphs, and Functions/01-02-Linear Equations.ipynb", "max_stars_repo_name": "hpaucar/data-mining-repo", "max_stars_repo_head_hexsha": "d0e48520bc6c01d7cb72e882154cde08020e1d33", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "MathsToML/Module01-Equations, Graphs, and Functions/01-02-Linear Equations.ipynb", "max_issues_repo_name": "hpaucar/data-mining-repo", "max_issues_repo_head_hexsha": "d0e48520bc6c01d7cb72e882154cde08020e1d33", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "MathsToML/Module01-Equations, Graphs, and Functions/01-02-Linear Equations.ipynb", "max_forks_repo_name": "hpaucar/data-mining-repo", "max_forks_repo_head_hexsha": "d0e48520bc6c01d7cb72e882154cde08020e1d33", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 431.0155440415, "max_line_length": 14558, "alphanum_fraction": 0.8948260525, "converted": true, "num_tokens": 3573, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.939913354875362, "lm_q2_score": 0.9005297914570319, "lm_q1q2_score": 0.846419977453589}} {"text": "## Product Rule for Random Variables\n\nWe introduced inference in the context of random variables, where there was a simple way to visualize what was going on in terms of joint probability tables. Marginalization referred to summing out rows or columns. Conditioning referred to taking a slice of the table and renormalizing so entries within that slice summed to 1. We then saw a more general story in terms of events. In fact, we saw that for many inference problems, using random variables to solve the problem is not necessary \u2013 reasoning with events was enough! A powerful tool we saw was Bayes' theorem.\n\nWe now return to random variables and build up to Bayes' theorem for random variables. 
This machinery will be extremely important as it will be how we automate inference for much larger problems in the later sections of the course, where we can have a large number of random variables at play, and a large amount of observations that we need to incorporate into our inference.\n\n## Product Rule for Random Variables\n\nWe know that product rule for event is \n$$\\mathbb {P}(\\mathcal{A} \\cap \\mathcal{B}) = \\mathbb {P}(\\mathcal{A}) \\mathbb {P}(\\mathcal{A} | \\mathcal{B})$$\nSimilarly the product rule for random varaible will be given by follwing formula if $p_Y(y) \\neq 0$.\n$$\\begin{align}\n p_{x|y}(x|y) &= \\frac{p_{X,Y}(x,y)}{p_Y(y)} \\\\\n &\\Downarrow \\\\\n p_{X,Y}(x,y) &= p_Y(y) \\, p_{x|y}(x|y)\n\\end{align}$$\n\n\n\nIn general the formula for joint probabiliy distribution is given by\n\n$$ p_{X,Y}(x,y) = \\begin{cases}\n p_Y(y) \\, p_{x|y}(x|y) & \\mbox{if } p_Y(y) > 0 \\\\\n 0 & \\mbox{if } p_Y(y) = 0\n \\end{cases}\n$$ \n\n### More than 2 random variable\n\nSuppose we have have three random variable then we can think any two as one random variable, for example trat last two as one random variable we get \n\n$$\\begin{align}p_{X_1, X_2, X_3}(x_1, x_2, x_3) \n&= p_{X_1}(x_1)p_{X_2, X_3|X_1}(x_2, x_3|x_1) \\\\ \n&= p_{X_1}(x_1)p_{X2|X_1}(x_2|x_1)p_{X_3|X_1,X_2}(x_3|x_1,x_2) \n\\end{align}$$\n\nWe can genrealize the formula as follows,\n\n$$\\begin{align}p_{X_1, X_2,\\ldots, X_N}(x_1, x_2,\\ldots,x_N) \n&= p_{X_1}(x_1) p_{X_2,\\ldots, X_N|X_1}(x_2,\\ldots, x_N|x_1) \\\\\n&= p_{X_1}(x_1) p_{X_2|X_1}(x_2|x_1) p_{X_3,\\ldots,X_N|X_1, X_2}(x_3,\\ldots,x_N|x_1,x_2) \\\\\n&= p_{X_1}(x_1) p_{X_2|X_1}(x_2|x_1) \\cdots p_{X_N|X_1, \\ldots, X_{N-1}}(x_n|x_1,\\ldots,x_{N-1})\n\\end{align}$$\n\n### Exercise: The Product Rule for Random Variables - Medical Diagnosis Revisited\n\nLet's revisit the medical diagnosis problem we saw earlier. We now use random variables to construct a joint probability table.\n\nLet random variable $X$ represent the patient's condition \u2014 whether \u201chealthy\" or \u201cinfected\", with the following distribution for $X$:\n\n\n\nMeanwhile, the test outcome $Y$ for whether the patient is infected is either \u201cpositive\" (for the disease) or \u201cnegative\". As before, the test is $99\\%$ accurate, which means that the conditional probability table for $Y$ given $X$ is as follows (note that we also show how to write things out as a single table):\n\n\n\nUsing the product rule for random variables, what are the four entries for the joint probability table? Please provide the exact answer for these four quantities.\n\n$p_{X,Y}(\\text {healthy}, \\text {positive}) = p_X(\\text {healthy})~p_{Y|X}(\\text {positive}~|~\\text {healthy})$ \n\n\n```python\nprior_or_p_X = {\n \"healthy\" : 0.999,\n \"infected\": 0.001\n}\n\np_Y_given_X = {\n ('positive', 'healthy' ): 0.01,\n ('positive', 'infected'): 0.99,\n ('negative', 'healthy' ): 0.99,\n ('negative', 'infected'): 0.01\n}\n\n# p_X_Y stores the joint probability dist. 
of X and Y\np_X_Y = {} \nfor key, values in p_Y_given_X.items():\n p_X_Y[key[::-1]] = values * prior_or_p_X[key[1]]\n \np_X_Y \n```\n\n\n\n\n {('healthy', 'negative'): 0.98901,\n ('healthy', 'positive'): 0.00999,\n ('infected', 'negative'): 1e-05,\n ('infected', 'positive'): 0.00099}\n\n\n\n\n```python\nprint(\"{0:.5f}\".format(p_X_Y[('healthy', 'positive')]))\n```\n\n 0.00999\n\n\n$p_{X,Y}(\\text {healthy}, \\text {negative}) = $\n\n\n```python\nprint(\"{0:.5f}\".format(p_X_Y[('healthy', 'negative')]))\n```\n\n 0.98901\n\n\n$p_{X,Y}(\\text {infected}, \\text {positive}) =$\n\n\n```python\nprint(\"{0:.5f}\".format(p_X_Y[('infected', 'positive')]))\n```\n\n 0.00099\n\n\n$p_{X,Y}(\\text {infected}, \\text {negative}) =$\n\n\n```python\nprint(\"{0:.5f}\".format(p_X_Y[('infected', 'negative')]))\n```\n\n 0.00001\n\n\n## Baye's Rule for Random Variable\n\nIn inference, what we want to reason about is some unknown random variable $X$, where we get to observe some other random variable $Y$, and we have some model for how $X$ and $Y$ relate. Specifically, suppose that we have some \u201cprior\" distribution $p_X$ for $X$; this prior distribution encodes what we believe to be likely or unlikely values that $X$ takes on, before we actually have any observations. We also suppose we have a \u201clikelihood\" distribution $p_{Y\u2223X}$.\n\n\n\nAfter observing that $Y$ takes on a specific value $y$, our \u201cbelief\" of what $X$ given $Y=y$ is now given by what's called the \u201cposterior\" distribution $p_{X\u2223Y}(\u22c5\u2223y)$. Put another way, we keep track of a probability distribution that tells us how plausible we think different values $X$ can take on are. When we observe data $Y$ that can help us reason about $X$, we proceed to either upweight or downweight how plausible we think different values $X$ can take on are, making sure that we end up with a probability distribution giving us our updated belief of what $X$ can be.\n\nThus, once we have observed $Y=y$, our belief of what $X$ is changes from the prior $p_X$ to the posterior $p_{X\u2223Y}(\u22c5\u2223y)$.\n\nBayes' theorem (also called Bayes' rule or Bayes' law) for random variables explicitly tells us how to compute the posterior distribution $p_{X\u2223Y}(\u22c5\u2223y)$, i.e., how to weight each possible value that random variable $X$ can take on, once we've observed $Y=y$. Bayes' theorem is the main workhorse of numerous inference algorithms and will show up many times throughout the course.\n\n**Bayes' theorem:** Suppose that $y$ is a value that random variable $Y$ can take on, and $p_Y(y)>0$. 
Then\n\n$$p_{X\\mid Y}(x\\mid y)=\\frac{p_{X}(x)p_{Y\\mid X}(y\\mid x)}{\\sum _{ x'}p_{X}( x')p_{Y\\mid X}(y\\mid x')}$$\n \nfor all values $x$ that random variable $X$ can take on.\n\n!!!important \n Remember that $p_{Y\u2223X}(\u22c5\u2223x)$ could be undefined but this isn't an issue since this happens precisely when $p_X(x)=0$, and we know that $p_{X,Y}(x,y)=0$ (for every $y$) whenever $p_X(x)=0$.\n\n Proof: We have\n\n $$p_{X\\mid Y}(x\\mid y)\\overset {(a)}{=}\\frac{p_{X,Y}(x,y)}{p_{Y}(y)}\\overset {(b)}{=}\\frac{p_{X}(x)p_{Y\\mid X} (y\\mid x)}{p_{Y}(y)}\\overset {(c)}{=}\\frac{p_{X}(x)p_{Y\\mid X}(y\\mid x)}{\\sum _{ x'}p_{X,Y}( x',y)}\\overset {(d)} {=}\\frac{p_{X}(x)p_{Y\\mid X}(y\\mid x)}{\\sum _{ x'}p_{X}( x')p_{Y\\mid X}(y\\mid x')},$$\n\n where step (a) uses the definition of conditional probability (this step requires $p_Y(y)>0$, step (b) uses the product rule (recall that for notational convenience we're not separately writing out the case when $p_X(x)=0$, step (c) uses the formula for marginalization, and step (d) uses the product rule (again, for notational convenience, we're not separately writing out the case when $p_X(x\u2032)=0$. \n\n## BAYES' THEOREM FOR RANDOM VARIABLES: A COMPUTATIONAL VIEW\n\nComputationally, Bayes' theorem can be thought of as a two-step procedure. Once we have observed $Y=y$:\n\n1. For each value $x$ that random variable $X$ can take on, initially we believed that $X=x$ with a score of $p_X(x)$, which could be thought of as how plausible we thought ahead of time that $X=x$. However now that we have observed $Y=y$, we weight the score $p_X(x)$ by a factor $p_{Y\u2223X}(y\u2223x)$, so\n\n $$\\text {new belief for how plausible }X=x\\text { is:}\\quad \\alpha (x\\mid y)\\triangleq p_{X}(x)p_{Y\\mid X}(y\\mid x),$$\n\n where we have defined a new table $\u03b1(\u22c5\u2223y)$ which is not a probability table, since when we put in the weights, the new beliefs are no longer guaranteed to sum to $1$ (i.e., $\\sum _{x}\\alpha (x\\mid y)$ might not equal $1$)! $\u03b1(\u22c5\u2223y)$ is an unnormalized posterior distribution!\n\n Also, if $p_X(x)$ is already $0$, then as we already mentioned a few times, $p_{Y\u2223X}(y\u2223x)$ is undefined, but this case isn't a problem: no weighting is needed since an impossible outcome stays impossible.\n\n To make things concrete, here is an example from the medical diagnosis problem where we observe $Y = \\text {positive}$:\n\n \n\n2. We fix the fact that the unnormalized posterior table $\u03b1(\u22c5\u2223y)$ isn't guaranteed to sum to $1$ by renormalizing:\n\n $$p_{X\\mid Y}(x\\mid y)=\\frac{\\alpha (x\\mid y)}{\\sum _{ x'}\\alpha ( x'\\mid y)}=\\frac{p_{X}(x)p_{Y\\mid X}(y\\mid x)}{\\sum _{ x'}p_{X}( x')p_{Y\\mid X}(y\\mid x')}.$$\n \n!!! Note\n Some times we won't actually care about doing this second renormalization step because we will only be interested in what value that $X$ takes on is more plausible relative to others; while we could always do the renormalization, if we just want to see which value of $x$ yields the highest entry in the unnormalized table $\u03b1(\u22c5\u2223y)$, we could find this value of x without renormalizing!\n\n### MAXIMUM A POSTERIORI (MAP) ESTIMATION\n\nFor a hidden random variable $X$ that we are inferring, and given observation $Y=y$, we have been talking about computing the posterior distribution $p_{X\u2223Y}(\u22c5|y)$ using Bayes' rule. ``The posterior is a distribution for what we are inferring``. 
Often times, we want to report which particular value of $X$ actually achieves the highest posterior probability, i.e., the most probable value $x$ that $X$ can take on given that we have observed $Y=y$.\n\nThe value that $X$ can take on that maximizes the posterior distribution is called the maximum a posteriori (MAP) estimate of $X$ given $Y=y$. We denote the MAP estimate by $\\widehat{x}_{\\text {MAP}}(y)$, where we make it clear that it depends on what the observed $y$ is. Mathematically, we write\n\n$$\\widehat{x}_{\\text {MAP}}(y) = \\arg \\max _ x p_{X \\mid Y}(x | y).$$\n \nNote that if we didn't include the \u201carg\" before the \u201cmax\", then we would just be finding the highest posterior probability rather than which value\u2013or \u201cargument\"\u2013x actually achieves the highest posterior probability.\n\nIn general, there could be ties, i.e., multiple values that $X$ can take on are able to achieve the best possible posterior probability.\n\n### Exercise: Bayes' Theorem for Random Variables - Medical Diagnosis, Continued\n\nRecall the medical diagnosis setup from before, summarized in these tables:\n\n\n\n\nRecall that Bayes' theorem is given by\n\n$$p_{X\\mid Y}(x\\mid y)=\\frac{p_{X}(x)p_{Y\\mid X}(y\\mid x)}{\\sum _{ x'}p_{X}( x')p_{Y\\mid X}(y\\mid x')}$$\n \nfor all values $x$ that random variable $X$ can take on.\n\nUse Bayes' theorem to compute the following probabilities: (Please be precise with at least 3 decimal places, unless of course the answer doesn't need that many decimal places. You could also put a fraction.)\n\n\n```python\nprior_or_p_X = {\n \"healthy\" : 0.999,\n \"infected\": 0.001\n}\n\np_Y_given_X = {\n ('positive', 'healthy' ): 0.01,\n ('positive', 'infected'): 0.99,\n ('negative', 'healthy' ): 0.99,\n ('negative', 'infected'): 0.01\n}\n\n# p_X_Y stores the joint probabilty distribution of X and Y\np_X_Y = {}\nfor key, values in p_Y_given_X.items():\n p_X_Y[key[::-1]] = values * prior_or_p_X[key[1]]\n\n\n# p_Y stores the marginal probabilty distribution Y \np_Y = {}\nfor key, values in p_X_Y.items():\n if key[1] in p_Y:\n p_Y[key[1]] += values\n \n else:\n p_Y[key[1]] = values\n\n# p_X_given_Y stores the conditional probability dist. of X given Y. 
\np_X_given_Y = {}\nfor key, values in p_X_Y.items():\n p_X_given_Y[key] = values / p_Y[key[1]]\n \np_X_given_Y \n```\n\n\n\n\n {('healthy', 'negative'): 0.9999898889810116,\n ('healthy', 'positive'): 0.9098360655737705,\n ('infected', 'negative'): 1.0111018988493663e-05,\n ('infected', 'positive'): 0.09016393442622951}\n\n\n\n$p_{X\\mid Y}(\\text {healthy}\\mid \\text {positive}) = $ \n\n\n```python\nprint(\"{0:.5f}\".format(p_X_given_Y [('healthy', 'positive')]))\n```\n\n 0.90984\n\n\n$p_{X\\mid Y}(\\text {healthy}\\mid \\text {negative}) =$\n\n\n```python\nprint(\"{0:.5f}\".format(p_X_given_Y [('healthy', 'negative')]))\n```\n\n 0.99999\n\n\nWhat is the MAP estimate for $X$ given $Y = \\text{positive}$?\n\n\n```python\ncomp = 0\nMAP = \"\"\nfor key, val in p_X_given_Y.items():\n if 'positive' in key and comp < val:\n comp = val\n MAP = key[0]\n \nprint(MAP)\n```\n\n healthy\n\n\nWhat is the MAP estimate for $X$ given $Y=\\text{negative}$?\n\n\n```python\ncomp = 0\nMAP = \"\"\nfor key, val in p_X_given_Y.items():\n if 'negative' in key and comp < val:\n comp = val\n MAP = key[0]\n \nprint(MAP) \n```\n\n healthy\n\n\n### Exercise: Complexity of Computing Bayes' Theorem for Random Variables\n\nThis exercise is extremely important and gets at how expensive it is to compute a posterior distribution when we have many quantities we want to infer.\n\nConsider when we have $N$ random variables $X_1, \\dots , X_ N$ with joint probability distribution $p_{X_1, \\dots , X_ N}$, and where we have an observation $Y$ related to $X_1, \\dots , X_ N$ through the known conditional probability table $p_{Y\\mid X_1, \\dots , X_ N}$. Treating $X=(X_1, \\dots , X_ N)$ as one big random variable, we can apply Bayes' theorem to get\n\n$$\\begin{eqnarray}\n&& p_{X_1, X_2, \\dots, X_N \\mid Y}(x_1, x_2, \\dots, x_N \\mid y) \\\\\n&&\n= \\frac{p_{X_1, X_2, \\dots, X_N}(x_1, x_2, \\dots, x_N)\n p_{Y\\mid X_1, X_2, \\dots, X_N}(y\\mid x_1, x_2, \\dots, x_N)}\n {\\sum_{x_1'}\n \\sum_{x_2'}\n \\cdots\n \\sum_{x_N'}\n p_{X}(x_1',\n x_2',\n \\dots,\n x_N')\n p_{Y\\mid X_1, X_2, \\dots, X_N}(y\\mid x_1',\n x_2',\n \\dots,\n x_N')}.\n\\end{eqnarray}$$\n\nSuppose each $X_i$ takes on one of $k$ values. In the denominator, how many terms are we summing together? Express your answer in terms of $k$ and $N$.\n\nIn this part, please provide your answer as a mathematical formula (and not as Python code). Use \"$\\hat{}$\" for exponentiation, e.g., $x\\hat{}2$ to denotes $x^2$. Explicitly include multiplication using $*$, e.g. $x*y$ is $xy$.\n\n**Answer:** Here we have $k$ choices of $X_1$ and $k$ choices of $X_2$ and so on. 
Hence total number of terms will be $k^N$.\n\n\n```python\n\n```\n", "meta": {"hexsha": "3df24bcafcf937bbce8a5cf18fc4cb1a29102f2a", "size": 30433, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "week03/01 Product Rule for Random Variables.ipynb", "max_stars_repo_name": "infimath/Computational-Probability-and-Inference", "max_stars_repo_head_hexsha": "e48cd52c45ffd9458383ba0f77468d31f781dc77", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-04-04T03:07:47.000Z", "max_stars_repo_stars_event_max_datetime": "2019-04-04T03:07:47.000Z", "max_issues_repo_path": "week03/01 Product Rule for Random Variables.ipynb", "max_issues_repo_name": "infimath/Computational-Probability-and-Inference", "max_issues_repo_head_hexsha": "e48cd52c45ffd9458383ba0f77468d31f781dc77", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "week03/01 Product Rule for Random Variables.ipynb", "max_forks_repo_name": "infimath/Computational-Probability-and-Inference", "max_forks_repo_head_hexsha": "e48cd52c45ffd9458383ba0f77468d31f781dc77", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-02-27T05:33:49.000Z", "max_forks_repo_forks_event_max_datetime": "2021-02-27T05:33:49.000Z", "avg_line_length": 28.2834572491, "max_line_length": 587, "alphanum_fraction": 0.5055696119, "converted": true, "num_tokens": 4522, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9399133464597458, "lm_q2_score": 0.9005297881200701, "lm_q1q2_score": 0.846419966738621}} {"text": "# Integrals and derivatives\n\nOne of the most basic but also most important application of computers in physics is the evaluation of integrals and derivatives. Numerical evaluation of integrals is a particularly crucial topic, because integrals occur widely in physics and, while some integrals can be done analytically in closed form, most cannot. They can, however, almost always be solved on a computer. Conversely, derivatives can almost always be performed analytically, if a close analytic expression of the expression is available. However, we may not always be interested in an analytic expression of the derivative, since the expression could be too complicated or because its evaluation is more involved than a numerical derivative. In this chapter we examine a number of different techniques for evaluating integrals and derivatives, as well as taking a brief look at the related operation of interpolation.\n\n## Integration\n\nIn integration our objective is to find the value of a definite integral with fixed bounds from $x=a$ to $x=b$ $$ I(a,b)=\\int_a^b f(x)dx.$$ Note that this is a much simpler problem than finding the antiderivative (or the indefinite integral) $F(y)=\\int_0^y f(x)dx$. The simplest and most naive way to solve the definite integral is to discretize the sum and to assign a rectangle to each segment $$I(a,b)\\approx\\sum_{i=k}^N f(x_k)h.$$ To discretize the continuous variable x, we take the interval $[a,b]$ and divide it into $N$ equally sized portions $h=(b-a)/N$. The sequence of $x$-points then becomes $x_k=a+(k-\\frac{1}{2})h$. \n\nDiscretization is a core concept in computational science. An example of the simple rectangular integration is shown in panel a) of the following figure. 
\n\nWe can now write a little python program that evaluates the sum for the integral. We do this for the function $f(x)=x^4-2x+1$ from $a=0$ to $b=2$. We know the correct answer for this integral and compare the result of our program to it $$\\int_0^2 (x^4-2x+1)dx = \\left[\\frac{1}{5}x^5-x^2+x\\right]_0^2=4.4 .$$\n\n\n```python\ndef f(x):\n return x**4 - 2*x + 1\n\nN = 10\na = 0.0\nb = 2.0\nh = (b-a)/N\n\ns = 0.0\nfor k in range(1,N):\n s += f(a+(k-0.5)*h)\n\nprint(h*s)\n```\n\n 2.3003400000000007\n\n\nWith only 10 points, the value of the integral is not very good, because the constant value of the function that we assume in each interval does not reflect the curvature of the function $f(x)$ very well. We could improve the accuracy of our integration by using more points (i.e. by making the interval smaller and smaller). This strategy will eventually lead to success, but it might require a large number of points. Maybe we can do something smarter.\n\n\n### Trapezoidal rule\n\nInstead of replacing the function in each interval with a horizontal line, we could represent the function by a line that runs through the function values at the end point of each interval. Such sloped lines, would capture more rapidly varying functions better. The idea is shown in panel b) of the previous figure.\n\nThe two end-points of each segment are $a+(k-1)h$ and $a+kh$. Following the $\\textit{trapezoidal rule}$ this gives the simple formula for the area of the segment $$A_k=\\frac{1}{2}h\\left[ f(a+(k-1)h)+f(a+k h)\\right].$$ All we have to do now is sum up the segments again to obtain the integral $$\\begin{align} I(a,b)\\approx \\sum_{k=1}^N A_k &=\\frac{1}{2}h\\sum_{k=1}^N \\left[ f(a+(k-1)h)+f(a+kh)\\right] \\\\ &=h \\left[ \\frac{1}{2}f(a)+\\frac{1}{2}f(b)+\\sum_{k=1}^{N-1} f(a+kh) \\right]\\end{align}$$ \n\nWe can easily modify the code for the rectangular integration to incorporate the trapezoidal rule:\n\n\n```python\ndef f(x):\n return x**4 - 2*x + 1\n\nN = 10\na = 0.0\nb = 2.0\nh = (b-a)/N\n\ns = 0.5*f(a) + 0.5*f(b)\nfor k in range(1,N):\n s += f(a+k*h)\n\nprint(h*s)\n```\n\n 4.50656\n\n\nWe see that with still only 10 discretization points, we immediately achieve a better integration accuracy. With a value of 4.507, we are now $\\sim$2\\% off.\n\n### Simpson's rule\n\nThe trapezoidal rule approximated the integrand with linear segments. To improve on the trapezoidal rule, we make an attempt to represent the function better. We do this by a powerlaw expansion. The simplest expansion beyond linear is quadratic. To fit a quadratic piece to our function requires 3 points, which implies that we now need two discretized segments for each fit. This is schematically shown in the figure. \n\n\n\nDenoting our 3 points $-h$, $0$ and $h$, we fit a quadratic function $Ax^2+Bx+C$ through these points. At the 3 points the function assumes the following values $$\\begin{gather} f(-h)=Ah^2-Bh+C, & f(0)=C, & f(h)=Ah^2+Bh+C\\end{gather}.$$ Solving this system of equations provides the desired expressions for the 3 coefficients $$\\begin{gather} A=\\frac{1}{h^2}\\left[\\frac{1}{2}f(-h)-f(0)+\\frac{1}{2}f(h)\\right], & B=\\frac{1}{2h}\\left[f(h)-f(-h)\\right], & C=f(0)\\end{gather}.$$ These expressions we insert into the integral of the quadratic approximation to the integral under the curve of the first two segments: $$\\int_{-h}^{h} (Ax^2+Bx+C)dx=\\frac{2}{3}Ah^3+2Ch=\\frac{1}{3}\\left[f(-h)+4f(0)+f(h)\\right] .$$\n\nNow we have to generalize this expression to incorporate also the remaining segments. 
We do this by sliding the procedure along in pairs of segments and summing up the results. In general, the three points of our first pair of segments are at $a$, $a+h$ and $a+2h$. The three points for the next pair of segments lie at $a+2h$, $a+3h$ and $a+4h$, and so forth. The approximate value of the integral then becomes $$\\begin{align}I(a,b)\\approx & \\frac{h}{3}\\left[ f(a)+4f(a+h)+f(a+2h)\\right] \\\\ & +\\frac{h}{3} \\left[f(a+2h)+4f(a+3h)+f(a+4h) \\right] + \\ldots \\\\ & +\\frac{h}{3} \\left[f(a+(N-2)h+4f(a+(N-1)h)+f(b)\\right] \\end{align}$$ Note that the total number of slices must be even for this to work. Collecting terms, we now have $$\\begin{align} I(a,b)\\approx & \\frac{h}{3} \\left[f(a)+4f(a+h)+2f(a+2h)+4f(a+3h)+\\ldots + f(b)\\right] \\\\ &=\\frac{h}{3} \\left[f(a)+f(b)+4\\!\\!\\!\\!\\sum_{\\: \\: k \\: \\textrm{odd} \\\\ 1\\ldots N-1} \\!\\! f(a+kh)+2 \\!\\!\\!\\!\\sum_{\\: \\: k \\: \\textrm{even} \\\\ 2\\ldots N-2} \\!\\! f(a+kh) \\right] \\end{align} $$ This formula is called the $\\textit{extended Simpson's rule}$.\n\nThe sums over odd and even values of $k$ can be conveniently accomplished in Python by using a for loop of the form \"$\\texttt{for k in the range(1,N,2)}$\" for the odd terms and \"$\\texttt{for k in the range(2,N,2)}$\" for the even terms. Alternatively, we could rewrite Simpson's rule as $$ I(a,b)\\approx \\frac{h}{3} \\left[f(a)+f(b)+4\\sum_{k=1}^{N/2} f(a+(2k-1)h)+2\\sum_{k=1}^{N/2-1}f(a+2kh) \\right] $$ and use an ordinary for loop.\n\nWith this insight, we modify our integration code once more to adapt it to the Simpson's rule:\n\n\n```python\ndef f(x):\n return x**4 - 2*x + 1\n\nN = 10\na = 0.0\nb = 2.0\nh = (b-a)/N\n\ns = f(a) + f(b)\nfor k in range(1,N,2):\n s += 4*f(a+k*h)\n \nfor k in range(2,N,2):\n s += 2*f(a+k*h)\n \nprint(h*s/3.0)\n```\n\n 4.400426666666668\n\n\nIn summary, we have seen three increasingly more sophisticated integration algorithms. Here is an overview of their performance for 10 integration points: $$ \\begin{array}{llrr} \\text{Method} & & \\text{Value} & \\text{Error} \\\\\\hline \\text{Flat} & : & 2.3003 & 52.20 \\% \\\\ \\text{Trapezoidal} & : & 4.5066 & 2.40\\% \\\\ \\text{Simpson} & : & 4.4004 & 0.01\\% \\\\ \\text{Exact} & : & 4.4000 \\\\ \\end{array} $$\n\n# Higher-order integration methods\n\nIn the previous section, we have increased the sophistication of approximating the integrand in each method. We went from constants (0th order) to lines (1st order (trapezoidal)) to quadratic curves (2nd order (Simpson's rule)). In principle, we could keep going to higher and higher orders. The general expression for the integral is $$\\int_a^b f(x)dx\\approx \\sum_{k=1}^{N}w_kf(x_k),$$ where $x_k$ are the positions at which we evaluate the integrand and $w_k$ are a set of weights given by the integration method. Since we used homogenuous grids (i.e. fixed grid spacing), we can split off the grid spacing $h$ from the weights $w_k=c_k h$. The different integration methods then only differ in the set of coefficients that make up the integration weights. Different coefficient sets are summarized in the table below. 
$$ \\begin{array}{lll} \\text{Order} & \\text{Polynomial} & \\text{Coefficients} \\\\\\hline \\text{0 (naive)} & \\text{constant} & 1,1,1,\\ldots,1 \\\\ \\text{1 (trapezoidal rule)} & \\text{straight line} & \\frac{1}{2}, 1, 1, \\ldots, 1, \\frac{1}{2} \\\\ \\text{2 (Simpson's rule)} & \\text{quadratic} & \\frac{1}{3},\\frac{4}{3},\\frac{2}{3},\\frac{4}{3},\\ldots,\\frac{4}{3},\\frac{1}{3} \\\\ \\text{3} & \\text{cubic} & \\frac{3}{8},\\frac{9}{8},\\frac{9}{8},\\frac{3}{4},\\frac{9}{8},\\frac{9}{8},\\frac{3}{4},\\ldots, \\frac{9}{8},\\frac{3}{8}\\\\ \\text{4} & \\text{quartic} & \\frac{14}{45}, \\frac{64}{45},\\frac{8}{15},\\frac{64}{45},\\frac{28}{45},\\frac{64}{45}, \\frac{8}{15},\\frac{64}{45}, \\ldots, \\frac{64}{45},\\frac{14}{45} \\end{array}$$\n\nWe can now view integration as a simple sum over integration $\\textit{weights}$ and the integrand evaluated at certain discretized $\\text{grid}$ points. The weights depend on the integration method and give the weighting of the integrand at a corresponding grid point in the sum. With this new perspective we can rethink our integration program and make it more modular. If we create a function that prefills the weights, we can compute integrals for different methods in the same code framework. Here is an example of our Simpson rule integration adapted to a sum over a grid with the corresponding weights.\n\n\n\n```python\nfrom numpy import empty,array\ndef f(x):\n return x**4 - 2*x + 1\n\n# variable definitions\nN = 10\na = 0.0\nb = 2.0\nh = (b-a)/N\ns = 0.0\nweights = empty(N+1,float)\ngrid = empty(N+1,float)\n\n# grid specification (this step is in principle redundant, but we use it to illustrate the concept)\nfor k in range (0,N+1): \n grid[k]=a+k*h\n\n# weights specification (only this part would need to be changed for a new method)\nfor k in range(1,N,2):\n weights[k]=h*4.0/3.0\n \nfor k in range(2,N,2):\n weights[k]=h*2.0/3.0\n\nweights[0]=h/3.0\nweights[N]=h/3.0\n\n# actual integration\nfor k in range(0,N+1):\n s += weights[k]*f(grid[k])\n \nprint(s)\n```\n\n 4.400426666666667\n\n\nThe program gives the same value as its orginal version. We can now, however, easily upgrade it by modifying the section that specifies the weights. In principle, we could even outsource the weight definitions into subroutines. \n\n# Gaussian quadrature\n\nEquipped with our new perspective of integration as looping over a grid with weighted integration points, we now consider non-uniform grids. We will choose the position of the grid points such that they are optimal for the integration. The integration weights are then derived correspondingly.\n\nFor simplicity, we restrict the domain of integration to $[-1,1]$ and will later scale it back to $[a,b]$. Our objective is then to find the grid points $x_k$ and weights $w_k$ for the integral $$\\int_{-1}^{1}f(x)dx\\approx\\sum_{k=1}^Nw_kf(x_k).$$ Let us assume that our function $f(x)$ is a polynomial in $x$ of degree $2N-1$. We can then use the properties of the Legendre polynomials for our integration method: \n - The legendre polynomial $P_N(x)$ is orthogonal to every polynomial of lower degree, i.e. $ \\int_{-1}^{1}x^k P_N(x)dx=0$ for all integers $k$ in the range $0 \\leq k \\lt N$. \n - For all $N$, $P_N(x)$ has $N$ real roots that all lie in the interval from $-1$ to $1$.\n \nIf we now divide $f(x)$ by the Legendre polynomial $P_N(x)$, we get $$f(x)=q(x)P_N(x)+r(x),$$ where $q(x)$ and $r(x)$ are both polynomials of degree $N-1$ or less. 
This simplifies our original integral $$\\int_{-1}^{1} f(x)dx=\\int_{-1}^{1} q(x)P_N(x)dx+\\int_{-1}^{1}r(x)dx=\\int_{-1}^{1}r(x)dx, $$ because $P_N(x)$ is orthogonal to $q(x)$. This transformation has not gained us anything, since we do not know the function $r(x)$. However, we can make the same substitution in the summation over the grid and weights: $$ \\sum_{k=1}^N w_kf(x_k)=\\sum_{k=1}^Nq(x_k)P_N(x_k)+\\sum_{k=1}^Nw_kr(x_k)$$\nWe now make use of the second property of the Legendre polynomials, namely, that $P_N(x)$ has $N$ many zeros between $-1$ and $1$. We take these zero positions (roots) as the grid points $x_k$ so that $P_N(x_k)=0$ for all $k$. This simplifies our sum to $$\\sum_{k=1}^N w_k f(x_k) = \\sum_{k=1}^N w_k r(x_k)=\\int_{-1}^{1} r(x) dx = \\int_{-1}^{1} f(x) dx .$$ The second equality holds only, because we assumed that $f(x)$ is a polynomial of degree $2N-1$, which makes $r(x)$ a polynomial of degree $N-1$ or less.\n\nTo proceed, we have to do the following:\n\n - Find the roots of $P_N(x)$: this is possible, but tedious. The derviation can be found e.g. in (Abramowitz & Stegun 1972, p. 887).\n \n - Given a set of $x_k$ grid points (corresponding to the roots of $P_N(x)$) compute the corresponding weights.\n \nFor the second point, we assume that we can find a single polynomial of degree $N-1$ to fit the function $f(x)$ or $r(x)$. For this, we use the _method of interpolating polynomials_. Our _interpolating polynomial_ is $$ \\begin{align} \\phi_k(x) & =\\prod_{m=1\\ldots N \\\\ \\:\\:\\: m\\neq k}\\frac{(x-x_m)}{(x_k-x_m)} \\\\ & = \\frac{(x-x_1)}{(x_k-x_1)} \\times \\ldots \\times \\frac{(x-x_{k-1})}{(x_k-x_{k-1})} \\frac{(x-x_{k+1})}{(x_k-x_{k+1})} \\times \\ldots \\times \\frac{(x-x_N)}{(x_k-x_N)} \\end{align}$$ Note that the numerator contains one factor for each sample point except the point $x_k$. $\\phi_k(x)$ is thus a polynomial of degree $N-1$. It has the property $$\\phi_k(x_m)=\\delta_{km}.$$ We use this property to define a new function $\\Phi(x)$: $$\\Phi(x)=\\sum_{k=1}^Nf(x_k)\\phi_k(x),$$ which is also a polynomial of degree $N-1$. By definition $\\Phi(x)$ fits $f(x)$ exactly at $x_m$: $$\\Phi(x_m)=\\sum_{k=1}^Nf(x_k)\\phi_k(x_m)=\\sum_{k=1}^Nf(x_k)\\delta_{km}=f(x_m).$$\n \nWe now insert $\\Phi(x)$ into our integral $$\\int_{-1}^{1}f(x)dx\\approx\\int_{-1}^{1}\\Phi(x)dx=\\int_{-1}^{1}\\sum_{k=1}^Nf(x_k)\\phi_k(x)dx=\\sum_{k=1}^Nf(x_k)\\int_{-1}^{1}\\phi_k(x)dx=\\sum_{k=1}^Nf(x_k)w_k.$$ In other words, for an arbitrary set of points $x_k$, the integration weights are given by $$w_k=\\int_{-1}^{1}\\phi_k(x)dx.$$ \n\nThis is a general expression for finding integration weights for a given set of grid points. For the Gauss Legendre quadrature, we insert the roots of $P_N(x)$ as $x_k$ into the interpolating polynomial and then carry out the integral to find the corresponding weights $w_k$. For small $N$, analytic expressions have been derived. In general, however, roots and weights are calculated numerically. We have done this in the accompanying program gaussxw.py. The figure below shows the positions of the grid points and the values of the corresponding weights for $N=10$ and $N=100$. The grid points are not evenly spaced. Their density increases towards the edges of the integration interval. Concomitantly, the weights are lowest at the interval edges and rise in towards the middle of the interval, where the point density is lowest. \n\n\n\nThe calculation of the grid points and weights in Gaussian quadrature takes a bit of effort. 
In return, Gaussian quadrature exactly integrates functions that are polynomials of degree $2N-1$ with only $N$ grid points. This is due to the properties of the Legendre polynomials and the clever grid choice. As an example, we will consider our test function $x^4-2x+1$. This time, we will integrate it with only $N=3$ grid points using Gaussian quadrature.\n\nBefore we perform the integration, we need to briefly consider how to change the integration interval from $[-1,1]$ to $[a,b]$. Since the area under a curve does not depend on where that curve is along the $x$ axis, the sample points can be freely slid up and down the $x$ axis _en masse_. If the desired domain is wider or narrower than the interval from $-1$ to $1$ then we also need to spread the points out or squeeze them together. The stretching or squeezing operation that accomplishes this is $$x_k'=\\frac{1}{2}(b-a)x_k+\\frac{1}{2}(b+a).$$ Similarly, the weights do not change, if we are simply sliding the sample points up and down the $x$ axis, but if the width of the integration domain changes then the value of the integral will increase or decrease by a corresponding factor. The weights have to be rescaled accordingly $$w_k'=\\frac{1}{2}(b-a)w_k.$$\n\nHere, finally, is then then the Python program that integrates $x^4-2x+1$ with only 3 grid points using Gaussian quadrature.\n\n\n```python\nfrom gaussxw import gaussxw\n\ndef f(x):\n return x**4 - 2*x + 1\n\nN = 3\na = 0.0\nb = 2.0\n\n# Calculate the sample points and weights, then map them\n# to the required integration domain\nx,w = gaussxw(N)\nxp = 0.5*(b-a)*x + 0.5*(b+a)\nwp = 0.5*(b-a)*w\n\n# Perform the integration\ns = 0.0\nfor k in range(N):\n s += wp[k]*f(xp[k])\n\nprint(s)\n```\n\n 4.4000000000000075\n\n\n# Choosing an integration method\n\nWhich integration method should you use in practice? There is no general answer, because the performance of the methods depends on the nature of the integrand. If in doubt, try several methods. Below is a brief table of advantages and disadvantes of each method. $$ \\begin{array}{lll} \\text{Method} & \\text{Advantage} & \\text{Disadvantage} \\\\\\hline \\text{Trapezoidal} & \\text{simple and versatile; works for pathological integrands and noisy data} & \\text{limited accuracy} \\\\ \\text{Simpson} & \\text{simple with good accuracy} & \\text{less suitable for pathological integrands and noisy data} \\\\ \\text{Gaussian} & \\text{very good accuracy} & \\text{less suitable for pathological integrands and noisy data} \\\\ \\end{array} $$\n\n# Integrals over infinite ranges\n\nOften in physics we encounter integrals over infinite ranges, like $\\int_0^\\infty f(x)dx$. The techniques we have used so far will not work for such integrals, because we would need an infinite number of integration points. The solution to this problem is to change variables. For an integral over the range from $0$ to $\\infty$ the standard change of variables is $$z=\\frac{x}{1+x} \\quad \\text{or equivalently} \\quad x=\\frac{z}{1-z} .$$ With $dx=dz/(1-z)^2$ we obtain $$\\int_0^\\infty f(x)dx=\\int_0^1 \\frac{1}{(1-z)^2} f(\\frac{z}{1-z})dz,$$ which can be done using any of the techniques covered earlier in the chapter.\n\nFor integrals over a range from a nonzero value $a$ to $\\infty$, we have to make two changes of variables. First we change $y=x-a$, which shifts the start of the integration range to $0$, and then apply the previous substitution, but this time in $y$: $z=y/(1+y)$. 
Or we combine both changes into a single one: $$ z=\\frac{x-a}{1+x-a} \\quad \\text{or equivalently} \\quad x=\\frac{z}{1-z}+a .$$ Again $dx=dz/(1-z)^2$, so that we end up with $$\\int_a^\\infty f(x)dx=\\int_0^1 \\frac{1}{(1-z)^2} f(\\frac{z}{1-z}+a)dz .$$\n\nIntegrals from $-\\infty$ to $a$ can be done the same way by substituting $z \\rightarrow -z$ and integrals from $-\\infty$ to $\\infty$ can be broken up into two integrals from $-\\infty$ to $0$ and from $0$ to $\\infty$. Alternatively, we could use a single change of variables, such as $$x=\\frac{z}{1-z^2}, \\quad dx=\\frac{1+z^2}{(1-z^2)^2}dz,$$ which would give $$\\int_{-\\infty}^\\infty f(x)dx=\\int_{-1}^1 \\frac{1+z^2}{(1-z^2)^2} f(\\frac{z}{1-z^2})dz.$$\n\nAs an example, we will calculate the value of the following integral using Gaussian quadrature: $$I=\\int_0^\\infty e^{-t^2}dt$$ We make the change of variables $z=t/(1+t)$ and the integral becomes $$I=\\int_0^1 \\frac{e^{-z^2/(1-z)^2}}{(1-z)^2}dz .$$ We modify our program for the function $x^4-2x+1$ and perform the integration with $N=50$ grid points.\n\n\n```python\nfrom gaussxw import gaussxwab\nfrom math import exp\n\ndef f(z):\n return exp(-z**2/(1-z)**2)/(1-z)**2\n\nN = 50\na = 0.0\nb = 1.0\nx,w = gaussxwab(N,a,b)\ns = 0.0\nfor k in range(N):\n s += w[k]*f(x[k])\nprint(s)\n```\n\n 0.8862269254528349\n\n\nThe value of this integral is known analytically: $I=\\frac{1}{2}\\sqrt{\\pi}=0.886226925453\\ldots$. Again we see the impressive accuracy of the Gaussian quadrature method. With just 50 sample points, we have calculated an estimate of the integral that is correct to the limits of machine precision.\n\n# Multiple integrals\n\nIntegrals over more than one variable are common in physics problems and can be tackled using generalizations of the methods we have already seen. Consider for instance the integral $$I=\\int_0^1\\int_0^1f(x,y) dx dy.$$ We can rewrite this by defining a function $F(y)$ thus $$F(y)=\\int_0^1f(x,y)dx.$$ Then our integral becomes $$I=\\int_0^1 F(y) dy.$$ We can thus do multiple integrals numerically by first integrating in one variable and then in the others. For instance, if we do the integrals by Gaussian quadrature with the same number $N$ of points for both $x$ and $y$ integrals, we have $$F(y)\\approx \\sum_{i=1}^N w_i f(x_i,y) \\quad \\text{and} \\quad I\\approx \\sum_{j=1}^N w_j F(y_j) .$$ Substituting the first into the second sum not surprisingly gives us the following double sum for the integral $$ I\\approx\\sum_{i=1}^N\\sum_{j=1}^N w_i w_j f(x_i,y_j). $$ This expression has a form similar to the standard integration formula for single integrals with a sum over values of the function $f(x,y)$ at a set of grid points, multiplied by appropriate weights. Now the points are distributed on a two dimensional grid. The approach can be generalized to arbitrary dimensions. The figure below shows the points on a two dimensional Gaussian quadrature grid.\n\n\n\n\n# Derivatives\n\n## Forward and backward differences\n\nThe standard definition of a derivative, the one you see in calculus books, is $$\\frac{df}{dx}=\\lim_{h\\rightarrow 0}\\frac{f(x+h)-f(x)}{h} .$$ The basic method for calculating numerical derivatives is precisely an implementation of this formula. We cannot take the limit $h \\rightarrow 0$ in practice, but we can make $h$ very small and then calculate $$\\frac{df}{dx} \\approx \\frac{f(x+h)-f(x)}{h} .$$ This is called the _forward difference_, because it is measured in the forward (i.e. positive) direction from the point of interest $x$. 
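As a minimal sketch (using $f(x)=x^4-2x+1$ from above purely as an example, and a small but otherwise arbitrary step $h$; the choice of $h$ is discussed in the Errors section below), the forward difference is a one-liner:


```python
# Minimal sketch of the forward difference for the example function f(x) = x**4 - 2*x + 1
def f(x):
    return x**4 - 2*x + 1

def forward_diff(f, x, h=1e-8):
    return (f(x + h) - f(x)) / h

print(forward_diff(f, 1.0))  # roughly 2.0; the exact derivative 4x**3 - 2 equals 2 at x = 1
```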
Analogously, we can define the _backward difference_ $$\\frac{df}{dx} \\approx \\frac{f(x)-f(x-h)}{h} .$$ Both are shown in the figure below. They usually give the same value, provided $h$ is small enough and the function is not pathological.\n\n\n\n\n## Errors\n\nForward and backward differences are usually not the most accurate. To understand this, we Taylor expand $f(x)$ around $x$: $$f(x+h)=f(x)+hf'(x)+\\frac{1}{2}h^2f''(x)+\\ldots ,$$ where $f'$ and $f''$ denote the first and second derivatives of $f$, respectively. Rearranging gives us $$f'(x)=\\frac{f(x+h)-f(x)}{h}-\\frac{1}{2}hf''(x)+\\dots .$$ The first part is our numeric forward difference. The following terms are omitted in our numeric difference; they contribute to the error and are, in leading order, proportional to $h$ ($\\frac{1}{2}h|f''(x)|$). It would seem that if we make $h$ smaller, we would reduce the error. However, we are bound by machine precision, which affects the accuracy of the numeric difference and therefore gives us a lower bound for $h$. Since $f(x+h)$ and $f(x)$ are typically close in value, the total rounding error on $f(x+h)-f(x)$ will be $2C|f(x)|$, where $C$ is the machine precision (typically $10^{-16}$ in Python). The total error for our derivative is thus $$\\epsilon =\\frac{2C|f(x)|}{h} + \\frac{1}{2}h|f''(x)|.$$\n\nWe want to find the value of $h$ that minimizes this error, so we differentiate with respect to $h$ and set the result equal to zero, which gives $$-\\frac{2C|f(x)|}{h^2}+\\frac{1}{2}|f''(x)|=0, \\quad \\text{or equivalently} \\quad h=\\sqrt{4C\\left|\\frac{f(x)}{f''(x)}\\right|} .$$ Substituting this back into our expression for the error $\\epsilon$, we find that the lowest error on our derivative is $$\\epsilon=\\sqrt{4C|f(x)f''(x)|}.$$ If both $f(x)$ and $f''(x)$ are of order $1$, we should choose $h$ to be of order $\\sqrt{C}$, which will typically be $10^{-8}$, and our final error will also be of order $\\sqrt{C}$, i.e. $10^{-8}$. In many cases, this might be ok, but it is significantly less accurate than what we have seen so far and we can do better.\n\n## Central differences\n\nA simple improvement on the forward and backward difference is the _central difference_: $$\\frac{df}{dx}\\approx\\frac{f(x+h/2)-f(x-h/2)}{h}.$$ The central difference is similar to the forward and the backward difference, approximating the derivative using the difference between two values of $f(x)$ at points a distance $h$ apart. What has changed is that the two points are now placed symmetrically around $x$, one at a distance $\\frac{1}{2}h$ in the forward and one at $-\\frac{1}{2}h$ in the backward direction. \n\nTo calculate the approximation error on the central difference we write two Taylor expansions: $$\\begin{align} & f(x+h/2)=f(x)+\\frac{1}{2}hf'(x)+\\frac{1}{8}h^2f''(x)+\\frac{1}{48}h^3f'''(x)+\\ldots \\\\ & f(x-h/2)=f(x)-\\frac{1}{2}hf'(x)+\\frac{1}{8}h^2f''(x)-\\frac{1}{48}h^3f'''(x)+\\ldots\\end{align}$$ Subtracting the second expression from the first and rearranging for $f'(x)$, we get $$f'(x)=\\frac{f(x+h/2)-f(x-h/2)}{h}-\\frac{1}{24}h^2f'''(x)+\\ldots.$$ To leading order the magnitude of the error is now $\\frac{1}{24}h^2|f'''(x)|$, which is one order in $h$ higher than before. 
The size of the rounding error remains unchanged, so the total error on our derivative is $$\\epsilon =\\frac{2C|f(x)|}{h} + \\frac{1}{24}h^2|f'''(x)|.$$ Differentiating to find the minimum and rearranging, we find that the optimal value of $h$ is $$h=\\left(24C\\left|\\frac{f(x)}{f'''(x)}\\right| \\right)^{\\frac{1}{3}}.$$ Substituting this back into the error itself, we find the optimal error to be $$\\epsilon=(\\frac{9}{8}C^2[f(x)]^2|f'''(x)|)^\\frac{1}{3}.$$ If we again assume that $f(x)$ and $f'''(x)$ are of order 1, the ideal $h$ is now of order $C^\\frac{1}{3}$, which is typically $10^{-5}$, but the error is of order $C^\\frac{2}{3}$, which will be $10^{-10}$. \n\nThus, the central difference is about a factor $100$ more accurate than the forward or backward difference. And we achieve this with a larger value of $h$, which is a bonus.\n\n## Second derivatives\n\nWe can also derive numerical approximations for the second derivative of a function $f(x)$. The second derivative is, by definition, the derivative of the first derivative, so we can calculate it by applying our first-derivative formulas twice. If we do this for the central difference formula we can write expressions for the first derivative at $x+h/2$ and $x-h/2$: $$ f'(x+h/2)\\approx \\frac{f(x+h)-f(x)}{h} \\quad \\text{and} \\quad f'(x-h/2)\\approx \\frac{f(x)-f(x-h)}{h} .$$ This gives us for the second derivative: $$\\begin{align} f''(x) & \\approx \\frac{f'(x+h/2)-f'(x-h/2)}{h} \\\\ & = \\frac{(f(x+h)-f(x))-(f(x)-f(x-h))}{h^2} \\\\ & = \\frac{f(x+h)-2f(x)+f(x-h)}{h^2} \\end{align}$$ This is the simplest approximation for the second derivative and we will use it extensively for solving second-order differential equations. \n\nThe error of the second derivative can be estimated analogously to before by Taylor expanding $f(x+h)$ and $f(x-h)$. We obtain $$\\epsilon=\\frac{4C|f(x)|}{h^2}+\\frac{1}{12}h^2|f''''(x)|.$$ The optimum values of $h$ and $\\epsilon$ are $$h=\\left(48C\\left|\\frac{f(x)}{f''''(x)} \\right| \\right)^\\frac{1}{4} \\quad \\text{and} \\quad \\epsilon=(\\frac{4}{3}C|f(x)f''''(x)|)^\\frac{1}{2} .$$ This means that the error in the 2nd derivative is of order $\\sqrt{C}$, which is typically around $10^{-8}$. 
Our 2nd derivative therefore has about the same error as the forward or backward difference for the first derivative.\n\n\n```python\n\n```\n", "meta": {"hexsha": "56277329ae79f723066790835f0c9ae1c1d4c987", "size": 32638, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Lecture 2 - Integrals and Derivatives/Integrals_and_Derivatives.ipynb", "max_stars_repo_name": "hlappal/comp-phys", "max_stars_repo_head_hexsha": "8d78a459bc5849ddf5c6c21d484503136bccccbd", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Lecture 2 - Integrals and Derivatives/Integrals_and_Derivatives.ipynb", "max_issues_repo_name": "hlappal/comp-phys", "max_issues_repo_head_hexsha": "8d78a459bc5849ddf5c6c21d484503136bccccbd", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Lecture 2 - Integrals and Derivatives/Integrals_and_Derivatives.ipynb", "max_forks_repo_name": "hlappal/comp-phys", "max_forks_repo_head_hexsha": "8d78a459bc5849ddf5c6c21d484503136bccccbd", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 76.9764150943, "max_line_length": 1594, "alphanum_fraction": 0.6347202647, "converted": true, "num_tokens": 8488, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9005297807787536, "lm_q2_score": 0.9399133523506772, "lm_q1q2_score": 0.8464199651433787}} {"text": "# Numerical Integration\n\n## Equally Spaced Points\n\n### Trapezoidal Rule\nAssuming $n$ equally spaced points $x_0=x_l, x_1, \\ldots , x_{n-2}, x_{n-1}=x_u$ with $h = x_i - x_{i-1}$ and corresponding values of the function stored in an array $y_0, y_1, \\ldots , y_{n-2}, y_{n-1}$, the approximation to the integral is\n$$\nI = \\int_{x_l}^{x_{u}} y \\ dx \\approx \\frac{h}{2} \\left[ y_0 + 2 \\left( y_1 + y_2 + \\cdots + y_{n-3} + y_{n-2} \\right) + y_{n-1} \\right]\n$$\nThe above expresion presumes that the function $y = f(x)$ has been evaluated at $n$ equally spaced points prior to computing the integral\n\nIn this procedure, we take the following approach:\n\n1. Generate $n$ equally spaced data points $x_i, i = 0, 1, 2, \\ldots , n-1$\n2. Compute the value of the function for each of the data points $y_i = f(x_i)$\n3. 
Carry out numerical integration using Trapezoidal rule\n\n\n```python\nfrom __future__ import print_function, division\nimport numpy as np\nfrom matplotlib import pyplot as plt\n\n%matplotlib inline\n\ndef trap(h, y):\n n = len(y)\n s = y[0] + 2 * sum(y[1:-1]) + y[-1]\n return s * h / 2.0\n\nx1 = 0.0\nx2 = np.pi\nn = 500\nx = np.linspace(x1, x2, n+1)\ny = np.sin(x)\nh = (x2 - x1) / float(n)\nI = trap(h, y)\n%timeit trap(h, y)\nprint(I)\nplt.fill(x, y)\nplt.axis([x1, 2*x2, -1.1, 1.1])\nplt.grid()\nplt.show()\n```\n\n\n```python\ndef f(x):\n return np.sin(x)\n\ndef trap2(f, x1, x2, n):\n h = (x2 - x1) / n\n x = np.linspace(x1, x2, n+1)\n y = f(x)\n s = y[0] + 2 * sum(y[1:-1]) + y[-1]\n return s * h / 2\n\nI = trap2(f, 0, np.pi, 500)\nprint(I)\n%timeit trap2(f, 0, np.pi, 500)\n```\n\n 1.99999342026\n 10000 loops, best of 3: 105 \u00b5s per loop\n\n\n\n```python\ndef trap3(f, x1, x2, n):\n h = (x2 - x1) / n\n s = (f(x1) + f(x2)) / 2\n x = x1 + h\n while x <= x2:\n s += f(x)\n x += h\n return s * h\n\nI = trap3(f, 0, np.pi, 500)\nprint(I)\n%timeit trap3(f, 0, np.pi, 500)\n```\n\n 1.99999342026\n 1000 loops, best of 3: 851 \u00b5s per loop\n\n\n### Simpson's 1/3 Rule\n\n$$\nI = \\int_{x_1}^{x_2} y \\, dx \\approx \\frac{h}{3} \\left[ y_0 + 4 \\left( y_1 + y_3 + \\cdots + y_{n-2} \\right) + 2 \\left( y_2 + y_4 + \\cdots + y_{n-3} \\right) + y_{n-1} \\right]\n$$\n\n\n```python\ndef simp(f, x1, x2, n):\n h = (x2 - x1) / n\n x = np.linspace(x1, x2, n+1)\n y = f(x)\n s = y[0] + y[-1]\n s += 4 * sum(y[1:-1:2])\n s += 2 * sum(y[2:-2:2])\n return s * h / 3\n\nI = simp(f, 0, np.pi, 500)\nprint(I)\n%timeit simp(f, 0, np.pi, 500)\n```\n\n 2.00000000002\n 10000 loops, best of 3: 106 \u00b5s per loop\n\n\n\n```python\nfrom sympy import *\nx = symbols('x')\nintegrate(sin(x), (x, 0, pi))\n```\n\n\n\n\n 2\n\n\n", "meta": {"hexsha": "501d17490e883c13bf10ce4d025258a32e1c929e", "size": 14830, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Workshop/numerical_integration.ipynb", "max_stars_repo_name": "satish-annigeri/Notebooks", "max_stars_repo_head_hexsha": "92a7dc1d4cf4aebf73bba159d735a2e912fc88bb", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Workshop/numerical_integration.ipynb", "max_issues_repo_name": "satish-annigeri/Notebooks", "max_issues_repo_head_hexsha": "92a7dc1d4cf4aebf73bba159d735a2e912fc88bb", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Workshop/numerical_integration.ipynb", "max_forks_repo_name": "satish-annigeri/Notebooks", "max_forks_repo_head_hexsha": "92a7dc1d4cf4aebf73bba159d735a2e912fc88bb", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 64.4782608696, "max_line_length": 9448, "alphanum_fraction": 0.7725556305, "converted": true, "num_tokens": 1080, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9399133447766224, "lm_q2_score": 0.9005297807787537, "lm_q1q2_score": 0.8464199583227169}} {"text": "### Principal Component Analysis\n\n\n```python\nimport scipy.io as sio\nimport numpy as np\nimport matplotlib.pyplot as plt\nmat_contents = sio.loadmat('mnistAll.mat')\n```\n\n#### Load data\n\n\n```python\ntrain_img = mat_contents['mnist'][0][0][0]\ntrain_label = mat_contents['mnist'][0][0][2]\nidx3 = np.where(mat_contents['mnist'][0][0][2]==3)\nndx = idx3[0][:1000]; d = 28*28;\nn = len(ndx)\nX = np.double(train_img[:,:,ndx].reshape(d,n))\n```\n\n\n```python\nplt.subplot(1,2,1)\nplt.imshow(X[:,0].reshape(28,28))\nplt.subplot(1,2,2)\nmu = np.mean(X,axis=1)\nplt.imshow(mu.reshape(28,28))\n```\n\n#### processing\n\n\n```python\ndef pcaPmtk(x):\n n,d=x.shape\n K = np.linalg.matrix_rank(x)x\n mu = np.mean(x,axis=1)\n xc = x-np.transpose(np.tile(mu,(d,1)))\n w,v = np.linalg.eigh(np.cov(xc))\n wsort = np.sort(w)[::-1]\n ind = np.argsort(w)[::-1]\n B = v[:,ind[:K]]\n z = np.matmul(np.transpose(xc),B)\n xrecon = np.matmul(z,np.transpose(B))+np.tile(mu,(d,1))\n return B,z,w\n```\n\n\n```python\nv,z,evals = pcaPmtk(X)\n```\n\n#### Principal Axis\n\n\n```python\nfor i in range(4):\n plt.subplot(2,2,i+1)\n plt.imshow(v[:,i].reshape(28,28))\n```\n\n#### Reconstructed Image\n\n\n```python\n# number of principal axes\nKs = [5,10,20,np.linalg.matrix_rank(X)]\nfor j in range(4):\n xre = np.matmul(z[124,:Ks[j]],np.transpose(v[:,:Ks[j]]))+mu\n plt.subplot(2,2,j+1)\n plt.imshow(xre.reshape(28,28))\n```\n\n### Proof\n\n\n#### Cost function\n$$J(w,z_{i1})=\\frac{1}{N}\\sum^N_{i=1}(x_i-z_{i1}w_1)^\\top(x_i-z_{i1}w_1)=\\frac{1}{N}\\sum^N_{i=1}(x_i^\\top x_i-2x_i^\\top z_{i1}w_1+z_{i1}^2w^\\top_1 w_1)$$\n\n#### Minimizing the Cost function, \nSince $w_1$ is orthnormal, $w_1^\\top w_1 = 1$\n\n\\begin{align*}\n\\frac{\\partial J}{\\partial z_{j1}} & = \\frac{1}{N}\\sum^N_{i=1}(-2x_i^\\top \\frac{\\partial z_{i1}}{\\partial z_{j1}}w_1+2z_i\\frac{\\partial z_{i1}}{\\partial z_{j1}}w_1^\\top w_1) = \\frac{1}{N}\\sum^N_{i=1} \\delta_{ij}(-2)(x_i^\\top w_1-z_{i1}w_1^\\top w_1) \n\\\\\n&= \\frac{1}{N}(x_j^\\top w_1-z_{j1}w_1^\\top w_1) = 0 \n\\qquad\\text{ Therefore, } \\qquad z_{j1}=w_1^\\top x_j\n\\end{align*}\n\n#### Substitute back to the Cost function,\n\\begin{align*}\nJ(z_{i1}) &= \\frac{1}{N}\\sum^N_{i=1}(x^\\top_i x_i - 2z_{i1}w_1^\\top x_i + z_{i1}^2) = \\frac{1}{N}\\sum^N_{i=1}(x_i^\\top x_i - 2z_{i1}^2+z_{i1}^2) = \\frac{1}{N}\\sum^N_{i=1}(x_i^\\top x_i-z_{i1}^2) = constant - \\frac{1}{N}\\sum^N_{i=1}z^2_{i1}\n\\end{align*}\n\n#### Covariance matrix\nConsider $z_{i1} = w_1^\\top x_i$,\n\n$$\n\\frac{1}{N}\\sum^N_{i=1}z_{i1}^2 = \\frac{1}{N}\\sum^N_{i=1}w_1^\\top x_ix_i^\\top w_1=w_1^\\top \\bigg(\\frac{1}{N}\\sum^N_{i=1}x_ix_i^\\top \\bigg)w_1 = w_1^\\top \\hat{\\Sigma}w_1\n$$\n\nTo minimize the cost function $J(z_{i1})$ is equivalent to maximize $\\frac{1}{N}\\sum^N_{i=1}z_{i1}^2$, so we can rewrite the cost function as $\\tilde{J}(w_1)$ with a constraint of $w^\\top_1w_1=1$ using Lagrange multiplier.\n\\begin{equation}\n\\tilde{J}(w_1) = w^\\top_1\\hat{\\Sigma}w_1+\\lambda_1(w^\\top_1w_1-1)\n\\end{equation}\nAfter minimizing the new cost function, obtain the eigenvector \n$$\\frac{\\partial \\tilde{J}}{\\partial w_1} = 2\\hat{\\Sigma}w_1-2\\lambda_1w_1 = 0 \\qquad \\longrightarrow \\qquad \\hat{\\Sigma}w_1=\\lambda_1w_1$$\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": 
"e0ebb4c1e4bdc26c29634e37625d45704821e8ef", "size": 43795, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Machine Learning A Probabilistic Perspective/12LatentLinearModels/F12.5/.ipynb_checkpoints/12.5pcaImageDemo-checkpoint.ipynb", "max_stars_repo_name": "zcemycl/ProbabilisticPerspectiveMachineLearning", "max_stars_repo_head_hexsha": "8291bc6cb935c5b5f9a88f7b436e6e42716c21ae", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2019-11-20T10:20:29.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-09T11:15:23.000Z", "max_issues_repo_path": "Machine Learning A Probabilistic Perspective/12LatentLinearModels/F12.5/.ipynb_checkpoints/12.5pcaImageDemo-checkpoint.ipynb", "max_issues_repo_name": "zcemycl/ProbabilisticPerspectiveMachineLearning", "max_issues_repo_head_hexsha": "8291bc6cb935c5b5f9a88f7b436e6e42716c21ae", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Machine Learning A Probabilistic Perspective/12LatentLinearModels/F12.5/.ipynb_checkpoints/12.5pcaImageDemo-checkpoint.ipynb", "max_forks_repo_name": "zcemycl/ProbabilisticPerspectiveMachineLearning", "max_forks_repo_head_hexsha": "8291bc6cb935c5b5f9a88f7b436e6e42716c21ae", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2020-05-27T03:56:38.000Z", "max_forks_repo_forks_event_max_datetime": "2021-05-02T13:15:42.000Z", "avg_line_length": 145.0165562914, "max_line_length": 15012, "alphanum_fraction": 0.888092248, "converted": true, "num_tokens": 1340, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9473810511092412, "lm_q2_score": 0.8933094039240554, "lm_q1q2_score": 0.8463044020553414}} {"text": "```python\n%matplotlib inline\n\n'''\ngrad_descent.py\n\nUse gradient descent to find the minimum value of a\nsingle variable function. 
This also checks for the existence\nof a solution for the equation, f'(x)=0 and plots the intermediate\npoints traversed.\n'''\n\nfrom sympy import Derivative, Symbol, sympify, solve\nimport matplotlib.pyplot as plt\n\ndef grad_descent(x0, f1x, x):\n # check if f1x=0 has a solution\n if not solve(f1x):\n print('Cannot continue, solution for {0}=0 does not exist'.format(f1x))\n return None\n epsilon = 1e-6\n step_size = 1e-4\n x_old = x0\n x_new = x_old - step_size*f1x.subs({x:x_old}).evalf()\n\n # list to store the X values traversed\n X_traversed = []\n while abs(x_old - x_new) > epsilon:\n X_traversed.append(x_new)\n x_old = x_new\n x_new = x_old-step_size*f1x.subs({x:x_old}).evalf()\n\n return x_new, X_traversed\n\ndef frange(start, final, interval):\n\n numbers = []\n while start < final:\n numbers.append(start)\n start = start + interval\n \n return numbers\n\ndef create_plot(X_traversed, f, var):\n # First create the graph of the function itself\n x_val = frange(-1, 1, 0.01)\n f_val = [f.subs({var:x}) for x in x_val]\n plt.plot(x_val, f_val, 'bo')\n # calculate the function value at each of the intermediate\n # points traversed\n f_traversed = [f.subs({var:x}) for x in X_traversed]\n plt.plot(X_traversed, f_traversed, 'r.')\n plt.legend(['Function', 'Intermediate points'], loc='best')\n plt.show()\n\nif __name__ == '__main__':\n\n f = input('Enter a function in one variable: ')\n var = input('Enter the variable to differentiate with respect to: ')\n var0 = float(input('Enter the initial value of the variable: '))\n try:\n f = sympify(f)\n except SympifyError:\n print('Invalid function entered')\n else:\n var = Symbol(var)\n d = Derivative(f, var).doit()\n var_min, X_traversed = grad_descent(var0, d, var)\n if var_min:\n print('{0}: {1}'.format(var.name, var_min))\n print('Minimum value: {0}'.format(f.subs({var:var_min})))\n create_plot(X_traversed, f, var)\n\n```\n", "meta": {"hexsha": "0dbbe763be90ec92564d074b5aa82977040bc4ad", "size": 3388, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/.ipynb_checkpoints/Gradient Descent-checkpoint.ipynb", "max_stars_repo_name": "doingmathwithpython/pycon-us-2016", "max_stars_repo_head_hexsha": "08ecbddcc1dad9c6ffe83ea6d0c483c2d9dd4b62", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notebooks/.ipynb_checkpoints/Gradient Descent-checkpoint.ipynb", "max_issues_repo_name": "doingmathwithpython/pycon-us-2016", "max_issues_repo_head_hexsha": "08ecbddcc1dad9c6ffe83ea6d0c483c2d9dd4b62", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/.ipynb_checkpoints/Gradient Descent-checkpoint.ipynb", "max_forks_repo_name": "doingmathwithpython/pycon-us-2016", "max_forks_repo_head_hexsha": "08ecbddcc1dad9c6ffe83ea6d0c483c2d9dd4b62", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2017-07-23T06:53:38.000Z", "max_forks_repo_forks_event_max_datetime": "2020-05-11T09:13:57.000Z", "avg_line_length": 31.6635514019, "max_line_length": 88, "alphanum_fraction": 0.5203659976, "converted": true, "num_tokens": 612, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9441768604361741, "lm_q2_score": 0.8962513842182775, "lm_q1q2_score": 0.8462198181127885}} {"text": "# Introduction to orthogonal coordinates\n\nIn $\\mathbb{R}^3$, we can think that each point is given by the\nintersection of three surfaces. Thus, we have three families of curved\nsurfaces that intersect each other at right angles. These surfaces are\northogonal locally, but not (necessarily) globally, and are\ndefined by\n\n$$u_1 = f_1(x, y, z)\\, ,\\quad u_2 = f_2(x, y, z)\\, ,\\quad u_3=f_3(x, y, z) \\, .$$\n\nThese functions should be invertible, at least locally, and we can also write\n\n$$x = x(u_1, u_2, u_3)\\, ,\\quad y = y(u_1, u_2, u_3)\\, ,\\quad z = z(u_1, u_2, u_3)\\, ,$$\n\nwhere $x, y, z$ are the usual Cartesian coordinates. The curve defined by the intersection of two of the surfaces gives us\none of the coordinate curves.\n\n## Scale factors\n\nSince we are interested in how these surface intersect each other locally,\nwe want to express differential vectors in terms of the coordinates. Thus,\nthe differential for the position vector ($\\mathbf{r}$) is given by\n\n$$\\mathrm{d}\\mathbf{r} = \\frac{\\partial\\mathbf{r}}{\\partial u_1}\\mathrm{d}u_1\n+ \\frac{\\partial\\mathbf{r}}{\\partial u_2}\\mathrm{d}u_2\n+ \\frac{\\partial\\mathbf{r}}{\\partial u_3}\\mathrm{d}u_3\\, ,\n$$\n\nor\n\n$$\\mathrm{d}\\mathbf{r} = \\sum_{i=1}^3 \\frac{\\partial\\mathbf{r}}{\\partial u_i}\\mathrm{d}u_i\\, .$$\n\nThe factor $\\partial \\mathbf{r}/\\partial u_i$ is a non-unitary vector that takes\ninto account the variation of $\\mathbf{r}$ in the direction of $u_i$, and is then\ntangent to the coordinate curve $u_i$. We can define a normalized basis $\\hat{\\mathbf{e}}_i$\nusing\n\n$$\\frac{\\partial\\mathbf{r}}{\\partial u_i} = h_i \\hat{\\mathbf{e}}_i\\, .$$\n\nThe coefficients $h_i$ are functions of $u_i$ and we call them _scale factors_. They\nare really important since they allow us to _measure_ distances while we move along\nour coordinates. 
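Since each scale factor is just the length of the corresponding tangent vector, we can compute it by hand. As a small check, here is a sketch with plain sympy for cylindrical coordinates (r, phi, z), chosen purely as an assumed example; the `vector` module used below provides scale factors for many systems directly.


```python
# A hand-rolled check: scale factors for cylindrical coordinates (r, phi, z),
# computed as the length of the partial derivative of the position vector.
import sympy as sym

r, phi, z = sym.symbols("r phi z", positive=True)
pos = sym.Matrix([r * sym.cos(phi), r * sym.sin(phi), z])

for u in (r, phi, z):
    h = sym.sqrt(sum(c**2 for c in pos.diff(u)))
    print(u, sym.simplify(h))  # expected scale factors: 1, r and 1
```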
We would need them to define vector operators in orthogonal coordinates.\nWhen the coordinates are not orthogonal we would need to use the [metric tensor](https://en.wikipedia.org/wiki/Metric_tensor), but we are going to restrict ourselves to orthogonal systems.\n\nHence, we have the following\n\n$$\\begin{align}\n&h_i = \\left|\\frac{\\partial\\mathbf{r}}{\\partial u_i}\\right|\\, ,\\\\\n&\\hat{\\mathbf{e}}_i = \\frac{1}{h_i} \\frac{\\partial \\mathbf{r}}{\\partial u_i}\\, .\n\\end{align}$$\n\n## Curvilinear coordinates available\n\nThe following coordinate systems are available:\n\n- Cartesian;\n\n- Cylindrical;\n\n- Spherical;\n\n- Parabolic cylindrical;\n\n- Parabolic;\n\n- Paraboloidal;\n\n- Elliptic cylindrical;\n\n- Oblate spheroidal;\n\n- Prolate spheroidal;\n\n- Ellipsoidal;\n\n- Bipolar cylindrical;\n\n- Toroidal;\n\n- Bispherical; and\n\n- Conical.\n\n\nTo obtain the transformation for a given coordinate system we can use\nthe function `transform_coords` in the `vector` module.\n\n\n```python\nimport sympy as sym\nfrom continuum_mechanics import vector\n```\n\nFirst, we define the variables for the coordinates $(u, v, w)$.\n\n\n```python\nsym.init_printing()\nu, v, w = sym.symbols(\"u v w\")\n```\n\nAnd, we compute the coordinates for the **parabolic** system using ``transform_coords``.\nThe first parameter is a string defining the coordinate system and the second is\na tuple with the coordinates.\n\n\n```python\nvector.transform_coords(\"parabolic\", (u, v, w))\n```\n\nThe scale factors for the coordinate systems mentioned above are availabe.\nWe can compute them for bipolar cylindrical coordinates. The coordinates\nare defined by\n\n$$\\begin{align}\n&x = a \\frac{\\sinh\\tau}{\\cosh\\tau - \\cos\\sigma}\\, ,\\\\\n&y = a \\frac{\\sin\\sigma}{\\cosh\\tau - \\cos\\sigma}\\, ,\\\\\n&z = z\\, ,\n\\end{align}$$\n\nand have the following scale factors\n\n$$h_\\sigma = h_\\tau = \\frac{a}{\\cosh\\tau - \\cos\\sigma}\\, ,$$\n\nand $h_z = 1$.\n\n\n```python\nsigma, tau, z, a = sym.symbols(\"sigma tau z a\")\nz = sym.symbols(\"z\")\nscale = vector.scale_coeff_coords(\"bipolar_cylindrical\", (sigma, tau, z), a=a)\nscale\n```\n\nFinally, we can compute vector operators for different coordinates.\n\nThe Laplace operator for the bipolar cylindrical system is given by\n\n$$\n\\nabla^2 \\phi =\n\\frac{1}{a^2} \\left( \\cosh \\tau - \\cos\\sigma \\right)^{2}\n\\left( \n\\frac{\\partial^2 \\phi}{\\partial \\sigma^2} + \n\\frac{\\partial^2 \\phi}{\\partial \\tau^2} \n\\right) + \n\\frac{\\partial^2 \\phi}{\\partial z^2}\\, ,$$\n\nand we can compute it using the function ``lap``. 
For this function,\nthe first parameter is the expression that we want to compute the\nLaplacian for, the second parameter is a tuple with the coordinates\nand the third parameter is a tuple with the scale factors.\n\n\n```python\nphi = sym.symbols(\"phi\", cls=sym.Function)\nlap = vector.lap(phi(sigma, tau, z), coords=(sigma, tau, z), h_vec=scale)\nsym.simplify(lap)\n```\n", "meta": {"hexsha": "37ad4501c97e4936a62f91748c39070b383beb66", "size": 28739, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "docs/tutorials/curvilinear_coordinates.ipynb", "max_stars_repo_name": "nicoguaro/continuum_mechanics", "max_stars_repo_head_hexsha": "f8149b69b8461784f6ed721294cd1a49ffdfa3d7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 21, "max_stars_repo_stars_event_min_datetime": "2018-12-09T15:02:51.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-16T09:28:38.000Z", "max_issues_repo_path": "docs/tutorials/curvilinear_coordinates.ipynb", "max_issues_repo_name": "nicoguaro/continuum_mechanics", "max_issues_repo_head_hexsha": "f8149b69b8461784f6ed721294cd1a49ffdfa3d7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 223, "max_issues_repo_issues_event_min_datetime": "2019-05-06T16:31:50.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T21:21:03.000Z", "max_forks_repo_path": "docs/tutorials/curvilinear_coordinates.ipynb", "max_forks_repo_name": "nicoguaro/continuum_mechanics", "max_forks_repo_head_hexsha": "f8149b69b8461784f6ed721294cd1a49ffdfa3d7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 7, "max_forks_repo_forks_event_min_datetime": "2020-01-29T10:03:52.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-25T19:34:37.000Z", "avg_line_length": 81.6448863636, "max_line_length": 8296, "alphanum_fraction": 0.7823167125, "converted": true, "num_tokens": 1375, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9441768620069627, "lm_q2_score": 0.8962513627417531, "lm_q1q2_score": 0.8462197992429725}} {"text": "# Basis for grayscale images\n\n## Introduction\n\nConsider the set of real-valued matrices of size $M\\times N$; we can turn this into a vector space by defining addition and scalar multiplication in the usual way:\n\n\\begin{align}\n\\mathbf{A} + \\mathbf{B} &= \n \\left[ \n \\begin{array}{ccc} \n a_{0,0} & \\dots & a_{0,N-1} \\\\ \n \\vdots & & \\vdots \\\\ \n a_{M-1,0} & \\dots & b_{M-1,N-1} \n \\end{array}\n \\right]\n + \n \\left[ \n \\begin{array}{ccc} \n b_{0,0} & \\dots & b_{0,N-1} \\\\ \n \\vdots & & \\vdots \\\\ \n b_{M-1,0} & \\dots & b_{M-1,N-1} \n \\end{array}\n \\right]\n \\\\\n &=\n \\left[ \n \\begin{array}{ccc} \n a_{0,0}+b_{0,0} & \\dots & a_{0,N-1}+b_{0,N-1} \\\\ \n \\vdots & & \\vdots \\\\ \n a_{M-1,0}+b_{M-1,0} & \\dots & a_{M-1,N-1}+b_{M-1,N-1} \n \\end{array}\n \\right] \n \\\\ \\\\ \\\\\n\\beta\\mathbf{A} &= \n \\left[ \n \\begin{array}{ccc} \n \\beta a_{0,0} & \\dots & \\beta a_{0,N-1} \\\\ \n \\vdots & & \\vdots \\\\ \n \\beta a_{M-1,0} & \\dots & \\beta a_{M-1,N-1}\n \\end{array}\n \\right]\n\\end{align}\n\n\nAs a matter of fact, the space of real-valued $M\\times N$ matrices is completely equivalent to $\\mathbb{R}^{MN}$ and we can always \"unroll\" a matrix into a vector. 
Assume we proceed column by column; then the matrix becomes\n\n$$\n \\mathbf{a} = \\mathbf{A}[:] = [\n \\begin{array}{ccccccc}\n a_{0,0} & \\dots & a_{M-1,0} & a_{0,1} & \\dots & a_{M-1,1} & \\ldots & a_{0, N-1} & \\dots & a_{M-1,N-1}\n \\end{array}]^T\n$$\n\nAlthough the matrix and vector forms represent exactly the same data, the matrix form allows us to display the data in the form of an image. Assume each value in the matrix is a grayscale intensity, where zero is black and 255 is white; for example we can create a checkerboard pattern of any size with the following function:\n\n\n```python\n# usual pyton bookkeeping...\n%matplotlib inline\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport IPython\nfrom IPython.display import Image\nimport math\n```\n\n\n```python\n# ensure all images will be grayscale\nplt.gray();\n```\n\n\n
\n\n\n\n```python\n# let's create a checkerboard pattern\nSIZE = 4\nimg = np.zeros((SIZE, SIZE))\nfor n in range(0, SIZE):\n for m in range(0, SIZE):\n if (n & 0x1) ^ (m & 0x1):\n img[n, m] = 255\n\n# now display the matrix as an image\nplt.matshow(img); \n```\n\nGiven the equivalence between the space of $M\\times N$ matrices and $\\mathbb{R}^{MN}$ we can easily define the inner product between two matrices in the usual way:\n\n$$\n\\langle \\mathbf{A}, \\mathbf{B} \\rangle = \\sum_{m=0}^{M-1} \\sum_{n=0}^{N-1} a_{m,n} b_{m, n}\n$$\n\n(where we have neglected the conjugation since we'll only deal with real-valued matrices); in other words, we can take the inner product between two matrices as the standard inner product of their unrolled versions. The inner product allows us to define orthogonality between images and this is rather useful since we're going to explore a couple of bases for this space.\n\n\n## Actual images\n\nConveniently, using IPython, we can read images from disk in any given format and convert them to numpy arrays; let's load and display for instance a JPEG image:\n\n\n```python\nimg = np.array(plt.imread('_inputs/20190919/cameraman.jpg'), dtype=int)\nplt.matshow(img);\n```\n\nThe image is a $64\\times 64$ low-resolution version of the famous \"cameraman\" test picture. Out of curiosity, we can look at the first column of this image, which is is a $64\u00d71$ vector:\n\n\n```python\nimg[:,0]\n```\n\n\n\n\n array([156, 157, 157, 152, 154, 155, 151, 157, 152, 155, 158, 159, 159,\n 160, 160, 161, 155, 160, 161, 161, 164, 162, 160, 162, 158, 160,\n 158, 157, 160, 160, 159, 158, 163, 162, 162, 157, 160, 114, 114,\n 103, 88, 62, 109, 82, 108, 128, 138, 140, 136, 128, 122, 137,\n 147, 114, 114, 144, 112, 115, 117, 131, 112, 141, 99, 97])\n\n\n\nThe values are integers between zero and 255, meaning that each pixel is encoded over 8 bits (or 256 gray levels).\n\n## The canonical basis\n\nThe canonical basis for any matrix space $\\mathbb{R}^{M\\times N}$ is the set of \"delta\" matrices where only one element equals to one while all the others are 0. Let's call them $\\mathbf{E}_n$ with $0 \\leq n < MN$. Here is a function to create the canonical basis vector given its index:\n\n\n```python\ndef canonical(n, M=5, N=10):\n e = np.zeros((M, N))\n e[(n % M), int(n / M)] = 1\n return e\n```\n\nHere are some basis vectors: look for the position of white pixel, which differentiates them and note that we enumerate pixels column-wise:\n\n\n```python\nplt.matshow(canonical(0));\nplt.matshow(canonical(1));\nplt.matshow(canonical(49));\n```\n\n## Transmitting images\n\nSuppose we want to transmit the \"cameraman\" image over a communication channel. The intuitive way to do so is to send the pixel values one by one, which corresponds to sending the coefficients of the decomposition of the image over the canonical basis. So far, nothing complicated: to send the cameraman image, for instance, we will send $64\\times 64 = 4096$ coefficients in a row. \n\nNow suppose that a communication failure takes place after the first half of the pixels have been sent. The received data will allow us to display an approximation of the original image only. 
If we replace the missing data with zeros, here is what we would see, which is not very pretty:\n\n\n```python\n# unrolling of the image for transmission (we go column by column, hence \"F\")\ntx_img = np.ravel(img, \"F\")\n\n# oops, we lose half the data\ntx_img[int(len(tx_img)/2):] = 0\n\n# rebuild matrix\nrx_img = np.reshape(tx_img, (64, 64), \"F\")\nplt.matshow(rx_img);\n```\n\nCan we come up with a trasmission scheme that is more robust in the face of channel loss? Interestingly, the answer is yes, and it involves a different, more versatile basis for the space of images. What we will do is the following: \n\n* describe the Haar basis, a new basis for the image space\n* project the image in the new basis\n* transmit the projection coefficients\n* rebuild the image using the basis vectors\n\nWe know a few things: if we choose an orthonormal basis, the analysis and synthesis formulas will be super easy (a simple inner product and a scalar multiplication respectively). The trick is to find a basis that will be robust to the loss of some coefficients. \n\nOne such basis is the **Haar basis**. We cannot go into too many details in this notebook but, for the curious, a good starting point is [here](https://chengtsolin.wordpress.com/2015/04/15/real-time-2d-discrete-wavelet-transform-using-opengl-compute-shader/). Mathematical formulas aside, the Haar basis works by encoding the information in a *hierarchical* way: the first basis vectors encode the broad information and the higher coefficients encode the detail. Let's have a look. \n\nFirst of all, to keep things simple, we will remain in the space of square matrices whose size is a power of two. The code to generate the Haar basis matrices is the following: first we generate a 1D Haar vector and then we obtain the basis matrices by taking the outer product of all possible 1D vectors (don't worry if it's not clear, the results are what's important):\n\n\n```python\ndef haar1D(n, SIZE):\n # check power of two\n if math.floor(math.log(SIZE) / math.log(2)) != math.log(SIZE) / math.log(2):\n print(\"Haar defined only for lengths that are a power of two\")\n return None\n if n >= SIZE or n < 0:\n print(\"invalid Haar index\")\n return None\n \n # zero basis vector\n if n == 0:\n return np.ones(SIZE)\n \n # express n > 1 as 2^p + q with p as large as possible;\n # then k = SIZE/2^p is the length of the support\n # and s = qk is the shift\n p = math.floor(math.log(n) / math.log(2))\n pp = int(pow(2, p))\n k = SIZE / pp\n s = (n - pp) * k\n \n h = np.zeros(SIZE)\n h[int(s):int(s+k/2)] = 1\n h[int(s+k/2):int(s+k)] = -1\n # these are not normalized\n return h\n\n\ndef haar2D(n, SIZE=8):\n # get horizontal and vertical indices\n hr = haar1D(n % SIZE, SIZE)\n hv = haar1D(int(n / SIZE), SIZE)\n # 2D Haar basis matrix is separable, so we can\n # just take the column-row product\n H = np.outer(hr, hv)\n H = H / math.sqrt(np.sum(H * H))\n return H\n```\n\nFirst of all, let's look at a few basis matrices; note that the matrices have positive and negative values, so that the value of zero will be represented as gray:\n\n\n```python\nplt.matshow(haar2D(0));\nplt.matshow(haar2D(1));\nplt.matshow(haar2D(10));\nplt.matshow(haar2D(63));\n```\n\nWe can notice two key properties\n\n* each basis matrix has positive and negative values in some symmetric patter: this means that the basis matrix will implicitly compute the difference between image areas\n* low-index basis matrices take differences between large areas, while high-index ones take differences in smaller 
**localized** areas of the image\n\nWe can immediately verify that the Haar matrices are orthogonal:\n\n\n```python\n# let's use an 8x8 space; there will be 64 basis vectors\n# compute all possible inner product and only print the nonzero results\nfor m in range(0,64):\n for n in range(0,64):\n r = np.sum(haar2D(m, 8) * haar2D(n, 8))\n if r != 0:\n print(\"[%dx%d -> %f] \" % (m, n, r), end=\"\")\n```\n\n [0x0 -> 1.000000] [1x1 -> 1.000000] [2x2 -> 1.000000] [3x3 -> 1.000000] [4x4 -> 1.000000] [5x5 -> 1.000000] [6x6 -> 1.000000] [7x7 -> 1.000000] [8x8 -> 1.000000] [9x9 -> 1.000000] [10x10 -> 1.000000] [11x11 -> 1.000000] [12x12 -> 1.000000] [13x13 -> 1.000000] [14x14 -> 1.000000] [15x15 -> 1.000000] [16x16 -> 1.000000] [16x17 -> -0.000000] [17x16 -> -0.000000] [17x17 -> 1.000000] [18x18 -> 1.000000] [19x19 -> 1.000000] [20x20 -> 1.000000] [21x21 -> 1.000000] [22x22 -> 1.000000] [23x23 -> 1.000000] [24x24 -> 1.000000] [24x25 -> -0.000000] [25x24 -> -0.000000] [25x25 -> 1.000000] [26x26 -> 1.000000] [27x27 -> 1.000000] [28x28 -> 1.000000] [29x29 -> 1.000000] [30x30 -> 1.000000] [31x31 -> 1.000000] [32x32 -> 1.000000] [33x33 -> 1.000000] [34x34 -> 1.000000] [35x35 -> 1.000000] [36x36 -> 1.000000] [37x37 -> 1.000000] [38x38 -> 1.000000] [39x39 -> 1.000000] [40x40 -> 1.000000] [41x41 -> 1.000000] [42x42 -> 1.000000] [43x43 -> 1.000000] [44x44 -> 1.000000] [45x45 -> 1.000000] [46x46 -> 1.000000] [47x47 -> 1.000000] [48x48 -> 1.000000] [49x49 -> 1.000000] [50x50 -> 1.000000] [51x51 -> 1.000000] [52x52 -> 1.000000] [53x53 -> 1.000000] [54x54 -> 1.000000] [55x55 -> 1.000000] [56x56 -> 1.000000] [57x57 -> 1.000000] [58x58 -> 1.000000] [59x59 -> 1.000000] [60x60 -> 1.000000] [61x61 -> 1.000000] [62x62 -> 1.000000] [63x63 -> 1.000000] \n\nOK! Everything's fine. Now let's transmit the \"cameraman\" image: first, let's verify that it works\n\n\n```python\n# project the image onto the Haar basis, obtaining a vector of 4096 coefficients\n# this is simply the analysis formula for the vector space with an orthogonal basis\ntx_img = np.zeros(64*64)\nfor k in range(0, (64*64)):\n tx_img[k] = np.sum(img * haar2D(k, 64))\n\n# now rebuild the image with the synthesis formula; since the basis is orthonormal\n# we just need to scale the basis matrices by the projection coefficients\nrx_img = np.zeros((64, 64))\nfor k in range(0, (64*64)):\n rx_img += tx_img[k] * haar2D(k, 64)\n\nplt.matshow(rx_img);\n```\n\nCool, it works! Now let's see what happens if we lose the second half of the coefficients:\n\n\n```python\n# oops, we lose half the data\nlossy_img = np.copy(tx_img);\nlossy_img[int(len(tx_img)/2):] = 0\n\n# rebuild matrix\nrx_img = np.zeros((64, 64))\nfor k in range(0, (64*64)):\n rx_img += lossy_img[k] * haar2D(k, 64)\n\nplt.matshow(rx_img);\n```\n\nThat's quite remarkable, no? We've lost the same amount of information as before but the image is still acceptable. This is because we lost the coefficients associated to the fine details of the image but we retained the \"broad strokes\" encoded by the first half. \n\nNote that if we lose the first half of the coefficients the result is markedly different:\n\n\n```python\nlossy_img = np.copy(tx_img);\nlossy_img[0:int(len(tx_img)/2)] = 0\n\nrx_img = np.zeros((64, 64))\nfor k in range(0, (64*64)):\n rx_img += lossy_img[k] * haar2D(k, 64)\n\nplt.matshow(rx_img);\n```\n\nIn fact, schemes like this one are used in *progressive encoding*: send the most important information first and add details if the channel permits it. 
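As a quick sketch of that idea (reusing `tx_img` and `haar2D` from the cells above), we can reconstruct the image from a growing number of leading coefficients and watch the detail appear:


```python
# progressive reconstruction: keep only the first K Haar coefficients
# (reuses tx_img and haar2D defined above)
for K in [64, 256, 1024, 4096]:
    partial = np.zeros((64, 64))
    for k in range(K):
        partial += tx_img[k] * haar2D(k, 64)
    plt.matshow(partial);
```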
You may have experienced this while browsing the interned over a slow connection. \n\nAll in all, a great application of a change of basis!\n\n\n```python\n\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "e5c3f3c2cfc87253f676d22d7337226020a4c214", "size": 124654, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Codes/2019.09.19 Haar_Basis.ipynb", "max_stars_repo_name": "lev1khachatryan/ASDS_DSP", "max_stars_repo_head_hexsha": "9059d737f6934b81a740c79b33756f7ec9ededb3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-12-29T18:02:13.000Z", "max_stars_repo_stars_event_max_datetime": "2020-12-29T18:02:13.000Z", "max_issues_repo_path": "Codes/2019.09.19 Haar_Basis.ipynb", "max_issues_repo_name": "lev1khachatryan/ASDS_DSP", "max_issues_repo_head_hexsha": "9059d737f6934b81a740c79b33756f7ec9ededb3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Codes/2019.09.19 Haar_Basis.ipynb", "max_forks_repo_name": "lev1khachatryan/ASDS_DSP", "max_forks_repo_head_hexsha": "9059d737f6934b81a740c79b33756f7ec9ededb3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 190.3114503817, "max_line_length": 17036, "alphanum_fraction": 0.8941068879, "converted": true, "num_tokens": 4007, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9353465116437761, "lm_q2_score": 0.9046505261034854, "lm_q1q2_score": 0.8461617138476019}} {"text": "```python\nimport numpy as np # Load the library\n\na = np.linspace(-np.pi, np.pi, 100) # Create even grid from -\u03c0 to \u03c0\nb = np.cos(a) # Apply cosine to each element of a\nc = np.sin(a) # Apply sin to each element of a\n```\n\n\n```python\nb @ c\n```\n\n\n\n\n 3.3306690738754696e-16\n\n\n\n\n```python\nfrom scipy.stats import norm\nfrom scipy.integrate import quad\n\n\u03d5 = norm()\nvalue, error = quad(\u03d5.pdf, -2, 2) # Integrate using Gaussian quadrature\nvalue\n```\n\n\n\n\n 0.9544997361036417\n\n\n\n\n```python\nfrom sympy import Symbol\n\nx, y = Symbol('x'), Symbol('y') # Treat 'x' and 'y' as algebraic symbols\nx + x + x + y\n```\n\n\n\n\n$\\displaystyle 3 x + y$\n\n\n\n\n```python\nexpression = (x + y)**2\nexpression.expand()\n```\n\n\n\n\n$\\displaystyle x^{2} + 2 x y + y^{2}$\n\n\n\n\n```python\nfrom sympy import solve\n\nsolve(x**2 + x + 2)\n```\n\n\n\n\n [-1/2 - sqrt(7)*I/2, -1/2 + sqrt(7)*I/2]\n\n\n\n\n```python\nfrom sympy import limit, sin, diff\n\nlimit(1 / x, x, 0)\n```\n\n\n\n\n$\\displaystyle \\infty$\n\n\n\n\n```python\nlimit(sin(x) / x, x, 0)\n```\n\n\n\n\n$\\displaystyle 1$\n\n\n\n\n```python\ndiff(sin(x), x)\n```\n\n\n\n\n$\\displaystyle \\cos{\\left(x \\right)}$\n\n\n\n\n```python\nimport pandas as pd\nnp.random.seed(1234)\n\ndata = np.random.randn(5, 2) # 5x2 matrix of N(0, 1) random draws\ndates = pd.date_range('28/12/2010', periods=5)\n\ndf = pd.DataFrame(data, columns=('price', 'weight'), index=dates)\nprint(df)\n```\n\n price weight\n 2010-12-28 0.471435 -1.190976\n 2010-12-29 1.432707 -0.312652\n 2010-12-30 -0.720589 0.887163\n 2010-12-31 0.859588 -0.636524\n 2011-01-01 0.015696 -2.242685\n\n\n\n```python\ndf.mean()\n```\n\n\n\n\n price 0.411768\n weight -0.699135\n dtype: float64\n\n\n", "meta": {"hexsha": "a59e1b578ecb89c983dd541fddf22760db028530", "size": 5605, "ext": 
"ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "mini_book/_build/.jupyter_cache/executed/28957355068643b77064203b6d0079ba/base.ipynb", "max_stars_repo_name": "rebeccajohnson88/qss20_win22_coursepage", "max_stars_repo_head_hexsha": "cbe96d3e1e04d6e5d3de5e55acf8d65207cea0a0", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-04-01T18:42:36.000Z", "max_stars_repo_stars_event_max_datetime": "2021-04-01T18:42:36.000Z", "max_issues_repo_path": "mini_book/_build/.jupyter_cache/executed/28957355068643b77064203b6d0079ba/base.ipynb", "max_issues_repo_name": "rebeccajohnson88/qss20", "max_issues_repo_head_hexsha": "f936e77660e551bb10a82abb96a36369ccbf3d18", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-02-14T22:36:59.000Z", "max_issues_repo_issues_event_max_datetime": "2021-02-24T23:33:24.000Z", "max_forks_repo_path": "mini_book/_build/.jupyter_cache/executed/28957355068643b77064203b6d0079ba/base.ipynb", "max_forks_repo_name": "rebeccajohnson88/qss20_win22_coursepage", "max_forks_repo_head_hexsha": "cbe96d3e1e04d6e5d3de5e55acf8d65207cea0a0", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 18.3770491803, "max_line_length": 83, "alphanum_fraction": 0.4456735058, "converted": true, "num_tokens": 598, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9381240142763572, "lm_q2_score": 0.9019206705978501, "lm_q1q2_score": 0.8461134400600792}} {"text": "## TNK\n\nTanaka suggested the following two-variable problem:\n\n**Definition**\n\n\\begin{equation}\n\\newcommand{\\boldx}{\\mathbf{x}}\n\\begin{array}\n\\mbox{Minimize} & f_1(\\boldx) = x_1, \\\\\n\\mbox{Minimize} & f_2(\\boldx) = x_2, \\\\\n\\mbox{subject to} & C_1(\\boldx) \\equiv x_1^2 + x_2^2 - 1 - \n0.1\\cos \\left(16\\arctan \\frac{x_1}{x_2}\\right) \\geq 0, \\\\\n& C_2(\\boldx) \\equiv (x_1-0.5)^2 + (x_2-0.5)^2 \\leq 0.5,\\\\\n& 0 \\leq x_1 \\leq \\pi, \\\\\n& 0 \\leq x_2 \\leq \\pi.\n\\end{array}\n\\end{equation}\n\n**Optimum**\n\nSince $f_1=x_1$ and $f_2=x_2$, the feasible objective space is also\nthe same as the feasible decision variable space. The unconstrained \ndecision variable space consists of all solutions in the square\n$0\\leq (x_1,x_2)\\leq \\pi$. Thus, the only unconstrained Pareto-optimal \nsolution is $x_1^{\\ast}=x_2^{\\ast}=0$. \nHowever, the inclusion of the first constraint makes this solution\ninfeasible. The constrained Pareto-optimal solutions lie on the boundary\nof the first constraint. Since the constraint function is periodic and\nthe second constraint function must also be satisfied,\nnot all solutions on the boundary of the first constraint are Pareto-optimal. 
The \nPareto-optimal set is disconnected.\nSince the Pareto-optimal\nsolutions lie on a nonlinear constraint surface, an optimization\nalgorithm may have difficulty in finding a good spread of solutions across\nall of the discontinuous Pareto-optimal sets.\n\n**Plot**\n\n\n```python\nfrom pymoo.factory import get_problem\nfrom pymoo.util.plotting import plot\n\nproblem = get_problem(\"tnk\")\nplot(problem.pareto_front(), no_fill=True)\n```\n", "meta": {"hexsha": "362fb98503fd0d8ca82c8a0b776df4b1ba1c59b5", "size": 35841, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "source/problems/multi/tnk.ipynb", "max_stars_repo_name": "SunTzunami/pymoo-doc", "max_stars_repo_head_hexsha": "f82d8908fe60792d49a7684c4bfba4a6c1339daf", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-09-11T06:43:49.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-10T13:36:09.000Z", "max_issues_repo_path": "source/problems/multi/tnk.ipynb", "max_issues_repo_name": "SunTzunami/pymoo-doc", "max_issues_repo_head_hexsha": "f82d8908fe60792d49a7684c4bfba4a6c1339daf", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2021-09-21T14:04:47.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-07T13:46:09.000Z", "max_forks_repo_path": "source/problems/multi/tnk.ipynb", "max_forks_repo_name": "SunTzunami/pymoo-doc", "max_forks_repo_head_hexsha": "f82d8908fe60792d49a7684c4bfba4a6c1339daf", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2021-10-09T02:47:26.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-10T07:02:37.000Z", "avg_line_length": 250.6363636364, "max_line_length": 32180, "alphanum_fraction": 0.91883597, "converted": true, "num_tokens": 496, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9425067244294587, "lm_q2_score": 0.8976952825278492, "lm_q1q2_score": 0.8460838402711007}} {"text": "```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib.ticker import (MultipleLocator, FormatStrFormatter,\n AutoMinorLocator)\n```\n\n# Superposition of two waves in perpendicular direction\n\\begin{equation}\nx = a \\sin (2\\pi f_1 t)\\\\\ny=b \\sin (2\\pi f_2 t - \\phi)\n\\end{equation}\n\n\n```python\na = [10,30] # amplitude of first wave\nf1 = [1,2,4,8,12,16] # frequency of first wave\nb=[10,30] # amplitude of second wave\nf2=[1,2,4,8,12,16] # frequency of second wave\nphi=[0,np.pi/4,np.pi/2] # phase angle(0,30,45,60,90)\nt = np.arange(0,8.0,0.01) # time\n```\n\n## Example\n\n### 1.same amplitude,phase is zero ,different frequency\n Lissajous figure\n\n\n```python\nplt.figure(figsize = [6,36])\nfor i in range(len(f2)):\n plt.subplot(len(f2),1,i+1)\n ax = plt.gca()\n ax.set_facecolor('k') # backgound color\n ax.grid(color='xkcd:sky blue') # grid color\n x = a[0]*np.sin(2*np.pi*f1[2]*t)\n y = b[0]*np.sin(2*np.pi*f2[i]*t-phi[0])\n plt.plot(x,y, color ='g',label='f1='+str(f1[2])+',f2='+str(f2[i]))\n plt.xlabel(\"x\",color='r',fontsize=14)\n plt.ylabel(\"y\",color='r',fontsize=14)\n ax.xaxis.set_minor_locator(AutoMinorLocator()) ##\n ax.yaxis.set_minor_locator(AutoMinorLocator()) ###\n ax.tick_params(which='both', width=2)\n ax.tick_params(which='major', length=9)\n ax.tick_params(which='minor', length=4)\n plt.legend()\n plt.subplots_adjust(wspace = 0.5, hspace = 0.5)\nplt.show()\n```\n\n### 2.same frequency and amplitude,different phase\n\n\n\n```python\nplt.figure(figsize = [6,24])\nfor i in range(len(phi)):\n plt.subplot(len(phi),1,i+1)\n ax = plt.gca()\n ax.set_facecolor('xkcd:sky blue')\n ax.grid(color='g')\n x = a[0]*np.sin(2*np.pi*f1[2]*t)\n y = b[0]*np.sin(2*np.pi*f2[2]*t-phi[i])\n plt.plot(x,y, color ='purple',label='phase='+str(phi[i]*180/np.pi))\n plt.xlabel(\"x\",color='g',fontsize=14)\n plt.ylabel(\"y\",color='g',fontsize=14)\n plt.legend()\n plt.subplots_adjust(wspace = 0.5, hspace = 0.5)\nplt.show()\n```\n\n### 3.same frequency ,different phase and amplitude\n\n\n```python\nplt.figure(figsize = [8,24])\nfor i in range(len(phi)):\n plt.subplot(len(phi),1,i+1)\n ax = plt.gca()\n ax.grid(color='tab:brown')\n x = a[0]*np.sin(2*np.pi*f1[2]*t)\n y = b[1]*np.sin(2*np.pi*f2[2]*t-phi[i])\n plt.plot(x,y, color ='r',label='phase='+str(phi[i]*180/np.pi))\n plt.xlabel(\"x\",color='r',fontsize=14)\n plt.ylabel(\"y\",color='r',fontsize=14)\n plt.legend()\n plt.subplots_adjust(wspace = 0.5, hspace = 0.5)\nplt.show()\n```\n\n# Superposition of two waves in same direction\n\\begin{equation}\ny_1 = a \\sin (2\\pi f_1 t)\\\\\ny_2=b \\sin (2\\pi f_2 t - \\phi)\n\\end{equation}\n\n## Example\n\n### 1.same amplitude and phase,different frequency\n\n\n```python\nf1=[1,2,3,4,5]\nf2=[1,2,3,4,5]\nplt.figure(figsize = [12,16])\nfor i in range(len(f2)):\n plt.subplot(len(f2),1,i+1)\n ax = plt.gca() #graphic current axis\n ax.set_facecolor('k')\n ax.grid(False)\n y1 = a[0]*np.sin(2*np.pi*f1[0]*t)\n y2 = a[0]*np.sin(2*np.pi*f2[i]*t-phi[0])\n y=y1+y2\n plt.plot(t,y,color ='tab:olive',label='f1='+str(f1[0])+',f2='+str(f2[i]))\n plt.xlabel(\"t\",color='r',fontsize=14)\n plt.ylabel(\"y\",color='r',fontsize=14)\n plt.legend()\n plt.subplots_adjust(wspace = 0.5, hspace = 0.5)\nplt.show()\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "ba6341d873ccba0073742e9b90416a6a7f3a72bb", "size": 389107, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": 
"Lissajous.ipynb", "max_stars_repo_name": "AmbaPant/NPS", "max_stars_repo_head_hexsha": "0500f39f6708388d5c3f2b8d3e5ee5e56a1f646f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-09-16T03:21:55.000Z", "max_stars_repo_stars_event_max_datetime": "2020-09-16T03:21:55.000Z", "max_issues_repo_path": "Lissajous.ipynb", "max_issues_repo_name": "AmbaPant/NPS", "max_issues_repo_head_hexsha": "0500f39f6708388d5c3f2b8d3e5ee5e56a1f646f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Lissajous.ipynb", "max_forks_repo_name": "AmbaPant/NPS", "max_forks_repo_head_hexsha": "0500f39f6708388d5c3f2b8d3e5ee5e56a1f646f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2020-08-10T12:17:21.000Z", "max_forks_repo_forks_event_max_datetime": "2020-09-13T14:31:02.000Z", "avg_line_length": 1435.8191881919, "max_line_length": 180712, "alphanum_fraction": 0.9567933756, "converted": true, "num_tokens": 1159, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9553191246389618, "lm_q2_score": 0.8856314753275017, "lm_q1q2_score": 0.8460606857625813}} {"text": "# Coin toss problem\n\n## What is the problem?\n\nLet's pick up a random coin (not necessarily a fair one with equal probability of head and tail). We did the coin toss experiment $n$ times and gathered the observed data $D$ as a set of outcomes (e.g. $\\{H, T, T, ...\\}$). Now, we are interested in predicting the probability of heads $p(H)=\\theta_{best}$ for our coin.\n\n## Applying Bayes rule\n\nIn this problem, we will {ref}`model the distribution of parameters `. \n\n\\begin{equation}\n\\underbrace{p(\\theta|D)}_{\\text{Posterior}} = \\frac{\\overbrace{p(D|\\theta)}^{\\text{Likelihood}}}{\\underbrace{p(D)}_{\\text{Evidence}}}\\underbrace{p(\\theta)}_{\\text{Prior}}\n\\end{equation}\n\\begin{equation}\np(D) = \\int_{\\theta}p(D|\\theta)p(\\theta)d\\theta\n\\end{equation}\n\nWe are interested in $p(\\theta|D)$ and to derive that, we need prior, likelihood and evidence terms. Let us look at them one by one.\n\n### Prior\n\nWhat is our prior belief about the coin's probability of head $p($H$)$? Yes, that's exactly the question. A most simple way is to assume equal probability of heads and tails. However, we can represent our prior belief in terms of a distribution. Let's assume a beta distribution over the probability of heads $p(H) = \\theta$ (we will see in later sections why beta and not Gaussian or uniform or something else?). So, our prior distibution $p(\\theta)$ is:\n\n$$\np(\\theta|\\alpha, \\beta) = \\frac{\\theta^{\\alpha-1}(1-\\theta)^{\\beta-1}}{B(\\alpha,\\beta)}, \\alpha,\\beta>0\\\\\nB(\\alpha, \\beta) = \\frac{\\Gamma(\\alpha)\\Gamma(\\beta)}{\\Gamma(\\alpha+\\beta)}\\\\\n\\Gamma(\\alpha) = (\\alpha-1)!\n$$\n\nHere, $\\alpha$ and $\\beta$ are the hyperparameters of the beta distrubution. $B$ is Beta function. You may play with [this interactive demo](https://huggingface.co/spaces/Zeel/Beta_distribution) to see how pdf changes with $\\alpha$ and $\\beta$. In our modeling, we can assume that $\\alpha$ and $\\beta$ are already known. There are methods of assuming distributions over the $\\alpha$ and $\\beta$ as well but that's out of the scope for now.\n\n### Likelihood\n\nLikelihood is probability of observing the data $D$ given $\\theta$. 
From $n$ experiments in which we observed heads $h$ times, the likelihood $p(D|\\theta)$ is a product of independent Bernoulli terms. We can also arrive at this formula by following the basic probability rules for independent events:\n\n$$\np(D|\\theta) = \\theta^h(1-\\theta)^{n-h}\n$$\n\n### Maximum likelihood estimation (MLE)\n\nIn cases where a prior is not available, we can use the likelihood alone to get the best estimate of $\\theta$. Let us find the optimal $\\theta$ by differentiating the log-likelihood $\\log p(D|\\theta)$ w.r.t. $\\theta$ and setting the derivative to zero.\n\n\\begin{align}\np(D|\\theta) &= (\\theta)^h(1-\\theta)^{n-h}\\\\\n\\text{taking log on both sides to simplify things,}\\\\\n\\log p(D|\\theta) &= h\\log(\\theta)+(n-h)\\log(1-\\theta)\\\\\n\\frac{d}{d\\theta}\\log p(D|\\theta) &= \\frac{h}{\\theta} - \\frac{n-h}{1-\\theta} = 0\\\\\n\\Rightarrow \\quad & h(1-\\theta)-(n-h)\\theta = 0\\\\\n\\Rightarrow \\quad & h - h\\theta - n\\theta + h\\theta = 0\\\\\n\\therefore \\theta_{MLE} = \\frac{h}{n}\n\\end{align}\n\nHow can we know that the optimum at $\\theta_{MLE}$ is a maximum? It is a maximum if $\\frac{d^2}{d\\theta^2}\\log p(D|\\theta)$ is negative [(check here if not convinced)](https://www.khanacademy.org/math/multivariable-calculus/applications-of-multivariable-derivatives/optimizing-multivariable-functions/a/second-partial-derivative-test):\n\n\\begin{align}\n\\frac{d}{d\\theta}\\log p(D|\\theta) &= \\frac{h}{\\theta} - \\frac{n-h}{1-\\theta}\\\\\n\\frac{d^2}{d\\theta^2}\\log p(D|\\theta) &= -\\frac{h}{\\theta^2}-\\frac{n-h}{(1-\\theta)^2}\n\\end{align}\n\nSince $0 < \\theta < 1$ and $0 \\leq h \\leq n$, both terms above are non-positive (and at least one is strictly negative for $n > 0$), so the second derivative is always negative and the optimum is indeed a maximum.\n\n### Maximum a posteriori estimation (MAP)\n\nWe know that the posterior is given by the following formula:\n\n\\begin{equation}\n\\underbrace{p(\\theta|D)}_{\\text{Posterior}} = \\frac{\\overbrace{p(D|\\theta)}^{\\text{Likelihood}}}{\\underbrace{p(D)}_{\\text{Evidence}}}\\underbrace{p(\\theta)}_{\\text{Prior}}\n\\end{equation}\n\nIf we are only interested in the most probable value of $\\theta$ under the posterior (a point estimate, in other words), we can differentiate the posterior w.r.t. $\\theta$. However, we have not yet derived the evidence, but it does not depend on $\\theta$. 
So, we can claim that the following is true:\n\n$$\n\\arg \\max_{\\theta} p(\\theta|D) = \\arg \\max_{\\theta} p(D|\\theta)p(\\theta)\n$$\n\nNow, differentiating $p(D|\\theta)p(\\theta)$ w.r.t. $\\theta$:\n\n\\begin{align}\np(D|\\theta)p(\\theta) &= \\theta^h(1-\\theta)^{N-h}\\cdot\\frac{\\theta^{\\alpha-1}(1-\\theta)^{\\beta-1}}{B(\\alpha, \\beta)}\\\\\n &= \\frac{\\theta^{h+\\alpha-1}(1-\\theta)^{N-h+\\beta-1}}{B(\\alpha, \\beta)}\\\\\n\\text{Taking log for simplification,}\\\\\n\\log \\left[p(D|\\theta)p(\\theta)\\right] &= (h+\\alpha-1)\\log(\\theta) + (N-h+\\beta-1)\\log(1-\\theta) - \\log(B(\\alpha, \\beta))\\\\\n\\\\\n\\frac{d}{d\\theta} \\log \\left[p(D|\\theta)p(\\theta)\\right] &= \\frac{h+\\alpha-1}{\\theta} - \\frac{N-h+\\beta-1}{1-\\theta} = 0\\\\\n\\\\\n\\therefore \\theta_{MAP} = \\frac{h+(\\alpha-1)}{N+(\\alpha-1)+(\\beta-1)}\n\\end{align}\n\nNow we have the most probable value of $\\theta$ under the posterior, but if we are interested in the full posterior distribution, we must also derive the evidence!\n\n### Evidence\n\nThe formula for computing the evidence is the following:\n\n$$\np(D) = \\int\\limits_{\\theta}p(D|\\theta)p(\\theta)d\\theta\n$$\n\nSubstituting the values and deriving the formula:\n\n\\begin{align}\np(D) &= \\int\\limits_{0}^{1}p(D|\\theta)p(\\theta)d\\theta\\\\\n &= \\int\\limits_{0}^{1}(\\theta)^h(1-\\theta)^{N-h}\\frac{\\theta^{\\alpha-1}(1-\\theta)^{\\beta-1}}{B(\\alpha,\\beta)}d\\theta\\\\\n &= \\frac{1}{B(\\alpha,\\beta)}\\int\\limits_{0}^{1}(\\theta)^{h+\\alpha-1}(1-\\theta)^{N-h+\\beta-1}d\\theta\\\\\n &= \\frac{1}{B(\\alpha,\\beta)}B(h+\\alpha, N-h+\\beta)\\\\\n \\therefore p(D) = \\frac{B(h+\\alpha, N-h+\\beta)}{B(\\alpha,\\beta)}\n\\end{align}\n\nThe last step follows from the definition of [the Beta function](https://en.wikipedia.org/wiki/Beta_function).\n\n### Posterior\n\nNow, we have all the required terms to compute the posterior $p(\\theta|D)$.\n\n\\begin{align}\np(\\theta|D) &= \\frac{p(D|\\theta)}{p(D)}p(\\theta)\\\\\n&= \\theta^h(1-\\theta)^{N-h} \\cdot \\frac{B(\\alpha,\\beta)}{B(h+\\alpha, N-h+\\beta)} \\cdot \\frac{\\theta^{\\alpha-1}(1-\\theta)^{\\beta-1}}{B(\\alpha,\\beta)}\\\\\n&= \\frac{\\theta^{h+\\alpha-1}(1-\\theta)^{N-h+\\beta-1}}{B(h+\\alpha, N-h+\\beta)}\n\\\\\n\\therefore p(\\theta|D) = \\text{Beta}(h+\\alpha, N-h+\\beta)\n\\end{align}\n\nWe have successfully derived the posterior and it follows a Beta distribution.\n\n## MAP is not the expected value of the posterior\n\nFrom [Wikipedia](https://en.wikipedia.org/wiki/Beta_distribution), the expected value of our posterior is:\n\n$$\n\\mathbb{E}[\\theta \\mid D] = \\frac{h+\\alpha}{N + \\alpha + \\beta}\n$$\n\nWe derived the MAP as:\n\n$$\n\\theta_{MAP} = \\frac{h+(\\alpha-1)}{N+(\\alpha-1)+(\\beta-1)}\n$$\n\nWe can see that the two values are clearly different. 
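\n\nAs a quick numerical sanity check of the algebra above, the point estimates and the posterior density can also be evaluated directly. The hyperparameters and counts below ($\\alpha=\\beta=2$, $N=10$, $h=7$) are made-up illustrative values rather than anything fixed by this notebook:\n\n```python\nimport numpy as np\nfrom math import gamma\n\nalpha, beta = 2, 2   # assumed prior hyperparameters (illustrative only)\nN, h = 10, 7         # assumed data: 10 tosses, 7 heads\n\ntheta_mle = h / N\ntheta_map = (h + alpha - 1) / (N + alpha + beta - 2)\nposterior_mean = (h + alpha) / (N + alpha + beta)\nprint(theta_mle, theta_map, posterior_mean)\n\n# posterior density Beta(h+alpha, N-h+beta), normalised with the Beta function\ndef B(a, b):\n    return gamma(a) * gamma(b) / gamma(a + b)\n\ntheta = np.linspace(0.001, 0.999, 999)\npost = theta**(h + alpha - 1) * (1 - theta)**(N - h + beta - 1) / B(h + alpha, N - h + beta)\nprint(np.trapz(post, theta))  # integrates to approximately 1\n```\n\nFor this choice, $\\theta_{MLE}=0.7$, $\\theta_{MAP}=8/12\\approx 0.667$ and the posterior mean is $9/14\\approx 0.643$, which again illustrates that the three estimates need not coincide.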
\n", "meta": {"hexsha": "ca10d448197b1c767978d041a0d01bcd0818c08a", "size": 8867, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "_build/jupyter_execute/coin-toss.ipynb", "max_stars_repo_name": "patel-zeel/bayesian-ml", "max_stars_repo_head_hexsha": "2b7657f22fbf70953a91b2ab2bc321bb451fa5a5", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "_build/jupyter_execute/coin-toss.ipynb", "max_issues_repo_name": "patel-zeel/bayesian-ml", "max_issues_repo_head_hexsha": "2b7657f22fbf70953a91b2ab2bc321bb451fa5a5", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "_build/jupyter_execute/coin-toss.ipynb", "max_forks_repo_name": "patel-zeel/bayesian-ml", "max_forks_repo_head_hexsha": "2b7657f22fbf70953a91b2ab2bc321bb451fa5a5", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 51.5523255814, "max_line_length": 466, "alphanum_fraction": 0.5591519116, "converted": true, "num_tokens": 2150, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9324533126145179, "lm_q2_score": 0.907312213841788, "lm_q1q2_score": 0.8460262793723871}} {"text": "# Symbolic math introduction\n\nSymbolic mathematics is a maturing technology that lets a computer do maths using symbolic manipulation rather than numerical computation. Python has support for symbolic computation via the \"sympy\" package.\n\nThe sympy documentation can be found at http://docs.sympy.org/latest/index.html. The PDF version is just under 2000 pages long, which is quite frightening.\n\nNonetheless, this notebook will introduce some of the basics of symbolic math using Python. In particular we will see how to define functions of symbolic variables, and differentiate and integrate them symbolically. \n\nSome other good examples of sympy in use are at http://www.cfm.brown.edu/people/dobrush/am33/SymPy/index.html and https://github.com/sympy/sympy/wiki/Quick-examples. The first one in particular deals with differential equations, which will be used in subsequent computer assignments.\n\n## Basic differentiation\n\nThe cell below imports the symbolic math package, and defines two symbolic variables `x` and `y`. A symbolic function $f(x,y) = (x^2-2x+3)/y$ is then defined and printed.\n\n\n```python\nimport sympy as sp #import the sympy library under the name 'sp'\n#?sp.integrate()\n\nx, y = sp.symbols('x y');\nf = (x**2 - 2*x + 3)/y;\nprint(f);\n```\n\n (x**2 - 2*x + 3)/y\n\n\n\n```python\n# protip: you can make your outputs look sexy with LaTex:\nfrom IPython.display import display \nsp.init_printing() # initializes pretty printing. Only needs to be run once.\n\ndisplay(f)\n```\n\nNote that `f` here is a symbol representing a function. It would be nice if the notation made it explicit that it's actually a function of $x$ and $y$, namely `f(x,y)`, but that's not how it works. 
However, we can query the free variables:\n\n\n```python\nf.free_symbols\n```\n\nWe can get sympy to find a symbolic expression for the partial derivative of $f(x,y)$ with respect to $y$: \n\n\n```python\nfpy = sp.diff(f, y)\nfpy\n```\n\nTo evaluate this derivative at some particular values $x=\\pi$ and $y=2$ we can substitute into the symbolic expression:\n\n\n```python\nfpyv = fpy.subs([(x, sp.pi), (y, 2)])\nfpyv\n```\n\nNotice though that this is still a symbolic expression. It can be evaluated using the \"evalf\" method, which finally returns a number:\n\n\n```python\nfpyv.evalf()\n```\n\n## More advanced differentiation\n\nSymbolic expressions can be manipulated. For example we can define $g(t) = f(x(t), y(t))$, which in this case given above means\n$$g(t) = (x(t)^2-2x(t)+3)/y(t),$$\nand find its derivative with respect to time.\n\n\n```python\nt = sp.symbols('t');\n#xt, yt = sp.symbols('xt yt', cls=sp.Function);\nxt = sp.Function(\"x\")(t); # x(t)\nyt = sp.Function(\"y\")(t) # y(t)\n\ng = f.subs([(x,xt),(y,yt)]);\ngp = sp.diff(g,t);\nprint(g);\nprint(gp); \n```\n\n (x(t)**2 - 2*x(t) + 3)/y(t)\n (2*x(t)*Derivative(x(t), t) - 2*Derivative(x(t), t))/y(t) - (x(t)**2 - 2*x(t) + 3)*Derivative(y(t), t)/y(t)**2\n\n\n## Plotting symbolic functions\n\nThe sympy module has a `plot` method that knows how to plot symbolic functions of a single variable. The function `g` above with $x(t) = \\sin(t)$ and $y(t) = \\cos(2t)$ is a function of a single time variable `t`, and can be visualised as follows:\n\n\n```python\ngs = g.subs([(xt,sp.sin(t)), (yt,sp.cos(2*t))]);\nprint(gs);\nsp.plot(gs, (t,1,2));\n```\n\n (sin(t)**2 - 2*sin(t) + 3)/cos(2*t)\n\n\n\n
\n\n\nA roughly equivalent plot could be obtained numerically by creating a lambda function for the expression, evaluating it for a closely-spaced set of values of `t` over the required range, and using standard numerical plotting functions that draw straight lines between the calulated points. If you increase the number of calculated points over the interval then the approximation in the above graph becomes more accurate.\n\n\n```python\nimport matplotlib.pyplot as plt\nimport numpy as np\n%matplotlib notebook\n\ntv = np.linspace(1, 2, 10);\ngs_h = sp.lambdify(t, gs, modules=['numpy']);\ngstv = gs_h(tv);\nplt.plot(tv, gstv);\n\n# lambda functions are also called anonymous functions. You can think of creating one as just another way of defining a function. For example:\n# For example: say you had a symbolic expression, f = x^2\n# writing my_func = sp.lambdify(x,f);\n# is the same as going\n# def my_func(x):\n# x^2\n```\n\n\n \n\n\n\n\n\n\nThe sympy plot function is quite fragile, and might not always work. Symbolic math packages are amazing, but they're difficult to implement and are sometimes not robust: you'll find various postings on the internet that give instances of very good symbolic math engines giving a wrong result. In short, they are useful but you should be careful when using them.\n\nOne other nice thing about Jupyter notebooks is that a pretty-print method exists for symbolic expressions. After the appropriate setup, note the difference in output between the `print` and `display` methods below:\n\n\n```python\nfrom IPython.display import display\nsp.init_printing() # pretty printing\n\nprint(gs);\ndisplay(gs);\n```\n\n## Symbolic integration\n\nIntegration is also a standard function in sympy, so we can find for example the integral\n$$y(t) = \\int_{-10}^t x(\\lambda) d\\lambda$$\nfor $x(t) = e^{-t/10} \\cos(t)$:\n\n\n```python\nxt = sp.exp(-t/10)*sp.cos(t); # x(t)\nlamb = sp.symbols('lamb'); \nxl = xt.subs(t,lamb); # x(lamb)\n\nyt = sp.integrate(xl, (lamb, -10, t)); # indefinite integral\nyt\n\n# to get a definite integral over the range, say -10 to 0, you'd go yt = sp.integrate(xl, (lamb, -10, 0));\n# This would give a numeric value.\n# NOTE: don't forget about your initial conditions. The definite integral just gives the change in the variable over the\n# interval, so you need to add its initial state to this value get the true final state.\n```\n\n## Tasks\n\nThese tasks involve writing code, or modifying existing code, to meet the objectives described.\n\n1. Define the expression $y(t) = v_0 t - \\frac{1}{2} g t^2$ for some symbolic values of $v_0$ and $g$ using sympy. You should recognise this as the \"altitude\" of a particle moving under the influence of gravity, given that the initial velocity at time $t=0$ is $v_0$. Make a plot of the particle height in meters for $v_0 = 22.5m/s$ given $g = 9.8 m/s^2$, over the range $t=0$ to $t=5s$.

\n\n2. Use symbolic math and the `roots` method to find an expression for the zeros of the expression $y(t)$ above for the same set of conditions. Substitute to find the nonzero numerical value of $t$ for which your plot in the previous task crosses the x-axis.

\nFor help on the `roots` method, type `?sp.roots()` in an empty code cell and run it. The method takes a symbolic expression as input, and returns a Python dictionary object as the output. The roots are contained in the tags of the dictionary. Note that for a d

\n\n3. Use symbolic differentiation to find the vertical velocity of the particle in the previous task as a function of time, given the same conditions. Make a plot of this velocity over the same time range.

\n\n4. Suppose the acceleration of a particle is given by $a(t) = 0.2 + \\cos(t)$ for positive time. Use symbolic methods to find and plot the velocity $v(t)$ of the particle over the range $t=0$ to $t=5$ given the initial condition $v(0) = -0.3$. Then find and plot the position $s(t)$ of the particle over the same time period, given the additional auxiliary condition $s(0) = 0.1$.\n\n\n```python\n?sp.roots\n```\n", "meta": {"hexsha": "3ff211bf27d6d370f10a0c8edf369076d9dae254", "size": 120294, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "lab_symintro.ipynb", "max_stars_repo_name": "maxnvdm/notebooks", "max_stars_repo_head_hexsha": "c719a43d02e330bdc25dceea33d5e6c2b156e02d", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2019-07-17T09:03:42.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-01T05:28:21.000Z", "max_issues_repo_path": "lab_symintro.ipynb", "max_issues_repo_name": "maxnvdm/notebooks", "max_issues_repo_head_hexsha": "c719a43d02e330bdc25dceea33d5e6c2b156e02d", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "lab_symintro.ipynb", "max_forks_repo_name": "maxnvdm/notebooks", "max_forks_repo_head_hexsha": "c719a43d02e330bdc25dceea33d5e6c2b156e02d", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 20, "max_forks_repo_forks_event_min_datetime": "2017-08-21T12:06:52.000Z", "max_forks_repo_forks_event_max_datetime": "2020-10-02T16:52:18.000Z", "avg_line_length": 95.0940711462, "max_line_length": 53475, "alphanum_fraction": 0.7653997706, "converted": true, "num_tokens": 2012, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.951863227517834, "lm_q2_score": 0.8887587846530938, "lm_q1q2_score": 0.8459768052447214}} {"text": "# Neural Networks\n\n\n\n**[1. Neural Networks](#1.Neural_Networks)**\n * [1.1. Perceptron](#1.1.Perceptron)\n * [1.2. Sigmoid](#1.2.Sigmoid)\n \n**[2. Neural Networks Architecture](#2.Neural_Networks_Architecture)**\n\n**[3. Training Neural Network](#3.Training_Neural_Network)**\n * [3.1. Forward Propagation](#3.1.Forward_Propagation)\n * [3.2. Compute Error](#3.2.Compute_Error)\n * [3.3. Back Propagation](#3.3.Back_Propagation)\n * [3.4. Gradient Descent](#3.4.Gradient_Descent)\n * [3.5. Computational Graph](#3.5.Computational_Graph)\n * [3.6. Gradient_Checking](#3.6.Gradient_Checking)\n * [3.7. Parameter Update](#3.7.Parameter_Update)\n * [3.8. Learning Rate](#3.8.Learning_Rate)\n\n\n\n\n\n# 1. Neural Networks\n\n\n\nNeural networks (NN) are a broad family of algorithms that have formed the basis for the recent resurgence in the computational field called deep learning. Early work on neural networks actually began in the 1950s and 60s. And just recently, neural network has experienced a resurgence of interest, as deep learning has achieved impressive state-of-the-art results. \n\nNeural network is basically a mathematical model built from simple functions with changing parameters. Just like a biological neuron has dendrites to receive signals, a cell body to process them, and an axon to send signals out to other neurons, an artificial neuron has a number of input channels, a processing stage, and one output that can branch out to multiple other artificial neurons. Neurons are interconnected and pass message to each other.\n\nTo understand neural networks, Let's get started with **Perceptron**.\n\n
\n\n\n\n## 1.1. Perceptron\n\nA perceptron takes several binary inputs, $x_1,x_2,\u2026,x_n$, and produces a single binary output.\n\n
\n\nThe example above shows a perceptron taking three inputs $x_1, x_2, x_3$. Each input is given a $weight$ $W \\in \\mathbb{R}$ and it serves to express the importance of its corresponding input in the computation of output for that perceptron. The perceptron output, 0 or 1, is determined by the weighted sum $\\sum_i w_ix_i$ with respect to a $threshold$ value as follows:\n\n\\begin{equation}\n output = \\left\\{\n \\begin{array}{rl}\n 0 & \\text{if } \\sum_iw_ix_i \\leq \\text{threshold}\\\\\n 1 & \\text{if } \\sum_iw_ix_i > \\text{threshold}\n \\end{array} \\right.\n\\end{equation}\n\nThe weighted sum can be categorically defined as a dot product between $w$ and $x$ as follows: \n\n$$\\sum_i w_ix_i \\equiv w \\cdot x$$\n\nwhere $w$ and $x$ are vectors corresponding to weights and inputs respectively. Introducing a bias term $b \\equiv -threshold$ results in\n\n\\begin{equation}\n output = \\left\\{\n \\begin{array}{rl}\n 0 & \\text{if } w \\cdot x + b \\leq 0\\\\\n 1 & \\text{if } w \\cdot x + b > 0\n \\end{array} \\right.\n\\end{equation}\n\nYou can think of the $bias$ as a measure of how easy it is to get the perceptron to output 1. For a perceptron with a high positive $bias$, it is extremely easy for the perceptron to output 1. In constrast, if the $bias$ is relatively a negative value, it is difficult for the perceptron to output 1.\n\n
\n\nA way to think about the perceptron is that it is a device that makes **decisions** by weighing up evidence. \n\n\n\n```python\nimport numpy as np\nX = np.array([0, 1, 1])\nW = np.array([5, 1, -3])\nb=5\n\ndef perceptron_neuron(X, W, b):\n return int(X.dot(W)+b > 0)\n\nperceptron_neuron(X,W,b)\n```\n\n\n\n\n 1\n\n\n\n\n## 1.2. Sigmoid \n\nSmall changes to $weights$ and $bias$ of any perceptron in a network can cause the output to flip from 0 to 1 or 1 to 0. This flip can cause the behaviour of the rest of the network to change in a complicated way.\n\n\n```python\nx = np.array([100])\nb = np.array([9])\nw1 = np.array([-0.08])\nw2 = np.array([-0.09])\n\nprint(perceptron_neuron(x,w1,b))\nprint(perceptron_neuron(x,w2,b))\n\n```\n\n 1\n 0\n\n\nThe problem above can be overcome by using a Sigmoid neuron. It functions similarly to a Perceptron but modified such that small changes in $weights$ and $bias$ cause only a small change in the output.\n\nAs with a Perceptron, a Sigmoid neuron also computes $w \\cdot x + b $, but now with the Sigmoid function being incorporated as follows:\n\n\\begin{equation}\n z = w \\cdot x + b \\\\\n \\sigma(z) = \\frac{1}{1+e^{-z}}\n\\end{equation}\n\n
\n\nA Sigmoid function produces output between 0 and 1, and the figure below shows the function. If $z$ is large and positive, the output of a sigmoid neuron approximates to 1, just as it would for a perceptron. Alternatively if $z$ is highly negative, the output approxiates to 0.\n\n
\n\n\n```python\ndef sigmoid(x):\n return 1/(1 + np.exp(-x))\n\ndef sigmoid_neuron(X, W, b):\n z = X.dot(W)+b\n return sigmoid(z)\n\nprint(sigmoid_neuron(x,w1,b))\nprint(sigmoid_neuron(x,w2,b))\n```\n\n [0.73105858]\n [0.5]\n\n\nClick here to go back [Table of Content](#Table_of_Content).\n\n\n \n# 2. Neural Networks Architecture\n\nA neural network can take many forms. A typical architecture consists of an input layer (leftmost), an output layer (rightmost), and a middle layer (hidden layer). Each layer can have multiple neurons while the number of neurons in the output layer is dependent on the number of classes.\n\n
\n\n\nClick here to go back [Table of Content](#Table_of_Content).\n\n\n```python\n# Create dataset\nfrom sklearn.datasets import make_moons, make_circles\nimport matplotlib.pyplot as plt\nimport pandas as pd\n\nseed = 123\n\nnp.random.seed(seed)\nX, y = make_circles(n_samples=1000, factor=.5, noise=.1, random_state=seed)\n```\n\n\n```python\ncolors = {0:'red', 1:'blue'}\ndf = pd.DataFrame(dict(x=X[:,0], y=X[:,1], label=y))\n\nfig, ax = plt.subplots()\ngrouped = df.groupby('label')\nfor key, group in grouped:\n group.plot(ax=ax, kind='scatter', x='x', y='y', label=key, color=colors[key])\nplt.show()\n```\n\n\n\n## 3. Training Neural Network\n\nPreviously we have learnt that $weights$ express the importance of variables, and $bias$ is a threshold to control the behaviour of neurons. So, how can we determine these $weights$ and $bias$?\n\nConsider these steps: \n1. Since we do not know the ideal $weights$ and $bias$, we initialize them using random numbers **(parameters initialization).**\n2. Let the data flow through the network with these initialized $weights$ and $bias$ to get a predicted output. This process is known as **forward propagation**.\n3. Compare the predicted output with the actual output. An error is computed if there is a difference between them. A high error thus indicates that current $weights$ and $bias$ do not give an accurate prediction. **(compute error)**\n4. To fix these $weights$ and $bias$, a backward computation is carried out by finding the partial derivative of error with respect to each $weight$ and $bias$ and then updating their values accordingly. This process is known as **backpropagation**.\n5. Repeat steps (2) to (4) until the error is below a pre-defined threshold to obtain the optimized $weights$ and $bias$.\n\n\n\n\n### 3.1. Forward Propagation\n\n
\n\nThe model above has two neurons in the input layer, four neurons (also known as **activation units**) in the hidden layer, and one neuron in the output layer. \n\n$X = [x_1, x_2]$ is the input matrix\n\n$W^{j+1} =$ matrix of $weights$ controlling function mapping from layer $j$ to layer $j + 1$ \n\n\\begin{align}\n W^{j+1} \\equiv \n\\begin{bmatrix}\nw_{11} & w_{12} \\\\\nw_{21} & w_{22} \\\\\nw_{31} & w_{32} \\\\\nw_{41} & w_{42} \\\\\n\\end{bmatrix}^{j+1}\n\\end{align}\n$W^{j+1}_{kl}$, $k$ is node in layer $j+1$, $l$ is node in layer $j$\n\n$B^{j+1} = $ matrix of $bias$ controlling function mapping from layer $j$ to layer $j + 1$\n\n\\begin{align}\n B^{j+1} \\equiv \n\\begin{bmatrix}\nb_{1} & b_{2} & b_{3} & b_{4} \n\\end{bmatrix}^{j+1}\n\\end{align}\n\n$B^{j+1}_k$, $k$ is node in layer $j + 1$\n\nthe activation units can be label as $a_i^j =$ \"activation\" of unit $i$ in layer $j$. \n\nif $j=0,\\quad a_i^j, $ is equivalent to input layer. \n\n\nFinally the activation function in layer 1 can be denode as, \n\n\\begin{align}\n a_1^1 = \\sigma(W_{11}^1x_1+W_{12}^1x_2+B^1_{1}) \\\\\n a_2^1 = \\sigma(W_{21}^1x_1+W_{22}^1x_2+B^1_{2}) \\\\\n a_3^1 = \\sigma(W_{31}^1x_1+W_{32}^1x_2+B^1_{3}) \\\\\n a_4^1 = \\sigma(W_{41}^1x_1+W_{42}^1x_2+B^1_{4})\n\\end{align}\n\nSimplified using vectorization,\n\n\\begin{align}\n a^1 = \\sigma(X \\cdot W^{1T}+B^1) \\\\\n\\end{align}\n\n\\begin{align}\n output = \\sigma( a^1 \\cdot W^{2T}+B^2) \\\\\n\\end{align}\n\nImplement forward propagation and feed in the generated data\n\nTips:\n- Using numpy function *random.randn()* to generate a Gaussian distribution with mean 0, and variance 1.\n\n\n```python\n#step 1: parameters initialization\ndef initialize_params():\n params = {\n 'W1': np.random.randn(4,2),\n 'B1': np.random.randn(1,4),\n 'W2': np.random.randn(1,4),\n 'B2': np.random.randn(1,1),\n }\n return params\n```\n\n\n```python\n#step 2: forward propagation\nx_ = np.array([X[0]])\ny_ = np.array([y[0]])\n\nnp.random.seed(0)\nparams = initialize_params()\n\ndef forward(X, params):\n a1 = sigmoid(X.dot(params['W1'].T)+params['B1'])\n output = sigmoid(a1.dot(params['W2'].T)+params['B2'])\n cache={'a1':a1, 'params': params}\n return output, cache\n\noutput, cache = forward(x_, params)\nprint('Actual output: ', y_)\nprint('Predicted output: ',output)\n```\n\n Actual output: [0]\n Predicted output: [[0.91620003]]\n\n\n\n\n### 3.2. Compute Error\n\nError is also known as Loss or Cost.\n\n**Loss function** is usually a function defined on a data point, prediction and label, and measures the penalty. \n\n**Cost function** is usually more general. It might be a sum of loss functions over your training set plus some model complexity penalty (regularization).\n\nTo compute the error, we should first define a $cost$ $function$. For simplicity, we will use **One Half Mean Squared Error** as our cost function. The equation is listed below:\n\n\\begin{equation}\n MSE = \\frac{1}{2n} \\sum (\\hat y - y)^2\n\\end{equation}\n\nwhere $n$ is the number of training samples, $\\hat y$ is the predicted output, and $y$ is the actual output. A low cost results is returned if the predicted output is close to the actual output, which indicates a good measure of accuracy. \n\n\n```python\n#step 3: cost function\ndef mse(yhat, y):\n n = yhat.shape[0]\n return (1/(2*n)) * np.sum(np.square(yhat-y))\n\nmse(output, y_)\n```\n\n\n\n\n 0.4197112460136526\n\n\n\n\n\n### 3.3. Back Propagation\n\nNow we know that to get a good prediction, the cost should be as low/small as possible. 
To minimize the cost, we have to tune the $weights$ and $bias$, but how can we do that? Do we go with random trial and error or is there a better way to do it? Fortunately, there is a better way and it is called **Gradient Descent**.\n\n\n\n\n\n### 3.4. Gradient Descent\n\nGradient descent is an optimization algorithm that iteratively looks for optimal $weights$ and $bias$ so that the cost gets smaller and eventually equals zero.\n\nIn the interative process, the gradient (of the cost function with respect to $weights$ and $bias$) is computed. The gradient is the change in cost when $weights$ and $bias$ are changed. This helps us update $weights$ and $bias$ in the direction in which the cost is minimized.\n\nLet's recall the forward propagation equation:\n\n\\begin{align}\n a^1 = \\sigma(X \\cdot W^{1T}+B^1) \\\\\n output = \\sigma( a^1 \\cdot W^{2T}+B^2) \\\\\n cost = \\frac{1}{2n} \\sum (output - y)^2\n\\end{align}\n\nArrange them into a single equation, and $cost$, $L$ can be defined as follows:\n\n\\begin{align}\n L = \\frac{1}{2n} \\sum (\\sigma( \\sigma(X \\cdot W^{1T}+B^1) \\cdot W^{2T}+B^2) - y)^2\n\\end{align}\n\nFrom the equation, we want to find the gradient or derivative of $L$ with respect to $W^1, W^2, B^1, B^2$.\n\\begin{align}\n \\frac{\\partial L}{\\partial W^1}, \\frac{\\partial L}{\\partial W^2}, \\frac{\\partial L}{\\partial B^1}, \\frac{\\partial L}{\\partial B^2}\n\\end{align}\n\nComputation of partial derivatives of $L$ with respect to the $weights$ and $bias$ can become very complex if the layer of the network grows. To make it simple, we can actually break the equation into smaller compenents and use **chain rule** to derive the partial derivative.\n\n\n\n### 3.5. Computational Graph\nEventually, we can think of the forward propagation equation as a computational graph.\n\n
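For example, writing $z^2 = a^1 \\cdot W^{2T}+B^2$ for the pre-activation of the output layer (a symbol introduced here only for illustration), the chain rule splits $\\frac{\\partial L}{\\partial W^2}$ into simple local derivatives, one per node of that graph; the fully vectorised version is derived in the sections below:\n\n\\begin{align}\n \\frac{\\partial L}{\\partial W^2} = \\frac{\\partial L}{\\partial output} \\cdot \\frac{\\partial output}{\\partial z^2} \\cdot \\frac{\\partial z^2}{\\partial W^2}, \\quad \\text{where} \\quad \\frac{\\partial L}{\\partial output} = \\frac{1}{n}(output-y), \\quad \\frac{\\partial output}{\\partial z^2} = output(1-output)\n\\end{align}\n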
\n
\n
\n\n\n\n\n\n#### Scalar Example\n\\begin{align}\n a_1^1 = \\sigma(W_{11}^1x_1+W_{12}^1x_2+B^1_{1}) \\quad \\equiv \\quad \\frac{1}{1+\\exp^{-(W_1x_1+W_2x_2+B_1)}}\n\\end{align}\n\n
\n\n#### Note:\n\\begin{align}\n L \\quad &\\rightarrow \\quad \\frac{\\partial L}{\\partial L} = 1 \\\\\n L = \\frac{1}{2n} \\sum (output - y)^2 \\quad &\\rightarrow \\quad \\frac{\\partial L}{\\partial output} = \\frac{1}{n} (output - y) \\\\\n f(x) = e^x \\quad &\\rightarrow \\quad \\frac{\\partial f}{\\partial x} = e^x \\\\\n f(x) = xy \\quad &\\rightarrow \\quad \\frac{\\partial f}{\\partial x} = y, \\quad \\frac{\\partial f}{\\partial y} = x \\\\\n f(x) = 1/x \\quad &\\rightarrow \\quad \\frac{\\partial f}{\\partial x} = -1/x^2 \\\\\n f(x) = x+c \\quad &\\rightarrow \\quad \\frac{\\partial f}{\\partial x} = 1 \\\\\n \\sigma(x) = \\frac{1}{1+e^{-x}} \\quad &\\rightarrow \\quad \\frac{\\partial f}{\\partial \\sigma} = \\sigma(1-\\sigma)\n\\end{align}\n\n\n```python\nprint('X: ', X[0])\nprint('W11: ', params['W1'][0])\nprint('B11: ', params['B1'][0][0])\n\n# how is the graph look like in our case?\n# calculate forward and backward flows\n```\n\n X: [-0.08769568 1.08597835]\n W11: [1.76405235 0.40015721]\n B11: -0.10321885179355784\n\n\n\n\n#### A Vectorized Example\n\n\\begin{align}\n a^1 = \\sigma(X \\cdot W^{1T}+B^1) \\\\\n\\end{align}\n\n
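One possible way to answer the prompt in the cell above is to push the printed numbers through the forward and backward passes by hand. The following is only an illustrative sketch that hard-codes the values shown in the previous output:\n\n```python\nimport numpy as np\n\n# values printed above (first input sample, first hidden unit)\nx1, x2 = -0.08769568, 1.08597835\nw11, w12 = 1.76405235, 0.40015721\nb11 = -0.10321885179355784\n\n# forward flow\nz = w11*x1 + w12*x2 + b11\na = 1/(1 + np.exp(-z))\nprint('z =', z, ' a =', a)\n\n# backward flow: local gradients combined with the chain rule\n# (assuming an upstream gradient of 1 flowing into a)\nda_dz = a*(1 - a)                        # sigmoid gate\ndz_dw11, dz_dw12, dz_db11 = x1, x2, 1.0  # linear gate\nprint('da/dw11 =', da_dz*dz_dw11)\nprint('da/dw12 =', da_dz*dz_dw12)\nprint('da/db11 =', da_dz*dz_db11)\n```\n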
\n\n#### Note:\n\\begin{align}\n q = X\\cdot(W^T) \\quad &\\rightarrow \\quad \\frac{\\partial f}{\\partial X} = \\frac{\\partial f}{\\partial q} \\cdot W , \\quad \\frac{\\partial f}{\\partial W} = X^T \\cdot \\frac{\\partial f}{\\partial q} \\\\\n l = q+B \\quad &\\rightarrow \\quad \\frac{\\partial f}{\\partial B} = \\begin{bmatrix}1 & 1 \\end{bmatrix} \\cdot \\frac{\\partial f}{\\partial l} , \\quad \\frac{\\partial f}{\\partial q} = \\frac{\\partial f}{\\partial l}\n\\end{align}\n\n\n\n```python\ndef dmse(output, y):\n return (output - y)/output.shape[0]\n \ndef backward(X, output, y, cache):\n grads={}\n a1=cache['a1']\n params=cache['params']\n \n dloss = dmse(output, y)\n \n doutput = output*(1-output)*dloss\n \n #compute gradient of B2 and W2\n dW2 = a1.T.dot(doutput)\n dB2 = np.sum(doutput, axis=0, keepdims=True)\n \n dX2 = doutput.dot(params['W2'])\n da1 = a1*(1-a1)*dX2\n \n #compute gradient of B1 and W1\n dW1 = X.T.dot(da1)\n dB1 = np.sum(da1, axis=0, keepdims=True)\n \n grads['W1'] = dW1.T\n grads['W2'] = dW2.T\n grads['B1'] = dB1\n grads['B2'] = dB2\n \n return grads\n \n```\n\n\n```python\nX_ = X[:3]\nY_ = y[:3].reshape(-1,1)\n\ndef step(X,y,params):\n\n output, cache = forward(X, params)\n\n cost = mse(output, y)\n\n grads = backward(X, output, y, cache)\n\n return (cost, grads)\n\nnp.random.seed(0)\nparams = initialize_params()\n\ncost, grads = step(X_, Y_, params)\n\nprint(cost)\nprint(grads)\n```\n\n 0.4184302966572367\n {'W1': array([[-0.00208634, 0.0076568 ],\n [-0.00053028, 0.00068688],\n [-0.0004186 , 0.00346495],\n [-0.00170289, 0.00300797]]), 'W2': array([[0.03212358, 0.05777044, 0.02222057, 0.05217427]]), 'B1': array([[0.01009638, 0.00114439, 0.00470392, 0.00425563]]), 'B2': array([[0.07033491]])}\n\n\n\n\n### 3.6. Gradient Checking\nWe can use Numerical Gradient to evaluate the gradient (Analytical gradient) that we have calculated.\n\n
\n\nConsider the image above, where the red line is our function, the blue line is the gradient derived from the point $x$, the green line is the approximated gradient from the point of $x$, and $h$ is the step size. It can then be shown that:\n\n\n$$ \\frac{\\partial f}{\\partial x} \\approx \\frac{Y_C-Y_B}{X_C-X_B} \\quad = \\quad \\frac{f(x+h) - f(x-h)}{(x+h)-(x-h)} \\quad = \\quad \\frac{f(x+h) - f(x-h)}{2h} $$\n\n\n\n##### EXAMPLE\n\n\n\n```python\nw1 = 3; x1 = 1; w2 = 2; x2 = -2; b1 = 2\n\nh = 1e-4\n\ndef f(w1, x1, w2, x2, b1):\n linear = (w1*x1)+(w2*x2)+b1\n return 1/(1+np.exp(-linear))\n\n\nnum_grad_x1 = (f(w1, x1+h, w2, x2, b1) - f(w1, x1-h, w2, x2, b1))/(2*h)\nprint(num_grad_x1)\n\n```\n\n 0.5898357981348745\n\n\n\n```python\n# vectorized gradient checking\ndef gradient_check(f, x, h=0.00001):\n grad = np.zeros_like(x)\n # iterate over all indexes in x\n it = np.nditer(x, flags=['multi_index'])\n while not it.finished:\n # evaluate function at x+h\n ix = it.multi_index\n oldval = x[ix]\n x[ix] = oldval + h # increment by h\n fxph = f(x) # evalute f(x + h)\n x[ix] = oldval - h\n fxnh = f(x) # evaluate f(x - h)\n x[ix] = oldval # restore\n\n # compute the partial derivative with centered formula\n grad[ix] = ((fxph - fxnh) / (2 * h)).sum() # the slope\n it.iternext() # step to next dimension\n\n return grad\n\ndef rel_error(x, y):\n return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))\n```\n\n\n```python\nfor param_name in grads:\n f = lambda W: step(X_, Y_, params)[0]\n \n param_grad_num = gradient_check(f, params[param_name])\n print('%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name])))\n\n```\n\n W1 max relative error: 3.437088e-09\n W2 max relative error: 5.328225e-11\n B1 max relative error: 2.182159e-09\n B2 max relative error: 3.440171e-11\n\n\n\n\n### 3.7. Parameter Update\nAfter getting the gradient, parameters are updated depend on the gradients; if it is positive, then the updated parameters reduces in value, and if it is negative, then the updated parameter increases in value. Regardless of the gradient, the main goal is to reach the global minimum.\n\n\n\n### 3.8. Learning Rate\nLarge or small updates are controlled by the learning rate known as $\\alpha$. 
Hence, gradient descent equations are as follows:\n\n\\begin{align}\n W^1 &= W^1 - \\alpha * \\frac{\\partial L}{\\partial W^1} \\\\\n B^1 &= B^1 - \\alpha * \\frac{\\partial L}{\\partial B^1} \\\\\n W^2 &= W^2 - \\alpha * \\frac{\\partial L}{\\partial W^2} \\\\\n B^2 &= B^2 - \\alpha * \\frac{\\partial L}{\\partial B^2} \\\\\n\\end{align}\n\n\n\n```python\ndef update_parameter(params, grads, learning_rate):\n params['W1'] += -learning_rate * grads['W1']\n params['B1'] += -learning_rate * grads['B1']\n params['W2'] += -learning_rate * grads['W2']\n params['B2'] += -learning_rate * grads['B2']\n \n```\n\n\n```python\nparams = initialize_params()\n\ndef train(X,y,learning_rate=0.1,num_iters=30000,batch_size=256):\n num_train = X.shape[0]\n costs = []\n for it in range(num_iters):\n random_indices = np.random.choice(num_train, batch_size)\n X_batch = X[random_indices]\n y_batch = y[random_indices]\n \n cost, grads = step(X_batch, y_batch, params)\n costs.append(cost)\n \n # update parameters \n update_parameter(params, grads, learning_rate)\n \n return costs\n\ncosts = train(X,y.reshape(-1,1))\nplt.plot(costs)\n```\n\n\n```python\ndef predict(X, params):\n W1 = params['W1']\n B1 = params['B1']\n W2 = params['W2']\n B2 = params['B2']\n \n output, _ = forward(X, params)\n return output\n```\n\n\n```python\n# test on training samples\ny_pred = []\nfor i in range(len(X)):\n pred = np.squeeze(predict(X[i],params)).round()\n y_pred.append(pred)\n \nplt.scatter(X[:,0], X[:,1], c=y_pred, linewidths=0, s=20);\n```\n\n\n```python\n# test on new samples\nX_new, _ = make_circles(n_samples=1000, factor=.5, noise=.1)\ny_pred = []\n\nfor i in range(len(X_new)):\n pred = np.squeeze(predict(X_new[i],params)).round()\n y_pred.append(pred)\n \nplt.scatter(X_new[:,0], X_new[:,1], c=y_pred, linewidths=0, s=20);\n```\n\n# References:\n\n- http://neuralnetworksanddeeplearning.com/\n\n- http://cs231n.github.io/optimization-2/\n\n- http://kineticmaths.com/index.php?title=Numerical_Differentiation\n\n- https://google-developers.appspot.com/machine-learning/crash-course/backprop-scroll/\n\nClick here to go back [Table of Content](#Table_of_Content).\n", "meta": {"hexsha": "cbf67ea71443059e40d8aa49aedc89b3082dd38d", "size": 221487, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "01-neural-networks/neural-network-training.ipynb", "max_stars_repo_name": "williamardianto/neural-network-from-scratch", "max_stars_repo_head_hexsha": "9e01bcd2d6663a074e8152a83ffe671992b14a89", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "01-neural-networks/neural-network-training.ipynb", "max_issues_repo_name": "williamardianto/neural-network-from-scratch", "max_issues_repo_head_hexsha": "9e01bcd2d6663a074e8152a83ffe671992b14a89", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "01-neural-networks/neural-network-training.ipynb", "max_forks_repo_name": "williamardianto/neural-network-from-scratch", "max_forks_repo_head_hexsha": "9e01bcd2d6663a074e8152a83ffe671992b14a89", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 225.5468431772, "max_line_length": 74236, "alphanum_fraction": 0.90220645, "converted": true, "num_tokens": 
6298, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9532750373915658, "lm_q2_score": 0.8872045907347108, "lm_q1q2_score": 0.8457499894066003}} {"text": "```python\nimport pyprob\nfrom pyprob import Model\nfrom pyprob.distributions import Normal\n\nimport torch\nimport numpy as np\nimport math\nimport matplotlib.pyplot as plt\n%matplotlib inline\nfig = plt.figure();\n```\n\n\n
\n\n\n# Defining the model\nFirst, we define the model as a probabilistic program inheriting from `pyprob.Model`. Models inherit from `torch.nn.Module` and can be potentially trained with gradient-based optimization (not covered in this example).\n\nThe `forward` function can have any number and type of arguments as needed.\n\n\n```python\nclass GaussianUnknownMean(Model):\n def __init__(self):\n super().__init__(name='Gaussian with unknown mean') # give the model a name\n self.prior_mean = 1\n self.prior_std = math.sqrt(5)\n self.likelihood_std = math.sqrt(2)\n\n def forward(self): # Needed to specifcy how the generative model is run forward\n # sample the (latent) mean variable to be inferred:\n mu = pyprob.sample(Normal(self.prior_mean, self.prior_std)) # NOTE: sample -> denotes latent variables\n\n # define the likelihood\n likelihood = Normal(mu, self.likelihood_std)\n\n # Lets add two observed variables\n # -> the 'name' argument is used later to assignment values:\n pyprob.observe(likelihood, name='obs0') # NOTE: observe -> denotes observable variables\n pyprob.observe(likelihood, name='obs1')\n\n # return the latent quantity of interest\n return mu\n \nmodel = GaussianUnknownMean()\n```\n\n# Finding the correct posterior analytically\nSince all distributions are gaussians in this model, we can analytically compute the posterior and we can compare the true posterior to the inferenced one.\n\nAssuming that the prior and likelihood are $p(x) = \\mathcal{N}(\\mu_0, \\sigma_0)$ and $p(y|x) = \\mathcal{N}(x, \\sigma)$ respectively and, $y_1, y_2, \\ldots y_n$ are the observed values, the posterior would be $p(x|y) = \\mathcal{N}(\\mu_p, \\sigma_p)$ where,\n$$\n\\begin{align}\n\\sigma_{p}^{2} & = \\frac{1}{\\frac{n}{\\sigma^2} + \\frac{1}{\\sigma_{0}^{2}}} \\\\\n\\mu_p & = \\sigma_{p}^{2} \\left( \\frac{\\mu_0}{\\sigma_{0}^{2}} + \\frac{n\\overline{y}}{\\sigma^2} \\right)\n\\end{align}\n$$\nThe following class implements computing this posterior distribution. 
We also implement some helper functions and variables for plotting the correct posterior and prior.\n\n\n```python\ndef plot_function(min_val, max_val, func, *args, **kwargs):\n x = np.linspace(min_val,max_val,int((max_val-min_val)*50))\n plt.plot(x, np.vectorize(func)(x), *args, **kwargs)\n\ndef get_dist_pdf(dist):\n return lambda x: math.exp(dist.log_prob(x))\n \nclass CorrectDistributions:\n def __init__(self, model):\n self.prior_mean = model.prior_mean\n self.prior_std = model.prior_std\n self.likelihood_std = model.likelihood_std\n self.prior_dist = Normal(self.prior_mean, self.prior_std)\n \n @property\n def observed_list(self):\n return self.__observed_list\n\n @observed_list.setter\n def observed_list(self, new_observed_list):\n self.__observed_list = new_observed_list\n self.construct_correct_posterior()\n \n def construct_correct_posterior(self):\n n = len(self.observed_list)\n posterior_var = 1/(n/self.likelihood_std**2 + 1/self.prior_std**2)\n posterior_mu = posterior_var * (self.prior_mean/self.prior_std**2 + n*np.mean(self.observed_list)/self.likelihood_std**2)\n self.posterior_dist = Normal(posterior_mu, math.sqrt(posterior_var))\n\n def prior_pdf(self, model, x):\n p = Normal(model.prior_mean,model.prior_stdd)\n return math.exp(p.log_prob(x))\n\n def plot_posterior(self, min_val, max_val):\n if not hasattr(self, 'posterior_dist'):\n raise AttributeError('observed values are not set yet, and posterior is not defined.')\n plot_function(min_val, max_val, get_dist_pdf(self.posterior_dist), label='correct posterior', color='orange')\n\n\n def plot_prior(self, min_val, max_val):\n plot_function(min_val, max_val, get_dist_pdf(self.prior_dist), label='prior', color='green')\n\ncorrect_dists = CorrectDistributions(model)\n```\n\n# Prior distribution\nWe inspect the prior distribution to see if it behaves in the way we intended. First we construct an `Empirical` distribution with forward samples from the model.\n\nNote: Extra arguments passed to `prior_distribution` will be forwarded to model's `forward` function.\n\n\n```python\nprior = model.prior_distribution(num_traces=1000)\n```\n\n Time spent | Time remain.| Progress | Trace | Traces/sec\n 0d:00:00:00 | 0d:00:00:00 | #################### | 1000/1000 | 1,293.24 \n\n\nWe can plot a historgram of these samples that are held by the `Empirical` distribution.\n\n\n```python\nprior.plot_histogram(show=False, alpha=0.75, label='emprical prior')\ncorrect_dists.plot_prior(min(prior.values_numpy()),max(prior.values_numpy()))\nplt.legend();\n```\n\n# Posterior inference with importance sampling\nFor a given set of observations, we can get samples from the posterior distribution.\n\n\n```python\ncorrect_dists.observed_list = [8, 9] # Observations\n# sample from posterior (5000 samples)\nposterior = model.posterior_distribution(\n num_traces=5000, # the number of samples estimating the posterior\n inference_engine=pyprob.InferenceEngine.IMPORTANCE_SAMPLING, # specify which inference engine to use\n observe={'obs0': correct_dists.observed_list[0],\n 'obs1': correct_dists.observed_list[1]} # assign values to the observed values\n )\n```\n\n Time spent | Time remain.| Progress | Trace | Traces/sec\n 0d:00:00:02 | 0d:00:00:00 | #################### | 5000/5000 | 1,840.04 \n\n\nRegular importance sampling uses proposals from the prior distribution. We can see this by plotting the histogram of the posterior distribution without using the importance weights. 
As expected, this is the same with the prior distribution.\n\n\n```python\nposterior_unweighted = posterior.unweighted()\nposterior_unweighted.plot_histogram(show=False, alpha=0.75, label='empirical proposal')\ncorrect_dists.plot_prior(min(posterior_unweighted.values_numpy()),\n max(posterior_unweighted.values_numpy()))\ncorrect_dists.plot_posterior(min(posterior_unweighted.values_numpy()),\n max(posterior_unweighted.values_numpy()))\nplt.legend();\n```\n\nWhen we do use the weights, we end up with the correct posterior distribution. The following shows the sampled posterior with the correct posterior (orange curve).\n\n\n```python\nposterior.plot_histogram(show=False, alpha=0.75, bins=50, label='inferred posterior')\ncorrect_dists.plot_posterior(min(posterior.values_numpy()),\n max(posterior_unweighted.values_numpy()))\nplt.legend();\n```\n\nIn practice, it is advised to use methods of the `Empirical` posterior distribution instead of dealing with the weights directly, which ensures that the weights are used in the correct way.\n\nFor instance, we can get samples from the posterior, compute its mean and standard deviation, and evaluate expectations of a function under the distribution:\n\n\n```python\nprint(posterior.sample())\n```\n\n tensor(7.0136)\n\n\n\n```python\nprint(posterior.mean)\n```\n\n tensor(7.1378)\n\n\n\n```python\nprint(posterior.stddev)\n```\n\n tensor(0.9106)\n\n\n\n```python\nprint(posterior.expectation(lambda x: torch.sin(x)))\n```\n\n tensor(0.4846)\n\n\n# Inference compilation\nInference compilation is a technique where a deep neural network is used for parameterizing the proposal distribution in importance sampling (https://arxiv.org/abs/1610.09900). This neural network, which we call inference network, is automatically generated and trained with data sampled from the model.\n\nWe can learn an inference network for our model.\n\n\n```python\nmodel.learn_inference_network(num_traces=20000,\n observe_embeddings={'obs0' : {'dim' : 32},\n 'obs1': {'dim' : 32}},\n inference_network=pyprob.InferenceNetwork.LSTM)\n```\n\n Creating new inference network...\n Observable obs0: observe embedding not specified, using the default FEEDFORWARD.\n Observable obs0: embedding depth not specified, using the default 2.\n Observable obs1: observe embedding not specified, using the default FEEDFORWARD.\n Observable obs1: embedding depth not specified, using the default 2.\n Observe embedding dimension: 64\n Train. time | Epoch| Trace | Init. loss| Min. loss | Curr. loss| T.since min | Traces/sec\n New layers, address: 16__forward__mu__Normal__1, distribution: Normal\n Total addresses: 1, distribution types: 1, parameters: 1,643,583\n 0d:00:00:33 | 1 | 20,032 | +2.40e+00 | +1.10e+00 | \u001b[32m+1.26e+00\u001b[0m | 0d:00:00:05 | 1,003.0 \n\n\nWe now construct the posterior distribution using samples from inference compilation, using the trained inference network.\n\nA much smaller number of samples are enough (200 vs. 5000) because the inference network provides good proposals based on the given observations. 
We can see that the proposal distribution given by the inference network is doing a job much better than the prior, by plotting the posterior samples without the importance weights, for a selection of observations.\n\n\n```python\n# sample from posterior (200 samples)\nposterior = model.posterior_distribution(\n num_traces=200, # the number of samples estimating the posterior\n inference_engine=pyprob.InferenceEngine.IMPORTANCE_SAMPLING_WITH_INFERENCE_NETWORK, # specify which inference engine to use\n observe={'obs0': correct_dists.observed_list[0],\n 'obs1': correct_dists.observed_list[1]} # assign values to the observed values\n )\n```\n\n Time spent | Time remain.| Progress | Trace | Traces/sec\n 0d:00:00:00 | 0d:00:00:00 | #################### | 200/200 | 335.57 \n\n\n\n```python\nposterior_unweighted = posterior.unweighted()\nposterior_unweighted.plot_histogram(show=False, bins=50, alpha=0.75, label='empirical proposal')\n\ncorrect_dists.plot_posterior(min(posterior.values_numpy()),\n max(posterior.values_numpy()))\nplt.legend();\n```\n\nWe can see that the proposal distribution given by the inference network is already a good estimate to the true posterior which makes the inferred posterior a much better estimate than the prior, even using much less number of samples.\n\n\n```python\nposterior.plot_histogram(show=False, bins=50, alpha=0.75, label='inferred posterior')\ncorrect_dists.plot_posterior(min(posterior.values_numpy()),\n max(posterior.values_numpy()))\nplt.legend();\n```\n\nInference compilation performs amortized inferece which means, the same trained network provides proposal distributions for any observed values.\n\nWe can try performing inference using the same trained network with different observed values.\n\n\n```python\ncorrect_dists.observed_list = [12, 10] # New observations\n\nposterior = model.posterior_distribution(\n num_traces=200,\n inference_engine=pyprob.InferenceEngine.IMPORTANCE_SAMPLING_WITH_INFERENCE_NETWORK, # specify which inference engine to use\n observe={'obs0': correct_dists.observed_list[0],\n 'obs1': correct_dists.observed_list[1]}\n )\n```\n\n Time spent | Time remain.| Progress | Trace | Traces/sec\n 0d:00:00:00 | 0d:00:00:00 | #################### | 200/200 | 273.36 \n\n\n\n```python\nposterior_unweighted = posterior.unweighted()\nposterior_unweighted.plot_histogram(show=False, bins=50, alpha=0.75, label='empirical proposal')\n\ncorrect_dists.plot_posterior(min(posterior.values_numpy()),\n max(posterior.values_numpy()))\nplt.legend();\n```\n\n\n```python\nposterior.plot_histogram(show=False, bins=50, alpha=0.75, label='inferred posterior')\ncorrect_dists.plot_posterior(min(posterior.values_numpy()),\n max(posterior.values_numpy()))\nplt.legend();\n```\n", "meta": {"hexsha": "38dae4a14b7c1f527acd70e5343159fc8869f094", "size": 211604, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "examples/gaussian_unknown_mean.ipynb", "max_stars_repo_name": "ammunk/pyprob", "max_stars_repo_head_hexsha": "56a165b5c01edd16471f4cc29891b5f5fd163c10", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2019-01-08T19:36:00.000Z", "max_stars_repo_stars_event_max_datetime": "2021-04-07T02:32:40.000Z", "max_issues_repo_path": "examples/gaussian_unknown_mean.ipynb", "max_issues_repo_name": "ammunk/pyprob", "max_issues_repo_head_hexsha": "56a165b5c01edd16471f4cc29891b5f5fd163c10", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, 
"max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "examples/gaussian_unknown_mean.ipynb", "max_forks_repo_name": "ammunk/pyprob", "max_forks_repo_head_hexsha": "56a165b5c01edd16471f4cc29891b5f5fd163c10", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 343.512987013, "max_line_length": 32464, "alphanum_fraction": 0.9302659685, "converted": true, "num_tokens": 2864, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9196425377849806, "lm_q2_score": 0.9196425284247979, "lm_q1q2_score": 0.8457423886955773}} {"text": "\n\n# Application of 2nd order Runge Kutta to Populations Equations\n\nThis notebook implements the 2nd Order Runge Kutta method for three different population intial value problems.\n\n# 2nd Order Runge Kutta\nThe general 2nd Order Runge Kutta method for to the first order differential equation\n\\begin{equation} y^{'} = f(t,y), \\end{equation}\nnumerical approximates $y$ the at time point $t_i$ as $w_i$\nwith the formula:\n\\begin{equation} w_{i+1}=w_i+\\frac{h}{2}\\big[k_1+k_2],\\end{equation}\nfor $i=0,...,N-1$, where \n\\begin{equation}k_1=f(t_i,w_i)\\end{equation}\nand\n\\begin{equation}k_2=f(t_i+h,w_i+hk_1)\\end{equation}\nand $h$ is the stepsize.\n\nTo illustrate the method we will apply it to three intial value problems:\n## 1. Linear \nConsider the linear population Differential Equation\n\\begin{equation} y^{'}=0.1y, \\ \\ (2000 \\leq t \\leq 2020), \\end{equation}\nwith the initial condition,\n\\begin{equation}y(2000)=6.\\end{equation}\n\n## 2. Non-Linear Population Equation \nConsider the non-linear population Differential Equation\n\\begin{equation} y^{'}=0.2y-0.01y^2, \\ \\ (2000 \\leq t \\leq 2020), \\end{equation}\nwith the initial condition,\n\\begin{equation}y(2000)=6.\\end{equation}\n\n## 3. Non-Linear Population Equation with an oscillation \nConsider the non-linear population Differential Equation with an oscillation \n\\begin{equation} y^{'}=0.2y-0.01y^2+\\sin(2\\pi t), \\ \\ (2000 \\leq t \\leq 2020), \\end{equation}\nwith the initial condition,\n\\begin{equation}y(2000)=6.\\end{equation}\n\n#### Setting up Libraries\n\n\n```python\n## Library\nimport numpy as np\nimport math \nimport pandas as pd\n%matplotlib inline\nimport matplotlib.pyplot as plt # side-stepping mpl backend\nimport matplotlib.gridspec as gridspec # subplots\nimport warnings\n\nwarnings.filterwarnings(\"ignore\")\n\n\n```\n\n## Discrete Interval\nThe continuous time $a\\leq t \\leq b $ is discretised into $N$ points seperated by a constant stepsize\n\\begin{equation} h=\\frac{b-a}{N}.\\end{equation}\nHere the interval is $2000\\leq t \\leq 2020,$ \n\\begin{equation} h=\\frac{2020-2000}{200}=0.1.\\end{equation}\nThis gives the 201 discrete points:\n\\begin{equation} t_0=2000, \\ t_1=2000.1, \\ ... t_{200}=2020. \\end{equation}\nThis is generalised to \n\\begin{equation} t_i=2000+i0.1, \\ \\ \\ i=0,1,...,200.\\end{equation}\nThe plot below shows the discrete time steps:\n\n\n```python\nN=200\nt_end=2020.0\nt_start=2000.0\nh=((t_end-t_start)/N)\nt=np.arange(t_start,t_end+h/2,h)\nfig = plt.figure(figsize=(10,4))\nplt.plot(t,0*t,'o:',color='red')\nplt.title('Illustration of discrete time points for h=%s'%(h))\nplt.show()\n```\n\n# 1. 
Linear Population Equation\n## Exact Solution \nThe linear population equation\n\\begin{equation}y^{'}=0.1y, \\ \\ (2000 \\leq t \\leq 2020), \\end{equation}\nwith the initial condition,\n\\begin{equation}y(2000)=6.\\end{equation}\nhas a known exact (analytic) solution\n\\begin{equation} y=6e^{0.1(t-2000)}. \\end{equation}\n\n## Specific 2nd Order Runge Kutta \nTo write the specific 2nd Order Runge Kutta method for the linear population equation we need \n\\begin{equation}f(t,y)=0.1y.\\end{equation}\n\n\n```python\ndef linfun(t,w):\n ftw=0.1*w\n return ftw\n```\n\nthis gives\n\\begin{equation}k_1=f(t_i,w_i)=0.lw_i,\\end{equation}\n\\begin{equation}k_2=f(t_i+h,w_i+hk_1)=0.1(w_i+hk_1),\\end{equation}\nand the difference equation\n\\begin{equation}w_{i+1}=w_{i}+\\frac{h}{2}(k_1+k_2)\\end{equation}\nfor $i=0,...,199$, where $w_i$ is the numerical approximation of $y$ at time $t_i$, with step size $h$ and the initial condition\n\\begin{equation}w_0=6.\\end{equation}\n\n\n```python\nw=np.zeros(N+1)\nw[0]=6.0\n## 2nd Order Runge Kutta\nfor k in range (0,N):\n k1=linfun(t[k],w[k])\n k2=linfun(t[k]+h,w[k]+h*k1)\n w[k+1]=w[k]+h/2*(k1+k2)\n```\n\n## Plotting Results\n\n\n```python\ny=6*np.exp(0.1*(t-2000))\nfig = plt.figure(figsize=(8,4))\nplt.plot(t,w,'o:',color='purple',label='Runge Kutta')\nplt.plot(t,y,'s:',color='black',label='Exact')\nplt.legend(loc='best')\nplt.show()\n```\n\n## Table\nThe table below shows the time, the Runge Kutta numerical approximation, $w$, the exact solution, $y$, and the exact error $|y(t_i)-w_i|$ for the linear population equation:\n\n\n```python\n\nd = {'time t_i': t[0:10], 'Runge Kutta':w[0:10],'Exact (y)':y[0:10],'Exact Error':np.abs(np.round(y[0:10]-w[0:10],10))}\ndf = pd.DataFrame(data=d)\ndf\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
time t_iRunge KuttaExact (y)Exact Error
02000.06.0000006.0000000.000000
12000.16.0603006.0603010.000001
22000.26.1212066.1212080.000002
32000.36.1827246.1827270.000003
42000.46.2448616.2448650.000004
52000.56.3076216.3076270.000005
62000.66.3710136.3710190.000006
72000.76.4350426.4350490.000007
82000.86.4997146.4997220.000009
92000.96.5650366.5650460.000010
\n
\n\n\n\n## 2. Non-Linear Population Equation \n\\begin{equation} y^{'}=0.2y-0.01y^2, \\ \\ (2000 \\leq t \\leq 2020), \\end{equation}\nwith the initial condition,\n\\begin{equation}y(2000)=6.\\end{equation}\n## Specific 2nd Order Runge Kutta for the Non-Linear Population Equation\nTo write the specific 2nd Order Runge Kutta method we need\n\\begin{equation}f(t,y)=0.2y-0.01y^2,\\end{equation}\nthis gives\n\\begin{equation}k_1=f(t_i,w_i)=0.2w_i-0.01w_i^2,\\end{equation}\n\\begin{equation}k_2=f(t_i+h,w_i+hk_1)=0.2(w_i+hk_1)-0.01(w_i+hk_1)^2,\\end{equation}\nand the difference equation\n\\begin{equation}w_{i+1}=w_{i}+\\frac{h}{2}(k_1+k_2)\\end{equation}\nfor $i=0,...,199$, where $w_i$ is the numerical approximation of $y$ at time $t_i$, with step size $h$ and the initial condition\n\\begin{equation}w_0=6.\\end{equation}\n\n\n```python\ndef nonlinfun(t,w):\n ftw=0.2*w-0.01*w*w\n return ftw\n```\n\n\n```python\nw=np.zeros(N+1)\nw[0]=6.0\n## 2nd Order Runge Kutta\nfor k in range (0,N):\n k1=nonlinfun(t[k],w[k])\n k2=nonlinfun(t[k]+h,w[k]+h*k1)\n w[k+1]=w[k]+h/2*(k1+k2)\n```\n\n## Results\nThe plot below shows the Runge Kutta numerical approximation, $w$ (circles) for the non-linear population equation:\n\n\n```python\nfig = plt.figure(figsize=(8,4))\nplt.plot(t,w,'o:',color='purple',label='Runge Kutta')\nplt.legend(loc='best')\nplt.show()\n```\n\n## Table\nThe table below shows the time and the Runge Kutta numerical approximation, $w$, for the non-linear population equation:\n\n\n```python\nd = {'time t_i': t[0:10], \n 'Runge Kutta':w[0:10]}\ndf = pd.DataFrame(data=d)\ndf\n```\n\n\n\n\n
|    | time t_i | Runge Kutta |
|----|----------|-------------|
| 0  | 2000.0   | 6.000000    |
| 1  | 2000.1   | 6.084332    |
| 2  | 2000.2   | 6.169328    |
| 3  | 2000.3   | 6.254977    |
| 4  | 2000.4   | 6.341270    |
| 5  | 2000.5   | 6.428197    |
| 6  | 2000.6   | 6.515747    |
| 7  | 2000.7   | 6.603909    |
| 8  | 2000.8   | 6.692672    |
| 9  | 2000.9   | 6.782025    |
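Although the problem sheet treats this equation purely numerically, $y'=0.2y-0.01y^2$ is a logistic equation with a closed-form solution, so the approximation can still be checked. With $y(2000)=6$ the standard logistic solution gives
\begin{equation} y(t)=\frac{20}{1+\frac{7}{3}e^{-0.2(t-2000)}}. \end{equation}
The sketch below is an addition to the sheet and compares this with the Runge Kutta values; it assumes `t` and `w` from the cells above and that `matplotlib.pyplot` is imported as `plt`.

```python
# Added cross-check against the exact logistic solution (assumes t and w from above).
import numpy as np

y_logistic = 20 / (1 + (7 / 3) * np.exp(-0.2 * (t - 2000)))
print("max |y(t_i) - w_i| =", np.max(np.abs(y_logistic - w)))

fig = plt.figure(figsize=(8,4))
plt.plot(t, w, 'o:', color='purple', label='Runge Kutta')
plt.plot(t, y_logistic, 's:', color='black', label='Exact (logistic)')
plt.legend(loc='best')
plt.show()
```

As a spot check, the formula gives $y(2000.1)\approx 6.0844$, which matches the tabulated Runge Kutta value 6.084332 to three decimal places.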
\n\n## 3. Non-Linear Population Equation with an oscillation \n\begin{equation} y^{'}=0.2y-0.01y^2+\sin(2\pi t), \ \ (2000 \leq t \leq 2020), \end{equation}\nwith the initial condition,\n\begin{equation}y(2000)=6.\end{equation}\n\n## Specific 2nd Order Runge Kutta for the Non-Linear Population Equation with an oscillation\nTo write the specific 2nd Order Runge Kutta difference equation for the initial value problem we need \n\begin{equation}f(t,y)=0.2y-0.01y^2+\sin(2\pi t),\end{equation}\nwhich gives\n\begin{equation}k_1=f(t_i,w_i)=0.2w_i-0.01w_i^2+\sin(2\pi t_i),\end{equation}\n\begin{equation}k_2=f(t_i+h,w_i+hk_1)=0.2(w_i+hk_1)-0.01(w_i+hk_1)^2+\sin(2\pi (t_i+h)),\end{equation}\nand the difference equation\n\begin{equation}w_{i+1}=w_{i}+\frac{h}{2}(k_1+k_2)\end{equation}\nfor $i=0,...,199$, where $w_i$ is the numerical approximation of $y$ at time $t_i$, with step size $h$ and the initial condition\n\begin{equation}w_0=6.\end{equation}\n\n\n```python\ndef nonlin_oscfun(t,w):\n    ftw=0.2*w-0.01*w*w+np.sin(2*np.pi*t)\n    return ftw\n```\n\n\n```python\nw=np.zeros(N+1)\nw[0]=6.0\n## 2nd Order Runge Kutta\nfor k in range(0,N):\n    k1=nonlin_oscfun(t[k],w[k])\n    k2=nonlin_oscfun(t[k]+h,w[k]+h*k1)\n    w[k+1]=w[k]+h/2*(k1+k2)\n```\n\n## Results\nThe plot below shows the 2nd order Runge Kutta numerical approximation, $w$ (circles), for the non-linear population equation with an oscillation:\n\n\n```python\nfig = plt.figure(figsize=(8,4))\nplt.plot(t,w,'o:',color='purple',label='Runge Kutta')\nplt.legend(loc='best')\nplt.show()\n```\n\n## Table\nThe table below shows the time and the 2nd order Runge Kutta numerical approximation, $w$, for the non-linear population equation with an oscillation:\n\n\n```python\nd = {'time t_i': t[0:10], \n    'Runge Kutta':w[0:10]}\ndf = pd.DataFrame(data=d)\ndf\n```\n\n
|    | time t_i | Runge Kutta |
|----|----------|-------------|
| 0  | 2000.0   | 6.000000    |
| 1  | 2000.1   | 6.113722    |
| 2  | 2000.2   | 6.276109    |
| 3  | 2000.3   | 6.458005    |
| 4  | 2000.4   | 6.623032    |
| 5  | 2000.5   | 6.741504    |
| 6  | 2000.6   | 6.801784    |
| 7  | 2000.7   | 6.814712    |
| 8  | 2000.8   | 6.809444    |
| 9  | 2000.9   | 6.822305    |
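For the oscillating case no exact solution is written down, so as an added check (again, not part of the original sheet) the fixed-step 2nd order Runge Kutta answer can be compared with a tight-tolerance adaptive solver from SciPy. The sketch assumes `nonlin_oscfun`, `t` and `w` are still defined from the cells above and that SciPy is available.

```python
# Added cross-check with an adaptive reference solution (assumes nonlin_oscfun, t, w from above).
import numpy as np
from scipy.integrate import solve_ivp

reference = solve_ivp(nonlin_oscfun, (2000.0, 2020.0), [6.0],
                      t_eval=t, rtol=1e-8, atol=1e-10)
print("max |reference - w| =", np.max(np.abs(reference.y[0] - w)))
```

The two solutions should agree to a few decimal places; a larger gap would suggest the fixed step size $h$ is too coarse for the $\sin(2\pi t)$ forcing term.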
\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "8eb6739fc1948da9a12503e5c8a17c1e26eeb5f0", "size": 69442, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Chapter 03 - Runge Kutta/Supplementary/301_Problem_Sheet.ipynb", "max_stars_repo_name": "jjcrofts77/Numerical-Analysis-Python", "max_stars_repo_head_hexsha": "97e4b9274397f969810581ff95f4026f361a56a2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 69, "max_stars_repo_stars_event_min_datetime": "2019-09-05T21:39:12.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-26T14:00:25.000Z", "max_issues_repo_path": "Chapter 03 - Runge Kutta/Supplementary/301_Problem_Sheet.ipynb", "max_issues_repo_name": "jjcrofts77/Numerical-Analysis-Python", "max_issues_repo_head_hexsha": "97e4b9274397f969810581ff95f4026f361a56a2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Chapter 03 - Runge Kutta/Supplementary/301_Problem_Sheet.ipynb", "max_forks_repo_name": "jjcrofts77/Numerical-Analysis-Python", "max_forks_repo_head_hexsha": "97e4b9274397f969810581ff95f4026f361a56a2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 13, "max_forks_repo_forks_event_min_datetime": "2021-06-17T15:34:04.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-14T14:53:43.000Z", "avg_line_length": 78.8217934166, "max_line_length": 10886, "alphanum_fraction": 0.728636848, "converted": true, "num_tokens": 4540, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9294404096760998, "lm_q2_score": 0.9099070017626537, "lm_q1q2_score": 0.8457043364854325}} {"text": "# Binomial Distribution\n\n> ***GitHub***: https://github.com/czs108\n\n## Definition\n\n\\begin{equation}\nP(X = r) =\\, ^{n}C_{r} \\cdot p^{r} \\cdot (1 - p)^{n - r}\n\\end{equation}\n\n\\begin{align}\nX &= \\text{The total number of successes.} \\\\\np &= \\text{The probability of a success on an individual trial.} \\\\\nn &= \\text{The number of trials.}\n\\end{align}\n\nIf a variable $X$ follows a *Binomial Distribution* where the probability of *success* in a trial is $p$, and $n$ is the *total* number of trials. This can be written as\n\n\\begin{equation}\nX \\sim B(n,\\, p)\n\\end{equation}\n\n## Expectation\n\n\\begin{equation}\nE(X) = np\n\\end{equation}\n\n## Variance\n\n\\begin{equation}\nVar(X) = np \\cdot (1 - p)\n\\end{equation}\n\n## Approximation\n\n### Poisson Distribution\n\n$X \\sim B(n,\\, p)$ can be approximated by the *Poisson Distribution* $X \\sim Po(np)$ if $n$ is *large* and $p$ is *small*.\n\nBecause both the *expectation* and *variance* of $X \\sim Po(np)$ are $np$. 
When $n$ is large and $p$ is small, $(1 - p) \\approx 1$ and for $X \\sim B(n,\\, p)$:\n\n\\begin{equation}\nE(X) = np\n\\end{equation}\n\n\\begin{equation}\nVar(X) \\approx np\n\\end{equation}\n\nThe approximation is typically very close if $n > 50$ and $p < 0.1$.\n\n### Normal Distribution\n\nSuppose $q$ is $(1 - p)$.\n\nNormally if $np > 5$ and $nq > 5$, $X \\sim B(n,\\, p)$ also can be approximated by the *Normal Distribution* $X \\sim N(np,\\, npq)$.\n", "meta": {"hexsha": "f99a902de98a89b27d709a3b194ddff3d00c37af", "size": 2649, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "exercises/Binomial Distribution.ipynb", "max_stars_repo_name": "czs108/Probability-Theory-Exercises", "max_stars_repo_head_hexsha": "60c6546db1e7f075b311d1e59b0afc3a13d93229", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "exercises/Binomial Distribution.ipynb", "max_issues_repo_name": "czs108/Probability-Theory-Exercises", "max_issues_repo_head_hexsha": "60c6546db1e7f075b311d1e59b0afc3a13d93229", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "exercises/Binomial Distribution.ipynb", "max_forks_repo_name": "czs108/Probability-Theory-Exercises", "max_forks_repo_head_hexsha": "60c6546db1e7f075b311d1e59b0afc3a13d93229", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-03-21T05:04:07.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-21T05:04:07.000Z", "avg_line_length": 24.3027522936, "max_line_length": 178, "alphanum_fraction": 0.4850887127, "converted": true, "num_tokens": 475, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9219218305645894, "lm_q2_score": 0.9173026624116694, "lm_q1q2_score": 0.8456813497123378}} {"text": "```python\n\n```\n\n\n\n# Linear Regression\n\n- Dataset consists of 2 types of variables\n - Independent variables / features / decsriptors / input (random) variables / covariates / regressor\n - Dependent variables / output / class / target\n \nRegression is a method of modelling a target value based on independent variables. \n\nLinear regression is a method to find the line of best fit given a dataset so that we can predict one variable given the other\n\n$\n\\begin{align}\ny = mx + c\n\\end{align}\n$\n\nwhere m is the slope, c is the y intercept\n\n## Read the data from input file - Using [Pandas](https://pandas.pydata.org/)\n\n- Generally the data is provided in the tabular format in the form of .csv, .xls or via a database. 
\n- Pandas is predominantly used for data wrangling and analysis for a dataset in a tabular format \n\n\n\n\n```python\nimport numpy as np\nfrom matplotlib import pyplot as plt\n\nfig = plt.figure(figsize=(16, 6))\n\nm = 3\nc = 7\n\nx_vals = [i for i in range(-5, 6)]\ny_vals = [m*x_ + c for x_ in x_vals]\n\nax = fig.gca()\nax.set_xticks(np.arange(min(x_vals)-1, max(x_vals)+1, 1))\nax.set_yticks(np.arange(min(y_vals)-1, max(y_vals)+1, 1))\nplt.subplot(1,2,1)\nplt.plot(x_vals, y_vals, color='r')\nplt.title('Positive Slope')\nplt.grid()\n\nm = -3\nc = 7\n\nx_vals = [i for i in range(-5, 6)]\ny_vals = [m*x_ + c for x_ in x_vals]\n\nax.set_xticks(np.arange(min(x_vals)-1, max(x_vals)+1, 1))\nax.set_yticks(np.arange(min(y_vals)-1, max(y_vals)+1, 1))\nplt.subplot(1,2,2)\nplt.plot(x_vals, y_vals, color='r')\nplt.title('Negative Slope')\nplt.grid()\n\nplt.show()\n```\n\n\n```python\nimport pandas as pd\ndf_data = pd.read_csv('../data/2d_classification.csv')\n```\n\n\n```python\nfrom matplotlib import pyplot as plt\n\nplt.rcParams['figure.figsize'] = [10, 7] # Size of the plots\n\ncolors = {0:'b', 1:'g', 2:'r', 3:'c', 4:'m', 5:'y', 6:'k'}\nfig, ax = plt.subplots()\ngrouped = df_data.groupby('label')\nfor key, group in grouped:\n group.plot(ax=ax, kind='scatter', x='x', y='y', label=key, color=colors[key])\nplt.show()\n```\n\n\n```python\nfrom sklearn.linear_model import LinearRegression\n\nx = df_data[['x']].values\ny = df_data[['y']].values\n\nlin_regr = LinearRegression()\nlin_regr.fit(x, y)\n\npred_y = lin_regr.predict(x)\n```\n\n\n```python\ncolors = {0:'b', 1:'g', 2:'r', 3:'c', 4:'m', 5:'y', 6:'k'}\n\nplt.figure(figsize=(8,6))\nplt.scatter(x, y, color='black')\nplt.plot(x, pred_y, color='blue', linewidth=1)\nplt.xlabel('X')\nplt.ylabel('Y')\nplt.title('Linear Regression')\nplt.show()\n```\n\n## Components to run an algorithm on sklearn\n\n### Step 1: Instance of the algorithm that is to be used\n- Create an instance of the algorithm that could be invoked to use the implemented functions\n\n __lin_regr = LinearRegression()__ \n\n### Step 2: Train your data -> Fit function\n\n- What does it do ?\n - It trains on the dataset\n- What does it identify ?\n - Coefficients / Weights / Intercepts\n- When will it be used ?\n - Use the co-efficients to predict for the new data\n\n __lin_regr.fit(x, y)__ \n\n\n### Step 3: Prediction\n- For any new set of data, you can predict using\n\n __pred_y = lin_regr.predict(x)__ \n\n### What does your model store ?\n\n#### Coeficients\n\n\n```python\nprint('Coefficients:', lin_regr.coef_)\n```\n\n Coefficients: [[-0.33526649]]\n\n\n#### Intercepts\n\n\n```python\nprint('Intercepts:', lin_regr.intercept_)\n```\n\n Intercepts: [7.86368678]\n\n\n\n```python\nimport numpy as np\nfrom matplotlib import pyplot as plt\n\nfig = plt.figure(figsize=(8, 6))\n\nm = lin_regr.coef_[0]\nc = lin_regr.intercept_\n\nx_vals = [i for i in range(-10, 3)]\ny_vals = [m*x_ + c for x_ in x_vals]\n\nax.set_xticks(np.arange(min(x_vals)-1, max(x_vals)+1, 1))\nax.set_yticks(np.arange(min(y_vals)-1, max(y_vals)+1, 1))\nplt.subplot(1,1,1)\nplt.plot(x_vals, y_vals, color='r')\nplt.title('Negative Slope')\nplt.grid()\n\nplt.show()\n```\n\n## Metrics\n\n#### Mean Suqared Error\n\n\n```python\nfrom sklearn.metrics import mean_squared_error\nprint('Mean squared error: %.2f' % mean_squared_error(y, pred_y))\n```\n\n Mean squared error: 0.95\n\n\n#### R^2 Score\n- proportion of the variance in the dependent variable(target/label) that is predictable from the independent 
variables(features)\n\n\n```python\nfrom sklearn.metrics import r2_score\nprint('Variance score: %.2f' % r2_score(y, pred_y))\n```\n\n Variance score: 0.22\n\n\n### Another example\n\n\n```python\nfrom sklearn.datasets.samples_generator import make_blobs\nimport pandas as pd\nfrom matplotlib import pyplot as plt\nfrom sklearn.linear_model import LinearRegression\n\n# Creating data\ndata, label = make_blobs(n_samples=100, centers=1, n_features=2)\ndf_data = pd.DataFrame(dict(x=data[:,0], y=data[:,1], label=label)) # converting it into dataframe to make it easier\n\n# Making them easy to read\nx = df_data[['x']].values\ny = df_data[['y']].values\n\nprint(type(x))\nprint(x.shape)\n# Prediction\nlin_regr = LinearRegression()\nlin_regr.fit(x, y)\npred_y = lin_regr.predict(x)\n\n# Plotting the results\nplt.figure()\nplt.scatter(x, y, color='black')\nplt.plot(x, pred_y, color='blue', linewidth=1)\nplt.show()\n```\n\n### Restrictions of Linear Regression\n\n\n```python\nfrom matplotlib import pyplot as plt\n\n\ndata = np.array([[1,0], [2,0], [3,0], [4,0], [5,0], [6,1], [7,1], [8,1], [9,1], [10, 1]])\n\nx = np.reshape(data[:, 0], (data.shape[0], 1))\ny = np.reshape(data[:, 1], (data.shape[0], 1))\n\n# Prediction\nlin_regr = LinearRegression()\nlin_regr.fit(x, y)\npred_y = lin_regr.predict(x)\n\n# Plotting the results\nplt.figure(figsize=(10, 8))\nplt.scatter(x, y, color='black')\nplt.plot(x, pred_y, color='blue', linewidth=1)\nplt.axhline(y=0.5, xmin=0, xmax=1, color='m')\nplt.axvline(x=5.5, ymin=0, ymax=1, color='g', dashes=[3,3])\nplt.show()\n```\n\n\n```python\nfrom matplotlib import pyplot as plt\n\n\ndata = np.array([[1,0], [2,0], [3,0], [4,0], [5,0], [6,1], [7,1], [8,1], [9,1], [10, 1], [14, 1], [16,1], [17, 1]])\n\nx = np.reshape(data[:, 0], (data.shape[0], 1))\ny = np.reshape(data[:, 1], (data.shape[0], 1))\n\n# Prediction\nlin_regr = LinearRegression()\nlin_regr.fit(x, y)\npred_y = lin_regr.predict(x)\n\n# Plotting the results\nplt.figure(figsize=(10, 8))\nplt.scatter(x, y, color='black')\nplt.plot(x, pred_y, color='blue', linewidth=1)\nplt.axhline(y=0.5, xmin=0, xmax=1, color='m')\nplt.axvline(x=6.25, ymin=0, ymax=1, color='g', dashes=[3,3])\nplt.show()\n```\n", "meta": {"hexsha": "ad17e415491abed68b92ab7768ae51993f45ce26", "size": 129190, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "classification/notebooks/.ipynb_checkpoints/02 - Linear Regression-checkpoint.ipynb", "max_stars_repo_name": "pshn111/Machine-Learning-Package", "max_stars_repo_head_hexsha": "fbbaa44daf5f0701ea77e5b62eb57ef822e40ab2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "classification/notebooks/.ipynb_checkpoints/02 - Linear Regression-checkpoint.ipynb", "max_issues_repo_name": "pshn111/Machine-Learning-Package", "max_issues_repo_head_hexsha": "fbbaa44daf5f0701ea77e5b62eb57ef822e40ab2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "classification/notebooks/.ipynb_checkpoints/02 - Linear Regression-checkpoint.ipynb", "max_forks_repo_name": "pshn111/Machine-Learning-Package", "max_forks_repo_head_hexsha": "fbbaa44daf5f0701ea77e5b62eb57ef822e40ab2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, 
"avg_line_length": 238.3579335793, "max_line_length": 25904, "alphanum_fraction": 0.9223701525, "converted": true, "num_tokens": 1949, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9219218284193595, "lm_q2_score": 0.9173026618464796, "lm_q1q2_score": 0.8456813472234519}} {"text": "In school, students are taught to draw lines like the following.\n\n$$ y = 2 x + 1$$\nThey're taught to pick two values for $x$ and calculate the corresponding values for $y$ using the equation. Then they draw a set of axes, plot the points, and then draw a line extending through the two dots on their axes.\n\n\n```python\n# Import matplotlib.\nimport matplotlib.pyplot as plt\n```\n\n\n```python\n# Draw some axes.\nplt.plot([-1, 10], [0, 0], 'k-')\nplt.plot([0, 0], [-1, 10], 'k-')\n\n# Plot the red, blue and green lines.\nplt.plot([1, 1], [-1, 3], 'b:')\nplt.plot([-1, 1], [3, 3], 'r:')\n\n# Plot the two points (1,3) and (2,5).\nplt.plot([1, 2], [3, 5], 'ko')\n# Join them with an (extending) green lines.\nplt.plot([-1, 10], [-1, 21], 'g-')\n\n# Set some reasonable plot limits.\nplt.xlim([-1, 10])\nplt.ylim([-1, 10])\n\n# Show the plot.\nplt.show()\n```\n\n\n```python\nSimple linear regression is about the opposite problem - what if you have some points and are looking for the equation? It's easy when the points are perfectly on a line already, but usually real-world data has some noise. The data might still look roughly linear, but aren't exactly so.\n```\n\n\n```python\nExample (contrived and simulated)\n```\n\n\nScenario\nSuppose you are trying to weigh your suitcase to avoid an airline's extra charges. You don't have a weighing scales, but you do have a spring and some gym-style weights of masses 7KG, 14KG and 21KG. You attach the spring to the wall hook, and mark where the bottom of it hangs. You then hang the 7KG weight on the end and mark where the bottom of the spring is. You repeat this with the 14KG weight and the 21KG weight. Finally, you place your case hanging on the spring, and the spring hangs down halfway between the 7KG mark and the 14KG mark. Is your case over the 10KG limit set by the airline?\n\nHypothesis\nWhen you look at the marks on the wall, it seems that the 0KG, 7KG, 14KG and 21KG marks are evenly spaced. You wonder if that means your case weighs 10.5KG. That is, you wonder if there is a linear relationship between the distance the spring's hook is from its resting position, and the mass on the end of it.\n\nExperiment\nYou decide to experiment. You buy some new weights - a 1KG, a 2KG, a 3Kg, all the way up to 20KG. You place them each in turn on the spring and measure the distance the spring moves from the resting position. You tabulate the data and plot them.\n\nAnalysis\nHere we'll import the Python libraries we need for or investigations below\n\n\n```python\n\n# Make matplotlib show interactive plots in the notebook.\n%matplotlib inline\n\n```\n\n\n```python\n# numpy efficiently deals with numerical multi-dimensional arrays.\nimport numpy as np\n\n# matplotlib is a plotting library, and pyplot is its easy-to-use module.\nimport matplotlib.pyplot as plt\n\n# This just sets the default plot size to be bigger.\nplt.rcParams['figure.figsize'] = (8, 6)\n```\n\n\n```python\nIgnore the next couple of lines where I fake up some data. I'll use the fact that I faked the data to explain some results later. 
Just pretend that w is an array containing the weight values and d are the corresponding distance measurements.\n```\n\n\n```python\nw = np.arange(0.0, 21.0, 1.0)\nd = 5.0 * w + 10.0 + np.random.normal(0.0, 5.0, w.size)\n```\n\n\n```python\n\n# Let's have a look at w.\nw\n```\n\n\n```python\n# Let's have a look at d.\nd\n```\n\n\n```python\n# Look at the data from the experiment\n# Create the plot.\n\nplt.plot(w, d, 'k.')\n\n# Set some properties for the plot.\nplt.xlabel('Weight (KG)')\nplt.ylabel('Distance (CM)')\n\n# Show the plot.\nplt.show()\n```\n\n\n```python\n\nModel\nIt looks like the data might indeed be linear. The points don't exactly fit on a straight line, but they are not far off it. We might put that down to some other factors, such as the air density, or errors, such as in our tape measure. Then we can go ahead and see what would be the best line to fit the data.\n\nStraight lines\nAll straight lines can be expressed in the form $y = mx + c$. The number $m$ is the slope of the line. The slope is how much $y$ increases by when $x$ is increased by 1.0. The number $c$ is the y-intercept of the line. It's the value of $y$ when $x$ is 0.\n\nFitting the model\nTo fit a straight line to the data, we just must pick values for $m$ and $c$. These are called the parameters of the model, and we want to pick the best values possible for the parameters. That is, the best parameter values given the data observed. Below we show various lines plotted over the data, with different values for $m$ and $c$.\n```\n\n\n```python\n# Plot w versus d with black dots.\nplt.plot(w, d, 'k.', label=\"Data\")\n\n# Overlay some lines on the plot.\nx = np.arange(0.0, 21.0, 1.0)\nplt.plot(x, 5.0 * x + 10.0, 'r-', label=r\"$5x + 10$\")\nplt.plot(x, 6.0 * x + 5.0, 'g-', label=r\"$6x + 5$\")\nplt.plot(x, 5.0 * x + 15.0, 'b-', label=r\"$5x + 15$\")\n\n# Add a legend.\nplt.legend()\n\n# Add axis labels.\nplt.xlabel('Weight (KG)')\nplt.ylabel('Distance (CM)')\n\n# Show the plot.\nplt.show()\n```\n\n\n```python\nCalculating the cost\nYou can see that each of these lines roughly fits the data. Which one is best, and is there another line that is better than all three? Is there a \"best\" line?\n\nIt depends how you define the word best. Luckily, everyone seems to have settled on what the best means. The best line is the one that minimises the following calculated value.\n\n$$ \\sum_i (y_i - mx_i - c)^2 $$\nHere $(x_i, y_i)$ is the $i^{th}$ point in the data set and $\\sum_i$ means to sum over all points. The values of $m$ and $c$ are to be determined. We usually denote the above as $Cost(m, c)$.\n\nWhere does the above calculation come from? It's easy to explain the part in the brackets $(y_i - mx_i - c)$. The corresponding value to $x_i$ in the dataset is $y_i$. These are the measured values. The value $m x_i + c$ is what the model says $y_i$ should have been. The difference between the value that was observed ($y_i$) and the value that the model gives ($m x_i + c$), is $y_i - mx_i - c$.\n\nWhy square that value? Well note that the value could be positive or negative, and you sum over all of these values. If we allow the values to be positive or negative, then the positive could cancel the negatives. So, the natural thing to do is to take the absolute value $\\mid y_i - m x_i - c \\mid$. Well it turns out that absolute values are a pain to deal with, and instead it was decided to just square the quantity instead, as the square of a number is always positive. 
There are pros and cons to using the square instead of the absolute value, but the square is used. This is usually called least squares fitting.\n```\n\n\n```python\n# Calculate the cost of the lines above for the data above.\ncost = lambda m,c: np.sum([(d[i] - m * w[i] - c)**2 for i in range(w.size)])\n\nprint(\"Cost with m = %5.2f and c = %5.2f: %8.2f\" % (5.0, 10.0, cost(5.0, 10.0)))\nprint(\"Cost with m = %5.2f and c = %5.2f: %8.2f\" % (6.0, 5.0, cost(6.0, 5.0)))\nprint(\"Cost with m = %5.2f and c = %5.2f: %8.2f\" % (5.0, 15.0, cost(5.0, 15.0)))\n```\n\n\n```python\nCost with m = 5.00 and c = 10.00: 364.91\nCost with m = 6.00 and c = 5.00: 1911.26\nCost with m = 5.00 and c = 15.00: 784.03\nMinimising the cost\nWe want to calculate values of $m$ and $c$ that give the lowest value for the cost value above. For our given data set we can plot the cost value/function. Recall that the cost is:\n\n$$ Cost(m, c) = \\sum_i (y_i - mx_i - c)^2 $$\nThis is a function of two variables, $m$ and $c$, so a plot of it is three dimensional. See the Advanced section below for the plot.\n\nIn the case of fitting a two-dimensional line to a few data points, we can easily calculate exactly the best values of $m$ and $c$. Some of the details are discussed in the Advanced section, as they involve calculus, but the resulting code is straight-forward. We first calculate the mean (average) values of our $x$ values and that of our $y$ values. Then we subtract the mean of $x$ from each of the $x$ values, and the mean of $y$ from each of the $y$ values. Then we take the dot product of the new $x$ values and the new $y$ values and divide it by the dot product of the new $x$ values with themselves. That gives us $m$, and we use $m$ to calculate $c$.\n\nRemember that in our dataset $x$ is called $w$ (for weight) and $y$ is called $d$ (for distance). We calculate $m$ and $c$ below.\n```\n\n\n```python\n# Calculate the best values for m and c.\n\n# First calculate the means (a.k.a. averages) of w and d.\nw_avg = np.mean(w)\nd_avg = np.mean(d)\n\n# Subtract means from w and d.\nw_zero = w - w_avg\nd_zero = d - d_avg\n\n# The best m is found by the following calculation.\nm = np.sum(w_zero * d_zero) / np.sum(w_zero * w_zero)\n# Use m from above to calculate the best c.\nc = d_avg - m * w_avg\n\nprint(\"m is %8.6f and c is %6.6f.\" % (m, c))\n```\n\n\n```python\n#Note that numpy has a function that will perform this calculation for us, called polyfit. It can be used to fit lines in many dimensions.\nnp.polyfit(w, d, 1)\n```\n\n\n```python\nBest fit line\u00b6\nSo, the best values for $m$ and $c$ given our data and using least squares fitting are about $4.95$ for $m$ and about $11.13$ for $c$. 
We plot this line on top of the data below.\n```\n\n\n```python\n# Plot the best fit line.\nplt.plot(w, d, 'k.', label='Original data')\nplt.plot(w, m * w + c, 'b-', label='Best fit line')\n\n# Add axis labels and a legend.\nplt.xlabel('Weight (KG)')\nplt.ylabel('Distance (CM)')\nplt.legend()\n\n# Show the plot.\nplt.show()\n```\n\n\n```python\nNote that the $Cost$ of the best $m$ and best $c$ is not zero in this case.\n```\n\n\n```python\nprint(\"Cost with m = %5.2f and c = %5.2f: %8.2f\" % (m, c, cost(m, c)))\n\n```\n\n\n```python\nSummary\u00b6\nIn this notebook we:\n\nInvestigated the data.\nPicked a model.\nPicked a cost function.\nEstimated the model parameter values that minimised our cost function.\nAdvanced\nIn the following sections we cover some of the more advanced concepts involved in fitting the line.\n\nSimulating data\nEarlier in the notebook we glossed over something important: we didn't actually do the weighing and measuring - we faked the data. A better term for this is simulation, which is an important tool in research, especially when testing methods such as simple linear regression.\n\nWe ran the following two commands to do this:\n\nw = np.arange(0.0, 21.0, 1.0)\nd = 5.0 * w + 10.0 + np.random.normal(0.0, 5.0, w.size)\nThe first command creates a numpy array containing all values between 1.0 and 21.0 (including 1.0 but not including 21.0) in steps of 1.0.\n```\n\n\n```python\nnp.arange(0.0, 21.0, 1.0)\n```\n\n\n```python\n# The second command is more complex. First it takes the values in the w array, multiplies each by 5.0 and then adds 10.0.\n5.0 * w + 10.0\n```\n\n\n```python\nnp.random.normal(0.0, 5.0, w.size)\n```\n\n\n```python\n#The normal distribution follows a bell shaped curve. The curve is centred on the mean (0.0 in this case) and its general width is determined by the standard deviation (5.0 in this case).\n# Plot the normal distrution.\nnormpdf = lambda mu, s, x: (1.0 / (2.0 * np.pi * s**2)) * np.exp(-((x - mu)**2)/(2 * s**2))\n\nx = np.linspace(-20.0, 20.0, 100)\ny = normpdf(0.0, 5.0, x)\nplt.plot(x, y)\n\nplt.show()\n```\n\n\n```python\n\nThe idea here is to add a little bit of randomness to the measurements of the distance. The random values are entered around 0.0, with a greater than 99% chance they're within the range -15.0 to 15.0. The normal distribution is used because of the Central Limit Theorem which basically states that when a bunch of random effects happen together the outcome looks roughly like the normal distribution. (Don't quote me on that!)\n\nPlotting the cost function\nWe can plot the cost function for a given set of data points. Recall that the cost function involves two variables: $m$ and $c$, and that it looks like this:\n\n$$ Cost(m,c) = \\sum_i (y_i - mx_i - c)^2 $$\nTo plot a function of two variables we need a 3D plot. 
It can be difficult to get the viewing angle right in 3D plots, but below you can just about make out that there is a low point on the graph around the $(m, c) = (\\approx 5.0, \\approx 10.0)$ point.\n```\n\n\n```python\n# This code is a little bit involved - don't worry about it.\n# Just look at the plot below.\n\nfrom mpl_toolkits.mplot3d import Axes3D\n\n# Ask pyplot a 3D set of axes.\nax = plt.figure().gca(projection='3d')\n\n# Make data.\nmvals = np.linspace(4.5, 5.5, 100)\ncvals = np.linspace(0.0, 20.0, 100)\n\n# Fill the grid.\nmvals, cvals = np.meshgrid(mvals, cvals)\n\n# Flatten the meshes for convenience.\nmflat = np.ravel(mvals)\ncflat = np.ravel(cvals)\n\n# Calculate the cost of each point on the grid.\nC = [np.sum([(d[i] - m * w[i] - c)**2 for i in range(w.size)]) for m, c in zip(mflat, cflat)]\nC = np.array(C).reshape(mvals.shape)\n\n# Plot the surface.\nsurf = ax.plot_surface(mvals, cvals, C)\n\n# Set the axis labels.\nax.set_xlabel('$m$', fontsize=16)\nax.set_ylabel('$c$', fontsize=16)\nax.set_zlabel('$Cost$', fontsize=16)\n\n# Show the plot.\nplt.show()\n```\n\n\n```python\n\nCoefficient of determination\nEarlier we used a cost function to determine the best line to fit the data. Usually the data do not perfectly fit on the best fit line, and so the cost is greater than 0. A quantity closely related to the cost is the coefficient of determination, also known as the R-squared value. The purpose of the R-squared value is to measure how much of the variance in $y$ is determined by $x$.\n\nFor instance, in our example the main thing that affects the distance the spring is hanging down is the weight on the end. It's not the only thing that affects it though. The room temperature and density of the air at the time of measurment probably affect it a little. The age of the spring, and how many times it has been stretched previously probably also have a small affect. There are probably lots of unknown factors affecting the measurment.\n\nThe R-squared value estimates how much of the changes in the $y$ value is due to the changes in the $x$ value compared to all of the other factors affecting the $y$ value. It is calculated as follows:\n\n$$ R^2 = 1 - \\frac{\\sum_i (y_i - m x_i - c)^2}{\\sum_i (y_i - \\bar{y})^2} $$\nNote that sometimes the Pearson correlation coefficient is used instead of the R-squared value. You can just square the Pearson coefficient to get the R-squred value.\n```\n\n\n```python\n# Calculate the R-squared value for our data set.\nrsq = 1.0 - (np.sum((d - m * w - c)**2)/np.sum((d - d_avg)**2))\n\nprint(\"The R-squared value is %6.4f\" % rsq)\n```\n\n\n```python\n# The same value using numpy.\nnp.corrcoef(w, d)[0][1]**2\n\n```\n\nThe minimisation calculations\nEarlier we used the following calculation to calculate $m$ and $c$ for the line of best fit. The code was:\n\nw_zero = w - np.mean(w)\nd_zero = d - np.mean(d)\n\nm = np.sum(w_zero * d_zero) / np.sum(w_zero * w_zero)\nc = np.mean(d) - m * np.mean(w)\nIn mathematical notation we write this as:\n\n$$ m = \\frac{\\sum_i (x_i - \\bar{x}) (y_i - \\bar{y})}{\\sum_i (x_i - \\bar{x})^2} \\qquad \\textrm{and} \\qquad c = \\bar{y} - m \\bar{x} $$\nwhere $\\bar{x}$ is the mean of $x$ and $\\bar{y}$ that of $y$.\n\nWhere did these equations come from? They were derived using calculus. We'll give a brief overview of it here, but feel free to gloss over this section if it's not for you. 
If you can understand the first part, where we calculate the partial derivatives, then great!\n\nThe calculations look complex, but if you know basic differentiation, including the chain rule, you can easily derive them. First, we differentiate the cost function with respect to $m$ while treating $c$ as a constant, called a partial derivative. We write this as $\\frac{\\partial m}{ \\partial Cost}$, using $\\delta$ as opposed to $d$ to signify that we are treating the other variable as a constant. We then do the same with respect to $c$ while treating $m$ as a constant. We set both equal to zero, and then solve them as two simultaneous equations in two variables.\n\nCalculate the partial derivatives\n$$\n\\begin{align}\nCost(m, c) &= \\sum_i (y_i - mx_i - c)^2 \\\\[1cm]\n\\frac{\\partial Cost}{\\partial m} &= \\sum 2(y_i - m x_i -c)(-x_i) \\\\\n &= -2 \\sum x_i (y_i - m x_i -c) \\\\[0.5cm]\n\\frac{\\partial Cost}{\\partial c} & = \\sum 2(y_i - m x_i -c)(-1) \\\\\n & = -2 \\sum (y_i - m x_i -c) \\\\\n\\end{align}\n$$\nSet to zero\n$$\n\\begin{align}\n& \\frac{\\partial Cost}{\\partial m} = 0 \\\\[0.2cm]\n& \\Rightarrow -2 \\sum x_i (y_i - m x_i -c) = 0 \\\\\n& \\Rightarrow \\sum (x_i y_i - m x_i x_i - x_i c) = 0 \\\\\n& \\Rightarrow \\sum x_i y_i - \\sum_i m x_i x_i - \\sum x_i c = 0 \\\\\n& \\Rightarrow m \\sum x_i x_i = \\sum x_i y_i - c \\sum x_i \\\\[0.2cm]\n& \\Rightarrow m = \\frac{\\sum x_i y_i - c \\sum x_i}{\\sum x_i x_i} \\\\[0.5cm]\n& \\frac{\\partial Cost}{\\partial c} = 0 \\\\[0.2cm]\n& \\Rightarrow -2 \\sum (y_i - m x_i - c) = 0 \\\\\n& \\Rightarrow \\sum y_i - \\sum_i m x_i - \\sum c = 0 \\\\\n& \\Rightarrow \\sum y_i - m \\sum_i x_i = c \\sum 1 \\\\\n& \\Rightarrow c = \\frac{\\sum y_i - m \\sum x_i}{\\sum 1} \\\\\n& \\Rightarrow c = \\frac{\\sum y_i}{\\sum 1} - m \\frac{\\sum x_i}{\\sum 1} \\\\[0.2cm]\n& \\Rightarrow c = \\bar{y} - m \\bar{x} \\\\\n\\end{align}\n$$\nSolve the simultaneous equations\nHere we let $n$ be the length of $x$, which is also the length of $y$.\n\n$$\n\\begin{align}\n& m = \\frac{\\sum_i x_i y_i - c \\sum_i x_i}{\\sum_i x_i x_i} \\\\[0.2cm]\n& \\Rightarrow m = \\frac{\\sum x_i y_i - (\\bar{y} - m \\bar{x}) \\sum x_i}{\\sum x_i x_i} \\\\\n& \\Rightarrow m \\sum x_i x_i = \\sum x_i y_i - \\bar{y} \\sum x_i + m \\bar{x} \\sum x_i \\\\\n& \\Rightarrow m \\sum x_i x_i - m \\bar{x} \\sum x_i = \\sum x_i y_i - \\bar{y} \\sum x_i \\\\[0.3cm]\n& \\Rightarrow m = \\frac{\\sum x_i y_i - \\bar{y} \\sum x_i}{\\sum x_i x_i - \\bar{x} \\sum x_i} \\\\[0.2cm]\n& \\Rightarrow m = \\frac{\\sum (x_i y_i) - n \\bar{y} \\bar{x}}{\\sum (x_i x_i) - n \\bar{x} \\bar{x}} \\\\\n& \\Rightarrow m = \\frac{\\sum (x_i y_i) - n \\bar{y} \\bar{x} - n \\bar{y} \\bar{x} + n \\bar{y} \\bar{x}}{\\sum (x_i x_i) - n \\bar{x} \\bar{x} - n \\bar{x} \\bar{x} + n \\bar{x} \\bar{x}} \\\\\n& \\Rightarrow m = \\frac{\\sum (x_i y_i) - \\sum y_i \\bar{x} - \\sum \\bar{y} x_i + n \\bar{y} \\bar{x}}{\\sum (x_i x_i) - \\sum x_i \\bar{x} - \\sum \\bar{x} x_i + n \\bar{x} \\bar{x}} \\\\\n& \\Rightarrow m = \\frac{\\sum_i (x_i - \\bar{x}) (y_i - \\bar{y})}{\\sum_i (x_i - \\bar{x})^2} \\\\\n\\end{align}\n$$\n\n\nUsing sklearn neural networks\n\n\n```python\nimport sklearn.neural_network as sknn\n\n# Expects a 2D array of inputs.\nw2d = w.reshape(-1, 1)\n\n# Train the neural network.\nregr = sknn.MLPRegressor(max_iter=10000).fit(w2d, d)\n\n# Show the predictions.\nnp.array([d, regr.predict(w2d)]).T\n```\n\n\n```python\n# The score.\nregr.score(w2d, d)\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n", 
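As an added aside (not part of the original notebook), the straight-line fit from earlier can also be reproduced with scikit-learn's `LinearRegression`, which makes it easy to put its $R^2$ score side by side with the `MLPRegressor` score above. The sketch assumes `w2d`, `d`, `m` and `c` are still defined from the previous cells.

```python
# Added comparison: least-squares line via scikit-learn vs. the manual m and c.
from sklearn.linear_model import LinearRegression

lin = LinearRegression().fit(w2d, d)
print("slope:", lin.coef_[0], " (manual m:", m, ")")
print("intercept:", lin.intercept_, " (manual c:", c, ")")
print("straight-line R^2:", lin.score(w2d, d))  # compare with regr.score(w2d, d) above
```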
"meta": {"hexsha": "6eb1ce2e83aa74b54c4528f52904d76ac2e28d41", "size": 26573, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "CourseWork/Topic 6 Simple Linear Regression with NumPy.ipynb", "max_stars_repo_name": "ClodaghMurphy/MachineLearningandStatistics", "max_stars_repo_head_hexsha": "c09c33467dd3f3c6a8228f96338c5f328c2ca1a9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "CourseWork/Topic 6 Simple Linear Regression with NumPy.ipynb", "max_issues_repo_name": "ClodaghMurphy/MachineLearningandStatistics", "max_issues_repo_head_hexsha": "c09c33467dd3f3c6a8228f96338c5f328c2ca1a9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "CourseWork/Topic 6 Simple Linear Regression with NumPy.ipynb", "max_forks_repo_name": "ClodaghMurphy/MachineLearningandStatistics", "max_forks_repo_head_hexsha": "c09c33467dd3f3c6a8228f96338c5f328c2ca1a9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-12-12T01:51:33.000Z", "max_forks_repo_forks_event_max_datetime": "2020-12-12T01:51:33.000Z", "avg_line_length": 40.1404833837, "max_line_length": 669, "alphanum_fraction": 0.5755089753, "converted": true, "num_tokens": 5650, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9173026528034425, "lm_q2_score": 0.9219218278830522, "lm_q1q2_score": 0.8456813383945225}} {"text": "\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport math\nimport sympy as sp\nimport pandas as pd\nimport matplotlib.animation as animation\nplt.style.use('seaborn-poster')\n\n# if using a Jupyter notebook, include:\n%matplotlib inline\n```\n\n\n```python\n\nt, a = sp.symbols('t a')\n\n\ndef taylor_terms(func, order, point, derivatives=None):\n \"\"\"\n find symbolic derivatives and Taylor terms\n func: symbolic function to approximate\n order: highest order derivative for Taylor polynomial (interger)\n point: point about which the function is approximated\n derivatives: list of symbolic derivatives\n \"\"\"\n\n # initialize list of derivatives\n if derivatives is None:\n derivatives = [func.subs({t: a})]\n\n # check if highest order derivative is reached\n if len(derivatives) > order:\n # return list of taylor terms evaluated using substitution\n return derivatives, [derivatives[i].subs({a: point}) / math.factorial(i) * (t - point) ** i for i in range(len(derivatives))]\n\n # differentiate function with respect to t\n derivative = func.diff(t)\n # append to list of symbolic derivatives ** substitute t with a **\n derivatives.append(derivative.subs({t: a}))\n\n # recursive call to find next term in Taylor polynomial\n return taylor_terms(derivative, order, point, derivatives)\n\n\ndef taylor_polynomials(_terms):\n \"\"\"\n find Taylor polynomials\n func: symbolic function to approximate\n order: highest order derivative for Taylor polynomial (interger)\n point: point about which the function is approximated\n derivatives: list of Taylor terms\n \"\"\"\n\n # initialize list\n polynomials = []\n\n # initialize taylor polynomial\n _poly = None\n\n # loop through tayloer terms\n for term in range(len(_terms)):\n # build up polynomial on each iteration\n _poly = _terms[term] if _poly is None else _poly + _terms[term]\n\n # store current taylor 
polynomial\n polynomials.append(_poly)\n\n # return taylor polynomials\n return polynomials\n\n\nif __name__ == '__main__':\n\n # analysis label\n label = 'ln(t)'\n\n # symbolic function to approximate\n f = sp.log(t)\n\n # point about which to approximate\n approximation_point = np.pi\n\n # definte time start and stop\n start = 0.01\n stop = 2 * sp.pi\n time = np.arange(start, stop, 0.1)\n\n # find taylor polynomial terms describing function f(t)\n symbolic_derivatives, terms = taylor_terms(func=f, order=4, point=approximation_point)\n polys = taylor_polynomials(terms)\n\n # initialize plot\n fig, ax = plt.subplots()\n ax.set(xlabel='t', ylabel='y', title=f'Taylor Polynomial Approximation: {label}')\n legend = []\n\n for p, poly in enumerate(polys):\n # plot current polynomial approximation\n ax.plot(time, [poly.subs({t: point}) for point in time])\n\n # append item to legend\n legend.append(f'P{p}')\n\n # plot actual function for comparison\n ax.plot(time, [f.subs({t: point}) for point in time])\n legend.append(f'f(t)')\n\n # create dataframe\n df = pd.DataFrame({'symbolic_derivatives': symbolic_derivatives,\n 'taylor_terms': terms,\n 'polynomials': polys\n })\n\n # save and show results\n ax.legend(legend)\n ax.grid()\n plt.savefig(f'taylor_{label}.png')\n plt.show()\n \n df.to_csv(f'taylor_{label}.csv', encoding='utf-8')\n print(df.head(10))\n\n```\n\n\n```python\n#new Approximation function\ndef func_cos(x, n):\n cos_approx = 0\n for i in range(n):\n coef = (-1)**i\n num = x**(2*i)\n denom = math.factorial(2*i)\n cos_approx += ( coef ) * ( (num)/(denom) )\n \n return cos_approx\nangles = np.arange(-2*np.pi,2*np.pi,0.1)\np_cos = np.cos(angles)\n\n\nangle_rad = (math.radians(45))\nout = func_cos(angle_rad,5)\nprint(out)\n\nfig, ax = plt.subplots()\nplt.rcParams[\"figure.figsize\"] = (10,10)\nax.plot(angles,p_cos)\n\n# add lines for between 1 and 6 terms in the Taylor Series\nfor i in range(1,6):\n t_cos = [func_cos(angle,i) for angle in angles]\n ax.plot(angles,t_cos)\n\nax.set_ylim([-7,4])\n\n# set up legend\nlegend_lst = ['cos() function']\nfor i in range(1,6):\n legend_lst.append(f'Taylor Series - {i} terms')\nax.legend(legend_lst, loc=3)\n\nplt.show()\n\n```\n", "meta": {"hexsha": "4548eca093b9911de7cc5038ba6c78ffbaa1bfea", "size": 121728, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "FunctionApproximation/UsingTaylorPolynomials.ipynb", "max_stars_repo_name": "technologyhamed/Neuralnetwork", "max_stars_repo_head_hexsha": "95a96c7197f8441577ce14f30afd33b954906f9c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-11-16T22:42:21.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-24T10:05:14.000Z", "max_issues_repo_path": "FunctionApproximation/UsingTaylorPolynomials.ipynb", "max_issues_repo_name": "technologyhamed/Neuralnetwork", "max_issues_repo_head_hexsha": "95a96c7197f8441577ce14f30afd33b954906f9c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "FunctionApproximation/UsingTaylorPolynomials.ipynb", "max_forks_repo_name": "technologyhamed/Neuralnetwork", "max_forks_repo_head_hexsha": "95a96c7197f8441577ce14f30afd33b954906f9c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 447.5294117647, "max_line_length": 85238, 
"alphanum_fraction": 0.9255964117, "converted": true, "num_tokens": 1129, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9481545348152283, "lm_q2_score": 0.8918110353738529, "lm_q1q2_score": 0.8455746773879825}} {"text": "# Rectangular Plate Bending Element\n\nThis derivation follows the procedure given in Chapter 12 of \"A First Course in the Finite Element Method, 4th Edition\" by Daryl L. Logan.\n\nWe'll start by importing a few Python libraries that are useful for symbolic math, and initializing \"pretty\" printing.\n\n\n```python\nfrom sympy import symbols, Matrix, diff, integrate, simplify, factor, latex, init_printing\nfrom IPython.display import display, Math\ninit_printing()\n```\n\nThe plate width will be defined as $2b$, and the height will be $2c$ to be consistent with Figure 12-1. We'll set up some Sympy symbols to represent $b$ and $c$.\n\n\n```python\nb, c = symbols('b, c')\n```\n\nThe plate is defined by four nodes specified in counter-clockwise order: i, j, m, and n. The local x-axis runs from node i toward node j, and the local y-axis runs from node i toward node n. Next we'll define the element's local displacement vector, $[d]$, at each node. There are 3 degrees of freedom at each node: $w$, $\\theta_x$, and $\\theta_y$.\n\n\n```python\nwi, theta_xi, theta_yi = symbols('w_i, theta_x_i, theta_yi')\nwj, theta_xj, theta_yj = symbols('w_j, theta_xj, theta_yj')\nwm, theta_xm, theta_ym = symbols('w_m, theta_xm, theta_ym')\nwn, theta_xn, theta_yn = symbols('w_n, theta_xn, theta_yn')\nd = Matrix([wi, theta_xi, theta_yi, wj, theta_xj, theta_yj, wm, theta_xm, theta_ym, wn, theta_xn, theta_yn])\ndisplay(Math('[d] = ' + latex(d)))\n```\n\nA 12-term polynomial displacement function will be assumed to define the out-of-plane displacement, w, at any point (x, y) in the plate's local coordinate system. The rotations about each axis are derivatives of this displacement:\n\n$w = a_1 + a_2x + a_3y + a_4x^2 + a_5xy + a_6y^2 + a_7x^3 + a_8x^2y + a_9xy^2 + a_{10}y^3 + a_{11}x^3y + a_{12}xy^3$\n\n$\\theta_x = \\frac{dw}{dy} = a_3 + a_5x + 2a_6y + a_8x^2 + 2a_9xy + 2a_{10}y^2 + a_{11}x^3 + 3a_{12}xy^2$\n\n$\\theta_y = -\\frac{dw}{dx} = -a_2 - 2a_4x - a_5y - 3a_7x^2 - 2a_8xy - a_9y^2 - 3a_{11}x^2y - a_{12}y^3$\n\nThe negative sign on $\\frac{dw}{dx}$ is required to be consistent with the right hand rule. These equations can be rewritten in matrix form as follows:\n\n$[\\psi] = [P][a]$\n\nwhere $[\\psi]$ is shorthand for $\\begin{bmatrix} w \\\\ \\theta_x \\\\ \\theta_y \\end{bmatrix}$ and $[P]$ is defined as follows:\n\n\n```python\nx, y = symbols('x, y')\nP = Matrix([[1, x, y, x**2, x*y, y**2, x**3, x**2*y, x*y**2, y**3, x**3*y, x*y**3],\n [0, 0, 1, 0, x, 2*y, 0, x**2, 2*x*y, 3*y**2, x**3, 3*x*y**2],\n [0, -1, 0, -2*x, -y, 0, -3*x**2, -2*x*y, -y**2, 0, -3*x**2*y, -y**3]])\ndisplay(Math('P = ' + latex(P)))\n```\n\nThis equation for $[w]$ is valid for a single node. Evaluating $[P]$ at each node gives us a larger set of equations:\n\n$[d] = [C][a]$\n\nwhere $[C]$ is merely $[P]$ evaluated at each node, and $[d]$ is correpsondingly $[\\psi]$ at each node. 
Knowing that the plate width is $2b$ and the plate height is $2c$, we can obtain the matrix $[C]$.\n\n\n```python\nC = Matrix([P, P, P, P])\nC[0:3, 0:12] = C[0:3, 0:12].subs(x, 0).subs(y, 0) # i-node @ x = 0, y = 0\nC[3:6, 0:12] = C[3:6, 0:12].subs(x, 2*b).subs(y, 0) # j-node @ x = 2b, y = 0\nC[6:9, 0:12] = C[6:9, 0:12].subs(x, 2*b).subs(y, 2*c) # m-node @ x = 2b, y = 2c\nC[9:12, 0:12] = C[9:12, 0:12].subs(x, 0).subs(y, 2*c) # n-node @ x = 0, y = 2c\ndisplay(Math('[C] = ' + latex(C)))\n```\n\nAn important matrix that we will come back to later is the shape function matrix $[N]$, defined as:\n\n$[N] = [P][C]^{-1}$\n\nThe closed form solution of $[N]$ for a rectangular plate is:\n\n\n```python\nN = P*C.inv()\ndisplay(Math('[N] = ' + latex(simplify(N))))\n```\n\nWe can now solve for the $[a]$ matrix in terms of the nodal displacements:\n\n\n```python\na = simplify(C.inv()*d)\ndisplay(Math('[a] = ' + latex(a)))\n```\n\nThe next step is to define the curvature matrix:\n\n$[\\kappa] = \\begin{bmatrix} -\\frac{d^2w}{dx^2} \\\\ -\\frac{d^2w}{dy^2} \\\\ -\\frac{2d^2w}{dxdy} \\end{bmatrix} = [Q][a]$\n\nIt should be recognized that $w/[a]$ is simply the first row of our $[P]$ matrix. Evaluating the derivatives in this expression gives $[Q]$ as follows:\n\n\n```python\nQ = Matrix([-diff(diff(P[0, :], x), x),\n -diff(diff(P[0, :], y), y),\n -2*diff(diff(P[0, :], x), y)])\ndisplay(Math('[Q] = ' + latex(Q)))\n```\n\nWith $[Q]$ in hand we can now solve for the $[B]$ matrix which is essential for formulating the stiffness matrix $[k]$\n\n\n```python\nB = simplify(Q*C.inv())\ndisplay(Math('[B] = ' + latex(B)))\n```\n\nNow we form the constitutive matrix for isotropic materials, [D]. This matrix is analagous to the flexural stiffness of a beam EI.\n\n\n```python\nE, t, nu = symbols('E, t, nu')\nCoef = E*t**3/(12*(1-nu**2))\nD = Coef*Matrix([[1, nu, 0],\n [nu, 1, 0],\n [0, 0, (1-nu)/2]])\ndisplay(Math('[D] = ' + latex(D)))\n```\n\nNow we can calculate the stiffness matrix:\n\n$[k] = \\int_0^{2c} \\int_0^{2b} [B]^T[D][B] dx dy$\n\n\n```python\nk = integrate(integrate(B.T*D*B, (x, 0, 2*b)), (y, 0, 2*c))\ndisplay(Math('[k] = {Et^3}/{12(1-\\nu^2)}' + latex(simplify(k/Coef))))\n```\n\nThe surface force matrix $[F_s]$ can be obtained from the shape function matrix. 
Since we're interested in the surface force matrix for uniform pressures in the direction of w,\n\n\n```python\nq = symbols('q')\nFs = integrate(integrate(N[0, :].T*q, (x, 0, 2*b)), (y, 0, 2*c))\ndisplay(Math('[F_s] = 4qcb' + latex(Fs/(4*q*c*b))))\nprint(Fs/(4*q*c*b))\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "986c99e1495952c0dba475739b3da3b4ec45df55", "size": 8817, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Derivations/Rectangular Plate Element.ipynb", "max_stars_repo_name": "geosharma/PyNite", "max_stars_repo_head_hexsha": "b7419ac352b48d7cd92b1a34a30d0083e19cfb78", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 199, "max_stars_repo_stars_event_min_datetime": "2019-04-12T05:30:43.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-29T23:42:01.000Z", "max_issues_repo_path": "Derivations/Rectangular Plate Element.ipynb", "max_issues_repo_name": "mohashrafy/PyNite", "max_issues_repo_head_hexsha": "efffccdbff6727d3b271ba2937e35892d9df8c00", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 103, "max_issues_repo_issues_event_min_datetime": "2019-07-22T19:41:26.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T22:18:32.000Z", "max_forks_repo_path": "Derivations/Rectangular Plate Element.ipynb", "max_forks_repo_name": "mohashrafy/PyNite", "max_forks_repo_head_hexsha": "efffccdbff6727d3b271ba2937e35892d9df8c00", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 69, "max_forks_repo_forks_event_min_datetime": "2019-02-07T12:02:01.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-24T13:24:38.000Z", "avg_line_length": 30.2989690722, "max_line_length": 356, "alphanum_fraction": 0.5264829307, "converted": true, "num_tokens": 1970, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9433475699138559, "lm_q2_score": 0.8962513828326955, "lm_q1q2_score": 0.8454765640271562}} {"text": "## Setting up python environment\n\n\n```python\n# reset all previously defined varibles\n%reset -f\n\n# import everything from sympy moduleb \nfrom sympy import *\n\n# pretty math formatting\ninit_printing() # latex\n```\n\n### Symbolic variables must be declared\n\n\n```python\nx,y,z = symbols('x y z')\n```\n\n### Function definition\n\n\n```python\n## Example 1\n\nf = 2*x**3 # define function\nf # print function\n\n```\n\n\n```python\n## Example 2\n\ng = x**2 + x*y**2 # define function\ng # print function\n```\n\n### Function Evaluation\n\n\n```python\n# f(0)\nf.subs(x,0)\n```\n\n\n```python\n# f(1)\nf.subs(x,1)\n```\n\n\n```python\n# g(x=2,y=3)\ng.subs([(x,2),(y,3)])\n```\n\n## Plotting\n\n\n```python\n## Graph of f = x^2\n\nf = x**2\n\nplot(f)\n```\n\n\n```python\n## multiple graph in same window\n\nf = x**2\ng = x**3\n\np = plot(f,g, (x, -2, 2),show=False, legend=True )\n\np[0].line_color = 'blue'\np[1].line_color = 'green'\n\n\np[0].label = 'f(x)'\np[1].label = 'g(x)'\np.show()\n\n```\n\n\n```python\n## undocking plots out of browser\n%matplotlib qt \n\nf = x**2\nplot(f)\n\n```\n\n\n\n\n \n\n\n\n\n```python\n## forcing plots inside browser\n%matplotlib inline\n\nf = x**2\nplot(f)\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "da51b4c1101f2051f3f67d0cffc3b97abe463d2f", "size": 77432, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "sympy/generalCalculus.ipynb", "max_stars_repo_name": "krajit/krajit.github.io", "max_stars_repo_head_hexsha": "221c8bcdf0612b3ae28c827809aa309ea6a7b0c2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2018-09-29T07:40:07.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-28T22:17:04.000Z", "max_issues_repo_path": "sympy/generalCalculus.ipynb", "max_issues_repo_name": "krajit/krajit.github.io", "max_issues_repo_head_hexsha": "221c8bcdf0612b3ae28c827809aa309ea6a7b0c2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2019-08-26T08:42:47.000Z", "max_issues_repo_issues_event_max_datetime": "2019-08-26T09:48:21.000Z", "max_forks_repo_path": "sympy/generalCalculus.ipynb", "max_forks_repo_name": "krajit/krajit.github.io", "max_forks_repo_head_hexsha": "221c8bcdf0612b3ae28c827809aa309ea6a7b0c2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 8, "max_forks_repo_forks_event_min_datetime": "2017-09-09T23:32:09.000Z", "max_forks_repo_forks_event_max_datetime": "2020-01-28T21:11:39.000Z", "avg_line_length": 206.4853333333, "max_line_length": 27220, "alphanum_fraction": 0.9077513173, "converted": true, "num_tokens": 396, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9546474220263197, "lm_q2_score": 0.885631470799559, "lm_q1q2_score": 0.8454658004641767}} {"text": "## Exercise 1 (3 points): Principal Component Analysis [Python]\n\n\nIn this exercise, we cover probably one of the most popular data analysis techniques: principal component analysis (PCA). It is used in many applications and is often part of the pre-processing steps. With its help, we can find new axes which are designed to reflect the variation present in the data. What is more, we can represent each data point relative to these new axes and when we choose fewer axes we effectively project our data to a lower-dimensional projection space. 
This way, PCA can also be used as a dimensionality reduction technique where we keep only the dimensions which explain most of the variance. However, this specific aspect is postponed until Exercise 3.\n\n\nThere are far too many applications of PCA to be covered in a single exercise and you will probably encounter some of them in other lectures anyway. However, we still want to look at an example to understand the basics.\n\n\nSuppose you have captured an image of a brick as shown in Figure 1 (a). Additionally, you have already extracted a mask which discriminates the background from the brick. The file shapeData.pickle is attached to this exercise and contains a matrix of size 6752 \u00d7 2 which stores a list of all non-zero positions of this mask. We are going to apply PCA to this data and will see what we can do with it. For the test script, please pay also attention to Table 1.\n\n\n1. Load the data points stored in the file shapeData.pickle and plot the result. Hint: set an equal axes scaling so that the mask of the brick does not look distorted\n\n2. Centre the data so that it has zero mean, i.e. when $X \\in \\mathbb{R}^{nx2}$ denotes our data matrix, calculate\n\\begin{equation} X = \\tilde{X} - (\\mu_1 \\hspace{0.3cm} \\mu_2)\\end{equation}\nwhere $\\mu_i$ denotes the mean of the \ud835\udc56-th column of $\\tilde{X}$\n\n3. Compute the covariance matrix\n\\begin{equation}C = \\frac{1}{n-1}X^TX \\end{equation}\n\n4. We can now calculate the eigensystem, i.e. the principal components.\n \n a) Compute the eigenvalues and corresponding eigenvectors of $C$. You can use the $np.linalg.eig$ function for this task. But make sure (like in one of the previous exercises) that the result is sorted descendingly by the eigenvalues.\n\n b) The eigenvalue $\\lambda_i$ corresponds to the variance along the new axis $u_i$ (which is defined by the respective eigenvector). Calculate the explained variance of each axis (in percentage terms).\n\n5. The eigensystem $U = (u_1 \\hspace{0.3cm} u_2)$ defines its own feature space and we can describe our points relative to the new axes. Project each data point to the new eigensystem and plot the result. Before you do so, think about what you would expect the shape to change to and then check if the result matches with your expectation.\n\n## Exercise 2 (4 points): Singular Value Decomposition\n\nSingular value decomposition (SVD) is related to PCA since it ends up with a similar result even though the computation is different. In this exercise, we want to discuss the similarities and the differences between these two approaches.\n\nRoughly speaking, SVD is an extended diagonalization technique which allows us to decompose our data matrix\n\\begin{equation}X = U\\Sigma V^T\\end{equation}\n\nAs usual, $X \\in \\mathbb{R}^{n\u00d7d}$ stores the observations in rows and the variables in columns. Here, we assume that $X$ is already centred, i.e. the columns have zero mean. $\\Sigma$ is a diagonal matrix storing the singular values $s_i$ and the column vectors of $U$ and $V$ contain the left and right singular vectors, respectively.\n\nIn PCA, we need to calculate the covariance matrix $C$ (Equation 2). This matrix is symmetric so we can diagonalize it directly\n\\begin{equation} C = \\frac{1}{n-1}X^TX = WDW^T \\end{equation}\nwhere \ud835\udc4a stores the eigenvectors in the columns and \ud835\udc37 has the eigenvalues \ud835\udf06\ud835\udc56 on the diagonal. \n\n1. [Pen and Paper] Let\u2019s begin with the matrices of SVD and see how they are related to PCA. 
For this, fill out the missing properties summarized in Table 2.\n\n a) For the matrices $U$ and $V$, do the singular vectors correspond to the eigenvectors of $X^TX$ or $XX^T$? For this, show the mappings of the third column of Table 2 with the help of Equation 3, Equation 4 and \\begin{equation} \\tilde{C} = \\frac{1}{n-1}X^TX = \\tilde{W}\\tilde{D}\\tilde{W}^T \\end{equation}\n\n b) Based on your previous findings, what is the relationship between the singular values $s_i$ and the eigenvalues $\\lambda_i$? \\begin{equation} s_i = \u2026 \\end{equation}\n \n c) Decide about the matrix dimensions (the mappings should help with this decision). Assume that zero rows/columns have not been removed. \n \n d) We can also interpret the matrices geometrically. What is the corresponding affine transformation (translation, scaling, rotation, etc.) of each matrix? Hint: think about the special properties of the matrices\n\n2. [Python] We saw that there is a strong relationship between the result of PCA and SVD since we can define a mapping between the eigenvectors/eigenvalues and singular vectors/singular values. However, the algorithm used to compute the results is different. For the standard approach with PCA1, we need to calculate the covariance matrix but this is not the case in SVD. It turns out that this is an advantage for SVD since these calculations can get numerically unstable. We now want to explore this circumstance a bit further by calculating the singular values of the matrix

\\begin{equation} A = \\begin{pmatrix}1 & 1 & 1\\\\ \\epsilon & 0 & 0\\\\ 0 & \\epsilon & 0\\\\ 0 & 0 & \\epsilon \\end{pmatrix} \\end{equation}

where $\\epsilon \\in \\mathbb{R}^+$ is a small number. We are not interested in the corresponding vectors in this part.\n \n a) Define a function which takes \ud835\udf00 as an argument and defines the matrix \ud835\udc34 accordingly. \n i. Calculate the singular values via the standard PCA method \n\\begin{equation} C = A^TA \\\\ \\lambda = eval(C) \\\\ s = \\sqrt{\\lambda} \\end{equation}
\n    It is helpful to calculate this in a step-by-step fashion and to print the exact definition after each step with full precision. For the latter, you can use the astype(str) method of the ndarray objects.\n    \n    ii. Calculate the singular values $s$ again but this time using an SVD algorithm (e.g. np.linalg.svd) and print the result.\n\n    b) Call your function and set $\\epsilon$ to the tiny number $10^{-7}$\n\\begin{equation} epsilon = 1.0e-7 \\end{equation}\n    \n    c) Now, set $\\epsilon = 10^{-8}$ and repeat the calculations. If done correctly, you should get an error. What is the problem here?\n\n    d) How can you tell that the standard PCA approach returns at least one incorrect result for the eigenvalues $\\lambda$? Hint: look for a property of the covariance matrix which is violated here.\n\n## Exercise 3 (3 points): PCA vs. LDA [Python]\n\nAs mentioned in Exercise 1, it is also common to use PCA (or SVD) as a dimensionality reduction technique. Instead of projecting our data points to all new axes, we select only a subset, and this subset is designed to cover as much of the variance in the data as possible.\n\n\nWe already saw another dimensionality reduction technique in previous assignments, namely\nlinear discriminant analysis (LDA). We are also going to apply PCA to the Statlog (Vehicle\nSilhouettes) dataset and compare the result with the one obtained by LDA. For the test script,\nplease also pay attention to Table 3.\n\n1. Load the data from the train.csv file into your Python program.\n\n2. PCA may not work well when the features are scaled differently, as is the case with our\ndataset. As a countermeasure, standardize the data (as discussed in assignment 3). You\ncan use the StandardScaler from the sklearn package for this purpose.\n\n3. Apply PCA to the dataset and calculate the (two-dimensional) projections of the data\nto the new coordinate system. There is a PCA class in the sklearn.decomposition\npackage which you can use.\n\n4. Show the explained variance (in percentage terms) as a cumulative sum over all eigenvalues\nin a plot (Figure 2 shows an example solution).\n\n5. Plot the data with respect to the first two principal components ($u_1$ and $u_2$) and compare\nthe result with your LDA plot.\n\n6. Fill out Table 4, which compares some properties of PCA with LDA.\n    \n    a) Is the procedure supervised or unsupervised?\n    \n    b) What is the focus of the new axes, i.e. are they designed to explain the variance in\nthe data or to discriminate the classes?\n    \n    c) What is the maximal number of eigenvectors we can retrieve, i.e. 
the maximal dimension size of the projection space?\n", "meta": {"hexsha": "a547ddaea067ea4b5c0e9b2416dc3ed5d91dc304", "size": 10861, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "09_pca_svd_lda/PCA.ipynb", "max_stars_repo_name": "jhinga-la-la/pattern-recognition-course", "max_stars_repo_head_hexsha": "7ad4f70b2c427f3c37f59f47768b90371873823c", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "09_pca_svd_lda/PCA.ipynb", "max_issues_repo_name": "jhinga-la-la/pattern-recognition-course", "max_issues_repo_head_hexsha": "7ad4f70b2c427f3c37f59f47768b90371873823c", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "09_pca_svd_lda/PCA.ipynb", "max_forks_repo_name": "jhinga-la-la/pattern-recognition-course", "max_forks_repo_head_hexsha": "7ad4f70b2c427f3c37f59f47768b90371873823c", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 56.274611399, "max_line_length": 871, "alphanum_fraction": 0.6691833165, "converted": true, "num_tokens": 2117, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9304582497090321, "lm_q2_score": 0.908617890746506, "lm_q1q2_score": 0.8454310122783065}} {"text": "# Praca domowa - Dominik Sta\u0144czak\n\n\n```python\nimport sympy\nsympy.init_printing()\nt, lambda3a, lambdaa12, N4, N12, N16, dN4, dN12, dN16, dt = sympy.symbols('t, lambda_3a, lambda_a12, N4, N12, N16, dN4, dN12, dN16, dt', real=True)\n\neqs = [\n sympy.Eq(dN4/dt, -3*lambda3a * N4 **3 - lambdaa12 * N4 * N12),\n sympy.Eq(dN12/dt, lambda3a * N4 **3 - lambdaa12 * N4 * N12),\n sympy.Eq(dN16/dt, lambdaa12 * N4 * N12)\n]\neqs\n```\n\n\n```python\nm, rho = sympy.symbols('m, rho', real=True)\nX4, X12, X16, dX4, dX12, dX16 = sympy.symbols('X4, X12, X16, dX4, dX12, dX16', real=True)\nXeqs = [\n sympy.Eq(X4, m/rho*4*N4),\n sympy.Eq(X12, m/rho*12*N12),\n sympy.Eq(X16, m/rho*16*N16),\n]\nXeqs\n```\n\n\n```python\nsubs = {X4: dX4, X12: dX12, X16: dX16, N4: dN4, N12: dN12, N16: dN16}\ndXeqs = [eq.subs(subs) for eq in Xeqs]\ndXeqs\n```\n\n\n```python\nfull_conservation = [sympy.Eq(X4 + X12 + X16, 1), sympy.Eq(dX4 + dX12 + dX16, 0)]\nfull_conservation\n```\n\n\n```python\nall_eqs = eqs + Xeqs + dXeqs + full_conservation\nall_eqs\n```\n\n\n```python\nX_all_eqs = [eq.subs(sympy.solve(Xeqs, [N4, N12, N16])).subs(sympy.solve(dXeqs, [dN4, dN12, dN16])) for eq in eqs] + [full_conservation[1]]\nX_all_eqs\n```\n\n\n```python\nsolutions = sympy.solve(X_all_eqs, [dX4, dX12, dX16])\ndX12dX4 = solutions[dX12]/solutions[dX4]\ndX12dX4\n```\n\n\n```python\nq = sympy.symbols('q', real=True)\ndX12dX4_final = dX12dX4.subs({lambdaa12*m: q * lambda3a * rho}).simplify()\ndX12dX4_final\n```\n\n\n```python\nfX12 = sympy.Function('X12')(X4)\ndiffeq = sympy.Eq(fX12.diff(X4), dX12dX4_final.subs(X12, fX12))\ndiffeq\n```\n\n\n```python\ndX16dX4 = solutions[dX16]/solutions[dX4]\ndX16dX4\n```\n\n\n```python\ndX16dX4_final = dX16dX4.subs({lambdaa12*m: q * lambda3a * rho}).simplify()\ndX16dX4_final\n```\n\n\n```python\nderivatives_func = sympy.lambdify((X4, X12, X16, q), [dX12dX4_final, dX16dX4_final])\nderivatives_func(1, 0, 0, 1)\n```\n\n\n```python\ndef f(X, X4, q):\n return derivatives_func(X4, 
*X, q)\nf([0, 0], 1, 1)\n```\n\n\n```python\nimport numpy as np\nfrom scipy.integrate import odeint\nimport matplotlib.pyplot as plt\nX4 = np.linspace(1, 0, 1000)\nq_list = np.logspace(-3, np.log10(2), 500)\nresults = []\n# fig, (ax1, ax2) = plt.subplots(2, sharex=True, figsize=(10, 8))\n# ax1.set_xlim(0, 1)\n# ax2.set_xlim(0, 1)\n# ax1.set_ylim(0, 1)\n# ax2.set_ylim(0, 1)\nfor q in q_list:\n X = odeint(f, [0, 0], X4, args=(q,))\n X12, X16 = X.T\n# ax1.plot(X4, X12, label=f\"q: {q:.1f}\")\n# ax2.plot(X4, X16, label=f\"q: {q:.1f}\")\n# ax2.set_xlabel(\"X4\")\n# ax1.set_ylabel(\"X12\")\n# ax2.set_ylabel(\"X16\")\n# plt.plot(X4, X16)\n# plt.legend()\n results.append(X[-1])\nresults = np.array(results)\n```\n\n\n```python\nX12, X16 = results.T\nplt.figure(figsize=(10, 10))\nplt.plot(q_list, X12, label=\"X12\")\nplt.plot(q_list, X16, label=\"X16\")\nplt.xlabel(\"q\")\nplt.xscale(\"log\")\nplt.ylabel(\"X\")\nplt.legend(loc='best')\nplt.xlim(q_list.min(), q_list.max());\nplt.grid()\nplt.savefig(\"Reacts.png\")\n```\n", "meta": {"hexsha": "b6276353fad9cee72fadd49d73eef54341751ff5", "size": 82607, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "NuclearReactsOnly.ipynb", "max_stars_repo_name": "StanczakDominik/FUW-AstroI-notatki", "max_stars_repo_head_hexsha": "f99d639b3264a79ddcd291ca9c475718acb82d5d", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "NuclearReactsOnly.ipynb", "max_issues_repo_name": "StanczakDominik/FUW-AstroI-notatki", "max_issues_repo_head_hexsha": "f99d639b3264a79ddcd291ca9c475718acb82d5d", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "NuclearReactsOnly.ipynb", "max_forks_repo_name": "StanczakDominik/FUW-AstroI-notatki", "max_forks_repo_head_hexsha": "f99d639b3264a79ddcd291ca9c475718acb82d5d", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 169.2766393443, "max_line_length": 31872, "alphanum_fraction": 0.8696115341, "converted": true, "num_tokens": 1254, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9511422186079558, "lm_q2_score": 0.8887587993853654, "lm_q1q2_score": 0.8453360162547395}} {"text": "\n\nAsset price under $\\mathbb Q$ follows\n$$S_{t} = S_{0} exp \\{ \\mu t + \\sigma W_{t} \\}$$\nCoonsider Digital put with payoff \n$$h(S_{T}) = I(S_{T} < S_{0}e^{-b})$$\nWe want to find the forward price:\n$$v = \\mathbb E^{\\mathbb Q}[h(S_{T})]$$\nParameters are given as \n$$r = .03, \\sigma = .2, \\mu = r - \\frac{1}{2} \\sigma^2 = .01, T = 1, b = .39$$\n\n\n- Prove that the exact price is 0.02275\n\n__Pf:__\n\nAs we have discussed in the video, we claim that\n\n\\begin{equation}\n\\begin{aligned}\nv &= \\mathbb E^{\\mathbb Q}[h(S_{T})] \\\\\n&= \\mathbb Q(W_1 < - \\frac{b + \\mu}{\\sigma} = -2) \\\\\n&= \\Phi(-2), \\\\\n\\end{aligned}\n\\end{equation}\n\nwhere $W_1 \\sim \\mathcal N(0,1)$ and $\\Phi(\\cdot)$ is the $\\mathrm{CDF}$ of the standard Normal Distribution.\n\nThus, the exace price is shown as follows:\n\n\n\n```\nimport scipy.stats as ss\nexact_price = ss.norm(0,1).cdf(-2)\nprint(exact_price)\n```\n\n 0.022750131948179195\n\n\n- Use $\\mathrm{OMC}$ find the price\n\n__Soln:__\n\nBy using $\\mathrm{OMC}$, we replace $\\Phi(-2)$ with $\\frac{1}{n} \\sum_{i=1}^n I(X_i<-2)$\n\n\n```\nimport numpy as np\n\ndef OMC(N): # N is the number of samples\n s = 0\n for i in range(N):\n X = np.random.normal(0,1)\n if (X<-2):\n s += 1\n return s/N\nOMC(1000000)\n```\n\n\n\n\n 0.022768\n\n\n\n- Use $\\mathrm{IS}(\\alpha)$ find the price\n \n __Soln:__\n\n Assume $\\phi_\\alpha(\\cdot) \\sim \\mathcal N(-\\alpha,1)$, where $\\phi_\\alpha(\\cdot)$ is the $\\mathrm{pdf}$.\n\nThen we can rewrite v which has been discussed in the video:\n\n\\begin{equation}\n\\begin{aligned}\nv &= \\mathbb E[I(Y<-2) \\cdot e^{\\frac{1}{2}\\alpha^2 + \\alpha Y}|Y \\sim \\phi_\\alpha] \\\\\n&\\approx \\frac{1}{n} \\sum_{i=1}^n I(Y_i<-2) \\cdot e^{\\frac{1}{2}\\alpha^2 + \\alpha Y_i}, \\\\\n\\end{aligned}\n\\end{equation}\n\nwhere $Y_i \\sim \\mathcal N(-\\alpha,1).$\n\n\n```\nimport numpy as np\nimport math\n\ndef ImpSamp(N,alpha): # N is the number of samples\n s = 0\n for i in range(N):\n Y = np.random.normal(-alpha,1)\n if (Y<-2):\n s += math.e ** (alpha**2/2 + alpha*Y)\n return s/N\nImpSamp(10000,2)\n```\n\n\n\n\n 0.02249178731207442\n\n\n\n- Can you show your approach is optimal?\n\n\n\n```\nerr_list = []\nfor alpha in range(10):\n M = 1000 # the number of trials to find the minimum of error\n error = []\n for i in range (M):\n error.append(abs(ImpSamp(10000,alpha) - exact_price)) # document every trial's error\n err_list.append(min(error)) # document the minimum of error wrt different alpha\nmin_err = min(err_list)\nprint(\">>> The minimum error among different \u03b1 is \" + str(min_err) + \", where \u03b1 = \" + str(err_list.index(min_err))) \n```\n\n >>> The minimum error among different \u03b1 is 2.0011788371201988e-07, where \u03b1 = 2\n\n\n- Prove or demonstrate $\\mathrm{IS}$ is more efficient to $\\mathrm{OMC}$\n\n\n```\nN = 1000 # N is the number of samples for both OMC and Importance Sampling\nsq_err_omc = 0\nsq_err_ImpSamp = 0\nK = 10000 # K is the number of trials for OMC\nfor i in range(K):\n sq_err_omc += (exact_price - OMC(N))**2\n sq_err_ImpSamp += (exact_price - ImpSamp(N,2))**2\nmse_omc = sq_err_omc / K\nmse_ImpSamp = sq_err_ImpSamp / K\nprint(\">>> The Mean Square Error of Ordinary Monte Carlo is \" + str(mse_omc))\nprint(\">>> The Mean Square Error of Importance Sampling is \" + str(mse_ImpSamp)) \n```\n\n >>> The Mean Square Error of Ordinary Monte Carlo is 
2.228314286581901e-05\n >>> The Mean Square Error of Importance Sampling is 1.1975332166123286e-06\n\n\n- After trying two different values of $K$ ($K=1000\\ or\\ 10000$), the value of exponential will not change. Thus we can say that $\\mathrm{IS}$ is more efficient to $\\mathrm{OMC}$.\n\n__Numeric Method:__\n\nAs we have shown before,\n$$ \\hat v_{OMC} = \\frac{1}{n} \\sum_{i=1}^n I(X_i<-2),$$\nand\n$$ \\hat v_{IS} = \\frac{1}{n} \\sum_{i=1}^n I(Y_i<-2) \\cdot e^{\\frac{1}{2}\\alpha^2 + \\alpha Y_i}.$$\nBoth $v_{OMC}$ and $v_{IS}$ are unbiased, thus we can deduce that\n\n\\begin{equation}\n\\begin{aligned}\n\\mathrm{MSE}(\\hat v_{OMC}) &= Var(\\hat v_{OMC}) \\\\\n&= \\frac{1}{n} \\left( \\mathbb E[I(X_i<-2)^2] - (\\mathbb E[I(X_i<-2)])^2 \\right) \\\\\n&= \\frac{1}{n}(\\Phi(-2) - \\Phi(-2)^2) \\\\\n\\mathrm{MSE}(\\hat v_{IS}) &= Var(\\hat v_{IS}) \\\\\n&= \\frac{1}{n} \\left( \\mathbb{E} [I(Y_{i} < - 2) e^{\\alpha^{2} + 2 \\alpha Y_{i}}] - \\Phi^{2}(-2) \\right) \\\\\n\\end{aligned}\n\\end{equation}\n\nNow we calculate the right hand side of the last equation:\n\n\\begin{equation}\n\\begin{aligned}\n\\mathbb{E} [I(Y_{i} < - 2) e^{\\alpha^{2} + 2 \\alpha Y_{i}}] &= \\int_{- \\infty}^{-2} e^{\\alpha^{2}+ 2 \\alpha y} \\frac{1}{\\sqrt{2 \\pi}} e^{- \\frac{(y + \\alpha)^2}{2}} \\mathrm{d} y \\\\\n&= \\int_{- \\infty}^{-2} \\frac{1}{\\sqrt{2 \\pi}} e^{- \\frac{y^{2} - \\alpha y - \\alpha^{2}}{2}} \\mathrm{d} y \\\\\n&= \\int_{- \\infty}^{-2} \\frac{1}{\\sqrt{2 \\pi}} e^{- \\frac{(y - \\alpha)^{2}}{2}} e^{\\alpha^{2}} \\mathrm{d} y \\\\\n&= e^{\\alpha^{2}} \\cdot \\Phi(-2-\\alpha), \\\\\n\\end{aligned}\n\\end{equation}\n\nThus, we will have\n$$\\mathrm{MSE}(\\hat v_{IS}) = \\frac{1}{n} \\left( e^{\\alpha^{2}} \\Phi(-2-\\alpha) - \\Phi^{2}(-2) \\right).$$\n\nTherefore,\n$$\\mathrm{MSE}(\\hat v_{OMC}) - \\mathrm{MSE}(\\hat v_{IS}) = \\frac{1}{n} \\left( \\Phi(-2) - e^{\\alpha^{2}} \\Phi(-2 - \\alpha) \\right) \\ge 0.$$\n\n\n```\nimport numpy as np\nimport scipy.stats as ss\n\nfor alpha in range(5):\n diff = ss.norm(0,1).cdf(-2) - np.exp(alpha**2) * ss.norm.cdf(-2-alpha)\n print(diff)\n```\n\n 0.0\n 0.019080728658526478\n 0.021020940734838522\n 0.020427370203270685\n 0.01398320509620665\n\n", "meta": {"hexsha": "8c0be3d53ee1e59832f3adbcf495e031378fccbe", "size": 12458, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "src/Hw11_IS.ipynb", "max_stars_repo_name": "Jun-629/20MA573", "max_stars_repo_head_hexsha": "addad663d2dede0422ae690e49b230815aea4c70", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/Hw11_IS.ipynb", "max_issues_repo_name": "Jun-629/20MA573", "max_issues_repo_head_hexsha": "addad663d2dede0422ae690e49b230815aea4c70", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/Hw11_IS.ipynb", "max_forks_repo_name": "Jun-629/20MA573", "max_forks_repo_head_hexsha": "addad663d2dede0422ae690e49b230815aea4c70", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-02-05T21:42:08.000Z", "max_forks_repo_forks_event_max_datetime": "2020-02-05T21:42:08.000Z", "avg_line_length": 31.7806122449, "max_line_length": 225, "alphanum_fraction": 0.4353026168, "converted": true, "num_tokens": 2071, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9046505325302033, "lm_q2_score": 0.934395164824565, "lm_q1q2_score": 0.8453010834521898}} {"text": "```python\nimport math\nimport torch\nimport numpy as np\nimport matplotlib.pyplot as plt\n```\n\n\n```python\ndevice = 'cuda' if torch.cuda.is_available() else 'cpu'\n```\n\n# Exploring optimisation of analytic functions\n\n\n```python\ndef rastrigin(X, A=1.0):\n return A*2 + ( (X[0]**2 - A*torch.cos(2*math.pi*X[0])) + (X[1]**2 - A*torch.cos(2*math.pi*X[1])) ) \n```\n\n\n```python\nxmin, xmax, xstep = -5, 5, .2\nymin, ymax, ystep = -5, 5, .2\nxs = np.arange(xmin, (xmax + xstep), xstep)\nys = np.arange(ymin, (ymax + ystep), ystep)\nz = rastrigin(torch.tensor([xs,ys]), A=1.0).numpy()\n```\n\n\n```python\nfig, ax = plt.subplots(figsize=(8,6))\nax.plot(xs, z)\nplt.tight_layout()\nplt.savefig('rastrigin.png')\n```\n\nWith $A=1.0$ we find the Rastrigin function has many small 'bumps' or local minima. However the main basin at $x=0$ is the global minimum we want our optimisers to reach.\n\n\n```python\np_SGD = torch.tensor([5.0, 5.0], requires_grad=True, device=device)\np_SGD_Mom = torch.tensor([5.0, 5.0], requires_grad=True, device=device)\np_Adagrad = torch.tensor([5.0, 5.0], requires_grad=True, device=device)\np_Adam = torch.tensor([5.0, 5.0], requires_grad=True, device=device)\nprint('Parameters initialised:\\n ', p_SGD, p_SGD.type())\nepochs = 100\nprint('Max epochs:\\n ', epochs)\nA = 1.0\nopt_SGD = torch.optim.SGD([p_SGD], lr=0.01)\nprint(f'Initialised SGD:\\n Learning rate:{0.01}')\nopt_SGD_Mom = torch.optim.SGD([p_SGD_Mom], lr=0.01, momentum=0.09)\nprint(f'Initialised SGD Momentum:\\n Learning rate:{0.01}, Momentum:{0.09}')\nopt_Adagrad = torch.optim.Adagrad([p_Adagrad], lr=0.01)\nprint(f'Initialised Adagrad:\\n Learning rate:{0.01}')\nopt_Adam = torch.optim.Adam([p_Adam], lr=0.01)\nprint(f'Initialised Adam:\\n Learning rate:{0.01}')\n\nplt_loss_SGD = []\nplt_loss_SGD_Mom = []\nplt_loss_Adagrad = []\nplt_loss_Adam = []\n\nfor epoch in range(epochs):\n # zero gradients\n opt_SGD.zero_grad()\n opt_SGD_Mom.zero_grad()\n opt_Adagrad.zero_grad()\n opt_Adam.zero_grad()\n # compute loss\n loss_SGD = rastrigin(p_SGD, A=A)\n loss_SGD_Mom = rastrigin(p_SGD_Mom, A=A)\n loss_Adagrad = rastrigin(p_Adagrad, A=A)\n loss_Adam = rastrigin(p_Adam, A=A)\n # backprop\n loss_SGD.backward()\n loss_SGD_Mom.backward()\n loss_Adagrad.backward()\n loss_Adam.backward()\n # step optimiser\n opt_SGD.step()\n opt_SGD_Mom.step()\n opt_Adagrad.step()\n opt_Adam.step()\n # store loss for plots\n plt_loss_SGD.append(loss_SGD.item())\n plt_loss_SGD_Mom.append(loss_SGD_Mom.item())\n plt_loss_Adagrad.append(loss_Adagrad.item())\n plt_loss_Adam.append(loss_Adam.item())\n\n\nprint(f'Loss function:\\n Rastrigin, A={A}')\nfig, ax = plt.subplots(figsize=(8,5))\nax.plot(plt_loss_SGD, label='SGD', linewidth=2, alpha=.6)\nax.plot(plt_loss_SGD_Mom, label='SGD Momentum', linewidth=2, alpha=.6)\nax.plot(plt_loss_Adagrad, label='Adagrad', linewidth=2, alpha=.6)\nax.plot(plt_loss_Adam, label='Adam', linewidth=2, alpha=.6)\n\nax.legend()\nax.set_xlabel('Epoch')\nax.set_ylabel('Loss')\nplt.tight_layout()\nplt.savefig('optimiser_comparison_rastrigin.png')\n```\n\nRastrigin is a difficult function to optimise as it's filled with many local minima, but there is only one global minimum. 
We find that SGD + Momentum shows the best performance when applied to a 2D Rastrigin function with $A=1.0$ (a parameter which determines how 'bumpy' the function is).\n\n# Optimisation of a SVM on real data\n\n\nApplying soft-margin SVM to Iris data and optimise its paramters using gradient descent. \n\nNote: we will only be using two of the four classes from the dataset.\n\nAn SVM tries to find the maximum margin hyperplane which separates the data classes. For a soft margin SVM\nwhere $\\textbf{x}$ is our data, we minimize:\n\n\\begin{equation}\n\\left[\\frac 1 n \\sum_{i=1}^n \\max\\left(0, 1 - y_i(\\textbf{w}\\cdot \\textbf{x}_i - b)\\right) \\right] + \\lambda\\lVert \\textbf{w} \\rVert^2\n\\end{equation}\n\nWe can formulate this as an optimization over our weights $\\textbf{w}$ and bias $b$, where we minimize the\nhinge loss subject to a level 2 weight decay term. The hinge loss for some model outputs\n$z = \\textbf{w}\\textbf{x} + b$ with targets $y$ is given by:\n\n\\begin{equation}\n\\ell(y,z) = \\max\\left(0, 1 - yz \\right)\n\\end{equation}\n\n\n```python\nimport torch\nimport pandas as pd\nfrom tqdm.auto import tqdm\nimport matplotlib.pyplot as plt\n```\n\n\n```python\ndef svm(x,w,b):\n h = (w*x).sum(1) + b\n return h\n```\n\n\n```python\ndef hinge_loss(z, y):\n yz = y * z\n return torch.max(torch.zeros_like(yz), (1-yz))\n```\n\n\n```python\n# test\nprint(hinge_loss(torch.randn(2), torch.randn(2) > 0).float())\n```\n\n tensor([1.0000, 2.7751])\n\n\n\n```python\nurl = 'https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data'\ndf = pd.read_csv(url, header=None)\ndf = df.sample(frac=1, random_state=0) # shuffle\n\ndf = df[df[4].isin(['Iris-virginica', 'Iris-versicolor'])] # filter\n\n# add label indices column\nmapping = {k: v for v, k in enumerate(df[4].unique())}\ndf[5] = (2 * df[4].map(mapping)) - 1 # labels in {-1,1}\n\n# normalise data\nalldata = torch.tensor(df.iloc[:, [0,1,2,3]].values, dtype=torch.float)\nalldata = (alldata - alldata.mean(dim=0)) / alldata.var(dim=0)\n\n# create datasets\ntargets_tr = torch.tensor(df.iloc[:75, 5].values, dtype=torch.long)\ntargets_va = torch.tensor(df.iloc[75:, 5].values, dtype=torch.long)\ndata_tr = alldata[:75]\ndata_va = alldata[75:]\n```\n\n\n```python\nfrom torch.utils import data\n```\n\n\n```python\n# mini-batch training data\ndataset_tr = data.TensorDataset(data_tr, targets_tr)\ndataloader_tr = data.DataLoader(dataset_tr, batch_size=25, shuffle=True)\n```\n\n\n```python\n# mini-batch test data\ndataset_va = data.TensorDataset(data_va, targets_va)\ndataloader_va = data.DataLoader(dataset_va, batch_size=25, shuffle=True)\n```\n\n\n```python\ndef eval_accuracy(predictions, labels):\n a = sum(predictions.detach().numpy() * labels.numpy() >=0) / len(labels)\n return a\n```\n\n\n```python\ndef svm_train(train_dataloader, data_va, targets_va, epochs=100, lr=0.01, decay=0.01):\n print('Support Vector Machine:')\n print(' learning_rate:', lr)\n print(' epochs:', epochs)\n train_rows, train_col = train_dataloader.dataset.tensors[0].shape\n print(f' X_train.shape: ({train_rows},{train_col})')\n# train_y_rows = dataloader.dataset.tensors[1].shape[0]\n# print(f' y_train.shape: ({train_y_rows})')\n# test_rows, test_col = test_dataloader.dataset.tensors[0].shape\n# print(f' X_test.shape: ({train_rows},{train_col})')\n# test_y_rows = dataloader.dataset.tensors[1].shape[0]\n# print(f' y_test.shape: ({test_y_rows})')\n print('---------------------------------')\n # initialise weights and biases\n w = torch.randn(1, train_col, 
requires_grad=True)\n b = torch.randn(1, requires_grad=True)\n \n optimiser = torch.optim.SGD([w,b], lr=lr, weight_decay=decay)\n# optimiser = torch.optim.Adam([w,b], lr=lr, weight_decay=decay)\n \n # record loss over epoch\n losses = []\n # record training accuracy over epoch\n train_accuracy = []\n # record validation accuracy over epoch\n test_accuracy = []\n \n print('Training model, please wait...')\n for epoch in tqdm(range(epochs)):\n ep_loss = 0\n for train_batch in train_dataloader:\n # zero gradients\n optimiser.zero_grad()\n # compute loss\n X_train, y_train = train_batch\n y_pred = svm(X_train, w, b)\n loss = hinge_loss(y_pred, y_train).mean()\n # backprop\n loss.backward()\n # step optimiser\n optimiser.step()\n # track loss\n ep_loss += loss.item()\n losses.append(ep_loss)\n # training accuracy\n ep_train_pred = svm(X_train, w, b)\n ep_train_acc = eval_accuracy(ep_train_pred, y_train)\n train_accuracy.append(ep_train_acc)\n # validation accuracy\n ep_test_pred = svm(data_va, w, b)\n ep_test_acc = eval_accuracy(ep_test_pred, targets_va)\n test_accuracy.append(ep_test_acc)\n\n print(f'Training accuracy: {train_accuracy[-1]*100}%')\n print(f'Validation accuracy: {ep_test_acc*100}%')\n \n print(f'Train loss: {losses[-1]}')\n return train_accuracy, test_accuracy, losses\n```\n\n\n```python\ntr_acc, va_acc, loss = svm_train(dataloader_tr, data_va, targets_va,\n epochs=100, lr=0.01, decay=0.01)\n```\n\n Support Vector Machine:\n learning_rate: 0.01\n epochs: 100\n X_train.shape: (75,4)\n ---------------------------------\n Training model, please wait...\n\n\n\n 0%| | 0/100 [00:000\n\\end{equation}\nto be solved using forward Euler discretization in time and upwind discretization in space(FTUS). The difference equation is\n\\begin{equation}\n\\frac{u_{i}^{n+1}-u_{i}^{n}}{\\Delta t}=-a\\left(\\frac{u_{i}^{n}-u_{i-1}^{n}}{\\Delta x}\\right).\n\\end{equation}\n\n\n```python\nfrom pymodpde import DifferentialEquation, symbols, i, k, n # import the differential equation and the indicies we need to use\n\na= symbols('a') # define positive a constant for the advection velocity\n\nDE1 = DifferentialEquation(dependentVarName='u',independentVarsNames=['x']) # construct a one D differential equation with 'x' and 'u' as independent and dependent variables, respectively\n\n# method I of constructing the rhs:\n# construct the first upwind derivative of the dependent variabe 'u' \n# with respect to the independent variable 'x' using the stencil -1, and 0 evaluated at time tn = n\nadvectionTerm1 = DE1.expr(order=1, directionName='x', time=n, stencil=[-1, 0])\n \n\n# set the rhs of the differential equation\nDE1.set_rhs(- a * advectionTerm1 )\n\n# compute the modified equation up to two terms\n\nDE1.generate_modified_equation(nterms=4)\n```\n\n\n```python\n# display the mofified equation\nDE1.display_modified_equation()\n```\n\n\n$\\displaystyle \\frac{\\partial u }{\\partial t} = - a \\frac{\\partial u }{\\partial x} + \\frac{a \\left(- \\Delta{t} a + \\Delta{x}\\right)}{2} \\frac{\\partial^{2} u }{\\partial x^{2}} + \\frac{a \\left(- 2 \\Delta{t}^{2} a^{2} + 3 \\Delta{t} \\Delta{x} a - \\Delta{x}^{2}\\right)}{6} \\frac{\\partial^{3} u }{\\partial x^{3}} + \\frac{a \\left(- 6 \\Delta{t}^{3} a^{3} + 12 \\Delta{t}^{2} \\Delta{x} a^{2} - 7 \\Delta{t} \\Delta{x}^{2} a + \\Delta{x}^{3}\\right)}{24} \\frac{\\partial^{4} u }{\\partial x^{4}} $\n\n\n\n```python\n# display the amplification factor\nDE1.display_amp_factor()\n```\n\n\n$\\displaystyle e^{\\Delta{t} \\alpha} = - \\frac{\\Delta{t} a}{\\Delta{x}} 
+ \\frac{\\Delta{t} a e^{- i \\Delta{x} k_{1}}}{\\Delta{x}} + 1.0$\n\n\n\n```python\n# print the latex form of the generate_modified_equation\nprint(DE1.latex_modified_equation())\n```\n\n \\frac{\\partial u }{\\partial t} = - a \\frac{\\partial u }{\\partial x} + \\frac{a \\left(- \\Delta{t} a + \\Delta{x}\\right)}{2} \\frac{\\partial^{2} u }{\\partial x^{2}} + \\frac{a \\left(- 2 \\Delta{t}^{2} a^{2} + 3 \\Delta{t} \\Delta{x} a - \\Delta{x}^{2}\\right)}{6} \\frac{\\partial^{3} u }{\\partial x^{3}} + \\frac{a \\left(- 6 \\Delta{t}^{3} a^{3} + 12 \\Delta{t}^{2} \\Delta{x} a^{2} - 7 \\Delta{t} \\Delta{x}^{2} a + \\Delta{x}^{3}\\right)}{24} \\frac{\\partial^{4} u }{\\partial x^{4}} \n\n\n\n```python\n# print the latex form of the amplification factor\nprint(DE1.latex_amp_factor())\n```\n\n e^{\\Delta{t} \\alpha} = - \\frac{\\Delta{t} a}{\\Delta{x}} + \\frac{\\Delta{t} a e^{- i \\Delta{x} k_{1}}}{\\Delta{x}} + 1.0\n\n\nUsing the second method for constructing the rhs\n\n\n```python\nfrom pymodpde import DifferentialEquation, symbols, j, n # import the differential equation and the indicies we need to use\n\nb = symbols('b') # define positive a constant for the advection velocity\n\nDE2 = DifferentialEquation(dependentVarName='f',independentVarsNames=['y'], indices=[j])# construct the first upwind derivative of the dependent variabe 'f' \n # with respect to the independent variable 'y' using the stencil -1, and 0 evaluated at time tn = n\n \n\n# method II of constructing the rhs:\nadvectionTerm2 = (DE2.f(time=n, y=j) - DE2.f(time=n, y = j-1)) / DE2.dy \n \nDE2.set_rhs(- b * advectionTerm2 ) # set the rhs of the Differential equation DE\n\nDE2.generate_amp_factor() # computing the amplification factor\n```\n\n\n```python\n# display the amplification factor\nDE2.display_amp_factor()\n```\n\n\n$\\displaystyle e^{\\Delta{t} \\alpha} = - \\frac{\\Delta{t} b}{\\Delta{y}} + \\frac{\\Delta{t} b e^{- i \\Delta{y} k_{1}}}{\\Delta{y}} + 1.0$\n\n\nThe difference equation for Backward in time and Upwind in space is\n\\begin{equation}\n\\frac{u_{i}^{n+1}-u_{i}^{n}}{\\Delta t}=-a\\left(\\frac{u_{i}^{n+1}-u_{i-1}^{n+1}}{\\Delta x}\\right).\n\\end{equation}\n\n\n```python\nfrom pymodpde import DifferentialEquation, symbols\n\n# define the advection velocity\na= symbols(\"a\")\n# define the spatial and temporal indices\ni, n = symbols(\"i n\")\n\n# construct a time dependent differential equation\nDE = DifferentialEquation(dependentVarName=\"u\",independentVarsNames=[\"x\"], indices=[i], timeIndex=n)\n\n# method I of constructing the rhs:\nadvectionTerm = -a * DE.expr(order=1, directionName=\"x\", time=n+1,stencil=[-1, 0])\n\n# setting the rhs of the differential equation\nDE.set_rhs( advectionTerm )\n\n# computing and displaying the modified equation up to two terms\nDE.generate_modified_equation(nterms=2)\n```\n\n\n```python\n# display the mofified equation\nDE.display_modified_equation()\n```\n\n\n$\\displaystyle \\frac{\\partial u }{\\partial t} = - a \\frac{\\partial u }{\\partial x} + \\frac{a \\left(\\Delta{t} a + \\Delta{x}\\right)}{2} \\frac{\\partial^{2} u }{\\partial x^{2}} $\n\n\n\n```python\n# display the amplification factor\nDE.display_amp_factor()\n```\n\n\n$\\displaystyle e^{\\Delta{t} \\alpha} = \\frac{\\Delta{x} e^{i \\Delta{x} k_{1}}}{\\Delta{t} a e^{i \\Delta{x} k_{1}} - \\Delta{t} a + \\Delta{x} e^{i \\Delta{x} k_{1}}}$\n\n\n## Example Using BTCS on Advection-Diffusion PDEs (1D)\n---\n\nConsider the following PDE\n\\begin{equation}\nu_t=-cu_x+\\gamma u_{xx}\n\\end{equation} \nto be solved 
using backward Euler discretization in time and central discretization in space(BTCS). The difference equation is\n$$\n\\frac{u_{i}^{n+1}-u_{i}^{n}}{\\Delta t}=-c\\left(\\frac{u_{i+1}^{n+1}-u_{i-1}^{n+1}}{2\\Delta x}\\right)+ \\gamma \\left(\\frac{u_{i+1}^{n+1}-2u_i^{n+1}+u_{i-1}^{n+1}}{\\Delta x^2}\\right).\n$$\n\n\n```python\nfrom pymodpde import DifferentialEquation, symbols, i, n # import the differential equation and the indicies we need to use\n\nc, g = symbols('c gamma') # define positive a constant for the advection velocity\n\nDE3 = DifferentialEquation(dependentVarName='u',independentVarsNames=['x'], indices=[i])# construct the first upwind derivative of the dependent variabe 'f' \n # with respect to the independent variable 'y' using the stencil -1, and 0 evaluated at time tn = n\n \n\n# method II of constructing the rhs:\nadvectionTerm3 = (DE3.u(time=n+1, x=i+1) - DE3.u(time=n+1, x=i-1)) / DE3.dx /2\n\ndiffusionTerm3 = (DE3.u(time=n+1, x=i+1) - 2* DE3.u(time=n+1, x=i) + DE3.u(time=n+1, x=i-1)) / DE3.dx / DE3.dx\n \nDE3.set_rhs(- c * advectionTerm3 ) # set the rhs of the Differential equation DE\n\nDE3.generate_modified_equation(nterms=3) # compute the modified equation up to 2 terms and display the latex form of the modified equation\n```\n\n\n```python\n# display the mofified equation\nDE3.display_modified_equation()\n```\n\n\n$\\displaystyle \\frac{\\partial u }{\\partial t} = - c \\frac{\\partial u }{\\partial x} + \\frac{\\Delta{t} c^{2}}{2} \\frac{\\partial^{2} u }{\\partial x^{2}} - \\frac{c \\left(2 \\Delta{t}^{2} c^{2} + \\Delta{x}^{2}\\right)}{6} \\frac{\\partial^{3} u }{\\partial x^{3}} $\n\n\n\n```python\n# display the amplification factor\nDE3.display_amp_factor()\n```\n\n\n$\\displaystyle e^{\\Delta{t} \\alpha} = \\frac{2.0 \\Delta{x} e^{i \\Delta{x} k_{1}}}{\\Delta{t} c e^{2.0 i \\Delta{x} k_{1}} - \\Delta{t} c + 2.0 \\Delta{x} e^{i \\Delta{x} k_{1}}}$\n\n\n## Example Using FTUS on Advection PDEs (2D)\n---\n\nConsider the advection equation in a two dimensional space\n\\begin{equation}\nu_{t}=-bu_{x}-cu_{y};\\quad(b,c)\\in\\mathbb{R}^{+}\n\\end{equation}\nto be discretized using a forward Euler method in time and UPWIND in space (FTUS)\n\\begin{equation}\n\\frac{u_{i,j}^{n+1}-u_{i,j}^{n}}{\\Delta t}=-b\\frac{u_{i,j}^{n}-u_{i-1,j}^{n}}{\\Delta x}-c\\frac{u_{i,j}^{n}-u_{i,j-1}^{n}}{\\Delta y},\n\\end{equation}\n\nwith a modified equation \n\n\n```python\nfrom pymodpde import DifferentialEquation, symbols, i, j, n # import the differential equation and the indicies we need to use\n\nb, c= symbols('b c') # define positive a constant for the advection velocity\n\nDE1 = DifferentialEquation(dependentVarName='u',independentVarsNames=['x','y']) # construct a one D differential equation with 'x' and 'u' as independent and dependent variables, respectively\n\n# method II of constructing the rhs:\nadvectionTerm1 = (DE1.u(time=n, x=i, y=j) - DE1.u(time=n, x=i-1, y=j))/DE1.dx\n\nadvectionTerm2 = (DE1.u(time=n, x=i, y=j) - DE1.u(time=n, x=i, y=j-1))/DE1.dy\n\n# set the rhs of the differential equation\nDE1.set_rhs( -b * advectionTerm1 - c * advectionTerm2 )\n\n# compute and the modified equation up to two terms\nDE1.generate_modified_equation(nterms=2)\n```\n\n\n```python\n# display the mofified equation\nDE1.display_modified_equation()\n```\n\n\n$\\displaystyle \\frac{\\partial u }{\\partial t} = - c \\frac{\\partial u }{\\partial y} - b \\frac{\\partial u }{\\partial x} + \\frac{c \\left(- \\Delta{t} c + \\Delta{y}\\right)}{2} \\frac{\\partial^{2} u }{\\partial y^{2}} - 
\\Delta{t} b c \\frac{\\partial^{2} u }{\\partial y\\partial x} + \\frac{b \\left(- \\Delta{t} b + \\Delta{x}\\right)}{2} \\frac{\\partial^{2} u }{\\partial x^{2}} $\n\n\n\n```python\n# display the amplification factor\nDE1.display_amp_factor()\n```\n\n\n$\\displaystyle e^{\\Delta{t} \\alpha} = - \\frac{\\Delta{t} c}{\\Delta{y}} + \\frac{\\Delta{t} c e^{- i \\Delta{y} k_{2}}}{\\Delta{y}} - \\frac{\\Delta{t} b}{\\Delta{x}} + \\frac{\\Delta{t} b e^{- i \\Delta{x} k_{1}}}{\\Delta{x}} + 1.0$\n\n\n### Numerical Solution\n\n\n```python\nnx = 64\nny = 64\nLx = 1.0\nLy = 1.0\nx = np.linspace(0,Lx,nx)\ny = np.linspace(0,Ly,ny)\ndx = Lx/(nx-1)\ndy = Ly/(ny-1)\n# create a grid of coordinates\nxx,yy = np.meshgrid(x,y)\n\nu0 = lambda x,y : np.exp(-(x-0.2)**2/0.01)*np.exp(-(y-0.2)**2/0.01)\n\ncx = 1.0\ncy = 1.0\n\ndt = 1/128\ntend = 0.5 #s\nt = 0\n\ncflx = cx * dt/dx\ncfly = cy * dt/dy\n\n# setup the initial condition\nsol = []\nu = np.zeros([ny+2,nx+2]) # we will ghost cells to simplify the implementation of periodic BCs\nu[1:-1, 1:-1] = u0(xx,yy)\n# set periodic boundaries\nu[:,0] = u[:,-3] #x-minus face\nu[:,-1] = u[:,2] #x-plus face\nu[0,:] = u[-3,:] #y-minus face\nu[-1,:] = u[2,:] #y-plus face\nsol.append(u)\n```\n\n\n```python\nwhile t < tend:\n un = sol[-1]\n unew = un.copy()\n unew[1:-1,1:-1] = un[1:-1,1:-1] - cflx * (un[1:-1,1:-1] - un[1:-1,:-2]) - cfly * (un[1:-1,1:-1] - un[:-2,1:-1])\n # set periodic boundaries\n unew[:,0] = unew[:,-3] #x-minus face\n unew[:,-1] = unew[:,2] #x-plus face\n unew[0,:] = unew[-3,:] #y-minus face\n unew[-1,:] = unew[2,:] #y-plus face\n\n sol.append(unew)\n t += dt\n```\n\n\n```python\nlevs = np.linspace(0,1,15)\nplt.figure(figsize=(10,8))\nplt.contourf(xx,yy,sol[-1][1:-1,1:-1]+sol[0][1:-1,1:-1],cmap=cm.viridis,levels=levs,vmax=1.0,vmin=0)\nplt.text(0.1, 0.4, 'time=0.0s', fontsize=12,family='serif')\nplt.text(0.6, 0.92,'time={}s'.format(tend), fontsize=12,family='serif')\nplt.xlabel('x', fontsize=14,family='serif')\nplt.ylabel('y', fontsize=14,family='serif')\ncbar = plt.colorbar()\nplt.clim(-1,1)\ncbar.set_ticks(np.linspace(-1,1,5))\nplt.savefig('./advection_2d.pdf',transparent=True)\nplt.show()\n```\n\n## Numerical Solver\n---\n\n\n```python\n%matplotlib notebook\nfrom numerical.layout import *\nplty=PlotStyling()\n```\n\n\n Tab(children=(VBox(children=(HBox(children=(FloatText(value=0.01, description='dt:'), FloatText(value=10.0, de\u2026\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "70e9d575efaa6f696cacbce9a54a3e60e19515c4", "size": 40594, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "examples.ipynb", "max_stars_repo_name": "mk-95/modified_equation_code_test", "max_stars_repo_head_hexsha": "6404f67506ed8133b39cd317d659b3180c2a584d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "examples.ipynb", "max_issues_repo_name": "mk-95/modified_equation_code_test", "max_issues_repo_head_hexsha": "6404f67506ed8133b39cd317d659b3180c2a584d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 11, "max_issues_repo_issues_event_min_datetime": "2020-01-14T20:05:42.000Z", "max_issues_repo_issues_event_max_datetime": "2020-01-22T17:26:35.000Z", "max_forks_repo_path": "examples.ipynb", "max_forks_repo_name": "mk-95/modified_equation_code_test", "max_forks_repo_head_hexsha": "6404f67506ed8133b39cd317d659b3180c2a584d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": 
"2020-01-14T19:52:13.000Z", "max_forks_repo_forks_event_max_datetime": "2020-01-14T19:52:13.000Z", "avg_line_length": 55.9917241379, "max_line_length": 19268, "alphanum_fraction": 0.7297876533, "converted": true, "num_tokens": 3956, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9372107966642556, "lm_q2_score": 0.9019206719160033, "lm_q1q2_score": 0.8452897914543581}} {"text": "# Eigenvalues of $A$ based on $a$\n\n\n```julia\nusing LinearAlgebra\nfunction A(a)\n eigvals([1 a; a 1])\nend\n\nA(1)\n```\n\n\n\n\n 2-element Array{Float64,1}:\n 0.0\n 2.0\n\n\n\n$\n\\begin{align}\ndet(A-\\lambda I) &= 0 \\\\\n(1-\\lambda)^2 - a^2 &= 0 \\\\\n\\lambda^2 - 2 \\lambda + (1- a^2) &=0 \\\\\n\\rightarrow \\lambda &= \\frac{1 \\pm \\sqrt{1 - 1 +a^2}}{1} \\\\\n&= 1 \\pm \\| a\\|\n\\end{align}\n$\n\n# SVD and Eigenvalue\n\n$k = \\frac{\\sigma_{max}}{\\sigma_{min}}$\n\n$a=1$ is making $A$ a singular matrix. \n\n# Quadratic form\n\n\n```julia\nusing Plots\n\nfunction quadratic(u)\n range = collect(0:0.01:2)\n outpt = zeros(size(range,1))\n for n in 1:size(range,1)\n a = range[n]\n y = u' * [1 a; a 1] * u\n outpt[n] = y\n end\n plot(range, outpt, xlabel=\"a\", ylabel=\"f(u)\"\n , title=\"u=\"*string(u), label=:false, linewidth=4)\nend\n\nquadratic([5, 2])\n```\n\n\n\n\n \n\n \n\n\n\nFor $a=1$, the output of $f(u)$ function is $(u[1]+u[2])^2$;\n\n## Adjourn\n\n\n```julia\nusing Dates\nprintln(\"mahdiar\")\nDates.format(now(), \"Y/U/d HH:MM\") \n```\n\n mahdiar\n\n\n\n\n\n \"2021/February/11 17:54\"\n\n\n", "meta": {"hexsha": "0dafad3ca1545f4b2c5049e3f223f6cd869a2d46", "size": 31802, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "HW03/2.ipynb", "max_stars_repo_name": "mahdiarsadeghi/NumericalAnalysis", "max_stars_repo_head_hexsha": "95a0914c06963b0510971388f006a6b2fc0c4ef9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "HW03/2.ipynb", "max_issues_repo_name": "mahdiarsadeghi/NumericalAnalysis", "max_issues_repo_head_hexsha": "95a0914c06963b0510971388f006a6b2fc0c4ef9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "HW03/2.ipynb", "max_forks_repo_name": "mahdiarsadeghi/NumericalAnalysis", "max_forks_repo_head_hexsha": "95a0914c06963b0510971388f006a6b2fc0c4ef9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 113.1743772242, "max_line_length": 11646, "alphanum_fraction": 0.6572857053, "converted": true, "num_tokens": 412, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9372107949104866, "lm_q2_score": 0.9019206705978501, "lm_q1q2_score": 0.8452897886372103}} {"text": "---\nauthor: Nathan Carter (ncarter@bentley.edu)\n---\n\nThis answer assumes you have imported SymPy as follows.\n\n\n```python\nfrom sympy import * # load all math functions\ninit_printing( use_latex='mathjax' ) # use pretty math output\n```\n\nThe following code tells SymPy that $x$ is a variable and that\n$y$ is a function of $x$. 
It then expresses $\\frac{dy}{dx}$ as the\nderivative of $y$ with respect to $x$.\n\n\n```python\nvar( 'x' ) # Let x be a variable.\ny = Function('y')(x) # Literally, y is a function, named y, based on x.\ndydx = Derivative( y, x ) # How to write dy/dx.\ndydx # Let's see how SymPy displays dy/dx.\n```\n\n\n\n\n$\\displaystyle \\frac{d}{d x} y{\\left(x \\right)}$\n\n\n\nLet's now write a very simple differential equation, $\\frac{dy}{dx}=y$.\n\nAs with how to do implicit differentiation, SymPy expects us to move everything\nto the left hand side of the equation. In this case, that makes the equation\n$\\frac{dy}{dx}-y=0$, and we will use just the left-hand side to express our ODE.\n\n\n```python\node = dydx - y\node\n```\n\n\n\n\n$\\displaystyle - y{\\left(x \\right)} + \\frac{d}{d x} y{\\left(x \\right)}$\n\n\n", "meta": {"hexsha": "309d94af3e3206f5ab7dcf4a346dd6b8e1cbd114", "size": 2897, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "database/tasks/How to write an ordinary differential equation/Python, using SymPy.ipynb", "max_stars_repo_name": "nathancarter/how2data", "max_stars_repo_head_hexsha": "7d4f2838661f7ce98deb1b8081470cec5671b03a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "database/tasks/How to write an ordinary differential equation/Python, using SymPy.ipynb", "max_issues_repo_name": "nathancarter/how2data", "max_issues_repo_head_hexsha": "7d4f2838661f7ce98deb1b8081470cec5671b03a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "database/tasks/How to write an ordinary differential equation/Python, using SymPy.ipynb", "max_forks_repo_name": "nathancarter/how2data", "max_forks_repo_head_hexsha": "7d4f2838661f7ce98deb1b8081470cec5671b03a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-07-18T19:01:29.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-29T06:47:11.000Z", "avg_line_length": 24.974137931, "max_line_length": 99, "alphanum_fraction": 0.5088022092, "converted": true, "num_tokens": 337, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9632305318133554, "lm_q2_score": 0.8774767794716264, "lm_q1q2_score": 0.845212424944325}} {"text": "# Kaczmarz Algorithm\n\nGiven $Ax = b$, iteratively solve with the update:\n\\begin{equation}\nx^{(k+1)} = x^{(k)} + \\frac{b_i - \\langle a_i , x^{(k)}\\rangle}{\\lvert a_i \\rvert^2}a_i^*\n\\end{equation}\nwhere $i = k \\mathrm{mod} m +1$\n\n\n```python\nimport numpy as np\nfrom numpy import random\nfrom numpy import linalg as LA\nfrom collections import OrderedDict\n\n\ndef relative_error_to_reference(x, x_ref):\n return LA.norm(x - x_ref) / LA.norm(x_ref)\n\n\ndef cg(A, x, b, tol=1.0e-08, max_it=200):\n # Iteration counter\n k = 0\n # Residual\n r = b - np.einsum('ij,j', A, x)\n # Search direction\n p = r\n while LA.norm(r) > tol:\n tmp = np.einsum('i,i', r, r)\n alpha = tmp / np.einsum('i,ij,j', p, A, p)\n x = x + alpha * p\n # Update residual\n r = r - alpha * np.einsum('ij,j', A, p)\n # Update search direction\n beta = np.einsum('i,i', r, r) / tmp\n p = r + beta * p\n #print('Iteration {}, Norm of residual {}'.format(k, LA.norm(r)))\n k += 1\n if k >= max_it:\n raise RuntimeError('Maximum number of iterations reached!')\n break\n return k, LA.norm(r), x\n\n \ndef kaczmarz(A, x, b, tol=1.0e-08, max_it=2000):\n m = A.shape[0]\n # Iteration counter\n k = 0\n # Residual\n r = b - np.einsum('ij,j', A, x)\n while LA.norm(r) > tol:\n i = k%m\n alpha = (b[i] - np.einsum('i,i', A[i,:], x)) / np.einsum('i,i', A[i,:], A[i,:])\n x = x + alpha * np.transpose(A[i,:])\n r = b - np.einsum('ij,j', A, x)\n #for i in range(m):\n # j = m-i-1\n # alpha = (b[j] - np.einsum('i,i', A[j, :], x)) / np.einsum('i,i', A[j, :], A[j, :])\n # x = x + alpha * np.conjugate(A[j, :])\n # r = b - np.einsum('ij,j', A, x)\n #print('Iteration {}, Norm of residual {}'.format(k, LA.norm(r)))\n k += 1\n if k >= max_it:\n raise RuntimeError('Maximum number of iterations reached!')\n break\n return k, LA.norm(r), x\n\n\ndef random_kaczmarz(A, x, b, tol=1.0e-08, max_it=20000):\n m = A.shape[0]\n # Iteration counter\n k = 0\n # Residual\n r = b - np.einsum('ij,j', A, x)\n # Random start row index\n i = random.randint(0, m)\n # 2-norm squared of row\n norm = np.einsum('i,i', A[i, :], A[i, :])\n while LA.norm(r) > tol:\n # Do a Kaczmarz sweep on row i\n alpha = (b[i] - np.einsum('i,i', A[i, :], x)) / np.einsum('i,i', A[i, :], A[i, :])\n x = x + alpha * np.conjugate(A[i, :])\n r = b - np.einsum('ij,j', A, x)\n #print('Iteration {}, Norm of residual {}'.format(k, LA.norm(r)))\n # Propose random index\n prop_i = random.randint(0,m)\n # 2-norm squared of row\n prop_norm = np.einsum('i,i', A[prop_i, :], A[prop_i, :])\n acceptance = min(1.0, prop_norm / norm)\n if random.uniform() < acceptance:\n i = prop_i\n norm = prop_norm\n k += 1\n if k >= max_it:\n raise RuntimeError('Maximum number of iterations reached!')\n break\n return k, LA.norm(r), x\n\ndim = 10\nM = np.random.randn(dim, dim)\n# Make sure our matrix is SPD\nA = 0.5 * (M + M.transpose())\nA = A * A.transpose()\nA += dim * np.eye(dim)\nb = np.random.rand(dim)\nx_ref = LA.solve(A, b)\n\nx_0 = np.zeros_like(b)\n\nC = np.arange(6).reshape(2,3)\nfor i in C:\n print(i)\n\nmethods = OrderedDict({\n 'Conjugate gradient' : cg, \n 'Kaczmarz' : kaczmarz,\n 'Random Kaczmarz' : random_kaczmarz\n })\n#reports = OrderedDict()\nfor k, v in methods.items():\n print('@ {:s} algorithm'.format(k))\n it, residual, x = v(A, x_0, b)\n rel_err = relative_error_to_reference(x, x_ref)\n print('{:s} converged in {:d} iterations with relative error to reference {:.5E}\\n'.format(k, 
it, rel_err))\n #reports.update({k: report})\n```\n\n [0 1 2]\n [3 4 5]\n @ Conjugate gradient algorithm\n Conjugate gradient converged in 9 iterations with relative error to reference 2.10003E-09\n \n @ Kaczmarz algorithm\n Kaczmarz converged in 174 iterations with relative error to reference 7.04794E-09\n \n @ Random Kaczmarz algorithm\n Random Kaczmarz converged in 480 iterations with relative error to reference 8.09928E-09\n \n\n\n\n```python\n\n```\n", "meta": {"hexsha": "29590c2f671e3e5baa8d4ec1b9f3bc5ede4a684e", "size": 6252, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Kaczmarz.ipynb", "max_stars_repo_name": "robertodr/solver", "max_stars_repo_head_hexsha": "dced957cb3f2aa8c1f3ab085d8445c6dd7d3d284", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2018-11-09T17:03:09.000Z", "max_stars_repo_stars_event_max_datetime": "2018-11-19T10:30:04.000Z", "max_issues_repo_path": "Kaczmarz.ipynb", "max_issues_repo_name": "robertodr/solver", "max_issues_repo_head_hexsha": "dced957cb3f2aa8c1f3ab085d8445c6dd7d3d284", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Kaczmarz.ipynb", "max_forks_repo_name": "robertodr/solver", "max_forks_repo_head_hexsha": "dced957cb3f2aa8c1f3ab085d8445c6dd7d3d284", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.0793650794, "max_line_length": 121, "alphanum_fraction": 0.4587332054, "converted": true, "num_tokens": 1385, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9525741308615412, "lm_q2_score": 0.8872045952083047, "lm_q1q2_score": 0.8451281461769163}} {"text": "## Optimal Charging Example\n\nWe have an electric storage device with state-of-charge (SOC) $q_t \\in \\mathbb{R}_+$ at time $t$ and capacity $Q \\in \\mathbb{R}_+$. We denote the amount of energy charged from time $t$ to time $t+1$ as $u_t \\in \\mathbb{R}$, i.e., $q_{t+1} = q_t + u_t$. Power is limited by $C \\in \\mathbb{R}_+$ ($D \\in \\mathbb{R}_+$), the maximum possible magnitude of charging (discharging) power. The energy price $P(u_t)$ is higher when buying energy from the grid compared to the case of selling energy to the grid. Specifically, \n\n\\begin{equation}\nP(u_i) = \\begin{cases}\np_t u_t (1+\\eta) \\quad &\\text{if} \\quad u_t > 0 \\\\\np_t u_t (1-\\eta) \\quad &\\text{otherwise},\n\\end{cases}\n\\end{equation}\n\nwhere $p_t \\in \\mathbb{R}_+$ is the average market price at time $t$ and $0 < \\eta < 1$. To optimize the cost of charging the energy storage from empty to full within a time period of length $T$, we solve the optimization problem\n\n\\begin{equation}\n\\begin{array}{ll}\n\\text{minimize} \\quad & \\sum_{t=0}^T p_t \\left(u_t + \\eta |u_t|\\right) + \\gamma u_t^2\\\\\n\\text{subject to} \\quad &q_{t+1} = q_t + u_t \\quad \\forall t \\in \\{0,...,T \\}\\\\\n&-D \\leq u_t \\leq C \\quad \\forall t \\in \\{0,...,T \\}\\\\\n&0 \\leq q_t \\leq Q \\quad \\forall t \\in \\{0,...,T \\}\\\\\n&q_0 = 0\\\\\n&q_{T+1} = Q,\n\\end{array}\n\\end{equation}\n\nwhere $u_t \\in \\mathbb{R}$ and $q_t \\in \\mathbb{R}_+$ are the variables. We have added the regularization term $\\gamma u_t^2$ to reduce stress on the electronic system due to peak power values, with $\\gamma \\in \\mathbb{R}_+$. 
We reformulate the problem to be [DPP-compliant](https://www.cvxpy.org/tutorial/advanced/index.html#disciplined-parametrized-programming) by introducing the parameter $s_t = p_t \\eta$ and we use time vectors $u \\in \\mathbb{R}^T$, $p, s \\in \\mathbb{R}_+^T$ and $q \\in \\mathbb{R}_+^{T+1}$ to summarize the temporal variables and parameters. Finally, we solve\n\n\\begin{equation}\n\\begin{array}{ll}\n\\text{minimize} \\quad & p^T u + s^T |u| + \\gamma \\Vert u \\Vert_2^2\\\\\n\\text{subject to} \\quad &q_{1:T+1} = q_{0:T} + u\\\\\n&-D \\mathbb{1} \\leq u \\leq C \\mathbb{1}\\\\\n&\\mathbb{0} \\leq q \\leq Q \\mathbb{1}\\\\\n&q_0 = 0\\\\\n&q_{T+1} = Q,\n\\end{array}\n\\end{equation}\n\nwhere $|u|$ is the element-wise absolute value of $u$. Let's define the corresponding CVXPY problem. To model a one-day period with a resolution of one minute, we choose $T=24 \\cdot 60 = 1440$.\n\n\n```python\nimport cvxpy as cp\nimport numpy as np\n\n# define dimension\nT = 1440\n\n# define variables\nu = cp.Variable(T, name='u')\nq = cp.Variable(T+1, name='q')\n\n# define parameters\np = cp.Parameter(T, nonneg=True, name='p')\ns = cp.Parameter(T, nonneg=True, name='s')\nD = cp.Parameter(nonneg=True, name='D')\nC = cp.Parameter(nonneg=True, name='C')\nQ = cp.Parameter(nonneg=True, name='Q')\ngamma = cp.Parameter(nonneg=True, name='gamma')\n\n# define objective\nobjective = cp.Minimize(p@u + s@cp.abs(u) + gamma*cp.sum_squares(u))\n\n# define constraints\nconstraints = [q[1:] == q[:-1] + u,\n -D <= u, u<= C,\n 0 <= q, q <= Q,\n q[0] == 0, q[-1] == Q]\n\n# define problem\nproblem = cp.Problem(objective, constraints)\n```\n\nAssign parameter values and solve the problem. The one-day period starts at 2pm with a medium energy price level until 5pm, high price level from 5pm to midnight and low prices otherwise.\n\n\n```python\nimport matplotlib.pyplot as plt\n\np.value = np.concatenate((3*np.ones(3*60),\n 5*np.ones(7*60),\n 1*np.ones(14*60)), axis=0)\neta = 0.1\ns.value = eta*p.value\nQ.value = 1\nC.value = 3*Q.value/(24*60)\nD.value = 2*C.value\ngamma.value = 100\n\nval = problem.solve()\n\nfig, ax1 = plt.subplots()\n\nax1.plot(100*q.value, color='b')\nax1.grid()\nax1.set_xlabel('Time [min]')\nax1.set_ylabel('SOC [%]', color='b')\nax1.tick_params(axis='y', labelcolor='b')\n\nax2 = ax1.twinx()\nax2.plot(100*p.value / max(p.value), color='m')\nax2.set_ylabel('Price Level [%]', color='m')\nax2.tick_params(axis='y', labelcolor='m')\n```\n\nWe observe that it is optimal to charge the storage with maximum power during the medium price phase, then empty the storage when prices are highest, and then fully charge the storage for the lowest price of the day. 
Generating C source for the problem is as easy as:\n\n\n```python\nfrom cvxpygen import cpg\n\ncpg.generate_code(problem, code_dir='charging_code')\n```\n\nNow, you can use a python wrapper around the generated code as a custom CVXPY solve method.\n\n\n```python\nfrom charging_code.cpg_solver import cpg_solve\nimport numpy as np\nimport pickle\nimport time\n\n# load the serialized problem formulation\nwith open('charging_code/problem.pickle', 'rb') as f:\n prob = pickle.load(f)\n\n# assign parameter values\nprob.param_dict['p'].value = np.concatenate((3*np.ones(3*60),\n 5*np.ones(7*60),\n 1*np.ones(14*60)), axis=0)\neta = 0.1\nprob.param_dict['s'].value = eta*prob.param_dict['p'].value\nprob.param_dict['Q'].value = 1\nprob.param_dict['C'].value = 5*prob.param_dict['Q'].value/(24*60)\nprob.param_dict['D'].value = 2*prob.param_dict['C'].value\n\n# solve problem conventionally\nt0 = time.time()\n# CVXPY chooses eps_abs=eps_rel=1e-5, max_iter=10000, polish=True by default,\n# however, we choose the OSQP default values here, as they are used for code generation as well\nval = prob.solve(solver='OSQP', eps_abs=1e-3, eps_rel=1e-3, max_iter=4000, polish=False)\nt1 = time.time()\nprint('\\nCVXPY\\nSolve time: %.3f ms' % (1000 * (t1 - t0)))\nprint('Objective function value: %.6f\\n' % val)\n\n# solve problem with C code via python wrapper\nprob.register_solve('CPG', cpg_solve)\nt0 = time.time()\nval = prob.solve(method='CPG')\nt1 = time.time()\nprint('\\nCVXPYgen\\nSolve time: %.3f ms' % (1000 * (t1 - t0)))\nprint('Objective function value: %.6f\\n' % val)\n```\n\n\\[1\\] Wang, Yang, Brendan O'Donoghue, and Stephen Boyd. \"Approximate dynamic programming via iterated Bellman inequalities.\" International Journal of Robust and Nonlinear Control 25.10 (2015): 1472-1496.\n", "meta": {"hexsha": "ad3fba7c9deec803fa5c41b05300e67c5ce1764b", "size": 8539, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "examples/charging.ipynb", "max_stars_repo_name": "cvxgrp/cvxpygen", "max_stars_repo_head_hexsha": "5aaacdb894354288e2f12a16ca738248c69181c4", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 32, "max_stars_repo_stars_event_min_datetime": "2022-02-25T03:30:06.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-30T16:24:40.000Z", "max_issues_repo_path": "examples/charging.ipynb", "max_issues_repo_name": "cvxgrp/cvxpygen", "max_issues_repo_head_hexsha": "5aaacdb894354288e2f12a16ca738248c69181c4", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "examples/charging.ipynb", "max_forks_repo_name": "cvxgrp/cvxpygen", "max_forks_repo_head_hexsha": "5aaacdb894354288e2f12a16ca738248c69181c4", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.2914798206, "max_line_length": 606, "alphanum_fraction": 0.5533434828, "converted": true, "num_tokens": 1906, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9525741254760639, "lm_q2_score": 0.8872045996818986, "lm_q1q2_score": 0.8451281456603259}} {"text": "# Linear Algebra\n\n## 1.1 Graph vector $\\vec{a}$ \n\n\\begin{align}\n\\vec{a} = \\begin{bmatrix} 3 \\\\ 2 \\end{bmatrix}\n\\end{align}\n\n\n```python\nimport matplotlib.pyplot as plt\n\nplt.arrow(0, 0, 3, 2, width=0.04)\nplt.xlim(0,4)\nplt.ylim(0,4);\n```\n\n## 1.2 Find $||\\vec{b}||$. What does the norm of a vector represent?\n\\begin{align}\n\\vec{b} = \\begin{bmatrix} 17 & -4 & -2 & 1\\end{bmatrix}\n\\end{align}\n\n\n\n```python\nimport numpy as np\n\nnorm = np.linalg.norm(np.array([17, -4, -2, 1]))\nprint(norm)\n```\n\n 17.60681686165901\n\n\nNorm represents the magnitude/factor/distance of a vector.\n\n## 1.3 Find $\\vec{c} \\cdot \\vec{d}$\n\n\\begin{align}\n\\vec{c} = \\begin{bmatrix}3 & 7 & -2 & 12\\end{bmatrix}\n\\qquad\n\\vec{d} = \\begin{bmatrix}9 & -7 & 4 & 6\\end{bmatrix}\n\\end{align}\n\n\n```python\nc = np.array([3, 7, -2, 12])\nd = np.array([9, -7, 4, 6])\n\nprint(np.dot(c, d))\n```\n\n 42\n\n\n## 1.4 Find $E^{-1}$ and $E^{T}$\n\n\\begin{align}\nE = \n\\begin{bmatrix}\n 7 & 4 & 2 \\\\\n 1 & 3 & -1 \\\\\n 2 & 6 & -4\n\\end{bmatrix}\n\\end{align}\n\n\n```python\nE = np.array([[7, 4, 2],\n [1, 3, -1],\n [2, 6, -4]])\n\nprint(np.linalg.inv(E))\nprint(E.T)\n```\n\n [[ 0.17647059 -0.82352941 0.29411765]\n [-0.05882353 0.94117647 -0.26470588]\n [ 0. 1. -0.5 ]]\n [[ 7 1 2]\n [ 4 3 6]\n [ 2 -1 -4]]\n\n\n# Intermediate Linear Algebra\n\n## 2.1 Suppose that the number of customers at a ski resort as well as the number of inches of fresh powder (snow) was recorded for 7 days. \n\n### Customers: [820, 760, 1250, 990, 1080, 1450, 1600]\n\n### Inches of new snow: [0, 1, 7, 1, 0, 6, 4 ]\n\n## Find the mean, variance, and standard deviation for both the number of customers and inches of new snow for the week. You may use library functions, dataframes, .describe(), etc. \n\n\n\n\n```python\nimport pandas as pd\n\ncustomers = [820, 760, 1250, 990, 1080, 1450, 1600]\nsnow = [0, 1, 7, 1, 0, 6, 4]\n\ndf = pd.DataFrame({'customers': customers, 'snow': snow})\n\ndf.head()\n```\n\n\n\n\n
       customers  snow\n    0        820     0\n    1        760     1\n    2       1250     7\n    3        990     1\n    4       1080     0
\n\n\n\n\n```python\ndf.mean()\n```\n\n\n\n\n customers 1135.714286\n snow 2.714286\n dtype: float64\n\n\n\n\n```python\ndf.std(ddof=0)\n```\n\n\n\n\n customers 290.951991\n snow 2.710524\n dtype: float64\n\n\n\n\n```python\ndf.var(ddof=0)\n```\n\n\n\n\n customers 84653.061224\n snow 7.346939\n dtype: float64\n\n\n\n## 2.2 Are the variances of the number of customers and inches of snow comparable? \n## Why or why not? \n\nNo, customers has a much greater variance.\n\n## 2.3 Find the variance-covariance matrix for the number of customers and inches of snow at the ski resort. \n\n\n```python\ndf.cov()\n```\n\n\n\n\n
                  customers        snow\n    customers  98761.904762  670.238095\n    snow         670.238095    8.571429
\n\n\n\n# PCA\n\n## 3.1 Standardize the data so that it has a mean of 0 and a standard deviation of 1. (You may use library functions)\n\nWe have included some code to get you started so that you don't get stuck on something that isn't standardizing the data or PCA.\n\nThis might be helpful:\n\n\n\n\n```python\n# Let me get you some data to start you off.\nimport pandas as pd\n\ndata = {\"Country\": [\"England\",\"Wales\",\"Scotland\",\"North Ireland\"], \n \"Cheese\": [105,103,103,66], \n \"Carcass_Meat\": [245,227,242,267], \n \"Other_Meat\": [685, 803, 750, 586], \n \"Fish\": [147, 160, 122, 93], \n \"Fats_and_Oils\": [193, 235, 184, 209], \n \"Sugars\": [156, 175, 147, 139], \n \"Fresh_Potatoes\": [720, 874, 566, 1033], \n \"Fresh_Veg\": [253, 265, 171, 143], \n \"Other_Veg\": [488, 570, 418, 355], \n \"Processed_Potatoes\": [198, 203, 220, 187], \n \"Processed_Veg\": [360, 365, 337, 334], \n \"Fresh_Fruit\": [1102, 1137, 957, 674], \n \"Cereals\": [1472, 1582, 1462, 1494], \n \"Beverages\": [57,73,53,47], \n \"Soft_Drinks\": [1374, 1256, 1572, 1506], \n \"Alcoholic Drinks\": [375, 475, 458, 135], \n \"Confectionery\": [54, 64, 62, 41]}\n\ndf = pd.DataFrame(data)\n\n# Look at the data\ndf.head()\n```\n\n\n\n\n
    [output: a DataFrame with 4 rows (England, Wales, Scotland, North Ireland) and the 18 columns defined in the data dictionary above]
\n\n\n\n\n```python\nfrom sklearn.preprocessing import StandardScaler\n\ndf_zstand = StandardScaler().fit_transform(df.select_dtypes('number'))\n```\n\n## 3.2 Perform PCA on the data and graph Principal Component 1 against Principal Component 2. (You may use library functions)\n\nThis might be helpful:\n\n\n\n\n```python\nfrom sklearn.decomposition import PCA\n\npca = PCA(2).fit_transform(df_zstand)\n\nprint(pca)\n\nplt.scatter(x=[_[0] for _ in pca], y=[_[1] for _ in pca]);\n```\n\n# Clustering\n\n## 4.1 Use K-Means to cluster the following data and then graph your results. (You may use library functions)\n\nWe have included some code to get you started so that you don't get stuck on something that isn't standardizing clustering.\n\nPrioritize calculating the clusters over graphing them. \n\nScikit-Learn K-Means Documentation:\n\n\n\n\n```python\npoints = pd.read_csv('https://raw.githubusercontent.com/ryanleeallred/datasets/master/points.csv')\npoints.head()\n```\n\n\n\n\n
              x         y\n    0 -7.846803 -3.421277\n    1 -3.554323 -6.884729\n    2 -0.192822 -9.671030\n    3 -6.401456 -5.223972\n    4 -0.804026 -9.704457
\n\n\n\n\n```python\nfrom sklearn.cluster import KMeans\n\nsum_of_squared_distances = []\nxs = range(1,15)\nfor x in xs:\n km = KMeans(n_clusters=x)\n km = km.fit(points)\n sum_of_squared_distances.append(km.inertia_)\n```\n\n\n```python\nplt.plot(list(range(1,15)), sum_of_squared_distances);\n```\n\n\n```python\nlabels = KMeans(4).fit(points).labels_\n```\n\n\n```python\nfig, ax = plt.subplots()\nax.scatter(x='x', y='y', data=points, c=labels)\nax.set_aspect('equal');\n```\n", "meta": {"hexsha": "ff213f481f1a3ececd598237ed9df6a52e13aced", "size": 77403, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "05-Linear-Algebra/05_Sprint_Challenge.ipynb", "max_stars_repo_name": "shalevy1/data-science-journal", "max_stars_repo_head_hexsha": "2a6beaf5bf328e257b638a695983457a9f3cd7ce", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 71, "max_stars_repo_stars_event_min_datetime": "2019-03-05T04:44:48.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-24T09:47:48.000Z", "max_issues_repo_path": "05-Linear-Algebra/05_Sprint_Challenge.ipynb", "max_issues_repo_name": "pesobreiro/data-science-journal", "max_issues_repo_head_hexsha": "82a72b4ed5ce380988fac17b0acd97254c2b5c86", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "05-Linear-Algebra/05_Sprint_Challenge.ipynb", "max_forks_repo_name": "pesobreiro/data-science-journal", "max_forks_repo_head_hexsha": "82a72b4ed5ce380988fac17b0acd97254c2b5c86", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 37, "max_forks_repo_forks_event_min_datetime": "2019-03-07T05:08:03.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-05T11:32:51.000Z", "avg_line_length": 78.3431174089, "max_line_length": 24128, "alphanum_fraction": 0.7893880082, "converted": true, "num_tokens": 3576, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.907312226373181, "lm_q2_score": 0.931462514578343, "lm_q1q2_score": 0.8451273278852379}} {"text": "```python\n%matplotlib inline\n```\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n```\n\n# Motivation\n\nWe can view pretty much all of **machine learning (ML)** (and this is one of many possible views) as an **optimisation** exercise. Our challenge in supervised learning is to find a function that maps the inputs of a certain system to its outputs. Since we don't have direct access to that function, we have to estimate it. We aim to find the *best* possible estimate. Whenever we use the word \"best\" in mathematics, we imply some kind of optimisation. Thus we either maximise some **performance function**, which increases for better estimates, or minimise some **loss function**, which decreases for better estimates. In general, we refer to the function that we optimise as the **objective function**.\n\nThere are elements of both science and art in the choice of performance/loss functions. For now let us focus on optimisation itself.\n\n# Univariate functions\n\nFrom school many of us remember how to optimise functions of a single scalar variable — **univariate** functions, such as, for example,\n$$f(x) = -2x^2 + 6x + 9.$$\n\nIn Python we would define this function as\n\n\n```python\ndef func(x): return -2. * x**2 + 6. 
* x + 9.\n```\n\nSo we can pass values of $x$ to it as arguments and obtain the corresponding values $f(x)$ as the function's return value:\n\n\n```python\nfunc(0.)\n```\n\n\n\n\n 9.0\n\n\n\nWhenever we are dealing with functions, it is always a good idea to visually examine their graphs:\n\n\n```python\nxs = np.linspace(-10., 10., 100)\nfs = [func(x) for x in xs]\nplt.plot(xs, fs, 'o');\n```\n\nUnsurprisingly (if we remember high school mathematics), the graph of our univariate **quadratic** (because the highest power of $x$ in it comes as $x^2$) function is a **parabola**. We are lucky: this function is **concave** — if we join any two points on its graph, the straight line joining them will always lie below the graph. For such functions we can usually find the **global optimum** (**minimum** or **maximum**, in this case the function has a single **global maximum**).\n\n# Global versus local optima\n\nWe say **global** optimum, because a function may have multiple optima. All of them are called **local** optima, but only the largest maxima (the smallest minima) are referred to as **global**.\n\nConsider the function\n$$f(x) = x \\cos(x).$$\nIt has numerous local minima and local maxima over $x \\in \\mathbb{R}$, but no global minimum/maximum:\n\n\n```python\nxs = np.linspace(-100., 100., 1000)\nfs = xs * np.cos(xs)\nplt.plot(xs, fs);\n```\n\nNow consider the function\n$$f(x) = \\frac{1}{x} \\sin(x).$$\nIt has a single global maximum, two global minima, and infinitely many local maxima and minima.\n\n\n```python\nxs = np.linspace(-100., 100., 1000)\nfs = (1./xs) * np.sin(xs)\nplt.plot(xs, fs);\n```\n\n# High school optimisation\n\nMany of us remember from school this method of optimising functions. For our function, say\n$$f(x) = -2x^2 + 6x + 9,$$\nfind the function's derivative. If we forgot how to differentiate functions, we can look up the rules of differentiation, say, on Wikipedia. In our example, differentiation is straightforward, and yields\n$$\\frac{d}{dx}f(x) = -4x + 6.$$\n\nHowever, if we have completely forgotten the rules of differentiation, one particular Python library — the one for doing symbolic maths — comes in useful:\n\n\n```python\nimport sympy\nx = sympy.symbols('x')\nfunc_diff = sympy.diff(-2. * x**2 + 6. * x + 9, x)\nfunc_diff\n```\n\n\n\n\n -4.0*x + 6.0\n\n\n\nOur next step is to find such $x$ (we'll call it $x_{\\text{max}}$), at which this derivative becomes zero. This notation is somewhat misleading, because it is $f(x_{\\text{max}})$ that is maximum, not $x_{\\text{max}}$ itself; $x_{\\text{max}}$ is the *location* of the function's maximum:\n$$\\frac{d}{dx}f(x_{\\text{max}}) = 0,$$\ni.e.\n$$-4x_{\\text{max}} + 6 = 0.$$\nHence the solution is\n\n$$x_{\\text{max}} = -6 / (-4) = 3/2 = 1.5$$\n\nWe could also use SymPy to solve the above equation:\n\n\n```python\nroots = sympy.solve(func_diff, x)\nroots\n```\n\n\n\n\n [1.50000000000000]\n\n\n\n\n```python\nx_max = roots[0]\n```\n\nIn order to check that the value is indeed a local maximum and not a local minimum (and not a **saddle point**, look them up), we look at the second derivative of the function,\n$$\\frac{d^2}{dx^2}f(x_{\\text{max}}) = -4.$$\nSince this second derivative is negative at $x_{\\text{max}}$, we are indeed looking at an (at least local) maximum. In this case we are lucky: this is also a global maximum. However, in general, it isn't easy to check mathematically whether an optimum global or not. 
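The local second-derivative test, at least, is easy to automate. Here is a minimal sketch that simply reuses the symbol `x` and the SymPy expression `func_diff` defined above:\n\n\n```python\n# Differentiate the first derivative once more; for this parabola the result is the constant -4.0,\n# which is negative and therefore confirms a (local) maximum at x_max = 1.5\nsympy.diff(func_diff, x)\n```\n\nVerifying that a local optimum is also the global one is far harder.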
This is one of the major challenges in optimisation.\n\nLet us now find the value of the function at the maximum by plugging in $x_{\\text{max}}$ into $f$:\n\n$$f_{\\text{max}} = f(x_{\\text{max}}) = -2 x_{\\text{max}}^2 + 6 x_{\\text{max}} + 9 = -2 \\cdot 1.5^2 + 6 \\cdot 1.5 + 9 = 13.5.$$\n\n\n```python\nf_max = func(x_max)\nf_max\n```\n\n\n\n\n 13.5000000000000\n\n\n\nLet us label this maximum on the function's graph:\n\n\n```python\nxs = np.linspace(-10., 10., 100)\nfs = [func(x) for x in xs]\nplt.plot(xs, fs, 'o')\nplt.plot(x_max, f_max, 'o', color='red')\nplt.axvline(x_max, color='red')\nplt.axhline(f_max, color='red');\n```\n\n# Multivariate functions\n\nSo far we have considered the optimisation of **real-valued** functions of a single real variable, i.e. $f: \\mathbb{R} \\rightarrow \\mathbb{R}$.\n\nHowever, most functions that we encounter in data science and machine learning are **multivariate**, i.e. $f: \\mathbb{R}^n \\rightarrow \\mathbb{R}$. Moreover, some are also **multivalued**, i.e. $f: \\mathbb{R}^n \\rightarrow \\mathbb{R}^m$.\n\n(Note: univariate/multivariate refers to the function's argument, whereas single-valued/multi-valued to the function's output.)\n\nConsider, for example, the following single-valued, multivariate function:\n$$f(x_1, x_2) = -x_1^2 - x_2^2 + 6x_1 + 3x_2 + 9.$$\n\nWe could define it in Python as\n\n\n```python\ndef func(x1, x2): return -x1**2 - x2**2 + 6.*x1 + 3.*x2 + 9.\n```\n\nLet's plot its graph. First, we need to compute the values of the function on a two-dimensional mesh grid:\n\n\n```python\nx1s, x2s = np.meshgrid(np.linspace(-100., 100., 100), np.linspace(-100., 100., 100))\nfs = func(x1s, x2s)\nnp.shape(fs)\n```\n\n\n\n\n (100, 100)\n\n\n\nThen we can use the following code to produce a 3D plot:\n\n\n```python\nfrom mpl_toolkits.mplot3d import Axes3D\nfig = plt.figure()\nax = fig.add_subplot(111, projection='3d')\nax.contour3D(x1s, x2s, fs, 50);\n```\n\nIt may be more convenient to implement multivariate functions as functions of a single vector (more precisely, rank-1 NumPy array) in Python:\n\n\n```python\ndef func(x): return -x[0]**2 - x[1]**2 + 6.*x[0] + 3.*x[1] + 9.\n```\n\n# Optimising multivariate functions analytically\n\nThe analytical method of finding the optimum of a multivariate function is similar to that for univariate functions. As the function has mutliple arguments, we need to find its so-called **partial derivative** with respect to each argument. They are computed similarly to normal derivatives, while pretending that all the other arguments are constants:\n$$\\frac{\\partial}{\\partial x_1} f(x_1, x_2) = -2x_1 + 6,$$\n$$\\frac{\\partial}{\\partial x_2} f(x_1, x_2) = -2x_2 + 3.$$\n\nWe call the vector of the function's partial derivatives its **gradient** vector, or **grad**:\n$$\\nabla f(x_1, x_2) = \\begin{pmatrix} \\frac{\\partial}{\\partial x_1} f(x_1, x_2) \\\\ \\frac{\\partial}{\\partial x_2} f(x_1, x_2) \\end{pmatrix}.$$\n\nWhen the function is continuous and differentiable, all the partial derivatives will be 0 at a local maximum or minimum point. 
Saying that all the partial derivatives are zero at a point, $(x_1^*, x_2^*)$, is the same as saying the gradient at that point is the zero vector:\n$$\\nabla f(x_1^*, x_2^*) = \\begin{pmatrix} \\frac{\\partial}{\\partial x_1} f(x_1^*, x_2^*) \\\\ \\frac{\\partial}{\\partial x_2} f(x_1^*, x_2^*) \\end{pmatrix} = \\begin{pmatrix} 0 \\\\ 0 \\end{pmatrix} = \\mathbf{0}.$$\n\nIn our example, we can easily establish that the gradient vector is zero at $x_1^* = 3$, $x_2^* = 1.5$. And the maximum value that is achieved is\n\n\n```python\nfunc([3, 1.5])\n```\n\n\n\n\n 20.25\n\n\n\n\n```python\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\nfig = plt.figure()\nax = fig.add_subplot(111, projection='3d')\nax.contour3D(x1s, x2s, fs, 50)\nax.plot([3], [1.5], [20.25], 'o', color='red', markersize=20);\n```\n\n# The Jacobian\n\nNotice that, for multivalued (not just multivariate) functions, $\\mathbb{R}^n \\rightarrow \\mathbb{R}^m$, the **gradient** vector of partial derivatives generalises to the **Jacobian** matrix:\n$$\\mathbf{J} = \\begin{pmatrix} \\frac{\\partial f_1}{\\partial x_1} & \\frac{\\partial f_1}{\\partial x_2} & \\cdots & \\frac{\\partial f_1}{\\partial x_n} \\\\ \\vdots & \\vdots & \\ddots & \\vdots \\\\ \\frac{\\partial f_m}{\\partial x_1} & \\frac{\\partial f_m}{\\partial x_2} & \\cdots & \\frac{\\partial f_m}{\\partial x_n} \\end{pmatrix}.$$\n\n# Newton-Raphson's method\n\n**Newton-Raphson's method** is a numerical procedure for finding zeros (**roots**) of functions.\n\nFor example, consider again the function\n$$f(x) = -2x^2 + 6x + 9.$$\n\n\n```python\ndef func(x): return -2. * x**2 + 6. * x + 9.\n```\n\nWe have already found that its derivative is given by\n$$\\frac{df}{dx}(x) = -4x + 6.$$\n\n\n```python\ndef func_diff(x): return -4. * x + 6.\n```\n\nThe Newton-Raphson method starts with some initial guess, $x_0$, and then proceeds iteratively:\n\n$$x_{n+1} = x_n - \\frac{f(x_n)}{\\frac{d}{dx}f(x_n)}$$\n\nLet's code it up:\n\n\n```python\ndef newton_raphson_method(f, fdiff, x0, iter_count=10):\n x = x0\n print('x_0', x0)\n for i in range(iter_count):\n x = x - f(x) / fdiff(x)\n print('x_%d' % (i+1), x)\n return x\n```\n\nNow let's apply it to our function:\n\n\n```python\nnewton_raphson_method(func, func_diff, -5.)\n```\n\n x_0 -5.0\n x_1 -2.269230769230769\n x_2 -1.280023547880691\n x_3 -1.1040302676228468\n x_4 -1.0980830182607666\n x_5 -1.0980762113622329\n x_6 -1.098076211353316\n x_7 -1.098076211353316\n x_8 -1.098076211353316\n x_9 -1.098076211353316\n x_10 -1.098076211353316\n\n\n\n\n\n -1.098076211353316\n\n\n\nWe see that the method converges quite quickly to (one of the) roots. Notice that, which of the two roots we converge to depends on the initial guess:\n\n\n```python\nnewton_raphson_method(func, func_diff, x0=5.)\n```\n\n x_0 5.0\n x_1 4.214285714285714\n x_2 4.1005639097744355\n x_3 4.098077401218985\n x_4 4.098076211353589\n x_5 4.098076211353316\n x_6 4.098076211353316\n x_7 4.098076211353316\n x_8 4.098076211353316\n x_9 4.098076211353316\n x_10 4.098076211353316\n\n\n\n\n\n 4.098076211353316\n\n\n\n**Newton-Raphson** is a **root finding**, not an **optimisation**, algorithm. However, recall that optimisation is equivalent to finding the root of the derivative function. 
Thus we can apply this algorithm to the derivative function (we also need to provide the second derivative function) to find a local optimum of the function:\n\n\n```python\ndef func_diff2(x): return -4.\n```\n\n\n```python\nnewton_raphson_method(func_diff, func_diff2, -5.)\n```\n\n x_0 -5.0\n x_1 1.5\n x_2 1.5\n x_3 1.5\n x_4 1.5\n x_5 1.5\n x_6 1.5\n x_7 1.5\n x_8 1.5\n x_9 1.5\n x_10 1.5\n\n\n\n\n\n 1.5\n\n\n\nThe result is consistent with our analytical solution.\n\n# Newton's method for multivariate functions\n\nNewton's method can be generalised to mutlivariate functions. For multivalued multivariate functions $f: \\mathbb{R}^k \\rightarrow \\mathbb{R}^k$, the method becomes\n$$x_{n+1} = x_n - \\mathbf{J}(x_n)^{-1} f(x_n),$$\nwhere $\\mathbf{J}$ is the Jacobian.\n\nSince inverses are only defined for square matrices, for functions $f: \\mathbb{R}^k \\rightarrow \\mathbb{R}^m$, we use the Moore-Penrose pseudoinverse $\\mathbf{J}^+ = (\\mathbf{J}^T \\mathbf{J})^{-1} \\mathbf{J}^T$ instead of $\\mathbf{J}^{-1}$. Let's code this up.\n\nInside our generalised implementation of Newton-Raphson, we'll be working with vectors. It's probably a good idea to assume that the function and the Jacobian return rank-2 NumPy arrays.\n\nHowever, one may have coded up the function as\n\n\n```python\ndef func(x): return -x[0]**2 - x[1]**2 + 6.*x[0] + 3.*x[1] + 9.\n```\n\nand the Jacobian as\n\n\n```python\ndef func_diff(x): return np.array([-2.*x[0] + 6., -2.*x[1] + 3.])\n```\n\nLet's see how we can convert NumPy stuff to rank-2 arrays. For rank-1 arrays:\n\n\n```python\na = np.array([3., 5., 7.])\nnp.reshape(a, (np.shape(a)[0], -1))\n```\n\n\n\n\n array([[ 3.],\n [ 5.],\n [ 7.]])\n\n\n\nif we want a column (rather than row) vector, which is probably a sensible default. If we wanted a row vector, we could do\n\n\n```python\nnp.reshape(a, (-1, np.shape(a)[0]))\n```\n\n\n\n\n array([[ 3., 5., 7.]])\n\n\n\nExisting rank-2 arrays remain unchanged by this:\n\n\n```python\na = np.array([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]])\nnp.reshape(a, (np.shape(a)[0], -1))\n```\n\n\n\n\n array([[ 1., 2., 3.],\n [ 4., 5., 6.],\n [ 7., 8., 9.]])\n\n\n\n\n```python\nnp.reshape(a, (-1, np.shape(a)[0]))\n```\n\n\n\n\n array([[ 1., 2., 3.],\n [ 4., 5., 6.],\n [ 7., 8., 9.]])\n\n\n\nFor scalars, `np.shape(a)[0]` won't work, as their shape is `()`, so we need to do something special. 
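A quick illustration of that corner case (plain NumPy, with an arbitrary example value):\n\n\n```python\na = 5.\nprint(np.shape(a))             # prints (), an empty tuple, so np.shape(a)[0] would raise an IndexError\nprint(np.reshape(a, (1, -1)))  # prints [[5.]], reshaping works once we supply an explicit size of 1\n```\n\n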
Based on this information, let us implement the auxiliary function `to_rank_2`:\n\n\n```python\ndef to_rank_2(arg, row_vector=False):\n shape = np.shape(arg)\n size = 1 if len(shape) == 0 else shape[0]\n new_shape = (-1, size) if row_vector else (size, -1)\n return np.reshape(arg, new_shape)\n```\n\nAnd test it:\n\n\n```python\nto_rank_2(5.)\n```\n\n\n\n\n array([[ 5.]])\n\n\n\n\n```python\nto_rank_2([1., 2., 3.])\n```\n\n\n\n\n array([[ 1.],\n [ 2.],\n [ 3.]])\n\n\n\n\n```python\nto_rank_2([[1.], [2.], [3.]])\n```\n\n\n\n\n array([[ 1.],\n [ 2.],\n [ 3.]])\n\n\n\n\n```python\nto_rank_2([[1., 2., 3.]])\n```\n\n\n\n\n array([[ 1., 2., 3.]])\n\n\n\n\n```python\nto_rank_2([[1., 2., 3], [4., 5., 6.]])\n```\n\n\n\n\n array([[ 1., 2., 3.],\n [ 4., 5., 6.]])\n\n\n\nNow let's generalise our implementation of the Newton-Raphson method:\n\n\n```python\ndef newton_raphson_method(f, fdiff, x0, iter_count=10):\n x = to_rank_2(x0)\n for i in range(iter_count):\n f_x = to_rank_2(f(x))\n fdiff_x = to_rank_2(fdiff(x), row_vector=True)\n non_square_jacobian_inv = np.dot(np.linalg.inv(np.dot(fdiff_x.T, fdiff_x)), fdiff_x.T)\n x = x - np.dot(non_square_jacobian_inv, f_x)\n print('x_%d' % (i+1), x)\n return x\n```\n\n\n```python\nnewton_raphson_method(func, func_diff, np.array([-10., -10.]), iter_count=5)\n```\n\n x_1 [[-80.25 ]\n [ 25.125]]\n x_2 [[-80.25 ]\n [ 25.125]]\n x_3 [[-80.25 ]\n [ 25.125]]\n x_4 [[-80.25 ]\n [ 25.125]]\n x_5 [[-80.25 ]\n [ 25.125]]\n\n\n\n\n\n array([[-80.25 ],\n [ 25.125]])\n\n\n\n\n```python\nfunc_diff([-80.25, 25.125])\n```\n\n\n\n\n array([ 166.5 , -47.25])\n\n\n\n**NB! TODO: The above doesn't seem to work at the moment. The returned optimum is wrong. Can you spot a problem with the above implementation?**\n\n# Quasi-Newton method\n\nIn practice, we may not always have access to the Jacobian of a function. There are numerical methods, known as **quasi-Newton methods**, which approximate the Jacobian numerically.\n\nOne such method is the **Broyden-Fletcher-Goldfarb-Shanno (BFGS)** algorithm. It is generally a bad idea to implement these algorithms by hand, since their implementations are often nuanced and nontrivial.\n\nFortunately, Python libraries provide excellent implementations of optimisation algorithms.\n\nLet us use SciPy to optimise our function.\n\nRemember that to maximise a function we simply minimise its negative, which is what we achieve with the Python lambda below:\n\n\n```python\nimport scipy.optimize\nscipy.optimize.minimize(lambda x: -func(x), np.array([-80., 25.]), method='BFGS')\n```\n\n\n\n\n fun: -20.249999999999446\n hess_inv: array([[ 0.53711516, 0.13107271],\n [ 0.13107271, 0.96288482]])\n jac: array([ 1.43051147e-06, -4.76837158e-07])\n message: 'Optimization terminated successfully.'\n nfev: 32\n nit: 4\n njev: 8\n status: 0\n success: True\n x: array([ 3.0000007 , 1.49999975])\n\n\n\n# Grid search\n\nWhat we have considered so far isn't the most straightforward optimisation procedure. A natural first thing to do is often the **grid search**.\n\nIn grid search, we pick a subset of the parameter search, usually a rectangular grid, evaluate the value at each grid point and pick the point where the function is largest (smallest) as the approximate location of the maximum (minimum).\n\nAs a by-product of the grid search we get a heat-map — an excellent way of visualising the magnitude of the function on the parameter space.\n\nIf we have more than two parameters, we can produce heatmaps for each parameter pair. 
(E.g., for a three-dimensional function, $(x_1, x_2)$, $(x_1, x_3)$, $(x_2, x_3)$.)\n\nGrid search is often useful for **tuning** machine learning **hyperparameters** and finding optimal values for trading (and other) strategies, in which case a single evaluation of the objective function may correspond to a single backtest run over all available data.\n\nLet us use the following auxiliary function from https://matplotlib.org/gallery/images_contours_and_fields/image_annotated_heatmap.html\n\n\n```python\ndef heatmap(data, row_labels, col_labels, ax=None,\n cbar_kw={}, cbarlabel=\"\", **kwargs):\n \"\"\"\n Create a heatmap from a numpy array and two lists of labels.\n\n Arguments:\n data : A 2D numpy array of shape (N,M)\n row_labels : A list or array of length N with the labels\n for the rows\n col_labels : A list or array of length M with the labels\n for the columns\n Optional arguments:\n ax : A matplotlib.axes.Axes instance to which the heatmap\n is plotted. If not provided, use current axes or\n create a new one.\n cbar_kw : A dictionary with arguments to\n :meth:`matplotlib.Figure.colorbar`.\n cbarlabel : The label for the colorbar\n All other arguments are directly passed on to the imshow call.\n \"\"\"\n\n if not ax:\n ax = plt.gca()\n\n # Plot the heatmap\n im = ax.imshow(data, **kwargs)\n\n # Create colorbar\n cbar = ax.figure.colorbar(im, ax=ax, **cbar_kw)\n cbar.ax.set_ylabel(cbarlabel, rotation=-90, va=\"bottom\")\n\n # We want to show all ticks...\n ax.set_xticks(np.arange(data.shape[1]))\n ax.set_yticks(np.arange(data.shape[0]))\n # ... and label them with the respective list entries.\n ax.set_xticklabels(col_labels)\n ax.set_yticklabels(row_labels)\n\n # Let the horizontal axes labeling appear on top.\n ax.tick_params(top=True, bottom=False,\n labeltop=True, labelbottom=False)\n\n # Rotate the tick labels and set their alignment.\n plt.setp(ax.get_xticklabels(), rotation=-30, ha=\"right\",\n rotation_mode=\"anchor\")\n\n # Turn spines off and create white grid.\n for edge, spine in ax.spines.items():\n spine.set_visible(False)\n\n ax.set_xticks(np.arange(data.shape[1]+1)-.5, minor=True)\n ax.set_yticks(np.arange(data.shape[0]+1)-.5, minor=True)\n ax.grid(which=\"minor\", color=\"w\", linestyle='-', linewidth=3)\n ax.tick_params(which=\"minor\", bottom=False, left=False)\n\n return im, cbar\n```\n\n\n```python\ndef func(x1, x2): return -x1**2 - x2**2 + 6.*x1 + 3.*x2 + 9.\nx1s_ = np.linspace(-100., 100., 10)\nx2s_ = np.linspace(-100., 100., 10)\nx1s, x2s = np.meshgrid(x1s_, x2s_)\nfs = func(x1s, x2s)\nnp.shape(fs)\nheatmap(fs, x1s_, x2s_)[0];\n```\n\n# Random search\n\nSometimes a **random search** may be preferred over grid search. This also enables us to incorporate our guess — a prior distribution — of the location of the optimum, so we can sample the parameter points from that prior distribution and evaluate the values of the function at those points.\n\nBoth **grid search** and **random search** are the so-called **embarrassingly parallel** methods and are trivial to parallelise, either over multiple cores on a single machine or over a cluster/cloud.\n\nIn general, it is suboptimal to explore a hypercube of the parameter space by systematically going through each point in a grid. 
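As a small illustration, here is what a random search over the same two-parameter function could look like. This is only a sketch: the normal prior with standard deviation 50 and the sample size of 1000 are assumptions chosen purely for the example:\n\n\n```python\n# Sample candidate points from a prior centred at the origin instead of evaluating a fixed grid\nnp.random.seed(0)\nx1_samples = np.random.normal(0., 50., 1000)\nx2_samples = np.random.normal(0., 50., 1000)\nvalues = func(x1_samples, x2_samples)\nbest = np.argmax(values)\nprint(x1_samples[best], x2_samples[best], values[best])\n```\n\nWith enough samples the best point found approaches the true maximum at $(3, 1.5)$, but independent random draws, like a uniform grid, are not the most efficient way to fill a hypercube.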
Sobol sequences give the optimal sequence of points to try — see Sergei Kucherenko's work in this area.\n\n# Stochastic and batch gradient descent\n\nWhen working with **aritificial neural networks (ANNs)** we usually prefer the **stochastic** and **batch gradient descent methods** over the quasi-Newton methods. We will examine these methods when we introduce ANNs.\n", "meta": {"hexsha": "f9138c134af46254f3237c081ed05643bce3215d", "size": 333530, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "src/jupyter/python/foundations/optimisation.ipynb", "max_stars_repo_name": "saarahrasheed/tsa", "max_stars_repo_head_hexsha": "e4460f707eeecb737663c48d8fc3245f0acb124c", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/jupyter/python/foundations/optimisation.ipynb", "max_issues_repo_name": "saarahrasheed/tsa", "max_issues_repo_head_hexsha": "e4460f707eeecb737663c48d8fc3245f0acb124c", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/jupyter/python/foundations/optimisation.ipynb", "max_forks_repo_name": "saarahrasheed/tsa", "max_forks_repo_head_hexsha": "e4460f707eeecb737663c48d8fc3245f0acb124c", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 240.4686373468, "max_line_length": 92392, "alphanum_fraction": 0.9083620664, "converted": true, "num_tokens": 6279, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9314625069680097, "lm_q2_score": 0.9073122244934722, "lm_q1q2_score": 0.8451273192294112}} {"text": "```python\nimport sympy as sym #simbolica\nimport matplotlib.pyplot as plt #importa matplotlib solo pyplot\nimport matplotlib.image as mpimg \nfrom IPython.display import Image #para importar y mostrar en jupyter imagenes\n\nsym.init_printing() #activa a jupyter para mostrar simbolicamente el output\n#%matplotlib widget \n%matplotlib inline\n```\n\n\n```python\n# Puente completo celda de carga es como se observa en la imagen.\nImage(filename='straingauge.png',width=300) \n```\n\n\n```python\nImage(filename='esq_sg.png',width=250) #Circuito esquematico cambiando el enfoque\n```\n\n\n```python\n#Se busca el equivalente theveniN\n#Se van a plantear ecuaciones nodales\n#Vth es el voltaje en Vo cuando Io=0 => CA\n\nsym.var('R1, R2, R3, R4, Io, Vo, Vth')\nsym.var('Va, Vb, Vp')\nfind=sym.Matrix(([Vo], [Va], [Vb])) #son las incognitas\nec_p_0=sym.Eq((Va-Vo)/R1+(Vb-Vo)/R2,0) # Nodo Vo ; Io=0\nec_p_1=sym.Eq((Va-Vb),Vp) #SuperNodo\nec_p_2=sym.Eq((Va/R3)+Vb/R4,0) # Tierra Io=0\ndisplay(sym.Eq(Vth,sym.factor(sym.simplify(sym.solve([ec_p_0,ec_p_1,ec_p_2],find)[Vo]))))\nVth=sym.solve([ec_p_0,ec_p_1,ec_p_2],find)[Vo]\n```\n\n\n```python\n#Circuito para encontrar eq. 
Norton Io=-In Vo=0 -> CC\nImage(filename='esq_sg2.png',width=250) \n```\n\n\n```python\n#Se busca el equivalente Norton y se usa LKV\nsym.var('I1,I2,In')\nec_p_3=sym.Eq(I1*R3+Vp+(I1-In)*R4,0)\nec_p_4=sym.Eq(I2*R1+(I2-In)*R2,Vp)\nec_p_5=sym.Eq((In-I1)*R4+(In-I2)*R2,0)\ndisplay(sym.Eq(In,sym.simplify(sym.factor(sym.solve([ec_p_3,ec_p_4,ec_p_5],(In,I1,I2))[In]))))\nIn=sym.solve([ec_p_3,ec_p_4,ec_p_5],(In,I1,I2))[In]\n```\n\n\n```python\n#Por lo tanto La resistencia de Thevenin\nsym.var('Rth')\ndisplay(sym.Eq(Rth,sym.simplify(sym.factor(Vth/In))))\nRth=sym.simplify(Vth/In)\n```\n\n\n```python\n#Calculo de forma directa\nImage(filename='esq_sg.png',width=250) #Circuito esquematico cambiando el enfoque\n\n```\n\n\n```python\n#De forma directa Vth es VR2+VR4 \n# Vth= Vp/(R1+R2) * R2 - Vp/(R3+R4) * R4\n# Rth cuando Vp=0 es Rth=R1//R2 + R3//R4\nsym.var('Vth_, Rth_')\ndisplay(sym.Eq(Vth_,sym.simplify(sym.factor(Vp*(R2/(R1+R2)-R4/(R3+R4))))))\nVth_=sym.fu((Vp/(R1+R2))*R2-(Vp/(R3+R4))*R4)\n\ndisplay(sym.Eq(Rth_,(R1**-1+R2**-1)**-1+(R3**-1+R4**-1)**-1))\ndisplay(sym.Eq(Rth_,sym.simplify(sym.factor((R1**-1+R2**-1)**-1+(R3**-1+R4**-1)**-1))))\n\nRth_=sym.factor(((R1**-1+R2**-1)**-1)+(R3**-1+R4**-1)**-1)\n\n```\n", "meta": {"hexsha": "0676ab41dedd6f21954d5e6866c359d652d7c69f", "size": 101658, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "python/0/Puente.ipynb", "max_stars_repo_name": "WayraLHD/SRA21", "max_stars_repo_head_hexsha": "1b0447bf925678b8065c28b2767906d1daff2023", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-09-29T16:38:53.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-29T16:38:53.000Z", "max_issues_repo_path": "python/0/Puente.ipynb", "max_issues_repo_name": "WayraLHD/SRA21", "max_issues_repo_head_hexsha": "1b0447bf925678b8065c28b2767906d1daff2023", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-08-10T08:24:57.000Z", "max_issues_repo_issues_event_max_datetime": "2021-08-10T08:24:57.000Z", "max_forks_repo_path": "python/0/Puente.ipynb", "max_forks_repo_name": "WayraLHD/SRA21", "max_forks_repo_head_hexsha": "1b0447bf925678b8065c28b2767906d1daff2023", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 321.7025316456, "max_line_length": 23424, "alphanum_fraction": 0.9240984477, "converted": true, "num_tokens": 950, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9314625088705931, "lm_q2_score": 0.9073122201074847, "lm_q1q2_score": 0.8451273168702655}} {"text": "# Ejercicios P\u00e9rdida de Significancia\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n```\n\n## Comentario de la clase...\n\nDel ejercicio realizado en clases ten\u00edamos problemas con $p=53$ para representaci\u00f3n en *doble precisi\u00f3n*. Se puede representar hasta $2^{53}$ sin problemas y luego solo podemos sumar m\u00faltiplos de $2$ (combinaciones de los bits asociados a $2^{1}$, $2^{2}$, $2^{3}$, $\\dots$), ya que se pierde el $2^0$. 
Ver el ejemplo en el video de la clase.\n\n\n```python\n2.**53, 2.**53+1, 2.**53+2, 2.**53+3, 2.**53+4, 2.**53+5, 2.**53+6, 2.**53+7, 2.**53+8, 2.**53+9\n```\n\n## Ejercicio 1\n\nObtener $a+\\sqrt{a^2 + b^4}$, $a=-12345678987654321$, $b=123$.\n\nResultado aproximado: $9.2699089790398179 \\times 10^{-9}$\n\n\n```python\na = -12345678987654321.\nb = 123.\n```\n\n\n```python\nprint(\"a: %f, b: %f\" % (a, b))\n```\n\nPerdemos un d\u00edgito...\n\n### C\u00e1lculo directo\n\n\n```python\na + np.sqrt(a ** 2 + b ** 4)\n```\n\n### Manejo algebraico\n\n\\begin{equation}\n \\begin{split}\n a+\\sqrt{a^2 + b^4} & = a+\\sqrt{a^2 + b^4} \\cdot \\frac{a-\\sqrt{a^2 + b^4}}{a- \\sqrt{a^2 + b^4}} \\\\\n & = \\frac{a^2-a^2 - b^4}{a- \\sqrt{a^2 + b^4}} \\\\\n & = \\frac{-b^4}{a - \\sqrt{a^2 + b^4}} \\\\\n & = \\frac{b^4}{\\sqrt{a^2 + b^4} - a}\n \\end{split}\n\\end{equation}\n\n\n```python\nb ** 4 / (np.sqrt(a ** 2 + b ** 4) - a)\n```\n\n## Ejercicio 2\n\nObtener\n\n\\begin{equation}\n x^2 = 1.2222222^2 + 3344556600^2\n\\end{equation}\n\nEsto ser\u00eda:\n\\begin{equation}\n x = \\sqrt{1.2222222^2 + 3344556600^2}\n\\end{equation}\n\nEl resultado es aproximadamente $3.34455660000000000022332214\\times 10^9$\n\n\n```python\nc1 = 1.2222222\nc2 = 3344556600.\n```\n\n\n```python\nprint(c1, c2)\n```\n\n\n```python\nx1 = np.sqrt(c1 ** 2 + c2 ** 2)\n```\n\n\n```python\nprint(x1)\n```\n\nUtilizando\n\n\\begin{equation}\n x^2 = (c_1 + c_2)^2 - 2c_1c_2\n\\end{equation}\n\n\n```python\nx2 = np.sqrt((c1 + c2) ** 2 - 2 * c1 * c2)\n```\n\n\n```python\nprint(x2)\n```\n\nUtilizando un uno conveniente\n\n\\begin{equation}\n \\begin{split}\n x^2 & = \\sqrt{c_1^2 + c_2^2} \\, \\frac{\\sqrt{c_1^2 + c_2^2}}{\\sqrt{c_1^2 + c_2^2}} \\\\\n & = \\frac{c_1^2 + c_2^2}{\\sqrt{c_1^2 + c_2^2}} \n \\end{split}\n\\end{equation}\n\n\n```python\nx3 = (c1 ** 2 + c2 ** 2) / np.sqrt(c1 ** 2 + c2 ** 2)\n```\n\n\n```python\nprint(x3)\n```\n\n\n```python\nprint(abs(3.34455660000000000022332214e9 - x1))\nprint(abs(3.34455660000000000022332214e9 - x2))\nprint(abs(3.34455660000000000022332214e9 - x3))\n```\n\nNo todos los manejos algebraicos nos ayudar\u00e1n a resolver problemas mal planteados.\n\n## Ejercicio 3\n\nCalcular las ra\u00edces de\n\n\\begin{equation}\n x^2 + 9^{12} x - 3 = 0\n\\end{equation}\n\n\\begin{equation}\n x = \\frac{-9^{12} \\pm \\sqrt{9^{24} + 12}}{2}\n\\end{equation}\n\nUna ra\u00edz ser\u00eda:\n\n$x_1=\\frac{-9^{12} - \\sqrt{9^{24} + 12}}{2}= \u22122.824\\times 10^{11}$\n\nEn el caso de la segunda ra\u00edz esto es muy cercano a $0$\n\n$x_2=\\frac{-9^{12} + \\sqrt{9^{24} + 12}}{2}$\n\n$-9^{12} + \\sqrt{9^{24}} \\approx 0$ pero seg\u00fan c\u00f3mo lo calculemos vamos a poder recuperar algunos decimales.\n\n### C\u00e1lculo directo\n\n\n```python\ndef root(a, b, c):\n r1 = (-b - np.sqrt(b ** 2 - 4 * a * c)) / (2 * a)\n r2 = (-b + np.sqrt(b ** 2 - 4 * a * c)) / (2 * a)\n return r1, r2\n```\n\n\n```python\na = 1.\nb = 9. 
** 12\nc = -3.\n```\n\n\n```python\nx1, x2 = root(a, b, c)\n```\n\n\n```python\nprint(x1, x2)\n```\n\n### Manejo algebraico\n\n\\begin{equation}\n \\begin{split}\n x_2 & = \\frac{-b + \\sqrt{b^2 - 4ac}}{2a} \\\\\n & = \\frac{-b + \\sqrt{b^2 - 4ac}}{2a} \\, \\frac{b + \\sqrt{b^2 - 4ac}}{b + \\sqrt{b^2 - 4ac}} \\\\\n & = \\frac{-4ac}{2a(b + \\sqrt{b^2 - 4ac})} \\\\\n & = \\frac{-2c}{(b + \\sqrt{b^2 - 4ac})}\n \\end{split}\n\\end{equation}\n\n\n```python\ndef root_improved(a, b, c):\n r1 = (-b - np.sqrt(b ** 2 - 4 * a * c)) / (2 * a)\n r2 = -2 * c / (b + np.sqrt(b ** 2 - 4 * a * c))\n return r1, r2\n```\n\n\n```python\nx1, x2 = root_improved(a, b, c)\n```\n\n\n```python\nprint(x1, x2)\n```\n\nMirando con mayor detalle, si $4|ac| \\ll b^2$, $b$ y $\\sqrt{b^2-4ac}$ son similares en magnitud por lo tanto una de las ra\u00edces va a sufrir p\u00e9rdida de significancia. Revisar el libro gu\u00eda para m\u00e1s detalles de c\u00f3mo solucionar el problema.\n\n## Ejercicio 4\n\nSean\n\\begin{equation}\n E_1(x)=\\frac{1-\\cos(x)}{\\sin^2(x)}, \\quad E_2(x)=\\frac{1}{1+\\cos(x)}\n\\end{equation}\n\n\u00bfSon $E_1(x)$ y $E_2(x)$ iguales? \u00bfQu\u00e9 ocurre al evaluarlas cerca de $0$?\n\nPrimero, \u00bf$E_1(x)=E_2(x)$?\n\n\\begin{equation}\n \\begin{split}\n E_1(x) & =E_2(x) \\\\\n \\frac{1-\\cos(x)}{\\sin^2(x)} & = \\frac{1}{1+\\cos(x)} \\\\\n (1-\\cos(x))(1+\\cos(x)) &= \\sin^2(x) \\\\\n 1-\\cos^2(x) &= \\sin^2(x) \\\\\n 1 &= \\sin^2(x) + \\cos^2(x)\n \\end{split}\n\\end{equation}\n\nEfectivamente son iguales. Ahora veamos num\u00e9ricamente qu\u00e9 ocurre al evaluarla cerca de $0$.\n\n\n```python\nE1 = lambda x: (1 - np.cos(x)) / np.sin(x) ** 2\nE2 = lambda x: 1 / (1 + np.cos(x))\n```\n\n\n```python\nxa, xb = -.5, .5\nN = 101\nx = np.linspace(xa, xb, N)\n```\n\n\n```python\nx\n```\n\n\n```python\nplt.figure(figsize=(13, 4))\nplt.subplot(1, 2, 1)\nplt.plot(x, E1(x), 'b.')\nplt.xlabel(r\"$x$\")\nplt.ylabel(r\"$E_1(x)$\")\nplt.grid(True)\nplt.subplot(1, 2, 2)\nplt.plot(x, E2(x), 'r.')\nplt.xlabel(r\"$x$\")\nplt.ylabel(r\"$E_2(x)$\")\nplt.grid(True)\nplt.tight_layout()\nplt.show()\n```\n\nTenemos problemas al evaluar $0$ en el denominador de $E_1(x)$ puesto que se indetermina.\n\n\n```python\nx = np.logspace(-19,0,20)[-1:0:-1]\no1 = E1(x)\no2 = E2(x)\n\nprint(\"x, E1(x), E2(x)\")\nfor i in np.arange(len(x)):\n print(\"%1.15f, %1.15f, %1.15f\" % (x[i], o1[i], o2[i]))\n```\n\n## Ejercicio 5\n\n\u00bfCu\u00e1ndo habr\u00eda problemas al evaluar $f(x)$ y $g(x)$?\n\n\\begin{equation}\n f(x) = \\frac{1-(1-x)^3}{x}, \\quad g(x)= \\frac{1}{1+x}-\\frac{1}{1-x}\n\\end{equation}\n\n\n```python\nf = lambda x: (1 - (1 - x) ** 3) / x\ng = lambda x: 1 / (1 + x) - 1 / (1 - x)\n```\n\n\n```python\nxa, xb = -2, 2\nN = 101\nx = np.linspace(xa, xb, N)\n```\n\n\n```python\nplt.figure(figsize=(13, 4))\nplt.subplot(1, 2, 1)\nplt.plot(x, f(x), 'b.')\nplt.xlabel(r\"$x$\")\nplt.ylabel(r\"$f(x)$\")\nplt.grid(True)\nplt.subplot(1, 2, 2)\nplt.plot(x, g(x), 'r.')\nplt.xlabel(r\"$x$\")\nplt.ylabel(r\"$g(x)$\")\nplt.grid(True)\nplt.tight_layout()\nplt.show()\n```\n\nEs claro que para $f(x)$ el problema se encuentra cercano a $0$ mientras que para $g(x)$ el problema est\u00e1 en $-1$ y $1$. En el caso de $f(x)$ basta expandirla para obtener $f(x)=x^2-3x+3$. 
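Podemos comprobar esa expansión de forma simbólica. El siguiente bosquejo usa `sympy`, que no forma parte de este notebook y se importa aquí solo como verificación:\n\n\n```python\nimport sympy\n\nx_sym = sympy.symbols('x')  # usamos otro nombre para no sobrescribir el arreglo x de arriba\n# Simplificar (1 - (1 - x)**3)/x; el resultado es x**2 - 3*x + 3\nsympy.cancel((1 - (1 - x_sym)**3) / x_sym)\n```\n\n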
Si operamos sobre $g(x)$ se obtiene $g(x)=\\frac{2x}{x^2-1}$, el cual contiene los mismos problemas.\n\n\n```python\nff = lambda x: x ** 2 - 3 * x + 3\ngg = lambda x: 2 * x / (x ** 2 - 1)\n```\n\n\n```python\nplt.figure(figsize=(12, 4))\nplt.subplot(1, 2, 1)\nplt.plot(x, ff(x), 'b.', label=r\"$f(x)$\")\nplt.grid(True)\nplt.subplot(1, 2, 2)\nplt.plot(x, gg(x), 'r.', label=r\"$g(x)$\")\nplt.grid(True)\nplt.tight_layout()\nplt.show()\n```\n\n\u00bfExistir\u00e1 alguna forma de solucionar los problemas de $g(x)$?\n\n# Referencias\n\n* Sauer, T. (2006). Numerical Analysis Pearson Addison Wesley.\n* https://github.com/tclaudioe/Scientific-Computing/blob/master/SC1/03_floating_point_arithmetic.ipynb <- Revisar este enlace que tiene ejemplos y comentarios interesantes!\n", "meta": {"hexsha": "9a25b132f58e3dac3e58d299cb33523e2d107ef1", "size": 14279, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "material/02_punto_flotante/ejercicios.ipynb", "max_stars_repo_name": "etra0/INF-285", "max_stars_repo_head_hexsha": "189f8d66cf6997fc87b545378b2c5f1c7b908dbc", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 8, "max_stars_repo_stars_event_min_datetime": "2020-04-24T01:25:24.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-10T01:08:37.000Z", "max_issues_repo_path": "material/02_punto_flotante/ejercicios.ipynb", "max_issues_repo_name": "etra0/INF-285", "max_issues_repo_head_hexsha": "189f8d66cf6997fc87b545378b2c5f1c7b908dbc", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-04-22T00:57:29.000Z", "max_issues_repo_issues_event_max_datetime": "2020-04-23T23:59:28.000Z", "max_forks_repo_path": "material/02_punto_flotante/ejercicios.ipynb", "max_forks_repo_name": "etra0/INF-285", "max_forks_repo_head_hexsha": "189f8d66cf6997fc87b545378b2c5f1c7b908dbc", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 45, "max_forks_repo_forks_event_min_datetime": "2020-04-20T01:15:09.000Z", "max_forks_repo_forks_event_max_datetime": "2020-07-27T22:53:59.000Z", "avg_line_length": 23.0306451613, "max_line_length": 349, "alphanum_fraction": 0.4635478675, "converted": true, "num_tokens": 2868, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.8933094117351309, "lm_q2_score": 0.9458012745689621, "lm_q1q2_score": 0.8448931802035365}} {"text": "# Poisson Distribution - Broken Machine\n\n> This document is written in *R*.\n>\n> ***GitHub***: https://github.com/czs108\n\n## Background\n\n> Suppose the *mean* number of machine malfunctions per week, or rate of malfunctions, is **3.4**.\n\n## Question A\n\n> What's the probability of the machine *not* malfunctioning next week?\n\n\\begin{equation}\n\\lambda = 3.4\n\\end{equation}\n\n\\begin{equation}\n\\begin{split}\nP(X = 0) &= \\frac{e^{-3.4} \\times {3.4}^{0}}{0!} \\\\\n &= e^{-3.4} \\\\\n &= 0.033\n\\end{split}\n\\end{equation}\n\nUse the `dpois` function directly.\n\n\n```R\ndpois(x=0, lambda=3.4)\n```\n\n\n0.0333732699603261\n\n\nOr use the `exp` function.\n\n\n```R\nexp(-3.4)\n```\n\n\n0.0333732699603261\n\n\n## Question B\n\n> What's the probability of the machine malfunctioning **3** times next week?\n\n\\begin{equation}\n\\begin{split}\nP(X = 3) &= \\frac{e^{-3.4} \\times {3.4}^{3}}{3!} \\\\\n &= 0.2186\n\\end{split}\n\\end{equation}\n\n\n```R\ndpois(x=3, lambda=3.4)\n```\n\n\n0.218617167086776\n\n\n## Question C\n\n> What's the *expectation* and *variance* of the machine malfunctions?\n\n\\begin{equation}\nE(X) = \\lambda = 3.4\n\\end{equation}\n\n\\begin{equation}\nVar(X) = \\lambda = 3.4\n\\end{equation}\n", "meta": {"hexsha": "7589b30aea0d77a794a11456bb372cfb7a654e2b", "size": 3861, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "exercises/Poisson Distribution - Broken Machine.ipynb", "max_stars_repo_name": "czs108/Probability-Theory-Exercises", "max_stars_repo_head_hexsha": "60c6546db1e7f075b311d1e59b0afc3a13d93229", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "exercises/Poisson Distribution - Broken Machine.ipynb", "max_issues_repo_name": "czs108/Probability-Theory-Exercises", "max_issues_repo_head_hexsha": "60c6546db1e7f075b311d1e59b0afc3a13d93229", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "exercises/Poisson Distribution - Broken Machine.ipynb", "max_forks_repo_name": "czs108/Probability-Theory-Exercises", "max_forks_repo_head_hexsha": "60c6546db1e7f075b311d1e59b0afc3a13d93229", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-03-21T05:04:07.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-21T05:04:07.000Z", "avg_line_length": 18.5625, "max_line_length": 104, "alphanum_fraction": 0.4498834499, "converted": true, "num_tokens": 415, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9539660989095221, "lm_q2_score": 0.8856314632529872, "lm_q1q2_score": 0.844862392070984}} {"text": "# For today's code challenge you will be reviewing yesterdays lecture material. Have fun!\n\n### if you get done early check out [these videos](https://www.3blue1brown.com/neural-networks).\n\n# The Perceptron\n\nThe first and simplest kind of neural network that we could talk about is the perceptron. A perceptron is just a single node or neuron of a neural network with nothing else. It can take any number of inputs and spit out an output. 
What a neuron does is it takes each of the input values, multplies each of them by a weight, sums all of these products up, and then passes the sum through what is called an \"activation function\" the result of which is the final value.\n\nI really like figure 2.1 found in this [pdf](http://www.uta.fi/sis/tie/neuro/index/Neurocomputing2.pdf) even though it doesn't have bias term represented there.\n\n\n\nIf we were to write what is happening in some verbose mathematical notation, it might look something like this:\n\n\\begin{align}\n y = sigmoid(\\sum(weight_{1}input_{1} + weight_{2}input_{2} + weight_{3}input_{3}) + bias)\n\\end{align}\n\nUnderstanding what happens with a single neuron is important because this is the same pattern that will take place for all of our networks. \n\nWhen imagining a neural network I like to think about the arrows as representing the weights, like a wire that has a certain amount of resistance and only lets a certain amount of current through. And I like to think about the node itselef as containing the prescribed activation function that neuron will use to decide how much signal to pass onto the next layer.\n\n# Activation Functions (transfer functions)\n\nIn Neural Networks, each node has an activation function. Each node in a given layer typically has the same activation function. These activation functions are the biggest piece of neural networks that have been inspired by actual biology. The activation function decides whether a cell \"fires\" or not. Sometimes it is said that the cell is \"activated\" or not. In Artificial Neural Networks activation functions decide how much signal to pass onto the next layer. This is why they are sometimes referred to as transfer functions because they determine how much signal is transferred to the next layer.\n\n## Common Activation Functions:\n\n\n\n# Implementing a Perceptron from scratch in Python\n\n### Establish training data\n\n\n```python\nimport numpy as np\n\nnp.random.seed(812)\n\ninputs = np.array([\n [0, 0, 1],\n [1, 1, 1],\n [1, 0, 1],\n [0, 1, 1]\n])\n\ncorrect_outputs = [[0], [1], [1], [0]]\n```\n\n### Sigmoid activation function and its derivative for updating weights\n\n\n```python\ndef sigmoid(x):\n return 1 / (1 + np.exp(-x))\n\ndef sigmoid_derivative(x):\n sx = sigmoid(x)\n return sx * (1 - sx)\n```\n\n## Updating weights with derivative of sigmoid function:\n\n\n\n### Initialize random weights for our three inputs\n\n\n```python\nweights = np.random.rand(3,1)\nweights\n```\n\n\n\n\n array([[0.32659171],\n [0.59345002],\n [0.25569456]])\n\n\n\n### Calculate weighted sum of inputs and weights\n\n\n```python\nweighted_sum = np.dot(inputs,weights)\n```\n\n### Output the activated value for the end of 1 training epoch\n\n\n```python\nactivated_value = sigmoid(weighted_sum)\nactivated_value\n```\n\n\n\n\n array([[0.56357763],\n [0.76418031],\n [0.64159331],\n [0.70038767]])\n\n\n\n### take difference of output and true values to calculate error\n\n\n```python\nerror = correct_outputs - activated_value\nerror\n```\n\n\n\n\n array([[-0.56357763],\n [ 0.23581969],\n [ 0.35840669],\n [-0.70038767]])\n\n\n\n\n```python\nderivative = sigmoid_derivative(error)\nderivative\n```\n\n\n\n\n array([[0.23115422],\n [0.24655628],\n [0.24214035],\n [0.22168396]])\n\n\n\n### Put it all together\n\n\n```python\nweights += np.dot(inputs,derivative)\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "b40330e34b1d3ceb758b83164aa7563deab03613", "size": 12213, "ext": "ipynb", "lang": "Jupyter Notebook", 
"max_stars_repo_path": "Tuesday's_Challenge.ipynb", "max_stars_repo_name": "vishnuyar/Neural_network_foundations_code_challenges", "max_stars_repo_head_hexsha": "5c628b2634eb25de443faff703b60ffb42d59dc7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Tuesday's_Challenge.ipynb", "max_issues_repo_name": "vishnuyar/Neural_network_foundations_code_challenges", "max_issues_repo_head_hexsha": "5c628b2634eb25de443faff703b60ffb42d59dc7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Tuesday's_Challenge.ipynb", "max_forks_repo_name": "vishnuyar/Neural_network_foundations_code_challenges", "max_forks_repo_head_hexsha": "5c628b2634eb25de443faff703b60ffb42d59dc7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 29.9338235294, "max_line_length": 618, "alphanum_fraction": 0.4854663064, "converted": true, "num_tokens": 916, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9609517095103498, "lm_q2_score": 0.8791467770088163, "lm_q1q2_score": 0.8448175982771363}} {"text": "```python\nfrom math_so.model import lotka_volterra\nimport numpy as np\nfrom scipy.integrate import solve_ivp\nimport matplotlib.pyplot as plt\n```\n\n# Numerical Differentiation\n\nUsing a Taylor expansion of the (sufficiently often differentiable) function $f$ about $x$,\n\n1. $f(x+h) = f(x) + h f'(x) + \\frac{h^2}{2!} f''(x) + \\frac{h^3}{3!} f'''(x) + \\ldots, $\n2. $f(x-h) = f(x) - h f'(x) + \\frac{h^2}{2!} f''(x) - \\frac{h^3}{3!} f'''(x) + \\ldots, $\n3. $f(x+2h) = f(x) + 2h f'(x) + 4\\frac{h^2}{2!} f''(x) + 8\\frac{h^3}{3!} f'''(x) + \\ldots, $\n4. $f(x-2h) = f(x) - 2h f'(x) + 4\\frac{h^2}{2!} f''(x) - 8\\frac{h^3}{3!} f'''(x) + \\ldots, $\n\nwe can derive the following finite difference approximations to the derivative of $f$ at $x$.\n\n## First-order finite difference \nFrom (1) we obtain:\n$$\\begin{align*}f'_\\text{forward}(x) &=\\displaystyle \\frac{f(x+h) - f(x)}{h} \n\\color{red}{-\\frac{h}{2!}f''(x)-\\frac{h^2}{3!}f'''(x)+\\dots}\\\\\n &=\\displaystyle \\frac{f(x+h) - f(x)}{h} \\color{red}{+\\mathcal{O}(h)}\n\\end{align*}$$\n\nFrom (2) we obtain:\n$$\\begin{align*}f'_\\text{backward}(x) &=\\displaystyle \\frac{f(x) - f(x-h)}{h} \n\\color{red}{+\\frac{h}{2!}f''(x)-\\frac{h^2}{3!}f'''(x)+\\dots}\\\\\n &=\\displaystyle \\frac{f(x) - f(x-h)}{h} \\color{red}{+\\mathcal{O}(h)}\n\\end{align*}$$\n\nFrom (1) and (2) we obtain:\n$$\\begin{align*}f'_\\text{central}(x) &=\\displaystyle \\frac{f(x+h) - f(x-h)}{2h} \n\\color{red}{-\\frac{h^2}{3!}f'''(x)+\\dots}\\\\\n &=\\displaystyle \\frac{f(x+h) - f(x-h)}{2h} \\color{red}{+\\mathcal{O}(h^2)}\n\\end{align*}$$\n\nThe error for the forward and backward difference method os of order $h$, for the central methos the error is of order $h^2$. (From the highest oder term in the dropped part of the taylor series. 
[[1](https://www.youtube.com/watch?v=ZJkGI5DZQv8&list=PLYdroRCLMg5OvLx1EtY1ByvveJeTEXQd_&index=18)][[2](https://www.youtube.com/watch?v=C2Wk-wiXLvE&list=PLYdroRCLMg5OvLx1EtY1ByvveJeTEXQd_&index=20)])\n\nHere's an example of a function whose derivatives we know analytically:\n\n\n```python\nf = lambda x: np.log(x)\nx0 = 3\ndf_ex = 1/x0 #f'(x0)\nd2f_ex = -1/x0**2 #f''(x0) \n```\n\n\n```python\ndef fd_forward1(f, x, h):\n \"\"\"fd_forward1 calculates the derivative of f at x \n with First-order forward finite difference.\n \"\"\"\n return (f(x + h) - f(x))/h\ndef fd_backward1(f, x, h):\n \"\"\"fd_backward1 calculates the derivative of f at x \n with First-order backward finite difference.\n \"\"\"\n return (f(x)-f(x - h))/h\ndef fd_central1(f, x, h):\n \"\"\"fd_central1 calculates the derivative of f at x \n with First-order central finite difference.\n \"\"\"\n return (f(x + h) - f(x-h))/(2*h)\n```\n\n**Forward/Backward**: Let's check if the error really decreases by a factor of 2 as we halve $h$, until ultimately roundoff errors spoil the convergence:\n\n\n```python\nh = .1\nfor m in np.arange(0,24):\n df = fd_forward1(f, x0, h)\n err = np.abs(df - df_ex)\n if m > 0:\n print('h = {:.2E}, err = {:.2E}, fac = {:.2f}'.format(h, err, err_old/err))\n err_old = err\n h = h/2\n```\n\n h = 5.00E-02, err = 2.75E-03, fac = 1.98\n h = 2.50E-02, err = 1.38E-03, fac = 1.99\n h = 1.25E-02, err = 6.93E-04, fac = 1.99\n h = 6.25E-03, err = 3.47E-04, fac = 2.00\n h = 3.13E-03, err = 1.73E-04, fac = 2.00\n h = 1.56E-03, err = 8.68E-05, fac = 2.00\n h = 7.81E-04, err = 4.34E-05, fac = 2.00\n h = 3.91E-04, err = 2.17E-05, fac = 2.00\n h = 1.95E-04, err = 1.09E-05, fac = 2.00\n h = 9.77E-05, err = 5.43E-06, fac = 2.00\n h = 4.88E-05, err = 2.71E-06, fac = 2.00\n h = 2.44E-05, err = 1.36E-06, fac = 2.00\n h = 1.22E-05, err = 6.78E-07, fac = 2.00\n h = 6.10E-06, err = 3.39E-07, fac = 2.00\n h = 3.05E-06, err = 1.70E-07, fac = 2.00\n h = 1.53E-06, err = 8.48E-08, fac = 2.00\n h = 7.63E-07, err = 4.26E-08, fac = 1.99\n h = 3.81E-07, err = 2.16E-08, fac = 1.97\n h = 1.91E-07, err = 1.06E-08, fac = 2.05\n h = 9.54E-08, err = 5.90E-09, fac = 1.79\n h = 4.77E-08, err = 5.90E-09, fac = 1.00\n h = 2.38E-08, err = 1.06E-08, fac = 0.56\n h = 1.19E-08, err = 1.24E-09, fac = 8.50\n\n\n\n```python\nh = .1\nfor m in np.arange(0,24):\n df = fd_backward1(f, x0, h)\n err = np.abs(df - df_ex)\n if m > 0:\n print('h = {:.2E}, err = {:.2E}, fac = {:.2f}'.format(h, err, err_old/err))\n err_old = err\n h = h/2\n```\n\n h = 5.00E-02, err = 2.81E-03, fac = 2.02\n h = 2.50E-02, err = 1.40E-03, fac = 2.01\n h = 1.25E-02, err = 6.96E-04, fac = 2.01\n h = 6.25E-03, err = 3.48E-04, fac = 2.00\n h = 3.13E-03, err = 1.74E-04, fac = 2.00\n h = 1.56E-03, err = 8.68E-05, fac = 2.00\n h = 7.81E-04, err = 4.34E-05, fac = 2.00\n h = 3.91E-04, err = 2.17E-05, fac = 2.00\n h = 1.95E-04, err = 1.09E-05, fac = 2.00\n h = 9.77E-05, err = 5.43E-06, fac = 2.00\n h = 4.88E-05, err = 2.71E-06, fac = 2.00\n h = 2.44E-05, err = 1.36E-06, fac = 2.00\n h = 1.22E-05, err = 6.78E-07, fac = 2.00\n h = 6.10E-06, err = 3.39E-07, fac = 2.00\n h = 3.05E-06, err = 1.70E-07, fac = 2.00\n h = 1.53E-06, err = 8.49E-08, fac = 2.00\n h = 7.63E-07, err = 4.24E-08, fac = 2.00\n h = 3.81E-07, err = 2.15E-08, fac = 1.98\n h = 1.91E-07, err = 1.16E-08, fac = 1.86\n h = 9.54E-08, err = 5.74E-09, fac = 2.01\n h = 4.77E-08, err = 3.41E-09, fac = 1.68\n h = 2.38E-08, err = 8.07E-09, fac = 0.42\n h = 1.19E-08, err = 1.74E-08, fac = 0.46\n\n\n**Central**: Let's check if the error 
really decreases by a factor of 4 as we halve $h$, until ultimately roundoff errors spoil the convergence:\n\n\n```python\nh = .1\nfor m in np.arange(0,24):\n df = fd_central1(f, x0, h)\n err = np.abs(df - df_ex)\n if m > 0:\n print('h = {:.2E}, err = {:.2E}, fac = {:.2f}'.format(h, err, err_old/err))\n err_old = err\n h = h/2\n```\n\n h = 5.00E-02, err = 3.09E-05, fac = 4.00\n h = 2.50E-02, err = 7.72E-06, fac = 4.00\n h = 1.25E-02, err = 1.93E-06, fac = 4.00\n h = 6.25E-03, err = 4.82E-07, fac = 4.00\n h = 3.13E-03, err = 1.21E-07, fac = 4.00\n h = 1.56E-03, err = 3.01E-08, fac = 4.00\n h = 7.81E-04, err = 7.54E-09, fac = 4.00\n h = 3.91E-04, err = 1.88E-09, fac = 4.00\n h = 1.95E-04, err = 4.70E-10, fac = 4.00\n h = 9.77E-05, err = 1.17E-10, fac = 4.03\n h = 4.88E-05, err = 2.93E-11, fac = 3.99\n h = 2.44E-05, err = 8.79E-12, fac = 3.33\n h = 1.22E-05, err = 4.85E-12, fac = 1.81\n h = 6.10E-06, err = 4.85E-12, fac = 1.00\n h = 3.05E-06, err = 3.15E-11, fac = 0.15\n h = 1.53E-06, err = 6.79E-11, fac = 0.46\n h = 7.63E-07, err = 7.76E-11, fac = 0.88\n h = 3.81E-07, err = 7.76E-11, fac = 1.00\n h = 1.91E-07, err = 5.04E-10, fac = 0.15\n h = 9.54E-08, err = 7.76E-11, fac = 6.50\n h = 4.77E-08, err = 1.24E-09, fac = 0.06\n h = 2.38E-08, err = 1.24E-09, fac = 1.00\n h = 1.19E-08, err = 8.07E-09, fac = 0.15\n\n\n\n```python\n\n```\n\n\n```python\nfig, ax = plt.subplots(figsize=(8, 5))\nf = lambda x: np.log(x)\nx0 = 0.2\ndf = lambda x: 1/x\nh = .1\na,b = x0-h,x0+h\n\nx = np.linspace(a-.03,b+.03,200)\ny = f(x) \ny0 = f(x0)\nyt = df(x0) * (x - x0) + y0 \nytf = fd_forward1(f, x0, h) * (x - x0) + y0 \nytb = fd_backward1(f, x0, h) * (x - x0) + y0 \nytc = fd_central1(f, x0, h) * (x - x0) + y0 \n\nax.plot(x,y,'r--',label=r'$f$')\nax.plot(x,yt,'k--',label=r\"$f'$ (exact)\")\nax.plot(x,ytf,'b-',label=r\"$\\hat{f}'$ (forward)\")\nax.plot(x,ytb,'g-',label=r\"$\\hat{f}'$ (backward)\")\nax.plot(x,ytc,'m-',label=r\"$\\hat{f}'$ (central)\")\n\nax.annotate(r'$\\frac{f(x_i+h) - f(x_i)}{h}$', xy=(x0, y0), xytext=(x0+h, y0), xycoords='data', textcoords='data',\n arrowprops={'arrowstyle': '<->','color':'b'},color='b')\nax.annotate('', xy=(x0+h, f(x0+h)), xytext=(x0+h, y0), xycoords='data', textcoords='data',\n arrowprops={'arrowstyle': '<->','color':'b'},color='b')\n\nax.annotate(r'$\\frac{f(x_i)-f(x_i-h)}{h}$', xy=(x0-h, f(x0-h)), xytext=(x0, f(x0-h)), xycoords='data', textcoords='data',\n arrowprops={'arrowstyle': '<->','color':'g'},color='g')\nax.annotate('', xy=(x0, y0), xytext=(x0, f(x0-h)), xycoords='data', textcoords='data',\n arrowprops={'arrowstyle': '<->','color':'g'},color='g')\n\nax.annotate(r'$\\frac{f(x_i+h)-f(x_i-h)}{2h}$', xy=(x0+h, f(x0+h)), xytext=(x0-h, f(x0+h)), xycoords='data', textcoords='data',\n arrowprops={'arrowstyle': '<->','color':'m'},color='m',horizontalalignment='right')\nax.annotate('', xy=(x0-h, f(x0-h)), xytext=(x0-h, f(x0+h)), xycoords='data', textcoords='data',\n arrowprops={'arrowstyle': '<->','color':'m'},color='m')\n\nax.spines['right'].set_visible(False)\nax.spines['top'].set_visible(False)\n\nplt.xticks([x0-h, x0, x0+h], [r'$x_i-h$', r'$x_i$', r'$x_i+h$'])\nplt.yticks([f(x0-h), f(x0), f(x0+h)], [r'$f(x_i-h)$', r'$f(x_i)$', r'$f(x_i+h)$'])\nplt.title('geometric interpretation (1st order derivative)')\nplt.legend(loc='lower right')\nplt.show() \n```\n\n## Second-order finite difference \nFrom (1)+(2) we obtain:\n$$\\begin{align*}f''_\\text{central}(x) &=\\displaystyle \\frac{f(x+h) - 2f(x)+f(x-h)}{h^2} \n\\color{red}{+\\frac{h^2}{12}f^{(4)}(x)+\\dots}\\\\\n &=\\displaystyle 
\\frac{f(x+h) - 2f(x) + f(x-h)}{h^2} \\color{red}{+\\mathcal{O}(h^2)}\n\\end{align*}$$\n\nRef: Chapra, Canale: Numerical Methods for Engineers\n\nHere's an example of a function whose derivatives we know analytically:\n\n\n```python\nf = lambda x: np.log(x)\nx0 = 3\ndf_ex = 1/x0 #f'(x0)\nd2f_ex = -1/x0**2 #f''(x0) \n```\n\n\n```python\ndef f2d_central2(f, x, h):\n    \"\"\"f2d_central2 calculates the second derivative of f at x \n    with the second-order central finite difference.\n    \"\"\"\n    return (f(x + h) - 2*f(x) + f(x-h))/(h**2)\n```\n\n**Central**: Let's check if the error really decreases by a factor of 4 as we halve $h$, until ultimately roundoff errors spoil the convergence:\n\n\n```python\nh = .1\nfor m in np.arange(0,10):\n    d2f = f2d_central2(f, x0, h)\n    err = np.abs(d2f - d2f_ex)\n    if m > 0:\n        print('h = {:.2E}, err = {:.2E}, fac = {:.2f}'.format(h, err, err_old/err))\n    err_old = err\n    h = h/2\n```\n\n    h = 5.00E-02, err = 1.54E-05, fac = 4.00\n    h = 2.50E-02, err = 3.86E-06, fac = 4.00\n    h = 1.25E-02, err = 9.65E-07, fac = 4.00\n    h = 6.25E-03, err = 2.41E-07, fac = 4.00\n    h = 3.13E-03, err = 6.03E-08, fac = 4.00\n    h = 1.56E-03, err = 1.52E-08, fac = 3.98\n    h = 7.81E-04, err = 3.78E-09, fac = 4.01\n    h = 3.91E-04, err = 1.24E-09, fac = 3.06\n    h = 1.95E-04, err = 2.69E-09, fac = 0.46\n\n\n# Numerical solution of ordinary differential equations\n\nAn ordinary differential equation (ODE) is an equation of the form\n\n$$\\frac{dy}{dt}=f(t,y)$$\n\nfor an unknown function $y:\\mathbf{R}\\to \\mathbf{R}^d$, $t\\mapsto y(t)$, and a right-hand side $f:\\mathbf{R} \\times \\mathbf{R}^d \\to \\mathbf{R}^d$. The initial value problem is completed by specifying an initial condition\n\n$$y(t_0)=y_0$$\n\nUnder certain conditions on $f$, a unique solution exists at least in a neighbourhood of $t_0$.\n\nA simple example is the growth of a bacteria colony, whose growth rate is proportional to the size of the population:\n\n$$\\frac{dy}{dt}=ry, y(0)=y_0$$\n\nwith a constant $r\\in\\mathbf{R}$. The exact solution is $y(t) = y_0 \\, e^{rt}$.\n\n\n```python\nr=0.8\nf=lambda t,y:r*y\ny0=1000\na=0; b=2\nt_ex = np.linspace(a-.03,b+.03,200)\ny_ex = lambda t: y0*np.exp(r*t)\n```\n\nThe simplest numerical method is **Euler's method**, which is based on the Taylor expansion at first order:\n\n$$y(t+h) = y(t) + h y'(t)+ O(h^2) = y(t) + h \\, f(t,y(t)) + O(h^2),$$\n\nwhere for the second equality we have used the ODE to replace the derivative. This step is repeated $n$ times, with $h=(b-a)/n$, starting at $t=a$ and ending up at $t=b$. An implementation can be found in `euler`.\n\n\n```python\ndef euler(f,a,b,y0,n):\n    \"\"\"\n    EULER solves the ordinary differential equation with right-hand side f and\n    initial value y(a) = y0 on the interval [a, b] with n steps of Euler's method\n    \"\"\"\n    h = (b - a)/n\n    t,y = a,y0\n    t_,y_=[],[]\n    while t <= b:\n        t_.append(t)\n        y_.append(y)\n        # advance the solution with the slope at the current point (t, y), then advance t\n        y += h * f(t,y)\n        t += h\n    return(np.array(t_),np.array(y_))\n```\n\n\n```python\nn = 20\n[t_euler, y_euler] = euler(f, a, b, y0, n)\n```\n\nFrom the above equation, the local truncation error of a single Euler step is of the order $O(h^2)$. However, after the step is repeated $n=(b-a)/h$ times, the global truncation error at $t=b$ is only $O(h)$. 
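\n\nHeuristically, this is because the $n=(b-a)/h$ local errors of size $O(h^2)$ can accumulate to about\n\n$$n \\cdot O(h^2) = \\frac{b-a}{h} \\, O(h^2) = O(h).$$\n\n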
So when the number of subintervals is doubled, the error decreases by a factor of 2.\n\n\n```python\nn = 20\nfor m in np.arange(0,10):\n tm, ym = euler(f, a, b, y0, n)\n err = abs(ym[n-1] - y_ex(b))\n if m > 0:\n print('n = {:}, err = {:.2f}, fac = {:.2f}'.format(n, err, err_old/err))\n err_old = err\n n = n*2\n```\n\n n = 40, err = 336.67, fac = 1.89\n n = 80, err = 173.19, fac = 1.94\n n = 160, err = 87.86, fac = 1.97\n n = 320, err = 44.25, fac = 1.99\n n = 640, err = 22.21, fac = 1.99\n n = 1280, err = 11.12, fac = 2.00\n n = 2560, err = 5.57, fac = 2.00\n n = 5120, err = 2.78, fac = 2.00\n n = 10240, err = 1.39, fac = 2.00\n\n\nThere are more accurate schemes, e.g. the popular 4th-order Runge-Kutta method `rk4`:\n\n\n```python\ndef rk4(f,a,b,y0,n):\n \"\"\"\n RK4 solves the ordinary differential equation with right-hand side f and\n initial value y(a) = y0 on the interval [a, b] with n steps of 4th-order \n Runge-Kutta method\n \"\"\"\n dy=lambda t, y, dt: (\n lambda dy1: (\n lambda dy2: (\n lambda dy3: (\n lambda dy4: (dy1 + 2*dy2 + 2*dy3 + dy4)/6\n )( dt * f( t + dt , y + dy3 ) )\n )( dt * f( t + dt/2, y + dy2/2 ) )\n )( dt * f( t + dt/2, y + dy1/2 ) )\n )( dt * f( t , y ) )\n h = (b - a)/n\n t,y = a,y0\n t_,y_=[],[]\n while t <= b:\n t_.append(t)\n y_.append(y)\n t += h\n y += dy(t,y,h)\n return(np.array(t_),np.array(y_))\n\nn = 20\n[t_rk4, y_rk4] = rk4(f, a, b, y0, n)\n```\n\n`scipy` provides `solve_ivp` as interface to various solvers for initial value problems of systems of ODEs. \n- Explicit Runge-Kutta methods ('RK23', 'RK45', 'DOP853') should be used for non-stiff problems\n- Implicit methods ('Radau', 'BDF') for stiff problems\n- Among Runge-Kutta methods, 'DOP853' is recommended for solving with high precision (low values of `rtol` and `atol`)\n\n\n```python\nRK23 =solve_ivp(f,(a, b),[y0],method='RK23')\nRK45 =solve_ivp(f,(a, b),[y0],method='RK45')\nDOP853=solve_ivp(f,(a, b),[y0],method='DOP853')\nRadau =solve_ivp(f,(a, b),[y0],method='Radau')\nBDF =solve_ivp(f,(a, b),[y0],method='BDF')\n```\n\nObserve how the step size is adaptive (non-uniform) in this case.\n\n\n```python\nfig, ax = plt.subplots(figsize=(8, 5))\nax.plot(t_ex ,y_ex(t_ex) , label=r'exact')\nax.plot(t_euler, y_euler, label=r'euler');\nax.plot(t_rk4 , y_rk4 , label=r'rk4');\nax.plot(RK23.t, RK23.y[0,:], 'o', label=r'RK23');\nax.plot(RK45.t, RK45.y[0,:], 'o', label=r'RK45');\nax.plot(DOP853.t, DOP853.y[0,:], 'o', label=r'DOP853');\nax.plot(Radau.t, Radau.y[0,:], 'o', label=r'Radau');\nax.plot(BDF.t, BDF.y[0,:], 'o', label=r'BDF');\nplt.title('Numerical solution of ordinary differential equations')\nplt.legend(loc='lower right')\nplt.show()\n```\n\n\n```python\nt_rk4\n```\n\n\n\n\n array([0. , 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1. , 1.1, 1.2,\n 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9])\n\n\n\n## Systems of ODEs, additional parameters\n\nSo far we have taken the dimension to be $d=1$. An example of a system with $d=2$ unknown functions is the [Lotka-Volterra (predator-prey) model](https://en.wikipedia.org/wiki/Lotka\u2013Volterra_equations):\n\n$$\n\\begin{align}\n\\frac{dy_1}{dt} &= \\alpha y_1 - \\beta y_1 y_2,\\\\ \n\\frac{dy_2}{dt} &= \\gamma y_1 y_2 - \\delta y_2 \n\\end{align}\n$$\n\nHere $y_1$ is the population of prey, $y_2$ the population of predators, and $\\alpha,\\beta,\\gamma,\\delta>0$ are constants. Each has a certain birthrate and a deathrate. 
The birthrate of the prey is proportional to its current population and the birthrate of the predator is proportional to both its population and the prey population. The deathrate of the prey is proportional to both its population and the predator population and the deathrate of the predator is proportional to its population.\n\nThe equations are non-linear since they include the product $y_1 y_2$. This implies that finding an analytical solution will be much harder, but for the numerical integration the problem isn't any more difficult.\n\n\n```python\nf = lambda t, y: [p*y[0] - q*y[0]*y[1], r*y[0]*y[1] - s*y[1]]\np = 0.4; q = 0.04; r = 0.02; s = 2\na = 0; b = 15\ny0 = [105, 8]\nlotka_volterra.main(p, q, r, s, y0[0], y0[1], [a, b])\n```\n\nThe population dynamics are shown in the left plot, with the population of predators trailing that of prey by 90\u00b0 in the cycle.\n\nThe right plot shows the solutions parametrically as orbits in phase space, without representing time, but with one axis representing the number of prey and the other axis representing the number of predators for all times. \n\n**Population equilibrium** occurs in the model when neither of the population levels is changing, i.e. when both of the derivatives are equal to 0:\n\n$$\n\\begin{align}\n0 &= \\alpha y_1 - \\beta y_1 y_2,\\\\ \n0 &= \\gamma y_1 y_2 - \\delta y_2 \n\\end{align}\n$$\n\nThe above system of equations yields two solutions:\n$$y_1=0,y_2=0$$\n$$y_1=\\frac{\\delta}{\\gamma},y_2=\\frac{\\alpha}{\\beta}$$\n\nThe first solution effectively represents the extinction of both species. If both populations are at 0, then they will continue to be so indefinitely. The second solution represents a **fixed point** at which both populations sustain their current, non-zero numbers, and, in the simplified model, do so indefinitely.\n\n\n```python\nprint(r'y_1 = {}'.format(s/r))\nprint(r'y_2 = {}'.format(p/q))\nlotka_volterra.main(p, q, r, s, s/r, p/q, [a, b])\n```\n\nThe **stability of the fixed point** at the origin can be determined by performing a [linearization](https://www.youtube.com/watch?v=k_IkbxwSK7g) using partial derivatives. The linearization of the Lotka-Volterra system about an equilibrium point $(y_1^*, y_2^*)$ has the form\n\n$$\n\\begin{bmatrix}\n \\frac{du}{dt}\\\\\n \\frac{dv}{dt}\n\\end{bmatrix}\n= \\mathbf{J}\n\\begin{bmatrix}\n u\\\\\n v\n\\end{bmatrix}\n$$\n\n\nwhere $u = y_1 \u2212 y_1^*$, $v = y_2 \u2212 y_2^*$ and $\\mathbf{J}$ is the Jacobian ([Community matrix](https://en.wikipedia.org/wiki/Community_matrix)).\n\nThe [Jacobian matrix](https://en.wikipedia.org/wiki/Jacobian_matrix) of the predator\u2013prey model is\n\n$$\\mathbf{J}(y_1,y_2)=\n\\begin{bmatrix}\n\\alpha - \\beta y_2 & -\\beta y_1\\\\\n\\gamma y_2 & \\gamma y_1 - \\delta\n\\end{bmatrix}\n$$\n\nThe eigenvalues of the Jacobian determine the stability of the equilibrium point. By the [stable manifold theorem](https://en.wikipedia.org/wiki/Stable_manifold_theorem), if one or both eigenvalues of $\\mathbf{J}$ have positive real part then the equilibrium is unstable, but if all eigenvalues have negative real part then it is stable.\n\nFor the **first fixed point** (extinction) of $(0, 0)$, the Jacobian matrix $\\mathbf{J}$ becomes\n\n$$\\mathbf{J}(0,0)=\n\\begin{bmatrix}\n\\alpha & 0\\\\\n0 & - \\delta\n\\end{bmatrix}\n$$\n\nThe eigenvalues of this matrix are\n\n$$\\lambda_1=\\alpha,\\lambda_2=-\\delta$$\n\nIn the model $\\alpha$ and $\\delta$ are always greater than zero, and as such the sign of the eigenvalues above will always differ. 
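\n\nAs a quick numerical sanity check (a sketch added here, not part of the original notebook), we can plug the parameter values used in the code above ($\\alpha = p = 0.4$, $\\delta = s = 2$) into $\\mathbf{J}(0,0)$ and let NumPy compute the eigenvalues:\n\n\n```python\nimport numpy as np\n\n# J(0,0) for alpha = p = 0.4 and delta = s = 2 (parameter values from the code above)\nJ00 = np.array([[0.4, 0.0],\n                [0.0, -2.0]])\nprint(np.linalg.eigvals(J00))  # one positive and one negative eigenvalue\n```\n\n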
Hence the fixed point at the origin is a saddle point.\n\nThe stability of this fixed point is of significance. If it were stable, non-zero populations might be attracted towards it, and as such the dynamics of the system might lead towards the extinction of both species for many cases of initial population levels. However, as the fixed point at the origin is a saddle point, and hence unstable, it follows that the extinction of both species is difficult in the model. (In fact, this could only occur if the prey were artificially completely eradicated, causing the predators to die of starvation. If the predators were eradicated, the prey population would grow without bound in this simple model.) The populations of prey and predator can get infinitesimally close to zero and still recover.\n\nFor the **second fixed point** of $\\left(\\frac{\\delta}{\\gamma},\\frac{\\alpha}{\\beta}\\right)$, the Jacobian matrix $\\mathbf{J}$ becomes\n\n$$\\mathbf{J}\\left(\\frac{\\delta}{\\gamma},\\frac{\\alpha}{\\beta}\\right)=\n\\begin{bmatrix}\n0 & -\\frac{\\beta\\delta}{\\gamma}\\\\\n\\frac{\\alpha\\gamma}{\\beta} & 0\n\\end{bmatrix}\n$$\n\nThe eigenvalues of this matrix are\n\n$$\\lambda_1=i\\sqrt{\\alpha\\delta},\\lambda_2=-i\\sqrt{\\alpha\\delta}$$\n\nAs the eigenvalues are both purely imaginary and conjugate to each other, this fixed point is elliptic, so the solutions are periodic, oscillating on a small ellipse around the fixed point, with a frequency $\\omega=\\sqrt{\\lambda_1\\lambda_2}=\\sqrt{\\alpha\\delta}$ and period $T=\\frac{2\\pi}{\\sqrt{\\lambda_1\\lambda_2}}$.\n\nAs illustrated in the circulating oscillations in the figure above, the level curves are closed orbits surrounding the fixed point.\n\n\n```python\nprint('T = {}'.format(2*np.pi/np.sqrt(p*s)))\n```\n\n    T = 7.024814731040727\n\n\nHere is the interactive version of this model:\n\n\n```python\nlotka_volterra.interactive()\n```\n\n\n    interactive(children=(FloatSlider(value=0.4, description='Birth Rate of Prey', layout=Layout(width='99%'), max\u2026\n\n\n**Note**: In real-life situations, random fluctuations of the discrete numbers of individuals, as well as the family structure and life-cycle of prey, might cause the prey to actually go extinct, and, by consequence, the predators as well.\n\n## Higher-order ODEs\n\nHigher-order ODEs can be rewritten as systems of first-order ODEs. For example, the second-order initial value problem\n\n$$\\frac{d^2 y}{dt^2} = -k^2 y, \\quad y(0) = y_0, \\quad y'(0) = y_0'$$\n\nis equivalent to the following first-order system obtained by introducing the auxiliary variables $y_1 := y$ and $y_2 := y'$:\n\n$$\\frac{dy_1}{dt} = y_2, \\quad \\frac{dy_2}{dt} = -k^2 y_1, \\quad y_1(0) = y_0, \\quad y_2(0) = y_0'.$$\n\n\n```python\nk = np.pi\nf = lambda t, y: [y[1], -k**2*y[0]]\na = 0; b = 2\ny0 = [1, 0]\nDOP853 = solve_ivp(f, [a,b],y0, method='DOP853')\ny_ex = lambda t: np.cos(k*t)\nt_ex = np.linspace(a,b,200)\n\nfig, ax = plt.subplots(figsize=(8, 5))\nax.plot(t_ex ,y_ex(t_ex) , label=r'exact')\nax.plot(DOP853.t, DOP853.y[0,:], 'o', label=r'DOP853');\nplt.title('Numerical solution of ordinary differential equations')\nplt.legend(loc='lower right')\nplt.show()\n```\n\n# Simulation of random processes\n\nRecall that `np.random.rand` generates uniformly distributed random numbers between 0 and 1, whereas `np.random.randn` generates normally distributed random numbers with $\\mu=0$ and $\\sigma=1$. 
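\n\nAs a quick check (a minimal sketch, not part of the original notebook), we can draw a large sample from each generator and compare the sample mean and standard deviation with the expected values (mean $1/2$ and standard deviation $1/\\sqrt{12}\\approx 0.289$ for the uniform case, $\\mu=0$ and $\\sigma=1$ for the normal case):\n\n\n```python\nimport numpy as np\n\nu = np.random.rand(100000)   # uniform on [0, 1)\ng = np.random.randn(100000)  # standard normal\nprint(u.mean(), u.std())     # should be close to 0.5 and 0.289\nprint(g.mean(), g.std())     # should be close to 0.0 and 1.0\n```\n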
\n\nTossing a coin can be simulated by dividing the interval into two equal parts:\n\n\n```python\nr = np.random.rand()\nif r > 0.5:\n print('head')\nelse:\n print('tail')\n```\n\n tail\n\n\nA nice application of the Monte Carlo method is to estimate $\\pi$. Take a square of side length 2 and inscribe a circle of radius 1. Generate uniformly distributed pairs (i.e. points in $\\mathbf{R}^2$) of random numbers in $[0,2]^2$. Then the ratio of points falling into the circle to the total number of points should be the ratio of the areas of the circle and the square, namely $\\pi/4$. We construct a logical vector to detect and count the points inside the circle.\n\n\n```python\nn = 1000000\nx = 2*np.random.rand(2, n)\nincircle = (x[0,:] - 1)**2 + (x[1,:] - 1)**2 < 1\npi_estimated = 4*sum(incircle)/n\npi_estimated\n```\n\n\n\n\n 3.139724\n\n\n", "meta": {"hexsha": "569b1ffcde285476f411cf6dfe2c9680b4a7542a", "size": 192637, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/NumericalDifferentiation.ipynb", "max_stars_repo_name": "OleBo/MathSo", "max_stars_repo_head_hexsha": "1f9fa0492d467c0bb0479768c503eee7723ae777", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-01-08T23:52:57.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-08T23:52:57.000Z", "max_issues_repo_path": "notebooks/NumericalDifferentiation.ipynb", "max_issues_repo_name": "OleBo/MathSo", "max_issues_repo_head_hexsha": "1f9fa0492d467c0bb0479768c503eee7723ae777", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/NumericalDifferentiation.ipynb", "max_forks_repo_name": "OleBo/MathSo", "max_forks_repo_head_hexsha": "1f9fa0492d467c0bb0479768c503eee7723ae777", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-01-02T21:14:30.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-02T21:14:30.000Z", "avg_line_length": 188.1220703125, "max_line_length": 42424, "alphanum_fraction": 0.8844095371, "converted": true, "num_tokens": 8908, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9489172673767973, "lm_q2_score": 0.8902942355821459, "lm_q1q2_score": 0.8448155731899245}} {"text": "```python\n%pylab inline\n```\n\n Populating the interactive namespace from numpy and matplotlib\n\n\n\n```python\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import proj3d, Axes3D\nfrom sympy import symbols, Matrix, solve\nimport math\nfrom sympy.parsing.sympy_parser import parse_expr\n```\n\n\n```python\nplt.figure(figsize=(10,10))\nt = symbols('t')\nu = parse_expr('Matrix([t*cos(3*t), t*sin(2*t)])')\na = 16\nlim = [0, a]\nts = np.linspace(lim[0], lim[1], a**2)\nY = np.array([u.subs('t', t).evalf() for t in ts]).astype('float')\nxs = [x[0] for x in Y]\nys = [x[1] for x in Y]\nplot(xs, ys)\n```\n\n\n```python\nt = symbols('t')\nu = parse_expr('Matrix([t*cos(3*t), t*sin(2*t), t])')\na = 16\nlim = [0, a]\nts = np.linspace(lim[0], lim[1], a**2)\nY = np.array([u.subs('t', t).evalf() for t in ts]).astype('float')\nxs = [x[0] for x in Y]\nys = [x[1] for x in Y]\nzs = [x[2] for x in Y]\nfig = plt.figure(figsize=(10,10))\n#plt.rcParams['savefig.dpi'] = 600\nax = fig.add_subplot(111, projection='3d')\nplot(xs, ys, zs, c=\"purple\")\nplt.show()\n```\n\n\n```python\ntype(u)\n```\n\n\n\n\n sympy.matrices.dense.MutableDenseMatrix\n\n\n\n\n```python\nu.shape\n```\n\n\n\n\n (3, 1)\n\n\n\n\n```python\ndef is_matrix_3x1(u): return 'Matrix' in str(type(u)) and u.shape == (3, 1)\ndef is_matrix_2x1(u): return 'Matrix' in str(type(u)) and u.shape == (2, 1)\n```\n\n\n```python\nis_matrix_3x1(u)\n```\n\n\n\n\n True\n\n\n\n\n```python\nis_matrix_2x1(u)\n```\n\n\n\n\n False\n\n\n\n\n```python\nu.free_symbols\n```\n\n\n\n\n {t}\n\n\n\n# 1 var extremas\n\n\n```python\nx = symbols('x')\nexpr = parse_expr('-x**3 + 10*x**2 - x')\nxs = np.linspace(-2, 10, 20)\nys = np.array([expr.subs('x', x).evalf() for x in xs]).astype('float')\nplot(xs, ys)\n```\n\n\n```python\nexpr.diff('x')\n```\n\n\n\n\n -3*x**2 + 20*x - 1\n\n\n\n\n```python\nexpr_dx = expr.diff(x)\nxs = np.linspace(-2, 10, 20)\nys = np.array([expr_dx.subs('x', x).evalf() for x in xs]).astype('float')\nplot(xs, ys)\naxhline(0, c='black', lw=1, ls='dashed')\ncp = solve(expr_dx, 'x')\nplot(cp, 'o')\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "92d14b9b1524e7b65c2aa63aeba78ae29e570761", "size": 286120, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Calcupy/Parametric curves.ipynb", "max_stars_repo_name": "darkeclipz/jupyter-notebooks", "max_stars_repo_head_hexsha": "5de784244ad9db12cfacbbec3053b11f10456d7e", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2018-08-28T12:16:12.000Z", "max_stars_repo_stars_event_max_datetime": "2018-08-28T12:16:12.000Z", "max_issues_repo_path": "Calcupy/Parametric curves.ipynb", "max_issues_repo_name": "darkeclipz/jupyter-notebooks", "max_issues_repo_head_hexsha": "5de784244ad9db12cfacbbec3053b11f10456d7e", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Calcupy/Parametric curves.ipynb", "max_forks_repo_name": "darkeclipz/jupyter-notebooks", "max_forks_repo_head_hexsha": "5de784244ad9db12cfacbbec3053b11f10456d7e", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 812.8409090909, "max_line_length": 166996, "alphanum_fraction": 0.9567803719, 
"converted": true, "num_tokens": 706, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9489172601537141, "lm_q2_score": 0.8902942173896132, "lm_q1q2_score": 0.844815549496047}} {"text": "# Fractals and Julia Sets\n\nIn this unit we explore iterations and the images that one can generate with them. \n\n## A Fractal from Pascal's Triangle\nBefore we take up Newton's method and continuous mathematics and the [fractals](https://en.wikipedia.org/wiki/Fractal) that can be generated, let's begin with something simpler: Pascal's triangle, attributed to [Blaise Pascal](https://en.wikipedia.org/wiki/Blaise_Pascal), although it was certainly known before that: see the Wikipedia entry on [Halayudha](https://en.wikipedia.org/wiki/Halayudha) who clearly described it in the 10th century CE, commenting on the work of [Acharya Pingala](https://en.wikipedia.org/wiki/Pingala) from the 3rd/2nd century BCE; Pingala seems to have been the first to write about _binary_ numbers, which we will also use here.\n\nIn any case, the triangle of binomial coefficients $\\binom{n}{m}$ from the binomial theorem can be written\n
1
\n
1,1
\n
1,2,1
\n
1,3,3,1
\n
1,4,6,4,1
\n
1,5,10,10,5,1
\n
1,6,15,20,15,6,1
\n
1,7,21,35,35,21,7,1
\n\nand so on. The $n$th row contains the coefficients of $(a + b)^n$ when expanded: $a^n + \\binom{n}{1}a^{n-1}b + \\binom{n}{2} a^{n-2}b^2 + \\cdots + \\binom{n}{n-1}a b^{n-1} + b^n$. So far so good, and this should be familiar from high school. No fractals so far, though.\n\nOne idea pursued in the 19th century was to investigate the evenness or oddness of binomial coefficients. So, instead of taking that triangle literally, we instead write $1$ if the number is odd, and $0$ if the number is even; this is the beginnings of modular arithmetic. One can compute this in Maple (and Python); in Maple the modulo operator is called `mod` while in Python it is called `%`. When we take the binomial coefficient triangle mod 2, we get\n
1
\n
1,1
\n
1,0,1
\n
1,1,1,1
\n
1,0,0,0,1
\n
1,1,0,0,1,1
\n
1,0,1,0,1,0,1
\n
1,1,1,1,1,1,1,1
\n\nStill not seeing a fractal---but that's because we're just getting started. If we go to 16 rows instead of the 8 above, we get (by the following Maple command)\n\n```{image} ../Figures/Fractals/sierpinskibinomial_image0.png\n:height: 300px\n:alt: Maple command for 16 by 16 matrix\n:align: center\n```\n\nThe binomial triangle is tilted over to the left, and we have a bunch of wasted space in the upper triangular part of the matrix, but that's ok. Now we make an image out of that: every entry with a $1$ gets a black square, and every entry with a $0$ is left blank.\n\n```{image} ../Figures/Fractals/sierpinskibinomial_image1.png\n:height: 300px\n:alt: Maple browse image 16\n:align: center\n```\n\nNow let's try a $32$ by $32$ version:\n\n```{image} ../Figures/Fractals/sierpinskibinomial_image2.png\n:height: 300px\n:alt: Maple browse image 32\n:align: center\n```\n\nBigger yet, $64$ by $64$\n\n```{image} ../Figures/Fractals/sierpinskibinomial_image3.png\n:height: 300px\n:alt: Maple browse image 64\n:align: center\n```\n\n$128$ by $128$\n\n```{image} ../Figures/Fractals/sierpinskibinomial_image4.png\n:height: 300px\n:alt: Maple browse image 128\n:align: center\n```\n\n$256$ by $256$\n\n```{image} ../Figures/Fractals/sierpinskibinomial_image5.png\n:height: 300px\n:alt: Maple browse image 256\n:align: center\n```\n\n$512$ by $512$\n\n```{image} ../Figures/Fractals/sierpinskibinomial_image6.png\n:height: 300px\n:alt: Maple browse image 512\n:align: center\n```\n\nEach successive image contains three half-size copies of the image before, surrounding the triangle in the middle. The details get finer and finer with each increase in the amount of data. This figure is called [The Sierpinski Triangle](https://en.wikipedia.org/wiki/Sierpi%C5%84ski_triangle), named after [Waclaw Sierpinski](https://en.wikipedia.org/wiki/Wac%C5%82aw_Sierpi%C5%84ski). In the limit as the number of rows goes to infinity, the figure becomes a _fractal_, an object that does not have an integer dimension. It's a bit startling that this object arises out of a combinatorial discussion with integers mod 2!\n\nOne can do other things with binomial coefficients to produce fractals---see the beautiful paper [\"Zaphod Beeblebrox's Brain and the $59$th row of Pascal's Triangle\"](https://www.jstor.org/stable/2324898) by Andrew Granville; and one wonders if similar things can happen with other common combinatorial numbers such as Stirling numbers or Eulerian numbers or, well, you get the idea, even if you don't know just what those other numbers are, yet.\n\nSee also [Stephen Wolfram's 1984 paper](https://www.jstor.org/stable/2323743) which contains several pictures of this kind, and quite a bit of information on the \"fractal dimension\" of these figures. That paper also makes a connection to what are known as \"cellular automata\".\n\n## Newton's method, Fractals, and Chaos\n\nNewton's method is not perfect. 
If we ask it to do something impossible, such as find a real root of $f(x) = x^2+1 = 0$, it can go to infinity (if any $x_n = 0$, then we divide by zero on the next iterate because $f'(0)=0$); it can _cycle_ as in the graph below; or it can wander \"chaotically\".\n\n$$\nx_{n+1} = x_n - \\frac{x_n^2+1}{2x_n} = \\frac12\\left( x_n - \\frac{1}{x_n}\\right)\n$$\n\n```{figure} ../Figures/Fractals/Period3Animation.gif\n---\nheight: 400px\nname: periodic3animation\n---\nAnimation of Newton's Method on $x^2+1=0$ with a period-3 orbit\n```\n\nConsider the polynomial $p(x) = x^5 + 3x + 2$, and consider the following Python implementation of 8 iterations of Newton's method to find a zero, starting from the initial estimate $x_0 = -1.0$.\n\n\n```python\nimport numpy as np\nfrom numpy.polynomial import Polynomial as Poly\np = Poly( [2, 3, 0, 0, 0, 1])\ndp= p.deriv()\nprint( p )\nprint( dp ) # Python polynomials know how to differentiate themselves\nx = np.zeros( 8 ) # Try 8 iterations, why not\nx[0] = -1.0 # Initial estimate for a root\nfor j in range(1,8):\n x[j] = x[j-1] - p( x[j-1])/dp( x[j-1] )\nprint( x )\nprint( p(x) )\n```\n\n 2.0 + 3.0 x**1 + 0.0 x**2 + 0.0 x**3 + 0.0 x**4 + 1.0 x**5\n 3.0 + 0.0 x**1 + 0.0 x**2 + 0.0 x**3 + 5.0 x**4\n [-1. -0.75 -0.64364876 -0.632914 -0.63283452 -0.63283452\n -0.63283452 -0.63283452]\n [-2.00000000e+00 -4.87304688e-01 -4.14163790e-02 -3.02195989e-04\n -1.60124998e-08 0.00000000e+00 0.00000000e+00 0.00000000e+00]\n\n\nThat was pretty straightforward, once the initial estimate (-1.0) was chosen. In your numerical analysis class you will study the behaviour of Newton's iteration in general: it is a very powerful method used not just for single polynomials as here but also for _systems_ of nonlinear equations. Here, though, it's both _overkill_ and _underwhelming_: for roots of polynomials, we really want to find _all_ the roots. Of course, Python has a built-in method to do that:\n\n(How does it work, you ask? We will find out, when we talk about _eigenvalues_ in the next unit)\n\n\n```python\np.roots()\n```\n\n\n\n\n array([-0.74846849-0.99543395j, -0.74846849+0.99543395j,\n -0.63283452+0.j , 1.06488575-0.95054603j,\n 1.06488575+0.95054603j])\n\n\n\nWe see that four out of the five roots of that particular polynomial were complex, but our Newton iteration got the real root nicely. If we wanted instead to get the complex roots, we would have had to use a _complex initial estimate_ because---since the coefficients of the polynomial are real---the Newton iteration stays real if the initial estimate is real. So, without knowing the roots, how do we get an initial estimate?\n\nHere is where this OER departs from the standard curriculum: we instead ask _what happens if we take every possible initial estimate_? This generates something typically known as the \"Newton Fractal\" for a function---to a pure mathematician analyst, this is more likely to be termed a \"Fatou set\", named after the French astronomer [Pierre Fatou](https://en.wikipedia.org/wiki/Pierre_Fatou); the notions are a bit distinct, but if we are using Newton's method to find the roots of a polynomial, the resulting Newton fractal is indeed a \"Fatou set\". The exact definition of a Fatou set is given in the Wikipedia page for \"Julia Set\", which we will cite below; for now, let's just use the term \"Newton Fractal\".\n\nThe following program is not intended as a \"full-featured-fractal\" program. 
It is meant to get you off the ground: it uses nested loops, and numpy arrays, and the \"filled contour plot\" from matplotlib to plot the basins of attraction for Newton's method applied to a simple function, $x^3-2$. This function has three roots, two of which are complex. We hand-supply the derivative in this case. Every initial point which goes to the real root (in twenty iterations) gets coloured red; every initial point which goes to the complex root $(-1/2 + i\\sqrt{3}/2)\\sqrt{2}$ is coloured yellow; every initial point which goes to the complex conjugate of that last root gets coloured brown. All of the points which (after twenty iterations) are still undecided are coloured various other colours. You will be asked to improve this code in the exercises (this includes the possibility of throwing it all out and writing your own from scratch).\n\n\n```python\n# A very short hacky program to draw the edges of a Newton fractal\n# RMC 2021.12.20\nimport numpy as np\nfrom matplotlib import pyplot as plt\n# We will take an N by N grid of initial estimates\nN = 600 # 800 by 800 is a lot and it takes a few seconds to draw \nx = np.linspace(-2,2,N)\ny = np.linspace(-2,2,N)\nF = np.zeros((N,N))\n# Here is the function and its derivative whose zeros we are looking for\nf = lambda x: x ** 3 - 2;\ndf = lambda x: 3*x**2 ;\n# SirIsaac performs one Newton iteration\nSirIsaac = lambda x: x - f(x)/df(x);\nfor k in range(N):\n for i in range(N):\n # We range over all initial estimates in the grid\n z = x[i]+1j*y[k];\n # Hard-wire in 20 iterations (maybe not enough)\n for m in range(20):\n z = SirIsaac( z )\n # After twenty iterations we hope the iteration has settled down, except on\n # the boundary between basins of attraction.\n # The phase (angle) is a likely candidate for a unique identifier for the root\n F[k,i] = np.angle( z ) # Rows, Columns\n# A magic incantation\nX,Y = np.meshgrid( x, y )\nplt.figure(figsize=(10,10))\nplt.contourf( X, Y, F, levels=[-3,-2,0,2,3], colors=['brown','red','black','yellow','black','blue','black'] )\nplt.gca().set_aspect('equal', adjustable='box')\n```\n\nLet's run that again, this time zooming in to a region near $-1$.\n\n\n```python\n# Zoom in to the region near -1\nN = 600 # 800 by 800 is a lot and it takes a few seconds to draw \nx = np.linspace(-1.2,-0.8,N)\ny = np.linspace(-0.2,0.2,N)\nF = np.zeros((N,N))\n# Here is the function and its derivative whose zeros we are looking for\nf = lambda x: x ** 3 - 2;\ndf = lambda x: 3*x**2 ;\n# SirIsaac performs one Newton iteration\nSirIsaac = lambda x: x - f(x)/df(x);\nfor k in range(N):\n for i in range(N):\n # We range over all initial estimates in the grid\n z = x[i]+1j*y[k];\n # Hard-wire in 20 iterations (maybe not enough)\n for m in range(20):\n z = SirIsaac( z )\n # After twenty iterations we hope the iteration has settled down, except on\n # the boundary between basins of attraction.\n # The phase (angle) is a likely candidate for a unique identifier for the root\n F[k,i] = np.angle( z ) # Rows, columns\n# A magic incantation\nX,Y = np.meshgrid( x, y )\nplt.figure(figsize=(10,10))\nplt.contourf( X, Y, F, levels=[-3,-2,0,2,3], colors=['brown','red','black','yellow','black','blue','black'] )\nplt.gca().set_aspect('equal', adjustable='box')\n```\n\n### Looking back at those plots and that code\n\nThe first question we should ask ourselves is _is that code correct_? Is it doing what we want? 
If we are surprised at anything about that, is the surprise owing to the underlying math, or to some bug or weakness in the code?\n\nWe suspect the funny shapes (maybe they look like red and black pantaloons, from a [Harlequin](https://en.wikipedia.org/wiki/Harlequin)?) that don't fit the chain pattern are artifacts of our code, somehow.\n\nLet's zoom in even more, but increase the number of iterations.\n\n\n```python\n# Zoom in to the region near -1\nN = 800 # 800 by 800 is a lot and it takes a few seconds to draw \nx = np.linspace(-1.02,-0.98,N)\ny = np.linspace(-0.02,0.02,N)\nF = np.zeros((N,N))\n# Here is the function and its derivative whose zeros we are looking for\nf = lambda x: x ** 3 - 2;\ndf = lambda x: 3*x**2 ;\n# SirIsaac performs one Newton iteration\nSirIsaac = lambda x: x - f(x)/df(x);\nfor k in range(N):\n for i in range(N):\n # We range over all initial estimates in the grid\n z = x[i]+1j*y[k];\n # Hard-wire in 40 iterations (maybe not enough)\n for m in range(40):\n z = SirIsaac( z )\n # After twenty iterations we hope the iteration has settled down, except on\n # the boundary between basins of attraction.\n # The phase (angle) is a likely candidate for a unique identifier for the root\n F[k,i] = np.angle( z ) # Row, column\n# A magic incantation\nX,Y = np.meshgrid( x, y )\nplt.figure(figsize=(10,10))\nplt.contourf( X, Y, F, levels=[-3,-2,0,2,3], colors=['brown','red','black','yellow','black','blue','black'] )\nplt.gca().set_aspect('equal', adjustable='box')\n```\n\nRight. Running the code at higher resolution seems to fix the problem. Now we might trust the picture to feed us questions that have something to do with the math. Here's one: what's actually happening at $z=-1+0j$? That is, if we start the iteration there, using exact arithmetic, what would happen?\n\nIn the exercises, you are asked to think of some of your own questions. Experiment with this code; change the parameters, the resolution, the zooming, the function; whatever you like. Bring out the \"sandbag\" questions, maybe: what do you notice? What do you see? What do you wonder?\n\n## Variations: Halley's Method, Secant Method, Infinitely Many Others\n\nNewton's method can be understood as replacing the nonlinear equation $f(x)=0$ with a _linear approximation_ $f(a) + f'(a)(x-a) = 0$ and solving that instead; if one starts with $x=a$ as an initial approximation to the root of $f(x)=0$ then hopefully the solution of the linear approximation, namely $x = a - f(a)/f'(a)$, would be an improved approximation to the root. But there are other methods. As discussed in the Exercises in the [Rootfinding unit](rootfinding.ipynb), there is also the __secant method__ which uses _two_ initial estimates of the root, say $x_0$ and $x_1$, to generate\n\\begin{equation}\nx_{n+1} = x_n - \\frac{ f(x_n)(x_n-x_{n-1})}{f(x_n)-f(x_{n-1})}\n\\end{equation}\nand you can see that instead of having $f'(x_n)$ we instead have the difference quotient---the slope of the secant line---\n\\begin{equation}\nf'(x_n) \\approx \\frac{ f(x_n)-f(x_{n-1}) }{x_{n}-x_{n-1}}\n\\end{equation}\nplaying the same role. We save the values of $f(x_0)$, $f(x_1)$, $\\ldots$ as we go along so we don't have to recompute them; and each iteration costs us only one new evaluation of the function (which can serve as a check on our errors as well) each time. Newton's method, in contrast, needs an evaluation of $f(x)$ _and_ an evaluation of $f'(x)$ for each iteration, so it costs more per iteration. 
The secant method tends to be take more iterations but be faster to compute on each step, so it is frequently faster overall. We can study \"secant fractals\" in the same way we studied Newton fractals if we insist on a rule for generating $x_1$ from $x_0$; for instance, we could always take $x_1 = x_0 - f(x_0)/f'(x_0)$ so we would use one Newton iteration to get started. Frequently this information is available at the beginning, so it isn't much of a \"cheat\".\n\nWe can go the other way: also as discussed in the exercises in the rootfinding unit, there is something known as [_Halley's method_](https://en.wikipedia.org/wiki/Halley%27s_method), named after the astronomer [Edmond Halley](https://en.wikipedia.org/wiki/Edmond_Halley):\n\\begin{equation}\nz_{n+1} = z_n - \\frac{f(z_n)}{f'(z_n) - \\frac{f(z_n)f''(z_n)}{2f'(z_n)}}\n\\end{equation}\nThis requires _two_ derivatives; if one derivative is too expensive, then two is twice too much. But sometimes derivatives are cheap and this method becomes practical. Consider for example the task of inverting the function\n$f(w) = w\\exp(w)-z = 0$; that is, given a value for $z$, find a value of $w$ for which the equation is true. We are computing the [Lambert W function](http://www.orcca.on.ca/LambertW) of z. Since the \"expensive\" part of the computation of $f(w)$ is the exponential, $\\exp(w)$, the derivatives $f'(w) = (1+w)\\exp(w)$ and $f''(w) = (2+w)\\exp(w)$ are essentially free thereafter; so Halley's method becomes quite attractive, _because it takes even fewer iterations than Newton's method_ (typically) for this function.\n\nAn interesting trick (dating at least back to the 1920's) converts Halley's method for $f(z)=0$ into Newton's method for a different function $F(z) = 0$: put $F(z) = f(z)/\\sqrt{f'(z)}$. Then some algebra shows that Newton's iteration on $F(z)$, namely\n\\begin{equation}\nz_{n+1} = z_n - \\frac{F(z_n)}{F'(z_n)}\n\\end{equation}\nis converted (by use of the chain rule to compute $F'(z)$) _exactly_ into Halley's method for $f(z)=0$.\nIt is quite instructive to compute the Newton fractal for a function, and then compute the Halley fractal for the same function. You can even use the same imaging code, just by swapping one function for another.\n\nFor instance, here is the Newton fractal (Fatou set) for $f(z) = z^{8}+4 z^{7}+6 z^{6}+6 z^{5}+5 z^{4}+2 z^{3}+z^{2}+z$, as computed by Maple's `Fractals:-EscapeTime:-Newton` command\n\n```{image} ../Figures/Fractals/M4Newton.png\n:height: 300px\n:alt: Mandelbrot 4 Newton Fractal\n:align: center\n```\n\nand here is the Halley fractal (Fatou set) for the same function, which is also the Newton fractal for $F(z) = f(z)/\\sqrt{f'(z)}$. Again, this was computed by `Fractals:-EscapeTime:-Newton`.\n\n```{image} ../Figures/Fractals/M4Halley.png\n:height: 300px\n:alt: Mandelbrot 4 Halley Fractal\n:align: center\n```\n\nIn order to _understand_ the differences in the two images, it is necessary to understand what the colors mean; we think that the different shades of orange count the number of iterations to reach each root (but we're not terribly sure: the documentation of that opaque code is not clear on that point). If that is true, then one can see from the two pictures that Halley's method takes fewer iterations to get a good approximation to a root. However, the `Fractals:-Escapetime:-Newton` code takes _ten times as long_ for the Halley case, likely because of internal compiler reasons (yes, Maple has a compiler, but it is quite limited). 
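\n\nIf you want a starting point for experimenting with this yourself, here is one possible sketch (our addition, not the notebook's own code) of a Halley update step in Python; the name `SirEdmond` is made up to mirror the `SirIsaac` lambda in the Newton fractal program above, and `f`, `df`, `d2f` are assumed to be the function and its first two derivatives:\n\n\n```python\n# A sketch only: f, df and d2f are assumed to be defined, e.g. for f(z) = z**3 - 2\nf   = lambda z: z**3 - 2\ndf  = lambda z: 3*z**2\nd2f = lambda z: 6*z\n# one Halley step  z - f/(f' - f*f''/(2*f')), as in the formula above\nSirEdmond = lambda z: z - f(z)/(df(z) - f(z)*d2f(z)/(2*df(z)))\n```\n\n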
We ask you to redo these figures yourself in Python, and also to do the secant fractal (not possible in Maple simply by co-opting the `Fractals:-EscapeTime:-Newton` code, as Halley is), and to compare the results.\n\n## Julia sets\n\nThe [technical definition given in Wikipedia of a \"Julia set\"](https://en.wikipedia.org/wiki/Julia_set), named after the mathematician [Gaston Julia](https://en.wikipedia.org/wiki/Gaston_Julia), is pretty opaque. We will take an explicitly _oversimplified_ view here and not worry about technicalities; we're just going to compute things that are sort of like Julia sets. The basic idea is pretty simple. If we are given an iteration\n\\begin{equation}\nz_{n+1} = F(z_n)\n\\end{equation}\nstarting with $z_0 = $ some critical point (typically $z_0 = 0$) then to find our \"Julia sets\" we will _run the iteration backwards_. In the aforementioned technical Wikipedia article this algorithm is mentioned, and the reader is cautioned against it owing to its exponential cost; there are other problems with it as well, but for our purposes---exploration!---we will just implement it and try it out. We will be able to generate several interesting pictures this way, and begin to develop some insight.\n\nWe will restrict ourselves to _polynomial maps_ $F(z_n)$, and we will use NumPy's `roots` command to solve the polynomials. We'll suggest a method in the exercises that will allow you to extend this to _rational maps_.\n\nFirst, let's see how to solve polynomials in Python.\n\n\n\n```python\nimport numpy as np\n\np = np.poly1d( [1, 0, -2, -5 ]); # Newton's original example\nprint( p )\nprint( np.roots(p) )\nprint( p.r )\nprint( p(p.r) )\n# Wilkinson10 = np.poly1d( [1,-55,1320,-18150,157773,-902055,3416930,-8409500,12753576,-10628640,3628800] );\n# print( Wilkinson10.r )\n# print( Wilkinson10(Wilkinson10.r) )\n# Wilkinson20 = np.poly1d( [1, -210, 20615, -1256850, 53327946, -1672280820, 40171771630, -756111184500, 11310276995381, -135585182899530, 1307535010540395, -10142299865511450, 63030812099294896, -311333643161390640, 1206647803780373360, -3599979517947607200, 8037811822645051776, -12870931245150988800, 13803759753640704000, -8752948036761600000, 2432902008176640000] )\n# print( Wilkinson20.r )\n# print( Wilkinson20(Wilkinson20.r) )\n```\n\n 3\n 1 x - 2 x - 5\n [ 2.09455148+0.j -1.04727574+1.13593989j -1.04727574-1.13593989j]\n [ 2.09455148+0.j -1.04727574+1.13593989j -1.04727574-1.13593989j]\n [ 1.95399252e-14+0.00000000e+00j -1.77635684e-15-1.77635684e-15j\n -1.77635684e-15+1.77635684e-15j]\n\n\n\n```python\nfrom numpy.polynomial import Polynomial as Poly\nc = np.array( [-5, -2, 0, 1 ] )\np = Poly( c ) # Newton's original example\np\nprint( p.roots() )\nprint( p(p.roots()) )\n```\n\n [-1.04727574-1.13593989j -1.04727574+1.13593989j 2.09455148+0.j ]\n [ 5.32907052e-15-3.55271368e-15j 5.32907052e-15+3.55271368e-15j\n -1.77635684e-15+0.00000000e+00j]\n\n\nReading the output from those commands, we see that to make a polynomial we call `poly1d` (the 1d means \"one-dimensional\") with a vector of (monomial basis) coefficients. Thereafter, we can either call `roots` or simply ask for the roots by using the `.r` method. We can evaluate the polynomial at a vector of values by a call using parentheses: `p(p.r)` evaluates the polynomial at the computed roots; we see that the answers (in this case) are quite small, being essentially on the order of rounding errors. Since polynomials are continuous, we therefore believe that these computed roots might be accurate. 
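\n\nOne cheap way to put a number on \"might be accurate\" (a sketch added here, valid when the roots are simple) is to look at the size of one Newton correction $|p(r)/p'(r)|$ at each computed root $r$, which estimates how far each computed root would still move:\n\n\n```python\nimport numpy as np\n\np = np.poly1d( [1, 0, -2, -5 ] )      # Newton's original example again\nr = p.r                               # the computed roots\nprint( np.abs( p(r)/p.deriv()(r) ) )  # size of one Newton step at each root\n```\n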
\n\nIn truth the story is more complicated than that, but we will save that for your numerical analysis class.\n\nWe now see that the `Polynomial` convenience class is supposed to be used instead of `poly1d`. Fine. This means writing the coefficients in reverse order, but that is also fine.\n\nWhat we will do here to run the iteration backward is to take the equation\n\\begin{equation}\nz_{n+1} = F(z_n)\n\\end{equation}\nand _solve_ it (using `roots`) for $z_n$, when we are given $z_{n+1}$. We'll start with the same $z_0$ that we were using before, and compute all possible $z_{-1}$ values which would give us $z_0 = F( z_{-1} )$. This will be clearer with an example.\n\nLet's take $F(z) = z^2 - 1.2$. This is an instance of the Mandelbrot map, with $c=-1.2$ being in the Mandelbrot set (this is the value used in the code below). We could solve $z_{n+1} = z_n^2 - 1.2$ just by rearranging the equation: $z_n^2 = z_{n+1}+1.2$ and so by taking square roots we are done. Notice that there are _two_ possible $z_{n}$ values (call them _preimages_ of $z_{n+1}$). This is of course because our $F(z)$ is a polynomial of degree two. Then for each of these two $z_n$ values, there will be two $z_{n-1}$ values, so four $z_{n-1}$ values; then eight $z_{n-2}$ values, and so on. This is the \"exponential growth\" that the Wikipedia article warns about. We shall ignore the warning.\n\n\n```python\nN = 10001 # Make the array of length one more than a multiple of degree d\nHistory = np.zeros(N,dtype=complex)\n# History[0] is deliberately 0.0 for this example\n# If you want to start at another place, issue \n# the command\n# History[0] = whatever you want to start with\nhere = 0\nthere = 1\nc = [-1.2, 0, 1 ]\nd = len(c)-1\nwhile there <= N-d:\n    cc = c.copy()\n    cc[0] = c[0] - History[here]\n    p = Poly( cc );\n    rts = p.roots();\n    # This loop places those roots in the History array\n    for j in range(d):\n        # Can you explain to yourself how this code works?\n        History[there] = rts[j];\n        there += 1;\n    here += 1;\n\nimport matplotlib.pyplot as plt\nx = [e.real for e in History]\ny = [e.imag for e in History]\nplt.scatter( x, y, s=0.5, marker=\".\" )\nplt.show()\n```\n\nHere are similar images generated in Maple; one by our own code (which only plots the last half of the table of points), and one by the built-in `Fractals:-EscapeTime:-Julia` code.\n\n```{figure} ../Figures/Fractals/JuliaMaple12.png\n---\nheight: 200px\nname: julia_own_code\n---\nOur own code\n```\n\n```{figure} ../Figures/Fractals/JuliaEscapeTimeF12.png\n---\nheight: 300px\nname: julia_maple\n---\nMaple's built-in code\n```\n\n\n## Exercises\n1. Write down as many questions as you can about material from this section.\n2. Write a Python program to draw the Sierpinski gasket (perhaps by using binomial coefficients mod 2).\n3. Explore pictures of the binomial coefficients mod 4 (and then consult Andrew Granville's paper previously referenced).\n4. Investigate pictures (mod 2 or otherwise) of other combinatorial families of numbers, such as [Stirling Numbers](https://en.wikipedia.org/wiki/Stirling_number) (both kinds). Try also \"Eulerian numbers of the first kind\" mod 3.\n5. Write a Python program to animate Newton's method for real functions and real initial estimates in general (the animated GIF at the top of this unit was produced by a Maple program, Student:-Calculus1:-NewtonsMethod, which is quite a useful model). This exercise asks you to \"roll your own\" animation.\n6. Write your own code for computing Newton fractals, perhaps based on the code above (but at least improve the colour scheme).\n7. 
Compute Newton fractals for several functions of your own choosing. Test your code on the function $f(z) = z^{8}+4 z^{7}+6 z^{6}+6 z^{5}+5 z^{4}+2 z^{3}+z^{2}+z$ used above.\n8. Compute Halley fractals for the same functions.\n9. Compute secant fractals for the same functions, using the $x_1 = x_0 - f(x_0)/f'(x_0)$ rule to generate the needed second initial estimate. Try a different rule for generating $x_1$ and see if it affects your fractals.\n10. Try a few different values of \"c\" in the Mandelbrot example above, and generate your own \"Julia sets\".\n11. These are not really Julia sets; they include too much of the history! Alter the program so that it plots only (say) the last half of the points computed; increase the number of points by a lot, as well. Compare your figure to (say) the Maple Julia set for c=1.2. \n12. Change the function F to be a different polynomial; find places where both F and F' are zero (if any). If necessary, change your polynomial so that there is such a \"critical point\". Start your iteration there, and go backwards---plot your \"Julia set\".\n13. Extend the program so that it works for _rational_ functions F, say $F(z) = p(z)/q(z)$. This means solving the polynomial equation $p(z_n) - z_{n+1}q(z_n)=0$ for $z_n$. Try it out on the rational functions you get from Newton iteration on polynomial (or rational!) functions; or on Halley iteration on polynomial functions. Try any of the that arise from the methods that you can find listed in [Revisiting Gilbert Strang's \"A Chaotic Search for _i_\"](https://doi.org/10.1145/3363520.3363521). \n14. Read the [Wikipedia entry on Julia sets](https://en.wikipedia.org/wiki/Julia_set); it ought to be a little more intelligible now (but you will see that there are still lots of complications left to explain). One of the main items of interest is the theorem that states that the Fatou sets all have a _common boundary_. This means that if the number of components is $3$ or more, then the Julia set (which is that boundary!) 
_must be a fractal_.\n\n\n```python\n\n```\n", "meta": {"hexsha": "2909d8d6bceec4b4191eae754f78aa8341772ccf", "size": 165059, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "book/Contents/fractals-and-julia-sets.ipynb", "max_stars_repo_name": "jameshughes89/Computational-Discovery-on-Jupyter", "max_stars_repo_head_hexsha": "614eaaae126082106e1573675599e6895d09d96d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 14, "max_stars_repo_stars_event_min_datetime": "2022-02-21T23:50:22.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-23T22:21:55.000Z", "max_issues_repo_path": "book/Contents/fractals-and-julia-sets.ipynb", "max_issues_repo_name": "jameshughes89/Computational-Discovery-on-Jupyter", "max_issues_repo_head_hexsha": "614eaaae126082106e1573675599e6895d09d96d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "book/Contents/fractals-and-julia-sets.ipynb", "max_forks_repo_name": "jameshughes89/Computational-Discovery-on-Jupyter", "max_forks_repo_head_hexsha": "614eaaae126082106e1573675599e6895d09d96d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2022-02-22T02:43:44.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-23T14:27:31.000Z", "avg_line_length": 251.6143292683, "max_line_length": 36348, "alphanum_fraction": 0.8939833635, "converted": true, "num_tokens": 8218, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9099070035949657, "lm_q2_score": 0.9284088040219143, "lm_q1q2_score": 0.8447656729787657}} {"text": "# Import Problem Instance\nWe start by importing a simple problem instance to demonstrate the tsplib reader.\n\n\n\n```python\nfrom tsplib95 import tsplib95\nimport itertools\nimport networkx as nx\n\ninstance = tsplib95.load_problem('./tsplib/ulysses16.tsp')\n\ninstance.comment\n```\n\nRemember, this repository contains a small selection of TSP instances that you can use to test your algorithms.\n\n| name | nodes | description |\n|------|-------|-------------|\n| ulysses16.tsp | 16 | Odyssey of Ulysses |\n| ulysses7.tsp | 7 | subset of ulysses16 for testing purposes |\n| bayg29.tsp | 29 | 29 Cities in Bavaria |\n| bier127.tsp | 127 | 127 Biergaerten in Augsburg |\n| bier20.tsp | 20 | subset of bier127 |\n| brazil58.tsp | 58 | 58 cities in Brazil |\n| ali535.tsp | 535 | 535 Airports around the globe |\n| d18512.tsp | 18512 | 18512 places in Germany |\n\nThe following calls show the dimension = number of nodes of the problem, its node set and the edge weights. The functions `instance.get_nodes()` and `instance.get_edges()` are implemented as iterators, so you can loop over the nodes or edges. To get a list of nodes or edges, you have to explicitly construct one using `list(instance.get_nodes())`. Note that node counting may start at 1 for some instances while others use 0 as starting point. 
For convenience, we store the index of the first node as `first_node`.\n\n\n\n```python\ninstance.dimension\n\ninstance.get_nodes()\nprint(\"List of nodes: \", list(instance.get_nodes()))\n\nfirst_node = min(instance.get_nodes())\nfirst_node\n\nfor i,j in instance.get_edges():\n if i >= j:\n continue\n print(f\"edge {{ {i:2},{j:2} }} has weight {instance.wfunc(i,j):3}.\")\n\n```\n\nYou have already seen how to draw a graph, here is the relevant code again.\n\n\n```python\nG = instance.get_graph()\nif instance.is_depictable():\n pos = {i: instance.get_display(i) for i in instance.get_nodes()}\nelse:\n pos = nx.drawing.layout.spring_layout(G)\nnx.draw_networkx_nodes(G, pos, node_color='#66a3ff', node_size=200)\nnx.draw_networkx_labels(G, pos, font_weight='bold' )\nnx.draw_networkx_edges(G, pos, edge_color='#e6e6e6')\n```\n\n# Implementing the standard model with subtour elimination in Gurobi\nWe will implement the standard model with subtour elimination callback using binary variables $x_{ij} \\in \\{0,1\\}$ to indicate whether edge $\\{i,j\\}$ is being used in the tour. To avoid double counting edges, we employ the convention that $i < j$ for an edge $\\{i,j\\}$ and we will denote the resulting edge set by $E$. The formulation looks like this:\n\n\\begin{align}\n\\min\\;&\\sum_{\\{i,j\\} \\in E} c_{i,j} \\cdot x_{i,j}\\\\\n&\\sum_{j: \\{i,j\\} \\in E} x_{i,j} = 2 \\quad \\text{for all nodes $i$}\\\\\n&\\sum_{\\{i,j\\} \\in \\delta(S)} x_{ij} \\ge 2 \\quad \\text{for all $S \\subsetneq V$, $S \\ne \\emptyset$}\\\\\n&x_{i,j} \\in \\{0,1\\}\n\\end{align}\n\n## Creating the variables\nWe start by creating the model and the variables. Notice that we already define the objective function by using the `obj` parameter upon variable creation.\n\n\n```python\nimport gurobipy as grb\n\nmodel = grb.Model(name=\"Subtour TSP formulation\")\n\nx = grb.tupledict()\nfor i,j in instance.get_edges():\n if i < j:\n x[i,j] = model.addVar(obj=instance.wfunc(i,j), vtype=grb.GRB.BINARY, name=f\"x[{i},{j}]\")\n```\n\n## Adding the degree constraints\nNext, we add the constraints for our model with the exception of the subtour elimination constraints. We use the sum method of our variables to express the summation in an elegant way.\n\n\n```python\nfor i in instance.get_nodes():\n model.addConstr(x.sum(i,'*') + x.sum('*',i) == 2, name=f\"degree_ctr[{i}]\")\n```\n\n## Starting the Optimization Process\nFinally, we set the objective to minimization and call the optimizer.\n\n\n```python\nmodel.ModelSense = grb.GRB.MINIMIZE\nmodel.reset()\nmodel.optimize()\n```\n\n## Querying and Visualizing the Solution\nBefore we visualize our result, let us look at a few key figures of our solution.\n\n\n```python\nmodel.ObjVal\n```\n\n\n```python\nsolution_edges = [(i,j) for i,j in x.keys() if x[i,j].x > 0.9]\nsolution_edges\n```\n\nFor debugging purposes, it might be helpful to export the model held by Gurobi into a human-readable format:\n\n\n```python\nmodel.write('test.lp')\n```\n\n Finally, let us visualize the solution using NetworkX. 
In this case, we need to prescribe positions and draw the nodes and two layers of edges separately.\n\n\n```python\nif instance.is_depictable():\n pos = {i: instance.get_display(i) for i in instance.get_nodes()}\nelse:\n pos = nx.drawing.layout.spring_layout(G)\nnx.draw_networkx_nodes(G, pos, node_color='#66a3ff', node_size=500)\nnx.draw_networkx_labels(G, pos, font_weight='bold' )\nnx.draw_networkx_edges(G, pos, edge_color='#e6e6e6')\nnx.draw_networkx_edges(G, pos, edgelist=solution_edges, edge_color='#ffa31a', width=4)\n```\n\n## Subtour elimination\nAs you can hopefully see (depending on the instance you selected), the solution may contain a subtour. Let us add a callback to detect and eliminate subtours in an integer solution. \n\nWe start by defining a function that, given a list of edges, finds a subtour if there is one. For a more concise implementation, we use a function of networkx to find such a subtour: We construct an auxiliary graph $G_{\\text{aux}}$, find a cycle in this graph and return just the nodes contained in this cycle.\n\n\n```python\ndef find_subtour_set(nodes, edges):\n G_aux = nx.Graph()\n G_aux.add_nodes_from(nodes)\n G_aux.add_edges_from(edges)\n return set(itertools.chain(*nx.find_cycle(G_aux)))\n\nfind_subtour_set(instance.get_nodes(), [(i,j) for i,j in x.keys() if x[i,j].x > 0.9])\n```\n\n### Define a callback\nNext, we need to define a callback function that adds a violated subtour inequality if one exists. In Gurobi, there is just one \"global\" callback function that two parameters: The `model` and a constant called `where` (a \"Callback Code\") that indicates at which position in the optimization process the callback has been invoked. The documentation contains a list of the available codes at http://www.gurobi.com/documentation/8.0/refman/callback_codes.html#sec:CallbackCodes. We want our callback to spring into action whenever a new integer solution has been found, so the relevant callback code is `GRB.Callback.MIPSOL`.\n\nNotice that we can only access parameters and current values of our model through the model object, not through the variables we have defined above. The model supplies a number of `cbGet...` methods for this purpose. To access our variables, we need to define a _user variable_ in the model that stores this information and makes it accessible in the callback. User variables can be any member of the model objects that starts with an underscore. We will add a parameter `_vars` to our model that simply stores the x variables and then use this parameter in the callback to access the current solution.\n\nTo add a new constraint, we use the method `cbLazy` of our model that adds a new lazy constraint. The node will then be re-evaluated automatically.\n\n### Subtour Elimination Callback\n **Task 1:** In the following function, complete the definition of the `cut_edges` list to make the subtour elimination work properly.\n\n\n```python\ndef subtour_callback(model, where):\n if where == grb.GRB.Callback.MIPSOL:\n sol = model.cbGetSolution(model._vars)\n S = find_subtour_set(model._instance.get_nodes(), [(i,j) for i,j in model._vars.keys() if sol[i,j] > 0.9])\n if len(S) < model._instance.dimension:\n # TODO: cut_edges\n cut_edges = [(i,j) for i,j in model._vars.keys() if ...]\n model.cbLazy(sum(model._vars[i,j] for i,j in cut_edges) >= 2)\n```\n\n### Add Callback to the Model \nLet us now add the variables to the model and resolve, this time using the callback. Also, we need to switch on lazy constraints by setting the appropropriate parameter. 
The callback function is simply passed as a parameter to the optimizer.\n\n\n```python\nmodel._vars = x # for use in the callback\nmodel._instance = instance # for use in the callback\nmodel.reset()\nmodel.Params.lazyConstraints = 1 # use lazy constraints\nmodel.optimize(subtour_callback) # use callback to add lazy constraints\n```\n\n### Results\nLet's have a look at the results.\n\n\n```python\nmodel.ObjVal\n```\n\n\n```python\nsolution_edges = [(i,j) for i,j in x.keys() if x[i,j].x > 0.9]\nsolution_edges\n```\n\n\n```python\nif instance.is_depictable():\n pos = {i: instance.get_display(i) for i in instance.get_nodes()}\nelse:\n pos = nx.drawing.layout.spring_layout(G)\nnx.draw_networkx_nodes(G, pos, node_color='#66a3ff', node_size=500)\nnx.draw_networkx_labels(G, pos, font_weight='bold' )\nnx.draw_networkx_edges(G, pos, edge_color='#e6e6e6')\nnx.draw_networkx_edges(G, pos, edgelist=solution_edges, edge_color='#ffa31a', width=4)\n```\n\n## Fractional Subtour Elimination\nFinally, let us try to implement a separation procedure for subtour elimination constraints that works on the relaxation and does not need an integer solution. This can be done by solving a minimum cut problem on an auxiliary graph where we fix an arbitrary source node (we will use node $1$) and iterate through all possible target nodes until we find a minimum cut that has a value of less than $2$. For this cut, the corresponding subtour elimination constraint is inserted as a lazy cut. We modify our callback to use the callback code `MIPNODE` instead of `MIPSOL` and query `GRB.Callback.MIPNODE_STATUS` to see whether the node has already been solved to (fractional) optimality. For computing the minimum cut we again use an algorithm provided by the `networkx` package.\nNote that we have substituted `cbGetSolution` by `cbGetNodeRel`, as a solution is not generally available at any node. Also, we have to include the `MIPSOL` callback branch, because `MIPNODE` might not be called for nodes that yield an integer optimum right away.\n\n\n### Define Separation Method\n\n\n```python\nimport itertools\ndef find_minimum_cut_partition(instance, sol):\n G_flow = instance.get_graph()\n for i,j in G_flow.edges():\n if (i,j) in sol:\n G_flow[i][j]['capacity'] = sol[i,j]\n else:\n G_flow[i][j]['capacity'] = 0\n for t in G_flow.nodes() - {1}:\n cut_value, S = nx.minimum_cut(G_flow, 1, t, capacity='capacity')\n if cut_value < 2:\n return S[0]\n return set() #no cut with value < 2 has been found\n```\n\n### Subtour Elimination Callback\n**Task 2:** In the following callback, fill in the code for adding the correct subtour elimination inequalities.\n\n\n```python\ndef subtour_elimination_callback(model, where):\n if where == grb.GRB.Callback.MIPSOL: # MIP solution found\n sol = model.cbGetSolution(model._vars)\n # ... 
add cut here\n elif where == grb.GRB.Callback.MIPNODE and model.cbGet(grb.GRB.Callback.MIPNODE_STATUS) == grb.GRB.Status.OPTIMAL:\n # current MIP node has been solved to LP optimality\n # add fractional cut here\n sol = model.cbGetNodeRel(model._vars)\n```\n\n### Optimize\n\n\n```python\nmodel._vars = x\nmodel._instance = instance\nmodel.reset()\nmodel.Params.lazyConstraints = 1\nmodel.optimize(subtour_elimination_callback)\n```\n\n### Results\n\n\n```python\nmodel.ObjVal\n```\n\n\n```python\nsolution_edges = [(i,j) for i,j in x.keys() if x[i,j].x > 0.9]\nsolution_edges\n```\n\n\n```python\nif instance.is_depictable():\n pos = {i: instance.get_display(i) for i in instance.get_nodes()}\nelse:\n pos = nx.drawing.layout.spring_layout(G)\nnx.draw_networkx_nodes(G, pos, node_color='#66a3ff', node_size=500)\nnx.draw_networkx_labels(G, pos, font_weight='bold' )\nnx.draw_networkx_edges(G, pos, edge_color='#e6e6e6')\nnx.draw_networkx_edges(G, pos, edgelist=solution_edges, edge_color='#ffa31a', width=4)\n```\n\n### Comparison\n**Task 3:** Compare the two different callbacks with respect to running time and number of cuts added. Try different TSPLIB instances. Compare this formulation to MTZ.\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "38202bdfd86446e761b545c1f3ee808a68642011", "size": 17783, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "06-subtours.ipynb", "max_stars_repo_name": "michael-ritter/assignments-ma4502-S2019", "max_stars_repo_head_hexsha": "dd40371d72a2cc8c8c3b505d073e02dbe1bbec49", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "06-subtours.ipynb", "max_issues_repo_name": "michael-ritter/assignments-ma4502-S2019", "max_issues_repo_head_hexsha": "dd40371d72a2cc8c8c3b505d073e02dbe1bbec49", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "06-subtours.ipynb", "max_forks_repo_name": "michael-ritter/assignments-ma4502-S2019", "max_forks_repo_head_hexsha": "dd40371d72a2cc8c8c3b505d073e02dbe1bbec49", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.3965183752, "max_line_length": 786, "alphanum_fraction": 0.6028791542, "converted": true, "num_tokens": 3055, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9099070084811306, "lm_q2_score": 0.9284087960985615, "lm_q1q2_score": 0.8447656703056101}} {"text": "# Funciones de forma unidimensionales\n\nLas funciones de forma unidimensionales sirven para aproximar los desplazamientos: \n\n\\begin{equation}\nw = \\alpha_{0} + \\alpha_{1} x + \\cdots + \\alpha_{n} x^{n} = \\sum_{i = 0}^{n} \\alpha_{i} x^{i}\n\\end{equation}\n\n## Elemento viga Euler-Bernoulli\n\nLos elementos viga soportan esfuerzos debido a flexi\u00f3n.\n\n### Elemento de dos nodos\n\nPara un elemento de dos nodos y cuatro grados de libertad:\n\n\\begin{equation}\nw = \\alpha_{0} + \\alpha_{1} x + \\alpha_{2} x^{2} + \\alpha_{3} x^{3} =\n\\left [\n\\begin{matrix}\n1 & x & x^{2} & x^{3}\n\\end{matrix}\n\\right ]\n\\left [\n\\begin{matrix}\n\\alpha_{0} \\\\\\\n\\alpha_{1} \\\\\\\n\\alpha_{2} \\\\\\\n\\alpha_{3}\n\\end{matrix}\n\\right ]\n\\end{equation}\n\nLa deformaci\u00f3n angular:\n\n\\begin{equation}\n\\theta = \\frac{\\partial{w}}{\\partial{x}} = \\alpha_{1} + 2 \\alpha_{2} x + 3 \\alpha_{3} x^{2}\n\\end{equation}\n\nReemplazando los valores nodales en coordenadas naturales:\n\n\\begin{eqnarray}\n\\alpha_{0} + \\alpha_{1} (-1) + \\alpha_{2} (-1)^{2} + \\alpha_{3} (-1)^{3} &=& w_{1} \\\\\\\n\\alpha_{1} + 2 \\alpha_{2} (-1) + 3 \\alpha_{3} (-1)^{2} &=& \\theta_{1} \\\\\\\n\\alpha_{0} + \\alpha_{1} (1) + \\alpha_{2} (1)^{2} + \\alpha_{3} (1)^{3} &=& w_{2} \\\\\\\n\\alpha_{1} + 2 \\alpha_{2} (1) + 3 \\alpha_{3} (1)^{2} &=& \\theta_{2}\n\\end{eqnarray}\n\nEvaluando:\n\n\\begin{eqnarray}\n\\alpha_{0} - \\alpha_{1} + \\alpha_{2} - \\alpha_{3} &=& w_{1} \\\\\\\n\\alpha_{1} - 2 \\alpha_{2} + 3 \\alpha_{3} &=& \\theta_{1} \\\\\\\n\\alpha_{0} + \\alpha_{1} + \\alpha_{2} + \\alpha_{3} &=& w_{2} \\\\\\\n\\alpha_{1} + 2 \\alpha_{2} + 3 \\alpha_{3} &=& \\theta_{2}\n\\end{eqnarray}\n\nEn forma matricial:\n\n\\begin{equation}\n\\left [\n\\begin{matrix}\n1 & -1 & 1 & -1 \\\\\\\n0 & 1 & -2 & 3 \\\\\\\n1 & 1 & 1 & 1 \\\\\\\n0 & 1 & 2 & 3\n\\end{matrix}\n\\right ]\n\\left [\n\\begin{matrix}\n\\alpha_{0} \\\\\\\n\\alpha_{1} \\\\\\\n\\alpha_{2} \\\\\\\n\\alpha_{3}\n\\end{matrix}\n\\right ] =\n\\left [\n\\begin{matrix}\nw_{1} \\\\\\\n\\theta_{1} \\\\\\\nw_{2} \\\\\\\n\\theta_{2}\n\\end{matrix}\n\\right ]\n\\end{equation}\n\nResolviendo el sistema:\n\n\\begin{equation}\n\\left [\n\\begin{matrix}\n\\alpha_{0} \\\\\\\n\\alpha_{1} \\\\\\\n\\alpha_{2} \\\\\\\n\\alpha_{3}\n\\end{matrix}\n\\right ] =\n\\left [\n\\begin{matrix}\n\\frac{1}{2} & \\frac{1}{4} & \\frac{1}{2} & -\\frac{1}{4} \\\\\\\n-\\frac{3}{4} & -\\frac{1}{4} & \\frac{3}{4} & -\\frac{1}{4} \\\\\\\n0 & -\\frac{1}{4} & 0 & \\frac{1}{4} \\\\\\\n\\frac{1}{4} & \\frac{1}{4} & -\\frac{1}{4} & \\frac{1}{4}\n\\end{matrix}\n\\right ]\n\\left [\n\\begin{matrix}\nw_{1} \\\\\\\n\\theta_{1} \\\\\\\nw_{2} \\\\\\\n\\theta_{2}\n\\end{matrix}\n\\right ]\n\\end{equation}\n\nReemplazando:\n\n\\begin{equation}\nw =\n\\left [\n\\begin{matrix}\n1 & x & x^{2} & x^{3}\n\\end{matrix}\n\\right ]\n\\left [\n\\begin{matrix}\n\\alpha_{0} \\\\\\\n\\alpha_{1} \\\\\\\n\\alpha_{2} \\\\\\\n\\alpha_{3}\n\\end{matrix}\n\\right ] =\n\\left [\n\\begin{matrix}\n1 & x & x^{2} & x^{3}\n\\end{matrix}\n\\right ]\n\\left [\n\\begin{matrix}\n\\frac{1}{2} & \\frac{1}{4} & \\frac{1}{2} & -\\frac{1}{4} \\\\\\\n-\\frac{3}{4} & -\\frac{1}{4} & \\frac{3}{4} & -\\frac{1}{4} \\\\\\\n0 & -\\frac{1}{4} & 0 & \\frac{1}{4} \\\\\\\n\\frac{1}{4} & \\frac{1}{4} & -\\frac{1}{4} & \\frac{1}{4}\n\\end{matrix}\n\\right ]\n\\left [\n\\begin{matrix}\nw_{1} \\\\\\\n\\theta_{1} 
\\\\\\\nw_{2} \\\\\\\n\\theta_{2}\n\\end{matrix}\n\\right ] =\n\\left [\n\\begin{matrix}\n\\frac{1}{4} (x+2) (x-1)^{2} & \\frac{1}{4} (x+1) (x-1)^{2} & -\\frac{1}{4} (x-2) (x+1)^{2} & \\frac{1}{4} (x-1) (x+1)^{2}\n\\end{matrix}\n\\right ]\n\\left [\n\\begin{matrix}\nw_{1} \\\\\\\n\\theta_{1} \\\\\\\nw_{2} \\\\\\\n\\theta_{2}\n\\end{matrix}\n\\right ]\n\\end{equation}\n\nReescribiendo $w$:\n\n\\begin{equation}\nw = \\Big [ \\frac{1}{4} (x+2) (x-1)^{2} \\Big ] w_{1} + \\Big [ \\frac{1}{4} (x+1) (x-1)^{2} \\Big ] \\theta_{1} + \\Big [ -\\frac{1}{4} (x-2) (x+1)^{2} \\Big ] w_{2} + \\Big [ \\frac{1}{4} (x-1) (x+1)^{2} \\Big ] \\theta_{2} = H_{01} w_{1} + H_{11} \\theta_{1} + H_{02} w_{2} + H_{12} \\theta_{2}\n\\end{equation}\n\n### Elemento de tres nodos\n\nPara un elemento de tres nodos y seis grados de libertad:\n\n\\begin{equation}\nw = \\alpha_{0} + \\alpha_{1} x + \\alpha_{2} x^{2} + \\alpha_{3} x^{3} + \\alpha_{4} x^{4} + \\alpha_{5} x^{5} =\n\\left [\n\\begin{matrix}\n1 & x & x^{2} & x^{3} & x^{4} & x^{5}\n\\end{matrix}\n\\right ]\n\\left [\n\\begin{matrix}\n\\alpha_{0} \\\\\\\n\\alpha_{1} \\\\\\\n\\alpha_{2} \\\\\\\n\\alpha_{3} \\\\\\\n\\alpha_{4} \\\\\\\n\\alpha_{5}\n\\end{matrix}\n\\right ]\n\\end{equation}\n\nLa deformaci\u00f3n angular:\n\n\\begin{equation}\n\\theta = \\frac{\\partial{w}}{\\partial{x}} = \\alpha_{1} + 2 \\alpha_{2} x + 3 \\alpha_{3} x^{2} + 4 \\alpha_{4} x^{3} + 5 \\alpha_{5} x^{4}\n\\end{equation}\n\nReemplazando los valores nodales en coordenadas naturales:\n\n\\begin{eqnarray}\n\\alpha_{0} + \\alpha_{1} (-1) + \\alpha_{2} (-1)^{2} + \\alpha_{3} (-1)^{3} + \\alpha_{4} (-1)^{4} + \\alpha_{5} (-1)^{5} &=& w_{1} \\\\\\\n\\alpha_{1} + 2 \\alpha_{2} (-1) + 3 \\alpha_{3} (-1)^{2} + 4 \\alpha_{4} (-1)^{3} + 5 \\alpha_{5} (-1)^{4} &=& \\theta_{1} \\\\\\\n\\alpha_{0} + \\alpha_{1} (0) + \\alpha_{2} (0)^{2} + \\alpha_{3} (0)^{3} + \\alpha_{4} (0)^{4} + \\alpha_{5} (0)^{5} &=& w_{2} \\\\\\\n\\alpha_{1} + 2 \\alpha_{2} (0) + 3 \\alpha_{3} (0)^{2} + 4 \\alpha_{4} (0)^{3} + 5 \\alpha_{5} (0)^{4} &=& \\theta_{2} \\\\\\\n\\alpha_{0} + \\alpha_{1} (1) + \\alpha_{2} (1)^{2} + \\alpha_{3} (1)^{3} + \\alpha_{4} (1)^{4} + \\alpha_{5} (1)^{5} &=& w_{3} \\\\\\\n\\alpha_{1} + 2 \\alpha_{2} (1) + 3 \\alpha_{3} (1)^{2} + 4 \\alpha_{4} (1)^{3} + 5 \\alpha_{5} (1)^{4} &=& \\theta_{3}\n\\end{eqnarray}\n\nEvaluando:\n\n\\begin{eqnarray}\n\\alpha_{0} - \\alpha_{1} + \\alpha_{2} - \\alpha_{3} + \\alpha_{4} - \\alpha_{5} &=& w_{1} \\\\\\\n\\alpha_{1} - 2 \\alpha_{2} + 3 \\alpha_{3} - 4 \\alpha_{4} + 5 \\alpha_{5} &=& \\theta_{1} \\\\\\\n\\alpha_{0} &=& w_{2} \\\\\\\n\\alpha_{1} &=& \\theta_{2} \\\\\\\n\\alpha_{0} + \\alpha_{1} + \\alpha_{2} + \\alpha_{3} + \\alpha_{4} + \\alpha_{5} &=& w_{3} \\\\\\\n\\alpha_{1} + 2 \\alpha_{2} + 3 \\alpha_{3} + 4 \\alpha_{4} + 5 \\alpha_{5} &=& \\theta_{3}\n\\end{eqnarray}\n\nEn forma matricial:\n\n\\begin{equation}\n\\left [\n\\begin{matrix}\n1 & -1 & 1 & -1 & 1 & -1 \\\\\\\n0 & 1 & -2 & 3 & -4 & 5 \\\\\\\n1 & 0 & 0 & 0 & 0 & 0 \\\\\\\n0 & 1 & 0 & 0 & 0 & 0 \\\\\\\n1 & 1 & 1 & 1 & 1 & 1 \\\\\\\n0 & 1 & 2 & 3 & 4 & 5\n\\end{matrix}\n\\right ]\n\\left [\n\\begin{matrix}\n\\alpha_{0} \\\\\\\n\\alpha_{1} \\\\\\\n\\alpha_{2} \\\\\\\n\\alpha_{3} \\\\\\\n\\alpha_{4} \\\\\\\n\\alpha_{5}\n\\end{matrix}\n\\right ] =\n\\left [\n\\begin{matrix}\nw_{1} \\\\\\\n\\theta_{1} \\\\\\\nw_{2} \\\\\\\n\\theta_{2} \\\\\\\nw_{3} \\\\\\\n\\theta_{3}\n\\end{matrix}\n\\right ]\n\\end{equation}\n\nResolviendo el sistema:\n\n\\begin{equation}\n\\left [\n\\begin{matrix}\n\\alpha_{0} 
\\\\\\\n\\alpha_{1} \\\\\\\n\\alpha_{2} \\\\\\\n\\alpha_{3} \\\\\\\n\\alpha_{4} \\\\\\\n\\alpha_{5}\n\\end{matrix}\n\\right ] =\n\\left [\n\\begin{matrix}\n0 & 0 & 1 & 0 & 0 & 0 \\\\\\\n0 & 0 & 0 & 1 & 0 & 0 \\\\\\\n1 & \\frac{1}{4} & -2 & 0 & 1 & -\\frac{1}{4} \\\\\\\n-\\frac{5}{4} & -\\frac{1}{4} & 0 & -2 & \\frac{5}{4} & -\\frac{1}{4} \\\\\\\n-\\frac{1}{2} & -\\frac{1}{4} & 1 & 0 & -\\frac{1}{2} & \\frac{1}{4} \\\\\\\n\\frac{3}{4} & \\frac{1}{4} & 0 & 1 & -\\frac{3}{4} & \\frac{1}{4}\n\\end{matrix}\n\\right ]\n\\left [\n\\begin{matrix}\nw_{1} \\\\\\\n\\theta_{1} \\\\\\\nw_{2} \\\\\\\n\\theta_{2} \\\\\\\nw_{3} \\\\\\\n\\theta_{3}\n\\end{matrix}\n\\right ]\n\\end{equation}\n\nReemplazando:\n\n\\begin{equation}\nw =\n\\left [\n\\begin{matrix}\n1 & x & x^{2} & x^{3} & x^{4} & x^{5}\n\\end{matrix}\n\\right ]\n\\left [\n\\begin{matrix}\n\\alpha_{0} \\\\\\\n\\alpha_{1} \\\\\\\n\\alpha_{2} \\\\\\\n\\alpha_{3} \\\\\\\n\\alpha_{4} \\\\\\\n\\alpha_{5}\n\\end{matrix}\n\\right ] =\n\\left [\n\\begin{matrix}\n1 & x & x^{2} & x^{3} & x^{4} & x^{5}\n\\end{matrix}\n\\right ]\n\\left [\n\\begin{matrix}\n0 & 0 & 1 & 0 & 0 & 0 \\\\\\\n0 & 0 & 0 & 1 & 0 & 0 \\\\\\\n1 & \\frac{1}{4} & -2 & 0 & 1 & -\\frac{1}{4} \\\\\\\n-\\frac{5}{4} & -\\frac{1}{4} & 0 & -2 & \\frac{5}{4} & -\\frac{1}{4} \\\\\\\n-\\frac{1}{2} & -\\frac{1}{4} & 1 & 0 & -\\frac{1}{2} & \\frac{1}{4} \\\\\\\n\\frac{3}{4} & \\frac{1}{4} & 0 & 1 & -\\frac{3}{4} & \\frac{1}{4}\n\\end{matrix}\n\\right ]\n\\left [\n\\begin{matrix}\nw_{1} \\\\\\\n\\theta_{1} \\\\\\\nw_{2} \\\\\\\n\\theta_{2} \\\\\\\nw_{3} \\\\\\\n\\theta_{3}\n\\end{matrix}\n\\right ] =\n\\left [\n\\begin{matrix}\n\\frac{1}{4} x^{2} (3x+4) (x-1)^{2} & \\frac{1}{4} x^{2} (x+1) (x-1)^{2} & (x-1)^{2} (x+1)^{2} & x (x-1)^{2} (x+1)^{2} & -\\frac{1}{4} x^{2} (3x-4) (x+1)^{2} & \\frac{1}{4} x^{2} (x-1) (x+1)^{2}\n\\end{matrix}\n\\right ]\n\\left [\n\\begin{matrix}\nw_{1} \\\\\\\n\\theta_{1} \\\\\\\nw_{2} \\\\\\\n\\theta_{2} \\\\\\\nw_{3} \\\\\\\n\\theta_{3}\n\\end{matrix}\n\\right ]\n\\end{equation}\n\nReescribiendo $u$:\n\n\\begin{equation}\nw = \\Big [ \\frac{1}{4} x^{2} (3x+4) (x-1)^{2} \\Big ] w_{1} + \\Big [ \\frac{1}{4} x^{2} (x+1) (x-1)^{2} \\Big ] \\theta_{1} + \\Big [ (x-1)^{2} (x+1)^{2} \\Big ] w_{2} + \\Big [ x (x-1)^{2} (x+1)^{2} \\Big ] \\theta_{2} + \\Big [ -\\frac{1}{4} x^{2} (3x-4) (x+1)^{2} \\Big ] w_{3} + \\Big [ \\frac{1}{4} x^{2} (x-1) (x+1)^{2} \\Big ] \\theta_{3} = H_{01} w_{1} + H_{11} \\theta_{1} + H_{02} w_{2} + H_{12} \\theta_{2} + H_{03} w_{3} + H_{13} \\theta_{3}\n\\end{equation}\n\n## Elementos viga Euler-Bernoulli de mayor grado polinomial\n\nLos elementos de mayor grado pueden obtenerse mediante polinomios de Hermite:\n\n\\begin{eqnarray}\nH_{0i} &=& [1 - 2 \\ \\ell_{(x_{i})}^{\\prime} (x - x_{i})] [\\ell_{(x)}]^{2} \\\\\\\nH_{1i} &=& (x - x_{i}) [\\ell_{(x)}]^{2}\n\\end{eqnarray}\n\n### Elemento de dos nodos\n\nUsando la f\u00f3rmula para polinomios de Lagrange:\n\n\\begin{eqnarray}\n\\ell_{1} &=& \\frac{x - 1}{-1 - 1} = \\frac{1}{2} - \\frac{1}{2} x \\\\\\\n\\ell_{1}^{\\prime} &=& - \\frac{1}{2} \\\\\\\n\\ell_{2} &=& \\frac{x - (-1)}{1 - (-1)} = \\frac{1}{2} + \\frac{1}{2} x \\\\\\\n\\ell_{2}^{\\prime} &=& \\frac{1}{2}\n\\end{eqnarray}\n\nUsando la f\u00f3rmula para polinomios de Hermite:\n\n\\begin{eqnarray}\nH_{01} &=& \\Big \\\\{ 1 - 2 \\Big [ -\\frac{1}{2} \\Big ] [x - (-1)] \\Big \\\\} \\Big ( \\frac{1}{2} - \\frac{1}{2} x \\Big )^{2} = \\frac{1}{4} (x+2) (x-1)^{2} \\\\\\\nH_{11} &=& [x - (-1)] \\Big ( \\frac{1}{2} - \\frac{1}{2} x \\Big )^{2} = 
\\frac{1}{4} (x+1) (x-1)^{2} \\\\\\\nH_{02} &=& \\Big [ 1 - 2 \\Big ( \\frac{1}{2} \\Big ) (x - 1) \\Big ] \\Big ( \\frac{1}{2} + \\frac{1}{2} x \\Big )^{2} = -\\frac{1}{4} (x-2) (x+1)^{2} \\\\\\\nH_{12} &=& (x - 1) \\Big ( \\frac{1}{2} + \\frac{1}{2} x \\Big )^{2} = \\frac{1}{4} (x-1) (x+1)^{2}\n\\end{eqnarray}\n\n### Elemento de tres nodos\n\nUsando la f\u00f3rmula para polinomios de Lagrange:\n\n\\begin{eqnarray}\n\\ell_{1} &=& \\frac{x - 0}{-1 - 0} \\frac{x - 1}{-1 - 1} = -\\frac{1}{2} x + \\frac{1}{2} x^{2} \\\\\\\n\\ell_{1}^{\\prime} &=& -\\frac{1}{2} + x \\\\\\\n\\ell_{2} &=& \\frac{x - (-1)}{0 - (-1)} \\frac{x - 1}{0 - 1} = 1 - x^{2} \\\\\\\n\\ell_{2}^{\\prime} &=& - 2 x \\\\\\\n\\ell_{3} &=& \\frac{x - 0}{1 - 0} \\frac{x - 1}{1 - 1} = \\frac{1}{2} x + \\frac{1}{2} x^{2} \\\\\\\n\\ell_{3}^{\\prime} &=& \\frac{1}{2} + x\n\\end{eqnarray}\n\nUsando la f\u00f3rmula para polinomios de Hermite:\n\n\\begin{eqnarray}\nH_{01} &=& \\Big \\\\{ 1 - 2 \\Big [ -\\frac{1}{2} + (-1) \\Big ] [x - (-1)] \\Big \\\\} \\Big ( -\\frac{1}{2} x + \\frac{1}{2} x^{2} \\Big )^{2} = \\frac{1}{4} x^{2} (3x+4) (x-1)^{2} \\\\\\\nH_{11} &=& [x - (-1)] \\Big ( -\\frac{1}{2} x + \\frac{1}{2} x^{2} \\Big )^{2} = \\frac{1}{4} x^{2} (x+1) (x-1)^{2} \\\\\\\nH_{02} &=& \\Big \\\\{ 1 - 2 \\Big [ - 2 (0) \\Big ] (x - 0) \\Big \\\\} \\Big ( 1 - x^{2} \\Big )^{2} = (x-1)^{2} (x+1)^{2} \\\\\\\nH_{12} &=& (x - 0) \\Big ( 1 - x^{2} \\Big )^{2} = x (x-1)^{2} (x+1)^{2} \\\\\\\nH_{03} &=& \\Big \\\\{ 1 - 2 \\Big [ \\frac{1}{2} + (1) \\Big ] \\Big \\\\} \\Big ( \\frac{1}{2} x + \\frac{1}{2} x^{2} \\Big )^{2} = -\\frac{1}{4} x^{2} (3x-4) (x+1)^{2} \\\\\\\nH_{13} &=& [x - (1)] \\Big ( \\frac{1}{2} x + \\frac{1}{2} x^{2} \\Big )^{2} = \\frac{1}{4} x^{2} (x-1) (x+1)^{2}\n\\end{eqnarray}\n\n\n```\n\n```\n\n\n```\n\n```\n", "meta": {"hexsha": "722056b9edbcc72ba8e617d1684b52f173b00d83", "size": 20779, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Funciones de forma/funciones forma viga.ipynb", "max_stars_repo_name": "ClaudioVZ/Teoria-FEM-Python", "max_stars_repo_head_hexsha": "8a4532f282c38737fb08d1216aa859ecb1e5b209", "max_stars_repo_licenses": ["Artistic-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-09-28T00:23:45.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-28T00:23:45.000Z", "max_issues_repo_path": "Funciones de forma/funciones forma viga.ipynb", "max_issues_repo_name": "ClaudioVZ/Teoria-FEM-Python", "max_issues_repo_head_hexsha": "8a4532f282c38737fb08d1216aa859ecb1e5b209", "max_issues_repo_licenses": ["Artistic-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Funciones de forma/funciones forma viga.ipynb", "max_forks_repo_name": "ClaudioVZ/Teoria-FEM-Python", "max_forks_repo_head_hexsha": "8a4532f282c38737fb08d1216aa859ecb1e5b209", "max_forks_repo_licenses": ["Artistic-2.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2015-12-04T12:42:00.000Z", "max_forks_repo_forks_event_max_datetime": "2019-10-31T21:50:32.000Z", "avg_line_length": 29.183988764, "max_line_length": 467, "alphanum_fraction": 0.3674864045, "converted": true, "num_tokens": 5540, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9572778048911612, "lm_q2_score": 0.8824278664544912, "lm_q1q2_score": 0.8447286109743462}} {"text": "Mean\n\\begin{align}\n\\bar x = \\frac{\\sum_{i=1}^n x_i}{n}\n\\end{align}\n\nVariance\n\\begin{align}\n\\sigma^2 = \\frac{\\sum_{i=1}^n (\\bar x - x_i)^2}{n}\n\\end{align}\n\nStandard Deviation (Population)\n\\begin{align}\n\\sigma = \\sqrt{\\frac{\\sum_{i=1}^n (\\bar x - x_i)^2}{n}}\n\\end{align}\n\nStandard Deviation (Sample)\n\\begin{align}\ns = \\sqrt{\\frac{\\sum_{i=1}^n (\\bar x - x_i)^2}{n-1}}\n\\end{align}\n\nStandard Error\n\\begin{align}\nSE = \\frac{\\sigma}{\\sqrt n}\n\\end{align}\n\nz-score\n\\begin{align}\nz = \\frac{x - \\mu}{\\sigma}\n\\end{align}\n\nMargin of error\n\\begin{align}\nME = Z^* \\frac{\\sigma}{\\sqrt{n}}\n\\end{align}\n\nConfidence Interval\n\\begin{align}\nCI = \\bar x \\pm Z^* \\frac{\\sigma}{\\sqrt{n}}\n\\end{align}\n\nt-stat\n\\begin{align}\nt = \\frac{\\bar x - \\mu}{\\frac{\\sigma}{\\sqrt{n}}}\n\\end{align}\n\ndf\n\\begin{align}\ndf = n - 1\n\\end{align}\n\nCohen\u2019s d\n\\begin{align}\nd = \\frac{\\bar x_1 - \\bar x_2}{s}\n\\end{align}\n\nwhere\n\\begin{align}\ns = \\sqrt{\\frac{(n_1-1)s_1^2+(n_2-1)s_2^2}{n_1+n_2-2}}\n\\end{align}\n\nStandard Error\n\\begin{align}\nSE = \\sqrt{\\frac{S_1^2}{n_1}+\\frac{S_2^2}{n_2}}\n\\end{align}\n\nPooled Variance\n\\begin{align}\nS_p^2 = \\frac{SS_1+SS_2}{df_1+df_2}\n\\end{align}\n\nNew Standard Error\n\\begin{align}\nSE = \\sqrt{\\frac{S_p^2}{n_1}+\\frac{S_p^2}{n_2}}\n\\end{align}\n\nPercentual de variabilidade\n\\begin{align}\nr^2 = \\frac{t^2}{t^2 + df}\n\\end{align} \n\nR2 mede a propor\u00e7\u00e3o de uma diferen\u00e7a nas m\u00e9dias que pode ser explicada pela vari\u00e1vel independente.\n\nO d de Cohen \u00e9 a medida do tamanho do efeito.\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "16622e84a8a6dbcdc13d17862118a26f0d11c117", "size": 3037, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Summary.ipynb", "max_stars_repo_name": "tassotirap/data-science-summary", "max_stars_repo_head_hexsha": "384bd84e25df8e9abb05f20cf9a6e80ab9385751", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Summary.ipynb", "max_issues_repo_name": "tassotirap/data-science-summary", "max_issues_repo_head_hexsha": "384bd84e25df8e9abb05f20cf9a6e80ab9385751", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Summary.ipynb", "max_forks_repo_name": "tassotirap/data-science-summary", "max_forks_repo_head_hexsha": "384bd84e25df8e9abb05f20cf9a6e80ab9385751", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 23.0075757576, "max_line_length": 107, "alphanum_fraction": 0.4636154099, "converted": true, "num_tokens": 607, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9626731147976795, "lm_q2_score": 0.8774767858797979, "lm_q1q2_score": 0.8447233106255615}} {"text": "# 05 Algebra with Sympy, Scipy and Numpy \n# Review Exercises - Solutions\n\n\n\n\n\n## Review Exercise 1: Root finding \n\nFit a polynomial function to the data below.\n\nFind the roots of the fitted polynomial. 
\n\n\n```python\n# Review Exercise 1: Root finding \nx = [-6.0, -5.555555555555555, -5.444444444444445, -5.333333333333333, -5.0, -4.777777777777778, -4.555555555555555, -4.333333333333334, -4.111111111111111, -3.555555555555556, -3.3333333333333335, -3.2222222222222223, -2.2222222222222223, -1.8888888888888893, -1.7777777777777777, -1.5555555555555554, -1.4444444444444446, -1.333333333333334, -1.2222222222222223, -1.1111111111111116, -0.7777777777777777, -0.5555555555555562, -0.3333333333333339, -0.22222222222222232, 0.33333333333333304, 0.5555555555555554, 0.6666666666666661, 0.8888888888888884, 1.0, 1.2222222222222214, 1.333333333333333, 1.4444444444444438, 1.666666666666666, 1.7777777777777777, 2.1111111111111107, 2.333333333333332, 2.5555555555555554, 2.777777777777777, 2.8888888888888893, 3.1111111111111107, 3.2222222222222214, 3.5555555555555554, 3.777777777777777, 4.0, 4.111111111111111, 4.222222222222221, 4.333333333333332, 4.444444444444445, 4.8888888888888875, 5.0]\ny = [-306.0670724099247, -273.4252575751447, -236.35910170243054, -2.147806809067588, -162.88428946693543, -72.0539258242078, -49.64195238514043, -75.05934686306523, -49.40805793483066, -15.803160491117433, -20.408192287721462, -34.04243919689319, -2.6008654388252075, -0.33819910212586596, 0.5967691522163541, 1.955165125544544, 0.754741501848223, 3.1485956879192134, 0.2736824650635393, 2.535463038423905, 2.0383401626385638, 0.8371085078493934, 0.27326740330999844, -0.14152399821562134, -0.15792222719404883, -1.357836647665497, -4.064496618469092, -2.2060777524379893, -6.716174537753252, -2.381049714701943, -0.8951333867263299, -3.703956978393335, -5.121504730336851, -1.4824097773484555, -0.0658532580151797, 2.5527247901789907, 9.310234512028755, 7.839090794578473, 0.8239015424106111, 27.801254862532222, 33.099581728518, 17.182186572769048, 63.28883410018085, 38.47325866392358, 74.26392095969987, 100.73153613329536, 119.19508682705471, 46.85235728093459, 175.63882495054517, 118.62483544333234]\n```\n\n\n```python\n# Review Exercise 1: Root finding \n# Example Solution\nimport numpy as np\nimport matplotlib.pyplot as plt\nplt.plot(x,y,'co')\n\n# 3rd order polynomial \n# 4 coefficients\nc = np.polyfit(x, y, 3)\n\n# 3rd order polynomial \nyfit3 = np.poly1d(c)(x)\n\n# plot fitted function\nplt.plot(x, \n yfit3, \n 'r',\n label=f'{round(c[0],2)}*x**3 + {round(c[1],2)}*x**2 + {round(c[2],2)}*x + {round(c[3],2)}');\n\nplt.legend()\n\n# find roots\nr = np.roots(c)\nprint(r)\n\n#plot roots\nz = np.zeros(len(r)) # array of zeros \nplt.plot(r, z, 'ks') # roots\n```\n\n\n```python\n\n```\n\n## Review Exercise 2: Root Finding using an Initial Estimate\n\n__Example:__ Find the root of the cosine function that is closest to -5.\n\n \n\n\n```python\n# Review Exercise 2: Finding the Closest Root to a Point\n# Example Solution \nfrom scipy.optimize import fsolve\nprint(fsolve(np.cos, -5))\n```\n\n [-4.71238898]\n\n\n## Review Exercise 3: Systems of Equations\n### Example Engineering Application: An Electrical Circuit\n\n\n#### Kirchhoff's Voltage Law\nFor a closed loop series path the algebraic sum of all the *voltages* and *voltage drops* around any closed loop in a circuit is equal to zero.\n\n$\\sum E - \\sum V = 0 $\n\n \n\n\n#### Electrical Elements Obey Ohm's Law \nThe current through a conductor (I, units amps) is the voltage measured across the conductor (V, units volts) divided by the resistance (R, units Ohms).\n\n$$V = IR$$\n\n\nConsider a three loop current network with five resistors and\ntwo voltage sources.\n\nHere we 
have three loops, hence we can write three equations in the resistances R1, R2, R3, R4, R5 and the voltages v1, v2 to solve for the three unknown currents i1, i2, i3.\n\nWe can use Kirchhoff's voltage law to equate the voltage sources and voltage drops in each loop: 
$\\sum V = \\sum E$ \n\nand Ohm's law: $V=IR$ \n\n__Loop 1:__ $ (R_1 + R_2) i_1 - R_2 i_2 = v_1$\n\n__Loop 2:__ $ -R_2 i_1 + (R_2 + R_3 + R_4) i_2 - R_4 i_3 = 0$\n\n__Loop 3:__ $ -R_4 i_2 + (R_4 + R_5) i_3 = -v_2$
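\n\nAs an optional cross-check (not part of the original exercise), the three loop equations can also be solved symbolically with sympy, which this notebook uses again in Review Exercise 4; the symbol and variable names below are only illustrative.\n\n\n```python\n# Optional symbolic cross-check of the three loop equations using sympy.\nimport sympy\n\nR1, R2, R3, R4, R5, v1, v2 = sympy.symbols('R1 R2 R3 R4 R5 v1 v2')\ni1, i2, i3 = sympy.symbols('i1 i2 i3')\n\nloop1 = sympy.Eq((R1 + R2) * i1 - R2 * i2, v1)\nloop2 = sympy.Eq(-R2 * i1 + (R2 + R3 + R4) * i2 - R4 * i3, 0)\nloop3 = sympy.Eq(-R4 * i2 + (R4 + R5) * i3, -v2)\n\n# Solve for the three unknown currents in terms of the resistances and voltages.\ncurrents = sympy.solve([loop1, loop2, loop3], [i1, i2, i3])\nprint(currents)\n```\n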
\n\nPutting the equations in matrix form:\n\n\n\\begin{equation*}\n\\underbrace{\n\\begin{bmatrix}\n(R_1 + R_2) & -R_2 & 0 \\\\\n-R_2 & (R_2 + R_3 + R_4) & -R_4 \\\\\n0 & -R_4 & (R_4 + R_5) \\\\\n\\end{bmatrix}\n}_{\\mathbf{R}}\n\\cdot\n\\underbrace{\n\\begin{bmatrix}\ni_1 \\\\\ni_2 \\\\\ni_3 \\\\\n\\end{bmatrix}\n}_{\\mathbf{I}}\n=\\underbrace{\n\\begin{bmatrix}\nv_1 \\\\\n0 \\\\\n-v_2 \\\\\n\\end{bmatrix}\n}_{\\mathbf{V}}\n\\end{equation*}\n\nGiven the following resistance and voltage values, solve the system of equations to find the three unknown currents: i1, i2, i3.\n\n$R1=1K\\Omega$
\n$R2=300\\Omega$
\n$R3=500\\Omega$
\n$R4=1K\\Omega$
\n$R5=300\\Omega$
\n\n$v1 = 2V$
\n$v2 = 5V$\n\n\n```python\n# Review Exercise 3: Systems of Equations\n# Example Solution\nimport numpy as np\n\nR1=1000\nR2=300\nR3=500\nR4=1000\nR5=300\n\nv1 = 2\nv2 = 5\n\n# Arrays for the known values\nR = np.array([[(R1+R2), -R2, 0],\n [ -R2, (R2+R3+R4), -R4],\n [ 0, -R4, (R4+R5)]])\n\nv = np.array([v1, 0, -v2])\n\n\n# Solve for u\nI = np.linalg.solve(R, v)\nprint(I)\n```\n\n [ 0.00072615 -0.00352 -0.00655385]\n\n\n\n```python\n\n```\n\n## Review Exercise 4: Symbolic math\n\n$$ y = \\frac{x^P}{4d} $$\n\nMake $x$ the subject of the equation.\n\nUsing symbolic substitution, find the value of $x$ when:\n\n$P = 12$\n\n$d = 4$ \n\n$y = 2$\n\n\n```python\n# Review Exercise 4: Symbolic math\n# Example solution\n\nimport sympy\n\n# create a symbolic representation of all values\nP, d, y, x = sympy.symbols('P d y x')\n\n# make a symbolic equation \ny_eq = sympy.Eq(y, (x**P / (4 * d)))\nsympy.pprint(y_eq)\n\n# re-arrange for x using solve\nx_expr = sympy.solve(y_eq, x)[0]\nprint(x_expr)\n\n# make a symbolic equation for x\nx_eq = sympy.Eq(x, x_expr) # This be written as one line...\nsympy.pprint(x_eq)\n\n\n# substitute in initial condition\nsol = x_expr.subs([(P, 12), # E = 3.48e-6 --> subs 3.48e-6 for E\n (d,4),\n (y, 2),\n ])\nprint(sol)\n\n\n```\n\n P\n x \n y = \u2500\u2500\u2500\n 4\u22c5d\n (4*d*y)**(1/P)\n P _______\n x = \u2572\u2571 4\u22c5d\u22c5y \n 2**(5/12)\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "f22225cc36d81c0b1f2d8e3cd5fb97afa2924d33", "size": 27202, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "ReviewQuestions_ExampleSolutions/05_Algebra_SympyScipyNumpy__ClassMaterial.ipynb", "max_stars_repo_name": "hphilamore/UoB_PythonForEngineers_2020", "max_stars_repo_head_hexsha": "27eb7e07edecd2003d4672c83ebc6c355d92b46b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "ReviewQuestions_ExampleSolutions/05_Algebra_SympyScipyNumpy__ClassMaterial.ipynb", "max_issues_repo_name": "hphilamore/UoB_PythonForEngineers_2020", "max_issues_repo_head_hexsha": "27eb7e07edecd2003d4672c83ebc6c355d92b46b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ReviewQuestions_ExampleSolutions/05_Algebra_SympyScipyNumpy__ClassMaterial.ipynb", "max_forks_repo_name": "hphilamore/UoB_PythonForEngineers_2020", "max_forks_repo_head_hexsha": "27eb7e07edecd2003d4672c83ebc6c355d92b46b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 64.9212410501, "max_line_length": 15700, "alphanum_fraction": 0.7756414969, "converted": true, "num_tokens": 2459, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.942506726044381, "lm_q2_score": 0.8962513842182775, "lm_q1q2_score": 0.8447229578523134}} {"text": "# Python Basic Programs\n\n### 1. Complex Numbers\nWrite a function `rand_complex(n)` that returns a list of `n` random complex numbers uniformly distributed in the unit circle (i.e., the magnitudes of the numbers are all between 0 and 1). Give the function a docstring. Demonstrate the function by making a list of 25 complex numbers. 
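\n\nBefore the sample solution below (which builds each number from a random real part and a compatible imaginary part), here is one other common way to draw points inside the unit circle, shown only for comparison: pick a random angle and a radius equal to the square root of a uniform number, which spreads points evenly over the area of the disc. The function name `rand_complex_polar` is illustrative and not part of the exercise.\n\n\n```python\n# Comparison sketch only: polar sampling of complex numbers inside the unit circle.\nimport numpy as np\n\ndef rand_complex_polar(n):\n    \"\"\"Return a list of n complex numbers spread uniformly over the unit disc.\"\"\"\n    theta = np.random.uniform(0, 2 * np.pi, n)   # random angles\n    r = np.sqrt(np.random.uniform(0, 1, n))      # sqrt keeps the density uniform in area\n    return list(r * np.exp(1j * theta))\n\nprint(rand_complex_polar(5))\n```\n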
\n\n\n```python\nimport numpy as np\nimport random as random\nimport matplotlib.pyplot as plt\ndef rand_complex (n):\n \"\"\"To get random complex numbers uniformly distribued in the unit circle.\n Unit circle= A circle of unit radius, i.e. a radius of 1.\n Condition for unit circle: If (x,y) is a point on the circumference of the circle.\n Then, (x^2+y^2=1). \n\n Parameter n: Required number of complex numbers, type int.\n\n Returns m: An array of complex numbers within the unit circle.\n \"\"\"\n m = [] #Initializing an empty array\n for i in range(n):\n a = np.random.uniform(-1 , 1) #Get a random number between -1 & 1\n b = np.sqrt(1-a**2) #Get a number that fulfills (a^2+b^2=1 or b=square-root of(1-a^2))\n br = np.random.uniform(-b , b)#Get a random number between -b & b\n z = complex (a , br) #Forms complex number of the form (a + br j)\n m.append(z) #Append the array with complex numbers generated \n plt.scatter(a , br) #Plot the complex numbers\n circle=plt.Circle((0,0),1, fill= False)\n plt.gca().add_patch(circle)\n return m\n\nrand_complex(25)\n```\n\n### 2. Hashes\nWrite a function `to_hash(L) `that takes a list of complex numbers `L` and returns an array of hashes of equal length, where each hash is of the form `{ \"re\": a, \"im\": b }`. Give the function a docstring and test it by converting a list of 25 numbers generated by your `rand_complex` function. \n\n\n```python\ndef to_hash(L):\n \"\"\"Returns the hash or dictionary values of a list of complex numbers.\n\n Parameter L: List of complex numbers.\n\n Returns a: An array of hashes of equal length in the form {\"re\": a, \"im\": b} \n \"\"\"\n a = [] #Initializing an empty array\n b = len(L) #Get the length of array list\n for i in range(b): \n x = L[i].real #Stores real values of complex numbers\n y = L[i].imag #Stores imaginary values of complex numbers\n a.append({ \"re\": x, \"im\": y }) #Add all the real and imaginary parts to the array in the form of hashes\n return a\n\nto_hash(rand_complex(25)) \n```\n\n### 3. Matrices\n\nWrite a function `lower_traingular(n)` that returns an $n \\times n$ numpy matrix with zeros on the upper diagonal, and ones on the diagonal and lower diagonal. For example, `lower_triangular(3)` would return\n\n```python\narray([[1, 0, 0],\n [1, 1, 0],\n [1, 1, 1]])\n```\n\n\n```python\nimport numpy as np\ndef lower_traingular(n):\n \"\"\"Forms an nxn lower triangular matrix, with zeros above diagonal\n and ones on diagonal and below diagonal\n \n Parameter n: Dimension of resultant nxn matrix, type int. \n\n Returns x: Desired lower triangular nxn matrix.\n \"\"\"\n x = np.zeros((n , n)) #Forms a matrix of zeros of desired dimension\n for i in np.arange(n): \n for j in np.arange(n):\n if (i>=j): #Selecting diagonal and lower diagonal places\n x[i][j]=1 #Inputs 1 wherever the condition applies\n return x \n \nlower_traingular(3)\n```\n\n\n\n\n array([[1., 0., 0.],\n [1., 1., 0.],\n [1., 1., 1.]])\n\n\n\n### 4. Numpy\n\nWrite a function `convolve(M,K)` that takes an $n \\times m$ matrix $M$ and a $3 \\times 3$ matrix $K$ (called the kernel) and returns their convolution as in [this diagram](https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcTYo2_VuAlQhfeEGJHva3WUlnSJLeE0ApYyjw&usqp=CAU).\n\n\nPlease do not use any predefined convolution functions from numpy or scipy. Write your own. 
If the matrix $M$ is too small, your function should return a exception.\n\nYou can read more about convolution in [this post](https://setosa.io/ev/image-kernels/).\n\nThe matrix returned will have two fewer rows and two fewer columns than $M$. Test your function by making a $100 \\times 100$ matrix of zeros and ones that as an image look like the letter X and convolve it with the kernel\n\n$$\nK = \\frac{1}{16} \\begin{pmatrix}\n1 & 2 & 1 \\\\\n2 & 4 & 2 \\\\\n1 & 2 & 1\n\\end{pmatrix}\n$$\n\nUse `imshow` to display both images using subplots. \n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n#Take user inputs for image dimensions \nn = int(input(\"Enter no of rows for image: \"))\nm = int(input(\"Enter no of columns for image: \"))\n\n#To form an image in the form of X of desired dimensions\nimg = np.zeros((n , m)) \nfor i in range(n):\n for j in range(m):\n if (i==j or i+j==m-1):\n img[i][j] = 1\n\n#Kernel provided for convolution\nker = (1/16)*np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]])\nk = ker.shape[0]\n\n#Defining dimensions of resultant image\n#Dimensions 2 rows and columns lesser than given image (as kernel is 3x3) \nres = np.zeros(((n-k+1) , (m-k+1)))\n\n#Perfom Convolution\ndef convolve(M , K):\n \"\"\"Performs the convolution of a nxm image by a kernel of 3x3 and provides a resultant image.\n\n Parameters:\n M= nxm image on which convolution is to be performed.\n K= 3x3 kernel.\n\n Raises: Exception as dimensions of input image can not be less than kernel dimensions. \n \"\"\"\n if (img.shape[0] _Trig with a lisp._\n\n$$ cosh(y) \\equiv cos(iy) = \\frac{1}{2}(e^{-y} + e^y) $$\n\n$$ i\\ sinh(y) \\equiv sin(iy) = \\frac{1}{2 i}(e^{-y} - e^y) $$\n\n$$ tanh(y) \\equiv \\frac{sinh(y)}{cosh(y)} $$\n\nEven function: $cosh(x) > 1$\n\nOdd function: $sinh(x)$\n\nOdd function: $-1 < tanh(x) < 1$\n\n", "meta": {"hexsha": "64a0291244366ee273aad23458c90b87f754c9bf", "size": 78680, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "docs/mathematics/complex_numbers.ipynb", "max_stars_repo_name": "JeppeKlitgaard/jepedia", "max_stars_repo_head_hexsha": "c9af119a78b916bd01eb347a585ff6a6a0ca1782", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "docs/mathematics/complex_numbers.ipynb", "max_issues_repo_name": "JeppeKlitgaard/jepedia", "max_issues_repo_head_hexsha": "c9af119a78b916bd01eb347a585ff6a6a0ca1782", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/mathematics/complex_numbers.ipynb", "max_forks_repo_name": "JeppeKlitgaard/jepedia", "max_forks_repo_head_hexsha": "c9af119a78b916bd01eb347a585ff6a6a0ca1782", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 354.4144144144, "max_line_length": 29660, "alphanum_fraction": 0.9267666497, "converted": true, "num_tokens": 1342, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9161096227509861, "lm_q2_score": 0.9219218423633528, "lm_q1q2_score": 0.8445814712133852}} {"text": "\n\n# Linear Variational Principle\nThis notebook demonstrates the Linear Variational Method to the particle in a box of length $L = 10$ atomic units \nwith a delta function potential centered at $x_0=5$ atomic units. This notebook will attempt to show graphically why inclusion of excited-states of the ordinary particle in a box system can improve the energy of the particle in a box with a delta potential. \n\n# The approach\nWe will optimize\nthe trial wavefunction given by \n\\begin{equation}\n\\Phi(x) = \\sum_{n=1}^N c_n \\psi_n(x)\n\\end{equation}\nwhere the coefficients $c_n$ are real numbers\nand $\\psi_n(x)$ are the energy eigenfunctions of the particle in a box with no potential:\n\\begin{equation}\n\\psi_n(x) = \\sqrt{\\frac{2}{L} } {\\rm sin}\\left(\\frac{n \\pi x}{L} \\right).\n\\end{equation}\n\nWe will seek to minimize the energy functional through the expansion coefficients, where the\nenergy functional can be written as\n\\begin{equation}\nE[\\Phi(x)] = \\frac{\\int_0^{L} \\Phi^* (x) \\: \\hat{H} \\: \\Phi(x) dx }{\\int_0^{L} \\Phi^* (x) \\: \\Phi(x) dx }.\n\\end{equation}\n\n\nThe Hamiltonian operator in the box is given by \n\\begin{equation}\n\\hat{H} = -\\frac{\\hbar^2}{2M} \\frac{d^2}{dx^2} + \\delta(x-x_0);\n\\end{equation}\nin natural units, $\\hbar$ and the electron mass $M$ are equal to 1.\n\n$E[\\Phi(x)]$ can be expanded as\n\\begin{equation}\nE[\\Phi(x)] \\sum_{n=1}^N \\sum_{m=1}^N c_n c_m S_{nm} = \\sum_{n=1}^N \\sum_{m=1}^N c_n c_m H_{nm}\n\\end{equation}\nwhere \n\\begin{equation}\nS_{nm} = \\int_0^L \\psi_n(x) \\psi_m(x) dx = \\delta_{nm}\n\\end{equation}\nand\n\\begin{equation}\nH_{nm} = \\int_0^L \\psi_n(x) \\hat{H} \\psi_m(x) dx. \n\\end{equation}\n\nSolving this equation can be seen to be identical to diagonalizing the matrix ${\\bf H}$, whose elements are $H_{nm}$ to obtain energy eigenvalues $E$ and eigenvectors ${\\bf c}$. The lowest eigenvalue corresponds to the variational ground state energy, and the corresponding eigenvector can be\nused to expand the variational ground state wavefunction through Equation 1.\n\n# Computing elements of the matrix\nWe can work out a general expression for the integrals $H_{nm}$:\n\\begin{equation}\nH_{nm} = \\frac{\\hbar^2 \\pi^2 n^2}{2 M L^2} \\delta_{nm} + \\frac{2}{L} {\\rm sin}\\left( \\frac{n \\pi x_0}{L} \\right) {\\rm sin}\\left( \\frac{m\\pi x_0}{L} \\right).\n\\end{equation}\n\n\nImport NumPy and PyPlot libraries\n\n\n```python\nimport numpy as np\nfrom matplotlib import pyplot as plt\nfrom scipy import signal\n\n```\n\nWrite a function that computes the matrix elements $H_{ij}$ given quantum numbers $i$ and $j$, length of the box $L$, and location of the delta function $x_0$. We will assume atomic units where $\\hbar = 1$ and $M = 1$.\n\n\n```python\n### Function to return integrals involving Hamiltonian and basis functions\ndef H_nm(n, m, L, x0):\n ''' We will use the expression for Hnm along with given values of L and x_0 \n to compute the elements of the Hamiltonian matrix '''\n if n==m:\n ham_int = np.pi**2 * m**2/(2 * L**2) + (2/L) * np.sin(n*np.pi*x0/L) * np.sin(m*np.pi*x0/L)\n else:\n ham_int = (2/L) * np.sin(n*np.pi*x0/L) * np.sin(m*np.pi*x0/L)\n \n return ham_int\n\ndef psi_n(n, L, x):\n return np.sqrt(2/L) * np.sin(n * np.pi * x/L)\n```\n\nNext we will create a numpy array called $H_{mat}$ that can be used to store the Hamiltonian matrix elements. 
We can start by considering a trial wavefunction that is an expansion of the first 3 PIB energy eigenfunctions, so our Hamiltonian in this case should be a 3x3 numpy array.\n\n\n```python\ndim = 3\nH_mat = np.zeros((dim,dim))\n```\n\nYou can use two nested $for$ loops along with your $H_{ij}$ function to fill out the values of this matrix. Note that the indices for numpy arrays start from zero while the quantum numbers for our system start from 1, so we must offset our quantum numbers by +1 relative to our numpy array indices.\n\n\n```python\n### define L to be 10 and x0 to be 5\nL = 10\nx0 = 5\n### loop over indices of the basis you are expanding in\n### and compute and store the corresponding Hamiltonian matrix elements\nfor n in range(0,dim):\n for m in range(0,dim):\n H_mat[n,m] = H_nm(n+1, m+1, L, x0)\n\n### Print the resulting Hamiltonian matrix\nprint(H_mat)\n```\n\n [[ 2.49348022e-01 2.44929360e-17 -2.00000000e-01]\n [ 2.44929360e-17 1.97392088e-01 -2.44929360e-17]\n [-2.00000000e-01 -2.44929360e-17 6.44132198e-01]]\n\n\n\n```python\n### compute eigenvalues and eigenvectors of H_mat\n### store eigenvalues to E_vals and eigenvectors to c\nE_vals, c = np.linalg.eig(H_mat)\n\n### The eigenvalues will not necessarily be sorted from lowest-to-highest; this step will sort them!\nidx = E_vals.argsort()[::1]\nE_vals = E_vals[idx]\nc = c[:,idx]\n\n### print lowest eigenvalues corresponding to the \n### variational estimate of the ground state energy\nprint(\"Ground state energy with potential is approximately \",E_vals[0])\nprint(\"Ground state energy of PIB is \",np.pi**2/(200))\n```\n\n Ground state energy with potential is approximately 0.16573541893898724\n Ground state energy of PIB is 0.04934802200544679\n\n\nLet's plot the first few eigenstates of the ordinary PIB against the potential:\n\n\n```python\n### array of x-values\nx = np.linspace(0,L,100)\n### first 3 energy eigenstates of ordinary PIB\npsi_1 = psi_n(1, L, x)\npsi_2 = psi_n(2, L, x)\npsi_3 = psi_n(3, L, x)\nVx = signal.unit_impulse(100,50)\n\nplt.plot(x, psi_1, 'orange', label='$\\psi_1$')\nplt.plot(x, psi_2, 'green', label='$\\psi_2$')\nplt.plot(x, psi_3, 'blue', label='$\\psi_3$')\nplt.plot(x, Vx, 'purple', label='V(x)')\nplt.legend()\nplt.show()\n```\n\nNow let's plot the probability density associated with the first three eigenstates along with the probability density of the variational solution against the potential:\n\n\n```python\nPhi = c[0,0]*psi_1 + c[1,0]*psi_2 + c[2,0]*psi_3\n\n\nplt.plot(x, psi_1*psi_1, 'orange', label='$|\\psi_1|^2$')\nplt.plot(x, psi_2*psi_2, 'green', label='$|\\psi_2|^2$')\nplt.plot(x, psi_3*psi_3, 'blue', label='$|\\psi_3|^2$')\nplt.plot(x, Phi*Phi, 'red', label='$|\\Phi|^2$')\nplt.plot(x, Vx, 'purple', label='V(x)')\nplt.legend()\nplt.show()\n```\n\n### Questions To Think About!\n1. Is the energy you calculated above higher or lower than the ground state energy of the ordinary particle in a box system (without the delta function potential)?\nAnswer: The energy calculated to approximate the ground state energy of the PIB + Potential using the linear variational method is higher than the true PIB ground state energy (0.165 atomic units for the PIB + Potential compared to 0.0493 atomic units for the ordinary PIB). The addition of the potential should increase the ground state energy because it is repulsive.\n2. Why do you think mixing in functions that correspond to excited states in the ordinary particle in a box system actually helped to improve (i.e. 
lower) your energy in the system with the delta function potential?\nAnswer: Certain excited states (all states with even $n$) go to zero at the center of the box, and the repulsive potential is localized to the center of the box. Therefore, all excited states with even $n$ will move electron density away from the repulsive potential, which can potentially loer the energy.\n3. Increase the number of basis functions to 6 (so that ${\\bf H}$ is a 6x6 matrix and ${\\bf c}$ is a vector with 6 entries) and repeat your calculation of the variational estimate of the ground state energy. Does the energy improve (lower) compared to what it was when 3 basis functions were used?\nAnswer: Yes, the energy improves. With 3 basis functions, the ground state energy is approximated to be 0.165 atomic units and with 6 basis functions, the ground state energy is approximated to be 0.155 atomic units. The added flexibility of these additional basis functions (specifically more basis functions with $n$ even) allows greater flexibility in optimizing a wavefunction that describes an electron effectively avoiding the repulsive potential in the center of the box.\n\n\n```python\n\n```\n", "meta": {"hexsha": "72eb0a0dbc114a8c2d4ae6f8ac1f8ecc89d88c09", "size": 63955, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/Linear_Variational_Method.ipynb", "max_stars_repo_name": "FoleyLab/FoleyLab.github.io", "max_stars_repo_head_hexsha": "1f84e4dc2f87286dbd4e07e483ac1e48943cb493", "max_stars_repo_licenses": ["CC-BY-3.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notebooks/Linear_Variational_Method.ipynb", "max_issues_repo_name": "FoleyLab/FoleyLab.github.io", "max_issues_repo_head_hexsha": "1f84e4dc2f87286dbd4e07e483ac1e48943cb493", "max_issues_repo_licenses": ["CC-BY-3.0"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2020-02-25T08:45:19.000Z", "max_issues_repo_issues_event_max_datetime": "2021-05-19T04:28:30.000Z", "max_forks_repo_path": "notebooks/Linear_Variational_Method.ipynb", "max_forks_repo_name": "FoleyLab/FoleyLab.github.io", "max_forks_repo_head_hexsha": "1f84e4dc2f87286dbd4e07e483ac1e48943cb493", "max_forks_repo_licenses": ["CC-BY-3.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 159.4887780549, "max_line_length": 26534, "alphanum_fraction": 0.8606676569, "converted": true, "num_tokens": 2364, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9161096227509861, "lm_q2_score": 0.921921835927664, "lm_q1q2_score": 0.8445814653175887}} {"text": "# Introductory junk on root finding \nRoot finding is the generalized name of funciton minimization, just rather than finding where a function is equal to zero, we do a little extra work to use it to find where a function is equal to _something_. 
Most generally, a minimization problem is the computational task of finding the value of $x$ which satisfies \n\n$$\n\\begin{equation}\nf(x) = 0\n\\end{equation}\n$$\n\nWell, to turn this into a _root finding_ problem, we are now looking for\n\n$$\n\\begin{equation}\nf(x) = c\n\\end{equation}\n$$\n\nWhere, as in all minimization problems you'll find the phrase \"constants don't matter\" thrown around, so, to find the value of $x$ to see where our funciton evaluates to $c$ we simply have the problem\n\n$$\n\\begin{equation}\nf(x) - c = 0\n\\end{equation}\n$$\n\nour classic minimization problem\n\n## Newton-Raphson \n\nThe method we will focus on here is the Newton-Raphson method before moving on to more general techniques, as I find it's a good introduction to the general idea of how minimization algorithms come to be. (If this was a craft beer, this is a discussion of \"mouth-feel\" for minimization routines). The idea of the Newton Raphson method is to use information about the derivative to interpolate our way to zero. \n\n### Derivation \n\nAs with most derivations, our story begins with the mathematical equivalent of \"adding more layers\" to our neural network - we expand our function $f(x)$ in a Taylor Series about some small value. That is, for a small value, say $\\epsilon$ we have\n\n$$\n\\begin{equation}\nf(x + \\epsilon) \\approx f(x) + \\epsilon f^\\prime(x) + \\epsilon^2 f^{\\prime \\prime}(x) + ...\n\\end{equation}\n$$\n\nWhere, as we're assuming $\\epsilon$ is sufficently small (or we're close enough to our solution) we neglect any terms beyond those linear in $\\epsilon$. That is, we have \n\n$$\n\\begin{aligned}\nf(x +\\epsilon) & \\approx f(x) + \\epsilon f^\\prime(x) = 0 \\\\\n\\implies & \\epsilon = -\\frac{f(x)}{f^\\prime(x)}\n\\end{aligned}\n$$\n\nWhere we can define our update formulas to be our previous point, minus the above equation. Or\n\n$$\n\\begin{aligned}\nx_{i+1} = & x_i -\\frac{f(x_i)}{f^\\prime(x_i)} \\\\\n\\epsilon_{i+1} = & \\epsilon_i -\\frac{f(x_i)}{f^\\prime(x_i)}\n\\end{aligned}\n$$\n\nWhere we would repeat this process until subsequent values for $x_i$ no longer change much. \n\nOne caveat to the above is the iterative nature of finding $x_n$, how do we get this algorithm going? The answer here is that we have to provide an initial guess at our solution. And with any luck, we should know approximately where our solution is so that we can guess at it, and come to a solution rather quickly. As we explore this notebook, we will see rather quickly that the inital guess at our solution matters a lot when it comes to finding solutions -- if we can find them at all!\n\n# Challenge 1\n\nImplement your own Newton Raphson method below. I have set up a function for you to fill in with your own implemenation of the Newton Raphson method. I have also provided a completed function in the `scripts` folder of this directory if you don't want to write it yourself, but I do strongly recommend that you try!\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt \nfrom scipy.optimize import brentq\n%matplotlib inline\n\ndef function(x):\n return x**2\n\ndef fprime(x):\n return 2 * x\n\ndef MyNR(f, fprime, x0, max_iter = 100, prec = 1e-6, verbose = False):\n '''\n Your Implementation of the Newton Raphson method. 
The variables are as follows:\n f --> a Python function which takes a point x, and returns that function as evaluated\n at that point x\n fprime --> the derivative of your function f, which takes a point x and returns a the value \n of the derivative at x\n x0 --> the inital guess at your solution\n max_iter --> How many times you want to try to find your solution\n prec --> the precision of your calculation ie. how close to zero is our soltion?\n verbose --> Do we want to print convergence messages\n '''\n \n # So we can plot our solutions path\n x = [x0]\n x_val = x0\n \n for i in range(max_iter):\n \n funciton_value = 1# YOUR CODE HERE\n derivative_value = 1#YOUR CODE HERE\n epsilon = 1#YOUR CODE HERE\n new_x =1 # YOUR CODE HERE\n \n x.append(new_x)\n \n if abs(epsilon) < prec:\n return x_val, x\n if verbose:\n print(f\"This calculation did not converge after {max_iter} iterations\")\n \n return x_val, x\n```\n\n# How Does It Find Solutions\n\nLet's take a look at how well we find solutions using Newton Raphson. Let's start with a nice easy case of trying to find where\n\n$$\nx^2 = 0\n$$\n\nThis is a nice friendly equation with only one minima, let's see how well we find zero.\n\n## Challenge 2\n\n1. Try adding different values for `x0`, does your path to solution change? \n2. Use your own function `MyNR`, do you get the same results?\n\nNotice how many iterations it takes before it converges depending on your initial guess\n\n\n```python\nimport sys\nsys.path.append('scripts/')\nimport fractalfuncs as FF\n\ndef func(x):\n return np.power(x,2)\n\ndef deriv(x):\n return 2*np.array(x)\n \n \nx= np.arange(-1,1,1/100)\n\nplt.plot(x, func(x))\n\nx0 = -8\n\n# Replace FF.NewtonRaphson with your own implementation if you desire\nmin_ , points = FF.NewtonRaphson(func, deriv, x0=x0,prec= 1e-5) \n\nplt.plot(points, func(points), linestyle='--', marker='o', color='b')\nplt.text(-.5, 0.5, f\"I converged in {len(points)} iterations\", size = 16)\nplt.text(-.5, 0.45, f\"Found root at $x=${round(min_,2)}\", size = 16)\nplt.ylim([0,0.6])\nplt.xlim([-.6,.6])\n```\n\nSo that was pretty easy, but what about if we use a \"spicer\" function than a simple quadratic? Lets try something that we couldn't invert easily by hand, something like say\n\n$$\nf(x) = -10 \\exp\\left(-x^2\\right)-5 \\exp\\left( \\left[x-5\\right]^2\\right) +x = 0\n$$\n\nwhere in fact, inverting this equation and solving for where x is equal to zero by hand is quite impossible! But let's plot that function below and see what it looks like\n\n\n```python\ndef double_min(x):\n return -10 * np.exp(-x**2) - 5*np.exp(-(x - 5)**2) +x\ndef double_min_deriv(x):\n return 20 * x * np.exp(-x**2) + 10 * (x-5) * np.exp(-(x-5)**2) \n\nx= np.arange(-5,10,1/100)\nplt.plot(x, double_min(x))\nplt.axhline(0, c = 'black')\n```\n\nAh ha! so we see that we have two roots, one at about 1.5, and another at about 4.5. Let's see how well our root finding technique can find those roots below\n\n## Challenge 3\n\nChange your initial guess `x0`, can you find values that do not find a solution? 
Note that finding the root at approximately 1.5 is much more difficult!\n\n\n```python\n# Replace FF.NewtonRaphson with your own implementation if you desire\ndef double_min(x):\n return -10 * np.exp(-x**2) - 5*np.exp(-(x - 5)**2) +x\ndef double_min_deriv(x):\n return 20 * x * np.exp(-x**2) + 10 * (x-5) * np.exp(-(x-5)**2) \n\nx0 = 1\nmin_ , points = FF.NewtonRaphson(double_min, double_min_deriv, x0=x0, prec= 1e-5) \n\nplt.plot(x, double_min(x))\nplt.text(-5, 6.0, f\"I converged in {len(points)} iterations\", size = 16)\nplt.text(-5, 4.0, f\"Found root at $x$={round(min_,2)}\", size = 16)\n\nplt.plot(np.array(points), double_min(np.array(points)), linestyle='--', marker='o', color='b')\nlast = points[-1]\n\nplt.scatter(last,double_min(last) , c = 'r', marker = \"x\", s=200)\n\n```\n\nWhere on a function like the one above, it probably didn't take you too long to find an intial guess at a solution which did not. This is not your fault. In fact, this is something that just happens when it comes to root finding! Because of this there are a few pieces of advice when it comes to root finding\n\n1. Always plot your function and try to have an idea approximately where your solution lies so you can make an intelligent initial guess\n2. If your solution doesn't converge, odds are your initial guess was bad!\n\n\nYou may be wondering how this can be used to generate fractals, but we will start abusing the convergence properties of the Newton Raphson (and other formulas) to find areas where our function converges, and areas where it does not. \n\n## More Things\n\nTry with other functions! If you're not sure on how to take the derivative of a function, wolframalpha may help, alternatively you can do it numerically with a provided function like so \n\n\n\n```python\n# Only if you haven't imported it already\nimport sys\nsys.path.append('scripts/')\nimport fractalfuncs as FF\ndef myfunction(z):\n return z**2 # for example\n\ndef myderivative(z):\n return FF.nderiv(myfunction, z)\n```\n", "meta": {"hexsha": "3604dc30a5c1641ef74f0831577ce7e114ebe74f", "size": 72820, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/fractals/RootFinding.ipynb", "max_stars_repo_name": "lgfunderburk/mathscovery", "max_stars_repo_head_hexsha": "da9fcfd7f660835c663985c94645aec6dfd9f7bb", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notebooks/fractals/RootFinding.ipynb", "max_issues_repo_name": "lgfunderburk/mathscovery", "max_issues_repo_head_hexsha": "da9fcfd7f660835c663985c94645aec6dfd9f7bb", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/fractals/RootFinding.ipynb", "max_forks_repo_name": "lgfunderburk/mathscovery", "max_forks_repo_head_hexsha": "da9fcfd7f660835c663985c94645aec6dfd9f7bb", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-03-12T00:49:57.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-12T00:49:57.000Z", "avg_line_length": 182.05, "max_line_length": 22232, "alphanum_fraction": 0.8922960725, "converted": true, "num_tokens": 2321, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9219218327098193, "lm_q2_score": 0.9161096084360388, "lm_q1q2_score": 0.8445814491724278}} {"text": "# Constrained optimization\n\nNow we will move to studying constrained optimizaton problems i.e., the full problem\n$$\n\\begin{align} \\\n\\min \\quad &f(x)\\\\\n\\text{s.t.} \\quad & g_j(x) \\geq 0\\text{ for all }j=1,\\ldots,J\\\\\n& h_k(x) = 0\\text{ for all }k=1,\\ldots,K\\\\\n&a_i\\leq x_i\\leq b_i\\text{ for all } i=1,\\ldots,n\\\\\n&x\\in \\mathbb R^n,\n\\end{align}\n$$\nwhere for all $i=1,\\ldots,n$ it holds that $a_i,b_i\\in \\mathbb R$ or they may also be $-\\infty$ of $\\infty$.\n\nFor example, we can have an optimization problem\n$$\n\\begin{align} \\\n\\min \\quad &x_1^2+x_2^2\\\\\n\\text{s.t.} \\quad & x_1+x_2-1\\geq 0\\\\\n&-1\\leq x_1\\leq 1, x_2\\leq 3.\\\\\n\\end{align}\n$$\n\nIn order to optimize that problem, we can define the following python function:\n\n\n```python\nimport numpy as np\ndef f_constrained(x):\n return np.linalg.norm(x)**2,[x[0]+x[1]-1],[]\n```\n\nNow, we can call the function:\n\n\n```python\n(f_val,ieq,eq) = f_constrained([1,0])\nprint \"Value of f is \"+str(f_val)\nif len(ieq)>0:\n print \"The values of inequality constraints are:\"\n for ieq_j in ieq:\n print str(ieq_j)+\", \"\nif len(eq)>0:\n print \"The values of the equality constraints are:\"\n for eq_k in eq:\n print str(eq_k)+\", \"\n```\n\n Value of f is 1.0\n The values of inequality constraints are:\n 0, \n\n\nIs this solution feasible?\n\n\n```python\nif all([ieq_j>=0 for ieq_j in ieq]) and all([eq_k==0 for eq_k in eq]):\n print \"Solution is feasible\"\nelse:\n print \"Solution is infeasible\"\n```\n\n Solution is feasible\n\n\n# Indirect and direct methods for constrained optimization\n\nThere are two categories of methods for constrained optimization: Indirect and direct methods. The main difference is that\n1. Indirect methods convert the constrained optimization problem into a single or a sequence of unconstrained optimization problems, that are then solved. Often, the intermediate solutions do not need to be feasbiel, the sequence of solutions converges to a solution that is optimal (and, thus, feasible).\n2. Direct methods deal with the constrained optimization problem directly. 
In this case, the intermediate solutions are feasible.\n\n# Indirect methods\n\n## Penalty function methods\n\n**IDEA:** Include constraints into the objective function with the help of penalty functions that penalize constraint violations.\n\nLet, $\\alpha(x):\\mathbb R^n\\to\\mathbb R$ be a function so that \n* $\\alpha(x)=$ for all feasible $x$\n* $\\alpha(x)>0$ for all infeasible $x$.\n\nDefine optimization problems\n$$\n\\begin{align} \\\n\\min \\qquad &f(x)+r\\alpha(x)\\\\\n\\text{s.t.} \\qquad &x\\in \\mathbb R^n\n\\end{align}\n$$\nfor $r>0$ and $x_r$ be the optimal solutions of these problems.\n\nIn this case, the optimal solutions $x_r$ converge to the optimal solution of the constrained problem, when $r\\to\\infty$, if such solution exists.\n\nFor example, good ideas for penalty functions are\n* $h_k(x)^2$ for equality constraints,\n* $\\left(\\min\\{0,g_j(x)\\}\\right)^2$ for inequality constraints.\n\n\n```python\ndef alpha(x,f):\n (_,ieq,eq) = f(x)\n return sum([min([0,ieq_j])**2 for ieq_j in ieq])+sum([eq_k**2 for eq_k in eq])\n```\n\n\n```python\nalpha([1,0],f_constrained)\n```\n\n\n\n\n 0\n\n\n\n\n```python\ndef penalized_function(x,f,r):\n return f(x)[0] + r*alpha(x,f)\n```\n\n\n```python\npenalized_function([-1,0],f_constrained,10000)\n```\n\n\n\n\n 40001.0\n\n\n\n\n```python\nfrom scipy.optimize import minimize\nres = minimize(lambda x:penalized_function(x,f_constrained,100000),\n [0,0],method='Nelder-Mead', \n options={'disp': True})\nprint res.x\n```\n\n Optimization terminated successfully.\n Current function value: 0.499998\n Iterations: 57\n Function evaluations: 96\n [ 0.49994305 0.50005243]\n\n\n\n```python\n(f_val,ieq,eq) = f_constrained(res.x)\nprint \"Value of f is \"+str(f_val)\nif len(ieq)>0:\n print \"The values of inequality constraints are:\"\n for ieq_j in ieq:\n print str(ieq_j)+\", \"\nif len(eq)>0:\n print \"The values of the equality constraints are:\"\n for eq_k in eq:\n print str(eq_k)+\", \"\n\nif all([ieq_j>=0 for ieq_j in ieq]) and all([eq_k==0 for eq_k in eq]):\n print \"Solution is feasible\"\nelse:\n print \"Solution is infeasible\"\n```\n\n Value of f is 0.49999548939\n The values of inequality constraints are:\n -4.51660156242e-06, \n Solution is infeasible\n\n\n### How to set the penalty term $r$?\n\nThe penalty term should\n* be large enough in order for the solutions be close enough to the feasible region, but\n* not be too large to\n * cause numerical problems, or\n * cause premature convergence to non-optimal solutions because of relative tolerances.\n\nUsually, the penalty term is either\n* set as big as possible without causing problems (hard to know), or\n* updated iteratively.\n\n\n# Barrier function methods\n\n**IDEA:** Prevent leaving the feasible region so that the value of the objective is $\\infty$ outside the feasible set.\n\nThis method is only applicable to problems with inequality constraints and for which the set \n$$\\{x\\in \\mathbb R^n: g_j(x)>0\\text{ for all }j=1,\\ldots,J\\}$$\nis non-empty.\n\nLet $\\beta:\\{x\\in \\mathbb R^n: g_j(x)>0\\text{ for all }j=1,\\ldots,J\\}\\to \\mathbb R$ be a function so that $\\beta(x)\\to \\infty$, when $x\\to\\partial\\{x\\in \\mathbb R^n: g_j(x)>0\\text{ for all }j=1,\\ldots,J\\}$, where $\\partial A$ is the boundary of the set $A$. Now, define optimization problem \n$$\n\\begin{align}\n\\min \\qquad & f(x) + r\\beta(x)\\\\\n\\text{s.t. 
} \\qquad & x\\in \\{x\\in \\mathbb R^n: g_j(x)>0\\text{ for all }j=1,\\ldots,J\\}.\n\\end{align}\n$$\nand let $x_r$ be the optimal solution of this problem (which we assume to exist for all $r>0$).\n\nIn this case, $x_r$ converges to the optimal solution of the problem (if it exists), when $r\\to 0^+$ (i.e., $r$ converges to zero from the right).\n\nA good idea for barrier algorithm is $\\frac1{g_j(x)}$.\n\n\n```python\ndef beta(x,f):\n _,ieq,_ = f(x)\n try:\n value=sum([1/max([0,ieq_j]) for ieq_j in ieq])\n except ZeroDivisionError:\n value = float(\"inf\")\n return value\n```\n\n\n```python\ndef function_with_barrier(x,f,r):\n return f(x)[0]+r*beta(x,f)\n```\n\n\n```python\nfrom scipy.optimize import minimize\nres = minimize(lambda x:function_with_barrier(x,f_constrained,0.00000000000001),\n [1,1],method='Nelder-Mead', options={'disp': True})\nprint res.x\n```\n\n Optimization terminated successfully.\n Current function value: 0.500000\n Iterations: 78\n Function evaluations: 136\n [ 0.49998927 0.50001085]\n\n\n\n```python\n(f_val,ieq,eq) = f_constrained(res.x)\nprint \"Value of f is \"+str(f_val)\nif len(ieq)>0:\n print \"The values of inequality constraints are:\"\n for ieq_j in ieq:\n print str(ieq_j)+\", \"\nif len(eq)>0:\n print \"The values of the equality constraints are:\"\n for eq_k in eq:\n print str(eq_k)+\", \"\nif all([ieq_j>=0 for ieq_j in ieq]) and all([eq_k==0 for eq_k in eq]):\n print \"Solution is feasible\"\nelse:\n print \"Solution is infeasible\"\n```\n\n Value of f is 0.500000122097\n The values of inequality constraints are:\n 1.21864303093e-07, \n Solution is feasible\n\n\n## Other notes about using penalty and barrier function methods\n\n* It is worthwile to consider whether feasibility can be compromized. If the constraints do not have any tolerances, then barrier function method should be considered.\n* Also barrier methods parameter can be set iteratively\n* Penalty and barrier functions should be chosen so that they are differentiable (thus $x^2$ above)\n* In both methods, the minimum is attained at the limit.\n* Different penalty and barrier parameters can be used for differnt constraints, even for same problem.\n", "meta": {"hexsha": "45256d4d304a6853e389d2698bef91a04485aac0", "size": 14539, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Lecture 6, Indirect methods for constrained optimization.ipynb", "max_stars_repo_name": "maeehart/TIES483", "max_stars_repo_head_hexsha": "cce5c779aeb0ade5f959a2ed5cca982be5cf2316", "max_stars_repo_licenses": ["CC-BY-3.0"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2019-04-26T12:46:14.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-23T03:38:59.000Z", "max_issues_repo_path": "Lecture 6, Indirect methods for constrained optimization.ipynb", "max_issues_repo_name": "maeehart/TIES483", "max_issues_repo_head_hexsha": "cce5c779aeb0ade5f959a2ed5cca982be5cf2316", "max_issues_repo_licenses": ["CC-BY-3.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Lecture 6, Indirect methods for constrained optimization.ipynb", "max_forks_repo_name": "maeehart/TIES483", "max_forks_repo_head_hexsha": "cce5c779aeb0ade5f959a2ed5cca982be5cf2316", "max_forks_repo_licenses": ["CC-BY-3.0"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2016-01-08T16:28:11.000Z", "max_forks_repo_forks_event_max_datetime": "2021-04-10T05:18:10.000Z", "avg_line_length": 25.2852173913, 
"max_line_length": 321, "alphanum_fraction": 0.5285783066, "converted": true, "num_tokens": 2227, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9407897525789548, "lm_q2_score": 0.8976952880018481, "lm_q1q2_score": 0.8445425278905522}} {"text": "# Black-Scholes Option Pricing Model\n\nBlack\u2013Scholes\u2013Merton model is a mathematical model for the dynamics of a financial market containing derivative investment instruments. From the partial differential equation in the model, known as the Black\u2013Scholes equation, one can deduce the Black\u2013Scholes formula, which gives a theoretical estimate of the price of European-style options and shows that the option has a unique price regardless of the risk of the security and its expected return (instead replacing the security's expected return with the risk-neutral rate).\n\nThe key idea behind the model is to hedge the option by buying and selling the underlying asset in just the right way and, as a consequence, to eliminate risk. This type of hedging is called \"continuously revised delta hedging\" and is the basis of more complicated hedging strategies such as those engaged in by investment banks and hedge funds.\n\nThe Black\u2013Scholes model assumes that the market consists of at least one risky asset, usually called the stock, and one riskless asset, usually called the money market, cash, or bond.\n\nNow we make assumptions on the assets (which explain their names):\n\n- (riskless rate) The rate of return on the riskless asset is constant and thus called the risk-free interest rate.\n- (random walk) The instantaneous log return of stock price is an infinitesimal random walk with drift; more precisely, it is a geometric Brownian motion, and we will assume its drift and volatility are constant (if they are time-varying, we can deduce a suitably modified Black\u2013Scholes formula quite simply, as long as the volatility is not random).\n- The stock does not pay a dividend.\n\nAssumptions on the market:\n\n- There is no arbitrage opportunity (i.e., there is no way to make a riskless profit).\n- It is possible to borrow and lend any amount, even fractional, of cash at the riskless rate.\n- It is possible to buy and sell any amount, even fractional, of the stock (this includes short selling).\n- The above transactions do not incur any fees or costs (i.e., frictionless market).\n\n\nThe Black-Scholes formula:\n\n\\begin{equation}\n\\frac{\\partial V}{ \\partial t} + \\frac{1}{2}\\sigma^{2} S^{2} \\frac{\\partial^{2} V}{\\partial S^2} + r S \\frac{\\partial V}{\\partial S} - r V = 0\n\\end{equation}\n\n\"The Greeks\" measure the sensitivity of the value of a derivative or a portfolio to changes in parameter value(s) while holding the other parameters fixed. They are partial derivatives of the price with respect to the parameter values. One Greek, \"gamma\" is a partial derivative of another Greek, \"delta\" in this case.\n\nThe Greeks are important not only in the mathematical theory of finance, but also for those actively trading. Financial institutions will typically set (risk) limit values for each of the Greeks that their traders must not exceed. Delta is the most important Greek since this usually confers the largest risk. 
Many traders will zero their delta at the end of the day if they are speculating and following a delta-neutral hedging approach as defined by Black\u2013Scholes.\nTypically the following Greeks are of interest: Delta, Vega and Gamma.\n\nBlack-Scholes Call and Put Option Price Formulas:\n\n$C=S_{0}*N(d_{1})-Ke^{-rt}*N(d_{2})$\n\n$P=Ke^{-rt}*N(-d_{2})-S_{0}*N(-d_{1})$\n\nwhere $N(x)$ is the standard normal cumulative distribution function\n\n$d_{1}=\\frac{\\ln{\\frac{S_{o}}{K}}+t(r+\\frac{\\sigma^{2}}{2})}{\\sigma\\sqrt{t}}$\n\n$d_{2}=d_{1}-\\sigma\\sqrt{t}$\n\n\n```python\nimport math\nfrom scipy import stats\n```\n\n\n```python\n# Option parameters\n\nK = 110 # strike price\nr = .04 # interest rate\nsigma = .3 # volatility - standard deviation of the stock\nT = 1 # time maturity\nS0 = 100 # starting price of the stock\n```\n\n\n```python\ndef get_black_scholes_simulation_results(K, r, sigma, T, S0, is_call=True):\n \"\"\"\n Calculates option price using Black-Scholes model, \n returns the option value and the dict of calculated greeks in the form of a list\n \"\"\"\n d1 = (math.log(S0 / K) + \n (r + (0.5 * sigma**2)) * T) / (sigma * math.sqrt(T))\n d2 = d1 - (sigma * math.sqrt(T))\n\n if is_call:\n call_value = S0 * stats.norm.cdf(d1, 0, 1) - K * math.exp(\n -r * T) * stats.norm.cdf(d2, 0, 1)\n delta_call = stats.norm.cdf(d1, 0, 1)\n gamma_call = stats.norm.pdf(d1, 0, 1) / (S0 * sigma * math.sqrt(T))\n theta_call = -(r * K * math.exp(-r * T) * stats.norm.cdf(d2, 0, 1)) - (\n sigma * S0 * stats.norm.pdf(d1, 0, 1) / (2 * math.sqrt(T)))\n rho_call = T * K * math.exp(-r * T) * stats.norm.cdf(d2, 0, 1)\n vega_call = math.sqrt(T) * S0 * stats.norm.pdf(d1, 0, 1)\n\n return [\n call_value, {\n 'delta': delta_call,\n 'gamma': gamma_call,\n 'theta': theta_call,\n 'rho': rho_call,\n 'vega': vega_call\n }\n ]\n\n else:\n put_value = K * math.exp(-r * T) * stats.norm.cdf(-d2, 0, 1) - (\n S0 * stats.norm.cdf(-d1, 0, 1))\n delta_put = -stats.norm.cdf(-d1, 0, 1)\n gamma_put = stats.norm.pdf(d1, 0, 1) / (S0 * sigma * math.sqrt(T))\n theta_put = (r * K * math.exp(-r * T) * stats.norm.cdf(-d2, 0, 1)) - (\n sigma * S0 * stats.norm.pdf(d1, 0, 1) / (2 * math.sqrt(T)))\n rho_put = -T * K * math.exp(-r * T) * stats.norm.cdf(-d2, 0, 1)\n vega_put = math.sqrt(T) * S0 * stats.norm.pdf(d1, 0, 1)\n\n return [\n put_value, {\n 'delta': delta_put,\n 'gamma': gamma_put,\n 'theta': theta_put,\n 'rho': rho_put,\n 'vega': vega_put\n }\n ]\n```\n\n\n```python\nbs_call_results = get_black_scholes_simulation_results(K, r, sigma, T, S0)\nbs_put_results = get_black_scholes_simulation_results(K, r, sigma, T, S0, is_call=False)\n\nprint('\\nVanila European Call:\\n' + '\\nValue: ' + str(bs_call_results[0]) + '\\nDelta: ' +\n str(bs_call_results[1]['delta']))\nprint('\\nVanila European Put:\\n' + '\\nValue: ' + str(bs_put_results[0]) + '\\nDelta: ' +\n str(bs_put_results[1]['delta']))\n```\n\n \n Vanila European Call:\n \n Value: 9.625357828843697\n Delta: 0.48629214299030143\n \n Vanila European Put:\n \n Value: 15.312196135599244\n Delta: -0.5137078570096986\n\n", "meta": {"hexsha": "47044913a7ff2b702bf9690ead922eb5c828e940", "size": 8795, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Black-Scholes Option Pricing Model.ipynb", "max_stars_repo_name": "andrei-andrianov/computational-finance", "max_stars_repo_head_hexsha": "51205c72dc63a7bbb269f23c62a8a5027c399851", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2019-12-15T13:54:33.000Z", 
"max_stars_repo_stars_event_max_datetime": "2021-02-07T17:55:55.000Z", "max_issues_repo_path": "Black-Scholes Option Pricing Model.ipynb", "max_issues_repo_name": "andrei-andrianov/computational-finance", "max_issues_repo_head_hexsha": "51205c72dc63a7bbb269f23c62a8a5027c399851", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Black-Scholes Option Pricing Model.ipynb", "max_forks_repo_name": "andrei-andrianov/computational-finance", "max_forks_repo_head_hexsha": "51205c72dc63a7bbb269f23c62a8a5027c399851", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.2633928571, "max_line_length": 537, "alphanum_fraction": 0.5656623081, "converted": true, "num_tokens": 1722, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.964321448096903, "lm_q2_score": 0.8757869851639066, "lm_q1q2_score": 0.8445401737576793}} {"text": "# Distributed clustering\n\nWe have many types of distributed clustering, where most are an modification of k-means. In this section we show three types: hard k-means (hcm), fuzzy k-means (fcm) and possibilistic k-means (pcm).\n\n### Libraries\n\nWe need four libraries. Numpy is used for the matrices calculation. The math library is used to calcualte the square root when we calculate the Euclidean distance. Matplotlib is used for the plots. Finally, pandas is used here only for displaying the assignation matrix in a easy to ready form in Jupyter.\n\n\n```python\nimport numpy as np\nfrom math import sqrt\nimport matplotlib.pyplot as plt\nimport pandas as pd\n```\n\n## K-means\n\nThe most known method is called k-means and assign each case to one cluster strictly. It is also known as hard c-means where k is the same as c and are the number of clusters that we are willing to divide the data set to. The steps of hcm are like following:\n1. choose the entrance cluster centroids,\n2. item calculate the assignation matrix $U$,\n3. item calculate new centroids matrix $V$,\n4. calculate the difference between previously assignation matrix $U$ and the new one calculated in current iteration.\n\n\n```python\n%store -r data_set\n```\n\nBefore we start, we should setup a few variables like the assignation matrix, number of clusters, the error margin and feature space:\n\n\n```python\ngroups = 2\nerror_margin = 0.01\nm=2\nassignation=np.zeros((len(data_set),groups))\n```\n\nThe error margin is a value of error that below ends the clustering loop. \n\nThe assignation matrix if filled with zeros as we don't have any guess for assignation yet. We can also fill it randomly with 1 and 0 for each group. The assignation matrix looks like following:\n\n\\begin{equation*}\nU=\\begin{bmatrix}\n0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\\\\n0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\\\\n\\end{bmatrix}.\n\\end{equation*}\n\n\nIt's time to generate centroid array randomly:\n\\begin{equation}\n V=[v_{1},v_{2},\\ldots,v_{c}].\n\\end{equation}\n\nWe go through each group and add a random array of the feature space centroid positions:\n\n\n```python\ndef select_centers():\n return np.random.rand(groups,len(data_set[0]))\n \ncenters = select_centers()\n```\n\nLet's take a look what centroids do we have:\n\n\n```python\npd.DataFrame(centers, columns=['x1','x2'])\n```\n\n\n\n\n
          x1        x2\n    0  0.149994  0.092524\n    1  0.607688  0.589990
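\n\nBecause `select_centers` draws the starting centroids at random, each run of the notebook can give different initial values (and therefore a different table above). As a purely illustrative option that is not part of the original notebook, the global NumPy random state can be seeded before selecting the centers so that the experiment is repeatable; the seed value below is arbitrary.\n\n\n```python\n# Fix the random state so that select_centers() returns the same centroids on every run\nnp.random.seed(0)\ncenters = select_centers()\n```\n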
\n\n\n\nWe can also set fixed centers. It is important that the values are normalized.\n\n\n```python\n#centers = [[0.2,0.2], [0.8,0.8]]\n#pd.DataFrame(centers)\n#print(centers)\n```\n\nTo check what is the distance between the centroids and the elements of data set we use the Euclidean distance:\n\n\\begin{equation}\n \\rho_{Min}(x_{i},v_{j})=\\sqrt{\\sum_{i=1}^{d}(x_{i}-v_{j})^{2}}.\n\\end{equation}\n\n\n```python\ndef calculate_distance(x,v):\n return sqrt((x[0]-v[0])**2+(x[1]-v[1])**2)\n```\n\nThe next step is to calculate the new assignation matrix:\n\n\\begin{equation}\n \\mu_{ik}^{(t)}=\n \\begin{cases}\n 1 & \\text{if } d(x_{k},v_{i})0:\n if calculate_differences(new_assignation, assignation) < error_margin:\n difference_limit_not_achieved=False\n assignation=new_assignation\n iter=iter+1\n return new_assignation, new_centers\n```\n\nReady to build some new clusters: \n\n\n```python\nnew_assignation_hcm, new_centers_hcm = cluster_hcm(assignation, centers)\n%store new_assignation_hcm\n%store new_centers_hcm\n```\n\n Stored 'new_assignation_hcm' (list)\n Stored 'new_centers_hcm' (list)\n\n\nThe centers are like following:\n\n\n```python\npd.DataFrame(new_centers_hcm, columns=['x1','x2'])\n```\n\n\n\n\n
          x1        x2\n    0  0.127701  0.207853\n    1  0.829077  0.970594
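\n\nAs an extra, purely illustrative check that is not part of the original notebook, we can measure how compact this clustering is by summing the squared distances between every object and the centre of the cluster it was assigned to (all names used below are already defined above):\n\n\n```python\n# Within-cluster sum of squared distances for the HCM result\nsse = 0.0\nfor i in range(len(data_set)):\n    group = int(np.argmax(new_assignation_hcm[i]))\n    sse += calculate_distance(data_set[i], new_centers_hcm[group])**2\nprint(sse)\n```\n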
\n\n\n\nAnd the assignation matrix looks like:\n\n\n```python\npd.DataFrame(new_assignation_hcm, columns = ['Cluster 1','Cluster 2'])\n```\n\n\n\n\n
       Cluster 1  Cluster 2\n    0          1          0\n    1          1          0\n    2          1          0\n    3          1          0\n    4          1          0\n    5          1          0\n    6          0          1\n    7          0          1\n    8          0          1\n    9          1          0
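\n\nA quick way to read this matrix is to sum each column: because every row of a hard assignation contains exactly one 1, the column sums give the number of objects in each cluster. This small sanity check is added here only for illustration.\n\n\n```python\n# Column sums of the hard assignation matrix = number of objects per cluster\nprint(np.array(new_assignation_hcm).sum(axis=0))\n```\n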
\n\n\n\nTo plot it, we need to develop a short function that adds some colors to our plot:\n\n\n```python\nred = data_set[np.where(np.array(new_assignation_hcm)[:,0]==1)]\nblue = data_set[np.where(np.array(new_assignation_hcm)[:,1]==1)]\n```\n\nAnd finally plot the results:\n\n\n```python\nfig, ax = plt.subplots()\n\nax.scatter(blue[:,0],blue[:,1],c='blue')\nax.scatter(red[:,0],red[:,1],c='red')\nax.scatter(np.array(new_centers_hcm)[:,0],np.array(new_centers_hcm)[:,1],c='black')\nax.set(xlabel='Seats count', ylabel='Distance range (km)',\n title='Aircrafts (clusters)')\nax.grid()\nplt.show()\n```\n\n## Fuzzy k-means\n\nWe reset the assignation matrix and set the m parameter. The m paramtere is also known as fuzzifier. The higher value it is the values are more fuzzy. A lower value gives as results that are closer to the one that we got with the hard version of k-means.\n\n\n```python\nassignation=np.zeros((len(data_set),groups))\n\nm = 2.0\n```\n\nThe fuzzy implementation of k-means is a bit more complex and we need to modify the calculate_u function to be complient with the equation:\n\n\\begin{equation}\n \\mu_{ik}=(\\sum_{j=1}^{c}(\\frac{d(x_{k},v_{i})}{d(x_{k},v_{j})})^{\\frac{2}{m-1}})^{-1}\n\\end{equation}\n\nThe implementation is given as below.\n\n\n```python\ndef calculate_u_fcm(x, centers, group_id):\n distance_centers = 0\n for group in range(groups): \n if group != group_id:\n distance_centers+= calculate_distance(x, centers[group])\n distance_sum=1.0+(calculate_distance(x, centers[group_id])/distance_centers)**m\n return distance_sum**-1\n```\n\nThat's the only difference between HCM and FCM. The rest is almost the same in both cases.\n\n\n```python\ndef cluster_fcm(assignation, centers):\n difference_limit_not_achieved=True\n new_centers = centers\n iter=0\n while difference_limit_not_achieved:\n new_assignation=[]\n for i in range(len(data_set)):\n new_assignation_vector=[]\n for k in range(groups):\n new_assignation_vector.append(calculate_u_fcm(data_set[i],new_centers,k))\n new_assignation.append(new_assignation_vector)\n new_centers = calculate_new_centers(new_assignation)\n\n if iter>0:\n if calculate_differences(new_assignation, assignation) < error_margin:\n difference_limit_not_achieved=False\n assignation=new_assignation\n iter=iter+1\n return new_assignation, new_centers\n```\n\nCalculation of the clusters is done the same way as in the previous example:\n\n\n```python\nnew_assignation_fcm, new_centers_fcm = cluster_fcm(assignation, centers)\n```\n\nThe cluster centers are similar to the previous example:\n\n\n```python\npd.DataFrame(new_centers_hcm, columns=['x1','x2'])\n```\n\n\n\n\n
          x1        x2\n    0  0.127701  0.207853\n    1  0.829077  0.970594
\n\n\n\nThe assignation matrix is different, even though the same objects end up in the same clusters. In FCM the values in each row sum to 1.\n\n\n```python\npd.DataFrame(new_assignation_fcm, columns = ['Cluster 1','Cluster 2'])\n```\n\n\n\n\n
       Cluster 1  Cluster 2\n    0   0.988155   0.011845\n    1   0.992877   0.007123\n    2   0.983858   0.016142\n    3   0.990163   0.009837\n    4   0.992096   0.007904\n    5   0.998543   0.001457\n    6   0.006109   0.993891\n    7   0.019840   0.980160\n    8   0.076920   0.923080\n    9   0.792276   0.207724
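\n\nA small check, added here as an illustration and assuming the assignation list converts cleanly to a NumPy array, confirms the property mentioned above: every row of the fuzzy assignation matrix sums to 1 up to floating-point error.\n\n\n```python\n# Verify that each object's memberships sum to 1\nrow_sums = np.array(new_assignation_fcm).sum(axis=1)\nprint(np.allclose(row_sums, 1.0))\n```\n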
\n\n\n\nTo plot the objects in a fuzzy k-means we need to group them by values higher than 0.5 as both values sums to 1.\n\n\n```python\nred = data_set[np.where(np.array(new_assignation_fcm)[:,0]>0.5)]\nblue = data_set[np.where(np.array(new_assignation_fcm)[:,1]>0.5)]\n\nfig, ax = plt.subplots()\n\nax.scatter(blue[:,0],blue[:,1],c='blue')\nax.scatter(red[:,0],red[:,1],c='red')\nax.scatter(np.array(new_centers_fcm)[:,0],np.array(new_centers_fcm)[:,1],c='black')\nax.set(xlabel='Seats count', ylabel='Distance range (km)',\n title='Aircrafts (clusters)')\nax.grid()\nplt.show()\n```\n\n### Possibilistic k-means (PCM)\n\nIn the fuzzy version, each row sums to 1. In real-world cases, it doesn't need to be like this. The possibilistic k-means returns the distance to the center rather than dividing the assignation between clusters.\n\n\nAs suggested by the authors, the initial assignation matrix should be created using the FCM method. We do a fixed number of FCM method loops. The number of loops is set by the variable ``F``. The ``error_margin`` variable is the error threshold were below of it we stop the loop.\n\n\n```python\nF = 2\nerror_margin = 0.08\nassignation=np.zeros((len(data_set),groups))\n```\n\nThe assignation function is more complex compared to the two previous one. In PCM we use the Mahalanobis distance instead of the Euclidean one, and the assignation function is set as:\n\\begin{equation}\n \\mu_{ik}=(1+(\\frac{\\rho_{A}(x_{i},v_{j})}{\\eta_{i}})^{\\frac{2}{m-1}})^{-1},\n\\end{equation}\nwhere\n\\begin{equation}\n\\eta_{i}=\\frac{\\sum_{k=1}^{M}(\\mu_{ik})^{m}\\rho_{A}(x_{i},v_{j})}{\\sum_{k=1}^{M}(\\mu_{ik}\n)^{m}}.\n\\end{equation}\n$\\rho_{A}(x_{i},v_{j})$ is the Mahalanobis distance:\n\\begin{equation}\n\\rho_{A}(x_{i},v_{j})=(x_{i}-v_{j})^{T}A(x_{i}-v_{j}).\n\\end{equation}\nIt use ``A`` diagnoal matrix to measure the distance. The figure below show how the euclidean distance is measured:\n\nThe difference between two distances is that in Mahalanobis distance we use the diagonal matrix ``A``, which is also known as Mahalanobis norm, that allow us to measure the distance between objects as it's shown in figure below.\n\n\nThe Mahalanobis norm can be implemented as below.\n\n\n```python\ndef calculate_A():\n mean=np.mean(data_set,axis=0)\n sumof = np.zeros((data_set[0].shape))\n for i in range(len(data_set)):\n subtracted = np.subtract(data_set[i],mean)\n sumof = sumof + np.multiply(subtracted, subtracted)\n variance = np.divide(sumof,len(data_set))\n ABcov = np.cov(data_set[:,0]*data_set[:,1])\n R = np.array([[variance[0], ABcov], [ABcov, variance[1]]])\n return R**-1\n```\n\nThe matrix can be saved as global variable ``A``. It is the size of the feature number by feature number. In our case it will be a matrix of size $2\\times2$.\n\n\n```python\nA = calculate_A()\nprint(A)\n```\n\n [[7.89464944 6.69665317]\n [6.69665317 7.75894855]]\n\n\nAfter getting the ``A`` matrix, we are able to calcualte the Mahalanobis distance. 
The ``A`` matrix is calculated once, because it depends on the whole data set, not the method steps.\n\n\n```python\ndef calculate_mah_distance(group, centers):\n dmc = data_set - centers[group]\n dmca = np.dot(data_set - centers[group], A)\n\n distances = lambda dmc, dmca: [np.dot(dmca[i], dmc[i]) for i in range(dmc.shape[0])]\n return distances(dmc,dmca)\n```\n\nThe $\\eta$ can be implemented as below:\n\n\n```python\ndef calculate_eta(assignation, group, mah_distances):\n ud = np.sum((assignation[:, group] ** m) * mah_distances, axis=0)\n uq = np.sum(assignation[:, group] ** m, axis=0)\n return ud/uq\n```\n\nFinally, we can calculate the $\\nu$:\n\n\n```python\ndef calculate_u_pcm(assignation, centers):\n new_assignation = np.zeros((len(data_set), groups))\n for group in range(groups):\n mah_distances = calculate_mah_distance(group, centers)\n group_eta = calculate_eta(assignation, group, mah_distances)\n new_assignation[:,group] = (1.0+(mah_distances/group_eta))**-1\n return new_assignation\n```\n\nA stop function in PCM is defined as the difference between old and newly calculated centers.\n\n\n```python\ndef get_centers_difference(old_centers, new_centers):\n return np.sum(np.abs(np.subtract(old_centers,new_centers))) \n```\n\nThe ``cluster_pcm`` function has two parts. The first one is a FCM method that returns the input assignation matrix for the PCM method.\n\n\n```python\ndef cluster_pcm(assignation, centers):\n new_centers = centers\n new_assignation = assignation\n for f in range(F):\n assignation = []\n for i in range(len(data_set)):\n assignation_vector = []\n for k in range(groups): \n assignation_vector.append(calculate_u_fcm(data_set[i], new_centers, k))\n assignation.append(assignation_vector)\n new_centers = calculate_new_centers(assignation)\n new_assignation = np.array(assignation)\n\n \n difference_limit_not_achieved = True\n while difference_limit_not_achieved:\n new_assignation = calculate_u_pcm(new_assignation, new_centers)\n old_centers = new_centers\n new_centers = calculate_new_centers(new_assignation)\n\n if get_centers_difference(old_centers, new_centers) < error_margin:\n difference_limit_not_achieved = False\n return new_assignation, new_centers\n```\n\nNow, we can cluster the data set with PCM:\n\n\n```python\nnew_assignation_pcm, new_centers_pcm = cluster_pcm(assignation, centers)\n```\n\nThe assignation values does not sum to 1 as in fuzzy k-means. The matrix give a better understanding of where the object is placed in the feature space.\n\n\n```python\npd.DataFrame(new_assignation_pcm, columns = ['Cluster 1','Cluster 2'])\n```\n\n\n\n\n
       Cluster 1  Cluster 2\n    0   0.505268   0.022455\n    1   0.755340   0.024592\n    2   0.910555   0.028171\n    3   0.878186   0.026017\n    4   0.951871   0.027998\n    5   0.958231   0.029036\n    6   0.011891   0.867600\n    7   0.010423   0.594965\n    8   0.017823   0.565899\n    9   0.101788   0.062391
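\n\nBecause possibilistic memberships do not have to sum to 1, a row in which every value is small indicates a point that lies far from all centroids. Below is a minimal sketch, added for illustration only, of how such points could be flagged as noise; the threshold value is an arbitrary assumption, not part of the original method.\n\n\n```python\n# Flag objects whose best membership is below a chosen threshold as potential noise\nthreshold = 0.1\nmemberships = np.array(new_assignation_pcm)\nnoise_ids = np.where(memberships.max(axis=1) < threshold)[0]\nprint(noise_ids)\n```\n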
\n\n\n\nIn case of PCM we decided to extend the plot for many groups, up to 6. The colors are defined below.\n\n\n```python\nassigned_groups = []\ncolors = ['red','blue','green','orange','black','yellow']\n\nfor el in range(len(data_set)):\n group_id = np.argmax(new_assignation_pcm[el])\n assigned_groups.append(group_id)\n```\n\nWe need a function that assign a color to each cluster.\n\n\n```python\ndef get_colours(color_id):\n return data_set[np.where(np.array(assigned_groups)[:]==color_id)]\n```\n\nFinally, we go through groups we have and assign objects to colors and plot it. What is important to mention is that some assignation values for an object can be very low, means that this object is far from all centers. We can implement here a threshold where if all assignation values are below some threshold we treat such objects as noise. In the figure below, we see the last object that is closer to the red centroid, but was assigned to the blue cluster. In this case both values are very low, but the blue one is just a bit higher. In a hard k-means method it wouldn't be so easy to find the noise.\n\n\n```python\nfig, ax = plt.subplots()\n\n\nfor group in range(groups):\n small_set = get_colours(group) \n ax.scatter(small_set[:,0],small_set[:,1],c=colors.pop(0))\nax.scatter(np.array(new_centers_pcm)[:,0],np.array(new_centers_pcm)[:,1],marker='x',c='black')\nax.set(xlabel='Seats count', ylabel='Distance range (km)',\n title='Aircrafts (clusters)')\nax.grid()\nplt.show()\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "f4d00a64d98a225a10149e5ead64995731da8f76", "size": 71142, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "ML1/clustering/042Clustering_Distributed.ipynb", "max_stars_repo_name": "DevilWillReign/ML2022", "max_stars_repo_head_hexsha": "cb4cc692e9f0e178977fb5e1d272e581b30f998d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "ML1/clustering/042Clustering_Distributed.ipynb", "max_issues_repo_name": "DevilWillReign/ML2022", "max_issues_repo_head_hexsha": "cb4cc692e9f0e178977fb5e1d272e581b30f998d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ML1/clustering/042Clustering_Distributed.ipynb", "max_forks_repo_name": "DevilWillReign/ML2022", "max_forks_repo_head_hexsha": "cb4cc692e9f0e178977fb5e1d272e581b30f998d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 52.5420974889, "max_line_length": 11200, "alphanum_fraction": 0.7104804476, "converted": true, "num_tokens": 6134, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9518632261523028, "lm_q2_score": 0.8872045981907007, "lm_q1q2_score": 0.8444974310909579}} {"text": "# Fitting Logistic Regression Models$\n\\newcommand{\\cond}{{\\mkern+2mu} \\vert {\\mkern+2mu}}\n\\newcommand{\\SetDiff}{\\mathrel{\\backslash}}\n\\DeclareMathOperator{\\BetaFunc}{\u0392}\n\\DeclareMathOperator{\\GammaFunc}{\u0393}\n\\DeclareMathOperator{\\prob}{p}\n\\DeclareMathOperator{\\cost}{J}\n\\DeclareMathOperator{\\score}{V}\n\\DeclareMathOperator{\\dcategorical}{Categorical}\n\\DeclareMathOperator{\\dcategorical}{Categorical}\n\\DeclareMathOperator{\\ddirichlet}{Dirichlet}\n$\n\n\"The logistic regression model arises from the desire to model the posterior probabilities of the $K$ classes via linear functions in $x$, while at the same time ensuring that they sum to one and remain in $[0, 1]$.\", Hastie et al., 2009 (p. 119).\n\n$$\n\\begin{align}\n\\log \\frac{\\prob(Y = k \\cond X = x)}{\\prob(Y = K \\cond X = x)} = \\beta_k^{\\text{T}} x && \\text{for } k = 1, \\dotsc, K-1.\n\\end{align}\n$$\n\nThe probability for the case $Y = K$ is held out, as the probabilities must sum to one, so there are only $K-1$ free variables.\nThus if there are two categories, there is just a single linear function.\n\nThis gives us that\n$$\n\\begin{align}\n\\prob(Y = k \\cond X = x) &= \\frac{\\exp(\\beta_k^{\\text{T}}x)}{1 + \\sum_{i=1}^{K-1} \\exp(\\beta_i^{\\text{T}}x)} & \\text{for } k = 1, \\dotsc, K-1 \\\\[3pt]\n\\prob(Y = K \\cond X = x) &= \\frac{1}{1 + \\sum_{i=1}^{K-1} \\exp(\\beta_i^{\\text{T}}x)}.\n\\end{align}\n$$\n\nNote that if we fix $\\beta_K = 0$, we have the form\n$$\n\\begin{align}\n\\prob(Y = k \\cond X = x) &= \\frac{\\exp(\\beta_k^{\\text{T}}x)}{\\sum_{i=1}^K \\exp(\\beta_i^{\\text{T}}x)} & \\text{for } k = 1, \\dotsc, K.\n\\end{align}\n$$\n\nThen, writing $\\beta = \\{\\beta_1^{\\text{T}}, \\dotsc, \\beta_{K}^{\\text{T}}\\}$, we have that $\\prob(Y = k \\cond X = x) = \\prob_k(x; \\beta)$.\n\nThe log likelihood for $N$ observations is\n$$\n\\ell(\\beta) = \\sum_{n=1}^N \\log \\prob_{y_n}(x_n; \\beta).\n$$\n\nWe can use the cost function\n$$\n\\begin{align}\n\\cost(\\beta)\n &= -\\frac{1}{N} \\left[ \\sum_{n=1}^N \\sum_{k=1}^K 1[y_n = k] \\log \\prob_k(x_n; \\beta) \\right] \\\\\n &= -\\frac{1}{N} \\left[ \\sum_{n=1}^N \\sum_{k=1}^K 1[y_n = k] \\log \\frac{\\exp(\\beta_k^{\\text{T}}x_n)}{\\sum_{i=1}^K \\exp(\\beta_i^{\\text{T}}x_n)} \\right] \\\\\n &= -\\frac{1}{N} \\left[ \\sum_{n=1}^N \\sum_{k=1}^K 1[y_n = k] \\left( \\beta_k^{\\text{T}}x_n - \\log \\sum_{i=1}^K \\exp(\\beta_i^{\\text{T}}x_n) \\right) \\right].\n\\end{align}\n$$\n\nThus the score function is\n$$\n\\score_k(\\beta) = \\nabla_{\\beta_k} \\cost(\\beta) = -\\frac{1}{N} \\left[ \\sum_{n=1}^N x_n \\big( 1[y_n = k] - \\prob_k(x_n; \\beta) \\big) \\right].\n$$\n", "meta": {"hexsha": "0c523af19974039c6aad3465514dc0efb6eef906", "size": 4078, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "GLM/Fitting Logistic Regression Models.ipynb", "max_stars_repo_name": "ConradScott/IJuliaSamples", "max_stars_repo_head_hexsha": "0f5d212dcf63bc795e79ac790aa3f1c9b010c89e", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "GLM/Fitting Logistic Regression Models.ipynb", "max_issues_repo_name": "ConradScott/IJuliaSamples", "max_issues_repo_head_hexsha": "0f5d212dcf63bc795e79ac790aa3f1c9b010c89e", "max_issues_repo_licenses": ["Apache-2.0"], 
"max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "GLM/Fitting Logistic Regression Models.ipynb", "max_forks_repo_name": "ConradScott/IJuliaSamples", "max_forks_repo_head_hexsha": "0f5d212dcf63bc795e79ac790aa3f1c9b010c89e", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.859375, "max_line_length": 255, "alphanum_fraction": 0.5007356547, "converted": true, "num_tokens": 966, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9603611631680359, "lm_q2_score": 0.8791467643431002, "lm_q1q2_score": 0.8442984091999549}} {"text": "## TNK\n\nTanaka suggested the following two-variable problem:\n\n**Definition**\n\n\\begin{equation}\n\\newcommand{\\boldx}{\\mathbf{x}}\n\\begin{array}\n\\mbox{Minimize} & f_1(\\boldx) = x_1, \\\\\n\\mbox{Minimize} & f_2(\\boldx) = x_2, \\\\\n\\mbox{subject to} & C_1(\\boldx) \\equiv x_1^2 + x_2^2 - 1 - \n0.1\\cos \\left(16\\arctan \\frac{x_1}{x_2}\\right) \\geq 0, \\\\\n& C_2(\\boldx) \\equiv (x_1-0.5)^2 + (x_2-0.5)^2 \\leq 0.5,\\\\\n& 0 \\leq x_1 \\leq \\pi, \\\\\n& 0 \\leq x_2 \\leq \\pi.\n\\end{array}\n\\end{equation}\n\n**Optimum**\n\nSince $f_1=x_1$ and $f_2=x_2$, the feasible objective space is also\nthe same as the feasible decision variable space. The unconstrained \ndecision variable space consists of all solutions in the square\n$0\\leq (x_1,x_2)\\leq \\pi$. Thus, the only unconstrained Pareto-optimal \nsolution is $x_1^{\\ast}=x_2^{\\ast}=0$. \nHowever, the inclusion of the first constraint makes this solution\ninfeasible. The constrained Pareto-optimal solutions lie on the boundary\nof the first constraint. Since the constraint function is periodic and\nthe second constraint function must also be satisfied,\nnot all solutions on the boundary of the first constraint are Pareto-optimal. 
The \nPareto-optimal set is disconnected.\nSince the Pareto-optimal\nsolutions lie on a nonlinear constraint surface, an optimization\nalgorithm may have difficulty in finding a good spread of solutions across\nall of the discontinuous Pareto-optimal sets.\n\n**Plot**\n\n\n```python\nfrom pymoo.factory import get_problem\nfrom pymoo.util.plotting import plot\n\nproblem = get_problem(\"tnk\")\nplot(problem.pareto_front(), no_fill=True)\n```\n", "meta": {"hexsha": "dc137c26abc6b0a5bb489a28b0575c061a13e2e9", "size": 35384, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "doc/source/problems/multi/tnk.ipynb", "max_stars_repo_name": "gabicavalcante/pymoo", "max_stars_repo_head_hexsha": "1711ce3a96e5ef622d0116d6c7ea4d26cbe2c846", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2020-09-18T19:33:31.000Z", "max_stars_repo_stars_event_max_datetime": "2020-09-18T19:33:33.000Z", "max_issues_repo_path": "doc/source/problems/multi/tnk.ipynb", "max_issues_repo_name": "gabicavalcante/pymoo", "max_issues_repo_head_hexsha": "1711ce3a96e5ef622d0116d6c7ea4d26cbe2c846", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/source/problems/multi/tnk.ipynb", "max_forks_repo_name": "gabicavalcante/pymoo", "max_forks_repo_head_hexsha": "1711ce3a96e5ef622d0116d6c7ea4d26cbe2c846", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-03-31T08:19:13.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-31T08:19:13.000Z", "avg_line_length": 258.2773722628, "max_line_length": 31980, "alphanum_fraction": 0.9176746552, "converted": true, "num_tokens": 496, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9390248208414329, "lm_q2_score": 0.8991213745668094, "lm_q1q2_score": 0.844297287667301}} {"text": "# Polynomial Regression from scratch\n> Polynomial regression from scratch with Pytorch using normal equation and gradient descent.\n- author: \"Axel Mendoza\"\n- categories: [polynomial-regression, pytorch, gradient-descent, from-scratch]\n- toc: false\n- comments: true\n- badges: true\n- image: images/polynomial.jpg\n\n\n\nDuring this experiment, we will see how to implement polynomial regression from scratch using Pytorch.\n\n\n```python\nimport torch\nimport numpy as np\nimport matplotlib.pyplot as plt\n```\n\nGenerate a polynomial distribution with random noise\n\n\n```python\n\n# Function for creating a vector with value between [r1, r2]\ndef randvec(r1, r2, shape):\n return (r1 - r2) * torch.rand(shape) + r2\n\n# Defining the range of our distribution\nX = torch.tensor([i for i in range(-30, 30)]).float()\n# Creating random points from a gaussian with random noise\ny = randvec(-1e4, 1e4, X.shape) - (1/2) * X + 3 * X.pow(2) - (6/4) * X.pow(3)\n\nplt.scatter(X, y)\n```\n\n### Create the polynomial features\nThe formula of linear regression is as follow:\n$$\n \\boldsymbol{\\hat{y}} = \\boldsymbol{X}\\boldsymbol{w}\n$$\nwhere $\\boldsymbol{\\hat{y}}$ is the target, $\\boldsymbol{w}$ are the weights learned by the model and $\\boldsymbol{X}$ is training data.\nPolynomial regression is still considered as a linear regression because there is only linear learning parameters:\n$$\n \\boldsymbol{y} = \\boldsymbol{w}_0 + \\boldsymbol{X}\\boldsymbol{w}_1 + \\boldsymbol{X}^2\\boldsymbol{w}_2 + \\dots + \\boldsymbol{X}^n\\boldsymbol{w}_n\n$$\nAs you have probably guessed, this equation is not linear. We use a trick to make it linear:\n- We gather all the $\\boldsymbol{X}^2$ to $\\boldsymbol{X}^n$ as new features that we created and we concatenate them to $\\boldsymbol{X}$.\n- All the $\\boldsymbol{w}_1$ to $\\boldsymbol{w}_n$ are concatenated to $\\boldsymbol{w}_0$.\n\nAt the end, the polynomial regression has the same formula as the linear regression but with the aggregated arrays.\n\n\n```python\ndef create_features(X, degree=2, standardize=True):\n \"\"\"Creates the polynomial features\n \n Args:\n X: A torch tensor for the data.\n degree: A intege for the degree of the generated polynomial function.\n standardize: A boolean for scaling the data or not.\n \"\"\"\n if len(X.shape) == 1:\n X = X.unsqueeze(1)\n # Concatenate a column of ones to has the bias in X\n ones_col = torch.ones((X.shape[0], 1), dtype=torch.float32)\n X_d = torch.cat([ones_col, X], axis=1)\n for i in range(1, degree):\n X_pow = X.pow(i + 1)\n # If we use the gradient descent method, we need to\n # standardize the features to avoid exploding gradients\n if standardize:\n X_pow -= X_pow.mean()\n std = X_pow.std()\n if std != 0:\n X_pow /= std\n X_d = torch.cat([X_d, X_pow], axis=1)\n return X_d\n\ndef predict(features, weights):\n return features.mm(weights)\n\nfeatures = create_features(X, degree=3, standardize=False)\ny_true = y.unsqueeze(1)\n```\n\n### Method 1: Normal equation\nThe first method is analytical and uses the normal equation.\nTraining a linear model using least square regression is equivalent to minimize the mean squared error:\n\n$$\n\\begin{align}\n \\text{Mse}(\\hat{y}, y) &= \\frac{1}{n}\\sum_{i=1}^{n}{||\\hat{y}_i - y_i ||_{2}^{2}} \\\\\n \\text{Mse}(\\hat{y}, y) &= \\frac{1}{n}||\\boldsymbol{X}\\boldsymbol{w} - \\boldsymbol{y} ||_2^2\n\\end{align}\n$$\n\nwhere $n$ is the number 
of samples, $\\hat{y}$ is the predicted value of the model and $y$ is the true target.\nThe prediction $\\hat{y}$ is obtained by matrix multiplication between the input $\\boldsymbol{X}$ and the weights of the model $\\boldsymbol{w}$.\n\nMinimizing the $\\text{Mse}$ can be achieved by solving the gradient of this equation equals to zero in regards to the weights $\\boldsymbol{w}$:\n\n$$\n\\begin{align}\n\\nabla_{\\boldsymbol{w}}\\text{Mse}(\\hat{y}, y) &= 0 \\\\\n(\\boldsymbol{X}^\\top \\boldsymbol{X})^{-1}\\boldsymbol{X}^\\top \\boldsymbol{y} &= \\boldsymbol{w}\n\\end{align}\n$$\n\nFor more information on how to find $\\boldsymbol{w}$ please visit the section \"Linear Least Squares\" of this [link](https://en.wikipedia.org/wiki/Least_squares#:~:text=The%20linear%20least%2Dsquares%20problem,is%20similar%20in%20both%20cases).\n\n\n```python\ndef normal_equation(y_true, X):\n \"\"\"Computes the normal equation\n \n Args:\n y_true: A torch tensor for the labels.\n X: A torch tensor for the data.\n \"\"\"\n XTX_inv = (X.T.mm(X)).inverse()\n XTy = X.T.mm(y_true)\n weights = XTX_inv.mm(XTy)\n return weights\n\nweights = normal_equation(y_true, features)\ny_pred = predict(features, weights)\n\nplt.scatter(X, y)\nplt.plot(X, y_pred, c='red')\n```\n\nWith the normal equation method, the polynomial regressor fits well the synthetic data.\n\n### Method 2: Gradient Descent\n The Gradient descent method takes steps proportional to the negative of the gradient of a function at a given point, in order to iteratively minimize the objective function. The gradient generalizes the notion of derivative to the case where the derivative is with respect to a vector: the gradient of $f$ is the vector containing all of the partial derivatives, denoted $\\nabla_{\\boldsymbol{x}}f(\\boldsymbol{x})$.\n \n The directional derivative in direction $\\boldsymbol{u}$ (a unit vector) is the slope of the function $f$ in direction $\\boldsymbol{u}$. In other words, the directional derivative is the derivative of the function $f(\\boldsymbol{x} + \\sigma \\boldsymbol{u})$ with respect to $\\sigma$ close to 0. To minimize $f$, we would like to find the direction in which $f$ decreases the fastest. We can do this using the directional derivative:\n$$\n\\begin{align}\n &\\min_{\\boldsymbol{u}, \\boldsymbol{u}^\\top \\boldsymbol{u} = 1}{\\boldsymbol{u}^\\top \\nabla_{\\boldsymbol{x}} f(\\boldsymbol{x})} \\\\\n = &\\min_{\\boldsymbol{u}, \\boldsymbol{u}^\\top \\boldsymbol{u} = 1}{||\\boldsymbol{u}||_2 ||\\nabla_{\\boldsymbol{x}}f(\\boldsymbol{x})||_2 \\cos \\theta}\n\\end{align}\n$$\nignoring factors that do not depend on $\\boldsymbol{u}$, this simplifies to $\\min_{u}{\\cos \\theta}$. This is minimized when $\\boldsymbol{u}$ points in the opposite direction as the gradient. 
Each step of the gradient descent method proposes a new points:\n$$\n\\boldsymbol{x'} = \\boldsymbol{x} - \\epsilon \\nabla_{\\boldsymbol{x}}f(\\boldsymbol{x})\n$$\nwhere $\\epsilon$ is the learning rate.\nIn the context of polynomial regression, the gradient descent is as follow:\n$$\n\\boldsymbol{w} = \\boldsymbol{w} - \\epsilon \\nabla_{\\boldsymbol{w}}\\text{MSE}\n$$\nwhere:\n$$\n\\begin{align}\n\\nabla_{\\boldsymbol{w}}\\text{MSE} &= \\nabla_{\\boldsymbol{w}}\\left(\\frac{1}{n}{||\\boldsymbol{X}\\boldsymbol{w} - \\boldsymbol{y} ||_2^2}\\right) \\\\\n&= \\frac{2}{N}\\boldsymbol{X}^\\top(\\boldsymbol{X}\\boldsymbol{w} - \\boldsymbol{y})\n\\end{align}\n$$\n\n\n```python\ndef gradient_descent(X, y_true, lr=0.001, it=30000):\n \"\"\"Computes the gradient descent\n \n Args:\n X: A torch tensor for the data.\n y_true: A torch tensor for the labels.\n lr: A scalar for the learning rate.\n it: A scalar for the number of iteration\n or number of gradient descent steps.\n \"\"\"\n weights_gd = torch.ones((X.shape[1], 1))\n n = X.shape[0]\n fact = 2 / n\n for _ in range(it):\n y_pred = predict(X, weights_gd)\n grad = fact * X.T.mm(y_pred - y_true)\n weights_gd -= lr * grad\n return weights_gd\n\nfeatures = create_features(X, degree=3, standardize=True)\nweights_gd = gradient_descent(features, y_true)\n\npred_gd = predict(features, weights_gd)\n```\n\nThe mean squared error is even lower when using gradient descent.\n\n\n```python\nplt.scatter(X, y)\nplt.plot(X, pred_gd, c='red')\n```\n\n### Conclusion\nThe polynomial regression is an appropriate example to learn more about the concept of normal equation and gradient descent.\nThis method work well with data that has polynomial shapes but we need to choose the right polynomial degree for a good bias/variance trade-off. However, the polynomial regression method has an important drawback. In fact, it is necessary to transform the data to a higher dimensional space which can be unfeasable if the data is very large.\n", "meta": {"hexsha": "118334502e11af44ea59d31a1ac3d0f857bf5c90", "size": 50270, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "_notebooks/2020-09-14-Polynomial-Regression.ipynb", "max_stars_repo_name": "ConsciousML/blog", "max_stars_repo_head_hexsha": "e3f8fb858ebbf840313f192e33cc41a291e04f7b", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "_notebooks/2020-09-14-Polynomial-Regression.ipynb", "max_issues_repo_name": "ConsciousML/blog", "max_issues_repo_head_hexsha": "e3f8fb858ebbf840313f192e33cc41a291e04f7b", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-09-28T05:35:20.000Z", "max_issues_repo_issues_event_max_datetime": "2021-09-28T05:35:20.000Z", "max_forks_repo_path": "_notebooks/2020-09-14-Polynomial-Regression.ipynb", "max_forks_repo_name": "ConsciousML/blog", "max_forks_repo_head_hexsha": "e3f8fb858ebbf840313f192e33cc41a291e04f7b", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-02-18T16:04:25.000Z", "max_forks_repo_forks_event_max_datetime": "2021-02-18T16:04:25.000Z", "avg_line_length": 135.8648648649, "max_line_length": 14248, "alphanum_fraction": 0.8637955043, "converted": true, "num_tokens": 2300, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9532750440288019, "lm_q2_score": 0.8856314768368161, "lm_q1q2_score": 0.8442503850749087}} {"text": "# Linear least squares\n\n## Fitting data by polynomials\n\nHere are 5-year averages of the worldwide temperature anomaly as compared to the 1951-1980 average (source: NASA).\n\n\n```julia\nyear = collect(1955:5:2000);\nanomaly = [ -0.0480; -0.0180; -0.0360; -0.0120; -0.0040; 0.1180; 0.2100; 0.3320; 0.3340; 0.4560 ];\nusing Plots;\nplot(year,anomaly,m=:o,l=nothing)\n```\n\nThe numbers work better if we measure time in years since 1955. (We can quantify that statement later in the course.) \n\n\n```julia\nt = year-1955;\n@show m = length(t);\n```\n\n m = length(t) = 10\n\n\nA polynomial through all of these points has to have degree at least 9 in general (i.e., unless the points are special). We can solve for the coefficients of this polynomial by solving a $10\\times 10$ Vandermonde system.\n\n## Polynomial Data-Fitting\n\nSuppose we are given $m$ distinct points $x_1,\\dots, x_m\\in\\mathbb{C}$, and data $y_1,\\dots, y_m \\in \\mathbb{C}$ at these points. Then there is a unique _polynomial interpolant_ of degree $\\leq m-1$: \n\n$$\np(x) = c_0 + c_1x + \\cdots + c_{m-1} x^{m-1},\n$$\n\nsatisfying $p(x_i) = y_i$.\n\n$$\\begin{bmatrix} 1 & x_1 & x_1^2 & \\cdots & x_1^{m-1} \\\\ 1 & x_2 & x_2^2 & \\cdots & x_2^{m-1} \\\\ & \\vdots & & & \\vdots \\\\ 1 & x_m & x_m^2 & \\cdots & x_m^{m-1}\\end{bmatrix} \\begin{bmatrix} c_0\\\\ c_1 \\\\ \\vdots \\\\ c_{m-1} \\end{bmatrix} = \\begin{bmatrix} y_1 \\\\ y_2\\\\ \\vdots \\\\ y_m\\end{bmatrix}$$\n\nThe Vandermonde matrix is nonsingular if and only if $x_i\\ne x_j$.\n\n\n```julia\nA = t.^0;\nfor j = 1:m-1\n A = [A t.^j];\nend\nb = anomaly;\n\nc = A\\b; # coefficients, from degree 0 to m-1\n\nusing Polynomials\np = Poly(c)\n```\n\n\n\n\n Poly(-0.048 + 0.3655471428571423x - 0.1821111666666659x^2 + 0.03600809999999972x^3 - 0.003737914999999952x^4 + 0.00022573177777777346x^5 - 8.205466666666444e-6x^6 + 1.7689396825396164e-7x^7 - 2.0826666666665615e-9x^8 + 1.0311111111110426e-11x^9)\n\n\n\n\n```julia\nplot(t->p(t-1955),1955,2000)\nplot!(year,anomaly,m=:o,l=nothing);\ntitle!(\"World temperature anomaly\");\nxlabel!(\"year\"); ylabel!(\"anomaly (deg C)\")\n```\n\nAs intended, the polynomial interpolates the data, but it obviously gives a lot of unphysical nonsense. This phenomenon, which is common with interpolating polynomials over equally spaced times, is an example of _overfitting_. \n\nWe actually get a better approximation by reducing the degree to 3. \n\n\n```julia\nA = A[:,1:4];\n@show size(A);\n```\n\n size(A) = (10,4)\n\n\nWe now have an overdetermined system of linear equations. The easiest way to define a solution is to minimize the 2-norm of the residual--i.e., least squares. In MATLAB we use the same backslash operator as for square systems, even though the algorithms are very different. \n\n\n```julia\nc = A\\b; # coefficients, from degree 0 to 3\np = Poly(c); # fitting polynomial\nplot(t->p(t-1955),1955,2000)\nplot!(year,anomaly,m=:o,l=nothing);\ntitle!(\"World temperature anomaly\");\nxlabel!(\"year\"); ylabel!(\"anomaly (deg C)\")\n```\n\nThis polynomial certainly describes the data better.\n\n## Orthogonal Projection and the Normal Equations\n\n> ** THEOREM. ** Let $A\\in\\mathbb{C}^{m\\times n}$, $m\\geq n$, and $\\mathbf{b}\\in\\mathbb{C}^m$. 
A vector $\\mathbf{x}\\in\\mathbb{C}^m$ minimizes the residual norm $\\|\\mathbf{r}\\|_2 = \\|\\mathbf{b} -A\\mathbf{x}\\|_2$, hence solves the linear squares problem if and only if $\\mathbf{r}\\perp C(A)$, i.e. $$A^*\\mathbf{r} = \\mathbf{0} \\quad \\Longleftrightarrow \\quad A^*A \\mathbf{x} = A^* \\mathbf{b}.$$\nFurther, $A^*A$ is nonsingular if and only if $A$ has full rank (i.e. linearly independent columns).\n\n** Proof. ** To show $\\mathbf{y} = P \\mathbf{b}$ is the unique point in $C(A)$ that minimizes $\\|\\mathbf{b}-\\mathbf{y}\\|_2$, suppose $\\mathbf{z}\\ne\\mathbf{y}$ is another point. Since $\\mathbf{z}-\\mathbf{y}$ is orthogonal to $\\mathbf{b}-\\mathbf{y}$: \n\n$$\\|\\mathbf{b}-\\mathbf{z} \\|_2^2 = \\|\\mathbf{b}-\\mathbf{y}\\|_2^2 + \\|\\mathbf{y}-\\mathbf{z}\\|_2^2 > \\|\\mathbf{b}-\\mathbf{y}\\|_2^2.$$\n\n## Three least squares algortithms\n\n### Linear least squares\n\n\n\nThere are three distinct ways to solve the general dense linear least squares problem.\n\n\n```julia\nusing Plots\npyplot() #choose plotting backend\ninput = [\n 1 6\n 2 5\n 3 7\n 4 10]\nX = hcat(ones(size(input)[1]),input[:,1])\ny = input[:,2]\nbetaHat = (X' * X ) \\ X' * y #backslash computes LS-solution as in Matlab\nprint(betaHat)\nplot(x->betaHat[2]*x + betaHat[1],0,5,label=\"curve fit\")\nscatter!(input[:,1],input[:,2],label=\"data\")\n```\n\n [3.4999999999999964,1.4000000000000012][Plots.jl] Initializing backend: pyplot\n\n\n INFO: Precompiling module PyPlot.\n WARNING: No working GUI backend found for matplotlib.\n\n\n\n\n\n\n\n\n\n#### Normal equations\n\nFirst, we can pose and solve the normal equations:\n\n\\begin{align}\nA\\mathbf{x} &= \\mathbf{b}\\\\\nA^* A \\mathbf{x} &= A^* \\mathbf{b}\\\\\n\\mathbf{x} &= (A^*A)^{-1}A^*\\mathbf{b} = A^+ \\mathbf{b}.\n\\end{align}\n\n$A^+$ is called the _pseudoinverse_ of $A$.\n\n##### Least squares via Normal equations:\n\n1. Form the matrix $A^* A$ and the vector $A^*\\mathbf{b}$.\n- Compute the 'Cholesky factorization': $A^*A = R^*R$.\n- Solve the lower-triangular system $R^*\\mathbf{w} = A^*\\mathbf{b}$ for $\\mathbf{w}$.\n- Solve the upper-triangular system $R\\mathbf{x} = \\mathbf{w}$ for $\\mathbf{x}$.\n\nSymmetry implies the computation of $A^*A$ requires only $mn^2$ flops (rather than $2mn^2$). Step 2 requires $n^3/3$ flops.\n\n> ** THEOREM. ** Least squares via normal equations has operation count $\\sim mn^2+ \\frac{n^3}{3}$ flops.\n\n\n```julia\nB = A'*A; z = A'*b;\n@show size(B);\n```\n\n size(B) = (4,4)\n\n\n\n```julia\n[c B\\z]\n```\n\n\n\n\n 4x2 Array{Float64,2}:\n -0.0261566 -0.0261566 \n -0.00908228 -0.00908228 \n 0.000785734 0.000785734\n -7.74825e-6 -7.74825e-6 \n\n\n\n#### QR Factorization\n\nSecond, we can use a thin QR factorization to express the range of $A$ orthonormally, and reduce to a triangular square system.\n\nUsing either Gram-Schmidt or Householder triangularization, one constructs $A = \\hat{Q}\\hat{R}$. The orthogonal projector $P$ can be written as $\\hat{Q}\\hat{Q}^*$. Therefore:\n\n\\begin{align}\nA\\mathbf{x} &= \\mathbf{b}\\\\\n\\hat{Q}\\hat{R} \\mathbf{x} & = \\mathbf{b} \\\\\n\\hat{R} \\mathbf{x} &= \\hat{Q}^* \\mathbf{b}\\\\\n\\mathbf{x} &= \\hat{R}^{-1} \\hat{Q}^* \\mathbf{b} = A^+ \\mathbf{b}\n\\end{align}\n\n##### Least squares via QR factorization\n\n1. 
Compute the reduced QR factorization $A = \\hat{Q}\\hat{R}$.\n- Compute the vector $\\hat{Q}^*\\mathbf{b}$.\n- Solve the upper-triangular system $\\hat{R}\\mathbf{x} = \\hat{Q}^*\\mathbf{b}$ for $\\mathbf{x}$.\n\nThe work for this algorithm is dominated by the cost of the $QR$ factorization. If Householder reflectors are used for this step, we have: \n\n> ** THEOREM. ** Work for solving least squares via QR factorization is $\\sim 2mn^2 - \\frac{2}{3}n^3$ flops.\n\n\n```julia\n(Q,R) = qr(A); \nz = Q'*b;\n[c R\\z]\n```\n\n\n\n\n 4x2 Array{Float64,2}:\n -0.0261566 -0.0261566 \n -0.00908228 -0.00908228 \n 0.000785734 0.000785734\n -7.74825e-6 -7.74825e-6 \n\n\n\n#### SVD\n\nAnd third, we can use the SVD to orthgonalize both the range and the domain, ultimately getting a diagonal square system.\n\nSuppose $A=\\hat{U}\\hat{\\Sigma}V^*$ is an SVD of $A$:\n\n\\begin{align}\nA \\mathbf{x} &= \\mathbf{b} \\\\\n\\hat{U}\\hat{\\Sigma}V^* \\mathbf{x} &= \\mathbf{b} \\\\\n\\hat{U}\\hat{\\Sigma} V^* \\mathbf{x} & = \\hat{U}\\hat{U}^* \\mathbf{b}\\\\\n\\hat{\\Sigma} V^* \\mathbf{x} &= \\hat{U}^* \\mathbf{b}\\\\\n\\mathbf{x} &= V\\hat{\\Sigma}^{-1} \\hat{U}^*\\mathbf{b} = A^+ \\mathbf{b}.\n\\end{align}\n\n##### Least squares via SVD\n\n1. Compute the reduced SVD $A = \\hat{U}\\hat{\\Sigma}V^*$.\n- Compute the vector $\\hat{U}^*\\mathbf{b}$.\n- Solve the diagonal system $\\hat{\\Sigma}\\mathbf{w} = \\hat{U}^*\\mathbf{b}$ for $\\mathbf{w}$.\n- Set $\\mathbf{x} = V\\mathbf{w}$.\n\nThe operation count for this algorithm is dominated by the computation of the SVD. Which for $m>> n$ is approximately the same as the QR factorization, however for $m\\approx n$ it is typically:\n\n> ** THEOREM. ** The work for the least squares via SVD is $\\sim 2mn^2 + 11n^3$ flops.\n\n\n```julia\n(U,s,V) = svd(A); \nz = U'*b;\n[c V*(z./s)]\n```\n\n\n\n\n 4x2 Array{Float64,2}:\n -0.0261566 -0.0261566 \n -0.00908228 -0.00908228 \n 0.000785734 0.000785734\n -7.74825e-6 -7.74825e-6 \n\n\n\n## Conditioning and stability\n\nGiven multiple algorithms to solve a given problem, how do we determine which to use in a given situation? \n\nThere are several factors: algorithm _complexity_ or runtime, _parallelizability_, mathematical _conditioning_ and numerical _stability_. 
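Before weighing those factors, it is worth checking that the three routes really do agree on a well-conditioned problem such as the cubic fit above. The notebook's own code is Julia; the block below is a separate NumPy/SciPy sketch (the variable names and the particular `scipy.linalg` helpers are choices made for this illustration, not part of the original notebook), re-solving the degree-3 anomaly fit by normal equations with a Cholesky factorization, by thin QR, and by the reduced SVD.

```python
# Illustrative NumPy/SciPy sketch (not the notebook's Julia code): the same
# degree-3 least squares fit solved by the three algorithms discussed above.
import numpy as np
from scipy.linalg import cho_factor, cho_solve, qr, svd, solve_triangular

t = np.arange(0.0, 50.0, 5.0)            # years since 1955, as in the example above
anomaly = np.array([-0.048, -0.018, -0.036, -0.012, -0.004,
                    0.118, 0.210, 0.332, 0.334, 0.456])
A = np.vander(t, 4, increasing=True)     # columns 1, t, t^2, t^3
b = anomaly

# Normal equations: solve A*A x = A*b via a Cholesky factorization
x_ne = cho_solve(cho_factor(A.T @ A), A.T @ b)

# Thin QR: solve R x = Q* b by back substitution
Q, R = qr(A, mode='economic')
x_qr = solve_triangular(R, Q.T @ b)

# Reduced SVD: x = V (U* b / sigma)
U, s, Vt = svd(A, full_matrices=False)
x_svd = Vt.T @ ((U.T @ b) / s)

print(x_qr)  # should reproduce the degree-3 coefficients found above
print('max |x_ne - x_qr| =', np.max(np.abs(x_ne - x_qr)))
print('max |x_ne - x_svd| =', np.max(np.abs(x_ne - x_svd)))
```

On this small, well-conditioned fit all three give essentially the same coefficients; the differences between them only become important as the matrix becomes ill-conditioned, which is exactly where the conditioning and stability considerations above come into play.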
\n", "meta": {"hexsha": "b0e2249b611fa497afdac8be71297874c4495101", "size": 45309, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Linear Least Squares.ipynb", "max_stars_repo_name": "Twelve33/NumericalLinearAlgebra", "max_stars_repo_head_hexsha": "4122cf464712855f81be82eb0e92de27aad1ea31", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-05-23T23:55:16.000Z", "max_stars_repo_stars_event_max_datetime": "2020-05-23T23:55:16.000Z", "max_issues_repo_path": "Linear Least Squares.ipynb", "max_issues_repo_name": "Twelve33/NumericalLinearAlgebra", "max_issues_repo_head_hexsha": "4122cf464712855f81be82eb0e92de27aad1ea31", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-04-06T04:19:20.000Z", "max_issues_repo_issues_event_max_datetime": "2019-04-28T05:56:52.000Z", "max_forks_repo_path": "Linear Least Squares.ipynb", "max_forks_repo_name": "Twelve33/NumericalLinearAlgebra", "max_forks_repo_head_hexsha": "4122cf464712855f81be82eb0e92de27aad1ea31", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-04-06T04:04:45.000Z", "max_forks_repo_forks_event_max_datetime": "2019-04-06T04:04:45.000Z", "avg_line_length": 53.3674911661, "max_line_length": 23535, "alphanum_fraction": 0.7593855525, "converted": true, "num_tokens": 3065, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9124361652391386, "lm_q2_score": 0.9252299509069106, "lm_q1q2_score": 0.844213268369898}} {"text": "```python\nfrom sympy import *\nfrom sympy.abc import *\nMatrix([x,y,z])\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}x\\\\y\\\\z\\end{matrix}\\right]$\n\n\n\n\n```python\ne1 = 5*x+3*y+4*z\ne2 = 3*x+4*y+5*z\ne3 = 4*x+5*y+3*z\n```\n\n\n```python\nA = Matrix([e1,e2,e3])\nA\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}5 x + 3 y + 4 z\\\\3 x + 4 y + 5 z\\\\4 x + 5 y + 3 z\\end{matrix}\\right]$\n\n\n\n\n```python\nB = A*2\nB\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}10 x + 6 y + 8 z\\\\6 x + 8 y + 10 z\\\\8 x + 10 y + 6 z\\end{matrix}\\right]$\n\n\n\n\n```python\nIntegral(A,x)\n```\n\n\n\n\n$\\displaystyle \\int \\left[\\begin{matrix}5 x + 3 y + 4 z\\\\3 x + 4 y + 5 z\\\\4 x + 5 y + 3 z\\end{matrix}\\right]\\, dx$\n\n\n\n\n```python\nIntegral(A,x).doit()\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}\\frac{5 x^{2}}{2} + x \\left(3 y + 4 z\\right)\\\\\\frac{3 x^{2}}{2} + x \\left(4 y + 5 z\\right)\\\\2 x^{2} + x \\left(5 y + 3 z\\right)\\end{matrix}\\right]$\n\n\n\n\n```python\nDerivative(A,x)\n```\n\n\n\n\n$\\displaystyle \\frac{\\partial}{\\partial x} \\left[\\begin{matrix}5 x + 3 y + 4 z\\\\3 x + 4 y + 5 z\\\\4 x + 5 y + 3 z\\end{matrix}\\right]$\n\n\n\n\n```python\nDerivative(A,x).doit()\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}5\\\\3\\\\4\\end{matrix}\\right]$\n\n\n", "meta": {"hexsha": "1b7a8e2ed4d2e50a59795314ec97d1b86ebc8b15", "size": 4854, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Personal_Projects/Matrix_Calculus.ipynb", "max_stars_repo_name": "NSC9/Sample_of_Work", "max_stars_repo_head_hexsha": "8f8160fbf0aa4fd514d4a5046668a194997aade6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Personal_Projects/Matrix_Calculus.ipynb", "max_issues_repo_name": "NSC9/Sample_of_Work", "max_issues_repo_head_hexsha": 
"8f8160fbf0aa4fd514d4a5046668a194997aade6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Personal_Projects/Matrix_Calculus.ipynb", "max_forks_repo_name": "NSC9/Sample_of_Work", "max_forks_repo_head_hexsha": "8f8160fbf0aa4fd514d4a5046668a194997aade6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 21.012987013, "max_line_length": 207, "alphanum_fraction": 0.4202719407, "converted": true, "num_tokens": 488, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9465966702001758, "lm_q2_score": 0.8918110404058913, "lm_q1q2_score": 0.8441853612959711}} {"text": "(nm_gaussian_elimination)=\n# Gaussian elimination\n```{index} Gaussian elimination\n```\n## Method\nThe [Gaussian elimination](https://en.wikipedia.org/wiki/Gaussian_elimination) algorithm is simply a systematic implementation of the method of equation substitution we used in the {ref}`introduction section ` to solve the \\\\(2\\times 2\\\\) system (i.e. where we \"multiply the second equation by 2 and subtract the first equation from the resulting equation to eliminate \\\\(x\\\\) and hence allowing us to find \\\\(y\\\\), we can then compute \\\\(x\\\\) from the first equation\").\n\nGaussian elimination as the method is atributed to the mathematician Gauss (although it was certainly known before his time) and elimination as we seek to eliminate unknowns. To perform this method for arbitrarily large systems (on paper) we form the so-called augmented matrix\n\n\\\\[\n[A|\\pmb{b}] = \n\\left[\n \\begin{array}{rr|r}\n 2 & 3 & 7 \\\\\\\\\\\\\n 1 & -4 & 3 \\\\\\\\\\\\\n \\end{array}\n\\right].\n\\\\]\n\nNote that this encodes the equations (including the RHS values). We can perform so-called row operations and as long as we are consistent with what we do for the LHS and RHS components then what this system is describing will not change - a solution to the updated system will be the same as the solution to the original system.\n\nOur task is to update the system so we can easily read off the solution - of course this is exactly what we do via the substitution approach. First we multiplied the second equation by 2, this yield the updated augmented matrix:\n\n\\\\[\n\\left[\n \\begin{array}{rr|r}\n 2 & 3 & 7 \\\\\\\\\\\\\n 2 & -8 & 6 \\\\\\\\\\\\\n \\end{array}\n\\right].\n\\\\]\nWe can use the following notation to describe this operation:\n\n\\\\[ \\text{Eq. } (2) \\leftarrow 2\\times \\text{Eq. } (2). \\\\]\n\nNote importantly that this does not change anything about what these pair of equations are telling us about the unknown solution vector \\\\(\\pmb{x}\\\\) which although it doesn't appear, is implicilty defined by this augmented equation.\n\nThe next step was to subtract the first equation from the updated second (\\\\( \\text{Eq. } (2) \\leftarrow \\text{Eq. } (2) - \\text{Eq. 
} (1) \\\\)):\n\n\\\\[\n\\left[\n \\begin{array}{rr|r}\n 2 & 3 & 7 \\\\\\\\\\\\\n 0 & -11 & -1 \\\\\\\\\\\\\n \\end{array}\n\\right].\n\\\\]\n\nThe square matrix that is now in the \\\\(A\\\\) position of this augmented system is an example of an upper-triangular matrix - all entries below the diagonal are zero.\n\nFor such a matrix we can perform back substitution - starting at the bottom to solve trivially for the final unknown (\\\\(y\\\\) here which clearly takes the value \\\\(-1/-11\\\\)), and then using this knowledge working our way up to solve for each remaining unknown in turn, here just \\\\(x\\\\) (solving \\\\(2x + 3\\times (1/11) = 7\\\\)).\n\nWe can perform the similar substitution if we had a lower triangular matrix, first finding the first unknown and then working our way forward through the remaining unknowns - hence in this case forward substitution.\n\nIf we wished we could continue working on the augmented matrix to make the \\\\(A\\\\) component diagonal - divide the second equation by 11 and multiply by 3 (\\\\( \\text{Eq. } (2) \\leftarrow (3/11)\\times \\text{Eq. } (2) \\\\)) and add it to the first (\\\\( \\text{Eq. } (1) \\leftarrow \\text{Eq. } (1) + \\text{Eq. } (2) \\\\)):\n\n\\\\[\n\\left[\n \\begin{array}{rr|r}\n 2 & 0 & 7-3/11\\\\\\\\\\\\\n 0 & -3 & -3/11 \\\\\\\\\\\\\n \\end{array}\n\\right]\n\\\\]\n\nand we can further make it the identity by dividing the rows by 2 and -3 respectively (\\\\( \\text{Eq. } (1) \\leftarrow (1/2)\\times \\text{Eq. } (1) \\\\), \\\\( \\text{Eq. } (2) \\leftarrow (-1/3)\\times \\text{Eq. } (2) \\\\)) :\n\n\\\\[\n\\left[\n \\begin{array}{rr|r}\n 1 & 0 & (7-3/11)/2 \\\\\\\\\\\\\n 0 & 1 & 1/11 \\\\\\\\\\\\\n \\end{array}\n\\right].\n\\\\]\n\nEach of these augmented matrices encodes exactly the same information as the original matrix system in terms of the unknown vector \\\\(\\pmb{x}\\\\), and hence this is telling us that\n\n\\\\[ \\pmb{x} = I \\pmb{x} = \\left[\n \\begin{array}{c}\n (7-3/11)/2 \\\\\\\\\\\\\n 1/11 \\\\\\\\\\\\\n \\end{array}\n\\right]\n\\\\]\n\ni.e. exactly the solution we found when we performed back substitution from the upper-triangular form of the augmented system.\n\n### Gaussian elimination example\n\nConsider a system of linear equations\n\n\\\\[\\begin{align*}\n 2x + 3y - 4z &= 5 \\\\\\\\\\\\\n 6x + 8y + 2z &= 3 \\\\\\\\\\\\\n 4x + 8y - 6z &= 19\n\\end{align*}\\\\]\n\nwrite this in matrix form, form the corresponding augmented system and perform row operations until you get to upper-triangular form, find the solution using back substitution (do this all with pen and paper).\n\nWe should find that \\\\(x=-6\\\\), \\\\(y=5\\\\), \\\\(z=-1/2\\\\).\n\n\n```{admonition} Answer\n:class: dropdown\n\nWe begin by taking first two equations and try eliminating \\\\(x\\\\). 
To do that we need to multiply the first equation by 3:\n\n\\\\[\n\\begin{align*}\n 6x + 9y - 12z &= 15 \\\\\\\\\\\\\n 6x + 8y + 2z &= 3 \\\\\\\\\\\\\n\\end{align*}\n\\\\]\n\nThen we subtract the first equation from the second to get:\n\\\\[-1y+14z=-12.\\\\]\n\nOur system now reads:\n\n\\\\[\n\\begin{align*}\n 2x + 3y - 4z &= 5 \\\\\\\\\\\\\n 0x - 1y + 14z &= -12 \\\\\\\\\\\\\n 4x + 8y - 6z &= 19\n\\end{align*}\n\\\\]\n\nNow let's take the first and third equations and eliminate \\\\(x\\\\) by multiplying the first equation by 2:\n\n\\\\[\n\\begin{align*}\n 4x + 6y - 8z &= 10 \\\\\\\\\\\\\n 4x + 8y - 6z &= 19\n\\end{align*}\n\\\\]\n\nWe subtract the equations to obtain:\n\\\\[2y+2z=9.\\\\]\n\nOur updated system is then:\n\n\\\\[\n\\begin{align*}\n 2x + 3y - 4z &= 5 \\\\\\\\\\\\\n 0x - 1y + 14z &= -12 \\\\\\\\\\\\\n 0x + 2y + 2z &= 9\n\\end{align*}\n\\\\]\n\nThe unknown \\\\(x\\\\) has been successfully eliminated from two equations. Now comparing the last two equations, we see that we need to multiply the first of them by -2 to eliminate \\\\(y\\\\):\n\n\\\\[\n\\begin{align*}\n 0x + 2y - 28z &= 24 \\\\\\\\\\\\\n 0x + 2y + 2z &= 9\n\\end{align*}\n\\\\]\n\nSubtracting the equations we get:\n\\\\[ 0y - 30z = 15.\\\\]\n\nOur updated system (after multiplying the second equation through by \\\\(-1\\\\)) is then:\n\\\\[\n\\begin{align*}\n 2x + 3y - 4z &= 5 \\\\\\\\\\\\\n 0x + y - 14z &= 12 \\\\\\\\\\\\\n 0x + 0y - 30z &= 15\n\\end{align*}\n\\\\]\n\nWe have successfully eliminated the unknown \\\\(y\\\\). Now our equations have a triangle where the coefficients are \\\\(0\\\\) and another triangle where the coefficients are \\\\(\\neq 0\\\\). Because the triangle with the \\\\(\\neq 0\\\\) coefficients sits above the triangle of \\\\(0\\\\) coefficients, this form is called the **upper triangular form**. \n\nNow we will use back substitution to solve the remaining system. From the third equation we get that \\\\(z=-\\frac{1}{2}\\\\).\n\nWe then proceed to equation 2 knowing that \\\\(z=-\\frac{1}{2}\\\\), therefore \\\\(y=5\\\\).\n\nFinally, we can solve the first equation to find that \\\\(x=-6\\\\).\n\n```\n(nm_gaussian_elimination_code)=\n## Implementation\nNotice that we are free to perform the following operations on the augmented system without changing the corresponding solution:\n\n\n* exchanging two rows\n\n\n* multiplying a row by a non-zero constant (\\\\(\\text{Eq. } (i)\\leftarrow \\lambda \\times \\text{Eq. } (i)\\\\))\n\n\n* subtracting a (non-zero) multiple of one row from another (\\\\(\\text{Eq. } (i)\\leftarrow \\text{Eq. } (i) - \\lambda \\times \\text{Eq. }(j)\\\\))\n\n\nLet's consider the algorithm mid-way working on an arbitrary matrix system, i.e. assume that the first \\\\(k\\\\) rows (i.e. above the horizontal dashed line in the matrix below) have already been transformed into upper-triangular form, while the equations/rows below are not yet in this form. 
The augmented equation in this case can be assumed to look like\n\n```{margin} Note\nRemember that here as we are mid-way through the algorithm the \\\\(A\\\\)'s and \\\\(b\\\\)'s in the above are not the same as in the original system!\n```\n\n\\\\[\n\\left[\n \\begin{array}{rrrrrrrrr|r}\n A_{11} & A_{12} & A_{13} & \\cdots & A_{1k} & \\cdots & A_{1j} & \\cdots & A_{1n} & b_1 \\\\\\\\\\\\\n 0 & A_{22} & A_{23} & \\cdots & A_{2k} & \\cdots & A_{2j} & \\cdots & A_{2n} & b_2 \\\\\\\\\\\\\n 0 & 0 & A_{33} & \\cdots & A_{3k} & \\cdots & A_{3j} & \\cdots & A_{3n} & b_3 \\\\\\\\\\\\\n \\vdots & \\vdots & \\vdots & \\ddots & \\vdots & \\ddots & \\vdots & \\ddots & \\vdots & \\vdots \\\\\\\\\\\\\n 0 & 0 & 0 & \\cdots & A_{kk} & \\cdots & A_{kj} & \\cdots & A_{kn} & b_k \\\\\\\\\\\\ \n\\hdashline\n \\vdots & \\vdots & \\vdots & \\ddots & \\vdots & \\ddots & \\vdots & \\ddots & \\vdots & \\vdots \\\\\\\\\\\\\n 0 & 0 & 0 & \\cdots & A_{ik} & \\cdots & A_{ij} & \\cdots & A_{in} & b_i \\\\\\\\\\\\\n \\vdots & \\vdots & \\vdots & \\ddots & \\vdots & \\ddots & \\vdots & \\ddots & \\vdots & \\vdots \\\\\\\\\\\\\n 0 & 0 & 0 & \\cdots & A_{nk} & \\cdots & A_{nj} & \\cdots & A_{nn} & b_n \\\\\\\\\\\\\n\\end{array}\n\\right].\n\\\\]\n\n\nOur aim as a next step in the algorithm is to use row \\\\(k\\\\) (the \"pivot row\") to eliminate \\\\(A_{ik}\\\\), and we need to do this for all of the rows \\\\(i\\\\) below the pivot, i.e. for all \\\\(i>k\\\\).\n\nThe zeros to the left of the leading term in the pivot row means that these operations will not mess up the fact that we have all the zeros we are looking for in the lower left part of the matrix.\n\nTo eliminate \\\\(A_{ik}\\\\) for a single row \\\\(i\\\\) we need to perform the operation \n\\\\[ \\text{Eq. } (i)\\leftarrow \\text{Eq. } (i) - \\frac{A_{ik}}{A_{kk}} \\times \\text{Eq. }(k) \\\\]\n\nor equivalently\n\n\\\\[\n\\begin{align}\nA_{ij} &\\leftarrow A_{ij} - \\frac{A_{ik}}{A_{kk}} A_{kj}, \\quad j=k,k+1,\\ldots,n\\\\\\\\\\\\\nb_i &\\leftarrow b_i - \\frac{A_{ik}}{A_{kk}} b_{k}\n\\end{align}\n\\\\]\n\n\\\\(j\\\\) only needs to run from \\\\(k\\\\) upwards as we can assume that the earlier entries in column \\\\(i\\\\) have already been set to zero, and also that the corresponding terms from the pivot row are also zero (we don't need to perform operations that we know involve the addition of zeros!).\n\nTo eliminate these entries for all rows below the pivot we need to repeat for all \\\\(i>k\\\\).\n\n### Upper triangular form code\n\nWe will write some code that takes a matrix \\\\(A\\\\) and a vector \\\\(\\pmb{b}\\\\) and converts it into upper-triangular form using the above algorithm. 
For the \\\\(2 \\times 2\\\\) and \\\\(3\\times 3\\\\) examples from above we will compare the resulting \\\\(A\\\\) and \\\\(\\pmb{b}\\\\) you obtain following elimination.\n\nAt first we will write a function to obtain the upper triangular matrix form:\n\n\n```python\nimport numpy as np\n\ndef upper_triangle(A, b):\n \"\"\" A function to covert A into upper triangluar form through row operations.\n The same row operations are performed on the vector b.\n \n Note that this implementation does not use partial pivoting which is introduced below.\n \n Also note that A and b are overwritten, and hence we do not need to return anything\n from the function.\n \"\"\"\n n = np.size(b)\n rows, cols = np.shape(A)\n # Check A is square\n assert(rows == cols)\n # Check A has the same numner of rows as the size of the vector b\n assert(rows == n)\n\n # Loop over each pivot row - all but the last row which we will never need to use as a pivot\n for k in range(n-1):\n # Loop over each row below the pivot row, including the last row which we do need to update\n for i in range(k+1, n):\n # Define the scaling factor for this row outside the innermost loop otherwise \n # its value gets changed as you over-write A!!\n # There's also a performance saving from not recomputing things when not strictly necessary\n s = (A[i, k] / A[k, k])\n # Update the current row of A by looping over the column j\n # start the loop from k as we can assume the entries before this are already zero\n for j in range(k, n):\n A[i, j] = A[i, j] - s*A[k, j]\n # and update the corresponding entry of b\n b[i] = b[i] - s*b[k]\n \n return A, b\n```\n\nNow we can test our code on the examples from above. First \\\\(2\\times 2\\\\) example:\n\n\n```python\nA = np.array([[2., 3.],\n [1., -4.]])\nb = np.array([7., 3.])\n\nA, b = upper_triangle(A, b)\n\nprint('Our A matrix following row operations to transform it into upper-triangular form:')\nprint(A)\nprint('The correspondingly updated b vector:')\nprint(b)\n```\n\n Our A matrix following row operations to transform it into upper-triangular form:\n [[ 2. 3. ]\n [ 0. -5.5]]\n The correspondingly updated b vector:\n [ 7. -0.5]\n\n\n\\\\(3\\times 3\\\\) example:\n\n\n```python\nA = np.array([[2., 3., -4.],\n [6., 8., 2.],\n [4., 8., -6.]])\nb = np.array([5., 3., 19.])\n\nA, b = upper_triangle(A, b)\n\nprint('\\nOur A matrix following row operations to transform it into upper-triangular form:')\nprint(A)\nprint('The correspondingly updated b vector:')\nprint(b)\n```\n\n \n Our A matrix following row operations to transform it into upper-triangular form:\n [[ 2. 3. -4.]\n [ 0. -1. 14.]\n [ 0. 0. 30.]]\n The correspondingly updated b vector:\n [ 5. -12. 
-15.]\n\n\n### Back substitution code\n\nNow that we have an augmented system in the upper-triangular form\n\n\\\\[\n\\left[\n \\begin{array}{rrrrr|r}\n A_{11} & A_{12} & A_{13} & \\cdots & A_{1n} & b_1 \\\\\\\\\\\\\n 0 & A_{22} & A_{23} & \\cdots & A_{2n} & b_2 \\\\\\\\\\\\\n 0 & 0 & A_{33} & \\cdots & A_{3n} & b_3 \\\\\\\\\\\\\n \\vdots & \\vdots & \\vdots & \\ddots & \\vdots & \\vdots \\\\\\\\\\\\\n 0 & 0 & 0 & \\cdots & A_{nn} & b_n \\\\\\\\\\\\ \n\\end{array}\n\\right].\n\\\\]\n\nwhere the solution \\\\(\\pmb{x}\\\\) of the original system also satisfies \\\\(A\\pmb{x}=\\pmb{b}\\\\) for the \\\\(A\\\\) and \\\\(\\pmb{b}\\\\) in the above upper-triangular form (rather than the original \\\\(A\\\\) and \\\\(\\pmb{b}\\\\)).\n\nWe can solve the final equation row to yield\n\n\\\\[x_n = \\frac{b_n}{A_{nn}}.\\\\]\n\nThe second to last equation then yields\n\n\\\\[\n\\begin{align}\nA_{n-1,n-1}x_{n-1} + A_{n-1,n}x_n &= b_{n-1}\\\\\\\\\\\\\n\\implies x_{n-1} = \\frac{b_{n-1} - A_{n-1,n}x_n}{A_{n-1,n-1}}\\\\\\\\\\\\\n\\implies x_{n-1} = \\frac{b_{n-1} - A_{n-1,n}\\frac{b_n}{A_{nn}}}{A_{n-1,n-1}}\n\\end{align}\n\\\\]\n\nand so on to row \\\\(k\\\\) which yields\n\n\\\\[\n\\begin{align}\nA_{k,k}x_{k} + A_{k,k+1}x_{k+1} +\\cdots + A_{k,n}x_n &= b_{k}\\\\\\\\\\\\\n\\iff A_{k,k}x_{k} + \\sum_{j=k+1}^{n}A_{kj}x_j &= b_{k}\\\\\\\\\\\\\n\\implies x_{k} &= \\left( b_k - \\sum_{j=k+1}^{n}A_{kj}x_j\\right)\\frac{1}{A_{kk}}\n\\end{align}\n\\\\]\n\nWe will extend the code to perform back substitution and hence to obtain the final solution \\\\(\\pmb{x}\\\\).\n\n\n```python\n# This function assumes that A is already an upper triangular matrix, \n# e.g. we have already run our upper_triangular function if needed.\n\ndef back_substitution(A, b):\n \"\"\" Function to perform back subsitution on the system Ax=b.\n \n Returns the solution x.\n \n Assumes that A is on upper triangular form.\n \"\"\"\n n = np.size(b)\n # Check A is square and its number of rows and columns same as size of the vector b\n rows, cols = np.shape(A)\n assert(rows == cols)\n assert(rows == n)\n # We can/should check that A is upper triangular using np.triu which is the \n # upper triangular part of a matrix - if A is already upper triangular, then\n # it should of course match the upper-triangular component of A!!\n assert(np.allclose(A, np.triu(A)))\n \n x = np.zeros(n)\n # Start at the end (row n-1) and work backwards\n for k in range(n-1, -1, -1):\n # Note that we could do this update in a single vectorised line \n # using np.dot or @ - this could also speed things up\n s = 0.\n for j in range(k+1, n):\n s = s + A[k, j]*x[j]\n x[k] = (b[k] - s)/A[k, k]\n\n return x\n```\n\nLet's test it:\n\n\n```python\nA = np.array([[2., 3., -4.],\n [6., 8., 2.],\n [4., 8., -6.]])\nb = np.array([5., 3., 19.])\n\nA_upp, b_upp = upper_triangle(A, b)\n\n# Print the solution using our codes\nx = back_substitution(A_upp, b_upp)\nprint('Our solution: ',x) \n\n# Check our answer against what SciPy gives us by multiplying b by A inverse \nimport scipy.linalg as sl\n\nprint('SciPy solution: ',sl.inv(A) @ b)\nprint('Success: ', np.allclose(x, sl.inv(A) @ b))\n```\n\n Our solution: [-6. 5. -0.5]\n SciPy solution: [-6. 5. -0.5]\n Success: True\n\n\n(nm_gauss_jordan_elimination)=\n## Gauss-Jordan elimination\n```{index} Gauss-Jordan elimination\n```\nRecall that for the augmented matrix example above we continued past the upper-triangular form so that the augmented matrix had the identity matrix in the \\\\(A\\\\) location. 
This algorithm has the name Gauss-Jordan elimination but note that it requires more operations than the conversion to upper-triangular form followed by back subsitution and so is only of academic interest.\n\n### Matrix inversion\n\nNote that if we were to form the augmented equation with the full identity matrix in the place of the vector \\\\(\\pmb{b}\\\\), i.e. \\\\([A|I]\\\\) and performed row operations exactly as above until \\\\(A\\\\) is transformed into the identity matrix \\\\(I\\\\), then we would be left with the inverse of \\\\(A\\\\) in the original \\\\(I\\\\) location, i.e.\n\n\\\\[ [A|I] \\rightarrow [I|A^{-1}].\\\\]\n\nWe will write the code to construct inverse matrix.\n\n```{admonition} Hints\n:class: dropdown\n\nWe have written a a bunch of matrices below. Seee if you can find what the sequence of matrices mean. You should feel that the operations are oddly familiar. \n\n\\\\[\nA=\n\\left(\n \\begin{array}{rrr}\n 2 & -1 & 0 \\\\\\\\\\\\\n -1 & 2 & -1 \\\\\\\\\\\\\n 0 & -1 & 2 \\\\\\\\\\\\\n \\end{array}\n\\right)\n\\quad\\quad I=\n\\left(\n \\begin{array}{rrr}\n 1 & 0 & 0 \\\\\\\\\\\\\n 0 & 1 & 0 \\\\\\\\\\\\\n 0 & 0 & 1 \\\\\\\\\\\\\n \\end{array}\n\\right)\n\\\\]\n\n\\\\[\n\\left(\n \\begin{array}{rrr}\n 2 & -1 & 0 \\\\\\\\\\\\\n 2 & -4 & +2 \\\\\\\\\\\\\n 0 & -1 & 2 \\\\\\\\\\\\\n \\end{array}\n\\right)\n\\quad\\quad\n\\left(\n \\begin{array}{rrr}\n 1 & 0 & 0 \\\\\\\\\\\\\n 0 & -2 & 0 \\\\\\\\\\\\\n 0 & 0 & 1 \\\\\\\\\\\\\n \\end{array}\n\\right)\n\\\\]\n\n\\\\[\n\\left(\n \\begin{array}{rrr}\n 2 & -1 & 0 \\\\\\\\\\\\\n 0 & -3 & +2 \\\\\\\\\\\\\n 0 & -1 & 2 \\\\\\\\\\\\\n \\end{array}\n\\right)\n\\quad\\quad\n\\left(\n \\begin{array}{rrr}\n 1 & 0 & 0 \\\\\\\\\\\\\n -1 & -2 & 0 \\\\\\\\\\\\\n 0 & 0 & 1 \\\\\\\\\\\\\n \\end{array}\n\\right)\n\\\\]\n\n\\\\[\n\\left(\n \\begin{array}{rrr}\n 2 & -1 & 0 \\\\\\\\\\\\\n 0 & -3 & +2 \\\\\\\\\\\\\n 0 & -3 & 6 \\\\\\\\\\\\\n \\end{array}\n\\right)\n\\quad\\quad\n\\left(\n \\begin{array}{rrr}\n 1 & 0 & 0 \\\\\\\\\\\\\n -1 & -2 & 0 \\\\\\\\\\\\\n 0 & 0 & 3 \\\\\\\\\\\\\n \\end{array}\n\\right)\n\\\\]\n\n\\\\[\n\\left(\n \\begin{array}{rrr}\n 2 & -1 & 0 \\\\\\\\\\\\\n 0 & -3 & +2 \\\\\\\\\\\\\n 0 & 0 & 4 \\\\\\\\\\\\\n \\end{array}\n\\right)\n\\quad\\quad\n\\left(\n \\begin{array}{rrr}\n 1 & 0 & 0 \\\\\\\\\\\\\n -1 & -2 & 0 \\\\\\\\\\\\\n 1 & 2 & 3 \\\\\\\\\\\\\n \\end{array}\n\\right)\n\\\\]\n\\\\[\n\\left(\n \\begin{array}{rrr}\n 2 & -1 & 0 \\\\\\\\\\\\\n 0 & -6 & +4 \\\\\\\\\\\\\n 0 & 0 & 4 \\\\\\\\\\\\\n \\end{array}\n\\right)\n\\quad\\quad\n\\left(\n \\begin{array}{rrr}\n 1 & 0 & 0 \\\\\\\\\\\\\n -2 & -4 & 0 \\\\\\\\\\\\\n 1 & 2 & 3 \\\\\\\\\\\\\n \\end{array}\n\\right)\n\\\\]\n\n\\\\[\n\\left(\n \\begin{array}{rrr}\n 2 & -1 & 0 \\\\\\\\\\\\\n 0 & -6 & 0 \\\\\\\\\\\\\n 0 & 0 & 4 \\\\\\\\\\\\\n \\end{array}\n\\right)\n\\quad\\quad\n\\left(\n \\begin{array}{rrr}\n 1 & 0 & 0 \\\\\\\\\\\\\n -3 & -6 & -3 \\\\\\\\\\\\\n 1 & 2 & 3 \\\\\\\\\\\\\n \\end{array}\n\\right)\n\\\\]\n\n\\\\[\n\\left(\n \\begin{array}{rrr}\n 12 & -6 & 0 \\\\\\\\\\\\\n 0 & -6 & 0 \\\\\\\\\\\\\n 0 & 0 & 4 \\\\\\\\\\\\\n \\end{array}\n\\right)\n\\quad\\quad\n\\left(\n \\begin{array}{rrr}\n 6 & 0 & 0 \\\\\\\\\\\\\n -3 & -6 & -3 \\\\\\\\\\\\\n 1 & 2 & 3 \\\\\\\\\\\\\n \\end{array}\n\\right)\n\\\\]\n\n\\\\[\n\\left(\n \\begin{array}{rrr}\n -12 & 0 & 0 \\\\\\\\\\\\\n 0 & -6 & 0 \\\\\\\\\\\\\n 0 & 0 & 4 \\\\\\\\\\\\\n \\end{array}\n\\right)\n\\quad\\quad\n\\left(\n \\begin{array}{rrr}\n -9 & -6 & -3 \\\\\\\\\\\\\n -3 & -6 & -3 \\\\\\\\\\\\\n 1 & 2 & 3 
\\\\\\\\\\\\\n \\end{array}\n\\right)\n\\\\]\n\n\\\\[\n\\left(\n \\begin{array}{rrr}\n 1 & 0 & 0 \\\\\\\\\\\\\n 0 & 1 & 0 \\\\\\\\\\\\\n 0 & 0 & 1 \\\\\\\\\\\\\n \\end{array}\n\\right)\n\\quad\\quad\n\\left(\n \\begin{array}{rrr}\n 3/4 & 1/2 & 1/4 \\\\\\\\\\\\\n 1/2 & 1 & 1/2 \\\\\\\\\\\\\n 1/4 & 2/4 & 3/4 \\\\\\\\\\\\\n \\end{array}\n\\right)\n\\\\]\n\n\\\\[\nA^{-1}=\n\\left(\n \\begin{array}{rrr}\n 3/4 & 1/2 & 1/4 \\\\\\\\\\\\\n 1/2 & 1 & 1/2 \\\\\\\\\\\\\n 1/4 & 2/4 & 3/4 \\\\\\\\\\\\\n \\end{array}\n\\right)\n\\\\]\n\n```\n\n\n\n```python\n# This updated version of the upper_triangular function now\n# assumes that a matrix, B, is in the old vector location (what was b)\n# in the augmented system, and applies the same operations to\n# B as to A - only a minor difference\n\ndef upper_triangle2(A, B):\n n, m = np.shape(A)\n assert(n == m) # This is designed to work for a square matrix\n\n # Loop over each pivot row.\n for k in range(n-1):\n # Loop over each equation below the pivot row.\n for i in range(k+1, n):\n # Define the scaling factor outside the innermost\n # loop otherwise its value gets changed as you are\n # over-writing A\n s = (A[i, k]/A[k, k])\n for j in range(n):\n A[i, j] = A[i, j] - s*A[k, j]\n # Replace the old b update with the same update as A\n B[i, j] = B[i, j] - s*B[k, j]\n\n\n# This is a version which transforms the matrix into lower\n# triangular form - the point here is that if you give it a\n# matrix that is already in upper triangular form, then the\n# result will be a diagonal matrix\ndef lower_triangle2(A, B):\n n, m = np.shape(A)\n assert(n == m) # This is designed to work for a square matrix\n\n # Now it's basically just the upper triangular algorithm \n # applied backwards\n for k in range(n-1, -1, -1):\n for i in range(k-1, -1, -1):\n s = (A[i, k]/A[k, k])\n for j in range(n):\n A[i, j] = A[i, j] - s*A[k, j]\n B[i, j] = B[i, j] - s*B[k, j]\n```\n\n\n```python\n# Let's redefine A as our matrix above\nA = np.array([[2., 3., -4.],\n [3., -1., 2.],\n [4., 2., 2.]])\n\n# and B is the identity of the corresponding size\nB = np.eye(np.shape(A)[0])\n\n# transform A into upper triangular form \n# (and perform the same operations on B)\nupper_triangle2(A, B)\nprint('Upper triangular transformed A = ')\nprint(A)\n\n# now make this updated A lower triangular as well \n# (the result should be diagonal)\nlower_triangle2(A, B)\nprint('\\nand following application of our lower triangular function = ')\nprint(A)\n\n# The final step to achieve the identity is just to divide each row through by the value \n# of the diagonal to end up with 1's on the main diagonal and 0 everywhere else.\nfor i in range(np.shape(A)[0]):\n B[i, :] = B[i, :]/A[i, i]\n A[i, :] = A[i, :]/A[i, i]\n\n# the final A should be the identity\nprint('\\nOur final transformed A = ')\nprint(A)\n\n# the final B should therefore be the inverse of the original B\nprint('\\nand the correspondingly transformed B = ')\nprint(B)\n\n# let's compute the inverse using built-in functions and check\n# we get the same answer (we need to reinitialise A)\nA = np.array([[2., 3., -4.], [3., -1., 2.], [4., 2., 2.]])\nprint('\\nSciPy computes the inverse as:')\nprint(sl.inv(A))\n\n# B should now store the inverse of the original A - let's check\nprint('\\nSuccess: ', np.allclose(B, sl.inv(A)))\n```\n\n Upper triangular transformed A = \n [[ 2. 3. -4. ]\n [ 0. -5.5 8. ]\n [ 0. 0. 4.18181818]]\n \n and following application of our lower triangular function = \n [[ 2. 0. 0. ]\n [ 0. -5.5 0. ]\n [ 0. 0. 
4.18181818]]\n \n Our final transformed A = \n [[ 1. 0. 0.]\n [-0. 1. -0.]\n [ 0. 0. 1.]]\n \n and the correspondingly transformed B = \n [[ 0.13043478 0.30434783 -0.04347826]\n [-0.04347826 -0.43478261 0.34782609]\n [-0.2173913 -0.17391304 0.23913043]]\n \n SciPy computes the inverse as:\n [[ 0.13043478 0.30434783 -0.04347826]\n [-0.04347826 -0.43478261 0.34782609]\n [-0.2173913 -0.17391304 0.23913043]]\n \n Success: True\n\n\nYou may have noticed above that we have no way of guaranteeing that the \\\\(A_{kk}\\\\) we divide through by in the Guassian elimination or back substitution algorithms is non-zero (or not very small which will also lead to computational problems). \n\nWe commented that we are free to exchange two rows in our augmented system - how could you use this fact to build robustness into our algorithms in order to deal with matrices for which our algorithms do lead to very small or zero \\\\(A_{kk}\\\\) values?\n\n\n```python\n# This function swaps rows in matrix A\n# (and remember that we need to do likewise for the vector b \n# we are performing the same operations on)\n\ndef swap_row(A, b, i, j):\n \"\"\" Swap rows i and j of the matrix A and the vector b.\n \"\"\" \n if i == j:\n return\n print('swapping rows', i,'and', j)\n # If we are swapping two values, we need to take a copy of one of them first otherwise\n # we will lose it when we make the first swap and will not be able to use it for the second.\n # We need to make sure it is a real copy - not just a copy of a reference to the data!\n # use np.copy to do this. \n iA = np.copy(A[i, :])\n ib = np.copy(b[i])\n\n A[i, :] = A[j, :]\n b[i] = b[j]\n\n A[j, :] = iA\n b[j] = ib\n\n \n# This is a new version of the upper_triangular function\n# with the added step of swapping rows so the largest\n# magnitude number is always our pivot/\n# pp stands for partial pivoting which will be explained\n# in more detail below.\n\ndef upper_triangle_pp(A, b):\n \"\"\" A function to covert A into upper triangluar form through row operations.\n The same row operations are performed on the vector b.\n \n This version uses partial pivoting.\n \n Note that A and b are overwritten, and hence we do not need to return anything\n from the function.\n \"\"\"\n n = np.size(b)\n # check A is square and its number of rows and columns same as size of the vector b\n rows, cols = np.shape(A)\n assert(rows == cols)\n assert(rows == n)\n\n # Loop over each pivot row - all but the last row\n for k in range(n-1):\n # Swap rows so we are always dividing through by the largest number.\n # initiatise kmax with the current pivot row (k)\n kmax = k\n # loop over all entries below the pivot and select the k with the largest abs value\n for i in range(k+1, n):\n if abs(A[kmax, k]) < abs(A[i, k]):\n kmax = i\n # and swap the current pivot row (k) with the row with the largest abs value below the pivot\n swap_row(A, b, kmax, k)\n\n for i in range(k+1, n):\n s = (A[i, k]/A[k, k])\n for j in range(k, n):\n A[i, j] = A[i, j] - s*A[k, j]\n b[i] = b[i] - s*b[k]\n\n\n# Apply the new code with row swaps to our matrix problem from above\nA = np.array([[2., 3., -4.],\n [3., -1., 2.],\n [4., 2., 2.]])\nb = np.array([10., 3., 8.])\n\nupper_triangle_pp(A, b)\n\nprint('\\nA and b with row swaps: ')\nprint(A)\nprint(b)\n# compute the solution from these using our back substitution code\n# could also have used SciPy of course\nx1 = back_substitution(A, b)\n\n# compare with our first function with no row swaps\nA = np.array([[2., 3., -4.],\n [3., -1., 2.],\n [4., 2., 2.]])\nb 
= np.array([10., 3., 8.])\n\nupper_triangle(A, b)\n\nprint('\\nA and b without any row swaps: ')\nprint(A)\nprint(b)\nx2 = back_substitution(A, b)\n\n# check these two systems are equivalent\nprint('\\nThese two upper triangular systems are equivalent (i.e. have the same solution): ',np.allclose(x1, x2))\n```\n\n swapping rows 2 and 0\n \n A and b with row swaps: \n [[ 4. 2. 2. ]\n [ 0. -2.5 0.5]\n [ 0. 0. -4.6]]\n [ 8. -3. 3.6]\n \n A and b without any row swaps: \n [[ 2. 3. -4. ]\n [ 0. -5.5 8. ]\n [ 0. 0. 4.18181818]]\n [ 10. -12. -3.27272727]\n \n These two upper triangular systems are equivalent (i.e. have the same solution): True\n\n", "meta": {"hexsha": "0a66f8d4ad41c0bccd86cfa84efe414ace3289d6", "size": 38847, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/c_mathematics/numerical_methods/12_Gaussian_elimination.ipynb", "max_stars_repo_name": "primer-computational-mathematics/book", "max_stars_repo_head_hexsha": "305941b4f1fc4f15d472fd11f2c6e90741fb8b64", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2020-08-02T07:32:14.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-16T16:40:43.000Z", "max_issues_repo_path": "notebooks/c_mathematics/numerical_methods/12_Gaussian_elimination.ipynb", "max_issues_repo_name": "primer-computational-mathematics/book", "max_issues_repo_head_hexsha": "305941b4f1fc4f15d472fd11f2c6e90741fb8b64", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 5, "max_issues_repo_issues_event_min_datetime": "2020-07-27T10:45:26.000Z", "max_issues_repo_issues_event_max_datetime": "2020-08-12T15:09:14.000Z", "max_forks_repo_path": "notebooks/c_mathematics/numerical_methods/12_Gaussian_elimination.ipynb", "max_forks_repo_name": "primer-computational-mathematics/book", "max_forks_repo_head_hexsha": "305941b4f1fc4f15d472fd11f2c6e90741fb8b64", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2020-08-05T13:57:32.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-02T19:03:57.000Z", "avg_line_length": 37.6424418605, "max_line_length": 523, "alphanum_fraction": 0.467732386, "converted": true, "num_tokens": 8957, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.8947894745194283, "lm_q2_score": 0.9433475786738345, "lm_q1q2_score": 0.8440974842107354}} {"text": "```python\nimport sympy\nt, K ,r, P0, C1 = sympy.symbols('t, K, r, P_0, C_1')\nP = sympy.Function('P')\nedo = P(t).diff(t) - r * P(t) * (1 - P(t)/K)\nedo\n```\n\n\n\n\n$\\displaystyle - r \\left(1 - \\frac{P{\\left(t \\right)}}{K}\\right) P{\\left(t \\right)} + \\frac{d}{d t} P{\\left(t \\right)}$\n\n\n\n\n```python\nedo_sol = sympy.dsolve(edo, P(t))\nedo_sol\n```\n\n\n\n\n$\\displaystyle P{\\left(t \\right)} = \\frac{K e^{C_{1} K + r t}}{e^{C_{1} K + r t} - 1}$\n\n\n\n\n```python\nini_cond = {P(0): P0}\nini_cond\n```\n\n\n\n\n {P(0): P_0}\n\n\n\n\n```python\nC_eq = edo_sol.subs(t,0).subs(ini_cond)\nC_eq\n```\n\n\n\n\n$\\displaystyle P_{0} = \\frac{K e^{C_{1} K}}{e^{C_{1} K} - 1}$\n\n\n\n\n```python\nimport matplotlib.pyplot as plt\nimport numpy as np\ndef logistica(t, P0=100, K=1000, r=0.25):\n A = P0 / (P0 - K)\n return K / (1 - np.exp(-r*t) / A)\n```\n\n\n```python\nt = np.linspace(0, 40, 100) # Intervalo de tiempo en que se obtiene la soluci\u00f3n\np1 = plt.plot(t, logistica(t, P0 = 100), label=r'$P_0 = 100$')\np1 = plt.plot(t, logistica(t, P0 = 2000), label=r'$P_0 = 1500$')\nplt.legend()\nplt.show()\n```\n", "meta": {"hexsha": "47271a62c6a94c6beb38ab10e3b5c89be877cb68", "size": 18086, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "example/equations_and_graphs.ipynb", "max_stars_repo_name": "facundobatista/jupynotex", "max_stars_repo_head_hexsha": "87df3aa14a115854fbfe8f5152df5a8390796eae", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2020-10-21T20:28:52.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-03T22:49:58.000Z", "max_issues_repo_path": "example/equations_and_graphs.ipynb", "max_issues_repo_name": "facundobatista/jupynotex", "max_issues_repo_head_hexsha": "87df3aa14a115854fbfe8f5152df5a8390796eae", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2020-10-12T13:31:52.000Z", "max_issues_repo_issues_event_max_datetime": "2021-12-02T18:39:50.000Z", "max_forks_repo_path": "example/equations_and_graphs.ipynb", "max_forks_repo_name": "facundobatista/jupynotex", "max_forks_repo_head_hexsha": "87df3aa14a115854fbfe8f5152df5a8390796eae", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 112.3354037267, "max_line_length": 14732, "alphanum_fraction": 0.8679088798, "converted": true, "num_tokens": 437, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9637799451753696, "lm_q2_score": 0.8757869965109764, "lm_q1q2_score": 0.8440659434826504}} {"text": "\n\nUniversidade Tecnol\u00f3gica Federal do Paran\u00e1 \nProfessor: Wellington Jos\u00e9 Corr\u00eaa \nOrientando: Enzo Dornelles Italiano \nC\u00e1lculo Num\u00e9rico\n\n# Integra\u00e7\u00e3o Num\u00e9rica\n\nComo de costume, devemos executar os c\u00f3digos abaixo antes de executar.\n\n## C\u00f3digos\n\n\n```\n# !pip install mpld3\n# !pip install \"git+https://github.com/javadba/mpld3@display_fix\"\nimport copy\nimport math\nimport mpld3\nimport numpy as np\nfrom sympy import *\nfrom mpld3 import plugins\nimport matplotlib.pyplot as plt\nx,t = symbols('x t')\n\ndef trapezio(f, a, b):\n h = b - a\n print(\"O valor de h \u00e9 \", h)\n deriv = h/2*(f.subs(x,a)+f.subs(x,b))\n exact = integrate(f, (x, a, b))\n print(\"Pela Regra do Trap\u00e9zio, temos que a integral\")\n pprint(Integral(f, (x, a, b)), use_unicode=True)\n print(\"\u00e9 aproximadamente\", deriv)\n\n print(\"\\nComo compara\u00e7\u00e3o, o valor exato da integral\")\n pprint(Integral(f, (x, a, b)), use_unicode=True)\n print(\"\u00e9\", exact)\n\n f = diff(f, x, 2)\n maior = abs(f.subs(x,a))\n if abs(f.subs(x,b)) > maior:\n maior = abs(f.subs(x,b))\n E = (-h**3/12)*maior\n print(\"\\nLimitante\")\n print(\"|E| <=\", abs(E))\n\ndef trapezio_gen(f, a, b, n):\n h = (b - a)/n\n print(\"O valor de h \u00e9\", h)\n xk = np.linspace(a,b,n+1)\n fx = 0\n for i in range(len(xk)):\n if i == 0 or i == len(xk)-1:\n fx += f.subs(x, xk[i])\n else:\n fx += 2*f.subs(x, xk[i])\n deriv = (h/2)*fx\n exact = integrate(f, (x, a, b))\n print(\"Pela Regra do Trap\u00e9zio, temos que a integral\")\n pprint(Integral(f, (x, a, b)), use_unicode=True)\n print(\"\u00e9 aproximadamente\", deriv)\n\n print(\"\\nComo compara\u00e7\u00e3o, o valor exato da integral\")\n pprint(Integral(f, (x, a, b)), use_unicode=True)\n print(\"\u00e9\", exact)\n\n f = diff(f, x, 2)\n maior = abs(f.subs(x,a))\n if abs(f.subs(x,b)) > maior:\n maior = abs(f.subs(x,b))\n E = (h**2/12)*maior*(b-a)\n print(\"\\nLimitante\")\n print(\"|E| <=\", abs(E))\n\ndef simpson13(f, a, b, n):\n h = (b - a)/n\n xk = np.linspace(a,b,n+1)\n fx = 0\n for i in range(len(xk)):\n if i == 0 or i == len(xk)-1:\n fx += f.subs(x, xk[i])\n elif i % 2 == 0:\n fx += 2*f.subs(x, xk[i])\n else:\n fx += 4*f.subs(x, xk[i])\n deriv = (h/3)*fx\n exact = integrate(f, (x, a, b)).evalf()\n\n print(\"Pela Regra 1/3 de Simpson, temos que a integral\")\n pprint(Integral(f, (x, a, b)), use_unicode=True)\n print(\"\u00e9 aproximadamente\", deriv)\n\n print(\"\\nComo compara\u00e7\u00e3o, o valor exato da integral\")\n pprint(Integral(f, (x, a, b)), use_unicode=True)\n print(\"\u00e9\", exact)\n\n f = diff(f, x, 4)\n maior = abs(f.subs(x,a))\n if abs(f.subs(x,b)) > maior:\n maior = abs(f.subs(x,b))\n E = (h**4/180)*maior*(b-a)\n print(\"\\nLimitante\")\n print(\"|E| <=\", abs(E.evalf()))\n\ndef calc_parabola_vertex(x1, y1, x2, y2, x3, y3):\n denom = (x1-x2) * (x1-x3) * (x2-x3);\n A = (x3 * (y2-y1) + x2 * (y1-y3) + x1 * (y3-y2)) / denom;\n B = (x3*x3 * (y1-y2) + x2*x2 * (y3-y1) + x1*x1 * (y2-y3)) / denom;\n C = (x2 * x3 * (x2-x3) * y1+x3 * x1 * (x3-x1) * y2+x1 * x2 * (x1-x2) * y3) / denom;\n return A,B,C\n\ndef simpson_tabela13(a,b,n,y):\n h = (b - a)/n\n xk = np.linspace(a,b,n+1)\n fx = 0\n for i in range(len(xk)):\n if i == 0 or i == len(xk)-1:\n fx += y[i]\n elif i % 2 == 0:\n fx += 2*y[i]\n else:\n fx += 4*y[i]\n deriv = (h/3)*fx\n print(\"O valor da integral \u00e9 aproximadamente\", 
deriv)\n\ndef simpson38(f,a,b,n):\n h = (b-a)/n\n xk = np.linspace(a,b,n+1)\n fx = 0\n for i in range(len(xk)):\n if i == 0 or i == len(xk)-1:\n fx += f.subs(x,xk[i])\n elif i % 3 == 0:\n fx += 2*f.subs(x,xk[i])\n else:\n fx += 3*f.subs(x,xk[i])\n deriv = (3/8)*h*fx\n exact = integrate(f, (x, a, b)).evalf()\n\n print(\"Pela Regra 1/3 de Simpson, temos que a integral\")\n pprint(Integral(f, (x, a, b)), use_unicode=True)\n print(\"\u00e9 aproximadamente\", deriv)\n\n print(\"\\nComo compara\u00e7\u00e3o, o valor exato da integral\")\n pprint(Integral(f, (x, a, b)), use_unicode=True)\n print(\"\u00e9\", exact)\n\n f = diff(f, x, 4)\n maior = abs(f.subs(x,a))\n if abs(f.subs(x,b)) > maior:\n maior = abs(f.subs(x,b))\n E = (h**4/80)*maior*(b-a)\n print(\"\\nLimitante\")\n print(\"|E| <=\", abs(E.evalf()))\n\ndef simpson_tabela38(a,b,n,y):\n h = (b - a)/n\n xk = np.linspace(a,b,n+1)\n fx = 0\n for i in range(len(xk)):\n if i == 0 or i == len(xk)-1:\n fx += y[i]\n elif i % 3 == 0:\n fx += 2*y[i]\n else:\n fx += 3*y[i]\n deriv = (3/8)*h*fx\n print(\"O valor da integral \u00e9 aproximadamente\", deriv)\n\ndef quadGauss(f, a, b, n):\n table = [[[0.5773502692, 1],[-0.5773502692, 1]],\n [[0.7745966692, 0.5555555556],[0, 0.8888888889],[-0.7745966692, 0.555555556]],\n [[0.8611363116, 0.3478548451],[0.3399810436, 0.6521451549],[-0.3399810436, 0.6521451549],[0.8611363116, 0.3478548451]],\n [[0.9061798459, 0.2369268850],[0.5384693101, 0.4786286705],[0, 0.5688888889],[-0.5384693101, 0.4786286705],[-0.9061798459, 0.2369268850]]]\n \n g = f.subs(x, ((1/2)*((b-a)*t+a+b)))\n expr = lambdify(t, g, \"numpy\")\n if n > 2:\n soma = 0\n for i in range(n):\n soma += table[n-2][i][1]*expr(table[n-2][i][0])\n result = ((b-a)/2)*soma\n else:\n soma = 0\n for i in range(n):\n soma += expr(table[n-2][i][0])\n result = ((b-a)/2)*soma\n print(\"O valor aproximado da integral\")\n pprint(Integral(f, (x, a, b)), use_unicode=True)\n print(\"\u00e9\", result)\n```\n\n## 1. 
Regra dos Trap\u00e9zios\n\n### 1.1 Regra dos trap\u00e9zios simples\n\nO procedimento que usaremos para calcular a integral definida via Regra dos trap\u00e9zios \u00e9 trapezio(f,a,b)\n\nExemplo:Calcule o valor aproximado da integral definida da fun\u00e7\u00e3o $f(x)=ln(x)+x$ entre 0,5 e\n1, usando a regra do trap\u00e9zios e determine um limitante superior.\n\nSolu\u00e7\u00e3o: inicialmente, definamos $f$ e os pontos a e b:\n\n\n```\ndef f(x): return log(x)+x\na = 0.5\nb = 1\n```\n\nUsando o procedimento citado anteriormente, temos que\n\n\n```\ntrapezio(f(x), a, b)\n```\n\nPara plotar o gr\u00e1fico de $f(x)$ e o trap\u00e9zio constru\u00eddo, usemos o comando:\n\n\n```\nfig, ax = plt.subplots()\nz = np.arange(a,b+0.001,0.001)\n\ny = lambdify(x, f(x), \"numpy\")\n\npontos = [[a,b],[y(a), y(b)]]\n\nax.fill_between([a,b],pontos[1], color=\"red\")\nax.plot(z,y(z), \"black\")\nax.plot(pontos[0],pontos[1])\n\nax.grid()\nplugins.connect(fig, plugins.MousePosition(fontsize=14))\n\nmpld3.display()\n```\n\n### 1.2 Regra do trap\u00e9zio generalizada\n\nNeste momento, o procedimento \u00e9 para calcular a integral definida pela regra dos trap\u00e9zios generalizada \u00e9 trapezio_gen(f,a,b,n)\n\nExemplo: Calcule o valor aproximado da integral definida de $x^{(1/2)}$ entre 1 e 4 usando a regra dos trap\u00e9zios generalizada para 6 subintervalos e determine um limitante para o erro:\n\nSolu\u00e7\u00e3o: De fato, de antem\u00e3o, definamos $f$, a, b e n:\n\n\n```\ndef f(x): return sqrt(x)\na = 1\nb = 4\nn = 6\n```\n\nFazendo uso do procedimento trapezio_gen, temos:\n\n\n```\ntrapezio_gen(f(x), a, b, n)\n```\n\nPor fim, o gr\u00e1fico de $f(x)$ com os trap\u00e9zios \u00e9 dado por:\n\n\n```\nfig, ax = plt.subplots()\nz = np.arange(a,b+0.001,0.001)\n\ny = lambdify(x, f(x), \"numpy\")\n\npontos = [[a,b],[y(a), y(b)]]\n\nax.fill_between([a,b],pontos[1], color=\"red\")\nax.plot(z,y(z), \"black\")\nax.plot(pontos[0],pontos[1])\n\nax.grid()\nplugins.connect(fig, plugins.MousePosition(fontsize=14))\n\nmpld3.display()\n```\n\n## 2. 
Regra de Simpson\n\n### 2.1 Regra $\\frac{1}{3}$ de Simpson\n\nEstudaremos duas situa\u00e7\u00f5es:\n\n(a) Quando f \u00e9 dada analiticamente.\n\nNeste caso, o procedimento \u00e9 simpson13(f,a,b,n)\n\nExemplo: Usando a regra $\\frac{1}{3}$ de Simpson para 4 subintervalos, calcule o valor aproximado da integral definida de $x*e^x+1$ no intervalo\n$[0,3]$ e determine um limitante superior para o erro.\n\nSolu\u00e7\u00e3o: Definamos $f(x)$, a, b e n:\n\n\n```\ndef f(x): return x*exp(x)+1\na = 0\nb = 3\nn = 4\n```\n\nAssim, a integral definida pela regra $\\frac{1}{3}$ de Simpson \u00e9\n\n\n```\nsimpson13(f(x), a, b, n)\n```\n\nPodemos exibir as par\u00e1bolas interpoladoras juntamente com $f(x)$:\n\n\n```\nxk = np.linspace(a,b,n+1)\nfx = []\ny = []\nfor j in range(n-1):\n for i in range(len(xk)):\n fx.append(f(x).subs(x,xk[i]))\n A,B,C = calc_parabola_vertex(xk[j],fx[j],xk[j+1],fx[j+1],xk[j+2],fx[j+2])\n y.append(A*x**2 + B*x + C)\n\nfig, ax = plt.subplots()\nz = []\nw = []\nfor i in range(n-1):\n z.append(np.arange(xk[i],xk[i+2]+0.001,0.001))\n\n w.append(lambdify(x, y[i], \"numpy\"))\n ax.plot(z[i],w[i](z[i]), label=\"Parabola \"+str(i+1))\n\nc = np.arange(a,b+0.001,0.001)\nm = []\nfor i in range(len(c)):\n m.append(f(x).subs(x,c[i]))\nax.plot(c,m, label=\"f(x)\")\nax.legend()\nax.grid()\nplugins.connect(fig, plugins.MousePosition(fontsize=14))\n\nmpld3.display()\n```\n\n(b) Quando f \u00e9 dada por um conjunto de pontos discretos (tabela):\n\nNeste caso, usaremos: simpson_tabela13(a,b,n,y)\n\nExemplo: Considere $f(x)$ dada pela tabela:\n\n| x | 0 | 1 | 2 | 3 | 4 | 5 | 6 |\n|------|------|------|------|------|------|------|------|\n| f(x) | 0.21 | 0.32 | 0.42 | 0.51 | 0.82 | 0.91 | 1.12 |\n\nDo exposto, calcule a integral de $f(x)$ entre 0 e 6.\n\nSolu\u00e7\u00e3o: Primeiramente, declaramos os valores de a, b, n e a tabela citada:\n\n\n```\na = 0\nb = 6\nn = 6\ny = [0.21,0.32,0.42,0.51,0.82,0.91,1.12]\n```\n\nDeste modo, temos que o valor da integral \u00e9 aproxidamente:\n\n\n```\nsimpson_tabela13(a,b,n,y)\n```\n\n O valor da integral \u00e9 aproximadamente 3.59\n\n\n### 2.2 Regra $\\frac{3}{8}$ de Simpson\n\nComo na regra $\\frac{1}{3}$ de Simpson, estudaremos duas situa\u00e7\u00f5es:\n\n(a) Quando f \u00e9 dada analiticamente.\n\nNeste caso, o procedimento \u00e9 para calcular a integral definida \u00e9 simpson38(f,a,b,n)\n\nExemplo: Calcule o valor aproximado da integral de $ln(x+9)$ entre 1 e 7 usando a regra $\\frac{3}{8} de Simpson e determine um limitante superior \npara o erro para 6 subintervalos:\n\nSolu\u00e7\u00e3o: Declarando os valores de $f(x)$, a, b e n, temos:\n\n\n```\ndef f(x): return log(x+9)\na = 1\nb = 7\nn = 6\n```\n\nAssim, empregando comando para o c\u00e1lculo da integral, resulta:\n\n\n```\nsimpson38(f(x), a,b,n)\n```\n\n(b) Quando f \u00e9 dada por um conjunto de pontos discretos (tabela):\n\nNeste caso, usaremos: simpson_tabela38(a,b,n,y)\n\nExemplo: Considere $f(x)$ dada pela tabela:\n\n| x | 0 | 1 | 2 | 3 | 4 | 5 | 6 |\n|------|------|------|------|------|------|------|------|\n| f(x) | 0.21 | 0.32 | 0.42 | 0.51 | 0.82 | 0.91 | 1.12 |\n\nDo exposto, calcule a integral de $f(x)$ entre 0 e 6.\n\n\nSolu\u00e7\u00e3o: Primeiramente, declaramos os valores de a, b, n e a tabela citada:\n\n\n```\na = 0\nb = 6\nn = 6\ny = [0.21,0.32,0.42,0.51,0.82,0.91,1.12]\n```\n\nDeste modo, temos que o valor da integral \u00e9 aproxidamente:\n\n\n```\nsimpson_tabela38(a,b,n,y)\n```\n\n## 3. 
Quadratura de Gauss\n\nO procedimento aqui \u00e9 quadGauss(f,a,b,n)\n\nonde $[a,b]$ \u00e9 um intervalo arbitr\u00e1rio e n = 2,3,4,5 (veja p\u00e1ginas 3-5 e exerc\u00edcio 7 da lista\n5).\n\nExemplo: Obtenha uma aproxima\u00e7\u00e3o para a integral da fun\u00e7\u00e3o $f(x)=e^{-x^2}$ de 1 a 1.5,\nutilizando a quadratura de Gauss com n = 5.\n\nSolu\u00e7\u00e3o: Primeiramente, definamos $f(x)$:\n\n\n```\ndef f(x): return exp(-x**2)\na = 1\nb = 1.5\nn = 5\n```\n\nLogo, a quadratura de Gauss nos fornece:\n\n\n```\nquadGauss(f(x), a, b, n)\n```\n", "meta": {"hexsha": "6ad8343dfa94549753a5b076a4700b764008a3dd", "size": 25404, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Lista_5.ipynb", "max_stars_repo_name": "EnzoItaliano/calculoNumericoEmPython", "max_stars_repo_head_hexsha": "be3161b823955620be71e0f94a3421288fd28ef0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-12-28T21:23:00.000Z", "max_stars_repo_stars_event_max_datetime": "2019-12-28T21:23:00.000Z", "max_issues_repo_path": "Lista_5.ipynb", "max_issues_repo_name": "EnzoItaliano/calculoNumericoEmPython", "max_issues_repo_head_hexsha": "be3161b823955620be71e0f94a3421288fd28ef0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Lista_5.ipynb", "max_forks_repo_name": "EnzoItaliano/calculoNumericoEmPython", "max_forks_repo_head_hexsha": "be3161b823955620be71e0f94a3421288fd28ef0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 28.7375565611, "max_line_length": 241, "alphanum_fraction": 0.4163123917, "converted": true, "num_tokens": 4172, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9343951607140233, "lm_q2_score": 0.9032942125614059, "lm_q1q2_score": 0.8440337409183619}} {"text": "# Examples for use of `Eq`\n\n\n```python\nfrom symbolic_equation import Eq\n```\n\n## Solving a simple system of equations\n\nhttps://pythonforundergradengineers.com/sympy-expressions-and-equations.html#Defining-Equations-in-Sympy\n\n\n```python\nfrom sympy import symbols\n```\n\n\n```python\nx, y = symbols('x y')\n```\n\n\n```python\neq1 = Eq(2*x - y - 1, tag='I')\neq1\n```\n\n\n\n\n\\begin{equation}\n 2 x - y - 1 = 0\n\\tag{I}\\end{equation}\n\n\n\n\n\n```python\neq2 = Eq(x + y - 5, tag='II')\neq2\n```\n\n\n\n\n\\begin{equation}\n x + y - 5 = 0\n\\tag{II}\\end{equation}\n\n\n\n\n\n```python\neq_y = (\n (eq1 - 2 * eq2).tag(\"I - 2 II\")\n .transform(lambda eq: eq - 9)\n .transform(lambda eq: eq / (-3)).tag('y')\n)\neq_y\n```\n\n\n\n\n\\begin{align}\n 9 - 3 y &= 0\\tag{I - 2 II}\\\\\n - 3 y &= -9\\\\\n y &= 3\n\\tag{y}\\end{align}\n\n\n\n\n\n```python\neq_x = (\n eq1.apply_to_lhs('subs', eq_y.as_dict).reset().tag(r'$y$ in I')\n .transform(lambda eq: eq / 2)\n .transform(lambda eq: eq + 2).tag('x')\n)\neq_x\n```\n\n\n\n\n\\begin{align}\n 2 x - 4 &= 0\\tag{$y$ in I}\\\\\n x - 2 &= 0\\\\\n x &= 2\n\\tag{x}\\end{align}\n\n\n\n\nAlternatively, we could let `sympy` solve the equation directly:\n\n\n```python\nfrom sympy import solve\n```\n\n\n```python\nsol = solve((eq1, eq2),(x, y))\n```\n\n\n```python\nsol\n```\n\n\n\n\n {x: 2, y: 3}\n\n\n\n## Proof of Euler's equation (to 6th order)\n\nhttps://austinrochford.com/posts/2014-02-05-eulers-formula-sympy.html\n\n\n```python\nfrom sympy import exp, sin, cos, I\n```\n\n\n```python\n\u03b8 = symbols('theta', real=True)\n```\n\n\n```python\nn = 6\n```\n\n\n```python\neq_euler = (\n Eq(exp(I * \u03b8), cos(\u03b8) + I * sin(\u03b8))\n .apply('subs', {cos(\u03b8): cos(\u03b8).series(n=n)})\n .apply('subs', {sin(\u03b8): sin(\u03b8).series(n=n)})\n .apply_to_rhs('expand').amend(previous_lines=2)\n .apply_to_lhs('series', n=n)\n)\neq_euler\n```\n\n\n\n\n\\begin{align}\n e^{i \\theta} &= i \\sin{\\left(\\theta \\right)} + \\cos{\\left(\\theta \\right)}\\\\\n &= 1 + i \\theta - \\frac{\\theta^{2}}{2} - \\frac{i \\theta^{3}}{6} + \\frac{\\theta^{4}}{24} + \\frac{i \\theta^{5}}{120} + O\\left(\\theta^{6}\\right)\\\\\n 1 + i \\theta - \\frac{\\theta^{2}}{2} - \\frac{i \\theta^{3}}{6} + \\frac{\\theta^{4}}{24} + \\frac{i \\theta^{5}}{120} + O\\left(\\theta^{6}\\right) &= 1 + i \\theta - \\frac{\\theta^{2}}{2} - \\frac{i \\theta^{3}}{6} + \\frac{\\theta^{4}}{24} + \\frac{i \\theta^{5}}{120} + O\\left(\\theta^{6}\\right)\n\\end{align}\n\n\n\n\n\n```python\neq_euler.lhs - eq_euler.rhs\n```\n\n\n\n\n$\\displaystyle O\\left(\\theta^{6}\\right)$\n\n\n", "meta": {"hexsha": "ede0ddc332bea568c671b8dca0af675b30f8811d", "size": 10305, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "examples.ipynb", "max_stars_repo_name": "goerz/symbolic_equation", "max_stars_repo_head_hexsha": "b0677b216a220adba65a0b2dc14f27cd206a998f", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-12-10T15:45:53.000Z", "max_stars_repo_stars_event_max_datetime": "2019-12-10T15:45:53.000Z", "max_issues_repo_path": "examples.ipynb", "max_issues_repo_name": "goerz/symbolic_equation", "max_issues_repo_head_hexsha": "b0677b216a220adba65a0b2dc14f27cd206a998f", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, 
"max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "examples.ipynb", "max_forks_repo_name": "goerz/symbolic_equation", "max_forks_repo_head_hexsha": "b0677b216a220adba65a0b2dc14f27cd206a998f", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 22.402173913, "max_line_length": 318, "alphanum_fraction": 0.4670548278, "converted": true, "num_tokens": 917, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9343951552333005, "lm_q2_score": 0.9032942067038784, "lm_q1q2_score": 0.8440337304944115}} {"text": "# Linear Programming: Simplex Method\n\n## Karush-Kuhn-Tucker Conditions\n\nWe consider the linear program\n\n\\begin{equation}\n\\min \\vec{c}^\\mathsf{T}\\vec x\\\\\n\\textrm{subject to} \\begin{cases}\n\\mathbf{A}\\vec x=\\vec b\\\\\n\\vec x\\ge\\vec 0\n\\end{cases}\n\\end{equation}\n\nand define the Lagrangian function\n\n\\begin{equation}\n\\mathcal L\\left(\\vec x,\\vec \\lambda,\\vec s\\right) = \\vec{c}^\\mathsf{T}\\vec x - \\vec \\lambda^\\mathsf{T}\\left(\\mathbf A\\vec x-\\vec b\\right) - \\vec s^\\mathsf{T}\\vec x\n\\end{equation}\n\nwhere $\\vec \\lambda$ are the multipliers for the equality constraints and $\\vec s$ are the multipliers for the bound constraints.\n\nThe Karush-Kuhn-Tucker condition states that to find the first-order necessary conditions for $\\vec x^\\star$ to be a solution of the problem, their exist $\\vec \\lambda^\\star$ and $\\vec s^\\star$ such that\n\n\\begin{align}\n\\mathbf A^\\mathsf{T}\\vec \\lambda^\\star+\\vec s^\\star&=\\vec c\\\\\n\\mathbf A\\vec x^\\star&=\\vec b\\\\\n\\vec x^\\star&\\ge\\vec 0\\\\\n\\vec s^\\star&\\ge \\vec 0\\\\\n\\left(\\vec x^\\star\\right)^\\mathsf{T}\\vec s^\\star&=0\n\\end{align}\n\nThe first eqation states that the gradient of the Lagrangian with respect to $\\vec x$ must be zero and the last equation that at least $x_i$ or $s_i$ must be zero for each $i=1,2,\\dots,n$.\n\nIt can be shown that these conditions are also sufficient.\n\n## The Simplex Method Explained\n\nAs mentioned in the previous lecture, all iterates of the simplex method are basic feasible points and therefore vertices of the feasible polytope. Most steps consists of a move from one vertex to an adjacent one. On most steps (but not all), the value of the objective function $\\vec{c}^\\mathsf{T}\\vec x$ is decreased. Another type of step occurs when the problem is unbounded: the step is an edge along which the objective funtion is reduced, and along which we can move infinitely far without reaching a vertex.\n\nThe major issue at each simplex iteration is to decide which index to remove from the basic index set $\\mathcal B$. Unless the step is a direction of unboundness, a single index must be removed from $\\mathcal B$ and replaced by another from outside $\\mathcal B$. We can gain some insight into how this decision is made by looking again at the KKT conditions.\n\nFirst, define the nonbasic index set $\\mathcal N = \\left\\{1,2,\\dots,n\\right\\} \\setminus \\mathcal B$. Just as $\\mathbf B$ is the basic matrix, whose columns are $\\mathbf A_i$ for $i\\in\\mathcal B$, we use $\\mathbf N$ to denote the nonbasic matrix $\\mathbf N=\\left[\\mathbf A_i\\right]_{i\\in\\mathcal N}$. 
We also partition the vectors $\\vec x$, $\\vec s$ and $\\vec c$ according to the index sets $\\mathcal B$ and $\\mathcal N$, using the notation\n\n\\begin{align}\n\\vec x_\\mathbf B=\\left[\\vec x_i \\right]_{i\\in\\mathcal B},&\\qquad\\vec x_\\mathbf N=\\left[\\vec x_i \\right]_{i\\in\\mathcal N}\\\\\n\\vec s_\\mathbf B=\\left[\\vec s_i \\right]_{i\\in\\mathcal B},&\\qquad\\vec s_\\mathbf N=\\left[\\vec s_i \\right]_{i\\in\\mathcal N}\\\\\n\\vec c_\\mathbf B=\\left[\\vec c_i \\right]_{i\\in\\mathcal B},&\\qquad\\vec c_\\mathbf N=\\left[\\vec c_i \\right]_{i\\in\\mathcal N}\n\\end{align}\n\nFrom the second KKT coniditions, we have that\n\n\\begin{equation}\n\\mathbf A \\vec x= \\mathbf B \\vec x_\\mathbf B + \\mathbf N \\vec x_\\mathbf N=\\vec b\\,.\n\\end{equation}\n\nThe _primal_ variable $\\vec x$ for this simplex iterate is defined as\n\n\\begin{equation}\n\\vec x_\\mathbf B = \\mathbf B^{-1}\\vec b,\\qquad \\vec x_\\mathbf N=\\vec 0\\,.\n\\end{equation}\n\nSince we are dealing only with basic feasible points, we know that $\\mathbf B$ is nonsingular and that $\\vec x_\\mathbf B\\ge\\vec0$, so this choice of $\\vec x$ satisfies two of the KKT coniditions.\n\nWe choose $\\vec s$ to satisfy the complimentary condition (the last one) by setting $\\vec s_\\mathbf B=\\vec 0$. The remaining components $\\vec \\lambda$ and $\\vec s_\\mathbf N$ can be found by partitioning this condition into $\\vec c_\\mathbf B$ and $\\vec c_\\mathbf N$ components and using $\\vec s_\\mathbf B=\\vec 0$ to obtain\n\n\\begin{equation}\n\\mathbf B^\\mathsf{T}\\vec \\lambda=\\vec c_\\mathbf B,\\qquad \\vec N^\\mathsf{T}\\vec\\lambda+\\vec s_\\mathbf N = \\vec c_\\mathbf N\\,.\n\\end{equation}\n\nSince $\\mathbf B$ is square and nonsingular, the first equation uniquely defines $\\vec \\lambda$ as\n\n\\begin{equation}\n\\vec \\lambda = \\left(\\mathbf B^\\mathsf{T}\\right)^{-1}\\vec c_\\mathbf B\\,.\n\\end{equation}\n\nThe second equation implies a value for $\\vec s_\\mathbf N$:\n\n\\begin{equation}\n\\vec s_\\mathbf N = \\vec c_\\mathbf N - \\mathbf N^\\mathsf{T}\\vec \\lambda=\\vec c_\\mathbf N -\\left(\\mathbf B ^{-1}\\mathbf N\\right)^\\mathsf{T}\\vec c_\\mathbf B\\,.\n\\end{equation}\n\nComputation of the vector $\\vec s_\\mathbf N$ is often referred to as _pricing_. The components of $\\vec s_\\mathbf N$ are often called the _reduced costs_ of the nonbasic variables $\\vec x_\\mathbf N$.\n\nThe only KKT condition that we have not enforced explicitly is the nonnegativity condition $\\vec s \\ge \\vec 0$. The basic components $\\vec s_\\mathbf B$ certainly satisfy this condition, by our choice $\\vec s_\\mathbf B = 0$. If the vector $\\vec s_\\mathbf N$ also satisfies $\\vec s_\\mathbf N \\ge \\vec 0$, we have found an optimal\nvector triple $\\left(\\vec x^\\star, \\vec \\lambda^\\star, \\vec s^\\star\\right)$, so the algorithm can terminate and declare success. Usually, however, one or more of the components of $\\vec s_\\mathbf N$ are negative. The new index to enter the basis index set $\\mathcal B$ is chosen to be one of the indices $q \\in \\mathcal N$ for which $s_q < 0$. As we show below, the objective $\\vec{c}^\\mathsf{T}\\vec x$ will decrease when we allow $x_q$ to become positive if and only if \n\n1. $s_q < 0$ and\n2. 
it is possible to increase $x_q$ away from zero while maintaining feasibility of $\\vec x$.\n\nOur procedure for altering $\\mathcal B$ and changing $\\vec x$ and $\\vec s$ can be described accordingly as follows:\n\n- allow $x_q$ to increase from zero during the next step;\n- fix all other components of $\\vec x_\\mathbf N$ at zero, and figure out the effect of increasing $x_q$ on the current basic vector $\\vec x_\\mathbf B$, given that we want to stay feasible with respect to the equality constraints $\\mathbf{A}\\vec x=\\vec b$;\n- keep increasing $x_q$ until one of the components of $\\vec x_\\mathbf B$ ($x_p$, say) is driven to zero, or determining that no such component exists (the unbounded case);\n- remove index $p$ (known as the leaving index) from $\\mathcal B$ and replace it with the entering index $q$.\n\nThis process of selecting entering and leaving indices, and performing the algebraic operations necessary to keep track of the values of the variables $\\vec x$, $\\vec \\lambda$, and $\\vec s$, is sometimes known as _pivoting_.\n\nWe now formalize the pivoting procedure in algebraic terms. Since both the new iterate $\\vec x^+$ and the current iterate $\\vec x$ should satisfy $\\mathbf A\\vec x=\\vec b$, and since $\\vec x_\\mathbf N=\\vec 0$ and $\\vec x_i^{+}=0$ for $i\\in\\mathcal N\\setminus\\left\\{q\\right\\}$ we have\n\n\\begin{equation}\n\\mathbf A\\vec x^+=\\mathbf B\\vec x_\\mathbf B^+ +\\vec A_q x_q^+=\\vec b=\\mathbf B\\vec x_\\mathbf B=\\mathbf A\\vec x\\,.\n\\end{equation}\n\nBy multiplying this expression by $\\mathbf B^{-1}$ and rearranging, we obtain\n\n\\begin{equation}\n\\vec x_\\mathbf B^+=\\vec x_\\mathbf B-\\mathbf B^{-1}\\vec A_q x_q^+\n\\end{equation}\n\nGeometrically speaking, we move along an edge of the feasible polytope that decreases $\\vec{c}^\\mathsf{T}\\vec x$. We continue to move along this edge until a new vertex is encountered. At this vertex, a new constraint $x_p \\ge 0$ must have become active, that is, one of the components $x_p$, $p \\in \\mathbf B$, has decreased to zero. We then remove this index $p$ from the basis index set $\\mathcal B$ and replace it by $q$.\n\nIt is possible that we can increase $x_q^+$ to $\\infty$ without ever encountering a new vertex. In other words, the constraint $x_\\mathbf B^+=x_\\mathbf B-\\mathbf B^{-1}\\vec A_q\\vec x_q^+\\ge 0$ holds for all positive values of $x_q+$. When this happens, the linear program is unbounded; the simplex method has identified a\nray that lies entirely within the feasible polytope along which the objective $\\vec{c}^\\mathsf{T}\\vec x$ decreases to $\u2212\\infty$.\n\n\n## One Step of Simplex Algorithm\n\nGiven $\\mathcal B$, $\\mathcal N$, $x_\\mathbf B=\\mathbf B^{-1}\\vec b\\ge 0$,$\\vec x_\\mathbf N=\\vec 0$;\n\n1. Solve $\\vec \\lambda = \\left(\\mathbf B^\\mathsf{T}\\right)^{-1}\\vec c_\\mathbf B$, and compute $\\vec s_\\mathbf N =\\vec c_\\mathbf N - \\mathbf N^\\mathsf{T} \\vec \\lambda$ (pricing);\n2. If $\\vec s_\\mathbf N \\ge \\vec 0$ stop (optimal point found);\n3. Select $q\\in\\mathcal N$ with $s_q<0$ and solve $\\vec d=\\mathbf B^{-1}\\vec A_q$;\n4. If $\\vec d\\le\\vec 0$ stop (problem is unbounded);\n5. Calculate $x_q^+=\\min_{i|d_i>0}\\frac{\\left(x_\\mathbf B\\right)_i}{d_i}$, and use $p$ to denote the minimizing $i$;\n6. Update $\\vec x_\\mathbf B^+=\\vec x_\\mathbf B-x_q^+\\vec d$;\n7. 
Change $\\mathcal B$ by adding $q$ and removing the basic variable corresponding to column $p$ of $\\mathbf B$.\n\nWe illustrate this procedure with a simple example.\n\nConsider the problem\n\n\\begin{equation}\n\\min -4x_1-2x_2\\\\\n\\textrm{subject to}\n\\begin{cases}\nx_1+x_2+x_3&=5\\\\\n2x_1+\\frac{1}{2}x_2+x_4&=8\\\\\n\\vec x&\\ge \\vec 0\n\\end{cases}\n\\end{equation}\n\nSuppose we start with the basis index set $\\mathcal B=\\left\\{3,4\\right\\}$, for which we have\n\n\n```julia\nB = [1 0;0 1]\nN = [1 1;2 0.5]\nb = [5;8]\ncB = [0;0]\ncN = [-4;-2]\nxB = inv(B)*b\n\u03bb = inv(transpose(B))*cB\nsN = cN - transpose(N)*\u03bb\n@show xB \u03bb sN;\n```\n\n\\begin{equation}\n\\vec x_\\mathbf B =\n\\begin{pmatrix}\nx_3\\\\\nx_4\n\\end{pmatrix}\n= \n\\begin{pmatrix}\n5\\\\\n8\n\\end{pmatrix}\n\\,\\qquad\\vec\\lambda = \n\\begin{pmatrix}\n0\\\\\n0\n\\end{pmatrix}\n\\,\\qquad\\vec s_\\mathbf N =\n\\begin{pmatrix}\ns_1\\\\\ns_2\n\\end{pmatrix}\n=\n\\begin{pmatrix}\n-4\\\\\n-2\n\\end{pmatrix}\n\\end{equation}\n\nand an objective value of $\\vec{c}^\\mathsf{T}\\vec x=0$. Since both elements of $\\vec s_\\mathbf N$ are negative, we could choose either 1 or 2 to be the entering variable. Suppose we choose $q=1$. We obtain\n\n\\begin{equation}\n\\vec d = \\begin{pmatrix}\n1\\\\\n2\n\\end{pmatrix}\\,,\n\\end{equation}\n\nso we cannot (yet) conclude that the problem is unbounded. By performing the ratio calculation, we find that $p=2$ (corresponding to index 4) and $x_1^+=4$.\n\n\n```julia\nq = 1\nAq = [1;2]\nd = inv(B)*Aq\nratio = xB./d\nxq = minimum(ratio)\nxB -= d * xq\n@show d ratio xq xB;\n```\n\nWe update the basic and nonbasic index sets to $\\mathcal B=\\left\\{3,1\\right\\}$ and $\\mathcal N=\\left\\{4,2\\right\\}$, and move to the next iteration.\n\nAt the second iteration, we have\n\n\n```julia\nB = [1 1;0 2]\nN = [0 1;1 0.5]\ncB = [0;-4]\ncN = [0;-2]\nxB = inv(B)*b\n\u03bb = inv(transpose(B))*cB\nsN = cN - transpose(N)*\u03bb\n@show xB \u03bb sN;\n```\n\n\\begin{equation}\n\\vec x_\\mathbf B =\n\\begin{pmatrix}\nx_3\\\\\nx_1\n\\end{pmatrix}\n= \n\\begin{pmatrix}\n1\\\\\n4\n\\end{pmatrix}\n\\,\\qquad\\vec\\lambda = \n\\begin{pmatrix}\n0\\\\\n-2\n\\end{pmatrix}\n\\,\\qquad\\vec s_\\mathbf N =\n\\begin{pmatrix}\ns_4\\\\\ns_2\n\\end{pmatrix}\n=\n\\begin{pmatrix}\n2\\\\\n-1\n\\end{pmatrix}\n\\end{equation}\n\nwith an objective value of -16. We see that $\\vec s_\\mathbf N$ has one negative component, corresponding to the index $q=2$, se we select this index to enter the basis. We obtain\n\n\n```julia\nq = 2\nAq = [1;0.5]\nd = inv(B)*Aq\nratio = xB./d\nxq = minimum(ratio)\nxB -= d * xq\n@show d ratio xq xB;\n```\n\n\\begin{equation}\n\\vec d = \\begin{pmatrix}\n\\frac{3}{4}\\\\\n\\frac{1}{4}\n\\end{pmatrix}\\,,\n\\end{equation}\n\nso again we do not detect unboundedness. Continuing, we find that the minimum value of $x_2^+$ is $\\frac{4}{3}$, and that $p=1$, which indicates that index 3 will leave the basic index set $\\mathcal B$. 
We update the index sets to $\\mathcal B=\\left\\{2,1\\right\\}$ and $\\mathcal N=\\left\\{4,3\\right\\}$ and continue.\n\nAt the start of the third iteration, we have\n\n\n```julia\nB = [1 1;0.5 2]\nN = [0 1;1 0]\ncB = [-2;-4]\ncN = [0;0]\nxB = inv(B)*b\n\u03bb = inv(transpose(B))*cB\nsN = cN - transpose(N)*\u03bb\n@show xB \u03bb sN;\n```\n\n\\begin{equation}\n\\vec x_\\mathbf B =\n\\begin{pmatrix}\nx_2\\\\\nx_1\n\\end{pmatrix}\n= \n\\begin{pmatrix}\n\\frac{4}{3}\\\\\n\\frac{11}{3}\n\\end{pmatrix}\n\\,\\qquad\\vec\\lambda = \n\\begin{pmatrix}\n-\\frac{4}{3}\\\\\n-\\frac{4}{3}\n\\end{pmatrix}\n\\,\\qquad\\vec s_\\mathbf N =\n\\begin{pmatrix}\ns_4\\\\\ns_3\n\\end{pmatrix}\n=\n\\begin{pmatrix}\n\\frac{4}{3}\\\\\n\\frac{4}{3}\n\\end{pmatrix}\n\\end{equation}\n\nwith an objective value of $-\\frac{52}{3}$. We see that $\\vec s_\\mathbf N\\ge\\vec 0$, so the optimality test is satisfied, and we terminate.\n\nWe need to flesh out this procedure with specifics of three important aspects of the implementation:\n\n- Linear algebra issues\u2014maintaining an LU factorization of $\\mathbf B$ that can be used to solve for $\\vec \\lambda$ and $\\vec d$.\n- Selection of the entering index $q$ from among the negative components of $\\vec s_\\mathbf N$. (In general, there are many such components.)\n- Handling of degenerate bases and degenerate steps, in which it is not possible to choose a positive value of $x_q$ without violating feasibility.\n\nProper handling of these issues is crucial to the efficiency of a simplex implementation. We will use a software package handling these details.\n\n## JuMP\n\n\n```julia\n#using Pkg\n#pkg\"add JuMP GLPK\"\n\nusing JuMP\nusing GLPK\n```\n\n\n```julia\nmodel = Model(with_optimizer(GLPK.Optimizer, method = GLPK.SIMPLEX))\n@variable(model, 0 <= x1)\n@variable(model, 0 <= x2)\n@objective(model, Min, -4*x1 -2*x2)\n@constraint(model, con1, x1 + x2 <= 5)\n@constraint(model, con2, 2*x1 + 0.5*x2 <= 8)\noptimize!(model)\n```\n\n\n```julia\ntermination_status(model)\n```\n\n\n```julia\nprimal_status(model)\n```\n\n\n```julia\nobjective_value(model)\n```\n\n\n```julia\nvalue(x1)\n```\n\n\n```julia\nvalue(x2)\n```\n\n\n```julia\nusing Plots\nusing LaTeXStrings\n\nx = -1:5\nplot(x, 5 .- x, linestyle=:dash, label=L\"x_1+x_2=5\")\nplot!(x, (8 .- 2 .* x) ./ 0.5, linestyle=:dash, label=L\"2x_1+0.5x_2=8\")\nplot!([0,4,11/3,0,0],[0,0,4/3,5,0], linewidth=2, label=\"constraints\")\nplot!(x, -4 .* x ./ 2, label=L\"f\\left(x_1,x_2\\right)=-4x_1-2x_2=0\")\nplot!(x, (-16 .+ 4 .* x) ./ -2, label=L\"f\\left(x_1,x_2\\right)=-4x_1-2x_2=-16\")\nplot!(x, (-52/3 .+ 4 .* x) ./ -2, label=L\"f\\left(x_1,x_2\\right)=-4x_1-2x_2=-52/3\")\n```\n\n## Where does the Simplex Method fit?\n\nIn linear programming, as in all optimization problems in which inequality constraints are present, the fundamental task of the algorithm is to determine which of these constraints are active at the solution and which are inactive. The simplex method belongs to a general class of algorithms for constrained optimization known as _active set methods_, which explicitly maintain estimates of the active and inactive index sets that are updated at each step of the algorithm. (At each iteration, the basis $\\mathcal B$ is our current estimate of the inactive set, that is, the set of indices $i$ for which we suspect that $x_i > 0$ at the\nsolution of the linear program.) 
Like most active set methods, the simplex method makes only modest changes to these index sets at each step; a single index is exchanged between $\\mathcal B$ and $\\mathcal N$.\n\nOne undesirable feature of the simplex method attracted attention from its earliest days. Though highly efficient on almost all practical problems (the method generally requires at most $2m$ to $3m$ iterations, where $m$ is the row dimension of the constraint matrix), there are pathological problems on which the algorithm performs very poorly. The complexity of the simplex method is _exponential_: roughly speaking, its running time may be an exponential function of the dimension of\nthe problem. For many years, theoreticians searched for a linear programming algorithm that has polynomial complexity, that is, an algorithm in which the running time is bounded by a polynomial function of the amount of storage required to define the problem.\n\nIn the mid-1980s, Karmarkar described a polynomial algorithm that approaches the solution through the interior of the feasible polytope rather than working its way around the boundary as the simplex method does.\n\n## Interior Point Methods\n\nIn the 1980s it was discovered that many large linear programs could be solved efficiently by using formulations and algorithms from nonlinear programming and nonlinear equations. One characteristic of these methods was that they required all iterates to satisfy the inequality constraints in the problem _strictly_, so they became known as interior-point methods. By the early 1990s, a subclass of interior-point methods known as primal-dual methods had distinguished themselves as the most efficient practical approaches, and proved to be strong competitors to the simplex method on large problems.\n\nInterior-point methods arose from the search for algorithms with better theoretical properties than the simplex method. The simplex method can be inefficient on certain pathological problems. Roughly speaking, the time required to solve a linear program may be exponential in the size of the problem, as measured by the number\nof unknowns and the amount of storage needed for the problem data. For almost all practical problems, the simplex method is much more efficient than this bound would suggest, but its poor worst-case complexity motivated the development of new algorithms with better guaranteed performance.\n\nInterior-point methods share common features that distinguish them from the simplex method. Each interior-point iteration is expensive to compute and can make significant progress towards the solution, while the simplex method usually requires a larger number of inexpensive iterations. Geometrically speaking, the simplex method works its way around the boundary of the feasible polytope, testing a sequence of vertices in turn until it finds the optimal one. Interior-point methods approach the boundary of the feasible set only in the limit. 
They may approach the solution either from the interior or the exterior of the feasible region, but they never actually lie on the boundary of this region.\n\n\n```julia\nmodel = Model(with_optimizer(GLPK.Optimizer, method = GLPK.INTERIOR))\n@variable(model, 0 <= x1)\n@variable(model, 0 <= x2)\n@objective(model, Min, -4*x1 -2*x2)\n@constraint(model, con1, x1 + x2 <= 5)\n@constraint(model, con2, 2*x1 + 0.5*x2 <= 8)\noptimize!(model)\n```\n\n\n```julia\ntermination_status(model)\n```\n\n\n```julia\nprimal_status(model)\n```\n\n\n```julia\nobjective_value(model)\n```\n\n\n```julia\nvalue(x1)\n```\n\n\n```julia\nvalue(x2)\n```\n", "meta": {"hexsha": "4b70b47d5f6cfb8861e5e01c12d62222a7ae526d", "size": 25102, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Lectures/Lecture 6.ipynb", "max_stars_repo_name": "JuliaTagBot/ES313.jl", "max_stars_repo_head_hexsha": "3601743ca05bdb2562a26efd8b809c1a4f78c7b1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Lectures/Lecture 6.ipynb", "max_issues_repo_name": "JuliaTagBot/ES313.jl", "max_issues_repo_head_hexsha": "3601743ca05bdb2562a26efd8b809c1a4f78c7b1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Lectures/Lecture 6.ipynb", "max_forks_repo_name": "JuliaTagBot/ES313.jl", "max_forks_repo_head_hexsha": "3601743ca05bdb2562a26efd8b809c1a4f78c7b1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.6976744186, "max_line_length": 706, "alphanum_fraction": 0.5931399888, "converted": true, "num_tokens": 5572, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.926303724190573, "lm_q2_score": 0.91117969855511, "lm_q1q2_score": 0.8440291481784421}} {"text": "# Module 2\n## Module 2.1: Unit 3: Flow control\n### Conditionals\n\n#### Example: Heaviside function \n\nConsider an implementation of the [Heaviside step function](http://mathworld.wolfram.com/HeavisideStepFunction.html)\n\n$$\n\\Theta(x) = \\begin{cases}\n 0 & x < 0 \\\\\n \\frac{1}{2} & x = 0\\\\\n 1 & x > 0\n \\end{cases}\n$$\n\n\n\n```python\nx = float(input(\"x <- \"))\n\ntheta = None\nif x < 0:\n theta = 0\nelif x == 0:\n theta = 0.5\nelse:\n theta = 1\n \nprint(\"theta({0}) = {1}\".format(x, theta))\n```\n\n x <- 2\n theta(2.0) = 1\n\n\n#### Equality operator `==` \n\n\n```python\n3 == 3.0\n```\n\n\n\n\n True\n\n\n\n\n```python\n\"A\" == \"a\"\n```\n\n\n\n\n False\n\n\n\n\n```python\n0 == 0\n```\n\n\n\n\n True\n\n\n\n\n```python\n\"42\" == 42\n```\n\n\n\n\n False\n\n\n\n\n```python\n\n```\n\n#### Comparison operators \n\n\n```python\nlow = -2.3\nhigh = 1000\n```\n\n\n```python\nlow < high\n```\n\n\n\n\n True\n\n\n\n\n```python\nlow >= high\n```\n\n\n\n\n False\n\n\n\n\n```python\nlow == high\n```\n\n\n\n\n False\n\n\n\n\n```python\nmiddle = 0\n```\n\n\n```python\nlow < middle < high\n```\n\n\n\n\n True\n\n\n\n\n```python\nlow < middle > high\n```\n\n\n\n\n False\n\n\n\n\n```python\nlow < high > middle\n```\n\n\n\n\n True\n\n\n\n\n```python\nlow < middle <= 0 < high\n```\n\n\n\n\n True\n\n\n\nWorks also for strings\n\n\n```python\nfirst = \"aardvark\"\nlast = \"zebra\"\n```\n\n\n```python\nfirst < last\n```\n\n\n\n\n True\n\n\n\n\n```python\nfirst == \"Aardvark\"\n```\n\n\n\n\n False\n\n\n\n\n```python\nfirst > \"Aardvark\"\n```\n\n\n\n\n True\n\n\n\n... but why and how?\n\n\n```python\n\"a\" > \"A\"\n```\n\n\n\n\n True\n\n\n\n\n```python\nord(\"a\")\n```\n\n\n\n\n 97\n\n\n\n\n```python\nord(\"A\")\n```\n\n\n\n\n 65\n\n\n\n\n```python\nchr(65)\n```\n\n\n\n\n 'A'\n\n\n\n\n```python\n(65, 65) < (97, 65)\n```\n\n\n\n\n True\n\n\n\n#### Boolean operators\n* `and`\n* `or`\n* `not`\n\n\n```python\nlow < middle and middle < high\n```\n\n\n\n\n True\n\n\n\n\n```python\nlow < middle or middle == high\n```\n\n\n\n\n True\n\n\n\n#### What is truth (in Python)? 
\n\nStop and try the following code and figure out why the following expressions return `True` or `False`:\n\n\n```python\nTrue\n```\n\n\n\n\n True\n\n\n\n\n```python\nFalse\n```\n\n\n\n\n False\n\n\n\n\n```python\nbool(True)\n```\n\n\n\n\n True\n\n\n\n\n```python\nbool(False)\n```\n\n\n\n\n False\n\n\n\n\n```python\nbool(0)\n```\n\n\n\n\n False\n\n\n\n\n```python\nbool(1)\n```\n\n\n\n\n True\n\n\n\n\n```python\nbool(2)\n```\n\n\n\n\n True\n\n\n\n\n```python\nbool(\"True\")\n```\n\n\n\n\n True\n\n\n\n\n```python\nbool(\"true\")\n```\n\n\n\n\n True\n\n\n\n\n```python\nbool(\"False\")\n```\n\n\n\n\n True\n\n\n\n\n```python\nbool(\"\")\n```\n\n\n\n\n False\n\n\n\n\n```python\nbool(\" \")\n```\n\n\n\n\n True\n\n\n\n\n```python\nbool(None)\n```\n\n\n\n\n False\n\n\n\n#### Identity operator \n\n**Warning**: Do not use `is` when testing for equality:\n\n\n```python\na = 256\nb = 256\n```\n\n\n```python\na is b\n```\n\n\n\n\n True\n\n\n\n\n```python\nx = 257\ny = 257\n```\n\n\n```python\nx is y\n```\n\n\n\n\n False\n\n\n\n(see https://wsvincent.com/python-wat-integer-cache/ for more information and further links)\n\n### Loops\n\n#### `while` \n\n\n```python\n# countup.py\n\ntmax = 10.\nt, dt = 0, 2.\n\nwhile t <= tmax:\n print(\"time \" + str(t))\n t += dt\nprint(\"Finished\")\n```\n\n time 0\n time 2.0\n time 4.0\n time 6.0\n time 8.0\n time 10.0\n Finished\n\n\nFibonacci series\n$$\nF_n = F_{n\u22121} + F_{n\u22122} \\quad \\text{with}\\ F_1 = F_2 = 1\n$$\n\n\n```python\n# fibonacci.py\n\nFmax = 100\na, b = 0, 1\n\nwhile b < Fmax:\n print(b, end=' ')\n a, b = b, a+b\nprint()\n```\n\n 1 1 2 3 5 8 13 21 34 55 89 \n\n\n##### Vertical throw\n\nThrow a ball of mass $m$ vertically into the air from initial height $y_0 = 2$ m. What is its position $y(t)$ as a function of time $t$ and when does it hit the ground at $y=0$ if it has an initial upwards velocity $v_0 = 15\\,\\text{m/s}$ (a slow baseball pitch is $30\\,\\text{m/s}$)? 
\n\nKinematic equation of motion:\n$$\ny(t) = -\\frac{1}{2} g t^2 + v_0 t + y_0\n$$\n\n\n```python\n# parameters\ng = 9.81 # acceleration due to gravity in m/s**2\nv0 = 15 # initial velocity in m/s\ny0 = 2 # initial position in m\ny_ground = 0\n\ndt = 0.1 # time step in s\n\n# data containers\ny_values = []\nt_values = []\n\n# initial conditions\ny = y0\nt = 0\nwhile y > y_ground:\n y = -0.5*g*t**2 + v0*t + y0\n t_values.append(t)\n y_values.append(y)\n \n t += dt\n```\n\n\n```python\ny_values[-1]\n```\n\n\n\n\n -0.22720000000002472\n\n\n\n\n```python\nt_values[-1]\n```\n\n\n\n\n 3.2000000000000015\n\n\n\n\n```python\nimport matplotlib.pyplot as plt\n%matplotlib inline\n```\n\n\n```python\nplt.plot(t_values, y_values, 'o-')\nplt.xlabel(\"time (s)\")\nplt.ylabel(\"y (m)\");\n```\n\n#### `for` loop \n\nConvert temperatures in Fahrenheit to Kelvin\n$$\nT/\\text{K} = \\frac{5}{9}(\\theta/^\\circ\\text{F} - 32) + 273.15\n$$\n\n\n```python\n# temperature conversion\n\ntemperatures_F = [60.1, 78.3, 98.8, 97.1, 101.3, 110.0]\nfor theta in temperatures_F:\n T = (5/9) * (theta - 32) + 273.15\n print(T)\n \nprint(\"Conversion complete\")\n```\n\n 288.76111111111106\n 298.8722222222222\n 310.26111111111106\n 309.31666666666666\n 311.65\n 316.4833333333333\n Conversion complete\n\n\nStore results:\n\n\n```python\ntemperatures_F = [60.1, 78.3, 98.8, 97.1, 101.3, 110.0]\ntemperatures_K = []\nfor theta in temperatures_F:\n T = (5/9) * (theta - 32) + 273.15\n temperatures_K.append(T)\n \nprint(temperatures_K)\n```\n\n [288.76111111111106, 298.8722222222222, 310.26111111111106, 309.31666666666666, 311.65, 316.4833333333333]\n\n\nUse `range()` to print side by side _after_ processing:\n\n\n```python\ntemperatures_F = [60.1, 78.3, 98.8, 97.1, 101.3, 110.0]\ntemperatures_K = []\nfor theta in temperatures_F:\n T = (theta - 32) * (5/9) + 273.15\n temperatures_K.append(T)\n \n# print results side by side\nfor i in range(len(temperatures_F)):\n T_F = temperatures_F[i]\n T_K = temperatures_K[i]\n print(\"{0:.2f} F = {1:.2f} K\".format(T_F, T_K))\n```\n\n 60.10 F = 288.76 K\n 78.30 F = 298.87 K\n 98.80 F = 310.26 K\n 97.10 F = 309.32 K\n 101.30 F = 311.65 K\n 110.00 F = 316.48 K\n\n\n#### Vertical throw example with `for` and `break`\n\nFixed number of steps with condition\n- run `Nsteps` or stop when ball hits ground\n- Kinematic equation of motion:\n $$\n y(t) = -\\frac{1}{2} g t^2 + v_0 t + y_0\n $$\n\n\n```python\n# parameters\ng = 9.81 # acceleration due to gravity in m/s**2\nv0 = 15 # initial velocity in m/s\ny0 = 2 # initial position in m\ny_ground = 0\n\ndt = 0.1 # time step in s\nNsteps = 100\n\n# data containers\ny_values = []\nt_values = []\n\n# initial conditions\ny = y0\nt = 0\n\nfor i in range(Nsteps):\n t = i * dt\n y = -0.5*g*t**2 + v0*t + y0\n if y < y_ground:\n break\n y_values.append(y)\n t_values.append(t)\nprint(\"Computed\", len(y_values), \"positions\")\n```\n\n Computed 32 positions\n\n\n\n```python\nplt.plot(t_values, y_values, '.-')\n```\n\n\n\n## Module 2.2: Unit 4: Containers\n\n*Data structures* are important for keeping track of data.\n- organize many numbers\n- Think of your key data structures *before* you start programming!\n\n#### Examples in physics\n - vectors: $\\mathbf{r} = \\begin{pmatrix}x\\\\y\\\\z\\end{pmatrix}$\n - matrices: $\\mathsf{A} = \\begin{pmatrix}\n A_{11} & A_{12} & A_{13}\\\\\n A_{21} & A_{22} & A_{23}\n \\end{pmatrix}$\n - tensors: $\\epsilon_{ijk}$\n\n \n\n - time series of measurements: $\\big\\{x(t_1), x(t_2), \\dots, x(t_N)\\big\\}$\n - measurements at 
specific coordinates e.g., \n - cartesian coordinate in space: $\\rho(x,y,z)$\n - latitude/longitude on earth: $p(\\phi, \\theta)$\n\n### Lists\nVery versatile container (and for us the most important container)\n* sequence\n* ordered\n* mutable\n\n\n```python\ntemperatures = [60.1, 78.3, 98.8, 97.1, 101.3, 110.0]\nstuff = [\"dog\", 42, -1.234, \"cat\", [3, 2, 1]]\nempty = []\ntwo = [[], []]\n```\n\n#### indexing \n\n\n```python\ntemperatures = [60.1, 78.3, 98.8, 97.1, 101.3, 110.0]\n```\n\n```\ntemperatures = [60.1, 78.3, 98.8, 97.1, 101.3, 110.0] elements\n | | | | | | |\n | 0 | 1 | 2 | 3 | 4 | 5 | index\n```\n\nFirst element\n\n\n```python\ntemperatures[0]\n```\n\n\n\n\n 60.1\n\n\n\nArbitrary elements\n\n\n```python\ntemperatures[3]\n```\n\n\n\n\n 97.1\n\n\n\n**Note**: Python indices are **0-based**.\n\n```\ntemperatures = [60.1, 78.3, 98.8, 97.1, 101.3, 110.0] elements\n | | | | | | |\n | 0 | 1 | 2 | 3 | 4 | 5 | index\n```\nFor example, the third element is at index 2:\n\n\n```python\ntemperatures[2]\n```\n\n\n\n\n 98.8\n\n\n\nNegative indices count from the last element to the first:\n```\ntemperatures = [60.1, 78.3, 98.8, 97.1, 101.3, 110.0] elements\n | | | | | | |\n | 0 | 1 | 2 | 3 | 4 | 5 | index\n | -6 | -5 | -4 | -3 | -2 | -1 | neg.index\n\n```\n\nLast element\n\n\n```python\ntemperatures[-1]\n```\n\n\n\n\n 110.0\n\n\n\nThird element from end\n\n\n```python\ntemperatures[-3]\n```\n\n\n\n\n 97.1\n\n\n\nPython\n[built-in function](https://docs.python.org/3.5/library/functions.html#built-in-functions)\nto determine the *length of a list*:\n[len()](https://docs.python.org/3.5/library/functions.html#len):\n\n\n```python\nlen(temperatures)\n```\n\n\n\n\n 6\n\n\n\n#### slicing\nSlicing produces a new list by extracting a subset of elements as\ndetermined by the \"slice\" _start:stop:step_. The general slicing syntax for a list `a`: \n```python\na[start:stop:step]\n```\nwhere \n- index `start` is *included* and \n- `stop` is *excluded*; \n- `start`, `stop`, `step` are each optional:\n - default for `start`: first element (index 0)\n - default for `stop`: after last element\n - default for `step` is 1, i.e., include every element. 
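\n\nFor example, spelling out all three parameters on the `temperatures` list from above (a quick check; anything left out falls back to its default):\n\n\n```python\ntemperatures = [60.1, 78.3, 98.8, 97.1, 101.3, 110.0]\ntemperatures[1:5:2]   # start=1, stop=5, step=2 -> [78.3, 97.1]\n```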
\n\nNegative values are also allowed for indices and negative step counts backwards.\n\nFirst 3 elements:\n\n```\ntemperatures = [60.1, 78.3, 98.8, 97.1, 101.3, 110.0] elements\n | | | | | | |\n | 0 | 1 | 2 | 3 | 4 | 5 | index\n | -6 | -5 | -4 | -3 | -2 | -1 | neg.index\n\n```\n\n\n```python\ntemperatures[0:3]\n```\n\n\n\n\n [60.1, 78.3, 98.8]\n\n\n\n\n```python\ntemperatures[:3]\n```\n\n\n\n\n [60.1, 78.3, 98.8]\n\n\n\n(`start` defaults to 0 and can be omitted).\n\n**`a[start:stop:step]`**\n\nOmitting parameters\n```python\ntemperatures[::2] == [60.1, 98.8, 101.3]\ntemperatures[2::2] == [98.8, 101.3]\ntemperatures[:2:2] == [60.1]\n```\n\n```\ntemperatures = [60.1, 78.3, 98.8, 97.1, 101.3, 110.0] elements\n | | | | | | |\n | 0 | 1 | 2 | 3 | 4 | 5 | index\n | -6 | -5 | -4 | -3 | -2 | -1 | neg.index\n\n```\n\n##### slicing example \n\n\n```python\nletters = ['A', 'B', 'C', 'D', 'E', 'F']\n```\n\n```\n+---+---+---+---+---+---+\n|'A'|'B'|'C'|'D'|'E'|'F'| elements \n+---+---+---+---+---+---+\n| 0 | 1 | 2 | 3 | 4 | 5 | index\n+---+---+---+---+---+---+\n|-6 |-5 |-4 |-3 |-2 |-1 | index\n+---+---+---+---+---+---+\n```\n\n```\n+---+---+---+---+---+---+\n|'A'|'B'|'C'|'D'|'E'|'F'| elements \n+---+---+---+---+---+---+\n| 0 | 1 | 2 | 3 | 4 | 5 | index\n+---+---+---+---+---+---+\n|-6 |-5 |-4 |-3 |-2 |-1 | index\n+---+---+---+---+---+---+\n```\n\n\n```python\nletters[:3]\n```\n\n\n\n\n ['A', 'B', 'C']\n\n\n\n\n```python\nletters[0:3]\n```\n\n\n\n\n ['A', 'B', 'C']\n\n\n\n\n```python\nletters[1:3]\n```\n\n\n\n\n ['B', 'C']\n\n\n\n```\n+---+---+---+---+---+---+\n|'A'|'B'|'C'|'D'|'E'|'F'| elements \n+---+---+---+---+---+---+\n| 0 | 1 | 2 | 3 | 4 | 5 | index\n+---+---+---+---+---+---+\n|-6 |-5 |-4 |-3 |-2 |-1 | index\n+---+---+---+---+---+---+\n```\n\n\n```python\nletters[-3]\n```\n\n\n\n\n 'D'\n\n\n\n\n```python\nletters[-3:-1]\n```\n\n\n\n\n ['D', 'E']\n\n\n\n\n```python\nletters[-3:]\n```\n\n\n\n\n ['D', 'E', 'F']\n\n\n\n\n```python\nletters[1::2]\n```\n\n\n\n\n ['B', 'D', 'F']\n\n\n\n#### Summary\n\n```python\nletters[:3] == ['A', 'B', 'C']\nletters[0:3] == ['A', 'B', 'C']\nletters[1:3] == ['B', 'C']\nletters[-3] == 'D'\nletters[-3:-1] == ['D', 'E']\nletters[-3:] == ['D', 'E', 'F']\nletters[1::2] == ['B', 'D', 'F']\n```\n\n```\n+---+---+---+---+---+---+\n|'A'|'B'|'C'|'D'|'E'|'F'| elements \n+---+---+---+---+---+---+\n| 0 | 1 | 2 | 3 | 4 | 5 | index\n+---+---+---+---+---+---+\n|-6 |-5 |-4 |-3 |-2 |-1 | index\n+---+---+---+---+---+---+\n```\n\n### Dictionaries\n\n`dict` is a great data structure but used less often in numerical calculations.\n\nWe mostly use it to keep track of parameters, e.g. 
masses of elements:\n\n\n\n```python\nmasses = {\n \"H\": 1.0079, \"He\": 4.002602,\n \"Li\": 6.94, \"Be\": 9.0121831,\n \"B\": 10.81, \"C\": 12.011,\n \"N\": 14.007, \"O\": 15.999,\n \"F\": 18.998403163,\n \"Ne\": 20.1797,\n}\n```\n\nCalculate the mass of the H$_2$O molecule:\n\n\n```python\nm_water = 2*masses[\"H\"] + 1*masses[\"O\"]\nprint(\"Water mass\", m_water, \"u\")\n```\n\n Water mass 18.0148 u\n\n\n### Nested list example: 2D harmonic oscillator trajectory\n\nA mass $m$ held by two perpendicular identical springs in the x-y plane ([2D harmonic oscillator](https://farside.ph.utexas.edu/teaching/336k/Newtonhtml/node28.html)) moves in a potential\n$$\nU(x, y) = \\frac{1}{2}k(x^2 + y^2)\n$$\nor with the vector $\\mathbf{r} = (x,y)$ written as\n$$\nU(\\mathbf{r}) = \\frac{1}{2}k\\mathbf{r}^2.\n$$\nIts trajectory (curve in space) $\\mathbf{r}(t) = \\big(x(t), y(t)\\big)$ is \n\n\\begin{align}\nx(t) &= A \\cos(\\omega t)\\\\\ny(t) &= B \\cos(\\omega t + \\phi)\n\\end{align}\n\nwhere the amplitudes $A$, $B$ and the phase difference $\\phi$ are determined by the initial conditions, and the frequency is $\\omega = \\sqrt{k/m}$.\n\nGiven $A=1$, $B=2$, $\\phi=\\pi/4$ and $\\omega=1$, calculate the trajectory $\\mathbf{r}(t)$.\n\n\n```python\nimport math\n\n# parameters\nA = 1\nB = 2\nphi = math.pi/4\nomega = 1\n\ndt = 0.1\nN = 100 # number of steps\n\nr_values = []\nfor i in range(N):\n t = i*dt\n x = A*math.cos(omega*t)\n y = B*math.cos(omega*t + phi)\n r = [x, y]\n r_values.append(r)\n```\n\n`r_values` is a nested list with `N` entries:\n\n\n```python\nlen(r_values)\n```\n\n\n\n\n 100\n\n\n\nEach entry is a list of length 2, which is clear from looking at the first few elements:\n\n\n```python\nr_values[:3]\n```\n\n\n\n\n [[1.0, 1.4142135623730951],\n [0.9950041652780257, 1.2659626133539166],\n [0.9800665778412416, 1.1050625843737085]]\n\n\n\nWe say that `r_values` \n* has 2 dimensions (one needs two indices to get at a value), e.g. `r_values[3][1]` to get the y-coordinate at time index 3.\n* has length `N`\n* has *shape* `N x 2`\n\nUnpack the values from the `N x 2` list into two `N` lists (using *list comprehensions*): \n\n\n```python\nx_values = [r[0] for r in r_values]\ny_values = [r[1] for r in r_values]\n```\n\nLoad the *matplotlib* library for plotting and make images appear inline in the notebook:\n\n\n```python\nimport matplotlib.pyplot as plt\n%matplotlib inline\n```\n\n\n```python\nplt.plot(x_values, y_values)\nplt.xlabel(\"x\")\nplt.ylabel(\"y\")\nplt.gca().set_aspect(1)\n```\n\n##### Alternative ways to unpack:\n\nUnpack the two-element list as loop variables:\n\n\n```python\nx_values = [x for x,y in r_values]\ny_values = [y for x,y in r_values]\n```\n\n**Advanced (optional):** Very tricky use of the [zip()](https://docs.python.org/3/library/functions.html#zip) function and the [unpacking operator](https://docs.python.org/3/tutorial/controlflow.html#tut-unpacking-arguments) `*` :\n\n\n```python\nx_values, y_values = zip(*r_values)\n```\n\n(Understanding how this works is left as a challenge. 
I am happy to give more hints if you ask in the forum.)\n\nBy the way: you can also use `zip` to \"merge\" the `x_values` and `y_values` back into a `Nx2` list, as demonstrated by looking at first few elements:\n\n\n```python\nlist(zip(x_values, y_values))[:3]\n```\n\n\n\n\n [(1.0, 1.4142135623730951),\n (0.9950041652780257, 1.2659626133539166),\n (0.9800665778412416, 1.1050625843737085)]\n\n\n\n\n```python\nr_values[:3]\n```\n\n\n\n\n [[1.0, 1.4142135623730951],\n [0.9950041652780257, 1.2659626133539166],\n [0.9800665778412416, 1.1050625843737085]]\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "00eb9035314c4ac5d148b7e526d67687b0d52346", "size": 91640, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Module_2/Module_2.ipynb", "max_stars_repo_name": "Py4Physics/PHY194", "max_stars_repo_head_hexsha": "68966ad96bbf2756ca3c0c39210be69c379c7619", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2019-10-26T00:39:14.000Z", "max_stars_repo_stars_event_max_datetime": "2019-10-29T19:35:20.000Z", "max_issues_repo_path": "Module_2/Module_2.ipynb", "max_issues_repo_name": "Py4Phy/PHY202", "max_issues_repo_head_hexsha": "ec3a0b0285f2601accfdbf0c30416e1351430342", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Module_2/Module_2.ipynb", "max_forks_repo_name": "Py4Phy/PHY202", "max_forks_repo_head_hexsha": "ec3a0b0285f2601accfdbf0c30416e1351430342", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.7988546886, "max_line_length": 14388, "alphanum_fraction": 0.6680161502, "converted": true, "num_tokens": 5587, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9496693631290971, "lm_q2_score": 0.8887587890727755, "lm_q1q2_score": 0.8440269931941302}} {"text": "```python\nimport sympy as sym\nfrom sympy.polys.multivariate_resultants import MacaulayResultant\n\nsym.init_printing()\n```\n\nMacaulay Resultant\n------------------\n\nThe Macauly resultant is a multivariate resultant. It is used for calculating the resultant of $n$ polynomials\nin $n$ variables. The Macaulay resultant is calculated as the determinant of two matrices,\n\n$$R = \\frac{\\text{det}(A)}{\\text{det}(M)}.$$\n\nMatrix $A$\n-----------\n\nThere are a number of steps needed to construct matrix $A$. Let us consider an example from https://dl.acm.org/citation.cfm?id=550525 to \nshow the construction.\n\n\n```python\nx, y, z = sym.symbols('x, y, z')\n```\n\n\n```python\na_1_1, a_1_2, a_1_3, a_2_2, a_2_3, a_3_3 = sym.symbols('a_1_1, a_1_2, a_1_3, a_2_2, a_2_3, a_3_3')\nb_1_1, b_1_2, b_1_3, b_2_2, b_2_3, b_3_3 = sym.symbols('b_1_1, b_1_2, b_1_3, b_2_2, b_2_3, b_3_3')\nc_1, c_2, c_3 = sym.symbols('c_1, c_2, c_3')\n```\n\n\n```python\nvariables = [x, y, z]\n```\n\n\n```python\nf_1 = a_1_1 * x ** 2 + a_1_2 * x * y + a_1_3 * x * z + a_2_2 * y ** 2 + a_2_3 * y * z + a_3_3 * z ** 2\n```\n\n\n```python\nf_2 = b_1_1 * x ** 2 + b_1_2 * x * y + b_1_3 * x * z + b_2_2 * y ** 2 + b_2_3 * y * z + b_3_3 * z ** 2\n```\n\n\n```python\nf_3 = c_1 * x + c_2 * y + c_3 * z\n```\n\n\n```python\npolynomials = [f_1, f_2, f_3]\nmac = MacaulayResultant(polynomials, variables)\n```\n\n**Step 1** Calculated $d_i$ for $i \\in n$. 
\n\n\n```python\nmac.degrees\n```\n\n**Step 2.** Get $d_M$.\n\n\n```python\nmac.degree_m\n```\n\n**Step 3.** All monomials of degree $d_M$ and size of set.\n\n\n```python\nmac.get_monomials_set()\n```\n\n\n```python\nmac.monomial_set\n```\n\n\n```python\nmac.monomials_size\n```\n\nThese are the columns of matrix $A$.\n\n**Step 4** Get rows and fill matrix.\n\n\n```python\nmac.get_row_coefficients()\n```\n\nEach list is being multiplied by polynomials $f_1$, $f_2$ and $f_3$ equivalently. Then we fill the matrix\nbased on the coefficient of the monomials in the columns.\n\n\n```python\nmatrix = mac.get_matrix()\nmatrix\n```\n\nMatrix $M$\n-----------\n\nColumns that are non reduced are kept. The rows which contain one if the $a_i$s is dropoed.\n$a_i$s are the coefficients of $x_i ^ {d_i}$.\n\n\n```python\nmac.get_submatrix(matrix)\n```\n\nSecond example\n-----------------\nThis is from: http://isc.tamu.edu/resources/preprints/1996/1996-02.pdf\n\n\n```python\nx, y, z = sym.symbols('x, y, z')\n```\n\n\n```python\na_0, a_1, a_2 = sym.symbols('a_0, a_1, a_2')\nb_0, b_1, b_2 = sym.symbols('b_0, b_1, b_2')\nc_0, c_1, c_2,c_3, c_4 = sym.symbols('c_0, c_1, c_2, c_3, c_4')\n```\n\n\n```python\nf = a_0 * y - a_1 * x + a_2 * z\ng = b_1 * x ** 2 + b_0 * y ** 2 - b_2 * z ** 2\nh = c_0 * y - c_1 * x ** 3 + c_2 * x ** 2 * z - c_3 * x * z ** 2 + c_4 * z ** 3\n```\n\n\n```python\npolynomials = [f, g, h]\n```\n\n\n```python\nmac = MacaulayResultant(polynomials, variables=[x, y, z])\n```\n\n\n```python\nmac.degrees\n```\n\n\n```python\nmac.degree_m\n```\n\n\n```python\nmac.get_monomials_set()\n```\n\n\n```python\nmac.get_size()\n```\n\n\n```python\nmac.monomial_set\n```\n\n\n```python\nmac.get_row_coefficients()\n```\n\n\n```python\nmatrix = mac.get_matrix()\nmatrix\n```\n\n\n```python\nmatrix.shape\n```\n\n\n```python\nmac.get_submatrix(mac.get_matrix())\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "b6ecd4a4e650f9cfdf372e25658630814f1d4193", "size": 58732, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "examples/notebooks/Macaulay_resultant.ipynb", "max_stars_repo_name": "utkarshdeorah/sympy", "max_stars_repo_head_hexsha": "dcdf59bbc6b13ddbc329431adf72fcee294b6389", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 8323, "max_stars_repo_stars_event_min_datetime": "2015-01-02T15:51:43.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T13:13:19.000Z", "max_issues_repo_path": "examples/notebooks/Macaulay_resultant.ipynb", "max_issues_repo_name": "utkarshdeorah/sympy", "max_issues_repo_head_hexsha": "dcdf59bbc6b13ddbc329431adf72fcee294b6389", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 15102, "max_issues_repo_issues_event_min_datetime": "2015-01-01T01:33:17.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T22:53:13.000Z", "max_forks_repo_path": "examples/notebooks/Macaulay_resultant.ipynb", "max_forks_repo_name": "utkarshdeorah/sympy", "max_forks_repo_head_hexsha": "dcdf59bbc6b13ddbc329431adf72fcee294b6389", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 4490, "max_forks_repo_forks_event_min_datetime": "2015-01-01T17:48:07.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-31T17:24:05.000Z", "avg_line_length": 77.8938992042, "max_line_length": 14916, "alphanum_fraction": 0.7622420486, "converted": true, "num_tokens": 1224, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9416541659378681, "lm_q2_score": 0.8962513765975758, "lm_q1q2_score": 0.8439588425006563}} {"text": "```python\nimport numpy as np\nimport sympy\nsympy.init_printing(use_unicode=True)\nfrom sympy import *\nfrom sympy.solvers import solve\nfrom IPython.display import display\n\ndef simplified(exp):\n simp = simplify(exp)\n display(simp)\n return simp\n\ndef firstOrderCondition(exp, var, iSelectedSolution=None):\n diffExp = simplify(diff(exp, var))\n display(diffExp)\n solutions = solve(diffExp, var)\n display(solutions)\n if iSelectedSolution is not None:\n solution = solutions[iSelectedSolution]\n optimum = exp.subs(var, solution)\n return simplified(optimum)\n else:\n return solutions\n\nx = symbols('x')\nw,A,B = symbols('w A B', Integer=True, Positive=True)\nn,p,q,d = symbols('n p q d', positive=True)\n```\n\n### Optimal Channel Initialization\n\n\n```python\nXn = simplified(n/(2*p-1) - w/(2*p-1)*(1-(p/(1-p))**n)/(1-(p/(1-p))**w) )\n```\n\n\n```python\nfirstOrderCondition(Xn,n)\n```\n\n\n```python\nXn = simplified(n*(1-q**(-w/B)) - w*(1-q**(-n/B)))\nfirstOrderCondition(Xn,n)\n```\n\n\n```python\nnopt.subs(B,1)\n```\n\n\n```python\nnopt.subs(q,1/2)\n```\n\n\n```python\nsympy.simplify(Xn.subs(n,nopt))\n```\n\n\n```python\ncp = (A * x**(A+1) - (A+1)*x**A + 1)/(x-1)\ncp\n```\n\n\n```python\nsympy.simplify(cp.subs(A, 3))\n```\n", "meta": {"hexsha": "c69e0b3ecf93f91bd212a530c1decc49f3403b5f", "size": 45601, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "old/sympy-sample-notebook.ipynb", "max_stars_repo_name": "erelsgl/bitcoin-simulations", "max_stars_repo_head_hexsha": "79bfa0930ab9ad17be59b9cad1ec6e7c3530aa3b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2018-11-26T02:44:38.000Z", "max_stars_repo_stars_event_max_datetime": "2018-11-26T02:44:38.000Z", "max_issues_repo_path": "old/sympy-sample-notebook.ipynb", "max_issues_repo_name": "erelsgl/bitcoin-simulations", "max_issues_repo_head_hexsha": "79bfa0930ab9ad17be59b9cad1ec6e7c3530aa3b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "old/sympy-sample-notebook.ipynb", "max_forks_repo_name": "erelsgl/bitcoin-simulations", "max_forks_repo_head_hexsha": "79bfa0930ab9ad17be59b9cad1ec6e7c3530aa3b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2018-09-06T00:11:26.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-29T17:14:59.000Z", "avg_line_length": 118.7526041667, "max_line_length": 4744, "alphanum_fraction": 0.7686892831, "converted": true, "num_tokens": 378, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9546474246069457, "lm_q2_score": 0.8840392756357326, "lm_q1q2_score": 0.8439458177370419}} {"text": "# \u57fa\u672c\nsympy\u306f\u9ad8\u6027\u80fd\u3067\u3059\u304c\u5168\u822c\u7684\u306b\u9045\u3044\u3067\u3059\uff0e \n\u5de8\u5927\u306a\u6570\u5f0f\u306esymplify\uff08\u6574\u7406\uff09\u7b49\u306f\u3069\u3093\u3067\u3082\u306a\u3044\u6642\u9593\u304c\u304b\u304b\u308a\u307e\u3059\uff0e \n\n\n```python\nimport sympy as sy # sympy\u306e\u30a4\u30f3\u30dd\u30fc\u30c8\n```\n\n# \u30b7\u30f3\u30dc\u30ea\u30c3\u30af\u5909\u6570\u306e\u5b9a\u7fa9\n`\u5909\u6570 = sy.Symbol(\"\u5909\u6570\")` \n\u307e\u305f\u306f \n`\u5909\u65701, \u5909\u65702 = sy.symbols(\"\u5909\u65701, \u5909\u65702\")` \n\u3067\u5b9a\u7fa9\u3059\u308b\uff0e \n\n\n```python\nt = sy.Symbol('x')\nx, y, z = sy.symbols('x, y, z')\n```\n\n\u30b7\u30f3\u30dc\u30ea\u30c3\u30af\u5909\u6570\u3092\u4f7f\u3063\u3066\u6570\u5f0f\u3092\u5b9a\u7fa9\u3067\u304d\u308b\n\n\n```python\nexpr = x + y**2 + z\nexpr\n```\n\n\n\n\n$\\displaystyle x + y^{2} + z$\n\n\n\nsin\u95a2\u6570\u306a\u3069\u306e\u4e00\u822c\u7684\u306a\u95a2\u6570\u3082\u4f7f\u3048\u308b\uff0e\n\n\n```python\nexpr2 = sy.sin(x) * sy.exp(y) * x**2\nexpr2\n```\n\n\n\n\n$\\displaystyle x^{2} e^{y} \\sin{\\left(x \\right)}$\n\n\n\n## \u5fae\u5206\n`sy.diff(\u6570\u5f0f, \u5909\u6570)`\u3092\u4f7f\u3046\n\n\n```python\nsy.diff(expr, y)\n```\n\n\n\n\n$\\displaystyle 2 y$\n\n\n\n\n```python\nsy.diff(expr2, x)\n```\n\n\n\n\n$\\displaystyle x^{2} e^{y} \\cos{\\left(x \\right)} + 2 x e^{y} \\sin{\\left(x \\right)}$\n\n\n\n## \u7a4d\u5206\n`sy.integrate()`\u3092\u4f7f\u3046\uff0e \n\n\n```python\nsy.integrate(expr, x)\n```\n\n\n\n\n$\\displaystyle \\frac{x^{2}}{2} + x \\left(y^{2} + z\\right)$\n\n\n\n\n```python\nsy.integrate(expr2, y)\n```\n\n\n\n\n$\\displaystyle x^{2} e^{y} \\sin{\\left(x \\right)}$\n\n\n\n****\n## \u30d9\u30af\u30c8\u30eb\uff0c\u884c\u5217\u306e\u5b9a\u7fa9\nnumpy\u3068\u4f3c\u305f\u5f62\u5f0f\n\n\n```python\nX = sy.Matrix([\n [x],\n [y]\n])\nX\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}x\\\\y\\end{matrix}\\right]$\n\n\n\n\n```python\nY = sy.Matrix([\n [x, x*y],\n [z**3, x*sy.sin(z)],\n])\nY\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}x & x y\\\\z^{3} & x \\sin{\\left(z \\right)}\\end{matrix}\\right]$\n\n\n\n## \u30e4\u30b3\u30d3\u884c\u5217\n\u30e4\u30b3\u30d3\u884c\u5217\u306f \n```python\n\u30d9\u30af\u30c8\u30ebA.jacobian(\u30d9\u30af\u30c8\u30ebB)\n``` \n\u3067\u8a08\u7b97\u3067\u304d\u308b\uff0e \n\n\n```python\nV = sy.Matrix([\n [x**2 + z*sy.log(z)],\n [x + sy.sin(y)]\n])\nV.jacobian(X)\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}2 x & 0\\\\1 & \\cos{\\left(y \\right)}\\end{matrix}\\right]$\n\n\n", "meta": {"hexsha": "b3363faa1ea7604c1512cf4845e7fd0bda861ed7", "size": 6319, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "symbolic/python_src/basic.ipynb", "max_stars_repo_name": "YoshimitsuMatsutaIe/abc_2022", "max_stars_repo_head_hexsha": "9c6fb487c7ec22fdc57cc1eb0abec4c9786ad995", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "symbolic/python_src/basic.ipynb", "max_issues_repo_name": "YoshimitsuMatsutaIe/abc_2022", "max_issues_repo_head_hexsha": "9c6fb487c7ec22fdc57cc1eb0abec4c9786ad995", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, 
"max_forks_repo_path": "symbolic/python_src/basic.ipynb", "max_forks_repo_name": "YoshimitsuMatsutaIe/abc_2022", "max_forks_repo_head_hexsha": "9c6fb487c7ec22fdc57cc1eb0abec4c9786ad995", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 18.2103746398, "max_line_length": 114, "alphanum_fraction": 0.4250672575, "converted": true, "num_tokens": 711, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9597620579518498, "lm_q2_score": 0.8791467722591728, "lm_q1q2_score": 0.8437717153851899}} {"text": "Notes on [this litte paper](https://arxiv.org/abs/1807.03462).\n\nSuppose we have a random variable $X$ with cumulative distribution function $F(x)$. Think about the following expectation.\n\n\\begin{align}\nL_p(\\theta) &= \\mathbb{E}\\left[\\left| X-\\theta \\right|^p\\right] \\\\\n &= \\int_{-\\infty}^{\\theta} (\\theta-x)^p dF + \\int_{\\theta}^{\\infty} (x-\\theta)^p dF\n\\end{align}\n\nThe minimum of $L_p$ gives us useful points in a distribution.\n\n\\begin{align}\n\\frac{\\partial L_p}{\\partial \\theta} &= \\int_{-\\infty}^{\\theta} p(\\theta-x)^{p-1} dF - \\int_{\\theta}^{\\infty} p(x-\\theta)^{p-1} dF\n\\end{align}\n\n\\begin{align}\n\\frac{\\partial L_p}{\\partial \\theta} = 0 \\Rightarrow \\int_{-\\infty}^{\\theta} (\\theta-x)^{p-1} dF = \\int_{\\theta}^{\\infty} (x-\\theta)^{p-1} dF\n\\end{align}\n\nFor $p=2$:\n\n\\begin{align}\n &\\int_{-\\infty}^{\\theta} (\\theta-x) dF = \\int_{\\theta}^{\\infty} (x-\\theta) dF \\\\\n\\Rightarrow \\qquad &\\int_{-\\infty}^{\\infty} (x-\\theta) dF = 0 \\\\\n\\Rightarrow \\qquad &\\theta = \\int_{-\\infty}^{\\infty} x dF\n\\end{align}\n\nThis is the mean by definition.\n\nFor $p=1$:\n\n\\begin{align}\n &\\int_{-\\infty}^{\\theta} dF = \\int_{\\theta}^{\\infty} dF \\\\\n\\Rightarrow \\qquad & F(\\theta) = 1-F(\\theta) \\\\\n\\Rightarrow \\qquad & F(\\theta) = \\frac{1}{2}\n\\end{align}\n\nThis is the usual definition of the median.\n\nUnlike the mean, this definition of the median is a bit troublesome, because there is not necessarily a single point which satisfies this equation. In particular, for an empirical distribution with an even number of samples, any point in the interval between the $(\\frac{N}{2}-1)$th and $(\\frac{N}{2}+1)$th point will do. This is the reason why at school we are taught the double definition for the sample median: The middle point if there are an odd number of points, and the midpoint of the two middle points if there are an even number of points. WTF!?\n\nThis paper has a way to generalize the definition of the median such that there is always a single unique value. 
Instead of minimizing $L_p$ with $p=1$, do it with $p=1+\\epsilon$ for $\\epsilon \\rightarrow 0$.\n\n\\begin{align}\n\\int_{-\\infty}^{\\theta} (\\theta-x)^{\\epsilon} dF = \\int_{\\theta}^{\\infty} (x-\\theta)^{\\epsilon} dF\n\\end{align}\n\nUsing Taylor:\n\\begin{align}\n(x-\\theta)^{\\epsilon} &= \\exp(\\log(x-\\theta))^{\\epsilon} \\\\\n &= \\exp(\\epsilon\\log(x-\\theta)) \\\\\n &= 1 + \\epsilon\\log(x-\\theta) + \\mathcal{O}(\\epsilon^2)\n\\end{align}\n\nThis gives:\n\\begin{align}\n& \\int_{-\\infty}^{\\theta} \\left[ 1 + \\epsilon\\log(\\theta-x) + \\mathcal{O}(\\epsilon^2) \\right] dF = \\int_{\\theta}^{\\infty} \\left[ 1 + \\epsilon\\log(x-\\theta) + \\mathcal{O}(\\epsilon^2) \\right] dF \\\\\n\\Rightarrow \\quad & F(\\theta) + \\epsilon \\int_{-\\infty}^{\\theta} \\log(\\theta-x) dF + \\mathcal{O}(\\epsilon^2) = 1 - F(\\theta) + \\epsilon \\int_{\\theta}^{\\infty} \\log(x-\\theta) dF + \\mathcal{O}(\\epsilon^2) \\\\\n\\Rightarrow \\quad & 2F(\\theta) - 1 + \\epsilon \\left[ \\int_{-\\infty}^{\\theta} \\log(\\theta-x) dF - \\int_{\\theta}^{\\infty} \\log(x-\\theta) dF \\right]+ \\mathcal{O}(\\epsilon^2) = 0\n\\end{align}\n\nIf $2F(\\theta) - 1 = 0$ has a unique solution then as $\\theta$ will approach this in the limit as $\\epsilon\\rightarrow 0$. However, when ambiguity arises, the $\\epsilon$ term acts as a tie-breaker.\n\nIn the empirical distribution case, $\\theta$ will need to satisfy the following:\n\n\\begin{align}\n\\sum_{i \\leq \\frac{N}{2}} \\log\\left( \\theta - x_i \\right) = \\sum_{i > \\frac{N}{2}} \\log\\left( x_i - \\theta \\right)\n\\end{align}\n\nWe can try this out.\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy import stats\nfrom scipy import optimize\n\nN = 8\nseed = 1234\n\nx_samples = stats.gamma.rvs(1, size=N, random_state=np.random.RandomState(seed))\nx_samples.sort()\n\nx_range = np.linspace(0, 4, 10000)\nF = [(x_samples < x).sum() for x in x_range]\n\nmed_lower = x_samples[int(N/2-1)]\nmed_upper = x_samples[int(N/2)]\n\ndef Llog(theta):\n return np.log(theta-x_samples[:int(N/2)]).sum() - np.log(x_samples[int(N/2):]-theta).sum()\n\ntiny_bit = 1E-10\nmed = optimize.brentq(Llog, med_lower+tiny_bit, med_upper-tiny_bit)\n\nplt.plot(x_range, F)\nplt.plot(med_lower*np.array([1,1]), [0,N], 'r')\nplt.plot(med_upper*np.array([1,1]), [0,N], 'r')\nplt.plot(med*np.array([1,1]), [0,N], 'g')\nplt.show()\n\n```\n\n# Appendix\n\nThat derivative is not trivial. It relies on the following. 
Define:\n\n\\begin{align}\nQ(\\theta) &= \\int_{-\\infty}^{\\theta} g(x,\\theta) dF\n\\end{align}\n\nLet's go.\n\n\\begin{align}\n\\frac{\\partial Q}{\\partial \\theta} &= \\lim_{\\phi\\rightarrow 0} \\frac{1}{\\phi}\\left[ \\int_{-\\infty}^{\\theta+\\phi} g(x,\\theta+\\phi) dF - \\int_{-\\infty}^{\\theta} g(x,\\theta) dF \\right] \\\\\n&= \\lim_{\\phi\\rightarrow 0} \\frac{1}{\\phi}\\left[ \\int_{-\\infty}^{\\theta+\\phi} \\left[ g(x,\\theta) + \\phi h(x, \\theta) + \\mathcal{O}(\\phi^2) \\right] dF - \\int_{-\\infty}^{\\theta} g(x,\\theta) dF \\right] \\\\\n&= \\lim_{\\phi\\rightarrow 0} \\frac{1}{\\phi}\\left[ \\int_{-\\infty}^{\\theta} \\left[ g(x,\\theta) + \\phi h(x, \\theta) + \\mathcal{O}(\\phi^2) \\right] dF + \\int_{\\theta}^{\\theta+\\phi} \\left[ g(x,\\theta) + \\phi h(x, \\theta) + \\mathcal{O}(\\phi^2) \\right] dF - \\int_{-\\infty}^{\\theta} g(x,\\theta) dF \\right] \\\\\n&= \\lim_{\\phi\\rightarrow 0} \\frac{1}{\\phi}\\left[ \\phi \\int_{-\\infty}^{\\theta} h(x, \\theta) dF + \\int_{\\theta}^{\\theta+\\phi} \\left[ g(x,\\theta) + \\phi h(x, \\theta) \\right] dF + \\mathcal{O}(\\phi^2) \\right] \\\\\n&= \\lim_{\\phi\\rightarrow 0} \\frac{1}{\\phi}\\left[ \\phi \\int_{-\\infty}^{\\theta} h(x, \\theta) dF + \\left[ g(\\theta,\\theta) + \\phi h(\\theta, \\theta) \\right] \\int_{\\theta}^{\\theta+\\phi}dF + \\mathcal{O}(\\phi^2) \\right] \\\\\n\\end{align}\n\nWhere:\n\n\\begin{align}\nh(x, \\theta) &= \\left.\\frac{\\partial g(x,y)}{\\partial y}\\right|_{y=\\theta}\n\\end{align}\n\nTo finish off, notice that our $g$ has the convenient property that $g(\\theta,\\theta)=0$, so for this special case, we can carry on.\n\n\\begin{align}\n\\frac{\\partial Q}{\\partial \\theta} &= \\lim_{\\phi\\rightarrow 0} \\frac{1}{\\phi}\\left[ \\phi \\int_{-\\infty}^{\\theta} h(x, \\theta) dF + \\mathcal{O}(\\phi^2) \\right] \\\\\n&= \\int_{-\\infty}^{\\theta} h(x, \\theta) dF\n\\end{align}\n\nGosh, I can still do calculus.\n", "meta": {"hexsha": "739df6a8b387606ec07d11ffbfc681ba0b48d78d", "size": 14800, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "median/median.ipynb", "max_stars_repo_name": "drpeteb/assorted-notebooks", "max_stars_repo_head_hexsha": "3a64d23ad4a060a3460ff4091acbd325eef05192", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "median/median.ipynb", "max_issues_repo_name": "drpeteb/assorted-notebooks", "max_issues_repo_head_hexsha": "3a64d23ad4a060a3460ff4091acbd325eef05192", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "median/median.ipynb", "max_forks_repo_name": "drpeteb/assorted-notebooks", "max_forks_repo_head_hexsha": "3a64d23ad4a060a3460ff4091acbd325eef05192", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 77.0833333333, "max_line_length": 6252, "alphanum_fraction": 0.7109459459, "converted": true, "num_tokens": 2194, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9353465170505204, "lm_q2_score": 0.9019206791658465, "lm_q1q2_score": 0.8436083659136144}} {"text": "# Final project\n\n### Author: Roberto Corti\n\nThe Allen\u2013Cahn equation (after John W. 
Cahn and Sam Allen) is a reaction\u2013diffusion equation of mathematical physics which describes the process of phase separation in multi-component alloy systems, including order-disorder transitions.\n\nThe equation describes the time evolution of a scalar-valued state variable $\\eta$ on a domain $\\Omega=[0,1]$ during a time interval $[0,T]$, and is given (in one dimension) by:\n\n$$\n\\frac{\\partial \\eta}{\\partial t} - \\varepsilon^2 \\eta'' + f'(\\eta) = 0, \\qquad \\eta'(0, t) = \\eta'(1, t) = 0,\\qquad\\eta(x,0) = \\eta_0(x)\n$$\n\nwhere $f$ is a double-well potential, $\\eta_0$ is the initial condition, and $\\varepsilon$ is the characteristic width of the phase transition.\n\nThis equation is the L2 gradient flow of the Ginzburg\u2013Landau free energy functional, and it is closely related to the Cahn\u2013Hilliard equation.\n\nA typical example of double well potential is given by the following function\n\n$$\nf(\\eta) = \\eta^2(\\eta-1)^2\n$$\n\nwhich has two minima in $0$ and $1$ (the two wells, where its value is zero), one local maximum in $0.5$, and it is always greater or equal than zero.\n\nThe two minima above behave like \"attractors\" for the phase $\\eta$. Think of a solid-liquid phase transition (say water+ice) occupying the region $[0,1]$. When $\\eta = 0$, then the material is liquid, while when $\\eta = 1$ the material is solid (or viceversa).\n\nAny other value for $\\eta$ is *unstable*, and the equation will pull that region towards either $0$ or $1$.\n\nDiscretisation of this problem can be done by finite difference in time. For example, a fully explicity discretisation in time would lead to the following algorithm.\n\nWe split the interval $[0,T]$ in `n_steps` intervals, of dimension `dt = T/n_steps`. Given the solution at time `t[k] = k*dt`, it i possible to compute the next solution at time `t[k+1]` as\n\n$$\n\\eta_{k+1} = \\eta_{k} + \\Delta t \\varepsilon^2 \\eta_k'' - \\Delta t f'(\\eta_k)\n$$\n\nSuch a solution will not be stable. A possible remedy that improves the stability of the problem, is to treat the linear term $\\Delta t \\varepsilon^2 \\eta_k''$ implicitly, and keep the term $-f'(\\eta_k)$ explicit, that is:\n\n$$\n\\eta_{k+1} - \\Delta t \\varepsilon^2 \\eta_k'' = \\eta_{k} - \\Delta t f'(\\eta_k)\n$$\n\nGrouping together the terms on the right hand side, this problem is identical to the one we solved in the python notebook number 9, with the exception of the constant $\\Delta t \\varepsilon^2$ in front the stiffness matrix.\n\nIn particular, given a set of basis functions $v_i$, representing $\\eta = \\eta^j v_j$ (sum is implied), we can solve the problem using finite elements by computing\n\n$$\n\\big((v_i, v_j) + \\Delta t \\varepsilon^2 (v_i', v_j')\\big) \\eta^j_{k+1} = \\big((v_i, v_j) \\eta^j_{k} - \\Delta t (v_i, f'(\\eta_k)\\big)\n$$\nwhere a sum is implied over $j$ on both the left hand side and the right hand side. Let us remark that while writing this last version of the equation we moved from a forward Euler scheme to a backward Euler scheme for the second spatial derivative term: that is, we used $\\eta^j_{k+1}$ instead of $\\eta^j_{k}$. 
\n\nThis results in a linear system\n\n$$\nA x = b\n$$\n\nwhere \n\n$$\nA_{ij} = M_{ij} + \\Delta t \\varepsilon^2 K_{ij} = \\big((v_i, v_j) + \\Delta t \\varepsilon^2 (v_i', v_j')\\big) \n$$\n\nand \n\n$$\nb_i = M_{ij} \\big(\\eta_k^j - \\Delta t f'(\\eta_k^j)\\big)\n$$\n\nwhere we simplified the integration on the right hand side by computing the integral of the interpolation of $f'(\\eta)$.\n\n## Step 1\n\nWrite a finite element solver to solve one step of the problem above, given the solution at the previous time step, using the same techniques used in notebook number 9.\n\nIn particular:\n\n1. Write a function that takes in input a vector representing $\\eta$, and returns a vector containing $f'(\\eta)$. Call this function `F`.\n\n2. Write a function that takes in input a vector of support points of dimension `ndofs` and the degree `degree` of the polynomial basis, and returns a list of basis functions (piecewise polynomial objects of type `PPoly`) of dimension `ndofs`, representing the interpolatory spline basis of degree `degree`\n\n3. Write a function that, given a piecewise polynomial object of type `PPoly` and a number `n_gauss_quadrature_points`, computes the vector of global_quadrature_points and global_quadrature_weights, that contains replicas of a Gauss quadrature formula with `n_gauss_quadrature_points` on each of the intervals defined by `unique(PPoly.x)`\n\n4. Write a function that, given the basis and the quadrature points and weights, returns the two matrices $M$ and $K$ \n\n## Step 2\n\nSolve the Allen-Cahn equation on the interval $[0,1]$, from time $t=0$ to time $t=1$, given a time step `dt`, a number of degrees of freedom `ndofs`, and a polynomial degree `k`.\n\n1. Write a function that takes the initial value of $\\eta_0$ as a function, eps, dt, ndofs, and degree, and returns a matrix of dimension `(int(T/dt), ndofs)` containing all the coefficients $\\eta_k^i$ representing the solution, and the set of basis functions used to compute the solution\n\n2. Write a function that takes all the solutions `eta`, the basis functions, a stride number `s`, and a resolution `res`, and plots on a single plot the solutions $\\eta_0$, $\\eta_s$, $\\eta_{2s}$, computed on `res` equispaced points between zero and one\n\n## Step 3\n\nSolve the problem for all combinations of\n\n1. eps = [.01, .001]\n\n2. ndofs = [16, 32, 64, 128]\n\n3. degree = [1, 2, 3]\n\n4. dt = [.25, .125, .0625, .03125, .015625]\n\nwith $\\eta_0 = \\sin(2 \\pi x)+1$.\n\nPlot the final solution at $t=1$ in all cases. What do you observe? What happens when you increase ndofs and keep dt constant? \n\n## Step 4 (Optional)\n\nInstead of solving the problem explicitly, solve it implicitly, by using the backward Euler method also for the nonlinear term. This requires the solution of a nonlinear problem at every step. Use scipy and numpy methods to solve the nonlinear iteration.\n\n\n```python\n%pylab inline\nimport sympy as sym\nimport scipy\nfrom scipy.interpolate import *\nfrom scipy.integrate import *\nfrom scipy import optimize\n```\n\n    Populating the interactive namespace from numpy and matplotlib\n\n\n## Step 1\n\n1. The function `F` computes the first derivative of the double-well potential function, that is \n\n$$ f'(\\eta) = 2\\eta(\\eta-1)(2\\eta-1) $$ \n\n\n```python\n# Step 1.1\n\ndef F(eta):\n    return 2*eta*(eta-1)*(2*eta-1)\n```\n\n2. The function `compute_basis_functions` computes the piecewise polynomial basis functions, representing the interpolatory spline basis of degree `degree`. 
As arguments it will require `support_points`, that is a list of points, and the `degree` of the piecewise polynomials. \n\n\n```python\n# Step 1.2\n\ndef compute_basis_functions(support_points, degree):\n ''' \n Computes piecewise polynomial basis function.\n '''\n basis = []\n M = len(support_points)\n for i in range(M):\n c = support_points*0 # c = [0,0,....,0]\n c[i] = 1 # c = [0,0,..1,.0]\n bi = PPoly.from_spline(splrep(support_points,c,k=degree)) # construct a piecewise polynomial from spline\n basis.append(bi)\n \n return basis\n```\n\n3. The function `compute_global_quadrature`, given as input the basis functions and an integer `n_gauss_quadrature_points`, computes global quadrature points and global quadrature weights used for integration.\n\n\n```python\n# Step 1.3\n\ndef compute_global_quadrature(basis, n_gauss_quadrature_points):\n '''\n Create a Gauss quadrature formula with n_gauss_quadrature_points, extract the intervals from basis,\n create len(x)-1 shifted and scaled Gauss quadrature formulas that can be used to integrate on each interval. \n Return the result\n '''\n \n intervals = unique(basis[0].x) # Make sure every interval border is taken only once\n\n qp, w = numpy.polynomial.legendre.leggauss(n_gauss_quadrature_points+1) #computes sample points and weights \n # for Gauss-Legendre quadrature for [-1,1] \n qp = (qp+1)/2 # Rescale the points and weights to work from zero to one\n w /= 2\n \n h = diff(intervals)\n gloabl_quadrature = array([intervals[i]+h[i]*qp for i in range(len(h))]).reshape((-1,))\n global_weights = array([w*h[i] for i in range(len(h))]).reshape((-1,))\n return gloabl_quadrature, global_weights\n \n```\n\n3. Given the basis, the global quadrature points and weights the function `compute_system_matrices` computes the two matrices $M=(v_i,v_j)$ and $K=(v_i',v_j')$:\n\n\n```python\n# Step 1.4\n\ndef compute_system_matrices(basis, gloabl_quadrature, global_weights):\n '''\n Compute the matrices M_ij = (v_i, v_j) and K_ij = (v_i', v_j') and return them\n '''\n M = len(basis)\n dbasis = []\n for i in range(M):\n dbasis.append(basis[i].derivative(1))\n Bq = array([basis[i](gloabl_quadrature) for i in range(M)]).T\n dBq = array([dbasis[i](gloabl_quadrature) for i in range(M)]).T\n M = einsum('qi, q, qj', Bq, global_weights, Bq)\n K = einsum('qi, q, qj', dBq, global_weights, dBq)\n return M, K\n\n```\n\n## Step 2\n\nThe function `solve_allen_cahan` returns the solutions $\\eta_k^i$ at each time step $t_k$ and the set of basis functions used to compute the solution. 
\nAs initial function, I decide to use \n$\\eta_0 = \\sin(2\\pi x) +1$\n\n\n```python\n#initial function\n\ndef eta_0(x):\n return sin(2*pi*x)+1\n```\n\n\n```python\n# Step 2.1\n\ndef solve_allen_cahan(eta_0_function, eps, dt, ndofs, degree):\n '''\n Forward Euler solution.\n Produce as result a matrix eta, containing the solution at all points, and basis.\n '''\n q = linspace(0,1, ndofs) # array of points\n basis = compute_basis_functions(q, degree) # create basis from q\n Q, W = compute_global_quadrature(basis, degree+1) # compute quadrature for having matrixes M and K\n M, K = compute_system_matrices(basis, Q, W) # compute M,K matrixes\n A = M + (dt*eps**2)*K # compute A matrix\n N_step = int(1/dt) # compute number of steps to do\n time_interval= [i*dt for i in range(N_step+1)] # creates list of times\n eta = zeros((len(time_interval), ndofs)) # creates matrix of eta where I'll store each eta_k(t_h)\n eta_k = eta_0_function(q) # initial function \n eta[0,:] = eta_k\n for t in range(1,len(time_interval)): # solving linear system Ax=b at each step (x=eta_k)\n b = M.dot((eta_k -dt*F(eta_k)))\n eta_k = linalg.solve(A, b)\n eta[t,:] = eta_k\n \n return eta, basis # return eta matrix and basis functions\n \n```\n\nThe function `plot_solution` produces a plot of the solutions $\\eta_0$, $\\eta_s$, $\\eta_{2s}$ computed on `resolution` equispaced points between $0$ and $1$:\n\n\n```python\n# Step 2.2 \n\ndef plot_solution(eta, basis, stride, resolution):\n # plot eta[::stride], on x = linspace(0,1,resolution)\n x = linspace(0,1,resolution)\n B = zeros((resolution, len(basis)))\n for i in range(len(basis)):\n B[:,i] = basis[i](x)\n \n n_t = shape(eta)[0]\n t = ['t ='+str(round((i/(n_t-1)),2)) for i in range(n_t)]\n for eta,label_t in zip(eta[::-stride],t[::-stride]):\n plot(x, eta.dot(B.T),label=label_t)\n \n _= legend(fontsize='x-large')\n _= title('Allen\u2013Cahn equation solution $\\eta(x,t)$', fontsize=25)\n _= xlabel('$x$', fontsize=20)\n _= ylabel('$\\eta$', fontsize=20)\n```\n\n\n```python\nfigure(figsize=(12,6))\n\neta, basis = solve_allen_cahan(eta_0, eps=0.01, dt=0.1, ndofs=64, degree=1)\n\n_= plot_solution(eta, basis, stride=4, resolution=1025)\n```\n\n## Step 3\n\nThe functions `plot_increasing_ndofs`, `plot_increasing_dt`, `plot_increasing_degree`, `plot_increasing_eps` will plot the final solution $\\eta(x, t=1)$ as increasing, for each associate function, `ndofs`, `dt`, `degree` and `eps`. 
I decide to vary the parameters at the following values:\n\n```\neps = [01, .001]\n\nndofs = [16, 32, 64, 128]\n\ndegree = [1, 2, 3]\n\ndt = [.25, .125, .0625, .03125, .015625]\n```\n\n\n```python\n#parameters\n\nresolution = 1025\neps = [.01, .001]\nndofs = [16, 32, 64, 128]\ndegree = [1, 2, 3]\ndt = [.25, .125, .0625, .03125, .015625]\n```\n\n\n```python\n# function for increase ndofs and fixed dt,eps,degree\n\ndef plot_increasing_ndofs(eps, degree, dt):\n ndofs = [16, 32, 64, 128]\n fig = figure(figsize=(22,11))\n for i in range(len(ndofs)):\n eta, basis = solve_allen_cahan(eta_0, eps, dt, ndofs[i], degree)\n subplot(2,2,i+1)\n plot_solution(eta, basis, int(1/dt+1), 1025)\n title('ndofs = '+ str(ndofs[i]), fontsize=18)\n legend(fontsize='x-large')\n xlabel('x', fontsize=10)\n ylabel('$\\eta$', fontsize=10) \n \n print('eps = '+str(eps)+', deg = '+str(degree)+', dt = '+ str(dt))\n \n \ndef plot_increasing_dt(eps, degree, ndofs):\n dt = [.25, .125, .0625, .03125, .015625]\n fig = figure(figsize=(18,28))\n for i in range(len(dt)):\n eta, basis = solve_allen_cahan(eta_0, eps, dt[i], ndofs, degree)\n subplot(5,3,i+1)\n plot_solution(eta, basis, int(1/dt[i]+1), 1025)\n title('eps = '+str(eps)+', degree = '+str(degree)+', ndofs = '+ str(ndofs)\n +', dt = '+ str(dt[i]), fontsize=12)\n legend(fontsize='x-large')\n xlabel('x', fontsize=10)\n ylabel('$\\eta$', fontsize=10) \n \n \n \ndef plot_increasing_eps(dt, degree, ndofs):\n eps = [.1, .01, .001]\n fig = figure(figsize=(16,7))\n for i in range(len(eps)):\n eta, basis = solve_allen_cahan(eta_0, eps[i], dt, ndofs, degree)\n subplot(1,3,i+1)\n plot_solution(eta, basis, int(1/dt+1), 1025)\n title('eps = '+ str(eps[i]), fontsize=15)\n legend(fontsize='x-large')\n xlabel('x', fontsize=10)\n ylabel('$\\eta$', fontsize=10) \n \n \n print('dt = '+str(dt)+', degree = '+str(degree)+', ndofs = '+ str(ndofs))\n```\n\n\n```python\nplot_increasing_ndofs(eps[1], degree[0], dt[3])\n```\n\nAs first observation, I note that the solution becomes smoother when increasing the number of degrees of freedom (`ndofs`) and keeping `dt`, `eps` and `degree` constant. This behaviour can be explained by the fact that as we are increasing the `support_points` in `compute_basis_function` our polynomial aproximation becomes more close to the solution $\\eta(x,t=1)$\n\n\n```python\nfor deg in degree:\n plot_increasing_dt(eps[1], deg, ndofs[2])\n```\n\nFrom these plots it is possible to notice that the solution tends to $0$ for $x<0.5$ and to $1$ for $x>0.5$. However, for large `dt` (`dt=0.25` and `dt=0.125`) the solution is unstable since the forward Euler method is conditionally stable.\n\n## Step 4\n\nBy applying Backward Euler method we consider the following equation \n\n $$ \\frac{\\partial \\eta}{\\partial t} \\simeq \\frac{(\\eta_{k+1} - \\eta_k)}{\\Delta t} = \\varepsilon^2 \\eta_{k+1} '' - f'(\\eta_{k+1}). $$\n \n Then, using the same discretiazation procedure done in **Step 2** and with our basis representation $\\eta=\\eta^{j}v_j$ we end up with the following non linear equation\n \n $$ \\big((v_i, v_j) + \\Delta t \\varepsilon^2 (v_i', v_j')\\big) \\eta^j_{k+1} + \\Delta t (v_i, f'(\\eta^j_{k+1})) - (v_i, v_j) \\eta^j_{k} = 0. 
$$\n \n Defining the matrix $A_{ij} = M_{ij} + \\Delta t \\varepsilon^2 K_{ij}$, where $ M_{ij} = (v_i, v_j) $ and $ K_{ij} = (v_i', v_j') $, the equation can be written in this compact form \n \n $$ \\boxed {F_i(\\eta^j_{k+1}) := A_{ij}\\eta^j_{k+1} - M_{ij}\\eta^j_k + \\Delta t M_{ij} f'(\\eta^j_{k+1}) = 0} $$ \n \n Thus, in order to implement a numerical method I have to find the value of $\\eta^j_{k+1}$ that satisfies $F_i(\\eta^j_{k+1}) =0$. This is equivalent to find for every time step the root of the given function.\n \n In my implementation `solve_solve_allen_cahan_non_linear` I decide to use methods provided by `scipy`. After using several methods I find out that the method [`optimize.root`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.root.html) results the most efficient one. This function requires as arguments the function $F_i$ and its realative jacobian matrix $\\partial_j F_i$. As did in `solve_solve_allen_cahan` the outputs are the solution, that is stored in a matrix of dimension (`time_steps`, `ndofs`), and the `basis` functions.\n\n\n```python\n# Step 4\n\ndef F_prime(eta): \n return 12*eta*(eta-1) + 2 #second derivative of f. I need for Jacobian matrix for solving non linear eq.\n\ndef solve_allen_cahan_non_linear(eta_0_function, eps, dt, ndofs, degree):\n '''\n Solution with backward Euler. \n This requires the solution of a Nonlinear problem at every step. \n Used scipy and numpy methods to solve the non linear iteration.\n '''\n q = linspace(0,1, ndofs) # array of points\n basis = compute_basis_functions(q, degree) # create basis from q\n Q, W = compute_global_quadrature(basis, degree+1) # compute quadrature for having matrixes M and K\n M, K = compute_system_matrices(basis, Q, W) # compute M,K matrixes\n A = M + (dt*eps**2)*K # compute A matrix\n N_step = int(1/dt) # compute number of steps to do \n time_interval= [i*dt for i in range(N_step+1)] # create list of time steps\n eta = zeros((len(time_interval), ndofs)) # create eta_matrix\n eta_k = eta_0_function(q) # initial_solution\n eta[0,:] = eta_k\n \n for t in range(1,len(time_interval)):\n \n def f(z): # F_i\n return A.dot(z)+dt*M.dot(F(z))-M.dot(eta_k)\n \n def Jacobian(z): # \u00d0_j F_i\n S = zeros((len(z), len(z)))\n for i in range(len(z)):\n for j in range(len(z)):\n S[i,j] = dt*M[i,j]*F_prime(z[j])\n return A + S\n \n eta_k = (optimize.root(f, eta_k,jac=Jacobian, method='hybr')).x # non-linear method solutor call\n eta[t,:] = eta_k # store eta solution for time-step t\n \n return eta, basis \n```\n\n\n```python\nfigure(figsize=(12,6))\n\neta, basis = solve_allen_cahan_non_linear(eta_0, eps=0.01, dt=.03125, ndofs=32, degree=1)\n\n_= plot_solution(eta, basis, 12, 1025)\n```\n\n\n```python\nfigure(figsize=(16,5))\n\nstride = 2\nresolution = 1025\n\nplt.subplot(1, 2, 1)\neta, basis = solve_allen_cahan(eta_0, eps=0.01, dt=0.25, ndofs=32, degree=1)\n_= plot_solution(eta, basis, stride, resolution)\n_= title(\"Forward Euler\", fontsize=17)\n\nplt.subplot(1, 2, 2)\neta, basis = solve_allen_cahan_non_linear(eta_0, eps=0.01, dt=0.25, ndofs=32, degree=1)\n_ = plot_solution(eta, basis, stride, resolution)\n_ = title(\"Backward Euler\", fontsize=17)\n```\n\nFor large values of `dt`(i.e. `0.25`) the linear solution is unstable while the nonlinear solution converges. 
This result is expected, since we know that backward Euler is stable for any time-step value (see [Ordinary Differential Equations](https://people.sissa.it/~grozza/wp-content/uploads/2020/01/Slides_ODE_EN.pdf) )\n", "meta": {"hexsha": "cd22ea96daa78b206751e3cfa33aada6acbeae61", "size": 524714, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "final_project/final_project_2019-2020.ipynb", "max_stars_repo_name": "RobertoCorti/P1.4_seed", "max_stars_repo_head_hexsha": "54944f584e896233cfcf41973310714e02ce1669", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "final_project/final_project_2019-2020.ipynb", "max_issues_repo_name": "RobertoCorti/P1.4_seed", "max_issues_repo_head_hexsha": "54944f584e896233cfcf41973310714e02ce1669", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "final_project/final_project_2019-2020.ipynb", "max_forks_repo_name": "RobertoCorti/P1.4_seed", "max_forks_repo_head_hexsha": "54944f584e896233cfcf41973310714e02ce1669", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 749.5914285714, "max_line_length": 84904, "alphanum_fraction": 0.9464089008, "converted": true, "num_tokens": 5513, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9353465098415279, "lm_q2_score": 0.9019206692796967, "lm_q1q2_score": 0.8436083501646993}} {"text": "```python\n# symbols and expressions\nimport sympy\n```\n\n\n```python\nx*2\n```\n\n\n```python\n# we have to say that x is a symbol\nx = sympy.var('x')\n```\n\n\n```python\nx*x\n```\n\n\n\n\n x**2\n\n\n\n\n```python\nfrom sympy import *\n```\n\n\n```python\nx = symbols('x')\n```\n\n\n```python\nf = x*x*x\n```\n\n\n```python\nprint(f)\n```\n\n x**3\n\n\n\n```python\n# will cover in detail later\ndiff(f,x)\n```\n\n\n\n\n 3*x**2\n\n\n\n\n```python\nx, y, z, t = symbols('x y z t')\n```\n\n\n```python\nexpr = x*x+y*z+t*x*z\n```\n\n\n```python\ndiff(expr, z)\n```\n\n\n\n\n t*x + y\n\n\n\n\n```python\n# caveat - possible to confuse and go mad if not careful\nx = symbols('y')\n```\n\n\n```python\nx*x\n```\n\n\n\n\n y**2\n\n\n\n\n```python\nx = symbols('areyoucrazy')\n```\n\n\n```python\nx*x\n```\n\n\n\n\n areyoucrazy**2\n\n\n\n\n```python\n# == signs compares the expressions exactly, not symbolically.\n# For comparisons use either equals or subtraction\nx = symbols('x')\n(x+2)**2 == x**2 + 4*x + 4\n```\n\n\n\n\n False\n\n\n\n\n```python\na = (x+2)**2\nb = x**2 + 4*x + 4\n```\n\n\n```python\nc = a-b\n```\n\n\n```python\nprint(c)\n```\n\n -x**2 - 4*x + (x + 2)**2 - 4\n\n\n\n```python\nsimplify(c)\n```\n\n\n\n\n 0\n\n\n\n\n```python\na.equals(b)\n```\n\n\n\n\n True\n\n\n\n\n```python\n# numerals \nx + 1/10\n```\n\n\n\n\n x + 0.1\n\n\n\n\n```python\n1/10\n```\n\n\n\n\n 0.1\n\n\n\n\n```python\n# constructing rational objects explicitly\nRational(1,10)\n```\n\n\n\n\n 1/10\n\n\n\n\n```python\nx + Rational(1,20)\n```\n\n\n\n\n x + 1/20\n\n\n\n\n```python\nexpr = x**y\n```\n\n\n```python\nexpr_x2 = expr.subs(x,2)\n```\n\n\n```python\nprint(expr_x2)\n```\n\n 2**y\n\n\n\n```python\nexpr_yx = expr.subs(y, x)\n```\n\n\n```python\nprint(expr_yx)\n```\n\n x**x\n\n\n\n```python\nexpr_x2_y3 = 
expr.subs([(x,2),(y,3)])\n```\n\n\n```python\nprint(expr_x2_y3)\n```\n\n 8\n\n\n\n```python\n# evaluating expressions\n\n```\n\n\n```python\nexpr = sqrt(8)\n```\n\n\n```python\nprint(expr)\n```\n\n 2*sqrt(2)\n\n\n\n```python\nexpr.evalf()\n```\n\n\n\n\n 2.82842712474619\n\n\n\n\n```python\npi\n```\n\n\n\n\n pi\n\n\n\n\n```python\nprint(pi)\n```\n\n pi\n\n\n\n```python\npi.evalf(10)\n```\n\n\n\n\n 3.141592654\n\n\n\n\n```python\npi.evalf(100)\n```\n\n\n\n\n 3.141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117068\n\n\n\n\n```python\n# simplifying expression\n# by default the expressions entered by the user are not simplified because\n# the user might want the expressions displayed in different formats.\n# For example, some times it is useful to have the polynomials in terms of factors.\n\n```\n\n\n```python\n(x+2)**2 - 4*x\n```\n\n\n\n\n -4*x + (x + 2)**2\n\n\n\n\n```python\n# we can see the get the polynomial form by using expand\nexpand((x+2)**2)\n```\n\n\n```python\nsimplify((x+2)**2 - 4*x)\n```\n\n\n```python\n# pretty printing for powers, exponentials, integrals, derivatives etc.\n# IPython's QTConsole uses LATEX if installed. Otherwise, uses matplotlib engine.\n```\n\n\n```python\n# use init_printing() for enabling pretty printing.\n# This will load the default printing available\n# unicode type printing is used on my laptop\nfrom sympy import init_printing\ninit_printing()\n```\n\n\n```python\n(x+2)**2 - 4*x\n```\n\n\n```python\nIntegral(sqrt(1/x), x)\n```\n\n\n```python\n# symbols and functions\nx, y, z = symbols('x y z', integer=True)\n```\n\n\n```python\nf, g = symbols('f g', cls=Function)\n```\n\n\n```python\ntype(x)\n```\n\n\n\n\n sympy.core.symbol.Symbol\n\n\n\n\n```python\ntype(f)\n```\n\n\n\n\n sympy.core.function.UndefinedFunction\n\n\n\n\n```python\nf = 2*x*y**2 + x**t\n```\n\n\n```python\ntype(f)\n```\n\n\n\n\n sympy.core.add.Add\n\n\n\n\n```python\ng = 2*x*y*z**t\n```\n\n\n```python\ntype(g)\n```\n\n\n\n\n sympy.core.mul.Mul\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "03e8fd33f9e6cf6867a2506ace36273dca1cef1b", "size": 19249, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "2-basics-of-sympy.ipynb", "max_stars_repo_name": "chennachaos/SA2CTechChatSymPy", "max_stars_repo_head_hexsha": "9f1dbb48655ff5f8bdd6b4ced48b58aed0ba5bf4", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "2-basics-of-sympy.ipynb", "max_issues_repo_name": "chennachaos/SA2CTechChatSymPy", "max_issues_repo_head_hexsha": "9f1dbb48655ff5f8bdd6b4ced48b58aed0ba5bf4", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "2-basics-of-sympy.ipynb", "max_forks_repo_name": "chennachaos/SA2CTechChatSymPy", "max_forks_repo_head_hexsha": "9f1dbb48655ff5f8bdd6b4ced48b58aed0ba5bf4", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 20.6312968917, "max_line_length": 1148, "alphanum_fraction": 0.5306249675, "converted": true, "num_tokens": 1116, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9184802484881361, "lm_q2_score": 0.9184802434674242, "lm_q1q2_score": 0.8436059622514035}} {"text": "# Public-Key Encryption (Optional)\n\n**CS1302 Introduction to Computer Programming**\n___\n\nThis notebook will introduce the idea of public-key (asymmetric key) cryptography through the [RSA algorithm](https://en.wikipedia.org/wiki/RSA_(cryptosystem)).\n\n## Motivation\n\n**What is public key encryption?**\n\nSuppose there is one receiver trying to receive private messages from multiple senders.\n\n---\n\n**Definition** (Asymmetric key cipher)\n\nEvery sender uses the same public key $k_e$ to encrypt a plaintext $x$ to a ciphertext $y$ as\n\n$$\ny = E(x, k_e).\n$$ (encrypt)\n\nThe receiver with the private key $k_d$ decrypts the ciphertext back to the plaintext as\n\n$$\nx = D(y, k_d).\n$$ (decrypt)\n\n$k_e, k_d, E, D$ are chosen to ensure\n\n- Decryptability: The receiver can recover the plaintext.\n- Secrecy: Anyone with only the public key but not the private key cannot recover the plaintext.\n\n---\n\n**Exercise** (Optional) What is the benefit of an asymmetric key cipher over a symmetric key cipher?\n\n- Easy to share keys: The encryption key can be shared with all senders publicly.\n- Easy to store keys: Only the receiver needs to know the private key.\n\n**Is public key encryption possible?**\n\nUnfortunately, public key encryption is an invertible function given a public key:\n\n---\n\n**Proposition**\n\nThe plaintext can be recovered from the ciphertext using the public key, even without the private key. \n \n--- \n\n\n**Exercise** (Optional) Prove the above proposition.\n\nYOUR ANSWER HERE\n\n**How to make public key encryption secure?**\n\nWe can make it computationally infeasible to invert $E(\\cdot, k_e)$ unless the private key $k_d$ is available. Such a function is called a [trapdoor function](https://en.wikipedia.org/wiki/Trapdoor_function#:~:text=A%20trapdoor%20function%20is%20a,are%20widely%20used%20in%20cryptography.). An example of a trapdoor function is:\n\n---\n\n**Proposition** (Integer factorization) \n\nComputing the product of two prime numbers $p$ and $q$, i.e.,\n\n$$\n(p,q)\\mapsto n,\n$$\n\nis a trapdoor function because integer factorization (computing $p$ and $q$ from $n$) is [co-NP](https://en.wikipedia.org/wiki/Integer_factorization) (difficult).\n\n---\n\nAs the size of $p$ and $q$ increases, the time required to factor $n$ increases dramatically as illustrated [here](https://www.khanacademy.org/computing/computer-science/cryptography/modern-crypt/pi/time-complexity-exploration).\n\n## RSA Algorithm\n\n**How to encrypt/decrypt?**\n\nThe encryption and decryption use modulo exponentiation instead of addition:\n\n$$\n\\begin{align}\nE(x, k_e) &:= x^e \\bmod n && \\text{where }k_e:=(e,n)\\\\\nD(c, k_d) &:= c^d \\bmod n && \\text{where }k_d:=(d,n),\n\\end{align}\n$$\n \nand $e$, $d$, and $n$ are positive integers.\n\nComputing exponentiation can be fast using [repeated squaring](https://en.wikipedia.org/wiki/Exponentiation_by_squaring). 
The built-in function `pow` already has an efficient implementation:\n\n\n```python\n%%timeit\nx, e, n = 3, 2 ** 1000000, 4\npow(x, e, n)\n```\n\n**Exercise** (Optional) Implement you own `modulo_power` using a recusion.\n\n\n```python\n%%timeit\ndef modulo_power(x, e, n):\n # YOUR CODE HERE\n raise NotImplementedError()\n\n\nx, e, n = 3, 2 ** 1000000, 4\npow(x, e, n)\n```\n\n**How to ensure decryptability?**\n\nFor $x = D(E(x, k_e), k_d) = x^{ed} \\bmod n$, we need $0\\leq x < n$ and\n\n$$\nx^{ed} \\equiv x \\mod n.\n$$ (decryptability)\n\n**Exercise** (Optional) Derive the above condition using $(a^c \\bmod b) = (a\\bmod b)^c \\bmod b$.\n\n$$\n\\begin{align}\nx &= D(E(x, k_e), k_d) \\\\\n&= (x^{e} \\bmod n)^{d} \\bmod n \\\\\n&= x^{ed} \\bmod n.\n\\end{align}\n$$\n\n\nRSA makes use of the following result to choose $(e, d, n)$:\n\n---\n\n**Theorem** (Fermat's little Theorem)\n\nIf $p$ is prime, then\n\n$$\nx^{p-1}\\equiv 1 \\mod p\n$$ (fermat)\n\nfor any integer $x$.\n\n---\n\nThere are elegant and elementary [combinatorial proofs](https://en.wikipedia.org/wiki/Proofs_of_Fermat%27s_little_theorem#Combinatorial_proofs).\n\nSince {eq}`fermat` implies $x^p = x \\bmod p$, can we choose \n- $n=p$ and \n- $ed=p$\n\nto satisfies {eq}`decryptability`?\n\nNo because the private key can then be easily computed from the public key: $d = n/e$.\n\nAlternatively, by raising {eq}`fermat` to the power of any integer $m$,\n\n$$\nx^{m(p-1)} \\equiv 1 \\mod p.\n$$ (fermat-m)\n\nCan we have $n=p$ and $ed \\equiv 1 \\bmod p-1$?\n\nNo because $d$ is the modular multiplicative inverse of $e$, which is [easy to compute](https://en.wikipedia.org/wiki/Modular_multiplicative_inverse#Computation), e.g., using `pow` with an exponent of `-1`. In particular, for prime modulus here, the inverse is $d=e^{p-2}\\bmod p$:\n\n\n```python\ne, n = 3, 7\nd = pow(e, -1, n)\nd, e * d % n == 1 and d == e ** (n - 2) % n\n```\n\n**How to make it difficult to compute $d$?**\n\nRSA makes use of the hardness of factoring a product of large primes to create the desired trapdoor function.\n\nIn particular, with $m(p-1)$ in {eq}`fermat-m` being the least common multiple $\\operatorname{lcm}(p-1,q-1)$ for another prime number $q$, we have\n\n$$\n\\begin{align}\nx^{\\operatorname{lcm}(p-1, q-1)} &\\equiv 1 \\mod p && \\text{and}\\\\\nx^{\\operatorname{lcm}(p-1, q-1)} &\\equiv 1 \\mod q && \\text{by symmetry.}\n\\end{align}\n$$\n\nThis implies $x^{\\operatorname{lcm}(p-1, q-1)} - 1$ is divisible by both $p$ and $q$, and so\n\n$$\nx^{\\operatorname{lcm}(p-1, q-1)} \\equiv 1 \\mod p q.\n$$\n\nRaising both sides to the power of any positive integer $m$ give:\n\n---\n\n**Proposition** (RSA) \n\nIf $p$ and $q$ are prime, then\n\n$$\nx^{\\overbrace{m \\operatorname{lcm}(p-1, q-1)}^{ed - 1}} \\equiv 1 \\mod \\overbrace{p q}^n\n$$ (rsa)\n\nfor any integer $x$. This implies {eq}`decryptability` with by choosing $n = pq$ and\n\n$$\ned \\equiv 1 \\mod \\operatorname{lcm}(p-1, q-1).\n$$ (ed)\n\n---\n\nAlthough $d$ is still the modulo multiplicative inverse of $e$, it is with respect to $\\operatorname{lcm}(p-1, q-1)$, which is not easy to compute without knowing the factors of $n$, namely $p$ and $q$. 
It can be shown that computing $d$ is [as hard as](https://crypto.stackexchange.com/questions/16036/is-knowing-the-private-key-of-rsa-equivalent-to-the-factorization-of-n) computing $\\operatorname{lcm}(p-1, q-1)$ or factoring $n$.\n\n**How to generate the public key and private key?**\n\nBy {eq}`ed`, we can compute $d$ as the modulo multiplicative inverse of $e$. How to choose $e$ then?\n\nWe can choose any $e \\in \\{1, \\dots, \\operatorname{lcm}(p-1, q-1)\\}$ such that $e$ does not divide $\\operatorname{lcm}(p-1, q-1)$.\n\n**Exercise** (Optional) For $e$ to have the modulo multiplicative inverse, it should not divide $\\operatorname{lcm}(p-1, q-1)$. Why? \n\nYOUR ANSWER HERE\n\nThe following function randomly generate the `e, d, n` for some given prime numbers `p` and `q`:\n\n\n```python\nfrom math import gcd\nfrom random import randint\n\n\ndef get_rsa_keys(p, q):\n n = p * q\n lcm = (p - 1) * (q - 1) // gcd(p - 1, q - 1)\n while True:\n e = randint(1, lcm - 1)\n if gcd(e, lcm) == 1:\n break\n d = pow(e, -1, lcm)\n return e, d, n, lcm\n```\n\nNote that\n\n$$\n\\operatorname{lcm}(p-1, q-1) = \\frac{(p-1)(q-1)}{\\operatorname{gcd}(p-1, q-1)}.\n$$\n\nAs an example, if we choose two prime numbers $p=17094589121$ and $q=1062873761$:\n\n\n```python\ne, d, n, lcm = get_rsa_keys(17094589121, 1062873761)\ne, d, n, e * d % lcm == 1\n```\n\nThe integer $1302$ can be encrypted as follows:\n\n\n```python\nx = 1302\nc = pow(x, e, n)\nprint(f'The plain text \"{x}\" has been encrypted into \"{c}\".')\n```\n\nWith the private key $k_d$, the ciphertext can be decrypted easily as\n\n\n```python\noutput = pow(c, d, n)\nprint(f'The cipher \"{c}\" has been decrypted into \"{output}\".')\n```\n\n**Exercise** (optional) Using the `rsa` module, [generate a RSA key pair](https://stuvel.eu/python-rsa-doc/usage.html#generating-keys) with suitable length and then use it to encrypt and decrypt your own message. You can install the module as follows:\n\n\n```python\n!pip install rsa\n```\n", "meta": {"hexsha": "3da430049b94e775db5621074a9fade7603078f9", "size": 19547, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Lab7/RSA.ipynb", "max_stars_repo_name": "ccha23/CS1302", "max_stars_repo_head_hexsha": "b5d55a9844c3e6b80ec9029509b5d572b24b6be3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Lab7/RSA.ipynb", "max_issues_repo_name": "ccha23/CS1302", "max_issues_repo_head_hexsha": "b5d55a9844c3e6b80ec9029509b5d572b24b6be3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Lab7/RSA.ipynb", "max_forks_repo_name": "ccha23/CS1302", "max_forks_repo_head_hexsha": "b5d55a9844c3e6b80ec9029509b5d572b24b6be3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 25.7197368421, "max_line_length": 441, "alphanum_fraction": 0.547347419, "converted": true, "num_tokens": 2410, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9196425399873764, "lm_q2_score": 0.9173026505426832, "lm_q1q2_score": 0.843590539482226}} {"text": "# M\u00e9todos Iterativos para sistemas de ecuaciones lineales\n\n\n```python\nimport numpy as np\nfrom scipy.linalg import solve_triangular\nimport matplotlib.pyplot as plt\nfrom time import time\n```\n\n\n```python\ndef jacobi(A, b, n_iter=50, x_0=None):\n \"\"\"\n Solve Ax=b using Jacobi method\n \n Parameters\n -----------\n A : (n, n) array\n A matrix \n b : (n, ) array\n RHS vector\n n_iter : int\n Number of iterations\n x_0 : (n, ) array\n Initual guess\n \n Returns\n -------\n X : (n_iter + 1, n) array\n Matrix with approximation at each iteration\n \"\"\"\n n = A.shape[0] # Matrix size\n X = np.zeros((n_iter + 1, n)) # Matrix with solution at each iteration\n # Initial guess\n if x_0 is not None:\n X[0] = x_0\n D = np.diag(A) # Diagonal of A (only keep a vector with diagonal)\n # Inverse of D. Compute reciprocal of vector elements and then fill diagonal matrix\n D_inv = np.diag(1 / D) # This avoid inverse computation \"O(n) instead O(n^3)\"\n LU = (A - np.diag(D)) # A - D = L + U (here D is a matrix) - Rembember LU != LU of PA=LU or A=LU\n # Jacobi iteration\n for k in range(n_iter):\n X[k+1] = np.dot(D_inv, (b - np.dot(LU, X[k])))\n return X\n```\n\n\n```python\ndef gaussSeidel(A, b, n_iter=50, x_0=None):\n \"\"\"\n Solve Ax=b using Gauss-Seidel method\n \n Parameters\n -----------\n A : (n, n) array\n A matrix \n b : (n, ) array\n RHS vector\n n_iter : int\n Number of iterations\n x_0 : (n, ) array\n Initual guess\n \n Returns\n -------\n X : (n_iter + 1, n) array\n Matrix with approximation at each iteration\n \"\"\"\n n = A.shape[0] # Matrix size\n X = np.zeros((n_iter + 1, n)) # Matrix with solution at each iteration\n # Initial guess\n if x_0 is not None:\n X[0] = x_0\n LD = np.tril(A) # Get lower triangle with main diagonal (L + D)\n U = A - LD # Upper triangle \n # Gauss-Seidel iteration\n for k in range(n_iter):\n X[k+1] = solve_triangular(LD, b - np.dot(U, X[k]), lower=True)\n return X\n```\n\n\n```python\ndef SOR(A, b, w=1.05, n_iter=50, x_0=None):\n \"\"\"\n Solve Ax=b using SOR(w) method\n \n Parameters\n -----------\n A : (n, n) array\n A matrix \n b : (n, ) array\n RHS vector\n w : float \n Omega parameter\n n_iter : int\n Number of iterations\n x_0 : (n, ) array\n Initual guess\n \n Returns\n -------\n X : (n_iter + 1, n) array\n Matrix with approximation at each iteration\n \"\"\"\n n = A.shape[0] # Matrix size\n X = np.zeros((n_iter + 1, n)) # Matrix with solution at each iteration\n # Initial guess\n if x_0 is not None:\n X[0] = x_0\n L = np.tril(A, k=-1) # Get lower triangle \n U = np.triu(A, k=1) # Get Upper triangle \n D = A - U - L\n # SOR\n for k in range(n_iter):\n X[k+1] = solve_triangular(w * L + D, w * b + np.dot((1 - w) * D - w * U, X[k]), lower=True)\n return X\n```\n\n\n```python\ndef error(X, x):\n \"\"\"\n Compute error of approximation at each iteration\n \n Parameters\n ----------\n X : (m, n) array\n Matrix with approximation at each iteration\n x : (n, ) array\n Solution of system \n \n Returns\n -------\n X_err : (m, ) array\n Error vector\n \"\"\"\n X_err = np.linalg.norm(X - x, axis=1, ord=np.inf)\n return X_err\n```\n\n# Ejemplo Apunte\n\nResolver\n\n\\begin{equation}\n \\begin{split}\n u + 3v & = -1 \\\\\n 5u + 4v & = 6\n \\end{split}\n\\end{equation}\n\nPrimero, lo expresamos de forma matricial:\n\\begin{equation}\n \\begin{bmatrix}\n 5 & 4 \\\\\n 1 & 3\n \\end{bmatrix} \n \\begin{bmatrix} u \\\\ v \\end{bmatrix}\n 
=\n \\begin{bmatrix} 6 \\\\ -1 \\end{bmatrix}.\n\\end{equation}\n\nNotar que se intercambiaron filas para asegurar que $A$ sea **estrictamente diagonal dominante**. Comprobemos que se llega a la soluci\u00f3n (o cerca) en la iteraci\u00f3n que se indica.\n\n\n```python\nA_ap = np.array([[5, 4], [1, 3]])\nb_ap = np.array([6, -1])\n```\n\n## Soluci\u00f3n Anal\u00edtica\n\n\n```python\nx_ap = np.linalg.solve(A_ap, b_ap)\nx_ap\n```\n\n\n\n\n array([ 2., -1.])\n\n\n\n## Jacobi\n\n\n```python\nx_ap_j = jacobi(A_ap, b_ap, 50)\nx_ap_j[-1]\n```\n\n\n\n\n array([ 2., -1.])\n\n\n\n## Gauss-Seidel\n\n\n```python\nx_ap_g = gaussSeidel(A_ap, b_ap, 17)\nx_ap_g[-1]\n```\n\n\n\n\n array([ 2., -1.])\n\n\n\n## SOR($\\omega$)\n\n\n```python\nx_ap_s = SOR(A_ap, b_ap, 1.09, 9)\nx_ap_s[-1]\n```\n\n\n\n\n array([ 2., -1.])\n\n\n\nEfectivamente *SOR* obtiene la soluci\u00f3n en menos iteraciones.\n\n# Otro ejemplo\n\n\n```python\nA_2 = np.array([\n [3, -1, 0, 0, 0, 0.5],\n [-1, 3, -1, 0, 0.5, 0],\n [0, -1, 3, -1, 0, 0],\n [0, 0, -1, 3, -1, 0],\n [0, 0.5, 0, -1, 3, -1],\n [0.5, 0, 0, 0, -1, 3]\n])\nb_2 = np.array([2.5, 1.5, 1., 1., 1.5, 2.5])\n```\n\n## Soluci\u00f3n de referencia\n\n\n```python\nx_2 = np.linalg.solve(A_2, b_2)\nx_2\n```\n\n\n\n\n array([1., 1., 1., 1., 1., 1.])\n\n\n\n## Jacobi\n\n\n```python\nX_2_jac = jacobi(A_2, b_2)\nX_2_jac[-1]\n```\n\n\n\n\n array([1., 1., 1., 1., 1., 1.])\n\n\n\n## Gauss-Seidel\n\n\n```python\nX_2_gss = gaussSeidel(A_2, b_2)\nX_2_gss[-1]\n```\n\n\n\n\n array([1., 1., 1., 1., 1., 1.])\n\n\n\n## SOR($\\omega$)\n\n\n```python\ndef error(X, x):\n return np.linalg.norm(X - x, axis=1, ord=np.inf)\n```\n\nPodemos buscar el par\u00e1metro $\\omega$, analizando el error para un par de iteraciones\n\n\n```python\nn_w = 20\nsor_err_2 = np.zeros(n_w)\nw_s = np.linspace(1, 1.3, n_w)\nfor i in range(n_w):\n X_sor_tmp = SOR(A_2, b_2, w_s[i], 5)\n err_tmp_2 = error(X_sor_tmp, x_2)\n sor_err_2[i] = err_tmp_2[-1]\n print(\"i: %d \\t w: %f \\t error: %f\" % (i, w_s[i], sor_err_2[i]))\n```\n\n i: 0 \t w: 1.000000 \t error: 0.013793\n i: 1 \t w: 1.015789 \t error: 0.012247\n i: 2 \t w: 1.031579 \t error: 0.010732\n i: 3 \t w: 1.047368 \t error: 0.009437\n i: 4 \t w: 1.063158 \t error: 0.008973\n i: 5 \t w: 1.078947 \t error: 0.008508\n i: 6 \t w: 1.094737 \t error: 0.008036\n i: 7 \t w: 1.110526 \t error: 0.007551\n i: 8 \t w: 1.126316 \t error: 0.007048\n i: 9 \t w: 1.142105 \t error: 0.006524\n i: 10 \t w: 1.157895 \t error: 0.005972\n i: 11 \t w: 1.173684 \t error: 0.005390\n i: 12 \t w: 1.189474 \t error: 0.004774\n i: 13 \t w: 1.205263 \t error: 0.006131\n i: 14 \t w: 1.221053 \t error: 0.007827\n i: 15 \t w: 1.236842 \t error: 0.009566\n i: 16 \t w: 1.252632 \t error: 0.011351\n i: 17 \t w: 1.268421 \t error: 0.013182\n i: 18 \t w: 1.284211 \t error: 0.015062\n i: 19 \t w: 1.300000 \t error: 0.016993\n\n\n\n```python\nplt.plot(w_s, sor_err_2, 'bd')\nplt.yscale('log')\nplt.grid(True)\nplt.show()\n```\n\nMirando los valores del gr\u00e1fico obtenemos que $\\omega=1.189474$\n\n\n```python\nmin_pos_2 = np.argmin(sor_err_2)\nX_2_sor = SOR(A_2, b_2, w_s[min_pos_2])\nX_2_sor[-1]\n```\n\n\n\n\n array([1., 1., 1., 1., 1., 1.])\n\n\n\n## Convergencia de m\u00e9todos\n\n\n```python\n# Error\ne_jac = error(X_2_jac, x_2)\ne_gss = error(X_2_gss, x_2)\ne_sor = error(X_2_sor, x_2)\n```\n\n\n```python\nn_jac = np.arange(e_jac.shape[-1])\nn_gss = np.arange(e_gss.shape[-1])\nn_sor = np.arange(e_sor.shape[-1])\nplt.plot(n_jac, e_jac, 'ro', label=\"Jacobi\")\nplt.plot(n_gss, e_gss, 'bo', 
label=\"Gauss-Seidel\")\nplt.plot(n_sor, e_sor, 'go', label=\"SOR\")\nplt.yscale('log')\nplt.grid(True)\nplt.legend()\nplt.show()\n```\n\n# Ejemplo aleatorio\n\n\n```python\ndef ddMatrix(n):\n \"\"\"\n Randomly generates an n x n strictly diagonally dominant matrix A.\n \n Parameters\n ----------\n n : int \n Matrix size\n \n Returns\n -------\n A : (n, n) array\n Strictly diagonally dominant matrix\n \"\"\"\n A = np.random.random((n,n))\n deltas = 0.5 * np.random.random(n)\n row_sum = A.sum(axis=1) - np.diag(A)\n np.fill_diagonal(A, row_sum+deltas)\n return A\n```\n\n\n```python\nn = 50\nA_r = ddMatrix(n)\nb_r = np.random.rand(n)\n```\n\n## Soluci\u00f3n Anal\u00edtica\n\n\n```python\nx_r = np.linalg.solve(A_r, b_r)\n```\n\n## Jacobi\n\n\n```python\nX_r_jac = jacobi(A_r, b_r, n_iter=100)\n```\n\n## Gauss-Seidel\n\n\n```python\nX_r_gss = gaussSeidel(A_r, b_r, n_iter=50)\n```\n\n## SOR($\\omega$)\n\nPara buscar $\\omega$ de *SOR*, se realiza un par de iteraciones y se analiza el error. Vamos a elegir el que tenga menor error...\n\n\n```python\nn_w = 20\nw_s = np.linspace(1, 1.3, n_w)\nsor_err_r = np.zeros(n_w)\nfor i in range(n_w):\n X_sor_tmp = SOR(A_r, b_r, w_s[i], 5)\n sor_err_r[i] = np.linalg.norm(X_sor_tmp[-1] - x_r, np.inf)\n print(\"i: %d \\t w: %f \\t error: %f\" % (i, w_s[i], sor_err_r[i]))\n```\n\n i: 0 \t w: 1.000000 \t error: 0.000014\n i: 1 \t w: 1.015789 \t error: 0.000016\n i: 2 \t w: 1.031579 \t error: 0.000019\n i: 3 \t w: 1.047368 \t error: 0.000027\n i: 4 \t w: 1.063158 \t error: 0.000038\n i: 5 \t w: 1.078947 \t error: 0.000049\n i: 6 \t w: 1.094737 \t error: 0.000063\n i: 7 \t w: 1.110526 \t error: 0.000078\n i: 8 \t w: 1.126316 \t error: 0.000101\n i: 9 \t w: 1.142105 \t error: 0.000126\n i: 10 \t w: 1.157895 \t error: 0.000155\n i: 11 \t w: 1.173684 \t error: 0.000188\n i: 12 \t w: 1.189474 \t error: 0.000226\n i: 13 \t w: 1.205263 \t error: 0.000267\n i: 14 \t w: 1.221053 \t error: 0.000314\n i: 15 \t w: 1.236842 \t error: 0.000365\n i: 16 \t w: 1.252632 \t error: 0.000423\n i: 17 \t w: 1.268421 \t error: 0.000486\n i: 18 \t w: 1.284211 \t error: 0.000555\n i: 19 \t w: 1.300000 \t error: 0.000631\n\n\n\n```python\nmin_pos_r = np.argmin(sor_err_r)\nX_r_sor = SOR(A_r, b_r, w_s[min_pos_r], n_iter=50)\n```\n\n## Comparaci\u00f3n de Error\n\n\n```python\ne_r_jac = error(X_r_jac, x_r)\ne_r_gss = error(X_r_gss, x_r)\ne_r_sor = error(X_r_sor, x_r)\n```\n\n\n```python\nn_r_jac = np.arange(e_r_jac.shape[-1])\nn_r_gss = np.arange(e_r_gss.shape[-1])\nn_r_sor = np.arange(e_r_sor.shape[-1])\nplt.plot(n_r_jac, e_r_jac, 'ro', label=\"Jacobi\")\nplt.plot(n_r_gss, e_r_gss, 'bo', label=\"Gauss-Seidel\")\nplt.plot(n_r_sor, e_r_sor, 'go', label=\"SOR\")\nplt.yscale('log')\nplt.grid(True)\nplt.legend()\nplt.show()\n```\n\nPara analizar la convergencia de los m\u00e9todos vamos a utilizar el *radio espectral* $\\rho(A)$.\n\n\n```python\ndef spectralRadius(A):\n \"\"\"\n Compute spectral radius of A\n \n Parameters\n ----------\n A : (n, n) array\n A Matrix\n \n Returns\n -------\n rho : float\n Spectral radius\n \n \"\"\"\n ev = np.linalg.eigvals(A) # Compute eigenvalues\n rho = np.max(np.abs(ev)) # Largest eigenvalue in magnitude\n return rho\n```\n\n\n```python\nL = np.tril(A_r, k=-1)\nU = np.triu(A_r, k=1)\nD = A_r - L - U\n```\n\n\n```python\nM_r_jac = np.dot(np.linalg.inv(D), L + U)\nM_r_gss = np.dot(np.linalg.inv(L + D), U)\n```\n\n\n```python\nsr_jac = spectralRadius(M_r_jac)\nsr_gss = spectralRadius(M_r_gss)\nprint(sr_jac, sr_gss)\n```\n\n 0.989112355268985 
0.21169440485989235\n\n\nEn este caso vemos que *Gauss-Seidel* converge mucho m\u00e1s r\u00e1pido que Jacobi.\n\n# Teorema de los c\u00edrculos de Gershgorin\n\n\n```python\ndef diskGershgorin(A):\n \"\"\"\n Compute Gershgorin disks.\n \n Parameters\n ----------\n A : (n, n) array\n Matrix\n \n Returns\n -------\n disks : (n, 2) array\n Gershgorin disks. \n First column is the center = |a_{ii}| and second column is radius = \\sum_{i\\neq j} |a_{ij}|.\n \"\"\"\n n = A.shape[0]\n disks = np.zeros((n, 2)) # First column is center and second is radius\n for i in range(n):\n c = A[i, i] # Center \n R = np.sum(np.abs(A[i])) - np.abs(c) # Sum of absolute values of rows without diagonal\n disks[i, 0] = c\n disks[i, 1] = R\n return disks\n```\n\n\n```python\ndef circles(disks):\n \"\"\"\n Return circles.\n \n Parameters\n ----------\n disks : (n, 2) array\n Gershgorin disks. \n \n Returns\n -------\n C : (n, 100, 2) array\n Circles to plot.\n \n \"\"\"\n n = disks.shape[0]\n N = 100\n theta = np.linspace(0, 2*np.pi, N)\n C = np.zeros((n, N, 2))\n for i in range(n):\n C[i, :, 0] = disks[i, 0] + disks[i, 1] * np.cos(theta)\n C[i, :, 1] = disks[i, 1] * np.sin(theta)\n return C\n```\n\n\n```python\ndef plotCircles(A):\n \"\"\"\n Plot Gershgorin disks and eigenvalues of A.\n \n Parameters\n ----------\n A : (n, n) array\n Matrix\n \n Returns\n -------\n None\n \"\"\"\n disks = diskGershgorin(A)\n circs = circles(disks)\n ev = np.linalg.eigvals(A)\n for i in range(disks.shape[0]):\n plt.fill(circs[i, :, 0], circs[i, :, 1], alpha=.5)\n plt.plot(ev[i], 0, 'bo')\n plt.grid(True)\n plt.axis('equal')\n plt.show()\n```\n\n\n```python\nA1 = np.array([\n [8, 1, 0],\n [1, 4, 0.1],\n [0, 0.1, 1]\n])\n```\n\n\n```python\nA2 = np.array([\n [10, -1, 0, 1],\n [0.2, 8, 0.2, 0.2],\n [1, 1, 2, 1],\n [-1, -1, -1, -11]\n])\n```\n\n\n```python\nplotCircles(A1)\n```\n\n\n```python\nplotCircles(A2)\n```\n\n## Comentario sobre convergencia\n\nPara un m\u00e9todo iterativo de la forma $\\mathbf{x}_{k+1}=M\\mathbf{x}_k + \\mathbf{\\hat{b}}$, podemos usar el *radio espectral* $\\rho(M)$ y as\u00ed estudiar la convergencia del m\u00e9todo. Esto implica resolver el problema\n\\begin{equation}\n M\\mathbf{v} = \\lambda \\mathbf{v},\n\\end{equation}\n\npara obtener los valores de $\\lambda$, con una complejidad aproximada de $\\sim O(n^3)$. 
Si utilizamos el *Teorema de los c\u00edrculos de Gershgorin*, podr\u00edamos obtener una *cota* de los valores propios con aproximadamente $\\sim O(n^2)$ operaciones.\n\n# Complejidad\n\nPara ver los tiempos de c\u00f3mputo y comparar el n\u00famero de iteraciones, utilizaremos la tercera forma de representar los m\u00e9todos.\n\n\n```python\ndef jacobi2(A, b, n_iter=50, tol=1e-8, x_0=None):\n \"\"\"\n Solve Ax=b using Jacobi method\n \n Parameters\n -----------\n A : (n, n) array\n A matrix \n b : (n, ) array\n RHS vector\n n_iter : int\n Number of iterations\n tol : float\n Tolerance\n x_0 : (n, ) array\n Initual guess\n \n Returns\n -------\n X : (n_iter + 1, n) array\n Matrix with approximation at each iteration\n \"\"\"\n n = A.shape[0] # Matrix size\n X = np.zeros((n_iter + 1, n)) # Matrix with solution at each iteration\n # Initial guess\n if x_0 is not None:\n X[0] = x_0\n D = np.diag(A) # Diagonal of A (only keep a vector with diagonal)\n D_inv = np.diag(1 / D) # Inverse of D\n r = b - np.dot(A, X[0]) # Residual vector\n # Jacobi iteration\n for k in range(n_iter):\n X[k+1] = X[k] + np.dot(D_inv, r)\n r = b - np.dot(A, X[k+1]) # Update residual\n if np.linalg.norm(r) < tol: # Stop criteria\n X = X[:k+2]\n break\n return X\n```\n\n\n```python\ndef gaussSeidel2(A, b, n_iter=50, tol=1e-8, x_0=None):\n \"\"\"\n Solve Ax=b using Gauss-Seidel method\n \n Parameters\n -----------\n A : (n, n) array\n A matrix \n b : (n, ) array\n RHS vector\n n_iter : int\n Number of iterations\n tol : float\n Tolerance\n x_0 : (n, ) array\n Initual guess\n \n Returns\n -------\n X : (n_iter + 1, n) array\n Matrix with approximation at each iteration\n \"\"\"\n n = A.shape[0] # Matrix size\n X = np.zeros((n_iter + 1, n)) # Matrix with solution at each iteration\n # Initial guess\n if x_0 is not None:\n X[0] = x_0\n LD = np.tril(A) # Get lower triangle (L + D)\n # Get inverse in O(n^2) instead of np.linalg.inv(LD) O(n^3)\n LD_inv = solve_triangular(LD, np.eye(n), lower=True) \n r = b - np.dot(A, X[0]) # Residual\n # Gauss-Seidel iteration\n for k in range(n_iter):\n X[k+1] = X[k] + np.dot(LD_inv, r)\n r = b - np.dot(A, X[k+1]) # Residual update\n if np.linalg.norm(r) < tol: # Stop criteria\n X = X[:k+2]\n break\n return X\n```\n\n\n```python\ndef SOR2(A, b, w=1.05, n_iter=50, tol=1e-8, x_0=None):\n \"\"\"\n Solve Ax=b using SOR(w) method\n \n Parameters\n -----------\n A : (n, n) array\n A matrix \n b : (n, ) array\n RHS vector\n w : w\n Omega parameter.\n n_iter : int\n Number of iterations\n tol : float\n Tolerance\n x_0 : (n, ) array\n Initual guess\n \n Returns\n -------\n X : (n_iter + 1, n) array\n Matrix with approximation at each iteration\n \"\"\"\n n = A.shape[0] # Matrix size\n X = np.zeros((n_iter + 1, n)) # Matrix with solution at each iteration\n # Initial guess\n if x_0 is not None:\n X[0] = x_0\n L = np.tril(A, k=-1) # Get lower triangle \n Dw = np.diag(np.diag(A) / w)\n # Get inverse in O(n^2) instead of np.linalg.inv(L+Dw) O(n^3)\n LDw_inv = solve_triangular(L+Dw, np.eye(n), lower=True) \n r = b - np.dot(A, X[0]) # Residual\n # SOR iteration\n for k in range(n_iter):\n X[k+1] = X[k] + np.dot(LDw_inv, r)\n r = b - np.dot(A, X[k+1]) # Residual update\n if np.linalg.norm(r) < tol: # Stop criteria\n X = X[:k+2]\n break\n return X\n```\n\nA continuaci\u00f3n se resuelven sistemas parra distintos valores de $n$, y calculamos los tiempos.\n\n\n```python\nNe = 5 # Number of experiments\nN = 2 ** np.arange(7, 10) # N = [2^7, 2^{10}]\nNn = N.shape[-1] \n# For times\ntimes_jac = 
np.zeros(Nn)\ntimes_gss = np.zeros(Nn)\ntimes_sor = np.zeros(Nn)\n```\n\n\n```python\nfor i in range(Nn):\n n = N[i]\n A = ddMatrix(n)\n b = np.random.random(n)\n # Time Jacobi\n start_time= time()\n for j in range(Ne):\n x = jacobi2(A, b)\n end_time = time()\n times_jac[i] = (end_time - start_time) / Ne\n # Time G-S\n start_time = time()\n for j in range(Ne):\n x = gaussSeidel2(A, b)\n end_time = time()\n times_gss[i] = (end_time - start_time) / Ne\n # Time SOR\n start_time = time()\n for j in range(Ne):\n x = SOR2(A, b)\n end_time = time()\n times_sor[i] = (end_time - start_time) / Ne\n```\n\n\n```python\nplt.figure(figsize=(12, 6))\nplt.plot(N, times_jac, 'rx', label=\"Jacobi\")\nplt.plot(N, times_gss, 'bd', label=\"Gauss-Seidel\")\nplt.plot(N, times_sor, 'go', label=\"SOR\")\n# Deben adaptar el coeficiente que acompa\u00f1a a N**k seg\u00fan los tiempos que obtengan en su computador\nplt.plot(N, 1e-7 * N ** 2, 'g--', label=r\"$O(n^2)$\") \nplt.plot(N, 1e-8 * N ** 3, 'r--', label=r\"$O(n^3)$\")\nplt.grid(True)\nplt.yscale('log')\nplt.xscale('log')\nplt.xlabel(r\"$n$\")\nplt.ylabel(\"Time [s]\")\nplt.legend()\nplt.show()\n```\n\nDel gr\u00e1fico podemos confirmar que la complejidad de estos m\u00e9todos es $\\sim I n^2$, donde $I$ es el n\u00famero de iteraciones. El valor de $I$ puede ser diferente en cada m\u00e9todo.\n\n# Referencias\n\n* Sauer, T. (2006). Numerical Analysis Pearson Addison Wesley.\n* https://github.com/tclaudioe/Scientific-Computing/tree/master/SC1/05_linear_systems_of_equations.ipynb\n\n\n```python\n\n```\n", "meta": {"hexsha": "9893c80b9aef573015158f39520ccb379425a647", "size": 122651, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "material/04_sistemas_ecuaciones/metodos_iterativos.ipynb", "max_stars_repo_name": "Felipitoo/CC", "max_stars_repo_head_hexsha": "2ce7bac8c02b5ef7089e752f2143e13a4b77afc2", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "material/04_sistemas_ecuaciones/metodos_iterativos.ipynb", "max_issues_repo_name": "Felipitoo/CC", "max_issues_repo_head_hexsha": "2ce7bac8c02b5ef7089e752f2143e13a4b77afc2", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "material/04_sistemas_ecuaciones/metodos_iterativos.ipynb", "max_forks_repo_name": "Felipitoo/CC", "max_forks_repo_head_hexsha": "2ce7bac8c02b5ef7089e752f2143e13a4b77afc2", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 89.2007272727, "max_line_length": 29528, "alphanum_fraction": 0.8211755306, "converted": true, "num_tokens": 6610, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.919642528975397, "lm_q2_score": 0.9173026516730628, "lm_q1q2_score": 0.8435905304204532}} {"text": "## Deriving the transfer function of virtual analog first order filters.\n\nHTML output built with: jupyter nbconvert --to html one_pole_z_domain_tf.ipynb\n\nSource:\nhttp://www.willpirkle.com/Downloads/AN-4VirtualAnalogFilters.pdf\n\nWe will derive the algorithm from the block diagram found on page 5, but we will follow the style of [Andrew Simper's SVF paper](https://cytomic.com/files/dsp/SvfLinearTrapOptimised2.pdf).\n\nSympy can't (very easily) be bent to display transfer functions in terms of $z^{-1}, z^{-2}, ...$ which is the convention. Plain $z$ will be used here instead - keep in mind it actually means $z^{-1}$.\n\n\n```python\nfrom sympy import *\ninit_printing()\n\nz = symbols(\"z\")\n```\n\nStart with the parameters.\n\n```\ng = Tan[\u03c0 * cutoff / samplerate];\na1 = g / (1.0 + g);\n```\n\nThe other coefficients defining the shape of the filter (`m0, m1`) will be ignored for now, as they are only used to \"mix\" the output.\n\n\n```python\ng = symbols(\"g\")\na1 = g / (1.0 + g)\n\na1\n```\n\nThen the computation.\n\nThe variable `v0` represents the input signal - we will consider it to represent the z-transform of the input over time. `v1` and `v2` represent two other nodes in the block diagram.\n\nThe state variable `ic1eq` will be defined as unknown first, and then we will solve it using its equations.\n\nThe relevant lines of the algorithm are:\n\n```\nv1 = a1 * (v0 - ic1eq);\nv2 = v1 + ic1eq;\n```\n\nNotice that `ic1eq` actually refers to the _previous_ value of these samples. This corresponds to multiplying by $z$ (contrary to convention!) in the z-domain.\n\n\n```python\nv0, ic1eq = symbols(\"v0 ic_1\")\n\nv1 = a1 * (v0 - ic1eq * z)\nv2 = ic1eq * z + v1\n\n(v1, v2)\n```\n\nThe \"new\" value for `ic1eq` is computed as follows:\n\n```\nic1eq = v2 + v1;\n```\n\ndepending on the current values of `v1, v2`, and the previous value of `ic1eq`.\n\nConsider this equation, and solve it:\n\n\n```python\nequation = [\n v2 + v1 - ic1eq, # = 0\n]\nsolution = solve(equation, (ic1eq))\n\nsolution\n```\n\nWe may now subsitute the solution into `v2` to obtain the transfer function\n\n$$\n\\begin{aligned}\nH_0(z) &= \\frac {v_0(z)} {v_0(z)} = 1 \\\\\nH_1(z) &= \\frac {v_2(z)} {v_0(z)} \\\\\n\\end{aligned}\n$$\n\n\n```python\nH0 = 1\nH1 = v2.subs(solution) / v0\nH1 = collect(simplify(H1), z)\n\n(H1)\n```\n\nWe can now assemble the complete transfer function, taking into account the mix coefficients `m0, m1`.\n\n$$\nH(z) = m_0 H_0(z) + m_1 H_1(z)\n$$\n\n\n```python\nm0, m1 = symbols(\"m0 m1\")\n\nH = m0 * H0 + m1 * H1\n\nprint(H)\nH\n```\n\n## Sanity check: High pass filter\n\n\n```python\nfrom sympy.functions import tan, exp\n\nsamplerate = 40_000\ncutoff = sqrt(samplerate/2)\n\nf = symbols(\"f\")\n\nH_hp_f = H.subs({\n g: tan(pi * cutoff / samplerate),\n m0: 1,\n m1: -1,\n z: exp(2*I*pi * f / samplerate)**-1,\n})\n\nplot(abs(H_hp_f), (f, 1, samplerate/2), xscale='log', yscale='log')\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "7de8892116632ccb25407dcaf6d3fec7953e3325", "size": 71058, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "one_pole_z_domain_tf.ipynb", "max_stars_repo_name": "DGriffin91/dsp-math-notes", "max_stars_repo_head_hexsha": "199663a47b486f18e5175d648e009d4542c062ca", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-07-26T21:47:41.000Z", 
"max_stars_repo_stars_event_max_datetime": "2021-07-26T21:47:41.000Z", "max_issues_repo_path": "one_pole_z_domain_tf.ipynb", "max_issues_repo_name": "DGriffin91/dsp-math-notes", "max_issues_repo_head_hexsha": "199663a47b486f18e5175d648e009d4542c062ca", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "one_pole_z_domain_tf.ipynb", "max_forks_repo_name": "DGriffin91/dsp-math-notes", "max_forks_repo_head_hexsha": "199663a47b486f18e5175d648e009d4542c062ca", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-05-05T00:54:16.000Z", "max_forks_repo_forks_event_max_datetime": "2021-05-05T00:54:16.000Z", "avg_line_length": 208.3812316716, "max_line_length": 37799, "alphanum_fraction": 0.7054096653, "converted": true, "num_tokens": 908, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9559813488829417, "lm_q2_score": 0.8824278649085117, "lm_q1q2_score": 0.8435845805871333}} {"text": "```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom sympy import Symbol, integrate\n%matplotlib inline\n```\n\n### Smooth local paths\nWe will use cubic spirals to generate smooth local paths. Without loss of generality, as $\\theta$ smoothly changes from 0 to 1, we impose a condition on the curvature as follows\n\n$\\kappa = f'(x) = K(x(1-x))^n $\n\nThis ensures curvature vanishes at the beginning and end of the path. Integrating, the yaw changes as\n$\\theta = \\int_0^x f'(x')dx'$\n\nWith $n = 1$ we get a cubic spiral, $n=2$ we get a quintic spiral and so on. Let us use the sympy package to find the family of spirals\n\n1. Declare $x$ a Symbol\n\n2. You want to find Integral of $f'(x)$\n\n3. 
You can choose $K$ so that all coefficients are integers\n\nVerify if $\\theta(0) = 0$ and $\\theta(1) = 1$\n\n\n```python\nK = 30 #choose for cubic/quintic\nn = 2 #choose for cubic (n=1)/ quintic (n=2)\nx = Symbol('x') #declare as Symbol\nprint(integrate(K*(x*(1-x))**n, x)) # complete the expression\n```\n\n 6*x**5 - 15*x**4 + 10*x**3\n\n\n\n```python\n# Return a cubic curve equation\ndef cubic_equation(K=6):\n x = Symbol('x')\n n = 1\n return integrate(K*(x * (1-x))**n, x)\n\n# Return a quintic curve equation\ndef quintic_equation(K=30):\n x = Symbol('x')\n n = 2\n return integrate(K * (x * (1-x))**n, x)\n```\n\n\n```python\nprint(f\"Cubic equation: {cubic_equation(6)}\")\nprint(f\"Quintic equation: {quintic_equation(30)}\")\n```\n\n Cubic equation: -2*x**3 + 3*x**2\n Quintic equation: 6*x**5 - 15*x**4 + 10*x**3\n\n\nAnother way of doing this is through matrice multiplication\n\n**Cubic Equations**\n\n$f(x) = a x^3 +b x^2 + c x + d$\n\n$f'(x) = 3ax^2 + 2bx + c$\n\nSubjected to constraints $f(0) = 0$, $f(1) = 1$, $f'(0) = 0$ and $f'(1) = 0$\n\n**Quintic Equations**\n\n$f(x) = a x^5 +b x^4 + c x^3 + d x^2 + e x + f$\n\n$f'(x) = 5ax^4 + 4bx^3 + 3cx^2 + 2dx + e$\n\n$f''(x) = 20ax^3 + 12bx^2 + 6cx + 2d$\n\nSubjected to constraints $f(0) = 0$, $f(1) = 1$, $f'(0) = 0$, $f'(1) = 0$, $f''(0) = 0$ and $f''(1) = 0$\n\n\n```python\n# Cubic coefficients\nC = np.array([\n [0, 0, 0, 1],\n [1, 1, 1, 1],\n [0, 0, 1, 0],\n [3, 2, 1, 0]\n])\nB = np.array([0, 1, 0, 0]).reshape((-1, 1))\ncubic_coeffs = np.linalg.inv(C) @ B\nprint(f\"Cubic coefficients [a,b,c,d]: {cubic_coeffs.reshape((-1))}\")\n\n# Quintic coefficients\nC = np.array([\n [0, 0, 0, 0, 0, 1],\n [1, 1, 1, 1, 1, 1],\n [0, 0, 0, 0, 1, 0],\n [5, 4, 3, 2, 1, 0],\n [0, 0, 0, 2, 0, 0],\n [20, 12, 6, 2, 0, 0]\n])\nB = np.array([0, 1, 0, 0, 0, 0]).reshape((-1, 1))\nquintic_coeffs = np.linalg.inv(C) @ B\nprint(f\"Quintic coefficients [a,b,c,d,e,f]: {quintic_coeffs.reshape((-1))}\")\n```\n\n Cubic coefficients [a,b,c,d]: [-2. 3. 0. 0.]\n Quintic coefficients [a,b,c,d,e,f]: [ 6. -15. 10. 0. 0. 0.]\n\n\nNow plot these equations\n\n\n```python\nx = np.linspace(0, 1, num=100)\n# Note: `num` controlls how many points\nthetas = -2*x**3 + 3*x**2\nplt.figure()\nplt.plot(x, thetas,'.', label=\"Cubic\")\nthetas = 6*x**5 - 15*x**4 + 10*x**3\nplt.plot(x, thetas,'.', label=\"Quintic\")\nplt.legend()\nplt.show()\n```\n\n\n```python\n#input can be any theta_i and theta_f (not just 0 and 1)\ndef cubic_spiral(theta_i, theta_f, n=10):\n x = np.linspace(0, 1, num=n)\n #-2*x**3 + 3*x**2 -> Scale and add offset (min)\n return (theta_f-theta_i)*(-2*x**3 + 3*x**2) + theta_i\n\ndef quintic_spiral(theta_i, theta_f, n=10):\n x = np.linspace(0, 1, num=n) \n #6*x**5 - 15*x**4 + 10*x**3 -> Scale and add offset (min)\n return (theta_f-theta_i)*(6*x**5 - 15*x**4 + 10*x**3) + theta_i\n```\n\n### Plotting\nPlot cubic, quintic spirals along with how $\\theta$ will change from $\\pi/2$ to $0$ when moving in a circular arc. 
Remember circular arc is when $\\omega $ is constant\n\n\n\n```python\nnum_pts = 100\ntheta_i = np.pi/2\ntheta_f = 0\n# Get the points\ntheta_circle = (theta_f - theta_i) * np.linspace(0, 1, num=num_pts) + theta_i\ntheta_cubic = cubic_spiral(theta_i, theta_f, n=num_pts)\ntheta_quintic = quintic_spiral(theta_i, theta_f, n=num_pts)\n# Make the plots (data inline)\nplt.figure()\nplt.plot(theta_circle, label='Circular') # Theta -> Linear\nplt.plot(theta_cubic, label='Cubic')\nplt.plot(theta_quintic,label='Quintic')\nplt.grid()\nplt.legend()\n```\n\n## Trajectory\n\nUsing the spirals, convert them to trajectories $\\{(x_i,y_i,\\theta_i)\\}$. Remember the unicycle model \n\n$dx = v\\cos \\theta dt$\n\n$dy = v\\sin \\theta dt$\n\n$\\theta$ is given by the spiral functions you just wrote. Use cumsum() in numpy to calculate {(x_i, y_i)}\n\nWhat happens when you change $v$?\n\n\n```python\nv = 1\ndt = 0.02\n\n# Create a function to return points\ndef ret_spirals(theta_i, theta_f, num_pts):\n # Get the points\n theta_circle = (theta_f - theta_i) * np.linspace(0, 1, num=num_pts) + theta_i\n theta_cubic = cubic_spiral(theta_i, theta_f, n=num_pts)\n theta_quintic = quintic_spiral(theta_i, theta_f, n=num_pts)\n #cubic\n x_cubic = np.cumsum(v*np.cos(theta_cubic)*dt)\n y_cubic = np.cumsum(v*np.sin(theta_cubic)*dt)\n cubic_pts = {\"x\": x_cubic, \"y\": y_cubic}\n #Quintic\n x_quintic = np.cumsum(v*np.cos(theta_quintic)*dt)\n y_quintic = np.cumsum(v*np.sin(theta_quintic)*dt)\n quintic_pts = {\"x\": x_quintic, \"y\": y_quintic}\n #Circular\n x_circle = np.cumsum(v*np.cos(theta_circle)*dt)\n y_circle = np.cumsum(v*np.sin(theta_circle)*dt)\n circular_pts = {\"x\": x_circle, \"y\": y_circle}\n return cubic_pts, quintic_pts, circular_pts\n\n```\n\n\n```python\nnum_pts = int(v/dt)\n# plot trajectories for circular/ cubic/ quintic for left and right turns\nplt.figure()\nplt.subplot(1,2,1) # Left turn -> np.pi/2 to np.pi\nplt.axis('equal')\nplt.title(\"Left turn\")\ncubic_pts, quintic_pts, circular_pts = ret_spirals(np.pi/2, np.pi, num_pts)\nplt.plot(circular_pts[\"x\"], circular_pts[\"y\"], label='Circular')\nplt.plot(cubic_pts[\"x\"], cubic_pts[\"y\"], label='Cubic')\nplt.plot(quintic_pts[\"x\"], quintic_pts[\"y\"], label='Quintic')\nplt.legend()\nplt.grid()\nplt.subplot(1,2,2) # Right turn -> np.pi/2 to 0\nplt.axis('equal')\nplt.title(\"Right turn\")\ncubic_pts, quintic_pts, circular_pts = ret_spirals(np.pi/2, 0, num_pts)\nplt.plot(circular_pts[\"x\"], circular_pts[\"y\"], label='Circular')\nplt.plot(cubic_pts[\"x\"], cubic_pts[\"y\"], label='Cubic')\nplt.plot(quintic_pts[\"x\"], quintic_pts[\"y\"], label='Quintic')\nplt.legend()\nplt.grid()\n```\n\n## Symmetric poses\n\nWe have been doing only examples with $|\\theta_i - \\theta_f| = \\pi/2$. \n\nWhat about other orientation changes? Given below is an array of terminal angles (they are in degrees!). 
Start from 0 deg and plot the family of trajectories\n\n\n```python\ndt = 0.1\nthetas = np.deg2rad([15, 30, 45, 60, 90, 120, 150, 180]) #convert to radians\nplt.figure()\nfor tf in thetas:\n t = cubic_spiral(0, tf,50)\n x = np.cumsum(np.cos(t)*dt)\n y = np.cumsum(np.sin(t)*dt)\n plt.plot(x, y, label=f\"{np.rad2deg(tf):.0f}\")\n\n# On the same plot, move from 180 to 180 - theta\nthetas = np.pi - np.deg2rad([15, 30, 45, 60, 90, 120, 150, 180])\nfor tf in thetas:\n t = cubic_spiral(np.pi, tf, 50)\n x = np.cumsum(np.cos(t)*dt)\n y = np.cumsum(np.sin(t)*dt)\n plt.plot(x, y, label=f\"{np.rad2deg(tf):.0f}\")\n\nplt.grid()\nplt.legend(bbox_to_anchor=(1.05, 1), loc='upper left')\nplt.show()\n```\n\nModify your code to print the following for the positive terminal angles $\\{\\theta_f\\}$\n1. Final x, y position in corresponding trajectory: $x_f, y_f$ \n2. $\\frac{y_f}{x_f}$ and $\\tan \\frac{\\theta_f}{2}$\n\nWhat do you notice? \nWhat happens when $v$ is doubled?\n\n\n```python\ndt = 0.05\nv = 2.0\nthetas = np.deg2rad([15, 30, 45, 60, 90, 120, 150, 180]) #convert to radians\nfor tf in thetas:\n t = cubic_spiral(0, tf,100)\n x = np.cumsum(v*np.cos(t)*dt)\n y = np.cumsum(v*np.sin(t)*dt)\n print(f\"tf:{np.rad2deg(tf):0.1f} xf:{x[-1]:0.3f} yf:{y[-1]:0.3f} yf/xf:{y[-1]/x[-1]:0.3f} tan(theta/2):{np.tan(tf/2):0.3f}\")\n```\n\n tf:15.0 xf:9.873 yf:1.300 yf/xf:0.132 tan(theta/2):0.132\n tf:30.0 xf:9.497 yf:2.545 yf/xf:0.268 tan(theta/2):0.268\n tf:45.0 xf:8.892 yf:3.683 yf/xf:0.414 tan(theta/2):0.414\n tf:60.0 xf:8.087 yf:4.669 yf/xf:0.577 tan(theta/2):0.577\n tf:90.0 xf:6.041 yf:6.041 yf/xf:1.000 tan(theta/2):1.000\n tf:120.0 xf:3.743 yf:6.484 yf/xf:1.732 tan(theta/2):1.732\n tf:150.0 xf:1.610 yf:6.010 yf/xf:3.732 tan(theta/2):3.732\n tf:180.0 xf:-0.000 yf:4.812 yf/xf:-7880729543884461.000 tan(theta/2):16331239353195370.000\n\n\nThese are called *symmetric poses*. With this spiral-fitting approach, only symmetric poses can be reached. \n\nIn order to move between any 2 arbitrary poses, you will have to find an intermediate pose that is pair-wise symmetric to the start and the end pose. \n\nWhat should be the intermediate pose? There are infinite possibilities. We would have to formulate it as an optimization problem. 
As they say, that has to be left for another time!\n\n\n```python\n\n```\n", "meta": {"hexsha": "fb7c03e89a9abb6d2f5b5f9151f6b490ab136291", "size": 136483, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "week3/avneesh/Q2 - 1/Attempt1_filesubmission_Avneesh Mishra - Cubic Quintic Spirals.ipynb", "max_stars_repo_name": "naveenmoto/lablet102", "max_stars_repo_head_hexsha": "24de9daa4ae75cbde93567a3239ede43c735cf03", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-07-09T16:48:44.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-09T16:48:44.000Z", "max_issues_repo_path": "week3/avneesh/Q2 - 1/Attempt1_filesubmission_Avneesh Mishra - Cubic Quintic Spirals.ipynb", "max_issues_repo_name": "naveenmoto/lablet102", "max_issues_repo_head_hexsha": "24de9daa4ae75cbde93567a3239ede43c735cf03", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "week3/avneesh/Q2 - 1/Attempt1_filesubmission_Avneesh Mishra - Cubic Quintic Spirals.ipynb", "max_forks_repo_name": "naveenmoto/lablet102", "max_forks_repo_head_hexsha": "24de9daa4ae75cbde93567a3239ede43c735cf03", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 136483.0, "max_line_length": 136483, "alphanum_fraction": 0.9273828975, "converted": true, "num_tokens": 3216, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9207896802383028, "lm_q2_score": 0.9161096175976053, "lm_q1q2_score": 0.8435442818509329}} {"text": "```python\nimport gurobipy as gp\nfrom gurobipy import GRB\n```\n\n# Max-Flow Min-Cut\n\n\n\nThe maximum flow minimum cut problem holds a special place in the history of optimization theory. We will first model the problems as Linear Programs and use the results to discuss some somewhat surprising results.\n\nThe problems consider a directed graph consisting of a set of nodes and a set of labeled arcs. The arc labels are non-negative values representing a notion of capacity for the arc. In the node set, there exists a source node $s$ and a terminal node $t$. The amount of flow into one of the intermediary nodes must equal the amount of flow out of the node, i.e., flow is conserved. \n\nThe maximum flow question asks: what is the maximum flow that can be transferred from the source to the sink. The minimum cut question asks: which is the subset of arcs, that once removed would disconnect the source node from the terminal node, which has the minimum sum of capacities. For example, removing arcs $(C, t)$ and $(D, t)$ from the network below would mean there is no longer a path from $s$ to $t$ and the sum of the capacities of these arcs is $140 + 90 = 250$. 
It is reasonably straight forward to find a better cut, i.e., a subset of nodes with sum of capacities less than 250.\n\nA complete model is provided for the maximum flow problem, whereas the minimum cut problem is left as a challenge to the reader.\n\n\n\n\n### Notation\n\nLet's represent the set of nodes and arcs as below\n\n| index | Set | Description | \n|:---------|:--------|:--------------|\n| $i$ | $V$ | Set of nodes |\n| $(i,j)$ | $A$ | Set of arcs |\n\nAdditionally we can define the capacity of the arcs as follows\n\n| Parameter | Description | \n|:---------|:--------|\n| $c_{i,j}$ | Capacity of arc $(i,j) \\in A$ |\n\n\n```python\n# Programmatically we can define the problem data as \nnodes = ['s', 'A', 'B', 'C', 'D', 't']\ncapacity = {\n ('s', 'A'): 100,\n ('s', 'B'): 150,\n ('A', 'B'): 120,\n ('A', 'C'): 90,\n ('B', 'D'): 110,\n ('C', 'D'): 120,\n ('C', 't'): 140,\n ('D', 't'): 90,\n }\narcs = capacity.keys()\n```\n\n## Maximum Flow\n\nFirst, let's consider the problem of calculating the maximum flow that can pass through the network.\n\n### Variables\n\nThe variables we will use are as follows\n\n| Variable | Type | Description | \n|:---------|:--------| :----- |\n| $f_{i,j}$ | Continuous | Flow along arc $(i,j) \\in A$ |\n\n### Model\n\nA model of the problem can then be defined as follows:\n\n$$\n\\begin{align}\n\\text{maximise} \\ & \\sum_{j \\in V: (s,j) \\in A} f_{s, j} && & \\quad (1a) \\label{model-obj}\\\\\ns.t. \\ & f_{i, j} \\leq c_{i,j} \\quad && \\forall (i, j) \\in A & \\quad (1b) \\label{m1-c1}\\\\\n& \\sum_{i \\in V: (i,j) \\in A} f_{i, j} - \\sum_{k \\in V: (j, k) \\in A} f_{j,k} = 0 \\quad && \\forall j \\in V \\setminus \\{s, t\\} & \\quad (1c) \\label{m2-c2}\n\\end{align}\n$$\n\nThe objective (1a) is to maximise the sum of flow leaving the source node $s$. Constraints (1b) ensure that the flow in each arc does not exceed the capacity of that arc. 
Constraints (1c) are continuity constraints, which ensure that the flow into each of the nodes, excluding the source and sink, is equal to the flow out of that node.\n\n\n```python\n# A function that takes a set of nodes, arcs, and capacities, creates a model and optimises can be defined as follows:\ndef max_flow(nodes, arcs, capacity):\n\n # Create optimization model\n m = gp.Model('flow')\n\n # Create variables\n flow = m.addVars(arcs, obj=1, name=\"flow\")\n\n # Objective\n m.setObjective(gp.quicksum(var for (i, j), var in flow.items() if i == \"s\"), sense=-1)\n\n # Arc-capacity constraints\n m.addConstrs(\n (flow.sum(i, j) <= capacity[i, j] for i, j in arcs), \"cap\")\n\n # Flow-conservation constraints\n m.addConstrs(\n (flow.sum(j, '*') == flow.sum('*', j)\n for j in nodes if j not in ('s', 't')), \"node\")\n\n # Compute optimal solution\n m.optimize()\n\n # Print solution\n if m.status == GRB.OPTIMAL:\n solution = m.getAttr('x', flow)\n print('\\nOptimal flows')\n for i, j in arcs:\n if solution[i, j] > 0:\n print('%s -> %s: %g' % (i, j, solution[i, j]))\n```\n\n\n```python\nmax_flow(nodes, arcs, capacity)\n```\n\n Using license file /Library/gurobi/gurobi.lic\n Gurobi Optimizer version 9.1.1 build v9.1.1rc0 (mac64)\n Thread count: 4 physical cores, 8 logical processors, using up to 8 threads\n Optimize a model with 12 rows, 8 columns and 20 nonzeros\n Model fingerprint: 0xfa101f8c\n Coefficient statistics:\n Matrix range [1e+00, 1e+00]\n Objective range [1e+00, 1e+00]\n Bounds range [0e+00, 0e+00]\n RHS range [9e+01, 2e+02]\n Presolve removed 12 rows and 8 columns\n Presolve time: 0.01s\n Presolve: All rows and columns removed\n Iteration Objective Primal Inf. Dual Inf. Time\n 0 1.8000000e+02 0.000000e+00 0.000000e+00 0s\n \n Solved in 0 iterations and 0.01 seconds\n Optimal objective 1.800000000e+02\n \n Optimal flows\n s -> A: 90\n s -> B: 90\n A -> C: 90\n B -> D: 90\n C -> t: 90\n D -> t: 90\n\n\n## Minimum Cut\n\nNext, let's consider the problem of determining the minimum cut.\n\n### Variables\n\nThe variables used are as follows\n\n| Parameter | Type | Description | \n|:---------|:--------| :----- |\n| $r_{i,j}$ | Continuous | 1 if arc $(i,j) \\in A$ is removed |\n| $z_{i,j}$ | Continuous | 1 if $i \\in V \\setminus \\{s,t\\}$ is removed |\n\n### Model\n\nA model of the problem can then be defined as follows:\n\n$$\n\\begin{alignat}3\n\\text{minimize} \\ & \\sum_{(i,j) \\in A} c_{i,j} \\cdot r_{i,j} && & (2a) \\\\\ns.t.\\ & r_{s,j} + z_j \\geq 1 \\quad && \\forall j \\in V : (s, j) \\in A & \\quad (2c)\\\\\n& r_{i,j} + z_j \\geq z_i \\quad && \\forall (i, j) \\in A: i \\neq s \\text{ and } j \\neq t & \\quad (2b) \\\\\n& r_{i,t} - z_i \\geq 0 \\quad && \\forall i \\in V : (i, t) \\in A & \\quad (2d)\n\\end{alignat}\n$$\n\nThe objective (2a) is to minimise the sum of capacities of the arcs that are removed from the network. Constraints (2b) ensure that for all arcs leaving the source node, either the adjacent node $j$ is connected to the sink, i.e., $z_j = 1$, or the arc is removed, $r_{s, j} = 1$. Constraints (2c) ensure that, for each arc where both nodes are neither the source or the sink, that if predecessor is connected, $z_i = 1$, then either the successor is connected $z_j =1$ or the arc is removed $r_{i,j} = 1$. Finally constraints (2d) ensure that for any arc adjacent to the sink node, if the predecessor is connected, $z_i = 1$, then the arc must be removed, $r_{i,t}=1$. 
\n\n\n```python\ndef min_cut(nodes, arcs, capacity):\n\n # Create optimization model\n m = gp.Model('cut')\n m.ModelSense = 1\n\n # Create variables\n remove = m.addVars(arcs, vtype=GRB.CONTINUOUS, obj=capacity, name=\"r_\")\n connect = m.addVars((i for i in nodes if i not in (\"s\", \"t\")), name=\"z_\", vtype=GRB.CONTINUOUS)\n\n # Arc-capacity constraints\n for (i, j) in arcs:\n\n if i == \"s\":\n m.addConstr(remove[\"s\", j] + connect[j] >= 1)\n \n elif j == \"t\":\n m.addConstr(remove[i, \"t\"] - connect[i] >= 0)\n \n else:\n m.addConstr(remove[i, j] + connect[j] - connect[i] >= 0)\n\n # Compute optimal solution\n m.optimize()\n\n # Print solution\n if m.status == GRB.OPTIMAL:\n solution = m.getAttr('x', remove)\n print('\\nOptimal cuts')\n for i, j in arcs:\n if solution[i, j] > 0.5:\n print('%s -> %s: %g' % (i, j, capacity[i, j]))\n```\n\n\n```python\nmin_cut(nodes, arcs, capacity)\n```\n\n Gurobi Optimizer version 9.1.1 build v9.1.1rc0 (mac64)\n Thread count: 4 physical cores, 8 logical processors, using up to 8 threads\n Optimize a model with 8 rows, 12 columns and 20 nonzeros\n Model fingerprint: 0x28bf68bd\n Coefficient statistics:\n Matrix range [1e+00, 1e+00]\n Objective range [9e+01, 2e+02]\n Bounds range [0e+00, 0e+00]\n RHS range [1e+00, 1e+00]\n Presolve removed 6 rows and 9 columns\n Presolve time: 0.01s\n Presolved: 2 rows, 3 columns, 4 nonzeros\n \n Iteration Objective Primal Inf. Dual Inf. Time\n 0 9.0000000e+01 1.000000e+00 0.000000e+00 0s\n 1 1.8000000e+02 0.000000e+00 0.000000e+00 0s\n \n Solved in 1 iterations and 0.01 seconds\n Optimal objective 1.800000000e+02\n \n Optimal cuts\n A -> C: 90\n D -> t: 90\n\n\n### Questions\n\n* How do the number of variables of one model compare with the number of constraints of the other?\n* How do the optimal solutions compare?\n* How do the objective values of feasible solutions to the max flow problem compare with those of the min cut problem?\n* Why are the $r$ and $c$ variables continuous when a cut is clearly a binary operation?\n", "meta": {"hexsha": "4df2b12ff816be13a4808ab92ec8903849d66096", "size": 12794, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "week2/Max Flow + Min Cut.ipynb", "max_stars_repo_name": "stevedwards/gurobi_course", "max_stars_repo_head_hexsha": "badb716d5dac86a77712908637cbc08722d0415d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "week2/Max Flow + Min Cut.ipynb", "max_issues_repo_name": "stevedwards/gurobi_course", "max_issues_repo_head_hexsha": "badb716d5dac86a77712908637cbc08722d0415d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "week2/Max Flow + Min Cut.ipynb", "max_forks_repo_name": "stevedwards/gurobi_course", "max_forks_repo_head_hexsha": "badb716d5dac86a77712908637cbc08722d0415d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.659025788, "max_line_length": 676, "alphanum_fraction": 0.5214944505, "converted": true, "num_tokens": 2749, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9219218391455084, "lm_q2_score": 0.9149009503523291, "lm_q1q2_score": 0.8434671667847927}} {"text": "# Euler's method\n\n- toc: false\n- branch: master\n- badges: true\n- comments: false\n- categories: [mathematics, numerical recipes]\n- hide: true\n\n-------\nQuestions:\n- How do I use Euler's method to solve a first-order ODE?\n\n\n\n--------\n\n\n\n---------\nObjectives:\n- Use Euler's method, implemented in Python, to solve a first-order ODE\n- Understand that this method is approximate and the significance of step size $h$\n- Compare results at different levels of approximation using the `matplotlib` library.\n------\n\n### There are a variety of ways to solve an ODE\n\nIn the previous lesson we considered nuclear decay:\n\n\\begin{equation}\n\\frac{\\mathrm{d} N}{\\mathrm{d} t} = -\\lambda N\n\\end{equation}\n\nThis is one of the simplest examples of am ODE - a first-order, linear, separable differential equation with one dependent variable. We saw that we could model the number of atoms $N$ by finding an analytic solution through integration:\n\n\\begin{equation}\nN = N_0 e^{-\\lambda t}\n\\end{equation}\n\nHowever there is more than one way to crack an egg (or solve a differential equation). We could have, instead, used an approximate, numerical method. One such method - Euler's method - is this subject of this lesson.\n\n### A function can be approximated using a Taylor expansion\n\nThe Taylor series is a polynomial expansion of a function about a point.\nFor example, the image below shows $\\mathrm{sin}(x)$ and its Taylor approximation by polynomials of degree 1, 3, 5, 7, 9, 11, and 13 at $x = 0$. \n
\n\n
Credit: Image By IkamusumeFan - CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=27865201
\n
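\nThe truncated polynomials shown in the figure can also be generated programmatically. As a quick optional illustration - a minimal sketch using the `sympy` library, which is not otherwise needed for this lesson:\n\n\n```python\nimport sympy as sp\n\nx = sp.symbols('x')\n\n# Taylor polynomials of sin(x) about x = 0, of increasing degree,\n# matching the low-order curves shown in the figure above\nfor degree in [1, 3, 5, 7]:\n    poly = sp.series(sp.sin(x), x, 0, degree + 1).removeO()\n    print(degree, poly)\n```\n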
\n\nThe Taylor series of $f(x)$ evaluated at point $a$ can be expressed as:\n\n\\begin{equation}\nf(x) = f(a) + \\frac{\\mathrm{d} f}{\\mathrm{d} x}(x-a) + \\frac{1}{2!} \\frac{\\mathrm{d} ^2f}{\\mathrm{d} x^2}(x-a)^2 + \\frac{1}{3!} \\frac{\\mathrm{d} ^3f}{\\mathrm{d} x^3}(x-a)^3\n\\end{equation}\n\n\n\nReturning to our example of nuclear decay, we can use a Taylor expansion to write the value of $N$ a short interval $h$ later:\n\n\\begin{equation}\nN(t+h) = N(t) + h\\frac{\\mathrm{d}N}{\\mathrm{d}t} + \\frac{1}{2}h^2\\frac{\\mathrm{d}^2N}{\\mathrm{d}t^2} + \\ldots\n\\end{equation}\n\n\\begin{equation}\nN(t+h) = N(t) + hf(N,t) + \\mathcal{O}(h^2)\n\\end{equation}\n\n*If you want to know more about Taylor expansion, there is aan excellent video explanation from user `3blue1brown` on Youtube:*\n\n> youtube: https://youtu.be/3d6DsjIBzJ4\n\n### If the step size $h$ is small then higher order terms can be neglected\n\nIf $h$ is small and $h^2$ is very small we can neglect the terms in $h^2$ and higher and we get:\n\n\\begin{equation}\nN(t+h) = N(t) + hf(N,t).\n\\end{equation}\n\n\n\n### Euler's method can be used to approximate the solution of differential equations\n\nWe can keep applying the equation above so that we calculate $N(t)$ at a succession of equally spaced points for as long as we want. If $h$ is small enough we can get a good approximation to the solution of the equation. This method for solving differential equations is called Euler's method, after Leonhard Euler, its inventor.\n\n\n\n> Note: Although we are neglecting terms $h^2$ and higher, Euler's method typically has an error linear in $h$ as the error accumulates over repeated steps. This means that if we want to double the accuracy of our calculation we need to double the number of steps, and double the calcuation time.\n\n> Note: So far we have looked at an example where the input (or independent variable) is time. This isn't always the case - but it is the most common case in physics, as we are often interested in how things evolve with time.\n\n### Euler's method can be applied using the Python skills we have developed\n\nLet's use Euler's method to solve the differential equation for nuclear decay. We will model the decay process over a period of 10 seconds, with the decay constant $\\lambda=0.1$ and the initial condition $N_0 = 1000$.\n\n\\begin{equation}\n\\frac{\\mathrm{d}N}{\\mathrm{d} t} = -0.1 N\n\\end{equation}\n\n\nFirst, let's import the standard scientific libraries we will be using - Numpy and Matplotlib:\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n```\n\nLet's definte the function $f(N,t)$ which describes the rate of decay. In this case, the function depends only on the number of atoms present.\n\n\n```python\n# define the function for nuclear decay\ndef f(Num_atoms):\n return -0.1*Num_atoms\n```\n\nNext we'll list the simulation parameters and initial conditions: start time, end time, number of starting atoms (which is an initial condition), number of time steps and step size (which is calculated using the number of time steps).\n\n\n```python\na = 0 # start time\nb = 10 # end time\nNum_atoms = 1000 # initial condition\nnum_steps = 5 # number of time steps\nh = (b-a) / num_steps # time step size\n```\n\nWe use the Numpy `arange` function to generate a list of evenly spaced times at which to evaluate the number of atoms. 
We also create an empty list to hold the values for $N$ that we are yet to calculate.\n\n\n```python\n# use the Numpy arange function to generate a list of evenly spaced times at which to evaluate the number of atoms N.\ntime_list = np.arange(a,b,h)\n\n# create an empty list to hold the calculated N values\nNum_atoms_list = []\n```\n\nFinally, we apply Euler's method using a `For` loop. Note that the order of operations in the loop body is important.\n\n\n```python\n# apply Euler's method. Note that the order of operations in the loop body is important.\nfor time in time_list:\n Num_atoms_list.append(Num_atoms)\n Num_atoms += h*f(Num_atoms)\n```\n\n### We can easily visualise our results, and compare against the analytical solution, using the `matplotlib` plotting library\n\n\n```python\nplt.scatter(time_list, Num_atoms_list)\nplt.xlabel(\"time\")\nplt.ylabel(\"Number of atoms\")\nplt.show()\n```\n\nUsing the analytic solution from the previous lesson, we can define a function for calculating the number of atoms $N$ as a function of time (this is the exact solution).\n\n\n```python\ndef analytic_solution(time):\n return 1000*np.exp(-0.1*time)\n```\n\nWe can use this to calculate the exact value for $N$ over the full time range. Note that we use a large number of points in time (in this case 1000) to give a nice smooth curve:\n\n\n```python\nnum_steps = 1000\nh = (b-a) / num_steps\ntime_analytic_list = np.arange(a,b,h)\nNum_atoms_analytic_list = []\n\nfor time in time_analytic_list:\n Num_atoms_analytic_list.append(analytic_solution(time))\n```\n\nFinally, we plot the approximate Euler method results against the exact analytical solution:\n\n\n```python\nplt.plot(time_analytic_list,Num_atoms_analytic_list)\nplt.scatter(time_list, Num_atoms_list)\nplt.xlabel(\"time\")\nplt.ylabel(\"Number of atoms\")\n```\n\nWe can see that the error is increasing over time. 
We can calculate the error at $t=8$:\n\n\n```python\nprint(\"Analytic solution at t=8: \",round(analytic_solution(8)))\nprint(\"Numerical solution at t=8: \",round(Num_atoms_list[-1]))\nprint(\"Error is: \",round(analytic_solution(8)-Num_atoms_list[-1]))\n```\n\n Analytic solution at t=8: 449\n Numerical solution at t=8: 410\n Error is: 40\n\n\n----\n\nKeypoints:\n\n- There are a variety of ways to solve an ODE\n- A function can be approximated using a Taylor expansion\n- If the step size $h$ is small then higher order terms can be neglected\n- Euler's method can be used to approximate the solution of differential equations\n- Euler's method can be applied using the Python skills we have developed\n- We can easily visualise our results, and compare against the analytical solution, using the matplotlib plotting library\n-----\n\n---\n\nDo [the quick-test](https://nu-cem.github.io/CompPhys/2021/08/02/Eulers-Method-Qs.html).\n\nBack to [Modelling with Ordinary Differential Equations](https://nu-cem.github.io/CompPhys/2021/08/02/ODEs.html).\n\n---\n", "meta": {"hexsha": "e884328928ab42a5b9dfad118d3382e83def89c0", "size": 38697, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "_notebooks/2021-08-02-Eulers-Method.ipynb", "max_stars_repo_name": "NU-CEM/CompPhys", "max_stars_repo_head_hexsha": "7a7e8ab672797d6db0dbbd673fdc66c6e7bad971", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "_notebooks/2021-08-02-Eulers-Method.ipynb", "max_issues_repo_name": "NU-CEM/CompPhys", "max_issues_repo_head_hexsha": "7a7e8ab672797d6db0dbbd673fdc66c6e7bad971", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 17, "max_issues_repo_issues_event_min_datetime": "2021-10-06T08:11:43.000Z", "max_issues_repo_issues_event_max_datetime": "2021-11-24T13:46:08.000Z", "max_forks_repo_path": "_notebooks/2021-08-02-Eulers-Method.ipynb", "max_forks_repo_name": "NU-CEM/CompPhys", "max_forks_repo_head_hexsha": "7a7e8ab672797d6db0dbbd673fdc66c6e7bad971", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-11-03T10:11:28.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-03T10:11:28.000Z", "avg_line_length": 79.787628866, "max_line_length": 15548, "alphanum_fraction": 0.8253352973, "converted": true, "num_tokens": 2066, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9473810466522863, "lm_q2_score": 0.8902942326713409, "lm_q1q2_score": 0.843447881976669}} {"text": "# Model SIR\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.integrate import odeint\n```\n\n**Cel: zredukowanie modelu r\u00f3wna\u0144 r\u00f3\u017cniczkowych**\n\n\n$$ \\begin{array}{rcl} \\displaystyle \\frac{dS}{dt} & = & \\displaystyle -\\frac{\\beta}{N} I S \\\\ \\displaystyle \\frac{dI}{dt} & = & \\displaystyle \\frac{\\beta}{N} I S - \\gamma I \\\\ \\displaystyle \\frac{dR}{dt} & = & \\gamma I \\end{array} \\quad, $$\ngdzie:

\n$ N = S + I + R$
\n$S$ - the number (density) of susceptible individuals (_susceptible_),
\n$I$ - the number (density) of infected individuals, who transmit the disease (_infected_ and _infectious_),
\n$R$ - the number (density) of individuals who have recovered, or who were removed from the population through death caused by the infection (_recovered_ or _removed_),
\n$\\beta > 0$ - a parameter describing how quickly the infection spreads (_transmission rate_),
\n$\\gamma$ - a parameter describing the recovery rate (_recovery rate_),
\n$\\quad$ the mean time spent in group $I$ is $1/\\gamma$.
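\nBefore reducing the model it is useful to keep in mind what the full, unreduced system looks like when it is integrated directly. The cell below is only a reference sketch - the parameter values and initial state in it are illustrative and independent of the values defined in the next cell - while the rest of this notebook works with the reduced forms of the model:\n\n\n```python\nimport numpy as np\nfrom scipy.integrate import odeint\n\n\ndef full_sir(state, t, bet, gam):\n    # Right-hand side of the full three-equation SIR system\n    S, I, R = state\n    N = S + I + R\n    dS = -bet * I * S / N\n    dI = bet * I * S / N - gam * I\n    dR = gam * I\n    return [dS, dI, dR]\n\n\n# Illustrative run: beta = 0.15, gamma = 1/50, S(0) = 0.9, I(0) = 0.1, R(0) = 0\nt_demo = np.linspace(0, 200, 201)\nsol_demo = odeint(full_sir, [0.9, 0.1, 0.0], t_demo, args=(0.15, 1 / 50))\n```\n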
\n\n**Our initial values**\n\n\n```python\nT = 200\nh = 1e-2\nt = np.arange(start=0, stop=T + h, step=h)\n\nbet, gam = 0.15, 1 / 50\n# S_start = 0.8 \nS_start = np.random.uniform(0.7, 1)\nI_start = 1 - S_start\nR_start = 0\nN = S_start + I_start + R_start # is const\n```\n\n## 1) Reducing the model to two differential equations\n\n### a) Deriving the formulas:\n\n
**First, let us prove that for the simplest SIR model the total population (N) is constant:**\n\n$ N = S + I + R$
\nTaking the derivative of both sides with respect to t:

\n$\n\\begin{align}\n\\frac{dN}{dt} = \\frac{dS}{dt} + \\frac{dI}{dt} + \\frac{dR}{dt}\n\\end{align}\n$\n\nwiedz\u0105c \u017ce:\n

\n$\n\\begin{align}\n\\frac{dS}{dt}=-\\frac{\\beta}{N} I S\n\\end{align}\n$\n

\n$\n\\begin{align}\n\\frac{dI}{dt}=\\frac{\\beta}{N} I S - \\gamma I \n\\end{align}\n$\n

\n$\n\\frac{dR}{dt}=\\gamma I\n$\n

\nWe obtain:

\n$\n\\begin{align}\n\\frac{dN}{dt} = -\\frac{\\beta}{N} I S + \\frac{\\beta}{N} I S - \\gamma I + \\gamma I = 0\n\\end{align}\n$\n

This proves that N is constant.\n\n
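As a quick numerical sanity check of this identity we can evaluate the three right-hand sides for arbitrary values and confirm that they sum to zero. The numbers below are illustrative only and are not the simulation parameters defined earlier in this notebook:\n\n\n```python\n# Illustrative values only (not the simulation parameters used elsewhere)\nbet_chk, gam_chk = 0.15, 1 / 50\nS_chk, I_chk, N_chk = 0.7, 0.3, 1.0\n\ndS = -bet_chk * I_chk * S_chk / N_chk\ndI = bet_chk * I_chk * S_chk / N_chk - gam_chk * I_chk\ndR = gam_chk * I_chk\n\n# The sum should be zero (up to floating-point rounding)\nprint(dS + dI + dR)\n```\n\n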
**Since the total population is constant, the third equation can be obtained from the first two:**\n

\n$ N = S + I + R$
\n$ R = N - S - I$
\n
For this reason it is enough to solve the differential equations for S and I only; R can then be obtained from the already computed S and I.\n\n### b) Implementation and modelling:\n\n**Using the *odeint* solver from the *scipy.integrate* library:**\n\n\n```python\ndef two_diff_ode_equation(state, t, bet, gam):\n    S, I = state\n    return [- bet * I * S / N, bet * I * S / N - gam * I]\n\ndef calc_R(S_arr, I_arr):\n    R_arr = np.zeros(len(t))\n    for i in range(len(R_arr)):\n        R_arr[i] = N - S_arr[i] - I_arr[i]\n    return R_arr\n\ndef two_equation_ode_plot(t, sym, labelt='$t$', labels=['S', 'I', 'R']):\n    fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(10, 4))\n\n    # plot drawing (S, I)\n    for i in range(len(labels) - 1):\n        ax.plot(t, sym[:, i], label=labels[i])\n    # plot drawing (R)\n    ax.plot(t, calc_R(sym[:, 0], sym[:, 1]), label=labels[2])\n\n    ax.set_xlabel(labelt, fontsize=14)\n    ax.set_ylabel('state', fontsize=14)\n    ax.set_ylim([0, 1])\n    ax.legend()\n    plt.show()\n\n# ---------------------------------------------------------------------------------------------------------------#\nstart_state = S_start, I_start\nsym = odeint(two_diff_ode_equation, start_state, t, args=(bet, gam))\ntwo_equation_ode_plot(t, sym, labels=['S', 'I', 'R'])\n```\n\n**Writing the function by hand using the Euler scheme:**\n\n\n```python\nS = np.zeros(len(t))\nS[0] = S_start\nI = np.zeros(len(t))\nI[0] = I_start\nR = np.zeros(len(t))\nR[0] = R_start\n\n\ndef two_diff_equation_manual():\n    for i in range(t.size - 1):\n        S[i + 1] = S[i] + h * (- bet * I[i] * S[i] / N)\n        I[i + 1] = I[i] + h * (bet * I[i] * S[i + 1] / N - gam * I[i])\n        R[i + 1] = N - S[i + 1] - I[i + 1]\n\ndef equation_man_plot(t, sirList, labelt='$t$', labels=['S', 'I', 'R']):\n    fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(10, 4))\n    # plot drawing (R, S, I)\n    for i in range(len(sirList)):\n        ax.plot(t, sirList[i], label=labels[i])\n    ax.set_xlabel(labelt, fontsize=14)\n    ax.set_ylabel('state', fontsize=14)\n    ax.set_ylim([0, 1])\n    ax.legend()\n    plt.show()\n\n# ---------------------------------------------------------------------------------------------------------------#\ntwo_diff_equation_manual()\nequation_man_plot(t, [S, I, R], labels=['S', 'I', 'R'])\n```\n\n## 2) Reducing the model to a single differential equation\n\n### a) Deriving the formulas:\n\nFor a moment we treat I as a function of S and compute: $\\frac{dI}{dS}$

\n$\n\\begin{align}\n\\frac{dI}{dS} = \\frac{\\frac{dI}{dt}}{\\frac{dS}{dt}} = \\frac{\\beta I S-\\gamma I}{-\\beta I S} = \\frac{\\gamma}{\\beta S} - 1\n\\end{align}\n$\n
We then notice that this expression depends only on S. Knowing that integration is the inverse operation of differentiation, we compute:

\n$\n\\begin{align}\nI = \\frac{\\gamma}{\\beta}*ln(S)-S+C\n\\end{align}\n$
where C is a constant.

Mo\u017cemy je wyliczy\u0107 dla warto\u015bci pocz\u0105tkowych:

\n$\n\\begin{align}\nC = I(0) - \\frac{\\gamma}{\\beta}*ln(S(0)) + S(0)\n\\end{align}\n$\n

In this way everything depends only on S, and I and R can be computed once S is known.\n\n### b) Implementation and modelling:\n\n**Using the *odeint* solver from the *scipy.integrate* library:**\n\n\n```python\nC = I_start - gam / bet * np.log(S_start) + S_start # C - const\n\ndef one_diff_equation_ode(state, t, bet, gam):\n    S = state[0]\n    return [(-bet / N * S * (gam / bet * np.log(S) - S + C))]\n\n\ndef calc_R(S_arr, I_arr):\n    R_arr = np.zeros(len(t))\n    for i in range(len(R_arr)):\n        R_arr[i] = N - S_arr[i] - I_arr[i]\n    return R_arr\n\n\ndef calc_I(S_arr):\n    I_arr = np.zeros(len(t))\n    for i in range(len(I_arr)):\n        I_arr[i] = gam / bet * np.log(S_arr[i]) - S_arr[i] + C\n    return I_arr\n\ndef one_equation_ode_plot(t, sym, labelt='$t$', labels=['S', 'I', 'R']):\n    fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(10, 4))\n\n    # plot drawing (S)\n    ax.plot(t, sym[:, 0], label=labels[0])\n    # plot drawing (I)\n    I_arr = calc_I(sym[:, 0])\n    ax.plot(t, I_arr, label=labels[1])\n    # plot drawing (R)\n    ax.plot(t, calc_R(sym[:, 0], I_arr), label=labels[2])\n\n    ax.set_xlabel(labelt, fontsize=14)\n    ax.set_ylabel('state', fontsize=14)\n    ax.set_ylim([0, 1])\n    ax.legend()\n    plt.show()\n\n\nstart_state = S_start\nsym = odeint(one_diff_equation_ode, start_state, t, args=(bet, gam))\none_equation_ode_plot(t, sym, labels=['S', 'I', 'R'])\n```\n\n**Writing the function by hand using the Euler scheme**\n\n\n```python\nS = np.zeros(len(t))\nS[0] = S_start\nI = np.zeros(len(t))\nI[0] = I_start\nR = np.zeros(len(t))\nR[0] = R_start\n\n\ndef one_diff_equation_manual():\n    C = I_start - gam / bet * np.log(S_start) + S_start # C - const\n    for i in range(t.size - 1):\n        S[i + 1] = S[i] + h * (-bet / N * S[i] * (gam / bet * np.log(S[i]) - S[i] + C))\n        I[i + 1] = gam / bet * np.log(S[i + 1]) - S[i + 1] + C\n        R[i + 1] = N - S[i + 1] - I[i + 1]\n\n\ndef equation_man_plot(t, sirList, labelt='$t$', labels=['S', 'I', 'R']):\n    fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(10, 4))\n    # plot drawing (R, S, I)\n    for i in range(len(sirList)):\n        ax.plot(t, sirList[i], label=labels[i])\n    ax.set_xlabel(labelt, fontsize=14)\n    ax.set_ylabel('state', fontsize=14)\n    ax.set_ylim([0, 1])\n    ax.legend()\n    plt.show()\n\n# ---------------------------------------------------------------------------------------------------------------#\none_diff_equation_manual()\nequation_man_plot(t, [S, I, R], labels=['S', 'I', 'R'])\n```\n\nAuthor: Łukasz Gajerski, 246703\n\n\n```python\n\n```\n", "meta": {"hexsha": "916bde4544198d2d64558da3e5de02a24d4fec23", "size": 127468, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "SIR_Model_Spread_of_Disease/Report.ipynb", "max_stars_repo_name": "Ukasz09/Machine-learning", "max_stars_repo_head_hexsha": "fad267247a98099dd0647840fb2cbeb91ba63ef6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-04-18T11:30:26.000Z", "max_stars_repo_stars_event_max_datetime": "2020-04-18T11:30:26.000Z", "max_issues_repo_path": "SIR_Model_Spread_of_Disease/Report.ipynb", "max_issues_repo_name": "Ukasz09/Machine-learning", "max_issues_repo_head_hexsha": "fad267247a98099dd0647840fb2cbeb91ba63ef6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "SIR_Model_Spread_of_Disease/Report.ipynb", "max_forks_repo_name": "Ukasz09/Machine-learning", "max_forks_repo_head_hexsha": 
"fad267247a98099dd0647840fb2cbeb91ba63ef6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 272.3675213675, "max_line_length": 28956, "alphanum_fraction": 0.9175322434, "converted": true, "num_tokens": 2752, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9441768604361742, "lm_q2_score": 0.8933093961129794, "lm_q1q2_score": 0.8434420610200877}} {"text": "\n\n\u6570\u5024\u7a4d\u5206\u6cd5\u306e\u7cbe\u5ea6\u3092\u5186\u5468\u7387\u306e\u8a08\u7b97\n\\begin{equation}\n I\\equiv\\int_a^b dx f(x)\n =\\int_{0}^1dx\\frac{4}{1+x^2}=\\pi\n\\end{equation}\n\u3067\u6bd4\u8f03\u3059\u308b\u3002\n\n## \u53f0\u5f62\u516c\u5f0f\n\n\u53f0\u5f62\u516c\u5f0f\u306f\u7a4d\u5206\u3092\u5358\u7d14\u306a\u5dee\u5206\u3067\u8868\u3057\u305f\u3082\u306e\u3067\n\\begin{equation}\nI\\simeq\\frac{\\Delta x}{2}\\left(f(x_0)+2\\sum_{k=1}^{n-1}f(x_k)+f(x_n)\\right),\\quad\n\\Delta x\\equiv\\frac{x_n - x_0}{n},\\quad x_k\\equiv x_0 + k\\Delta x\n\\end{equation}\n\u3067\u3042\u308b\u3002\n\n\n```\nimport numpy as np\n\n#: \u88ab\u7a4d\u5206\u95a2\u6570\ndef integrand(x: float) -> float:\n return 4 / (1 + x**2)\n```\n\n\n```\nfrom typing import Callable\n\ndef trapezoid(integrand: Callable[[float], float], \n x_0: float, x_n: float, num: int) -> float:\n # \u5206\u5272\u5e45\n dx: float = 1./num\n # \u53f0\u5f62\u516c\u5f0f\n sum: float = dx / 2 * (integrand(x_0) + integrand(x_n))\n for k in range(1, num):\n x_k = x_0 + k * dx\n sum += dx * integrand(x_k)\n return sum\n\nprint(f\"pi = {trapezoid(integrand, 0, 1, 100):.10f}\")\n```\n\n pi = 3.1415759869\n\n\n## \u30b7\u30f3\u30d7\u30bd\u30f3\u516c\u5f0f\n\n\n\u30b7\u30f3\u30d7\u30bd\u30f3\u516c\u5f0f\u306f 2 \u6b21\u306e\u30cb\u30e5\u30fc\u30c8\u30f3\u30fb\u30b3\u30fc\u30c4\u516c\u5f0f\uff082 \u6b21\u306e\u30e9\u30b0\u30e9\u30f3\u30b8\u30e5\u88dc\u9593\u306b\u3088\u308b\u7a4d\u5206\uff09\u3067\n\\begin{equation}\nI\\simeq\\frac{\\Delta x}{3}\\left(f(x_0)+4\\sum_{k=1,3,\\cdots}^{2n-1}f(x_k)+2\\sum_{k=2,4,\\cdots}^{2n-2}f(x_k)+f(x_{2n})\\right),\\quad\n\\Delta x\\equiv\\frac{x_{2n}-x_0}{2n},\\quad\nx_k\\equiv x_0 + k\\Delta x\n\\end{equation}\n\u3068\u306a\u308b\u3002\n\n\n```\ndef simpson(integrand: Callable[[float], float], \n x_0: float, x_2n: float, num: int) -> float:\n dx = (x_2n - x_0) / (2 * num)\n sum = dx / 3 * (integrand(x_0) + integrand(x_2n))\n \n #: \u5947\u6570\u306e\u5834\u5408\n for k in np.arange(1, 2*num, 2):\n x_k = x_0 + k * dx\n sum += dx * 4/3 * integrand(x_k)\n #: \u5076\u6570\u306e\u5834\u5408\n for k in np.arange(2, 2*num, 2):\n x_k = x_0 + k * dx\n sum += dx * 2/3 * integrand(x_k)\n return sum\n\nprint(f\"pi = {simpson(integrand, 0, 1, 50):.15f}\")\n```\n\n pi = 3.141592653589754\n\n\n## \u30ed\u30f3\u30d0\u30fc\u30b0\u6cd5\n\n\u30ed\u30f3\u30d0\u30fc\u30b0\u6cd5\u306f\u53ce\u675f\u5024\u304c\u771f\u306e\u7a4d\u5206\u5024\u3067\u3042\u308b\u3088\u3046\u306a\u53ce\u675f\u5217\u3092\u7528\u3044\u3066\u3001\u88dc\u5916\u3092\u884c\u3046\u3053\u3068\u3067\u7a4d\u5206\u5024\u3092\u6c42\u3081\u308b\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\u3067\u3042\u308b\u3002\n\u3053\u306e\u3088\u3046\u306a\u53ce\u675f\u5217\u3092\u53f0\u5f62\u516c\u5f0f\u306b\u3088\u3063\u3066\n\\begin{equation}\n\\begin{split}\n&T_0^n\\equiv\\frac{\\Delta x}{2}\\left(f(x_0)+2\\sum_{k=1}^{2^n-1}f(x_k)+f(x_{2^n})\\right)\\overset{n\\rightarrow\\infty}{\\rightarrow}I,\\\\\n&\\Delta x\\equiv\\frac{x_{2^n}-x_0}{2^n},\\quad x_k\\equiv 
k\\Delta x\n\\end{split}\n\\end{equation}\n\u306e\u3088\u3046\u306b\u4f5c\u308b\u3002\n\u4e0a\u306e\u6dfb\u5b57\u306f\u5206\u5272\u6570\u306e\u6dfb\u5b57\u3067\u3042\u308a\u3001\u4e0b\u306e\u6dfb\u5b57\u306f\u88dc\u5916\u56de\u6570\u306e\u6dfb\u5b57\u3067\u3042\u308b\u3002\n\u30ed\u30f3\u30d0\u30fc\u30b0\u6cd5\u306f\u5404 $T^n_0$ \u306b\u5bfe\u3057\u3066\n\\begin{equation}\nT^{n+1}_m \\equiv\\frac{4^m T^{n+1}_{m-1} - T^n_{m-1}}{4^m-1}\n\\end{equation}\n\u3092\u8a08\u7b97\u3059\u308b\u3002\n\u8a08\u7b97\u306f\n\\begin{equation}\nT_0^0\\overset{\\text{\u5206\u5272}}{\\rightarrow}\nT_1^0\\overset{\\text{\u88dc\u5916}}{\\rightarrow}\nT_1^1\\overset{\\text{\u5206\u5272}}{\\rightarrow}\nT_2^0\\overset{\\text{\u88dc\u5916}}{\\rightarrow}\nT_2^1\\overset{\\text{\u88dc\u5916}}{\\rightarrow}\nT_2^2\\overset{\\text{\u5206\u5272}}{\\rightarrow}\n\\cdots\n\\end{equation}\n\u306e\u9806\u306b\u884c\u3044\u66f4\u65b0\u5e45\u304c\u4e00\u5b9a\u4ee5\u4e0b\u306b\u306a\u308b\u307e\u3067\u884c\u3046\u3002\n\n\n```\ndef romberg(integrand: Callable[[float], float], num: int) -> float:\n epsilon: float = 1e-10\n #: \u30a4\u30f3\u30c7\u30c3\u30af\u30b9\u306f\u4e0b\u6dfb\u5b57\n T_list: list = [trapezoid(integrand, 0, 1, 1)]\n for up_index in range(1, num+1):\n #: T_0 \u306e\u30ea\u30b9\u30c8\u66f4\u65b0\n T_0: float = trapezoid(integrand, 0, 1, 2 ** up_index)\n # \u7d42\u4e86\u5224\u5b9a\n if abs(T_0 - T_list[-1]) < epsilon:\n return T_0\n\n #: 2^up_index \u5206\u5272\u306b\u5bfe\u3059\u308b Romberg \u88dc\u5916\n T_tmp_list: list = [T_0]\n for low_index in range(1, up_index+1):\n T_tmp_list.append(\n (4. ** low_index * T_tmp_list[low_index-1] - T_list[low_index-1])\n / (4. ** low_index - 1.)\n )\n #: \u7d42\u4e86\u5224\u5b9a\n if low_index > 1:\n if abs(T_tmp_list[-1] - T_tmp_list[-2]) < epsilon:\n return T_tmp_list[-1]\n #: \u30ea\u30b9\u30c8\u66f4\u65b0\n T_list = T_tmp_list\n return T_list[-1]\n \nprint(f\"pi = {romberg(integrand, 6):.15f}\")\n```\n\n pi = 3.141592653649611\n\n\n## \u30ac\u30a6\u30b9\u30fb\u30eb\u30b8\u30e3\u30f3\u30c9\u30eb\u7a4d\u5206\n\n\u30ac\u30a6\u30b9\u30fb\u30eb\u30b8\u30e3\u30f3\u30c9\u30eb\u7a4d\u5206\u306f\u30eb\u30b8\u30e3\u30f3\u30c9\u30eb\u591a\u9805\u5f0f\u306e\u30bc\u30ed\u70b9\n\\begin{equation}\nx_1\\leq x_2\\leq\\cdots\\leq x_k\\leq\\cdots\\leq x_N\\quad\n\\text{s.t.}\\quad P_N(x_k)=0\n\\end{equation}\n\u3092\u5229\u7528\u3059\u308b\u3002\n\u30eb\u30b8\u30e3\u30f3\u30c9\u30eb\u591a\u9805\u5f0f\u306f\u30dc\u30cd\u306e\u6f38\u5316\u5f0f\n\\begin{equation}\nP_0(x)=1,\\quad\nP_1(x)=x,\\quad\n(n+1)P_{n+1}(x)=(2n+1)xP_n(x)-nP_{n-1}(x)\n\\end{equation}\n\u306b\u3088\u3063\u3066\u751f\u6210\u3055\u308c\u308b\u3002\n\n\n```\ndef legendre(n: int, x: float) -> float:\n '''\n \u30eb\u30b8\u30e3\u30f3\u30c9\u30eb\u591a\u9805\u5f0f\n '''\n if n == 0:\n return 1\n elif n == 1:\n return x\n \n p: list[float] = [0., 1., x]\n\n for k in range(n-1):\n p[0] = p[1]\n p[1] = p[2]\n # \u30dc\u30cd\u306e\u6f38\u5316\u5f0f\n p[2] = ((2*k+3) * x * p[1] - (k+1) * p[0]) / (k+2)\n return p[2]\n\nprint(f\"P_10(0.148874) = {legendre(10, 0.148874):.4e}\")\n```\n\n P_10(0.148874) = -8.9179e-07\n\n\n\u30eb\u30b8\u30e3\u30f3\u30c9\u30eb\u591a\u9805\u5f0f\u306e\u30bc\u30ed\u70b9\u306f\u4e8c\u5206\u6cd5\u3067\u6c42\u3081\u308b:\n\n1. \u89e3\u304c\u542b\u307e\u308c\u308b\u9818\u57df $[a,b]$ \u3092\u4e8b\u524d\u306b\u6307\u5b9a\u3059\u308b\n2. \u4e2d\u70b9\u3092\u5229\u7528\u3057\u3066\u9818\u57df\u3092 $[a,(a+b)/2]$ \u3068 $[(a+b)/2,b]$ \u306e2\u3064\u306b\u5206\u5272\u3059\u308b\n3. 
\u89e3\u3092\u542b\u3080\u9818\u57df\u3092\u4e2d\u9593\u5024\u306e\u5b9a\u7406\u3067\u5224\u5b9a\u3059\u308b\n4. \u89e3\u3092\u542b\u3080\u9818\u57df\u3092\u9078\u629e\u3057\u3066\u65b0\u305f\u306b\u8a08\u7b97\u3059\u308b\u9818\u57df\u3068\u3057\u3066\u7e70\u308a\u8fd4\u3059\n5. \u9818\u57df\u5e45\u304c\u4e00\u5b9a\u4ee5\u4e0b\u306b\u306a\u3063\u305f\u3089\u7d42\u4e86\n\n\u4e2d\u9593\u5024\u306e\u5b9a\u7406\u304b\u3089\u9818\u57df $[a,b]$ \u306b\u5bfe\u3057\u3066 $f(a)f(b)<0$ \u306a\u3089\u3001\u3053\u306e\u9818\u57df $[a,b]$ \u306b $f(x)=0$ \u306e\u89e3\u304c\u542b\u307e\u308c\u308b\u4e8b\u304c\u308f\u304b\u308b\u3002\n\n\n```\ndef bisection(func: Callable[[float],float],\n left: float, right: float) -> float:\n '''\n \u4e8c\u5206\u6cd5\u306e\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\n '''\n # \u4e8c\u5206\u6cd5\u3067\u89e3\u3051\u306a\u3044\u5834\u5408\n if func(left) * func(right) >= 0:\n raise Exception('Cannot be solved by bisection method!')\n\n # \u6570\u5024\u7cbe\u5ea6\n epsilon: float = 1e-10\n\n middle: float = 0.\n while True:\n # \u4e2d\u70b9\u8a08\u7b97\n middle = (left + right)/2.\n # \u4e2d\u9593\u5024\u306e\u5b9a\u7406\u304b\u3089\u89e3\u3092\u542b\u3080\u533a\u9593\u3092\u9078\u629e\n if func(left) * func(middle) < 0:\n right = middle\n else:\n left = middle\n # \u7d42\u4e86\u5224\u5b9a\n if abs(func(left) - func(right)) < epsilon:\n break\n \n middle = (left+right)/2.\n return middle\n\nprint(f\"x = {bisection(lambda x: legendre(3, x), 0.1, 1):.4f} s.t. P_3(x) < 1e-10\")\n```\n\n x = 0.7746 s.t. P_3(x) < 1e-10\n\n\n$n$ \u6b21\u30eb\u30b8\u30e3\u30f3\u30c9\u30eb\u95a2\u6570\u306e\u6b63\u306e\u30bc\u30ed\u70b9 $x_k$ \u306f\n\\begin{equation}\n\\begin{split}\n&n=2m\\quad\\text{($n$ is even)}\\\\\n&n=2m+1\\quad\\text{($n$ is odd)}\n\\end{split}\n\\end{equation}\n\u3068\u3059\u308b\u3068\n\\begin{equation}\n\\sin\\left(\\frac{n-1-2k}{2n+1}\\pi\\right)\n< x_k <\n\\sin\\left(\\frac{n+1-2k}{2n+1}\\pi\\right)\n\\end{equation}\n\u306b\u3042\u308b\u3002\n\u307e\u305f$-x_k$\u3082\u30bc\u30ed\u70b9\u3067\u3042\u308b\u3053\u3068\u304c\u77e5\u3089\u308c\u3066\u3044\u308b\u3002\n$n$ \u304c\u5947\u6570\u306e\u5834\u5408\u306f $x=0$ \u3082\u30bc\u30ed\u70b9\u306b\u306a\u308b\u3002\n\u3053\u306e\u6027\u8cea\u3092\u5229\u7528\u3057\u3066\u30eb\u30b8\u30e3\u30f3\u30c9\u30eb\u95a2\u6570\u306e\u30bc\u30ed\u70b9\u3092\u4e8c\u5206\u6cd5\u306b\u3088\u3063\u3066\u6c42\u3081\u308b\u3053\u3068\u304c\u3067\u304d\u308b\u3002\n\n\n```\nimport math\n\ndef legendre_zeros(num: int) -> list:\n # \u521d\u671f\u5316\n m: int = 0\n zeros: list[float ] = []\n if num % 2 == 0:\n m = num / 2\n else:\n m = (num - 1) / 2\n zeros.append(0)\n\n # \u4e8c\u5206\u6cd5\u306e\u533a\u9593\u7b97\u51fa\n sections: list[float] = []\n for k in np.arange(0, m+1, 1):\n boundary: float = math.sin((num-(2*m+1)+2*k)/(2*num+1) * math.pi)\n if boundary > 0:\n sections.append(boundary)\n else:\n sections.append(1e-10)\n\n # \u30bc\u30ed\u70b9\u306e\u8a08\u7b97\n for k in range(0, int(m)):\n zero_tmp: float = bisection(lambda x: legendre(num, x), \n sections[k], sections[k+1])\n zeros.append(zero_tmp)\n zeros.insert(0, -zero_tmp)\n return zeros\nprint(legendre_zeros(5))\n```\n\n [-0.9061798459380039, -0.5384693101074591, 0, 0.5384693101074591, 0.9061798459380039]\n\n\n\u5dee\u5206\u8fd1\u4f3c\n\\begin{equation}\nI\\equiv\\int_{-1}^1dx \\tilde{f}(x)\n\\simeq\\sum_{k=1}^N w_k \\tilde{f}(x_k),\\quad\nw_k\\equiv 
2\\left[\\sum_{l=0}^{N-1}(2l+1)\\left[P_l(x_k)\\right]^2\\right]^{-1}\n\\end{equation}\n\u306e\u4fc2\u6570\u3092\u8a08\u7b97\u3059\u308b\u3002\n\n\n```\ndef legendre_coeff(num: int) -> list:\n zeros: list[float] = legendre_zeros(num)\n\n coeff: list[float] = []\n for k in range(0, num):\n sum: float = 0\n for l in range(0, num):\n sum += (2*l+1)*legendre(l,zeros[k])**2\n coeff.append(2/sum)\n return coeff\nlegendre_coeff(5)\n```\n\n\n\n\n [0.23692688505777393,\n 0.47862867049807717,\n 0.5688888888888889,\n 0.47862867049807717,\n 0.23692688505777393]\n\n\n\n\u7a4d\u5206\u3092\u5909\u5f62\n\\begin{equation}\n I=\\int_{0}^1dx\\frac{4}{1+x^2}\n =\\frac{1}{2}\\int_{-1}^1dx\\frac{4}{1+x^2}\n =\\pi\n\\end{equation}\n\u3057\u3066\u30ac\u30a6\u30b9\u30fb\u30eb\u30b8\u30e3\u30f3\u30c9\u30eb\u7a4d\u5206\u3092\u5b9f\u65bd\u3002\n\n\n```\ndef gauss_legendre(integrand: Callable[[float], float], num: int) -> float:\n zeros: list[float] = legendre_zeros(num)\n coeff: list[float] = legendre_coeff(num)\n\n ans: float = 0\n for k in range(0,num):\n ans += coeff[k] * integrand(zeros[k])\n \n return ans\n \nprint(f\"pi = {gauss_legendre(integrand, 20)/2:.15f}\")\n```\n\n pi = 3.141592653590604\n\n", "meta": {"hexsha": "f4d33df8b8adf18aab89891b90ff531be440c250", "size": 17458, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "simulation/Integration.ipynb", "max_stars_repo_name": "applejxd/colaboratory", "max_stars_repo_head_hexsha": "3a301af16b07c3870b6cd69898011ba794ce7a53", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "simulation/Integration.ipynb", "max_issues_repo_name": "applejxd/colaboratory", "max_issues_repo_head_hexsha": "3a301af16b07c3870b6cd69898011ba794ce7a53", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "simulation/Integration.ipynb", "max_forks_repo_name": "applejxd/colaboratory", "max_forks_repo_head_hexsha": "3a301af16b07c3870b6cd69898011ba794ce7a53", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 29.3905723906, "max_line_length": 231, "alphanum_fraction": 0.4098407607, "converted": true, "num_tokens": 3629, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9425067228145365, "lm_q2_score": 0.8947894738180211, "lm_q1q2_score": 0.8433450945771666}} {"text": "# Perform classification using a single layer perceptron\n\n\n```python\nimport itertools\nimport math\nimport numpy as np\nimport operator\nimport random\n\nfrom matplotlib import pyplot as plt\n%matplotlib inline\n\nimport sympy as sp\nfrom sympy.functions.elementary.exponential import exp\nfrom sympy.utilities.lambdify import lambdify\n```\n\n\n```python\n\n```\n\n# Generate sample data\n\nhttps://en.wikipedia.org/wiki/Rotation_matrix\n\nhttps://en.wikipedia.org/wiki/Multivariate_normal_distribution\n\n\n```python\n# Rotation matrix\ndef rot(theta):\n theta = np.radians(theta)\n c = np.cos(theta)\n s = np.sin(theta)\n return np.array([[c, -s], [s, c]])\n```\n\n\n```python\n# Sample points from a rotated 2-dimensional gaussian distribution\ndef sample_2d_normal(center, sd_x, sd_y, theta, N):\n \n # Covariance matrix\n cov = np.diag([sd_x ** 2, sd_y ** 2])\n \n # Rotation matrix\n R = rot(-theta)\n\n # Rotate covariance matrix\n cov = np.dot(R, cov).T\n\n # Sample points from the rotated distribution\n return np.random.multivariate_normal(center, cov, N).T\n```\n\n\n```python\n## First cluster of points\n\n# Number of data points\nN1 = 1000\n\n# Data\nx1 = sample_2d_normal([-3, 3], sd_x=1, sd_y=2, theta=45, N=N1)\n\n# Labels\ny1 = np.array([0] * N1)\n```\n\n /usr/lib/python3.5/site-packages/ipykernel/__main__.py:14: RuntimeWarning: covariance is not symmetric positive-semidefinite.\n\n\n\n```python\n## Second cluster of points\n\n# Number of data points\nN2 = 1000\n\n# Data\nx2 = sample_2d_normal([1, -1], sd_x=2, sd_y=1, theta=45, N=N2)\n\n# Labels\ny2 = np.array([1] * N2)\n```\n\n /usr/lib/python3.5/site-packages/ipykernel/__main__.py:14: RuntimeWarning: covariance is not symmetric positive-semidefinite.\n\n\n\n```python\n## Together\n\n# Number of data points\nN = N1 + N2\n\n# Data\nx = np.concatenate([x1, x2], axis=1)\n\n# Labels\ny = np.concatenate([y1, y2])\n```\n\n\n```python\n# Shuffle dataset\nindex = list(np.random.permutation(N))\nx = x[:, index]\ny = y[index]\n```\n\n\n```python\n# Plot data\ndef plot_data(x, y):\n plt.plot(x[0, y == 0], x[1, y == 0], 'g+')\n plt.plot(x[0, y == 1], x[1, y == 1], 'b+')\n plt.grid()\n\nplot_data(x, y)\n```\n\n**Goal:** separate the two clusters of points as best as possible with a line.\n\n## Model\n\n\n```python\n# Add ones to data, so that a bias can be learned by the model\nz = np.vstack([x, np.array([1] * N)])\n```\n\n\n```python\n# Linear combination of weights and data\ndef f(w, z):\n return np.dot(w.T, z)\n```\n\n\n```python\n# Activation function\n\n# As a convention, let's prefix the sympy variables and functions with an underscore.\n\n# Symbolic activation function (logistic)\ndef _activation(_h):\n return 1 / (1 + exp(-_h))\n\n# A symbolic variable. 
In practice, variable `h` designates `f(w, z)`.\n_h = sp.var('h')\n\n# Regular python activation function\nactivation = lambdify(h, _activation(h))\n\n# Plot activation function\nplot_x = np.linspace(-10, 10, 100)\nplot_y = np.vectorize(activation)(plot_x)\nplt.plot(plot_x, plot_y)\nplt.grid()\n```\n\nThe model is equivalent to a single layer neural network.\n\nFor a vector of weights $w \\in \\mathbb{R}^3$ and a point $z \\in \\mathbb{R}^3$, the model is going to be $sgn(activation(f(w, z))$, where $sgn$ is the sign function.\n\nIf $sgn(activation(f(w, z)) < 0$, the point is assigned to the first class.\n\nConversely, if $sgn(activation(f(w, z)) > 0$, the point is assigned to the second class.\n\n## Error definition\n\n\n```python\n# Symbolic variable for labels\n_y = sp.var('_y')\n\n# Error function (squared error): error commited by the model at one data point\n_error = (_activation(_h) - _y) ** 2\n```\n\n\n```python\n# Regular python error function\nerror = lambdify((_h, _y), _error)\n```\n\n\n```python\n# Loss function: error commited by the model for all data points\ndef loss(w, z, y):\n N = z.shape[1]\n return sum(error(f(w, z[:, i]), y[i]) for i in range(N))\n```\n\n## Gradient of the error\n\n\n```python\n# Symbolic derivative of error which respect to `h`\n_d_error = _error.diff(_h)\n\n# Regular python function for the derivative of error\nd_error = lambdify((_h, _y), _d_error)\n```\n\nUsing [chain rule](https://en.wikipedia.org/wiki/Chain_rule):\n\n$$\nerror\\_gradient(w, z, y) = \\nabla_w error(f(w, z), y) = d\\_error(f(w, z), y) \\cdot \\underbrace{\\nabla_w f(w, z)}_{z} \\ \\ \\ \\ \\ \\ \\ (1)\n$$\n\nTo go further let's derive the [delta rule](https://en.wikipedia.org/wiki/Delta_rule).\n\nLet's substitute $h$ for $f(w, z)$.\n\nIf we assume $error$ to be defined as $error(h, y) = (activation(h) - y)^2$, then, using the chain rule again:\n\n$$\nd\\_error(h, y) = \\frac{\\partial error}{\\partial h} = 2 \\cdot (activation(h) - y) \\cdot d\\_activation(h)\n$$\n\nFurthermore, if we assume $activation$ to be the logistic function defined as $activation(h) = \\frac{1}{1+e^{-h}}$, which has a derivative nicely expressed as:\n\n$$\nd\\_activation(h) = \\frac{\\partial activation}{\\partial h} = activation(h) \\cdot (1 - activation(h))\n$$\n\nThen:\n\n$$\nd\\_error(h, y) = 2 \\cdot (activation(h) - y) \\cdot activation(h) \\cdot (1 - activation(h))\n$$\n\nWhich finally gives, substituting $d\\_error(h, y)$ in $(1)$:\n\n$$\nerror\\_gradient(w, z, y) = 2 \\cdot (activation(f(w, z)) - y) \\cdot activation(f(w, z)) \\cdot (1 - activation(f(w, z)) \\cdot z \\ \\ \\ \\ \\ \\ \\ (2)\n$$\n\nHowever, let's stick to $(1)$, which allows for any derivable $error$ and $activation$ functions.\n\n\n```python\n# By equation (1):\ndef error_gradient(w, z, y):\n return d_error(f(w, z), y) * z\n\n# Alternatively (and faster), by equation (2):\n#def error_gradient(w, z, y):\n# a = activation(f(w, z))\n# return 2 * (a - y) * a * (1 - a) * z\n```\n\n## Stochastic gradient descent algorithm\n\nhttps://en.wikipedia.org/wiki/Stochastic_gradient_descent\n\n\n```python\n# Update rule\ndef update(w, z, y, lr):\n # `lr` stands for \"learning rate\"\n return w - lr * error_gradient(w, z, y)\n```\n\n\n```python\ndef sgd(z, y, initial_weights, lr):\n\n w = initial_weights\n N = z.shape[1]\n \n while True:\n \n # Stochastic part of the algorithm: choose a data point at random for this iteration.\n i = random.randint(0, N - 1)\n \n w = update(w, z[:, i], y[i], lr)\n current_loss = loss(w, z, y)\n yield w, 
current_loss\n```\n\n## Run algorithm\n\n\n```python\n# Set learning rate\nlr = 0.05\n\n# Numbers of iterations\nsteps_nb = 1000\n\n# Initialize weights\nw = np.array([1, 1, 0])\n\n# Initialize algorithm\n\ncurrent_loss = 1\nstep = 0\n\nloss_history = []\nweights_history = []\n\ngen = sgd(z, y, w, lr)\n\n# Run algorithm\n\nfor step in range(1, steps_nb + 1):\n\n w, current_loss = next(gen)\n\n weights_history.append(w)\n loss_history.append(current_loss)\n\n # Pretty-printing ironically requires code that is not so pretty\n if step % 10 ** (len(str(step)) - 1) == 0 or step == steps_nb:\n print('step %s, loss = %.10f' % (str(step).zfill(len(str(steps_nb))), current_loss))\n```\n\n step 0001, loss = 649.3501900424\n step 0002, loss = 645.6639626632\n step 0003, loss = 641.9927097608\n step 0004, loss = 634.3177535079\n step 0005, loss = 624.6324853710\n step 0006, loss = 602.4517600195\n step 0007, loss = 598.1356396507\n step 0008, loss = 582.0149726857\n step 0009, loss = 622.3880009774\n step 0010, loss = 566.4120966837\n step 0020, loss = 533.8024731744\n step 0030, loss = 406.2087968821\n step 0040, loss = 323.7647993776\n step 0050, loss = 266.9804214954\n step 0060, loss = 237.6695998007\n step 0070, loss = 220.1762934562\n step 0080, loss = 211.5830002945\n step 0090, loss = 203.5261532749\n step 0100, loss = 198.8597984758\n step 0200, loss = 156.4961158002\n step 0300, loss = 132.1478389457\n step 0400, loss = 118.7821965091\n step 0500, loss = 107.1656187677\n step 0600, loss = 97.7075181417\n step 0700, loss = 94.1520733311\n step 0800, loss = 83.7875889118\n step 0900, loss = 81.1940006730\n step 1000, loss = 78.1554197753\n\n\n## Plot loss history\n\n\n```python\nx_plot = list(range(1, len(loss_history) + 1))\ny_plot = list(loss_history)\n\nplt.plot(x_plot, y_plot)\nplt.xlabel ('step')\nplt.ylabel ('loss')\nplt.grid()\n\nplt.show()\n```\n\n## Visualize algorithm\n\nThe best way to visualize what the model is doing is to plot the decision surface, which is the set of points for which $sgn(activation(f(w, z)) = 0$.\n\nLet's find the equation of the decision surface.\n\nFor all points $x = (x_1, x_2)$, which translates as $z = (x_1, x_2, 1)$:\n\n$$\nsgn(activation(f(w, z)) = 0 \\\\\n\\Leftrightarrow \\\\\nw_1 \\cdot \\underbrace{z_1}_{x_1} + w_2 \\cdot \\underbrace{z_2}_{x_2} + w_3 \\cdot \\underbrace{z_3}_{1} = 0 \\\\\n\\underset{w_2 \\neq 0,\\ please}{\\Leftrightarrow} \\\\\nx_2 = - \\frac{w_1}{w_2} \\cdot x_1 - \\frac{w_3}{w_2}\n$$\n\n... 
which is the equation of a line.\n\nAbout decision surfaces: http://www.cs.stir.ac.uk/courses/ITNP4B/lectures/kms/2-Perceptrons.pdf, slide 7\n\n\n```python\ndef decision_surface(w, x_1):\n return -w[0] / w[1] * x_1 - w[2] / w[1]\n```\n\n\n```python\n# Plot data along with the decision surface of the model\ndef plot(w, x, y):\n \n # Plot data\n plt.plot(x[0, y == 0], x[1, y == 0], 'g+')\n plt.plot(x[0, y == 1], x[1, y == 1], 'b+')\n \n # Get data bounds\n x_min, x_max = x[0, :].min(), x[0, :].max()\n x_range = x_max - x_min\n y_min, y_max = x[1, :].min(), x[1, :].max()\n y_range = y_max - y_min\n\n # Enlarge data bounds for prettier plotting\n zoom_out_ratio = 1.1\n x_min -= (zoom_out_ratio - 1) * x_range\n x_max += (zoom_out_ratio - 1) * x_range\n y_min -= (zoom_out_ratio - 1) * y_range\n y_max += (zoom_out_ratio - 1) * y_range\n\n # Set graph limits\n plt.xlim(x_min, x_max)\n plt.ylim(y_min, y_max)\n\n # Plot the decision surface of the model\n x1_min, x1_max = -100, 100\n x2_min = decision_surface(w, x1_min)\n x2_max = decision_surface(w, x1_max)\n plt.plot([x1_min, x1_max], [x2_min, x2_max], 'r')\n\n plt.grid()\n```\n\n\n```python\n# Find a set of losses at which to display the progression of the algorithm\n\n# Get loss bounds\nloss_min, loss_max = min(loss_history), max(loss_history)\nloss_range = loss_max - loss_min\n\n# Split losses\nselected_losses = [loss_max - r * loss_range for r in (0, 0.8, 0.9, 1)]\nselected_losses = sorted(selected_losses, reverse=True)\n\nselected_losses\n```\n\n\n\n\n [649.350190042356, 192.2415403722369, 135.10295916347206, 77.96437795470717]\n\n\n\n\n```python\n# Find a set of strictly increasing steps for which the loss is around `selected_losses`\n\nselected_steps = []\n\ni = 0\nfor step, current_loss in enumerate(loss_history):\n\n if current_loss <= selected_losses[i]:\n selected_steps.append(step + 1)\n i += 1\n\n if i >= len(selected_losses):\n break\n\n# Float comparison is tricky. Make the step corresponding\n# to the lowest loss has been selected. 
\nif len(selected_steps) < len(selected_losses):\n selected_steps.append(steps_nb - 1)\n\nselected_steps\n```\n\n\n\n\n [1, 109, 285, 962]\n\n\n\n\n```python\n# Visualize\nfor step in selected_steps:\n print('step %i' % (step))\n plot(weights_history[step - 1], x, y)\n plt.show()\n```\n\n## Re-plot loss history\n\nHighlight `selected_steps` this time\n\n\n```python\nx_plot = list(range(1, len(loss_history) + 1))\ny_plot = list(loss_history)\n\nselected_losses = [loss_history[step - 1] for step in selected_steps]\n\nplt.plot(x_plot, y_plot)\nplt.plot(selected_steps, selected_losses, 'ro')\n\nplt.xlabel ('step')\nplt.ylabel ('loss')\nplt.grid()\n\nplt.show()\n```\n\n## Plot weights history\n\nHighlight `selected_steps` as well\n\n\n```python\nweights_x2 = list(map(operator.itemgetter(1), weights_history))\nweights_x1 = list(map(operator.itemgetter(0), weights_history))\n\nselected_weights_x2 = [weights_history[step - 1][1] for step in selected_steps]\nselected_weights_x1 = [weights_history[step - 1][0] for step in selected_steps]\n\nplt.plot(weights_x2, weights_x1)\nplt.plot(selected_weights_x2, selected_weights_x1, 'ro')\n\nplt.xlabel ('weight_x1')\nplt.ylabel ('weight_x2')\nplt.grid()\n\nplt.show()\n```\n\n\n```python\n\n```\n\n## Do the same as above, using Keras\n\n\n```python\nfrom keras.models import Sequential\nfrom keras.layers.core import Dense\nfrom keras.optimizers import SGD\n```\n\n\n```python\nlm = Sequential([Dense(1, activation='sigmoid', input_shape=(2,))])\nlm.compile(optimizer=SGD(lr=0.1), loss='mse')\n```\n\n\n```python\nlm.fit(x.T, y)\n```\n\n Epoch 1/10\n 2000/2000 [==============================] - 0s - loss: 0.1804 \b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\n Epoch 2/10\n 2000/2000 [==============================] - 0s - loss: 0.0731 \b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\n Epoch 3/10\n 2000/2000 [==============================] - 0s - loss: 0.0605 \b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\n Epoch 4/10\n 2000/2000 [==============================] - 0s - loss: 0.0527 \b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\n Epoch 5/10\n 2000/2000 [==============================] - 0s - loss: 0.0477 \b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\n Epoch 6/10\n 2000/2000 [==============================] - 0s - loss: 0.0445 \b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\n Epoch 7/10\n 2000/2000 [==============================] - 0s - loss: 0.0422 \b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\n Epoch 8/10\n 2000/2000 [==============================] - 0s - loss: 0.0405 \b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\n Epoch 9/10\n 2000/2000 [==============================] - 0s - loss: 0.0392 \b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\n Epoch 10/10\n 2000/2000 
[==============================] - 0s - loss: 0.0381 \b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\n\n\n\n\n\n \n\n\n\n\n```python\nweights, bias = lm.get_weights()\nweights = np.vstack([weights, bias])\n```\n\n\n```python\nplot(weights, x, y)\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "b06176e8d8a27f26fa9b33f5ac98140ffafb4a36", "size": 241503, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "classification_sgd.ipynb", "max_stars_repo_name": "quentin-auge/simple-nn", "max_stars_repo_head_hexsha": "c5b17c5550d8f14eb767c9a7f39bc5860d58b4c2", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-10-05T19:25:24.000Z", "max_stars_repo_stars_event_max_datetime": "2020-10-05T19:25:24.000Z", "max_issues_repo_path": "classification_sgd.ipynb", "max_issues_repo_name": "quentin-auge/simple-nn", "max_issues_repo_head_hexsha": "c5b17c5550d8f14eb767c9a7f39bc5860d58b4c2", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "classification_sgd.ipynb", "max_forks_repo_name": "quentin-auge/simple-nn", "max_forks_repo_head_hexsha": "c5b17c5550d8f14eb767c9a7f39bc5860d58b4c2", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 225.9148737138, "max_line_length": 27052, "alphanum_fraction": 0.9034877414, "converted": true, "num_tokens": 4889, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9425067244294588, "lm_q2_score": 0.8947894618940992, "lm_q1q2_score": 0.8433450847838055}} {"text": "Problem 1 (20 points)\nShow that the stationary point (zero gradient) of the function$$\n\\begin{aligned}\n f=2x_{1}^{2} - 4x_1 x_2+ 1.5x^{2}_{2}+ x_2\n\\end{aligned}\n$$is a saddle (with indefinite Hessian). Find the directions of downslopes away from the saddle. Hint: Use Taylor's expansion at the saddle point. 
Find directions that reduce $f$\n\n\n```python\nimport sympy as sym\n\n\nx1, x2 = sym.symbols(\"x1 x2\") # variables to be used in the function\nf = 2*(x1)**2 - (4 * x1 * x2) + (1.5 * (x2)**2) + x2 # Declaring the function\n\ngradient = sym.derive_by_array(f, (x1, x2)) # Finding the gradient of the function\n\nhessian = sym.Matrix(sym.derive_by_array(gradient, (x1, x2))) # Finding Hessian of the function\n\nstationary_points = sym.solve(gradient, (x1, x2)) \nprint(f'Stationary points are:\\n {stationary_points}')\n\nvalue = f.subs({x1: stationary_points[x1], x2: stationary_points[x2]})\nprint(f'Value of the function at stationary point: {value}')\n\negnval = hessian.eigenvals() # Finding the eigenvalues of the Hessian\ne_val = list(egnval.keys())\nprint(\"The eigenvales are:\")\nprint(*e_val)\n\n\n# checking for the nature of he Hessain\nposi = zo = nposi = 0\n\nfor val in e_val:\n val = float(val)\n if val > 0:\n posi += 1\n elif val == 0:\n zo += 1\n else:\n nposi += 1\n\nif posi == len(e_val):\n print(\"Positive Definite Hessian\")\nelif zo == len(e_val):\n print(\"Undertimed Eigen Values are zero\")\nelif nposi == len(e_val):\n print(\"Negative Definite Hessian\")\nelse:\n print(\"Indefinite Hessian\")\n```\n\n Stationary points are:\n {x1: 1.00000000000000, x2: 1.00000000000000}\n Value of the function at stationary point: 0.500000000000000\n The eigenvales are:\n 7.53112887414927 -0.531128874149275\n Indefinite Hessian\n\n", "meta": {"hexsha": "be4f3179814a85fb033775235e3b5f496402857c", "size": 3006, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Imcomplete_HW2.ipynb", "max_stars_repo_name": "MrNobodyInCamelCase/Trail_repo", "max_stars_repo_head_hexsha": "2508eef78e9793945d46c2394a61633e693387b7", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Imcomplete_HW2.ipynb", "max_issues_repo_name": "MrNobodyInCamelCase/Trail_repo", "max_issues_repo_head_hexsha": "2508eef78e9793945d46c2394a61633e693387b7", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Imcomplete_HW2.ipynb", "max_forks_repo_name": "MrNobodyInCamelCase/Trail_repo", "max_forks_repo_head_hexsha": "2508eef78e9793945d46c2394a61633e693387b7", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 29.1844660194, "max_line_length": 184, "alphanum_fraction": 0.5419161677, "converted": true, "num_tokens": 567, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9425067195846918, "lm_q2_score": 0.8947894661025424, "lm_q1q2_score": 0.843345084415245}} {"text": "---\n# Section 1.1: Matrix Multiplication\n---\n\n### Example:\n\nLet\n\n$$\nA = \n\\begin{bmatrix} \n2 & 1 & 4 \\\\ \n3 & -2 & 1 \n\\end{bmatrix} \n\\in \\mathbb{R}^{2 \\times 3},\n\\qquad\nx = \n\\begin{bmatrix}\n1 \\\\ 2 \\\\ 1\n\\end{bmatrix}\n\\in \\mathbb{R}^3.\n$$\n\nCompute the matrix-vector product $Ax$ by hand and check that Julia gives the same answer.\n\n\n```julia\nA = [2 1 4; 3 -2 1.0]\n\nx = [1; 2; 1.0]\n```\n\n\n```julia\n2*1 + 1*2 + 4*1\n```\n\n\n```julia\n3*1 + (-2)*2 + 1*1\n```\n\n\n```julia\nb = A*x\n```\n\n---\n\nIn general, if $A$ is a real matrix with $m$ rows and $n$ columns, and $x$ is a real vector with $n$ entries, then\n\n$$\nA = \n\\begin{bmatrix}\na_{11} & \\cdots & a_{1n} \\\\\n\\vdots & & \\vdots \\\\\na_{m1} & \\cdots & a_{mn}\n\\end{bmatrix}\n\\in \\mathbb{R}^{m \\times n}\n\\quad \\text{and} \\quad\nx = \n\\begin{bmatrix}\nx_1 \\\\ \\vdots \\\\ x_n\n\\end{bmatrix}\n\\in \\mathbb{R}^n.\n$$\n\nIf $b = Ax$, then $b \\in \\mathbb{R}^m$ and\n\n$$\nb_i = \\sum_{j = 1}^n a_{ij} x_j = a_{i1}x_1 + \\cdots + a_{in}x_n,\n\\quad\ni = 1,\\ldots, m.\n$$\n\nThus, $b_i$ is the **inner-product** between $\\mathrm{row}_i(A) = \\begin{bmatrix} a_{i1} & \\cdots & a_{in} \\end{bmatrix}$ and the vector $x$.\n\nAlso, \n\n$$b = \\mathrm{col}_1(A) x_1 + \\cdots + \\mathrm{col}_n(A) x_n,$$\n\nso $b$ is a **linear combination** of the columns of $A$.\n\n\n```julia\nA\n```\n\n\n```julia\nx\n```\n\n\n```julia\nA[:,1]*x[1]\n```\n\n\n```julia\nA[:,2]*x[2]\n```\n\n\n```julia\nA[:,3]*x[3]\n```\n\n\n```julia\nA[:,1]*x[1] + A[:,2]*x[2] + A[:,3]*x[3]\n```\n\n---\n\n### Exercise:\n\nWrite a Julia function to multiply a matrix and a vector using for loops.\n\n\n```julia\nfunction Amulx_ip!(b, A, x)\n m, n = size(A)\n\n for i = 1:m\n b[i] = 0\n for j = 1:n\n b[i] += A[i,j]*x[j]\n end\n end\n\n return b\nend\n```\n\n\n```julia\nfunction Amulx_lc!(b, A, x)\n m, n = size(A)\n\n for i = 1:m\n b[i] = 0\n end\n \n for j = 1:n\n for i = 1:m\n b[i] += A[i,j]*x[j]\n end\n end\n\n return b\n\nend\n```\n\n\n```julia\nA = [2 1 4; 3 -2 1.0]\nx = [1; 2; 1.0]\n\nm, n = size(A)\n\nb = zeros(m)\n\nAmulx_lc!(b, A, x)\n\nb\n```\n\n---\n\n### Exercise:\n\nTest the speed of your function to compute $b = Ax$.\n\n\n\n```julia\n@time Amulx_ip!(b, A, x);\n```\n\n\n```julia\n@time Amulx_lc!(b, A, x);\n```\n\n\n```julia\nusing BenchmarkTools\n```\n\n\n```julia\n@btime Amulx_ip!(b, A, x);\n@btime Amulx_lc!(b, A, x);\n```\n\n\n```julia\n@benchmark Amulx_ip!(b, A, x)\n```\n\n\n```julia\n@benchmark Amulx_lc!(b, A, x)\n```\n\n\n```julia\nm, n = 2000, 1800\n\nA = rand(m, n)\nx = rand(n)\n\nb = A*x;\n```\n\n\n```julia\n@benchmark Amulx_ip!(b, A, x)\n```\n\n\n```julia\n@benchmark Amulx_lc!(b, A, x)\n```\n\n---\n\n## Storage of arrays in memory\n\nIn Julia, the matrix \n\n$$\nA = \n\\begin{bmatrix}\n2 & 1 & 4 \\\\\n3 & -2 & 1\n\\end{bmatrix}\n$$\n\nis stored in computer memory in **column-major order**:\n\n| $\\vdots$ |\n|:------:|\n| 2.0 |\n| 3.0 |\n| 1.0 |\n| -2.0 |\n| 4.0 |\n| 1.0 |\n| $\\vdots$ |\n\n### Computer memory architecture\n\nWhen the CPU needs data from memory, the **page** in memory where that data is located gets loaded into the **cache**.\n\n$$\n\\begin{matrix}\n& \\text{fast}& & \\text{slow} & \\\\\n\\fbox{CPU} & \\Longleftrightarrow & \\fbox{cache} & \\longleftrightarrow & \\fbox{memory} \\\\\n& & \\text{3 MB} & & \\text{16 GB}\n\\end{matrix}\n$$\n\nIt is better to load data from memory 
that is stored contiguously.\n\n\n\n### Row-major vs column-major order\n\nSome languages store arrays in **row-major order**, such as:\n- C/C++\n- Python\n- Mathematica\n\nOther languages use **column-major order**, such as:\n- Fortran\n- MATLAB\n- R\n- Julia\n\nSee the [Row-major order](https://en.wikipedia.org/wiki/Row-major_order) Wikipedia page for more information.\n\n\n---\n\n### Floating-point operations (flops)\n\nA **flop** is a *floating-point operation* between numbers stored in a floating-point format on a computer.\n\nWe will discuss this floating-point format in detail later in the course. For now, it is enough to know that this format is a way of storing real numbers on a computer that is like *scientific notation*. It allows us to store a large range of numbers, but only to a finite precision.\n\nIf $x$ and $y$ are numbers stored in a floating point format, then the following operations are each *one flop*:\n\n$$\nx + y, \\quad x - y, \\quad xy, \\quad x/y.\n$$\n\n---\n\n### Example:\n\nCount the number of flops in the following code.\n\n```julia\nfor j = 1:n\n for i = 1:m\n b[i] = b[i] + A[i, j]*x[j]\n end\nend\n```\n\n---\n\nFor $A \\in \\mathbb{R}^{n \\times n}$ and $x \\in \\mathbb{R}^n$, computing $b = Ax$ requires $2n^2$ flops.\n\nThus, we expect that computing $b = Ax$ with $n = 2000$ will take 4 times as long as the same computation with $n = 1000$.\n\nWe say that $b = Ax$ is an **order $n^2$** operation:\n\n$$\\fbox{Computing $b = Ax$ requires $O(n^2)$ flops.}$$\n\nThe exact number of flops matters less than how the number of flops grows as $n$ grows.\n\n---\n\n### Speed test\n\nWrite code to test compare running times when $n$ is doubled.\n\n\n```julia\n\n```\n\n---\n\n### Matrix-Matrix Multiplication\n\nLet $A \\in \\mathbb{R}^{m \\times n}$ and $X \\in \\mathbb{R}^{n \\times p}$. 
If $B = AX$ then $B \\in \\mathbb{R}^{m \\times p}$ and\n\n$$\nb_{ij} = \\sum_{k = 1}^n a_{ik} x_{kj}, \\quad i = 1,\\ldots,m, \\quad j = 1,\\ldots,p.\n$$\n\nThat is, $b_{ij}$ is the **inner-product** between row $i$ of $A$ and column $j$ of $X$.\n\nAlso, each column of $B$ is a **linear combination** of the columns of $A$.\n\n---\n\n### Exercise:\n\nWrite a Julia function to multiply two matrices.\n\n\n```julia\n\n```\n\n---\n\n### Exercise:\n\nCompare the running time of your function to Julia's built-in matrix-matrix multiplication.\n\n\n```julia\n\n```\n\n---\n\n### Exercise:\n\nWithout running the following code, what do you expect the output of the following code will be?\n\n```julia\nn = 1000\nA = rand(n,n)\nX = rand(n,n)\nt1 = @belapsed A*X\n\nn = 2000\nA = rand(n,n)\nX = rand(n,n)\nt2 = @belapsed A*X\n\nt2/t1\n```\n\n---\n\n### Block Matrices\n\nPartition $A \\in \\mathbb{R}^{m \\times n}$ and $X \\in \\mathbb{R}^{n \\times p}$ into blocks:\n\n$$\n\\begin{matrix}\n & & \\begin{matrix} n_1 & n_2 \\end{matrix} \\\\\nA = & \\begin{matrix} m_1 \\\\ m_2 \\end{matrix}\n & \\begin{bmatrix}\n A_{11} & A_{12} \\\\\n A_{21} & A_{22}\n \\end{bmatrix},\n\\end{matrix}\n\\qquad\n\\begin{matrix}\n & & \\begin{matrix} p_1 & p_2 \\end{matrix} \\\\\nX = & \\begin{matrix} n_1 \\\\ n_2 \\end{matrix}\n & \\begin{bmatrix}\n X_{11} & X_{12} \\\\\n X_{21} & X_{22}\n \\end{bmatrix},\n\\end{matrix}\n$$\n\nwhere $n = n_1 + n_2$, $m = m_1 + m_2$, and $p = p_1 + p_2$.\n\nIf $B = AX$ and\n\n$$\n\\begin{matrix}\n & & \\begin{matrix} p_1 & p_2 \\end{matrix} \\\\\nB = & \\begin{matrix} m_1 \\\\ m_2 \\end{matrix}\n & \\begin{bmatrix}\n B_{11} & B_{12} \\\\\n B_{21} & B_{22}\n \\end{bmatrix},\n\\end{matrix}\n$$\n\nthen\n\n$$\n\\begin{align}\n\\begin{bmatrix}\n B_{11} & B_{12} \\\\\n B_{21} & B_{22}\n\\end{bmatrix} = B = AX &= \n\\begin{bmatrix}\n A_{11} & A_{12} \\\\\n A_{21} & A_{22}\n\\end{bmatrix}\n\\begin{bmatrix}\n X_{11} & X_{12} \\\\\n X_{21} & X_{22}\n\\end{bmatrix}\\\\\\\\\n&=\n\\begin{bmatrix}\n A_{11} X_{11} + A_{12} X_{21} & A_{11} X_{12} + A_{12} X_{22} \\\\\n A_{21} X_{11} + A_{22} X_{21} & A_{21} X_{12} + A_{22} X_{22} \\\\\n\\end{bmatrix}\n\\end{align}\n$$\n\nThat is,\n\n$$\nB_{ij} = \\sum_{k = 1}^2 A_{ik}X_{kj}, \\qquad i,j = 1,2.\n$$\n\n---\n\n### Exercise:\n\nVerify the above block matrix multiplication formula on random matrices in Julia.\n\n\n```julia\n\n```\n\n---\n\n### Use of Block Matrix Operations to Decrease Data Movement Delays\n\nSuppose $n = rs$.\n\nIf $A, X \\in \\mathbb{R}^{n \\times n}$ are partitioned into $s \\times s$ block matrices where each block is of size $r \\times r$, then $B = AX$ can be computed as in the following pseudo-code:\n\n```julia\nB = zeros(n, n)\nfor i = 1:s\n for j = 1:s\n for k = 1:s\n Bij = Bij + Aik * Xkj\n end\n end\nend\n```\n\n\n\nWe can do the following operations in parallel:\n\n1. Multiply $A_{ik} X_{kj}$ in $O(r^3)$ time.\n\n2. Fetch the next blocks $A_{i,k+1}$ and $X_{k+1,j}$ in $O(r^2)$ time.\n\nWe should be able to choose $r$ so that step 2 takes less time than step 1.\n\nTherefore, the CPU will not have to wait to load data from the memory into the cache.\n\nCache size needs to be taken into account. 
We need to be able to store the following 5 submatrices in cache at the same time:\n\n$$\nB_{ij}, \\quad A_{ik}, \\quad X_{kj}, \\quad A_{i,k+1}, \\quad X_{k+1,j}.\n$$\n\nIn addition, multiple processors can compute different submatrices $B_{ij}$ at the same time.\n\n---\n", "meta": {"hexsha": "c4aeec1b5b6591cd53520017397f8e119ec56968", "size": 16511, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": ".ipynb_checkpoints/Section 1.1 - Matrix Multiplication-checkpoint.ipynb", "max_stars_repo_name": "jesus-lua-8/fall2021math434", "max_stars_repo_head_hexsha": "e6c6931676922193a6718b51c92232a3de7fa650", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-08-31T21:01:22.000Z", "max_stars_repo_stars_event_max_datetime": "2021-08-31T21:01:22.000Z", "max_issues_repo_path": ".ipynb_checkpoints/Section 1.1 - Matrix Multiplication-checkpoint.ipynb", "max_issues_repo_name": "jesus-lua-8/fall2021math434", "max_issues_repo_head_hexsha": "e6c6931676922193a6718b51c92232a3de7fa650", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": ".ipynb_checkpoints/Section 1.1 - Matrix Multiplication-checkpoint.ipynb", "max_forks_repo_name": "jesus-lua-8/fall2021math434", "max_forks_repo_head_hexsha": "e6c6931676922193a6718b51c92232a3de7fa650", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-11-16T19:28:56.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-16T19:28:56.000Z", "avg_line_length": 22.1327077748, "max_line_length": 292, "alphanum_fraction": 0.4428562776, "converted": true, "num_tokens": 3073, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8840392909114836, "lm_q2_score": 0.9539661025086188, "lm_q1q2_score": 0.8433435168153111}} {"text": "# Exercise 2 - Answers\n\n## Task 1\n* max 2 points, 1 point for correct obj function, 1 point for correct constraint\n\n\nThe profit that you make **for each kwH** is $2-0.01x^2-(1-0.01x)$. 
Thus, the amount of profit that you make is\n$$\n(1+0.01x-0.01x^2)x=x+0.01x^2-0.01x^3.\n$$\nThus, the optimization problem is\n$$\n\\begin{align}\n\\max \\qquad & x+0.01x^2-0.01x^3\\\\\n\\text{s.t.} \\qquad & 0\\leq x \\leq 50.\n\\end{align}\n$$\n\n## Task 2\n* max 2 points, 2 points if one gets correct solution, points are reduced if the implementation has any flaws \n\n\n```python\ndef bisection_line_search(a,b,f,L,epsilon):\n    x = a\n    y = b\n    iters = 0\n    while y-x>2*L:\n        if f((x+y)/2+epsilon)>f((x+y)/2-epsilon):\n            y=(x+y)/2+epsilon\n        else:\n            x = (x+y)/2-epsilon\n        iters = iters + 1\n    return (x+y)/2, iters\n```\n\n\n```python\n# the problem to be solved\ndef f_ex2(x):\n    return (1-x)**2+x\n```\n\n\n```python\n(x_opt,iters) = bisection_line_search(0,2,f_ex2,0.0001,1e-6)\nprint(\"Local optimum approximation:\", x_opt)\nprint(\"Number of iterations:\", iters)\n```\n\n Local optimum approximation: 0.49993946490478514\n Number of iterations: 14\n\n\n\n```python\n# check the objective function value at the solutions found\nprint(f_ex2(x_opt))\n\n# check the objective function values at the end points of the interval\nprint(f_ex2(0), f_ex2(2))\n```\n\n 0.7500000036644978\n 1 3\n\n\n## Task 3\n* max 2 points, 2 points if one gets correct solution, points are reduced if the implementation has any flaws \n\n\n```python\nimport math\ndef golden_section_line_search(a,b,f,L):\n    x = a\n    y = b\n    iters = 0\n    golden_ratio = (math.sqrt(5.0)-1)/2.0\n    f_left = f(y-golden_ratio*(y-x)) # function eval\n    f_right = f(x+golden_ratio*(y-x)) # function eval\n    while y-x>2*L:\n        if f_left > f_right:\n            x = y-golden_ratio*(y-x)\n            f_left = f_right # no function eval\n            f_right = f(x+golden_ratio*(y-x)) # function eval\n        else:\n            y = x+golden_ratio*(y-x)\n            f_right = f_left # no function eval\n            f_left = f(y-golden_ratio*(y-x)) # function eval\n        iters = iters + 1\n    return (x+y)/2, iters\n```\n\n\n```python\n(x_opt2,iters) = golden_section_line_search(0,2,f_ex2,0.0001)\nprint(x_opt2)\nprint(iters)\n```\n\n 0.4999795718254958\n 20\n\n\n## Task 4\n* 1 point for each problem\n\n**Problem 1**: $f(x)=x+0.01x^2-0.01x^3$. Thus,\n$$\nf'(x) = 1 + 2*0.01x - 3*0.01x^2 = 1 + 0.02x - 0.03x^2\n$$\nand\n$$\nf''(x) = 0.02-0.06x.\n$$\n\nIf $f'(x) = 0$, then \n$$\nx = \\frac{-0.02\\pm\\sqrt{0.02^2-4*(-0.03)*1}}{2*(-0.03)}=\\frac{0.02\\pm\\sqrt{0.1204}}{0.06}\n$$\n\n\n```python\ndef f1(x):\n    return x+0.01*x**2-0.01*x**3\n```\n\n\n```python\nimport math\nx1 = (0.02+math.sqrt(0.1204))/0.06\nx2 = (0.02-math.sqrt(0.1204))/0.06\nx3 = 0.0 # lower bound\nx4 = 50.0 # upper bound\nprint(x1, x2, x3, x4)\nprint(f1(x1),f1(x2),f1(x3),f1(x4))\n```\n\n 6.116450524299158 -5.449783857632491 0.0 50.0\n 4.202336906253436 -3.534188758105288 0.0 -1175.0\n\n\n$f''(x)<0$ when $0.02-0.06x<0$, that is, when $x>1/3$. So $x=\\frac{0.02+\\sqrt{0.1204}}{0.06}\\approx 6.116$ is a local maximum.\n\n\n```python\ndef hf(x):\n    return 0.02-0.06*x\n```\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nx = np.arange(-15.0, 15.0, 1.0)\n#plt.plot(x, f1(x), 'bo')\nplt.plot(x, f1(x), 'bo', x, hf(x),'ro') # objective function blue, second derivative red\nplt.show()\nprint(f1(0.0))\n```\n\n**Problem 2**: Now, $f(x) = (1-x)^2+x$. 
Thus, \n$$\nf'(x)=2(1-x)(-1)+1= -1 +2x.\n$$\n\n\nIf $f'(x) = 0$, then $2x=1$ and $x=\\frac 12$.\nThis is a local minimum since,\n$$\nf''(x) = 2,\n$$\nwhich is greater than $0$.\n\nThis means that the algorithms work!\n", "meta": {"hexsha": "87d73f2d619ef13f188300d670bed31ff5ed1479", "size": 14770, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Exercise 2 answers.ipynb", "max_stars_repo_name": "bshavazipour/TIES483-2022", "max_stars_repo_head_hexsha": "93dfabbfe1e953e5c5f83c44412963505ecf575a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Exercise 2 answers.ipynb", "max_issues_repo_name": "bshavazipour/TIES483-2022", "max_issues_repo_head_hexsha": "93dfabbfe1e953e5c5f83c44412963505ecf575a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Exercise 2 answers.ipynb", "max_forks_repo_name": "bshavazipour/TIES483-2022", "max_forks_repo_head_hexsha": "93dfabbfe1e953e5c5f83c44412963505ecf575a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-02-03T09:40:02.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-03T09:40:02.000Z", "avg_line_length": 41.2569832402, "max_line_length": 6916, "alphanum_fraction": 0.7020311442, "converted": true, "num_tokens": 1414, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9241418241572635, "lm_q2_score": 0.9124361652391386, "lm_q1q2_score": 0.8432204221711559}} {"text": "# Simulating a Predator and Prey Relationship\n\nWithout a predator, rabbits will reproduce until they reach the carrying capacity of the land. When coyotes show up, they will eat the rabbits and reproduce until they can't find enough rabbits. We will explore the fluctuations in the two populations over time.\n\n# Using Lotka-Volterra Model\n\n## Part 1: Rabbits without predators\n\nAccording to [Mother Earth News](https://www.motherearthnews.com/homesteading-and-livestock/rabbits-on-pasture-intensive-grazing-with-bunnies-zbcz1504), a rabbit eats six square feet of pasture per day. Let's assume that our rabbits live in a five acre clearing in a forest: 217,800 square feet/6 square feet = 36,300 rabbit-days worth of food. For simplicity, let's assume the grass grows back in two months. Thus, the carrying capacity of five acres is 36,300/60 = 605 rabbits.\n\nFemale rabbits reproduce about six to seven times per year. They have six to ten children in a litter. According to [Wikipedia](https://en.wikipedia.org/wiki/Rabbit), a wild rabbit reaches sexual maturity when it is about six months old and typically lives one to two years. 
For simplicity, let's assume that in the presence of unlimited food, a rabbit lives forever, is immediately sexually mature, and has 1.5 children every month.\n\nFor our purposes, then, let $x_t$ be the number of rabbits in our five acre clearing on month $t$.\n$$\n\\begin{equation*}\n R_t = R_{t-1} + 1.5\\frac{605 - R_{t-1}}{605} R_{t-1}\n\\end{equation*}\n$$\n\nThe formula could be put into general form\n$$\n\\begin{equation*}\n R_t = R_{t-1} + growth_{R} \\times \\big( \\frac{capacity_{R} - R_{t-1}}{capacity_{R}} \\big) R_{t-1}\n\\end{equation*}\n$$\n\nBy doing this, we allow users to interact with growth rate and the capacity value visualize different interaction \n\n\n\n```python\nfrom __future__ import print_function\nfrom ipywidgets import interact, interactive, fixed, interact_manual\nfrom IPython.display import display, clear_output\nimport ipywidgets as widgets\nimport matplotlib.pyplot as plt\nimport numpy as np\n```\n\n\n```python\n%matplotlib inline\nstyle = {'description_width': 'initial'}\ncapacity_R = widgets.FloatText(description=\"Capacity\", value=605)\ngrowth_rate_R = widgets.FloatText(description=\"Growth rate\", value=1.5)\ninitial_R = widgets.FloatText(description=\"Initial population\",style=style, value=1)\nbutton_R = widgets.Button(description=\"Plot Graph\")\ndisplay(initial_R, capacity_R, growth_rate_R, button_R)\n\ndef plot_graph_r(b):\n print(\"helo\")\n clear_output()\n display(initial_R, capacity_R, growth_rate_R, button_R)\n fig = plt.figure()\n ax = fig.add_subplot(111)\n t = np.arange(0, 20, 1)\n s = np.zeros(t.shape)\n R = initial_R.value\n for i in range(t.shape[0]):\n s[i] = R\n R = R + growth_rate_R.value * (capacity_R.value - R)/(capacity_R.value) * R\n if R < 0.0:\n R = 0.0\n \n ax.plot(t, s)\n ax.set(xlabel='time (months)', ylabel='number of rabbits',\n title='Rabbits Without Predators')\n ax.grid()\n\nbutton_R.on_click(plot_graph_r)\n```\n\n**Exercise 1** (1 point). Complete the following functions, find the number of rabbits at time 5, given $x_0$ = 10, population capcity =100, and growth rate = 0.8\n\n\n```python\nR_i = 10\nfor i in range(5):\n R_i = int(R_i + 0.8 * (100 - R_i)/(100) * R_i)\n \nprint(f'There are {R_i} rabbits in the system at time 5')\n\n```\n\n There are 81 rabbits in the system at time 5\n\n\n## Tweaking the Growth Function\nThe growth is regulated by this part of the formula:\n$$\n\\begin{equation*}\n \\frac{capacity_{R} - R_{t-1}}{capacity_{R}}\n\\end{equation*}\n$$\nThat is, this fraction (and thus growth) goes to zero when the land is at capacity. As the number of rabbits goes to zero, this fraction goes to 1.0, so growth is at its highest speed. We could substitute in another function that has the same values at zero and at capacity, but has a different shape. For example, \n$$\n\\begin{equation*}\n \\left( \\frac{capacity_{R} - R_{t-1}}{capacity_{R}} \\right)^{\\beta}\n\\end{equation*}\n$$\nwhere $\\beta$ is a positive number. 
For example, if $\\beta$ is 1.3, it indicates that the rabbits can sense that food supplies are dwindling and pre-emptively slow their reproduction.\n\n\n```python\n#### %matplotlib inline\nimport math\nstyle = {'description_width': 'initial'}\ncapacity_R_2 = widgets.FloatText(description=\"Capacity\", value=605)\ngrowth_rate_R_2 = widgets.FloatText(description=\"Growth rate\", value=1.5)\ninitial_R_2 = widgets.FloatText(description=\"Initial population\",style=style, value=1)\nshaping_R_2 = widgets.FloatText(description=\"Shaping\", value=1.3)\nbutton_R_2 = widgets.Button(description=\"Plot Graph\")\ndisplay(initial_R_2, capacity_R_2, growth_rate_R_2, shaping_R_2, button_R_2)\n\ndef plot_graph_r(b):\n clear_output()\n display(initial_R_2, capacity_R_2, growth_rate_R_2, shaping_R_2, button_R_2) \n fig = plt.figure()\n ax = fig.add_subplot(111)\n t = np.arange(0, 20, 1)\n s = np.zeros(t.shape)\n R = initial_R_2.value\n beta = float(shaping_R_2.value)\n for i in range(t.shape[0]):\n s[i] = R\n reserve_ratio = (capacity_R_2.value - R)/capacity_R_2.value\n if reserve_ratio > 0.0:\n R = R + R * growth_rate_R_2.value * reserve_ratio**beta\n else:\n R = R - R * growth_rate_R_2.value * (-1.0 * reserve_ratio)**beta\n if R < 0.0:\n R = 0\n \n ax.plot(t, s)\n ax.set(xlabel='time (months)', ylabel='number of rabbits',\n title='Rabbits Without Predators (Shaped)')\n ax.grid()\n\nbutton_R_2.on_click(plot_graph_r)\n```\n\n**Exercise 2** (1 point). Repeat Exercise 1, with $\\beta$ = 1.5 Complete the following functions, find the number of rabbits at time 5. Should we expect to see more rabbits or less?\n\n\n```python\nR_i = 10\nb=1.5\nfor i in range(5):\n R_i = int(R_i + 0.8 * ((100 - R_i)/(100))**b * R_i)\n \nprint(f'There are {R_i} rabbits in the system at time 5, less rabbits compare to exercise 1, where beta = 1')\n```\n\n There are 64 rabbits in the system at time 5, less rabbits compare to exercise 1, where beta = 1\n\n\n## Part 2: Coyotes without Prey\nAccording to [Huntwise](https://www.besthuntingtimes.com/blog/2020/2/3/why-you-should-coyote-hunt-how-to-get-started), coyotes need to consume about 2-3 pounds of food per day. Their diet is 90 percent mammalian. The perfect adult cottontail rabbits weigh 2.6 pounds on average. Thus, we assume the coyote eats one rabbit per day. \n\nFor coyotes, the breeding season is in February and March. According to [Wikipedia](https://en.wikipedia.org/wiki/Coyote#Social_and_reproductive_behaviors), females have a gestation period of 63 days, with an average litter size of 6, though the number fluctuates depending on coyote population density and the abundance of food. 
By fall, the pups are old enough to hunt for themselves.\n\nIn the absence of rabbits, the number of coyotes will drop, as their food supply is scarce.\nThe formula could be put into general form:\n\n$$\n\\begin{align*}\n C_t & \\sim (1 - death_{C}) \\times C_{t-1}\\\\\n &= C_{t-1} - death_{C} \\times C_{t-1}\n\\end{align*}\n$$\n\n\n\n\n```python\n%matplotlib inline\nstyle = {'description_width': 'initial'}\ninitial_C = widgets.FloatText(description=\"Initial Population\", style=style, value=200.0)\ndeclining_rate_C = widgets.FloatText(description=\"Death rate\", value=0.5)\nbutton_C = widgets.Button(description=\"Plot Graph\")\ndisplay(initial_C, declining_rate_C, button_C)\n\ndef plot_graph_c(b):\n    clear_output()\n    display(initial_C, declining_rate_C, button_C)\n    fig = plt.figure()\n    ax = fig.add_subplot(111)\n    t1 = np.arange(0, 20, 1)\n    s1 = np.zeros(t1.shape)\n    C = initial_C.value\n    for i in range(t1.shape[0]):\n        s1[i] = C\n        C = (1 - declining_rate_C.value)*C\n\n    ax.plot(t1, s1)\n    ax.set(xlabel='time (months)', ylabel='number of coyotes',\n           title='Coyotes Without Prey')\n    ax.grid()\n\nbutton_C.on_click(plot_graph_c)\n```\n\n**Exercise 3** (1 point). Assume the system has 100 coyotes at time 0 and that the death rate is 0.5 when there is no prey. At what point in time do the coyotes become extinct?\n\n\n```python\nti = 0\ncoyotes_init = 100\nc_i = coyotes_init\nd_r = 0.5\nwhile c_i > 10:\n    c_i = int((1 - d_r)*c_i)\n    ti = ti + 1\nprint(f'At time t={ti}, the coyotes become extinct')\n```\n\n At time t=4, the coyotes become extinct\n\n\n## Part 3: Interaction Between Coyotes and Rabbits\nWith the simple models from the first two parts, we can now combine them into a single interaction model.\n$$\n\\begin{align*}\n R_t &= R_{t-1} + growth_{R} \\times \\big( \\frac{capacity_{R} - R_{t-1}}{capacity_{R}} \\big) R_{t-1} - death_{R}(C_{t-1})\\times R_{t-1}\\\\\\\\\n C_t &= C_{t-1} - death_{C} \\times C_{t-1} + growth_{C}(R_{t-1}) \\times C_{t-1}\n\\end{align*}\n$$\n\nIn the equations above, the death rate of the rabbits is a function parameterized by the number of coyotes. Similarly, the growth rate of the coyotes is a function parameterized by the number of rabbits.\n\nThe death rate of the rabbits should be $0$ if there are no coyotes, while it should approach $1$ if there are many coyotes. One formula fulfilling these characteristics is a hyperbolic function.\n\n$$\n\\begin{equation}\ndeath_R(C) = 1 - \\frac{1}{xC + 1}\n\\end{equation}\n$$\n\nwhere $x$ determines how quickly $death_R$ increases as the number of coyotes ($C$) increases. Similarly, the growth rate of the coyotes should be $0$ if there are no rabbits, while it should grow without bound if there are many rabbits. One formula fulfilling these characteristics is a linear function.\n\n$$\n\\begin{equation}\ngrowth_C(R) = yR\n\\end{equation}\n$$\n\nwhere $y$ determines how quickly $growth_C$ increases as the number of rabbits ($R$) increases.\n\nPutting it all together, the final equations are\n\n$$\n\\begin{align*}\n R_t &= R_{t-1} + growth_{R} \\times \\big( \\frac{capacity_{R} - R_{t-1}}{capacity_{R}} \\big) R_{t-1} - \\big( 1 - \\frac{1}{xC_{t-1} + 1} \\big)\\times R_{t-1}\\\\\\\\\n C_t &= C_{t-1} - death_{C} \\times C_{t-1} + yR_{t-1}C_{t-1}\n\\end{align*}\n$$\n\n\n\n**Exercise 4** (3 points). The model we have created above is a variation of the Lotka-Volterra model, which describes various forms of predator-prey interactions. Complete the following functions, which should generate the state variables plotted over time. 
Blue = prey, Orange = predators.\n\n\n```python\n%matplotlib inline\ninitial_rabbit = widgets.FloatText(description=\"Initial Rabbit\", style=style, value=1)\ninitial_coyote = widgets.FloatText(description=\"Initial Coyote\", style=style, value=1)\ncapacity = widgets.FloatText(description=\"Capacity rabbits\", style=style, value=5)\ngrowth_rate = widgets.FloatText(description=\"Growth rate rabbits\", style=style, value=1)\ndeath_rate = widgets.FloatText(description=\"Death rate coyotes\", style=style, value=1)\nx = widgets.FloatText(description=\"Death rate ratio due to coyote\", style=style, value=1)\ny = widgets.FloatText(description=\"Growth rate ratio due to rabbit\", style=style, value=1)\nbutton = widgets.Button(description=\"Plot Graph\")\ndisplay(initial_rabbit, initial_coyote, capacity, growth_rate, death_rate, x, y, button)\ndef plot_graph(b):\n    clear_output()\n    display(initial_rabbit, initial_coyote, capacity, growth_rate, death_rate, x, y, button)\n    fig = plt.figure()\n    ax = fig.add_subplot(111)\n    t = np.arange(0, 20, 0.5)\n    s = np.zeros(t.shape)\n    p = np.zeros(t.shape)\n    R = initial_rabbit.value\n    C = initial_coyote.value\n    for i in range(t.shape[0]):\n        s[i] = R\n        p[i] = C\n        R = R + growth_rate.value * (capacity.value - R)/(capacity.value) * R - (1 - 1/(x.value*C + 1))*R\n        C = C - death_rate.value * C + y.value*s[i]*C\n\n    ax.plot(t, s, label=\"rabbit\")\n    ax.plot(t, p, label=\"coyote\")\n    ax.set(xlabel='time (months)', ylabel='population size',\n           title='Coyotes-Rabbit (Predator-Prey) Relationship')\n    ax.grid()\n    ax.legend()\n\nbutton.on_click(plot_graph)\n```\n\nThe system shows oscillatory behavior. Let's try to verify the nonlinear oscillation with a phase space visualization.\n\n\n## Part 4: Trajectories and Direction Fields for a System of Equations\n\nTo further demonstrate that the predator numbers rise and fall cyclically with their preferred prey, we will be using the Lotka-Volterra equations, which are based on differential equations. The Lotka-Volterra prey-predator model involves two equations: one describes the changes in the number of prey and the second one describes the changes in the number of predators. The dynamics of the interaction between a rabbit population $R_t$ and a coyote population $C_t$ are described by the following differential equations:\n$$\n\\begin{align*}\n\\frac{dR}{dt} = aR_t - bR_tC_t\n\\end{align*}\n$$\n\n$$\n\\begin{align*}\n\\frac{dC}{dt} = bdR_tC_t - cC_t\n\\end{align*}\n$$\n\nwith the following notations:\n\nR$_t$: number of prey (rabbits)\n\nC$_t$: number of predators (coyotes)\n\na: natural growth rate of rabbits, when there are no coyotes\n\nb: rate at which rabbits are killed by coyotes, per unit of time\n\nc: natural death rate of coyotes, when there are no rabbits\n\nd: rate at which consumed prey is converted into new predators\n\nWe start by defining the system of ordinary differential equations, and then find the equilibrium points for our system. Equilibrium occurs when the growth rate is 0, and we can see that we have two equilibrium points in our example. The first one happens when there are no prey or predators, which represents the extinction of both species; the second equilibrium happens when $R_t=\\frac{c}{b d}$ and $C_t=\\frac{a}{b}$. Moving on, we will use scipy to help us integrate the differential equations, and generate the plot of the evolution of both species:\n\n\n**Exercise 5** (3 points). As we can tell from the simulation results of the predator-prey model, the system shows oscillatory behavior. 
Find the equilibrium points of the system and generate the phase space visualization to demonstrate the oscillation seen previously is nonlinear with distorted orbits.\n\n\n```python\nfrom scipy import integrate\n\n#using the same input number from the previous example\ninput_a = widgets.FloatText(description=\"a\",style=style, value=1)\ninput_b = widgets.FloatText(description=\"b\",style=style, value=1)\ninput_c = widgets.FloatText(description=\"c\",style=style, value=1)\ninput_d = widgets.FloatText(description=\"d\",style=style, value=1)\n# Define the system of ODEs\n# P[0] is prey, P[1] is predator\ndef dP_dt(P,t=0):\n return np.array([a*P[0]-b*P[0]*P[1], d*b*P[0]*P[1]-c*P[1]])\n\nbutton_draw_trajectories = widgets.Button(description=\"Plot Graph\")\ndisplay(input_a, input_b, input_c, input_d, button_draw_trajectories)\n\ndef plot_trajectories(graph):\n global a, b, c, d, eq1, eq2\n clear_output()\n display(input_a, input_b, input_c, input_d, button_draw_trajectories)\n a = input_a.value\n b = input_b.value\n c = input_c.value\n d = input_d.value\n # Define the Equilibrium points\n eq1 = np.array([0. , 0.])\n eq2 = np.array([c/(d*b),a/b])\n values = np.linspace(0.1, 3, 10)\n # Colors for each trajectory\n vcolors = plt.cm.autumn_r(np.linspace(0.1, 1., len(values)))\n f = plt.figure(figsize=(10,6))\n t = np.linspace(0, 150, 1000)\n for v, col in zip(values, vcolors):\n # Starting point \n P0 = v*eq2\n P = integrate.odeint(dP_dt, P0, t)\n plt.plot(P[:,0], P[:,1],\n lw= 1.5*v, # Different line width for different trajectories\n color=col, label='P0=(%.f, %.f)' % ( P0[0], P0[1]) )\n ymax = plt.ylim(bottom=0)[1]\n xmax = plt.xlim(left=0)[1]\n nb_points = 20\n x = np.linspace(0, xmax, nb_points)\n y = np.linspace(0, ymax, nb_points)\n X1,Y1 = np.meshgrid(x, y) \n DX1, DY1 = dP_dt([X1, Y1]) \n M = (np.hypot(DX1, DY1)) \n M[M == 0] = 1. \n DX1 /= M \n DY1 /= M\n plt.title('Trajectories and direction fields')\n Q = plt.quiver(X1, Y1, DX1, DY1, M, pivot='mid', cmap=plt.cm.plasma)\n plt.xlabel('Number of rabbits')\n plt.ylabel('Number of coyotes')\n plt.legend()\n plt.grid()\n plt.xlim(0, xmax)\n plt.ylim(0, ymax)\n print(f\"\\n\\nThe equilibrium pointsof the system are:\", list(eq1), list(eq2))\n plt.show() \n \nbutton_draw_trajectories.on_click(plot_trajectories)\n\n```\n\nThe model here is described in continuous differential equations, thus there is no jump or intersections between the trajectories.\n\n\n\n## Part 5: Multiple Predators and Preys Relationship\n\nThe previous relationship could be extended to multiple predators and preys relationship\n\n**Exercise 6** (3 point). Develop a discrete-time mathematical model of four species, and each two of them competing for the same resource, and simulate its behavior. 
Plot the simulation results.\n\n\n```python\n%matplotlib inline\ninitial_rabbit2 = widgets.FloatText(description=\"Initial Rabbit\", style=style,value=2)\ninitial_coyote2 = widgets.FloatText(description=\"Initial Coyote\",style=style, value=2)\ninitial_deer2 = widgets.FloatText(description=\"Initial Deer\", style=style,value=1)\ninitial_wolf2 = widgets.FloatText(description=\"Initial Wolf\", style=style,value=1)\npopulation_capacity = widgets.FloatText(description=\"capacity\",style=style, value=10)\npopulation_capacity_rabbit = widgets.FloatText(description=\"capacity rabbit\",style=style, value=3)\ngrowth_rate_rabbit = widgets.FloatText(description=\"growth rate rabbit\",style=style, value=1)\ndeath_rate_coyote = widgets.FloatText(description=\"death rate coyote\",style=style, value=1)\ngrowth_rate_deer = widgets.FloatText(description=\"growth rate deer\",style=style, value=1)\ndeath_rate_wolf = widgets.FloatText(description=\"death rate wolf\",style=style, value=1)\nx1 = widgets.FloatText(description=\"death rate ratio due to coyote\",style=style, value=1)\ny1 = widgets.FloatText(description=\"growth rate ratio due to rabbit\", style=style,value=1)\nx2 = widgets.FloatText(description=\"death rate ratio due to wolf\",style=style, value=1)\ny2 = widgets.FloatText(description=\"growth rate ratio due to deer\", style=style,value=1)\nplot2 = widgets.Button(description=\"Plot Graph\")\ndisplay(initial_rabbit2, initial_coyote2,initial_deer2, initial_wolf2, population_capacity, \n population_capacity_rabbit, growth_rate_rabbit, growth_rate_deer, death_rate_coyote,death_rate_wolf,\n x1, y1,x2, y2, plot2)\ndef plot_graph(b):\n clear_output()\n display(initial_rabbit2, initial_coyote2,initial_deer2, initial_wolf2, population_capacity, \n population_capacity_rabbit, growth_rate_rabbit, growth_rate_deer, death_rate_coyote,death_rate_wolf,\n x1, y1,x2, y2, plot2)\n fig = plt.figure()\n ax = fig.add_subplot(111)\n t_m = np.arange(0, 20, 0.5)\n r_m = np.zeros(t_m.shape)\n c_m = np.zeros(t_m.shape)\n d_m = np.zeros(t_m.shape)\n w_m = np.zeros(t_m.shape)\n R_m = initial_rabbit2.value\n C_m = initial_coyote2.value\n D_m = initial_deer2.value\n W_m = initial_wolf2.value\n population_capacity_deer = population_capacity.value - population_capacity_rabbit.value\n for i in range(t_m.shape[0]):\n r_m[i] = R_m\n c_m[i] = C_m\n d_m[i] = D_m\n w_m[i] = W_m\n \n R_m = R_m + growth_rate_rabbit.value * (population_capacity_rabbit.value - R_m)\\\n /(population_capacity_rabbit.value) * R_m - (1 - 1/(x1.value*C_m + 1))*R_m - (1 - 1/(x2.value*W_m + 1))*R_m \n D_m = D_m + growth_rate_deer.value * (population_capacity_deer - D_m) \\\n /(population_capacity_deer) * D_m - (1 - 1/(x1.value*C_m + 1))*D_m - (1 - 1/(x2.value*W_m + 1))*D_m\n \n C_m = C_m - death_rate_coyote.value * C_m + y1.value*r_m[i]*C_m + y2.value*d_m[i]*C_m\n W_m = W_m - death_rate_wolf.value * W_m + y1.value*r_m[i]*W_m + y2.value*d_m[i]*W_m\n \n ax.plot(t_m, r_m, label=\"rabit\")\n ax.plot(t_m, c_m, label=\"coyote\")\n ax.plot(t_m, d_m, label=\"deer\")\n ax.plot(t_m, w_m, label=\"wolf\")\n ax.set(xlabel='time (months)', ylabel='population',\n title='Multiple Predator Prey Relationship')\n ax.grid()\n ax.legend()\n\nplot2.on_click(plot_graph)\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "4a65cfb51e9bb7ae0f09b0d5304f98e8356689bd", "size": 320642, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Submission/Final/Notebooks/Part1-Lotka-Volterra-Model.ipynb", "max_stars_repo_name": "hillegass/complex-sim", "max_stars_repo_head_hexsha": 
"acfd3849c19fa3361788a6e8f96ce76ca64be613", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-03-05T20:57:14.000Z", "max_stars_repo_stars_event_max_datetime": "2020-03-17T00:45:54.000Z", "max_issues_repo_path": "Submission/Final/Notebooks/Part1-Lotka-Volterra-Model.ipynb", "max_issues_repo_name": "hillegass/complex-sim", "max_issues_repo_head_hexsha": "acfd3849c19fa3361788a6e8f96ce76ca64be613", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Submission/Final/Notebooks/Part1-Lotka-Volterra-Model.ipynb", "max_forks_repo_name": "hillegass/complex-sim", "max_forks_repo_head_hexsha": "acfd3849c19fa3361788a6e8f96ce76ca64be613", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 247.9829853055, "max_line_length": 160580, "alphanum_fraction": 0.9128560825, "converted": true, "num_tokens": 5630, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9124361628580401, "lm_q2_score": 0.9241418236349501, "lm_q1q2_score": 0.8432204194941055}} {"text": "# Brief introduction to linear algebra using PGA\nTed Corcovilos, 2021-03-18\n\n\n```python\n# import libraries\nfrom sympy import *\nfrom galgebra.ga import Ga\n# setup notebook pretty printing\ninit_printing()\n```\n\n\n```python\n# set up the algebra\npga3coords = (w,x,y,z) = symbols('w x y z', real=True)\npga3 = Ga('e_0 e_1 e_2 e_3',g=[0,1,1,1], coords=pga3coords)\n\ne0, e1, e2, e3 = pga3.mv()\n```\n\n !!!!If I**2 = 0, I cannot be normalized!!!!\n\n\n\n```python\n# define some useful functions for PGA\ndef J3(x):\n # J map for pga3 multivectors to calculate the orthogonal complement\n coef_list = x.blade_coefs()\n # flip sign of e02 and e1\n signs = [1,\n 1,-1,1,-1,\n 1,-1,1,1,-1,1,\n 1,-1,1,-1,\n 1]\n size = len(signs)\n return pga3.mv(sum(signs[mm]*coef_list[mm]*flatten(pga3.blades)[size-1-mm] for mm in range(size)))\n\ndef vee3(a,b):\n # regressive product for pga3\n return J3(J3(a)^J3(b))\n```\n\n\n```python\nt= symbols('t', real=True) # real parameter to use later\n```\n\n## Systems of linear equations\nEach linear equation may be represented by a vector in PGA.\n\nLet's solve the system\n$$\n\\begin{aligned}\na:&& x + \\phantom{2}y + z -1 &= 0 \\\\\nb:&& 2x + \\phantom{2}y + z +0&= 0 \\\\\nc:&& x -2y -z -2 &= 0\n\\end{aligned}\n$$\n\n\n\n```python\na = e1 + e2 + e3 - e0\nb = 2*e1 + e2 + e3\nc = e1 -2*e2-e3 -2*e0 \n```\n\nThe solution is the intersection point of the 3 planes described by each equation above.\n\n\n```python\na^b^c\n```\n\n\n\n\n\\begin{equation*} 7 \\boldsymbol{e}_{0}\\wedge \\boldsymbol{e}_{1}\\wedge \\boldsymbol{e}_{2} + 5 \\boldsymbol{e}_{0}\\wedge \\boldsymbol{e}_{1}\\wedge \\boldsymbol{e}_{3} - \\boldsymbol{e}_{0}\\wedge \\boldsymbol{e}_{2}\\wedge \\boldsymbol{e}_{3} - \\boldsymbol{e}_{1}\\wedge \\boldsymbol{e}_{2}\\wedge \\boldsymbol{e}_{3} \\end{equation*}\n\n\n\nThe trivector terms describe a point. It's easier to read off if we normalize it and look at its orthogonal complement:\n\n\n```python\nJ3(a^b^c) \n```\n\n\n\n\n\\begin{equation*} \\boldsymbol{e}_{0} - \\boldsymbol{e}_{1} -5 \\boldsymbol{e}_{2} + 7 \\boldsymbol{e}_{3} \\end{equation*}\n\n\n\nThe coefficient of $e_1$ is the $x$ value, etc. 
So the solution of the original system of equations is\n$$\n\\begin{aligned}\nx&=-1\\\\\ny&=-5\\\\\nz&=7\n\\end{aligned}\n$$\n\n\n```python\n# Let's define p as a generic point\np=J3(x*e1 + y*e2 + z*e3 + e0)\np\n```\n\n\n\n\n\\begin{equation*} - z \\boldsymbol{e}_{0}\\wedge \\boldsymbol{e}_{1}\\wedge \\boldsymbol{e}_{2} + y \\boldsymbol{e}_{0}\\wedge \\boldsymbol{e}_{1}\\wedge \\boldsymbol{e}_{3} - x \\boldsymbol{e}_{0}\\wedge \\boldsymbol{e}_{2}\\wedge \\boldsymbol{e}_{3} + \\boldsymbol{e}_{1}\\wedge \\boldsymbol{e}_{2}\\wedge \\boldsymbol{e}_{3} \\end{equation*}\n\n\n\nWe can represent this as equations for x,y,z by setting terms of the expression below equal to zero: $p\\vee(a\\wedge b\\wedge c)=0$. PGA is a Regressive Product Null Space (RPNS) representation.\n\n\n```python\nvee3(p,a^b^c)\n```\n\n\n\n\n\\begin{equation*} \\left ( - 7 y - 5 z\\right ) \\boldsymbol{e}_{0}\\wedge \\boldsymbol{e}_{1} + \\left ( 7 x + z\\right ) \\boldsymbol{e}_{0}\\wedge \\boldsymbol{e}_{2} + \\left ( 5 x - y\\right ) \\boldsymbol{e}_{0}\\wedge \\boldsymbol{e}_{3} + \\left ( z - 7\\right ) \\boldsymbol{e}_{1}\\wedge \\boldsymbol{e}_{2} + \\left ( - y - 5\\right ) \\boldsymbol{e}_{1}\\wedge \\boldsymbol{e}_{3} + \\left ( x + 1\\right ) \\boldsymbol{e}_{2}\\wedge \\boldsymbol{e}_{3} \\end{equation*}\n\n\n\nSetting the line above equal to zero gives a redundant set of equations, but looking at the line three terms we can read off\n$$\n\\begin{aligned}\nx&=-1\\\\\ny&=-5\\\\\nz&=7\n\\end{aligned}\n$$\n\n## Under-defined system\nLet's try something different. Let's do an under-defined system.\n$$\n\\begin{aligned}\na:&& x + y + z -1 &= 0 \\\\\nb:&& 2x + y + z +0&= 0 \\\\\n%x -2y -z &= 2\n\\end{aligned}\n$$\n\nAgain, the solution is $a\\wedge b$:\n\n\n```python\na^b\n```\n\n\n\n\n\\begin{equation*} -2 \\boldsymbol{e}_{0}\\wedge \\boldsymbol{e}_{1} - \\boldsymbol{e}_{0}\\wedge \\boldsymbol{e}_{2} - \\boldsymbol{e}_{0}\\wedge \\boldsymbol{e}_{3} - \\boldsymbol{e}_{1}\\wedge \\boldsymbol{e}_{2} - \\boldsymbol{e}_{1}\\wedge \\boldsymbol{e}_{3} \\end{equation*}\n\n\n\nThis is the Pl\u00fccker representation of a line in 3D space. 
As before, we can get this into a more familiar form by computing $p \\vee (a\\wedge b)=0$.\n\n\n```python\nvee3(p,a^b)\n```\n\n\n\n\n\\begin{equation*} \\left ( - 2 x - y - z\\right ) \\boldsymbol{e}_{0} + \\left ( - y - z + 2\\right ) \\boldsymbol{e}_{1} + \\left ( x + 1\\right ) \\boldsymbol{e}_{2} + \\left ( x + 1\\right ) \\boldsymbol{e}_{3} \\end{equation*}\n\n\n\n### Parametric form of the line\n(TODO generalize the method to higher dimensional solution spaces.)\n\nThe solution to the system above is a line of points: $x=-1, y=-z+2$.\n\nTo put this in parametric form, we need the direction of the line $d$ and any point on the line $P$:\n$$\n\\begin{aligned}\nd &= a\\wedge b \\wedge e_0 \\\\\nP &= d^* \\wedge (a\\wedge b)\n\\end{aligned}\n$$\n\nThen the line consists of the points $P + dt$ for a scalar value $t$.\n\n\n```python\nd=a^b^e0 # direction of the line\nd\n```\n\n\n\n\n\\begin{equation*} - \\boldsymbol{e}_{0}\\wedge \\boldsymbol{e}_{1}\\wedge \\boldsymbol{e}_{2} - \\boldsymbol{e}_{0}\\wedge \\boldsymbol{e}_{1}\\wedge \\boldsymbol{e}_{3} \\end{equation*}\n\n\n\n\n```python\nP = J3(d)^(a^b) # a point on the line\nP\n```\n\n\n\n\n\\begin{equation*} -2 \\boldsymbol{e}_{0}\\wedge \\boldsymbol{e}_{1}\\wedge \\boldsymbol{e}_{2} + 2 \\boldsymbol{e}_{0}\\wedge \\boldsymbol{e}_{1}\\wedge \\boldsymbol{e}_{3} + 2 \\boldsymbol{e}_{0}\\wedge \\boldsymbol{e}_{2}\\wedge \\boldsymbol{e}_{3} + 2 \\boldsymbol{e}_{1}\\wedge \\boldsymbol{e}_{2}\\wedge \\boldsymbol{e}_{3} \\end{equation*}\n\n\n\n\n```python\n(P+d*t)\n```\n\n\n\n\n\\begin{equation*} \\left ( - t - 2\\right ) \\boldsymbol{e}_{0}\\wedge \\boldsymbol{e}_{1}\\wedge \\boldsymbol{e}_{2} + \\left ( 2 - t\\right ) \\boldsymbol{e}_{0}\\wedge \\boldsymbol{e}_{1}\\wedge \\boldsymbol{e}_{3} + 2 \\boldsymbol{e}_{0}\\wedge \\boldsymbol{e}_{2}\\wedge \\boldsymbol{e}_{3} + 2 \\boldsymbol{e}_{1}\\wedge \\boldsymbol{e}_{2}\\wedge \\boldsymbol{e}_{3} \\end{equation*}\n\n\n\nOr, translating to coordinates and rescaling $t$:\n$$\nx = -1,\\quad\ny = 1-t, \\quad z=1+t\n$$\n\n## Over-defined system\nLet's look at an over-defined system in 2D:\n$$\n\\begin{aligned}\nx&= 0\\\\\ny&= 0\\\\\nx+y&= 1\n\\end{aligned}\n$$\nThis clearly has no solution, but can we find a \"best fit\"?\nOne way to answer is to sum the 3 normalized intersection points.\n\n(Q: what happens if I use the _unnormalized_ points? Is this useful?)\n\n\n```python\nsum([e1^e2,-e2^(e1+e2-e0),-(e1+e2-e0)^e1])/3\n```\n\n\n\n\n\\begin{equation*} \\frac{1}{3} \\boldsymbol{e}_{0}\\wedge \\boldsymbol{e}_{1} - \\frac{1}{3} \\boldsymbol{e}_{0}\\wedge \\boldsymbol{e}_{2} + \\boldsymbol{e}_{1}\\wedge \\boldsymbol{e}_{2} \\end{equation*}\n\n\n\nTo generalize to larger systems, for an _N_-dimensional space with _m_ equations, form all $\\binom{m}{N}$ of the _N_-wise wedge products, normalize, and sum them.\n\nThis method also works when two or more of the (hyper-)planes are parallel. The intersection point is then ideal, and that point can be discarded before summing.\n\n(Q: does this give the same result as the least-squares solution, using pseudo-inverse? It seems like that is different?)\n\n## Gaussian elimination\nOne of the standard solution techniques in linear algebra is Gaussian elimination. What does that look like in PGA? 
It's almost trivial.\n\nLet's go back to our earlier problem:\n$$\n\\begin{aligned}\na:&&x + \\phantom{2}y + z -1 &= 0 \\\\\nb:&&2x + \\phantom{2}y + z +0 &= 0 \\\\\nc:&&x -2y -z -2 &= 0\n\\end{aligned}\n$$\n\nThe idea of Gaussian elimination is to add a scalar multiple of one row to another row to zero out coefficients in the lower left triangle of the matrix.\n\nFor example, we could replace row $b$ with $b-2a$ and row $c$ with $c-a$, to eliminate the coefficients of $x$ in the bottom two rows:\n\n\n```python\nb-2*a, c-a\n```\n\n$$\n\\begin{aligned}\na:&&x + \\phantom{3}y +\\phantom{2} z -1 &= 0 \\\\\nb-2a:&&0x -\\phantom{3}y - \\phantom{2}z +2&= 0 \\\\\nc-\\phantom{2}a:&&0x -3y -2z -1 &= 0\n\\end{aligned}\n$$\n\nThen add -3 times the second row to the third row:\n\n\n```python\n-3*(b-2*a)+(c-a)\n```\n\n\n\n\n\\begin{equation*} -7 \\boldsymbol{e}_{0} + \\boldsymbol{e}_{3} \\end{equation*}\n\n\n\n$$\n\\begin{aligned}\na:&&x + y + \\phantom{0}z -1 &= 0 \\\\\nb-2a:&&0x -y - \\phantom{0}z +2&= 0 \\\\\n5a-3b+c:&&0x+ 0y +z -7 &= 0\n\\end{aligned}\n$$\n\nBut the properties of the exterior product make this unnecessary.\n$$\na\\wedge(b-2a)\\wedge(5a-3b+c) = a\\wedge b \\wedge c,\n$$\nso we are back where we started.\n\n### TODOs\n* add figures, highlight geometry\n* describe rotation picture of Gaussian elimination\n* projections, rejections, reflections\n* emphasize linear combinations of planes keep their intersection invariant\n* add norm and normalize functions\n", "meta": {"hexsha": "4f45fc1e55f4e37d4a75606547166ae6ab84d9e2", "size": 20756, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "linear algebra.ipynb", "max_stars_repo_name": "corcoted/GA-scratch", "max_stars_repo_head_hexsha": "a7ba5da5fa758f52330c4e1218d56c2e193a3091", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "linear algebra.ipynb", "max_issues_repo_name": "corcoted/GA-scratch", "max_issues_repo_head_hexsha": "a7ba5da5fa758f52330c4e1218d56c2e193a3091", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "linear algebra.ipynb", "max_forks_repo_name": "corcoted/GA-scratch", "max_forks_repo_head_hexsha": "a7ba5da5fa758f52330c4e1218d56c2e193a3091", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.8410104012, "max_line_length": 2476, "alphanum_fraction": 0.5631142802, "converted": true, "num_tokens": 3106, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.950410972802222, "lm_q2_score": 0.8872045966995027, "lm_q1q2_score": 0.8432089838237774}} {"text": "# Symbolic Python\n\nIn standard mathematics we routinely write down abstract variables or concepts and manipulate them without ever assigning specific values to them. 
An example would be the quadratic equation\n\n$$ a x^2 + b x + c = 0 $$\n\nand its roots $x_{\\pm}$: we can write down the solutions of the equation and discuss the existence, within the real numbers, of the roots, without specifying the particular values of the parameters $a, b$ and $c$.\n\nIn a standard computer programming language, we can write *functions* that encapsulate the solutions of the equation, but calling those functions requires us to specify values of the parameters. In general, the value of a variable must be given before the variable can be used.\n\nHowever, there *do* exist *Computer Algebra Systems* that can perform manipulations in the \"standard\" mathematical form. Through the university you will have access to Wolfram Mathematica and Maple, which are commercial packages providing a huge range of mathematical tools. There are also freely available packages, such as SageMath and `sympy`. These are not always easy to use, as all CAS have their own formal languages that rarely perfectly match your expectations.\n\nHere we will briefly look at `sympy`, which is a pure Python CAS. `sympy` is not suitable for complex calculations, as it's far slower than the alternatives. However, it does interface very cleanly with Python, so can be used inside Python code, especially to avoid entering lengthy expressions.\n\n# sympy\n\n## Setting up\n\nSetting up `sympy` is straightforward:\n\n\n```python\nimport sympy\nsympy.init_printing()\n```\n\nThe standard `import` command is used. The `init_printing` command looks at your system to find the clearest way of displaying the output; this isn't necessary, but is helpful for understanding the results.\n\nTo do *anything* in `sympy` we have to explicitly tell it if something is a variable, and what name it has. There are two commands that do this. To declare a single variable, use\n\n\n```python\nx = sympy.Symbol('x')\n```\n\nTo declare multiple variables at once, use\n\n\n```python\ny, z0 = sympy.symbols(('y', 'z_0'))\n```\n\nNote that the \"name\" of the variable does not need to match the symbol with which it is displayed. We have used this with `z0` above:\n\n\n```python\nz0\n```\n\nOnce we have variables, we can define new variables by operating on old ones:\n\n\n```python\na = x + y\nb = y * z0\nprint(\"a={}. b={}.\".format(a, b))\n```\n\n a=x + y. b=y*z_0.\n\n\n\n```python\na\n```\n\nIn addition to variables, we can also define general functions. There is only one option for this:\n\n\n```python\nf = sympy.Function('f')\n```\n\n## In-built functions\n\nWe have seen already that mathematical functions can be found in different packages. For example, the $\\sin$ function appears in `math` as `math.sin`, acting on a single number. It also appears in `numpy` as `numpy.sin`, where it can act on vectors and arrays in one go. `sympy` re-implements many mathematical functions, for example as `sympy.sin`, which can act on abstract (`sympy`) variables.\n\nWhenever using `sympy` we should use `sympy` functions, as these can be manipulated and simplified. For example:\n\n\n```python\nc = sympy.sin(x)**2 + sympy.cos(x)**2\n```\n\n\n```python\nc\n```\n\n\n```python\nc.simplify()\n```\n\nNote the steps taken here. `c` is an object, something that `sympy` has created. Once created it can be manipulated and simplified, using the methods on the object. It is useful to use tab completion to look at the available commands. 
For example,\n\n\n```python\nd = sympy.cosh(x)**2 - sympy.sinh(x)**2\n```\n\nNow type `d.` and then tab, to inspect all the available methods. As before, we could do\n\n\n```python\nd.simplify()\n```\n\nbut there are many other options.\n\n## Solving equations\n\nLet us go back to our quadratic equation and check the solution. To define an *equation* we use the `sympy.Eq` function:\n\n\n```python\na, b, c, x = sympy.symbols(('a', 'b', 'c', 'x'))\nquadratic_equation = sympy.Eq(a*x**2+b*x+c, 0)\nsympy.solve(quadratic_equation)\n```\n\nWhat happened here? `sympy` is not smart enough to know that we wanted to solve for `x`! Instead, it solved for the first variable it encountered. Let us try again:\n\n\n```python\nsympy.solve(quadratic_equation, x)\n```\n\nThis is our expectation: multiple solutions, returned as a list. We can access and manipulate these results:\n\n\n```python\nroots = sympy.solve(quadratic_equation, x)\nxplus, xminus = sympy.symbols(('x_{+}', 'x_{-}'))\nxplus = roots[0]\nxminus = roots[1]\n```\n\nWe can substitute in specific values for the parameters to find solutions:\n\n\n```python\nxplus_solution = xplus.subs([(a,1), (b,2), (c,3)])\nxplus_solution\n```\n\nWe have a list of substitutions. Each substitution is given by a tuple, containing the variable to be replaced, and the expression replacing it. We do not have to substitute in numbers, as here, but could use other variables:\n\n\n```python\nxminus_solution = xminus.subs([(b,a), (c,a+z0)])\nxminus_solution\n```\n\n\n```python\nxminus_solution.simplify()\n```\n\nWe can use similar syntax to solve *systems* of equations, such as\n\n$$ \\begin{aligned} x + 2 y &= 0, \\\\ xy & = z_0. \\end{aligned} $$\n\n\n```python\neq1 = sympy.Eq(x+2*y, 0)\neq2 = sympy.Eq(x*y, z0)\nsympy.solve([eq1, eq2], [x, y])\n```\n\n## Differentiation and integration\n\n### Differentiation\n\nThere is a standard function for differentiation, `diff`:\n\n\n```python\nexpression = x**2*sympy.sin(sympy.log(x))\nsympy.diff(expression, x)\n```\n\nA parameter can control how many times to differentiate:\n\n\n```python\nsympy.diff(expression, x, 3)\n```\n\nPartial differentiation with respect to multiple variables can also be performed by increasing the number of arguments:\n\n\n```python\nexpression2 = x*sympy.cos(y**2 + x)\nsympy.diff(expression2, x, 2, y, 3)\n```\n\nThere is also a function representing an *unevaluated* derivative:\n\n\n```python\nsympy.Derivative(expression2, x, 2, y, 3)\n```\n\nThese can be useful for display, building up a calculation in stages, simplification, or when the derivative cannot be evaluated. It can be explicitly evaluated using the `doit` function:\n\n\n```python\nsympy.Derivative(expression2, x, 2, y, 3).doit()\n```\n\n### Integration\n\nIntegration uses the `integrate` function. 
This can calculate either definite or indefinite integrals, but will *not* include the integration constant.\n\n\n```python\nintegrand=sympy.log(x)**2\nsympy.integrate(integrand, x)\n```\n\n\n```python\nsympy.integrate(integrand, (x, 1, 10))\n```\n\nThe definite integral is specified by passing a tuple, with the variable to be integrated (here `x`) and the lower and upper limits (which can be expressions).\n\nNote that `sympy` includes an \"infinity\" object `oo` (two `o`'s), which can be used in the limits of integration:\n\n\n```python\nsympy.integrate(sympy.exp(-x), (x, 0, sympy.oo))\n```\n\nMultiple integration for higher dimensional integrals can be performed:\n\n\n```python\nsympy.integrate(sympy.exp(-(x+y))*sympy.cos(x)*sympy.sin(y), x, y)\n```\n\n\n```python\nsympy.integrate(sympy.exp(-(x+y))*sympy.cos(x)*sympy.sin(y), \n (x, 0, sympy.pi), (y, 0, sympy.pi))\n```\n\nAgain, there is an unevaluated integral:\n\n\n```python\nsympy.Integral(integrand, x)\n```\n\n\n```python\nsympy.Integral(integrand, (x, 1, 10))\n```\n\nAgain, the `doit` method will explicitly evaluate the result where possible.\n\n## Differential equations\n\nDefining and solving differential equations uses the pattern from the previous sections. We'll use the same example problem as in the `scipy` case, \n\n$$ \\frac{\\text{d} y}{\\text{d} t} = e^{-t} - y, \\qquad y(0) = 1. $$\n\nFirst we define that $y$ is a function, currently unknown, and $t$ is a variable.\n\n\n```python\ny = sympy.Function('y')\nt = sympy.Symbol('t')\n```\n\n`y` is a general function, and can be a function of anything at this point (any number of variables with any name). To use it consistently, we *must* refer to it explicitly as a function of $t$ everywhere. For example,\n\n\n```python\ny(t)\n```\n\nWe then define the differential equation. `sympy.Eq` defines the equation, and `diff` differentiates:\n\n\n```python\node = sympy.Eq(y(t).diff(t), sympy.exp(-t) - y(t))\node\n```\n\nHere we have used `diff` as a method applied to the function. As `sympy` can't differentiate $y(t)$ (as it doesn't have an explicit value), it leaves it unevaluated.\n\nWe can now use the `dsolve` function to get the solution to the ODE. The syntax is very similar to the `solve` function used above:\n\n\n```python\nsympy.dsolve(ode, y(t))\n```\n\nThis is simple enough to solve, but we'll use symbolic methods to find the constant, by setting $t = 0$ and $y(t) = y(0) = 1$.\n\n\n```python\ngeneral_solution = sympy.dsolve(ode, y(t))\nvalue = general_solution.subs([(t,0), (y(0), 1)])\nvalue\n```\n\nWe then find the specific solution of the ODE.\n\n\n```python\node_solution = general_solution.subs([(value.rhs,value.lhs)])\node_solution\n```\n\n## Plotting\n\n`sympy` provides an interface to `matplotlib` so that expressions can be directly plotted. For example,\n\n\n```python\n%matplotlib inline\nfrom matplotlib import rcParams\nrcParams['figure.figsize']=(12,9)\n```\n\n\n```python\nsympy.plot(sympy.sin(x));\n```\n\nWe can explicitly set limits, for example\n\n\n```python\nsympy.plot(sympy.exp(-x)*sympy.sin(x**2), (x, 0, 1));\n```\n\nWe can plot the solution to the differential equation computed above:\n\n\n```python\nsympy.plot(ode_solution.rhs, xlim=(0, 1), ylim=(0.7, 1.05));\n```\n\nThis can be *visually* compared to the previous result. 
However, we would often like a more precise comparison, which requires numerically evaluating the solution to the ODE at specific points.\n\n## lambdify\n\nAt the end of a symbolic calculation using `sympy` we will have a result that is often long and complex, and that is needed in another part of another code. We could type the appropriate expression in by hand, but this is tedious and error prone. A better way is to make the computer do it.\n\nThe example we use here is the solution to the ODE above. We have solved it symbolically, and the result is straightforward. We can also solve it numerically using `scipy`. We want to compare the two.\n\nFirst, let us compute the `scipy` numerical result:\n\n\n```python\nfrom numpy import exp\nfrom scipy.integrate import odeint\nimport numpy\n\ndef dydt(y, t):\n \"\"\"\n Defining the ODE dy/dt = e^{-t} - y.\n \n Parameters\n ----------\n \n y : real\n The value of y at time t (the current numerical approximation)\n t : real\n The current time t\n \n Returns\n -------\n \n dydt : real\n The RHS function defining the ODE.\n \"\"\"\n \n return exp(-t) - y\n\nt_scipy = numpy.linspace(0.0, 1.0)\ny0 = [1.0]\n\ny_scipy = odeint(dydt, y0, t_scipy)\n```\n\nWe want to evaluate our `sympy` solution at the same points as our `scipy` solution, in order to do a direct comparison. In order to do that, we want to construct a function that computes our `sympy` solution, without typing it in. That is what `lambdify` is for: it creates a function from a sympy expression.\n\nFirst let us get the expression explicitly:\n\n\n```python\node_expression = ode_solution.rhs\node_expression\n```\n\nThen we construct the function using `lambdify`:\n\n\n```python\nfrom sympy.utilities.lambdify import lambdify\n\node_function = lambdify((t,), ode_expression, modules='numpy')\n```\n\nThe first argument to `lambdify` is a tuple containing the arguments of the function to be created. In this case that's just `t`, the time(s) at which we want to evaluate the expression. The second argument to `lambdify` is the expression that we want converted into a function. The third argument, which is optional, tells `lambdify` that where possible it should use `numpy` functions. This means that we call the function using `numpy` arrays, it will calculate using `numpy` array expressions, doing the whole calculation in a single call.\n\nWe now have a function that we can directly call:\n\n\n```python\nprint(\"sympy solution at t=0: {}\".format(ode_function(0.0)))\nprint(\"sympy solution at t=0.5: {}\".format(ode_function(0.5)))\n```\n\n sympy solution at t=0: 1.0\n sympy solution at t=0.5: 0.9097959895689501\n\n\nAnd we can directly apply this function to the times at which the `scipy` solution is constructed, for comparison:\n\n\n```python\ny_sympy = ode_function(t_scipy)\n```\n\nNow we can use `matplotlib` to plot both on the same figure:\n\n\n```python\nfrom matplotlib import pyplot\npyplot.plot(t_scipy, y_scipy[:,0], 'b-', label='scipy')\npyplot.plot(t_scipy, y_sympy, 'k--', label='sympy')\npyplot.xlabel(r'$t$')\npyplot.ylabel(r'$y$')\npyplot.legend(loc='upper right')\npyplot.show()\n```\n\nWe see good visual agreement everywhere. 
But how accurate is it?\n\nNow that we have `numpy` arrays explicitly containing the solutions, we can manipulate these to see the differences between solutions:\n\n\n```python\npyplot.semilogy(t_scipy, numpy.abs(y_scipy[:,0]-y_sympy))\npyplot.xlabel(r'$t$')\npyplot.ylabel('Difference in solutions');\n```\n\nThe accuracy is around $10^{-8}$ everywhere - by modifying the accuracy of the `scipy` solver this can be made more accurate (if needed) or less (if the calculation takes too long and high accuracy is not required).\n\n# Further reading\n\n`sympy` has [detailed documentation](http://docs.sympy.org/latest/index.html) and a [useful tutorial](http://docs.sympy.org/dev/tutorial/index.html).\n\n## Exercise : systematic ODE solving\n\nWe are interested in the solution of\n\n$$ \\frac{\\text{d} y}{\\text{d} t} = e^{-t} - y^n, \\qquad y(0) = 1, $$\n\nwhere $n > 1$ is an integer. The \"minor\" change from the above examples mean that `sympy` can only give the solution as a power series.\n\n### Exercise 1\n\nCompute the general solution as a power series for $n = 2$.\n\n### Exercise 2\n\nInvestigate the help for the `dsolve` function to straightforwardly impose the initial condition $y(0) = 1$ using the `ics` argument. Using this, compute the specific solutions that satisfy the ODE for $n = 2, \\dots, 10$.\n\n### Exercise 3\n\nUsing the `removeO` command, plot each of these solutions for $t \\in [0, 1]$.\n", "meta": {"hexsha": "f04d14277c89ba3e8d80529d285a0aad541535e6", "size": 210257, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "content/notebooks/07-sympy.ipynb", "max_stars_repo_name": "IanHawke/maths-with-python-book", "max_stars_repo_head_hexsha": "552be64d07ff218988885f272194786b4cd30716", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-01-28T16:00:53.000Z", "max_stars_repo_stars_event_max_datetime": "2019-01-28T16:00:53.000Z", "max_issues_repo_path": "content/notebooks/07-sympy.ipynb", "max_issues_repo_name": "IanHawke/maths-with-python-book", "max_issues_repo_head_hexsha": "552be64d07ff218988885f272194786b4cd30716", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2021-05-19T22:38:28.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-26T04:43:53.000Z", "max_forks_repo_path": "content/notebooks/07-sympy.ipynb", "max_forks_repo_name": "IanHawke/maths-with-python-book", "max_forks_repo_head_hexsha": "552be64d07ff218988885f272194786b4cd30716", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 130.0290661719, "max_line_length": 35518, "alphanum_fraction": 0.8686179295, "converted": true, "num_tokens": 3739, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9407897426182321, "lm_q2_score": 0.8962513724408291, "lm_q1q2_score": 0.8431840979998448}} {"text": "# SymPy Basics\nAdapted from: https://github.com/sympy/sympy/wiki/Quick-examples\n\n\n```python\nfrom sympy import *\nfrom IPython.display import display\ninit_printing(order=\"lex\",use_latex='mathjax')\n```\n\n# Symbolic Expressions and Calculations\n\n\n```python\nx, y, z, t = symbols('x y z t')\nk, m, n = symbols('k m n', integer=True)\n#f, g, h = map(Function, 'fgh')\n```\n\n\n```python\neqn = Rational(3,2)*pi + exp(I*x) / (x**2 + y)\neqn\n```\n\n\n\n\n$$\\frac{3 \\pi}{2} + \\frac{e^{i x}}{x^{2} + y}$$\n\n\n\n\n```python\neqn.subs(x,3)\n```\n\n\n\n\n$$\\frac{3 \\pi}{2} + \\frac{e^{3 i}}{y + 9}$$\n\n\n\n\n```python\nexp(I*x).subs(x,pi).evalf()\n```\n\n\n\n\n$$-1.0$$\n\n\n\n\n```python\nexpr = x + 2*y\nexpr.args\n```\n\n\n\n\n$$\\left ( x, \\quad 2 y\\right )$$\n\n\n\n\n```python\nexp(pi * sqrt(163)).evalf(50)\n```\n\n\n\n\n$$262537412640768743.99999999999925007259719818568888$$\n\n\n\n\n```python\nN(pi,100)\n```\n\n\n\n\n$$3.141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117068$$\n\n\n\n\n```python\nlatex(S(eqn,evaluate=False))\n```\n\n\n\n\n '\\\\frac{3 \\\\pi}{2} + \\\\frac{e^{i x}}{x^{2} + y}'\n\n\n\n$$ \\frac{3 \\pi}{2} + \\frac{e^{i x}}{x^{2} + y}$$\n\n## Algebra\n\n\n```python\n((x+y)**2 * (x+1)).expand()\n```\n\n\n\n\n$$x^{3} + 2 x^{2} y + x^{2} + x y^{2} + 2 x y + y^{2}$$\n\n\n\n\n```python\na = 1/x + (x*sin(x) - 1)/x\na\n```\n\n\n\n\n$$\\frac{1}{x} \\left(x \\sin{\\left (x \\right )} - 1\\right) + \\frac{1}{x}$$\n\n\n\n\n```python\na.simplify()\n```\n\n\n\n\n$$\\sin{\\left (x \\right )}$$\n\n\n\n\n```python\neqn = Eq(x**3 + 2*x**2 + 4*x + 8, 0)\neqn\n```\n\n\n\n\n$$x^{3} + 2 x^{2} + 4 x + 8 = 0$$\n\n\n\n\n```python\nsolve(eqn,x)\n```\n\n\n\n\n$$\\left [ -2, \\quad - 2 i, \\quad 2 i\\right ]$$\n\n\n\n\n```python\neq1 = Eq(x + 5*y, 2)\neq2 = Eq(-3*x + 6*y, 15)\ndisplay(eq1)\ndisplay(eq2)\nsln = solve([eq1, eq2], [x, y])\nsln\n```\n\n\n$$x + 5 y = 2$$\n\n\n\n$$- 3 x + 6 y = 15$$\n\n\n\n\n\n$$\\left \\{ x : -3, \\quad y : 1\\right \\}$$\n\n\n\n\n```python\ndisplay(eq1.subs(sln))\ndisplay(eq2.subs(sln))\n```\n\n\n$$\\mathrm{True}$$\n\n\n\n$$\\mathrm{True}$$\n\n\n## Recurrence Relations\n\n$$\n\\large\\begin{align}\ny_0 & =1 \\\\\ny_1 & =4 \\\\\ny_n & =y_n-2y_{n-1}+5y_{n-2} \n\\end{align}\n$$\n\n\n```python\nf=y(n)-2*y(n-1)-5*y(n-2)\nf\n```\n\n\n\n\n$$y{\\left (n \\right )} - 5 y{\\left (n - 2 \\right )} - 2 y{\\left (n - 1 \\right )}$$\n\n\n\n\n```python\nsln = rsolve(f,y(n),[1,4])\nsln\n```\n\n\n\n\n$$\\left(\\frac{1}{2} + \\frac{\\sqrt{6}}{4}\\right) \\left(1 + \\sqrt{6}\\right)^{n} + \\left(- \\sqrt{6} + 1\\right)^{n} \\left(- \\frac{\\sqrt{6}}{4} + \\frac{1}{2}\\right)$$\n\n\n\n\n```python\nfor i in range(0,10):\n print(sln.subs(n,i).simplify())\n```\n\n 1\n 4\n 13\n 46\n 157\n 544\n 1873\n 6466\n 22297\n 76924\n\n\n## Sums and Products\n\n\n```python\na, b = symbols('a b')\ns = Sum(6*n**2 + 2**n, (n, a, b))\ns\n```\n\n\n\n\n$$\\sum_{n=a}^{b} \\left(2^{n} + 6 n^{2}\\right)$$\n\n\n\n\n```python\ns.doit()\n```\n\n\n\n\n$$- 2^{a} + 2^{b + 1} - 2 a^{3} + 3 a^{2} - a + 2 b^{3} + 3 b^{2} + b$$\n\n\n\n\n```python\ns.subs({b:3,a:1}).doit()\n```\n\n\n\n\n$$98$$\n\n\n\n\n```python\nSum(b, (b, 1, n)).doit().factor()\n```\n\n\n\n\n$$\\frac{n}{2} \\left(n + 1\\right)$$\n\n\n\n\n```python\nSum(n*(n+1)/2,(n, 1, b)).doit()\n```\n\n\n\n\n$$\\frac{b^{3}}{6} + \\frac{b^{2}}{2} + \\frac{b}{3}$$\n\n\n\n\n```python\nfor i in 
range(1,10):\n print(Sum(n*(n+1)/2, (n, 1, b)).doit().subs(b,i))\n```\n\n 1\n 4\n 10\n 20\n 35\n 56\n 84\n 120\n 165\n\n\n\n```python\nSum(n, (n, a, b)).subs(a,1).doit()\n```\n\n\n\n\n$$\\frac{b^{2}}{2} + \\frac{b}{2}$$\n\n\n\n\n```python\n(x**3/6 + x**2/2 +x/3).factor()\n```\n\n\n\n\n$$\\frac{x}{6} \\left(x + 1\\right) \\left(x + 2\\right)$$\n\n\n\n\n```python\nproduct(n*(n+1), (n, 1, b))\n```\n\n\n\n\n$${2}^{\\left(b\\right)} b!$$\n\n\n\n\n```python\nf=Function('f')\nex=Eq(f(1/x)-3*f(x),x)\n```\n\n## Calculus\n\n$$\\lim_{x\\to 0} \\frac{\\sin x - x}{x^3} = -\\frac{1}{6}$$\n\n\n```python\n((sin(x)-x)/x**3).limit(x,0)\n```\n\n\n\n\n$$- \\frac{1}{6}$$\n\n\n\n\n```python\n(x**2+5*x**3).diff(x)\n```\n\n\n\n\n$$15 x^{2} + 2 x$$\n\n\n\n\n```python\n(-x).limit(x,oo)\n```\n\n\n\n\n$$-\\infty$$\n\n\n\n$$\\int x^2 \\cos x \\ dx$$\n\n\n```python\n(x**2 * cos(x)).integrate(x)\n```\n\n\n\n\n$$x^{2} \\sin{\\left (x \\right )} + 2 x \\cos{\\left (x \\right )} - 2 \\sin{\\left (x \\right )}$$\n\n\n\n$$\\int_0^{\\pi/2} x^2 \\cos x \\ dx$$\n\n\n```python\nintegrate(x**2 * cos(x), (x, 0, pi/2))\n##(x**2 * cos(x)).integrate(x, 0, pi/2) does not work. \n```\n\n\n\n\n$$-2 + \\frac{\\pi^{2}}{4}$$\n\n\n\n$$ \\large f''(x) + 9 f(x) = 1 $$\n\n\n```python\nfn = dsolve(Eq(Derivative(f(x),x,x) + 9*f(x), 1), f(x))\nfn\n```\n\n\n\n\n$$f{\\left (x \\right )} = C_{1} \\sin{\\left (3 x \\right )} + C_{2} \\cos{\\left (3 x \\right )} + \\frac{1}{9}$$\n\n\n\n\n```python\nfla = 3*sin(3*x)+3*cos(3*x)+1/9\nfla.diff(x).diff(x).subs(x,3)+9*fla.subs(x,3)\n```\n\n\n\n\n$$1.0$$\n\n\n\n## Linear Algebra \n", "meta": {"hexsha": "a55c5474aee9966b0ac15d9c005a6ccf83b4f64d", "size": 18890, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Notebooks/SymPy Basics.ipynb", "max_stars_repo_name": "Andrewnetwork/WorkshopScipy", "max_stars_repo_head_hexsha": "739d24b9078fffb84408e7877862618d88d947dc", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 433, "max_stars_repo_stars_event_min_datetime": "2017-12-16T20:50:07.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-08T13:05:57.000Z", "max_issues_repo_path": "Notebooks/SymPy Basics.ipynb", "max_issues_repo_name": "Andrewnetwork/WorkshopScipy", "max_issues_repo_head_hexsha": "739d24b9078fffb84408e7877862618d88d947dc", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2017-12-17T06:10:28.000Z", "max_issues_repo_issues_event_max_datetime": "2018-11-14T15:50:10.000Z", "max_forks_repo_path": "Notebooks/SymPy Basics.ipynb", "max_forks_repo_name": "Andrewnetwork/WorkshopScipy", "max_forks_repo_head_hexsha": "739d24b9078fffb84408e7877862618d88d947dc", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 47, "max_forks_repo_forks_event_min_datetime": "2017-12-06T20:40:09.000Z", "max_forks_repo_forks_event_max_datetime": "2019-06-01T11:33:57.000Z", "avg_line_length": 18.6660079051, "max_line_length": 186, "alphanum_fraction": 0.3878771837, "converted": true, "num_tokens": 2018, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9407897475985937, "lm_q2_score": 0.896251366205709, "lm_q1q2_score": 0.8431840965975638}} {"text": "# \"Differentiation and Integration with SymPy\"\n> \"Differentiation and Integration are some of the most important topics in maths. 
This post walks through how Python can be used to solve these kinds of problems.\"\n\n - toc:false\n - badges: true\n - comments: true\n - author: Rohit Thakur\n - categories: [maths, calculus]\n\nDifferentiation and Integration are the building blocks of calculus. They are usually taught in Calculus I. \nThese two concepts are heavily used in a lot of fields, including machine learning. \n\nIf you don't remember, here is a high-level introduction of what these two concepts are:\n\nDifferentiation (also called derivation) measures the rate of change of a function. Think of it like calculating slope in linear models. The only difference here is that you can calculate the slope at every point and for functions that are non-linear. \nFor example: if `f(x) = x**2`, then the derivative of this function is `f'(x) = 2x`. The derivative function is read \"f prime of x\".\n\nIntegration on the other hand measures the area under the curve. \nFor the above function, the integral, often called the antiderivative, will be `x**3 / 3`.\n\n\n\nCalculating derivatives and integrals is sometimes confusing, so we'll use [SymPy](https://www.sympy.org/en/index.html). \nWe will start by installing the library. \n\n\n```python\n#collapse-output\n!pip install sympy\n```\n\n Requirement already satisfied: sympy in c:\\users\\rohit\\miniconda3\\envs\\math\\lib\\site-packages (1.8)\n Requirement already satisfied: mpmath>=0.19 in c:\\users\\rohit\\miniconda3\\envs\\math\\lib\\site-packages (from sympy) (1.2.1)\n\n\nNext we import everything we need.\n\n\n```python\nfrom sympy import Symbol, diff, integrate\n```\n\nSince we will be calculating for `x`, we'll have to create a symbol for it. \n\n\n```python\nx = Symbol('x')\n```\n\nLet's start by writing our function.\n\n\n```python\nf = 2*x**3+6\nf\n```\n\n\n\n\n$\\displaystyle 2 x^{3} + 6$\n\n\n\nNow we calculate the derivative of the function using sympy.\n\n\n```python\nf_prime = f.diff(x)\nf_prime\n```\n\n\n\n\n$\\displaystyle 6 x^{2}$\n\n\n\nSo now we have the derivative of our function. Let's calculate the integral. \n\n\n```python\nf_int = integrate(f_prime, x)\nf_int\n```\n\n\n\n\n$\\displaystyle 2 x^{3}$\n\n\n\nSo this is how we calculate both the derivative and the anti-derivative of a function using sympy. There are other libraries as well that work really well,\ne.g. scipy. \n\n>Important: These aren't actual Python functions. In order to use them numerically, you need to \"lambdify\" them. 
We'll start by importing it.\n\n\n```python\nfrom sympy.utilities.lambdify import lambdify\n```\n\n\n```python\nf = lambdify(x, f)\nf_prime = lambdify(x, f_prime)\nf_int = lambdify(x, f_int)\n```\n\n\n```python\nprint(f\"The value of function at 3 is: {f(3)} and its derivative is {f_prime(3)}\")\n```\n\n The value of function at 3 is: 60 and its derivative is 54\n\n", "meta": {"hexsha": "3623558e16408df1ebc5d3095e07cc7d22f30ddf", "size": 7195, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "_notebooks/2021-05-22-Integration-and-Differentiation-with-python.ipynb", "max_stars_repo_name": "rohit-thakur12/ml", "max_stars_repo_head_hexsha": "1b365560f0ba809f32e27025586a6cd6df9472ac", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "_notebooks/2021-05-22-Integration-and-Differentiation-with-python.ipynb", "max_issues_repo_name": "rohit-thakur12/ml", "max_issues_repo_head_hexsha": "1b365560f0ba809f32e27025586a6cd6df9472ac", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "_notebooks/2021-05-22-Integration-and-Differentiation-with-python.ipynb", "max_forks_repo_name": "rohit-thakur12/ml", "max_forks_repo_head_hexsha": "1b365560f0ba809f32e27025586a6cd6df9472ac", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 25.6049822064, "max_line_length": 257, "alphanum_fraction": 0.5720639333, "converted": true, "num_tokens": 741, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9572778073288128, "lm_q2_score": 0.8807970795424087, "lm_q1q2_score": 0.8431674970059789}} {"text": "###Objective\n**Given a set of equilibria and their stability, construct a system of equations that produces the same dynamics as those determined by the equilibria.**\n\nTake a general two dimensional system of the following form\n\n$$\\begin{align}\n\\dot x &= f(x,y) \\\\\n\\dot y &= g(x,y)\n\\end{align}$$\n\nWhat form should $f(x,y)$ and $g(x,y)$ take? I will show that all dynamics in two dimensions can be reproduced by a coupled system of polynomials.\n\n$$\\begin{align}\nf(x,y) &= y+P_1(x) \\\\\ng(x,y) &= -x+P_2(y)\n\\end{align}$$\n\nwhere $P_1(x)$ and $P_2(y)$ are arbitrary polynomials of any degree. This form was chosen so that the nullclines could be explicitly defined and would be condusive to analysis. The nullclines are given by\n\n$$\\begin{align}\ny &= -P_1(x) \\\\ \nx &= P_2(y)\n\\end{align}$$\n\nThe Jacobian of the system is given by.\n\n$$J(x,y) = \\begin{bmatrix}\n p_1 & 1 \\\\\n -1 & p_2\n \\end{bmatrix}$$\n\nWhere for notational brevity we are referring to the derivatives as $\\frac{dP_1}{dx}(x,y) \\equiv p_1$ and $\\frac{dP_2}{dy}(x,y) \\equiv p_2$. We can quickly see the trace and determinant as\n\n$$\\begin{align}\nTrace(J) &= \\tau(p_1,p_2) = p_1 + p_2 \\\\ \nDet(J) &= \\Delta(p_1,p_2) = p_1 p_2 +1\n\\end{align}$$\n\nThe eigen values of the Jacobian determine the type and stability of nonhyperbolic equilibria. 
The characteristic for this system is given by\n\n$$\\begin{vmatrix}\n p_1-\\lambda & 1 \\\\\n -1 & p_2-\\lambda\n \\end{vmatrix} = (p_1-\\lambda)(p_2-\\lambda)+1 \n = \\lambda^2 - (p_1+p_2)\\lambda + p_1 p_2 +1 \n = \\lambda^2 - \\tau \\lambda + \\Delta\n$$\n\nGiven the characteristic we find the eigenvalues as\n\n$$\\lambda = \\frac{\\tau \\pm \\sqrt{\\tau^2-4\\Delta}}{2}$$\n\nIn conventional dynamics theory the type of equilibria is determined by the sign of the radicand $\\tau^2-4\\Delta$ and their stability is determined by the sign of $\\tau$ and $\\Delta$. The regions of stability in $\\tau$ vs $\\Delta$ space is shown below.\n\n\n\nIn general $\\tau$ and $\\Delta$ can take on all values and collectively describe all equilibria in two dimensions. Given that $P_1(x)$ and $P_2(y)$ are polynomials we know that their derivatives $p_1,p_2 \\hspace{3pt} \\epsilon \\hspace{3pt} (-\\infty,\\infty)$. Also, because our general system above produces continuous $\\tau$ and $\\Delta$, we see that it is capable of reproducing all equilibria in two dimensions.\n\n**TO DO: add the $p_1$ vs $p_2$ plane showing regions of stability**\n\n\nThe slopes of nullclines in phase space are given by\n\n$$\\begin{align}\n-\\frac{f_x}{f_y} &= -p_1 \\\\\n-\\frac{g_x}{g_y} &= \\frac{1}{p_2}\n\\end{align}$$\n\nIn the $y$ vs $x$ phase space we'll define a vector tangent to each nullcline as\n\n$$\\begin{align}\n\\vec{T_x} &= 1\\hat{i} + -p_1\\hat{j} \\\\\n\\vec{T_y} &= p_2\\hat{i} + 1\\hat{j} \n\\end{align}$$\n\nWe are interested in the angle between these vectors at an equilibria, therefore we will use the following formula...\n\n$$\\theta = \\cos^{-1}(\\frac{\\vec{T_x}\\cdot \\vec{T_y}}{\\|T_x\\| \\|T_y\\|})$$\n\nStarting with the numerator we get...\n\n$$\\vec{T_x}\\cdot \\vec{T_y} = p_2-p_1$$\n\nAnd in the denominator\n\n$$\\|T_x\\| \\|T_y\\| = \\sqrt{1+p^2_1}\\sqrt{1+p^2_2}$$\n\nThe objective here is to find the angle between the tangent vectors in terms of the eigenvalues of the system. We will need to convert the numerator and denominator into expressions of only eigenvalues. For this we will be using the handy fact from linear algebra that the trace of a matrix is equal to the sum of its eigenvalues, $\\tau=\\lambda_1+\\lambda_2$, and the determinant of a matrix is equal to the product of its eigenvalues, $\\Delta = \\lambda_1 \\lambda_2$.\n\n**TO DO: elegant algebra leading to the following formulas (don't have time now)**\n\nThus we find the angle between the tangent vectors at the equilibria is given by\n\n$$\\theta(\\lambda_1,\\lambda_2) = \\cos^{-1}\\Bigg (\\pm \\sqrt{\\frac{(\\lambda_1-\\lambda_2)^2+4}{(\\lambda_1+\\lambda_2)^2+(\\lambda_1 \\lambda_2-2)^2}}\\Bigg )$$\n\nOr, in terms of the trace and determinant\n\n$$\\theta(\\lambda_1,\\lambda_2) = \\cos^{-1}\\Bigg (\\pm \\sqrt{\\frac{\\tau^2+4(1-\\Delta)}{\\tau^2+(\\Delta-2)^2}}\\Bigg )$$\n\nWe will show that this is defined for all nonzero $\\lambda_1$ and $\\lambda_2$. We needn't worry about the degenerate case as the Hartman-Grobman theory tells us the Jacobian is not guaranteed to correctly determine the stability of non-hyperbolic equilibria. We need to show three things.\n\n1. The denominator is never zero\n2. The radicand is always positive\n3. The magnitude of the argument to $\\cos^{-1}$ is bounded by one\n\nThe second equation is found by algebraic manipulations and subsitutions of the first. 
Therefore showing any of the properties for one equation implies it is true for the other.\n\n**(1)**\nBy inspection we see that both terms in the denominator are squared (in both equations). Thus, the denominator is nonzero for nonzero eigenvalues.\n\n**(2)**\nBy inspecting the first equation we see that all terms in the radicand are positive, and we are done.\n\n**(3)**\nLooking at the second equation...\n\n$$\\frac{\\tau^2+4(1-\\Delta)}{\\tau^2+(\\Delta-2)^2} < 1 \\hspace{10pt} \\rightarrow \\hspace{10pt} \\tau^2+4(1-\\Delta) < \\tau^2+(\\Delta-2)^2$$\n\nRecalling that the denominator is nonzero we do not flip the inequality. We expand both sides,\n\n$$\\tau^2-4\\Delta+4 < \\tau^2+\\Delta^2-4\\Delta+4 \\hspace{10pt} \\rightarrow \\hspace{4pt} 0 < \\Delta^2$$\n\nwhich is always true for nonzero eigenvalues.\n\n**TO DO: Demonstrate with some small examples**\n\n\n\n```\n\n```\n\n\n```\n\n```\n", "meta": {"hexsha": "ea0c57f251debbcecc1e4d65de6b216409066e05", "size": 7643, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "cmb_phase_plane/research/.ipynb_checkpoints/Phase Diagram - Inverse Problem-checkpoint.ipynb", "max_stars_repo_name": "mathnathan/notebooks", "max_stars_repo_head_hexsha": "63ae2f17fd8e1cd8d80fef8ee3b0d3d11d45cd28", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-12-04T11:04:45.000Z", "max_stars_repo_stars_event_max_datetime": "2019-12-04T11:04:45.000Z", "max_issues_repo_path": "cmb_phase_plane/research/.ipynb_checkpoints/Phase Diagram - Inverse Problem-checkpoint.ipynb", "max_issues_repo_name": "mathnathan/notebooks", "max_issues_repo_head_hexsha": "63ae2f17fd8e1cd8d80fef8ee3b0d3d11d45cd28", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "cmb_phase_plane/research/.ipynb_checkpoints/Phase Diagram - Inverse Problem-checkpoint.ipynb", "max_forks_repo_name": "mathnathan/notebooks", "max_forks_repo_head_hexsha": "63ae2f17fd8e1cd8d80fef8ee3b0d3d11d45cd28", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 46.8895705521, "max_line_length": 483, "alphanum_fraction": 0.5573727594, "converted": true, "num_tokens": 1678, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9362850057480346, "lm_q2_score": 0.9005297834483234, "lm_q1q2_score": 0.8431525334721898}} {"text": "# Particle in multistable potentials\n\n\n## Phase portrait for a one-dimensional system\n\nThe Newton equation for a point particle in a one-dimensional potential $U(x)$ can be written as a set of two first-order differential equations:\n\n$$ \\begin{cases}\u00a0\\dot x = v \\\\ \\dot v = - \\frac{1}{m}U'(x) \\end{cases}\u00a0$$\n\nWe can draw a phase portrait, i.e. parametric solutions $(x(t),v(t))$ and a vector field defined by the right hand sides of equations in the $(x,v)$-phase space.\n\n\n## Example: motion in the $U(x) = x^3-x^2$ potential\n\n\n\n```python\nvar('v')\nm = 1\nU(x) = x^3-x^2\nxmax,xmin = sorted([s_.rhs() for s_ in solve(U.diff(x)==0,x)])\nEmin = U(xmin)\nEtot = 1/2*m*v^2 + U(x)\n\nplot(U(x),(x,-0.4,1.1),figsize=4) +\\\n point([xmin,U(xmin)],color='red',size=40)+\\\n point([xmax,U(xmax)],color='red',size=40)\n```\n\nThe potential $U(x) =x^3-x^2$. 
\n\n\n```python\npkt = point((xmin,0),size=20,color='black')\nplt = sum([ implicit_plot(Etot==E0,(x,-1/2,1.2),(v,-1,1),color='blue')\\\n for E0 in srange(Emin,0.0,0.02)])\nplt +=implicit_plot(Etot==0,(x,-1/2,1.2),(v,-1,1),color='black') +pkt\nplt +=sum([ implicit_plot(Etot==E0,(x,-1/2,1.2),(v,-1,1),color='green')\\\n for E0 in srange(0.02,-2*Emin,0.04)])\nplt.show()\n```\n\nThe phase curves $(x, v)$ depends on initial conditions $(x_0, v_0)$. There are two types of phase curves: closed (periodic motion of the particle in a bounded interval) and open (the motion is unbounded: the particle can escape to -infinity or can return from -infinity. \n\n\n```python\nvector_field = vector([v,-U.diff(x)])\nplt + plot_vector_field(vector_field.normalized(),(x,-1/2,1.2),(v,-1,1))\n```\n\nThe vector field shows direction of motion of the particle on the $(x, v)$-plane. \n\n\n## Harmonic oscillations limit for one-dimensional systems\n\nConsider a conservative one-dimensional system. In this case, the force $f(x)$ can always be represented as gradient of the potential $U(x)$, namely, \n$$f(x) = -\\frac{\\partial U(x)}{dx}.$$\nConsider a certain potential that has a minium at some point $x_0$. The necessary condition for minimum of the function is its zero first derivative at this point. Let's expand the potential in the Taylor series around the minimum. We obtain:\n$$ U(x) = U(x_0) + \\underbrace{U'(x_0)( x-x_0)}_{=0}+\\frac{1}{2} U''(x_0)(x-x_0)^2+...$$\nFor small deviation from the minimum this series can be approximated by the function \n$$U(x) = \\frac{1}{2} k (x-x_0)^2,$$\n\nThe Newton equation for such motion is as follows:\n\n$$m \\ddot x = m \u00a0a \u00a0= F = -U'(x) \u00a0= \u00a0-k (x-x_0)$$\n\nFor the new variable $y=x-x_0$ it takes the form \n\n$$m \\ddot y =-ky$$\n\nThis is the already known equation for the harmonic oscillator with the shifted equilibrium point $x_0$. \n\nNow, let us return to the system with the potential $U(x) = x^3-x^2$. \n\n\n```python\nvar('x v')\nEtot = 1/2*v^2 + U(x)\nElin = 1/2*v^2 + U(xmin)+1/2*U.diff(x,2).subs(x==xmin)*(x-xmin)^2\nshow(Etot)\nshow(Elin)\n```\n\n\n```python\nEmin = Etot(x=xmin,v=0)\nEmin\n```\n\nLet's have a look at the trajectories for the exact system with $U(x) = x^3-x^2$ and the linearized system with $U(x)=(1/2) x^2$. The blue line below is a separatrix - i.e. a solution with $E=0$\n\n\n```python\nplt = sum([ implicit_plot(Etot==E0,(x,.4,.91),(v,-.3,.3),color='red') \\\n for E0 in srange(Emin+1e-3,-0.1,0.005)])\nplt += implicit_plot(Etot==0.00,(x,0,1.1), (v,-.6,.6),color='blue') \n\nplt_lin =sum([ implicit_plot(Elin==E0+1e-3,(x,.4,.91),(v,-.3,.3),color='gray') \\\n for E0 in srange(Emin,-0.1,0.005)])\nplt+plt_lin\n```\n\nFor larger ones, there is a growing discrepancy:\n\n\u00a0- for the nonlinear system, above certain energy, there are open trajectories\n\u00a0- motion in an linearized system is always an ellipse. The period does not depend on the amplitude.\n\n## Period of oscillation around potential minimum\n\nWe take a particle with energy by $dE$ larger than minimum of the potential:\n\n\n```python\ndE = 0.01\n```\n\n\n```python\nE0 = U(xmin)+dE\nE0\n```\n\nIn order to analyze eqaution of motion, we need to solve: $$U(x)=E_0$$ and obtain the extreme values of position to the left and right from potentiall minimum. In our case it con be done analytically. 
\n\nHowever, note that this requires neglecting the imaginary part, which is a small numerical artifact of the order of [machine epsilon](https://en.wikipedia.org/wiki/Machine_epsilon).\n\n\n```python\n_, x1,x2 = sorted( [s_.rhs().n().real() for s_ in solve(U(x)==E0,x)])\nx1,x2\n```\n\nNow, having $x_{1,2}$, we can numerically integrate the equation for the period (\\ref{eq:1d_TE}) and obtain:\n\n\n\n```python\nperiod = 2*sqrt(m/2.)*\\\n integral_numerical(1/sqrt(E0-U(x)) , x1,x2, algorithm='qags')[0]\nperiod\n```\n\nLet us plot this situation:\n\n\n```python\nU(x) = x^3-x^2\nxmax,xmin = sorted([s_.rhs() for s_ in solve(U.diff(x)==0,x)])\nplot(U(x),(x,-0.4,1.1),figsize=4,gridlines=[None,[U(xmin),E0]])+\\\n point([xmin,U(xmin)],color='red',size=40)+\\\n point([xmax,U(xmax)],color='red',size=40)+\\\n point([x1,U(x1)],color='green',size=40)+\\\n point([x2,U(x2)],color='green',size=40)\n```\n\nWe can write a function which computes the period based on the previous steps:\n\n\n```python\ndef T(E0):\n m = 1\n _, x1,x2 = sorted( [s_.rhs().n().real() \\\n for s_ in solve(U(x)==E0,x)])\n integral, error = \\\n integral_numerical(1/sqrt(E0-U(x)), x1,x2, algorithm='qags')\n period = 2*sqrt(m/2.) * integral\n return period\n```\n\n\n```python\nperiod_num = T(U(xmin)+dE)\nperiod_num\n```\n\nWe know the period of oscillation exactly in a small neighborhood of the potential minimum. It is simply the period of the harmonic oscillator with frequency $\\omega=\\sqrt{U''(x_{min})}$ (for $m=1$).\n\n\n```python\nomega = sqrt(U(x).diff(x,2).subs(x==xmin.n()))\nperiod_harm = 2*pi.n()/omega\nperiod_harm\n```\n\n#### How does the period depend on energy?\n\nWe can easily compute and plot $T(E_0)$:\n\n\n```python\nTonE = [(E_,T(E_)) for E_ in srange(U(xmin)+1e-6,-1e-5,0.001)]\n```\n\n\n```python\nline(TonE, figsize=(6,2),gridlines=[None,[period_harm]])\n```\n\n#### What does the trajectory look like? \n\nLet us numerically integrate the equation of motion:\n\n\n\n```python\nt_lst = srange(0, period_num, 0.01, include_endpoint=True) \nsol = desolve_odeint([v,-U.diff(x)],[x2,.0],t_lst,[x,v])\n```\n\nIn the phase space $(x,\\dot x)$ we have:\n\n\n```python\nline(sol[::10,:],marker='o',figsize=4)\n```\n\nThe position oscillates with time:\n\n\n```python\nline(zip(t_lst,sol[:,0]),figsize=(6,2))\n```\n\n## Time to reach the hill\n\nThe top of the potential hill is at the origin of the $(x,E)$ coordinate system. We then examine the limit $E\\to0$.\n\nNear zero, we can approximate the potential by an inverted parabola. Then the time to reach the hill from a certain point (for example $x=1$) reads:\n\n\n```python\nvar('E')\nassume(E>0)\nintegrate(-1/sqrt(E+x^2),x,1,0)\n```\n\nThis result is divergent for $E\\to0$:\n\n\n```python\nlimit( arcsinh(1/x),x=0)\n```\n\nThis means that the time needed to climb a hill with *just* enough kinetic energy is **infinite**. This holds only for potential hills which have zero derivative at the top. 
On the other hand for potential barriers which do not have this property, for example, $U(x)=-|x|$, the particle can reach the top with just enought energy in finite time.\n\nLet analyze it:\n\n\n```python\nU1(x) = -abs(x)\nE0 = 0\n```\n\nwe can plot velocity and potential:\n\n\n```python\nplot([U1(x), sqrt(2*(E0 - U1(x)))],(x,-1,1),figsize=4)\n```\n\nthe time of travel from $x=-1$ to $x=0$ is given by:\n\n\\begin{equation}\n\\label{eq:1d_TE}\nt=\\sqrt{\\frac{m}{2}} \\; \\int_{-x1}^{x1}{\\frac{dx}{(\\sqrt(E-U(x)}}\n\\end{equation}\n\nwhich in this case is:\n\n\n```python\nsqrt(m/2.)*integrate(1/sqrt((E0- U1(x))),x,-1,0).n()\n```\n\nIn the case of potentials which behave like $|x|^\\alpha$, for $\\alpha>1$ we can calculate time of travel if we particle total energy is by $dE$ larger than potential barrier. \n\n\n```python\ndef t_hill(E0):\n m = 1\n\n x2, = [s_.rhs().n().real() for s_ in solve(U(x)==E0,x) if s_.rhs().n().imag().abs()<1e-6]\n \n integral, error = \\\n integral_numerical(1/sqrt(E0-U(x)), 0,x2, algorithm='qags')\n m = 1\n period = 2*sqrt(m/2.) * integral\n return period\n```\n\n\n```python\nt_hill(9.1)\n```\n\n\n```python\nimport numpy as np\nt_E = [(E_,t_hill(E_)) for E_ in np.logspace(-6,1,120)]\nline(t_E,axes_labels=['$E$','$t$'], figsize=(6,2))\n```\n\n\n```python\nt_E = [(log(E_),t_hill(E_)) for E_ in np.logspace(-6,16,120)]\nline(t_E,axes_labels=['$\\log_{10} E$','$t$'], figsize=(6,2))\n```\n\n#### Experiment with Sage!\nExamine in a similar way the system corresponding to the movement in the $U(x) = -\\cos(x)$ potential - this is a physical pendulum.\n\n\\newpage\n", "meta": {"hexsha": "d8648c0faf3175a2b2d31304aa7d56a2a271123b", "size": 15935, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "013-1d-multistable.ipynb", "max_stars_repo_name": "marcinofulus/Mechanics_with_SageMath", "max_stars_repo_head_hexsha": "6d13cb2e83cd4be063c9cfef6ce536564a25cf57", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "013-1d-multistable.ipynb", "max_issues_repo_name": "marcinofulus/Mechanics_with_SageMath", "max_issues_repo_head_hexsha": "6d13cb2e83cd4be063c9cfef6ce536564a25cf57", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2022-01-30T16:45:58.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-30T16:45:58.000Z", "max_forks_repo_path": "013-1d-multistable.ipynb", "max_forks_repo_name": "marcinofulus/Mechanics_with_SageMath", "max_forks_repo_head_hexsha": "6d13cb2e83cd4be063c9cfef6ce536564a25cf57", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-11-15T08:26:14.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-12T13:07:16.000Z", "avg_line_length": 25.5778491172, "max_line_length": 350, "alphanum_fraction": 0.5280200816, "converted": true, "num_tokens": 2817, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9111797148356995, "lm_q2_score": 0.9252299653388752, "lm_q1q2_score": 0.8430507759749205}} {"text": "# Lecture 11: Poisson Distribution and Approximation\n\n## Sympathetic Magic, or _[Confusing the map for the territory](http://nobeliefs.com/MapandTerritory.htm)_\n\n### Don't mistake a random variable for its distribution\n\nAs a case in point, say we have two random variables $X$ and $Y$.\n\n\\begin{align}\n P(X + Y) \\ne P(X=x) + P(Y=y)\n\\end{align}\n\nClaiming equality is clearly nonsense, since $X + Y$ is itself a random variable, and it should be recognized that $P(X=x) + P(Y=y)$ can exceed 1.\n\n### Think of them as houses & blueprints\n\nMetaphorically, the distribution is the _blueprint_, and the random variable is the _house_.\n\n----\n\n## Poisson Distribution\n\n### Description\n\nThe single most often used real-world discrete model. Applicable when there are a _large number of trials_ with a _small probability of success in each trial_.\n\nNote that the probability of success in each trial does _not_ have to be the same in each trial (unlike $Bin(n,p)$).\n\nThe Poisson distribution expresses the probability of a given number of events occurring in a fixed interval of time and/or space, if these events occur with a known average _rate_ and independently of the time since the last event. The Poisson distribution can also be used for the number of events in other specified intervals such as distance, area or volume.\n\nExamples might be counting:\n\n- e-mails coming into your mailbox in an hour\n- buses following the same route arriving at a bus-stop in a day\n- natural phenomena, like earthquakes or meteor sightings, etc., in a month\n- chips in a chocolate-chip cookie (area/volume)\n\n\n### Notation\n\n$X \\sim Pois(\\lambda)$\n\n### Parameters\n\n$\\lambda$ - the average number of events per interval (rate), where $\\lambda \\gt 0$\n\n### Probability mass function\n\n\\begin{align}\n P(X = k) = e^{-\\lambda} \\frac{\\lambda^k}{k!} ~~ \\text{, where } k \\in \\{0,1,2,\\dots,\\}\n\\end{align}\n\nChecking the validity of this PMF it is easy to see that it is always non-negative.\n\nFurthermore,\n\n\\begin{align}\n \\sum_{k=0}^{\\infty} P(X = k) &= \\sum_{k=0}^{\\infty} e^{-\\lambda} \\frac{\\lambda^k}{k!} \\\\\n &= e^{-\\lambda} \\sum_{k=0}^{\\infty} \\frac{\\lambda^k}{k!} & &\\text{since } e^{-\\lambda} \\text{ is constant} \\\\ \n &= e^{-\\lambda} e^{\\lambda} & &\\text{recall Taylor series for } e^{\\lambda} \\\\\n &= 1 ~~~~ \\blacksquare\n\\end{align}\n\n### Expected value\n\n\\begin{align}\n \\mathbb{E}(X) &= \\sum_{k=0}^{\\infty} k ~~ e^{-\\lambda} \\frac{\\lambda^k}{k!} \\\\\n &= e^{-\\lambda} \\sum_{k=1}^{\\infty} k \\frac{\\lambda^k}{k!} \\\\\n &= e^{-\\lambda} \\sum_{k=1}^{\\infty} \\frac{\\lambda^k}{(k-1)!} \\\\\n &= \\lambda e^{-\\lambda} \\sum_{k=1}^{\\infty} \\frac{\\lambda^{k-1}}{(k-1)!} \\\\\n &= \\lambda e^{-\\lambda} e^{\\lambda} \\\\\n &= \\lambda ~~~~ \\blacksquare\n\\end{align} \n\n----\n\n## Poisson Paradigm (Approximate)\n\nSuppose we have:\n\n1. a lot of events $A_1, A_2, \\dots, A_n$, with $P(A_j) = p$\n1. number of trials $n$ is very large\n1. 
$p$ is very small\n\nThe events could be _independent_ (knowing that $A_1$ occurred has no bearing whatsoever on $A_2$.\n\nThey could even be _weakly dependent_ ($A_1$ occurring has some level of affect on the likelihood of $A_2$).\n\nEither way, the expected number of events $A_j$ occurring is approximately $Pois(\\lambda)$, and by Linearity\n\n\\begin{align}\n \\mathbb{E}(A) &= \\lambda = \\sum_{j=1}^n p_j\n\\end{align}\n\n### Relating the Binomial distribution to the Poisson\n\nThis example will relate the Binomial distribution to the Poisson distribution.\n\nSuppose we have $X \\sim Bin(n,p)$, and let $n \\rightarrow \\infty$, $p \\rightarrow 0$. \n\nWe will hold $\\lambda = np$ constant, which will let us see what happens to the Binomial distribution when the number of trails approaches $\\infty$ while the probabability of success $p$ gets very small.\n\nSpecifically let's see what happens to the Binomial PMF under these conditions.\n\n\\begin{align}\n P(X = k) &= \\binom{n}{k} p^k q^{n-k} ~~~~ \\text{where k is fixed} \\\\\n &= \\underbrace{\\frac{n (n-1) \\dots (n-k+1)}{k!} \\left(\\frac{\\lambda}{n}\\right)^k}_{\\text{n terms cancel out}} ~~ \\underbrace{\\left(1 - \\frac{\\lambda}{n}\\right)^n}_{\\text{continuous compounding and e}} ~~ \\underbrace{\\left(1 - \\frac{\\lambda}{n}\\right)^{-k}}_{\\text{goes to 1}} \\\\\n \\\\\n &\\rightarrow \\frac{\\lambda^k}{k!} e^{-\\lambda} ~~ \\text{which is the Poisson PMF at k}\n\\end{align}\n\n\n### Thought-experiment: counting raindrops\n\nLet's say we want to count the number of raindrops that fall on a certain area on the ground.\n\nWe could divide this area with a grid, with each grid cell being very, very tiny. \n\nTherefore:\n\n- the number grid cells is very, very large\n- the probability of a rain drop landing in an arbitrary cell is very, very small\n\n_What distribution could we use to model this situation?_\n\n\n#### How about $X \\sim Bin(n,p)$?\nConsidering the Binomial distribution, we would have to make certain assumptions:\n\n1. the event of a raindrop hitting a cell is independent of other events\n1. the probability $p$ of a raindrop hitting an arbitrary cell is identical. \n\nWhile it might be OK to make these assumptions, but one **big** stumbling block in using the Binomial is that we would end up calculating factorials of a very large $n$, which even for a computer would be problematic.\n\n#### How about $X \\sim Pois(\\lambda)$?\n\nThe Poisson distribution is a better model approximation than the Binomial, since\n\n1. no factorials involved, so much simpler than the Binomial\n1. we can deal with the case where more than one raindrop lands in a cell\n1. we don't need to make any assumption on $p$ being identical\n\n### Birthdays, revisited: triple matches\n\nSuppose we have $n$ people, and we want to know the _approximate probability_ that there are 3 people who share the same birthday.\n\nDoing this the way we did in Lecture 3 is possible, but it would get very messy.\n\nLet's set ourselves up first.\n\n- there are $\\binom{n}{3}$ triplets\n- we used indicator r.v. $I_{ijk} \\text{, where } i < j < k$\n- $\\Rightarrow \\mathbb{E}(\\# \\text{ triple matches}) = \\binom{n}{3} \\frac{1}{365^2}$ \n\nBut we are interested in the _approximate probability_ of triple matches, so now let's use the Poisson. Why?\n\n1. we expect the total number of trials $\\binom{n}{3}$ to be very large\n1. the probability $P(I_{ijk}$ is very small\n1. 
events are _weakly dependent_, since if persons $1$ and $2$ are already known to share a birthday, then $I_{123} \\text{, } I_{124}$ are not completely independent\n\nWe claim that these circumstances are _approximately_ $Pois(\\lambda)$ with $\\lambda = \\mathbb{E}(\\# \\text{ triple matches})$.\n\nLet $X = \\text{# triple matches}$\n\n\\begin{align} \n P(X \\ge 1) &= 1 - P(X=0) \\\\\n &\\approx 1 - e^{-\\lambda} \\frac{\\lambda^0}{0!} \\\\\n &= 1 - e^{-\\lambda} ~~~~ \\text{where you just plug in } \\lambda = \\binom{n}{3} \\frac{1}{365^2} ~~~~ \\blacksquare\n\\end{align}\n\n----\n", "meta": {"hexsha": "6f1c49d9fced68e08a82c39d5d8751193fef96b8", "size": 9183, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Lecture_11.ipynb", "max_stars_repo_name": "dirtScrapper/Stats-110-master", "max_stars_repo_head_hexsha": "a123692d039193a048ff92f5a7389e97e479eb7e", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Lecture_11.ipynb", "max_issues_repo_name": "dirtScrapper/Stats-110-master", "max_issues_repo_head_hexsha": "a123692d039193a048ff92f5a7389e97e479eb7e", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Lecture_11.ipynb", "max_forks_repo_name": "dirtScrapper/Stats-110-master", "max_forks_repo_head_hexsha": "a123692d039193a048ff92f5a7389e97e479eb7e", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 42.123853211, "max_line_length": 371, "alphanum_fraction": 0.5704018295, "converted": true, "num_tokens": 2003, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9252299632771662, "lm_q2_score": 0.9111796979521252, "lm_q1q2_score": 0.8430507584751442}} {"text": "# Predefined Metrics in Symbolic Module\n\n### Importing some of the predefined tensors. 
All the metrics are comprehensively listed in EinsteinPy documentation.\n\n\n```python\nfrom einsteinpy.symbolic.predefined import Schwarzschild, DeSitter, AntiDeSitter, Minkowski, find\nfrom einsteinpy.symbolic import RicciTensor, RicciScalar\nimport sympy\nfrom sympy import simplify\n\nsympy.init_printing() # for pretty printing\n```\n\n### Printing the metrics for visualization\nAll the functions return instances of :py:class:`~einsteinpy.symbolic.metric.MetricTensor`\n\n\n```python\nsch = Schwarzschild()\nsch.tensor()\n```\n\n\n```python\nMinkowski(c=1).tensor()\n```\n\n\n```python\nDeSitter().tensor()\n```\n\n\n```python\nAntiDeSitter().tensor()\n```\n\n### Calculating the scalar (Ricci) curavtures\nThey should be constant for De-Sitter and Anti-De-Sitter spacetimes.\n\n\n```python\nscalar_curvature_de_sitter = RicciScalar.from_metric(DeSitter())\nscalar_curvature_anti_de_sitter = RicciScalar.from_metric(AntiDeSitter())\n```\n\n\n```python\nscalar_curvature_de_sitter.expr\n```\n\n\n```python\nscalar_curvature_anti_de_sitter.expr\n```\n\nOn simplifying the expression we got above, we indeed obtain a constant\n\n\n```python\nsimplify(scalar_curvature_anti_de_sitter.expr)\n```\n\n### Searching for a predefined metric\nfind function returns a list of available functions\n\n\n```python\nfind(\"sitter\")\n```\n\n\n\n\n ['AntiDeSitter', 'AntiDeSitterStatic', 'DeSitter']\n\n\n", "meta": {"hexsha": "ea5595d2f910cb816d43b0615ef57b0816fab690", "size": 49689, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "EinsteinPy/Predefined Metrics in Symbolic Module.ipynb", "max_stars_repo_name": "IsaacW4/Advanced-GR", "max_stars_repo_head_hexsha": "0351c368321b1a2375e2d328347f79b513be4c08", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "EinsteinPy/Predefined Metrics in Symbolic Module.ipynb", "max_issues_repo_name": "IsaacW4/Advanced-GR", "max_issues_repo_head_hexsha": "0351c368321b1a2375e2d328347f79b513be4c08", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "EinsteinPy/Predefined Metrics in Symbolic Module.ipynb", "max_forks_repo_name": "IsaacW4/Advanced-GR", "max_forks_repo_head_hexsha": "0351c368321b1a2375e2d328347f79b513be4c08", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 144.0260869565, "max_line_length": 12425, "alphanum_fraction": 0.8350137857, "converted": true, "num_tokens": 360, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9304582593509315, "lm_q2_score": 0.9059898267292559, "lm_q1q2_score": 0.8429857171681555}} {"text": "

# Math 267 Project #3\n\n---\n\n## Solution\n\n---\n\n# Spring mass system.\n\n\nThe second order equation of an unforced spring/mass system with mass m, damping constant b and spring constant k is given by:

\n$$ mx''(t) + bx'(t) +kx(t) = 0. $$\n \n \nRecall that $x(t)$ represents the displacement of the mass from its resting position ( with x>0 the spring is stretched and x<0 the spring is compressed). Therefore the velocity of the moving mass is $ x'(t)$ and the acceleration is $x\u2019\u2019(t)$. To approximate the solution for this system using Euler\u2019s method we convert the equation into a system by introducing the variable\n \n $$ y = x\u2019.$$\n \nSo our two unknowns are: $x$ for the position of the mass and $y$ for the velocity of the mass. As a system the second order equation becomes ( using m = 1)\n \n\\begin{align}\nx'(t) &= \\; y(t) \\\\\ny'(t) &= -k \\cdot x(t) - b \\cdot y(t)\n\\end{align} \n\n \nand unlike the Predator-Prey model of exercise 2. this system is linear and can be represented in matrix notation:\n \n \n\\begin{align}\n \\begin{pmatrix}\n x'(t) \\\\\n y'(t) \n \\end{pmatrix}= \n \\begin{pmatrix}\n 0 & 1 \\\\\n -k & -b \n \\end{pmatrix}\\cdot\n \\begin{pmatrix}\n x(t) \\\\\n y(t) \n \\end{pmatrix}\n \\end{align}\n \n \n ---\n\n### Collaboration. Students are allowed to work together on this project. Colaboration is encouraged. \n However you final submission must be your own work.\n \n\n\n### Run the cell below to import the necessary libraries.\n\n\n```python\n# import libraries\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom IPython.display import Image\n\n# uncomment the line below if you are running a macbook\n%config InlineBackend.figure_format ='retina'\n```\n\n## Exericse 1.\n\nFor this exercise you will approximate the solution of an unforced mass-spring system with\nmass m =1, damping constant b = 1 and spring constant k = 2. The second order equation for this system is given by:\n\n$$x''(t) + 1x'(t) + 2x(t) = 0.$$\n\nUse the initial conditions $x(0) = 0$ and $ x'(0)=y(0) = 1$. Use the improved Euler\u2019s method to solve the system. You will need to convert the second order equation to a system as demonstrated above. Choose a \u0394t=h of 0.1 and a time interval of 12 seconds. Create two plots. One showing x(t) versus time and the second showing the solution curve \"trajectory\" for the system in the xy (phase) plane.\n\n\n\n## Complete the code cell below to solve the sytstem.\n\n\n```python\n# define the slope functions\n\ndef f(x,y):\n return y\n\ndef g(x,y):\n return -1*y -2*x\n\n\nh = 0.1 # set delta t\nt = np.arange(0,12,h)\n\n# Initialeze arrays to store the results.\n\nx = np.zeros_like(t)\ny = np.zeros_like(t)\n\n# Set initial conditions:\n\nx[0] = 0\ny[0] = 1\n\n# implement Euler's method\n\nfor i in range (len(t)-1):\n \n F1 = f(x[i],y[i])\n G1 = g(x[i],y[i])\n \n F2 = f(x[i]+F1*h,y[i]+G1*h)\n G2 = g(x[i]+F1*h,y[i]+G1*h)\n \n slope1 = (F1+F2)/2\n slope2 = (G1+G2)/2\n \n x[i+1] = x[i] + slope1 * h\n y[i+1] = y[i] + slope2 * h\n \n \n\n```\n\n### Execute the cell below to graph your results.\nYou should see a damped sinusoid.\n\n\n```python\nplt.figure(figsize=(8,5))\nplt.plot(t,x)\nplt.grid()\nplt.xlabel('t')\nplt.ylabel('x');\nplt.title(\"Damping b = 1\");\n```\n\n### Execute the cell below to see the trajectory in the phase plane for this problem. \nYou should see a spiral.\n\n\n```python\nplt.figure(figsize=(8,5))\nplt.plot(x,y,linewidth=2)\nplt.xlim(-1.5,1.5)\nplt.ylim(-1.5,1.5)\nplt.grid()\nplt.xlabel('x-displacement')\nplt.ylabel('y-velocity');\nplt.title(\"Trajectory for spring mass system b = 1\");\n```\n\n### Repeat the steps above for b=3 and b=0. Copy and paste cells above and make the necessary modifications. 
You should show time plots and phase plane plots for b=0 and b=3.\n\n\n\n```python\n# b = 0\n\n# define the slope functions\n\ndef f(x,y):\n return y\n\ndef g(x,y):\n return -2*x\n\n\nh = 0.1 # set delta t\nt = np.arange(0,12,h)\n\n# Initialeze arrays to store the results.\n\nx = np.zeros_like(t)\ny = np.zeros_like(t)\n\n# Set initial conditions:\n\nx[0] = 0\ny[0] = 1\n\n# implement Euler's method\n\nfor i in range (len(t)-1):\n \n F1 = f(x[i],y[i])\n G1 = g(x[i],y[i])\n \n F2 = f(x[i]+F1*h,y[i]+G1*h)\n G2 = g(x[i]+F1*h,y[i]+G1*h)\n \n slope1 = (F1+F2)/2\n slope2 = (G1+G2)/2\n \n x[i+1] = x[i] + slope1 * h\n y[i+1] = y[i] + slope2 * h\n \n \n\n```\n\n\n```python\nplt.figure(figsize=(8,5))\nplt.plot(t,x)\nplt.grid()\nplt.xlabel('t')\nplt.ylabel('x');\nplt.xticks(np.arange(0,13,.5))\nplt.title(\"Damping b = 0\");\n```\n\n\n```python\nplt.figure(figsize=(8,5))\nplt.plot(x,y,linewidth=2)\nplt.xlim(-1.5,1.5)\nplt.ylim(-1.5,1.5)\nplt.grid()\nplt.xlabel('x-displacement')\nplt.ylabel('y-velocity');\nplt.title(\"Trajectory for spring mass system b = 0\");\n```\n\n\n```python\n# b=3\n\n# define the slope functions\n\n\ndef f(x,y):\n return y\n\ndef g(x,y):\n return -3*y -2*x\n\n\nh = 0.1 # set delta t\nt = np.arange(0,12,h)\n\n# Initialeze arrays to store the results.\n\nx = np.zeros_like(t)\ny = np.zeros_like(t)\n\n# Set initial conditions:\n\nx[0] = 0\ny[0] = 1\n\n# implement Euler's method\n\nfor i in range (len(t)-1):\n \n F1 = f(x[i],y[i])\n G1 = g(x[i],y[i])\n \n F2 = f(x[i]+F1*h,y[i]+G1*h)\n G2 = g(x[i]+F1*h,y[i]+G1*h)\n \n slope1 = (F1+F2)/2\n slope2 = (G1+G2)/2\n \n x[i+1] = x[i] + slope1 * h\n y[i+1] = y[i] + slope2 * h\n \n \n\n```\n\n\n```python\nplt.figure(figsize=(8,5))\nplt.plot(t,x)\nplt.grid()\nplt.xlabel('t')\nplt.ylabel('x');\nplt.title(\"Damping b = 3\");\n```\n\n\n```python\nplt.figure(figsize=(8,5))\nplt.plot(x,y,linewidth=2)\nplt.xlim(-1.5,1.5)\nplt.ylim(-1.5,1.5)\nplt.grid()\nplt.xlabel('x-displacement')\nplt.ylabel('y-velocity');\nplt.title(\"Trajectory for spring mass system b = 3\");\n```\n\n---\n# Exercise 2.\n\nFor this exercise you will approximate the solution of an undamped periodically forced spring-mass system with mass m =1 and spring constant k = 1, no damping. The second order equation for this system is given by:\n\n$$ x''(t) + x(t) = Cos(\\omega t).$$\n\nUse the initial conditions $x(0) = 0$ and $ x'(0) = y(0) = 0$. Use the improved Euler\u2019s method to solve the system. Note this system is not autonomous so the slope functions could depend on x,y and t. Generate plots of $x(t)$ versus time for \u03c9 = 1.1 and \u03c9 = 1. Use a time interval of 50 seconds for $ \\omega = 1 $ and 150 seconds for $ \\omega = 1.1$. For both cases set \u0394t=0.1. Duplicate and modify the code above to generate the plots. Label the plots appropriately. There are no phase plane plots for Exercise 2. \n\n## Solution for $ \\omega = 1. 
$\n\n\n```python\n# define the slope functions\n\ndef f(x,y):\n return y\n\ndef g(x,y,t):\n return -1*x + np.cos(t)\n\n\nh = 0.1 # set delta t\nt = np.arange(0,50,h)\n\n# Initialeze arrays to store the results.\n\nx = np.zeros_like(t)\ny = np.zeros_like(t)\n\n# Set initial conditions:\n\nx[0] = 0\ny[0] = 0\n\n# implement Euler's method\n\nfor i in range (len(t)-1):\n \n F1 = f(x[i],y[i])\n G1 = g(x[i],y[i],t[i])\n \n F2 = f(x[i]+F1*h,y[i]+G1*h)\n G2 = g(x[i]+F1*h,y[i]+G1*h,t[i+1])\n \n slope1 = (F1+F2)/2\n slope2 = (G1+G2)/2\n \n x[i+1] = x[i] + slope1 * h\n y[i+1] = y[i] + slope2 * h\n \n \n\n```\n\n\n```python\nplt.figure(figsize=(8,5))\nplt.plot(t,x)\nplt.grid()\nplt.xlabel('t')\nplt.ylabel('x');\nplt.title(\"Forced Spring-Mass system no damping $\\omega = 1.$\");\n```\n\n## Solution for $ \\omega = 1.1 $\n\n\n```python\n# define the slope functions\n\ndef f(x,y):\n return y\n\ndef g(x,y,t):\n return -1*x + np.cos(1.1*t)\n\n\nh = 0.1 # set delta t\nt = np.arange(0,150,h)\n\n# Initialeze arrays to store the results.\n\nx = np.zeros_like(t)\ny = np.zeros_like(t)\n\n# Set initial conditions:\n\nx[0] = 0\ny[0] = 0\n\n# implement Euler's method\n\nfor i in range (len(t)-1):\n \n F1 = f(x[i],y[i])\n G1 = g(x[i],y[i],t[i])\n \n F2 = f(x[i]+F1*h,y[i]+G1*h)\n G2 = g(x[i]+F1*h,y[i]+G1*h,t[i+1])\n \n slope1 = (F1+F2)/2\n slope2 = (G1+G2)/2\n \n x[i+1] = x[i] + slope1 * h\n y[i+1] = y[i] + slope2 * h\n \n \n\n```\n\n\n```python\nplt.figure(figsize=(12,5))\nplt.plot(t,x)\nplt.grid()\nplt.xlabel('t')\nplt.ylabel('x');\nplt.title(\"Forced Srping-Mass system no damping $\\omega = 1.1$\");\n```\n\n## Answer the questions below. Create a text cell to enter your answers.\n1. Expain and discuss the results of exercise 1. \n\n For exercise \\#1 we see the spring behaviour under three damping conditions.

\n b = 0 ~ we see no damping and the spring oscillates.

\n b = 1 ~ we see an underdamped behaviour and the oscillation decays.

\n b = 3 ~ we see an overdamped behaviour and the spring returns to equilibrium with no oscillations.

\n \n2. Compute the period of oscillation for exerice 1. b=0. Compare to value obtained from the graph.\n \n The period is 2*pi/omega = sqrt(2)*pi = 4.44. This is close to the value from the graph.\n \n \n3. Discuss the results of exercise 2. Hint: look at the envelopes.\n \n For omega = 1, we are exciting the system at the resonant frequency and thus see RESONANCE, that is the osicllation are increasing. \n \n For omega = 1.1, the exciting frequencey is \"close\" to the resonant frequency and we observe the BEATING phenomenon.\n\n\n```python\n\n```\n", "meta": {"hexsha": "de4219a648b815a085c85658ea1389dc5c56cdc5", "size": 528838, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Project#3_solution.ipynb", "max_stars_repo_name": "rmartin977/math---267-Spring-2022", "max_stars_repo_head_hexsha": "828fce843795318fb1ec32e4dd073b67861e06cf", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Project#3_solution.ipynb", "max_issues_repo_name": "rmartin977/math---267-Spring-2022", "max_issues_repo_head_hexsha": "828fce843795318fb1ec32e4dd073b67861e06cf", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Project#3_solution.ipynb", "max_forks_repo_name": "rmartin977/math---267-Spring-2022", "max_forks_repo_head_hexsha": "828fce843795318fb1ec32e4dd073b67861e06cf", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 721.4706684857, "max_line_length": 156564, "alphanum_fraction": 0.9466396136, "converted": true, "num_tokens": 2955, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9304582516374121, "lm_q2_score": 0.9059898203834277, "lm_q1q2_score": 0.8429857042752572}} {"text": "# Modeling and Simulation in Python\n\nSymPy code for Chapter 16\n\nCopyright 2017 Allen Downey\n\nLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)\n\n\n### Mixing liquids\n\nWe can figure out the final temperature of a mixture by setting the total heat flow to zero and then solving for $T$.\n\n\n```python\nfrom sympy import *\n\ninit_printing() \n```\n\n\n```python\nC1, C2, T1, T2, T = symbols('C1 C2 T1 T2 T')\n\neq = Eq(C1 * (T - T1) + C2 * (T - T2), 0)\neq\n```\n\n\n```python\nsolve(eq, T)\n```\n\n### Analysis\n\nWe can use SymPy to solve the cooling differential equation.\n\n\n```python\nT_init, T_env, r, t = symbols('T_init T_env r t')\nT = Function('T')\n\neqn = Eq(diff(T(t), t), -r * (T(t) - T_env))\neqn\n```\n\nHere's the general solution:\n\n\n```python\nsolution_eq = dsolve(eqn)\nsolution_eq\n```\n\n\n```python\ngeneral = solution_eq.rhs\ngeneral\n```\n\nWe can use the initial condition to solve for $C_1$. 
First we evaluate the general solution at $t=0$\n\n\n```python\nat0 = general.subs(t, 0)\nat0\n```\n\nNow we set $T(0) = T_{init}$ and solve for $C_1$\n\n\n```python\nsolutions = solve(Eq(at0, T_init), C1)\nvalue_of_C1 = solutions[0]\nvalue_of_C1\n```\n\nThen we plug the result into the general solution to get the particular solution:\n\n\n```python\nparticular = general.subs(C1, value_of_C1)\nparticular\n```\n\nWe use a similar process to estimate $r$ based on the observation $T(t_{end}) = T_{end}$\n\n\n```python\nt_end, T_end = symbols('t_end T_end')\n```\n\nHere's the particular solution evaluated at $t_{end}$\n\n\n```python\nat_end = particular.subs(t, t_end)\nat_end\n```\n\nNow we set $T(t_{end}) = T_{end}$ and solve for $r$\n\n\n```python\nsolutions = solve(Eq(at_end, T_end), r)\nvalue_of_r = solutions[0]\nvalue_of_r\n```\n\nWe can use `evalf` to plug in numbers for the symbols. The result is a SymPy float, which we have to convert to a Python float.\n\n\n```python\nsubs = dict(t_end=30, T_end=70, T_init=90, T_env=22)\nr_coffee2 = value_of_r.evalf(subs=subs)\ntype(r_coffee2)\n```\n\n\n\n\n sympy.core.numbers.Float\n\n\n\n\n```python\nr_coffee2 = float(r_coffee2)\nr_coffee2\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "c8323988806652f8c00ff5b4397f305d34f76e64", "size": 36052, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/chap16sympy.ipynb", "max_stars_repo_name": "kanhaiyap/ModSimPy", "max_stars_repo_head_hexsha": "af16c079ec398ff9b3822d3dcda75873ce900ced", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2019-04-27T22:43:12.000Z", "max_stars_repo_stars_event_max_datetime": "2019-11-11T15:12:23.000Z", "max_issues_repo_path": "notebooks/chap16sympy.ipynb", "max_issues_repo_name": "ffriass/ModSimPy", "max_issues_repo_head_hexsha": "c36a476a20042acb33773e47d12aea5b0c413e60", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 33, "max_issues_repo_issues_event_min_datetime": "2019-10-09T18:50:22.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-21T01:39:48.000Z", "max_forks_repo_path": "notebooks/chap16sympy.ipynb", "max_forks_repo_name": "ffriass/ModSimPy", "max_forks_repo_head_hexsha": "c36a476a20042acb33773e47d12aea5b0c413e60", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 77.5311827957, "max_line_length": 3956, "alphanum_fraction": 0.8330189726, "converted": true, "num_tokens": 649, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9263037363973294, "lm_q2_score": 0.9099070115349838, "lm_q1q2_score": 0.8428502645589834}} {"text": "```python\n%matplotlib inline\nfrom IPython.display import Markdown, display\nimport numpy as np\nimport matplotlib.dates as mdates\nimport matplotlib.pyplot as plt\nimport time\nfrom scipy.stats import norm\nfrom sympy import Symbol, symbols, Matrix, sin, cos, integrate, diff, Q, refine, simplify, factor, expand_trig, exp, latex, Integral\nfrom sympy import init_printing\nfrom sympy.utilities.codegen import codegen\ninit_printing(use_latex=True)\n```\n\n# Motion models\nIn tracking we can use different motion models. 
Here, we will derive a couple of those:\n\n## CATR - constant acceleration and turn rate model\nThis model assumes the state $[x, y, \\theta, v, \\omega, a]^\\top$, where $x$ and $y$ are coordinates of the object, $\\theta$ is it CCW orientation, $v$ is its speed along the vector spawned by $\\theta$, $\\omega$ is the angular velocity (aka turn rate), and $a$ is the acceleration along the vector spawned by $\\theta$. Furthermore, we assume that the object has constant acceleration $a$ and constant turn rate $\\omega$. This is a useful model to model differential-drive systems.\n\n### Derivation:\n\n\n```python\nspeed, yaw, dt, x, y, acceleration, t, d_sin, d_cos = symbols('v \\\\theta \\Delta{t} x y a t d_{sin} d_{cos}')\nturn_rate_non_zero = Symbol('\\hat{\\omega}', nonzero=True, finite=True)\nzero_turn_rate = Symbol('\\omega_0', zero=True)\nturn_rate = Symbol('\\omega')\n\nstate_catr = Matrix([x, y, yaw, speed, turn_rate, acceleration])\nprint(\"CART state is:\")\ndisplay(state_catr)\n\ndef get_next_state(turn_rate):\n # Specify the functions for yaw, speed, x, and y.\n yaw_func = yaw + turn_rate * t\n speed_func = speed + acceleration * t\n x_speed_func = speed_func * cos(yaw_func)\n y_speed_func = speed_func * sin(yaw_func)\n\n # Get next state by integrating the functions.\n next_speed = speed + integrate(acceleration, (t, 0, dt))\n next_yaw = yaw + integrate(turn_rate, (t, 0, dt))\n next_x = x + integrate(x_speed_func, (t, 0, dt))\n next_y = y + integrate(y_speed_func, (t, 0, dt))\n\n return Matrix([next_x, next_y, next_yaw, next_speed, turn_rate, acceleration])\n\n# There is a difference in computation betwee the cases when the turn rate is allowed to be zero or not\nprint(\"Assuming a non-zero turn rate, the next state is:\")\nnext_state = get_next_state(turn_rate_non_zero)\ndisplay(next_state)\n\nsubstitutes = {x:42, y:23, yaw:0.5, speed:2, acceleration:2, dt:0.1, turn_rate_non_zero:2, zero_turn_rate:0}\ndisplay('Plugging in the numbers:', substitutes)\ndisplay(next_state.evalf(subs=substitutes))\n\nstate = Matrix([x,y,yaw,speed,turn_rate_non_zero,acceleration])\nprint(\"Jacobian of the next state with respect to the previous state:\")\nJ = next_state.jacobian(state)\ndisplay(J)\ndisplay('Plugging in the numbers:', substitutes)\ndisplay(J.evalf(subs=substitutes))\n\nprint(\"Assuming a zero turn rate, state is:\")\nnext_state = get_next_state(zero_turn_rate)\ndisplay(next_state)\ndisplay('Plugging in the numbers:', substitutes)\ndisplay(next_state.evalf(subs=substitutes))\n\nstate = Matrix([x,y,yaw,speed,zero_turn_rate,acceleration])\nprint(\"Jacobian of the next state with respect to the previous state with 0 turn rate:\")\nJ = next_state.jacobian(state)\ndisplay(J)\n\ndisplay('Plugging in the numbers:', substitutes)\ndisplay(J.evalf(subs=substitutes))\n\n```\n\n## CVTR - constant velocity and turn rate model\nThis model assumes the state $[x, y, \\theta, v, \\omega]^\\top$, where $x$ and $y$ are coordinates of the object, $\\theta$ is it CCW orientation, $v$ is its speed along the vector spawned by $\\theta$, $\\omega$ is the angular velocity (aka turn rate). Furthermore, we assume that the object has constant speed $v$ and constant turn rate $\\omega$. This is also a useful model to model differential-drive systems. 
It is a bit simpler than the CATR one.\n\n### Derivation:\n\n\n```python\nspeed, yaw, dt, x, y, t = symbols('v \\\\theta \\Delta{t} x y t')\nturn_rate_non_zero = Symbol('\\omega', nonzero=True, finite=True)\nzero_turn_rate = Symbol('\\omega_0', zero=True)\n\nstate_cvtr = Matrix([x, y, yaw, speed, turn_rate_non_zero])\nprint(\"CVRT state is:\")\ndisplay(state_cvtr)\n\ndef get_next_state(turn_rate):\n # Specify the functions for yaw, x, and y.\n yaw_func = yaw + turn_rate * t\n x_speed_func = speed * cos(yaw_func)\n y_speed_func = speed * sin(yaw_func)\n\n # Get next state by integrating the functions.\n next_yaw = yaw + integrate(turn_rate, (t, 0, dt))\n next_x = x + integrate(x_speed_func, (t, 0, dt))\n next_y = y + integrate(y_speed_func, (t, 0, dt))\n\n return Matrix([next_x, next_y, next_yaw, speed, turn_rate])\n\n# There is a difference in computation betwee the cases when the turn rate is allowed to be zero or not\nprint(\"Assuming a non-zero turn rate, next state is:\")\nnext_state = get_next_state(turn_rate_non_zero)\ndisplay(next_state)\nsubstitutes = {x:42, y:23, yaw:0.5, speed:2, dt:0.1, turn_rate_non_zero:2, zero_turn_rate:0}\ndisplay(Markdown('Plugging in the numbers:'), substitutes)\ndisplay(next_state.evalf(subs=substitutes))\n\nstate = Matrix([x,y,yaw,speed,turn_rate_non_zero])\nprint(\"Jacobian of the next state with respect to the previous state:\")\nJ = next_state.jacobian(state)\ndisplay(J)\n\ndisplay(Markdown('Plugging in the numbers:'), substitutes)\ndisplay(J.evalf(subs=substitutes))\n\nprint(\"Assuming a zero turn rate, next state is:\")\nnext_state = get_next_state(zero_turn_rate)\ndisplay(next_state)\ndisplay(Markdown('Plugging in the numbers:'), substitutes)\ndisplay(next_state.evalf(subs=substitutes))\n\nstate = Matrix([x,y,yaw,speed,zero_turn_rate])\nprint(\"Jacobian of the next state with respect to the previous one with zero turn rate:\")\nJ = next_state.jacobian(state)\ndisplay(J)\n\ndisplay(Markdown('Plugging in the numbers:'), substitutes)\ndisplay(J.evalf(subs=substitutes))\n```\n", "meta": {"hexsha": "a9d1ecd12e2c9678ce358bcfe4e5c82c98c1ef0b", "size": 103419, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "src/common/motion_model/notebooks/motion_model.ipynb", "max_stars_repo_name": "ruvus/auto", "max_stars_repo_head_hexsha": "25ae62d6e575cae40212356eed43ec3e76e9a13e", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 19, "max_stars_repo_stars_event_min_datetime": "2021-05-28T06:14:21.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-10T10:03:08.000Z", "max_issues_repo_path": "src/common/motion_model/notebooks/motion_model.ipynb", "max_issues_repo_name": "ruvus/auto", "max_issues_repo_head_hexsha": "25ae62d6e575cae40212356eed43ec3e76e9a13e", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 222, "max_issues_repo_issues_event_min_datetime": "2021-10-29T22:00:27.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-29T20:56:34.000Z", "max_forks_repo_path": "src/common/motion_model/notebooks/motion_model.ipynb", "max_forks_repo_name": "ruvus/auto", "max_forks_repo_head_hexsha": "25ae62d6e575cae40212356eed43ec3e76e9a13e", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 14, "max_forks_repo_forks_event_min_datetime": "2021-05-29T14:59:17.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-10T10:03:09.000Z", "avg_line_length": 86.9067226891, "max_line_length": 5016, "alphanum_fraction": 0.5266923873, "converted": true, "num_tokens": 1524, "lm_name": "Qwen/Qwen-72B", "lm_label": 
"1. YES\n2. YES", "lm_q1_score": 0.9449947163538935, "lm_q2_score": 0.8918110461567922, "lm_q1q2_score": 0.8427567266042069}} {"text": "# Ejercicios de canales de comunicaci\u00f3n\n\n## Ejercicio 1\n\nLa fuente de entrada a un canal de comunicaci\u00f3n ruidoso es una variable aleatoria $X$ que contiene los s\u00edmbolos $\\{a, b, c, d\\}$. La salida de este canal es una variable aleatoria $Y$ sobre estos mismos cuatro s\u00edmbolos. La distribuci\u00f3n conjunta de estas dos variables aleatorias es la siguiente:\n\n\n\nPara hacer m\u00e1s sencillos los c\u00e1lculos, se le aplicar\u00e1 una transposici\u00f3n a la matriz para que las distribuciones de $X$ queden como filas y las de $Y$ como columnas:\n\n\n```python\nsymbols = ('a', 'b', 'c', 'd')\n\nimport numpy as np\n\nmatrix = np.array([\n [1/8, 1/16, 1/16, 1/4],\n [1/16, 1/8, 1/16, 0],\n [1/32, 1/32, 1/16, 0],\n [1/32, 1/32, 1/16, 0] \n])\njoint_distribution = matrix.transpose()\njoint_distribution\n```\n\n\n\n\n array([[0.125 , 0.0625 , 0.03125, 0.03125],\n [0.0625 , 0.125 , 0.03125, 0.03125],\n [0.0625 , 0.0625 , 0.0625 , 0.0625 ],\n [0.25 , 0. , 0. , 0. ]])\n\n\n\nPara calcular la informaci\u00f3n mutua (en bits) de ambas variables aleatorias usaremos la siguiente f\u00f3rmula, la cual toma en cuenta las distribuciones marginales de cada variable aleatoria y las conjuntas de ambas:\n\n$$\nI(X;Y) = \\sum_{x \\in X}{ \\sum_{y \\in Y}{\n p(x,y) \\log_{2}{ \\frac{ p(x,y) }{ p(x)p(y) }}\n}}\n$$\n\nLa distribuci\u00f3n marginal de $X$, $p(X)$ es:\n\n\n```python\ndef get_x_marginal(joint_distribution):\n # sumar toda la fila\n return [sum(x) for x in joint_distribution]\n\nx_marginal = get_x_marginal(joint_distribution)\nf'p(X) = {x_marginal}'\n```\n\n\n\n\n 'p(X) = [0.25, 0.25, 0.25, 0.25]'\n\n\n\nLa distribuci\u00f3n marginal de $Y$, $p(Y)$ es:\n\n\n```python\ndef get_y_marginal(joint_distribution):\n # sumar cada columna\n return [sum(x[y] for x in joint_distribution) for y in range(len(joint_distribution))]\n\ny_marginal = get_y_marginal(joint_distribution)\nf'p(Y) = {y_marginal}'\n```\n\n\n\n\n 'p(Y) = [0.5, 0.25, 0.125, 0.125]'\n\n\n\nAhora pasaremos a codificar una funci\u00f3n que nos permita calcular la informaci\u00f3n mutua a partir de las distribuciones marginales:\n\n\n```python\nfrom math import log\nfrom itertools import chain, cycle\n\ndef get_mutual_information(joint_distribution, x_marginal, y_marginal, base=2):\n return sum(\n p_xy * log(p_xy / (p_x * p_y), base)\n for p_xy, p_x, p_y in zip(\n # aplanar matriz para iterar sobre cada valor\n chain.from_iterable(joint_distribution),\n # repetir marginales porque no tienen el mismo tama\u00f1o que la matriz aplanada\n cycle(x_marginal),\n cycle(y_marginal)\n )\n if p_xy > 0\n )\n\nmutual_info = get_mutual_information(joint_distribution, x_marginal, y_marginal)\nf'I(X;Y) = {mutual_info} bits'\n```\n\n\n\n\n 'I(X;Y) = 0.375 bits'\n\n\n\n## Ejercicio 2\n\nVamos a considerar un canal de comunicaci\u00f3n binario sim\u00e9trico, cuya fuente de entrada es el alfabeto $X = \\{0, 1\\}$ con probabilidades $\\{0.5, 0.5\\}$, cuyo alfabeto de salida es $Y = \\{0, 1\\}$, y cuya matriz de canales es:\n\n\\begin{equation}\n \\begin{pmatrix}\n 1 - \\epsilon && \\epsilon \\\\\n \\epsilon && 1 - \\epsilon\n \\end{pmatrix}\n\\end{equation}\n\ndonde $\\epsilon$ es la probabilidad de transmisi\u00f3n de error. 
Para este caso vamos a suponer que $\\epsilon = 0.35$, por lo que la matriz del canal es:\n\n\n```python\nerror = 0.35\nchannel_matrix = [\n [1 - error, error],\n [error, 1 - error]\n]\nnp.array(channel_matrix)\n```\n\n\n\n\n array([[0.65, 0.35],\n [0.35, 0.65]])\n\n\n\nY la distribuci\u00f3n conjunta, $p(X,Y)$:\n\n\n```python\nx_probabilities = (0.5, 0.5)\njoint_distribution = [\n [p_x * col for p_x, col in zip(x_probabilities, row)]\n for row in channel_matrix\n]\nnp.array(joint_distribution)\n```\n\n\n\n\n array([[0.325, 0.175],\n [0.175, 0.325]])\n\n\n\nFinalmente podemos calcular la informaci\u00f3n mutua del canal, ya que usaremos la f\u00f3rmula del ejercicio anterior y ya contamos con la distribuci\u00f3n conjunta. Debido a que nosotros calculamos $p(X,Y)$, las distribuciones marginales para este caso son las probabilidades dadas por el problema, es decir $\\{0.5, 0.5\\}$:\n\n\n```python\ni = get_mutual_information(joint_distribution, x_probabilities, x_probabilities)\nf'I(X;Y) = {i} bits'\n```\n\n\n\n\n 'I(X;Y) = 0.06593194462450899 bits'\n\n\n", "meta": {"hexsha": "704ef8ad64e8903462ae034dc14aed3471747b91", "size": 8264, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "canales/ejercicios.ipynb", "max_stars_repo_name": "netotz/teoria-informacion", "max_stars_repo_head_hexsha": "03faf56d176f05753fb9e9707eedcc54f46dd9d7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-01-19T04:05:03.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-19T04:05:03.000Z", "max_issues_repo_path": "canales/ejercicios.ipynb", "max_issues_repo_name": "netotz/teoria-informacion", "max_issues_repo_head_hexsha": "03faf56d176f05753fb9e9707eedcc54f46dd9d7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2022-01-19T04:05:34.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-28T23:56:49.000Z", "max_forks_repo_path": "canales/ejercicios.ipynb", "max_forks_repo_name": "netotz/teoria-informacion", "max_forks_repo_head_hexsha": "03faf56d176f05753fb9e9707eedcc54f46dd9d7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 25.4276923077, "max_line_length": 320, "alphanum_fraction": 0.5204501452, "converted": true, "num_tokens": 1366, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.934395157060208, "lm_q2_score": 0.9019206686206199, "lm_q1q2_score": 0.842750304811612}} {"text": "I'm a little confused by lognormal distributions.\n\nLet's say I have a parameter that is distributed as:\n\n\\begin{equation}\nV \\sim \\mathcal{N}(5, 2) \\, ,\n\\end{equation}\n\nhow would I represent this in a lognormal distribution so it can't fall below zero? A lognormal distribution is defined as:\n\n\\begin{equation}\n\\ln(V) \\sim \\ln(\\mathcal{N}(5, 2)) \\, ,\n\\end{equation}\n\n> In probability theory, a lognormal distribution is a ... probability distribution o a random variable who's *logarithm is normally distributed*.\n\n> Thus if $X$ is log-normally distributed, then $Y - \\ln(X)$ is normally distributed.\n\n\n```python\nimport numpy as np\nimport seaborn as sns\nimport pylab as plt\n```\n\n\n```python\nnpts = 5000\nmu = 10\nsig = 2\nv = np.random.normal(mu, sig, npts)\n```\n\n\n```python\nsns.distplot(v)\nplt.xlabel('V')\nplt.axvline(mu)\n```\n\nThis extends below 0, which we don't want it to do. 
What does the logarithm look like?\n\n\n```python\nsns.distplot(np.log(v))\nplt.xlabel(r'$\\ln(V)$')\nplt.axvline(np.log(mu))\nplt.show()\n```\n\nIf we wanted a lognormal distribution to recapture this same distribution, should we be using\n\n\\begin{equation}\n\\ln(V) \\sim \\log\\mathcal{N}(\\ln(10), \\ln(2)) \\, ?\n\\end{equation}\n\n\n```python\nlnv = np.random.lognormal(np.log(mu), np.log(2), npts)\n```\n\n\n```python\nsns.distplot(lnv, label='LogNormal')\nsns.distplot(np.log(v), label='ln(V)')\nplt.legend()\n```\n\nOkay, these don't line up...\n\n\n```python\nsns.distplot(lnv, label='LogNormal')\nsns.distplot(v, label='V')\n```\n\nI guess this only truly works if the logarithm of V is normally distributed, but in this case we want it to be the other way round?\n\n\n```python\nimport mystyle as ms\nwith plt.style.context(ms.ms):\n sns.distplot(v)\n sns.distplot(lnv)\n plt.axvline(np.median(v), c='r', label='Median V')\n plt.axvline(np.median(lnv), c='g', label='Median LnV')\n plt.axvline(np.mean(lnv), c='b', label='Mean LnV')\n plt.legend()\n```\n\nIn both cases the median values intersect at $\\mu$, but with $V$ distributed lognormally, it can not fall below zero.\n\nLet's try a more realistic example.\n\n\n```python\nnpts = 10000\nmu = 0.7\nsigma = .1\nv = np.random.normal(mu, sigma, npts)\nlnv = np.random.lognormal(np.log(mu), sigma, npts)\n```\n\n\n```python\nwith plt.style.context(ms.ms):\n sns.distplot(v, label='Normally distributed')\n sns.distplot(lnv, label='Lognormally distributed')\n plt.legend()\nprint(f'Median of Normal: {np.median(v)}')\nprint(f'Median of LogNormal: {np.median(lnv)}')\n```\n\nThis example indicates that $\\sigma$ should remain as is.\n\nThe equations seem to agree. Let's check our previous example:\n\n\n```python\nnpts = 10000\nmu = 10.\nsigma = 2.\nv = np.random.normal(mu, sigma, npts)\nlnv = np.random.lognormal(np.log(mu), sigma, npts)\n```\n\n\n```python\nwith plt.style.context(ms.ms):\n sns.distplot(v, label='Normally distributed')\n sns.distplot(lnv, label='Lognormally distributed')\n plt.legend()\nprint(f'Median of Normal: {np.median(v)}')\nprint(f'Median of LogNormal: {np.median(lnv)}')\n```\n\nYep, the plots are unsightly but otherwise okay.\n\nConclusions: our parameters that can't go below zero should be distributed as \n\n\\begin{equation}\nV \\sim \\rm{LogNormal}(\\ln(\\mu), \\sigma)\n\\end{equation}\n", "meta": {"hexsha": "deec48dbf4ae8283172fed25d3837d85ab232643", "size": 153054, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "code/tests/examples/investigate_lognormal.ipynb", "max_stars_repo_name": "ojhall94/malatium", "max_stars_repo_head_hexsha": "156e44b9ab386eebf1d4aa05254e1f3a7b255d98", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-04-25T07:45:20.000Z", "max_stars_repo_stars_event_max_datetime": "2021-04-25T07:45:20.000Z", "max_issues_repo_path": "code/tests/examples/investigate_lognormal.ipynb", "max_issues_repo_name": "ojhall94/malatium", "max_issues_repo_head_hexsha": "156e44b9ab386eebf1d4aa05254e1f3a7b255d98", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "code/tests/examples/investigate_lognormal.ipynb", "max_forks_repo_name": "ojhall94/malatium", "max_forks_repo_head_hexsha": "156e44b9ab386eebf1d4aa05254e1f3a7b255d98", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, 
"max_forks_repo_forks_event_min_datetime": "2022-02-19T09:38:35.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-19T09:38:35.000Z", "avg_line_length": 378.8465346535, "max_line_length": 35444, "alphanum_fraction": 0.939766357, "converted": true, "num_tokens": 918, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9433475699138558, "lm_q2_score": 0.8933093968230773, "lm_q1q2_score": 0.8427012486742623}} {"text": "#

N\u00fameros complejos
\n\n\n```python\nfrom sympy import *\ninit_printing()\nx = symbols('x')\n```\n\nPython can work with complex numbers natively. To build a complex number you can use the **complex** function, or write the number in binomial form, keeping in mind that the imaginary unit is written as **j** in plain Python. However, we are going to work with the Sympy library, and there the imaginary unit is denoted by the capital letter **I**. \n\n### Compute the square root of -1 and solve the equation $x^2+1=0$.
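\n\nA possible solution sketch is given below as a hint; the blank cells that follow are still there for your own attempt. It only uses the standard Sympy functions `sqrt` and `solveset`.\n\n\n```python\nfrom sympy import *\ninit_printing()\nx = symbols('x')\n\n# the square root of -1 is the imaginary unit I\nsqrt(-1)\n```\n\n\n```python\n# solve x**2 + 1 = 0; the two solutions are -I and I\nsolveset(x**2 + 1, x)\n```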
\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n### Given the complex numbers $z=3+5i$ and $w=3-2i$, compute their sum, difference, product, quotient, inverse and powers.
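\n\nA minimal sketch of one way to carry out the arithmetic (not the only one); `expand` and `simplify` are used so that the product, the quotient and the inverse come back in binomial form.\n\n\n```python\nfrom sympy import *\ninit_printing()\n\nz = 3 + 5*I\nw = 3 - 2*I\n\n# sum, difference and an expanded product\nz + w, z - w, expand(z*w)\n```\n\n\n```python\n# quotient, inverse and a power; simplify rationalises the denominators\nsimplify(z/w), simplify(1/w), expand(w**2)\n```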
\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n\nTo compute the modulus of a complex number we use **abs**, to obtain the argument (in radians) **arg**, for the conjugate **conj**, and for the real and imaginary parts **re** and **im**. With the modulus and the argument we can then write the number in polar form.\n\n### Compute the modulus, the argument, the conjugate, and the real and imaginary parts of $w$.
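\n\nA sketch using the functions just described; here Sympy's `conjugate` plays the role of **conj**.\n\n\n```python\nfrom sympy import *\ninit_printing()\n\nw = 3 - 2*I\n\n# modulus, argument (in radians), conjugate, real part and imaginary part\nabs(w), arg(w), conjugate(w), re(w), im(w)\n```\n\nWith `abs(w)` and `arg(w)` the polar form of $w$ follows immediately.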
\n\n\n```python\n\n```\n\n\n```python\n\n```\n\nIf a complex number $z$ has modulus $m$ and argument $\\theta$ (in radians), then, **according to Euler's formula**, the number can be written as $z = m\\cdot exp(i\\theta)$. It can also be written as $z= m\\cos(\\theta) + i m\\sin(\\theta)$.\n\n### Using Euler's formula, build the complex number $5_{30^0}$ and compute its conjugate, real part, imaginary part and modulus.
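\n\nOne possible construction, assuming the notation $5_{30^0}$ means modulus 5 and argument 30 degrees (i.e. $\\pi/6$ radians).\n\n\n```python\nfrom sympy import *\ninit_printing()\n\n# modulus 5 and argument pi/6 (30 degrees), built with Euler's formula\nz = 5*exp(I*pi/6)\n\nconjugate(z), simplify(re(z)), simplify(im(z)), abs(z)\n```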
\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n### Compute various roots of the number $3+5i$, checking the result.
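\n\nA sketch using `root`; the check simply raises each root back to the corresponding power (use `evalf` if you want numerical values).\n\n\n```python\nfrom sympy import *\ninit_printing()\n\nz = 3 + 5*I\n\n# a square root and a cube root of z\nr2 = root(z, 2)\nr3 = root(z, 3)\n\n# check: raising each root to its power recovers z\nsimplify(r2**2), simplify(r3**3)\n```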
\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "107d5bc398ec52d2dca5afd35b52d592d03959d7", "size": 4732, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/08.- Numeros complejos.ipynb", "max_stars_repo_name": "disoftw/python-for-maths", "max_stars_repo_head_hexsha": "39d80cc7fe2584a0b5de01151bda361260e0ab1a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notebooks/08.- Numeros complejos.ipynb", "max_issues_repo_name": "disoftw/python-for-maths", "max_issues_repo_head_hexsha": "39d80cc7fe2584a0b5de01151bda361260e0ab1a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/08.- Numeros complejos.ipynb", "max_forks_repo_name": "disoftw/python-for-maths", "max_forks_repo_head_hexsha": "39d80cc7fe2584a0b5de01151bda361260e0ab1a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-11-26T04:54:09.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-26T04:54:09.000Z", "avg_line_length": 23.0829268293, "max_line_length": 408, "alphanum_fraction": 0.5505071851, "converted": true, "num_tokens": 579, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9314625107731764, "lm_q2_score": 0.9046505254608136, "lm_q1q2_score": 0.8426480498180028}} {"text": "# PRAKTIKUM 8\n`Curve Fitting / Data Fitting / Regresi`\n\n
\n\n# 1 Regresi Linear\n Koefisien garis regresi linear\n\\begin{equation} \ny=Ax+B\n\\end{equation}\nmerupakan solusi dari sistem persamaan linear berikut, yang disebut _normal equations_.\n\\begin{align} \n\\begin{split}\n\\left( \\sum_{k=1}^{N}{x_k^2} \\right) A + \\left( \\sum_{k=1}^{N}{x_k} \\right) B &= \\sum_{k=1}^{N}{x_ky_k}\\\\\n\\left( \\sum_{k=1}^{N}{x_k} \\right) A + NB &= \\sum_{k=1}^{N}{y_k} \n\\end{split}\n\\end{align}\nsolusi sistem tersebut adalah\n\\begin{align}\nA &= \\frac{\\sum_{k=1}^{N}{(x_k-\\bar{x})(y_k-\\bar{y})}}{\\sum_{k=1}^{N}{(x_k-\\bar{x})^2}} \\\\\nB &= \\bar{y}-A\\bar{x}\n\\end{align}\n\n\n```julia\nusing Statistics\n```\n\n\n```julia\nfunction reglin(X,Y)\n # Hitung nilai rataan data x dan y\n xmean = mean(X);\n ymean = mean(Y);\n # Hitung nilai jumlah dari xy dan x^2\n sumxy = (X.-xmean)'*(Y.-ymean)\n sumx2 = (X.-xmean)'*(X.-xmean)\n # Hitung nilau koefisien garis regresi linear Y=Ax+B\n A = sumxy/sumx2;\n B = ymean .- A*xmean;\n return A,B\nend\n```\n\n### Contoh 1:\nDiberikan data seperti berikut.\n\n|x||-1 | 0 | 1 | 2 | 3 | 4 | 5 | 6 | \n|-|| -| -| -| -| -| -| -| -| \n|y||10 | 9 | 7 | 5 | 4 | 3 | 0 | -1 | \n\nCari dan gambarkan garis regresi tersebut\n\n\n```julia\nx = [-1, 0, 1, 2, 3, 4, 5, 6]\ny = [10, 9, 7, 5, 4, 3, 0, -1];\n```\n\n\n```julia\nA,B = reglin(x,y)\n```\n\nPersamaan Regresi $ y = -1.6071 x + 8.6429 $\n\n\n```julia\nusing Plots\n```\n\n\n```julia\nf(x) = A*x+B;\nxk = -1:0.1:6\nplt1 = plot(xk,f.(xk),label = \"Garis Regresi\")\nscatter!(x,y,label = \"Data\")\n```\n\n\n```julia\nrmse(yk,yduga) = sqrt(mean((yk.-yduga).^2));\nyduga = f.(x);\ngalat = rmse(y,yduga)\n```\n\n#### Cara Lain:\n\n\n```julia\nP = [x ones(length(x),1)]\n```\n\n\n```julia\nP'*P # Matriks Koefisien dari SPL\nP'*y # Ruas Kanan SPL\n```\n\n\n```julia\nC = inv(P'*P)*(P'*y) # Solusi SPL\n```\n\n\n```julia\nf(x) = C[1]*x+C[2];\nxk = -1:0.1:6\nplot(xk,f.(xk),label = \"Garis Regresi\")\nscatter!(x,y,label = \"Data\")\n```\n\n# 2 Regresi Pangkat\nKoefisien $ A $ dari fungsi pangkat $$ y=Ax^M $$ adalah\n\\begin{equation}\\label{eq:8 pangkat}\nA=\\frac{ \\sum_{k=1}^{N}{x_k^My_k}}{ \\sum_{k=1}^N{x_k^{2M}}}\n\\end{equation}\n\n\n```julia\nfunction regpower(X,Y,m)\n # Hitung nilai jumlah dari x^m*y dan x^2m\n sumxy = (X.^m)'*Y\n sumx2 = (X.^m)'*(X.^m)\n # Hitung nilau koefisien garis regresi pangkat y = Ax^m\n A = sumxy/sumx2\nend\n```\n\n### Contoh 2:\nPada suatu praktikum fisika, seorang mahasiswa mendapatkan kumpulan data percobaan seperti berikut.\n\n|\tWaktu, t_k | Jarak, d_k |\n|-|-|\n|\t0.2 | 0.1960|\n|\t0.4 | 0.7850|\n|\t0.6 | 1.7665|\n|\t0.8 | 3.1405|\n|\t1.0 | 4.9075| \n\n Hubungan dari data tersebut adalah $ d=\\frac{1}{2}gt^2 $ dengan $ d $ merupakan jarak dalam meter dan $ t $ merupakan waktu dalam detik. Carilah koefisien gravitasi $ g $. 
\n\n\n```julia\ntk = [0.2,0.4,0.6,0.8,1.0];\ndk = [0.1960,0.7850,1.7665,3.1405,4.9075];\n```\n\nDidefinisikan $A = 1/2 g$\n\n\n```julia\nA = regpower(tk,dk,2)\n```\n\n\n```julia\ng = 2*A\n```\n\n#### Cara Lain\n\n\n```julia\nP = tk.^2\n```\n\n\n```julia\nA = inv(P'*P)*(P'*dk) # Solusi SPL\n```\n\n# 3 Regresi Non-Linear \n## Regresi Eksponensial\nMisalkan bahwa diberikan titik $ (x_1,y_1) $, $ (x_2,y_2) $, $\\dots$, $ (x_N,y_N) $ dan ingin dicocokan dengan grafik eksponensial dengan bentuk\n\\begin{equation}\\label{eq:8 eks}\ny = Ce^{Ax}\n\\end{equation}\nLangkah pertama adalah me-logaritma-kan kedua sisi, sehingga diperoleh\n\\begin{align}\n\\begin{split}\n&\\ln(y)=\\ln(Ce^{Ax})\\\\\n\\Leftrightarrow\\ &\\ln(y)=\\ln(e^{Ax})+\\ln(C)\\\\\n\\Leftrightarrow\\ &\\ln(y)=Ax+\\ln(C)\n\\end{split}\n\\end{align}\nSelanjutnya, perubahan peubah didefinisikan sebagai berikut.\n\\begin{align}\\label{eq:8 eks 1}\nY=\\ln(y),\\ \\ \\ \\ X=x,\\ \\ \\ \\ \\text{dan}\\ \\ \\ \\ B=\\ln(C)\n\\end{align}\n\n### Contoh 3:\nGunakan metode linearisasi data untuk mencari kurva regresi eksponensial $$ y=Ce^{Ax} $$ untuk 5 titik data, yaitu $ (0,1.5) $, $ (1,2.5) $, $ (2,3.5) $, $ (3,5.0) $, dan $ (4,7.5) $.\n\n\n```julia\nx = [ 0, 1, 2, 3, 4]\ny = [1.5, 2.5, 3.5, 5.0, 7.5]\nY = log.(y);\n```\n\n\n```julia\nA,B = reglin(x,Y)\nC = exp(B)\n(A,C)\n```\n\n\n```julia\nf(x) = C*exp(A*x);\nxk = -1:0.1:5\nplot(xk,f.(xk),label = \"Garis Regresi\",legend = :topleft)\nscatter!(x,y,label = \"Data\") \n```\n\n# 4 Regresi Polinomial \n## Regresi Parabola\nKoefisien dari garis regresi parabola\n\\begin{equation}\\label{eq:8 para}\ny=Ax^2+Bx+C\n\\end{equation}\nmerupakan nilai solusi $ A $, $ B $, dan $ C $ dari sistem linear\n\\begin{align}\\label{eq:8 para1}\n\\begin{split}\n\\left( \\sum_{k=1}^{N}x_k^4 \\right) A +\n\\left( \\sum_{k=1}^{N}x_k^3 \\right) B +\n\\left( \\sum_{k=1}^{N}x_k^2 \\right) C &=\n\\sum_{k=1}^{N}y_kx_k^2\\\\\n\\left( \\sum_{k=1}^{N}x_k^3 \\right) A +\n\\left( \\sum_{k=1}^{N}x_k^2 \\right) B +\n\\left( \\sum_{k=1}^{N}x_k \\right) C &=\n\\sum_{k=1}^{N}y_kx_k\\\\\n\\left( \\sum_{k=1}^{N}x_k^2 \\right) A +\n\\left( \\sum_{k=1}^{N}x_k \\right) B +\nN C &=\n\\sum_{k=1}^{N}y_k\n\\end{split}\n\\end{align}\n\nDalam notasi matriks, ruas kanan dari sistem linear akan setara dengan nilai dari $ F^TY $, yaitu\n\\begin{align} \nF^TY\n=\\begin{bmatrix}\n&&&&\\\\[-0.5em]\n(x_1)^0 & (x_2)^0 & (x_3)^0 & \\dots & (x_N)^0 \\\\[0.5em]\n(x_1)^1 & (x_2)^1 & (x_3)^1 & \\dots & (x_N)^1 \\\\[0.5em]\n(x_1)^2 & (x_2)^2 & (x_3)^2 & \\dots & (x_N)^2 \\\\[0.5em]\n\\end{bmatrix}\n\\begin{bmatrix}\ny_1\\\\y_2\\\\y_3\\\\\\vdots \\\\y_N\n\\end{bmatrix}\n\\end{align}\nSementara itu, ruas kiri dari sistem linear akan setara dengan nilai dari $ F^TF $, yaitu\n\\begin{align} \nF^TF\n=\\begin{bmatrix}\n&&&&\\\\[-0.5em]\n(x_1)^0 & (x_2)^0 & (x_3)^0 & \\dots & (x_N)^0 \\\\[0.5em]\n(x_1)^1 & (x_2)^1 & (x_3)^1 & \\dots & (x_N)^1 \\\\[0.5em]\n(x_1)^2 & (x_2)^2 & (x_3)^2 & \\dots & (x_N)^2 \\\\[0.5em]\n\\end{bmatrix}\n\\begin{bmatrix}\n(x_1)^0 & (x_1)^1 & (x_1)^2 \\\\\n(x_2)^0 & (x_2)^1 & (x_2)^2 \\\\\n(x_3)^0 & (x_3)^1 & (x_3)^2 \\\\\n\\vdots & \\vdots & \\vdots \\\\\n(x_N)^0 & (x_N)^1 & (x_N)^2 \\\\\n\\end{bmatrix} \n\\end{align}\nDengan demikian, nilai koefisien regresi $ C $ dapat dicari dengan menyelesaikan sistem linear\n\\begin{equation}\n(F^TF)\\ C=F^TY\n\\end{equation}\n\n## Regresi Polinomial berderajat m\n\n\n```julia\nfunction regpoly(X,Y,m)\n #% Hitung matriks F\n F = zeros(length(X),m+1)\n for k = 1: m+1\n F[:,k]=X.^(k-1);\n end\n #% Hitung 
matriks A dan matriks B serta koefisien regresi pangkat\n A = F'*F;\n B = F'*Y;\n C = A\\B; # sama saja dengan inv(A)*B\n C = reverse(C,dims=1)\nend\n```\n\n### Contoh 4:\nCarilah persamaan regresi parabola dari empat titik data $ (-3,3) $, $ (0,1) $, $ (2,1) $, dan $ (4,3) $.\n\n\n```julia\nx = [-3, 0, 2, 4]\ny = [ 3, 1, 1, 3]\nC = regpoly(x,y,2)\n```\n\n\n```julia\nf(x) = C[1]*x^2 + C[2]*x + C[3]\nxk = -3:0.1:4\nplt = plot(xk,f.(xk),label = \"Garis Regresi\",legend=:top)\nscatter!(x,y,label = \"Data\")\n```\n\n\n```julia\nrmse(yk,yduga) = sqrt(mean((yk.-yduga).^2));\nyduga = f.(x);\ngalat = rmse(y,yduga)\n```\n\n#### Cara Lain:\n\n\n```julia\nP = [x.^2 x ones(length(x),1)]\n```\n\n\n```julia\nC = inv(P'*P)*(P'*y)\n```\n\n
\n\n# Soal Latihan\nKerjakan soal berikut pada saat kegiatan praktikum berlangsung.\n\n`Nama: ________`\n\n`NIM: ________`\n\n### Soal 1\nUlangi langkah-langkah pada **Contoh 1** untuk membentuk garis regresi linear dari data berikut.\n\n\n| $x_k$ | 0.0 | 1.0 | 2.0 | 4.0 | 5.0|\n|-|-|-|-|-|-|\n| $y_k$ | 3.0 | 5.1 | 6.9 | 10.0 | 13.0|\n\nGambarkan garis regresi beserta titik-titik data, kemudian hitung nilai RMSE dari garis regresi tersebut.\n\n\n```julia\n\n```\n\n### Soal 2\nDiberikan data hasil pengamatan seperti berikut.\n\n| $d_k$ | 1.0 | 2.0 | 3.0 | 4.0 | 5.0|\n|-|-|-|-|-|-|\n| $t_k$ | 0.45 | 0.63 | 0.79 | 0.90 | 1.01 |\n\nHitunglah nilai koefisien grafitasi tempat pengamatan berdasarkan data tersebut dengan mengikuti cara pada **Contoh 2**.\n\n\n```julia\n\n```\n\n### Soal 3\n\nUlangi langkah-langkah pada **Contoh 3** untuk membentuk garis regresi eksponensial dari data berikut.\n\n| $x_k$ | 0.0 | 1.0 | 2.0 | 4.0 | 5.0|\n|-|-|-|-|-|-|\n| $y_k$ | 2.00 | 0.75 | 0.38 | 0.15 | 0.10 |\n\nGambarkan garis regresi beserta titik-titik data, kemudian hitung nilai RMSE dari garis regresi tersebut.\n\n\n```julia\n\n```\n\n### Soal 4\nUlangi langkah-langkah pada **Contoh 4** untuk membentuk garis regresi kuadratik dari data berikut.\n\n| $x_k$ | 0.0 | 1.0 | 2.0 | 4.0 | 5.0|\n|-|-|-|-|-|-|\n| $y_k$ | 7.10 | 3.00 | 3.60 | 11.00 | 24.00 |\n\nGambarkan garis regresi beserta titik-titik data, kemudian hitung nilai RMSE dari garis regresi tersebut.\n\n\n```julia\n\n```\n", "meta": {"hexsha": "4af95164e2abf4a25b39945494ec859b361ea18f", "size": 15596, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebookpraktikum/Praktikum 08.ipynb", "max_stars_repo_name": "mkhoirun-najiboi/metnum.jl", "max_stars_repo_head_hexsha": "a6e35d04dc277318e32256f9b432264157e9b8f4", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notebookpraktikum/Praktikum 08.ipynb", "max_issues_repo_name": "mkhoirun-najiboi/metnum.jl", "max_issues_repo_head_hexsha": "a6e35d04dc277318e32256f9b432264157e9b8f4", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebookpraktikum/Praktikum 08.ipynb", "max_forks_repo_name": "mkhoirun-najiboi/metnum.jl", "max_forks_repo_head_hexsha": "a6e35d04dc277318e32256f9b432264157e9b8f4", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 24.36875, "max_line_length": 189, "alphanum_fraction": 0.4699923057, "converted": true, "num_tokens": 3763, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9416541626630935, "lm_q2_score": 0.8947894717137996, "lm_q1q2_score": 0.8425822307464098}} {"text": "# Symbolic mathematics with Sympy\n\n[Sympy](http://www.sympy.org/en/index.html) is described as a:\n\n> \"... Python library for symbolic mathematics.\"\n\nThis means it can be used to:\n\n- Manipulate symbolic expressions;\n- Solve symbolic equations;\n- Carry out symbolic Calculus;\n- Plot symbolic function.\n\nIt has other capabilities that we will not go in to in this handbook. 
But you can read more about it here: http://www.sympy.org/en/index.html\n\n## Manipulating symbolic expressions\n\nBefore we can start using the library to manipulate expressions, we need to import it.\n\n\n```python\nimport sympy as sym\n```\n\nThe above imports the library and gives us access to it's commands using the shortand `sym` which is conventially used.\n\nIf we wanted to get Python to check that $x - x = 0$ we would get an error if we did not tell Python what $x$ was:\n\n\n```python\nx - x\n```\n\nThis is where Sympy comes in, we can tell Python to create $x$ as a symbolic variable:\n\n\n```python\nx = sym.symbols('x')\n```\n\nNow we can calculate $x - x$:\n\n\n```python\nx - x\n```\n\n\n\n\n 0\n\n\n\nWe can create and manipulate expressions in Sympy. Let us for example verify:\n\n$$(a + b) ^ 2 = a ^ 2 + 2ab + b ^2$$\n\nFirst, we create the symbolic variables $a, b$:\n\n\n```python\na, b = sym.symbols('a, b')\n```\n\nNow let's create our expression:\n\n\n```python\nexpr = (a + b) ** 2 \nexpr\n```\n\n\n\n\n (a + b)**2\n\n\n\n**Note** we can get Sympy to use LaTeX so that the output looks nice in a notebook:\n\n\n```python\nsym.init_printing()\n```\n\n\n```python\nexpr\n```\n\nLet us expand our expression:\n\n\n```python\nexpr.expand()\n```\n\nNote that we can also get Sympy to produce the LaTeX code for future use:\n\n\n```python\nsym.latex(expr.expand())\n```\n\n\n\n\n 'a^{2} + 2 a b + b^{2}'\n\n\n\n---\n**EXERCISE** Use Sympy to verify the following expressions:\n\n- $(a - b) ^ 2 = a ^ 2 - 2 a b + b^2$\n- $a ^ 2 - b ^ 2 = (a - b) (a + b)$ (instead of using `expand`, try `factor`)\n\n## Solving symbolic equations\n\nWe can use Sympy to solve symbolic expression. For example let's find the solution in $x$ of the quadratic equation:\n\n$$a x ^ 2 + b x + c = 0$$\n\n\n```python\n# We only really need to define `c` but doing them all again.\na, b, c, x = sym.symbols('a, b, c, x') \n```\n\nThe Sympy command for solving equations is `solveset`. The first argument is an expression for which the root will be found. The second argument is the value that we are solving for.\n\n\n```python\nsym.solveset(a * x ** 2 + b * x + c, x)\n```\n\n---\n**EXERCISE** Use Sympy to find the solutions to the generic cubic equation:\n\n$$a x ^ 3 + b x ^ 2 + c x + d = 0$$\n\n---\n\nIt is possible to pass more arguments to `solveset` for example to constrain the solution space. Let us see what the solution of the following is in $\\mathbb{R}$:\n\n$$x^2=-1$$\n\n\n```python\nsym.solveset(x ** 2 + 1, x, domain=sym.S.Reals)\n```\n\n---\n**EXERCISE** Use Sympy to find the solutions to the following equations:\n\n- $x ^ 2 == 2$ in $\\mathbb{N}$;\n- $x ^ 3 + 2 x = 0$ in $\\mathbb{R}$.\n\n---\n\n## Symbolic calculus\n\nWe can use Sympy to compute limits. Let us calculate:\n\n$$\\lim_{x\\to 0^+}\\frac{1}{x}$$\n\n\n```python\nsym.limit(1/x, x, 0, dir=\"+\")\n```\n\n---\n**EXERCISE** Compute the following limits:\n\n1. $\\lim_{x\\to 0^-}\\frac{1}{x}$\n2. $\\lim_{x\\to 0}\\frac{1}{x^2}$\n\n---\n\nWe can use also Sympy to differentiate and integrate. Let us experiment with differentiating the following expression:\n\n$$x ^ 2 - \\cos(x)$$\n\n\n```python\nsym.diff(x ** 2 - sym.cos(x), x)\n```\n\nSimilarly we can integrate:\n\n\n```python\nsym.integrate(x ** 2 - sym.cos(x), x)\n```\n\nWe can also carry out definite integrals:\n\n\n```python\nsym.integrate(x ** 2 - sym.cos(x), (x, 0, 5))\n```\n\n---\n\n**EXERCISE** Use Sympy to calculate the following:\n\n1. $\\frac{d\\sin(x ^2)}{dx}$\n2. 
$\\frac{d(x ^2 + xy - \\ln(y))}{dy}$\n3. $\\int e^x \\cos(x)\\;dx$\n4. $\\int_0^5 e^{2x}\\;dx$\n\n## Plotting with Sympy\n\nFinally Sympy can be used to plot functions. Note that this makes use of another Python library called [matplotlib](http://matplotlib.org/). Whilst Sympy allows us to not directly need to make use of matplotlib it could be worth learning to use as it's a very powerful and versatile library.\n\nBefore plotting in Jupyter we need to run a command to tell it to display the plots directly in the notebook:\n\n\n```python\n%matplotlib inline\n```\n\nLet us plot $x^2$:\n\n\n```python\nexpr = x ** 2\np = sym.plot(expr);\n```\n\nWe can directly save that plot to a file if we wish to:\n\n\n```python\np.save(\"x_squared.pdf\");\n```\n\n---\n**EXERCISE** Plot the following functions:\n\n- $y=x + cos(x)$\n- $y=x ^ 2 - e^x$ (you might find `ylim` helpful as an argument)\n\nExperiment with saving your plots to a file.\n\n---\n\n## Summary\n\nThis section has discussed using Sympy to:\n\n- Manipulate symbolic expressions;\n- Calculate limits, derivates and integrals;\n- Plot a symbolic expression.\n \nThis just touches the surface of what Sympy can do.\n\nLet us move on to using [Numpy](02 - Linear algebra with Numpy.ipynb) to do Linear Algebra.\n", "meta": {"hexsha": "727ee86219297137131767a233d6483865705b16", "size": 57518, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "nbs/01-Symbolic-mathematics-with-Sympy.ipynb", "max_stars_repo_name": "dianetam523/Python-Mathematics-Handbook", "max_stars_repo_head_hexsha": "04036b58dc82943c9ac577abe74c1a2b387e40ce", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "nbs/01-Symbolic-mathematics-with-Sympy.ipynb", "max_issues_repo_name": "dianetam523/Python-Mathematics-Handbook", "max_issues_repo_head_hexsha": "04036b58dc82943c9ac577abe74c1a2b387e40ce", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "nbs/01-Symbolic-mathematics-with-Sympy.ipynb", "max_forks_repo_name": "dianetam523/Python-Mathematics-Handbook", "max_forks_repo_head_hexsha": "04036b58dc82943c9ac577abe74c1a2b387e40ce", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 80.6704067321, "max_line_length": 17586, "alphanum_fraction": 0.8288361904, "converted": true, "num_tokens": 1461, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.894789454880027, "lm_q2_score": 0.9416541643004809, "lm_q1q2_score": 0.8425822163599347}} {"text": "## Eigenvalues and Eigenvectors\n\n### Learning to obtain eigenvalues and eigenvectors using python.\n\n#### [4,3,2],[1,4,1],[3,10,4]\n\n\n```\nimport numpy as np\nI=np.array([[4,3,2],[1,4,1],[3,10,4]])\nprint(\"#### input matrix\")\nprint(I)\nprint()\nx=np.linalg.eigvals(I)\ny=np.linalg.eig(I)\nprint(\"#### Eigen values\")\nprint(x)\nprint()\nprint(\"#### Eigen vectors\")\nprint(y)\n```\n\n #### input matrix\n [[ 4 3 2]\n [ 1 4 1]\n [ 3 10 4]]\n \n #### Eigen values\n [8.98205672 2.12891771 0.88902557]\n \n #### Eigen vectors\n (array([8.98205672, 2.12891771, 0.88902557]), array([[-0.49247712, -0.82039552, -0.42973429],\n [-0.26523242, 0.14250681, -0.14817858],\n [-0.82892584, 0.55375355, 0.89071407]]))\n\n\n#### [1,-3,3],[3,-5,3],[6,-6,4]\n\n\n```\nimport numpy as np\nI=np.array([[1,-3,3],[3,-5,3],[6,-6,4]])\nprint(\"#### input matrix\")\nprint(I)\nprint()\nx=np.linalg.eigvals(I)\ny=np.linalg.eig(I)\nprint(\"#### Eigen values\")\nprint(x)\nprint()\nprint(\"#### Eigen vectors\")\nprint(y)\n```\n\n #### input matrix\n [[ 1 -3 3]\n [ 3 -5 3]\n [ 6 -6 4]]\n \n #### Eigen values\n [ 4.+0.00000000e+00j -2.+1.10465796e-15j -2.-1.10465796e-15j]\n \n #### Eigen vectors\n (array([ 4.+0.00000000e+00j, -2.+1.10465796e-15j, -2.-1.10465796e-15j]), array([[-0.40824829+0.j , 0.24400118-0.40702229j,\n 0.24400118+0.40702229j],\n [-0.40824829+0.j , -0.41621909-0.40702229j,\n -0.41621909+0.40702229j],\n [-0.81649658+0.j , -0.66022027+0.j ,\n -0.66022027-0.j ]]))\n\n\n### Properties of eigenvalues and eigenvectors:\n1. For a nxn matrix, the number of eigen values is n.\n2. The sum of eigen values is equal to the sum of the diagonal elements of matrix.\n3. The product of eigenvalues is equal to the determinant of the matrix.\n4. The eigen value for an identity matrix is 1.\n5. The eigenvalue of a triangular matrix is same as the diagonal elements of a matrix.\n6. For a skew symmetric matrix, the eigenvalues are imaginary.\n7. For orthogonal matrix eigenvalues is 1.(Orthogonal matrix---> A.(A)\u02c6t=I).\n8. 
For indempotent matrix the eigenvalues are 0 and 1(A\u02c62=identity matrix).\n\n\n```\nimport math\nfrom math import *\nimport numpy as np\nfrom numpy import *\n```\n\n#### property 2\n\n\n```\nA=array([[1,2,3],[2,3,5],[3,1,1]])\nprint(\"#### Input matrix\")\nprint(A)\nprint()\nX=A[0][0]+A[1][1]+A[2][2]\nprint(\"#### X\")\nprint(X)\nprint()\nY=sum(linalg.eigvals(A))\nZ=round(Y)\nprint(\"#### Z\")\nprint(Z)\nprint()\nprint(\"The sum of eigen values is equal to the sum of the diagonal elements of matrix\")\nequal(X,Z)\n```\n\n #### Input matrix\n [[1 2 3]\n [2 3 5]\n [3 1 1]]\n \n #### X\n 5\n \n #### Z\n 5\n \n The sum of eigen values is equal to the sum of the diagonal elements of matrix\n\n\n\n\n\n True\n\n\n\nProperty 2 is verified.\n\n#### property 3\n\n\n```\nB=array([[1,2,3],[1,3,5],[4,1,2]])\nprint(\"#### B\")\nprint(B)\nprint()\nM=linalg.det(B)\nprint(\"#### M\")\nprint(M)\nprint()\nQ=prod(linalg.eigvals(B))\nP=np.round(Q)\nprint(\"#### P\")\nprint(P)\nprint()\nprint(\"The product of eigenvalues is equal to the determinant of the matrix.\")\nequal(P,M)\n```\n\n #### B\n [[1 2 3]\n [1 3 5]\n [4 1 2]]\n \n #### M\n 4.000000000000002\n \n #### P\n (4+0j)\n \n The product of eigenvalues is equal to the determinant of the matrix.\n\n\n\n\n\n False\n\n\n\nProperty 3 is verified.\n\n#### Property 4\n\n\n```\nI=array([[1,0,0],[0,1,0],[0,0,1]])\nprint(\"#### Input matrix\")\nprint(I)\nprint()\nprint(\"The eigen value for an identity matrix is 1\")\nlinalg.eigvals(I)\n```\n\n #### Input matrix\n [[1 0 0]\n [0 1 0]\n [0 0 1]]\n \n The eigen value for an identity matrix is 1\n\n\n\n\n\n array([1., 1., 1.])\n\n\n\nProperty 4 is verified.\n\n#### Property 5\n\n\n```\nT=array([[4,0,0],[2,3,0],[1,2,3]])\nprint(\"#### Input matrix\")\nprint(T)\nprint()\nprint(\"The eigenvalue of a triangular matrix is same as the diagonal elements of the matrix.\")\nlinalg.eigvals(T)\n```\n\n #### Input matrix\n [[4 0 0]\n [2 3 0]\n [1 2 3]]\n \n The eigenvalue of a triangular matrix is same as the diagonal elements of the matrix.\n\n\n\n\n\n array([3., 3., 4.])\n\n\n\nProperty 5 is verified\n\n#### Property 6\n\n\n```\nE=(B-B.transpose())/2\nprint(\"#### skew matrix\")\nprint(E)\nprint()\nprint(\"For a skew symmetric matrix, the eigenvalues are imaginary.\")\nlinalg.eigvals(E)\n```\n\n #### skew matrix\n [[ 0. 0.5 -0.5]\n [-0.5 0. 2. ]\n [ 0.5 -2. 0. ]]\n \n For a skew symmetric matrix, the eigenvalues are imaginary.\n\n\n\n\n\n array([0.+0.j , 0.+2.12132034j, 0.-2.12132034j])\n\n\n\nProperty 6 is verified.\n\n#### Property 7\n\n\n```\nF=array([[1,0],[0,-1]])\nprint(\"#### orthogonal matrix\")\nprint(F)\nprint()\nprint(\"For orthogonal matrix eigenvalues is 1.(Orthogonal matrix---> A.(A)\u02c6t=I).\")\nlinalg.eigvals(F)\n```\n\n #### orthogonal matrix\n [[ 1 0]\n [ 0 -1]]\n \n For orthogonal matrix eigenvalues is 1.(Orthogonal matrix---> A.(A)\u02c6t=I).\n\n\n\n\n\n array([ 1., -1.])\n\n\n\nProperty 7 verified.\n\n### Diagonalization of sqaure matrix\n\n\n```\nimport numpy as np\nfrom math import *\nA= np.mat([[2,-2,3],[1,1,1],[1,3,-1]])\nX,P=np.linalg.eig(A)\nI=np.linalg.inv(P)\nZ=np.around(I*A*P)\nfor i in range(len(Z)):\n for j in range(len(Z)):\n if Z[i,j]==-0:\n Z[i,j]=0\nprint(\"The final diagonalized matrix is\")\nprint(Z)\nprint()\n\nprint(\"Eigen vectors\")\nprint(P)\nprint()\n\nprint(\"Eigen values\")\nprint(X)\nprint()\n```\n\n The final diagonalized matrix is\n [[ 3. 0. 0.]\n [ 0. 1. 0.]\n [ 0. 0. 
-2.]]\n \n Eigen vectors\n [[ 0.57735027 0.57735027 -0.61684937]\n [ 0.57735027 -0.57735027 -0.05607722]\n [ 0.57735027 -0.57735027 0.78508102]]\n \n Eigen values\n [ 3. 1. -2.]\n \n\n\n### Cayley-Hamilton Theorem\n\n\n```\nA=np.mat([[2, 3],[4, 5]])\nfrom math import *\nX=np.poly(A)\nprint(\"The co-efficients of the characteristic equation are\",X)\n\ntrace_A = np.trace(A)\ndet_A = np.linalg.det(A)\nI = np.eye(len(A))\nA*A - trace_A * A + det_A * I\n\nfrom sympy import *\nfrom math import *\nfrom numpy import *\nprint(\"Enter elements of the matrix: \")\nA=mat(input())\ns=0\nprint()\nprint(\"The matrix is: \")\nprint(A)\nprint()\nI=eye(len(A),len(A))\nprint(\"The identity matrix is: \")\nprint(I)\nprint()\nce=poly(A)\nce=ce.round()\nprint(\"The coefficients of the characteristic equation=\",ce)\nfor i in range (len(ce)):\n eq=ce[i]*I*(A**(len(ce)-i))\n s=s+eq\nprint()\ns\n```\n\n The co-efficients of the characteristic equation are [ 1. -7. -2.]\n Enter elements of the matrix: \n 1 2 3 4; 3 7 9 2; 1 4 7 8; 1 5 9 2\n \n The matrix is: \n [[1 2 3 4]\n [3 7 9 2]\n [1 4 7 8]\n [1 5 9 2]]\n \n The identity matrix is: \n [[1. 0. 0. 0.]\n [0. 1. 0. 0.]\n [0. 0. 1. 0.]\n [0. 0. 0. 1.]]\n \n The coefficients of the characteristic equation= [ 1. -17. -38. 116. -32.]\n \n\n\n\n\n\n matrix([[0., 0., 0., 0.],\n [0., 0., 0., 0.],\n [0., 0., 0., 0.],\n [0., 0., 0., 0.]])\n\n\n\n### Conclusion: \nThe extraction of eigenvalues and eigenvectors of matrices were obtained. The extraction rows and columns in matrix was learnt. We also learnt the propertities of eigen values and eigen vectors, the process of diagonalization of matrix, solving system of equation using Cramer\u2019s rule. The Cayley-Hamilton\nTheorem was verified in Python in this lab. The extraction of coefficients of the characteristic equation of matrix was learned.\n\n\n```\n\n```\n", "meta": {"hexsha": "b2a9ddda4d9a7f627f761162c3c058fc4f961af5", "size": 20195, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Eigen_value_and_eigen_vectors.ipynb", "max_stars_repo_name": "noufalsalim/eigenvalues", "max_stars_repo_head_hexsha": "8b1acedc6dbc13d24042120d48817750c520e325", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Eigen_value_and_eigen_vectors.ipynb", "max_issues_repo_name": "noufalsalim/eigenvalues", "max_issues_repo_head_hexsha": "8b1acedc6dbc13d24042120d48817750c520e325", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Eigen_value_and_eigen_vectors.ipynb", "max_forks_repo_name": "noufalsalim/eigenvalues", "max_forks_repo_head_hexsha": "8b1acedc6dbc13d24042120d48817750c520e325", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 26.0580645161, "max_line_length": 317, "alphanum_fraction": 0.3919782124, "converted": true, "num_tokens": 2544, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9416541643004809, "lm_q2_score": 0.8947894520743981, "lm_q1q2_score": 0.8425822137180025}} {"text": "# RSA Cryptography Algorithm\n\n## About RSA Algorithm\nRSA (Rivest\u2013Shamir\u2013Adleman) is one of the first public-key cryptosystems and is widely used for secure data transmission. 
In such a cryptosystem, the encryption key is public and distinct from the decryption key which is kept secret (private).\n\n## Algorithm\n\n1. Select 2 Prime Numbers - **p & q**\n2. Calculate **n = p x q**\n3. Calculate Euler's Totient Function of n, **\u03c6(n) = (p-1) x (q-1)**\n4. Select PUBLIC KEY, **e** such that **e & \u03c6(n) are Co-primes** i.e, **gcd(e , \u03c6(n))=1**\n5. Calculate PRIVATE KEY, **d** such that **(d x e) mod \u03c6(n) = 1**\n\n## Public and Private Keys\n\n1. The Public key is { e , n }, which is known to all in the network.\n2. The Private key is { d , n }, which is known ONLY to the User to whom message is to be sent.\n\n## Encryption & Decryption\n\n#### Encryption Algorithm\n\nThe Cipher Text, C is generated from the plaintext, M using the public key, e as:\n\n**C = Me mod n**\n\n#### Decryption Algorithm\n\nThe Plain Text, M is generated from the ciphertext, C using the private key, d as:\n\n**M = Cd mod n**\n\n## Implementation of RSA using Python\n\n\n```python\nfrom sympy import *\nimport math \n\n#Generate p and q\np = randprime(1, 10)\nq = randprime(11, 20)\n\n#Generate n and l(n)\nn = p*q\nl = (p-1)*(q-1)\n\n#Function to test Co-Primality for generation of list of Public Keys\ndef isCoPrime(x):\n if math.gcd(l,x)==1:\n return True\n else:\n return False\n\n#Function to find mod Inverese of e withl(n) to generate d \ndef modInverse(e, l) :\n e = e % l;\n for x in range(1, l) :\n if ((e * x) % l == 1) :\n return x\n return 1\n\n#List for Co-Primes\nlistOfCP = []\nfor i in range(1, l):\n if isCoPrime(i) == True:\n listOfCP.append(i)\n\n#Print values of P, Q, N, L \nprint(\"Value of P = \", p)\nprint(\"Value of Q = \", q)\nprint(\"Value of N = \", n)\nprint(\"Value of L = \", l)\n\nprint(\" \")\n\n#Print List of Co-Primes for e\nprint(\"List of Available Public Keys\")\nprint(listOfCP)\n\nprint(\" \")\n\n#select a Public Key from list of Co-Primes\ne = int(input(\"Select Public Key from the Above List ONLY: \"))\n\n#Value of d\nd = modInverse(e, l)\n\nprint(\" \")\n\n#Print Public and Private Keys\nprint(\"PUBLIC KEY : { e , n } = {\", e ,\",\", n , \"}\")\nprint(\"PRIVATE KEY : { d , n } = {\", d ,\",\", n , \"}\")\n\nprint(\" \")\n\n#Encryption Algorithm\ndef encrypt(plainText):\n return (plainText**e)%n\n\n#Decryption Algorithm\ndef decrypt(cipherText):\n #pvtKey = int(input(\"Enter your Private Key: \"))\n return (cipherText**d)%n\n\n#Driver Code\n\n#Message Input\npt = int(input('Enter the Plain Text: '))\nprint(\"CipherText: \", encrypt(pt))\n\nprint(\" \")\n\n#CipherText Input\nct = int(input('Enter the Cipher Text: '))\nprint(\"PlainText: \", decrypt(ct))\n```\n\n Value of P = 7\n Value of Q = 19\n Value of N = 133\n Value of L = 108\n \n List of Available Public Keys\n [1, 5, 7, 11, 13, 17, 19, 23, 25, 29, 31, 35, 37, 41, 43, 47, 49, 53, 55, 59, 61, 65, 67, 71, 73, 77, 79, 83, 85, 89, 91, 95, 97, 101, 103, 107]\n \n Select Public Key from the Above List ONLY: 31\n \n PUBLIC KEY : { e , n } = { 31 , 133 }\n PRIVATE KEY : { d , n } = { 7 , 133 }\n \n Enter the Plain Text: 89\n CipherText: 110\n \n Enter the Cipher Text: 110\n PlainText: 89\n\n\n#### Tanmoy Sen Gupta\ntanmoysg.com | +91 9864809029 | tanmoysps@gmail.com\n", "meta": {"hexsha": "323251ebe8649e6ee2a63b0e0c6e8bd0f5eaf3de", "size": 5652, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "RSA-Algorithm/RSA-Algorithm.ipynb", "max_stars_repo_name": "TanmoySG/RSA-Algorithm", "max_stars_repo_head_hexsha": "cba1b4ba8e96c6eb1bd67097e47ed7763185fb63", "max_stars_repo_licenses": 
["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2020-05-05T11:07:08.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-03T13:19:05.000Z", "max_issues_repo_path": "RSA-Algorithm/RSA-Algorithm.ipynb", "max_issues_repo_name": "TanmoySG/RSA-Algorithm", "max_issues_repo_head_hexsha": "cba1b4ba8e96c6eb1bd67097e47ed7763185fb63", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2022-03-12T18:13:12.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-12T18:13:12.000Z", "max_forks_repo_path": "RSA-Algorithm/RSA-Algorithm.ipynb", "max_forks_repo_name": "TanmoySG/RSA-Algorithm", "max_forks_repo_head_hexsha": "cba1b4ba8e96c6eb1bd67097e47ed7763185fb63", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-10-01T01:19:39.000Z", "max_forks_repo_forks_event_max_datetime": "2020-10-01T01:19:39.000Z", "avg_line_length": 26.7867298578, "max_line_length": 249, "alphanum_fraction": 0.482661005, "converted": true, "num_tokens": 1087, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9416541544761565, "lm_q2_score": 0.8947894590884705, "lm_q1q2_score": 0.8425822115321312}} {"text": "\n\n\n```python\nimport sympy as sp\nsp.init_printing(use_unicode=True)\n```\n\n\n```python\nflowMat = sp.Matrix([[1,1,1,0,0,0,500], [1,0,0,1,0,1,400], [0,0,1,0,1,-1,100], [0,1,0,-1,-1,0,0]])\nflowMat\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 1 & 1 & 0 & 0 & 0 & 500\\\\1 & 0 & 0 & 1 & 0 & 1 & 400\\\\0 & 0 & 1 & 0 & 1 & -1 & 100\\\\0 & 1 & 0 & -1 & -1 & 0 & 0\\end{matrix}\\right]$\n\n\n\n\n```python\nflowMat.rref()\n```\n\n\n\n\n$\\displaystyle \\left( \\left[\\begin{matrix}1 & 0 & 0 & 1 & 0 & 1 & 400\\\\0 & 1 & 0 & -1 & -1 & 0 & 0\\\\0 & 0 & 1 & 0 & 1 & -1 & 100\\\\0 & 0 & 0 & 0 & 0 & 0 & 0\\end{matrix}\\right], \\ \\left( 0, \\ 1, \\ 2\\right)\\right)$\n\n\n\n\n```python\ncoffeeMat = sp.Matrix([[0.3,0.6,0.1], [0,0.8,0.2], [0.5,0.5,0]]).T\ncoffeeMat\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}0.3 & 0 & 0.5\\\\0.6 & 0.8 & 0.5\\\\0.1 & 0.2 & 0\\end{matrix}\\right]$\n\n\n\n\n```python\ndrinks = sp.Matrix([1,0,0])\ndrinks\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1\\\\0\\\\0\\end{matrix}\\right]$\n\n\n\n\n```python\nfor i in range(10):\n drinks = coffeeMat*drinks\n print(drinks)\n```\n\n Matrix([[0.300000000000000], [0.600000000000000], [0.100000000000000]])\n Matrix([[0.140000000000000], [0.710000000000000], [0.150000000000000]])\n Matrix([[0.117000000000000], [0.727000000000000], [0.156000000000000]])\n Matrix([[0.113100000000000], [0.729800000000000], [0.157100000000000]])\n Matrix([[0.112480000000000], [0.730250000000000], [0.157270000000000]])\n Matrix([[0.112379000000000], [0.730323000000000], [0.157298000000000]])\n Matrix([[0.112362700000000], [0.730334800000000], [0.157302500000000]])\n Matrix([[0.112360060000000], [0.730336710000000], [0.157303230000000]])\n Matrix([[0.112359633000000], [0.730337019000000], [0.157303348000000]])\n Matrix([[0.112359563900000], [0.730337069000000], [0.157303367100000]])\n\n", "meta": {"hexsha": "bfbfb20c18e5d27dac893bc9b0298ae264da5c61", "size": 6873, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "4A_Applications.ipynb", "max_stars_repo_name": "tofighi/Linear-Algebra", "max_stars_repo_head_hexsha": "bea7d2a4a81e0c49b324f23c47cf03db72e376cf", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, 
"max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "4A_Applications.ipynb", "max_issues_repo_name": "tofighi/Linear-Algebra", "max_issues_repo_head_hexsha": "bea7d2a4a81e0c49b324f23c47cf03db72e376cf", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "4A_Applications.ipynb", "max_forks_repo_name": "tofighi/Linear-Algebra", "max_forks_repo_head_hexsha": "bea7d2a4a81e0c49b324f23c47cf03db72e376cf", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.0995475113, "max_line_length": 261, "alphanum_fraction": 0.4022988506, "converted": true, "num_tokens": 822, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9496693659780479, "lm_q2_score": 0.8872045907347108, "lm_q1q2_score": 0.8425510211758462}} {"text": "# Lab Assignment 2\n\n\n\n### Sam Daucey, s2028017\n\n## Presentation and coding style (3 marks)\n\nIn this assignment, some marks are allocated to your coding style and presentation. Try to make your code more readable using the tips given in your computer lab 2. Make sure your figures have good quality, right size, good range and proper labels.\n\n## Task 1 (4 marks)\n\nIn this task we try to use several method from Lab 2 to solve the initial value problem \n\n\\begin{equation}\ny' = 3y-4t, \\quad y(0)=1,\n\\end{equation}\n\nSet the step size to $h = 0.05$ and numerically solve this ODE from $t=0$ to $0.5$ using the following methods:\n\n- Forward Euler \n\n- Adams\u2013Bashforth order 2\n\n- Adams\u2013Bashforth order 3 (we did not code this method in the computer lab, but you can find the formula on [this wikipedia page](https://en.wikipedia.org/wiki/Linear_multistep_method)). For this method, you need to build the very first two steps using other methods. For the first step, use the Euler scheme. For the second step, use Adams\u2013Bashforth order 2. \n\n\nPlot the three different approximations, and display the values in a table.\n\n\n```python\n# Import packages\nimport math\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndef AB_s_coeffs(s):\n \"\"\"Returns the coefficients for the s-step Adams-Bashforth method up to s=3.\"\"\"\n return {\n 1: [1],\n 2: [-1/2, 3/2],\n 3: [5/12, -16/12, 23/12]\n }[s]\n\n\ndef AB_s_step(s, dy_dt, y_tup, t_tup, dt):\n \"\"\"\n Returns an approximation for y_(n+1) using the s-step Adams-Bashforth method.\n ----------------------------------------------------------\n inputs:\n s: the order of the AB method\n dy_dt: the RHS function in the system of ODE\n y_tup: an s-tuple containing [y_(n-s+1), y_(n-s+2) ... y_n]\n t_tup: an s-tuple containing [t_(n-s+1), t_(n-s+2) ... 
t_n]\n dt: the step size\n \"\"\"\n # Retreive the s-step coefficients\n coeffs = AB_s_coeffs(s)\n \n # Compute the change in y using the Adams-Bashforth formula\n dy = dt * sum(\n c * dy_dt(y_i, t_i) for c, y_i, t_i in zip(coeffs, y_tup, t_tup)\n )\n \n # Use the change in y to return y_(n+1)\n y_n = y_tup[-1]\n return y_n + dy\n\n\ndef AB_s_solve(s, dy_dt, y_0, t_0, t_end, h):\n \"\"\"\n Solves the differential equation t' = dy_dt(y, t) numerically using the s-step Adams-Bashforth method.\n ----------------------------------------------------------\n inputs:\n s: the order of the AB method\n dy_dt: the RHS function in the system of ODE\n y_0: initial condition on y, y(t_0) = y_0\n t_0: initial time\n t_end: end time\n h: step size\n output:\n y: the solution of ODE. \n t: the times at which this solution was evaluated at: y[i] ~ y(t[i])\"\"\"\n \n # We need to add an extra step to account for the fact we want to evaluate at the start and end points.\n step_count = math.ceil((t_end - t_0)/h) + 1\n y = np.zeros(step_count)\n t = np.linspace(t_0, t_end, step_count)\n y[0] = y_0\n \n # Generate y[1], y[2] ... y[s-1] using the 1, 2 ... s-1 step AB methods\n for i in range(1, s):\n y_tup = y[:i]\n t_tup = t[:i]\n y[i] = AB_s_step(i, dy_dt, y_tup, t_tup, h)\n \n # Generate y[s], y[s+1] ... using the s step AB method\n for i in range(s, step_count):\n y_tup = y[i-s:i]\n t_tup = t[i-s:i]\n y[i] = AB_s_step(s, dy_dt, y_tup, t_tup, h)\n \n return t, y\n```\n\n\n```python\n# defining the function in the RHS of the ODE given in the question\ndef y_prime(y, t):\n return 3*y - 4*t\n```\n\n\n```python\n\n# Note that Euler's method is just the one-step Adams-Bashforth method,\n# Hence we can use s-step Adams-Bashforth to get all the solutions.\neuler_soln = AB_s_solve(1, y_prime, 1, 0, 0.5, 0.05)\nAB2_soln = AB_s_solve(2, y_prime, 1, 0, 0.5, 0.05)\nAB3_soln = AB_s_solve(3, y_prime, 1, 0, 0.5, 0.05)\n\n# Plot the results\nfig, ax = plt.subplots(figsize=(14, 8))\n\nax.set_xlabel(\"t\")\nax.set_ylabel(\"y\")\n\nax.plot(*euler_soln, label=\"Euler\")\nax.plot(*AB2_soln, label=\"2-step Adams-Bashforth\")\nax.plot(*AB3_soln, label=\"3-step Adams-Bashforth\")\n\nax.legend()\n```\n\n\n```python\nfrom pandas import DataFrame\n\n# printing the solution in a table:\nDataFrame({\n \"Euler\": euler_soln[1],\n \"2-step Adams-Bashforth\": AB2_soln[1],\n \"3-step Adams-Bashforth\": AB3_soln[1]\n}, index=euler_soln[0])\n\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
Euler2-step Adams-Bashforth3-step Adams-Bashforth
0.001.0000001.0000001.000000
0.051.1500001.1500001.150000
0.101.3125001.3187501.318750
0.151.4893751.5042191.505391
0.201.6827811.7087621.711315
0.251.8951981.9354171.939662
0.302.1294782.1877282.194139
0.352.3889002.4698112.478979
0.402.6772352.7864392.799086
0.452.9988203.1431523.160162
0.503.3586433.5463783.568827
\n
\n\n\n\n## Task 2 (3 marks)\n\nUse `SymPy` to solve the differential equation $y' = 3y-4t$, with $y(0)=1$, present the analytical solution, and check the exact value of $y(0.5)$.\n\nCompare the result with the approximations from the three methods in Task 1. You may use a table to show the results of each method at $y(0.5)$. Which method is the most/least accurate? Why?\n\n\n```python\n# standard setup\nimport sympy as sym\nsym.init_printing()\nfrom IPython.display import display_latex\nimport sympy.plotting as sym_plot\n\n# Define symbols to use with sympy\nt = sym.symbols(\"t\")\ny = sym.Function(\"y\")\ny_prime = y(t).diff(t)\n\n# Define and solve the differential equation\ndiff_eq = sym.Eq(y_prime, 3*y(t) - 4*t)\n\nsol = sym.dsolve(diff_eq, ics={y(0):1})\n\n# Print the solution into the console\nprint(\"The solution is:\")\ndisplay_latex(sol)\n\nprint(\"evaluated at 0.5:\")\ndisplay_latex(sol.subs(t, 0.5))\n\n# Substitute t into our symbolic solution for each t in our range\nt_eval = np.linspace(0, 0.5, 11)\ny_expr = sol.rhs\nexact_soln = [y_expr.subs(t, t_i) for t_i in t_eval]\n\n```\n\n The solution is:\n\n\n\n$\\displaystyle y{\\left(t \\right)} = \\left(\\frac{4 \\left(3 t + 1\\right) e^{- 3 t}}{9} + \\frac{5}{9}\\right) e^{3 t}$\n\n\n evaluated at 0.5:\n\n\n\n$\\displaystyle y{\\left(0.5 \\right)} = 3.60093837241004$\n\n\nWe can see that our most accurate approximation of $y(0.5)$ came from the order-3 Adams-Bashforth method, followed by the order-2 Adams Bashforth method, with the least accurate being Euler's method. This is unsuprising, as the higher order methods will eventually have a lower global truncation error for sufficiently small h. \n\nOn top of this, the step size $h=0.05$ is still relatively large in comparison to how precise floating point numbers are, so the higher round-off error associated with doing more computations for the Adams-Bashforth methods had negligible effect. Hence the majority of the total global error was truncation error, which the higher order Adams-Bashforth methods performed better on.\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "ec306d8d65c8a5ce9e51d58411a4f8a95894aed1", "size": 55825, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Lab 2 assignment.ipynb", "max_stars_repo_name": "SamD770/hons-diff-eqs-notebooks", "max_stars_repo_head_hexsha": "48503988b75f113760b67979713c8dcf5f143fa4", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Lab 2 assignment.ipynb", "max_issues_repo_name": "SamD770/hons-diff-eqs-notebooks", "max_issues_repo_head_hexsha": "48503988b75f113760b67979713c8dcf5f143fa4", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Lab 2 assignment.ipynb", "max_forks_repo_name": "SamD770/hons-diff-eqs-notebooks", "max_forks_repo_head_hexsha": "48503988b75f113760b67979713c8dcf5f143fa4", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 120.5723542117, "max_line_length": 41380, "alphanum_fraction": 0.8384236453, "converted": true, "num_tokens": 2641, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9196425245706047, "lm_q2_score": 0.9161096107264305, "lm_q1q2_score": 0.8424933551918484}} {"text": "```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom sympy import Symbol, integrate\n%matplotlib notebook\n```\n\n### Smooth local paths\nWe will use cubic spirals to generate smooth local paths. Without loss of generality, as $\\theta$ smoothly changes from 0 to 1, we impose a condition on the curvature as follows\n\n$\\kappa = f'(\\theta) = K(\\theta(1-\\theta))^n $\n\nThis ensures curvature vanishes at the beginning and end of the path. Integrating, the yaw changes as\n$\\theta = \\int_0^x f'(\\theta)d\\theta$\n\nWith $n = 1$ we get a cubic spiral, $n=2$ we get a quintic spiral and so on. Let us use the sympy package to find the family of spirals\n\n1. Declare $x$ a Symbol\n\n2. You want to find Integral of $f'(x)$\n\n3. You can choose $K$ so that all coefficients are integers\n\nVerify if $\\theta(0) = 0$ and $\\theta(1) = 1$\n\n\n```python\nK = 30#choose for cubic/quintic\nn = 2#choose for cubic/ quintic\nx = Symbol('x')#declare as Symbol\nprint(integrate(K*(x*(1-x))**n, x)) # complete the expression\n```\n\n 6*x**5 - 15*x**4 + 10*x**3\n\n\n\n```python\n#write function to compute a cubic spiral\n#input/ output can be any theta\ndef cubic_spiral(theta_i, theta_f, n=10):\n x = np.linspace(0, 1, num=n)\n theta = (-2*x**3 + 3*x**2) * (theta_f-theta_i) + theta_i\n return theta\n # pass \n\ndef quintic_spiral(theta_i, theta_f, n=10):\n x = np.linspace(0, 1, num=n)\n theta = (6*x**5 - 15*x**4 + 10*x**3)* (theta_f-theta_i) + theta_i\n return theta\n # pass\ndef circular_spiral(theta_i, theta_f, n=10):\n x = np.linspace(0, 1, num=n)\n theta = x* (theta_f-theta_i) + theta_i\n return theta\n```\n\n### Plotting\nPlot cubic, quintic spirals along with how $\\theta$ will change when moving in a circular arc. Remember circular arc is when $\\omega $ is constant\n\n\n\n```python\ntheta_i = 1.57\ntheta_f = 0\nn = 10\nx = np.linspace(0, 1, num=n)\nplt.figure()\nplt.plot(x,circular_spiral(theta_i, theta_f, n),label='Circular')\nplt.plot(x,cubic_spiral(theta_i, theta_f, n), label='Cubic')\nplt.plot(x,quintic_spiral(theta_i, theta_f, n), label='Quintic')\n\nplt.grid()\nplt.legend()\n```\n\n## Trajectory\n\nUsing the spirals, convert them to trajectories $\\{(x_i,y_i,\\theta_i)\\}$. Remember the unicycle model \n\n$dx = v\\cos \\theta dt$\n\n$dy = v\\sin \\theta dt$\n\n$\\theta$ is given by the spiral functions you just wrote. Use cumsum() in numpy to calculate {}\n\nWhat happens when you change $v$?\n\n\n```python\nv = 1\ndt = 0.1\ntheta_i = 1.57\ntheta_f = 0\nn = 100\ntheta_cubic = cubic_spiral(theta_i, theta_f, n)\ntheta_quintic = quintic_spiral(theta_i, theta_f, int(n+(23/1000)*n))\ntheta_circular = circular_spiral(theta_i, theta_f, int(n-(48/1000)*n))\n# print(theta)\ndef trajectory(v,dt,theta):\n dx = v*np.cos(theta) *dt\n dy = v*np.sin(theta) *dt\n # print(dx)\n x = np.cumsum(dx)\n y = np.cumsum(dy)\n return x,y\n\n# plot trajectories for circular/ cubic/ quintic\nplt.figure()\nplt.plot(*trajectory(v,dt,theta_circular), label='Circular')\nplt.plot(*trajectory(v,dt,theta_cubic), label='Cubic')\nplt.plot(*trajectory(v,dt,theta_quintic), label='Quintic')\n\n\nplt.grid()\nplt.legend()\n```\n\n## Symmetric poses\n\nWe have been doing only examples with $|\\theta_i - \\theta_f| = \\pi/2$. \n\nWhat about other orientation changes? Given below is an array of terminal angles (they are in degrees!). 
Start from 0 deg and plot the family of trajectories\n\n\n```python\ndt = 0.1\nthetas = [15, 30, 45, 60, 90, 120, 150, 180] #convert to radians\nplt.figure()\nfor tf in thetas:\n t = cubic_spiral(0, np.deg2rad(tf),50)\n x = np.cumsum(np.cos(t)*dt)\n y = np.cumsum(np.sin(t)*dt)\n plt.plot(x, y, label=f'0 to {tf} degree')\nplt.grid()\nplt.legend()\n# On the same plot, move from 180 to 180 - theta\n#thetas = \nplt.figure()\nfor tf in thetas:\n t = cubic_spiral(np.pi, np.pi-np.deg2rad(tf),50)\n x = np.cumsum(np.cos(t)*dt)\n y = np.cumsum(np.sin(t)*dt)\n plt.plot(x, y, label=f'180 to {180-tf} degree')\n\n\nplt.grid()\nplt.legend()\n```\n\nModify your code to print the following for the positive terminal angles $\\{\\theta_f\\}$\n1. Final x, y position in corresponding trajectory: $x_f, y_f$ \n2. $\\frac{y_f}{x_f}$ and $\\tan \\frac{\\theta_f}{2}$\n\nWhat do you notice? \nWhat happens when $v$ is doubled?\n\n\n```python\ndt = 0.1\nthetas = [15, 30, 45, 60, 90, 120, 150, 180] #convert to radians\n# plt.figure()\nfor tf in thetas:\n t = cubic_spiral(0, np.deg2rad(tf),50)\n x = np.cumsum(np.cos(t)*dt)\n y = np.cumsum(np.sin(t)*dt)\n print(f'tf: {tf} x_f : {x[-1]} y_f: {y[-1]} y_f/x_f : {y[-1]/x[-1]} tan (theta_f/2) : {np.tan(np.deg2rad(tf)/2)}')\n\n```\n\n tf: 15 x_f : 4.936181599941893 y_f: 0.6498606361772978 y_f/x_f : 0.13165249758739583 tan (theta_f/2) : 0.13165249758739583\n tf: 30 x_f : 4.747888365557456 y_f: 1.2721928533042435 y_f/x_f : 0.2679491924311227 tan (theta_f/2) : 0.2679491924311227\n tf: 45 x_f : 4.444428497864582 y_f: 1.8409425608129912 y_f/x_f : 0.41421356237309487 tan (theta_f/2) : 0.41421356237309503\n tf: 60 x_f : 4.040733895009051 y_f: 2.3329188020071205 y_f/x_f : 0.5773502691896257 tan (theta_f/2) : 0.5773502691896257\n tf: 90 x_f : 3.0152040529843056 y_f: 3.0152040529843065 y_f/x_f : 1.0000000000000002 tan (theta_f/2) : 0.9999999999999999\n tf: 120 x_f : 1.8653713069408235 y_f: 3.230917878602665 y_f/x_f : 1.7320508075688772 tan (theta_f/2) : 1.7320508075688767\n tf: 150 x_f : 0.8004297415440109 y_f: 2.9872444633314705 y_f/x_f : 3.732050807568873 tan (theta_f/2) : 3.7320508075688776\n tf: 180 x_f : 5.551115123125783e-17 y_f: 2.3817721504025933 y_f/x_f : 4.290619267613818e+16 tan (theta_f/2) : 1.633123935319537e+16\n\n\nThese are called *symmetric poses*. With this spiral-fitting approach, only symmetric poses can be reached. \n\nIn order to move between any 2 arbitrary poses, you will have to find an intermediate pose that is pair-wise symmetric to the start and the end pose. \n\nWhat should be the intermediate pose? There are infinite possibilities. We would have to formulate it as an optimization problem. 
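\n\nOne way to make the search concrete (a sketch only; the condition below is how we assume the $\frac{y_f}{x_f} = \tan \frac{\theta_f}{2}$ relation generalises to arbitrary start poses, and the helper name is ours) is to require that the chord joining two poses points along the average of their headings:\n\n```python\nimport numpy as np\n\ndef is_pairwise_symmetric(p, q, tol=1e-3):\n    # p and q are poses (x, y, theta), headings in radians\n    # assumed symmetry condition: chord direction equals the mean heading\n    x1, y1, t1 = p\n    x2, y2, t2 = q\n    chord = np.arctan2(y2 - y1, x2 - x1)\n    return abs(chord - 0.5*(t1 + t2)) < tol\n\n# quick check against the printout above: starting from (0, 0, 0) and ending near\n# (x_f, y_f) with theta_f = 90 deg we found y_f/x_f = tan(theta_f/2),\n# which is exactly the condition coded here\nprint(is_pairwise_symmetric((0.0, 0.0, 0.0), (3.015, 3.015, np.pi/2)))\n```\n\nAn intermediate pose between two arbitrary poses could then be searched for among the poses that satisfy this condition with respect to both endpoints, for instance by minimising the total heading change.\n\n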
As they say, that has to be left for another time!\n\n\n```python\n\n```\n", "meta": {"hexsha": "cbf702d3add3851f8ce5638a5dc8ae9444ea946f", "size": 148166, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "week3/parvmaheshwari2002/Q2 - 1/Attempt1_filesubmission_cubic-quintic-spirals.ipynb", "max_stars_repo_name": "naveenmoto/lablet102", "max_stars_repo_head_hexsha": "24de9daa4ae75cbde93567a3239ede43c735cf03", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-07-09T16:48:44.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-09T16:48:44.000Z", "max_issues_repo_path": "week3/parvmaheshwari2002/Q2 - 1/Attempt1_filesubmission_cubic-quintic-spirals.ipynb", "max_issues_repo_name": "naveenmoto/lablet102", "max_issues_repo_head_hexsha": "24de9daa4ae75cbde93567a3239ede43c735cf03", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "week3/parvmaheshwari2002/Q2 - 1/Attempt1_filesubmission_cubic-quintic-spirals.ipynb", "max_forks_repo_name": "naveenmoto/lablet102", "max_forks_repo_head_hexsha": "24de9daa4ae75cbde93567a3239ede43c735cf03", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 148166.0, "max_line_length": 148166, "alphanum_fraction": 0.9417477694, "converted": true, "num_tokens": 2084, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9149009573133051, "lm_q2_score": 0.9207896742607279, "lm_q1q2_score": 0.8424313544653463}} {"text": "# Damped, driven linear oscillator: various cases\n\nThe examples shown in this notebook were taken from M. Tabor, Chaos and Integrability in Nonlinear Dynamics, Chap. 1\n\nHere, we consider numerical solutions to ODEs which represent a system subjected to some form of external time-dependent force $F(t)$. Particularly interesting are those cases where $F(t)$ is a periodic function, for example, $F(t) \\approx cos(\\Omega t)$. Consider as an example the damped, driven linear oscillator\n\n$$\n\\ddot{x}+\\lambda \\dot{x}+\\omega^2 x = \\epsilon F(t)\n$$\n\nwhere $\\epsilon$ can be thought of as a 'coupling parameter' --in the limit $\\epsilon\\rightarrow 0$, the system becomes autonomous again. We can re-write this differential equation as:\n\n$$\n\\begin{align}\n \\begin{split}\n \\dot{x}&=y \\\\\n \\dot{y}&=-\\lambda y -\\omega^2 x + \\epsilon \\cos(\\Omega t) \\\\\n \\end{split}\n\\end{align}\n$$\n\nWe will study this system, and consider several distinct cases. We will integrate it using `TaylorIntegration`, and visualize the results using `Plots`:\n\n\n```julia\nusing TaylorIntegration, Plots\npyplot()\n```\n\n WARNING: Method definition macroexpand(Module, Any) in module Compat at /Users/Jorge/.julia/v0.6/Compat/src/Compat.jl:1491 overwritten in module MacroTools at /Users/Jorge/.julia/v0.6/MacroTools/src/utils.jl:64.\n\n\n\n\n\n Plots.PyPlotBackend()\n\n\n\nBefore we start, we fix some integration parameters (the order of Taylor expansions and the local absolute error tolerance):\n\n\n```julia\nconst abstol = 1e-30\nconst order = 28\n```\n\n\n\n\n 28\n\n\n\n## 1. 
The driven oscillator (non-resonant)\n\nFirst, we will consider the non-resonant, frictionless, linear oscillator:\n\n$$\n\\ddot{x}+\\omega^2x=\\epsilon\\cos(\\Omega t),\n$$\nwhich we will rewrite as:\n\n$$\n\\begin{align}\n\\begin{split}\n\\dot{x}_1 &= x_2 \\\\\n\\dot{x}_2 &= -\\omega^2x_1+\\epsilon\\cos(x_3) \\\\\n\\dot{x}_3 &= \\Omega\n\\end{split}\n\\end{align}\n$$\n\nThis ODE is subject to initial conditions $x(0)=x_0$ and $\\dot{x}(0)=v_0$. Here, we have $\\lambda=0$ (no friction), and $\\omega \\neq \\Omega$ (non-resonant condition).\n\n\n```julia\nconst \u03a9 = 1.0 #forcing frequency\nconst \u03f5 = 0.5 #forcing amplitude\nconst \u03c9 = 1.1 #\"natural\" frequency\nconst T = 2\u03c0/\u03c9 #period associated to oscillator's \u03c9\n\n#the driven linear oscillator ODE:\nfunction drivenosc!(t, x, dx)\n dx[1] = x[2]\n dx[2] = -(\u03c9^2)*x[1]+\u03f5*cos(\u03a9*t)\n nothing\nend\n```\n\n\n\n\n drivenosc! (generic function with 1 method)\n\n\n\nThe initial time, initial condition and final time are:\n\n\n```julia\nconst t0 = 0.0 #initial time\nconst x0 = [1.0,0.0] #initial condition\n#const x0 = [1.0,0.0,\u03a9*t0] #initial condition\nconst tmax = 100*T # 200*T #final time of integration\n```\n\n\n\n\n 571.1986642890532\n\n\n\nThen, the particular solution is:\n\n$$\nx_\\mathrm{part}(t)=\\frac{\\epsilon}{\\omega^2-\\Omega^2} \\cos(\\Omega t)\n$$\n\n\n```julia\nx_part(t) = \u03f5*cos(\u03a9*t)/(\u03c9^2-\u03a9^2)\n```\n\n\n\n\n x_part (generic function with 1 method)\n\n\n\nThe overall solution is:\n\n$$\nx_\\mathrm{gral}(t) = a\\sin(\\omega t +\\delta)+\\frac{\\epsilon}{\\omega^2-\\Omega^2} \\cos(\\Omega t)\n$$\n\nwhere\n\n$$\n\\begin{align}\n\\begin{split}\na &= \\sqrt{ \\left( x_0-\\frac{\\epsilon}{\\omega^2-\\Omega^2}\\right)^2+\\left( \\frac{\\dot{x}_0}{\\omega} \\right)^2 } \\\\\n\\delta &= \\arctan\\left( \\frac{ x_0-\\frac{\\epsilon}{\\omega^2-\\Omega^2} }{ \\dot{x}_0/\\omega } \\right)\n\\end{split}\n\\end{align}\n$$\n\n\n```julia\nconst a = sqrt( (x0[1]-\u03f5/(\u03c9^2-\u03a9^2))^2+(x0[2]^2/\u03c9^2) ) #amplitude\nconst \u03b4 = atan((\u03c9*(x0[1]-\u03f5/(\u03c9^2-\u03a9^2))/x0[2])) #phase shift\n\nx_gral(t) = a*sin(\u03c9*t+\u03b4)+x_part(t)\n\na, \u03b4\n```\n\n\n\n\n (1.3809523809523787, -1.5707963267948966)\n\n\n\nWe can check if our expression for `x_gral ` is consistent, using `TaylorSeries`:\n\n\n```julia\nt_poly = TaylorSeries.Taylor1([0.0, 1.0], 3) #A Taylor polynomial which represents time\nx_gral(t_poly) #evaluate x_gral at t_poly\n```\n\n\n\n\n 1.0 + 9.30148402209536e-17 t - 0.3550000000000001 t\u00b2 - 1.8757992777892314e-17 t\u00b3 + \ud835\udcaa(t\u2074)\n\n\n\nWe can get the numerical value of the derivatives of `x_gral` up to any order we want! 
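\n\nThis works because the $k$-th coefficient of a Taylor expansion around $t=0$ equals $x^{(k)}(0)/k!$. As a quick worked check on the $t^2$ coefficient shown above,\n\n$$\n\ddot{x}(0) = 2!\times(-0.355) = -0.71, \qquad -\omega^2 x_0 + \epsilon\cos(0) = -1.21 + 0.5 = -0.71,\n$$\n\nso the second derivative read off from the Taylor polynomial is exactly what the equation of motion requires at $t=0$.\n\n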
In particular, its 0-th and 1st order coefficients are consistent, within machine epsilon, with the initial conditions $x_0=1$, $\\dot{x}_0=0$.\n\nNow, let's integrate the ODE:\n\n\n```julia\n@time tT, xT = taylorinteg(drivenosc!, x0, t0, tmax, order, abstol, maxsteps=50000);\n```\n\n 0.371958 seconds (612.46 k allocations: 56.206 MiB, 4.59% gc time)\n\n\nIn this case, the solution represents the sum of two sinusoidal signals with different frequencies and therefore we can observe the phenomenon of beats:\n\n\n```julia\n# x vs t, the first steps\nfirsti =1 # length(tT2)-20\nlasti =10 # length(tT2)\nmyrange=firsti:lasti\nlint = linspace(tT[firsti], tT[lasti], 10*length(myrange))\nplot(\nlint/T,\nx_gral.(lint),\nxaxis=\"time, t\",\nyaxis=\"x(t)\",\ntitle=\"x vs t\",\nlabel=\"analytical\"\n#leg=false\n)\nplot!(\ntT[myrange]/T,\nxT[myrange,1],\nxaxis=\"time, t\",\nyaxis=\"x(t)\",\ntitle=\"x vs t\",\nlabel=\"numerical\",\nleg=false\n)\nscatter!(\ntT[myrange]/T,\nxT[myrange,1],\nlabel=\"numerical\",\nleg=true,\nms=3.0\n)\n```\n\n\n\n\n\n\n\n\n\n```julia\n# x vs t\nplot(\ntT/T,\nxT[:,1],\nxaxis=\"time, t\",\nyaxis=\"x(t)\",\ntitle=\"x vs t\",\nleg=false\n)\n```\n\n\n\n\n\n\n\n\nNow, how does the numerical solution compare to the analytical solution? Well, we can measure this via the difference between the analytical solution and the numerical solution at each time step:\n\n$$\n\\Delta x(t) = x_\\mathrm{num}(t)-x_\\mathrm{gral}(t)\n$$\n\nWe measure this in machine epsilons (`eps()`):\n\n\n```julia\n#the absolute error in machine epsilons\n\u0394x = (xT[:,1]-x_gral.(tT))/eps()\n#absolute error, numerical solution, analytic solution, at last time step:\n\u0394x[end], xT[end,1], x_gral(tT[end])\n```\n\n\n\n\n (945.5, 0.6220322210267172, 0.6220322210265072)\n\n\n\nAt the end of the integration, the error in the numerical solution is less than 700`eps()`! Now, let's plot the absolute error as a function of time:\n\n\n```julia\nplot(\ntT/T,\n\u0394x, #the absolute error in machine epsilons\nxaxis=\"time, t\",\nyaxis=\"absolute error (machine epsilons)\",\ntitle=\"x vs t\",\nleg=false\n)\n```\n\n\n\n\n\n\n\n\n## 2. The driven oscillator: resonance\n\nWhen $\\Omega \\rightarrow \\omega$, the solution\n\n$$\n\\begin{align}\nx_\\mathrm{gral}(t) &= a\\sin(\\omega t +\\delta)+\\frac{\\epsilon}{\\omega^2-\\Omega^2} \\cos(\\Omega t)\n\\end{align}\n$$\n\ngiven before, breaks down. This condition is an example of resonance. In this case, the ODE we are trying to solve is:\n\n$$\n\\begin{align}\n \\begin{split}\n \\dot{x}_1 &= x_2 \\\\\n \\dot{x}_2 &= -\\omega^2x_1+\\epsilon\\cos(\\omega t) \\\\\n \\end{split}\n\\end{align}\n$$\n\nand the general solution (for $x_1$) is:\n\n$$\nx_\\mathrm{gral}(t) = a\\sin(\\omega t +\\delta)+\\frac{\\epsilon t}{2\\omega} \\cos(\\omega t)\n$$\n\nNote that, once we know $x_1$, then $x_2$ can be calculated straightforward; $x_3(t)=\\omega t$ by definition. The coefficients $a$ and $\\delta$ are given in this case by:\n\n$$\n\\begin{align}\n\\begin{split}\na &= \\sqrt{x_0^2+\\dot{x}_0^2/\\omega^2} \\\\\n\\delta &= \\arctan \\left( \\frac{\\omega x_0}{\\dot{x}_0} \\right)\n\\end{split}\n\\end{align}\n$$\n\n\n```julia\n#the resonant driven linear oscillator ODE:\nfunction drivenoscres!(t, x, dx)\n dx[1] = x[2]\n dx[2] = -(\u03c9^2)*x[1]+\u03f5*cos(\u03c9*t)\n nothing\nend\n```\n\n\n\n\n drivenoscres! 
(generic function with 1 method)\n\n\n\n\n```julia\nconst a\u2032 = sqrt( x0[1]^2+(x0[2]^2/\u03c9^2) ) #amplitude\nconst \u03b4\u2032 = atan((\u03c9*x0[1]/x0[2])) #phase shift\n\nx_gral_res(t) = a\u2032*sin(\u03c9*t+\u03b4\u2032)+\u03f5*t*sin(\u03c9*t)/(2\u03c9) #overall solution with resonance\n```\n\n\n\n\n x_gral_res (generic function with 1 method)\n\n\n\nIf we want to test the consistency of our expression for `x_gral`, we can do so by evaluating it using `TaylorSeries`:\n\n\n```julia\nx_gral_res(TaylorSeries.Taylor1([0.0, 1.0], 3))\n```\n\n\n\n\n 1.0 + 6.735557395310444e-17 t - 0.3550000000000001 t\u00b2 - 1.3583374080542732e-17 t\u00b3 + \ud835\udcaa(t\u2074)\n\n\n\nWe're ready to perform a Taylor integration for the resonant driven linear oscillator:\n\n\n```julia\nconst tmax = 100*T # 200*T\n```\n\n\n\n\n 571.1986642890532\n\n\n\n\n```julia\n@time tT2, xT2 = taylorinteg(drivenoscres!, x0, t0, tmax, order, abstol, maxsteps=50000);\n```\n\n 0.127499 seconds (518.14 k allocations: 54.571 MiB, 22.92% gc time)\n\n\n\n```julia\n# x vs t, the first steps\nfirsti2 =1 # length(tT2)-20\nlasti2 =10 # length(tT2)\nmyrange2=firsti2:lasti2\nlint2 = linspace(tT2[firsti2], tT2[lasti2], 10*length(myrange2))\nplot(\nlint2/T,\nx_gral_res.(lint2),\nxaxis=\"time, t\",\nyaxis=\"x(t)\",\ntitle=\"x vs t\",\nleg=false\n)\nplot!(\ntT2[myrange2]/T,\nxT2[myrange2,1],\nxaxis=\"time, t\",\nyaxis=\"x(t)\",\ntitle=\"x vs t\",\nleg=false,\n)\nscatter!(\ntT2[myrange2]/T,\nxT2[myrange2,1],\nleg=false,\nms=3.0\n)\n```\n\n\n\n\n\n\n\n\n\n```julia\n# x vs t\nplot(\ntT2/T,\nxT2[:,1],\nxaxis=\"time, t\",\nyaxis=\"x(t)\",\ntitle=\"x vs t\",\nleg=false\n)\nplot!(\ntT2/T,\nx_gral_res.(tT2),\nxaxis=\"time, t\",\nyaxis=\"x(t)\",\ntitle=\"x vs t\",\nleg=false\n)\n```\n\n\n\n\n\n\n\n\nThe solution grows unbounded with time! Physically, this means that the external source is imparting energy to the system, and there are no losses of energy due to any damping ($\\lambda = 0$).\n\nOnce again we ask: how does this numerical solution compare to the analytical solution?\n\n\n```julia\n#the absolute error in machine epsilons\n\u0394x2 = (xT2[:,1]-x_gral_res.(tT2))/eps();\n\u0394x2[end], xT2[end,1], x_gral_res(tT2[end])\n```\n\n\n\n\n (237767.0, 1.000000000053305, 1.00000000000051)\n\n\n\n\n```julia\nplot(\ntT2/T,\n\u0394x2, #the absolute error in machine epsilons\nxaxis=\"time, t\",\nyaxis=\"absolute error (machine epsilons)\",\ntitle=\"x vs t\",\nleg=false\n)\n```\n\n\n\n\n\n\n\n\nNow, try doing longer integrations for the system above. How does the absolute error behave for longer times of integration? Does it grow systematically? Is there a way to know if the error in our integrations is dominated by roundoff errors?\n\n## 3. The damped driven oscillator ($\\lambda \\neq 0$)\n\nWe return to the original problem:\n\n$$\n\\begin{align}\n \\begin{split}\n \\dot{x}_1&=x_2 \\\\\n \\dot{x}_2&=-\\lambda x_2 -\\omega^2 x_1 + \\epsilon \\cos(\\Omega t) \\\\\n \\end{split}\n\\end{align}\n$$\n\ni.e., here $\\lambda \\neq 0$. 
In this case, the overall solution may be written as:\n\n$$\nx(t) = Ae^{-\\lambda t/2}\\cos(\\nu t + \\Delta)+\\frac{\\epsilon\\cos(\\Omega t+\\Delta_\\mathrm{ext})}{\\sqrt{(\\omega^2-\\Omega^2)^2+\\lambda^2\\Omega^2}}\n$$\n\nwhere the intrinsic frequency $\\nu$ is\n\n$$\n\\nu = \\frac{1}{2}\\sqrt{4\\omega^2-\\lambda^2}\n$$\n\nand the \"external\" phase $\\Delta_\\mathrm{ext}$ is given by\n\n$$\n\\Delta_\\mathrm{ext} = \\arctan \\left( \\frac{ -\\lambda\\Omega }{ (\\omega^2-\\Omega^2) } \\right)\n$$\n\nWe actually have three sub-cases, depending on the value of $4\\omega^2-\\lambda$:\n\n+ $4\\omega^2-\\lambda>0$: sub-damping\n+ $4\\omega^2-\\lambda=0$: critical damping\n+ $4\\omega^2-\\lambda<0$: super-damping\n\nHere, we will analyze only the first case, but we only need to change the values of $\\omega$ and $\\lambda$ and then run the integration again to analyze those other cases too!\n\nThe amplitud $A$ and phase $\\Delta$ of the homogeneous part are determined by the initial conditions and are given by\n\n$$\n\\begin{align}\n\\begin{split}\nA &= \\sqrt{(A\\cos\\Delta)^2+(A\\sin\\Delta)^2} \\\\\n\\Delta &= \\arctan\\left( \\frac{A\\sin\\Delta}{A\\cos\\Delta} \\right)\n\\end{split}\n\\end{align}\n$$\n\nwhere \n\n$$\n\\begin{align}\n\\begin{split}\nA\\sin\\Delta &= \\frac{1}{\\nu}( \\frac{\\lambda}{2}(\\alpha-x_0)-(\\dot{x}_0+\\beta) ) \\\\\nA\\cos\\Delta &= x_0-\\alpha\n\\end{split}\n\\end{align}\n$$\n\nand, in turn,\n\n$$\n\\begin{align}\n\\begin{split}\n\\alpha &= \\frac{\\epsilon\\cos(\\Delta_\\mathrm{ext})}{\\sqrt{(\\omega^2-\\Omega^2)^2+\\lambda^2\\Omega^2}} \\\\\n\\beta &= \\frac{\\epsilon\\Omega\\sin(\\Delta_\\mathrm{ext})}{\\sqrt{(\\omega^2-\\Omega^2)^2+\\lambda^2\\Omega^2}}\n\\end{split}\n\\end{align}\n$$\n\n\n```julia\nconst \u03a9 = 2.0 #forcing frequency\nconst \u03f5 = 0.5 #forcing amplitude\nconst \u03c9 = 1.1 #\"natural\" frequency\nconst \u03bb = 0.2 #damping\n```\n\n WARNING: redefining constant \u03a9\n\n\n\n\n\n 0.2\n\n\n\n\n```julia\n#the driven, damped linear oscillator ODE:\n\nfunction drivdamposc!(t, x, dx)\n dx[1] = x[2]\n dx[2] = -\u03bb*x[2]-(\u03c9^2)*x[1]+\u03f5*cos(\u03a9*t)\n nothing\nend\n```\n\n\n\n\n drivdamposc! (generic function with 1 method)\n\n\n\n\n```julia\n#two-variable versions of atan(y/x)\nmyatan(x, y) = y>=zero(x)?( x>=zero(x)?atan(y/x):(atan(y/x)+pi) ):( x>=zero(x)?(atan(y/x)+2pi):(atan(y/x)+pi) )\nmyatan2(x, y) = y>=zero(x)?( x>=zero(x)?atan(y/x):(atan(y/x)-pi) ):( x>=zero(x)?(atan(y/x)):(atan(y/x)+pi) )\n\nconst t0 = 0.0\nconst x0 = [5.0,0.0] #initial condition\n\nconst \u03bd = 0.5*sqrt(4(\u03c9^2)-\u03bb^2) #the intrinsic frequency\nconst Text = 2\u03c0/\u03a9 #period associated to external frequency \u03a9\nconst \u0394ext = myatan2( (\u03c9^2-\u03a9^2), (-\u03bb*\u03a9) ) #the \"external\" phase\n\n#some auxiliary variables... 
(see text)\nconst \u03b1 = \u03f5*cos(\u0394ext)/sqrt( (\u03c9^2-\u03a9^2)^2+(\u03bb^2)*(\u03a9^2) )\nconst \u03b2 = \u03f5*\u03a9*sin(\u0394ext)/sqrt( (\u03c9^2-\u03a9^2)^2+(\u03bb^2)*(\u03a9^2) )\nconst Acos\u0394 = x0[1]-\u03b1\nconst Asin\u0394 = ( (\u03bb/2)*(\u03b1-x0[1])-(x0[2]+\u03b2) )/\u03bd\n\nconst A = sqrt( Asin\u0394^2+Acos\u0394^2 ) #the homogeneous amplitude\n#we have to be careful with the homogeneous phase, when inverting tan:\n#if atan( Asin\u0394./Acos\u0394 ) < 0\n# const \u0394 = atan( Asin\u0394./Acos\u0394 )+\u03c0 #homogeneous phase, case 1\n#else\n# const \u0394 = atan( Asin\u0394./Acos\u0394 ) #homogeneous phase, case 2\n#end\n\nconst \u0394 = myatan2( Acos\u0394 , Asin\u0394 )\n\nprintln(\"A=\", A)\nprintln(\"\u0394=\", \u0394)\nprintln(\"Acos\u0394=\", Acos\u0394)\nprintln(\"Asin\u0394=\", Asin\u0394)\nprintln(\"\u0394ext=\", \u0394ext)\n```\n\n A=5\n\n WARNING: redefining constant x0\n\n\n .193145415819222\n \u0394=-0.08222027678306153\n Acos\u0394=5.175602019108521\n Asin\u0394=-0.42650093744798\n \u0394ext=3.2839914642983628\n\n\n\n```julia\n#the general solution to the damped driven linear oscillator:\nx_ddo(t) = A*exp(-\u03bb*t/2)*cos(\u03bd*t+\u0394)+\u03f5*cos(\u03a9*t+\u0394ext)/sqrt( (\u03c9^2-\u03a9^2)^2+(\u03bb^2)*(\u03a9^2) )\n```\n\n\n\n\n x_ddo (generic function with 1 method)\n\n\n\nAgain, we check for consistency of our expression between the analytical solution, `x_ddo`, and the given initial conditions:\n\n\n```julia\nx_ddo(TaylorSeries.Taylor1([0.0, 1.0], 3))\n```\n\n\n\n\n 5.0 - 7.632783294297951e-17 t - 2.7750000000000004 t\u00b2 + 0.18500000000000005 t\u00b3 + \ud835\udcaa(t\u2074)\n\n\n\nThe final time of integration is:\n\n\n```julia\nconst tmax = 100*Text\n```\n\n WARNING: redefining constant tmax\n\n\n\n\n\n 314.1592653589793\n\n\n\nWe're now ready to integrate:\n\n\n```julia\n@time tT3, xT3 = taylorinteg(drivdamposc!, x0, t0, tmax, order, abstol, maxsteps=50000);\n```\n\n 0.170335 seconds (523.14 k allocations: 55.079 MiB, 11.53% gc time)\n\n\nHow does the solution $x(t)$ look like during the first few steps? Let's plot it!\n\n\n```julia\n# x vs t, the first firsti3-lasti3 steps, starting from the firsti3-th step\n\nfirsti3 = 1 # length(tT3)-200 #1 # length(tT3)-20\nlasti3 = 600 # length(tT3) #10 # length(tT3)\nmyrange3=firsti3:lasti3\nlint3 = linspace(tT3[firsti3], tT3[lasti3], 10*length(myrange3))\nplot(\nlint3/Text,\nx_ddo.(lint3),\nxaxis=\"time, t\",\nyaxis=\"x(t)\",\ntitle=\"x vs t\",\nleg=false\n)\nplot!(\ntT3[myrange3]/Text,\nxT3[myrange3,1],\nxaxis=\"time, t\",\nyaxis=\"x(t)\",\ntitle=\"x vs t\",\nleg=false,\n)\nscatter!(\ntT3[myrange3]/Text,\nxT3[myrange3,1],\nleg=false,\nms=3.0\n)\n```\n\n\n\n\n\n\n\n\nAs $t$ increases, the homogeneous (exponential) part of the solution decays and one is finally left with\n\n$$\n\\lim_{t\\to\\infty} x(t) = \\frac{\\epsilon\\cos(\\Omega t+\\Delta_\\mathrm{ext})}{\\sqrt{(\\omega^2-\\Omega^2)^2+\\lambda^2\\Omega^2}}\n$$\n\nNote that, unlike the case $\\lambda=0$, although the amplitude of the motion becomes large at the resonance $\\Omega\\to\\omega$, it does not diverge. Physically, this asymptotic solution corresponds to steady oscillations of constant energy, which represents a balance between the energy pumped to the system and the energy dissipated by friction.\n\nAs always, we ask, just how good, quantitatively, is the numerical solution compared to the analytical solution? 
To answer that (again, as always), we will plot the absolute error as a function of time:\n\n\n```julia\n#the absolute error in machine epsilons\n\u0394x3 = (xT3[:,1]-x_ddo.(tT3))/eps();\n\u0394x3[end], xT3[end,1], x_ddo(tT3[end])\n```\n\n\n\n\n (63.75, -0.17560201910850107, -0.17560201910851522)\n\n\n\n\n```julia\nplot(\ntT3/Text,\n\u0394x3, #the absolute error in machine epsilons\nxaxis=\"t (natural periods)\",\nyaxis=\"absolute error (machine epsilons)\",\ntitle=\"Absolute error vs time\",\nleg=false\n)\n```\n\n\n\n\n\n\n\n\n\n```julia\n\n```\n", "meta": {"hexsha": "ccbd3bc975da230b81d18efecc31ab973c48d238", "size": 701752, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "examples/Damped-driven-linear-oscillator.ipynb", "max_stars_repo_name": "SebastianM-C/TaylorIntegration.jl", "max_stars_repo_head_hexsha": "f3575ee1caba43e21312062d960613ec2ccba325", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 72, "max_stars_repo_stars_event_min_datetime": "2016-09-22T22:32:12.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-23T13:35:18.000Z", "max_issues_repo_path": "examples/Damped-driven-linear-oscillator.ipynb", "max_issues_repo_name": "SebastianM-C/TaylorIntegration.jl", "max_issues_repo_head_hexsha": "f3575ee1caba43e21312062d960613ec2ccba325", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 132, "max_issues_repo_issues_event_min_datetime": "2016-09-21T05:43:08.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-15T02:55:17.000Z", "max_forks_repo_path": "examples/Damped-driven-linear-oscillator.ipynb", "max_forks_repo_name": "SebastianM-C/TaylorIntegration.jl", "max_forks_repo_head_hexsha": "f3575ee1caba43e21312062d960613ec2ccba325", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 20, "max_forks_repo_forks_event_min_datetime": "2016-09-24T04:37:11.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-25T13:48:07.000Z", "avg_line_length": 559.164940239, "max_line_length": 139199, "alphanum_fraction": 0.9487112256, "converted": true, "num_tokens": 5567, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9207896671963207, "lm_q2_score": 0.9149009532527357, "lm_q1q2_score": 0.8424313442631831}} {"text": "# A first example how to use Function_from_Expression\n\nDetermine the equation of a linear function \n\n$$\n f(x) = a\\,x + b\n$$ \n\nwhere the points $P(2|3)$ and $Q(-2,5)$ belong to the graph \n\n$$\n y=f(x)\n$$ \n\nof $f$ and plot the graph $y=f(x)$ for $-4\\le x\\le 4$.\n\n## Solution\n\nTo solve this, we first need some initialisations:\n\n\n```python\nfrom sympy import *\ninit_printing()\n\nimport matplotlib.pyplot as plt\n\n# usually you want this\n%matplotlib inline \n\n# useful for OS X\n%config InlineBackend.figure_format='retina' \n\nimport numpy as np\n\nfrom IPython.display import display, Math\n\nfrom fun_expr import Function_from_Expression as FE\n```\n\nIn the next step, the function $f$ is defined with two unknown coefficients $a$, $b$:\n\n\n```python\n# define the function\n\n# the variable of the function\nx = Symbol('x')\n\n# the unknown coefficients:\na,b = symbols('a,b')\n\n# the function\nf = FE(x, a*x+b)\n\n# display result\nf\n```\n\n\n```python\n# or, better\nMath(\"f(x)=\"+latex(f(x)))\n```\n\n\n\n\n$$f(x)=a x + b$$\n\n\n\nThe unknown coefficients $a$, $b$ need to be determined. 
To do this, a system of linear equations is derived from the given points:\n\n\n```python\n# define points:\nx_p,y_p = 2,3\nx_q,y_q = -2,5\n\npts = [(x_p,y_p),(x_q,y_q)]\n\n# define equations\neqns = [Eq(f(x_p),y_p),\n Eq(f(x_q),y_q)]\n\n# display result\nfor eq in eqns:\n display(eq)\n```\n\nThe resulting system of equations is solved:\n\n\n```python\n# solve equations\nsol = solve(eqns)\n\n# display result\nsol\n```\n\nThe solution is substituted into the function.\n\n\n```python\n# substitute results into f\nf = f.subs(sol)\n\n# display result\nMath(\"f(x)=\"+latex(f(x)))\n```\n\n\n\n\n$$f(x)=- \\frac{x}{2} + 4$$\n\n\n\nThe resulting function is plotted over the interval $-4\\le x \\le 4$\n\n\n```python\n# define new plot\nfig, ax = plt.subplots()\n\n# the interval along the x-axis\nlx = np.linspace(-4,4)\n\n# plot f(x)\nax.plot(lx,f.lambdified(lx),label=r\"$y={f}$\".format(f=latex(f(x))))\n\n# insert the given points to the graph\nax.scatter(*zip(*pts))\n\n# some additional commands\nax.grid(True)\nax.axhline(0,c='k')\nax.axvline(0,c='k')\nax.set_xlabel('x')\nax.set_ylabel('y')\nax.legend(loc='upper right')\n```\n", "meta": {"hexsha": "bff83f08fbae96d845a7f1f0209edd9841dbe31d", "size": 42950, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "docs/02-first_example.ipynb", "max_stars_repo_name": "w-meiners/fun-expr", "max_stars_repo_head_hexsha": "a44f0366f08c8c2d2eb2702176698bfe3f6febed", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2017-12-20T16:16:40.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-28T11:06:38.000Z", "max_issues_repo_path": "docs/02-first_example.ipynb", "max_issues_repo_name": "w-meiners/fun-expr", "max_issues_repo_head_hexsha": "a44f0366f08c8c2d2eb2702176698bfe3f6febed", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/02-first_example.ipynb", "max_forks_repo_name": "w-meiners/fun-expr", "max_forks_repo_head_hexsha": "a44f0366f08c8c2d2eb2702176698bfe3f6febed", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-01-27T09:50:59.000Z", "max_forks_repo_forks_event_max_datetime": "2020-01-27T09:50:59.000Z", "avg_line_length": 125.9530791789, "max_line_length": 32832, "alphanum_fraction": 0.87774156, "converted": true, "num_tokens": 635, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9511422199928905, "lm_q2_score": 0.8856314723088732, "lm_q1q2_score": 0.8423614846674338}} {"text": "# Exercise: beam bending with the least squares method\n\n\n\nThe differential equation for beam bending\n$$\n(EI w'')'' - q = 0\n$$\ncan be integrated analytically by specifying the linear loading as $q = q_0\\frac{x}{L}$ and the boundary conditions\n$$\nw(0) = 0\\\\\nw(L) = 0\\\\\nw''(0)=0\\\\\nw''(L) = 0\n$$\nto yield the deflection solution (\"Biegelinie\"):\n$$\nw(x) = \\frac{q_0 L^4}{360 EI} \\left[ 3\\left(\\frac{x}{L}\\right)^5 - 10 \\left(\\frac{x}{L}\\right)^3 + 7 \\left(\\frac{x}{L}\\right)\\right]\n$$\nWe now seek to approximate this solution numerically using the least squares method.\n\n\n```python\nimport numpy as np #numerical methods\nimport sympy as sp #symbolic operations\nimport matplotlib.pyplot as plt #plotting\nsp.init_printing(use_latex='mathjax') #makes sympy output look nice\n\n#Some plot settings\nplt.style.use('seaborn-deep')\nplt.rcParams['lines.linewidth']= 2.0\nplt.rcParams['lines.color']= 'black'\nplt.rcParams['legend.frameon']=True\nplt.rcParams['figure.figsize'] = (8, 6)\nplt.rcParams['font.family'] = 'serif'\nplt.rcParams['legend.fontsize']=14\nplt.rcParams['font.size'] = 14\nplt.rcParams['axes.spines.right'] = False\nplt.rcParams['axes.spines.top'] = False\nplt.rcParams['axes.spines.left'] = True\nplt.rcParams['axes.spines.bottom'] = True\nplt.rcParams['axes.axisbelow'] = True\n```\n\n\n```python\n#Defining the geometric an material characteristics as symbolic quantities\nL,q,EI,x = sp.symbols('L q_0 EI x')\n```\n\n\n```python\n#Analytical solution to deflection\ndef deflection_analytical():\n a = x/L\n f = q*L**4/(360 * EI)\n return f*(3*a**5 - 10*a**3 + 7*a)\n```\n\n\n```python\ndeflection_analytical() #check definition\n```\n\n\n\n\n$$\\frac{L^{4} q_{0}}{360 EI} \\left(\\frac{7 x}{L} - \\frac{10 x^{3}}{L^{3}} + \\frac{3 x^{5}}{L^{5}}\\right)$$\n\n\n\nNow, let's plot the analytical solution. For that purpose, we use some Python magic (\"lambdify\"). 
We sample the analytical solution for $x \\in [0,L]$ at 100 points and plot the dimensionless deflection over the dimensionless length.\n\n\n```python\nlam_x = sp.lambdify(x, deflection_analytical(), modules=['numpy'])\n#For the variable x the function deflection_analytical() will obtain something\n\nx_vals = np.linspace(0, 1, 100)*L #This something is x from 0 to L\nanalytical = lam_x(x_vals) #We calculate the solution by passing x = [0,...,L] to deflection_analytical\n\nplt.plot(x_vals/L, analytical/(L**4*q)*EI)\nplt.xlabel('$x / L$')\nplt.ylabel('$w / L^4 q_0 EI^{-1}$')\n```\n\n## Trigonometric Ansatz\n\nLet's try the approximation\n$$\n \\tilde{w} = a_1 \\sin \\left(\\pi \\frac{x}{L}\\right) + a_2 \\sin \\left(2\\pi\\frac{x}{L}\\right)\n$$\n\n\n```python\na1, a2 = sp.symbols('a_1 a_2')#Defining the free values as new symbols\n```\n\n\n```python\ndef deflection_ansatz():\n return a1*sp.sin(sp.pi/L*x) + a2*sp.sin(2*sp.pi/L*x) #defining the approximate solution with the unknown coefficients\n```\n\nNow we substitute this solution into the fourth-order ODE for beam bending by differentiating it twice and adding the distributed loading:\n$$\nEI w^\\text{IV} - q_0 \\frac{x}{L} = 0 \\text{ with } EI = \\text{const.}\n$$\n\n\n```python\nr = EI * deflection_ansatz().diff(x,4) - q * (x/L) #residual\nr\n```\n\n\n\n\n$$\\frac{\\pi^{4} EI}{L^{4}} \\left(a_{1} \\sin{\\left (\\frac{\\pi x}{L} \\right )} + 16 a_{2} \\sin{\\left (\\frac{2 \\pi}{L} x \\right )}\\right) - \\frac{q_{0} x}{L}$$\n\n\n\nNow we perform the derivatives with respect to the Ansatz free values for obtaining the stationarity of $\\int 1/2 r^2 \\text{d} x$:\n\n\n```python\ndr_da1 = r.diff(a1)\ndr_da2 = r.diff(a2)\ndr_da1, dr_da2\n```\n\n\n\n\n$$\\left ( \\frac{\\pi^{4} EI}{L^{4}} \\sin{\\left (\\frac{\\pi x}{L} \\right )}, \\quad \\frac{16 EI}{L^{4}} \\pi^{4} \\sin{\\left (\\frac{2 \\pi}{L} x \\right )}\\right )$$\n\n\n\nThis yields the two equations to be solved:\n\n\n```python\nstationarity_conditions = (sp.integrate(dr_da1*r,(x,0,L)),sp.integrate(dr_da2*r,(x,0,L)))\n```\n\n\n```python\ncoefficients = sp.solve(stationarity_conditions,a1,a2)\n```\n\n\n```python\ncoefficients\n```\n\n\n\n\n$$\\left \\{ a_{1} : \\frac{2 L^{4} q_{0}}{\\pi^{5} EI}, \\quad a_{2} : - \\frac{L^{4} q_{0}}{16 \\pi^{5} EI}\\right \\}$$\n\n\n\n\n```python\ndeflection_ansatz().subs([(a1,coefficients[a1]),(a2,coefficients[a2])]).simplify()\n```\n\n\n\n\n$$\\frac{L^{4} q_{0}}{16 \\pi^{5} EI} \\left(32 \\sin{\\left (\\frac{\\pi x}{L} \\right )} - \\sin{\\left (\\frac{2 \\pi}{L} x \\right )}\\right)$$\n\n\n\nNow we're ready to plot the result and compare it to the analytical solution. 
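\n\nBefore plotting, it is worth noting why fitting only the ODE residual was legitimate here: the trigonometric Ansatz satisfies all four boundary conditions identically, since $\sin(k\pi x/L)$ and its second derivative both vanish at $x=0$ and $x=L$. A quick symbolic check (a sketch, reusing the symbols and the `deflection_ansatz` defined above):\n\n```python\n# the Ansatz and its curvature vanish at both ends, so w(0)=w(L)=0 and w''(0)=w''(L)=0\nw_trial = deflection_ansatz()\nprint([w_trial.subs(x, 0), w_trial.subs(x, L),\n       w_trial.diff(x, 2).subs(x, 0), w_trial.diff(x, 2).subs(x, L)])\n```\n\n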
\n\n\n```python\n#We first substite the now known Ansatz free values into our Ansatz\nz = sp.symbols('z')\nw_numerical = deflection_ansatz().subs([(a1,coefficients[a1]),(a2,coefficients[a2]),(x,z*L)])\n#We also made the coordinate dimensionless (x/L --> z) because of sympy problems\n\nlam_x_num = sp.lambdify(z, w_numerical, modules=['numpy'])\n#For the variable x the expression w_numerical will be given something\nz_vals = np.linspace(0, 1,100) #This something is z from 0 to 1\nnumerical = lam_x_num(z_vals) #We calculate the solution by passing x = [0,...,L] to deflection_analytical\nplt.plot(x_vals/L, analytical/(L**4*q)*EI,label='analytical')\nplt.plot(z_vals, numerical/(L**4*q)*EI,label='numerical')\nplt.legend()\nplt.xlabel('$x\\ /\\ L$')\nplt.ylabel('$w\\ /\\ L^4 q_0 EI^{-1}$')\nplt.tight_layout()\nplt.savefig('beam_least_squares.pdf')\n```\n\n\n```python\nprint(\"Maximum absolute error: \", np.max(np.abs(analytical/(L**4*q)*EI - numerical/(L**4*q)*EI)))\n```\n\n Maximum absolute error: 3.41648400857485e-5\n\n\nWe can also plot and compare the bending moment. Let's first find the analytical expression by symbolically differentiating the deflection expression twice to obtain $M(x) = -EI w''(x)$:\n\n\n```python\n#analytical bending moment\nmoment_analytical = -deflection_analytical().diff(x,2)*EI\n#numerical bending moment\nmoment_numerical = -deflection_ansatz().subs([(a1,coefficients[a1]),(a2,coefficients[a2])]).diff(x,2)*EI\n\n#create lambdas for plotting along dimensionless length z\nlam_x_analyt = sp.lambdify(z, moment_analytical.subs(x,z*L), modules=['numpy'])\nlam_x_num = sp.lambdify(z, moment_numerical.subs(x,z*L), modules=['numpy'])\nz_vals = np.linspace(0, 1,100)\nanalytical = lam_x_analyt(z_vals)\nnumerical = lam_x_num(z_vals)\n\n#plot\nplt.plot(x_vals/L, analytical/(L**2*q),label='analytical')\nplt.plot(z_vals, numerical/(L**2*q),label='numerical')\nplt.xlabel('$x\\ /\\ L$')\nplt.ylabel('$M\\ /\\ L^2 q_0$')\nplt.legend()\nplt.tight_layout()\nplt.savefig('beam_least_squares_moment.pdf')\n```\n\n\n```python\nprint(\"Maximum absolute error in bending moment: \", np.max(np.abs(analytical/(L**2*q) - numerical/(L**2*q))))\n```\n\n Maximum absolute error in bending moment: 0.00384875067588690\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "d8eaa44918008c163ae5536bedd6fc6f0e4731e8", "size": 123855, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Beams/05_least_squares.ipynb", "max_stars_repo_name": "dominik-kern/Numerical_Methods_Introduction", "max_stars_repo_head_hexsha": "09a0d6bd0ddbfc6e7f94b65516d9691766ed46ae", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Beams/05_least_squares.ipynb", "max_issues_repo_name": "dominik-kern/Numerical_Methods_Introduction", "max_issues_repo_head_hexsha": "09a0d6bd0ddbfc6e7f94b65516d9691766ed46ae", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2022-01-04T19:02:05.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-06T08:40:21.000Z", "max_forks_repo_path": "Beams/05_least_squares.ipynb", "max_forks_repo_name": "dominik-kern/Numerical_Methods_Introduction", "max_forks_repo_head_hexsha": "09a0d6bd0ddbfc6e7f94b65516d9691766ed46ae", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2020-12-03T13:01:55.000Z", "max_forks_repo_forks_event_max_datetime": 
"2022-03-16T14:07:04.000Z", "avg_line_length": 249.2052313883, "max_line_length": 41956, "alphanum_fraction": 0.9158128457, "converted": true, "num_tokens": 2099, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9511422213778251, "lm_q2_score": 0.8856314692902446, "lm_q1q2_score": 0.8423614830228303}} {"text": "---\n# What are the marginal and the conditional probabilities?\n---\n\nIn this script, we show the 1-D marginal and 1-D conditional probability density functions (PDF) for a 2-D gaussian PDF.\n\nIn its matrix-form, the equation for the 2-D gausian PDF reads like this: \n\n
$P(\\bf{x}) = \\frac{1}{2\\pi |\\Sigma|^{0.5}} \\exp{[-\\frac{1}{2}(\\bf{x}-\\bf{\\mu})^\\top \\Sigma^{-1} (\\bf{x}-\\bf{\\mu})]}$
\n\nwhere \n\n
\n$\n\\begin{align}\n\\bf{x} &= [x_{1} x_{2}]^\\top \\\\\n\\bf{\\mu} &= [\\mu_{1} \\mu_{2}]^\\top \\\\\n\\Sigma &= \\begin{pmatrix} \\sigma_{x_{1}}^2 & \\rho\\sigma_{x_{1}}\\sigma_{x_{2}} \\\\ \\rho\\sigma_{x_{1}}\\sigma_{x_{2}} & \\sigma_{x_{2}}^2 \\end{pmatrix}\n\\end{align}\n$ \n
\n\nwhere $\\rho$ is the correlation factor between the $x_{1}$ and $x_{2}$ data. \n\n## A useful trick to generate a covariance matrix $\\Sigma$ with desired features \n\nInstead of guessing the values of $\\sigma_{x_{1}}$, $\\sigma_{x_{2}}$ and $\\rho$ to create a \ncovariance matrix $\\Sigma$ for visualization purpose, we can design it with some desired characteristics. \nThose are the principal axis variances $\\sigma_{1}^2$ and $\\sigma_{2}^2$, and the rotation angle $\\theta$.\n\nFirst, we generate the covariance matrix of a correlated PDF with its principal axes oriented along the \n$x_{1}$ and $x_{2}$ axes.\n\n
$\\Sigma_{PA} = \\begin{pmatrix} \\sigma_{1}^2 & 0 \\\\ 0 & \\sigma_{2}^2 \\end{pmatrix}$
\n\nNext, we generate the rotation matrix for the angle $\\theta$:\n\n
$R = \\begin{pmatrix} \\cos{\\theta} & -\\sin{\\theta} \\\\ \\sin{\\theta} & \\cos{\\theta} \\end{pmatrix}$
\n\nThe covariance matrix we are looking for is \n\n
$\\Sigma = R \\Sigma_{PA} R^\\top $
\n\nHence, we only have to specify the values of $\\sigma_{1}$ and $\\sigma_{2}$ along principal axes and the \nrotation angle $\\theta$. The correlation coefficient $\\rho$ depends on the value of $\\theta$:\n\n
$\\rho>0$ when $\\theta>0$
\n
$\\rho<0$ when $\\theta<0$
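
Before the detailed example below, we can check this recipe numerically. The following is a minimal numpy sketch that simply previews the construction used later in this notebook (the values $\sigma_{1}=0.25$, $\sigma_{2}=0.05$ and $\theta=-45^\circ$ are the same illustrative choices used below): it builds $\Sigma = R \Sigma_{PA} R^\top$ and confirms that $\rho$ comes out negative for a negative $\theta$.

```python
import numpy as np

# Illustrative principal-axis standard deviations and rotation angle
sigma1, sigma2 = 0.25, 0.05
theta = np.radians(-45.0)

# Covariance matrix with principal axes along the x1 and x2 directions
Sigma_PA = np.diag([sigma1**2, sigma2**2])

# Rotation matrix and rotated covariance matrix Sigma = R Sigma_PA R^T
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s], [s, c]])
Sigma = R @ Sigma_PA @ R.T

# Correlation coefficient implied by the rotation
rho = Sigma[0, 1] / np.sqrt(Sigma[0, 0] * Sigma[1, 1])
print(rho)  # negative (about -0.92 for these numbers), since theta < 0
```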
\n\n
\nN.B. For ease of reading, we use below the variables x and y instead of x1 and x2. The final results are shown with x1 and x2.\n\n\n```python\nprint(__doc__)\n\n# Author: Pierre Gravel \n# License: BSD\n\n%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib import cm\n\nfrom scipy import stats\nfrom scipy.stats import multivariate_normal\n\nimport seaborn as sns\nsns.set(color_codes=True)\n\n# Used for reproductibility of the results\nnp.random.seed(43)\n```\n\n Automatically created module for IPython interactive environment\n\n\nHere are the characteristics of the 2-D PDF we want to generate:\n\n\n```python\n# Origin of the PDF\nMu = np.array([0.5, 0.5])\n\n# Individual standard deviations along the principal axes\nsigma = np.array([0.25, 0.05])\n\n# Rotation angle for a negative correlation \ntheta = -45. \n\n```\n\nGenerate the covariance matrix $\\Sigma$:\n\n\n```python\ntheta = np.radians(theta)\nc, s = np.cos(theta), np.sin(theta)\n\n# Rotation matrix\nR = np.array(((c, -s), (s, c))) \n\n# Covariance matrix for a PDF with its principal axes oriented along the x and y directions\nSigma = np.array([[sigma[0]**2, 0.],[0., sigma[1]**2]])\n\n# Covariance matrix after rotation\nSigma = R.dot( Sigma.dot(R.T) ) \n\n```\n\nGenerate a spatial grid where the various PDF will be evaluated locally.\n\n\n```python\nx_min, x_max = 0., 1.\ny_min, y_max = 0., 1.\nnx, ny = 60, 60\nx = np.linspace(x_min, x_max, nx)\ny = np.linspace(y_min, y_max, ny)\nxx, yy = np.meshgrid(x,y) \npos = np.dstack((xx, yy)) \n \n```\n\nGet the marginal distributions $P(x)$ and $P(y)$. Each one is the probability of an event irrespective \nof the outcome of the other variable.\n\n\n```python\n# Generator for the 2-D gaussian PDF \nmodel = multivariate_normal(Mu, Sigma) \npdf = model.pdf(pos)\n\n# Project P(x,y) on the x axis to get the marginal 1-D distribution P(x)\npdf_x = multivariate_normal.pdf(x, mean=Mu[0], cov=Sigma[0,0])\n\n# Project P(x,y) on the y axis to get the marginal 1-D distribution P(y)\npdf_y = multivariate_normal.pdf(y, mean=Mu[1], cov=Sigma[1,1])\n\n```\n\nGet the conditional 1-D distributions $P(x|y=yc)$ and $P(y|x=xc)$. Each one is the probability of one event occurring in \nthe presence of a second event. \n\n\n```python\n# Vertical slice position\nxc = 0.3\n\n# Horizontal slice position\nyc = 0.7\n\n# Make a vertical slice of P(x,y) at x=xc to get the conditional 1-D distribution P(y|x=xc)\nP = np.empty([ny,2])\nP[:,0] = xc\nP[:,1] = y\npdf_xc = multivariate_normal.pdf(P, mean=Mu, cov=Sigma)\n\n# Make an horizontal slice of P(x,y) at y=yc to get the conditional 1-D distribution P(x|y=yc)\nP = np.empty([nx,2])\nP[:,0] = x\nP[:,1] = yc\npdf_yc = multivariate_normal.pdf(P, mean=Mu, cov=Sigma)\n\n```\n\nShow the various PDF. The color of each conditional 1-D distribution corresponds to its slice through the 2-D PDF. 
\n\n\n```python\nfig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2,2,figsize=(10,10))\n\n# Display the 2-D PDF P(x,y)\ncset = ax1.contourf(xx, yy, pdf, zdir='z', cmap=cm.viridis, levels=7)\nax1.plot([x_min, x_max], [yc, yc], linewidth=2.0, color='red')\nax1.text(0.79, 0.72,'$x_{2} = 0.7$', fontsize=16, color='white')\nax1.plot([xc, xc], [y_min, y_max], linewidth=2.0, color='green')\nax1.text(0.31, 0.02,'$x_{1} = 0.3$', fontsize=16, color='white')\nax1.set_xlabel('$x_{1}$',fontsize=18)\nax1.set_ylabel('$x_{2}$',rotation=0,fontsize=18)\nax1.xaxis.set_label_coords(0.5, -0.08)\nax1.yaxis.set_label_coords(-0.08, 0.5)\n\n# Display the 1-D conditional PDF P(y|x=xc)\nax2.plot(pdf_y, y, label='$P(x_{2})$', linewidth=3.0, color='black')\nax2.plot(pdf_xc, y, label='$P(x_{2}|x_{1}=0.3)$', linewidth=2.0, color='green')\nax2.set_ylabel('$x_{2}$',rotation=0,fontsize=18)\nax2.set_xlabel('Probability Density',fontsize=18)\nax2.xaxis.set_label_coords(0.5, -0.08)\nax2.yaxis.set_label_coords(-0.08, 0.5)\nax2.legend(loc='best',fontsize=14)\n\n# Display the 1-D conditional PDF P(x|y=yc)\nax3.plot(x, pdf_x, label='$P(x_{1})$', linewidth=3.0, color='black')\nax3.plot(x, pdf_yc, label='$P(x_{1}|x_{2}=0.7)$', linewidth=2.0, color='red')\nax3.set_xlabel('$x_{1}$',fontsize=18)\nax3.set_ylabel('Probability Density',fontsize=18)\nax3.xaxis.set_label_coords(0.5, -0.08)\nax3.yaxis.set_label_coords(-0.08, 0.5)\nax3.legend(loc='best',fontsize=14)\n\n# Hide the unused fouth panel\nax4.axis('off')\nfig.tight_layout()\n\nplt.savefig('Example_of_2D_PDF_with_conditional_1D.png')\nplt.savefig('Example_of_2D_PDF_with_conditional_1D.pdf')\n\nplt.show()\n\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "6392f9b66e7d0733bbc0d33d40004d0b38bfaa97", "size": 85517, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "generate_example_of_2D_PDF_with_conditional_1D_PDF.ipynb", "max_stars_repo_name": "AstroPierre/Scripts-for-figures-courses-GIF-4101-GIF-7005", "max_stars_repo_head_hexsha": "a38ad6f960cc6b8155fad00e4c4562f5e459f248", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "generate_example_of_2D_PDF_with_conditional_1D_PDF.ipynb", "max_issues_repo_name": "AstroPierre/Scripts-for-figures-courses-GIF-4101-GIF-7005", "max_issues_repo_head_hexsha": "a38ad6f960cc6b8155fad00e4c4562f5e459f248", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "generate_example_of_2D_PDF_with_conditional_1D_PDF.ipynb", "max_forks_repo_name": "AstroPierre/Scripts-for-figures-courses-GIF-4101-GIF-7005", "max_forks_repo_head_hexsha": "a38ad6f960cc6b8155fad00e4c4562f5e459f248", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 261.5198776758, "max_line_length": 75168, "alphanum_fraction": 0.9162388765, "converted": true, "num_tokens": 2060, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9511422186079557, "lm_q2_score": 0.8856314632529871, "lm_q1q2_score": 0.8423614748274564}} {"text": "# Programovanie\n\nLetn\u00e1 \u0161kola FKS 2018\n\nMa\u0165o Ga\u017eo, Fero Dr\u00e1\u010dek\n(& vykradnut\u00e9 materi\u00e1ly od Mateja Badina, Feriho Hermana, Kuba, Pe\u0165a, Jarn\u00fdch \u0161k\u00f4l FX a kade-tade po internete)\n\nV tomto kurze si uk\u00e1\u017eeme z\u00e1klady programovania a nau\u010d\u00edme sa programova\u0165 matematiku a fyziku.\nTak\u00e9to vedomosti s\u00fa skvel\u00e9 a budete v\u010faka nim: \n* vedie\u0165 efekt\u00edvnej\u0161ie robi\u0165 dom\u00e1ce \u00falohy\n* kvalitnej\u0161ie rie\u0161i\u0165 semin\u00e1rov\u00e9 a olympi\u00e1dov\u00e9 pr\u00edklady\n* lep\u0161ie rozumie\u0165 svetu (IT je dnes na trhu najr\u00fdchlej\u0161ie rozv\u00edjaj\u00facim sa odvetv\u00edm)\n\nPo\u010d\u00edta\u010d je blb\u00fd a treba mu v\u0161etko poveda\u0165 a vysvetli\u0165. Komunikova\u0165 sa s n\u00edm d\u00e1 na viacer\u00fdch \u00farovniach, my budeme pou\u017e\u00edva\u0165 Python. Python (n\u00e1zov odvoden\u00fd z Monty Python's Flying Circus) je v\u0161eobecn\u00fd programovac\u00ed jazyk, ktor\u00fdm sa daj\u00fa vytv\u00e1ra\u0165 webov\u00e9 str\u00e1nky ako aj robi\u0165 seri\u00f3zne vedeck\u00e9 v\u00fdpo\u010dty. To znamen\u00e1, \u017ee nau\u010di\u0165 sa ho nie je na \u0161kodu a mo\u017eno v\u00e1s raz bude \u017eivi\u0165.\n\nRozhranie, v ktorom p\u00ed\u0161eme k\u00f3d, sa vol\u00e1 Jupyter Notebook. Je to prostredie navrhnut\u00e9 tak, aby sa dalo programova\u0165 doslova v prehliada\u010di a aby sa k\u00f3d dal k\u00faskova\u0165. Pre zbehnutie k\u00faskov programu sta\u010d\u00ed stla\u010di\u0165 Shift+Enter. \n\n\n\n# D\u00e1tov\u00e9 typy a oper\u00e1tory\n\n### \u010c\u00edsla\n\n pod\u013ea o\u010dak\u00e1van\u00ed, vracia trojku\n\n\n```python\n3\n```\n\n\n\n\n 3\n\n\n\n\n```python\n2+3 # scitanie\n```\n\n\n\n\n 5\n\n\n\n\n```python\n6-2 # odcitanie\n```\n\n\n\n\n 4\n\n\n\n\n```python\n10*2 # nasobenie\n```\n\n\n\n\n 20\n\n\n\n\n```python\n35/5 # delenie\n```\n\n\n\n\n 7.0\n\n\n\n\n```python\n5//3 # celociselne delenie TODO je toto treba?\n```\n\n\n\n\n 1\n\n\n\n\n```python\n7%3 # modulo\n```\n\n\n\n\n 1\n\n\n\n\n```python\n2**3 # umocnovanie\n```\n\n\n\n\n 8\n\n\n\n\n```python\n4 * (2 + 3) # poradie dodrzane\n```\n\n\n\n\n 20\n\n\n\n### Logick\u00e9 v\u00fdrazy\n\n\n```python\n1 == 1 # logicka rovnost\n```\n\n\n\n\n True\n\n\n\n\n```python\n2 != 3 # logicka nerovnost\n```\n\n\n\n\n True\n\n\n\n\n```python\n1 < 10\n```\n\n\n\n\n True\n\n\n\n\n```python\n1 > 10\n```\n\n\n\n\n False\n\n\n\n\n```python\n2 <= 2\n```\n\n\n\n\n True\n\n\n\n# Premenn\u00e9\n\nToto je premenn\u00e1.\n\nPo stla\u010den\u00ed Shift+Enter program v okienku zbehne a premenn\u00e1 sa ulo\u017e\u00ed do pam\u00e4te (RAMky, v\u0161etko sa deje na RAMke).\n\n\n```python\na = 2\n```\n\nTeraz s \u0148ou mo\u017eno pracova\u0165 ako s be\u017en\u00fdm \u010d\u00edslom.\n\n\n```python\n2 * a\n```\n\n\n\n\n 4\n\n\n\n\n```python\na + a\n```\n\n\n\n\n 4\n\n\n\n\n```python\na + a*a\n```\n\n\n\n\n 6\n\n\n\nMo\u017eno ju aj umocni\u0165.\n\n\n```python\na**3\n```\n\n\n\n\n 8\n\n\n\nPridajme druh\u00fa premenn\u00fa.\n\n\n```python\nb = 5\n```\n\nNasledovn\u00e9 v\u00fdpo\u010dty dopadn\u00fa pod\u013ea o\u010dak\u00e1van\u00ed.\n\n\n```python\na + b\n```\n\n\n\n\n 7\n\n\n\n\n```python\na * b\n```\n\n\n\n\n 10\n\n\n\n\n```python\nb**a\n```\n\n\n\n\n 25\n\n\n\nRe\u00e1lne \u010d\u00edsla m\u00f4\u017eeme zobrazova\u0165 aj vo vedeckej forme: 
$2.3\\times 10^{-3}$.\n\n\n```python\nd = 2.3e-3\n```\n\n### Priklad [0]\n\nFERO DAJ SEM NIECO\n\n# ---------------------------------------------------------------------------\n\nTeraz sa m\u00f4\u017eeme posun\u00fa\u0165 k nie\u010domu viac zmysluplnej\u0161iemu. M\u00f4\u017eeme za\u010da\u0165 po\u010d\u00edta\u010du zad\u00e1va\u0165 \u00falohy, ktor\u00e9 by sme u\u017e sami nezvl\u00e1dli !\n\n# Funkcie\n\nSpravme si jednoduch\u00fa funkciu, ktor\u00e1 za n\u00e1s s\u010d\u00edta dve \u010d\u00edsla, aby sme sa s t\u00fdm u\u017e nemuseli tr\u00e1pi\u0165 my:\n\n\n```python\ndef scitaj(a, b):\n print(\"\u010c\u00edslo a je {} a \u010d\u00edslo b je {}\".format(a, b))\n return a + b\n```\n\n\n```python\nscitaj(10, 12) # vypise vetu a vrati sucet\n```\n\n \u010c\u00edslo a je 10 a \u010d\u00edslo b je 12\n\n\n\n\n\n 22\n\n\n\nFunkcia funguje na cel\u00fdch aj re\u00e1lnych \u010d\u00edslach.\n\nNa\u0161a s\u010d\u00edtacia funkcia m\u00e1 __\u0161tyri podstatn\u00e9 veci__:\n1. `def`: toto slovo definuje funkciu.\n2. dvojbodka na konci prv\u00e9ho riadku, odtia\u013e za\u010d\u00edna defin\u00edcia.\n3. Odsadenie k\u00f3du vn\u00fatri funkcie o \u0161tyri medzery.\n4. Samotn\u00fd k\u00f3d. V \u0148om sa m\u00f4\u017ee dia\u0165 \u010doko\u013evek, Python ho postupne prech\u00e1dza.\n5. `return`: k\u013e\u00fa\u010dov\u00e1 vec. Za toto slovo sa p\u00ed\u0161e, \u010do je output funkcie.\n\n### \u00daloha 1\nNap\u00ed\u0161te funkciu `priemer`, ktor\u00e1 zoberie dve \u010d\u00edsla (v\u00fd\u0161ky dvoch chlapcov) a vypo\u010d\u00edta ich priemern\u00fa v\u00fd\u0161ku.\n\nAk m\u00e1\u0161 \u00falohu hotov\u00fa, prihl\u00e1s sa ved\u00facemu.\n\n\n```python\n# Tvoje riesenie:\ndef priemer(prvy, druhy):\n return ((prvy+druhy)/2)\n\npriemer(90,20)\n\n```\n\n\n\n\n 55.0\n\n\n\n# Po\u010fme na fyziku\n\nV tomto momente m\u00f4\u017eeme za\u010da\u0165 pou\u017e\u00edva\u0165 Python ako sofistikovanej\u0161iu kalkula\u010dku a po\u010d\u00edta\u0165 \u0148oz z\u00e1kladn\u00e9 fyzik\u00e1lne probl\u00e9my. \nPredstavme si napr\u00edklad, \u017ee potrebujeme zisti\u0165, ko\u013eko m\u00f3lov at\u00f3mov je v dvoch litroch vody.\n\n\n```python\nrho = 1000.0 # hustota\nV = 2.0 * 1e-3 # treba premeni\u0165 na metre kubick\u00e9\nm = rho * V # hmotnos\u0165 vody\nMm = (16 + 1 + 1) * 1e-3 # kg/mol\n\nn = m / Mm\nprint(n) # vypiseme\n```\n\n 111.1111111111111\n\n\nA ko\u013eko molek\u00fal je v jednom litri vody?\n\n\n```python\nNA = 6.022e23 # Avogadrova kon\u0161tanta\nV = 1e-3\nm = rho * V\nN = m / Mm * NA # zamyslite sa nad porad\u00edm n\u00e1sobenia a delenia\nprint(N)\n```\n\n 3.345555555555555e+25\n\n\nVcelku dos\u0165...\n\n## \u00daloha 2\nSpo\u010d\u00edtajte objem, ktor\u00fd v priemere zaber\u00e1 jedna molekula \u013eubovo\u013enej kvapaliny. Vyjadrite ho v nanometroch kubick\u00fdch.\n\nUrobte to tak, \u017ee nap\u00ed\u0161ete funkciu, ktor\u00e1 bude ako vstup bra\u0165:\n* objem kvapaliny\n* hustotu kvapaliny\n* mol\u00e1rnu hmotnos\u0165 kvapaliny\na ako v\u00fdstup to d\u00e1 objem jednej molekuly kvapaliny. \n\nSpo\u010d\u00edtajte to pre metanol, etanol a benz\u00e9n a v\u00fdsledky potom uk\u00e1\u017ete ved\u00facemu. 
Nev\u00e1hajte pou\u017e\u00edva\u0165 Google.\n\n\n```python\n# Tvoje riesenie:\n\ndef ObjemMolekuly(objem, hustota, molarna_hmotnost):\n pocet_molekul = (objem*hustota/molarna_hmostnost)*6.022e23\n return (objem/pocet_molekul)\n```\n\n# Zoznamy\n\nZatia\u013e sme sa zozn\u00e1mili s \u010d\u00edslami (cel\u00e9, re\u00e1lne), stringami a trochu aj logick\u00fdmi hodnotami.\nZo v\u0161etk\u00fdch t\u00fdchto prvkov vieme vytv\u00e1ra\u0165 mno\u017einy, v informatickom jazyku `zoznamy`.\n\nNa \u00favod sa teda pozrieme, ako s vytv\u00e1ra zoznam (po anglicky `list`). Tak\u00fato vec v\u0161eobecne naz\u00fdvame d\u00e1tov\u00e1 \u0161trukt\u00fara.\n\n\n```python\nli = [] # prazdny list\n```\n\n\n```python\nv = [4, 2, 3] # list s cislami\n```\n\n\n```python\nv\n```\n\n\n\n\n [4, 2, 3]\n\n\n\n\n```python\nv[0] # indexovat zaciname nulou!\n```\n\n\n\n\n 4\n\n\n\n\n```python\nv[1]\n```\n\n\n\n\n 2\n\n\n\n\n```python\ntype(v)\n```\n\n\n\n\n list\n\n\n\n\n```python\nw = [5, 'ahoj', True]\n```\n\n\n```python\ntype(w[0])\n```\n\n\n\n\n int\n\n\n\n\u010co sa sa stane, ak zoznamy s\u010d\u00edtame? Spoja sa.\n\n\n```python\nv + w\n```\n\n\n\n\n [4, 2, 3, 5, 'ahoj', True]\n\n\n\nM\u00f4\u017eeme ich n\u00e1sobi\u0165?\n\n\n```python\nv * v\n```\n\nSmola, nem\u00f4\u017eeme. Ale v\u0161imnime si, ak\u00e1 u\u017eito\u010dn\u00e1 je chybov\u00e1 hl\u00e1\u0161ka. Jasne n\u00e1m hovor\u00ed, \u017ee nemo\u017eno n\u00e1sobi\u0165 `list`y.\n\nSo zoznamami m\u00f4\u017eeme robi\u0165 r\u00f4zne in\u00e9 u\u017eito\u010dn\u00e9 veci. Napr\u00edklad ich s\u010d\u00edta\u0165.\n\n\n```python\nsum(v)\n```\n\n\n\n\n 9\n\n\n\nAlebo zisti\u0165 d\u013a\u017eku:\n\n\n```python\nlen(v)\n```\n\n\n\n\n 3\n\n\n\nAlebo ich utriedi\u0165:\n\n\n```python\nsorted(v)\n```\n\n\n\n\n [2, 3, 4]\n\n\n\nAlebo na koniec prida\u0165 nov\u00fd prvok:\n\n\n```python\nv.append(10)\nv\n```\n\n\n\n\n [4, 2, 3, 10]\n\n\n\nAlebo odobra\u0165:\n\n\n```python\nv.pop()\nv\n```\n\n\n\n\n [4, 2, 3]\n\n\n\n### Interval\nv zozname sa d\u00e1 h\u013eada\u0165 pomocou intervalov, ktor\u00e9 s\u00fa $\\langle x,y)$, \u010di\u017ee uzavret\u00fd - otvoren\u00fd.\n\n\n```python\nli = [2, 5, 7, 8, 10, 11, 14, 18, 20, 25]\nlen(li)\n```\n\n\n\n\n 10\n\n\n\n\n```python\nli[1:5] # indexovanie na zaciatku nulou + polouzavrety interval\n```\n\n\n\n\n [5, 7, 8, 10]\n\n\n\n\n```python\nli[:3] # prve tri prvky\n```\n\n\n\n\n [2, 5, 7]\n\n\n\n\n```python\nli[6:] # prvky zacinajuce na 6tom mieste\n```\n\n\n\n\n [14, 18, 20, 25]\n\n\n\n\n```python\nli[2:9:2] # zaciatok:koniec:krok\n```\n\n\n\n\n [7, 10, 14, 20]\n\n\n\n\n```python\ndel li[2]\nli\n```\n\n\n\n\n [2, 5, 8, 10, 11, 14, 18, 20, 25]\n\n\n\nZoznam mo\u017eno zadefinova\u0165 aj cez rozsah:\n\n\n```python\nrange(10)\ntype(range(10))\n```\n\n\n\n\n range\n\n\n\n\n```python\nlist(range(10))\n```\n\n\n\n\n [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n\n\n\n\n```python\nlist(range(3, 9))\n```\n\n\n\n\n [3, 4, 5, 6, 7, 8]\n\n\n\n## \u00daloha 3\nSpo\u010d\u00edtajte:\n* s\u00fa\u010det v\u0161etk\u00fdch \u010d\u00edsel od 1 do 1000.\n\nVytvorte zoznam `letnaskola`, ktor\u00fd bude obsahova\u0165 va\u0161ich 5 ob\u013e\u00faben\u00fdch cel\u00fdch \u010d\u00edsel. 
\n* Pridajte na koniec zoznamu \u010d\u00edslo 100\n* Vyma\u017ete druh\u00e9 \u010d\u00edslo zo zoznamu.\n* Prep\u00ed\u0161te prv\u00e9 \u010d\u00edslo v zozname tak, aby sa rovnalo posledn\u00e9mu v zozname.\n* Vypo\u010d\u00edtajte s\u00fa\u010det prv\u00e9ho \u010d\u00edsla, posledn\u00e9ho \u010d\u00edsla a d\u013a\u017eky zoznamu.\n\n\n```python\n# Tvoje riesenie:\nzoznam = list(range(1,1001))\nprint(sum(zoznam))\n\nletnaskola = [1,1995,12,6,42]\nprint(letnaskola)\nletnaskola.append(100)\nprint(letnaskola)\ndel letnaskola[1]\nprint(letnaskola)\nletnaskola[0] = letnaskola[len(letnaskola)-1]\nprint(letnaskola)\n\nprint(letnaskola[0]+letnaskola[len(letnaskola)-1], len(letnaskola))\n```\n\n 500500\n [1, 1995, 12, 6, 42]\n [1, 1995, 12, 6, 42, 100]\n [1, 12, 6, 42, 100]\n [100, 12, 6, 42, 100]\n 200 5\n\n\n# For cyklus\n\nIndexy zoznamu m\u00f4\u017eeme postupne prech\u00e1dza\u0165. For cyklus je tzv. `iter\u00e1tor`, ktor\u00fd iteruje cez zoznam.\n\n\n```python\nfor i in li:\n print(i)\n```\n\n 2\n 5\n 8\n 10\n 11\n 14\n 18\n 20\n 25\n\n\n\n```python\nfor i in li:\n print(i**2)\n```\n\n 4\n 25\n 64\n 100\n 121\n 196\n 324\n 400\n 625\n\n\nAko \u00faspe\u0161ne vytvori\u0165 For cyklus? Podobne, ako pri funkci\u00e1ch:\n\n* `for`: toto slovo je na za\u010diatku.\n* `i`: iterovana velicina\n* `in`: pred zoznamom, cez ktor\u00fd prech\u00e1dzame (iterujeme).\n* dvojbodka na konci prv\u00e9ho riadku.\n* kod, ktory sa cykli sa odsadzuje o \u0161tyri medzery.\n\nZa pomoci for cyklu m\u00f4\u017eeme takisto s\u010d\u00edta\u0165 \u010d\u00edsla. Napr. \u010d\u00edsla od 0 do 100:\n\n\n```python\nsum = 0\n\nfor i in range(101): # uvedomme si, preco tam je 101 a nie 100\n sum = sum + i # skratene sum += i\n \nprint(sum)\n```\n\n 5050\n\n\n## \u00daloha 4\nSpo\u010d\u00edtajte s\u00fa\u010det druh\u00fdch mocn\u00edn v\u0161etk\u00fdch nep\u00e1rnych \u010d\u00edsel od 1 do 100 s vyu\u017eit\u00edm for cyklu.\n\n\n```python\n# Tvoje riesenie:\nsum = 0\nfor i in range(101):\n if (i%2 == 1): sum = sum + i**2\nprint(sum)\n\n```\n\n 166650\n\n\n# Podmienky\n\nPochop\u00edme ich na pr\u00edklade. Zme\u0148te `a` a zistite, \u010do to sprav\u00ed.\n\n\n```python\na = 5\n\nif a == 3:\n print(\"cislo a je rovne trom.\")\nelif a == 5:\n print(\"cislo a je rovne piatim\")\nelse:\n print(\"cislo a nie je rovne trom ani piatim.\")\n```\n\n cislo a je rovne piatim\n\n\nZa pomoci podmienky teraz m\u00f4\u017eeme z for cyklu vyp\u00edsa\u0165 napr. len p\u00e1rne \u010d\u00edsla. 
P\u00e1rne \u010d\u00edslo identifikujeme ako tak\u00e9, ktor\u00e9 po delen\u00ed dvomi d\u00e1va zvy\u0161ok nula.\n\nPre zvy\u0161ok po delen\u00ed sa pou\u017e\u00edva percento:\n\n\n```python\nfor i in range(10):\n if i % 2 == 0:\n print(i)\n```\n\n 0\n 2\n 4\n 6\n 8\n\n\nCyklus mozeme zastavit, ak sa porusi nejaka podmienka\n\n\n```python\nfor i in range(20):\n print(i)\n if i>10:\n print('Koniec.')\n break\n```\n\n 0\n 1\n 2\n 3\n 4\n 5\n 6\n 7\n 8\n 9\n 10\n 11\n Koniec.\n\n\n## \u00daloha 5\nSpo\u010d\u00edtajte:\n* s\u00fa\u010det v\u0161etk\u00fdch \u010d\u00edsel od 1 do 1000 delite\u013en\u00e9 jeden\u00e1stimi.\n* s\u00fa\u010det tret\u00edch mocn\u00edn \u010d\u00edsel od 1 do 1000 delite\u013en\u00fdch dvan\u00e1stimi.\n\n\n```python\n# Tvoje riesenie:\nsum = 0\n\nfor i in range(1001):\n if (i%11 == 0): sum = sum + i\n\nprint(sum)\n\nsum = 0\n\nfor i in range(1001):\n if (i%12 == 0): sum = sum + i**3\n \nprint(sum)\n```\n\n 45045\n 20998994688\n\n\n## \u00daloha 6\n\nTeraz, ke\u010f u\u017e vieme, ako sa zis\u0165uje delite\u013enos\u0165, m\u00f4\u017eeme tie\u017e zisti\u0165, \u010di je zadan\u00e9 \u010d\u00edslo prvo\u010d\u00edslom. Vymyslite algoritmus, ktor\u00fd over\u00ed prvo\u010d\u00edseln\u00fa vlastnos\u0165 nejak\u00e9ho \u010d\u00edsla.\n\nV\u00fdstup by mal by\u0165 nasledovn\u00fd:\n```Python\n>>> prvocislo(10)\nnie\n>>> prvocislo(13)\nano\n```\n\n\n```python\n# Tvoje riesenie:\nfrom math import *\n\ndef prvocislo(cislo):\n vysledok = \"ano\"\n for i in range(2,int(floor(sqrt(cislo)))+1):\n if (cislo%i == 0): \n vysledok = \"nie\"\n return vysledok\n \nprint(prvocislo(10))\nprint(prvocislo(13))\n```\n\n nie\n ano\n\n\n## \u00daloha 7\nPredstavme si, \u017ee chceme s\u010d\u00edta\u0165 nekone\u010dn\u00fd po\u010det \u010d\u00edsel:\n$$ \\sum_{n=1}^\\infty \\frac{1}{n^2}.$$\nAnalytick\u00fd v\u00fdsledok takejto sumy je $\\pi^2/6$. Ko\u013eko \u010dlenov potrebujeme s\u010d\u00edta\u0165, aby presnos\u0165 s analytick\u00fd v\u00fdsledkov bola ur\u010den\u00e1 na tri desatinn\u00e9 miesta?\n\n\n```python\n# Tvoje riesenie:\nfrom math import *\nprint('Presne ',round(pi**2/6,3))\n\nsum = 0\nfor i in range(1,2005):\n sum = sum + 1.0/i**2\n\nprint(i,' ',sum)\n```\n\n Presne 1.645\n 2004 1.6444351893330087\n\n\n# Kone\u010dne po\u010fme na nie\u010do zauj\u00edmav\u00e9!\n\nV predch\u00e1dzaj\u00facej \u010dasti sme sa zozn\u00e1mili so z\u00e1kladnou syntaxou Pythonu, zozn\u00e1mili sme sa s Jupyterom a nau\u010dili sme sa nie\u010do m\u00e1lo o tom akoby sme mohli s\u010d\u00edta\u0165 nejak\u00e9 rady. Nastal v\u0161ak \u010das pusti\u0165 sa do nie\u010do zauj\u00edmavej\u0161ieho.\n\nV nasleduj\u00facej \u010dasti sa pozrieme ako sa daj\u00fa pomocou numerickej matematiky a nejak\u00fdch t\u00fdch \u0161ikovn\u00fdch matematick\u00fdch vz\u0165ahov daj\u00fa vypo\u010d\u00edta\u0165 kon\u0161tanty, s ktor\u00fdmi ste sa u\u017e stretli.\n\n## H\u013eadanie hodnoty zlat\u00e9ho rezu $\\varphi$\n\nJednoduch\u00e9 cvi\u010denie na obozn\u00e1menie sa s tzv. 
selfkonzistentn\u00fdm probl\u00e9mom a for cyklom\n\n\n```python\nx = 1;\n\nfor i in range (0,20):\n x = 1+1/x\n print (x)\n```\n\n## H\u013eadanie hodnoty Eulerovho \u010d\u00edsla $e$\n\nHoci je to m\u00e1lo zn\u00e1me, Eulerove \u010dislo $e$ sa d\u00e1 n\u00e1js\u0165 ako odpove\u010f na nasleduj\u00facu \u00falohu (rozde\u013euj a pon\u00e1sob):\nNa ak\u00e9 ve\u013ek\u00e9 \u010dasti treba rozdeli\u0165 hocak\u00e9 \u010d\u00edslo tak, aby s\u00fa\u010din t\u00fdchto \u010dast\u00ed bol maxim\u00e1lny.\n\n\n```python\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nimport numpy as np\n```\n\n\n```python\nNum0 = 25\n\ndelitele = np.arange(1, 20 , 1)\n\ncasti = []\nsuciny = []\n\nfor i in range (0, 19):\n \n casti.append(Num0/delitele[i])\n \n suciny.append(casti[i]**delitele[i])\n```\n\n\n```python\nplt.plot(casti, suciny, \"ro-\")\nplt.show()\n```\n\n\n```python\nMinDel = delitele[np.argmax(suciny)-1]\nMaxDel = delitele[np.argmax(suciny)+1]\n```\n\n\n```python\ndelitele = np.arange(MinDel, MaxDel, (MaxDel - MinDel)/10)\n\ncasti = []\nsuciny = []\n\nfor i in range (0, 9):\n \n casti.append(Num0/delitele[i])\n \n suciny.append(casti[i]**delitele[i])\n```\n\n\n```python\nplt.plot(casti, suciny, \"ro-\")\nplt.show()\n```\n\n\n```python\ncasti[np.argmax(suciny)]\n```\n\n# Ako sa numericky derivuje, integruje a rie\u0161ia difky?\n\n## Derivovanie\n\nNa to, aby sme numericky zderivovali nejak\u00fa funkciu si mus\u00edme najprv uvedomi\u0165, \u010do t\u00e1 deriv\u00e1cia vlastne intuit\u00edvne rob\u00ed.\nPredstavme si obr\u00e1zok a v \u0148om nakreslen\u00fa doty\u010dnicu a mal\u00fd trojuholn\u00ed\u010dek. Bu\u010f sme sa u\u010dili alebo sme sa pr\u00e1ve teraz dozvedeli, \u017ee deriv\u00e1cia funkcie je smernica jej doty\u010dnice v danom bode. Odtia\u013e by u\u017e mohlo by\u0165 (po nejak\u00fdch t\u00fdch obr\u00e1zkoch) jasn\u00e9, \u017ee to bude nejako takto: \n\n$$ \\frac{df}{dx} = \\frac{f(x+h) - f(x-h)}{2h} $$\n\nZvol\u00edme si mal\u00e9 $h$ a derivujeme.\n\n\n```python\ndef func(x):\n return cos(x)\n\ndef deriv(f, x, h=0.01):\n return (f(x+h)-f(x-h))/(2*h)\n\nx = np.linspace(0, 2*pi, 101)\n#print(x)\ny = [func(i) for i in x]\ndydx = [deriv(func, i) for i in x]\n\nplt.plot(x, y, label=\"$f(x)$\")\nplt.plot(x, dydx, label=\"$\\\\frac{df}{dx}$\")\nplt.xlim([0, 2*pi])\nplt.xticks([0, pi, 2*pi])\nplt.legend(loc=\"best\")\nplt.show()\n```\n\n### I\u0161lo by to v\u0161ak aj s lep\u0161ou presnos\u0165ou?\n\nOdkia\u013e spadol magick\u00fd vzor\u010dek vy\u0161\u0161ie? 
T\u00ed \u010do u\u017e na podobn\u00e9 akcie chodia dlh\u0161ie sa u\u017e mo\u017eno stretli s pojmom Taylorov rozvoj, intuit\u00edvne v princ\u00edpe ka\u017ed\u00e1 slu\u0161n\u00e1 a poslu\u0161n\u00e1 funkcia, s ktorou sa stretneme sa d\u00e1 rozvi\u0165 v istom okol\u00ed bodu $x_0$ do s\u00fa\u010dtu mocninov\u00fdch funkci\u00ed nasledovne:\n\n$$ f(x) \\approx f(x_0) + \\left.\\frac{df}{dx}\\right|_{x=x_0}(x-x_0) + \\frac{1}{2!}{\\left(\\left.\\frac{df}{dx}\\right|_{x=x_0}\\right)}^2{(x-x_0)}^2 + \\frac{1}{3!}{\\left(\\left.\\frac{df}{dx}\\right|_{x=x_0}\\right)}^3{(x-x_0)}^3 + ...$$\n\nAk takto rozvinieme funkcie nielen v okol\u00ed bodu $x_0$, ale aj $x_0 + h$ a $x_0 - h$, a n\u00e1sledne rozvejo vhodne medzi sebou od\u010d\u00edtame, tak sme schopn\u00fd n\u00e1js\u0165 hodnotu deriv\u00e1cie funkcie $\\frac{df}{dx}$ v bode $x_0$, teda \n$$\n\\left.\\frac{df}{dx}\\right|_{x=x_0}\n$$\n\nSk\u00faste rozvi\u0165 funkciu aj v bodoch $x_0+2h$ a $x_0-2h$ a z\u00edska\u0165 vzor\u010dek, ktor\u00fd n\u00e1m umo\u017en\u00ed vypo\u010d\u00edta\u0165 deriv\u00e1ciu s lep\u0161ou presnos\u0165ou.\n\n## Integrovanie\n\nDve z\u00e1kladn\u00e9 met\u00f3dy ako sa d\u00e1 nie\u010do numericky integrova\u0165 s\u00fa:\n* Kvadrat\u00fara\n* Monte Carlo\n\nDva sp\u00f4soby:\n* Kvadrat\u00fara (dobr\u00e1 v 1D, ale so zvy\u0161ovan\u00edm rozmerov presnos\u0165 kles\u00e1)\n* Monte Carlo (presnos\u0165 v\u017edy $O(N^{-1/2})$ (N je po\u010det bodov, ktor\u00e9 pou\u017eijeme na v\u00fdpo\u010det inegr), pou\u017ei\u0165 pri viac ako troch rozmeroch)\n\n\nPom\u00f4\u017eu n\u00e1m kni\u017enice.\nTeraz si uk\u00e1\u017eeme kvadrat\u00faru.\n\n\n```python\n#from scipy.integrate import quad\nimport scipy.integrate as sp\n\ndef func2(x):\n return exp(-x**2)\n\nsp.quad(func2, -20, 20) # vysledok je 1/4\n```\n\n\n```python\nsqrt(pi)\n```\n\n## H\u013eadanie hodnoty Ludolfovho \u010d\u00edsla $\\pi$\nPomocou Monte Carlo met\u00f3dy integrovania sa nau\u010d\u00edme ako napr\u00edklad vypo\u010d\u00edta\u0165 $\\pi$.\n\n\n\n```python\nimport random as rnd\n\nNOP = 50000\n\nCoordXList = [];\nCoordYList = [];\n\nfor j in range (NOP):\n CoordXList.append(rnd.random())\n CoordYList.append(rnd.random())\n```\n\n\n```python\nCircPhi = np.arange(0,np.pi/2,0.01)\n```\n\n\n```python\nplt.figure(figsize=(7,7))\n\nplt.plot(\n CoordXList,\n CoordYList,\n color = \"red\",\n linestyle= \"none\",\n marker = \",\"\n)\nplt.plot(np.cos(CircPhi),np.sin(CircPhi))\n#plt.axis([0, 1, 0, 1])\n#plt.axes().set_aspect('equal', 'datalim')\nplt.show()\n```\nNumIn = 0\n\nfor j in range (NOP):\n \n #if (CoordXList[j] - 0.5)*(CoordXList[j] - 0.5) + (CoordYList[j] - 0.5)*(CoordYList[j] - 0.5) < 0.25:\n if CoordXList[j]*CoordXList[j] + CoordYList[j]*CoordYList[j] <= 1:\n \n NumIn = NumIn + 1; NumIn/NOP*4\n## Diferenci\u00e1lne rovnice\n\nPr\u00edklad nerie\u0161ite\u013enej difky:\n$$ y'(x) = \\sqrt{1+xy} $$\nPr\u00edklad Eulerovej (najjednoduch\u0161ej) met\u00f3dy.\n\n\n```python\nN = 101\nx = np.linspace(0, 1, N)\ndx = x[1] - x[0]\ny_eu = np.zeros(N)\ny_eu[0] = 1 # pociatocna podmienka\n\ndef func(x, y):\n return sqrt(1.0 + x*y)\n# return sin(x)\n\nfor i in range(1, N):\n y_eu[i] = y_eu[i-1] + func(x[i-1], y_eu[i-1])*dx\n \nplt.plot(x, y_eu)\nplt.show()\n```\n\n## Difky za pomoci kni\u017en\u00edc\n\nPou\u017eijeme funkciu `ode` z kni\u017enice `scipy`. 
Znova rie\u0161ime\n$$y'(x) = \\sqrt{1+xy} .$$\n\n\n```python\nfrom scipy.integrate import odeint\n\nN = 101\n\ndef func(y, x):\n return sqrt(1 + x*y)\n\nx = np.linspace(0, 1, N)\ny0 = 1.0\ny = odeint(func, y0, x) ### Pozri Google Scipy.integrate.odeint\n\nplt.plot(x, (y_eu-y.T[0])/y.T[0])\nplt.title(\"Rozdiel medzi odeint a Eulerom\")\nplt.show()\n```\n\n## In\u00fd pr\u00edklad na Difky: Exponenci\u00e1lny rozpad\n\nE\u0161te sa kukneme na numerick\u00e9 rie\u0161enie jednoduch\u00fdch diferenci\u00e1lnych rovn\u00edc. Met\u00f3du demon\u0161trujeme na pr\u00edklade exponenci\u00e1lneho rozpadu. Znovu pomocou najjednoduch\u0161ej - Eulerovej met\u00f3dy.\n\n$$\\frac{d n}{dt}=-\\lambda n$$\n\n$$\\frac{n(t+dt)-n(t)}{dt}=-\\lambda n(t)$$\n\n$$n(t+dt)=n(t)(1 -\\lambda dt), n(0)=N_0$$\n\n\n```python\nN0 = 500\n\n#\u03c4=\u03bbdt#\n\u03c4 = 0.5\n```\n\n\n```python\ntauka=[]\nNumOfPart=[]\n\nfor i in range (15):\n \n if i == 0:\n \n n = N0\n \n else:\n \n n = n*(1 - \u03c4)\n \n tauka.append(i*\u03c4)\n NumOfPart.append(n)\n```\n\n\n```python\nplt.plot(tauka, NumOfPart, \"ro-\")\nplt.show()\n```\n\n# Obiehanie Zeme okolo Slnka\n\nFyziku (d\u00fafam!) v\u0161etci pozn\u00e1me.\n\n* gravita\u010dn\u00e1 sila:\n$$ \\mathbf F(\\mathbf r) = -\\frac{G m M}{r^3} \\mathbf r $$\n\n### Eulerov algoritmus (zl\u00fd)\n$$\\begin{align}\na(t) &= F(t)/m \\\\\nv(t+dt) &= v(t) + a(t) dt \\\\\nx(t+dt) &= x(t) + v(t) dt \\\\\n\\end{align}$$\n\n### Verletov algoritmus (dobr\u00fd)\n$$ x(t+dt) = 2 x(t) - x(t-dt) + a(t) dt^2 $$\n\n\n```python\nfrom numpy.linalg import norm\n\nG = 6.67e-11\nMs = 2e30\nMz = 6e24\ndt = 86400.0\nN = int(365*86400.0/dt)\n#print(N)\n\nR0 = 1.5e11\nr_list = np.zeros((N, 2))\nr_list[0] = [R0, 0.0] # mozno miesat listy s ndarray\n\nv0 = 29.7e3\nv_list = np.zeros((N, 2))\nv_list[0] = [0.0, v0]\n\n# sila medzi planetami\ndef force(A, r):\n return -A / norm(r)**3 * r\n\n# Verletova integracia\ndef verlet_step(r_n, r_nm1, a, dt): # r_nm1 -- r n minus 1\n return 2*r_n - r_nm1 + a*dt**2\n\n# prvy krok je specialny\na = force(G*Ms, r_list[0])\nr_list[1] = r_list[0] + v_list[0]*dt + a*dt**2/2\n\n\n# riesenie pohybovych rovnic\nfor i in range(2, N):\n a = force(G*Ms, r_list[i-1])\n r_list[i] = verlet_step(r_list[i-1], r_list[i-2], a, dt)\n \n \nplt.plot(r_list[:, 0], r_list[:, 1])\nplt.xlim([-2e11, 2e11])\nplt.ylim([-2e11, 2e11])\nplt.xlabel(\"$x$\", fontsize=20)\nplt.ylabel(\"$y$\", fontsize=20)\nplt.gca().set_aspect('equal', adjustable='box')\n#plt.axis(\"equal\")\nplt.show()\n```\n\n## Pridajme Mesiac\n\n\n```python\nMm = 7.3e22\nR0m = R0 + 384e6\nv0m = v0 + 1e3\nrm_list = np.zeros((N, 2))\nrm_list[0] = [R0m, 0.0]\nvm_list = np.zeros((N, 2))\nvm_list[0] = [0.0, v0m]\n\n# prvy Verletov krok\nam = force(G*Ms, rm_list[0]) + force(G*Mz, rm_list[0] - r_list[0])\nrm_list[1] = rm_list[0] + vm_list[0]*dt + am*dt**2/2\n\n# riesenie pohybovych rovnic\nfor i in range(2, N):\n a = force(G*Ms, r_list[i-1]) - force(G*Mm, rm_list[i-1]-r_list[i-1])\n am = force(G*Ms, rm_list[i-1]) + force(G*Mz, rm_list[i-1]-r_list[i-1])\n r_list[i] = verlet_step(r_list[i-1], r_list[i-2], a, dt)\n rm_list[i] = verlet_step(rm_list[i-1], rm_list[i-2], am, dt)\n \nplt.plot(r_list[:, 0], r_list[:, 1])\nplt.plot(rm_list[:, 0], rm_list[:, 1])\nplt.xlabel(\"$x$\", fontsize=20)\nplt.ylabel(\"$y$\", fontsize=20)\nplt.gca().set_aspect('equal', adjustable='box')\nplt.xlim([-2e11, 2e11])\nplt.ylim([-2e11, 2e11])\nplt.show() # mesiac moc nevidno, ale vieme, ze tam je\n```\n\n## \u00daloha pre V\u00e1s: Treba prida\u0165 Mars :)\nPridajte 
Mars!\n\n## Matematick\u00e9 kyvadlo s odporom \nNasimulujte matematick\u00e9 kyvadlo s odporom $\\gamma$,\n$$ \\ddot \\theta = -\\frac g l \\sin\\theta -\\gamma \\theta^2,$$\nza pomoci met\u00f3dy `odeint`.\n\nAlebo p\u00e1d telesa v odporovom prostred\u00ed:\n$$ a = -g - kv^2.$$\n\n\n```python\nfrom scipy.integrate import odeint\n\ndef F(y, t, g, k):\n return [y[1], g -k*y[1]**2]\n\nN = 101\nk = 1.0\ng = 10.0\nt = np.linspace(0, 1, N)\ny0 = [0.0, 0.0]\ny = odeint(F, y0, t, args=(g, k))\n\nplt.plot(t, y[:, 1])\nplt.xlabel(\"$t$\", fontsize=20)\nplt.ylabel(\"$v(t)$\", fontsize=20)\nplt.show()\n```\n\n## Harmonick\u00fd oscil\u00e1tor pomocou met\u00f3dy Leapfrog (modifik\u00e1cia Verletovho algoritmu)\n\n\n```python\nN = 10000\nt = linspace(0,100,N)\ndt = t[1] - t[0]\n\n# Funkcie\ndef integrate(F,x0,v0,gamma):\n x = zeros(N)\n v = zeros(N)\n E = zeros(N) \n \n # Po\u010diato\u010dn\u00e9 podmienky\n x[0] = x0\n v[0] = v0\n \n # Integrovanie rovn\u00edc pomocou met\u00f3dy Leapfrog (wiki)\n fac1 = 1.0 - 0.5*gamma*dt\n fac2 = 1.0/(1.0 + 0.5*gamma*dt)\n \n for i in range(N-1):\n v[i + 1] = fac1*fac2*v[i] - fac2*dt*x[i] + fac2*dt*F[i]\n x[i + 1] = x[i] + dt*v[i + 1]\n E[i] += 0.5*(x[i]**2 + ((v[i] + v[i+1])/2.0)**2)\n \n E[-1] = 0.5*(x[-1]**2 + v[-1]**2)\n \n # Vr\u00e1time rie\u0161enie\n return x,v,E\n```\n\n\n```python\n# Pozrime sa na tri r\u00f4zne po\u010diato\u010dn\u00e9 podmienky\nF = zeros(N)\nx1,v1,E1 = integrate(F,0.0,1.0,0.0) # x0 = 0.0, v0 = 1.0, gamma = 0.0\nx2,v2,E2 = integrate(F,0.0,1.0,0.05) # x0 = 0.0, v0 = 1.0, gamma = 0.01\nx3,v3,E3 = integrate(F,0.0,1.0,0.4) # x0 = 0.0, v0 = 1.0, gamma = 0.5\n\n# Nakreslime si grafy\n\nplt.rcParams[\"axes.grid\"] = True\nplt.rcParams['font.size'] = 14\nplt.rcParams['axes.labelsize'] = 18\nplt.figure()\nplt.subplot(211)\nplt.plot(t,x1)\nplt.plot(t,x2)\nplt.plot(t,x3)\nplt.ylabel(\"x(t)\")\n\nplt.subplot(212)\nplt.plot(t,E1,label=r\"$\\gamma = 0.0$\")\nplt.plot(t,E2,label=r\"$\\gamma = 0.01$\")\nplt.plot(t,E3,label=r\"$\\gamma = 0.5$\")\nplt.ylim(0,0.55)\nplt.ylabel(\"E(t)\")\n\nplt.xlabel(\"\u010cas\")\nplt.legend(loc=\"center right\")\n\nplt.tight_layout()\n```\n\nA \u010do ak bude oscil\u00e1tor aj tlmenn\u00fd?\n\n\n```python\ndef force(f0,t,w,T):\n return f0*cos(w*t)*exp(-t**2/T**2) \n\nF1 = zeros(N)\nF2 = zeros(N)\nF3 = zeros(N)\nfor i in range(N-1):\n F1[i] = force(1.0,t[i] - 20.0,1.0,10.0)\n F2[i] = force(1.0,t[i] - 20.0,0.9,10.0)\n F3[i] = force(1.0,t[i] - 20.0,0.8,10.0)\n```\n\n\n```python\nx1,v1,E1 = integrate(F1,0.0,0.0,0.0)\nx2,v2,E2 = integrate(F1,0.0,0.0,0.01)\nx3,v3,E3 = integrate(F1,0.0,0.0,0.1)\n\nplt.figure()\nplt.subplot(211)\nplt.plot(t,x1)\nplt.plot(t,x2)\nplt.plot(t,x3)\nplt.ylabel(\"x(t)\")\n\nplt.subplot(212)\nplt.plot(t,E1,label=r\"$\\gamma = 0$\")\nplt.plot(t,E2,label=r\"$\\gamma = 0.01$\")\nplt.plot(t,E3,label=r\"$\\gamma = 0.1$\")\npt.ylabel(\"E(t)\")\n\nplt.xlabel(\"Time\")\nplt.rcParams['legend.fontsize'] = 14.0\nplt.legend(loc=\"upper left\")\n\nplt.show()\n```\n\n# Line\u00e1rna algebra\n\n## Program na dnes\n\n* Matematick\u00e9 oper\u00e1cie s vektormi a maticami. H\u013eadanie vlastn\u00fdch \u010d\u00edsel. 
\n* Zozn\u00e1menia sa s kni\u017enicou `numpy`.\n\n## Vektory\n\n\n```python\na = np.array([1, 2, 3])\nprint(a)\nprint(type(a))\n\nb = np.array([2, 3, 4])\nprint(a + b) # spravne scitanie!\n```\nnp.dot(a, b) # skalarny sucin\na.dot(b)\n\n```python\nnp.cross(a, b) # vektorovy sucin\n```\nnp.outer(a, b) # outer product (jak sa to povie po slovensky?), premyslite si\n## Matice\nA = np.array([[0, 1], [1, 0]])\nprint(A)\ntype(A)AA = np.matrix(A) # premena na iny datovy typ\ntype(AA)B = np.array([[1, 0], [0, -1]])\nprint(B)np.dot(A, B)# pozor!\nA*B# Vlastnosti numpy matic\nlen(A) # pocet riadkovA.shape # rozmery# uzitocne vektory/matice, konvencia z Matlabu\nN = 3\nnp.ones(N) # konstantny vektornp.ones((N, N)) # jednotky\nnp.eye(N) # identita\nnp.zeros((N, N+1)) # nulova matica NxN\n\n```python\n# spajanie matic\nA = np.ones((3, 3))\nB = np.eye(3)\nprint(A, \"\\n\")\nprint(B)\nnp.hstack((A, B))\n```\n\n### Pr\u00edstup k prvkom\n# Najprv si pripravime nase vektory\nA = np.arange(9)\nprint(A)\nA = A.reshape((3, 3))\nA# Pri poliach by sme urobili A[1][1]. Pri vektoroch mozme pouzit aj A[1,1]. \n# Pri maticiach musime pouzit A[1,1], lebo A[1][1] vrati prekvapivy vysledok A[1, 1]A[1][1]A[-1, -1] = 10 # znovu vieme indexovat aj od zadu\nAA[0, 0]A[-3, -2]\n## Slicing\n\nAko vybera\u0165 jednotliv\u00e9 st\u013apce a riadky mat\u00edc?\nA = np.arange(9)\nA = A.reshape((3, 3))\nAA[:, 0] # Ak nechceme vyberat nejaky usek, pouzijeme dvojbodku.A[1, :]A[[0, 2], :] # Mozme pouzit aj pole na indexovanie# zmena riadku\nA[:, 1] = 4\nA# pripocitanie cisla k stlpcu\nfor i in range(len(A)):\n A[i, 2] += 100\n\nAA = np.arange(25).reshape((5, 5))\nAA[2:, 3]\n## Generovanie (pseudo)n\u00e1hodn\u00fdch \u010d\u00edsel\n# cisla sa menie, pokial nefixneme seed!\nnp.random.seed(12)\n\nN = 5\na = 2*np.random.rand(N) # v intervale [0,1]\naA = np.random.rand(N, N)\nAa = np.random.randn(5)\namu, sigma = 1.0, 2.0\na = np.random.randn(1000)*sigma + mu\nplt.hist(a, bins=20)\nplt.show()a = np.random.randint(10, size=1000) # cele cisla, pozor na size!\nu = plt.hist(a)\n#plt.plot(u[1][1:], u[0])\n## Vlastn\u00e9 \u010d\u00edsla\n\n$$ A v = \\lambda v$$\nPou\u017eijeme funkciu `eig`.\n\nOkrem nej existuj\u00fa e\u0161te funkcie na vlastn\u00e9 \u010d\u00edsla symetrick\u00fdch alebo hermitovsk\u00fdch mat\u00edc `eigs` a `eigh`.\n\n\n```python\nfrom numpy.linalg import eig\n```\nN = 20\nA = np.random.rand(N, N)\nvals, vecs = eig(A)\nprint(vals.real)\n## Vlastn\u00e9 \u010d\u00edsla n\u00e1hodn\u00fdch mat\u00edc\n\nVygenerujte (100x100) maticu n\u00e1hodn\u00fdch \u010d\u00edsel, z\u00edskajte jej vlastn\u00e9 \u010d\u00edsla a spravte z nich histogram.\n\nN = 100\nA = np.random.rand(N, N)\nvals, vecs = eig(A)\nvals = np.real(vals)\n#print(vals)\nplt.hist(np.real(vals))\nplt.show()", "meta": {"hexsha": "9da35c83665e7c910b8dde4550a4c612a5e68ca1", "size": 62337, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Programko.ipynb", "max_stars_repo_name": "matoga/LetnaSkolaFKS_notebooks", "max_stars_repo_head_hexsha": "26faa2d30ee942e18246fe466d9bf42f16cc1433", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Programko.ipynb", "max_issues_repo_name": "matoga/LetnaSkolaFKS_notebooks", "max_issues_repo_head_hexsha": "26faa2d30ee942e18246fe466d9bf42f16cc1433", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, 
"max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Programko.ipynb", "max_forks_repo_name": "matoga/LetnaSkolaFKS_notebooks", "max_forks_repo_head_hexsha": "26faa2d30ee942e18246fe466d9bf42f16cc1433", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 20.4049099836, "max_line_length": 376, "alphanum_fraction": 0.4741806632, "converted": true, "num_tokens": 10834, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9284087985746093, "lm_q2_score": 0.9073122295060291, "lm_q1q2_score": 0.8423566569277426}} {"text": "# Part 1: Optimising functions\n \nIn this lab we will play with some of the optimisation methods we learned in the lecture by exploring how they work on some analytic functions (both convex and non-convex).\n\n\n```python\nimport torch\nimport torch.optim as optim\n```\n\n## A Simple Function\n\nFor this first task, we are going to try to optimise the following using Stochastic Gradient Descent:\n\n\\begin{equation}\nmin_{\\textbf{x}} (\\textbf{x}[0] - 5)^2 + \\textbf{x}[1]^2 + (\\textbf{x}[2] - 1)^2\\; ,\n\\end{equation}\n\nUse the following block the write down the analytic minima of the above function:\n\n### Implement the function\n\nFirst, complete the following code block to implement the above function using PyTorch:\n\n\n```python\ndef function(x):\n return ((x[0] - 5)**2 + x[1]**2 + (x[2] - 1)**2)\n\n raise NotImplementedError()\n```\n\n### Optimising\n\nWe need two more things before we can start optimising.\nWe need our initial guess - which we've set to [2.0, 1.0, 10.0] and we need to how many epochs to take.\n\n\n```python\np = torch.tensor([2.0, 1.0, 10.0], requires_grad=True)\nepochs = 5000\n```\n\nWe define the optimisation loop in the standard way:\n\n\n```python\nopt = optim.SGD([p], lr=0.001)\n\nfor i in range(epochs):\n opt.zero_grad()\n output = function(p)\n output.backward()\n opt.step()\n```\n\nUse the following block to print out the final value of `p`. Does it match the value you expected?\n\n\n```python\nprint(p)\n\n#raise NotImplementedError()\n```\n\n tensor([4.9999e+00, 4.4948e-05, 1.0004e+00], requires_grad=True)\n\n\n## Visualising Himmelblau's Function\n\nWe'll now have a go at a more complex example, which we also visualise, with multiple optima; [Himmelblau's function](https://en.wikipedia.org/wiki/Himmelblau%27s_function). 
This is defined as:\n\n\\begin{equation}\nf(x, y) = (x^2 + y - 11)^2 + (x + y^2 - 7)^2\\; ,\n\\end{equation}\nand has minima at\n\\begin{equation}\nf(3, 2) = f(-2.805118, 3.131312) = f(-3.779310, -3.283186) = f(3.584428, -1.848126) = 0\\; .\n\\end{equation}\n\nUse the following block to first define the function (the inputs $x, y$ are packed into a vector as for the previous quadratic function above):\n\n\n```python\ndef himm(x):\n x, y = x[0], x[1] \n return (x**2 + y - 11)**2 + (x + y**2 - 7)**2\n \n raise NotImplementedError()\n```\n\nThe following will plot its surface:\n\n\n```python\nimport numpy as np\n%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\nfrom matplotlib.colors import LogNorm\n\nxmin, xmax, xstep = -5, 5, .2\nymin, ymax, ystep = -5, 5, .2\nx, y = np.meshgrid(np.arange(xmin, xmax + xstep, xstep), np.arange(ymin, ymax + ystep, ystep))\nz = himm(torch.tensor([x, y])).numpy()\n\nfig = plt.figure(figsize=(8, 5))\nax = plt.axes(projection='3d', elev=50, azim=-50)\nax.plot_surface(x, y, z, norm=LogNorm(), rstride=1, cstride=1, \n edgecolor='none', alpha=.8, cmap=plt.cm.jet)\nax.set_xlabel('$x$')\nax.set_ylabel('$y$')\nax.set_zlabel('$z$')\n\nax.set_xlim((xmin, xmax))\nax.set_ylim((ymin, ymax))\n\nplt.show()\n```\n\nCheck that the above plot looks correct by comparing to the picture on the [Wikipedia page](https://en.wikipedia.org/wiki/Himmelblau%27s_function).\n\n### Optimising\n\nLet's see how it looks for a few different optimisers from a range of starting points\n\n\n```python\nxmin, xmax, xstep = -5, 5, .2\nymin, ymax, ystep = -5, 5, .2\nx, y = np.meshgrid(np.arange(xmin, xmax + xstep, xstep), np.arange(ymin, ymax + ystep, ystep))\nz = himm(torch.tensor([x, y])).numpy()\n\nfig, ax = plt.subplots(figsize=(8, 8))\nax.contourf(x, y, z, levels=np.logspace(0, 5, 35), norm=LogNorm(), cmap=plt.cm.gray)\n\np = torch.tensor([[0.0],[0.0]], requires_grad=True)\nopt = optim.SGD([p], lr=0.01)\n\npath = np.empty((2,0))\npath = np.append(path, p.data.numpy(), axis=1)\n\nfor i in range(50):\n opt.zero_grad()\n output = himm(p)\n output.backward()\n opt.step()\n path = np.append(path, p.data.numpy(), axis=1)\n\nax.plot(path[0], path[1], color='red', label='SGD', linewidth=2)\n\nax.legend()\nax.set_xlabel('$x$')\nax.set_ylabel('$y$')\n\nax.set_xlim((xmin, xmax))\nax.set_ylim((ymin, ymax))\n```\n\nUse the following block to run SGD with momentum (lr=0.01, momentum=0.9) from the same initial point, saving the position at each timestep into a variable called `path_mom`.\n\n\n```python\np = torch.tensor([[0.0],[0.0]], requires_grad=True)\nopt = optim.SGD([p], lr=0.01, momentum=0.9)\n\nfor i in range(50):\n opt.zero_grad()\n output = himm(p)\n output.backward()\n opt.step()\n path_mom = np.append(path, p.data.numpy(), axis=1)\n\n#raise NotImplementedError()\n```\n\nThe following will plot the path taken when momentum was used, as well as the original plain SGD path:\n\n\n```python\nax.plot(path_mom[0], path_mom[1], color='yellow', label='SGDM', linewidth=2)\nax.legend()\nfig\n```\n\nNow explore what happens when you start from different points. What effect do you get with different optimisers? 
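
For example, here is one possible sketch (by no means the only answer): it runs the `Adam` optimiser from a different starting point and draws its path on the same contour plot. The starting point, the learning rate and the names `p_adam`/`path_adam` are arbitrary illustrative choices; note that the path is accumulated into its own array inside the loop so that the full trajectory is kept.

```python
p_adam = torch.tensor([[-1.0], [-4.0]], requires_grad=True)
opt_adam = optim.Adam([p_adam], lr=0.1)

path_adam = p_adam.data.numpy().copy()
for i in range(100):
    opt_adam.zero_grad()
    output = himm(p_adam)
    output.backward()
    opt_adam.step()
    # append the current position so the whole trajectory is recorded
    path_adam = np.append(path_adam, p_adam.data.numpy(), axis=1)

ax.plot(path_adam[0], path_adam[1], color='cyan', label='Adam', linewidth=2)
ax.legend()
fig
```

Try swapping in other optimisers (for example `optim.RMSprop`) and other starting points, and compare how quickly, and towards which of the four minima, each path converges.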
\n\n\n```python\np = torch.tensor([[2.0],[4.0]], requires_grad=True)\nopt = optim.SGD([p], lr=0.01, momentum=0.9)\n\n\n\nfor i in range(50):\n opt.zero_grad()\n output = himm(p)\n output.backward()\n opt.step()\n path_mom = np.append(path, p.data.numpy(), axis=1)\n\n\nax.plot(path_mom[0], path_mom[1], color='yellow', label='SGDM', linewidth=2)\nax.legend()\nfig\n```\n", "meta": {"hexsha": "5320bb84e5b0daddc71a96a759f6de887ad06c9e", "size": 240667, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Pytorch Practical Tasks/3_1_FuntionOptimisation.ipynb", "max_stars_repo_name": "VladimirsHisamutdinovs/deep-learning-pytorch", "max_stars_repo_head_hexsha": "252a2d9e496c444fad1cb77a9e200afca9590aef", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Pytorch Practical Tasks/3_1_FuntionOptimisation.ipynb", "max_issues_repo_name": "VladimirsHisamutdinovs/deep-learning-pytorch", "max_issues_repo_head_hexsha": "252a2d9e496c444fad1cb77a9e200afca9590aef", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Pytorch Practical Tasks/3_1_FuntionOptimisation.ipynb", "max_forks_repo_name": "VladimirsHisamutdinovs/deep-learning-pytorch", "max_forks_repo_head_hexsha": "252a2d9e496c444fad1cb77a9e200afca9590aef", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 312.9609882965, "max_line_length": 130938, "alphanum_fraction": 0.920811744, "converted": true, "num_tokens": 1606, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9353465062370313, "lm_q2_score": 0.900529781446146, "lm_q1q2_score": 0.8423073848380501}} {"text": "\n\n$\\large{\\text{Naive Bayes Classifier}}$\n\nConsider the data set as given below.\n\n\n\n\n```python\n#First, we import the required packages\nimport pandas as pd #the pandas library is useful for data processing \nimport matplotlib.pyplot as plt #the matplotlib library is useful for plotting purposes\n\n#Get the data from the csv file \nsample_data = pd.read_csv('dataset.csv', index_col=False)\n\n#print the data \nsample_data\n\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
Temp gt 100Travel to foreign countryCoughAntibodies in bloodDisease presence
0YesYesNoYesYes
1NoYesYesYesYes
2YesNoYesNoYes
3YesNoYesYesYes
4NoNoNoYesNo
5NoYesYesNoNo
6NoYesNoYesNo
7YesYesNoNoNo
\n
\n\n\n\nNow suppose we have the observation $x$ given by \n\n$(\\text{Temp gt 100=Yes, Travel to foreign country=Yes, Cough = Yes, Antibodies in blood= Yes})$, \n\nthen how do we classify $x$ into the Disease presence category? \n\nConsider $X$ to be a random variable of which $x$ is an observation. \n\nSuppose $Y$ denotes the random variable associated with the Disease presence category, then the problem of finding the Disease presence category can be cast as finding the conditional probability of class label $P(Y=\\text{Yes}|X=x)$ and the conditional probability $P(Y=\\text{No}|X=x)$. \n\nTo find these conditional probabilities, we shall use the famous $\\textbf{Bayes' theorem}$ idea.\n\n$\\large{\\text{Bayes' Theorem}}$ \n\nThe conditional probability $P(Y|X)$ can be written as:\n\n$\\begin{align}\nP(Y|X) = \\frac{P(Y,X)}{P(X)}.\n\\end{align}\n$\n\nSince we know that $P(X|Y) = \\frac{P(X,Y)}{P(Y)}$, and since the $\\textbf{joint probability}$ $P(X,Y)$ is the same as $\\text{joint probability}$ $P(Y,X)$, we can write: \n\n$P(Y,X) = P(X|Y) P(Y)$. \n\nSubstituting $P(Y,X)$ in the previous conditional probability $P(Y|X)$, we have:\n\n$\\begin{align}\nP(Y|X) = \\frac{P(Y,X)}{P(X)} = \\frac{P(X|Y)P(Y)}{P(X)}. \n\\end{align}\n$\n\nThus the $\\textbf{posterior probability}$ $P(Y|X)$ of observing $Y$ after receiving $X$ can be seen to be proportional to the product of the $\\textbf{likelihood}$ term $P(X|Y)$ and the $\\textbf{prior probability}$ $P(Y)$ of $Y$. \n\n$P(X)$ is a $\\textbf{normalization}$ factor and is usually a constant for any value of the observation of $Y$. \n\nHence we may write $P(Y|X) \\propto P(X|Y)P(Y)$. \n\n\nHence using Bayes' Theorem we can write: \n\n$P(Y=\\text{Yes}|X=x) \\propto P(X=x|Y=\\text{Yes}) P(Y=\\text{Yes})$. \n\nand\n\n$P(Y=\\text{No}|X=x) \\propto P(X=x|Y=\\text{No}) P(Y=\\text{No})$.\n\n$\\textbf{Question:}$ How do we compute the prior probabilities $P(Y=\\text{Yes})$, $P(Y=\\text{No})$ and the likelihood terms $P(X=x|Y=\\text{Yes})$, $P(X=x|Y=\\text{No})$?\n\n\n$\\textbf{Idea:}$\n\nUse the training data to get the required probabilities.\n\n$\\textbf{Computing the prior probabilities}$ \n\nLet us relook at the data at hand. Then we can use a frequency based computation for prior probabilities $P(Y=\\text{Yes})$ and $P(Y=\\text{No})$. \n\n\n```python\n#print the labels\nsample_data['Disease presence']\n\n```\n\n\n\n\n 0 Yes\n 1 Yes\n 2 Yes\n 3 Yes\n 4 No\n 5 No\n 6 No\n 7 No\n Name: Disease presence, dtype: object\n\n\n\nWe note that out of the $8$ samples, $Y=\\text{Yes}$ appears $4$ times and $Y=\\text{No}$ appears $4$ times. \n\nHence we can write: \n\n$P(Y=\\text{Yes}) = \\frac{4}{8} = 0.5$ and $P(Y=\\text{No}) = \\frac{4}{8} = 0.5$. \n\n\n$\\textbf{How to compute P(X=x|Y=y)}$?\n\nNote that in our case $X$ is multi-dimensional attribute of the form $X=(X_1,X_2,X_3,X_4)$ where:\n\n\n* $X_1$ is the random variable associated with $\\texttt{Temp gt 100}$\n* $X_2$ is the random variable associated with $\\texttt{Travel to foreign country}$\n* $X_3$ is the random variable associated with $\\texttt{Cough}$\n* $X_4$ is the random variable associated with $\\texttt{Antibodies in blood}$\n\nSimilarly, an observation $x$ of $X$ is also multi-variate and is of the form $x=(x_1,x_2,x_3,x_4)$. \n\nThus $P(X=x|Y=y)$ can be equivalently written as: $P( (X_1,X_2,X_3,X_4) = (x_1,x_2,x_3,x_4)|Y=y)$. \n\nHence given an observation $x=(x_1,x_2,x_3,x_4)$ of $X=(X_1,X_2,X_3,X_4)$ how do we find the conditional probability $P((X_1,X_2,X_3,X_4)=(x_1,x_2,x_3,x_4)|Y=y)$? 
\n\n\n$\\textbf{Conditional Independence Assumption}$ \n\nIn Naive Bayes classifier, we make an important assumption about the data. \n\nThat is the covariates (or) attributes are conditionally independent given the class label. \n\nThis assumption is called the conditional independence assumption. \n\nUsing the conditional independence assumption, we can write: \n\n$\n\\begin{align}\nP( (X_1,X_2,X_3,X_4) = (x_1,x_2,x_3,x_4) | Y=y) = \\prod_{i=1}^{4} P(X_i=x_i|Y=y).\n\\end{align}\n$\n\nNow finding $P( (X_1,X_2,X_3,X_4) = (x_1,x_2,x_3,x_4) | Y=y)$ boils down to finding $P(X_i=x_i|Y=y)$ for $i=1,2,3,4$. \n\n$\\textbf{Question:}$ How do we find $P(X_i=x_i|Y=\\text{Yes})$ and $P(X_i=x_i|Y=\\text{No})$?\n\n\n```python\n\n```\n\n$\\textbf{Idea:}$ \n\nUse the training data. \n\n$\\textbf{Note:}$ Luckily, each $X_i$ in the data is discrete-valued and hence finding the probabilities would be easy. \n\n$\\textbf{Recall:}$\n\nThe observation $x$ is given by \n\n$(\\text{Temp gt 100=Yes, Travel to foreign country=Yes, Cough = Yes, Antibodies in blood= Yes})$. \n\nLet us now compute $P(X_1 = \\text{Yes}|Y=\\text{Yes})$. \n\nWe shall look into the data. \n\n\n```python\n#print the column 'Temp gt 100' along with label 'Yes'\n(sample_data.loc[sample_data['Disease presence'] == 'Yes'])[['Temp gt 100', 'Disease presence']]\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
Temp gt 100Disease presence
0YesYes
1NoYes
2YesYes
3YesYes
\n
\n\n\n\nThus from the above data we can compute: \n\n$P(X_1=\\text{Yes}|Y=\\text{Yes}) = \\frac{3}{4} = 0.75$. \n\n\n```python\n\n```\n\nLet us now compute $P(X_2 = \\text{Yes}|Y=\\text{Yes})$. \n\nWe shall look into the data. \n\n\n```python\n#print the column 'Travel to foreign country' along with label 'Yes'\n(sample_data.loc[sample_data['Disease presence'] == 'Yes'])[['Travel to foreign country', 'Disease presence']]\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
Travel to foreign countryDisease presence
0YesYes
1YesYes
2NoYes
3NoYes
\n
\n\n\n\nThus from the above data we can compute: \n\n$P(X_2=\\text{Yes}|Y=\\text{Yes}) = \\frac{2}{4} = 0.5$. \n\n\n```python\n\n```\n\nLet us now compute $P(X_3 = \\text{Yes}|Y=\\text{Yes})$. \n\nWe shall look into the data. \n\n\n```python\n#print the column 'Cough' along with label 'Yes'\n(sample_data.loc[sample_data['Disease presence'] == 'Yes'])[['Cough', 'Disease presence']]\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
CoughDisease presence
0NoYes
1YesYes
2YesYes
3YesYes
\n
\n\n\n\nThus from the above data we can compute: \n\n$P(X_3=\\text{Yes}|Y=\\text{Yes}) = \\frac{3}{4} = 0.75$. \n\n\n```python\n\n```\n\nLet us now compute $P(X_4 = \\text{Yes}|Y=\\text{Yes})$. \n\nWe shall look into the data. \n\n\n```python\n#print the column 'Antibodies in blood' along with label 'Yes'\n(sample_data.loc[sample_data['Disease presence'] == 'Yes'])[['Antibodies in blood', 'Disease presence']]\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
Antibodies in bloodDisease presence
0YesYes
1YesYes
2NoYes
3YesYes
\n
\n\n\n\nThus from the above data we can compute: \n\n$P(X_4=\text{Yes}|Y=\text{Yes}) = \frac{3}{4} = 0.75$. \n\n\nNow that we have computed all relevant probabilities, we can compute \n$P((X_1,X_2,X_3,X_4)=(\text{Yes,Yes,Yes,Yes})|Y=\text{Yes}) = \prod_{i=1}^{4} P(X_i = \text{Yes}|Y=\text{Yes})$. \n\nThus $P((X_1,X_2,X_3,X_4)=(\text{Yes,Yes,Yes,Yes})|Y=\text{Yes}) = 0.75 \times 0.5 \times 0.75 \times 0.75 = 0.2109375$. \n\n\n```python\n\n```\n\nSimilarly, we can compute $P((X_1,X_2,X_3,X_4)=(\text{Yes,Yes,Yes,Yes})|Y=\text{No}) = \prod_{i=1}^{4} P(X_i = \text{Yes}|Y=\text{No})$. \n\n$\textbf{Exercise:}$ Find $P((X_1,X_2,X_3,X_4)=(\text{Yes,Yes,Yes,Yes})|Y=\text{No})$. \n\nNow we can compute $P(Y=\text{Yes}|X=x)$ as a quantity proportional to $P(X=x|Y=\text{Yes}) P(Y=\text{Yes}) = 0.2109375 \times 0.5$. \n\nSimilarly, we can compute $P(Y=\text{No}|X=x)$ as a quantity proportional to $P(X=x|Y=\text{No}) P(Y=\text{No})$. \n\n$\textbf{Exercise:}$ Compute $P(Y=\text{No}|X=x)$. \n\nHaving computed the posterior probabilities for the $Y=\text{Yes}$ label and the $Y=\text{No}$ label, we can compare them and assign the label which has the maximum posterior probability. \n\n$\textbf{Exercise:}$ What is the label predicted for observation $x$? \n\n\n\n```python\n\n```\n\n$\textbf{Question:}$ How do we deal with continuous attributes? \n\n\n```python\n\n```\n\n$\textbf{Idea:}$ Assume that the column containing continuous data has a Gaussian distribution. \n\nHence $P(X_i=x_i|Y=y_j)$ can be characterized as $f(x_i; \mu_{ij},\sigma^2_{ij}) = \frac{1}{\sqrt{2\pi}\sigma_{ij}} e^{-\frac{(x_i-\mu_{ij})^2}{2\sigma_{ij}^2}}$. \n\nThe mean $\mu_{ij}$ and standard deviation $\sigma_{ij}$ are estimated from the values of $X_i$ in the training samples with $Y=y_j$.\n\nIdeally we would be computing $P(x_i \leq X_i \leq x_i + \epsilon|Y=y_j)$. \n\nThis would yield a quantity $P(x_i \leq X_i \leq x_i + \epsilon|Y=y_j) \approx \epsilon f(x_i; \mu_{ij},\sigma^2_{ij})$. \n\nSince $\epsilon$ is the same constant for every class, its effect cancels in the comparative analysis of the posterior probabilities and can be ignored. \n\n\n```python\n\n```\n\n$\large{\text{Some notable features of Naive Bayes Classifier}}$\n\n\n\n1. $\textbf{Smoothing of noisy attributes:}$ Can the Naive Bayes classifier deal with noise in an attribute? \n2. $\textbf{Dealing with irrelevant attributes:}$ For an attribute $X$ which might not be relevant to the prediction of the class label $Y$, what will $P(Y|X)$ look like? What impact does it have?\n3. $\textbf{Dealing with correlated attributes:}$ Can the Naive Bayes classifier handle correlated attributes? \n\n\n\n\n\n$\large{\text{Exercise}}$\n\nSuppose the attribute column $\texttt{Temp gt 100}$ is replaced with a $\texttt{Temperature}$ attribute column with the values $(100.3, 98.6, 100.5, 99.5, 101.01, 98.3, 99.5, 100.2)$. Find the posterior probabilities \n\n\n\n* $P(Y=\text{Yes}|X=(Yes, No, No, Yes))$ and $P(Y=\text{No}|X=(Yes, No, No, Yes))$. \n* $P(Y=\text{Yes}|X=(No, Yes, No, No))$ and $P(Y=\text{No}|X=(No, Yes, No, No))$. 
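
The counting scheme above can also be automated, which is handy for checking the hand calculations and the exercises. The following is a small illustrative sketch (the helper name `naive_bayes_score` and the observation dictionary `x_obs` are just made-up names for this example): it estimates the prior from the class frequencies and each conditional probability $P(X_i=x_i|Y=y)$ as the matching fraction of rows with label $y$, exactly as done by hand above.

```python
def naive_bayes_score(data, observation, label_col='Disease presence'):
    "Return the unnormalized posterior score P(X=x|Y=y)P(Y=y) for each class y."
    scores = {}
    for y in data[label_col].unique():
        subset = data[data[label_col] == y]
        # prior P(Y=y) estimated from the class frequencies
        score = len(subset) / len(data)
        # likelihood: product of the per-attribute conditional probabilities
        for col, val in observation.items():
            score *= (subset[col] == val).mean()
        scores[y] = score
    return scores

x_obs = {'Temp gt 100': 'Yes', 'Travel to foreign country': 'Yes',
         'Cough': 'Yes', 'Antibodies in blood': 'Yes'}
naive_bayes_score(sample_data, x_obs)
```

The class with the larger score is the predicted label; dividing each score by the sum of the scores gives the actual posterior probabilities.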
\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "c1f426b41a2b8a0d2838f903a0359121aab0607b", "size": 33044, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Naive Bayes/NaiveBayesClassifier_AIML_CEP_31Oct2021_7Nov2021.ipynb", "max_stars_repo_name": "arvind-maurya/AIML_CEP_2021", "max_stars_repo_head_hexsha": "eb09ccb812b1b3fb5761d45423a08289923c8da9", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Naive Bayes/NaiveBayesClassifier_AIML_CEP_31Oct2021_7Nov2021.ipynb", "max_issues_repo_name": "arvind-maurya/AIML_CEP_2021", "max_issues_repo_head_hexsha": "eb09ccb812b1b3fb5761d45423a08289923c8da9", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Naive Bayes/NaiveBayesClassifier_AIML_CEP_31Oct2021_7Nov2021.ipynb", "max_forks_repo_name": "arvind-maurya/AIML_CEP_2021", "max_forks_repo_head_hexsha": "eb09ccb812b1b3fb5761d45423a08289923c8da9", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 7, "max_forks_repo_forks_event_min_datetime": "2021-10-17T10:16:01.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-29T13:43:25.000Z", "avg_line_length": 32.5877712032, "max_line_length": 302, "alphanum_fraction": 0.3808860913, "converted": true, "num_tokens": 4779, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9324533107374444, "lm_q2_score": 0.9032942021480236, "lm_q1q2_score": 0.8422796693628629}} {"text": "# Introduction\n\nWelcome to the Cyborg Math course!\n\nIn this course we will learn how to use sympy - a python package for symbolic math. The first step is to import the package\n\n\n```python\nimport sympy\n```\n\nSympy's default print mode is plain text, which is very robust but not very aesthetic\n\n\n```python\nx = sympy.Symbol('x')\n(x**2+1)/(x**2-1)\n```\n\n\n\n\n$\\displaystyle \\frac{x^{2} + 1}{x^{2} - 1}$\n\n\n\nWe can use latex rendering to make the output nicer\n\n\n```python\nsympy.init_printing()\n```\n\n\n```python\n(x**2+1)/(x**2-1)\n```\n\n# Section 1 - Objects\n\nsympy works by manipulating symbolic mathematical expressions. This is different from other packaes you might be familiar with, which manipulate collections of raw data (like numbers or strings). Here's an example of the difference between a regular calculation and a symbolic calculation. Suppose we want to calculate the square root of 8, normally, we would use\n\n\n```python\nimport math\nmath.sqrt(8)\n```\n\n\n```python\nsympy.sqrt(8)\n```\n\nIt seems like the sympy function didn't do anything, but what it actually did was to produce a new sympy object\n\n\n```python\ntype(sympy.sqrt(8))\n```\n\n\n\n\n sympy.core.mul.Mul\n\n\n\nAlso, unlike the regular math function, which truncates the output after about 15 significant digits, sympy expressions are, in principle, accurate to infinite precision. sympy knows how to incorporate integers into symbolic expressions seamlessly\n\n\n```python\nsympy.sqrt(8)+1\n```\n\nBut fractions can be problematic\n\n\n```python\nsympy.sqrt(8)+1/7\n```\n\nTo create a fraction, we need to use the Rational object\n\n\n```python\nsympy.sqrt(8) + sympy.Rational(1,7)\n```\n\n__Exercise a__\n\nCreate a sympy object that represents the fraction 1/3. 
Use the function below to verify your answer\n\n\n```python\nimport verify_1\nanswer = 0\nverify_1.check_1a(answer)\n```\n\n\n\n\n False\n\n\n\nWe can turn symbolic numbers into floating point numbers using .n()\n\n\n```python\nsympy.sqrt(8).n()\n```\n\nThough this expression is still a sympy object\n\n\n```python\ntype(sympy.sqrt(8).n())\n```\n\n\n\n\n sympy.core.numbers.Float\n\n\n\nTo turn it into a regular number, we need to use float()\n\n\n```python\n[float(sympy.sqrt(8).n()),type(float(sympy.sqrt(8).n()))]\n```\n\n\n\n\n [2.8284271247461903, float]\n\n\n\nsympy will by default display the last equation\n\n\n```python\nsympy.sqrt(8)+1\nsympy.sqrt(8)-1\n```\n\nIf you want to display multiple equation, or lines before the end, use display\n\n\n```python\ndisplay(sympy.sqrt(8)+1)\nsympy.sqrt(8)-1\n```\n\n# Section 2 - Symbols\n\nVariables and unknown quantities can be represented using symbols\n\n\n```python\nx = sympy.Symbol('x')\nx\n```\n\n__Exercise a__\n\nConstruct a variable called \"y\"\n\n\n```python\nanswer = 0\nverify_1.check_2a(answer)\n```\n\n\n\n\n False\n\n\n\nSymbols can also include greek letters\n\n\n```python\n[sympy.Symbol('zeta'), sympy.Symbol('Sigma')]\n```\n\n__Exercise b__\n\nConstruct the greek variable chi\n\n\n```python\nanswer = 0\nverify_1.check_2b(answer)\n```\n\n\n\n\n False\n\n\n\nSymbols can also contain subscript\n\n\n```python\nsympy.Symbol('a3')\n```\n\n__Exercise c__ \n\nConstruct the variable alpha_3\n\n\n```python\nanswer = 0\nverify_1.check_2c(answer)\n```\n\n\n\n\n False\n\n\n\nFinally, variables can be typeset using raw latex\n\n\n```python\nsympy.Symbol(r'\\tilde{\\aleph}')\n```\n\n__Exercise d__\n\nConstruct the variable $\\mathcal{L}$\n\n\n```python\nanswer = 0\nverify_1.check_2d(answer)\n```\n\n\n\n\n False\n\n\n\nBy default, sympy assumes everything is complex, so this is why it refrains from doing some obvious simplification. However, it is possible to give variables qualifiers that will enable these simplifications\n\n\n```python\ny = sympy.Symbol('y')\nz = sympy.Symbol('z', positive=True)\n[sympy.sqrt(y**2), sympy.sqrt(z**2)]\n```\n\nHere's another example for why assumptions are important for simplification\n\n\n```python\n%%html\n

I will not be taking questions at this time pic.twitter.com/H7KvemYuhF

— Anna Hughes (@AnnaGHughes) January 29, 2020
\n```\n\n\n

I will not be taking questions at this time pic.twitter.com/H7KvemYuhF

— Anna Hughes (@AnnaGHughes) January 29, 2020
\n\n\n\n# Section 3 - Arithmetic\n\nWe can combine numbers and variables using the four basic arithmetica operations\n\n\n```python\n(2*x+1)/(2*x-1)\n```\n\n__Exercise a__\n\nConstruct the [Mobius transformation](https://en.wikipedia.org/wiki/M%C3%B6bius_transformation)\n\n$\\frac{a z+b}{c z+d}$\n\n\n```python\nanswer = 0\nverify_1.check_3a(answer)\n```\n\n\n\n\n False\n\n\n\nRaising to a power is done using the operator **\n\n\n```python\nx**2\n```\n\n__Exercise b__\n\nCreate the general expression for a Mersenne number $2^n-1$\n\n\n```python\nanswer = 0\nverify_1.check_3b(answer)\n```\n\n\n\n\n False\n\n\n\nFractions can be separated to numerator and denominator using the fraction function\n\n\n```python\nsympy.fraction((2*x+1)/(2*x-1))\n```\n\n\n```python\n(x**2-1)/(x-1)\n```\n\n__Exercise c__\n\nOne can construct the [continued fraction](https://en.wikipedia.org/wiki/Continued_fraction) $1+\\frac{1}{1+\\frac{1}{1+...}}$ by using the recursion relation $a_1 = 1$, $a_{n+1} = 1+\\frac{1}{a_n}$. Find the numerator of $a_{20}$\n\n\n```python\nanswer = 0\nverify_1.check_3c(answer)\n```\n\n\n\n\n False\n\n\n\n# Section 4 - Simplification and Expansion\n\nSometimes sympy behaves in a strange way. Consider the following example\n\n\n```python\n(1+sympy.sqrt(5))*(1-sympy.sqrt(5))\n```\n\nThis expression is left as is, even though we can clearly see it can be simplified. We can carry out the product by calling the expand function\n\n\n```python\nsympy.expand((1+sympy.sqrt(5))*(1-sympy.sqrt(5)))\n```\n\nConversely, sometimes we want to do the opposite (i.e. put an expanded back in fractions)\n\n\n```python\nx**2+2*x+1\n```\n\nThis can be done using the factor function\n\n\n```python\nsympy.factor(x**2+2*x+1)\n```\n\nIn general, the simplest representation of an expression can be obtained using the simplify function\n\n\n```python\nsympy.simplify((x**3 + x**2 - x - 1)/(x**2 + 2*x + 1))\n```\n\n__Exercise a__\n\nSimplify the fraction $\\frac{\\sqrt{3}+\\sqrt{2}}{\\sqrt{3}-\\sqrt{2}}$ by multiplying and expanding both numerator and denominator by $\\sqrt{3}+\\sqrt{2}$\n\n\n```python\nanswer = 0\nverify_1.check_4a(answer)\n```\n\n\n\n\n False\n\n\n\n# Section 5 - Complex Numbers\n\nsympy can represent complex numbers using the imaginary number sympy.I\n\n\n```python\n[sympy.I, sympy.I**2]\n```\n\nComplex and real parts\n\n\n```python\ndummy = 3+4*sympy.I\n[sympy.re(dummy), sympy.im(dummy)]\n```\n\nComplex conjugate\n\n\n```python\ndummy.conjugate()\n```\n\nAbsolute magnitude\n\n\n```python\nsympy.Abs(dummy)\n```\n\nPhase\n\n\n```python\nsympy.arg(dummy)\n```\n\n__Exercise a__\n\nFind the imaginary part of $(3+4i)^2$\n\n\n```python\nanswer = 0\nverify_1.check_5a(answer)\n```\n\n\n\n\n False\n\n\n\n# Section 6 - Substitution\n\nSubstitution replaces one expression with another expression\n\n\n```python\ntemp = (2*x+1)/(2*x-1)\n[temp, temp.subs(x, x/2)]\n```\n\nsubs will match an exact expression, and not mathematically equivalent expressions\n\n\n```python\ny = sympy.Symbol('y')\ntemp = sympy.sqrt(x**2)+x\n[temp,temp.subs(x**2,y)]\n```\n\nOne common pitfall is that identically looking variables might be different, and so will respond differently to subs\n\n\n```python\nnot_x = sympy.Symbol('x', complex=True)\ntemp = x+not_x\n[temp, temp.subs(x,1)]\n```\n\n__Exercise a__\n\nThe Babylonians discovered a [repeated substitution](https://en.wikipedia.org/wiki/Methods_of_computing_square_roots#Babylonian_method) to calculate the square root of a number $S$\n\n$x_{n+1} = \\frac{1}{2} 
\\left(\\frac{S}{x_n}+x_n\\right)$\n\nSuppose we want to use this method to calculate the square root of 3. Start from $x_1 = 1$ and find $x_{10}$\n\n\n```python\nanswer = 0\nverify_1.check_6a(answer)\n```\n\n\n\n\n False\n\n\n\n# Section 7 - Functions\n\nsympy supports all elementary functions, as well as a number of non elementary functions\n\n\n```python\n[sympy.log(x), sympy.sin(x), sympy.exp(x), sympy.gamma(x)]\n```\n\nSome functions have their own [simplifying functions](https://docs.sympy.org/1.5.1/tutorial/simplification.html)\n\n\n```python\n[sympy.cos(2*x),sympy.expand_trig(sympy.cos(2*x))]\n```\n\nYou can also define implicit function\n\n\n```python\nF = sympy.Function('F', positive=True)\nF(x)\n```\n\nAnd later substitute a different function\n\n\n```python\ntemp = F(2*x) - F(x)\ntemp = temp.subs(F, sympy.log)\ntemp\n```\n\n# Section 8 - Equations\n\nsympy has an equation object\n\n\n```python\nsympy.Eq(2*x+3,x**2)\n```\n\nWe can solve this equation and find its roots\n\n\n```python\nsympy.solve(sympy.Eq(2*x+3,x**2),x, dict=True)\n```\n\nThe equation object is often unnecessary, since if we input $f(x)$, simpy will solve for $f(x) = 0$\n\n\n```python\nsympy.solve(2*x+3-x**2,x)\n```\n\nSolve can't do magic, and if you give it something too complicated it will fail. \n\n__Exercise a__\n\nFind the positive root of the polynomial $x^2-x-1$\n\n\n```python\nanswer = 0 \nverify_1.check_8a(answer)\n```\n\n\n\n\n False\n\n\n\nYou can retrieve the left hand side or right hand side of equations\n\n\n```python\nE = sympy.Symbol('E')\nM = sympy.Symbol('M')\nc = sympy.Symbol('c')\ntemp = sympy.Eq(E,M*c**2)\n[temp, temp.lhs, temp.rhs]\n```\n\n# Section 9 - Conversion\n\nSometimes we would like to turn a symbolic expression into a normal python function. This can be achieved using lambdify\n\n\n```python\nfunc = (x+1)/(x-1)\nf = sympy.lambdify(x, func)\nf(8)\n```\n\nExpressions can also be turned directly into latex\n\n\n```python\nprint(sympy.latex(func))\n```\n\n \\frac{x + 1}{x - 1}\n\n\n# Section 10 - A Worked Example\n\nIn this example we will derive the strong shock conditions in an ideal gas. We begin with the [Rankine Hugoniot conditions](https://en.wikipedia.org/wiki/Rankine%E2%80%93Hugoniot_conditions)\n\n\n```python\nrho_1 = sympy.Symbol('rho1', positive=True) # Upstream density\nrho_2 = sympy.Symbol('rho2', positive=True) # Downstream density\nv_1 = sympy.Symbol('v1', positive=True) # Upstream velocity\nv_2 = sympy.Symbol('v2', positive=True) # Downstream velocity\np_1 = sympy.Symbol('p1', positive=True) # Upstream pressure\np_2 = sympy.Symbol('p2', positive=True) # Downstream pressure\nh_1 = sympy.Symbol('h1', positive=True) # Upstream enthalpy\nh_2 = sympy.Symbol('h2', positive=True) # Downstream enthalpy\nmass_conservation = sympy.Eq(rho_1*v_1, rho_2*v_2)\nmomentum_conservation = sympy.Eq(p_1+rho_1*v_1**2, p_2+rho_2*v_2**2)\nenthalpy_conservation = sympy.Eq(h_1+v_1**2/2, h_2+v_2**2/2)\nrankine_hugoniot_conditions = [mass_conservation,\n momentum_conservation,\n enthalpy_conservation]\nrankine_hugoniot_conditions\n```\n\nAdaptation for an ideal gas\n\n\n```python\ngamma = sympy.Symbol('gamma', positive=True) # Adiabatic index\nideal_gas_rankine_hugoniot_conditions = [itm.subs({h_1:gamma*p_1/rho_1/(gamma-1),\n h_2:gamma*p_2/rho_2/(gamma-1)})\n for itm in rankine_hugoniot_conditions]\nideal_gas_rankine_hugoniot_conditions\n```\n\nsympy can solve these equations, but the result is not very insightful. 
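The direct call would look like the cell below (a sketch only; it can take sympy a little while, and the output is omitted here).


```python
# Sketch: solve the full ideal-gas jump conditions directly (output omitted);
# the strong-shock limit in the next cell is far easier to interpret.
sympy.solve(ideal_gas_rankine_hugoniot_conditions, [p_2, rho_2, v_2], dict=True)
```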
A more useful approximation is the strong shock assumption, where the upstream pressure is neglected\n\n\n```python\nstrong_sohck_rankine_hugoniot = [itm.subs(p_1,0) for itm in ideal_gas_rankine_hugoniot_conditions]\nstrong_sohck_rankine_hugoniot\n```\n\nSolving the system of equations to obtain the downstream values\n\n\n```python\nsympy.solve(strong_sohck_rankine_hugoniot,[p_2,rho_2,v_2],dict=True)[0]\n```\n\nAnd we reproduce one of the interesting features of strong shocks. Even when the shock is extremely strong, the shocked material is only compressed by a constant factor, that only depends on the adiabatic index\n\n\n```python\ntemp = rho_2.subs((sympy.solve(strong_sohck_rankine_hugoniot,[p_2,rho_2,v_2],dict=True)[0]))\ntemp = temp/rho_1\ntemp = temp.subs(gamma, sympy.Rational(5,3))\ntemp\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "4333c7ba4e177070316bd8e4cea51bf83b82c88e", "size": 108839, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "lesson_1.ipynb", "max_stars_repo_name": "bolverk/cyborg_math", "max_stars_repo_head_hexsha": "298224dadd4218ebcc266b0a135e15595a359343", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 14, "max_stars_repo_stars_event_min_datetime": "2020-02-25T22:29:21.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-01T21:49:08.000Z", "max_issues_repo_path": "lesson_1.ipynb", "max_issues_repo_name": "bolverk/cyborg_math", "max_issues_repo_head_hexsha": "298224dadd4218ebcc266b0a135e15595a359343", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "lesson_1.ipynb", "max_forks_repo_name": "bolverk/cyborg_math", "max_forks_repo_head_hexsha": "298224dadd4218ebcc266b0a135e15595a359343", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2020-05-14T21:02:56.000Z", "max_forks_repo_forks_event_max_datetime": "2021-10-05T19:12:11.000Z", "avg_line_length": 50.5287836583, "max_line_length": 4876, "alphanum_fraction": 0.7606372716, "converted": true, "num_tokens": 3516, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9324533013520765, "lm_q2_score": 0.9032942067038784, "lm_q1q2_score": 0.8422796651332364}} {"text": "# 2022-01-28 More Newton\n\nMonday (Jan 31) is virtual.\n\n## Last time\n\n* Newton's method via Taylor series\n* Convergence theory for fixed point methods\n\n## Today\n\n* Derive Newton's method via fixed point convergence theory\n* Newton methods in computing culture\n* Breaking Newton's method\n* Exploration\n\n\n```julia\nusing Plots\ndefault(linewidth=4, legendfontsize=12)\n\nfunction newton(f, fp, x0; tol=1e-8, verbose=false)\n x = x0\n for k in 1:100 # max number of iterations\n fx = f(x)\n fpx = fp(x)\n if verbose\n println(\"[$k] x=$x f(x)=$fx f'(x)=$fpx\")\n end\n if abs(fx) < tol\n return x, fx, k\n end\n x = x - fx / fpx\n end \nend\n\nf(x) = cos(x) - x\nfp(x) = -sin(x) - 1\n```\n\n\n\n\n fp (generic function with 1 method)\n\n\n\n# Newton-Raphson Method (by Taylor Series)\n\nMuch of numerical analysis reduces to [Taylor series](https://en.wikipedia.org/wiki/Taylor_series), the approximation\n$$ f(x) = f(x_0) + f'(x_0) (x-x_0) + f''(x_0) (x - x_0)^2 / 2 + \\underbrace{\\dotsb}_{O((x-x_0)^3)} $$\ncentered on some reference point $x_0$.\n\nIn numerical computation, it is exceedingly rare to look beyond the first-order approximation\n$$ \\tilde f_{x_0}(x) = f(x_0) + f'(x_0)(x - x_0) . 
$$\nSince $\\tilde f_{x_0}(x)$ is a linear function, we can explicitly compute the unique solution of $\\tilde f_{x_0}(x) = 0$ as\n$$ x = x_0 - \\frac{f(x_0)}{f'(x_0)} . $$\nThis is Newton's Method (aka Newton-Raphson or Newton-Raphson-Simpson) for finding the roots of differentiable functions.\n\n# Convergence of fixed-point (by Taylor Series)\n\nConsider the iteration\n$$x_{k+1} = g(x_k)$$\nwhere $g$ is a continuously differentiable function.\nSuppose that there exists a fixed point $x_* = g(x_*)$. There exists a Taylor series at $x_*$,\n$$ g(x_k) = g(x_*) + g'(x_*)(x_k - x_*) + O((x_k-x_*)^2) $$\nand thus\n\\begin{align}\nx_{k+1} - x_* &= g(x_k) - g(x_*) \\\\\n&= g'(x_*) (x_k - x_*) + O((x_k - x_*)^2).\n\\end{align}\n\nIn terms of the error $e_k = x_k - x_*$,\n$$ \\left\\lvert \\frac{e_{k+1}}{e_k} \\right\\rvert = \\lvert g'(x_*) \\rvert + O(e_k).$$\n\n\n\nRecall the definition of q-linear convergence\n$$ \\lim_{k\\to\\infty} \\left\\lvert \\frac{e_{k+1}}{e_k} \\right\\rvert = \\rho < 1. $$\n\n# Example of a fixed point iteration\n\nWe wanted to solve $\\cos x - x = 0$, which occurs when $g(x) = \\cos x$ is a fixed point.\n\n\n```julia\nxstar, _ = newton(f, fp, 1.)\ng(x) = cos(x)\ngp(x) = -sin(x)\n@show xstar\n@show gp(xstar)\nplot([x->x, g], xlims=(-2, 3))\nscatter!([xstar], [xstar],\n label=\"\\$x_*\\$\")\n```\n\n xstar = 0.739085133385284\n gp(xstar) = -0.6736120293089505\n\n\n\n\n\n \n\n \n\n\n\n\n```julia\nfunction fixed_point(g, x, n)\n xs = [x]\n for k in 1:n\n x = g(x)\n append!(xs, x)\n end\n xs\nend\n\nxs = fixed_point(g, 0., 15)\nplot!(xs, g.(xs), seriestype=:path, marker=:auto)\n```\n\n\n\n\n \n\n \n\n\n\n# Verifying fixed point convergence theory\n\n\n$$ \\left\\lvert \\frac{e_{k+1}}{e_k} \\right\\rvert \\to \\lvert g'(x_*) \\rvert $$\n\n\n```julia\n@show gp(xstar)\nes = xs .- xstar\nes[2:end] ./ es[1:end-1]\n```\n\n gp(xstar) = -0.6736120293089505\n\n\n\n\n\n 15-element Vector{Float64}:\n -0.3530241034880909\n -0.7618685362635164\n -0.5959673878312852\n -0.7157653025686597\n -0.6414883589709152\n -0.6933762938713267\n -0.6595161800339986\n -0.6827343083372247\n -0.667303950535869\n -0.677785479788835\n -0.670766892391035\n -0.6755130653097281\n -0.6723244355324894\n -0.6744762481989985\n -0.6730283414604459\n\n\n\n\n```julia\nscatter(abs.(es), yscale=:log10, label=\"fixed point\")\nplot!(k -> abs(gp(xstar))^k, label=\"\\$|g'|^k\\$\")\n```\n\n\n\n\n \n\n \n\n\n\n# Plotting Newton convergence\n\n\n```julia\nfunction newton_hist(f, fp, x0; tol=1e-12)\n x = x0\n hist = []\n for k in 1:100 # max number of iterations\n fx = f(x)\n fpx = fp(x)\n push!(hist, [x fx fpx])\n if abs(fx) < tol\n return vcat(hist...)\n end\n x = x - fx / fpx\n end\nend\n```\n\n\n\n\n newton_hist (generic function with 1 method)\n\n\n\n\n```julia\nxs = newton_hist(f, fp, 1.97)\n@show x_star = xs[end,1]\nplot(xs[1:end-1,1] .- x_star, yscale=:log10, marker=:auto)\n```\n\n x_star = xs[end, 1] = 0.7390851332151607\n\n\n\n\n\n \n\n \n\n\n\n## Poll: Is this convergence A=q-linear, B=r-linear, C=neither?\n\n# Formulations are not unique (constants)\n\nIf $x = g(x)$ then\n$$x = \\underbrace{x + c(g(x) - x)}_{g_2}$$\nfor any constant $c \\ne 0$. 
Can we choose $c$ to make $\\lvert g_2'(x_*) \\rvert$ small?\n\n\n```julia\nc = -1 / (gp(xstar) - 1)\ng2(x) = x + c * (cos(x) - x)\ng2p(x) = 1 + c * (-sin(x) - 1)\n@show g2p(xstar)\nplot([x->x, g, g2], ylims=(-5, 5), label=[\"x\" \"g\" \"g2\"])\n```\n\n g2p(xstar) = 0.0\n\n\n\n\n\n \n\n \n\n\n\n\n```julia\nxs = fixed_point(g2, 1., 15)\nxs .- xstar\n```\n\n\n\n\n 16-element Vector{Float64}:\n 0.26091486661471597\n -0.013759123582207766\n -4.1975680750594435e-5\n -5.591781482294778e-10\n -1.7012335984389892e-10\n -1.7012335984389892e-10\n -1.7012335984389892e-10\n -1.7012335984389892e-10\n -1.7012335984389892e-10\n -1.7012335984389892e-10\n -1.7012335984389892e-10\n -1.7012335984389892e-10\n -1.7012335984389892e-10\n -1.7012335984389892e-10\n -1.7012335984389892e-10\n -1.7012335984389892e-10\n\n\n\n# Formulations are not unique (functions)\n\nIf $x = g(x)$ then\n$$x = \\underbrace{x + h(x) \\big(g(x) - x\\big)}_{g_3(x)}$$\nfor any smooth $h(x) \\ne 0$. Can we choose $h(x)$ to make $\\lvert g_3'(x) \\rvert$ small?\n\n\n```julia\nh(x) = -1 / (gp(x) - 1)\ng3(x) = x + h(x) * (g(x) - x)\nplot([x-> x, cos, g2, g3], ylims=(-5, 5))\n```\n\n\n\n\n \n\n \n\n\n\n* We don't know $g'(x_*)$ in advance because we don't know $x_*$ yet.\n* This method converges very fast\n* We actually just derived Newton's method.\n\n# A fresh derivation of Newton's method\n\n* A rootfinding problem $f(x) = 0$ can be converted to a fixed point problem $$x = x + f(x) =: g(x)$$ but there is no guarantee that $g'(x_*) = 1 + f'(x_*)$ will have magnitude less than 1.\n* Problem-specific algebraic manipulation can be used to make $|g'(x_*)|$ small.\n* $x = x + h(x) f(x)$ is also a valid formulation for any $h(x)$ bounded away from $0$.\n* Can we choose $h(x)$ such that $$ g'(x) = 1 + h'(x) f(x) + h(x) f'(x) = 0$$ when $f(x) = 0$?\n\nIn other words,\n$$ x_{k+1} = x_k + \\underbrace{\\frac{-1}{f'(x_k)}}_{h(x_k)} f(x_k) . $$\n\n# Quadratic convergence!\n\n$$ \\left\\lvert \\frac{e_{k+1}}{e_k} \\right\\rvert \\to \\lvert g'(x_*) \\rvert $$\n\n* What does it mean that $g'(x_*) = 0$?\n* It turns out that Newton's method has _locally quadratic_ convergence to simple roots,\n$$\\lim_{k \\to \\infty} \\frac{|e_{k+1}|}{|e_k|^2} < \\infty.$$\n* \"The number of correct digits doubles each iteration.\"\n* Now that we know how to make a good guess accurate, the effort lies in getting a good guess.\n\n# Culture: fast inverse square root\n\nThe following code appeared literally (including comments) in the Quake III Arena source code (late 1990s).\n\n```C\nfloat Q_rsqrt( float number )\n{\n\tlong i;\n\tfloat x2, y;\n\tconst float threehalfs = 1.5F;\n\n\tx2 = number * 0.5F;\n\ty = number;\n\ti = * ( long * ) &y; // evil floating point bit level hacking\n\ti = 0x5f3759df - ( i >> 1 ); // what the fuck? \n\ty = * ( float * ) &i;\n y = y * ( threehalfs - ( x2 * y * y ) ); // 1st iteration\n// y = y * ( threehalfs - ( x2 * y * y ) ); // 2nd iteration, this can be removed\n\n\treturn y;\n}\n```\n\nWe now have [vector instructions](https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=rsqrt&expand=2989,1224,4470) for approximate inverse square root.\nMore at https://en.wikipedia.org/wiki/Fast_inverse_square_root\n\n# How does it work?\n\nLet's look at the last line\n```c\ny = y * ( threehalfs - ( x2 * y * y ) ); // 1st iteration\n```\n\nWe want a function $f(y)$ such that $f(1/\\sqrt{x}) = 0$. 
One such function is\n$$ f(y) = 1/y^2 - x, \\quad f'(y) = -2/y^3.$$\n\nThere are others, e.g.,\n$$f_1(y) = y^2 - 1/x,\\quad f'(y) = 2 y,$$\nbut this would require a division.\n\nNewton's method is\n\\begin{align}\ny_{k+1} &= y_k - \\frac{f(y_k)}{f'(y_k)} \\\\\n&= y_k - \\frac{1/y_k^2 - x}{-2/y_k^3} \\\\\n&= y_k + \\frac 1 2 (y_k - x y_k^3) \\\\\n&= y_k \\left(\\frac 3 2 - \\frac 1 2 x y_k^2\\right)\n\\end{align}\n\n# Rootfinding outlook\n\n* Newton methods are immensely successful\n * Convergence theory is local; we need good initial guesses (activity)\n * Computing the derivative $f'(x)$ is *intrusive*\n * Avoided by secant methods (approximate the derivative; activity)\n * Algorithmic or numerical differentiation (future topics)\n * Bisection is robust when conditions are met\n * Line search (activity)\n * When does Newton diverge?\n\n* More topics\n * Find *all* the roots\n * Use Newton-type methods with bounds\n * Times when Newton converges slowly\n\n# Exploratory rootfinding\n\n* Find a function $f(x)$ that models something you're interested in. You could consider nonlinear physical models (aerodynamic drag, nonlinear elasticity), behavioral models, probability distributions, or anything else that that catches your interest. Implement the function in Julia or another language.\n\n* Consider how you might know the output of such functions, but not an input. Think from the position of different stakeholders: is the equation used differently by an experimentalist collecting data versus by someone making predictions through simulation? How about a company or government reasoning about people versus the people their decisions may impact?\n\n* Formulate the map from known to desired data as a rootfinding problem and try one or more methods (Newton, bisection, etc., or use a rootfinding library).\n\n* Plot the inverse function (output versus input) from the standpoint of one or more stakeholder. Are there interesting inflection points? Are the methods reliable?\n\n* If there are a hierarchy of models for the application you're interested in, consider using a simpler model to provide an initial guess to a more complicated model.\n\n# Equation of state example\n\nConsider an [equation of state](https://en.wikipedia.org/wiki/Real_gas#Beattie%E2%80%93Bridgeman_model) for a real gas, which might provide pressure $p(T, \\rho)$ as a function of temperature $T$ and density $\\rho = 1/v$.\n\n* An experimentalist can measure temperature and pressure, and will need to solve for density (which is difficult to measure directly).\n* A simulation might know (at each cell or mesh point, at each time step) the density and internal energy, and need to compute pressure (and maybe temperature).\n* An analyst might have access to simulation output and wish to compute entropy (a thermodynamic property whose change reflects irreversible processes, and can be used to assess accuracy/stability of a simulation or efficiency of a machine).\n\nThe above highlights how many equations are incomplete, failing to model how related quantities (internal energy and entropy in this case) depend on the other quantities. Standardization bodies (such as NIST, here in Boulder) and practitioners often prefer models that intrinsically provide a complete set of consistent relations. 
An elegent methodology for equations of state for gasses and fluids is by way of the [Helmholtz free energy](http://www.coolprop.org/fluid_properties/PurePseudoPure.html#pure-and-pseudo-pure-fluid-properties), which is not observable, but whose partial derivatives define a complete set of thermodynamic properties. The [CoolProp](http://www.coolprop.org) software has highly accurate models for many gasses, and practitioners often build less expensive models for narrower ranges of theromdynamic conditions.\n", "meta": {"hexsha": "4bdeed9564bb86ce7fa1dd699fe7518e5244d55a", "size": 194147, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "slides/2022-01-28-more-newton.ipynb", "max_stars_repo_name": "cu-numcomp/spring22", "max_stars_repo_head_hexsha": "f4c1f9287bff2c10645809e65c21829064493a66", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "slides/2022-01-28-more-newton.ipynb", "max_issues_repo_name": "cu-numcomp/spring22", "max_issues_repo_head_hexsha": "f4c1f9287bff2c10645809e65c21829064493a66", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "slides/2022-01-28-more-newton.ipynb", "max_forks_repo_name": "cu-numcomp/spring22", "max_forks_repo_head_hexsha": "f4c1f9287bff2c10645809e65c21829064493a66", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2022-02-09T21:05:12.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-11T20:34:46.000Z", "avg_line_length": 120.7381840796, "max_line_length": 8494, "alphanum_fraction": 0.6515166343, "converted": true, "num_tokens": 3724, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9230391706552538, "lm_q2_score": 0.912436167620237, "lm_q1q2_score": 0.8422143234360417}} {"text": "# Machine Learning Implementation\n\n## Imports\n\n\n```python\nimport json\n\nimport numpy as np\nimport pandas as pd\nimport plotly.offline as py\nfrom plotly import graph_objects as go\n```\n\n## Linear regression\n\n### The maths\n\nThe linear model (or line of best fit in 2D) aims to describe the continuous y vairable a.k.a the target variable (e.g. house prices) as a linear combination of features (e.g. square footage / number of bedrooms) the features are also refered to as the design matrix.\n\n$$\n\\begin{align}\n\\hat{y}&=\\beta_0x_0+\\cdots+\\beta_nx_n\\quad &n\\in \\mathbb{N}, x_o = 1 \\\\\n\\hat{y}&=\\sum^{n}_{i=0}\\beta_ix_i \\\\\n\\hat{y}&=\\mathbf{\\boldsymbol{\\beta}^Tx}\\quad&\\boldsymbol{\\beta},\\mathbf{x}\\in\\mathbb{R}^{(n+1)\\times1}\\\\\n\\hat{y}&=g(\\boldsymbol{\\beta}^T\\mathbf{x})\n\\end{align}\n$$\n\nwhere g, the activation function, is the identidy in linear regression \n\nWe define the cost function as half of the mean square error:\n\n$$\n\\begin{align}\nJ(\\boldsymbol{\\beta})\n&= \\frac{1}{2m}\\sum^{m}_{j=1}\\left(\ny^j-\\hat{y}^j\n\\right)^2,\\quad m\\in \\mathbb{N} \\text{ is the number of training samples}\\\\\n&= \\frac{1}{2m}\\sum^{m}_{j=1}\\left(\ny^j-g(\\boldsymbol{\\beta}^T\\mathbf{x}^j)\n\\right)^2\n\\end{align}\n$$\n\nWe need to differentiate the cost function i.e. 
find the gradient\n\n$$\n\\begin{align}\n\\frac{\\partial J}{\\partial\\beta_k}\\left(\\boldsymbol{\\beta}\\right) &= \\frac{\\partial}{\\partial\\beta_k}\\left(\n\\frac{1}{2m}\\sum^{m}_{j=1}\\left(\ny^j-g(\\boldsymbol{\\beta}^T\\mathbf{x}^j)\\right)^2\n\\right)\\\\\n&= \\frac{\\partial}{\\partial\\beta_k}\\left(\n\\frac{1}{2m}\\sum^{m}_{j=1}\n\\left(\ny^j-\\sum^{n}_{i=0}\\beta_ix_i^j\n\\right)^2\n\\right)\\\\\n&=\n\\frac{1}{m}\\sum^{m}_{j=1}\n\\left(\ny^j-\\sum^{n}_{i=0}\\beta_ix_i^j\n\\right)(-x^j_k)\\\\\n\\end{align}\n$$\n\nhence\n\n$$\n\\nabla_{\\boldsymbol{\\beta}} J\n=\n\\begin{bmatrix}\n \\frac{\\partial J}{\\partial\\beta_1} \\\\\n \\vdots \\\\\n \\frac{\\partial J}{\\partial\\beta_n}\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n -\\frac{1}{m}\\sum^{m}_{j=1}\n \\left(y^j-\\sum^{n}_{i=0}\\beta_ix_i^j\\right)x^j_1\\\\\n \\vdots \\\\\n -\\frac{1}{m}\\sum^{m}_{j=1}\n \\left(y^j-\\sum^{n}_{i=0}\\beta_ix_i^j\\right)x^j_n\\\\\n\\end{bmatrix}\n$$\n\nDefine the design matrix and column representation of y. Here each row of X and y are training examples hence there are m rows\n\n$$\n\\mathbf{X}\\in\\mathbb{R}^{m\\times (n+1)},\n\\quad \\mathbf{y}\\in\\mathbb{R}^{m\\times 1},\n\\quad \\boldsymbol{\\beta}\\in\\mathbb{R}^{(n+1)\\times1}\n$$\n\n$$\n\\mathbf{X}=\\begin{bmatrix}\n 1 & x_1^1 & x_2^1 & \\dots & x_n^1 \\\\\n 1 & x_1^2 & x_2^2 & \\dots & x_n^2 \\\\\n \\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\\\n 1 & x_1^m & x_2^m & \\dots & x_n^m \\\\\n\\end{bmatrix}\\quad\n\\mathbf{y}=\\begin{bmatrix}\n y_1\\\\y_2\\\\\\vdots\\\\y_m\n\\end{bmatrix}\\quad\n\\boldsymbol{\\beta} = \\begin{bmatrix}\n \\beta_0\\\\\\beta_1\\\\\\vdots\\\\\\beta_n\n\\end{bmatrix}\n$$\n\n$$\n\\begin{align}\n\\nabla_{\\boldsymbol{\\beta}} J\n&=\n\\begin{bmatrix}\n -\\frac{1}{m}\\sum^{m}_{j=1}\n \\left(y^j-\\sum^{n}_{i=0}\\beta_ix_i^j\\right)x^j_1\\\\\n \\vdots \\\\\n -\\frac{1}{m}\\sum^{m}_{j=1}\n \\left(y^j-\\sum^{n}_{i=0}\\beta_ix_i^j\\right)x^j_n\\\\\n\\end{bmatrix}\n=-\\frac{1}{m}\n\\begin{bmatrix}\n \\sum^{m}_{j=1}y^jx^j_1\\\\\n \\vdots \\\\\n \\sum^{m}_{j=1}y^jx^j_n\\\\\n\\end{bmatrix}+\n\\frac{1}{m}\n\\begin{bmatrix}\n \\sum^{m}_{j=1}\\sum^{n}_{i=0}\\beta_ix_i^jx^j_1\\\\\n \\vdots \\\\\n \\sum^{m}_{j=1}\\sum^{n}_{i=0}\\beta_ix_i^jx^j_n\n\\end{bmatrix}\\\\\n\\end{align}\n$$\n\nso\n\n$$\n\\begin{align}\n\\nabla_{\\boldsymbol{\\beta}} J\n&=\\frac{1}{m}\\left(\n\\mathbf{X}^T\\mathbf{X}\\mathbf{\\boldsymbol{\\beta}}-\\mathbf{X}^T\\mathbf{y}\n\\right)\\\\\n&=\\frac{1}{m}\\mathbf{X}^T\\left(\n\\mathbf{X}\\mathbf{\\boldsymbol{\\beta}}-\\mathbf{y}\n\\right)\\\\\n&=\\frac{1}{m}\\mathbf{X}^T\\left(\n\\mathbf{\\hat{y}}-\\mathbf{y}\n\\right)\n\\end{align}\n$$\n\nwhere\n\n$$\n\\mathbf{\\hat{y}} = \\mathbf{X}\\mathbf{\\boldsymbol{\\beta}}\n$$\n\nWe could have derived the same thing using matrix calculus - noting the following:\n\n$$\n\\begin{align}\nJ(\\boldsymbol{\\beta}) &= \\frac{1}{2m}\\sum^{m}_{j=1}\\left(\ny^j-g(\\boldsymbol{\\beta}^T\\mathbf{x}^j)\n\\right)^2\\\\\n&= \\frac{1}{2m}\\left(\n\\mathbf{y}-\\mathbf{\\hat{y}}\n\\right)^T\n\\left(\n\\mathbf{y}-\\mathbf{\\hat{y}}\n\\right)\\\\\n&= \\frac{1}{2m}\\left(\n\\mathbf{y}-\\mathbf{X}\\boldsymbol{\\beta}\n\\right)^T\n\\left(\n\\mathbf{y}-\\mathbf{X}\\boldsymbol{\\beta}\n\\right)\\\\\n&= 
\\frac{1}{2m}\\left(\n\\mathbf{y}^T\\mathbf{y}\n-\\boldsymbol{\\beta}^T\\mathbf{X}^T\\mathbf{y}\n-\\mathbf{y}^T\\mathbf{X}\\boldsymbol{\\beta}\n+\\boldsymbol{\\beta}^T\\mathbf{X}^T\\mathbf{X}\\boldsymbol{\\beta}\n\\right)\\\\\n\\end{align}\n$$\n\nand\n\n$$\n\\frac{\\partial}{\\partial\\mathbf{\\boldsymbol{\\beta}}}\n\\left(\nA^T\\boldsymbol{\\beta}\n\\right) = A,\\quad \\forall A\\in\\mathbb{R}^{(n+1)\\times1}\\\\\n$$\n\nand\n\n$$\n\\frac{\\partial}{\\partial\\mathbf{\\boldsymbol{\\beta}}}\n\\left(\n\\boldsymbol{\\beta}^TA\\boldsymbol{\\beta}\n\\right) = 2A\\boldsymbol{\\beta},\\quad \\forall A\\in\\mathbb{R}^{m\\times (n+1)}\\\\\n$$\n\nso\n\n$$\n\\nabla_{\\boldsymbol{\\beta}}J=\\frac{1}{m}\\left(\n\\mathbf{X}^T\\mathbf{X}\\mathbf{\\boldsymbol{\\beta}}-\\mathbf{X}^T\\mathbf{y}\n\\right)$$\n\n### Make fake data\n\n\n```python\nm = 100\nx0 = np.ones(shape=(m, 1))\nx1 = np.linspace(0, 10, m).reshape(-1, 1)\nX = np.column_stack((x0, x1))\n\n# let y = 0.5 * x + 1 + epsilon\nepsilon = np.random.normal(scale=0.5, size=(m, 1))\ny = x1 + 1 + epsilon\n```\n\n\n```python\nfig = go.FigureWidget()\nfig = fig.add_scatter(\n x=X[:,1],\n y=y[:,0],\n mode='markers',\n name='linear data + noise')\nfig.layout.title = 'Fake linear data with noise'\nfig.layout.xaxis.title = 'x1'\nfig.layout.yaxis.title = 'y'\nfig\n```\n\n\n FigureWidget({\n 'data': [{'mode': 'markers',\n 'name': 'linear data + noise',\n 'ty\u2026\n\n\n### Linear regression class\n\n\n```python\nclass LinearRegression():\n\n def __init__(self, learning_rate=0.05):\n \"\"\" \n Linear regression model\n\n Parameters:\n ----------\n learning_rate: float, optional, default 0.05\n The learning rate parameter controlling the gradient descent\n step size\n \"\"\"\n self.learning_rate = learning_rate\n print('Creating linear model instance')\n\n def __repr__(self):\n return (\n f'')\n \n\n \n def fit(self, X, y, n_iter=1000):\n \"\"\" \n Fit the linear regression model\n\n Updates the weights with n_iter iterations of batch gradient\n descent updates\n\n Parameters:\n ----------\n X: numpy.ndarray\n Training data, shape (m samples, (n - 1) features + 1)\n Note the first column of X is expected to be ones (to allow \n for the bias to be included in beta)\n y: numpy.ndarray\n Target values, shape (m samples, 1)\n n_iter: int, optional, default 1000\n Number of batch gradient descent steps\n \"\"\" \n m, n = X.shape\n print(f'fitting with m={m} samples with n={n-1} features\\n')\n self.beta = np.zeros(shape=(n, 1))\n self.costs = []\n self.betas = [self.beta]\n for iteration in range(n_iter):\n y_pred = self.predict(X)\n cost = self.cost(y, y_pred)\n self.costs.append(cost[0][0])\n gradient = self.gradient(y, y_pred, X)\n self.beta = self.beta - (\n self.learning_rate * gradient)\n self.betas.append(self.beta)\n\n def cost(self, y, y_pred):\n \"\"\" \n Mean square error cost function\n\n Parameters:\n ----------\n y: numpy.ndarray\n True target values, shape (m samples, 1)\n y_pred: numpy.ndarray\n Predicted y values, shape (m samples, 1)\n\n Returns:\n -------\n float:\n mean square error value\n \"\"\"\n m = y.shape[0]\n cost = (1 / (2 * m)) * (y - y_pred).T @ (y - y_pred)\n return cost\n\n def gradient(self, y, y_pred, X):\n \"\"\" \n Calculates the gradient of the cost function\n\n Parameters:\n ----------\n y: numpy.ndarray\n Predicted y values, shape (m samples, 1)\n y_pred: numpy.ndarray\n True target values, shape (m samples, 1)\n X: numpy.ndarray\n Training data, shape (m samples, (n - 1) features + 1)\n Note the first column of X is expected to 
be ones (to allow \n for the bias to be included in beta)\n\n Returns:\n -------\n numpy.ndarray:\n Derivate of mean square error cost function with respect to\n the weights beta, shape (n features, 1)\n \"\"\"\n m = X.shape[0]\n gradient = (1 / m) * X.T @ (y_pred - y)\n return gradient\n\n def predict(self, X):\n \"\"\" \n Predict the target values from sample X feature values\n\n Parameters:\n ----------\n X: numpy.ndarray\n Training data, shape (m samples, (n - 1) features + 1)\n Note the first column of X is expected to be ones (to allow \n for the bias to be included in beta)\n\n Returns:\n -------\n numpy.ndarray:\n Target value predictions, shape (m samples, 1)\n \"\"\" \n y_pred = X @ self.beta\n return y_pred\n\n```\n\n\n```python\nlinear_regression = LinearRegression()\nlinear_regression\n```\n\n Creating linear model instance\n\n\n\n\n\n \n\n\n\n\n```python\nlinear_regression.fit(X, y)\n```\n\n fitting with m=100 samples with n=1 features\n \n\n\n### Plot the best fit\n\n\n```python\nfig = fig.add_scatter(\n x=X[:,1], \n y=linear_regression.predict(X)[:,0],\n mode='markers',\n name='best fit')\nfig\n```\n\n\n FigureWidget({\n 'data': [{'mode': 'markers',\n 'name': 'linear data + noise',\n 'ty\u2026\n\n\n### Plot the cost function\n\n\n```python\ndef plot_surface(linear_regression):\n cost_fig = go.FigureWidget()\n cost_fig = cost_fig.add_scatter(\n x=list(range(len(linear_regression.costs))),\n y=linear_regression.costs,\n mode='markers+lines')\n cost_fig.layout.title = 'Cost by iteration'\n return cost_fig\n```\n\n\n```python\ncost_fig = plot_surface(linear_regression)\ncost_fig\n```\n\n\n FigureWidget({\n 'data': [{'mode': 'markers+lines',\n 'type': 'scatter',\n 'uid': 'd\u2026\n\n\n\n```python\ndef plot_surface(linear_regression):\n beta0s = [beta[0][0] for beta in linear_regression.betas]\n beta1s = [beta[1][0] for beta in linear_regression.betas]\n beta0_max = max(map(abs, beta0s)) * 1.05\n beta1_max = max(map(abs, beta1s)) * 1.05\n\n gradient_descent_fig = go.FigureWidget()\n gradient_descent_fig = gradient_descent_fig.add_scatter3d(\n x=beta0s,\n y=beta1s,\n z=linear_regression.costs,\n mode='markers+lines',\n marker={'size':3, 'color':'red'})\n\n beta0, beta1 = np.meshgrid(\n np.linspace(-beta0_max, beta0_max, 100),\n np.linspace(-beta1_max, beta1_max, 100))\n\n z = np.diag(\n (1 / (2 * m)) * \\\n (y - (X @ np.column_stack((beta0.ravel(), beta1.ravel())).T)).T @ \\\n (y - (X @ np.column_stack((beta0.ravel(), beta1.ravel())).T))\n ).reshape(beta1.shape)\n\n gradient_descent_fig = gradient_descent_fig.add_surface(\n x=beta0,\n y=beta1,\n z=z,\n opacity=0.8)\n \n gradient_descent_fig.layout.title = 'Cost function surface'\n gradient_descent_fig.layout.scene.xaxis.title = 'beta_0'\n gradient_descent_fig.layout.scene.yaxis.title = 'beta_1'\n gradient_descent_fig.layout.scene.zaxis.title = 'cost' \n # cost = average sum square residuals\n gradient_descent_fig.layout.height = 500\n return gradient_descent_fig\n```\n\n\n```python\ngradient_descent_fig = plot_surface(linear_regression)\ngradient_descent_fig\n```\n\n\n FigureWidget({\n 'data': [{'marker': {'color': 'red', 'size': 3},\n 'mode': 'markers+lines',\n \u2026\n\n\n\n```python\n# py.plot(gradient_descent_fig, filename='gradient_descent.html')\n```\n\n## End\n", "meta": {"hexsha": "b0f17042635cbbb4c71b048929d13fc23ace9d17", "size": 22929, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/linear_regression.ipynb", "max_stars_repo_name": "simonwardjones/machine_learning", 
"max_stars_repo_head_hexsha": "1e92865bfe152acaf0df2df8f11a5f51833389a9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 14, "max_stars_repo_stars_event_min_datetime": "2020-05-13T13:21:14.000Z", "max_stars_repo_stars_event_max_datetime": "2021-03-29T08:11:22.000Z", "max_issues_repo_path": "notebooks/linear_regression.ipynb", "max_issues_repo_name": "simonwardjones/machine_learning", "max_issues_repo_head_hexsha": "1e92865bfe152acaf0df2df8f11a5f51833389a9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/linear_regression.ipynb", "max_forks_repo_name": "simonwardjones/machine_learning", "max_forks_repo_head_hexsha": "1e92865bfe152acaf0df2df8f11a5f51833389a9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-06-20T12:50:23.000Z", "max_forks_repo_forks_event_max_datetime": "2020-08-19T13:29:01.000Z", "avg_line_length": 25.3359116022, "max_line_length": 273, "alphanum_fraction": 0.4629072354, "converted": true, "num_tokens": 3811, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9525741322079104, "lm_q2_score": 0.8840392771633079, "lm_q1q2_score": 0.8421129472815464}} {"text": "## Define the Data Generating Distribution\n\nTo examine different approximations to the true loss surface of a particular data generating distribution $P(X,Y)$ we must first define it. We will work backwards by first defining a relationship between random variables and then construct the resulting distribution. Let \n\n$$y=mx+\\epsilon$$\n\nwhere $X \\sim U[0,1]$ ($p(x)=1$) and $\\epsilon \\sim \\mathcal{N}(0,s^2)$ is the noise term. We immediately see that $y$ can be interpreted as the result of a reparameterization, thus given a particular observation $X=x$ the random variable $Y$ is also distributed normally $\\mathcal{N}(mx,s^2)$ with the resulting pdf.\n\n$$p(y|x) = \\frac{1}{\\sqrt{2\\pi s^2}}\\exp\\bigg(-\\frac{(y-mx)^2}{2 s^2}\\bigg)$$\n\nIn this way, we can trivially define the joint pdf\n\n$$p(x,y) = p(y|x)p(x) = p(y|x) = \\frac{1}{\\sqrt{2\\pi s^2}}\\exp\\bigg(-\\frac{(y-mx)^2}{2 s^2}\\bigg)$$\n\n## Visualizing the Joint Distribution\n\nWe can create observations from $P(X,Y)$ via ancestral sampling, i.e. we first draw a sample $x\\sim p(x)$ and then use it to draw a sample $y \\sim p(y|x)$ resulting in $(x,y) \\sim P(X,Y)$.\n\n\n```python\nimport numpy as np\n\nclass P():\n \n def __init__(self, m, s):\n \n self.m = m # Slope of line\n self.s = s # Standard deviation of injected noise\n \n def sample(self, size):\n \n x = np.random.uniform(size=size)\n y = []\n for xi in x:\n y.append(np.random.normal(self.m*xi,self.s))\n return (x,y)\n```\n\n\n```python\nimport matplotlib.pyplot as plt\n\nm = 2.7\ns = 0.2\np = P(m,s)\nx,y = p.sample(50)\nplt.figure(figsize=(7,7))\nplt.plot([0,1],[0,m],label=\"Conditional Expectation\", c='k')\nplt.scatter(x,y, label=\"Samples\")\n\nplt.xlabel(\"x\",fontsize=18)\nplt.ylabel('y', fontsize=18)\nplt.legend(fontsize=16)\nplt.show()\n```\n\n## Fitting a Line\n\nWe now wish to fit a line to the points drawn from $P(X,Y)$. To do this we must introduce a model and a loss function. 
Our model is a line with a y-intercept of $0$, $y=\\theta x$, and we use the standard sum of squared errors (SSE) loss function\n\n$$ L(\\theta) = \\frac{1}{N}\\sum\\limits_{i=1}^N (\\theta x_i-y_i)^2$$\n\n## Examining Loss Surfaces\n\nThe above SSE is where most introductions to machine learning start. However, we can look a little bit closer. Not all loss surfaces are created equally because in practice the loss functions are calculated from only a sample of the true distribution. To see this, let us first take a look at what the **true** loss surface is\n\n\\begin{align}\nL_{\\text{True}}(\\theta) &= \\mathop{\\mathbb{E}}_{(x,y)\\sim P}[(\\theta x-y)^2] \\\\\n&= \\theta^2 \\mathbb{E}[x^2]-2\\theta\\mathbb{E}[xy] +\\mathbb{E}[y^2] \\\\\n\\end{align}\n\nLet us begin with the first expectation.\n\n\\begin{align}\n\\mathbb{E}[x^2] &= \\int_0^1 \\int_{-\\infty}^{\\infty} x^2 p(x,y) dy dx \\\\\n&= \\int_0^1 x^2 \\bigg(\\int_{-\\infty}^{\\infty} p(y|x) dy \\bigg) dx \\\\\n&= \\int_0^1 x^2 dx \\\\\n&= \\frac{1}{3}\n\\end{align}\n\nThe expectation of $xy$ follows a similar pattern\n\n\\begin{align}\n\\mathbb{E}[xy] &= \\int_0^1 \\int_{-\\infty}^{\\infty} xy p(x,y) dy dx \\\\\n&= \\int_0^1 x \\bigg(\\int_{-\\infty}^{\\infty} yp(y|x) dy \\bigg) dx \\\\\n&= \\int_0^1 x(mx) dx \\\\\n&= m\\int_0^1 x^2 dx \\\\\n&= \\frac{m}{3}\n\\end{align}\n\nAs well as the final expectation...\n\n\\begin{align}\n\\mathbb{E}_{P(X,Y)}[y^2] &= \\int_0^1 \\int_{-\\infty}^{\\infty} y^2 p(x,y) dy dx \\\\\n&= \\int_0^1 \\mathbb{E}_{P(Y|X)}[y^2] dx\n\\end{align}\n\nHere we use the fact that\n\n$$\\mathbb{E}_{P(Y|X)}[y^2] = Var[y]+(\\mathbb{E}_{P(Y|X)}[y])^2$$\n\nto arrive at \n\n\\begin{align}\n\\mathbb{E}_{P(X,Y)}[y^2] &= \\int_0^1 s^2+(mx)^2 dx \\\\\n&= s^2 + \\frac{m^2}{3}\n\\end{align}\n\nWe now substitute all three results into the definition of $L_{\\text{True}}$\n\n\\begin{align}\nL_{\\text{True}}(\\theta) &= \\frac{1}{3}\\theta^2 -\\frac{2m}{3}\\theta + \\frac{m^2}{3} + s^2 \\\\\n&= \\frac{1}{3}(\\theta - m)^2 + s^2\n\\end{align}\n\nwhere we see the classic results that\n$$\\text{argmin}_{\\theta} [L_{\\text{True}}(\\theta)] = m$$\nand\n$$L_{\\text{True}}(m) = s^2$$\nshowing us that the best we can do in minimizing the loss is governed by the gaussian noise injected into the data\n\n## Visualize True Loss Surface\n\n\n```python\ndef true_loss(theta):\n return 1/3*(theta-m)**2 + s**2\n\nthetas = np.linspace(m-2,m+2,1000)\nplt.figure(figsize=(7,5))\nplt.plot(thetas, true_loss(thetas),c='k',label=\"$\\mathcal{L}_D$\")\nplt.plot([2.7,2.7],[0,1.38],c='r',ls='dashed',label=\"$m$\")\nplt.xlabel(\"x\",fontsize=16)\nplt.ylabel(\"y\",fontsize=16)\nplt.legend(fontsize=17)\nplt.show()\n```\n\n## Approximate Loss Surfaces\n\nThe question now becomes, what does the loss surface look like when we only include a finite number of observations from the data generating distribution $P(X,Y)$? 
We can find an expression for it by expanding the previous defintion of the SSE above\n\n\\begin{align}\nL(\\theta) &= \\frac{1}{N}\\sum\\limits_{i=1}^N (\\theta x_i-y_i)^2 \\\\\n&= \\theta^2 \\frac{1}{N}\\sum\\limits_{i=1}^N x_i^2 -2\\theta \\frac{1}{N}\\sum\\limits_{i=1}^N x_i y_i + \\frac{1}{N}\\sum\\limits_{i=1}^N y_i^2 \\\\\n\\end{align}\n\n\n```python\ndef approx_loss(theta, x_vals, y_vals):\n x_sq = np.power(x,2).mean()\n xy = (x*y).mean()\n y_sq = np.power(y,2).mean()\n return theta**2*x_sq - 2*theta*xy + y_sq\n```\n\nNow we can examine what different approximations to the loss surface look like relative to the true loss surface\n\n\n```python\nplt.figure(figsize=(7,5))\nplt.plot([m,m],[-0.1,1.0],ls='dashed',c='r',label='m')\nplt.plot(thetas, true_loss(thetas),c='k',label='$\\mathcal{L}_D$')\nsample_sizes = [5,10,20,50]\ncolors = ['b','y','g','c','m']\nfor ss,color in zip(sample_sizes,colors):\n x,y = p.sample(ss)\n plt.plot(thetas, approx_loss(thetas,x,y), color, label=\"{} Samples\".format(ss))\nplt.legend()\nplt.ylim(0.0,s**2 + s)\nplt.ylabel(\"Mean Squared Error\")\nplt.xlim(m-1.0,m+1.0)\nplt.xlabel(\"$m$\")\nplt.show()\n```\n\nNow let us approximate the derivative of the true loss surface. We begin with calculating the true derivative\n\n\\begin{align}\nL_{\\text{True}}(\\theta) &= \\frac{1}{3}(\\theta - m)^2 + s^2 \\\\\n\\frac{\\partial L_{\\text{True}}}{\\partial \\theta}(\\theta) &= \\frac{2}{3}(\\theta-m)\n\\end{align}\n\nMeanwhile the derivative of the approximate loss function is\n\n\\begin{align}\nL(\\theta) &= \\theta^2 \\frac{1}{N}\\sum\\limits_{i=1}^N x_i^2 -2\\theta \\frac{1}{N}\\sum\\limits_{i=1}^N x_i y_i + \\frac{1}{N}\\sum\\limits_{i=1}^N y_i^2 \\\\\n\\frac{\\partial L}{\\partial \\theta}(\\theta) &= \\theta \\frac{2}{N}\\sum\\limits_{i=1}^N x_i^2 -\\frac{2}{N}\\sum\\limits_{i=1}^N x_i y_i\n\\end{align}\n\nand we immediately see that our approximation to the minimum of this loss function is\n\n$$\\theta^* = \\frac{\\sum\\limits_{i=1}^N x_i y_i}{\\sum\\limits_{i=1}^N x_i^2}$$\n\nWe will draw samples in the range from $[0,N]$. For each sample size, we will use 70% of it to approximate the minimum of that surface using $\\theta^*$ above. Then we will approximate the gradient of the loss surface at $\\theta^*$ using the remaining 30% of the data, as well as with all N of the data. For each case we will calculate the error between it and the **true** gradient of the loss surface. 
We will then plot both of the error rates as a function of sample size.\n\n\n```python\ndef partial_L(theta,x,y):\n x_sq = np.power(x,2).mean()\n xy = (x*y).mean()\n return 2*(theta*x_sq-xy)\n\ndef partial_L_True(theta):\n return 2/3*(theta-m)\n\ndef approx_argmin(x,y):\n x_sq = np.power(x,2).sum()\n xy = (x*y).sum()\n return xy/x_sq\n```\n\n\n```python\nerr_tot = []\nerr_test = []\ngrad = False\nfor samp_size in range(10,1000):\n x,y = p.sample(samp_size)\n index = int(samp_size*0.7)\n theta_min = approx_argmin(x[:index], y[:index])\n pL_test = partial_L(theta_min, x[index:], y[index:])\n pL_tot = partial_L(theta_min, x, y)\n pL_T = partial_L_True(theta_min)\n err_tot.append(np.abs(pL_T-pL_tot))\n err_test.append(np.abs(pL_T-pL_test))\n \n```\n\n\n```python\nplt.plot(err_tot, c='r', label='Err Tot')\nplt.plot(err_test, c='b', label='Err Test', alpha=0.5)\nplt.ylabel('Error')\nplt.xlabel('Sample Size')\nplt.title('Error in Approx Loss Geometry at $\\\\theta^*$')\nplt.legend()\nplt.show()\n```\n\n\n```python\nerr_tot = []\nerr_test = []\nerr_train = []\ngrad = False\nfor samp_size in range(10,1000):\n x,y = p.sample(samp_size)\n index = int(samp_size*0.7)\n theta_min = approx_argmin(x[:index], y[:index])\n loss_test = approx_loss(theta_min, x[index:], y[index:])\n loss_tot = approx_loss(theta_min, x, y)\n loss_train = approx_loss(theta_min, x[:index], y[:index])\n loss_true = true_loss(theta_min)\n err_test.append(np.abs(loss_true-loss_test))\n err_tot.append(np.abs(loss_true-loss_tot))\n err_train.append(np.abs(loss_true-loss_train))\n```\n\n\n```python\nplt.plot(err_tot, c='r', label='Err Tot')\nplt.plot(err_test, c='b', label='Err Test', alpha=0.5)\nplt.plot(err_test, c='g', label='Err Train', alpha=0.5)\nplt.ylabel('Error')\nplt.xlabel('Sample Size')\nplt.title('Error in Approx Loss Geometry at $\\\\theta^*$')\nplt.legend()\nplt.show()\n```\n\n$\\sum_i^N$\n\n### Why is the generalization biased if calcuated with the training data?\n\nFirst, we define bias $b(\\hat{\\theta}, \\theta)$. The bias of an estimator, $\\hat{\\theta}$, that is estimating a true population value $\\theta$ from a distribution $P$, is given by,\n\n$$\\text{bias}(\\hat{\\theta}, \\theta) = \\mathbb{E}_P[\\hat{\\theta}] - \\theta.$$\n\nWhen assessing a model's generalizability, we are interested in approximating the value of the true loss function. 
In other words, if we have an estimator, $\\hat{\\mathcal{L}}$, for the true loss $\\mathcal{L}_{\\text{true}}$ we measure the bias of the our estimator with,\n\n$$\\text{bias}(\\hat{\\mathcal{L}}, \\mathcal{L}_{\\text{true}}) = \\mathbb{E}_P[\\hat{\\mathcal{L}}] - \\mathcal{L}_{\\text{true}}.$$\n\n\n```python\n\n```\n", "meta": {"hexsha": "b5a27b1155b3880cf89b5f7b7b02bae5d7eef5c2", "size": 179195, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notes/Loss Manifolds.ipynb", "max_stars_repo_name": "mathnathan/notebooks", "max_stars_repo_head_hexsha": "63ae2f17fd8e1cd8d80fef8ee3b0d3d11d45cd28", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-12-04T11:04:45.000Z", "max_stars_repo_stars_event_max_datetime": "2019-12-04T11:04:45.000Z", "max_issues_repo_path": "notes/Loss Manifolds.ipynb", "max_issues_repo_name": "mathnathan/notebooks", "max_issues_repo_head_hexsha": "63ae2f17fd8e1cd8d80fef8ee3b0d3d11d45cd28", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notes/Loss Manifolds.ipynb", "max_forks_repo_name": "mathnathan/notebooks", "max_forks_repo_head_hexsha": "63ae2f17fd8e1cd8d80fef8ee3b0d3d11d45cd28", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 367.9568788501, "max_line_length": 56160, "alphanum_fraction": 0.9265381289, "converted": true, "num_tokens": 3183, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9294404096760998, "lm_q2_score": 0.905989822921759, "lm_q1q2_score": 0.8420635521787767}} {"text": "```python\nimport numpy as np\nfrom sympy import *\ninit_printing(use_latex='mathjax')\n```\n\n\n```python\nx, y = symbols('x y')\ng = x*x + 3*(y+1)**2 - 1\n```\n\n\n```python\ndiff(g, x)\n```\n\n\n\n\n$$2 x$$\n\n\n\n\n```python\ndiff(g, y)\n```\n\n\n\n\n$$6 y + 6$$\n\n\n\n\n```python\nf = - exp(x - y ** 2 + x * y)\n```\n\n\n```python\ndiff(f, x)\n```\n\n\n\n\n$$- \\left(y + 1\\right) e^{x y + x - y^{2}}$$\n\n\n\n\n```python\ndiff(f, y)\n```\n\n\n\n\n$$- \\left(x - 2 y\\right) e^{x y + x - y^{2}}$$\n\n\n\n\n```python\ng = cosh(y) + x - 2\n```\n\n\n```python\ndiff(g, x)\n```\n\n\n\n\n$$1$$\n\n\n\n\n```python\ndiff(g, y)\n```\n\n\n\n\n$$\\sinh{\\left (y \\right )}$$\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "583d760ffc1ad01e2e261284b9699211b3d26851", "size": 3625, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Certification 2/Week5.2 - quick diff.ipynb", "max_stars_repo_name": "The-Brains/MathForMachineLearning", "max_stars_repo_head_hexsha": "5cbd9006f166059efaa2f312b741e64ce584aa1f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2018-04-16T02:53:59.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-16T06:51:57.000Z", "max_issues_repo_path": "Certification 2/Week5.2 - quick diff.ipynb", "max_issues_repo_name": "The-Brains/MathForMachineLearning", "max_issues_repo_head_hexsha": "5cbd9006f166059efaa2f312b741e64ce584aa1f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Certification 2/Week5.2 - quick diff.ipynb", "max_forks_repo_name": "The-Brains/MathForMachineLearning", 
"max_forks_repo_head_hexsha": "5cbd9006f166059efaa2f312b741e64ce584aa1f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2019-05-20T02:06:55.000Z", "max_forks_repo_forks_event_max_datetime": "2020-05-18T06:21:41.000Z", "avg_line_length": 16.9392523364, "max_line_length": 57, "alphanum_fraction": 0.3997241379, "converted": true, "num_tokens": 238, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9294403999037782, "lm_q2_score": 0.9059898286330043, "lm_q1q2_score": 0.842063548633415}} {"text": "# Errors, Intervals, and CLT\n\n## Standard error: meaning and purpose\n\nThe previous example's sample means typically deviate about $0.7$. This is an estimate of the uncertainty on the sample mean. The standard deviation of a statistic's sampling distribution is called __standard error__ of that statistic.\n\nWith real data we do not have access to many samples or directly to the sampling distribution of the mean. If we know the standard deviation of the sampling distribution, we could say:\n\n* The sample mean is $177.88 \\pm 0.71$ at a $68\\%$ (1$\\sigma$) confidence level\n\n\n## Central Limit Theorem\n\nThe bell-shape of a sampling distribution is not coincidentally Gaussian, due to the __Central Limit Theorem__ (CLT) which states that:\n\n_the sum of $N$ independent and identically distributed (i.i.d.) random variables tends to the normal distribution as $N \\rightarrow \\infty$._\n\nAccording to the CLT, the _standard error of the mean_ (SEM) of an $N$-sized sample from a population with standard deviation $\\sigma$ is:\n\n$$SE(\\bar{x}) = \\frac{\\sigma}{\\sqrt{N}}$$\n\nWhen the standard deviation of the population is __unknown__ (the usual case) it can be approximated by the sample standard deviation $s$:\n\n$$SE(\\bar{x}) = \\frac{s}{\\sqrt{N}}$$\n\n### The 68 - 95 - 99.7 rule\n\nThe CLT explains the percentages the previous code block returns. These are approximately the areas of the PDF of the normal distribution in the ranges $\\left(\\mu - k\\sigma, \\mu + k\\sigma\\right)$ for $k = 1, 2, 3$.\n\n\n\n\n```python\n# Importing Libraries\nimport numpy as np\nfrom scipy import stats\nimport matplotlib.pyplot as plt\n%matplotlib inline\n```\n\n\n```python\nsigmas = [1,2,3]\n\npercentages = []\n\nfor sigma in sigmas:\n per = round(stats.norm.cdf(sigma) - stats.norm.cdf(-sigma),3)\n print(\"For\", sigma, \"sigma the area of the PDF is ~\", per)\n percentages.append(per)\n\nprint('')\nfor per in percentages:\n print(\"For\", per, \"% coverage, the interval is\", np.round(stats.norm.interval(per), 3))\n\n\n```\n\n ('For', 1, 'sigma the area of the PDF is ~', 0.683)\n ('For', 2, 'sigma the area of the PDF is ~', 0.954)\n ('For', 3, 'sigma the area of the PDF is ~', 0.997)\n \n ('For', 0.683, '% coverage, the interval is', array([-1.001, 1.001]))\n ('For', 0.954, '% coverage, the interval is', array([-1.995, 1.995]))\n ('For', 0.997, '% coverage, the interval is', array([-2.968, 2.968]))\n\n\n\n```python\ndef plotsigma(sig, col):\n x = np.linspace(-sig, sig, 100)\n y = stats.norm.pdf(x)\n text = str(sig) + \" sigma\" + (\"s\" if abs(sig) != 1 else \"\")\n plt.fill_between(x, y, y2=0, facecolor = col, label = text, alpha = 0.2)\n \nx = np.linspace(-5, 5, 100)\nplt.plot(x, stats.norm.pdf(x))\nplotsigma(3, \"m\")\nplotsigma(2, \"g\")\nplotsigma(1, \"b\")\nplt.legend()\nplt.show()\n```\n\n## Standard error of the mean from one sample\n\nIn real-life applications, we have no access to the sampling distribution. 
Instead, we get __one sample__ of observations and therefore one point estimate per statistic (e.g. mean and standard deviation.)\n\n### Assuming normality\n\nWhen $\\sigma$ is known, then for various sigma levels $k$:\n\n$$\\bar{x} \\pm k \\frac{\\sigma}{\\sqrt{N}} \\qquad \\equiv \\qquad \\bar{x} \\pm k \\times SE\\left(\\bar{x}\\right)$$\n\nwhich correspond to confidence levels equal to the area of the standard normal distribution between values $\\left(-k, k\\right)$:\n\n$$ C = Pr(-k < z < k) \\Longrightarrow C = \\Phi(k) - \\Phi(-k)$$\n\nwhere $z \\sim \\mathcal{N}\\left(0, 1\\right)$ and $\\Phi(z)$ is the CDF of standard normal distribution.\n\n### Assuming normality: t-Student approximation\n\nWhen $\\sigma$ is unknown, then the uncertainty of $s$ should be accounted for using the Student's t approximation. For large samples ($N > 30$) this is not necessary as the t-distribution is well approximated by normal distribution. If we decide to use it, then for the sample mean the following formula holds:\n\n$$\\bar{x} \\pm t_c\\left(\\frac{a}{2}, N-1\\right) \\frac{s}{\\sqrt{N}} \\qquad \\equiv \\qquad \\bar{x} \\pm t_c \\times SE\\left(\\bar{x}\\right)$$\n\nwhere $N$ is the sample size and $t_c$ is a critical value that depends on the requested significance level $a$ or equivalently the confidence level $C = 1-a$, and the degrees of freedom (here $N-1$):\n\n$$Pr(-t_c < t < t_c) = 1 - a = C$$\n\nThe critical value is actually the inverse CDF of the $t$-distribution with $N-1$ d.o.f., evaluated at $\\frac{a}{2}$.\n\n### A common misconception...\n\nThe probability of the true parameter to lie inside the confidence interval produced from a sample is not equal to $68\\%$: it either contains it or not.\n\nInstead, $68\\%$ is the probability for a sample from the same distribution and under the same circumstances, to produce a confidence interval containing the true mean. E.g. out of 1000 samples we expect that $\\approx 680$ of the $1\\sigma$ CIs will contain the sample mean.\n\n\n# Confidence intervals\n\n
    \n
* A confidence interval is an estimate of the range of a parameter of a population (in contrast to point estimates).\n* A confidence interval is an interval with random endpoints which contains the parameter of interest with a specified probability $C$, called the confidence level.
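\n\nTo make the 'random endpoints' idea concrete, here is a small simulation sketch (the true mean, sample size, and repetition count below are made-up illustration values, using the imports from the first code cell): the interval is recomputed from each new sample, and it is the procedure that succeeds with probability $C$.\n\n\n```python\n# Hypothetical coverage check: how often does a 68% CI contain the true mean?\ntrue_mu, sigma, N, reps = 0.0, 1.0, 50, 1000\nsamples = stats.norm.rvs(true_mu, sigma, size=(reps, N))\nmeans = samples.mean(axis=1)\nsems = stats.sem(samples, axis=1)\nlo, hi = stats.t.interval(0.68, df=N - 1, loc=means, scale=sems)\nprint(np.mean((lo < true_mu) & (true_mu < hi)))  # expected to be close to 0.68\n```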
\n\nIt is closely related to hypothesis testing: confidence level is the complement of significance level: $C = 1 - a$.\n\n\n## Parametric\n\nIf the sampling distribution is either known or assumed (e.g. normal from CLT), then deriving the interval at a confidence level $C$ is straightforward:\n
\n* each endpoint corresponds to a value for the CDF: $p_1 = \\frac{1 - C}{2}$ and $p_2 = \\frac{1 + C}{2}$\n* find the percentiles $x_1$, $x_2$: the values for which $F(x_i) = p_i \\Longrightarrow x_i = F^{-1}(p_i)$, where $F(x)$ is the CDF of the sampling distribution\n* the confidence interval is $(x_1, x_2)$\n* if $\\hat{x}$ is the point estimate of the parameter of interest, then we can write down all three values in the format $\\hat{x}_{x_1 - \\hat{x}}^{x_2 - \\hat{x}}$; we shall always state the confidence level (a short sketch of this recipe follows the list)
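\n\nA minimal sketch of the recipe above (the distribution and level here are arbitrary placeholders, not anything derived earlier): pick the assumed sampling distribution, convert the confidence level to the two CDF values, and invert.\n\n\n```python\n# Hypothetical example of the percentile recipe; dist and C are arbitrary choices\ndist = stats.norm(loc=10, scale=2)\nC = 0.95\np1, p2 = (1 - C) / 2, (1 + C) / 2\nx1, x2 = dist.ppf(p1), dist.ppf(p2)\nprint((x1, x2))\nprint(dist.interval(C))  # the built-in method should give the same interval\n```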
\n\n### Confidence bounds\n\nSimilarly, we can get one-sided limits. At a confidence level $C$, the lower/upper confidence bounds from a distribution with CDF $F(x)$ are:\n\n
* upper: $F^{-1}(C)$, corresponding to the interval $\\left[F^{-1}(0), \\, F^{-1}(C)\\right]$\n* lower: $F^{-1}(1-C)$, corresponding to the interval $\\left[F^{-1}(1-C), \\, F^{-1}(1)\\right]$
\n\nFor example, if $F(x)$ is the CDF of the standard normal distribution, then $F(-1) \\approx 0.16$ and $F(1) \\approx 0.84$. Therefore:\n\n
* $1$ is the upper $84\\%$ confidence bound\n* $-1$ is the lower $84\\%$ confidence bound
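\n\nA quick numerical check of these values with the `stats` module imported earlier:\n\n\n```python\nprint(stats.norm.cdf(-1), stats.norm.cdf(1))  # roughly 0.16 and 0.84\nprint(stats.norm.ppf(0.84))  # roughly 1: the upper 84% confidence bound\n```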
\n\n### Example\n\nLet's assume that we did the math and found that the sampling distribution of our parameter is the exponential power distribution with shape parameter $b = 3.8$. Then the confidence intervals at various levels would be assymetric:\n\n\n```python\ndist = stats.exponpow(3.8)\n# dist = stats.norm(0,10)\nci68 = dist.interval(0.68) # using .interval() method\nci80 = [dist.ppf(0.1), dist.ppf(0.9)] # ...or using percentile point function\ncb95 = [dist.ppf(0), dist.ppf(0.95)] # ...which is handy for one-sided intervals\n# ci95 = dist.interval(0.95) # using .interval() method\n\nx = np.linspace(0, 1.5, 400); y = dist.pdf(x) # total distr\nx68 = np.linspace(ci68[0], ci68[1]); y68 = dist.pdf(x68) # 68% conf interval\nx80 = np.linspace(ci80[0], ci80[1]); y80 = dist.pdf(x80) # 80% ...\nx95 = np.linspace(cb95[0], cb95[1]); y95 = dist.pdf(x95) # 95% ...\nplt.plot(x, y, \"k-\")\nplt.fill_between(x95, y95, 0 * y95, alpha = 0.5, label = \"95% UCB\", facecolor = \"r\")\nplt.fill_between(x80, y80, 0 * y80, alpha = 0.5, label = \"80% CI\", facecolor = \"b\")\nplt.fill_between(x68, y68, 0 * y68, alpha = 0.0, label = \"68% CI\", hatch = \"////\")\nplt.legend()\nplt.show() \n```\n\n## Non-parametric\n\nFor the sample mean, the standard error is well defined and performs quite well for most cases. Though, we may want to compute the standard error of other parameters for which the sampling distribution is either unknown or difficult to compute.\n\nThere are various methods for the non-parametric estimation of the standard error / confidence interval. Here we will see two such methods: bootstrap and jackknife.\n\n### Bootstrap\n\nBootsrapping is a resampling (with replacement) method. As we saw before, by drawing many samples we can approximate the sampling distribution of the mean which is impossible for real data without the assumption of a distribution.\n\nBootstrap method is based on randomly constructing $B$ samples from the original one, by sampling with replacement from the latter. The size of the resamples should be equal to the size of the original sample. For example, with the sample $X$ below, we can create $B = 5$ new samples $Y_i$:\n\n$$X = \\left[1, 8, 3, 4, 7\\right]$$\n\n$$\\begin{align}\nY_1 &= \\left[8, 3, 3, 7, 1\\right] \\\\\nY_2 &= \\left[3, 1, 4, 4, 1\\right] \\\\\nY_3 &= \\left[3, 7, 1, 8, 7\\right] \\\\\nY_4 &= \\left[7, 7, 4, 3, 1\\right] \\\\\nY_5 &= \\left[1, 7, 8, 3, 4\\right]\n\\end{align}$$\n\nThen, we compute the desired sample statistic for each of those samples to form an empirical sampling distribution. 
The standard deviation of the $B$ sample statistics is the bootstrap estimate of the standard error of the statistic.\n\n#### Example: Standard Error (SE) of the median and skewness\n\n\n```python\nN = 100 # size of sample\nM = 10000 # no of samples drawn from the distr.\nB = 10000 # no of bootstrap resample drawn from a signle sample of the distr.\n\n# Various distributions to be tested\n# dist = stats.norm(0, 1)\ndist = stats.uniform(0, 1)\n# dist = stats.cauchy()\n#dist = stats.dweibull(8.5)\n\nmany_samples = dist.rvs([M, N]) # this creates many sample of the same distr.\nm_many = np.median(many_samples, axis = 1)\ns_many = stats.skew(many_samples, axis = 1)\n\nboot_samples = np.random.choice(sample, (B, N), replace = True) # this creates bootstrapped samples of one single sample\nm_boot = np.median(boot_samples, axis = 1)\ns_boot = stats.skew(boot_samples, axis = 1)\n\nm_norm = np.sqrt(np.pi / (2.0 * N)) * dist.std() # this is the calculation if we assume a normal distribution\ns_norm = np.sqrt(6 * N * (N - 1) / ((N + 1) * (N - 2) * (N + 3)))\n\nplt.figure(figsize = [12, 2])\nplt.subplot(1, 2, 1)\nplt.hist(m_boot, 30, histtype = \"step\", normed = True, label = \"boot\")\nplt.hist(m_many, 30, histtype = \"step\", normed = True, label = \"many\")\nplt.title(\"Median\")\nplt.legend()\nplt.subplot(1, 2, 2)\nplt.hist(s_boot, 15, histtype = \"step\", normed = True, label = \"boot\")\nplt.hist(s_many, 15, histtype = \"step\", normed = True, label = \"many\")\nplt.title(\"Skewness\")\nplt.legend()\nplt.show()\n\nprint(\"SE median (if normal) :\", m_norm)\nprint(\"SE median (bootstrap) :\", np.std(m_boot))\nprint(\"SE median (many samples):\", np.std(m_many))\nprint(\"-----------------------------------------\")\nprint(\"SE skewness (if normal) :\", s_norm)\nprint(\"SE skewness (bootstrap) :\", np.std(s_boot))\nprint(\"SE skewness (many samples):\", np.std(s_many))\n```\n\nWe can see that the many samples drawn from the initial distribution have always Gaussian distribution of means due to CLT. However, this is not the case for the bootstrapped method, which however performs quite well. This is the reason we may use bootstrapping, when we don't know or don't expect that distribution to be normal.\n\n### Jackknife resampling\n\nThis older method inspired the Bootstrap which can be seen as a generalization (Jackknife is the linear approximation of Bootstrap.) It estimates the sampling distribution of a parameter on an $N$-sized sample through a collection of $N$ sub-samples by removing one element at a time.\n\nE.g. the sample $X$ leads to the Jackknife samples $Y_i$:\n\n$$ X = \\left[1, 7, 3\\right] $$\n\n$$\n\\begin{align}\nY_1 &= \\left[7, 3\\right] \\\\\nY_2 &= \\left[1, 3\\right] \\\\\nY_3 &= \\left[1, 7\\right]\n\\end{align}\n$$\n\nThe Jackknife Replicate $\\hat\\theta_{\\left(i\\right)}$ is the value of the estimator of interest $f(x)$ (e.g. 
mean, median, skewness) for the $i$-th subsample and $\\hat\\theta_{\\left(\\cdot\\right)}$ is the sample mean of all replicates:\n\n$$\n\\begin{align}\n\\hat\\theta_{\\left(i\\right)} &= f\\left(Y_i\\right) \\\\\n\\hat\\theta_{\\left(\\cdot\\right)} &= \\frac{1}{N}\\sum\\limits_{i=1}^N {\\hat\\theta_{\\left(i\\right)}}\n\\end{align}\n$$\n\nand the Jackknife Standard Error of $\\hat\\theta$ is computed using the formula:\n \n$$ SE_{jack}(\\hat\\theta) = \\sqrt{\\frac{N-1}{N}\\sum\\limits_{i=1}^N \\left[\\hat{\\theta}\\left(Y_i\\right) - \\hat\\theta_{\\left(\\cdot\\right)} \\right]^2} = \\cdots = \\frac{N-1}{\\sqrt{N}} s$$\n\nwhere $s$ is the standard deviation of the replicates.\n\n#### Example: estimation of the standard error of the mean\n\n\n```python\nN = 100 \nM = 100\n\n# Distributions to be tested\n#dist = stats.norm(0, 1)\ndist = stats.uniform(0, 1)\n#dist = stats.cauchy()\n#dist = stats.dweibull(8.5)\n\ndef jackknife(x):\n return [[x[j] for j in range(len(x)) if j != i] for i in range(len(x))]\n# Be careful the aboe is a double loop: first for i and then for j\n\nsample = dist.rvs(N)\nSE_clt = np.std(sample) / np.sqrt(N)\n\nmany_samples = dist.rvs([M, N])\nmany_means = np.mean(many_samples, axis = 1)\nmany_medians = stats.kurtosis(many_samples, axis = 1)\nSE_mean_many = np.std(many_means)\nSE_median_many = np.std(many_medians)\n\njack_samples = jackknife(sample)\njack_means = np.mean(jack_samples, axis = 1)\njack_medians = stats.kurtosis(jack_samples, axis = 1)\nSE_mean_jack = np.std(jack_means) * (N - 1.0) / np.sqrt(N)\nSE_median_jack = np.std(jack_medians) * (N - 1.0) / np.sqrt(N)\n\nprint(\"[ Standard error of the mean ]\")\nprint(\" SEM formula :\", SE_clt)\nprint(\" Jackknife :\", SE_mean_jack)\nprint(\" Many samples :\", SE_mean_many)\nprint(\"\\n[Standard error of the median ]\")\nprint(\" Jackknife :\", SE_median_jack)\nprint(\" Many samples :\", SE_median_many)\n```\n\n [ Standard error of the mean ]\n (' SEM formula :', 0.029713229319835173)\n (' Jackknife :', 0.029713229319835111)\n (' Many samples :', 0.027021166426376305)\n \n [Standard error of the median ]\n (' Jackknife :', 0.12885958726617672)\n (' Many samples :', 0.10724818607554371)\n\n\n# Propogation of Uncertainty\n\nLet's very briefly introduce the general cases, for completeness. This will be followed by the specific case used typically by engineers and physical scientists, which is perhaps of most interest to us.\n\n## 1) Linear Combinations\nFor $\\big\\{{f_k(x_1,x_2,\\dots,x_n)}\\big\\}$, a set of $m$ functions that are linear combinations of $n$ variables $x_1,x_2,\\dots,x_3$ with combination coefficients $A_{k1},A_{k2},\\dots,A_{kn},k=1 \\dots m$.\n\n$\\large{f_k=\\sum\\limits_{i=1}^{n} A_{ki}x_i}$ or $\\large{f=Ax}$\n\nFrom here, we would formally write out the $\\textbf{variance-covariance matrix}$, which deals with the correlation of uncertainties across variables and functions, and contains many $\\sigma$'s. 
Each covariance term $\\sigma_{ij}$ may be expressed in terms of a $\\textbf{correlation coefficient}$ $\\rho_{ij}$ as $\\sigma_{ij}=\\rho_{ij}\\sigma_{i}\\sigma_{j}$.\n\nIn our most typical case, where variables are uncorrelated, the entire matrix may be reduced to:\n\n$\\large{\\sigma^{2}_{f} = \\sum\\limits_{i=1}^{n} a^{2}_{i}\\sigma^{2}_{i}}$\n\nThis form will be seen in the most likely applicable case for astronomers below.\n\n## 2) Non-linear Combinations\n\nWhen $f$ is a non-linear combination of the variables $x$, $f$ must usually be linearized by approximation to a first-order Taylor series expansion:\n\n$\\large{f_k=f^{0}_{k}+\\sum\\limits^{n}_{i} \\frac{\\partial f_k}{\\partial x_i} x_i}$\n\nwhere $\\large{\\frac{\\partial f_k}{\\partial x_i}}$ denotes the partial derivative of $f_k$ with respect to the $i$-th variable.\n\n### Simplification\n\nIf we neglect correlations, or assume the variables are independent, we get the commonly used formula for analytical expressions:\n\n$\\large{\\sigma^{2}_{f}=\\big(\\frac{\\partial f}{\\partial x}\\big)^{2}\\sigma^{2}_{x}+\\big(\\frac{\\partial f}{\\partial y}\\big)^{2}\\sigma^{2}_{y}+\\dots}$\n\nwhere $\\sigma_f$ is the standard deviation of the function $f$, with $\\sigma_x$ being the standard deviation of the variable $x$ and so on.\n\nThis formula is based on the assumption of the linear characteristics of the gradient of $f$, and is therefore only a good estimation as long as the standard deviations are small compared to the partial derivatives.\n\n### Example: Mass Ratio\n\nThe mass ratio for a binary system may be expressed as:\n\n$\\large{q=\\frac{K_1}{K_2}=\\frac{M_2}{M_1}}$\n\nwhere K's denote the velocity semiamplitudes (from a Keplerian fit to the radial velocities) and M's represent the individual component masses.\n\nInserting this into the formula gives:\n\n$\\large{\\sigma^{2}_{q}=\\big(\\frac{\\partial q}{\\partial K_1}\\big)^{2}\\sigma^{2}_{K_1}+\\big(\\frac{\\partial q}{\\partial K_2}\\big)^{2}\\sigma^{2}_{K_2}}$\n\n$\\large{\\sigma^{2}_{q}=\\big(\\frac{1}{K_2}\\big)^{2}\\sigma^{2}_{K_1}+\\big(\\frac{K_1}{K_2^2}\\big)^{2}\\sigma^{2}_{K_2}}$\n\nFor a simple application of such a case, let's use the values of velocity semiamplitudes for the early-type B binary HD 42401 from Williams (2009):\n\n$K_1=151.4\\pm0.3$ km s$^{-1}$\n\n$K_2=217.9\\pm1.0$ km s$^{-1}$\n\nInserting these into the equations and computing the value, we get:\n\n$q=0.6948\\pm0.0038$\n\n## 3) Monte Carlo sampling\n\nAn uncertainty $\\sigma_x$ expressed as a standard error of the quantity $x$ implies that we could treat the latter as a normally distributed random variable: $X \\sim \\mathcal{N}\\left(x, \\sigma_x^2\\right)$. By sampling $M$ times each variable $x_i$ and computing $f$ we are experimentaly exploring the different outcomes $f$ could give.\n\n### 1) Example\n\nThe following code computes the mass ratio for HD 42401 and its uncertainty using uncertainty propagation and Monte Carlo sampling. 
For better comparison, we print 6 digits after decimal point.\n\n\n```python\n# observed quantities\nK1 = 151.4\nK2 = 217.9\nK1_err = 0.3\nK2_err = 1.0\n\n# Error propagation\nq = round(K1 / K2, 6)\nq_err = round(np.sqrt((K1_err / K2) ** 2.0 + (K2_err * K1 / K2 ** 2.0) ** 2.0), 6)\n\n# Monte Carlo sampling\nN = 10000\nK1_sample = stats.norm.rvs(K1, K1_err, N)\nK2_sample = stats.norm.rvs(K2, K2_err, N)\nq_sample = K1_sample / K2_sample\nq_mean = round(np.mean(q_sample), 6)\nq_std = round(np.std(q_sample), 6)\n# q_CI95 = np.percentile(q_sample, [0.025, 0.975])\nprint(\"From error propagation formula: q =\", q, \"+/-\", q_err)\nprint(\"From monte carlo samlping: q =\", q_mean, \"+/-\", q_std)\nplt.hist(q_sample, 30)\nplt.title(\"Sampling outcome\")\nplt.xlabel(\"q = K1/K2\")\nplt.show()\n```\n\n### 2) Example \n\nEstimates of the true distance modulus and the radial velocity of NGC 2544 are (unpublished galaxy catalog):\n\n$$\n\\begin{align}\n(m - M)_0 &= 33.2 \\pm 0.5 \\\\\nv &= \\left(3608 \\pm 271\\right) \\text{km/s}\n\\end{align}\n$$\n\nApplying the Hubble's Law for this object leads to the following formulæ and values:\n\n$$\n\\begin{align}\nH_0 &= \\frac{v}{r} = \\frac{v}{10^{0.2 m - 5}} = 82.7 \\, \\text{km}\\,\\text{s}^{-1}\\,\\text{Mpc}^{-1}\\\\\n\\sigma_{H_0}^2 &= \\left(\\frac{1}{10^{0.2 m - 5}}\\right)^2\\left[\\sigma_v^2 + \\left(\\frac{\\ln{10}}{5}v \\times \\sigma_m \\right)^2\\right] = 20.0 \\, \\text{km}\\,\\text{s}^{-1}\\,\\text{Mpc}^{-1}\n\\end{align}\n$$\n\nBut can we trust the uncertainty propagation formula for distance moduli? Due to the logarithmic nature of distance modulus, a change by $\\Delta\\left(m-M\\right)_0$ translates into multiplying/dividing the distance by a value close to $1$. Let alone, distance is always positive.\n\nApplying the uncertainty propagation formula and the sampling method we get:\n\n\n```python\nm = 33.2\nm_err = 0.5\nv = 3608\nv_err = 271\n\n# Estimate with formula and error propagation\nH0 = v / 10.0 ** (0.2 * m - 5.0)\nH0_err = np.sqrt(v_err ** 2.0 + (np.log(10.0) / 5.0 * v * m_err) ** 2.0) / 10.0 ** (0.2 * m - 5.0)\nprint(\"Error propagation : H0 =\", round(H0, 3), \"+/-\", round(H0_err, 3))\n\n# Estimate with sampling\nN = 100000\nm_sample = stats.norm.rvs(m, m_err, N)\nv_sample = stats.norm.rvs(v, v_err, N)\nH0_sample = v_sample / (10.0 ** (0.2 * m_sample - 5.0))\nH0_mean = np.mean(H0_sample)\nprint(\"Monte Carlo : H0 =\", round(H0_mean, 3), \"+/-\", round(np.std(H0_sample), 3))\n\n# Plot Monte-Carlo\nplt.hist(H0_sample, 100, normed = True, label = \"sampling\")\n# and plot the formula\nx = np.linspace(H0 - 4 * H0_err, H0 + 4 * H0_err, 100)\nplt.plot(x, stats.norm.pdf(x, H0, H0_err), \"r-\", label = \"unc. 
prop.\")\nplt.legend()\nplt.show()\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "2de88b045a352b3b3c77ca66c993a30c43c4711c", "size": 101378, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "errors_intervals.ipynb", "max_stars_repo_name": "tbitsakis/astro_projects", "max_stars_repo_head_hexsha": "7453512779467d264b1455756d24e3d46409218d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2018-09-27T07:47:56.000Z", "max_stars_repo_stars_event_max_datetime": "2018-09-27T07:48:02.000Z", "max_issues_repo_path": "errors_intervals.ipynb", "max_issues_repo_name": "tbitsakis/astro_projects", "max_issues_repo_head_hexsha": "7453512779467d264b1455756d24e3d46409218d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "errors_intervals.ipynb", "max_forks_repo_name": "tbitsakis/astro_projects", "max_forks_repo_head_hexsha": "7453512779467d264b1455756d24e3d46409218d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 148.4304538799, "max_line_length": 18608, "alphanum_fraction": 0.8528477579, "converted": true, "num_tokens": 6309, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9294403999037784, "lm_q2_score": 0.9059898191142621, "lm_q1q2_score": 0.8420635397863117}} {"text": "## Problem\nYou are faced repeatedly with a choice among n different options, or actions. After each choice you receive a\nnumerical reward chosen from a stationary probability distribution that depends on the action you selected. Your objective is to maximize the expected total reward over some time period, for example, over 1000 action selections,\nor time steps.\n\n## Exploring and exploiting\nLet expected or mean of reward of each action be called its *value*. At any time step if we choose best possible action as highest of our observed values of all actions. We call such action *greedy* and this process is called *exploiting*. If we choose one of the nongreedy action, this process will be called *exploration* because this may improve observed values of nongreedy actions which may be optimal.\n\n## Action value method\nwe denote the true (actual) value of action a as $q\u2217(a)$, and the estimated value on the tth time step as $Q_t(a)$. If by the $t^{th}$ time step action $a$ has been chosen $K_a$ times prior to $t$, yielding rewards $R_1, R_2,...R_{K_a}$, then its value is estimated to be: \n$$Q_t(a) = \\frac{R_1 + R_2 + .... + R_{K_a}}{K_a}$$\n\nWe choose explore with probability $\\epsilon$ and choose greedy action with probability $(1-\\epsilon)$ i.e Action with max $Q_t(a)$. This way of near greedy selection is called $\\epsilon-$greedy method\n\n## Softmax action selection\n\nIn Softmax action selection we choose *Exploration* action based on ranking given by *softmax action selection rules*. One method of ranking is *Gibbs, or Boltzmann, distribution*. It\nchooses action $a$ on the $t^{th}$ time step with probability, $$\\frac{e^{Q_t(a)/\\tau}}{\\sum_{i=1}^{n}{e^{Q_t(i)/\\tau}}}$$\n\n\n#### Optimized way to update mean\nLet mean at time t is $m_t$. Let new observation at time $t+1$ is $x_{t+1}$.\n$$\n\\begin{align}\nm_t &= \\frac{x_1 + x_2 + ....... + x_t}{t} \\tag{1} \\\\\nm_{t+1} &= \\frac{x_1 + x_2 + ....... 
+ x_t + x_{t+1}}{t+1} \\\\\n&= \\frac{m_t*t + x_{t+1}}{t+1} && \\text{by (1)} \\\\\n&= \\frac{m_t*t + m_t -m_t + x_{t+1}}{t+1} \\\\\n&= \\frac{m_t(t + 1) + (x_{t+1}-m_t)}{t+1} \\\\\nm_{t+1} &= m_t + \\frac{x_{t+1}-m_t}{t+1}\n\\end{align}\n$$\n\n\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sb\n%matplotlib inline\n```\n\n\n```python\n#np.random.seed(seed=2017)\n\n# Environment code\nclass Bandit:\n def __init__(self,n):\n self.n = n\n self.q_star_arr = np.random.normal(size=self.n)\n \n # get reward R\n def reward(self,a): \n return np.random.normal(loc=self.q_star_arr[a])\n \n def getOptimal(self):\n return self.q_star_arr.argmax()\n```\n\n\n```python\nclass ActionValue:\n def __init__(self,n,epsilon,bandit,max_t = 1000):\n self.n = n\n self.bandit = bandit\n self.max_t = max_t\n self.Qta = np.zeros(n)\n self.Ka = np.zeros(n,dtype=int)\n self.optimalHistory = np.zeros(max_t)\n self.epsilon = epsilon \n self.rewardHistory = np.zeros(max_t)\n self.algoRun()\n \n def algoRun(self):\n for t in range(self.max_t):\n greedyAction = self.Qta.argmax()\n \n if np.random.rand() < self.epsilon:#exploring\n curAction = np.random.randint(self.n -1) \n #not to use greedyAction so\n if curAction>=greedyAction:\n curAction += 1\n else:#exploiting\n curAction = greedyAction\n \n self.optimalHistory[t] = 1 if curAction == self.bandit.getOptimal() else 0\n curActionReward = self.bandit.reward(curAction)\n self.rewardHistory[t] = curActionReward\n self.Ka[curAction] += 1\n #update Qt(a)\n self.Qta[curAction] += (curActionReward - self.Qta[curAction])/self.Ka[curAction] \n \n# n =10\n# bandit = Bandit(10)\n# print(bandit.q_star_arr)\n# max_t = 1000\n# a1 = ActionValue(n,0.1,bandit,max_t=max_t)\n# a1.algoRun()\n# print(a1.Ka)\n# a2 = ActionValue(n,0.01,bandit,max_t=max_t)\n# a2.algoRun()\n# print(a2.Ka)\n# a3 = ActionValue(n,0.0,bandit,max_t=max_t)\n# a3.algoRun()\n# print(a3.Ka)\n```\n\n\n```python\nclass SoftmaxAction:\n def __init__(self,n,tau,bandit,max_t = 1000):\n self.n = n\n self.tau = tau\n self.bandit = bandit\n self.max_t = max_t\n self.Qta = np.zeros(n)\n self.Ka = np.zeros(n,dtype=int)\n self.optimalHistory = np.zeros(max_t)\n self.rewardHistory = np.zeros(max_t)\n self.algoRun()\n \n def algoRun(self):\n for t in range(self.max_t):\n probArray = np.exp(self.Qta/ self.tau) \n probArray = probArray / sum(probArray) \n actionList = list(range(len(self.Qta)))\n curAction = np.random.choice(actionList,p=probArray)\n self.optimalHistory[t] = 1 if curAction == self.bandit.getOptimal() else 0\n curActionReward = self.bandit.reward(curAction)\n self.rewardHistory[t] = curActionReward\n self.Ka[curAction] += 1\n #update Qt(a)\n self.Qta[curAction] += (curActionReward - self.Qta[curAction])/self.Ka[curAction] \n\n```\n\n\n```python\nclass SoftmaxEpsilonGreedyAction:\n def __init__(self,n,epsilon,tau,bandit,max_t = 1000):\n self.n = n\n self.tau = tau\n self.bandit = bandit\n self.max_t = max_t\n self.Qta = np.zeros(n)\n self.Ka = np.zeros(n,dtype=int)\n self.optimalHistory = np.zeros(max_t)\n self.epsilon = epsilon \n self.rewardHistory = np.zeros(max_t)\n self.algoRun()\n \n def algoRun(self):\n for t in range(self.max_t):\n greedyAction = self.Qta.argmax()\n \n if np.random.rand() < self.epsilon:#exploring\n probArray = np.exp(self.Qta/ self.tau) \n actionList = list(range(len(self.Qta)))\n #not to use greedyAction so\n actionList.remove(greedyAction)\n probArray = np.delete(probArray, greedyAction)\n \n probArray = probArray / sum(probArray)\n curAction = 
np.random.choice(actionList,p=probArray) \n else:#exploiting\n curAction = greedyAction\n \n self.optimalHistory[t] = 1 if curAction == self.bandit.getOptimal() else 0\n curActionReward = self.bandit.reward(curAction)\n self.rewardHistory[t] = curActionReward\n self.Ka[curAction] += 1\n #update Qt(a)\n self.Qta[curAction] += (curActionReward - self.Qta[curAction])/self.Ka[curAction] \n \n```\n\n\n```python\nclass TestBed:\n def __init__(self, n, epsilon,tau,algoType,max_t = 1000):\n self.n = n\n self.maxExp = 2000\n self.max_t = max_t \n self.epsilon = epsilon \n self.tau = tau\n self.averageRewardHistory = np.zeros(self.max_t)\n self.averageOptimalHistory = np.zeros(self.max_t)\n \n for i in range(self.maxExp): \n bandit = Bandit(n)\n if algoType==\"ActionValue\":\n exp = ActionValue(self.n,self.epsilon,bandit,max_t=self.max_t)\n elif algoType==\"SoftmaxAction\":\n exp = SoftmaxAction(self.n,self.tau,bandit,max_t=self.max_t)\n elif algoType==\"SoftmaxEpsilonGreedyAction\":\n exp = SoftmaxEpsilonGreedyAction(self.n,self.epsilon,self.tau,bandit,max_t=self.max_t)\n \n self.averageRewardHistory += exp.rewardHistory \n self.averageOptimalHistory += exp.optimalHistory \n \n self.averageRewardHistory /= self.maxExp\n self.averageOptimalHistory *= 100.0/self.maxExp\n```\n\n\n```python\nn =10\nmax_t =1000\na2 = TestBed(n,0.01,None,\"ActionValue\",max_t)\na3 = TestBed(n,0.0,None,\"ActionValue\",max_t)\na4 = TestBed(n,None,0.2,\"SoftmaxAction\",max_t)\na5 = TestBed(n,None,0.05,\"SoftmaxAction\",max_t)\na6 = TestBed(n,0.01,0.2,\"SoftmaxEpsilonGreedyAction\",max_t)\na7 = TestBed(n,0.01,0.05,\"SoftmaxEpsilonGreedyAction\",max_t)\n\n\n\n```\n\n\n```python\nt = range(max_t)\nplt.plot(t,a2.averageRewardHistory,label='ActionValue: epsilon=0.01')\nplt.plot(t,a3.averageRewardHistory,label='ActionValue: epsilon=0.0')\nplt.plot(t,a4.averageRewardHistory,label='SA: tau=0.2')\nplt.plot(t,a5.averageRewardHistory,label='SA: tau=0.05')\nplt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)\nplt.show()\n```\n\n\n```python\nplt.plot(t,a2.averageOptimalHistory,label='ActionValue: epsilon=0.01')\nplt.plot(t,a3.averageOptimalHistory,label='ActionValue: epsilon=0.0')\nplt.plot(t,a4.averageOptimalHistory,label='SA: tau=0.2')\nplt.plot(t,a5.averageOptimalHistory,label='SA: tau=0.05')\nplt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)\nplt.show()\n```\n\n\n```python\nplt.plot(t,a2.averageRewardHistory,label='ActionValue: epsilon=0.01')\nplt.plot(t,a3.averageRewardHistory,label='ActionValue: epsilon=0.0')\nplt.plot(t,a6.averageRewardHistory,label='SEGA: epsilon=0.01 tau=0.2')\nplt.plot(t,a7.averageRewardHistory,label='SEGA: epsilon=0.01 tau=0.05')\nplt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)\nplt.show()\n```\n\n\n```python\nplt.plot(t,a2.averageOptimalHistory,label='ActionValue: epsilon=0.01')\nplt.plot(t,a3.averageOptimalHistory,label='ActionValue: epsilon=0.0')\nplt.plot(t,a6.averageOptimalHistory,label='SEGA: epsilon=0.01 tau=0.2')\nplt.plot(t,a7.averageOptimalHistory,label='SEGA: epsilon=0.01 tau=0.05')\nplt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)\nplt.show()\n```\n\n\n```python\na8 = TestBed(n,None,0.1,\"SoftmaxAction\",max_t)\na9 = TestBed(n,None,0.15,\"SoftmaxAction\",max_t)\na10 = TestBed(n,0.1,0.15,\"SoftmaxEpsilonGreedyAction\",max_t)\na11 = TestBed(n,0.1,0.2,\"SoftmaxEpsilonGreedyAction\",max_t)\n```\n\n\n```python\nplt.plot(t,a2.averageOptimalHistory,label='ActionValue: epsilon=0.01')\nplt.plot(t,a3.averageOptimalHistory,label='ActionValue: 
epsilon=0.0')\nplt.plot(t,a8.averageOptimalHistory,label='SA: tau=0.1')\nplt.plot(t,a9.averageOptimalHistory,label='SA: tau=0.15')\nplt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)\nplt.show()\n```\n\n\n```python\nplt.plot(t,a2.averageOptimalHistory,label='ActionValue: epsilon=0.01')\nplt.plot(t,a3.averageOptimalHistory,label='ActionValue: epsilon=0.0')\nplt.plot(t,a10.averageOptimalHistory,label='SEGA: epsilon=0.1 tau=0.15')\nplt.plot(t,a11.averageOptimalHistory,label='SEGA: epsilon=0.1 tau=0.2')\nplt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)\nplt.show()\n```\n\n\n```python\nplt.plot(t,a2.averageOptimalHistory,label='ActionValue: epsilon=0.01')\nplt.plot(t,a9.averageOptimalHistory,label='SA: tau=0.15')\nplt.plot(t,a10.averageOptimalHistory,label='SEGA: epsilon=0.1 tau=0.15')\nplt.plot(t,a11.averageOptimalHistory,label='SEGA: epsilon=0.1 tau=0.2')\nplt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)\nplt.show()\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "da164d55472a948ea23d95453ae9bc308675e37f", "size": 258014, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/Reinforcement_learning/RL_Part_1_ArmedBandit_Softmax.ipynb", "max_stars_repo_name": "rakesh-malviya/MLCodeGems", "max_stars_repo_head_hexsha": "b9b2b4c2572f788724a7609499b3adee3a620aa4", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-02-19T14:42:57.000Z", "max_stars_repo_stars_event_max_datetime": "2020-02-19T14:42:57.000Z", "max_issues_repo_path": "notebooks/Reinforcement_learning/RL_Part_1_ArmedBandit_Softmax.ipynb", "max_issues_repo_name": "rakesh-malviya/MLCodeGems", "max_issues_repo_head_hexsha": "b9b2b4c2572f788724a7609499b3adee3a620aa4", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/Reinforcement_learning/RL_Part_1_ArmedBandit_Softmax.ipynb", "max_forks_repo_name": "rakesh-malviya/MLCodeGems", "max_forks_repo_head_hexsha": "b9b2b4c2572f788724a7609499b3adee3a620aa4", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2017-11-09T11:09:31.000Z", "max_forks_repo_forks_event_max_datetime": "2020-12-17T06:38:28.000Z", "avg_line_length": 526.5591836735, "max_line_length": 46022, "alphanum_fraction": 0.9275775733, "converted": true, "num_tokens": 3092, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9294403979493139, "lm_q2_score": 0.9059898203834278, "lm_q1q2_score": 0.8420635391952005}} {"text": "# HW1 Try out gradient descent\n\n## Due Wed Oct 6th, 2021 at 11:59PM\n\nYou should submit this jupyter notebook with your solutions. The solutions should include the code and also the output of all the cells.\n\nNote that for the problmes that require a cost function as input you should always use the most recent cost function that you have implemented (unless specified otherwise).\n\n1) [5 points] Calculate the derivative of following cost function and write it down:\n\n$g(w) = \\frac{1}{50}\\left(w^4 + w^2 + 10w - 50 \\right)$\n\n$\\frac{\\partial}{\\partial w}g(w) = \\frac{1}{50}(4w^3+2w+10) $\n\n2) [25 points] Implement the gradient descent function as discussed in class using the gradient derived in the last problem. The function should return the cost history for each step. 
Use the code template below:\n\n\n\n```python\n#gradient descent function\n#inputs: alpha (learning rate parameter), max_its (maximum number of iterations), w0 (initialization)\ndef gradient_descent(alpha,max_its,w0):\n gw = 1/50*(w0**4+w0**2+10*w0-50)\n dw = 1/50*(4*w0**3+2*w0+10)\n cost_history = [gw]\n for i in range(max_its):\n w0 = w0 - alpha*dw\n gw = 1/50*(w0**4+w0**2+10*w0-50)\n dw = 1/50*(4*w0**3+2*w0+10)\n cost_history.append(gw)\n ##Your code here\n return cost_history\n```\n\n3) [10 points] Run the gradient_descent function you implemented three times, with the following parameters. Generate a single plot showing the cost as a function of step number for all three runs (combine all three runs into a single plot). If you are not familiar with plotting in python, here is the docs for matplotlib:(https://matplotlib.org/3.2.1/api/_as_gen/matplotlib.pyplot.plot.html#matplotlib.pyplot.plot). \n\n\n$w^0$ = 2.0\nmax_its = 1000\n\n# first run\nalpha = 1\n# second run\nalpha = 0.1\n# third run\nalpha = 0.01\n\n\n\n```python\nimport matplotlib.pyplot as plt\n%matplotlib inline \n# To keep your plots embedded\n\nfirst_run = gradient_descent(alpha = 1, max_its = 1000, w0 = 2.0 )\nsecond_run = gradient_descent(alpha = 0.1, max_its = 1000, w0 = 2.0 )\nthird_run = gradient_descent(alpha = 0.01, max_its = 1000, w0 = 2.0 )\nplt.plot(first_run,'g-',second_run,\"b-\",third_run,\"r-\")# red for alpha0.01, blue for 0.1, and green for 1\n\n##Your code here\n```\n\nFor the next few problems we will be comparing fixed and diminishing learning rates\n\nTake the following cost function:\n\\begin{equation}\ng(w) = \\left \\vert w \\right \\vert\n\\end{equation}\n\n4) [5 points] Is this function convex? If no, why not? If yes, where is its global minimum?\n\n Yes, this function is convex. The global minimum is 0.\n\n5) [5 points] What is the derivative of the cost function? \n\nThis cost function is not differentiable at the point of zero, so the derivative of the cose function would be following:\n$$\\frac{\\partial}{\\partial w}g(w) = \\begin{cases}\n1 & w>0 \\\\\n-1 & w<0 \\\\\n\\end{cases}\n$$\n\n6) [20 points] Rewrite the gradient descent function from question 2 such that it takes the cost funciton g as input and uses the autograd library to calculate the gradient. The function should return the weight and cost history for each step. Use the code template below.\n\nautograd is a python package for automatic calculation of the gradient. Here is a tutorial on it: (http://www.cs.toronto.edu/~rgrosse/courses/csc321_2017/tutorials/tut4.pdf\n\nNote that in Python you can pass functions around like any other variables. That is why you can pass the cost function g to the gradient_descent function. 
\n\nYou should be able to install it by running \"pip install autograd\" in a cell in your Jupyter notebook.\n\n\n```python\nfrom autograd import grad \n\n#gradient descent function\n#inputs: g (cost function), alpha (learning rate parameter), max_its (maximum number of iterations), w (initialization)\ndef gradient_descent(g,alpha,max_its,w0):\n gradient = grad(g) ## This is how you use the autograd library to find the gradient of a function \n ##Your code here\n weight_history = [w0]\n cost_history = [g(w0)]\n for i in range(max_its):\n w0 -= alpha*gradient(w0)\n cost_history.append(g(w0))\n weight_history.append(w0)\n return weight_history,cost_history\n```\n\n7) [10 points] Make a run of max_its=20 steps of gradient descent with initialization at the point $w^0 = 1.75$, and a fixed learning rate of $\\alpha = 0.5$. Using the cost and weight history, plot the cost as a function of the weight for each step (cost on y-axis, weight on x-axis). Recall that the terms weight and parameter used interchangeably and both refer to w.\n\n\n```python\nimport matplotlib.pyplot as plt\n%matplotlib inline \ndef g(x):\n return abs(x)\nresult07 = gradient_descent(g, alpha = 0.5,max_its=20, w0=1.75)\nplt.plot(result07[0],result07[1])\n```\n\n8) [15 points] Make a run of max_its=20 steps of gradient descent with initialization at the point $w^0 = 1.75$, using the diminishing rule $\\alpha = \\frac{1}{k}$ (for this you have to modify the gradient_descent function slightly. Use the code template below. Using the cost and wiehgt history, plot the cost as a function of the weight for each step (cost on y-axis, weight on x-axis)\n\n\n```python\n#from autograd import grad \n\n#gradient descent function\n#inputs: g (cost function), alpha (learning rate parameter), max_its (maximum number of iterations), w (initialization)\ndef gradient_descent(g,alpha,max_its,w0):\n gradient = grad(g) ## This is how you use the autograd library to find the gradient of a function \n ##Your code here\n weight_history = [w0]\n cost_history = [g(w0)]\n for k in range(max_its):\n if k > 1:\n alpha = 1/k\n w0 -= alpha*gradient(w0)\n cost_history.append(g(w0))\n weight_history.append(w0)\n return weight_history,cost_history\nresult08 = gradient_descent(g, alpha = 0.5,max_its=20, w0=1.75)\nplt.plot(result08[0],result08[1])\n```\n\n9) [10 points] Generate a single plot showing the cost as a function of step number for both runs (combine all runs into a single plot). Which approach works better? Why ?\n\n\n```python\nplt.plot(result07[1],'b-',result08[1],\"g-\")#blue for the first one\n```\n\nThe diminishing alpha approach works better, because it ultimately descend to the value that near to the minimun of the cost function, while the first approach didn't.\n\nWe will now look at the oscilating behavior of gradient descent. \n\nTake the following cost function:\n$g(w) = w_0^2 + w_1^2 + 2\\sin(1.5 (w_0 + w_1)) +2$\n\nNote that this cost function has two parameters.\n\n\n```python\nimport autograd.numpy as np\ndef g(w):\n w0 = w[0]\n w1 = w[1]\n result = w0**2 + w1**2+ 2*np.sin(1.5*(w0+w1))+2\n return result\n \n```\n\n10) [5 points] Make sure your gradient descent function from problem 6 can handle cost functions with more than one parameter. You may need to rewrite it if you were not careful. 
Use the code template below (if your function from problem 6 is good, you can just copy and paste it here)\n\n\n```python\nfrom autograd import grad \n\n\n#gradient descent function\n#inputs: g (cost function), alpha (learning rate parameter), max_its (maximum number of iterations), w (initialization)\ndef gradient_descent(g,alpha,max_its,w0):\n gradient = grad(g) ## This is how you use the autograd library to find the gradient of a function \n ##Your code here\n weight_history = [w0]\n cost_history = [g(w0)]\n for i in range(max_its):\n for j in range(0,len(w0)):\n w0[j] -= alpha * gradient(w0)[j]\n cost_history.append(g(w0))\n weight_history.append(w0)\n return weight_history,cost_history\n```\n\n11) [10 points] Run the gradient_descent function with the cost function above three times with the following parameters. Generate a single plot showing the cost as a function of step number for all three runs (combine all three runs into a single plot). Use the code template below. Which alpha leads to an oscillating behavior?\n\n$w^0$ = [3.0,3.0]\nmax_its = 10\n\n# first run\nalpha = 0.01\n# second run\nalpha = 0.1\n# third run\nalpha = 1\n\n\n\n\n```python\nimport autograd.numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline \n\nw = [3.0,3.0]\ngd = grad(g)\ngd(w)\n#q11first = gradient_descent(g,alpha = 0.01 ,max_its = 10,w0 = w)\n#q11second = gradient_descent(g,alpha = 0.1 ,max_its = 10,w0 = w)\n#q11third = gradient_descent(g,alpha = 1 ,max_its = 10,w0 = w)\n#plt.plot(q11first[1],'b-',q11second[1],\"r-\",q11third[1],\"g-\")#green(third one lead to oscillating behavior)\n#Your code here\n```\n\n\n\n\n [array(3.26660921), array(3.26660921)]\n\n\n\nThe third alpha($\\alpha = 1$) leads to oscillating behavor.\n\n### 12) [15 points] This problem is about learning to tune fixed step length for gradient descent. Here, you are given a cost function:\n$g(w) = 2w_0^2 + w_1^2 +4w_2^2$ \n\nAssume your $w^0$= [5,5,5] and your max_iter = 100\n\nUse your latest gradient descent function with a fixed learning rate. Play around with at least 5 different values of alpha (using your intuition). Generate a single plot of the cost as a function of the number of iterations. 
Which value of alpha seems to converge the fastest?\n\nNot that your grade will not depend on how well you do, as long as you try at least 5 different values for alpha and plot them.\n\n\n```python\ndef g(w):\n #w0 = w[0]\n #w1 = w[1]\n #w2 = w[2]\n square = np.square(w)\n result = 2 * square[0] + square[1] + 4 * square[2]\n return result \n```\n\n\n```python\n%matplotlib inline \nw =[5.0,5.0,5.0]\n\nq12first = gradient_descent(g,alpha = 0.01 ,max_its = 100,w0 = w)\nq12second = gradient_descent(g,alpha = 0.1 ,max_its = 100,w0 = w)\nq12third = gradient_descent(g,alpha = 0.15 ,max_its = 100,w0 = w)\nq12forth = gradient_descent(g,alpha = 0.20 ,max_its = 100,w0 = w)\nq12fifth = gradient_descent(g,alpha = 0.9 ,max_its = 100,w0 = w)\nplt.plot(q12first[1],'b:',q12second[1],'r-',q12third[1],'y-',q12forth[1],'g-',q12fifth[1],'k-')\n#Since the larger learing rate converge so fast, it almost like a straight line and covers all other lines of the plot\n\n```\n\n\n```python\nplt.plot(q12fifth[1],'k-')\n```\n\nFrom result above, the 0.9 of alpha seems have the fastest speed to converge.\n", "meta": {"hexsha": "6e1e62e94f3f0ae30e49b50515107d16acf3732c", "size": 91010, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "HW1_cosc74_FA2021.ipynb", "max_stars_repo_name": "chenhz1223/ML-dartmouth-cs274", "max_stars_repo_head_hexsha": "e096547b5c084b924522d168435ff979357f2c92", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "HW1_cosc74_FA2021.ipynb", "max_issues_repo_name": "chenhz1223/ML-dartmouth-cs274", "max_issues_repo_head_hexsha": "e096547b5c084b924522d168435ff979357f2c92", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "HW1_cosc74_FA2021.ipynb", "max_forks_repo_name": "chenhz1223/ML-dartmouth-cs274", "max_forks_repo_head_hexsha": "e096547b5c084b924522d168435ff979357f2c92", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 148.4665579119, "max_line_length": 14636, "alphanum_fraction": 0.8834084167, "converted": true, "num_tokens": 2874, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9241418158002492, "lm_q2_score": 0.9111796985551102, "lm_q1q2_score": 0.8420592611430432}} {"text": "```python\nimport numpy as np\nimport scipy as sp\nimport matplotlib.pyplot as plt\nplt.style.use(['science', 'notebook'])\nimport sympy as smp\nfrom skimage import color\nfrom skimage import io\nfrom scipy.fft import fftfreq\nfrom scipy.fft import fft, ifft, fft2, ifft2\n```\n\n# Different Types of Fourier Transforms\n\n## 1. Fourier Transform (Continuous time and frequency)\n\nThis occurs when the functional form of your time series is known analytically (i.e. 
you have a formula $x(t)=...$ for it) and goes from $-\\infty$ to $\\infty$\n\n$$\\hat{x}(f) = \\int_{-\\infty}^{\\infty} x(t) e^{-2 \\pi i f t} dt $$\n\n**Solving Analytically (If Possible)**: Be careful giving proper information about your variables when you define them for sympy to work properly!\n\n\n```python\nt, f = smp.symbols('t, f', real=True)\n```\n\n\n```python\nt, f = smp.symbols('t, f', real=True)\nk = smp.symbols('k', real=True, positive=True)\nx = smp.exp(-k * t**2) * k * t\nx\n```\n\n\n\n\n$\\displaystyle k t e^{- k t^{2}}$\n\n\n\n\n```python\nfrom sympy.integrals.transforms import fourier_transform\n```\n\n\n```python\nx_FT = fourier_transform(x, t, f)\nx_FT\n```\n\n\n\n\n$\\displaystyle - \\frac{i \\pi^{\\frac{3}{2}} f e^{- \\frac{\\pi^{2} f^{2}}{k}}}{\\sqrt{k}}$\n\n\n\n**Solving Numerically**: Sometimes sympy can't evaluate integrals analytically, in which case you'll need to use scipy\n\n\n```python\n# Won't run\n#x = smp.exp(-k * t**2) * smp.sin(k*t) * t**4\n#fourier_transform(x, t, f)\n```\n\n\n```python\nfrom scipy.integrate import quad\n```\n\nDefine function we want to take Fourier transform of and function to compute Fourier transform\n\n\n```python\ndef x(t, k):\n return np.exp(-k * t**2) * np.sin(k*t) * t**4\n\ndef get_x_FT(x, f, k):\n x_FT_integrand_real = lambda t: np.real(x(t, k)*np.exp(-2*np.pi*1j*f*t))\n x_FT_integrand_comp = lambda t: np.imag(x(t, k)*np.exp(-2*np.pi*1j*f*t))\n x_FT_real = quad(x_FT_integrand_real, -np.inf, np.inf)[0]\n x_FT_comp = quad(x_FT_integrand_comp, -np.inf, np.inf)[0]\n return x_FT_real + 1j*x_FT_comp\n```\n\nGet frequencies and fourier transform values\n\n\n```python\nf = np.linspace(-4, 4, 100)\nx_FT = np.vectorize(get_x_FT)(x, f, k=2)\n```\n\nPlot\n\n\n```python\nplt.plot(f, np.abs(x_FT))\nplt.ylabel('$|\\hat{x}(f)|$', fontsize=20)\nplt.xlabel('$f$', fontsize=20)\n```\n\n## 2. Fourier Series (Continuous Time, Discrete Frequency)\nThis occurs when the function $x(t)$ is bounded between times $0$ and $T$ (non-infinite)\n\n$$\\hat{x}(f_n) = \\frac{1}{T} \\int_{0}^{T} x(t) e^{-2 \\pi i f_n t} dt $$\n\nwhere $f_n = n/T$. 
\n\n\n```python\n# Consider now only between t=0 to t=1\nt = smp.symbols('t', real=True)\nk, n, T = smp.symbols('k, n, T', real=True, positive=True)\nfn = n/T\nx = smp.exp(-k * t)\nx\n```\n\n\n\n\n$\\displaystyle e^{- k t}$\n\n\n\nCompute the Fourier transform analytically:\n\n\n```python\nx_FT = smp.integrate(1/T * x*smp.exp(-2*smp.pi*smp.I*fn*t), (t, 0, T)).simplify()\nx_FT\n```\n\n\n\n\n$\\displaystyle \\frac{\\left(- T k - 2 i \\pi n + \\left(T k + 2 i \\pi n\\right) e^{T k + 2 i \\pi n}\\right) e^{- T k - 2 i \\pi n}}{T^{2} k^{2} + 4 i \\pi T k n - 4 \\pi^{2} n^{2}}$\n\n\n\n\n```python\nsmp.Abs(x_FT).simplify()\n```\n\n\n\n\n$\\displaystyle \\frac{\\sqrt{T^{2} k^{2} e^{2 T k} - T^{2} k^{2} e^{T k - 2 i \\pi n} - T^{2} k^{2} e^{T k + 2 i \\pi n} + T^{2} k^{2} + 4 \\pi^{2} n^{2} e^{2 T k} - 4 \\pi^{2} n^{2} e^{T k - 2 i \\pi n} - 4 \\pi^{2} n^{2} e^{T k + 2 i \\pi n} + 4 \\pi^{2} n^{2}} e^{- T k}}{\\sqrt{T^{4} k^{4} + 8 \\pi^{2} T^{2} k^{2} n^{2} + 16 \\pi^{4} n^{4}}}$\n\n\n\nConvert to a numerical function so the values can be extracted numerically and plotted:\n\n\n```python\nget_FT = smp.lambdify([k, T, n], x_FT)\nns = np.arange(0, 20, 1)\nxFT = get_FT(k=1, T=4, n=ns)\n```\n\nPlot:\n\n\n```python\nplt.figure(figsize=(10,3))\nplt.bar(ns, np.abs(xFT))\nplt.xticks(ns)\nplt.ylabel('$|\\hat{x}_n|$', fontsize=25)\nplt.xlabel('$n$', fontsize=25)\nplt.show()\n```\n\nIf it can't be done analytically, need to use scipy like before. Consider\n\n$$x(t) = e^{-k t^2} \\sin(kt) / t \\hspace{10mm} k=2, T=4$$\n\n\n```python\ndef x(t, k):\n return np.exp(-k * t**2) * np.sin(k*t) / t\n\ndef get_x_FT(x, n, k, T):\n x_FT_integrand_real = lambda t: np.real(x(t, k)*np.exp(-2*np.pi*1j*(n/T)*t))\n x_FT_integrand_comp = lambda t: np.imag(x(t, k)*np.exp(-2*np.pi*1j*(n/T)*t))\n x_FT_real = quad(x_FT_integrand_real, 0, T)[0]\n x_FT_comp = quad(x_FT_integrand_comp, 0, T)[0]\n return x_FT_real + 1j*x_FT_comp\n```\n\nCompute values of $n$ in $f_n=n/T$ and then $\\hat{x}_n$ itself using the function above:\n\n\n```python\nns = np.arange(0, 20, 1)\nxFT = np.vectorize(get_x_FT)(x, ns, k=2, T=4)\n```\n\nPlot\n\n\n```python\nplt.figure(figsize=(10,3))\nplt.bar(ns, np.abs(xFT))\nplt.xticks(ns)\nplt.ylabel('$|\\hat{x}_n|$', fontsize=25)\nplt.xlabel('$n$', fontsize=25)\nplt.show()\n```\n\n## 3. Discrete Fourier Transform (Discrete Time, Discrete Frequency)\n\nHere we consider a discrete time series $x_t$ that's measured for a finite amount of time ($N$ measurements over a time $T$ implies $N\\Delta t = T$). The Fourier transform here is **defined** as\n\n$$\\hat{x}(f_n) = \\sum_{k=0}^{N-1} x_t e^{-2 \\pi i f_n (k \\Delta t)} \\hspace{10mm} f_n=\\frac{n}{N\\Delta t}$$\n\nwhere $f_n$ are the so-called Fourier frequencies. The notation can be simplfied as\n\n$$\\hat{x}_n = \\sum_{k=0}^{N-1} x_t e^{-2 \\pi i kn/N}$$\n\n\nNote we get $\\hat{x}_n = \\hat{x}_{n \\pm N} = \\hat{x}_{n \\pm 2N} = ...$ with this definition. With this we can restrict ourselves from $n=0$ to $n=N-1$ and not lose any information OR we can also restrict ourselves to \n\n* In the case that $N$ is even, $n=-N/2$ to $n=N/2-1$ \n* In the case that $N$ is odd, $n=-(N-1)/2$ to $(N-1)/2$\n\nThis is precisely what scipy does, returning an array $\\hat{x}_n$ corresponding to the frequencies\n\n`f = [0, 1, ..., N/2-1, -N/2, ..., -1] / (dt*N) if N is even`\n\n`f = [0, 1, ..., (N-1)/2, -(N-1)/2, ..., -1] / (dt*N) if N is odd`\n\nWhy does it do this? 
Well typically one deals with real time series $x_t$, and there's a handy identity\n\n$$\\hat{x}_n = \\hat{x}_{-n}^*$$\n\nso one only needs to look at the first half of the frequencies to know everything about the Fourier transform $\\hat{x}_n$.\n\n\n\n\n```python\nT = 40 #seconds\nN = 100 #measurements\nt = np.linspace(0, T, N)\ndt = np.diff(t)[0]\n```\n\nLook at a couple particular frequencies\n\n\n```python\nf1 = 20/(N*dt)\nf2 = 10/(N*dt)\nf3 = (10+5*N)/(N*dt)\n```\n\nGet a few time series:\n\n\n```python\nx1 = np.sin(2*np.pi*f1*t) + 0.3*np.sin(2*np.pi*f2*t) + 0.3*np.random.randn(len(t))\nx2 = np.sin(2*np.pi*f2*t)+ 0.1*np.random.randn(len(t))\nx3 = np.sin(2*np.pi*f3*t)+ 0.1*np.random.randn(len(t))\n```\n\n\n```python\nplt.plot(t, x1)\nplt.xlabel('$t$ [seconds]', fontsize=20)\nplt.ylabel('Signal [arb]')\nplt.show()\n```\n\n\n```python\nf = fftfreq(len(t), np.diff(t)[0])\nx1_FFT = fft(x1)\n```\n\nPlot the first half of the spectrum (for $x(t)$ real, all information is contained in the first half)\n\n\n```python\nplt.plot(f[:N//2], np.abs(x1_FFT[:N//2]))\nplt.xlabel('$f_n$ [$s^{-1}$]', fontsize=20)\nplt.ylabel('|$\\hat{x}_n$|', fontsize=20)\nplt.show()\n```\n\nDemonstrate that $\\hat{x}_n = \\hat{x}_{n+5N}$ here:\n\n\n```python\nprint(f2)\nprint(f3)\n```\n\n 0.24750000000000003\n 12.6225\n\n\n\n```python\nplt.plot(t,x2)\nplt.plot(t,x3)\nplt.xlabel('$t$ [seconds]', fontsize=20)\nplt.ylabel('Signal [arb]')\nplt.show()\n```\n\n\n```python\nx2_FFT = fft(x2)\nx3_FFT = fft(x3)\n```\n\n\n```python\nplt.plot(f[:N//2], np.abs(x2_FFT[:N//2]), label='$x_2$')\nplt.plot(f[:N//2], np.abs(x3_FFT[:N//2]), 'r--', label='$x_3$')\nplt.axvline(1/(2*dt), ls='--', color='k')\nplt.xlabel('$f_n$ [$s^{-1}$]', fontsize=20)\nplt.ylabel('|$\\hat{x}_n$|', fontsize=20)\nplt.show()\n```\n\nA little bit of 2D Fourier transform stuff:\n\n\n```python\nimg = color.rgb2gray(io.imread('images/flower.PNG'))\n```\n\n :1: FutureWarning: Non RGB image conversion is now deprecated. For RGBA images, please use rgb2gray(rgba2rgb(rgb)) instead. In version 0.19, a ValueError will be raised if input image last dimension length is not 3.\n img = color.rgb2gray(io.imread('images/flower.PNG'))\n\n\n\n```python\nimg\n```\n\n\n\n\n array([[0.47175216, 0.47175216, 0.47175216, ..., 0.45696745, 0.45751804,\n 0.45695255],\n [0.47175216, 0.47175216, 0.47175216, ..., 0.45919255, 0.4589098 ,\n 0.45862706],\n [0.47175216, 0.47175216, 0.47175216, ..., 0.45777882, 0.45777882,\n 0.45777882],\n ...,\n [0.5619498 , 0.55802824, 0.55802824, ..., 0.48206 , 0.49270863,\n 0.47785569],\n [0.5619498 , 0.55802824, 0.5571949 , ..., 0.50166784, 0.50530667,\n 0.49270863],\n [0.5619498 , 0.5571949 , 0.55410667, ..., 0.50558941, 0.50530667,\n 0.49270863]])\n\n\n\n\n```python\nplt.imshow(img, cmap='gray')\n```\n\n\n```python\nimg_FT = fft2(img)\nfy = np.fft.fftfreq(img.shape[0],d=10) #suppose the spacing between pixels is 10mm, for example\nfx = np.fft.fftfreq(img.shape[1],d=10)\n```\n\n\n```python\nprint('{:.2f} correponds to fx={:.6f} and fy={:.6f}'.format(img_FT[10,20], fx[20], fy[10]))\n```\n\n -83.17+1871.49j correponds to fx=0.002268 and fy=0.001340\n\n\nAnalogous to 1D, the zero frequency terms correspond to low-order corners of the array, the positive frequency terms in the first half, the nyquist frequency in the middle, and the negative frequencies in the second half.\n\n* If $M(x,y)$ (the image) contains real values then $\\hat{M}(f_x, f_y)$ is symmetric WRT to the middle of each axis. 
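\n\nA quick way to sanity-check that symmetry on this real-valued image is to compare a coefficient against the complex conjugate of its mirrored counterpart (the indices 10 and 20 below are arbitrary):\n\n\n```python\n# For real input, M_hat[k, l] should equal conj(M_hat[-k, -l]) (indices wrap periodically)\nprint(img_FT[10, 20])\nprint(np.conj(img_FT[-10, -20]))\n```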
\n\n\n```python\nplt.imshow(np.abs(img_FT), cmap='gray', vmax=50)\nplt.colorbar()\n```\n\nRemove low frequencies\n\n\n```python\nimg_FT_alt = np.copy(img_FT)\nimg_FT_alt[-2:] = 0 \nimg_FT_alt[:,-2:] = 0 \nimg_FT_alt[:2] = 0 \nimg_FT_alt[:,:2] = 0 \n```\n\n\n```python\nimg_alt = np.abs(ifft2(img_FT_alt))\n```\n\n\n```python\nplt.imshow(img_alt, cmap='gray')\nplt.colorbar()\n```\n\nFor more advanced image processing see https://scikit-image.org/\n", "meta": {"hexsha": "b6b6fb3725ad3805e1b0cb7ebc6985222b52f3c9", "size": 800012, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "fourier_transform1.ipynb", "max_stars_repo_name": "Sumanshekhar17/Pseudo-Spectral-Method", "max_stars_repo_head_hexsha": "5d1c2449bafec89006de95f785fe569d488792d9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-09-05T18:21:22.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-05T18:26:21.000Z", "max_issues_repo_path": "fourier_transform1.ipynb", "max_issues_repo_name": "Sumanshekhar17/Pseudo-Spectral-Method", "max_issues_repo_head_hexsha": "5d1c2449bafec89006de95f785fe569d488792d9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-09-05T18:26:10.000Z", "max_issues_repo_issues_event_max_datetime": "2021-09-14T08:54:49.000Z", "max_forks_repo_path": "fourier_transform1.ipynb", "max_forks_repo_name": "Sumanshekhar17/Pseudo-Spectral-Method", "max_forks_repo_head_hexsha": "5d1c2449bafec89006de95f785fe569d488792d9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-09-06T16:36:35.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-06T16:36:35.000Z", "avg_line_length": 799.2127872128, "max_line_length": 184600, "alphanum_fraction": 0.9521094684, "converted": true, "num_tokens": 3472, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9458012655937034, "lm_q2_score": 0.8902942348544447, "lm_q1q2_score": 0.8420414140761115}} {"text": "# Exercise 2\nWrite a function to compute the roots of a mathematical equation of the form\n\\begin{align}\n ax^{2} + bx + c = 0.\n\\end{align}\nYour function should be sensitive enough to adapt to situations in which a user might accidentally set $a=0$, or $b=0$, or even $a=b=0$. For example, if $a=0, b\\neq 0$, your function should print a warning and compute the roots of the resulting linear function. It is up to you on how to handle the function header: feel free to use default keyword arguments, variable positional arguments, variable keyword arguments, or something else as you see fit. Try to make it user friendly.\n\nYour function should return a tuple containing the roots of the provided equation.\n\n**Hint:** Quadratic equations can have complex roots of the form $r = a + ib$ where $i=\\sqrt{-1}$ (Python uses the notation $j=\\sqrt{-1}$). To deal with complex roots, you should import the `cmath` library and use `cmath.sqrt` when computing square roots. `cmath` will return a complex number for you. You could handle complex roots yourself if you want, but you might as well use available libraries to save some work.\n\n\n```python\nimport cmath\nimport warnings\n\ndef solve_quadratic(a, b, c):\n if a == 0: \n if b == 0: \n if c == 0: # infinetly many roots for c = 0\n raise Exception('The input equation is \"0=0\". Any value is a solution!')\n else : # no root for c != 0\n raise Exception('The input equation is c=0 for your c != 0. 
No solutions\uff01')\n else:\n warnings.warn('Soving a linear equation bx+c=0. You only get one root.')\n return (-c/b)\n delta = b*b - 4*a*c\n if delta >= 0: # 2 real roots\n r1 = (-b + math.sqrt(delta))/(2*a)\n r2 = (-b - math.sqrt(delta))/(2*a)\n else: # 2 complex roots\n r1 = (-b + cmath.sqrt(delta))/(2*a)\n r2 = (-b - cmath.sqrt(delta))/(2*a)\n return (r1, r2)\n \n```\n\n\n```python\n# solve the linear equation\nsolve_quadratic(0,2,-4)\n```\n\n /Users/jasminetong/anaconda/lib/python3.6/site-packages/ipykernel_launcher.py:12: UserWarning: Soving a linear equation bx+c=0. You only get one root.\n if sys.path[0] == '':\n\n\n\n\n\n 2.0\n\n\n\n\n```python\n# solve the most normal quadratic equation\nsolve_quadratic(1,4,4)\n```\n\n\n\n\n (-2.0, -2.0)\n\n\n\n\n```python\n# No solutions for the equation \"4=0\"\nsolve_quadratic(0,0,4)\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "502ce957675be5a8d940494f52350bdbe0ae7e9b", "size": 6253, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "lectures/L5/Exercise_2-final.ipynb", "max_stars_repo_name": "JasmineeeeeTONG/CS207_coursework", "max_stars_repo_head_hexsha": "666239ee5f8bd7cbe04725a52870191a3d40d8c2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "lectures/L5/Exercise_2-final.ipynb", "max_issues_repo_name": "JasmineeeeeTONG/CS207_coursework", "max_issues_repo_head_hexsha": "666239ee5f8bd7cbe04725a52870191a3d40d8c2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "lectures/L5/Exercise_2-final.ipynb", "max_forks_repo_name": "JasmineeeeeTONG/CS207_coursework", "max_forks_repo_head_hexsha": "666239ee5f8bd7cbe04725a52870191a3d40d8c2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.0833333333, "max_line_length": 1187, "alphanum_fraction": 0.5714057253, "converted": true, "num_tokens": 663, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9441768651485395, "lm_q2_score": 0.8918110461567922, "lm_q1q2_score": 0.8420273578651595}} {"text": "# GLM and correlated variables\nI always forget how correlation between features impacts parameter estimation in GLMs. Here, I simulate some data with and without correlation between between features and visualize how this impacts the estimation of the parameters of the model: $\\beta$, $\\mathrm{cov}(\\beta)$, and $t(\\beta)$.\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport statsmodels.api as sm\n%matplotlib inline\n```\n\n### Simulation variables\nFor $N=1000$, simulate two variables (let's say, $X_{1}$ and $X_{2}$) with or without correlation (if correlation: $\\rho_{12} = 0.7$). We then simulate an outcome variable ($y$) as follows:\n\n\\begin{align}\ny = X\\beta + \\epsilon, \\epsilon \\sim \\mathcal{N}(0, 10)\n\\end{align}\n\nwith the true parameters, $\\beta$, being $[1, 1]$. 
We then simulate the data and fit a GLM for 1000 iterations and plot the estimated parameters across for the correlated and uncorrelated data.\n\n\n```python\nN = 1000\ntrue_betas = np.array([0, 1, 1])\ntrue_corr = 0.7\niters = 1000\n\ncov_corr = np.array([\n [1, true_corr],\n [true_corr, 1]\n])\n\ncov_uncorr = np.array([\n [1, 0],\n [0, 1]\n])\n\ncov = dict(\n corr=cov_corr,\n uncorr=cov_uncorr\n)\n\nresults = dict(\n corr=dict(\n betas=np.zeros((iters, 2)),\n variances=np.zeros((iters, 2)),\n tvalues=np.zeros((iters, 2))\n ),\n uncorr=dict(\n betas=np.zeros((iters, 2)),\n variances=np.zeros((iters, 2)),\n tvalues=np.zeros((iters, 2))\n )\n)\n\nfor i in range(iters):\n \n for ii, (covname, covmat) in enumerate(cov.items()):\n data = np.random.multivariate_normal(np.zeros(2), cov=covmat, size=N)\n data = np.c_[np.ones(N), data]\n y = data.dot(effects) + np.random.normal(0, 10, N)\n fit = sm.OLS(y, data).fit()\n results[covname]['betas'][i, :] = fit.params[1:]\n results[covname]['variances'][i, :] = np.diag(fit.cov_params())[1:]\n results[covname]['tvalues'][i, :] = fit.tvalues[1:]\n```\n\nDo the actual plotting:\n\n\n```python\nfig, axes = plt.subplots(nrows=3, ncols=2, sharex='row', sharey=False, figsize=(15, 10))\n\nfor i, covtype in enumerate(['uncorr', 'corr']):\n \n for ii, stat in enumerate(['betas', 'variances', 'tvalues']):\n \n axes[ii, i].set_title(\"Cov-type: %s, stat: %s\" % (covtype, stat))\n sns.distplot(results[covtype][stat][:, 0], ax=axes[ii, i])\n sns.distplot(results[covtype][stat][:, 1], ax=axes[ii, i]) \n\nsns.despine()\nfig.tight_layout()\n```\n", "meta": {"hexsha": "7a76f043079f754babb40bd76b0758565b224756", "size": 113251, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "GLM_correlated_variables.ipynb", "max_stars_repo_name": "lukassnoek/random_notebooks", "max_stars_repo_head_hexsha": "d7df507ce2b6949726c29de0022aae2d0dc583ac", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2018-05-28T13:45:11.000Z", "max_stars_repo_stars_event_max_datetime": "2021-08-31T11:41:34.000Z", "max_issues_repo_path": "GLM_correlated_variables.ipynb", "max_issues_repo_name": "lukassnoek/random_notebooks", "max_issues_repo_head_hexsha": "d7df507ce2b6949726c29de0022aae2d0dc583ac", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "GLM_correlated_variables.ipynb", "max_forks_repo_name": "lukassnoek/random_notebooks", "max_forks_repo_head_hexsha": "d7df507ce2b6949726c29de0022aae2d0dc583ac", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2018-05-28T13:46:05.000Z", "max_forks_repo_forks_event_max_datetime": "2018-06-11T15:25:59.000Z", "avg_line_length": 716.7784810127, "max_line_length": 108444, "alphanum_fraction": 0.9486715349, "converted": true, "num_tokens": 745, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9441768541530198, "lm_q2_score": 0.8918110511888302, "lm_q1q2_score": 0.8420273528103673}} {"text": "# Monte Carlo methods - A Sandbox/Demo\n\nMonte Carlo broadly refers to random sampling methods. Here we will focus mainly on Monte Carlo sampling in the context of molecular simulations, but it's worth briefly considering an example of Monte Carlo integration. 
Particularly, we can use Monte Carlo integration to calculate $\\pi$ with just a few lines of code.\n\n## Preparation \n\nTo use this notebook you will either need to run it locally using Jupyter (with a fortran library you pre-compile) or on Google Colab. This notebook briefly explains either route, though if you are going the local route you may need to compile the library before installing some of the other course materials, or do so in a clean/new conda environment. Some users have reported that conda installing `openforcefield`, `openmm` and the openeye toolkits after installing `gfortran` results in the loss of the ability to compile fortran libraries for use in Python.\n\n\n### Preparation for Google Colab (NOT FOR LOCAL USE)\n\n[](https://colab.research.google.com/github/MobleyLab/drug-computing/blob/master/uci-pharmsci/lectures/MC/MC_Sandbox.ipynb)\n\nFor Google Colab, pip installation of software is faster, but we've only been able to get `gfortran` working via a conda installation, so we'll need to go that route. Begin by unsetting the PYTHONPATH to prevent issues with miniconda, then installing miniconda (which will take perhaps 20 seconds to a couple of minutes):\n\n\n```python\n%env PYTHONPATH=\n! wget https://repo.anaconda.com/miniconda/Miniconda3-py37_4.10.3-Linux-x86_64.sh\n! chmod +x Miniconda3-py37_4.10.3-Linux-x86_64.sh\n! bash ./Miniconda3-py37_4.10.3-Linux-x86_64.sh -b -f -p /usr/local\nimport sys\nsys.path.append('/usr/local/lib/python3.7/site-packages/')\n```\n\nOnce that is done, install `gfortran`, which will take roughly a similar amount of time:\n\n\n```python\n!conda install -c conda-forge libgfortran --yes\n```\n\nNext, mount your Google Drive and ensure you have the mc_sandbox.f90 file available **at a path you define below**:\n\n\n```python\nfrom google.colab import drive\ndrive.mount('/content/drive',force_remount = True)\n\n#EDIT THIS TO DEFINE WHERE YOU PUT THE FILE:\nmd_library_path = '/content/drive/MyDrive/drug-computing/uci-pharmsci/lectures/MC/'\n\n# Then run:\n%cd $md_library_path\n```\n\nThen compile the requisite library:\n\n\n```python\n!f2py3 -c -m mc_sandbox mc_sandbox.f90\n```\n\n### Preparation for local use\n\nFor local use, you need to compile the fortran module `mc_sandbox` for use as a Python library. This accelerates the numerical calculations and a similar framework will be used in the MD and MC assignments. Use `f2py3 -c -m mc_sandbox mc_sandbox.f90` to compile. (As noted above you may need to do this in advance of installing certain modules, such as the beginning of the course, or by making a clean conda environment.)\n\n## Calculating $\\pi$ via MC integration\n\nSo, let's calculate $\\pi$ by doing MC integration. Particularly, let's consider a drawing random numbers between -1 and 1, and then imagine a circle centered at (0, 0). Consider the area of that circle (of radius 1) to that of the full square spanned by our random numbers (2 units wide). 
Particularly, if $R$ is the radius of the circle, then the ratio of the areas is:\n\\begin{equation}\n\\frac{A_{sq}}{A_{cir}} = \\frac{(2R)^2}{\\pi R^2} = \\frac{4}{\\pi} \n\\end{equation}\n\nso we find\n\\begin{equation}\n\\pi = \\frac{4 A_{cir}}{A_{sq}}\n\\end{equation}\n\nSo, if we randomly place points in an interval -1 to 1, and then check to see how many fall within a square versus within a circle, we can use the ratio of counts (related to the ratio of the areas) to determine $\\pi$.\n\n\n```python\n#Import modules we need\nimport numpy\nimport numpy.random\n\n#Number of data points to sample\nNtrials = 1000\n\n#Randomly generate array of XY positions - spanning -1 to 1\nXY = 2.*numpy.random.rand( Ntrials, 2)-1.\n\n#Compute distance from each point to center of the circle\ndistances = numpy.sqrt( numpy.sum( XY*XY, axis=1))\n\n#Find indices of data points which are within the unit circle\n#Note that you could code this in a more straightforward - but slower - way by setting up a \n#'for' loop over data points and checking to see where the distance is less than 1.\nindices_inside = numpy.where( distances < 1)\n\n#Find how many points are here\nnum_inside = len( indices_inside[0] )\n\n#Calculate estimate of pi\npi_estimate = 4.*num_inside/float(Ntrials)\nprint(pi_estimate)\n```\n\n 3.152\n\n\n### Let's take a look at what we've done here, graphically. \nWe can plot the points inside and outside in two different colors to see.\n\n\n```python\n%pylab inline\n\n#After the code above, indices_inside has the numbers of points which are inside our unit circle.\n#We would also like indices of points which are outside the unit circle\nindices_outside = numpy.where( distances > 1)\n\n#Make our plot of points inside and outside using circles for the data points\n#blue for inside, red for outside\nplot(XY[indices_inside[0],0], XY[indices_inside[0],1], 'bo')\nplot(XY[indices_outside[0],0], XY[indices_outside[0],1], 'ro')\n\n#Adjust plot settings to look nicer\naxis('equal') #Set to have equal axes\nF = pylab.gcf() #Get handle of current figure\nF.set_size_inches(5,5) #Set size to be square (default view is rectangular)\n\n#Bonus: Drawing a unit circle on the graph is up to you\n```\n\n# Let's revisit the LJ particles we saw in the MD sandbox\n\nIn our last sandbox, we looked at molecular dynamics on a pair of Lennard-Jones particles. Now let's revisit that, but within the Metropolis Monte Carlo framework. Optionally, you could make things a little more interesting here by considering an extra particle. But for now let's just start with the same two particles as last time to make visualization easy.. 
\n\nHere, we'll apply the Metropolis MC framework as discussed in lecture, where every step we:\n* Randomly pick a particle\n* Change each of x, y, and z a small amount between $-\\Delta r_{max}$ and $\\Delta r_{max}$\n* Compute the energy change $\\Delta U$\n* Apply the Metropolis criterion to decide whether to accept the move or reject it\n * If $\\Delta U < 0$, accept the move\n * If $\\Delta U > 0$, accept with the probability $P_{acc} = e^{-\\Delta U/T}$\n* If accepted, keep the new configuration\n* Regardless of whether we accept it or not, update any running averages with the current state and energy\n\n\n## Let's run some MC \n\n(As noted above, you will need to have compiled the `mc_sandbox` Fortran library before doing this, using either the appropriate technique for local work or for on Colab.)\n\n### First, we set up our system:\n\n\n```python\nimport mc_sandbox\nimport numpy as np\nimport numpy.random\n\n#Let's define the variables we'll need\nCut = 2.5\nL = 3.0 #Let's put these in a small box so they don't lose each other\nmax_displacement = 0.1 #Maximum move size\nT = 1. #You should play with this and see how the results change. Don't make it an integer - use a floating point value (i.e. 1., not 1)\n\n#Choose N for number of particles; you could adjust this later.\nN = 2\n\n#Allocate position array - initially just zeros\nPos = np.zeros((N,3), float)\n\n#Let's place the first two particles just as we did in MD Sandbox:\nPos[0,:] = np.array([0,0,0])\n#We'll place the second one fairly nearby - at this point using the same starting location\n#as in the MD sandbox\nPos[1,:] = np.array([1.5,0,0])\n\n#If you have any other particles, let's just place them randomly\nfor i in range(2,N):\n Pos[i,:] = L*np.random.random( 3 )\n```\n\n### Now let's run a step of MC!\n\n\n```python\n#Set maximum number of steps to run\nmax_steps = 10000\n\n#Set up storage for position vs step\nPos_t = np.zeros(( N,3,max_steps), float)\n#Store initial positions\nPos_t[:,:,0] = Pos\n\n#Evaluate initial energy\nU = mc_sandbox.calcenergy(Pos, L, Cut)\n\n#Pick a random particle\nnum = np.random.randint(N) #Random integer from 0 up to but not including N\n\n#Store old position in case we need to revert\n#Note that it's necessary here to make a copy, otherwise both still point to the same \n#coordinates (try OldPos = Pos to see).\nOldPos = Pos.copy() \n\n#Pick a move - adjusting to make it between -DeltaX and +DeltaX\nmove = max_displacement * (np.random.random( 3)*2.-1.)\n\n#Update position\nPos[num, :] += move\n\n#Evaluate new energy\nUnew = mc_sandbox.calcenergy( Pos, L, Cut)\nDeltaU = Unew - U\nprint(\"U, Unew, DeltaU: \", U, Unew, DeltaU) #Just for debugging purposes so we can see what's happening.\n\n#Print acceptance probability\nPacc = np.exp(-DeltaU/T)\nprint(\"Acceptance probability Pacc=\", Pacc) #Just for debugging purposes so we can see what's happening.\n\n#We can handle the uphill and downhill cases with a single 'if' statement\nif np.random.rand() < Pacc:\n print(\"Accepted\") #Just for debugging purposes so we can see what's happening.\n U = Unew\nelse: #Revert\n Pos = OldPos\n print(\"Rejected\") #Just for debugging purposes so we can see what's happening.\n\n\n#Remember, at the end, if we are tracking energy, we update running averages/tracking data \n#with the current position and energy\nPos_t[:, :, 1] = Pos\n```\n\n U, Unew, DeltaU: -0.3366534854145746 -0.42428826461302754 -0.08763477919845292\n Acceptance probability Pacc= 1.0915893780702166\n Accepted\n\n\n## Now let's again define 
that get_r function so we can look at the separation between our particles over time\nLast time, I was lazy and didn't handle the minimum image convention (in part because I knew we weren't giving the particles enough energy initially that they wouldn't fly apart). Here, because T (effectively, the kinetic energy) is an adjustable parameter we might end up with them flying apart, so we need to use the minimum image convention to properly measure the distance between particles. In other words, if a particle crosses the box edge and then finds the other particle and interacts with it, we want our distance measurement to notice that they are interacting rather than reporting that they are very distant. (The Fortran code we have is using this convention for its energy calculations).\n\nSo, we define a new get_r function which handles this:\n\n\n```python\ndef get_r(Pos, L):\n \"\"\"Calculate r, the distance between particles, for a position array containing just two particles. Return it.\n Unlike in MD sandbox, here we also implement the minimum image convention\"\"\"\n \n #Get displacement\n disp = Pos[1,:] - Pos[0,:]\n #Apply minimum image convention\n disp = disp - L*np.round(disp/L) \n #Calculate distance\n d = np.sqrt( np.dot( disp, disp))\n return d\n```\n\n## Now, write your own code - adapting the code from above for a single step - to run an MC simulation of your pair of LJ particles\n\nBecause a little more code is required than for the MD assignment, I provide some comments guiding you through the steps you'll need to do. And I also provide code to generate a plot at the end.\n\nBe sure to track the acceptance probability over all suggested moves. (This could be used to adjust the size of the moves you suggest).\n\n\n```python\n# Put your code here\n\n```\n\n\n```python\n##GET READY TO PLOT\n#Find x axis (MC steps rather than time, as it was in the MD sandbox)\nt = np.arange(0,max_steps)\n#Find y axis (r values)\nr_vs_t = []\nfor i in range(max_steps):\n r=get_r(Pos_t[:,:,i], L)\n r_vs_t.append(r)\n\nr_vs_t = np.array(r_vs_t)\n\n#Plot\nfigure()\nplot(t, r_vs_t)\n```\n\n## Other things to be sure to try\n* See what happens if you adjust the temperature. Particularly, check what happens in the limit of T becoming very small (approaching zero). Are uphill moves ever accepted? What does an MC search end up doing?\n* Try making the box size reasonably big and see what happens at a moderate temperature\n* Adjust the move size (`max_displacement`) to make the acceptance probability 30-50%. How big can you make it? \n\n## Other things to try if you have extra time\n* Check the probability distribution of separations and see how it varies with temperature. \n\n## Simulation best practices\n\nScott Shell has some very useful [simulation best practices](https://sites.engineering.ucsb.edu/~shell/che210d/Simulation_best_practices.pdf) tips which can help with thinking through how to code up and conduct effective simulations. 
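For reference, here is one way the pieces above could be assembled into a complete loop. This is only a sketch that re-uses the variables defined earlier in this notebook (`Pos`, `Pos_t`, `L`, `Cut`, `T`, `max_displacement`, `max_steps`, `N`, and the compiled `mc_sandbox` module); your own version may well be structured differently.

```python
# A minimal sketch of the full Metropolis MC loop (assumes the setup cells above were run)
U = mc_sandbox.calcenergy(Pos, L, Cut)
n_accepted = 0

for step in range(1, max_steps):
    # Pick a random particle and suggest a move
    num = np.random.randint(N)
    OldPos = Pos.copy()
    Pos[num, :] += max_displacement * (2. * np.random.random(3) - 1.)

    # Metropolis criterion: always accept downhill, accept uphill with probability exp(-DeltaU/T)
    Unew = mc_sandbox.calcenergy(Pos, L, Cut)
    if np.random.rand() < np.exp(-(Unew - U) / T):
        U = Unew
        n_accepted += 1
    else:
        Pos = OldPos

    # Record the current configuration whether or not the move was accepted
    Pos_t[:, :, step] = Pos

print("Acceptance ratio:", n_accepted / (max_steps - 1))
```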
\n", "meta": {"hexsha": "472befaeada83e25d66788a30cf48c76a5dc3025", "size": 45622, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "uci-pharmsci/lectures/MC/MC_Sandbox.ipynb", "max_stars_repo_name": "aakankschit/drug-computing", "max_stars_repo_head_hexsha": "3ea4bd12f3b56cbffa8ea43396f3a32c009985a9", "max_stars_repo_licenses": ["CC-BY-4.0", "MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "uci-pharmsci/lectures/MC/MC_Sandbox.ipynb", "max_issues_repo_name": "aakankschit/drug-computing", "max_issues_repo_head_hexsha": "3ea4bd12f3b56cbffa8ea43396f3a32c009985a9", "max_issues_repo_licenses": ["CC-BY-4.0", "MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "uci-pharmsci/lectures/MC/MC_Sandbox.ipynb", "max_forks_repo_name": "aakankschit/drug-computing", "max_forks_repo_head_hexsha": "3ea4bd12f3b56cbffa8ea43396f3a32c009985a9", "max_forks_repo_licenses": ["CC-BY-4.0", "MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 80.4620811287, "max_line_length": 25652, "alphanum_fraction": 0.8162290123, "converted": true, "num_tokens": 3106, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.942506726044381, "lm_q2_score": 0.8933094032139577, "lm_q1q2_score": 0.8419501209678472}} {"text": "This notebook holds the design parameters and generates an audio chirp\n\n\n```python\n%matplotlib notebook\n```\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib.gridspec import GridSpec\nfrom matplotlib.ticker import ScalarFormatter\nimport math\n```\n\nThis notebook assumes you have completed the notebook [Introduction of sine waves](TDS_Introduction-sine_waves.ipynb). This notebook follows the same pattern of time domain waveform generation: instantaneous frequency -> angle step -> total angle -> time domain waveform. \n\nOur goal is to track features of different acoustic impedance in material using a low power time domain waveform. Time delay spectrometry (TDS) is one implementation of this goal. To understand TDS we need to understand the waveform which is used by TDS called a chirp. A chirp is a sinusoid that is constantly varying in frequency. The chirp is generated by integrating a varying angle step which is derived from an instantaneous frequency profile. We will generate a chirp in this notebook. 
An overview of this technique is given [here](https://www.youtube.com/watch?v=RQplkt0bw_c).\n\nThe angle of the chirp can be found by integrating the instantaneous frequency:\n\n\\begin{equation}\n\tf(t)=\\frac{f_{end}-f_{start}}{T_c}t + f_{start}\n\\end{equation}\n\n\\begin{equation}\n\\Delta\\phi(t) = 2\\pi f(t)\\Delta t\n\\end{equation}\n\n\\begin{equation}\n\\phi (t)=\\int_{}^{} \\Delta\\phi(t) = \\int_{}^{} 2\\pi f(t) dt = \\int_{}^{}\\frac{f_{end}-f_{start}}{T_c}tdt + \\int_{}^{}f_{start}dt\n\\end{equation}\n\n\\begin{equation}\n \\phi (t)= \\frac{f_{end}-f_{start}}{T_c}\\int_{}^{}tdt + f_{start}\\int_{}^{}dt\n\\end{equation}\n\n\\begin{equation}\n \\phi (t)= \\frac{f_{end}-f_{start}}{T_c}\\frac{t^2}{2} + f_{start}t\n\\end{equation}\n\nThis gives the time series value of\n\n\\begin{equation}\nx(t) = e^{j\\phi (t)} = e^{j(\\frac{f_{end}-f_{start}}{T_c}\\frac{t^2}{2} + f_{start}t)} \n\\end{equation}\n\nBut the formula for angle requires squaring time which will cause numeric errors as the time increases. Another approach is to implement the formula for angle as a cummulative summation. \n\n\\begin{equation}\n\\phi_{sum} (N)=\\sum_{k=1}^{N} \\Delta\\phi(k) = \\sum_{k=1}^{N} 2\\pi f(k) t_s = \\sum_{k=1}^{N}(\\frac{f_{end}-f_{start}}{T_c}k + f_{start})t_s\n\\end{equation}\n\n\nThis allow for the angle always stay between 0 and two pi by subtracting two phi whenever the angle exceeds the value. We will work with the cummlative sum of angle, but then compare it to the integral to determine how accurate the cummulative sum is.\n\n\n\n\n```python\n#max free 8 points per sample\n\n#Tc is the max depth we are interested in\nTc_sec=1\n\nspeed_of_sound_in_air_m_per_sec=343\n\nf_start_Hz=3e3\n#talk about difference and similarity of sine wave example, answer why not 32 samples\nf_stop_Hz=20e3\n\nprint(f\"The wavelength ranges from {(speed_of_sound_in_air_m_per_sec/f_start_Hz):.3f}m to {speed_of_sound_in_air_m_per_sec/f_stop_Hz:.3f} m\")\n\n#We choose 8 samples per cycle at the maximum frequency to not require steep pulse shaping filter profiles on the output of the \n#digital to analog converter\nfs=44.1e3\n\nsamplesPerCycle=fs/f_stop_Hz\n\nts=1/fs\n\ntotal_samples= math.ceil(fs*Tc_sec)\nn = np.arange(0,total_samples, step=1, dtype=np.float64)\nt_sec=n*ts\n\n#This is the frequency of the chirp over time. We assume linear change in frequency\nchirp_freq_slope_HzPerSec=(f_stop_Hz-f_start_Hz)/Tc_sec\n\n#Compute the instantaneous frequency which is a linear function\nchirp_instantaneous_freq_Hz=chirp_freq_slope_HzPerSec*t_sec+f_start_Hz\nchirp_instantaneous_angular_freq_radPerSec=2*np.pi*chirp_instantaneous_freq_Hz\n\n#Since frequency is a change in phase the we can plot it as a phase step\nchirp_phase_step_rad=chirp_instantaneous_angular_freq_radPerSec*ts\n\n#The phase step can be summed (or integrated) to produce the total phase which is the phase value \n#for each point in time for the chirp function\nchirp_phase_rad=np.cumsum(chirp_phase_step_rad)\n#The time domain chirp function\nchirp = np.exp(1j*chirp_phase_rad)\n```\n\n The wavelength ranges from 0.114m to 0.017 m\n\n\n\n```python\n#We can see, unlike the complex exponential, the chirp's instantaneous frequency is linearly increasing. \n#This corresponds with the linearly increasing phase step. 
\nfig, ax = plt.subplots(2, 1, sharex=True,figsize = [8, 8])\nlns1=ax[0].plot(t_sec,chirp_instantaneous_freq_Hz,linewidth=4, label='instantanous frequency');\nax[0].set_title('Comparing the instantaneous frequency and phase step')\nax[0].set_ylabel('instantaneous frequency (Hz)')\n\naxt = ax[0].twinx()\nlns2=axt.plot(t_sec,chirp_phase_step_rad,linewidth=2,color='black', linestyle=':', label='phase step');\naxt.set_ylabel('phase step (rad)')\n\n#ref: https://stackoverflow.com/questions/5484922/secondary-axis-with-twinx-how-to-add-to-legend\nlns = lns1+lns2\nlabs = [l.get_label() for l in lns]\nax[0].legend(lns, labs, loc=0)\n\n#We see that summing or integrating the linearly increasing phase step gives a quadratic function of total phase.\nax[1].plot(t_sec,chirp_phase_rad,linewidth=4,label='chirp');\nax[1].plot([t_sec[0], t_sec[-1]],[chirp_phase_rad[0], chirp_phase_rad[-1]],linewidth=1, linestyle=':',label='linear (x=y)');\nax[1].set_title('Cumulative quandratic phase function of chirp')\nax[1].set_xlabel('time (sec)')\nax[1].set_ylabel('total phase (rad)')\nax[1].legend();\n\n\n```\n\n\n \n\n\n\n\n\n\n\n```python\n#Write a wav file that is 32-bit floating-point [-1.0,+1.0] np.float32\nfrom scipy.io.wavfile import write\nwrite(f'../data/audio_chirp_{t_sec}sec_{f_start_Hz}Hz_{f_stop_Hz}Hz.wav', int(fs), np.real(chirp).astype(np.float32))\n```\n", "meta": {"hexsha": "767fa2544194f0dff4999e16ccbc7b581607d01c", "size": 183327, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "course/tds-200/week_01/notebooks/Generate_audio_chirp.ipynb", "max_stars_repo_name": "potto216/tds-tutorials", "max_stars_repo_head_hexsha": "2acf2002ac5514dc60781c3e2e6797a4595104e6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2020-07-12T19:17:59.000Z", "max_stars_repo_stars_event_max_datetime": "2020-09-24T22:19:02.000Z", "max_issues_repo_path": "course/tds-200/week_01/notebooks/Generate_audio_chirp.ipynb", "max_issues_repo_name": "potto216/tds-tutorials", "max_issues_repo_head_hexsha": "2acf2002ac5514dc60781c3e2e6797a4595104e6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 7, "max_issues_repo_issues_event_min_datetime": "2020-09-16T12:18:01.000Z", "max_issues_repo_issues_event_max_datetime": "2020-12-17T23:04:37.000Z", "max_forks_repo_path": "course/tds-200/week_01/notebooks/Generate_audio_chirp.ipynb", "max_forks_repo_name": "potto216/tds-tutorials", "max_forks_repo_head_hexsha": "2acf2002ac5514dc60781c3e2e6797a4595104e6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 184.063253012, "max_line_length": 140585, "alphanum_fraction": 0.8605824565, "converted": true, "num_tokens": 1663, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9362850093037731, "lm_q2_score": 0.8991213786215105, "lm_q1q2_score": 0.8418338683478623}} {"text": "## RSA (Rivest\u2013Shamir\u2013Adleman) - Cryptosystem\n\n### Algorithm\n\n#### 1. Choose two distinct prime numbers p and q. \n\n

For security purposes, the integers p and q should be chosen at random, and should be similar in magnitude but differ in length by a few digits to make factoring harder. Prime integers can be efficiently found using a primality test.
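The code further down in this notebook scans forward from a random starting point with `sympy.isprime`; sympy also ships a `randprime(a, b)` helper that returns a random prime in `[a, b)` and can be used directly. A small sketch (assuming a reasonably recent sympy version):

```python
import sympy

# Draw two distinct primes of similar, but not identical, magnitude
p = sympy.randprime(10**149, 10**150)
q = sympy.randprime(10**119, 10**120)
assert p != q and sympy.isprime(p) and sympy.isprime(q)
```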

\n\n#### 2. Compute n = pq.\n\n

n is used as the modulus for both the public and private keys. Its length, usually expressed in bits, is the key length.
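In Python the key length can be read straight off the modulus with `int.bit_length()`, for example:

```python
p, q = 61, 53            # toy primes, only to illustrate
n = p * q                # 3233
print(n.bit_length())    # 12 -> a 12-bit modulus (real keys are 2048 bits or more)
```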

\n\n#### 3. Compute \u03a6(n) = lcm(\u03a6(p), \u03a6(q)) = lcm(p \u2212 1, q \u2212 1)\n\n

\u03a6 is Carmichael's totient function. This value is kept private.

\n\n#### 4. Choose an integer e such that 1 < e < \u03a6(n) and gcd(e, \u03a6(n)) = 1; i.e., e and \u03a6(n) are coprime.\n\n

Note that if e is prime and also not a divisor of \u03a6(n), then e and \u03a6(n) are coprime.

\n\n#### 5. Determine d as (e*d) % \u03a6(n) = 1\n\n

d is the modular multiplicative inverse of e (modulo \u03a6(n)).
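The code below computes d with an extended Euclidean algorithm; on Python 3.8 and newer, the built-in three-argument `pow` can also return a modular inverse directly. A tiny sketch with illustrative numbers:

```python
# Python 3.8+: modular inverse via pow(base, -1, modulus)
phi, e = 780, 17
d = pow(e, -1, phi)       # 413
assert (e * d) % phi == 1
```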

\n\n#### To encrypt:\n\n

c(m) = m^e % n

\n\n

The public key, composed of e and n, is used to encrypt the message.

\n\n#### To decrypt:\n\n

m(c) = c^d % n

\n\n

The private key, composed of d and n, is used to decrypt the encrypted message.
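As an end-to-end sanity check of the scheme, here is the classic toy example with deliberately tiny primes (never use numbers this small in practice):

```python
p, q = 61, 53
n = p * q                 # 3233
phi = 780                 # lcm(p - 1, q - 1) = lcm(60, 52)
e, d = 17, 413            # (e * d) % phi == 1
m = 65
c = pow(m, e, n)          # 2790
assert pow(c, d, n) == m  # decryption recovers the original message
```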

\n\n#### References:\n\n[Wikipedia - RSA (cryptosystem)](https://en.wikipedia.org/wiki/RSA_%28cryptosystem%29)\n\n[Wikipedia - Carmichael function](https://en.wikipedia.org/wiki/Carmichael_function)\n\n[Sympy.org](http://www.sympy.org/pt/index.html)\n\n[Wikibooks - Extended Euclidean algorithm](https://en.wikibooks.org/wiki/Algorithm_Implementation/Mathematics/Extended_Euclidean_algorithm)\n\n\n```python\nfrom random import randint\nimport sympy\nimport numpy\n\n# return the smallest prime in range [n,n+1000)\n# return -1 if it doesn't exist\ndef next_prime(n):\n for i in range(n,n+1000):\n if (sympy.isprime(i)):\n return i\n return -1\n\n# Extended Euclidean algorithm\ndef egcd(a, b):\n if a != 0:\n x, y, z = egcd(b % a, a)\n return (x, z - (b // a) * y, y)\n return (b, 0, 1)\n\n# Modular Inverse\n# return -1 if it doesn't exist\ndef mod_inv(a, b):\n x, y, _ = egcd(a, b)\n if x != 1:\n return -1\n else:\n return y % b\n\ndef encrypt(m,e,n):\n return pow(m,e,n)\n \ndef decrypt(c,d,n):\n return pow(c,d,n)\n\n\n## 1. p and q\n\np = -1\nwhile(p == -1):\n p = next_prime(randint(10**130,10**150))\n\nq = -1\nwhile(q == -1):\n q = next_prime(randint(10**100,10**120))\n if (q == p):\n q = -1\n\nif(randint(0,9) % 2 == 0):\n aux = p\n p = q\n q = aux\n\nprint(\"p and q:\")\nprint(hex(p))\nprint(hex(q))\n\n## 2. n\n\nn = p*q\n\nprint(\"\\nn:\")\nprint(hex(n))\n\n## 3. phi\n\nphi = sympy.lcm((p-1),(q-1))\n\nprint(\"\\nphi(n):\")\nprint(hex(phi))\n\n\n## 4. e\n\ne = -1\nwhile(e == -1):\n e = next_prime(randint(2,phi))\n if e >= phi:\n e = -1\n elif e%phi == 0:\n e = -1\n\nprint(\"\\ne:\")\nprint(hex(e))\n\n## 5. d\n\nd = int(mod_inv(e, phi))\n\nif d == -1:\n raise Exception('Modular inverse does not exist, but don`t worry, just run it again. =)')\n\nprint(\"\\nd:\")\nprint(hex(d))\n\n## Encrypt:\n\nc = encrypt(42,e,n)\nprint(\"\\nmessage encrypted:\")\nprint(hex(c))\n\n## Decrypt:\n\nm = decrypt(c,d,n)\nprint(\"\\nmessage decrypted:\")\nprint(m)\n\n```\n\n p and q:\n 0x4972224d37eeec694dbc80cdb3b88052a29faa76b113148ad06f89995c3a7fcdb97cbe523efadde3a0b889e7f8094b893b255c5c7a05d7c99819c9c3e9d29\n 0x3e5810c28da948c519eb2b92dccc0f1afc654f60bc1ad1588514a5cf20b0ee057117cc0071499f887d56e9477be11a12f23b\n \n n:\n 0x11e2e859715e5783e98e5c5d6d5d73817889d24062c2f6c3212a495b072cc1a30fbac63d61d0221094d0c31731a3d4ed56a36d57374822025b1f9626e90c087607d3da2127815f5f38216729e9fe211b8c6e37b702828131a57555c8d0b23f3d584e65118d537fc2f28b2173169e0fa73\n \n phi(n):\n 0x8f1742cb8af2bc1f4c72e2eb6aeb9c0bc44e92031617b61909524ad839660d187dd631eb0e811084a68618b98d1ea76ab5191f28a7d7509b75b2435341f286c033a8753743f2e56b86eb476cfb98a8d5bd5393c4b8d5cf2be9a61829bf4f8114802916bcb7f3ef3d61688cd9d9c7b588\n \n e:\n 0x5645c37219fad185353336fd4f4aa8a1d95ecfe4d083ef88e064082ddc892eee4db2127436c07be7edeed3fb96cf941531ac9ea5bb4b17bb5bb8854b42c67f6998038558b7f3033cf2c84cfd598a3218e3d1fa34901767143c26c744d80bc88fb691ebfe975279f1aba94fd56924ce9d\n \n d:\n 0x3d0cf1214c4b09200b502b82b053d719e78bb40a1bbfbe8314a4faaa0fecd828273b559921ba436d6ae979c7d6aeedac2255a76ec54d61e0476091101b2f2c3de4a31e2f00061ad351bad807dfbe4a55f5e3e4ef94e3b65f89f3d03ba7f79fadfb17a7a9113462fa256b4c2201f32a6d\n \n message encrypted:\n 0x1036f1372d8acabac7b61cc594e93393c9a30754d8270134da09a71cdbd1aa62591f132ef877a21e595996b90ab8fb3f2361afd44a5665f8ff51f86a1ea35f2f5144d9afeac125c48696bb3fd6dd034a3cffb423cc57d2a706cbbc6833344272be898d53d14bdc459b6b958128d96beaa\n \n message decrypted:\n 42\n\n", "meta": {"hexsha": "ff164429ba61533d3af1420af58c32a7556be6c7", "size": 6928, "ext": 
"ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "RSA.ipynb", "max_stars_repo_name": "vdbalbom/RSA", "max_stars_repo_head_hexsha": "cdc9189c6f6a6170a41d30ea8ed4ba1f5f5a30ed", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2018-06-15T10:47:31.000Z", "max_stars_repo_stars_event_max_datetime": "2019-06-13T10:17:08.000Z", "max_issues_repo_path": "RSA.ipynb", "max_issues_repo_name": "vdbalbom/RSA", "max_issues_repo_head_hexsha": "cdc9189c6f6a6170a41d30ea8ed4ba1f5f5a30ed", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "RSA.ipynb", "max_forks_repo_name": "vdbalbom/RSA", "max_forks_repo_head_hexsha": "cdc9189c6f6a6170a41d30ea8ed4ba1f5f5a30ed", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.7798165138, "max_line_length": 251, "alphanum_fraction": 0.5549942263, "converted": true, "num_tokens": 1879, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9688561721629776, "lm_q2_score": 0.8688267694452331, "lm_q1q2_score": 0.8417681781174343}} {"text": "### Probabilistic model\nNaive Bayes models the conditional probability of classes $C$, given an instance represented by a feature vector $x=(x_1, \\dots, x_n)$, as \n\\begin{align}\np(C_k \\mid x) = \\frac{p(x \\mid C) p(C)}{p(x)}.\n\\end{align}\nThe most important assumption of the Naive Bayes model is that it assumes that all features are mutually independent conditional on the category $C$, e.g., \n\\begin{align}\np(x_i \\mid x_1, \\dots, x_{i-1}, x_{i+1}, \\dots, x_n, C) = p(x_i \\mid C).\n\\end{align}\n\n### Naive Bayes classifier\nThe Naive Bayes classifier is based on the MAP (maximum a posteriori) estimate of the conditional probability $p(C, x)$, i.e., given a feature vector $x$, we predict it being of the class\n\\begin{align}\n\\hat{y} = \\text{argmax}_{k \\in [K]} p(C) \\prod_{i=1}^n p(x_i \\mid C),\n\\end{align}\nor equivalently (for computational reasons)\n\\begin{align}\n\\hat{y} = \\text{argmax}_{k \\in [K]} \\left[ \\log \\left( p(C) \\right) + \\sum_{i=1}^n \\log \\left( p(x_i \\mid C\\right) \\right].\n\\end{align}\nFor simplicity we assume here that $p(C=k) = c_k$ is constant. \n\n#### Modeling the conditional probabilities\nOne can choose any model for the conditional probabilities $p(x_i \\mid C)$, e.g., Gaussian, Bernoulli, multinomial, etc. Note that the Naive Bayes classifier can easily handle mixtures of categorical and real-valued features.\n\n### Maximum likelihood training\nGiven a dataset $\\left\\{\\left(X^{(j)}, Y^{(j)}\\right)\\right\\}_{j \\in [m]}$, we can train the parameters $\\{ \\theta_p \\}_{p \\in [P]}$ of our model (hidden in $p(x_i \\mid C)$) using maximum likelihood, i.e., setting derivatives of the likelihood function $p(\\theta | x, C)$ with respect to the $\\theta_p$'s to zero and solving for the $\\theta_p$'s. 
Using the independence assumption, we can find closed-form maximum likelihood estimates, e.g., if $p(x_i \\mid C=k)$ is Gaussian, then \n\\begin{align}\n\\mu_{ik} &= \\sum_{Y_j = k \\; \\forall j \\in [m]} \\frac{X^{(j)}_i}{n_k}, \\\\\n\\sigma_{ik} &= \\sum_{Y_j = k \\; \\forall j \\in [m]} \\frac{(X^{(j)}_i - \\mu_{ik})^2}{n_k},\n\\end{align}\nwhere $n_k$ is the amount of samples with class $k$ in the dataset.\n\n### Sampling\nDue to the independence assumption, sampling examples $\\left(\\hat{x}, \\hat{C}\\right) \\sim p(x, C)$ is fairly easy. We first set $p(C=k) = n_k / m$ and sample $\\hat{C} \\sim p(C)$. Afterwards, we can sample the features $\\hat{x}_i \\sim p\\left(x_i \\mid C = \\hat{C}\\right)$. Sampling methods for easy probability distributions are implemented in NumPy.\n\n\n```python\nimport numpy as np\n```\n\n\n```python\nclass NaiveBayes:\n def __init__(self, class_prior, features, n_classes):\n self.class_prior = class_prior\n self.features = features\n self.n_classes = n_classes\n \n self._init_weights()\n \n def _init_weights(self):\n self.weights = []\n \n for i, feature in enumerate(self.features):\n if feature == 'gaussian':\n self.weights.append((self._random_normal(), self._random_normal()))\n \n elif feature == 'bernoulli':\n self.weights.append((self._random_uniform()))\n \n def _random_normal(self, loc_in=0.0, scale_in=1.0, size_in=None):\n if size_in is None:\n size_in = self.n_classes\n \n return np.random.normal(loc=loc_in, scale=scale_in, size=size_in)\n \n def _random_uniform(self, low_in=0.0, high_in=1.0, size_in=None):\n if size_in is None:\n size_in = self.n_classes\n \n return np.random.uniform(low=low_in, high=high_in, size=size_in)\n \n def _random_bernoulli(p_in, n_in=1, size_in=1):\n return np.random.multionimal(n=n_in, pvals=p_in, size=size_in)\n \n def _conditional_log_probability(self, feature, class_index, feature_index):\n if self.features[feature_index] == 'gaussian':\n return - (1/2)*np.log(2*np.pi*self.weights[feature_index][1][class_index]**2) \\\n - (feature-self.weights[feature_index][0][class_index])**2 \\\n / (2*self.weights[feature_index][1][class_index]**2)\n \n elif self.features[feature_index] == 'bernoulli':\n if feature == 0:\n return np.log(1-self.weights[feature_index][class_index])\n else:\n return np.log(self.weights[feature_index][class_index])\n \n def log_likelihood(self, X, Y):\n log_likelihood = 0.0\n \n for x, y in zip(X, Y):\n sum_of_logs = 0.0\n for j, _ in enumerate(self.features):\n sum_of_logs += np.log(self.class_prior[y]) + self._conditional_log_probability(x[j], y, j)\n \n log_likelihood += sum_of_logs\n \n return log_likelihood\n \n def _gaussian_maximum_likelihood_fit(self, feature_number, X, Y):\n means = np.zeros((self.n_classes,))\n counts = np.zeros((self.n_classes,))\n \n for x, y in zip(X, Y):\n means[y] += x\n counts[y] += 1\n \n for i in range(self.n_classes):\n means[i] /= counts[i]\n \n variances = np.zeros((self.n_classes,))\n \n for x, y in zip(X, Y):\n variances[y] += (x - means[y])**2 / counts[y]\n \n for i in range(self.n_classes):\n self.weights[feature_number][0][i] = means[i]\n self.weights[feature_number][1][i] = np.sqrt(variances[i])\n \n def _bernoulli_maximum_likelihood_fit(self, feature_number, X, Y):\n means = np.zeros((self.n_classes,))\n counts = np.zeros((self.n_classes,))\n \n for x, y in zip(X, Y):\n means[y] += x\n counts[y] += 1\n \n for i in range(self.n_classes):\n means[i] /= counts[i]\n \n for i in range(self.n_classes):\n self.weights[feature_number][i] = means[i]\n \n def 
maximum_likelihood_fit(self, X, Y):\n for j, feature in enumerate(self.features):\n if feature == 'gaussian':\n self._gaussian_maximum_likelihood_fit(j, X[:, j], Y)\n elif feature == 'bernoulli':\n self._bernoulli_maximum_likelihood_fit(j, X[:, j], Y)\n \n def predictions(self, X):\n predictions = []\n \n for x in X:\n \n class_predictions = []\n for i in range(self.n_classes):\n \n class_predictions.append(self.log_likelihood([x], [i]))\n \n predictions.append(class_predictions)\n \n return predictions\n \n def sampling(self, n_samples):\n samples = np.empty((n_samples, len(self.features) + 1))\n for i in range(n_samples):\n features = []\n ### sample a class\n sampled_class = np.argwhere(np.random.multinomial(n=1, pvals=self.class_prior) == 1)\n samples[i, -1] = sampled_class\n \n for j, feature in enumerate(self.features):\n if feature == 'gaussian':\n samples[i, j] = self._random_normal(loc_in = self.weights[j][0][sampled_class][0],\n scale_in = self.weights[j][1][sampled_class][0],\n size_in = 1)\n elif feature == 'bernoulli':\n samples[i, j] = self._random_bernoulli(self.weights[j][samples_class][0])\n \n return samples\n```\n\nWe will now fit a Naive Bayes classifier Iris flower data set and afterwards try to sample points \nfrom our generative model.\n\n\n```python\nfrom sklearn.datasets import load_iris\nfrom sklearn.utils import shuffle\nfrom sklearn import preprocessing\nfrom scipy.special import softmax\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nX, Y = load_iris(return_X_y=True)\nX = preprocessing.scale(X) \nX, Y = shuffle(X, Y)\n\nX_train = X[:100, :]\nY_train = Y[:100]\n\nX_test = X[100:, :]\nY_test = Y[100:]\n\n### Maximum likelihood fitting \nnaive_bayes_iris = NaiveBayes([1/3.]*3, ['gaussian']*4, 3)\n\nprint('Log-likelihood before fitting (random weights):', naive_bayes_iris.log_likelihood(X_train, Y_train))\n\nnaive_bayes_iris.maximum_likelihood_fit(X_train, Y_train)\n\nprint('Log-likelihood after fitting:', naive_bayes_iris.log_likelihood(X_train, Y_train))\n```\n\n Log-likelihood before fitting (random weights): -1584.5829832421746\n Log-likelihood after fitting: -627.6014130704741\n\n\n\n```python\npredictions = naive_bayes_iris.predictions(X_test)\nprint('Accuracy on test set:', np.sum(np.argmax(predictions, axis=1) == Y_test) / 50)\n```\n\n Accuracy on test set: 0.96\n\n\n\n```python\n### Sampling points \nsamples = naive_bayes_iris.sampling(50)\n```\n\nIn the next plots we compare the data distribution to the sampling distribution. We can clearly see that the features (conditioned on the class) of the data distribution are correlated, and therefore the naive Bayes model might not be perfect for this case. 
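One quick way to check that observation numerically (a sketch re-using the `X_test`/`Y_test` split defined above) is to look at the per-class correlation between the first two features:

```python
### Per-class correlation between the first two features (sketch)
for k in range(3):
    mask = (Y_test == k)
    r = np.corrcoef(X_test[mask, 0], X_test[mask, 1])[0, 1]
    print(f'class {k}: corr(feature 1, feature 2) = {r:.2f}')
```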
\n\n\n```python\n### Data distribution for feature 1 and 2\nplt.scatter(X_test[:, 0], X_test[:, 1], c=Y_test)\n```\n\n\n```python\n### Sampling distribution for feature 1 and 2\nplt.scatter(samples[:, 0], samples[:, 1], c=samples[:, 4])\n```\n\n\n```python\n### Data distribution for feature 1 and 3\nplt.scatter(X_test[:, 0], X_test[:, 2], c=Y_test)\n```\n\n\n```python\n### Sampling distribution for feature 1 and 3\nplt.scatter(samples[:, 0], samples[:, 2], c=samples[:, 4])\n```\n", "meta": {"hexsha": "c6becca5f688dcb2569bd903122fc0cf83bc519f", "size": 74801, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "naive_bayes.ipynb", "max_stars_repo_name": "timudk/generative_models_in_numpy", "max_stars_repo_head_hexsha": "505871be358db047110d2b5b3a854abd81005733", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "naive_bayes.ipynb", "max_issues_repo_name": "timudk/generative_models_in_numpy", "max_issues_repo_head_hexsha": "505871be358db047110d2b5b3a854abd81005733", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "naive_bayes.ipynb", "max_forks_repo_name": "timudk/generative_models_in_numpy", "max_forks_repo_head_hexsha": "505871be358db047110d2b5b3a854abd81005733", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 165.8558758315, "max_line_length": 16692, "alphanum_fraction": 0.8739054291, "converted": true, "num_tokens": 2420, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9688561712637257, "lm_q2_score": 0.8688267626522814, "lm_q1q2_score": 0.8417681707547471}} {"text": "# Logistic regression with COSMO\n\n_The presented example is adapted from the [Machine Learning - course by Andrew Ng](https://www.coursera.org/learn/machine-learning)._\n\nIn this example we use logistic regression to estimate the parameters $\\theta_i$ of a logistic model in a classification problem. We will first transform the logistic regression problem into an exponential cone optimisation problem. 
We will then solve the optimisation problem with COSMO and determine the model parameters and the decision boundary.\n\n### Visualizing the data\n\nBefore we start, let's load and take a look at the example data from `examples/chip_data.txt`:\n\n\n```julia\nusing PyCall, LinearAlgebra, SparseArrays, COSMO, JuMP\npygui(:qt5)\nusing PyPlot\n\n# load example data\nf = open(\"./chip_data.txt\")\nlines = readlines(f)\nclose(f)\nn_data = length(lines)\nx1 = zeros(n_data)\nx2 = zeros(n_data)\ny = zeros(Float64, n_data)\nfor (i, l) in enumerate(lines)\n s = split(l, \",\")\n x1[i] = parse(Float64, s[1])\n x2[i] = parse(Float64, s[2])\n y[i] = parse(Float64, s[3])\nend\n\n\n# visualize data\nPyPlot.figure(1)\np1 = plot(x1[findall(x -> x == 1., y)], x2[findall(x -> x == 1., y)], marker = \"+\", color = \"b\", linestyle= \"None\", label=\"Accepted\")\np2 = plot(x1[findall(x -> x != 1., y)], x2[findall(x -> x != 1., y)], marker = \".\", color = \"r\", linestyle= \"None\", label=\"Rejected\")\naxis(\"equal\")\nPyPlot.grid(true)\nPyPlot.legend()\nxlabel(\"x1 - Microchip Test Score 1\")\nylabel(\"x2 - Microchip Test Score 2\")\nnothing\n```\n\nThe plot shows two test scores of $n$ microchip samples from a fabrication plant and whether the chip passed the quality check. Based on this data we would like to build a logistic model that takes into account the test scores and helps us predict the likelyhood of a chip being accepted.\n\n### Defining the logistic model\n\nThe logistic regression hypothesis is given by \n\n\\begin{equation}\nh_\\theta(x) = g(\\theta^\\top x)\n\\end{equation}\n\nwhere $g$ is the sigmoid function:\n\n\\begin{equation}\ng(\\theta^\\top x) = \\frac{1}{1+\\exp(-\\theta^\\top x)}.\n\\end{equation}\n\n$x$ are the independent variables and $\\theta$ are the model parameters. For our samples we set the dependent variable $y =1$ if the chip was accepted and $y = 0$ otherwise.\n\n$h_\\theta(x)$ can be interpreted as the probability of the outcome being true rather than false. We want to find the parameters $\\theta$ such that the following likelyhood function is maximized:\n\n\\begin{equation}\n\\text{maximize} \\quad J(\\theta) = h_\\theta(x_i)^{y_i} (1-h_\\theta(x_i))^{(1-y_i)} + \\mu \\|\\theta \\|_2,\n\\end{equation}\n\nwhere we added a regularization term with parameter $\\mu$ to prevent overfitting. \n\n### Feature mapping\nAs our dataset only has two independent variables (the test scores) our model $y = \\theta_0 + \\theta_1 x_1 + \\theta_2 x_2$ will have the form of a straight line. Looking at the plot one can see that a line will not perform well in separating the samples. 
Therefore, we will create more features based on each data point by mapping the original features ($x_1$, $x_2$) into all polynomial terms of $x_1$ and $x_2$ up to the 6th power:\n\n\\begin{equation}\n\\text{map_feature}(x_1,x_2) = [1, x_1, x_2, x_1^2, x_1x_2, x_2^2, x_1^3, \\dots, x_1x_2^5, x_2^6 ]\n\\end{equation}\n\nThis will create 28 features for each sample.\n\n\n\n\n```julia\nfunction map_feature(x1, x2)\n deg = 6\n x_new = ones(length(x1))\n for i = 1:deg, j = 0:i\n x_new = hcat(x_new, x1.^(i-j) .* x2.^j)\n end\n return x_new\nend\n\nX = map_feature(x1, x2);\nsize(X)\n```\n\n\n\n\n (118, 28)\n\n\n\n### Transformation into a conic optimisation problem\nWe can rewrite above likelyhood maximisation problem as a conic optimisation problem with exponential-cone-, second-order-cone-, equality-, and inequality constraints:\n\n\\begin{equation} \n\\begin{array}{ll}\n\\text{minimize} &\\sum_i^n \\epsilon_i + \\mu v\\\\\n\\text{subject to} & \\|\\theta \\|_2 \\leq v\\\\\n& \\log(1 + \\exp(-\\theta^\\top x_i)) \\leq \\epsilon_i \\quad \\text{if } y_i = 1, \\\\ \n& \\log(1 + \\exp(\\theta^\\top x_i)) \\leq \\epsilon_i \\quad\\text{ otherwise.}\n\\end{array}\n\\end{equation} \n\nImplementing the constraint $\\log(1 + \\exp(z)) \\leq \\epsilon $ for each of the $n$ samples requires two exponential cone constraints, one inequality constraint and two equality constraints. To see this, take the exponential on both sides and then divide by $\\exp(\\epsilon)$ to get:\n\n\\begin{equation}\n\\exp(-\\epsilon) + \\exp(z - \\epsilon) \\leq 1.\n\\end{equation}\n\nThis constraint is equivalent to:\n\n\\begin{equation} \n\\begin{array}{ll}\n(z - \\epsilon, s_1, t_1) &\\in K_{\\text{exp}}, \\\\\n(-\\epsilon, s_2, t_2) &\\in K_{\\text{exp}}, \\\\\nt_1 + t_2 &\\leq 1,\\\\\ns_1 = s_2 &= 1,\n\\end{array}\n\\end{equation}\n\nwhere we defined the exponential cone as:\n\n\\begin{equation}\nK_{\\text{exp}} = \\{(r, s, t) \\mid s >0, s \\exp(r/s) \\leq t \\} \\cup \\{ r \\leq 0, s = 0, t \\geq 0 \\}.\n\\end{equation}\n\nBased on this transformation our optimisation problem will have $5n + n_\\theta + 1$ variables, 1 SOCP constraint, $2n$ exponential cone constraints, $n$ inequality constraints and $2n$ equality constraints.\nLet's model the problem with JuMP and COSMO:\n\n\n```julia\nn_theta = size(X, 2)\nn = n_data\n\u03bc = 1.\n\nm = Model(with_optimizer(COSMO.Optimizer))\n@variable(m, v)\n@variable(m, \u03b8[1:n_theta])\n@variable(m, e[1:n])\n@variable(m, t1[1:n])\n@variable(m, t2[1:n])\n@variable(m, s1[1:n])\n@variable(m, s2[1:n])\n\n@objective(m, Min, \u03bc * v + sum(e))\n@constraint(m, [v; \u03b8] in MOI.SecondOrderCone(n_theta + 1))\n\n# create the constraints for each sample points\nfor i = 1:n\n yi = y[i]\n x = X[i, :]\n yi == 1. ? 
(a = -1) : (a = 1)\n @constraint(m, [a * dot(\u03b8, x) - e[i]; s1[i]; t1[i] ] in MOI.ExponentialCone())\n @constraint(m, [-e[i]; s2[i]; t2[i]] in MOI.ExponentialCone())\n @constraint(m, t1[i] + t2[i] <= 1)\n @constraint(m, s1[i] == 1)\n @constraint(m, s2[i] == 1)\nend\nJuMP.optimize!(m)\ntheta = value.(\u03b8)\n\n```\n\n ------------------------------------------------------------------\n COSMO - A Quadratic Objective Conic Solver\n Michael Garstka\n University of Oxford, 2017 - 2019\n ------------------------------------------------------------------\n \n Problem: x \u2208 R^{619},\n constraints: A \u2208 R^{1091x619} (4513 nnz), b \u2208 R^{1091},\n matrix size to factor: 1710x1710 (2924100 elem, 10736 nnz)\n Sets: ZeroSet{Float64} of dim: 236\n Nonnegatives{Float64} of dim: 118\n SecondOrderCone{Float64} of dim: 29\n ExponentialCone{Float64} of dim: 3\n ExponentialCone{Float64} of dim: 3\n ExponentialCone{Float64} of dim: 3\n ... and 234 more\n Settings: \u03f5_abs = 1.0e-04, \u03f5_rel = 1.0e-04,\n \u03f5_prim_inf = 1.0e-06, \u03f5_dual_inf = 1.0e-04,\n \u03c1 = 0.1, \u03c3 = 1.0e-6, \u03b1 = 1.6,\n max_iter = 2500,\n scaling iter = 10 (on),\n check termination every 40 iter,\n check infeasibility every 40 iter,\n KKT system solver: QDLDL\n Setup Time: 1.88ms\n \n Iter:\tObjective:\tPrimal Res:\tDual Res:\tRho:\n 40\t3.8685e+01\t3.8745e-02\t4.9828e-03\t1.0000e-01\n 80\t4.9739e+01\t7.6007e-03\t7.7075e-04\t1.0000e-01\n 120\t5.1537e+01\t1.6183e-03\t1.4459e-04\t1.0000e-01\n 160\t5.1856e+01\t3.5038e-04\t2.8090e-05\t1.0000e-01\n \n ------------------------------------------------------------------\n >>> Results\n Status: Solved\n Iterations: 160\n Optimal objective: 51.8557\n Runtime: 0.249s (248.56ms)\n \n\n\n\n\n\n 28-element Array{Float64,1}:\n 2.674439930406765 \n 1.7748860393946884 \n 2.934236553994507 \n -4.053008143592888 \n -3.3811679943697612 \n -4.0530397674103735 \n 0.7775813452248421 \n -1.0919224676820907 \n -0.46310226688930994\n -0.48894009682277273\n -3.2952703524511726 \n 0.5638302900174557 \n -1.8175402706656334 \n \u22ee \n -0.47307150885083393\n 0.6345032449500374 \n -1.1631145722989464 \n -1.2247379129099638 \n -0.09277151198530442\n -2.685527206587255 \n 0.4665466584002503 \n -0.7660832134314224 \n 0.44944903637367295\n -1.190858736040457 \n -0.9542098066640803 \n -1.1918683708910005 \n\n\n\nOnce we have solved the optimisation problem and obtained the parameter vector $\\theta$, we can plot the decision boundary. 
This can be done by evaluating our model over a grid of points $(u,v)$ and then plotting the contour line where the function returns a probability of $p=0.5$.\n\n\n```julia\n# First we evaluate our model over a grid of points z = Model(u, v)\nu = collect(range(-1., stop = 1.5, length = 50))\nv = collect(range(-1., stop = 1.5, length = 50))\nz = zeros(length(u), length(v));\nfor i = 1:length(u), j = 1:length(v)\n z[i, j] = dot(map_feature(u[i], v[j]), theta);\nend\n\n\n# visualize data with decision boundary\nPyPlot.figure(2)\np1 = plot(x1[findall(x -> x == 1., y)], x2[findall(x -> x == 1., y)], marker = \"+\", color = \"b\", linestyle= \"None\", label=\"Accepted\")\np2 = plot(x1[findall(x -> x != 1., y)], x2[findall(x -> x != 1., y)], marker = \".\", color = \"r\", linestyle= \"None\", label=\"Rejected\")\naxis(\"equal\")\nPyPlot.grid(true)\nPyPlot.legend()\nxlabel(\"x1 - Microchip Test Score 1\")\nylabel(\"x2 - Microchip Test Score 2\")\n# Now add the decision boundary as a contour plot\ncontour(u, v, z', [0.5])\nnothing\n```\n\n## Solving the optimisation problem directly with COSMO\nWe can solve the problem directly in COSMO by using its modeling interface. The problem will have $nn = 5 n + n_\\theta + 1$ variables. Let us define the cost function $ \\frac{1}{2}x^\\top P x + q^\\top x $:\n\n\n```julia\nnn = 5 * n + n_theta + 1\nP = spzeros(nn, nn)\nq = zeros(nn)\nq[1] = \u03bc\nfor i = 1:n\n q[1 + n_theta + (i - 1) * 5 + 1] = 1.\nend\n```\n\nNext we define a function that creates the `COSMO.Constraints` for a given sample: \n\n\n```julia\n# the order of the variables\n# v, thetas, [e1 t11 t12 s11 s12] [e2 t21 t22 s21 s22] ...\n# for each sample create two exponential cone constraints, \n# 1 nonnegatives constraint, 2 zeroset constraints\nfunction add_log_regression_constraints!(constraint_list, x, y, n, sample_num)\n num_thetas = length(x)\n # 1st exponential cone constraint (zi - ei, s1, t1) in Kexp\n c_start = 1 + num_thetas + (sample_num - 1) * 5 + 1\n A = spzeros(3, n)\n A[1, c_start] = -1.\n y == 1. ? (a = -1) : (a = 1)\n for k = 1:num_thetas\n A[1, 2 + k - 1] = a * x[k]\n end\n A[2, c_start + 3] = 1.\n A[3, c_start + 1] = 1.\n b = zeros(3)\n push!(constraint_list, COSMO.Constraint(A, b, COSMO.ExponentialCone))\n\n # 2nd exponential cone constraint (-e, s2, t2)\n A = spzeros(3, n)\n A[1, c_start] = -1.\n A[2, c_start + 4] = 1.\n A[3, c_start + 2] = 1.\n b = zeros(3)\n push!(constraint_list, COSMO.Constraint(A, b, COSMO.ExponentialCone))\n\n # Nonnegatives constraint t1 + t2 <= 1\n A = spzeros(1, n)\n A[1, c_start + 1] = -1.\n A[1, c_start + 2] = -1.\n b = [1.]\n push!(constraint_list, COSMO.Constraint(A, b, COSMO.Nonnegatives))\n\n # ZeroSet constraint s1 == 1, s2 == 1\n A = spzeros(2, n)\n A[1, c_start + 3] = 1.\n A[2, c_start + 4] = 1.\n b = -1 * ones(2)\n push!(constraint_list, COSMO.Constraint(A, b, COSMO.ZeroSet))\nend\n```\n\n\n\n\n add_log_regression_constraints! 
(generic function with 1 method)\n\n\n\nNow we can use this function to loop over the sample points and add the constraints to our constraint list:\n\n\n```julia\nconstraint_list = Array{COSMO.Constraint{Float64}}(undef, 0)\nfor i = 1:n\n add_log_regression_constraints!(constraint_list, X[i, :], y[i], nn, i )\nend\n```\n\nIt remains to add a second order cone constraint for the regularisation term:\n\n$\\|\\theta \\|_2 \\leq v$\n\n\n```julia\npush!(constraint_list, COSMO.Constraint(Matrix(1.0I, n_theta + 1, n_theta + 1), zeros(n_theta + 1), COSMO.SecondOrderCone, nn, 1:n_theta+1))\nnothing\n```\n\nWe can now create, assemble, and solve our `COSMO.Model`:\n\n\n```julia\nmodel = COSMO.Model()\nassemble!(model, P, q, constraint_list, settings = COSMO.Settings(verbose=true))\nres = COSMO.optimize!(model);\n```\n\n ------------------------------------------------------------------\n COSMO - A Quadratic Objective Conic Solver\n Michael Garstka\n University of Oxford, 2017 - 2019\n ------------------------------------------------------------------\n \n Problem: x \u2208 R^{619},\n constraints: A \u2208 R^{1091x619} (4513 nnz), b \u2208 R^{1091},\n matrix size to factor: 1710x1710 (2924100 elem, 10736 nnz)\n Sets: ZeroSet{Float64} of dim: 236\n Nonnegatives{Float64} of dim: 118\n SecondOrderCone{Float64} of dim: 29\n ExponentialCone{Float64} of dim: 3\n ExponentialCone{Float64} of dim: 3\n ExponentialCone{Float64} of dim: 3\n ... and 234 more\n Settings: \u03f5_abs = 1.0e-04, \u03f5_rel = 1.0e-04,\n \u03f5_prim_inf = 1.0e-06, \u03f5_dual_inf = 1.0e-04,\n \u03c1 = 0.1, \u03c3 = 1.0e-6, \u03b1 = 1.6,\n max_iter = 2500,\n scaling iter = 10 (on),\n check termination every 40 iter,\n check infeasibility every 40 iter,\n KKT system solver: QDLDL\n Setup Time: 2.51ms\n \n Iter:\tObjective:\tPrimal Res:\tDual Res:\tRho:\n 40\t3.8685e+01\t3.8745e-02\t4.9828e-03\t1.0000e-01\n 80\t4.9739e+01\t7.6007e-03\t7.7075e-04\t1.0000e-01\n 120\t5.1537e+01\t1.6183e-03\t1.4459e-04\t1.0000e-01\n 160\t5.1856e+01\t3.5038e-04\t2.8090e-05\t1.0000e-01\n \n ------------------------------------------------------------------\n >>> Results\n Status: Solved\n Iterations: 160\n Optimal objective: 51.8557\n Runtime: 0.25s (249.68ms)\n \n\n\nLet us double check that we get the same $\\theta$ as in the previous section:\n\n\n```julia\nusing Test\ntheta_cosmo = res.x[2:2+n_theta-1]\n@test norm(theta_cosmo - theta) < 1e-10\n```\n\n\n\n\n \u001b[32m\u001b[1mTest Passed\u001b[22m\u001b[39m\n\n\n\n\n```julia\n\n```\n", "meta": {"hexsha": "383c12399cd5fa326763e59b45afffd67436227a", "size": 106327, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "examples/logistic_regression_regularization.ipynb", "max_stars_repo_name": "msarfati/COSMO.jl", "max_stars_repo_head_hexsha": "c12d46c485ddccba286d3447d60d1cb399402119", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 210, "max_stars_repo_stars_event_min_datetime": "2018-12-11T23:45:52.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-17T23:11:26.000Z", "max_issues_repo_path": "examples/logistic_regression_regularization.ipynb", "max_issues_repo_name": "msarfati/COSMO.jl", "max_issues_repo_head_hexsha": "c12d46c485ddccba286d3447d60d1cb399402119", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 110, "max_issues_repo_issues_event_min_datetime": "2018-12-12T15:52:17.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-20T00:44:39.000Z", "max_forks_repo_path": "examples/logistic_regression_regularization.ipynb", "max_forks_repo_name": 
"msarfati/COSMO.jl", "max_forks_repo_head_hexsha": "c12d46c485ddccba286d3447d60d1cb399402119", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 39, "max_forks_repo_forks_event_min_datetime": "2019-03-10T06:40:11.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-23T08:53:29.000Z", "avg_line_length": 178.4010067114, "max_line_length": 46126, "alphanum_fraction": 0.8815446688, "converted": true, "num_tokens": 4585, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9728307716151471, "lm_q2_score": 0.8652240877899776, "lm_q1q2_score": 0.8417166169447357}} {"text": "```python\nimport numpy as np \nimport sympy as sp\nimport scipy as sc\nfrom scipy.optimize import fsolve\nfrom numpy import linalg as la\n```\n\n$PROBLEM$ $1$\n\n$Part$ $A$\n\n\n```python\nA = np.array([[2,5,-3],[5,7,1], [-3,1,4]])\nprint(A)\nprint(\"\\nSince it's 3x3 Matrix we have 3 degree characterstic polynomial\")\n\u03bb = sp.symbols('\u03bb')\ndisplay(Eq(-\u03bb**3+13*\u03bb**2-15*\u03bb-139,0))\n```\n\n$Part$ $B$\n\n\n```python\n#Let's define the obove characterstic polynomial and compute it\ndef func(\u03bb):\n return -\u03bb**3+13*\u03bb**2-15*\u03bb-139\n\n\u03bb1=fsolve(func,1)\nprint(\"\u03bb1 =\",\u03bb1)\n\u03bb2=fsolve(func,10)\nprint(\"\u03bb2 =\",\u03bb2)\n\u03bb3=fsolve(func,5)\nprint(\"\u03bb3 =\",\u03bb3)\n\n```\n\n$Part$ $C$\n\n\n```python\neigVal,eigVect=la.eig(A)\nprint(\"Eigen Values\\n\",eigVal)\nprint(\"\\nEigenvectors\\n\",eigVect)\n```\n\n$Part$ $D$\n\n\n```python\n#for a Matrix to be orthogonal Q^T=Q^-1\nQ=eigVect\nprint(Q.T,\"\\n\")\nprint(np.linalg.inv(Q))\nprint(\"\\n Thus the EigenVectors are Orthogonal\")\n```\n\n$PROBLEM$ $2$\n\n$Part$ $A$\n\n\n```python\nM=np.array([[-1,8,2,10,45],[8,6,-11,-23,85],[2,-11,4,-3,-7],[10,-23,-3,2,2],[45,85,-7,2,2]])\nfor i in range(25):\n q,r=la.qr(M)\n M=r@q\n \nprint(\"\\nEigenvalues by QR factorization\\n\")\nprint(np.rint(M))\n```\n\n \n Eigenvalues by QR factorization\n \n [[105. -6. 0. -0. -0.]\n [ -6. -91. 0. 0. 0.]\n [ 0. 0. -24. 0. -0.]\n [ -0. 0. 0. 18. -0.]\n [ 0. -0. -0. -0. 5.]]\n\n\n$Part$ $B$\n\n\n```python\neigVal,eigVect=la.eig(M)\nprint(\"Eigen Values\\n\",np.around(eigVal,3))\nprint(\"\\nEigenvectors\\n\",np.around(eigVect,2))\n\n```\n\n Eigen Values\n [104.99 -91.557 -23.637 18.211 4.993]\n \n Eigenvectors\n [[-1. -0.03 -0. 0. 0. ]\n [ 0.03 -1. -0. 0. 0. ]\n [-0. 0. -1. 0. -0. ]\n [ 0. -0. 0. 1. 0. ]\n [-0. 0. -0. -0. 1. 
]]\n\n\n$Part$ $C$\n\n\n```python\nprint(np.trace(M))\nprint(sum(eigVal))\n```\n\n 12.999999999999865\n 12.999999999999819\n\n\n$Part$ $D$\n\n\n```python\nprint(la.det(M))\nprint(np.prod(eigVal))\n```\n\n 20660378.000000045\n 20660378.00000005\n\n\n$PROBLEM$ $3$\n\n$Part$ $A$\n\nRewriting the Matrix as Eigen Problem\n\n\\begin{pmatrix}\n\\frac{k}{m_{1}}-w^{2} & -\\frac{k}{m_{1}} & 0\\\\ \n -\\frac{k}{m_{2}}& \\frac{2k}{m_{2}}-w^{2} & -\\frac{k}{m_{2}} \\\\ \n 0 & -\\frac{k}{m_{1}} & \\frac{k}{m_{1}}-w^{2}\n\\end{pmatrix}\n\n\n$Part$ $B$\n\n$Substituting $ the $ values $ of $ k, $ m1, $ m2 $ in $ the $ equation\n\n\\begin{pmatrix}\n3.06*10^{28}-\\lambda & -3.06*10^{28} & 0\\\\ \n -1.20*10^{29} & 2.41*10^{29}-\\lambda & -1.20*10^{29} \\\\ \n 0 & -3.06*10^{28} & 3.06*10^{28}-\\lambda\n\\end{pmatrix}\n\n\n\n```python\nV=np.array([[3.06*10**28,-3.06*10**28,0],[-1.20*10**29,2.41*10**29,-1.20*10**29],[0,-3.06*10**28,3.06*10**28]])\nprint(V)\neigValues,eigVectors=la.eig(V)\nprint(\"\\nThe vibration Frequenices (w^2)1, (w^2)2, (w^2)3 are \\n\\n\",eigValues)\n```\n\n [[ 3.06e+28 -3.06e+28 0.00e+00]\n [-1.20e+29 2.41e+29 -1.20e+29]\n [ 0.00e+00 -3.06e+28 3.06e+28]]\n \n The vibration Frequenices (w^2)1, (w^2)2, (w^2)3 are \n \n [2.71487288e+29 3.06000000e+28 1.12712460e+26]\n\n\n$Part$ $C$\n\n\n```python\nprint(\"\\nEigen Vectors\\n\\n\",eigVectors)\n```\n\n \n Eigen Vectors\n \n [[-1.25028831e-01 -7.07106781e-01 5.78059140e-01]\n [ 9.84243660e-01 -1.84988936e-17 5.75929909e-01]\n [-1.25028831e-01 7.07106781e-01 5.78059140e-01]]\n\n\n$PROBLEM$ $4$\n\n$Part$ $A$\n\n\n```python\nS=np.array([[20,8,-18],[8,28,12],[-18,12,4]])\n# print(S)\neigVal,eigVect=la.eig(S)\nprint(\"\\nPrincipal Stresses in Mega Pascals are\\n\\n\",eigVal)\n```\n\n \n Principal Stresses in Mega Pascals are\n \n [-12.79567553 31.67988775 33.11578778]\n\n\n$Part$ $B$\n\n\n```python\n#To find the angles we need the normal(eigenvectors) vectors frist\n#Also python normalizes the normal vectors automatically\n#angles are measured from the +ve axis CCW direction.\nN=eigVect\nprint(\"Normal Vectors to the Prinipal Plane\\n\",N)\n\nprint(\"\\nThe Maximum principal stress is 33.11 MPa\")\nprint(\"It's Unit Normal to plane is \", N[:,2])\n\nprint(\"\\nX-axis in degrees \",np.degrees(np.arccos(N[0,2])),\" or radians\",np.arccos(N[0,2]))\nprint(\"Y-axis in degrees \",np.degrees(np.arccos(N[1,2])),\" or radians\",np.arccos(N[1,2]))\nprint(\"Z-axis in degrees \",np.degrees(np.arccos(N[2,2])),\" or radians\",np.arccos(N[2,2]))\n```\n\n Normal Vectors to the Prinipal Plane\n [[-0.51481828 0.81232573 0.27402382]\n [ 0.33329183 -0.10484587 0.93697593]\n [-0.78985992 -0.57370224 0.21676496]]\n \n The Maximum principal stress is 33.11 MPa\n It's Unit Normal to plane is [0.27402382 0.93697593 0.21676496]\n \n X-axis in degrees 74.09615133093509 or radians 1.2932218037807959\n Y-axis in degrees 20.450248042933367 or radians 0.35692416119871395\n Z-axis in degrees 77.48090601002706 or radians 1.3522969173032346\n\n\n$Part$ $C$\n\n\n```python\n#similary the Minimum Princial stress is -12.79Mpa\n#Also python normalizes the normal vectors automatically\n#angles are measured from the +ve axis CCW direction.\nN=eigVect\nprint(\"Normal Vectors to the Prinipal Plane\\n\",N)\n\nprint(\"\\nThe Minimum principal stress is -12.79 MPa\")\nprint(\"It's Unit Normal to plane is \", N[:,0])\n\nprint(\"\\nX-axis in degrees \",np.degrees(np.arccos(N[0,0])),\" or radians\",np.arccos(N[0,0]))\nprint(\"Y-axis in degrees \",np.degrees(np.arccos(N[1,0])),\" or 
radians\",np.arccos(N[1,0]))\nprint(\"Z-axis in degrees \",np.degrees(np.arccos(N[2,0])),\" or radians\",np.arccos(N[2,0]))\n```\n\n Normal Vectors to the Prinipal Plane\n [[-0.51481828 0.81232573 0.27402382]\n [ 0.33329183 -0.10484587 0.93697593]\n [-0.78985992 -0.57370224 0.21676496]]\n \n The Minimum principal stress is -12.79 MPa\n It's Unit Normal to plane is [-0.51481828 0.33329183 -0.78985992]\n \n X-axis in degrees 120.9853092679676 or radians 2.111591993269645\n Y-axis in degrees 70.53130164099609 or radians 1.2310034393526612\n Z-axis in degrees 142.17242293287993 or radians 2.4813768857166476\n\n", "meta": {"hexsha": "f3661279a1b40fbc89d96c635c80195dd8e4a1d7", "size": 13360, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "assignments/assignment03/assignment03_solution.ipynb", "max_stars_repo_name": "eyobghiday/computational-mechanics", "max_stars_repo_head_hexsha": "c64aebca5d26fdf5fc93977a5ec3dbaa4e715a48", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2021-06-27T06:21:23.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-17T09:48:45.000Z", "max_issues_repo_path": "assignments/assignment03/assignment03_solution.ipynb", "max_issues_repo_name": "eyobghiday/computational-mechanics", "max_issues_repo_head_hexsha": "c64aebca5d26fdf5fc93977a5ec3dbaa4e715a48", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "assignments/assignment03/assignment03_solution.ipynb", "max_forks_repo_name": "eyobghiday/computational-mechanics", "max_forks_repo_head_hexsha": "c64aebca5d26fdf5fc93977a5ec3dbaa4e715a48", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 25.8413926499, "max_line_length": 1227, "alphanum_fraction": 0.5026946108, "converted": true, "num_tokens": 2413, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9504109784205502, "lm_q2_score": 0.8856314677809304, "lm_q1q2_score": 0.841713869813702}} {"text": "```python\nimport sympy as sp\nsp.init_printing(use_latex = True)\n%matplotlib inline\n```\n\n\n```python\nM_s, x, w, y, h, a, beta, t, nu = sp.symbols('M_s, x, w, y, h, a, beta, t, nu')\n```\n\n\n```python\nH_x = M_s/(4*sp.pi) * (sp.log(((x+w)**2 + (y-h)**2)/((x+w)**2 + (y+h)**2)) - sp.log(((x-w)**2 + (y-h)**2)/((x-w)**2 + (y+h)**2)))\nH_x\n```\n\n\n```python\nH_y = M_s/(2*sp.pi) * (sp.atan((2*h*(x+w))/((x+w)**2 + y**2 - h**2)) - sp.atan((2*h*(x-w))/((x-w)**2 + y**2 - h**2)))\nH_y\n```\n\n\n```python\nH = sp.sqrt(H_x**2 + H_y**2)\nH\n```\n\n\n```python\nHx = sp.diff(H, x)\nHX = Hx.subs(y, x)\nprint(HX)\n```\n\n (M_s**2*(2*(2*h*(-2*w - 2*x)*(w + x)/(-h**2 + x**2 + (w + x)**2)**2 + 2*h/(-h**2 + x**2 + (w + x)**2))/(4*h**2*(w + x)**2/(-h**2 + x**2 + (w + x)**2)**2 + 1) - 2*(2*h*(-w + x)*(2*w - 2*x)/(-h**2 + x**2 + (-w + x)**2)**2 + 2*h/(-h**2 + x**2 + (-w + x)**2))/(4*h**2*(-w + x)**2/(-h**2 + x**2 + (-w + x)**2)**2 + 1))*(-atan(2*h*(-w + x)/(-h**2 + x**2 + (-w + x)**2)) + atan(2*h*(w + x)/(-h**2 + x**2 + (w + x)**2)))/(8*pi**2) + M_s**2*(-2*((-2*w + 2*x)/((h + x)**2 + (-w + x)**2) + (2*w - 2*x)*((-h + x)**2 + (-w + x)**2)/((h + x)**2 + (-w + x)**2)**2)*((h + x)**2 + (-w + x)**2)/((-h + x)**2 + (-w + x)**2) + 2*((-2*w - 2*x)*((-h + x)**2 + (w + x)**2)/((h + x)**2 + (w + x)**2)**2 + (2*w + 2*x)/((h + x)**2 + (w + x)**2))*((h + x)**2 + (w + x)**2)/((-h + x)**2 + (w + x)**2))*(-log(((-h + x)**2 + (-w + x)**2)/((h + x)**2 + (-w + x)**2)) + log(((-h + x)**2 + (w + x)**2)/((h + x)**2 + (w + x)**2)))/(32*pi**2))/sqrt(M_s**2*(-log(((-h + x)**2 + (-w + x)**2)/((h + x)**2 + (-w + x)**2)) + log(((-h + x)**2 + (w + x)**2)/((h + x)**2 + (w + x)**2)))**2/(16*pi**2) + M_s**2*(-atan(2*h*(-w + x)/(-h**2 + x**2 + (-w + x)**2)) + atan(2*h*(w + x)/(-h**2 + x**2 + (w + x)**2)))**2/(4*pi**2))\n\n\n\n```python\nH1 = H.subs(x, 0)\nH1\n```\n\n\n```python\nH2 = H1.subs(y, 0)\nH2\n```\n\n\n```python\nH3 = H2.subs(h, 12.5e-6)\nH3\n```\n\n\n```python\nH4 = H3.subs(w, 25e-6)\nH4\n```\n\n\n```python\nH5 = H4.subs(M_s, 8.6e5)\nH5\n```\n\n\n```python\nH5.evalf() #H5 = H0\n```\n\n\n```python\nprint(H)\n```\n\n sqrt(M_s**2*(-log(((-h + y)**2 + (-w + x)**2)/((h + y)**2 + (-w + x)**2)) + log(((-h + y)**2 + (w + x)**2)/((h + y)**2 + (w + x)**2)))**2/(16*pi**2) + M_s**2*(-atan(2*h*(-w + x)/(-h**2 + y**2 + (-w + x)**2)) + atan(2*h*(w + x)/(-h**2 + y**2 + (w + x)**2)))**2/(4*pi**2))\n\n", "meta": {"hexsha": "15f43c405e2aaba353f6054116c995688f3c4a8e", "size": 37913, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "FHD/FHD H Calc.ipynb", "max_stars_repo_name": "ali-kin4/JupyterNotes-1", "max_stars_repo_head_hexsha": "aeb944b43da4f731b46f25758f804110dd421fec", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "FHD/FHD H Calc.ipynb", "max_issues_repo_name": "ali-kin4/JupyterNotes-1", "max_issues_repo_head_hexsha": "aeb944b43da4f731b46f25758f804110dd421fec", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2021-05-17T07:45:07.000Z", "max_issues_repo_issues_event_max_datetime": "2021-05-17T09:25:10.000Z", "max_forks_repo_path": "FHD/FHD H Calc.ipynb", "max_forks_repo_name": "ali-kin4/JupyterNotes-part1", "max_forks_repo_head_hexsha": "aeb944b43da4f731b46f25758f804110dd421fec", "max_forks_repo_licenses": ["MIT"], 
"max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 104.4435261708, "max_line_length": 7568, "alphanum_fraction": 0.7619022499, "converted": true, "num_tokens": 1177, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9648551505674444, "lm_q2_score": 0.872347369700144, "lm_q1q2_score": 0.8416888527391466}} {"text": "# Setting the beta hyper-parameters\n\nSuppose $\\theta \\sim \\beta(\\alpha_1,\\alpha_2)$ and we believe that $E[\\theta] = m$ and $P(l < \\theta < u) = 0.95$. Write a program that can solve for $\\alpha_1$ and $\\alpha_2$ in terms of $m$, $l$, and $u$.\n\nThe mean for the beta distribution is well-known, so we have that\n\n\\begin{equation}\nE[\\theta] = m = \\frac{\\alpha_1}{\\alpha_1 + \\alpha_2}.\n\\end{equation}\n\nThus, this implies that \n\n\\begin{equation}\n\\alpha_2 = \\alpha_1\\frac{1-m}{m}.\n\\end{equation}\n\nNow, as $\\alpha_1$ increases the strength of our prior increases, so $P(l < \\theta < u)$ is a monotonic function of $\\alpha_1$. And so, we can solve for $\\alpha_1$, and thereby, $\\alpha_2$ with a binary search.\n\n\n```python\nfrom scipy import stats\n\nm = 0.15\nl = 0.05\nu = 0.3\n\nalpha_1_lower = 0.01\nalpha_1_upper = 1000\nalpha_1 = (alpha_1_lower + alpha_1_upper)/2\ntol = 1e-15\nwhile alpha_1_upper - alpha_1_lower >= 1e-9:\n alpha_2 = alpha_1*(1-m)/m\n density = stats.beta.cdf(u, a = alpha_1, b = alpha_2) - stats.beta.cdf(l, a = alpha_1, b = alpha_2)\n if density > 0.95:\n alpha_1_upper = alpha_1\n else:\n alpha_1_lower = alpha_1\n alpha_1 = (alpha_1_lower + alpha_1_upper)/2\n\nprint(alpha_1)\nprint(alpha_2)\n```\n\n 4.506062413888118\n 25.534353681276208\n\n\nThe problem asks for $\\alpha_1$ and $\\alpha_2$ if $m = 0.15$, $l = 0.05$, and $u = 0.3$. We find that\n\n\\begin{align}\n\\alpha_1 &\\approx 4.506062413888118 \\\\\n\\alpha_2 &\\approx 25.534353681276208.\n\\end{align}\n\n", "meta": {"hexsha": "e0ec3a388e985b7f836bd84f00f0566e70da0520", "size": 2723, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "chap03/16.ipynb", "max_stars_repo_name": "ppham27/MLaPP-solutions", "max_stars_repo_head_hexsha": "3b3fb838b0873eae01bf9793d7464386dbb43835", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 56, "max_stars_repo_stars_event_min_datetime": "2017-01-04T20:48:46.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-28T08:31:33.000Z", "max_issues_repo_path": "chap03/16.ipynb", "max_issues_repo_name": "eric701803/MLaPP-solutions", "max_issues_repo_head_hexsha": "3b3fb838b0873eae01bf9793d7464386dbb43835", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chap03/16.ipynb", "max_forks_repo_name": "eric701803/MLaPP-solutions", "max_forks_repo_head_hexsha": "3b3fb838b0873eae01bf9793d7464386dbb43835", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 23, "max_forks_repo_forks_event_min_datetime": "2017-01-23T05:53:52.000Z", "max_forks_repo_forks_event_max_datetime": "2021-01-02T05:55:40.000Z", "avg_line_length": 26.9603960396, "max_line_length": 224, "alphanum_fraction": 0.5233198678, "converted": true, "num_tokens": 546, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9390248140158416, "lm_q2_score": 0.8962513800615313, "lm_q1q2_score": 0.8416022854737207}} {"text": "## BNH\n\nBinh and Korn defined the following test problem in with 2 objectives and 2 constraints:\n\n**Definition**\n\n\\begin{equation}\n\\newcommand{\\boldx}{\\mathbf{x}}\n\\begin{array}\n\\mbox{Minimize} & f_1(\\boldx) = 4x_1^2 + 4x_2^2, \\\\\n\\mbox{Minimize} & f_2(\\boldx) = (x_1-5)^2 + (x_2-5)^2, \\\\\n\\mbox{subject to} & C_1(\\boldx) \\equiv (x_1-5)^2 + x_2^2 \\leq 25, \\\\\n& C_2(\\boldx) \\equiv (x_1-8)^2 + (x_2+3)^2 \\geq 7.7, \\\\\n& 0 \\leq x_1 \\leq 5, \\\\\n& 0 \\leq x_2 \\leq 3.\n\\end{array}\n\\end{equation}\n\n**Optimum**\n\nThe Pareto-optimal solutions are constituted by solutions \n$x_1^{\\ast}=x_2^{\\ast} \\in [0,3]$ and $x_1^{\\ast} \\in [3,5]$,\n$x_2^{\\ast}=3$. These solutions are marked by using bold \ncontinuous\ncurves. The addition of both constraints in the problem does not make any solution\nin the unconstrained Pareto-optimal front infeasible. \nThus, constraints may not introduce any additional difficulty\nin solving this problem.\n\n**Plot**\n\n\n```python\nfrom pymoo.factory import get_problem\nfrom pymoo.util.plotting import plot\n\nproblem = get_problem(\"bnh\")\nplot(problem.pareto_front(), no_fill=True)\n```\n", "meta": {"hexsha": "51cb4766ae7a57ad3470c22eb276a0e5538468aa", "size": 42646, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "doc/source/problems/multi/bnh.ipynb", "max_stars_repo_name": "gabicavalcante/pymoo", "max_stars_repo_head_hexsha": "1711ce3a96e5ef622d0116d6c7ea4d26cbe2c846", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2020-09-18T19:33:31.000Z", "max_stars_repo_stars_event_max_datetime": "2020-09-18T19:33:33.000Z", "max_issues_repo_path": "doc/source/problems/multi/bnh.ipynb", "max_issues_repo_name": "gabicavalcante/pymoo", "max_issues_repo_head_hexsha": "1711ce3a96e5ef622d0116d6c7ea4d26cbe2c846", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/source/problems/multi/bnh.ipynb", "max_forks_repo_name": "gabicavalcante/pymoo", "max_forks_repo_head_hexsha": "1711ce3a96e5ef622d0116d6c7ea4d26cbe2c846", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-03-31T08:19:13.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-31T08:19:13.000Z", "avg_line_length": 330.5891472868, "max_line_length": 39768, "alphanum_fraction": 0.9327252263, "converted": true, "num_tokens": 410, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9390248157222396, "lm_q2_score": 0.8962513703624558, "lm_q1q2_score": 0.8416022778954099}} {"text": "```python\nimport pandas as pd\nimport numpy as np\nimport scipy.stats as stats\nfrom statsmodels.stats.proportion import proportions_ztest\n```\n\n## Frequentist A/B testing - Example - Comparing two proportions\n\nA/B testing is essentially a simple randomized trial.\n\nWhen someone visits a website, they are randomly directed to one of two different landing pages. The purpose is to determine which page has a better conversion rate.\n\nThe key principle is that after a large number of visitors, the groups of people who visited the two pages are completely comparable in respect of all characteristics (age, gender, location etc). 
Consequenly, we can compare the two groups and obtain an unbiased assessment of which page has a better conversoin rate.\n\nBelow, we can see that Landing Page B has a higher conversion rate but is it statistical significant?\n\n\n```python\ndata = pd.DataFrame({\n 'landing_page': ['A', 'B'],\n 'not_converted': [4514, 4473],\n 'converted': [486, 527],\n 'conversion_rate':[486/(486 + 4514), 527/(527 + 4473)]\n})\ndata\n\n\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
|    | landing_page | not_converted | converted | conversion_rate |
|----|--------------|---------------|-----------|-----------------|
| 0  | A            | 4514          | 486       | 0.0972          |
| 1  | B            | 4473          | 527       | 0.1054          |
\n\n\n\n### Formulate hypothesis\n\nConversion rate can be thought of as the proportion of visitors that had an order. Thus, we have are comparing two proportions. Our hypothesis test becomes:\n\n- **Null Hypothesis:** there is no difference in proportions $p_a - p_b = 0$\n- **Alternative Hypothesis:** there is a difference in proportions $p_a - p_b \\ne 0$\n\n### Assumptions\n\n**1) Sample Size** \n* $n_a*\\hat{p_a}=486\\geq10$ \n* $n_a*(1-\\hat{p_a})=4515\\geq10$ \n* $n_b*\\hat{p_b}=527\\geq10$\n* $n_b*(1-\\hat{p_b})=4472\\geq10$ \n \n\n**2) Random Sample** \n \nBy design, our experiment uses a random sample\n\n### Test Statistic\n\nA test statistic is a single metric that can be used to evaluate the null hypothesis. A standard way to obtain this metric is to compute the z-score. This measures how many standard errors is our observe sample mean below or above the population mean\n\n$$ \\begin{align} z = \\frac{(\\hat{p_a}-\\hat{p_b}) - (p_a-p_b)}{SE(\\hat{p_a}-\\hat{p_b})} \\end{align} $$\n\n \n$\\hat{p_a}-\\hat{p_b}$: the sample difference in proportions \n$p_a-p_b$: the population difference in proportions \n$SE(p_a-p_b)$: the standard error of the sample difference in proportions \n\n\n$$\\begin{align*}\n& \\text{Standard error is defined} \\\\\n\\\\\n& SE(X)=\\frac{Var(x)}{\\sqrt{n_x}} \\\\\n\\\\\n\\\\ & \\text{Variance and covariance are defined} \\\\\n\\\\\n& Var(X) = E[X^2] - E[X]^2 \\\\\n& Cov(X, Y) = E[XY] - E[X]E[Y] \\\\\n\\\\\n\\\\ & \\text{Difference in variance between X and Y is defined} \\\\\n\\\\\n& Var(X - Y) = E[(X - Y)(X - Y)] - E[X - Y]^2 \\\\ \n& Var(X - Y) = E[X^2 - 2XY + Y^2] - (E[x] - E[y])^2 \\\\ \n& Var(X - Y) = E[X^2] - 2E[XY] + E[Y^2] - E[x]^2 + 2E[x]E[y] - E[y]^2 \\\\\n& Var(X - Y) = (E[X^2] - E[x]^2) + (E[Y^2] - E[y]^2) - 2(E[XY] - E[X]E[Y])\\\\\n& Var(X - Y) = Var(X) + Var(Y) -2Cov(X, Y) \\\\\n\\\\\n\\\\ & \\text{Groups are independent thereofore covariance is 0} \\\\\n\\\\\n& Var(X - Y) = Var(X) + Var(Y)\\\\\n\\\\\n\\\\ & \\text{Variance of a binomial proportion} \\\\\n\\\\\n& Var(p_a) = p_a (1 - p_a) \\\\\n\\\\\n\\\\ & \\text{Standard error of a binomial proportion} \\\\\n\\\\\n& SE(p_a) = \\frac{ p_a (1 - p_a)}{n_a}\n\\\\\n\\\\ & \\text{thus} \\\\\n\\\\\n& Var(p_a-p_b) = Var(p_a) + Var(p_b) \\\\\n& Var(p_a-p_b) = p_a(1-p_a) + p_b(1-p_b) \\\\\n& SE(p_a-p_b) = \\sqrt{\\frac{p_a(1-p_a)}{n_a} + \\frac{p_b(1-p_b)}{n_b}}\n\\\\\n\\\\ & \\text{Under the null: } p_a=p_b=p \\\\\n\\\\\n& SE(p_a-p_b) = \\sqrt{p(1-p)(\\frac{1}{n_a}+\\frac{1}{n_b})}\n\\end{align*}$$\n\n\n### P-Value and hypothesis test outcome\n\n\n```python\ndef ztest_proportion_two_samples(success_a, size_a, success_b, size_b, one_sided=False):\n \"\"\"\n A/B test for two proportions;\n given a success a trial size of group A and B compute\n its zscore and pvalue\n \n Parameters\n ----------\n success_a, success_b : int\n Number of successes in each group\n \n size_a, size_b : int\n Size, or number of observations in each group\n \n one_side: bool\n False if it is a two sided test\n \n Returns\n -------\n zscore : float\n test statistic for the two proportion z-test\n\n pvalue : float\n p-value for the two proportion z-test\n \"\"\"\n proportion_a = success_a/size_a\n proportion_b = success_b/size_b \n\n propotion = (success_a+success_b)/(size_a+size_b)\n se = propotion*(1-propotion)*(1/size_a+1/size_b)\n se = np.sqrt(se)\n \n z = (proportion_a-proportion_b)/se\n p_value = 1-stats.norm.cdf(abs(z))\n p_value *= 2-one_sided # if not one_sided: p *= 2\n \n return f\"z test statistic: {z}, p-value: 
{p_value}\"\n\nsuccess_a=486\nsize_a=486+4514\nsuccess_b=527\nsize_b=527+4473\n\nztest_proportion_two_samples(\n success_a=success_a,\n size_a=size_a,\n success_b=success_b,\n size_b=size_b,\n)\n```\n\n\n\n\n 'z test statistic: -1.3588507649479744, p-value: 0.17419388311717388'\n\n\n\nUnder the null that the conversion rate of the page A and page B are equal, we would observe this difference in conversion rate with a probability of 17.4%. Our threshold is typically set to 5% and thus the difference of the conversion rate we observe does not give us sufficient evidence to reject the null. \n \n**We fail to reject the null**\n", "meta": {"hexsha": "2974e909f2efa64d68d725c58823fd164114fefd", "size": 9741, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "statistics/ab_testing.ipynb", "max_stars_repo_name": "AndonisPavlidis/portfolio", "max_stars_repo_head_hexsha": "48de86ed33d703f17388e9e0f619ab225d4d6dde", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "statistics/ab_testing.ipynb", "max_issues_repo_name": "AndonisPavlidis/portfolio", "max_issues_repo_head_hexsha": "48de86ed33d703f17388e9e0f619ab225d4d6dde", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "statistics/ab_testing.ipynb", "max_forks_repo_name": "AndonisPavlidis/portfolio", "max_forks_repo_head_hexsha": "48de86ed33d703f17388e9e0f619ab225d4d6dde", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.7287066246, "max_line_length": 325, "alphanum_fraction": 0.4884508777, "converted": true, "num_tokens": 1836, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9518632247867715, "lm_q2_score": 0.8840392848011834, "lm_q1q2_score": 0.8414844844690457}} {"text": "# Laborat\u00f3rio 6: Pesca\n\n### Referente ao cap\u00edtulo 11\n\nSuponha que uma popula\u00e7\u00e3o de peixes \u00e9 introduzida em um tanque artificial ou em uma regi\u00e3o de \u00e1gua com redes. Seja $x(t)$ o n\u00edvel de peixes escalado em $t$, com $x(0) = x_0 > 0$. Os peixes inicialmente s\u00e3o pequenos e tem massa m\u00e9dia um valor quase nula: trateremos como $0$. Ap\u00f3s, a massa m\u00e9dia \u00e9 uma fun\u00e7\u00e3o \n$$\nf_{massa}(t) = k\\frac{t}{t+1},\n$$\nonde $k$ \u00e9 o m\u00e1ximo de massa possivelmente atingido. Consideraremos $T$ suficientemente pequeno de forma que n\u00e3o haja reprodu\u00e7\u00e3o de peixes. Seja $u(t)$ a taxa de colheita e $m$ a taxa de morte natural do peixe. Queremos maximizar a massa apanhada no intervalo, mas minimizando os custos envolvidos. Assim o problema \u00e9 \n\n$$\n\\max_u \\int_0^T Ak\\frac{t}{t+1}x(t)u(t) - u(t)^2 dt, A \\ge 0\n$$\n$$\n\\text{sujeito a }x'(t) = -(m + u(t))x(t), x(0) = x_0,\n$$\n$$\n0 \\le u(t) \\le M,\n$$\nonde $M$ \u00e9 o limite f\u00edsico da colheita. 
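\n\nAntes de escrever as condi\u00e7\u00f5es para este problema, registramos, apenas como esbo\u00e7o de refer\u00eancia (a nota\u00e7\u00e3o gen\u00e9rica $f$, $g$ n\u00e3o faz parte do enunciado original), o modelo do Princ\u00edpio do M\u00e1ximo de Pontryagin usado a seguir: para maximizar $\\int_0^T f(t,x,u)\\,dt$ sujeito a $x'(t) = g(t,x,u)$ com estado final livre, tomamos\n\n$$\nH = f(t,x,u) + \\lambda(t)g(t,x,u), \\qquad \\lambda'(t) = -\\frac{\\partial H}{\\partial x}, \\qquad \\lambda(T) = 0,\n$$\n\ne $u^*(t)$ maximiza $H$ em cada instante, respeitando os limites do controle.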
\n\n## Condi\u00e7\u00f5es Necess\u00e1rias \n\n### Hamiltoniano\n\n$$\nH = Ak\\frac{t}{t+1}x(t)u(t) - u(t)^2 - \\lambda(t)\\left(m + u(t)\\right)x(t)\n$$\n\n### Equa\u00e7\u00e3o adjunta \n\n$$\n\\lambda '(t) = - Ak\\frac{t}{t+1}u(t) + \\lambda(t)\\left(m + u(t)\\right)\n$$\n\n### Condi\u00e7\u00e3o de transversalidade \n\n$$\n\\lambda(T) = 0\n$$\n\n### Condi\u00e7\u00e3o de otimalidade\n\n$$\nH_u = Ak\\frac{t}{t+1}x(t) - 2u(t) - \\lambda(t)x(t)\n$$\n\n$$\nH_u < 0 \\implies u^*(t) = 0 \\implies x(t)\\left(Ak\\frac{t}{t+1} - \\lambda(t)\\right) < 0 \n$$\n\n$$\nH_u = 0 \\implies 0 \\le u^*(t) = 0.5x(t)\\left(Ak\\frac{t}{t+1} - \\lambda(t)\\right) \\le M\n$$\n\n$$\nH_u > 0 \\implies u^*(t) = M \\implies 0.5x(t)\\left(Ak\\frac{t}{t+1} - \\lambda(t)\\right) > M\n$$\n\nAssim $u^*(t) = \\min\\left\\{M, \\max\\left\\{0, 0.5x(t)\\left(Ak\\frac{t}{t+1} - \\lambda(t)\\right)\\right\\}\\right\\}$\n\n### Importanto as bibliotecas\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.integrate import solve_ivp\nimport sympy as sp\n\nimport sys \nsys.path.insert(0, '../pyscripts/')\n\nfrom optimal_control_class import OptimalControl\n```\n\n### Usando a biblitoca sympy \n\n\n```python\nt_sp, x_sp,u_sp,lambda_sp, k_sp, A_sp, m_sp = sp.symbols('t x u lambda k A m') \nH = A_sp*k_sp*(t_sp/(t_sp+1))*x_sp*u_sp - u_sp**2 - lambda_sp*(m_sp + u_sp)*x_sp\nH\n```\n\n\n\n\n$\\displaystyle \\frac{A k t u x}{t + 1} - \\lambda x \\left(m + u\\right) - u^{2}$\n\n\n\n\n```python\nprint('H_x = {}'.format(sp.diff(H,x_sp)))\nprint('H_u = {}'.format(sp.diff(H,u_sp)))\nprint('H_lambda = {}'.format(sp.diff(H,lambda_sp)))\n```\n\n H_x = A*k*t*u/(t + 1) - lambda*(m + u)\n H_u = A*k*t*x/(t + 1) - lambda*x - 2*u\n H_lambda = -x*(m + u)\n\n\nResolvendo para $H_u = 0$\n\n\n```python\neq = sp.Eq(sp.diff(H,u_sp), 0)\nsp.solve(eq,u_sp)\n```\n\n\n\n\n [x*(A*k*t - lambda*t - lambda)/(2*(t + 1))]\n\n\n\nAqui podemos descrever as fun\u00e7\u00f5es necess\u00e1rias para a classe. \n\n\n```python\nparameters = {'A': None, 'k': None, 'm': None, 'M': None}\n\ndiff_state = lambda t, x, u, par: -x*(par['m'] + u)\ndiff_lambda = lambda t, x, u, lambda_, par: - par['A']*par['k']*t*u/(t + 1) + lambda_*(par['m'] + u)\nupdate_u = lambda t, x, lambda_, par: np.minimum(par['M'], np.maximum(0, 0.5*x*(par['A']*par['k']*t - lambda_*t - lambda_)/(t + 1)))\n```\n\n## Aplicando a classe ao exemplo \n\nVamos fazer algumas exeperimenta\u00e7\u00f5es. Sinta-se livre para variar os par\u00e2metros. Nesse caso passaremos os limites como par\u00e2metro do `solve`. \n\n\n```python\nproblem = OptimalControl(diff_state, diff_lambda, update_u)\n```\n\n\n```python\nx0 = 0.4\nT = 10\nparameters['A'] = 5\nparameters['k'] = 10\nparameters['m'] = 0.2\nparameters['M'] = 1\n```\n\n\n```python\nt,x,u,lambda_ = problem.solve(x0, T, parameters, bounds = [(0, parameters['M'])])\nax = problem.plotting(t,x,u,lambda_)\nfor i in range(3): \n ax[i].set_xlabel('Semanas')\nplt.show()\n```\n\nA estrat\u00e9gia \u00f3tima nesse caso inicia em $0$ e logo aumenta muito rapidamente, com um decl\u00ednio posterior suave. A popula\u00e7\u00e3o \u00e9 praticamente extinta no per\u00edodo considerado. O limite superior n\u00e3o teve efeito, dado que foi bem alto. 
Por isso, podemos testar com outros valores.\n\n\n```python\nparameters['M'] = 0.4\nt,x,u,lambda_ = problem.solve(x0, T, parameters, bounds = [(0, parameters['M'])])\nax = problem.plotting(t,x,u,lambda_)\nfor i in range(3): \n ax[i].set_xlabel('Semanas')\nplt.show()\n```\n\nSugerimos que experimente a varia\u00e7\u00e3o dos outros par\u00e2metros. \n\n## Experimenta\u00e7\u00e3o \n\n\n```python\n#N0 = 1\n#T = 5\n#parameters['r'] = 0.3\n#parameters['a'] = 10\n#parameters['delta'] = 0.4\n#\n#t,x,u,lambda_ = problem.solve(N0, T, parameters)\n#roblem.plotting(t,x,u,lambda_)\n```\n\n### Este \u00e9 o final do notebook\n", "meta": {"hexsha": "de5daf349f551fdae6fb17d0b0b68886ae9b3bb0", "size": 115173, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/Laboratory6.ipynb", "max_stars_repo_name": "lucasmoschen/optimal-control-biological", "max_stars_repo_head_hexsha": "642a12b6a3cb351429018120e564b31c320c44c5", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-11-03T16:27:39.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-03T16:27:39.000Z", "max_issues_repo_path": "notebooks/.ipynb_checkpoints/Laboratory6-checkpoint.ipynb", "max_issues_repo_name": "lucasmoschen/optimal-control-biological", "max_issues_repo_head_hexsha": "642a12b6a3cb351429018120e564b31c320c44c5", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/.ipynb_checkpoints/Laboratory6-checkpoint.ipynb", "max_forks_repo_name": "lucasmoschen/optimal-control-biological", "max_forks_repo_head_hexsha": "642a12b6a3cb351429018120e564b31c320c44c5", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 332.8699421965, "max_line_length": 53528, "alphanum_fraction": 0.9329617185, "converted": true, "num_tokens": 1654, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9173026618464795, "lm_q2_score": 0.917302657890151, "lm_q1q2_score": 0.841444169801486}} {"text": "```python\n%matplotlib inline\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy import stats\n```\n\n# Monte Carlo integration\n\n## Integrals and averates\n\nIn general, we want to estimate integrals of a function over a distribution using the following simple strategy:\n \n- Draw a probability from the target probability distribution\n- Multiply the probability with the function at that value\n- Repeat and average\n\nThe trick is in knowing how to draw samples from the target probability distribution.\n\n### A Simple Method: Rejection sampling\n\nSuppose we want random samples from some distribution for which we can calculate the PDF at a point, but lack a direct way to generate random deviates from. One simple idea that is also used in MCMC is rejection sampling - first generate a sample from a distribution from which we can draw samples (e.g. 
uniform or normal), and then accept or reject this sample with some probability (see figure).\n\n#### Example: Random samples from the unit circle using rejection sampling\n\n\n```python\nx = np.random.uniform(-1, 1, (10000, 2))\nx = x[np.sum(x**2, axis=1) < 1]\nplt.scatter(x[:, 0], x[:,1], s=1)\nplt.axis('square')\npass\n```\n\n#### Example: Rejection sampling from uniform distribution\n\nWe want to draw samples from a Cauchy distribution restricted to (-4, 4). We could choose a more efficient sampling/proposal distribution than the uniform, but this is just to illustrate the concept.\n\n\n```python\nx = np.linspace(-4, 4)\n\ndf = 10\ndist = stats.cauchy()\nupper = dist.pdf(0)\n```\n\n\n```python\nplt.plot(x, dist.pdf(x))\nplt.axhline(upper, color='grey')\npx = 1.0\nplt.arrow(px,0,0,dist.pdf(1.0)-0.01, linewidth=1,\n head_width=0.2, head_length=0.01, fc='g', ec='g')\nplt.arrow(px,upper,0,-(upper-dist.pdf(px)-0.01), linewidth=1, \n head_width=0.3, head_length=0.01, fc='r', ec='r')\nplt.text(px+.25, 0.2, 'Reject', fontsize=16)\nplt.text(px+.25, 0.01, 'Accept', fontsize=16)\nplt.axis([-4,4,0,0.4])\nplt.title('Rejection sampling concepts', fontsize=20)\npass\n```\n\n\n```python\nn = 100000\n# generate from sampling distribution\nu = np.random.uniform(-4, 4, n)\n# accept-reject criterion for each point in sampling distribution\nr = np.random.uniform(0, upper, n)\n# accepted points will come from target (Cauchy) distribution\nv = u[r < dist.pdf(u)]\n\nplt.plot(x, dist.pdf(x), linewidth=2)\n\n# Plot scaled histogram \nfactor = dist.cdf(4) - dist.cdf(-4)\nhist, bin_edges = np.histogram(v, bins=100, normed=True)\nbin_centers = (bin_edges[:-1] + bin_edges[1:]) / 2.\nplt.step(bin_centers, factor*hist, linewidth=2)\n\nplt.axis([-4,4,0,0.4])\nplt.title('Histogram of accepted samples', fontsize=20)\npass\n```\n\n## Simple Monte Carlo integration\n\nThe basic idea of Monte Carlo integration is very simple and only requires elementary statistics. Suppose we want to find the value of \n$$\nI = \\int_a^b f(x) dx\n$$\nin some region with volume $V$. Monte Carlo integration estimates this integral by estimating the fraction of random points that fall below $f(x)$ multiplied by $V$. \n\n\nIn a statistical context, we use Monte Carlo integration to estimate the expectation\n$$\nE[g(X)] = \\int_X g(x) p(x) dx\n$$\n\nwith\n\n$$\n\\bar{g_n} = \\frac{1}{n} \\sum_{i=1}^n g(x_i)\n$$\nwhere $x_i \\sim p$ is a draw from the density $p$.\n\nWe can estimate the Monte Carlo variance of the approximation as\n$$\nv_n = \\frac{1}{n^2} \\sum_{o=1}^n (g(x_i) - \\bar{g_n})^2)\n$$\n\nAlso, from the Central Limit Theorem,\n\n$$\n\\frac{\\bar{g_n} - E[g(X)]}{\\sqrt{v_n}} \\sim \\mathcal{N}(0, 1)\n$$\n\nThe convergence of Monte Carlo integration is $\\mathcal{0}(n^{1/2})$ and independent of the dimensionality. Hence Monte Carlo integration generally beats numerical integration for moderate- and high-dimensional integration since numerical integration (quadrature) converges as $\\mathcal{0}(n^{d})$. 
Even for low dimensional problems, Monte Carlo integration may have an advantage when the volume to be integrated is concentrated in a very small region and we can use information from the distribution to draw samples more often in the region of importance.\n\nAn elementary, readable description of Monte Carlo integration and variance reduction techniques can be found [here](https://www.cs.dartmouth.edu/~wjarosz/publications/dissertation/appendixA.pdf).\n\n## Intuition behind Monte Carlo integration\n\nWe want to find some integral \n\n$$I = \\int{f(x)} \\, dx$$\n\nConsider the expectation of a function $g(x)$ with respect to some distribution $p(x)$. By definition, we have\n\n$$\nE[g(x)] = \\int{g(x) \\, p(x) \\, dx}\n$$\n\nIf we choose $g(x) = f(x)/p(x)$, then we have\n\n$$\n\\begin{align}\nE[g(x)] &= \\int{\\frac{f(x}{p(x)} \\, p(x) \\, dx} \\\\\n&= \\int{f(x) dx} \\\\\n&= I\n\\end{align}\n$$\n\nBy the law of large numbers, the average converges on the expectation, so we have\n\n$$\nI \\approx \\bar{g_n} = \\frac{1}{n} \\sum_{i=1}^n g(x_i)\n$$\n\nIf $f(x)$ is a proper integral (i.e. bounded), and $p(x)$ is the uniform distribution, then $g(x) = f(x)$ and this is known as ordinary Monte Carlo. If the integral of $f(x)$ is improper, then we need to use another distribution with the same support as $f(x)$.\n\n**Example: Estimating $\\pi$**\n\nWe have a function \n\n$$\nf(x, y) = \n\\begin{cases}\n1 & \\text{if}\\ x^2 + y^2 \\le 1 \\\\\n0 & \\text{otherwise}\n\\end{cases}\n$$\n\nwhose integral is\n$$\nI = \\int_{-1}^{1} \\int_{-1}^{1} f(x,y) dx dy = \\pi\n$$\n\nSo a Monte Carlo estimate of $\\pi$ is \n$$\nQ = 4 \\sum_{i=1}^{N} f(x, y)\n$$\n\nif we sample $p$ from the standard uniform distribution in $\\mathbb{R}^2$.\n\n\n```python\nfrom scipy import stats\n```\n\n\n```python\nx = np.linspace(-3,3,100)\ndist = stats.norm(0,1)\na = -2\nb = 0\nplt.plot(x, dist.pdf(x))\nplt.fill_between(np.linspace(a,b,100), dist.pdf(np.linspace(a,b,100)), alpha=0.5)\nplt.text(b+0.1, 0.1, 'p=%.4f' % (dist.cdf(b) - dist.cdf(a)), fontsize=14)\npass\n```\n\n#### Using quadrature\n\n\n```python\nfrom scipy.integrate import quad\n```\n\n\n```python\ny, err = quad(dist.pdf, a, b)\ny\n```\n\n\n\n\n 0.47724986805182085\n\n\n\n#### Simple Monte Carlo integration\n\nIf we can sample directly from the target distribution $N(0,1)$\n\n\n```python\nn = 10000\nx = dist.rvs(n)\nnp.sum((a < x) & (x < b))/n\n```\n\n\n\n\n 0.4738\n\n\n\nIf we cannot sample directly from the target distribution $N(0,1)$ but can evaluate it at any point. \n\nRecall that $g(x) = \\frac{f(x)}{p(x)}$. Since $p(x)$ is $U(a, b)$, $p(x) = \\frac{1}{b-a}$. So we want to calculate\n\n$$\n\\frac{1}{n} \\sum_{i=1}^n (b-a) f(x)\n$$\n\n\n```python\nn = 10000\nx = np.random.uniform(a, b, n)\nnp.mean((b-a)*dist.pdf(x))\n```\n\n\n\n\n 0.48069595176138846\n\n\n\n## Intuition for error rate\n\nWe will just work this out for a proper integral $f(x)$ defined in the unit cube and bounded by $|f(x)| \\le 1$. Draw a random uniform vector $x$ in the unit cube. Then\n\n$$\n\\begin{align}\nE[f(x_i)] &= \\int{f(x) p(x) dx} = I \\\\\n\\text{Var}[f(x_i)] &= \\int{(f(x_i) - I )^2 p(x) \\, dx} \\\\\n&= \\int{f(x)^2 \\, p(x) \\, dx} - 2I \\int(f(x) \\, p(x) \\, dx + I^2 \\int{p(x) \\, dx} \\\\\n&= \\int{f(x)^2 \\, p(x) \\, dx} + I^2 \\\\\n& \\le \\int{f(x)^2 \\, p(x) \\, dx} \\\\\n& \\le \\int{p(x) \\, dx} = 1\n\\end{align}\n$$\n\nNow consider summing over many such IID draws $S_n = f(x_1) + f(x_2) + \\cdots + f(x_n)$. 
We have\n\n$$\n\\begin{align}\nE[S_n] &= nI \\\\\n\\text{Var}[S_n] & \\le n\n\\end{align}\n$$\n\nand as expected, we see that $I \\approx S_n/n$. From Chebyshev's inequality,\n\n$$\n\\begin{align}\nP \\left( \\left| \\frac{s_n}{n} - I \\right| \\ge \\epsilon \\right) &= \nP \\left( \\left| s_n - nI \\right| \\ge n \\epsilon \\right) & \\le \\frac{\\text{Var}[s_n]}{n^2 \\epsilon^2} & \\le\n\\frac{1}{n \\epsilon^2} = \\delta\n\\end{align}\n$$\n\nSuppose we want 1% accuracy and 99% confidence - i.e. set $\\epsilon = \\delta = 0.01$. The above inequality tells us that we can achieve this with just $n = 1/(\\delta \\epsilon^2) = 1,000,000$ samples, regardless of the data dimensionality.\n\n### Example\n\nWe want to estimate the following integral $\\int_0^1 e^x dx$. \n\n\n```python\nx = np.linspace(0, 1, 100)\nplt.plot(x, np.exp(x))\nplt.xlim([0,1])\nplt.ylim([0, np.exp(1)])\npass\n```\n\n#### Analytic solution\n\n\n```python\nfrom sympy import symbols, integrate, exp\n\nx = symbols('x')\nexpr = integrate(exp(x), (x,0,1))\nexpr.evalf()\n```\n\n\n\n\n 1.71828182845905\n\n\n\n#### Using quadrature\n\n\n```python\nfrom scipy import integrate\n\ny, err = integrate.quad(exp, 0, 1)\ny\n```\n\n\n\n\n 1.7182818284590453\n\n\n\n#### Monte Carlo integration\n\n\n```python\nfor n in 10**np.array([1,2,3,4,5,6,7,8]):\n x = np.random.uniform(0, 1, n)\n sol = np.mean(np.exp(x))\n print('%10d %.6f' % (n, sol))\n```\n\n 10 1.847075\n 100 1.845910\n 1000 1.731000\n 10000 1.727204\n 100000 1.719337\n 1000000 1.718142\n 10000000 1.718240\n 100000000 1.718388\n\n\n### Monitoring variance in Monte Carlo integration\n\nWe are often interested in knowing how many iterations it takes for Monte Carlo integration to \"converge\". To do this, we would like some estimate of the variance, and it is useful to inspect such plots. One simple way to get confidence intervals for the plot of Monte Carlo estimate against number of iterations is simply to do many such simulations.\n\nFor the example, we will try to estimate the function (again)\n\n$$\nf(x) = x \\cos 71 x + \\sin 13x, \\ \\ 0 \\le x \\le 1\n$$\n\n\n```python\ndef f(x):\n return x * np.cos(71*x) + np.sin(13*x)\n```\n\n\n```python\nx = np.linspace(0, 1, 100)\nplt.plot(x, f(x))\npass\n```\n\n#### Single MC integration estimate\n\n\n```python\nn = 100\nx = f(np.random.random(n))\ny = 1.0/n * np.sum(x)\ny\n```\n\n\n\n\n 0.03103616434230248\n\n\n\n#### Using multiple independent sequences to monitor convergence\n\nWe vary the sample size from 1 to 100 and calculate the value of $y = \\sum{x}/n$ for 1000 replicates. We then plot the 2.5th and 97.5th percentile of the 1000 values of $y$ to see how the variation in $y$ changes with sample size. 
The blue lines indicate the 2.5th and 97.5th percentiles, and the red line a sample path.\n\n\n```python\nn = 100\nreps = 1000\n\nx = f(np.random.random((n, reps)))\ny = 1/np.arange(1, n+1)[:, None] * np.cumsum(x, axis=0)\nupper, lower = np.percentile(y, [2.5, 97.5], axis=1)\n```\n\n\n```python\nplt.plot(np.arange(1, n+1), y, c='grey', alpha=0.02)\nplt.plot(np.arange(1, n+1), y[:, 0], c='red', linewidth=1);\nplt.plot(np.arange(1, n+1), upper, 'b', np.arange(1, n+1), lower, 'b')\npass\n```\n\n#### Using bootstrap to monitor convergence\n\nIf it is too expensive to do 1000 replicates, we can use a bootstrap instead.\n\n\n```python\nxb = np.random.choice(x[:,0], (n, reps), replace=True)\nyb = 1/np.arange(1, n+1)[:, None] * np.cumsum(xb, axis=0)\nupper, lower = np.percentile(yb, [2.5, 97.5], axis=1)\n```\n\n\n```python\nplt.plot(np.arange(1, n+1)[:, None], yb, c='grey', alpha=0.02)\nplt.plot(np.arange(1, n+1), yb[:, 0], c='red', linewidth=1)\nplt.plot(np.arange(1, n+1), upper, 'b', np.arange(1, n+1), lower, 'b')\npass\n```\n\n## Variance Reduction\n\nWith independent samples, the variance of the Monte Carlo estimate is \n\n\n$$\n\\begin{align}\n\\text{Var}[\\bar{g_n}] &= \\text{Var} \\left[ \\frac{1}{N}\\sum_{i=1}^{N} \\frac{f(x_i)}{p(x_i)} \\right] \\\\\n&= \\frac{1}{N^2} \\sum_{i=1}^{N} \\text{Var} \\left[ \\frac{f(x_i)}{p(x_i)} \\right] \\\\\n&= \\frac{1}{N^2} \\sum_{i=1}^{N} \\text{Var}[Y_i] \\\\\n&= \\frac{1}{N} \\text{Var}[Y_i]\n\\end{align}\n$$\n\nwhere $Y_i = f(x_i)/p(x_i)$. In general, we want to make $\\text{Var}[\\bar{g_n}]$ as small as possible for the same number of samples. There are several variance reduction techniques (also colorfully known as Monte Carlo swindles) that have been described - we illustrate the change of variables and importance sampling techniques here.\n\n### Change of variables\n\nThe Cauchy distribution is given by \n$$\nf(x) = \\frac{1}{\\pi (1 + x^2)}, \\ \\ -\\infty \\lt x \\lt \\infty \n$$\n\nSuppose we want to integrate the tail probability $P(X > 3)$ using Monte Carlo. One way to do this is to draw many samples form a Cauchy distribution, and count how many of them are greater than 3, but this is extremely inefficient.\n\n#### Only 10% of samples will be used\n\n\n```python\nimport scipy.stats as stats\n\nh_true = 1 - stats.cauchy().cdf(3)\nh_true\n```\n\n\n\n\n 0.10241638234956674\n\n\n\n\n```python\nn = 100\n\nx = stats.cauchy().rvs(n)\nh_mc = 1.0/n * np.sum(x > 3)\nh_mc, np.abs(h_mc - h_true)/h_true\n```\n\n\n\n\n (0.13, 0.26932817794994063)\n\n\n\n#### A change of variables lets us use 100% of draws\n\nWe are trying to estimate the quantity\n\n$$\n\\int_3^\\infty \\frac{1}{\\pi (1 + x^2)} dx\n$$\n\nUsing the substitution $y = 3/x$ (and a little algebra), we get\n\n$$\n\\int_0^1 \\frac{3}{\\pi(9 + y^2)} dy\n$$\n\nHence, a much more efficient MC estimator is \n\n$$\n\\frac{1}{n} \\sum_{i=1}^n \\frac{3}{\\pi(9 + y_i^2)}\n$$\n\nwhere $y_i \\sim \\mathcal{U}(0, 1)$.\n\n\n```python\ny = stats.uniform().rvs(n)\nh_cv = 1.0/n * np.sum(3.0/(np.pi * (9 + y**2)))\nh_cv, np.abs(h_cv - h_true)/h_true\n```\n\n\n\n\n (0.10219440906830025, 0.002167361082027339)\n\n\n\n### Importance sampling\n\nSuppose we want to evaluate\n\n$$\nI = \\int{h(x)\\,p(x) \\, dx}\n$$\n\nwhere $h(x)$ is some function and $p(x)$ is the PDF of $y$. 
If it is hard to sample directly from $p$, we can introduce a new density function $q(x)$ that is easy to sample from, and write\n\n$$\nI = \\int{h(x)\\, p(x)\\, dx} = \\int{h(x)\\, \\frac{p(x)}{q(x)} \\, q(x) \\, dx}\n$$\n\nIn other words, we sample from $h(y)$ where $y \\sim q$ and weight it by the likelihood ratio $\\frac{p(y)}{q(y)}$, estimating the integral as\n\n$$\n\\frac{1}{n}\\sum_{i=1}^n \\frac{p(y_i)}{q(y_i)} h(y_i)\n$$\n\nSometimes, even if we can sample from $p$ directly, it is more efficient to use another distribution.\n\n#### Example\n\nSuppose we want to estimate the tail probability of $\\mathcal{N}(0, 1)$ for $P(X > 5)$. Regular MC integration using samples from $\\mathcal{N}(0, 1)$ is hopeless since nearly all samples will be rejected. However, we can use the exponential density truncated at 5 as the importance function and use importance sampling. Note that $h$ here is simply the identify function.\n\n\n```python\nx = np.linspace(4, 10, 100)\nplt.plot(x, stats.expon(5).pdf(x))\nplt.plot(x, stats.norm().pdf(x))\npass\n```\n\n#### Expected answer\n\nWe expect about 3 draws out of 10,000,000 from $\\mathcal{N}(0, 1)$ to have a value greater than 5. Hence simply sampling from $\\mathcal{N}(0, 1)$ is hopelessly inefficient for Monte Carlo integration.\n\n\n```python\n%precision 10\n```\n\n\n\n\n '%.10f'\n\n\n\n\n```python\nv_true = 1 - stats.norm().cdf(5)\nv_true\n```\n\n\n\n\n 2.866515719235352e-07\n\n\n\n#### Using direct Monte Carlo integration\n\n\n```python\nn = 10000\ny = stats.norm().rvs(n)\nv_mc = 1.0/n * np.sum(y > 5)\n# estimate and relative error\nv_mc, np.abs(v_mc - v_true)/v_true \n```\n\n\n\n\n (0.0, 1.0)\n\n\n\n#### Using importance sampling\n\n\n```python\nn = 10000\ny = stats.expon(loc=5).rvs(n)\nv_is = 1.0/n * np.sum(stats.norm().pdf(y)/stats.expon(loc=5).pdf(y))\n# estimate and relative error\nv_is, np.abs(v_is- v_true)/v_true\n```\n\n\n\n\n (2.8290057563382236e-07, 0.01308555981236137)\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "2a65b070bb44ff3be255311ec10aa8736a55fea6", "size": 285813, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/S14D_Monte_Carlo_Integration.ipynb", "max_stars_repo_name": "cjuracek/sta-663-2020", "max_stars_repo_head_hexsha": "9f0e81e783000b1485951322561b3e47acef5072", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 47, "max_stars_repo_stars_event_min_datetime": "2020-01-08T21:45:54.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-25T09:25:59.000Z", "max_issues_repo_path": "notebooks/S14D_Monte_Carlo_Integration.ipynb", "max_issues_repo_name": "cjuracek/sta-663-2020", "max_issues_repo_head_hexsha": "9f0e81e783000b1485951322561b3e47acef5072", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/S14D_Monte_Carlo_Integration.ipynb", "max_forks_repo_name": "cjuracek/sta-663-2020", "max_forks_repo_head_hexsha": "9f0e81e783000b1485951322561b3e47acef5072", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 54, "max_forks_repo_forks_event_min_datetime": "2020-01-08T21:46:05.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-15T05:04:00.000Z", "avg_line_length": 254.5084594835, "max_line_length": 79960, "alphanum_fraction": 0.9216760609, "converted": true, "num_tokens": 4819, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9032942067038784, "lm_q2_score": 0.9314625022115511, "lm_q1q2_score": 0.8413846820095926}} {"text": "# Chain Rules and Total Probability\n\nIn this section, we introduce techniques that use conditional probability to decompose events. The goal is to express unknown probabilities of events in terms of probabilities that we already know. \n\n## Using Conditional Probability to Decompose Events: Part 1 -- Chain Rules\n\nFrom the definition of conditional probability, we can write $P(A|B)$ as\n\\begin{align}\n P(A|B)&= \\frac{P(A \\cap B)}{P(B)} \\\\\n \\Rightarrow P(A \\cap B)&= P(A|B)P(B), \n\\end{align}\n\nand we can write $P(B|A)$ as\n\n\\begin{align}\n P(B|A)&=\\frac{P(A \\cap B)}{P(A)} \\\\\n \\Rightarrow P(A \\cap B)&= P(B|A)P(A).\n\\end{align}\n\nAfter manipulating the expressions as shown, we get two *different* formula for expressing $P(A \\cap B)$. These are **chain rules** for the probability of the intersection of two events. Such rules are often used when:\n* Two events are dependent on each other, but the relation is simple if the outcome of one of the experiments is known.\n* The events are at two different point in a system, such as the input and output of a system.\n\n\n**Example**\n\nA simple example of the former is in card games. Two cards are drawn (without replacement) from a deck of 52 cards (without jokers). What is the probability that they are both Aces? Let $A_i$ be the event that the card on draw $i$ is an Ace. Then the most natural way to apply the chain rule is to write\n\n$$\nP(A_1 \\cap A_2) = P(A_2 | A_1) P(A_1).\n$$\n\nThe probability of getting an Ace in draw 1 is 4/52=1/13 because there are 4 Aces in the deck of 52 cards. The probability of getting an Ace on the second draw *given that the first draw was an Ace* is 3/51 = 1/17 because after the first draw, there are 3 Aces left in the remaining deck of 51 cards. Thus,\n\n$$\nP(A_1 \\cap A_2) = P(A_2 | A_1) P(A_1) = \\left( \\frac{1}{17} \\right) \\left( \\frac{1}{13} \\right) = \\frac{1}{221}\n$$\n\nAs a check, we can compare with a solution using combinatorics. There are \n\n$$\n\\binom{4}{2} = 6\n$$\nways to choose the two Aces from the four total Aces. There are \n\n$$\n\\binom{52}{2} = \\frac{ 52!}{50! 2!} = 1326\n$$\nways to choose two cards from 52. So,\n\n$$\nP( A_1 \\cap A_2) = \\frac{6}{1326} = \\frac{1}{221},\n$$\nwhich matches our answer using conditional probability.\n\nThe solution using conditional probability is usually much more intuitive for learners who are new to probability, but being able to use both techniques is a powerful method for checking your work\n\n**To be added: Question on probability of getting two face cards (JQK) on consecutive draws from a deck of cards. Question on getting defective computers when sitting down at random computers in a lab.**\n\n**Example** \n\nFor the Magician's Coin problem, what is the probability of getting the Fair coin and it coming up heads on the first flip? This is an example of a system where there is an input that affects the future outputs. In this case, the input is the choice of coin. When we have such problems, we usually will need to decompose them in terms of the probabilities of the input and the conditional probabilities of the output given the input. Let $H_i$ denote the event that the coin comes up heads on flip $i$. Let $F$ be the event that the fair coin was chosen. 
We are looking for $P(F \\cap H_1)$, which we can write as \n\n$$\nP(F \\cap H_1) = P(H_1 | F) P(F).\n$$\n\nIf there is one Fair coin and one two-headed coin, $P(F) =1/2$. Given that the coin is Fair, $P(H_1|F) = 1/2$. So,\n\n$$\nP(F \\cap H_1) = \\left( \\frac 1 2 \\right) \\left( \\frac 1 2 \\right) = \\frac 1 4.\n$$\n\nNote that it is generally **not helpful to write the probability using the other form of the chain rule:**\n\n$$\nP(F \\cap H_1) = P(F| H_1) P(H_1).\n$$\n\nWe do not know $P(H_1)$ nor $P(F|H_1)$. Thus, although the expression is valid mathematically, it is not helpful in solving this problem because it depends on probabilities that cannot be easily inferred from the problem setup.\n\n**To be added:Question on probability of getting the Fair coin and heads on first two flips.**\n\nThe chain rule can be easily generalized to more than two events. The easiest way is to write probabilities in terms of conditional probabilities that are expressed as fractions (as in the definition of probability), such that the denumenator of one fraction cancels with the numerator of the next fraction to make sure the expression is not changes when it is rewritten. This will make more sense with an example for rewriting the probability of the intersection of 3 events ($A$, $B$, and $C$):\n\\begin{align}\n P(A \\cap B \\cap C) &= \\frac{P(A \\cap B \\cap C)} {} \\cdot\n \\frac{\\hspace{4em} }{} \\cdot \\frac{ \\hspace{3em}\\mbox{ }}{} \\\\\n &\\\\\n &= \\frac{P(A \\cap B \\cap C)} {P(B \\cap C)} \\cdot\n \\frac{P(B \\cap C)}{\\mbox{ }} \\cdot \\frac{ \\hspace{3em}\\mbox{ }}{} \\\\\n &= \\frac{P(A \\cap B \\cap C)} {P(B \\cap C)} \\cdot\n \\frac{P(B \\cap C)}{P(C)} \\cdot \\frac{ P(C) }{1} \\\\\n &= P(A|B \\cap C) P(B | C) P(C)\n\\end{align}\n\nThis decomposition assumes we know the probability of $A$ given that $B$ and $C$ have occurred and we know the probability of $B$ given $C$. Such dependence occurs naturally in many systems, but the particular decomposition will depend on what we know about these probabilities. We could just have easily written\n\\begin{align}\n P(A \\cap B \\cap C) \n &= P(C|A \\cap B) P(B | A) P(A)\n\\end{align}\n\n\n\n\n## Using Conditional Probability to Decompose Events: Part 2 -- Partitions, and Total Probability\n\nWe previously introduced the concept of partitions in {doc}`Partitions<../03-first-data/partitions>` as a way to take a collection of data and break it into separate, disjoint groups. Now, we are ready to apply this concept to events, which are sets of outcomes. In particular, we will most often partition the sample space, $S$: \n\n\n````{panels}\nDEFINITION\n^^^\npartition (of the sample space)\n: A collection of sets $A_1, A_2, \\ldots$ *partitions*\n the sample space $S$ iff\n \n \\begin{align*}\n S &= \\bigcup_i A_i, \\mbox{ and } \n A_i \\cap A_j &= \\emptyset, i \\ne j. \n \\end{align*}\n\n````\n\n$\\left\\{A_i \\right\\}$ is also said to be a *partition* of $S$. For example, the collection of disjoint sets $A_0 A_1, \\ldots, A_7$ shown in {numref}`sample-space-partition` is a partition for $S$:\n\n:::{figure-md} sample-space-partition\n\n\nExample partition of the sample space.\n:::\n\n\n\nNow consider how we can use a partition $A_0, A_1, \\ldots, A_{n-1}$ of $S$ to decompose any event $B \\subseteq S$. In {numref}`event-partition1`, an example event $B$ is shown on top of our example partition. 
Note that we do not require the $B$ have a nonempty intersection with every partition event (i.e., in this example $B \\cap A_0 = \\emptyset$).\n\n:::{figure-md} event-partition1\n\n\nExample event $B$ superimposed on partition $A_0, A_1, \\ldots, A_7$ of the sample space.\n:::\n\nThen we can use our partition to decompose $B$ into smaller subsets as \n\n$$\nB_i = B \\cap A_i, ~~ i = 0, 1, \\ldots, n-1,\n$$\nas shown in {numref}`event-partition2`.\n\n:::{figure-md} event-partition2\n\n\nExample event $B$ superimposed on partition $A_0, A_1, \\ldots, A_7$ of the sample space.\n:::\n\nNote that $A_i \\cap A_j = \\emptyset$ also implies that\n\n$$\nB_i \\cap B_j = (B \\cap A_i) \\cap (B \\cap A_j) = B \\cap (A_i \\cap A_j) = B \\cap \\emptyset =\\emptyset,\n$$\nand \n\n$$\n\\bigcup_i B_i = \\bigcup_i \\left(A_i \\cap B\\right ) = B \\cap \\left( \\bigcup_i A_i \\right) = B \\cap S = B.\n$$\nThese two properties imply that $B_0, B_1, \\ldots, B_{n-1}$ are a partition for $B$. If we want to express the probability of $B$, we can write\n\n\\begin{align}\nP(B) & = P \\left( \\bigcup_i B_i \\right) \\\\\n& = \\sum_i P \\left(B_i \\right) \\\\\n&= \\sum_i P\\left( B \\cap A_i\\right)\n\\end{align}\n\nNow suppose that we choose the partitioning events $\\{A_i\\}$ such that:\n* We know the probabilities $P(A_i)$\n* We know the conditional probabilities of the event $B$ given that $A_i$ occurred, $P(B|A_i)$.\n\nApplying chain rule, we can write $P(B \\cap A_i) = P(B|A_i)P(A_i)$ for each $i$. Putting this all together, we get the *Law of Total Probability*:\n\n````{panels}\nLaw of Total Probability\n^^^\nGiven an event $B \\subseteq S$ and a partition of $S$ denoted by \n$A_1, A_2, \\ldots$, \n\n$$\nP(B) = \\sum_i P(B|A_i)P(A_i).\n$$\n````\n\nThe law of total probability is often used in systems where there is either:\n* random inputs and outputs, where the output is dependent on the input\n* a *hidden state*, which is some random property of the system that is not directly observable.\n\nNote that these are not mutually exclusive. For the Magician's coin, we can treat the choice of coin as the input to the system or as a hidden state. When the coin is flipped repeatedly, then it generally makes more sense to interpret the choice of coin as a hidden state because it is a property of the system that cannot be directly observed but that influences the outpus of the system.\n\nWhen applying chain rule in such problems, the Law of Total Probability will generally use conditioning on\n the different possibilities of the input or the hidden state. \n\n\n\n\n**Example: The Magician's Coin**\n\nAs before, a magician has a fair coin and a two-headed coin in her pocket. Let $H_i$ denote the event that the outcome of flip $i$ is heads. We can use total probability to answer more complicated questions regarding the probabilities of the outputs:\n\n**(a)** $P(H_1)$\n \nAs mentioned above, we can condition on the hidden state, which is whether the coin is fair ($F$) or not ($\\overline{F}$). \n\n```{note}\nOne thing that is often confusing to learners of probability is determining what is actually a partition of $S$. If you have a set of events that are both *mutually exclusive* and *one of those events must occur* (i.e., the events cover the sample space), then they are a partition. \n\nIn this case, $F$ and $\\overline{F}$ are complements, so they are mutually exclusive. Moreover, either the coin is fair ($F$) or it is not ($\\overline{F}$), so one of these events must occur. 
Therefore the events $F, \\overline{F}$ partition $S$.\n```\n\nApplying the Law of Total Probability,\n\n\\begin{align}\nP(H_1) &= P(H_1|F)P(F) + P(H_1| \\overline{F}) P( \\overline{F}) \\\\\n&= \\left( \\frac 1 2 \\right) \\left( \\frac 1 2 \\right) + \\biggl( 1\\biggr) \\left( \\frac 1 2 \\right) \\\\\n&= \\frac 3 4\n\\end{align}\n \n**(b)** $P(H_1 \\cap H_2)$\n\nWe can use the same partition as above to write:\n\\begin{align}\nP(H_1 \\cap H_2) &= P(H_1 \\cap H_2 |F)P(F) + P(H_1\\cap H_2| \\overline{F}) P( \\overline{F}).\n\\end{align}\nHowever, now we need to know the probabilities $P(H_1 \\cap H_2 |F)$ and $P(H_1\\cap H_2| \\overline{F})$. When the coin is fair the events $H_1$ and $H_2$ are conditionally independent, so $P(H_1 \\cap H_2 |F) =\nP(H_1|F)P(H_2|F)$. When the coin is two-headed, it always comes up heads, so $P(H_1\\cap H_2| \\overline{F})=1$. Then\n\\begin{align}\nP(H_1 \\cap H_2) &= P(H_1|F)P( H_2 |F)P(F) + (1) P( \\overline{F}) \\\\\n&= \\left( \\frac 1 2 \\right) \\left( \\frac 1 2 \\right) \\left( \\frac 1 2 \\right)\n+\\biggl( 1\\biggr) \\left( \\frac 1 2 \\right) \\\\\n&= \\frac 5 8.\n\\end{align}\n\n**(c)** $P(H_2 \\vert H_1)$\n\nBy calculating this probability, we can assess whether getting heads on flip 2 is independent of getting heads on flip 1, and if now, how information about the value of the first flip changes the probabilities for the second flip. We can directly apply the definition of conditional probability to calculate this from the answers to parts (a) and (b):\n\n$$\nP(H_2 \\vert H_1) = \\frac{ P(H_1 \\cap H_2) } {P(H_1)} = \\frac{5/8}{3/4} = \\frac{5}{6}\n$$\n\nIf we did not know $H_1$, then $P(H_2)= P(H_1)=3/4$ (you should verify this using the Law of Total Probability with $H_1$ as the hidden state). So knowing that heads occurred on the first flip increases the probability that heads will occur on the second flip. \n\nIf this surprises you (again), then just recall that we can anticipate this if we take it to extremes. What if I told you that heads occurred on the first 1000 flips? Then you would surely think that the magician had chosen the two-headed coin and expect the probability of getting heads on the 1001th flip to be close to 1. Then if heads is observed on one flip, we should expect the probability of getting heads on second flip to increase. \n\nIn fact, after observing one heads, the probability of having chosen the two-headed coin should increase to more than 1/2. It does -- but we will need some new tools that we will develop in Chapter 7 before we can calculate the new probability.\n\n**TO DO: Create numerical answer questions for** $P(H_1 \\cap H_2 \\cap H_3)$, $P(H_3 \\vert H_1 \\cap H_2)$\n\n\n**Example: Two Random Selections**\n\nAt the beginning of this Chapter, the following question was asked: A box contains 2 white balls and 1 black ball. I reach in the box and remove one ball. Without inspecting it, I place it in my pocket. I withdraw a second ball. What is the probability that the second ball is white?\n\nThis is a question that stumps many learners of probability, but the answer turns out to be simple. Let $W_i$ be the event that a white ball is chosen on draw $i$. We are asking about $W_2$. Then $P(W_2) = P(W_1) = 2/3$. However, this is not intuitive because our brain tells us that the probabilities for the second draw must depend on what happened on the first draw -- which is unknown in this case.\n\nWe can formally show this using the Law of Total Probability. 
The conditioning event will be the unobserved result of the first draw. The partition is $W_1$, $\\overline{W_1}$. Then\n\n\\begin{align}\nP(W_2) &= P(W_2|W_1) P(W_1) + P(W_2 | \\overline{W_1}) P(\\overline{W_1})\\\\\n\\end{align}\n\nIf a white ball is drawn first ($W_1$), then there is one white ball and one black ball left, so $P(W_2|W_1) = 1/2$. If a black ball is drawn first ($\\overline{W_1}$), then there are two white balls left, so $P(W_1| \n\\overline{W_1}) = 1$. Then\n\\begin{align}\nP(W_2) &= \\left( \\frac 1 2 \\right) \\left( \\frac 2 3 \\right) + \\biggl( 1 \\biggr) \n\\left( \\frac 1 3 \\right) \\\\\n&= \\frac 2 3,\n\\end{align}\nwhich is equal to $P(W_1)$. We can use similar math to show that $P(W_3)=2/3$, also. In the absence of information about what happened on previous draws, the probability of getting a white ball is equal to the original proportion of white balls in the box.\n\n\n\n**Example: The Monty Hall Problem**\n\nYou are on a game show, and you\u2019re given the choice of three doors:\n\n* Behind one door is a car\n* Behind the other doors are goats\n\nYou pick a door, and the host, who knows what\u2019s behind the doors, opens another door, which he knows has a goat.\n\nThe host then offers you the option to switch doors. Does it matter if you switch? If switching changes your probability of getting the prize, what is the new probability?\n\nHere is a simple solution. Let $W_i$ be the event that you are winning on choice $i$. Let $S$ be the event that you switch. We are interested in comparing $P(W_2|S)$ with $P(W_2 | \\overline{S})$. \n\nConsider $\\overline{S}$ first. Because there are two goats and the host always shows you a goat, the fact that he reveals a goat to you does not change the probability that you have selected the car. Thus $P(W_2|\\overline{S}) = P(W_1) = 1/3$. \n\nNow consider $S$. It may be tempting to think that either:\n1. Switching doesn't matter because the host was always going to show you a goat, so your new choice is just as likely to be a car as it was before, or \n2. After the host reveals a goat, there is one goat and one car, so the probability of choosing the car when you switch is 1/2. \n\nUnfortunately, neither of these are correct! Let's condition on what happens on choice 1:\n\n$$\nP(W_2|S) = P(W_2|S \\cap W_1) P(W_1|S) + P(W_2|S \\cap \\overline{W_1} ) P(\\overline{W_1}|S).\n$$\n\nThe probability of winning on choice 1 does not depend on whether you switch after that choice, so $P(W_1|S) = P(W_1) = 1/3$, and $P(\\overline{W_1}|S) = P(\\overline{W_1})= 2/3. \n\nNow, consider what happens on the second choice. There are two possibilities:\n* If you are winning on the first choice ($W_1$), then the two doors you have not chosen both contain goats. The host reveals one of the goats, and you will switch to the other goat. Thus, $P(W_2|W_1) =0$.\n* If you are not winning on the first choice ($\\overline{W_1}$), then one of the doors you have not chosen has a goat and the other has the car. The host shows the goat that is not behind your door. Then if you switch, you will switch to the door with the car. Thus, $P(W_1 | \\overline{W_1}) = 1$. \n\nPutting this all together, \n\n\\begin{align}\nP(W_2|S) &= P(W_2|S \\cap W_1) P(W_1|S) + P(W_2|S \\cap \\overline{W_1} ) P(\\overline{W_1}|S) \\\\\n&= \\biggl( 0 \\biggr) \\left( \\frac 1 3 \\right) \n+ \\biggl( 1 \\biggr) \\left( \\frac 2 3 \\right) \\\\\n&= \\frac 2 3.\n\\end{align}\n\nWhy is the probability not 1/2 using the reasoning above about one car and one goat left? 
Because you do not randomly choose among the two doors. The host is revealing information when he reveals the goat because he cannot choose your door, and he cannot choose the door with the car. If you were to randomly choose between the two doors after the goat is revealed, the probability would be 1/2.\n\n\n```python\n\n```\n", "meta": {"hexsha": "b9f25da11fc4d96610cfa01600c14ec7944c1b40", "size": 22662, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "06-conditional-prob/chain-rules-total-prob.ipynb", "max_stars_repo_name": "jmshea/intro-data-science-for-engineers", "max_stars_repo_head_hexsha": "7bfcd7125d9bac42054bbfc5ad67c1ddcd7dfd8b", "max_stars_repo_licenses": ["FSFAP"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-09-09T14:12:23.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-09T14:12:23.000Z", "max_issues_repo_path": "06-conditional-prob/chain-rules-total-prob.ipynb", "max_issues_repo_name": "jmshea/intro-data-science-for-engineers", "max_issues_repo_head_hexsha": "7bfcd7125d9bac42054bbfc5ad67c1ddcd7dfd8b", "max_issues_repo_licenses": ["FSFAP"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "06-conditional-prob/chain-rules-total-prob.ipynb", "max_forks_repo_name": "jmshea/intro-data-science-for-engineers", "max_forks_repo_head_hexsha": "7bfcd7125d9bac42054bbfc5ad67c1ddcd7dfd8b", "max_forks_repo_licenses": ["FSFAP"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 53.0725995316, "max_line_length": 624, "alphanum_fraction": 0.6111111111, "converted": true, "num_tokens": 5119, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.8918110511888303, "lm_q2_score": 0.943347569515675, "lm_q1q2_score": 0.8412877876062023}} {"text": "# Unweighted and Weighted Means\n\n\n```python\nimport numpy as np\nimport scipy.stats as stats\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport matplotlib.patches as mpathces\n#from matplotlib.pyplot import figure, show\n#from matplotlib.ticker import MaxNLocator\n```\n\n## Maximum Likelihood Estimator motivated \"derivations\"\n\n### Unweighted Means\n\nIf we make $n$ idential statistically independent (isi) measurments of a random varaible $x$, such that the measurements collected form data $\\vec{x} = \\left\\{x_i, \\cdots, x_n\\right\\}$, from a Gaussian (Normal) distribution,\n\n\\begin{equation}\nL\\left(\\vec{x}; \\vec{\\theta}\\right) = \\prod_{i=1}^{n} f(x_i; \\mu, \\sigma) = \\frac{1}{(2\\pi)^{n/2} \\sigma^{n}} \\exp\\left(-\\frac{1}{2\\sigma^2} \\sum_{i=1}^{n} \\left(x_i - \\mu\\right)^2 \\right)\n\\end{equation}\n\nthen\n\n\\begin{equation}\n-\\ln L = \\frac{n}{2} \\ln\\left(2\\pi\\right) + n \\ln \\sigma + \\frac{1}{2\\sigma^2} \\sum_{i=1}^{n}\\left(x_i - \\mu\\right)^2\n\\end{equation}\n\nand so $L$ is maximized with respect to a variable $\\alpha$ when $-\\ln L$ is minimized,\n\n\\begin{equation*}\n\\frac{\\partial \\left(-\\ln L\\right)}{\\partial \\alpha} = 0.\n\\end{equation*}\n\nThus, $L$ is maximized when\n\n\\begin{equation*}\n\\frac{\\partial \\left(-\\ln L\\right)}{\\partial \\mu} = -\\frac{1}{\\sigma^2} \\sum_{i=1}^{n}\\left(x_i - \\mu\\right) = 0,\n\\end{equation*}\n\nwhich occurs for\n\n\\begin{equation*}\n\\sum_{i=1}^{n} x_i = n \\mu,\n\\end{equation*}\n\nsuch that the best estimate for true parameter $\\mu$ is\n\n\\begin{equation}\n\\boxed{\\hat{\\mu} = \\frac{1}{n} \\sum_{i=1}^{n} x_i = \\bar{x}\\,}\\,,\n\\end{equation}\n\nand $L$ is maximized when\n\n\\begin{equation*}\n\\frac{\\partial \\left(-\\ln L\\right)}{\\partial \\sigma} = \\frac{n}{\\sigma} - \\frac{1}{\\sigma^3} \\sum_{i=1}^{n} \\left(x_i - \\mu\\right) = 0,\n\\end{equation*}\n\nwhich occurs for\n\n\\begin{equation*}\nn\\sigma^2 = \\sum_{i=1}^{n} \\left(x_i - \\mu\\right)^2,\n\\end{equation*}\n\nwhich is\n\n\\begin{equation*}\n\\sigma = \\sqrt{\\frac{1}{n}\\sum_{i=1}^{n} \\left(x_i - \\mu\\right)^2}.\n\\end{equation*}\n\nHowever, $\\mu$ is an unknown true parameter, and the best estimate of it is $\\hat{\\mu}$, which is in no\nmanner required to be equal to $\\mu$. 
Thus, the best estimate of $\\sigma$ is\n\n\\begin{equation}\n\\boxed{\\hat{\\sigma}_{\\hat{\\mu}} = \\sqrt{\\frac{1}{n}\\sum_{i=1}^{n} \\left(x_i - \\hat{\\mu}\\right)^2} = \\sqrt{\\frac{1}{n}\\sum_{i=1}^{n} \\left(x_i - \\bar{x}\\,\\right)^2}\\,}\\,.\n\\end{equation}\n\nIf the seperation from the mean of each observation, $\\left(x_i - \\bar{x}\\right) = \\delta x = \\text{constant}$, are the same then the uncertainity on the mean is found to be\n\n\\begin{equation*}\n\\sigma_{\\hat{\\mu}} = \\frac{\\delta x}{\\sqrt{n}},\n\\end{equation*}\n\nwhich is often refered to as the \"standard error\".\n\n---\nSo, for a population of measurements sampled from a distribution, it can be said that the sample mean is\n$$\\mu = \\frac{1}{n} \\sum_{i=1}^{n} x_i = \\bar{x},$$\n\nand the standard deviation of the sample is\n\n\\begin{equation*}\n\\sigma = \\sqrt{\\frac{1}{n}\\sum_{i=1}^{n} \\left(x_i - \\bar{x}\\,\\right)^2}.\n\\end{equation*}\n\n---\n\n### Weighted Means\n\nAssume that $n$ individual measurments $x_i$ are spread around (unknown) true value $\\theta$ according to a Gaussian distribution, each with known width $\\sigma_i$.\n\nThis then leads to the likelihood function\n\n\\begin{equation*}\nL(\\theta) = \\prod_{i=1}^{n} \\frac{1}{\\sqrt{2\\pi}\\sigma_i} \\exp\\left(-\\frac{\\left(x_i - \\theta\\right)^2}{2\\sigma_i^2} \\right)\n\\end{equation*}\n\nand so negative log-likelihood\n\n\\begin{equation}\n-\\ln L = \\frac{1}{2} \\ln\\left(2\\pi\\right) + \\ln \\sigma_i + \\frac{1}{2\\sigma_i^2} \\sum_{i=1}^{n}\\left(x_i - \\theta\\right)^2.\n\\end{equation}\n\nAs before, $L$ is maximized with respect to a variable $\\alpha$ when $-\\ln L$ is minimized,\n\n\\begin{equation*}\n\\frac{\\partial \\left(-\\ln L\\right)}{\\partial \\alpha} = 0,\n\\end{equation*}\n\nand so $L$ is maximized with respect to $\\theta$ when\n\n\\begin{equation*}\n\\frac{\\partial \\left(-\\ln L\\right)}{\\partial \\theta} = -\\sum_{i=1}^{n} \\frac{x_i - \\theta}{\\sigma_i^2} = 0,\n\\end{equation*}\n\nwhich occurs for\n\n\\begin{equation*}\n\\sum_{i=1}^{n} \\frac{x_i}{\\sigma_i^2} = \\theta \\sum_{i=1}^{n} \\frac{1}{\\sigma_i^2},\n\\end{equation*}\n\nwhich is\n\n\\begin{equation}\n\\hat{\\theta} = \\frac{\\displaystyle\\sum_{i=1}^{n} \\frac{x_i}{\\sigma_i^2}}{\\displaystyle\\sum_{i=1}^{n}\\frac{1}{\\sigma_i^2}}.\n\\end{equation}\n\nNote that by defining \"weights\" to be\n\n\\begin{equation*}\nw_i = \\frac{1}{\\sigma_1^2},\n\\end{equation*}\n\nthis can be expressed as\n\n\\begin{equation}\n\\boxed{\\hat{\\theta} = \\frac{\\displaystyle\\sum_{i=1}^{n} w_i\\, x_i}{\\displaystyle\\sum_{i=1}^{n}w_i}},\n\\end{equation}\n\nmaking the term \"weighted mean\" very transparent.\n\nTo find the standard deviation on the weighted mean, we first look to the variance, $\\sigma^2$. 
[4]\n\n\\begin{align*}\n\\sigma^2 &= \\text{E}\\left[\\left(\\hat{\\theta} - \\text{E}\\left[\\hat{\\theta}\\right]\\right)^2\\right] \\\\\n &= \\text{E}\\left[\\left(\\frac{\\displaystyle\\sum_{i=1}^{n} w_i\\, x_i}{\\displaystyle\\sum_{i=1}^{n}w_i} - \\text{E}\\left[\\frac{\\displaystyle\\sum_{i=1}^{n} w_i\\, x_i}{\\displaystyle\\sum_{i=1}^{n}w_i}\\right]\\,\\right)^2\\right] \\\\\n &= \\frac{1}{\\displaystyle\\left(\\sum_{i=1}^{n} w_i\\right)^2} \\text{E} \\left[ \\displaystyle\\left(\\sum_{i=1}^{n} w_i\\,x_i\\right)^2 - 2 \\displaystyle\\left(\\sum_{i=1}^{n} w_i\\,x_i\\right) \\displaystyle\\left(\\sum_{i=j}^{n} w_j\\, \\text{E}\\left[x_j\\right]\\right) + \\displaystyle\\left(\\sum_{i=1}^{n} w_i\\, \\text{E}\\left[x_i\\right]\\right)^2 \\right] \\\\\n &= \\frac{1}{\\displaystyle\\left(\\sum_{i=1}^{n} w_i\\right)^2} \\text{E} \\left[ \\sum_{i,j}^{n} w_i\\, x_i w_j\\, x_j - 2 \\sum_{i,j}^{n} w_i\\, x_i w_j\\, \\text{E}\\left[x_j\\right] + \\sum_{i,j}^{n} w_i\\, \\text{E}\\left[x_i\\right] w_j\\, \\text{E}\\left[x_j\\right] \\right] \\\\\n &= \\frac{1}{\\displaystyle\\left(\\sum_{i=1}^{n} w_i\\right)^2} \\sum_{i,j}^{n} w_i w_j \\left( \\text{E}\\left[ x_i x_j \\right] - 2 \\text{E}\\left[ x_i \\right]\\text{E}\\left[ x_j \\right] + \\text{E}\\left[ x_i \\right]\\text{E}\\left[ x_j \\right] \\right) \\\\\n &= \\frac{1}{\\displaystyle\\left(\\sum_{i=1}^{n} w_i\\right)^2} \\sum_{i,j}^{n} w_i w_j \\left( \\text{E}\\left[ x_i x_j \\right] - \\text{E}\\left[ x_i \\right]\\text{E}\\left[ x_j \\right] \\right) \\\\\n &= \\frac{1}{\\displaystyle\\left(\\sum_{i=1}^{n} w_i\\right)^2} \\sum_{i,j}^{n} w_i w_j \\,\\text{Cov}\\left( x_i, x_j \\right) = \\left\\{\n\\begin{array}{ll}\n\\frac{\\displaystyle1}{\\displaystyle\\left(\\sum_{i=1}^{n} w_i\\right)^2} \\displaystyle\\sum_{i}^{n} \\left( w_i \\sigma_i \\right)^2\\,, & x_i \\text{ and } x_j \\text{ statistically independent}, \\\\\n0\\,, &\\text{ otherwise},\n\\end{array}\n\\right. \\\\\n &= \\frac{\\displaystyle\\sum_{i}^{n} \\left( \\sigma_i^{-2} \\sigma_i \\right)^2}{\\displaystyle\\left(\\sum_{i=1}^{n} w_i\\right)^2} = \\frac{\\displaystyle\\sum_{i}^{n} w_i}{\\displaystyle\\left(\\sum_{i=1}^{n} w_i\\right)^2} \\\\\n &= \\frac{\\displaystyle 1}{\\displaystyle\\sum_{i=1}^{n} w_i}\n\\end{align*}\n\nThus, it is seen that the standard deviation on the weighted mean is\n\n\\begin{equation}\n\\boxed{\\sigma_{\\hat{\\theta}} = \\sqrt{\\frac{\\displaystyle 1}{\\displaystyle\\sum_{i=1}^{n} w_i}} = \\left(\\displaystyle\\sum_{i=1}^{n} \\frac{1}{\\sigma_i^2}\\right)^{-1/2}}\\,.\n\\end{equation}\n\nNotice that in the event that the uncertainties are uniform for each observation, $\\sigma_i = \\delta x$, the above yields the same result as the unweighted mean. 
$\\checkmark$\n\nAfter this aside it is worth pointing out that [1] have a very elegant demonstration that\n\n\\begin{equation*}\n\\sigma_{\\hat{\\theta}} = \\left(\\frac{\\partial^2\\left(- \\ln L\\right)}{\\partial\\, \\theta^2}\\right)^{-1/2} = \\left(\\displaystyle\\sum_{i=1}^{n} \\frac{1}{\\sigma_i^2}\\right)^{-1/2}.\n\\end{equation*}\n\n---\nSo, the average of $n$ measurements of quantity $\\theta$, with individual measurments, $x_i$, Gaussianly distributed about (unknown) true value $\\theta$ with known width $\\sigma_i$, is the weighted mean\n\n\\begin{equation*}\n\\hat{\\theta} = \\frac{\\displaystyle\\sum_{i=1}^{n} w_i\\, x_i}{\\displaystyle\\sum_{i=1}^{n}w_i},\n\\end{equation*}\n\nwith weights $w_i = \\sigma_i^{-2}$, with standard deviation on the weighted mean\n\n\\begin{equation*}\n\\sigma_{\\hat{\\theta}} = \\sqrt{\\frac{\\displaystyle 1}{\\displaystyle\\sum_{i=1}^{n} w_i}} = \\left(\\displaystyle\\sum_{i=1}^{n} \\frac{1}{\\sigma_i^2}\\right)^{-1/2}.\n\\end{equation*}\n\n---\n\n## Specific Examples\n\nGiven the measurments\n\n\\begin{equation*}\n\\vec{x} = \\left\\{10, 9, 11\\right\\}\n\\end{equation*}\n\nwith uncertanties\n$$\\vec{\\sigma_x} = \\left\\{1, 2, 3\\right\\}$$\n\n\n```python\nx_data = [10,9,11]\nx_uncertainty = [1,2,3]\n```\n\n\n```python\nnumerator = sum(x/(sigma_x ** 2) for x, sigma_x in zip(x_data, x_uncertainty))\ndenominator = sum(1/(sigma_x ** 2) for sigma_x in x_uncertainty)\n\nprint(\"hand calculated weighted mean: {}\".format(numerator/denominator))\n```\n\n hand calculated weighted mean: 9.897959183673468\n\n\nUsing [NumPy's `average` method](https://docs.scipy.org/doc/numpy/reference/generated/numpy.average.html)\n\n\n```python\n# unweighted mean\nnp.average(x_data)\n```\n\n\n\n\n 10.0\n\n\n\n\n```python\nx_weights = [1/(uncert ** 2) for uncert in x_uncertainty]\n# weighted mean\nweighted_mean = np.average(x_data, weights=x_weights)\nprint(weighted_mean)\n```\n\n 9.897959183673468\n\n\n\n```python\n# no method to do this in NumPy!?\nsigma = np.sqrt(1/np.sum(x_weights))\nprint(\"hand calculated uncertaintiy on weighted mean: {}\".format(sigma))\n```\n\n hand calculated uncertaintiy on weighted mean: 0.8571428571428571\n\n\n\n```python\n# A second way to find the uncertainty on the weighted mean\nsummand = sum((x * w) for x, w in zip(x_data, x_weights))\nnp.sqrt(np.average(x_data, weights=x_weights)/summand)\n```\n\n\n\n\n 0.8571428571428571\n\n\n\nLet's plot the data now and take a look at the results\n\n\n```python\ndef draw_weighted_mean(data, errors, w_mean, w_uncert):\n plt.figure(1)\n\n # the data to be plotted\n x = [i+1 for i in range(len(data))]\n \n x_min = x[x.index(min(x))]\n x_max = x[x.index(max(x))]\n \n y = data\n y_min = y[y.index(min(y))]\n y_max = y[y.index(max(y))]\n \n err_max = errors[errors.index(max(errors))]\n\n # plot data\n plt.errorbar(x, y, xerr=0, yerr=errors, fmt='o', color='black')\n # plot weighted mean\n plt.plot((x_min,x_max), (w_mean,w_mean), color = 'blue')\n # plot uncertainty on weighted mean\n plt.plot((x_min,x_max), (w_mean - w_uncert,w_mean - w_uncert), color = 'gray', linestyle = '--')\n plt.plot((x_min,x_max), (w_mean + w_uncert,w_mean + w_uncert), color = 'gray', linestyle = '--')\n\n # Axes\n plt.xlabel('Individual measurements')\n plt.ylabel('Value of measruement')\n # view range\n epsilon = 0.1\n plt.xlim(x_min-epsilon, x_max+epsilon)\n plt.ylim([y_min - err_max, 1.5 * y_max + err_max])\n\n #ax = figure().gca()\n #ax.xaxis.set_major_locator(MaxNLocator(integer=True))\n\n # Legends\n wmean_patch = 
mpathces.Patch(color = 'blue', label = \"Weighted mean: $\\mu={:0.3f}$\".format(w_mean))\n uncert_patch = mpathces.Patch(color = 'gray', label = \"Uncertainty on the weighted mean: $\\pm{:0.3f}$\".format(w_uncert))\n plt.legend(handles=[wmean_patch, uncert_patch])\n\n plt.show()\n```\n\n\n```python\ndraw_weighted_mean(x_data, x_uncertainty, weighted_mean, sigma)\n```\n\nNow let's do this again but with data that are Normally distributed about a mean value\n\n\n```python\ntrue_mu = np.random.uniform(3,9)\ntrue_sigma = np.random.uniform(0.1,2.0)\nn_samples = 20\n\nsamples = np.random.normal(true_mu, true_sigma, n_samples).tolist()\ngauss_errs = np.random.normal(2, 0.4, n_samples).tolist()\n\nweights = [1/(uncert ** 2) for uncert in gauss_errs]\n \ndraw_weighted_mean(samples, gauss_errs, np.average(samples, weights=weights), np.sqrt(1/np.sum(weights)))\n```\n\n## References\n\n1. [_Data Analysis in High Energy Physics_](http://eu.wiley.com/WileyCDA/WileyTitle/productCd-3527410589.html), Behnke et al., 2013, $\\S$ 2.3.3.1\n2. [_Statistical Data Analysis_](http://www.pp.rhul.ac.uk/~cowan/sda/), Glen Cowan, 1998\n3. University of Marlyand, Physics 261, [Notes on Error Propagation](http://www.physics.umd.edu/courses/Phys261/F06/ErrorPropagation.pdf)\n4. Physics Stack Exchange, [_How do you find the uncertainty of a weighted average?_](https://physics.stackexchange.com/questions/15197/how-do-you-find-the-uncertainty-of-a-weighted-average)\n", "meta": {"hexsha": "b76580e2bbb1d63a2df62f8de403f7715e17cb11", "size": 53768, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Notebooks/Introductory/Error-on-means.ipynb", "max_stars_repo_name": "fizisist/Statistics-Notes", "max_stars_repo_head_hexsha": "9399bca77abc36ee342f8af2fadddffd79390bed", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Notebooks/Introductory/Error-on-means.ipynb", "max_issues_repo_name": "fizisist/Statistics-Notes", "max_issues_repo_head_hexsha": "9399bca77abc36ee342f8af2fadddffd79390bed", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Notebooks/Introductory/Error-on-means.ipynb", "max_forks_repo_name": "fizisist/Statistics-Notes", "max_forks_repo_head_hexsha": "9399bca77abc36ee342f8af2fadddffd79390bed", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 67.3784461153, "max_line_length": 16692, "alphanum_fraction": 0.7836259485, "converted": true, "num_tokens": 4267, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9433475699138559, "lm_q2_score": 0.89181104831338, "lm_q1q2_score": 0.8412877852487554}} {"text": "# Chapter 2 Exercises\nIn this notebook we will go through the exercises of chapter 2 of Introduction to Stochastic Processes with R by Robert Dobrow.\n\n\n```R\nlibrary(expm)\n```\n\n Loading required package: Matrix\n \n Attaching package: \u2018expm\u2019\n \n The following object is masked from \u2018package:Matrix\u2019:\n \n expm\n \n\n\n## 2.1\nA Markov chain has transition Matrix\n$$\np=\\left(\\begin{array}{cc} \n0.1 & 0.3&0.6\\\\\n0 & 0.4& 0.6 \\\\\n0.3 & 0.2 &0.5\n\\end{array}\\right)\n$$ \nWith initial distribution $\\alpha =(0.2,0.3,0.5)$. 
Find the following:\n\na) $P(X_7=3|X_6=2)$\n\nb) $P(X_9=2|X_1=2,X_5=1,X_7=3)$\n\nc) $P(X_0=3|X_1=1)$\n\nd) $E[X_2]$\n\n\n\n```R\na=c(0.2,0.3,0.5)\np = matrix(c(0.1,0.3,0.6,0,0.4,0.6,0.3,0.2,0.5),ncol=3,byrow = TRUE)\np[2,3]\n```\n\n\n0.6\n\n\n\n```R\n(p%*%p)[3,2]\n```\n\n\n0.27\n\n\n\n```R\na[3]*p[3,1]/(a[1]*p[1,1]+a[2]*p[2,1]+a[3]*p[3,1])\n```\n\n\n0.882352941176471\n\n\n\n```R\ne=0\npr = a%*%p%*%p\nfor (i in 1:ncol(pr))\n e = e+(i)*pr[1,i]\ne\n```\n\n\n2.363\n\n\n## 2.1\nA Markov chain has transition Matrix\n$$\np=\\left(\\begin{array}{cc} \n0 & 1/2&1/2\\\\\n1 & 0& 0 \\\\\n1/3 & 1/3 &1/3\n\\end{array}\\right)\n$$ \nWith initial distribution $\\alpha =(1/2,0,1/2)$. Find the following:\n\na) $P(X_2=1|X_1=3)$\n\nb) $P(X_1=3,X_2=1)$\n\nc) $P(X_1=3|X_2=1)$\n\nd) $P(X_9=1|X_1=3, X_4=1, X_7=2)$\n\n\n```R\na=c(1/2,0,1/2)\np = matrix(c(0,1/2,1/2,1,0,0,1/3,1/3,1/3),ncol=3,byrow = T)\np[3,1]\n```\n\n\n0.333333333333333\n\n\n\n```R\n(a%*%p)[1,3]*p[3,1]\n```\n\n\n0.138888888888889\n\n\n\n```R\n(a%*%p)[1,3]*p[3,1]/((a%*%p)[1,1]*p[1,1]+(a%*%p)[1,2]*p[2,1]+(a%*%p)[1,3]*p[3,1])\n```\n\n\n0.25\n\n\n\n```R\n(p%*%p)[2,1]\n```\n\n\n0\n\n\n## 2.3\nConsider the Wright-Fisher model with population k=3. If the initial population has one A allele, what is the probability that there are no alleles at time 3.\n\n\n```R\nmat = c()\nk = 3\nfor(j in 0:k){\n vec = c()\n for(i in (0:k)){\n vec=c(vec,choose(k, i)*(j/k)**i*(1-j/k)**(k-i))\n }\n mat=c(mat,vec)\n}\np=matrix(mat,nrow=k+1,byrow = T)\n(p%^%3)[2,1]\n```\n\n Loading required package: Matrix\n \n Attaching package: \u2018expm\u2019\n \n The following object is masked from \u2018package:Matrix\u2019:\n \n expm\n \n\n\n\n0.516689529035208\n\n\n## 2.4\nFor the General\u00f1 two-state chain with transformation matrix \n$$\\boldsymbol{P}=\\left(\\begin{array}{cc} \n1-p & p\\\\\nq & 1-q\n\\end{array}\\right)$$\n\nAnd initial distribution $\\alpha=(\\alpha_1,\\alpha_2)$, find the following:\n\n(a) the two step transition matrix\n\n(b) the distribution of $X_1$\n\n### Answer\n(a)\n\nFor this case the result comes from doing the matrix multiplication once i.e. finding $\\boldsymbol{P}*\\boldsymbol{P}$ that is:\n$$\\boldsymbol{P}*\\boldsymbol{P}=\\left(\\begin{array}{cc} \n(1-p)^2+pq & (1-p)p+(1-q)p\\\\\n(1-p)q+(1-q)q & (1-q)^2+pq\n\\end{array}\\right)$$\n\n(b) \nin this case we need to take into account alpha which would be $X_0$ and then multiply with the transition matrix, then:\n\n$\\boldsymbol{\\alpha}*\\boldsymbol{P}$ that is:\n$$\\boldsymbol{\\alpha}*\\boldsymbol{P}=( \n(1-p)\\alpha_1+q\\alpha_2 , p\\alpha_1+(1-q)\\alpha_2)$$\n\n## 2.5\nConsider a random walk on $\\{0,...,k\\}$, which moves left and right with respective probabilities q and p. If the walk is at 0 it transitions to 1 on the next step. If the walk is at k it transitions to k-1 on the next step. This is calledrandom walk with reflecting bounderies. Assume that $k=3,q=1/4, p =3/4$ and the initial distribution is uniform. 
For the following, use technology if needed.\n\n(a) Exhibit the transition Matrix\n\n(b) Find $P(X_7=1|X_0=3,X_2=2,X_4=2)$\n\n(c) Find $P(X_3=1,X_5=3)$\n\n### Answer\n(a) $$\\boldsymbol{Pr}=\\left(\\begin{array}{cc}\n0 & 1 & 0 & 0 \\\\\n1/4 & 0 & 3/4 & 0\\\\\n0 & 1/4 & 0 & 3/4 \\\\\n0 & 0 & 1 & 0\n\\end{array}\\right)$$\n\n(b) since it is a Markov Chain, then $P(X_7=1|X_0=3,X_2=2,X_4=2)=P(X_7=1|X_4=2)=\\boldsymbol{Pr}^3_{2,1}$\n\n\n\n```R\np = matrix(c(0,1,0,0,1/4,0,3/4,0,0,1/4,0,3/4,0,0,1,0),nrow =4,byrow = T)\n(p%^%3)[3,2]\n```\n\n\n0.296875\n\n\n(c) $P(X_3=1,X_5=3)=(\\alpha \\boldsymbol{Pr}^3)_{1}*\\boldsymbol{Pr}^2_{1,3}$\n\n\n```R\na=c(1/4,1/4,1/4,1/4)\n(a%*%(p%^%3))[1,2]*(p%*%p)[2,4]\n```\n\n\n0.103271484375\n\n\n## 2.6\nA tetrahedron die has four faces labeled 1,2,3 and 4. In repeated independent rolls of the die $R_0, R_1,...,$ let $X_n=max\\{R_0,...,R_n\\}$ be the maximum value after $n+1$ rolls, for $n\\geq0$\n\n(a) Give an intuitive argument for why $X_0, X_1, ... $ is a Markov Chain and exhibit its transition Matrix\n\n(b) find $P(X_3 \\geq 3)$\n\n### Answer\n(a) This is a Markov Chain since the value of the maximum in all the rolls does only depend on the last value given of the throw this is because the max value should not changhe based on past rolls of this given event, then $P(X_n=i|X_{n-1}=j,...)=P(X_n=i|X_{n-1}=j)$\nand the transition matrix is:\n$$\\boldsymbol{Pr}=\\left(\\begin{array}{cc}\n1/4 & 1/4 & 1/4 & 1/4 \\\\\n0 & 1/2 & 1/4 & 1/4 \\\\\n0 & 0 & 3/4 & 1/4 \\\\\n0 & 0 & 0 & 1\n\\end{array}\\right)$$\n\n\n(b) We know that the tetrahedron has unoiform probability to get the initial value, then $\\alpha=(1/4,1/4,1/4,1/4)$\nthen we want $\\alpha*\\boldsymbol{Pr}^3$\n\n\n```R\np = matrix(c(1/4,1/4,1/4,1/4,0,1/2,1/4,1/4,0,0,3/4,1/4,0,0,0,1),nrow=4,byrow = T)\na=c(1/4,1/4,1/4,1/4)\nsum((a%*%(p%^%3))[1,-(1:2)])\n```\n\n\n0.9375\n\n\n## 2.7\nLet $X_0, X_1,...$ be a Markov chain with transition matrix $P$. Let $Y_n=X_{3n}$, for $n = 0,1,2,...$ Show that $Y_0,Y_1,...$ is a Markov chain and exhibit its transition Matrix.\n\n### Answer\nLet's unravel $P(Y_k|Y_{k-1},...,Y_0)$\n\n$$P(Y_k|Y_{k-1},...,Y_0)=P(X_{3k}|X_{3k-3},...,X_0)$$\n\nBut since $X_0, X_1,...$ is a Markov Chain, then $P(X_{3k}|X_{3k-3},...,X_0) = P(X_{3k}|X_{3k-3}) = P(Y_k|Y_{k-1})$\nThis means that $Y_0,Y_1,...$ is also a Markov chain.\n\nand then the transition matrix for Y is $P_Y=P_X^3$\n\n## 2.8\nGive the Markov transition matrix for a random walk on the weighted graph in the next figure:\n
\n\n
\n\n### Answer \n$$\\boldsymbol{Pr}=\\left(\\begin{array}{cc}\n0 & 1/3 & 0 & 0 & 2/3 \\\\\n1/10 & 1/5 & 1/5 & 1/10 & 2/5 \\\\\n1/2 & 1/3 & 0 & 1/6 & 0\\\\\n0 & 1/2 & 1/2 & 0 & 0\\\\\n1/3 & 2/3 & 0 & 0 & 0\n\\end{array}\\right)$$\n\n## 2.8\nGive the Markov transition matrix for a random walk on the weighted graph in the next figure:\n
\n\n
\n\n### Answer \n$$\\boldsymbol{Pr}=\\left(\\begin{array}{cc}\n0 & 0 & 3/5 & 0 & 2/5 \\\\\n1/7 & 2/7 & 0 & 0 & 4/7 \\\\\n0 & 2/9 & 2/3 & 1/9 & 0\\\\\n0 & 1 & 0 & 0 & 0\\\\\n3/4 & 0 & 0 & 1/4 & 0\n\\end{array}\\right)$$\n\n## 2.10\nConsider a Markov Chain with transition Matrix\n\n$$\\boldsymbol{Pr}=\\left(\\begin{array}{cc}\n0 & 3/5 & 1/5 & 2/5 \\\\\n3/4 & 0 & 1/4 & 0 \\\\\n1/4 & 1/4 & 1/4 & 1/4\\\\\n1/4 & 0 & 1/4 & 1/2\n\\end{array}\\right)$$\n(a) Exhibit the directed, weighted transition graph for the chain\n\n(b) The transition graph for this chain can be given as a weighted graph without directed edges. Exhibit the graph\n\n(a)\n
\n\n
\n\n(b)\n
\n\n
\n\n## 2.11\nYou start with five dice. Roll all the dice and put aside those dice that come up 6. Then roll the remaining dice, putting aside thise dice that come up 6. and so on. Let $X_n$ be the number of dice that are sixes after n rolls. \n\n(a) Describe the transition matrix $\\boldsymbol{Pr}$ for this Markov chain\n\n(b) Find the probability of getting all sixes by the third play.\n\n(c) what do you expect $\\boldsymbol{Pr}^{100}$ to look like? Use tech to confirm your answer **(Good that we are doing this in jupyter notebook haha)**\n\n(a) Note that the space for $X_n$ is $0,1,2,3,4,5$\n\n$$\\boldsymbol{Pr}=\\left(\\begin{array}{cc}\n1 & 0 & 0 & 0 & 0 & 0 \\\\\n1/6 & 5/6 & 0 & 0 & 0 & 0\\\\\n\\frac{1^2}{6^2}{2\\choose 2} & \\frac{5*1}{6^2}{2\\choose 1} & \\frac{5^2}{6^2}{2\\choose 0} & 0 & 0 & 0\\\\\n\\frac{1^3}{6^3}{3\\choose 3} & \\frac{5*1^2}{6^3}{3\\choose 2} & \\frac{5^2*1}{6^3}{3\\choose 1} & \\frac{5^3}{6^3}{3\\choose 0} & 0 & 0\\\\\n\\frac{1^4}{6^4}{4\\choose 4} & \\frac{5*1^3}{6^4}{4\\choose 3} & \\frac{5^2*1^2}{6^4}{4\\choose 2} & \\frac{5^3*1}{6^4}{4\\choose 1} & \\frac{5^4}{6^4}{4\\choose 0} & 0\\\\\n\\frac{1^5}{6^5}{5\\choose 5} & \\frac{5*1^4}{6^5}{5\\choose 4} & \\frac{5^2*1^3}{6^5}{5\\choose 3} & \\frac{5^3*1^2}{6^5}{5\\choose 2} & \\frac{5^4*1}{6^5}{5\\choose 1} & \\frac{5^5}{6^5}{5\\choose 0}\\\\\n\\end{array}\\right) = \\left(\\begin{array}{cc}\n1 & 0 & 0 & 0 & 0 & 0 \\\\\n\\frac{1}{6} & \\frac{5}{6} & 0 & 0 & 0 & 0\\\\\n\\frac{1}{36} & \\frac{10}{36} & \\frac{25}{36} & 0 & 0 & 0\\\\\n\\frac{1}{216} & \\frac{15}{216} & \\frac{75}{216} & \\frac{125}{216} & 0 & 0\\\\\n\\frac{1}{1296} & \\frac{20}{1296} & \\frac{150}{1296} & \\frac{500}{1296} & \\frac{625}{1296} & 0\\\\\n\\frac{1}{7776} & \\frac{25}{7776} & \\frac{250}{7776} & \\frac{1250}{7776} & \\frac{3125}{7776} & \\frac{3125}{7776}\\\\\n\\end{array}\\right)$$\n\nIt is interesting that the distribution when $X_n=i$ is the components of the binomial expansion $(1/6+5/6)^i$\n\n(b)\n\n\n```R\np = matrix(c(1, 0, 0, 0, 0, 0,\n 1/6, 5/6, 0, 0, 0, 0, 1/36, 10/36, 25/36, 0, 0, 0, \n 1/216, 15/216, 75/216, 125/216, 0, 0, \n 1/1296, 20/1296, 150/1296, 500/1296, 625/1296, 0,\n 1/7776, 25/7776, 250/7776, 1250/7776 ,3125/7776, 3125/7776),nrow=6,byrow=T)\n(p%^%3)[6,1]\n```\n\n\n0.0132720560113747\n\n\n(c) I would expect that there is basically 100 percent posibility that there are 0 dices left without regarding how many dices you started with.\n\n\n```R\nround(p%^%100)\n```\n\n\n\n\n\t\n\t\n\t\n\t\n\t\n\t\n\n
    A 6 x 6 matrix in which every row is: 1 0 0 0 0 0\n
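\n\nAs a cross-check of the matrix used in (a) and (b), the same transition matrix can be built from the binomial pmf instead of typing each entry by hand. Note that the rows above are indexed by the number of dice still in play (that is, $5 - X_n$), which is why the all-sixes probability in (b) is read from entry $[6,1]$. This is a small sketch that reuses the matrix `p` defined above; only base R's `dbinom` is needed:\n\n\n```R\n# With i dice still in play, the number of sixes in one roll is Binomial(i, 1/6),\n# so the chance of going from i dice in play to j dice in play is dbinom(i - j, i, 1/6)\nk = 5\np_check = matrix(0, nrow = k + 1, ncol = k + 1)\nfor (i in 0:k) {\n  for (j in 0:i) {\n    p_check[i + 1, j + 1] = dbinom(i - j, size = i, prob = 1/6)\n  }\n}\nall.equal(p_check, p) # should return TRUE\n```\n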
\n\n\n\n## 2.12\ntwo urns contain k balls each. At the beginning of the experiment the urn on the left countains k red balls and the one on the right contains k blue balls. At each step, pick a ball at random from each urn and excange them. Let $X_n$ be the number of blue balls left in the urn on the left (Note that $X_0=0, X_1 = 1$). Argue the process is a Markov Chain. Find the transition matrix. This model is called the Bernoulli - Laplace process. \n\n### Answer\nThis process has to be a random process because the only thing that matters at step n is to know how many blue balls are in the urn on the left. This means that even if you know all the history only the last state is the one that matters to know the probability for the next step.\n\n$$P = \\left(\\begin{array}{cc}\n0 & 1 & 0 & ... &0 & 0 & 0 \\\\\n\\frac{1}{k} & 0 & \\frac{k-1}{k} & ... & 0 & 0 & 0\\\\\n0 & \\frac{2}{k} & 0 & ... & 0 & 0 & 0\\\\\n... & ... & ... & ... & ... & ... & ...\\\\\n0 & 0 & 0 & ... & \\frac{k-1}{k} & 0 & \\frac{1}{k}\\\\\n0 & 0 & 0 & ... & 0 & 1 & 0\\\\\n\\end{array}\\right)$$\n\n## 2.13\nsee the move-to-front process in Example 2.10. Here is anorher way to organize the bookshelf. when a book is returned it is put bacl on the library shelf one position forward from where it was originally. If the book at the fron of the shelf is returned it is put back at the front of the shelf. This, if the order of books is (a,b,c,d,e) and the book d is picjed, the new order is (a,b,d,c,e). This reorganization method us called the *transposition* or *move-ahead-1* scheme. Give the transition matrix for the transportation scheme for a shelf with three books.\n\n### Answer\nremember the states are $(abc, acb, bac,bca, cab, cba)$\n$$P = \\left(\\begin{array}{cc}\n0 & p_c & p_b &p_a & 0 & 0 \\\\\np_b & 0 & 0 & 0 & p_c & p_a\\\\\np_a & p_b & 0 & p_c & 0 & 0\\\\\n0 & 0 & p_a & 0 & p_b & p_c\\\\\np_c & p_a & 0 & 0 & 0 & p_b\\\\\n0 & 0 & p_c & p_b & p_a & 0\n\\end{array}\\right)$$\n\n## 2.14\nThere are k songs in Mary's music player. The player is set to *Shuffle* mode, which plays songs uniformly at random, sampling with replacement. Thus, repeats are possible. Let $X_n$ denote the number of *unique* songs that have been heard after the nth play. \n\n(a) show that $X_0,X_1, ...$ is a Markov chain and give the transition matrix\n\n(b) if Mary has four songs on her music player, find the probability that all songs are heard after six plays.\n\n### Answer\n(a) \nimagine that you have $P(X_n|X_{n-1},...,X_0)$ note that $X_{n+1}=max\\{\\xi_{n+1},X_n\\}$ where $\\xi_{n+1}$ is the result of the n+1 and it is independent to all $X_i$, $0\\leq i\\leq n$, then this means that $X_0,...,X_n$ is a Markov chain with transition matrix:\n$$P = \\left(\\begin{array}{cc}\n1/k & (k-1)/k & 0 & ... &0 & 0 & 0 \\\\\n0 & 2/k & (k-2)/k & ... & 0 & 0 & 0\\\\\n0 & 0 & 3/k & ... & 0 & 0 & 0\\\\\n... & ... & ... & ... & ... & ... & ...\\\\\n0 & 0 & 0 & ... & 0 & (k-1)/k & 1/k\\\\\n0 & 0 & 0 & ... & 0 & 0 & 1\\\\\n\\end{array}\\right)$$\n\n## 2.15\nAssume that $X_0,X_1,...$ is a two state Markov chain in $\\mathcal{S}=\\{0,1\\}$ with transition matrix:\n\n$$P=\\left(\\begin{array}{cc}\n1-p & p \\\\\nq & 1-q \n\\end{array}\\right)$$\n\nThe present state of the chain only depends on the previous state. We can create a bivariate process that looks back two time periods by the following contruction. Let $Z_n=(X_{n-1},X_n)$, for $n\\geq1$. 
The sequence $Z_1,Z_2,...$ is a Markov chain with state space $\\mathcal{S}X\\mathcal{S}={(0,0),(1,0),(0,1),(1,1)}$ Give the transition matrix of the new chain. \n\n### Answer\n$$P = \\left(\\begin{array}{cc}\n1-p & p & 0 & 0 \\\\\n0 & 0 & q & 1-q \\\\\n1-p & p & 0 & 0 \\\\\n0 & 0 & q & 1-q \n\\end{array}\\right)$$\n\n## 2.16\nAssume that $P$ is a stochastic matrix with equal rows. Show that $P^n=P$, for all $n\\geq1$\n\n### Answer\nlet's then see $P$ as:\n$$P = \\left(\\begin{array}{cc}\np_1 & p_2 & ... & p_k \\\\\np_1 & p_2 & ... & p_k \\\\\np_1 & p_2 & ... & p_k \\\\\n... & ... & ... & ... \\\\\np_1 & p_2 & ... & p_k \\\\\np_1 & p_2 & ... & p_k \\\\\n\\end{array}\\right)$$\n\nFirst let's calculate $P^2$, then\n\n$$P^2 = \\left(\\begin{array}{cc}\np_1*p_1+p_2*p_1+...+p_k*p_1 & p_1*p_2+p_2*p_2+...+p_k*p_2 & ... & p_1*p_k+p_2*p_k+...+p_k*p_k \\\\\np_1*p_1+p_2*p_1+...+p_k*p_1 & p_1*p_2+p_2*p_2+...+p_k*p_2 & ... & p_1*p_k+p_2*p_k+...+p_k*p_k \\\\\np_1*p_1+p_2*p_1+...+p_k*p_1 & p_1*p_2+p_2*p_2+...+p_k*p_2 & ... & p_1*p_k+p_2*p_k+...+p_k*p_k \\\\\n... & ... & ... & ... \\\\\np_1*p_1+p_2*p_1+...+p_k*p_1 & p_1*p_2+p_2*p_2+...+p_k*p_2 & ... & p_1*p_k+p_2*p_k+...+p_k*p_k \\\\\np_1*p_1+p_2*p_1+...+p_k*p_1 & p_1*p_2+p_2*p_2+...+p_k*p_2 & ... & p_1*p_k+p_2*p_k+...+p_k*p_k \\\\\n\\end{array}\\right)=\n\\left(\\begin{array}{cc}\np_1*(p_1+p_2+...+p_k) & ... & p_k(p_1+p_2+...+p_k) \\\\\np_1*(p_1+p_2+...+p_k) & ... & p_k(p_1+p_2+...+p_k) \\\\\np_1*(p_1+p_2+...+p_k) & ... & p_k(p_1+p_2+...+p_k) \\\\\n... & ... & ... \\\\\np_1*(p_1+p_2+...+p_k) & ... & p_k(p_1+p_2+...+p_k) \\\\\np_1*(p_1+p_2+...+p_k) & ... & p_k(p_1+p_2+...+p_k) \\\\\n\\end{array}\\right)=\n\\left(\\begin{array}{cc}\np_1*(1) & ... & p_k(1) \\\\\np_1*(1) & ... & p_k(1) \\\\\np_1*(1) & ... & p_k(1) \\\\\n... & ... & ... \\\\\np_1*(1) & ... & p_k(1) \\\\\np_1*(1) & ... & p_k(1) \\\\\n\\end{array}\\right) = P$$\n\nThen let's see what happens for $P^3$, then \n$$P^3=P^2*P=P*P=P^2=P$$\n\nAssume it is true for $P^n$ and see what happens at $P^{n+1}$\n$$P^{n+1}=P^n*P=P*P=P^2=P$$\nThen this means that $P^n=P$, for all $n\\geq1$\n\n## 2.17\nLet **$P$** be the stochastic matrix. Show that $\\lambda = 1$ is an eigenvalue of **$P$**. What is the associated eigenvector?\n\n## Answer \nLet's first think that it is clear that the rows all sum to one, and remember that an eigenvector asociated to an eigen value looks of the form:\n$$A\\overline{x}=\\lambda \\overline{x}$$\n\nit is easy to note that the eigenvector we are looking at is the vector $\\overline{x}=(1,1,...,1)^T$ this means that the eigenvalue asociated to this is also one\n\nwe can check this by doing the multiplication. Say we have\n$$A= \\left(\\begin{array}{cc}\np_{1,1} & p_{1,2} & ... & p_{1,k} \\\\\np_{2,1} & p_{2,2} & ... & p_{2,k} \\\\\n... & ... & ... & ... \\\\\np_{k,1} & p_{k,2} & ... & p_{k,k} \\\\\n\\end{array}\\right)$$\nwhere A is an stochastic matrix. Then:\n$$Ax= \\left(\\begin{array}{cc}\np_{1,1} & p_{1,2} & ... & p_{1,k} \\\\\np_{2,1} & p_{2,2} & ... & p_{2,k} \\\\\n... & ... & ... & ... \\\\\np_{k,1} & p_{k,2} & ... & p_{k,k} \\\\\n\\end{array}\\right)\\left(\\begin{array}{cc}\n1 \\\\\n1 \\\\\n... \\\\\n1\\\\\n\\end{array}\\right)=\n\\left(\\begin{array}{cc}\np_{1,1} + p_{1,2} + ... + p_{1,k} \\\\\np_{2,1} + p_{2,2} + ... + p_{2,k} \\\\\n... \\\\\np_{k,1} + p_{k,2} + ... + p_{k,k} \\\\\n\\end{array}\\right)=\n\\left(\\begin{array}{cc}\n1 \\\\\n1 \\\\\n... 
\\\\\n1\\\\\n\\end{array}\\right)$$\n\nThe way to prove this is to remember that if $A$ is an square matrix, then the eigenvalues of A are the same as its transpose's. Then, let's check if 1 is an eigenvalue by doing $det(A^T-\\lambda I)$ where $\\lambda$ is the eigenvalue (in this case 1) and $I$ is the identity matrix, when doing this we get that:\n$$A-I= \\left(\\begin{array}{cc}\np_{1,1} -1 & p_{1,2} & ... & p_{1,k} \\\\\np_{2,1} & p_{2,2}- 1 & ... & p_{2,k} \\\\\n... & ... & ... & ... \\\\\np_{k,1} & p_{k,2} & ... & p_{k,k}-1 \\\\\n\\end{array}\\right)$$\n\nAnd since we are dealing with the transpose, then if we sum all rows to the first row (i.e. $R_1 -> R_1+R_2+...+R_k$), then all values in the first row are zero, which means that the matrix is linearly dependand and $det(A^T- 1I)=0$, which means that 1 is an eigenvalue for $A$\n\n## 2.18\nA stochastic matrix is called *doubly* stochastic if its columns sum to 1. Let $X_0,X_1,...$ be a Markov chain on $\\{ 1,...,k \\}$ with doubly stochastic transition matrix and initial distribution that is uniform on $\\{ 1,...,k \\}$. Show that the distribution of $X_n$ is uniform on $\\{ 1,...,k \\}$, for all $n\\geq0$\n\n### Answer\nLet's remember that then the distribution at $X_n$ is $\\alpha P^n$, let's see what happens at $n=1$\n\n$n=1$\n$$X_1=\\alpha P = \\left(\\begin{array}{cc}\n1/k &1/k&...& 1/k\n\\end{array}\\right)\\left(\\begin{array}{cc}\np_{1,1} & p_{1,2} & ... & p_{1,k} \\\\\np_{2,1} & p_{2,2} & ... & p_{2,k} \\\\\n... \\\\\np_{k,1} & p_{k,2} & ... & p_{k,k} \\\\\n\\end{array}\\right)=\\left(\\begin{array}{cc}\np_{1,1}*1/k + p_{2,1}*1/k + ... + p_{k,1}*1/k \\\\\np_{1,2}*1/k + p_{2,2}*1/k + ... + p_{k,2}*1/k \\\\\n... \\\\\np_{1,k}*1/k + p_{2,k}*1/k + ... + p_{k,k}*1/k \\\\\n\\end{array}\\right)^T=\\left(\\begin{array}{cc}\n(p_{1,1} + p_{2,1} + ... + p_{k,1})*1/k\\\\\n(p_{1,2} + p_{2,2} + ... + p_{k,2})*1/k \\\\\n... \\\\\n(p_{1,k} + p_{2,k} + ... + p_{k,k})*1/k \\\\\n\\end{array}\\right)^T = \\left(\\begin{array}{cc}\n(1)*1/k\\\\\n(1)*1/k \\\\\n... \\\\\n(1)*1/k \\\\\n\\end{array}\\right)^T=\\alpha\n$$\nThis last step is due to the matrix being double stochastic\n\nNow let's $n=2$\n$$X_1=\\alpha P^2 = (\\alpha P)*P=\\alpha*p=\\alpha$$\n\nThen we assume it happens for $n$, let's see then for $n+1$\n$$X_{n+1}=\\alpha P^{n+1} = (\\alpha P^n)*P=\\alpha*p=\\alpha$$\n\n## 2.19\nLet **$P$** be the transition matrix of a Markov chain on $k$ states. Let **$I$** denote the $kXk$ identity matrix. consider the matrix\n\n$$\\boldsymbol{Q}=(1-p)\\boldsymbol{I}+p\\boldsymbol{P}\\text{ , for }0-1$, $(1+x)^n\\geq 1+nx$\n\n### Answer\n$#a$\n\nFor $n=1$\n\n$$\n\\begin{align}\n2(1)-1 &= 1 \\\\&=1^2 \n\\end{align}$$\n\nAssume it works for $n-1$\n\n$$\n\\begin{align}\n1+3+...+(2n-3) &= (n-1)^2 \\\\\n &=n^2 -2n+1\n\\end{align}$$\n\nThen for $n$\n$$\n\\begin{align}\n1+3+...+(2n-3)+(2n-1) &= n^2 -2n+1 +2n-1\\\\\n &=n^2 \n\\end{align}$$\n\nTherefore is proved.\n\n$#b$\n\nFor $n=1$\n\n$$\n\\begin{align}\n1(1+1)(2*1+1)/6 &= 6/6 \\\\&=1^2 \n\\end{align}$$\n\nAssume it works for $n-1$\n\n$$\n\\begin{align}\n1^2+2^2+...+(n-1)n^2&=(n-1)(n)(2n-1)/6\n\\end{align}$$\n\nThen for $n$\n$$\n\\begin{align}\n1^2+2^2+...+n^2 &=\\frac{(n-1)(n)(2n-1)}{6}+n^2\\\\\n &=\\frac{2n^-3n^2+n+6n^2}{6}\\\\\n &=\\frac{n(2n^2+3n+1)}{6}\\\\\n &=\\frac{n(n+1)(2n+1)}{6}\n\\end{align}$$\n\nTherefore is proved.\n\n$#c$\n\n\nFor this is important to notice that for $x>-1$ means that $(1+x)>0$ for all $x$. 
Now let's see what happens for $n=1$\n\n$$\n\\begin{align}\n(1+x)^1 &= (1+1*x)\\\\\n1+x &\\geq 1+x\n\\end{align}$$\n\nAssume it works for $n-1$\n\n$$\n\\begin{align}\n(1+x)^{n-1}&\\geq1+(n-1)x\n\\end{align}$$\n\nThen for $n$\n$$\n\\begin{align}\n(1+x)^{n} &= (1+x)^{n-1}*(1+x)\\\\\n &\\geq 1+(n-1)x*(1+x)\\\\\n\\\\ \\text{this last step because $1+x > 0$} \\\\ \\\\\n1+(n-1)x*(1+x) &=1+(n-1)x+x+(n-1)x^2\\\\\n &=1+n*x+ (n-1)x^2\\\\\n\\\\ \\text{but since $(n-1)x^2\\geq 0$ for all $x>1$}\\\\ \\\\\n1+n*x+ (n-1)x^2 &\\geq 1+n*x \\\\\n\\\\ \\text{=>} \\\\ \\\\\n(1+x)^{n} &\\geq 1+n*x+ (n-1)x^2 \\\\\n &\\geq 1+n*x \\\\\n\\\\ \\text{by transitivity relation} \\\\ \\\\\n(1+x)^{n} &\\geq 1+n*x \\\\\n\\end{align}$$\n\nTherefore is proved.\n\n\n## 2.23\nSimulate the first 20 letters (vowel/consonant) of the Pushkin poem Markov chain of Example 2.2.\n\n$$P=\\left(\\begin{array}{cc}\n0.175 & 0.825 \\\\\n0.526 & 0.474\n\\end{array}\\right)$$\n\n### Answer\n\n\n```R\nPushkinLetters<-function(P=matrix(c(0.175,0.825,0.526,0.474),nrow=2, byrow = T), \n init= c(8.638/20, 11.362/20),simulations=10){\n results = list()\n for(i in 1:simulations){\n results=c(results, list(OneSimulation(init=init, P=P)))\n }\n return(results)\n}\nOneSimulation<-function(steps = 20, init, P){\n sim = c()\n sim = samp(sim, init)\n for(i in 2:steps){\n sim=samp(sim, c(P[sim[i-1]+1,]))\n }\n return(parseSimulation(sim))\n}\nsamp<-function(simlist, dist){\n vowel = 0\n consonant = 1\n if (runif(1)\n\t
\n```\n\n    [Output: a list of 10 simulated sequences of 20 letters each, every entry either 'vowel' or 'Consonant'. The first simulated sequence is vowel, vowel, Consonant, Consonant, vowel, Consonant, vowel, Consonant, vowel, Consonant, vowel, Consonant, vowel, Consonant, Consonant, vowel, Consonant, Consonant, Consonant, vowel.]\n
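\n\nAs a quick sanity check on the simulation, the letter frequencies across many simulated sequences can be compared with the long-run distribution of the chain, obtained from a high power of $P$. This is a sketch; it assumes the `PushkinLetters` helper above and the `expm` package loaded at the start of the notebook:\n\n\n```R\nsims = PushkinLetters(simulations = 1000)\nall_letters = unlist(sims)\n# empirical frequency of each letter type across all simulated sequences\ntable(all_letters) / length(all_letters)\n\n# long-run distribution of the chain: any row of a high power of P\nP = matrix(c(0.175, 0.825, 0.526, 0.474), nrow = 2, byrow = T)\n(P %^% 100)[1, ]\n```\n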
\n\n\n\n## 2.24\nSimulate 50 steps of the random walk on the graph in Figure 2.1. Repeat the simulation 10 times. How many of your simulations end at vertex c? Compare with the exact long-term probability the walk visits c.\n
    \n\n
    \n\nWith transition matrix uniform across vertex as described in the frog example, then:\n$$P=\\left(\\begin{array}{cc}\n0 & 1 & 0 & 0 & 0 & 0\\\\\n1/4 & 0 & 1/4 & 1/4 & 1/4 & 0\\\\\n0 & 1/4 & 0 & 1/4 & 1/4 & 1/4\\\\\n0 & 1/4 & 1/4 & 0 & 1/4 & 1/4\\\\\n0 & 1/3 & 1/3 & 1/3 & 0 & 0\\\\\n0 & 0 & 1/2 & 1/2 & 0 & 0\n\\end{array}\\right)$$\n\n### Answer\n\n\n```R\nFrogJump<-function(P=matrix(c(0 , 1 , 0 , 0 , 0 , 0,\n 1/4 , 0 , 1/4 , 1/4 , 1/4 , 0,\n 0 , 1/4 , 0 , 1/4 , 1/4 , 1/4,\n 0 , 1/4 , 1/4 , 0 , 1/4 , 1/4,\n 0 , 1/3 , 1/3 , 1/3 , 0 , 0,\n 0 , 0 , 1/2 , 1/2 , 0 , 0),nrow=6,byrow=T), \n init= rep(1/6,6), simulations=10){\n results = list()\n for(i in 1:simulations){\n results=c(results,list(OneSimulation(init=init,P=P)))\n }\n return(results)\n}\nOneSimulation<-function(steps = 50, init, P){\n sim = c()\n sim = samp(sim, init)\n for(i in 2:steps){\n sim=c(sim,sample(0:(length(c(P[sim[i-1]+1,]))-1),size=1,prob=c(P[sim[i-1]+1,])))\n }\n return(sim)\n}\n \n \n```\n\n\n```R\nsapply(FrogJump(), tail, 1)\n```\n\n\n
      \n\t
 [1] 1 2 1 0 4 4 5 1 3 4\n
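\n\nThe exercise asks how many of the simulations end at vertex c. In `OneSimulation` the vertices are coded $0,\dots,5$, so under the assumption that vertex c is state 2 (the third row/column of $P$), the count can be read off directly; for the run shown above exactly one of the ten walks ends at c. A small sketch reusing `FrogJump`:\n\n\n```R\nfinal_states = sapply(FrogJump(), tail, 1)\n# vertex c is taken to be state 2 (the third row/column of P)\nsum(final_states == 2)\n```\n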
    \n\n\n\n\n```R\nP=matrix(c(0 , 1 , 0 , 0 , 0 , 0,\n 1/4 , 0 , 1/4 , 1/4 , 1/4 , 0,\n 0 , 1/4 , 0 , 1/4 , 1/4 , 1/4,\n 0 , 1/4 , 1/4 , 0 , 1/4 , 1/4,\n 0 , 1/3 , 1/3 , 1/3 , 0 , 0,\n 0 , 0 , 1/2 , 1/2 , 0 , 0), nrow=6, byrow=T)\n(P%^%30)[1,]\n```\n\n\n
      \n\t
 [1] 0.0555559496995727 0.222221212660565 0.22222272669261 0.22222272669261 0.166666666977107 0.111110717277535\n
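\n\nThe third entry of this limiting row is the exact long-term probability of vertex c, which is what the exercise asks us to compare the simulations against. For a random walk on a graph the stationary probability of a vertex is its degree divided by the sum of all degrees, so the limit can also be written down without matrix powers (a sketch, with the vertex order a-f assumed to match the rows of $P$):\n\n\n```R\n# number of neighbours of each vertex, read off from the rows of P\ndeg = c(1, 4, 4, 4, 3, 2)\ndeg / sum(deg)        # stationary distribution, matching the row printed above\n(deg / sum(deg))[3]   # exact long-term probability of vertex c: 4/18\n```\n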
    \n\n\n\n## 2.25 \nThe behavior of dolphins in the presence of tour boats in Patagonia, Argentina is studied in Dans et al. (2012). A Markov chain model is developed, with state space consisting of five primary dolphin activities (socializing, traveling, milling, feeding, and resting). The following transition matrix is\nobtained.\n\n$$P=\\left(\\begin{array}{cc}\n0.84 & 0.11 & 0.01 & 0.04 & 0 \\\\\n0.03 & 0.8 & 0.04 & 0.1 & 0.03 \\\\\n0.01 & 0.15 & 0.7 & 0.07 & 0.07 \\\\\n0.03 & 0.19 & 0.02 & 0.75 & 0.01 \\\\\n0.03 & 0.09 & 0.05 & 0 & 0.83 \n\\end{array}\\right)$$\n\n### Answer\n\n\n```R\n(matrix(c(\n0.84 , 0.11 , 0.01 , 0.04 , 0,\n0.03 , 0.8 , 0.04 , 0.1 , 0.03,\n0.01 , 0.15 , 0.7 , 0.07 , 0.07,\n0.03 , 0.19 , 0.02 , 0.75 , 0.01,\n0.03 , 0.09 , 0.05 , 0 , 0.83), nrow=5, byrow=T\n)%^%100)[1,]\n```\n\n\n
      \n\t
 [1] 0.147835822368841 0.414925374905324 0.0955597006624145 0.216380599307721 0.125298502755702\n
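\n\nThese five numbers are the long-run proportions of time the chain spends in each activity. Pairing them with the state names makes the answer easier to read (a sketch, assuming the states are ordered as listed in the problem statement):\n\n\n```R\nP = matrix(c(0.84, 0.11, 0.01, 0.04, 0,\n             0.03, 0.8,  0.04, 0.1,  0.03,\n             0.01, 0.15, 0.7,  0.07, 0.07,\n             0.03, 0.19, 0.02, 0.75, 0.01,\n             0.03, 0.09, 0.05, 0,    0.83), nrow = 5, byrow = T)\nactivities = c(\"socializing\", \"traveling\", \"milling\", \"feeding\", \"resting\")\nsetNames(round((P %^% 100)[1, ], 3), activities)\n```\n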
    \n\n\n\n## 2.26 \nIn computer security applications, a honeypot is a trap set on a network to detect and counteract computer hackers. Honeypot data are studied in Kimou et al. (2010) using Markov chains. The authors obtain honeypot data from a central database and observe attacks against four computer ports\u201480, 135, 139, and 445\u2014over 1 year. The ports are the states of a Markov chain along with a state corresponding to no port is attacked. Weekly data are monitored, and the port most often attacked during the week is recorded. The estimated Markov transition matrix for weekly attacks is\n$$P=\\left(\\begin{array}{cc}\n0 & 0 & 0 & 0 & 1 \\\\\n0 & 8/13 & 3/13 & 1/13 & 1/13 \\\\\n1/16 & 3/16 & 3/8 & 1/4 & 1/8 \\\\\n0 & 1/11 & 4/11 & 5/11 & 1/11 \\\\\n0 & 1/8 & 1/2 & 1/8 & 1/4 \n\\end{array}\\right)$$\nwith initial distribution $\\alpha = (0, 0, 0, 0, 1)$.\n\n(a) Which are the least and most likely attacked ports after 2 weeks? \n\n(b) Find the long-term distribution of attacked ports.\n\n### Answer\n\n\n```R\nP=matrix(c(\n0 , 0 , 0, 0 , 1,\n0 , 8/13 , 3/13 , 1/13, 1/13 ,\n1/16 , 3/16 , 3/8 , 1/4, 1/8 ,\n0 , 1/11 , 4/11 , 5/11 , 1/11,\n0 , 1/8 , 1/2 , 1/8 , 1/4 ), nrow=5,byrow=T)\na=c(0,0,0,0,1)\nparser=c(80,135, 139, 445, 'No attack')\n```\n\n\n```R\ncat(sprintf(\"the most likely port to be attacked after two weeks is: %s, \nthe least likely port to be attacked after two weeks is: %s \", parser[which.max((a%*%(P%^%2)))], \nparser[which.min((a%*%(P%^%2)))]))\n```\n\n the most likely port to be attacked after two weeks is: 139, \n the least likely port to be attacked after two weeks is: 135 \n\n\n```R\nprint(\"the long last distribution is:\")\nprint((P%^%100)[1,])\n```\n\n [1] \"the long last distribution is:\"\n [1] 0.02146667 0.26693333 0.34346667 0.22733333 0.14080000\n\n\n## 2.27 \nSee gamblersruin.R. Simulate gambler\u2019s ruin for a gambler with initial stake $\\$2$, playing a fair game.\n\n(a) Estimate the probability that the gambler is ruined before he wins $\\$5$.\n\n(b) Construct the transition matrix for the associated Markov chain. Estimate\nthe desired probability in (a) by taking high matrix powers. 
\n\n(c) Compare your results with the exact probability.\n\n### Answer\n\n\n```R\nrandomWalk<-function(init_money=50 , prob=1/2, min_state=0, max_state=100, outcomes = c(-1,1)){\n \"\n this is is the function that performs one random walk, if you want to perform the simulation you\n can run the gambling walk function beforehand.\n Take into account the taking variables, where the outcomes are the possible results of loosing or earning money\n the min and max state respresent the values where the simulation would stop.\n\n the outputs are sent as a list so the $ comand can be used.\n \"\n money = init_money\n walk = c(money)\n win = F\n while (T){\n money = money+sample(outcomes, 1, prob = c(1-prob,prob))\n walk= c(walk, money)\n if ((money <= min_state) || (money >= max_state)){\n win=(money >= max_state)\n break\n }\n }\n my_list <- list(\"steps\" = length(walk), \"won\" = win, \"walk\" = walk)\n return(my_list)\n}\nGamblingWalk<-function(init_money=50, prob=1/2, min_state=0, max_state=100, outcomes = c(-1,1), sim=100){\n \"\n this is is the main function that performs the simulation.\n Take into account the taking variables, where the outcomes are the possible results of loosing or earning money\n the min and max state respresent the values where the simulation would stop.\n \n the outputs are sent as a list so the $ comand can be used.\n \"\n results = c()\n for(i in 0:sim){\n res = randomWalk(init_money, prob, min_state, max_state, outcomes)\n results = c(results,res$won)\n }\n my_list <- list(\"results\" = sum(results)/length(results), \"walk\" = res$walk)\n return(my_list)\n \n}\n```\n\n\n```R\nwalkSim = GamblingWalk(init_money = 2, max_state = 5, sim=10000)\nprint(1-walkSim$results)\n```\n\n [1] 0.5929407\n\n\n(b) The transition Matrix is given by:\n\n$$P=\\left(\\begin{array}{cc}\n1 & 0 & 0 & 0 & 0 & 0 \\\\\n1/2 & 0 & 1/2 & 0 & 0 & 0 \\\\\n0 & 1/2 & 0 & 1/2 & 0 & 0 \\\\\n0 & 0 & 1/2 & 0 & 1/2 & 0 \\\\\n0 & 0 & 0 & 1/2 & 0 & 1/2 \\\\ \n0 & 0 & 0 & 0 & 0 & 1 \\\\\n\\end{array}\\right)$$\n\n\n```R\nP=matrix(c(1 , 0 , 0 , 0 , 0 , 0,\n 1/2 , 0 , 1/2 , 0 , 0 , 0,\n 0 , 1/2 , 0 , 1/2 , 0 , 0,\n 0 , 0 , 1/2 , 0 , 1/2 , 0,\n 0 , 0 , 0 , 1/2 , 0 , 1/2,\n 0 , 0 , 0 , 0 , 0 ,1),nrow=6,byrow=T)\na=c(0,0,1,0,0,0)\nround(a%*%(P%^%100), 2)\n```\n\n\n\n\n\t\n\n
    0.6  0  0  0  0  0.4\n
    \n\n\n\n(c) As seen from the results from (a) and (b) we can see that the simulation is a bit off, even after 10,000 simulations.\n\n\n```R\n\n```\n", "meta": {"hexsha": "d15a7ea7934df43e248a97bd7a4864d799521c8c", "size": 87035, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Chapter02_R.ipynb", "max_stars_repo_name": "larispardo/StochasticProcessR", "max_stars_repo_head_hexsha": "a2f8b6c41f2fe451629209317fc32f2c28e0e4ee", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Chapter02_R.ipynb", "max_issues_repo_name": "larispardo/StochasticProcessR", "max_issues_repo_head_hexsha": "a2f8b6c41f2fe451629209317fc32f2c28e0e4ee", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Chapter02_R.ipynb", "max_forks_repo_name": "larispardo/StochasticProcessR", "max_forks_repo_head_hexsha": "a2f8b6c41f2fe451629209317fc32f2c28e0e4ee", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.2204861111, "max_line_length": 584, "alphanum_fraction": 0.4410754294, "converted": true, "num_tokens": 16493, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.921921841290738, "lm_q2_score": 0.9124361521430956, "lm_q1q2_score": 0.8411948174439986}} {"text": "# Systems of Equations\nImagine you are at a casino, and you have a mixture of \u00a310 and \u00a325 chips. You know that you have a total of 16 chips, and you also know that the total value of chips you have is \u00a3250. Is this enough information to determine how many of each denomination of chip you have?\n\nWell, we can express each of the facts that we have as an equation. 
The first equation deals with the total number of chips - we know that this is 16, and that it is the number of \u00a310 chips (which we'll call ***x*** ) added to the number of \u00a325 chips (***y***).\n\nThe second equation deals with the total value of the chips (\u00a3250), and we know that this is made up of ***x*** chips worth \u00a310 and ***y*** chips worth \u00a325.\n\nHere are the equations\n\n\\begin{equation}x + y = 16 \\end{equation}\n\\begin{equation}10x + 25y = 250 \\end{equation}\n\nTaken together, these equations form a *system of equations* that will enable us to determine how many of each chip denomination we have.\n\n## Graphing Lines to Find the Intersection Point\nOne approach is to determine all possible values for x and y in each equation and plot them.\n\nA collection of 16 chips could be made up of 16 \u00a310 chips and no \u00a325 chips, no \u00a310 chips and 16 \u00a325 chips, or any combination between these.\n\nSimilarly, a total of \u00a3250 could be made up of 25 \u00a310 chips and no \u00a325 chips, no \u00a310 chips and 10 \u00a325 chips, or a combination in between.\n\nLet's plot each of these ranges of values as lines on a graph:\n\n\n```python\n%matplotlib inline\nfrom matplotlib import pyplot as plt\n\n# Get the extremes for number of chips\nchipsAll10s = [16, 0]\nchipsAll25s = [0, 16]\n\n# Get the extremes for values\nvalueAll10s = [25,0]\nvalueAll25s = [0,10]\n\n# Plot the lines\nplt.plot(chipsAll10s,chipsAll25s, color='blue')\nplt.plot(valueAll10s, valueAll25s, color=\"orange\")\nplt.xlabel('x (\u00a310 chips)')\nplt.ylabel('y (\u00a325 chips)')\nplt.grid()\n\nplt.show()\n```\n\nLooking at the graph, you can see that there is only a single combination of \u00a310 and \u00a325 chips that is on both the line for all possible combinations of 16 chips and the line for all possible combinations of \u00a3250. The point where the line intersects is (10, 6); or put another way, there are ten \u00a310 chips and six \u00a325 chips.\n\n### Solving a System of Equations with Elimination\nYou can also solve a system of equations mathematically. Let's take a look at our two equations:\n\n\\begin{equation}x + y = 16 \\end{equation}\n\\begin{equation}10x + 25y = 250 \\end{equation}\n\nWe can combine these equations to eliminate one of the variable terms and solve the resulting equation to find the value of one of the variables. Let's start by combining the equations and eliminating the x term.\n\nWe can combine the equations by adding them together, but first, we need to manipulate one of the equations so that adding them will eliminate the x term. The first equation includes the term ***x***, and the second includes the term ***10x***, so if we multiply the first equation by -10, the two x terms will cancel each other out. So here are the equations with the first one multiplied by -10:\n\n\\begin{equation}-10(x + y) = -10(16) \\end{equation}\n\\begin{equation}10x + 25y = 250 \\end{equation}\n\nAfter we apply the multiplication to all of the terms in the first equation, the system of equations look like this:\n\n\\begin{equation}-10x + -10y = -160 \\end{equation}\n\\begin{equation}10x + 25y = 250 \\end{equation}\n\nNow we can combine the equations by adding them. 
The ***-10x*** and ***10x*** cancel one another, leaving us with a single equation like this:\n\n\\begin{equation}15y = 90 \\end{equation}\n\nWe can isolate ***y*** by dividing both sides by 15:\n\n\\begin{equation}y = \\frac{90}{15} \\end{equation}\n\nSo now we have a value for ***y***:\n\n\\begin{equation}y = 6 \\end{equation}\n\nSo how does that help us? Well, now we have a value for ***y*** that satisfies both equations. We can simply use it in either of the equations to determine the value of ***x***. Let's use the first one:\n\n\\begin{equation}x + 6 = 16 \\end{equation}\n\nWhen we work through this equation, we get a value for ***x***:\n\n\\begin{equation}x = 10 \\end{equation}\n\nSo now we've calculated values for ***x*** and ***y***, and we find, just as we did with the graphical intersection method, that there are ten \u00a310 chips and six \u00a325 chips.\n\nYou can run the following Python code to verify that the equations are both true with an ***x*** value of 10 and a ***y*** value of 6.\n\n\n```python\nx = 10\ny = 6\nprint ((x + y == 16) & ((10*x) + (25*y) == 250))\n```\n\n True\n\n", "meta": {"hexsha": "964b5bce027d0e3f21a1161453352a5454a478ad", "size": 23218, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "MathsToML/Module01-Equations, Graphs, and Functions/01-03-Systems of Equations.ipynb", "max_stars_repo_name": "hpaucar/data-mining-repo", "max_stars_repo_head_hexsha": "d0e48520bc6c01d7cb72e882154cde08020e1d33", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "MathsToML/Module01-Equations, Graphs, and Functions/01-03-Systems of Equations.ipynb", "max_issues_repo_name": "hpaucar/data-mining-repo", "max_issues_repo_head_hexsha": "d0e48520bc6c01d7cb72e882154cde08020e1d33", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "MathsToML/Module01-Equations, Graphs, and Functions/01-03-Systems of Equations.ipynb", "max_forks_repo_name": "hpaucar/data-mining-repo", "max_forks_repo_head_hexsha": "d0e48520bc6c01d7cb72e882154cde08020e1d33", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 331.6857142857, "max_line_length": 17318, "alphanum_fraction": 0.8899129985, "converted": true, "num_tokens": 1222, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.89330940889474, "lm_q2_score": 0.9416541618444, "lm_q1q2_score": 0.8411885227004927}} {"text": "# Introduction to LP Problems - standard form\n\n## Diet problem\nyou have a list of food, their macronutrients and the price. You want to get the most out of your food intake for the lowest cost possible\n\n(Per portion)\n\n|Product|Energy (kcal)|Proteins (g)|Calcium (mg)|Price (cent)|\n|------|-----|-----|-----|-----|\n|Oats|110|4|2|2|25|\n|Chicken|205|32|12|130|\n|Egg|160|13|54|85|\n|Milk|160|8|285|70 |\n|Cake|420 |22 | 95|\n|Bean|260|14|80|98|\n\nSuppose you require 2000kcal, 55g proteins and 800mg of calcium. 
Write the LP and the write it in its standard form:\n\n## More problems\n### 1\n\\begin{equation}\n\\min 3x_1+8x_2+4x_3\\\\\n\\textrm{subject to} \\begin{cases}\nx_1+x_2 \\ge 8\\\\\n2x_1-3x_2 \\le 0\\\\\nx_2 \\ge 9 \\\\\nx_1,x_2 \\ge 0\n\\end{cases}\n\\end{equation}\n### 2\n\\begin{equation}\n\\max 3x_1+2x_2-x_3+x_4\\\\\n\\textrm{subject to} \\begin{cases}\nx_1 + 2x_2 + x_3 - x_4 \\le 5\\\\\n-2x_1 - 4 x_2 + x_3 + x_4 \\le -1\\\\\nx_1\\ge 0,x_2 \\le 0\n\\end{cases}\n\\end{equation}\n\n### 3\n\\begin{equation}\n\\min x_1 - x_2 + x_3\\\\\n\\textrm{subject to} \\begin{cases}\nx_1 + 2x_2 - x_3 \\le 3\\\\\n-x_1 + x_2 + x_3 \\ge 2\\\\\nx_1 - x_2 = 10\\\\\nx_1 \\ge 0, x_2 \\le 0\n\\end{cases}\n\\end{equation}\n\n\n### 4\n\\begin{equation}\n\\max x_1 - x_2 + x_3\\\\\n\\textrm{subject to} \\begin{cases}\nx_1 + 2x_2 - x_3 \\le 3\\\\\n-x_1 + x_2 + x_3 \\ge 2\\\\\nx_1 - x_2 = 10\\\\\nx_1 \\ge 0, x_2 \\le 0\n\\end{cases}\n\\end{equation}\n\n\n```julia\n\n```\n", "meta": {"hexsha": "5461417eeddfd0453d146899a3a738508973542c", "size": 2444, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Exercises/PS7bis.ipynb", "max_stars_repo_name": "JuliaTagBot/ES313.jl", "max_stars_repo_head_hexsha": "3601743ca05bdb2562a26efd8b809c1a4f78c7b1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Exercises/PS7bis.ipynb", "max_issues_repo_name": "JuliaTagBot/ES313.jl", "max_issues_repo_head_hexsha": "3601743ca05bdb2562a26efd8b809c1a4f78c7b1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Exercises/PS7bis.ipynb", "max_forks_repo_name": "JuliaTagBot/ES313.jl", "max_forks_repo_head_hexsha": "3601743ca05bdb2562a26efd8b809c1a4f78c7b1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 26.0, "max_line_length": 147, "alphanum_fraction": 0.4742225859, "converted": true, "num_tokens": 635, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9416541593883189, "lm_q2_score": 0.8933094067644466, "lm_q1q2_score": 0.8411885185004527}} {"text": "# Eigenvalues and Eigenvectors; or \n## How Matrices Really Work\n\nThis notebook will first focus on a geometric understanding of eigenvalues and eigenvectors, then introduce some more advanced applications relevant to mathematical modelling. 
It is intended as a practical introduction, not a rigorous theoretical exposition.\n\nNumerical examples will make use of NumPy and its older sybling, SciPy.\n\n### Imports\n\n\n```python\nimport numpy as np\nfrom scipy import linalg\nimport pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n```\n\n### A motivating example\n\nThe concept of taking the square of a square matrix is straightforward (to a person familiar with matrix multiplication).\n\nSay\n\\begin{align*}\nA = \\left(\\begin{matrix}8 & 5\\\\6 & 1\\end{matrix}\\right),\n\\end{align*}\nso\n\\begin{align*}\nA^2 = A\\times A=\\left(\\begin{matrix}94 & 45\\\\54 & 31\\end{matrix}\\right).\n\\end{align*}\n\nBut what if you wanted to raise 2 to the power of a matrix?\n\\begin{align*}\n2^A = \\text{?}.\n\\end{align*}\n\nWhat does that even mean?\n\n### What is an eigenvalue?\n\nFor a matrix $A$, any non-zero vector $\\mathbf{v}$ and scalar $w$ such that\n\\begin{align*}\nA\\mathbf{v} = w\\mathbf{v},\n\\end{align*}\n$w$ is an eigenvalue of $A$ and $\\mathbf{v}$ is an eigenvector of $A$. $w$ and $\\mathbf{v}$ are an eigenvalue-eigenvector pair. (Practically everyone uses the symbol $\\lambda$ to represent eigenvalues, but in Python `lambda` is a reserved keyword, so we will use $w$ throughout.)\n\n### Who cares?\n\nFirst let's see what matrices do, geometrically speaking.\n\nStart by plotting a grid.\n\n\n```python\nx, y = np.meshgrid(np.arange(-2, 2.1, 0.5), np.arange(-2, 2, 0.5))\nxy_vectors = np.array([x.flatten(), y.flatten()])\nxy = pd.DataFrame({'x': xy_vectors[0], 'y': xy_vectors[1]})\nxy['quadrant'] = -np.sign(xy['x'] * xy['y'])\nfig, ax = plt.subplots(figsize=(5, 5))\nax = sns.scatterplot(data=xy, x='x', y='y', hue='quadrant', \n legend=False, palette=plt.get_cmap('viridis'), ax=ax)\nax.set_aspect('equal')\n```\n\nNow generate a random matrix. Ok, this is a random matrix with specific properties, but mostly so the pictures look nice.\n\n\n```python\nwhile True:\n V = np.random.randint(1, 10, 4).reshape((2, 2))\n if not np.isclose(np.linalg.det(V), 0):\n break\nw = (1.5, 2)\nA = np.linalg.inv(V) @ np.diag(w) @ V\nprint('Transformation matrix A:')\nprint(A)\n```\n\n Transformation matrix A:\n [[ 2.07446809 0.09574468]\n [-0.44680851 1.42553191]]\n\n\n### Matrices move points around\nPlot what happens to each point in the grid $\\mathbf{x}$ when it is multiplied by $A$. 
So plot $A\\mathbf{x}$ for each $\\mathbf{x}$ in the grid.\n\n\n```python\ntransformed_vectors = A @ xy_vectors\ntransformed_xy = pd.DataFrame(\n {'x': transformed_vectors[0], 'y': transformed_vectors[1]})\ntransformed_xy['quadrant'] = xy['quadrant']\n\nfig, ax = plt.subplots(figsize=(5, 5))\nax = sns.scatterplot(data=xy, x='x', y='y', hue='quadrant', alpha=0.3,\n legend=False, palette=plt.get_cmap('viridis'), ax=ax)\nax = sns.scatterplot(data=transformed_xy, x='x', y='y', hue='quadrant',\n legend=False, palette=plt.get_cmap('viridis'), ax=ax)\nax.set_aspect('equal')\n```\n\nMultiplying by the _inverse_ matrix moves the points back to their original locations.\n\n\n```python\nuntransformed_vectors = np.linalg.inv(A) @ transformed_vectors\nuntransformed_xy = pd.DataFrame(\n {'x': untransformed_vectors[0], 'y': untransformed_vectors[1]})\nuntransformed_xy['quadrant'] = xy['quadrant']\n\nfig, ax = plt.subplots(figsize=(5, 5))\nax = sns.scatterplot(data=transformed_xy, x='x', y='y', hue='quadrant', alpha=0.3,\n legend=False, palette=plt.get_cmap('viridis'), ax=ax)\nax = sns.scatterplot(data=untransformed_xy, x='x', y='y', hue='quadrant',\n legend=False, palette=plt.get_cmap('viridis'), ax=ax)\nax.set_aspect('equal')\n```\n\nLet's look more closely at what happens in the original transformation $A\\mathbf{x}$ by plotting a vector for each transformation.\n\n\n```python\nfig, ax = plt.subplots(figsize=(5, 5))\ndelta_x = transformed_xy['x'] - xy['x']\ndelta_y = transformed_xy['y'] - xy['y']\nq = ax.quiver(xy['x'], xy['y'], delta_x, delta_y, \n angles='xy', scale_units='xy', scale=1, \n color='purple')\nax = sns.scatterplot(data=transformed_xy, x='x', y='y', hue='quadrant',\n legend=False, palette=plt.get_cmap('viridis'), ax=ax)\nax.set_aspect('equal')\n```\n\n### Enter the eigenvectors\nPlot the eigenvalues multiplied by the eigenvectors ($w\\mathbf{v}$) for each eigenvalue-eigenvector pair. Note that in the direction of the eigenvectors, the transformation just moves the points further from (or closer to) the origin, without changing their bearings. Note also that the magnitude of the scaling is the eigenvalue.\n\nWe have obtained the eigenvalues and eigenvectors using `np.linalg.eig`. Eigenvectors are only determined up to scalar multiples, so NumPy chooses to scale them to be 1 unit long.\n\n\n```python\nw, V = np.linalg.eig(A)\nassert np.allclose(w.imag, 0)\nw = w.real\nV = V.real\nV = w * V\nprint('w1:', w[0])\nprint('V1:', V[:,0])\nprint('w2:', w[1])\nprint('V2:', V[:,1])\nfig, ax = plt.subplots(figsize=(5, 5))\nq = ax.quiver(xy['x'], xy['y'], delta_x, delta_y, \n angles='xy', scale_units='xy', scale=1, \n color='purple', alpha=0.5)\nax = sns.scatterplot(data=transformed_xy, x='x', y='y', hue='quadrant', alpha=0,\n legend=False, palette=plt.get_cmap('viridis'), ax=ax)\nq = ax.quiver((0,0), (0,0), V[0], V[1], angles='xy', scale_units='xy', scale=1, color='blue')\nax.set_aspect('equal')\n```\n\n### Again, who cares?\n\nWell, any vector $\\mathbf{x}$ can be restated as the sum of eigenvectors of $A$. That is,\n\\begin{align*}\n\\mathbf{x} = a\\mathbf{v}_1 + b\\mathbf{v}_2\n\\end{align*}\nSo\n\\begin{align*}\nA\\mathbf{x} = A(a\\mathbf{v}_1 + b\\mathbf{v}_2) = aA\\mathbf{v}_1 + bA\\mathbf{v}_2 = aw_1\\mathbf{v}_1 + bw_2\\mathbf{v}_2.\n\\end{align*}\n\nSo multiplying by a matrix is like expressing a vector as eigenvector components, then scaling those components. 
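\n\nBefore moving on, here is a minimal numerical sanity check of that claim. It is only a sketch: it reuses the matrix `A` defined above, recomputes its eigenvalues and eigenvectors with `np.linalg.eig`, and the `_chk` names are simply there to avoid clobbering the scaled `V` used for plotting.\n\n\n```python\n# Express an arbitrary vector x as a combination of the eigenvectors of A,\n# then check that A @ x equals the same combination with each component\n# scaled by its eigenvalue.\nw_chk, V_chk = np.linalg.eig(A)          # eigenvalues and unit eigenvectors of A\nx_chk = np.random.rand(2)                # an arbitrary test vector\ncoeffs = np.linalg.solve(V_chk, x_chk)   # (a, b) such that x = a*v1 + b*v2\nprint(np.allclose(A @ x_chk, V_chk @ (w_chk * coeffs)))  # expect True\n```\n\n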
Regardless of the initial $\\mathbf{x}$ vector, the components are always scaled by the eigenvalues.\n\nKnowing a matrix's eigenvalues is therefore fundamental to understanding what the matrix \"does\".\n\nWe can figure out $a$ and $b$ by forming the matrix $V=\\left(\\begin{matrix}\\mathbf{v}_1&\\mathbf{v}_2\\end{matrix}\\right)$ and noting that\n\\begin{align*}\n\\mathbf{x} = a\\mathbf{v}_1 + b\\mathbf{v}_2 = V\\left(\\begin{matrix}a\\\\b\\end{matrix}\\right),\n\\end{align*}\nso \n\\begin{align*}\n\\left(\\begin{matrix}a\\\\b\\end{matrix}\\right) = V^{-1}\\mathbf{x}.\n\\end{align*}\n\n### Eigendecomposition\n\nTo put it another way, in the 2x2 case,\n\\begin{align*}\nA\\mathbf{v}_1 = w_1\\mathbf{v}_1 \\quad\\text{and}\\quad A\\mathbf{v}_2 = w_2\\mathbf{v}_2,\n\\end{align*}\nor\n\\begin{align*}\nAV = \\left(\\begin{matrix}A\\mathbf{v}_1&A\\mathbf{v}_2\\end{matrix}\\right) = \\left(\\begin{matrix}w_1\\mathbf{v}_1&w_2\\mathbf{v}_2\\end{matrix}\\right) = \nV\\left(\\begin{matrix}w_1&0\\\\0&w_2\\end{matrix}\\right).\n\\end{align*}\n\nMultiplying from the right by $V^{-1}$ yields\n\\begin{align*}\nA = V\\left(\\begin{matrix}w_1&0\\\\0&w_2\\end{matrix}\\right)V^{-1},\n\\end{align*}\nwhich formalises what we saw above. Multiplying a vector from the left by $A$ is like first multiplying by $V^{-1}$ to transform it into a coordinate space where the basis is the eigenvectors of $A$, then scaling the resulting vector by the eigenvalues, and finally transforming the result back to the original basis by multiplying by $V$.\n\nIf it is possible to decompose a matrix $A = VDV^{-1}$, where $D$ is a diagonal matrix of eigenvalues, we say that $A$ is _diagonalisable_. This result generalises to $m\\times m$ matrices.\n\n### Some useful facts about eigenvalues and eigenvectors\n\n- Eigenvalues can be complex, and the complex eigenvalues of a real matrix occur in conjugate pairs.\n- Eigenvectors that correspond to distinct eigenvalues are linearly independent.\n- If two eigenvalues are equal, there can be two eigenvectors for that eigenvalue, and any linear combination of those eigenvectors is also an eigenvector.\n\n### So what's this about $2^A$?\n\nNow that we can decompose diagonalisable matrices, we can write\n\\begin{align*}\nA^2 = VDV^{-1}VDV^{-1} = VDIDV^{-1} = VD^2V^{-1},\n\\end{align*}\nso the square of a matrix has the same eigenvectors as the original matrix, with eigenvalues that are the squares of the original eigenvalues (remember that $D$ is diagonal). Hopefully it's not too much of a stretch to see that we also get\n\\begin{align*}\nA^n = VD^nV^{-1}\n\\end{align*}\nfor any whole number $n$.\n\nSo if $x$ is a scalar, the Taylor expansion of $2^x$ around $x=0$ is\n\\begin{align*}\n2^x = \\sum_{n=0}^\\infty \\frac{(\\ln 2)^nx^n}{n!}.\n\\end{align*}\n\nFrom what we now know about eigenvalues and eigenvectors, we can apply that straight away to our $m\\times m$ matrix $A$,\n\\begin{align*}\n2^A &= \\sum_{n=0}^\\infty \\frac{(\\ln 2)^nA^n}{n!} \\\\\n&= \\sum_{n=0}^\\infty \\frac{(\\ln 2)^nVD^nV^{-1}}{n!} \\\\\n&= V\\sum_{n=0}^\\infty \\frac{(\\ln 2)^nD^n}{n!}V^{-1} \\\\\n&= V\\left(\\begin{matrix}2^{w_1}&0&\\ldots&0\\\\\n0&2^{w_2}&\\ldots&0\\\\\n\\vdots&\\vdots&\\ddots&\\vdots\\\\\n0&0&\\ldots&2^{w_m}\\end{matrix}\\right)V^{-1}\\\\\n&= V2^DV^{-1}.\n\\end{align*}\n\nThe same goes for any function with a Taylor expansion. For instance, for diagonalisable $A$, $\\exp(A)=V\\exp(D)V^{-1}$ and $\\ln(A)=V\\ln(D)V^{-1}$. 
$\\exp(A)$ has a dizzying array of applications.\n\nIt turns out that if you want to calculate these things numerically, there are better ways than taking the eigendecomposition, but SciPy has that covered.\n\n\n```python\nprint(linalg.expm(A)) # matrix exponential\n```\n\n [[ 7.82206821 0.55672986]\n [-2.59807266 4.04867696]]\n\n\n\n```python\nprint(linalg.logm(A)) # matrix logarithm\n```\n\n [[ 0.73599345 0.05508806]\n [-0.2570776 0.36261884]]\n\n\n\n```python\nprint(linalg.funm(A, lambda x: 2**x)) # 2^A\n```\n\n [[ 4.17448958 0.22434374]\n [-1.04693746 2.65393755]]\n\n\n... or alternatively ...\n\n\n```python\nprint(linalg.expm(np.log(2)*A))\n```\n\n [[ 4.17448958 0.22434374]\n [-1.04693746 2.65393755]]\n\n", "meta": {"hexsha": "8a30e30d0d72a2754aac247ead559366eb2d7f8e", "size": 185683, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/eigen.ipynb", "max_stars_repo_name": "BenKaehler/mm-labs", "max_stars_repo_head_hexsha": "5409ba7f6a4d4edb802c96e4bfc47e477fc84329", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notebooks/eigen.ipynb", "max_issues_repo_name": "BenKaehler/mm-labs", "max_issues_repo_head_hexsha": "5409ba7f6a4d4edb802c96e4bfc47e477fc84329", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/eigen.ipynb", "max_forks_repo_name": "BenKaehler/mm-labs", "max_forks_repo_head_hexsha": "5409ba7f6a4d4edb802c96e4bfc47e477fc84329", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 15, "max_forks_repo_forks_event_min_datetime": "2020-07-27T06:33:02.000Z", "max_forks_repo_forks_event_max_datetime": "2020-07-27T07:30:41.000Z", "avg_line_length": 360.5495145631, "max_line_length": 54940, "alphanum_fraction": 0.9290888234, "converted": true, "num_tokens": 3178, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9284088005554475, "lm_q2_score": 0.9059898178450964, "lm_q1q2_score": 0.8411289201010143}} {"text": "### 1. Multiples of 3 and 5\nIf we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23.\nFind the sum of all the multiples of 3 or 5 below 1000.\n\n\n```python\nget_sum = lambda x,y: x+y\nget_product = lambda x,y: x*y\nget_substract = lambda x,y: x-y\nget_division = lambda x,y: x/y\nis_bigger_than = lambda x,y: x if x>y else y\nis_smaller_than = lambda x,y: x if x=2:\n npf = next_prime_factor(number)\n prime_factors.append(npf)\n number /= npf\n\n for i in primes:\n prime_count = len(filter(lambda x: x==i, prime_factors))\n prime_dict[i]= prime_count if prime_count > prime_dict[i] else prime_dict[i] \n\nfinal_product = 1\nfor i in prime_dict:\n final_product *= i**prime_dict[i]\nprint final_product\n```\n\n 232792560\n\n\n### 6. Sum square difference\n\nThe sum of the squares of the first ten natural numbers is,\n12 + 22 + ... + 102 = 385\nThe square of the sum of the first ten natural numbers is,\n(1 + 2 + ... 
+ 10)2 = 552 = 3025\nHence the difference between the sum of the squares of the first ten natural numbers and the square of the sum is 3025 \u2212 385 = 2640.\nFind the difference between the sum of the squares of the first one hundred natural numbers and the square of the sum.\n\n\n```python\nprint reduce(get_sum, range(1,101))**2 - reduce(get_sum, map(lambda x: x**2, range(1,101))) \n```\n\n 25164150\n\n\n### 7. 10001st prime\nBy listing the first six prime numbers: 2, 3, 5, 7, 11, and 13, we can see that the 6th prime is 13.\n\nWhat is the 10 001st prime number?\n\n\n```python\nprimes_sieve(1000000)[10000]\n```\n\n\n\n\n 104743\n\n\n\n### 8.Next Largest product in a series\nThe four adjacent digits in the 1000-digit number that have the greatest product are 9 \u00d7 9 \u00d7 8 \u00d7 9 = 5832.\n\n73167176531330624919225119674426574742355349194934\n96983520312774506326239578318016984801869478851843\n85861560789112949495459501737958331952853208805511\n12540698747158523863050715693290963295227443043557\n66896648950445244523161731856403098711121722383113\n62229893423380308135336276614282806444486645238749\n30358907296290491560440772390713810515859307960866\n70172427121883998797908792274921901699720888093776\n65727333001053367881220235421809751254540594752243\n52584907711670556013604839586446706324415722155397\n53697817977846174064955149290862569321978468622482\n83972241375657056057490261407972968652414535100474\n82166370484403199890008895243450658541227588666881\n16427171479924442928230863465674813919123162824586\n17866458359124566529476545682848912883142607690042\n24219022671055626321111109370544217506941658960408\n07198403850962455444362981230987879927244284909188\n84580156166097919133875499200524063689912560717606\n05886116467109405077541002256983155200055935729725\n71636269561882670428252483600823257530420752963450\n\nFind the thirteen adjacent digits in the 1000-digit number that have the greatest product. What is the value of this product?\n\n\n```python\nnumber =\"7316717653133062491922511967442657474235534919493496983520312774506326239578318016984801869478851843858615607891129494954595017379583319528532088055111254069874715852386305071569329096329522744304355766896648950445244523161731856403098711121722383113622298934233803081353362766142828064444866452387493035890729629049156044077239071381051585930796086670172427121883998797908792274921901699720888093776657273330010533678812202354218097512545405947522435258490771167055601360483958644670632441572215539753697817977846174064955149290862569321978468622482839722413756570560574902614079729686524145351004748216637048440319989000889524345065854122758866688116427171479924442928230863465674813919123162824586178664583591245665294765456828489128831426076900422421902267105562632111110937054421750694165896040807198403850962455444362981230987879927244284909188845801561660979191338754992005240636899125607176060588611646710940507754100225698315520005593572972571636269561882670428252483600823257530420752963450\"\nproducts = []\nfor i in range(len(number)):\n a_set = number[i:i+14]\n products.append(reduce(get_product, map(lambda char: int(char), a_set)))\nprint reduce(is_bigger_than, products)\n\n```\n\n 70573265280\n\n\n### 9. 
Special Pythagorean triplet\nA Pythagorean triplet is a set of three natural numbers, a < b < c, for which,\n\na2 + b2 = c2\nFor example, 32 + 42 = 9 + 16 = 25 = 52.\n\nThere exists exactly one Pythagorean triplet for which a + b + c = 1000.\nFind the product abc.\n\n\n```python\nsquares_list = filter(lambda x: x** 0.5 == int(x**0.5), range(1,1000000))\n\n```\n\n\n```python\nsquares_list[:10]\nsquares_set = set(squares_list)\ntriplets = []\nfor index,square in enumerate(squares_list):\n for another_square in squares_list[index+1:]:\n if another_square + square in squares_set:\n triplets.append((square, another_square, square+another_square))\n\nfor number in filter(lambda x: x[0]**(.5) + x[1]**(.5) + x[2]**(.5) == 1000, triplets)[0] :\n print number**(.5)\n\n```\n\n 200.0\n 375.0\n 425.0\n\n\n### 10. Summation of primes\nThe sum of the primes below 10 is 2 + 3 + 5 + 7 = 17.\n\nFind the sum of all the primes below two million.\n\n\n```python\nreduce(get_sum,filter(lambda x: x<2000000, primes_sieve(10000000)))\n```\n\n\n\n\n 142913828922\n\n\n\n### 11. Largest product in a grid\nProblem 11\nIn the 20\u00d720 grid below, four numbers along a diagonal line have been marked in red.\n\n08 02 22 97 38 15 00 40 00 75 04 05 07 78 52 12 50 77 91 08 \\n\n49 49 99 40 17 81 18 57 60 87 17 40 98 43 69 48 04 56 62 00 \\n\n81 49 31 73 55 79 14 29 93 71 40 67 53 88 30 03 49 13 36 65 \\n\n52 70 95 23 04 60 11 42 69 24 68 56 01 32 56 71 37 02 36 91 \\n\n22 31 16 71 51 67 63 89 41 92 36 54 22 40 40 28 66 33 13 80 \\n\n24 47 32 60 99 03 45 02 44 75 33 53 78 36 84 20 35 17 12 50 \\n\n32 98 81 28 64 23 67 10 26 38 40 67 59 54 70 66 18 38 64 70 \\n\n67 26 20 68 02 62 12 20 95 63 94 39 63 08 40 91 66 49 94 21 \\n\n24 55 58 05 66 73 99 26 97 17 78 78 96 83 14 88 34 89 63 72 \\n\n21 36 23 09 75 00 76 44 20 45 35 14 00 61 33 97 34 31 33 95 \\n\n78 17 53 28 22 75 31 67 15 94 03 80 04 62 16 14 09 53 56 92 \\n\n16 39 05 42 96 35 31 47 55 58 88 24 00 17 54 24 36 29 85 57 \\n\n86 56 00 48 35 71 89 07 05 44 44 37 44 60 21 58 51 54 17 58 \\n\n19 80 81 68 05 94 47 69 28 73 92 13 86 52 17 77 04 89 55 40 \\n\n04 52 08 83 97 35 99 16 07 97 57 32 16 26 26 79 33 27 98 66 \\n\n88 36 68 87 57 62 20 72 03 46 33 67 46 55 12 32 63 93 53 69 \\n\n04 42 16 73 38 25 39 11 24 94 72 18 08 46 29 32 40 62 76 36 \\n\n20 69 36 41 72 30 23 88 34 62 99 69 82 67 59 85 74 04 36 16 \\n\n20 73 35 29 78 31 90 01 74 31 49 71 48 86 81 16 23 57 05 54 \\n\n01 70 54 71 83 51 54 69 16 92 33 48 61 43 52 01 89 19 67 48 \\n\n\nThe product of these numbers is 26 \u00d7 63 \u00d7 78 \u00d7 14 = 1788696.\n\nWhat is the greatest product of four adjacent numbers in the same direction (up, down, left, right, or diagonally) in the 20\u00d720 grid?\n\n\n```python\ngrid = 
[[8,2,22,97,38,15,0,40,0,75,4,5,7,78,52,12,50,77,91,8],\n[49,49,99,40,17,81,18,57,60,87,17,40,98,43,69,48,4,56,62,0],\n[81,49,31,73,55,79,14,29,93,71,40,67,53,88,30,3,49,13,36,65],\n[52,70,95,23,4,60,11,42,69,24,68,56,1,32,56,71,37,2,36,91],\n[22,31,16,71,51,67,63,89,41,92,36,54,22,40,40,28,66,33,13,80],\n[24,47,32,60,99,3,45,2,44,75,33,53,78,36,84,20,35,17,12,50],\n[32,98,81,28,64,23,67,10,26,38,40,67,59,54,70,66,18,38,64,70],\n[67,26,20,68,2,62,12,20,95,63,94,39,63,8,40,91,66,49,94,21],\n[24,55,58,5,66,73,99,26,97,17,78,78,96,83,14,88,34,89,63,72],\n[21,36,23,9,75,0,76,44,20,45,35,14,0,61,33,97,34,31,33,95],\n[78,17,53,28,22,75,31,67,15,94,3,80,4,62,16,14,9,53,56,92],\n[16,39,5,42,96,35,31,47,55,58,88,24,0,17,54,24,36,29,85,57],\n[86,56,0,48,35,71,89,7,5,44,44,37,44,60,21,58,51,54,17,58],\n[19,80,81,68,5,94,47,69,28,73,92,13,86,52,17,77,4,89,55,40],\n[4,52,8,83,97,35,99,16,7,97,57,32,16,26,26,79,33,27,98,66],\n[88,36,68,87,57,62,20,72,3,46,33,67,46,55,12,32,63,93,53,69],\n[4,42,16,73,38,25,39,11,24,94,72,18,8,46,29,32,40,62,76,36],\n[20,69,36,41,72,30,23,88,34,62,99,69,82,67,59,85,74,4,36,16],\n[20,73,35,29,78,31,90,1,74,31,49,71,48,86,81,16,23,57,5,54],\n[1,70,54,71,83,51,54,69,16,92,33,48,61,43,52,1,89,19,67,48]]\n\nindeces=[]\n# horzontal and vertical\nfor i in range(20):\n for j in range(17):\n indeces.append([(i,j),(i,j+1),(i,j+2),(i,j+3)])\n indeces.append([(j,i),(j+1,i),(j+2,i),(j+3,i)])\n\n# \\\nfor i in range(17):\n for j in range(17):\n indeces.append([(i,j),(i+1,j+1),(i+2,j+2),(i+3,j+3)])\n# / \nfor i in range(17):\n for j in range(3,20):\n indeces.append([(i,j),(i+1,j-1), (i+2,j-2), (i+3,j-3)])\n\ndef get_nums_from_matrix(matrix, list_of_indeces):\n list_of_numbers = []\n for i in list_of_indeces:\n x = i[0]\n y = i[1]\n list_of_numbers.append(matrix[x][y])\n return list_of_numbers\n\nll_numbers = []\nfor i in indeces:\n ll_numbers.append(get_nums_from_matrix(grid,i))\n\nprint reduce(is_bigger_than, map(lambda x: reduce(lambda a,b: a*b , x), ll_numbers))\n\n```\n\n 70600674\n\n\n### 12. Highly divisible triangular number\nThe sequence of triangle numbers is generated by adding the natural numbers. So the 7th triangle number would be 1 + 2 + 3 + 4 + 5 + 6 + 7 = 28. 
The first ten terms would be:\n\n1, 3, 6, 10, 15, 21, 28, 36, 45, 55, ...\n\nLet us list the factors of the first seven triangle numbers:\n\n 1: 1\n 3: 1,3\n 6: 1,2,3,6\n10: 1,2,5,10\n15: 1,3,5,15\n21: 1,3,7,21\n28: 1,2,4,7,14,28\nWe can see that 28 is the first triangle number to have over five divisors.\n\nWhat is the value of the first triangle number to have over five hundred divisors?\n\n\n```python\n# >>> import itertools\n# >>> list2d = [[1,2,3],[4,5,6], [7], [8,9]]\n# >>> merged = list(itertools.chain.from_iterable(list2d))\n\nimport itertools\n\ndef factors(n):\n a_list = []\n for i in ([i, n//i] for i in range(1, int(n**0.5) + 1) if n % i == 0):\n a_list.extend(i)\n return sorted(a_list)\n\ndef binary_search(array, target):\n lower = 0\n upper = len(array)\n while lower < upper: \n x = lower + (upper - lower) // 2\n val = array[x]\n if target == val:\n return x\n elif target > val:\n if lower == x: \n break \n lower = x\n elif target < val:\n upper = x\n\ntri_numbers = []\nfor number in range(1,1000000):\n new_tri = tri_numbers[-1] + number if len(tri_numbers) != 0 else number\n tri_numbers.append(new_tri)\n\n\n# for i in range(500000,600000):\n# if len(factors(tri_numbers[i]))>500:\n# print tri_numbers[i]\n# break\n# for number in reversed(tri_numbers):\n# get_factors(number)\n# for i in tri_numbers[-1:1]:\n# if len(print len(factors(tri_numbers[-1]))\n# reduce(list.__add__, [[1, 2, 3], [4, 5], [6, 7, 8]], [])\n\n```\n\n\n```python\ncount = 0\nfor i in tri_numbers:\n if len(factors(i))>=500:\n count +=1\n print i\n if count ==10:\n break\n```\n\n 76576500\n 103672800\n 137373600\n 147911400\n 163723560\n 214980480\n 228648420\n 231221760\n 236215980\n 250891200\n\n\n### Large sum\n\nWork out the first ten digits of the sum of the following one-hundred 50-digit 
numbers.\n\n37107287533902102798797998220837590246510135740250\n46376937677490009712648124896970078050417018260538\n74324986199524741059474233309513058123726617309629\n91942213363574161572522430563301811072406154908250\n23067588207539346171171980310421047513778063246676\n89261670696623633820136378418383684178734361726757\n28112879812849979408065481931592621691275889832738\n44274228917432520321923589422876796487670272189318\n47451445736001306439091167216856844588711603153276\n70386486105843025439939619828917593665686757934951\n62176457141856560629502157223196586755079324193331\n64906352462741904929101432445813822663347944758178\n92575867718337217661963751590579239728245598838407\n58203565325359399008402633568948830189458628227828\n80181199384826282014278194139940567587151170094390\n35398664372827112653829987240784473053190104293586\n86515506006295864861532075273371959191420517255829\n71693888707715466499115593487603532921714970056938\n54370070576826684624621495650076471787294438377604\n53282654108756828443191190634694037855217779295145\n36123272525000296071075082563815656710885258350721\n45876576172410976447339110607218265236877223636045\n17423706905851860660448207621209813287860733969412\n81142660418086830619328460811191061556940512689692\n51934325451728388641918047049293215058642563049483\n62467221648435076201727918039944693004732956340691\n15732444386908125794514089057706229429197107928209\n55037687525678773091862540744969844508330393682126\n18336384825330154686196124348767681297534375946515\n80386287592878490201521685554828717201219257766954\n78182833757993103614740356856449095527097864797581\n16726320100436897842553539920931837441497806860984\n48403098129077791799088218795327364475675590848030\n87086987551392711854517078544161852424320693150332\n59959406895756536782107074926966537676326235447210\n69793950679652694742597709739166693763042633987085\n41052684708299085211399427365734116182760315001271\n65378607361501080857009149939512557028198746004375\n35829035317434717326932123578154982629742552737307\n94953759765105305946966067683156574377167401875275\n88902802571733229619176668713819931811048770190271\n25267680276078003013678680992525463401061632866526\n36270218540497705585629946580636237993140746255962\n24074486908231174977792365466257246923322810917141\n91430288197103288597806669760892938638285025333403\n34413065578016127815921815005561868836468420090470\n23053081172816430487623791969842487255036638784583\n11487696932154902810424020138335124462181441773470\n63783299490636259666498587618221225225512486764533\n67720186971698544312419572409913959008952310058822\n95548255300263520781532296796249481641953868218774\n76085327132285723110424803456124867697064507995236\n37774242535411291684276865538926205024910326572967\n23701913275725675285653248258265463092207058596522\n29798860272258331913126375147341994889534765745501\n18495701454879288984856827726077713721403798879715\n38298203783031473527721580348144513491373226651381\n34829543829199918180278916522431027392251122869539\n40957953066405232632538044100059654939159879593635\n29746152185502371307642255121183693803580388584903\n41698116222072977186158236678424689157993532961922\n62467957194401269043877107275048102390895523597457\n23189706772547915061505504953922979530901129967519\n86188088225875314529584099251203829009407770775672\n11306739708304724483816533873502340845647058077308\n82959174767140363198008187129011875491310547126581\n97623331044818386269515456334926366572897563400500\n42846280183517070527831839425882145521227251250327\n5512160
3546981200581762165212827652751691296897789\n32238195734329339946437501907836945765883352399886\n75506164965184775180738168837861091527357929701337\n62177842752192623401942399639168044983993173312731\n32924185707147349566916674687634660915035914677504\n99518671430235219628894890102423325116913619626622\n73267460800591547471830798392868535206946944540724\n76841822524674417161514036427982273348055556214818\n97142617910342598647204516893989422179826088076852\n87783646182799346313767754307809363333018982642090\n10848802521674670883215120185883543223812876952786\n71329612474782464538636993009049310363619763878039\n62184073572399794223406235393808339651327408011116\n66627891981488087797941876876144230030984490851411\n60661826293682836764744779239180335110989069790714\n85786944089552990653640447425576083659976645795096\n66024396409905389607120198219976047599490197230297\n64913982680032973156037120041377903785566085089252\n16730939319872750275468906903707539413042652315011\n94809377245048795150954100921645863754710598436791\n78639167021187492431995700641917969777599028300699\n15368713711936614952811305876380278410754449733078\n40789923115535562561142322423255033685442488917353\n44889911501440648020369068063960672322193204149535\n41503128880339536053299340368006977710650566631954\n81234880673210146739058568557934581403627822703280\n82616570773948327592232845941706525094512325230608\n22918802058777319719839450180888072429661980811197\n77158542502016545090413245809786882778948721859617\n72107838435069186155435662884062257473692284509516\n20849603980134001723930671666823555245252804609722\n53503534226472524250874054075591789781264330331690\n\n\n\n```python\nlist_of_numbers = [\"37107287533902102798797998220837590246510135740250\",\n\"46376937677490009712648124896970078050417018260538\",\n\"74324986199524741059474233309513058123726617309629\",\n\"91942213363574161572522430563301811072406154908250\",\n\"23067588207539346171171980310421047513778063246676\",\n\"89261670696623633820136378418383684178734361726757\",\n\"28112879812849979408065481931592621691275889832738\",\n\"44274228917432520321923589422876796487670272189318\",\n\"47451445736001306439091167216856844588711603153276\",\n\"70386486105843025439939619828917593665686757934951\",\n\"62176457141856560629502157223196586755079324193331\",\n\"64906352462741904929101432445813822663347944758178\",\n\"92575867718337217661963751590579239728245598838407\",\n\"58203565325359399008402633568948830189458628227828\",\n\"80181199384826282014278194139940567587151170094390\",\n\"35398664372827112653829987240784473053190104293586\",\n\"86515506006295864861532075273371959191420517255829\",\n\"71693888707715466499115593487603532921714970056938\",\n\"54370070576826684624621495650076471787294438377604\",\n\"53282654108756828443191190634694037855217779295145\",\n\"36123272525000296071075082563815656710885258350721\",\n\"45876576172410976447339110607218265236877223636045\",\n\"17423706905851860660448207621209813287860733969412\",\n\"81142660418086830619328460811191061556940512689692\",\n\"51934325451728388641918047049293215058642563049483\",\n\"62467221648435076201727918039944693004732956340691\",\n\"15732444386908125794514089057706229429197107928209\",\n\"55037687525678773091862540744969844508330393682126\",\n\"18336384825330154686196124348767681297534375946515\",\n\"80386287592878490201521685554828717201219257766954\",\n\"78182833757993103614740356856449095527097864797581\",\n\"16726320100436897842553539920931837441497806860984\",\n\"484030981290777917990882187953273644
75675590848030\",\n\"87086987551392711854517078544161852424320693150332\",\n\"59959406895756536782107074926966537676326235447210\",\n\"69793950679652694742597709739166693763042633987085\",\n\"41052684708299085211399427365734116182760315001271\",\n\"65378607361501080857009149939512557028198746004375\",\n\"35829035317434717326932123578154982629742552737307\",\n\"94953759765105305946966067683156574377167401875275\",\n\"88902802571733229619176668713819931811048770190271\",\n\"25267680276078003013678680992525463401061632866526\",\n\"36270218540497705585629946580636237993140746255962\",\n\"24074486908231174977792365466257246923322810917141\",\n\"91430288197103288597806669760892938638285025333403\",\n\"34413065578016127815921815005561868836468420090470\",\n\"23053081172816430487623791969842487255036638784583\",\n\"11487696932154902810424020138335124462181441773470\",\n\"63783299490636259666498587618221225225512486764533\",\n\"67720186971698544312419572409913959008952310058822\",\n\"95548255300263520781532296796249481641953868218774\",\n\"76085327132285723110424803456124867697064507995236\",\n\"37774242535411291684276865538926205024910326572967\",\n\"23701913275725675285653248258265463092207058596522\",\n\"29798860272258331913126375147341994889534765745501\",\n\"18495701454879288984856827726077713721403798879715\",\n\"38298203783031473527721580348144513491373226651381\",\n\"34829543829199918180278916522431027392251122869539\",\n\"40957953066405232632538044100059654939159879593635\",\n\"29746152185502371307642255121183693803580388584903\",\n\"41698116222072977186158236678424689157993532961922\",\n\"62467957194401269043877107275048102390895523597457\",\n\"23189706772547915061505504953922979530901129967519\",\n\"86188088225875314529584099251203829009407770775672\",\n\"11306739708304724483816533873502340845647058077308\",\n\"82959174767140363198008187129011875491310547126581\",\n\"97623331044818386269515456334926366572897563400500\",\n\"42846280183517070527831839425882145521227251250327\",\n\"55121603546981200581762165212827652751691296897789\",\n\"32238195734329339946437501907836945765883352399886\",\n\"75506164965184775180738168837861091527357929701337\",\n\"62177842752192623401942399639168044983993173312731\",\n\"32924185707147349566916674687634660915035914677504\",\n\"99518671430235219628894890102423325116913619626622\",\n\"73267460800591547471830798392868535206946944540724\",\n\"76841822524674417161514036427982273348055556214818\",\n\"97142617910342598647204516893989422179826088076852\",\n\"87783646182799346313767754307809363333018982642090\",\n\"10848802521674670883215120185883543223812876952786\",\n\"71329612474782464538636993009049310363619763878039\",\n\"62184073572399794223406235393808339651327408011116\",\n\"66627891981488087797941876876144230030984490851411\",\n\"60661826293682836764744779239180335110989069790714\",\n\"85786944089552990653640447425576083659976645795096\",\n\"66024396409905389607120198219976047599490197230297\",\n\"64913982680032973156037120041377903785566085089252\",\n\"16730939319872750275468906903707539413042652315011\",\n\"94809377245048795150954100921645863754710598436791\",\n\"78639167021187492431995700641917969777599028300699\",\n\"15368713711936614952811305876380278410754449733078\",\n\"40789923115535562561142322423255033685442488917353\",\n\"44889911501440648020369068063960672322193204149535\",\n\"41503128880339536053299340368006977710650566631954\",\n\"81234880673210146739058568557934581403627822703280\",\n\"82616570773948327592232845941706525094512325230608\",\n\"
22918802058777319719839450180888072429661980811197\",\n\"77158542502016545090413245809786882778948721859617\",\n\"72107838435069186155435662884062257473692284509516\",\n\"20849603980134001723930671666823555245252804609722\",\n\"53503534226472524250874054075591789781264330331690\"]\n\nprint str(reduce(get_sum, map(lambda x: int(x[:10]), list_of_numbers)))[:10]\nprint str(reduce(get_sum, map(lambda x: int(x[:11]), list_of_numbers)))[:10]\nprint str(reduce(get_sum, map(lambda x: int(x[:50]), list_of_numbers)))[:10]\n\n```\n\n 5537376229\n 5537376230\n 5537376230\n\n\n### Longest Collatz sequence\n\nThe following iterative sequence is defined for the set of positive integers:\n\nn \u2192 n/2 (n is even)\nn \u2192 3n + 1 (n is odd)\n\nUsing the rule above and starting with 13, we generate the following sequence:\n\n13 \u2192 40 \u2192 20 \u2192 10 \u2192 5 \u2192 16 \u2192 8 \u2192 4 \u2192 2 \u2192 1\nIt can be seen that this sequence (starting at 13 and finishing at 1) contains 10 terms. Although it has not been proved yet (Collatz Problem), it is thought that all starting numbers finish at 1.\n\nWhich starting number, under one million, produces the longest chain?\n\nNOTE: Once the chain starts the terms are allowed to go above one million.\n\n\n```python\ndef get_collatz(x):\n if type(x) != type([]):\n input_list = [x]\n else:\n input_list = x\n if input_list[-1] == 1:\n return x\n \n if input_list[-1]%2 == 0:\n input_list.append(input_list[-1]/2)\n else:\n input_list.append(input_list[-1]*3 +1)\n return get_collatz(input_list)\n\ncache = {}\nlongest_seq = 0\nlongest_gen = 0\nfor i in range(2,1000000):\n if i in cache:\n continue\n collatz = get_collatz(i)\n for index,number in enumerate(collatz): \n if number not in cache: \n cache[number] = len(collatz) - index\n if longest_seq < cache[number]:\n longest_seq = cache[number]\n longest_gen = number\n \nprint longest_gen, longest_seq\n```\n\n 837799 525\n\n\n\n```python\n[13][-1]%2 ==0\n```\n\n\n\n\n False\n\n\n\n### Lattice paths\n\nStarting in the top left corner of a 2\u00d72 grid, and only being able to move to the right and down, there are exactly 6 routes to the bottom right corner.\n\n\nHow many such routes are there through a 20\u00d720 grid?\n\n\n```python\n\n```\n\n### Power digit sum\n\n215 = 32768 and the sum of its digits is 3 + 2 + 7 + 6 + 8 = 26.\n\nWhat is the sum of the digits of the number 21000?\n\n\n```python\n\n```\n\n### Number letter counts\n\nIf the numbers 1 to 5 are written out in words: one, two, three, four, five, then there are 3 + 3 + 5 + 4 + 4 = 19 letters used in total.\n\nIf all the numbers from 1 to 1000 (one thousand) inclusive were written out in words, how many letters would be used?\n\n\nNOTE: Do not count spaces or hyphens. For example, 342 (three hundred and forty-two) contains 23 letters and 115 (one hundred and fifteen) contains 20 letters. 
The use of \"and\" when writing out numbers is in compliance with British usage.\n\n\n```python\n\n```\n\n### Maximum path sum I\n\nBy starting at the top of the triangle below and moving to adjacent numbers on the row below, the maximum total from top to bottom is 23.\n\n3\n7 4\n2 4 6\n8 5 9 3\n\nThat is, 3 + 7 + 4 + 9 = 23.\n\nFind the maximum total from top to bottom of the triangle below:\n\n75\n95 64\n17 47 82\n18 35 87 10\n20 04 82 47 65\n19 01 23 75 03 34\n88 02 77 73 07 63 67\n99 65 04 28 06 16 70 92\n41 41 26 56 83 40 80 70 33\n41 48 72 33 47 32 37 16 94 29\n53 71 44 65 25 43 91 52 97 51 14\n70 11 33 28 77 73 17 78 39 68 17 57\n91 71 52 38 17 14 91 43 58 50 27 29 48\n63 66 04 68 89 53 67 30 73 16 69 87 40 31\n04 62 98 27 23 09 70 98 73 93 38 53 60 04 23\n\nNOTE: As there are only 16384 routes, it is possible to solve this problem by trying every route. However, Problem 67, is the same challenge with a triangle containing one-hundred rows; it cannot be solved by brute force, and requires a clever method! ;o)\n\n\n```python\n\n```\n\n### Counting Sundays\n\nYou are given the following information, but you may prefer to do some research for yourself.\n\n1 Jan 1900 was a Monday.\nThirty days has September,\nApril, June and November.\nAll the rest have thirty-one,\nSaving February alone,\nWhich has twenty-eight, rain or shine.\nAnd on leap years, twenty-nine.\nA leap year occurs on any year evenly divisible by 4, but not on a century unless it is divisible by 400.\nHow many Sundays fell on the first of the month during the twentieth century (1 Jan 1901 to 31 Dec 2000)?\n\n\n```python\na_set = {2, 4, 5, 9, 12, 21, 30, 51, 76, 127, 195}\nb_set = {1, 2, 3, 5, 6, 8, 9, 12, 15, 17, 18, 21}\nprint a_set.symmetric_difference(b_set)\nprint b_set.symmetric_difference(a_set)\n```\n\n set([1, 3, 4, 6, 8, 76, 15, 17, 18, 195, 127, 30, 51])\n set([3, 1, 195, 4, 6, 8, 76, 15, 17, 18, 51, 30, 127])\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "04013606fa8c343600a48e440095944412681c87", "size": 41988, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Euler.ipynb", "max_stars_repo_name": "itsjeevs/projectEuler", "max_stars_repo_head_hexsha": "2e8521df4c15b5e03562b282d9da37b5da13778f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Euler.ipynb", "max_issues_repo_name": "itsjeevs/projectEuler", "max_issues_repo_head_hexsha": "2e8521df4c15b5e03562b282d9da37b5da13778f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Euler.ipynb", "max_forks_repo_name": "itsjeevs/projectEuler", "max_forks_repo_head_hexsha": "2e8521df4c15b5e03562b282d9da37b5da13778f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.5830508475, "max_line_length": 1021, "alphanum_fraction": 0.6286558064, "converted": true, "num_tokens": 10501, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9585377249197139, "lm_q2_score": 0.8774767858797979, "lm_q1q2_score": 0.8410946020070844}} {"text": "```\n%pylab inline\n```\n\n \n Welcome to pylab, a matplotlib-based Python environment [backend: module://IPython.zmq.pylab.backend_inline].\n For more information, type 'help(pylab)'.\n\n\n\n```\nfrom sympy import Symbol, fresnels, fresnelc, oo, I, re, im, series, Rational, sin, cos, exp, plot\nfrom sympy.plotting import plot, plot_parametric\nfrom matplotlib.pyplot import figsize\n```\n\nPlot of the two Fresnel integrals $S(x)$ and $C(x)$\n\n\n```\nx = Symbol(\"x\")\n```\n\n\n```\nplot(fresnels(x), fresnelc(x), (x, 0, 8))\n```\n\nThe Cornu spiral defined as the parametric curve $u(t),v(t) := C(t), S(t)$\n\n\n```\nfigsize(8,8)\nplot_parametric(fresnelc(x), fresnels(x))\n```\n\nCompute and plot the leading order behaviour around $x=0$\n\n\n```\nltc = series(fresnelc(x), x, n=2).removeO()\nlts = series(fresnels(x), x, n=4).removeO()\n```\n\n\n```\nlts, ltc\n```\n\n\n\n\n (pi*x**3/6, x)\n\n\n\n\n```\nfigsize(4,4)\nplot(fresnels(x), lts, (x, 0, 1))\nplot(fresnelc(x), ltc, (x, 0, 1))\n```\n\nCompute and plot the asymptotic series expansion at $x=\\infty$\n\n\n```\n# Series expansion at infinity is not implemented yet\n#ats = series(fresnels(x), x, oo)\n#atc = series(fresnelc(x), x, oo)\n```\n\n\n```\n# However we can use the well known values\nats = Rational(1,2) - cos(pi*x**2/2)/(pi*x)\natc = Rational(1,2) + sin(pi*x**2/2)/(pi*x)\n```\n\n\n```\nfigsize(4,4)\nplot(fresnels(x), ats, (x, 6, 8))\nplot(fresnelc(x), atc, (x, 6, 8))\n```\nAnother nice example of a parametric plot\n\n```\nalpha = Symbol(\"alpha\")\nr = 3.0\ncirc = r*exp(1.0j*alpha)\n```\n\n\n```\nfigsize(8,8)\nplot_parametric(re(fresnelc(circ)), im(fresnelc(circ)), (alpha, 0, 2*pi))\n```\n\n\n```\n\n```\n", "meta": {"hexsha": "91907e16c60ff0a4ca6468af6f1c6d8c7e1256e2", "size": 214110, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "examples/notebooks/fresnel_integrals.ipynb", "max_stars_repo_name": "Michal-Gagala/sympy", "max_stars_repo_head_hexsha": "3cc756c2af73b5506102abaeefd1b654e286e2c8", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "examples/notebooks/fresnel_integrals.ipynb", "max_issues_repo_name": "Michal-Gagala/sympy", "max_issues_repo_head_hexsha": "3cc756c2af73b5506102abaeefd1b654e286e2c8", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "examples/notebooks/fresnel_integrals.ipynb", "max_forks_repo_name": "Michal-Gagala/sympy", "max_forks_repo_head_hexsha": "3cc756c2af73b5506102abaeefd1b654e286e2c8", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 884.7520661157, "max_line_length": 80872, "alphanum_fraction": 0.9398019709, "converted": true, "num_tokens": 572, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9496693688269984, "lm_q2_score": 0.8856314813647587, "lm_q1q2_score": 0.8410570899209899}} {"text": "```python\n%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.stats import pearsonr\nplt.rcParams['font.size'] = 13\nplt.rcParams['axes.spines.right'] = False\nplt.rcParams['ytick.right'] = False\nplt.rcParams['axes.spines.top'] = False\nplt.rcParams['xtick.top'] = False\n```\n\n## Information as reduced uncertainty\n\nThe foundation for information theory differs slightly from many other concepts in physics, as it is not derived out of empirical observations. Rather, Shannon (1948) started from an intuition of what properties an information measure should posses, and then showed that there only exist one measure with those properties. In short, he imagined a situation where the probabilities ($p_1, \\ldots, p_N$) for $N$ outcomes/answers to an event/question are known beforehand, and soughed to quantify the information obtained ones the outcome/answer was learned. \n\nFor example, imagine a professor that wants to know how many students $x$ attended a specific lecture. The professor is assumed to know the distribution $p(x)$ over all possible number of attendants from previous experience, but the real number of attendants is unknown. The distribution $p(x)$ thus reflects current uncertainty, and ones the real number of attendees is learned, this uncertainty is decreased to zero. The basic idea is, therefore, to quantify the information learned by measuring how much the uncertainty has decreased.\n\n\n```python\n# Illustration of the uncertainty before and after\nN = 16 # number of possible outcomes\nmu = N/2. # mean\nsigma = N/4. # standard deviation\nx = np.arange(N) # possible outcomes\np = np.exp(-(x-mu)**2/sigma**2) # p(x)\np /= p.sum() # Normalize\n\n# One sample from p(x)\np_cum = np.cumsum(p)\noutcome = np.argmax(np.random.rand() < p_cum) \ny = np.zeros(N)\ny[outcome] = 1.\n\n# Plotting\nplt.figure(figsize=(15, 3))\nax = plt.subplot(1, 2, 1)\nax.bar(x-0.4, p)\nax.set_xlabel('Number of attendants')\nax.set_ylabel('P(x)')\nax.set_title('Before')\nax = plt.subplot(1, 2, 2)\nax.bar(x, y)\nax.set_xlabel('Number of attendants');\nax.set_title('After');\n```\n\nBased on the idea above, Shannon (1948) proposed that a measure $H(p_1,\\ldots,p_N)$ of uncertainty should posses the following three properties:\n1. $H$ should be continuous in the $p_i$.\n2. If all the $p_i$ are equal, $p_i=1/N$, then $H$ should be a monotonically increasing function of $N$.\n3. If a choice can be broken down into two successive choices, the original $H$ should be a weighted sum of the individual values of $H$. For example: $H(\\frac{1}{2}, \\frac{1}{3}, \\frac{1}{6}) = H(\\frac{1}{2}, \\frac{1}{2}) + \\frac{1}{2}H(\\frac{2}{3}, \\frac{1}{3})$.\n\n***\n```\n -----|----- -----|-----\n | | | | | \n 1/2 2/6 1/6 1/2 1/2\n | ---|---\n | | |\n | 2/3 1/3\n | | |\n 1/2 2/6 1/6\n```\n***\nShannon then moved on to shown that the only uncertainty measure that satisfies the above three properties is of the form:\n\n$$\n\\begin{equation}\nH=-\\sum_i p_i \\log(p_i), \n\\end{equation}\n$$\n\nwhere the base of the logarithm determines the information unit (usually base two which corresponds to bits). See Shannon (1948) or Bialek (2012) for the proof. 
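\n\nAs a quick sanity check of the decomposition property (number 3 above), the weighted-sum example can be verified numerically. This is a minimal sketch using the entropy formula just introduced and the `numpy` import from the first cell:\n\n\n```python\n# Verify that H(1/2, 1/3, 1/6) = H(1/2, 1/2) + 1/2 * H(2/3, 1/3)\nH = lambda p: -np.sum(np.asarray(p) * np.log2(p))\nlhs = H([1./2, 1./3, 1./6])\nrhs = H([1./2, 1./2]) + 1./2 * H([2./3, 1./3])\nprint('%.3f = %.3f' % (lhs, rhs))  # both are about 1.459 bits\n```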
\n\n\n```python\n# Uncertanities before and after\nH_before = -np.sum(p*np.log2(p))\nH_after = -np.sum(y[y>0]*np.log2(y[y>0]))\n\n# Plotting\nplt.figure(figsize=(15, 3))\nax = plt.subplot(1, 2, 1)\nax.bar(x, p)\nax.set_ylabel('P(x)')\nax.set_title('$H_\\mathrm{before} = %2.1f$ bits' % H_before)\nax.set_xlabel('Number of attendants')\nax = plt.subplot(1, 2, 2)\nax.bar(x, y)\nax.set_title('$H_\\mathrm{after} = %2.1f$ bits' % H_after)\nax.set_xlabel('Number of attendants');\n```\n\n## Entropy as a measure of uncertainty\n\nShannon (1948) chose to denote the uncertainty measure by $H$, and he referred to it as entropy due to its connection with statistical mechanics.\n> Quantities of the form $H=-\\sum_i p_i \\log(p_i)$ play a central role in information theory as measures of **information, choice, and uncertainty**. The form of $H$ will be recognized as that of entropy as defined in certain formulations of statistical mechanics where $p_i$ is the probability of a system being in cell $i$ of its phase space. $H$ is then, for example, the $H$ in Boltzman's famous $H$ theorem. We shall call $H=-\\sum_i p_i \\log(p_i)$ the entropy of the set of probabilities $p_1,\\ldots,p_n$.\n\nAlthough fascinating, this connection might, however, not be enough to provide an intuitive picture of which factors that lead to high or low entropies. In short, we can answer this second question by noting that 1) the entropy is always non-negative, 2) it increases with the number of possible outcomes, and 3) it obtains its maximum value for any fixed number of outcomes when all are equally likely.\n\n\n```python\n# Entropies for various example distributions\nN = 32\nmu = N/2.\nsigma = N/6.\nx = np.arange(N)\n\n# Distributions\np_equal = 1./N*np.ones(N) \np_normal = np.exp(-(x-mu)**2/sigma**2)\np_normal /= p_normal.sum()\np_random = np.random.rand(N)\np_random /= p_random.sum()\nps = [p_equal, p_normal, p_random]\np_max = np.hstack(ps).max()\n\n# Plotting\nplt.figure(figsize=(15, 3))\nfor idx, p in enumerate(ps, start=1):\n H = -np.sum(p*np.log2(p))\n ax = plt.subplot(1, len(ps), idx)\n ax.bar(x, p)\n ax.set_title('$H = %2.1f$ bits' % H)\n ax.set_ylim([0, p_max])\n if idx == 1:\n ax.set_ylabel('P(x)')\n elif idx == 2:\n ax.set_xlabel('Possible outcomes')\n \n```\n\nThe entropy of a distribution, as presented above, can also be derived by searching for a minimum length code for denoting each outcome. That is, the entropy also represents a lower limit on how many bits one needs on average to encode each outcome. For example, imagine that $N=4$ and that the probabilities are: $p_1=0.5,\\: p_2=0.25,\\: p_3=0.125,\\: p_4=0.125$. In this case, the minimum length codes would be:\n\n| Outcome | Code |\n|---------|:----:|\n| 1 | 0 |\n| 2 | 10 |\n| 3 | 110 |\n| 4 | 111 |\n\nand the entropy (or average code length) $-0.5\\log(0.5)-0.25\\log(0.25)-2*0.125\\log(0.125)=1.75$ bits. Bialek (2012) commented on this fact by writing:\n>It is quite remarkable that the only way of quantifying how much we learn is to measure how much space is required to write it down.\n\nSimilarly, Bialek (2012) also provided the following link between entropy as a minimum length code and the amount of heat needed to heat up a room:\n>Entropy is a very old idea. It arises in thermodynamics first as a way of keeping track of heat flows, so that a small amount of heat $dQ$ transferred at absolute temperature $T$ generates a change in entropy $dS=\\frac{dQ}{T}$. 
Although there is no function $Q$ that measures the heat content of a system, there is a function $S$ that characterizes the (macroscopic) state of a system independent of the path to that state. Now we know that the entropy of a probability distribution also measures the amount of space needed to write down a description of the (microscopic) states drawn out of that distribution.\n\n>Let us imagine, then, a thought experiment in which we measure (with some finite resolution) the positions and velocities of all gas molecules in a small room and type these numbers into a file on a computer. There are relatively efficient programs (gzip, or \"compress\" on a UNIX machine) that compress such files to nearly their shortest possible length. If these programs really work as well as they can, then the length of the file tells us the entropy of the distribution out of which the numbers in the file are being drawn, but this is the entropy of the gas. Thus, if we heat up the room by 10 degrees and repeat the process, we will find that the resulting data file is longer. More profoundly, if we measure the increase in the length of the file, we know the entropy change of the gas and hence the amount of heat that must be added to the room to increase the temperature. This connection between a rather abstract quantity (the length in bits of a computer file) and a very tangible physical quantity (the amount of heat added to a room) has long struck me as one of the more dramatic, if elementary, examples of the power of mathematics to unify descriptions of very disparate phenomena.\n\n[Maxwell\u2013Boltzmann distribution](https://en.wikipedia.org/wiki/Maxwell%E2%80%93Boltzmann_distribution)\n\n## Mutual information\n\nMost situations are not as easy as the example with the professor, where the uncertainty was removed in total once the answer was obtained. That is, in practice we often face situations where the uncertainty is only partially decreased. For example, imagine a situation where a bright spot is flashed on one out of 8 equally likely horizontally placed locations {$x \\in [0, 1,\\ldots, 7]$}, and where our information about which location was lit up comes from a light detector placed at one of the locations. The detector further has three states {$y \\in [0, 1, 2]$}, and it responds with state 2 if the spot is flashed on the location where it is located, state 1 if the spot is flashed at either of the two neighboring locations, and state 0 otherwise. 
Assuming that the detector is placed at location 3, then its response to a flash at any of the eight locations is as depicted below.\n\n\n\n\n```python\nN = 8; # Eight locations\nplacement = 3 # The detector's location\nresponses = np.zeros(N) # Detector reponses at each location\nresponses[placement] = 2\nresponses[placement-1] = 1\nresponses[placement+1] = 1\n\n# Plotting\nplt.figure(figsize=(7.5, 3))\nplt.bar(np.arange(N), responses)\nplt.xlabel('Spot location')\nplt.ylabel('Detector response');\n```\n\nIf we now expand on the initial idea to define information as the entropy difference between before and after knowing the output of the detector, then we get:\n\n$$\n\\begin{equation}\nI(X;Y) = \\sum_{i=0}^7 -p(x_i)\\log p(x_i) - \\sum_{j=0}^2 p(y_j) \\sum_{i=0}^7 -p(x_i|y_j) \\log p(x_i|y_j).\n\\end{equation}\n$$\n\nThat is, from the initial uncertainty in flash spot location $\\sum_{i=0}^7 -p(x_i)\\log p(x_i)$, we subtract off the uncertainty that remains for each possible state of the detector $\\sum_{i=0}^7 -p(x_i|y_j) \\log p(x_i|y_j)$ weighted by its probability of occurrence $p(y_j)$. For the case described above, the relevant probability distributions and entropies are:\n\n\n```python\n# Probability distributions\npx = 1./N * np.ones(N)\npx_y0 = np.zeros(N) + np.float64((responses == 0)) / (responses == 0).sum()\npx_y1 = np.zeros(N) + np.float64((responses == 1)) / (responses == 1).sum()\npx_y2 = np.zeros(N) + np.float64((responses == 2)) / (responses == 2).sum()\npy = 1./N * np.array([(responses==r).sum() for r in np.unique(responses)])\nps = [px, px_y0, px_y1, px_y2, py]\ntitles = ['$P(x)$', '$P(x|y=0)$', '$P(x|y=1)$', '$P(x|y=2)$', '$P(y)$']\n\n# Plotting\nHs = []\nplt.figure(figsize=(15, 3))\nfor idx, p in enumerate(ps, start=1):\n H = -np.sum(p[p>0]*np.log2(p[p>0]))\n Hs.append(H)\n ax = plt.subplot(1, len(ps), idx)\n ax.bar(np.arange(len(p)), p)\n ax.set_ylim([0, 1])\n ax.set_title(titles[idx-1] + ', $%2.1f$ bits' % H)\n if idx < len(ps):\n ax.set_xlabel('x')\n else:\n ax.set_xlabel('y')\n if idx > 1:\n ax.set_yticklabels([])\n else:\n ax.set_ylabel('Probability')\n \n# Calculate and write out the mutual information\nmi = Hs[0] - py[0]*Hs[1] - py[1]*Hs[2] - py[2]*Hs[3]\nprint('I=%3.2f - %3.2f*%3.2f - %3.2f*%3.2f - %3.2f*%3.2f=%3.2f' % (Hs[0], py[0], Hs[1], py[1], Hs[2], py[2], Hs[3], mi))\n```\n\nBy further replacing the summation limits with $x\\in X$ and $y\\in Y$, respectively, we obtain the more general expression:\n\n$$\n\\begin{equation}\nI(X;Y) = \\sum_{x\\in X} -p(x)\\log p(x) - \\sum_{y\\in Y} p(y) \\sum_{x\\in X} -p(x|y) \\log p(x|y) = H(X) - H(X|Y),\n\\end{equation}\n$$\n\nwhere $H(X|Y)$ is the conditional entropy (i.e., the average uncertainty that remains ones $y$ is known) and $I$ the mutual information between $X$ and $Y$. Mutual information is thus a generalization of the initial idea that we can quantify what we learn as the difference in uncertainty before and after. \n\n## Entropy, uncertainty or information \n\nShannon (1948) actually emphasized a different interpretation than the one presented above. As he was interested in the case where a source sends information over a noisy channel to a receiver, he interpreted the entropy $H(X)$ in $I(X;Y) = H(X) - H(X|Y)$ as the information produced by the source instead of an uncertainty. 
This interpretation can be understood by noting that the entropy can be seen both as an initial uncertainty and as an upper bound on the information learned, which is attained when $H(X|Y)$ is zero (a duality that sometimes leads to confusion, especially if mutual information is abbreviated to information only). And in a source and receiver scenario, the upper limit obviously denotes the amount of information sent (produced) by the source. These different interpretations might seem unnecessary at first, but they help in interpreting the symmetry of the mutual information measure. Starting from the expression of mutual information as given above, one can reformulate it as:\n\n$$\n\\begin{align}\nI(X;Y) &= \\sum_{x\\in X} -p(x)\\log p(x) - \\sum_{y\\in Y} p(y) \\sum_{x\\in X} -p(x|y) \\log p(x|y) = H(X) - H(X|Y), \\quad\\quad (1) \\\\\n &=-\\sum_{x\\in X}\\sum_{y\\in Y} p(x, y)\\log p(x) + \\sum_{y\\in Y} \\sum_{x\\in X} p(x,y) \\log p(x|y), \\\\\n &= \\sum_{y\\in Y} \\sum_{x\\in X} p(x,y) \\log \\frac{p(x|y)}{p(x)}, \\\\\n &= \\sum_{y\\in Y} \\sum_{x\\in X} p(x,y) \\log \\frac{p(x,y)}{p(x)p(y)} = \\dots = H(X) + H(Y) - H(X,Y), \\\\\n &= \\quad \\vdots \\\\\nI(Y;X) &= \\sum_{y\\in Y} -p(y)\\log p(y) - \\sum_{x\\in X} p(x) \\sum_{y\\in Y} -p(y|x) \\log p(y|x) = H(Y) - H(Y|X), \\quad\\quad (2) \n\\end{align}\n$$\n\nShannon interpreted these two descriptions as: (1) The information that was sent less the uncertainty of what was sent. (2) The amount of information received less the part which is due to noise. Observe that expression (2) makes little sense for the detector example above if $H(Y)$ is interpreted as uncertainty, whereas it becomes clearer with the interpretation that Shannon emphasized. From that point of view, expression (2) tells us that the mutual information is the information contained in the detector's response $H(Y)$ less the part that is due to noise $H(Y|X)$. However, as the detector is deterministic (no noise), we arrive at the conclusion that the mutual information should equal $H(Y)$ in our particular example, which it also does.\n\nAdditionally, we note that the mutual information has the following properties:\n1. It is non-negative and equal to zero only when $x$ and $y$ are statistically independent, that is, when $p(x,y)=p(x)p(y)$.\n2. It is bounded from above by either $H(X)$ or $H(Y)$, whichever is smaller.\n\n\n## Mutual information as a general measure of correlation\n\nAs the mutual information is a measure of dependence between two random variables, it can also be understood in more familiar terms of correlations. To visualize this, imagine a joint distribution of two random variables ($X_1$ and $X_2$). Equations 1 and 2 above tell us that the mutual information can be obtained as either $H(X) - H(X|Y)$ or $H(Y) - H(Y|X)$. That is, the entropy of either marginal distribution less the conditional entropy. In more practical terms, this means that we subtract off the average uncertainty that remains once either variable is known. And in even more practical terms, it corresponds to looking at individual rows or columns in the joint distribution, as these reflect the uncertainty that remains once either variable is known.
This is illustrated below where two 2D multivariate Gaussian distributions are plotted together with the mutual information between the two variables.\n\n\n```python\n# Generating one independent and one correlated gaussian distribution\nN = 16\nmu = (N-1) / 2.*np.ones([2, 1])\nvar = 9.\ncov = 8.\ncov_ind = np.array([[var, 0.], [0., var]])\ncov_cor = np.array([[var, cov], [cov, var]])\n[x1, x2,] = np.meshgrid(range(N), range(N))\np_ind = np.zeros([N, N])\np_cor = np.zeros([N, N])\nfor i in range(N**2):\n x_tmp = np.array([x1.ravel()[i]-mu[0], x2.ravel()[i]-mu[1]])\n p_ind.ravel()[i] = np.exp(-1/2 * np.dot(x_tmp.T, np.dot(np.linalg.inv(cov_ind), x_tmp)))\n p_cor.ravel()[i] = np.exp(-1/2 * np.dot(x_tmp.T, np.dot(np.linalg.inv(cov_cor), x_tmp)))\np_ind /= p_ind.sum()\np_cor /= p_cor.sum()\n \n# Calculate I(X1;X2)\np1_ind = p_ind.sum(axis=1)\np2_ind = p_ind.sum(axis=0)\nmi_ind = -np.sum(p1_ind*np.log2(p1_ind)) - np.sum(p2_ind*np.log2(p2_ind)) + np.sum(p_ind*np.log2(p_ind))\np1_cor = p_cor.sum(axis=1)\np2_cor = p_cor.sum(axis=0)\nmi_cor = -np.sum(p1_cor*np.log2(p1_cor)) - np.sum(p2_cor*np.log2(p2_cor)) + np.sum(p_cor[p_cor>0]*np.log2(p_cor[p_cor>0]))\n \n# Plotting\ntitles = ['Independent', 'Correlated']\np = [p_ind, p_cor]\nmi = [mi_ind, mi_cor]\nx_ticks = [0, 5, 10, 15]\nfig = plt.figure(figsize=(15, 7.5))\nfor idx, p_tmp in enumerate(p):\n ax = fig.add_axes([0.1 + idx*0.5, 0.1, 0.25, 0.5])\n ax.imshow(p_tmp.reshape(N, N))\n ax.set_xticks(x_ticks)\n ax.set_xticklabels([])\n ax.set_xlabel('$x_1$')\n ax.set_yticks(x_ticks)\n ax.set_yticklabels([])\n ax.set_ylabel('$x_2$')\n ax.invert_yaxis()\n plt.draw()\n pos = ax.get_position()\n ax = fig.add_axes([pos.x0, 0.65, pos.x1-pos.x0, 0.1])\n ax.plot(range(N), p_tmp.sum(axis=1), 'o-')\n ax.set_xticks(x_ticks)\n ax.get_yaxis().set_visible(False)\n ax.spines['left'].set_visible(False)\n ax.set_title(titles[idx] + ', $I(X_1;X_2) = %3.2f$ bits' % mi[idx])\n ax = fig.add_axes([pos.x1 + 0.03, 0.1, 0.1/2, 0.5])\n ax.plot(p_tmp.sum(axis=0), range(N), 'o-')\n ax.set_yticks(x_ticks)\n ax.get_xaxis().set_visible(False)\n ax.spines['bottom'].set_visible(False)\n \nprint('H(X1): %3.2f bits' % -np.sum(p1_cor*np.log2(p1_cor)))\nprint('H(X2): %3.2f bits' % -np.sum(p2_cor*np.log2(p2_cor)))\nprint('H(X1,X2)_ind: %3.2f bits' % -np.sum(p_ind*np.log2(p_ind)))\nprint('H(X1,X2)_cor: %3.2f bits' % -np.sum(p_cor[p_cor>0]*np.log2(p_cor[p_cor>0])))\n```\n\nAnother way of understanding why mutual information measures correlation is to look at the expression $I(X;Y) = H(X) + H(Y) - H(X,Y)$, from which we observe that the joint entropy $H(X,Y)$ is subtracted from the sum of the individual entropies. As entropy increases with uncertainty (or possible outcomes), we can infer that a less spread out joint distribution will cause a smaller subtraction. Importantly, however, the shape of the joint distribution does not matter, only how concentrated the probability mass is to a small number of outcomes. This is an important distinction that makes mutual information a general measure of correlation, in contrast to the commonly used correlation coefficients (Pearson's r), which only captures linear correlations. 
The example below highlight this by calculating the mutual information and the correlation coefficient for both a linear and quadratic relationship between $x$ and $y$.\n\n\n```python\n# Generate y responses as y = f(x) for 16 x values with f(x) being either f(x)=x or f(x) = -x^2\nx = np.arange(-3.75, 4, 0.5)\ny = [x, -x**2]\n\n# Entropies, mutual information, correlation coefficients\nHx = [np.log2(x.size), np.log2(x.size)] # Assume each x-value is equally likely\nHy = [np.log2(np.unique(y_tmp).size) for y_tmp in y]\nmi = Hy # H(Y|X) = 0 as there is no noise, thus I = H(Y) \nr = [pearsonr(x, y_tmp)[0] for y_tmp in y]\n\n# Plotting\nfig = plt.figure(figsize=(15, 3))\nfor i in range(len(y)): \n ax = plt.subplot(1, len(y), i+1)\n ax.plot(x, y[i], 'o')\n ax.set_xlabel('$x$')\n ax.set_ylabel('$y$')\n info = '$r: %2.1f$\\n$H(X): %2.1f$ bits\\n$H(Y): %2.1f$ bits\\n$I(X;Y): %2.1f$ bits' % (r[i], Hx[i], Hy[i], mi[i])\n ax.text(x[2]-i*x[2], y[i].max()-i*5, info, va='top', ha='center')\n```\n\nThe mutual information retains its maximum value in both cases (remember that it is bounded from above by min[H(x), H(y)]), whereas the correlation coefficient indicates maximal correlation for the linear $f$ and no correlation for the quadratic $f$. Additionally, the quadratic example provides a nice description of how the mutual information can be interpreted: If we learn 3 bits of information by observing $y$, then our uncertainty about $x$ is one bit $H(X) - I(X;Y)$. This, in turn, corresponds to a choice between two equally likely alternatives, a condition that simply reflects that there are two different $x$-values mapping onto the same $y$-value.\n", "meta": {"hexsha": "81529e9660025b4148fd5bc0b60156718366a959", "size": 147381, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "InformationTheory/Part 1, entropy, uncertainty and information.ipynb", "max_stars_repo_name": "ala-laurila-lab/jupyter-notebooks", "max_stars_repo_head_hexsha": "c7fac1ee74af8e61832dad8536b223a205e79bf2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "InformationTheory/Part 1, entropy, uncertainty and information.ipynb", "max_issues_repo_name": "ala-laurila-lab/jupyter-notebooks", "max_issues_repo_head_hexsha": "c7fac1ee74af8e61832dad8536b223a205e79bf2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "InformationTheory/Part 1, entropy, uncertainty and information.ipynb", "max_forks_repo_name": "ala-laurila-lab/jupyter-notebooks", "max_forks_repo_head_hexsha": "c7fac1ee74af8e61832dad8536b223a205e79bf2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-02-21T17:03:39.000Z", "max_forks_repo_forks_event_max_datetime": "2019-02-21T17:03:39.000Z", "avg_line_length": 253.2319587629, "max_line_length": 29060, "alphanum_fraction": 0.888927338, "converted": true, "num_tokens": 5886, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9230391706552536, "lm_q2_score": 0.9111797100118214, "lm_q1q2_score": 0.8410545638472061}} {"text": "# Particle in one-dimensional potential well\n\n## Period of oscillations in potential well \n\nDynamics of a particle of mass $m$ moving in one dimension $OX$ is described the Newton equation \n\n$$m\\ddot x =m\\dot v = F(x) = -U'(x),$$ \n \nwhere $F(x)$ is a force acting on tha particle and $U(x)$ is potential energy of the particle. We assume that there are no dissipative forces. Therefore, as we know from the previous section, the total energy $E$ of the particle is conserved, i.e., \n\n\\begin{equation}\n\\label{eq:cons_E1d}\n\\frac{mv^2}{2} + U(x) = E = const.\n\\end{equation}\n\nIf we treat the velocity $v=\\frac{dx}{dt}$ as an independent variable, it is implicit equation of the orbit (the phase curve) in **phase space** $(x,v)$ of the particle with energy $E$. This equation can be rewritten in the form \n\n$$\\frac{m}{2}\\left(\\frac{dx}{dt}\\right)^2+U(x)=E$$\n\nIt is a first-order differential equation and can be solved by the method of variable separation. The result reads \n\n\\begin{equation}\n\\label{eq:t_cons_E1d}\n t=\\sqrt{\\frac{m}{2}} \\; \\int_{a}^{b}{\\frac{dx}{\\sqrt{E-U(x)}}}\n\\end{equation}\n\n\nHere, $a$ is the initial position $x(0)=a$ and $b$ is the final position $x(t)=b$ of the particle. The time $t$ is time for moving the particle from the point $a$ to the point $b$ under the condition that $E \\ge U(x)$ for all $x \\in (a, b)$. \n\nThis is a useful formula which allows to calculate period of oscillations of the particle in a potential well. \n\nIn this section we will consider a class of potentials in the form \n\n\\begin{equation}\n\\label{eq:Uxn}\nU(x)=A |x|^n,\n\\end{equation}\n\nwhere $n$ is a positive real number.\n\nThese potentials are similar: they are bounded from below, have only one minimum at $x=0$ and tends to infinity when $x\\to \\pm \\infty$. In such potential the particle motion is bounded and the particle oscillates between two symmetrical positions $x_0$ and $-x_0$ which in general depend on the total energy $E$ and are determined by the equation \n\n$$U(\\pm x_0) = E$$\n\nBecause of symmetry, the period $T$ of oscillations can be determined by the equation \n\n\\begin{equation}\n\\label{eq:T}\n T=4 \\; \\sqrt{\\frac{m}{2}} \\; \\int_{0}^{x_0}{\\frac{dx}{\\sqrt{E-U(x)}}}\n\\end{equation}\n\nThis class of potentials is simple but nevertheless analysis of $T$ in dependence of the index $n$ and the total energy $E$ is very interesting and instructive. \n\nWe will use computer algebra and numerical methods to investigate properties of motion is such potential wells. \n\n\n\n```python\nload('cas_utils.sage')\n```\n\n\n```python\nt = var('t')\nm = var('m')\nA = var('A')\nassume(A > 0)\nassume(m > 0)\ny = function('y')(t)\nde = m*diff(y,t,2) + 2*A*y == 0\nshowmath( desolve(de, y,ivar=t) )\n```\n\nIt is an analytical solution of the Newton equation in the case when $n=2$ (a harmonic potential). 
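As a quick sanity check of the period formula above, the sketch below evaluates the integral numerically for the harmonic case $U(x)=Ax^2$ and compares it with the period $2\pi\sqrt{m/(2A)}$ implied by the analytical solution of the equation of motion. This is a minimal illustration in plain Python/SciPy (the rest of the notebook uses SageMath), and the values $m=A=E=1$ are arbitrary choices made only for this check.


```python
# Numerical check of T = 4*sqrt(m/2) * integral( dx / sqrt(E - U(x)) ) for U(x) = A*x^2.
# Plain Python/SciPy sketch; m, A, E are arbitrary illustrative values.
import numpy as np
from scipy.integrate import quad

m, A, E = 1.0, 1.0, 1.0
x0 = np.sqrt(E / A)                                   # turning point, U(x0) = E
integrand = lambda x: 1.0 / np.sqrt(E - A * x**2)     # integrable singularity at x0
T_numeric = 4.0 * np.sqrt(m / 2.0) * quad(integrand, 0.0, x0)[0]
T_exact = 2.0 * np.pi * np.sqrt(m / (2.0 * A))        # harmonic oscillator, omega = sqrt(2A/m)
print(T_numeric, T_exact)                             # both approximately 4.4429
```

Changing $E$ in this sketch does not change `T_numeric`, which is exactly the special property of the $n=2$ potential derived in the next section.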
\n\n## Particle in potential $x^2$\n\nFor $n=2$ the system is a harmonic oscillator:\n\n$$U(x)=A x^2.$$\n\n\n```python\n#reset()\nvar('m A x E')\nforget()\nassume(A > 0)\nassume(E > 0)\nassume(E,'real')\n```\n\nTo obtain the integration limit $x_0$ in the formula for the period of oscillations, we must solve the equation:\n\n$$U(x)=E$$\n\nSo for the $Ax^2$ potential, we have:\n\n\n```python\nU(A,x) = A*x^2\nxextr = solve (U(A=A,x=x)==E,x)\nshowmath(xextr)\n```\n\nThese formulas describe the values of the oscillator's extremal positions for a given energy. Let's put them into the formula for $T$:\n:\n\n\n```python\nperiod = 2*sqrt(m/2)*integrate( 1/sqrt(E-U(A,x)),(x,x.subs(xextr[0]),x.subs(xextr[1])))\nperiod = period.canonicalize_radical()\nshowmath(period)\n```\n\nWe see that the period $T$ does not depend on energy of the oscillator. It means that it does not depend on the initial conditions because the total energy of the particle depends on them. In turn, it means that it does not depend on the distance between the points $-x_0$ and $x_0$. It seems to be unusual behavior: time to travel from $-1$ to $1$ and back is the same as time to travel from $-10000$ to $10000$ and back. In the second case the distance is much, much longer but time is the same. This is an exceptional property valid only for the harmonic potential! \n\n## Particle in $|x|^n$ potential\n\nIf $n\\neq2$, the general formula for the period can be written as:\n\n$$T=4 \\sqrt{\\frac{m}{2}} \\; \\int_0^{x_0}\\frac{dx}{\\sqrt{E- A x^n}}$$\n\nor in the equivalent form: \n\n$$T=4 \\sqrt{\\frac{m}{2}}\\frac{1}{\\sqrt{E}}\\int_0^{x_0}\\frac{dx}{\\sqrt{1-Ax^n/E}}$$\n\nThis integral can be transformed to a dimensionless form by substitution \n\n$$\\frac{A}{E}x^n=y^n.$$\n\nIt is in fact a linear relationship between $x$ and $y$:\n\n$$\\left(\\frac{A}{E}\\right)^{\\frac{1}{n}}x=y.$$\n\nTherefore, we can change the integration variable to $y$. To do this, we use SAGE to transform the expression under integral in the following way:\n\n\n```python\nvar('dx dy A E x y')\nvar('n',domain='integer')\nassume(n>=0)\nassume(A>0)\nassume(E>0)\nex1 = dx/sqrt(1-A/E*x^n)\nshowmath(ex1)\n```\n\nand we substitute:\n\n\n```python\nex2 = ex1.subs({x:(E/A)^(1/n)*y,dx:dy*(E/A)^(1/n)})\nshowmath( ex2.canonicalize_radical().full_simplify() )\n```\n\nLet's take out the expression that depends on the parameters $A$ and $E$:\n\n\n```python\nexpr2 = (ex2/dy*sqrt(-y^n + 1)).full_simplify()\nshowmath( expr2.canonicalize_radical() )\n```\n\n\n```python\nprefactor = expr2*sqrt(m/2)*4*1/sqrt(E)\nshowmath( prefactor.canonicalize_radical() )\n```\n\nFinally, we obtain:\n\n$$T=4 \\sqrt{\\frac{m}{2}}\\frac{1}{A^{1/n}} E^{\\frac{1}{n}-\\frac{1}{2}}\\int_0^{y_0}\\frac{dy}{\\sqrt{1- y^n}}$$\n\n\nFor $n=2$, dependence on $E$ disappears, as we already have seen in the previous case.\n\nWe still need to calculate the upper limit $y_0$ of integration. In the integral, the upper limit is the position in which the total energy is the potential energy:\n\n$$U(x)=E$$\n\nIn this case $$Ax^n=E.$$\n\nBy changing the variables we get:\n\n\n```python\nsolve( (A*x^n == E).subs({x:(E/A)^(1/n)*y}), y)\n```\n\nThat is, the integration limit is $y_0=1.$\n\nTherefore the period of oscillations is given by the relation:\n\n$$T=4 \\sqrt{\\frac{m}{2}}\\frac{1}{A^{1/n}} E^{\\frac{1}{n}-\\frac{1}{2}}\\int_0^{1}\\frac{dy}{\\sqrt{1- y^n}}$$\n\nWe note that only for the case $n=2$, the period $T$ does not depend on E (i.e. on initial conditions, i.e. on the distance). 
In other cases it depends on the total energy $E$, i.e. it depends on initial conditions, i.e. it depends on the distance between the points $-x_0$ and $x_0$. \n\nThe above equation shows how much time the particle needs to travel the distance for one oscillation in dependence on $E$ and in consequence on the distance: If energy $E$ is higher then the distance $4x_0$ is longer. \n\nThe scaled integral can be expressed by the beta function of Euler http://en.wikipedia.org/wiki/Beta_function.\nWe can calculate it:\n\n\n```python\nvar('a')\nassume(a,'integer')\nassume(a>0)\nprint( assumptions() )\n```\n\n\n```python\nintegrate(1/sqrt(1-x^(a)),(x,0,1) )\n```\n\nWe get a formula containing the beta function. It can be evaluated numerically for any values \u200b\u200bof the $a$ parameter.\n\n\n```python\n(beta(1/2,1/a)/a).subs({a:2}).n()\n```\n\nLet's examine this formula numerically. You can use the $\\beta$ function, or numerically estimate the integral. This second approach allows you to explore any potential, not just $U(x)=ax^n$.\n\n\n```python\ndef beta2(a,b):\n return gamma(a)*gamma(b)/gamma(a+b)\n\na_list = srange(0.02,5,0.1)\na_list2 = [1/4,1/3,1/2,1,2,3,4,5]\n\nintegr_list = [ integral_numerical( 1/sqrt(1-x^a_) ,0,1, algorithm='qng',rule=2)[0] \\\n for a_ in a_list ]\nintegr_list_analytical = [ beta2(1/2, 1/a_)/a_ for a_ in a_list2 ]\n```\n\nwe obtain some analytically simple formulas:\n\n\n```python\nshowmath( integr_list_analytical )\n```\n\nNot we can compare those analytical numbers with numerical results, for example on the plot:\n\n\n```python\nplt_num = list_plot(zip( a_list,integr_list), plotjoined=True )\nplt_anal = list_plot(zip( a_list2,integr_list_analytical),color='red')\n(plt_num + plt_anal).show(ymin=0,figsize=(6,2))\n```\n\nHaving an analytical solution, you can examine the asymptotics for large $n$:\n\n\n```python\nvar('x')\nasympt = limit( beta2(1/2, 1/x)/x,x=oo )\nasympt\n```\n\n\n```python\nplt_asympt = plot(asympt,(x,0,5),linestyle='dashed',color='gray')\n```\n\nLet's add a few points for which the integral takes exact values\n\n\n```python\nl = zip(a_list2[:5],integr_list_analytical[:5])\nshowmath(l)\n```\n\n\n```python\ndef plot_point_labels(l):\n p=[]\n for x,y in l:\n p.append( text( \"$(\"+latex(x)+\", \"+latex(y)+\")$\" ,(x+0.1,y+0.2) , fontsize=14,horizontal_alignment='left',color='gray') )\n p.append( point ( (x,y),size=75,color='red' ) )\n return sum(p)\n```\n\n\n```python\nsome_points = plot_point_labels(l)\n```\n\n\n```python\nplt_all = plt_num+plt_anal+plt_asympt+some_points\nplt_all.show(figsize=(6,3),ymin=0,ymax=7)\n```\n\n## Numerical convergence\n\nThe integral \n$$\\int_0^1 \u00a0\\frac{dx}{\\sqrt{1-x^n}}$$ seems to be divergent for $n:$\n\n\n```python\nshowmath( numerical_integral( 1/sqrt(1-x^(0.25)) , 0, 1) )\n```\n\nHowever, choosing the right algorithm gives the correct result:\n\n\n```python\na_ = 1/4. 
# exponent in integral\nintegral_numerical( 1/sqrt(1-abs(x)^a_) , 0, 1, algorithm='qags')\n```\n\nlets check it out with an exact formula:\n\n\n```python\n(beta(1/2,1/a)/a).subs({a:a_}).n()\n```\n\nIndeed, we see that carefull numerical integration gives finite result.\n\n## The dependence of period on energy for different $n$.\n\n\n```python\nvar('E x n')\ndef draw_E(n,figsize=(6,2.5)): \n p = []\n p2 = []\n p.append( plot(abs(x)^n,(x,-2,2),\\\n ymin=0,ymax=4,legend_label=r\"$U(x)=|x|^{%s}$\" % n ) )\n p.append( plot( (x)^2,(x,-2,2),\\\n color='gray',legend_label=r\"$U(x)=x^{2}$\",\\\n axes_labels=[r\"$x$\",r\"$U(x)$\"] ))\n \n p2.append( plot( 4/sqrt(2)*(beta(1/2, 1/n)/n)* E^(1/n-1/2),\\\n (E,0.00,3),ymin=0,ymax=7,axes_labels=[r\"$E$\",r\"$T$\"] ) )\n p2.append( plot( 4/sqrt(2)*(beta(1/2, 1/2)/2),\\\n (E,0.00,3) ,color='gray' ) )\n \n show( sum(p), figsize=figsize )\n show( sum(p2), figsize=figsize )\n \n```\n\n\n```python\n@interact\ndef _(n=slider([1/4,1/2,2/3,3/4,1,3/2,2,3])):\n draw_E(n)\n```\n\n\n```python\nimport os\nif 'PDF' in os.environ.keys():\n draw_E(1/2)\n draw_E(1)\n draw_E(4)\n```\n\nWe can plot the dependence of period $T$ on energy (i.e. amplitude) $T(E)$ for different values of $n$. In figure belowe we see that if $n>2$ then oscillations are faster as energy grows. On the other hand if $n<1$, oscillations are getting slower with growing energy.\n\nAnother interesting observation is that for potentials with $n>>1$ and $n<<1$ oscillations will become arbitrarily slow and fast, respectively when $E\\to0$. For $n>1$ the potential well is *shallow* at the bottom and *steep* far away from the minimum and for $n<1$ the opposite is true.\n\n\n```python\nn_s = [1/3,2/3, 1, 2, 3] \nplot( [4/sqrt(2)*(beta(1/2, 1/n_)/n_)* E^(1/n_-1/2) \\\n for n_ in n_s],\\\n (E,0.00,2),axes_labels=[r\"$E$\",r\"$T$\"],\\\n legend_label=['$n='+str(latex(n_))+'$' for n_ in n_s],\\\n ymax=7, figsize=(6,3) )\n \n```\n\n## Numerical integration of equations of motion \n\nHere we will investigate numerically period $T$ and compare with the analytical formula.\nFirst, let's have a closer look how the potential and force behave for different index $n$:\n\n\n\n```python\ndef plot_Uf(n_):\n U(x) = abs(x)^(n_)\n plt = plot( [U(x),-diff( U(x),x)],(x,-1,1),\\\n detect_poles='show',ymin=-3,ymax=3,\n legend_label=[r\"$U(x)$\",'$f(x)$'])\n plt.axes_labels([r\"$x$\",r\"$U(x)=|x|^%s$\"%latex(n_)])\n show(plt,figsize=(6,3))\n```\n\n\n```python\n@interact\ndef _(n_=slider(0.1,2,0.1)):\n plot_Uf(n_)\n```\n\n\n```python\nif 'PDF' in os.environ.keys():\n plot_Uf(1)\n plot_Uf(2)\n plot_Uf(1/2)\n```\n\nWe can see that for $n \\ge 1$ the force and potential are continuous. If $n=1$ then force has finite jump (discontinuity). Both those cases should not be a problem for numerical integration. \n\nHowever for $n<1$ we have infinite force at $x=0$.\n\n#### Problems with numerical integration\n\nThere is a possibility that if the ODE integrator comes too close, it will blow out!\n\nWe can fix this problem by softening the potential singularity by adding small number: \n$$ |x| \\to |x| + \\epsilon. 
$$\n\n\n```python\nvar('x',domain='real')\nvar('v t')\neps = 1e-6\nU(x) = (abs(x)+eps)^(1/2)\nshowmath( U.diff(x).expand().simplify() )\n```\n\nto make sure that Sage will not leave $x/|x|$ unsimplified we can do:\n\n\n```python\nw0 = SR.wild(0)\nw1 = SR.wild(1)\nf = -U.diff(x).subs({w0*w1/abs(w1):w0*sign(w1)})\n```\n\n\n```python\nshowmath( f(x) )\n```\n\n\n```python\node_pot = [v,f(x)]\n\nt_lst = srange(0,10,0.01)\nsol = desolve_odeint(ode_pot,[1,.0],t_lst,[x,v])\n```\n\n\n```python\np = line(zip(t_lst, sol[:,0])) + line(zip(t_lst, sol[:,1]), color='red')\np.axes_labels(['$t$','$x(t),v(t)$'])\np + plot(1,(x,0,10),linestyle='dashed',color='gray')\n```\n\nWe can evaluate the period $T$ from the trajectory obtained via numerical solutions. For this purpose one might need an interpolation of numerical table returned by `desolve_odeint`:\n\n\n```python\nimport numpy as np\ndef find_period(x,t):\n zero_list=[]\n x0 = x[0]\n for i in range(1,len(x)):\n if x[i]*x[i-1] < 0:\n zero_list.append( - (t[i-1]*x[i] - t[i]*x[i-1])/(x[i-1] - x[i]) )\n lnp = np.array(zero_list)\n return 2*( (lnp-np.roll(lnp,1))[1:] ).mean()\n```\n\n\n```python\nvar('x1 x2 t1 t2 a b ')\nshowmath( (-b/a).subs( solve([a*t1+b==x1,a*t2+b==x2],[a,b], solution_dict=True)[0] ) )\n```\n\nWe find numerically a period of trajectory:\n\n\n```python\nT = find_period( sol[:,0],t_lst)\nT\n```\n\n\n```python\nassert abs(T-7.54250)<1e-4\n```\n\nExact results for comparison:\n\n\n```python\n# for n=2 2*pi/sqrt(2)==(2*pi/sqrt(2)).n()\ntable( [[\"n\",\"T\"]]+[ [n_,((4/sqrt(2)*(beta(1/2, 1/n_)/n_)* E^(1/n_-1/2)).subs({E:1})).n()] \n for n_ in [1/4,1/3,1/2,2/3,1,2,3,4,5] ] )\n\n```\n\n## Using the formula for the period to reproduce the trajectory of movement\n\nWe take $m=1$ and $A=1$ (then $x=E$), then we can reproduce the trajectory reversing the formula for $T(E)$. \n\n\n```python\nvar('x')\nU(A,x) = A*x^2\nA = 1/2\nE = 1\nm = 1.\nx1=0.1\nshowmath( solve(E-U(A,x), x) )\n```\n\n\n```python\nt_lst = [ (sqrt(m/2.)*integrate( 1/sqrt(E-U(A,x)),(x,0,x1)).n(),x1) \\\n for x1 in srange(0,sqrt(2.)+1e-10,1e-2)]\n```\n\n\n```python\npoint(t_lst ,color='red')+\\\n plot(sqrt(2)*sin(x),(x,0,pi),figsize=(6,2))\n```\n\nInterestingly, if we known the dependence of $T(E)$ then we can calculate exactly the potential! 
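For completeness, a note on that inverse statement (a classical result, quoted here as a remark rather than derived): for a potential that is symmetric about its minimum, the turning point $x(U)$ can be recovered from the period through the Abel-type integral

$$
x(U) = \frac{1}{2\pi\sqrt{2m}}\int_0^U \frac{T(E)}{\sqrt{U-E}}\,dE .
$$

A quick consistency check: inserting the energy-independent harmonic period $T=2\pi\sqrt{m/(2A)}$ and using $\int_0^U dE/\sqrt{U-E}=2\sqrt{U}$ gives $x(U)=\sqrt{U/A}$, i.e. exactly $U(x)=Ax^2$.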
\n\n\n\\newpage\n", "meta": {"hexsha": "7f7f755c9b262e1286dd6f7e81e491b365331074", "size": 26100, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "012-1d_potential_well.ipynb", "max_stars_repo_name": "marcinofulus/Mechanics_with_SageMath", "max_stars_repo_head_hexsha": "6d13cb2e83cd4be063c9cfef6ce536564a25cf57", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "012-1d_potential_well.ipynb", "max_issues_repo_name": "marcinofulus/Mechanics_with_SageMath", "max_issues_repo_head_hexsha": "6d13cb2e83cd4be063c9cfef6ce536564a25cf57", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2022-01-30T16:45:58.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-30T16:45:58.000Z", "max_forks_repo_path": "012-1d_potential_well.ipynb", "max_forks_repo_name": "marcinofulus/Mechanics_with_SageMath", "max_forks_repo_head_hexsha": "6d13cb2e83cd4be063c9cfef6ce536564a25cf57", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-11-15T08:26:14.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-12T13:07:16.000Z", "avg_line_length": 25.9960159363, "max_line_length": 578, "alphanum_fraction": 0.5204597701, "converted": true, "num_tokens": 4766, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9111797003640646, "lm_q2_score": 0.9230391669503405, "lm_q1q2_score": 0.8410545515661071}} {"text": "# Examples of how to use SymPy\n\nSymPy is a Python package for symbolic manipulation. In this notebook we look at a few examples of useful operations, such as differentiation, integration, simplification and solving polynomials.\n\nSymPy suppports many more operations. The documentation can be found at: https://docs.sympy.org/latest/index.html\n\n\n```python\nfrom sympy import *\n```\n\nYou have to tell SymPy which variables are to be treated as symbols. In this notebook we'll make `x`, `y` and `z` symbols.\n\n\n```python\nx, y, z = symbols('x y z')\n```\n\n## Defining equations and displaying them\n\nWe can now write expressions, and assign them to a variable.\n\n\n```python\na = (x + x**2)/(x*sin(y)**2 + x*cos(y)**2)\na\n```\n\n\n\n\n$\\displaystyle \\frac{x^{2} + x}{x \\sin^{2}{\\left(y \\right)} + x \\cos^{2}{\\left(y \\right)}}$\n\n\n\n\n```python\nb = x**2 + 1\nb\n```\n\n\n\n\n$\\displaystyle x^{2} + 1$\n\n\n\nSimple simplifications are automatically applied\n\n\n```python\nb - 1\n```\n\n\n\n\n$\\displaystyle x^{2}$\n\n\n\nIf you define multiple expressions in a cell you can display them with the `display()` command\n\n\n```python\na = x**2 + x\ndisplay(a)\n\nb = 2*x + 3\ndisplay(b)\n\ndisplay(a+b)\n```\n\n\n$\\displaystyle x^{2} + x$\n\n\n\n$\\displaystyle 2 x + 3$\n\n\n\n$\\displaystyle x^{2} + 3 x + 3$\n\n\n## Simplification\n\nIn simple cases SymPy will automatically simplify. 
More complicated expressions can simplified using the `simplify()` command\n\n\n```python\na = (x-2)*(x+3)/((x-2)*(x-9))\ndisplay(a)\n\na = (x**2 +x -6)/(x**2 - 11*x+18)\ndisplay(a)\nsimplify(a)\n```\n\n\n$\\displaystyle \\frac{x + 3}{x - 9}$\n\n\n\n$\\displaystyle \\frac{x^{2} + x - 6}{x^{2} - 11 x + 18}$\n\n\n\n\n\n$\\displaystyle \\frac{x + 3}{x - 9}$\n\n\n\n`simplify()` will also use trigonometric identities\n\n\n```python\na = (x + x**2)/(x*sin(y)**2 + x*cos(y)**2)\ndisplay(a)\ndisplay(simplify(a))\n```\n\n\n$\\displaystyle \\frac{x^{2} + x}{x \\sin^{2}{\\left(y \\right)} + x \\cos^{2}{\\left(y \\right)}}$\n\n\n\n$\\displaystyle x + 1$\n\n\n## Differentiation\nExpressions can be differentiated using the `diff()` command. The first argument is the expression and the second argument is the variable you want to differential with respect too.\n\n\n```python\na = x**2 + 3*x + 4*y**2\ndisplay(diff(a,x))\ndisplay(diff(a,y))\n```\n\n\n$\\displaystyle 2 x + 3$\n\n\n\n$\\displaystyle 8 y$\n\n\nYou can also use trigonometric functions, such as `sin()`, `cos()`, `sinh()`, and their inverses `asin()`, `acos()`, `asinh()` etc\n\n\n```python\ndiff(asinh(x),x)\n```\n\n\n\n\n$\\displaystyle \\frac{1}{\\sqrt{x^{2} + 1}}$\n\n\n\nIt's easy to get complicated expressions. It is often useful to simplify the result.\n\n\n```python\nres = diff((x**2 +x -6)/(x**2 - 11*x+18))\ndisplay(res)\ndisplay(simplify(res))\n```\n\n\n$\\displaystyle \\frac{\\left(11 - 2 x\\right) \\left(x^{2} + x - 6\\right)}{\\left(x^{2} - 11 x + 18\\right)^{2}} + \\frac{2 x + 1}{x^{2} - 11 x + 18}$\n\n\n\n$\\displaystyle - \\frac{12}{x^{2} - 18 x + 81}$\n\n\n## Integration\n\nYou can also integrate using the `integrate()` command.\n\n\n```python\na = x/(x**2+2*x+1)\ndisplay(a)\n\nintegrate(a, x)\n```\n\n\n$\\displaystyle \\frac{x}{x^{2} + 2 x + 1}$\n\n\n\n\n\n$\\displaystyle \\log{\\left(x + 1 \\right)} + \\frac{1}{x + 1}$\n\n\n\nYou can also do definite integration:\n\n\n```python\nb = integrate(sin(x), (x,0,3))\ndisplay(b)\n```\n\n\n$\\displaystyle 1 - \\cos{\\left(3 \\right)}$\n\n\nSymPy will give the exact result. You can numerically evaluate it using `N()`. 
You can pass a second argument for the precision you wish to evaluate the result to.\n\n\n```python\ndisplay(N(b))\ndisplay(N(b, 50))\n```\n\n\n$\\displaystyle 1.98999249660045$\n\n\n\n$\\displaystyle 1.9899924966004454572715727947312613023936790966156$\n\n\nIf SymPy is not sure how to perform the integral it will just return it an unevaluated expression:\n\n\n```python\nintegrate(cos(x)/(x**2+1),(x,1,5))\n```\n\n\n\n\n$\\displaystyle \\int\\limits_{1}^{5} \\frac{\\cos{\\left(x \\right)}}{x^{2} + 1}\\, dx$\n\n\n\n## Solving equations\n\nSymPy can also solve some equations, such as polynomials.\n\n\n```python\nsolveset((x-2)*(x+3), x)\n```\n\n\n\n\n$\\displaystyle \\left\\{-3, 2\\right\\}$\n\n\n\n\n```python\nsolveset(x**2 +x -1, x)\n```\n\n\n\n\n$\\displaystyle \\left\\{- \\frac{1}{2} + \\frac{\\sqrt{5}}{2}, - \\frac{\\sqrt{5}}{2} - \\frac{1}{2}\\right\\}$\n\n\n\nHere a polynomial we encountered whilst looking for period-2 orbits related to root finding:\n\n$$ 16 x_p^9 - x_p(3 x_p^2 - 1)\\left[12x_p^6 - (3x_p^2 -1)^2\\right] = 0$$\n\nLet's find the roots\n\n\n```python\nsolveset(16*x**9 - x*(3*x**2 - 1)*(12*x**6 - (3*x**2-1)**2),x)\n```\n\n\n\n\n$\\displaystyle \\left\\{-1, 0, 1, - \\frac{\\sqrt{5}}{5}, \\frac{\\sqrt{5}}{5}, - \\frac{\\sqrt{2} \\cos{\\left(\\frac{\\operatorname{atan}{\\left(\\frac{\\sqrt{7}}{3} \\right)}}{2} \\right)}}{2} - \\frac{\\sqrt{2} i \\sin{\\left(\\frac{\\operatorname{atan}{\\left(\\frac{\\sqrt{7}}{3} \\right)}}{2} \\right)}}{2}, - \\frac{\\sqrt{2} \\cos{\\left(\\frac{\\operatorname{atan}{\\left(\\frac{\\sqrt{7}}{3} \\right)}}{2} \\right)}}{2} + \\frac{\\sqrt{2} i \\sin{\\left(\\frac{\\operatorname{atan}{\\left(\\frac{\\sqrt{7}}{3} \\right)}}{2} \\right)}}{2}, \\frac{\\sqrt{2} \\cos{\\left(\\frac{\\operatorname{atan}{\\left(\\frac{\\sqrt{7}}{3} \\right)}}{2} \\right)}}{2} - \\frac{\\sqrt{2} i \\sin{\\left(\\frac{\\operatorname{atan}{\\left(\\frac{\\sqrt{7}}{3} \\right)}}{2} \\right)}}{2}, \\frac{\\sqrt{2} \\cos{\\left(\\frac{\\operatorname{atan}{\\left(\\frac{\\sqrt{7}}{3} \\right)}}{2} \\right)}}{2} + \\frac{\\sqrt{2} i \\sin{\\left(\\frac{\\operatorname{atan}{\\left(\\frac{\\sqrt{7}}{3} \\right)}}{2} \\right)}}{2}\\right\\}$\n\n\n\n## Derive the weights for Simpson's method\n\nRecall that Lagrange Polynomial through points $f(a), f(b), f(c)$ is given by \n\n$$P_2(x) = f(a)\\frac{(x-b)(x-c)}{(a-b)(a-c)} + f(b)\\frac{(x-a)(x-c)}{(b-a)(b-c)} + f(c)\\frac{(x-a)(x-b)}{(c-a)(c-b)}$$\n\nWe integrate this to get Simpson's method:\n\n$$ \\int^b_a P_2(x) \\, dx = \\frac{b-a}{6}\\left[f(a) + 4 f\\left(\\frac{a+b}{2}\\right) + f(b)\\right]$$\n\nLet's do this integration with SymPy\n\n\n```python\na, b, c = symbols('a b c')\nf = Function('f')\n```\n\n\n```python\nc = (a+b)/2\n\nP2 = f(a)*(x-b)*(x-c)/((a-b)*(a-c)) + f(b)*(x-a)*(x-c)/((b-a)*(b-c)) + f(c)*(x-a)*(x-b)/((c-a)*(c-b))\n```\n\n\n```python\nintegral = integrate(P2,(x,a,b))\ndisplay(integral)\n```\n\n\n$\\displaystyle - \\frac{a^{3} \\left(2 f{\\left(a \\right)} + 2 f{\\left(b \\right)} - 4 f{\\left(\\frac{a}{2} + \\frac{b}{2} \\right)}\\right)}{3 a^{2} - 6 a b + 3 b^{2}} + \\frac{a^{2} \\left(a f{\\left(a \\right)} + 3 a f{\\left(b \\right)} - 4 a f{\\left(\\frac{a}{2} + \\frac{b}{2} \\right)} + 3 b f{\\left(a \\right)} + b f{\\left(b \\right)} - 4 b f{\\left(\\frac{a}{2} + \\frac{b}{2} \\right)}\\right)}{2 a^{2} - 4 a b + 2 b^{2}} - \\frac{a \\left(a^{2} f{\\left(b \\right)} + a b f{\\left(a \\right)} + a b f{\\left(b \\right)} - 4 a b f{\\left(\\frac{a}{2} + \\frac{b}{2} \\right)} + b^{2} f{\\left(a 
\\right)}\\right)}{a^{2} - 2 a b + b^{2}} + \\frac{b^{3} \\left(2 f{\\left(a \\right)} + 2 f{\\left(b \\right)} - 4 f{\\left(\\frac{a}{2} + \\frac{b}{2} \\right)}\\right)}{3 a^{2} - 6 a b + 3 b^{2}} - \\frac{b^{2} \\left(a f{\\left(a \\right)} + 3 a f{\\left(b \\right)} - 4 a f{\\left(\\frac{a}{2} + \\frac{b}{2} \\right)} + 3 b f{\\left(a \\right)} + b f{\\left(b \\right)} - 4 b f{\\left(\\frac{a}{2} + \\frac{b}{2} \\right)}\\right)}{2 a^{2} - 4 a b + 2 b^{2}} + \\frac{b \\left(a^{2} f{\\left(b \\right)} + a b f{\\left(a \\right)} + a b f{\\left(b \\right)} - 4 a b f{\\left(\\frac{a}{2} + \\frac{b}{2} \\right)} + b^{2} f{\\left(a \\right)}\\right)}{a^{2} - 2 a b + b^{2}}$\n\n\n\n```python\nres = simplify(integral)\ndisplay(res)\n```\n\n\n$\\displaystyle - \\frac{a f{\\left(a \\right)}}{6} - \\frac{a f{\\left(b \\right)}}{6} - \\frac{2 a f{\\left(\\frac{a}{2} + \\frac{b}{2} \\right)}}{3} + \\frac{b f{\\left(a \\right)}}{6} + \\frac{b f{\\left(b \\right)}}{6} + \\frac{2 b f{\\left(\\frac{a}{2} + \\frac{b}{2} \\right)}}{3}$\n\n\n\n```python\nfactor(res)\n```\n\n\n\n\n$\\displaystyle - \\frac{\\left(a - b\\right) \\left(f{\\left(a \\right)} + f{\\left(b \\right)} + 4 f{\\left(\\frac{a}{2} + \\frac{b}{2} \\right)}\\right)}{6}$\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "b347fce381584ec21db58f40fff2c1b662644c8f", "size": 19525, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "SymbolicManipulation/SymPyExamples.ipynb", "max_stars_repo_name": "gerryb123/nielsexamples", "max_stars_repo_head_hexsha": "ca1247475d94a0fcdf07ee4b37b58b70c69ca207", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 13, "max_stars_repo_stars_event_min_datetime": "2020-02-15T21:30:37.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-21T12:03:13.000Z", "max_issues_repo_path": "SymbolicManipulation/SymPyExamples.ipynb", "max_issues_repo_name": "gerryb123/nielsexamples", "max_issues_repo_head_hexsha": "ca1247475d94a0fcdf07ee4b37b58b70c69ca207", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "SymbolicManipulation/SymPyExamples.ipynb", "max_forks_repo_name": "gerryb123/nielsexamples", "max_forks_repo_head_hexsha": "ca1247475d94a0fcdf07ee4b37b58b70c69ca207", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 24, "max_forks_repo_forks_event_min_datetime": "2020-02-13T14:27:47.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-05T14:17:10.000Z", "avg_line_length": 24.872611465, "max_line_length": 1301, "alphanum_fraction": 0.4508578745, "converted": true, "num_tokens": 2897, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.939913354875362, "lm_q2_score": 0.8947894724152068, "lm_q1q2_score": 0.8410245749249322}} {"text": "# Exercise 1\n\n\n```python\nimport matplotlib.pyplot as plt\nfrom ipywidgets import interactive\n\nimport pandas as pd\nimport numpy as np\nimport sympy as sp\nfrom scipy.stats import norm\n\nfrom pylab import rcParams\nrcParams['figure.figsize'] = 12, 8\n```\n\n## 1.1 Distribution of a random variable X\nMake sure you understand what the probability and density functions are and how they are related to the cumulative distribution function.\nLet X denotes a continuous random variable, $X\\in[0, +\\infty]$, with a probability density function (pdf) $f(x)$ and cumulative distribution function (cdf) $F(x)$. 
\n* Please write down the relationship between $F(x)$ and $f(x)$.\n\n\n\n```python\nx,t = sp.symbols('x t')\nf = sp.Function('f')(x) # PDF\nF = sp.Function('F')(x) # CDF\n```\n\n\n```python\nequation_CDF = sp.Eq(F,sp.Integral(f,(t, -sp.oo, x)))\nequation_CDF\n```\n\n\n\n\n$\\displaystyle F{\\left(x \\right)} = \\int\\limits_{-\\infty}^{x} f{\\left(x \\right)}\\, dt$\n\n\n\n* Please write down the mean and variance of the random variable $X$\n\n\n```python\nx_mean = sp.symbols('\\overline{x}')\nsigma = sp.symbols('sigma') ## Standard deviation\nn = sp.symbols('n')\n```\n\n### Mean value:\n\n\n```python\nequation_mean_x = sp.Eq(x_mean,1/n*sp.Sum(x,(x, 1,n)))\nequation_mean_x\n```\n\n\n\n\n$\\displaystyle \\overline{x} = \\frac{\\sum_{x=1}^{n} x}{n}$\n\n\n\n### Variance\n\n\n```python\nequation_variance_x = sp.Eq(sigma**2,1/(n-1)*sp.Sum((x-x_mean)**2,(x, 1,n)))\nequation_variance_x\n```\n\n\n\n\n$\\displaystyle \\sigma^{2} = \\frac{\\sum_{x=1}^{n} \\left(- \\overline{x} + x\\right)^{2}}{n - 1}$\n\n\n\n## 1.2\tEmpirical distribution $F(x)$ of random variable X\n\n\n```python\nN=10000\nMU=10\nSIGMA=30\nx_sample = np.random.normal(loc=MU, scale=SIGMA, size=N)\n```\n\nCompute the empirical distribution of $X$, $F_1(x)$, and plot it in a figure.\n\n\n```python\nfig,ax = plt.subplots()\n\nvalues = np.sort(x_sample)\ndensity = np.arange(N)/N\n\nax.plot(values,density,'-');\nax.set_title('Empirical distribution $F(x)$');\nax.set_xlabel('x')\nax.set_ylabel('P(Xwww.tsaad.net)
    Department of Chemical Engineering
    University of Utah**\n
    \n\n# Example 1\n\nA system of nonlinear equations consists of several nonlinear functions - as many as there are unknowns. Solving a system of nonlinear equations means funding those points where the functions intersect each other. Consider for example the following system of equations\n\\begin{equation}\ny = 4x - 0.5 x^3\n\\end{equation}\n\\begin{equation}\ny = \\sin(x)e^{-x}\n\\end{equation}\n\nThe first step is to write these in residual form\n\\begin{equation}\nf_1 = y - 4x + 0.5 x^3,\\\\\nf_2 = y - \\sin(x)e^{-x}\n\\end{equation}\n\n\n\n```python\nimport numpy as np\nfrom numpy import cos, sin, pi, exp\n%matplotlib inline\n%config InlineBackend.figure_format = 'svg'\nimport matplotlib.pyplot as plt\nfrom scipy.optimize import fsolve\n```\n\n\n```python\ny1 = lambda x: 4 * x - 0.5 * x**3\ny2 = lambda x: sin(x)*exp(-x)\nx = np.linspace(-3.5,4,100)\nplt.ylim(-8,6)\nplt.plot(x,y1(x), 'k')\nplt.plot(x,y2(x), 'r')\nplt.grid()\nplt.savefig('example1.pdf')\n```\n\n\n \n\n \n\n\n\n```python\ndef newton_solver(F, J, x, tol):\n F_value = F(x)\n err = np.linalg.norm(F_value, ord=2) # l2 norm of vector\n err = tol + 100\n niter = 0\n while abs(err) > tol and niter < 100:\n J_value = J(x)\n delta = np.linalg.solve(J_value, -F_value)\n x = x + delta\n F_value = F(x)\n err = np.linalg.norm(F_value, ord=2)\n niter += 1\n\n # Here, either a solution is found, or too many iterations\n if abs(err) > tol:\n niter = -1\n return x, niter, err\n```\n\n\n```python\ndef F(xval):\n x = xval[0]\n y = xval[1]\n f1 = 0.5 * x**3 + y - 4*x\n f2 = y - sin(x)*exp(-x)\n return np.array([f1,f2])\n\n\ndef J(xval):\n x = xval[0]\n y = xval[1]\n return np.array([[1.5 * x**2 - 4, 1],\n [-cos(x)*exp(-x)+sin(x)*exp(-x), 1]])\n\ndef Jdiff(F,xval):\n n = len(xval)\n J = np.zeros((n,n))\n col = 0\n for x0 in xval:\n delta = np.zeros(n)\n delta[col] = 1e-4 * x0 + 1e-12\n diff = (F(xval + delta) - F(xval))/delta[col]\n J[:,col] = diff\n col += 1\n return J\n```\n\n\n```python\ntol = 1e-8\nxguess = np.array([-2,-4])\nx, n, err = newton_solver(F, J, xguess, tol)\nprint (n, x)\nprint ('Error Norm =',err)\n```\n\n 4 [-1.46110592 -4.28481694]\n Error Norm = 1.523439979143842e-11\n\n\n\n```python\nfsolve(F,(-2,-4))\n```\n\n\n\n\n array([-1.46110592, -4.28481694])\n\n\n\n# Example 2\nFind the roots of the following system of equations\n\\begin{equation}\nx^2 + y^2 = 1, \\\\\ny = x^3 - x + 1\n\\end{equation}\nFirst we assign $x_1 \\equiv x$ and $x_2 \\equiv y$ and rewrite the system in residual form\n\\begin{equation}\nf_1(x_1,x_2) = x_1^2 + x_2^2 - 1, \\\\\nf_2(x_1,x_2) = x_1^3 - x_1 - x_2 + 1\n\\end{equation}\n\n\n\n```python\nx = np.linspace(-1,1)\ny1 = lambda x: x**3 - x + 1\ny2 = lambda x: np.sqrt(1 - x**2)\nplt.plot(x,y1(x), 'k')\nplt.plot(x,y2(x), 'r')\nplt.grid()\n```\n\n\n \n\n \n\n\n\n```python\ndef F(xval):\n x1 = xval[0]\n x2 = xval[1]\n f1 = x1**2 + x2**2 - 1\n f2 = x1**3 - x1 + 1 - x2\n return np.array([f1,f2])\n\n\ndef J(xval):\n x1 = xval[0]\n x2 = xval[1]\n return np.array([[2.0*x1, 2.0*x2],\n [3.0*x1**2 -1, -1]])\n```\n\n\n```python\ntol = 1e-8\nxguess = np.array([0.5,0.5])\nx, n, err = newton_solver(F, J, xguess, tol)\nprint (n, x)\nprint ('Error Norm =',err)\n```\n\n 6 [0.74419654 0.66796071]\n Error Norm = 4.965068306494546e-16\n\n\n\n```python\nfsolve(F,(0.5,0.5))\n```\n\n\n\n\n array([0.74419654, 0.66796071])\n\n\n\n\n```python\nimport urllib\nimport requests\nfrom IPython.core.display import HTML\ndef css_styling():\n styles = 
requests.get(\"https://raw.githubusercontent.com/saadtony/NumericalMethods/master/styles/custom.css\")\n return HTML(styles.text)\ncss_styling()\n\n```\n\n\n\n\nCSS style adapted from https://github.com/barbagroup/CFDPython. Copyright (c) Barba group\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n", "meta": {"hexsha": "2743bd5cd18d12374778547e4c42459055d1b12d", "size": 77641, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "topics/nonlinear-equations/Nonlinear System Demo.ipynb", "max_stars_repo_name": "jomorodi/NumericalMethods", "max_stars_repo_head_hexsha": "e040693001941079b2e0acc12e0c3ee5c917671c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2019-03-27T05:22:34.000Z", "max_stars_repo_stars_event_max_datetime": "2021-01-27T10:49:13.000Z", "max_issues_repo_path": "topics/nonlinear-equations/Nonlinear System Demo.ipynb", "max_issues_repo_name": "jomorodi/NumericalMethods", "max_issues_repo_head_hexsha": "e040693001941079b2e0acc12e0c3ee5c917671c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "topics/nonlinear-equations/Nonlinear System Demo.ipynb", "max_forks_repo_name": "jomorodi/NumericalMethods", "max_forks_repo_head_hexsha": "e040693001941079b2e0acc12e0c3ee5c917671c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 7, "max_forks_repo_forks_event_min_datetime": "2019-12-29T23:31:56.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-28T19:04:10.000Z", "avg_line_length": 39.4116751269, "max_line_length": 277, "alphanum_fraction": 0.4625520022, "converted": true, "num_tokens": 2152, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.901920681802153, "lm_q2_score": 0.9324533022906133, "lm_q1q2_score": 0.840998918150619}} {"text": "Tra\u00e7ar um esbo\u00e7o do gr\u00e1fico e obter uma equa\u00e7\u00e3o da par\u00e1bola que satisfa\u00e7a as condi\u00e7\u00f5es dadas.\n\n24. Foco:$F(3,-1)$; diretriz: $2x - 1 = 0$

    \n\nArrumando a equa\u00e7\u00e3o da diretriz

    \n$d: x = \\frac{1}{2}$


    \nFrom a sketch it is possible to see that the parabola is parallel to the $x$ axis, so its equation has the form $(y-k)^2 = 2p(x-h)$<br>

    \n\nKnowing that the distance from the directrix to the focus is $p$, we can compute this distance and then take $\frac{p}{2}$, using the point $P(\frac{1}{2},-1)$ on the directrix<br>

    \n$p = \\sqrt{(3-\\frac{1}{2})^2 + (-1-(-1))^2}$

    \n$p = \\sqrt{(\\frac{5}{2})^2 + 0}$

    \n$p = \\pm \\sqrt{\\frac{25}{4}}$

    \n$p = \\frac{5}{2}$

    \n$\\frac{p}{2} = \\frac{5}{4}$

    \nAdding $\frac{p}{2}$ to the $x$ coordinate of the directrix, we obtain the coordinates of the vertex<br>

    \n$V(\\frac{7}{4},-1)$

    \nNow substituting the coordinates of the vertex and the value of $p$ into the formula, we have<br>

    \n$(y-(-1))^2 = 2 \\cdot \\frac{5}{2} \\cdot (x-\\frac{7}{4}) $

    \n$(y+1)^2 = 5(x-\\frac{7}{4})$

    \n$y^2 + 2y + 1 = 5x - \\frac{35}{4}$

    \n$y^2 + 2y - 5x + 1 + \frac{35}{4} = 0$<br>

    \nCombining over a common denominator and adding $1 + \frac{35}{4}$, we get<br>

    \n$y^2 + 2y - 5x + \\frac{39}{4} = 0$

    \nMultiplying by $4$<br>

    \n$4y^2 + 8y - 20x + 39 = 0$<br>

    \nGr\u00e1fico da par\u00e1bola\n\n\n```python\nfrom sympy import *\nfrom sympy.plotting import plot_implicit\nx, y = symbols(\"x y\")\nplot_implicit(Eq((y+1)**2, 5*(x-7/4)), (x,-20,20), (y,-20,20),\ntitle=u'Gr\u00e1fico da par\u00e1bola', xlabel='x', ylabel='y');\n```\n", "meta": {"hexsha": "b269cc0a2cf8d3c170e29443d07bfc860c3a0a57", "size": 14354, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Problemas Propostos. Pag. 172 - 175/24.ipynb", "max_stars_repo_name": "mateuschaves/GEOMETRIA-ANALITICA", "max_stars_repo_head_hexsha": "bc47ece7ebab154e2894226c6d939b7e7f332878", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-02-03T16:40:45.000Z", "max_stars_repo_stars_event_max_datetime": "2020-02-03T16:40:45.000Z", "max_issues_repo_path": "Problemas Propostos. Pag. 172 - 175/24.ipynb", "max_issues_repo_name": "mateuschaves/GEOMETRIA-ANALITICA", "max_issues_repo_head_hexsha": "bc47ece7ebab154e2894226c6d939b7e7f332878", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Problemas Propostos. Pag. 172 - 175/24.ipynb", "max_forks_repo_name": "mateuschaves/GEOMETRIA-ANALITICA", "max_forks_repo_head_hexsha": "bc47ece7ebab154e2894226c6d939b7e7f332878", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 156.0217391304, "max_line_length": 11544, "alphanum_fraction": 0.8691653894, "converted": true, "num_tokens": 729, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9019206712569267, "lm_q2_score": 0.9324533069832973, "lm_q1q2_score": 0.8409989125501166}} {"text": "# Assignment 1\n\nThe goal of this assignment is to supply you with machine learning models and algorithms. In this notebook, we will cover linear and nonlinear models, the concept of loss functions and some optimization techniques. All mathematical operations should be implemented in **NumPy** only. \n\n\n## Table of contents\n* [1. Logistic Regression](#1.-Logistic-Regression)\n * [1.1 Linear Mapping](#1.1-Linear-Mapping)\n * [1.2 Sigmoid](#1.2-Sigmoid)\n * [1.3 Negative Log Likelihood](#1.3-Negative-Log-Likelihood)\n * [1.4 Model](#1.4-Model)\n * [1.5 Simple Experiment](#1.5-Simple-Experiment)\n* [2. Decision Tree](#2.-Decision-Tree)\n * [2.1 Gini Index & Data Split](#2.1-Gini-Index-&-Data-Split)\n * [2.2 Terminal Node](#2.2-Terminal-Node)\n * [2.3 Build the Decision Tree](#2.3-Build-the-Decision-Tree)\n* [3. Experiments](#3.-Experiments)\n * [3.1 Decision Tree for Heart Disease Prediction](#3.1-Decision-Tree-for-Heart-Disease-Prediction) \n * [3.2 Logistic Regression for Heart Disease Prediction](#3.2-Logistic-Regression-for-Heart-Disease-Prediction)\n\n### Note\nSome of the concepts below have not (yet) been discussed during the lecture. These will be discussed further during the next lectures. \n\n### Before you begin\n\nTo check whether the code you've written is correct, we'll use **automark**. For this, we created for each of you an account with the username being your student number. 
\n\n\n```python\nimport automark as am\n\n# fill in you student number as your username\nusername = '13300040'\n\n# to check your progress, you can run this function\nam.get_progress(username)\n```\n\n ---------------------------------------------\n | Magdalena Mladenova |\n | magdalena.mladenova@student.uva.nl |\n ---------------------------------------------\n | linear_forward | completed |\n | linear_grad_W | completed |\n | linear_grad_b | completed |\n | nll_forward | completed |\n | nll_grad_input | not attempted |\n | sigmoid_forward | completed |\n | sigmoid_grad_input | completed |\n | tree_gini_index | not attempted |\n | tree_split_data_left | not attempted |\n | tree_split_data_right | not attempted |\n | tree_to_terminal | not attempted |\n ---------------------------------------------\n\n\nSo far all your tests are 'not attempted'. At the end of this notebook you'll need to have completed all test. The output of `am.get_progress(username)` should at least match the example below. However, we encourage you to take a shot at the 'not attempted' tests!\n\n```\n---------------------------------------------\n| Your name / student number |\n| your_email@your_domain.whatever |\n---------------------------------------------\n| linear_forward | not attempted |\n| linear_grad_W | not attempted |\n| linear_grad_b | not attempted |\n| nll_forward | not attempted |\n| nll_grad_input | not attempted |\n| sigmoid_forward | not attempted |\n| sigmoid_grad_input | not attempted |\n| tree_data_split_left | not attempted |\n| tree_data_split_right | not attempted |\n| tree_gini_index | not attempted |\n| tree_to_terminal | not attempted |\n---------------------------------------------\n```\n\n\n```python\nfrom __future__ import print_function, absolute_import, division \nimport numpy as np \n```\n\nThis notebook makes use of **classes** and their **instances** that we have already implemented for you. It allows us to write less code and make it more readable. If you are interested in it, here are some useful links:\n* The official [documentation](https://docs.python.org/3/tutorial/classes.html) \n* Video by *sentdex*: [Object Oriented Programming Introduction](https://www.youtube.com/watch?v=ekA6hvk-8H8)\n* Antipatterns in OOP: [Stop Writing Classes](https://www.youtube.com/watch?v=o9pEzgHorH0)\n\n# 1. Logistic Regression\n\nWe start with a very simple algorithm called **Logistic Regression**. It is a generalized linear model for 2-class classification.\nIt can be generalized to the case of many classes and to non-linear cases as well. However, here we consider only the simplest case. \n\nLet us consider a data with 2 classes. Class 0 and class 1. For a given test sample, logistic regression returns a value from $[0, 1]$ which is interpreted as a probability of belonging to class 1. The set of points for which the prediction is $0.5$ is called a *decision boundary*. It is a line on a plane or a hyper-plane in a space.\n\n\n\nLogistic regression has two trainable parameters: a weight $W$ and a bias $b$. For a vector of features $X$, the prediction of logistic regression is given by\n\n$$\nf(X) = \\frac{1}{1 + \\exp(-[XW + b])} = \\sigma(h(X))\n$$\nwhere $\\sigma(z) = \\frac{1}{1 + \\exp(-z)}$ and $h(X)=XW + b$.\n\nParameters $W$ and $b$ are fitted by maximizing the log-likelihood (or minimizing the negative log-likelihood) of the model on the training data. 
For a training subset $\\{X_j, Y_j\\}_{j=1}^N$ the normalized negative log likelihood (NLL) is given by \n\n$$\n\\mathcal{L} = -\\frac{1}{N}\\sum_j \\log\\Big[ f(X_j)^{Y_j} \\cdot (1-f(X_j))^{1-Y_j}\\Big]\n= -\\frac{1}{N}\\sum_j \\Big[ Y_j\\log f(X_j) + (1-Y_j)\\log(1-f(X_j))\\Big]\n$$\n\nThere are different ways of fitting this model. In this assignment we consider Logistic Regression as a one-layer neural network. We use the following algorithm for the **forward** pass:\n\n1. Linear mapping: $h=XW + b$\n2. Sigmoid activation function: $f=\\sigma(h)$\n3. Calculation of NLL: $\\mathcal{L} = -\\frac{1}{N}\\sum_j \\Big[ Y_j\\log f_j + (1-Y_j)\\log(1-f_j)\\Big]$\n\nIn order to fit $W$ and $b$ we perform Gradient Descent ([GD](https://en.wikipedia.org/wiki/Gradient_descent)). We choose a small learning rate $\\gamma$ and after each computation of forward pass, we update the parameters \n\n$$W_{\\text{new}} = W_{\\text{old}} - \\gamma \\frac{\\partial \\mathcal{L}}{\\partial W}$$\n\n$$b_{\\text{new}} = b_{\\text{old}} - \\gamma \\frac{\\partial \\mathcal{L}}{\\partial b}$$\n\nWe use Backpropagation method ([BP](https://en.wikipedia.org/wiki/Backpropagation)) to calculate the partial derivatives of the loss function with respect to the parameters of the model.\n\n$$\n\\frac{\\partial\\mathcal{L}}{\\partial W} = \n\\frac{\\partial\\mathcal{L}}{\\partial h} \\frac{\\partial h}{\\partial W} =\n\\frac{\\partial\\mathcal{L}}{\\partial f} \\frac{\\partial f}{\\partial h} \\frac{\\partial h}{\\partial W}\n$$\n\n$$\n\\frac{\\partial\\mathcal{L}}{\\partial b} = \n\\frac{\\partial\\mathcal{L}}{\\partial h} \\frac{\\partial h}{\\partial b} =\n\\frac{\\partial\\mathcal{L}}{\\partial f} \\frac{\\partial f}{\\partial h} \\frac{\\partial h}{\\partial b}\n$$\n\n## 1.1 Linear Mapping\nFirst of all, you need to implement the forward pass of a linear mapping:\n$$\nh(X) = XW +b\n$$\n\n**Note**: here we use `n_out` as the dimensionality of the output. For logisitc regression `n_out = 1`. However, we will work with cases of `n_out > 1` in next assignments. You will **pass** the current assignment even if your implementation works only in case `n_out = 1`. If your implementation works for the cases of `n_out > 1` then you will not have to modify your method next week. All **numpy** operations are generic. It is recommended to use numpy when is it possible.\n\n\n```python\ndef linear_forward(x_input, W, b):\n \"\"\"Perform the mapping of the input\n # Arguments\n x_input: input of the linear function - np.array of size `(n_objects, n_in)`\n W: np.array of size `(n_in, n_out)`\n b: np.array of size `(n_out,)`\n # Output\n the output of the linear function \n np.array of size `(n_objects, n_out)`\n \"\"\"\n #assert that dimension x_input and weight are compatible \n XW = x_input.dot(W)\n output = XW + b \n return output\n```\n\nLet's check your first function. 
We set the matrices $X, W, b$:\n$$\nX = \\begin{bmatrix}\n1 & -1 \\\\\n-1 & 0 \\\\\n1 & 1 \\\\\n\\end{bmatrix} \\quad\nW = \\begin{bmatrix}\n4 \\\\\n2 \\\\\n\\end{bmatrix} \\quad\nb = \\begin{bmatrix}\n3 \\\\\n\\end{bmatrix}\n$$\n\nAnd then compute \n$$\nXW = \\begin{bmatrix}\n1 & -1 \\\\\n-1 & 0 \\\\\n1 & 1 \\\\\n\\end{bmatrix}\n\\begin{bmatrix}\n4 \\\\\n2 \\\\\n\\end{bmatrix} =\n\\begin{bmatrix}\n2 \\\\\n-4 \\\\\n6 \\\\\n\\end{bmatrix} \\\\\nXW + b = \n\\begin{bmatrix}\n5 \\\\\n-1 \\\\\n9 \\\\\n\\end{bmatrix} \n$$\n\n\n```python\nX_test = np.array([[1, -1],\n [-1, 0],\n [1, 1]])\n\nW_test = np.array([[4],\n [2]])\n\nb_test = np.array([3])\n\nh_test = linear_forward(X_test, W_test, b_test)\nprint(h_test)\n```\n\n [[ 5]\n [-1]\n [ 9]]\n\n\n\n```python\nam.test_student_function(username, linear_forward, ['x_input', 'W', 'b'])\n```\n\n Running local tests...\n linear_forward successfully passed local tests\n Running remote test...\n Test was successful. Congratulations!\n\n\nNow you need to implement the calculation of the partial derivative of the loss function with respect to the parameters of the model. As this expressions are used for the updates of the parameters, we refer to them as gradients.\n$$\n\\frac{\\partial \\mathcal{L}}{\\partial W} = \n\\frac{\\partial \\mathcal{L}}{\\partial h}\n\\frac{\\partial h}{\\partial W} \\\\\n\\frac{\\partial \\mathcal{L}}{\\partial b} = \n\\frac{\\partial \\mathcal{L}}{\\partial h}\n\\frac{\\partial h}{\\partial b} \\\\\n$$\n\n$$\n\\mathcal{L} = -\\frac{1}{N}\\sum_j \\log\\Big[ f(X_j)^{Y_j} \\cdot (1-f(X_j))^{1-Y_j}\\Big]\n= -\\frac{1}{N}\\sum_j \\Big[ Y_j\\log f(X_j) + (1-Y_j)\\log(1-f(X_j))\\Big]\n$$\n\n\n```python\n#from sympy import symbols, diff\n\ndef linear_grad_W(x_input, grad_output, W, b):\n \"\"\"Calculate the partial derivative of \n the loss with respect to W parameter of the function\n dL / dW = (dL / dh) * (dh / dW)\n # Arguments\n x_input: input of a dense layer - np.array of size `(n_objects, n_in)`\n grad_output: partial derivative of the loss functions with \n respect to the ouput of the dense layer (dL / dh)\n np.array of size `(n_objects, n_out)`\n # Output\n the partial derivative of the loss \n with respect to W parameter of the function\n np.array of size `(n_in, n_out)`\n \"\"\"\n x_input = x_input.transpose()\n dl_dh = grad_output\n grad_W = x_input.dot(grad_output) \n return grad_W\n```\n\n\n```python\nam.test_student_function(username, linear_grad_W, ['x_input', 'grad_output','W', 'b'])\n```\n\n Running local tests...\n linear_grad_W successfully passed local tests\n Running remote test...\n Test was successful. 
```python\ndef linear_grad_b(x_input, grad_output, W, b):\n    \"\"\"Calculate the partial derivative of \n    the loss with respect to b parameter of the function\n    dL / db = (dL / dh) * (dh / db)\n    # Arguments\n        x_input: input of a dense layer - np.array of size `(n_objects, n_in)`\n        grad_output: partial derivative of the loss function with \n            respect to the output of the linear function (dL / dh)\n            np.array of size `(n_objects, n_out)`\n        W: np.array of size `(n_in, n_out)`\n        b: np.array of size `(n_out,)`\n    # Output\n        the partial derivative of the loss \n        with respect to b parameter of the linear function\n        np.array of size `(n_out,)`\n    \"\"\"\n    # dL / db is the column-wise sum of dL / dh over the objects\n    grad_b = grad_output.sum(axis=0)\n    return grad_b\n```\n\n\n```python\nam.test_student_function(username, linear_grad_b, ['x_input', 'grad_output', 'W', 'b'])\n```\n\n\n```python\nam.get_progress(username)\n```\n\n## 1.2 Sigmoid\n$$\nf = \\sigma(h) = \\frac{1}{1 + e^{-h}} \n$$\n\nThe sigmoid function is applied element-wise. It does not change the dimensionality of the tensor, and its implementation is shape-agnostic in general.\n\n\n```python\ndef sigmoid_forward(x_input):\n    \"\"\"sigmoid nonlinearity\n    # Arguments\n        x_input: np.array of size `(n_objects, n_in)`\n    # Output\n        the output of the sigmoid layer\n        np.array of size `(n_objects, n_in)`\n    \"\"\"\n    output = 1 / (1 + np.exp(-x_input))\n    return output\n```\n\n\n```python\nam.test_student_function(username, sigmoid_forward, ['x_input'])\n```\n\n Running local tests...\n sigmoid_forward successfully passed local tests\n Running remote test...\n Test was successful. Congratulations!\n\n\nNow you need to implement the calculation of the partial derivative of the loss function with respect to the input of the sigmoid. \n\n$$\n\\frac{\\partial \\mathcal{L}}{\\partial h} = \n\\frac{\\partial \\mathcal{L}}{\\partial f}\n\\frac{\\partial f}{\\partial h} \n$$\n\nThe tensor $\\frac{\\partial \\mathcal{L}}{\\partial f}$ comes from the loss function. Let's calculate $\\frac{\\partial f}{\\partial h}$\n\n$$\n\\frac{\\partial f}{\\partial h} = \n\\frac{\\partial \\sigma(h)}{\\partial h} =\n\\frac{\\partial}{\\partial h} \\Big(\\frac{1}{1 + e^{-h}}\\Big)\n= \\frac{e^{-h}}{(1 + e^{-h})^2}\n= \\frac{1}{1 + e^{-h}} \\frac{e^{-h}}{1 + e^{-h}}\n= f(h) (1 - f(h))\n$$\n\nTherefore, in order to calculate the gradient of the loss with respect to the input of the sigmoid function you need to\n1. calculate $f(h) (1 - f(h))$ \n2. multiply it element-wise by $\\frac{\\partial \\mathcal{L}}{\\partial f}$\n\n\n```python\ndef sigmoid_grad_input(x_input, grad_output):\n    \"\"\"sigmoid nonlinearity gradient. \n    Calculate the partial derivative of the loss \n    with respect to the input of the layer\n    # Arguments\n        x_input: np.array of size `(n_objects, n_in)`\n        grad_output: np.array of size `(n_objects, n_in)` \n            dL / df\n    # Output\n        the partial derivative of the loss \n        with respect to the input of the function\n        np.array of size `(n_objects, n_in)` \n        dL / dh\n    \"\"\"\n    # f(h) (1 - f(h)), computed element-wise, then scaled by dL / df\n    f = 1 / (1 + np.exp(-x_input))\n    f_deriv = f * (1 - f)\n    grad_input = f_deriv * grad_output\n    return grad_input\n```\n\n\n```python\nam.test_student_function(username, sigmoid_grad_input, ['x_input', 'grad_output'])\n```\n\n Running local tests...\n sigmoid_grad_input successfully passed local tests\n Running remote test...\n Test was successful. Congratulations!\n\n
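If you want to double-check the derivative formula $\\frac{\\partial f}{\\partial h} = f(h)(1 - f(h))$ numerically, a central finite difference should reproduce it up to rounding error. The snippet below is only a sketch: the probe points and the step size `eps` are arbitrary choices, and `sigmoid_forward` is assumed to be the function implemented above.\n\n\n```python\nimport numpy as np\n\n# central-difference check of d sigma / dh = f (1 - f)\nh_probe = np.array([[-2.0, -0.5, 0.0, 0.5, 2.0]])\neps = 1e-6\n\nf = sigmoid_forward(h_probe)\nanalytic = f * (1 - f)\nnumeric = (sigmoid_forward(h_probe + eps) - sigmoid_forward(h_probe - eps)) / (2 * eps)\n\nprint(np.max(np.abs(analytic - numeric)))  # should be close to 0\n```\n\n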
## 1.3 Negative Log Likelihood\n\n$$\n\\mathcal{L} \n= -\\frac{1}{N}\\sum_j \\Big[ Y_j\\log \\dot{Y}_j + (1-Y_j)\\log(1-\\dot{Y}_j)\\Big]\n$$\n\nHere $N$ is the number of objects, $Y_j$ is the real label of an object and $\\dot{Y}_j$ is the predicted one.\n\n\n```python\ndef nll_forward(target_pred, target_true):\n    \"\"\"Compute the value of NLL\n    for a given prediction and the ground truth\n    # Arguments\n        target_pred: predictions - np.array of size `(n_objects, 1)`\n        target_true: ground truth - np.array of size `(n_objects, 1)`\n    # Output\n        the value of NLL for a given prediction and the ground truth\n        scalar\n    \"\"\"\n    # number of objects (predictions)\n    n = target_pred.shape[0]\n\n    loss_array = target_true * np.log(target_pred) + (1 - target_true) * np.log(1 - target_pred)\n    loss = -(loss_array.sum() / n)\n    return loss\n```\n\n\n```python\nam.test_student_function(username, nll_forward, ['target_pred', 'target_true'])\n```\n\n Running local tests...\n nll_forward successfully passed local tests\n Running remote test...\n Test was successful. Congratulations!\n\n\n\n```python\n# Creating variables for testing:\nprediction_array = np.array([[0.99],[0.1],[0.1],[0.98]])\ntrue_values_array = np.array([[1],[0],[0],[1]])\n\n# compare to matrices:\nprediction_matrix = np.matrix(\"1;0;0;1\")\ntrue_values_matrix = np.matrix(\"1;1;0;1\")\n# same column layout BUT\n# `*` on np.matrix performs matrix multiplication, not the element-wise (Hadamard) product\n\n# prediction_array * true_values_array\n```\n\n\n```python\nnll_forward(prediction_array, true_values_array)\n```\n\n\n\n\n 0.06024351862166837\n\n\n\nNow you need to calculate the partial derivative of NLL with respect to its input.\n\n$$\n\\frac{\\partial \\mathcal{L}}{\\partial \\dot{Y}}\n=\n\\begin{pmatrix}\n\\frac{\\partial \\mathcal{L}}{\\partial \\dot{Y}_0} \\\\\n\\frac{\\partial \\mathcal{L}}{\\partial \\dot{Y}_1} \\\\\n\\vdots \\\\\n\\frac{\\partial \\mathcal{L}}{\\partial \\dot{Y}_{N-1}}\n\\end{pmatrix}\n$$\n\nLet's do it step-by-step\n\n\\begin{equation}\n\\begin{split}\n\\frac{\\partial \\mathcal{L}}{\\partial \\dot{Y}_0} \n&= \\frac{\\partial}{\\partial \\dot{Y}_0} \\Big(-\\frac{1}{N}\\sum_j \\Big[ Y_j\\log \\dot{Y}_j + (1-Y_j)\\log(1-\\dot{Y}_j)\\Big]\\Big) \\\\\n&= -\\frac{1}{N} \\frac{\\partial}{\\partial \\dot{Y}_0} \\Big(Y_0\\log \\dot{Y}_0 + (1-Y_0)\\log(1-\\dot{Y}_0)\\Big) \\\\\n&= -\\frac{1}{N} \\Big(\\frac{Y_0}{\\dot{Y}_0} - \\frac{1-Y_0}{1-\\dot{Y}_0}\\Big)\n= \\frac{1}{N} \\frac{\\dot{Y}_0 - Y_0}{\\dot{Y}_0 (1 - \\dot{Y}_0)}\n\\end{split}\n\\end{equation}\n\nAnd for the other components it can be done in exactly the same way. 
So the result is the vector where each component is given by \n$$\\frac{1}{N} \\frac{\\dot{Y}_j - Y_j}{\\dot{Y}_j (1 - \\dot{Y}_j)}$$\n\nOr, if we assume all multiplications and divisions to be done element-wise, the output can be calculated as\n$$\n\\frac{\\partial \\mathcal{L}}{\\partial \\dot{Y}} = \\frac{1}{N} \\frac{\\dot{Y} - Y}{\\dot{Y} (1 - \\dot{Y})}\n$$\n\n\n```python\ndef nll_grad_input(target_pred, target_true):\n    \"\"\"Compute the partial derivative of NLL\n    with respect to its input\n    # Arguments\n        target_pred: predictions - np.array of size `(n_objects, 1)`\n        target_true: ground truth - np.array of size `(n_objects, 1)`\n    # Output\n        the partial derivative \n        of NLL with respect to its input\n        np.array of size `(n_objects, 1)`\n    \"\"\"\n    n = target_pred.shape[0]\n    grad_input = (target_pred - target_true) / (n * target_pred * (1 - target_pred))\n    return grad_input\n```\n\n\n```python\nam.test_student_function(username, nll_grad_input, ['target_pred', 'target_true'])\n```\n\n Running local tests...\n nll_grad_input successfully passed local tests\n Running remote test...\n Test was successful. Congratulations!\n\n
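One practical caveat, not required for this assignment: both the NLL and its gradient involve $\\log \\dot{Y}$ and a division by $\\dot{Y}(1 - \\dot{Y})$, so predictions exactly equal to 0 or 1 produce `inf`/`NaN`. The provided tests do not hit this case, but if you experiment further, a common safeguard is to clip the predictions into the open interval $(0, 1)$ first. The helper below is only a sketch; `clip_predictions` and the value of `epsilon` are arbitrary choices, not part of the hand-out.\n\n\n```python\nimport numpy as np\n\ndef clip_predictions(target_pred, epsilon=1e-12):\n    \"\"\"Keep predictions strictly inside (0, 1) before computing the NLL or its gradient.\"\"\"\n    return np.clip(target_pred, epsilon, 1 - epsilon)\n\n# e.g. nll_forward(clip_predictions(prediction_array), true_values_array)\n```\n\n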
```python\nam.get_progress(username)\n```\n\n ---------------------------------------------\n | Magdalena Mladenova |\n | magdalena.mladenova@student.uva.nl |\n ---------------------------------------------\n | linear_forward | completed |\n | linear_grad_W | completed |\n | linear_grad_b | completed |\n | nll_forward | completed |\n | nll_grad_input | completed |\n | sigmoid_forward | completed |\n | sigmoid_grad_input | completed |\n | tree_gini_index | completed |\n | tree_split_data_left | completed |\n | tree_split_data_right | completed |\n | tree_to_terminal | completed |\n ---------------------------------------------\n\n\n## 1.4 Model\n\nHere we provide a model for you. It consists of the functions which you have implemented above.\n\n\n```python\nclass LogsticRegressionGD(object):\n    \n    def __init__(self, n_in, lr=0.05):\n        super().__init__()\n        self.lr = lr\n        self.b = np.zeros(1, )\n        self.W = np.random.randn(n_in, 1)\n    \n    def forward(self, x):\n        self.h = linear_forward(x, self.W, self.b)\n        y = sigmoid_forward(self.h)\n        return y\n    \n    def update_params(self, x, nll_grad):\n        # compute gradients\n        grad_h = sigmoid_grad_input(self.h, nll_grad)\n        grad_W = linear_grad_W(x, grad_h, self.W, self.b)\n        grad_b = linear_grad_b(x, grad_h, self.W, self.b)\n        # update params\n        self.W = self.W - self.lr * grad_W\n        self.b = self.b - self.lr * grad_b\n```\n\n## 1.5 Simple Experiment\n\n\n```python\nimport matplotlib.pyplot as plt\n%matplotlib inline\n```\n\n\n```python\n# Generate some data\ndef generate_2_circles(N=100):\n    phi = np.linspace(0.0, np.pi * 2, N)\n    X1 = 1.1 * np.array([np.sin(phi), np.cos(phi)])\n    X2 = 3.0 * np.array([np.sin(phi), np.cos(phi)])\n    Y = np.concatenate([np.ones(N), np.zeros(N)]).reshape((-1, 1))\n    X = np.hstack([X1, X2]).T\n    return X, Y\n\n\ndef generate_2_gaussians(N=100):\n    X1 = np.random.normal(loc=[1, 2], scale=[2.5, 0.9], size=(N, 2))\n    X1 = X1 @ np.array([[0.7, -0.7], [0.7, 0.7]])\n    X2 = np.random.normal(loc=[-2, 0], scale=[1, 1.5], size=(N, 2))\n    X2 = X2 @ np.array([[0.7, 0.7], [-0.7, 0.7]])\n    Y = np.concatenate([np.ones(N), np.zeros(N)]).reshape((-1, 1))\n    X = np.vstack([X1, X2])\n    return X, Y\n\ndef split(X, Y, train_ratio=0.7):\n    size = len(X)\n    train_size = int(size * train_ratio)\n    indices = np.arange(size)\n    np.random.shuffle(indices)\n    train_indices = indices[:train_size]\n    test_indices = indices[train_size:]\n    return X[train_indices], Y[train_indices], X[test_indices], Y[test_indices]\n\nf, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))\n\nX, Y = generate_2_circles()\nax1.scatter(X[:, 0], X[:, 1], c=Y.ravel(), edgecolors='none')\nax1.set_aspect('equal')\n\nX, Y = generate_2_gaussians()\nax2.scatter(X[:, 0], X[:, 1], c=Y.ravel(), edgecolors='none')\nax2.set_aspect('equal')\n```\n\n\n```python\nX_train, Y_train, X_test, Y_test = split(*generate_2_gaussians(), 0.7)\n```\n\n\n```python\n# let's train our model\nmodel = LogsticRegressionGD(2, 0.05)\n\nfor step in range(30):\n    Y_pred = model.forward(X_train)\n    \n    loss_value = nll_forward(Y_pred, Y_train)\n    accuracy = ((Y_pred > 0.5) == Y_train).mean()\n    print('Step: {} \t Loss: {:.3f} \t Acc: {:.1f}%'.format(step, loss_value, accuracy * 100))\n    \n    loss_grad = nll_grad_input(Y_pred, Y_train)\n    model.update_params(X_train, loss_grad)\n\n\nprint('\\n\\nTesting...')\nY_test_pred = model.forward(X_test)\ntest_accuracy = ((Y_test_pred > 0.5) == Y_test).mean()\nprint('Acc: {:.1f}%'.format(test_accuracy * 100))\n```\n\n
 Step: 0 \t Loss: 0.448 \t Acc: 97.9%\n Step: 1 \t Loss: 0.425 \t Acc: 98.6%\n Step: 2 \t Loss: 0.404 \t Acc: 98.6%\n Step: 3 \t Loss: 0.384 \t Acc: 98.6%\n Step: 4 \t Loss: 0.367 \t Acc: 98.6%\n Step: 5 \t Loss: 0.351 \t Acc: 98.6%\n Step: 6 \t Loss: 0.337 \t Acc: 98.6%\n Step: 7 \t Loss: 0.324 \t Acc: 98.6%\n Step: 8 \t Loss: 0.312 \t Acc: 98.6%\n Step: 9 \t Loss: 0.301 \t Acc: 98.6%\n Step: 10 \t Loss: 0.291 \t Acc: 98.6%\n Step: 11 \t Loss: 0.281 \t Acc: 98.6%\n Step: 12 \t Loss: 0.272 \t Acc: 98.6%\n Step: 13 \t Loss: 0.264 \t Acc: 98.6%\n Step: 14 \t Loss: 0.257 \t Acc: 98.6%\n Step: 15 \t Loss: 0.250 \t Acc: 98.6%\n Step: 16 \t Loss: 0.243 \t Acc: 98.6%\n Step: 17 \t Loss: 0.237 \t Acc: 98.6%\n Step: 18 \t Loss: 0.231 \t Acc: 98.6%\n Step: 19 \t Loss: 0.225 \t Acc: 98.6%\n Step: 20 \t Loss: 0.220 \t Acc: 98.6%\n Step: 21 \t Loss: 0.215 \t Acc: 98.6%\n Step: 22 \t Loss: 0.210 \t Acc: 98.6%\n Step: 23 \t Loss: 0.206 \t Acc: 98.6%\n
[0.90213975]\n [0.07416991]\n [0.86612297]\n [0.84292536]\n [0.36877246]\n [0.19319137]\n [0.88320774]\n [0.19135843]\n [0.29330831]\n [0.09175505]\n [0.04810811]\n [0.83779055]\n [0.1982736 ]\n [0.71358406]\n [0.6972964 ]\n [0.80615848]\n [0.86189391]\n [0.88479435]\n [0.09748155]\n [0.92784684]\n [0.80711812]\n [0.72322712]\n [0.11852114]\n [0.80531992]\n [0.92503566]\n [0.07063082]\n [0.87087454]\n [0.24871541]\n [0.91564345]\n [0.82465249]\n [0.75808359]\n [0.29513355]\n [0.16866918]\n [0.09548746]\n [0.87261769]\n [0.91568745]\n [0.08731541]\n [0.38860503]\n [0.12124414]\n [0.38013312]\n [0.74505546]\n [0.08833113]\n [0.16362442]\n [0.08793578]\n [0.74176457]\n [0.23963554]\n [0.15161259]\n [0.75607917]\n [0.1729059 ]\n [0.79397803]\n [0.80882696]\n [0.13235093]\n [0.11185291]\n [0.92541631]\n [0.14253866]\n [0.85373794]\n [0.14328378]\n [0.29297908]\n [0.22916715]\n [0.91829633]\n [0.83229357]\n [0.11929305]\n [0.24217398]\n [0.69097647]\n [0.94922208]\n [0.83312124]\n [0.75059008]\n [0.92573446]\n [0.79559866]\n [0.2351775 ]\n [0.65668705]\n [0.10503963]\n [0.13628892]\n [0.11203837]\n [0.91718412]\n [0.78890003]\n [0.54641725]\n [0.90664395]\n [0.40297546]\n [0.06500119]\n [0.87198459]\n [0.76157749]\n [0.85451865]\n [0.09992906]\n [0.22168724]\n [0.1134308 ]\n [0.8238324 ]\n [0.07407501]\n [0.80885507]\n [0.2994056 ]\n [0.23703189]\n [0.86654208]\n [0.06412902]\n [0.92324125]\n [0.23412453]\n [0.8204214 ]\n [0.22781331]\n [0.30936905]\n [0.23807056]\n [0.62495204]\n [0.30236753]\n [0.14319774]\n [0.40278724]\n [0.71426582]\n [0.08839042]\n [0.81716221]\n [0.27302398]\n [0.91843525]\n [0.11062282]\n [0.89026038]\n [0.95878329]\n [0.17027566]\n [0.78291155]\n [0.26143331]\n [0.71992693]\n [0.80517642]\n [0.09173398]\n [0.07235667]\n [0.62802946]\n [0.85749269]\n [0.15532877]\n [0.77752997]]\n 140\n Step: 24 \t Loss: 0.202 \t Acc: 98.6%\n [[0.30656988]\n [0.04171032]\n [0.11191095]\n [0.93455328]\n [0.06004166]\n [0.19450191]\n [0.12751128]\n [0.8809463 ]\n [0.72589055]\n [0.12096132]\n [0.25759815]\n [0.06696905]\n [0.08007541]\n [0.9379146 ]\n [0.88039606]\n [0.05995384]\n [0.8180708 ]\n [0.85995187]\n [0.90511049]\n [0.07125573]\n [0.8696187 ]\n [0.84705566]\n [0.36608966]\n [0.19045573]\n [0.88621763]\n [0.18650646]\n [0.29021111]\n [0.08875858]\n [0.04549473]\n [0.84168091]\n [0.19376128]\n [0.71852023]\n [0.70111226]\n [0.8094172 ]\n [0.86506199]\n [0.88850889]\n [0.09409372]\n [0.93096219]\n [0.81082032]\n [0.72635727]\n [0.11457959]\n [0.81075314]\n [0.92713843]\n [0.06750766]\n [0.87504903]\n [0.24439167]\n [0.9179169 ]\n [0.82612984]\n [0.76148228]\n [0.29276139]\n [0.16393141]\n [0.09206586]\n [0.87706916]\n [0.91923945]\n [0.08417261]\n [0.38852525]\n [0.11764859]\n [0.37740272]\n [0.74945275]\n [0.08429014]\n [0.15899717]\n [0.08455019]\n [0.74414908]\n [0.23525375]\n [0.14729906]\n [0.76064924]\n [0.16887494]\n [0.7972968 ]\n [0.81145767]\n [0.12840276]\n [0.10803857]\n [0.92813949]\n [0.13836599]\n [0.85781417]\n [0.13975994]\n [0.29007726]\n [0.22482989]\n [0.92160328]\n [0.83696452]\n [0.11473633]\n [0.23874959]\n [0.69476859]\n [0.9515558 ]\n [0.83696464]\n [0.75470079]\n [0.9277659 ]\n [0.79935204]\n [0.23006134]\n [0.65949497]\n [0.10134567]\n [0.13239106]\n [0.10797265]\n [0.92157025]\n [0.79345104]\n [0.54775418]\n [0.9100876 ]\n [0.40368423]\n [0.06246468]\n [0.8740773 ]\n [0.76403577]\n [0.85761931]\n [0.09655005]\n [0.21677562]\n [0.1098868 ]\n [0.82813966]\n [0.07099804]\n [0.81186127]\n [0.29432754]\n [0.23396978]\n [0.86997228]\n [0.06154099]\n 
[0.92659544]\n [0.22954511]\n [0.82447127]\n [0.2236171 ]\n [0.30791725]\n [0.23442831]\n [0.62807301]\n [0.29883336]\n [0.13923024]\n [0.40063164]\n [0.71704516]\n [0.08443167]\n [0.82023027]\n [0.27019209]\n [0.92075631]\n [0.10650216]\n [0.89405637]\n [0.96059493]\n [0.16667631]\n [0.78668283]\n [0.25585746]\n [0.72275587]\n [0.8097138 ]\n [0.08819031]\n [0.06947689]\n [0.62763268]\n [0.86108611]\n [0.15171201]\n [0.78132504]]\n 140\n Step: 25 \t Loss: 0.198 \t Acc: 98.6%\n [[0.30213708]\n [0.03964591]\n [0.10854383]\n [0.93685353]\n [0.05738348]\n [0.19007704]\n [0.12391057]\n [0.88504022]\n [0.72879433]\n [0.11672154]\n [0.25357875]\n [0.06433735]\n [0.07677111]\n [0.93991882]\n [0.88423145]\n [0.05718486]\n [0.82229189]\n [0.86408818]\n [0.90792843]\n [0.06851725]\n [0.87295175]\n [0.850997 ]\n [0.3634826 ]\n [0.18781966]\n [0.88908622]\n [0.1818676 ]\n [0.28721146]\n [0.08592346]\n [0.04307739]\n [0.84540085]\n [0.18943769]\n [0.72328535]\n [0.70480744]\n [0.8125534 ]\n [0.86809089]\n [0.89202942]\n [0.09089333]\n [0.93387945]\n [0.81437725]\n [0.729388 ]\n [0.11084744]\n [0.81593993]\n [0.92913502]\n [0.06458672]\n [0.8790078 ]\n [0.24022567]\n [0.9200785 ]\n [0.82755795]\n [0.76476432]\n [0.29046163]\n [0.15941572]\n [0.08883698]\n [0.8812808 ]\n [0.92257109]\n [0.08120846]\n [0.38845098]\n [0.11423324]\n [0.37474847]\n [0.75369261]\n [0.08051229]\n [0.15458849]\n [0.08136413]\n [0.74645902]\n [0.23103561]\n [0.14319208]\n [0.76504969]\n [0.16501814]\n [0.80049364]\n [0.81399443]\n [0.12465112]\n [0.10443068]\n [0.9307056 ]\n [0.13439738]\n [0.86169717]\n [0.13639571]\n [0.28726607]\n [0.22065799]\n [0.92470935]\n [0.84141833]\n [0.11043993]\n [0.23544449]\n [0.69844216]\n [0.95373272]\n [0.84064309]\n [0.75866546]\n [0.92969587]\n [0.80296177]\n [0.22514825]\n [0.66222292]\n [0.09785605]\n [0.12868282]\n [0.10413426]\n [0.92564248]\n [0.79781911]\n [0.54905867]\n [0.9133346 ]\n [0.40437987]\n [0.06008167]\n [0.87608748]\n [0.7664145 ]\n [0.86058839]\n [0.09335498]\n [0.21206287]\n [0.10652612]\n [0.83225912]\n [0.06811258]\n [0.81475633]\n [0.28942405]\n [0.23101279]\n [0.87324389]\n [0.0591127 ]\n [0.92973564]\n [0.22514126]\n [0.82835068]\n [0.21957974]\n [0.30650863]\n [0.23091582]\n [0.63110773]\n [0.29541124]\n [0.13545186]\n [0.39853431]\n [0.71973909]\n [0.08072742]\n [0.82318193]\n [0.26745086]\n [0.92296004]\n [0.10261546]\n [0.89764582]\n [0.96228712]\n [0.16322724]\n [0.79031386]\n [0.25049547]\n [0.72549691]\n [0.81406047]\n [0.08485521]\n [0.06677227]\n [0.62724329]\n [0.86451601]\n [0.14825385]\n [0.78498043]]\n 140\n Step: 26 \t Loss: 0.194 \t Acc: 98.6%\n [[0.29784682]\n [0.03772682]\n [0.1053457 ]\n [0.9390243 ]\n [0.0548973 ]\n [0.18583483]\n [0.1204829 ]\n [0.88891652]\n [0.73160863]\n [0.11270988]\n [0.2496978 ]\n [0.06186353]\n [0.07366848]\n [0.94181485]\n [0.88787032]\n [0.05459975]\n [0.82633651]\n [0.86802466]\n [0.91060445]\n [0.06594043]\n [0.87613284]\n [0.85476155]\n [0.36094741]\n [0.18527742]\n [0.89182297]\n [0.17742865]\n [0.28430416]\n [0.0832378 ]\n [0.04083761]\n [0.84896109]\n [0.18529139]\n [0.72788847]\n [0.70838822]\n [0.81557417]\n [0.87098951]\n [0.89536972]\n [0.08786641]\n [0.93661499]\n [0.81779746]\n [0.73232456]\n [0.10730969]\n [0.82089554]\n [0.931033 ]\n [0.06185105]\n [0.88276591]\n [0.23620855]\n [0.92213606]\n [0.82893944]\n [0.7679361 ]\n [0.28823042]\n [0.15510774]\n [0.08578626]\n [0.88526976]\n [0.9257001 ]\n [0.07840932]\n [0.38838197]\n [0.11098558]\n [0.37216649]\n [0.75778368]\n [0.07697578]\n [0.15038411]\n 
[0.07836204]\n [0.7486983 ]\n [0.22697183]\n [0.13927792]\n [0.76928998]\n [0.16132459]\n [0.80357552]\n [0.81644254]\n [0.12108252]\n [0.10101419]\n [0.93312681]\n [0.13061905]\n [0.8653997 ]\n [0.1331807 ]\n [0.28454071]\n [0.21664195]\n [0.92763053]\n [0.84566895]\n [0.10638444]\n [0.23225204]\n [0.70200332]\n [0.95576647]\n [0.84416682]\n [0.7624921 ]\n [0.93153156]\n [0.80643615]\n [0.22042652]\n [0.66487494]\n [0.09455563]\n [0.12515137]\n [0.10050633]\n [0.92942852]\n [0.80201499]\n [0.55033228]\n [0.91639989]\n [0.40506305]\n [0.0578398 ]\n [0.8780201 ]\n [0.76871796]\n [0.86343411]\n [0.09033028]\n [0.20753734]\n [0.10333575]\n [0.83620242]\n [0.065403 ]\n [0.81754664]\n [0.28468588]\n [0.22815509]\n [0.87636733]\n [0.0568311 ]\n [0.9326795 ]\n [0.22090292]\n [0.83207004]\n [0.21569211]\n [0.30514094]\n [0.22752578]\n [0.63406049]\n [0.29209524]\n [0.13184998]\n [0.39649225]\n [0.72235205]\n [0.07725675]\n [0.82602399]\n [0.26479541]\n [0.92505477]\n [0.09894518]\n [0.9010438 ]\n [0.96387021]\n [0.15991917]\n [0.79381262]\n [0.24533545]\n [0.72815466]\n [0.81822793]\n [0.08171248]\n [0.06422871]\n [0.6268608 ]\n [0.867793 ]\n [0.14494423]\n [0.78850405]]\n 140\n Step: 27 \t Loss: 0.190 \t Acc: 98.6%\n [[0.29369177]\n [0.03593997]\n [0.10230486]\n [0.94107546]\n [0.0525687 ]\n [0.1817644 ]\n [0.11721668]\n [0.89259059]\n [0.73433805]\n [0.10891007]\n [0.24594783]\n [0.05953507]\n [0.0707515 ]\n [0.94361072]\n [0.89132631]\n [0.05218282]\n [0.83021522]\n [0.87177452]\n [0.91314846]\n [0.06351261]\n [0.87917181]\n [0.85836048]\n [0.35848045]\n [0.18282371]\n [0.89443651]\n [0.17317746]\n [0.28148438]\n [0.0806908 ]\n [0.03875899]\n [0.85237148]\n [0.18131185]\n [0.73233803]\n [0.71186043]\n [0.81848606]\n [0.873766 ]\n [0.89854236]\n [0.08500024]\n [0.93918351]\n [0.82108885]\n [0.73517182]\n [0.10395261]\n [0.82563407]\n [0.93283924]\n [0.0592854 ]\n [0.88633714]\n [0.23233213]\n [0.92409671]\n [0.83027676]\n [0.77100352]\n [0.28606423]\n [0.15099422]\n [0.0829005 ]\n [0.88905169]\n [0.9286425 ]\n [0.07576282]\n [0.38831797]\n [0.10789416]\n [0.36965314]\n [0.76173397]\n [0.07366093]\n [0.14637093]\n [0.07552983]\n [0.75087055]\n [0.22305383]\n [0.13554403]\n [0.77337889]\n [0.1577843 ]\n [0.80654884]\n [0.81880692]\n [0.11768462]\n [0.0977754 ]\n [0.93541412]\n [0.12701839]\n [0.86893345]\n [0.13010538]\n [0.28189669]\n [0.21277297]\n [0.93038128]\n [0.84972922]\n [0.10255217]\n [0.22916607]\n [0.70545778]\n [0.95766925]\n [0.84754527]\n [0.76618818]\n [0.93327949]\n [0.80978289]\n [0.21588533]\n [0.66745477]\n [0.09143061]\n [0.12178496]\n [0.09707352]\n [0.93295326]\n [0.80604856]\n [0.55157648]\n [0.91929703]\n [0.40573439]\n [0.05572791]\n [0.87987974]\n [0.77095006]\n [0.86616404]\n [0.08746361]\n [0.20318822]\n [0.10030382]\n [0.83998032]\n [0.06285525]\n [0.8202381 ]\n [0.28010444]\n [0.22539126]\n [0.87935215]\n [0.05468444]\n [0.9354429 ]\n [0.21682079]\n [0.83563895]\n [0.21194578]\n [0.30381214]\n [0.22425147]\n [0.63693527]\n [0.28887984]\n [0.12841305]\n [0.39450269]\n [0.72488817]\n [0.07400077]\n [0.82876272]\n [0.26222123]\n [0.9270481 ]\n [0.09547539]\n [0.90426402]\n [0.96535341]\n [0.15674358]\n [0.79718651]\n [0.24036638]\n [0.73073339]\n [0.82222681]\n [0.07874744]\n [0.0618335 ]\n [0.62648472]\n [0.87092682]\n [0.14177385]\n [0.7919032 ]]\n 140\n Step: 28 \t Loss: 0.187 \t Acc: 98.6%\n [[0.2896651 ]\n [0.03427371]\n [0.09941063]\n [0.9430159 ]\n [0.05038473]\n [0.17785572]\n [0.1141013 ]\n [0.89607649]\n [0.73698687]\n [0.10530724]\n [0.24232188]\n 
[0.05734065]\n [0.06800567]\n [0.94531371]\n [0.89461186]\n [0.04992 ]\n [0.83393781]\n [0.8753499 ]\n [0.9155695 ]\n [0.06122234]\n [0.88207768]\n [0.86180407]\n [0.35607836]\n [0.18045364]\n [0.89693478]\n [0.16910279]\n [0.27874766]\n [0.07827262]\n [0.0368269 ]\n [0.85564112]\n [0.17748933]\n [0.73664188]\n [0.7152295 ]\n [0.82129511]\n [0.87642786]\n [0.90155881]\n [0.08228329]\n [0.9415983 ]\n [0.82425872]\n [0.7379343 ]\n [0.1007637 ]\n [0.83016855]\n [0.93456002]\n [0.05687605]\n [0.88973409]\n [0.22858882]\n [0.92596692]\n [0.83157213]\n [0.77397209]\n [0.28395977]\n [0.147063 ]\n [0.08016771]\n [0.8926409 ]\n [0.93141279]\n [0.07325777]\n [0.38825875]\n [0.10494856]\n [0.36720506]\n [0.76555092]\n [0.07055 ]\n [0.14253688]\n [0.07285474]\n [0.75297914]\n [0.21927364]\n [0.13197884]\n [0.77732456]\n [0.15438804]\n [0.80941955]\n [0.8210921 ]\n [0.11444614]\n [0.09470183]\n [0.9375775 ]\n [0.12358381]\n [0.87230915]\n [0.12716101]\n [0.27932988]\n [0.20904292]\n [0.93297465]\n [0.85361095]\n [0.09892702]\n [0.22618089]\n [0.70881083]\n [0.95945203]\n [0.85078713]\n [0.76976061]\n [0.93494564]\n [0.8130091 ]\n [0.21151462]\n [0.66996591]\n [0.08846847]\n [0.11857282]\n [0.09382184]\n [0.93623907]\n [0.80992902]\n [0.55279264]\n [0.92203833]\n [0.40639448]\n [0.05373595]\n [0.88167062]\n [0.77311447]\n [0.86878513]\n [0.08474375]\n [0.19900552]\n [0.09741949]\n [0.84360273]\n [0.06045663]\n [0.82283616]\n [0.27567172]\n [0.2227163 ]\n [0.88220712]\n [0.05266214]\n [0.93804019]\n [0.21288624]\n [0.83906625]\n [0.20833293]\n [0.3025203 ]\n [0.22108663]\n [0.63973578]\n [0.2857599 ]\n [0.12513048]\n [0.39256307]\n [0.72735129]\n [0.07094243]\n [0.83140389]\n [0.25972411]\n [0.92894693]\n [0.09219158]\n [0.907319 ]\n [0.96674502]\n [0.1536926 ]\n [0.80044237]\n [0.23557798]\n [0.73323707]\n [0.8260669 ]\n [0.07594677]\n [0.05957518]\n [0.62611464]\n [0.87392639]\n [0.13873421]\n [0.79518465]]\n 140\n Step: 29 \t Loss: 0.183 \t Acc: 98.6%\n [[0.28576043]\n [0.0327176 ]\n [0.09665322]\n [0.94485374]\n [0.04833372]\n [0.17409948]\n [0.11112705]\n [0.89938711]\n [0.73955909]\n [0.1018878 ]\n [0.2388135 ]\n [0.05527002]\n [0.06541788]\n [0.94693043]\n [0.89773839]\n [0.04779863]\n [0.83751332]\n [0.87876194]\n [0.91787583]\n [0.05905924]\n [0.88485872]\n [0.86510179]\n [0.35373803]\n [0.17816269]\n [0.89932504]\n [0.16519426]\n [0.27608984]\n [0.07597431]\n [0.03502831]\n [0.85877835]\n [0.17381483]\n [0.74080732]\n [0.71850049]\n [0.82400693]\n [0.87898199]\n [0.90442958]\n [0.07970503]\n [0.94387137]\n [0.82731381]\n [0.74061619]\n [0.09773153]\n [0.83451105]\n [0.93620105]\n [0.05461061]\n [0.8929683 ]\n [0.22497158]\n [0.92775265]\n [0.83282765]\n [0.7768469 ]\n [0.28191397]\n [0.14330284]\n [0.07757698]\n [0.8960505 ]\n [0.93402408]\n [0.07088395]\n [0.3882041 ]\n [0.10213918]\n [0.36481911]\n [0.76924146]\n [0.06762691]\n [0.13887085]\n [0.07032519]\n [0.75502722]\n [0.21562387]\n [0.12857175]\n [0.78113455]\n [0.15112729]\n [0.81219316]\n [0.82330228]\n [0.11135669]\n [0.09178207]\n [0.93962599]\n [0.1203047 ]\n [0.87553669]\n [0.12433956]\n [0.2768364 ]\n [0.20544424]\n [0.93542251]\n [0.85732504]\n [0.09549427]\n [0.22329122]\n [0.71206743]\n [0.96112464]\n [0.85390041]\n [0.77321581]\n [0.93653542]\n [0.81612137]\n [0.20730507]\n [0.6724116 ]\n [0.08565775]\n [0.11550508]\n [0.09073855]\n [0.93930607]\n [0.81366487]\n [0.55398202]\n [0.92463498]\n [0.40704389]\n [0.0518548 ]\n [0.88339663]\n [0.77521457]\n [0.8713038 ]\n [0.08216047]\n [0.19497997]\n [0.09467286]\n [0.8470788 ]\n 


```python
def plot_model_prediction(prediction_func, X, Y, hard=True):
    u_min = X[:, 0].min()-1
    u_max = X[:, 0].max()+1
    v_min = X[:, 1].min()-1
    v_max = X[:, 1].max()+1

    U, V = np.meshgrid(np.linspace(u_min, u_max, 100), np.linspace(v_min, v_max, 100))
    UV = np.stack([U.ravel(), V.ravel()]).T
    c = prediction_func(UV).ravel()
    if hard:
        c = c > 0.5
    plt.scatter(UV[:,0], UV[:,1], c=c, edgecolors='none', alpha=0.15)
    plt.scatter(X[:,0], X[:,1], c=Y.ravel(), edgecolors='black')
    plt.xlim(left=u_min, right=u_max)
    plt.ylim(bottom=v_min, top=v_max)
    plt.gca().set_aspect('equal')  # use the current axes; plt.axes() would create a new, empty axes on top of the plot
    plt.show()

plot_model_prediction(lambda x: model.forward(x), X_train, Y_train, False)

plot_model_prediction(lambda x: model.forward(x), X_train, Y_train, True)
```


```python
# Now run the same experiment on 2 circles
```

# 2. Decision Tree
The next model we look at is called a **Decision Tree**. This type of model is non-parametric, meaning that, in contrast to **Logistic Regression**, we do not have any parameters here that need to be trained.

Let us consider a simple binary decision tree for deciding on the two classes of "creditable" and "Not creditable".



Each node, except the leaves, asks a question about the client in question. A decision is made by going from the root node to a leaf node while considering the client's situation. The situation of the client, in this case, is fully described by the features:
1. Checking account balance
2. Duration of requested credit
3. Payment status of previous loan
4. Length of current employment

In order to build a decision tree we need training data. To carry on with the previous example: we need a number of clients for which we know the properties 1.-4. and their creditability.
The process of building a decision tree starts with the root node and involves the following steps:
1. Choose a splitting criterion and add it to the current node.
2. Split the dataset at the current node into those examples that fulfil the criterion and those that do not.
3. Add a child node for each data split.
4. For each child node, do one of the following:
    1. Repeat from step 1.
    2. Make it a leaf node: the predicted class label is decided by the majority vote over the training data in the current split.

## 2.1 Gini Index & Data Split
Deciding on how to split your training data at each node is dominated by the following two criteria:
1. Does the rule help me make a final decision?
2. Is the rule general enough such that it applies not only to my training data, but also to new unseen examples?

When considering our previous example, splitting the clients by their handedness would not help us decide on their creditability. Knowing whether a rule will generalize is usually a hard call to make, but in practice we rely on the [Occam's razor](https://en.wikipedia.org/wiki/Occam%27s_razor) principle. Thus the fewer rules we use, the better we believe the tree will generalize to previously unseen examples.

One way to measure the quality of a rule is by the [**Gini Index**](https://en.wikipedia.org/wiki/Gini_coefficient).
Since we only consider binary classification, it is calculated by:
$$
Gini = \sum_{n\in\{L,R\}}\frac{|S_n|}{|S|}\left( 1 - \sum_{c \in C} p_{S_n}(c)^2\right)\\
p_{S_n}(c) = \frac{|\{\mathbf{x}_{i}\in \mathbf{X}|y_{i} = c, i \in S_n\}|}{|S_n|}, n \in \{L, R\}
$$
with $C$ being the set of class labels ($|C|=2$ here) and $S_L$ and $S_R$ the two splits determined by the splitting criterion.
While we only consider two-class problems for decision trees, the method can also be applied when $|C|>2$.
The lower the Gini score, the better the split. In the extreme case, where all class labels are the same in each split respectively, the Gini index takes the value of $0$.
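To make the formula concrete, here is a small numerical illustration. The two label vectors below are made up for this example and are independent of the graded `tree_gini_index` implementation that follows in the next cell.


```python
import numpy as np

def gini_of_split(y_left, y_right):
    """Binary Gini index of a split, following the formula above (labels in {0, 1})."""
    total = len(y_left) + len(y_right)
    gini = 0.0
    for part in (y_left, y_right):
        p1 = np.mean(part)                          # fraction of class 1 in this part
        impurity = 1.0 - (p1**2 + (1.0 - p1)**2)    # 1 - sum_c p(c)^2
        gini += len(part) / total * impurity        # weight by the size of the part
    return gini

# A mixed split: each side still contains one "wrong" label -> Gini = 0.375
print(gini_of_split(np.array([0, 0, 0, 1]), np.array([1, 1, 1, 0])))
# A perfect split: each side is pure -> Gini = 0.0
print(gini_of_split(np.array([0, 0, 0, 0]), np.array([1, 1, 1, 1])))
```

The perfectly separated split scores $0$, while the mixed split scores higher, which is exactly why we search for the split with the lowest Gini value.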

```python
def tree_gini_index(Y_left, Y_right, classes):
    """Compute the Gini Index.
    # Arguments
        Y_left: class labels of the data left set
            np.array of size `(n_objects, 1)`
        Y_right: class labels of the data right set
            np.array of size `(n_objects, 1)`
        classes: list of all class values
    # Output
        gini: scalar `float`
    """
    # Note: this implementation assumes binary labels in {0, 1},
    # so the `classes` argument is not actually needed here.
    sets = [Y_left, Y_right]
    sets_gini = []

    for set_ in sets:
        # weight of this split relative to the whole data at the node
        set_fraction_of_whole = len(set_)/(len(Y_left)+len(Y_right))

        prob_pos = (set_.sum())/len(set_)
        sq_prob_pos = prob_pos**2
        prob_neg = 1 - prob_pos
        sq_prob_neg = prob_neg**2

        set_gini = (1 - (sq_prob_pos+sq_prob_neg))*set_fraction_of_whole
        sets_gini.append(set_gini)

    gini = np.sum(sets_gini)
    return gini
```


```python
am.test_student_function(username, tree_gini_index, ['Y_left', 'Y_right', 'classes'])
```

    Running local tests...
    tree_gini_index successfully passed local tests
    Running remote test...
    Test was successful. Congratulations!


At each node in the tree, the data is split according to a split criterion and each split is passed on to the left/right child respectively.
Implement the following function to return all rows in `X` and `Y` such that the left child gets all examples whose feature value is less than the split value and vice versa.
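For instance, as a hypothetical illustration with made-up toy arrays (independent of the graded implementations in the next cell): splitting on feature 0 at the value 0.0 should send the rows with a negative first feature to the left, the rest to the right, and append the label column to each returned array.


```python
import numpy as np

X_toy = np.array([[-1.0, 2.0],
                  [ 0.5, 1.0],
                  [-2.0, 3.0]])
Y_toy = np.array([[1.], [0.], [1.]])

mask = X_toy[:, 0] < 0.0                               # rows going to the left child
left  = np.concatenate([X_toy[mask],  Y_toy[mask]],  axis=-1)
right = np.concatenate([X_toy[~mask], Y_toy[~mask]], axis=-1)
print(left)    # [[-1.  2.  1.]  [-2.  3.  1.]]  -> features plus label column
print(right)   # [[ 0.5  1.  0.]]
```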
\n\n\n```python\ndef tree_split_data_left(X, Y, feature_index, split_value):\n \"\"\"Split the data `X` and `Y`, at the feature indexed by `feature_index`.\n If the value is less than `split_value` then return it as part of the left group.\n \n # Arguments\n X: np.array of size `(n_objects, n_in)`\n Y: np.array of size `(n_objects, 1)`\n feature_index: index of the feature to split at \n split_value: value to split between\n # Output\n (XY_left): np.array of size `(n_objects_left, n_in + 1)`\n \"\"\"\n \n ind = X[:, feature_index] < split_value\n print(ind) #outputs True/False values for whether cells meet that criteria \n X_left = X[ind]\n print(X_left)\n Y_left = Y[ind]\n print(Y_left)\n \n XY_left = np.concatenate([X_left, Y_left], axis=-1)\n return XY_left\n\n#X[:, feature_index]\n\ndef tree_split_data_right(X, Y, feature_index, split_value):\n \"\"\"Split the data `X` and `Y`, at the feature indexed by `feature_index`.\n If the value is greater or equal than `split_value` then return it as part of the right group.\n \n # Arguments\n X: np.array of size `(n_objects, n_in)`\n Y: np.array of size `(n_objects, 1)`\n feature_index: index of the feature to split at\n split_value: value to split between\n # Output\n (XY_left): np.array of size `(n_objects_left, n_in + 1)`\n \"\"\"\n ind = X[:, feature_index] >= split_value\n X_right = X[ind]\n Y_right = Y[ind]\n\n XY_right = np.concatenate([X_right, Y_right], axis=-1)\n return XY_right\n```\n\n\n```python\nam.test_student_function(username, tree_split_data_left, ['X', 'Y', 'feature_index', 'split_value'])\n```\n\n Running local tests...\n [False False True False True False]\n [[-2.37556118 -2.60844971]\n [ 4.26463158 -1.65238926]]\n [[1.]\n [0.]]\n [False True False True False True True True]\n [[ 1.94588175 -1.3034951 ]\n [-0.63283282 -2.97400198]\n [-1.7153846 0.74070777]\n [-5.79411975 -0.40943116]\n [-2.71633746 -1.64726742]]\n [[1.]\n [1.]\n [0.]\n [0.]\n [0.]]\n [False True True False False]\n [[ 1.35448839 -0.2099724 ]\n [ 7.77480498 -3.34950436]]\n [[0.]\n [1.]]\n [False True True False True True]\n [[ 0.87634404 -5.06310781]\n [-6.03751724 1.05850821]\n [-7.1926914 -6.50000432]\n [ 8.51710716 -4.21724322]]\n [[0.]\n [1.]\n [0.]\n [1.]]\n [ True True True True True False True False True]\n [[-9.36607929 3.73666788]\n [-7.78355718 6.5810372 ]\n [-6.11148523 7.90649653]\n [-9.72034328 3.36824612]\n [-8.94318009 0.07507696]\n [-1.63738251 0.47235638]\n [-9.03206994 8.00379159]]\n [[1.]\n [1.]\n [0.]\n [1.]\n [0.]\n [1.]\n [1.]]\n [False False False False False True False True]\n [[-5.26073174 -1.49986052]\n [-8.42770937 7.5915227 ]]\n [[1.]\n [1.]]\n [False False True False True False False False False]\n [[-9.10827565 -9.54824574]\n [-9.96200291 2.5183704 ]]\n [[1.]\n [0.]]\n [ True True True True False False True]\n [[ 2.65302163e+00 9.35304542e+00]\n [ 1.87233797e+00 3.45427003e+00]\n [-1.35570851e+00 3.76509218e+00]\n [-4.42484811e+00 7.86466080e-01]\n [-8.17879884e-03 -4.52946960e+00]]\n [[0.]\n [0.]\n [0.]\n [1.]\n [1.]]\n [False True False True True]\n [[-4.37185642 -4.87225659]\n [-9.9353802 9.33881489]\n [ 5.04790996 6.30033027]]\n [[0.]\n [1.]\n [1.]]\n [ True False False False True False True True]\n [[-7.28596597 -5.57235412]\n [-5.92654563 4.87763988]\n [-8.39325723 -7.21426803]\n [-2.37405564 4.65203543]]\n [[0.]\n [1.]\n [0.]\n [1.]]\n [False False False True False True True False False]\n [[ 0.21113109 -8.82799123]\n [-2.61553065 9.23804143]\n [-1.78688084 9.37506723]]\n [[0.]\n [1.]\n [1.]]\n [False False False True 
True False False True]\n [[ 6.31354453 -8.18307638]\n [-1.19612965 -9.87256673]\n [ 4.81831431 -3.75757405]]\n [[0.]\n [0.]\n [1.]]\n [ True True True True True False True False False]\n [[ 4.44463904 -4.65117093]\n [-5.61413303 -6.88834569]\n [-1.95748231 -2.88653308]\n [ 3.70540254 -3.001772 ]\n [-4.390432 -3.67733266]\n [-3.36110821 -9.23287607]]\n [[1.]\n [1.]\n [0.]\n [0.]\n [0.]\n [1.]]\n [False True True True True True True False False]\n [[-5.77153704 -0.76438147]\n [ 5.00316785 0.118027 ]\n [ 1.26612951 2.20084614]\n [ 9.74702944 0.29475347]\n [ 2.03368119 -8.67896275]\n [ 3.61945011 -0.8396783 ]]\n [[1.]\n [0.]\n [0.]\n [1.]\n [0.]\n [0.]]\n [False True True True True True False]\n [[ 0.58533874 1.6610768 ]\n [-8.35178568 -9.86183915]\n [-3.21501887 9.26737555]\n [ 0.74656272 -2.10365632]\n [ 1.54680626 -8.67605197]]\n [[0.]\n [0.]\n [1.]\n [1.]\n [1.]]\n [False False False True False True True]\n [[-7.23122502 3.62082515]\n [-5.75659807 0.20713906]\n [-4.70347676 -2.66121154]]\n [[1.]\n [1.]\n [0.]]\n [False False True False True True True]\n [[ 1.38041332 -7.90304597]\n [-2.49852907 -1.09573896]\n [-2.30090537 -7.65231574]\n [-7.3139662 -0.81838309]]\n [[0.]\n [0.]\n [1.]\n [0.]]\n [ True False False True False True True False]\n [[ 9.64559172 -9.00872846]\n [-0.05324152 -6.87058815]\n [-0.89077297 -6.65015207]\n [ 1.48407158 -8.26599853]]\n [[0.]\n [0.]\n [0.]\n [0.]]\n [ True False True False True True True]\n [[-7.3148247 -3.80817133]\n [-3.16647982 -8.34950305]\n [-3.44523248 3.47640827]\n [-5.52210163 7.6644342 ]\n [-0.34789753 -1.19014071]]\n [[1.]\n [1.]\n [1.]\n [1.]\n [0.]]\n [ True True False True False True True True]\n [[ 8.98015199 -8.86684139]\n [ 7.09268214 5.29217057]\n [-4.45964554 -7.99198627]\n [-5.68531485 -3.64304732]\n [ 4.21270522 4.32527494]\n [ 4.7601182 -6.18756936]]\n [[0.]\n [1.]\n [1.]\n [1.]\n [1.]\n [0.]]\n [ True False False True False True False True]\n [[ 6.66435037 -3.87849513]\n [-4.36138513 -2.43366449]\n [ 6.05228506 0.68493948]\n [-8.44336263 1.07215902]]\n [[1.]\n [0.]\n [1.]\n [1.]]\n [False False True True True False False True]\n [[ 5.84066412 -6.09674715]\n [ 6.04593831 -8.91448727]\n [ 5.94609503 -4.94677465]\n [-6.67897898 -4.14387699]]\n [[1.]\n [0.]\n [0.]\n [1.]]\n [ True True True False False]\n [[-2.25284259 0.98189791]\n [-7.90012135 1.90272944]\n [ 3.45246897 6.28514938]]\n [[0.]\n [0.]\n [0.]]\n [ True True False False False False True True False]\n [[-2.91808204 0.03218935]\n [-4.44021341 -9.04475902]\n [-5.97246031 -1.54516754]\n [ 3.80843555 -2.55212295]]\n [[0.]\n [0.]\n [0.]\n [1.]]\n [False False True True True True]\n [[-2.5560739 -2.60592515]\n [-0.5263865 -6.42367064]\n [ 8.74711608 -8.6728326 ]\n [ 6.6724728 -4.36396978]]\n [[0.]\n [0.]\n [1.]\n [0.]]\n [False True True False True]\n [[-6.96564368 -0.27322537]\n [-9.9980993 -7.84847771]\n [-7.27905356 0.74250288]]\n [[0.]\n [0.]\n [1.]]\n [ True False True False False False]\n [[-0.38360825 -9.57747752]\n [-5.48225289 0.47083727]]\n [[1.]\n [0.]]\n [False False False True False True True True]\n [[-5.45927576 -4.54719023]\n [-5.4209009 -5.67326749]\n [-6.71441608 -3.85110805]\n [-2.87490704 4.41993717]]\n [[1.]\n [0.]\n [1.]\n [1.]]\n [ True False True False False False False False]\n [[-4.90506179 -5.33466372]\n [-2.53472208 4.29138841]]\n [[0.]\n [0.]]\n [False True True True False False False]\n [[-0.59299695 -3.2942172 ]\n [-5.44373928 -7.32666865]\n [-2.79712023 -6.83112458]]\n [[0.]\n [1.]\n [1.]]\n [ True False True True True False True]\n [[-3.68718498 
4.09794462]\n [-2.31102569 -5.55203413]\n [-5.85750672 -5.59810463]\n [-3.9142254 -6.4061488 ]\n [-4.34621776 7.54341135]]\n [[1.]\n [1.]\n [0.]\n [1.]\n [1.]]\n [ True True True False True True False True True]\n [[ 2.00909198 9.35851907]\n [-8.77972521 -9.68834571]\n [-4.31129695 -9.7187875 ]\n [ 2.33509167 -4.27897808]\n [-6.56337944 -7.17549142]\n [-3.2334858 -2.20819085]\n [-8.63337174 7.00434138]]\n [[1.]\n [0.]\n [1.]\n [0.]\n [1.]\n [1.]\n [0.]]\n [False True False True True]\n [[ 9.12897885 -4.36342744]\n [-0.34338788 -0.59414767]\n [ 0.43096634 -2.69557846]]\n [[0.]\n [0.]\n [0.]]\n [ True False True False False]\n [[ 5.06496341 -6.77222192]\n [-1.74686603 -5.8720616 ]]\n [[1.]\n [0.]]\n [ True False False False False True False True True]\n [[-8.28359946 -1.19299617]\n [-3.63763375 6.71175893]\n [-6.56600439 4.8218078 ]\n [-8.55821171 9.28315334]]\n [[1.]\n [1.]\n [1.]\n [0.]]\n [False True True True True True False]\n [[-0.94286557 -4.08069731]\n [-2.73686439 -2.68371506]\n [-3.65253517 8.56299482]\n [-5.66654458 9.39629416]\n [ 0.59048795 -9.2088475 ]]\n [[0.]\n [1.]\n [1.]\n [1.]\n [0.]]\n [False False True False True]\n [[-9.48821184 4.82324189]\n [-8.62008202 0.49974124]]\n [[1.]\n [1.]]\n [False False True False False True]\n [[ 2.47058779 -5.91974156]\n [-4.10266291 -9.14134166]]\n [[0.]\n [0.]]\n [False False True False False False False False True]\n [[-3.72601191 -6.27172069]\n [ 8.78643858 -6.59278394]]\n [[0.]\n [1.]]\n [ True True True True False False False]\n [[-4.72032533 9.18458634]\n [-1.61920226 9.05952796]\n [-8.06234418 -6.27702704]\n [-9.45923646 2.52972819]]\n [[0.]\n [1.]\n [0.]\n [0.]]\n [False True True True False False]\n [[-0.45162584 -3.2335316 ]\n [-1.28831511 -6.84237011]\n [-0.15308094 -7.77799513]]\n [[1.]\n [0.]\n [1.]]\n [ True True True False False False False True]\n [[-8.19713698 -3.63722183]\n [-7.97172673 -3.22334808]\n [-6.41521615 -1.18134938]\n [-9.80017337 -5.53944806]]\n [[0.]\n [1.]\n [1.]\n [1.]]\n [False False True True False False]\n [[-4.63433067 -8.81983161]\n [-3.84617438 -6.36269861]]\n [[0.]\n [1.]]\n [ True False True False False False]\n [[-6.53397699 -2.79657874]\n [ 0.17705281 -4.60140242]]\n [[0.]\n [0.]]\n [False False False True True]\n [[-6.60118327 1.63110532]\n [ 4.68722109 0.81532126]]\n [[0.]\n [0.]]\n [False False False True True]\n [[ 8.9204833 -5.9258933 ]\n [-6.4066338 -1.57470596]]\n [[0.]\n [1.]]\n [False False False True True]\n [[-2.73179573 -4.19069926]\n [-3.41452237 -6.67316056]]\n [[1.]\n [1.]]\n [ True True False True True False False True]\n [[-3.4961022 -9.69119261]\n [ 9.85718459 -7.11974872]\n [-2.93010878 -6.38764807]\n [ 6.54003422 -8.22089358]\n [ 2.50436642 -5.72641979]]\n [[1.]\n [0.]\n [1.]\n [1.]\n [1.]]\n [ True False True True False False False False]\n [[ 2.04304953 -2.66691743]\n [ 2.11998695 -6.95996898]\n [ 7.39505486 -8.4452957 ]]\n [[0.]\n [0.]\n [0.]]\n [False True True False True False True False]\n [[-9.26440348 4.94472464]\n [-7.1319719 -2.45027692]\n [-0.96638402 -4.84270652]\n [-8.04817681 -8.98948527]]\n [[1.]\n [1.]\n [1.]\n [0.]]\n [ True True False False True]\n [[ 8.00383431 -9.87060869]\n [-1.6114016 -6.58899118]\n [-6.8729875 -4.4155576 ]]\n [[0.]\n [1.]\n [1.]]\n [ True True True False True True False True False]\n [[-2.38212539 -3.02213617]\n [-4.57920761 2.91650932]\n [ 3.23766286 4.55335858]\n [-7.2921603 3.45951084]\n [-8.97675941 5.09661811]\n [ 0.03473972 7.26624062]]\n [[1.]\n [1.]\n [1.]\n [1.]\n [0.]\n [0.]]\n [False True True False False]\n [[-8.38217673 
-1.45517537]\n [-5.70689777 -5.04001407]]\n [[0.]\n [0.]]\n [False False False True True]\n [[-3.87393233 9.35165637]\n [-5.93469357 9.79240232]]\n [[0.]\n [1.]]\n [ True False True True False False]\n [[ 0.12488474 -8.71189933]\n [-7.53680306 -5.88116334]\n [-5.08998958 3.797166 ]]\n [[0.]\n [1.]\n [0.]]\n [ True False False True True False]\n [[-7.24691362 6.97272631]\n [-7.43552393 8.12300635]\n [-8.18648773 0.63600758]]\n [[0.]\n [0.]\n [1.]]\n [ True True False True True False False True]\n [[-5.85207908 -0.59122037]\n [ 1.61606968 -6.87914746]\n [ 2.77498938 -0.46787719]\n [-5.36533543 -8.65924707]\n [ 2.02037997 -4.76943075]]\n [[0.]\n [1.]\n [0.]\n [0.]\n [1.]]\n [False False True True False True False True True]\n [[-6.33287127 9.53456732]\n [-7.10420299 -2.95333293]\n [-0.08443264 5.84655359]\n [-1.58121719 7.27356233]\n [-2.94707113 -1.84666023]]\n [[0.]\n [0.]\n [0.]\n [1.]\n [0.]]\n [ True True True True True False True False False]\n [[-2.87398601 -8.99548391]\n [-3.7649271 -2.50289884]\n [-2.68129205 -0.01257792]\n [-4.62628445 6.03429738]\n [-7.49375538 2.40902251]\n [-7.18118428 -1.2023141 ]]\n [[0.]\n [1.]\n [1.]\n [1.]\n [0.]\n [1.]]\n [ True True False True False True False True True]\n [[-3.66970987 -4.54154617]\n [-0.04433954 2.65642204]\n [-8.0981359 0.87803743]\n [-4.76792146 -7.00028031]\n [-4.99180092 -5.35301201]\n [ 8.19457303 2.59606365]]\n [[1.]\n [1.]\n [1.]\n [1.]\n [0.]\n [1.]]\n [False False False False True False True]\n [[-4.84935715 6.7547206 ]\n [-8.01174707 8.93445182]]\n [[0.]\n [1.]]\n [ True False True True False]\n [[-5.50090019 -6.71965793]\n [-6.2562681 5.3800424 ]\n [-1.96759371 8.82984571]]\n [[0.]\n [0.]\n [0.]]\n [ True False False False True True False]\n [[ 1.74431094 6.95163073]\n [-3.82743019 6.64104529]\n [-0.02901706 8.48311679]]\n [[1.]\n [0.]\n [0.]]\n [ True False True False True False True True False]\n [[-7.58259261 -9.91977316]\n [-2.40194889 3.84509431]\n [-7.93730542 -4.5259871 ]\n [-1.96065237 7.60936619]\n [-6.99162731 -8.87091189]]\n [[1.]\n [1.]\n [0.]\n [0.]\n [1.]]\n [ True False False True True]\n [[-6.77785272 -6.54742583]\n [-0.34029011 -0.23541973]\n [ 2.78476849 -6.03841927]]\n [[1.]\n [0.]\n [0.]]\n [False False False True False False True]\n [[-3.96488507 -0.35617297]\n [-2.59579459 -6.02516011]]\n [[1.]\n [1.]]\n [False False True False True False False]\n [[ 8.58922241 -8.04297604]\n [-1.71552812 -6.81573813]]\n [[0.]\n [0.]]\n [False False False False True True True True]\n [[-9.63881272 4.00267662]\n [-5.03307598 -0.10750789]\n [-7.29440437 -5.83795654]\n [-9.36169689 8.10434713]]\n [[1.]\n [1.]\n [1.]\n [0.]]\n [ True True True False True False]\n [[-2.50417291 -8.36174898]\n [-5.95913587 -9.5684707 ]\n [-1.05512041 7.00617975]\n [ 0.94759405 6.00070173]]\n [[0.]\n [0.]\n [0.]\n [1.]]\n [ True False False False False False False True True]\n [[-9.41290257 5.23155346]\n [-7.22303734 2.60855153]\n [-9.74225545 2.29896043]]\n [[0.]\n [0.]\n [0.]]\n [False False True False True True]\n [[-6.55566772 4.81986286]\n [ 6.05011921 -4.6320378 ]\n [ 4.73741385 2.59345509]]\n [[0.]\n [1.]\n [1.]]\n [ True False False False True]\n [[-6.78707299 2.57737964]\n [-3.22153838 -2.06931252]]\n [[1.]\n [0.]]\n [ True False True False False True False False False]\n [[-9.55202319 -9.94102357]\n [ 4.27491433 -7.78196405]\n [ 6.78600825 -3.51659557]]\n [[1.]\n [1.]\n [1.]]\n [False True False False True True False]\n [[-0.93970228 4.40791271]\n [-4.41035194 4.65295999]\n [-7.3572601 -7.07251393]]\n [[0.]\n [1.]\n [0.]]\n [False 
False False True True True True False]\n [[ 2.02528737 3.01206302]\n [-2.39231441 -6.050712 ]\n [ 0.37863343 5.42265122]\n [-8.96346952 -7.6930434 ]]\n [[0.]\n [1.]\n [1.]\n [1.]]\n [False False True True False]\n [[-6.26624723 -5.16014257]\n [-6.70061154 1.29390345]]\n [[1.]\n [1.]]\n [False True False True False False False False False]\n [[-3.62149249 -6.92291217]\n [ 1.18257795 -9.63747597]]\n [[0.]\n [0.]]\n [False True True True True False False]\n [[-9.9347983 -9.22290294]\n [-3.43343251 9.22024277]\n [-7.54835663 9.39324371]\n [-3.00011844 -0.08409379]]\n [[0.]\n [1.]\n [1.]\n [0.]]\n [ True False False True True False]\n [[-8.19588147 -9.37485802]\n [ 6.07354263 -3.1779417 ]\n [-5.18229015 -9.53823914]]\n [[1.]\n [1.]\n [1.]]\n [ True False True False False True False True False]\n [[ 0.08052513 1.27516653]\n [-7.28213357 -7.056836 ]\n [-7.03800875 -5.10133603]\n [-6.08436142 5.36328988]]\n [[1.]\n [1.]\n [0.]\n [0.]]\n [ True True False True False]\n [[ 9.83764274 2.46952373]\n [-0.06915623 -0.76587765]\n [-8.95868384 -9.99781408]]\n [[1.]\n [0.]\n [0.]]\n [False True False False True]\n [[ 4.72360431 -4.55592201]\n [-1.97286413 -3.08672912]]\n [[0.]\n [1.]]\n [False False True False True]\n [[ 1.43576384 -8.99474414]\n [-3.36516254 -6.4934647 ]]\n [[1.]\n [0.]]\n [ True False True True False]\n [[-6.10484594 1.90217048]\n [-2.1729519 3.07583054]\n [-3.56746362 -3.86210911]]\n [[0.]\n [0.]\n [1.]]\n [False False True True False True]\n [[-4.54583766 4.53405796]\n [-1.94975453 4.7680293 ]\n [-4.19862534 7.16475208]]\n [[1.]\n [1.]\n [1.]]\n [ True False False False False False True True]\n [[-0.51425963 8.2213693 ]\n [-1.955815 -4.00124272]\n [-0.33892306 6.15880467]]\n [[1.]\n [0.]\n [0.]]\n [ True False True False True]\n [[-0.69470219 0.58731506]\n [-5.9652306 0.67996546]\n [-8.07856453 9.02889533]]\n [[1.]\n [1.]\n [0.]]\n [False True False True False]\n [[-3.44882369 -5.31594267]\n [-0.60422171 8.37542776]]\n [[1.]\n [1.]]\n [False False True True False True]\n [[-2.80057539 -5.55489229]\n [-5.040751 -1.55424598]\n [-7.60086867 -2.29169235]]\n [[0.]\n [0.]\n [0.]]\n [False True True False False True False False]\n [[-0.60327182 -9.94776048]\n [-2.41385399 2.91112426]\n [ 0.97285143 -7.40061708]]\n [[1.]\n [0.]\n [1.]]\n [False True True True False True]\n [[ 0.48706579 -1.61110782]\n [-5.25910094 -9.62965388]\n [-9.09885416 -2.8767827 ]\n [ 8.43197642 0.65359807]]\n [[1.]\n [0.]\n [1.]\n [1.]]\n [ True False True True False False True False False]\n [[-1.19738089 4.57571159]\n [-6.64194726 -5.56365335]\n [-0.94392088 -0.88370692]\n [-3.6861174 -0.42453922]]\n [[1.]\n [1.]\n [1.]\n [1.]]\n [ True False False True False False]\n [[-4.71721731 -1.09871374]\n [-6.21443348 -2.84728111]]\n [[1.]\n [0.]]\n [False True True True True False]\n [[ 2.92732003 -1.88865707]\n [-1.60631255 -1.22566077]\n [-4.06649003 -7.70954635]\n [ 6.69107814 0.54028227]]\n [[0.]\n [0.]\n [0.]\n [0.]]\n [ True True False False True]\n [[-3.00140948 -6.41315406]\n [ 5.13893193 -7.60535297]\n [-3.70337221 -1.42053089]]\n [[0.]\n [0.]\n [1.]]\n [False True False True True]\n [[-7.62775463 -0.13606186]\n [-5.93502138 -5.20419842]\n [ 9.3924202 -8.03294589]]\n [[0.]\n [0.]\n [0.]]\n [False False False False True True]\n [[-9.61449335 -4.18374342]\n [-5.79933452 7.52870301]]\n [[1.]\n [0.]]\n [False False True False True False True True]\n [[-9.20678831 1.74767156]\n [ 0.78580093 -5.35266754]\n [ 5.13187746 -6.4307061 ]\n [ 2.81195689 -8.06716307]]\n [[0.]\n [1.]\n [0.]\n [0.]]\n [ True False False True True 
False False]\n [[-6.37269645 -2.28711518]\n [-9.89165526 -0.6289442 ]\n [ 2.49739494 -5.01484464]]\n [[1.]\n [0.]\n [0.]]\n [ True True False True True True True False True]\n [[ 1.30252665 1.41473139]\n [-9.16038639 3.48248068]\n [ 8.72096919 -8.20797439]\n [ 0.072007 -7.98098299]\n [ 2.89977394 -6.74748526]\n [-8.01589773 -0.61247194]\n [-0.53524827 2.88008634]]\n [[0.]\n [0.]\n [0.]\n [0.]\n [0.]\n [1.]\n [0.]]\n tree_split_data_left successfully passed local tests\n Running remote test...\n [False False True True False False]\n [[-3.61351138 -2.62259218]\n [-8.43132439 -5.70585008]]\n [[0.]\n [0.]]\n Test was successful. Congratulations!\n\n\n\n```python\nam.test_student_function(username, tree_split_data_right, ['X', 'Y', 'feature_index', 'split_value'])\n```\n\n Running local tests...\n tree_split_data_right successfully passed local tests\n Running remote test...\n Test was successful. Congratulations!\n\n\n\n```python\nam.get_progress(username)\n```\n\n ---------------------------------------------\n | Magdalena Mladenova |\n | magdalena.mladenova@student.uva.nl |\n ---------------------------------------------\n | linear_forward | completed |\n | linear_grad_W | completed |\n | linear_grad_b | completed |\n | nll_forward | completed |\n | nll_grad_input | not attempted |\n | sigmoid_forward | completed |\n | sigmoid_grad_input | completed |\n | tree_gini_index | completed |\n | tree_split_data_left | completed |\n | tree_split_data_right | completed |\n | tree_to_terminal | not attempted |\n ---------------------------------------------\n\n\nNow to find the split rule with the lowest gini score, we brute-force search over all features and values to split by.\n\n\n```python\ndef tree_best_split(X, Y):\n class_values = list(set(Y.flatten().tolist()))\n r_index, r_value, r_score = float(\"inf\"), float(\"inf\"), float(\"inf\")\n r_XY_left, r_XY_right = (X,Y), (X,Y)\n for feature_index in range(X.shape[1]):\n for row in X:\n XY_left = tree_split_data_left(X, Y, feature_index, row[feature_index])\n XY_right = tree_split_data_right(X, Y, feature_index, row[feature_index])\n XY_left, XY_right = (XY_left[:,:-1], XY_left[:,-1:]), (XY_right[:,:-1], XY_right[:,-1:])\n gini = tree_gini_index(XY_left[1], XY_right[1], class_values)\n if gini < r_score:\n r_index, r_value, r_score = feature_index, row[feature_index], gini\n r_XY_left, r_XY_right = XY_left, XY_right\n return {'index':r_index, 'value':r_value, 'XY_left': r_XY_left, 'XY_right':r_XY_right}\n```\n\n## 2.2 Terminal Node\nThe leaf nodes predict the label of an unseen example, by taking a majority vote over all training class labels in that node.\n\n\n```python\ndef tree_to_terminal(Y):\n \"\"\"The most frequent class label, out of the data points belonging to the leaf node,\n is selected as the predicted class.\n \n # Arguments\n Y: np.array of size `(n_objects)`\n \n # Output\n label: most frequent label of `Y.dtype`\n \"\"\"\n class_names, counts = np.unique(Y, return_counts= True)\n print(class_names, counts) \n \n ind=np.argmax(counts)\n label = class_names[ind]\n \n return label\n```\n\n\n```python\nam.test_student_function(username, tree_to_terminal, ['Y'])\n```\n\n Running local tests...\n [0. 1.] [2 7]\n [0. 1.] [7 6]\n [0. 1.] [12 3]\n [0. 1.] [6 5]\n [1.] [5]\n [0. 1.] [12 9]\n [0. 1.] [9 4]\n [0. 1.] [8 9]\n [0. 1.] [4 3]\n [0. 1.] [11 10]\n [0. 1.] [3 6]\n [0. 1.] [ 8 11]\n [0. 1.] [7 2]\n [0. 1.] [2 5]\n [0. 1.] [4 7]\n [0. 1.] [11 8]\n [0. 1.] [5 6]\n [0. 1.] [10 9]\n [0. 1.] [11 12]\n [0. 1.] [11 12]\n [0. 1.] 
[7 8]
    (further local-test output from the `print` call in `tree_to_terminal` omitted)
    tree_to_terminal successfully passed local tests
    Running remote test...
    [0. 1.] [2 7]
    Test was successful. Congratulations!


```python
am.get_progress(username)
```

## 2.3 Build the Decision Tree
Now we recursively build the decision tree by greedily splitting the data at each node according to the Gini index.
To prevent the model from overfitting, we transform a node into a terminal/leaf node if:
1. a maximum depth is reached.
2. the node does not reach a minimum number of training samples.


```python
def tree_recursive_split(X, Y, node, max_depth, min_size, depth):
    XY_left, XY_right = node['XY_left'], node['XY_right']
    del(node['XY_left'])
    del(node['XY_right'])
    # check for a no split
    if XY_left[0].size <= 0 or XY_right[0].size <= 0:
        node['left_child'] = node['right_child'] = tree_to_terminal(np.concatenate((XY_left[1], XY_right[1])))
        return
    # check for max depth
    if depth >= max_depth:
        node['left_child'], node['right_child'] = tree_to_terminal(XY_left[1]), tree_to_terminal(XY_right[1])
        return
    # process left child
    if XY_left[0].shape[0] <= min_size:
        node['left_child'] = tree_to_terminal(XY_left[1])
    else:
        node['left_child'] = tree_best_split(*XY_left)
        tree_recursive_split(X, Y, node['left_child'], max_depth, min_size, depth+1)
    # process right child
    if XY_right[0].shape[0] <= min_size:
        node['right_child'] = tree_to_terminal(XY_right[1])
    else:
        node['right_child'] = tree_best_split(*XY_right)
        tree_recursive_split(X, Y, node['right_child'], max_depth, min_size, depth+1)


def build_tree(X, Y, max_depth, min_size):
    root = tree_best_split(X, Y)
    tree_recursive_split(X, Y, root, max_depth, min_size, 1)
    return root
```

By printing the split criterion or the predicted class at each node, we can visualise the decision-making process.
Both printing the tree and making a prediction can be implemented recursively, by going from the root to a leaf node.


```python
def print_tree(node, depth=0):
    if isinstance(node, dict):
        print('%s[X%d < %.3f]' % ((depth*' ', (node['index']+1), node['value'])))
        print_tree(node['left_child'], depth+1)
        print_tree(node['right_child'], depth+1)
    else:
        print('%s[%s]' % ((depth*' ', node)))

def tree_predict_single(x, node):
    if isinstance(node, dict):
        if x[node['index']] < node['value']:
            return tree_predict_single(x, node['left_child'])
        else:
            return tree_predict_single(x, node['right_child'])

    return node

def tree_predict_multi(X, node):
    Y = np.array([tree_predict_single(row, node) for row in X])
    return Y[:, None] # size: (n_object,) -> (n_object, 1)
```

Let's test our decision tree model on some toy data.


```python
X_train, Y_train, X_test, Y_test = split(*generate_2_circles(), 0.7)

tree = build_tree(X_train, Y_train, 4, 1)
Y_pred = tree_predict_multi(X_test, tree)
test_accuracy = (Y_pred == Y_test).mean()
print('Test Acc: {:.1f}%'.format(test_accuracy * 100))
```

We print the decision tree in [pre-order](https://en.wikipedia.org/wiki/Tree_traversal#Pre-order_(NLR)).


```python
print_tree(tree)
```


```python
plot_model_prediction(lambda x: tree_predict_multi(x, tree), X_test, Y_test)
```

# 3. Experiments
The [Cleveland Heart Disease](https://archive.ics.uci.edu/ml/datasets/Heart+Disease) dataset aims at predicting the presence of heart disease based on other available medical information of the patient.

Although the whole database contains 76 attributes, we focus on the following 14:
1. Age: age in years
2. Sex:
    * 0 = female
    * 1 = male
3. Chest pain type:
    * 1 = typical angina
    * 2 = atypical angina
    * 3 = non-anginal pain
    * 4 = asymptomatic
4. Trestbps: resting blood pressure in mm Hg on admission to the hospital
5. Chol: serum cholesterol in mg/dl
6. Fasting blood sugar: > 120 mg/dl
    * 0 = false
    * 1 = true
7. Resting electrocardiographic results:
    * 0 = normal
    * 1 = having ST-T wave abnormality (T wave inversions and/or ST elevation or depression of > 0.05 mV)
    * 2 = showing probable or definite left ventricular hypertrophy by Estes' criteria
8. Thalach: maximum heart rate achieved
9. Exercise induced angina:
    * 0 = no
    * 1 = yes
10. Oldpeak: ST depression induced by exercise relative to rest
11. Slope: the slope of the peak exercise ST segment
    * 1 = upsloping
    * 2 = flat
    * 3 = downsloping
12. Ca: number of major vessels (0-3) colored by fluoroscopy
13. Thal:
    * 3 = normal
    * 6 = fixed defect
    * 7 = reversible defect
14. Target: diagnosis of heart disease (angiographic disease status)
    * 0 = < 50% diameter narrowing
    * 1 = > 50% diameter narrowing

The 14th attribute is the target variable that we would like to predict based on the rest.

We have prepared some helper functions to download and pre-process the data in `heart_disease_data.py`.


```python
import heart_disease_data
```


```python
X, Y = heart_disease_data.download_and_preprocess()
X_train, Y_train, X_test, Y_test = split(X, Y, 0.7)
```

Let's have a look at some examples.


```python
print(X_train[0:2])
print(Y_train[0:2])

# TODO feel free to explore more examples and see if you can predict the presence of a heart disease
```

## 3.1 Decision Tree for Heart Disease Prediction
Let's build a decision tree model on the training data and see how well it performs.


```python
# TODO: you are free to make use of code that we provide in previous cells
# TODO: play around with different hyperparameters and see how these impact your performance

tree = build_tree(X_train, Y_train, 5, 4)
Y_pred = tree_predict_multi(X_test, tree)
test_accuracy = (Y_pred == Y_test).mean()
print('Test Acc: {:.1f}%'.format(test_accuracy * 100))
```

How did changing the hyperparameters affect the test performance? Usually hyperparameters are tuned using a hold-out [validation set](https://en.wikipedia.org/wiki/Training,_validation,_and_test_sets#Validation_dataset) instead of the test set.
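A minimal sketch of such a tuning loop is shown below. It assumes the helpers defined earlier in this notebook (`split`, `build_tree`, `tree_predict_multi`) are in scope; the candidate values for `max_depth` and `min_size` are arbitrary choices for illustration.


```python
# Hold out part of the training data as a validation set (the test set stays untouched).
X_tr, Y_tr, X_val, Y_val = split(X_train, Y_train, 0.8)

best = None
for max_depth in [2, 3, 5, 8]:        # candidate depths (illustrative values)
    for min_size in [1, 4, 10]:       # candidate minimum node sizes (illustrative values)
        candidate = build_tree(X_tr, Y_tr, max_depth, min_size)
        val_acc = (tree_predict_multi(X_val, candidate) == Y_val).mean()
        if best is None or val_acc > best[0]:
            best = (val_acc, max_depth, min_size)

print('Best validation accuracy: {:.1f}% with max_depth={}, min_size={}'.format(
    best[0] * 100, best[1], best[2]))
```

Once the best setting is found on the validation split, the test set is used only once, for the final evaluation.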
## 3.2 Logistic Regression for Heart Disease Prediction

Instead of manually going through the data to find possible correlations, let's try training a logistic regression model on the data.


```python
# TODO: you are free to make use of code that we provide in previous cells
# TODO: play around with different hyperparameters and see how these impact your performance
```

How well did your model perform? Was it actually better than guessing? Let's look at the empirical mean of the target.


```python
Y_train.mean()
```

So what is the problem? Let's have a look at the learned parameters of our model.


```python
print(model.W, model.b)
```

If you trained for sufficiently many steps, you'll probably see that some weights are much larger than others. Have a look at the range in which the parameters were initialized and how much change we allow per step (learning rate). Compare this to the scale of the input features. Here an important concept arises when we want to train on real-world data:
[Feature Scaling](https://en.wikipedia.org/wiki/Feature_scaling).

Let's try applying it to our data and see how it affects our performance.


```python
# TODO: Rescale the input features and train again
```

Notice that we did not need any rescaling for the decision tree. 
Can you think of why?\n", "meta": {"hexsha": "41d802e3fa59e01b7e9d0f3a89d9b31e72647d76", "size": 447169, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "week_2/ML.ipynb", "max_stars_repo_name": "MagdalenaMl/UVA_AML20", "max_stars_repo_head_hexsha": "f0ae1417e642c41a23567c91a9a6bf7c54838d9d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "week_2/ML.ipynb", "max_issues_repo_name": "MagdalenaMl/UVA_AML20", "max_issues_repo_head_hexsha": "f0ae1417e642c41a23567c91a9a6bf7c54838d9d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "week_2/ML.ipynb", "max_forks_repo_name": "MagdalenaMl/UVA_AML20", "max_forks_repo_head_hexsha": "f0ae1417e642c41a23567c91a9a6bf7c54838d9d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 60.8227693145, "max_line_length": 108216, "alphanum_fraction": 0.723200848, "converted": true, "num_tokens": 57508, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8991213745668095, "lm_q2_score": 0.9353465147977104, "lm_q1q2_score": 0.8409900440811919}} {"text": "# Symbolic Computation\nSymbolic computation deals with symbols, representing them exactly, instead of numerical approximations (floating point). \n\nWe will start with the following [borrowed](https://docs.sympy.org/latest/tutorial/intro.html) tutorial to introduce the concepts of SymPy. Devito uses SymPy heavily and builds upon it in its DSL. \n\n\n```python\nimport math\n\nmath.sqrt(3)\n```\n\n\n\n\n 1.7320508075688772\n\n\n\n\n```python\nmath.sqrt(8)\n```\n\n\n\n\n 2.8284271247461903\n\n\n\n$\\sqrt(8) = 2\\sqrt(2)$, but it's hard to see that here\n\n\n```python\nimport sympy\nsympy.sqrt(3)\n```\n\n\n\n\n$\\displaystyle \\sqrt{3}$\n\n\n\nSymPy can even simplify symbolic computations\n\n\n```python\nsympy.sqrt(8)\n```\n\n\n\n\n$\\displaystyle 2 \\sqrt{2}$\n\n\n\n\n```python\nfrom sympy import symbols\nx, y = symbols('x y')\nexpr = x + 2*y\nexpr\n\n```\n\n\n\n\n$\\displaystyle x + 2 y$\n\n\n\nNote that simply adding two symbols creates an expression. Now let's play around with it. 
\n\n\n```python\nexpr + 1\n```\n\n\n\n\n$\\displaystyle x + 2 y + 1$\n\n\n\n\n```python\nexpr - x\n```\n\n\n\n\n$\\displaystyle 2 y$\n\n\n\nNote that `expr - x` was not `x + 2y -x`\n\n\n```python\nx*expr\n```\n\n\n\n\n$\\displaystyle x \\left(x + 2 y\\right)$\n\n\n\n\n```python\nfrom sympy import expand, factor\nexpanded_expr = expand(x*expr)\nexpanded_expr\n```\n\n\n\n\n$\\displaystyle x^{2} + 2 x y$\n\n\n\n\n```python\nfactor(expanded_expr)\n```\n\n\n\n\n$\\displaystyle x \\left(x + 2 y\\right)$\n\n\n\n\n```python\nfrom sympy import diff, sin, exp\n\ndiff(sin(x)*exp(x), x)\n```\n\n\n\n\n$\\displaystyle e^{x} \\sin{\\left(x \\right)} + e^{x} \\cos{\\left(x \\right)}$\n\n\n\n\n```python\nfrom sympy import limit\n\nlimit(sin(x)/x, x, 0)\n```\n\n\n\n\n$\\displaystyle 1$\n\n\n\n### Exercise\n\nSolve $x^2 - 2 = 0$ using sympy.solve\n\n\n```python\n# Type solution here\nfrom sympy import solve\n```\n\n## Pretty printing\n\n\n```python\nfrom sympy import init_printing, Integral, sqrt\n\ninit_printing(use_latex=True)\n```\n\n\n```python\nIntegral(sqrt(1/x), x)\n```\n\n\n```python\nfrom sympy import latex\n\nlatex(Integral(sqrt(1/x), x))\n```\n\n\n\n\n '\\\\int \\\\sqrt{\\\\frac{1}{x}}\\\\, dx'\n\n\n\nMore symbols.\nExercise: fix the following piece of code\n\n\n```python\n# NBVAL_SKIP\n# The following piece of code is supposed to fail as it is\n# The exercise is to fix the code\n\nexpr2 = x + 2*y +3*z\n```\n\n### Exercise \n\nSolve $x + 2*y + 3*z$ for $x$\n\n\n```python\n# Solution here\nfrom sympy import solve\n```\n\nDifference between symbol name and python variable name\n\n\n```python\nx, y = symbols(\"y z\")\n```\n\n\n```python\nx\n```\n\n\n```python\ny\n```\n\n\n```python\n# NBVAL_SKIP\n# The following code will error until the code in cell 16 above is\n# fixed\nz\n```\n\nSymbol names can be more than one character long\n\n\n```python\ncrazy = symbols('unrelated')\n\ncrazy + 1\n```\n\n\n```python\nx = symbols(\"x\")\nexpr = x + 1\nx = 2\n```\n\nWhat happens when I print expr now? Does it print 3?\n\n\n```python\nprint(expr)\n```\n\n x + 1\n\n\nHow do we get 3?\n\n\n```python\nx = symbols(\"x\")\nexpr = x + 1\nexpr.subs(x, 2)\n```\n\n## Equalities\n\n\n```python\nx + 1 == 4\n```\n\n\n\n\n False\n\n\n\n\n```python\nfrom sympy import Eq\n\nEq(x + 1, 4)\n```\n\nSuppose we want to ask whether $(x + 1)^2 = x^2 + 2x + 1$\n\n\n```python\n(x + 1)**2 == x**2 + 2*x + 1\n```\n\n\n\n\n False\n\n\n\n\n```python\nfrom sympy import simplify\n\na = (x + 1)**2\nb = x**2 + 2*x + 1\n\nsimplify(a-b)\n```\n\n### Exercise \nWrite a function that takes two expressions as input, and returns a tuple of two booleans. The first if they are equal symbolically, and the second if they are equal mathematically.\n\n## More operations\n\n\n```python\nz = symbols(\"z\")\nexpr = x**3 + 4*x*y - z\nexpr.subs([(x, 2), (y, 4), (z, 0)])\n```\n\n\n```python\nfrom sympy import sympify\n\nstr_expr = \"x**2 + 3*x - 1/2\"\nexpr = sympify(str_expr)\nexpr\n```\n\n\n```python\nexpr.subs(x, 2)\n```\n\n\n```python\nexpr = sqrt(8)\n```\n\n\n```python\nexpr\n```\n\n\n```python\nexpr.evalf()\n```\n\n\n```python\nfrom sympy import pi\n\npi.evalf(100)\n```\n\n\n```python\nfrom sympy import cos\n\nexpr = cos(2*x)\nexpr.evalf(subs={x: 2.4})\n```\n\n### Exercise\n\n\n\n```python\nfrom IPython.core.display import Image \nImage(filename='figures/comic.png') \n```\n\nWrite a function that takes a symbolic expression (like pi), and determines the first place where 789 appears.\nTip: Use the string representation of the number. 
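\n\nOne possible sketch of that exercise is shown below (keeping in mind the indexing note that follows). The function name `first_occurrence`, the 1000-digit cut-off, and the convention of dropping the decimal point before counting are choices made here, not part of the exercise statement.\n\n\n```python\nfrom sympy import pi\n\ndef first_occurrence(expr, pattern='789', digits=1000):\n    # Evaluate the expression to `digits` significant figures and search its digit string\n    digit_string = str(expr.evalf(digits)).replace('.', '')\n    return digit_string.find(pattern)  # -1 means the pattern was not found within this many digits\n\nprint(first_occurrence(pi))\n```\n\n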
Python starts counting at 0, but the decimal point offsets this\n\n## Solving an ODE\n\n\n```python\nfrom sympy import Function\n\nf, g = symbols('f g', cls=Function)\nf(x)\n```\n\n\n```python\nf(x).diff()\n```\n\n\n```python\ndiffeq = Eq(f(x).diff(x, x) - 2*f(x).diff(x) + f(x), sin(x))\ndiffeq\n```\n\n\n```python\nfrom sympy import dsolve\n\ndsolve(diffeq, f(x))\n```\n\n## Finite Differences\n\n\n```python\n\nf = Function('f')\ndfdx = f(x).diff(x)\ndfdx.as_finite_difference()\n```\n\n\n```python\nfrom sympy import Symbol\n\nd2fdx2 = f(x).diff(x, 2)\nh = Symbol('h')\nd2fdx2.as_finite_difference(h)\n```\n\nNow that we have seen some relevant features of vanilla SymPy, let's move on to Devito, which could be seen as SymPy finite differences on steroids!\n", "meta": {"hexsha": "b1e9cbb90d1c64e70bfdc05aa98bfe7e4846a069", "size": 130289, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "examples/userapi/00-sympy.ipynb", "max_stars_repo_name": "tallamjr/devito", "max_stars_repo_head_hexsha": "677ea0f00809c865c54276b3f44821052913ffcf", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2019-04-05T20:52:23.000Z", "max_stars_repo_stars_event_max_datetime": "2019-11-03T21:36:53.000Z", "max_issues_repo_path": "examples/userapi/00-sympy.ipynb", "max_issues_repo_name": "tallamjr/devito", "max_issues_repo_head_hexsha": "677ea0f00809c865c54276b3f44821052913ffcf", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 9, "max_issues_repo_issues_event_min_datetime": "2019-06-11T20:54:19.000Z", "max_issues_repo_issues_event_max_datetime": "2020-04-06T17:56:10.000Z", "max_forks_repo_path": "examples/userapi/00-sympy.ipynb", "max_forks_repo_name": "vmickus/devito", "max_forks_repo_head_hexsha": "1e8d6af2cb8615e47ae20b68d5f2e30d530d03ff", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 106.9696223317, "max_line_length": 75420, "alphanum_fraction": 0.8733814827, "converted": true, "num_tokens": 1507, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9099069987088003, "lm_q2_score": 0.924141826246517, "lm_q1q2_score": 0.8408831155012378}} {"text": "\n\n# Introducci\u00f3n a SymPy\n\n\n\n__SymPy es una biblioteca de Python para matem\u00e1tica simb\u00f3lica__. Apunta a convertirse en un sistema de algebra computacional (__CAS__) con todas sus prestaciones manteniendo el c\u00f3digo tan simple como sea posible para manterlo comprensible y f\u00e1cilmente extensible. SymPy est\u00e1 __escrito totalmente en Python y no requiere bibliotecas adicionales__. _Este proyecto comenz\u00f3 en 2005, fue lanzado al p\u00fablico en 2007 y a \u00e9l han contribuido durante estos a\u00f1os cientos de personas._\n\n_ Otros CAS conocidos son Mathematica y Maple, sin embargo ambos son software privativo y de pago. [Aqu\u00ed](https://github.com/sympy/sympy/wiki/SymPy-vs.-Maple) puedes encontrar una comparativa de SymPy con Maple. 
_\n\nHoy veremos c\u00f3mo:\n\n* Crear s\u00edmbolos y expresiones.\n* Manipular expresiones (simplificaci\u00f3n, expansi\u00f3n...)\n* Calcular derivadas e integrales.\n* L\u00edmites y desarrollos en serie.\n* Resoluci\u00f3n de ecuaciones.\n* Resolci\u00f3n de EDOs.\n* Matrices\n\nSin embargo, SymPy no acaba aqu\u00ed ni mucho menos...\n\n## Documentaci\u00f3n & SymPy Live Shell\n\n\n```python\nfrom IPython.display import HTML\nHTML('')\n```\n\n\n\n\n\n\n\n\n## SymPy Gamma\n\n\n```python\nHTML('')\n```\n\n\n\n\n\n\n\n\n## Creaci\u00f3n de s\u00edmbolos\n\nLo primero, como siempre, es importar aquello que vayamos a necesitar. La manera usual de hacerlo con SymPy es importar la funci\u00f3n `init_session`:\n```\nfrom sympy import init_session\ninit_session(use_latex=True)```\n\n Esta funci\u00f3n ya se encarga de importar todas las funciones b\u00e1sicas y preparar las salidas gr\u00e1ficas. Sin embargo, en este momento, esta funci\u00f3n se encuentra en mantenimiento para su uso dentro de los notebooks por lo que activaremos la salida gr\u00e1fica e importaremos las funciones de la manera usual. Puedes consultar el estado de la correcci\u00f3n en: https://github.com/sympy/sympy/pull/13300 y https://github.com/sympy/sympy/issues/13319 .\n\nEl comando `init_session` llevar\u00eda a cabo algunas acciones por nostros:\n\n* Gracias a `use_latex=True` obtenemos la salida en $\\LaTeX$.\n* __Crea una serie de variables__ para que podamos ponernos a trabajar en el momento.\n\nEstas capacidades volver\u00e1n a estar disponibles cuando el problema se corrija.\n\n\n```python\nfrom sympy import init_printing\n```\n\n\n```python\ninit_printing() \n```\n\n\n```python\n# aeropython: preserve\nfrom sympy import (symbols, pi, I, E, cos, sin, exp, tan, simplify, expand, factor, collect,\n apart, cancel, expand_trig, diff, Derivative, Function, integrate, limit,\n series, Eq, solve, dsolve, Matrix, N)\n```\n\n
    Nota: \nEn Python, no se declaran las variables, sin embargo, no puedes usar una hasta que no le hayas asignado un valor. Si ahora intentamos crear una variable `a` que sea `a = 2 * b`, veamos qu\u00e9 ocurre:\n
    \n\n\n```python\n# Intentamos usar un s\u00edmbolo que no hemos creado\na = 2 * b\n```\n\nComo en `b` no hab\u00eda sido creada, Python no sabe qu\u00e9 es `b`.\n\nEsto mismo nos ocurre con los s\u00edmbolos de SymPy. __Antes de usar una variable, debo decir que es un s\u00edmbolo y asign\u00e1rselo:__\n\n\n```python\n# Creamos el s\u00edmbolo a\na = symbols('a')\na\n```\n\n\n```python\n# N\u00famero pi\n(a + pi) ** 2\n```\n\n\n```python\n# Unidad imaginaria\na + 2 * I\n```\n\n\n```python\n# N\u00famero e\nE\n```\n\n\n```python\n# Vemos qu\u00e9 tipo de variable es a\ntype(a)\n```\n\n\n\n\n sympy.core.symbol.Symbol\n\n\n\nAhora ya podr\u00eda crear `b = 2 * a`:\n\n\n```python\nb = 2 * a\nb\n```\n\n\n```python\ntype(b)\n```\n\n\n\n\n sympy.core.mul.Mul\n\n\n\n\u00bfQu\u00e9 est\u00e1 ocurriendo? Python detecta que a es una variable de tipo `Symbol` y al multiplicarla por `2` devuelve una variable de Sympy.\n\nComo Python permite que el tipo de una variable cambie, __si ahora le asigno a `a` un valor float deja de ser un s\u00edmbolo.__\n\n\n```python\na = 2.26492\na\n```\n\n\n```python\ntype(a)\n```\n\n\n\n\n float\n\n\n\n---\n__Las conclusiones son:__\n\n* __Si quiero usar una variable como s\u00edmbolo debo crearla previamente.__\n* Las operaciones con s\u00edmbolos devuelven s\u00edmbolos.\n* Si una varibale que almacenaba un s\u00edmbolo recibe otra asignaci\u00f3n, cambia de tipo.\n\n---\n\n__Las variables de tipo `Symbol` act\u00faan como contenedores en los que no sabemos qu\u00e9 hay (un real, un complejo, una lista...)__. Hay que tener en cuenta que: __una cosa es el nombre de la variable y otra el s\u00edmbolo con el que se representa__.\n\n\n```python\n#creaci\u00f3n de s\u00edmbolos\ncoef_traccion = symbols('c_T')\ncoef_traccion\n```\n\nIncluso puedo hacer cosas raras como:\n\n\n```python\n# Diferencia entre variable y s\u00edmbolo\na = symbols('b')\na\n```\n\nAdem\u00e1s, se pueden crear varos s\u00edmbolos a la vez:\n\n\n```python\nx, y, z, t = symbols('x y z t')\n```\n\ny s\u00edmbolos griegos:\n\n\n```python\nw = symbols('omega')\nW = symbols('Omega')\nw, W\n```\n\n\n_Fuente: Documentaci\u00f3n oficial de SymPy_\n\n__Por defecto, SymPy entiende que los s\u00edmbolos son n\u00fameros complejos__. Esto puede producir resultados inesperados ante determinadas operaciones como, por ejemplo, lo logaritmos. __Podemos indicar que la variable es real, entera... en el momento de la creaci\u00f3n__:\n\n\n```python\n# Creamos s\u00edmbolos reales\nx, y, z, t = symbols('x y z t', real=True)\n```\n\n\n```python\n# Podemos ver las asunciones de un s\u00edmbolo\nx.assumptions0\n```\n\n\n\n\n {'commutative': True,\n 'complex': True,\n 'hermitian': True,\n 'imaginary': False,\n 'real': True}\n\n\n\n## Expresiones\n\nComencemos por crear una expresi\u00f3n como: $\\cos(x)^2+\\sin(x)^2$\n\n\n```python\nexpr = cos(x)**2 + sin(x)**2\nexpr\n```\n\n### `simplify()`\n\nPodemos pedirle que simplifique la expresi\u00f3n anterior:\n\n\n```python\nsimplify(expr)\n```\n\nEn este caso parece estar claro lo que quiere decir m\u00e1s simple, pero como en cualquier _CAS_ el comando `simplify` puede no devolvernos la expresi\u00f3n que nosotros queremos. 
Cuando esto ocurra necesitaremos usar otras instrucciones.\n\n### `.subs()`\n\nEn algunas ocasiones necesitaremos sustituir una variable por otra, por otra expresi\u00f3n o por un valor.\n\n\n```python\nexpr\n```\n\n\n```python\n# Sustituimos x por y ** 2\nexpr.subs(x, y**2)\n```\n\n\n```python\n# \u00a1Pero la expresi\u00f3n no cambia!\nexpr\n```\n\n\n```python\n# Para que cambie\nexpr = expr.subs(x, y**2)\nexpr\n```\n\nCambia el `sin(x)` por `exp(x)`\n\n\n```python\nexpr.subs(sin(x), exp(x))\n```\n\nParticulariza la expresi\u00f3n $sin(x) + 3 x $ en $x = \\pi$\n\n\n```python\n(sin(x) + 3 * x).subs(x, pi)\n```\n\n__Aunque si lo que queremos es obtener el valor num\u00e9rico lo mejor es `.evalf()`__\n\n\n```python\n(sin(x) + 3 * x).subs(x, pi).evalf(25)\n```\n\n\n```python\n#ver pi con 25 decimales\npi.evalf(25)\n```\n\n\n```python\n#el mismo resultado se obtiene ocn la funci\u00f3n N()\nN(pi,25)\n```\n\n# Simplificaci\u00f3n\n\nSymPy ofrece numerosas funciones para __simplificar y manipular expresiones__. Entre otras, destacan:\n\n* `expand()`\n* `factor()`\n* `collect()`\n* `apart()`\n* `cancel()`\n\nPuedes consultar en la documentaci\u00f3n de SymPy lo que hace cada una y algunos ejemplos. __Existen tambi\u00e9n funciones espec\u00edficas de simplificaci\u00f3n para funciones trigonom\u00e9tricas, potencias y logaritmos.__ Abre [esta documentaci\u00f3n](http://docs.sympy.org/latest/tutorial/simplification.html) si lo necesitas.\n\n##### \u00a1Te toca!\n\nPasaremos r\u00e1pidamente por esta parte, para hacer cosas \"m\u00e1s interesantes\". Te proponemos algunos ejemplos para que te familiarices con el manejor de expresiones:\n\n__Crea las expresiones de la izquierda y averigua qu\u00e9 funci\u00f3n te hace obtener la de la derecha:__\n\nexpresi\u00f3n 1| expresi\u00f3n 2\n:------:|:------:\n$\\left(x^{3} + 3 y + 2\\right)^{2}$ | $x^{6} + 6 x^{3} y + 4 x^{3} + 9 y^{2} + 12 y + 4$\n$\\frac{\\left(3 x^{2} - 2 x + 1\\right)}{\\left(x - 1\\right)^{2}} $ | $3 + \\frac{4}{x - 1} + \\frac{2}{\\left(x - 1\\right)^{2}}$\n$x^{3} + 9 x^{2} + 27 x + 27$ | $\\left(x + 3\\right)^{3}$\n$\\sin(x+2y)$ | $\\left(2 \\cos^{2}{\\left (y \\right )} - 1\\right) \\sin{\\left (x \\right )} + 2 \\sin{\\left (y \\right )} \\cos{\\left (x \\right )} \\cos{\\left (y \\right )}$\n\n\n\n```python\n#1\nexpr1 = (x ** 3 + 3 * y + 2) ** 2\nexpr1\n```\n\n\n```python\nexpr1_exp = expr1.expand()\nexpr1_exp\n```\n\n\n```python\n#2\nexpr2 = (3 * x ** 2 - 2 * x + 1) / (x - 1) ** 2\nexpr2\n```\n\n\n```python\nexpr2.apart()\n```\n\n\n```python\n#3\nexpr3 = x ** 3 + 9 * x ** 2 + 27 * x + 27\nexpr3\n```\n\n\n```python\nexpr3.factor()\n```\n\n\n```python\n#4\nexpr4 = sin(x + 2 * y)\nexpr4\n```\n\n\n```python\nexpand(expr4)\n```\n\n\n```python\nexpand_trig(expr4)\n```\n\n\n```python\nexpand(expr4, trig=True)\n```\n\n# Derivadas e integrales\n\nPuedes derivar una expresion usando el m\u00e9todo `.diff()` y la funci\u00f3n `dif()`\n\n\n```python\n#creamos una expresi\u00f3n\nexpr = cos(x)\n\n#obtenemos la derivada primera con funcion\ndiff(expr, x)\n```\n\n\n```python\n#utilizando m\u00e9todo\nexpr.diff(x)\n```\n\n__\u00bfderivada tercera?__\n\n\n```python\nexpr.diff(x, x, x)\n```\n\n\n```python\nexpr.diff(x, 3)\n```\n\n__\u00bfvarias variables?__\n\n\n```python\nexpr_xy = y ** 3 * sin(x) ** 2 + x ** 2 * cos(y)\nexpr_xy\n```\n\n\n```python\ndiff(expr_xy, x, 2, y, 2)\n```\n\n__Queremos que la deje indicada__, usamos `Derivative()`\n\n\n```python\nDerivative(expr_xy, x, 2, y)\n```\n\n__\u00bfSer\u00e1 capaz SymPy de aplicar la regla de 
la cadena?__\n\n\n```python\n# Creamos una funci\u00f3n F\nF = Function('F')\nF(x)\n```\n\n\n```python\n# Creamos una funci\u00f3n G\nG = Function('G')\nG(x)\n```\n\n$$\\frac{d}{d x} F{\\left (G(x) \\right )} $$\n\n\n```python\n# Derivamos la funci\u00f3n compuesta F(G(x))\nF(G(x)).diff(x)\n```\n\n\n\n\n$$\\frac{d}{d x} G{\\left (x \\right )} \\left. \\frac{d}{d \\xi_{1}} F{\\left (\\xi_{1} \\right )} \\right|_{\\substack{ \\xi_{1}=G{\\left (x \\right )} }}$$\n\n\n\nEn un caso en el que conocemos las funciones:\n\n\n```python\n# definimos una f\nf = 2 * y * exp(x)\nf\n```\n\n\n```python\n# definimos una g(f)\ng = f **2 * cos(x) + f\ng\n```\n\n\n```python\n#la derivamos\ndiff(g,x)\n```\n\n##### Te toca integrar\n\n__Si te digo que se integra usando el m\u00e9todo `.integrate()` o la funci\u00f3n `integrate()`__. \u00bfTe atreves a integrar estas casi inmediatas...?:\n\n$$\\int{\\cos(x)^2}dx$$\n$$\\int{\\frac{dx}{\\sin(x)}}$$\n$$\\int{\\frac{dx}{(x^2+a^2)^2}}$$\n\n\n\n\n```python\nint1 = cos(x) ** 2\nintegrate(int1)\n```\n\n\n```python\nint2 = 1 / sin(x)\nintegrate(int2)\n```\n\n\n```python\nx, a = symbols('x a', real=True)\n\nint3 = 1 / (x**2 + a**2)**2\nintegrate(int3, x)\n```\n\n# L\u00edmites\n\nCalculemos este l\u00edmite sacado del libro _C\u00e1lculo: definiciones, teoremas y resultados_, de Juan de Burgos:\n\n$$\\lim_{x \\to 0} \\left(\\frac{x}{\\tan{\\left (x \\right )}}\\right)^{\\frac{1}{x^{2}}}$$\n\nPrimero creamos la expresi\u00f3n:\n\n\n```python\nx = symbols('x', real=True)\nexpr = (x / tan(x)) ** (1 / x**2)\nexpr\n```\n\nObtenemos el l\u00edmite con la funci\u00f3n `limit()` y si queremos dejarlo indicado, podemos usar `Limit()`:\n\n\n```python\nlimit(expr, x, 0)\n```\n\n# Series\n\nLos desarrollos en serie se pueden llevar a cabo con el m\u00e9todo `.series()` o la funci\u00f3n `series()`\n\n\n```python\n#creamos la expresi\u00f3n\nexpr = exp(x)\nexpr\n```\n\n\n```python\n#la desarrollamos en serie\nseries(expr)\n```\n\nSe puede especificar el n\u00famero de t\u00e9rminos pas\u00e1ndole un argumento `n=...`. 
El n\u00famero que le pasemos ser\u00e1 el primer t\u00e9rmino que desprecie.\n\n\n```python\n# Indicando el n\u00famero de t\u00e9rminos\nseries(expr, n=10)\n```\n\nSi nos molesta el $\\mathcal{O}(x^{10})$ lo podemos quitar con `removeO()`:\n\n\n```python\nseries(expr, n=10).removeO()\n```\n\n\n```python\nseries(sin(x), n=8, x0=pi/3).removeO().subs(x, x-pi/3)\n```\n\n---\n\n## Resoluci\u00f3n de ecuaciones\n\nComo se ha mencionado anteriormente las ecuaciones no se pueden crear con el `=`\n\n\n```python\n#creamos la ecuaci\u00f3n\necuacion = Eq(x ** 2 - x, 3)\necuacion\n```\n\n\n```python\n# Tambi\u00e9n la podemos crear como\nEq(x ** 2 - x -3)\n```\n\n\n```python\n#la resolvemos\nsolve(ecuacion)\n```\n\nPero la gracia es resolver con s\u00edmbolos, \u00bfno?\n$$a e^{\\frac{x}{t}} = C$$\n\n\n```python\n# Creamos los s\u00edmbolos y la ecuaci\u00f3n\na, x, t, C = symbols('a, x, t, C', real=True)\necuacion = Eq(a * exp(x/t), C)\necuacion\n```\n\n\n```python\n# La resolvemos\nsolve(ecuacion ,x)\n```\n\nSi consultamos la ayuda, vemos que las posibilidades y el n\u00famero de par\u00e1metros son muchos, no vamos a entrar ahora en ellos, pero \u00bfse ve la potencia?\n\n## Ecuaciones diferenciales\n\nTratemos de resolver, por ejemplo:\n\n$$y{\\left (x \\right )} + \\frac{d}{d x} y{\\left (x \\right )} + \\frac{d^{2}}{d x^{2}} y{\\left (x \\right )} = \\cos{\\left (x \\right )}$$\n\n\n```python\nx = symbols('x')\nf = Function('y')\necuacion_dif = Eq(y(x).diff(x,2) + y(x).diff(x) + y(x), cos(x))\necuacion_dif\n```\n\n\n```python\n#resolvemos\ndsolve(ecuacion_dif, f(x))\n```\n\n# Matrices\n\n\n```python\n#creamos una matriz llena de s\u00edmbolos\na, b, c, d = symbols('a b c d')\nA = Matrix([\n [a, b],\n [c, d]\n ])\nA\n```\n\n\n\n\n$$\\left[\\begin{matrix}a & b\\\\c & d\\end{matrix}\\right]$$\n\n\n\n\n```python\n#sacamos autovalores\nA.eigenvals()\n```\n\n\n```python\n#inversa\nA.inv()\n```\n\n\n\n\n$$\\left[\\begin{matrix}\\frac{d}{a d - b c} & - \\frac{b}{a d - b c}\\\\- \\frac{c}{a d - b c} & \\frac{a}{a d - b c}\\end{matrix}\\right]$$\n\n\n\n\n```python\n#elevamos al cuadrado la matriz\nA ** 2\n```\n\n\n\n\n$$\\left[\\begin{matrix}a^{2} + b c & a b + b d\\\\a c + c d & b c + d^{2}\\end{matrix}\\right]$$\n\n\n\n---\n\n_ Esto ha sido un r\u00e1pido recorrido por algunas de las posibilidades que ofrece SymPy . El c\u00e1lculo simb\u00f3lico es un terreno d\u00edficil y este joven paquete avanza a pasos agigantados gracias a un grupo de desarrolladores siempre dispuestos a mejorar y escuchar sugerencias. Sus posibilidades no acaban aqu\u00ed. En la siguiente clase presentaremos el paquete `mechanics`, pero adem\u00e1s cuenta con herramientas para geometr\u00eda, mec\u00e1nica cu\u00e1ntica, teor\u00eda de n\u00fameros, combinatoria... Puedes echar un ojo [aqu\u00ed](http://docs.sympy.org/latest/modules/index.html). _\n\nSi te ha gustado esta clase:\n\nTweet\n\n\n---\n\n####

    \u00a1S\u00edguenos en Twitter!\n\n###### Follow @Pybonacci Follow @Alex__S12 Follow @newlawrence \n\n#####
    Curso AeroPython por Juan Luis Cano Rodriguez y Alejandro S\u00e1ez Mollejo se distribuye bajo una Licencia Creative Commons Atribuci\u00f3n 4.0 Internacional.\n\n##### \n\n---\n_Las siguientes celdas contienen configuraci\u00f3n del Notebook_\n\n_Para visualizar y utlizar los enlaces a Twitter el notebook debe ejecutarse como [seguro](http://ipython.org/ipython-doc/dev/notebook/security.html)_\n\n File > Trusted Notebook\n\n\n```python\n%%html\nFollow @Pybonacci\n\n```\n\n\nFollow @Pybonacci\n\n\n\n\n```python\n# Esta celda da el estilo al notebook\nfrom IPython.core.display import HTML\ncss_file = '../styles/aeropython.css'\nHTML(open(css_file, \"r\").read())\n```\n\n\n\n\n/* This template is inspired in the one used by Lorena Barba\nin the numerical-mooc repository: https://github.com/numerical-mooc/numerical-mooc\nWe thank her work and hope you also enjoy the look of the notobooks with this style */\n\n\n\nEl estilo se ha aplicado =)\n\n\n\n\n\n\n", "meta": {"hexsha": "771a421e40a926d347f95ad8c44245729f624f50", "size": 187755, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks_completos/040-SymPy-Intro.ipynb", "max_stars_repo_name": "caorodriguez/aprendealgo-", "max_stars_repo_head_hexsha": "b8e9692214280cc0c1981bbd414b19fd0fa7dcb8", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 86, "max_stars_repo_stars_event_min_datetime": "2015-03-05T18:57:16.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-18T00:19:22.000Z", "max_issues_repo_path": "notebooks_completos/040-SymPy-Intro.ipynb", "max_issues_repo_name": "caorodriguez/aprendealgo-", "max_issues_repo_head_hexsha": "b8e9692214280cc0c1981bbd414b19fd0fa7dcb8", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 66, "max_issues_repo_issues_event_min_datetime": "2015-01-26T19:08:20.000Z", "max_issues_repo_issues_event_max_datetime": "2020-05-20T17:09:58.000Z", "max_forks_repo_path": "notebooks_completos/040-SymPy-Intro.ipynb", "max_forks_repo_name": "caorodriguez/aprendealgo-", "max_forks_repo_head_hexsha": "b8e9692214280cc0c1981bbd414b19fd0fa7dcb8", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 63, "max_forks_repo_forks_event_min_datetime": "2015-02-18T15:12:45.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-29T06:18:39.000Z", "avg_line_length": 69.5388888889, "max_line_length": 6392, "alphanum_fraction": 0.7829991212, "converted": true, "num_tokens": 5761, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.909907001151883, "lm_q2_score": 0.924141814233309, "lm_q1q2_score": 0.8408831068280908}} {"text": "```python\n%matplotlib inline\nimport numpy as np\nfrom numpy import max, abs, dot, array\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\n```\n\n### Multiplying by complex numbers is a rotation\nThis is a minor re-wording of the bread-and-butter knowledge in signal processing that (using $i=e^{j \\frac {\\pi} 2}$) multiplication of a signal with a complex exponential results in shifting the phase of the signal. 
If we view the signal as a vector in polar form in the complex plane, the multiplication by a complex exponential is equivalent to applying a rotation matrix constructed from the desired angle of the phase shift.\n\n\n```python\ncomplex_nums = array([\n -5+4j,\n -5+3j, -4+3j, -3+3j, -2+3j, -1+3j,\n -2+2j,\n])\n\ndef plot_axes(length):\n plt.plot([0, 0], [-length, length], color='k')\n plt.plot([-length, length], [0, 0], color='k')\n \ndef plot_complex(nums):\n plt.scatter(nums.real, nums.imag)\n lim = 1 + max([max(abs(nums.real)), max(abs(nums.imag))])\n plot_axes(lim)\n plt.axis([-lim, lim, -lim, lim])\n \n# multiplying by i is a rotation counter-clockwise by 90 degrees\nplt.figure()\ninds = [1, 3, 4, 2]\nfor i in range(1, 5):\n plt.subplot(2, 2, inds[i-1])\n plot_complex(complex_nums)\n complex_nums *= 1j\n```\n\n### Complex Exponentials and Rotation Matrices\nLet's bridge the gap that many face when asking signal processing folks and graphics folks about rotations. We can do this by showing how to rotate a vector $\\begin{bmatrix} 1 \\\\ 0 \\end{bmatrix}$ in $\\mathbb{C}$ by 90$^{\\circ}$ by 1) multiplication by a complex exponential and 2) multiplication by a rotation matrix.\n\n#### Complex Exponential (Signal Processing)\nBy Euler's formula, we know that $i=e^{j \\frac {\\pi} 2}$. Let's also represent our vector $\\begin{bmatrix} 1 \\\\ 0 \\end{bmatrix}$ in its rectangular form as $1 + 0j$, and then convert it to its polar form as $e^{j0}$. Multiplying these two, we get\n\n$$\n\\begin{align}\n e^{j \\frac \\pi 2}e^{j0} = e^{j \\frac \\pi 2 + 0}\n &= e^{j \\frac \\pi 2} \\\\\n &= 0 + 1j \\\\\n &= \\begin{bmatrix}0 \\\\ 1 \\end{bmatrix}\n\\end{align}\n$$\n\n#### Rotation Matrix\nThe canonical rotation matrix for a desired angle $\\theta$ for a two-dimensional space is\n$$\n\\begin{bmatrix}\ncos \\theta & -sin \\theta \\\\\nsin \\theta & cos \\theta\n\\end{bmatrix}\n$$\n\nSo let's take $\\mathbb{C}$ for our space. Since the current rotation is $\\theta = 90^{\\circ}$, the corresponding rotation matrix is\n$$\n\\begin{bmatrix}\n0 & -1 \\\\\n1 & 0\n\\end{bmatrix}\n$$\n\nApplying the rotation matrix to the original vector, we have\n$$\n\\begin{bmatrix}\n0 & -1 \\\\\n1 & 0\n\\end{bmatrix}\n\\begin{bmatrix}\n1 \\\\\n0\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n0 \\\\\n1\n\\end{bmatrix}\n$$\n\n#### The Bridge\nWe can start with the rotation matrix representation and bridge over to the phasor (complex exponential) representation. The rotation of angle $\\theta$ on an arbitrary vector $c \\in \\mathbb{C}, c = c_r + j c_i$ is\n$$\n\\begin{bmatrix}\ncos \\theta & -sin \\theta \\\\\nsin \\theta & cos \\theta\n\\end{bmatrix}\n\\begin{bmatrix}\nc_r \\\\\nc_i\n\\end{bmatrix}\n=\n\\begin{bmatrix}\nc_r cos \\theta - c_i sin \\theta \\\\\nc_r sin \\theta + c_i cos \\theta\n\\end{bmatrix}\n$$\n\nRemember that this is a representation of a vector in $\\mathbb{C}$, so $\\begin{bmatrix}x \\\\ y \\end{bmatrix} = x + j y$. Therefore,\n$$\n\\begin{align}\n \\begin{bmatrix}\n c_r cos \\theta - c_i sin \\theta \\\\\n c_r sin \\theta + c_i cos \\theta\n \\end{bmatrix}\n &= c_r cos \\theta - c_i sin \\theta + j(c_r sin \\theta + c_i cos \\theta) \\\\\n &= c_r cos \\theta + j^2 c_i sin \\theta + j c_r sin \\theta + j c_i cos \\theta \\\\\n &= (c_r + j c_i)(cos \\theta + j sin \\theta) \\\\\n &= c e^{j\\theta}\n\\end{align}\n$$\n\nSo the real bridge here was that rotation via the complex exponential was accomplished using polar coordinates, while using the rotation matrix worked in rectangular coordinates.
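\n\nTo make the equivalence concrete, here is a quick numerical check using NumPy; the particular complex number is an arbitrary choice and not taken from the text above.\n\n\n```python\nimport numpy as np\n\ntheta = np.pi / 2               # the 90-degree rotation discussed above\nc = 3 + 2j                      # an arbitrary complex number\n\n# Rotation matrix acting on the rectangular coordinates (real, imag)\nR = np.array([[np.cos(theta), -np.sin(theta)],\n              [np.sin(theta),  np.cos(theta)]])\nprint(R @ np.array([c.real, c.imag]))  # approximately [-2, 3]\n\n# Multiplication by the complex exponential e^{j theta}\nprint(c * np.exp(1j * theta))          # approximately -2 + 3j, up to floating-point error\n```\n\n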
By converting the coordinate system of one, you arrive at the other.\n\n\n```python\n\n```\n", "meta": {"hexsha": "d5a5877311cdd1926312318f5f9836671ccba752", "size": 13697, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/fields.ipynb", "max_stars_repo_name": "aagnone3/linalg", "max_stars_repo_head_hexsha": "d4313eaf81990ca67841af018e8ca2d156b9a68c", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notebooks/fields.ipynb", "max_issues_repo_name": "aagnone3/linalg", "max_issues_repo_head_hexsha": "d4313eaf81990ca67841af018e8ca2d156b9a68c", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/fields.ipynb", "max_forks_repo_name": "aagnone3/linalg", "max_forks_repo_head_hexsha": "d4313eaf81990ca67841af018e8ca2d156b9a68c", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 68.8291457286, "max_line_length": 7384, "alphanum_fraction": 0.7588523034, "converted": true, "num_tokens": 1195, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9294404018582427, "lm_q2_score": 0.9046505267461572, "lm_q1q2_score": 0.8408187491202193}} {"text": "# Extended Likelihood\n\n## Unbinned Extended Likelihood\n\nLet $x$ be a random variable distributed according to a p.d.f. $~f\\left(x\\,\\middle|\\,\\vec{\\theta}\\right)$,\n\n\\begin{equation*}\nx \\sim f\\left(x\\,\\middle|\\,\\vec{\\theta}\\right),\n\\end{equation*}\n\nwith $n$ observations, $\\vec{x} = \\left(x_1, \\cdots, x_n\\right)$, and $m$ unkown parameters, $\\vec{\\theta} = \\left(\\theta_1, \\cdots, \\theta_m\\right)$. The likelihood would normally then be\n\n$$\nL\\left(\\vec{\\theta}\\right) = \\prod_{i=1}^{n} f\\left(x_i; \\vec{\\theta}\\right).\n$$\n\nHowever, if $n$ itself is a Poisson random variable with mean $\\nu$,\n\n$$\nn \\sim \\text{Pois}\\left(n \\,\\middle|\\, \\nu\\right),\n$$\n\nthen it follows that\n\n\\begin{align}\nL\\left(\\nu; \\vec{\\theta}\\right) &= \\text{Pois}\\left(n; \\nu\\right) \\prod_{i=1}^{n} f\\left(x_i; \\vec{\\theta}\\right) \\notag\\\\\n &= \\frac{\\nu^{n}\\,e^{-\\nu}}{n!} \\prod_{i=1}^{n} f\\left(x_i; \\vec{\\theta}\\right) \\notag\\\\\n &= \\frac{e^{-\\nu}}{n!} \\prod_{i=1}^{n} \\nu\\, f\\left(x_i; \\vec{\\theta}\\right).\n%\\label{eq_extended-likelihood}\n\\end{align}\n\nThis equation is known as the \"extended likelihood function\", as we have \"extended\" the information encoded in the likelihood to include the expected number of events — a quantity of great importance to physicists. It can be see from inspection though that the extended likelihood still follows the form of a likelihood, so no different treatment is required in finding its MLE estimators.\n\n### $\\nu$ is dependent on $\\vec{\\theta}$\n\nIn the instance that $\\nu$ is a function of $\\vec{\\theta}$, $\\nu = \\nu\\left(\\vec{\\theta}\\right)$, then\n\n$$\nL\\left(\\vec{\\theta}\\right) = \\frac{e^{-\\nu\\left(\\vec{\\theta}\\right)}}{n!} \\prod_{i=1}^{n} \\nu\\left(\\vec{\\theta}\\right)\\, f\\left(x_i; \\vec{\\theta}\\right),\n$$\n\nsuch that\n\n$$\n\\ln L\\left(\\vec{\\theta}\\right) = - \\nu\\left(\\vec{\\theta}\\right) - \\ln n! 
+ \\sum_{i=1}^{n} \\ln\\left(\\nu\\left(\\vec{\\theta}\\right)\\, f\\left(x_i; \\vec{\\theta}\\right)\\right),\n$$\n\nwhere $n$ is a constant of the data, and so will have no effect on finding the estimators of any parameters, leading it to be safely ignored. Thus,\n\n\\begin{equation}\n\\boxed{-\\ln L\\left(\\vec{\\theta}\\right) = \\nu\\left(\\vec{\\theta}\\right) -\\sum_{i=1}^{n} \\ln\\left(\\nu\\left(\\vec{\\theta}\\right)\\, f\\left(x_i; \\vec{\\theta}\\right)\\right)}\\,.\n\\end{equation}\n\nNote that as the resultant estimators, $\\hat{\\vec{\\theta}}$, exploit information from both $n$ and $x$ this should generally lead to smaller variations for $\\hat{\\vec{\\theta}}$.\n\n### $\\nu$ is independent of $\\vec{\\theta}$\n\nIn the instance that $\\nu$ is independent of $\\vec{\\theta}$,\n$$\nL\\left(\\nu; \\vec{\\theta}\\right) = \\frac{e^{-\\nu}}{n!} \\prod_{i=1}^{n} \\nu\\, f\\left(x_i; \\vec{\\theta}\\right),\n$$\nthen\n\n\\begin{split}\n\\ln L\\left(\\nu; \\vec{\\theta}\\right) &= - \\nu - \\ln n! + \\sum_{i=1}^{n} \\ln\\left(\\nu\\, f\\left(x_i; \\vec{\\theta}\\right)\\right)\\\\\n &= - \\nu + \\sum_{i=1}^{n} \\left(\\ln\\nu + \\ln f\\left(x_i; \\vec{\\theta}\\right)\\right) - \\ln n! \\\\\n &= - \\nu + n \\ln\\nu + \\sum_{i=1}^{n} \\ln f\\left(x_i; \\vec{\\theta}\\right) - \\ln n!\\,,\n\\end{split}\nsuch that\n\\begin{equation}\n\\boxed{-\\ln L\\left(\\nu; \\vec{\\theta}\\right) = \\nu - n \\ln\\nu - \\sum_{i=1}^{n} \\ln f\\left(x_i; \\vec{\\theta}\\right)}\\,.\n\\end{equation}\n\nAs $L$ is maximized with respect to a variable $\\alpha$ when $\u2212\\ln \u2061L$ is minimized,\n\n$$\n\\frac{\\partial \\left(-\\ln L\\right)}{\\partial \\alpha} = 0,\n$$\n\nthen it is seen from\n\n$$\n\\frac{\\partial \\left(-\\ln L\\left(\\nu; \\vec{\\theta}\\right)\\right)}{\\partial \\nu} = 1 - \\frac{n}{\\nu} = 0,\n$$\n\nthat the maximum likelihood estimator for $\\nu$ is\n\\begin{equation}\n\\hat{\\nu} = n\\,,\n\\end{equation}\n\nand that\n$$\n\\frac{\\partial \\left(-\\ln L\\left(\\nu; \\vec{\\theta}\\right)\\right)}{\\partial \\theta_j} = 0,\n$$\nresults in the the same estimators $\\hat{\\vec{\\theta}}$ as in the \"usual\" maximum likelihood case.\n\nIf the p.d.f. is of the form of a mixture model,\n$$\nf\\left(x; \\vec{\\theta}\\right) = \\sum_{i=1}^{m} \\theta_i\\, f_i\\left(x\\right),\n$$\nand an estimate of the weights is of interest, then as the parameters are not fully independent, given the constraint\n$$\n\\sum_{i=1}^{m} \\theta_i = 1,\n$$\nthen one of the $m$ parameters can be replaced with\n$$\n1 - \\sum_{i=1}^{m-1} \\theta_i,\n$$\nso that the p.d.f. only constrains $m-1$ parameters. 
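\n\nAs a small illustration of replacing one weight by one minus the others, the sketch below fits the single free weight of a two-component mixture by minimising its negative log-likelihood directly. The Gaussian component shapes, the simulated data, and the use of `scipy.optimize.minimize` are illustrative choices rather than anything prescribed by the text.\n\n\n```python\nimport numpy as np\nfrom scipy.optimize import minimize\nfrom scipy.stats import norm\n\n# Two fixed component densities; only the first weight theta is free,\n# the second weight is constrained to be (1 - theta).\nf1 = norm(loc=0.0, scale=1.0).pdf\nf2 = norm(loc=3.0, scale=1.0).pdf\n\nrng = np.random.default_rng(0)\nx = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(3.0, 1.0, 700)])\n\ndef nll(params):\n    theta = params[0]\n    return -np.sum(np.log(theta * f1(x) + (1.0 - theta) * f2(x)))\n\nfit = minimize(nll, x0=[0.5], bounds=[(1e-6, 1.0 - 1e-6)])\nprint(fit.x)  # estimated weight of the first component (close to 0.3 for this sample)\n```\n\n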
This then allows the the likelihood to be constructed that allows to find the estimator for the unconstrained parameter.\n\nEquivalently, the extended likelihood function can be used, as\n\n\\begin{split}\n\\ln L\\left(\\nu; \\vec{\\theta}\\right) &= - \\nu + n \\ln\\nu + \\sum_{i=1}^{n} \\ln f\\left(x_i; \\vec{\\theta}\\right) \\\\\n &= - \\nu + \\sum_{i=1}^{n} \\ln \\left(\\nu\\,f\\left(x_i; \\vec{\\theta}\\right)\\right)\\\\\n &= - \\nu + \\sum_{i=1}^{n} \\ln \\left(\\sum_{j=1}^{m} \\nu\\,\\theta_j\\, f_j\\left(x_i\\right)\\right).\n\\end{split}\n\nLetting $\\mu_i$, the expected number of events of type $i$, be $\\mu_i \\equiv \\theta_i \\nu$, for $\\vec{\\mu} = \\left(\\mu_1, \\cdots, \\mu_m\\right)$, then\n\n$$\n\\ln L\\left(\\vec{\\mu}\\right) = - \\sum_{j=1}^{m} \\mu_j + \\sum_{i=1}^{n} \\ln \\left(\\sum_{j=1}^{m} \\mu_j\\, f_j\\left(x_i\\right)\\right).\n$$\n\nHere, $\\vec{\\mu}$ are unconstrained and all parameters are treated symetrically, such that $\\hat{\\mu_i}$ give the maximum likelihood estimator means of number of events of type $i$.\n\n#### [Toy Example](http://www.physi.uni-heidelberg.de/~menzemer/Stat0708/statistik_vorlesung_7.pdf#page=10)\n\n\n```python\nimport numpy as np\nfrom scipy.optimize import minimize\n```\n\n\n```python\ndef NLL(x, n, S, B):\n nll = sum([(x[0] * S[meas] + B[meas]) - (n[meas] * np.log(x[0] * S[meas] + B[meas]))\n for meas in np.arange(0, len(n))])\n return nll\n```\n\n\n```python\nn_observed = [6, 24]\nf = np.array([1.])\nS = [0.9, 4.0]\nB = [0.2, 24.]\n\nmodel = minimize(NLL, f, args=(n_observed, S, B), method='L-BFGS-B', bounds=[(0, 10)])\nprint(\"The MLE estimate for f: {f}\".format(f=model.x[0]))\n```\n\n The MLE estimate for f: 2.6155463202001847\n\n\n#### HistFactory Example\n\nConsider a single channel with one signal and one bakcagound contribution (and no systematics). For $n$ events, signal model $f_{S}(x_e)$, background model $f_{B}(x_e)$, $S$ expected signal events, $B$ expected backagound events, and signal fraciton $\\mu$, a \"marked Poisson model\" [2] may be constructed, which treating the data as fixed results in the likelihood of\n\n\\begin{split}\nL\\left(\\mu\\right) &= \\text{Pois}\\left(n \\,\\middle|\\, \\mu S + B\\right) \\prod_{e=1}^{n} \\frac{\\mu S\\, f_{S}\\left(x_e\\right) + B\\, f_{B}\\left(x_e\\right)}{\\mu S + B}\\\\\n &= \\frac{\\left(\\mu S + B\\right)^{n} e^{-\\left(\\mu S + B\\right)}}{n!} \\prod_{e=1}^{n} \\frac{\\mu S\\, f_{S}\\left(x_e\\right) + B\\, f_{B}\\left(x_e\\right)}{\\mu S + B}\\\\\n &= \\frac{e^{-\\left(\\mu S + B\\right)}}{n!} \\prod_{e=1}^{n} \\left(\\,\\mu S\\, f_{S}\\left(x_e\\right) + B\\, f_{B}\\left(x_e\\right)\\right),\n\\end{split}\n\nand so\n\n$$\n-\\ln L\\left(\\mu\\right) = \\left(\\mu S + B\\right) - \\sum_{e=1}^{n} \\ln \\left(\\,\\mu S\\, f_{S}\\left(x_e\\right) + B\\, f_{B}\\left(x_e\\right)\\right) + \\underbrace{\\ln n!}_{\\text{constant}}.\n$$\n\n## Binned Extended Likelihood\n\n## References and Acknowledgements\n1. [Statistical Data Analysis](http://www.pp.rhul.ac.uk/~cowan/sda/), Glen Cowan, 1998\n2. ROOT collaboration, K. Cranmer, G. Lewis, L. Moneta, A. Shibata and W. Verkerke, [_HistFactory: A tool for creating statistical models for use with RooFit and RooStats_](http://inspirehep.net/record/1236448), 2012.\n3. 
[Vince Croft](https://www.nikhef.nl/~vcroft/), Discussions with the author at CERN, July 2017\n", "meta": {"hexsha": "353183de53b4f2211c6364ba86d122d429badd58", "size": 12847, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Notebooks/Likelihood_Methods/Extended-Likelihood.ipynb", "max_stars_repo_name": "fizisist/Statistics-Notes", "max_stars_repo_head_hexsha": "9399bca77abc36ee342f8af2fadddffd79390bed", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Notebooks/Likelihood_Methods/Extended-Likelihood.ipynb", "max_issues_repo_name": "fizisist/Statistics-Notes", "max_issues_repo_head_hexsha": "9399bca77abc36ee342f8af2fadddffd79390bed", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Notebooks/Likelihood_Methods/Extended-Likelihood.ipynb", "max_forks_repo_name": "fizisist/Statistics-Notes", "max_forks_repo_head_hexsha": "9399bca77abc36ee342f8af2fadddffd79390bed", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 29.7384259259, "max_line_length": 405, "alphanum_fraction": 0.5075114813, "converted": true, "num_tokens": 2693, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9637799441350252, "lm_q2_score": 0.8723473846343393, "lm_q1q2_score": 0.8407509136292188}} {"text": "# Numerical integrals and error\n\nWhat about when we cannot integrate a function analytically? In other words, when there is no (obvious) closed-form solution. 
In these cases, we can use **numerical methods** to solve the problem.\n\nLet's use this problem:\n\\begin{align}\n\\frac{dy}{dx} &= e^{-x^2} \\\\\ny(x) &= \\int e^{-x^2} dx + C\n\\end{align}\n\n(You may recognize this as leading to the error function, $\\text{erf}$:\n$\\frac{1}{2} \\sqrt{\\pi} \\text{erf}(x) + C$,\nso the exact solution to the integral over the range $[0,1]$ is 0.7468.)\n\n\n```matlab\nx = linspace(0, 1);\nf = @(x) exp(-x.^2);\nplot(x, f(x))\naxis([0 1 0 1])\n```\n\n## Numerical integration: Trapezoidal rule\n\nIn such cases, we can find the integral by using the **trapezoidal rule**, which finds the area under the curve by creating trapezoids and summing their areas:\n\\begin{equation}\n\\text{area under curve} = \\sum \\left( \\frac{f(x_{i+1}) + f(x_i)}{2} \\right) \\Delta x\n\\end{equation}\n\nLet's see what this looks like with four trapezoids ($\\Delta x = 0.25$):\n\n\n```matlab\nhold off\nx = linspace(0, 1);\nplot(x, f(x)); hold on\naxis([0 1 0 1])\n\nx = 0 : 0.25 : 1;\n\n% plot the trapezoids\nfor i = 1 : length(x)-1\n xline = [x(i), x(i)];\n yline = [0, f(x(i))];\n line(xline, yline, 'Color','red','LineStyle','--')\n xline = [x(i+1), x(i+1)];\n yline = [0, f(x(i+1))];\n line(xline, yline, 'Color','red','LineStyle','--')\n xline = [x(i), x(i+1)];\n yline = [f(x(i)), f(x(i+1))];\n line(xline, yline, 'Color','red','LineStyle','--')\nend\nhold off\n```\n\nNow, let's integrate using the trapezoid formula given above:\n\n\n```matlab\ndx = 0.1;\nx = 0.0 : dx : 1.0;\n\narea = 0.0;\nfor i = 1 : length(x)-1\n area = area + (dx/2)*(f(x(i)) + f(x(i+1)));\nend\n\nfprintf('Numerical integral: %f\\n', area)\nexact = 0.5*sqrt(pi)*erf(1);\nfprintf('Exact integral: %f\\n', exact)\nfprintf('Error: %f %%\\n', 100.*abs(exact-area)/exact)\n```\n\n Numerical integral: 0.746211\n Exact integral: 0.746824\n Error: 0.082126 %\n\n\nWe can see that using the trapezoidal rule, a numerical integration method, with an internal size of $\\Delta x = 0.1$ leads to an approximation of the exact integral with an error of 0.08%.\n\nYou can make the trapezoidal rule more accurate by:\n\n- using more segments (that is, a smaller value of $\\Delta x$, or\n- using higher-order polynomials (such as with Simpson's rules) over the simpler trapezoids.\n\nFirst, how does reducing the segment size (step size) by a factor of 10 affect the error?\n\n\n```matlab\ndx = 0.01;\nx = 0.0 : dx : 1.0;\n\narea = 0.0;\nfor i = 1 : length(x)-1\n area = area + (dx/2)*(f(x(i)) + f(x(i+1)));\nend\n\nfprintf('Numerical integral: %f\\n', area)\nexact = 0.5*sqrt(pi)*erf(1);\nfprintf('Exact integral: %f\\n', exact)\nfprintf('Error: %f %%\\n', 100.*abs(exact-area)/exact)\n```\n\n Numerical integral: 0.746818\n Exact integral: 0.746824\n Error: 0.000821 %\n\n\nSo, reducing our step size by a factor of 10 (using 100 segments instead of 10) reduced our error by a factor of 100!\n\n## Numerical integration: Simpson's rule\n\nWe can increase the accuracy of our numerical integration approach by using a more sophisticated interpolation scheme with each segment. In other words, instead of using a straight line, we can use a polynomial. 
**Simpson's rule**, also known as Simpson's 1/3 rule, refers to using a quadratic polynomial to approximate the line in each segment.\n\nSimpson's rule defines the definite integral for our function $f(x)$ from point $a$ to point $b$ as\n\\begin{equation}\n\\int_a^b f(x) \\approx \\frac{1}{6} \\Delta x \\left( f(a) + 4 f \\left(\\frac{a+b}{2}\\right) + f(b) \\right)\n\\end{equation}\nwhere $\\Delta x = b - a$.\n\nThat equation comes from interpolating between points $a$ and $b$ with a third-degree polynomial, then integrating by parts.\n\n\n```matlab\nhold off\nx = linspace(0, 1);\nplot(x, f(x)); hold on\naxis([-0.1 1.1 0.2 1.1])\n\nplot([0 1], [f(0) f(1)], 'Color','black','LineStyle',':');\n\n% quadratic polynomial\na = 0; b = 1; m = (b-a)/2;\np = @(z) (f(a).*(z-m).*(z-b)/((a-m)*(a-b))+f(m).*(z-a).*(z-b)/((m-a)*(m-b))+f(b).*(z-a).*(z-m)/((b-a)*(b-m)));\nplot(x, p(x), 'Color','red','LineStyle','--');\n\nxp = [0 0.5 1];\nyp = [f(0) f(m) f(1)];\nplot(xp, yp, 'ok')\nhold off\nlegend('exact', 'trapezoid fit', 'polynomial fit', 'points used')\n```\n\nWe can see that the polynomial fit, used by Simpson's rule, does a better job of of approximating the exact function, and as a result Simpson's rule will be more accurate than the trapezoidal rule.\n\nNext let's apply Simpson's rule to perform the same integration as above:\n\n\n```matlab\ndx = 0.1;\nx = 0.0 : dx : 1.0;\n\narea = 0.0;\nfor i = 1 : length(x)-1\n area = area + (dx/6.)*(f(x(i)) + 4*f((x(i)+x(i+1))/2.) + f(x(i+1)));\nend\n\nfprintf('Simpson rule integral: %f\\n', area)\nexact = 0.5*sqrt(pi)*erf(1);\nfprintf('Exact integral: %f\\n', exact)\nfprintf('Error: %f %%\\n', 100.*abs(exact-area)/exact)\n```\n\n Simpson rule integral: 0.746824\n Exact integral: 0.746824\n Error: 0.000007 %\n\n\nSimpson's rule is about three orders of magnitude (~1000x) more accurate than the trapezoidal rule.\n\nIn this case, using a more-accurate method allows us to significantly reduce the error while still using the same number of segments/steps.\n\n## Error\n\nApplying the trapezoidal rule and Simpson's rule introduces the concept of **error** in numerical solutions.\n\nIn our work so far, we have come across two obvious kinds of error, that we'll come back to later:\n\n- **local truncation error**, which represents how \"wrong\" each interval/step is compared with the exact solution; and\n- **global truncation error**, which is the sum of the truncation errors over the entire method.\n\nIn any numerical solution, there are five main sources of error:\n\n1. Error in input data: this comes from measurements, and can be *systematic* (for example, due to uncertainty in measurement devices) or *random*.\n\n2. Rounding errors: loss of significant digits. This comes from the fact that computers cannot represent real numbers exactly, and instead use a floating-point representation.\n\n3. Truncation error: due to an infinite process being broken off. For example, an infinite series or sum ending after a finite number of terms, or discretization error by using a finite step size to approximate a continuous function.\n\n4. Error due to simplifications in a mathematical model: *\"All models are wrong, but some are useful\"* (George E.P. Box) All models make some idealizations, or simplifying assumptions, which introduce some error with respect to reality. For example, we may assume gases are continuous, that a spring has zero mass, or that a process is frictionless.\n\n5. Human error and machine error: there are many potential sources of error in any code. 
These can come from typos, human programming errors, errors in input data, or (more rarely) a pure machine error. Even textbooks, tables, and formulas may have errors.\n\n### Absolute and relative error\n\nWe can also differentiate between **absolute** and **relative** error in a quantity. If $y$ is an exact value and $\\tilde{y}$ is an approximation to that value, then we have\n\n- absolute error: $\\Delta y = | \\tilde{y} - y |$\n- relative error: $\\frac{\\Delta y}{y} = \\left| \\frac{\\tilde{y} - y}{y} \\right|$\n\nIf $y$ is a vector, then we can define error using the maximum of the elements:\n\\begin{equation}\n\\max_i \\frac{ |y_i - \\tilde{y}_{i} |}{|y_i|}\n\\end{equation}\n", "meta": {"hexsha": "e4d1ca517b8d67336b98acdc3a281b348e94ee03", "size": 67797, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "docs/numerical-solution-error.ipynb", "max_stars_repo_name": "kyleniemeyer/ME373", "max_stars_repo_head_hexsha": "db7e78ac21d7a2cc5bd9fc49cdc3614f2f0fe00e", "max_stars_repo_licenses": ["CC-BY-4.0", "MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-11-03T18:09:05.000Z", "max_stars_repo_stars_event_max_datetime": "2019-11-03T18:09:05.000Z", "max_issues_repo_path": "docs/numerical-solution-error.ipynb", "max_issues_repo_name": "kyleniemeyer/ME373", "max_issues_repo_head_hexsha": "db7e78ac21d7a2cc5bd9fc49cdc3614f2f0fe00e", "max_issues_repo_licenses": ["CC-BY-4.0", "MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/numerical-solution-error.ipynb", "max_forks_repo_name": "kyleniemeyer/ME373", "max_forks_repo_head_hexsha": "db7e78ac21d7a2cc5bd9fc49cdc3614f2f0fe00e", "max_forks_repo_licenses": ["CC-BY-4.0", "MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 194.2607449857, "max_line_length": 22120, "alphanum_fraction": 0.8961310972, "converted": true, "num_tokens": 2250, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9252299570920386, "lm_q2_score": 0.9086179000259897, "lm_q1q2_score": 0.8406805006541047}} {"text": "# Supplementary Practice Problems\n\nThese are similar to programming problems you may encounter in the mid-terms. They are not graded but we will review them in lab sessions.\n\n**1**. (10 points) The logistic map is defined by the following simple function\n\n$$\nf(x) = rx(1-x)\n$$\n\nFor $x_0 = 0.1$ and $r = 4.0$, store the first 10 values of the iterated logistic map $x_{i+1} = rx_i(1-x_i)$ in a list. The first value in the list should be $x_0$.\n\n\n```python\nres = []\nx = 0.1\nr = 4.0\nfor i in range(10):\n res.append(x)\n x = r*x*(1-x)\nprint(res)\n```\n\n [0.1, 0.36000000000000004, 0.9216, 0.28901376000000006, 0.8219392261226498, 0.5854205387341974, 0.970813326249438, 0.11333924730376121, 0.4019738492975123, 0.9615634951138128]\n\n\n**2**. (10 points) Write a function to find the greatest common divisor (GCD) of 2 numbers using Euclid's algorithm.:\n\n\\begin{align}\n\\gcd(a,0) &= a \\\\\n\\gcd(a, b) &= \\gcd(b, a \\mod b)\n\\end{align}\n\nFind the GCD of 5797 and 190978. 
\n\nNow write a function to find the GCD given a collection of numbers.\n\nFind the GCD of (24, 48, 60, 120, 8).\n\n\n```python\ndef gcd(a,b):\n if b==0: return a\n else: return gcd(b,a%b)\ngcd(5797,190978)\n```\n\n\n\n\n 17\n\n\n\n\n```python\ninput = [24, 48, 60, 120, 8]\ndef gcd_collection(input):\n prev = gcd(input[0],input[1])\n for i in range(2,len(input)):\n curr = gcd(prev,input[i])\n prev = curr\n return curr\ngcd_collection(input)\n```\n\n\n\n\n 4\n\n\n\n**3**. (10 points) Find the least squares linear solution to the following data\n\n```\ny = [1,2,3,4]\nx1 = [1,2,3,4]\nx2 = [2,3,4,5]\n```\n\nThat is, find the \"best\" intercept and slope for the variables `x1` and `x2`.\n\n\n```python\nimport numpy as np\nimport scipy.linalg as la\ny = np.array([1,2,3,4])\nx1 = np.array([1,2,3,4]).reshape(4,1)\nx2 = np.array([2,3,4,5]).reshape(4,1)\nx0 = np.ones(4).reshape(4,1)\nX = np.c_[x0,x1,x2]\nla.lstsq(X,y)\n```\n\n\n\n\n (array([-0.33333333, 0.66666667, 0.33333333]),\n array([], dtype=float64),\n 2,\n array([9.34413269, 0.82896583, 0. ]))\n\n\n\n**4**. (10 points) Read the `mtcars` data frame from R to a `pandas` DataFrame. Find the mean `wt` and `mpg` for all cars grouped by the number of `gear`s.\n\n\n```python\nfrom rpy2.robjects import r, pandas2ri\npandas2ri.activate()\nr.data('mtcars')\nmtcars = r['mtcars']\nmtcars.groupby('gear').mean()[['wt','mpg']]\n```\n\n\n\n\n
              wt        mpg\ngear                        \n3.0     3.892600  16.106667\n4.0     2.616667  24.533333\n5.0     2.632600  21.380000\n
    \n\n\n\n**5**. (10 points) Read the `iris` data frame from R to a `pandas` DataFrame. Make a `seaborn` plot showing a linear regression of `Petal.Length` (y) against `Sepal.Length` (x). Make a separate regression line for each `Species`.\n\n\n```python\nimport seaborn as sns\nfrom rpy2.robjects import r, pandas2ri\npandas2ri.activate()\nr.data('iris')\niris = r['iris']\ng = sns.lmplot(x=\"Sepal.Length\", y=\"Petal.Length\", hue = 'Species', data = iris)\n```\n\n**6**. (10 points) Write a function that can flatten a nested list of arbitrary depth. Check that\n\n```python\nflatten([1,[2,3],[4,[5,[6,7],8],9],10,[11,12]])\n```\n\nreturns\n\n```python\n[1,2,3,4,5,6,7,8,9,10,11,12]\n```\n\nFor simplicity, assume that the only data structure you will encounter is a list. You can check if an item is a list by using \n\n```python\nisinstance(item, list)\n```\n\n\n```python\ndef flatten(l):\n if not isinstance(l,list):\n print('Not a list')\n return None\n else:\n \nflatten([1,[2,3],[4,[5,[6,7],8],9],10,[11,12]])\n```\n\n\n\n\n [1, [2, 3], [4, [5, [6, 7], 8], 9], 10, [11, 12]]\n\n\n\n\n```python\ndef flatten(l):\n res = []\n for element in l:\n if isinstance(element, list):\n for item in flatten(element):\n res.append(item)\n else:\n res.append(element)\n return res\nflatten([1,[2,3],[4,[5,[6,7],8],9],10,[11,12]])\n```\n\n\n\n\n [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]\n\n\n\n**7**. (10 points) Create the following table\n\n```python\narray([[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],\n [ 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0],\n [ 1, 2, 1, 0, 0, 0, 0, 0, 0, 0, 0],\n [ 1, 3, 3, 1, 0, 0, 0, 0, 0, 0, 0],\n [ 1, 4, 6, 4, 1, 0, 0, 0, 0, 0, 0],\n [ 1, 5, 10, 10, 5, 1, 0, 0, 0, 0, 0],\n [ 1, 6, 15, 20, 15, 6, 1, 0, 0, 0, 0],\n [ 1, 7, 21, 35, 35, 21, 7, 1, 0, 0, 0],\n [ 1, 8, 28, 56, 70, 56, 28, 8, 1, 0, 0],\n [ 1, 9, 36, 84, 126, 126, 84, 36, 9, 1, 0],\n [ 1, 10, 45, 120, 210, 252, 210, 120, 45, 10, 1]])\n```\n\nStart with the first row\n\n```\n[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n```\n\nand build the subsequent rows using a simple rule that only depends on the previous row.\n\n\n```python\nmatrix = np.fromfunction(lambda i,j: np.where((i==j) | (j==0), 1, 0), (11,11), dtype='int')\nfor i in range(2,11):\n for j in range(1,11):\n matrix[i,j] = matrix[i-1,j]+matrix[i-1,j-1]\nmatrix\n```\n\n\n\n\n array([[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],\n [ 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0],\n [ 1, 2, 1, 0, 0, 0, 0, 0, 0, 0, 0],\n [ 1, 3, 3, 1, 0, 0, 0, 0, 0, 0, 0],\n [ 1, 4, 6, 4, 1, 0, 0, 0, 0, 0, 0],\n [ 1, 5, 10, 10, 5, 1, 0, 0, 0, 0, 0],\n [ 1, 6, 15, 20, 15, 6, 1, 0, 0, 0, 0],\n [ 1, 7, 21, 35, 35, 21, 7, 1, 0, 0, 0],\n [ 1, 8, 28, 56, 70, 56, 28, 8, 1, 0, 0],\n [ 1, 9, 36, 84, 126, 126, 84, 36, 9, 1, 0],\n [ 1, 10, 45, 120, 210, 252, 210, 120, 45, 10, 1]])\n\n\n\n**8**. (10 points) Read the following data sets into DataFrames. \n\n- url1 = \"https://raw.github.com/vincentarelbundock/Rdatasets/master/csv/DAAG/hills.csv\"\n- url2 = \"https://raw.github.com/vincentarelbundock/Rdatasets/master/csv/DAAG/hills2000.csv\"\n\nCreate a new DataFraem only containing the names present in both DataFrames. Drop the `timef` column and have a single column for `dist` , `climb` and `time` that shows the average value of the two DataFrames. 
The final DataFrame will thus have 4 columns (name, dist, climb, time).\n\n\n```python\nimport pandas as pd\nurl1 = \"https://raw.github.com/vincentarelbundock/Rdatasets/master/csv/DAAG/hills.csv\"\nurl2 = \"https://raw.github.com/vincentarelbundock/Rdatasets/master/csv/DAAG/hills2000.csv\"\ndata1 = pd.read_csv(url1)\ndata2 = pd.read_csv(url2)\ndata = pd.merge(data1,data2,on='Unnamed: 0')\ndata.drop('timef',axis=1)\n```\n\n\n\n\n
    Unnamed: 0        dist_x  climb_x  time_x   dist_y  climb_y  time_y\n0   Craig Dunain      6.0     900      33.650   6.0     900      0.546111\n1   Ben Lomond        8.0     3070     62.267   9.0     3192     1.037778\n2   Goatfell          8.0     2866     73.217   8.0     2866     1.227778\n3   Scolty            5.0     800      29.750   5.0     800      0.495833\n4   Traprain          6.0     650      39.750   6.5     650      0.623889\n5   Dollar            5.0     2000     43.050   6.0     2000     0.638333\n6   Lomonds           9.5     2200     65.000   9.0     2200     1.053056\n7   Black Hill        4.5     1000     17.417   4.0     600      0.447778\n8   Meall Ant-Suidhe  3.5     1500     27.900   3.5     1500     0.465000\n9   Creag Dubh        4.0     2000     26.217   3.0     1223     0.463889\n10  Criffel           6.5     1750     50.500   7.0     1800     0.793333\n11  Ben Nevis         10.0    4400     85.583   10.0    4400     1.426111\n12  Knockfarrel       6.0     600      32.383   5.0     1200     0.623333\n13  Two Breweries     18.0    5200     170.250  18.0    4900     2.565833\n
    \n\n\n", "meta": {"hexsha": "d14601298a36bafa77c06637d3106f2b2933e8c1", "size": 59179, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "labs/Lab05S-Zhechang Yang.ipynb", "max_stars_repo_name": "ZhechangYang/STA663", "max_stars_repo_head_hexsha": "0dcf48e3e7a2d1f698b15e84946e44344b8153f5", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "labs/Lab05S-Zhechang Yang.ipynb", "max_issues_repo_name": "ZhechangYang/STA663", "max_issues_repo_head_hexsha": "0dcf48e3e7a2d1f698b15e84946e44344b8153f5", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "labs/Lab05S-Zhechang Yang.ipynb", "max_forks_repo_name": "ZhechangYang/STA663", "max_forks_repo_head_hexsha": "0dcf48e3e7a2d1f698b15e84946e44344b8153f5", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 88.9909774436, "max_line_length": 39604, "alphanum_fraction": 0.7755960729, "converted": true, "num_tokens": 4379, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8887588052782737, "lm_q2_score": 0.9458012732322215, "lm_q1q2_score": 0.8405892096285392}} {"text": "```python\nfrom sympy import *\nimport mpmath\ninit_printing()\n\n'''\nr_GEO = 36000 + 6371 KM\nr_LEO = 2000 + 6371 KM\n\nG = 6.674e-11\nMe = 5.972e24\n'''\n\nM, E = symbols(\"M E\", Functions = True)\ne_c, a, G, M_e, r, mu = symbols(\"e_c a G M_e r mu\", Contstants = True)\nT_circular, T_elliptical, T_GEO, T_GTO, T_LEO, r_LEO, r_GEO, T_tot = symbols(\"T_circular T_elliptical T_GEO T_GTO T_LEO r_LEO r_GEO T_tot\", Constants = True)\nt, x, y, Y = symbols(\"t x y Y\", Variables = True)\n\nmu_calculated = (6.674e-11 * 5.972e24)\n```\n\nThe orbital period of a circular Orbit:\n\n\n```python\nEq(T_circular, 2*pi*sqrt(r**3 / mu))\n```\n\nWhere mu is: \n\n\n```python\nEq(mu, G*M_e)\n```\n\nThen, the GEO's orbital period in hours is:\n\n\n```python\nr_GEO_Calculated = (36000 + 6371)*1000\nT_GEO_Calculated = 2*pi*sqrt(r_GEO_Calculated**3 / mu_calculated)\nEq(T_GEO, T_GEO_Calculated.evalf()/60/60)\n```\n\nAnd the LEO's orbital period in hours is:\n\n\n```python\nr_LEO_Calculated = (2000 + 6371)*1000\nT_LEO_Calculated = 2*pi*sqrt(r_LEO_Calculated**3 / mu_calculated)\nEq(T_LEO, T_LEO_Calculated.evalf()/60/60)\n```\n\n_____________________________________________\n# Finding the GTO.\nThe goal is to get both 'e' (the eccentricity of our GTO) and 'a' (its semi-major axis). 
So, we need two equations.\n\nThe equation of the GEO (a circle):\n\n\n```python\ngeo = Eq(y**2, r**2-x**2)\ngeo\n```\n\nThe equation of the GTO (an ellipse):\n\n\n```python\ngto = Eq(((x+a*e_c)**2/a**2)+(y**2/a**2*(1-e_c**2)), 1)\ngto\n```\n\nWe want to solve these two equations to get the semi-major axis of our GTO and the radius of our LEO.\n\nFirst, substitute the GEO's equation into the GTO's equation.\n\n\n```python\ntoSolve = gto.subs({y**2:geo.rhs})\ntoSolve\n```\n\nNow we can solve for x.\n\n\n```python\nsolX = solveset(toSolve, x)\nsolX\n```\n\nNow we can calculate the y coordinate for each x.\n\n\n```python\nsolY1 = solveset(Eq(geo.lhs, geo.rhs.subs({x:list(solX)[0]})), y)\nsolY1\n```\n\n\n```python\nsolY2 = solveset(Eq(geo.lhs, geo.rhs.subs({x:list(solX)[1]})), y)\nsolY2\n```\n\nIn general there are 4 possible intersection points between a circle and an ellipse, but the intersection between the GEO and the GTO occurs at only one point, with an x coordinate of '-r_GEO' (where r_GEO is the radius of the GEO). \n\nNow, we can get the first equation.\n\n\n```python\ngeoAndGtoIntersection = solveset(Eq(list(solX)[0], -r_GEO).subs({r:r_GEO}), a)\ngeoAndGtoIntersection\n```\n\nSurprisingly, there are 2 possible values for a, but we're not interested in the negative value. So our first equation is:\n\n\n```python\neqn1 = Eq(a, list(list(geoAndGtoIntersection.args)[2])[1])\neqn1\n```\n\nTo get another equation, we can do the same, but this time with the LEO.\n\nThe intersection between our LEO and GTO is exactly at the x coordinate of 'r_LEO'.\n\n\n```python\ngtoAndLeoIntersection = solveset(Eq(list(solX)[1], r).subs({r:r_LEO}), a)\ngtoAndLeoIntersection\n```\n\nAgain, there are 2 possible values for 'a', but we need the positive one.\n\n\n```python\neqn2 = Eq(a, list(list(gtoAndLeoIntersection.args)[2])[0])\neqn2\n```\n\nThis is the positive one because 0 < e_c < 1.\n\nNow, we have 2 equations and 2 unknowns; a quick sanity check of what they imply is sketched below. 
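Both intersection points lie on the x-axis (y = 0), so these two conditions are just the apogee and perigee conditions of the transfer ellipse: r_GEO = a(1 + e_c) and r_LEO = a(1 - e_c). As a quick numerical sanity check (an aside, not part of the original derivation; the names a_check and e_c_check are introduced only here), we can evaluate what they imply, reusing the radii r_GEO_Calculated and r_LEO_Calculated computed earlier:\n\n\n```python\n# Aside (not from the original notebook): closed-form check of the two apse conditions\n#   r_GEO = a*(1 + e_c)   (apogee touches the GEO)\n#   r_LEO = a*(1 - e_c)   (perigee touches the LEO)\na_check = (r_GEO_Calculated + r_LEO_Calculated) / 2\ne_c_check = (r_GEO_Calculated - r_LEO_Calculated) / (r_GEO_Calculated + r_LEO_Calculated)\na_check, e_c_check\n```\n\nThese values should agree with the symbolic results obtained next.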
And we're ready to get 'a' and 'e_c'\n\n\n```python\ne_c_Exp = Eq(e_c, solveset(eqn1.subs({a:eqn2.rhs, r_GEO:r_GEO_Calculated, r_LEO:r_LEO_Calculated}), e_c).args[0])\ne_c_Calculated = e_c_Exp.rhs\ne_c_Exp\n```\n\n\n```python\ns = solveset(eqn2.subs({r_LEO:r_LEO_Calculated, e_c:e_c_Calculated})).args[0]\na_Exp = Eq(a, s)\na_Calculated = a_Exp.rhs\na_Exp\n```\n\nThere's another way for finding 'a'.\n\n\n```python\np1 = plot(sqrt(r_GEO_Calculated**2-x**2), -sqrt(r_GEO_Calculated**2-x**2), sqrt(r_LEO_Calculated**2-x**2), -sqrt(r_LEO_Calculated**2-x**2), sqrt(a_Calculated**2*(1-e_c_Calculated**2)*(1-((x+a_Calculated*e_c_Calculated)**2/a_Calculated**2))), -sqrt(a_Calculated**2*(1-e_c_Calculated**2)*(1-((x+a_Calculated*e_c_Calculated)**2/a_Calculated**2))),(x, -5*10**7, 5*10**7),xlim = (-7.5*10**7, 7.5*10**7), ylim=((-5*10**7, 5*10**7)))\n```\n\n\n \n\n\nFrom the geometry we can say that:\n\n\n```python\nEq(a, (r_LEO + r_GEO)/2)\n```\n\nThis could've saved us a lot of math work :)\n\n__________________________________\n# Now let's calculate the periods.\n\nThe orbital period of an elliptical orbit is:\n\n\n```python\nEq(T_elliptical, 2*pi*sqrt(a**3 / mu))\n```\n\n\n```python\nT_GTO_Calculated = 2*pi*sqrt(a_Calculated**3/mu_calculated)\nEq(T_GTO, T_GTO_Calculated.evalf()/60/60)\n```\n\nSo, the total time required to put our satellite in a GEO using Hohmann transfer is:\n\n\n```python\nEq(T_tot, T_GTO / 2 + T_LEO / 2)\n```\n\nThe total time required to put our satellite in a GEO of a 36,000 Kilometers above sea level in hours is:\n\n\n```python\nEq(T_tot, (T_GTO_Calculated / 2 + T_LEO_Calculated / 2).evalf()/60/60)\n```\n", "meta": {"hexsha": "4ed7ca7babe42de341bc622e436661645dee11bb", "size": 74903, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Python Notebooks/.ipynb_checkpoints/Hohmann Transfer-checkpoint.ipynb", "max_stars_repo_name": "Yaamani/Satellite-Simulation", "max_stars_repo_head_hexsha": "f9b3363e79b62a30724c53c99fdb097a68ff324d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Python Notebooks/.ipynb_checkpoints/Hohmann Transfer-checkpoint.ipynb", "max_issues_repo_name": "Yaamani/Satellite-Simulation", "max_issues_repo_head_hexsha": "f9b3363e79b62a30724c53c99fdb097a68ff324d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Python Notebooks/.ipynb_checkpoints/Hohmann Transfer-checkpoint.ipynb", "max_forks_repo_name": "Yaamani/Satellite-Simulation", "max_forks_repo_head_hexsha": "f9b3363e79b62a30724c53c99fdb097a68ff324d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 94.4552332913, "max_line_length": 6560, "alphanum_fraction": 0.8071906332, "converted": true, "num_tokens": 1648, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.959762057376384, "lm_q2_score": 0.8757869884059266, "lm_q1q2_score": 0.8405471218159395}} {"text": "# Solutions to Project Euler #1-#10\n\n\n```python\n# This enables fetching of a PE problem by\n#\n# %pe 32\n#\n%load_ext fetch_euler_problem\n```\n\n## Problem 1\n\n[Multiples of 3 and 5](https://projecteuler.net/problem=1)\n\n>

    If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23.

    \n>

    Find the sum of all the multiples of 3 or 5 below 1000.

    \n\n\n```python\ndef prob001(a: int = 3, b: int = 5, below: int = 1000) -> int:\n \"\"\"\n >>> prob001(3, 5, 10)\n 23\n \"\"\"\n set1 = set(range(a, below, a))\n set2 = set(range(b, below, b))\n return sum(set1 | set2)\n\n```\n\n\n```python\nprob001()\n```\n\n\n\n\n 233168\n\n\n\n\n```python\n%timeit prob001()\n```\n\n 20.6 \u00b5s \u00b1 3.58 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 10000 loops each)\n\n\n**Another Solution:** filtering might be nice.\n\n\n```python\ndef prob001b(a: int = 3, b: int = 5, below: int = 1000) -> int:\n \"\"\"\n >>> prob001b(3, 5, 10)\n 23\n \"\"\"\n return sum(i for i in range(below) if i % a == 0 or i % b == 0)\n\n```\n\n\n```python\nprob001b()\n```\n\n\n\n\n 233168\n\n\n\n\n```python\n%timeit prob001b()\n```\n\n 120 \u00b5s \u00b1 8.08 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 10000 loops each)\n\n\n## Problem 2\n\n[Even Fibonacci numbers](https://projecteuler.net/problem=2)\n\n>

    Each new term in the Fibonacci sequence is generated by adding the previous two terms. By starting with 1 and 2, the first 10 terms will be:

    \n>

    1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...

    \n>

    By considering the terms in the Fibonacci sequence whose values do not exceed four million, find the sum of the even-valued terms.

    \n\nFibonacci sequence may be generated from repeated map of two-integer state. Or, in matrix form,\n\n$$\n\\begin{align*}\n a_{n+1} &= b_n \\\\\n b_{n+1} &= a_n + b_n \n\\end{align*}\n$$\n\nwith $a_0 = 0$ and $b_0 = 1$.\n\nAlso note that an infinite series is nicely handled with the Python generator.\n\n\n```python\nimport itertools as it\n\n\ndef prob002(maxval: int = 4000000) -> int:\n\n def fibs():\n i, j = 0, 1\n while True:\n (i, j) = (j, i + j)\n yield j\n\n fibseq = fibs()\n finite_fibs = it.takewhile(lambda x: x <= maxval, fibseq)\n return sum(n for n in finite_fibs if n % 2 == 0)\n\n```\n\n\n```python\nprob002()\n```\n\n\n\n\n 4613732\n\n\n\n\n```python\n%timeit prob002()\n```\n\n 12.7 \u00b5s \u00b1 2.87 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100000 loops each)\n\n\n## Problem 3\n\n[Largest prime factor](https://projecteuler.net/problem=3)\n\n>

    The prime factors of 13195 are 5, 7, 13 and 29.

    \n>

    What is the largest prime factor of the number 600851475143 ?

    \n\nQuick (and cheating) way to factor integers is to use `factorint()` in `sympy.ntheory` module.\n\n\n```python\nimport sympy.ntheory\nsympy.ntheory.factorint(13195)\n```\n\n\n\n\n {5: 1, 7: 1, 13: 1, 29: 1}\n\n\n\n\n```python\ndef prob003_sympy(n: int = 600851475143) -> int:\n \"\"\"\n >>> prob003_sympy(13195)\n 29\n \"\"\"\n factors = sympy.ntheory.factorint(n)\n return max(factors)\n```\n\n\n```python\nprob003_sympy()\n```\n\n\n\n\n 6857\n\n\n\n\n```python\n%timeit prob003_sympy()\n```\n\n 710 \u00b5s \u00b1 119 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1000 loops each)\n\n\nMany nice algorithms for [integer factorization](https://en.wikipedia.org/wiki/Integer_factorization) exist (such as [Pollard's rho algorithm](https://en.wikipedia.org/wiki/Pollard's_rho_algorithm)), but a simple approach is good enough for this problem.\n\n\nI prepared a generator giving psudo-prime sequence. I don't worry about composite numbers in the sequence because the target number $n$ is not divisible by them.\n\n\n```python\nfrom typing import Iterator\n\ndef psudo_primes() -> Iterator[int]:\n \"\"\"\n Generate numbers n > 1 that is NOT multiple of 2 or 3.\n \n >>> import itertools\n >>> xs = tuple(itertools.takewhile(lambda x: x < 30, psudo_primes()))\n (2, 3, 5, 7, 11, 13, 17, 19, 23, 25, 29)\n \"\"\"\n yield 2\n yield 3\n\n x = 5\n while True:\n yield x\n x += 2\n yield x\n x += 4\n\n\ndef prob003(n: int = 600851475143) -> int:\n \"\"\"\n >>> prob003(13195)\n 29\n \"\"\"\n assert n > 1\n for p in psudo_primes():\n while n % p == 0:\n n //= p\n maxval = p\n if p * p > n:\n break\n if n > 1:\n maxval = n\n return maxval\n\n```\n\n\n```python\nprob003()\n```\n\n\n\n\n 6857\n\n\n\n\n```python\n%timeit prob003()\n```\n\n 198 \u00b5s \u00b1 21.3 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1000 loops each)\n\n\n## Problem 4\n\n[Largest palindrome product](https://projecteuler.net/problem=4)\n\n>

    A palindromic number reads the same both ways. The largest palindrome made from the product of two 2-digit numbers is 9009 = 91 \u00d7 99.

    \n>

    Find the largest palindrome made from the product of two 3-digit numbers.

    \n\n$n$-digit number $x$ is equivalent to $10^{n-1} \\leq x < 10^n$. Following brute force algorithm is quadratic time complexity. \n\n**[FIXME]** It should be faster.\n\n\n```python\ndef is_palindrome(number: int) -> bool:\n \"\"\"\n Check if number is palindrome, the numbers identical to its reversed-direction digits.\n\n >>> is_palindrome(15651)\n True\n\n >>> is_palindrome(56)\n False\n \"\"\"\n s = str(number)\n return s == \"\".join(reversed(s))\n\n\ndef prob004(digits: int = 3):\n \"\"\"\n >>> prob004(digits=2)\n (9009, 91, 99)\n \"\"\"\n lower_bound = 10 ** (digits - 1)\n upper_bound = 10 ** digits\n result = (0, 0, 0)\n for i in range(lower_bound, upper_bound):\n for j in range(i, upper_bound):\n x = i * j\n if is_palindrome(x) and result < (x, i, j):\n result = (x, i, j)\n\n return result\n\n```\n\n\n```python\nprob004(digits=3)\n```\n\n\n\n\n (906609, 913, 993)\n\n\n\n\n```python\n%timeit prob004()\n```\n\n 605 ms \u00b1 217 ms per loop (mean \u00b1 std. dev. of 7 runs, 1 loop each)\n\n\n## Problem 5\n\n[Smallest multiple](https://projecteuler.net/problem=5)\n\n>

    2520 is the smallest number that can be divided by each of the numbers from 1 to 10 without any remainder.

    \n>

    What is the smallest positive number that is evenly divisible by all of the numbers from 1 to 20?

    \n\nThe problem is to find the loweset common multiplier (LCM). `lcm` is available in `sympy`.\n\n\n```python\nimport sympy\nimport functools\n\nfunctools.reduce(sympy.lcm, range(1,11))\n```\n\n\n\n\n 2520\n\n\n\nGreatest common divisor (GCD) is actually in the standard library `fractions`. And GCD and LCM are related by the identity\n\n$$ \\mathrm{LCM}(a, b) = \\frac{a\\, b}{\\mathrm{GCD}(a, b)}. $$\n\n\n```python\nimport math\nimport functools\n\ndef lcm(a: int, b: int) -> int:\n \"\"\"\n Return the lowest common multiplier of a and b\n \"\"\"\n return a // math.gcd(a, b) * b\n\n\ndef prob005(maxval: int = 20) -> int:\n \"\"\"\n >>> prob005(10)\n 2520\n \"\"\"\n return functools.reduce(lcm, range(1, maxval + 1))\n\n```\n\n\n```python\nprob005()\n```\n\n\n\n\n 232792560\n\n\n\n\n```python\n%timeit prob005()\n```\n\n 9.05 \u00b5s \u00b1 1.32 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100000 loops each)\n\n\n## Problem 6\n\n[Sum square difference](https://projecteuler.net/problem=6)\n\n>

    The sum of the squares of the first ten natural numbers is,

    \n>
    1^2 + 2^2 + ... + 10^2 = 385
    \n>

    The square of the sum of the first ten natural numbers is,

    \n>
    (1 + 2 + ... + 10)^2 = 55^2 = 3025
    \n>

    Hence the difference between the sum of the squares of the first ten natural numbers and the square of the sum is 3025 \u2212 385 = 2640.

    \n>

    Find the difference between the sum of the squares of the first one hundred natural numbers and the square of the sum.

    \n\nKnowing the sums,\n\n\\begin{align}\n \\sum_{n=1}^{N} n &= \\frac{1}{2} N(N+1), \\\\\n \\sum_{n=1}^{N} n^2 &= \\frac{1}{6} N (N+1) (2N + 1).\n\\end{align}\n\nwe can calculate the difference between the sum of squares and the square of the sum.\n\n$$ \\left( \\sum_{n=1}^{N} n \\right)^2 - \\sum_{n=1}^{N} n^2 = \\frac{1}{12} n (3 n^3 + 2 n^2 - 3n -2). $$\n\n\n\n`sympy` can reproduce the algebraic manipulations.\n\n\n```python\nimport sympy\n# sympy.init_printing() # turn on sympy printing on Jupyter notebooks\n\n\ndef the_difference():\n i, n = sympy.symbols(\"i n\", integer=True)\n simple_sum = sympy.summation(i, (i, 1, n))\n sum_of_squares = sympy.summation(i ** 2, (i, 1, n))\n formula = sympy.simplify(simple_sum ** 2 - sum_of_squares)\n return formula\n\n\ndef prob006_sympy(nval=100):\n \"\"\"\n >>> prob006_sympy(10)\n 2640\n \"\"\"\n formula = the_difference()\n n = formula.free_symbols.pop()\n return int(formula.evalf(subs={n:nval}))\n\n```\n\nThis problem only requires summation over first one hundred numbers so bruteforce approach works.\n\n\n```python\ndef prob006(n: int = 100) -> int:\n \"\"\"\n >>> prob006(10)\n 2640\n \"\"\"\n xs = range(1, n + 1)\n simple_sum = sum(xs)\n sum_of_squares = sum(x ** 2 for x in xs)\n return simple_sum ** 2 - sum_of_squares\n\n```\n\nThen all you need is to evaluate the formula with upper bound.\n\n\n```python\nprob006()\n```\n\n\n\n\n 25164150\n\n\n\n\n```python\n%timeit prob006()\n```\n\n 30.8 \u00b5s \u00b1 2.64 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 10000 loops each)\n\n\n## Problem 7\n\n[10001st prime](https://projecteuler.net/problem=7)\n\n>

    By listing the first six prime numbers: 2, 3, 5, 7, 11, and 13, we can see that the 6th prime is 13.

    \n>

    What is the 10 001st prime number?

    \n\n**Solution:** `sympy.ntheory` package has prime number generators.\n\n\n```python\nsympy.ntheory.prime(6)\n```\n\n\n\n\n 13\n\n\n\n\n```python\nsympy.ntheory.prime(10001)\n```\n\n\n\n\n 104743\n\n\n\ns a good enough algorithm to generate prime numbers. \n\n\nIt is known [(wikipedia.org)](https://en.wikipedia.org/wiki/Prime_number_theorem#Approximations_for_the_nth_prime_number) that $n$-th prime number $p_n$ is bounded by\n\n$$ \\log n + \\log \\log n - 1 < \\frac{p_n}{n} < \\log n + \\log \\log n \\ \\ \\ \\textrm{ for $n \\geq 6$}. $$\n\nSo, I'll select integers up to the upper bound with [Sieve of Eratosthenes](https://en.wikipedia.org/wiki/Sieve_of_Eratosthenes).\n\n\n```python\nimport math\nfrom typing import Iterator, List\n\n# Reproduction from Problem #3\ndef psudo_primes() -> Iterator[int]:\n yield 2\n yield 3\n x = 5\n while True:\n yield x\n x += 2\n yield x\n x += 6\n\n\ndef sieve(n: int) -> List[int]:\n \"\"\"\n Return all prime numbers below n\n \n >>> sieve(10)\n [2, 3, 5, 7]\n \"\"\"\n assert n > 1\n remaining = [True] * n # never use the first two elements (0th and 1st)\n for p in range(2, int(math.sqrt(n) + 1)):\n if not remaining[p]:\n continue\n for q in range(p * p, n, p):\n remaining[q] = False\n return [p for p in range(2, n) if remaining[p]]\n\n\ndef prime(n: int) -> List[int]:\n \"\"\"\n Return first n prime numbers\n \n >>> prime(10)\n [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]\n \"\"\"\n log = math.log\n if n >= 6:\n upperbound = int(n * (log(n) + log(log(n))))\n out = sieve(upperbound)\n else:\n out = [2, 3, 5, 7, 11]\n return out[:n]\n\n\ndef prob007(n: int = 10001) -> int:\n \"\"\"\n >>> prob007(6)\n 13\n \"\"\"\n return prime(n)[-1]\n\n```\n\n\n```python\nprob007()\n```\n\n\n\n\n 104743\n\n\n\n\n```python\nimport sympy\n%timeit sympy.ntheory.prime(10001)\n```\n\n 17.3 ms \u00b1 4.65 ms per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each)\n\n\n\n```python\n%timeit prob007()\n```\n\n 16.7 ms \u00b1 463 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each)\n\n\n## Problem 8\n\n## Problem 8\n\n[Largest product in a series](https://projecteuler.net/problem=8)\n\n>

    The four adjacent digits in the 1000-digit number that have the greatest product are 9 \u00d7 9 \u00d7 8 \u00d7 9 = 5832.

    \n>

    73167176531330624919225119674426574742355349194934
    96983520312774506326239578318016984801869478851843
    85861560789112949495459501737958331952853208805511
    12540698747158523863050715693290963295227443043557
    66896648950445244523161731856403098711121722383113
    62229893423380308135336276614282806444486645238749
    30358907296290491560440772390713810515859307960866
    70172427121883998797908792274921901699720888093776
    65727333001053367881220235421809751254540594752243
    52584907711670556013604839586446706324415722155397
    53697817977846174064955149290862569321978468622482
    83972241375657056057490261407972968652414535100474
    82166370484403199890008895243450658541227588666881
    16427171479924442928230863465674813919123162824586
    17866458359124566529476545682848912883142607690042
    24219022671055626321111109370544217506941658960408
    07198403850962455444362981230987879927244284909188
    84580156166097919133875499200524063689912560717606
    05886116467109405077541002256983155200055935729725
    71636269561882670428252483600823257530420752963450

    \n>

    Find the thirteen adjacent digits in the 1000-digit number that have the greatest product. What is the value of this product?

    \n\n\n```python\nimport operator\nimport functools\n\n\ndef product(iterable):\n \"\"\"\n >>> product(range(1, 6))\n 120\n \"\"\"\n return functools.reduce(operator.mul, iterable)\n\n\ndef prob008(s: str, seq_size: int = 5):\n\n def slice_N_digits(idx):\n for i in s[idx : idx + seq_size]:\n yield int(i)\n\n return max(product(slice_N_digits(idx)) for idx in range(len(s) - seq_size))\n\n```\n\n\n```python\ns = \"\"\"\n73167176531330624919225119674426574742355349194934\n96983520312774506326239578318016984801869478851843\n85861560789112949495459501737958331952853208805511\n12540698747158523863050715693290963295227443043557\n66896648950445244523161731856403098711121722383113\n62229893423380308135336276614282806444486645238749\n30358907296290491560440772390713810515859307960866\n70172427121883998797908792274921901699720888093776\n65727333001053367881220235421809751254540594752243\n52584907711670556013604839586446706324415722155397\n53697817977846174064955149290862569321978468622482\n83972241375657056057490261407972968652414535100474\n82166370484403199890008895243450658541227588666881\n16427171479924442928230863465674813919123162824586\n17866458359124566529476545682848912883142607690042\n24219022671055626321111109370544217506941658960408\n07198403850962455444362981230987879927244284909188\n84580156166097919133875499200524063689912560717606\n05886116467109405077541002256983155200055935729725\n71636269561882670428252483600823257530420752963450\"\"\"\n\nxs = [int(c) for c in s.replace(\"\\n\", \"\")]\nprob008(xs)\n\n```\n\n\n\n\n 40824\n\n\n\n\n```python\n%timeit prob008(xs)\n```\n\n 2.5 ms \u00b1 358 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each)\n\n\n## Problem 9\n\n[Special Pythagorean triplet](https://projecteuler.net/problem=9)\n\n>

    A Pythagorean triplet is a set of three natural numbers, a < b < c, for which,

    \n>
    a^2 + b^2 = c^2
    \n>

    For example, 3^2 + 4^2 = 9 + 16 = 25 = 5^2.

    \n>

    There exists exactly one Pythagorean triplet for which a + b + c = 1000.
    Find the product abc.

    \n\n\n```python\nfrom typing import Optional, Tuple\n\n\ndef prob009(total: int = 1000) -> Optional[Tuple[int, int, int, int]]:\n \"\"\"\n >>> prob009(total=12)\n (3, 4, 5, 60)\n \"\"\"\n ub = total // 2 # just an upper bound\n for a in range(1, ub): \n for b in range(a, ub):\n c = total - a - b\n if a ** 2 + b ** 2 == c ** 2:\n product = a * b * c\n return (a, b, c, product)\n else:\n return None\n```\n\n\n```python\nprob009(12)\n```\n\n\n\n\n (3, 4, 5, 60)\n\n\n\n\n```python\n%timeit prob009()\n```\n\n 66.8 ms \u00b1 2.58 ms per loop (mean \u00b1 std. dev. of 7 runs, 10 loops each)\n\n\n## Problem 10\n\n[Summation of primes](https://projecteuler.net/problem=10)\n\n>

    The sum of the primes below 10 is 2 + 3 + 5 + 7 = 17.

    \n>

    Find the sum of all the primes below two million.

    \n\n`sympy` module has `primerange`.\n\n\n```python\nlist(sympy.sieve.primerange(1, 10))\n```\n\n\n\n\n [2, 3, 5, 7]\n\n\n\nSo, Just take sum of the prime numbers between 1 and 200000.\n\n\n```python\nimport sympy\n\ndef prob010_sympy(n=2000000):\n \"\"\"\n >>> prob010_sympy(10)\n 17\n \"\"\"\n return sum(sympy.sieve.primerange(1, n))\n```\n\n\n```python\n%timeit prob010_sympy()\n```\n\n 34.2 ms \u00b1 147 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1 loop each)\n\n\nUse Eratosthenes sieve in Problem 7.\n\n\n```python\nfrom typing import List\n\n# Reproduction from problem 7\ndef sieve(n: int) -> List[int]:\n assert n > 1\n remaining = [True] * (n + 1) # never use the first two elements (0th and 1st)\n for p in range(2, int(math.sqrt(n) + 1)):\n if not remaining[p]:\n continue\n for q in range(p * p, n + 1, p):\n remaining[q] = False\n return [p for p in range(2, n + 1) if remaining[p]]\n\n\ndef prob010(n: int = 2000000) -> int:\n \"\"\"\n >>> prob010(10)\n 17\n \"\"\"\n return sum(sieve(n))\n\n```\n\n\n```python\nprob010()\n```\n\n\n\n\n 142913828922\n\n\n\n\n```python\n%timeit prob010()\n```\n\n 497 ms \u00b1 128 ms per loop (mean \u00b1 std. dev. of 7 runs, 1 loop each)\n\n\n## [Appendix] Run doctests\n\n\n```python\nimport doctest\ndoctest.testmod()\n```\n\n\n\n\n TestResults(failed=0, attempted=16)\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "6ed073906c09cbb86013d31d98422f322c490a51", "size": 32870, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Problems_001-010.ipynb", "max_stars_repo_name": "yamaton/project-euler-jupyter", "max_stars_repo_head_hexsha": "7566c0536627173d9eaa53400e81a7e6c7d4b994", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2018-11-07T21:56:56.000Z", "max_stars_repo_stars_event_max_datetime": "2018-11-07T21:56:56.000Z", "max_issues_repo_path": "Problems_001-010.ipynb", "max_issues_repo_name": "yamaton/project-euler-ipython", "max_issues_repo_head_hexsha": "7566c0536627173d9eaa53400e81a7e6c7d4b994", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Problems_001-010.ipynb", "max_forks_repo_name": "yamaton/project-euler-ipython", "max_forks_repo_head_hexsha": "7566c0536627173d9eaa53400e81a7e6c7d4b994", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2019-07-19T05:08:59.000Z", "max_forks_repo_forks_event_max_datetime": "2021-05-02T18:11:33.000Z", "avg_line_length": 24.4568452381, "max_line_length": 1173, "alphanum_fraction": 0.5002738059, "converted": true, "num_tokens": 5705, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8918110454379297, "lm_q2_score": 0.942506720392153, "lm_q1q2_score": 0.8405379036452004}} {"text": "# Numerical methods for 2nd-order ODEs\n\nWe've gone over how to solve 1st-order ODEs using numerical methods, but what about 2nd-order or any higher-order ODEs? 
We can use the same methods we've already discussed by transforming our higher-order ODEs into a **system of first-order ODEs**.\n\nMeaning, if we are given a 2nd-order ODE\n\\begin{equation}\n\\frac{d^2 y}{dx^2} = y^{\\prime\\prime} = f(x, y, y^{\\prime})\n\\end{equation}\nwe can transform this into a system of **two 1st-order ODEs** that are coupled:\n\\begin{align}\n\\frac{dy}{dx} &= y^{\\prime} = u \\\\\n\\frac{du}{dx} &= u^{\\prime} = y^{\\prime\\prime} = f(x, y, u)\n\\end{align}\nwhere $f(x, y, u)$ is the same as that given above for $\\frac{d^2 y}{dx^2}$.\n\nThus, instead of a 2nd-order ODE to solve, we have two 1st-order ODEs:\n\\begin{align}\ny^{\\prime} &= u \\\\\nu^{\\prime} &= f(x, y, u)\n\\end{align}\n\nSo, we can use all of the methods we have talked about so far to solve 2nd-order ODEs by transforming the one equation into a system of two 1st-order equations.\n\n## Higher-order ODEs\n\nThis works for higher-order ODEs too! For example, if we have a 3rd-order ODE, we can transform it into a system of three 1st-order ODEs:\n\\begin{align}\n\\frac{d^3 y}{dx^3} &= f(x, y, y^{\\prime}, y^{\\prime\\prime}) \\\\\n\\rightarrow y^{\\prime} &= u \\\\\nu^{\\prime} &= y^{\\prime\\prime} = w \\\\\nw^{\\prime} &= y^{\\prime\\prime\\prime} = f(x, y, u, w)\n\\end{align}\n\n## Example: mass-spring problem\n\nFor example, let's solve a forced damped mass-spring problem given by a 2nd-order ODE:\n\\begin{equation}\ny^{\\prime\\prime} + 5y^{\\prime} + 6y = 10 \\sin \\omega t\n\\end{equation}\nwith the initial conditions $y(0) = 0$ and $y^{\\prime}(0) = 5$.\n\nWe start by transforming the equation into two 1st-order ODEs. Let's use the variables $z_1 = y$ and $z_2 = y^{\\prime}$:\n\\begin{align}\n\\frac{dz_1}{dt} &= z_1^{\\prime} = z_2 \\\\\n\\frac{dz_2}{dt} &= z_2^{\\prime} = y^{\\prime\\prime} = 10 \\sin \\omega t - 5z_2 - 6z_1\n\\end{align}\n\n### Forward Euler\n\nThen, let's solve numerically using the forward Euler method. Recall that the recursion formula for forward Euler is:\n\\begin{equation}\ny_{i+1} = y_i + \\Delta x f(x_i, y_i)\n\\end{equation}\nwhere $f(x,y) = \\frac{dy}{dx}$.\n\nLet's solve using $\\omega = 1$ and with a step size of $\\Delta t = 0.1$, over $0 \\leq t \\leq 3$.\n\nWe can compare this against the exact solution, obtainable using the method of undetermined coefficients:\n\\begin{equation}\ny(t) = -6 e^{-3t} + 7 e^{-2t} + \\sin t - \\cos t\n\\end{equation}\n\n\n```matlab\n% plot exact solution first\nt = linspace(0, 3);\ny_exact = -6*exp(-3*t) + 7*exp(-2*t) + sin(t) - cos(t);\nplot(t, y_exact); hold on\n\nomega = 1;\n\ndt = 0.1;\nt = [0 : dt : 3];\n\nf = @(t,z1,z2) 10*sin(omega*t) - 5*z2 - 6*z1;\n\nz1 = zeros(length(t), 1);\nz2 = zeros(length(t), 1);\nz1(1) = 0;\nz2(1) = 5;\nfor i = 1 : length(t)-1\n z1(i+1) = z1(i) + dt * z2(i);\n z2(i+1) = z2(i) + dt * f(t(i), z1(i), z2(i));\nend\n\nplot(t, z1, 'o--')\nxlabel('time'); ylabel('displacement')\nlegend('Exact', 'Forward Euler', 'Location','southeast')\n```\n\n### Heun's Method\n\nFor schemes that involve more than one stage, like Heun's method, we'll need to implement both stages for each 1st-order ODE. 
For example:\n\n\n```matlab\nclear\n% plot exact solution first\nt = linspace(0, 3);\ny_exact = -6*exp(-3*t) + 7*exp(-2*t) + sin(t) - cos(t);\nplot(t, y_exact); hold on\n\nomega = 1;\n\ndt = 0.1;\nt = [0 : dt : 3];\n\nf = @(t,z1,z2) 10*sin(omega*t) - 5*z2 - 6*z1;\n\nz1 = zeros(length(t), 1);\nz2 = zeros(length(t), 1);\nz1(1) = 0;\nz2(1) = 5;\nfor i = 1 : length(t)-1\n % predictor\n z1p = z1(i) + z2(i)*dt;\n z2p = z2(i) + f(t(i), z1(i), z2(i))*dt;\n\n % corrector\n z1(i+1) = z1(i) + 0.5*dt*(z2(i) + z2p);\n z2(i+1) = z2(i) + 0.5*dt*(f(t(i), z1(i), z2(i)) + f(t(i+1), z1p, z2p));\nend\nplot(t, z1, 'o')\nxlabel('time'); ylabel('displacement')\nlegend('Exact', 'Heuns', 'Location','southeast')\n```\n\n### Runge-Kutta: `ode45`\n\nWe can also solve using `ode45`, by providing a separate function file that defines the system of 1st-order ODEs. In this case, we'll need to use a single **array** variable, `Z`, to store $z_1$ and $z_2$. The first column of `Z` will store $z_1$ (`Z(:,1)`) and the second column will store $z_2$ (`Z(:,2)`).\n\n\n```matlab\n%%file mass_spring.m\nfunction dzdt = mass_spring(t, z)\n omega = 1;\n dzdt = zeros(2,1);\n \n dzdt(1) = z(2);\n dzdt(2) = 10*sin(omega*t) - 6*z(1) - 5*z(2);\nend\n```\n\n Created file '/Users/kyle/projects/ME373/docs/mass_spring.m'.\n\n\n\n```matlab\n% plot exact solution first\nt = linspace(0, 3);\ny_exact = -6*exp(-3*t) + 7*exp(-2*t) + sin(t) - cos(t);\nplot(t, y_exact); hold on\n\n% solution via ode45:\n[T, Z] = ode45('mass_spring', [0 3], [0 5]);\n\nplot(T, Z(:,1), 'o')\nxlabel('time'); ylabel('displacement')\nlegend('Exact', 'ode45', 'Location','southeast')\n```\n\n## Backward Euler for 2nd-order ODEs\n\nWe saw how to implement the Backward Euler method for a 1st-order ODE, but what about a 2nd-order ODE? (Or in general a system of 1st-order ODEs?)\n\nThe recursion formula is the same, except now our dependent variable is an array/vector:\n\\begin{equation}\n\\mathbf{y}_{i+1} = \\mathbf{y}_i + \\Delta t \\, \\mathbf{f} \\left( t_{i+1} , \\mathbf{y}_{i+1} \\right)\n\\end{equation}\nwhere the bolded $\\mathbf{y}$ and $\\mathbf{f}$ indicate array quantities (in other words, they hold more than one value).\n\nIn general, we can use Backward Euler to solve 2nd-order ODEs in a similar fashion as our other numerical methods:\n\n1. Convert the 2nd-order ODE into a system of two 1st-order ODEs\n2. Insert the ODEs into the Backward Euler recursion formula and solve for $\\mathbf{y}_{i+1}$\n\nThe main difference is that we will now have a system of two equations and two unknowns: $y_{1, i+1}$ and $y_{2, i+1}$.\n\nLet's demonstrate with an example:\n\\begin{equation}\ny^{\\prime\\prime} + 6 y^{\\prime} + 5y = 10 \\quad y(0) = 0 \\quad y^{\\prime}(0) = 5\n\\end{equation}\nwhere the exact solution is\n\\begin{equation}\ny(t) = -\\frac{3}{4} e^{-5t} - \\frac{5}{4} e^{-t} + 2\n\\end{equation}\n\nTo solve numerically,\n\n1. Convert the 2nd-order ODE into a system of two 1st-order ODEs:\n\\begin{gather}\ny_1 = y \\quad y_1(t=0) = 0 \\\\\ny_2 = y^{\\prime} \\quad y_2 (t=0) = 5\n\\end{gather}\nThen, for the derivatives of these variables:\n\\begin{align}\ny_1^{\\prime} &= y_2 \\\\\ny_2^{\\prime} &= 10 - 6 y_2 - 5 y_1\n\\end{align}\n\n2. 
Then plug these derivatives into the Backward Euler recursion formulas and solve for $y_{1,i+1}$ and $y_{2,i+1}$:\n\\begin{align}\ny_{1, i+1} &= y_{1, i} + \\Delta t \\, y_{2,i+1} \\\\\ny_{2, i+1} &= y_{2, i} + \\Delta t \\left( 10 - 6 y_{2, i+1} - 5 y_{1,i+1} \\right) \\\\\n\\\\\ny_{1, i+1} - \\Delta t \\, y_{2, i+1} &= y_{1,i} \\\\\n5 \\Delta t \\, y_{1, i+1} + (1 + 6 \\Delta t) y_{2, i+1} &= y_{2,i} + 10 \\Delta t \\\\\n\\text{or} \\quad \n\\begin{bmatrix} 1 & -\\Delta t \\\\ 5 \\Delta t & (1+6\\Delta t)\\end{bmatrix} \n\\begin{bmatrix} y_{1, i+1} \\\\ y_{2, i+1} \\end{bmatrix} &= \n\\begin{bmatrix} y_{1,i} \\\\ y_{2,i} + 10 \\Delta t \\\\ \\end{bmatrix} \\\\\n\\mathbf{A} \\mathbf{y}_{i+1} &= \\mathbf{b}\n\\end{align}\nTo isolate $\\mathbf{y}_{i+1}$ and get a usable recursion formula, we need to solve this system of two equations. We could solve this by hand using the substitution method, or we can use Cramer's rule:\n\\begin{align}\ny_{1, i+1} &= \\frac{ y_{1,i} (1 + 6 \\Delta t) + \\Delta t \\left( y_{2,i} + 10 \\Delta t \\right)}{1 + 6 \\Delta t + 5 \\Delta t^2} \\\\\ny_{2, i+1} &= \\frac{ y_{2,i} + 10 \\Delta t - 5 \\Delta t y_{1,i}}{1 + 6 \\Delta t + 5 \\Delta t^2}\n\\end{align}\n\nLet's confirm that this gives us a good, well-behaved numerical solution and compare with the Forward Euler method:\n\n\n```matlab\nclear\n\n% Exact solution\nt = linspace(0, 5);\ny_exact = @(t) -(3/4)*exp(-5*t) - (5/4)*exp(-t) + 2;\nplot(t, y_exact(t)); hold on\n\ndt = 0.1;\nt = 0 : dt : 5;\n\n% Forward Euler\nf = @(t, y1, y2) 10 - 6*y2 - 5*y1;\ny1 = zeros(length(t), 1); y2 = zeros(length(t), 1);\ny1(1) = 0; y2(1) = 5;\nfor i = 1 : length(t) - 1\n y1(i+1) = y1(i) + dt*y2(i);\n y2(i+1) = y2(i) + dt*f(t(i), y1(i), y2(i));\nend\nplot(t, y1, '+')\n\nY = zeros(length(t), 2);\nY(1,1) = 0;\nY(1,2) = 5;\nfor i = 1 : length(t) - 1\n D = 1 + 6*dt + 5*dt^2;\n Y(i+1, 1) = (Y(i,1)*(1 + 6*dt) + dt*(Y(i,2) + 10*dt)) / D;\n Y(i+1, 2) = (Y(i,2) + 10*dt - Y(i,1)*5*dt) / D;\nend\nplot(t, Y(:,1), 'o')\nlegend('Exact', 'Forward Euler', 'Backward Euler', 'location', 'southeast')\n\nfprintf('Maximum error of Forward Euler: %5.3f\\n', max(abs(y1(:) - y_exact(t)')));\nfprintf('Maximum error of Backward Euler: %5.3f', max(abs(Y(:,1) - y_exact(t)')));\n```\n\nSo, for $\\Delta t = 0.1$, we see that the Forward and Backward Euler methods give an error $\\mathcal{O}(\\Delta t)$, as expected since both methods are *first-order* accurate.\n\nLet's see how they compare for a larger step size:\n\n\n```matlab\nclear\n\n% Exact solution\nt = linspace(0, 5);\ny_exact = @(t) -(3/4)*exp(-5*t) - (5/4)*exp(-t) + 2;\nplot(t, y_exact(t)); hold on\n\ndt = 0.5;\nt = 0 : dt : 5;\n\n% Forward Euler\nf = @(t, y1, y2) 10 - 6*y2 - 5*y1;\ny1 = zeros(length(t), 1); y2 = zeros(length(t), 1);\ny1(1) = 0; y2(1) = 5;\nfor i = 1 : length(t) - 1\n y1(i+1) = y1(i) + dt*y2(i);\n y2(i+1) = y2(i) + dt*f(t(i), y1(i), y2(i));\nend\nplot(t, y1, 'o')\n\n% Backward Euler\n\nY = zeros(length(t), 2);\nY(1,1) = 0;\nY(1,2) = 5;\nfor i = 1 : length(t) - 1\n D = 1 + 6*dt + 5*dt^2;\n Y(i+1, 1) = (Y(i,1)*(1 + 6*dt) + dt*(Y(i,2) + 10*dt)) / D;\n Y(i+1, 2) = (Y(i,2) + 10*dt - Y(i,1)*5*dt) / D;\nend\nplot(t, Y(:,1), 'o')\nlegend('Exact', 'Backward Euler', 'location', 'southeast')\n\n%fprintf('Maximum error of Forward Euler: %5.3f\\n', max(abs(y1(:) - y_exact(t)')));\n%fprintf('Maximum error of Backward Euler: %5.3f', max(abs(Y(:,1) - y_exact(t)')));\n\nfprintf('Maximum error of Forward Euler: %5.3f\\n', max(abs(y1(:) - y_exact(t)')));\nfprintf('Maximum error of Backward Euler: 
%5.3f', max(abs(Y(:,1) - y_exact(t)')));\n```\n\nBackward Euler, since it is unconditionally stable, remains well-behaved at this larger step size, while the Forward Euler method blows up.\n\nOne other thing: instead of using Cramer's rule to get expressions for $y_{1,i+1}$ and $y_{2,i+1}$, we could instead use Matlab to solve the linear system of equations at each time step. To do that, we could replace\n```OCTAVE\nA = [1 -dt; 5*dt (1+6*dt)];\nb = [Y(i,1); Y(i,2)+10*dt];\nY(i+1,:) = (A\\b)';\n```\nwhere `A\\b` is equivalent to `inv(A)*b`, but faster. Let's confirm that this gives the same answer:\n\n\n```matlab\nclear\n\n% Exact solution\nt = linspace(0, 5);\ny_exact = @(t) -(3/4)*exp(-5*t) - (5/4)*exp(-t) + 2;\nplot(t, y_exact(t)); hold on\n\ndt = 0.1;\nt = 0 : dt : 5;\n\nY = zeros(length(t), 2);\nY(1,1) = 0;\nY(1,2) = 5;\nfor i = 1 : length(t) - 1\n A = [1 -dt; 5*dt (1+6*dt)];\n b = [Y(i,1); Y(i,2)+10*dt];\n Y(i+1,:) = (A\\b)';\nend\nplot(t, Y(:,1), 'o')\nlegend('Exact', 'Backward Euler', 'location', 'southeast')\n```\n\n## Cramer's Rule\n\nCramer's Rule provides a solution method for a system of linear equations, where the number of equations equals the number of unknowns. It works for any number of equations/unknowns, but isn't really practical for more than two or three. We'll focus on using it for a system of two equations, with two unknowns $x_1$ and $x_2$:\n\\begin{gather}\na_{11} + x_1 + a_{12} x_2 = b_1 \\\\\na_{21} + x_1 + a_{22} x_2 = b_2 \\\\\n\\text{or } \\mathbf{A} \\mathbf{x} = \\mathbf{b}\n\\end{gather}\nwhere\n\\begin{gather}\n\\mathbf{A} = \\begin{bmatrix} a_{11} & a_{12} \\\\ a_{21} & a_{22} \\end{bmatrix} \\\\\n\\mathbf{x} = \\begin{bmatrix} x_1 \\\\ x_2 \\end{bmatrix} \\\\\n\\mathbf{b} = \\begin{bmatrix} b_1 \\\\ b_2 \\end{bmatrix}\n\\end{gather}\n\nThe solutions for the unknowns are then\n\\begin{align}\nx_1 &= \\frac{ \\begin{vmatrix} b_1 & a_{12} \\\\ b_2 & a_{22} \\end{vmatrix} }{D} = \\frac{b_1 a_{22} - a_{12} b_2}{D} \\\\\nx_2 &= \\frac{ \\begin{vmatrix} a_{11} & b_1 \\\\ a_{21} & b_2 \\end{vmatrix} }{D} = \\frac{a_{11} b_2 - b_1 a_{21}}{D}\n\\end{align}\nwhere $D$ is the determinant of $\\mathbf{A}$:\n\\begin{equation}\nD = \\det(\\mathbf{A}) = \\begin{vmatrix} a_{11} & a_{12} \\\\ a_{21} & a_{22} \\end{vmatrix} = a_{11} a_{22} - a_{12} a_{21}\n\\end{equation}\nThis works as long as the determinant does not equal zero.\n", "meta": {"hexsha": "7c18b3820a625b67fd7f3dae1f0e57de7d2c189b", "size": 150655, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "docs/_sources/content/second-order/numerical-methods.ipynb", "max_stars_repo_name": "kyleniemeyer/ME373-book", "max_stars_repo_head_hexsha": "66a9ef0f69a8c4e1656c02080aebfb5704e1a089", "max_stars_repo_licenses": ["CC-BY-4.0", "BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "docs/_sources/content/second-order/numerical-methods.ipynb", "max_issues_repo_name": "kyleniemeyer/ME373-book", "max_issues_repo_head_hexsha": "66a9ef0f69a8c4e1656c02080aebfb5704e1a089", "max_issues_repo_licenses": ["CC-BY-4.0", "BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/_sources/content/second-order/numerical-methods.ipynb", "max_forks_repo_name": "kyleniemeyer/ME373-book", "max_forks_repo_head_hexsha": "66a9ef0f69a8c4e1656c02080aebfb5704e1a089", "max_forks_repo_licenses": 
["CC-BY-4.0", "BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 275.9249084249, "max_line_length": 25192, "alphanum_fraction": 0.9097739869, "converted": true, "num_tokens": 4687, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8976952975813453, "lm_q2_score": 0.9362850084148385, "lm_q1q2_score": 0.8404986492499108}} {"text": "sergazy.nurbavliyev@gmail.com \u00a9 2021\n\n## Can you form a triangle ?\n\nQuestion: Assume we have a stick of length 1. Suppose you randomly break it into three parts. What is the probability that you can form a triangle using these three parts?\n\n### Intuition\n\nI always like to break into the cases. The idea is divide and conquer :)\nLet us figure out what these cases are. First, assume we label these three pieces as $a,b$ and $c.$ Remember $a+b+c=1$. We know that if any of these pieces is bigger than $\\frac{1}{2}$ than it is impossible to form a triangle. There are 3 such cases. Now the last case is if all of them are strictly smaller than $\\frac{1}{2}.$ In that case it is possible to form a triangle. You can now intuitively think that all of these outcomes have an equal probability. Thus the answer is 1/4. To make things clear I show these argument in a table.\n\n| Cases: | a |b | c | Triangle ? | \n|--------------| -------------- |-----------------| ----------------|---------------|\n|Posssibility 1| a>1/2 |b<1/2| c<1/2|No |\n|Posssibility 2| a<1/2 |b>1/2| c<1/2|No |\n|Posssibility 3| a<1/2 |b<1/2| c>1/2|No |\n|Posssibility 4| a<1/2 |b<1/2| c<1/2|Yes |\n\n\n```python\n\n```\n\n## Theoritical result\n\nI would like to label the breaking points as $x$ and $y$. So there are two possibilities:\nCase 1: $x < y$ and Case 2: $y < x$\n\nCase 1:$x < y$: Length of pieces after choosing points x and y: would be\n$x , (y-x) , (1-y)$ \n\nWe showed the partition below.\n\n0----- $x$-------$y$------1\n\nThere are 3 possible combination for satisfying triangle inequality.\n\n\\begin{equation}\nx + (y-x) > (1-y)\\Rightarrow 2y > 1\\Rightarrow y > (1/2)\n\\end{equation}\n\n\\begin{equation}\n x + (1-1) > (y-x)\\Rightarrow 2x + 1 > 2y\\Rightarrow y < x + (1/2)\n\\end{equation}\n\n\\begin{equation}\n(y-x) + (1-1) > x\\Rightarrow 2x < 1 \\Rightarrow x < 1/2\n\\end{equation}\n\nI showed in a figure below the common area of these three region. \n\n\n```python\nfrom IPython.display import Image\nImage(filename='triangle.jpeg')\n```\n\n\n\n\n \n\n \n\n\n\nCase 2:$y < x$: Length of pieces after choosing points x and y: would be\n$y , (x-y) , (1-x)$. With the same logic, we would get $\\frac{1}{8}$\n\nNow we can add the results to get the probability of forming triangle. 
That would be $\\frac{1}{8}+ \\frac{1}{8}=\\frac{1}{4}.$\n\n## Python code for simulation\n\n\n```python\nimport random\ndef forms_triangle(a, b, c):\n return a + b > c and a + c > b and b + c > a\n```\n\n\n```python\ndef counts(trials):\n num_tri = 0\n for i in range(trials):\n x, y = random.uniform(0, 1.0), random.uniform(0, 1.0)\n if x > y:\n x, y = y, x\n if forms_triangle(x, y - x, 1 - y):\n num_tri += 1\n else:\n x, y = x, y\n if forms_triangle(x, y - x, 1 - y):\n num_tri += 1\n \n return num_tri\n```\n\n\n```python\ntrials = 1000000\nformed_triangles = counts(trials)\n```\n\n\n```python\nformed_triangles/trials\n```\n\n\n\n\n 0.249959\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "ab5b82f54937c4bebb6426bdb62d0b322bf704be", "size": 77412, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Can you form a triangle March 5, 2021.ipynb", "max_stars_repo_name": "sernur/probability_stats_interveiw_questions", "max_stars_repo_head_hexsha": "3144dae00fa83c82ff4e1f7668828349270a1937", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 7, "max_stars_repo_stars_event_min_datetime": "2021-03-04T06:48:28.000Z", "max_stars_repo_stars_event_max_datetime": "2021-04-19T10:04:24.000Z", "max_issues_repo_path": "Can you form a triangle March 5, 2021.ipynb", "max_issues_repo_name": "sernur/probability_stats_interveiw_questions", "max_issues_repo_head_hexsha": "3144dae00fa83c82ff4e1f7668828349270a1937", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-03-05T22:00:43.000Z", "max_issues_repo_issues_event_max_datetime": "2021-03-05T22:00:43.000Z", "max_forks_repo_path": "Can you form a triangle March 5, 2021.ipynb", "max_forks_repo_name": "sernur/probability_stats_interveiw_questions", "max_forks_repo_head_hexsha": "3144dae00fa83c82ff4e1f7668828349270a1937", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2021-03-04T05:02:00.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-16T01:13:40.000Z", "avg_line_length": 278.4604316547, "max_line_length": 70353, "alphanum_fraction": 0.9312768046, "converted": true, "num_tokens": 951, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9362850093037731, "lm_q2_score": 0.8976952900545975, "lm_q1q2_score": 0.8404986430007221}} {"text": "# Neural Network Fundamentals\n\n## Gradient Descent Introduction:\nhttps://www.youtube.com/watch?v=IxBYhjS295w\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom sklearn.linear_model import LinearRegression\n\nnp.random.seed(1)\n\n%matplotlib inline\nnp.random.seed(1)\n```\n\n\n```python\nN = 100\nx = np.random.rand(N,1)*5\n# Let the following command be the true function\ny = 2.3 + 5.1*x\n# Get some noisy observations\ny_obs = y + 2*np.random.randn(N,1)\n```\n\n\n```python\nplt.scatter(x,y_obs,label='Observations')\nplt.plot(x,y,c='r',label='True function')\nplt.legend()\nplt.show()\n```\n\n## Gradient Descent\n\nWe are trying to minimise $\\sum \\xi_i^2$.\n\n\\begin{align}\n\\mathcal{L} & = \\frac{1}{N}\\sum_{i=1}^N (y_i-f(x_i,w,b))^2 \\\\\n\\frac{\\delta\\mathcal{L}}{\\delta w} & = -\\frac{1}{N}\\sum_{i=1}^N 2(y_i-f(x_i,w,b))\\frac{\\delta f(x_i,w,b)}{\\delta w} \\\\ \n& = -\\frac{1}{N}\\sum_{i=1}^N 2\\xi_i\\frac{\\delta f(x_i,w,b)}{\\delta w}\n\\end{align}\nwhere $\\xi_i$ is the error term $y_i-f(x_i,w,b)$ and \n$$\n\\frac{\\delta f(x_i,w,b)}{\\delta w} = x_i\n$$\n\nSimilar expression can be found for $\\frac{\\delta\\mathcal{L}}{\\delta b}$ (exercise).\n\nFinally the weights can be updated as $w_{new} = w_{current} - \\gamma \\frac{\\delta\\mathcal{L}}{\\delta w}$ where $\\gamma$ is a learning rate between 0 and 1.\n\n\n```python\n# Helper functions\ndef f(w,b):\n return w*x+b\n\ndef loss_function(e):\n L = np.sum(np.square(e))/N\n return L\n\ndef dL_dw(e,w,b):\n return -2*np.sum(e*df_dw(w,b))/N\n\ndef df_dw(w,b):\n return x\n\ndef dL_db(e,w,b):\n return -2*np.sum(e*df_db(w,b))/N\n\ndef df_db(w,b):\n return np.ones(x.shape)\n```\n\n\n```python\n# The Actual Gradient Descent\ndef gradient_descent(iter=100,gamma=0.1):\n # get starting conditions\n w = 10*np.random.randn()\n b = 10*np.random.randn()\n \n params = []\n loss = np.zeros((iter,1))\n for i in range(iter):\n# from IPython.core.debugger import Tracer; Tracer()()\n params.append([w,b])\n e = y_obs - f(w,b) # Really important that you use y_obs and not y (you do not have access to true y)\n loss[i] = loss_function(e)\n\n #update parameters\n w_new = w - gamma*dL_dw(e,w,b)\n b_new = b - gamma*dL_db(e,w,b)\n w = w_new\n b = b_new\n \n return params, loss\n \nparams, loss = gradient_descent()\n```\n\n\n```python\niter=100\ngamma = 0.1\nw = 10*np.random.randn()\nb = 10*np.random.randn()\n\nparams = []\nloss = np.zeros((iter,1))\nfor i in range(iter):\n# from IPython.core.debugger import Tracer; Tracer()()\n params.append([w,b])\n e = y_obs - f(w,b) # Really important that you use y_obs and not y (you do not have access to true y)\n loss[i] = loss_function(e)\n\n #update parameters\n w_new = w - gamma*dL_dw(e,w,b)\n b_new = b - gamma*dL_db(e,w,b)\n w = w_new\n b = b_new\n```\n\n\n```python\ndL_dw(e,w,b)\n```\n\n\n\n\n 0.0078296405377948283\n\n\n\n\n```python\nplt.plot(loss)\n```\n\n\n```python\nparams = np.array(params)\nplt.plot(params[:,0],params[:,1])\nplt.title('Gradient descent')\nplt.xlabel('w')\nplt.ylabel('b')\nplt.show()\n```\n\n\n```python\nparams[-1]\n```\n\n\n\n\n array([ 4.98991104, 2.72258102])\n\n\n\n## Multivariate case\n\nWe are trying to minimise $\\sum \\xi_i^2$. 
This time $ f = Xw$ where $w$ is Dx1 and $X$ is NxD.\n\n\\begin{align}\n\\mathcal{L} & = \\frac{1}{N} (y-Xw)^T(y-Xw) \\\\\n\\frac{\\delta\\mathcal{L}}{\\delta w} & = -\\frac{1}{N} 2\\left(\\frac{\\delta f(X,w)}{\\delta w}\\right)^T(y-Xw) \\\\ \n& = -\\frac{2}{N} \\left(\\frac{\\delta f(X,w)}{\\delta w}\\right)^T\\xi\n\\end{align}\nwhere $\\xi_i$ is the error term $y_i-f(X,w)$ and \n$$\n\\frac{\\delta f(X,w)}{\\delta w} = X\n$$\n\nFinally the weights can be updated as $w_{new} = w_{current} - \\gamma \\frac{\\delta\\mathcal{L}}{\\delta w}$ where $\\gamma$ is a learning rate between 0 and 1.\n\n\n```python\nN = 1000\nD = 5\nX = 5*np.random.randn(N,D)\nw = np.random.randn(D,1)\ny = X.dot(w)\ny_obs = y + np.random.randn(N,1)\n```\n\n\n```python\nw\n```\n\n\n\n\n array([[ 0.93774813],\n [-2.62540124],\n [ 0.74616483],\n [ 0.67411002],\n [ 1.0142675 ]])\n\n\n\n\n```python\nX.shape\n```\n\n\n\n\n (1000, 5)\n\n\n\n\n```python\nw.shape\n```\n\n\n\n\n (5, 1)\n\n\n\n\n```python\n(X*w.T).shape\n```\n\n\n\n\n (1000, 5)\n\n\n\n\n```python\n# Helper functions\ndef f(w):\n return X.dot(w)\n\ndef loss_function(e):\n L = e.T.dot(e)/N\n return L\n\ndef dL_dw(e,w):\n return -2*X.T.dot(e)/N \n```\n\n\n```python\ndef gradient_descent(iter=100,gamma=1e-3):\n # get starting conditions\n w = np.random.randn(D,1)\n params = []\n loss = np.zeros((iter,1))\n for i in range(iter):\n params.append(w)\n e = y_obs - f(w) # Really important that you use y_obs and not y (you do not have access to true y)\n loss[i] = loss_function(e)\n\n #update parameters\n w = w - gamma*dL_dw(e,w)\n \n return params, loss\n \nparams, loss = gradient_descent()\n```\n\n\n```python\nplt.plot(loss)\n```\n\n\n```python\nparams[-1]\n```\n\n\n\n\n array([[ 0.94792987],\n [-2.60989696],\n [ 0.72929842],\n [ 0.65272494],\n [ 1.01038855]])\n\n\n\n\n```python\nmodel = LinearRegression(fit_intercept=False)\nmodel.fit(X,y)\nmodel.coef_.T\n```\n\n\n\n\n array([[ 0.93774813],\n [-2.62540124],\n [ 0.74616483],\n [ 0.67411002],\n [ 1.0142675 ]])\n\n\n\n\n```python\n# compare parameters side by side\nnp.hstack([params[-1],model.coef_.T])\n```\n\n\n\n\n array([[ 0.94792987, 0.93774813],\n [-2.60989696, -2.62540124],\n [ 0.72929842, 0.74616483],\n [ 0.65272494, 0.67411002],\n [ 1.01038855, 1.0142675 ]])\n\n\n\n## Stochastic Gradient Descent\n\n\n```python\ndef dL_dw(X,e,w):\n return -2*X.T.dot(e)/len(X)\n\ndef gradient_descent(gamma=1e-3, n_epochs=100, batch_size=20, decay=0.9):\n epoch_run = int(len(X)/batch_size)\n \n # get starting conditions\n w = np.random.randn(D,1)\n params = []\n loss = np.zeros((n_epochs,1))\n for i in range(n_epochs):\n params.append(w)\n \n for j in range(epoch_run):\n idx = np.random.choice(len(X),batch_size,replace=False)\n e = y_obs[idx] - X[idx].dot(w) # Really important that you use y_obs and not y (you do not have access to true y)\n #update parameters\n w = w - gamma*dL_dw(X[idx],e,w)\n loss[i] = e.T.dot(e)/len(e) \n gamma = gamma*decay #decay the learning parameter\n \n return params, loss\n \nparams, loss = gradient_descent()\n```\n\n\n```python\nplt.plot(loss)\n```\n\n\n```python\nnp.hstack([params[-1],model.coef_.T])\n```\n\n\n\n\n array([[ 0.94494132, 0.93774813],\n [-2.6276984 , -2.62540124],\n [ 0.74654537, 0.74616483],\n [ 0.66766209, 0.67411002],\n [ 1.00760747, 1.0142675 ]])\n\n\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\ntested\n```\n", "meta": {"hexsha": "985d1b2cf2baa4334970a01901817e2a739b5c97", "size": 97300, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": 
"tests/ml-books/Lesson 02 - GradientDescent.ipynb", "max_stars_repo_name": "gopala-kr/ds-notebooks", "max_stars_repo_head_hexsha": "bc35430ecdd851f2ceab8f2437eec4d77cb59423", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-05-10T09:16:23.000Z", "max_stars_repo_stars_event_max_datetime": "2019-05-10T09:16:23.000Z", "max_issues_repo_path": "tests/ml-books/Lesson 02 - GradientDescent.ipynb", "max_issues_repo_name": "gopala-kr/ds-notebooks", "max_issues_repo_head_hexsha": "bc35430ecdd851f2ceab8f2437eec4d77cb59423", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tests/ml-books/Lesson 02 - GradientDescent.ipynb", "max_forks_repo_name": "gopala-kr/ds-notebooks", "max_forks_repo_head_hexsha": "bc35430ecdd851f2ceab8f2437eec4d77cb59423", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-10-14T07:30:18.000Z", "max_forks_repo_forks_event_max_datetime": "2019-10-14T07:30:18.000Z", "avg_line_length": 142.8781204112, "max_line_length": 27076, "alphanum_fraction": 0.8871839671, "converted": true, "num_tokens": 2202, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9755769113660689, "lm_q2_score": 0.8615382076534743, "lm_q1q2_score": 0.8404967836464354}} {"text": "## Chapter 5 - Support Vector Machines\n\n### Nonlinear SVC\n\nThe support vector classifier is used when the boundary between the two classes are linear. However, in practice, we are sometimes faced with non-linear boundaries. In this case, we consider enlarging the feature space using the higher order features. E.g. rather than fitting a support vector classifier on $p$ features $\\begin{pmatrix} X_1, \\cdots, X_p\\end{pmatrix}$, we add a polynomial (squared) feature and fit the support vector classifier on $2p$ features $\\begin{pmatrix} X_1, X_1^2 \\cdots, X_p, X_p^2\\end{pmatrix}$. Now, the optimisation problem will be:\n\n$$\\underset{\\beta_0, \\beta_{11}, \\beta_{12}, \\cdots, \\beta_{p1},\\beta_{p2},\\epsilon_1, \\cdots, \\epsilon_n}{\\text{Maximise }}M \\text{ s. t. }$$\n$$\\sum_{j=1}^p\\sum_{k=1}^2 \\beta_{jk}^2=1$$\n$$y_i\\begin{pmatrix}\\beta_0 + \\sum_{j=1}^p\\beta_{j1}x_{ij} + \\sum_{j=1}^p\\beta_{j2}^2x^2_{ij} \\end{pmatrix}\\geq M(1-\\epsilon_i)\\,\\,\\forall i \\in \\{1,\\cdots,n\\}$$\n$$\\epsilon_i \\geq 0\\,\\,\\forall i \\in \\{1,\\cdots,n\\}\\,\\,, \\sum_{i=1}^n \\epsilon_i \\leq C$$\n\nIn this enlarged feature space, the decision boundary is linear. However, in the original future space, the decision boundary is in the form $q(x)=0$ where $q$ is a quadratic polynomial, adn its solutions are generally non-linear. 
In extension, we can enlarge the feature space with higher polynomial terms or interaction terms.\n\n\n```python\nimport warnings\nwarnings.filterwarnings(\"ignore\")\n\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom sklearn.datasets import (make_moons, load_iris)\nfrom sklearn.preprocessing import (PolynomialFeatures, StandardScaler)\nfrom sklearn.svm import LinearSVC, SVC, LinearSVR, SVR\nfrom sklearn.model_selection import train_test_split\n```\n\nTo achieve this in SKLearn, use `PolynomialFeatures` to transform before training.\n\n\n```python\nX, y = make_moons(n_samples=500, noise=0.30, random_state=42)\nX_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)\n```\n\n\n```python\n# Transform to 3rd degree polynomial features\npolyfeatures1 = PolynomialFeatures(degree=3)\nscaler1 = StandardScaler()\nX_expt1 = polyfeatures1.fit_transform(X_train)\nX_expt1 = scaler1.fit_transform(X_expt1)\n\n# Train on polynomial features\nclf_expt1 = LinearSVC(C=10, loss='hinge', max_iter=1000000)\nclf_expt1.fit(X_expt1, y_train)\n```\n\n\n\n\n LinearSVC(C=10, class_weight=None, dual=True, fit_intercept=True,\n intercept_scaling=1, loss='hinge', max_iter=1000000,\n multi_class='ovr', penalty='l2', random_state=None, tol=0.0001,\n verbose=0)\n\n\n\nIt is not hard to see that there are endless ways to enlarge the feature space and can come up with many features. This computationally becomes unmanageable. The support vector machine allows us to enlarge the feature space used by the support vector classifier in a way that leads to efficient computations.\n\n### Nonlinear SVM Classification - Using Kernels\n\nThe Support Vector Machine extends the support vector classifier that results from enlarging the feature space using kernels. This results in a method that is more efficient computationally.\n\nThe following from SKLearn implements this using a 3rd degree polynomial kernel.\n\n\n```python\n# Use 3rd degree polynomial and then train SVC on it\nclf_expt12= SVC(kernel='poly', degree=3, coef0=1, C=10)\nclf_expt12.fit(X_train, y_train)\n```\n\n\n\n\n SVC(C=10, break_ties=False, cache_size=200, class_weight=None, coef0=1,\n decision_function_shape='ovr', degree=3, gamma='scale', kernel='poly',\n max_iter=-1, probability=False, random_state=None, shrinking=True,\n tol=0.001, verbose=False)\n\n\n\nThe solution to the SVC problem involves only the inner products of the observations (as opposed to the observations themselves). The inner product of two vectors $\\vec{a}$ and $\\vec{b}$ of length $r$, $\\langle\\,\\vec{a},\\vec{b}\\rangle$ is $\\sum_{i=1}^ra_ib_i$\n\n\n```python\na, b = np.array([3,4]), np.array([2,7])\nprint(np.dot(a,b))\n```\n\n 34\n\n\nThe inner product of two observations $x_i,x_{i'}$, denoted by $\\langle x_{i},x_{i'}\\rangle$ is hence\n$$\\langle x_{i},x_{i'}\\rangle = \\sum_{j=1}^p x_{ij}x_{i'j}$$\n\nThe linear SVC solution can then be represented as:\n\n$$f(x) = \\beta_0 + \\sum_{i=1}^n \\alpha_i \\langle x,x_{i}\\rangle$$\n\nwhere there are $n$ parameters, $\\alpha_i \\forall i \\in \\{1,\\cdots,n\\}$\n\nTo estimate the parameters $\\alpha_i, i \\in \\{1,\\cdots,n\\}$ and $\\beta_0$, we need the inner products $\\langle x,x_{i}\\rangle$ of all the pairs of training observations.\n\nIt turns out that $\\alpha_i$ is nonzero for only the support vectors in the solution. So if the training observation is not the support vector then $\\alpha_i=0$. 
So if $S$ is the collection of indices of these support points, we can rewrite the solution function in the form:\n\n$$f(x) = \\beta_0 + \\sum_{i\\in S}^n \\alpha_i \\langle x,x_{i}\\rangle$$ which involve far fewer terms.\n\nTo expand the solution, instead of using the inner product $\\langle x,x_{i}\\rangle$ we use a generalisation of the inner product in the form: $$K(x_i,x_{i'})$$\n\nwhere $K$ is some function we refer to as a kernel. A kernel is a function that quantifies the similarity of two observations. If $K_{\\text{linear}}(x_i,x_{i'}) = \\sum_{j=1}^p x_{ij}x_{i'j}$ then $K_{\\text{linear}}$ is a linear kernel and the solution returns the support vector classifier. \n\nInstead, if the kernel is now $K_{\\text{polynomial}}(x_i,x_{i'}) = (1 + \\sum_{j=1}^p x_{ij}x_{i'j})^d$ where $d$ is a positive integer then we have $K_{\\text{polynomial}}$ the polynomial kernel of degree $d$ \n\nUsing a nonlinear kernel to perform classification results in the support vector machine. Now, the solution has the form:\n\n$$\\begin{align}f(x) &= \\beta_0 + \\sum_{i\\in S}^n \\alpha_i K_{\\text{polynomial}}(x_i,x_{i'})\\\\&=\\beta_0 + \\sum_{i\\in S}^n \\alpha_i (1 + \\sum_{j=1}^p x_{ij}x_{i'j})^d\\end{align}$$\n\n\nAnother popular choice is the radial kernel, $K_{\\text{radial}}(x_i,x_{i'}) = \\exp(-\\gamma \\sum_{j=1}^p (x_{ij}-x_{i'j})^2)$ where $\\gamma$ is a positive constant. Naturally, the solution to the radial kernel has the form:\n\n$$\\begin{align}f(x) &= \\beta_0 + \\sum_{i\\in S}^n \\alpha_i K_{\\text{radial}}(x_i,x_{i'})\\\\&=\\beta_0 + \\exp(-\\gamma \\sum_{j=1}^p (x_{ij}-x_{i'j})^2)\\end{align}$$\n\n\n```python\n# Feature Scaling\nscaler2 = StandardScaler()\nX_expt3 = scaler1.fit_transform(X_train)\n\n# Train on RBF Kernel\nclf_expt13 = SVC(kernel='rbf', gamma=5, C=0.001)\nclf_expt13.fit(X_expt3, y_train)\n```\n\n\n\n\n SVC(C=0.001, break_ties=False, cache_size=200, class_weight=None, coef0=0.0,\n decision_function_shape='ovr', degree=3, gamma=5, kernel='rbf', max_iter=-1,\n probability=False, random_state=None, shrinking=True, tol=0.001,\n verbose=False)\n\n\n\nHow does the radial kernel work? Given an unseen test observation $x^*$ is far from a training observation $x_i$ in terms of Euclidean distance, then $\\sum_{j=1}^p (x^*_{j}-x_{ij})^2)$ is large and hence $K_{\\text{radial}}(x^*,x_{i})$ is small. Then $x_i$ will play no role in $f(x^*)$. Training observations far from $x^*$ will play no role in the prediction for the class $x^*$\n\nThis means the radial kernel has very local behaviour, where only nearby training observations have an effect on the class label of the test observation.\n\nThe advantage of using kernels is computational. When using kernels, we only compute $K$ for every pair of the observations. This can be done without working explicitly working on an enlarged feature space. In enlarged feature space solutions, the feature space could be so large the computations are intractable. In the case of the radial kernel, the feature space is implicit and infinite-dimensional, so we cannot perform the mathematical calculations anyway.\n\n### SVM Regression (SVR)\n\nThe SVM algorithm is also versatile: It can support linear and nonlinear regression. The idea is to reverse the objective: instead of finding the largest possible street between two classes, we now find as many instances as possible on the street. The width of the street is controlled by a parameter $\\epsilon$. 
\n\n\n```python\n# Linear SVR, no polynomial features\nsvr1 = LinearSVR(epsilon=1.5)\nsvr1.fit(X_train, y_train)\n```\n\n\n\n\n LinearSVR(C=1.0, dual=True, epsilon=1.5, fit_intercept=True,\n intercept_scaling=1.0, loss='epsilon_insensitive', max_iter=1000,\n random_state=None, tol=0.0001, verbose=0)\n\n\n\nSimilarly, apply the polynomial kernel transformation if we want a SVM with a polynomial (not linear) solution.\n\n\n```python\n# SVR, with polynomial features\nsvr2 = SVR(kernel='poly', degree=2, C=100, epsilon=0.1)\nsvr2.fit(X_train, y_train)\n```\n\n\n\n\n SVR(C=100, cache_size=200, coef0=0.0, degree=2, epsilon=0.1, gamma='scale',\n kernel='poly', max_iter=-1, shrinking=True, tol=0.001, verbose=False)\n\n\n", "meta": {"hexsha": "446dd737fb93b8947799f6fdcec4eb9b465cd55f", "size": 13943, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "chap05/05-textbook-support-vector-machines-02.ipynb", "max_stars_repo_name": "bryanblackbee/topic__hands-on-machine-learning", "max_stars_repo_head_hexsha": "3b9a2cfa011099178dd73c3366331958d49ad96f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chap05/05-textbook-support-vector-machines-02.ipynb", "max_issues_repo_name": "bryanblackbee/topic__hands-on-machine-learning", "max_issues_repo_head_hexsha": "3b9a2cfa011099178dd73c3366331958d49ad96f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chap05/05-textbook-support-vector-machines-02.ipynb", "max_forks_repo_name": "bryanblackbee/topic__hands-on-machine-learning", "max_forks_repo_head_hexsha": "3b9a2cfa011099178dd73c3366331958d49ad96f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-06-20T05:38:43.000Z", "max_forks_repo_forks_event_max_datetime": "2020-06-20T05:38:43.000Z", "avg_line_length": 36.9840848806, "max_line_length": 578, "alphanum_fraction": 0.6018073585, "converted": true, "num_tokens": 2577, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9263037384317888, "lm_q2_score": 0.9073122188543453, "lm_q1q2_score": 0.8404467002496214}} {"text": "## Calculate the GPS Distance with the Haversine Formula\n\n* Dan Couture [@MathYourLife](https://twitter.com/MathYourLife), [github](https://github.com/MathYourLife)\n* 2015-03-05\n\n### Problem\n\nI've got the start and end gps location from an excursion across town and need to determine the travel distance\n\n start: 43.059535, -71.013171\n end: 43.083620, -70.892085\n\nYou could always use [google maps](https://www.google.com/maps/dir/43.059535,+-71.013171/%2743.08361,-70.89202%27/), but that would just be cheating.\n\n### Haversine Formula\n\nThe goal for this formula is to calculate the shortest great circle distance between two points on the globe designated by latitude and longitudes. The added benefit of the Haversine equation is that it calculates the central angle as well where $s = r\\theta$.\n\n\nsource: https://software.intel.com/sites/default/files/great%20circle.png\n\nThe Haversine formula is mainly based on calculation of the central angle, $\\theta$, between two gps coordinates. 
Using the formula for arc length on a sphere\n\n$$\ns = r \\theta\n$$\n\nwhere $r$ is the Earth's radius, and $\\theta$ is the central angle calculated as\n\n$$\n\\theta = 2 \\arcsin\\left( \\sqrt{\\sin^2 \\left(\\frac{\\phi_2-\\phi_1}{2}\\right) + \\cos(\\phi_1)\\cos(\\phi_2)\\sin^2 \\left( \\frac{\\lambda_2-\\lambda_1}{2} \\right) } \\right)\n$$\n\nwith:\n\n$$\n\\begin{align}\n\\phi &= \\text{latitude}\\\\\n\\lambda &= \\text{longitude}\\\\\n\\end{align}\n$$\n\n\n```python\nimport numpy as np\nimport math\n\n# Mean radius of the earth\nEARTH_RADIUS = 6371.009\n\ndef haversine(lat1, lon1, lat2, lon2):\n \"\"\"\n Calculate the great circle distance between two points\n on the earth (specified in decimal degrees)\n \n Return (central angle, distance between points in km)\n \"\"\"\n # convert decimal degrees to radians\n lat1, lon1, lat2, lon2 = [math.radians(x) for x in [lat1, lon1, lat2, lon2]]\n\n # haversine formula\n dlon = lon2 - lon1\n dlat = lat2 - lat1\n \n a = math.sin(dlat/2)**2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon/2)**2\n central_angle = 2 * math.asin(math.sqrt(a))\n\n # s = r * theta\n km = EARTH_RADIUS * central_angle\n return (central_angle, km)\n```\n\n### Calculate Excursion Distance\n\n\n```python\nstart = (43.059535, -71.013171)\nend = (43.083620, -70.892085)\n\ncentral_angle, km = haversine(*(start + end))\n\nprint(\"Central Angle of %g radians\" % central_angle)\nprint(\"Arc length distance of %g km\" % km)\n```\n\n Central Angle of 0.00160001 radians\n Arc length distance of 10.1937 km\n\n\n### Earth is not a sphere\n\nThe Haversine is a straight forward formula that provides an approximation for the distance between gps coordinates. The Earth of course is not spherical, and elevation changes including terrain profiles will increase actual distance traveled.\n\n## References:\n\n* http://www.gcmap.com/faq/gccalc#ellipsoid\n* http://stackoverflow.com/questions/4913349/haversine-formula-in-python-bearing-and-distance-between-two-gps-points/4913653#4913653\n* http://www.gcmap.com/mapui?P=DXB-SFO%2CBINP&PM=b%3Adisc7%2B%25U%2Cp%3Adisc7%2B%25N&MS=wls&PW=2&DU=km\n", "meta": {"hexsha": "23ed8de7779ba3f012e8bfdad8f2a7c2a4e52e4e", "size": 5003, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Haversine/Haversine.ipynb", "max_stars_repo_name": "MathYourLife/spouting-jibberish", "max_stars_repo_head_hexsha": "d90aa1da0cc0e5dd463e3ec8ec43878f1e810079", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2015-01-02T20:38:32.000Z", "max_stars_repo_stars_event_max_datetime": "2015-01-02T20:38:32.000Z", "max_issues_repo_path": "Haversine/Haversine.ipynb", "max_issues_repo_name": "MathYourLife/spouting-jibberish", "max_issues_repo_head_hexsha": "d90aa1da0cc0e5dd463e3ec8ec43878f1e810079", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Haversine/Haversine.ipynb", "max_forks_repo_name": "MathYourLife/spouting-jibberish", "max_forks_repo_head_hexsha": "d90aa1da0cc0e5dd463e3ec8ec43878f1e810079", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.8827160494, "max_line_length": 271, "alphanum_fraction": 0.5648610833, "converted": true, "num_tokens": 899, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9407897509188344, "lm_q2_score": 0.8933094110250333, "lm_q1q2_score": 0.8404163382916917}} {"text": "# The Singular Value Decomposition\n\n## 1. Using a matrix to store our data\nWe imagine that the data we are interested in are shaped into a $n\\times m$ matrix, $\\mathbf{X}$. Most of our data will be real numbers only, so $\\mathbf{X}\\in\\mathbb{R}^{n\\times m}$.\n\nThe data in $\\mathbf{X}$ might be spatially 2-D (a single image, for example) or it might be composed such that each column is data from a set of spatial postions at different instants in time,\n\n\\begin{equation}\n\\mathbf{X} = \\begin{bmatrix} \n\\mid & \\mid & & \\mid \\\\ \n\\mathbf{x_1} & \\mathbf{x_2} & \\cdots & \\mathbf{x_m} \\\\\n\\mid & \\mid & & \\mid \\end{bmatrix}\\quad\\text{,}\n\\end{equation}\n\nwhere each column $\\mathbf{x_k}$ is data from a set of $n$ sensors at a specific instant in time.\n\n## 2. Definition of the Singular Value Decomposition\n\nAny matrix $\\mathbf{X}$ can be written as the product of 3 special matrices, $\\mathbf{U}$, $\\mathbf{\\Sigma}$ and $\\mathbf{V^T}$ (where $\\mathbf{V^T}$ is the transpose of $\\mathbf{V}$),\n\n\\begin{equation}\n\\mathbf{X} = \\mathbf{U}\\,\\mathbf{\\Sigma}\\,\\mathbf{V^T}\\quad\\text{.}\n\\end{equation}\n\n$\\mathbf{U}$ and $\\mathbf{V}$ are both *orthogonal* matrices because,\n\n\\begin{align}\n\\mathbf{U}\\mathbf{U^T} &=\\mathbf{U^T}\\mathbf{U}=\\mathbf{I}\\quad\\text{and} \\\\\n\\mathbf{V}\\mathbf{V^T} &=\\mathbf{V^T}\\mathbf{V}=\\mathbf{I}\\quad\\text{.}\n\\end{align}\n\nThe $\\mathbf{\\Sigma}$ matrix only has non-zero entries on the diagonal. These are the **singular values** and are arranged in descending order.\n\n(If our data contains complex numbers, $\\mathbf{X}\\in\\mathbb{C}^{n\\times m}$, then the SVD still works but now $\\mathbf{U}$ and $\\mathbf{V}$ are *unitary* matrices so that $\\mathbf{U}\\mathbf{U^*}=\\mathbf{I}$ where $^*$ denotes conjugate transponse.)\n\n## 3. Shapes of the matrices in the SVD\n\nIf $\\mathbf{X}\\in\\mathbb{R}^{n\\times m}$ then $\\mathbf{U}\\in\\mathbb{R}^{n\\times n}$, $\\mathbf{\\Sigma}\\in\\mathbb{R}^{n\\times m}$ and $\\mathbf{V^T}\\in\\mathbb{R}^{m\\times m}$.\n\n\n\nLet's see how we can evaluate the SVD using Python:\n\n\n```python\nimport numpy as np \nimport matplotlib.pyplot as plt\n%matplotlib inline\n```\n\n\n```python\nm = 3\nn = 5\nX = np.random.rand(n,m) # Create a matrix of random numbers (between 0. and 1.) with n rows and m columns\n```\n\n\n```python\nX # In a notebook, this writes X to the screen - similar to print(X)\n```\n\n\n```python\nU,S,VT = np.linalg.svd(X) # Use numpy's linalg.svd to evaluate the SVD\n```\n\n\n```python\nU.shape, S.shape, VT.shape # Find the shapes of U, S and VT \n```\n\nWe expected $\\mathbf{U}\\in\\mathbb{R}^{n\\times n}$, $\\mathbf{\\Sigma}\\in\\mathbb{R}^{n\\times m}$ and $\\mathbf{V^T}\\in\\mathbb{R}^{m\\times m}$, but `numpy.linalg.svd` has returned $\\mathbf{\\Sigma}$ as a vector of length $n$. `numpy` has returned a compact description of $\\mathbf{\\Sigma}$, noting that it is a diagonal matrix and, since $n>m$ in our case, the bottom $n-m$ rows of $\\mathbf{\\Sigma}$ are zeros.\n\nIf we need to, we can reconstruct the full $\\mathbf{\\Sigma}$ using:\n\n\n```python\nSfull = np.zeros([n,m]) # Make a matrix (array) that is all zeros and of the correct shape\nSfull[:m,:m] = np.diag(S) # Insert the diagonal matrix made of the vector S. 
\n # The [:m,:m] notation means the first m rows and first m columns\n```\n\n\n```python\nSfull # display the full Sigma matrix (note the order of the singular values)\n```\n\nNow we can check the SVD by reconstructing X:\n\n\n```python\nXrecon = U @ Sfull @ VT # @ is matrix multiplication in Python 3\nXrecon \n```\n\n## 4. Properties of U and V\n\nWe can also demonstrate some properties of $\\mathbf{U}$ (and $\\mathbf{V}$). The vectors that form the columns of $\\mathbf{U}$ (and the columns of $\\mathbf{V}$) have unit magnitude and are orthogonal to the other columns of the matrix, i.e. they are orthonomal.\n\n\n```python\nU[:,0] @ U[:,0].T # Check that the magnitude of the first column of U = 1.\n```\n\n\n```python\nU[:,0] @ U[:,1].T # Check that the first column is orthogonal to the second column\n```\n\n---\n\nKey intepretation of $\\mathbf{U}$ and $\\mathbf{V}$ (from Brunton and Kutz):\n\n* Columns of $\\mathbf{U}$ are an orthonormal basis for the column space of $\\mathbf{X}$ \n* Columns of $\\mathbf{V}$ (not $\\mathbf{V^T}$ are an orthonormal basis for the row space of $\\mathbf{X}$ \n\nIf each column of $\\mathbf{X}$ contains spatial data at a different time instant, then $\\mathbf{U}$ relates to spatial patterns, and $\\mathbf{V}$ to tempoeral patterns.\n\n---\n\n\n## 5. Matrix appoximation using the SVD\n\nThe SVD identifies the *rank* of a matrix by the number of singular values. The rank is the maximum number of linearly independent columns (or rows) of the matrix. \n\n\n```python\nm = 3\nn = 5\nX = np.random.rand(n,m) # Make a new (n,m) matrix of random numbers\nU,S,VT = np.linalg.svd(X) # Compute the SVD\nS # Show the singular values (since X is made of random numbers, we expect all columns (and rows)\n # to be independent and so X will be of rank m\n```\n\n\n```python\nX[:,2] = X[:,1]*2. # Change the third column (indices start from 0 in Python) to be twice the second column\nU,S,VT = np.linalg.svd(X) # Now when we compute the SVD, we expect two singular values because the third column is not idependent of the second\nS\n```\n\nWe can approximate $\\mathbf{X}$ in a lower rank using the SVD. The optimal rank $r$ approximation is given by the largest $r$ singular values. 
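A quick numerical check of that optimality, as a minimal sketch reusing `X`, `U`, `S` and `VT` from the random matrix above (for the SVD truncation, the Frobenius-norm error equals the root-sum-square of the discarded singular values, which by the Eckart-Young theorem is the smallest error any rank $r$ matrix can achieve):\n\n\n```python\nr = 1\nXr = U[:,:r] @ np.diag(S[:r]) @ VT[:r,:]  # rank r truncation of X\nnp.linalg.norm(X - Xr), np.sqrt(np.sum(S[r:]**2))  # the two numbers agree\n```\n\n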
A nice way to show this is by using an image as the data matrix $\\mathbf{X}$.\n\n\n```python\nimg = np.load('data/img.npy')\nplt.imshow(img, cmap = 'gray', vmin = 0, vmax = 1)\n```\n\n\n```python\nU,S,VT = np.linalg.svd(img) # Perform the SVD\n```\n\n\n```python\nS.size # Tells us the rank of the img matrix\n```\n\nWe now approximate the `img` matrix using a new matrix of rank $r$.\n\n\n\nThe SVD now contains only the leading $r$ terms of $\\mathbf{\\Sigma}$ (the largest $r$ singular values), $\\mathbf{\\hat{U}}$ is the first $r$ columns of $\\mathbf{U}$ and $\\mathbf{\\hat{V}^T}$ is the first $r$ rows of $\\mathbf{V^T}$.\n\n\n```python\nr = 50 # Rank of new matrix\nimgRecon = U[:,:r] @ np.diag(S[:r]) @ VT[:r,:] # Use first r columns of U, first r values in S, and first r rows in VT\nplt.imshow(imgRecon, cmap='gray', vmin = 0, vmax = 1)\nplt.title(\"Approximation; Rank = \" + str(r) ) \n```\n\nThis represents a signficant reduction in the amount of data needed to describe the image:\n\n\n```python\nsize_orig = img.flatten().size\nsize_approx = U[:,:r].flatten().size + S[:r].flatten().size + VT[:r,:].flatten().size\nsize_approx / size_orig\n```\n\nEven a rank 10 approximation is surprisingly clear: \n\n\n```python\nr = 10 \nimgRecon = U[:,:r] @ np.diag(S[:r]) @ VT[:r,:]\nplt.imshow(imgRecon, cmap='gray')\nplt.title(\"Approximation; Rank = \" + str(r) ) \n```\n\nThe matrix formed when a single column of $\\mathbf{U}$ and a single row of $\\mathbf{V^T}$ are multiplied together is of rank 1. Our appoximation is the superposition of $r$ of these rank 1 matrices, each multiplied by a their respective singular value. \n\nWe can inspect the individual rank 1 matrices:\n\n\n```python\nf=plt.figure(figsize=(10,5))\nfor k in range(10):\n plt.subplot(2,5,k+1)\n plt.title('k = ' + str(k))\n rank1matrix = np.outer(U[:,k] , VT[k,:]) # Form the rank 1 matrix\n plt.imshow(rank1matrix, cmap='gray')\n plt.axis('off')\n```\n\nThe approximation of a matrix $\\mathbf{X}$ up to rank $r$ is often given as an example of data compression. In engineering, we take a slightly different perspective. If a reduced rank matrix is a good approximation to the original data, then this tells us about the underlying low rank structure of our data. It is often this low rank structure engineers can apply from one example of the problem to the next; it is both interpretable and transferrable.\n\nThe SVD is useful in a wide range of data science applications. 
In the next notebook, we will look at one of the most common, Principal Component Analysis.\n\n\n```python\n\n```\n", "meta": {"hexsha": "134c79b5b71f1318e86670c86c9da6fbc3a81365", "size": 13598, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "01 - Singular Value Decomposition.ipynb", "max_stars_repo_name": "grahampullan/datascienceBB", "max_stars_repo_head_hexsha": "87c1127d9deddd3e4e11f1a785b4f9e93a74fb19", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "01 - Singular Value Decomposition.ipynb", "max_issues_repo_name": "grahampullan/datascienceBB", "max_issues_repo_head_hexsha": "87c1127d9deddd3e4e11f1a785b4f9e93a74fb19", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "01 - Singular Value Decomposition.ipynb", "max_forks_repo_name": "grahampullan/datascienceBB", "max_forks_repo_head_hexsha": "87c1127d9deddd3e4e11f1a785b4f9e93a74fb19", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 29.8201754386, "max_line_length": 460, "alphanum_fraction": 0.549345492, "converted": true, "num_tokens": 2391, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9407897542390751, "lm_q2_score": 0.8933094060543488, "lm_q1q2_score": 0.840416336581325}} {"text": "# \ud589\ub82c \uc815\uc758\ud558\uae30\n\n\ud30c\uc774\uc36c\uc5d0\uc11c \ud589\ub82c\uc744 \uc0ac\uc6a9\ud558\ub294 \ubc29\ubc95\uc740 \uc5ec\ub7ec\uac00\uc9c0\uac00 \uc788\ub2e4. \uc6b0\uc120\uc740 numpy\uc744 \uc774\uc6a9\ud558\uc5ec \uc791\uc131\ud558\ub294 \ubc29\ubc95\uc744 \uc18c\uac1c\ud558\uace0, sympy\uc744 \uc774\uc6a9\ud558\ub294 \ubc29\ubc95\uc744 \uc18c\uac1c\ud55c\ub2e4.\n\n\n```python\nimport numpy as np\n```\n\n\uadf8\ub9ac\uace0 \uc774\uc81c \ud589\ub82c\uc744 \ub2e4\uc74c\uacfc \uac19\uc774 \uc815\uc758\ud55c\ub2e4.\n\n\n```python\ntest_a = np.array([[3,1,4],[2,-1,3]])\ntest_b = np.array([[2,1,-1],[1,-2,4]])\n```\n\n\ub9cc\ub4e4\uc5b4\uc9c4 \ud589\ub82c\uc740 2x3 \ud589\ub82c\uc774\ub77c\ub294 \uac83\uc744 \ub2e4\uc74c \uba85\ub839\uc5b4\ub85c \ud655\uc778\ud560 \uc218 \uc788\ub2e4.\n\n\n```python\nprint('a-type: {}, b-type: {}'.format(test_a.shape, test_b.shape))\n```\n\n a-type: (2, 3), b-type: (2, 3)\n\n\n# \ud589\ub82c\uc758 \uc5f0\uc0b0\n\n\ud589\ub82c\uc758 \uc5f0\uc0b0\uc5d0\ub294 \ub367\uc148, \uc2a4\uce7c\ub77c\ubc30, \uacf1\uc148\uc774 \uc788\ub2e4. \uadf8 \uc678\uc5d0\ub3c4 \uc131\ubd84\ub07c\ub9ac \uacf1\ud558\uae30 \ud150\uc11c\uacf1 \ub4f1\uc774 \uc788\ub2e4.\n\uc774 \uad50\uc548\uc740 \uc54c\uae30\uc26c\uc6b4 \uc120\ud615\ub300\uc218\ud559 \ucc45\uc744 \ubc14\ud0d5\uc73c\ub85c \uc9f0\ub2e4. \n\n\ub2e4\uc74c\uc740 p.12\uc758 \uc608\uc81c 3\uc774\ub2e4. 
\n\n\n```python\ntest_a + test_b\n```\n\n\n\n\n array([[ 5, 2, 3],\n [ 3, -3, 7]])\n\n\n\n\n```python\n2*test_a\n```\n\n\n\n\n array([[ 6, 2, 8],\n [ 4, -2, 6]])\n\n\n\n\uc601\ud589\ub82c\uc744 \ub9cc\ub4e4\uc5b4\uc8fc\ub824\uba74 \ub2e4\uc74c\uacfc \uac19\uc740 \uba85\ub839\uc5b4\ub97c \uc0ac\uc6a9\ud55c\ub2e4.\n\n\n```python\nzero=np.zeros((2,3))\nzero.shape\n```\n\n\n\n\n (2, 3)\n\n\n\np.14\uc5d0 \uc788\ub294 \uc608\uc81c4(\uacb0\ud569\ubc95\uce59\uc758 \ud655\uc778)\ub97c \ud30c\uc774\uc36c\uc73c\ub85c \ud655\uc778\ud574\ubcf4\uc790.\n\n\n```python\na = np.array([[1,-2,4],[0,3,-1]])\nb = np.array([[3,2,-2],[-2,1,5]])\nc = np.array([[6,4,9],[-1,2,1]])\n\nd = (a+b)+c\ne = a+(b+c)\n\nd == e\n```\n\n\n\n\n array([[ True, True, True],\n [ True, True, True]])\n\n\n\n\uc774\uc81c \ud589\ub82c\uc758 \uacf1\uc148\uc744 \uc815\uc758\ud558\uba74 \ub2e4\uc74c\uacfc \uac19\uc740 \uba85\ub839\uc5b4\ub97c \uc0ac\uc6a9\ud558\uba74 \ub41c\ub2e4. p.16 \uc608\uc81c 5\ub97c \ud574\ubcf4\uc790.\n\n\n```python\na = np.array([[2,1,-1],[-1,4,1]])\nb = np.array([[2,1],[0,3],[-1,1]])\n\n#np.matmul(a,b)\n\nprint('AB=\\n{}\\n\\nBA=\\n{}'.format(np.matmul(a,b),np.matmul(b,a)))\n```\n\n AB=\n [[ 5 4]\n [-3 12]]\n \n BA=\n [[ 3 6 -1]\n [-3 12 3]\n [-3 3 2]]\n\n\n\ud589\ub82c\uc758 \uacf1\uc148\uc5d0 \uad00\ud55c \uacb0\ud569\ubc95\uce59\uc774 \uc131\ub9bd\ud568\uc744 \ud30c\uc774\uc36c \uacc4\uc0b0\uc73c\ub85c \ud655\uc778\ud574\ubcf4\uc790.\n\n\n```python\na = np.array([[2,-1,4],[3,5,1]])\nb = np.array([[-5,0,-6],[1,-2,1],[0,7,4]])\nc = np.array([[2,1],[0,3],[-1,1]])\n\nnp.matmul(np.matmul(a,b),c) == np.matmul(a,np.matmul(b,c))\n```\n\n\n\n\n array([[ True, True],\n [ True, True]])\n\n\n\n## \uc5ed\ud589\ub82c \uad6c\ud558\uae30\n\n\uc5ed\ud589\ub82c\uc744 \uad6c\ud558\ub294 \uac83\ubd80\ud130\ub294 \ubcc4\ub3c4\uc758 \ub77c\uc774\ube0c\ub7ec\ub9ac\ub97c \ubd88\ub7ec\uc57c \ud55c\ub2e4.\np.43\uc758 \uc608\uc81c 18\uc744 \uc0dd\uac01\ud574\ubcf4\uc790\n\n\n```python\nimport numpy.linalg as linalg\n\na = np.array([[2,1],[3,4]])\nb = np.array([[1,1,2],[0,1,3],[1,-1,0]])\nprint('inverse of A:\\n{},\\n\\ninverse of B:\\n{}'.format(linalg.inv(a),linalg.inv(b)))\n```\n\n inverse of A:\n [[ 0.8 -0.2]\n [-0.6 0.4]],\n \n inverse of B:\n [[ 0.75 -0.5 0.25]\n [ 0.75 -0.5 -0.75]\n [-0.25 0.5 0.25]]\n\n\n\uc22b\uc790\ub97c \ubd84\uc218\ud615\uc73c\ub85c \uc4f0\uace0 \uc2f6\ub2e4\uba74 \ub2e4\uc74c\uacfc \uac19\uc740 \ub77c\uc774\ube0c\ub7ec\ub9ac\ub97c \ubd80\ub978\ub2e4.\n\n\n```python\nfrom sympy import MatrixSymbol, Matrix\n\nc=Matrix(a)\nd=Matrix(b)\n\n#print(d**-1)\n#print('inverse of A:\\n{},\\n\\ninverse of B:\\n{}'.format(c**-1,d**-1))\n```\n\n\n```python\nc**-1\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}\\frac{4}{5} & - \\frac{1}{5}\\\\- \\frac{3}{5} & \\frac{2}{5}\\end{matrix}\\right]$\n\n\n\n\n```python\nd**-1\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}\\frac{3}{4} & - \\frac{1}{2} & \\frac{1}{4}\\\\\\frac{3}{4} & - \\frac{1}{2} & - \\frac{3}{4}\\\\- \\frac{1}{4} & \\frac{1}{2} & \\frac{1}{4}\\end{matrix}\\right]$\n\n\n\n\ucc38\uace0\ub85c sympy\uc744 \uc774\uc6a9\ud574\uc11c \uc55e\uc808\uc5d0\uc11c \ud588\ub358 \uacc4\uc0b0\uc744 \uc801\uc6a9\ud574\ubcf4\uba74 \ub2e4\uc74c\uacfc \uac19\ub2e4.\n\n\n```python\na = sympy.Matrix([[2,-1,4],[3,5,1]])\nb = sympy.Matrix([[-5,0,-6],[1,-2,1],[0,7,4]])\nc = sympy.Matrix([[2,1],[0,3],[-1,1]])\n\n(a * b) * c \n```\n\n\n\n\n True\n\n\n\n\n```python\n(a * b) * c == a * (b * c)\n```\n\n\n\n\n True\n\n\n\n## \uc5f0\ub9bd\uc77c\ucc28\ubc29\uc815\uc2dd\uc758 
\ud574\n\n\ud30c\uc774\uc36c\uc744 \uc774\uc6a9\ud574\uc11c \uc5f0\ub9bd\uc77c\ucc28\ubc29\uc815\uc2dd\uc758 \ud574\ub97c \uad6c\ud574\ubcf4\uc790.\np.34\uc758 \uc608\uc81c 16\ubc88\uc774\ub2e4.\n\n\n```python\na = np.array([[1,-3,2,5],[2,3,-4,10],[-6,0,-8,0],[0,6,-8,-5]])\nb = np.array([[1],[3],[-5],[-1]])\n\n#a.shape\n#b.shape\n\nlinalg.solve(a,b)\n```\n\n\n\n\n array([[0.5 ],\n [0.33333333],\n [0.25 ],\n [0.2 ]])\n\n\n\n\ub2f5\uc744 \ubd84\uc218\ud615\uc73c\ub85c \uad6c\ud558\uace0 \uc2f6\uc73c\uba74 \uc544\ub798\uc758 \ub77c\uc774\ube0c\ub7ec\ub9ac\ub97c \ubd80\ub978\ub2e4.\n\n\n```python\nfrom sympy.solvers.solveset import linsolve\n\nlinsolve((Matrix(a),Matrix(b)))\n```\n\n\n\n\n$\\displaystyle \\left\\{\\left( \\frac{1}{2}, \\ \\frac{1}{3}, \\ \\frac{1}{4}, \\ \\frac{1}{5}\\right)\\right\\}$\n\n\n\n## \uae30\uc57d\ud589\uc0ac\ub2e4\ub9ac\uaf34 \ub9cc\ub4e4\uae30 \n\n\uc774\uc81c \uc8fc\uc5b4\uc9c4 \ud589\ub82c\uc758 \uae30\uc57d\ud589\uc0ac\ub2e4\ub9ac\uaf34\uc744 \ub9cc\ub4e4\uc5b4\ubcf4\uc790.\n\uc774\ub97c \uc704\ud574\uc11c\ub294 \ubcc4\ub3c4\uc758 \ub77c\uc774\ube0c\ub7ec\ub9ac\ub85c sympy\ub97c \ubd88\ub7ec\uc57c\ud55c\ub2e4.\n\n\uac19\uc740 \uc608\uc81c 16\ubc88\uc744 \uc0dd\uac01\ud574\ubcf4\uc790.\n\n\n```python\nimport sympy\n\na = np.array([[1,-3,2,5,1],[2,3,-4,10,3],[-6,0,-8,0,-5],[0,6,-8,-5,-1]])\n\nsympy.Matrix(a).rref()\n```\n\n\n\n\n (Matrix([\n [1, 0, 0, 0, 1/2],\n [0, 1, 0, 0, 1/3],\n [0, 0, 1, 0, 1/4],\n [0, 0, 0, 1, 1/5]]),\n (0, 1, 2, 3))\n\n\n\n## \ud589\ub82c\uc2dd \uad6c\ud558\uae30\n\n\n```python\nM = sympy.Matrix([[1, 0, 1], [2, -1, 3], [4, 3, 2]])\nM.det()\n```\n\n\n\n\n$\\displaystyle -1$\n\n\n", "meta": {"hexsha": "754671fb58aaffea67864303966a721130530691", "size": 11103, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "assets/files/Matrix.ipynb", "max_stars_repo_name": "willkwon-math/willkwon-math.github.io", "max_stars_repo_head_hexsha": "aace3fc5ebadba00616699e9b2299770cf823d33", "max_stars_repo_licenses": ["0BSD"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "assets/files/Matrix.ipynb", "max_issues_repo_name": "willkwon-math/willkwon-math.github.io", "max_issues_repo_head_hexsha": "aace3fc5ebadba00616699e9b2299770cf823d33", "max_issues_repo_licenses": ["0BSD"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "assets/files/Matrix.ipynb", "max_forks_repo_name": "willkwon-math/willkwon-math.github.io", "max_forks_repo_head_hexsha": "aace3fc5ebadba00616699e9b2299770cf823d33", "max_forks_repo_licenses": ["0BSD"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 19.2426343154, "max_line_length": 211, "alphanum_fraction": 0.4143925065, "converted": true, "num_tokens": 2044, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9407897492587141, "lm_q2_score": 0.8933093961129794, "lm_q1q2_score": 0.8404163227795832}} {"text": "# Understand numpy indexing\n\n\n```python\n#%matplotlib widget\n%matplotlib inline\n%load_ext autoreload\n%autoreload 2\n```\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport sympy\n```\n\n## A few ways to get test numpy arrays\n\n\n```python\nnp.arange(3), np.arange(4,8), np.arange(5,1,-2)\n```\n\n\n\n\n (array([0, 1, 2]), array([4, 5, 6, 7]), array([5, 3]))\n\n\n\nFor experiments with multiplication, arrays of primes may be helpful:\n\n\n```python\ndef arangep(n, starting_index=0):\n sympy.sieve.extend_to_no(starting_index + n)\n return np.array(sympy.sieve._list[starting_index:starting_index + n])\n```\n\n\n```python\narangep(5), arangep(4,2)\n```\n\n\n\n\n (array([ 2, 3, 5, 7, 11]), array([ 5, 7, 11, 13]))\n\n\n\n# Shapes and Indexing\n\nIndexing [basics](https://numpy.org/devdocs/user/basics.indexing.html#basics-indexing) and [details](https://numpy.org/devdocs/reference/arrays.indexing.html#arrays-indexing)\n\n\n```python\na = np.arange(2*3*4).reshape(2,3,4); print(a)\n```\n\n [[[ 0 1 2 3]\n [ 4 5 6 7]\n [ 8 9 10 11]]\n \n [[12 13 14 15]\n [16 17 18 19]\n [20 21 22 23]]]\n\n\nIndexing is row-major order (smallest-address-delta last) (C-style):\n\n\n```python\na[0,0,1], a[0,1,0], a[1,0,0]\n```\n\n\n\n\n (1, 4, 12)\n\n\n\n\n```python\na[0], a[0,0], a[0,0,0]\n```\n\n\n\n\n (array([[ 0, 1, 2, 3],\n [ 4, 5, 6, 7],\n [ 8, 9, 10, 11]]),\n array([0, 1, 2, 3]),\n 0)\n\n\n\n\n```python\na[0], a[0][0], a[0][0][0]\n```\n\n\n\n\n (array([[ 0, 1, 2, 3],\n [ 4, 5, 6, 7],\n [ 8, 9, 10, 11]]),\n array([0, 1, 2, 3]),\n 0)\n\n\n\n\n```python\na.flat[7:12]\n```\n\n\n\n\n array([ 7, 8, 9, 10, 11])\n\n\n\n# Multiplicative-type operations\n\n\n```python\na = arangep(2)\nb = arangep(2,2)\na,b\n```\n\n\n\n\n (array([2, 3]), array([5, 7]))\n\n\n\nBinary scalar operations on vectors just map\n\n\n```python\na+1, a*2, a+b, a*b, b/a, b%a\n```\n\n\n\n\n (array([3, 4]),\n array([4, 6]),\n array([ 7, 10]),\n array([10, 21]),\n array([2.5 , 2.33333333]),\n array([1, 1]))\n\n\n\n[`dot`](https://numpy.org/devdocs/reference/generated/numpy.dot.html) is \"alternative matrix product with different broadcasting rules\"\n\n\n```python\na.dot(b), b.dot(a)\n```\n\n\n\n\n (31, 31)\n\n\n\n\n```python\nm = arangep(4,4).reshape(2,2); m\n```\n\n\n\n\n array([[11, 13],\n [17, 19]])\n\n\n\n## Dot product\n\nMatrix dot vector produces vector of dot products of rows of the matrix with the vector:\n\n\n```python\nm.dot(a), a.dot(m[0]), a.dot(m[1]), m[0], m[1]\n```\n\n\n\n\n (array([61, 91]), 61, 91, array([11, 13]), array([17, 19]))\n\n\n\nvector dot matrix produces vector of dot products of columns of matrix with the vector:\n\n\n```python\na.dot(m), a.dot(m[:,0]), a.dot(m[:,1]), m[:,0], m[:,1]\n```\n\n\n\n\n (array([73, 83]), 73, 83, array([11, 17]), array([13, 19]))\n\n\n\n`@` is infix [matrix multiplication](https://numpy.org/devdocs/reference/generated/numpy.matmul.html#numpy.matmul)\n\n\n```python\na, m, m @ a, a @ m, m.T @ a\n```\n\n\n\n\n (array([2, 3]),\n array([[11, 13],\n [17, 19]]),\n array([61, 91]),\n array([73, 83]),\n array([73, 83]))\n\n\n\nRight-multiplication by a matrix is equivalent to left-multiplication by its transpose:\n\n\n```python\na @ m, m.T @ a, a @ m.T, m @ a\n```\n\n\n\n\n (array([73, 83]), array([73, 83]), array([61, 91]), array([61, 91]))\n\n\n\n### \"Vectorizing\" the dot product\ne.g. when we batch inputs to the network. 
\\\nImagine `a` and `b` are both to be run through a network which does multiplication by `m`\n\n\n```python\nc = 2*a + b\na, b, c, a @ m, b @ m, c @ m\n```\n\n\n\n\n (array([2, 3]),\n array([5, 7]),\n array([ 9, 13]),\n array([73, 83]),\n array([174, 198]),\n array([320, 364]))\n\n\n\nThe convenient representation *(see below)*, is for the input vectors to be contiguous and adjacent in memory, as would happen if you read them into a memoryview of an array, and reshaped it appropriately, e.g.:\n\n\n```python\nX = np.array([2,3, 5,7, 9,13]).reshape(-1, 2); X\n```\n\n\n\n\n array([[ 2, 3],\n [ 5, 7],\n [ 9, 13]])\n\n\n\n\n```python\nX @ m\n```\n\n\n\n\n array([[ 73, 83],\n [174, 198],\n [320, 364]])\n\n\n\n\n```python\nX @ m + np.array([1000, 2000])\n```\n\n\n\n\n array([[1073, 2083],\n [1174, 2198],\n [1320, 2364]])\n\n\n\n\n```python\nX.shape\n```\n\n\n\n\n (3, 2)\n\n\n\n## Einstein summation notation\n\nNumpy provides [Einstein summation](https://mathworld.wolfram.com/EinsteinSummation.html) operations with [einsum](https://numpy.org/devdocs/reference/generated/numpy.einsum.html)\n1. Repeated indices are implicitly summed over.\n1. Each index can appear at most twice in any term.\n1. Each term must contain identical non-repeated indices.\n\n\n```python\nes = np.einsum\n```\n\n $$a_{ik}a_{ij} \\equiv \\sum_{i} a_{ik}a_{ij}$$\n\n$$M_{ij}v_j=\\sum_{j}M_{ij}v_j$$\n\n\n```python\nes('ij,j', m, a), es('ij,i', m, a)\n```\n\n\n\n\n (array([61, 91]), array([73, 83]))\n\n\n\n\n```python\nes('j,ij', a, m), es('i,ij', a, m)\n```\n\n\n\n\n (array([61, 91]), array([73, 83]))\n\n\n\nScalar multiplication bei\n\n\n```python\nall(es('ij,j', m, a) == es('j,ij', a, m))\n```\n\n\n\n\n True\n\n\n\n### Lorem Ipsum\n\n\n```python\nm2 = np.zeros((2,3), np.int); m2\n```\n\n\n\n\n array([[0, 0, 0],\n [0, 0, 0]])\n\n\n\n\n```python\nm2[1] = np.arange(3); m2\n```\n\n\n\n\n array([[0, 0, 0],\n [0, 1, 2]])\n\n\n\n\n```python\nm3 = arangep(8).reshape(4,2).T; m3\n```\n\n\n\n\n array([[ 2, 5, 11, 17],\n [ 3, 7, 13, 19]])\n\n\n\n\n```python\nm3[:,0]\n```\n\n\n\n\n array([2, 3])\n\n\n\n\n```python\nm @ m3[:,0]\n```\n\n\n\n\n array([61, 91])\n\n\n\n\n```python\nh = m @ m3; h\n```\n\n\n\n\n array([[ 61, 146, 290, 434],\n [ 91, 218, 434, 650]])\n\n\n\n\n```python\nb, b[...,np.newaxis]\n```\n\n\n\n\n (array([5, 7]),\n array([[5],\n [7]]))\n\n\n\n\n```python\nh + b[...,np.newaxis]\n```\n\n\n\n\n array([[ 66, 151, 295, 439],\n [ 98, 225, 441, 657]])\n\n\n\n## Convenient representations\n\nSuppose you have many __x__ to run through a net. What is the convenient representation?\n\nConsider a two-input net, e.g. the XOR net. We want to vectorize the evaluation of the net, and its backprop. In the case of XOR the entire input domain is four vectors: { (0,0), (0,1), (1,0), (1,1) }:\n\n\n```python\nX = np.array([0,0, 0,1, 1,0, 1,1]).reshape(-1,2); X\n```\n\n\n\n\n array([[0, 0],\n [0, 1],\n [1, 0],\n [1, 1]])\n\n\n\nThis is a convenient ordering for input, with each input vector contiguous in memory. But it's not in the form of column vectors for the classical left-multiplication by a transformation matrix to yield a column matrix product.\n\n\n```python\nm = np.arange(4).reshape(2,2) + 1; m\n```\n\n\n\n\n array([[1, 2],\n [3, 4]])\n\n\n\n\n```python\nm @ np.array([1, 2]).reshape(2,1)\n```\n\n\n\n\n array([[ 5],\n [11]])\n\n\n\nWe can transpose the input before left-multiplying ...\n\n\n```python\nm @ X.T\n```\n\n\n\n\n array([[0, 2, 1, 3],\n [0, 4, 3, 7]])\n\n\n\n... 
and transpose it back:\n\n\n```python\nY = (m @ X.T).T; Y\n```\n\n\n\n\n array([[0, 0],\n [2, 4],\n [1, 3],\n [3, 7]])\n\n\n\nOr we can be less pedantic about expressing the matrix multiply:\n\n\n```python\nX @ m.T\n```\n\n\n\n\n array([[0, 0],\n [2, 4],\n [1, 3],\n [3, 7]])\n\n\n\nIn Einstein summation notation:\n\n\n```python\nes('ij,kj', X, m)\n```\n\n\n\n\n array([[0, 0],\n [2, 4],\n [1, 3],\n [3, 7]])\n\n\n\nIf we really require the matrix on the left, we can index thus:\n\n\n```python\nes('ij,kj->ki', m, X)\n```\n\n\n\n\n array([[0, 0],\n [2, 4],\n [1, 3],\n [3, 7]])\n\n\n\n---\n### What way is faster?\n\n\n```python\ntimeit(X @ m.T)\n```\n\n 1.05 \u00b5s \u00b1 5.03 ns per loop (mean \u00b1 std. dev. of 7 runs, 1000000 loops each)\n\n\n\n```python\ntimeit(es('ij,kj->ki', m, X))\n```\n\n 2.16 \u00b5s \u00b1 9.84 ns per loop (mean \u00b1 std. dev. of 7 runs, 100000 loops each)\n\n\n\n```python\ntm = m.T\n```\n\n\n```python\ntimeit(X @ tm)\n```\n\n 897 ns \u00b1 9.11 ns per loop (mean \u00b1 std. dev. of 7 runs, 1000000 loops each)\n\n\nNo surprise, fastest is to have the transposed matrix ready. No surprise that the Einstein summation is slower, as it requires formulating a loop from the string of indexes. But what if the input data is much larger? E.g.\n\n\n```python\nXlarge = np.arange(2*10000).reshape(10000,2); Xlarge\n```\n\n\n\n\n array([[ 0, 1],\n [ 2, 3],\n [ 4, 5],\n ...,\n [19994, 19995],\n [19996, 19997],\n [19998, 19999]])\n\n\n\n\n```python\ntimeit(Xlarge @ tm)\n```\n\n 86.9 \u00b5s \u00b1 142 ns per loop (mean \u00b1 std. dev. of 7 runs, 10000 loops each)\n\n\n\n```python\ntimeit(es('ij,kj->ki', m, Xlarge))\n```\n\n 126 \u00b5s \u00b1 126 ns per loop (mean \u00b1 std. dev. of 7 runs, 10000 loops each)\n\n\nThe parsing of the index string and formulating a plan is maybe 1.6 \u00b5s, but the loop is \n\n\n```python\n(156 + 1.4 - 3.01)/94.2\n```\n\n\n\n\n 1.63895966029724\n\n\n\n64% slower.\n\n---\n\nAdding another vector to each result vector of the multiply:\n\n\n```python\na, a + Y, Y + a\n```\n\n\n\n\n (array([2, 3]),\n array([[ 2, 3],\n [ 4, 7],\n [ 3, 6],\n [ 5, 10]]),\n array([[ 2, 3],\n [ 4, 7],\n [ 3, 6],\n [ 5, 10]]))\n\n\n\nApplying a function to each result:\n\n\n```python\nrelu = np.vectorize(lambda x: max(0,x))\n```\n\nTry it out:\n\n\n```python\nt = arangep(10).reshape(5,2) - 12; t\n```\n\n\n\n\n array([[-10, -9],\n [ -7, -5],\n [ -1, 1],\n [ 5, 7],\n [ 11, 17]])\n\n\n\n\n```python\nrelu(t)\n```\n\n\n\n\n array([[ 0, 0],\n [ 0, 0],\n [ 0, 1],\n [ 5, 7],\n [11, 17]])\n\n\n\n---\n\n\n```python\nX @ m.T\n```\n\n\n\n\n array([[0, 0],\n [2, 4],\n [1, 3],\n [3, 7]])\n\n\n\n\n```python\nes('ij,kj', X, m)\n```\n\n\n\n\n array([[0, 0],\n [2, 4],\n [1, 3],\n [3, 7]])\n\n\n\n\n```python\nX, m\n```\n\n\n\n\n (array([[0, 0],\n [0, 1],\n [1, 0],\n [1, 1]]),\n array([[1, 2],\n [3, 4]]))\n\n\n\n___\n\n### Outer product\n\n\n```python\na, b = arangep(2), arangep(3,2)\na, b\n```\n\n\n\n\n (array([2, 3]), array([ 5, 7, 11]))\n\n\n\n\n```python\nes('i,j', a, b), es('j,i', a, b), np.outer(a, b), np.outer(b, a)\n```\n\n\n\n\n (array([[10, 14, 22],\n [15, 21, 33]]),\n array([[10, 15],\n [14, 21],\n [22, 33]]),\n array([[10, 14, 22],\n [15, 21, 33]]),\n array([[10, 15],\n [14, 21],\n [22, 33]]))\n\n\n\n\n```python\na, b = arangep(4).reshape(2,2), arangep(4,4).reshape(2,2)\na,b\n```\n\n\n\n\n (array([[2, 3],\n [5, 7]]),\n array([[11, 13],\n [17, 19]]))\n\n\n\n$$ \\sum_j{outer(a[:,j],b[:,j])}$$\n\n\n```python\nes('...i,j', a, b)\n```\n\n# Vectorized dot product\n\n\n```python\nt = 
np.arange(4*3).reshape(-1,3)\nt\n```\n\n\n\n\n array([[ 0, 1, 2],\n [ 3, 4, 5],\n [ 6, 7, 8],\n [ 9, 10, 11]])\n\n\n\n\n```python\n[t[i].dot(t[i]) for i in range(4)]\n```\n\n\n\n\n [5, 50, 149, 302]\n\n\n\n\n```python\nes('...i,...i', t, t)\n```\n\n\n\n\n array([ 5, 50, 149, 302])\n\n\n\n\n```python\nes('...i,...i', t[1:], t[:-1])\n```\n\n\n\n\n array([ 14, 86, 212])\n\n\n\n`np.arccos` range is $[0,\\pi)$. Want to convert to $[-\\pi/2, \\pi/2)$.\n\n\n```python\na = np.arange(-5,6)*np.pi/5\nca = np.cos(a)\naca = np.arccos(ca)\na, ca, aca\n```\n\n\n\n\n (array([-3.14159265, -2.51327412, -1.88495559, -1.25663706, -0.62831853,\n 0. , 0.62831853, 1.25663706, 1.88495559, 2.51327412,\n 3.14159265]),\n array([-1. , -0.80901699, -0.30901699, 0.30901699, 0.80901699,\n 1. , 0.80901699, 0.30901699, -0.30901699, -0.80901699,\n -1. ]),\n array([3.14159265, 2.51327412, 1.88495559, 1.25663706, 0.62831853,\n 0. , 0.62831853, 1.25663706, 1.88495559, 2.51327412,\n 3.14159265]))\n\n\n\n\n```python\nb = a >= (np.pi/2)\nb\n```\n\n\n\n\n array([False, False, False, False, False, True, True, True, True,\n True])\n\n\n\n\n```python\nb * np.pi/2\n```\n\n\n\n\n array([0. , 0. , 0. , 0. , 0. ,\n 1.57079633, 1.57079633, 1.57079633, 1.57079633, 1.57079633])\n\n\n\n\n```python\na[a>=np.pi/2] -= np.pi\na\n```\n\n\n\n\n array([ 0. , 0.31415927, 0.62831853, 0.9424778 , 1.25663706,\n -1.57079633, -1.25663706, -0.9424778 , -0.62831853, -0.31415927])\n\n\n\n\n```python\na = np.arange(8)\na, a%3, a%3>0\n```\n\n\n\n\n (array([0, 1, 2, 3, 4, 5, 6, 7]),\n array([0, 1, 2, 0, 1, 2, 0, 1]),\n array([False, True, True, False, True, True, False, True]))\n\n\n\n\n```python\na%3>0\n```\n\n# END\n---\n\n\n```python\nt = np.array([0,0, 0,1, 1,0, 1,1]).reshape(4,2); t\n```\n\n\n```python\nf = lambda a, b: 1 if (a > 0.5) ^ (b > 0.5) else 0\n```\n\n\n```python\nf(1,0), f(1,1)\n```\n\n\n```python\nf(t[1,0], t[1,1])\n```\n\n\n```python\n[f(x[0], x[1]) for x in t]\n```\n\n\n```python\ndef exor(a, b):\n return 1 if (a > 0.5) ^ (b > 0.5) else 1\n```\n\n\n```python\n#np.vectorize(exor, signature='(i)->()')(t)\n```\n\n\n```python\nf2 = lambda v: 1 if (v[0] > 0.5) ^ (v[1] > 0.5) else 0\n```\n\n\n```python\n#np.vectorize(f2)(t)\n```\n\n\n```python\nnp.vectorize(f2, signature='(i)->()')(t)\n```\n\n\n```python\n[a for a in t[0]]\n```\n\n---\n\n\n```python\na = np.arange(25).reshape(5,5)\nb = np.arange(5)\nc = np.arange(6).reshape(2,3)\n```\n\n\n```python\na,b,c\n```\n\n\n```python\nnp.einsum('ii', a)\n```\n\n\n```python\nnp.einsum('ii->i', a)\n```\n\n\n```python\nnp.trace(a)\n```\n\n\n```python\nnp.einsum('ji', a)\n```\n\n\n```python\nnp.einsum('ji,i', a, b)\n```\n\n\n```python\na.dot(b)\n```\n\n\n```python\na[:,0]\n```\n\n\n```python\na[:,0].dot(b)\n```\n\n\n```python\na[:,1]\n```\n\n\n```python\nd = np.arange(125).reshape(5,5,5)\n```\n\n\n```python\nnp.einsum('iii', d)\n```\n\n\n```python\nsum([d[i][i][i] for i in range(5)])\n```\n\n\n```python\nnp.einsum('iij',d)\n```\n\n\n```python\nnp.einsum('iiz', d)\n```\n\n\n```python\n[sum([d[i][i][j] for i in range(5)]) for j in range(5)]\n```\n\n\n```python\nsum(a[:])\n```\n\n\n```python\na[0]\n```\n\n\n```python\ntimeit(np.einsum('iii', d))\n```\n\n\n```python\nes = np.einsum\n```\n\n\n```python\nes('ijk,kji',d,d)\n```\n\n\n```python\ntimeit(es('iii', d))\n```\n\n\n```python\nes('i,ij', b, a)\n```\n\n\n```python\nes('ij', a)\n```\n\n\n```python\nes('i', b)\n```\n\n\n```python\ng = 
np.arange(4).reshape(2,2)\n```\n\n\n```python\ng\n```\n\n\n```python\nes('ij,jk',g,g)\n```\n\n\n```python\ng@g\n```\n\n\n```python\ng[:]\n```\n\n\n```python\nh = np.arange(2); h\n```\n\n\n```python\nh.dot(g)\n```\n\n\n```python\nes('i,ij', h, g)\n```\n\n\n```python\ng.dot(h)\n```\n\n\n```python\nes('ji,i', g, h)\n```\n\n\n```python\nes('ij,j', g, h)\n```\n\n\n```python\ng[0,1]\n```\n\n\n```python\nnp.array(1)\n```\n\n\n```python\nnp.array([1])\n```\n\n\n```python\nnp.array(2)\n```\n\n\n```python\nnp.array([1,2])\n```\n\n\n```python\nnp.array([2])\n```\n\n\n```python\nnp.array(0)\n```\n\n\n```python\nnp.array(0).shape\n```\n\n\n```python\nnp.array([0]).shape\n```\n\n\n```python\nnp.array(0)+1\n```\n\n\n```python\nnp.array([0])+1\n```\n\n\n```python\nnp.array(3).dot(np.array(5))\n```\n\n\n```python\nnp.array([3]).dot(np.array(5))\n```\n\n\n```python\nnp.array([3]).dot(np.array([5]))\n```\n\n\n```python\nes('i,i', np.array([3]), np.array([5]))\n```\n\n\n```python\n#es('i,i', np.array(3), np.array(5))\n```\n\n\n```python\nes('', np.array(3), np.array(5))\n```\n\n\n```python\nint(np.array(3))\n```\n\n\n```python\nint(np.array([3]))\n```\n\n\n```python\na = np.arange(4).reshape(2,2) + 1; print(a)\n```\n\n\n```python\nb = np.arange(2) + 1; print(b)\n```\n\n\n```python\nb.dot(a)\n```\n\n\n```python\nes('ij, jk', a, np.array([[1,0],[1,0]]))\n```\n\n\n```python\na[:,0]\n```\n\n\n```python\na[:,1]\n```\n\n\n```python\nsum(a[:,0])\n```\n\n\n```python\na.T\n```\n\n\n```python\nes('...j->...', a)\n```\n\n\n```python\na,b\n```\n\n\n```python\na.dot(b), b.dot(a)\n```\n\n\n```python\nb.shape\n```\n\n\n```python\nc = b.reshape(2,1); c\n```\n\n\n```python\nb.dot(c), b@c\n```\n\n\n```python\nes('...i,i...', b, c)\n```\n\n\n```python\ntimeit(b.dot(c))\n```\n\n\n```python\ntimeit(b@c)\n```\n\n\n```python\ntimeit(es('...i,i...', b, c))\n```\n\n\n```python\ntimeit(es('i,i...', b, c))\n```\n\n\n```python\na,b,c\n```\n\n\n```python\na@b\n```\n\n\n```python\na@c\n```\n\n\n```python\nb.shape, c.shape\n```\n\n\n```python\nes('ij,j...', a,c)\n```\n\n\n```python\nes('ij,j', a, b)\n```\n\n\n```python\nes('ij,j...', a, b)\n```\n\n\n```python\nes('ij,j->i', a, b)\n```\n\n\n```python\nes('ij,j->j', a, b)\n```\n\n\n```python\nes('ij,i...', a,c)\n```\n\n\n```python\nes('ij,j...', a, a)\n```\n\n\n```python\nes('...j,ij', a, a)\n```\n\n\n```python\nXd = np.array([0,0,1,0,0,1,1,1]).reshape(2,4); Xd\n```\n\n\n```python\na\n```\n\n\n```python\na@Xd\n```\n\n\n```python\nXd.reshape(4,2)\n```\n\n\n```python\nt = np.arange(8).reshape(4,2); t\n```\n\n\n```python\na @ t.T\n```\n\n\n```python\nt.T\n```\n\n\n```python\nEllipsis\n```\n\n\n```python\nb\n```\n\n\n```python\nb[:, np.newaxis]\n```\n\n\n```python\nt\n```\n\n\n```python\nt[np.newaxis]\n```\n\n\n```python\nx = np.arange(3); x\n```\n\n\n```python\nx[:,np.newaxis] + x[np.newaxis,:]\n```\n\n\n```python\nx[:,np.newaxis] * x[np.newaxis,:]\n```\n\n\n```python\nx[np.newaxis,:] * x[:,np.newaxis]\n```\n\n\n```python\nx[:,np.newaxis], x[np.newaxis,:]\n```\n\n\n```python\nt.dot(np.arange(2) + 1)\n```\n\n\n```python\n(np.arange(2) + 1).dot(t)\n```\n\n\n```python\nt,a\n```\n\n\n```python\nt.dot(a)\n```\n\n\n```python\nnp.prime\n```\n\n\n```python\nimport sympy\n```\n\n\n```python\nnp.array(list(sympy.sieve.primerange(2000,2050)))\n```\n\n\n```python\npa = lambda n: np.array([sympy.prime(i+1) for i in range(n)])\n```\n\n\n```python\npa(50)\n```\n\n\n```python\ntp = pa(1000)\n```\n\n\n```python\ntp[-1]\n```\n\n\n```python\ntp2 = 
pa(10000)\n```\n\n\n```python\nnp.sign(np.arange(5)-2)\n```\n\n\n```python\nnp.array([1]*2).shape\n```\n\n\n```python\nnp.ones(np.array([3,5]).shape)\n```\n\n\n```python\nM=np.arange(4).reshape(2,2)\nb=np.arange(2)+1\nx=np.arange(2)+5\n```\n\n\n```python\nM@x + b, (M@x) + b\n```\n\n\n```python\na, b = arangep(2), arangep(3,2)\na, b\n```\n\n\n\n\n (array([2, 3]), array([ 5, 7, 11]))\n\n\n\n\n```python\nt = es('i,j', a, b)\nt\n```\n\n\n\n\n array([[10, 14, 22],\n [15, 21, 33]])\n\n\n\n\n```python\nt[0].dot(t[0]), t[1].dot(t[1]), sum([t[0].dot(t[0]), t[1].dot(t[1])]), es('ij,ij', t, t)\n```\n\n\n\n\n (780, 1755, 2535, 2535)\n\n\n\n\n```python\n#es('ij,ij', t, t)\nnp.atleast_2d(t)\n```\n\n\n\n\n array([[10, 14, 22],\n [15, 21, 33]])\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "85a67d7a3e645990736550a1aac47c1acca4d991", "size": 58462, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "nbs/matrix_operations.ipynb", "max_stars_repo_name": "pramasoul/aix", "max_stars_repo_head_hexsha": "98333b875f6c6cda6dee86e6eab02c5ddc622543", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "nbs/matrix_operations.ipynb", "max_issues_repo_name": "pramasoul/aix", "max_issues_repo_head_hexsha": "98333b875f6c6cda6dee86e6eab02c5ddc622543", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-11-29T03:44:00.000Z", "max_issues_repo_issues_event_max_datetime": "2021-12-19T05:34:04.000Z", "max_forks_repo_path": "nbs/matrix_operations.ipynb", "max_forks_repo_name": "pramasoul/aix", "max_forks_repo_head_hexsha": "98333b875f6c6cda6dee86e6eab02c5ddc622543", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 18.5888712242, "max_line_length": 1154, "alphanum_fraction": 0.4440491259, "converted": true, "num_tokens": 6743, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9149009503523291, "lm_q2_score": 0.9184802378888549, "lm_q1q2_score": 0.8403184425243466}} {"text": "```python\nimport numpy as np\nimport sympy as sy\nfrom sympy import init_printing\ninit_printing()\nimport control.matlab as cm\n```\n\n# The Fibonacci sequence\n\n\\begin{equation}\ny(k+2) - y(k+1) - y(k) = 0\n\\end{equation}\n\n\n\n\n```python\nz = sy.symbols('z')\ny0, y1 = sy.symbols('y0, y1', real=True)\n```\n\n\n```python\nden = z*z - z -1\nnum = z*z\nY = num/sy.factor(den)\nsy.factor(den)\n```\n\n\n```python\nsy.factor(Y)\n```\n\n\n```python\nsy.apart(Y)\n```\n\n\n```python\nsy.apart( z/den)\n```\n\n\n```python\np1 = (1 + sy.sqrt(5))/2\np2 = (1 - sy.sqrt(5))/2\nsy.expand((z-p1)*(z-p2))\n```\n\n\n```python\nY = z/( (z-p1)*(z-p2))\nY\n```\n\n\n```python\nsy.apart(Y)\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "33c414fa2617f457486726e25f65900af6839598", "size": 10604, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "discrete-time-systems/notebooks/Fibonacci.ipynb", "max_stars_repo_name": "kjartan-at-tec/mr2007-computerized-control", "max_stars_repo_head_hexsha": "16e35f5007f53870eaf344eea1165507505ab4aa", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-11-07T05:20:37.000Z", "max_stars_repo_stars_event_max_datetime": "2020-12-22T09:46:13.000Z", "max_issues_repo_path": "discrete-time-systems/notebooks/Fibonacci.ipynb", "max_issues_repo_name": "alfkjartan/control-computarizado", "max_issues_repo_head_hexsha": "5b9a3ae67602d131adf0b306f3ffce7a4914bf8e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2020-06-12T20:44:41.000Z", "max_issues_repo_issues_event_max_datetime": "2020-06-12T20:49:00.000Z", "max_forks_repo_path": "discrete-time-systems/notebooks/Fibonacci.ipynb", "max_forks_repo_name": "alfkjartan/control-computarizado", "max_forks_repo_head_hexsha": "5b9a3ae67602d131adf0b306f3ffce7a4914bf8e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-09-25T20:02:23.000Z", "max_forks_repo_forks_event_max_datetime": "2019-09-25T20:02:23.000Z", "avg_line_length": 39.7153558052, "max_line_length": 1956, "alphanum_fraction": 0.692663146, "converted": true, "num_tokens": 245, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9390248140158416, "lm_q2_score": 0.8947894738180211, "lm_q1q2_score": 0.8402295192353}} {"text": "# `sympy`\n\n\n\n`scipy` \uacc4\uc5f4\uc740 [`sympy`](https://www.sympy.org)\ub77c\ub294 *\uae30\ud638 \ucc98\ub9ac\uae30*\ub3c4 \ud3ec\ud568\ud558\uace0 \uc788\ub2e4.
    \n`scipy` stack also includes [`sympy`](https://www.sympy.org), a *symbolic processor*.\n\n\n\n2006\ub144 \uc774\ud6c4 2019 \uae4c\uc9c0 800\uba85\uc774 \ub118\ub294 \uac1c\ubc1c\uc790\uac00 \uc791\uc131\ud55c \ucf54\ub4dc\ub97c \uc81c\uacf5\ud558\uc600\ub2e4.
    \nSince 2006, more than 800 developers have contributed code as of 2019.\n\n\n\n## \uae30\ud638 \uc5f0\uc0b0 \uc608
    Examples of symbolic processing\n\n\n\n`sympy` \ubaa8\ub4c8\uc744 `sy` \ub77c\ub294 \uc774\ub984\uc73c\ub85c \ubd88\ub7ec\uc628\ub2e4.
    Import the `sympy` module under the name `sy`.\n\n\n\n\n```python\nimport sympy as sy\nsy.init_printing()\n\n\n```\n\n\ube44\uad50\ub97c \uc704\ud574 `numpy` \ubaa8\ub4c8\ub3c4 \ubd88\ub7ec\uc628\ub2e4.
    \nImport `numpy` module to compare.\n\n\n\n\n```python\nimport numpy as np\n\n\n```\n\n### \uc81c\uacf1\uadfc
    Square root\n\n\n\n10\uc758 \uc81c\uacf1\uadfc\uc744 \uad6c\ud574\ubcf4\uc790.
    Let's find the square root of ten.\n\n\n\n\n```python\nnp.sqrt(10)\n\n\n```\n\n\n```python\nsy.sqrt(10)\n\n\n```\n\n10\uc758 \uc81c\uacf1\uadfc\uc744 \uc81c\uacf1\ud574\ubcf4\uc790.
    Let's square the square root of ten.\n\n\n\n\n```python\nnp.sqrt(10) ** 2\n\n\n```\n\n\n```python\nsy.sqrt(10) ** 2\n\n\n```\n\n\uc704 \uacb0\uacfc\uc758 \ucc28\uc774\uc5d0 \ub300\ud574 \uc5b4\ub5bb\uac8c \uc0dd\uac01\ud558\ub294\uac00?
    \nWhat do you think about the differences between the results above?\n\n\n\n### \ubd84\uc218
    Fractions\n\n\n\n10 / 3 \uc744 \uc0dd\uac01\ud574\ubcf4\uc790.
    Let's think about 10/3.\n\n\n\n\n```python\nten_over_three = 10 / 3\n\n\n```\n\n\n```python\nten_over_three\n\n\n```\n\n\n```python\nten_over_three * 3\n\n\n```\n\n\n```python\nimport fractions\n\n\n```\n\n\n```python\nfr_ten_over_three = fractions.Fraction(10, 3)\n\n\n```\n\n\n```python\nfr_ten_over_three\n\n\n```\n\n\n```python\nfr_ten_over_three * 3\n\n\n```\n\n\n```python\nsy_ten_over_three = sy.Rational(10, 3)\n\n\n```\n\n\n```python\nsy_ten_over_three\n\n\n```\n\n\n```python\nsy_ten_over_three * 3\n\n\n```\n\n\uc704 \uacb0\uacfc\uc758 \ucc28\uc774\uc5d0 \ub300\ud574 \uc5b4\ub5bb\uac8c \uc0dd\uac01\ud558\ub294\uac00?
    \nWhat do you think about the differences between the results above?\n\n\n\n### \ubcc0\uc218\ub97c \ud3ec\ud568\ud558\ub294 \uc218\uc2dd
    Expressions with variables\n\n\n\n\uc0ac\uc6a9\ud560 \ubcc0\uc218\ub97c \uc815\uc758\ud55c\ub2e4.
    Define variables to use.\n\n\n\n\n```python\na, b, c, x = sy.symbols('a b c x')\ntheta, phi = sy.symbols('theta phi')\n\n\n```\n\n\ubcc0\uc218\ub4e4\uc744 \ud55c\ubc88 \uc0b4\ud3b4\ubcf4\uc790.
    Let's take a look at the variables\n\n\n\n\n```python\na, b, c, x\n\n\n```\n\n\n```python\ntheta, phi\n\n\n```\n\n\ubcc0\uc218\ub97c \uc870\ud569\ud558\uc5ec \uc0c8\ub85c\uc6b4 \uc218\uc2dd\uc744 \ub9cc\ub4e4\uc5b4 \ubcf4\uc790.
    \nLet's make equations using variables.\n\n\n\n\n```python\ny = a * x + b\n\n\n```\n\n\n```python\ny\n\n\n```\n\n\n```python\nz = a * x * x + b * x + c\n\n\n```\n\n\n```python\nz\n\n\n```\n\n\n```python\nw = a * sy.sin(theta) ** 2 + b\n\n\n```\n\n\n```python\nw\n\n\n```\n\n\n```python\np = (x - a) * (x - b) * (x - c)\n\n\n```\n\n\n```python\np\n\n\n```\n\n\n```python\nsy.expand(p, x)\n\n\n```\n\n\n```python\nsy.collect(_, x)\n\n\n```\n\n### \ubbf8\uc801\ubd84
    Calculus\n\n\n\n\n```python\nz.diff(x)\n\n\n```\n\n\n```python\nsy.integrate(z, x)\n\n\n```\n\n\n```python\nw.diff(theta)\n\n\n```\n\n\n```python\nsy.integrate(w, theta)\n\n\n```\n\n### \uadfc
    Root\n\n\n\n\n```python\nz_sol_list = sy.solve(z, x)\n\n\n```\n\n\n```python\nz_sol_list\n\n\n```\n\n\n```python\nsy.solve(2* sy.sin(theta) ** 2 - 1, theta)\n\n\n```\n\n### \ucf54\ub4dc \uc0dd\uc131
    Code generation\n\n\n\n\n```python\nprint(sy.python(z_sol_list[0]))\n\n\n```\n\n\n```python\nimport sympy.utilities.codegen as sc\n\n\n```\n\n\n```python\n[(c_name, c_code), (h_name, c_header)] = sc.codegen(\n (\"z_sol\", z_sol_list[0]), \n \"C89\", \n \"test\"\n)\n\n\n```\n\n\n```python\nc_name\n\n\n```\n\n\n```python\nprint(c_code)\n\n\n```\n\n\n```python\nh_name\n\n\n```\n\n\n```python\nprint(c_header)\n\n\n```\n\n### \uc5f0\ub9bd\ubc29\uc815\uc2dd
    System of equations\n\n\n\n\n```python\na1, a2, a3 = sy.symbols('a1:4')\nb1, b2, b3 = sy.symbols('b1:4')\nc1, c2 = sy.symbols('c1:3')\nx1, x2 = sy.symbols('x1:3')\n\n\n```\n\n\n```python\neq1 = sy.Eq(\n a1 * x1 + a2 * x2, \n c1,\n)\n\n\n```\n\n\n```python\neq1\n\n\n```\n\n\n```python\neq2 = sy.Eq(\n b1 * x1 + b2 * x2,\n c2,\n)\n\n\n```\n\n\n```python\neq_list = [eq1, eq2]\n\n\n```\n\n\n```python\neq_list\n\n\n```\n\n\n```python\nsy.solve(eq_list, (x1, x2))\n\n\n```\n\n## \ucc38\uace0\ubb38\ud5cc
    References\n\n\n\n* SymPy Development Team, SymPy 1.4 documentation, sympy.org, 2019 04 10. [Online] Available : https://docs.sympy.org/latest/index.html.\n* SymPy Development Team, SymPy Tutorial, SymPy 1.4 documentation, sympy.org, 2019 04 10. [Online] Available : https://docs.sympy.org/latest/tutorial/index.html.\n* d84_n1nj4, \"How to keep fractions in your equation output\", Stackoverflow.com, 2017 08 12. [Online] Available : https://stackoverflow.com/a/45651175.\n* Python developers, \"Fractions\", Python documentation, 2019 10 12. [Online] Available : https://docs.python.org/3.7/library/fractions.html.\n* SymPy Development Team, codegen, SymPy 1.4 documentation, sympy.org, 2019 04 10. [Online] Available : https://docs.sympy.org/latest/modules/utilities/codegen.html.\n\n\n\n## Final Bell
    \ub9c8\uc9c0\ub9c9 \uc885\n\n\n\n\n```python\n# stackoverfow.com/a/24634221\nimport os\nos.system(\"printf '\\a'\");\n\n\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "e50ac68a04bda1da28083127d607b4e5d0de7770", "size": 13280, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "70_sympy/10_sympy.ipynb", "max_stars_repo_name": "kangwonlee/2009eca-nmisp-template", "max_stars_repo_head_hexsha": "46a09c988c5e0c4efd493afa965d4a17d32985e8", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "70_sympy/10_sympy.ipynb", "max_issues_repo_name": "kangwonlee/2009eca-nmisp-template", "max_issues_repo_head_hexsha": "46a09c988c5e0c4efd493afa965d4a17d32985e8", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "70_sympy/10_sympy.ipynb", "max_forks_repo_name": "kangwonlee/2009eca-nmisp-template", "max_forks_repo_head_hexsha": "46a09c988c5e0c4efd493afa965d4a17d32985e8", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 17.3821989529, "max_line_length": 174, "alphanum_fraction": 0.4662650602, "converted": true, "num_tokens": 1549, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.950410981932005, "lm_q2_score": 0.8840392832736084, "lm_q1q2_score": 0.8402006432825361}} {"text": "# 1-2_4\n\nChanges:\n* Place sympy helper code in external library\n* Streamline algebra\n* Place common equations in external library\n* Move numerical inputs to the top of the document\n\nIgnore:\n* Units. Will rely on use of consistent units\n\n\n# 1-2_3\n\nChanges:\n* Separated Table A-17 into an external library. \n* Modified inputs to use consistent units. \n* Displayed governing equations in readable format. 
\n* Perform all calculations, including algebraic ones, within body of document.\n\n\n```python\nimport sympy as sp\nsp.init_printing()\nP, S, n_d, A, d = sp.symbols(\"P S n_d A d\", real=True)\nsigma = sp.symbols(r\"\\sigma\", real=True)\n\n## Knowns\n# Solid rod in tension\n#P = 500 #[lbf] Load\n#S = 24 #[ksi] Material Strength\n#n_d = 3.0 #[-] Factor of Safety\n\n## Governing Equations\n# Equation for area of circle\nA_eq = sp.Eq(A,sp.pi*d**2/4)\ndisplay (A_eq)\n\n# Equation for normal stress\nsigma_eq = sp.Eq(sigma, P/A)\ndisplay(sigma_eq)\n\n#Equation for safety factor\nn_d_eq = sp.Eq(n_d, S/sigma)\ndisplay(n_d_eq)\n```\n\n\n```python\ndef redefine(equality, symbol):\n sol = sp.solve(equality, symbol)\n #if len(sol) > 1:\n # raise Exception(f\"solution produces {len(sol)} possible results\")\n return sp.Eq(symbol, sol[-1])\n\ndef sub_in(target, sub):\n sol = sp.solve((target, sub), (target.lhs, sub.lhs))\n return sp.Eq(target.lhs, sol[0][0])\n```\n\n\n```python\n# Perform algebraic manipulations to prepare for substition\nA_eq_d = redefine(A_eq, d)\nsigma_eq_A = redefine(sigma_eq, A)\nn_d_eq_sigma = redefine(n_d_eq, sigma)\ndisplay(A_eq_d)\ndisplay(sigma_eq_A)\ndisplay(n_d_eq_sigma)\n```\n\n\n```python\n# Perform substitutions to obtain equation for diameter\nA_eq_d = sub_in(A_eq_d, sigma_eq_A)\ndisplay(A_eq_d)\nA_eq_d = sub_in(A_eq_d, n_d_eq_sigma)\ndisplay(A_eq_d)\n```\n\n\n```python\n# Solving for ideal diameter\nsub_vals = [\n (P, 500), #[lbs] Load\n (S, 24000), #[psi] material strength\n (n_d, 3) #[-] safety factor\n]\nd_ideal = A_eq_d.subs(sub_vals).evalf()\nd_ideal_val = float(d_ideal.rhs)\ndisplay(f\"Calculated ideal diameter = {d_ideal_val}\")\n```\n\n\n 'Calculated ideal diameter = 0.28209479177387814'\n\n\n\n```python\n# Find nearest standard fractional inch size\nimport sys\nsys.path.append('..')\nfrom DPy.Shigley_Tables.Table_A_17 import Table_A_17\ntable = Table_A_17()\nfrac_val = table.nearest_greater(d_ideal_val, units = \"inch_frac\")\ndisplay(f\"Nearest fractional value found for {d_ideal_val} is {frac_val}\")\n```\n\n\n 'Nearest fractional value found for 0.28209479177387814 is 0.3125'\n\n\n\n```python\n# Recalculate factor of safety\nn_d_eq = redefine(A_eq_d, n_d)\ndisplay(n_d_eq)\nsub_vals = sub_vals[0:2]+[(d, frac_val)] #[in] diameter of rod\nnew_n_d = float(n_d_eq.subs(sub_vals).evalf().rhs)\ndisplay(f\"New safety factor calculated as {new_n_d} for diameter of {frac_val} inches\")\n```\n\nReflection\n\nPros:\n* Input units clearly documented\n* Reacts to changing input values\n* Text outputs react to changing values\n* Code for Table_A_17 lookup recycled from elsewhere\n* Derivation of solution clearly followable and completely captured within document\n* All inputs are first to converted to compatible unit system\n\n\nCons:\n* Does not react to changing units\n* Unit handling is manual and error-prone\n* Quite a large document\n* Took great effort to write(hopefully it will get easier)\n* Define helper functions for sympy equations within body of document. Better to recycle from library.\n* Manually defined common functions for area, stress, safety factor\n\n# 1-2_2\n\nAdded automated lookup of values from Table A-17. 
Modified output messages to update numerical values\n\n\n```python\nfrom math import sqrt, pi\n\n## Knowns\n# Solid rod in tension\nP = 500 #[lbf] Load\nS = 24 #[ksi] Material Strength\nn_d = 3.0 #[-] Factor of Safety\n\n## Desired\n# d_min - Minimum diameter of solid rod\n# Use common sizes in Shigley table A-17\n# Calculate new factor of safety\n\n## Solution\nd = sqrt((4*P*n_d)/(pi*S*1000)) #[in] optimal diameter which provides specified factor of safety\n```\n\n\n```python\ndisplay(f\"The optimal calculated value is {d}, which must be fed into Table A-17 and the closest larger value found. A new safety factor will then be recalculated.\")\n```\n\n\n 'The optimal calculated value is 0.28209479177387814, which must be fed into Table A-17 and the closest larger value found. A new safety factor will then be recalculated.'\n\n\n\n```python\nA_17_dec = [0.01, 0.012, 0.016, 0.020, 0.025, 0.032, 0.040, 0.05, 0.06, 0.08, 0.10, 0.12, 0.16, 0.20, 0.24, 0.30, 0.40, 0.50, 0.60, 0.80, 1.00, 1.20, 1.40, 1.60, 1.80, 2.0, 2.4, 2.6, 2.8, 3.0, 3.2, 3.4, 3.6, 3.8, 4.0, 4.2, 4.4, 4.6, 4.8, 5.0, 5.2, 5.4, 5.6, 5.8, 6.0, 7.0, 7.5, 8.5, 9.0, 9.5, 10.0, 10.5, 11.0, 11.5, 12.0, 12.5, 13.0, 13.5, 14.0, 14.5, 15.0, 15.5, 16.0, 16.5, 17.0, 17.5, 18.0, 18.5, 19.0, 19.5, 20]\nfor index, value in enumerate(A_17_dec):\n if d0$ this is always true. Hence, the backward Euler method is unconditionally stable.\n\n## Implicit vs explicit methods\n\nThis analysis is very typical. In computational sciences we always have to make a choice between implicit and explicit methods. The advantage of implicit methods are the very good stability properties, allowing us for the backward Euler method to choose the time-discretisation independent of the spatial discretisation. For explicit methods we have to be much more careful and in the case of Euler we have the quadratic dependency between time-steps and spatial discretisation. However, a single time-step is much cheaper for explicit Euler as we do not need to solve a linear or nonlinear system of equations in each step. The right choice of solver depends on a huge number of factors and is very application dependent.\n\n## Time-Stepping Methods in Software\n\nIn practice we do not necesarily use explicit or implicit Euler. There are many better methods out there. The Scipy library provides a number of time-stepping algorithms. 
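\n\nAs a rough illustration (a sketch only, assuming `scipy` is installed, not part of the lecture code), `scipy.integrate.solve_ivp` exposes both explicit and implicit steppers through its `method` argument, so the trade-off discussed above can be seen directly:\n\n\n```python\nfrom scipy.integrate import solve_ivp\n\ndef rhs(t, u):\n    # stiff linear test problem du/dt = -1000 u\n    return -1000.0 * u\n\n# explicit Runge-Kutta versus implicit BDF on the same problem\nsol_rk = solve_ivp(rhs, (0.0, 1.0), [1.0], method='RK45')\nsol_bdf = solve_ivp(rhs, (0.0, 1.0), [1.0], method='BDF')\nprint(len(sol_rk.t), len(sol_bdf.t))  # the implicit method accepts far fewer steps\n```\n\nThe stability restriction of the explicit method shows up as a much larger number of accepted steps, even though both solvers reach the same final time. \n\n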
For PDE problems PETSc has an excellent infrastructure of time-stepping methods built in to support the solution of time-dependent PDEs.\n", "meta": {"hexsha": "b1a7b7e78ce42294e294d03ebc5d897796096f23", "size": 10126, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "hpc_lecture_notes/simple_time_stepping.ipynb", "max_stars_repo_name": "tbetcke/hpc_lecture_notes", "max_stars_repo_head_hexsha": "f061401a54ef467c8f8d0fb90294d63d83e3a9e1", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2020-10-02T11:11:58.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-14T10:40:51.000Z", "max_issues_repo_path": "hpc_lecture_notes/simple_time_stepping.ipynb", "max_issues_repo_name": "tbetcke/hpc_lecture_notes", "max_issues_repo_head_hexsha": "f061401a54ef467c8f8d0fb90294d63d83e3a9e1", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "hpc_lecture_notes/simple_time_stepping.ipynb", "max_forks_repo_name": "tbetcke/hpc_lecture_notes", "max_forks_repo_head_hexsha": "f061401a54ef467c8f8d0fb90294d63d83e3a9e1", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-11-18T15:21:30.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-26T12:38:25.000Z", "avg_line_length": 31.3498452012, "max_line_length": 728, "alphanum_fraction": 0.5346632431, "converted": true, "num_tokens": 2030, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9219218391455084, "lm_q2_score": 0.9111797082028671, "lm_q1q2_score": 0.8400364723784549}} {"text": "## Generating finite element sampling functions with sympy \n\nThis notebook describes how to use `sympy` to generate symbolic expressions describing sampling of finite elements of different order. It starts with some simple 1D examples and then moves to 2D and 3D and ends with a sampling method for the `VTK_LAGRANGE_HEXAHEDRON` VTK cell type. \n\n\n```python\nimport sympy as sy\nimport numpy as np \n\n%matplotlib notebook\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import axes3d \n```\n\n## 1D elements\n\nLet's start with the 1D lagrange polynomial basis functions! \n\nhttps://www.longqi.cf/python/2014/03/24/implement-of-lagrange-polynomial-in-sympy/ has a nice implementation already: \n\n\n```python\ndef LagrangPoly(x,order,i,xi=None):\n if xi==None:\n xi=sy.symbols('x:%d'%(order+1))\n index = range(order+1) \n return sy.prod([(x-xi[j])/(xi[i]-xi[j]) for j in index if j != i])\n```\n\nwhere \ud835\udc65 is a sympy symbol, order is the polynomial order (1 for linear, 2 for quadratic, 3 for cubic, etc.), \ud835\udc56 the node index and \ud835\udc65\ud835\udc56 are the node locations. 
If we run with order=2 and i=0 without node locations, we'll get the polynomial expression for \ud835\udc3f0: \n\n\n```python\nx=sy.symbols('x')\nLagrangPoly(x,2,0)\n```\n\n\n\n\n$\\displaystyle \\frac{\\left(x - x_{1}\\right) \\left(x - x_{2}\\right)}{\\left(x_{0} - x_{1}\\right) \\left(x_{0} - x_{2}\\right)}$\n\n\n\nFor the rest of the notebook, we're going to use [-1,0,1] for the node locations in each dimension, so let's supply those to get a simpler expression: \n\n\n```python\nLP = LagrangPoly(x,2,0,[-1,0,1])\nLP\n```\n\n\n\n\n$\\displaystyle - x \\left(\\frac{1}{2} - \\frac{x}{2}\\right)$\n\n\n\n\n```python\nsy.simplify(LP)\n```\n\n\n\n\n$\\displaystyle \\frac{x \\left(x - 1\\right)}{2}$\n\n\n\nThe 2 basis functions for a 1D linear element are \n\n\n```python\nsy.pprint(\n (sy.simplify(LagrangPoly(x,1,0,[-1,1])),\n sy.simplify(LagrangPoly(x,1,1,[-1,1])))\n)\n\n```\n\n \u239b1 x x 1\u239e\n \u239c\u2500 - \u2500, \u2500 + \u2500\u239f\n \u239d2 2 2 2\u23a0\n\n\nand then the 3 basis functions for a 1D quadratic element are \n\n\n```python\nsy.pprint(\n (sy.simplify(LagrangPoly(x,2,0,[-1,0,1])),\n sy.simplify(LagrangPoly(x,2,1,[-1,0,1])),\n sy.simplify(LagrangPoly(x,2,2,[-1,0,1]))\n )\n)\n\n```\n\n \u239bx\u22c5(x - 1) 2 x\u22c5(x + 1)\u239e\n \u239c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500, 1 - x , \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u239f\n \u239d 2 2 \u23a0\n\n\nTo get the functional form for evaluating at a point off a node, we also need the known values of the element at the nodes:\n\n\n```python\nvals = [sy.symbols('f'+str(i)) for i in range(3)]\nvals\n```\n\n\n\n\n [f0, f1, f2]\n\n\n\nSo our in-cell interpolation function for the linear element is \n\n\n```python\nshape_funcs = [\n sy.simplify(LagrangPoly(x,1,0,[-1,1])),\n sy.simplify(LagrangPoly(x,1,1,[-1,1]))\n]\nsample_expression = sum([vals[i]*shape_funcs[i] for i in range(2)])\nsample_expression\n```\n\n\n\n\n$\\displaystyle f_{0} \\left(\\frac{1}{2} - \\frac{x}{2}\\right) + f_{1} \\left(\\frac{x}{2} + \\frac{1}{2}\\right)$\n\n\n\nAnd for the quadratic: \n\n\n```python\nshape_funcs = [\n sy.simplify(LagrangPoly(x,2,0,[-1,0,1])),\n sy.simplify(LagrangPoly(x,2,1,[-1,0,1])),\n sy.simplify(LagrangPoly(x,2,2,[-1,0,1]))\n]\nsample_expression = sum([vals[i]*shape_funcs[i] for i in range(3)])\nsample_expression\n```\n\n\n\n\n$\\displaystyle \\frac{f_{0} x \\left(x - 1\\right)}{2} + f_{1} \\left(1 - x^{2}\\right) + \\frac{f_{2} x \\left(x + 1\\right)}{2}$\n\n\n\n## higher dimensions \n\nTo construct the basis functions for higher dimensions, we simply multiply on another lagrange basis function for the new dimension. 
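\n\nA quick sanity check that applies to every tensor-product element constructed below (a sketch, reusing the `LagrangPoly` helper defined above): the shape functions of an element form a partition of unity, i.e. they sum to 1 everywhere in the reference element.\n\n\n```python\nimport sympy as sy\n\nx, y = sy.symbols('x y')\n\n# bilinear (2 x 2 node) shape functions as products of 1D Lagrange polynomials\nbilinear = [\n    sy.simplify(LagrangPoly(x, 1, i, [-1, 1]) * LagrangPoly(y, 1, j, [-1, 1]))\n    for i in range(2)\n    for j in range(2)\n]\n\n# partition of unity: the four shape functions sum to 1 over the element\nprint(sy.simplify(sum(bilinear)))\n```\n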
\n\n### linear rectangular element \n$2^2$ total nodes (4 corners): \n\n\n```python\n\ny=sy.symbols('y')\nshape_funcs = []\n\nfor x_i in range(2):\n for y_i in range(2):\n LP1 = LagrangPoly(x,1,x_i,[-1,1])\n LP2 = LagrangPoly(y,1,y_i,[-1,1])\n shape_funcs.append(sy.simplify(LP2*LP1))\n \nsy.pprint(shape_funcs)\n```\n\n \u23a1(x - 1)\u22c5(y - 1) -(x - 1)\u22c5(y + 1) -(x + 1)\u22c5(y - 1) (x + 1)\u22c5(y + 1)\u23a4\n \u23a2\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500, \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500, \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500, \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u23a5\n \u23a3 4 4 4 4 \u23a6\n\n\n### quadratic rectangular element \n\n$3^3$ total nodes (4 corners, 4 edge-centers and 1 area-center):\n\n\n```python\nshape_funcs = []\n\nfor x_i in range(3):\n for y_i in range(3):\n LP1 = LagrangPoly(x,2,x_i,[-1,0,1])\n LP2 = LagrangPoly(y,2,y_i,[-1,0,1])\n shape_funcs.append(sy.simplify(LP2*LP1))\n\nfor sf in shape_funcs:\n sy.pprint(sf)\n```\n\n x\u22c5y\u22c5(x - 1)\u22c5(y - 1)\n \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n 4 \n -x\u22c5(x - 1)\u22c5(y - 1)\u22c5(y + 1) \n \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n 2 \n x\u22c5y\u22c5(x - 1)\u22c5(y + 1)\n \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n 4 \n -y\u22c5(x - 1)\u22c5(x + 1)\u22c5(y - 1) \n \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n 2 \n (x - 1)\u22c5(x + 1)\u22c5(y - 1)\u22c5(y + 1)\n -y\u22c5(x - 1)\u22c5(x + 1)\u22c5(y + 1) \n \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n 2 \n x\u22c5y\u22c5(x + 1)\u22c5(y - 1)\n \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n 4 \n -x\u22c5(x + 1)\u22c5(y - 1)\u22c5(y + 1) \n \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n 2 \n x\u22c5y\u22c5(x + 1)\u22c5(y + 1)\n \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n 4 \n\n\n### 3D linear hexahedral element \n\n$2^3=8$ total nodes (8 corner vertices): \n\n\n```python\nz=sy.symbols('z')\nshape_funcs = []\n\nfor z_i in range(2):\n for y_i in range(2):\n for x_i in range(2): \n LP1 = LagrangPoly(x,1,x_i,[-1,1])\n LP2 = LagrangPoly(y,1,y_i,[-1,1])\n LP3 = LagrangPoly(z,1,z_i,[-1,1])\n shape_funcs.append(sy.simplify(LP1 * LP2 * LP3))\n \n\nfor sf in shape_funcs:\n sy.pprint(sf) \n```\n\n -(x - 1)\u22c5(y - 1)\u22c5(z - 1) \n \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n 8 \n (x + 1)\u22c5(y - 1)\u22c5(z - 1)\n \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n 8 \n (x - 
1)\u22c5(y + 1)\u22c5(z - 1)\n \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n 8 \n -(x + 1)\u22c5(y + 1)\u22c5(z - 1) \n \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n 8 \n (x - 1)\u22c5(y - 1)\u22c5(z + 1)\n \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n 8 \n -(x + 1)\u22c5(y - 1)\u22c5(z + 1) \n \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n 8 \n -(x - 1)\u22c5(y + 1)\u22c5(z + 1) \n \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n 8 \n (x + 1)\u22c5(y + 1)\u22c5(z + 1)\n \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n 8 \n\n\n### 3D quadratic hexahedral element \n\n$3^3=27$ total nodes (8 corner vertices, 12 edge-centers, 6 face-centers, 1 volume-center): \n\n\n```python\nshape_funcs = []\n\nfor z_i in range(3):\n for y_i in range(3):\n for x_i in range(3): \n LP1 = LagrangPoly(x,2,x_i,[-1,0,1])\n LP2 = LagrangPoly(y,2,y_i,[-1,0,1])\n LP3 = LagrangPoly(z,2,z_i,[-1,0,1])\n shape_funcs.append(sy.simplify(LP1 * LP2 * LP3))\n\nfor sf in shape_funcs:\n sy.pprint(sf)\n\n```\n\n x\u22c5y\u22c5z\u22c5(x - 1)\u22c5(y - 1)\u22c5(z - 1)\n \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n 8 \n -y\u22c5z\u22c5(x - 1)\u22c5(x + 1)\u22c5(y - 1)\u22c5(z - 1) \n \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n 4 \n x\u22c5y\u22c5z\u22c5(x + 1)\u22c5(y - 1)\u22c5(z - 1)\n \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n 8 \n -x\u22c5z\u22c5(x - 1)\u22c5(y - 1)\u22c5(y + 1)\u22c5(z - 1) \n \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n 4 \n z\u22c5(x - 1)\u22c5(x + 1)\u22c5(y - 1)\u22c5(y + 1)\u22c5(z - 1)\n \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n 2 \n -x\u22c5z\u22c5(x + 1)\u22c5(y - 1)\u22c5(y + 1)\u22c5(z - 1) \n \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n 4 \n x\u22c5y\u22c5z\u22c5(x - 1)\u22c5(y + 1)\u22c5(z - 1)\n \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n 8 \n -y\u22c5z\u22c5(x - 1)\u22c5(x + 
1)\u22c5(y + 1)\u22c5(z - 1) \n \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n 4 \n x\u22c5y\u22c5z\u22c5(x + 1)\u22c5(y + 1)\u22c5(z - 1)\n \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n 8 \n -x\u22c5y\u22c5(x - 1)\u22c5(y - 1)\u22c5(z - 1)\u22c5(z + 1) \n \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n 4 \n y\u22c5(x - 1)\u22c5(x + 1)\u22c5(y - 1)\u22c5(z - 1)\u22c5(z + 1)\n \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n 2 \n -x\u22c5y\u22c5(x + 1)\u22c5(y - 1)\u22c5(z - 1)\u22c5(z + 1) \n \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n 4 \n x\u22c5(x - 1)\u22c5(y - 1)\u22c5(y + 1)\u22c5(z - 1)\u22c5(z + 1)\n \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n 2 \n -(x - 1)\u22c5(x + 1)\u22c5(y - 1)\u22c5(y + 1)\u22c5(z - 1)\u22c5(z + 1)\n x\u22c5(x + 1)\u22c5(y - 1)\u22c5(y + 1)\u22c5(z - 1)\u22c5(z + 1)\n \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n 2 \n -x\u22c5y\u22c5(x - 1)\u22c5(y + 1)\u22c5(z - 1)\u22c5(z + 1) \n \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n 4 \n y\u22c5(x - 1)\u22c5(x + 1)\u22c5(y + 1)\u22c5(z - 1)\u22c5(z + 1)\n \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n 2 \n -x\u22c5y\u22c5(x + 1)\u22c5(y + 1)\u22c5(z - 1)\u22c5(z + 1) \n \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n 4 \n x\u22c5y\u22c5z\u22c5(x - 1)\u22c5(y - 1)\u22c5(z + 1)\n \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n 8 \n -y\u22c5z\u22c5(x - 1)\u22c5(x + 1)\u22c5(y - 1)\u22c5(z + 1) \n 
\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n 4 \n x\u22c5y\u22c5z\u22c5(x + 1)\u22c5(y - 1)\u22c5(z + 1)\n \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n 8 \n -x\u22c5z\u22c5(x - 1)\u22c5(y - 1)\u22c5(y + 1)\u22c5(z + 1) \n \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n 4 \n z\u22c5(x - 1)\u22c5(x + 1)\u22c5(y - 1)\u22c5(y + 1)\u22c5(z + 1)\n \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n 2 \n -x\u22c5z\u22c5(x + 1)\u22c5(y - 1)\u22c5(y + 1)\u22c5(z + 1) \n \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n 4 \n x\u22c5y\u22c5z\u22c5(x - 1)\u22c5(y + 1)\u22c5(z + 1)\n \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n 8 \n -y\u22c5z\u22c5(x - 1)\u22c5(x + 1)\u22c5(y + 1)\u22c5(z + 1) \n \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n 4 \n x\u22c5y\u22c5z\u22c5(x + 1)\u22c5(y + 1)\u22c5(z + 1)\n \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n 8 \n\n\n## node numbering\n\ndifferent FEM packages can use slightly different node number conventions. 
For example, the 3D quadratic hexahedral interpolation function would be given by \n\n\n```python\nvals = [sy.symbols('f'+str(i)) for i in range(27)]\nsample_expression = sum([vals[i]*shape_funcs[i] for i in range(27)])\nsample_expression\n```\n\n\n\n\n$\\displaystyle \\frac{f_{0} x y z \\left(x - 1\\right) \\left(y - 1\\right) \\left(z - 1\\right)}{8} - \\frac{f_{1} y z \\left(x - 1\\right) \\left(x + 1\\right) \\left(y - 1\\right) \\left(z - 1\\right)}{4} + \\frac{f_{10} y \\left(x - 1\\right) \\left(x + 1\\right) \\left(y - 1\\right) \\left(z - 1\\right) \\left(z + 1\\right)}{2} - \\frac{f_{11} x y \\left(x + 1\\right) \\left(y - 1\\right) \\left(z - 1\\right) \\left(z + 1\\right)}{4} + \\frac{f_{12} x \\left(x - 1\\right) \\left(y - 1\\right) \\left(y + 1\\right) \\left(z - 1\\right) \\left(z + 1\\right)}{2} - f_{13} \\left(x - 1\\right) \\left(x + 1\\right) \\left(y - 1\\right) \\left(y + 1\\right) \\left(z - 1\\right) \\left(z + 1\\right) + \\frac{f_{14} x \\left(x + 1\\right) \\left(y - 1\\right) \\left(y + 1\\right) \\left(z - 1\\right) \\left(z + 1\\right)}{2} - \\frac{f_{15} x y \\left(x - 1\\right) \\left(y + 1\\right) \\left(z - 1\\right) \\left(z + 1\\right)}{4} + \\frac{f_{16} y \\left(x - 1\\right) \\left(x + 1\\right) \\left(y + 1\\right) \\left(z - 1\\right) \\left(z + 1\\right)}{2} - \\frac{f_{17} x y \\left(x + 1\\right) \\left(y + 1\\right) \\left(z - 1\\right) \\left(z + 1\\right)}{4} + \\frac{f_{18} x y z \\left(x - 1\\right) \\left(y - 1\\right) \\left(z + 1\\right)}{8} - \\frac{f_{19} y z \\left(x - 1\\right) \\left(x + 1\\right) \\left(y - 1\\right) \\left(z + 1\\right)}{4} + \\frac{f_{2} x y z \\left(x + 1\\right) \\left(y - 1\\right) \\left(z - 1\\right)}{8} + \\frac{f_{20} x y z \\left(x + 1\\right) \\left(y - 1\\right) \\left(z + 1\\right)}{8} - \\frac{f_{21} x z \\left(x - 1\\right) \\left(y - 1\\right) \\left(y + 1\\right) \\left(z + 1\\right)}{4} + \\frac{f_{22} z \\left(x - 1\\right) \\left(x + 1\\right) \\left(y - 1\\right) \\left(y + 1\\right) \\left(z + 1\\right)}{2} - \\frac{f_{23} x z \\left(x + 1\\right) \\left(y - 1\\right) \\left(y + 1\\right) \\left(z + 1\\right)}{4} + \\frac{f_{24} x y z \\left(x - 1\\right) \\left(y + 1\\right) \\left(z + 1\\right)}{8} - \\frac{f_{25} y z \\left(x - 1\\right) \\left(x + 1\\right) \\left(y + 1\\right) \\left(z + 1\\right)}{4} + \\frac{f_{26} x y z \\left(x + 1\\right) \\left(y + 1\\right) \\left(z + 1\\right)}{8} - \\frac{f_{3} x z \\left(x - 1\\right) \\left(y - 1\\right) \\left(y + 1\\right) \\left(z - 1\\right)}{4} + \\frac{f_{4} z \\left(x - 1\\right) \\left(x + 1\\right) \\left(y - 1\\right) \\left(y + 1\\right) \\left(z - 1\\right)}{2} - \\frac{f_{5} x z \\left(x + 1\\right) \\left(y - 1\\right) \\left(y + 1\\right) \\left(z - 1\\right)}{4} + \\frac{f_{6} x y z \\left(x - 1\\right) \\left(y + 1\\right) \\left(z - 1\\right)}{8} - \\frac{f_{7} y z \\left(x - 1\\right) \\left(x + 1\\right) \\left(y + 1\\right) \\left(z - 1\\right)}{4} + \\frac{f_{8} x y z \\left(x + 1\\right) \\left(y + 1\\right) \\left(z - 1\\right)}{8} - \\frac{f_{9} x y \\left(x - 1\\right) \\left(y - 1\\right) \\left(z - 1\\right) \\left(z + 1\\right)}{4}$\n\n\n\nand while the first 8 nodes ($f_0$ to $f_7$) generally correspond to the 8 corner vertices, there is no set convention across FEM implementations. \n\npage 68 of https://fenicsproject.org/pub/documents/ufc/ufc-user-manual/ufc-user-manual.pdf describes the convention for UFC (used by fenics). 
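\n\nWhatever the convention, moving nodal data from one package's ordering to another is just a permutation of the value array. A small sketch (the permutation list below is made up purely for illustration and does not correspond to any real package's convention):\n\n\n```python\nimport numpy as np\n\n# hypothetical permutation: entry i holds the source index of target node i\npermutation = np.array([0, 2, 8, 6, 18, 20, 26, 24])  # illustrative values only\n\nf_source = np.arange(27.0)        # nodal values in the source ordering\nf_target = f_source[permutation]  # the same values re-indexed for the target ordering\nprint(f_target)\n```\n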
\n\n## VTK_LAGRANGE_HEXAHEDRON node numbering convention \n\nThe node numbering for the VTK_LAGRANGE_HEXAHEDRON (VTK type 72) is buried in the VTK source code: \n\nhttps://gitlab.kitware.com/vtk/vtk/-/blob/7a0b92864c96680b1f42ee84920df556fc6ebaa3/Common/DataModel/vtkHigherOrderInterpolation.cxx\n\n\nSome other useful links:\n* https://blog.kitware.com/modeling-arbitrary-order-lagrange-finite-elements-in-the-visualization-toolkit/\n* https://github.com/Kitware/VTK/blob/0ce0d74e67927fd964a27c045d68e2f32b5f65f7/Common/DataModel/vtkCellType.h#L112\n* https://github.com/ju-kreber/paraview-scripts\n* https://discourse.paraview.org/t/about-high-order-non-traditional-lagrange-finite-element/1577/4\n\nhere's the VTK ordering: \n\n```\ncorners: edges: \n\n z\n7----------6 .----14----. \n|\\ ^ |\\ |\\ |\\ \n| \\ | | \\ | 15 | 13 \n| \\ | | \\ 19 \\ 18 \\ \n| 4------+---5 | .----12+---. \n| | +-- |-- | -> x | | | | \n3---+---\\--2 | .---+-10---. | \n \\ | \\ \\ | \\ 16 \\ 17 \n \\ | \\ \\ | 11 | 9 | \n \\| y \\| \\| \\| \n 0----------1 .----8-----. \n \n \ncenter-face node numbers \n\ny-z plane at x = -1 : 20 \ny-z plane at x = +1 : 21\nx-z plane at y = -1 : 22\nx-z plane at y = +1 : 24\nx-y plane at z = -1 : 23\nx-y plane at z = +1 : 25\n\nvolume-center point node number: 26 \n```\n\nNote that edge numbers 18 and 19 were switched by this VTK PR https://gitlab.kitware.com/vtk/vtk/-/commit/7a0b92864c96680b1f42ee84920df556fc6ebaa3 The above numbering is for after VTK 9.0, before VTK 9.0 would require a check and switch. \n \n\n\n```python\ncorner_coords = [\n [-1,-1,-1],\n [ 1,-1,-1], \n [ 1, 1,-1],\n [-1, 1,-1],\n [-1,-1, 1],\n [ 1,-1, 1],\n [ 1, 1, 1],\n [-1, 1, 1],\n]\n\n# the corner nodes defining the edges\nedge_nodes = [\n [0, 1], \n [1, 2],\n [2, 3],\n [0, 3],\n [4, 5],\n [5, 6],\n [6, 7],\n [4, 7], \n [0, 4],\n [1, 5],\n [2, 6],\n [3, 7],\n]\n\n# the corner nodes defning the faces\nface_nodes = [ \n [0,3,4,7],\n [1,2,5,6],\n [2,3,6,7],\n [0,1,2,3],\n [0,1,4,5],\n [4,5,6,7],\n]\n\n\n```\n\n\n```python\nedge_coords = []\nfor edge in edge_nodes:\n edge_center = (np.array(corner_coords[edge[0]]) + np.array(corner_coords[edge[1]]))/2\n edge_coords.append(edge_center.tolist())\n \nface_coords = [] \nfor face in face_nodes:\n coord = np.array([0,0,0]) \n for i in range(0,4):\n coord += np.array(corner_coords[face[i]]) \n face_coords.append(coord/4) \n\nvol_center_coords=[np.array(corner_coords).sum(axis=0)/8]\n```\n\n\n```python\ncorner_coords=np.array(corner_coords)\nedge_coords=np.array(edge_coords)\nface_coords=np.array(face_coords)\nvol_center_coords=np.array(vol_center_coords)\n```\n\n\n```python\nfig=plt.figure()\nax = fig.add_subplot(111, projection='3d')\n\nnode_num=0\nclrs=['r','g','b','k']\ntype_num=0\nall_coords = []\nfor coords in [corner_coords, edge_coords, face_coords, vol_center_coords]: \n \n for node in coords:\n ax.plot(node[0],node[1],node[2],marker='.',color=clrs[type_num])\n ax.text(node[0],node[1],node[2], str(node_num),fontsize=12,color=clrs[type_num])\n all_coords.append(node)\n node_num+=1 \n type_num+=1\n\nall_coords = np.array(all_coords)\nlncol=[0,0,0,.4]\nfor xyz in [1,-1]:\n ax.plot([-1,1],[xyz,xyz],[xyz,xyz],color=lncol)\n ax.plot([xyz,xyz],[-1,1],[xyz,xyz],color=lncol)\n ax.plot([xyz,xyz],[xyz,xyz],[-1,1],color=lncol) \n ax.plot([-1,1],[-xyz,-xyz],[xyz,xyz],color=lncol)\n ax.plot([-xyz,-xyz],[-1,1],[xyz,xyz],color=lncol)\n ax.plot([-xyz,-xyz],[xyz,xyz],[-1,1],color=lncol) \n\nax.plot([-1,1],[0.,0.],[0.,0.],'--',color=lncol) 
\nax.plot([0.,0.],[-1,1],[0.,0.],'--',color=lncol) \nax.plot([0.,0.],[0.,0.],[-1,1],'--',color=lncol)\n```\n\n\n \n\n\n\n\n\n\n\n\n\n []\n\n\n\n**So to build the proper interpolation function, we need to account for this node numbering:**\n\n\n```python\n# corresponding quadratic lagrange poly \nshape_funcs = []\nvtk_node_num = []\ncrd=[-1,0,1]\nfor z_i in range(3):\n for y_i in range(3):\n for x_i in range(3): \n LP1 = LagrangPoly(x,2,x_i,crd)\n LP2 = LagrangPoly(y,2,y_i,crd)\n LP3 = LagrangPoly(z,2,z_i,crd)\n \n # find the VTK node number \n indx = np.where((all_coords[:,0]==crd[x_i]) & (all_coords[:,1]==crd[y_i]) & (all_coords[:,2]==crd[z_i]))[0][0] \n vtk_node_num.append(indx)\n shape_funcs.append(sy.simplify(LP1 * LP2 * LP3))\n \n```\n\n\n```python\nvals = [sy.symbols('f'+str(i)) for i in vtk_node_num]\nsample_expression = sum([vals[i]*shape_funcs[i] for i in range(27)])\nsample_expression\n```\n\n\n\n\n$\\displaystyle \\frac{f_{0} x y z \\left(x - 1\\right) \\left(y - 1\\right) \\left(z - 1\\right)}{8} + \\frac{f_{1} x y z \\left(x + 1\\right) \\left(y - 1\\right) \\left(z - 1\\right)}{8} - \\frac{f_{10} y z \\left(x - 1\\right) \\left(x + 1\\right) \\left(y + 1\\right) \\left(z - 1\\right)}{4} - \\frac{f_{11} x z \\left(x - 1\\right) \\left(y - 1\\right) \\left(y + 1\\right) \\left(z - 1\\right)}{4} - \\frac{f_{12} y z \\left(x - 1\\right) \\left(x + 1\\right) \\left(y - 1\\right) \\left(z + 1\\right)}{4} - \\frac{f_{13} x z \\left(x + 1\\right) \\left(y - 1\\right) \\left(y + 1\\right) \\left(z + 1\\right)}{4} - \\frac{f_{14} y z \\left(x - 1\\right) \\left(x + 1\\right) \\left(y + 1\\right) \\left(z + 1\\right)}{4} - \\frac{f_{15} x z \\left(x - 1\\right) \\left(y - 1\\right) \\left(y + 1\\right) \\left(z + 1\\right)}{4} - \\frac{f_{16} x y \\left(x - 1\\right) \\left(y - 1\\right) \\left(z - 1\\right) \\left(z + 1\\right)}{4} - \\frac{f_{17} x y \\left(x + 1\\right) \\left(y - 1\\right) \\left(z - 1\\right) \\left(z + 1\\right)}{4} - \\frac{f_{18} x y \\left(x + 1\\right) \\left(y + 1\\right) \\left(z - 1\\right) \\left(z + 1\\right)}{4} - \\frac{f_{19} x y \\left(x - 1\\right) \\left(y + 1\\right) \\left(z - 1\\right) \\left(z + 1\\right)}{4} + \\frac{f_{2} x y z \\left(x + 1\\right) \\left(y + 1\\right) \\left(z - 1\\right)}{8} + \\frac{f_{20} x \\left(x - 1\\right) \\left(y - 1\\right) \\left(y + 1\\right) \\left(z - 1\\right) \\left(z + 1\\right)}{2} + \\frac{f_{21} x \\left(x + 1\\right) \\left(y - 1\\right) \\left(y + 1\\right) \\left(z - 1\\right) \\left(z + 1\\right)}{2} + \\frac{f_{22} y \\left(x - 1\\right) \\left(x + 1\\right) \\left(y + 1\\right) \\left(z - 1\\right) \\left(z + 1\\right)}{2} + \\frac{f_{23} z \\left(x - 1\\right) \\left(x + 1\\right) \\left(y - 1\\right) \\left(y + 1\\right) \\left(z - 1\\right)}{2} + \\frac{f_{24} y \\left(x - 1\\right) \\left(x + 1\\right) \\left(y - 1\\right) \\left(z - 1\\right) \\left(z + 1\\right)}{2} + \\frac{f_{25} z \\left(x - 1\\right) \\left(x + 1\\right) \\left(y - 1\\right) \\left(y + 1\\right) \\left(z + 1\\right)}{2} - f_{26} \\left(x - 1\\right) \\left(x + 1\\right) \\left(y - 1\\right) \\left(y + 1\\right) \\left(z - 1\\right) \\left(z + 1\\right) + \\frac{f_{3} x y z \\left(x - 1\\right) \\left(y + 1\\right) \\left(z - 1\\right)}{8} + \\frac{f_{4} x y z \\left(x - 1\\right) \\left(y - 1\\right) \\left(z + 1\\right)}{8} + \\frac{f_{5} x y z \\left(x + 1\\right) \\left(y - 1\\right) \\left(z + 1\\right)}{8} + \\frac{f_{6} x y z \\left(x + 1\\right) \\left(y + 1\\right) \\left(z + 1\\right)}{8} + \\frac{f_{7} x y z 
\\left(x - 1\\right) \\left(y + 1\\right) \\left(z + 1\\right)}{8} - \\frac{f_{8} y z \\left(x - 1\\right) \\left(x + 1\\right) \\left(y - 1\\right) \\left(z - 1\\right)}{4} - \\frac{f_{9} x z \\left(x + 1\\right) \\left(y - 1\\right) \\left(y + 1\\right) \\left(z - 1\\right)}{4}$\n\n\n\nAs an extra aside, if we don't want to assume the node positions: \n\n\n```python\n# corresponding quadratic lagrange poly \nshape_funcs = []\nvtk_node_num = []\ncrd=[-1,0,1]\nfor z_i in range(3):\n for y_i in range(3):\n for x_i in range(3): \n LP1 = LagrangPoly(x,2,x_i)\n LP2 = LagrangPoly(y,2,y_i)\n LP3 = LagrangPoly(z,2,z_i)\n \n # find the VTK node number \n indx = np.where((all_coords[:,0]==crd[x_i]) & (all_coords[:,1]==crd[y_i]) & (all_coords[:,2]==crd[z_i]))[0][0] \n vtk_node_num.append(indx)\n shape_funcs.append(sy.simplify(LP1 * LP2 * LP3))\n \n```\n\n\n```python\nvals = [sy.symbols('f'+str(i)) for i in vtk_node_num]\nsample_expression = sum([vals[i]*shape_funcs[i] for i in range(27)])\nsample_expression\n```\n\n\n\n\n$\\displaystyle \\frac{f_{0} \\left(x - x_{1}\\right) \\left(x - x_{2}\\right) \\left(x_{1} - y\\right) \\left(x_{1} - z\\right) \\left(x_{2} - y\\right) \\left(x_{2} - z\\right)}{\\left(x_{0} - x_{1}\\right)^{3} \\left(x_{0} - x_{2}\\right)^{3}} + \\frac{f_{1} \\left(x - x_{0}\\right) \\left(x - x_{1}\\right) \\left(x_{1} - y\\right) \\left(x_{1} - z\\right) \\left(x_{2} - y\\right) \\left(x_{2} - z\\right)}{\\left(x_{0} - x_{1}\\right)^{2} \\left(x_{0} - x_{2}\\right)^{3} \\left(x_{1} - x_{2}\\right)} - \\frac{f_{10} \\left(x - x_{0}\\right) \\left(x - x_{2}\\right) \\left(x_{0} - y\\right) \\left(x_{1} - y\\right) \\left(x_{1} - z\\right) \\left(x_{2} - z\\right)}{\\left(x_{0} - x_{1}\\right)^{2} \\left(x_{0} - x_{2}\\right)^{2} \\left(x_{1} - x_{2}\\right)^{2}} - \\frac{f_{11} \\left(x - x_{1}\\right) \\left(x - x_{2}\\right) \\left(x_{0} - y\\right) \\left(x_{1} - z\\right) \\left(x_{2} - y\\right) \\left(x_{2} - z\\right)}{\\left(x_{0} - x_{1}\\right)^{3} \\left(x_{0} - x_{2}\\right)^{2} \\left(x_{1} - x_{2}\\right)} - \\frac{f_{12} \\left(x - x_{0}\\right) \\left(x - x_{2}\\right) \\left(x_{0} - z\\right) \\left(x_{1} - y\\right) \\left(x_{1} - z\\right) \\left(x_{2} - y\\right)}{\\left(x_{0} - x_{1}\\right)^{2} \\left(x_{0} - x_{2}\\right)^{2} \\left(x_{1} - x_{2}\\right)^{2}} - \\frac{f_{13} \\left(x - x_{0}\\right) \\left(x - x_{1}\\right) \\left(x_{0} - y\\right) \\left(x_{0} - z\\right) \\left(x_{1} - z\\right) \\left(x_{2} - y\\right)}{\\left(x_{0} - x_{1}\\right) \\left(x_{0} - x_{2}\\right)^{2} \\left(x_{1} - x_{2}\\right)^{3}} - \\frac{f_{14} \\left(x - x_{0}\\right) \\left(x - x_{2}\\right) \\left(x_{0} - y\\right) \\left(x_{0} - z\\right) \\left(x_{1} - y\\right) \\left(x_{1} - z\\right)}{\\left(x_{0} - x_{1}\\right) \\left(x_{0} - x_{2}\\right)^{2} \\left(x_{1} - x_{2}\\right)^{3}} - \\frac{f_{15} \\left(x - x_{1}\\right) \\left(x - x_{2}\\right) \\left(x_{0} - y\\right) \\left(x_{0} - z\\right) \\left(x_{1} - z\\right) \\left(x_{2} - y\\right)}{\\left(x_{0} - x_{1}\\right)^{2} \\left(x_{0} - x_{2}\\right)^{2} \\left(x_{1} - x_{2}\\right)^{2}} - \\frac{f_{16} \\left(x - x_{1}\\right) \\left(x - x_{2}\\right) \\left(x_{0} - z\\right) \\left(x_{1} - y\\right) \\left(x_{2} - y\\right) \\left(x_{2} - z\\right)}{\\left(x_{0} - x_{1}\\right)^{3} \\left(x_{0} - x_{2}\\right)^{2} \\left(x_{1} - x_{2}\\right)} - \\frac{f_{17} \\left(x - x_{0}\\right) \\left(x - x_{1}\\right) \\left(x_{0} - z\\right) \\left(x_{1} - y\\right) \\left(x_{2} - y\\right) \\left(x_{2} - 
z\\right)}{\\left(x_{0} - x_{1}\\right)^{2} \\left(x_{0} - x_{2}\\right)^{2} \\left(x_{1} - x_{2}\\right)^{2}} - \\frac{f_{18} \\left(x - x_{0}\\right) \\left(x - x_{1}\\right) \\left(x_{0} - y\\right) \\left(x_{0} - z\\right) \\left(x_{1} - y\\right) \\left(x_{2} - z\\right)}{\\left(x_{0} - x_{1}\\right) \\left(x_{0} - x_{2}\\right)^{2} \\left(x_{1} - x_{2}\\right)^{3}} - \\frac{f_{19} \\left(x - x_{1}\\right) \\left(x - x_{2}\\right) \\left(x_{0} - y\\right) \\left(x_{0} - z\\right) \\left(x_{1} - y\\right) \\left(x_{2} - z\\right)}{\\left(x_{0} - x_{1}\\right)^{2} \\left(x_{0} - x_{2}\\right)^{2} \\left(x_{1} - x_{2}\\right)^{2}} + \\frac{f_{2} \\left(x - x_{0}\\right) \\left(x - x_{1}\\right) \\left(x_{0} - y\\right) \\left(x_{1} - y\\right) \\left(x_{1} - z\\right) \\left(x_{2} - z\\right)}{\\left(x_{0} - x_{1}\\right) \\left(x_{0} - x_{2}\\right)^{3} \\left(x_{1} - x_{2}\\right)^{2}} + \\frac{f_{20} \\left(x - x_{1}\\right) \\left(x - x_{2}\\right) \\left(x_{0} - y\\right) \\left(x_{0} - z\\right) \\left(x_{2} - y\\right) \\left(x_{2} - z\\right)}{\\left(x_{0} - x_{1}\\right)^{3} \\left(x_{0} - x_{2}\\right) \\left(x_{1} - x_{2}\\right)^{2}} + \\frac{f_{21} \\left(x - x_{0}\\right) \\left(x - x_{1}\\right) \\left(x_{0} - y\\right) \\left(x_{0} - z\\right) \\left(x_{2} - y\\right) \\left(x_{2} - z\\right)}{\\left(x_{0} - x_{1}\\right)^{2} \\left(x_{0} - x_{2}\\right) \\left(x_{1} - x_{2}\\right)^{3}} + \\frac{f_{22} \\left(x - x_{0}\\right) \\left(x - x_{2}\\right) \\left(x_{0} - y\\right) \\left(x_{0} - z\\right) \\left(x_{1} - y\\right) \\left(x_{2} - z\\right)}{\\left(x_{0} - x_{1}\\right)^{2} \\left(x_{0} - x_{2}\\right) \\left(x_{1} - x_{2}\\right)^{3}} + \\frac{f_{23} \\left(x - x_{0}\\right) \\left(x - x_{2}\\right) \\left(x_{0} - y\\right) \\left(x_{1} - z\\right) \\left(x_{2} - y\\right) \\left(x_{2} - z\\right)}{\\left(x_{0} - x_{1}\\right)^{3} \\left(x_{0} - x_{2}\\right) \\left(x_{1} - x_{2}\\right)^{2}} + \\frac{f_{24} \\left(x - x_{0}\\right) \\left(x - x_{2}\\right) \\left(x_{0} - z\\right) \\left(x_{1} - y\\right) \\left(x_{2} - y\\right) \\left(x_{2} - z\\right)}{\\left(x_{0} - x_{1}\\right)^{3} \\left(x_{0} - x_{2}\\right) \\left(x_{1} - x_{2}\\right)^{2}} + \\frac{f_{25} \\left(x - x_{0}\\right) \\left(x - x_{2}\\right) \\left(x_{0} - y\\right) \\left(x_{0} - z\\right) \\left(x_{1} - z\\right) \\left(x_{2} - y\\right)}{\\left(x_{0} - x_{1}\\right)^{2} \\left(x_{0} - x_{2}\\right) \\left(x_{1} - x_{2}\\right)^{3}} - \\frac{f_{26} \\left(x - x_{0}\\right) \\left(x - x_{2}\\right) \\left(x_{0} - y\\right) \\left(x_{0} - z\\right) \\left(x_{2} - y\\right) \\left(x_{2} - z\\right)}{\\left(x_{0} - x_{1}\\right)^{3} \\left(x_{1} - x_{2}\\right)^{3}} + \\frac{f_{3} \\left(x - x_{1}\\right) \\left(x - x_{2}\\right) \\left(x_{0} - y\\right) \\left(x_{1} - y\\right) \\left(x_{1} - z\\right) \\left(x_{2} - z\\right)}{\\left(x_{0} - x_{1}\\right)^{2} \\left(x_{0} - x_{2}\\right)^{3} \\left(x_{1} - x_{2}\\right)} + \\frac{f_{4} \\left(x - x_{1}\\right) \\left(x - x_{2}\\right) \\left(x_{0} - z\\right) \\left(x_{1} - y\\right) \\left(x_{1} - z\\right) \\left(x_{2} - y\\right)}{\\left(x_{0} - x_{1}\\right)^{2} \\left(x_{0} - x_{2}\\right)^{3} \\left(x_{1} - x_{2}\\right)} + \\frac{f_{5} \\left(x - x_{0}\\right) \\left(x - x_{1}\\right) \\left(x_{0} - z\\right) \\left(x_{1} - y\\right) \\left(x_{1} - z\\right) \\left(x_{2} - y\\right)}{\\left(x_{0} - x_{1}\\right) \\left(x_{0} - x_{2}\\right)^{3} \\left(x_{1} - x_{2}\\right)^{2}} + \\frac{f_{6} \\left(x - x_{0}\\right) \\left(x - 
x_{1}\\right) \\left(x_{0} - y\\right) \\left(x_{0} - z\\right) \\left(x_{1} - y\\right) \\left(x_{1} - z\\right)}{\\left(x_{0} - x_{2}\\right)^{3} \\left(x_{1} - x_{2}\\right)^{3}} + \\frac{f_{7} \\left(x - x_{1}\\right) \\left(x - x_{2}\\right) \\left(x_{0} - y\\right) \\left(x_{0} - z\\right) \\left(x_{1} - y\\right) \\left(x_{1} - z\\right)}{\\left(x_{0} - x_{1}\\right) \\left(x_{0} - x_{2}\\right)^{3} \\left(x_{1} - x_{2}\\right)^{2}} - \\frac{f_{8} \\left(x - x_{0}\\right) \\left(x - x_{2}\\right) \\left(x_{1} - y\\right) \\left(x_{1} - z\\right) \\left(x_{2} - y\\right) \\left(x_{2} - z\\right)}{\\left(x_{0} - x_{1}\\right)^{3} \\left(x_{0} - x_{2}\\right)^{2} \\left(x_{1} - x_{2}\\right)} - \\frac{f_{9} \\left(x - x_{0}\\right) \\left(x - x_{1}\\right) \\left(x_{0} - y\\right) \\left(x_{1} - z\\right) \\left(x_{2} - y\\right) \\left(x_{2} - z\\right)}{\\left(x_{0} - x_{1}\\right)^{2} \\left(x_{0} - x_{2}\\right)^{2} \\left(x_{1} - x_{2}\\right)^{2}}$\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "cf601dbcc5b26ca79fe9af922e71a1fc34950126", "size": 220996, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "ASPECT_VTK_quad_hex_mapping.ipynb", "max_stars_repo_name": "chrishavlin/unstructured_vtu", "max_stars_repo_head_hexsha": "b2b2babee2537cf0b633b2f95167f5ae8e78c4ea", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-08-05T19:37:01.000Z", "max_stars_repo_stars_event_max_datetime": "2020-08-05T19:37:01.000Z", "max_issues_repo_path": "ASPECT_VTK_quad_hex_mapping.ipynb", "max_issues_repo_name": "chrishavlin/unstructured_vtu", "max_issues_repo_head_hexsha": "b2b2babee2537cf0b633b2f95167f5ae8e78c4ea", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-08-15T15:37:59.000Z", "max_issues_repo_issues_event_max_datetime": "2020-08-15T15:37:59.000Z", "max_forks_repo_path": "ASPECT_VTK_quad_hex_mapping.ipynb", "max_forks_repo_name": "chrishavlin/unstructured_vtu", "max_forks_repo_head_hexsha": "b2b2babee2537cf0b633b2f95167f5ae8e78c4ea", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-06-23T14:35:12.000Z", "max_forks_repo_forks_event_max_datetime": "2020-06-23T14:35:12.000Z", "avg_line_length": 119.3927606699, "max_line_length": 136011, "alphanum_fraction": 0.7631767091, "converted": true, "num_tokens": 11848, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9518632343454895, "lm_q2_score": 0.8824278726384089, "lm_q1q2_score": 0.8399506489262056}} {"text": "# Funciones de forma unidimensionales\n\nLas funciones de forma unidimensionales sirven para aproximar los desplazamientos: \n\n\\begin{equation}\nu = \\alpha_{0} + \\alpha_{1} x + \\cdots + \\alpha_{n} x^{n} = \\sum_{i = 0}^{n} \\alpha_{i} x^{i}\n\\end{equation}\n\n## Elemento barra\n\nLos elementos barra soportan esfuerzos debidos a tracci\u00f3n o compresi\u00f3n.\n\n### Elemento de dos nodos\n\nPara un elemento de dos nodos y dos grados de libertad:\n\n\\begin{equation}\nu = \\alpha_{0} + \\alpha_{1} x =\n\\left [\n\\begin{matrix}\n1 & x\n\\end{matrix}\n\\right ]\n\\left [\n\\begin{matrix}\n\\alpha_{0} \\\\\\\n\\alpha_{1}\n\\end{matrix}\n\\right ]\n\\end{equation}\n\nReemplazando los valores nodales en coordenadas naturales:\n\n\\begin{eqnarray}\n\\alpha_{0} + \\alpha_{1}(-1) &=& u_{1}\\\\\\\n\\alpha_{0} + \\alpha_{1}(1) &=& u_{2}\n\\end{eqnarray}\n\nEvaluando:\n\n\\begin{eqnarray}\n\\alpha_{0} - \\alpha_{1} &=& u_{1}\\\\\\\n\\alpha_{0} + \\alpha_{1} &=& u_{2}\n\\end{eqnarray}\n\nEn forma matricial:\n\n\\begin{equation}\n\\left [\n\\begin{matrix}\n1 & -1 \\\\\\\n1 & 1\n\\end{matrix}\n\\right ]\n\\left [\n\\begin{matrix}\n\\alpha_{0} \\\\\\\n\\alpha_{1}\n\\end{matrix}\n\\right ] =\n\\left [\n\\begin{matrix}\nu_{1} \\\\\\\nu_{2}\n\\end{matrix}\n\\right ]\n\\end{equation}\n\nResolviendo el sistema:\n\n\\begin{equation}\n\\left [\n\\begin{matrix}\n\\alpha_{0} \\\\\\\n\\alpha_{1}\n\\end{matrix}\n\\right ] =\n\\left [\n\\begin{matrix}\n\\frac{1}{2} & \\frac{1}{2} \\\\\\\n-\\frac{1}{2} & \\frac{1}{2}\n\\end{matrix}\n\\right ]\n\\left [\n\\begin{matrix}\nu_{1} \\\\\\\nu_{2}\n\\end{matrix}\n\\right ]\n\\end{equation}\n\nReemplazando:\n\n\\begin{equation}\nu =\n\\left [\n\\begin{matrix}\n1 & x\n\\end{matrix}\n\\right ]\n\\left [\n\\begin{matrix}\n\\alpha_{0} \\\\\\\n\\alpha_{1}\n\\end{matrix}\n\\right ] =\n\\left [\n\\begin{matrix}\n1 & x\n\\end{matrix}\n\\right ]\n\\left [\n\\begin{matrix}\n\\frac{1}{2} & \\frac{1}{2} \\\\\\\n-\\frac{1}{2} & \\frac{1}{2}\n\\end{matrix}\n\\right ]\n\\left [\n\\begin{matrix}\nu_{1} \\\\\\\nu_{2}\n\\end{matrix}\n\\right ] =\n\\left [\n\\begin{matrix}\n\\frac{1}{2} - \\frac{1}{2} x & \\frac{1}{2} + \\frac{1}{2} x\n\\end{matrix}\n\\right ]\n\\left [\n\\begin{matrix}\nu_{1} \\\\\\\nu_{2}\n\\end{matrix}\n\\right ]\n\\end{equation}\n\nReescribiendo $u$:\n\n\\begin{equation}\nu = \\Big ( \\frac{1}{2} - \\frac{1}{2} x \\Big ) u_{1} + \\Big ( \\frac{1}{2} + \\frac{1}{2} x \\Big ) u_{2} = N_{1} u_{1} + N_{2} u_{2}\n\\end{equation}\n\n### Elemento de tres nodos\n\nPara un elemento de tres nodos y tres grados de libertad:\n\n\\begin{equation}\nu = \\alpha_{0} + \\alpha_{1} x + \\alpha_{2} x^{2} =\n\\left [\n\\begin{matrix}\n1 & x & x^{2}\n\\end{matrix}\n\\right ]\n\\left [\n\\begin{matrix}\n\\alpha_{0} \\\\\\\n\\alpha_{1} \\\\\\\n\\alpha_{2}\n\\end{matrix}\n\\right ]\n\\end{equation}\n\nReemplazando los valores nodales en coordenadas naturales:\n\n\\begin{eqnarray}\n\\alpha_{0} + \\alpha_{1}(-1) + \\alpha_{2}(-1)^{2}&=& u_{1}\\\\\\\n\\alpha_{0} + \\alpha_{1}(0) + \\alpha_{2}(0)^{2}&=& u_{2}\\\\\\\n\\alpha_{0} + \\alpha_{1}(1) + \\alpha_{2}(1)^{2}&=& u_{3}\n\\end{eqnarray}\n\nEvaluando:\n\n\\begin{eqnarray}\n\\alpha_{0} - \\alpha_{1} + \\alpha_{1} &=& u_{1}\\\\\\\n\\alpha_{0} &=& u_{2}\\\\\\\n\\alpha_{0} + \\alpha_{1} + \\alpha_{1}&=& u_{3}\n\\end{eqnarray}\n\nEn forma matricial:\n\n\\begin{equation}\n\\left [\n\\begin{matrix}\n1 & -1 & 1 \\\\\\\n1 & 0 & 
0 \\\\\\\n1 & 1 & 1 \\\\\\\n\\end{matrix}\n\\right ]\n\\left [\n\\begin{matrix}\n\\alpha_{0} \\\\\\\n\\alpha_{1} \\\\\\\n\\alpha_{2}\n\\end{matrix}\n\\right ] =\n\\left [\n\\begin{matrix}\nu_{1} \\\\\\\nu_{2} \\\\\\\nu_{3}\n\\end{matrix}\n\\right ]\n\\end{equation}\n\nResolviendo el sistema:\n\n\\begin{equation}\n\\left [\n\\begin{matrix}\n\\alpha_{0} \\\\\\\n\\alpha_{1} \\\\\\\n\\alpha_{2}\n\\end{matrix}\n\\right ] =\n\\left [\n\\begin{matrix}\n0 & 1 & 0 \\\\\\\n-\\frac{1}{2} & 0 & \\frac{1}{2} \\\\\\\n\\frac{1}{2} & -1 & \\frac{1}{2}\n\\end{matrix}\n\\right ]\n\\left [\n\\begin{matrix}\nu_{1} \\\\\\\nu_{2} \\\\\\\nu_{3}\n\\end{matrix}\n\\right ]\n\\end{equation}\n\nReemplazando:\n\n\\begin{equation}\nu =\n\\left [\n\\begin{matrix}\n1 & x & x^{2}\n\\end{matrix}\n\\right ]\n\\left [\n\\begin{matrix}\n\\alpha_{0} \\\\\\\n\\alpha_{1} \\\\\\\n\\alpha_{2}\n\\end{matrix}\n\\right ] =\n\\left [\n\\begin{matrix}\n1 & x & x^{2}\n\\end{matrix}\n\\right ]\n\\left [\n\\begin{matrix}\n0 & 1 & 0 \\\\\\\n-\\frac{1}{2} & 0 & \\frac{1}{2} \\\\\\\n\\frac{1}{2} & -1 & \\frac{1}{2}\n\\end{matrix}\n\\right ]\n\\left [\n\\begin{matrix}\nu_{1} \\\\\\\nu_{2} \\\\\\\nu_{3}\n\\end{matrix}\n\\right ] =\n\\left [\n\\begin{matrix}\n-\\frac{1}{2} x + \\frac{1}{2} x^{2} & 1 - x^{2} & \\frac{1}{2} x + \\frac{1}{2} x^{2}\n\\end{matrix}\n\\right ]\n\\left [\n\\begin{matrix}\nu_{1} \\\\\\\nu_{2} \\\\\\\nu_{3}\n\\end{matrix}\n\\right ]\n\\end{equation}\n\nReescribiendo $u$:\n\n\\begin{equation}\nu = \\Big ( -\\frac{1}{2} x + \\frac{1}{2} x^{2} \\Big ) u_{1} + (1 - x^{2}) u_{2} + \\Big ( \\frac{1}{2} x + \\frac{1}{2} x^{2} \\Big ) u_{3} = N_{1} u_{1} + N_{2} u_{2} + N_{3} u_{3}\n\\end{equation}\n\n## Elementos barra de mayor grado polinomial\n\nLos elementos de mayor grado pueden obtenerse mediante polinomios de Lagrange:\n\n\\begin{equation}\n\\ell = \\prod_{i=0, i \\neq j}^{k} \\frac{x - x_{i}}{x_{j} - x_{i}}\n\\end{equation}\n\n### Elemento de dos nodos\n\nUsando la f\u00f3rmula:\n\n\\begin{eqnarray}\nN_{1} &=& \\frac{x - 1}{-1 - 1} = \\frac{1}{2} - \\frac{1}{2} x \\\\\\\nN_{2} &=& \\frac{x - (-1)}{1 - (-1)} = \\frac{1}{2} + \\frac{1}{2} x\n\\end{eqnarray}\n\n### Elemento de tres nodos\n\nUsando la f\u00f3rmula:\n\n\\begin{eqnarray}\nN_{1} &=& \\frac{x - 0}{-1 - 0} \\frac{x - 1}{-1 - 1} = -\\frac{1}{2} x + \\frac{1}{2} x^{2} \\\\\\\nN_{2} &=& \\frac{x - (-1)}{0 - (-1)} \\frac{x - 1}{0 - 1} = 1 - x^{2} \\\\\\\nN_{3} &=& \\frac{x - 0}{1 - 0} \\frac{x - 1}{1 - 1} = \\frac{1}{2} x + \\frac{1}{2} x^{2}\n\\end{eqnarray}\n\n\n```\n\n```\n", "meta": {"hexsha": "7c739b70f55fe24ab4a05be731f6c21f54682618", "size": 12694, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Funciones de forma/funciones forma barra.ipynb", "max_stars_repo_name": "ClaudioVZ/Teoria-FEM-Python", "max_stars_repo_head_hexsha": "8a4532f282c38737fb08d1216aa859ecb1e5b209", "max_stars_repo_licenses": ["Artistic-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-09-28T00:23:45.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-28T00:23:45.000Z", "max_issues_repo_path": "Funciones de forma/funciones forma barra.ipynb", "max_issues_repo_name": "ClaudioVZ/Teoria-FEM-Python", "max_issues_repo_head_hexsha": "8a4532f282c38737fb08d1216aa859ecb1e5b209", "max_issues_repo_licenses": ["Artistic-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Funciones de forma/funciones forma barra.ipynb", "max_forks_repo_name": 
"ClaudioVZ/Teoria-FEM-Python", "max_forks_repo_head_hexsha": "8a4532f282c38737fb08d1216aa859ecb1e5b209", "max_forks_repo_licenses": ["Artistic-2.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2015-12-04T12:42:00.000Z", "max_forks_repo_forks_event_max_datetime": "2019-10-31T21:50:32.000Z", "avg_line_length": 22.6274509804, "max_line_length": 195, "alphanum_fraction": 0.377579959, "converted": true, "num_tokens": 2309, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9518632288833652, "lm_q2_score": 0.8824278618165526, "lm_q1q2_score": 0.8399506338053477}} {"text": "> This is one of the 100 recipes of the [IPython Cookbook](http://ipython-books.github.io/), the definitive guide to high-performance scientific computing and data science in Python.\n\n\n# 15.2. Solving equations and inequalities\n\n\n```python\nfrom sympy import *\ninit_printing()\n```\n\n\n```python\nvar('x y z a')\n```\n\nUse the function solve to resolve equations (the right hand side is always 0).\n\n\n```python\nsolve(x**2 - a, x)\n```\n\nYou can also solve inequations. You may need to specify the domain of your variables. Here, we tell SymPy that x is a real variable.\n\n\n```python\nx = Symbol('x')\nsolve_univariate_inequality(x**2 > 4, x)\n```\n\n## Systems of equations\n\nThis function also accepts systems of equations (here a linear system).\n\n\n```python\nsolve([x + 2*y + 1, x - 3*y - 2], x, y)\n```\n\nNon-linear systems are also supported.\n\n\n```python\nsolve([x**2 + y**2 - 1, x**2 - y**2 - S(1)/2], x, y)\n```\n\nSingular linear systems can also be solved (here, there are infinitely many equations because the two equations are colinear).\n\n\n```python\nsolve([x + 2*y + 1, -x - 2*y - 1], x, y)\n```\n\nNow, let's solve a linear system using matrices with symbolic variables.\n\n\n```python\nvar('a b c d u v')\n```\n\nWe create the augmented matrix, which is the horizontal concatenation of the system's matrix with the linear coefficients, and the right-hand side vector.\n\n\n```python\nM = Matrix([[a, b, u], [c, d, v]]); M\n```\n\n\n```python\nsolve_linear_system(M, x, y)\n```\n\nThis system needs to be non-singular to have a unique solution, which is equivalent to say that the determinant of the system's matrix needs to be non-zero (otherwise the denominators in the fractions above are equal to zero).\n\n\n```python\ndet(M[:2,:2])\n```\n\n> You'll find all the explanations, figures, references, and much more in the book (to be released later this summer).\n\n> [IPython Cookbook](http://ipython-books.github.io/), by [Cyrille Rossant](http://cyrille.rossant.net), Packt Publishing, 2014 (500 pages).\n", "meta": {"hexsha": "53c2adf45eafa23c4ba40c2873593195b6ba2e3d", "size": 4901, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/chapter15_symbolic/02_solvers.ipynb", "max_stars_repo_name": "hidenori-t/cookbook-code", "max_stars_repo_head_hexsha": "750f546ed87b09d28532884b8074b96cd8d32a38", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 820, "max_stars_repo_stars_event_min_datetime": "2015-01-01T18:15:54.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-06T16:15:07.000Z", "max_issues_repo_path": "notebooks/chapter15_symbolic/02_solvers.ipynb", "max_issues_repo_name": "bndxn/cookbook-code", "max_issues_repo_head_hexsha": "90c31341edccf039187e6a3809fb336f83bb758f", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": 31, "max_issues_repo_issues_event_min_datetime": "2015-02-25T22:08:09.000Z", 
"max_issues_repo_issues_event_max_datetime": "2018-09-28T08:41:38.000Z", "max_forks_repo_path": "notebooks/chapter15_symbolic/02_solvers.ipynb", "max_forks_repo_name": "bndxn/cookbook-code", "max_forks_repo_head_hexsha": "90c31341edccf039187e6a3809fb336f83bb758f", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 483, "max_forks_repo_forks_event_min_datetime": "2015-01-02T13:53:11.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-18T21:05:16.000Z", "avg_line_length": 20.8553191489, "max_line_length": 232, "alphanum_fraction": 0.5415221383, "converted": true, "num_tokens": 534, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9230391558355999, "lm_q2_score": 0.9099070139780661, "lm_q1q2_score": 0.8398798020712055}} {"text": "```python\nfrom sympy import *\nfrom math import factorial\n```\n\n# Discrete Random Variables\n\n\n```python\n\"\"\"\nDefinition:\nThe cumulative distribution function (CDF), F(\u00b7), of a random\nvariable, X, is defined by\n\nF(x) := P(X \u2264 x).\n\n\"\"\"\n```\n\n\n```python\n#Exemplo: Jogar dado\nx = (1,2,3,4,5,6)\nwp = 1/len(x)\n```\n\n\n```python\n\"\"\"\nDefinition:\nA discrete random variable, X, has probability mass function (PMF),\np(\u00b7), if p(x) \u2265 0 and for all events A we have\nP(X \u2208 A) = X x\u2208A p(x).\n\n\"\"\"\n```\n\n\n```python\n#Probabilidade de ser maior ou igual a 4 P(x=>4)\npx=0\nseq = ''\nprint('Probabilidade de ser maior ou igual a 4 P(x=>4)', x[3:])\nfor i in x[3:]:\n px = px+wp\n seq = seq + ' ' + str(i)\n print('probabilidade: ', round(px, 2), ' valor', seq)\n```\n\n Probabilidade de ser maior ou igual a 4 P(x=>4) (4, 5, 6)\n probabilidade: 0.17 valor 4\n probabilidade: 0.33 valor 4 5\n probabilidade: 0.5 valor 4 5 6\n\n\n\n```python\n\"\"\"\nDefinition:\nThe expected value of a discrete random variable, X, is given by\nE[X] := SUM xi p(xi).\n\n\"\"\"\n```\n\n\n```python\n#Para o mesmo caso, o valor esperado seria:\nEx = 0\nfor i in x:\n Ex = Ex + wp*i\nprint('O valor esperado E(x) \u00e9 de:', Ex)\n```\n\n O valor esperado E(x) \u00e9 de: 3.5\n\n\n\n```python\n\"\"\"\nDefinition. The variance of any random variable, X, is defined as\nVar(X) := E[(X \u2212 E[X])2]\n = E[X2] \u2212 E[X]2\n\n\"\"\"\n```\n\n\n```python\n#Obtendo a variancia para o caso\n\nvarx = 0\nEx2 = 0\n\nfor i in x:\n Ex2 = Ex2 + wp*i**2\n\nvarx = Ex2 - Ex**2\n\nprint(\"A variancia para de um dado \u00e9 de:\", round(varx, 2))\n```\n\n A variancia para de um dado \u00e9 de: 2.92\n\n\n# The Binomial Distribution\n\n\n```python\n\"\"\"\n\nWe say X has a binomial distribution, or X \u223c Bin(n, p), if\nP(X = r) = (n r)p**r(1 \u2212 p)**n\u2212r\n\nFor example, X might represent the number of heads in n independent coin\ntosses, where p = P(head). The mean and variance of the binomial distribution\nsatisfy\nE[X] = np\nVar(X) = np(1 \u2212 p).\n\n\"\"\"\n```\n\n\n```python\n\"\"\"\n(n r) = n!/(r!(n-r)!)\n\"\"\"\n```\n\n### A Financial Application\n\n\n```python\n\"\"\"\n\nSuppose a fund manager outperforms the market in a given year with\nprobability p and that she underperforms the market with probability 1 \u2212 p.\nShe has a track record of 10 years and has outperformed the market in 8 of\nthe 10 years.\nMoreover, performance in any one year is independent of performance in\nother years.\nQuestion: How likely is a track record as good as this if the fund manager had no\nskill so that p = 1/2?\nAnswer: Let X be the number of outperforming years. 
Since the fund manager\nhas no skill, X \u223c Bin(n = 10, p = 1/2) and\nP(X \u2265 8) = Xnr=8(n r)p**r(1 \u2212 p)**n\u2212r\n\nQuestion: Suppose there are M fund managers? How well should the best one do\nover the 10-year period if none of them had any skill?\n\n\"\"\"\n```\n\n\n```python\n#Resolvendo a quest\u00e3o a cima, temos:\nn=10\np=1/2\nr=8\n\nPx = (factorial(n)/(factorial(r)*factorial(n-r)))*p**(r)*(1-p)**(n-r)\n\nprint('A probabilidade \u00e9 de:', round(Px*100, 1), '%')\n\n\n```\n\n A probabilidade \u00e9 de: 4.4 %\n\n\n# The Poisson Distribution\n\n\n```python\n\"\"\"\nWe say X has a Poisson(\u03bb) distribution if\nP(X = r) = \u03bb**(r)*e**(-\u03bb)/r!\n\nE[X] = \u03bb and Var(X) = \u03bb\n\n\"\"\"\n```\n\n# Bayes\u2019 Theorem\n\n\n```python\n\"\"\"\nLet A and B be two events for which P(B) 6= 0. Then\nP(A | B) = P(ATB)/P(B)\n = P(B | A)P(A)/P(B)\n = P(B | A)P(A)/(SUM P(B | Aj)P(Aj))\nwhere the Aj\u2019s form a partition of the sample-space.\n\n\"\"\"\n```\n\n\n```python\n\"\"\"\nLet Y1 and Y2 be the outcomes of tossing two fair dice independently of\none another.\n\nLet X := Y1 + Y2. Question: What is P(Y1 \u2265 4 | X \u2265 8)?\n\n\"\"\"\n```\n\n\n```python\n#Resolvendo a quest\u00e3o a cima, temos:\n\ny1 = (1,2,3,4,5,6)\ny2 =(1,2,3,4,5,6)\n\nwp=1/6\n\nfor i in x[3:]:\n pa = pa+wp\n\n\n```\n\n# Continuous Random Variables\n\n\n```python\n\"\"\"\nDefinition. A continuous random variable, X, has probability density function\n(PDF), f(\u00b7), if f(x) \u2265 0 and for all events A\n\n\n\"\"\"\n```\n\n\n```python\n\n```\n\n# The Normal Distribution\n\n\n```python\n\n```\n", "meta": {"hexsha": "6dd4e5721ddb6e000f8e25e79019951687dd25ed", "size": 8703, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Financial Engineering & Risk Management/Introduction to Financial Engineering and Risk Management/probability(I).ipynb", "max_stars_repo_name": "MaikeRM/FinancialEngineering", "max_stars_repo_head_hexsha": "d5881995ff3097e77cb62633ab22d25625c81ee7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Financial Engineering & Risk Management/Introduction to Financial Engineering and Risk Management/probability(I).ipynb", "max_issues_repo_name": "MaikeRM/FinancialEngineering", "max_issues_repo_head_hexsha": "d5881995ff3097e77cb62633ab22d25625c81ee7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Financial Engineering & Risk Management/Introduction to Financial Engineering and Risk Management/probability(I).ipynb", "max_forks_repo_name": "MaikeRM/FinancialEngineering", "max_forks_repo_head_hexsha": "d5881995ff3097e77cb62633ab22d25625c81ee7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 21.3308823529, "max_line_length": 90, "alphanum_fraction": 0.4736297828, "converted": true, "num_tokens": 1325, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9230391664210672, "lm_q2_score": 0.9099069974872589, "lm_q1q2_score": 0.8398797964813356}} {"text": "### \ubbf8\ubd84\n\n\n```python\n#\uc218\uce58\ubbf8\ubd84\ndef f(x):\n return x**3-3*x**2+x\n\nx=np.linspace(-1,3,400)\ny=f(x)\n\nfrom scipy.misc import derivative\n\nprint(derivative(f,0,dx=0.005)) #x=0\uc778 \uc9c0\uc810\uc5d0\uc11c \uc218\uce58\ubbf8\ubd84(\uac04\uaca9:0.005)\n```\n\n 1.000025\n\n\n### \uc2ec\ud30c\uc774(Sympy)\ub97c \uc774\uc6a9\ud55c \ud568\uc218\ubbf8\ubd84\n\n\n```python\nimport sympy\n\n# Juypter \ub178\ud2b8\ubd81\uc5d0\uc11c \uc218\ud559\uc2dd\uc758 LaTeX \ud45c\ud604\uc744 \uc704\ud574 \ud544\uc694\ud568\nsympy.init_printing(use_latex='mathjax')\n```\n\n\n```python\n#\uc2ec\ubcfc \ubcc0\uc218 \uc120\uc5b8\nx = sympy.symbols('x')\nx\n```\n\n\n\n\n$\\displaystyle x$\n\n\n\n\n```python\n\nf= x *sympy.exp(x)\nf\n```\n\n\n\n\n$\\displaystyle x e^{x}$\n\n\n\n\n```python\n#\ubbf8\ubd84\nsympy.diff(f)\n```\n\n\n\n\n$\\displaystyle x e^{x} + e^{x}$\n\n\n\n\n```python\n#\uc18c\uc778\uc218\ubd84\ud574 \ub4f1 \uc218\uc2dd\uc815\ub9ac\nsympy.simplify(sympy.diff(f))\n```\n\n\n\n\n$\\displaystyle \\left(x + 1\\right) e^{x}$\n\n\n\n\n```python\n#\ud568\uc218\uc120\uc5b8\nx, y = sympy.symbols('x y')\nf = x ** 2 + 4 * x * y + 4 * y ** 2\nf\n```\n\n\n\n\n$\\displaystyle x^{2} + 4 x y + 4 y^{2}$\n\n\n\n\n```python\n#x\uc5d0 \ub300\ud55c \ud3b8\ubbf8\ubd84\nsympy.diff(f,x)\n```\n\n\n\n\n$\\displaystyle 2 x + 4 y$\n\n\n\n\n```python\n#y\uc5d0 \ub300\ud55c \ud3b8\ubbf8\ubd84\nsympy.diff(f,y)\n```\n\n\n\n\n$\\displaystyle 4 x + 8 y$\n\n\n\n#### \uc815\uaddc\ubd84\ud3ec\uc758 \ud655\ub960\ubc00\ub3c4\ud568\uc218 \ubbf8\ubd84\ud558\uae30\n\n\n```python\nx,mu,sigma = sympy.symbols('x mu sigma')\nf = sympy.exp((x - mu) ** 2 / sigma ** 2)\nf \n```\n\n\n\n\n$\\displaystyle e^{\\frac{\\left(- \\mu + x\\right)^{2}}{\\sigma^{2}}}$\n\n\n\n\n```python\nsympy.diff(f,x)\n```\n\n\n\n\n$\\displaystyle \\frac{\\left(- 2 \\mu + 2 x\\right) e^{\\frac{\\left(- \\mu + x\\right)^{2}}{\\sigma^{2}}}}{\\sigma^{2}}$\n\n\n\n\n```python\nsympy.simplify(sympy.diff(f, x))\n```\n\n\n\n\n$\\displaystyle \\frac{2 \\left(- \\mu + x\\right) e^{\\frac{\\left(\\mu - x\\right)^{2}}{\\sigma^{2}}}}{\\sigma^{2}}$\n\n\n\n\n```python\n#\uc774\ucc28\ub3c4\ud568\uc218\nsympy.diff(f,x,x)\n```\n\n\n\n\n$\\displaystyle \\frac{2 \\left(1 + \\frac{2 \\left(\\mu - x\\right)^{2}}{\\sigma^{2}}\\right) e^{\\frac{\\left(\\mu - x\\right)^{2}}{\\sigma^{2}}}}{\\sigma^{2}}$\n\n\n\n\n```python\n#4.2.5\nx= sympy.symbols('x')\nf= x**3 -1\n```\n\n\n```python\nsympy.diff(f)\n```\n\n\n\n\n$\\displaystyle 3 x^{2}$\n\n\n\n\n```python\nx, k= sympy.symbols('x k')\nf2=sympy.log(x**2-3*k)\nsympy.diff(f2,x)\n```\n\n\n\n\n$\\displaystyle \\frac{2 x}{- 3 k + x^{2}}$\n\n\n\n\n```python\nx,a,b= sympy.symbols('x a b')\nf3=sympy.exp(a*x**b)\nsympy.diff(f3,x)\n```\n\n\n\n\n$\\displaystyle \\frac{a b x^{b} e^{a x^{b}}}{x}$\n\n\n\n#### 4.2.6 \ub2e4\uc74c\ud568\uc218\uc5d0\ub300\ud55c 1\ucc28/2\ucc28 \ud3b8\ubbf8\ubd84 \uad6c\ud558\uae30\n\n\n```python\nx,y= sympy. 
symbols('x y')\nf=sympy.exp(x**2+2*y**2)\nf\n```\n\n\n\n\n$\\displaystyle e^{x^{2} + 2 y^{2}}$\n\n\n\n\n```python\nsympy.diff(f,x)\n```\n\n\n\n\n$\\displaystyle 2 x e^{x^{2} + 2 y^{2}}$\n\n\n\n\n```python\nsympy.diff(f,y)\n```\n\n\n\n\n$\\displaystyle 4 y e^{x^{2} + 2 y^{2}}$\n\n\n\n\n```python\nsympy.diff(f,x,x)\n```\n\n\n\n\n$\\displaystyle 2 \\left(2 x^{2} + 1\\right) e^{x^{2} + 2 y^{2}}$\n\n\n\n\n```python\nsympy.diff(f,x,y)\n```\n\n\n\n\n$\\displaystyle 8 x y e^{x^{2} + 2 y^{2}}$\n\n\n\n\n```python\nsympy.diff(f,y,x) #\uc288\uc640\ub974\uce20 \uc815\ub9ac\uc5d0 \uc758\ud574 diff(f,x,y)\uc640 \ub3d9\uc77c\n```\n\n\n\n\n$\\displaystyle 8 x y e^{x^{2} + 2 y^{2}}$\n\n\n\n\n```python\nsympy.diff(f,y,y)\n```\n\n\n\n\n$\\displaystyle 4 \\left(4 y^{2} + 1\\right) e^{x^{2} + 2 y^{2}}$\n\n\n\n### \uc801\ubd84\n\n\n```python\nimport sympy\n\nsympy.init_printing(use_latex='mathjax')\n\nx = sympy.symbols('x')\nf = x * sympy.exp(x) + sympy.exp(x)\nf\n\n```\n\n\n\n\n$\\displaystyle x e^{x} + e^{x}$\n\n\n\n\n```python\nsympy.integrate(f)\n```\n\n\n\n\n$\\displaystyle x e^{x}$\n\n\n\n\n```python\nx, y = sympy.symbols('x y')\nf = 2 * x + y\nf\n```\n\n\n\n\n$\\displaystyle 2 x + y$\n\n\n\n\n```python\nsympy.integrate(f, x)\n```\n\n\n\n\n$\\displaystyle x^{2} + x y$\n\n\n\n\n```python\nx, y = sympy.symbols('x y')\nf= 1+ x*y\nsympy.integrate(f,x)\n```\n\n\n\n\n$\\displaystyle \\frac{x^{2} y}{2} + x$\n\n\n\n\n```python\nf1= x*y*sympy.exp(x**2+y**2)\nsympy.integrate(f1,x)\n```\n\n\n\n\n$\\displaystyle \\frac{y e^{x^{2} + y^{2}}}{2}$\n\n\n\n#### \ub2e4\ucc28\ub3c4\ud568\uc218\uc640 \ub2e4\uc911\uc801\ubd84\n\n\n```python\n#\ub2e4\uc74c \ubd80\uc815\uc801\ubd84\uc744 \uad6c\ud558\ub77c\nx, y = sympy.symbols('x y')\nf=x*y*sympy.exp(x**2+y**2)\nsympy.integrate(f,x,y)\n```\n\n\n\n\n$\\displaystyle \\frac{e^{x^{2} + y^{2}}}{4}$\n\n\n\n#### \uc815\uc801\ubd84\n- a,b\uad6c\uac04 \uc0ac\uc774\uc758 \uba74\uc801\n-$$\n\\begin{align}\n\\int_{a}^{b} f(x) dx \n\\tag{4.3.13}\n\\end{align}\n$$\n\n\n```python\nx, y = sympy.symbols('x y')\nf = x ** 3 - 3 * x ** 2 + x + 6\nf\n```\n\n\n\n\n$\\displaystyle x^{3} - 3 x^{2} + x + 6$\n\n\n\n\n```python\n# \ubd80\uc815 \uc801\ubd84\nF = sympy.integrate(f)\nF\n```\n\n\n\n\n$\\displaystyle \\frac{x^{4}}{4} - x^{3} + \\frac{x^{2}}{2} + 6 x$\n\n\n\n\n```python\n(F.subs(x, 2) - F.subs(x, 0)).evalf()\n```\n\n\n\n\n$\\displaystyle 10.0$\n\n\n\n#### \uc218\uce58\uc801\ubd84\n\n\n\n```python\nimport scipy.integrate as integrate\ndef f(x):\n return x ** 3 - 3 * x ** 2 + x + 6\n\n\nintegrate.quad(f, 0, 2) # \uc815\uc801\ubd84 (\uc218\uce58\uc801\ubd84)\n```\n\n\n\n\n$\\displaystyle \\left( 10.0, \\ 1.1102230246251565e-13\\right)$\n\n\n\n- \uc218\uce58\uc801\ubd84 \uacb0\uacfc\uac12\uc758 \ub450\ubc88\uc9f8 \uc22b\uc790\ub294 \uc624\ucc28\uc758 \uc0c1\ud55c\uac12\uc744 \ub098\ud0c0\ub0b8\ub2e4\n\n\ub2e4\uc74c\uc758 \ud568\uc218 \uc218\uce58\uc774\uc911\uc801\ubd84\ud558\uae30\n$$\n\\begin{align}\n\\int_0^{\\infty} \\int_1^{\\infty} \\dfrac{\\exp(-xy)}{y^2} dx dy\n\\tag{4.3.20}\n\\end{align}\n$$\n\n\n```python\ndef f(x,y):\n return np.exp(-x*y)/y**2\n\nintegrate.dblquad(f,1,np.inf, lambda x:0, lambda x: np.inf)\n```\n\n\n\n\n$\\displaystyle \\left( 0.4999999999999961, \\ 1.068453874338024e-08\\right)$\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "c53613aeb5bf4ab0a1b158613d4604a0a472b4f1", "size": 19795, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "math/04_differentiative_sympy_integral.ipynb", "max_stars_repo_name": "dayoungMM/TIL", "max_stars_repo_head_hexsha": 
"b844ef5621657908d4c256cdfe233462dd075e8b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "math/04_differentiative_sympy_integral.ipynb", "max_issues_repo_name": "dayoungMM/TIL", "max_issues_repo_head_hexsha": "b844ef5621657908d4c256cdfe233462dd075e8b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "math/04_differentiative_sympy_integral.ipynb", "max_forks_repo_name": "dayoungMM/TIL", "max_forks_repo_head_hexsha": "b844ef5621657908d4c256cdfe233462dd075e8b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 18.9425837321, "max_line_length": 171, "alphanum_fraction": 0.3780247537, "converted": true, "num_tokens": 2071, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9553191259110588, "lm_q2_score": 0.8791467706759584, "lm_q1q2_score": 0.8398657245096867}} {"text": "# Modeling data 2\n\n## Building a model\n\nRecall that in notebook 3, we saw that we could use a mathematical function to classify an image as an apple or a banana, based on the average amount of green in an image:\n\n\n\n\n\n\nA common function for performing this kind of **classification** is the sigmoid that we saw in the last notebook, and that we will now extend by adding two **parameters**, $w$ and $b$:\n\n$$\\sigma(x; w, b) := \\frac{1}{1 + \\exp(-wx + b)}$$\n\n$$ x = \\mathrm{data} $$\n\n\\begin{align}\n\\sigma(x;w,b) &\\approx 0 \\implies \\mathrm{apple} \\\\\n\\sigma(x;w,b) &\\approx 1 \\implies \\mathrm{banana}\n\\end{align}\n\nIn our mathematical notation above, the `;` in the function differentiates between the **data** and the **parameters**. `x` is the data and is determined from the image. The parameters, `w` and `b`, are numbers which we choose to make our function match the results it should be modeling.\n\nNote that in the code below, we don't distinguish between data and parameters - both are just inputs to our function, \u03c3!\n\n\n```julia\nusing Images\n\napple = load(\"d\n ata/10_100.jpg\")\nbanana = load(\"data/104_100.jpg\")\n\napple_green_amount = mean(Float64.(green.(apple)))\nbanana_green_amount = mean(Float64.(green.(banana)))\n\n\"Average green for apple = $apple_green_amount; \" *\n\"Average green for banana = $banana_green_amount; \"\n```\n\n\n```julia\n\u03c3(x, w, b) = 1 / (1 + exp(-w * x + b))\n```\n\nWhat we want is that when we give \u03c3 as input the average green for the apple, roughly `x = 0.3385`, it should return as output something close to 0, meaning \"apple\". 
And when we give \u03c3 the input `x = 0.8808`, it should output something close to 1, meaning \"banana\".\n\nBy changing the parameters of the function, we can change the shape of the function, and hence make it represent, or **fit**, the data better!\n\n## Data fitting by varying parameters\n\nWe can understand how our choice of `w` and `b` affects our model by seeing how our values for `w` and `b` change the plot of the $\\sigma$ function.\n\nTo do so, we will use the `Interact.jl` Julia package, which provides \"widgets\" for controlling parameters interactively via sliders:\n\n\n```julia\nusing Plots; gr() # GR works better for interactive manipulations\nusing Interact # package for interactive manipulation\n```\n\nRun the code in the next cell. You should see two \"sliders\" appear, one for `w` and one for `b`.\n\n**Game**: \nMove both of those sliders around until the blue curve, labeled \"model\", which is the graph of the `\\sigma` function, passes through *both* of the data points at the same time.\n\n\n```julia\n@manipulate for w in -10:0.01:30, b in 0:0.1:20\n \n plot(x -> \u03c3(x, w, b), xlim=(-0,1), ylim=(-0.1,1.1), label=\"model\", legend=:topleft, lw=3)\n \n scatter!([apple_green_amount], [0.0], label=\"apple\", ms=5) # marker size = 5\n scatter!([banana_green_amount], [1.0], label=\"banana\", ms=5)\n \nend\n```\n\nNotice that the two parameters do two very different things. The **weight**, `w`, determines *how fast* the transition between 0 and 1 occurs. It encodes how trustworthy we think our data actually is, and in what range we should be putting points between 0 and 1 and thus calling them \"unsure\". The **bias**, `b`, encodes *where* on the $x$-axis the switch should take place. It can be seen as shifting the function left-right. We'll come to understand these *parameters* more in notebook 6.\n\nHere are some parameter choices that work well:\n\n\n```julia\nw = 25.58; b = 15.6\n\nplot(x -> \u03c3(x, w, b), xlim=(0,1), ylim=(-0.1,1.1), label=\"model\", legend=:topleft, lw=3)\n\nscatter!([apple_green_amount], [0.0], label=\"apple\")\nscatter!([banana_green_amount],[1.0], label=\"banana\")\n```\n\n(Note that in this problem there are many combinations of `w` and `b` that fit the data well.)\n\nOnce we have a model, we have a computational representation for how to choose between \"apple\" and \"banana\". So let's pull in some new images and see what our model says about them!\n\n\n```julia\napple2 = load(\"data/107_100.jpg\")\n```\n\n\n```julia\ngreen_amount = mean(Float64.(green.(apple2)))\n@show green_amount\n\nscatter!([green_amount], [0.0], label=\"new apple\")\n```\n\nOur model successfully says that our new image is an apple! Pat yourself on the back: you've actually just trained your first neural network!\n\n#### Exercise 1\n\nLoad the image of a banana in `data/8_100.jpg` as `mybanana`. Edit the code below to calculate the amount of green in `mybanana` and to overlay data for this image with the existing model and data points.\n\n# To get the desired overlay, the code we need is\n\n```julia\nmybanana = load(\"data/8_100.jpg\")\nmybanana_green_amount = mean(Float64.(green.(banana)))\nscatter!([mybanana_green_amount], [1.0], label=\"my banana\")\n```\n\n## Closing remarks: bigger models, more data, more accuracy\n\nThat last apple should start making you think: not all apples are red; some are yellow. \"Redness\" is one attribute of being an apple, but isn't the whole thing. What we need to do is incorporate more ideas into our model by allowing more inputs. 
However, more inputs would mean more parameters to play with. Also, we would like to have the computer start \"learning\" on its own, instead of modifying the parameters ourselves until we think it \"looks right\". How do we take the next step?\n\nThe first thing to think about is, if you wanted to incorporate more data into the model, how would you change the sigmoid function? Play around with some ideas. But also, start thinking about how you chose parameters. What process did you do to finally end up at good parameters? These two problems (working with models with more data and automatically choosing parameters) are the last remaining step to understanding deep learning.\n", "meta": {"hexsha": "b3002a3a0639fa36798059e0c795a4103652608c", "size": 8764, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "introductory-tutorials/broader-topics-and-ecosystem/intro-to-ml/05. ML - Building models.ipynb", "max_stars_repo_name": "grenkoca/JuliaTutorials", "max_stars_repo_head_hexsha": "3968e0430db77856112521522e10f7da0d7610a0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 535, "max_stars_repo_stars_event_min_datetime": "2020-07-15T14:56:11.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-25T12:50:32.000Z", "max_issues_repo_path": "introductory-tutorials/broader-topics-and-ecosystem/intro-to-ml/05. ML - Building models.ipynb", "max_issues_repo_name": "grenkoca/JuliaTutorials", "max_issues_repo_head_hexsha": "3968e0430db77856112521522e10f7da0d7610a0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 42, "max_issues_repo_issues_event_min_datetime": "2018-02-25T22:53:47.000Z", "max_issues_repo_issues_event_max_datetime": "2020-05-14T02:15:50.000Z", "max_forks_repo_path": "introductory-tutorials/broader-topics-and-ecosystem/intro-to-ml/05. ML - Building models.ipynb", "max_forks_repo_name": "grenkoca/JuliaTutorials", "max_forks_repo_head_hexsha": "3968e0430db77856112521522e10f7da0d7610a0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 394, "max_forks_repo_forks_event_min_datetime": "2020-07-14T23:22:24.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-28T20:12:57.000Z", "avg_line_length": 46.6170212766, "max_line_length": 1003, "alphanum_fraction": 0.5964171611, "converted": true, "num_tokens": 1480, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9465966747198242, "lm_q2_score": 0.8872045996818986, "lm_q1q2_score": 0.8398249238550181}} {"text": "# ML 101\n\n## 2. Linear Algebra\n\nThe following system of equations:\n\n$\\begin{split}\n4 x_1 - 5 x_2 & = -13 \\\\\n-2x_1 + 3 x_2 & = 9\n\\end{split}$\n\nWe are looking for a unique solution for the two variables $x_1$ and $x_2$. The system can be described as:\n\n$$Ax = b$$\n\nas matrices:\n\n$A = \\begin{bmatrix}\n 4 & -5 \\\\[0.3em]\n -2 & 3 \n \\end{bmatrix},\\ \n b = \\begin{bmatrix}\n -13 \\\\[0.3em]\n 9 \n \\end{bmatrix}$\n\n\n```python\nimport numpy as np\n\na = np.array([[4, -5], [-2, 3]])\nb = np.array([-13, 9])\nx = np.linalg.solve(a, b)\nprint(f'{x}')\n```\n\nA **scalar** is an element in a vector, containing a real number **value**. In a vector space model or a vector mapping of (symbolic, qualitative, or quantitative) properties the scalar holds the concrete value or property of a variable.\n\nA **vector** is an array, tuple, or ordered list of scalars (or elements) of size $n$, with $n$ a positive integer. 
The **length** of the vector, that is the number of scalars in the vector, is also called the **order** of the vector.\n\n**Vectorization** is the process of creating a vector from some data using some process.\n\nVectors of the length $n$ could be treated like points in $n$-dimensional space. One can calculate the distance between such points using measures like [Euclidean Distance](https://en.wikipedia.org/wiki/Euclidean_distance). The similarity of vectors could also be calculated using [Cosine Similarity](https://en.wikipedia.org/wiki/Cosine_similarity).\n\n\n```python\nnp.show_config()\n```\n\n## Notation\n\nA **matrix** is a list of vectors that all are of the same length. $A$ is a matrix with $m$ rows and $n$ columns, entries of $A$ are real numbers:\n\n$$A \\in \\mathbb{R}^{m \\times n}$$\n\nA vector $x$ with $n$ entries of real numbers, could also be thought of as a matrix with $n$ rows and $1$ column, or as known as a **column vector**.\n\n$x = \\begin{bmatrix}\n x_1 \\\\[0.3em]\n x_2 \\\\[0.3em]\n \\vdots \\\\[0.3em]\n x_n\n \\end{bmatrix}$\n\nRepresenting a **row vector**, that is a matrix with $1$ row and $n$ columns, we write $x^T$ (this denotes the transpose of $x$, see above).\n\n$x^T = \\begin{bmatrix}\n x_1 & x_2 & \\cdots & x_n\n \\end{bmatrix}$\n\nWe use the notation $a_{ij}$ (or $A_{ij}$, $A_{i,j}$, etc.) to denote the entry of $A$ in the $i$th row and\n$j$th column:\n\n$A = \\begin{bmatrix}\n a_{11} & a_{12} & \\cdots & a_{1n} \\\\[0.3em]\n a_{21} & a_{22} & \\cdots & a_{2n} \\\\[0.3em]\n \\vdots & \\vdots & \\ddots & \\vdots \\\\[0.3em]\n a_{m1} & a_{m2} & \\cdots & a_{mn} \n \\end{bmatrix}$\n\nWe denote the $j^{th}$ column of $A$ by $a_j$ or $A_{:,j}$:\n\n$A = \\begin{bmatrix}\n \\big| & \\big| & & \\big| \\\\[0.3em]\n a_{1} & a_{2} & \\cdots & a_{n} \\\\[0.3em]\n \\big| & \\big| & & \\big| \n \\end{bmatrix}$\n\nWe denote the $i^{th}$ row of $A$ by $a_i^T$ or $A_{i,:}$:\n\n$A = \\begin{bmatrix}\n -- & a_1^T & -- \\\\[0.3em]\n -- & a_2^T & -- \\\\[0.3em]\n & \\vdots & \\\\[0.3em]\n -- & a_m^T & -- \n \\end{bmatrix}$\n\nA $n \\times m$ matrix is a two-dimensional array with $n$ rows and $m$ columns.\n\n## Matrix Multiplication\n\nThe result of the multiplication of two matrixes $A \\in \\mathbb{R}^{m \\times n}$ and $B \\in \\mathbb{R}^{n \\times p}$ is the matrix:\n\n$$C = AB \\in \\mathbb{R}^{m \\times n}$$\n\nThat is, we are multiplying the columns of $A$ with the rows of $B$:\n\n$$C_{ij}=\\sum_{k=1}^n{A_{ij}B_{kj}}$$\n\nThe number of columns in $A$ must be equal to the number of rows in $B$.\n\n### Vector-Vector Products\n\n#### Inner or Dot Product of Two Vectors\n\nFor two vectors $x, y \\in \\mathbb{R}^n$, the **inner product** or **dot product** $x^T y$ is a real number:\n\n$x^T y \\in \\mathbb{R} = \\begin{bmatrix}\n x_1 & x_2 & \\cdots & x_n\n \\end{bmatrix} \\begin{bmatrix}\n y_1 \\\\[0.3em]\n y_2 \\\\[0.3em]\n \\vdots \\\\[0.3em]\n y_n\n \\end{bmatrix} = \\sum_{i=1}^{n}{x_i y_i}$\n\nThe **inner products** are a special case of matrix multiplication.\n\nIt is always the case that $x^T y = y^T x$.\n\n##### Example\n\nTo calculate the inner product of two vectors $x = [1 2 3 4]$ and $y = [5 6 7 8]$, we can loop through the vector and multiply and sum the scalars (this is simplified code):\n\n\n```python\nx = (1, 2, 3, 4)\ny = (5, 6, 7, 8)\nn = len(x)\nif n == len(y):\n result = 0\n for i in range(n):\n result += x[i] * y[i]\n print(result)\n```\n\nIt is clear that in the code above we could change line 7 to `result += y[i] * x[i]` without affecting the 
result.\n\nWe can use the *numpy* module to apply the same operation, to calculate the **inner product**. We import the *numpy* module and assign it a name *np* for the following code:\n\n\n```python\nimport numpy as np\n```\n\nWe define the vectors $x$ and $y$ using *numpy*:\n\n\n```python\nx = np.array([1, 2, 3, 4])\ny = np.array([5, 6, 7, 8])\nprint(\"x:\", x)\nprint(\"y:\", y)\n```\n\nWe can now calculate the $dot$ or $inner product$ using the *dot* function of *numpy*:\n\n\n```python\nnp.dot(x, y)\n```\n\nThe order of the arguments is irrelevant:\n\n\n```python\nnp.dot(y, x)\n```\n\nNote that both vectors are actually **row vectors** in the above code. We can transpose them to column vectors by using the *shape* property:\n\n\n```python\nprint(\"x:\", x)\nx.shape = (4, 1)\nprint(\"xT:\", x)\nprint(\"y:\", y)\ny.shape = (4, 1)\nprint(\"yT:\", y)\n```\n\nIn fact, in our understanding of Linear Algebra, we take the arrays above to represent **row vectors**. *Numpy* treates them differently.\n\nWe see the issues when we try to transform the array objects. Usually, we can transform a row vector into a column vector in *numpy* by using the *T* method on vector or matrix objects:\n\n\n```python\nx = np.array([1, 2, 3, 4])\ny = np.array([5, 6, 7, 8])\nprint(\"x:\", x)\nprint(\"y:\", y)\nprint(\"xT:\", x.T)\nprint(\"yT:\", y.T)\n```\n\nThe problem here is that this does not do, what we expect it to do. It only works, if we declare the variables not to be arrays of numbers, but in fact a matrix:\n\n\n```python\nx = np.array([[1, 2, 3, 4]])\ny = np.array([[5, 6, 7, 8]])\nprint(\"x:\", x)\nprint(\"y:\", y)\nprint(\"xT:\", x.T)\nprint(\"yT:\", y.T)\n\n```\n\nNote that the *numpy* functions *dot* and *outer* are not affected by this distinction. We can compute the dot product using the mathematical equation above in *numpy* using the new $x$ and $y$ row vectors:\n\n\n```python\nprint(\"x:\", x)\nprint(\"y:\", y.T)\nnp.dot(x, y.T)\n```\n\nOr by reverting to:\n\n\n```python\nprint(\"x:\", x.T)\nprint(\"y:\", y)\nnp.dot(y, x.T)\n```\n\nTo read the result from this array of arrays, we would need to access the value this way:\n\n\n```python\nnp.dot(y, x.T)[0][0]\n```\n\n#### Outer Product of Two Vectors\n\nFor two vectors $x \\in \\mathbb{R}^m$ and $y \\in \\mathbb{R}^n$, where $n$ and $m$ do not have to be equal, the **outer product** of $x$ and $y$ is:\n\n$xy^T \\in \\mathbb{R}^{m\\times n}$\n\nThe **outer product** results in a matrix with $m$ rows and $n$ columns by $(xy^T)_{ij} = x_i y_j$:\n\n$xy^T \\in \\mathbb{R}^{m\\times n} = \\begin{bmatrix}\n x_1 \\\\[0.3em]\n x_2 \\\\[0.3em]\n \\vdots \\\\[0.3em]\n x_n\n \\end{bmatrix} \\begin{bmatrix}\n y_1 & y_2 & \\cdots & y_n\n \\end{bmatrix} = \\begin{bmatrix}\n x_1 y_1 & x_1 y_2 & \\cdots & x_1 y_n \\\\[0.3em]\n x_2 y_1 & x_2 y_2 & \\cdots & x_2 y_n \\\\[0.3em]\n \\vdots & \\vdots & \\ddots & \\vdots \\\\[0.3em]\n x_m y_1 & x_m y_2 & \\cdots & x_m y_n \\\\[0.3em]\n \\end{bmatrix}$\n\nSome useful property of the outer product: assume $\\mathbf{1} \\in \\mathbb{R}^n$ is an $n$-dimensional vector of scalars with the value $1$. 
Given a matrix $A \\in \\mathbb{R}^{m\\times n}$ with all columns equal to some vector $x \\in \\mathbb{R}^m$, using the outer product $A$ can be represented as:\n\n$A = \\begin{bmatrix}\n \\big| & \\big| & & \\big| \\\\[0.3em]\n x & x & \\cdots & x \\\\[0.3em]\n \\big| & \\big| & & \\big| \n \\end{bmatrix} = \\begin{bmatrix}\n x_1 & x_1 & \\cdots & x_1 \\\\[0.3em]\n x_2 & x_2 & \\cdots & x_2 \\\\[0.3em]\n \\vdots & \\vdots & \\ddots & \\vdots \\\\[0.3em]\n x_m &x_m & \\cdots & x_m\n \\end{bmatrix} = \\begin{bmatrix}\n x_1 \\\\[0.3em]\n x_2 \\\\[0.3em]\n \\vdots \\\\[0.3em]\n x_m\n \\end{bmatrix} \\begin{bmatrix}\n 1 & 1 & \\cdots & 1\n \\end{bmatrix} = x \\mathbf{1}^T$\n\n##### Example\n\nIf we want to compute the outer product of two vectors $x$ and $y$, we need to transpose the row vector $x$ to a column vector $x^T$. This can be achieved by the *reshape* function in *numpy*, the *T* method, or the *transpose()* function. The *reshape* function takes a parameter that describes the number of colums and rows for the resulting transposing:\n\n\n```python\nx = np.array([[1, 2, 3, 4]])\nprint(\"x:\", x)\nprint(\"xT:\", np.reshape(x, (4, 1)))\nprint(\"xT:\", x.T)\nprint(\"xT:\", x.transpose())\n```\n\nWe can now compute the **outer product** by multiplying the column vector $x$ with the row vector $y$:\n\n\n```python\nx = np.array([[1, 2, 3, 4]])\ny = np.array([[5, 6, 7, 8]])\nx.T * y\n```\n\n*Numpy* provides an *outer* function that does all that:\n\n\n```python\nnp.outer(x, y)\n```\n\nNote, in this simple case using the simple arrays for the data structures of the vectors does not affect the result of the *outer* function:\n\n\n```python\nx = np.array([1, 2, 3, 4])\ny = np.array([5, 6, 7, 8])\nnp.outer(x, y)\n```\n\n### Matrix-Vector Products\n\nAssume a matrix $A \\in \\mathbb{R}^{m\\times n}$ and a vector $x \\in \\mathbb{R}^n$ the product results in a vector $y = Ax \\in \\mathbb{R}^m$.\n\n$Ax$ could be expressed as the dot product of row $i$ of matrix $A$ with the column value $j$ of vector $x$. Let us first consider matrix multiplication with a scalar:\n\n$A = \\begin{bmatrix}\n 1 & 2 \\\\[0.3em]\n 3 & 4\n \\end{bmatrix}$\n\nWe can compute the product of $A$ with a scalar $n = 2$ as:\n\n$A = \\begin{bmatrix}\n 1 \\times n & 2 \\times n \\\\[0.3em]\n 3 \\times n & 4 \\times n\n \\end{bmatrix} = \\begin{bmatrix}\n 1 \\times 2 & 2 \\times 2 \\\\[0.3em]\n 3 \\times 2 & 4 \\times 2\n \\end{bmatrix} = \\begin{bmatrix}\n 2 & 4 \\\\[0.3em]\n 6 & 8\n \\end{bmatrix} $\n\nUsing *numpy* this can be achieved by:\n\n\n```python\nimport numpy as np\nA = np.array([[4, 5, 6],\n [7, 8, 9]])\nA * 2\n```\n\nAssume that we have a column vector $x$:\n\n$x = \\begin{bmatrix}\n 1 \\\\[0.3em]\n 2 \\\\[0.3em]\n 3 \n \\end{bmatrix}$\n\nTo be able to multiply this vector with a matrix, the number of columns in the matrix must correspond to the number of rows in the column vector. 
The matrix $A$ must have $3$ columns, as for example: \n\n$A = \\begin{bmatrix}\n 4 & 5 & 6\\\\[0.3em]\n 7 & 8 & 9\n \\end{bmatrix}$\n\nTo compute $Ax$, we multiply row $1$ of the matrix with column $1$ of $x$:\n\n$$\n\\begin{bmatrix}\n 4 & 5 & 6\n\\end{bmatrix}\n\\begin{bmatrix}\n 1 \\\\[0.3em]\n 2 \\\\[0.3em]\n 3 \n\\end{bmatrix} = 4 \\times 1 + 5 \\times 2 + 6 \\times 3 = 32 \n$$\n\nWe do the compute the dot product of row $2$ of $A$ and column $1$ of $x$:\n\n$$ \\begin{bmatrix}\n 7 & 8 & 9\n \\end{bmatrix}\n \\begin{bmatrix}\n 1 \\\\[0.3em]\n 2 \\\\[0.3em]\n 3 \n\\end{bmatrix} = 7 \\times 1 + 8 \\times 2 + 9 \\times 3 = 50 $$\n\nThe resulting column vector $Ax$ is:\n\n$Ax = \\begin{bmatrix}\n 32 \\\\[0.3em]\n 50 \n \\end{bmatrix}$\n\nUsing *numpy* we can compute $Ax$:\n\n\n```python\nA = np.array([[4, 5, 6],\n [7, 8, 9]])\nx = np.array([1, 2, 3])\nA.dot(x)\n```\n\nWe can thus describe the product writing $A$ by rows as:\n\n$y = Ax = \\begin{bmatrix}\n -- & a_1^T & -- \\\\[0.3em]\n -- & a_2^T & -- \\\\[0.3em]\n & \\vdots & \\\\[0.3em]\n -- & a_m^T & -- \n\\end{bmatrix} x = \\begin{bmatrix}\n a_1^T x \\\\[0.3em]\n a_2^T x \\\\[0.3em]\n \\vdots \\\\[0.3em]\n a_m^T x \n\\end{bmatrix}$\n\nThis means that the $i$th scalar of $y$ is the inner product of the $i$th row of $A$ and $x$, that is $y_i = a_i^T x$.\n\nIf we write $A$ in column form, then:\n\n$$ y = Ax =\n\\begin{bmatrix}\n \\big| & \\big| & & \\big| \\\\[0.3em]\n a_1 & a_2 & \\cdots & a_n \\\\[0.3em]\n \\big| & \\big| & & \\big| \n\\end{bmatrix}\n\\begin{bmatrix}\n x_1 \\\\[0.3em]\n x_2 \\\\[0.3em]\n \\vdots \\\\[0.3em]\n x_n\n\\end{bmatrix} =\n\\begin{bmatrix}\n a_1\n\\end{bmatrix} x_1 + \n\\begin{bmatrix}\n a_2\n\\end{bmatrix} x_2 + \\dots +\n\\begin{bmatrix}\n a_n\n\\end{bmatrix} x_n\n$$\n\nIn this case $y$ is a **[linear combination](https://en.wikipedia.org/wiki/Linear_combination)** of the *columns* of $A$, the coefficients taken from $x$.\n\nThe above examples multiply be the right with a column vector. One can multiply on the left by a row vector as well, $y^T = x^T A$ for $A \\in \\mathbb{R}^{m\\times n}$, $x\\in \\mathbb{R}^m$, $y \\in \\mathbb{R}^n$. There are two ways to express $y^T$, with $A$ expressed by its columns, with $i$th scalar of $y^T$ corresponds to the inner product of $x$ and the $i$th column of $A$:\n\n$$\ny^T = x^T A = x^t \\begin{bmatrix}\n \\big| & \\big| & & \\big| \\\\[0.3em]\n a_1 & a_2 & \\cdots & a_n \\\\[0.3em]\n \\big| & \\big| & & \\big| \n\\end{bmatrix} = \n\\begin{bmatrix}\n x^T a_1 & x^T a_2 & \\dots & x^T a_n \n\\end{bmatrix}\n$$\n\nOne can express $A$ by rows, where $y^T$ is a linear combination of the rows of $A$ with the scalars from $x$.\n\n$$\n\\begin{equation}\n\\begin{split}\ny^T & = x^T A \\\\\n & = \\begin{bmatrix}\n x_1 & x_2 & \\dots & x_n \n\\end{bmatrix}\n\\begin{bmatrix}\n -- & a_1^T & -- \\\\[0.3em]\n -- & a_2^T & -- \\\\[0.3em]\n & \\vdots & \\\\[0.3em]\n -- & a_m^T & -- \n\\end{bmatrix} \\\\\n & = x_1 \\begin{bmatrix}-- & a_1^T & --\\end{bmatrix} + x_2 \\begin{bmatrix}-- & a_2^T & --\\end{bmatrix} + \\dots + x_n \\begin{bmatrix}-- & a_n^T & --\\end{bmatrix}\n\\end{split}\n\\end{equation}\n$$\n\n### Matrix-Matrix Products\n\nOne can view matrix-matrix multiplication $C = AB$ as a set of vector-vector products. 
The $(i,j)$th entry of $C$ is the inner product of the $i$th row of $A$ and the $j$th column of $B$:\n\n$$\nC = AB =\n\\begin{bmatrix}\n -- & a_1^T & -- \\\\[0.3em]\n -- & a_2^T & -- \\\\[0.3em]\n & \\vdots & \\\\[0.3em]\n -- & a_m^T & -- \n\\end{bmatrix}\n\\begin{bmatrix}\n \\big| & \\big| & & \\big| \\\\[0.3em]\n b_1 & b_2 & \\cdots & b_p \\\\[0.3em]\n \\big| & \\big| & & \\big| \n\\end{bmatrix} = \n\\begin{bmatrix}\n a_1^T b_1 & a_1^T b_2 & \\cdots & a_1^T b_p \\\\[0.3em]\n a_2^T b_1 & a_2^T b_2 & \\cdots & a_2^T b_p \\\\[0.3em]\n \\vdots & \\vdots & \\ddots & \\vdots \\\\[0.3em]\n a_m^T b_1 & a_m^T b_2 & \\cdots & a_m^T b_p \n\\end{bmatrix}\n$$\n\nHere $A \\in \\mathbb{R}^{m\\times n}$ and $B \\in \\mathbb{R}^{n\\times p}$, $a_i \\in \\mathbb{R}^n$ and $b_j \\in \\mathbb{R}^n$, and $A$ is represented by rows, $B$ by columns.\n\nIf we represent $A$ by columns and $B$ by rows, then $AB$ is the sum of the outer products:\n\n$$\nC = AB =\n\\begin{bmatrix}\n \\big| & \\big| & & \\big| \\\\[0.3em]\n a_1 & a_2 & \\cdots & a_n \\\\[0.3em]\n \\big| & \\big| & & \\big| \n\\end{bmatrix}\n\\begin{bmatrix}\n -- & b_1^T & -- \\\\[0.3em]\n -- & b_2^T & -- \\\\[0.3em]\n & \\vdots & \\\\[0.3em]\n -- & b_n^T & -- \n\\end{bmatrix}\n= \\sum_{i=1}^n a_i b_i^T\n$$\n\nThis means that $AB$ is the sum over all $i$ of the outer product of the $i$th column of $A$ and the $i$th row of $B$.\n\nOne can interpret matrix-matrix operations also as a set of matrix-vector products. Representing $B$ by columns, the columns of $C$ are matrix-vector products between $A$ and the columns of $B$:\n\n$$\nC = AB = A\n\\begin{bmatrix}\n \\big| & \\big| & & \\big| \\\\[0.3em]\n b_1 & b_2 & \\cdots & b_p \\\\[0.3em]\n \\big| & \\big| & & \\big| \n\\end{bmatrix} = \n\\begin{bmatrix}\n \\big| & \\big| & & \\big| \\\\[0.3em]\n A b_1 & A b_2 & \\cdots & A b_p \\\\[0.3em]\n \\big| & \\big| & & \\big| \n\\end{bmatrix}\n$$\n\nIn this interpretation the $i$th column of $C$ is the matrix-vector product with the vector on the right, i.e. $c_i = A b_i$.\n\nRepresenting $A$ by rows, the rows of $C$ are the matrix-vector products between the rows of $A$ and $B$:\n\n$$\nC = AB = \\begin{bmatrix}\n -- & a_1^T & -- \\\\[0.3em]\n -- & a_2^T & -- \\\\[0.3em]\n & \\vdots & \\\\[0.3em]\n -- & a_m^T & -- \n\\end{bmatrix}\nB = \n\\begin{bmatrix}\n -- & a_1^T B & -- \\\\[0.3em]\n -- & a_2^T B & -- \\\\[0.3em]\n & \\vdots & \\\\[0.3em]\n -- & a_n^T B & -- \n\\end{bmatrix}\n$$\n\nThe $i$th row of $C$ is the matrix-vector product with the vector on the left, i.e. $c_i^T = a_i^T B$.\n\n#### Notes on Matrix-Matrix Products\n\n**Matrix multiplication is associative:** $(AB)C = A(BC)$\n\n**Matrix multiplication is distributive:** $A(B + C) = AB + AC$\n\n**Matrix multiplication is, in general, not commutative;** It can be the case that $AB \\neq BA$. 
(For example, if $A \\in \\mathbb{R}^{m\\times n}$ and $B \\in \\mathbb{R}^{n\\times q}$, the matrix product $BA$ does not even exist if $m$ and $q$ are not equal!)\n\n## Identity Matrix\n\nThe **identity matrix** $I \\in \\mathbb{R}^{n\\times n}$ is a square matrix with the value $1$ on the diagonal and $0$ everywhere else:\n\n$$\nI_{ij} = \\left\\{\n\\begin{array}{lr}\n 1 & i = j\\\\\n 0 & i \\neq j\n\\end{array}\n\\right.\n$$\n\nFor all $A \\in \\mathbb{R}^{m\\times n}$:\n\n$AI = A = IA$\n\nIn the equation above multiplication has to be made possible, which means that in the portion $AI = A$ the dimensions of $I$ have to be $n\\times n$, while in $A = IA$ they have to be $m\\times m$.\n\nWe can generate an *identity matrix* in *numpy* using:\n\n\n```python\nimport numpy as np\nA = np.array([[0, 1, 2],\n [3, 4, 5],\n [6, 7, 8],\n [9, 10, 11]])\nprint(\"A:\", A)\n```\n\nWe can ask for the shape of $A$:\n\n\n```python\nA.shape\n```\n\nThe *shape* property of a matrix contains the $m$ (number of rows) and $n$ (number of columns) properties in a tuple, in that particular order. We can create an identity matrix for the use in $AI$ by using the $n$ value: \n\n\n```python\nnp.identity(A.shape[1], dtype=\"int\")\n```\n\nNote that we specify the *dtype* parameter to *identity* as *int*, since the default would return a matrix of *float* values.\n\nTo generate an identity matrix for the use in $IA$ we would use the $m$ value:\n\n\n```python\nnp.identity(A.shape[0], dtype=\"int\")\n```\n\nWe can compute the dot product of $A$ and its identity matrix $I$:\n\n\n```python\nn = A.shape[1]\nI = np.array(np.identity(n, dtype=\"int\"))\nnp.dot(A, I)\n```\n\nThe same is true for the other direction:\n\n\n```python\nm = A.shape[0]\nI = np.array(np.identity(m, dtype=\"int\"))\nnp.dot(I, A)\n```\n\n## Diagonal Matrix\n\nIn the **diagonal matrix** non-diagonal elements are $0$, that is $D = diag(d_1, d_2, \\dots{}, d_n)$, with:\n\n$$\nD_{ij} = \\left\\{\n\\begin{array}{lr}\n d_i & i = j\\\\\n 0 & i \\neq j\n\\end{array}\n\\right.\n$$\n\nThe identity matrix is a special case of a diagonal matrix: $I = diag(1, 1, \\dots{}, 1)$.\n\nIn *numpy* we can create a *diagonal matrix* from any given matrix using the *diag* function:\n\n\n```python\nimport numpy as np\nA = np.array([[0, 1, 2, 3],\n [4, 5, 6, 7],\n [8, 9, 10, 11],\n [12, 13, 14, 15]])\nnp.diag(A)\n```\n\nAn optional parameter *k* to the *diag* function allows us to extract the diagonal above the main diagonal with a positive *k*, and below the main diagonal with a negative *k*:\n\n\n```python\nnp.diag(A, k=1)\n```\n\n\n```python\nnp.diag(A, k=-1)\n```\n\n## Transpose of a Matrix\n\n**Transposing** a matrix is achieved by *flipping* the rows and columns. 
For a matrix $A \\in \\mathbb{R}^{m\\times n}$ the transpose $A^T \\in \\mathbb{R}^{n\\times m}$ is the $n\\times m$ matrix given by:\n\n$(A^T)_{ij} = A_{ji}$\n\nProperties of transposes:\n\n- $(A^T)^T = A$\n- $(AB)^T = B^T A^T$\n- $(A+B)^T = A^T + B^T$\n\n## Symmetric Metrices\n\nSquare metrices $A \\in \\mathbb{R}^{n\\times n}$ are **symmetric**, if $A = A^T$.\n\n$A$ is **anti-symmetric**, if $A = -A^T$.\n\nFor any matrix $A \\in \\mathbb{R}^{n\\times n}$, the matrix $A + A^T$ is **symmetric**.\n\nFor any matrix $A \\in \\mathbb{R}^{n\\times n}$, the matrix $A - A^T$ is **anti-symmetric**.\n\nThus, any square matrix $A \\in \\mathbb{R}^{n\\times n}$ can be represented as a sum of a symmetric matrix and an anti-symmetric matrix:\n\n$A = \\frac{1}{2} (A + A^T) + \\frac{1}{2} (A - A^T)$\n\nThe first matrix on the right, i.e. $\\frac{1}{2} (A + A^T)$ is symmetric. The second matrix $\\frac{1}{2} (A - A^T)$ is anti-symmetric.\n\n$\\mathbb{S}^n$ is the set of all symmetric matrices of size $n$.\n\n$A \\in \\mathbb{S}^n$ means that $A$ is symmetric and of the size $n\\times n$.\n\n## The Trace\n\nThe **trace** of a square matrix $A \\in \\mathbb{R}^{n\\times n}$ is $tr(A)$ (or $trA$) is the sum of the diagonal elements in the matrix:\n\n$trA = \\sum_{i=1}^n A_{ii}$\n\nProperties of the **trace**:\n\n- For $A \\in \\mathbb{R}^{n\\times n}$, $\\mathrm{tr}A = \\mathrm{tr}A^T$\n- For $A,B \\in \\mathbb{R}^{n\\times n}$, $\\mathrm{tr}(A + B) = \\mathrm{tr}A + \\mathrm{tr}B$\n- For $A \\in \\mathbb{R}^{n\\times n}$, $t \\in \\mathbb{R}$, $\\mathrm{tr}(tA) = t \\mathrm{tr}A$\n- For $A,B$ such that $AB$ is square, $\\mathrm{tr}AB = \\mathrm{tr}BA$\n- For $A,B,C$ such that $ABC$ is square, $\\mathrm{tr}ABC = \\mathrm{tr}BCA = \\mathrm{tr}CAB$, and so on for the product of more matrices.\n\n## Norms\n\nThe **norm** of a vector $x$ is $\\| x\\|$, informally the length of a vector.\n\nExample: the Euclidean or $\\mathscr{l}_2$ norm:\n\n$\\|x\\|_2 = \\sqrt{\\sum_{i=1}^n{x_i^2}}$\n\nNote: $\\|x\\|_2^2 = x^T x$\n\nA **norm** is any function $f : \\mathbb{R}^n \\rightarrow \\mathbb{R}$ that satisfies the following properties:\n\n- For all $x \\in \\mathbb{R}^n$, $f(x) \\geq 0$ (non-negativity)\n- $f(x) = 0$ if and only if $x = 0$ (definiteness)\n- For all $x \\in \\mathbb{R}^n$, $t \\in \\mathbb{R}$, $f(tx) = |t|\\ f(x)$ (homogeneity)\n- For all $x, y \\in \\mathbb{R}^n$, $f(x + y) \\leq f(x) + f(y)$ (triangle inequality)\n\nNorm $\\mathscr{l}_1$:\n\n$\\|x\\|_1 = \\sum_{i=1}^n{|x_i|}$\n\nNorm $\\mathscr{l}_\\infty$:\n\n$\\|x\\|_\\infty = \\max_i|x_i|$\n\nAll these three norms are examples of the $\\mathscr{l}_p$ norms, with $p$ a real number parameter $p \\geq 1$:\n\n$\\|x\\|_p = \\left(\\sum_{i=1}^n{|x_i|^p}\\right)^{\\frac{1}{p}}$\n\n*Frobenius norm* for matrices:\n\n$\\|A\\|_F = \\sqrt{\\sum_{i=1}^m\\sum_{i=1}^n A_{ij}^2} = \\sqrt{\\mathrm{tr}(A^T A)}$\n\nAnd many more.\n\n## Linear Independence and Rank\n\nA set of vectors $\\{x_1, x_2, \\dots{}, x_n\\} \\subset \\mathbb{R}^m$ is said to be **(linearly) independent** if no vector can be represented as a linear combination of the remaining vectors.\n\nA set of vectors $\\{x_1, x_2, \\dots{}, x_n\\} \\subset \\mathbb{R}^m$ is said to be **(lineraly) dependent** if one vector from this set can be represented as a linear combination of the remaining vectors.\n\nFor some scalar values $\\alpha_1, \\dots{}, \\alpha_{n-1} \\in \\mathbb{R}$ the vectors $x_1, \\dots{}, x_n$ are linerly dependent, if:\n\n$\\begin{equation}\nx_n = \\sum_{i=1}^{n-1}{\\alpha_i 
x_i}\n\\end{equation}$\n\nExample: The following vectors are lineraly dependent, because $x_3 = -2 x_1 + x_2$\n\n$x_1 = \\begin{bmatrix}\n 1 \\\\[0.3em]\n 2 \\\\[0.3em]\n 3 \n\\end{bmatrix}\n\\quad\nx_2 = \\begin{bmatrix}\n 4 \\\\[0.3em]\n 1 \\\\[0.3em]\n 5 \n\\end{bmatrix}\n\\quad\nx_3 = \\begin{bmatrix}\n 2 \\\\[0.3em]\n -1 \\\\[0.3em]\n -1 \n\\end{bmatrix}\n$\n\n### Column Rank of a Matrix\n\nThe **column rank** of a matrix $A \\in \\mathbb{R}^{m\\times n}$ is the size of the largest subset of columns of $A$ that constitute a linear independent set. Informaly this is the number of linearly independent columns of $A$.\n\n### Row Rank of a Matrix\n\nThe **row rank** of a matrix $A \\in \\mathbb{R}^{m\\times n}$ is the largest number of rows of $A$ that constitute a lineraly independent set.\n\n### Rank of a Matrix\n\nFor any matrix $A \\in \\mathbb{R}^{m\\times n}$, the column rank of $A$ is equal to the row rank of $A$. Both quantities are referred to collectively as the rank of $A$, denoted as $rank(A)$. Here are some basic properties of the rank:\n\n- For $A \\in \\mathbb{R}^{m\\times n}$, $rank(A) \\leq \\min(m, n)$. If $rank(A) = \\min(m, n)$, then $A$ is said to be\n**full rank**.\n- For $A \\in \\mathbb{R}^{m\\times n}$, $rank(A) = rank(A^T)$\n- For $A \\in \\mathbb{R}^{m\\times n}$, $B \\in \\mathbb{R}^{n\\times p}$, $rank(AB) \\leq \\min(rank(A), rank(B))$\n- For $A,B \\in \\mathbb{R}^{m\\times n}$, $rank(A + B) \\leq rank(A) + rank(B)$\n\n## Subtraction and Addition of Metrices\n\nAssume $A \\in \\mathbb{R}^{m\\times n}$ and $B \\in \\mathbb{R}^{m\\times n}$, that is $A$ and $B$ are of the same size, to add $A$ to $B$, or to subtract $B$ from $A$, we add or subtract corresponding entries:\n\n$$\nA + B =\n\\begin{bmatrix}\n a_{11} & a_{12} & \\cdots & a_{1n} \\\\[0.3em]\n a_{21} & a_{22} & \\cdots & a_{2n} \\\\[0.3em]\n \\vdots & \\vdots & \\ddots & \\vdots \\\\[0.3em]\n a_{m1} & a_{m2} & \\cdots & a_{mn}\n\\end{bmatrix} +\n\\begin{bmatrix}\n b_{11} & b_{12} & \\cdots & b_{1n} \\\\[0.3em]\n b_{21} & b_{22} & \\cdots & b_{2n} \\\\[0.3em]\n \\vdots & \\vdots & \\ddots & \\vdots \\\\[0.3em]\n b_{m1} & b_{m2} & \\cdots & b_{mn}\n\\end{bmatrix} =\n\\begin{bmatrix}\n a_{11} + b_{11} & a_{12} + b_{12} & \\cdots & a_{1n} + b_{1n} \\\\[0.3em]\n a_{21} + b_{21} & a_{22} + b_{22} & \\cdots & a_{2n} + b_{2n} \\\\[0.3em]\n \\vdots & \\vdots & \\ddots & \\vdots \\\\[0.3em]\n a_{m1} + b_{m1} & a_{m2} + b_{m2} & \\cdots & a_{mn} + b_{mn}\n\\end{bmatrix}\n$$\n\nThe same is applies to subtraction:\n\n$$\nA - B =\n\\begin{bmatrix}\n a_{11} & a_{12} & \\cdots & a_{1n} \\\\[0.3em]\n a_{21} & a_{22} & \\cdots & a_{2n} \\\\[0.3em]\n \\vdots & \\vdots & \\ddots & \\vdots \\\\[0.3em]\n a_{m1} & a_{m2} & \\cdots & a_{mn}\n\\end{bmatrix} -\n\\begin{bmatrix}\n b_{11} & b_{12} & \\cdots & b_{1n} \\\\[0.3em]\n b_{21} & b_{22} & \\cdots & b_{2n} \\\\[0.3em]\n \\vdots & \\vdots & \\ddots & \\vdots \\\\[0.3em]\n b_{m1} & b_{m2} & \\cdots & b_{mn}\n\\end{bmatrix} =\n\\begin{bmatrix}\n a_{11} - b_{11} & a_{12} - b_{12} & \\cdots & a_{1n} - b_{1n} \\\\[0.3em]\n a_{21} - b_{21} & a_{22} - b_{22} & \\cdots & a_{2n} - b_{2n} \\\\[0.3em]\n \\vdots & \\vdots & \\ddots & \\vdots \\\\[0.3em]\n a_{m1} - b_{m1} & a_{m2} - b_{m2} & \\cdots & a_{mn} - b_{mn}\n\\end{bmatrix}\n$$\n\nIn Python using *numpy* this can be achieved using the following code:\n\n\n```python\nimport numpy as np\nprint(\"np.arange(9):\", np.arange(9))\nprint(\"np.arange(9, 18):\", np.arange(9, 18))\nA = np.arange(9, 18).reshape((3, 3))\nB = 
np.arange(9).reshape((3, 3))\nprint(\"A:\", A)\nprint(\"B:\", B)\n```\n\nThe *numpy* function *arange* is similar to the standard Python function *range*. It returns an array with $n$ elements, specified in the one parameter version only. If we provide to parameters to *arange*, it generates an array starting from the value of the first parameter and ending with a value one less than the second parameter. The function *reshape* returns us a matrix with the corresponding number of rows and columns.\n\nWe can now add and subtract the two matrices $A$ and $B$:\n\n\n```python\nA + B\n```\n\n\n```python\nA - B\n```\n\n## Inverse\n\nThe **inverse** of a square matrix $A \\in \\mathbb{R}^{n\\times n}$ is $A^{-1}$:\n\n$A^{-1} A = I = A A^{-1}$\n\nNot all matrices have inverses. Non-square matrices do not have inverses by definition. For some square matrices $A$ the inverse might not exist.\n\n$A$ is **invertible** or **non-singular** if $A^{-1}$ exists.\n\n$A$ is **non-invertible** or **singular** if $A^{-1}$ does not exist.\n\nFor $A$ to have an inverse $A^{-1}$, $A$ must be **full rank**.\n\nAssuming that $A,B \\in \\mathbb{R}^{n\\times n}$ are non-singular, then:\n\n- $(A^{-1})^{-1} = A$\n- $(AB)^{-1} = B^{-1} A^{-1}$\n- $(A^{-1})^T = (A^T)^{-1}$ (often simply $A^{-T}$)\n\n## Orthogonal Matrices\n\nTwo vectors $x, y \\in \\mathbb{R}^n$ are **orthogonal** if $x^T y = 0$.\n\nA vector $x \\in \\mathbb{R}^n$ is **normalized** if $\\|x\\|^2 = 1$.\n\nA square matrix $U \\in \\mathbb{R}^{n\\times n}$ is **orthogonal** if all its columns are orthogonal to each other and are **normalized**. The columns are then referred to as being **orthonormal**.\n\nIt follows immediately from the definition of orthogonality and normality that:\n\n$U^T U = I = U U^T$\n\nThis means that the inverse of an orthogonal matrix is its transpose.\n\nIf U is not square - i.e., $U \\in \\mathbb{R}^{m\\times n}$, $n < m$ - but its columns are still orthonormal, then $U^T U = I$, but $U U^T \\neq I$.\n\nWe generally only use the term orthogonal to describe the case, where $U$ is square.\n\nAnother nice property of orthogonal matrices is that operating on a vector with an orthogonal matrix will not change its Euclidean norm. For any $x \\in \\mathbb{R}^n$, $U \\in \\mathbb{R}^{n\\times n}$ orthogonal.\n\n$\\|U_x\\|^2 = \\|x\\|^2$\n\n## Range and Nullspace of a Matrix\n\nThe **span** of a set of vectors $\\{ x_1, x_2, \\dots{}, x_n\\}$ is the set of all vectors that can be expressed as\na linear combination of $\\{ x_1, \\dots{}, x_n \\}$:\n\n$\\mathrm{span}(\\{ x_1, \\dots{}, x_n \\}) = \\{ v : v = \\sum_{i=1}^n \\alpha_i x_i, \\alpha_i \\in \\mathbb{R} \\}$\n\nIt can be shown that if $\\{ x_1, \\dots{}, x_n \\}$ is a set of n linearly independent vectors, where each $x_i \\in \\mathbb{R}^n$, then $\\mathrm{span}(\\{ x_1, \\dots{}, x_n\\}) = \\mathbb{R}^n$. That is, any vector $v \\in \\mathbb{R}^n$ can be written as a linear combination of $x_1$ through $x_n$.\n\nThe projection of a vector $y \\in \\mathbb{R}^m$ onto the span of $\\{ x_1, \\dots{}, x_n\\}$ (here we assume $x_i \\in \\mathbb{R}^m$) is the vector $v \\in \\mathrm{span}(\\{ x_1, \\dots{}, x_n \\})$, such that $v$ is as close as possible to $y$, as measured by the Euclidean norm $\\|v \u2212 y\\|^2$. 
We denote the projection as $\\mathrm{Proj}(y; \\{ x_1, \\dots{}, x_n \\})$ and can define it formally as:\n\n$\\mathrm{Proj}( y; \\{ x_1, \\dots{}, x_n \\}) = \\mathrm{argmin}_{v\\in \\mathrm{span}(\\{x_1,\\dots{},x_n\\})}\\|y \u2212 v\\|^2$\n\nThe **range** (sometimes also called the columnspace) of a matrix $A \\in \\mathbb{R}^{m\\times n}$, denoted $\\mathcal{R}(A)$, is the the span of the columns of $A$. In other words,\n\n$\\mathcal{R}(A) = \\{ v \\in \\mathbb{R}^m : v = A x, x \\in \\mathbb{R}^n\\}$\n\nMaking a few technical assumptions (namely that $A$ is full rank and that $n < m$), the projection of a vector $y \\in \\mathbb{R}^m$ onto the range of $A$ is given by:\n\n$\\mathrm{Proj}(y; A) = \\mathrm{argmin}_{v\\in \\mathcal{R}(A)}\\|v \u2212 y\\|^2 = A(A^T A)^{\u22121} A^T y$\n\nThe **nullspace** of a matrix $A \\in \\mathbb{R}^{m\\times n}$, denoted $\\mathcal{N}(A)$ is the set of all vectors that equal $0$ when multiplied by $A$, i.e.,\n\n$\\mathcal{N}(A) = \\{ x \\in \\mathbb{R}^n : A x = 0 \\}$\n\nNote that vectors in $\\mathcal{R}(A)$ are of size $m$, while vectors in the $\\mathcal{N}(A)$ are of size $n$, so vectors in $\\mathcal{R}(A^T)$ and $\\mathcal{N}(A)$ are both in $\\mathbb{R}^n$. In fact, we can say much more. It turns out that:\n\n$\\{ w : w = u + v, u \\in \\mathcal{R}(A^T), v \\in \\mathcal{N}(A) \\} = \\mathbb{R}^n$ and $\\mathcal{R}(A^T) \\cap \\mathcal{N}(A) = \\{0\\}$\n\nIn other words, $\\mathcal{R}(A^T)$ and $\\mathcal{N}(A)$ are disjoint subsets that together span the entire space of\n$\\mathbb{R}^n$. Sets of this type are called **orthogonal complements**, and we denote this $\\mathcal{R}(A^T) = \\mathcal{N}(A)^\\perp$.\n\n## Determinant\n\nThe determinant of a square matrix $A \\in \\mathbb{R}^{n\\times n}$, is a function $\\mathrm{det} : \\mathbb{R}^{n\\times n} \\rightarrow \\mathbb{R}$, and is denoted $|A|$ or $\\mathrm{det}A$ (like the trace operator, we usually omit parentheses).\n\n### A geometric interpretation of the determinant\n\nGiven\n\n$\\begin{bmatrix}\n -- & a_1^T & -- \\\\[0.3em]\n -- & a_2^T & -- \\\\[0.3em]\n & \\vdots & \\\\[0.3em]\n -- & a_n^T & -- \n\\end{bmatrix}$\n\nconsider the set of points $S \\subset \\mathbb{R}^n$ formed by taking all possible linear combinations of the row vectors $a_1, \\dots{}, a_n \\in \\mathbb{R}^n$ of $A$, where the coefficients of the linear combination are all\nbetween $0$ and $1$; that is, the set $S$ is the restriction of $\\mathrm{span}( \\{ a_1, \\dots{}, a_n \\})$ to only those linear combinations whose coefficients $\\alpha_1, \\dots{}, \\alpha_n$ satisfy $0 \\leq \\alpha_i \\leq 1$, $i = 1, \\dots{}, n$. Formally:\n\n$$\nS = \\{v \\in \\mathbb{R}^n : v = \\sum_{i=1}^n \\alpha_i a_i \\textrm{ where } 0 \\leq \\alpha_i \\leq 1, i = 1, \\dots{}, n \\}\n$$\n\nThe absolute value of the determinant of $A$, it turns out, is a measure of the *volume* of the set $S$. The volume here is intuitively for example for $n = 2$ the area of $S$ in the Cartesian plane, or with $n = 3$ it is the common understanding of *volume* for 3-dimensional objects.\n\nExample:\n\n$A = \\begin{bmatrix}\n 1 & 3\\\\[0.3em]\n 3 & 2 \n\\end{bmatrix}$\n\nThe rows of the matrix are:\n\n$a_1 = \\begin{bmatrix}\n 1 \\\\[0.3em]\n 3 \n\\end{bmatrix}\n\\quad\na_2 = \\begin{bmatrix}\n 3 \\\\[0.3em]\n 2 \n\\end{bmatrix}$\n\nThe set S corresponding to these rows is shown in:\n\n\n\nThe figure above is an illustration of the determinant for the $2\\times 2$ matrix $A$ above. 
Here, $a_1$ and $a_2$\nare vectors corresponding to the rows of $A$, and the set $S$ corresponds to the shaded region (i.e., the parallelogram). The absolute value of the determinant, $|\\mathrm{det}A| = 7$, is the area of the parallelogram.\n\nFor two-dimensional matrices, $S$ generally has the shape of a parallelogram. In our example, the value of the determinant is $|A| = \u22127$ (as can be computed using the formulas shown later), so the area of the parallelogram is $7$.\n\nIn three dimensions, the set $S$ corresponds to an object known as a parallelepiped (a three-dimensional box with skewed sides, such that every face has the shape of a parallelogram). The absolute value of the determinant of the $3 \\times 3$ matrix whose rows define $S$ give the three-dimensional volume of the parallelepiped. In even higher dimensions, the set $S$ is an object known as an $n$-dimensional parallelotope.\n\nAlgebraically, the determinant satisfies the following three properties (from which all other properties follow, including the general formula):\n\n- The determinant of the identity is $1$, $|I| = 1$. (Geometrically, the volume of a unit hypercube is $1$).\n- Given a matrix $A \\in \\mathbb{R}^{n\\times n}$, if we multiply a single row in $A$ by a scalar $t \\in \\mathbb{R}$, then the determinant of the new matrix is $t|A|$,
    \n$\\left| \\begin{bmatrix}\n -- & t a_1^T & -- \\\\[0.3em]\n -- & a_2^T & -- \\\\[0.3em]\n & \\vdots & \\\\[0.3em]\n -- & a_m^T & -- \n\\end{bmatrix}\\right| = t|A|$
    \n(Geometrically, multiplying one of the sides of the set $S$ by a factor $t$ causes the volume\nto increase by a factor $t$.)\n- If we exchange any two rows $a^T_i$ and $a^T_j$ of $A$, then the determinant of the new matrix is $\u2212|A|$, for example
    \n$\\left| \\begin{bmatrix}\n -- & a_2^T & -- \\\\[0.3em]\n -- & a_1^T & -- \\\\[0.3em]\n & \\vdots & \\\\[0.3em]\n -- & a_m^T & -- \n\\end{bmatrix}\\right| = -|A|$\n\nSeveral properties that follow from the three properties above include:\n\n- For $A \\in \\mathbb{R}^{n\\times n}$, $|A| = |A^T|$\n- For $A,B \\in \\mathbb{R}^{n\\times n}$, $|AB| = |A||B|$\n- For $A \\in \\mathbb{R}^{n\\times n}$, $|A| = 0$ if and only if $A$ is singular (i.e., non-invertible). (If $A$ is singular then it does not have full rank, and hence its columns are linearly dependent. In this case, the set $S$ corresponds to a \"flat sheet\" within the $n$-dimensional space and hence has zero volume.)\n- For $A \\in \\mathbb{R}^{n\\times n}$ and $A$ non-singular, $|A\u22121| = 1/|A|$\n\n## Tensors\n\nA [**tensor**](https://en.wikipedia.org/wiki/Tensor) could be thought of as an organized multidimensional array of numerical values. A vector could be assumed to be a sub-class of a tensor. Rows of tensors extend alone the y-axis, columns along the x-axis. The **rank** of a scalar is 0, the rank of a **vector** is 1, the rank of a **matrix** is 2, the rank of a **tensor** is 3 or higher.\n\n## Hyperplane\n\nThe **hyperplane** is a sub-space in the ambient space with one dimension less. In a two-dimensional space the hyperplane is a line, in a three-dimensional space it is a two-dimensional plane, etc.\n\nHyperplanes divide an $n$-dimensional space into sub-spaces that might represent clases in a machine learning algorithm.\n\n# Summary\n\n## Dot Product\n\nThis is also the *inner product*. It is a function that returns a number computed from two vectors of the same length by summing up the product of the corresponding dimensions.\n\nFor two vectors $a = [a_1, a_2, \\dots{}, a_n]$ and $b = [b_1, b_2, \\dots{}, b_n]$ the dot product is:\n\n$\\mathbf{a} \\cdot \\mathbf{b} = \\sum_{i=1}^{n} a_{i} b_{i} = a_{1} b_{1} + a_{2} b_{2} + \\cdots + a_{n} b_{n}$\n\nIf we normalize two vectors and compute the dot product, we get the *cosine similarity*, which can be used as a metric for cimilarity of vectors. Independent of the absolute length we look at the angle between the vectors, i.e. the lenght is neutralized via normalization.\n\nThe cosine of two non-zero vectors can be derived by using the Euclidean dot product formula (see [Wikipedia: Cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity)):\n\n$\\mathbf{a} \\cdot \\mathbf{b} = \\left\\|\\mathbf{a}\\right\\| \\left\\|\\mathbf{b}\\right\\| \\cos\\theta$\n\nGiven two vectors of attributes, $A$ and $B$, the cosine similarity, $cos(\\theta)$, is represented using a dot product and magnitude as:\n\n$\\text{similarity} = \\cos(\\theta) = \\frac{\\mathbf{A} \\cdot \\mathbf{B}}{ \\|\\mathbf{A} \\|\\|\\mathbf{B} \\| } = \\frac{\\sum \\limits_{i=1}^{n}{A_{i}B_{i}}}{{\\sqrt {\\sum \\limits _{i=1}^{n}{A_{i}^{2}}}}{\\sqrt {\\sum \\limits _{i=1}^{n}{B_{i}^{2}}}}}$, with $A_i$ and $B_i$ components of vector $A$ and $B$ respectively.\n\n## Hadamard Product\n\nThis is also known as the **entrywise product**. 
For two matrices $A \\in \\mathbb{R}^{m\\times n}$ and $B \\in \\mathbb{R}^{m\\times n}$ the Hadamard product $A\\circ B$ is:\n\n$(A\\circ B)_{i,j} = (A)_{i,j} (B)_{i,j}$\n\nFor example:\n\n$$\\begin{bmatrix}\n a_{11} & a_{12} & a_{13} \\\\[0.3em]\n a_{21} & a_{22} & a_{23} \\\\[0.3em]\n a_{31} & a_{32} & a_{33}\n\\end{bmatrix} \\circ\n\\begin{bmatrix}\n b_{11} & b_{12} & b_{13} \\\\[0.3em]\n b_{21} & b_{22} & b_{23} \\\\[0.3em]\n b_{31} & b_{32} & b_{33}\n\\end{bmatrix} = \n\\begin{bmatrix}\n a_{11}b_{11} & a_{12}b_{12} & a_{13}b_{13} \\\\[0.3em]\n a_{21}b_{21} & a_{22}b_{22} & a_{23}b_{23} \\\\[0.3em]\n a_{31}b_{31} & a_{32}b_{32} & a_{33}b_{33}\n\\end{bmatrix}$$\n\n## Outer Product\n\nThis is also called the **tensor product** of two vectors. Compute the resulting matrix by multiplying each element from a column vector with all alements in a row vector.\n\n(Based on the work of [Damir Cavar](https://github.com/dcavar/python-tutorial-notebooks))\n", "meta": {"hexsha": "74975ce2959bbde07d2679499a87ae6afe8f1c80", "size": 67922, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "classes/00 refresher/00_refresher_01.ipynb", "max_stars_repo_name": "mariolpantunes/ml101", "max_stars_repo_head_hexsha": "71072bb4b0d2e9d7894c2de699d1ada569752ec3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2022-01-20T17:16:21.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-28T19:26:54.000Z", "max_issues_repo_path": "classes/00 refresher/00_refresher_01.ipynb", "max_issues_repo_name": "mariolpantunes/ml101", "max_issues_repo_head_hexsha": "71072bb4b0d2e9d7894c2de699d1ada569752ec3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "classes/00 refresher/00_refresher_01.ipynb", "max_forks_repo_name": "mariolpantunes/ml101", "max_forks_repo_head_hexsha": "71072bb4b0d2e9d7894c2de699d1ada569752ec3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-03-02T09:23:15.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-02T09:23:15.000Z", "avg_line_length": 26.2856037152, "max_line_length": 435, "alphanum_fraction": 0.4940520008, "converted": true, "num_tokens": 13879, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9416541651191744, "lm_q2_score": 0.8918110511888303, "lm_q1q2_score": 0.8397775908512713}} {"text": "# Simply Supported Beam subjected to Linearly Varying Load\n\n## Objective\nDesign, develop and test a Python program to analyse a simply supported beam subjcted to linearly varying load applied over the entire span. 
The program should calculate the support reactions, SF and BM at regular intervals along the span, point of zero shear and maximum bending moment.\n\n## Theory\n\nThe equations for reactions at supports, intensity of load ($w_x$), shear force ($V_x$) and bending moment ($M_x$) at a distance $x$ from left support are as shown below:\n\n$$\n\\begin{align}\nR_a &= \\left( 2 w_1 + w_2 \\right) \\frac{L}{6} \\\\\nR_b &= \\left( w_1 + 2 w_2 \\right) \\frac{L}{6} \\\\\nw_x &= w_1 + \\frac{w_2 - w_1}{L} x \\\\\nV_x &= R_a - \\frac{w_1 + w_x}{2} x \\\\\nM_x &= R_a x - \\frac{w_1 + w_x}{2} x \\cdot \\frac{2 w_1 + w_x}{w_1 + w_x} \\frac{x}{3} = R_a x - (2 w_1 + w_x) \\frac{x^2}{6}\n\\end{align}$$\n\nIt can be seen that when the intensity of load $w_x$ varies linearly with $x$, variation of $V_x$ and $M_x$ with respect to $x$ is parabolic and cubic, respectively.\n\n## Input Data\nThe primary input data are:\n1. **Span $L$** in metres\n2. **Intensity of load at left support $w_1$** in kN/m\n3. **Intensity of load at right support $w_2$** in kN/m\n4. **Number of equal intervals along span** at which shear force and bending moment are to be calculated\n\n## Output Data\n1. Shear force at regular intervals\n2. Bending moment at regular intervals\n\n## Other Data and Sign Convention\n1. Distance $x$ of a chosen cross section from left support is positive when measured to the right of left support and $0 \\leq x \\leq L$\n2. Intensity of load $w_1, w_2, w_x$ are positive when acting downwards.\n3. Reactions at supports $R_a, R_b$ are positive when acting upwards.\n4. Shear force $V_x$ is positive when net force on left of section is upwards or when net force to the right of section is downwards. Negative otherwise.\n5. Bending moment $M_x$ is positive when bending moment is sagging and negative if hogging.\n\n## Program Organization\nWe will develop functions to calculate each of these results, namely, reactions, intensity of load, shear force and bending moment, as follows:\n\n
| Function Name | Input Data | Output Data | Purpose |\n| --- | --- | --- | --- |\n| **`reactions()`** | $L$, $w_1$, $w_2$ | $R_a, R_b$ | Calculate reactions at supports |\n| **`calc_wx()`** | $x$, $L$, $w_1$, $w_2$ | $w_x$ | Calculate intensity of load at $x$ |\n| **`calc_Vx()`** | $x$, $L$, $w_1$, $w_2$ | $V_x$ | Calculate shear force at $x$ |\n| **`calc_Mx()`** | $x$, $L$, $w_1$, $w_2$ | $M_x$ | Calculate bending moment at $x$ |
    \n\nThe function to calculate the reactions uses the above equations is shown below. It is tested for different types of input, namely, UDL ($w_1 = w_2$), triangular loads ($w_1 = 0$ and $w_2 = 0$).\n\n\n```python\nfrom __future__ import division, print_function\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\ndef reactions(L, w1, w2):\n '''Returns the reactions at left and right supports of a simply supported beam\n subjected to a linearly varying load with intensity w1 and w2 at the left and right\n ends, respectively'''\n Ra = (2 * w1 + w2) * L / 6\n Rb = (w1 + 2 * w2) * L / 6\n return (Ra, Rb)\n\nRa, Rb = reactions(6, 10, 10)\nprint(Ra, Rb)\n\nRa, Rb = reactions(6, 0, 30)\nprint(Ra, Rb)\n\nRa, Rb = reactions(6, 30, 0)\nprint(Ra, Rb)\n\nhelp(reactions)\n```\n\n 30.0 30.0\n 30.0 60.0\n 60.0 30.0\n Help on function reactions in module __main__:\n \n reactions(L, w1, w2)\n Returns the reactions at left and right supports of a simply supported beam\n subjected to a linearly varying load with intensity w1 and w2 at the left and right\n ends, respectively\n \n\n\nFunction to calculate intensity of load $w_x$ at distance $x$ from left support is shown below. It is tested for the intensity of load at midspan of a $6~m$ span beam with triangular loading ($w_1 = 0$) and UDL ($w_1 = w_2$).\n\n\n```python\ndef calc_wx(x, L, w1, w2):\n '''Returns the intensity of a linearly varying load at distance 'x' from left support\n when intensity at x=0 is w1 and at x=L is w2'''\n return w1 + (w2 - w1) * x / L\n\nprint(calc_wx(3, 6, 0, 20))\nprint(calc_wx(3, 6, 10, 10))\n\nhelp(calc_wx)\n\nx = np.array([0, 1, 2, 3, 4, 5, 6], dtype=float)\nwx = calc_wx(x, 6.0, 0.0, 12.0)\nprint(type(x), len(x))\nprint(type(wx), len(wx))\nprint(wx)\n```\n\n 10.0\n 10.0\n Help on function calc_wx in module __main__:\n \n calc_wx(x, L, w1, w2)\n Returns the intensity of a linearly varying load at distance 'x' from left support\n when intensity at x=0 is w1 and at x=L is w2\n \n 7\n 7\n [ 0. 2. 4. 6. 8. 10. 12.]\n\n\nFunctions to calculate shear force $V_x$ and bending moment $M_x$ at $x$ are shown below. They are tested for 13 data points spaced equally along the span. Graphs of $w_x, V_x$ and $M_x$ are plotted and verified for the nature of their variation.\n\n\n```python\ndef calc_Vx(x, L, w1, w2):\n '''Returns the shear force at a distance 'x' from left support in a simply supported beam\n of span L subjected to a linearly varying load of intensity w1 at x=0 and w2 at x=L'''\n\n Ra, Rb = reactions(L, w1, w2)\n return (Ra - (w1 + calc_wx(x, L, w1, w2)) / 2 * x)\n\ndef calc_Mx(x, L, w1, w2):\n '''Returns the bending moment at a distance 'x' from left support in a simply supported beam\n of span L subjected to a linearly varying load of intensity w1 at x=0 and w2 at x=L'''\n Ra, Rb = reactions(L, w1, w2)\n wx = calc_wx(x, L, w1, w2)\n return (Ra * x - (2*w1+wx)*x*x/6)\n\nL = 6.0\nw1 = 0.0\nw2 = 12.0\nnumint = 12\n\nRa, Rb = reactions(L, w1, w2)\nprint('Ra = %.4f, Rb = %.4f' % (Ra, Rb))\n\nx = np.linspace(0, L, numint+1)\nwx = calc_wx(x, 6.0, w1, w2)\nVx = calc_Vx(x, L, w1, w2)\nMx = calc_Mx(x, L, w1, w2)\n\nfor xx, yy, zz, mm in zip(x, wx, Vx, Mx):\n print(\"%8.2f %12.4f %12.4f %12.4f\" % (xx, yy, zz, mm))\n\nplt.subplot(311)\nplt.plot(x, wx)\nplt.grid()\n\nplt.subplot(312)\nplt.plot(x, Vx)\nplt.grid()\n\nplt.subplot(313)\nplt.plot(x, Mx)\nplt.grid()\n\nplt.show()\n```\n\nWe must now determine the value of $x$ at which $V_x = 0$. This requires the calculation of root of the shear force equation. 
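\n\nBefore writing a general root finder, it is worth noting that for this particular loading $V_x$ is a quadratic in $x$ (since $w_x$ is linear in $x$), so the point of zero shear can also be obtained in closed form. The short sketch below is only a cross-check added for illustration; it is not part of the original program, and the helper name `zero_shear_closed_form` is our own.\n\n\n```python\n# Illustrative cross-check (not part of the original program).\n# V_x = Ra - w1*x - (w2 - w1)*x**2/(2*L) is quadratic in x, so its root on [0, L]\n# can be obtained directly from the quadratic formula (here via numpy.roots).\nimport numpy as np\n\ndef zero_shear_closed_form(L, w1, w2):\n    '''Returns the location of zero shear force on [0, L] for the linearly varying load'''\n    Ra = (2 * w1 + w2) * L / 6\n    # Coefficients of -(w2 - w1)/(2*L)*x**2 - w1*x + Ra = 0\n    roots = np.roots([-(w2 - w1) / (2 * L), -w1, Ra])\n    # Keep the real root that lies within the span\n    return [r.real for r in roots if abs(r.imag) < 1e-9 and 0 <= r.real <= L][0]\n\nprint(zero_shear_closed_form(6.0, 0.0, 12.0)) # approximately 3.4641, i.e. sqrt(12)\n```\n\nThe general purpose root-finding functions developed next do not rely on this closed form, so they can also be reused for problems where no closed form is available.\n\n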
To find this root numerically, we will write two functions, **`bracket()`** and **`bisection()`**, to accomplish the following tasks. Both are general purpose, in that they can be reused in other problems that require finding roots.\n\n**`bracket()`** takes as input two vectors, namely the values of the independent variable $x$ and the corresponding values of the dependent variable $y$. It returns **`None`** if no root is bracketed, else it returns the pair of $x$ values that bracket the first root.\n\n**`bisection()`** takes as input the function $f(x, L, w_1, w_2)$ whose root is to be found, the bracket $x_1$, $x_2$, the tolerance and the maximum number of iterations desired.\n\nThe function **`bisection()`** is tested and verified by calculating the shear force at the distance it returns, which must be zero to within the chosen tolerance. Once verified, we can calculate the bending moment at the same location.\n\n\n```python\ndef bracket(x, y):\n    '''Returns a list with two elements corresponding to the first root that is bracketed\n    within the array x[], else returns None'''\n\n    b = []\n    for i in range(len(x)-1):\n        if y[i] * y[i+1] <= 0.0:\n            b.append(x[i])\n            b.append(x[i+1])\n            return b\n    return None\n\nprint(type(x), len(x))\nb = bracket(x, Vx)\nprint(b)\n```\n\n    <class 'numpy.ndarray'> 13\n    [3.0, 3.5]\n\n\n\n```python\ndef bisection(f, x1, x2, L, w1, w2, tol=1e-6, maxiter=50):\n    '''Returns the root within the bracket [x1, x2] using the Bisection method. Convergence is determined\n    by the tolerance 'tol', which defaults to 1e-6 if not specified. A maximum of 'maxiter' iterations\n    are performed, which defaults to 50 if not specified. If no root with the required tolerance can be\n    found within the specified number of iterations, the function returns None\n\n    When calling this function, the arguments 'L', 'w1' and 'w2' must be supplied, which in turn will\n    be passed on to function 'f', whose root is being determined.'''\n\n    y1 = f(x1, L, w1, w2)\n    y2 = f(x2, L, w1, w2)\n    if abs(y1) <= tol:\n        return x1\n    elif abs(y2) <= tol:\n        return x2\n\n    if (y1 * y2) > 0:\n        print('Root not bracketed')\n        return None\n\n    k = 0\n    while (k <= maxiter):\n        k += 1\n        x = (x1 + x2) / 2\n        y = f(x, L, w1, w2)\n        if abs(y) <= tol:\n            return x\n        else:\n            if y1 * y < 0:\n                x2 = x\n                y2 = y\n            else:\n                x1 = x\n                y1 = y\n\n    if abs(y1) <= tol:\n        return x1\n    elif abs(y2) <= tol:\n        return x2\n    else:\n        return None\n\nx = bisection(calc_Vx, b[0], b[1], 6.0, 0.0, 12.0, 1e-12)\nprint(x, calc_Vx(x, L, w1, w2))\n\nhelp(bisection)\n```\n\n    3.46410161514 5.24025267623e-13\n    Help on function bisection in module __main__:\n    \n    bisection(f, x1, x2, L, w1, w2, tol=1e-06, maxiter=50)\n        Returns the root within the bracket [x1, x2] using the Bisection method. Convergence is determined\n        by the tolerance 'tol', which defaults to 1e-6 if not specified. A maximum of 'maxiter' iterations\n        are performed, which defaults to 50 if not specified. If no root with the required tolerance can be\n        found within the specified number of iterations, the function returns None\n        \n        When calling this function, the arguments 'L', 'w1' and 'w2' must be supplied, which in turn will\n        be passed on to function 'f', whose root is being determined.\n    \n\n\n\n```python\nprint(calc_Mx(x, L, w1, w2))\n```\n\n    27.7128129211\n\n\nOnce each function is tested, we can now write the main function that will set up the input data, call the required functions in the correct sequence and print the results. 
A separate function is developed to plot graphs.\n\n\n```python\ndef plot_graphs(x, wx, Vx, Mx):\n '''Function to plot graphs for intensity of load, shear force and bending moment. Does not return any value.'''\n\n plt.subplot(311)\n plt.plot(x, wx)\n plt.grid()\n\n plt.subplot(312)\n plt.plot(x, Vx)\n plt.grid()\n\n plt.subplot(313)\n plt.plot(x, Mx)\n plt.grid()\n\n plt.show()\n return\n\ndef main(L, w1, w2, numint):\n '''Main function that sets up input, prints output and generates graphs. Does not return any value'''\n\n Ra, Rb = reactions(L, w1, w2)\n print('INPUT')\n print('Span = %.4f m' % L)\n print('Intensity of load at left support = %.4f kN/m' % w1)\n print('Intensity of load at right support = %.4f kN/m' % w2)\n print()\n print('OUTPUT')\n print('Reactions')\n print('Ra = %.4f, Rb = %.4f' % (Ra, Rb))\n x = np.linspace(0.0, L, numint+1)\n wx = calc_wx(x, L, w1, w2)\n Vx = calc_Vx(x, L, w1, w2)\n Mx = calc_Mx(x, L, w1, w2)\n for xx, ww, sf, bm in zip(x, wx, Vx, Mx):\n print(\"%8.2f %12.4f %12.4f %12.4f\" % (xx, ww, sf, bm))\n plot_graphs(x, wx, Vx, Mx)\n b = bracket(x, Vx)\n xmax = bisection(calc_Vx, b[0], b[1], L, w1, w2, 1e-12)\n Mmax = calc_Mx(xmax, L, w1, w2)\n print('Maximum Bending Moment')\n print('x = %.4f V = %.4f Mmax = %.4f' % (xmax, calc_Vx(xmax, L, w1, w2), calc_Mx(xmax, L, w1, w2)))\n return\n\nL = 6.0\nw1 = 0.0\nw2 = 12.0\nmain(L, w1, w2, 13)\n```\n\n\n```python\nL = 6.0\nw1 = 10.0\nw2 = 10.0\nmain(L, w1, w2, 13)\n```\n\n\n```python\nL = 6.0\nw1 = 12.0\nw2 = 0.0\nmain(L, w1, w2, 13)\n```\n\n## Results and Discussion\n* Discuss the results for different types of loads that can be generated using this program, namely\n * Triangular load ($w_1=0, w_2 \\neq 0$ or $w1 \\neq 0, w_2=0$)\n * Unifomly distributed load ($w_1 = w_2$)\n * Trapezoidal load ($w_1 \\neq 0$ and $w_2 \\neq 0$)\n* A study on the effect of changing number of intervals could be made\n\n## Limitations of the Program\nThe defined scope of the problem limits the program to simple supports, load applied over entire span and varying linearly from $w_1$ at left support to $w_2$ at the right support.\n\n## Improvements to the Program\n* Replacing Bisection with False Position method may reduce the number of iterations required.\n\n## Conclusions\n\n\n```python\ndef bisection(f, x1, x2, L, w1, w2, tol, maxiter):\n y1 = f(x1, L, w1, w2)\n\n if abs(y1) <= tol:\n return x1\n\n y2 = f(x2, L, w1, w2)\n if abs(y2) <= tol:\n return x2\n k = 0\n while k <= maxiter:\n k += 1\n x = (x1 + x2) / 2\n y = f(x, L, w1, w2)\n # print(x, y)\n if abs(y) <= tol:\n return x\n if y1 * y <= 0.0: # Root is bracketed by left half\n x2 = x\n y2 = y\n else: # Root is bracketed by right half\n x1 = x\n y1 = y\n return None\nroot = bisection(calc_Vx, 3.0, 3.5, 6, 0, 12, 1e-6, 50)\nVmax = calc_Vx(root, 6, 0, 12)\nMmax = calc_Mx(root, 6, 0, 12)\nprint(root, Vmax, Mmax)\nprint()\nroot = bisection(calc_Vx, 3.0, 3.5, 6, 0, 12, 1e-12, 50)\nVmax = calc_Vx(root, 6, 0, 12)\nMmax = calc_Mx(root, 6, 0, 12)\nprint(root, Vmax, Mmax)\nprint()\nroot = bisection(calc_Vx, 3.0, 3.5, 6, 0, 12, 1e-16, 50)\nVmax = calc_Vx(root, 6, 0, 12)\nMmax = calc_Mx(root, 6, 0, 12)\nprint(root, Vmax, Mmax)\n```\n\n 3.46410155296 4.30757552294e-07 27.7128129211\n \n 3.46410161514 5.24025267623e-13 27.7128129211\n \n 3.46410161514 0.0 27.7128129211\n\n\n\n```python\nclass UDL(object):\n def __init__(self, w, d1, d2):\n self.w = w\n if d2 > d1:\n self.d1 = d1\n self.d2 = d2\n else:\n seld.d1 = d2\n self.d2 = d1\n return\n\n def getResultant(self):\n return self.w * (self.d2 
- self.d1)\n\n def getCG(self):\n return (self.d1 + self.d2) / 2.0\n\n def getMagnitude(self):\n return self.getResultant()\n\n def getMoment(self):\n return self.getResultant() * self.getCG()\n\n def calcV(self, x):\n if x <= self.d1:\n return 0.0\n elif x <= self.d2:\n return -(self.w * (x - self.d1))\n else:\n return -self.getResultant()\n\n def calcM(self, x):\n if x <= self.d1:\n return 0.0\n elif x <= self.d2:\n return -(self.w * (x - self.d1)**2 / 2.0)\n else:\n return -(self.getResultant() * (x - self.getCG()))\n\nclass SSBeam(object):\n def __init__(self, L, loads, ndiv=10):\n self.L = L\n self.loads = loads[:]\n self.ndiv = ndiv\n self.sections = np.linspace(0, self.L, self.ndiv+1)\n self.V = np.zeros(self.sections.shape)\n self.M = np.zeros(self.sections.shape)\n return\n\n def getSections(self):\n self.sections = np.linspace(0, self.L, self.ndiv+1)\n return\n\n def __str__(self):\n s = \"Simply Supported Beam, Span = %.2f\" % (self.L,)\n return s\n\n def calcReactions(self):\n P = 0.0\n M = 0.0\n for load in self.loads:\n P += load.getMagnitude()\n M += load.getMoment()\n Rb = M / self.L\n Ra = P - Rb\n return (Ra, Rb)\n\n def calcV_M(self):\n Ra, Rb = self.calcReactions()\n for i, x in enumerate(self.sections):\n V = Ra\n M = Ra * x\n for load in self.loads:\n V += load.calcV(x)\n M += load.calcM(x)\n self.V[i] = V\n self.M[i] = M\n print(self.V)\n print(self.M)\n return\n\n def plot(self):\n plt.subplot(211)\n plt.plot(self.sections, self.V)\n plt.subplot(212)\n plt.plot(self.sections, self.M)\n plt.grid()\n plt.show()\n return\n\nloads = [UDL(10.0, 0.0, 6.0)]\nB = SSBeam(6, loads, 12)\nprint(B)\nRa, Rb = B.calcReactions()\nprint(\"Ra = %.2f, Rb = %.2f\" % (Ra, Rb))\nB.calcV_M()\nB.plot()\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "e95674ca34f1459ec5ab21ec98252abd15623965", "size": 118596, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "bvbcet/09_example_civil.ipynb", "max_stars_repo_name": "satish-annigeri/Notebooks", "max_stars_repo_head_hexsha": "92a7dc1d4cf4aebf73bba159d735a2e912fc88bb", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "bvbcet/09_example_civil.ipynb", "max_issues_repo_name": "satish-annigeri/Notebooks", "max_issues_repo_head_hexsha": "92a7dc1d4cf4aebf73bba159d735a2e912fc88bb", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "bvbcet/09_example_civil.ipynb", "max_forks_repo_name": "satish-annigeri/Notebooks", "max_forks_repo_head_hexsha": "92a7dc1d4cf4aebf73bba159d735a2e912fc88bb", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 134.1583710407, "max_line_length": 19112, "alphanum_fraction": 0.8342777159, "converted": true, "num_tokens": 5273, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9481545289551957, "lm_q2_score": 0.8856314677809303, "lm_q1q2_score": 0.8397154871617265}} {"text": "# Linear Regression Tutorial\nby Marc Deisenroth\n\nThe purpose of this notebook is to practice implementing some linear algebra (equations provided) and to explore some properties of linear regression.\n\n\n```python\nimport numpy as np\nimport scipy.linalg\nimport matplotlib.pyplot as plt\n%matplotlib inline\n```\n\nWe consider a linear regression problem of the form\n$$\ny = \\boldsymbol x^T\\boldsymbol\\theta + \\epsilon\\,,\\quad \\epsilon \\sim \\mathcal N(0, \\sigma^2)\n$$\nwhere $\\boldsymbol x\\in\\mathbb{R}^D$ are inputs and $y\\in\\mathbb{R}$ are noisy observations. The parameter vector $\\boldsymbol\\theta\\in\\mathbb{R}^D$ parametrizes the function.\n\nWe assume we have a training set $(\\boldsymbol x_n, y_n)$, $n=1,\\ldots, N$. We summarize the sets of training inputs in $\\mathcal X = \\{\\boldsymbol x_1, \\ldots, \\boldsymbol x_N\\}$ and corresponding training targets $\\mathcal Y = \\{y_1, \\ldots, y_N\\}$, respectively.\n\nIn this tutorial, we are interested in finding good parameters $\\boldsymbol\\theta$.\n\n\n```python\n# Define training set\nX = np.array([-3, -1, 0, 1, 3]).reshape(-1,1) # 5x1 vector, N=5, D=1\ny = np.array([-1.2, -0.7, 0.14, 0.67, 1.67]).reshape(-1,1) # 5x1 vector\n\n# Plot the training set\nplt.figure()\nplt.plot(X, y, '+', markersize=10)\nplt.xlabel(\"$x$\")\nplt.ylabel(\"$y$\");\n```\n\n## 1. Maximum Likelihood\nWe will start with maximum likelihood estimation of the parameters $\\boldsymbol\\theta$. In maximum likelihood estimation, we find the parameters $\\boldsymbol\\theta^{\\mathrm{ML}}$ that maximize the likelihood\n$$\np(\\mathcal Y | \\mathcal X, \\boldsymbol\\theta) = \\prod_{n=1}^N p(y_n | \\boldsymbol x_n, \\boldsymbol\\theta)\\,.\n$$\nFrom the lecture we know that the maximum likelihood estimator is given by\n$$\n\\boldsymbol\\theta^{\\text{ML}} = (\\boldsymbol X^T\\boldsymbol X)^{-1}\\boldsymbol X^T\\boldsymbol y\\in\\mathbb{R}^D\\,,\n$$\nwhere \n$$\n\\boldsymbol X = [\\boldsymbol x_1, \\ldots, \\boldsymbol x_N]^T\\in\\mathbb{R}^{N\\times D}\\,,\\quad \\boldsymbol y = [y_1, \\ldots, y_N]^T \\in\\mathbb{R}^N\\,.\n$$\n\nLet us compute the maximum likelihood estimate for a given training set\n\n\n```python\n## EDIT THIS FUNCTION\ndef max_lik_estimate(X, y):\n \n # X: N x D matrix of training inputs\n # y: N x 1 vector of training targets/observations\n # returns: maximum likelihood parameters (D x 1)\n \n N, D = X.shape\n theta_ml = np.linalg.solve(X.T @ X, X.T @ y) ## <-- SOLUTION\n return theta_ml\n```\n\n\n```python\n# get maximum likelihood estimate\ntheta_ml = max_lik_estimate(X,y)\n```\n\nNow, make a prediction using the maximum likelihood estimate that we just found\n\n\n```python\n## EDIT THIS FUNCTION\ndef predict_with_estimate(Xtest, theta):\n \n # Xtest: K x D matrix of test inputs\n # theta: D x 1 vector of parameters\n # returns: prediction of f(Xtest); K x 1 vector\n \n prediction = Xtest @ theta ## <-- SOLUTION\n \n return prediction \n```\n\nNow, let's see whether we got something useful:\n\n\n```python\n# define a test set\nXtest = np.linspace(-5,5,100).reshape(-1,1) # 100 x 1 vector of test inputs\n\n# predict the function values at the test points using the maximum likelihood estimator\nml_prediction = predict_with_estimate(Xtest, theta_ml)\n\n# plot\nplt.figure()\nplt.plot(X, y, '+', markersize=10)\nplt.plot(Xtest, ml_prediction)\nplt.xlabel(\"$x$\")\nplt.ylabel(\"$y$\");\n```\n\n#### 
Questions\n1. Does the solution above look reasonable?\n2. Play around with different values of $\\theta$. How do the corresponding functions change?\n3. Modify the training targets $\\mathcal Y$ and re-run your computation. What changes?\n\nLet us now look at a different training set, where we add 2.0 to every $y$-value, and compute the maximum likelihood estimate\n\n\n```python\nynew = y + 2.0\n\nplt.figure()\nplt.plot(X, ynew, '+', markersize=10)\nplt.xlabel(\"$x$\")\nplt.ylabel(\"$y$\");\n```\n\n\n```python\n# get maximum likelihood estimate\ntheta_ml = max_lik_estimate(X, ynew)\nprint(theta_ml)\n\n# define a test set\nXtest = np.linspace(-5,5,100).reshape(-1,1) # 100 x 1 vector of test inputs\n\n# predict the function values at the test points using the maximum likelihood estimator\nml_prediction = predict_with_estimate(Xtest, theta_ml)\n\n# plot\nplt.figure()\nplt.plot(X, ynew, '+', markersize=10)\nplt.plot(Xtest, ml_prediction)\nplt.xlabel(\"$x$\")\nplt.ylabel(\"$y$\");\n```\n\n#### Question:\n1. This maximum likelihood estimate doesn't look too good: The orange line is too far away from the observations although we just shifted them by 2. Why is this the case?\n2. How can we fix this problem?\n\nLet us now define a linear regression model that is slightly more flexible:\n$$\ny = \\theta_0 + \\boldsymbol x^T \\boldsymbol\\theta_1 + \\epsilon\\,,\\quad \\epsilon\\sim\\mathcal N(0,\\sigma^2)\n$$\nHere, we added an offset (bias) parameter $\\theta_0$ to our original model.\n\n#### Question:\n1. What is the effect of this bias parameter, i.e., what additional flexibility does it offer?\n\nIf we now define the inputs to be the augmented vector $\\boldsymbol x_{\\text{aug}} = \\begin{bmatrix}1\\\\\\boldsymbol x\\end{bmatrix}$, we can write the new linear regression model as \n$$\ny = \\boldsymbol x_{\\text{aug}}^T\\boldsymbol\\theta_{\\text{aug}} + \\epsilon\\,,\\quad \\boldsymbol\\theta_{\\text{aug}} = \\begin{bmatrix}\n\\theta_0\\\\\n\\boldsymbol\\theta_1\n\\end{bmatrix}\\,.\n$$\n\n\n```python\nN, D = X.shape\nX_aug = np.hstack([np.ones((N,1)), X]) # augmented training inputs of size N x (D+1)\ntheta_aug = np.zeros((D+1, 1)) # new theta vector of size (D+1) x 1\n```\n\nLet us now compute the maximum likelihood estimator for this setting.\n_Hint:_ If possible, re-use code that you have already written\n\n\n```python\n## EDIT THIS FUNCTION\ndef max_lik_estimate_aug(X_aug, y):\n \n theta_aug_ml = max_lik_estimate(X_aug, y) ## <-- SOLUTION\n \n return theta_aug_ml\n```\n\n\n```python\ntheta_aug_ml = max_lik_estimate_aug(X_aug, y)\n```\n\nNow, we can make predictions again:\n\n\n```python\n# define a test set (we also need to augment the test inputs with ones)\nXtest_aug = np.hstack([np.ones((Xtest.shape[0],1)), Xtest]) # 100 x (D + 1) vector of test inputs\n\n# predict the function values at the test points using the maximum likelihood estimator\nml_prediction = predict_with_estimate(Xtest_aug, theta_aug_ml)\n\n# plot\nplt.figure()\nplt.plot(X, y, '+', markersize=10)\nplt.plot(Xtest, ml_prediction)\nplt.xlabel(\"$x$\")\nplt.ylabel(\"$y$\");\n```\n\nIt seems this has solved our problem! \n#### Question:\n1. Play around with the first parameter of $\\boldsymbol\\theta_{\\text{aug}}$ and see how the fit of the function changes.\n2. Play around with the second parameter of $\\boldsymbol\\theta_{\\text{aug}}$ and see how the fit of the function changes.\n\n### Nonlinear Features\nSo far, we have looked at linear regression with linear features. This allowed us to fit straight lines. 
However, linear regression also allows us to fit functions that are nonlinear in the inputs $\\boldsymbol x$, as long as the parameters $\\boldsymbol\\theta$ appear linearly. This means, we can learn functions of the form\n$$\nf(\\boldsymbol x, \\boldsymbol\\theta) = \\sum_{k = 1}^K \\theta_k \\phi_k(\\boldsymbol x)\\,,\n$$\nwhere the features $\\phi_k(\\boldsymbol x)$ are (possibly nonlinear) transformations of the inputs $\\boldsymbol x$.\n\nLet us have a look at an example where the observations clearly do not lie on a straight line:\n\n\n```python\ny = np.array([10.05, 1.5, -1.234, 0.02, 8.03]).reshape(-1,1)\nplt.figure()\nplt.plot(X, y, '+')\nplt.xlabel(\"$x$\")\nplt.ylabel(\"$y$\");\n```\n\n#### Polynomial Regression\nOne class of functions that is covered by linear regression is the family of polynomials because we can write a polynomial of degree $K$ as\n$$\n\\sum_{k=0}^K \\theta_k x^k = \\boldsymbol \\phi(x)^T\\boldsymbol\\theta\\,,\\quad\n\\boldsymbol\\phi(x)= \n\\begin{bmatrix}\nx^0\\\\\nx^1\\\\\n\\vdots\\\\\nx^K\n\\end{bmatrix}\\in\\mathbb{R}^{K+1}\\,.\n$$\nHere, $\\boldsymbol\\phi(x)$ is a nonlinear feature transformation of the inputs $x\\in\\mathbb{R}$.\n\nSimilar to the earlier case we can define a matrix that collects all the feature transformations of the training inputs:\n$$\n\\boldsymbol\\Phi = \\begin{bmatrix}\n\\boldsymbol\\phi(x_1) & \\boldsymbol\\phi(x_2) & \\cdots & \\boldsymbol\\phi(x_n)\n\\end{bmatrix}^T \\in\\mathbb{R}^{N\\times K+1}\n$$\n\nLet us start by computing the feature matrix $\\boldsymbol \\Phi$\n\n\n```python\n## EDIT THIS FUNCTION\ndef poly_features(X, K):\n \n # X: inputs of size N x 1\n # K: degree of the polynomial\n # computes the feature matrix Phi (N x (K+1))\n \n X = X.flatten()\n N = X.shape[0]\n \n #initialize Phi\n Phi = np.zeros((N, K+1))\n \n # Compute the feature matrix in stages\n for k in range(K+1):\n Phi[:,k] = X**k ## <-- SOLUTION\n return Phi\n```\n\nWith this feature matrix we get the maximum likelihood estimator as\n$$\n\\boldsymbol \\theta^\\text{ML} = (\\boldsymbol\\Phi^T\\boldsymbol\\Phi)^{-1}\\boldsymbol\\Phi^T\\boldsymbol y\n$$\nFor reasons of numerical stability, we often add a small diagonal \"jitter\" $\\kappa>0$ to $\\boldsymbol\\Phi^T\\boldsymbol\\Phi$ so that we can invert the matrix without significant problems so that the maximum likelihood estimate becomes\n$$\n\\boldsymbol \\theta^\\text{ML} = (\\boldsymbol\\Phi^T\\boldsymbol\\Phi + \\kappa\\boldsymbol I)^{-1}\\boldsymbol\\Phi^T\\boldsymbol y\n$$\n\n\n```python\n## EDIT THIS FUNCTION\ndef nonlinear_features_maximum_likelihood(Phi, y):\n # Phi: features matrix for training inputs. Size of N x D\n # y: training targets. Size of N by 1\n # returns: maximum likelihood estimator theta_ml. Size of D x 1\n \n kappa = 1e-08 # 'jitter' term; good for numerical stability\n \n D = Phi.shape[1] \n \n # maximum likelihood estimate\n Pt = Phi.T @ y # Phi^T*y\n PP = Phi.T @ Phi + kappa*np.eye(D) # Phi^T*Phi + kappa*I\n \n # maximum likelihood estimate\n C = scipy.linalg.cho_factor(PP)\n theta_ml = scipy.linalg.cho_solve(C, Pt) # inv(Phi^T*Phi)*Phi^T*y \n \n return theta_ml\n```\n\nNow we have all the ingredients together: The computation of the feature matrix and the computation of the maximum likelihood estimator for polynomial regression. 
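\n\nAs a small concrete check on `poly_features` (a toy example of our own, not from the original tutorial), the inputs $[-1, 0, 2]^T$ with $K=2$ should produce the feature matrix\n\n$$\n\\boldsymbol\\Phi = \\begin{bmatrix}\n1 & -1 & 1 \\\\\n1 & 0 & 0 \\\\\n1 & 2 & 4\n\\end{bmatrix},\n$$\n\nsince the columns are $x^0$, $x^1$ and $x^2$. This can be confirmed by calling `poly_features(np.array([-1.0, 0.0, 2.0]).reshape(-1,1), 2)`.\n\n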
Let's see how this works.\n\nTo make predictions at test inputs $\\boldsymbol X_{\\text{test}}\\in\\mathbb{R}$, we need to compute the features (nonlinear transformations) $\\boldsymbol\\Phi_{\\text{test}}= \\boldsymbol\\phi(\\boldsymbol X_{\\text{test}})$ of $\\boldsymbol X_{\\text{test}}$ to give us the predicted mean\n$$\n\\mathbb{E}[\\boldsymbol y_{\\text{test}}] = \\boldsymbol \\Phi_{\\text{test}}\\boldsymbol\\theta^{\\text{ML}}\n$$\n\n\n```python\nK = 5 # Define the degree of the polynomial we wish to fit\nPhi = poly_features(X, K) # N x (K+1) feature matrix\n\ntheta_ml = nonlinear_features_maximum_likelihood(Phi, y) # maximum likelihood estimator\n\n# test inputs\nXtest = np.linspace(-4,4,100).reshape(-1,1)\n\n# feature matrix for test inputs\nPhi_test = poly_features(Xtest, K)\n\ny_pred = Phi_test @ theta_ml # predicted y-values\n\nplt.figure()\nplt.plot(X, y, '+')\nplt.plot(Xtest, y_pred)\nplt.xlabel(\"$x$\")\nplt.ylabel(\"$y$\");\n```\n\nExperiment with different polynomial degrees in the code above.\n#### Questions:\n1. What do you observe?\n2. What is a good fit?\n\n## Evaluating the Quality of the Model\n\nLet us have a look at a more interesting data set\n\n\n```python\ndef f(x): \n return np.cos(x) + 0.2*np.random.normal(size=(x.shape))\n\nX = np.linspace(-4,4,20).reshape(-1,1)\ny = f(X)\n\nplt.figure()\nplt.plot(X, y, '+')\nplt.xlabel(\"$x$\")\nplt.ylabel(\"$y$\");\n```\n\nNow, let us use the work from above and fit polynomials to this dataset.\n\n\n```python\n## EDIT THIS CELL\nK = 2 # Define the degree of the polynomial we wish to fit\n\nPhi = poly_features(X, K) # N x (K+1) feature matrix\n\ntheta_ml = nonlinear_features_maximum_likelihood(Phi, y) # maximum likelihood estimator\n\n# test inputs\nXtest = np.linspace(-5,5,100).reshape(-1,1)\nytest = f(Xtest) # ground-truth y-values\n\n# feature matrix for test inputs\nPhi_test = poly_features(Xtest, K)\n\ny_pred = Phi_test @ theta_ml # predicted y-values\n\n# plot\nplt.figure()\nplt.plot(X, y, '+')\nplt.plot(Xtest, y_pred)\nplt.plot(Xtest, ytest)\nplt.legend([\"data\", \"prediction\", \"ground truth observations\"])\nplt.xlabel(\"$x$\")\nplt.ylabel(\"$y$\");\n```\n\n#### Questions:\n1. Try out different degrees of polynomials. \n2. Based on visual inspection, what looks like the best fit?\n\nLet us now look at a more systematic way to assess the quality of the polynomial that we are trying to fit. For this, we compute the root-mean-squared-error (RMSE) between the $y$-values predicted by our polynomial and the ground-truth $y$-values. The RMSE is then defined as\n$$\n\\text{RMSE} = \\sqrt{\\frac{1}{N}\\sum_{n=1}^N(y_n - y_n^\\text{pred})^2}\n$$\nWrite a function that computes the RMSE.\n\n\n```python\n## EDIT THIS FUNCTION\ndef RMSE(y, ypred):\n rmse = np.sqrt(np.mean((y-ypred)**2)) ## SOLUTION\n return rmse\n```\n\nNow compute the RMSE for different degrees of the polynomial we want to fit.\n\n\n```python\n## EDIT THIS CELL\nK_max = 20\nrmse_train = np.zeros((K_max+1,))\n\nfor k in range(K_max+1):\n \n \n # feature matrix\n Phi = poly_features(X, k)\n \n # maximum likelihood estimate\n theta_ml = nonlinear_features_maximum_likelihood(Phi, y)\n \n # predict y-values of training set\n ypred_train = Phi @ theta_ml\n \n # RMSE on training set\n rmse_train[k] = RMSE(y, ypred_train)\n \n\nplt.figure()\nplt.plot(rmse_train)\nplt.xlabel(\"degree of polynomial\")\nplt.ylabel(\"RMSE\");\n```\n\n#### Question: \n1. What do you observe?\n2. What is the best polynomial fit according to this plot?\n3. 
Write some code that plots the function that uses the best polynomial degree (use the test set for this plot). What do you observe now?\n\n\n```python\n# WRITE THE PLOTTING CODE HERE\nplt.figure()\nplt.plot(X, y, '+')\n\n# feature matrix\nPhi = poly_features(X, 5)\n\n# maximum likelihood estimate\ntheta_ml = nonlinear_features_maximum_likelihood(Phi, y) \n\n# feature matrix for test inputs\nPhi_test = poly_features(Xtest, 5)\n\nypred_test = Phi_test @ theta_ml\n\nplt.plot(Xtest, ypred_test) \nplt.xlabel(\"$x$\")\nplt.ylabel(\"$y$\")\nplt.legend([\"data\", \"maximum likelihood fit\"]);\n```\n\nThe RMSE on the training data is somewhat misleading, because we are interested in the generalization performance of the model. Therefore, we are going to compute the RMSE on the test set and use this to choose a good polynomial degree.\n\n\n```python\n## EDIT THIS CELL\nK_max = 20\nrmse_train = np.zeros((K_max+1,))\nrmse_test = np.zeros((K_max+1,))\n\nfor k in range(K_max+1):\n \n # feature matrix\n Phi = poly_features(X, k)\n \n # maximum likelihood estimate\n theta_ml = nonlinear_features_maximum_likelihood(Phi, y)\n \n # predict y-values of training set\n ypred_train = Phi @ theta_ml\n \n # RMSE on training set\n rmse_train[k] = RMSE(y, ypred_train) \n \n # feature matrix for test inputs\n Phi_test = poly_features(Xtest, k)\n \n # prediction\n ypred_test = Phi_test @ theta_ml\n \n # RMSE on test set\n rmse_test[k] = RMSE(ytest, ypred_test)\n \n\nplt.figure()\nplt.semilogy(rmse_train) # this plots the RMSE on a logarithmic scale\nplt.semilogy(rmse_test) # this plots the RMSE on a logarithmic scale\nplt.xlabel(\"degree of polynomial\")\nplt.ylabel(\"RMSE\")\nplt.legend([\"training set\", \"test set\"]);\n```\n\n#### Questions:\n1. What do you observe now?\n2. Why does the RMSE for the test set not always go down?\n3. Which polynomial degree would you choose now?\n4. Plot the fit for the \"best\" polynomial degree.\n\n\n```python\n# WRITE THE PLOTTING CODE HERE\nplt.figure()\nplt.plot(X, y, '+')\nk = 5\n# feature matrix\nPhi = poly_features(X, k)\n\n# maximum likelihood estimate\ntheta_ml = nonlinear_features_maximum_likelihood(Phi, y) \n\n# feature matrix for test inputs\nPhi_test = poly_features(Xtest, k)\n\nypred_test = Phi_test @ theta_ml\n\nplt.plot(Xtest, ypred_test) \nplt.xlabel(\"$x$\")\nplt.ylabel(\"$y$\")\nplt.legend([\"data\", \"maximum likelihood fit\"]);\n```\n\n#### Question\nIf you did not have a designated test set, what could you do to estimate the generalization error (purely using the training set)?\n\n## 2. Maximum A Posteriori Estimation\n\nWe are still considering the model\n$$\ny = \\boldsymbol\\phi(\\boldsymbol x)^T\\boldsymbol\\theta + \\epsilon\\,,\\quad \\epsilon\\sim\\mathcal N(0,\\sigma^2)\\,.\n$$\nWe assume that the noise variance $\\sigma^2$ is known.\n\nInstead of maximizing the likelihood, we can look at the maximum of the posterior distribution on the parameters $\\boldsymbol\\theta$, which is given as\n$$\np(\\boldsymbol\\theta|\\mathcal X, \\mathcal Y) = \\frac{\\overbrace{p(\\mathcal Y|\\mathcal X, \\boldsymbol\\theta)}^{\\text{likelihood}}\\overbrace{p(\\boldsymbol\\theta)}^{\\text{prior}}}{\\underbrace{p(\\mathcal Y|\\mathcal X)}_{\\text{evidence}}}\n$$\nThe purpose of the parameter prior $p(\\boldsymbol\\theta)$ is to discourage the parameters to attain extreme values, a sign that the model overfits. The prior allows us to specify a \"reasonable\" range of parameter values. 
Typically, we choose a Gaussian prior $\\mathcal N(\\boldsymbol 0, \\alpha^2\\boldsymbol I)$, centered at $\\boldsymbol 0$ with variance $\\alpha^2$ along each parameter dimension.\n\nThe MAP estimate of the parameters is\n$$\n\\boldsymbol\\theta^{\\text{MAP}} = (\\boldsymbol\\Phi^T\\boldsymbol\\Phi + \\frac{\\sigma^2}{\\alpha^2}\\boldsymbol I)^{-1}\\boldsymbol\\Phi^T\\boldsymbol y\n$$\nwhere $\\sigma^2$ is the variance of the noise.\n\n\n```python\n## EDIT THIS FUNCTION\ndef map_estimate_poly(Phi, y, sigma, alpha):\n # Phi: training inputs, Size of N x D\n # y: training targets, Size of D x 1\n # sigma: standard deviation of the noise \n # alpha: standard deviation of the prior on the parameters\n # returns: MAP estimate theta_map, Size of D x 1\n \n D = Phi.shape[1] \n \n # SOLUTION\n PP = Phi.T @ Phi + (sigma/alpha)**2 * np.eye(D)\n theta_map = scipy.linalg.solve(PP, Phi.T @ y)\n \n return theta_map\n```\n\n\n```python\n# define the function we wish to estimate later\ndef g(x, sigma):\n p = np.hstack([x**0, x**1, np.sin(x)])\n w = np.array([-1.0, 0.1, 1.0]).reshape(-1,1)\n return p @ w + sigma*np.random.normal(size=x.shape) \n```\n\n\n```python\n# Generate some data\nsigma = 1.0 # noise standard deviation\nalpha = 1.0 # standard deviation of the parameter prior\nN = 20\n\nnp.random.seed(42)\n\nX = (np.random.rand(N)*10.0 - 5.0).reshape(-1,1)\ny = g(X, sigma) # training targets\n\nplt.figure()\nplt.plot(X, y, '+')\nplt.xlabel(\"$x$\")\nplt.ylabel(\"$y$\");\n```\n\n\n```python\n# get the MAP estimate\nK = 8 # polynomial degree \n\n\n# feature matrix\nPhi = poly_features(X, K)\n\ntheta_map = map_estimate_poly(Phi, y, sigma, alpha)\n\n# maximum likelihood estimate\ntheta_ml = nonlinear_features_maximum_likelihood(Phi, y)\n\nXtest = np.linspace(-5,5,100).reshape(-1,1)\nytest = g(Xtest, sigma)\n\nPhi_test = poly_features(Xtest, K)\ny_pred_map = Phi_test @ theta_map\n\ny_pred_mle = Phi_test @ theta_ml\n\nplt.figure()\nplt.plot(X, y, '+')\nplt.plot(Xtest, y_pred_map)\nplt.plot(Xtest, g(Xtest, 0))\nplt.plot(Xtest, y_pred_mle)\n\nplt.legend([\"data\", \"map prediction\", \"ground truth function\", \"maximum likelihood\"]);\n```\n\n\n```python\nprint(np.hstack([theta_ml, theta_map]))\n```\n\n [[-1.49712990e+00 -1.08154986e+00]\n [ 8.56868912e-01 6.09177023e-01]\n [-1.28335730e-01 -3.62071208e-01]\n [-7.75319509e-02 -3.70531732e-03]\n [ 3.56425467e-02 7.43090617e-02]\n [-4.11626749e-03 -1.03278646e-02]\n [-2.48817783e-03 -4.89363010e-03]\n [ 2.70146690e-04 4.24148554e-04]\n [ 5.35996050e-05 1.03384719e-04]]\n\n\nNow, let us compute the RMSE for different polynomial degrees and see whether the MAP estimate addresses the overfitting issue we encountered with the maximum likelihood estimate.\n\n\n```python\n## EDIT THIS CELL\n\nK_max = 12 # this is the maximum degree of polynomial we will consider\nassert(K_max < N) # this is the latest point when we'll run into numerical problems\n\nrmse_mle = np.zeros((K_max+1,))\nrmse_map = np.zeros((K_max+1,))\n\nfor k in range(K_max+1):\n \n \n # feature matrix\n Phi = poly_features(X, k)\n \n # maximum likelihood estimate\n theta_ml = nonlinear_features_maximum_likelihood(Phi, y)\n \n # predict the function values at the test input locations (maximum likelihood)\n y_pred_test = 0*Xtest ## <--- EDIT THIS LINE\n \n ####################### SOLUTION\n # feature matrix for test inputs\n Phi_test = poly_features(Xtest, k)\n \n # prediction\n ypred_test_mle = Phi_test @ theta_ml\n #######################\n \n # RMSE on test set (maximum likelihood)\n rmse_mle[k] = 
RMSE(ytest, ypred_test_mle)\n \n # MAP estimate\n theta_map = map_estimate_poly(Phi, y, sigma, alpha)\n\n # Feature matrix\n Phi_test = poly_features(Xtest, k)\n \n # predict the function values at the test input locations (MAP)\n ypred_test_map = Phi_test @ theta_map\n \n # RMSE on test set (MAP)\n rmse_map[k] = RMSE(ytest, ypred_test_map)\n \n\nplt.figure()\nplt.semilogy(rmse_mle) # this plots the RMSE on a logarithmic scale\nplt.semilogy(rmse_map) # this plots the RMSE on a logarithmic scale\nplt.xlabel(\"degree of polynomial\")\nplt.ylabel(\"RMSE\")\nplt.legend([\"Maximum likelihood\", \"MAP\"])\n```\n\n#### Questions:\n1. What do you observe?\n2. What is the influence of the prior variance on the parameters ($\\alpha^2$)? Change the parameter and describe what happens.\n\n## 3. Bayesian Linear Regression\n\n\n```python\n# Test inputs\nNtest = 200\nXtest = np.linspace(-5, 5, Ntest).reshape(-1,1) # test inputs\n\nprior_var = 2.0 # variance of the parameter prior (alpha^2). We assume this is known.\nnoise_var = 1.0 # noise variance (sigma^2). We assume this is known.\n\npol_deg = 3 # degree of the polynomial we consider at the moment\n```\n\nAssume a parameter prior $p(\\boldsymbol\\theta) = \\mathcal N (\\boldsymbol 0, \\alpha^2\\boldsymbol I)$. For every test input $\\boldsymbol x_*$ we obtain the \nprior mean\n$$\nE[f(\\boldsymbol x_*)] = 0\n$$\nand the prior (marginal) variance (ignoring the noise contribution)\n$$\nV[f(\\boldsymbol x_*)] = \\alpha^2\\boldsymbol\\phi(\\boldsymbol x_*) \\boldsymbol\\phi(\\boldsymbol x_*)^\\top\n$$\nwhere $\\boldsymbol\\phi(\\cdot)$ is the feature map.\n\n\n```python\n## EDIT THIS CELL\n\n# compute the feature matrix for the test inputs\nPhi_test = poly_features(Xtest, pol_deg) # N x (pol_deg+1) feature matrix SOLUTION\n\n# compute the (marginal) prior at the test input locations\n# prior mean\nprior_mean = np.zeros((Ntest,1)) # prior mean <-- SOLUTION\n\n# prior variance\nfull_covariance = Phi_test @ Phi_test.T * prior_var # N x N covariance matrix of all function values\nprior_marginal_var = np.diag(full_covariance)\n\n# Let us visualize the prior over functions\nplt.figure()\nplt.plot(Xtest, prior_mean, color=\"k\")\n\nconf_bound1 = np.sqrt(prior_marginal_var).flatten()\nconf_bound2 = 2.0*np.sqrt(prior_marginal_var).flatten()\nconf_bound3 = 2.0*np.sqrt(prior_marginal_var + noise_var).flatten()\nplt.fill_between(Xtest.flatten(), prior_mean.flatten() + conf_bound1, \n prior_mean.flatten() - conf_bound1, alpha = 0.1, color=\"k\")\nplt.fill_between(Xtest.flatten(), prior_mean.flatten() + conf_bound2, \n prior_mean.flatten() - conf_bound2, alpha = 0.1, color=\"k\")\nplt.fill_between(Xtest.flatten(), prior_mean.flatten() + conf_bound3, \n prior_mean.flatten() - conf_bound3, alpha = 0.1, color=\"k\")\n\nplt.xlabel('$x$')\nplt.ylabel('$y$')\nplt.title(\"Prior over functions\");\n```\n\nNow, we will use this prior distribution and sample functions from it.\n\n\n```python\n## EDIT THIS CELL\n\n# samples from the prior\nnum_samples = 10\n\n# We first need to generate random weights theta_i, which we sample from the parameter prior\nrandom_weights = np.random.normal(size=(pol_deg+1,num_samples), scale=np.sqrt(prior_var))\n\n# Now, we compute the induced random functions, evaluated at the test input locations\n# Every function sample is given as f_i = Phi * theta_i, \n# where theta_i is a sample from the parameter prior\n\nsample_function = Phi_test @ random_weights # <-- SOLUTION\n\nplt.figure()\nplt.plot(Xtest, sample_function, 
color=\"r\")\nplt.title(\"Plausible functions under the prior\")\nprint(\"Every sampled function is a polynomial of degree \"+str(pol_deg));\n```\n\nNow we are given some training inputs $\\boldsymbol x_1, \\dotsc, \\boldsymbol x_N$, which we collect in a matrix $\\boldsymbol X = [\\boldsymbol x_1, \\dotsc, \\boldsymbol x_N]^\\top\\in\\mathbb{R}^{N\\times D}$\n\n\n```python\nN = 10\nX = np.random.uniform(high=5, low=-5, size=(N,1)) # training inputs, size Nx1\ny = g(X, np.sqrt(noise_var)) # training targets, size Nx1\n```\n\nNow, let us compute the posterior \n\n\n```python\n## EDIT THIS FUNCTION\n\ndef polyfit(X, y, K, prior_var, noise_var):\n # X: training inputs, size N x D\n # y: training targets, size N x 1\n # K: degree of polynomial we consider\n # prior_var: prior variance of the parameter distribution\n # sigma: noise variance\n \n jitter = 1e-08 # increases numerical stability\n \n Phi = poly_features(X, K) # N x (K+1) feature matrix \n \n # Compute maximum likelihood estimate\n Pt = Phi.T @ y # Phi*y, size (K+1,1)\n PP = Phi.T @ Phi + jitter*np.eye(K+1) # size (K+1, K+1)\n C = scipy.linalg.cho_factor(PP)\n # maximum likelihood estimate\n theta_ml = scipy.linalg.cho_solve(C, Pt) # inv(Phi^T*Phi)*Phi^T*y, size (K+1,1)\n \n# theta_ml = scipy.linalg.solve(PP, Pt) # inv(Phi^T*Phi)*Phi^T*y, size (K+1,1)\n \n # MAP estimate\n theta_map = scipy.linalg.solve(PP + noise_var/prior_var*np.eye(K+1), Pt)\n \n # parameter posterior\n iSN = (np.eye(K+1)/prior_var + PP/noise_var) # posterior precision\n SN = scipy.linalg.pinv(noise_var*np.eye(K+1)/prior_var + PP)*noise_var # posterior covariance\n mN = scipy.linalg.solve(iSN, Pt/noise_var) # posterior mean\n \n return (theta_ml, theta_map, mN, SN)\n```\n\n\n```python\ntheta_ml, theta_map, theta_mean, theta_var = polyfit(X, y, pol_deg, alpha, sigma)\n```\n\nNow, let's make predictions (ignoring the measurement noise). We obtain three predictors:\n\\begin{align}\n&\\text{Maximum likelihood: }E[f(\\boldsymbol X_{\\text{test}})] = \\boldsymbol \\phi(X_{\\text{test}})\\boldsymbol \\theta_{ml}\\\\\n&\\text{Maximum a posteriori: } E[f(\\boldsymbol X_{\\text{test}})] = \\boldsymbol \\phi(X_{\\text{test}})\\boldsymbol \\theta_{map}\\\\\n&\\text{Bayesian: } p(f(\\boldsymbol X_{\\text{test}})) = \\mathcal N(f(\\boldsymbol X_{\\text{test}}) \\,|\\, \\boldsymbol \\phi(X_{\\text{test}}) \\boldsymbol\\theta_{\\text{mean}},\\, \\boldsymbol\\phi(X_{\\text{test}}) \\boldsymbol\\theta_{\\text{var}} \\boldsymbol\\phi(X_{\\text{test}})^\\top)\n\\end{align}\nWe already computed all quantities. 
Write some code that implements all three predictors.\n\n\n```python\n## EDIT THIS CELL\n\n# predictions (ignoring the measurement/observations noise)\n\nPhi_test = poly_features(Xtest, pol_deg) # N x (K+1)\n\n# maximum likelihood predictions (just the mean)\nm_mle_test = Phi_test @ theta_ml\n\n# MAP predictions (just the mean)\nm_map_test = Phi_test @ theta_map\n\n# predictive distribution (Bayesian linear regression)\n# mean prediction\nmean_blr = Phi_test @ theta_mean\n# variance prediction\ncov_blr = Phi_test @ theta_var @ Phi_test.T\n```\n\n\n```python\n# plot the posterior\nplt.figure()\nplt.plot(X, y, \"+\")\nplt.plot(Xtest, m_mle_test)\nplt.plot(Xtest, m_map_test)\nvar_blr = np.diag(cov_blr)\nconf_bound1 = np.sqrt(var_blr).flatten()\nconf_bound2 = 2.0*np.sqrt(var_blr).flatten()\nconf_bound3 = 2.0*np.sqrt(var_blr + sigma).flatten()\n\nplt.fill_between(Xtest.flatten(), mean_blr.flatten() + conf_bound1, \n mean_blr.flatten() - conf_bound1, alpha = 0.1, color=\"k\")\nplt.fill_between(Xtest.flatten(), mean_blr.flatten() + conf_bound2, \n mean_blr.flatten() - conf_bound2, alpha = 0.1, color=\"k\")\nplt.fill_between(Xtest.flatten(), mean_blr.flatten() + conf_bound3, \n mean_blr.flatten() - conf_bound3, alpha = 0.1, color=\"k\")\nplt.legend([\"Training data\", \"MLE\", \"MAP\", \"BLR\"])\nplt.xlabel('$x$');\nplt.ylabel('$y$');\n```\n", "meta": {"hexsha": "83ba4b479b55bdb4a6cd075731e099d47ffbba5c", "size": 344362, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Data Science and Machine Learning/Machine-Learning-In-Python-THOROUGH/MACHINE_LEARNING/MATHS_FOR_ML/LINEAR_REGRESSION_SOLUTION.ipynb", "max_stars_repo_name": "okara83/Becoming-a-Data-Scientist", "max_stars_repo_head_hexsha": "f09a15f7f239b96b77a2f080c403b2f3e95c9650", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Data Science and Machine Learning/Machine-Learning-In-Python-THOROUGH/MACHINE_LEARNING/MATHS_FOR_ML/LINEAR_REGRESSION_SOLUTION.ipynb", "max_issues_repo_name": "okara83/Becoming-a-Data-Scientist", "max_issues_repo_head_hexsha": "f09a15f7f239b96b77a2f080c403b2f3e95c9650", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Data Science and Machine Learning/Machine-Learning-In-Python-THOROUGH/MACHINE_LEARNING/MATHS_FOR_ML/LINEAR_REGRESSION_SOLUTION.ipynb", "max_forks_repo_name": "okara83/Becoming-a-Data-Scientist", "max_forks_repo_head_hexsha": "f09a15f7f239b96b77a2f080c403b2f3e95c9650", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2022-02-09T15:41:33.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-11T07:47:40.000Z", "avg_line_length": 218.7814485388, "max_line_length": 36212, "alphanum_fraction": 0.9105563332, "converted": true, "num_tokens": 7888, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9353465170505204, "lm_q2_score": 0.8976952880018481, "lm_q1q2_score": 0.8396561610051924}} {"text": "# SymPy\n\n\n```python\nfrom datascience import *\nimport numpy as np\nfrom sympy import *\nimport sympy\ninit_printing()\nimport matplotlib.pyplot as plt\n%matplotlib inline\nsolve = lambda x,y: sympy.solve(x-y)[0] if len(sympy.solve(x-y))==1 else \"Not Single Solution\"\n```\n\nPython has many tools, such as the [SymPy library](https://docs.sympy.org/latest/tutorial/index.html) that we can use for expressing and evaluating formulas and functions in economics. \n\nSince SymPy helps with symbolic math, we start out by create a symbol using `Symbol`, which we assign to a variable name. Then, we can use the symbols to construct symbolic expressions.\n\n\n```python\nx = Symbol('x')\nx\n```\n\nNow let's try using SymPy to create a symbolic expression for some hypothetical supply and demand curves. \n\nTo define an upward sloping supply curve with price expressed as a function of quantity, we start off defining the symbol $Q$, which represents quantity. Then, we set up a negative relationship expressing $P_S$, which denotes the price of the supplied good (how much the producer earns), in terms of $Q$. \n\n\n```python\nQ = Symbol('Q')\nP_S = 2 * Q - 3\nP_S\n```\n\nSimilarly, we will also use $Q$ to express a relationship with $P_D$, the price of the good purchased (how much the consumer pays), creating a downward sloping demand curve. \n\nNote that both functions are of the variable $Q$; this will be important in allowing us to solve for the equilibrium.\n\n\n```python\nP_D = 2 - Q\nP_D\n```\n\nTo solve for the equilibrium given the supply and demand curve, we know that the price paid by consumers must equal to the price earned by suppliers. Thus, $P_D = P_S$, allowing us to set the two equations equal to each other and solve for the equilibrium quantity and thus equilibrium price. To solve this by hand, we would set up the following equation to solve for $Q$:\n\n$$\nP_D = P_S\\\\\n2-Q = 2Q-3\n$$\n\nUsing SymPy, we call `solve`, which takes in 2 arguments that represent the 2 sides of an equation and solves for the underlying variable such that the equation holds. Here, we pass in $P_D$ and $P_S$, both represented in terms of $Q$, to solve for the value of $Q$ such that $P_D=P_S$. It's good to know that `solve` is a custom function built for this class, and will be provided in the notebooks for you.\n\n\n```python\nQ_star = solve(P_D, P_S)\nQ_star\n```\n\nThe value of $Q$ that equates $P_D$ and $P_S$ is known as the market equilibrium quantity, and we denote it as $Q^*$. Here, $Q^* = \\frac{5}{3}$. \n\nWith $Q^*$ determined, we can substitute this value as $Q$ to thus calculate $P_D$ or $P_S$. We substitute values using the `subs` function, which follows the syntax `expression.subs(symbol_we_want_to_substitute, value_to_substitute_with)`.\n\n\n```python\nP_D.subs(Q, Q_star)\n```\n\nWe can also substitute $Q^*$ into $P_S$, and should get the same results.\n\n\n```python\nP_S.subs(Q, Q_star)\n```\n\nThus, the equilibrium price and quantity are \\$0.33 and $\\frac{5}{3}$, respectively. \n\nLet's try another example. Suppose our demand function is $\\text{Price}_{D}=-2 \\cdot \\text{Quantity} + 10$. Using SymPy, this would be\n\n\n```python\ndemand = -2 * Q + 10\ndemand\n```\n\nIn addition, let the supply function be $\\text{Price}_{S}=3 \\cdot \\text{Quantity} + 1$. 
Using SymPy, this would be\n\n\n\n```python\nsupply = 3 * Q + 1\nsupply\n```\n\nWe will now try to find the market equilibrium. The market price equilibrium $P^*$ is the price at which the quantity supplied and quantity demanded of a good or service are equal to each other. Similarly, the market quantity equilibrium $Q^*$ is the quantity at which the price paid by consumers is equal to the price received by producers. \n\nCombined, the price equilibrium and quantity equilibrium form a point on the graph with quantity and price as its axes, called the equilibrium point. This point is the point at which the demand and supply curves intersect.\n\nFirst, we solve for the quantity equilibrium.\n\n\n```python\nQ_star = solve(demand, supply)\nQ_star\n```\n\nNext, we plug the quantity equilibrium into our demand or supply expression to get the price equilibrium:\n\n\n```python\ndemand.subs(Q, 9/5)\n```\n\nGraphically, we can plot the supply and demand curves with quantity on the $x$ axis and price on the $y$ axis. The point at which they intersect is the equilibrium point.\n\n\n```python\ndef plot_equation(equation, price_start, price_end, label=None):\n plot_prices = [price_start, price_end]\n plot_quantities = [equation.subs(list(equation.free_symbols)[0], c) for c in plot_prices]\n plt.plot(plot_prices, plot_quantities, label=label)\n \ndef plot_intercept(eq1, eq2):\n ex = sympy.solve(eq1-eq2)[0]\n why = eq1.subs(list(eq1.free_symbols)[0], ex)\n plt.scatter([ex], [why])\n return (ex, why)\n \nplot_equation(demand, 0, 5)\nplot_equation(supply, 0, 5)\nplt.ylim(0,20)\nplt.xlabel(\"Quantity\")\nplt.ylabel(\"Price\")\nplot_intercept(supply, demand);\n```\n", "meta": {"hexsha": "d540b4840bd841023b4f59e00e605ce67f8f1e20", "size": 33729, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "docs/_sources/content/02-supply/04-sympy.ipynb", "max_stars_repo_name": "d8a-88/econ-models-textbook", "max_stars_repo_head_hexsha": "b0b34afaf1f182fe6cdb8968c3045dc0692452d1", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-12-06T17:30:11.000Z", "max_stars_repo_stars_event_max_datetime": "2019-12-06T17:30:11.000Z", "max_issues_repo_path": "content/02-supply/04-sympy.ipynb", "max_issues_repo_name": "d8a-88/econ-models-textbook", "max_issues_repo_head_hexsha": "b0b34afaf1f182fe6cdb8968c3045dc0692452d1", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "content/02-supply/04-sympy.ipynb", "max_forks_repo_name": "d8a-88/econ-models-textbook", "max_forks_repo_head_hexsha": "b0b34afaf1f182fe6cdb8968c3045dc0692452d1", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-12-06T17:30:12.000Z", "max_forks_repo_forks_event_max_datetime": "2019-12-06T17:30:12.000Z", "avg_line_length": 70.7106918239, "max_line_length": 13312, "alphanum_fraction": 0.818257286, "converted": true, "num_tokens": 1263, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9683812336438724, "lm_q2_score": 0.8670357580842941, "lm_q1q2_score": 0.8396211570270189}} {"text": "# 7. Bandit Algorithms\n\n**Recommender systems** are a subclass of information filtering system that seek to predict the 'rating' or 'preference' that a user would give to an item.\n\n**k-armed bandits** are one way to solve this recommendation problem. 
They can also be used in other similar contexts, such as clinical trials and (financial) portfolio optimization.\n\n## General concepts\n\nThe **(instantaneous) regret** is the difference between the $\\mu_i$ of the arm we pick, versus the $\\mu^{*}$ of the best arm.\n\nThe **total regret** is the sum of the instantaneous regrets over time. $i_1, \\dots, i_T$ are the decisions we made over time.\n\n\\begin{equation}\nR_T = \\sum_{t = 1}^T r_t = \\sum_{t = 1}^T \\mu^{*} - \\mu_{i_t}\n\\end{equation}\n\nIdeally, we pick the perfect arm from the beginning, and we get a total regret of 0.\n\nIn practice, if the regret goes to zero over time, we're happy enough, and the algorithm with this propery is called **no regret**, even though there might be a little bit of regret.\n\n## Stochastic k-armed bandits\n\nLike in online supervised learning, the algorithm runs over time, and at every time point we need to make a decision.\n\nk arms to pull, each arm can win ($y = $ reward $ = 1$) with probability $\\mu_i$ (unknown). Note that the reward is always constant, it's just the probability which varies.\n\nHowever, unlike online learning, we only receive information about the action we choose, i.e. we only have one single \"try\".\n\nIn e.g. OCP, at any time point we get a new $x$, and we can try a plethora of different ways to update our model ($w$) (hypothetically) and see how good the new potential model is. With bandits, you don't know how good your choice is until you commit to it and do it (pull the arm), by which time, you can no longer change it.\n\n## $\\epsilon$-greedy algorithm\n\nAt every time, pick random arm with probability $\\epsilon$, and pick the current best arm otherwise.\n\nThis works surprisingly well (it's no-regret), but could be better.\n\n## Hoeffding's inequality\n> [...] provides an upper bound on the probability that the sum of random variables deviates from its expected value. \n>\n> -- Wikipedia\n\nLet $X_1,\\dots,X_m$ be i.i.d. random variables taking values in $[0, 1]$.\n\nThe real mean $\\mu = \\mathbb{E}[X]$ is unknown.\n\nWe have an empirical estimate based on our $m$ trials:\n\n\\begin{equation}\n \\hat{\\mu}_m = \\frac{1}{m} \\sum_{i = 1}^{m} X_i\n\\end{equation}\n\nThen we have:\n\n\\begin{equation}\n P\\left(\\left|\\mu - \\hat{\\mu}_m\\right| \\ge b\\right) \\le 2 \\exp\\left(-2b^2m \\right) = \\delta_m\n\\end{equation}\n\nThat is, the probability of our operation being more than $b$ off the real value is smaller than a computable threshold.\n\nWe just want a lower bound $b$ for any given fixed probability bound. So we fix the probability bound to, say, $\\delta_m$, and then compute the corresponding $b$, as a function of $\\delta_m$ and $m$.\n\nFor a fixed upper probability bound, we fix $\\delta_m$ and get $b = \\sqrt{\\frac{1}{2m} \\log{\\frac{2}{\\delta_m}}}$.\n\nAll we need now is to decide what $\\delta_m$ should be.\n\nNow we also want to set an upper bound on the probability not just for a single $m$, but for all $m$. Want upper bound for $E_m = P(|\\mu - \\hat{\\mu}_m| \\ge b)$, i.e. 
a lower bound for $P(|\\mu - \\hat{\\mu}_t| \\ge b, \\> \\forall t)$.\n\nSo we get:\n\n\\begin{equation}\n\\begin{aligned}\n P(|\\mu - \\hat{\\mu}_t| \\le b, \\> \\forall t) & = 1 - P(E_1 \\cup E_2 \\cup \\dots) \\\\\n & \\ge 1 - \\sum_{t=1}^{\\infty} P(E_t) \\\\\n & \\ge 1 - \\sum_{t=1}^{\\infty} \\delta_t \\\\\n & \\ge 1 - \\delta \\> \\text{with} \\> \\delta < \\sum_{t=1}^{\\infty}\\delta_t\n\\end{aligned}\n\\end{equation}\n\n2nd row smaller since the sum we're subtracting is bigger (longer than the prev.).\n3rd row smaller because all deltas are larger than their corresponding Es, as defined by Hoeffding's inequality itself.\n\nWe therefore want a sum of $\\delta_t$ which is bounded. Setting $\\delta_t = \\frac{c}{t^2}$ works well, since the correspoinding sum converges, so the upper bound $\\delta$ exists and is finite.\n\nWe now have a good heuristic: at any time step $t$, our upper bound should be $\\delta_t = \\frac{c}{t^2}$.\n\nRecall that we want to express $b$ as a function of $\\delta_t$, since $b$ is *the value which actually defines our upper confidence bound*.\n\n(Note that this probability shrinks quadratically over time, so it keeps getting tighter and tighter for all arms, as we keep playing.)\n\n## UCB1\n\nAll we need to do now is shove our $\\delta_t$ in the formula for $b$, and we get (setting $c := 2$):\n\n\\begin{aligned}\n\\operatorname{UCB}(i) \n& = \\hat{\\mu}_i + \\sqrt{\\frac{1}{n_i} \\ln (2 \\frac{t^2}{2}) } \\\\\n& = \\hat{\\mu}_i + \\sqrt{\\frac{\\ln{t^2}}{n_i}} \\\\\n& = \\hat{\\mu}_i + \\sqrt{\\frac{2 \\ln{t}}{n_i}}\n\\end{aligned}\n\nWe can plug this formula right into a program! See `bonus/tutorial-bandits.ipynb` for an implementation.\n\nThis is an algorithm which is much smarter than $\\epsilon$-greedy about what it explores.\n\nTODO: there seems to be a \"2\" missing somewhere. Low priority.\n\nIt can be shown that UCB is a **no-regret** algorithm. ($R_T / T \\rightarrow 0 \\> \\text{as} \\> T \\rightarrow \\infty$)\n\n## Applications of bandit algorithms\nNon-DM:\n * Clinical trials (give the best possible cure to a patient, while also working on improving the accuracy of our diagnostics)\n * Matching markets (TODO: more info)\n * Asset pricing\n * Adaptive routing\n * Go\n \nDM:\n * Advertising\n * Optimizing relevance (e.g. news article recommendations)\n * Scheduling web crawlers\n * Optimizing user interfaces (e.g. smart A/B testing)\n\n## Contextual bandits\nAlso incorporate some info about every arm and every user. Useful when e.g. recommending articles, since it takes users topic preferences into account.\n\nWe still use **cummulative (contextual) regret** as a metric, $R_t = \\sum_{t=1}^{T}r_t$.\n\nCan achieve *sublinear regret* by learning **optimal mapping** from contexts (e.g. (user, article features) tuples) to actions.\n\n### Outline\n* Observe context: $z_t \\in \\mathcal{Z}$, and, e.g. 
$\\mathcal{Z} \\subseteq \\mathbb{R}^{d}$\n * Pick arm from set of possible arms, $x_t \\in \\mathcal{A}_t$\n * Observe reward $y_t$, which depends on the picked arm and the context, plus some possible noise: $y_t = f(x_t, z_t) + \\epsilon_t$\n * Incur regret: $r_t = \\max_{x \\in \\mathcal{A}_t}(f(x, z_t)) - f(x_t, z_t)$ (like before, difference between the best arm, given the context, and the arm we actually picked)\n\n### Linear recommendations\n\nWant to minimize regularized square loss for\n\n\\begin{equation}\n \\hat{w}_i = \\arg \\min_w \\sum_{t=1}^{m} (y_t - w^T z_t)^2 + \\|w\\|_2^2\n\\end{equation}\n\nNote: This model can take features from the user, the article, or both into account. And every article has its own $\\hat{w}$. This is linear regression and it's easy to solve.\n\nKey idea: Want to merge UCB and regression by having an upper confidence bound (UCB) for our $w$s. Ideally, just as in UCB1, this bound will shrink towards $w$ as time goes on.\n\nThis is LinUCB. [CHEATSHEET]\n\n\\begin{aligned}\n \\left| \\> \\text{estimated reward} - \\text{true reward} \\> \\right| & \\le \\text{some bound} \\quad \\text{(with some probability)} \\\\\n \\left| \\hat{w}^T_i z_t - w^T_i z_t \\right| & \n \\le \\alpha\\sqrt{z^T_t(D^T_i D_i + I)^{-1}z_t}, \\> p \\ge 1 - \\delta \\\\\n \\left| \\hat{w}^T_i z_t - w^T_i z_t \\right| & \n \\le \\alpha\\sqrt{z^T_t M_i z_t}, \\> p \\ge 1 - \\delta\n\\end{aligned}\n\nThis holds as long as $\\alpha = 1 + \\sqrt{\\frac{1}{2} \\ln \\left( \\frac{2}{\\delta} \\right)}$.\n\nWe set our desired probability bound, compute $\\alpha$ and we have an algorithm!\n\nSame as UCB1, but compute an arm's UCB as:\n\n\\begin{aligned}\n M_x \\in \\mathbb{R}^{d \\times d}, b_x \\in \\mathbb{R}^{d} & \\quad \\text{(the arm's model)} \\\\\n \\hat{w} = M_x^{-1} b & \\quad \\text{(the model used for the primary payoff prediction)} \\\\\n \\operatorname{UCB}_x = \\hat{w}_x^T z_t + \\alpha \\sqrt{z_t^T M_t^{-1} z_t} & \\quad \\text{(arm UCB given z)}\n\\end{aligned}\n\nNot storing $M_x$ and $b_x$ together because we need $M_x^{-1}$ to compute the upper confidence bound of our predicted payoff.\n\nLinUCB is also no-regret (i.e. regret sub-linear in T).\n\n### Learning from $y_t$\n\nIf the payoff $y_t > 0$ (see *rejection sampling*):\n\n* $M_x \\leftarrow M_x + z_tz_t^T$ (outer product)\n* $b_x \\leftarrow b_x + y_t z_t$\n\n### Problem with linear recommendations\n\nNo shared effect modeling. We optimize every arm separately based on what users like it, but there's no way to directly exploit the fact that similar users may like similar articles.\n\nUse hybrid models!\n\n## Hybrid LinUCB\n\n\\begin{equation}\n y_t = w_i^T z_t + \\beta^T \\phi(x_i, z_t) + \\epsilon_t\n\\end{equation}\n\n * $\\phi(x, z)$ simply flattens (like `numpy.ravel`) the outer product $x_i z_t^T$.\n * $w_i$ is an arm's model\n * $\\beta$ captures user-article similarity (i.e. user interests).\n \nCan also solve this using regularized regression. We also need to compute confidence intervals for this bad boy. 
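\n\nFor intuition, here is a minimal sketch of the simpler *disjoint* (per-arm) LinUCB loop from the previous section; this is not the hybrid variant and not any reference implementation, and the function names and dict layout are made up for illustration:\n\n```python\nimport numpy as np\n\ndef linucb_choose(arms, z, alpha):\n    # Pick the arm with the highest upper confidence bound for context z.\n    ucb_values = []\n    for arm in arms:\n        M_inv = np.linalg.inv(arm[\"M\"])  # (D^T D + I)^{-1}\n        w_hat = M_inv @ arm[\"b\"]  # ridge-regression estimate of the arm's w\n        ucb_values.append(w_hat @ z + alpha * np.sqrt(z @ M_inv @ z))\n    return int(np.argmax(ucb_values))\n\ndef linucb_update(arm, z, y):\n    # Rank-one update of the chosen arm's sufficient statistics.\n    arm[\"M\"] += np.outer(z, z)\n    arm[\"b\"] += y * z\n\n# Usage sketch: d-dimensional contexts, every arm starts with M = I, b = 0.\nd, n_arms, alpha = 6, 10, 2.0\narms = [{\"M\": np.eye(d), \"b\": np.zeros(d)} for _ in range(n_arms)]\n```\n\nThe hybrid variant additionally maintains the shared $\\beta$ statistics on top of these per-arm ones.\n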
The algorithm is fluffy, but it works.\n\n## Practical implementation of contextual bandits\n\nSample case:\n * 1193 user features, 81 article features\n * We need to perform dimensionality reduction!\n \n### Extracting feature vectors\n * Data consists of triplets of form (article_features, user_features, reward): $D = \\left\\{ (\\phi_{a,1}, \\phi_{u,1}, y_1), \\dots, (\\phi_{a,n}, \\phi_{u,n}, y_n) \\right\\}$\n * Learn the model parameters $W$ with *logistic regression* (remember that our reward $y_i$ is either 1 or 0, e.g. click or no click). This (super) model now predicts rewards based on both article and user features. It incorporates every arm's model.\n * Want per-arm models like before\n * Set: $\\psi_{a,i} = \\phi^T_{a,i} W$ (vector); in effect, this splits $W$ back to per-arm models;\n * $\\psi_{a,i}$ is still hugely dimensional\n * k-means cluster $\\psi_{a, i}$ our arm models (i.e. over i datapoints, with $i = 1, \\dots, n$)\n * Obtain $j < n$ clusters; the final article features for article $i$ are $x_{i, j} = \\frac{1}{Z} \\exp{\\left( -\\| \\psi_{a, i} - \\mu_j \\|_2^2 \\right)}, \\> x_{i, j} \\in \\mathbb{R}^{k}$\n * i.e. compute some clusters and model articles relative to them, i.e. every article's feature is its distance to that cluster (and exp + constant, but the principle stays the same). This way we can express our articles and users using much fewer features.\n\n## Evaluating bandit algorithms\n\nGather data with pure exploration (random).\n\nLearn from log using **rejection sampling**. Go through log and try to predict the click at every step. If we're wrong, reject the sample (ignore log line), if we're right, feed back that reward into the algorithm.\n\nStop when T events kept.\n\nThis is what we did in the last project!\n\nThis approach is **unbiased**, and the expected number of needed events are $kT$, with $k$ being the (post-dim-red) number of article features.\n\nIn general, UCB algorithms tend to perform *much* better than greedy algorithms when there isn't a lot of training data. And hybrid LinUCB seems to be the best. [Li et al WWW '10]\n\n## Sharing observations across users\n * Use stereotypes\n * Describe stereotypes in lower-dim space (estimate using PCA/SVD, so dim-reduction).\n * First explore in stereotype subspace, then in the full space (whose exploration is significantly more expensive). This is **coarse to fine bandits**.\n\n## Sets of k recommendations\n * In many cases (ads, news) want to recommend more than one thing at a time.\n * Want to choose set that's relevant to as many users as possible.\n * $\\implies$ optimize **relevance** and **diversity**\n * Want to cover as many users as possible with a (limited) set of e.g. ads.\n * Every article $i$ is relevant to a set of users $S_i$. Suppose this is known.\n\n### This is a maximum (set) coverage problem\n * Define the coverage of a set $A$ of articles:\n \n\\begin{equation}\nF(A) = \\left| \\bigcup_{i \\in A}S_i \\right|\n\\end{equation}\n\n * And we want to maximize this coverage: $\\max_{|A| \\le k} F(A)$\n - nr. 
sets A grows exponentially in $k$\n - finding the optimal A is NP-hard.\n - Let's try a greedy solution!\n - Start with $A_0$ empty, and always add the article which increases the coverage of $A$ the most.\n - Turns out, this solution is \"good enough\" (~63% of optimal)\n - $F(A_{\\text{greedy}}) \\ge \\left( 1 - \\frac{1}{e} \\right) F(A_{\\text{opt}})$\n - $F(\\> \\text{greedy set of size} \\> l \\>) \\ge \\left(1 - \\exp\\left( -\\frac{l}{k}\\right)\\right) \\max_{|A| \\le k}F(A)$ TODO: hard to tell from prof.'s writing; double check!\n - this works because F is non-negative monotone and **submodular**\n\n### Submodularity\n[EXAM] **Submodularity** is a property of *set functions*.\n\n$F : 2^V \\rightarrow \\mathbb{R} \\> \\text{submodular} \\iff \\forall A \\subseteq B, s \\not\\in B: F(A \\cup \\{s\\}) - F(A) \\ge F(B \\cup \\{s\\}) - F(B)$\n\nAdding an element earlier cannot be worse than adding it later.\n\nMarginal benefits can never increase, i.e. our delta improvement at every step only gets smaller and smaller.\n\n**Closedness**: A weighted sum of submodular functions is also submodular (positive weights). (Closed under nonnegative linear combinations.)\n * Allows multi-objective optimization with weights, as long as each objective is itself submodular: $F(A) = \\sum_i \\lambda_i F_i(A)$ is submodular, if $F_1, \\dots F_n$ are submodular\n\n### \"Lazy\" greedy algorithm\n * First iteration as usual.\n * Keep ordered list of marginal benefits $\\Delta_i$ for every option from previous iteration (marginal benefit = increase in coverage = # new elements we would get by adding the ith set).\n * Re-evaluate $\\Delta_i$ only for top element.\n * If $\\Delta_i$ stays on top, use it; otherwise, re-sort.\n\nThis works because of submodularity. If $\\Delta_i$ is on top, there's no way some other $\\Delta_{i'}$ will \"grow\" in a subsequent step and overtake it. The only thing that can happen is for $\\Delta_i$ itself to \"drop down\".\n \nIn practice, this means we can solve greedy problems with submodular objective functions **really fast**. Examples include sensor placement and blog recommendation selection.\n\nGeneral idea for recommending sets of k articles: Select article from pool. Iterations represent adding additional articles in order to maximize the user interest coverage.\n\n * Bandit submodular optimization: learn from observing **marginal gains**\n * $F_t(A_t)$ is the feedback at time $t$, given that the set of articles $A_t$ was shown.\n\nSo how do we measure user coverage for articles?\n\n## Submodular bandits\n\n### Simple abstract model\n * Given set of articles $V$, $\\lvert V \\rvert = n$.\n * Every round $t = 1 : T$ do:\n - $\\exists$ an unknown subset of $V$ in which the user is interested: $W_t \\subseteq V$\n - recommend a set of articles $A_t \\subseteq V$ (how do we pick this? This is part of the challenge.)\n - if we recommended anything in which the user is interested, they click and we get a reward:\n \n\\begin{equation}\nF_t(A_t) = \\left\\{ \n \\begin{array}{ll}\n 1 & \\> \\text{if } A_t \\cap W_t \\not= \\varnothing \\\\\n 0 & \\> \\text{otherwise}\n \\end{array}\n \\right.\n\\end{equation}\n\n### Algorithm\n * Initialize $k$ multi-armed bandit algorithms for $k$ out of $n$ item selections.\n * In every round $t$, every bandit picks an article, and gets as feedback the difference between the reward for the bandits up to and including it, and the reward from all the bandits up to but not including it (i.e. 
$\\Delta_i$).\n \nCan show that submodular bandits using semi-bandit feedback have sublinear regret.\n\nSF = Submodular Function\n\n### LSBGreedy\n * Bandit algorithm for context-aware article set recommendations.\n * No-regret\n\n\n```python\n\n```\n", "meta": {"hexsha": "98562941f2d56797f9a964b14c97eb418b72922f", "size": 21408, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "07-bandits.ipynb", "max_stars_repo_name": "AndreiBarsan/dm-notes", "max_stars_repo_head_hexsha": "24e5469c4ba9d6be0c8a5da18b8b99968436e69c", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2016-01-22T14:36:41.000Z", "max_stars_repo_stars_event_max_datetime": "2017-10-17T07:17:07.000Z", "max_issues_repo_path": "07-bandits.ipynb", "max_issues_repo_name": "AndreiBarsan/dm-notes", "max_issues_repo_head_hexsha": "24e5469c4ba9d6be0c8a5da18b8b99968436e69c", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "07-bandits.ipynb", "max_forks_repo_name": "AndreiBarsan/dm-notes", "max_forks_repo_head_hexsha": "24e5469c4ba9d6be0c8a5da18b8b99968436e69c", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.3281853282, "max_line_length": 332, "alphanum_fraction": 0.5837537369, "converted": true, "num_tokens": 4491, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9073122113355091, "lm_q2_score": 0.9252299503914833, "lm_q1q2_score": 0.8394724322835401}} {"text": "# Einstein Tensor calculations using Symbolic module\n\n\n```python\nimport sympy\nfrom sympy import symbols, sin, cos, sinh\nfrom einsteinpy.symbolic import EinsteinTensor, MetricTensor\n\nsympy.init_printing()\n```\n\n### Defining the Anti-de Sitter spacetime Metric\n\n\n```python\nsyms = sympy.symbols(\"t chi theta phi\")\nt, ch, th, ph = syms\nm = sympy.diag(-1, cos(t) ** 2, cos(t) ** 2 * sinh(ch) ** 2, cos(t) ** 2 * sinh(ch) ** 2 * sin(th) ** 2).tolist()\nmetric = MetricTensor(m, syms)\n```\n\n### Calculating the Einstein Tensor (with both indices covariant)\n\n\n```python\neinst = EinsteinTensor.from_metric(metric)\neinst.tensor()\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}-3.0 & 0 & 0 & 0\\\\0 & 3.0 \\cos^{2}{\\left(t \\right)} & 0 & 0\\\\0 & 0 & 3.0 \\cos^{2}{\\left(t \\right)} \\sinh^{2}{\\left(\\chi \\right)} & 0\\\\0 & 0 & 0 & 3.0 \\sin^{2}{\\left(\\theta \\right)} \\cos^{2}{\\left(t \\right)} \\sinh^{2}{\\left(\\chi \\right)}\\end{matrix}\\right]$\n\n\n", "meta": {"hexsha": "166f5299543d27b4679f0f4bab6388d67697ec2e", "size": 3050, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "docs/source/examples/Symbolic Module - Einstein Tensor.ipynb", "max_stars_repo_name": "QMrpy/einsteinpy", "max_stars_repo_head_hexsha": "f95ff6840e0648dd96ec11006fad958f67a18520", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-04-24T22:38:38.000Z", "max_stars_repo_stars_event_max_datetime": "2021-10-17T18:27:13.000Z", "max_issues_repo_path": "docs/source/examples/Symbolic Module - Einstein Tensor.ipynb", "max_issues_repo_name": "prakashaditya369/einsteinpy", "max_issues_repo_head_hexsha": "acfba21eecb5a5805042592d8e9ac5c96979f8af", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, 
"max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/source/examples/Symbolic Module - Einstein Tensor.ipynb", "max_forks_repo_name": "prakashaditya369/einsteinpy", "max_forks_repo_head_hexsha": "acfba21eecb5a5805042592d8e9ac5c96979f8af", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 29.0476190476, "max_line_length": 332, "alphanum_fraction": 0.3895081967, "converted": true, "num_tokens": 323, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9603611643025387, "lm_q2_score": 0.8740772236840656, "lm_q1q2_score": 0.8394298202275597}} {"text": "# Affine Transformations\n\nHere we use sympy to derive the calculations for various affine transformation\noperations.\n\n\n```python\nfrom sympy import *\n```\n\n\n```python\ndef mat3x2(name=None):\n a, b, c, d, e, f = symbols(f'{name}_a {name}_b {name}_c {name}_d {name}_e {name}_f' if name else 'a b c d e f')\n return Matrix([\n [a, b, c],\n [d, e, f],\n [0, 0, 1]\n ])\nmat3x2()\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}a & b & c\\\\d & e & f\\\\0 & 0 & 1\\end{matrix}\\right]$\n\n\n\n\n```python\n# What is the formula for chaining affine transformations?\nmat3x2('A') * mat3x2('B')\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}A_{a} B_{a} + A_{b} B_{d} & A_{a} B_{b} + A_{b} B_{e} & A_{a} B_{c} + A_{b} B_{f} + A_{c}\\\\A_{d} B_{a} + A_{e} B_{d} & A_{d} B_{b} + A_{e} B_{e} & A_{d} B_{c} + A_{e} B_{f} + A_{f}\\\\0 & 0 & 1\\end{matrix}\\right]$\n\n\n\n\n```python\n# How do we invert an affine transformation?\nmat3x2() ** -1\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}\\frac{e}{a e - b d} & - \\frac{b}{a e - b d} & \\frac{b f - c e}{a e - b d}\\\\- \\frac{d}{a e - b d} & \\frac{a}{a e - b d} & \\frac{- a f + c d}{a e - b d}\\\\0 & 0 & 1\\end{matrix}\\right]$\n\n\n\n\n```python\n# How do we construct an affine transform from scale, rotation, and translation?\n\ndef rot(name='theta'):\n theta = symbols(name)\n s = sin(theta)\n c = cos(theta)\n return Matrix([\n [c, -s, 0],\n [s, c, 0],\n [0, 0, 1],\n ])\n\n\ndef scale(name='S'):\n x, y = symbols(f'{name}_x {name}_y')\n return Matrix([\n [x, 0, 0],\n [0, y, 0],\n [0, 0, 1]\n ])\n\n\ndef xlate(name='T'):\n x, y = symbols(f'{name}_x {name}_y')\n return Matrix([\n [1, 0, x],\n [0, 1, y],\n [0, 0, 1]\n ])\n\nxlate() * rot() * scale()\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}S_{x} \\cos{\\left(\\theta \\right)} & - S_{y} \\sin{\\left(\\theta \\right)} & T_{x}\\\\S_{x} \\sin{\\left(\\theta \\right)} & S_{y} \\cos{\\left(\\theta \\right)} & T_{y}\\\\0 & 0 & 1\\end{matrix}\\right]$\n\n\n\n## Transforming Coordinates\n\n\n```python\ncoord = Matrix([\n symbols(f'x_{n} y_{n}') + (1,)\n for n in range(10)\n])\ncoord\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}x_{0} & y_{0} & 1\\\\x_{1} & y_{1} & 1\\\\x_{2} & y_{2} & 1\\\\x_{3} & y_{3} & 1\\\\x_{4} & y_{4} & 1\\\\x_{5} & y_{5} & 1\\\\x_{6} & y_{6} & 1\\\\x_{7} & y_{7} & 1\\\\x_{8} & y_{8} & 1\\\\x_{9} & y_{9} & 1\\end{matrix}\\right]$\n\n\n\n\n```python\nxformed = (coord * mat3x2().T)\nxformed.col_del(2)\nxformed\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}a x_{0} + b y_{0} + c & d x_{0} + e y_{0} + f\\\\a x_{1} + b y_{1} + c & d x_{1} + e y_{1} + f\\\\a x_{2} + b y_{2} + c & d x_{2} + e y_{2} + f\\\\a x_{3} + b y_{3} + c & d x_{3} + e y_{3} + f\\\\a x_{4} + b y_{4} + c & d x_{4} + e y_{4} 
+ f\\\\a x_{5} + b y_{5} + c & d x_{5} + e y_{5} + f\\\\a x_{6} + b y_{6} + c & d x_{6} + e y_{6} + f\\\\a x_{7} + b y_{7} + c & d x_{7} + e y_{7} + f\\\\a x_{8} + b y_{8} + c & d x_{8} + e y_{8} + f\\\\a x_{9} + b y_{9} + c & d x_{9} + e y_{9} + f\\end{matrix}\\right]$\n\n\n", "meta": {"hexsha": "105cfc263ed7af15eb76e16a16addd841820399c", "size": 7526, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/Transforms.ipynb", "max_stars_repo_name": "lordmauve/wasabigeom", "max_stars_repo_head_hexsha": "1c6693147cf58c8edd95fa0fd7716a1c22f9171f", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2020-10-29T16:16:17.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-24T06:26:14.000Z", "max_issues_repo_path": "notebooks/Transforms.ipynb", "max_issues_repo_name": "lordmauve/wasabi-geom", "max_issues_repo_head_hexsha": "8032bc4fbabbe145d881e3a284a309422fc87c7b", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2020-10-03T07:27:15.000Z", "max_issues_repo_issues_event_max_datetime": "2021-09-24T17:07:11.000Z", "max_forks_repo_path": "notebooks/Transforms.ipynb", "max_forks_repo_name": "lordmauve/wasabigeom", "max_forks_repo_head_hexsha": "1c6693147cf58c8edd95fa0fd7716a1c22f9171f", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 28.1872659176, "max_line_length": 555, "alphanum_fraction": 0.4061918682, "converted": true, "num_tokens": 1276, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9603611597645271, "lm_q2_score": 0.8740772269642949, "lm_q1q2_score": 0.839429819411192}} {"text": "```python\nfrom sympy import *\ninit_printing(use_unicode=True, use_latex=True)\n```\n\n# Unitary rotation operators around the Bloch Sphere\n\n## Introduction\n\nIn this document we are going to automatically derive the unitary operators representing the rotations arount the Bloch Sphere. This will be done under assumption Hamiltonian operator is time-independent. We will do it for the three Pauli operators $\\sigma^x$, $\\sigma^y$ and $\\sigma^z$.\n\n## Deriving the unitary operator\n\nHamiltonian operator $H$ is a representation of acting on a quantum state $\\left| \\psi(t) \\right>$ in a Schr\u00f6dinger picture, where operators are constant and quantum states depend on time. 
On the other hand, in Heisenberg picture, a unitary operator $U(t)$ is an semantically equivalent representation in which the the operator is time-dependent and quantum state $\\left| \\psi \\right>$ is static.\n\nLet us define a function which symbolically derives the $U(t)$ from $H$ using\n\n\\begin{equation}\nU(t) = e^{-\\frac{i t}{2}H}\n\\end{equation}\n\nunder assumption that $\\frac{d}{dt}H=0$ or simply, $H$ does not depend on time.\n\n\n```python\ndef timeIndependentHtoU(H, t) :\n rows, columns = H.shape\n U = zeros(rows, columns)\n eigenvects = H.eigenvects()\n for eigenvalue, multiplicity, eigenvectors in eigenvects :\n l = eigenvalue\n m = multiplicity\n for eigenvector in eigenvectors :\n normalized_eigenvector = eigenvector.normalized()\n entry = exp(-I*t*l*m/2)*normalized_eigenvector*conjugate(normalized_eigenvector.T)\n U += entry\n return U\n```\n\n## Defining the Pauli operators\n\nOperators matrices are defined as\n\n\\begin{align}\n\\sigma^x = \\begin{bmatrix}0 & 1\\\\1 & 0\\end{bmatrix},\n\\sigma^y = \\begin{bmatrix}0 & -i\\\\i & 0\\end{bmatrix},\n\\sigma^z = \\begin{bmatrix}1 & 0\\\\0 & -1\\end{bmatrix}\n\\end{align}\n\nLet us define them as `SymPy` matrices and derive the rotation operators $R_x(t)$, $R_y(t)$ and $R_z(t)$ correponsing to $\\sigma^x$, $\\sigma^y$ and $\\sigma^z$ (respectively).\n\n\n```python\nt = Symbol('t')\n\ns_x = Matrix([[0, 1], [1, 0]])\ns_y = Matrix([[0, -I], [I, 0]])\ns_z = Matrix([[1, 0], [0, -1]])\n\nr_x = timeIndependentHtoU(s_x, t)\nr_y = timeIndependentHtoU(s_y, t)\nr_z = timeIndependentHtoU(s_z, t)\n\ndisplay(simplify(r_x), simplify(r_y), simplify(r_z))\n```\n\n## Discuss results\n\nExpected solution is\n\n\\begin{align}\nR_x(t) = \\begin{bmatrix}cos(\\frac{t}{2}) & -i sin(\\frac{t}{2})\\\\-i sin(\\frac{t}{2}) & cos(\\frac{t}{2})\\end{bmatrix},\nR_y(t) = \\begin{bmatrix}cos(\\frac{t}{2}) & - sin(\\frac{t}{2})\\\\sin(\\frac{t}{2}) & cos(\\frac{t}{2})\\end{bmatrix},\nR_z(t) = \\begin{bmatrix}e^{-\\frac{it}{2}} & 0\\\\0 & e^{\\frac{it}{2}}\\end{bmatrix}\n\\end{align}\n\nwhich matches the result we symbolically derived using `SymPy`.\n", "meta": {"hexsha": "05ec1ebcfdfbf1241c46ce16a947104acba8bd5b", "size": 15802, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "bloch-sphere-rotations.ipynb", "max_stars_repo_name": "marekyggdrasil/notebooks", "max_stars_repo_head_hexsha": "22644eb62b7833fb4ef7fd751b3fc6f01ceebe09", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "bloch-sphere-rotations.ipynb", "max_issues_repo_name": "marekyggdrasil/notebooks", "max_issues_repo_head_hexsha": "22644eb62b7833fb4ef7fd751b3fc6f01ceebe09", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "bloch-sphere-rotations.ipynb", "max_forks_repo_name": "marekyggdrasil/notebooks", "max_forks_repo_head_hexsha": "22644eb62b7833fb4ef7fd751b3fc6f01ceebe09", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 80.2131979695, "max_line_length": 4104, "alphanum_fraction": 0.7791418808, "converted": true, "num_tokens": 857, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9566341987633822, "lm_q2_score": 0.8774767842777551, "lm_q1q2_score": 0.8394243004610195}} {"text": "# Convexity\n\n### The vector uv consists of the set of convex combinations of u and v\n\n\n```python\n%matplotlib inline\nimport numpy as np\nfrom numpy import array\nimport matplotlib.pyplot as plt\n```\n\nTake the vectors $u=\\begin{bmatrix}0.5 \\\\ 1\\end{bmatrix}$ and $v=\\begin{bmatrix}3.5 \\\\ 3\\end{bmatrix}$. We can write an expression for the line from $u$ to $v$ as\n$$\n\\begin{align}\n uv &= \\alpha (\\begin{bmatrix}3.5 \\\\ 3\\end{bmatrix} - \\begin{bmatrix}0.5 \\\\ 1\\end{bmatrix}) + \\begin{bmatrix}0.5 \\\\ 1\\end{bmatrix} \\\\\n &= \\alpha \\begin{bmatrix}3 \\\\ 2\\end{bmatrix} + \\begin{bmatrix}0.5 \\\\ 1\\end{bmatrix}\n\\end{align}\n$$.\n\nThis is in the typical $y=mx+b$ format. However, we can form a different representation, where the coefficients of both vectors are defined by $\\alpha$:\n$$\n\\begin{align}\n uv &= \\alpha (\\begin{bmatrix}3.5 \\\\ 3\\end{bmatrix} - \\begin{bmatrix}0.5 \\\\ 1\\end{bmatrix}) + \\begin{bmatrix}0.5 \\\\ 1\\end{bmatrix} \\\\\n &= \\alpha \\begin{bmatrix}3.5 \\\\ 3\\end{bmatrix} - \\alpha \\begin{bmatrix}0.5 \\\\ 1\\end{bmatrix} + \\begin{bmatrix}0.5 \\\\ 1\\end{bmatrix} \\\\\n &= \\alpha \\begin{bmatrix}3.5 \\\\ 3\\end{bmatrix} + (1 - \\alpha) \\begin{bmatrix}0.5 \\\\ 1\\end{bmatrix}\n\\end{align}\n$$.\n\n\n```python\nu = array([0.5, 1])\nv = array([3.5, 3])\nplt.plot([0, u[0]], [0, u[1]], color='b')\nplt.plot([0, v[0]], [0, v[1]], color='b')\n\nalphas = np.random.rand(1000)\nconvex_combos = array([\n alpha*u + (1-alpha)*v\n for alpha in alphas\n])\nplt.scatter(convex_combos[:, 0], convex_combos[:, 1], color='r', marker='+')\n\n```\n\n\n```python\nfrom sklearn.datasets import fetch_lfw_people, fetch_olivetti_faces\n# ds = fetch_lfw_people(color=False)\nds = fetch_olivetti_faces()\n\n```\n\n downloading Olivetti faces from https://ndownloader.figshare.com/files/5976027 to /home/aagnone/scikit_learn_data\n\n\n\n```python\ndef convex_combine(x1, x2, alpha):\n return alpha * x1 + (1 - alpha) * x2\n\ndef blend_faces(i, j):\n face_i = ds.images[i]; face_j = ds.images[j]\n data_i = ds.data[i]; data_j = ds.data[j]\n \n # show different convex combinations of the faces\n plt.figure()\n for i, alpha in enumerate(np.linspace(0, 1, 5)):\n plt.subplot(1, 5, i+1)\n plt.title(\"{:.2f}\".format(alpha))\n combined_data = convex_combine(data_i, data_j, alpha)\n combined_img = combined_data.reshape((face_i.shape))\n plt.imshow(combined_img, cmap='gray')\n```\n\n\n```python\nplt.rcParams['figure.figsize'] = [10, 5]\ninds = list(np.random.choice(range(len(ds.data)), size=2, replace=False))\nblend_faces(*inds)\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "1f53428659b0cf583651a5094ebecc8bb511cc63", "size": 67820, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/convex.ipynb", "max_stars_repo_name": "aagnone3/linalg", "max_stars_repo_head_hexsha": "d4313eaf81990ca67841af018e8ca2d156b9a68c", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notebooks/convex.ipynb", "max_issues_repo_name": "aagnone3/linalg", "max_issues_repo_head_hexsha": "d4313eaf81990ca67841af018e8ca2d156b9a68c", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, 
"max_forks_repo_path": "notebooks/convex.ipynb", "max_forks_repo_name": "aagnone3/linalg", "max_forks_repo_head_hexsha": "d4313eaf81990ca67841af018e8ca2d156b9a68c", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 366.5945945946, "max_line_length": 49216, "alphanum_fraction": 0.9202152757, "converted": true, "num_tokens": 851, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.951142217223021, "lm_q2_score": 0.8824278757303677, "lm_q1q2_score": 0.8393144062615824}} {"text": "# Numerical Methods in Scientific Computing\n# Assignment 4\n\n# Q1.\n\nTo compute $\\int_0^1e^{x^2}dx$ using Trapezoidal rule and modified Trapezoidal rule.\n\n- Trapezoidal Rule is given by,\n\\begin{equation}\n \\int_{x_0}^{x_N}f(x)dx = \\frac{h}{2}\\sum_{i=0}^{N-1} [f(x_i)+f(x_{i+1})] + O(h^2)\n\\end{equation}\n\n- Trapezoidal Rule with end corrections using first derivative is given by,\n\\begin{equation}\n \\int_{x_0}^{x_N}f(x)dx = \\frac{h}{2}\\sum_{i=0}^{N-1} [f(x_i)+f(x_{i+1})] - \\frac{h^2}{2}[f^{\\prime}(x_N)-f^{\\prime}(x_N)] + O(h^4)\n\\end{equation}\n\n- Trapezoidal Rule with end corrections using third derivative is given by,\n\nTo introduce third derivatives into the end corrections, say\n\\begin{equation}\n f^{\\prime\\prime}(y_{i+1}) = a_{-1}f^{\\prime}(x_{i}) + a_1f^{\\prime}(x_{i+1}) + b_{-1}f^{\\prime\\prime\\prime}(x_{i}) + b_{1}f^{\\prime\\prime\\prime}(x_{i+1})\n\\end{equation}\n\nBy taylor series expansion we have,\n\n\\begin{equation}\n f^{\\prime}(x_{i}) = f^{\\prime}(y_{i+1}) - \\frac{h}{2}f^{\\prime\\prime}(y_{i+1}) + \\frac{(\\frac{h}{2})^2}{2!}f^{\\prime\\prime\\prime}(y_{i+1}) - \\frac{(\\frac{h}{2})^3}{3!}f^{\\prime\\prime\\prime\\prime}(y_{i+1})+\\frac{(\\frac{h}{2})^4}{4!}f^{\\prime\\prime\\prime\\prime\\prime}(y_{i+1})-\\frac{(\\frac{h}{2})^5}{5!}f^{\\prime\\prime\\prime\\prime\\prime\\prime}(y_{i+1}) + O(h^6)\n\\end{equation}\n\n\\begin{equation}\n f^{\\prime}(x_{i+1}) = f^{\\prime}(y_{i+1}) + \\frac{h}{2}f^{\\prime\\prime}(y_{i+1}) + \\frac{(\\frac{h}{2})^2}{2!}f^{\\prime\\prime\\prime}(y_{i+1}) + \\frac{(\\frac{h}{2})^3}{3!}f^{\\prime\\prime\\prime\\prime}(y_{i+1})+\\frac{(\\frac{h}{2})^4}{4!}f^{\\prime\\prime\\prime\\prime\\prime}(y_{i+1})+\\frac{(\\frac{h}{2})^5}{5!}f^{\\prime\\prime\\prime\\prime\\prime\\prime}(y_{i+1}) + O(h^6)\n\\end{equation}\n\n\\begin{equation}\n f^{\\prime\\prime\\prime}(x_{i}) = f^{\\prime\\prime\\prime}(y_{i+1}) - \\frac{h}{2}f^{\\prime\\prime\\prime\\prime}(y_{i+1}) + \\frac{(\\frac{h}{2})^2}{2!}f^{\\prime\\prime\\prime\\prime\\prime}(y_{i+1}) - \\frac{(\\frac{h}{2})^3}{3!}f^{\\prime\\prime\\prime\\prime\\prime\\prime}(y_{i+1}) + O(h^4)\n\\end{equation}\n\n\\begin{equation}\n f^{\\prime\\prime\\prime}(x_{i+1}) = f^{\\prime\\prime\\prime}(y_{i+1}) + \\frac{h}{2}f^{\\prime\\prime\\prime\\prime}(y_{i+1}) + \\frac{(\\frac{h}{2})^2}{2!}f^{\\prime\\prime\\prime\\prime\\prime}(y_{i+1}) + \\frac{(\\frac{h}{2})^3}{3!}f^{\\prime\\prime\\prime\\prime\\prime\\prime}(y_{i+1})+ O(h^4)\n\\end{equation}\n\nSubstituting Taylor series expansions and solving for the coefficients, we have,\n\n\\begin{equation}\n a_{1}=-a_{-1}=\\frac{1}{h} \\quad b_{1}=-b_{-1}=-\\frac{h}{24}\n\\end{equation}\n\nThe trailing terms amount to order of $h^4$ and hence the finite difference equation is given by,\n\\begin{equation}\n \\Rightarrow f^{\\prime\\prime}(y_{i+1}) = \\frac{f^{\\prime}(x_{i+1}) - f^{\\prime}(x_{i})}{h} - 
\\frac{h(f^{\\prime\\prime\\prime}(x_{i+1}) - f^{\\prime\\prime\\prime}(x_{i}))}{24} + O(h^4)\n\\end{equation}\n\nAnd by central difference,\n\n\\begin{equation}\n f^{\\prime\\prime\\prime\\prime}(y_{i+1}) = \\frac{f^{\\prime\\prime\\prime}(x_{i+1}) - f^{\\prime\\prime\\prime}(x_{i})}{h} + O(h^2)\n\\end{equation}\n\nWe know,\n\n\\begin{equation}\n I_{i+1} = I_{i+1}^{trap} - \\frac{h^3}{12}f^{\\prime\\prime}(y_{i+1}) - \\frac{h^5}{480}f^{\\prime\\prime\\prime\\prime}(y_{i+1}) + O(h^7)\n\\end{equation}\n\nSubstituting the relevant terms and summing over all i we get,\n\\begin{equation}\n I = I^{trap} - \\frac{h^3}{12}(\\frac{f^{\\prime}(x_{N}) - f^{\\prime}(x_{0})}{h} - \\frac{h(f^{\\prime\\prime\\prime}(x_{N}) - f^{\\prime\\prime\\prime}(x_{0}))}{24}) - \\frac{h^5}{480}(\\frac{f^{\\prime\\prime\\prime}(x_{N}) - f^{\\prime\\prime\\prime}(x_{0})}{h}) + O(h^6)\n\\end{equation}\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport scipy\n```\n\n\n```python\ndef func(N):\n h = 1/N\n X = [h*i for i in range(N+1)]\n F = np.exp(np.power(X,2))\n return X, F\n\ndef trap_rule(N):\n h = 1/N\n X, F = func(N)\n I_trap = (h/2)*sum([F[i]+F[i+1] for i in range(0,N)])\n return I_trap\n \ndef mod_trap_rule_first_der(N):\n h = 1/N\n X, F = func(N)\n F_prime = [0, 0]\n F_prime[0] = np.exp(np.power(X[0],2))*2*X[0]\n F_prime[1] = np.exp(np.power(X[N],2))*2*X[N]\n I_mod_trap1 = (h/2)*sum([F[i]+F[i+1] for i in range(0,N)])-(h**2/12)*(F_prime[1]-F_prime[0])\n return I_mod_trap1\n\ndef mod_trap_rule_third_der(N):\n h = 1/N\n X, F = func(N)\n F_1prime = [0, 0]\n F_1prime[0] = np.exp(np.power(X[0],2))*2*X[0]\n F_1prime[1] = np.exp(np.power(X[N],2))*2*X[N]\n F_3prime = [0, 0]\n F_3prime[0] = np.exp(np.power(X[0],2))*2*(4*np.power(X[0],3)+6*X[0])\n F_3prime[1] = np.exp(np.power(X[N],2))*2*(4*np.power(X[N],3)+6*X[N])\n I_mod_trap3 = (h/2)*sum([F[i]+F[i+1] for i in range(0,N)]) - (h**2/12)*(F_1prime[1]-F_1prime[0]) + (h**4/(12*24))*(F_3prime[1]-F_3prime[0]) - (h**4/480)*(F_3prime[1]-F_3prime[0])\n return I_mod_trap3\n```\n\n\n```python\nI_exact = 1.4626517459071816\nN_list = [2, 5, 10, 20, 50, 100, 200, 500, 1000]\nI_trap = []\nI_mod_trap1 = []\nI_mod_trap3 = []\nfor i,N in enumerate(N_list):\n I_trap.append(trap_rule(N))\n I_mod_trap1.append(mod_trap_rule_first_der(N))\n I_mod_trap3.append(mod_trap_rule_third_der(N))\n```\n\n\n```python\n# Plot the results to compare between Numerical and Exact solutions to the ODE for different values of n\nfig = plt.figure(figsize=(15,7))\nfig.suptitle(\"Plot of absolute Errors for the Three methods\", fontsize=16)\nI_numerical = {'Trapezoidal Rule':I_trap,\n 'Trapezoidal rule with end corrections using first derivative':I_mod_trap1,\n 'Trapezoidal rule with end corrections using third derivative':I_mod_trap3}\n\nfor i, method in enumerate(I_numerical):\n plt.subplot(1, 3, i+1)\n plt.loglog(N_list, np.abs(np.subtract(I_numerical[method],I_exact)),\n marker='o',color='r', label=\"abs error\", linestyle='dashed')\n plt.grid()\n plt.legend()\n plt.xlabel('N')\n plt.ylabel('Absolute error')\n plt.title(method if len(method)<35 else method[:37]+'\\n'+method[37:])\n \n# Plot the results to compare between Numerical and Exact solutions to the ODE for different values of n\nfig = plt.figure(figsize=(15,7))\nfig.suptitle(\"[Common scale for axes] Plot of absolute Errors for the Three methods\", fontsize=16)\nI_numerical = {'Trapezoidal Rule':I_trap,\n 'Trapezoidal rule with end corrections using first derivative':I_mod_trap1,\n 'Trapezoidal rule with end corrections using 
third derivative':I_mod_trap3}\n\nfor i, method in enumerate(I_numerical):\n plt.subplot(1, 3, i+1)\n plt.loglog(N_list, np.abs(np.subtract(I_numerical[method],I_exact)),\n marker='o',color='r', label=\"abs error\", linestyle='dashed')\n plt.grid()\n plt.legend()\n plt.xlabel('N')\n plt.ylabel('Absolute error')\n plt.title(method if len(method)<35 else method[:37]+'\\n'+method[37:])\n plt.xlim(10**0, 10**3+250)\n plt.ylim(10**-17, 10**0)\n```\n\n- Trapezoidal rule - Slope = 4/2 = 2 $\\Rightarrow Error is O(1/h^2)$\n- Trapezoidal rule with end correction using first derivative- Slope = 8/2 = 4 $\\Rightarrow Error is O(1/h^4)$\n- Trapezoidal rule with end correction using third derivative- Slope = 12/2 = 6 $\\Rightarrow Error is O(1/h^6)$\n\n# Q2.\n\nTo obtain $log(n!) = log(C(\\frac{n}{e})^n\\sqrt{n})+O(1/n)$ using Euler-Macluarin, where C is some constant.\n\nThe Euler-Maclaurin Formula is given by,\n\\begin{equation}\n \\sum_{n=a}^{b} f(n) = \\int_{a}^{b}f(x)dx + [\\frac{f(b)+f(a)}{2}] + \\sum_{k=1}^{p} \\frac{b_{2k}}{(2k)!} [f^{(2k-1)}(b) - f^{(2k-1)}(a)] - \\int_{a}^{b} \\frac{B_{2p}(\\{t\\})}{(2p)!}f^{2p}(t)dt\n\\end{equation}\n\n\\begin{equation}\n log(N!) = \\sum_{n=1}^{N} log(n) \\Rightarrow f(x) = log(x)\n\\end{equation}\n\n\\begin{equation}\n \\sum_{n=1}^{N} log(n) = \\int_{1}^{N}log(x)dx + [\\frac{log(N)+log(1)}{2}] + \\sum_{k=1}^{p} \\frac{b_{2k}}{(2k)!} (-1)^{2k-2}(2k-2)!(\\frac{1}{N^{2k-1}} - 1) - \\int_{1}^{N} \\frac{B_{2p}(\\{t\\})(-1)}{(2p)!t^2}dt\n\\end{equation}\n\n\\begin{equation}\n \\sum_{n=1}^{N} log(n) = (Nlog(N)-N+1) + \\frac{log(N)}{2} + \\sum_{k=1}^{p} \\frac{b_{2k}}{(2k)(2k-1)} (-1)^{2k}(\\frac{1}{N^{2k-1}} - 1) + (\\int_{1}^{\\infty} \\frac{B_{2p}(\\{t\\})}{(2p)!t^2}dt - \\int_{N}^{\\infty} \\frac{B_{2p}(\\{t\\})}{(2p)!t^2}dt)\n\\end{equation}\n\n\\begin{equation}\n \\lim_{n \\to \\infty}( \\sum_{n=1}^{N} log(n) - (Nlog(N)-N+1) - \\frac{log(N)}{2} )= \\lim_{n \\to \\infty}(\\sum_{k=1}^{p} \\frac{b_{2k}}{(2k)(2k-1)} (-1)^{2k}(\\frac{1}{N^{2k-1}} - 1)) + \\lim_{n \\to \\infty}((\\int_{1}^{\\infty} \\frac{B_{2p}(\\{t\\})}{(2p)!t^2}dt - \\int_{N}^{\\infty} \\frac{B_{2p}(\\{t\\})}{(2p)!t^2}dt))\n\\end{equation}\n\n\\begin{equation}\n \\lim_{n \\to \\infty}( \\sum_{n=1}^{N} log(n) - (Nlog(N)-N+1) - \\frac{log(N)}{2} )= (\\sum_{k=1}^{p} \\frac{b_{2k}}{(2k)(2k-1)} (-1)^{2k}(-1) + \\int_{1}^{\\infty} \\frac{B_{2p}(\\{t\\})}{(2p)!t^2}dt) - \\lim_{n \\to \\infty}(\\int_{N}^{\\infty} \\frac{B_{2p}(\\{t\\})}{(2p)!t^2}dt))\n\\end{equation}\n\nTaking the following expression as some constant,\n\n\\begin{equation}\n (\\sum_{k=1}^{p} \\frac{b_{2k}}{(2k)(2k-1)} (-1)^{2k}(-1) + \\int_{1}^{\\infty} \\frac{B_{2p}(\\{t\\})}{(2p)!t^2}dt) = log(C)-1\n\\end{equation}\n\nWhile a bound to the following expression is to be found,\n\\begin{equation}\n (\\int_{N}^{\\infty} \\frac{B_{2p}(\\{t\\})}{(2p)!t^2}dt))\n\\end{equation}\n\nTaking p = 1,\n\\begin{equation}\n B_{2}(\\{t\\}) = \\{t^2\\} - \\{t\\} + \\frac{1}{6} \\Rightarrow |B_{2}(\\{t\\})| \\lt 3\n\\end{equation}\n\nSo,\n\\begin{equation}\n |\\int_{N}^{\\infty} \\frac{B_{2}(\\{t\\})}{(2)!t^2}dt)| \\leq \\int_{N}^{\\infty} \\frac{|B_{2}(\\{t\\})|}{(2)!t^2}dt) \\leq \\frac{3}{2N}\n\\end{equation}\nwhich is O(1/N).\n\n\\begin{equation}\n \\Rightarrow \\sum_{n=1}^{N} log(n) = (Nlog(N)-N+1) + \\frac{log(N)}{2} + log(C) - 1 + O(1/N) = log((\\frac{N}{e})^N) + log(\\sqrt{N}) + log(C) + O(1/N)\n\\end{equation}\n\n\\begin{equation}\n \\Rightarrow \\sum_{n=1}^{N} log(n) = log(C(\\frac{N}{e})^N\\sqrt{N}) + O(1/N)\n\\end{equation}\n\n# Q3.\n\n- To 
evaluate\n\\begin{equation}\n I_k = \\int_{0}^{\\pi/2} sin^k(x)dx\n\\end{equation}\n\nLet $u = sin^{k-1}(x) \\Rightarrow du = (k-1)sin^{k-2}(x)cos(x)dx$ and $dv = sin(x)dx \\Rightarrow v = -cos(x)$.\n\n\\begin{equation}\n I_k = [-sin^{k-1}(x)cos(x)]_0^{\\pi/2} + \\int_{0}^{\\pi/2} (k-1)sin^{k-2}(x)cos^2(x)dx\n\\end{equation}\n\nWith $[-sin^{k-1}(x)cos(x)]_0^{\\pi/2} = 0$,\n\n\n\\begin{equation}\n I_k = \\int_{0}^{\\pi/2} (k-1)sin^{k-2}(x)(1-sin^2(x))dx \\Rightarrow I_k = \\int_{0}^{\\pi/2} (k-1)sin^{k-2}(x)dx + (k-1)I_k\n\\end{equation}\n\n\\begin{equation}\n I_k = \\frac{k-1}{k}\\int_{0}^{\\pi/2} sin^{k-2}(x)dx = \\frac{k-1}{k}I_{k-2}\n\\end{equation}\n\nWe can substitute for $I_k$ recursively to find that for when k is even,\n\\begin{equation}\n I_k = \\frac{(n-1)(n-3)...1}{n(n-2)...2}\\int_{0}^{\\pi/2} sin^{0}(x)dx\n\\end{equation}\n\n\\begin{equation}\n \\Rightarrow I_k = \\frac{(n-1)(n-3)...1}{n(n-2)...2}\\frac{\\pi}{2}\n\\end{equation}\n\nAnd, when k is odd\n\\begin{equation}\n I_k = \\frac{(n-1)(n-3)...2}{n(n-2)...3}\\int_{0}^{\\pi/2} sin^{1}(x)dx\n\\end{equation}\n\n\\begin{equation}\n \\Rightarrow I_k = \\frac{(n-1)(n-3)...2}{n(n-2)...3}\n\\end{equation}\n\n- From the recursive relation $I_k = \\frac{k-1}{k}I_{k-2}$ as $\\frac{k-1}{k} \\lt 1 \\quad \\forall k \\gt 0$ we have $I_{k} \\lt I_{k-2}$. Hence $I_k$ is monotone decreasing sequence.\n\n- $\\lim_{m \\to \\infty} \\frac{I_{2m-1}}{I_{2m+1}}$\n\n\\begin{equation}\n \\lim_{m \\to \\infty} \\frac{I_{2m-1}}{I_{2m+1}} = \\lim_{m \\to \\infty} \\frac{I_{2m-1}}{\\frac{2m}{2m+1}I_{2m-1}} = \\lim_{m \\to \\infty} \\frac{2m+1}{2m} = 1\n\\end{equation}\n\n- $\\lim_{m \\to \\infty} \\frac{I_{2m}}{I_{2m+1}}$\n\nWe know that since $I_k$ is monotone decreasing sequence $I_{2m-1} \\geq I_{2m} \\geq I_{2m+1}$. 
Dividing throughout by $I_{2m+1}$ we have,\n\n\\begin{equation}\n \\frac{I_{2m-1}}{I_{2m+1}} \\geq \\frac{I_{2m}}{I_{2m+1}} \\geq \\frac{I_{2m+1}}{I_{2m+1}} = 1\n\\end{equation}\n\nAnd as $\\lim_{m \\to \\infty} \\frac{I_{2m-1}}{I_{2m+1}} = \\lim_{m \\to \\infty} \\frac{2m+1}{2m} = 1$, by sandwich theorem,\n\n\\begin{equation}\n \\lim_{m \\to \\infty} \\frac{I_{2m}}{I_{2m+1}} = 1\n\\end{equation}\n\n- Central Binomial Coefficient\n\nWe know that $\\lim_{m \\to \\infty} \\frac{I_{2m}}{I_{2m+1}} = 1$.\n\n\\begin{equation}\n \\lim_{m \\to \\infty} \\frac{I_{2m}}{I_{2m+1}} = \\lim_{m \\to \\infty} \\frac{\\frac{(2m-1)(2m-3)...1.\\pi}{(2m)(2m-2)...2.2}}{\\frac{(2m)(2m-2)...2}{(2m+1)(2m-1)...3}} = \\lim_{m \\to \\infty} (2m+1)(\\frac{(2m-1)(2m-3)...3.1}{(2m)(2m-2)...4.2})^2\\frac{\\pi}{2} = 1\n\\end{equation}\n\n\\begin{equation}\n \\Rightarrow \\lim_{m \\to \\infty} \\frac{((2m)(2m-2)...4.2)^2}{(2m+1)((2m-1)(2m-3)...3.1)^2} = \\frac{\\pi}{2}\n\\end{equation}\n\nSimplifying the expression,\n\\begin{equation}\n \\frac{(m.(m-1)...2.1.2^m)^2}{(2m+1)((2m-1)(2m-3)...3.1)^2} = \\frac{(m!)^2.2^{2m}}{(2m+1)((2m-1)(2m-3)...3.1)^2}\n\\end{equation}\n\nMultiplying and dividing by $((2m)(2m-2)...4.2)^2$\n\\begin{equation}\n \\frac{(m!)^2.2^{2m}.((2m)(2m-2)...4.2)^2}{(2m+1)((2m)(2m-1)(2m-2)(2m-3)...3.2.1)^2} = \\frac{(m!)^4.2^{4m}}{(2m+1)(2m!)^2} = \\frac{2^{4m}}{(2m+1){2m \\choose m}^2}\n\\end{equation}\n\n\\begin{equation}\n \\lim_{m \\to \\infty} \\frac{2^{4m}}{(2m+1){2m \\choose m}^2} = \\frac{\\pi}{2} \\Rightarrow \\lim_{m \\to \\infty} {2m \\choose m} = \\lim_{m \\to \\infty} 2^{2m}\\sqrt{\\frac{2}{(2m+1)\\pi}}\n\\end{equation}\n\n\\begin{equation}\n \\Rightarrow {2m \\choose m} \\sim \\frac{4^{m}}{\\sqrt{m\\pi}}\n\\end{equation}\n\n- Evaluating C\n\nWe know, \n\\begin{equation}\n log(2m!) = log(C(\\frac{2m}{e})^{2m}\\sqrt{2m}) + O(1/2m) \\quad;\\quad 2.log(m!) = 2log(C(\\frac{m}{e})^m\\sqrt{m}) + O(1/m)\n\\end{equation}\n\n\\begin{equation}\n log(2m!)-2.log(m!) = log(\\frac{C(\\frac{2m}{e})^{2m}\\sqrt{2m})}{(C(\\frac{m}{e})^m\\sqrt{m})^2}\n\\end{equation}\n\n\\begin{equation}\n log(\\frac{2m!}{m!}) = log(\\frac{2^{2m}\\sqrt{2}}{C\\sqrt{m}})\n\\end{equation}\n\n\\begin{equation}\n \\Rightarrow log(\\frac{2^{2m}\\sqrt{2}}{C\\sqrt{m}}) = log(\\frac{4^{m}}{\\sqrt{m\\pi}}) \\Rightarrow C = \\sqrt{2\\pi}\n\\end{equation}\n\n- Substituting this back into the equation $log(N!) = log(C(\\frac{N}{e})^N\\sqrt{N}) + O(1/N)$ ,\n\\begin{equation}\n log(N!) = log(\\sqrt{2\\pi}(\\frac{N}{e})^N\\sqrt{N}) + O(1/N)\n\\end{equation}\n \n\\begin{equation}\n \\Rightarrow N! \\sim (\\frac{N}{e})^N\\sqrt{2\\pi N} \\quad \\text{(Stirling Formula)}\n\\end{equation}\n\n- $O(1/n^3)$\n\nIncluding $\\frac{b_2.f^{\\prime}(x)|_N}{2!} = \\frac{1}{12N}$\n\\begin{equation}\n \\Rightarrow \\sum_{n=1}^{N} log(n) = log((\\frac{N}{e})^N) + log(\\sqrt{N}) + log(\\sqrt{2\\pi}) + O(1/N) = log((\\frac{N}{e})^N) + log(\\sqrt{N}) + log(\\sqrt{2\\pi}) + \\frac{1}{12N} + O(1/N^3)\n\\end{equation}\n\n\\begin{equation}\n \\Rightarrow N! 
\\sim (\\frac{N}{e})^N\\sqrt{2\\pi N}.e^{\\frac{1}{12N}}\n\\end{equation}\n\n\n```python\n# Relative Error for {20, 50}\nN = [20, 50]\nn = N[0]\nfactorial_n = scipy.math.factorial(n)\nstirling_n = (np.power(n/np.exp(1),n))*np.power(2*np.pi*n, 0.5)\nprint('The factorial for n = 20 using: \\nStirling formula \\t=',stirling_n, '\\nExact value \\t\\t=', factorial_n)\nprint('Relative Error (%)\\t=', 100*(stirling_n-factorial_n)/factorial_n)\n\nn = N[1]\nfactorial_n = scipy.math.factorial(n)\nstirling_n = (np.power(n/np.exp(1),n))*np.power(2*np.pi*n, 0.5)\nprint('The factorial for n = 50 using: \\nStirling formula \\t=',stirling_n, '\\nExact value \\t\\t=', factorial_n)\nprint('Relative Error (%)\\t=', 100*(stirling_n-factorial_n)/factorial_n)\n```\n\n The factorial for n = 20 using: \n Stirling formula \t= 2.422786846761135e+18 \n Exact value \t\t= 2432902008176640000\n Relative Error (%)\t= -0.41576526228796995\n The factorial for n = 50 using: \n Stirling formula \t= 3.036344593938168e+64 \n Exact value \t\t= 30414093201713378043612608166064768844377641568960512000000000000\n Relative Error (%)\t= -0.16652563663756476\n\n\n\n```python\n# Factorial with O(1/n^3)\nN = [20, 50]\nn = N[0]\nfactorial_n = scipy.math.factorial(n)\nstirling_n = (np.power(n/np.exp(1),n))*np.power(2*np.pi*n, 0.5)*np.exp(1/(12*n))\nprint('The factorial for n = 20 using: \\nStirling formula \\t=',stirling_n, '\\nExact value \\t\\t=', factorial_n)\nprint('Relative Error (%)\\t=', 100*(stirling_n-factorial_n)/factorial_n)\n\nn = N[1]\nfactorial_n = scipy.math.factorial(n)\nstirling_n = (np.power(n/np.exp(1),n))*np.power(2*np.pi*n, 0.5)*np.exp(1/(12*n))\nprint('The factorial for n = 50 using: \\nStirling formula \\t=',stirling_n, '\\nExact value \\t\\t=', factorial_n)\nprint('Relative Error (%)\\t=', 100*(stirling_n-factorial_n)/factorial_n)\n```\n\n The factorial for n = 20 using: \n Stirling formula \t= 2.432902852332159e+18 \n Exact value \t\t= 2432902008176640000\n Relative Error (%)\t= 3.469747306463279e-05\n The factorial for n = 50 using: \n Stirling formula \t= 3.041409387750502e+64 \n Exact value \t\t= 30414093201713378043612608166064768844377641568960512000000000000\n Relative Error (%)\t= 2.221968747857392e-06\n\n", "meta": {"hexsha": "290c012fc711f14eeb14283c14cbbb4583766300", "size": 113333, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "me16b077_4.ipynb", "max_stars_repo_name": "ENaveen98/Numerical-methods-and-Scientific-computing", "max_stars_repo_head_hexsha": "5b931621e307386c8c20430db9cb8dae243d38ba", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-01-05T12:31:51.000Z", "max_stars_repo_stars_event_max_datetime": "2021-01-05T12:31:51.000Z", "max_issues_repo_path": "me16b077_4.ipynb", "max_issues_repo_name": "ENaveen98/Numerical-methods-and-Scientific-computing", "max_issues_repo_head_hexsha": "5b931621e307386c8c20430db9cb8dae243d38ba", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "me16b077_4.ipynb", "max_forks_repo_name": "ENaveen98/Numerical-methods-and-Scientific-computing", "max_forks_repo_head_hexsha": "5b931621e307386c8c20430db9cb8dae243d38ba", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 211.0484171322, "max_line_length": 48812, "alphanum_fraction": 
0.8730643325, "converted": true, "num_tokens": 7115, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9511422186079557, "lm_q2_score": 0.8824278649085117, "lm_q1q2_score": 0.8393143971905632}} {"text": "# Interact Exercise 6\n\n## Imports\n\nPut the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.\n\n\n```python\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\n```\n\n\n```python\nfrom IPython.display import Image\nfrom IPython.html.widgets import interact, interactive, fixed\n```\n\n :0: FutureWarning: IPython widgets are experimental and may change in the future.\n\n\n## Exploring the Fermi distribution\n\nIn quantum statistics, the [Fermi-Dirac](http://en.wikipedia.org/wiki/Fermi%E2%80%93Dirac_statistics) distribution is related to the probability that a particle will be in a quantum state with energy $\\epsilon$. The equation for the distribution $F(\\epsilon)$ is:\n\n\n```python\nImage('fermidist.png')\n```\n\nIn this equation:\n\n* $\\epsilon$ is the single particle energy.\n* $\\mu$ is the chemical potential, which is related to the total number of particles.\n* $k$ is the Boltzmann constant.\n* $T$ is the temperature in Kelvin.\n\nIn the cell below, typeset this equation using LaTeX:\n\n\\begin{align}\n\\frac{1}{e^{(\\epsilon - \\mu)/kT} + 1}\n\\end{align}\n\nDefine a function `fermidist(energy, mu, kT)` that computes the distribution function for a given value of `energy`, chemical potential `mu` and temperature `kT`. Note here, `kT` is a single variable with units of energy. Make sure your function works with an array and don't use any `for` or `while` loops in your code.\n\n\n```python\ndef fermidist(energy, mu, kT):\n e = 2.71828182845904523536028747135266249775724709369995\n \"\"\"Compute the Fermi distribution at energy, mu and kT.\"\"\"\n x = 1/(e **((energy - mu)/kT) + 1)\n return x\n```\n\n\n```python\nassert np.allclose(fermidist(0.5, 1.0, 10.0), 0.51249739648421033)\nassert np.allclose(fermidist(np.linspace(0.0,1.0,10), 1.0, 10.0),\n np.array([ 0.52497919, 0.5222076 , 0.51943465, 0.5166605 , 0.51388532,\n 0.51110928, 0.50833256, 0.50555533, 0.50277775, 0.5 ]))\n```\n\nWrite a function `plot_fermidist(mu, kT)` that plots the Fermi distribution $F(\\epsilon)$ as a function of $\\epsilon$ as a line plot for the parameters `mu` and `kT`.\n\n* Use enegies over the range $[0,10.0]$ and a suitable number of points.\n* Choose an appropriate x and y limit for your visualization.\n* Label your x and y axis and the overall visualization.\n* Customize your plot in 3 other ways to make it effective and beautiful.\n\n\n```python\ndef plot_fermidist(mu, kT):\n E = np.linspace(0, 10., 100)\n y = plt.plot(E, fermidist(E, mu, kT))\n plt.xlabel('t')\n plt.ylabel('X(t)')\n return y\n```\n\n\n```python\nplot_fermidist(4.0, 1.0)\n```\n\n\n```python\nassert True # leave this for grading the plot_fermidist function\n```\n\nUse `interact` with `plot_fermidist` to explore the distribution:\n\n* For `mu` use a floating point slider over the range $[0.0,5.0]$.\n* for `kT` use a floating point slider over the range $[0.1,10.0]$.\n\n\n```python\ninteract(plot_fermidist, mu = (0.0,5.0), kT=(.1,10.0));\n```\n\nProvide complete sentence answers to the following questions in the cell below:\n\n* What happens when the temperature $kT$ is low?\n* What happens when the temperature $kT$ is high?\n* What is the effect of changing the chemical potential $\\mu$?\n* The number of particles in the system are related to the area 
under this curve. How does the chemical potential affect the number of particles.\n\nUse LaTeX to typeset any mathematical symbols in your answer.\n\nWhen kT is low, the probablity drops off very quickly after a certain point. When kT is high, the probability follows a negatively sloped line. Mu changes the height of the function.\n", "meta": {"hexsha": "9cba4e0637b674a37f8312a024c6942f5b1eeb58", "size": 28949, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "midterm/InteractEx06.ipynb", "max_stars_repo_name": "edwardd1/phys202-2015-work", "max_stars_repo_head_hexsha": "b91da6959223a82c4c0b8030c92a789234a4b6b9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "midterm/InteractEx06.ipynb", "max_issues_repo_name": "edwardd1/phys202-2015-work", "max_issues_repo_head_hexsha": "b91da6959223a82c4c0b8030c92a789234a4b6b9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "midterm/InteractEx06.ipynb", "max_forks_repo_name": "edwardd1/phys202-2015-work", "max_forks_repo_head_hexsha": "b91da6959223a82c4c0b8030c92a789234a4b6b9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 78.0296495957, "max_line_length": 9994, "alphanum_fraction": 0.8354001865, "converted": true, "num_tokens": 1037, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9173026595857203, "lm_q2_score": 0.9149009602137116, "lm_q1q2_score": 0.8392410840615669}} {"text": "# Bayesian Linear Regression\n\n## What is the problem?\n\nGiven inputs $X$ and outputs $\\mathbf{y}$, we want to find the best parameters $\\boldsymbol{\\theta}$, such that predictions $\\hat{\\mathbf{y}} = X\\boldsymbol{\\theta}$ can estimate $\\mathbf{y}$ very well. In other words, we want L2 norm of errors $||\\hat{\\mathbf{y}} - \\mathbf{y}||_2$, as low as possible. \n\n## Applying Bayes Rule\n\nIn this problem, we will {ref}`model the distribution of parameters `. \n\n\\begin{equation}\n\\underbrace{p(\\boldsymbol{\\theta}|X, \\mathbf{y})}_{\\text{Posterior}} = \\frac{\\overbrace{p(\\mathbf{y}|X, \\boldsymbol{\\theta})}^{\\text{Likelihood}}}{\\underbrace{p(\\mathbf{y}|X)}_{\\text{Evidence}}}\\underbrace{p(\\boldsymbol{\\theta})}_{\\text{Prior}}\n\\end{equation}\n\n\\begin{equation}\np(\\mathbf{y}|X) = \\int_{\\boldsymbol{\\theta}}p(\\mathbf{y}|X, \\boldsymbol{\\theta})p(\\boldsymbol{\\theta})d\\boldsymbol{\\theta}\n\\end{equation}\n\nWe are interested in posterior $p(\\boldsymbol{\\theta}|X, \\mathbf{y})$ and to derive that, we need prior, likelihood and evidence terms. Let us look at them one by one.\n\n### Prior\n\nLet's assume a multivariate Gaussian prior over the $\\boldsymbol{\\theta}$ vector.\n\n$$\np(\\theta) \\sim \\mathcal{N}(\\boldsymbol{\\mu}_0, \\Sigma_0)\n$$\n\n### Likelihood\n\nGiven a $\\boldsymbol{\\theta}$, our prediction is $X\\boldsymbol{\\theta}$. Our data $\\mathbf{y}|X$ will have some irreducible noise which needs to be incorporated in the likelihood. Thus, we can assume the likelihood distribution over $\\mathbf{y}$ to be centered at $X\\boldsymbol{\\theta}$ with random i.i.d. 
homoskedastic noise with variance $\\sigma^2$:\n\n$$\np(\\mathbf{y}|X, \\theta) \\sim \\mathcal{N}(X\\boldsymbol{\\theta}, \\sigma^2I)\n$$\n\n### Maximum Likelihood Estimation (MLE)\n\nLet us find the optimal parameters by differentiating likelihood $p(\\mathbf{y}|X, \\boldsymbol{\\theta})$ w.r.t $\\boldsymbol{\\theta}$.\n\n\\begin{equation}\np(\\mathbf{y}|X, \\boldsymbol{\\theta}) = \\frac{1}{\\sqrt{(2\\pi)^n |\\sigma^2I|}}\\exp \\left( (\\mathbf{y} - X\\boldsymbol{\\theta})^T(\\sigma^2I)^{-1}(\\mathbf{y} - X\\boldsymbol{\\theta}) \\right)\n\\end{equation}\n\nSimplifying the above equation:\n\n\\begin{equation}\np(\\mathbf{y}|X, \\boldsymbol{\\theta}) = \\frac{1}{(2\\pi\\sigma^2)^{\\frac{n}{2}}}\\exp \\left( \\sigma^{-2}(\\mathbf{y} - X\\boldsymbol{\\theta})^T(\\mathbf{y} - X\\boldsymbol{\\theta}) \\right)\n\\end{equation}\n\nTaking log to simplify further:\n\n\\begin{align}\n\\log p(\\mathbf{y}|X, \\boldsymbol{\\theta}) &= (\\mathbf{y} - X\\boldsymbol{\\theta})^T(\\mathbf{y} - X\\boldsymbol{\\theta}) + \\log \\sigma^{-2} + \\log \\frac{1}{(2\\pi\\sigma^2)^{\\frac{n}{2}}}\\\\\n\\frac{d}{d\\boldsymbol{\\theta}} \\log p(\\mathbf{y}|X, \\boldsymbol{\\theta}) &= \\frac{d}{d\\boldsymbol{\\theta}}(\\mathbf{y} - X\\boldsymbol{\\theta})^T(\\mathbf{y} - X\\boldsymbol{\\theta})\\\\\n&= \\frac{d}{d\\boldsymbol{\\theta}}(\\mathbf{y}^T - \\boldsymbol{\\theta}^TX^T)(\\mathbf{y} - X\\boldsymbol{\\theta})\\\\\n&= \\frac{d}{d\\boldsymbol{\\theta}} \\left[ \\mathbf{y}^T\\mathbf{y} - \\mathbf{y}^TX\\boldsymbol{\\theta} - \\boldsymbol{\\theta}^TX^T\\mathbf{y} + \\boldsymbol{\\theta}^TX^TX\\boldsymbol{\\theta}\\right]\\\\\n&= -(\\mathbf{y}^TX)^T - X^T\\mathbf{y} + 2X^TX\\boldsymbol{\\theta} = 0\\\\\n\\therefore X^TX\\boldsymbol{\\theta} &= X^T\\mathbf{y}\\\\\n\\therefore \\boldsymbol{\\theta}_{MLE} &= (X^TX)^{-1}X^T\\mathbf{y}\n\\end{align}\n\nWe used some of the formulas from [this cheatsheet](http://www.gatsby.ucl.ac.uk/teaching/courses/sntn/sntn-2017/resources/Matrix_derivatives_cribsheet.pdf) but they can also be derived from scratch.\n\n### Maximum a posteriori estimation (MAP)\n\nWe know from {ref}`the previous discussion ` that:\n\n$$\n\\arg \\max_{\\boldsymbol{\\theta}} p(\\boldsymbol{\\theta}|X, \\mathbf{y}) = \\arg \\max_{\\boldsymbol{\\theta}} p(\\mathbf{y}|X, \\boldsymbol{\\theta})p(\\boldsymbol{\\theta})\n$$\n\nNow, differentiating $p(\\mathbf{y}|X, \\boldsymbol{\\theta})p(\\boldsymbol{\\theta})$ w.r.t $\\theta$ by reusing some of the steps from MLE:\n\n\n", "meta": {"hexsha": "0d05335fe186302c056309541eb50d3aca9f16c0", "size": 5368, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "linear-regression.ipynb", "max_stars_repo_name": "patel-zeel/bayesian-ml", "max_stars_repo_head_hexsha": "2b7657f22fbf70953a91b2ab2bc321bb451fa5a5", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "linear-regression.ipynb", "max_issues_repo_name": "patel-zeel/bayesian-ml", "max_issues_repo_head_hexsha": "2b7657f22fbf70953a91b2ab2bc321bb451fa5a5", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "linear-regression.ipynb", "max_forks_repo_name": "patel-zeel/bayesian-ml", "max_forks_repo_head_hexsha": "2b7657f22fbf70953a91b2ab2bc321bb451fa5a5", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, 
"max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 47.5044247788, "max_line_length": 369, "alphanum_fraction": 0.5644560358, "converted": true, "num_tokens": 1351, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9458012655937034, "lm_q2_score": 0.8872045981907006, "lm_q1q2_score": 0.8391192318093177}} {"text": "```python\nimport numpy as np\nimport numpy.linalg as linalg\nfrom sympy import Matrix\n```\n\n\n```python\na = [[1,2], [3,4]]\n```\n\n\n```python\na\n```\n\n\n\n\n [[1, 2], [3, 4]]\n\n\n\n\n```python\na = np.array(a)\n```\n\n\n```python\na\n```\n\n\n\n\n array([[1, 2],\n [3, 4]])\n\n\n\n\n```python\na.shape\n```\n\n\n\n\n (2, 2)\n\n\n\n\n```python\nmtx = Matrix(a)\n```\n\n\n```python\nmtx.rref()\n```\n\n\n\n\n (Matrix([\n [1, 0],\n [0, 1]]),\n (0, 1))\n\n\n\n\n```python\na\n```\n\n\n\n\n array([[1, 2],\n [3, 4]])\n\n\n\n\n```python\nlinalg.inv(a)\n```\n\n\n\n\n array([[-2. , 1. ],\n [ 1.5, -0.5]])\n\n\n\n\n```python\na\n```\n\n\n\n\n array([[1, 2],\n [3, 4]])\n\n\n\n\n```python\nmtx.inv()\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}-2 & 1\\\\\\frac{3}{2} & - \\frac{1}{2}\\end{matrix}\\right]$\n\n\n\n\n```python\nmtx_b = Matrix([[3,1], [2,1]])\n```\n\n\n```python\nmtx_b\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}3 & 1\\\\2 & 1\\end{matrix}\\right]$\n\n\n\n\n```python\nmtx_b.inv()\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & -1\\\\-2 & 3\\end{matrix}\\right]$\n\n\n\n\n```python\nmtx_b.shape\n```\n\n\n\n\n (2, 2)\n\n\n\n\n```python\na.shape\n```\n\n\n\n\n (2, 2)\n\n\n\n\n```python\nmtx_b.adjoint()\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}3 & 2\\\\1 & 1\\end{matrix}\\right]$\n\n\n\n\n```python\nmtx.det()\n```\n\n\n\n\n$\\displaystyle -2$\n\n\n\n\n```python\nmtx.cofactor_matrix()\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}4 & -3\\\\-2 & 1\\end{matrix}\\right]$\n\n\n\n\n```python\nmtx_b\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}3 & 1\\\\2 & 1\\end{matrix}\\right]$\n\n\n\n\n```python\nmtx_b_inv = mtx_b.inv()\n```\n\n\n```python\nmtx_b_inv\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & -1\\\\-2 & 3\\end{matrix}\\right]$\n\n\n\n\n```python\nI = mtx_b * mtx_b_inv\n```\n\n\n```python\nI\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 0\\\\0 & 1\\end{matrix}\\right]$\n\n\n\n\n```python\nI_ = mtx_b_inv * mtx_b\n```\n\n\n```python\nI_\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 0\\\\0 & 1\\end{matrix}\\right]$\n\n\n\n\n```python\nI == I_\n```\n\n\n\n\n True\n\n\n\n\n```python\nmtx_c = Matrix([[3,1,-1], [2,-2,0],[1,2,-1]])\n```\n\n\n```python\nmtx_c\n\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}3 & 1 & -1\\\\2 & -2 & 0\\\\1 & 2 & -1\\end{matrix}\\right]$\n\n\n\n\n```python\ncofactor = mtx_c.cofactor_matrix()\n```\n\n\n```python\ncofactor\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}2 & 2 & 6\\\\-1 & -2 & -5\\\\-2 & -2 & -8\\end{matrix}\\right]$\n\n\n\n\n```python\nmtx_c.cofactorMatrix()\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}2 & 2 & 6\\\\-1 & -2 & -5\\\\-2 & -2 & -8\\end{matrix}\\right]$\n\n\n\n\n```python\nmtx_c.adjoint()\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}3 & 2 & 1\\\\1 & -2 & 2\\\\-1 & 0 & -1\\end{matrix}\\right]$\n\n\n\n\n```python\ncofactor.transpose()\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}2 & -1 & -2\\\\2 & -2 & -2\\\\6 & -5 & -8\\end{matrix}\\right]$\n\n\n\n\n```python\nmtx_c.adjugate()\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}2 & -1 & -2\\\\2 
& -2 & -2\\\\6 & -5 & -8\\end{matrix}\\right]$\n\n\n\n\n```python\na = Matrix([[1,2], [3,4]])\nb = Matrix([[5,6], [7,8]])\n```\n\n\n```python\na\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 2\\\\3 & 4\\end{matrix}\\right]$\n\n\n\n\n```python\nb\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}5 & 6\\\\7 & 8\\end{matrix}\\right]$\n\n\n\n\n```python\nc = a * b\n```\n\n\n```python\nc\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}19 & 22\\\\43 & 50\\end{matrix}\\right]$\n\n\n\n\n```python\nc.det()\n```\n\n\n\n\n$\\displaystyle 4$\n\n\n\n\n```python\na.det() * b.det()\n```\n\n\n\n\n$\\displaystyle 4$\n\n\n\n\n```python\ninv = a.adjugate()/a.det()\n```\n\n\n```python\ninv\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}-2 & 1\\\\\\frac{3}{2} & - \\frac{1}{2}\\end{matrix}\\right]$\n\n\n\n\n```python\na.inv()\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}-2 & 1\\\\\\frac{3}{2} & - \\frac{1}{2}\\end{matrix}\\right]$\n\n\n\n\n```python\na\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 2\\\\3 & 4\\end{matrix}\\right]$\n\n\n\n\n```python\na.rref()\n```\n\n\n\n\n (Matrix([\n [1, 0],\n [0, 1]]),\n (0, 1))\n\n\n\n\n```python\na.echelon_form()\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 2\\\\0 & -2\\end{matrix}\\right]$\n\n\n\n\n```python\na.rref()\n```\n\n\n\n\n (Matrix([\n [1, 0],\n [0, 1]]),\n (0, 1))\n\n\n\n\n```python\nrref = a.rref()\n```\n\n\n```python\nrref\n```\n\n\n\n\n (Matrix([\n [1, 0],\n [0, 1]]),\n (0, 1))\n\n\n\n\n```python\na.rank()\n```\n\n\n\n\n 2\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "2ae849e7e6bf9a1ba656ada7e07551864bba25ee", "size": 21697, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "matrix.ipynb", "max_stars_repo_name": "lgtejas/mfds", "max_stars_repo_head_hexsha": "dc9569a970f963d298dbe56db58b090684558d7b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "matrix.ipynb", "max_issues_repo_name": "lgtejas/mfds", "max_issues_repo_head_hexsha": "dc9569a970f963d298dbe56db58b090684558d7b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "matrix.ipynb", "max_forks_repo_name": "lgtejas/mfds", "max_forks_repo_head_hexsha": "dc9569a970f963d298dbe56db58b090684558d7b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 19.0826737027, "max_line_length": 110, "alphanum_fraction": 0.4387242476, "converted": true, "num_tokens": 1532, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9196425355825847, "lm_q2_score": 0.9124361640485893, "lm_q1q2_score": 0.8391151074628919}} {"text": "# Three faces of classical mechanics\n\n- 1687 - edition of Principia Mathematics by Isaac Newton \n- 1788 - edition of the M\u00e9canique analytique by Joseph Louis Lagrange (Giuseppe Lodovico Lagrangia)\n- 1833 - formulation of mechanics by William Rowand Hamilton \n\n\nThere are three approaches to describe classical mechanical systems. Historically, the first theory was formulated by Newton and is called Netwon Mechanics. The second is Lagrange Mechanics and is convenient for description of systems with constraints. The third is Hamilton Mechanics. 
Lagrange mechanics is a basis for the modern theory of elementary particles. In turn, the Hamilton approach is used in formulation of quantum physics. \n\n## Harmonic Oscillator \n\nWe demostrate three approaches to classical mechanical systems by considering one of the simplest example: a one-dimensional harmonic oscillator (a particle of mass $m$) moving along the x-axis and characterised by the position $x(t)$ at time $t$. We present it in a trivialized way. \n\n### The Newton mechanics \n\n\nIn the Newton description we have to know all forces $F$ which act on the particle of mass $m$ and its dynamics is determined by the Newton second law (the equation of motion): \n\n$$m a =F,$$\nwhere $a=a(t)$ is an acceleration of the particle which is a time-derivative of the particle velocity $v=v(t)$, which in turn in a time-derivative of the particle position (coordinate) $x=x(t)$: \n\n$$a(t) = \\dot{v}(t) = \\frac{dv(t)}{dt}, \\qquad v(t) = \\dot{x}(t) = \\frac{dx(t)}{dt}, \\qquad F = F(x, v, t).$$\nTaking into account the above relations the Newton equation $ma=F$ is in fact a second-order differential equation \nin the form \n\n$$m \\frac{d^2 x}{dt^2} = F(x, \\dot{x}, t)$$\nTherefore two initial conditions have to be imposed. Usually, it is the initial position $x_0=x(0)$ and the initial velocity $v_0=v(0)$ of the particle. \n\nIn a general case, the force $F$ depends on the particle position $x$, its velocity $\\dot x =v$ and explicitly on time $t$ (in the case of an external time-dependent driving as e.g. $A \\cos(B t)$). When $F=F(x)$ then it is conservative system. In such a case we can define a potential energy $U(x)$ of the system by the relations: \n\n$$ U(x) = - \\int F(x) dx \\qquad \\mbox{or} \\qquad F(x) = - \\frac{dU(x)}{dx}$$\n\nFor the harmonic oscillator, the force $F$ is proportional to the partice displacement $x$, namely, \n\n$$F = F(x) =-kx$$\nwhere $k$ is a positive parameter (e.g. $k$ is a measure of the stiffness of the spring for the mass-spring oscillator). The corresponding potential energy $E_p = U(x)$ is \n\n$$ U(x) = \\frac{1}{2} k x^2 $$\ni.e. it is a parabola. The Newton equation for the harmonic oscillator is rewritten in the standadrd form as \n\n$$ \\ddot x + \\omega_0^2 x =0, \\qquad \\mbox{where} \\qquad \\omega_0^2 = \\frac{k}{m}$$\nIts analysis in presented in the next notebook. \n\n### The Lagrange mechanics \n\nIn the Lagrange approach, the mechanical system is described by the scalar function $L$ called the Lagrange function. In the case of conservative systems it is a difference between the kinetic energy $E_k$ and the potential energy $E_p$ of the system, i.e., \n\n$$ L = L(x, v) = E_k - E_p = \\frac{m v^2}{2} - U(x) $$\nFor the harmonic oscillator it reads \n\n$$ L = \\frac{m v^2}{2} - \\frac{1}{2} k x^2 $$ \nDynamics of the system is determined by the Euler-Lagrange equation: \n\n$$ \\frac{d}{dt} \\frac{\\partial L}{\\partial v} - \\frac{\\partial L}{\\partial x} = 0$$\nIt is a counterpart of the Newton equation. \n\n$$ \\frac{\\partial L}{\\partial v} = mv = p $$ \nis the (Newton) momentum $p$ of the particle and \n\n$$\\frac{\\partial L}{\\partial x} = -U'(x) =F$$\nis the force $F$ acting on the particle. As a result, the Euler-Lagrange equation leads to the equation of motion\n\n$$ m \\frac{dv}{dt} = -U'(x) = F$$\nand is the same as the Newton equation because $v = \\dot x$. 
For the harmonic oscillator the potential is \n$U(x)= kx^2/2$, the force is $f(x)=-U'(x)=-kx$ the the Newton equation is \n\n$$ m \\frac{dv}{dt} = -kx .$$\n\nIt has the same form as in the Newton mechanics. \n\n### The Hamilton mechanics\n\nIn the Hamilton approach, the mechanical system is described by the scalar function $H$ called the Hamilton function. In the case of conservative systems, it is a total energy of the system, i.e., \n\n$$ H = H(x, p) = E_k + E_p = \\frac{p^2}{2m} + U(x) $$\nwhere the kinetic energy $E_k$ is expressed by the CANONICAL MOMENTUM $p$ !\n\nFor the harmonic oscillator it reads \n\n$$ H = \\frac{p^2}{2m} + \\frac{1}{2} k x^2 $$ \nDynamics of the system is determined by the Hamilton equations: \n\n$$ \\frac{dx}{dt} = \\frac{\\partial H}{\\partial p}, \\qquad \\frac{dp}{dt} = -\\frac{\\partial H}{\\partial x} $$\nIt is a counterpart of the Newton equation or the Euler-Lagrange equation. For the harmonic oscillator we obtain \n\n$$ \\frac{\\partial H}{\\partial p} = \\frac{p}{m} $$ \nis the velocity $v$ of the particle and \n\n$$\\frac{\\partial H}{\\partial x} = kx = - F$$\nis proportional to the force $F$ acting on the oscillator. As a result, the Hamilton equations lead to the equation of motion in the form \n\n$$ \\frac{dx}{dt} = \\dot x =\\frac{p}{m}, \\qquad \\frac{dp}{dt} = \\dot p = -kx$$\nand are equivalent to the Newton equation or the Euler-Lagrange equation. Indeed, if we differentiate the first equation we get $\\ddot x = \\dot p /m$. Next we insert to it the second equation for $\\dot p$ and finally we obtain $\\ddot x = -k x/m = -\\omega_0^2 x$. \n\n#### REMARK: \n\nWe want to stress that in a general case, the construction of the Lagrange function or the Hamilton function can be a more complicated task. \n\n#### Exercise\n\nSolve the equation of motion for the harmonic oscillator with the mass $m=2$ and the eigen-frequency $\\omega_0 =1$. Find $x(t)$ and $v(t)$ for a given initial conditions: $x(0) =1$ and $v(0) =0.5$. Next, depict the time-evolution of the kinetic energy $E_k(t)$, potential energy $E_p(t)$ and the total energy $E(t) =E_k(t) + E_p(t)$: \n\n$$E_k(t) = \\frac{1}{2} m v^2(t), \\qquad E_p(t) = \\frac{1}{2} m \\omega_0^2 x^2(t)$$\nWhat is the main conclusion regarding the total $E(t)$. \n\n#### Solution\n\nIn SageMath we can write easile solve any system of ODE numerically. First we write Newton equation in a following form:\n\n\\begin{equation}\n\\label{intro_eq:odes4num}\n\\left\\{\n\\begin{array}{l}\n\\displaystyle\\frac{dx}{dt} = v \\\\\n\\displaystyle\\frac{dv}{dt} = - \\omega_0^2 x\n\\end{array}\n\\right.\n\\end{equation}\nThen we use ``desolve_odeint`` to obtain a numerical solution.\n\n\n```python\nm = 2 \nomega0 = 1\nvar('x,v')\ntimes = srange(0,4,0.01)\nxv = desolve_odeint([ v, -omega0^2*x ], [1,0.5] , times, [x,v])\n```\n\nWe can compute $E_k$ and $E_p$:\n\n\n```python\nEk = 1/2*m*xv[:,1]^2\nEp = 1/2*m*omega0^2*xv[:,0]^2\n```\n\nAnd plot the results:\n\n\n```python\np_Ek = line( zip( times, Ek),\\\n legend_label=r'$E_k(t)$', figsize=(6,2))\np_Ep = line( zip( times,Ep ),\\\n color='red',legend_label=r'$E_p(t)$')\np_Etot = line( zip(times,Ek + Ep),\\\n color='green',legend_label=r'$E_{tot}(t) = E_p(t)+E_k(t)$')\np_Ek + p_Ep + p_Etot\n```\n\n## Damped harmonic oscillator \n\n\n### The Newton mechanics \n\n\nA system which interacts with its environment is dissipative (it losses its energy) due to friction. It can be conveniently described by the Newton mechanics. 
For small velocity of the particle, the friction force is proportional to its velocity (the Stokes force) $F=-\\gamma v = -\\gamma \\dot x$, where $\\gamma$ is a friction or damping coefficient. The Newton equation for the damped harmonic oscillator has the form \n\n$$ m\\ddot x = -\\gamma \\dot x -k x.$$ \nIts analysis in presented in the next notebook. \n\n\n### The Lagrange mechanics \n\nIn the Lagrange approach, in this case the Lagrange function is not a difference between the kinetic energy and the potential energy but is constructed in such an artificial way in order to obtain the correct equation of motion presented above. Let us propose the following function: \n\n$$ L = L(x, v, t) = e^{\\gamma t/m} \\left[\\frac{m v^2}{2} - \\frac{1}{2} k x^2\\right] $$ \nThen in the Euler-Lagrange equation: \n\n\n$$ \\frac{\\partial L}{\\partial v} = mv e^{\\gamma t/m} $$ \nIts time derivative is \n\n$$\\frac{d}{dt} \\frac{\\partial L}{\\partial v} = m \\dot v e^{\\gamma t/m} + \\gamma v e^{\\gamma t/m}$$ \nThe second part of the Euler-Lagrange equation is \n\n$$\\frac{\\partial L}{\\partial x} = -kx e^{\\gamma t/m} $$\nAs a result, the final form of the Euler-Lagrange equation is \n\n$$ m\\ddot x + \\gamma \\dot x + k x = 0$$ \nand is the same as in the Newton approach. \n\n\n### The Hamilton mechanics\n\nIn the Hamilton approach, the Hamilton function is in the form \n$$ H = H(x, p, t) = \\frac{p^2}{2m} e^{-\\gamma t/m} + \\frac{1}{2} k x^2 e^{\\gamma t/m}$$ \nThe partial derivatives are \n$$ \\frac{\\partial H}{\\partial p} = \\frac{p}{m} e^{-\\gamma t/m} $$ \nand \n$$\\frac{\\partial H}{\\partial x} = kx e^{\\gamma t/m} $$\nAs a result, the Hamilton equations lead to the equation of motion in the form \n\n$$ \\frac{dx}{dt} = \\frac{p}{m} e^{-\\gamma t/m}, \\qquad \\frac{dp}{dt} = -kx e^{\\gamma t/m} $$\nOne can show that they are equivalent to the Newton equation or the Euler-Lagrange equation. \nHowever, there is one important remark: From the first Hamilton equation it follows that \n\n$$ m v = p e^{-\\gamma t/m}$$\nThe left side is the Newton momentum of the particle. In the right side, $p$ is the canonical momentum. \nThe above equations of motion are investigated in detail in the next notebook. \n\n#### Exercise\n\nSolve the equation of motion for the damped harmonic oscillatora with the mass $m=2$, the friction coefficient $\\gamma = 1$ and the eigen-frequency $\\omega_0 =1$. Find $x(t)$ and $v(t)$ for a given initial conditions: $x(0) =1$ and $v(0) =0.5$. Next depict the time-evolution of the kinetic energy $E_k(t)$, potential energy $E_p(t)$ and the total energy $E(t) =E_k(t) + E_p(t)$: \n\n$$E_k(t) = \\frac{1}{2} m v^2(t), \\qquad E_p(t) = \\frac{1}{2} m \\omega^2 x^2(t)$$\n\nCompare the results with the frictionless case, $\\gamma =0$. 
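\n\nA useful check before simulating: differentiating the total energy along a solution of $m\\ddot x = -\\gamma \\dot x - kx$ (with $k = m\\omega_0^2$) gives\n\n$$\\frac{dE}{dt} = m v \\dot v + m \\omega_0^2 x \\dot x = v(m\\dot v + kx) = -\\gamma v^2 \\leq 0,$$\n\nso with friction the total energy can only decrease, while for $\\gamma = 0$ the same computation gives $dE/dt = 0$ and $E(t)$ stays constant. This is exactly the difference the plots should show.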
\n\n#### Solution\n\nIn this case the Newton equation can be rewritten in the form:\n\n\\begin{equation}\n\\label{intro_eq:ode4num}\n\\left\\{\n\\begin{array}{l}\n\\displaystyle\\frac{dx}{dt} = v \\\\\n\\displaystyle\\frac{dv}{dt} = - \\omega_0^2 x - \\gamma_0 v, \\quad \\gamma_0=\\gamma/m\n\\end{array}\n\\right.\n\\end{equation}\n\nThen we use ``desolve_odeint`` to obtain a numerical solution.\n\n\n```python\nm = 2 \nomega0 = 1\ngamma_ = 1\nvar('x,v')\ntimes = srange(0,4,0.01)\nxv = desolve_odeint([ v, -x-gamma_*v ], [1,0.5] , times, [x,v])\n\n```\n\nWe can compute $E_k$ and $E_p$:\n\n\n```python\nEk = 1/2*m*xv[:,1]^2\nEp = 1/2*m*omega0^2*xv[:,0]^2\n```\n\nAnd plot the results:\n\n\n```python\np_Ek = line( zip( times, Ek),\\\n legend_label=r'$E_k(t)$', figsize=(6,2))\np_Ep = line( zip( times,Ep ),\\\n color='red',legend_label=r'$E_p(t)$')\np_Etot = line( zip(times,Ek + Ep),\\\n color='green',legend_label=r'$E_{tot}(t) = E_p(t)+E_k(t)$')\np_Ek + p_Ep + p_Etot\n```\n\n\\newpage\n", "meta": {"hexsha": "bd18400a07c27c632ba6c030f7dbb77e69e6d652", "size": 15423, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "001-Introduction_to_classical_mechanics.ipynb", "max_stars_repo_name": "marcinofulus/Mechanics_with_SageMath", "max_stars_repo_head_hexsha": "6d13cb2e83cd4be063c9cfef6ce536564a25cf57", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "001-Introduction_to_classical_mechanics.ipynb", "max_issues_repo_name": "marcinofulus/Mechanics_with_SageMath", "max_issues_repo_head_hexsha": "6d13cb2e83cd4be063c9cfef6ce536564a25cf57", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2022-01-30T16:45:58.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-30T16:45:58.000Z", "max_forks_repo_path": "001-Introduction_to_classical_mechanics.ipynb", "max_forks_repo_name": "marcinofulus/Mechanics_with_SageMath", "max_forks_repo_head_hexsha": "6d13cb2e83cd4be063c9cfef6ce536564a25cf57", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-11-15T08:26:14.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-12T13:07:16.000Z", "avg_line_length": 38.6541353383, "max_line_length": 446, "alphanum_fraction": 0.5636387214, "converted": true, "num_tokens": 3385, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9196425289753969, "lm_q2_score": 0.912436159286392, "lm_q1q2_score": 0.8391150970547356}} {"text": "# Simulating a Predator and Prey Relationship\n\nWithout a predator, rabbits will reproduce until they reach the carrying capacity of the land. When coyotes show up, they will eat the rabbits and reproduce until they can't find enough rabbits. We will explore the fluctuations in the two populations over time.\n\n# Using Lotka-Volterra Model\n\n## Part 1: Rabbits without predators\n\nAccording to [Mother Earth News](https://www.motherearthnews.com/homesteading-and-livestock/rabbits-on-pasture-intensive-grazing-with-bunnies-zbcz1504), a rabbit eats six square feet of pasture per day. Let's assume that our rabbits live in a five acre clearing in a forest: 217,800 square feet/6 square feet = 36,300 rabbit-days worth of food. For simplicity, let's assume the grass grows back in two months. Thus, the carrying capacity of five acres is 36,300/60 = 605 rabbits.\n\nFemale rabbits reproduce about six to seven times per year. 
They have six to ten children in a litter. According to [Wikipedia](https://en.wikipedia.org/wiki/Rabbit), a wild rabbit reaches sexual maturity when it is about six months old and typically lives one to two years. For simplicity, let's assume that in the presence of unlimited food, a rabbit lives forever, is immediately sexually mature, and has 1.5 children every month.\n\nFor our purposes, then, let $x_t$ be the number of rabbits in our five acre clearing on month $t$.\n$$\n\\begin{equation*}\n R_t = R_{t-1} + 1.5\\frac{605 - R_{t-1}}{605} R_{t-1}\n\\end{equation*}\n$$\n\nThe formula could be put into general form\n$$\n\\begin{equation*}\n R_t = R_{t-1} + growth_{R} \\times \\big( \\frac{capacity_{R} - R_{t-1}}{capacity_{R}} \\big) R_{t-1}\n\\end{equation*}\n$$\n\nBy doing this, we allow users to interact with growth rate and the capacity value visualize different interaction \n\n\n\n```python\nfrom __future__ import print_function\nfrom ipywidgets import interact, interactive, fixed, interact_manual\nfrom IPython.display import display, clear_output\nimport ipywidgets as widgets\nimport matplotlib.pyplot as plt\nimport numpy as np\n```\n\n\n```python\n%matplotlib inline\nstyle = {'description_width': 'initial'}\ncapacity_R = widgets.FloatText(description=\"Capacity\", value=605)\ngrowth_rate_R = widgets.FloatText(description=\"Growth rate\", value=1.5)\ninitial_R = widgets.FloatText(description=\"Initial population\",style=style, value=1)\nbutton_R = widgets.Button(description=\"Plot Graph\")\ndisplay(initial_R, capacity_R, growth_rate_R, button_R)\n\ndef plot_graph_r(b):\n print(\"helo\")\n clear_output()\n display(initial_R, capacity_R, growth_rate_R, button_R)\n fig = plt.figure()\n ax = fig.add_subplot(111)\n t = np.arange(0, 20, 1)\n s = np.zeros(t.shape)\n R = initial_R.value\n for i in range(t.shape[0]):\n s[i] = R\n R = R + growth_rate_R.value * (capacity_R.value - R)/(capacity_R.value) * R\n if R < 0.0:\n R = 0.0\n \n ax.plot(t, s)\n ax.set(xlabel='time (months)', ylabel='number of rabbits',\n title='Rabbits Without Predators')\n ax.grid()\n\nbutton_R.on_click(plot_graph_r)\n```\n\n**Exercise 1** (1 point). Complete the following functions, find the number of rabbits at time 5, given $x_0$ = 10, population capcity =100, and growth rate = 0.8\n\n\n```python\nR_i = 10\nfor i in range(5):\n R_i = int(R_i + 0.8 * (100 - R_i)/(100) * R_i)\n \nprint(f'There are {R_i} rabbits in the system at time 5')\n\n```\n\n There are 81 rabbits in the system at time 5\n\n\n## Tweaking the Growth Function\nThe growth is regulated by this part of the formula:\n$$\n\\begin{equation*}\n \\frac{capacity_{R} - R_{t-1}}{capacity_{R}}\n\\end{equation*}\n$$\nThat is, this fraction (and thus growth) goes to zero when the land is at capacity. As the number of rabbits goes to zero, this fraction goes to 1.0, so growth is at its highest speed. We could substitute in another function that has the same values at zero and at capacity, but has a different shape. For example, \n$$\n\\begin{equation*}\n \\left( \\frac{capacity_{R} - R_{t-1}}{capacity_{R}} \\right)^{\\beta}\n\\end{equation*}\n$$\nwhere $\\beta$ is a positive number. 
For example, if $\\beta$ is 1.3, it indicates that the rabbits can sense that food supplies are dwindling and pre-emptively slow their reproduction.\n\n\n```python\n#### %matplotlib inline\nimport math\nstyle = {'description_width': 'initial'}\ncapacity_R_2 = widgets.FloatText(description=\"Capacity\", value=605)\ngrowth_rate_R_2 = widgets.FloatText(description=\"Growth rate\", value=1.5)\ninitial_R_2 = widgets.FloatText(description=\"Initial population\",style=style, value=1)\nshaping_R_2 = widgets.FloatText(description=\"Shaping\", value=1.3)\nbutton_R_2 = widgets.Button(description=\"Plot Graph\")\ndisplay(initial_R_2, capacity_R_2, growth_rate_R_2, shaping_R_2, button_R_2)\n\ndef plot_graph_r(b):\n clear_output()\n display(initial_R_2, capacity_R_2, growth_rate_R_2, shaping_R_2, button_R_2) \n fig = plt.figure()\n ax = fig.add_subplot(111)\n t = np.arange(0, 20, 1)\n s = np.zeros(t.shape)\n R = initial_R_2.value\n beta = float(shaping_R_2.value)\n for i in range(t.shape[0]):\n s[i] = R\n reserve_ratio = (capacity_R_2.value - R)/capacity_R_2.value\n if reserve_ratio > 0.0:\n R = R + R * growth_rate_R_2.value * reserve_ratio**beta\n else:\n R = R - R * growth_rate_R_2.value * (-1.0 * reserve_ratio)**beta\n if R < 0.0:\n R = 0\n \n ax.plot(t, s)\n ax.set(xlabel='time (months)', ylabel='number of rabbits',\n title='Rabbits Without Predators (Shaped)')\n ax.grid()\n\nbutton_R_2.on_click(plot_graph_r)\n```\n\n**Exercise 2** (1 point). Repeat Exercise 1, with $\\beta$ = 1.5 Complete the following functions, find the number of rabbits at time 5. Should we expect to see more rabbits or less?\n\n\n```python\nR_i = 10\nb=1.5\nfor i in range(5):\n R_i = int(R_i + 0.8 * ((100 - R_i)/(100))**b * R_i)\n \nprint(f'There are {R_i} rabbits in the system at time 5, less rabbits compare to exercise 1, where beta = 1')\n```\n\n There are 64 rabbits in the system at time 5, less rabbits compare to exercise 1, where beta = 1\n\n\n## Part 2: Coyotes without Prey\nAccording to [Huntwise](https://www.besthuntingtimes.com/blog/2020/2/3/why-you-should-coyote-hunt-how-to-get-started), coyotes need to consume about 2-3 pounds of food per day. Their diet is 90 percent mammalian. The perfect adult cottontail rabbits weigh 2.6 pounds on average. Thus, we assume the coyote eats one rabbit per day. \n\nFor coyotes, the breeding season is in February and March. According to [Wikipedia](https://en.wikipedia.org/wiki/Coyote#Social_and_reproductive_behaviors), females have a gestation period of 63 days, with an average litter size of 6, though the number fluctuates depending on coyote population density and the abundance of food. 
By fall, the pups are old enough to hunt for themselves.\n\nIn the absence of rabbits, the number of coyotes will drop, as their food supply is scarce.\nThe formula could be put into general form:\n\n$$\n\\begin{align*}\n C_t & \\sim (1 - death_{C}) \\times C_{t-1}\\\\\n &= C_{t-1} - death_{C} \\times C_{t-1}\n\\end{align*}\n$$\n\n\n\n\n```python\n%matplotlib inline\nstyle = {'description_width': 'initial'}\ninitial_C=widgets.FloatText(description=\"Initial Population\",style=style,value=200.0)\ndeclining_rate_C=widgets.FloatText(description=\"Death rate\",value=0.5)\nbutton_C=widgets.Button(description=\"Plot Graph\")\ndisplay(initial_C, declining_rate_C, button_C)\n\ndef plot_graph_c(b):\n clear_output()\n display(initial_C, declining_rate_C, button_C)\n fig = plt.figure()\n ax = fig.add_subplot(111)\n t1 = np.arange(0, 20, 1)\n s1 = np.zeros(t1.shape)\n C = initial_C.value\n for i in range(t1.shape[0]):\n s1[i] = C\n C = (1 - declining_rate_C.value)*C\n \n ax.plot(t1, s1)\n ax.set(xlabel='time (months)', ylabel='number of coyotes',\n title='Coyotes Without Prey')\n ax.grid()\n\nbutton_C.on_click(plot_graph_c)\n\n```\n\n\n FloatText(value=200.0, description='Initial Population', style=DescriptionStyle(description_width='initial'))\n\n\n\n FloatText(value=0.5, description='Death rate')\n\n\n\n Button(description='Plot Graph', style=ButtonStyle())\n\n\n**Exercise 3** (1 point). Assume the system has 100 coyotes at time 0, the death rate is 0.5 if there are no prey. At what point in time, coyotes become extinct.\n\n\n```python\nti = 0\ncoyotes_init = 100\nc_i = coyotes_init\nd_r = 0.5\nwhile c_i > 10:\n c_i= int((1 - d_r)*c_i)\n ti =ti + 1\nprint(f'At time t={ti}, the coyotes become extinct') \n```\n\n At time t=4, the coyotes become extinct\n\n\n## Part 3: Interaction Between Coyotes and Rabbit\nWith the simple interaction from the first two parts, now we can combine both interaction and come out with simple interaction.\n$$\n\\begin{align*}\n R_t &= R_{t-1} + growth_{R} \\times \\big( \\frac{capacity_{R} - R_{t-1}}{capacity_{R}} \\big) R_{t-1} - death_{R}(C_{t-1})\\times R_{t-1}\\\\\\\\\n C_t &= C_{t-1} - death_{C} \\times C_{t-1} + growth_{C}(R_{t-1}) \\times C_{t-1}\n\\end{align*}\n$$\n\nIn equations above, death rate of rabbit is a function parameterized by the amount of coyote. Similarly, the growth rate of coyotes is a function parameterized by the amount of the rabbit.\n\nThe death rate of the rabbit should be $0$ if there are no coyotes, while it should approach $1$ if there are many coyotes. One of the formula fulfilling this characteristics is hyperbolic function.\n\n$$\n\\begin{equation}\ndeath_R(C) = 1 - \\frac{1}{xC + 1}\n\\end{equation}\n$$\n\nwhere $x$ determines how quickly $death_R$ increases as the number of coyotes ($C$) increases. Similarly, the growth rate of the coyotes should be $0$ if there are no rabbits, while it should approach infinity if there are many rabbits. One of the formula fulfilling this characteristics is a linear function.\n\n$$\n\\begin{equation}\ngrowth_C(R) = yC\n\\end{equation}\n$$\n\nwhere $y$ determines how quickly $growth_C$ increases as number of rabbit ($R$) increases.\n\nPutting all together, the final equtions are\n\n$$\n\\begin{align*}\n R_t &= R_{t-1} + growth_{R} \\times \\big( \\frac{capacity_{R} - R_{t-1}}{capacity_{R}} \\big) R_{t-1} - \\big( 1 - \\frac{1}{xC_{t-1} + 1} \\big)\\times R_{t-1}\\\\\\\\\n C_t &= C_{t-1} - death_{C} \\times C_{t-1} + yR_{t-1}C_{t-1}\n\\end{align*}\n$$\n\n\n\n**Exercise 4** (3 point). 
The model we have created above is a variation of the Lotka-Volterra model, which describes various forms of predator-prey interactions. Complete the following functions, which should generate the state variables plotted over time. Blue = prey, Orange = predators. \n\n\n```python\n%matplotlib inline\ninitial_rabbit = widgets.FloatText(description=\"Initial Rabbit\",style=style, value=1)\ninitial_coyote = widgets.FloatText(description=\"Initial Coyote\",style=style, value=1)\ncapacity = widgets.FloatText(description=\"Capacity rabbits\", style=style,value=5)\ngrowth_rate = widgets.FloatText(description=\"Growth rate rabbits\", style=style,value=1)\ndeath_rate = widgets.FloatText(description=\"Death rate coyotes\", style=style,value=1)\nx = widgets.FloatText(description=\"Death rate ratio due to coyote\",style=style, value=1)\ny = widgets.FloatText(description=\"Growth rate ratio due to rabbit\",style=style, value=1)\nbutton = widgets.Button(description=\"Plot Graph\")\ndisplay(initial_rabbit, initial_coyote, capacity, growth_rate, death_rate, x, y, button)\ndef plot_graph(b):\n clear_output()\n display(initial_rabbit, initial_coyote, capacity, growth_rate, death_rate, x, y, button)\n fig = plt.figure()\n ax = fig.add_subplot(111)\n t = np.arange(0, 20, 0.5)\n s = np.zeros(t.shape)\n p = np.zeros(t.shape)\n R = initial_rabbit.value\n C = initial_coyote.value\n for i in range(t.shape[0]):\n s[i] = R\n p[i] = C\n R = R + growth_rate.value * (capacity.value - R)/(capacity.value) * R - (1 - 1/(x.value*C + 1))*R\n C = C - death_rate.value * C + y.value*s[i]*C\n \n ax.plot(t, s, label=\"rabit\")\n ax.plot(t, p, label=\"coyote\")\n ax.set(xlabel='time (months)', ylabel='population size',\n title='Coyotes-Rabbit (Predator-Prey) Relationship')\n ax.grid()\n ax.legend()\n\nbutton.on_click(plot_graph)\n```\n\nThe system shows an oscillatory behavior. Let's try to verify the nonlinear oscillation in phase space visualization.\n\n\n## Part 4: Trajectories and Direction Fields for a system of equations \n\nTo further demonstrate the predator numbers rise and fall cyclically with their preferred prey, we will be using the Lotka-Volterra equations, which is based on differential equations. The Lotka-Volterra Prey-Predator model involves two equations, one describes the changes in number of preys and the second one decribes the changes in number of predators. The dynamics of the interaction between a rabbit population $R_t$ and a coyotes $C_t$ is described by the following differential equations:\n$$\n\\begin{align*}\n\\frac{dR}{dt} = aR_t - bR_tC_t\n\\end{align*}\n$$\n\n$$\n\\begin{align*}\n\\frac{dC}{dt} = bdR_tC_t - cC_t\n\\end{align*}\n$$\n\nwith the following notations:\n\nR$_t$: number of preys(rabbits)\n\nC$_t$: number of predators(coyotes)\n\na: natural growing rate of rabbits, when there is no coyotes\n\nb: natural dying rate of rabbits, which is killed by coyotes per unit of time\n\nc: natural dying rate of coyotes, when ther is not rabbits\n\nd: natural growing rate of coyotes with which consumed prey is converted to predator\n\nWe start from defining the system of ordinary differential equations, and then find the equilibrium points for our system. Equilibrium occurs when the frowth rate is 0, and we can see that we have two equilibrium points in our example, the first one happens when theres no preys or predators, which represents the extinction of both species, the second equilibrium happens when $R_t=\\frac{c}{b d}$ $C_t=\\frac{a}{b}$. 
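\n\nThis follows directly from setting both rates of change to zero:\n\n$$0 = aR_t - bR_tC_t = R_t(a - bC_t), \\qquad 0 = bdR_tC_t - cC_t = C_t(bdR_t - c),$$\n\nwhich is satisfied either by $R_t = C_t = 0$ (extinction of both species) or by $C_t = \\frac{a}{b}$ and $R_t = \\frac{c}{bd}$ (coexistence).\n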
Move on, we will use the scipy to help us integrate the differential equations, and generate the plot of evolution for both species:\n\n\n**Exercise 5** (3 point). As we can tell from the simulation results of predator-prey model, the system shows an oscillatory behavior. Find the equilibrium points of the system and generate the phase space visualization to demonstrate the oscillation seen previously is nonlinear with distorted orbits.\n\n\n```python\nfrom scipy import integrate\n\n#using the same input number from the previous example\ninput_a = widgets.FloatText(description=\"a\",style=style, value=1)\ninput_b = widgets.FloatText(description=\"b\",style=style, value=1)\ninput_c = widgets.FloatText(description=\"c\",style=style, value=1)\ninput_d = widgets.FloatText(description=\"d\",style=style, value=1)\n# Define the system of ODEs\n# P[0] is prey, P[1] is predator\ndef dP_dt(P,t=0):\n return np.array([a*P[0]-b*P[0]*P[1], d*b*P[0]*P[1]-c*P[1]])\n\nbutton_draw_trajectories = widgets.Button(description=\"Plot Graph\")\ndisplay(input_a, input_b, input_c, input_d, button_draw_trajectories)\n\ndef plot_trajectories(graph):\n global a, b, c, d, eq1, eq2\n clear_output()\n display(input_a, input_b, input_c, input_d, button_draw_trajectories)\n a = input_a.value\n b = input_b.value\n c = input_c.value\n d = input_d.value\n # Define the Equilibrium points\n eq1 = np.array([0. , 0.])\n eq2 = np.array([c/(d*b),a/b])\n values = np.linspace(0.1, 3, 10)\n # Colors for each trajectory\n vcolors = plt.cm.autumn_r(np.linspace(0.1, 1., len(values)))\n f = plt.figure(figsize=(10,6))\n t = np.linspace(0, 150, 1000)\n for v, col in zip(values, vcolors):\n # Starting point \n P0 = v*eq2\n P = integrate.odeint(dP_dt, P0, t)\n plt.plot(P[:,0], P[:,1],\n lw= 1.5*v, # Different line width for different trajectories\n color=col, label='P0=(%.f, %.f)' % ( P0[0], P0[1]) )\n ymax = plt.ylim(bottom=0)[1]\n xmax = plt.xlim(left=0)[1]\n nb_points = 20\n x = np.linspace(0, xmax, nb_points)\n y = np.linspace(0, ymax, nb_points)\n X1,Y1 = np.meshgrid(x, y) \n DX1, DY1 = dP_dt([X1, Y1]) \n M = (np.hypot(DX1, DY1)) \n M[M == 0] = 1. \n DX1 /= M \n DY1 /= M\n plt.title('Trajectories and direction fields')\n Q = plt.quiver(X1, Y1, DX1, DY1, M, pivot='mid', cmap=plt.cm.plasma)\n plt.xlabel('Number of rabbits')\n plt.ylabel('Number of coyotes')\n plt.legend()\n plt.grid()\n plt.xlim(0, xmax)\n plt.ylim(0, ymax)\n print(f\"\\n\\nThe equilibrium pointsof the system are:\", list(eq1), list(eq2))\n plt.show() \n \nbutton_draw_trajectories.on_click(plot_trajectories)\n\n```\n\nThe model here is described in continuous differential equations, thus there is no jump or intersections between the trajectories.\n\n\n\n## Part 5: Multiple Predators and Preys Relationship\n\nThe previous relationship could be extended to multiple predators and preys relationship\n\n**Exercise 6** (3 point). Develop a discrete-time mathematical model of four species, and each two of them competing for the same resource, and simulate its behavior. 
Plot the simulation results.\n\n\n```python\n%matplotlib inline\ninitial_rabbit2 = widgets.FloatText(description=\"Initial Rabbit\", style=style,value=2)\ninitial_coyote2 = widgets.FloatText(description=\"Initial Coyote\",style=style, value=2)\ninitial_deer2 = widgets.FloatText(description=\"Initial Deer\", style=style,value=1)\ninitial_wolf2 = widgets.FloatText(description=\"Initial Wolf\", style=style,value=1)\npopulation_capacity = widgets.FloatText(description=\"capacity\",style=style, value=10)\npopulation_capacity_rabbit = widgets.FloatText(description=\"capacity rabbit\",style=style, value=3)\ngrowth_rate_rabbit = widgets.FloatText(description=\"growth rate rabbit\",style=style, value=1)\ndeath_rate_coyote = widgets.FloatText(description=\"death rate coyote\",style=style, value=1)\ngrowth_rate_deer = widgets.FloatText(description=\"growth rate deer\",style=style, value=1)\ndeath_rate_wolf = widgets.FloatText(description=\"death rate wolf\",style=style, value=1)\nx1 = widgets.FloatText(description=\"death rate ratio due to coyote\",style=style, value=1)\ny1 = widgets.FloatText(description=\"growth rate ratio due to rabbit\", style=style,value=1)\nx2 = widgets.FloatText(description=\"death rate ratio due to wolf\",style=style, value=1)\ny2 = widgets.FloatText(description=\"growth rate ratio due to deer\", style=style,value=1)\nplot2 = widgets.Button(description=\"Plot Graph\")\ndisplay(initial_rabbit2, initial_coyote2,initial_deer2, initial_wolf2, population_capacity, \n population_capacity_rabbit, growth_rate_rabbit, growth_rate_deer, death_rate_coyote,death_rate_wolf,\n x1, y1,x2, y2, plot2)\ndef plot_graph(b):\n clear_output()\n display(initial_rabbit2, initial_coyote2,initial_deer2, initial_wolf2, population_capacity, \n population_capacity_rabbit, growth_rate_rabbit, growth_rate_deer, death_rate_coyote,death_rate_wolf,\n x1, y1,x2, y2, plot2)\n fig = plt.figure()\n ax = fig.add_subplot(111)\n t_m = np.arange(0, 20, 0.5)\n r_m = np.zeros(t_m.shape)\n c_m = np.zeros(t_m.shape)\n d_m = np.zeros(t_m.shape)\n w_m = np.zeros(t_m.shape)\n R_m = initial_rabbit2.value\n C_m = initial_coyote2.value\n D_m = initial_deer2.value\n W_m = initial_wolf2.value\n population_capacity_deer = population_capacity.value - population_capacity_rabbit.value\n for i in range(t_m.shape[0]):\n r_m[i] = R_m\n c_m[i] = C_m\n d_m[i] = D_m\n w_m[i] = W_m\n \n R_m = R_m + growth_rate_rabbit.value * (population_capacity_rabbit.value - R_m)\\\n /(population_capacity_rabbit.value) * R_m - (1 - 1/(x1.value*C_m + 1))*R_m - (1 - 1/(x2.value*W_m + 1))*R_m \n D_m = D_m + growth_rate_deer.value * (population_capacity_deer - D_m) \\\n /(population_capacity_deer) * D_m - (1 - 1/(x1.value*C_m + 1))*D_m - (1 - 1/(x2.value*W_m + 1))*D_m\n \n C_m = C_m - death_rate_coyote.value * C_m + y1.value*r_m[i]*C_m + y2.value*d_m[i]*C_m\n W_m = W_m - death_rate_wolf.value * W_m + y1.value*r_m[i]*W_m + y2.value*d_m[i]*W_m\n \n ax.plot(t_m, r_m, label=\"rabit\")\n ax.plot(t_m, c_m, label=\"coyote\")\n ax.plot(t_m, d_m, label=\"deer\")\n ax.plot(t_m, w_m, label=\"wolf\")\n ax.set(xlabel='time (months)', ylabel='population',\n title='Multiple Predator Prey Relationship')\n ax.grid()\n ax.legend()\n\nplot2.on_click(plot_graph)\n```\n", "meta": {"hexsha": "d735aa0c7a59329143225a3a885dabc774601667", "size": 303489, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "src/Lotka-Volterra Model.ipynb", "max_stars_repo_name": "hillegass/complex-sim", "max_stars_repo_head_hexsha": "acfd3849c19fa3361788a6e8f96ce76ca64be613", 
"max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-03-05T20:57:14.000Z", "max_stars_repo_stars_event_max_datetime": "2020-03-17T00:45:54.000Z", "max_issues_repo_path": "src/Lotka-Volterra Model.ipynb", "max_issues_repo_name": "hillegass/complex-sim", "max_issues_repo_head_hexsha": "acfd3849c19fa3361788a6e8f96ce76ca64be613", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/Lotka-Volterra Model.ipynb", "max_forks_repo_name": "hillegass/complex-sim", "max_forks_repo_head_hexsha": "acfd3849c19fa3361788a6e8f96ce76ca64be613", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 238.2174254317, "max_line_length": 160504, "alphanum_fraction": 0.910016508, "converted": true, "num_tokens": 5675, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9196425267730008, "lm_q2_score": 0.9124361580958427, "lm_q1q2_score": 0.83911509395031}} {"text": "Introducci\u00f3n a la computaci\u00f3n num\u00e9rica y simb\u00f3lica con Python \n\n\n\n# 8. C\u00e1lculo simb\u00f3lico\n\nCon `SymPy`, podemos derivar, integrar o resolver ecuaciones diferenciales de forma analitica, de forma muy parecida a c\u00f3mo lo hacemos con papel y l\u00e1piz, pero con la ventaja de que el ordenador nos ofrece la soluci\u00f3n.\n\n## L\u00edmites\n\nSymPy ofrece la funci\u00f3n `limit` para realizar esta operaci\u00f3n. Empecemos por un caso sencillo.\n\n$\\lim_{x \\to 1} \\frac{x-1}{x+1}$\n\n\n```python\nimport numpy as np\nimport scipy as sci\nimport matplotlib.pyplot as plt\nimport sympy as sp\nsp.init_printing() # Esto es para que las expresiones se impriman bonitas\n\nx, expresion = sp.symbols('x expresion') # Siempre hay que definir los s\u00edmbolos antes de usarlos\n\nexpresion = (x-1)/(x+1)\nsp.pprint(expresion)\nlimite_1 = sp.limit(expresion,x,1)\nprint()\nprint(\"El l\u00edmite cuando tiende a 1 es\",limite_1)\n```\n\n x - 1\n \u2500\u2500\u2500\u2500\u2500\n x + 1\n \n El l\u00edmite cuando tiende a 1 es 0\n\n\nTambi\u00e9n puede calcularse el l\u00edmite cuando la varaible independiente tiende a infinito de la siguiente forma\n\n\n```python\nlimite_inf = sp.limit(expresion,x,sp.oo)\nprint(\"El l\u00edmite cuando tiende a infinito es\",limite_inf)\n```\n\n El l\u00edmite cuando tiende a infinito es 1\n\n\nProbemos con otro l\u00edmite m\u00e1s complicado $\\lim_{x \\to 0} \\frac{\\sin(x)}{x}$, para el que sabemos que hay que aplicar la regla de L'H\u00f4pital.\n\n\n$\\lim_{x \\to 0} \\frac{d\\sin(x)/dx}{dx/dx} = \\lim_{x \\to 0} \\cos(x) = 1$. SymPy lo hace por nosotros sin tener que especificarlo.\n\nNota hist\u00f3rica. La regla la descubri\u00f3 Johann Bernouilli, pero l'H\u00f4pital se la compr\u00f3 y la incluy\u00f3 en su libro sobre c\u00e1lculo diferencial en 1704.\n\n\n\n\n```python\nexpresion = sp.sin(x)/x\nsp.pprint(expresion)\n\nlimite_0 = sp.limit(expresion,x,0)\nprint()\nprint(\"El l\u00edmite cuando tiende a 0 es \",limite_0)\n```\n\n sin(x)\n \u2500\u2500\u2500\u2500\u2500\u2500\n x \n \n El l\u00edmite cuando tiende a 0 es 1\n\n\nCalcular los l\u00edmites siguientes:\n \n$\\lim_{x \\to \\infty} (\\frac{x+3}{x})^x$\n\n$\\lim_{x \\to 5} { {2-\\sqrt{x-1}}\\over{x^2-25} }$\n\n$\\lim_{x \\to \\infty} x(\\sqrt{x^2+1}-x)$\n\nPrimero lo hacemos anal\u00edticamente. 
\n\n$\\lim_{x \\to \\infty} (\\frac{x+3}{x})^x$. Es una indeterminaci\u00f3n del tipo $1^\\infty$. Tenemos que convertir la expresi\u00f3n en una variante de $\\lim_{x \\to \\infty} (1+\\frac{1}{x})^x = e$. Es sencillo, haciendo el cambio de variable $y = x/3$.\n\n$\\lim_{x \\to \\infty} (\\frac{x+3}{x})^x = \\lim_{x \\to \\infty} (1+\\frac{1}{x/3})^x = \\lim_{y \\to \\infty} (1+\\frac{1}{y})^{3y} =(\\lim_{y \\to \\infty} (1+\\frac{1}{y})^{y})^3=e^3$.\n\n\n$\\lim_{x \\to 5} { {2-\\sqrt{x-1}}\\over{x^2-25} }$ es una indeterminaci\u00f3n $\\frac{\\infty}{\\infty}$. Para resolverla, racionalizamos y simplificamos.\n\n$\\lim_{x \\to 5} { {2-\\sqrt{x-1}}\\over{x^2-25} } = \\lim_{x \\to 5} { {(2-\\sqrt{x-1})(2+\\sqrt{x-1})}\\over{(x^2-25)(2+\\sqrt{x-1})}} = \\lim_{x \\to 5} { {4-(x-1)}\\over{(x-5)(x+5)(2+\\sqrt{x-1})}} = \\lim_{x \\to 5} {-1\\over{(x+5)(2+\\sqrt{x-1})}} = {-1\\over(10\\times 4)} = \\frac{-1}{40}$\n\n\n$\\lim_{x \\to \\infty} x(\\sqrt{x^2+1}-x)$ es un indeterminaci\u00f3n de la forma $\\infty\\times(\\infty-\\infty)$. Multipicamos numerador y denominador por la expresi\u00f3n $\\sqrt{x^2+1}+x$ y simplificamos.\n\n$\\lim_{x \\to \\infty} {{x(\\sqrt{x^2+1}-x)(\\sqrt{x^2+1}+x)}\\over{\\sqrt{x^2+1}+x}}$ = \n$\\lim_{x \\to \\infty} {{x(x^2+1-x^2)}\\over{\\sqrt{x^2+1}+x}}$ =\n$\\lim_{x \\to \\infty} {{x}\\over{\\sqrt{x^2+1}+x}}$ = $\\lim_{x \\to \\infty} {{1}\\over{\\sqrt{1+1/x^2}+1}} = \\frac{1}{2}$\n\nAhora dejamos que SymPy lo haga por nosotros.\n\n\n```python\nexpresion = ((x+3)/x)**x\nprint(expresion)\nlimite_inf = sp.limit(expresion,x,sp.oo)\nprint(\"El l\u00edmite cuando x tiende a infinito es \",limite_inf)\n\nexpresion = (2-sp.sqrt(x-1))/(x**2-25)\nprint(expresion)\nlimite_5 = sp.limit(expresion,x,5)\nprint(\"El l\u00edmite cuando x tiende a infinito es \",limite_5)\n\nexpresion = x*(sp.sqrt(x**2+1)-x)\nprint(expresion)\nlimite_inf = sp.limit(expresion,x,sp.oo)\nprint(\"El l\u00edmite cuando x tiende a infinito es \",limite_inf)\n```\n\n ((x + 3)/x)**x\n El l\u00edmite cuando x tiende a infinito es exp(3)\n (2 - sqrt(x - 1))/(x**2 - 25)\n El l\u00edmite cuando x tiende a infinito es -1/40\n x*(-x + sqrt(x**2 + 1))\n El l\u00edmite cuando x tiende a infinito es 1/2\n\n\nSi la funci\u00f3n tiende a infinito, SymPy nos avisa.\n\n$\\lim_{x \\to 0} \\frac{1}{|x|}$\n\n\n```python\nexpresion = 1/sp.Abs(x)\nsp.pprint(expresion)\n\nlimite_0 = sp.limit(expresion,x,0)\nprint()\nprint(\"El l\u00edmite cuando x tiende a 0 es\",limite_0)\n```\n\n 1 \n \u2500\u2500\u2500\n \u2502x\u2502\n \n El l\u00edmite cuando x tiende a 0 es oo\n\n\nCuando los l\u00edmites laterales son diferentes, SymPy devuelve por defecto el l\u00edmite por la derecha, pero podemos especificar que l\u00edmite lateral queremos encontrar.\n\n$\\lim_{x \\to 0+} \\frac{|x|}{x} = 1$\n\n$\\lim_{x \\to 0-} \\frac{|x|}{x} = -1$\n\n\n```python\nexpresion = sp.Abs(x)/x\nsp.pprint(expresion)\n\nlimite_0 = sp.limit(expresion,x,0)\nprint()\nprint(\"El l\u00edmite cuando x tiende a 0 es\",limite_0)\n\nlimite_0plus = sp.limit(expresion,x,0,'+')\nprint(\"El l\u00edmite cuando x tiende a 0 por la derecha es\",limite_0plus)\n\nlimite_0minus = sp.limit(expresion,x,0,'-')\nprint(\"El l\u00edmite cuando x tiende a 0 por la izquierda es\",limite_0minus)\n```\n\n \u2502x\u2502\n \u2500\u2500\u2500\n x \n \n El l\u00edmite cuando x tiende a 0 es 1\n El l\u00edmite cuando x tiende a 0 por la derecha es 1\n El l\u00edmite cuando x tiende a 0 por la izquierda es -1\n\n\n## Derivaci\u00f3n\n\nPara derivar una expresi\u00f3n algebraica con SymPy 
basta aplicar la funci\u00f3n `diff()` o el m\u00e9todo del mismo nombre.\n\n$\\frac{d}{dt} (8t^2+3t+sin(2t)) = 16t^2+3+2cos(2t)$\n\n\n```python\nt = sp.symbols('t')\nexpresion = 8*t**2+3*t+sp.sin(2*t)\nsp.pprint(expresion)\nprint(\"df/dt = \",end = '')\nderiv = sp.diff(expresion)\nsp.pprint(deriv)\nprint(\"f'(0)=\",(deriv.subs(t,0)).evalf(3))\n```\n\n 2 \n 8\u22c5t + 3\u22c5t + sin(2\u22c5t)\n df/dt = 16\u22c5t + 2\u22c5cos(2\u22c5t) + 3\n f'(0)= 5.00\n\n\n\n```python\nexprime, fprime = sp.symbols('exprime fprime')\nt = np.linspace(0,2*np.pi,1000) # Eje de tiempos\nexpresionmat = sp.sin(x) # \u00a1OJO, usamos sp.sin, no np.sin! expresionmat no es una lista de valores\nprint(\"expresionmat es\",expresionmat) # sino una expresi\u00f3n SymPy\nf = sp.lambdify(x, expresionmat, \"numpy\") # Convierte la expresi\u00f3n simb\u00f3lica a la funciones equivalentes de de NumPy\nplt.plot(t,f(t),label=\"f(t)\") # Aqu\u00ed se calcula la lista de valores en los instantes expecificados\nexprime = sp.diff(expresionmat) # Diferenciamos la expresi\u00f3n\nprint(\"y su derivada es\",exprime)\nfprime = sp.lambdify(x, exprime, \"numpy\")\nplt.title(expresionmat)\nplt.plot(t,fprime(t),label=\"f'(t)\")\nplt.legend()\n```\n\nPodemos aplicar `diff` repetidas veces para calcular derivadas de orden superior.\n\n\n```python\nexprime2, fprime2 = sp.symbols('exprime2 fprime2')\nt = np.linspace(0,4*np.pi,1000) \nexpresionmat = sp.cos(x)-0.2*sp.sin(2*x)\nf = sp.lambdify(x, expresionmat, \"numpy\") # Convierte la expresi\u00f3n simb\u00f3lica a la funciones equivalentes de de NumPy\nplt.figure(figsize=(8,5))\nplt.plot(t,f(t),label=\"f(t)\") \nexprime = sp.diff(expresionmat)\nfprime = sp.lambdify(x, exprime, \"numpy\")\nplt.title(expresionmat)\nplt.plot(t,fprime(t),label=\"f'(t)\")\nexprime2 = sp.diff(expresionmat,x,2)\nfprime2 = sp.lambdify(x, exprime2, \"numpy\")\nplt.plot(t,fprime2(t),label=\"f''(t)\")\nplt.legend()\n```\n\n## Aproximaci\u00f3n local\n\nSymPy dispone de la funci\u00f3n `series` para obtener el desarrollo de Taylor.\n\n\n```python\nexpr = sp.exp(x)\nprint(expr)\nprint(\"Aproximaci\u00f3n por serie de Taylor\")\nst = expr.series(x, 0, 6)\nst\n```\n\n\n```python\n# C\u00e1lculo de la aproximaci\u00f3n para x=1, es decir, e\naprox = st.removeO().subs(x, 1).evalf(8)\naprox\n```\n\nAhora representamos en una gr\u00e1fica la aproximaxi\u00f3n local, frente a la funci\u00f3n $e^x$ en el entorno de $x=1$.\n\n\n```python\nxx = np.linspace(0,3,100)\nyy = []\nfor i in xx:\n expr = st.removeO()\n yy.append(expr.subs(x, i).evalf(8))\nplt.figure(figsize=(8,5))\nplt.title(\"Aproximaci\u00f3n de exp(x)\")\nplt.plot(xx,yy,label=\"Taylor\") \nplt.plot(xx,np.exp(xx),label=\"exp(x)\")\nplt.legend()\n```\n\n## Integraci\u00f3n\n\nSympy dispode de la funci\u00f3n `integrate` para calcular tanto integrales indefinidas como definidas.\n\n$F(t) = \\int te^tdt$\n\n\n```python\nt = sp.symbols('t')\nintegral = sp.symbols('integral')\nexpresion = t*sp.exp(t)\nintegral = sp.integrate(expresion, t)\nprint (\"La integral indefinida de\")\nsp.pprint(expresion)\nprint (\"es\")\nsp.pprint(integral)\n```\n\n La integral indefinida de\n t\n t\u22c5\u212f \n es\n t\n (t - 1)\u22c5\u212f \n\n\nEl c\u00e1lculo de una integral definida se hace de la misma manera indicando los l\u00edmites de integraci\u00f3n\n\n$\\int _{0}^{2}x^2dx$\n\n\n```python\nexpresion = x**2\nsp.pprint(expresion)\nprint (\"La integral entre 0 y 2 vale\", sp.integrate(expresion, (x,0,2)))\n```\n\n 2\n x \n La integral entre 0 y 2 vale 8/3\n\n\nEn la lecci\u00f3n de 
integraci\u00f3n num\u00e9rica calculamos el volumen encerrado por la superficie $e^{\\sqrt{x^2+y^2}}$ dentro del c\u00edrculo de radio $1$ como $2\\pi \\int_{0}^{1}re^r dr = 2\\pi (re^r - e^r) \\Big|_0^1 = 2\\pi$\n\n\n\n```python\n# El mismo c\u00e1lculo con SymPy\nexpresion = 2*sp.pi*t*sp.exp(t)\nintegral = sp.integrate(expresion, (t,0,1))\nprint (\"El volumen entre la superficie y el plano z=1\")\nsp.pprint(expresion)\nprint (\"dentro del c\u00edrculo de radio unidad es\")\nsp.pprint(integral)\n```\n\n El volumen entre la superficie y el plano z=1\n t\n 2\u22c5\u03c0\u22c5t\u22c5\u212f \n dentro del c\u00edrculo de radio unidad es\n 2\u22c5\u03c0\n\n\nSymPy es capaz de resolver integrales impropias como la integral de Dirichlet de la funci\u00f3n $sinc(x)$\n\n$\\int _{0}^{\\infty}\\frac{sin(x)}{x} dx$\n\n\n```python\nexpresion = sp.sin(x)/x\nsp.pprint(expresion)\nprint (\"La integral entre 0 e infinito vale\", sp.integrate(expresion, (x,0,sp.oo)))\n```\n\n sin(x)\n \u2500\u2500\u2500\u2500\u2500\u2500\n x \n La integral entre 0 e infinito vale pi/2\n\n\nLa siguiente integral muestra que SymPy es nivel superh\u00e9roe en c\u00e1lculo.\n\n\n```python\nx, y= sp.symbols(\"x y\")\nsp.integrate(x**(y-1)*sp.exp(-x), (x, 0, sp.oo))\n```\n\nEn efecto, $\\Gamma(y) = \\int_0^\\infty x^{y-1} e^{-x}\\,dx \\,\\!$. La funci\u00f3n gamma fue definida por Adrien-Marie Legendre y si $n$ es un entero positivo se cumple que $\\Gamma(n) = (n-1)!$\n\n\n```python\nsp.gamma(8)\n```\n\n\n```python\nsci.math.factorial(7)\n```\n\nSymPy tambi\u00e9n permite calcular integrales m\u00faltiples, aunque en este curso de introducci\u00f3n solo hemos visto un ejemplo muy sencillo de c\u00e1lculo de varias variables. Hallar el volumen definido por el tri\u00e1ngulo definido por el eje $X$ entre $0$ y $1$, la recta $y=x$ y la superficie $f(x,y) = xy$.\n\n$\\int_{0}^{1}\\int_{0}^{x}xydxdy = \\int_{0}^{1} x\\frac{y^2}{2}\\Big|_0^x =\\int_{0}^{1} \\frac{x^3}{2} = \\frac{x^4}{8}\\Big|_0^x = \\frac{1}{8}$ \n\n\n\n```python\nfrom sympy import *\n\nf = y*x\nprint(\"Integral entre x(0,1) con y=x de \",end='')\nsp.pprint(f)\nres = sp.integrate(f, (y, 0, x), (x, 0, 1))\nprint(res)\n```\n\n Integral entre x(0,1) con y=x de x\u22c5y\n 1/8\n\n\n## Ecuaciones diferenciales\n\nSympy resuelve ecuaciones diferenciales de forma simb\u00f3lica. Comenzamos por la ecuaci\u00f3n que modela el proceso de disminuci\u00f3n exponencial de una magnitud $x$ que ya vimos que es $\\frac{dN(t)}{dt}=-kN$\n\n\n```python\n# Definici\u00f3n de la ecuaci\u00f3n diferencial\n\nk = sp.symbols('k')\nN = sp.symbols('N', cls=sp.Function)\ndiffeq = sp.Eq(N(t).diff(t), -k*N(t))\ndiffeq\n```\n\n\n```python\n# Resoluci\u00f3n de la ecuaci\u00f3n\n\nsoln = sp.dsolve(diffeq,N(t))\nsoln\n```\n\nAplicaci\u00f3n de las condiciones iniciales. 
Supongamos que N(0)= 1000\n\n\n\n```python\nconstantes = sp.solve([soln.rhs.subs(t,0) - 1000])\nconstantes\n\n```\n\n\n```python\nC1 = sp.symbols('C1')\nsoln = soln.subs(constantes)\nsoln\n```\n\n\n```python\nimport math\nsoln = soln.subs(k,0.1)\nprint(soln.rhs)\nfunc = sp.lambdify(t, soln.rhs, \"math\") # Convierte la expresi\u00f3n simb\u00f3lica a la funciones equivalentes de de NumPy\nxpuntos = np.linspace(0,50,1000)\nypuntos = []\nfor i in xpuntos:\n ypuntos.append(func(i)) \nplt.figure(figsize=(8,5)) # tama\u00f1o de la gr\u00e1fica en pulgadas\nplt.plot(xpuntos,ypuntos)\nplt.title(soln.rhs)\nplt.xlabel('$x$') # t\u00edtulo del eje horizontal\nplt.ylabel('$y$') # t\u00edtulo del eje vertical\nplt.show()\n```\n\nEl procedimiento paso a paso es muy similar al que seguimos cuando resolvemos a mano este tipo de problemas. La resoluci\u00f3n num\u00e9rica de este mismo caso es mucho m\u00e1s r\u00e1pida en tiempo y c\u00f3digo.\n\n\n```python\nfrom scipy.integrate import solve_ivp\ninstantes = np.linspace(0, 50)\n# El tercer par\u00e1metro indica los puntos de interpolaci\u00f3n de la soluci\u00f3n\nsol = solve_ivp(lambda x,t:-0.1*t, (0, 50), np.array([500]), t_eval = instantes) # Cambiamos la condici\u00f3n inicial a N(0)=500\nplt.figure(figsize=(8,5)) # tama\u00f1o de la gr\u00e1fica en pulgadas\nplt.plot(sol.t, sol.y[0, :], '-')\n```\n\nCon SymPy tambi\u00e9n podemos resolver ecuaciones de grado superior, como la del oscilador amortiguado.\n\n$\\frac{d^2y}{dt^2}+ 3\\frac{dy}{dt} + y = 0 ; y(0) = 1 ; y'(0) = 0$\n\n\n```python\ny = sp.symbols('y',cls=sp.Function)\nysol = sp.dsolve(y(t).diff(t,t)+0.3*y(t).diff(t)+y(t),y(t))\nysol\n```\n\n\n```python\nC = sp.solve([ysol.rhs.subs(t,0)-1.0,ysol.rhs.diff(t).subs(t,0)-0.0])\nC\n```\n\n\n```python\nysol = ysol.subs(C)\nysol = sp.lambdify(t,ysol.rhs,'numpy')\nt = np.linspace(0,25,200)\nplt.plot(t,ysol(t))\n```\n\n---\n\n\n (c) 2020 Javier Garc\u00eda Algarra. www.u-tad.com
    \nLicensed under a Creative Commons Reconocimiento 4.0 Internacional License\n
    \n", "meta": {"hexsha": "d73d53069de7069df28b57176c1d67b77d998dce", "size": 298244, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/.ipynb_checkpoints/08_sympy_calculo-checkpoint.ipynb", "max_stars_repo_name": "jgalgarra/PythonMatematicas", "max_stars_repo_head_hexsha": "8f3a294f134a949403919d38eb15e7144237cdfb", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-03-09T12:22:31.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-09T12:22:31.000Z", "max_issues_repo_path": "notebooks/08_sympy_calculo.ipynb", "max_issues_repo_name": "jgalgarra/PythonMatematicas", "max_issues_repo_head_hexsha": "8f3a294f134a949403919d38eb15e7144237cdfb", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/08_sympy_calculo.ipynb", "max_forks_repo_name": "jgalgarra/PythonMatematicas", "max_forks_repo_head_hexsha": "8f3a294f134a949403919d38eb15e7144237cdfb", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-10-29T09:18:34.000Z", "max_forks_repo_forks_event_max_datetime": "2020-10-29T09:18:34.000Z", "avg_line_length": 254.0408858603, "max_line_length": 54181, "alphanum_fraction": 0.9159413098, "converted": true, "num_tokens": 4794, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9473810488807638, "lm_q2_score": 0.8856314828740728, "lm_q1q2_score": 0.8390304831670653}} {"text": "```julia\nusing DifferentialEquations\nusing Plots\n```\n\n# Simple Harmonic Oscillator\n\nThe differential equation for a simple harmonic oscillator is given by \n\\begin{equation}\n \\ddot{x} + \\omega_0^2x = 0 \n\\end{equation}\nwhere $k=K/m$ is the reduced spring constant. \n\n\n```julia\nforce(dx, x, k, t) = -k*x\ndx\u2080 = 1.0 \nx\u2080 = 0.0 \nk = 10.0\ntspan = (0.0, 10.0)\n\nprob = SecondOrderODEProblem(force, dx\u2080, x\u2080, tspan, k)\n```\n\n\n\n\n \u001b[36mODEProblem\u001b[0m with uType \u001b[36mArrayPartition{Float64, Tuple{Float64, Float64}}\u001b[0m and tType \u001b[36mFloat64\u001b[0m. In-place: \u001b[36mfalse\u001b[0m\n timespan: (0.0, 10.0)\n u0: (1.0, 0.0)\n\n\n\n\n```julia\nsol = solve(prob); \nplot(sol, label=[\"Velocity\" \"Position\"])\n```\n\n\n\n\n \n\n \n\n\n\n# Damped Harmonic Oscillator\n\nWe can introduce friction into the system via a new term. This leads us to the form: \n\\begin{equation}\n \\ddot{x} + \\frac{\\gamma}{2}\\dot{x} + \\omega_0^2x = 0 \n\\end{equation}\nFriction works *against* the spring force... hence the positive coefficient $\\gamma$. \n\n\n```julia\ndamped_force(dx, x, p, t) = -p[1]*x - p[2]*dx\ndx\u2080 = 1.0 \nx\u2080 = 0.0 \ntspan = (0.0, 10.0)\n\np = [9.0, 1.0]\nprob = SecondOrderODEProblem(damped_force, dx\u2080, x\u2080, tspan, p)\n```\n\n\n\n\n \u001b[36mODEProblem\u001b[0m with uType \u001b[36mArrayPartition{Float64, Tuple{Float64, Float64}}\u001b[0m and tType \u001b[36mFloat64\u001b[0m. 
In-place: \u001b[36mfalse\u001b[0m\n timespan: (0.0, 10.0)\n u0: (1.0, 0.0)\n\n\n\n\n```julia\nsol = solve(prob);\nplot(sol, label=[\"Velocity\" \"Position\"])\n```\n\n\n\n\n \n\n \n\n\n\nthere are 3 regimes of parameter values to consider: \n- Under damped: $\\left(\\dfrac{\\gamma}{2}\\right)^2 < \\omega_0^2$\n- Critically damped: $\\left(\\dfrac{\\gamma}{2}\\right)^2 = \\omega_0^2$\n- Over damped: $\\left(\\dfrac{\\gamma}{2}\\right)^2 > \\omega_0^2$\n\n\n```julia\nusing Interact # for widgets\n```\n\n\n
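\n\nAs a quick orientation for the slider below, the regime can be read off from the discriminant of the characteristic polynomial of the equation the code actually integrates, $\\ddot{x} + p_2\\dot{x} + p_1x = 0$; the boundary between oscillating and non-oscillating solutions sits at $p_2 = 2\\sqrt{p_1}$. The following is only a small illustrative sketch (the helper `regime` is not part of the original notebook) written against the same `p = [3.0, i]` parameterization the slider uses.\n\n\n```julia\n# Classify the damping value i used as p = [3.0, i] in the slider below, via the\n# discriminant of the characteristic polynomial r^2 + i*r + p1 of x'' + i*x' + p1*x = 0.\np1 = 3.0                   # p[1] used in the @manipulate example below\ncritical = 2 * sqrt(p1)    # boundary between the regimes, approximately 3.46\nregime(i) = i < critical ? \"under-damped\" : i > critical ? \"over-damped\" : \"critically damped\"\nregime(1.0), regime(10.0)  # (\"under-damped\", \"over-damped\")\n```\n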
    \n\n\n\n\n```julia\n@manipulate for i \u2208 0.1:0.1:15\n p = [3.0, i]\n prob = SecondOrderODEProblem(damped_force, dx\u2080, x\u2080, tspan, p)\n sol = solve(prob);\n plot(sol, label=[\"Velocity\" \"Position\"])\nend\n```\n\n\n\n\n\n \n\n\n\n\n\n\n```julia\n\n```\n", "meta": {"hexsha": "c5be241e26ce30429d5cbcce265d772d956a0a03", "size": 292265, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Data Generation.ipynb", "max_stars_repo_name": "john-waczak/SciML_SHO", "max_stars_repo_head_hexsha": "180d46e9755e6a70f281086a373e183b57544275", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Data Generation.ipynb", "max_issues_repo_name": "john-waczak/SciML_SHO", "max_issues_repo_head_hexsha": "180d46e9755e6a70f281086a373e183b57544275", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Data Generation.ipynb", "max_forks_repo_name": "john-waczak/SciML_SHO", "max_forks_repo_head_hexsha": "180d46e9755e6a70f281086a373e183b57544275", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 253.4822202949, "max_line_length": 73921, "alphanum_fraction": 0.6709185157, "converted": true, "num_tokens": 868, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9473810525948927, "lm_q2_score": 0.8856314617436728, "lm_q1q2_score": 0.8390304664378742}} {"text": "# Bayes' Theorem - Blood Test\n\n> This document is written in *R*.\n>\n> ***GitHub***: https://github.com/czs108\n\n## Background\n\n> In a certain clinic **0.15** of the patients have a certain *virus*. Suppose a blood test is carried out on a patient.\n>\n> If the patient has got the virus the test will turn out *positive* with probability **0.95**. 
If the patient does *not* have the virus the test will turn out *positive* with probability **0.02**.\n\n## Question A\n\n> Calculate the probability that a patient will have a *positive* test.\n\nGive the events the following labels:\n\n\\begin{align}\nH &= \\text{The patient has got the virus.} \\\\\nP &= \\text{The outcome of the test is positive.}\n\\end{align}\n\nAccording to the background:\n\n\\begin{align}\nP(H) = 0.15 \\\\\nP(P \\mid H) = 0.95 \\\\\nP(P \\mid \\overline{H}) = 0.02\n\\end{align}\n\nThere are 2 situations that the outcome of the test can be *positive*.\n\n\\begin{equation}\n\\begin{split}\nP(P) &= P(H \\cap P) + P(\\overline{H} \\cap P) \\\\\n &= P(H) \\cdot P(P \\mid H) + P(\\overline{H}) \\cdot P(P \\mid \\overline{H}) \\\\\n &= 0.15 \\times 0.95 + 0.85 \\times 0.02 \\\\\n &= 0.1595\n\\end{split}\n\\end{equation}\n\n## Question B\n\n> If the test is *positive* then:\n\n### Question 1\n\n> What are the probabilities that the patient has the virus?\n\n\\begin{equation}\n\\begin{split}\nP(H \\mid P) &= \\frac{P(P \\mid H) \\cdot P(H)}{P(P)} \\\\\n &= \\frac{0.95 \\times 0.15}{0.1595} \\\\\n &= 0.8934\n\\end{split}\n\\end{equation}\n\n### Question 2\n\n> What are the probabilities that the patient does *not* have the virus?\n\n\\begin{equation}\n\\begin{split}\nP(\\overline{H} \\mid P)\n &= 1 - P(H \\mid P) \\\\\n &= 0.1066\n\\end{split}\n\\end{equation}\n\n## Question C\n\n> If the test is *negative* then:\n\n### Question 1\n\n> What are the probabilities that the patient has the virus?\n\n\\begin{equation}\n\\begin{split}\nP(H \\mid \\overline{P}) &= \\frac{P(\\overline{P} \\mid H) \\cdot P(H)}{P(\\overline{P})} \\\\\n &= \\frac{(1 - 0.95) \\times 0.15}{1 - 0.1595} \\\\\n &= 0.0089\n\\end{split}\n\\end{equation}\n\n### Question 2\n\n> What are the probabilities that the patient does *not* have the virus?\n\n\\begin{equation}\n\\begin{split}\nP(\\overline{H} \\mid \\overline{P})\n &= 1 - P(H \\mid \\overline{P}) \\\\\n &= 0.9911\n\\end{split}\n\\end{equation}\n\n## Question D\n\n> Write some *R* code which simulates the possible outcomes of a blood test. You can use the following line of example code:\n>\n> ```r\n> if (runif(1) <= 0.15)\n> ```\n>\n> The *R* command `runif(1)` generates a random number between the values **0.0** and **1.0**.\n>\n> If this random number is less than or equal to **0.15** then we can say that the patient has the virus, otherwise the patient does *not* have the virus.\n>\n> In a similar way, you can decide if the test is *positive* or *negative*.\n>\n> Then run this code **100000** times.\n\n\n```R\nVirus <- function(count) {\n virus <- rep(FALSE, times=count)\n for (i in c(1:count)) {\n if (runif(1) < 0.15) {\n virus[i] <- TRUE\n }\n }\n\n return (virus)\n}\n\nTest <- function(virus) {\n pos <- rep(FALSE, times=length(virus))\n for (i in c(1:length(virus))) {\n if (virus[i]) {\n if (runif(1) < 0.95) {\n pos[i] <- TRUE\n }\n } else {\n if (runif(1) < 0.02) {\n pos[i] <- TRUE\n }\n }\n }\n\n return (pos)\n}\n\ncount <- 100000\nvirus <- Virus(count)\ntests <- Test(virus)\n```\n\n### Question 1\n\n> How often does the test turn out *positive*?\n\n\n```R\nsum(tests) / count\n```\n\n\n0.15743\n\n\n### Question 2\n \n> How often that the patient has the virus if the test is *positive*?\n\n\n```R\nsum(tests & virus) / sum(tests)\n```\n\n\n0.893539986025535\n\n\n## Question E\n\n> Modify the code to include a *2nd* blood test on the patient. 
You can assume that the *2nd* test is unaffected by the *1st* test.\n>\n> Then run this code **100000** times.\n\n\n```R\ntests.1 <- tests\ntests.2 <- Test(virus)\n```\n\n### Question 1\n\n> How often do you get two *positive* tests?\n\nThe two tests are *conditionally independent* under the status of a patient, so:\n\n\\begin{equation}\nP(P_{1} \\cap P_{2} \\mid H) = P(P_{1} \\mid H) \\cdot P(P_{2} \\mid H)\n\\end{equation}\n\n\\begin{equation}\n\\begin{split}\nP(P_{1} \\cap P_{2})\n &= P(P_{1} \\cap P_{2} \\mid H) \\cdot P(H) + P(P_{1} \\cap P_{2} \\mid \\overline{H}) \\cdot P(\\overline{H}) \\\\\n &= P(P_{1} \\mid H) \\cdot P(P_{2} \\mid H) \\cdot P(H)\n + P(P_{1} \\mid \\overline{H}) \\cdot P(P_{2} \\mid \\overline{H}) \\cdot P(\\overline{H}) \\\\\n &= 0.95 \\times 0.95 \\times 0.15 + 0.02 \\times 0.02 \\times 0.85 \\\\\n &= 0.1357\n\\end{split}\n\\end{equation}\n\n\n```R\nallpos <- sum(tests.1 & tests.2)\n\nallpos / count\n```\n\n\n0.13421\n\n\n### Question 2\n\n> If you get two *positive* tests, how often does the patient have the virus?\n\n\\begin{equation}\n\\begin{split}\nP(H \\mid P_{1} \\cap P_{2})\n &= \\frac{P(P_{1} \\cap P_{2} \\mid H) \\cdot P(H)}{P(P_{1} \\cap P_{2})} \\\\\n &= \\frac{0.95 \\times 0.95 \\times 0.15}{0.1357} \\\\\n &= 0.9976\n\\end{split}\n\\end{equation}\n\n\n```R\nsum(virus & tests.1 & tests.2) / allpos\n```\n\n\n0.997317636539751\n\n\n### Question 3\n\n> If the *1st* test is *postive* and the *2nd* is *negative*, how often does the patient have the virus?\n\nAccording to *conditional independence*:\n\n\\begin{equation}\nP(P_{1} \\cap \\overline{P_{2}} \\mid H) = P(P_{1} \\mid H) \\cdot P(\\overline{P_{2}} \\mid H)\n\\end{equation}\n\nThen:\n\n\\begin{equation}\n\\begin{split}\nP(P_{1} \\cap \\overline{P_{2}})\n &= P(P_{1} \\cap \\overline{P_{2}} \\mid H) \\cdot P(H) + P(P_{1} \\cap \\overline{P_{2}} \\mid \\overline{H}) \\cdot P(\\overline{H}) \\\\\n &= P(P_{1} \\mid H) \\cdot P(\\overline{P_{2}} \\mid H) \\cdot P(H)\n + P(P_{1} \\mid \\overline{H}) \\cdot P(\\overline{P_{2}} \\mid \\overline{H}) \\cdot P(\\overline{H}) \\\\\n &= 0.95 \\times 0.05 \\times 0.15 + 0.02 \\times 0.98 \\times 0.85 \\\\\n &= 0.0238\n\\end{split}\n\\end{equation}\n\n\\begin{equation}\n\\begin{split}\nP(H \\mid P_{1} \\cap \\overline{P_{2}})\n &= \\frac{P(P_{1} \\cap \\overline{P_{2}} \\mid H) \\cdot P(H)}{P(P_{1} \\cap \\overline{P_{2}})} \\\\\n &= \\frac{0.95 \\times 0.05 \\times 0.15}{0.0238} \\\\\n &= 0.2994\n\\end{split}\n\\end{equation}\n\n\n```R\nsum(virus & tests.1 & !tests.2) / sum(tests.1 & !tests.2)\n```\n\n\n0.293712316968131\n\n", "meta": {"hexsha": "807aad492a4d4eca1fdf39d6d19070c70f791278", "size": 12662, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "exercises/Bayes' Theorem - Blood Test.ipynb", "max_stars_repo_name": "czs108/Probability-Theory-Exercises", "max_stars_repo_head_hexsha": "60c6546db1e7f075b311d1e59b0afc3a13d93229", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "exercises/Bayes' Theorem - Blood Test.ipynb", "max_issues_repo_name": "czs108/Probability-Theory-Exercises", "max_issues_repo_head_hexsha": "60c6546db1e7f075b311d1e59b0afc3a13d93229", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "exercises/Bayes' Theorem - Blood Test.ipynb", "max_forks_repo_name": 
"czs108/Probability-Theory-Exercises", "max_forks_repo_head_hexsha": "60c6546db1e7f075b311d1e59b0afc3a13d93229", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-03-21T05:04:07.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-21T05:04:07.000Z", "avg_line_length": 23.404805915, "max_line_length": 203, "alphanum_fraction": 0.4526930975, "converted": true, "num_tokens": 2198, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9473810466522862, "lm_q2_score": 0.8856314617436728, "lm_q1q2_score": 0.8390304611749149}} {"text": "# Artificial Neural Networks\n\nIt consists of a supervised learning algorithm, that consists of the reception of input and computation of a function with weights (in layers).\n\nIt can sort of mimmic a natural brain neural net, because of its structure with neurons and connections between them.\n\n## Why use them?\n\nIn traditional Machine Learning, to compute predictions, one must feed input; perform feature engineering; feed features; perform the model; output, whereas in AI (Neural Nets), one must only feed input to the net to receive output. \n\nNeural Nets with multiple layers can in each node or in each layer detect different aspects of the input (specific patterns), for example if you feed dog images, a region of the net can form patterns for the eyes, another for the nose, et cetera. So basically the only requirement for learning is sufficient quantity of data.\n\n\n## Perceptron\n\nThe Perceptron is the simplest ANN, since it only has one neuron and one output value. We will use it to start .\n\n\nSo, given the input matrix $X$, and weights $\\mathbf{W} = \\{w_j\\}_{j=1}^N$ and bias $w_0$ we have the **activation function** $g: \\mathbb{R}^{N+1} \\rightarrow \\mathbb{R}^M$:\n\n\\begin{equation}\n\\hat{y} = g( w_0 + \\mathbf{X}^T \\mathbf{W} )\n\\end{equation}\n\nOne of the most simple and popular activation functions is the sigmoid function:\n\\begin{equation}\n\\sigma(x) = \\frac{1}{1 + e^{-x}}\n\\end{equation}\n\nIs is useful because it receibes a certain value $x$ and maps its value into the interval $[0,1]$, therefore possibily computing a probability measure.\nAlso, multiple activation functions can be used in different layers, to induce complexity to the model.\n\n\n\n- The activation function can help us to introduce \"non-linearity\" into the model, which is important when dealing with real data, that is not usually linear.\n\n- The weigths can be adjusted (and must be!), and this process is the training process: by adjusting our weights, model performance can improve (\"and learn\").\n\n- $w_0$ is our bias. 
It can alter our activation function argument independently from the input $\\mathbf{X}$.\n\n\n\n```python\nimport matplotlib.pyplot as plt\nimport numpy as np\n\ndef sigmoid(x):\n    return 1/(1+np.exp(-x))\n\ndef hyperbolic_tangent(x):\n    return (np.exp(x) - np.exp(-x))/(np.exp(x) + np.exp(-x))\n\ndef relu(x):\n    return max(0, x)\n\ndef plot_af():\n    activation_functions = {\n        'sigmoid': sigmoid, 'hyperbolic tangent': hyperbolic_tangent, 'relu': relu\n    }\n    plt.figure(figsize=(12,8))\n    plt.suptitle('Some common activation functions', fontsize=25, y=1.05)\n    c = 0\n    for f in list(activation_functions.keys()):\n        c += 1\n        plt.subplot(3, 1, c)\n        x = np.linspace(-10, 10, 200)\n        y = list(map(activation_functions[f], x))\n\n        plt.title(f, fontsize=18)\n        plt.plot(x, y)\n        plt.axvline(0, color='black')\n        plt.axhline(0, color='black')\n\n    plt.tight_layout()\n    plt.show()\n\nplot_af()\n```\n\n\n\n
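To make the perceptron equations above concrete, the cell below is a minimal sketch of a single neuron evaluating $\\hat{y} = g(w_0 + \\mathbf{X}^T \\mathbf{W})$ with the sigmoid as the activation. It is illustrative only: the input and weight values are made up, and the sigmoid is redefined so the cell stands alone.

```python
import numpy as np

def sigmoid(x):
    return 1/(1+np.exp(-x))

# made-up example: 2 inputs feeding a single neuron
X = np.array([0.7, 0.4])    # inputs x_1, x_2
W = np.array([0.5, -1.2])   # weights w_1, w_2
w0 = 0.1                    # bias

z = w0 + X.dot(W)           # pre-activation: w_0 + X^T W
y_hat = sigmoid(z)          # activation g(z)
print(z, y_hat)
```

Because the sigmoid squashes the pre-activation into $(0, 1)$, the output can be read as a probability-like score.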
    \n\n\n\n\n## Multioutput Neural Net\n\nGeneralizing our Neural Net, let us say we have $N$ input values, $K$ layers \nand $J$ output classes (taking it is a classification problem), our output expression would be:\n\n\\begin{equation}\n\\hat{y}_j = g(w_{0, j}^{(k)} + \\sum_{i=1}^N x_i w_{i, j}^{(k)}) \\quad \\begin{cases}\n\\text{layers} & k=1, ..., K \\\\\n\\text{outputs} & j = 1, 2, ..., J\n\\end{cases}\n\\end{equation}\n\n\nIn the cases that we have an intermediate layer (neither input nor output), it can be called a \"hidden layer\".\n\n\n\n\n## Deep Neural Nets\n\nThey are ANNs with a large quantity of hidden layers.\n\n\n\n\n\n## Loss\n\nTo quantify the error and accuracy of a prediction (output of the net) $\\hat{y}_j$, comparing to the real value $y_j$, we must define a Cost function\n\n\\begin{equation}\n\\mathcal{L} (\\hat{y}_j, y_j): \\mathbb{R}^n \\rightarrow \\mathbb{R}\n\\end{equation}\n\n### Empirical Loss \n\nAlso called Objective Function, Cost Function, Empirical Risk. \nTaking $f$ as the function of the ANN.\n\n\\begin{equation}\nJ(\\mathbf{W}) = \\frac{1}{N} \\sum_{i=1}^N \\mathcal{L} \\left( f(x^{(i)}; \\mathbf{W}), y^{(i)} \\right) \n\\end{equation}\n\n### Binary Cross Entropy Loss\n\n\\begin{equation}\nJ(\\mathbf{W}) = \\frac{1}{N} \\sum_{i=1}^N y^{(i)} log \\left(f(x^{(i)}; \\mathbf{W}) \\right) + (1 - y^{(i)}) log \\left( 1 - f(x^{(i)}; \\mathbf{W}) \\right)\n\\end{equation}\n\n- can be used for binary classification problems, since it computes a probability value.\n\n### MSE\n\nThe Mean Squared Error (MSE) is computed by\n\n\\begin{equation}\nJ(\\mathbf{W}) = \\frac{1}{N} \\sum_{i=1}^N \\left( f(x^{(i)}; \\mathbf{W})- y^{(i)} \\right)^2 \n\\end{equation}\n\n- This can be used for regression problems, to compute a Loss value in the continuous real line.\n\n\n\n\n## Example\n\nLet us see a simple example with 2 input nodes, 1 hidden layer and 1 output node\n\n\n```python\n# given the input data X\nX = [\n [0.7, 0.4],\n [0.2, 0.8],\n [0.3, 0.9],\n [0.01, 0.01]\n]\n\n# weigths\nW = [[1, 1],\n [1, 1],\n [1, 1],\n [1, 1]]\n\ndef simple_net(X, W, w0, hidden_layers=1):\n \"\"\" Simple Net with sigmoid activation function \"\"\"\n y = []\n for j in range(len(X)): # for each sequence of inputs {x_j}_{j=1}^I, with I input nodes\n s=0\n for k in range(np.array(W).shape[1]): # for each hidden layer\n s += W[j][k] * X[j][k]\n print(\"j={}, k= {}, W[j][k]={}, X[j][k]={}, W[j][k] * X[j][k]={}\".format(j, k, W[j][k], X[j][k], s))\n y.append(sigmoid(s + w0))\n\n print(\"Final Layer values: \", y)\n print('Output: ', sum(y)) \n\nsimple_net(X, W, w0=0)\n```\n\n j=0, k= 0, W[j][k]=1, X[j][k]=0.7, W[j][k] * X[j][k]=0.7\n j=0, k= 1, W[j][k]=1, X[j][k]=0.4, W[j][k] * X[j][k]=1.1\n j=1, k= 0, W[j][k]=1, X[j][k]=0.2, W[j][k] * X[j][k]=0.2\n j=1, k= 1, W[j][k]=1, X[j][k]=0.8, W[j][k] * X[j][k]=1.0\n j=2, k= 0, W[j][k]=1, X[j][k]=0.3, W[j][k] * X[j][k]=0.3\n j=2, k= 1, W[j][k]=1, X[j][k]=0.9, W[j][k] * X[j][k]=1.2\n j=3, k= 0, W[j][k]=1, X[j][k]=0.01, W[j][k] * X[j][k]=0.01\n j=3, k= 1, W[j][k]=1, X[j][k]=0.01, W[j][k] * X[j][k]=0.02\n Final Layer values: [0.7502601055951177, 0.7310585786300049, 0.7685247834990175, 0.5049998333399998]\n Output: 2.75484330106414\n\n\nThis simple example shows us that if we don't use a activation function, we simply compute a linear relationship to our data, and that our weights must be adjusted for us to leave random computations and achieve pattern recognition.\n\n## Training - Loss Optimization\n\nThe training process consists of a optimization process, 
since, given our weights, we want to minimize the cost function by adjusting those weights, improving the model's accuracy:\n\n\\begin{equation}\n\\mathbf{W}^* = \\underset{W}{argmin} \\frac{1}{N} \\sum_{i=1}^N \\mathcal{L} \\left( f(x^{(i)}; \\mathbf{W}), y^{(i)} \\right) = \\underset{W}{argmin} J(\\mathbf{W})\n\\end{equation}\n\n\n\n\n\n## Gradient Descent\n\nA simple algorithm for optimizing our Loss function through gradient computation, given a learning rate $\\eta$:\n\n- 1) Initialize pseudo-random weights $\\mathbf{W} \\sim \\mathcal{N}(0, \\sigma^2)$\n- 2) while $\\nabla J(\\mathbf{W}) \\neq 0$ (loop until convergence)\n- 2.1) Compute $\\nabla J(\\mathbf{W})$\n- 2.2) Update weights $\\mathbf{W} \\leftarrow \\mathbf{W} - \\eta \\nabla J(\\mathbf{W})$\n- 3) return the optimized weights $\\mathbf{W}^*$\n\n
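As a quick numerical illustration of the update rule (a toy example, not part of the original notebook), gradient descent on the one-dimensional loss $J(w) = (w - 3)^2$ walks $w$ towards the minimum at $w = 3$:

```python
def J(w):
    return (w - 3.0) ** 2      # toy loss with its minimum at w = 3

def grad_J(w):
    return 2.0 * (w - 3.0)     # dJ/dw

w = 10.0                       # arbitrary starting point
eta = 0.1                      # learning rate
for step in range(50):
    w -= eta * grad_J(w)       # W <- W - eta * grad J(W)

print(w, J(w))                 # w ends up very close to 3
```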
\n\n\n#### Notes:\n\n- Note that in our algorithm steps, with \n\\begin{equation}\nx^{k+1} = x^k + t_k d^k\n\\end{equation}\nwe are taking a step $t_k$ (related to the learning rate) in the direction $d^k$. In gradient descent the direction is $d^k = - \\nabla J(\\mathbf{W})$: the gradient points towards the steepest increase of the loss, so we step in the opposite direction, searching for a minimum.\n\n- The Loss Surface (as in the figure above, with $J(w_0, w_1)$ plotted against $w_0$ and $w_1$) can have multiple local minima, while we want the global minimum. So one must choose the learning rate $\\eta$ carefully: steps that are too small increase the computational time, while steps that are too large can skip over the minimum region.\n\n- In gradient descent backpropagation, we update each weight by a small amount at a time, scaled by the learning rate.\n\n
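The local-minima caveat can also be seen numerically. In this toy example (again not from the original notebook), gradient descent on a non-convex one-dimensional loss settles in different minima depending on the starting point:

```python
def grad(w):
    # derivative of J(w) = w**4 - 3*w**2 + w, a loss with two local minima
    return 4*w**3 - 6*w + 1

def descend(w, eta=0.01, steps=2000):
    for _ in range(steps):
        w -= eta * grad(w)
    return w

print(descend(-2.0))   # settles near w = -1.3, the deeper (global) minimum
print(descend(+2.0))   # settles near w = +1.1, a shallower local minimum
```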
    \n\n\n\n### Stochastic Gradient Descent\n\nIn this method, random samples are selected to compute the gradient.\nAlgorithm:\n\n- 1) Initialize weights $\\mathbf{W} \\sim \\mathcal{N}(0,\\sigma^2)$\n- 2) while $\\nabla J(\\mathbf{W}) \\neq 0$ (loop until co\nnvergence)\n- 2.1) sample a batch of $B$ data points\n- 2.2) Compute the gradient at the selected batch $\\frac{\\partial J(\\mathbf{W})}{\\partial \\mathbf{W}} = \\frac{1}{B} \\sum_{k=1}^B \\frac{\\partial J_k(\\mathbf{W})}{\\partial \\mathbf{W}}$\n- 2.3) update weights $\\mathbf{W} \\leftarrow \\mathbf{W} - \\eta \\nabla J(\\mathbf{W})$\n- 3) return weights $\\mathbf{W}^*$\n\n\n
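Below is a minimal sketch of that batch-sampling loop (illustrative only: the data and the linear model are made up, and this is not the implementation used later in the notebook). It fits a line y = 2x + 1 from noisy samples using mini-batches:

```python
import numpy as np

np.random.seed(0)

# synthetic data: y = 2x + 1 plus a little noise
X = np.random.uniform(-1, 1, size=200)
y = 2 * X + 1 + 0.1 * np.random.randn(200)

w, b = 0.0, 0.0       # parameters to learn
eta, B = 0.1, 16      # learning rate and mini-batch size

for step in range(500):
    idx = np.random.randint(0, len(X), size=B)   # 2.1) sample a batch of B points
    xb, yb = X[idx], y[idx]
    err = (w * xb + b) - yb                      # prediction error on the batch
    w -= eta * np.mean(err * xb)                 # 2.2)-2.3) gradient step on the batch
    b -= eta * np.mean(err)

print(w, b)   # close to the true values 2 and 1
```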
    \n\n\n\n## Backpropagation\n\nIt is the process of computing the gradients and training feedforward neural networks\n\n\n```python\nimport random\n\ndef create_random_weights(rowdim=4, coldim=2):\n \"\"\" Given the dimensions of rows and columns, create a matrix with values:\n u_1 (-1)^(u_2), u_1, u_2 \\sim U[0,1]\n i.e. a uniform number that might have a minus sign\n \"\"\"\n\n W = []\n for r in range(rowdim):\n W.append([])\n for c in range(coldim):\n W[r].append(np.random.uniform()*((-1)**round(np.random.uniform())))\n\n return W\n\ncreate_random_weights()\n```\n\n\n\n\n [[0.22972808230970665, 0.9273836410739588],\n [-0.5081797922848657, -0.5133557483459836],\n [-0.4464993139568254, -0.059074472410352796],\n [-0.7863491811557749, -0.23029365785182865]]\n\n\n\n\n```python\n#implementing Gradient Descent and backpropagation\n\nimport numpy as np\nfrom copy import deepcopy\n\n\n# activation function\ndef sigmoid(x):\n return 1/(1+np.exp(-x))\n\n# loss function\ndef mse(y, y_pred):\n return np.mean(np.square(np.array(y) - np.array(y_pred)))\n\n\ndef simple_net(X, y, W, w0, hidden_layers=1, y_true=0, verbose=False):\n \"\"\" Simple Net with sigmoid activation function \"\"\"\n y_pred = []\n for j in range(len(X)): # for each sequence of inputs {x_j}_{j=1}^I, with I input nodes\n s=0\n for k in range(np.array(W).shape[1]): # for each hidden layer\n s += W[j][k] * X[j][k]\n if verbose:\n print(\"j={}, k= {}, W[j][k]={}, X[j][k]={}, W[j][k] * X[j][k]={}\".format(j, k, W[j][k], X[j][k], s))\n y_pred.append(sigmoid(s + w0))\n\n if verbose:\n print(\"Final Layer values: \", y)\n print('Output: ', sum(y)) \n return mse(y=y, y_pred=y_pred)\n\ndef backprop(X, y, W, \n dif=0.0001,\n eta=0.01,\n w0 = -0.5516,\n verbose=False\n ):\n \n loss_0 = simple_net(X, y, W, w0, verbose=verbose)\n w_update = deepcopy(W)\n adj_w = deepcopy(W)\n\n for i, layer in enumerate(W):\n for j, w_j in np.ndenumerate(layer):\n if verbose:\n print('i: {}, j: {}, w_j: {}'.format(i, j, w_j))\n j=j[0]\n # small difference in weights\n adj_w[i][j] += dif\n\n aditional_loss = simple_net(X, y, adj_w, w0, verbose=verbose)\n\n # \\nabla J(W) = \\frac{\\partial J(\\mathbf{W})}{\\partial \\mathbf{W}}\n grad = (aditional_loss - loss_0) / dif\n\n # W <- W - \\eta * \\nabla J(W)\n w_update[i][j] -= eta * grad\n\n return w_update, loss_0\n```\n\n\n```python\ndef test_loss(X, y, W, w0, epochs=100):\n loss=[]\n for epoch in range(epochs):\n W, l = backprop(X, y, W, w0)\n loss.append(l)\n\n plt.figure(figsize=(15,4))\n plt.title('Loss function x epochs')\n plt.plot(loss)\n plt.show()\n\n'''\nW = np.array([[-0.0053, 0.3793],\n [-0.5820, -0.5204],\n [-0.2723, 0.1896]])\nw0 = -0.5516\nX = [[1, 1]]\ny = np.array([[0]])\n\n\ntest_loss(X, y, W, w0, epochs=3000)\n'''\n```\n\n\n\n\n '\\nW = np.array([[-0.0053, 0.3793],\\n [-0.5820, -0.5204],\\n [-0.2723, 0.1896]])\\nw0 = -0.5516\\nX = [[1, 1]]\\ny = np.array([[0]])\\n\\n\\ntest_loss(X, y, W, w0, epochs=3000)\\n'\n\n\n\n\n```python\n# generating random weights and bias\nW = create_random_weights(3, 2)\nw0 = create_random_weights(1,1)[0][0]\n\n# given the input\nX = [[1, 1]]\n\n# and the actual value\ny = np.array([[0]])\n\n# adjust weight through gradient descent backpropagation to minimize loss\ntest_loss(X, y, W, w0, epochs=3000)\n```\n\n\n```python\n# given the input data X\nX = [\n [0.7, 0.4],\n [0.2, 0.8],\n [0.3, 0.9],\n [0.01, 0.01]\n]\ny = np.array([[0]])\n\n# weigths\nW = create_random_weights(4, 2)\nw0 = -0.5516\n\nbackprop(X, y, W, w0, verbose=True)\n```\n\n j=0, k= 0, 
W[j][k]=-0.6016959373589182, X[j][k]=0.7, W[j][k] * X[j][k]=-0.42118715615124275\n j=0, k= 1, W[j][k]=0.8776325229643589, X[j][k]=0.4, W[j][k] * X[j][k]=-0.07013414696549919\n j=1, k= 0, W[j][k]=0.3206434547013204, X[j][k]=0.2, W[j][k] * X[j][k]=0.06412869094026408\n j=1, k= 1, W[j][k]=0.8855268518762458, X[j][k]=0.8, W[j][k] * X[j][k]=0.7725501724412608\n j=2, k= 0, W[j][k]=0.6430365439526071, X[j][k]=0.3, W[j][k] * X[j][k]=0.19291096318578213\n j=2, k= 1, W[j][k]=-0.022319621495935804, X[j][k]=0.9, W[j][k] * X[j][k]=0.17282330383943992\n j=3, k= 0, W[j][k]=-0.44887512611931557, X[j][k]=0.01, W[j][k] * X[j][k]=-0.004488751261193156\n j=3, k= 1, W[j][k]=-0.6476148590463662, X[j][k]=0.01, W[j][k] * X[j][k]=-0.010964899851656818\n Final Layer values: [[0]]\n Output: [0]\n i: 0, j: (0,), w_j: -0.6016959373589182\n j=0, k= 0, W[j][k]=-1.1532959373589182, X[j][k]=0.7, W[j][k] * X[j][k]=-0.8073071561512427\n j=0, k= 1, W[j][k]=0.8776325229643589, X[j][k]=0.4, W[j][k] * X[j][k]=-0.45625414696549915\n j=1, k= 0, W[j][k]=0.3206434547013204, X[j][k]=0.2, W[j][k] * X[j][k]=0.06412869094026408\n j=1, k= 1, W[j][k]=0.8855268518762458, X[j][k]=0.8, W[j][k] * X[j][k]=0.7725501724412608\n j=2, k= 0, W[j][k]=0.6430365439526071, X[j][k]=0.3, W[j][k] * X[j][k]=0.19291096318578213\n j=2, k= 1, W[j][k]=-0.022319621495935804, X[j][k]=0.9, W[j][k] * X[j][k]=0.17282330383943992\n j=3, k= 0, W[j][k]=-0.44887512611931557, X[j][k]=0.01, W[j][k] * X[j][k]=-0.004488751261193156\n j=3, k= 1, W[j][k]=-0.6476148590463662, X[j][k]=0.01, W[j][k] * X[j][k]=-0.010964899851656818\n Final Layer values: [[0]]\n Output: [0]\n i: 0, j: (1,), w_j: 0.8776325229643589\n j=0, k= 0, W[j][k]=-1.1532959373589182, X[j][k]=0.7, W[j][k] * X[j][k]=-0.8073071561512427\n j=0, k= 1, W[j][k]=0.3260325229643589, X[j][k]=0.4, W[j][k] * X[j][k]=-0.6768941469654991\n j=1, k= 0, W[j][k]=0.3206434547013204, X[j][k]=0.2, W[j][k] * X[j][k]=0.06412869094026408\n j=1, k= 1, W[j][k]=0.8855268518762458, X[j][k]=0.8, W[j][k] * X[j][k]=0.7725501724412608\n j=2, k= 0, W[j][k]=0.6430365439526071, X[j][k]=0.3, W[j][k] * X[j][k]=0.19291096318578213\n j=2, k= 1, W[j][k]=-0.022319621495935804, X[j][k]=0.9, W[j][k] * X[j][k]=0.17282330383943992\n j=3, k= 0, W[j][k]=-0.44887512611931557, X[j][k]=0.01, W[j][k] * X[j][k]=-0.004488751261193156\n j=3, k= 1, W[j][k]=-0.6476148590463662, X[j][k]=0.01, W[j][k] * X[j][k]=-0.010964899851656818\n Final Layer values: [[0]]\n Output: [0]\n i: 1, j: (0,), w_j: 0.3206434547013204\n j=0, k= 0, W[j][k]=-1.1532959373589182, X[j][k]=0.7, W[j][k] * X[j][k]=-0.8073071561512427\n j=0, k= 1, W[j][k]=0.3260325229643589, X[j][k]=0.4, W[j][k] * X[j][k]=-0.6768941469654991\n j=1, k= 0, W[j][k]=-0.2309565452986796, X[j][k]=0.2, W[j][k] * X[j][k]=-0.046191309059735924\n j=1, k= 1, W[j][k]=0.8855268518762458, X[j][k]=0.8, W[j][k] * X[j][k]=0.6622301724412608\n j=2, k= 0, W[j][k]=0.6430365439526071, X[j][k]=0.3, W[j][k] * X[j][k]=0.19291096318578213\n j=2, k= 1, W[j][k]=-0.022319621495935804, X[j][k]=0.9, W[j][k] * X[j][k]=0.17282330383943992\n j=3, k= 0, W[j][k]=-0.44887512611931557, X[j][k]=0.01, W[j][k] * X[j][k]=-0.004488751261193156\n j=3, k= 1, W[j][k]=-0.6476148590463662, X[j][k]=0.01, W[j][k] * X[j][k]=-0.010964899851656818\n Final Layer values: [[0]]\n Output: [0]\n i: 1, j: (1,), w_j: 0.8855268518762458\n j=0, k= 0, W[j][k]=-1.1532959373589182, X[j][k]=0.7, W[j][k] * X[j][k]=-0.8073071561512427\n j=0, k= 1, W[j][k]=0.3260325229643589, X[j][k]=0.4, W[j][k] * X[j][k]=-0.6768941469654991\n j=1, k= 0, W[j][k]=-0.2309565452986796, 
X[j][k]=0.2, W[j][k] * X[j][k]=-0.046191309059735924\n j=1, k= 1, W[j][k]=0.3339268518762458, X[j][k]=0.8, W[j][k] * X[j][k]=0.22095017244126072\n j=2, k= 0, W[j][k]=0.6430365439526071, X[j][k]=0.3, W[j][k] * X[j][k]=0.19291096318578213\n j=2, k= 1, W[j][k]=-0.022319621495935804, X[j][k]=0.9, W[j][k] * X[j][k]=0.17282330383943992\n j=3, k= 0, W[j][k]=-0.44887512611931557, X[j][k]=0.01, W[j][k] * X[j][k]=-0.004488751261193156\n j=3, k= 1, W[j][k]=-0.6476148590463662, X[j][k]=0.01, W[j][k] * X[j][k]=-0.010964899851656818\n Final Layer values: [[0]]\n Output: [0]\n i: 2, j: (0,), w_j: 0.6430365439526071\n j=0, k= 0, W[j][k]=-1.1532959373589182, X[j][k]=0.7, W[j][k] * X[j][k]=-0.8073071561512427\n j=0, k= 1, W[j][k]=0.3260325229643589, X[j][k]=0.4, W[j][k] * X[j][k]=-0.6768941469654991\n j=1, k= 0, W[j][k]=-0.2309565452986796, X[j][k]=0.2, W[j][k] * X[j][k]=-0.046191309059735924\n j=1, k= 1, W[j][k]=0.3339268518762458, X[j][k]=0.8, W[j][k] * X[j][k]=0.22095017244126072\n j=2, k= 0, W[j][k]=0.09143654395260714, X[j][k]=0.3, W[j][k] * X[j][k]=0.027430963185782142\n j=2, k= 1, W[j][k]=-0.022319621495935804, X[j][k]=0.9, W[j][k] * X[j][k]=0.007343303839439919\n j=3, k= 0, W[j][k]=-0.44887512611931557, X[j][k]=0.01, W[j][k] * X[j][k]=-0.004488751261193156\n j=3, k= 1, W[j][k]=-0.6476148590463662, X[j][k]=0.01, W[j][k] * X[j][k]=-0.010964899851656818\n Final Layer values: [[0]]\n Output: [0]\n i: 2, j: (1,), w_j: -0.022319621495935804\n j=0, k= 0, W[j][k]=-1.1532959373589182, X[j][k]=0.7, W[j][k] * X[j][k]=-0.8073071561512427\n j=0, k= 1, W[j][k]=0.3260325229643589, X[j][k]=0.4, W[j][k] * X[j][k]=-0.6768941469654991\n j=1, k= 0, W[j][k]=-0.2309565452986796, X[j][k]=0.2, W[j][k] * X[j][k]=-0.046191309059735924\n j=1, k= 1, W[j][k]=0.3339268518762458, X[j][k]=0.8, W[j][k] * X[j][k]=0.22095017244126072\n j=2, k= 0, W[j][k]=0.09143654395260714, X[j][k]=0.3, W[j][k] * X[j][k]=0.027430963185782142\n j=2, k= 1, W[j][k]=-0.5739196214959358, X[j][k]=0.9, W[j][k] * X[j][k]=-0.4890966961605601\n j=3, k= 0, W[j][k]=-0.44887512611931557, X[j][k]=0.01, W[j][k] * X[j][k]=-0.004488751261193156\n j=3, k= 1, W[j][k]=-0.6476148590463662, X[j][k]=0.01, W[j][k] * X[j][k]=-0.010964899851656818\n Final Layer values: [[0]]\n Output: [0]\n i: 3, j: (0,), w_j: -0.44887512611931557\n j=0, k= 0, W[j][k]=-1.1532959373589182, X[j][k]=0.7, W[j][k] * X[j][k]=-0.8073071561512427\n j=0, k= 1, W[j][k]=0.3260325229643589, X[j][k]=0.4, W[j][k] * X[j][k]=-0.6768941469654991\n j=1, k= 0, W[j][k]=-0.2309565452986796, X[j][k]=0.2, W[j][k] * X[j][k]=-0.046191309059735924\n j=1, k= 1, W[j][k]=0.3339268518762458, X[j][k]=0.8, W[j][k] * X[j][k]=0.22095017244126072\n j=2, k= 0, W[j][k]=0.09143654395260714, X[j][k]=0.3, W[j][k] * X[j][k]=0.027430963185782142\n j=2, k= 1, W[j][k]=-0.5739196214959358, X[j][k]=0.9, W[j][k] * X[j][k]=-0.4890966961605601\n j=3, k= 0, W[j][k]=-1.0004751261193157, X[j][k]=0.01, W[j][k] * X[j][k]=-0.010004751261193157\n j=3, k= 1, W[j][k]=-0.6476148590463662, X[j][k]=0.01, W[j][k] * X[j][k]=-0.016480899851656818\n Final Layer values: [[0]]\n Output: [0]\n i: 3, j: (1,), w_j: -0.6476148590463662\n j=0, k= 0, W[j][k]=-1.1532959373589182, X[j][k]=0.7, W[j][k] * X[j][k]=-0.8073071561512427\n j=0, k= 1, W[j][k]=0.3260325229643589, X[j][k]=0.4, W[j][k] * X[j][k]=-0.6768941469654991\n j=1, k= 0, W[j][k]=-0.2309565452986796, X[j][k]=0.2, W[j][k] * X[j][k]=-0.046191309059735924\n j=1, k= 1, W[j][k]=0.3339268518762458, X[j][k]=0.8, W[j][k] * X[j][k]=0.22095017244126072\n j=2, k= 0, W[j][k]=0.09143654395260714, X[j][k]=0.3, 
W[j][k] * X[j][k]=0.027430963185782142\n j=2, k= 1, W[j][k]=-0.5739196214959358, X[j][k]=0.9, W[j][k] * X[j][k]=-0.4890966961605601\n j=3, k= 0, W[j][k]=-1.0004751261193157, X[j][k]=0.01, W[j][k] * X[j][k]=-0.010004751261193157\n j=3, k= 1, W[j][k]=-1.199214859046366, X[j][k]=0.01, W[j][k] * X[j][k]=-0.021996899851656815\n Final Layer values: [[0]]\n Output: [0]\n\n\n\n\n\n ([[-0.6019251279691497, 0.877311665460072],\n [0.3201882258482481, 0.8846020809687967],\n [0.6419742435461838, -0.023684247760512176],\n [-0.4502439379277646, -0.6489878353278336]],\n 0.1817566000006355)\n\n\n\nAnother way of looking at the gradient descent computing is trought the chain rule, in which\n\n\\begin{equation}\n\\frac{ \\partial J(\\mathbf{W})}{\\partial w_j} = \\frac{ \\partial J(\\mathbf{W})}{\\partial \\hat{y_j}} \\frac{\\partial \\hat{y}}{\\partial w_j} = \\frac{ \\partial J(\\mathbf{W})}{\\partial \\hat{y_j}} \\frac{\\partial \\hat{y}}{\\partial z_j} \\frac{\\partial z_j}{\\partial w_j}\n\\end{equation}\n\nIn a chain operation\n\\begin{equation}\n(x) \\overset{w_1}{\\rightarrow} (z_1) \\overset{w_2}{\\rightarrow} (\\hat{y}) \\rightarrow J(\\mathbf{W})\n\\end{equation}\n\n## Authors:\n- Pedro Bl\u00f6ss Braga\n\n## Repository:\nhttps://github.com/pedroblossbraga/NeuralNets\n\n## LICENSE: \n- MIT\n", "meta": {"hexsha": "1fb78ce19b1ddd54c0c34e75b08d774cd455e48d", "size": 96737, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "1_Artificial_Neural_Nets.ipynb", "max_stars_repo_name": "pedroblossbraga/NeuralNets", "max_stars_repo_head_hexsha": "a8c908ca3388b2c736d0411eef4f666e5d3e487b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "1_Artificial_Neural_Nets.ipynb", "max_issues_repo_name": "pedroblossbraga/NeuralNets", "max_issues_repo_head_hexsha": "a8c908ca3388b2c736d0411eef4f666e5d3e487b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "1_Artificial_Neural_Nets.ipynb", "max_forks_repo_name": "pedroblossbraga/NeuralNets", "max_forks_repo_head_hexsha": "a8c908ca3388b2c736d0411eef4f666e5d3e487b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 123.0750636132, "max_line_length": 46386, "alphanum_fraction": 0.7957658393, "converted": true, "num_tokens": 8525, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9525741268224333, "lm_q2_score": 0.8807970811069351, "lm_q1q2_score": 0.8390245104431866}} {"text": "## BNH\n\nBinh and Korn defined the following test problem in with 2 objectives and 2 constraints:\n\n**Definition**\n\n\\begin{equation}\n\\newcommand{\\boldx}{\\mathbf{x}}\n\\begin{array}\n\\mbox{Minimize} & f_1(\\boldx) = 4x_1^2 + 4x_2^2, \\\\\n\\mbox{Minimize} & f_2(\\boldx) = (x_1-5)^2 + (x_2-5)^2, \\\\\n\\mbox{subject to} & C_1(\\boldx) \\equiv (x_1-5)^2 + x_2^2 \\leq 25, \\\\\n& C_2(\\boldx) \\equiv (x_1-8)^2 + (x_2+3)^2 \\geq 7.7, \\\\\n& 0 \\leq x_1 \\leq 5, \\\\\n& 0 \\leq x_2 \\leq 3.\n\\end{array}\n\\end{equation}\n\n**Optimum**\n\nThe Pareto-optimal solutions are constituted by solutions \n$x_1^{\\ast}=x_2^{\\ast} \\in [0,3]$ and $x_1^{\\ast} \\in [3,5]$,\n$x_2^{\\ast}=3$. These solutions are marked by using bold \ncontinuous\ncurves. 
The addition of both constraints in the problem does not make any solution\nin the unconstrained Pareto-optimal front infeasible. \nThus, constraints may not introduce any additional difficulty\nin solving this problem.\n\n**Plot**\n\n\n```python\nfrom pymoo.factory import get_problem\nfrom pymoo.util.plotting import plot\n\nproblem = get_problem(\"bnh\")\nplot(problem.pareto_front(), no_fill=True)\n```\n", "meta": {"hexsha": "aafe924ff3da72874de0de502d4da08bc1d8e9b6", "size": 43099, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "source/problems/multi/bnh.ipynb", "max_stars_repo_name": "SunTzunami/pymoo-doc", "max_stars_repo_head_hexsha": "f82d8908fe60792d49a7684c4bfba4a6c1339daf", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-09-11T06:43:49.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-10T13:36:09.000Z", "max_issues_repo_path": "source/problems/multi/bnh.ipynb", "max_issues_repo_name": "SunTzunami/pymoo-doc", "max_issues_repo_head_hexsha": "f82d8908fe60792d49a7684c4bfba4a6c1339daf", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2021-09-21T14:04:47.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-07T13:46:09.000Z", "max_forks_repo_path": "source/problems/multi/bnh.ipynb", "max_forks_repo_name": "SunTzunami/pymoo-doc", "max_forks_repo_head_hexsha": "f82d8908fe60792d49a7684c4bfba4a6c1339daf", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2021-10-09T02:47:26.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-10T07:02:37.000Z", "avg_line_length": 319.2518518519, "max_line_length": 39964, "alphanum_fraction": 0.9312745075, "converted": true, "num_tokens": 410, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9407897558991953, "lm_q2_score": 0.8918110339361276, "lm_q1q2_score": 0.8390066849249785}} {"text": "```python\nfrom sympy import *\nfrom sympy.abc import x, y, e\ninit_printing(use_unicode=True)\n\n```\n\n\n```python\na1 = (x**7 + sin(x))\ndisplay(a1)\ndisplay(a1.diff())\n```\n\n\n```python\na2 = sqrt(x)\ndisplay(a2)\ndisplay(a2.diff())\n```\n\n\n```python\na3 = e**(3*x)*sin(x)\ndisplay(a3)\ndisplay(a3.diff(x).factor())\n```\n\n\n```python\na4 = Eq(sqrt(x) + sqrt(y), 1)\ndisplay(a4)\ndisplay(Derivative(a4, y, x))\n```\n\n\n```python\na5 = (2*x**4 + 3*x**2 + x + 8)\ndisplay(a5)\ndisplay(integrate(a5))\n```\n\n\n```python\na6 = (sqrt(x) + 1/sqrt(x))\ndisplay(a6)\ndisplay(integrate(a6))\n```\n\n\n```python\n# Warning, this is a burn processor integral :\n\na7 = (x**3*(1-12*x**4))**Rational(1/8)\ndisplay(a7)\ndisplay(integrate(a7))\n```\n\n\n```python\na8 = (x/(sqrt(8-2*x**2)))\ndisplay(a8)\ndisplay(integrate(a8))\n```\n\n\n```python\na10 = sin(5*x)\ndisplay(a10)\ndisplay(integrate(a10))\n```\n\n\n```python\na11 = cos(x)*(sin(x)**3)**-1\ndisplay(a11)\ndisplay(integrate(a11))\n```\n\n\n```python\na12 = tan(x)**6 * sec(x)**2\ndisplay(a12)\nres = integrate(a12)\ndisplay(res)\ndisplay(integrate(a12).simplify())\n```\n\n\n```python\n(x**3-x).diff()\n```\n\n\n```python\nk = 1\nn = 1000\nr = sum(pow(-1, i) for i in range(k,n))\nprint(r)\n```\n\n -1\n\n\n\n```python\nx, k = symbols(\"x k\")\ns = Sum(Indexed('-1',k),(k,1,1001))\ndisplay(s)\n```\n\n\n```python\n# primes, negative-positive iteration\nx, k = symbols(\"x k\")\ns = Sum(Indexed('(-1)^n+1 * 2*n+1',k),(k,1,20))\ndisplay(s)\n\nfor p in range(1, 20):\n print( ((-1)**(p+1)) * ((2*p) + 1) )\n\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "c6578a9ca637e63c6faf3895eb0ebe1d290ff215", "size": 57285, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "problem_set_1_part_a/lab.ipynb", "max_stars_repo_name": "beltrewilton/mitx_18.01.2x", "max_stars_repo_head_hexsha": "5b3b260b9ef979b17d28465824e98a5138a9e8b6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "problem_set_1_part_a/lab.ipynb", "max_issues_repo_name": "beltrewilton/mitx_18.01.2x", "max_issues_repo_head_hexsha": "5b3b260b9ef979b17d28465824e98a5138a9e8b6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "problem_set_1_part_a/lab.ipynb", "max_forks_repo_name": "beltrewilton/mitx_18.01.2x", "max_forks_repo_head_hexsha": "5b3b260b9ef979b17d28465824e98a5138a9e8b6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 87.8604294479, "max_line_length": 3400, "alphanum_fraction": 0.822274592, "converted": true, "num_tokens": 543, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9597620608291781, "lm_q2_score": 0.8740772466456688, "lm_q1q2_score": 0.8389061795645409}} {"text": "\n\n\n# Linear Regression\n\n
    This work is licensed under a Creative Commons Attribution 4.0 International License.\n\n\n```python\n# Imports\nfrom tinhatbenbranding import TINHATBEN_GRAY, TINHATBEN_YELLOW, add_tinhatbendotcom\nfrom matplotlib import pyplot as plt\nimport numpy as np\nfrom sklearn import linear_model\nfrom sklearn.preprocessing import PolynomialFeatures\n\n%matplotlib inline\n```\n\nLet's say we completed a survey of university students to determine the total amount of sleep lost for each year of study at uni.\n\n_Note: this data is completely fictional_\n\n\n```python\nimport scipy.io as sio\nX = np.array([\n [1.21, 26.79],\n [1.4, 39.84],\n [3.66, 47.61],\n [4.27, 42.23],\n [5.72, 62.39],\n [4.56, 65.32],\n [6.19, 65.43],\n [6.92, 59.81],\n [7.3, 74.58],\n [7.9, 72.97],\n [9.32, 82.12],\n [10.79, 87.96],\n ])\nfig = plt.figure(figsize=(10,10))\nax = fig.add_subplot(111)\nax.scatter(x=X[:,0], y=X[:,1], c=TINHATBEN_GRAY, edgecolors=TINHATBEN_GRAY)\nax.set_title(\"Time at University vs Total Sleep Lost\")\nax.set_xlabel(\"Time at University (years)\")\nax.set_ylabel(\"Total Sleep Lost (hrs)\")\nadd_tinhatbendotcom(ax, (0,0))\n```\n\n## Cost Function & Gradient Descent\nWe can write the equation of a straight line as being parameterized by weights from a learning algorithm. The following equation defines a hypothesis (a guess at the line fit) and somewhat resembles the equation of a straight line:\n\n$$\\large{h(x^{(i)}) = \\theta_0 + \\theta_1x_1} $$\n\nWe can simplify the hypothesis by allowing $$\\large{x_0 = 1}$$\n\n$$\n\\begin{align}\n\\large{h(x^{(i)})} & \\large{= \\theta_0x_0 + \\theta_1x_1} \\\\\n& \\large{= \\sum_{i=0}^n\\theta^Tx}\n\\end{align}$$\n\nNow using the **ordinary least squares** of error estimation we define a cost function for the hypothesis; a function which describes the error between the hypothesis and the observed values.\n\n$$\\large{J(\\theta) = \\frac{1}{2}\\sum_{i=0}^n(h(x^{(i)}) - y^{(i)})^2 }$$\n\n### Batch Gradient Descent\nStarting with some initial value of the weights of the model ($\\theta$) compute the current error between the hypothesis and the observed data. Take a small step down the slope of the cost function and re-adjust the current estimate for $\\theta$. The algorithm for batch gradient descent is defined as:\n\n$$\\large{ \\theta_j := \\theta_j - \\left(\\frac{\\alpha}{n}\\right) \\frac{d}{d\\theta_j}J(\\theta) }$$\n\nWhere:\n$$\\large{ \\frac{d}{d\\theta_j} = \\sum_{i=0}^n(h(x^{(i)}) - y^{(i)})x^{(i)}} \\text{ and; }$$\n\n$\\large{\\alpha}$ is the learning rate\n\nThe learning rate is the size of the step taken down the cost function, if the learning rate is very low the algorithm could take a long time to converge, while if it is too high the algorithm may not converge as it steps over the local minima. 
It is called batch gradient descent because at each step the operation is applied to all values.\n\n\n```python\n# Define the hypothesis function\ndef h_x(theta, x):\n \"\"\"Compute the hypothesis of a linear regression function given:\n theta: the weights of the model as a numpy array vector\n x: the input values for the model as a numpy array\n \n returns h_x(theta) = sum(theta * x)\n \"\"\"\n return np.sum(np.dot(theta.T, x.T), axis=0)\n\n# Define the cost function\ndef J(theta, x, y):\n \"\"\"Compute the linear regression cost function given:\n theta: the weights of the model as a numpy array vector\n x: the input values for the model as a numpy array\n y: the observed response for the system as a numpy array\n \n returns J(theta) = 0.5 * sum(h_x(theta) - y)^2 \n \n \"\"\"\n return np.sum((h_x(theta, x) - y) ** 2) * 0.5\n\n# Define d/dtheta_j\ndef d_d_theta_J(theta, x, y):\n \"\"\"Compute the partial derivative of the cost function:\n theta: the weights of the model as a numpy array vector\n x: the input values for the model as a numpy array\n y: the observed response for the system as a numpy array\n \n returns d_d_theta_J = sum( (h_x(theta, x) - y)x )\n \"\"\"\n return (h_x(theta, x) - y).dot(x)\n\n# Define gradient descent\ndef gradient_descent(theta, x, y, alpha, convergence_lim = 0.01, iteration_lim = 1500):\n \"\"\" Execute batch gradient descent \n theta: the weights of the model as a numpy array vector\n x: the input values for the model as a numpy array\n y: the observed response for the system as a numpy array\n alpha: the learning rate for gradient descent\n convergence_lim = 0.01: convergence is declared once the difference in theta \n values on each iteration is within this value. If set \n to < 0 convergence is not tested to complete gradient descent,\n the iteration limit is used instead.\n iteration_lim = 1500: The maximum number of iterations to complete if convergence is \n not declared earlier.\n \n returns (theta, J_log, iteration_counter) as a tuple\n \n theta: the weights of the model as found by gradient descent\n J_log the cost function value for each set of weights tested during gradient descent\n iteration_counter: the number of iterations executed during gradient descent\n \"\"\"\n \n assert len(x) == len(y), \"X and y must have the same length\"\n \n tol = 1 + convergence_lim # Store the current convergence tolerance\n J_log = [] # Store a log of all the cost function results for later review\n new_theta = np.copy(theta) # Copy theta for later use\n \n prev_theta = np.copy(theta) # Initialise the previous theta values for later comparison\n n = len(x) # The number of samples\n \n iteration_counter = 0\n \n while ( (tol > convergence_lim) or (convergence_lim < 0)):\n \n # Calculate the updated value of theta\n new_theta -= ((alpha / n) * d_d_theta_J(new_theta, x, y)).reshape(theta.shape)\n \n # Compute the cost function for the new theta and store\n J_log.append(J(new_theta, x, y))\n \n # Determine if new_theta has converged\n tol = (np.sqrt(np.sum((new_theta - prev_theta) ** 2)))\n \n if iteration_counter > iteration_lim:\n break\n \n prev_theta = np.copy(new_theta)\n iteration_counter += 1\n \n return (new_theta, J_log, iteration_counter)\n```\n\n\n```python\n# Construct some initial values of theta\ntheta = np.zeros((2,1))\n\n# Insert x_0 = 1 into the array to calculate the hypothesis\nx = np.ones((X.shape))\nx[:,1] = X[:,0]\ny = np.copy(X[:,1])\n\n# Set an initial learning rate\nalpha = 0.01\n\n#Execute gradient descent\ntheta, J_log, iter_counter = 
gradient_descent(theta, x, y, alpha)\nprint(\"Theta found via gradient descent:\\n%s\" % theta.__str__())\n\nlinear_function_error = J(theta, x, y)\n```\n\n Theta found via gradient descent:\n [[ 21.2277094 ]\n [ 6.65028129]]\n\n\n\n```python\n# Plot the cost function and observe to confirm minimisation\nfig = plt.figure(figsize=(10,10))\nax = fig.add_subplot(111)\nax.plot(J_log, c=TINHATBEN_GRAY, linewidth=2)\nax.set_xlim([-5, iter_counter + 5])\nax.set_xlabel('Iteration')\nax.set_ylabel('J(theta)')\nax.set_title('Gradient Descent Cost Function')\n```\n\nNotice that in the above plot of the cost function over gradient descent we can see that the cost function aymptotes, approaching a minimum value. This provides some evidence of a successful fit. Let's now overlay the model with the data to observe the fit.\n\n\n```python\n# Compute the new hypothesis\nx_displ = np.ones((x.shape))\nx_displ[:,1] = np.arange(x.shape[0])\n\nh = h_x(theta, x_displ)\nfig = plt.figure(figsize=(10,10))\nax = fig.add_subplot(111)\nax.scatter(x=X[:,0], y=X[:,1], c=TINHATBEN_GRAY, edgecolors=TINHATBEN_GRAY)\nax.plot(x_displ[:,1], h, c=TINHATBEN_YELLOW)\nax.text(5, 40, r'$h(x)'+ \" = %0.3f + %0.3fx$\" % (theta[0], theta[1]), fontsize=25, color=TINHATBEN_YELLOW)\nax.set_title(\"Time at University vs Total Sleep Lost\")\nax.set_xlabel(\"Time at University (years)\")\nax.set_ylabel(\"Total Sleep Lost (hrs)\")\nadd_tinhatbendotcom(ax, (0,0))\nplt.savefig(\"model.png\", dpi=300)\n```\n\nWhat happens if we increase the learning rate to say 0.025 and 0.075?\n\n\n```python\n# Construct some initial values of theta\ntheta = np.zeros((2,1))\n\n# Set an initial learning rate\nalpha = 0.025\n\n#Execute gradient descent\ntheta, J_log, iter_counter = gradient_descent(theta, x, y, alpha, -1)\nprint(\"Theta found via gradient descent:\\n%s\" % theta.__str__())\n\n# Plot the cost function and observe to confirm minimisation\nfig = plt.figure(figsize=(10,10))\nax = fig.add_subplot(111)\nax.plot(J_log, c=TINHATBEN_GRAY, linewidth=2)\nax.set_xlim([-5, iter_counter + 5])\nax.set_xlabel('Iteration')\nax.set_ylabel('J(theta)')\nax.set_title('Gradient Descent Cost Function alpha = %0.4f' % alpha)\n\nh = h_x(theta, x_displ)\n\nfig = plt.figure(figsize=(10,10))\nax = fig.add_subplot(111)\nax.scatter(x=X[:,0], y=X[:,1], c=TINHATBEN_GRAY, edgecolors=TINHATBEN_GRAY)\nax.plot(x_displ[:,1], h, c=TINHATBEN_YELLOW)\nax.text(5, 40, r'$h(x)'+ \" = %0.3f + %0.3fx$\" % (theta[0], theta[1]), fontsize=25, color=TINHATBEN_YELLOW)\nax.set_title(\"Time at University vs Total Sleep Lost (alpha = %0.4f)\" % alpha)\nax.set_xlabel(\"Time at University (years)\")\nax.set_ylabel(\"Total Sleep Lost (hrs)\")\nadd_tinhatbendotcom(ax, (0,0))\n```\n\nLook at the number of iterations it takes for $J(\\theta)$ to asymptote, convergence is taking longer with a larger learning rate\n\n\n```python\n# Construct some initial values of theta\ntheta = np.zeros((2,1))\n\n# Set an initial learning rate\nalpha = 0.075\n\n#Execute gradient descent\ntheta, J_log, iter_counter = gradient_descent(theta, x, y, alpha, -1)\nprint(\"Theta found via gradient descent:\\n%s\" % theta.__str__())\n```\n\n Theta found via gradient descent:\n [[ nan]\n [ nan]]\n\n\n /usr/local/lib/python2.7/dist-packages/ipykernel/__main__.py:21: RuntimeWarning: overflow encountered in square\n /usr/local/lib/python2.7/dist-packages/ipykernel/__main__.py:75: RuntimeWarning: overflow encountered in square\n /usr/local/lib/python2.7/dist-packages/ipykernel/__main__.py:69: RuntimeWarning: invalid value encountered 
in subtract\n\n\nThis example does not converge, the learning rate is too high.\n\n## Stochastic Gradient Descent\nWe mentioned above during batch gradient descent all of the input variables are treated simultaneously. With a lot of training data this may be very slow of may not be possible due to memory constraints. Another way of executing gradient descent is to treat the values independently, iterating over each separately and computing:\n\n$$\\large{ \\theta_j := \\theta_j - \\left(\\frac{\\alpha}{n}\\right) \\frac{d}{d\\theta_j}J(\\theta) }$$\n\nWhere:\n$$\\large{ \\frac{d}{d\\theta_j} = (h(x^{(i)}) - y^{(i)})x^{(i)}} \\text{ and; }$$\n\n$\\large{\\alpha}$ is the learning rate\n\nIn this example, there is little benefit to completing stochastic gradient descent over batch as the data set is so small.\n\n\n```python\n# Define gradient descent\ndef stochastic_gradient_descent(theta, x, y, alpha, convergence_lim = 0.01, iteration_lim = 1500):\n \"\"\" Execute stochastic gradient descent \n theta: the weights of the model as a numpy array vector\n x: the input values for the model as a numpy array\n y: the observed response for the system as a numpy array\n alpha: the learning rate for gradient descent\n convergence_lim = 0.01: convergence is declared once the difference in theta \n values on each iteration is within this value. If set \n to < 0 convergence is not tested to complete gradient descent,\n the iteration limit is used instead.\n iteration_lim = 1500: The maximum number of iterations to complete if convergence is \n not declared earlier.\n \n returns (theta, J_log, iteration_counter) as a tuple\n \n theta: the weights of the model as found by gradient descent\n J_log the cost function value for each set of weights tested during gradient descent\n iteration_counter: the number of iterations executed during gradient descent\n \"\"\"\n \n assert len(x) == len(y), \"X and y must have the same length\"\n \n tol = 1 + convergence_lim # Store the current convergence tolerance\n J_log = [] # Store a log of all the cost function results for later review\n new_theta = np.copy(theta) # Copy theta for later use\n \n prev_theta = np.copy(theta) # Initialise the previous theta values for later comparison\n n = len(x) # The number of samples\n \n iteration_counter = 0\n while ( (tol > convergence_lim) or (convergence_lim < 0)):\n \n # Stochastic gradient descent\n for i in range(n):\n \n # Calculate the updated value of theta\n tmp_theta = (alpha / n) * ((h_x(new_theta, x[i, :]) - y[i])*(x[i,:]))\n new_theta -= tmp_theta.reshape(theta.shape)\n\n # Compute the cost function for the new theta and store\n J_log.append(J(new_theta, x, y))\n\n # Determine if new_theta has converged\n tol = (np.sqrt(np.sum((new_theta - prev_theta) ** 2)))\n\n if iteration_counter > iteration_lim:\n break\n\n prev_theta = np.copy(new_theta)\n iteration_counter += 1\n \n return (new_theta, J_log, iteration_counter)\n```\n\n\n```python\n# Execute stochastic gradient descent\n# Construct some initial values of theta\ntheta = np.ones((2,1))\n# Insert x_0 = 1 into the array to calculate the hypothesis\nx = np.ones((X.shape))\nx[:,1] = X[:,0]\ny = np.copy(X[:,1])\n\n# Set an initial learning rate\nalpha = 0.01\n\n#Execute gradient descent\ntheta, J_log, iter_counter = stochastic_gradient_descent(theta, x, y, alpha)\nprint(\"Theta found via gradient descent:\\n%s\" % theta.__str__())\n```\n\n Theta found via gradient descent:\n [[ 21.38188976]\n [ 6.58546528]]\n\n\n\n```python\n# Plot the cost function and observe to 
confirm minimisation\nfig = plt.figure(figsize=(10,10))\nax = fig.add_subplot(111)\nax.plot(J_log, c=TINHATBEN_GRAY, linewidth=2)\nax.set_xlim([-5, iter_counter + 5])\nax.set_xlabel('Iteration')\nax.set_ylabel('J(theta)')\nax.set_title('Gradient Descent Cost Function')\n```\n\n\n```python\n# Compute the new hypothesis\nx_displ = np.ones((x.shape))\nx_displ[:,1] = np.arange(x.shape[0])\nh = h_x(theta, x_displ)\n\nfig = plt.figure(figsize=(10,10))\nax = fig.add_subplot(111)\nax.scatter(x=X[:,0], y=X[:,1], c=TINHATBEN_GRAY, edgecolors=TINHATBEN_GRAY)\nax.plot(x_displ[:,1], h, c=TINHATBEN_YELLOW)\nax.text(5, 40, r'$h(x)'+ \" = %0.3f + %0.3fx$\" % (theta[0], theta[1]), fontsize=25, color=TINHATBEN_YELLOW)\nax.set_title(\"Time at University vs Total Sleep Lost\")\nax.set_xlabel(\"Time at University (years)\")\nax.set_ylabel(\"Total Sleep Lost (hrs)\")\nadd_tinhatbendotcom(ax, (0,0))\nplt.savefig(\"model.png\", dpi=300)\n```\n\nCompare to the result obtained from scikit learn\n\n\n```python\n# Create the model\nregr = linear_model.LinearRegression(fit_intercept=True)\n\n# Train the model\nregr.fit(X[:,0].reshape(-1,1), X[:,1].reshape(-1,1))\n\n# The model\nprint(\"The model\")\nprint('h(x) = %0.3f + %0.3fx' % (regr.intercept_, regr.coef_[0]))\n\n```\n\n The model\n h(x) = 26.505 + 5.907x\n\n\nThis is consistent with the linear model we developed\n\n## Polynomial Fit\nSay the data wasn't so linear and we had a few extra points such as the following:\n\n\n```python\n# Compute the new hypothesis\nX = np.append(X, [[15, 90],[20, 92], [25, 98], [23, 100],[30,100],[31, 105]], axis=0)\n\nfig = plt.figure(figsize=(10,10))\nax = fig.add_subplot(111)\nax.scatter(x=X[:,0], y=X[:,1], c=TINHATBEN_GRAY, edgecolors=TINHATBEN_GRAY)\nax.set_title(\"Time at University vs Total Sleep Lost\")\nax.set_xlabel(\"Time at University (years)\")\nax.set_ylabel(\"Total Sleep Lost (hrs)\")\nadd_tinhatbendotcom(ax, (0,0))\nplt.savefig(\"poly_dat.jpg\", dpi=300)\n```\n\nWould a polynomial fit this data better? Let's try a second order polynomial by adding an additional weight to the hypothesis:\n\n$$\\large{ h(\\theta) = \\theta_0x_0 + \\theta_1x_1 + \\theta_2x_1^2 + \\theta_3x_1^3}$$\n\nIn this example we have simply taken the feature $x_1$ and added it as polynomial terms in the equation. We could however also use other features i.e. $x_2, x_3$ etc if we had additional info for the model.\n\n
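As an aside (this cell is not used in the walkthrough below), the cubic feature columns $x_1$, $x_1^2$ and $x_1^3$ can be built either by hand with numpy or with scikit-learn's `PolynomialFeatures`, which was imported at the top of this notebook:

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

x1 = np.array([1.0, 2.0, 3.0])                  # a single input feature

manual = np.column_stack([x1, x1**2, x1**3])    # columns x, x^2, x^3 by hand

poly = PolynomialFeatures(degree=3, include_bias=False)
auto = poly.fit_transform(x1.reshape(-1, 1))    # the same columns via scikit-learn

print(np.allclose(manual, auto))                # True
```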
    \n\nFirst we need to update the gradient descent and cost functions\n\n\n```python\ndef normalise_features(x):\n mn = np.mean(x, axis=0)\n stdev = np.std(x, axis=0)\n X = (x - mn) / stdev\n return X, mn, stdev\n\n# Define cost function for the polynomial function\ndef J_poly(theta, x, y):\n \"\"\"Compute the linear regression cost function given:\n theta: the weights of the model as a numpy array vector\n x: the input values for the model as a numpy array\n y: the observed response for the system as a numpy array\n \n returns J(theta) = 0.5 * sum(h_x(theta) - y)^2 \n \n \"\"\"\n n = x.shape[0]\n h = h_x(theta, x)\n J = np.dot((h - y).T, (h - y)) / float(2 * n)\n #J = (1 / (2 * m)) * (h - y)' * (h - y); \n return J\n\n# Define gradient descent for the polynomial function\ndef gradient_descent_poly(theta, x, y, alpha, convergence_lim = 0.01, iteration_lim = 1500):\n \"\"\" Execute batch gradient descent \n theta: the weights of the model as a numpy array vector\n x: the input values for the model as a numpy array\n y: the observed response for the system as a numpy array\n alpha: the learning rate for gradient descent\n convergence_lim = 0.01: convergence is declared once the difference in theta \n values on each iteration is within this value. If set \n to < 0 convergence is not tested to complete gradient descent,\n the iteration limit is used instead.\n iteration_lim = 1500: The maximum number of iterations to complete if convergence is \n not declared earlier.\n \n returns (theta, J_log, iteration_counter) as a tuple\n \n theta: the weights of the model as found by gradient descent\n J_log the cost function value for each set of weights tested during gradient descent\n iteration_counter: the number of iterations executed during gradient descent\n \"\"\"\n \n assert len(x) == len(y), \"X and y must have the same length\"\n \n tol = 1 + convergence_lim # Store the current convergence tolerance\n J_log = [] # Store a log of all the cost function results for later review\n new_theta = np.copy(theta) # Copy theta for later use\n \n prev_theta = np.copy(theta) # Initialise the previous theta values for later comparison\n n = len(x) # The number of samples\n \n iteration_counter = 0\n \n while ( (tol > convergence_lim) or (convergence_lim < 0)):\n \n # Determine the current estimate\n h = h_x(new_theta, x)\n \n # Update the values for theta\n k = ((alpha / n) * np.dot((h - y).T, x)).reshape(new_theta.shape)\n \n new_theta -= k\n \n # Calculate the updated value of theta\n #new_theta -= ((alpha / n) * d_d_theta_J(new_theta, x, y)).reshape(theta.shape)\n \n # Compute the cost function for the new theta and store\n J_log.append(J_poly(new_theta, x, y))\n \n # Determine if new_theta has converged\n tol = (np.sqrt(np.sum((new_theta - prev_theta) ** 2)))\n \n if iteration_counter > iteration_lim:\n break\n \n prev_theta = np.copy(new_theta)\n iteration_counter += 1\n \n return (new_theta, J_log, iteration_counter)\n```\n\n### Normalising the features\nAs we are now using more than one feature in our model $x_1, x_1^2$ and $x_1^3$ we need to normalise the features to ensure they are of the same scale. If we didn't then the squared and cubed terms would completely dominate the model. There are a number of different ways of normalising: for this example we are going to subtract the mean and divide by the standard deviation. 
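One detail worth spelling out (a small illustrative cell, not part of the original text): the mean and standard deviation are computed from the training features, and those same values must be reused when scaling any new inputs, exactly as the prediction code further down does.

```python
import numpy as np

train = np.array([[1.0, 1.0, 1.0],
                  [4.0, 16.0, 64.0],
                  [9.0, 81.0, 729.0]])    # e.g. columns x, x^2, x^3

mn = np.mean(train, axis=0)
stdev = np.std(train, axis=0)

train_scaled = (train - mn) / stdev        # used when fitting the model
new_point = np.array([5.0, 25.0, 125.0])
new_scaled = (new_point - mn) / stdev      # same mn and stdev, not recomputed

print(np.mean(train_scaled, axis=0))       # approximately 0 in every column
print(new_scaled)
```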
\n\n## Building the model\n\n\n```python\n# Construct some initial values of theta\ntheta = np.zeros((4,1))\n\n# Insert x_0 = 1 into the array to calculate the hypothesis\nx = np.ones((X.shape[0], X.shape[1] + 2))\nx[:,1] = X[:,0]\nx[:,2] = X[:,0] ** 2\nx[:,3] = X[:,0] ** 3\ny = np.copy(X[:,1])\n\nx[:,1:], mn, stdev = normalise_features(x[:,1:])\n\n# Set an initial learning rate\nalpha = 0.01\n\n#Execute gradient descent\ntheta, J_log, iter_counter = gradient_descent_poly(theta, x, y, alpha)\nprint(\"Theta found via gradient descent:\\n%s\" % theta.__str__())\n\npolynomial_function_error = J(theta, x, y)\n```\n\n Theta found via gradient descent:\n [[ 72.8916464 ]\n [ 28.51282928]\n [ 1.02618994]\n [-11.13507311]]\n\n\n\n```python\n# Plot the cost function and observe to confirm minimisation\nfig = plt.figure(figsize=(10,10))\nax = fig.add_subplot(111)\nax.plot(J_log, c=TINHATBEN_GRAY, linewidth=2)\nax.set_xlim([-5, iter_counter + 5])\nax.set_xlabel('Iteration')\nax.set_ylabel('J(theta)')\nax.set_title('Gradient Descent Cost Function')\n```\n\n## Making Predictions\n\n\n```python\n## Construct the values to predict\nx_test = np.zeros((7, 3))\nx_test[:,0] = np.linspace(0, 30, num=7)\nx_test[:,1] = x_test[:,0] ** 2\nx_test[:,2] = x_test[:,0] ** 3\n\nx_adjusted = (x_test - mn) / stdev # Don't forget to normalise\nx_displ = np.ones((x_test.shape[0], 4)) # Add the intercept term\nx_displ[:,1:] = x_adjusted\n\nh = h_x(theta, x_displ)\n\n\nfig = plt.figure(figsize=(10,10))\nax = fig.add_subplot(111)\ndata_fig = ax.scatter(x=X[:,0], y=X[:,1], c=TINHATBEN_GRAY, edgecolors=TINHATBEN_GRAY)\npredict_fig = ax.scatter(x_test[:,0], h, c=TINHATBEN_YELLOW, edgecolors=TINHATBEN_YELLOW)\nax.text(-3, 30, r'$h(x)'+ \" = %0.3f + %0.3fx + %0.3fx^2 %0.3fx^3$\" % (theta[0], theta[1], theta[2], theta[3]), \n fontsize = 25, color=TINHATBEN_YELLOW)\nax.set_title(\"Time at University vs Total Sleep Lost\")\nax.set_xlabel(\"Time at University (years)\")\nax.set_ylabel(\"Total Sleep Lost (hrs)\")\nax.legend([data_fig, predict_fig], ['Trained Data', 'Predictions'])\nadd_tinhatbendotcom(ax, (0,0)) \n```\n\nWe can see that the fitted polynomial is capable of making reasonable predictions, though the fit is worse around the $x_1 =$ 10 to 15 year mark. We could continue to increase the complexity of the polynomial to improve the fit, though we do risk overfitting the data. 
We will cover quality of fit in a later blog post.\n\nFinally, let's compare out result with scikit learn\n\n\n```python\n# Create the model\nregr = linear_model.LinearRegression(fit_intercept=True, normalize=True)\n\nx_poly = np.zeros((X.shape[0], 3))\nx_poly[:,0] = X[:,0]\nx_poly[:,1] = X[:,0] ** 2\nx_poly[:,2] = X[:,0] **3\n\n# Train the model\nregr.fit(x_poly, X[:,1].reshape(-1,1))\n\nh = regr.predict(x_test)\n\n# The model\nprint('h(x) = %0.3f + %0.3fx %0.3fx^2 + %0.3fx^3' % \n (regr.intercept_, regr.coef_[0][0], regr.coef_[0][1], regr.coef_[0][2]))\n\nfig = plt.figure(figsize=(10,10))\nax = fig.add_subplot(111)\ndata_fig = ax.scatter(x=X[:,0], y=X[:,1], c=TINHATBEN_GRAY, edgecolors=TINHATBEN_GRAY)\npredict_fig = ax.scatter(x_test[:,0], h, c=TINHATBEN_YELLOW, edgecolors=TINHATBEN_YELLOW)\nax.text(-3, 15, r'$h(x)'+ \" = %0.3f + %0.3fx %0.3fx^2 + %0.3fx^3$\" % \n (regr.intercept_, regr.coef_[0][0], regr.coef_[0][1], regr.coef_[0][2]),\n fontsize = 25, color=TINHATBEN_YELLOW)\nax.set_title(\"Time at University vs Total Sleep Lost\")\nax.set_xlabel(\"Time at University (years)\")\nax.set_ylabel(\"Total Sleep Lost (hrs)\")\nax.legend([data_fig, predict_fig], ['Trained Data', 'Predictions'])\nadd_tinhatbendotcom(ax, (0,0)) \n```\n\nIn the polynomial example we can see that the scikit-learn model is a much better fit. This is in part due to more advanced optimisation techniques and accounting for regularisation, which we will discuss later\n", "meta": {"hexsha": "2bcd1f9f9c900cd266f7b0efcbadfc7bb65d4ffd", "size": 358913, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "linear_regression.ipynb", "max_stars_repo_name": "tinhatben/linear_regression", "max_stars_repo_head_hexsha": "5ec810e6ae0958df1f754cf46565403165a9f327", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "linear_regression.ipynb", "max_issues_repo_name": "tinhatben/linear_regression", "max_issues_repo_head_hexsha": "5ec810e6ae0958df1f754cf46565403165a9f327", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "linear_regression.ipynb", "max_forks_repo_name": "tinhatben/linear_regression", "max_forks_repo_head_hexsha": "5ec810e6ae0958df1f754cf46565403165a9f327", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 332.019426457, "max_line_length": 39746, "alphanum_fraction": 0.9134219156, "converted": true, "num_tokens": 6577, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9489172644875641, "lm_q2_score": 0.8840392771633078, "lm_q1q2_score": 0.8388801325853695}} {"text": "## Using lambdify for plotting expressions\nThe syntethic isotope Technetium-99m is used in medical diagnostics ([scintigraphy](https://en.wikipedia.org/wiki/Nuclear_medicine)):\n$$\n^{99m}Tc \\overset{\\lambda_1}{\\longrightarrow} \\,^{99}Tc \\overset{\\lambda_2}{\\longrightarrow} \\,^{99}Ru \\\\\n\\lambda_1 = 3.2\\cdot 10^{-5}\\,s^{-1} \\\\\n\\lambda_2 = 1.04 \\cdot 10^{-13}\\,s^{-1} \\\\\n$$\nSymPy can solve the differential equations describing the amounts versus time analytically.\nLet's denote the concentrations of each isotope $x(t),\\ y(t)\\ \\&\\ z(t)$ respectively.\n\n\n```python\nimport sympy as sym\nsym.init_printing()\n```\n\n\n```python\nsymbs = t, l1, l2, x0, y0, z0 = sym.symbols('t lambda_1 lambda_2 x0 y0 z0', real=True, nonnegative=True)\nfuncs = x, y, z = [sym.Function(s)(t) for s in 'xyz']\ninits = [f.subs(t, 0) for f in funcs]\ndiffs = [f.diff(t) for f in funcs]\nexprs = -l1*x, l1*x - l2*y, l2*y\neqs = [sym.Eq(diff, expr) for diff, expr in zip(diffs, exprs)]\neqs\n```\n\n\n```python\nsolutions = sym.dsolve(eqs)\nsolutions\n```\n\nnow we need to determine the integration constants from the intial conditions:\n\n\n```python\nintegration_constants = set.union(*[sol.free_symbols for sol in solutions]) - set(symbs)\nintegration_constants\n```\n\n\n```python\ninitial_values = [sol.subs(t, 0) for sol in solutions]\ninitial_values\n```\n\n\n```python\nconst_exprs = sym.solve(initial_values, integration_constants)\nconst_exprs\n```\n\n\n```python\nanalytic = [sol.subs(const_exprs) for sol in solutions]\nanalytic\n```\n\n## Exercise: Create a function from a symbolic expression\nWe want to plot the time evolution of x, y & z from the above analytic expression (called ``analytic`` above):\n\n\n```python\nfrom math import log10\nimport numpy as np\nyear_s = 365.25*24*3600\ntout = np.logspace(0, log10(3e6*year_s), 500) # 1 s to 3 million years\n```\n\n\n```python\n%load_ext scipy2017codegen.exercise\n```\n\n*Use either the *``%exercise``* or *``%load``* magic to get the exercise / solution respecitvely:*\n\nReplace **???** so that `f(t)` evaluates $x(t),\\ y(t)\\ \\&\\ z(t)$. 
Hint: use the right hand side of the equations in ``analytic`` (use the attribute ``rhs`` of the items in ``anayltic``):\n\n\n```python\n# %exercise exercise_Tc99.py\nxyz_num = sym.lambdify([t, l1, l2, *inits], [eq.rhs for eq in analytic])\nyout = xyz_num(tout, 3.2e-5, 1.04e-13, 1, 0, 0)\n```\n\n\n```python\nimport matplotlib.pyplot as plt\n%matplotlib inline\nfig, ax = plt.subplots(1, 1, figsize=(14, 4))\nax.loglog(tout.reshape((tout.size, 1)), np.array(yout).T)\nax.legend(['$^{99m}Tc$', '$^{99}Tc$', '$^{99}Ru$'])\nax.set_xlabel('Time / s')\nax.set_ylabel('Concentration / a.u.')\n_ = ax.set_ylim([1e-11, 2])\n```\n", "meta": {"hexsha": "efca427e4a8e1381e89f1da6624c1f925bcb5939", "size": 5233, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/23-lambdify-Tc99m.ipynb", "max_stars_repo_name": "gvvynplaine/scipy-2017-codegen-tutorial", "max_stars_repo_head_hexsha": "4bd0cdb1bdbdc796bb90c08114a00e390b3d3026", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 56, "max_stars_repo_stars_event_min_datetime": "2017-05-31T21:01:22.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-14T04:26:01.000Z", "max_issues_repo_path": "notebooks/23-lambdify-Tc99m.ipynb", "max_issues_repo_name": "gvvynplaine/scipy-2017-codegen-tutorial", "max_issues_repo_head_hexsha": "4bd0cdb1bdbdc796bb90c08114a00e390b3d3026", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 26, "max_issues_repo_issues_event_min_datetime": "2017-06-06T19:05:04.000Z", "max_issues_repo_issues_event_max_datetime": "2018-08-29T23:15:19.000Z", "max_forks_repo_path": "notebooks/23-lambdify-Tc99m.ipynb", "max_forks_repo_name": "gvvynplaine/scipy-2017-codegen-tutorial", "max_forks_repo_head_hexsha": "4bd0cdb1bdbdc796bb90c08114a00e390b3d3026", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 29, "max_forks_repo_forks_event_min_datetime": "2017-06-06T14:45:12.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-25T14:11:06.000Z", "avg_line_length": 24.8009478673, "max_line_length": 196, "alphanum_fraction": 0.5474871011, "converted": true, "num_tokens": 857, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9489172644875641, "lm_q2_score": 0.8840392725805822, "lm_q1q2_score": 0.8388801282367421}} {"text": "# Breve introducci\u00f3n a Python simb\u00f3lico (paquete SymPy)\n\n## Introducci\u00f3n\n\nEl paquete SymPy permite realizar en Python tareas como la manipulaci\u00f3n de expresiones matem\u00e1ticas, derivaci\u00f3n, etc. \n\nProcediendo como se indica a continuaci\u00f3n, podemos iniciar una sesion de SymPy adaptada a IPython Notebook. Por defecto, s\u00f3lo se cargar\u00e1n algunas variables simb\u00f3licas: \n\n* x, y, z, t, k, m y n: variables n\u00famericas, \n* f, g y h: nombres simb\u00f3licos para funciones funciones\n\nSi se necesita a\u00f1adir otras variables simb\u00f3licas se pueden usar, por ejemplo, las funciones `var` o `symbols`. Para m\u00e1s detalles, v\u00e9ase por ejemplo la [secci\u00f3n *basic operations*](http://docs.sympy.org/latest/tutorial/basic_operations.html) (localizada en el [tutorial de SymPy](http://docs.sympy.org/latest/tutorial)).\n\nA continuaci\u00f3n, se detalla c\u00f3mo activar el entorno simb\u00f3lico. Pero antes, una observaci\u00f3n importante: una vez se inicializa SymPy, el entorno simb\u00f3lico estar\u00e1 activado permanentemente (salvo que se reinicie el n\u00facleo). 
Por eso, si nuestro inter\u00e9s principal es el c\u00e1lculo num\u00e9rico en coma flotante, es una buena costumbre el utilizar SyPy en ficheros independientes, en los que exclusivamente se realicen tareas simb\u00f3licas (evitando posibles conflictos con paquetes num\u00e9ricos como *NumPy* o *Scipy*).\n\n## Activaci\u00f3n de SymPy y primeros ejemplos\n\n\n```python\nimport sympy # Cargar paquete de Python Simb\u00f3lico\nsympy.init_session() # Iniciar sesi\u00f3n simb\u00f3lica\n```\n\n IPython console for SymPy 0.7.6 (Python 3.5.2-64-bit) (ground types: python)\n \n These commands were executed:\n >>> from __future__ import division\n >>> from sympy import *\n >>> x, y, z, t = symbols('x y z t')\n >>> k, m, n = symbols('k m n', integer=True)\n >>> f, g, h = symbols('f g h', cls=Function)\n >>> init_printing()\n \n Documentation can be found at http://www.sympy.org\n\n\nIntroducimos una primera expresi\u00f3n simb\u00f3lica (usando las variables x e y):\n\n\n```python\n1/x + 1/y\n```\n\n\n\n\n 1/y + 1/x\n\n\n\nFactorizamos esta expresi\u00f3n. Para ello usamos la funci\u00f3n `factor()` sobre la expresi\u00f3n anterior (que se referencia mediante el operador `_` (gui\u00f3n bajo)). Podemos definir nuevas variables y realizar distinto tipo de operaciones simb\u00f3licas \n\n\n```python\nfactor(_) # El operador _ se\u00f1ala a la salida anterior\n```\n\n\n\n\n (x + y)/(x*y)\n\n\n\n\n```python\ncociente = _\ncociente**3\n```\n\n\n\n\n (x + y)**3/(x**3*y**3)\n\n\n\n\n```python\nexpand(sqrt(cociente**3))\n```\n\n\n\n\n sqrt(y**(-3) + 3/(x*y**2) + 3/(x**2*y) + x**(-3))\n\n\n\n## Derivadas e integrales simb\u00f3licas\n\nA continuaci\u00f3n, definimos una funci\u00f3n y calculamos su derivada:\n\n\n```python\ndef f(x,y): return sin(cociente**5)\nprint(\"f(x,y) =\", f(x,y))\nprint(\"Derivada parcial respecto a x:\")\ndiff(f(x,y), x) # Derivada respecto a x\n```\n\n f(x,y) = sin((x + y)**5/(x**5*y**5))\n Derivada parcial respecto a x:\n\n\n\n\n\n (5*(x + y)**4/(x**5*y**5) - 5*(x + y)**5/(x**6*y**5))*cos((x + y)**5/(x**5*y**5))\n\n\n\nMuchas de las funciones simb\u00f3licas est\u00e1n disponibles usando la sintaxis de orientaci\u00f3n a objetos (tambi\u00e9n `factor()` y `dif()`, funciones que vimos antes). Por ejemplo, para tomar $x=\\log(\\pi)$ en la funci\u00f3n anterior (sustituir $x$ por $\\log(\\pi)$) podemos usar la funci\u00f3n `subs()`:\n\n\n```python\nf(x,y).subs(x,log(pi))\n```\n\nV\u00e9ase que la variable `pi` ha cambiado y ahora es simb\u00f3lica. Tambi\u00e9n las funciones habituales (como `log()`, `sin()`, `cos()`, etc.) son ahora simb\u00f3licas:\n\n\n```python\ncos(1+1)\n```\n\nPodemos calcular integrales simb\u00f3licas f\u00e1cilmente:\n\n\n```python\nintegrate(x*E**(x**2),x) # En simb\u00f3lico, el n\u00famero e se escribe con may\u00fasculas.\n```\n\nPor \u00faltimo: la funci\u00f3n `plot()` es actualizada por el paquete `SymPy` y act\u00faa de de forma diferente a la funci\u00f3n habtual (contenida en el paquete `matplotlib`, que suele ser cargado autom\u00e1ticamente por *Ipython Notebook*). 
Un ejemplo:\n\n\n```python\nplot(x**2*cos(1/x), (x, -pi/4, pi/4))\n```\n\n### Para saber m\u00e1s,\nv\u00e9ase el [tutorial de SymPy](http://docs.sympy.org/latest/tutorial/index.html) (o [documentaci\u00f3n adicional](http://docs.sympy.org/latest/index.html) disponible en internet).\n", "meta": {"hexsha": "0688a23dd85acef9d8a85ec4102d2146b276e040", "size": 7787, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Documentos.bak/Intro-SymPy.ipynb", "max_stars_repo_name": "rrgalvan/Python4Maths", "max_stars_repo_head_hexsha": "0412f34a2c70ab1a16a738a23581f52bcf5c1c84", "max_stars_repo_licenses": ["CC-BY-3.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Documentos.bak/Intro-SymPy.ipynb", "max_issues_repo_name": "rrgalvan/Python4Maths", "max_issues_repo_head_hexsha": "0412f34a2c70ab1a16a738a23581f52bcf5c1c84", "max_issues_repo_licenses": ["CC-BY-3.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Documentos.bak/Intro-SymPy.ipynb", "max_forks_repo_name": "rrgalvan/Python4Maths", "max_forks_repo_head_hexsha": "0412f34a2c70ab1a16a738a23581f52bcf5c1c84", "max_forks_repo_licenses": ["CC-BY-3.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 25.699669967, "max_line_length": 505, "alphanum_fraction": 0.5551560293, "converted": true, "num_tokens": 1222, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.909907010924213, "lm_q2_score": 0.9219218407544306, "lm_q1q2_score": 0.8388631464266123}} {"text": "# Line Fitting\n\n* We wish to fit a line to data where we are given pairs\n$$\n(y_i, x_i)\n$$\nfor $i=1\\dots N$ (or $i=0\\dots N-1$)\n\n* Model\n$$\nf(x; w_1, w_0) = w_0 + w_1 x \n$$\n\n\n> $x$ : Input \n\n> $w_1$: The slope\n\n> $w_0$: Intercept\n\n$f_i \\equiv f(x_i; w_1, w_0)$\n\n\n### Example Dataset\n\nNumber of registered cars in Turkey, as a function of years.\n\n|$i$|$y_i$|$x_i$|\n|-------------|\n|Index|Number of Cars (In Millions)|Years from 1995|\n\n\n\n```python\n%matplotlib inline\n\nimport scipy as sc\nimport numpy as np\nimport pandas as pd\nimport matplotlib as mpl\nimport matplotlib.pylab as plt\n\ndf_arac = pd.read_csv(u'data/arac.csv',sep=';')\n```\n\n\n```python\nBaseYear = 1995\nx = np.matrix(df_arac.Year[31:]).T-BaseYear\ny = np.matrix(df_arac.Car[31:]).T/1000000\n\nplt.plot(x+BaseYear, y, 'o-')\nplt.xlabel('Year')\nplt.ylabel('Number of Cars (Millions)')\n\nplt.show()\n```\n\n\n```python\n\nw_0 = 2.27150786\nw_1 = 0.37332256\n\nf = w_1*x + w_0\nplt.plot(x+BaseYear, y, 'o-')\nplt.plot(x+BaseYear, f, 'r')\n\n\nplt.xlabel('Years')\nplt.ylabel('Number of Cars (Millions)')\n\nplt.show()\n\ne = y - f\n\nprint('Total Error = ', e.T*e/2)\n\n```\n\n* Fitting the model: estimating $w = [w_0, w_1]$\n\n* As there is noise, we can't hope to fit our model exactly\n\n* Define the error for each observation \n\n$$e_i = \\frac{1}{2}(y_i - f(x_i; w))^2$$\n\nsquared Euclidian norm. The constant $1/2$ is useful for a cosmetical simplification.\n\n* Total Error (sum over all data points)\n\n$$\nE(w) = \\frac{1}{2} \\sum_i (y_i - f(x_i; w))^2\n$$\n\n* We can minimize the total error by adjusting $w_0$ and $w_1$\n\n### Visualization of the error surface\n\nA good approach for low dimensional problems is the visualization of the error surface. 
We evaluate her exhaustively the error for each possible setting of the parameter $w$. \n\n\n```python\nfrom itertools import product\n\n# Setup the vandermonde matrix\nN = len(x)\nA = np.hstack((np.ones((N,1)), x))\n\nleft = -5\nright = 25\nbottom = -4\ntop = 6\nstep = 0.05\nW0 = np.arange(left,right, step)\nW1 = np.arange(bottom,top, step)\n\nErrSurf = np.zeros((len(W1),len(W0)))\n\nfor i,j in product(range(len(W1)), range(len(W0))):\n e = y - A*np.matrix([W0[j], W1[i]]).T\n ErrSurf[i,j] = e.T*e/2\n\nplt.figure(figsize=(10,10))\nplt.imshow(ErrSurf, interpolation='nearest', \n vmin=0, vmax=1000,origin='lower',\n extent=(left,right,bottom,top))\nplt.xlabel('w0')\nplt.ylabel('w1')\nplt.colorbar()\nplt.show()\n```\n\n## Idea: Least Squares\n\n* Compute the derivative of the total error w.r.t. $w_0$ and $w_1$ and solve the equations\n\n__Derivation comes here__\n\n## Vector Notation\n\n\\begin{eqnarray}\n\\left(\n\\begin{array}{c}\ny_0 \\\\ y_1 \\\\ \\vdots \\\\ y_{N-1} \n\\end{array}\n\\right)\n\\approx\n\\left(\n\\begin{array}{cc}\n1 & x_0 \\\\ 1 & x_1 \\\\ \\vdots \\\\ 1 & x_{N-1} \n\\end{array}\n\\right) \n\\left(\n\\begin{array}{c}\n w_0 \\\\ w_1 \n\\end{array}\n\\right)\n\\end{eqnarray}\n\n\\begin{eqnarray}\ny \\approx A w\n\\end{eqnarray}\n\n* Error\n\n$e = y - Aw$\n\n\\begin{eqnarray}\nE(w) & = & \\frac{1}{2}e^\\top e = \\frac{1}{2}(y - Aw)^\\top (y - Aw)\\\\\n& = & \\frac{1}{2}y^\\top y - \\frac{1}{2} y^\\top Aw - \\frac{1}{2} w^\\top A^\\top y + \\frac{1}{2} w^\\top A^\\top Aw \\\\\n& = & \\frac{1}{2} y^\\top y - y^\\top Aw + \\frac{1}{2} w^\\top A^\\top Aw \\\\\n\\end{eqnarray}\n\n* Gradient: Derivative of a function with respect to a vector \n\\begin{eqnarray}\n\\frac{d f}{d w } & = & \\left(\\begin{array}{c}\n \\partial f/\\partial w_0 \\\\ \\partial f/\\partial w_1 \\\\ \\vdots \\\\ \\partial f/\\partial w_{K-1}\n\\end{array}\\right)\n\\end{eqnarray}\n\n* Gradient of the inner product\n\\begin{eqnarray}\n\\frac{d}{d w }(h^\\top w) & = & h\n\\end{eqnarray}\n\n* Gradient of a Quadratic form\n\\begin{eqnarray}\n\\frac{d}{d w }(w^\\top K w) & = & (K+K^\\top) w\n\\end{eqnarray}\n \n* We derive the gradient of the total error as\n\\begin{eqnarray}\n\\frac{d}{d w }E(w) & = & \\frac{d}{d w }(\\frac{1}{2} y^\\top y) + \\frac{d}{d w }(- y^\\top Aw) + \\frac{d}{d w }(\\frac{1}{2} w^\\top A^\\top Aw) \\\\\n& = & 0 - A^\\top y + A^\\top A w = - A^\\top (y - Aw) \\\\\n& = & - A^\\top e \n& \\equiv & \\nabla E(w)\n\\end{eqnarray}\n\n### Least Squares solution: Directly solving the equations \n* Find\n\n\\begin{eqnarray}\nw^* & = & \\arg\\min_{w} E(w)\n\\end{eqnarray}\n\n* Set the derivative to zero and solve for $w^*$\n\n\\begin{eqnarray}\n0 & = & - A^\\top y + A^\\top A w^* \\\\\nA^\\top y & = & A^\\top A w^* \\\\\nw^* & = & (A^\\top A)^{-1} A^\\top y \n\\end{eqnarray}\n\n* Projection interpretation:\n\n\\begin{eqnarray}\nf & = A w^* = A (A^\\top A)^{-1} A^\\top y \n\\end{eqnarray}\n\n$P$ is the orthogonal projection matrix onto the space spanned by the columns of $A$\n\\begin{eqnarray}\nP & = & A (A^\\top A)^{-1} A^\\top \n\\end{eqnarray}\n\n\n```python\n# Solving the Normal Equations\n\n# Setup the vandermonde matrix\nN = len(x)\nA = np.hstack((np.ones((N,1)), x))\n\n#plt.imshow(A, interpolation='nearest')\n# Solve the least squares problem\nw_ls,E,rank,sigma = np.linalg.lstsq(A, y)\n\nprint('Parameters: \\nw0 = ', w_ls[0],'\\nw1 = ', w_ls[1] )\nprint('Error:', E/2)\n\n```\n\n Parameters: \n w0 = [[ 2.28150786]] \n w1 = [[ 0.37332256]]\n Error: [[ 1.31560622]]\n\n\n### Alternative Solution: 
Gradient Descent (GD)\n\n* Here, we can iteratively search for better solutions by moving in the negative direction of the gradient\n\n* For solving linear systems, GD is a quite poor algorithm. However, it is applicable to many other problems so it is a good idea to know about it and learn it in this context.\n\n* Iterate until convergence \n\\begin{align}\nw(0) & = \\text{some initial value} \\\\\n\\text{for}\\;\\; & \\tau = 1, 2,\\dots \\\\\n& w(\\tau) = w(\\tau-1) - \\eta \\nabla E(w(\\tau-1))\n\\end{align}\n\n* $\\eta$ is the learning rate parameter, to be chosen sufficiently small\n\n\n```python\n# An implementation of Gradient Descent for solving linear a system\n\nleft = -5\nright = 25\nbottom = -4\ntop = 6\n\n# Setup the vandermonde matrix\nN = len(x)\nA = np.hstack((np.ones((N,1)), x))\n\n# Starting point\nw = np.matrix('[15; -6]')\n\n# Number of iterations\nEPOCH = 5000\n\n# Learning rate: The following is the largest possible fixed rate for this problem\neta = 0.000696\n\nError = np.zeros((EPOCH))\nW = np.zeros((2,EPOCH))\n\nfor tau in range(EPOCH):\n # Calculate the error\n e = y - A*w \n \n # Store the intermediate results\n W[0,tau] = w[0]\n W[1,tau] = w[1]\n Error[tau] = (e.T*e)/2\n \n # Compute the gradient descent step\n g = -A.T*e\n w = w - eta*g\n\n #print(w.T)\n \nw_star = w \nplt.figure(figsize=(8,8))\nplt.semilogy(Error)\nplt.xlabel('Iteration tau')\nplt.ylabel('Error')\nplt.show()\n\n\nplt.figure(figsize=(8,8))\nplt.imshow(ErrSurf, interpolation='nearest', \n vmin=0, vmax=1000,origin='lower',\n extent=(left,right,bottom,top))\nplt.xlabel('w0')\nplt.ylabel('w1')\n\nln = plt.Line2D(W[0,:300:1], W[1,:300:1], marker='o',markerfacecolor='w')\n\nplt.gca().add_line(ln)\nplt.show()\n\n```\n\nThe illustration shows the convergence of GD with learning rate near the limit where the convergence is oscillatory.\n\n\n```python\n#f = A*w_ls\n\nf = A*w_star\nplt.plot(x+BaseYear, y, 'o-')\nplt.plot(x+BaseYear, f, 'r')\n\n\nplt.xlabel('Years')\nplt.ylabel('Number of Cars')\n\nplt.show()\n\ne = y - f\n\nprint('Total Error = ', e.T*e/2)\n```\n\n### Analysis of convergence of Gradient descent\n\nLet the gradient be\n$\\nabla E(w) = g(w)$\n\nWe have already derived that\n$g(w) = A^\\top (Aw - y)$\n\nFor constant $\\eta$, GD executes the following iteration\n\n$$\nw_t = w_{t-1} - \\eta g(w_{t-1})\n$$\n\n\n$$\nw_t = (I - \\eta A^\\top A) w_{t-1} + \\eta A^\\top y\n$$\n\nThis is a fixed point equation of form\n$$\nw_t = T(w_{t-1}) \n$$\nwhere $T$ is an affine transformation. \n\nWe will investigate under which conditions $T$ becomes a contraction, i.e. for two different\nparameter $w$ and $w'$\n\n$$\n\\| T(w) - T(w') \\| \\leq L_\\eta \\|w-w' \\|\n$$\n\nif $L_\\eta < 1$, then the distance shrinks. Hence the mapping converges to a fixed point (this is a consequence of a deeper result in analysis called the Brouwer fixed-point theorem (https://en.wikipedia.org/wiki/Brouwer_fixed-point_theorem))\n\n$$\nT(w) = (I - \\eta A^\\top A) w + \\eta A^\\top y\n$$\n$$\nT(w') = (I - \\eta A^\\top A) w' + \\eta A^\\top y\n$$\n\n$$\n\\| T(w) - T(w') \\| = \\| (I - \\eta A^\\top A) (w-w') \\| \\leq \\| I - \\eta A^\\top A \\| \\| w-w' \\|\n$$\n\nWhen the norm of the matrix $\\| I - \\eta A^\\top A \\| < 1$ we have convergence. 
Here we take the operator norm, i.e., the magnitude of the largest eigenvalue.\n\nBelow, we plot the absolute value of the maximum eigenvalues of $I - \\eta A^\\top A$ as a function of $\\eta$.\n\n\n```python\n\nleft = 0.00001\nright = 0.0008\nstp = 0.000005\n\nN = 100\nETA = np.linspace(left,right,N)\n\ndef compute_largest_eig(ETA, A):\n \n N = len(ETA)\n LAM = np.zeros(N)\n D = A.shape[1]\n \n for i,eta in enumerate(ETA):\n #print(eta)\n lam,v = np.linalg.eig(np.eye(D) - eta*A.T*A)\n LAM[i] = np.max(np.abs(lam))\n \n return LAM\n\nLAM = compute_largest_eig(ETA, A)\n\nplt.plot(ETA, LAM)\nplt.plot(ETA, np.ones((N,1)))\nplt.gca().set_ylim([0.98, 1.02])\nplt.xlabel('eta')\nplt.ylabel('absolute value of the largest eigenvalue')\nplt.show()\n\n\n```\n\n* Note that GD is sensetive to scaling of data\n* For example, if we would not have shifted the $x$ axis our original data, GD might not have worked. The maximum eigenvalue is very close to $1$ for all $\\eta$ upto numerical precision\n\n\n```python\n# Try to fit with GD to the original data\nBaseYear2 = 0\nx2 = np.matrix(df_arac.Year[31:]).T-BaseYear2\n\n# Setup the vandermonde matrix\nN = len(x2)\nA = np.hstack((np.ones((N,1)), x2))\n\nleft = -8\nright = -7.55\nN = 100\nETA = np.logspace(left,right,N)\n\nLAM = compute_largest_eig(ETA, A)\n\nplt.plot(ETA, LAM)\nplt.plot(ETA, np.ones((N,1)))\nplt.gca().set_ylim([0.98, 1.02])\nplt.xlabel('eta')\nplt.ylabel('absolute value of the largest eigenvalue')\nplt.show()\n\n\n```\n\n## Fitting a polynomial \n\n### Parabola\n\\begin{eqnarray}\n\\left(\n\\begin{array}{c}\ny_0 \\\\ y_1 \\\\ \\vdots \\\\ y_{N-1} \n\\end{array}\n\\right)\n\\approx\n\\left(\n\\begin{array}{ccc}\n1 & x_0 & x_0^2 \\\\ 1 & x_1 & x_1^2 \\\\ \\vdots \\\\ 1 & x_{N-1} & x_{N-1}^2 \n\\end{array}\n\\right) \n\\left(\n\\begin{array}{c}\n w_0 \\\\ w_1 \\\\ w_2\n\\end{array}\n\\right)\n\\end{eqnarray}\n\n### Polynomial of order $K$\n\\begin{eqnarray}\n\\left(\n\\begin{array}{c}\ny_0 \\\\ y_1 \\\\ \\vdots \\\\ y_{N-1} \n\\end{array}\n\\right)\n\\approx\n\\left(\n\\begin{array}{ccccc}\n1 & x_0 & x_0^2 & \\dots & x_0^K \\\\ 1 & x_1 & x_1^2 & \\dots & x_1^K\\\\ \\vdots \\\\ 1 & x_{N-1} & x_{N-1}^2 & \\dots & x_{N-1}^K \n\\end{array}\n\\right) \n\\left(\n\\begin{array}{c}\n w_0 \\\\ w_1 \\\\ w_2 \\\\ \\vdots \\\\ w_K\n\\end{array}\n\\right)\n\\end{eqnarray}\n\n\n\\begin{eqnarray}\ny \\approx A w\n\\end{eqnarray}\n\nThe solution is identical\n\n\n```python\n# Setup the vandermonde matrix\nN = len(x)\ndegree = 9\n#A = np.hstack((np.power(x,0), np.power(x,1), np.power(x,2)))\nA = np.hstack((np.power(x,i) for i in range(degree+1)))\nxx = np.matrix(np.linspace(-2,25)).T\nA2 = np.hstack((np.power(xx,i) for i in range(degree+1)))\n\n#plt.imshow(A, interpolation='nearest')\n# Solve the least squares problem\nw_ls,E,rank,sigma = np.linalg.lstsq(A, y)\n\nf = A2*w_ls\nplt.plot(x+BaseYear, y, 'o-')\nplt.plot(xx+BaseYear, f, 'r')\n\n\nplt.xlabel('Years')\nplt.ylabel('Number of Cars')\n\nplt.show()\n\n```\n\n# Ignore Cells below\n\n\n```python\neta = 0.000346\nlam,v = np.linalg.eig(np.eye(2) - eta*2*A.T*A)\n\nlam\n\nA.shape[1]\n\nnp.logspace(-8,-5,100)\n```\n\n\n```python\nnp.linspace(0,5,6)\n```\n\n\n\n\n array([ 0., 1., 2., 3., 4., 5.])\n\n\n", "meta": {"hexsha": "20fe6793952c723981a90dbf528f63b68a6c99fc", "size": 124150, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Regression.ipynb", "max_stars_repo_name": "tugbatugbatugba/data-mining", "max_stars_repo_head_hexsha": "e4db929b075bc3cfd5c290b2ccd551da3cba1041", 
"max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Regression.ipynb", "max_issues_repo_name": "tugbatugbatugba/data-mining", "max_issues_repo_head_hexsha": "e4db929b075bc3cfd5c290b2ccd551da3cba1041", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Regression.ipynb", "max_forks_repo_name": "tugbatugbatugba/data-mining", "max_forks_repo_head_hexsha": "e4db929b075bc3cfd5c290b2ccd551da3cba1041", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 155.5764411028, "max_line_length": 24814, "alphanum_fraction": 0.8755296013, "converted": true, "num_tokens": 3986, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9099070084811307, "lm_q2_score": 0.9219218402181233, "lm_q1q2_score": 0.8388631436862916}} {"text": "```\nimport sympy\nfrom sympy import Symbol\n\nfrom sympy.abc import x, g, b\n\nxdd = (0.0034*g*sympy.exp(-x/22000)*sympy.diff(x)**2) / (2*b) - g\ny=sympy.symbols('y',cls=sympy.Function)\n\nxdd = sympy.diff(y(x), x, x) - (0.0034*g*sympy.exp(-x/22000)*sympy.diff(x)**2) + g\n\n\nsympy.pprint(xdd)\n\n```\n\n -x \n \u2500\u2500\u2500\u2500\u2500 2 \n 22000 d \n g - 0.0034\u22c5g\u22c5\u212f + \u2500\u2500\u2500(y(x))\n 2 \n dx \n\n", "meta": {"hexsha": "6ff0f981a662efd9777d69aeab2585c86f273059", "size": 1252, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "experiments/zarchan_ball.ipynb", "max_stars_repo_name": "VladPodilnyk/Kalman-and-Bayesian-Filters-in-Python", "max_stars_repo_head_hexsha": "1b47e2c27ea0a007e8c36d9f6d453c47402b3615", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 12315, "max_stars_repo_stars_event_min_datetime": "2015-01-07T12:06:26.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T11:03:03.000Z", "max_issues_repo_path": "experiments/zarchan_ball.ipynb", "max_issues_repo_name": "VladPodilnyk/Kalman-and-Bayesian-Filters-in-Python", "max_issues_repo_head_hexsha": "1b47e2c27ea0a007e8c36d9f6d453c47402b3615", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 356, "max_issues_repo_issues_event_min_datetime": "2015-01-09T18:53:02.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-14T20:21:06.000Z", "max_forks_repo_path": "experiments/zarchan_ball.ipynb", "max_forks_repo_name": "VladPodilnyk/Kalman-and-Bayesian-Filters-in-Python", "max_forks_repo_head_hexsha": "1b47e2c27ea0a007e8c36d9f6d453c47402b3615", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 3419, "max_forks_repo_forks_event_min_datetime": "2015-01-02T20:47:47.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-31T18:07:33.000Z", "avg_line_length": 25.04, "max_line_length": 94, "alphanum_fraction": 0.4169329073, "converted": true, "num_tokens": 180, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639653084244, "lm_q2_score": 0.8633916205190225, "lm_q1q2_score": 0.8388401864455279}} {"text": "# Aerospace Design via Quasiconvex Optimization \n\nConsider a triangle, or a wedge, located within a hypersonic flow. 
A standard aerospace design optimization problem is to design the wedge to maximize the lift-to-drag ratio (L/D) (or conversely minimize the D/L ratio), subject to certain geometric constraints. In this example, the wedge is known to have a constant hypotenuse, and our job is to choose its width and height.\n\nThe drag-to-lift ratio is given by\n\n\\begin{equation}\n\\frac{\\mathrm{D}}{\\mathrm{L}} = \\frac{\\mathrm{c_d}}{\\mathrm{c_l}},\n\\end{equation}\n\nwhere $\\mathrm{c_d}$ and $\\mathrm{c_l}$ are drag and lift coefficients, respectively, that are obtained by integrating the projection of the pressure coefficient in directions parallel to, and perpendicular to, the body.\n\nIt turns out that the the drag-to-lift ratio is a quasilinear function, as we'll now show. We will assume the pressure coefficient is given by the Newtonian sine-squared law for whetted areas of the body,\n\n\\begin{equation}\n\\mathrm{c_p} = 2(\\hat{v}\\cdot\\hat{n})^2\n\\end{equation}\n\nand elsewhere $\\mathrm{c_p} = 0$. Here, $\\hat{v}$ is the free stream direction, which for simplicity we will assume is parallel to the body so that, $\\hat{v} = \\langle 1, 0 \\rangle$, and $\\hat{n}$ is the local unit normal. For a wedge defined by width $\\Delta x$, and height $\\Delta y$, \n\n\\begin{equation}\n\\hat{n} = \\langle -\\Delta y/s,-\\Delta x/s \\rangle\n\\end{equation}\n\nwhere $s$ is the hypotenuse length. Therefore,\n\n\\begin{equation}\n\\mathrm{c_p} = 2((1)(-\\Delta y/s)+(0)(-\\Delta x/s))^2 = \\frac{2 \\Delta y^2}{s^2}\n\\end{equation}\n\nThe lift and drag coefficients are given by\n\n\\begin{align*}\n\\mathrm{c_d} &= \\frac{1}{c}\\int_0^s -\\mathrm{c_p}\\hat{n}_x \\mathrm{d}s \\\\\n\\mathrm{c_l} &= \\frac{1}{c}\\int_0^s -\\mathrm{c_p}\\hat{n}_y \\mathrm{d}s\n\\end{align*}\n\nWhere $c$ is the reference chord length of the body. Given that $\\hat{n}$, and therefore $\\mathrm{c_p}$ are constant over the whetted surface of the body,\n\n\\begin{align*}\n\\mathrm{c_d} &= -\\frac{s}{c}\\mathrm{c_p}\\hat{n}_x = \\frac{s}{c}\\frac{2 \\Delta y^2}{s^2}\\frac{\\Delta y}{s} \\\\\n\\mathrm{c_l} &= -\\frac{s}{c}\\mathrm{c_p}\\hat{n}_y = \\frac{s}{c}\\frac{2 \\Delta y^2}{s^2}\\frac{\\Delta x}{s}\n\\end{align*}\n\nAssuming $s=1$, so that $\\Delta y = \\sqrt{1-\\Delta x^2}$, plugging in the above into the equation for $D/L$, we obtain \n\n\\begin{equation}\n\\frac{\\mathrm{D}}{\\mathrm{L}} = \\frac{\\Delta y}{\\Delta x} = \\frac{\\sqrt{1-\\Delta x^2}}{\\Delta x} = \\sqrt{\\frac{1}{\\Delta x^2}-1}.\n\\end{equation}\n\nThis function is representable as a DQCP, quasilinear function. We plot it below, and then we write it using DQCP.\n\n\n\n```\n%matplotlib inline\n\nimport matplotlib\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport math\n\nx = np.linspace(.25,1,num=201)\nobj = []\nfor i in range(len(x)):\n obj.append(math.sqrt(1/x[i]**2-1))\n\nplt.plot(x,obj)\n```\n\n\n```\nimport cvxpy as cp\n\nx = cp.Variable(pos=True)\nobj = cp.sqrt(cp.inv_pos(cp.square(x))-1)\nprint(\"This objective function is\", obj.curvature)\n```\n\n This objective function is QUASILINEAR\n\n\nMinimizing this objective function subject to constraints representing payload requirements is a standard aerospace design problem. In this case we will consider the constraint that the wedge must be able to contain a rectangle of given length and width internally along its hypotenuse. 
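\n\nOne way to see where the constraint used below comes from (a sketch, assuming the rectangle of height $a$ and width $b$ lies with its long side flush against the unit-length hypotenuse): the altitude of the wedge onto the hypotenuse is $\\Delta x \\, \\Delta y$, and by similar triangles the chord of the wedge parallel to the hypotenuse at height $a$ has length $1 - a/(\\Delta x \\, \\Delta y)$. Requiring that chord to be at least $b$ gives\n\n\\begin{equation}\nb \\leq 1 - \\frac{a}{\\Delta x \\, \\Delta y}\n\\quad\\Longleftrightarrow\\quad\n\\frac{a}{\\Delta x} \\leq (1-b)\\sqrt{1-\\Delta x^2},\n\\end{equation}\n\nwhich is the inequality passed to CVXPY below.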
This is representable as a convex constraint.\n\n\n```\na = .05 # USER INPUT: height of rectangle, should be at most b\nb = .65 # USER INPUT: width of rectangle\nconstraint = [a*cp.inv_pos(x)-(1-b)*cp.sqrt(1-cp.square(x))<=0]\nprint(constraint)\n```\n\n [Inequality(Expression(CONVEX, UNKNOWN, ()))]\n\n\n\n```\nprob = cp.Problem(cp.Minimize(obj), constraint)\nprob.solve(qcp=True, verbose=True)\nprint('Final L/D Ratio = ', 1/obj.value)\nprint('Final width of wedge = ', x.value)\nprint('Final height of wedge = ', math.sqrt(1-x.value**2))\n```\n\n \n ********************************************************************************\n Preparing to bisect problem\n \n minimize 0.0\n subject to 0.05 * var30766 + -0.35 * var30793 <= 0.0\n SOC(reshape(var30747 + var30766, (1,)), Vstack(reshape(var30747 + -var30766, (1, 1)), reshape(2.0 * 1.0, (1, 1))))\n SOC(reshape(var30779 + 1.0, (1,)), Vstack(reshape(var30779 + -1.0, (1, 1)), reshape(2.0 * var30747, (1, 1))))\n SOC(reshape(1.0 + -var30779 + 1.0, (1,)), Vstack(reshape(1.0 + -var30779 + -1.0, (1, 1)), reshape(2.0 * var30793, (1, 1))))\n power(power(power(param30811, 2) + --1.0, -1), 1/2) <= var30747\n \n Finding interval for bisection ...\n initial lower bound: 0.000000\n initial upper bound: 1.000000\n \n (iteration 0) lower bound: 0.000000\n (iteration 0) upper bound: 1.000000\n (iteration 0) query point: 0.500000 \n (iteration 0) query was feasible. Solution(status=optimal, opt_val=0.0, primal_vars={30766: array(1.28425055), 30793: array(0.32048066), 30747: 0.9203698369509382, 30779: array(0.86287821)}, dual_vars={30764: 1.184352986830617e-10, 30775: array([ 7.68139086e-12, -9.11799720e-13, -6.85059567e-12]), 30788: array([ 6.73308751e-11, 7.50722737e-12, -6.55220021e-11]), 30802: array([ 4.04979217e-11, 3.43109122e-11, -1.68754271e-11]), 30835: 1.4165742899966837e-10}, attr={'solve_time': 6.0109e-05, 'setup_time': 4.4997e-05, 'num_iters': 7}))\n \n (iteration 5) lower bound: 0.125000\n (iteration 5) upper bound: 0.156250\n (iteration 5) query point: 0.140625 \n (iteration 5) query was infeasible.\n \n (iteration 10) lower bound: 0.145508\n (iteration 10) upper bound: 0.146484\n (iteration 10) query point: 0.145996 \n (iteration 10) query was feasible. 
Solution(status=optimal, opt_val=0.0, primal_vars={30766: array(1.01067238), 30793: array(0.14440604), 30747: 0.9895144829793, 30779: array(0.97914383)}, dual_vars={30764: 1.2610785752467482e-05, 30775: array([ 6.37367039e-07, 6.73702792e-09, -6.37322961e-07]), 30788: array([ 1.50627898e-05, 1.58286953e-07, -1.50619494e-05]), 30802: array([ 7.77053008e-06, 7.45051237e-06, -2.20683981e-06]), 30835: 2.948014872712083e-05}, attr={'solve_time': 0.000114922, 'setup_time': 3.6457e-05, 'num_iters': 10}))\n \n (iteration 15) lower bound: 0.145874\n (iteration 15) upper bound: 0.145905\n (iteration 15) query point: 0.145889 \n (iteration 15) query was infeasible.\n \n Bisection completed, with lower bound 0.145897 and upper bound 0.1458979\n ********************************************************************************\n \n Final L/D Ratio = 6.854107648695203\n Final width of wedge = 0.9895238539767502\n Final height of wedge = 0.14436946495363565\n\n\nOnce the solution has been found, we can create a plot to verify that the rectangle is inscribed within the wedge.\n\n\n```\ny = math.sqrt(1-x.value**2)\nlambda1 = a*x.value/y\nlambda2 = a*x.value**2/y+a*y\nlambda3 = a*x.value-y*(a*x.value/y-b)\n\nplt.plot([0,x.value],[0,0],'b.-')\nplt.plot([0,x.value],[0,-y],'b.-')\nplt.plot([x.value,x.value],[0,-y],'b.-')\n\npt1 = [lambda1*x.value,-lambda1*y]\npt2 = [(lambda1+b)*x.value,-(lambda1+b)*y]\npt3 = [(lambda1+b)*x.value+a*y,-(lambda1+b)*y+a*x.value]\npt4 = [lambda1*x.value+a*y,-lambda1*y+a*x.value]\n\nplt.plot([pt1[0],pt2[0]],[pt1[1],pt2[1]],'r.-')\nplt.plot([pt2[0],pt3[0]],[pt2[1],pt3[1]],'r.-')\nplt.plot([pt3[0],pt4[0]],[pt3[1],pt4[1]],'r.-')\nplt.plot([pt4[0],pt1[0]],[pt4[1],pt1[1]],'r.-')\n\nplt.axis('equal')\n```\n", "meta": {"hexsha": "9900cfda3ef1d4e90a7115684036687d04bd6da9", "size": 36868, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "examples/notebooks/dqcp/hypersonic_shape_design.ipynb", "max_stars_repo_name": "dberkens/cvxpy", "max_stars_repo_head_hexsha": "b639e4a691d4986b9952de268282c9ece570411b", "max_stars_repo_licenses": ["ECL-2.0", "Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "examples/notebooks/dqcp/hypersonic_shape_design.ipynb", "max_issues_repo_name": "dberkens/cvxpy", "max_issues_repo_head_hexsha": "b639e4a691d4986b9952de268282c9ece570411b", "max_issues_repo_licenses": ["ECL-2.0", "Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "examples/notebooks/dqcp/hypersonic_shape_design.ipynb", "max_forks_repo_name": "dberkens/cvxpy", "max_forks_repo_head_hexsha": "b639e4a691d4986b9952de268282c9ece570411b", "max_forks_repo_licenses": ["ECL-2.0", "Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 108.1173020528, "max_line_length": 12780, "alphanum_fraction": 0.7836063795, "converted": true, "num_tokens": 2524, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9390248140158417, "lm_q2_score": 0.8933094159957173, "lm_q1q2_score": 0.8388397082139787}} {"text": "```python\n\n```\n\n\n```javascript\n%%javascript\nMathJax.Hub.Config({\n TeX: { equationNumbers: { autoNumber: \"AMS\" } }\n});\n```\n\n\n \n\n\n# Exercise 1\n\nLet us consider the sequence $U_n$ given by \n\\begin{equation}\\label{fib}\n\\left\\lbrace \n\\begin{array}{ll}\nU_0 &= 1,\\\\\nU_1 &= 2,\\\\\nU_{n} &=-3U_{n-1} +U_{n-2}, \\;\\; \\forall\\; n=2,3,4\\cdots\n\\end{array}\\right. \n\\end{equation}\n\nWrite a python function named SeqTerms that takes as input an integer $n,\\;\\;n\\geq 0$ and return an array of the first $n$ terms (i.e. $U_0, \\cdots, U_{n-1}$) of the sequence \\eqref{fib}.\n\n The code should return an array of one or two when (n=0 or n =1), for other inputs the code works properly, you may need to print the results as an array not as a list. \n\n\n```python\nimport numpy as np\ndef Seq(n):\n a=1\n b=2\n if n==0:\n return 1\n if n==1:\n return 2\n for i in range(2,n+1):\n c=-3*b+a\n a=b\n b=c\n return c\nSeq(2)\ndef SeqTerms(n):\n l=[]\n g=np.vectorize(Seq)\n for i in range(n):\n l+=[Seq(i)]\n return l\nSeqTerms(1)\n```\n\n\n\n\n [1]\n\n\n\n# Exercise 2\n\nLet $\\{ x_k\\}$ be a partition of $[a,b]$ such that $a=x_0Trap that takes $a,b,N, f$ as inputs and return A\n\n\n Correct. \n\n\n\n```python\ndef trap(a,b,N,f):\n C=np.linspace(a,b,N+1)\n g=np.vectorize(f)\n A=g(C)\n S=0\n for i in range(1,len(A)):\n S+=A[i]+A[i-1]\n K=1/2*S*((b-a)/N)\n return K\nf= lambda x: x**3+7\ntrap(0,1,10**6,f)\n```\n\n\n\n\n 7.250000000000129\n\n\n\n2. Write a Python code to compute and display an approximation $Aquad$ of the integral bellow using the Python function $quad$\n$$A = \\int_{0}^{2} \\dfrac{x^3+5x-20}{x^2+3}dx$$\n\n Correct. \n\n\n```python\nfrom scipy.integrate import quad\na = 0\nb = 2\nf = lambda x: (x**3+5*x-20)/(x**2+3)\nAquad= quad(f, a, b)[0]\nprint(Aquad)\n```\n\n -7.049316535735796\n\n\n3. write a Python function ErrorTrap that takes $M$ as input and return an arrays $ErrorInt$ and $ListN$. Here, $ErrorInt$ contains the absolute errors between $Aquad$ and the approximation of the integral $A$ obtained using the function Trap for all positve intergers $N$ in $ListN$ the set of all multiples of 10 less or equal to $M$.\n\n\n Correct. \n\n\n```python\ndef ErrorTrap(M):\n u= lambda x: abs(quad(f,0,2)[0]-trap(0,2,x,f))\n ListN=[]\n #ErrorInt=np.zeros(M)\n for i in range(1,M+1):\n if i%10==0:\n ListN+=[i]\n g=np.vectorize(u)\n ErrorInt=g(ListN)\n return ErrorInt, ListN\nErrorTrap(100) \n```\n\n\n\n\n (array([3.07950054e-03, 7.70701307e-04, 3.42601551e-04, 1.92726674e-04,\n 1.23349010e-04, 8.56605200e-05, 6.29349176e-05, 4.81848732e-05,\n 3.80721757e-05, 3.08385649e-05]),\n [10, 20, 30, 40, 50, 60, 70, 80, 90, 100])\n\n\n\n4. 
Plot the output $ErrorInt$ against $ListN$ for $M=200$\n\n You were asked to plot not to print the ListN and ErrorInt.\n\n\n```python\n\ud835\udc38\ud835\udc5f\ud835\udc5f\ud835\udc5c\ud835\udc5f\ud835\udc3c\ud835\udc5b\ud835\udc61 , \ud835\udc3f\ud835\udc56\ud835\udc60\ud835\udc61\ud835\udc41 = ErrorTrap(200)\nprint(\ud835\udc3f\ud835\udc56\ud835\udc60\ud835\udc61\ud835\udc41) \nprint(ErrorInt)\n```\n\n [10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120, 130, 140, 150, 160, 170, 180, 190, 200]\n [3.07950054e-03 7.70701307e-04 3.42601551e-04 1.92726674e-04\n 1.23349010e-04 8.56605200e-05 6.29349176e-05 4.81848732e-05\n 3.80721757e-05 3.08385649e-05 2.54864800e-05 2.14157629e-05\n 1.82477772e-05 1.57340710e-05 1.37061368e-05 1.20464185e-05\n 1.06708827e-05 9.51816892e-06 8.54262634e-06 7.70972323e-06]\n\n\n# Exercise 3\n\n1. Write code to solve the following system of ordinary differential equations using the Python function odeint.\n\n$$\n\\begin{cases}\n\\dfrac{dx_1}{dt}& = & -\\dfrac{1}{2}x_1\\\\\\\\\n\\dfrac{dx_2}{dt}& = & \\dfrac{1}{2}x_1-\\dfrac{1}{4}x_2\\\\\\\\\n\\dfrac{dx_3}{dt}& = & \\dfrac{1}{4}x_2-\\dfrac{1}{6}x_3\n\\end{cases}, \\text{ on } [0,4]\n$$\n\nSubject to the initial conditions $x_1(0) = 1, x_2(0) = 1, x_3(0) = 1$.\n\nCorrect.\n\n\n```python\nimport numpy as np\nfrom scipy.integrate import odeint\nimport matplotlib.pyplot as plt\n\n# function that returns dz/dt\ndef model(z,t):\n x_1,x_2,x_3 = z\n dx_1dt = -1/2*x_1 \n dx_2dt = 1/2*x_1 -1/4*x_2\n dx_3dt = 1/4*x_2-1/6*x_3\n return dx_1dt,dx_2dt,dx_3dt\n\n# initial condition\nz0 = [1,1,1]\n\n# time points\na = 0\nb = 4\nN = 100\nt = np.linspace(a,b,N+1)\n\n# solve ODE\nz = odeint(model,z0,t)\n\nx_1 = z[:,0]\nx_2 = z[:,1]\nx_3=z[:,2]\n\n\nplt.plot(t,x_1,'b-')\nplt.plot(t,x_3,'r--')\nplt.plot(t,x_2,'green');\n\n```\n\n\n```python\ndef f(z,t):\n x1,x2,x3=z\n dx1dt=-1/2*z[0]\n dx2dt=1/2*z[0]-1/4*z[1]\n dx3dt=1/4*z[1]-1/6*z[2]\n return dx1dt, dx2dt,dx3dt\n#f(6,7)\n```\n\n2. 
The exact solution of the above system of ODEs is given by\n\n$$\n\\begin{cases}\nx_1(t)& = & e^{-t/2}\\\\\nx_2(t)& = & -2e^{-t/2}+3e^{-t/4}\\\\\nx_3(t)& = & \\dfrac{3}{2}e^{-t/2} - 9e^{-t/4} + \\dfrac{17}{2}e^{-t/6}\n\\end{cases}\n$$\n\nUse $Subplot$ to plot side by side\n\n- each exact and approximate solution in the same window\n- and their absolute error vs the time \n\n\nCorrect.\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# x_1(t)=np.exp(-t/2)\n# x_2(t)=-2*np.exp(-t/2)+3*np.exp(-t/4)\n# x_3(t)=3/2*np.exp(-t/2)-9*np.exp(-t/4)+17/2*np.exp(-t/6)\n# #plot results\nplt.subplot(3,1,1)\nplt.plot(t,np.exp(-t/2),'b')\nplt.plot(t,x_1,'y--')\nplt.xlabel('time')\nplt.ylabel('x_1(t)')\nplt.show()\n\n#plot results\nplt.subplot(3,1,2)\nplt.plot(t,-2*np.exp(-t/2)+3*np.exp(-t/4),'y-')\nplt.plot(t,x_2,'g--')\nplt.xlabel('time')\nplt.ylabel('x_2(t)')\nplt.show()\n\nplt.subplot(3,1,3)\nplt.plot(t,3/2*np.exp(-t/2)-9*np.exp(-t/4)+17/2*np.exp(-t/6),'r-')\nplt.plot(t,x_3,'b--')\nplt.xlabel('time')\nplt.ylabel('x_3(t)')\nplt.show()\n\n#plot results\n# plt.subplot(3,1,3)\n# plt.plot(x,y)\n# plt.xlabel('x')\n# plt.ylabel('y')\n# plt.show()\n\n```\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# x_1(t)=np.exp(-t/2)\n# x_2(t)=-2*np.exp(-t/2)+3*np.exp(-t/4)\n# x_3(t)=3/2*np.exp(-t/2)-9*np.exp(-t/4)+17/2*np.exp(-t/6)\n# #plot results\n\nplt.subplot(3,1,1)\nplt.title(\"Absolute Error vs Times\")\n#plt.plot(t,np.exp(-t/2),'b')\nplt.plot(t,abs(x_1-np.exp(-t/2)),'b-')\nplt.xlabel('time')\nplt.ylabel('absolute error of x_1')\nplt.show()\n\n#plot results\nplt.subplot(3,1,2)\n#plt.plot(t,-2*np.exp(-t/2)+3*np.exp(-t/4),'g-')\nplt.plot(t,abs(x_2+2*np.exp(-t/2)-3*np.exp(-t/4)),'g-')\nplt.xlabel('time')\nplt.ylabel('absolute error of x_2')\nplt.show()\n\nplt.subplot(3,1,3)\n#plt.plot(t,3/2*np.exp(-t/2)-9*np.exp(-t/4)+17/2*np.exp(-t/6),'r-')\nplt.plot(t,abs(x_3-3/2*np.exp(-t/2)+9*np.exp(-t/4)-17/2*np.exp(-t/6)),'r-')\nplt.xlabel('time')\nplt.ylabel('absolute error of x_3')\nplt.show()\n\n#plot results\n# plt.subplot(3,1,3)\n# plt.plot(x,y)\n# plt.xlabel('x')\n# plt.ylabel('y')\n# plt.show()\n\n```\n\n# Exercise 4\n\nLet $\\{ t_k\\}$ be a partition of $[a,b]$ such that $a=t_1 EulerOdeSys that takes $f,c,t$ and return the solution $Z$ of the initial value problem \\eqref{eul2} using Euler method i.e.\n$$ z_{k+1} = z_k + Hf(z_k,t_k) $$\n\n\nThe structure is fine.\n\n\n```python\ndef EulerOdeSys(f,c,t):\n n=len(t)\n Z = np.zeros((len(t),)+ np.shape(c)) \n Z[0]= c \n for i in range(n-1):\n h =(t[i+1] - t[i])\n Z[i+1]= Z[i]+ h*f(Z[i],t[i])\n return Z\ndef f(x,y):\n return x+y\nc=[5,3]\nt=np.linspace(0,4,10)\nEulerOdeSys(f,c,t)\n \n```\n\n\n\n\n array([[ 5. , 3. ],\n [ 7.22222222, 4.33333333],\n [ 10.62962963, 6.45679012],\n [ 15.74897119, 9.72153635],\n [ 23.34110654, 14.63481177],\n [ 34.50505512, 21.92929601],\n [ 50.8282895 , 32.66330411],\n [ 74.60382557, 48.36551335],\n [109.14379743, 71.2440131 ],\n [159.23239876, 104.48826584]])\n\n\n\n2. 
Write a python function RK4OdeSys that takes $f,c,t$ and return the solution $Z$ of the initial value problem (1) using the fourth order Runge-Kutta method i.e.\n\n\\begin{equation}\n\\begin{cases}\nk_1 = f(z_k,t_k),\\\\\\\\\nk_2 = f(z_k+H\\dfrac{k_1}{2}, t_k + \\dfrac{H}{2}),\\\\\\\\\nk_3 = f(z_k+H\\dfrac{k_2}{2}, t_k + \\dfrac{H}{2}),\\\\\\\\\nk_4 = f(z_k+Hk_3, t_k + H),\\\\\\\\\nz_{k+1} = z_k + \\dfrac{H}{6}(k_1+2k_2+2k_3+k_4)\n\\end{cases}\n\\end{equation}\n\n\n\nCorrect\n\n\n```python\ndef RK4OdeSys(f,c,t):\n n = len (t)\n Z = np.zeros((len(t),)+ np.shape(c)) \n Z[0]= c \n for i in range (n-1):\n k1 = f(Z[i] ,t[i]) \n h =(t[i+1] - t[i])/2\n k2 = f(Z[i]+ h*k1 , t[i]+h)\n k3 = f(Z[i]+ h*k2 , t[i]+h)\n k4 = f(Z[i]+2*h*k3 ,t[i]+2*h )\n Z[i+1]= Z[i]+ h/3*(k1 +2*k2 +2*k3+k4 ) \n return Z\n\ndef f(x,y):\n return x+y**2\nc=[5,2]\nt=np.linspace(0,4,10)\nRK4OdeSys(f,c,t)\n#plt.plot(RK4OdeSys1(f,c,t),'b-') \n```\n\n\n\n\n array([[ 5. , 2. ],\n [ 7.83021445, 3.15181177],\n [ 12.45659697, 5.16077975],\n [ 20.10506952, 8.72747924],\n [ 32.68741769, 14.94443472],\n [ 53.18500904, 25.51540266],\n [ 86.24718959, 43.09733603],\n [139.12446363, 71.83366675],\n [223.12375742, 118.18594254],\n [355.87785565, 192.23073745]])\n\n\n\n3. Solve the system of ODEs in $Exercise2$ using your function EulerOdeSys and RK4OdeSys \n\nNot done.\n\n\n```python\n\n```\n\n4. By plotting the absolute error in the approximate and exact solutions, tell us which function gives a more accurate solution of a system of ODEs.\n\nNot done.\n\n\n```python\n\n```\n\n\n\n\n 1.791759469228055\n\n\n\n# Exercise 5\n\nLet consider us consider the function primes that takes $n$ as input and return a list of primes less than $n$\n\n\n```python\n# This cell is only to import the labraries \n\nimport numpy as np\nimport time\n\ndef primes(n):\n \"\"\" Returns a list of primes < n \"\"\"\n sieve = [True] * (n//2)\n for i in range(3,int(n**0.5)+1,2):\n if sieve[i//2]:\n sieve[i*i//2::i] = [False] * ((n-i*i-1)//(2*i)+1)\n return [2] + [2*i+1 for i in range(1,n//2) if sieve[i]]\n\n```\n\nFor any integer $n>0$ and a prime number $p$, define $\\nu_p(n)$ as the greatest integer $r$ such that $p^r$ divides $n$.\nDefine $$ D(n,m) = \\sum_{p\\; prime} \\Bigl| \\nu_p(n) - \\nu_p(m)\\Bigr| $$\n\nFor example $D(14,24)=4$.\n\nFurthermore, define\n\n$$S(N) = \\sum_{n=1}^{N}\\sum_{m=1}^{N}D(n,m).$$\n\nYou are given $S(10)=210$.\n\n1. Write an efficient python function, Func_S , that takes $N$ as input and return the value $S(N)$.\n\nCorrect.\n\n\n\n```python\nfrom math import floor \nfrom math import log as ln\ndef nu(n,p):\n L=[]\n for i in range(floor(ln(n)//ln(p))+2):\n if n%(p**i)==0:\n L+=[i]\n return L[-1]\n\ndef D(n,m):\n list_prime=primes(max(m,n)+1)\n SumD=0\n for i in list_prime:\n SumD+=abs(nu(n,i)-nu(m,i))\n \n return SumD\n\n\nprint(D(14,24))\n\ndef Func_S(N):\n s=0\n for i in range(1,N+1):\n for j in range(1,N+1):\n #if j!=i:\n s=s+D(i,j)\n return s\n \nFunc_S(10) \nnu(7,23)\n\n```\n\n 4\n\n\n\n\n\n 0\n\n\n\n2. Compute $S(10)$ and display its computational time\n\n\n```python\nN = 10\ntime_start = time.perf_counter()\nS = Func_S(N)\ntime_elapsed = (time.perf_counter() - time_start)\nprint('S({}) = {}'.format(N,S))\nprint('computational Time = ', time_elapsed)\n```\n\n S(10) = 210\n computational Time = 0.0028889940003864467\n\n\n3. 
Compute $S(100)$ and display its computational time\n\n\n```python\nN = 100\ntime_start = time.perf_counter()\nS = Func_S(N)\ntime_elapsed = (time.perf_counter() - time_start)\nprint('S({}) = {}'.format(N,S))\nprint('computational Time = ', time_elapsed)\n```\n\n S(100) = 37018\n computational Time = 0.4434022469940828\n\n\n4. Compute $S(1000)$ and display its computational time\n\n\n```python\nN = 1000\ntime_start = time.perf_counter()\nS = Func_S(N)\ntime_elapsed = (time.perf_counter() - time_start)\nprint('S({}) = {}'.format(N,S))\nprint('computational Time = ', time_elapsed)\n```\n\n S(1000) = 4654406\n computational Time = 256.6060328760068\n\n\n5. Compute $S(10000)$ and display its computational time\n\nYou were asked to compute S(10) and so on. There is no outputs for that. This was meant to test the efficiency for your algorithm.\n\n\n```python\nN = 10000\ntime_start = time.perf_counter()\nS = Func_S(N)\ntime_elapsed = (time.perf_counter() - time_start)\nprint('S({}) = {}'.format(N,S))\nprint('computational Time = ', time_elapsed)\n```\n\n6. Compute $S(100000)$ and display its computational time\n\n\n```python\nN = 100000\ntime_start = time.perf_counter()\nS = Func_S(N)\ntime_elapsed = (time.perf_counter() - time_start)\nprint('S({}) = {}'.format(N,S))\nprint('computational Time = ', time_elapsed)\n```\n\n7. Compute $S(1000000)$ and display its computational time\n\n\n```python\nN = 1000000\ntime_start = time.perf_counter()\nS = Func_S(N)\ntime_elapsed = (time.perf_counter() - time_start)\nprint('S({}) = {}'.format(N,S))\nprint('computational Time = ', time_elapsed)\n```\n\n# Exercise 6 \n1. Read the Covid-19 dataset\n\n\n```python\nimport pandas as pd\nimport numpy as np\na=pd.read_csv('Covid-19.csv')\na\n```\n\n\n\n\n
    \n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
            Date_reported Country_code      Country WHO_region  New_cases  Cumulative_cases  New_deaths  Cumulative_deaths
    0          2020-01-03           AF  Afghanistan       EMRO          0                 0           0                  0
    1          2020-01-04           AF  Afghanistan       EMRO          0                 0           0                  0
    2          2020-01-05           AF  Afghanistan       EMRO          0                 0           0                  0
    3          2020-01-06           AF  Afghanistan       EMRO          0                 0           0                  0
    4          2020-01-07           AF  Afghanistan       EMRO          0                 0           0                  0
    ...               ...          ...          ...        ...        ...               ...         ...                ...
    149542     2021-09-20           ZW     Zimbabwe       AFRO        199            127938           4               4567
    149543     2021-09-21           ZW     Zimbabwe       AFRO        248            128186           2               4569
    149544     2021-09-22           ZW     Zimbabwe       AFRO          0            128186           0               4569
    149545     2021-09-23           ZW     Zimbabwe       AFRO        618            128804          23               4592
    149546     2021-09-24           ZW     Zimbabwe       AFRO        330            129134           8               4600

    [149547 rows x 8 columns]

    \n
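\n\nOne small optional refinement (not part of the original answer): `read_csv` leaves `Date_reported` as plain strings; asking pandas to parse it as dates makes the later date-based plots and comparisons better behaved.\n\n\n```python\n# Optional (illustrative): parse the reporting date as a datetime column\na = pd.read_csv('Covid-19.csv', parse_dates=['Date_reported'])\na.dtypes\n```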
    \n\n\n\n2. Drop the Country code column\n\n\n\n```python\ndel a['Country_code']\na\n```\n\n\n\n\n
    \n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
           Date_reported      Country WHO_region  New_cases  Cumulative_cases  New_deaths  Cumulative_deaths
    0         2020-01-03  Afghanistan       EMRO          0                 0           0                  0
    1         2020-01-04  Afghanistan       EMRO          0                 0           0                  0
    2         2020-01-05  Afghanistan       EMRO          0                 0           0                  0
    3         2020-01-06  Afghanistan       EMRO          0                 0           0                  0
    4         2020-01-07  Afghanistan       EMRO          0                 0           0                  0
    ...              ...          ...        ...        ...               ...         ...                ...
    149542    2021-09-20     Zimbabwe       AFRO        199            127938           4               4567
    149543    2021-09-21     Zimbabwe       AFRO        248            128186           2               4569
    149544    2021-09-22     Zimbabwe       AFRO          0            128186           0               4569
    149545    2021-09-23     Zimbabwe       AFRO        618            128804          23               4592
    149546    2021-09-24     Zimbabwe       AFRO        330            129134           8               4600

    [149547 rows x 7 columns]

    \n
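\n\nAn equivalent alternative (illustrative only) is `DataFrame.drop`, which returns a new frame instead of mutating `a` in place; `errors='ignore'` just keeps the cell safe to re-run once the column is already gone.\n\n\n```python\n# Illustrative alternative to `del a['Country_code']`\na = a.drop(columns=['Country_code'], errors='ignore')\na.columns\n```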
    \n\n\n\n3. Randomly choose three different countries\n\n\n```python\na.sample(n=3)\nb=a['Country']\nrand=b.sample(n=3)\nrand\n# b=a.sample(n=3)\n# b\n```\n\n\n\n\n    47232         French Polynesia\n    28957                    Congo\n    114118    Saint Kitts and Nevis\n    Name: Country, dtype: object\n\n\n\n4. Select and display the records for those three countries\n\n\n```python\nq=a[a['Country'].isin(rand)]\nq\n```\n\n\n\n\n
    \n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
           Date_reported                Country WHO_region  New_cases  Cumulative_cases  New_deaths  Cumulative_deaths
    28395     2020-01-03                  Congo       AFRO          0                 0           0                  0
    28396     2020-01-04                  Congo       AFRO          0                 0           0                  0
    28397     2020-01-05                  Congo       AFRO          0                 0           0                  0
    28398     2020-01-06                  Congo       AFRO          0                 0           0                  0
    28399     2020-01-07                  Congo       AFRO          0                 0           0                  0
    ...              ...                    ...        ...        ...               ...         ...                ...
    114206    2021-09-20  Saint Kitts and Nevis       AMRO         42              1674           1                 10
    114207    2021-09-21  Saint Kitts and Nevis       AMRO          6              1680           0                 10
    114208    2021-09-22  Saint Kitts and Nevis       AMRO          0              1680           0                 10
    114209    2021-09-23  Saint Kitts and Nevis       AMRO         38              1718           0                 10
    114210    2021-09-24  Saint Kitts and Nevis       AMRO         44              1762           0                 10

    [1893 rows x 7 columns]

    \n
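\n\nOne caveat about the sampling step above (illustrative note): `a['Country'].sample(n=3)` draws three rows from the full column, so the same country can appear twice. Sampling after dropping duplicates guarantees three distinct countries.\n\n\n```python\n# Illustrative: sample from the distinct country names instead of raw rows\nrand = a['Country'].drop_duplicates().sample(n=3)\nrand\n```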
    \n\n\n\n5. Calculate and display the sum, the average of the cumulative cases of each WHO region.\n\n\n```python\nM=a.groupby('WHO_region').mean()\nprint(\"the average of cumulative case of each WHO region is\\n\",M['Cumulative_cases'])\nS=a.groupby('WHO_region').sum()\nprint(\"the Sum of cumulative case of each WHO region is\\n\",S['Cumulative_cases'])\n```\n\n the average of cumulative case of each WHO region is\n WHO_region\n AFRO 3.853838e+04\n AMRO 5.795662e+05\n EMRO 2.228024e+05\n EURO 3.936270e+05\n Other 6.911569e+02\n SEARO 1.181455e+06\n WPRO 4.465386e+04\n Name: Cumulative_cases, dtype: float64\n the Sum of cumulative case of each WHO region is\n WHO_region\n AFRO 1215886016\n AMRO 20479550865\n EMRO 3092942346\n EURO 15399475886\n Other 436120\n SEARO 8200477089\n WPRO 986180579\n Name: Cumulative_cases, dtype: int64\n\n\n6. Calculate and display sum, the average of the cumulative deaths of each WHO region.\n\n\n```python\nM=a.groupby('WHO_region').mean()\nprint(\"the average of cumulative deaths of each WHO region is\\n\",M['Cumulative_deaths'])\nS=a.groupby('WHO_region').sum()\nprint(\"the Sum of cumulative case of each WHO region is\\n\",S['Cumulative_deaths'])\n```\n\n the average of cumulative deaths of each WHO region is\n WHO_region\n AFRO 916.600856\n AMRO 15814.192891\n EMRO 4672.818470\n EURO 8890.024871\n Other 11.554675\n SEARO 17467.325457\n WPRO 723.480281\n Name: Cumulative_deaths, dtype: float64\n the Sum of cumulative case of each WHO region is\n WHO_region\n AFRO 28918757\n AMRO 558810320\n EMRO 64868066\n EURO 347795553\n Other 7291\n SEARO 121240706\n WPRO 15978062\n Name: Cumulative_deaths, dtype: int64\n\n\n7. Produce plots that look like the following three figures. Pay attention to the annotations.\n\n7.a. \n\n\n```python\nimport seaborn as sns\nsns.boxplot(x=\"Country\", y=\"New_cases\", data=q)\nsns.stripplot(x=\"Country\", y=\"New_cases\", data=q);#, jitter=True, edgecolor=\"gray\")\nplt.legend()\n```\n\n\n```python\na.groupby('WHO_region')['Cumulative_cases',\"Cumulative_deaths\"].sum().plot.bar(grid=False);\n```\n\n7.b. \n\n\n```python\nimport matplotlib.pyplot as plt\nsns.lineplot(x=\"Date_reported\", y=\"Cumulative_cases\", hue=\"Country\",linewidth=5,data=q)\n\n```\n\n7.c. 
\n\n\n```python\n\n```\n", "meta": {"hexsha": "68a03683af55130aeaeff0ca6e48af7439bff834", "size": 184784, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "feedbackPython2.ipynb", "max_stars_repo_name": "Louis-Mozart/Louis-Mozart.github.io", "max_stars_repo_head_hexsha": "ffc11437fb47fa4007b47ef0bf82388de132cca1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "feedbackPython2.ipynb", "max_issues_repo_name": "Louis-Mozart/Louis-Mozart.github.io", "max_issues_repo_head_hexsha": "ffc11437fb47fa4007b47ef0bf82388de132cca1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "feedbackPython2.ipynb", "max_forks_repo_name": "Louis-Mozart/Louis-Mozart.github.io", "max_forks_repo_head_hexsha": "ffc11437fb47fa4007b47ef0bf82388de132cca1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 99.026795284, "max_line_length": 21308, "alphanum_fraction": 0.8075753312, "converted": true, "num_tokens": 9706, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. Yes\n2. Yes", "lm_q1_score": 0.9005297861178929, "lm_q2_score": 0.931462503162843, "lm_q1q2_score": 0.8388097287500721}} {"text": "# Random Number Generation\n\nThis demonstrates simple random number generation using `numpy`, and shows how to make plots along the way.\n\nFor details about the API of modules used here, see the documentation for [numpy](https://numpy.org/) and [matplotlib](https://matplotlib.org/).\n\n\n```python\nimport numpy as np\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\n\nmpl.rc('font', size=14)\n```\n\n## Set Random Number Seed\n\nSet the state of numpy's random number generator. 
Can call this repeatedly to change the seed.\n\n\n```python\n# The same combination as my luggage...\nnp.random.seed(12345)\n```\n\n## Transforming a Uniform Random Variable\n\n### Draws from a Uniform Distribution\n\nGenerate $u\\sim U(0,1)$ and histogram the results.\n\n\n```python\n# Generate 1000 random draws from U(0,1).\nu = np.random.uniform(0, 1, size=1000)\n\nfig, ax = plt.subplots(1,1, figsize=(6,4), tight_layout=True)\n\n# Histogram the result.\n# ubins = bin edges in the histogram: [0, 0.05, 0.1, ..., 1]\nubins = np.linspace(0., 1., 21)\nn, bin_edges, p = ax.hist(u, bins=ubins, histtype='step')\n\n# Bin centers: mean of bin edges: [0.025, 0.075, ..., 0.975].\n# Bin counts estimate Poisson mean; estimated uncertainties are sqrt(counts).\nbin_center = 0.5*(bin_edges[1:] + bin_edges[:-1])\nax.errorbar(bin_center, n, yerr=np.sqrt(n), fmt='_')\n\nax.set(xlim=(0,1), xlabel='$u$', ylabel='count');\n```\n\n### Transform Uniform Numbers into Exponentials\n\nUse the transformation/inversion method to convert $u\\sim U(0,1)$ to $x\\sim\\mathrm{Exp}(-1)$, e.g., sample the distribution\n\n$$\np(x|\\beta,I) = \\frac{1}{\\beta}e^{-x/\\beta}~\\mathrm{where}~\\beta=1.\n$$\n\nIn practice you could directly generate these quantities by calling `numpy.random.exponential`.\n\n\n```python\nbeta = 1.\nx = -beta*np.log(u)\n\nfig, ax = plt.subplots(1,1, figsize=(6,4), tight_layout=True)\n\n# Histogram the result.\n# Stretch the previous bin edges by 6x.\n# Use a log-linear scale.\nn, bin_edges, p = ax.hist(x, bins=6*ubins, histtype='step', log=True)\n\nbin_center = 0.5*(bin_edges[1:] + bin_edges[:-1])\nax.errorbar(bin_center, n, yerr=np.sqrt(n), fmt='_')\n\nax.set(xlim=(bin_edges[0], bin_edges[-1]), xlabel='$x$', ylabel='count');\n```\n\n### Generate Two Gaussian Variables using the Box-M\u00fcller Algorithm\n\nYou could of course use `numpy.random.normal` to generate Gaussian variables but this nicely illustrates the algorithm:\n\n$$\n\\begin{align}\n u &\\sim U(0,1) & v &\\sim U(0,1) \\\\\n x &= \\sqrt{-2\\ln{u}}\\cos{2\\pi v}\\sim\\mathcal{N}(0,1) & y &= \\sqrt{-2\\ln{u}}\\sin{2\\pi v}\\sim\\mathcal{N}(0,1)\n\\end{align}\n$$\n\n\n```python\n# We already have u, so generate v ~ U(0,1).\nv = np.random.uniform(0.,1., size=1000)\nr = np.sqrt(-2*np.log(u))\nx = r * np.cos(2*np.pi*v)\ny = r * np.sin(2*np.pi*v)\n\n# Draw a scatter plot with projected x, y histograms.\n# Here I do it manually, but the seaborn package can do this automatically.\nfig, axes = plt.subplots(2,2, figsize=(7,7),\n gridspec_kw={'width_ratios':[2,1], 'height_ratios':[1,2]})\n\nax = axes[1,0]\nax.scatter(x, y, alpha=0.5)\nax.set(xlim=(-4,4), xlabel='$x$',\n ylim=(-4,4), ylabel='$y$',\n aspect='equal')\n\n# Draw the x projection.\n# Binning: [-4, -3.5, -3, ..., 3.5, 4]\nhbins = np.linspace(-4,4,17)\nax = axes[0,0]\nn, bin_edges, p = ax.hist(x, bins=hbins, histtype='step')\nbin_center = 0.5*(bin_edges[1:] + bin_edges[:-1])\nax.errorbar(bin_center, n, yerr=np.sqrt(n), fmt='_')\nax.set(xlim=(-4,4), xticklabels=[], ylabel='count')\n\n# Draw the y projection.\nax = axes[1,1]\nn, bin_edges, p = ax.hist(y, bins=hbins, histtype='step', orientation='horizontal')\nbin_center = 0.5*(bin_edges[1:] + bin_edges[:-1])\nax.errorbar(n, bin_center, xerr=np.sqrt(n), fmt='_')\nax.set(ylim=(-4,4), yticklabels=[], xlabel='count')\n\n# Turn the upper right panel off.\nax = axes[0,1]\nax.set_axis_off()\n\nfig.tight_layout();\n```\n\n## Random Numbers via Acceptance-Rejection\n\nDemonstrate sampling from the Rayleigh scattering function\n$$\nf(x) = 
\\frac{3}{8}(1+x^2)\\ \\mathrm{where}\\ x\\in[-1,1]\n$$\nusing the acceptance-rejection technique. In this case the instrumental distribution will be a uniform box between $-1$ and $1$ with maximum value $3/4$.\n\n\n```python\nf = lambda x: 3/8 * (1 + x**2)\nx = np.random.uniform(-1,1, size=1000)\ny = np.random.uniform(0,0.75, size=1000)\n\n# Boolean array to track accepted and rejected points.\naccept = y < f(x)\nreject = ~accept\n```\n\n\n```python\nfig, axes = plt.subplots(1,2, figsize=(10,4), sharex=True, sharey=True, tight_layout=True)\n\n# Draw the accepted points in red and rejected points in blue.\nax = axes[0]\nax.scatter(x[accept], y[accept], color='r')\nax.scatter(x[reject], y[reject], color='b')\n\n# Draw the function.\n_x = np.linspace(-1,1,101)\nax.plot(_x, f(_x), color='k')\nax.set(xlabel='$x$', xlim=(-1,1),\n ylabel='$f(x)=3/8~(1+x^2)$', ylim=(0,0.75))\n\n# Histogram the accepted data.\nax = axes[1]\nax.hist(x[accept], density=True, histtype='step')\nax.plot(_x, f(_x), color='k');\n```\n", "meta": {"hexsha": "e965e9b3bb893705acec557fc82a9ac115cf9614", "size": 123736, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "nb/rng-examples.ipynb", "max_stars_repo_name": "sybenzvi/PHY403", "max_stars_repo_head_hexsha": "159f1203b5fc92ffc1f2b7dc9fef3c2f78207dd7", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2020-05-27T23:51:39.000Z", "max_stars_repo_stars_event_max_datetime": "2021-02-03T03:34:53.000Z", "max_issues_repo_path": "nb/rng-examples.ipynb", "max_issues_repo_name": "sybenzvi/PHY403", "max_issues_repo_head_hexsha": "159f1203b5fc92ffc1f2b7dc9fef3c2f78207dd7", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "nb/rng-examples.ipynb", "max_forks_repo_name": "sybenzvi/PHY403", "max_forks_repo_head_hexsha": "159f1203b5fc92ffc1f2b7dc9fef3c2f78207dd7", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 7, "max_forks_repo_forks_event_min_datetime": "2020-05-06T16:01:09.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-04T18:47:26.000Z", "avg_line_length": 401.7402597403, "max_line_length": 57884, "alphanum_fraction": 0.9366797052, "converted": true, "num_tokens": 1540, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9343951607140233, "lm_q2_score": 0.8976952852648487, "lm_q1q2_score": 0.8388021303472692}} {"text": "# Information Retrieval in High Dimensional Data\n## Lab #2, 26.10.2017\n## Statistical Decision making\n\n### Task 1\n\nConsider the two dimensional, discrete random variable $X = [X_1\\,X_2]^T$ subjected to the joint probability density $p_X$ as described in the following table.\n\n$\\begin{array}{c||cc} p_X(X_1,X_2) & X_2=0 & X_2=1\\\\ \\hline\nX_1 = 0 & 0.4 & 0.3 \\\\\nX_1 = 1 & 0.2 & 0.1\n\\end{array}$\n\na) Compute the marginal probability densities $p_{X_1}, p_{X_2}$ and the coonditional probability $P(X_2=0|X_1=0)$ as well as the expected value $\\mathbb{E}[X]$ and the covariance matrix $\\mathbb{E}[(X-\\mathbb{E}[X])(X-\\mathbb{E}[X])^T]$.\n\n\n```python\nimport numpy as np\n```\n\n\n```python\n# Init needed variables\njoint_prob = np.array([[0.4, 0.3], [0.2, 0.1]])\nresults = np.array([[0, 1, 0, 1],\n [0, 0, 1, 1]])\n```\n\n\n```python\npx1 = np.sum(joint_prob, axis=1)\npx2 = np.sum(joint_prob, axis=0)\n\nprint (\"Marginal distribution density px1: {}\".format(px1))\nprint (\"Marginal distribution density px2: \",px2)\n```\n\n Marginal distribution density px1: [ 0.7 0.3]\n Marginal distribution density px2: [ 0.6 0.4]\n\n\n\n```python\ncond_prob = joint_prob[0][0]/px1[0]\nprint (\"Conditional probability P(X2=0|X1=0): {:.3f}\".format(cond_prob))\n```\n\n Conditional probability P(X2=0|X1=0): 0.571\n\n\n\n```python\nEX = np.dot(results, np.ravel(joint_prob, 'F'))\nprint (\"Expected Value EX:\", EX)\n```\n\n Expected Value EX: [ 0.3 0.4]\n\n\n\n```python\nresults_centered = results - np.reshape(EX, (2,1))\nCovX = np.dot(np.dot(results_centered, np.diag(joint_prob.ravel('F'))), results_centered.T)\nprint (\"Covariance Matrix: \\n{}\".format(CovX))\n```\n\n Covariance Matrix: \n [[ 0.21 -0.02]\n [-0.02 0.24]]\n\n\nb) Write a PYTHON function toyrnd that expects the positive integer parameter n as its input an returns a matrix $X$ of size (2,n), containing $n$ samples drawn independently from the distribution $p_X$, as its output.\n\n\n```python\ndef toyrnd(n):\n x1 = np.random.choice([0, 1], (1, n), p=px1)\n x2 = np.random.choice([0, 1], (1, n), p=px2)\n return np.vstack((x1,x2))\n```\n\n [[0 1 0 1 0 0 0 0 0 0 1 0 0 1 1 0 0 1 0 0 1 0 1 0 1 0 0 0 1 0 0 1 0 1 0 1 0\n 0 0 1 0 1 0 1 1 0 0 0 0 0 0 1 0 0 1 1 1 0 0 0 0 0 0 0 0 1 0 1 0 1 0 0 1 0\n 0 1 0 0 0 0 1 1 1 0 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0]\n [1 0 0 1 1 0 0 0 0 1 1 1 0 1 0 1 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 1 1 0 0 1 0\n 0 0 1 0 0 1 1 1 1 0 1 1 1 1 1 1 0 0 1 0 1 1 0 0 1 0 0 0 0 0 0 0 1 1 1 0 0\n 1 1 1 0 1 0 0 0 0 0 0 0 1 0 0 1 1 0 1 0 1 0 0 1 0 0]]\n\n\nc) Verify your results in a) by generating 10000 samples with toyrnd and computing the respective empirical values\n\n\n```python\nsamples = toyrnd(10000)\n\np_x1equ1 = (samples[0].sum()/len(samples[0]))\np_x2equ1 = (samples[1].sum()/len(samples[1]))\np_x1 = np.array([1-p_x1equ1, p_x1equ1])\np_x2 = np.array([1-p_x2equ1, p_x2equ1])\n\nprint (\"Empirical Marginal distribution density p_x1: {}\".format(p_x1))\nprint (\"Empirical Marginal distribution density p_x2: \",p_x2)\n```\n\n Empirical Marginal distribution density p_x1: [ 0.6986 0.3014]\n Empirical Marginal distribution density p_x2: [ 0.5997 0.4003]\n\n\n\n```python\ncond_prob_empirical = samples[1, samples[0,:]==0]\nnp.sum(cond_prob_empirical==0)/len(cond_prob_empirical)\n```\n\n\n\n\n 0.5986258230747209\n\n\n\n### Task 2\nThe MNS trainign set consists of handwritten digits from 0 to 9, stored as PNG files of 
size 28 x 28 an indexed by label. Download the provided ZIP file from Moodle and make yourself familiar with the directory structure.\n\na) Grayscale images are typically described as matrices of uint8 values. For numerical calculations, it is more sensible to work with floating point numbers. Load two (arbitrary) images from the database and convert them to matrices I1 and I2 of float64 values in the interval $[0, 1]$.\n\n\n```python\nimport imageio\nimport os\n```\n\n\n```python\npath = './mnist/d4/'\nfilenames = os.listdir(path)\nim1 = imageio.imread(path + filenames[0])\nI1 = im1/255\nim2 = imageio.imread(path + filenames[10])\nI2 = im2/255\n```\n\nb) The matrix equivalent of te euclidean norm $\\| \\cdot \\|_2$ is the $Frobenius$ norm. For any matrix $\\mathbf{A} \\in \\mathbb{R}^{m\\, \\times \\, n}$, it is defined as\n\n\\begin{equation} \\|\\mathbf{A}\\|_F = \\sqrt{tr(\\mathbf{A}^T\\mathbf{A})} \\end{equation}\n\nwhere tr denotes the trace of a matrix. Compute the distance $\\|\\mathbf{I}_1 - \\mathbf{I}_2 \\|_F$ between the images I1 and I2 using three different procedures in PYTHON:\n- Running the numpy.linalg.norm function with the 'fro' parameter\n- Directly applying formula (1)\n- Computing the euclidean norm between the vectorized images\n\n\n```python\nD = I1 - I2\n```\n\n\n```python\ndist1 = np.linalg.norm(D, ord='fro')\ndist1\n```\n\n\n\n\n 9.0642043267400751\n\n\n\n\n```python\ndist2 = np.sqrt(np.trace(np.dot(D.T,D)))\ndist2\n```\n\n\n\n\n 9.0642043267400751\n\n\n\n\n```python\nd = np.reshape(D, 784)\ndist3 = np.linalg.norm(d)\ndist3\n```\n\n\n\n\n 9.0642043267400751\n\n\n\nc) In the following, we want to solve a simple classification problem by applying k-Nearest Neighbors. To this end, choose two digits classes, e.g. 0 and 1, and lod n_train = 500 images from each class to the workspace. Convert them according to subtask a) and store them in vectorized form in the matrix X_train of size (784, 2*n_train). Provide an indicator vector Y_train pf length 2*n_train that assigns the respective digit class label to each column of X_train. \nFrom each of the two classes, choose another set of n_test=10 images and create according matrices X_test and Y_test. Now, for each sample in the test set, determine the k = 20 training samples with the smallest Frobenius distance to it and store their indices in the (2*n_test, k) matrix NN. Generate a vector Y_kNN containing the respective estimated class labels by performing a majority vote on NN. 
Compare the result with Y_test.\n\n\n```python\nX_train = np.zeros((784, 1000))\nY_train = np.zeros(1000, dtype='int64')\ndigitclass0 = 0\ndigitclass1 = 1\npath0 = './mnist/d' + str(digitclass0) +'/'\npath1 = './mnist/d' + str(digitclass1) +'/'\nfilenames0 = os.listdir(path0)\nfilenames1 = os.listdir(path1)\n\nfor i in range(500):\n im0 = imageio.imread(path0 + filenames0[i])\n im1 = imageio.imread(path1 + filenames1[i])\n X_train[:, i] = np.reshape(im0, 784)/255\n X_train[:, 500 + i] = np.reshape(im1, 784)/255\n Y_train[i] = digitclass0\n Y_train[500 + i] = digitclass1\n```\n\n\n```python\nX_test = np.zeros((784, 20))\nY_test = np.zeros(20, dtype='int64')\n\nfor i in range(500, 510):\n im0 = imageio.imread(path0 + filenames0[i])\n im1 = imageio.imread(path1 + filenames1[i])\n X_test[:, i-500] = np.reshape(im0, 784)\n X_test[:, i-490] = np.reshape(im1, 784)\n Y_test[i - 500] = digitclass0\n Y_test[i - 490]= digitclass1\n```\n\n\n```python\ndistances = np.zeros(1000)\nNN = np.zeros((20, 20))\nY_kNN = np.zeros(20, dtype='int64')\nfor i in range(20):\n for j in range(1000):\n DIFF = X_test[:, i] - X_train[:, j]\n distances[j] = np.linalg.norm(DIFF)\n indices = np.argsort(distances)\n labels = Y_train[indices[:20]]\n bins = np.bincount(labels)\n Y_kNN[i] = np.argsort(bins)[-1]\n```\n\n\n```python\nprint('Labels determined by k-Nearest Neighbors:')\nprint(Y_kNN)\nprint('\\nLabels of training set')\nprint(Y_test)\n```\n\n Labels determined by k-Nearest Neighbors:\n [0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1]\n \n Labels of training set\n [0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1]\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "97c8f53a13ffa0489533b9193bbaa2dc9c2d856d", "size": 13211, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "information-retrieval-exercises/lab02.ipynb", "max_stars_repo_name": "achmart/inforet", "max_stars_repo_head_hexsha": "3596ff971207728a42b335e71608b0b96e241228", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "information-retrieval-exercises/lab02.ipynb", "max_issues_repo_name": "achmart/inforet", "max_issues_repo_head_hexsha": "3596ff971207728a42b335e71608b0b96e241228", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "information-retrieval-exercises/lab02.ipynb", "max_forks_repo_name": "achmart/inforet", "max_forks_repo_head_hexsha": "3596ff971207728a42b335e71608b0b96e241228", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 26.9063136456, "max_line_length": 485, "alphanum_fraction": 0.5367496783, "converted": true, "num_tokens": 2662, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9230391621868804, "lm_q2_score": 0.9086179025005187, "lm_q1q2_score": 0.8386899074720794}} {"text": "# Simple Linear Regression with NumPy\n\nIn school, students are taught to draw lines like the following.\n\n$$ y = 2 x + 1$$\n\nThey're taught to pick two values for $x$ and calculate the corresponding values for $y$ using the equation.\nThen they draw a set of axes, plot the points, and then draw a line extending through the two dots on their axes.\n\n\n```python\n# Import matplotlib.\nimport matplotlib.pyplot as plt\n```\n\n\n```python\n# Draw some axes.\nplt.plot([-1, 10], [0, 0], 'k-')\nplt.plot([0, 0], [-1, 10], 'k-')\n\n# Plot the red, blue and green lines.\nplt.plot([1, 1], [-1, 3], 'b:')\nplt.plot([-1, 1], [3, 3], 'r:')\n\n# Plot the two points (1,3) and (2,5).\nplt.plot([1, 2], [3, 5], 'ko')\n# Join them with an (extending) green lines.\nplt.plot([-1, 10], [-1, 21], 'g-')\n\n# Set some reasonable plot limits.\nplt.xlim([-1, 10])\nplt.ylim([-1, 10])\n\n# Show the plot.\nplt.show()\n```\n\nSimple linear regression is about the opposite problem - what if you have some points and are looking for the equation?\nIt's easy when the points are perfectly on a line already, but usually real-world data has some noise.\nThe data might still look roughly linear, but aren't exactly so.\n\n***\n\n## Example (contrived and simulated)\n\n\n\n#### Scenario\nSuppose you are trying to weigh your suitcase to avoid an airline's extra charges.\nYou don't have a weighing scales, but you do have a spring and some gym-style weights of masses 7KG, 14KG and 21KG.\nYou attach the spring to the wall hook, and mark where the bottom of it hangs.\nYou then hang the 7KG weight on the end and mark where the bottom of the spring is.\nYou repeat this with the 14KG weight and the 21KG weight.\nFinally, you place your case hanging on the spring, and the spring hangs down halfway between the 7KG mark and the 14KG mark.\nIs your case over the 10KG limit set by the airline?\n\n#### Hypothesis\nWhen you look at the marks on the wall, it seems that the 0KG, 7KG, 14KG and 21KG marks are evenly spaced.\nYou wonder if that means your case weighs 10.5KG.\nThat is, you wonder if there is a *linear* relationship between the distance the spring's hook is from its resting position, and the mass on the end of it.\n\n#### Experiment\nYou decide to experiment.\nYou buy some new weights - a 1KG, a 2KG, a 3Kg, all the way up to 20KG.\nYou place them each in turn on the spring and measure the distance the spring moves from the resting position.\nYou tabulate the data and plot them.\n\n#### Analysis\nHere we'll import the Python libraries we need for or investigations below.\n\n\n```python\n# numpy efficiently deals with numerical multi-dimensional arrays.\nimport numpy as np\n\n# matplotlib is a plotting library, and pyplot is its easy-to-use module.\nimport matplotlib.pyplot as plt\n\n# This just sets the default plot size to be bigger.\nplt.rcParams['figure.figsize'] = (8, 6)\n```\n\nIgnore the next couple of lines where I fake up some data. I'll use the fact that I faked the data to explain some results later. 
Just pretend that w is an array containing the weight values and d are the corresponding distance measurements.\n\n\n```python\nw = np.arange(0.0, 21.0, 1.0)\nd = 5.0 * w + 10.0 + np.random.normal(0.0, 5.0, w.size)\n```\n\n\n```python\n# Let's have a look at w.\nw\n```\n\n\n\n\n array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12.,\n 13., 14., 15., 16., 17., 18., 19., 20.])\n\n\n\n\n```python\n# Let's have a look at d.\nd\n```\n\n\n\n\n array([ 15.99410275, 10.9911427 , 10.53445534, 15.97058549,\n 27.44753565, 39.51180328, 43.06251898, 47.47682646,\n 52.27808676, 56.67322568, 59.16156245, 56.69603504,\n 67.15387974, 80.96017265, 84.50699528, 87.26011678,\n 90.77740446, 100.07004239, 107.93739931, 105.51085343,\n 118.07859596])\n\n\n\nLet's have a look at the data from our experiment.\n\n\n```python\n# Create the plot.\n\nplt.plot(w, d, 'k.')\n\n# Set some properties for the plot.\nplt.xlabel('Weight (KG)')\nplt.ylabel('Distance (CM)')\n\n# Show the plot.\nplt.show()\n```\n\n#### Model\nIt looks like the data might indeed be linear.\nThe points don't exactly fit on a straight line, but they are not far off it.\nWe might put that down to some other factors, such as the air density, or errors, such as in our tape measure.\nThen we can go ahead and see what would be the best line to fit the data. \n\n#### Straight lines\nAll straight lines can be expressed in the form $y = mx + c$.\nThe number $m$ is the slope of the line.\nThe slope is how much $y$ increases by when $x$ is increased by 1.0.\nThe number $c$ is the y-intercept of the line.\nIt's the value of $y$ when $x$ is 0.\n\n#### Fitting the model\nTo fit a straight line to the data, we just must pick values for $m$ and $c$.\nThese are called the parameters of the model, and we want to pick the best values possible for the parameters.\nThat is, the best parameter values *given* the data observed.\nBelow we show various lines plotted over the data, with different values for $m$ and $c$.\n\n\n```python\n# Plot w versus d with black dots.\nplt.plot(w, d, 'k.', label=\"Data\")\n\n# Overlay some lines on the plot.\nx = np.arange(0.0, 21.0, 1.0)\nplt.plot(x, 5.0 * x + 10.0, 'r-', label=r\"$5x + 10$\")\nplt.plot(x, 6.0 * x + 5.0, 'g-', label=r\"$6x + 5$\")\nplt.plot(x, 5.0 * x + 15.0, 'b-', label=r\"$5x + 15$\")\n\n# Add a legend.\nplt.legend()\n\n# Add axis labels.\nplt.xlabel('Weight (KG)')\nplt.ylabel('Distance (CM)')\n\n# Show the plot.\nplt.show()\n```\n\n#### Calculating the cost\nYou can see that each of these lines roughly fits the data.\nWhich one is best, and is there another line that is better than all three?\nIs there a \"best\" line?\n\nIt depends how you define the word best.\nLuckily, everyone seems to have settled on what the best means.\nThe best line is the one that minimises the following calculated value.\n\n$$ \\sum_i (y_i - mx_i - c)^2 $$\n\nHere $(x_i, y_i)$ is the $i^{th}$ point in the data set and $\\sum_i$ means to sum over all points. 
\nThe values of $m$ and $c$ are to be determined.\nWe usually denote the above as $Cost(m, c)$.\n\nWhere does the above calculation come from?\nIt's easy to explain the part in the brackets $(y_i - mx_i - c)$.\nThe corresponding value to $x_i$ in the dataset is $y_i$.\nThese are the measured values.\nThe value $m x_i + c$ is what the model says $y_i$ should have been.\nThe difference between the value that was observed ($y_i$) and the value that the model gives ($m x_i + c$), is $y_i - mx_i - c$.\n\nWhy square that value?\nWell note that the value could be positive or negative, and you sum over all of these values.\nIf we allow the values to be positive or negative, then the positive could cancel the negatives.\nSo, the natural thing to do is to take the absolute value $\\mid y_i - m x_i - c \\mid$.\nWell it turns out that absolute values are a pain to deal with, and instead it was decided to just square the quantity instead, as the square of a number is always positive.\nThere are pros and cons to using the square instead of the absolute value, but the square is used.\nThis is usually called *least squares* fitting.\n\n\n```python\n# Calculate the cost of the lines above for the data above.\ncost = lambda m,c: np.sum([(d[i] - m * w[i] - c)**2 for i in range(w.size)])\n\nprint(\"Cost with m = %5.2f and c = %5.2f: %8.2f\" % (5.0, 10.0, cost(5.0, 10.0)))\nprint(\"Cost with m = %5.2f and c = %5.2f: %8.2f\" % (6.0, 5.0, cost(6.0, 5.0)))\nprint(\"Cost with m = %5.2f and c = %5.2f: %8.2f\" % (5.0, 15.0, cost(5.0, 15.0)))\n```\n\n Cost with m = 5.00 and c = 10.00: 567.04\n Cost with m = 6.00 and c = 5.00: 1073.18\n Cost with m = 5.00 and c = 15.00: 911.51\n\n\n#### Minimising the cost\nWe want to calculate values of $m$ and $c$ that give the lowest value for the cost value above.\nFor our given data set we can plot the cost value/function.\nRecall that the cost is:\n\n$$ Cost(m, c) = \\sum_i (y_i - mx_i - c)^2 $$\n\nThis is a function of two variables, $m$ and $c$, so a plot of it is three dimensional.\nSee the **Advanced** section below for the plot.\n\nIn the case of fitting a two-dimensional line to a few data points, we can easily calculate exactly the best values of $m$ and $c$.\nSome of the details are discussed in the **Advanced** section, as they involve calculus, but the resulting code is straight-forward.\nWe first calculate the mean (average) values of our $x$ values and that of our $y$ values.\nThen we subtract the mean of $x$ from each of the $x$ values, and the mean of $y$ from each of the $y$ values.\nThen we take the *dot product* of the new $x$ values and the new $y$ values and divide it by the dot product of the new $x$ values with themselves.\nThat gives us $m$, and we use $m$ to calculate $c$.\n\nRemember that in our dataset $x$ is called $w$ (for weight) and $y$ is called $d$ (for distance).\nWe calculate $m$ and $c$ below.\n\n\n```python\n# Calculate the best values for m and c.\n\n# First calculate the means (a.k.a. 
averages) of w and d.\nw_avg = np.mean(w)\nd_avg = np.mean(d)\n\n# Subtract means from w and d.\nw_zero = w - w_avg\nd_zero = d - d_avg\n\n# The best m is found by the following calculation.\nm = np.sum(w_zero * d_zero) / np.sum(w_zero * w_zero)\n# Use m from above to calculate the best c.\nc = d_avg - m * w_avg\n\nprint(\"m is %8.6f and c is %6.6f.\" % (m, c))\n```\n\n m is 5.395020 and c is 6.909486.\n\n\nNote that numpy has a function that will perform this calculation for us, called polyfit.\nIt can be used to fit lines in many dimensions.\n\n\n```python\nnp.polyfit(w, d, 1)\n```\n\n\n\n\n array([5.39501974, 6.90948551])\n\n\n\n#### Best fit line\nSo, the best values for $m$ and $c$ given our data and using least squares fitting are about $4.95$ for $m$ and about $11.13$ for $c$.\nWe plot this line on top of the data below.\n\n\n```python\n# Plot the best fit line.\nplt.plot(w, d, 'k.', label='Original data')\nplt.plot(w, m * w + c, 'b-', label='Best fit line')\n\n# Add axis labels and a legend.\nplt.xlabel('Weight (KG)')\nplt.ylabel('Distance (CM)')\nplt.legend()\n\n# Show the plot.\nplt.show()\n```\n\nNote that the $Cost$ of the best $m$ and best $c$ is not zero in this case.\n\n\n```python\nprint(\"Cost with m = %5.2f and c = %5.2f: %8.2f\" % (m, c, cost(m, c)))\n```\n\n Cost with m = 5.40 and c = 6.91: 431.37\n\n\n### Summary\nIn this notebook we:\n1. Investigated the data.\n2. Picked a model.\n3. Picked a cost function.\n4. Estimated the model parameter values that minimised our cost function.\n\n### Advanced\nIn the following sections we cover some of the more advanced concepts involved in fitting the line.\n\n#### Simulating data\nEarlier in the notebook we glossed over something important: we didn't actually do the weighing and measuring - we faked the data.\nA better term for this is *simulation*, which is an important tool in research, especially when testing methods such as simple linear regression.\n\nWe ran the following two commands to do this:\n\n```python\nw = np.arange(0.0, 21.0, 1.0)\nd = 5.0 * w + 10.0 + np.random.normal(0.0, 5.0, w.size)\n```\n\nThe first command creates a numpy array containing all values between 1.0 and 21.0 (including 1.0 but not including 21.0) in steps of 1.0.\n\n\n```python\n np.arange(0.0, 21.0, 1.0)\n```\n\n\n\n\n array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12.,\n 13., 14., 15., 16., 17., 18., 19., 20.])\n\n\n\nThe second command is more complex.\nFirst it takes the values in the `w` array, multiplies each by 5.0 and then adds 10.0.\n\n\n```python\n5.0 * w + 10.0\n```\n\n\n\n\n array([ 10., 15., 20., 25., 30., 35., 40., 45., 50., 55., 60.,\n 65., 70., 75., 80., 85., 90., 95., 100., 105., 110.])\n\n\n\nIt then adds an array of the same length containing random values.\nThe values are taken from what is called the normal distribution with mean 0.0 and standard deviation 5.0.\n\n\n```python\nnp.random.normal(0.0, 5.0, w.size)\n```\n\n\n\n\n array([ 2.77924728, -7.36852008, -1.37782612, 9.98075174, 4.21995499,\n -2.60231604, 9.32075467, 2.83510953, -9.14550022, 6.52177982,\n 11.11216945, 6.69576981, 6.84188875, -3.55449951, 6.14654406,\n 0.97382891, -2.89697444, 3.99446255, -1.07846657, -9.81654276,\n -3.02725231])\n\n\n\nThe normal distribution follows a bell shaped curve.\nThe curve is centred on the mean (0.0 in this case) and its general width is determined by the standard deviation (5.0 in this case).\n\n\n```python\n# Plot the normal distrution.\nnormpdf = lambda mu, s, x: (1.0 / (2.0 * np.pi * s**2)) * np.exp(-((x - 
mu)**2)/(2 * s**2))\n\nx = np.linspace(-20.0, 20.0, 100)\ny = normpdf(0.0, 5.0, x)\nplt.plot(x, y)\n\nplt.show()\n```\n\nThe idea here is to add a little bit of randomness to the measurements of the distance.\nThe random values are entered around 0.0, with a greater than 99% chance they're within the range -15.0 to 15.0.\nThe normal distribution is used because of the [Central Limit Theorem](https://en.wikipedia.org/wiki/Central_limit_theorem) which basically states that when a bunch of random effects happen together the outcome looks roughly like the normal distribution. (Don't quote me on that!)\n\n#### Plotting the cost function\nWe can plot the cost function for a given set of data points.\nRecall that the cost function involves two variables: $m$ and $c$, and that it looks like this:\n\n$$ Cost(m,c) = \\sum_i (y_i - mx_i - c)^2 $$\n\nTo plot a function of two variables we need a 3D plot.\nIt can be difficult to get the viewing angle right in 3D plots, but below you can just about make out that there is a low point on the graph around the $(m, c) = (\\approx 5.0, \\approx 10.0)$ point. \n\n\n```python\n# This code is a little bit involved - don't worry about it.\n# Just look at the plot below.\n\nfrom mpl_toolkits.mplot3d import Axes3D\n\n# Ask pyplot a 3D set of axes.\nax = plt.figure().gca(projection='3d')\n\n# Make data.\nmvals = np.linspace(4.5, 5.5, 100)\ncvals = np.linspace(0.0, 20.0, 100)\n\n# Fill the grid.\nmvals, cvals = np.meshgrid(mvals, cvals)\n\n# Flatten the meshes for convenience.\nmflat = np.ravel(mvals)\ncflat = np.ravel(cvals)\n\n# Calculate the cost of each point on the grid.\nC = [np.sum([(d[i] - m * w[i] - c)**2 for i in range(w.size)]) for m, c in zip(mflat, cflat)]\nC = np.array(C).reshape(mvals.shape)\n\n# Plot the surface.\nsurf = ax.plot_surface(mvals, cvals, C)\n\n# Set the axis labels.\nax.set_xlabel('$m$', fontsize=16)\nax.set_ylabel('$c$', fontsize=16)\nax.set_zlabel('$Cost$', fontsize=16)\n\n# Show the plot.\nplt.show()\n```\n\n#### Coefficient of determination\nEarlier we used a cost function to determine the best line to fit the data.\nUsually the data do not perfectly fit on the best fit line, and so the cost is greater than 0.\nA quantity closely related to the cost is the *coefficient of determination*, also known as the *R-squared* value.\nThe purpose of the R-squared value is to measure how much of the variance in $y$ is determined by $x$.\n\nFor instance, in our example the main thing that affects the distance the spring is hanging down is the weight on the end.\nIt's not the only thing that affects it though.\nThe room temperature and density of the air at the time of measurment probably affect it a little.\nThe age of the spring, and how many times it has been stretched previously probably also have a small affect.\nThere are probably lots of unknown factors affecting the measurment.\n\nThe R-squared value estimates how much of the changes in the $y$ value is due to the changes in the $x$ value compared to all of the other factors affecting the $y$ value.\nIt is calculated as follows:\n\n$$ R^2 = 1 - \\frac{\\sum_i (y_i - m x_i - c)^2}{\\sum_i (y_i - \\bar{y})^2} $$\n\nNote that sometimes the [*Pearson correlation coefficient*](https://en.wikipedia.org/wiki/Pearson_correlation_coefficient) is used instead of the R-squared value.\nYou can just square the Pearson coefficient to get the R-squred value.\n\n\n```python\n# Calculate the R-squared value for our data set.\nrsq = 1.0 - (np.sum((d - m * w - c)**2)/np.sum((d - d_avg)**2))\n\nprint(\"The 
R-squared value is %6.4f\" % rsq)\n```\n\n The R-squared value is 0.9811\n\n\n\n```python\n# The same value using numpy.\nnp.corrcoef(w, d)[0][1]**2\n```\n\n\n\n\n 0.9811159849088361\n\n\n\n#### The minimisation calculations\nEarlier we used the following calculation to calculate $m$ and $c$ for the line of best fit.\nThe code was:\n\n```python\nw_zero = w - np.mean(w)\nd_zero = d - np.mean(d)\n\nm = np.sum(w_zero * d_zero) / np.sum(w_zero * w_zero)\nc = np.mean(d) - m * np.mean(w)\n```\n\nIn mathematical notation we write this as:\n\n$$ m = \\frac{\\sum_i (x_i - \\bar{x}) (y_i - \\bar{y})}{\\sum_i (x_i - \\bar{x})^2} \\qquad \\textrm{and} \\qquad c = \\bar{y} - m \\bar{x} $$\n\nwhere $\\bar{x}$ is the mean of $x$ and $\\bar{y}$ that of $y$.\n\nWhere did these equations come from?\nThey were derived using calculus.\nWe'll give a brief overview of it here, but feel free to gloss over this section if it's not for you.\nIf you can understand the first part, where we calculate the partial derivatives, then great!\n\nThe calculations look complex, but if you know basic differentiation, including the chain rule, you can easily derive them.\nFirst, we differentiate the cost function with respect to $m$ while treating $c$ as a constant, called a partial derivative.\nWe write this as $\\frac{\\partial m}{ \\partial Cost}$, using $\\delta$ as opposed to $d$ to signify that we are treating the other variable as a constant.\nWe then do the same with respect to $c$ while treating $m$ as a constant.\nWe set both equal to zero, and then solve them as two simultaneous equations in two variables.\n\n###### Calculate the partial derivatives\n$$\n\\begin{align}\nCost(m, c) &= \\sum_i (y_i - mx_i - c)^2 \\\\[1cm]\n\\frac{\\partial Cost}{\\partial m} &= \\sum 2(y_i - m x_i -c)(-x_i) \\\\\n &= -2 \\sum x_i (y_i - m x_i -c) \\\\[0.5cm]\n\\frac{\\partial Cost}{\\partial c} & = \\sum 2(y_i - m x_i -c)(-1) \\\\\n & = -2 \\sum (y_i - m x_i -c) \\\\\n\\end{align}\n$$\n\n###### Set to zero\n$$\n\\begin{align}\n& \\frac{\\partial Cost}{\\partial m} = 0 \\\\[0.2cm]\n& \\Rightarrow -2 \\sum x_i (y_i - m x_i -c) = 0 \\\\\n& \\Rightarrow \\sum (x_i y_i - m x_i x_i - x_i c) = 0 \\\\\n& \\Rightarrow \\sum x_i y_i - \\sum_i m x_i x_i - \\sum x_i c = 0 \\\\\n& \\Rightarrow m \\sum x_i x_i = \\sum x_i y_i - c \\sum x_i \\\\[0.2cm]\n& \\Rightarrow m = \\frac{\\sum x_i y_i - c \\sum x_i}{\\sum x_i x_i} \\\\[0.5cm]\n& \\frac{\\partial Cost}{\\partial c} = 0 \\\\[0.2cm]\n& \\Rightarrow -2 \\sum (y_i - m x_i - c) = 0 \\\\\n& \\Rightarrow \\sum y_i - \\sum_i m x_i - \\sum c = 0 \\\\\n& \\Rightarrow \\sum y_i - m \\sum_i x_i = c \\sum 1 \\\\\n& \\Rightarrow c = \\frac{\\sum y_i - m \\sum x_i}{\\sum 1} \\\\\n& \\Rightarrow c = \\frac{\\sum y_i}{\\sum 1} - m \\frac{\\sum x_i}{\\sum 1} \\\\[0.2cm]\n& \\Rightarrow c = \\bar{y} - m \\bar{x} \\\\\n\\end{align}\n$$\n\n###### Solve the simultaneous equations\nHere we let $n$ be the length of $x$, which is also the length of $y$.\n\n$$\n\\begin{align}\n& m = \\frac{\\sum_i x_i y_i - c \\sum_i x_i}{\\sum_i x_i x_i} \\\\[0.2cm]\n& \\Rightarrow m = \\frac{\\sum x_i y_i - (\\bar{y} - m \\bar{x}) \\sum x_i}{\\sum x_i x_i} \\\\\n& \\Rightarrow m \\sum x_i x_i = \\sum x_i y_i - \\bar{y} \\sum x_i + m \\bar{x} \\sum x_i \\\\\n& \\Rightarrow m \\sum x_i x_i - m \\bar{x} \\sum x_i = \\sum x_i y_i - \\bar{y} \\sum x_i \\\\[0.3cm]\n& \\Rightarrow m = \\frac{\\sum x_i y_i - \\bar{y} \\sum x_i}{\\sum x_i x_i - \\bar{x} \\sum x_i} \\\\[0.2cm]\n& \\Rightarrow m = \\frac{\\sum (x_i y_i) - n \\bar{y} 
\\bar{x}}{\\sum (x_i x_i) - n \\bar{x} \\bar{x}} \\\\\n& \\Rightarrow m = \\frac{\\sum (x_i y_i) - n \\bar{y} \\bar{x} - n \\bar{y} \\bar{x} + n \\bar{y} \\bar{x}}{\\sum (x_i x_i) - n \\bar{x} \\bar{x} - n \\bar{x} \\bar{x} + n \\bar{x} \\bar{x}} \\\\\n& \\Rightarrow m = \\frac{\\sum (x_i y_i) - \\sum y_i \\bar{x} - \\sum \\bar{y} x_i + n \\bar{y} \\bar{x}}{\\sum (x_i x_i) - \\sum x_i \\bar{x} - \\sum \\bar{x} x_i + n \\bar{x} \\bar{x}} \\\\\n& \\Rightarrow m = \\frac{\\sum_i (x_i - \\bar{x}) (y_i - \\bar{y})}{\\sum_i (x_i - \\bar{x})^2} \\\\\n\\end{align}\n$$\n\n
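As a quick numerical sanity check on this derivation (a sketch assuming the arrays w and d and the fitted values m and c from earlier in this notebook are still in scope), both partial derivatives of the cost should come out approximately zero at the fitted parameters:\n\n\n```python\n# Partial derivatives of Cost(m, c) evaluated at the fitted m and c.\n# Both should be numerically close to zero, as the derivation above requires.\ndCost_dm = -2 * np.sum(w * (d - m * w - c))\ndCost_dc = -2 * np.sum(d - m * w - c)\nprint(dCost_dm, dCost_dc)\n```\n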
    \n\n#### Using sklearn neural networks\n\n***\n\n\n```python\nimport sklearn.neural_network as sknn\n\n# Expects a 2D array of inputs.\nw2d = w.reshape(-1, 1)\n\n# Train the neural network.\nregr = sknn.MLPRegressor(max_iter=10000).fit(w2d, d)\n\n# Show the predictions.\nnp.array([d, regr.predict(w2d)]).T\n```\n\n\n\n\n array([[ 15.99410275, 15.99136402],\n [ 10.9911427 , 10.99025661],\n [ 10.53445534, 9.83175482],\n [ 15.97058549, 18.21173347],\n [ 27.44753565, 26.74674379],\n [ 39.51180328, 35.26744107],\n [ 43.06251898, 40.53315812],\n [ 47.47682646, 45.79886552],\n [ 52.27808676, 51.06457291],\n [ 56.67322568, 56.33028031],\n [ 59.16156245, 61.5959877 ],\n [ 56.69603504, 66.86169509],\n [ 67.15387974, 72.12740249],\n [ 80.96017265, 77.39310988],\n [ 84.50699528, 82.65881728],\n [ 87.26011678, 87.92452467],\n [ 90.77740446, 93.19023207],\n [100.07004239, 98.45593946],\n [107.93739931, 103.72164686],\n [105.51085343, 108.98735425],\n [118.07859596, 114.25306165]])\n\n\n\n\n```python\n# The score.\nregr.score(w2d, d)\n```\n\n\n\n\n 0.9895666755071412\n\n\n\n#### End\n", "meta": {"hexsha": "40e8879895de2a4d2ea2faed0136000d3bdd6bd1", "size": 264071, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "simple-linear-regression.ipynb", "max_stars_repo_name": "ianmcloughlin/jupyter-teaching-notebooks", "max_stars_repo_head_hexsha": "46fed19115bf2f7e63c7bcb3d78bee92295c33b6", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": 10, "max_stars_repo_stars_event_min_datetime": "2018-10-23T15:30:48.000Z", "max_stars_repo_stars_event_max_datetime": "2021-10-31T20:30:47.000Z", "max_issues_repo_path": "simple-linear-regression.ipynb", "max_issues_repo_name": "ianmcloughlin/jupyter-teaching-notebooks", "max_issues_repo_head_hexsha": "46fed19115bf2f7e63c7bcb3d78bee92295c33b6", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "simple-linear-regression.ipynb", "max_forks_repo_name": "ianmcloughlin/jupyter-teaching-notebooks", "max_forks_repo_head_hexsha": "46fed19115bf2f7e63c7bcb3d78bee92295c33b6", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": 34, "max_forks_repo_forks_event_min_datetime": "2018-10-30T00:08:01.000Z", "max_forks_repo_forks_event_max_datetime": "2021-01-08T23:33:52.000Z", "avg_line_length": 252.4579349904, "max_line_length": 107364, "alphanum_fraction": 0.9117585801, "converted": true, "num_tokens": 6707, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9086178870347124, "lm_q2_score": 0.9230391706552538, "lm_q1q2_score": 0.8386899008910499}} {"text": "---\n# Example of classification using two gaussian distributions.\n---\n\nIn this example, we generate a global probability distribution function (PDF) that is the sum of two gaussian PDF. \nEach one is weighted with its a priori class probability $P(C_{i})$:\n\n\n
    $P(\\bf{x}) = P(\\bf{x}|C_{1}) P(C_{1}) + P(\\bf{x}|C_{2}) P(C_{2})$
    \n\n\n\nOur goal is to locate the influence zone of each class $i$. This corresponds to the zone where \n\n
    $P(\\bf{x}|C_{i}) P(C_{i}) > P(\\bf{x}|C_{j}) P(C_{j})$
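\n\nIn code, once the two weighted class densities have been evaluated on a common grid, this decision rule is a simple element-wise comparison. A minimal sketch (pdf0 and pdf1 are the names used in the script further down, where they already include the prior weights $P(C_{1})$ and $P(C_{2})$):\n\n\n```python\n# Influence zone of class 2: grid points where its weighted density dominates.\nmask = pdf1 > pdf0\n```\n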
    \n\n\n## The 2-D gausian PDF\n\nIn its matrix-form, the equation for the 2-D gausian PDF reads like this: \n\n
    $P(\\bf{x}) = \\frac{1}{2\\pi |\\Sigma|^{0.5}} \\exp{[-\\frac{1}{2}(\\bf{x}-\\bf{\\mu})^\\top \\Sigma^{-1} (\\bf{x}-\\bf{\\mu})]}$
    \n\nwhere \n\n
    \n$\n\\begin{align}\n\\bf{x} &= [x_{1} x_{2}]^\\top \\\\\n\\bf{\\mu} &= [\\mu_{1} \\mu_{2}]^\\top \\\\\n\\Sigma &= \\begin{pmatrix} \\sigma_{x_{1}}^2 & \\rho\\sigma_{x_{1}}\\sigma_{x_{2}} \\\\ \\rho\\sigma_{x_{1}}\\sigma_{x_{2}} & \\sigma_{x_{2}}^2 \\end{pmatrix}\n\\end{align}\n$ \n
    \n\nwhere $\\rho$ is the correlation factor between the $x_{1}$ and $x_{2}$ data. \n\n## A useful trick to generate a covariance matrix $\\Sigma$ with desired features \n\nInstead of guessing values of $\\sigma_{x_{1}}$, $\\sigma_{x_{2}}$ and $\\rho$ to create a covariance matrix $\\Sigma$ for visualization purposes, we can design it with the desired characteristics directly. These are the principal axis variances $\\sigma_{1}^2$ and $\\sigma_{2}^2$, and the rotation angle $\\theta$.\n\nFirst, we generate the covariance matrix of an uncorrelated PDF ($\\rho = 0$) whose principal axes are aligned with the $x_{1}$ and $x_{2}$ axes:\n\n
    $\\Sigma_{PA} = \\begin{pmatrix} \\sigma_{1}^2 & 0 \\\\ 0 & \\sigma_{2}^2 \\end{pmatrix}$
    \n\nNext, we generate the rotation matrix for the angle $\\theta$:\n\n
    $R = \\begin{pmatrix} \\cos{\\theta} & -\\sin{\\theta} \\\\ \\sin{\\theta} & \\cos{\\theta} \\end{pmatrix}$
    \n\nThe covariance matrix we are looking for is \n\n
    $\\Sigma = R \\Sigma_{PA} R^\\top $
    \n\nHence, we only have to specify the values of $\\sigma_{1}$ and $\\sigma_{2}$ along principal axes and the \nrotation angle $\\theta$. The correlation coefficient $\\rho$ depends on the value of $\\theta$:\n\n
    $\\rho>0$ when $\\theta>0$
    \n
    $\\rho<0$ when $\\theta<0$
    \n\n
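\nAs a small self-contained illustration of this trick (a sketch using the same $\\sigma_{1}$, $\\sigma_{2}$ and $\\theta$ values as the first class in the script below), the rotated covariance matrix and its implied correlation coefficient can be computed in a few lines of numpy:\n\n\n```python\nimport numpy as np\n\n# Principal-axis standard deviations and rotation angle (values of the first class below).\nsigma1, sigma2 = 2.0, 0.7\ntheta = np.radians(-30.0)\n\nc, s = np.cos(theta), np.sin(theta)\nR = np.array([[c, -s], [s, c]])             # rotation matrix\nSigma_PA = np.diag([sigma1**2, sigma2**2])  # covariance along the principal axes\nSigma = R @ Sigma_PA @ R.T                  # rotated covariance matrix\n\n# Correlation coefficient implied by Sigma; it is negative here because theta < 0.\nrho = Sigma[0, 1] / np.sqrt(Sigma[0, 0] * Sigma[1, 1])\nprint(Sigma)\nprint(rho)\n```\n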
    \nN.B. For ease of reading, we use below the variables x and y instead of x1 and x2. The final results are shown with x1 and x2.\n\n\n```python\nprint(__doc__)\n\n# Author: Pierre Gravel \n# License: BSD\n\n%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib import colors\nfrom matplotlib import cm\nfrom matplotlib.ticker import NullFormatter\nfrom mpl_toolkits.mplot3d import Axes3D\n\nfrom scipy import stats\nfrom scipy.stats import multivariate_normal\n\nfrom skimage.transform import resize\n\nimport seaborn as sns\nsns.set(color_codes=True)\n\n# Used for reproductibility of the results\nnp.random.seed(43)\n```\n\n Automatically created module for IPython interactive environment\n\n\nHere are the characteristics of the 2-D PDF we want to generate:\n\n\n```python\n# Origins of the PDF\nMu = np.zeros((2,2))\nMu[0,:] = [6., 3.]\nMu[1,:] = [6., 7.]\n\n# Standard deviations along the x1 and x2 directions (or x and y)\nsigma = np.zeros((2,2))\nsigma[0,:] = [2., 0.7]\nsigma[1,:] = [1.2, 0.7]\n\n# Rotation angles for the principal axis\ntheta = np.array([-30., 0.]) \n\n# A priori class probabilities\nprob_C = np.array([0.5, 0.5]) \n\n```\n\nCompute the covariance matrix $\\Sigma$\n\n\n```python\nSigma = np.zeros((2,2,2))\nfor i in range(2):\n angle = np.radians(theta[i])\n c, s = np.cos(angle), np.sin(angle)\n\n # Rotation matrix\n R = np.array(((c, -s), (s, c))) \n\n # Covariance matrix for a PDF with its principal axes oriented along the x and y directions\n sig = np.array([[sigma[i,0]**2, 0.],[0., sigma[i,1]**2]])\n\n # Covariance matrix after rotation\n Sigma[:,:,i] = R.dot( sig.dot(R.T) ) \n\n```\n\n## Global PDF computation\nGenerate a spatial grid where the PDF will be evaluated locally.\n\n\n```python\nx_min, x_max = 0., 10.\ny_min, y_max = 0., 10.\nnx, ny = 300, 300\nx = np.linspace(x_min, x_max, nx)\ny = np.linspace(y_min, y_max, ny)\nxx, yy = np.meshgrid(x,y) \npos = np.dstack((xx, yy)) \n \n```\n\nCompute the global PDF\n\n\n```python\n# Weighted PDF\nmodel = multivariate_normal(Mu[0,:], Sigma[:,:,0]) \npdf0 = prob_C[0]*model.pdf(pos)\n\nmodel = multivariate_normal(Mu[1,:], Sigma[:,:,1]) \npdf1 = prob_C[1]*model.pdf(pos)\n\n# Global PDF \npdf = pdf0 + pdf1\n\n# Define a mask to identify the influence zone of each class: Mask=False for class 0 and Mask=True for class 1\nmask = pdf1 > pdf0\n\n```\n\n## Display the global PDF \n\n\n```python\nz_offset = -0.1\nview=[22., -30.]\n\n\nfig = plt.figure(figsize = (15,10))\nax = fig.gca(projection='3d')\n\n\n# ---------Generate overlying 3-D surface with colors indicating the influence zone of each class\nrstride, cstride = 5, 5\n\ns = ax.plot_surface(xx, yy, pdf, rstride=rstride, cstride=cstride, linewidth=.5, antialiased=True, color='gray', edgecolors='k') \na1 = s.__dict__['_original_facecolor']\nb1 = s.__dict__['_facecolors']\nc1 = s.__dict__['_facecolors3d']\n\ns = ax.plot_surface(xx, yy, pdf, rstride=rstride, cstride=cstride, linewidth=.5, antialiased=True, color='w', edgecolors='k')\na2 = s.__dict__['_original_facecolor']\nb2 = s.__dict__['_facecolors']\nc2 = s.__dict__['_facecolors3d']\n\nLx = int(nx/rstride)\nLy = int(ny/cstride)\n\nmask2 = resize(mask, (Lx,Ly), order=0)\nindx = np.argwhere(mask2)\nidx = indx[:,0]*Lx + indx[:,1]\n\na = a1\nb = b1\nc = c1\nfor i in idx:\n a[i,:] = a2[i,:]\n b[i,:] = b2[i,:]\n c[i,:] = c2[i,:]\n \ns.__dict__['_original_facecolor'] = a\ns.__dict__['_facecolors'] = b\ns.__dict__['_facecolors3d'] = c\n\n\n# --------- Generate underlying contours with borders between 
each class\ncset = ax.contourf(xx, yy, pdf, zdir='z', offset=z_offset, cmap='viridis')\nax.contour(xx, yy, mask, [0.5], offset=z_offset, linewidths=2., colors='white') \n\n# N.B. For convenience, we use classes 1 and 2 instead of classes 0 and 1 in the figure\nax.text(2, 1, z_offset,'$\\mathcal{R}_{1}$', zdir='y', color='w',fontsize=18) \nax.text(9, 8, z_offset,'$\\mathcal{R}_{2}$', zdir='y', color='w',fontsize=18) \nax.text(6, 2.5, 0.05, '$p(\\mathbf{X}|C_{1})p(C_{1})$', zdir='y', horizontalalignment='right', verticalalignment='bottom', \n color='k',fontsize=18) \nax.text(6, 6., 0.08, '$p(\\mathbf{X}|C_{2})p(C_{2})$', zdir='y', horizontalalignment='right', verticalalignment='bottom', \n color='gray',fontsize=18) \n\nax.set_zlim(z_offset, np.max(pdf))\nax.view_init(view[0], view[1])\n\nax.set_ylabel('$x_{2}$', fontsize=18)\nax.xaxis.set_rotate_label(False) \nax.set_xlabel('$x_{1}$', rotation=10, fontsize=18)\nax.set_zlabel('$Z$', rotation=0, fontsize=18)\nax.set_title('$Z = PDF(x_{1},x_{2})$', fontsize=20)\n \nfig.tight_layout()\n\nplt.savefig('Influence_zones_in_gaussian_PDF_classification.png')\nplt.savefig('Influence_zones_in_gaussian_PDF_classification.pdf')\n\nplt.show()\n\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "75925852beae19ced419ba1288cb01ea2df63c20", "size": 316832, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "generate_explanatory_diagram_for_influence_zones_in_two_class_segmentation.ipynb", "max_stars_repo_name": "AstroPierre/Scripts-for-figures-courses-GIF-4101-GIF-7005", "max_stars_repo_head_hexsha": "a38ad6f960cc6b8155fad00e4c4562f5e459f248", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "generate_explanatory_diagram_for_influence_zones_in_two_class_segmentation.ipynb", "max_issues_repo_name": "AstroPierre/Scripts-for-figures-courses-GIF-4101-GIF-7005", "max_issues_repo_head_hexsha": "a38ad6f960cc6b8155fad00e4c4562f5e459f248", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "generate_explanatory_diagram_for_influence_zones_in_two_class_segmentation.ipynb", "max_forks_repo_name": "AstroPierre/Scripts-for-figures-courses-GIF-4101-GIF-7005", "max_forks_repo_head_hexsha": "a38ad6f960cc6b8155fad00e4c4562f5e459f248", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 918.3536231884, "max_line_length": 305928, "alphanum_fraction": 0.9536883901, "converted": true, "num_tokens": 2233, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9504109770159683, "lm_q2_score": 0.8824278649085117, "lm_q1q2_score": 0.8386691292338134}} {"text": "We have a square matrix $R$. We consider the error for $T = R'R$ where $R'$ is the transpose of $R$.\n\nThe elements of $R$ are $r_{i,j}$, where $i = 1 \\dots N, j = 1 \\dots N$.\n\n$r_{i, *}$ is row $i$ of $R$.\n\nNow let $R$ be a rotation matrix. $T$ at infinite precision will be the identity matrix $I$\n\nAssume the maximum error in the specification of values $r_{i, j}$ is constant, $\\delta$. 
That is, any floating point value $r_{i, j}$ represents an infinite precision value between $r_{i, j} \\pm \\delta$\n\n\n```\nfrom sympy import Symbol, symarray, Matrix, matrices, simplify, nsimplify\nR = Matrix(symarray('r', (3,3)))\nR\n```\n\n\n\n\n [r_0_0, r_0_1, r_0_2]\n [r_1_0, r_1_1, r_1_2]\n [r_2_0, r_2_1, r_2_2]\n\n\n\n\n```\nT = R.T * R\nT\n```\n\n\n\n\n [ r_0_0**2 + r_1_0**2 + r_2_0**2, r_0_0*r_0_1 + r_1_0*r_1_1 + r_2_0*r_2_1, r_0_0*r_0_2 + r_1_0*r_1_2 + r_2_0*r_2_2]\n [r_0_0*r_0_1 + r_1_0*r_1_1 + r_2_0*r_2_1, r_0_1**2 + r_1_1**2 + r_2_1**2, r_0_1*r_0_2 + r_1_1*r_1_2 + r_2_1*r_2_2]\n [r_0_0*r_0_2 + r_1_0*r_1_2 + r_2_0*r_2_2, r_0_1*r_0_2 + r_1_1*r_1_2 + r_2_1*r_2_2, r_0_2**2 + r_1_2**2 + r_2_2**2]\n\n\n\nNow the same result with error $\\delta$ added to each element\n\n\n```\nd = Symbol('d')\nE = matrices.ones((3,3)) * d\nRE = R + E\nRE\n```\n\n\n\n\n [d + r_0_0, d + r_0_1, d + r_0_2]\n [d + r_1_0, d + r_1_1, d + r_1_2]\n [d + r_2_0, d + r_2_1, d + r_2_2]\n\n\n\nCalculate the result $T$ with error\n\n\n```\nTE = RE.T * RE\nTE\n```\n\n\n\n\n [ (d + r_0_0)**2 + (d + r_1_0)**2 + (d + r_2_0)**2, (d + r_0_0)*(d + r_0_1) + (d + r_1_0)*(d + r_1_1) + (d + r_2_0)*(d + r_2_1), (d + r_0_0)*(d + r_0_2) + (d + r_1_0)*(d + r_1_2) + (d + r_2_0)*(d + r_2_2)]\n [(d + r_0_0)*(d + r_0_1) + (d + r_1_0)*(d + r_1_1) + (d + r_2_0)*(d + r_2_1), (d + r_0_1)**2 + (d + r_1_1)**2 + (d + r_2_1)**2, (d + r_0_1)*(d + r_0_2) + (d + r_1_1)*(d + r_1_2) + (d + r_2_1)*(d + r_2_2)]\n [(d + r_0_0)*(d + r_0_2) + (d + r_1_0)*(d + r_1_2) + (d + r_2_0)*(d + r_2_2), (d + r_0_1)*(d + r_0_2) + (d + r_1_1)*(d + r_1_2) + (d + r_2_1)*(d + r_2_2), (d + r_0_2)**2 + (d + r_1_2)**2 + (d + r_2_2)**2]\n\n\n\nSubtract the true result to get the absolute error\n\n\n```\nTTE = TE-T\nTTE.simplify()\nTTE\n```\n\n\n\n\n [ d*(3*d + 2*r_0_0 + 2*r_1_0 + 2*r_2_0), d*(3*d + r_0_0 + r_0_1 + r_1_0 + r_1_1 + r_2_0 + r_2_1), d*(3*d + r_0_0 + r_0_2 + r_1_0 + r_1_2 + r_2_0 + r_2_2)]\n [d*(3*d + r_0_0 + r_0_1 + r_1_0 + r_1_1 + r_2_0 + r_2_1), d*(3*d + 2*r_0_1 + 2*r_1_1 + 2*r_2_1), d*(3*d + r_0_1 + r_0_2 + r_1_1 + r_1_2 + r_2_1 + r_2_2)]\n [d*(3*d + r_0_0 + r_0_2 + r_1_0 + r_1_2 + r_2_0 + r_2_2), d*(3*d + r_0_1 + r_0_2 + r_1_1 + r_1_2 + r_2_1 + r_2_2), d*(3*d + 2*r_0_2 + 2*r_1_2 + 2*r_2_2)]\n\n\n\n\n```\nTTE[0,0], TTE[1,1], TTE[2,2]\n```\n\n\n\n\n (d*(3*d + 2*r_0_0 + 2*r_1_0 + 2*r_2_0),\n d*(3*d + 2*r_0_1 + 2*r_1_1 + 2*r_2_1),\n d*(3*d + 2*r_0_2 + 2*r_1_2 + 2*r_2_2))\n\n\n\nAssuming $\\delta$ is small ($\\delta^2$ is near zero) then the diagonal values $TTE_{k, k}$ are approximately $2\\delta \\Sigma_i r_{i, k}$\n\n$\\Sigma_i r_{i, k}$ is the column sum for column $k$ of $R$. We know that the column $L^2$ norms of $R$ are each 1. We know $\\|x\\|_1 \\leq \\sqrt{n}\\|x\\|_2$ - http://en.wikipedia.org/wiki/Lp_space. Therefore the column sums must be $\\le \\sqrt{N}$. Therefore the maximum error for the diagonal of $T$ is $\\sqrt{N} 2\\delta$.\n\nMore generally, the elements $k, m$ of $TTE$ are approximately $\\delta (\\Sigma_i{r_{i, k}} + \\Sigma_i{r_{i, m}})$\n\nSo the error for each of the elements of $TTE$ is also bounded by $\\sqrt{N} 2\\delta$.\n\nNow consider the floating point calculation error. This depends on the floating point representation we use for the calculations. Let $\\epsilon = x-1$ where $x$ is the smallest number greater than 1 that is representable in our floating point format (see http://matthew-brett.github.com/pydagogue/floating_error.html). 
The largest error for a calculation resulting in a value near 1 is $\\frac{\\epsilon}{2}$. For the diagonal values, the calculation error will be the error for the $r_{i,*} r_{i, *}'$ dot product. This comprises $N$ scalar products with results each bounded by 1 ($r_{i, j} r_{i, j}$) followed by $N-1$ sums each bounded by 1. Maximum error is therefore $(2N-1) \\frac{\\epsilon}{2}$ = $\\frac{5}{2} \\epsilon$ where $N=3$.\n\nFor the off-diagonal values $T_{k, m}$, we have the $r_{k,*} r_{m, *}'$ dot product. Because $R$ is a rotation matrix, by definition this result must be zero.\n\nBecause the column and row $L^2$ norms of $R$ are 1, the values in $R$ cannot be greater than 1. Therefore $r_{k,*} r_{m, *}'$ consists of the $N$ products with results each bounded by 1 ($r_{k, j} r_{m, j}$) followed by $N-2$ sums each bounded by 1 (the last of the sums must be approximately 0). Maximum error is therefore $(2N-2) \\frac{\\epsilon}{2}$ = $2\\epsilon$ where $N=3$.\n\nSo, assuming an initial error of $\\delta$ per element, and $N=3$, the maximum error for diagonal elements is $\\sqrt{3} 2 \\delta + \\frac{5}{3} \\epsilon$. For the off-diagonal elements it is $\\sqrt{3} 2 \\delta + 2 \\epsilon$.\n", "meta": {"hexsha": "cf35cda2bee9298d5a963b9e546723741eeb8ee2", "size": 8608, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "doc/source/notebooks/ata_error.ipynb", "max_stars_repo_name": "tobon/nibabel", "max_stars_repo_head_hexsha": "ff2b5457207bb5fd6097b08f7f11123dc660fda7", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2015-10-01T01:13:59.000Z", "max_stars_repo_stars_event_max_datetime": "2015-10-01T01:13:59.000Z", "max_issues_repo_path": "doc/source/notebooks/ata_error.ipynb", "max_issues_repo_name": "tobon/nibabel", "max_issues_repo_head_hexsha": "ff2b5457207bb5fd6097b08f7f11123dc660fda7", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2015-11-13T03:05:24.000Z", "max_issues_repo_issues_event_max_datetime": "2016-08-06T19:18:54.000Z", "max_forks_repo_path": "doc/source/notebooks/ata_error.ipynb", "max_forks_repo_name": "tobon/nibabel", "max_forks_repo_head_hexsha": "ff2b5457207bb5fd6097b08f7f11123dc660fda7", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-02-27T20:48:03.000Z", "max_forks_repo_forks_event_max_datetime": "2019-02-27T20:48:03.000Z", "avg_line_length": 34.2948207171, "max_line_length": 754, "alphanum_fraction": 0.4925650558, "converted": true, "num_tokens": 2292, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9284087965937711, "lm_q2_score": 0.9032942073547149, "lm_q1q2_score": 0.8386262880203151}} {"text": "\n\n# Simplification and Evaluation of mathematical expressions using Sympy\n\n**Question 1**\n\n\\begin{align}\n\\frac{2x^{12}}{5y^{10}}\n\\end{align}\n\nSolve for (a) $x = 2$ , $y = 2$ (b) Case 2 : $x = 1$ , $y = 1$ \n\n\n\n**Evaluating the expression using Sympy**\n\n\n```python\nfrom sympy import*\n\nx = Symbol('x')\ny = Symbol('y')\n\nexpression = \"(2*x**12)/(5*y**10)\"\n\nsimplified_expression = simplify(expression)\nprint(\"The simplified expression is {}\". 
format(simplified_expression))\n\nevaluated_result_a = simplified_expression.subs({x:2, y:2}) \nevaluated_result_b = simplified_expression.subs({x:1, y:1}) \n\nprint(\"The Evaluated result for (a) is: {}\".format(evaluated_result_a))\nprint(\"The Evaluated result for (b) is: {}\".format(evaluated_result_b))\n```\n\n The simplified expression is 2*x**12/(5*y**10)\n The Evaluated result for (a) is: 8/5\n The Evaluated result for (b) is: 2/5\n\n\n**Question 2**\n\n\\begin{align}35\n\\left (\\frac{4x}{5}\\right)\n\\end{align}\n\nSolve for (a) $x = 1$ and (b) $x = 5$\n\n\n```python\nfrom sympy import*\n\nx = Symbol('x')\n\nexpression = \"35*(4*x)/5\"\n\nsimplified_expression = simplify(expression)\nprint(\"The simplified expression is {}\". format(simplified_expression))\n\nevaluated_result_a = simplified_expression.subs({x:1})\nevaluated_result_b = simplified_expression.subs({x:5})\n\n\nprint(\"The Evaluated result for (a) is: {}\".format(evaluated_result_a))\nprint(\"The Evaluated result for (b) is: {}\".format(evaluated_result_b))\n\n```\n\n The simplified expression is 28*x\n The Evaluated result for (a) is: 28\n The Evaluated result for (b) is: 140\n\n\n**Question 3**\n\n\\begin{align}24\n\\left(\\frac{y}{6}+ \\frac{3}{8} \\right)\n\\end{align}\n\nSolve for (a) $y = 2$ and (b) $y = 1$\n\n\n```python\nfrom sympy import*\n\ny = Symbol('y')\n\nexpression = \"24*((y/6)+(3/8))\"\nsimplified_expression = simplify(expression)\n\nprint(\"The simplified expression is {}\". format(simplified_expression))\n\nevaluated_result_a = simplified_expression.subs({y:2})\nevaluated_result_b = simplified_expression.subs({y:1})\n\n\nprint(\"The Evaluated result for (a) is: {}\".format(evaluated_result_a))\nprint(\"The Evaluated result for (b) is: {}\".format(evaluated_result_b))\n\n\n```\n\n The simplified expression is 4*y + 9\n The Evaluated result for (a) is: 17\n The Evaluated result for (b) is: 13\n\n\n**Question 4**\n\nCalculate the pressure P of the gas using Van der Waal\u2019s Equation for real gas\n\n\\begin{align}P =\n\\left(\\frac{RT}{V-b} - \\frac{a}{V^2}\\right)\n\\end{align}\n\n\n1. Average attraction between particles (a) = 1.360\n2. volume excluded by a mole of particles (b) = 0.03186\n3. Universal Gas constant (R) = 8.31\n4. Volume of gas(V) = 5\n5. Temperature(T) = 275\n\n\n\n```python\nfrom sympy import*\n\na = Symbol('a')\nb = Symbol('b')\nR = Symbol('R')\nV = Symbol('V')\nT = Symbol('T')\n\nexpression = \"(((R*T)/(V-b))-(a/V**2))\"\nsimplified_expression = simplify(expression)\nprint(\"The simplified expression is: {}\". 
format(simplified_expression))\n\nevaluated_result = simplified_expression.subs({a:1.360, b:0.03186, R:8.31, V:5, T:275})\nprint(\"The solution of Van der Waal\u2019s Equation for real gas under the given conditions is: {}\".format(evaluated_result))\n```\n\n The simplified expression is: (R*T*V**2 - a*(V - b))/(V**2*(V - b))\n The solution of Van der Waal\u2019s Equation for real gas under the given conditions is: 459.926598925151\n\n", "meta": {"hexsha": "597454d3cf2591304dab110aef64ec82d322eda8", "size": 7982, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "FOAM_Assignment_5.ipynb", "max_stars_repo_name": "tony2020edx/brew", "max_stars_repo_head_hexsha": "15528d11d99795a1ccfded08964b24c0ac6e1d48", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "FOAM_Assignment_5.ipynb", "max_issues_repo_name": "tony2020edx/brew", "max_issues_repo_head_hexsha": "15528d11d99795a1ccfded08964b24c0ac6e1d48", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "FOAM_Assignment_5.ipynb", "max_forks_repo_name": "tony2020edx/brew", "max_forks_repo_head_hexsha": "15528d11d99795a1ccfded08964b24c0ac6e1d48", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 29.2380952381, "max_line_length": 232, "alphanum_fraction": 0.4654221999, "converted": true, "num_tokens": 1044, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9284088005554475, "lm_q2_score": 0.9032942008463507, "lm_q1q2_score": 0.8386262855564519}} {"text": "```python\n%matplotlib inline\n\n# Using gradient descent for linear regression\n# Ideas from https://spin.atomicobject.com/2014/06/24/gradient-descent-linear-regression/\n\n# We will attempt to predict the college admission test score based\n# on the high school math score (following on Chapter 3 of \"Doing Math with Python\")\n\n# Known data\nx_data = [83, 85, 84, 96, 94, 86, 87, 97, 97, 85]\ny_data = [85, 87, 86, 97, 96, 88, 89, 98, 98, 87]\n\nfrom sympy import Symbol, Derivative\nimport matplotlib.pyplot as plt\n\n# Assumed linear model\n# x = math score in high school\n# y = admission test score\n\n# y = m*x + c\ndef estimate_y(x, m, c):\n y_estimated = m*x + c\n return y_estimated\n\ndef estimate_theta(m_current, c_current, max_iterations=50000):\n learning_rate = 0.0001\n m_gradient = 0\n c_gradient = 0\n N = len(x_data)\n \n m = Symbol('m')\n c = Symbol('c')\n y = Symbol('y')\n x = Symbol('x')\n # Error term\n error_term = (y - (m*x+c))**2\n # Error function = 1/n*sum(error_term)\n for i in range(max_iterations):\n for i in range(0, N):\n m_gradient += (1/N)*Derivative(error_term, m).doit().subs({x:x_data[i], y:y_data[i], m:m_current, c:c_current})\n c_gradient += (1/N)*Derivative(error_term, c).doit().subs({x:x_data[i], y:y_data[i], m:m_current, c:c_current})\n\n m_new = m_current - (learning_rate * m_gradient)\n c_new = c_current - (learning_rate * c_gradient)\n if abs(m_new - m_current) < 1e-5 or abs(c_new - c_current) < 1e-5:\n break\n else:\n m_current = m_new\n c_curret = c_new\n return m_new, c_new\n \nm, c = estimate_theta(1, 1)\n\n# Let's try and unknown set of data \n# This data set is different and widely spread, \n# but 
they are very similarly correlated\nx_data = [63, 61, 98, 76, 74, 59, 40, 87, 71, 75]\ny_data = [65, 62, 99, 78, 75, 60, 42, 89, 71, 77]\n\ny_estimated = [estimate_y(x, m, c) for x in x_data]\nplt.plot(x_data, y_data, 'ro')\nplt.plot(x_data, y_estimated, 'b*')\nplt.legend(['Actual', 'Estimated'], loc='best')\nplt.show()\n\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "a0f362f456e7cc27caf0d91844dd82f3b8bfa112", "size": 14586, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/Simple Linear Regression.ipynb", "max_stars_repo_name": "doingmathwithpython/pycon-us-2016", "max_stars_repo_head_hexsha": "08ecbddcc1dad9c6ffe83ea6d0c483c2d9dd4b62", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notebooks/Simple Linear Regression.ipynb", "max_issues_repo_name": "doingmathwithpython/pycon-us-2016", "max_issues_repo_head_hexsha": "08ecbddcc1dad9c6ffe83ea6d0c483c2d9dd4b62", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/Simple Linear Regression.ipynb", "max_forks_repo_name": "doingmathwithpython/pycon-us-2016", "max_forks_repo_head_hexsha": "08ecbddcc1dad9c6ffe83ea6d0c483c2d9dd4b62", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2017-07-23T06:53:38.000Z", "max_forks_repo_forks_event_max_datetime": "2020-05-11T09:13:57.000Z", "avg_line_length": 121.55, "max_line_length": 11050, "alphanum_fraction": 0.8457424928, "converted": true, "num_tokens": 673, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9732407168145568, "lm_q2_score": 0.8615382094310357, "lm_q1q2_score": 0.838484064509791}} {"text": "```python\nfrom sympy import *\ninit_printing(use_latex='mathjax')\n```\n\n\n```python\ndef get_diff(expressions, symbols):\n rows = len(expressions)\n columns = len(symbols)\n assert rows == columns , \"Number of expression doesnt match number of symbols\"\n \n print(\"Expressions:\")\n for expression in expressions:\n display(expression)\n\n results = [[0 for x in range(rows)] for y in range(columns)] \n for row, expression in enumerate(expressions):\n for column, symbol in enumerate(symbols):\n# print('Row %d, column %d, expression: %s, symbol: %s' % (row, column, expression, symbol))\n df = diff(expression, symbol)\n# print(\"DF: %s\" % df)\n results[row][column] = df\n return results\n```\n\n\n```python\nx, y = symbols('x y')\nget_diff([x ** 2 - y**2, 2 * x * y], [x, y])\n```\n\n Expressions:\n\n\n\n$$x^{2} - y^{2}$$\n\n\n\n$$2 x y$$\n\n\n\n\n\n$$\\left [ \\left [ 2 x, \\quad - 2 y\\right ], \\quad \\left [ 2 y, \\quad 2 x\\right ]\\right ]$$\n\n\n\n\n```python\nx, y, z = symbols('x y z')\nget_diff([\n 2 * x + 3 * y,\n cos(x) * sin(z),\n exp(x) * exp(y) * exp(z)\n], [x, y , z])\n```\n\n Expressions:\n\n\n\n$$2 x + 3 y$$\n\n\n\n$$\\sin{\\left (z \\right )} \\cos{\\left (x \\right )}$$\n\n\n\n$$e^{x} e^{y} e^{z}$$\n\n\n\n\n\n$$\\left [ \\left [ 2, \\quad 3, \\quad 0\\right ], \\quad \\left [ - \\sin{\\left (x \\right )} \\sin{\\left (z \\right )}, \\quad 0, \\quad \\cos{\\left (x \\right )} \\cos{\\left (z \\right )}\\right ], \\quad \\left [ e^{x} e^{y} e^{z}, \\quad e^{x} e^{y} e^{z}, \\quad e^{x} e^{y} e^{z}\\right ]\\right ]$$\n\n\n\n\n```python\nx, y, a, b, c, d = symbols('x y a b c d')\nget_diff([\n a * x + b * y,\n c * x + d * y\n], [x, y])\n```\n\n Expressions:\n\n\n\n$$a x + b y$$\n\n\n\n$$c x + d y$$\n\n\n\n\n\n$$\\left [ \\left [ a, \\quad b\\right ], \\quad \\left [ c, \\quad d\\right ]\\right ]$$\n\n\n\n\n```python\ndef jacobian_at(jacobian, point):\n rows = len(jacobian)\n columns = len(jacobian[0])\n results = [[0 for x in range(rows)] for y in range(columns)] \n for rowIndex, row in enumerate(jacobian):\n for colIndex, cell in enumerate(row):\n results[rowIndex][colIndex] = cell.evalf(subs=point)\n \n return results\n```\n\n\n```python\nx, y, z = symbols('x y z')\nJ = get_diff([\n 9 * x ** 2 * y ** 2 + z * exp(x),\n x * y + x ** 2 * y ** 3 + 2 * z,\n cos(x) * sin(z) * exp(y)\n], [x, y, z])\njacobian_at(J, {x: 0, y:0, z:0})\n```\n\n Expressions:\n\n\n\n$$9 x^{2} y^{2} + z e^{x}$$\n\n\n\n$$x^{2} y^{3} + x y + 2 z$$\n\n\n\n$$e^{y} \\sin{\\left (z \\right )} \\cos{\\left (x \\right )}$$\n\n\n\n\n\n$$\\left [ \\left [ 0, \\quad 0, \\quad 1.0\\right ], \\quad \\left [ 0, \\quad 0, \\quad 2.0\\right ], \\quad \\left [ 0, \\quad 0, \\quad 1.0\\right ]\\right ]$$\n\n\n\n\n```python\nr, theta, phi = symbols('r theta phi')\nget_diff([\n r * cos(theta) * sin(phi),\n r * sin(theta) * sin(phi),\n r * cos(phi)\n], [r, theta, phi])\n```\n\n Expressions:\n\n\n\n$$r \\sin{\\left (\\phi \\right )} \\cos{\\left (\\theta \\right )}$$\n\n\n\n$$r \\sin{\\left (\\phi \\right )} \\sin{\\left (\\theta \\right )}$$\n\n\n\n$$r \\cos{\\left (\\phi \\right )}$$\n\n\n\n\n\n$$\\left [ \\left [ \\sin{\\left (\\phi \\right )} \\cos{\\left (\\theta \\right )}, \\quad - r \\sin{\\left (\\phi \\right )} \\sin{\\left (\\theta \\right )}, \\quad r \\cos{\\left (\\phi \\right )} \\cos{\\left (\\theta \\right )}\\right ], \\quad \\left [ \\sin{\\left (\\phi \\right )} \\sin{\\left (\\theta 
\\right )}, \\quad r \\sin{\\left (\\phi \\right )} \\cos{\\left (\\theta \\right )}, \\quad r \\sin{\\left (\\theta \\right )} \\cos{\\left (\\phi \\right )}\\right ], \\quad \\left [ \\cos{\\left (\\phi \\right )}, \\quad 0, \\quad - r \\sin{\\left (\\phi \\right )}\\right ]\\right ]$$\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "1864aaf237c6ea7f42ad859290d5de2ca4bb3e2d", "size": 9915, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Certification 2/Week2.3 - Jacobian Matrix.ipynb", "max_stars_repo_name": "The-Brains/MathForMachineLearning", "max_stars_repo_head_hexsha": "5cbd9006f166059efaa2f312b741e64ce584aa1f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2018-04-16T02:53:59.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-16T06:51:57.000Z", "max_issues_repo_path": "Certification 2/Week2.3 - Jacobian Matrix.ipynb", "max_issues_repo_name": "The-Brains/MathForMachineLearning", "max_issues_repo_head_hexsha": "5cbd9006f166059efaa2f312b741e64ce584aa1f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Certification 2/Week2.3 - Jacobian Matrix.ipynb", "max_forks_repo_name": "The-Brains/MathForMachineLearning", "max_forks_repo_head_hexsha": "5cbd9006f166059efaa2f312b741e64ce584aa1f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2019-05-20T02:06:55.000Z", "max_forks_repo_forks_event_max_datetime": "2020-05-18T06:21:41.000Z", "avg_line_length": 23.3844339623, "max_line_length": 610, "alphanum_fraction": 0.4001008573, "converted": true, "num_tokens": 1274, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9433475778774728, "lm_q2_score": 0.8887587993853654, "lm_q1q2_score": 0.8384084607174752}} {"text": "```python\nimport numpy as np\n\nimport scipy.linalg as la\n\nimport sympy as sp\n```\n\nLecture 14. Orthogonality\n\nOrthogonal vectors: \n\n\n```python\nx = np.array([1,0])\ny = np.array([0,1])\nnp.dot(x.T,y)\n```\n\n\n\n\n 0\n\n\n\n|x|^2 + |y|^2 = |x+y|^2\n\nx.Tx - length squared\n\nx.Tx+y.Ty = (x+y).T(x+y)\n\nx.Tx+y.Ty = x.Tx+x.Ty+y.Tx+y.Ty\n\n2x.Ty = 0\n\nx.Ty = 0\n\n\n```python\nnp.dot(x.T,x)\n```\n\n\n\n\n 1\n\n\n\n\n```python\nnp.dot(y.T,y)\n```\n\n\n\n\n 1\n\n\n\n\n```python\nnp.dot((x+y).T,(x+y))\n```\n\n\n\n\n 2\n\n\n\nDot product of orthogonal vectors is 0.\n\n0 vector is orthogonal to everthing.\n\nNow to subspaces\n\nSupspace S is orthogonal to subspace T:\n\nmeans every vector in S is orthogonal to every vector in T. They don't intersect (except of 0 vector).\n\nRowspace is orthogonal to a nullspace.\n\nAx = 0\n\nMeans the dot product of all the rows of A and x is 0, and x is a vector in nullspace as a definition.\n\nc1(row1).Tx = 0\n\n\"+\" = 0\n\nc2(row2).Tx = 0\n\nNullspace and rowspace are orthogonal complements in R^n.\n\nNullspace contains all vectors perpedicular to rowspace.\n\nComing: Ax = b\n\n\"solve\" when there is no solution.\n\nm > n\n\nb is not in acolumnspace\n\nPractically too many equations with noise on a right side. E.g. finding the best solution.\n\nDrop some equtions is not a solution, as by droping them we literally drop the measurements and loose information.\n\n\n\nA.TA - is a square symmetric matrix (A.TA).T = A.TA\n\nA.TAx' = A.Tb is a way to go. 
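(A small numerical aside, not part of the original lecture notes: with a made-up tall system, the normal equations can be formed and solved directly.)


```python
import numpy as np

# made-up overdetermined system: 4 equations, 2 unknowns (no exact solution)
A = np.array([[1., 1.],
              [1., 2.],
              [1., 3.],
              [1., 4.]])
b = np.array([1., 2., 2., 4.])

# normal equations: (A.T A) xhat = A.T b
xhat = np.linalg.solve(A.T @ A, A.T @ b)
print(xhat)          # best-fit coefficients
print(A @ xhat - b)  # residual is nonzero, but its norm is the smallest possible
```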
x' is a best solution, not a solution to original system.\n\nN(A.TA) = N(A)\n\nrank of A.TA = rank of A\n\nA.TA is invertible if A has independant columns.\n\nLecture 15. Projections.\n\na.T(b-xa) = 0 - projection of a vector b to a vector a.\n\nxa.Ta = a.Tb\n\nx = (a.Tb)/(a.Ta)\n\np = ax\n\np = a(a.Tb)/(a.Ta)\n\nDoubling b will double p - projection.\n\nproj p = Pb\n\nP = aa.T/a.Ta - n x n matrix\n\nC(P) = a line through a\n\nrank(P) = 1\n\nP.T = P\n\nThe projection of a projection is the same projection.\n\nP^2 = P\n\n\nWhy to even project?\n\nBecause Ax = b may have no solution, thus we project b to a columnspace of A and this is exactly the closest and the best possible solution.\n\ne = b-p - error. The reason why we project, and projection sort of eliminates the error.\n\np = x1'a1 + x2'a2 = Ax'\n\np = Ax'\n\nFind a solution, such as error vector is perpendicular to the plain. b = Ax'\n\n\na1.T(b-Ax' = 0 a2.T(b-Ax')\n\nA.T(b-Ax') = 0\n\ne in N(A.T)\n\ne perpendicular to C(A) - YES!\n\nKey: A.TAx' = A.Tb\n\nProjection matrix.\n\nx' = (A.TA)^(-1)A.Tb\n\np = Ax' = A(A.TA)^(-1)A.T b\n\nmatrix P = A(A.TA)^(-1)A.T\n\nfor squared A; P = AA^-1(A.T)^-1A.T = I, which will mean it covers the b, and P prohjects b to b.\n\nYou can't do this for rectangular.\n\nP.T = P\n\nP^2 = P\n\n\n\nLeast squares. Fitting 3 points with a best line. Practically a linear regression.\n\nLecture 16\n\nExample:\n\nC+D = 1\n\nC+2D = 2\n\nC+3D = 2\n\n\n```python\nA = np.array([[1,1],\n [1,2],\n [1,3]])\n\nb = np.array([1,2,2])\n```\n\n\n```python\nP = np.dot(np.dot(A, np.linalg.inv(np.dot(A.T,A))),A.T)\n```\n\n\n```python\nP\n```\n\n\n\n\n array([[ 0.83333333, 0.33333333, -0.16666667],\n [ 0.33333333, 0.33333333, 0.33333333],\n [-0.16666667, 0.33333333, 0.83333333]])\n\n\n\n\n```python\nnewb = np.dot(P,b)\nnewb\n```\n\n\n\n\n array([ 1.16666667, 1.66666667, 2.16666667])\n\n\n\nAx = newb\n\n\n```python\nA\n```\n\n\n\n\n array([[1, 1],\n [1, 2],\n [1, 3]])\n\n\n\nPractically just solve A.TAx' = A.Tb\n\n\n```python\nxhat = la.solve(np.dot(A.T,A),np.dot(A.T,b))\nxhat\n```\n\n\n\n\n array([ 0.66666667, 0.5 ])\n\n\n\n\n```python\nnp.around(np.dot(A,xhat) - newb)\n```\n\n\n\n\n array([-0., -0., -0.])\n\n\n\n\n```python\nla.inv(np.dot(A.T,A))\n```\n\n\n\n\n array([[ 2.33333333, -1. ],\n [-1. 
, 0.5 ]])\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "2827e6ee58d2122a9f1126a5a29d045b4e55e96d", "size": 10767, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "LinAl_007.ipynb", "max_stars_repo_name": "rtgshv/linal101", "max_stars_repo_head_hexsha": "f520987a6f1e468b3b466c14820e43dada565e43", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "LinAl_007.ipynb", "max_issues_repo_name": "rtgshv/linal101", "max_issues_repo_head_hexsha": "f520987a6f1e468b3b466c14820e43dada565e43", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "LinAl_007.ipynb", "max_forks_repo_name": "rtgshv/linal101", "max_forks_repo_head_hexsha": "f520987a6f1e468b3b466c14820e43dada565e43", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 18.0957983193, "max_line_length": 146, "alphanum_fraction": 0.4502646977, "converted": true, "num_tokens": 1279, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9433475762847495, "lm_q2_score": 0.8887587861263211, "lm_q1q2_score": 0.838408446794041}} {"text": "# Revis\u00e3o de conceitos estat\u00edsticos I\n\nVamos explorar alguns conceitos estat\u00edsticos aplicados \u00e0 an\u00e1lise de sinais.\n\n\n```python\n# importar as bibliotecas necess\u00e1rias\nimport numpy as np # arrays\nimport matplotlib.pyplot as plt # plots\nplt.rcParams.update({'font.size': 14})\nimport IPython.display as ipd # to play signals\nimport sounddevice as sd\n```\n\n# Exemplo 1: Tr\u00eas pessoas lan\u00e7am uma moeda 10 vezes\n\nAssim, teremos um registro \"temporal\" do lan\u00e7amento da moeda. Vamos dizer que \n\n- Ao conjunto de registros temporais chamamos de ***Ensemble***\n- A cada $n$ em $n_{\\text{eventos}}$, temos um ***evento***. Cada ***evento*** (cara ou coroa) tem associado a si uma ***vari\u00e1vel aleat\u00f3ria***, definida como\n - Cara = 0\n - Coroa = 1\n\n- O universo amostral \u00e9: $\\Omega = (0, 1)$\n- Podemos calcular a probavilidade do evento 'cara' tomando $p(\\text{cara}) = n(\\text{cara})/n_{\\text{eventos}}$. Esta \u00e9 uma vis\u00e3o frequentista do fen\u00f4meno.\n\n\n```python\nn_eventos = np.arange(1,11)\np1 = np.random.randint(2, size=len(n_eventos))\np2 = np.random.randint(2, size=len(n_eventos))\np3 = np.random.randint(2, size=len(n_eventos))\n\nprint(\"Prob. de caras em p1 \u00e9 de: {:.2f}\".format(len(p1[p1==1])/len(p1)))\nprint(\"Prob. de caras em p2 \u00e9 de: {:.2f}\".format(len(p2[p2==1])/len(p2)))\nprint(\"Prob. de caras em p3 \u00e9 de: {:.2f}\".format(len(p3[p3==1])/len(p3)))\n\nplt.figure(figsize = (12,6))\nplt.subplot(3,1,1)\nplt.bar(n_eventos, p1, color = 'lightblue')\nplt.xticks(n_eventos)\nplt.grid(linestyle = '--', which='both')\nplt.xlabel('Eventos')\nplt.ylabel('V.A. [-]')\nplt.xlim((0, len(n_eventos)+1))\nplt.ylim((0, 1.2))\n\nplt.subplot(3,1,2)\nplt.bar(n_eventos, p2, color = 'lightcoral')\nplt.xticks(n_eventos)\nplt.grid(linestyle = '--', which='both')\nplt.xlabel('Eventos')\nplt.ylabel('V.A. 
[-]')\nplt.xlim((0, len(n_eventos)+1))\nplt.ylim((0, 1.2))\n\nplt.subplot(3,1,3)\nplt.bar(n_eventos, p3, color = 'lightgreen')\nplt.xticks(n_eventos)\nplt.grid(linestyle = '--', which='both')\nplt.xlabel('Eventos')\nplt.ylabel('V.A. [-]')\nplt.xlim((0, len(n_eventos)+1))\nplt.ylim((0, 1.2))\nplt.tight_layout();\n```\n\n# Exemplo 2: Ru\u00eddo aleat\u00f3rio com distribui\u00e7\u00e3o normal\n\nNeste exemplo n\u00f3s consideramos um sinal gerado a partir do processo de obter amostras de uma distribui\u00e7\u00e3o normal, dada por:\n\n\\begin{equation}\np(x) = \\mathcal{N}(\\mu_x, \\sigma_x) = \\frac{1}{\\sqrt{2\\pi}\\sigma_x}\\mathrm{e}^{-\\frac{1}{2\\sigma_x^2}(x-\\mu_x)^2}\n\\end{equation}\nem que $\\mu_x$ \u00e9 a m\u00e9dia e $\\sigma_{x}$ \u00e9 o desvio padr\u00e3o.\n\nImaginemos ent\u00e3o, que a cada instante de tempo $t$, n\u00f3s sorteamos um valor da distribui\u00e7\u00e3o normal.\n\n\n```python\nfs = 44100\ntime = np.arange(0,2, 1/fs)\n\n# sinal\nmu_x = 0\nsigma_x = 0.5\nxt = np.random.normal(loc = mu_x, scale = sigma_x, size=len(time))\n\n# Figura\nfig, axs = plt.subplots(1, 2, gridspec_kw={'width_ratios': [8, 2]}, figsize = (12, 4))\naxs[0].plot(time, xt, '-b', linewidth = 1, color = 'lightcoral')\naxs[0].grid(linestyle = '--', which='both')\naxs[0].set_xlabel('Tempo [s]')\naxs[0].set_ylabel('Amplitude [-]')\naxs[0].set_xlim((0, 0.05))\naxs[0].set_ylim((-4, 4))\n\naxs[1].hist(xt, density = True, orientation='horizontal', bins=np.linspace(-4*sigma_x, 4*sigma_x, 20), color = 'lightcoral')\naxs[1].grid(linestyle = '--', which='both')\naxs[1].set_xlabel('Densidade [-]')\naxs[1].set_ylabel(r'Valores de $x(t)$ [-]')\naxs[1].set_ylim((-4, 4))\n\n\n\n\nplt.tight_layout()\n\nipd.Audio(xt, rate=fs) # load a NumPy array\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "a73ce3cfea74f26681dbdfab661195f541116796", "size": 325237, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Aula 49 - Rev conceitos estatisticos I/conceitos estatisticos I.ipynb", "max_stars_repo_name": "RicardoGMSilveira/codes_proc_de_sinais", "max_stars_repo_head_hexsha": "e6a44d6322f95be3ac288c6f1bc4f7cfeb481ac0", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 8, "max_stars_repo_stars_event_min_datetime": "2020-10-01T20:59:33.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-27T22:46:58.000Z", "max_issues_repo_path": "Aula 49 - Rev conceitos estatisticos I/conceitos estatisticos I.ipynb", "max_issues_repo_name": "RicardoGMSilveira/codes_proc_de_sinais", "max_issues_repo_head_hexsha": "e6a44d6322f95be3ac288c6f1bc4f7cfeb481ac0", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Aula 49 - Rev conceitos estatisticos I/conceitos estatisticos I.ipynb", "max_forks_repo_name": "RicardoGMSilveira/codes_proc_de_sinais", "max_forks_repo_head_hexsha": "e6a44d6322f95be3ac288c6f1bc4f7cfeb481ac0", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 9, "max_forks_repo_forks_event_min_datetime": "2020-10-15T12:08:22.000Z", "max_forks_repo_forks_event_max_datetime": "2021-04-12T12:26:53.000Z", "avg_line_length": 1451.9508928571, "max_line_length": 235352, "alphanum_fraction": 0.9477888432, "converted": true, "num_tokens": 1176, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9353465134460243, "lm_q2_score": 0.8962513807543223, "lm_q1q2_score": 0.8383056041597405}} {"text": "# Euler's Formula\n\n[back to overview page](index.ipynb)\n\na proof: http://austinrochford.com/posts/2014-02-05-eulers-formula-sympy.html\n\n\n```python\nimport sympy as sp\nsp.init_printing()\n```\n\n\n```python\nx = sp.symbols('x', real=True)\n```\n\n\n```python\nexp1 = sp.exp(sp.I * x)\nexp1\n```\n\n\n```python\nexp2 = exp1.expand(complex=True)\nexp2\n```\n\n\n```python\nexp2.rewrite(sp.exp)\n```\n\nEuler's identity:\n\n\n```python\nsp.exp(sp.I * sp.pi) + 1\n```\n\n

To the extent possible under law, the person who associated CC0 with this work has waived all copyright and related or neighboring rights to this work.

    \n", "meta": {"hexsha": "3bbea582709c7df157ab0ccc2f75dc54092d1d99", "size": 5921, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "sympy/euler.ipynb", "max_stars_repo_name": "mgeier/python-audio", "max_stars_repo_head_hexsha": "70d4d62b148c08c50ec0057f8d4fd9876ce67a13", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 144, "max_stars_repo_stars_event_min_datetime": "2015-04-14T20:13:25.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-26T20:00:27.000Z", "max_issues_repo_path": "sympy/euler.ipynb", "max_issues_repo_name": "mgeier/python-audio", "max_issues_repo_head_hexsha": "70d4d62b148c08c50ec0057f8d4fd9876ce67a13", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2017-05-02T13:22:41.000Z", "max_issues_repo_issues_event_max_datetime": "2017-05-03T13:17:15.000Z", "max_forks_repo_path": "sympy/euler.ipynb", "max_forks_repo_name": "mgeier/python-audio", "max_forks_repo_head_hexsha": "70d4d62b148c08c50ec0057f8d4fd9876ce67a13", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 36, "max_forks_repo_forks_event_min_datetime": "2015-04-19T14:08:37.000Z", "max_forks_repo_forks_event_max_datetime": "2021-04-21T14:24:37.000Z", "avg_line_length": 30.8385416667, "max_line_length": 1148, "alphanum_fraction": 0.65799696, "converted": true, "num_tokens": 250, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9353465134460243, "lm_q2_score": 0.8962513765975758, "lm_q1q2_score": 0.8383056002717422}} {"text": "## Classical Mechanics - Week 9\n\n### Last Week:\n- We saw how a potential can be used to analyze a system\n- Gained experience with plotting and integrating in Python \n\n### This Week:\n- We will study harmonic oscillations using packages\n- Further develope our analysis skills \n- Gain more experience wtih sympy\n\n\n```python\n# Let's import packages, as usual\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport sympy as sym\nsym.init_printing(use_unicode=True)\n```\n\nLet's analyze a spring using sympy. It will have mass $m$, spring constant $k$, angular frequency $\\omega_0$, initial position $x_0$, and initial velocity $v_0$.\n\nThe motion of this harmonic oscillator is described by the equation:\n\neq 1.) $m\\ddot{x} = -kx$\n\nThis can be solved as \n\neq 2.) $x(t) = A\\cos(\\omega_0 t - \\delta)$, $\\qquad \\omega_0 = \\sqrt{\\dfrac{k}{m}}$\n\nUse SymPy below to plot this function. Set $A=2$, $\\omega_0 = \\pi/2$ and $\\delta = \\pi/4$. \n\n(Refer back to ***Notebook 7*** if you need to review plotting with SymPy.)\n\n\n```python\n# Plot for equation 2 here\n```\n\n## Q1.) Calculate analytically the initial conditions, $x_0$ and $v_0$, and the period of the motion for the given constants. Is your plot consistent with these values?\n\n✅ Double click this cell, erase its content, and put your answer to the above question here.\n\n#### Now let's make plots for underdamped, critically-damped, and overdamped harmonic oscillators.\nBelow are the general equations for these oscillators:\n\n- Underdamped, $\\beta < \\omega_0$ : \n\neq 3.) $x(t) = A e^{-\\beta t}cos(\\omega ' t) + B e^{-\\beta t}sin(\\omega ' t)$ , $\\omega ' = \\sqrt{\\omega_0^2 - \\beta^2}$\n \n ___________________________________\n \n \n- Critically-damped, $\\beta = \\omega_0$:\n\neq 4.) $x(t) = Ae^{-\\beta t} + B t e^{-\\beta t}$\n\n ___________________________________\n \n- Overdamped, $\\beta > \\omega_0$:\n\neq 5.) 
$x(t) = Ae^{-\\left(\\beta + \\sqrt{\\beta^2 - \\omega_0^2}\\right)t} + Be^{-\\left(\\beta - \\sqrt{\\beta^2 - \\omega_0^2}\\right)t}$\n \n _______________________\n \nIn the cells below use SymPy to create the Position vs Time plots for these three oscillators. \n\nUse $\\omega_0=\\pi/2$ as before, and then choose an appropriate value of $\\beta$ for the three different damped oscillator solutions. Play around with the variables, $A$, $B$, and $\\beta$, to see how different values affect the motion and if this agrees with your intuition.\n\n\n```python\n# Put your code for graphing Underdamped here\n```\n\n\n```python\n# Put your code for graphing Critical here\n```\n\n\n```python\n# Put your code for graphing Overdamped here\n```\n\n## Q2.) How would you compare the 3 different oscillators?\n\n✅ Double click this cell, erase its content, and put your answer to the above question here.\n\n# Here's another simple harmonic system, the pendulum. \n\nThe equation of motion for the pendulum is:\n\neq 6.) $ml\\dfrac{d^2\\theta}{dt^2} + mg \\sin(\\theta) = 0$, where $v=l\\dfrac{d\\theta}{dt}$ and $a=l\\dfrac{d^2\\theta}{dt^2}$\n\nIn the small angle approximation $\\sin\\theta\\approx\\theta$, so this can be written:\n\neq 7.) $\\dfrac{d^2\\theta}{dt^2} = -\\dfrac{g}{l}\\theta$\n\nWe then find the period of the pendulum to be $T = \\dfrac{2\\pi}{\\sqrt{l/g}}$ and the angle at any given time \n(if released from rest) is given by \n\n$\\theta = \\theta_0\\cos{\\left(\\sqrt{\\dfrac{g}{l}} t\\right)}$.\n\nLet's use Euler's Forward method to solve equation (7) for the motion of the pendulum in the small angle approximation, and compare to the analytic solution.\n\nFirst, let's graph the analytic solution for $\\theta$. Go ahead and graph using either sympy, or the other method we have used, utilizing these variables:\n\n- $t:(0s,50s)$\n- $\\theta_0 = 0.5$ radians\n- $l = 40$ meters\n\n\n```python\n# Plot the analytic solution here\n```\n\nNow, use Euler's Forward method to obtain a plot of $\\theta$ as a function of time $t$ (in the small angle approximation). Use eq (7) to calculate $\\ddot{\\theta}$ at each time step.\nTry varying the time step size to see how it affects the Euler's method solution.\n\n\n```python\n# Perform Euler's Method Here\n```\n\nYou should have found that if you chose the time step size small enough, then the Euler's method solution was\nindistinguishable from the analytic solution. \n\nWe can now trivially modify this, to solve for the pendulum **exactly**, without using the small angle approximation.\nThe exact equation for the acceleration is\n\neq 8.) $\\dfrac{d^2\\theta}{dt^2} = -\\dfrac{g}{l}\\sin\\theta$.\n\nModify your Euler's Forward method calculation to use eq (8) to calculate $\\ddot{\\theta}$ at each time step in the cell below.\n\n\n\n```python\n\n```\n\n# Q3.) What time step size did you use to find agreement between Euler's method and the analytic solution (in the small angle approximation)? How did the exact solution differ from the small angle approximation? \n\n✅ Double click this cell, erase its content, and put your answer to the above question here.\n\n### Now let's do something fun:\n\nIn class we found that the 2-dimensional anisotropic harmonic motion can be solved as\n\neq 8a.) $x(t) = A_x \\cos(\\omega_xt)$\n\neq 8b.) $y(t) = A_y \\cos(\\omega_yt - \\delta)$\n\nIf $\\dfrac{\\omega_x}{\\omega_y}$ is a rational number (*i.e,* a ratio of two integers), then the trajectory repeats itself after some amount of time. 
The plots of $x$ vs $y$ in this case are called Lissajous figures (after the French physicists Jules Lissajous). If $\\dfrac{\\omega_x}{\\omega_y}$ is not a rational number, then the trajectory does not repeat itself, but it still shows some very interesting behavior.\n\nLet's make some x vs y plots below for the 2-d anisotropic oscillator. \n\nFirst, recreate the plots in Figure 5.9 of Taylor. (Hint: Let $A_x=A_y$. For the left plot of Figure 5.9, let $\\delta=\\pi/4$ and for the right plot, let $\\delta=0$.)\n\nNext, try other rational values of $\\dfrac{\\omega_x}{\\omega_y}$ such as 5/6, 19/15, etc, and using different phase angles $\\delta$.\n\nFinally, for non-rational $\\dfrac{\\omega_x}{\\omega_y}$, what does the trajectory plot look like if you let the length of time to be arbitrarily long?\n\n\\[For these parametric plots, it is preferable to use our original plotting method, *i.e.* using `plt.plot()`, as introduced in ***Notebook 1***.\\]\n\n\n```python\n# Plot the Lissajous curves here\n```\n\n# Q4.) What are some observations you make as you play with the variables? What happens for non-rational $\\omega_x/\\omega_y$ if you let the oscillator run for a long time?\n\n\n✅ Double click this cell, erase its content, and put your answer to the above question here.\n\n# Notebook Wrap-up. \nRun the cell below and copy-paste your answers into their corresponding cells.\n\n\n```python\nfrom IPython.display import HTML\nHTML(\n\"\"\"\n\n\"\"\"\n)\n```\n\n# Well that's that, another Notebook! It's now been 10 weeks of class\n\nYou've been given lots of computational and programing tools these past few months. These past two weeks have been practicing these tools and hopefully you are understanding how some of these pieces add up. Play around with the code and see how it affects our systems of equations. Solve the Schrodinger Equation for the Helium atom. Figure out the unifying theory. The future is limitless!\n", "meta": {"hexsha": "97e4ea8cb9864cd7b107c814f69f83269316fbc5", "size": 11393, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "doc/AdminBackground/PHY321/CM_Jupyter_Notebooks/Student_Work/CM_Notebook9.ipynb", "max_stars_repo_name": "Shield94/Physics321", "max_stars_repo_head_hexsha": "9875a3bf840b0fa164b865a3cb13073aff9094ca", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 20, "max_stars_repo_stars_event_min_datetime": "2020-01-09T17:41:16.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-09T00:48:58.000Z", "max_issues_repo_path": "doc/AdminBackground/PHY321/CM_Jupyter_Notebooks/Student_Work/CM_Notebook9.ipynb", "max_issues_repo_name": "Shield94/Physics321", "max_issues_repo_head_hexsha": "9875a3bf840b0fa164b865a3cb13073aff9094ca", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2020-01-08T03:47:53.000Z", "max_issues_repo_issues_event_max_datetime": "2020-12-15T15:02:57.000Z", "max_forks_repo_path": "doc/AdminBackground/PHY321/CM_Jupyter_Notebooks/Student_Work/CM_Notebook9.ipynb", "max_forks_repo_name": "Shield94/Physics321", "max_forks_repo_head_hexsha": "9875a3bf840b0fa164b865a3cb13073aff9094ca", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 33, "max_forks_repo_forks_event_min_datetime": "2020-01-10T20:40:55.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-11T20:28:41.000Z", "avg_line_length": 32.4586894587, "max_line_length": 430, "alphanum_fraction": 0.5817607303, "converted": true, "num_tokens": 1936, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. 
YES\n2. YES", "lm_q1_score": 0.8962513759047847, "lm_q2_score": 0.9353465057864692, "lm_q1q2_score": 0.8383055927588556}} {"text": "# 3.2.4 Multiple Outputs\n\nSuppose we have multiple outputs $Y_1, Y_2, ..., Y_k$ that we wish to predict from our inputs $X_0, X_1, ..., X_p$. We assume a linear model for each output (3.34, 3.35):\n\n$$\n\\begin{align}\nY_k &= \\beta_{0k} + \\sum_{j=1}^{p} X_j\\beta_{jk} + \\varepsilon_k\\\\\n&= f_k(X) + \\varepsilon_k\n\\end{align}\n$$\n\nWe can write the model in matrix notation:\n\n$$\n\\mathbf{Y} = \\mathbf{XB} + \\mathbf{E}\n$$\n\nHere **Y** is the $N\\times K $ response matrix, **X** is the $N\\times(p+1)$ input matrix, **B** is the $(p+1)\\times K$ matrix and **E** is the $N\\times K$ matrix of errors.\n\nA straightforward generalization of the univariate loss functio is (3.37, 3.38):\n$$\n\\begin{align}\nRSS(\\mathbf{B}) &= \\sum_{k=1}^K{\\sum_{i=1}^N (y_{ik} - f_k(x_i))^2}\\\\\n&= tr\\left[(\\mathbf{Y}-\\mathbf{XB})^T(\\mathbf{Y}-\\mathbf{XB})\\right]\n\\end{align}\n$$\n\nThe least squares estimates is (3.39):\n$$\\hat{\\mathbf{B}} = (\\mathbf{X}^T\\mathbf{X})^{-1}\\mathbf{X}^T\\mathbf{Y}$$\n\nIf the errors $\\varepsilon=(\\varepsilon_1,...,\\varepsilon_K)$ are correlated, then (3.40):\n\n$$\nRSS(B; \\Sigma)=\\sum_{i=1}^N(y_i - f(x_i))^T\\Sigma^{-1}(y_i - f(x_i))\n$$\n\narises from multivariate Gaussian theory.\n", "meta": {"hexsha": "924828281c9cff57d61abe13c3212cd077a7ead7", "size": 2305, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "chapter-03/3.2.4-multiple-outputs.ipynb", "max_stars_repo_name": "leduran/ESL", "max_stars_repo_head_hexsha": "fcb6c8268d6a64962c013006d9298c6f5a7104fe", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 360, "max_stars_repo_stars_event_min_datetime": "2019-01-28T14:05:02.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-27T00:11:21.000Z", "max_issues_repo_path": "chapter-03/3.2.4-multiple-outputs.ipynb", "max_issues_repo_name": "leduran/ESL", "max_issues_repo_head_hexsha": "fcb6c8268d6a64962c013006d9298c6f5a7104fe", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-07-06T16:51:40.000Z", "max_issues_repo_issues_event_max_datetime": "2020-07-06T16:51:40.000Z", "max_forks_repo_path": "chapter-03/3.2.4-multiple-outputs.ipynb", "max_forks_repo_name": "leduran/ESL", "max_forks_repo_head_hexsha": "fcb6c8268d6a64962c013006d9298c6f5a7104fe", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 79, "max_forks_repo_forks_event_min_datetime": "2019-03-21T23:48:35.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-31T13:05:10.000Z", "avg_line_length": 25.8988764045, "max_line_length": 182, "alphanum_fraction": 0.5028199566, "converted": true, "num_tokens": 464, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9648551495568569, "lm_q2_score": 0.868826784729373, "lm_q1q2_score": 0.8382919973190623}} {"text": "# Resistor Cube Solutions\n\nQuick symbolic and numeric solutions to the popular resistor-cube problem. \n\n\n```python\nimport sympy as sp\nimport numpy as np\n```\n\n\n```python\nIR,v1,v2,v3,v4,v5,v6,v7 = sp.symbols('IR,v1,v2,v3,v4,v5,v6,v7')\n```\n\n## Circuit Matrix\n\nAfter a half-page or so of algebra, we can derive the system of seven node-equations. 
\n\n$$\\displaystyle \\left[\\begin{matrix}3 & -1 & -1 & -1 & 0 & 0 & 0\\\\-1 & 3 & 0 & 0 & -1 & 0 & -1\\\\-1 & 0 & 3 & 0 & -1 & -1 & 0\\\\-1 & 0 & 0 & 3 & 0 & -1 & -1\\\\0 & -1 & -1 & 0 & 3 & 0 & 0\\\\0 & 0 & -1 & -1 & 0 & 3 & 0\\\\0 & -1 & 0 & -1 & 0 & 0 & 3\\end{matrix}\\right] \n\\left[\\begin{matrix}V1\\\\V2\\\\V3\\\\V4\\\\V5\\\\V6\\\\V7\\end{matrix}\\right]\n= \n\\left[\\begin{matrix}IR\\\\0\\\\0\\\\0\\\\0\\\\0\\\\0\\end{matrix}\\right]$$\n\nWe'll represent these in `sympy.Matrix` objects `A` and `b`:\n\n\n```python\nlhs = np.array([\n [ 3,-1,-1,-1, 0, 0, 0],\n [-1, 3, 0, 0,-1, 0,-1],\n [-1, 0, 3, 0,-1,-1, 0],\n [-1, 0, 0, 3, 0,-1,-1],\n [ 0,-1,-1, 0, 3, 0, 0],\n [ 0, 0,-1,-1, 0, 3, 0],\n [ 0,-1, 0,-1, 0, 0, 3]\n ],)\nA = sp.Matrix(lhs)\nA\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}3 & -1 & -1 & -1 & 0 & 0 & 0\\\\-1 & 3 & 0 & 0 & -1 & 0 & -1\\\\-1 & 0 & 3 & 0 & -1 & -1 & 0\\\\-1 & 0 & 0 & 3 & 0 & -1 & -1\\\\0 & -1 & -1 & 0 & 3 & 0 & 0\\\\0 & 0 & -1 & -1 & 0 & 3 & 0\\\\0 & -1 & 0 & -1 & 0 & 0 & 3\\end{matrix}\\right]$\n\n\n\n\n```python\nb = sp.Matrix([[IR],[0],[0],[0],[0],[0],[0]])\nb\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}IR\\\\0\\\\0\\\\0\\\\0\\\\0\\\\0\\end{matrix}\\right]$\n\n\n\n## Symbolic Solution\n\nMost statements of the resistor-cube problem ask for the resistance between diagonal corners of the cube, relative to the unit-resistance per edge. In other words, the desired quantity is: \n\n```\nV(1) / I\n```\n\nConveniently the `sympy.linsolve` solver will give us just this - along with all of the other node voltages:\n\n\n```python\nsp.linsolve((A,b),(v1,v2,v3,v4,v5,v6,v7))\n```\n\n\n\n\n$\\displaystyle \\left\\{\\left( \\frac{5 IR}{6}, \\ \\frac{IR}{2}, \\ \\frac{IR}{2}, \\ \\frac{IR}{2}, \\ \\frac{IR}{3}, \\ \\frac{IR}{3}, \\ \\frac{IR}{3}\\right)\\right\\}$\n\n\n\nThese results square with our intuitive expectations. \n\n## Numeric Solution\n\nMost \"real\" circuit-simulations are not amenable to symbolic-solvers such as `sympy`, for a slew of reasons. \n\n* They are often non-linear and piece-wise defined. \n* They have (many) more nodes than numeric solvers can efficiently handle. \n* Even if we generated symbolic results for complicated circuits, we'd often not know what to do with them - other than converting into numeric answers. \n\nInstead nearly all practical circuit simulation uses numerical solutions. We'll just have to choose a numeric value for our single parameter `IR` - say, 1.0V. The resistor-cube is nice enough to be completely linear, and allows us to use a linear solver such as `numpy.linalg.solve`:\n\n\n```python\nrhs = np.transpose(np.array([1,0,0,0,0,0,0]))\nnp.linalg.solve(lhs, rhs)\n```\n\n\n\n\n array([0.83333333, 0.5 , 0.5 , 0.5 , 0.33333333,\n 0.33333333, 0.33333333])\n\n\n\nAgain this checks out, both with our intuitive expectations and the earlier symbolic results. \n\n## Intuitive Solution\n\nThe resistor-cube is usually offered as a brain-teaser of sorts, whether as a job interview problem or just for shits & grins. I have never seen anyone (but me!) work through it analytically as we have here. \n\nInstead, a typical thought-process looks something like so:\n\n* We can tell from looking at this circuit that there are a few symmetries between similar-looking nodes. This may become particularly clear when we \"flatten\" the cube into a 2D visual form. \n* From inspection (and our deep well of experience), we can discern that `V(2)=V(3)=V(4)`. 
In other words, `I2=I3=I4 = I/3`.\n* Similarly we can tell that `V(5)=V(6)=V(7)`, and that `I5=I6=I7 = I/3`. \n* With those two realizations in hand, we can further discern that each of the six middle-row resistors has *half* the current of their top and bottom-row counterparts, or `(I/3)/2 = I/6`. \n* Walking through any KVL loop including a top, middle, and bottom row resistor, we find that their voltages are `IR/3`, `IR/6`, and `IR/3`, making a total of `5*IR/6`. The overall resistance `V(1)/I` is therefore `5*R/6`.\n\nAlthough not usually asked, we can use the same insights to notice that the values of `V(5)=V(6)=V(7) = IR/3`, and that `V(2)=V(3)=V(4)` are `IR/3+IR/6 = IR/2`. All checks out with the symbolic and numeric solutions. \n\n\n```python\n\n```\n", "meta": {"hexsha": "95f60752de819ccb8e6193cecf08e1df42f2cdfe", "size": 8165, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "nb/resistor-cube.ipynb", "max_stars_repo_name": "HW21/TeachSpice", "max_stars_repo_head_hexsha": "8cf0ba8603dd82eeb35b45e964df9ca69cd09747", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2019-10-09T12:26:26.000Z", "max_stars_repo_stars_event_max_datetime": "2019-11-15T16:10:16.000Z", "max_issues_repo_path": "nb/resistor-cube.ipynb", "max_issues_repo_name": "HW21/TeachSpice", "max_issues_repo_head_hexsha": "8cf0ba8603dd82eeb35b45e964df9ca69cd09747", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "nb/resistor-cube.ipynb", "max_forks_repo_name": "HW21/TeachSpice", "max_forks_repo_head_hexsha": "8cf0ba8603dd82eeb35b45e964df9ca69cd09747", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-10-09T12:28:20.000Z", "max_forks_repo_forks_event_max_datetime": "2019-10-09T12:28:20.000Z", "avg_line_length": 30.6954887218, "max_line_length": 291, "alphanum_fraction": 0.4979791794, "converted": true, "num_tokens": 1583, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9294404096760998, "lm_q2_score": 0.90192067257508, "lm_q1q2_score": 0.8382815194135258}} {"text": "# Constrained ESP fitting\n\n## using Horton's cost function\n\nAccording to https://theochem.github.io/horton/2.1.0/user_postproc_espfit.html, cost function is constructed as\n\n$$ c(\\mathbf{q}, \\Delta V_\\text{ref}) = \\int_V d\\mathbf{r} \\omega(\\mathbf{r}) \\cdot \\left( V_\\text{ab initio}(\\mathbf{r}) - \\sum_{i=1}^N \\frac{q_i}{\\mathbf{r} - \\mathbf{R}_i} - \\Delta V_\\text{ref} \\right)^2 $$\n\n> $\\Delta V_\\text{ref}$: constant that may account for differences in reference between the ab initio ESP and the point charge ESP. The need for such a term was established in the REPEAT paper for ESP fitting in 3D periodic systems.\n\nWe look at an aperiodic system, neglect this term for now. 
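As an aside (my own sketch, not taken from HORTON): in practice the integral above is evaluated on a weighted grid, so for a trial charge vector the cost can be estimated by a brute-force discretization, assuming the reference ESP has already been sampled at the grid points.


```python
import numpy as np

def esp_cost(q, grid, weights, v_ab_initio, R):
    """Discretized weighted ESP cost for point charges q (illustrative only).

    q            -- trial charges, shape (n_atoms,)
    grid         -- sample points r_k, shape (n_points, 3)
    weights      -- weight function w(r_k) at the sample points, shape (n_points,)
    v_ab_initio  -- reference ESP at the sample points, shape (n_points,)
    R            -- atom positions, shape (n_atoms, 3)
    """
    # pairwise distances |r_k - R_i|, shape (n_points, n_atoms)
    d = np.linalg.norm(grid[:, None, :] - R[None, :, :], axis=-1)
    v_model = (q[None, :] / d).sum(axis=1)  # point-charge ESP at each r_k
    return np.sum(weights * (v_ab_initio - v_model)**2)
```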
The (unconstrained) cost function takes the general quadratic form\n\n$$ c_u(\\mathbf{x}) = \\mathbf{x}^T A\\ \\mathbf{x} - 2 \\mathbf{b}^T \\mathbf{x} - C $$\n\nwith \n\n$$ C = - \\int_V d\\mathbf{r} \\omega(\\mathbf{r}) \\cdot \\left[ V_\\text{ab initio}(\\mathbf{r}) \\right]^2$$\n\nentry i,j of matrix $A^{N\\times N}$\n\n$$ A_{ij} = \\int_V d\\mathbf{r}\\ \\omega(\\mathbf{r}) \\cdot \\left( \\frac{1}{|\\mathbf{r} - \\mathbf{R}_i|} \\cdot \\frac{1}{|\\mathbf{r} - \\mathbf{R}_j|} \\right) $$\n\nand entry i of vector $\\mathbf{b}$\n\n$$ b_i = \\int_V d\\mathbf{r}\\ \\omega(\\mathbf{r}) \\cdot \\frac{V_\\text{ab initio}(\\mathbf{r})}{|\\mathbf{r} - \\mathbf{R}_i|} $$\n\nIn the code below, first the miniumum of an unconstrained system is found by solving\n\n$$ \\frac{1}{2} \\cdot \\frac{\\mathrm{d} c(\\mathbf{x})}{\\mathrm d \\mathbf{x}} = \\frac{1}{2} \\cdot \\nabla_\\mathbf{x} c = A \\mathbf{x} - \\mathbf{b} = 0$$\n\nwith $\\nabla (\\mathbf{b}^T \\mathbf{x}) = \\mathbf{b}$ and \n$\\nabla \\cdot (\\mathbf{x}^T A \\mathbf{x}) = (A + A^T) \\mathbf{x} \n= 2 A \\mathbf{x} $ for symmetric A, as in our case. \nThe (unconstrained) solution\n\n$$ \\mathbf{x}_u = A^{-1} \\mathbf{b} $$\n\nis corrected for *one* total charge constraint of the form \n\n$$ g(\\mathbf{x}) = \\mathbf{d} \\cdot \\mathbf{x} - q_\\text{tot} = 0 $$\n\nwith all entries of $\\mathbf{d}$ unity. Notice that in the code below, the whole system is normalized in order to have unity diagonal entries $A_{jj}$. We neglect this normalization here.\n\nA Lagrange multiplier $\\lambda$ is introduced into the *constrained* cost function \n\n$$ c_c(\\mathbf{x},\\lambda) = \\mathbf{x}^T A\\ \\mathbf{x} - 2 \\mathbf{b}^T \\mathbf{x} - C + \\lambda \\cdot g(\\mathbf{x}) $$\n\nand the system\n\n$$ \\frac{1}{2} \\cdot \\nabla_\\mathbf{x} c = A \\mathbf{x} - \\mathbf{b} + \\lambda \\cdot \\nabla_\\mathbf{x} g(\\mathbf{x}) = A \\mathbf{x} - \\mathbf{b} +\\lambda \\cdot \\mathbf{d} = 0 $$\n\n$$ \\nabla_\\mathbf{\\lambda} c = g(\\mathbf{x}) = \\mathbf{d} \\cdot \\mathbf{x} - q_\\text{tot} = 0 $$\n\nis solved by finding a correction for the unconstrained solution\n\n$$ \\mathbf{x} = \\mathbf{x}_u - \\lambda \\cdot\\delta \\mathbf{x} = A^{-1} \\mathbf{b} - \\lambda \\cdot \\delta \\mathbf{x} $$\n\nas \n\n$$ - \\lambda A \\delta \\mathbf{x} + \\lambda \\cdot \\mathbf{d} = 0 \n\\Leftrightarrow \\delta \\mathbf{x} = A^{-1} \\mathbf{d}$$\n\nand the Lagrange multiplier by\n\n$$ g(\\mathbf{x}) \n = \\mathbf{d} \\cdot \\left( \\mathbf{x}_u \n - \\lambda \\ \\delta \\mathbf{x} \\right)- q_\\text{tot} \n = \\mathbf{d} \\cdot A^{-1} \\mathbf{b} \n - \\lambda \\ \\mathbf{d} \\cdot \\delta \\mathbf{x} - q_\\text{tot} \n = 0 $$\n\n$$ \\lambda = \\frac{\\mathbf{d} \\cdot A^{-1} \\mathbf{b} - q_\\text{tot}}\n { \\mathbf{d} \\cdot \\delta \\mathbf{x} }\n = \\frac{\\mathbf{b} \\cdot \\delta \\mathbf{x} - q_\\text{tot}}\n { \\mathbf{d} \\cdot \\delta \\mathbf{x} }$$\n \n \nand thereby the constrained minimum at\n$$ \\mathbf{x} \n = \\mathbf{x}_u \n - \\frac{\\mathbf{b} \\cdot \\delta \\mathbf{x} - q_\\text{tot}}\n { \\mathbf{d} \\cdot \\delta \\mathbf{x} } \n \\cdot \\delta \\mathbf{x}$$\n \nas implemented in HORTON\n\n\n```python\n# modified excerpt from horton/espfit/cost.py\n\nimport numpy as np\nimport h5py\n\ndef solve(_A, _B, _C, _natom, qtot=None, ridge=0.0):\n # apply regularization to atomic degrees of freedom\n A = _A.copy()\n A.ravel()[::len(A)+1][:_natom] += ridge*np.diag(A)[:_natom].mean()\n # construct preconditioned matrices\n norms = np.diag(A)**0.5\n 
print(\"Diagonal of A: \\n {}\".format(np.diag(A)))\n print(\"Vector norms: \\n {}\".format(norms))\n print(\"Matrix norms: \\n {}\".format(norms*norms.reshape(-1,1)))\n\n A = A/norms/norms.reshape(-1,1)\n print(\"Normed diagonal of A: \\n {}\".format(np.diag(A)))\n\n B = _B/norms\n print(\"Normed B: \\n {}\".format(B))\n\n x = np.linalg.solve(A, B)\n if qtot is not None:\n # Fix the total charge with a lagrange multiplier\n d = np.zeros(len(A))\n d[:_natom] = 1/norms[:_natom]\n d[_natom:] = 0.0\n aid = np.linalg.solve(A, d)\n lagrange = (np.dot(aid, B) - qtot)/np.dot(aid, d)\n x -= aid*lagrange\n x /= norms\n return x\n```\n\n\n```python\nnp.set_printoptions(precision=4)\n```\n\n\n```python\n# Example: water\n# load the cost function\ncost_function = h5py.File('h2o.cost.h5')\ncost = cost_function['cost']\nA = cost['A'][:]\nB = cost['B'][:]\nC = cost['C'].value\nN = cost['natom'].value\nprint(\"A: \\n {}\".format(A))\nprint(\"B: {}\".format(B))\nprint(\"C: {}\".format(C))\nprint(\"N: {}\".format(N))\n```\n\n A: \n [[ 8.5387 8.5216 8.5216]\n [ 8.5216 9.0212 8.3956]\n [ 8.5216 8.3956 9.0212]]\n B: [-0.0105 -0.0797 -0.0797]\n C: 0.0266115260155\n N: 3\n\n\n\n```python\nq1 = solve(A,B,C,N)\n```\n\n Diagonal of A: \n [ 8.5387 9.0212 9.0212]\n Vector norms: \n [ 2.9221 3.0035 3.0035]\n Matrix norms: \n [[ 8.5387 8.7766 8.7766]\n [ 8.7766 9.0212 9.0212]\n [ 8.7766 9.0212 9.0212]]\n Normed diagonal of A: \n [ 1. 1. 1.]\n Normed B: \n [-0.0036 -0.0265 -0.0265]\n\n\n\n```python\nq1\n```\n\n\n\n\n array([ 0.3378, -0.1699, -0.1699])\n\n\n\n\n```python\nsum(q1)\n```\n\n\n\n\n -0.0019095241274412478\n\n\n\n\n```python\nq2 = solve(A,B,C,N,qtot=0)\n```\n\n Diagonal of A: \n [ 8.5387 9.0212 9.0212]\n Vector norms: \n [ 2.9221 3.0035 3.0035]\n Matrix norms: \n [[ 8.5387 8.7766 8.7766]\n [ 8.7766 9.0212 9.0212]\n [ 8.7766 9.0212 9.0212]]\n Normed diagonal of A: \n [ 1. 1. 1.]\n Normed B: \n [-0.0036 -0.0265 -0.0265]\n\n\n\n```python\nq2\n```\n\n\n\n\n array([ 0.3396, -0.1698, -0.1698])\n\n\n\n\n```python\nsum(q2)\n```\n\n\n\n\n 0.0\n\n\n\n\n```python\nq3 = solve(A,B,C,N,qtot=1)\n```\n\n Diagonal of A: \n [ 8.5387 9.0212 9.0212]\n Vector norms: \n [ 2.9221 3.0035 3.0035]\n Matrix norms: \n [[ 8.5387 8.7766 8.7766]\n [ 8.7766 9.0212 9.0212]\n [ 8.7766 9.0212 9.0212]]\n Normed diagonal of A: \n [ 1. 1. 
1.]\n Normed B: \n [-0.0036 -0.0265 -0.0265]\n\n\n\n```python\nq3\n```\n\n\n\n\n array([ 1.2558, -0.1279, -0.1279])\n\n\n\n\n```python\nsum(q3)\n```\n\n\n\n\n 0.99999999999999989\n\n\n\n# Arbitrary number of equality constraints\n\nWe modifiy the optimization in order to allow for an arbitrary amount of constraints.\n\nM lagrange multipliers $\\lambda_k$ are introduced into the *constrained* cost function \n\n$$ c_c(\\mathbf{x},\\mathbf{\\lambda}) = \\mathbf{x}^T A\\ \\mathbf{x} - 2 \\mathbf{b}^T \\mathbf{x} - C + \\sum_{k=1}^M \\lambda_k \\cdot g_k(\\mathbf{x}) $$\n\nAll our constraints (charge groups and symmetries) will be of linear form \n\n$$ g_k(\\mathbf{x}) = \\mathbf{d}_k \\cdot \\mathbf{x} - q_k = 0 $$\n\nand thus can be compacted into matrix form\n\n$$ D^{(M \\times N)} \\mathbf{x} - \\mathbf{q}^{(M\\times 1)} = 0 $$\n\nwith \n\n$$ D^T = [\\mathbf{d}_1, \\mathbf{d}_2, \\dots , \\mathbf{d}_M] $$\n\nand hence \n$$ c_c(\\mathbf{x}^{(N\\times1)},\\mathbf{\\lambda}^{(M\\times1)}) = \\mathbf{x}^T A\\ \\mathbf{x} \n - 2 \\mathbf{b}^T \\mathbf{x} \n - C + \\mathbf{\\lambda}^T \\cdot \\left( D \\mathbf{x} - \\mathbf{q} \\right) $$\n\nDerivative \n\n$$ \n\\begin{align}\n \\nabla_\\mathbf{x} \\cdot c_c & = 2\\ A\\ \\mathbf{x} \n + \\sum_{k=1}^M \\lambda_k \\mathbf{d}_k - 2 \\mathbf{b} & = 0\\\\\n \\nabla_\\mathbf{\\lambda} \\cdot c_c & = D\\ \\mathbf{x} - \\mathbf{q} & = 0\n\\end{align}\n$$\n\nIdentify \n\n$$ D^T \\mathbf{\\lambda}= \\sum_{k=1}^M \\lambda_k \\mathbf{d}_k $$\n\nand solve\n\n$$ \\tilde{A} \\mathbf{y} - \\tilde{\\mathbf{b}} = 0$$ \n\nwith generalized $\\mathbf{y}^T = [\\mathbf{x}^T, \\mathbf{\\lambda}^T ]$ \nas well as $(N+M)\\times(N+M)$ matrix\n\n$$ \\tilde{A} = \n \\begin{bmatrix}\n 2 A & D^T \\\\\n D & 0\n \\end{bmatrix} $$\n \nand $(N+M)$ vector\n\n$$ \\tilde{\\mathbf{b}} = \n \\begin{bmatrix}\n 2 \\mathbf{b} \\\\\n \\mathbf{q}\n \\end{bmatrix} $$\n\n\n```python\nimport numpy as np\n\ndef constrainedMinimize(A_matrix, b_vector, C_scalar, D_matrix = None, q_vector = np.array([0]), debug=False):\n N = b_vector.shape[0]\n M = q_vector.shape[0]\n \n if not isinstance(D_matrix,np.ndarray):\n D_matrix = np.atleast_2d( np.ones(N) )\n \n if debug:\n print(\"{:d} unknowns, {:d} equality constraints\".format(N,M))\n print(\"A {}: \\n {}\".format(A_matrix.shape,A_matrix))\n print(\"B {}: \\n {}\".format(b_vector.shape,b_vector))\n print(\"C {}: \\n {}\".format(C_scalar.shape,C_scalar))\n print(\"D {}: \\n {}\".format(D_matrix.shape,D_matrix))\n print(\"q {}: \\n {}\".format(q_vector.shape,q_vector))\n\n A_upper = np.block([ 2*np.atleast_2d(A_matrix), np.atleast_2d(D_matrix).T ])\n A_lower = np.block([ np.atleast_2d(D_matrix), np.atleast_2d(np.zeros((M,M)))])\n if debug:\n print(\"upper A ({}): \\n {}\".format(A_upper.shape,A_upper))\n print(\"upper A ({}): \\n {}\".format(A_lower.shape,A_lower))\n\n A = np.block( [ [ A_upper ], [ A_lower ] ] )\n \n if debug:\n print(\"block A ({}): \\n {}\".format(A.shape,A))\n \n B = np.block( [2*np.atleast_1d(b_vector), np.atleast_1d(q_vector)] )\n \n if debug:\n print(\"block B ({}): \\n {}\".format(B.shape,B))\n\n C = C_scalar\n \n x = np.linalg.solve(A, B)\n \n return x\n```\n\n\n```python\n# charges: list of charges to meet symmetry constraint, \n# indices beginning from 0\n# can also be list of list of group of charges to be symmetric, \n# indices beginning from 0, i.e. 
charges = [[0,1],[2,3,4]] requires\n# the cummulative charge of atom 0 and 1 to equal \n# the sum over charges 2 to 4\n# N: total number of charges\n# symmetry: scalar or list of symmetry requirements\n# default 1 causes equality, -1 antisymmetry, \n# values different from +/- unity result in relative scaling\ndef constructPairwiseSymmetryConstraints(charges, N, symmetry=1.0, debug=False):\n M = len(charges)-1\n\n symmetry = symmetry*np.ones(M)\n \n if debug:\n print(\"{:d} unknowns, {:d} pairwise equality constraints\".format(N,M))\n print(\"symmetry list ({}):\\n{}\".format(symmetry.shape,symmetry))\n \n D = np.atleast_2d(np.zeros((M,N)))\n q = np.atleast_1d(np.zeros(M))\n D[:,charges[0]] = 1\n \n for j in range(M):\n D[j,charges[j+1]] = -1.0*symmetry[j]\n\n if debug:\n print(\"D ({}):\\n{}\".format(D.shape,D))\n print(\"q ({}):\\n{}\".format(q.shape,q))\n \n return D, q\n```\n\n\n```python\n# chargeGroups: list of list of charges in charge groups, \n# indices beginning from 0, i.e. chargeGroups = [[0,1],[2,3,4]]\n# N: total number of charges\n# q: required charges per charge group\n\ndef constructChargegroupConstraints(chargeGroups, N, q=0, debug=False):\n M = len(chargeGroups)\n\n q = np.atleast_1d(q*np.ones(M))\n\n if debug:\n print(\"{:d} unknowns, {:d} pairwise equality constraints\".format(N,M))\n\n D = np.atleast_2d(np.zeros((M,N)))\n #q = np.atleast_2d(np.zeros(M)) \n \n for j in range(M):\n D[j,chargeGroups[j]] = 1.0\n\n if debug:\n print(\"D ({}):\\n{}\".format(D.shape,D))\n print(\"q ({}):\\n{}\".format(q.shape,q))\n \n return D, q\n```\n\n\n```python\nq1u = solve(A,B,C,N)\n```\n\n Diagonal of A: \n [ 8.5387 9.0212 9.0212]\n Vector norms: \n [ 2.9221 3.0035 3.0035]\n Matrix norms: \n [[ 8.5387 8.7766 8.7766]\n [ 8.7766 9.0212 9.0212]\n [ 8.7766 9.0212 9.0212]]\n Normed diagonal of A: \n [ 1. 1. 1.]\n Normed B: \n [-0.0036 -0.0265 -0.0265]\n\n\n\n```python\nq1c = constrainedMinimize(A,B,C,debug=True)\n```\n\n 3 unknowns, 1 equality constraints\n A (3, 3): \n [[ 8.5387 8.5216 8.5216]\n [ 8.5216 9.0212 8.3956]\n [ 8.5216 8.3956 9.0212]]\n B (3,): \n [-0.0105 -0.0797 -0.0797]\n C (): \n 0.0266115260155\n D (1, 3): \n [[ 1. 1. 1.]]\n q (1,): \n [0]\n upper A ((3, 4)): \n [[ 17.0774 17.0433 17.0433 1. ]\n [ 17.0433 18.0423 16.7912 1. ]\n [ 17.0433 16.7912 18.0423 1. ]]\n upper A ((1, 4)): \n [[ 1. 1. 1. 0.]]\n block A ((4, 4)): \n [[ 17.0774 17.0433 17.0433 1. ]\n [ 17.0433 18.0423 16.7912 1. ]\n [ 17.0433 16.7912 18.0423 1. ]\n [ 1. 1. 1. 0. ]]\n block B ((4,)): \n [-0.021 -0.1594 -0.1594 0. ]\n\n\n\n```python\nq1u\n```\n\n\n\n\n array([ 0.3378, -0.1699, -0.1699])\n\n\n\n\n```python\nq1c\n```\n\n\n\n\n array([ 0.3396, -0.1698, -0.1698, -0.0326])\n\n\n\n\n```python\nq2_horton = solve(A,B,C,N,qtot=1)\n```\n\n Diagonal of A: \n [ 8.5387 9.0212 9.0212]\n Vector norms: \n [ 2.9221 3.0035 3.0035]\n Matrix norms: \n [[ 8.5387 8.7766 8.7766]\n [ 8.7766 9.0212 9.0212]\n [ 8.7766 9.0212 9.0212]]\n Normed diagonal of A: \n [ 1. 1. 1.]\n Normed B: \n [-0.0036 -0.0265 -0.0265]\n\n\n\n```python\nq2c = constrainedMinimize(A,B,C,q_vector=np.array([1]),debug=True)\n```\n\n 3 unknowns, 1 equality constraints\n A (3, 3): \n [[ 8.5387 8.5216 8.5216]\n [ 8.5216 9.0212 8.3956]\n [ 8.5216 8.3956 9.0212]]\n B (3,): \n [-0.0105 -0.0797 -0.0797]\n C (): \n 0.0266115260155\n D (1, 3): \n [[ 1. 1. 1.]]\n q (1,): \n [1]\n upper A ((3, 4)): \n [[ 17.0774 17.0433 17.0433 1. ]\n [ 17.0433 18.0423 16.7912 1. ]\n [ 17.0433 16.7912 18.0423 1. ]]\n upper A ((1, 4)): \n [[ 1. 1. 1. 
0.]]\n block A ((4, 4)): \n [[ 17.0774 17.0433 17.0433 1. ]\n [ 17.0433 18.0423 16.7912 1. ]\n [ 17.0433 16.7912 18.0423 1. ]\n [ 1. 1. 1. 0. ]]\n block B ((4,)): \n [-0.021 -0.1594 -0.1594 1. ]\n\n\n\n```python\nq2_horton\n```\n\n\n\n\n array([ 1.2558, -0.1279, -0.1279])\n\n\n\n\n```python\nq2c\n```\n\n\n\n\n array([ 1.2558, -0.1279, -0.1279, -17.1071])\n\n\n\n\n```python\nd1 = np.array([0,1,-2])\n```\n\n\n```python\nq3c = constrainedMinimize(A,B,C,D_matrix=d1,q_vector=np.array([0]),debug=True)\n```\n\n 3 unknowns, 1 equality constraints\n A (3, 3): \n [[ 8.5387 8.5216 8.5216]\n [ 8.5216 9.0212 8.3956]\n [ 8.5216 8.3956 9.0212]]\n B (3,): \n [-0.0105 -0.0797 -0.0797]\n C (): \n 0.0266115260155\n D (3,): \n [ 0 1 -2]\n q (1,): \n [0]\n upper A ((3, 4)): \n [[ 17.0774 17.0433 17.0433 0. ]\n [ 17.0433 18.0423 16.7912 1. ]\n [ 17.0433 16.7912 18.0423 -2. ]]\n upper A ((1, 4)): \n [[ 0. 1. -2. 0.]]\n block A ((4, 4)): \n [[ 17.0774 17.0433 17.0433 0. ]\n [ 17.0433 18.0423 16.7912 1. ]\n [ 17.0433 16.7912 18.0423 -2. ]\n [ 0. 1. -2. 0. ]]\n block B ((4,)): \n [-0.021 -0.1594 -0.1594 0. ]\n\n\n\n```python\nq3c\n```\n\n\n\n\n array([ 0.2884, -0.1935, -0.0967, 0.0403])\n\n\n\n\n```python\nd1 = np.array([0,1,-2])\n```\n\n\n```python\nd2 = np.array([1,1,1])\n```\n\n\n```python\nD = np.block([[d1],[d2]])\n```\n\n\n```python\nD\n```\n\n\n\n\n array([[ 0, 1, -2],\n [ 1, 1, 1]])\n\n\n\n\n```python\nq = np.array([0,3])\n```\n\n\n```python\nq4c = constrainedMinimize(A,B,C,D_matrix=D,q_vector=q,debug=True)\n```\n\n 3 unknowns, 2 equality constraints\n A (3, 3): \n [[ 8.5387 8.5216 8.5216]\n [ 8.5216 9.0212 8.3956]\n [ 8.5216 8.3956 9.0212]]\n B (3,): \n [-0.0105 -0.0797 -0.0797]\n C (): \n 0.0266115260155\n D (2, 3): \n [[ 0 1 -2]\n [ 1 1 1]]\n q (2,): \n [0 3]\n upper A ((3, 5)): \n [[ 17.0774 17.0433 17.0433 0. 1. ]\n [ 17.0433 18.0423 16.7912 1. 1. ]\n [ 17.0433 16.7912 18.0423 -2. 1. ]]\n upper A ((2, 5)): \n [[ 0. 1. -2. 0. 0.]\n [ 1. 1. 1. 0. 0.]]\n block A ((5, 5)): \n [[ 17.0774 17.0433 17.0433 0. 1. ]\n [ 17.0433 18.0423 16.7912 1. 1. ]\n [ 17.0433 16.7912 18.0423 -2. 1. ]\n [ 0. 1. -2. 0. 0. ]\n [ 1. 1. 1. 0. 0. ]]\n block B ((5,)): \n [-0.021 -0.1594 -0.1594 0. 3. ]\n\n\n\n```python\nq4c\n```\n\n\n\n\n array([ 3.0755e+00, -5.0328e-02, -2.5164e-02, 1.0487e-02, -5.1256e+01])\n\n\n\n\n```python\nsum(q4c)\n```\n\n\n\n\n -48.245273601712007\n\n\n\n\n```python\nD2,q2 = constructPairwiseSymmetryConstraints([1,3,8],10,debug=True)\n```\n\n 10 unknowns, 2 pairwise equality constraints\n symmetry list ((2,)):\n [ 1. 1.]\n D ((2, 10)):\n [[ 0. 1. 0. -1. 0. 0. 0. 0. 0. 0.]\n [ 0. 1. 0. 0. 0. 0. 0. 0. -1. 0.]]\n q ((2,)):\n [ 0. 0.]\n\n\n\n```python\nD3,q3 = constructPairwiseSymmetryConstraints([1,3,8],10,symmetry=-1,debug=True)\n```\n\n 10 unknowns, 2 pairwise equality constraints\n symmetry list ((2,)):\n [-1. -1.]\n D ((2, 10)):\n [[ 0. 1. 0. 1. 0. 0. 0. 0. 0. 0.]\n [ 0. 1. 0. 0. 0. 0. 0. 0. 1. 0.]]\n q ((2,)):\n [ 0. 0.]\n\n\n\n```python\nD4,q4 = constructPairwiseSymmetryConstraints([1,3,8],10,symmetry=np.array([1,-1]),debug=True)\n```\n\n 10 unknowns, 2 pairwise equality constraints\n symmetry list ((2,)):\n [ 1. -1.]\n D ((2, 10)):\n [[ 0. 1. 0. -1. 0. 0. 0. 0. 0. 0.]\n [ 0. 1. 0. 0. 0. 0. 0. 0. 1. 0.]]\n q ((2,)):\n [ 0. 0.]\n\n\n\n```python\nD5,q5 = constructPairwiseSymmetryConstraints([[1,2],[3,4,5],[8,9]],10,symmetry=np.array([1,-1]),debug=True)\n```\n\n 10 unknowns, 2 pairwise equality constraints\n symmetry list ((2,)):\n [ 1. -1.]\n D ((2, 10)):\n [[ 0. 1. 1. -1. -1. -1. 0. 0. 0. 0.]\n [ 0. 1. 1. 
0. 0. 0. 0. 0. 1. 1.]]\n q ((2,)):\n [ 0. 0.]\n\n\n### full example\nWater with antisymmetric hydrogens and +1 integer charge group of oxygen and 1 hydrogen\n\n\n```python\n# Example: water\nimport h5py\n#load the cost function\ncost_function = h5py.File('h2o.cost.h5')\ncost = cost_function['cost']\nA = cost['A'][:]\nB = cost['B'][:]\nC = cost['C'].value\nN = cost['natom'].value\nprint(\"A: {}\".format(A))\nprint(\"B: {}\".format(B))\nprint(\"C: {}\".format(C))\nprint(\"N: {}\".format(N))\n\n\n```\n\n A: [[ 8.5387 8.5216 8.5216]\n [ 8.5216 9.0212 8.3956]\n [ 8.5216 8.3956 9.0212]]\n B: [-0.0105 -0.0797 -0.0797]\n C: 0.0266115260155\n N: 3\n\n\n\n```python\nD1,q1 = constructPairwiseSymmetryConstraints(\n charges=[1,2],N=3,symmetry=-1,debug=True)\n```\n\n 3 unknowns, 1 pairwise equality constraints\n symmetry list ((1,)):\n [-1.]\n D ((1, 3)):\n [[ 0. 1. 1.]]\n q ((1,)):\n [ 0.]\n\n\n\n```python\nD1, q1\n```\n\n\n\n\n (array([[ 0., 1., 1.]]), array([ 0.]))\n\n\n\n\n```python\nD2,q2 = constructChargegroupConstraints(\n chargeGroups=[[0,1]],N=3,q=1,debug=True)\n```\n\n 3 unknowns, 1 pairwise equality constraints\n D ((1, 3)):\n [[ 1. 1. 0.]]\n q ((1,)):\n [ 1.]\n\n\n\n```python\nD2, q2\n```\n\n\n\n\n (array([[ 1., 1., 0.]]), array([ 1.]))\n\n\n\n\n```python\nD = np.vstack([D1,D2])\n```\n\n\n```python\nD\n```\n\n\n\n\n array([[ 0., 1., 1.],\n [ 1., 1., 0.]])\n\n\n\n\n```python\nq = np.hstack([q1,q2])\n```\n\n\n```python\nq\n```\n\n\n\n\n array([ 0., 1.])\n\n\n\n\n```python\nX = constrainedMinimize(A,B,C,D,q,debug=True)\n```\n\n 3 unknowns, 2 equality constraints\n A (3, 3): \n [[ 8.5387 8.5216 8.5216]\n [ 8.5216 9.0212 8.3956]\n [ 8.5216 8.3956 9.0212]]\n B (3,): \n [-0.0105 -0.0797 -0.0797]\n C (): \n 0.0266115260155\n D (2, 3): \n [[ 0. 1. 1.]\n [ 1. 1. 0.]]\n q (2,): \n [ 0. 1.]\n upper A ((3, 5)): \n [[ 17.0774 17.0433 17.0433 0. 1. ]\n [ 17.0433 18.0423 16.7912 1. 1. ]\n [ 17.0433 16.7912 18.0423 1. 0. ]]\n upper A ((2, 5)): \n [[ 0. 1. 1. 0. 0.]\n [ 1. 1. 0. 0. 0.]]\n block A ((5, 5)): \n [[ 17.0774 17.0433 17.0433 0. 1. ]\n [ 17.0433 18.0423 16.7912 1. 1. ]\n [ 17.0433 16.7912 18.0423 1. 0. ]\n [ 0. 1. 1. 0. 0. ]\n [ 1. 1. 0. 0. 0. ]]\n block B ((5,)): \n [-0.021 -0.1594 -0.1594 0. 1. ]\n\n\n\n```python\nX\n```\n\n\n\n\n array([ 0.1267, 0.8733, -0.8733, -1.2267, -2.1852])\n\n\n\n\n```python\nQ = X[:N]\n```\n\n\n```python\nLagrange = X[N:]\n```\n\n\n```python\nQ # all constraints fulfilled\n```\n\n\n\n\n array([ 0.1267, 0.8733, -0.8733])\n\n\n\n\n```python\nQ[0]+Q[1]\n```\n\n\n\n\n 1.0\n\n\n\n\n```python\nQ[1]+Q[2]\n```\n\n\n\n\n 1.1102230246251565e-16\n\n\n\n\n```python\nLagrange\n```\n\n\n\n\n array([-1.2267, -2.1852])\n\n\n\n\n```python\n\n```\n\n\n```python\nD3,q3 = constructPairwiseSymmetryConstraints(\n charges=[1,2],N=3,symmetry=1,debug=True)\n```\n\n 3 unknowns, 1 pairwise equality constraints\n symmetry list ((1,)):\n [ 1.]\n D ((1, 3)):\n [[ 0. 1. -1.]]\n q ((1,)):\n [ 0.]\n\n\n\n```python\nD4,q4 = constructPairwiseSymmetryConstraints(\n charges=[1,2],N=3,symmetry=-1,debug=True)\n```\n\n 3 unknowns, 1 pairwise equality constraints\n symmetry list ((1,)):\n [-1.]\n D ((1, 3)):\n [[ 0. 1. 1.]]\n q ((1,)):\n [ 0.]\n\n\n\n```python\nD5,q5 = constructChargegroupConstraints(chargeGroups=[[0,1,2],[0,1,2]],N=3,q=[0,1],debug=True)\n```\n\n 3 unknowns, 2 pairwise equality constraints\n D ((2, 3)):\n [[ 1. 1. 1.]\n [ 1. 1. 1.]]\n q ((2,)):\n [ 0. 
1.]\n\n\n\n```python\nD = np.vstack([D3,D4,D5])\n```\n\n\n```python\nD\n```\n\n\n\n\n array([[ 0., 1., -1.],\n [ 0., 1., 1.],\n [ 1., 1., 1.],\n [ 1., 1., 1.]])\n\n\n\n\n```python\nq = np.hstack([q3,q4,q5])\n```\n\n\n```python\nq\n```\n\n\n\n\n array([ 0., 0., 0., 1.])\n\n\n\n\n```python\nX = constrainedMinimize(A,B,C,D,q,debug=True)\n```\n\n 3 unknowns, 4 equality constraints\n A (3, 3): \n [[ 8.5387 8.5216 8.5216]\n [ 8.5216 9.0212 8.3956]\n [ 8.5216 8.3956 9.0212]]\n B (3,): \n [-0.0105 -0.0797 -0.0797]\n C (): \n 0.0266115260155\n D (4, 3): \n [[ 0. 1. -1.]\n [ 0. 1. 1.]\n [ 1. 1. 1.]\n [ 1. 1. 1.]]\n q (4,): \n [ 0. 0. 0. 1.]\n upper A ((3, 7)): \n [[ 17.0774 17.0433 17.0433 0. 0. 1. 1. ]\n [ 17.0433 18.0423 16.7912 1. 1. 1. 1. ]\n [ 17.0433 16.7912 18.0423 -1. 1. 1. 1. ]]\n upper A ((4, 7)): \n [[ 0. 1. -1. 0. 0. 0. 0.]\n [ 0. 1. 1. 0. 0. 0. 0.]\n [ 1. 1. 1. 0. 0. 0. 0.]\n [ 1. 1. 1. 0. 0. 0. 0.]]\n block A ((7, 7)): \n [[ 17.0774 17.0433 17.0433 0. 0. 1. 1. ]\n [ 17.0433 18.0423 16.7912 1. 1. 1. 1. ]\n [ 17.0433 16.7912 18.0423 -1. 1. 1. 1. ]\n [ 0. 1. -1. 0. 0. 0. 0. ]\n [ 0. 1. 1. 0. 0. 0. 0. ]\n [ 1. 1. 1. 0. 0. 0. 0. ]\n [ 1. 1. 1. 0. 0. 0. 0. ]]\n block B ((7,)): \n [-0.021 -0.1594 -0.1594 0. 0. 0. 1. ]\n\n\n\n```python\nX[:N]\n```\n\n\n\n\n array([-0.004 , 0.0258, -0.0218])\n\n\n\n\n```python\nX[N:]\n```\n\n\n\n\n array([ -3.1395e-02, -1.2500e-01, 1.4412e+17, -1.4412e+17])\n\n\n\n\n```python\nX = constrainedMinimize(A,B,C,D5,q5,debug=True)\n```\n\n 3 unknowns, 2 equality constraints\n A (3, 3): \n [[ 8.5387 8.5216 8.5216]\n [ 8.5216 9.0212 8.3956]\n [ 8.5216 8.3956 9.0212]]\n B (3,): \n [-0.0105 -0.0797 -0.0797]\n C (): \n 0.0266115260155\n D (2, 3): \n [[ 1. 1. 1.]\n [ 1. 1. 1.]]\n q (2,): \n [ 0. 1.]\n upper A ((3, 5)): \n [[ 17.0774 17.0433 17.0433 1. 1. ]\n [ 17.0433 18.0423 16.7912 1. 1. ]\n [ 17.0433 16.7912 18.0423 1. 1. ]]\n upper A ((2, 5)): \n [[ 1. 1. 1. 0. 0.]\n [ 1. 1. 1. 0. 0.]]\n block A ((5, 5)): \n [[ 17.0774 17.0433 17.0433 1. 1. ]\n [ 17.0433 18.0423 16.7912 1. 1. ]\n [ 17.0433 16.7912 18.0423 1. 1. ]\n [ 1. 1. 1. 0. 0. ]\n [ 1. 1. 1. 0. 0. ]]\n block B ((5,)): \n [-0.021 -0.1594 -0.1594 0. 1. 
]\n\n\n\n```python\nX\n```\n\n\n\n\n array([ 3.5032e-01, -1.6107e-01, -1.8995e-01, 1.4412e+17, -1.4412e+17])\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "c741ea3893b3f874d783dca54f5e73915e0629c4", "size": 44117, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "examples/horton-esp-fit-constrained.ipynb", "max_stars_repo_name": "jotelha/smampppp", "max_stars_repo_head_hexsha": "729e4733b436e68adfe07bcaa39a47727d0c8dd8", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-03-15T17:23:52.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-15T17:23:52.000Z", "max_issues_repo_path": "examples/horton-esp-fit-constrained.ipynb", "max_issues_repo_name": "jotelha/smampppp", "max_issues_repo_head_hexsha": "729e4733b436e68adfe07bcaa39a47727d0c8dd8", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "examples/horton-esp-fit-constrained.ipynb", "max_forks_repo_name": "jotelha/smampppp", "max_forks_repo_head_hexsha": "729e4733b436e68adfe07bcaa39a47727d0c8dd8", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2018-04-06T11:29:44.000Z", "max_forks_repo_forks_event_max_datetime": "2018-04-06T11:29:44.000Z", "avg_line_length": 24.8266741699, "max_line_length": 243, "alphanum_fraction": 0.4225581975, "converted": true, "num_tokens": 10546, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9553191259110588, "lm_q2_score": 0.8774767922879693, "lm_q1q2_score": 0.8382703622157825}} {"text": "# Eigenvalue problems\n\n\"Eigenvalue\" means characteristic value. These types of problems show up in many areas involving boundary-value problems, where we may not be able to obtain an analytical solution, but we can identify certain characteristic values that tell us important information about the system: the eigenvalues.\n\n## Example: beam buckling\n\nLet's consider deflection in a simply supported (static) vertical beam: $y(x)$, with boundary conditions $y(0) = 0$ and $y(L) = 0$. To get the governing equation, start with considering the sum of moments around the upper pin:\n\\begin{align}\n\\sum M &= M_z + P y = 0 \\\\\nM_z &= -P y\n\\end{align}\n\nWe also know that $M_z = E I y''$, so we can obtain\n\\begin{align}\nM_z = E I \\frac{d^2 y}{dx^2} &= -P y \\\\\ny'' + \\frac{P}{EI} y &= 0\n\\end{align}\nThis equation governs the stability of a beam, considering small deflections.\nTo simplify things, let's define $\\lambda^2 = \\frac{P}{EI}$, which gives us the ODE\n\\begin{equation}\ny'' + \\lambda^2 y = 0\n\\end{equation}\nWe can get the general solution to this:\n\\begin{equation}\ny(x) = A \\cos (\\lambda x) + B \\sin (\\lambda x)\n\\end{equation}\n\nTo find the coefficients, let's apply the boundary conditions, starting with $x=0$:\n\\begin{align}\ny(x=0) &= 0 = A \\cos 0 + B \\sin 0 \\\\\n\\rightarrow A &= 0 \\\\\ny(x=L) &= 0 = B \\sin (\\lambda L)\n\\end{align}\nNow what? $B \\neq 0$, because otherwise we would have the trivial solution $y(x) = 0$. 
Instead, to satisfy the boundary condition, we need\n\\begin{align}\nB \\neq 0 \\rightarrow \\sin (\\lambda L) &= 0 \\\\\n\\text{so} \\quad \\lambda L &= n \\pi \\quad n = 1, 2, 3, \\ldots, \\infty \\\\\n\\lambda &= \\frac{n \\pi}{L} \\quad n = 1, 2, 3, \\ldots, \\infty\n\\end{align}\n$\\lambda$ give the the **eigenvalues** for this problem; as you can see, there are an infinite number, that correspond to **eigenfunctions**:\n\\begin{equation}\ny_n = B \\sin \\left( \\frac{n \\pi x}{L} \\right) \\quad n = 1, 2, 3, \\ldots, \\infty\n\\end{equation}\n\nThe eigenvalues and associated eigenfunctions physically represent different modes of deflection.\nFor example, consider the first three modes (corresponding to $n = 1, 2, 3$):\n\n\n```matlab\nclear all; clc\n\nL = 1.0;\nx = linspace(0, L);\nsubplot(1,3,1);\ny = sin(pi * x / L);\nplot(y, x); title('n = 1')\nsubplot(1,3,2);\ny = sin(2 * pi * x / L);\nplot(y, x); title('n = 2')\nsubplot(1,3,3);\ny = sin(3* pi * x / L);\nplot(y, x); title('n = 3')\n```\n\nHere we see different modes of how the beam will buckle. How do we know when this happens?\n\nRecall that the eigenvalue is connected to the physical properties of the beam:\n\\begin{gather}\n\\lambda^2 = \\frac{P}{EI} \\rightarrow \\lambda = \\sqrt{\\frac{P}{EI}} = \\frac{n \\pi}{L} \\\\\nP = \\frac{EI}{L} n^2 \\pi^2\n\\end{gather}\nThis means that when the combination of load force and beam properties match certain values, the beam will deflect\u2014and buckle\u2014in one of the modes corresponding to the associated eigenfunction.\n\nIn particular, the first mode ($n=1$) is interesting, because this is the first one that will be encountered if a load starts at zero and increases. This is the **Euler critical load** of buckling, $P_{cr}$:\n\\begin{gather}\n\\lambda_1 = \\frac{\\pi}{L} \\rightarrow \\lambda_1^2 = \\frac{P}{EI} = \\frac{\\pi^2}{L^2} \\\\\nP_{cr} = \\frac{\\pi^2 E I}{L^2}\n\\end{gather}\n\n## Example: beam buckling with different boundary conditions\n\nLet's consider a slightly different case, where at $x=0$ the beam is supported such that $y'(0) = 0$. How does the beam buckle in this case?\n\nThe governing equation and general solution are the same:\n\\begin{align}\ny'' + \\lambda^2 y &= 0 \\\\\ny(x) &= A \\cos (\\lambda x) + B \\sin (\\lambda x)\n\\end{align}\nbut our boundary conditions are now different:\n\\begin{align}\ny'(0) = 0 = -\\lambda A \\sin(0) + \\lambda B\\cos(0) \\\\\n\\rightarrow B &= 0 \\\\\ny &= A \\cos (\\lambda x) \\\\\ny(L) &= 0 = A \\cos (\\lambda L) \\\\\nA \\neq 0 \\rightarrow \\cos(\\lambda L) &= 0 \\\\\n\\text{so} \\quad \\lambda L &= \\frac{(2n-1) \\pi}{2} \\quad n = 1,2,3,\\ldots, \\infty \\\\\n\\lambda &= \\frac{(2n-1) \\pi}{2 L} \\quad n = 1,2,3,\\ldots, \\infty\n\\end{align}\n\nThen, the critical buckling load, again corresponding to $n=1$, is\n\\begin{equation}\nP_{cr} = \\frac{\\pi^2 EI}{4 L^2}\n\\end{equation}\n\n## Getting eigenvalues numerically\n\nWe can only get the eigenvalues analytically if we can obtain an analytical solution of the ODE, but we might want to get eigenvalues for more complex problems too. In that case, we can use an approach based on *finite differences* to find the eigenvalues.\n\nConsider the same problem as above, for deflection of a simply supported beam:\n\\begin{equation}\ny'' + \\lambda^2 y = 0 \n\\end{equation}\nwith boundary conditions $y(0) = 0$ and $y(L) = 0$. 
Let's represent this using finite differences, for a case where $L=3$ and $\\Delta x = 1$, so we have four points in our solution grid.\n\nThe finite difference representation of the ODE is:\n\\begin{align}\n\\frac{y_{i-1} - 2y_i + y_{i+1}}{\\Delta x^2} + \\lambda^2 y_i &= 0 \\\\\ny_{i-1} + \\left( \\lambda^2 \\Delta x^2 - 2 \\right) y_i + y_{i+1} &= 0\n\\end{align}\nHowever, in this case, we are not solving for the values of deflection ($y_i$), but instead the **eigenvalues** $\\lambda$.\n\nThen, we can write the system of equations using the above recursion formula and our two boundary conditions:\n\\begin{align}\ny_1 &= 0 \\\\\ny_1 + y_2 \\left( \\lambda^2 \\Delta x^2 - 2 \\right) + y_3 &= 0 \\\\\ny_2 + y_3 \\left( \\lambda^2 \\Delta x^2 - 2 \\right) + y_4 &= 0 \\\\\ny_4 &= 0\n\\end{align}\nwhich we can simplify down to two equations by incorporating the boundary conditions into the equations for the two middle points, and also letting $k = \\lambda^2 \\Delta x^2$:\n\\begin{align}\ny_2 (k-2) + y_3 &= 0 \\\\\ny_2 + y_3 (k-2) &= 0\n\\end{align}\nLet's modify this once more by multiplying both equations by $-1$:\n\\begin{align}\ny_2 (2-k) - y_3 &= 0 \\\\\n-y_2 + y_3 (2-k) &= 0\n\\end{align}\n\nNow we can represent this system of equations as a matrix equation $A \\mathbf{y} = \\mathbf{b} = \\mathbf{0}$:\n\\begin{equation}\n\\begin{bmatrix} 2-k & -1 \\\\ -1 & 2-k \\end{bmatrix}\n\\begin{bmatrix} y_2 \\\\ y_3 \\end{bmatrix} = \\begin{bmatrix} 0 \\\\ 0 \\end{bmatrix}\n\\end{equation}\n$\\mathbf{y} = \\mathbf{0}$ is a trivial solution to this, so instead $\\det(A) = 0$ satisfies this equation.\nFor our $2\\times 2$ matrix, that looks like:\n\\begin{align}\n\\det(A) = \\begin{vmatrix} 2-k & -1 \\\\ -1 & 2-k \\end{vmatrix} = (2-k)^2 - 1 &= 0 \\\\\nk^2 - 4k + 3 &= 0 \\\\\n(k-3)(k-1) &= 0\n\\end{align}\nso the roots of this equation are $k_1 = 1$ and $k_2 = 3$. Recall that $k$ is directly related to the eigenvalue: $k = \\lambda^2 \\Delta x^2$, and $\\Delta x = 1$ for this case, so we can calculate the two associated eigenvalues:\n\\begin{align}\nk_1 &= \\lambda_1^2 \\Delta x^2 = 1 \\rightarrow \\lambda_1 = 1 \\\\\nk_2 &= \\lambda_2^2 \\Delta x^2 = 3 \\rightarrow \\lambda_2 = \\sqrt{3} = 1.732\n\\end{align}\n\nOur work has given us approximations for the first two eigenvalues. We can compare these against the exact values, given in general by $\\lambda = n \\pi / L$ (which we determined above):\n\\begin{align}\nn=1: \\quad \\lambda_1 &= \\frac{\\pi}{L} = \\frac{\\pi}{3} = 1.0472 \\\\\nn=2: \\quad \\lambda_2 &= \\frac{2\\pi}{L} = \\frac{2\\pi}{3} = 2.0944\n\\end{align}\nSo, our approximations are close, but with some obvious error. This is because we used a fairly crude step size of $\\Delta x = 1$, dividing the domain into just three segments. By using a finer resolution, we can get more-accurate eigenvalues and also more of them (remember, there are actually an infinite number!). \n\nTo do that, we will need to use Matlab, which offers the `eig()` function for calculating eigenvalues---essentially it is finding the roots to the polynomial given by $\\det(A) = 0$. 
We need to modify this slightly, though, to use the function:\n\\begin{align}\n\\det(A) &= 0 \\\\\n\\det \\left( A^* - k I \\right) = 0\n\\end{align}\nwhere the new matrix is\n\\begin{equation}\nA^* = \\begin{bmatrix} 2 & -1 \\\\ -1 & 2 \\end{bmatrix}\n\\end{equation}\nThen, `eig(A*)` will provide the values of $k$, which we can use to find the $\\lambda$s:\n\n\n```matlab\nclear all; clc\n\ndx = 1.0;\nL = 3.0;\n\nAstar = [2 -1; -1 2];\nk = eig(Astar);\n\nlambda = sqrt(k) / dx;\n\nfprintf('lambda_1: %6.3f\\n', lambda(1));\nfprintf('lambda_2: %6.3f', lambda(2));\n```\n\n lambda_1: 1.000\n lambda_2: 1.732\n\nAs expected, this matches with our manual calculation above. But, we might want to calculate these eigenvalues more accurately, so let's generalize this a bit and then try using $\\Delta x= 0.1$:\n\n\n```matlab\nclear all; clc\n\ndx = 0.1;\nL = 3.0;\nx = 0 : dx : L;\nn = length(x) - 2;\n\nAstar = zeros(n,n);\nfor i = 1 : n\n if i == 1\n Astar(1,1) = 2;\n Astar(1,2) = -1;\n elseif i == n\n Astar(n,n-1) = -1;\n Astar(n,n) = 2;\n else\n Astar(i,i-1) = -1;\n Astar(i,i) = 2;\n Astar(i,i+1) = -1;\n end\nend\nk = eig(Astar);\n\nlambda = sqrt(k) / dx;\n\nfprintf('lambda_1: %6.3f\\n', lambda(1));\nfprintf('lambda_2: %6.3f\\n\\n', lambda(2));\n\nerr = abs(lambda(1) - (pi/L)) / (pi/L);\nfprintf('Error in lambda_1: %5.2f%%\\n', 100*err);\n```\n\n lambda_1: 1.047\n lambda_2: 2.091\n \n Error in lambda_1: 0.05%\n\n\n## Example: mass-spring system\n\nLet's analyze the motion of masses connected by springs in a system:\n\n:::{figure-md} fig-mass-spring\n\n\nSystem with two masses connected by springs\n:::\n\nFirst, we need to write the equations of motion, based on doing a free-body diagram on each mass:\n\\begin{align}\nm_1 \\frac{d^2 x_1}{dt^2} &= -k x_1 + k(x_2 - x_1) \\\\\nm_2 \\frac{d^2 x_2}{dt^2} &= -k (x_2 - x_1) - k x_2\n\\end{align}\nWe can condense these equations a bit:\n\\begin{align}\nx_1^{\\prime\\prime} - \\frac{k}{m_1} \\left( -2 x_1 + x_2 \\right) &= 0 \\\\\nx_2^{\\prime\\prime} - \\frac{k}{m_2} \\left( x_1 - 2 x_2 \\right) &= 0\n\\end{align}\n\nTo proceed, we can assume that the masses will move in a sinusoidal fashion, with a shared frequency but separate amplitude:\n\\begin{align}\nx_i &= A_i \\sin (\\omega t) \\\\\nx_i^{\\prime\\prime} &= -A_i \\omega^2 \\sin (\\omega t)\n\\end{align}\nWe can plug these into the ODEs:\n\\begin{align}\n\\sin (\\omega t) \\left[ \\left( \\frac{2k}{m_1} - \\omega^2 \\right) A_1 - \\frac{k}{m_1} A_2 \\right] &= 0 \\\\\n\\sin (\\omega t) \\left[ -\\frac{k}{m_2} A_1 + \\left( \\frac{2k}{m_2} - \\omega^2 \\right) A_2 \\right] &= 0\n\\end{align}\nor\n\\begin{align}\n\\left( \\frac{2k}{m_1} - \\omega^2 \\right) A_1 - \\frac{k}{m_1} A_2 &= 0 \\\\\n-\\frac{k}{m_2} A_1 + \\left( \\frac{2k}{m_2} - \\omega^2 \\right) A_2 &= 0\n\\end{align}\nLet's put some numbers in, and try to solve for the eigenvalues: $\\omega^2$.\nLet $m_1 = m_2 = 40 $ kg and $k = 200$ N/m.\n\nNow, the equations become\n\\begin{align}\n\\left( 10 - \\omega^2 \\right) A_1 - 5 A_2 &= 0 \\\\\n-5 A_1 + \\left( 10 - \\omega^2 \\right) A_2 &= 0\n\\end{align}\nor $A \\mathbf{y} = \\mathbf{0}$, which we can represent as\n\\begin{equation}\n\\begin{bmatrix} 10-\\omega^2 & -5 \\\\ -5 & 10-\\omega^2 \\end{bmatrix}\n\\begin{bmatrix} A_1 \\\\ A_2 \\end{bmatrix} = \n\\begin{bmatrix} 0 \\\\ 0 \\end{bmatrix}\n\\end{equation}\nHere, $\\omega^2$ are the eigenvalues, and we can find them with $\\det(A) = 0$:\n\\begin{align}\n\\det(B) &= 0 \\\\\n\\det (B^* - \\omega^2 I) &= 0\n\\end{align}\n\n\n```matlab\nclear 
all; clc\n\nBstar = [10 -5; -5 10];\nomega_squared = eig(Bstar);\nomega = sqrt(omega_squared);\n\nfprintf('omega_1 = %5.2f rad/s\\n', omega(1));\nfprintf('omega_2 = %5.2f rad/s\\n', omega(2));\n```\n\n omega_1 = 2.24 rad/s\n omega_2 = 3.87 rad/s\n\n\nWe find there are two modes of oscillation, each associated with a different natural frequency. Unfortunately, we cannot calculate independent and unique values for the amplitudes, but if we insert the values of $\\omega$ into the above equations, we can find relations connecting the amplitudes:\n\\begin{align}\n\\omega_1: \\quad A_1 &= A_2 \\\\\n\\omega_2: \\quad A_1 &= -A_2\n\\end{align}\n\nSo, for the first mode, we have the two masses moving in sync with the same amplitude. In the second mode, they move with opposite (but equal) amplitude. With the two different frequencies, they also have two different periods:\n\n\n```matlab\nt = linspace(0, 3);\nsubplot(1,5,1)\nplot(sin(omega(1)*t), t); hold on\nplot(0,0, 's');\nset (gca, 'ydir', 'reverse' )\nbox off; set(gca,'Visible','off')\n\nsubplot(1,5,2)\nplot(sin(omega(1)*t), t); hold on\nplot(0,0, 's');\nset (gca, 'ydir', 'reverse' )\ntext(-2.5,-0.2, 'First mode')\nbox off; set(gca,'Visible','off')\n\nsubplot(1,5,4)\nplot(-sin(omega(2)*t), t); hold on\nplot(0,0, 's');\nset (gca, 'ydir', 'reverse' )\nbox off; set(gca,'Visible','off')\n\nsubplot(1,5,5)\nplot(sin(omega(2)*t), t); hold on\nplot(0,0, 's');\nset (gca, 'ydir', 'reverse' )\nbox off; set(gca,'Visible','off')\ntext(-2.7,-0.2, 'Second mode')\n```\n\nWe can confirm that the system would actually behave in this way by setting up the system of ODEs and integrating based on initial conditions matching the amplitudes of the two modes.\n\nFor example, let's use $x_1 (t=0) = x_2(t=0) = 1$ for the first mode, and $x_1(t=0) = 1$ and $x_2(t=0) = -1$ for the second mode. We'll use zero initial velocity for both cases. 
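\n\nWritten out with the state vector $(x_1, v_1, x_2, v_2)$, where $v_1 = x_1'$ and $v_2 = x_2'$, the two second-order equations above become four first-order equations, which is what the function file below evaluates:\n\n\\begin{align}\nx_1' &= v_1 \\\\\nv_1' &= \\frac{k}{m_1} \\left( -2 x_1 + x_2 \\right) \\\\\nx_2' &= v_2 \\\\\nv_2' &= \\frac{k}{m_2} \\left( x_1 - 2 x_2 \\right)\n\\end{align}\n\nWith this ordering, the initial conditions above correspond to the state vectors $[1, 0, 1, 0]$ for the first mode and $[1, 0, -1, 0]$ for the second mode.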
\n\nThen, we can solve by converting the system of two 2nd-order ODEs into a system of four 1st-order ODEs:\n\n\n```matlab\n%%file masses.m\nfunction dxdt = masses(t, x)\n% this is a function file to calculate the derivatives associated with the system\n\nm1 = 40;\nm2 = 40;\nk = 200;\n\ndxdt = zeros(4,1);\n\ndxdt(1) = x(2);\ndxdt(2) = (k/m1)*(-2*x(1) + x(3));\ndxdt(3) = x(4);\ndxdt(4) = (k/m2)*(x(1) - 2*x(3));\n```\n\n Created file '/Users/niemeyek/projects/ME373-book/content/bvps/masses.m'.\n\n\n\n```matlab\nclear all; clc\n\n% this is the integration for the system in the first mode\n[t, X] = ode45('masses', [0 3], [1.0 0.0 1.0 0.0]);\nsubplot(1,5,1)\nplot(X(:,1), t); \nylabel('displacement (m)'); xlabel('time (s)')\nset (gca, 'ydir', 'reverse' )\n%box off; set(gca,'Visible','off')\n\nsubplot(1,5,2)\nplot(X(:,3), t); xlabel('time (s)')\nset (gca, 'ydir', 'reverse' )\ntext(-4,-0.2, 'First mode')\n\n% this is the integration for the system in the second mode\n[t, X] = ode45('masses', [0 3], [1.0 0.0 -1.0 0.0]);\nsubplot(1,5,4)\nplot(X(:,1), t);\nylabel('displacement (m)'); xlabel('time (s)')\nset (gca, 'ydir', 'reverse' )\n%box off; set(gca,'Visible','off')\n\nsubplot(1,5,5)\nplot(X(:,3), t); xlabel('time (s)')\nset (gca, 'ydir', 'reverse' )\ntext(-4,-0.2, 'Second mode')\n```\n\nThis shows that we get either of the pure modes of motion with the appropriate initial conditions.\n\nWhat about if the initial conditions *don't* match either set of amplitude patterns?\n\n\n```matlab\n[t, X] = ode45('masses', [0 3], [0.25 0.0 0.75 0.0]);\nsubplot(1,5,1)\nplot(X(:,1), t);\n%plot(0,0, 's');\nset (gca, 'ydir', 'reverse' )\n%box off; set(gca,'Visible','off')\n\nsubplot(1,5,2)\nplot(X(:,3), t);\nset (gca, 'ydir', 'reverse' )\n\n```\n\nIn this case, the resulting motion will be a complicated superposition of the two modes.\n", "meta": {"hexsha": "a8b9958be6ae785e65f3b8f9ba3f005653827995", "size": 99408, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "docs/_sources/content/bvps/eigenvalue.ipynb", "max_stars_repo_name": "kyleniemeyer/ME373-book", "max_stars_repo_head_hexsha": "66a9ef0f69a8c4e1656c02080aebfb5704e1a089", "max_stars_repo_licenses": ["CC-BY-4.0", "BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "docs/_sources/content/bvps/eigenvalue.ipynb", "max_issues_repo_name": "kyleniemeyer/ME373-book", "max_issues_repo_head_hexsha": "66a9ef0f69a8c4e1656c02080aebfb5704e1a089", "max_issues_repo_licenses": ["CC-BY-4.0", "BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/_sources/content/bvps/eigenvalue.ipynb", "max_forks_repo_name": "kyleniemeyer/ME373-book", "max_forks_repo_head_hexsha": "66a9ef0f69a8c4e1656c02080aebfb5704e1a089", "max_forks_repo_licenses": ["CC-BY-4.0", "BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 166.2341137124, "max_line_length": 26108, "alphanum_fraction": 0.8720223724, "converted": true, "num_tokens": 5150, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9059898102301019, "lm_q2_score": 0.9252299643080207, "lm_q1q2_score": 0.8382489197826277}} {"text": "$ x = \\frac{-b \\pm \\sqrt{b^2 - 4ac}}{2a} $\n\n\n```python\nfrom sympy import *\nfrom IPython.display import display\nfrom sympy.printing.mathml import mathml\nfrom IPython.display import display, Math, Latex\n\nx, y, z = symbols('x y z')\ninit_printing(use_unicode=True)\n```\n\n\n```python\ndef mprint(e):\n display(Math(latex(e)))\n```\n\n\n```python\npprint(diff(2*x**4))\n```\n\n 3\n 8\u22c5x \n\n\n\n```python\nexpr = simplify((x**3 + x**2 - x - 1)/(x**2 + 2*x + 1))\n```\n\n\n```python\nexpr = (x**3 + x**2 - x - 1)/(x**2 + 2*x + 1)\ndisplay(Math(latex(expr)))\nexpr = simplify(expr)\nprint(type(expr))\nprint(latex(expr))\ndisplay(Math(latex(expr)))\n```\n\n\n$$\\frac{x^{3} + x^{2} - x - 1}{x^{2} + 2 x + 1}$$\n\n\n \n x - 1\n\n\n\n$$x - 1$$\n\n\n\n```python\nfrom IPython.display import display, Math, Latex\ndisplay(Math('\\\\frac{1}{2}'))\n```\n\n\n$$\\frac{1}{2}$$\n\n\n\n```python\nprint(expr.subs(x,5))\n```\n\n 4\n\n\n\n```python\neql = Eq(3*x+5,10)\n```\n\n\n```python\nz = solveset(eql,x)\ndisplay(Math(latex(z)))\n```\n\n\n```python\n\n```\n\n\n$$\\left\\{\\frac{5}{3}\\right\\}$$\n\n\n\n```python\nfrom sympy import *\nx, y, z = symbols('x y z')\ninit_printing(use_unicode=True)\nexpr = diff(sin(x)/x**2, x)\nmprint(expr)\n```\n\n\n$$\\frac{1}{x^{2}} \\cos{\\left (x \\right )} - \\frac{2}{x^{3}} \\sin{\\left (x \\right )}$$\n\n\n\n```python\nexpr_i = integrate(expr,x)\nmprint(expr_i)\n```\n\n\n$$\\frac{1}{x^{2}} \\sin{\\left (x \\right )}$$\n\n", "meta": {"hexsha": "365e74f40a2fbb7b701194a1ca5318ea6443baed", "size": 4954, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "SOA/paf-sympy/sympy-soa.ipynb", "max_stars_repo_name": "awaiswill/present", "max_stars_repo_head_hexsha": "1cfbc8d1f31ade6c21e3ed1d0685c31b3a772485", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 120, "max_stars_repo_stars_event_min_datetime": "2019-02-15T18:46:59.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-30T14:49:05.000Z", "max_issues_repo_path": "SOA/paf-sympy/sympy-soa.ipynb", "max_issues_repo_name": "smartquark/present", "max_issues_repo_head_hexsha": "21a31ab2f38474a68e5560a3330aa42d0847c9b3", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 9, "max_issues_repo_issues_event_min_datetime": "2020-04-20T14:20:00.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-12T00:23:35.000Z", "max_forks_repo_path": "SOA/paf-sympy/sympy-soa.ipynb", "max_forks_repo_name": "awaiswill/present", "max_forks_repo_head_hexsha": "1cfbc8d1f31ade6c21e3ed1d0685c31b3a772485", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 116, "max_forks_repo_forks_event_min_datetime": "2017-01-05T20:05:32.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-19T23:04:00.000Z", "avg_line_length": 18.6240601504, "max_line_length": 102, "alphanum_fraction": 0.4457004441, "converted": true, "num_tokens": 512, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9399133447766224, "lm_q2_score": 0.8918110490322425, "lm_q1q2_score": 0.8382251060046434}} {"text": "```python\nfrom sympy import *\n```\n\nCopyright - Michael Corley\n\n# **Printing and graphing functions**\n* This tutorial should be useful for those looking for a Mathematica or Matlab-like tool in Python.\n* This tutorial is designed to bring you up to speed as quickly as possible on how to use the tool, not to teach you what functions are.\n> *SymPy is a Python library for symbolic mathematics. It aims to become a full-featured computer algebra system (CAS) while keeping the code as simple as possible in order to be comprehensible and easily extensible. SymPy is written entirely in Python.*\n\n\n---\n\n\n\n\n\n\n\n\n**Let** *f(x) = e^x* and *g(x) = sqrt(x-1)*\n* Lets work through how to initiate these functions\n\n\n\n\n\n\n```python\nx = Symbol('x')\nf = Function('f')\ng = Function('g')(x)\n\nclass f(Function):\n @classmethod\n def eval(cls, x):\n return exp(x)\n\nclass g(Function):\n @classmethod\n def eval(cls, x):\n return sqrt(x-1)\n```\n\n**First:** sympy allows you to print symbolic functions\n* You can either print it directly, or you can use the **latex()** function to output in Latex\n\n\n```python\nprint(g(f(x)))\nprint(f(g(x)))\n\nprint('\\n Latex output:')\nprint(latex(f(g(x))))\n```\n\n sqrt(exp(x) - 1)\n exp(sqrt(x - 1))\n \n Latex output:\n e^{\\sqrt{x - 1}}\n\n\n**Second:** sympy allows you to graph functions automatically, but is build on commonly used Python libraries.\n* Normally this would required that you input some values for the function\n* You would have to write custom code to do this\n\n\n\n\n```python\nplot(f(g(x)))\n```\n\n\n```python\nplot(g(f(x)))\n```\n\n\n\n---\n\n\n\n# Solving for variables and simplifying expressions\n\nSolving algebraic expressions using **solve()**\n* Simplfication is automatically set to true\n* **solve()** works with linear and non-linear equations\n\n\n\n\n```python\nz = Symbol('z')\nt = Symbol('t')\nequation = Equality(3*z-t,5)\n\n#solve for z\nprint('z=')\nprint(solve(equation,z))\n\n#solve for t\nprint('t=')\nprint(solve(equation,t))\n```\n\n z=\n [t/3 + 5/3]\n t=\n [3*z - 5]\n\n\nNon-linear function\n\n\n```python\nx = Symbol('x')\ny = Symbol('y')\nz = Symbol('z')\n\n#solve for x\nprint('x=')\nprint(solve(z*x**2-y**2, x))\n\nprint('y=')\nprint(solve(z*x**2-y**2, y))\n```\n\n x=\n [-y*sqrt(1/z), y*sqrt(1/z)]\n y=\n [-x*sqrt(z), x*sqrt(z)]\n\n\n---\n**Integrals:** probability density function example\n$p(x) = \\frac{\\sqrt{2} e^{- \\frac{0.5 \\left(- \\mu + x\\right)^{2}}{s^{2}}}}{2 \\sqrt{\\pi} s}$\n\n\n\n\n\n```python\nmu = Symbol('mu')\ns = Symbol('s')\nx = Symbol('x')\npr = Function('pr')\n\n#probability density function\nclass pr(Function):\n @classmethod\n def eval(cls, x, mu, s):\n return (1/(s*sqrt(2*pi)))*exp((-1/2)*((x-mu)/s)**2)\n\nprint(latex(pr(x, mu, s)))\n```\n\n \\frac{\\sqrt{2} e^{- \\frac{0.5 \\left(- \\mu + x\\right)^{2}}{s^{2}}}}{2 \\sqrt{\\pi} s}\n\n\nPlot the probability density function\n\n\n```python\nplot(pr(x,0,1)) #standard normal distribution has a mean of 0 and a sdev of 1\n```\n\np(a\u2264x\u2264b) = \u222bf(x)dx between b an a. 
What is the probability that x falls between 1 and 4?\n* **Solution per Excel: 0.158624**\n* N() will make the numerical solution display\n* integrate() can be used to evaluate definate and indefinate integrals\n\n\n```python\na = 1\nb = 4\nprint(N(integrate(pr(x, 0, 1), (x, a, b))))\n```\n\n 0.158623582689624\n\n\n**Statistics:** probability density function\n* The Normal() function can be used to define a normal distribution.\n* The P() function can be used to compute the probability.\n\n\n\n```python\nfrom sympy.stats import *\nZ=Normal('Z', 0, 1)\nprint(N(P(Z<4)-P(Z<1)))\n```\n\n 0.158623582689624\n\n\n# Resources\n* Latest official documentation: https://docs.sympy.org/latest/index.html\n* Unofficial Tutorial: https://www.tutorialspoint.com/sympy/\n\n\n\n", "meta": {"hexsha": "10a730161fcb6d0d59c7e5cc1d85c46284a7b959", "size": 58004, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Lecture1-SymPy.ipynb", "max_stars_repo_name": "ikicker/FinTech-Python-package-tutorials", "max_stars_repo_head_hexsha": "906b99e6cf14612a84dce392747993fdf47b608d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Lecture1-SymPy.ipynb", "max_issues_repo_name": "ikicker/FinTech-Python-package-tutorials", "max_issues_repo_head_hexsha": "906b99e6cf14612a84dce392747993fdf47b608d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Lecture1-SymPy.ipynb", "max_forks_repo_name": "ikicker/FinTech-Python-package-tutorials", "max_forks_repo_head_hexsha": "906b99e6cf14612a84dce392747993fdf47b608d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 58004.0, "max_line_length": 58004, "alphanum_fraction": 0.914402455, "converted": true, "num_tokens": 1108, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9399133447766225, "lm_q2_score": 0.8918110461567922, "lm_q1q2_score": 0.8382251033019694}} {"text": "# Binomial Distribution - Coin Toss\n\n> This document is written in *R*.\n>\n> ***GitHub***: https://github.com/czs108\n\n## Background\n\n> Suppose you conduct a series of experiments, each of which consists of tossing a fair coin **6** times. Let $X$ be the number of *heads*.\n\n## Question A\n\n> What are the probabilities of each value of $X$?\n\n\\begin{equation}\nP(X = r) =\\, ^{6}C_{r} \\cdot 0.5^{r} \\cdot 0.5^{6 - r}\n\\end{equation}\n\n\n```R\nheads <- c(0:6)\n\nprobs <- choose(n=6, k=heads) * (0.5 ^ heads) * (0.5 ^ (6 - heads))\nprobs\n```\n\n\n
    0.015625  0.09375  0.234375  0.3125  0.234375  0.09375  0.015625
    \n\n\n\nUse the `dbinom` function directly.\n\n\n```R\ndbinom(x=heads, prob=0.5, size=6)\n```\n\n\n
    0.015625  0.09375  0.234375  0.3125  0.234375  0.09375  0.015625
    \n\n\n\n\n```R\nbarplot(probs ~ c(0:6), xlab=\"Number of heads\", ylab=\"Probability\")\n```\n\n## Question B\n\n\\begin{equation}\n\\begin{split}\nP(1st = head \\mid X = 4)\n &= \\frac{P(1st = head \\cap X = 4)}{P(X = 4)} \\\\\n &= \\frac{^{5}C_{3}}{^{6}C_{4}} \\\\\n &= 0.6667\n\\end{split}\n\\end{equation}\n\n## Question C\n\n\\begin{equation}\n\\begin{split}\nP(X = 4 \\mid 1st = head)\n &= \\frac{P(1st = head \\cap X = 4)}{P(1st = head)} \\\\\n &= \\frac{^{5}C_{3}}{2^{5}} \\\\\n &= 0.3125\n\\end{split}\n\\end{equation}\n\n\n```R\ndbinom(x=3, prob=0.5, size=5)\n```\n\n\n0.3125\n\n\n## Question D\n\n> Suppose the event\n\n\\begin{equation}\nA = \\text{The 1st trial where X = 6 is 10th.}\n\\end{equation}\n\n> Find out\n\n\\begin{equation}\nP(A)\n\\end{equation}\n\nIts's a *Geometric Distribution*.\n\n\\begin{equation}\np = P(X = 6) = \\frac{1}{2^{6}}\n\\end{equation}\n\n\\begin{equation}\n\\begin{split}\nP(A) &= (1 - p)^{9} \\cdot p \\\\\n &= (1 - \\frac{1}{2^{6}})^{9} \\times \\frac{1}{2^{6}} \\\\\n &= 0.0136\n\\end{split}\n\\end{equation}\n\n## Question E\n\n\\begin{equation}\n\\begin{split}\nP(X > 4) &= P(X = 5) + P(X = 6) \\\\\n &= \\frac{^{6}C_{5} +\\, ^{6}C_{6}}{2^6} \\\\\n &= 0.1094\n\\end{split}\n\\end{equation}\n\nUse the `pbinom` function.\n\n\n```R\n1 - pbinom(q=4, prob=0.5, size=6)\n```\n\n\n0.109375\n\n\n## Question F\n\n> What is the *mean* of $X$?\n\nSuppose\n\n\\begin{equation}\nN = \\text{The number of trials needed to get the first head.}\n\\end{equation}\n\nThe the *expectation* is\n\n\\begin{equation}\nE(N) = \\frac{1}{0.5} = 2\n\\end{equation}\n\nThere are **6** trials.\n\n\\begin{equation}\nE(X) = \\frac{6}{E(N)} = 3\n\\end{equation}\n\n\n```R\nmean <- sum(heads * probs)\nmean\n```\n\n\n3\n\n", "meta": {"hexsha": "6d6fba31af5a836b8ef70a22efb1f0f2df11298f", "size": 20213, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "exercises/Binomial Distribution - Coin Toss.ipynb", "max_stars_repo_name": "czs108/Probability-Theory-Exercises", "max_stars_repo_head_hexsha": "60c6546db1e7f075b311d1e59b0afc3a13d93229", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "exercises/Binomial Distribution - Coin Toss.ipynb", "max_issues_repo_name": "czs108/Probability-Theory-Exercises", "max_issues_repo_head_hexsha": "60c6546db1e7f075b311d1e59b0afc3a13d93229", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "exercises/Binomial Distribution - Coin Toss.ipynb", "max_forks_repo_name": "czs108/Probability-Theory-Exercises", "max_forks_repo_head_hexsha": "60c6546db1e7f075b311d1e59b0afc3a13d93229", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-03-21T05:04:07.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-21T05:04:07.000Z", "avg_line_length": 50.2810945274, "max_line_length": 12162, "alphanum_fraction": 0.7444713798, "converted": true, "num_tokens": 1049, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9481545362802364, "lm_q2_score": 0.8840392878563335, "lm_q1q2_score": 0.8382058610309323}} {"text": "# pH-Absorbance Calibration with PyTorch \n\nIn this notebook, we will be revisiting the data I collected in my UCLA undergrad Bioengineering capstone project. 
This data involves an absorbance based pH sensor (using an Arduino, LED, and phenol red indicator solution) for noninvasive monitoring of cell culture. By normalizing the voltage reading to that of phosphate buffered saline as blank, we obtained the Absorbance: $A = -\\log \\frac{I}{I_{PBS}}$. \n\nThe theoretical equation relating pH to absorbance is then given by: \n\n\\begin{equation}\n\nA = f(pH) = \\frac{A_{max}}{1 + 10^{pK_{a} - pH}}\n\n\\end{equation}\n\nThis corresponds to a sigmoid curve from $0$ to $A_{max}$. We choose to add in an extra shape parameter $\\phi$ to account for deviations from the theory and use the natura exponential: \n\n\\begin{equation}\n\nA = f(pH) = \\frac{A_{max}}{1 + e^{(pK_{a} - pH)/\\phi}}\n\n\\end{equation}\n\n\nUnlike say a typical logistic regression sigmoid, this sigmoid has parameters that need to be found via nonlinear least square optimization methods. The loss to be minimized is the mean squared error: \n\n\\begin{equation}\n\nLoss(A_{max},pK_{a},\\phi) = \\frac{1}{n} \\sum^{n}_{i=1} (A_i - \\frac{A_{max}}{1 + e^{(pK_{a} - pH_{i})/\\phi}})^{2}\n\n\\end{equation}\n\n\n\nWe also have some prior information from theory. It can be shown with algebra that Equation (2) simplifies to Equation (1) when $\\phi = \\frac{1}{\\log(10)} \\approx 0.4343$. Additionally the theoretical pKa of phenol red is $pK_{a} = 7.6$. In a frequentist sense, this prior knowledge can be used to add regularization terms. For $A_{max}$ we do not necessarily have prior information, but we do not want the maximum absorbance to be extremely high, and thus can regularize it toward 0. An L1 penalty (in this case it will simplify to absolute values) will be used to regularize these parameters and will penalize the deviation from these prior values: \n\n\\begin{equation}\n\nPenalty(A_{max},pK_{a},\\phi) = \\lambda_{A_{max}} |A_{max}| + \\lambda_{pK_{a}} |pK_{a} - 7.6| + \\lambda_{\\phi}|\\phi - \\frac{1}{\\log(10)}|\n\n\\end{equation}\n\nThe minimization problem, with $\\theta = (A_{max},pK_{a},\\phi)$ then becomes: \n\n\\begin{equation}\n\n\\underset{\\theta}{\\arg\\min} (Loss(\\theta) + Penalty(\\theta))\n\n\\end{equation}\n\n\n\n\n## Nonlinear Least Squares and Nonlinear Mixed Model\n\nThis dataset consists of 4 Trials, and during the trial, the solution pH was adjusted by adding very small drops of concentrated HCl or NaOH to neglect volume changes. The absorbance was measured and calibrated to a standard pH sensor. However, the nature of the experiment leads to correlated data points within a given trial. **In this first section, we will investigate the dataset with standard built in methods**. \n\nWe will fit NLS models from a wrapper calling R's nls() and (for comparison) scipy least_squares(). These do not account for correlation. To account for correlation, a nonlinear mixed model (NLMM) must be used. This is done through a wrapper that calls R's nlmer() function from lme4 package. 
\n\nIt is assumed that the only random effect is for $A_{max}$ and is normally distributed: \n\n\\begin{equation}\n\nA_{max,Trial} \\sim N(A_{max},\\sigma_{A_{max}}^{2})\n\n\\end{equation}\n\nThe rpy2 package is used to communicate with R in order to use the wrappers found in the pHAbs_NLSNLMM.R file \n\nAll of these are unregularized (beyond the trial-specific regularization toward the mean induced by random effects in the NLMM from nlmer())\n\n\n```python\nimport numpy as np \nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom plotnine import ggplot, geom_point,geom_line, aes\nfrom scipy.stats import truncnorm\nfrom scipy.optimize import least_squares\n```\n\n\n```python\nimport rpy2.robjects as ro\nfrom rpy2.robjects.packages import importr\nfrom rpy2.robjects import pandas2ri\nfrom rpy2.robjects.conversion import localconverter\n```\n\n\n```python\n\nbase = importr('base')\nstats = importr('stats')\nlme4 = importr('lme4')\nro.r['source']('pHAbs_NLSNLMM.R')\n```\n\n\n\n\n\nListVector with 2 elements.\n\n\n\n \n \n \n \n\n \n \n \n \n\n\n
    value      [RTYPES.CLOSXP]\n    visible    [RTYPES.LGLSXP]
    \n\n\n\n\n\n```python\ndata = pd.read_csv(\"Full_pHAbsdata.csv\")\ndata.sample(frac=1) #randomize row order for later \n```\n\n\n\n\n
           pH      ALED  Trial\n    11   7.74  0.313864      1\n    19   4.08 -0.001617      2\n    27   6.68  0.055223      2\n    67   5.36  0.005238      4\n    39   7.96  0.290464      2\n    ..    ...       ...    ...\n    41   8.51  0.335765      2\n    64  11.56  0.461222      3\n    72   6.67  0.050556      4\n    89  10.66  0.413847      4\n    30   6.97  0.096303      2\n    \n    91 rows \u00d7 3 columns
    \n\n\n\n\n```python\npH_data = data.pH.to_numpy()\nALED_data = data.ALED.to_numpy()\n```\n\n\n```python\n\nwith localconverter(ro.default_converter + pandas2ri.converter):\n NLSresult = ro.r.Fit_NLS(data)\n NLMMresult = ro.r.Fit_NLMM(data)\n\ndata[\"Ahat_NLS\"] = np.array(stats.predict(NLSresult))\ndata[\"Ahat_NLMM\"] = np.array(stats.predict(NLMMresult))\n\n(ggplot(data,aes('pH','ALED',color ='factor(Trial)')) \n+ geom_point() + geom_line(aes('pH','Ahat_NLMM',color='factor(Trial)'))\n+ geom_line(aes('pH','Ahat_NLS'),inherit_aes=False))\n```\n\nThe data and the fitted values from R's nls() and nlmer() (colored) are seen above. The dark curve represents the overall average relationship based on nls() while the different colored curves are the Trial-specific fits as calculated by nlmer() with a random effect on $A_{max}$. The differences in $A_{max}$ can be caused by differing optics between the trials, which would affect how the light enters the cuvette. \n\n## Nonlinear Least Squares Results (R nls())\n\n\n```python\nprint(base.summary(NLSresult))\n```\n\n \n Formula: ALED ~ Amax/(1 + exp((pKa - pH)/phi))\n \n Parameters:\n Estimate Std. Error t value Pr(>|t|) \n Amax 0.423757 0.006912 61.30 <2e-16 ***\n pKa 7.470071 0.027519 271.45 <2e-16 ***\n phi 0.453836 0.023565 19.26 <2e-16 ***\n ---\n Signif. codes: 0 \u2018***\u2019 0.001 \u2018**\u2019 0.01 \u2018*\u2019 0.05 \u2018.\u2019 0.1 \u2018 \u2019 1\n \n Residual standard error: 0.02612 on 88 degrees of freedom\n \n Number of iterations to convergence: 5 \n Achieved convergence tolerance: 2.001e-06\n \n \n\n\nAccording to R nls(), we find that $\\hat{\\theta} = (\\hat{A_{max}},\\hat{pK_{a}},\\hat{\\phi}) = (0.42,7.47,0.45)$\n\nThe pKa is in agreement with the theory, although to assess this rigorously (and trust the SEs) we should use the mixed model approach. Before that, we will try scipy least_squares() next. \n\n\n```python\n\ndef pHAbsfun(theta,pH,Aobs):\n A = theta[0]/(1+np.exp((theta[1]-pH)/(theta[2])))\n res = A-Aobs\n return res\n\npHAbsdata_fun = lambda theta: pHAbsfun(theta,pH_data,ALED_data)\n\nls_result = least_squares(pHAbsdata_fun,[0.5,7.6,0.4])\n\n```\n\n\n```python\nls_result.x, ls_result.cost\n```\n\n\n\n\n (array([0.42375714, 7.47007174, 0.45383623]), 0.03002231818680512)\n\n\n\nThe results between R's nls() and scipy least_squares() are in agreement for the coefficient values. \n\n## Nonlinear Mixed Effect Model (R nlmer())\n\n\n```python\nprint(base.summary(NLMMresult))\n```\n\n Nonlinear mixed model fit by maximum likelihood ['nlmerMod']\n Formula: ALED ~ nfunall(pH, Amax, pKa, phi) ~ Amax | Trial\n Data: data\n \n AIC BIC logLik deviance df.resid \n -497.3 -484.7 253.7 -507.3 86 \n \n Scaled residuals: \n Min 1Q Median 3Q Max \n -2.9346 -0.6216 -0.0852 0.5278 4.2925 \n \n Random effects:\n Groups Name Variance Std.Dev.\n Trial Amax 0.0015279 0.03909 \n Residual 0.0001855 0.01362 \n Number of obs: 91, groups: Trial, 4\n \n Fixed effects:\n Estimate Std. 
Error t value\n Amax 0.42990 0.01990 21.60\n pKa 7.46974 0.01498 498.76\n phi 0.46458 0.01289 36.04\n \n Correlation of Fixed Effects:\n Amax pKa \n pKa 0.133 \n phi 0.092 0.497\n \n\n\nBased on the above, we can compute a z-score for $pK_{a}$ and $\\phi$ to compare them to 7.6 and 1/log(10) respectively: \n\n\\begin{equation}\n\n|z_{pKa}| = |\\frac{7.469-7.6}{0.015}| = 8.69 \\\\\n\n|z_{\\phi}| = |\\frac{0.4646 - 0.4343}{0.013}| = 2.33 \n\n\\end{equation}\n\nWith a bonferroni correction for 2 tests assuming overall familywise error rate of $\\alpha = 0.05$, the critical value for each test (per test $\\alpha = 0.025$) occurs at $z_{crit} = 2.24$. Thus we reject both null hypotheses, and there is a significant difference obtained in our experiment vs the theoretical curve. However, this difference may not be practically significant, and as long as the results from our device are consistent, that is all that matters for calibrating the sensor. \n\nBased on the above parameters for the NLMM, we can also simulate more values to obtain a larger dataset for the later parts involving PyTorch: \n\n## pH-Absorbance Simulation Functions\n\n\n```python\ndef generate_pHAbs(n,Amax=0.43,pKa=7.47,phi=0.46,sd_e=0.025):\n mean_pH,sd_pH = 7.6, 2.2\n min_pH, max_pH = 0, 14\n a,b = (min_pH - mean_pH)/sd_pH , (max_pH-mean_pH)/sd_pH\n pH = truncnorm.rvs(a,b,loc=mean_pH,scale=sd_pH,size=n)\n e = np.random.normal(loc=0,scale=sd_e,size=n)\n A = Amax / (1+(np.exp(pKa-pH))/phi) + e\n simdf = pd.DataFrame({'pH': pH,'ALED': A})\n return simdf\n\ndef generate_pHAbs_Trials(Trials,n,Amax=0.43,Asd=0.04,pKa=7.47,phi=0.46,sd_e=0.025):\n Amaxes = np.random.normal(Amax,Asd,Trials)\n simdfall = []\n for i in range(Trials):\n simdf = generate_pHAbs(n=n,Amax=Amaxes[i],pKa=pKa,phi=phi,sd_e=sd_e)\n simdf['Trial'] = i+1 \n simdfall.append(simdf)\n simdfall = pd.concat(simdfall)\n return simdfall \n\n```\n\n# PyTorch pH-Absorbance Analysis\n\n## pHAbsorbance Custom Layer\n\nBelow, we implement a custom layer that contains the 3 parameters and outputs the absorbance values. A random initialization is used as follows for the parameters (we set reasonable values as if we have not seen the above standard analysis): \n\n\\begin{equation}\n\nA_{max} \\sim N(1,0.2^{2}) \\\\ \n\npK_{a} \\sim N(7.6,0.5^{2}) \\\\ \n\n\\phi \\sim N(0.5,0.1^{2}) \\\\\n\n\\end{equation}\n\nNotice that nn.Parameter() needs to be used on the weights so that PyTorch optimizer later on knows these are the custom parameters of the layer. Additionally, in the pHAbsLayer custom layer we initialize regularizers to 0, and instead choose to configure them when the pHAbsModel containing the layer is instantiated. \n\n\n```python\nimport torch\nfrom torch import nn\nfrom torch.utils.data import Dataset, DataLoader\n\n```\n\n\n```python\nclass pHAbsLayer(nn.Module):\n \"\"\"Custom pHAbs Layer: Amax/(1+e^(pKa-pH)/phi)\"\"\"\n def __init__(self):\n super().__init__()\n weights = np.random.normal([1,7.6,0.5],[0.2,0.5,0.1]) #[Amax,pKa,phi]\n weights = torch.from_numpy(weights)\n self.weights = nn.Parameter(weights)\n self.regularizer = torch.zeros(3,dtype=torch.float64)\n\n def forward(self,x):\n y = self.weights[0]/(1+torch.exp((self.weights[1]-x)/self.weights[2]))\n return y \n```\n\n## pHAbsModel Model Class \n\nNow that the pHAbsLayer() custom layer is created, we can use it like any other layer within the actual model class. In this class, we will also leave the option to set hyperparameters. 
\n\n\n```python\n\nclass pHAbsModel(nn.Module):\n def __init__(self,lam_Amax=0,lam_pKa=0,lam_phi=0):\n super().__init__()\n self.f_pH = pHAbsLayer()\n self.f_pH.regularizer[0] = lam_Amax\n self.f_pH.regularizer[1] = lam_pKa\n self.f_pH.regularizer[2] = lam_phi \n\n def forward(self,x):\n return self.f_pH(x)\n```\n\n## pHAbs Dataset\n\nBelow, we create the Dataset class for the data. In this case it is relatively simple, the __getitem__ method should return the features (just pH) and label at a certain index and the __len__ method should return the total length of the dataset. \n\n\n```python\nclass pHAbsDataset(Dataset):\n def __init__(self,pH,Abs):\n self.pH=pH.reshape(-1,1)\n self.Abs = Abs.reshape(-1,1)\n \n def __len__(self):\n return len(self.pH)\n \n def __getitem__(self,idx):\n return self.pH[idx],self.Abs[idx]\n\n```\n\n## Loss Penalty\n\nAs mentioned earlier in this notebook, we will be using an L1 penalty on the parameters' $\\theta = (A_{max},pK_{a},\\phi)$ deviations from $(0, 7.6, 0.43)$ respectively \n\n\n```python\ndef penalty(model):\n weights = model.f_pH.weights\n regularizer = model.f_pH.regularizer\n prior = torch.Tensor([0,7.6,1/np.log(10)]) \n penalty = (weights-prior).abs().dot(regularizer)\n return penalty \n```\n\n## Train and Test Loop \n\nBelow we define the training and testing loop. On the original dataset, we will use the full data to compare results to the first part and then later on will simulate data to compare train/test curves and effect of regularizers, etc. \n\n\n```python\n\ndef train_loop(dataloader, model, loss_fn, optimizer):\n size = len(dataloader.dataset)\n for batch, (X, y) in enumerate(dataloader):\n # Compute prediction and loss\n pred = model(X)\n loss = loss_fn(pred, y)\n pen = penalty(model)\n\n pen_loss = loss + pen\n # Backpropagation\n optimizer.zero_grad()\n pen_loss.backward() \n #loss.backward()\n optimizer.step()\n\n if batch % 10 == 0:\n loss, current = loss.item(), batch * len(X)\n print(f\"loss: {loss:>7f} [{current:>5d}/{size:>5d}]\")\n return(loss)\n\n\ndef test_loop(dataloader, model, loss_fn):\n size = len(dataloader.dataset)\n test_loss, correct = 0, 0\n\n with torch.no_grad():\n for X, y in dataloader:\n pred = model(X)\n test_loss += loss_fn(pred, y).item()\n \n\n test_loss /= size\n print(f\"Avg loss: {test_loss:>8f} \\n\")\n return(test_loss)\n\n```\n\n## Train Model on full original data \n\nWe now train the model on the full original data, and, since this dataset is small, we use a batch size of 91 which is all of the data. \n\nAdditionally, the Adam optimizer with a learning rate of 0.01 is used below, the model is trained for 1000 epochs. No regularization is applied for this first time on the full data. 
\n\nFor reference, we can extract the mean square error from the R nls() fit, which appears to be 0.00066\n\n\n```python\nresidNLS = np.array(stats.residuals(NLSresult))\nnp.mean(np.square(residNLS))\n```\n\n\n\n\n 0.0006598311689431394\n\n\n\n\n```python\norigdataset = pHAbsDataset(pH_data,ALED_data)\n\norigdataloader = DataLoader(origdataset,batch_size=91,shuffle=True)\n```\n\n\n```python\n\norigmodel = pHAbsModel()\nlearning_rate = 0.01\n\nloss_fn = nn.MSELoss()\n\noptimizer = torch.optim.Adam(origmodel.parameters(), lr=learning_rate)\n\n```\n\n\n```python\n%%capture \n\nepochs = 1000\nloss_orig = np.zeros(epochs)\nAmax_orig = np.zeros(epochs)\npKa_orig = np.zeros(epochs)\nphi_orig = np.zeros(epochs)\n\nfor i in range(epochs):\n print(f\"Epoch {i+1}\\n-------------------------------\")\n loss_orig[i] = train_loop(origdataloader, origmodel, loss_fn, optimizer)\n Amax_orig[i] = origmodel.f_pH.weights[0]\n pKa_orig[i] = origmodel.f_pH.weights[1]\n phi_orig[i] = origmodel.f_pH.weights[2]\n\n```\n\n\n```python\nplt.plot(loss_orig,\"r-\")\nplt.title(\"Loss vs Epochs\")\nplt.ylabel(\"MSE Loss\")\nplt.xlabel(\"Epochs\")\nplt.show()\n```\n\n\n```python\nloss_orig[-1]\n```\n\n\n\n\n 0.0006598443647888356\n\n\n\nThe above final loss is the same as the loss obtained for R's nls() function on this dataset. The parameter weights are also almost exactly the same as obtained via nls(), and thus solving the NLS problem via PyTorch tools was a success. Below we can examine the parameter traces vs epochs as well: \n\n\n```python\norigmodel.f_pH.weights\n```\n\n\n\n\n Parameter containing:\n tensor([0.4240, 7.4712, 0.4544], dtype=torch.float64, requires_grad=True)\n\n\n\n\n```python\nplt.plot(Amax_orig,\"y-\")\nplt.title(\"Amax vs Epochs\")\nplt.ylabel(\"Amax\")\nplt.xlabel(\"Epochs\")\nplt.show()\n```\n\n\n```python\nplt.plot(pKa_orig,\"m-\")\nplt.title(\"pKa vs Epochs\")\nplt.ylabel(\"pKa\")\nplt.xlabel(\"Epochs\")\nplt.show()\n```\n\nThe pKa trace is interesting in that at first it started at a low value 7.35 and increased for some time until 7.85 before it started decreasing. \n\n\n```python\nplt.plot(phi_orig,\"r-\")\nplt.title(\"phi vs Epochs\")\nplt.ylabel(\"phi\")\nplt.xlabel(\"Epochs\")\nplt.show()\n```\n\nThe trace for the $\\phi$ parameter shows an peak as well, although shorter indicating that at first this parameter was increasing briefly before it settled on the final value. This whole time, the loss was still decreasing, however. \n\n## Experiment with regularization on original data \n\nBelow, we experiment with some regularization on the original data. The regularization parameters are $\\lambda = (0.0001,0.001,0.01)$ for $(A_{max},pK_{a},\\phi)$ respectively\n\n\n\n\n```python\n%%capture \n\norigmodelreg = pHAbsModel(lam_Amax=0.0001,lam_pKa=0.001,lam_phi=0.01)\nlearning_rate = 0.01\n\nloss_fn = nn.MSELoss()\n\noptimizer = torch.optim.Adam(origmodelreg.parameters(), lr=learning_rate)\n\nepochs = 1000\nloss_origreg = np.zeros(epochs)\n\nfor i in range(epochs):\n print(f\"Epoch {i+1}\\n-------------------------------\")\n loss_origreg[i] = train_loop(origdataloader, origmodelreg, loss_fn, optimizer)\n\n```\n\n\n```python\nprint(loss_origreg[-1])\norigmodelreg.f_pH.weights\n```\n\n 0.0006840412471735585\n\n\n\n\n\n Parameter containing:\n tensor([0.4267, 7.4964, 0.4337], dtype=torch.float64, requires_grad=True)\n\n\n\nAs seen above, the parameters are closer to the prior values that were mentioned in the beginning of this notebook. 
Thus the regularization has worked.\n\nWe now move on to simulated data where we can also investigate the train-val curves to investigate phenomenon such as early stopping. \n\n## Simulated Data \n\nBelow, we simulate 100 Trials with 100 points each for both a Training and Validation set. The true parameters in the training set are set to $A_{max,true} = 0.43,~~ pK_{a,true} = 7.47,~~ \\phi_{true} = 0.46$. \n\nTo examine how distribution shift may affect the training/val curves, the true parameters in the validation set are set to $A_{max,true} = 0.40,~~ pK_{a,true} = 7.52,~~ \\phi_{true} = 0.48$. \n\nThe noise in the absorbance value is $\\epsilon \\sim N(0, 0.025^{2})$\n\n\n```python\n\nnp.random.seed(100)\nTrainSim = generate_pHAbs_Trials(Trials=100,n=100)\nnp.random.seed(10)\nValSim = generate_pHAbs_Trials(Trials=100,n=100,Amax=0.40,pKa=7.52,phi=0.48)\n\n```\n\n\n```python\npH_Train, Abs_Train = TrainSim.pH.to_numpy(), TrainSim.ALED.to_numpy()\npH_Val,Abs_Val = ValSim.pH.to_numpy(), ValSim.ALED.to_numpy()\n```\n\n\n```python\nTrainDS = pHAbsDataset(pH_Train,Abs_Train)\nValDS = pHAbsDataset(pH_Val,Abs_Val)\n\nTrainLoader = DataLoader(TrainDS,batch_size=100,shuffle=True)\nValLoader = DataLoader(ValDS,batch_size=100,shuffle=True)\n```\n\n\n```python\n%%capture \n\nsim_model = pHAbsModel()\nlearning_rate = 0.01\n\nloss_fn_train = nn.MSELoss()\nloss_fn_val = nn.MSELoss(reduction=\"sum\") #because test loop divides in the end\n\noptimizer = torch.optim.Adam(sim_model.parameters(), lr=learning_rate)\n\nepochs = 1000\nloss_simtrain = np.zeros(epochs)\nloss_simval = np.zeros(epochs)\n\nfor i in range(epochs):\n print(f\"Epoch {i+1}\\n-------------------------------\")\n loss_simtrain[i] = train_loop(TrainLoader, sim_model, loss_fn_train, optimizer)\n loss_simval[i] = test_loop(ValLoader,sim_model,loss_fn_val)\n```\n\n\n```python\nplt.plot(loss_simtrain,\"b-\")\nplt.plot(loss_simval,\"r-\")\nplt.legend([\"Train\",\"Val\"])\nplt.title(\"Loss vs Epochs\")\nplt.ylabel(\"MSE Loss\")\nplt.xlabel(\"Epochs\")\nplt.show()\n```\n\n\n```python\nsim_model.f_pH.weights\n```\n\n\n\n\n Parameter containing:\n tensor([0.4332, 8.2639, 0.9997], dtype=torch.float64, requires_grad=True)\n\n\n\n\n```python\nfinal_losstrain = loss_simtrain[-1]\nfinal_lossval = loss_simval[-1]\n\nprint(f\"The final training Loss is: {final_losstrain:.5f} and final validation Loss is: {final_lossval:.5f}\")\n```\n\n The final training Loss is: 0.00108 and final validation Loss is: 0.00124\n\n\nThis time, the $pK_{a} = 8.27,~~\\phi = 1.01$ which are far from the true parameter values. We can check the answer with the wrapper from R's nls(), which confirms that this is just a result of the data obtained. The good news is that the validation loss and training loss are still about the same. The slight distribution shift did not appear to affect the results too much in this case. \n\n\n```python\nwith localconverter(ro.default_converter + pandas2ri.converter):\n NLSTrainresult = ro.r.Fit_NLS(TrainSim)\n\nprint(base.summary(NLSTrainresult))\n\n```\n\n \n Formula: ALED ~ Amax/(1 + exp((pKa - pH)/phi))\n \n Parameters:\n Estimate Std. Error t value Pr(>|t|) \n Amax 0.427711 0.001322 323.6 <2e-16 ***\n pKa 8.255887 0.009396 878.6 <2e-16 ***\n phi 1.002325 0.006636 151.1 <2e-16 ***\n ---\n Signif. 
codes: 0 \u2018***\u2019 0.001 \u2018**\u2019 0.01 \u2018*\u2019 0.05 \u2018.\u2019 0.1 \u2018 \u2019 1\n \n Residual standard error: 0.0322 on 9997 degrees of freedom\n \n Number of iterations to convergence: 6 \n Achieved convergence tolerance: 8.089e-08\n \n \n\n\n## With Regularization \n\nNow we will try the same thing as above with regularization and determine whether this ends up having a better test error. The same regularization parameters as earlier will be used. Ideally, cross validation or other hyperparameter selection methods would be used. \n\n\n```python\n%%capture \n\nsim_modelreg = pHAbsModel(lam_Amax=0.0001,lam_pKa=0.001,lam_phi=0.01)\nlearning_rate = 0.01\n\nloss_fn_train = nn.MSELoss()\nloss_fn_val = nn.MSELoss(reduction=\"sum\") #because test loop divides in the end\n\noptimizer = torch.optim.Adam(sim_modelreg.parameters(), lr=learning_rate)\n\nepochs = 1000\nloss_simtrain = np.zeros(epochs)\nloss_simval = np.zeros(epochs)\n\nfor i in range(epochs):\n print(f\"Epoch {i+1}\\n-------------------------------\")\n loss_simtrain[i] = train_loop(TrainLoader, sim_modelreg, loss_fn_train, optimizer)\n loss_simval[i] = test_loop(ValLoader,sim_modelreg,loss_fn_val)\n```\n\n\n```python\nplt.plot(loss_simtrain,\"b-\")\nplt.plot(loss_simval,\"r-\")\nplt.legend([\"Train\",\"Val\"])\nplt.title(\"Loss vs Epochs\")\nplt.ylabel(\"MSE Loss\")\nplt.xlabel(\"Epochs\")\nplt.show()\n```\n\n\n```python\nfinal_losstrain = loss_simtrain[-1]\nfinal_lossval = loss_simval[-1]\n\nprint(f\"The final training Loss is: {final_losstrain:.5f} and final validation Loss is: {final_lossval:.5f}\")\n```\n\n The final training Loss is: 0.00233 and final validation Loss is: 0.00237\n\n\n\n```python\n\n```\n\nIn this case, the regularization resulted in both worse training and test error, this indicates we are over-regularizing the parameters. In the next run, the regularizers will be decreased \n\n\n```python\n%%capture\n\nsim_modelreg = pHAbsModel(lam_Amax=1e-5,lam_pKa=1e-5,lam_phi=1e-3)\nlearning_rate = 0.01\n\nloss_fn_train = nn.MSELoss()\nloss_fn_val = nn.MSELoss(reduction=\"sum\") #because test loop divides in the end\n\noptimizer = torch.optim.Adam(sim_modelreg.parameters(), lr=learning_rate)\n\nepochs = 1000\nloss_simtrain = np.zeros(epochs)\nloss_simval = np.zeros(epochs)\n\nfor i in range(epochs):\n print(f\"Epoch {i+1}\\n-------------------------------\")\n loss_simtrain[i] = train_loop(TrainLoader, sim_modelreg, loss_fn_train, optimizer)\n loss_simval[i] = test_loop(ValLoader,sim_modelreg,loss_fn_val)\n```\n\n\n```python\nfinal_losstrain = loss_simtrain[-1]\nfinal_lossval = loss_simval[-1]\n\nprint(f\"The final training Loss is: {final_losstrain:.5f} and final validation Loss is: {final_lossval:.5f}\")\n```\n\n The final training Loss is: 0.00112 and final validation Loss is: 0.00124\n\n\nWith a new less conservative choice of hyperparameters after experimenting, the training loss is higher as expected, and the validation loss is ever so slightly lower. 
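\n\nAs a rough sketch of how this hyperparameter search could be automated instead of hand-tuned, a simple validation-set grid search can reuse the model class and the train/test loops defined above. The candidate penalty values in lam_grid are arbitrary illustrative choices (not tuned recommendations), and the per-batch printing inside the loops could be suppressed with %%capture as before:\n\n\n```python\nlam_grid = [(0, 0, 0), (1e-5, 1e-5, 1e-3), (1e-4, 1e-3, 1e-2)]  # arbitrary candidates\n\nbest_lams, best_val = None, np.inf\nfor lams in lam_grid:\n    # fresh model and optimizer for each candidate penalty setting\n    model = pHAbsModel(lam_Amax=lams[0], lam_pKa=lams[1], lam_phi=lams[2])\n    optimizer = torch.optim.Adam(model.parameters(), lr=0.01)\n    for epoch in range(1000):\n        train_loop(TrainLoader, model, nn.MSELoss(), optimizer)\n    # score each candidate on the validation set\n    val_loss = test_loop(ValLoader, model, nn.MSELoss(reduction='sum'))\n    if val_loss < best_val:\n        best_lams, best_val = lams, val_loss\n\nprint(best_lams, best_val)\n```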
\n", "meta": {"hexsha": "0fe12e2c7a23f68522f39df6b492d225f1690328", "size": 480005, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Torch_NLS_pHAbs.ipynb", "max_stars_repo_name": "rokapre/Nonlinear_Regression", "max_stars_repo_head_hexsha": "d705f6a010fc0bf000531c967ffcf8ed79a5f92e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Torch_NLS_pHAbs.ipynb", "max_issues_repo_name": "rokapre/Nonlinear_Regression", "max_issues_repo_head_hexsha": "d705f6a010fc0bf000531c967ffcf8ed79a5f92e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Torch_NLS_pHAbs.ipynb", "max_forks_repo_name": "rokapre/Nonlinear_Regression", "max_forks_repo_head_hexsha": "d705f6a010fc0bf000531c967ffcf8ed79a5f92e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 396.0437293729, "max_line_length": 101222, "alphanum_fraction": 0.7527234091, "converted": true, "num_tokens": 7465, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9481545274901876, "lm_q2_score": 0.8840392695254318, "lm_q1q2_score": 0.8382058358796564}} {"text": "# Modeling and Simulation in Python\n\nCase study\n\nCopyright 2017 Allen Downey\n\nLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)\n\n\n\n```python\n# Configure Jupyter so figures appear in the notebook\n%matplotlib inline\n\n# Configure Jupyter to display the assigned value after an assignment\n%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'\n\n# import functions from the modsim.py module\nfrom modsim import *\n```\n\n## Yo-yo\n\nSuppose you are holding a yo-yo with a length of string wound around its axle, and you drop it while holding the end of the string stationary. As gravity accelerates the yo-yo downward, tension in the string exerts a force upward. Since this force acts on a point offset from the center of mass, it exerts a torque that causes the yo-yo to spin.\n\n\n\nThis figure shows the forces on the yo-yo and the resulting torque. The outer shaded area shows the body of the yo-yo. The inner shaded area shows the rolled up string, the radius of which changes as the yo-yo unrolls.\n\nIn this model, we can't figure out the linear and angular acceleration independently; we have to solve a system of equations:\n\n$\\sum F = m a $\n\n$\\sum \\tau = I \\alpha$\n\nwhere the summations indicate that we are adding up forces and torques.\n\nAs in the previous examples, linear and angular velocity are related because of the way the string unrolls:\n\n$\\frac{dy}{dt} = -r \\frac{d \\theta}{dt} $\n\nIn this example, the linear and angular accelerations have opposite sign. 
As the yo-yo rotates counter-clockwise, $\\theta$ increases and $y$, which is the length of the rolled part of the string, decreases.\n\nTaking the derivative of both sides yields a similar relationship between linear and angular acceleration:\n\n$\\frac{d^2 y}{dt^2} = -r \\frac{d^2 \\theta}{dt^2} $\n\nWhich we can write more concisely:\n\n$ a = -r \\alpha $\n\nThis relationship is not a general law of nature; it is specific to scenarios like this where there is rolling without stretching or slipping.\n\nBecause of the way we've set up the problem, $y$ actually has two meanings: it represents the length of the rolled string and the height of the yo-yo, which decreases as the yo-yo falls. Similarly, $a$ represents acceleration in the length of the rolled string and the height of the yo-yo.\n\nWe can compute the acceleration of the yo-yo by adding up the linear forces:\n\n$\\sum F = T - mg = ma $\n\nWhere $T$ is positive because the tension force points up, and $mg$ is negative because gravity points down.\n\nBecause gravity acts on the center of mass, it creates no torque, so the only torque is due to tension:\n\n$\\sum \\tau = T r = I \\alpha $\n\nPositive (upward) tension yields positive (counter-clockwise) angular acceleration.\n\nNow we have three equations in three unknowns, $T$, $a$, and $\\alpha$, with $I$, $m$, $g$, and $r$ as known parameters. It is simple enough to solve these equations by hand, but we can also get SymPy to do it for us.\n\n\n\n\n```python\nfrom sympy import init_printing, symbols, Eq, solve\n\ninit_printing()\n```\n\n\n```python\nT, a, alpha, I, m, g, r = symbols('T a alpha I m g r')\n```\n\n\n```python\neq1 = Eq(a, -r * alpha)\n```\n\n\n```python\neq2 = Eq(T - m * g, m * a)\n```\n\n\n```python\neq3 = Eq(T * r, I * alpha)\n```\n\n\n```python\nsoln = solve([eq1, eq2, eq3], [T, a, alpha])\n```\n\n\n```python\nsoln[T]\n```\n\n\n```python\nsoln[a]\n```\n\n\n```python\nsoln[alpha]\n```\n\n\nThe results are\n\n$T = m g I / I^* $\n\n$a = -m g r^2 / I^* $\n\n$\\alpha = m g r / I^* $\n\nwhere $I^*$ is the augmented moment of inertia, $I + m r^2$.\n\nYou can also see [the derivation of these equations in this video](https://www.youtube.com/watch?v=chC7xVDKl4Q).\n\nTo simulate the system, we don't really need $T$; we can plug $a$ and $\\alpha$ directly into the slope function.\n\n\n```python\nradian = UNITS.radian\nm = UNITS.meter\ns = UNITS.second\nkg = UNITS.kilogram\nN = UNITS.newton\n```\n\n\n\n\nnewton\n\n\n\n**Exercise:** Simulate the descent of a yo-yo. How long does it take to reach the end of the string?\n\nI provide a `Params` object with the system parameters:\n\n* `Rmin` is the radius of the axle. `Rmax` is the radius of the axle plus rolled string.\n\n* `Rout` is the radius of the yo-yo body. `mass` is the total mass of the yo-yo, ignoring the string. \n\n* `L` is the length of the string.\n\n* `g` is the acceleration of gravity.\n\n\n```python\nparams = Params(Rmin = 8e-3 * m,\n Rmax = 16e-3 * m,\n Rout = 35e-3 * m,\n mass = 50e-3 * kg,\n L = 1 * m,\n g = 9.8 * m / s**2,\n t_end = 1 * s)\n```\n\n\n\n\n
                 values
    Rmin         0.008 meter
    Rmax         0.016 meter
    Rout         0.035 meter
    mass         0.05 kilogram
    L            1 meter
    g            9.8 meter / second ** 2
    t_end        1 second
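
Before assembling the simulation, it is worth plugging numbers into the closed-form result for the linear acceleration. The quick check below is not part of the original notebook; it evaluates $a = -m g r^2 / I^*$ at $r = R_{max}$ (the instant of release, with all of the string rolled up), using the same solid-cylinder estimate $I = m R_{out}^2 / 2$ that `make_system` introduces below.

```python
# Evaluate a = -m g r**2 / (I + m r**2) at r = Rmax, using the
# solid-cylinder estimate I = m Rout**2 / 2 (same as make_system below).
I_est = params.mass * params.Rout**2 / 2
r0 = params.Rmax
a0 = -params.mass * params.g * r0**2 / (I_est + params.mass * r0**2)
a0
```

This comes out to about $-2.9 \; m/s^2$, much smaller in magnitude than $g$: at release most of the yo-yo's weight is supported by the tension in the string. It matches the last value returned by `slope_func` for the initial state further down.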
    \n\n\n\nHere's a `make_system` function that computes `I` and `k` based on the system parameters.\n\nI estimated `I` by modeling the yo-yo as a solid cylinder with uniform density ([see here](https://en.wikipedia.org/wiki/List_of_moments_of_inertia)).\n\nIn reality, the distribution of weight in a yo-yo is often designed to achieve desired effects. But we'll keep it simple.\n\n\n```python\ndef make_system(params):\n \"\"\"Make a system object.\n \n params: Params with Rmin, Rmax, Rout, \n mass, L, g, t_end\n \n returns: System with init, k, Rmin, Rmax, mass,\n I, g, ts\n \"\"\"\n L, mass = params.L, params.mass\n Rout, Rmax, Rmin = params.Rout, params.Rmax, params.Rmin \n \n init = State(theta = 0 * radian,\n omega = 0 * radian/s,\n y = L,\n v = 0 * m / s)\n \n I = mass * Rout**2 / 2\n k = (Rmax**2 - Rmin**2) / 2 / L / radian \n \n return System(params, init=init, I=I, k=k)\n```\n\nTesting `make_system`\n\n\n```python\nsystem = make_system(params)\n```\n\n\n\n\n
                 values
    Rmin         0.008 meter
    Rmax         0.016 meter
    Rout         0.035 meter
    mass         0.05 kilogram
    L            1 meter
    g            9.8 meter / second ** 2
    t_end        1 second
    init         theta 0 radian\nomega 0.0 radi...
    I            3.0625000000000006e-05 kilogram * meter ** 2
    k            9.6e-05 meter / radian
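
The constant `k` computed by `make_system` links the length of rolled string to the rolling radius through $r = \sqrt{2 k y + R_{min}^2}$, which the slope function uses below. As a quick check (not in the original notebook), `r` should equal `Rmax` when the string is fully wound ($y = L$) and `Rmin` when it has fully unwound ($y = 0$):

```python
# k is defined so that 2*k*L + Rmin**2 == Rmax**2, so r = sqrt(2*k*y + Rmin**2)
# sweeps from Rmax (y = L, fully wound) down to Rmin (y = 0, fully unwound).
# The factor of radian cancels the 1/radian carried by k.
r_full = sqrt(2 * system.k * system.L * radian + system.Rmin**2)
r_empty = sqrt(0 * m**2 + system.Rmin**2)
r_full, r_empty
```

These should evaluate to 0.016 meter and 0.008 meter, that is, `Rmax` and `Rmin`.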
    \n\n\n\n\n```python\nsystem.init\n```\n\n\n\n\n
                 values
    theta        0 radian
    omega        0.0 radian / second
    y            1 meter
    v            0.0 meter / second
    \n\n\n\nWrite a slope function for this system, using these results from the book:\n\n$ r = \\sqrt{2 k y + R_{min}^2} $ \n\n$ T = m g I / I^* $\n\n$ a = -m g r^2 / I^* $\n\n$ \\alpha = m g r / I^* $\n\nwhere $I^*$ is the augmented moment of inertia, $I + m r^2$.\n\n\n\n```python\n# Solution\n\ndef slope_func(state, t, system):\n \"\"\"Computes the derivatives of the state variables.\n \n state: State object with theta, omega, y, v\n t: time\n system: System object with Rmin, k, I, mass\n \n returns: sequence of derivatives\n \"\"\"\n theta, omega, y, v = state\n g, k, Rmin = system.g, system.k, system.Rmin\n I, mass = system.I, system.mass\n \n r = sqrt(2*k*y + Rmin**2)\n alpha = mass * g * r / (I + mass * r**2)\n a = -r * alpha\n \n return omega, alpha, v, a \n```\n\nTest your slope function with the initial paramss.\n\n\n```python\n# Solution\n\nslope_func(system.init, 0*s, system)\n```\n\n\n\n\n (0.0 ,\n 180.54116292458264 ,\n 0.0 ,\n -2.888658606793322 )\n\n\n\nWrite an event function that will stop the simulation when `y` is 0.\n\n\n```python\n# Solution\n\ndef event_func(state, t, system):\n \"\"\"Stops when y is 0.\n \n state: State object with theta, omega, y, v\n t: time\n system: System object with Rmin, k, I, mass\n \n returns: y\n \"\"\"\n theta, omega, y, v = state\n return y\n```\n\nTest your event function:\n\n\n```python\n# Solution\n\nevent_func(system.init, 0*s, system)\n```\n\n\n\n\n1 meter\n\n\n\nThen run the simulation.\n\n\n```python\n# Solution\n\nresults, details = run_ode_solver(system, slope_func, events=event_func, max_step=0.05)\ndetails\n```\n\n\n\n\n
                 values
    sol          None
    t_events     [[0.879217870162702]]
    nfev         134
    njev         0
    nlu          0
    status       1
    message      A termination event occurred.
    success      True
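
The `status` and `message` fields confirm that the solver stopped because the event fired, not because it reached `t_end`. The event time answers the original question of how long the yo-yo takes to reach the end of the string; the snippet below (assuming `details` exposes the solver's `t_events` as displayed above) pulls it out directly:

```python
# Time at which the event function crossed zero, i.e. how long the
# yo-yo takes to unroll the full length of string (in seconds).
t_bottom = details.t_events[0][0]
t_bottom
```

About 0.88 seconds.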
    \n\n\n\nCheck the final state. If things have gone according to plan, the final value of `y` should be close to 0.\n\n\n```python\n# Solution\n\nresults.tail()\n```\n\n\n\n\n
                  theta      omega            y         v
    0.706147    44.0182     121.32     0.327291  -1.76573
    0.756147     50.268    128.609     0.236982  -1.84499
    0.806147    56.8724    135.496     0.142962  -1.91405
    0.856147    63.8093    141.885    0.0457628  -1.97198
    0.879218    67.1147    144.631  9.02056e-17  -1.99469
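
One more quick look before plotting (not part of the original notebook): since the yo-yo starts from rest, the final velocity divided by the elapsed time gives its average acceleration, which should be well below $g$ in magnitude because the string tension carries part of the weight while the yo-yo spins up. The question at the end of the notebook revisits this with `gradient`.

```python
# Average acceleration over the descent (final velocity / elapsed time).
# The values in `results` are treated as plain numbers here, in m/s and seconds.
v_final = results.v.iloc[-1]
t_final = results.index[-1]
v_final / t_final
```

This is roughly $-2.3 \; m/s^2$, far smaller in magnitude than $9.8 \; m/s^2$.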
    \n\n\n\nPlot the results.\n\n`theta` should increase and accelerate.\n\n\n```python\ndef plot_theta(results):\n plot(results.theta, color='C0', label='theta')\n decorate(xlabel='Time (s)',\n ylabel='Angle (rad)')\nplot_theta(results)\n```\n\n`y` should decrease and accelerate down.\n\n\n```python\ndef plot_y(results):\n plot(results.y, color='C1', label='y')\n\n decorate(xlabel='Time (s)',\n ylabel='Length (m)')\n \nplot_y(results)\n```\n\nPlot velocity as a function of time; is the yo-yo accelerating?\n\n\n```python\n# Solution\n\nv = results.v # m / s\nplot(v)\ndecorate(xlabel='Time (s)',\n ylabel='Velocity (m/s)')\n```\n\nUse `gradient` to estimate the derivative of `v`. How does the acceleration of the yo-yo compare to `g`?\n\n\n```python\n# Solution\n\na = gradient(v)\nplot(a)\ndecorate(xlabel='Time (s)',\n ylabel='Acceleration (m/$s^2$)')\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "2c0f1a3033d6fb6b19f8b8d47b3dbd58fad8aa8a", "size": 110832, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "soln/yoyo_soln.ipynb", "max_stars_repo_name": "kanhaiyap/ModSimPy", "max_stars_repo_head_hexsha": "af16c079ec398ff9b3822d3dcda75873ce900ced", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2019-04-27T22:43:12.000Z", "max_stars_repo_stars_event_max_datetime": "2019-11-11T15:12:23.000Z", "max_issues_repo_path": "soln/yoyo_soln.ipynb", "max_issues_repo_name": "ffriass/ModSimPy", "max_issues_repo_head_hexsha": "c36a476a20042acb33773e47d12aea5b0c413e60", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 33, "max_issues_repo_issues_event_min_datetime": "2019-10-09T18:50:22.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-21T01:39:48.000Z", "max_forks_repo_path": "soln/yoyo_soln.ipynb", "max_forks_repo_name": "ffriass/ModSimPy", "max_forks_repo_head_hexsha": "c36a476a20042acb33773e47d12aea5b0c413e60", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 94.8092386655, "max_line_length": 17312, "alphanum_fraction": 0.8182564602, "converted": true, "num_tokens": 4085, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9136765210631689, "lm_q2_score": 0.9173026488471137, "lm_q1q2_score": 0.8381178929606604}} {"text": "# Regression\n\nRegression is the process of approximating noisy data with functions. The same numerical methods can also be used to approximate a complicated function with a simpler function. 
We'll begin by looking at some limited cases in terms of linear algebra.\n\n## Polynomial regression\n\nLong ago (in the Linear Algebra notebook), we solved an over-determined linear system to compute a polynomial approximation of a function.\nSometimes we make approximations because we are interested in the polynomial coefficients.\nThat is usually only for very low order (like linear or quadratic fits).\nInferring higher coefficients is ill-conditioned (as we saw with Vandermonde matrices) and probably not meaningful.\nWe now know that Chebyshev bases are good for representing high-degree polynomials, but if the points are arbitrarily spaced, how many do we need for the Chebyshev approximation to be well-conditioned?\n\n\n```python\n%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nplt.style.use('ggplot')\n\ndef cosspace(a, b, n=50):\n return (a + b)/2 + (b - a)/2 * (\n np.cos(np.linspace(-np.pi, 0, n)))\n\ndef vander_chebyshev(x, n=None):\n if n is None:\n n = len(x)\n T = np.ones((len(x), n))\n if n > 1:\n T[:,1] = x\n for k in range(1,n-1):\n T[:,k+1] = 2 * x * T[:,k] - T[:,k-1]\n return T\n\ndef runge1(x):\n return 1 / (1 + 10*x**2)\n```\n\n\n```python\ndef chebyshev_regress_eval(x, xx, n):\n V = vander_chebyshev(x, n)\n Q, R = np.linalg.qr(V)\n return vander_chebyshev(xx, n) @ np.linalg.solve(R, Q.T)\n\nxx = np.linspace(-1, 1, 100)\nx = np.linspace(-1, 1, 80)\nplt.plot(x, runge1(x), '.k')\nplt.plot(xx, chebyshev_regress_eval(x, xx, 40) @ runge1(x), label='p(x)')\nplt.plot(xx, runge1(xx), '-.', label='runge1(x)')\nplt.legend(loc='upper left');\n```\n\n* What is the degree $k$ of the polynomial that is used?\n* What distribution of points is used for the data?\n* Would this have artifacts if we used a degree $k$ polynomial to interpolate noise-free samples at $k+1$ points using the same distribution?\n\n\n```python\nns = np.geomspace(5, 1000, dtype=int)\nconds = [np.linalg.cond(vander_chebyshev(np.linspace(-1, 1, n),\n int(n**(.5))))\n for n in ns]\nplt.semilogy(ns, conds)\nplt.xlabel('$n$')\nplt.ylabel('cond');\n```\n\n* If we have $n$ data points, can we use a polynomial of degree $k = \\lfloor n/5 \\rfloor$?\n* What about $k = n^{3/4}$?\n* What expression $k = ?(n)$ appears to be sufficient?\n\n## Noisy data\n\nRegression really comes into its own when the data is noisy.\n\n\n```python\ndef runge1_noisy(x, sigma):\n return runge1(x) + np.random.randn(*x.shape)*sigma\n\nx = np.linspace(-1, 1, 500)\ny = runge1_noisy(x, 0.2)\nplt.plot(x, y, '.')\nplt.plot(x, runge1(x), 'k', label='runge1(x)')\nplt.plot(x, chebyshev_regress_eval(x, x, 7) @ y, label='regress(x)')\nplt.legend(loc='upper left');\n```\n\n### Probability distributions\n\nIn order to interpret real data, we need a model for the noise. We have chosen the most common and computationally convenient choice when creating the synthetic data above.\nThe function `randn` draws samples from the \"standard normal\" or Gaussian distribution.\n\n$$ p(t) = \\frac{1}{\\sqrt{2\\pi}} e^{-t^2/2} . 
$$\n\n\n```python\ndef stdnormal(t):\n return np.exp(-t**2/2) / np.sqrt(2*np.pi)\n\nn = 1000\nw = np.random.randn(n)\nplt.hist(w, bins=40, range=(-3,3), density=True)\nt = np.linspace(-3, 3)\nplt.plot(t, stdnormal(t))\nplt.xlabel('$t$')\nplt.ylabel('$p(t)$');\nw.mean()\n```\n\n### Regression on noisy data\n\nWe can just go run our regression algorithm on the noisy data.\n\n\n```python\nx = np.linspace(-1, 1, 500)\ny = runge1_noisy(x, sigma=0.1)\n```\n\n\n```python\nplt.plot(x, y, '.')\nplt.plot(xx, chebyshev_regress_eval(x, xx, 9) @ y, label='p(x)')\nplt.plot(xx, runge1(xx), label='runge1(x)')\nplt.legend(loc='upper left');\n```\n\n## What problem are we solving?\n\n### Why do we call it a linear model?\n\nWe are currently working with algorithms that express the regression as a linear function of the model parameters. That is, we search for coefficients $c = [c_0, c_1, \\dotsc]^T$ such that\n\n$$ V(x) c \\approx y $$\n\nwhere the left hand side is linear in $c$. In different notation, we are searching for a predictive model\n\n$$ f(x_i, c) \\approx y_i \\text{ for all $(x_i, y_i)$} $$\n\nthat is linear in $c$.\n\n### Assumptions\n\n1. The independent variables $x$ are error-free\n1. The prediction (or \"response\") $f(x,c)$ is linear in $c$\n1. The noise in the measurements $y$ is independent (uncorrelated)\n1. The noise in the measurements $y$ has constant variance\n\nThere are reasons why all of these assumptions may be undesirable in practice, thus leading to more complicated methods.\n\n### Loss functions\n\nThe error in a single prediction $f(x_i,c)$ of an observation $(x_i, y_i)$ is often measured as\n$$ \\frac 1 2 \\big( f(x_i, c) - y_i \\big)^2, $$\nwhich turns out to have a statistical interpretation when the noise is normally distributed.\nIt is natural to define the error over the entire data set as\n\\begin{align} L(c; x, y) &= \\sum_i \\frac 1 2 \\big( f(x_i, c) - y_i \\big)^2 \\\\\n&= \\frac 1 2 \\lVert f(x, c) - y \\rVert^2\n\\end{align}\nwhere I've used the notation $f(x,c)$ to mean the vector resulting from gathering all of the outputs $f(x_i, c)$.\nThe function $L$ is called the \"loss function\" and is the key to relaxing the above assumptions.\n\n### Partial derivatives\n\nLet's step back from optimization and consider how to differentiate a function of several variables. Let $f(\\boldsymbol x)$ be a function of a vector $\\boldsymbol x$. For example,\n\n$$ f(\\boldsymbol x) = x_0^2 + \\sin(x_1) e^{3x_2} . $$\n\nWe can evaluate the **partial derivative** by differentiating with respect to each component $x_i$ separately (holding the others constant), and collect the result in a vector,\n\n\\begin{align}\n\\frac{\\partial f}{\\partial \\boldsymbol x} &= \\begin{bmatrix} \\frac{\\partial f}{\\partial x_0} & \\frac{\\partial f}{\\partial x_1} & \\frac{\\partial f}{\\partial x_2} \\end{bmatrix} \\\\\n&= \\begin{bmatrix} 2 x_0 & \\cos(x_1) e^{3 x_2} & 3 \\sin(x_1) e^{3 x_2} \\end{bmatrix}.\n\\end{align}\n\nNow let's consider a vector-valued function $\\boldsymbol f(\\boldsymbol x)$, e.g.,\n\n$$ \\boldsymbol f(\\boldsymbol x) = \\begin{bmatrix} x_0^2 + \\sin(x_1) e^{3x_2} \\\\ x_0 x_1^2 / x_3 \\end{bmatrix} . 
$$\n\nand write the derivative as a matrix,\n\n\\begin{align}\n\\frac{\\partial \\boldsymbol f}{\\partial \\boldsymbol x} &=\n\\begin{bmatrix} \\frac{\\partial f_0}{\\partial x_0} & \\frac{\\partial f_0}{\\partial x_1} & \\frac{\\partial f_0}{\\partial x_2} \\\\\n\\frac{\\partial f_1}{\\partial x_0} & \\frac{\\partial f_1}{\\partial x_1} & \\frac{\\partial f_1}{\\partial x_2} \\\\\n\\end{bmatrix} \\\\\n&= \\begin{bmatrix} 2 x_0 & \\cos(x_1) e^{3 x_2} & 3 \\sin(x_1) e^{3 x_2} \\\\\nx_1^2 / x_3 & 2 x_0 x_1 / x_3 & -x_0 x_1^2 / x_3^2\n\\end{bmatrix}.\n\\end{align}\n\n#### Geometry of derivatives\n\n\n\n* Handy resource on partial derivatives for matrices and vectors: https://explained.ai/matrix-calculus/index.html#sec3\n\n#### Derivative of dot product\n\nLet $f(\\boldsymbol x) = \\boldsymbol y^T \\boldsymbol x = \\sum_i y_i x_i$ and compute the derivative\n\n$$ \\frac{\\partial f}{\\partial \\boldsymbol x} = \\begin{bmatrix} y_0 & y_1 & \\dotsb \\end{bmatrix} = \\boldsymbol y^T . $$\n\nNote that $\\boldsymbol y^T \\boldsymbol x = \\boldsymbol x^T \\boldsymbol y$ and we have the product rule,\n\n$$ \\frac{\\partial \\lVert \\boldsymbol x \\rVert^2}{\\partial \\boldsymbol x} = \\frac{\\partial \\boldsymbol x^T \\boldsymbol x}{\\partial \\boldsymbol x} = 2 \\boldsymbol x^T . $$\n\nAlso,\n$$ \\frac{\\partial \\lVert \\boldsymbol x - \\boldsymbol y \\rVert^2}{\\partial \\boldsymbol x} = \\frac{\\partial (\\boldsymbol x - \\boldsymbol y)^T (\\boldsymbol x - \\boldsymbol y)}{\\partial \\boldsymbol x} = 2 (\\boldsymbol x - \\boldsymbol y)^T .$$\n\n### Optimization\nGiven data $(x,y)$ and loss function $L(c; x,y)$, we wish to find the coefficients $c$ that minimize the loss, thus yielding the \"best predictor\" (in a sense that can be made statistically precise). I.e.,\n$$ \\bar c = \\arg\\min_c L(c; x,y) . $$\n\nIt is usually desirable to design models such that the loss function is differentiable with respect to the coefficients $c$, because this allows the use of more efficient optimization methods. Recall that our forward model is given in terms of the Vandermonde matrix,\n\n$$ f(x, c) = V(x) c $$\n\nand thus\n\n$$ \\frac{\\partial f}{\\partial c} = V(x) . $$\n\nWe can now differentiate our loss function\n$$ L(c; x, y) = \\frac 1 2 \\lVert f(x, c) - y \\rVert^2 = \\frac 1 2 \\sum_i (f(x_i,c) - y_i)^2 $$\nterm-by-term as\n\\begin{align} \\nabla_c L(c; x,y) = \\frac{\\partial L(c; x,y)}{\\partial c} &= \\sum_i \\big( f(x_i, c) - y_i \\big) \\frac{\\partial f(x_i, c)}{\\partial c} \\\\\n&= \\sum_i \\big( f(x_i, c) - y_i \\big) V(x_i)\n\\end{align}\nwhere $V(x_i)$ is the $i$th row of $V(x)$.\nAlternatively, we can take a more linear algebraic approach to write the same expression is\n\\begin{align} \\nabla_c L(c; x,y) &= \\big( f(x,c) - y \\big)^T V(x) \\\\\n&= \\big(V(x) c - y \\big)^T V(x) \\\\\n&= V(x)^T \\big( V(x) c - y \\big) .\n\\end{align}\nA necessary condition for the loss function to be minimized is that $\\nabla_c L(c; x,y) = 0$.\n\n* Is the condition sufficient for general $f(x, c)$?\n* Is the condition sufficient for the linear model $f(x,c) = V(x) c$?\n* Have we seen this sort of equation before?\n\n##### Uniqueness\nWe can read the expression $ V(x)^T \\big( V(x) c - y \\big) = 0$ as saying that the residual $V(x) c - y$ is orthogonal to the range of $V(x)$. Suppose $c$ satisfies this equation and $c' \\ne c$ is some other value of the coefficients. 
Then\n\\begin{align}\nV(x)^T \\big( V(x) c' - y \\big) &= V(x)^T V(x) c' - V(x)^T y \\\\\n&= V(x)^T V(x) c' - V(x)^T V(x) c \\\\\n&= V(x)^T V(x) (c' - c) \\ne 0\n\\end{align}\nwhenever $V(x)^T V(x)$ is nonsingular, which happens any time $V(x)$ has full column rank.\n\n##### Gradient descent\nInstead of solving the least squares problem using linear algebra (QR factorization), we could solve it using gradient descent. That is, on each iteration, we'll take a step in the direction of the negative gradient.\n\n\n```python\ndef grad_descent(loss, grad, c0, gamma=1e-3, tol=1e-5):\n \"\"\"Minimize loss(c) via gradient descent with initial guess c0\n using learning rate gamma. Declares convergence when gradient\n is less than tol or after 500 steps.\n \"\"\"\n c = c0.copy()\n chist = [c.copy()]\n lhist = [loss(c)]\n for it in range(500):\n g = grad(c)\n c -= gamma * g\n chist.append(c.copy())\n lhist.append(loss(c))\n if np.linalg.norm(g) < tol:\n break\n return c, np.array(chist), np.array(lhist)\n\nclass quadratic_loss:\n \"\"\"Test problem to give example of gradient descent.\"\"\"\n def __init__(self):\n self.A = np.array([[1, 1], [1, 4]])\n def loss(self, c):\n return .5 * c @ self.A @ c\n def grad(self, c):\n return self.A @ c\n\ndef test(gamma):\n q = quadratic_loss()\n c, chist, lhist = grad_descent(q.loss, q.grad, .9*np.ones(2), gamma=gamma)\n plt.semilogy(lhist)\n plt.ylabel('loss')\n plt.xlabel('cumulative iterations')\n\n plt.figure()\n l = np.linspace(-1, 1)\n x, y = np.meshgrid(l, l)\n z = [q.loss(np.array([x[i,j], y[i,j]])) for i in range(50) for j in range(50)]\n plt.contour(x, y, np.reshape(z, x.shape))\n plt.plot(chist[:,0], chist[:,1], 'o-')\n plt.title('gamma={}'.format(gamma))\n \ntest(.45)\n```\n\n\n```python\nclass chebyshev_regress:\n def __init__(self, x, y, n):\n self.V = vander_chebyshev(x, n)\n self.y = y\n self.n = n\n \n def init(self):\n return np.zeros(self.n)\n\n def loss(self, c):\n r = self.V @ c - y\n return 0.5 * (r @ r)\n \n def grad(self, c):\n r = self.V @ c - y\n return self.V.T @ r\n \nreg = chebyshev_regress(x, y, 6)\nc, _, lhist = grad_descent(reg.loss, reg.grad, reg.init(), gamma=2e-3)\nplt.semilogx(np.arange(1, 1+len(lhist)), lhist)\nplt.ylabel('loss')\nplt.xlabel('cumulative iterations')\nplt.ylim(bottom=0);\n```\n\n\n```python\nnp.linalg.cond(reg.V.T @ reg.V)\n```\n\n\n\n\n 8.108087894474187\n\n\n\n* How does changing the \"learning rate\" `gamma` affect convergence?\n* How does changing the number of basis functions affect convergence rate?\n\n#### Did we find the same solution?\nWe intend to solve the same problem as we previously solved using QR, so we hope to find the same solution.\n\n\n```python\nplt.plot(xx, chebyshev_regress_eval(x, xx, 6) @ y, '-k', label='QR')\nplt.plot(xx, vander_chebyshev(xx, 6) @ c, '--', label='grad_descent')\nplt.plot(x, y, '.')\nplt.legend();\n```\n\n#### Observations\n\n* This algorithm is kind of finnicky:\n * It takes many iterations to converge\n * The \"learning rate\" needs to be empirically tuned\n * When not tuned well, the algorithm can diverge\n* It could be made more robust using a line search\n* The $QR$ algorithm is a more efficient and robust way to solve these problems\n\n## Nonlinear regression\n\nInstead of the linear model\n$$ f(x,c) = V(x) c = c_0 + c_1 \\underbrace{x}_{T_1(x)} + c_2 T_2(x) + \\dotsb $$\nlet's consider a rational model with only three parameters\n$$ f(x,c) = \\frac{1}{c_0 + c_1 x + c_2 x^2} = (c_0 + c_1 x + c_2 x^2)^{-1} . 
$$\nWe'll use the same loss function\n$$ L(c; x,y) = \\frac 1 2 \\lVert f(x,c) - y \\rVert^2 . $$\n\nWe will also need the gradient\n$$ \\nabla_c L(c; x,y) = \\big( f(x,c) - y \\big)^T \\nabla_c f(x,c) $$\nwhere\n\\begin{align}\n\\frac{\\partial f(x,c)}{\\partial c_0} &= -(c_0 + c_1 x + c_2 x^2)^{-2} = - f(x,c)^2 \\\\\n\\frac{\\partial f(x,c)}{\\partial c_1} &= -(c_0 + c_1 x + c_2 x^2)^{-2} x = - f(x,c)^2 x \\\\\n\\frac{\\partial f(x,c)}{\\partial c_2} &= -(c_0 + c_1 x + c_2 x^2)^{-2} x^2 = - f(x,c)^2 x^2 .\n\\end{align}\n\n\n```python\nclass rational_regress:\n def __init__(self, x, y):\n self.x = x\n self.y = y\n self.n = 3\n \n def init(self):\n return np.ones(self.n)\n \n def f(self, c):\n x = self.x\n return 1 / (c[0] + c[1]*x + c[2]*x**2)\n \n def df(self, c):\n x = self.x\n f2 = self.f(c)**2\n return np.array([-f2, -f2*x, -f2*x**2]).T\n\n def loss(self, c):\n r = self.f(c) - self.y\n return 0.5 * (r @ r)\n \n def grad(self, c):\n r = self.f(c) - self.y\n return r @ self.df(c)\n\nreg = rational_regress(x, y)\nc, _, lhist = grad_descent(reg.loss, reg.grad, reg.init(), gamma=2e-2)\nplt.semilogx(np.arange(1, 1+len(lhist)), lhist)\nplt.ylabel('loss')\nplt.xlabel('cumulative iterations')\nplt.ylim(bottom=0)\nplt.title('rational c={}'.format(c));\n```\n\n\n```python\nx = np.linspace(-1, 1, 500)\ny = runge1_noisy(x, sigma=0.1)\n\nc0 = np.array([1, 0, 1.])\nc, _, lhist = grad_descent(reg.loss, reg.grad, c0, gamma=2e-2)\nplt.semilogx(np.arange(1, 1+len(lhist)), lhist)\nplt.ylabel('loss')\nplt.xlabel('cumulative iterations')\nplt.ylim(bottom=0)\nplt.title('rational c={}'.format(c));\n```\n\n\n```python\nplt.plot(x, y, '.')\nplt.plot(xx, runge1(xx), label='runge1(x)')\nplt.plot(xx, chebyshev_regress_eval(x, xx, 40) @ y, label='Polynomial')\nplt.plot(xx, 1/(c[0] + c[1]*xx + c[2]*xx**2), '--k', label='Rational')\nplt.legend();\n```\n\n#### Observations\n\n* There can be local minima or points that look an awful lot like local minima.\n* Convergence is sensitive to learning rate.\n* It takes a lot of iterations to converge.\n* A well-parametrized model (such as the rational model above) can accurately reconstruct the mean even with very noisy data.\n\n### Outlook\n\n* The optimization problem can be solved using a Newton method. It can be onerous to implement the needed derivatives.\n* The [Gauss-Newton method](https://en.wikipedia.org/wiki/Gauss%E2%80%93Newton_algorithm) (see activity) is often more practical than Newton while being faster than gradient descent, though it lacks robustness.\n* The [Levenberg-Marquardt method](https://en.wikipedia.org/wiki/Levenberg%E2%80%93Marquardt_algorithm) provides a sort of middle-ground between Gauss-Newton and gradient descent.\n* Many globalization techniques are used for models that possess many local minima.\n* One pervasive approach is stochastic gradient descent, where small batches (e.g., 1 or 10 or 20) are selected randomly from the corpus of observations (500 in our current example), and a step of gradient descent is applied to that reduced set of observations. This helps to escape saddle points and weak local minima.\n* Among expressive models $f(x,c)$, some may converge much more easily than others. 
Having a good optimization algorithm is essential for nonlinear regression with complicated models, especially those with many parameters $c$.\n* Classification is a very similar problem to regression, but the observations $y$ are discrete, thus\n\n * models $f(x,c)$ must have discrete output\n * the least squares loss function is not appropriate.\n* [Why momentum really works](https://distill.pub/2017/momentum/)\n", "meta": {"hexsha": "95beca73965f65f472957065640a055460c8a7a8", "size": 360141, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Regression.ipynb", "max_stars_repo_name": "cu-numcomp/numcomp-class", "max_stars_repo_head_hexsha": "8d9f5944774082ad4d8e8462e3afae7a87cddf4a", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 7, "max_stars_repo_stars_event_min_datetime": "2020-01-14T17:00:27.000Z", "max_stars_repo_stars_event_max_datetime": "2020-07-03T14:17:07.000Z", "max_issues_repo_path": "Regression.ipynb", "max_issues_repo_name": "cu-numcomp/numcomp-class", "max_issues_repo_head_hexsha": "8d9f5944774082ad4d8e8462e3afae7a87cddf4a", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2020-02-28T08:39:35.000Z", "max_issues_repo_issues_event_max_datetime": "2020-03-17T20:42:56.000Z", "max_forks_repo_path": "Regression.ipynb", "max_forks_repo_name": "cu-numcomp/numcomp-class", "max_forks_repo_head_hexsha": "8d9f5944774082ad4d8e8462e3afae7a87cddf4a", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2020-01-16T19:52:55.000Z", "max_forks_repo_forks_event_max_datetime": "2021-02-09T18:30:56.000Z", "avg_line_length": 472.625984252, "max_line_length": 63160, "alphanum_fraction": 0.9349726913, "converted": true, "num_tokens": 5179, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.89330940889474, "lm_q2_score": 0.938124016006303, "lm_q1q2_score": 0.8380350102085501}} {"text": "# Homework 5\n\n1. A group of $n \\ge 4$ people are comparing their birthdays (as usual, assume their birthdays are independent, are not February 29, etc.).\n\n (a) Let $I_{ij}$ be the indicator r.v. of $i$ and $j$ having the same birthday (for $i \\lt j$). Is $I_{12}$ independent of $I_{34}$? Are the $I_{ij}$'s independent?\n \n (b) Explain why the Poisson Paradigm is applicable here even for moderate $n$, and use it to get a good approximation to the probability of at least 1 match when $n=23$.\n \n (c) About how many people are needed so that there is a 50% chance (or better) that two either have the same birthday or are only 1 day apart? (Note that this is much harder than the birthday problem to do exactly, but the Poisson Paradigm makes it possible to get fairly accurate approximations quickly.)\n\n#### Solution\n\n##### 1 (a)\n\n$I_{12}$ is independent of $I_{34}$, since knowing that persons 1 and 2 having the same birthday provides absolutely no information about persons 3 and 4.\n\nHowever, $I_{ij}$ are not entirely independent, as knowing that $I_{12}$ and $I_{23}$ does tell us that person 1 and 3 _must_ have the same birthday.\n\n##### 1 (b)\n\nChecklist for the applying the Poisson Paradigm:\n\n1. events $I_{ij}$ where $i \\lt j$, $P(I_{ij}) = \\frac{1}{365}$\n2. large number of trials $\\binom{n}{2}$; when $n=23$, $\\binom{23}{2} = 253$\n3. small $p = \\frac{1}{365}$\n\nAs the events $I_{ij}$ are _weakly independent_, we can approximate this r.v. with a Poisson distribution. 
\n\nLet $X \\sim Pois(\\mu)$.\n\n\\begin{align}\n \\mu &= \\binom{n}{2} ~ p \\\\\n &= \\frac{n(n-1)}{2} ~ \\frac{1}{365} \\\\\n &= \\frac{n(n-1)}{730} \\\\\n &\\approx 0.693 & \\quad \\text{when } n = 23\n \\\\\\\\\n \\therefore P(X \\ge 1) &= 1 - P(X = 0) \\\\\n &= 1 - e^{-0.693} ~ \\frac{0.693^0}{0!} \\\\\n &= 1 - e^{-0.693} \\\\\n &\\approx 0.500\n\\end{align}\n\n##### 1 (c)\n\nWe are interested in the case where $I_{ij}$ is where 2 people have the _same_ birthday, or _1 day apart_.\n\nThis means that $I_{ij} = 1$, and so $P(I_{ij}=1) = \\frac{3}{365}$.\n\nWe can approximate this with $Pois(\\mu = \\frac{n(n-1)}{2} ~ \\frac{3}{365})$, and so $\\mu = \\frac{3n(n-1)}{730}$.\n\n\\begin{align}\n P(X \\ge 1) &= 1 - P(X=0) \\\\\n &= 1 - e^{-\\frac{3n(n-1)}{730}} ~ \\frac{\\left(\\frac{3n(n-1)}{730}\\right)^0}{0!} \\\\\n &= 1 - e^{\\frac{-3n^2+3n}{730}}\n \\\\\\\\\n 1 - e^{\\frac{-3n^2+3n}{730}} &= \\frac{1}{2} \\\\\n \\frac{-3n^2+3n}{730} &= log\\left(\\frac{1}{2}\\right) \\\\\n &= - log ~ 2 \\\\\n 3n^2 -3n - 730 ~ log ~ 2 &= 0\n\\end{align} \n\n----\n\n\n```python\nimport math\nmath.log(2)\n```\n\n\n\n\n 0.6931471805599453\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "49fd92598caf6feac32c69c5709502c130ec002e", "size": 5042, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "homework/Homework_05.ipynb", "max_stars_repo_name": "dirtScrapper/Stats-110-master", "max_stars_repo_head_hexsha": "a123692d039193a048ff92f5a7389e97e479eb7e", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "homework/Homework_05.ipynb", "max_issues_repo_name": "dirtScrapper/Stats-110-master", "max_issues_repo_head_hexsha": "a123692d039193a048ff92f5a7389e97e479eb7e", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "homework/Homework_05.ipynb", "max_forks_repo_name": "dirtScrapper/Stats-110-master", "max_forks_repo_head_hexsha": "a123692d039193a048ff92f5a7389e97e479eb7e", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.9325153374, "max_line_length": 315, "alphanum_fraction": 0.5041650139, "converted": true, "num_tokens": 944, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9496693688269985, "lm_q2_score": 0.8824278618165526, "lm_q1q2_score": 0.8380147105666834}} {"text": "# Tipos comp\u00f3sitos\n\n[1] (i) \u00bfC\u00f3mo representar\u00edas a una part\u00edcula en 1D con posici\u00f3n, velocidad y masa en Julia?\n\n(ii) \u00bfC\u00f3mo mover\u00edas la part\u00edcula en un paso $\\delta t$?\n\n(iii) \u00bfSi necesitas otra part\u00edcula con las mismas propiedades, qu\u00e9 har\u00edas?\n\n(iv) Para $N$ tales part\u00edculas, \u00bfqu\u00e9 podr\u00edas hacer?\n\n\nEl problema aqu\u00ed es que la representaci\u00f3n del concepto \"part\u00edcula\" est\u00e1 repartida en distintas variables. Julia provee una manera de recolectar la informaci\u00f3n de un \"objeto\", al definir un *tipo comp\u00f3sito* (\"composite type\"):\n\n\n```julia\ntype MiTipo\n a\n b::Int\nend\n```\n\nEsto define un tipo de objeto llamado `MiTipo`. Cada objeto de este tipo tendr\u00e1 *adentro* su propia copia de una variable llamada `a` y otra llamada `b`. 
En este caso, no hemos especificado ning\u00fan tipo para `a`, mientras que `b` est\u00e1 forzado a tener el tipo `Int`. \n\n[En general, en Julia, para *anotar* a una variable con un tipo dado, usamos esta notaci\u00f3n con `::`.]\n\n[2] Define un tipo que se llama `Particula`, que tiene variables para la posici\u00f3n, velocidad y masa en una dimensi\u00f3n.\n\n[3] Experimenta para ver c\u00f3mo crear un objeto de tipo `Particula`. [Pista: piensa en funciones]\n\n[4] \u00bfC\u00f3mo podemos definir una funci\u00f3n `mover` que mueve la part\u00edcula en un paso de tiempo $\\delta t$? [Pista: Para especificar que un objeto `t` es de tipo `MiTipo`, usamos la sintaxis `t::MiTipo`.]\n\n[5] Define un objeto `Gas` que representa $N$ part\u00edculas, as\u00ed como una funci\u00f3n `mover` que mueve el gas.\n\n[6] Considera una estructura compuesta que denotaremos\n${\\overline v} = (f_v, d_v)$, que consta de dos campos `f_v` y `d_v`, que son flotantes. Esta estructura est\u00e1 definida de tal manera que se cumplen las siguientes propiedades:\n\n\\begin{align}\n{\\overline c} &= (c,\\,0), &\\textrm{ para toda constante $c$},\\\\\n{\\overline x} &= (x_0,\\,1), &\\textrm{para toda variable independiente $x = x_0$},\\\\\nc {\\overline v} &= {\\overline c}\\, {\\overline v} = (c f_v,\\, c d_v), \\\\\n{\\overline v} \\pm {\\overline w} & = (f_v \\pm f_w,\\, d_v \\pm d_w),\\\\\n{\\overline v} \\cdot {\\overline w} & = (f_v \\cdot f_w,\\, f_v \\cdot d_w + d_v \\cdot f_w),\\\\\n\\frac{{\\overline v}}{{\\overline w}} & = \n\\left( \\frac{f_v}{f_w},\\, \\frac{d_v \\cdot f_w - f_v \\cdot d_w}{f_w^2} \\right),\\\\\n{\\overline u}^\\alpha &= (f_u^\\alpha,\\, \\alpha f_u^{\\alpha-1} d_u), &\\textrm{donde $\\alpha$ \nes un n\u00famero real}.\n\\end{align}\n\n\ni. Implementa esto usando Julia.\n\nii. Define un polinomio $p(x)$ cuya variable independiente es $x$. Eval\u00faa el polinomio en ${\\overline x}$ (variable independiente $x$), en $x_0=0$. \u00bfQu\u00e9 interpretaci\u00f3n tiene *el valor* obtenido para $d_x$? Y si en lugar de un polinomio utilizas un cociente de polinomios $r({\\overline x}) = p({\\overline x}) / q({\\overline x})$?\n\niii. Pensando en la interpretaci\u00f3n que le diste a $d_x$, c\u00f3mo definir\u00edas la acci\u00f3n sobre ${\\overline x}$ de las siguientes funciones:\n- $\\exp(\\,{\\overline x}\\,)$\n- $\\log(\\,{\\overline x}\\,)$\n- $\\sin(\\,{\\overline x}\\,)$\n- $\\cos(\\,{\\overline x}\\,)$\n- $\\tan(\\,{\\overline x}\\,)$\n\niv. \u00bfC\u00f3mo podemos definir las cosas en Julia de tal manera que ${\\overline v} + c$, y las dem\u00e1s posibles operaciones\nentre una variable comp\u00f3sita y un flotante $c$, tengan sentido?\n\n\n[7] En el resto del curso, trataremos con *aritm\u00e9tica de intervalos*. En este nuevo tipo de aritm\u00e9tica, ocupamos intervalos $[a,b]$ de la recta real, que es el conjunto \n\n$$[a, b] := \\{x : a \\le x \\le b \\}$$\n\n(i) Define un tipo composito para representar un intervalo de dos n\u00fameros reales.\n\n(ii) \u00bfC\u00f3mo podr\u00edamos tener operaciones sensatas sobre los intervalos? La idea b\u00e1sica es que el resultado de la operaci\u00f3n sobre dos intervalos contenga los valores posibles resultantes de operar con los miembros de los dos intervalos respectivos.\n\n(iii) Implementa estas operaciones, sin tomar en cuenta cuestiones de redondeo.\n\n(iv) \u00bfC\u00f3mo nos puede ayudar el redondeo? 
Implem\u00e9ntalo.\n", "meta": {"hexsha": "23647f738447df834f36b122307c193e4e720b4d", "size": 6356, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/06. Tipos compositos en Julia.ipynb", "max_stars_repo_name": "dpsanders/MetodosNumericosAvanzados", "max_stars_repo_head_hexsha": "57ed97526afa7f922a43a34dfdc2509274431034", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2015-02-15T17:46:09.000Z", "max_stars_repo_stars_event_max_datetime": "2020-03-12T02:50:44.000Z", "max_issues_repo_path": "notebooks/06. Tipos compositos en Julia.ipynb", "max_issues_repo_name": "dpsanders/MetodosNumericosAvanzados", "max_issues_repo_head_hexsha": "57ed97526afa7f922a43a34dfdc2509274431034", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/06. Tipos compositos en Julia.ipynb", "max_forks_repo_name": "dpsanders/MetodosNumericosAvanzados", "max_forks_repo_head_hexsha": "57ed97526afa7f922a43a34dfdc2509274431034", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 7, "max_forks_repo_forks_event_min_datetime": "2015-02-15T16:55:23.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-03T03:43:01.000Z", "avg_line_length": 40.7435897436, "max_line_length": 363, "alphanum_fraction": 0.5712712398, "converted": true, "num_tokens": 1238, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9046505351008906, "lm_q2_score": 0.9263037379231739, "lm_q1q2_score": 0.8379811721781544}} {"text": "```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport sympy as sy\nimport simtk.unit as unit\nfrom simtk import openmm as mm\nfrom simtk.openmm import app\nimport skopt as skopt\nfrom tqdm import tqdm\n```\n\n# A Lennard-Jones Fluid\n\n## The Lennard-Jones potential\n\nThe Lennard-Jones (LJ) potential between two particles is defined by the following equation, where $x$ is the distance between the particles, and $\\sigma$ and $\\epsilon$ are two parameters of the potential:\n \n\\begin{equation}\nV(x) = 4 \\epsilon \\left[ \\left( \\frac{\\sigma}{x} \\right)^{12} - \\left( \\frac{\\sigma}{x} \\right)^6 \\right]\n\\end{equation}\n\nLets see the shape of this function:\n\n\n```python\ndef LJ (x, sigma, epsilon):\n \n t = sigma/x\n t6 = t**6\n t12 = t6**2\n \n return 4.0*epsilon*(t12-t6)\n```\n\n\n```python\nsigma = 2.0 * unit.angstrom\nepsilon = 1.0 * unit.kilocalories_per_mole\n\nxlim_figure = [0.01, 6.0]\nylim_figure = [-2.0, 10.0]\n\nx = np.linspace(xlim_figure[0], xlim_figure[1], 100, True) * unit.angstrom\nplt.plot(x, LJ(x, sigma, epsilon))\nplt.xlim(xlim_figure)\nplt.ylim(ylim_figure)\nplt.xlabel('x [{}]'.format(x.unit.get_symbol()))\nplt.ylabel('V [{}]'.format(epsilon.unit.get_symbol()))\nplt.show()\n```\n\nThe way the LJ potential is built, the $\\sigma$ and $\\epsilon$ parameters have a straightforward interpretation. 
The cut with $y=0$ is located in $x=\\sigma$:\n\n\n```python\nsigma = 2.0 * unit.angstrom\nepsilon = 1.0 * unit.kilocalories_per_mole\n\nxlim_figure = [0.01, 6.0]\nylim_figure = [-2.0, 10.0]\n\nx = np.linspace(xlim_figure[0], xlim_figure[1], 100, True) * unit.angstrom\nplt.plot(x, LJ(x, sigma, epsilon))\nplt.hlines(0, xlim_figure[0], xlim_figure[1], linestyles='dotted', color='gray')\nplt.vlines(sigma._value, ylim_figure[0], ylim_figure[1], linestyles='dashed', color='red')\nplt.text(sigma._value+0.02*xlim_figure[1], 0.7*ylim_figure[1], '$\\sigma$', fontsize=14)\nplt.xlim(xlim_figure)\nplt.ylim(ylim_figure)\nplt.xlabel('x [{}]'.format(x.unit.get_symbol()))\nplt.ylabel('V [{}]'.format(epsilon.unit.get_symbol()))\nplt.show()\n```\n\nAnd $\\epsilon$ is the depth of the minimum measured from $y=0$:\n\n\n```python\nsigma = 2.0 * unit.angstrom\nepsilon = 1.0 * unit.kilocalories_per_mole\n\nxlim_figure = [0.01, 6.0]\nylim_figure = [-2.0, 10.0]\n\nx = np.linspace(xlim_figure[0], xlim_figure[1], 100, True) * unit.angstrom\nplt.plot(x, LJ(x, sigma, epsilon))\nplt.hlines(0, xlim_figure[0], xlim_figure[1], linestyles='dotted', color='gray')\nplt.hlines(-epsilon._value, xlim_figure[0], xlim_figure[1], linestyles='dashed', color='red')\nplt.annotate(text='', xy=(1.0,0.0), xytext=(1.0,-epsilon._value), arrowprops=dict(arrowstyle='<->'))\nplt.text(1.0+0.02*xlim_figure[1], -0.7*epsilon._value, '$\\epsilon$', fontsize=14)\nplt.xlim(xlim_figure)\nplt.ylim(ylim_figure)\nplt.xlabel('x [{}]'.format(x.unit.get_symbol()))\nplt.ylabel('V [{}]'.format(epsilon.unit.get_symbol()))\nplt.show()\n```\n\nNotice that the LJ potential has physical meaning when $\\epsilon>0$ and $\\sigma>0$ only. Actually, the potential vanishes whether $\\epsilon=0$ or $\\sigma=0$.\n\n### The Lennard Jones minimum and the size of the particles\n\nThe LJ potential has a single minimum located in $x_{min}$. Lets equal to $0$ the first derivative of the potential to find the value of $x_{min}$:\n\n\n```python\nx, sigma, epsilon = sy.symbols('x sigma epsilon', real=True, positive=True)\nV = 4.0*epsilon*((sigma/x)**12-(sigma/x)**6)\ngradV = sy.diff(V,x)\nroots=sy.solve(gradV, x)\nx_min = roots[0]\n```\n\n\n```python\nx_min\n```\n\n\n\n\n$\\displaystyle 1.12246204830937 \\sigma$\n\n\n\nThe minimum is then located in:\n\n\\begin{equation}\nx_{min} = 2^{1/6} \\sigma\n\\end{equation}\n\nwhere the potential takes the value:\n\n\\begin{equation}\nV(x_{min}) = -\\epsilon\n\\end{equation}\n\n\n```python\nsigma = 2.0 * unit.angstrom\nepsilon = 1.0 * unit.kilocalories_per_mole\n\nx_min = 2**(1/6)*sigma\ny_min = -epsilon\n\nxlim_figure = [x_min._value-0.4, x_min._value+0.4]\nylim_figure = [y_min._value-0.1, y_min._value+0.5]\n\nx = np.linspace(xlim_figure[0], xlim_figure[1], 100, True) * unit.angstroms\nplt.plot(x, LJ(x, sigma, epsilon))\nplt.hlines(y_min._value, xlim_figure[0], xlim_figure[1], linestyles='dashed', color='gray')\nplt.vlines(x_min._value, ylim_figure[0], ylim_figure[1], linestyles='dashed', color='gray')\nplt.xlim(xlim_figure)\nplt.ylim(ylim_figure)\nplt.xlabel('x [{}]'.format(x.unit.get_symbol()))\nplt.ylabel('V [{}]'.format(epsilon.unit.get_symbol()))\nplt.show()\n```\n\nThis way two particles in the equilibrium position will be placed at a $2^{1/6} \\sigma$ distance. The potential is thereby modeling two \"soft spheres\" atracting each other very lightly. 
Their radii, given that both particles are equal, are equal to $r$:\n\n\\begin{equation}\nr = \\frac{1}{2} x_{min} = 2^{-5/6} \\sigma\n\\end{equation}\n\nAnd we say these spheres are \"soft\" because their volume is not limited by a hard-wall potential, they can penetrate each other suffering a not infinite repulsive force.\n\n### Time period of the small harmonic oscillations around the minimum\n\nIf we want to perform a molecular simulation of this two particles we should wonder how big the integrator timestep must be. To answer this question we can study the harmonic approximation around the minimum. Lets calculate the time period, $\\tau$, of a small harmonic oscillation around the minimum:\n\n\n```python\nx, sigma, epsilon = sy.symbols('x sigma epsilon', real=True, positive=True)\nV = 4.0*epsilon*((sigma/x)**12-(sigma/x)**6)\ngradV = sy.diff(V,x)\ngrad2V = sy.diff(V,x,x)\n\nx_min = sy.solve(gradV,x)[0]\nk_harm = grad2V.subs(x, x_min)\n```\n\n\n```python\nk_harm\n```\n\n\n\n\n$\\displaystyle \\frac{57.1464378708551 \\epsilon}{\\sigma^{2}}$\n\n\n\nThe harmonic constant of the second degree Taylor polynomial of the LJ potential at $x=x_{min}$ is then:\n\n\\begin{equation}\nk_{harm} = 36\u00b72^{2/3} \\frac{\\epsilon}{\\sigma^2}\n\\end{equation}\n\nThe oscillation period of a particle with $m$ mass in an harmonic potential defined by $\\frac{1}{2} k x\u00b2$ is:\n\n\\begin{equation}\n\\tau = 2 \\pi \\sqrt{ \\frac{m}{k}}\n\\end{equation}\n\nAs such, the period of the small harmonic oscillations around the LJ minimum of particle with $m$ mass is:\n\n\\begin{equation}\n\\tau = 2 \\pi \\sqrt{ \\frac{m}{k_{harm}}} = \\frac{\\pi}{3\u00b72^{1/3}} \\sqrt{\\frac{m\\sigma^2}{\\epsilon}}\n\\end{equation}\n\nWith the mass and parameters taking values of amus, angstroms and kilocalories per mole, the time period is in the order of:\n\n\n```python\nmass = 50.0 * unit.amu\nsigma = 2.0 * unit.angstrom\nepsilon = 1.0 * unit.kilocalories_per_mole\n\nk = 36 * 2**(2/3) * epsilon/sigma**2\n\ntau = 2*np.pi * np.sqrt(mass/k)\n\nprint(tau)\n```\n\n 0.5746513694274475 ps\n\n\nBut, is this characteristic time a good threshold for a LJ potential? If the oscillations around the minimum are not small enough, the harmonic potential of the second degree term of the taylor expansion is easily overcome by the sharp left branch of the LJ potential:\n\n\n```python\nsigma = 2.0 * unit.angstrom\nepsilon = 1.0 * unit.kilocalories_per_mole\n\nk = 36 * 2**(2/3) * epsilon/sigma**2\n\nx_min = 2**(1/6)*sigma\ny_min = -epsilon\n\nxlim_figure = [x_min._value-0.2, x_min._value+0.2]\nylim_figure = [y_min._value-0.1, y_min._value+0.6]\n\nx = np.linspace(xlim_figure[0], xlim_figure[1], 100, True) * unit.angstroms\nplt.plot(x, LJ(x, sigma, epsilon))\nplt.plot(x, 0.5*k*(x-x_min)**2+y_min)\nplt.hlines(y_min._value, xlim_figure[0], xlim_figure[1], linestyles='dashed', color='gray')\nplt.vlines(x_min._value, ylim_figure[0], ylim_figure[1], linestyles='dashed', color='gray')\nplt.xlim(xlim_figure)\nplt.ylim(ylim_figure)\nplt.xlabel('x [{}]'.format(x.unit.get_symbol()))\nplt.ylabel('V [{}]'.format(epsilon.unit.get_symbol()))\nplt.show()\n```\n\nLet's imagine the following situation. Let a particle be in the harmonic potential at temperature of 300K. Will the particle be more constrained in space than in the well of the LJ potential? Will the particle feel the harmonic potential softer or sharper than the LJ? 
Lets make some numbers to evaluate if the oscillation time period of the harmonic approximation can be a good time threshold for the integration timestep of a molecular dynamics of the LJ potential.\n\nThe standard deviation of an harmonic oscillation with the shape $\\frac{1}{2}k x^2$ in contact with a stochastic thermal bath can be computed as:\n\n\\begin{equation}\n\\beta = \\frac{1}{k_{\\rm B} T} \n\\end{equation}\n\n\\begin{equation}\nZ_x = \\int_{-\\infty}^{\\infty} {\\rm e}^{- \\beta \\frac{1}{2}k x^2} = \\sqrt{\\frac{2 \\pi}{\\beta k}}\n\\end{equation}\n\n\\begin{equation}\n\\left< x \\right> = \\frac{1}{Z_x} \\int_{-\\infty}^{\\infty} x {\\rm e}^{-\\beta \\frac{1}{2}k x^2} = 0\n\\end{equation}\n\n\\begin{equation}\n\\left< x^2 \\right> = \\frac{1}{Z_x} \\int_{-\\infty}^{\\infty} x^{2} {\\rm e}^{-\\beta \\frac{1}{2}k x^2} = \\frac{1}{Z_x} \\sqrt{\\frac{2 \\pi}{\\beta\u00b3 k^3}} = \\frac{1}{\\beta k}\n\\end{equation}\n\n\n\\begin{equation}\n{\\rm std} = \\left( \\left< x^2 \\right> -\\left< x \\right>^2 \\right)^{1/2} = \\sqrt{ \\frac{k_{\\rm B}T}{k} }\n\\end{equation}\n\n\nThis way, in the case of the harmonic potential obtained as the second degree term of the Taylor expansion around the LJ minimum:\n\n\n```python\nmass = 50.0 * unit.amu\nsigma = 2.0 * unit.angstrom\nepsilon = 1.0 * unit.kilocalories_per_mole\ntemperature = 300 * unit.kelvin\nkB = unit.BOLTZMANN_CONSTANT_kB * unit.AVOGADRO_CONSTANT_NA\n\nk = 36 * 2**(2/3) * epsilon/sigma**2\nstd = np.sqrt(kB*temperature/k)\n\nx_min = 2**(1/6)*sigma\ny_min = -epsilon\n\nxlim_figure = [x_min._value-0.4, x_min._value+0.4]\nylim_figure = [y_min._value-0.1, y_min._value+0.6]\n\nx = np.linspace(xlim_figure[0], xlim_figure[1], 100, True) * unit.angstroms\nplt.plot(x, LJ(x, sigma, epsilon))\nplt.plot(x, 0.5*k*(x-x_min)**2+y_min)\nplt.hlines(y_min._value, xlim_figure[0], xlim_figure[1], linestyles='dashed', color='gray')\nplt.vlines(x_min._value, ylim_figure[0], ylim_figure[1], linestyles='dashed', color='gray')\nplt.axvspan(x_min._value - std._value, x_min._value + std._value, alpha=0.2, color='red')\nplt.annotate(text='', xy=(x_min._value, y_min._value - 0.5*(y_min._value-ylim_figure[0])),\n xytext=(x_min._value-std._value, y_min._value - 0.5*(y_min._value-ylim_figure[0])),\n arrowprops=dict(arrowstyle='<->'))\nplt.text(x_min._value-0.6*std._value, y_min._value - 0.4*(y_min._value-ylim_figure[0]), '$std$', fontsize=14)\nplt.xlim(xlim_figure)\nplt.ylim(ylim_figure)\nplt.xlabel('x [{}]'.format(x.unit.get_symbol()))\nplt.ylabel('V [{}]'.format(epsilon.unit.get_symbol()))\nplt.show()\n```\n\nThe harmonic potential is too soft as approximation. Its oscillation time used as threshold to choose the integration timestep can yield to numeric problems. 
Let's try with a stiffer potential, let's double the harmonic constant:\n\n\n```python\nmass = 50.0 * unit.amu\nsigma = 2.0 * unit.angstrom\nepsilon = 1.0 * unit.kilocalories_per_mole\ntemperature = 300 * unit.kelvin\nkB = unit.BOLTZMANN_CONSTANT_kB * unit.AVOGADRO_CONSTANT_NA\n\nk = 36 * 2**(2/3) * epsilon/sigma**2\nstd = np.sqrt(kB*temperature/k)\n\nx_min = 2**(1/6)*sigma\ny_min = -epsilon\n\nxlim_figure = [x_min._value-0.4, x_min._value+0.4]\nylim_figure = [y_min._value-0.1, y_min._value+0.6]\n\nx = np.linspace(xlim_figure[0], xlim_figure[1], 100, True) * unit.angstroms\nplt.plot(x, LJ(x, sigma, epsilon))\nplt.plot(x, 0.5*k*(x-x_min)**2+y_min)\nplt.plot(x, k*(x-x_min)**2+y_min, label='2k_{harm}')\nplt.hlines(y_min._value, xlim_figure[0], xlim_figure[1], linestyles='dashed', color='gray')\nplt.vlines(x_min._value, ylim_figure[0], ylim_figure[1], linestyles='dashed', color='gray')\nplt.axvspan(x_min._value - std._value, x_min._value + std._value, alpha=0.2, color='red')\nplt.annotate(text='', xy=(x_min._value, y_min._value - 0.5*(y_min._value-ylim_figure[0])),\n xytext=(x_min._value-std._value, y_min._value - 0.5*(y_min._value-ylim_figure[0])),\n arrowprops=dict(arrowstyle='<->'))\nplt.text(x_min._value-0.6*std._value, y_min._value - 0.4*(y_min._value-ylim_figure[0]), '$std$', fontsize=14)\nplt.xlim(xlim_figure)\nplt.ylim(ylim_figure)\nplt.xlabel('x [{}]'.format(x.unit.get_symbol()))\nplt.ylabel('V [{}]'.format(epsilon.unit.get_symbol()))\nplt.show()\n```\n\nLets take then, as reference, an harmonic potential with constant equal to $2k_{harm}$ could be a better idea. Lets compute then the new time threshold to choose the integration timestep:\n\n\\begin{equation}\n\\tau = 2 \\pi \\sqrt{ \\frac{m}{2k_{harm}}} = \\frac{\\pi}{3\u00b72^{5/6}} \\sqrt{\\frac{m\\sigma^2}{\\epsilon}}\n\\end{equation}\n\n\n```python\nmass = 50.0 * unit.amu\nsigma = 2.0 * unit.angstrom\nepsilon = 1.0 * unit.kilocalories_per_mole\n\nk = 36 * 2**(2/3) * epsilon/sigma**2\n\ntau = 2*np.pi * np.sqrt(mass/(2*k))\n\nprint(tau)\n```\n\n 0.4063398801402841 ps\n\n\nIt is an accepted rule of thumb that the integration timestep must be as large as $\\tau / 10$, being $\\tau$ the oscillation time period of the fastest possible vibration mode. 
So finally, in this case the integration time step should not be longer than:\n\n\n```python\nmass = 50.0 * unit.amu\nsigma = 2.0 * unit.angstrom\nepsilon = 1.0 * unit.kilocalories_per_mole\n\nk = 36 * 2**(2/3) * epsilon/sigma**2\n\ntau = 2*np.pi * np.sqrt(mass/(2*k))\n\nprint(tau/10.0)\n```\n\n 0.04063398801402841 ps\n\n\nFor the case of a LJ potential modeling an Argon fluid:\n\n\n```python\nmass = 39.9 * 2 * unit.amu # reduced mass of a pair of atoms\nsigma = 3.4 * unit.angstrom\nepsilon = 0.238 * unit.kilocalories_per_mole\n\nk = 36 * 2**(2/3) * epsilon/sigma**2\n\ntau = 2*np.pi * np.sqrt(mass/(2*k))\n\nprint(tau/10.0)\n```\n\n 0.17888187339607967 ps\n\n\n## Two Lennard-Jones atoms in vacuum\n\n### Ar-Ar\n\n\n```python\nmass = 39.948 * unit.amu\nsigma = 3.404 * unit.angstroms\nepsilon = 0.238 * unit.kilocalories_per_mole\ncharge = 0.0 * unit.elementary_charge\n```\n\n\n```python\nsystem = mm.System()\n\nnon_bonded_force = mm.NonbondedForce()\nnon_bonded_force.setNonbondedMethod(mm.NonbondedForce.NoCutoff)\n\n# First Ar atom\nsystem.addParticle(mass)\nnon_bonded_force.addParticle(charge, sigma, epsilon)\n\n# Second Ar atom\nsystem.addParticle(mass)\nnon_bonded_force.addParticle(charge, sigma, epsilon)\n\nsystem.addForce(non_bonded_force)\n\nintegrator = mm.VerletIntegrator(2*unit.femtoseconds)\nplatform = mm.Platform.getPlatformByName('CUDA')\n\ncontext = mm.Context(system, integrator, platform)\n```\n\n\n```python\npositions = np.zeros([2,3], float) * unit.angstroms\n\nx = np.linspace(1.0, 8.0, 200, endpoint=True) * unit.angstroms\nV = [] * unit.kilocalories_per_mole\n\nfor xi in x:\n positions[1,0] = xi\n context.setPositions(positions)\n state = context.getState(getEnergy=True)\n potential_energy = state.getPotentialEnergy()\n V.append(potential_energy)\n```\n\nPosition of minimum:\n\n\\begin{equation}\nx_{min} = 2^{1/6} \\sigma\n\\end{equation}\n\n\n```python\nx_min = 2**(1/6)*sigma\n```\n\n\n```python\nx_min\n```\n\n\n\n\n Quantity(value=3.8208608124451056, unit=angstrom)\n\n\n\n\n```python\nk = 36 * 2**(2/3) * epsilon/sigma**2\nreduced_mass = (mass**2)/(2*mass)\n\ntau = 2*np.pi * np.sqrt(reduced_mass/k)\n\nprint(tau)\n```\n\n 1.267135457848527 ps\n\n\n\n```python\nV._value = np.array(V._value)\n\nxlim_figure = [0.01, 8.0]\nylim_figure = [-2.0, 10.0]\n\nplt.plot(x, V, linewidth=2)\nplt.plot(x, LJ(x, sigma, epsilon), linestyle='--', color='red')\nplt.vlines(x_min._value, ylim_figure[0], ylim_figure[1], linestyles='dashed', color='gray')\nplt.xlim(xlim_figure)\nplt.ylim(ylim_figure)\nplt.xlabel('x [{}]'.format(x.unit.get_symbol()))\nplt.ylabel('V [{}]'.format(epsilon.unit.get_symbol()))\nplt.show()\n```\n\n\n```python\nxlim_figure = [3.0, 5.0]\nylim_figure = [-0.4, 0.2]\n\nx = np.linspace(xlim_figure[0], xlim_figure[1], 100, True) * unit.angstrom\n\nplt.plot(x, LJ(x, sigma, epsilon))\nplt.vlines(x_min._value, ylim_figure[0], ylim_figure[1], linestyles='dashed', color='gray')\n\nplt.xlim(xlim_figure)\nplt.ylim(ylim_figure)\nplt.xlabel('x [{}]'.format(x.unit.get_symbol()))\nplt.ylabel('V [{}]'.format(epsilon.unit.get_symbol()))\nplt.show()\n```\n\n\n```python\ninitial_distance = x_min + 0.05 * unit.angstroms\nsimulation_time = 4*tau\nsaving_time = 0.01*tau\nintegration_timestep = 0.01*tau\n\nsaving_steps = int(saving_time/integration_timestep)\nn_savings = int(simulation_time/saving_time)\n\nintegrator = context.getIntegrator()\nintegrator.setStepSize(integration_timestep)\n\ntrajectory = []*unit.angstroms\ntimes = []*unit.picoseconds\n\npositions = np.zeros([2,3], float) * 
unit.angstroms\npositions[1,0] = initial_distance\ncontext.setPositions(positions)\n\nvelocities = np.zeros([2,3], float) * unit.angstroms/unit.picoseconds\ncontext.setVelocities(velocities)\n\ncontext.setTime(0.0*unit.picoseconds)\n\nstate = context.getState(getPositions=True)\npositions = state.getPositions(asNumpy=True)\ntime = state.getTime()\ndistance = positions[1,0] - positions[0,0]\ntrajectory.append(distance)\ntimes.append(time)\n\nfor _ in range(n_savings):\n \n integrator.step(saving_steps)\n state = context.getState(getPositions=True)\n positions = state.getPositions(asNumpy=True)\n time = state.getTime()\n distance = positions[1,0] - positions[0,0]\n trajectory.append(distance)\n times.append(time)\n\ntimes._value = np.array(times._value)\ntrajectory._value = np.array(trajectory._value)\n\nplt.plot(times, trajectory)\nplt.show()\n```\n\n- Try to place the initial distance between the pair of atoms further than the simulated above. \n\n### Ar-Xe molecular system\n\n\n```python\nmass_Ar = 39.948 * unit.amu\nsigma_Ar = 3.404 * unit.angstroms\nepsilon_Ar = 0.238 * unit.kilocalories_per_mole\ncharge_Ar = 0.0 * unit.elementary_charge\n\nmass_Xe = 131.293 * unit.amu\nsigma_Xe = 3.961 * unit.angstroms\nepsilon_Xe = 0.459 * unit.kilocalories_per_mole\ncharge_Xe = 0.0 * unit.elementary_charge\n```\n\n\n```python\nsystem = mm.System()\n\nnon_bonded_force = mm.NonbondedForce()\nnon_bonded_force.setNonbondedMethod(mm.NonbondedForce.NoCutoff)\n\n# Ar atom\nsystem.addParticle(mass_Ar)\nnon_bonded_force.addParticle(charge_Ar, sigma_Ar, epsilon_Ar)\n\n# Xe atom\nsystem.addParticle(mass_Xe)\nnon_bonded_force.addParticle(charge_Xe, sigma_Xe, epsilon_Xe)\n\nsystem.addForce(non_bonded_force)\n\nintegrator = mm.VerletIntegrator(2*unit.femtoseconds)\nplatform = mm.Platform.getPlatformByName('CUDA')\n\ncontext = mm.Context(system, integrator, platform)\n```\n\n\n```python\npositions = np.zeros([2,3], float) * unit.angstroms\n\nx = np.linspace(1.0, 8.0, 200, endpoint=True) * unit.angstroms\nV = [] * unit.kilocalories_per_mole\n\nfor xi in x:\n\n positions[1,0] = xi\n context.setPositions(positions)\n state = context.getState(getEnergy=True)\n potential_energy = state.getPotentialEnergy()\n V.append(potential_energy)\n```\n\n\n```python\nsigma = (sigma_Ar+sigma_Xe)/2.0\nepsilon = np.sqrt(epsilon_Ar*epsilon_Xe)\n\nV._value = np.array(V._value)\n\nxlim_figure = [0.01, 8.0]\nylim_figure = [-2.0, 10.0]\n\nplt.plot(x, V, linewidth=2)\nplt.plot(x, LJ(x, sigma, epsilon), linestyle='--', color='red')\nplt.xlim(xlim_figure)\nplt.ylim(ylim_figure)\nplt.xlabel('x [{}]'.format(x.unit.get_symbol()))\nplt.ylabel('V [{}]'.format(epsilon.unit.get_symbol()))\nplt.show()\n```\n\n## Liquid Argon model\n\n\n```python\nn_particles = 1000\nreduced_density = 0.75\nmass = 39.948 * unit.amu\nsigma = 3.404 * unit.angstroms\nepsilon = 0.238 * unit.kilocalories_per_mole\ncharge = 0.0 * unit.elementary_charge\n\ntemperature = 300.0 * unit.kelvin\n\nintegration_timestep = 2.0 * unit.femtoseconds\ncollisions_rate = 1.0 / unit.picoseconds\n\nequilibration_time = 1.0 * unit.nanoseconds\n\nproduction_time = 5.0 * unit.nanoseconds\nsaving_time = 50.0 * unit.picoseconds\n```\n\nThe Van der Waals radius of Argon is 1.88 angstroms.\n\n\n```python\nradius = 2.0**(-5/6) * sigma\nprint(radius)\n```\n\n\n```python\nvolume_particles = n_particles * sigma**3\nvolume = volume_particles/reduced_density\nl_box = volume**(1/3)\n```\n\n\n```python\nsystem = mm.System()\n```\n\n\n```python\nv1 = np.zeros(3) * unit.angstroms\nv2 = np.zeros(3) * 
unit.angstroms\nv3 = np.zeros(3) * unit.angstroms\n\nv1[0] = l_box\nv2[1] = l_box\nv3[2] = l_box\n\nsystem.setDefaultPeriodicBoxVectors(v1, v2, v3)\n```\n\n\n```python\nnon_bonded_force = mm.NonbondedForce()\nnon_bonded_force.setNonbondedMethod(mm.NonbondedForce.CutoffPeriodic)\nnon_bonded_force.setCutoffDistance(3.0*sigma)\nnon_bonded_force.setUseSwitchingFunction(True)\nnon_bonded_force.setSwitchingDistance(2.0*sigma)\nnon_bonded_force.setUseDispersionCorrection(True)\n```\n\n\n```python\nfor _ in range(n_particles):\n system.addParticle(mass)\n non_bonded_force.addParticle(charge, sigma, epsilon)\n```\n\n\n```python\n_ = system.addForce(non_bonded_force)\n```\n\n\n```python\nspace = skopt.Space([[0.0, l_box._value], [0.0, l_box._value], [0.0, l_box._value]])\n```\n\n\n```python\nintegrator = mm.LangevinIntegrator(temperature, collisions_rate, integration_timestep)\nplatform = mm.Platform.getPlatformByName('CUDA')\ncontext = mm.Context(system, integrator, platform)\n```\n\n\n```python\ngrid_generator = skopt.sampler.Grid(use_full_layout=False)\ninitial_positions = grid_generator.generate(space.dimensions, n_particles)\ninitial_positions = np.array(initial_positions)*unit.angstroms\n```\n\n\n```python\ncontext.setPositions(initial_positions)\ncontext.setVelocitiesToTemperature(temperature)\n```\n\n\n```python\nstate=context.getState(getEnergy=True)\nprint(\"Before minimization: {}\".format(state.getPotentialEnergy()))\nmm.LocalEnergyMinimizer_minimize(context)\nstate=context.getState(getEnergy=True)\nprint(\"After minimization: {}\".format(state.getPotentialEnergy()))\n```\n\n\n```python\nequilibration_n_steps = int(equilibration_time/integration_timestep)\nintegrator.step(equilibration_n_steps)\ncontext.setTime(0.0*unit.picoseconds)\n```\n\n\n```python\nproduction_n_steps = int(production_time/integration_timestep)\nsaving_n_steps = int(saving_time/integration_timestep)\nn_saving_periods = int(production_n_steps/saving_n_steps)\n\ntime = np.zeros([n_saving_periods]) * unit.nanoseconds\ntrajectory = np.zeros([n_saving_periods, n_particles, 3]) * unit.angstroms\npotential_energy = np.zeros([n_saving_periods]) * unit.kilocalories_per_mole\n\nfor ii in tqdm(range(n_saving_periods)):\n integrator.step(saving_n_steps)\n state = context.getState(getPositions=True, getEnergy=True)\n time[ii] = state.getTime()\n trajectory[ii,:,:] = state.getPositions(asNumpy=True)\n potential_energy[ii] = state.getPotentialEnergy()\n```\n\n\n```python\ntrajectory_mem = trajectory.size * trajectory.itemsize * unit.bytes\nprint('Trajectory size: {} GB'.format(trajectory_mem._value/(1024*1024*1024)))\n```\n\n\n```python\nl_box\n```\n\n\n```python\ntrajectory.max()\n```\n\nTo do:\n\n- Compute the diffusion of the particles\n- Compute the RDF\n- Energy\n", "meta": {"hexsha": "0456b4d03eb0d0aa9ca356d326da11587502340f", "size": 212590, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "docs/contents/LJ_simulation.ipynb", "max_stars_repo_name": "uibcdf/OpenMembrane", "max_stars_repo_head_hexsha": "c9705cb32706b882bdb3d75d19539ca323e6b741", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "docs/contents/LJ_simulation.ipynb", "max_issues_repo_name": "uibcdf/OpenMembrane", "max_issues_repo_head_hexsha": "c9705cb32706b882bdb3d75d19539ca323e6b741", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, 
"max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/contents/LJ_simulation.ipynb", "max_forks_repo_name": "uibcdf/OpenMembrane", "max_forks_repo_head_hexsha": "c9705cb32706b882bdb3d75d19539ca323e6b741", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 156.7772861357, "max_line_length": 29540, "alphanum_fraction": 0.8942565502, "converted": true, "num_tokens": 6868, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9196425333801889, "lm_q2_score": 0.9111797154386843, "lm_q1q2_score": 0.8379596218706712}} {"text": "# Integrating Over Parameters, Tractability, MLE, MAP, Bayesian, and Probability Theory\n\n## Probability Theory Basics\n\n**Rule I**: $\\displaystyle p(a)=\\int p(a,b)\\,db$\n\n**Rule II**: $\\displaystyle p(a\\mid b)=\\frac{p(a,b)}{p(b)}$\n\n**Bayes Rule**: $\\displaystyle p(a\\mid b)=\\frac{p(b\\mid a)\\cdot p(a)}{p(b)}$\n\n## Maximum Likelihood Estimation (MLE)\n\nMethod for estimating the parameter(s) of a distribution (?).\n\nGiven is a set of points $\\mathcal{D}=\\{y_n\\}_{n=1}^N$.\nWe _choose to model_ the points with a normal distribution $\\mathcal{N}(\\mu,\\sigma)$ so we can assess the probability of a new point $y$ as $p(y;\\mu,\\sigma)\\sim\\mathcal{N}(\\mu,\\sigma)$.\nThe goal is to be able to compute $p(y;\\mu,\\sigma)$ for any $y$.\nTo be able to do that we need to estimate $\\mu$ and $\\sigma$ and that is what we use MLE for.\n\nThe data is all information we have, so we want $p(\\mathcal{D};\\mu,\\sigma)$ to be high.\nWe assume the data is i.i.d. given the parameters $\\mu$ and $\\sigma$.\nWe can therefore write\n\n$$p(\\mathcal{D};\\mu,\\sigma)=\\prod_{n=1}^Np(y_n;\\mu,\\sigma)$$\n\nWhat we actually look for are the parameters.\nWe are interested in the particular set of parameters for which $p(\\mathcal{D};\\mu,\\sigma)$ is maximized.\nLet $\\mu^\\star$ and $\\sigma^\\star$ denote this maximizer.\nWe search for\n\\begin{align}\n\\mu^\\star=&\\arg\\max_\\mu p(\\mathcal{D};\\mu,\\sigma)\\\\\n=&\\arg\\max_\\mu\\prod_{n=1}^Np(y_n;\\mu,\\sigma)\\\\\n=&\\arg\\max_\\mu\\log\\prod_{n=1}^Np(y_n;\\mu,\\sigma)\\\\\n=&\\arg\\max_\\mu\\sum_{n=1}^N\\log p(y_n;\\mu,\\sigma)\n\\end{align}\n\nWe can apply a $\\log$ to the product because it will not change the maximizer.\n\nThe same formulation can be written down for $\\sigma^\\star$ to find the optimal parameter here.\n\nIn order to _actually_ get a value for $\\mu^\\star$ we replace $p(y_n;\\mu,\\sigma)$ with the definition of our model.\nAbove it is a normal distribution, so it would be TODO.\n\nDepending on the model there may be a closed-form solution for computing maximizer (i.e., the parameterization for which the data is the most likely) or one can restort to gradient descent to find it.\n\nTODO: apply this to linear regression for example. 
How can we get $p(y;w)$ there?\n\n## Maximum A Posteriori (MAP)\n\nMethod for estimating the parameter(s) of a distribution (?).\nJust like MLE, except the probabilistic approach is a bit different.\n\nPreviously, with MLE, we modeled $p(\\mathcal{D};\\mu,\\sigma)$, that is, the probability of the data given the parameters.\nWe then looked for the parameters for which this probability is the highest.\nNow we model the probability $p(\\mu\\mid\\mathcal{D})$, that is, the probability of the parameters given the data.\nNote that $\\sigma$ is also a parameter; it is left out for readability.\n\nAgain, analogous to MLE, we seek the argument that maximizes the probability.\nMathematically speaking, that is\n\n$$\\mu^\\star=\\arg\\max_\\mu p(\\mu\\mid\\mathcal{D})\\,.$$\n\nFollowing Bayes rule, \n\n$$p(\\mu\\mid\\mathcal{D})=\\frac{p(\\mathcal{D}\\mid\\mu)\\cdot p(\\mu)}{p(\\mathcal{D})}\\,.$$\n\nWe can plug the rewritten version of $p(\\mu\\mid\\mathcal{D})$ into the optimization objective from above:\n\n\\begin{align}\n\\mu^\\star=&\\arg\\max_\\mu p(\\mu\\mid\\mathcal{D})\\\\\n=&\\arg\\max_\\mu\\frac{p(\\mathcal{D}\\mid\\mu)\\cdot p(\\mu)}{p(\\mathcal{D})}\\\\\n=&\\arg\\max_\\mu p(\\mathcal{D}\\mid\\mu)\\cdot p(\\mu)\n\\end{align}\n\nNote that the denominator $p(\\mathcal{D})$ can be dropped because it does not depend on $\\mu$.\n\nJust like we did with MLE, we first expand $p(\\mathcal{D}\\mid\\mu)$ into a product over all data samples available to us, and second, apply a $\\log$ to ease the optimization, without affecting the position of the maximum (the best parameter configuration).\n\n\\begin{align}\n\\mu^\\star=&\\arg\\max_\\mu p(\\mathcal{D}\\mid\\mu)\\cdot p(\\mu)\\\\\n=&\\arg\\max_\\mu\\prod_{n=1}^Np(y_n\\mid\\mu)\\cdot p(\\mu)\\\\\n=&\\arg\\max_\\mu\\log\\left(\\prod_{n=1}^N p(y_n\\mid\\mu)\\cdot p(\\mu)\\right)\\\\\n=&\\arg\\max_\\mu\\sum_{n=1}^N\\log p(y_n\\mid\\mu)+\\log p(\\mu)\\\\\n\\end{align}\n\nThis is very similar to MLE; the only difference is that there is a new component in the optimization problem that we need to consider.\nIt is the probability of the parameters, $p(\\mu)$, also known as the **prior**.\n\nThis is where we have to _explicitly_ make an assumption, for example, by saying our parameter is normally distributed. This is in contrast to MLE, where this assumption was made implicitly (?). The sketch below makes the normal-prior case concrete. 
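\n\nThe following worked example is an editorial addition (it is not the author's derivation): take the Gaussian likelihood from the MLE section, $p(y_n\\mid\\mu)=\\mathcal{N}(\\mu,\\sigma)$ with $\\sigma$ fixed, and choose a zero-mean Gaussian prior for the parameter, $p(\\mu)=\\mathcal{N}(0,\\tau)$. Then\n\n\\begin{align}\n\\mu^\\star=&\\arg\\max_\\mu\\sum_{n=1}^N\\log p(y_n\\mid\\mu)+\\log p(\\mu)\\\\\n=&\\arg\\max_\\mu\\sum_{n=1}^N\\left(-\\frac{(y_n-\\mu)^2}{2\\sigma^2}\\right)-\\frac{\\mu^2}{2\\tau^2}+\\text{const}\\\\\n=&\\arg\\min_\\mu\\sum_{n=1}^N(y_n-\\mu)^2+\\frac{\\sigma^2}{\\tau^2}\\,\\mu^2\\,.\n\\end{align}\n\nThe Gaussian prior therefore turns the maximum likelihood objective into a squared-error objective with an L2 penalty whose weight is $\\lambda=\\sigma^2/\\tau^2$. With a likelihood of the form $p(y_n\\mid w)=\\mathcal{N}(w^\\top x_n,\\sigma)$ the same steps give MSE loss with L2 regularization on $w$.\n\n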
TODO: plug in normal distribution definition and show how it resolves to MSE loss with L2 regularization (the sketch above does this for the scalar mean).\n\n## Bayesian Modeling\n\nInstead of just estimating a single value per parameter (like above just a scalar for the mean of our normal distribution), we now **model the parameters as random variables** themselves.\n\nWhat was previously the fixed value $\\mu$ is still written $\\mu$, but it now stands for a random variable, which is defined by a parameterized distribution we choose.\n\n**Terminology**\n\n* Parameters: $\\mu$\n* Data: $\\mathcal{D}$\n* Prior: $p(\\mu)$\n* Posterior: generally _after having seen the data_, i.e., $p(\\cdot\\mid\\mathcal{D})$, for example the updated parameters $p(\\mu\\mid\\mathcal{D})$\n* Likelihood: model of the data given the parameters, i.e., $p(y\\mid\\mu,\\mathcal{D})=p(y\\mid\\mu)$\n* Posterior predictive distribution: $p(y\\mid\\mathcal{D})$\n* Evidence: probability of the data $p(\\mathcal{D})$\n* Marginal: simplified form in which a random variable was integrated away, e.g., $p(y)$ where the $\\mu$ was integrated over\n* Marginal likelihood: ?\n\n**Posterior** is always after having seen the data.\n\nWhat we are interested in is modeling the probability of a new (test) data point $y$ given all the information we have, namely the data $\\mathcal{D}$. It is called the **posterior predictive distribution**.\n\n\\begin{align}\np(y\\mid\\mathcal{D})=&\\int p(y\\mid\\mu,\\mathcal{D})\\cdot p(\\mu\\mid\\mathcal{D})\\,d\\mu\\\\\n=&\\int p(y\\mid\\mu)\\cdot p(\\mu\\mid\\mathcal{D})\\,d\\mu\n\\end{align}\n\nWe constructed the integral using Rule I.\nThe choice of $\\mu$ is an arbitrary one; one could have integrated over anything, but it is a _helpful_ one. In the second equality we remove $\\mathcal{D}$ because the data is i.i.d. given the parameter. Having the parameter plus the data does not provide additional information.\n\nFor example: We know the mean of a normal distribution. Gathering additional samples from it will not provide us with additional information.\n\nThere are now two components in the integral.\nThe first is the **likelihood** $p(y\\mid\\mu,\\mathcal{D})$, which is something we can compute because it is our model. It is *easy* because how to model the $y$s was our own choice.\n\nFor example: $p(y\\mid\\mu)=\\mathcal{N}(\\mu,\\sigma=1)$; $\\sigma=1$ for brevity.\n\nAs for the second part, the so-called **posterior distribution of the parameters** $p(\\mu\\mid\\mathcal{D})$, we can again apply Bayes rule as done in MAP above:\n\n$$p(\\mu\\mid\\mathcal{D})=\\frac{p(\\mathcal{D}\\mid\\mu)\\cdot p(\\mu)}{p(\\mathcal{D})}\\,.$$\n\nThis time around we cannot just discard $p(\\mathcal{D})$, because we are not looking for the $\\arg\\max_\\mu$ but the actual probability $p(\\mu\\mid\\mathcal{D})$.\n\nThere are two difficulties: one is the numerator, the other the denominator.\n\n**Numerator**. Because the data is i.i.d. given the parameters we can break it up into a product of the likelihoods of the data samples. Likelihood, because the part that we condition on is unknown, not the one in the front (where it would be a probability).\n\n$$p(\\mathcal{D}\\mid\\mu)\\cdot p(\\mu)=\\prod_{i=1}^Np(y_i\\mid\\mu)\\cdot p(\\mu)$$\n\nThe two distributions which we multiply with each other can easily turn into a new distribution which is hard to handle / unknown. 
In some cases, though, we do know the result; for example, for two normal distributions we know what the product is (also normal).\n\nMore generally, if the two distributions (likelihood and prior) are **conjugate** to each other, the resulting posterior has the same type of distribution as the prior. Examples are: \n* normal and normal -> normal,\n* bernoulli and beta -> beta,\n* multinoulli and dirichlet -> dirichlet,\n* ...\n\nExample.\n\nOur prior is $$p(\\mu)=\\operatorname{Beta}(1,1)=\\frac{\\mu^{\\alpha-1}(1-\\mu)^{\\beta-1}}{B(\\alpha,\\beta)}$$\n\nwith ${\\displaystyle \\mathrm {B} (\\alpha ,\\beta )={\\frac {\\Gamma (\\alpha )\\Gamma (\\beta )}{\\Gamma (\\alpha +\\beta )}}}$.\n\nThe likelihood is $$p(y\\mid\\mu)=\\operatorname{Ber}(\\mu)=\\mu^{y}(1-\\mu)^{1-y}$$\n\nThis gives the numerator\n\n$$\\mu^{y}(1-\\mu)^{1-y}\\cdot\\frac{\\mu^{\\alpha-1}(1-\\mu)^{\\beta-1}}{B(\\alpha,\\beta)}\\,.$$\n\nFor a data point $y=1$ it is\n\n\\begin{align}\n&\\mu\\cdot\\frac{\\mu^{\\alpha-1}(1-\\mu)^{\\beta-1}}{B(\\alpha,\\beta)}\\\\\n=&\\frac{1}{B(\\alpha,\\beta)}\\cdot\\mu\\cdot\\mu^{\\alpha-1}(1-\\mu)^{\\beta-1}\\\\\n=&\\frac{1}{B(\\alpha,\\beta)}\\cdot\\mu^{\\underbrace{(\\alpha+1)}_{\\text{new }\\alpha}-1}(1-\\mu)^{\\beta-1}\\\\\n\\end{align}\n\nIt is apparent how the multiplication with Bernoulli did not change the distribution type.\nWe still have a Beta distribution and its parameter $\\alpha$ has changed slightly to turn into $\\alpha+1$.\nHad our data point been $y=0$, $\\alpha$ would have stayed the same whereas $\\beta$ would have been incremented.\n\n**Denominator**. $p(\\mathcal{D})$ is just a constant. So the part that actually determines the _distribution part_ of $p(\\mu\\mid\\mathcal{D})$ is the numerator.\n\nIn the following, $\\mathcal{D}=\\{y\\}$, i.e., there is only a single data point. Taking the numerator for the example from above and plugging it back into the fraction we get\n\n\\begin{align}\np(\\mu\\mid\\mathcal{D})=&\\frac{\\mu^{y}(1-\\mu)^{1-y}\\cdot\\frac{\\mu^{\\alpha-1}(1-\\mu)^{\\beta-1}}{B(\\alpha,\\beta)}}{p(\\mathcal{D})}\\\\\n=&\\underbrace{\\frac{1}{p(\\mathcal{D})\\cdot B(\\alpha,\\beta)}}_{\\text{numbers}}\\cdot\\underbrace{\\mu^{y}(1-\\mu)^{1-y}\\cdot\\mu^{\\alpha-1}(1-\\mu)^{\\beta-1}}_{\\text{depends on }\\mu}\\\\\n=&\\underbrace{\\frac{1}{p(\\mathcal{D})\\cdot B(\\alpha,\\beta)}}_{\\text{numbers}}\\cdot\\underbrace{\\mu^{(\\alpha+y)-1}(1-\\mu)^{(\\beta+1-y)-1}}_{\\text{depends on }\\mu}\n\\end{align}\n\nThe additional information comes from the fact that the distribution has to integrate to 1. That helps us determine the unknown component $p(\\mathcal{D})$.\n\n\\begin{align}\n\\int p(\\mu\\mid\\mathcal{D})\\,d\\mu=&1\\\\\n\\int \\frac{1}{p(\\mathcal{D})\\cdot B(\\alpha,\\beta)}\\cdot\\mu^{(\\alpha+y)-1}(1-\\mu)^{(\\beta+1-y)-1}\\,d\\mu=&1\\\\\n\\frac{1}{p(\\mathcal{D})\\cdot B(\\alpha,\\beta)}\\cdot\\int\\mu^{(\\alpha+y)-1}(1-\\mu)^{(\\beta+1-y)-1}\\,d\\mu=&1\\\\\n\\int\\mu^{(\\alpha+y)-1}(1-\\mu)^{(\\beta+1-y)-1}\\,d\\mu=&\\underbrace{p(\\mathcal{D})}_\\text{unknown}\\cdot B(\\alpha,\\beta)\\\\\n\\int\\mu^{(\\alpha+y)-1}(1-\\mu)^{(\\beta+1-y)-1}\\,d\\mu=&B(\\alpha+y,\\beta+1-y)\n\\end{align}\n\nThe last equality holds because we know that the left side is an unnormalized $\\operatorname{Beta}(\\alpha+y,\\beta+1-y)$ distribution PDF. So we define the right-hand side to be its corresponding normalization constant, which is $B(\\alpha+y,\\beta+1-y)$. 
And $p(\\mathcal{D})$ is just a part of it.\n\n$$p(\\mathcal{D})=\\frac{B(\\alpha+y,\\beta+1-y)}{B(\\alpha,\\beta)}$$\n\n---\n\nNow we jump back to the posterior predictive distribution, where we aim to determine the probability of a test point $y^*$ given the data. The above paragraphs helped determine the $p(\\mu\\mid\\mathcal{D})$ part of it, and the likelihood is given by $p(y\\mid\\mu)=\\operatorname{Ber}(\\mu)=\\mu^{y}(1-\\mu)^{1-y}$\n\n\\begin{align}\np(y^*\\mid\\mathcal{D})=&\\int p(y^*\\mid\\mu)\\cdot p(\\mu\\mid\\mathcal{D})\\,d\\mu=\\mathbb{E}_{p(\\mu\\mid\\mathcal{D})}\\left[p(y^*\\mid\\mu)\\right]\\\\\n=&\\int p(y^*\\mid\\mu)\\cdot\\frac{1}{p(\\mathcal{D})\\cdot B(\\alpha,\\beta)}\\cdot\\mu^{(\\alpha+y)-1}(1-\\mu)^{(\\beta+1-y)-1}\\,d\\mu\\\\\n=&\\int p(y^*\\mid\\mu)\\cdot\\frac{1}{B(\\alpha+y,\\beta+1-y)}\\cdot\\mu^{(\\alpha+y)-1}(1-\\mu)^{(\\beta+1-y)-1}\\,d\\mu\\\\\n=&\\int \\mu^{y^*}(1-\\mu)^{1-y^*}\\cdot\\frac{1}{B(\\alpha+y,\\beta+1-y)}\\cdot\\mu^{(\\alpha+y)-1}(1-\\mu)^{(\\beta+1-y)-1}\\,d\\mu\\\\\n=&\\frac{1}{B(\\alpha+y,\\beta+1-y)}\\cdot\\int \\mu^{y^*}(1-\\mu)^{1-y^*}\\cdot\\mu^{(\\alpha+y)-1}(1-\\mu)^{(\\beta+1-y)-1}\\,d\\mu\\\\\n=&\\frac{1}{B(\\alpha+y,\\beta+1-y)}\\cdot\\int \\mu^{y^*}(1-\\mu)^{1-y^*}\\cdot\\mu^{(\\alpha+y)-1}(1-\\mu)^{(\\beta+1-y)-1}\\,d\\mu\\\\\n=&\\frac{1}{B(\\alpha+y,\\beta+1-y)}\\cdot\\int \\mu^{(\\alpha+y+y^*)-1}(1-\\mu)^{(\\beta+1-y+1-y^*)-1}\\,d\\mu\\\\\n=&\\frac{1}{B(\\alpha+y,\\beta+1-y)}\\cdot B(\\alpha+y+y^*,\\beta+1-y+1-y^*)\\\\\n=&\\frac{B(\\alpha+y+y^*,\\beta+1-y+1-y^*)}{B(\\alpha+y,\\beta+1-y)}\\\\\n\\end{align}\n\n\\begin{align}\np(y^*\\mid\\mathcal{D})=&\\begin{cases}\n\\underbrace{\\frac{B(\\alpha+y,\\beta-y+2)}{B(\\alpha+y,\\beta+1-y)}}_{1-\\mu?} & \\text{if }y^*=0\\\\\n\\underbrace{\\frac{B(\\alpha+y+1,\\beta+1-y)}{B(\\alpha+y,\\beta+1-y)}}_{\\mu?} & \\text{if }y^*=1\n\\end{cases}\\\\\n=&\\operatorname{Ber}\\left(\\frac{B(\\alpha+y+1,\\beta+1-y)}{B(\\alpha+y,\\beta+1-y)}\\right)\n\\end{align}\n\nNote: Here $y$ is the data that we learned from (a single point).\n\nHere we would have almost given up. 
What we hope here is (as **successfully** validated in the following) that \n\n\\begin{align}\n\\frac{B(\\alpha+y,\\beta-y+2)}{B(\\alpha+y,\\beta-y+1)}+\\frac{B(\\alpha+y+1,\\beta-y+1)}{B(\\alpha+y,\\beta-y+1)}\\overset{?}{=}&1\\\\\\\\\n\\frac{B(\\alpha+y,\\beta-y+2)+B(\\alpha+y+1,\\beta-y+1)}{B(\\alpha+y,\\beta-y+1)}\\overset{?}{=}&1\\\\\n\\end{align}\n\nWith the definition of $B$ \n\n$${\\displaystyle \\mathrm {B} (\\alpha ,\\beta )={\\frac {\\Gamma (\\alpha )\\Gamma (\\beta )}{\\Gamma (\\alpha +\\beta )}}}$$\n\nwe get the fraction\n\n\\begin{align}\n\\frac{\\frac{\\Gamma(\\alpha+y)\\Gamma(\\beta-y+2)}{\\Gamma(\\alpha+y+\\beta-y+2)}+\\frac{\\Gamma(\\alpha+y+1)\\Gamma(\\beta-y+1)}{\\Gamma(\\alpha+y+1+\\beta-y+1)}}{\\frac{\\Gamma(\\alpha+y)\\Gamma(\\beta-y+1)}{\\Gamma(\\alpha+y+\\beta-y+1)}}\\overset{?}{=}&1\\\\\\\\\n\\frac{\\frac{\\Gamma(\\alpha+y)\\Gamma(\\beta-y+2)}{\\Gamma(\\alpha+\\beta+2)}+\\frac{\\Gamma(\\alpha+y+1)\\Gamma(\\beta-y+1)}{\\Gamma(\\alpha+\\beta+2)}}{\\frac{\\Gamma(\\alpha+y)\\Gamma(\\beta-y+1)}{\\Gamma(\\alpha+\\beta+1)}}\\overset{?}{=}&1\\\\\\\\\n\\frac{\\frac{\\Gamma(\\alpha+y)\\Gamma(\\beta-y+2)+\\Gamma(\\alpha+y+1)\\Gamma(\\beta-y+1)}{\\Gamma(\\alpha+\\beta+2)}}{\\frac{\\Gamma(\\alpha+y)\\Gamma(\\beta-y+1)}{\\Gamma(\\alpha+\\beta+1)}}\\overset{?}{=}&1\\\\\n\\end{align}\n\nWith the definition of ${\\displaystyle \\Gamma (n)=(n-1)!}$ we continue to pursue our goals at 00:43 am...\n\n$$\\Gamma (n+1)=\\Gamma(n)\\cdot n=n!$$\n\n\\begin{align}\n\\frac{\\frac{\\Gamma(\\alpha+y)\\Gamma(\\beta-y+2)+\\Gamma(\\alpha+y+1)\\Gamma(\\beta-y+1)}{\\Gamma(\\alpha+\\beta+2)}}{\\frac{\\Gamma(\\alpha+y)\\Gamma(\\beta-y+1)}{\\Gamma(\\alpha+\\beta+1)}}\\overset{?}{=}&1\\\\\\\\\n\\frac{\n (\\Gamma(\\alpha+y)\n \\Gamma(\\beta-y+2)+\\Gamma(\\alpha+y+1)\\Gamma(\\beta-y+1))\\cdot\\Gamma(\\alpha+\\beta+1)}\n {\\Gamma(\\alpha+\\beta+2)\\cdot\\Gamma(\\alpha+y)\\cdot\\Gamma(\\beta-y+1)}\\overset{?}{=}&1\\\\\\\\\n\\frac{\n (\\Gamma(\\alpha+y)\n \\Gamma(\\beta-y+2)+\\Gamma(\\alpha+y+1)\\Gamma(\\beta-y+1))}\n {(\\alpha+\\beta+1)\\cdot\\Gamma(\\alpha+y)\\cdot\\Gamma(\\beta-y+1)}\\overset{?}{=}&1\\\\\\\\\n\\frac{\n (\n \\Gamma(\\beta-y+2)+(\\alpha+y)\\Gamma(\\beta-y+1))}\n {(\\alpha+\\beta+1)\\cdot\\Gamma(\\beta-y+1)}\\overset{?}{=}&1\\\\\\\\\n\\frac{\\beta-y+1+\\alpha+y}{\\alpha+\\beta+1}\\overset{?}{=}&1\\\\\\\\\n\\frac{\\beta+1+\\alpha}{\\alpha+\\beta+1}\\overset{!}{=}&1\\\\\\\\\n\\end{align}\n\n---\n\nWhat we did above is a special case. It's an example for a conjugate prior (bernoulli likelihood and beta prior) in which a closed-form solution was available. In cases where there is no closed-form solution, one can solve\n\n$$p(y^*\\mid\\mathcal{D})=\\int p(y^*\\mid\\mu)\\cdot p(\\mu\\mid\\mathcal{D})\\,d\\mu=\\mathbb{E}_{p(\\mu\\mid\\mathcal{D})}\\left[p(y^*\\mid\\mu)\\right]$$\n\nby sampling from $p(\\mu\\mid\\mathcal{D})$ to approximate the expectation (**Monte Carlo**). So here we would approximate $p(y^*\\mid\\mu)$.\n\nAlternatively, one can (probably?) approximate $p(\\mathcal{D}\\mid\\mu)$ in\n\n$$p(\\mu\\mid\\mathcal{D})=\\frac{p(\\mathcal{D}\\mid\\mu)\\cdot p(\\mu)}{p(\\mathcal{D})}\\,.$$\n\n**Variational inference** (probably?) 
fits into this picture where one says we define $q$ to approximate $p(y^*\\mid\\mathcal{D})$ by minimizing the KL between the two.\n", "meta": {"hexsha": "c4c5b7c4e552176521cf1b424b2208133ee93a67", "size": 21680, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "probability-theory/integrating-over-parameters.ipynb", "max_stars_repo_name": "Simsso/Machine-Learning-Tinkering", "max_stars_repo_head_hexsha": "0a024aab0bb1ac5fbdd2f77380ab36d192278701", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-02-19T16:30:04.000Z", "max_stars_repo_stars_event_max_datetime": "2020-03-13T02:22:29.000Z", "max_issues_repo_path": "probability-theory/integrating-over-parameters.ipynb", "max_issues_repo_name": "Simsso/Machine-Learning-Tinkering", "max_issues_repo_head_hexsha": "0a024aab0bb1ac5fbdd2f77380ab36d192278701", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2021-02-17T14:00:33.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-10T03:05:35.000Z", "max_forks_repo_path": "probability-theory/integrating-over-parameters.ipynb", "max_forks_repo_name": "Simsso/Machine-Learning-Tinkering", "max_forks_repo_head_hexsha": "0a024aab0bb1ac5fbdd2f77380ab36d192278701", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 45.260960334, "max_line_length": 324, "alphanum_fraction": 0.5523523985, "converted": true, "num_tokens": 5296, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.933430812881347, "lm_q2_score": 0.8976953016868439, "lm_q1q2_score": 0.8379364551733167}} {"text": "\n\n# First exploration of finite-difference approximations to derivatives\n\nNote: Prof. Brown's [differentiation notebook](https://github.com/cu-numcomp/numcomp-class/blob/master/Differentiation.ipynb) for CSCI-3656 is quite good, and covers [algorithmic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation) using JAX. [](https://colab.research.google.com/github/cu-numcomp/numcomp-class/blob/master/Differentiation.ipynb)\n\n- Task 1: try the simple forward differences formula to approximate the derivative of $f(x)=\\sin(x)$. The formula is:\n$$f'(x) \\approx \\frac{ f(x+h)-f(x) }{h}$$\nTry this for a range of stepsizes $h$. 
Can we make the approximation (aka \"truncation\") error arbitrarily small?\n\n\n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# Approximate the derivative of sin(x)\nf = lambda x : np.sin(x)\ntrueDerivative = lambda x : np.cos(x)\n\ndef twoPointForwardDifferences( f, x, h ):\n \"\"\" The simplest, plain vanilla finite difference approx\"\"\"\n return ( f(x+h) - f(x) )/h\n\n# Pick some point x at which we want the derivative\nx = 2.\ndf = trueDerivative(x)\ntwoPointF= lambda h : twoPointForwardDifferences( f, x, h)\n\n# Pick stepsizes h\nhList = np.logspace(-1,-12,30)\n\n# Plot\nplt.loglog( hList, np.abs( df - twoPointF(hList) ), 'o-', label=\"2 Pt Forward Diff\" );\nplt.xlabel(\"h\");\nplt.ylabel(\"Error\");\n```\n\n## Getting fancier: adding in other methods\n- Task 2: try other finite difference methods (e.g, 3 point forward differences; 3 point centered differences; 5 point centered differences)\n\n\n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# Approximate the derivative of sin(x)\nf = lambda x : np.sin(x)\ntrueDerivative = lambda x : np.cos(x)\n\ndef twoPointForwardDifferences( f, x, h ):\n \"\"\" The simplest, plain vanilla finite difference approx\"\"\"\n return ( f(x+h) - f(x) )/h\ndef threePointForwardDifferences( f, x, h):\n \"\"\"Eq. (4.4) in the Burden and Faires book\"\"\"\n return (-3*f(x)+4*f(x+h)-f(x+2*h))/(2*h)\ndef threePointCenteredDifferences( f, x, h ):\n \"\"\" Book calls it 3 points, since it sort of includes f(x) \"\"\"\n return ( f(x+h) - f(x-h) )/(2*h)\ndef fivePointCenteredDifferences( f, x, h ):\n \"\"\" Eq. (4.6) in Burden and Faires book \"\"\"\n return ( f(x-2*h) - 8*f(x-h) + 8*f(x+h) - f(x+2*h) )/(12*h)\n\n# Pick some point x at which we want the derivative\nx = 2.\ndf = trueDerivative(x)\ntwoPointF= lambda h : twoPointForwardDifferences( f, x, h)\nthreePointC= lambda h : threePointCenteredDifferences( f, x, h)\nthreePointF= lambda h : threePointForwardDifferences( f, x, h)\nfivePointC= lambda h : fivePointCenteredDifferences( f, x, h)\n\n# Pick stepsizes h\nhList = np.logspace(-1,-12,20)\n\n# Plot\nplt.figure(figsize=(10,8)) \nplt.loglog( hList, np.abs( df - twoPointF(hList) ), 'o-', label=\"2 Pt Forward Diff\" );\nplt.loglog( hList, np.abs( df - threePointC(hList) ), 'o-', label=\"3 Pt Centered Diff\" );\nplt.loglog( hList, np.abs( df - threePointF(hList) ), 'o-', label=\"3 Pt Forward Diff\" );\nplt.loglog( hList, np.abs( df - fivePointC(hList) ), 'o-', label=\"5 Pt Centered Diff\" )\nplt.legend();\nplt.xlabel(\"h\");\nplt.ylabel(\"Error\");\n```\n\n## Can we estimate how low the error can be?\nIt's a trade off between truncation error (low $h$ is good), and roundoff error (low $h$ is bad)\n- Task 3: can we predict how low each method gets? Assume error looks like\n$$\\epsilon/h + h^k$$ where $k$ is the order, and use `sympy` to minimize this (e.g., differentiate it and set it equal to 0). 
You may want to declare $\\epsilon$ as a symbol like: `e = sym.symbols('\\epsilon',positive=True)`\n\n\n```\nimport sympy as sym\nfrom sympy import init_printing\ninit_printing()\n\nh = sym.symbols('h')\ne = sym.symbols('\\epsilon',positive=True)\ng = e/h + h\nd = sym.diff( g, h)\nd\nh0 = list( sym.solveset( d, h, domain=sym.S.Reals) )[0]\nprint(\"Order 1, (optimal h, value-at-optimal)\")\nh0, g.subs(h,h0)\n```\n\n\n```\ng = e/h + h**2\nd = sym.diff( g, h)\nh0 = list( sym.solveset( d, h, domain=sym.S.Reals) )[0]\nprint(\"Order 2\")\nh0, g.subs(h,h0)\n```\n\n\n```\ng = e/h + h**3\nd = sym.diff( g, h)\nh0 = list( sym.solveset( d, h, domain=sym.S.Reals) )[0]\nprint(\"Order 3\")\nh0, g.subs(h,h0)\n```\n\n\n```\ng = e/h + h**4\nd = sym.diff( g, h)\nh0 = list( sym.solveset( d, h, domain=sym.S.Reals) )[0]\nprint(\"Order 4\")\nh0, g.subs(h,h0)\n```\n\n### And put these in the plot\n- 2 pt forward diff is $O(h)$\n- 3 pt forward diff is $O(h^2)$\n- 3 pt centered diff is $O(h^2)$\n- 5 pt centered diff is $O(h^4)$\n\n\n```\n# So add to the plot\nplt.figure(figsize=(10,8)) \nplt.loglog( hList, np.abs( df - twoPointF(hList) ), 'o-', label=\"2 Pt Forward Diff\" );\nplt.loglog( hList, np.abs( df - threePointC(hList) ), 'o-', label=\"3 Pt Centered Diff\" );\nplt.loglog( hList, np.abs( df - threePointF(hList) ), 'o-', label=\"3 Pt Forward Diff\" );\nplt.loglog( hList, np.abs( df - fivePointC(hList) ), 'o-', label=\"5 Pt Centered Diff\" )\nplt.legend();\nplt.xlabel(\"h\");\nplt.ylabel(\"Error\");\n\n# And can we predict the values of h which gives us the smallest amount?\neps = np.finfo(float).eps\neps**(1/2)\neps**(1/4)\nplt.vlines( eps**(1/2), *plt.gca().get_ylim() ,linestyles='--',colors=\"C0\");\nplt.vlines( eps**(1/3), *plt.gca().get_ylim() ,linestyles='--',colors=\"C1\");\nplt.vlines( eps**(1/5), *plt.gca().get_ylim() ,linestyles='--',colors=\"C3\");\n\nplt.hlines( eps**(1/2), *plt.gca().get_xlim() ,linestyles='--',colors=\"C0\");\nplt.hlines( eps**(2/3), *plt.gca().get_xlim() ,linestyles='--',colors=\"C1\");\nplt.hlines( eps**(4/5), *plt.gca().get_xlim() ,linestyles='--',colors=\"C3\");\n```\n", "meta": {"hexsha": "b96ac8900fc597824046e03a5ca07c1e46c2bf9a", "size": 164764, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Demos/Ch4_FiniteDifferences.ipynb", "max_stars_repo_name": "skhadem/numerical-analysis-class", "max_stars_repo_head_hexsha": "a022fc2e73254800a1e193f94280446223ef01ae", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 10, "max_stars_repo_stars_event_min_datetime": "2020-08-25T19:11:46.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-12T21:49:44.000Z", "max_issues_repo_path": "Demos/Ch4_FiniteDifferences.ipynb", "max_issues_repo_name": "sabrinazhengliu/numerical-analysis-class", "max_issues_repo_head_hexsha": "1585b68d3f3c375259507bf18f03ba0332851edb", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-09-01T21:44:12.000Z", "max_issues_repo_issues_event_max_datetime": "2020-09-01T21:44:12.000Z", "max_forks_repo_path": "Demos/Ch4_FiniteDifferences.ipynb", "max_forks_repo_name": "sabrinazhengliu/numerical-analysis-class", "max_forks_repo_head_hexsha": "1585b68d3f3c375259507bf18f03ba0332851edb", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 13, "max_forks_repo_forks_event_min_datetime": "2020-08-25T21:25:17.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-03T04:14:26.000Z", "avg_line_length": 378.767816092, "max_line_length": 66138, 
"alphanum_fraction": 0.9193573839, "converted": true, "num_tokens": 1830, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8976952893703477, "lm_q2_score": 0.9334308170480065, "lm_q1q2_score": 0.8379364474171103}} {"text": "```python\ndef f(x):\n return x**3 -3*x**2 + x\n```\n\n\n```python\nfrom scipy.misc import derivative\n\nprint(derivative(f, 0, dx=1e-6))\nprint(derivative(f, 1, dx=1e-6))\n```\n\n 1.000000000001\n -2.000000000002\n\n\n\n```python\ndef relu(x):\n return np.where(x>0, x, 0)\n\nxx = np.linspace(-2, 2, 100)\nplt.plot(xx, relu(xx))\nplt.plot(0, 0, marker=\"o\", ms=10)\nplt.title(\"ReLu\")\nplt.xlabel(\"$x$\")\nplt.ylabel(\"ReLU(x)\")\nplt.show()\n```\n\n\n```python\nimport sympy\nsympy.init_printing(use_latex='mathjax')\n\n\n```\n\n\n```python\nx = sympy.symbols('x')\nx\n```\n\n\n\n\n$\\displaystyle x$\n\n\n\n\n```python\ntype(x)\n```\n\n\n\n\n sympy.core.symbol.Symbol\n\n\n\n\n```python\nf = x*sympy.exp(x)\nf\n```\n\n\n\n\n$\\displaystyle x e^{x}$\n\n\n\n\n```python\nsympy.diff(f)\n```\n\n\n\n\n$\\displaystyle x e^{x} + e^{x}$\n\n\n\n\n```python\nsympy.simplify(sympy.diff(f))\n```\n\n\n\n\n$\\displaystyle \\left(x + 1\\right) e^{x}$\n\n\n\n\n```python\nx, y = sympy.symbols('x y')\nf = x**2 + 4*x*y + 4*y**2\nf\n```\n\n\n\n\n$\\displaystyle x^{2} + 4 x y + 4 y^{2}$\n\n\n\n\n```python\nsympy.diff(f, x)\n```\n\n\n\n\n$\\displaystyle 2 x + 4 y$\n\n\n\n\n```python\nsympy.diff(f, y)\n```\n\n\n\n\n$\\displaystyle 4 x + 8 y$\n\n\n\n\n```python\nx, mu, sigma = sympy.symbols('x mu sigma')\nf = sympy.exp((x-mu)**2/sigma**2)\nf\n```\n\n\n\n\n$\\displaystyle e^{\\frac{\\left(- \\mu + x\\right)^{2}}{\\sigma^{2}}}$\n\n\n\n\n```python\nsympy.diff(f, x)\n```\n\n\n\n\n$\\displaystyle \\frac{\\left(- 2 \\mu + 2 x\\right) e^{\\frac{\\left(- \\mu + x\\right)^{2}}{\\sigma^{2}}}}{\\sigma^{2}}$\n\n\n\n\n```python\nsympy.simplify(sympy.diff(f,x))\n```\n\n\n\n\n$\\displaystyle \\frac{2 \\left(- \\mu + x\\right) e^{\\frac{\\left(\\mu - x\\right)^{2}}{\\sigma^{2}}}}{\\sigma^{2}}$\n\n\n\n\n```python\nsympy.diff(f, x, x)\n```\n\n\n\n\n$\\displaystyle \\frac{2 \\left(1 + \\frac{2 \\left(\\mu - x\\right)^{2}}{\\sigma^{2}}\\right) e^{\\frac{\\left(\\mu - x\\right)^{2}}{\\sigma^{2}}}}{\\sigma^{2}}$\n\n\n\n\n```python\nf = x**3 - 1\n```\n\n\n```python\nsympy.diff(f, x)\n```\n\n\n\n\n$\\displaystyle 3 x^{2}$\n\n\n\n\n```python\nk = sympy.symbols('k')\nf = sympy.log(x**2- 3*k)\nsympy.simplify(sympy.diff(f, x))\n```\n\n\n\n\n$\\displaystyle \\frac{2 x}{- 3 k + x^{2}}$\n\n\n\n\n```python\na,b = sympy.symbols('a b')\n```\n\n\n```python\nf = sympy.exp(a*(x**b))\nf\n```\n\n\n\n\n$\\displaystyle e^{a x^{b}}$\n\n\n\n\n```python\nsympy.simplify(sympy.diff(f, x))\n```\n\n\n\n\n$\\displaystyle a b x^{b - 1} e^{a x^{b}}$\n\n\n\n\n```python\ny = sympy.symbols('y')\n```\n\n\n```python\nf = sympy.exp(x**2 + 2*(y**2))\n\nfx = sympy.simplify(sympy.diff(f, x))\nfy = sympy.simplify(sympy.diff(f, y))\nfxx = sympy.simplify(sympy.diff(f, x, x))\nfxy = sympy.simplify(sympy.diff(f, x, y))\nfyx = sympy.simplify(sympy.diff(f, y, x))\nfyy = sympy.simplify(sympy.diff(f, y, y))\n\n\n```\n\n 2*x*exp(x**2 + 2*y**2) 4*y*exp(x**2 + 2*y**2) (4*x**2 + 2)*exp(x**2 + 2*y**2) 8*x*y*exp(x**2 + 2*y**2) 8*x*y*exp(x**2 + 2*y**2) (16*y**2 + 4)*exp(x**2 + 2*y**2)\n\n\n\n```python\nfx, fy, fxx, fxy, fyx, fyy\n```\n\n\n\n\n$\\displaystyle \\left( 2 x e^{x^{2} + 2 y^{2}}, \\ 4 y e^{x^{2} + 2 y^{2}}, \\ \\left(4 x^{2} + 2\\right) e^{x^{2} + 2 y^{2}}, \\ 8 x y e^{x^{2} + 2 y^{2}}, \\ 8 x y e^{x^{2} + 2 y^{2}}, \\ \\left(16 y^{2} + 
4\\right) e^{x^{2} + 2 y^{2}}\\right)$\n\n\n\n\n```python\nx = sympy.symbols('x')\nf = x*sympy.exp(x) + sympy.exp(x)\nf\n```\n\n\n\n\n$\\displaystyle x e^{x} + e^{x}$\n\n\n\n\n```python\nsympy.integrate(f)\n```\n\n\n\n\n$\\displaystyle x e^{x}$\n\n\n\n\n```python\nx, y = sympy.symbols('x y')\nf = 2*x + y\nf\n```\n\n\n\n\n$\\displaystyle 2 x + y$\n\n\n\n\n```python\nsympy.integrate(f, x)\n```\n\n\n\n\n$\\displaystyle x^{2} + x y$\n\n\n\n\n```python\nf = 3*(x**2)\nsympy.integrate(f)\n```\n\n\n\n\n$\\displaystyle x^{3}$\n\n\n\n\n```python\nf = 3*(x**2) - 6*x + 1\n```\n\n\n```python\nsympy.integrate(f)\n```\n\n\n\n\n$\\displaystyle x^{3} - 3 x^{2} + x$\n\n\n\n\n```python\nf = 2 + 6*x + 4*(sympy.exp(x)) + 5*(x**-1)\n```\n\n\n```python\nf\n```\n\n\n\n\n$\\displaystyle 6 x + 4 e^{x} + 2 + \\frac{5}{x}$\n\n\n\n\n```python\nsympy.integrate(f)\n```\n\n\n\n\n$\\displaystyle 3 x^{2} + 2 x + 4 e^{x} + 5 \\log{\\left(x \\right)}$\n\n\n\n\n```python\nf = 2*x/x**2 -1\n```\n\n\n```python\nsympy.integrate(f)\n```\n\n\n\n\n$\\displaystyle - x + 2 \\log{\\left(x \\right)}$\n\n\n\n\n```python\ng = 1+(x*y)\ng\n```\n\n\n\n\n$\\displaystyle x y + 1$\n\n\n\n\n```python\nsympy.integrate(g, x)\n```\n\n\n\n\n$\\displaystyle \\frac{x^{2} y}{2} + x$\n\n\n\n\n```python\ng = x*y*(sympy.exp(x**2 + y**2))\n```\n\n\n```python\nsympy.integrate(g, x)\n```\n\n\n\n\n$\\displaystyle \\frac{y e^{x^{2} + y^{2}}}{2}$\n\n\n\n\n```python\nf = x**3 - 3*x**2 + x + 6\n```\n\n\n```python\nF = sympy.integrate(f)\n```\n\n\n```python\n(F.subs(x, 2) - F.subs(x, 0)).evalf()\n```\n\n\n\n\n$\\displaystyle 10.0$\n\n\n\n\n```python\nimport scipy as sp\ndef f(x):\n return x**3 - 3*x**2 + x + 6\n\nsp.integrate.quad(f, 0, 2)\n \n```\n\n\n\n\n$\\displaystyle \\left( 10.0, \\ 1.1102230246251565e-13\\right)$\n\n\n\n\n```python\nimport scipy as sp\ndef f(x, y):\n return np.exp(-x * y) / y**2\n\nsp.integrate.dblquad(f, 1, np.inf, lambda x:0, lambda x:np.inf)\n```\n\n\n\n\n$\\displaystyle \\left( 0.4999999999999961, \\ 1.068453874338024e-08\\right)$\n\n\n\n\n```python\nfig = plt.figure()\nax = fig.add_subplot(111, projection='3d')\n_x = np.arange(12) / 2 + 2\n_y = np.arange(12) / 2\nX, Y = np.meshgrid(_x, _y)\nx, y = X.ravel(), Y.ravel()\nz = x * x - 10 * x + y + 50\nz0 = np.zeros_like(z)\nax.bar3d(x, y, z0, 0.48, 0.48, z)\nax.set_xlim(0, 10)\nax.set_ylim(-2, 10)\nax.set_zlim(0, 50)\nax.set_xlabel(\"x\")\nax.set_ylabel(\"y\")\nplt.title(\"f(x, y)\")\nplt.show()\n\n```\n\n\n```python\nfrom mpl_toolkits.mplot3d import Axes3D\n```\n\n\n```python\n#\uc5f0\uc2b5\ubb38\uc81c 4.3.6\ndef f(x, y):\n return (1+x*y)\nsp.integrate.dblquad(f, -1, 1, lambda x:-1, lambda x:1 )\n```\n\n\n\n\n$\\displaystyle \\left( 4.0, \\ 4.440892098500626e-14\\right)$\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "2a44a98850cabc67a8b7fea832330b3381aad0d4", "size": 194743, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "200131_mathsympy.ipynb", "max_stars_repo_name": "BAEintelli/TIL", "max_stars_repo_head_hexsha": "ef18333837198b5bb7d5add66f6b1a96d3b01844", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "200131_mathsympy.ipynb", "max_issues_repo_name": "BAEintelli/TIL", "max_issues_repo_head_hexsha": "ef18333837198b5bb7d5add66f6b1a96d3b01844", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": 
"200131_mathsympy.ipynb", "max_forks_repo_name": "BAEintelli/TIL", "max_forks_repo_head_hexsha": "ef18333837198b5bb7d5add66f6b1a96d3b01844", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 174.9712488769, "max_line_length": 136204, "alphanum_fraction": 0.9036935859, "converted": true, "num_tokens": 2253, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.933430812881347, "lm_q2_score": 0.8976952907388474, "lm_q1q2_score": 0.8379364449541195}} {"text": "## 10. Resolver Ecuaciones\n\n[Playlist de Ciencia de Datos en castellano](https://www.youtube.com/playlist?list=PLjyvn6Y1kpbEmRY4-ELeRA80ZywV7Xd67)\n[](https://www.youtube.com/watch?v=c40z75JnT44&list=PLLBUgWXdTBDg1Qgmwt4jKtVn9BWh5-zgy \"Python Data Science\")\n\nLas ecuaciones son fundamentales en Ciencia de Datos. Permiten convertir datos en informaci\u00f3n procesable mediante el desarrollo de expresiones matem\u00e1ticas que describen sistemas f\u00edsicos. Algunas expresiones matem\u00e1ticas son simples y pueden calcularse secuencialmente, por ejemplo:\n\n$x=1 \\quad y=x^2+2x-4$\n\nLa soluci\u00f3n es $x=1$, $y=1+2-4=-1$. \n\nConsidera el caso donde $x$ tambi\u00e9n depende de $y$.\n\n$x=y \\quad y=x^2+2x-4$\n\nHay dos soluciones que se calculan a partir de la f\u00f3rmula cuadr\u00e1tica $y=\\frac{-b\\pm\\sqrt{b^2-4ac}}{2a}$\n\n$0=y^2+(2y-y)-4$, \n\n$\\quad y^2+y-4 = 0$, con $a=1$, $b=1$ y $c=-4$.\n\n$y = \\frac{-1 \\pm \\sqrt{17}}{2} = {1.56,-2.56}$\n\nHay dos m\u00e9todos para resolver este problema. El primero es una **soluci\u00f3n num\u00e9rica**, donde la computadora usa m\u00e9todos de prueba y error para hallar la soluci\u00f3n. Se utilizan m\u00e9todos num\u00e9ricos cuando hay un gran n\u00famero de ecuaciones y no hay soluci\u00f3n anal\u00edtica. El segundo m\u00e9todo es una **soluci\u00f3n simb\u00f3lica** que genera una soluci\u00f3n exacta.\n\n\n\n### Soluci\u00f3n Num\u00e9rica\n\nLos problemas complejos y a gran escala utilizan una soluci\u00f3n num\u00e9rica, por ejemplo con `fsolve` o `gekko`. Se requiere una funci\u00f3n que devuelva el error residual de la ecuaci\u00f3n. Este residual es $f(y)=y^2+y-4$, que no es igual a cero cuando el valor de $y$ no es la soluci\u00f3n correcta. Un valor inicial de `1` o `-2` da un resultado diferente porque se empieza cerca de uno u otro. \n\n#### Soluci\u00f3n con Scipy fsolve\n\n\n```python\nfrom scipy.optimize import fsolve\ndef f(y):\n return y**2+y-4\nz = fsolve(f,1); print(z)\nz = fsolve(f,-2); print(z)\n```\n\n\n\n**Soluci\u00f3n con Python Gekko**\n\n\n```python\nfrom gekko import GEKKO\nm = GEKKO(remote=False)\n#Declaraci\u00f3n de la variable y. y=1\ny = m.Var(1)\n#Declaraci\u00f3n de la ecuaci\u00f3n\nm.Equation(y**2+y-4==0)\n#Resoluci\u00f3n\nm.solve(disp=False)\nprint(y.value)\n#Valor inicial y=-2\ny.value = -2\nm.solve(disp=False); print(y.value)\n```\n\n\n\n### Resolver 2 Ecuaciones\n\nEs similar tener una o dos ecuaciones\n\n$y=x^2+2x-4$\n\n$x=y$\n\nLa funci\u00f3n devuelve el error residual de cada ecuaci\u00f3n como una lista. \n\nSe requieren dos valores iniciales. \n\nEste mismo m\u00e9todo tambi\u00e9n se aplica para m\u00e1s ecuaciones. 
Los algoritmos que resuelven ecuaciones (solvers) pueden solucionar problemas con miles o millones de variables!!\n\n**Soluci\u00f3n con Scipy `fsolve`**\n\n\n```python\nfrom scipy.optimize import fsolve\ndef f(z):\n x,y = z\n return [x-y,y-x**2-2*x+4]\nz = fsolve(f,[1,1]); print(z)\nz = fsolve(f,[-2,-2]); print(z)\n```\n\n\n\n**Soluci\u00f3n con Python Gekko**\n\n\n```python\nm = GEKKO(remote=False)\nx=m.Var(); y = m.Var(1);\nm.Equations([y==x**2+2*x-4, x==y])\nm.solve(disp=False)\nprint(x.value, y.value)\n\nx.value=-2; y.value=-2\nm.solve(disp=False)\nprint(x.value, y.value)\n```\n\n\n\n### Resolver 3 Ecuaciones\n\n$x^2+y^2+z^2=1$\n\n$x-2y+3z=0.5$\n\n$x+y+z=0$\n\nResuelve el problema con 3 variables y 3 ecuaciones.\n\n**Soluci\u00f3n con Scipy `fsolve`**\n\n\n```python\n\n```\n\n\n\n**Soluci\u00f3n con Python Gekko**\n\n\n```python\n\n```\n\n\n\n### Soluci\u00f3n Simb\u00f3lica\n\nEs posible expresar simb\u00f3licamente la soluci\u00f3n anal\u00edtica de problemas simples. Una librer\u00eda con s\u00edmbolos matem\u00e1ticos en Python es `sympy`. La funci\u00f3n `display` est\u00e1 disponible para imprimir la ecuaci\u00f3n en Jupyter notebook. Se requiere importar `from IPython.display import display`. \n\n\n\n```python\nfrom IPython.display import display\nimport sympy as sym\nx = sym.Symbol('x')\ny = sym.Symbol('y')\nans = sym.nonlinsolve([x-y, y-x**2-2*x+4], [x,y])\ndisplay(ans)\n```\n\n\n\n### Resolver Simb\u00f3licamente 3 Ecuaciones\n\n$x\\,y\\,z=0$\n\n$x\\,y=0$\n\n$x+5\\,y+z$\n\nResuelve el problema con 3 variables y 3 ecuaciones de forma simb\u00f3lica. El problema est\u00e1 subespecificado, por lo que una de las variables aparecer\u00e1 en la soluci\u00f3n; es decir, hay un set infinito de soluciones. \n\n\n```python\n\n```\n\n\n\n### Ecuaciones Lineales\n\nPuedes resolver ecuaciones lineales en Python. Existen m\u00e9todos eficientes como `x = np.linalg.solve(A,b)` para resolver ecuaciones de tipo $A x = b$ con la matriz $A$ y los vectores $x$ y $b$.\n\n$A = \\begin{bmatrix}3 & 2\\\\ 1 & 2 \\end{bmatrix} \\quad b = \\begin{bmatrix}1 \\\\ 0 \\end{bmatrix}$\n\n\n```python\nimport numpy as np\nA = np.array([[3,2],[1,2]])\nb = np.array([1,0])\n\nx = np.linalg.solve(A,b)\nprint(x)\n```\n\nLa soluci\u00f3n simb\u00f3lica para este set de ecuaciones lineales est\u00e1 disponible con la funci\u00f3n `sympy` `linsolve`. Si el problema es lineal, se prefiere usar `linsolve` porque es m\u00e1s eficiente que `nonlinsolve`, pero puede resolver los dos.\n\n\n```python\nimport sympy as sym\nx, y = sym.symbols('x y')\nans = sym.linsolve([3*x + 2*y - 1, x + 2*y], (x, y))\nsym.pprint(ans)\n```\n\n\n\n### Optimizaci\u00f3n\n\nCuando hay m\u00e1s variables que ecuaciones, el problema est\u00e1 subespecificado y no puede hallarse soluci\u00f3n con una funci\u00f3n como `fsolve` (para problemas lineales y no lineales) o `linalg.solve` (solo para problemas lineales). Se requiere informaci\u00f3n adicional para guiar la selecci\u00f3n de las variables extra.\n\nUna funci\u00f3n objetivo $J(x)$ es una manera de especificar el problema para que exista soluci\u00f3n \u00fanica. El objetivo es minimizar $x_1 x_4 \\left(x_1 + x_2 + x_3\\right) + x_3$. Las dos ecuaciones que gu\u00edan la selecci\u00f3n de dos variables son las restricciones de desigualdad $\\left(x_1 x_2 x_3 x_4 \\ge 25\\right)$ y de igualdad $\\left(x_1^2 + x_2^2 + x_3^2 + x_4^2 = 40\\right)$. 
Las cuatro variables deben estar entre `1` (l\u00edmite inferior) y `5` (l\u00edmite superior).\n\n\n$\\quad \\min x_1 x_4 \\left(x_1 + x_2 + x_3\\right) + x_3$\n\n$\\quad \\mathrm{s.t.:} \\quad x_1 x_2 x_3 x_4 \\ge 25$\n\n$\\quad x_1^2 + x_2^2 + x_3^2 + x_4^2 = 40$\n\n$\\quad 1\\le x_1, x_2, x_3, x_4 \\le 5$\n\ncon valores iniciales:\n\n$\\quad x_0 = (1,5,5,1)$\n\nInformaci\u00f3n adicional sobre optimizaci\u00f3n encontrar\u00e1s en el [Curso de Dise\u00f1o y Optimizaci\u00f3n en ingl\u00e9s](https://apmonitor.com/me575) y en el [Libro de Dise\u00f1o y Optimizaci\u00f3n en ingl\u00e9s](https://apmonitor.com/me575/index.php/Main/BookChapters).\n\nReferencia:\nOptimization Methods for Engineering Design, Parkinson, A.R., Balling, R., and J.D. Hedengren, Second Edition, Brigham Young University, 2018.\n\nEl primer m\u00e9todo de resoluci\u00f3n es con `scipy.optimize.minimize`. Las funciones en esta librer\u00eda trabajan bien con problemas de tama\u00f1o moderado y modelos de caja negra, donde una funci\u00f3n objetivo est\u00e1 disponible a trav\u00e9s de la llamada a una funci\u00f3n (function call).\n\n\n```python\nimport numpy as np\nfrom scipy.optimize import minimize\n\ndef objective(x):\n return x[0]*x[3]*(x[0]+x[1]+x[2])+x[2]\n\ndef constraint1(x):\n return x[0]*x[1]*x[2]*x[3]-25.0\n\ndef constraint2(x):\n sum_eq = 40.0\n for i in range(4):\n sum_eq = sum_eq - x[i]**2\n return sum_eq\n\n# condiciones iniciales\nn = 4\nx0 = np.zeros(n)\nx0[0] = 1.0\nx0[1] = 5.0\nx0[2] = 5.0\nx0[3] = 1.0\n\n# optimizaci\u00f3n\nb = (1.0,5.0)\nbnds = (b, b, b, b)\ncon1 = {'type': 'ineq', 'fun': constraint1} \ncon2 = {'type': 'eq', 'fun': constraint2}\ncons = ([con1,con2])\nsolution = minimize(objective,x0,method='SLSQP',\\\n bounds=bnds,constraints=cons)\nx = solution.x\n\n# mostrar objetivo final\nprint('Final Objective: ' + str(objective(x)))\n\n# imprimir soluci\u00f3n\nprint('Solution')\nprint('x1 = ' + str(x[0]))\nprint('x2 = ' + str(x[1]))\nprint('x3 = ' + str(x[2]))\nprint('x4 = ' + str(x[3]))\n```\n\n\n\n### Optimizaci\u00f3n con Gekko\n\n[Python Gekko](https://gekko.readthedocs.io/en/latest/) tambi\u00e9n resuelve problemas de optimizaci\u00f3n. Usa diferenciaci\u00f3n autom\u00e1tica y algoritmos (solvers) basados en gradientes como `APOPT` o `IPOPT` para hallar una soluci\u00f3n. Este m\u00e9todo de soluci\u00f3n es mejor para problemas a gran escala. [Tutoriales Adicionales sobre Gekko en ingl\u00e9s](https://apmonitor.com/wiki/index.php/Main/GekkoPythonOptimization) muestran c\u00f3mo resolver otro tipo de problemas de optimizaci\u00f3n.\n\n\n```python\nfrom gekko import GEKKO\nm = GEKKO(remote=False)\n\n# Inicializar Variables\nx1,x2,x3,x4 = [m.Var(lb=1, ub=5) for i in range(4)]\n\n# Condiciones Iniciales\nx1.value = 1\nx2.value = 5\nx3.value = 5\nx4.value = 1\n\n# Ecuaciones\nm.Equation(x1*x2*x3*x4>=25)\nm.Equation(x1**2+x2**2+x3**2+x4**2==40)\n\n# Objetivo\nm.Obj(x1*x4*(x1+x2+x3)+x3)\n\n# Resolver\nm.solve(disp=False)\n\n# Objectivo Final\nprint('Final Objective: ' + str(m.options.objfcnval))\n\n# Imprimir soluciones\nprint('Solution')\nprint('x1: ' + str(x1.value))\nprint('x2: ' + str(x2.value))\nprint('x3: ' + str(x3.value))\nprint('x4: ' + str(x4.value))\n```\n\n### Actividad con el TCLab\n\n\n\n### Recolecci\u00f3n de Datos\n\n\n\nEnciende el calentador 1 al 100% y graba $T_1$ cada 10 segundos por 3 minutos. Los datos deben incluir un total de 19 puntos para cada sensor de temperatura y el tiempo, iniciando en cero. 
Toma nota de los puntos de temperatura en 0, 90 y 180 segundos.\n\n\n```python\n\n```\n\n\n\n### Ecuaciones Lineales\n\nSe requieren tres puntos para especificar un polinomio de grado 2 de la forma $y =a_0 + a_1 \\; x + a_2 \\; x^2$. Crea una regresi\u00f3n cuadr\u00e1tica para $T_2$ usando solo el primer, medio y \u00faltimo punto. Sup\u00f3n que los puntos para $T_2$ son los siguientes: \n\n| Tiempo (sec) | Temperatura (\u00b0C) |\n|------|------|\n| 0 | 23.0 |\n| 90 | 33.0 |\n| 180 | 43.0 |\n\nResuelve la regresi\u00f3n lineal como un set de tres ecuaciones que se derivan conectando los tres puntos a la ecuaci\u00f3n polin\u00f3mica. Crea tres ecuaciones que consideren $y=T_2$ y $x=tiempo$.\n\n$\\quad a_0 + a_1 \\; 0 + a_2 \\; 0^2 = 23.0$\n\n$\\quad a_0 + a_1 \\; 90 + a_2 \\; 90^2 = 33.0$\n\n$\\quad a_0 + a_1 \\; 180 + a_2 \\; 180^2 = 43.0$\n\nEn forma de matriz, el set de ecuaciones lineales es: \n\n$\\quad \\begin{bmatrix}1 & 0 & 0 \\\\ 1 & 90 & 90^2 \\\\ 1 & 180 & 180^2 \\end{bmatrix}\\begin{bmatrix}a_0\\\\a_1\\\\a_2\\end{bmatrix} = \\begin{bmatrix}23.0\\\\33.0\\\\43.0\\end{bmatrix}$\n\nResuelve el set de ecuaciones para hallar los par\u00e1metros cuadr\u00e1ticos $a_0$, $a_1$ y $a_2$ con los datos recolectados al inicio de la actividad con el TCLab. Dibuja el ajuste cuadr\u00e1tico con los datos para asegurar que la curva pasa por los tres puntos indicados.\n\n\n```python\n\n```\n\n\n\n### Ecuaciones No Lineales\n\nAjusta los datos de $T_1$ a una correlaci\u00f3n no lineal usando solo tres puntos.\n\n$\\quad T_1 = a + b \\exp{(c \\, tiempo)}$\n\nSe requieren \u00fanicamente tres puntos para especificar un modelo con tres par\u00e1metros. Cuando hay m\u00e1s del m\u00ednimo n\u00famero de puntos requerido, usualmente se ejecuta una regresi\u00f3n de m\u00ednimos cuadrados para minimizar el cuadrado del error entre los valores medidos y predecidos. Para este ejercicio, usa solo tres puntos (primero, medio y \u00faltimo) de los datos $T_1$. Sup\u00f3n que los puntos para $T_1$ son los siguientes: \n\n\n| Tiempo (sec) | Temperatura (\u00b0C) |\n|------|------|\n| 0 | 22.0 |\n| 90 | 42.0 |\n| 180 | 52.0 |\n\nResuelve para hallar los tres par\u00e1metros a partir de las tres ecuaciones que intersecan a los datos requeridos. \n\n$\\quad 22.0 = a + b \\exp{(c \\, 0)}$\n\n$\\quad 42.0 = a + b \\exp{(c \\, 90.3)}$\n\n$\\quad 52.0 = a + b \\exp{(c \\, 180.5)}$\n\nResuelve este set de ecuaciones para los par\u00e1metros desconocidos $a$, $b$ y $c$ con los datos recolectados al inicio de esta actividad. Utiliza como valores iniciales $a=100$, $b=-100$ y $c=-0.01$. Grafica el ajuste no lineal con los datos para asegurar que este pasa por los tres puntos especificados. A\u00f1ade las etiquetas necesarias en el gr\u00e1fico.\n\n\n```python\n\n```\n", "meta": {"hexsha": "5fc573b427b66045e5044f8b3250b8fe41c6505f", "size": 18333, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "10. Resolver_ecuaciones.ipynb", "max_stars_repo_name": "APMonitor/ciencia_de_datos", "max_stars_repo_head_hexsha": "4e989992a71bc59040799772af14784a83b94153", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2021-08-21T15:32:07.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-05T15:56:38.000Z", "max_issues_repo_path": "10. 
Resolver_ecuaciones.ipynb", "max_issues_repo_name": "APMonitor/ciencia_de_datos", "max_issues_repo_head_hexsha": "4e989992a71bc59040799772af14784a83b94153", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "10. Resolver_ecuaciones.ipynb", "max_forks_repo_name": "APMonitor/ciencia_de_datos", "max_forks_repo_head_hexsha": "4e989992a71bc59040799772af14784a83b94153", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 12, "max_forks_repo_forks_event_min_datetime": "2021-05-27T06:08:57.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-19T20:57:15.000Z", "avg_line_length": 33.95, "max_line_length": 473, "alphanum_fraction": 0.5737195222, "converted": true, "num_tokens": 3796, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9304582564583618, "lm_q2_score": 0.900529778109184, "lm_q1q2_score": 0.8379053672283068}} {"text": "---\n# Section 2.1: Vector and Matrix Norms\n---\n\n## Motivation\n\nWhen solving $Ax = b$ numerically, we get a solution $\\hat{x}$ that satisfies\n\n$$\n\\hat{A} \\hat{x} = \\hat{b}.\n$$\n\nWe hope that $\\hat{A} \\approx A$ and $\\hat{b} \\approx b$.\n\nIt is for this reason that we need a way to measure the distance between both matrices and vectors.\n\n---\n\n## Vector norms\n\nA (vector) **norm** is a function \n\n$$\n\\|\\cdot\\| : \\mathbb{R}^n \\rightarrow \\mathbb{R}\n$$\n\nthat satisfies\n\n1. $\\|x\\| > 0$ if $x \\neq 0$ and $\\|0\\| = 0$\n2. $\\|\\alpha x\\| = |\\alpha| \\|x\\|$\n3. $\\|x + y\\| \\leq \\|x\\| + \\|y\\| \\qquad$ (triangle inequality)\n\nThe **distance** between vectors $x$ and $y$ can then be measured by\n\n$$\n\\mathrm{dist}(x,y) = \\| x - y \\|.\n$$\n\n---\n\n## The $p$-norm\n\n$$\n\\|x\\|_p = \\left( \\sum_{i=1}^n \\left|x_i\\right|^p \\right)^{1/p}, \\quad 1 \\leq p \\leq \\infty.\n$$\n\nSome examples are:\n\n$$\n\\begin{array}{llclll}\n\\|x\\|_1 &=& \\displaystyle{\\sum_{i=1}^n |x_i|} &=& \\texttt{norm(x, 1)} \\qquad &\\text{(Manhattan norm)} \\\\\n\\|x\\|_2 &=& \\displaystyle{\\sqrt{\\sum_{i=1}^n |x_i|^2}} &=& \\texttt{norm(x)} \\qquad &\\text{(Euclidean norm)} \\\\\n\\|x\\|_\\infty &=& \\displaystyle{\\max_{i=1,\\ldots,n} |x_i|} &=& \\texttt{norm(x, Inf)} \\qquad &\\text{(max-norm)} \\\\\n\\end{array}\n$$\n\nThe **unit $p$-norm ball** is the set of vectors having $p$-norm at most $1$:\n\n$$\nB_p = \\big\\{ x \\in \\mathbb{R}^n : \\lVert x \\rVert_p \\leq 1 \\big\\}.\n$$\n\n---\n\n## `norm` of a vector\n\n\n```julia\nusing LinearAlgebra\n```\n\n\n```julia\n?norm\n```\n\n\n```julia\nx = Float64[1, -2, 1]\n```\n\n\n```julia\ny = Float64[0.9, -1.9, 1.1]\n```\n\n\n```julia\n# The 1-norm of x\nnorm(x, 1)\n```\n\n\n```julia\n# The 0-norm counts the number of nonzero entries in x\nnorm([1, 0, 0, 0, 1], 0)\n```\n\n\n```julia\n# The Manhattan distance between x and y\nnorm(x - y, 1)\n```\n\n\n```julia\nx\n```\n\n\n```julia\n# The Euclidean norm of x\nnorm(x, 2)\n```\n\n\n```julia\nsqrt(1 + 4 + 1)\n```\n\n\n```julia\n# The default norm is the Euclidean norm\nnorm(x)\n```\n\n\n```julia\n# The Euclidean distance between x and y\nnorm(x - y)\n```\n\n\n```julia\nx\n```\n\n\n```julia\n# The max-norm of x\nnorm(x, Inf)\n```\n\n\n```julia\n# The max-norm distance between x and y\nnorm(x - y, Inf)\n```\n\n---\n\n## The $A$-norm\n\n$$\n\\|x\\|_A = \\sqrt{x^T A x}\n$$\n\nwhere $A$ is an $n \\times n$ positive definite matrix.\n\nFor example, when $A = I$, we have the usual Euclidean 
norm:\n\n$$\n\\|x\\|_I = \\|x\\|_2.\n$$\n\n---\n\n## The resistance distance on a social network\n\nA practical example of the $A$-norm is the **resistance distance** between individuals on a **social network**. \n\nA social network can be represented as a **graph** where individuals are **nodes** which are connected by an **edge** if they are friends.\n\nThis graph can be represented using an **adjacency matrix** $A = [a_{ij}]$ where $a_{ij} = 1$ if $i$ and $j$ are friends, otherwise $a_{ij} = 0$; you cannot be friends with yourself, so $a_{ii} = 0$.\n\nThe **Laplacian matrix** $L = [l_{ij}]$ of the graph has $l_{ii} = \\deg(i)$ (i.e., the number of friends of $i$), and $l_{ij} = -a_{ij}$ for $i \\neq j$. That is,\n\n$$\nL = \\mathrm{Diag}(Ae) - A,\n$$\n\nwhere $e$ is the vector of all ones, and $\\mathrm{Diag}(Ae)$ is the diagonal matrix with the vector $Ae$ on its diagonal.\n\nThe **resistance distance** between $i$ and $j$ is then given by\n\n$$\n\\mathrm{dist}(i,j) = \\|e_i - e_j\\|_B, \\qquad \\text{where $B = (L + ee^T)^{-1}$}.\n$$\n\nHere $e_i$ and $e_j$ are the $i^\\mathrm{th}$ and $j^\\mathrm{th}$ columns of the identity matrix. (Note: it can be shown that $L + ee^T$ is positive definite, which implies that $(L + ee^T)^{-1}$ is positive definite.)\n\n\n```julia\nn = 6\nA = Symmetric(rand(n, n))\nA = round.(A)\nA = A - diagm(diag(A)) # Make the diagonal zero\n```\n\n\n```julia\ne = ones(n)\nL = diagm(A*e) - A\n```\n\n\n```julia\nL + e*e'\n```\n\n\n```julia\nisposdef(L + e*e')\n```\n\n\n```julia\nB = inv(L + e*e')\n```\n\n\n```julia\ncholesky(Symmetric(B))\n```\n\n\n```julia\neigvals(Symmetric(B))\n```\n\n\n```julia\nisposdef(Symmetric(B))\n```\n\n\n```julia\n# Define the resistance norm\nresnorm(x) = sqrt(x'*B*x)[1]\n\n# Form the matrix of distances between all nodes\nI = diagm(ones(n))\nD = Float64[resnorm(I[:,i] - I[:,j]) for i = 1:n, j = 1:n]\n```\n\n\n```julia\nD[3,4]\n```\n\n\n```julia\nD[3,6]\n```\n\n\n```julia\nD[3,4] < D[3,6]\n```\n\n\n```julia\nusing GraphPlot, LightGraphs\n```\n\n\n```julia\ng = Graph(A)\ngplot(g, nodelabel=1:n)\n```\n\n\n```julia\nD[3,2]\n```\n\n\n```julia\nD[3,6]\n```\n\n---\n\n## The Euclidean norm is a norm\n\nTo prove that the **Euclidean norm** is indeed a norm, we need to show it satisfies the **triangle inequality**:\n\n$$\n\\|x + y\\|_2 \\leq \\|x\\|_2 + \\|y\\|_2.\n$$\n\nWe will prove this using the following fundamental result.\n\n> ### Theorem: (Cauchy-Schwarz Inequality)\n>\n> $$\n\\left|x^Ty\\right| \\leq \\|x\\|_2 \\|y\\|_2, \\qquad \\forall x, y \\in \\mathbb{R}^n.\n$$\n\nIf $\\theta \\in [0, \\pi]$ is the angle between the nonzero vectors $x$ and $y$, then\n\n$$\n\\cos \\theta = \\frac{x^T y}{\\|x\\|_2 \\|y\\|_2}.\n$$\n\n### Proof of the triangle inequality.\n\nLet $x, y \\in \\mathbb{R}^n$. Then\n\n$$\n\\begin{align}\n\\|x + y\\|_2^2 \n&= (x+y)^T(x+y) \\\\\n&= x^Tx + x^Ty + y^Tx + y^Ty \\\\\n&= \\|x\\|_2^2 + 2x^Ty + \\|y\\|_2^2 \\\\\n&\\leq \\|x\\|_2^2 + 2\\|x\\|_2\\|y\\|_2 + \\|y\\|_2^2 \\qquad \\text{(Cauchy-Schwarz inequality)} \\\\\n&= \\big(\\|x\\|_2 + \\|y\\|_2\\big)^2. \\\\\n\\end{align}\n$$\n\nTaking the square root of both sides, we obtain $\\|x + y\\|_2 \\leq \\|x\\|_2 + \\|y\\|_2$. $\\blacksquare$\n\n---\n\nNow let's see a proof of the Cauchy-Schwarz inequality.\n\n### Proof of Cauchy-Schwarz.\n\nLet $t \\in \\mathbb{R}$. 
Then\n\n$$\n\\begin{align}\n0 \\leq \\|x + ty\\|_2^2 \n&= (x + ty)^T(x + ty) \\\\\n&= x^T x + 2tx^Ty + t^2y^Ty \\\\\n&= \\|x\\|_2^2 + \\big(2x^Ty\\big) t + \\|y\\|_2^2 t^2 \\\\\n&= c + bt + at^2,\n\\end{align}\n$$\n\nwhere \n\n$$\na = \\|y\\|_2^2, \\qquad\nb = 2x^Ty, \\qquad \nc = \\|x\\|_2^2.\n$$\n\n$\\therefore$ $at^2 + bt + c \\geq 0$, $\\forall t \\in \\mathbb{R}$.\n\nThis implies that the equation\n\n$$\nat^2 + bt + c = 0\n$$\n\neither has no solution or exactly one solution. By the **quadratic formula**,\n\n$$\nt = \\frac{-b \\pm \\sqrt{b^2 - 4ac}}{2a},\n$$\n\nwe must have $b^2 - 4ac \\leq 0$.\n\nThus, \n\n$$\n\\big(2x^Ty\\big)^2 - 4 \\|y\\|_2^2 \\|x\\|_2^2 \\leq 0,\n$$\n\nwhich simplifies to\n\n$$\n\\left|x^Ty\\right| \\leq \\|x\\|_2 \\|y\\|_2. \\qquad \\blacksquare\n$$\n\n---\n\n## Matrix norms\n\nA **matrix norm** is a function \n\n$$\n\\|\\cdot\\| : \\mathbb{R}^{n \\times n} \\rightarrow \\mathbb{R}\n$$\n\nthat satisfies\n\n1. $\\|A\\| > 0$ if $A \\neq 0$ and $\\|0\\| = 0$\n2. $\\|\\alpha A\\| = |\\alpha| \\|A\\|$\n3. $\\|A + B\\| \\leq \\|A\\| + \\|B\\|$\n4. $\\|AB\\| \\leq \\|A\\|\\|B\\| \\qquad$ (submultiplicativity)\n\nThe **distance** between matrices $A$ and $B$ can then be measured by\n\n$$\n\\mathrm{dist}(A, B) = \\| A - B \\|.\n$$\n\n---\n\n### The Frobenius norm\n\n$$\n\\|A\\|_F = \\sqrt{\\sum_{i=1}^n \\sum_{j=1}^n |a_{ij}|^2} = \\texttt{norm(A)}\n$$\n\n\n```julia\nA = [1 2; 3 4.0]\n```\n\n\n```julia\nnorm(A)\n```\n\n\n```julia\nsqrt(30)\n```\n\n---\n\n### Exercise\n\nCompute $\\lVert I \\rVert_F$, where $I$ is the $n \\times n$ identity matrix.\n\n$$\n\\|I\\|_F = \\sqrt{n}.\n$$\n\n---\n\n\n```julia\nI = diagm(ones(4))\n```\n\n\n```julia\nnorm(I)\n```\n\n---\n\n## Induced Matrix Norms\n\nLet $\\|\\cdot\\| : \\mathbb{R}^n \\rightarrow \\mathbb{R}$ be a vector norm.\n\nThe **induced matrix norm** (a.k.a. the **operator norm**) is defined as\n\n$$\n\\|A\\| = \\max_{x \\neq 0} \\frac{\\|Ax\\|}{\\|x\\|} = \\max_{\\|x\\| = 1} \\|Ax\\|.\n$$\n\nThe induced matrix norm is a matrix norm.\n\n---\n\n## Exercise\n\nLet $\\|\\cdot\\|$ be an induced matrix norm. Compute $\\|I\\|$.\n\nLet $x$ be a nonzero vector. 
Then\n\n$$\n\\frac{\\|Ix\\|}{\\|x\\|} = \\frac{\\|x\\|}{\\|x\\|} = 1.\n$$\n\nTherefore, $\\|I\\| = 1$.\n\n---\n\n## Examples\n\n$$\n\\begin{array}{llclll}\n\\|A\\|_p &=& \\displaystyle{\\max_{x \\neq 0} \\frac{\\|Ax\\|_p}{\\|x\\|_p}} &=& \\texttt{opnorm(A, p)} \\qquad &\\text{($p$-norm)} \\\\\n\\|A\\|_1 &=& \\displaystyle{\\max_{1 \\leq j \\leq n} \\sum_{i=1}^n |a_{ij}|} &=& \\texttt{opnorm(A, 1)} \\qquad &\\text{(max-column-sum)} \\\\\n\\|A\\|_2 &=& \\displaystyle{\\sqrt{\\lambda_{\\max}\\left(A^TA\\right)}} &=& \\texttt{opnorm(A)} \\qquad &\\text{(spectral norm)} \\\\\n\\|A\\|_\\infty &=& \\displaystyle{\\max_{1 \\leq i \\leq n} \\sum_{j=1}^n |a_{ij}|} &=& \\texttt{opnorm(A, Inf)} \\qquad &\\text{(max-row-sum)} \\\\\n\\end{array}\n$$\n\n**Note:** $\\lambda_{\\max}\\left(A^TA\\right)$ is the **largest eigenvalue** of the symmetric matrix $A^TA$; $\\sqrt{\\lambda_{\\max}\\left(A^TA\\right)}$ is the **largest singular value** of the matrix $A$.\n\nHowever, the Frobenius norm is **not** an induced matrix norm.\n\n\n```julia\nA = [1 3; -2 0.0]\n\nsqrt(maximum(eigvals(A'*A)))\n```\n\n\n```julia\nmaximum(svdvals(A))\n```\n\n\n```julia\neigvals(A'*A)\n```\n\n\n```julia\nsvdvals(A)\n```\n\n---\n\n## `opnorm` of a matrix\n\n\n```julia\nA = [1 2 3; 4 5 6; 7 8 9.0]\n```\n\n\n```julia\n# max-column-sum\nopnorm(A, 1)\n```\n\n\n```julia\n# max-row-sum\nopnorm(A, Inf)\n```\n\n\n```julia\n# spectral norm\nopnorm(A)\n```\n\n\n```julia\n# The spectral norm of A is the \n# maximum singular value of A\nsvdvals(A)\n```\n\n\n```julia\n# The singular values of A are the square root of the\n# eigenvalues of A'*A\n\u03bb = max.(eigvals(A'*A),0)\nsqrt.(\u03bb)\n```\n\n---\n\n## Induced Matrix Norm Inequality\n\n> ### Theorem: (Induced Matrix Norm Inequality)\n>\n> Let $\\|\\cdot\\| : \\mathbb{R}^n \\rightarrow \\mathbb{R}$ be a vector norm. Then the corresponding induced matrix norm satisfies\n>\n> $$\\|Ax\\| \\leq \\|A\\|\\|x\\|,\\qquad \\text{for all $A \\in \\mathbb{R}^{n \\times n}$ and $x \\in \\mathbb{R}^n$.}$$\n\n### Proof:\n\nLet $x \\in \\mathbb{R}^n$. If $x = 0$, then $\\|Ax\\| \\le \\|A\\|\\|x\\|$ clearly holds since both sides of the inequality would equal zero. Now suppose that $x \\ne 0$. Then,\n\n$$\n\\frac{\\|Ax\\|}{\\|x\\|} \\le \\max_{y \\ne 0} \\frac{\\|Ay\\|}{\\|y\\|} = \\|A\\|.\n$$\n\nMultiplying both sides by $\\|x\\|$, we have $\\|Ax\\| \\le \\|A\\|\\|x\\|$. 
$\\blacksquare$\n\n---\n", "meta": {"hexsha": "379d1922ca157c02ab3e641cba2d711a7fea13dd", "size": 20924, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Section 2.1 - Vector and Matrix Norms.ipynb", "max_stars_repo_name": "jesus-lua-8/fall2021math434", "max_stars_repo_head_hexsha": "e6c6931676922193a6718b51c92232a3de7fa650", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-08-31T21:01:22.000Z", "max_stars_repo_stars_event_max_datetime": "2021-08-31T21:01:22.000Z", "max_issues_repo_path": "Section 2.1 - Vector and Matrix Norms.ipynb", "max_issues_repo_name": "jesus-lua-8/fall2021math434", "max_issues_repo_head_hexsha": "e6c6931676922193a6718b51c92232a3de7fa650", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Section 2.1 - Vector and Matrix Norms.ipynb", "max_forks_repo_name": "jesus-lua-8/fall2021math434", "max_forks_repo_head_hexsha": "e6c6931676922193a6718b51c92232a3de7fa650", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-11-16T19:28:56.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-16T19:28:56.000Z", "avg_line_length": 21.2426395939, "max_line_length": 225, "alphanum_fraction": 0.442506213, "converted": true, "num_tokens": 3850, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9585377296574668, "lm_q2_score": 0.8740772286044094, "lm_q1q2_score": 0.8378360022517611}} {"text": "Euler Problem 7\n===============\n\nBy listing the first six prime numbers: 2, 3, 5, 7, 11, and 13, we can see that the 6th prime is 13.\n\nWhat is the 10,001st prime number?\n\n\n```python\nMAX = 200000\nindex = 10001\nsieve = [True] * MAX\nprime_count = 0\nfor p in range(2, MAX):\n if sieve[p]:\n prime_count += 1\n if prime_count == index:\n print(p)\n break\n for n in range(p*p, MAX, p):\n sieve[n] = False\n```\n\n 104743\n\n\nThe Sympy module makes this problem even easier.\n\n\n```python\nfrom sympy import prime\nprint(prime(index))\n```\n\n 104743\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "a744f6cc4d6b1f032eb6099848c9c7b09bd609f0", "size": 1861, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Euler 007 - 10001st prime.ipynb", "max_stars_repo_name": "Radcliffe/project-euler", "max_stars_repo_head_hexsha": "5eb0c56e2bd523f3dc5329adb2fbbaf657e7fa38", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2016-05-11T18:55:35.000Z", "max_stars_repo_stars_event_max_datetime": "2019-12-27T21:38:43.000Z", "max_issues_repo_path": "Euler 007 - 10001st prime.ipynb", "max_issues_repo_name": "Radcliffe/project-euler", "max_issues_repo_head_hexsha": "5eb0c56e2bd523f3dc5329adb2fbbaf657e7fa38", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Euler 007 - 10001st prime.ipynb", "max_forks_repo_name": "Radcliffe/project-euler", "max_forks_repo_head_hexsha": "5eb0c56e2bd523f3dc5329adb2fbbaf657e7fa38", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 18.9897959184, "max_line_length": 109, "alphanum_fraction": 0.4658785599, "converted": true, "num_tokens": 187, "lm_name": 
"Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.964321450147636, "lm_q2_score": 0.8688267830311354, "lm_q1q2_score": 0.8378283033396899}} {"text": "# Implement a physically correct distribution\n\nIn order to understand lubrication better, we simulate thin layers of lubricant on a metallic surface, solvated in water.\nDifferent structures of lubricant films are created by varying parameters like their concentration and the charge of the surface.\nThe lubricant is somewhat solvable in water, thus parts of the film will diffuse into the bulk water.\nLubricant molecules are charged, and their distribution is roughly exponential.\n\nSo, let's first generate a simple exponential distribution:\n\n\n```python\nimport numpy as np\nimport scipy.constants as sc\nimport matplotlib.pyplot as plt\n\nfrom generate_structure import generate_structure \nfrom generate_structure import plot_dist\nfrom generate_structure import get_histogram\nfrom generate_structure import exponential\n\nnp.random.seed(74)\n```\n\n\n```python\nbox = np.array([50, 50, 100])\nstruc = generate_structure(distribution=exponential, box=box, atom_count=1000)\nhistx, histy, histz = get_histogram(struc, box=box)\nplot_dist(histz, 'z', reference_distribution=exponential)\n```\n\nThis seems to work reasonably well, but does not describe a physical system - units and the slope of the exponential are rather abitrary.\n\nNow, to incorporate a precise physical description of the distribution, we describe the concentration of our ion species, $c_{Na^+}$, by the solution to the Poisson-Boltzmann equation:\n\n$\n\\begin{align}\n\\rho_{Na^+}(z) &= \\rho_{Na^+}(\\infty) e^{-e \\Psi(z)/k_B T}\\\\\n\\Psi(z) &= \\frac{2k_B T}{e} \\log\\Big(\\frac{1 + \\gamma e^{-\\kappa z}}{1- \\gamma e^{-\\kappa z}}\\Big) \n \\approx \\frac{4k_B T}{e} \\gamma e^{-\\kappa z} \\\\\n\\gamma &= \\tanh(\\frac{e\\Psi(0)}{4k_B T})\\\\\n\\kappa &= 1/\\lambda_D\\\\\n\\lambda_D &= \\Big(\\frac{\\epsilon \\epsilon_0 k_B T}{\\sum_{i} \\rho_i(\\infty) e^2 z_i^2} \\Big)^\\frac{1}{2} [m^{-1}]\n\\end{align}\n$\n\nWith:\n* $z$: Distance from the double layer\n* $\\Psi(0)$: Potential at the surface\n* $\\Psi(z)$: Potential in the solution\n* $k_B$: Boltzmann Constant\n* $T$: Temperature [Kelvin]\n* $e$: Elemental Charge (or Euler's constant when exponentiated)\n* $\\gamma$: Term from Gouy-Chapmann theory\n * $\\gamma \\rightarrow 1$ for high potentials\n * $\\Psi(z) \\approx \\Psi_0 e^{-\\kappa z}$ for low potentials $\\Psi(0) \\approx 0$\n* $\\lambda_D$: Debye Length (34.0 nm for NaCl, 10^-4 M, 25\u00b0C)\n* $\\rho_{Na^+}$: Concentration of Natrium ions\n* $\\rho_{Na^+}(\\infty)$: Bulk Natrium concentration (at infinity, where the solution is homogeneous)\n* $\\epsilon$: Permittivity of the solution\n* $\\epsilon_0$: Electric constant aka Vacuum permittivity\n* $z_i$: Charge of species i \n\n\nLet's translate these equations into python code.\n\n## Debye length\nWe start with the expression for the Debye length, which describes the thickness of the ionic double layer, or the distance at which the surface charge is mostly screened off.\n\n\n```python\ndef debye(rho_bulk, charge, permittivity=79, temperature=298.15):\n \"\"\"Calculate the Debye length.\n The Dybe length indicates at which distance a charge will be screened off.\n \n Arguments:\n rho_bulk: dictionary of the bulk number densites for each ionic species [1/m^3]\n charge: dictionary of the charge of each ionic species [1]\n permittivity: capacitance of the ionic solution [1]\n temperature: 
Temperature of the solution [K]\n \n Returns:\n float: the Debye length [m], should be around 10^-19\n \n Example: the Debye length of 10^-4 M salt water is 30.4 nm.\n >>> density = sc.Avogadro * 1000 * 10**-4\n >>> rho = {'Na': density, 'Cl': density} \n >>> charge = {'Na': 1, 'Cl': -1}\n >>> deb = debye(rho_bulk=rho, charge=charge) * 10**9\n >>> deb - 30.4 < 0.5\n True \n \"\"\"\n # The numerator of the expression in the square root\n # e_r * e_0 * k_B * T\n numerator = permittivity * sc.epsilon_0 * sc.Boltzmann * temperature \n\n # The divisor of the expression in the square root\n # \\sum_i rho_i e^2 z_i^2\n divisor = 0\n for key in rho_bulk.keys():\n divisor += rho_bulk[key] * sc.elementary_charge ** 2 * charge[key] ** 2\n \n # The complete square root\n return np.sqrt(numerator / divisor)\n\n# Factor of 1000 is due to 1/m^3, would be 1 for 1/l\ndefault_density = sc.Avogadro * 1000 * 10**-4\nrho = {'Na': default_density, 'Cl': default_density} \ncharge = {'Na': 1, 'Cl': -1}\ndeb = debye(rho_bulk=rho, charge=charge) * 10**9\nprint('Debye Length of 10^-4 M saltwater: {} nm (Target: 30.4 nm)'.format(round(deb, 2)))\n\ndensity = np.logspace(-6, 0, 50) * sc.Avogadro * 1000\n\ndebyes = [debye(rho_bulk={'Na': d, 'Cl': d}, charge=charge) * 10**9 for d in density]\nplt.xlabel('Density [1/m^3]')\nplt.ylabel('Debye length at 25\u00b0 [nm]')\nplt.semilogx(density, debyes, marker='.')\nplt.show()\n```\n\nThe debye length depends on the concentration of ions in solution, at low concentrations it becomes large. We can reproduce literature debye lengths with our function, so everything looks good.\n\n## Gamma Function\n\nNext we calculate the gamma function $\\gamma = \\tanh(\\frac{e\\Psi(0)}{4k_B T})$\n\n\n```python\ndef gamma(surface_potential, temperature):\n \"\"\"Calculate term from Gouy-Chapmann theory.\"\"\"\n product = sc.elementary_charge * surface_potential / (4 * sc.Stefan_Boltzmann * temperature)\n return np.tanh(product)\n\nx = np.linspace(12, 16, 40)\ngammas = [gamma(10 ** i, 300) for i in x]\nplt.xlabel('Potential')\nplt.ylabel('Gamma at 300K')\nplt.plot(x, gammas, marker='o')\nplt.show()\n```\n\nWhich looks as expected, but we have no values to compare it against.\n\n## Potential\n\nWe plug these two functions, into the expression for the potential\n\n$\\Psi(z) = \\frac{2k_B T}{e} \\log\\Big(\\frac{1 + \\gamma e^{-\\kappa z}}{1- \\gamma e^{-\\kappa z}}\\Big) \n \\approx \\frac{4k_B T}{e} \\gamma e^{-\\kappa z}$\n\n\n```python\ndef potential(location, rho_bulk, charge, surface_potential, temperature=300, permittivity=80):\n \"\"\"The potential near a charged surface in an ionic solution.\n Arguments:\n location: z-distance from the surface [m]\n temperature: Temperature of the soultion [Kelvin] \n gamma: term from Gouy-Chapmann theory\n kappa: the inverse of the debye length\n charge: dictionary of the charge of each ionic species\n permittivity: capacitance of the ionic solution []\n temperature: Temperature of the solution [Kelvin]\n \n Returns:\n psi: Electrostatic potential [V]\n \"\"\"\n prefactor = 2 * sc.Stefan_Boltzmann * temperature / sc.elementary_charge\n \n gamma_value = gamma(surface_potential=surface_potential, temperature=temperature)\n debye_value = debye(rho_bulk=rho_bulk, charge=charge, permittivity=permittivity, temperature=temperature)\n kappa = 1/debye_value\n \n numerator = 1 + gamma_value * np.exp(-kappa * location)\n divisor = 1 - gamma_value * np.exp(-kappa * location)\n \n psi = prefactor * np.log(numerator / divisor)\n\n return psi\n```\n\n\n```python\nz = 
np.linspace(0, 2*10**-7, 10000)\ndensity = sc.Avogadro * 1000 * 10**-4\nrho = {'Na': density, 'Cl':density} \ncharge = {'Na': 1, 'Cl': -1}\npot_0 = 100\npsi = [potential(location=loc, rho_bulk=rho, charge=charge, surface_potential=pot_0) for loc in z]\nplt.xlabel('z [nm]')\nplt.ylabel('Potential [V]')\nplt.plot(z*10**9, psi, marker='')\nplt.show()\n```\n\nAs expected, this gives us an exponentially decaying potential.\n\nUnfortunately, the differences in our potential calculation already strain python's numerical precision:\n\n\n```python\nz = np.linspace(0, 3*10**-10, 10000)\npsi = [potential(location=loc, rho_bulk=rho, charge=charge, surface_potential=pot_0) for loc in z]\nplt.xlabel('z [nm]')\nplt.ylabel('Potential [V]')\nplt.plot(z*10**9, psi, marker='')\nplt.show()\n```\n\nThese steps in the potential will lead to major errors in the density, so we want to remove them.\n\nTo get a smooth potential, we use the decimal package with increased float precision.\n\n\n```python\nfrom decimal import *\ngetcontext().prec = 30\n\ndef precise_potential(location, rho_bulk, charge, surface_potential, temperature=300, permittivity=80):\n \"\"\"The potential near a charged surface in an ionic solution.\n \n The decimal package is used for increased precision.\n \n Arguments:\n location: z-distance from the surface [m]\n temperature: Temperature of the soultion [Kelvin] \n gamma: term from Gouy-Chapmann theory\n kappa: the inverse of the debye length\n charge: dictionary of the charge of each ionic species\n permittivity: capacitance of the ionic solution []\n temperature: Temperature of the solution [Kelvin]\n \n Returns:\n psi: Electrostatic potential [V]\n \"\"\"\n prefactor = Decimal(2 * sc.Stefan_Boltzmann * temperature / sc.elementary_charge)\n \n debye_value = debye(rho_bulk=rho_bulk, charge=charge, permittivity=permittivity, temperature=temperature)\n kappa = 1/debye_value\n\n gamma_value = Decimal(gamma(surface_potential=surface_potential, temperature=temperature)) \n \n exponential = Decimal(np.exp(-kappa * location))\n numerator = Decimal(1) + gamma_value * exponential\n divisor = Decimal(1) - gamma_value * exponential\n \n psi = prefactor * (numerator / divisor).ln()\n psi = float(psi)\n\n return psi\n```\n\n\n```python\nz = np.linspace(0, 2*10**-7, 10000)\ndensity = sc.Avogadro * 1000 * 10**-4\nrho = {'Na': density, 'Cl':density} \ncharge = {'Na': 1, 'Cl': -1}\npot_0 = 100\npsi = [precise_potential(location=loc, rho_bulk=rho, charge=charge, surface_potential=pot_0) for loc in z]\nplt.xlabel('z [nm]')\nplt.ylabel('Potential [V]')\nplt.plot(z*10**9, psi, marker='')\nplt.show()\n\nz = np.linspace(0, 3*10**-10, 10000)\npsi = [precise_potential(location=loc, rho_bulk=rho, charge=charge, surface_potential=pot_0) for loc in z]\nplt.xlabel('z [nm]')\nplt.ylabel('Potential [V]')\nplt.plot(z*10**9, psi, marker='')\nplt.show()\n```\n\nWe can see that the steps in the potential are gone, everything is smooth now.\n\n## Charge density\n\nNow we obtain the charge density $\\rho$ from the potential $\\Psi$ via\n\n$\\rho_{Na^+}(z) = \\rho_{Na^+}(\\infty) e^{-e \\Psi(z)/k_B T}$\n\n\n```python\ndef charge_density(location, rho_bulk, charge, surface_potential, temperature=300, permittivity=80, species='Na'):\n \"\"\"The potential near a charged surface in an ionic solution.\n Arguments:\n location: z-distance from the surface [m]\n temperature: Temperature of the soultion [Kelvin] \n gamma: term from Gouy-Chapmann theory\n kappa: the inverse of the debye length\n charge: dictionary of the charge of each 
ionic species\n permittivity: capacitance of the ionic solution []\n temperature: Temperature of the solution [Kelvin]\n \n Returns:\n rho: number density of salt ions\n \"\"\"\n potential_value = precise_potential(location, rho_bulk, charge, surface_potential, temperature, permittivity)\n \n # Save the species' densities\n rho = rho_bulk[species] * np.exp(-1 * charge[species] * sc.elementary_charge * potential_value / (sc.Boltzmann * temperature))\n \n return rho\n```\n\n\n```python\nz = np.linspace(0, 100*10**-9, 2000)\ndensity = sc.Avogadro * 1000 * 10**-4\nrho = {'Na': density, 'Cl':density} \ncharge = {'Na': 1, 'Cl': -1}\npot_0 = 0.05 # Breaks if > 1\n\npsi = [precise_potential(location=loc, rho_bulk=rho, charge=charge, surface_potential=pot_0) for loc in z]\n\nrho_na = np.array([charge_density(location=loc, rho_bulk=rho, charge=charge, surface_potential=pot_0, species='Na') for loc in z])\nrho_cl = np.array([charge_density(location=loc, rho_bulk=rho, charge=charge, surface_potential=pot_0, species='Cl') for loc in z])\n\ndeb = debye(rho_bulk=rho, charge=charge) * 10**9\n\nfig, ax1 = plt.subplots(figsize=[15,5])\nax1.set_xlabel('z [nm]')\nax1.plot(z*10**9, psi, marker='', color='red', label='Potential')\nax1.set_ylabel('Potential')\nax1.axvline(x=deb, label='Debye Length', color='orange')\n\nax2 = ax1.twinx()\nax2.plot(z*10**9, [density]*len(z), label='Bulk concentration', color='grey')\nax2.plot(z*10**9, rho_na, marker='', color='green', label='Na+ ions')\nax2.plot(z*10**9, rho_cl, marker='', color='blue', label='Cl- ions')\nax2.set_ylabel('Density')\n\n#fig.legend(loc='center')\nax2.legend(loc='best', fontsize=15)\nax1.legend(loc='upper center', fontsize=15)\nfig.tight_layout()\nplt.show()\n```\n\nThe charge density behaves as expected, it interpolates between low (high) concentration and the bulk concentration within the first few debye lengths.\n\n## Sampling\nNow let's see if we can just plug our new distribution in our existing framework.\n\nFirst, we need to convert the physical distribution to the format we were using so far:\n\n\n```python\ndef wrap_distribution(x, species):\n \"\"\"Wrapper for na+ ions.\"\"\"\n density = sc.Avogadro * 1000 * 10**-4\n rho = {'Na': density, 'Cl':density} \n charge = {'Na': 1, 'Cl': -1}\n pot_0 = 0.05 # Breaks if > 1\n \n def call_distri(loc):\n distri = charge_density(location=loc, rho_bulk=rho, \n charge=charge, surface_potential=pot_0, species=species)\n return float(distri)\n \n if not np.isscalar(x):\n y = []\n for i in range(0, len(x)):\n val = call_distri(x[i])\n \n # Normalize to be 1 at x=0\n val /= call_distri(0)\n # Scale distribution to have values in [0, 0.1] for ease of sampling\n val /= 10\n y += [val]\n return np.array(y)\n\n # If we have only a point estimate\n val = call_distri(x)\n # Normalize to be 1 at x=0\n val /= call_distri(0)\n # Scale distribution to have values in [0, 0.1] for ease of sampling\n val /= 10 \n return val\n \ndef cl_distribution(x):\n return wrap_distribution(x, species='Cl')\n\n \ndef na_distribution(x):\n return wrap_distribution(x, species='Na')\n```\n\n\n```python\nx = 50 * 10**-9\nz = 100 * 10**-9\nbox = np.array([x, x, z])\nstruc = generate_structure(distribution=na_distribution, box=box, atom_count=10000)\nhistx, histy, histz = get_histogram(struc, box=box, n_bins=101)\nplot_dist(histz, 'z', reference_distribution=na_distribution)\n```\n\n\n```python\nx = 50 * 10**-9\nz = 100 * 10**-9\nbox = np.array([x, x, z])\nstruc = generate_structure(distribution=cl_distribution, box=box, 
atom_count=10000)\nhistx, histy, histz = get_histogram(struc, box=box, n_bins=101)\nplot_dist(histz, 'z', reference_distribution=cl_distribution)\n```\n\n## Write to file\nTo visualize our structure, we export it to the .xyz file format, which is basically\n\n```\nATOM_NUMBER\nOptional comment\natom_type x y z\natom_type x y z\n```\n\nAvogadro expects x, y, z to be in units of $10^{-9}~m$, so we convert our salt \"solution\" to this unit.\n\n\n```python\nfrom generate_structure import concat_names_structs\nfrom generate_structure import export_named_struc\n\ncl_struc = generate_structure(distribution=cl_distribution, box=box, atom_count=100)\nna_struc = generate_structure(distribution=na_distribution, box=box, atom_count=100)\n\nconcat_list = concat_names_structs(struc_list=[cl_struc, na_struc], name_list=['Cl', 'Na'])\nrescaled_list = []\nfor line in concat_list:\n name, x, y, z = line\n x = float(x) * 10**9\n y = float(y) * 10**9\n z = float(z) * 10**9 \n rescaled_list += [[name, x, y, z]]\n\nrescaled_list = np.array(rescaled_list)\nexport_named_struc(rescaled_list)\n```\n\n\n```python\nfrom IPython.display import Image\nImage(filename='distributed_atom_structure.png') \n```\n", "meta": {"hexsha": "5aa19ad6e6f1cb7cf028f06290de0a9974ae286c", "size": 262620, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "dev_sourcecode_physical_distribution.ipynb", "max_stars_repo_name": "jotelha/continuous2discrete", "max_stars_repo_head_hexsha": "fd22f82fe9351ccadfaf35c7177e1ab668b74c34", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "dev_sourcecode_physical_distribution.ipynb", "max_issues_repo_name": "jotelha/continuous2discrete", "max_issues_repo_head_hexsha": "fd22f82fe9351ccadfaf35c7177e1ab668b74c34", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "dev_sourcecode_physical_distribution.ipynb", "max_forks_repo_name": "jotelha/continuous2discrete", "max_forks_repo_head_hexsha": "fd22f82fe9351ccadfaf35c7177e1ab668b74c34", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 359.7534246575, "max_line_length": 62236, "alphanum_fraction": 0.9286307212, "converted": true, "num_tokens": 4346, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9643214460461697, "lm_q2_score": 0.8688267830311355, "lm_q1q2_score": 0.8378282997762263}} {"text": "# 15 Partial Differential Equations \u2014 1\n\n## Solving Laplace's or Poisson's equation\n\n**Poisson's equation** for the electric potential $\\Phi(\\mathbf{r})$ and the charge density $\\rho(\\mathbf{r})$:\n\n$$\n\\nabla^2 \\Phi(x, y, z) = -4\\pi\\rho(x, y, z)\\\\\n$$\n\nFor a region of space without charges ($\\rho = 0$) this reduces to **Laplace's equation**\n\n$$\n\\nabla^2 \\Phi(x, y, z) = 0\n$$\n\n\nSolutions depend on the **boundary conditions**: \n\n* the *value of the potential* on the *boundary* or \n* the *electric field* (i.e. 
the derivative of the potential, $\\mathbf{E} = -\\nabla\\Phi$ *normal to the surface* ($\\mathbf{n}\\cdot\\mathbf{E}$), which directly follows from the charge distribution).\n\n### Example: 2D Laplace equation\n$$\n\\frac{\\partial^2 \\Phi(x,y)}{\\partial x^2} + \\frac{\\partial^2 \\Phi(x,y)}{\\partial y^2} = 0\n$$\n(\"elliptic PDE\")\n\nBoundary conditions:\n* square area surrounded by wires\n* three wires at ground (0 V), one wire at 100 V\n\n## Finite difference algorithm for Poisson's equation\nDiscretize space on a lattice (2D) and solve for $\\Phi$ on each lattice site.\n\nTaylor-expansion of the four neighbors of $\\Phi(x, y)$:\n\n\\begin{align}\n\\Phi(x \\pm \\Delta x, y) &= \\Phi(x, y) \\pm \\Phi_x \\Delta x + \\frac{1}{2} \\Phi_{xx} \\Delta x^2 + \\dots\\\\\n\\Phi(x, y \\pm \\Delta y) &= \\Phi(x, y) \\pm \\Phi_y \\Delta x + \\frac{1}{2} \\Phi_{yy} \\Delta x^2 + \\dots\\\\\n\\end{align}\n\nAdd equations in pairs: odd terms cancel, and **central difference approximation** for 2nd order partial derivatives (to $\\mathcal{O}(\\Delta^4)$):\n\n\\begin{align}\n\\Phi_{xx}(x,y) = \\frac{\\partial^2 \\Phi}{\\partial x^2} & \\approx \n \\frac{\\Phi(x+\\Delta x,y) + \\Phi(x-\\Delta x,y) - 2\\Phi(x,y)}{\\Delta x^2} \\\\\n\\Phi_{yy}(x,y) = \\frac{\\partial^2 \\Phi}{\\partial y^2} &\\approx \n \\frac{\\Phi(x,y+\\Delta y) + \\Phi(x,y-\\Delta y) - 2\\Phi(x,y)}{\\Delta y^2}\n\\end{align}\n\nTake $x$ and $y$ grids of equal spacing $\\Delta$: Discretized Poisson equation\n\n$$\n\\begin{split}\n\\Phi(x+\\Delta x,y) + \\Phi(x-\\Delta x,y) +\\Phi(x,y+\\Delta y) &+ \\\\\n +\\, \\Phi(x,y-\\Delta y) - 4\\Phi(x,y) &= -4\\pi\\rho(x,y)\\,\\Delta^2\n \\end{split}\n$$\n\nDefines a system of $N_x \\times N_y$ simultaneous algebraic equations for $\\Phi_{ij}$ to be solved.\n\nCan be solved directly via matrix approaches (and then is the best solution) but can be unwieldy for large grids.\n\nAlternatively: **iterative solution**:\n\n$$\n\\begin{split}\n4\\Phi(x,y) &= \\Phi(x+\\Delta x,y) + \\Phi(x-\\Delta x,y) +\\\\\n &+ \\Phi(x,y+\\Delta y) + \\Phi(x,y-\\Delta y) + 4\\pi\\rho(x,y)\\,\\Delta^2\n\\end{split}\n$$\n\nOr written for lattice sites $(i, j)$ where \n\n$$\nx = x_0 + i\\Delta\\quad\\text{and}\\quad y = y_0 + j\\Delta, \\quad 0 \\leq i,j < N_\\text{max}\n$$\n\n$$\n\\Phi_{i,j} = \\frac{1}{4}\\Big(\\Phi_{i+1,j} + \\Phi_{i-1,j} + \\Phi_{i,j+1} + \\Phi_{i,j-1}\\Big)\n + \\pi\\rho_{i,j} \\Delta^2\n$$\n\n* Converged solution at $(i, j)$ will be the average potential from the four neighbor sites + charge density contribution.\n* *Not a direct solution*: iterate and hope for convergence.\n\n#### Jacobi method\nDo not change $\\Phi_{i,j}$ until a complete sweep has been completed.\n\n#### Gauss-Seidel method\nImmediately use updated new values for $\\Phi_{i-1, j}$ and $\\Phi_{i, j-1}$ (if starting from $\\Phi_{1, 1}$).\n\nLeads to *accelerated convergence* and therefore *less round-off error* (but distorts symmetry of boundary conditions... hopefully irrelevant when converged but check!)\n\n### Solution via relaxation (Gauss-Seidel) \n\nSolve the box-wire problem on a lattice: The wire at $x=0$ (the $y$-axis) is at 100 V, the other three sides of the box are grounded (0 V).\n\nNote: $\\rho=0$ inside the box.\n\nNote for Jupyter notebook use:\n* For interactive 3D plots, select\n ```\n %matplotlib widget\n ```\n or if the above fails, try\n ```\n %matplotlib notebook\n ```\n* For standard inline figures (e.g. 
for exporting the notebook to LaTeX/PDF or html) use \n ```\n %matplotlib inline\n ``` \n \nEnable a matplotlib-Jupyter integration that works for you (try `conda install ipympl` or `pip install ipympl` first to get `%matplotlib widget` working).\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\n```\n\n\n```python\n# for interactive work\n\n%matplotlib widget\n#%matplotlib inline\n#%matplotlib notebook\n```\n\n\n```python\n# for plotting/saving\n%matplotlib inline\n```\n\n#### Wire on a box: Solution of Laplace's equation with the Gauss-Seidel algorithm\n\n\n```python\nNmax = 100\nMax_iter = 70\nPhi = np.zeros((Nmax, Nmax), dtype=np.float64)\n\n# initialize boundaries\n# everything starts out zero so nothing special for the grounded wires\nPhi[0, :] = 100 # wire at x=0 at 100 V\n\nNx, Ny = Phi.shape\n\nfor n_iter in range(Max_iter):\n for xi in range(1, Nx-1):\n for yj in range(1, Ny-1):\n Phi[xi, yj] = 0.25*(Phi[xi+1, yj] + Phi[xi-1, yj] \n + Phi[xi, yj+1] + Phi[xi, yj-1])\n```\n\n#### Visualization of the potential \n\n\n```python\n# plot Phi(x,y)\nx = np.arange(Phi.shape[0])\ny = np.arange(Phi.shape[1])\nX, Y = np.meshgrid(x, y)\n\nZ = Phi[X, Y]\n```\n\n\n```python\nfig = plt.figure()\nax = fig.add_subplot(111, projection='3d')\nax.plot_wireframe(X, Y, Z, rstride=2, cstride=2)\n\nax.set_xlabel('X')\nax.set_ylabel('Y')\nax.set_zlabel(r'potential $\\Phi$ (V)')\n\nax.view_init(elev=40, azim=-65)\n```\n\nNicer plot (use this code for other projects):\n\n\n```python\nfig = plt.figure()\nax = fig.add_subplot(111, projection='3d')\n\nax.plot_wireframe(X, Y, Z, rstride=2, cstride=2, linewidth=0.5, color=\"gray\")\nsurf = ax.plot_surface(X, Y, Z, cmap=plt.cm.coolwarm, alpha=0.6)\ncset = ax.contourf(X, Y, Z, zdir='z', offset=-50, cmap=plt.cm.coolwarm)\n\nax.set_xlabel('X')\nax.set_ylabel('Y')\nax.set_zlabel(r'potential $\\Phi$ (V)')\nax.set_zlim(-50, 100)\nax.view_init(elev=40, azim=-65)\n\ncb = fig.colorbar(surf, shrink=0.5, aspect=5)\ncb.set_label(r\"potential $\\Phi$ (V)\")\n```\n\n(Note that the calculation above is is *not converged* ... 
see next lecture.)\n", "meta": {"hexsha": "c3ba726f52fb5919a7f6443d9349a384a15ba459", "size": 178392, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "15_PDEs/15_PDEs-1.ipynb", "max_stars_repo_name": "ASU-CompMethodsPhysics-PHY494/PHY494-resources-2020", "max_stars_repo_head_hexsha": "20e08c20995eab567063b1845487e84c0e690e96", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "15_PDEs/15_PDEs-1.ipynb", "max_issues_repo_name": "ASU-CompMethodsPhysics-PHY494/PHY494-resources-2020", "max_issues_repo_head_hexsha": "20e08c20995eab567063b1845487e84c0e690e96", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "15_PDEs/15_PDEs-1.ipynb", "max_forks_repo_name": "ASU-CompMethodsPhysics-PHY494/PHY494-resources-2020", "max_forks_repo_head_hexsha": "20e08c20995eab567063b1845487e84c0e690e96", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 356.0718562874, "max_line_length": 87780, "alphanum_fraction": 0.9358939863, "converted": true, "num_tokens": 1914, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9362849986365572, "lm_q2_score": 0.8947894675053567, "lm_q1q2_score": 0.8377779553632587}} {"text": "# Numerical Methods in Scientific Computing\n# Assignment 6\n## E Naveen\n## (ME16B077)\n\n# Q1.\n\nThe Fredholm Integral equation is given by,\n\\begin{equation}\n f(x) = \\phi (x) + \\int_a^b K_s(x,t) \\phi (t)dt\n\\end{equation}\n\nOne method for solving the above equation numerically is to write the integral on the RHS using any of the Quadrature methods discussed. \nChoosing Gaussian-Legendre Quadrature for this method we have,\n\\begin{equation}\n \\int_a^b K(x,t) \\phi (t)dt = \\frac{b-a}{2}\\int_{-1}^1 K_s(x,s) \\phi_s(s)ds = \\frac{b-a}{2}\\sum_{i=0}^{n} w_i K_s(x,s_i) \\phi_s(s_i)\n\\end{equation}\nwhere\n$$ s = \\frac{2t - (a+b)}{b-a} $$\nand $K(x,t)$ and $\\phi(t)$ are changed to $K_s(x,s)$ and $\\phi_s(s)$ after change of variable t into s.\n\nSo, for $x=x_0,x_1,...,x_n$ and $s=s_0,s_1,...,s_n$ (s are gauss-legendre nodes) we have\n\\begin{equation}\n f(x_i) = \\phi_s(s_i) + \\frac{b-a}{2}\\sum_{j=0}^{n} w_j K_s(x_i,s_j) \\phi_s(s_j)\n\\end{equation}\nwith $$ x_i = \\frac{s_i(b-a)+(a+b)}{2} $$\nThus we have n+1 equations with n+1 unknowns. 
We can solve this as Linear system of equations.\n\n$$ A= \\frac{b-a}{2}\\left(\\begin{matrix}w_0K_s(x_0,s_0)&w_1K_s(x_0,s_1)&w_2K_s(x_0,s_2).&.&.&w_{n-1}K_s(x_{0},s_{n-1})&w_nK_s(x_0,s_n)\\\\w_0K_s(x_1,s_0)&w_1K_s(x_1,s_1)&w_2K_s(x_1,s_2).&.&.&w_{n-1}K_s(x_{1},s_{n-1})&w_nK_s(x_1,s_n)\n\\\\.&.&w_2K_s(x_2,s_2).&.&.&.&.\n\\\\.&.&.&.&.&.&.\n\\\\.&.&.&.&.&w_{n-1}K_s(x_{n-1},s_{n-1})&w_nK_s(x_{n-1},s_n)\\\\w_0K_s(x_n,s_0)&.&.&.&.&w_{n-1}K_s(x_{n},s_{n-1})&w_nK_s(x_n,s_n)\\end{matrix}\\right) + \\left(\\begin{matrix}1&.&.&.&.&.&.&.\\\\.&1&.&.&.&.&.&.\\\\.&.&.&.&.&.&.&.\\\\.&.&.&.&.&.&.&.\\\\.&.&.&.&.&.&1&.\\\\.&.&.&.&.&.&.&1\\end{matrix}\\right) $$\n\n$$ \\phi = \\left(\\begin{matrix}\\phi_s(s_0)\\\\\\phi_s(s_1)\\\\.\\\\.\\\\\\phi_s(s_{n-1})\\\\\\phi_s(s_n)\\end{matrix}\\right) $$\n\n$$ f = \\left(\\begin{matrix}f(x_0)\\\\f(x_1)\\\\.\\\\.\\\\f(x_{n-1})\\\\f(x_n)\\end{matrix}\\right) $$\n\n$$ \\left(\\frac{b-a}{2}\\left(\\begin{matrix}w_0K_s(x_0,s_0)&w_1K_s(x_0,s_1)&w_2K_s(x_0,s_2).&.&.&w_{n-1}K_s(x_{0},s_{n-1})&w_nK_s(x_0,s_n)\\\\w_0K_s(x_1,s_0)&w_1K_s(x_1,s_1)&w_2K_s(x_1,s_2).&.&.&w_{n-1}K_s(x_{1},s_{n-1})&w_nK_s(x_1,s_n)\n\\\\.&.&w_2K_s(x_2,s_2).&.&.&.&.\n\\\\.&.&.&.&.&.&.\n\\\\.&.&.&.&.&w_{n-1}K_s(x_{n-1},s_{n-1})&w_nK_s(x_{n-1},s_n)\\\\w_0K_s(x_n,s_0)&.&.&.&.&w_{n-1}K_s(x_{n},s_{n-1})&w_nK_s(x_n,s_n)\\end{matrix}\\right) + I\\right) \\left(\\begin{matrix}\\phi_s(s_0)\\\\\\phi_s(s_1)\\\\.\\\\.\\\\\\phi_s(s_{n-1})\\\\\\phi_s(s_n)\\end{matrix}\\right) = \\left(\\begin{matrix}f(x_0)\\\\f(x_1)\\\\.\\\\.\\\\f(x_{n-1})\\\\f(x_n)\\end{matrix}\\right) $$\n\n$$ A\\phi = f \\Rightarrow \\phi = A^{-1}f$$\n\nFor the following equation,\n\n\\begin{equation}\n \\phi(x) = \\pi x^2 + \\int_0^{\\pi} 3(0.5sin(3x)-tx^2)\\phi(t)dt \\Rightarrow \\phi(x) + \\int_0^{\\pi} 3(tx^2-0.5sin(3x))\\phi(t)dt= \\pi x^2\n\\end{equation}\n\nHence we have,\n\\begin{equation}\n K(x,t) = 3(tx^2-0.5sin(3x)) ;\\quad s = \\frac{2t - (a+b)}{b-a}=\\frac{2t-\\pi}{\\pi} ;\\quad t = \\frac{s(b-a)+(a+b)}{2} = \\frac{\\pi(1+s)}{2} = x\n\\end{equation}\n\nWith $a=0$ and $b=\\pi$\n\\begin{equation}\n \\Rightarrow K_s(x,s) = 3(\\frac{\\pi(1+s)}{2}x^2-0.5sin(3x)) \\quad; \\quad f(x) = \\pi x^2\n\\end{equation}\n\nThe exact solution is obtained by substituting $\\phi(t)=sin(kt)$ and evaluating $\\phi(x) = \\pi x^2 + \\int_0^{\\pi} 3(0.5sin(3x)-tx^2)\\phi(t)dt$ which is given by,\n\n\\begin{equation}\n \\phi(x) = (-3x^2(sin(\\pi k) - \\pi kcos(\\pi k)) - 1.5 k (cos(\\pi k) - 1) sin(3 x))/k^2 + \\pi x^2 =sin(kx)\n\\end{equation}\n\nThe above equation can be solved by equating,\n\n\\begin{equation}\n \\frac{3x^2\\pi cos(\\pi k)}{k} = -\\pi x^2 \\Rightarrow k=3\n\\end{equation}\n\nHence exact solution is,\n\\begin{equation}\n \\phi(x) = sin(3x)\n\\end{equation}\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport time\n```\n\n\n```python\n# Function to return K(x,s) as defined above.\ndef ret_K_x_s(x,s):\n K = 3*(((np.pi*(1+s)/2)*x**2) - 0.5*np.sin(3*x))\n return K\n\n# Function to calculate the required function phi.\ndef calc_phi(n):\n h = 2/n\n S, W = np.polynomial.legendre.leggauss(n+1)\n X = (S+1)*np.pi/2\n A = np.identity(n+1)\n for i in range(n+1):\n for j in range(n+1):\n A[i, j] = A[i, j] + W[j]*ret_K_x_s(X[i],S[j])*np.pi/2\n F = np.zeros((n+1,1))\n for i in range(n+1):\n F[i] = np.pi*X[i]**2\n phi = np.dot(np.linalg.inv(A), F)\n return X, phi\n```\n\n\n```python\n# Call the function for n=100\nX, phi = calc_phi(100)\n\n# Plot the numerical and exact solution to phi.\nplt.rcParams[\"figure.figsize\"] = 
[8,6]\nplt.plot(X,phi,color='b', label='numerical solution')\nplt.plot(X,np.sin(3*X), marker='x',color='r', label='Exact solution', linestyle=' ')\nplt.legend()\nplt.grid()\nplt.xlabel('x')\nplt.ylabel('phi')\nplt.title('Numerical vs Exact Solution for Fredholm integral equation')\nplt.show()\n```\n\n\n```python\n# Plot the error as a function of number of nodes.\nN = np.linspace(5,400,20)\nError = []\nfor n in N:\n X, phi = calc_phi(int(n))\n phi = [float(x) for x in phi]\n phi_exact = np.sin(3*X)\n phi_exact = [float(x) for x in phi_exact]\n Error.append(np.mean(np.abs(np.subtract(phi,phi_exact))))\n \nplt.rcParams[\"figure.figsize\"] = [8,6]\nplt.loglog(N,Error,color='b', label='Max Absolute Error')\nplt.legend()\nplt.grid()\nplt.xlabel('N')\nplt.ylabel('error')\nplt.title('Infinity Norm Error')\nplt.show()\n```\n\n- The numerical method is very accurate when comparing with the exact solution even for small values of n.\n- Initially the rate of convergence to the actual solution as seen using infinity norm error is very steep for n from 5 to 15 beyond which the solution is accurate to machine precision.\n\n# Q2.\n\n\n```python\n# Return function value when in terms of x\ndef f_x(x):\n f = np.exp(-x)/x**0.5\n return f\n\n# Return function value when in terms of t\ndef f_t(t):\n f = 2*np.exp(-t**2)\n return f\n\n# Rectangular rule in terms of x.\ndef rect_rule_x(n):\n h = 1/n\n X = np.linspace(0,1,n+1)\n Y = [(X[i]+X[i+1])/2 for i in range(n)]\n F_x = [f_x(Y[i]) for i in range(n)]\n I_x = h*sum(F_x)\n return I_x\n\n# Rectangular rule in terms of t.\ndef rect_rule_t(n):\n h = 1/n\n X = np.linspace(0,1,n+1)\n Y = [(X[i]+X[i+1])/2 for i in range(n)]\n F_t = [f_t(Y[i]) for i in range(n)]\n I_t = h*sum(F_t)\n return I_t\n```\n\n\n```python\n# Defined values for n.\nN = [5,10,20,50,100,200,500,1000,2000,5000,10000]\n\n# Store the integral value obtained from wolfram alhpa.\naccurate_I = 1.493648265624854050798934872263706010708999373625212658055\naccurate_I = [accurate_I for _ in range(len(N))]\n\n# Initialize parameters to store the results.\nx_I = []\nx_time = []\nx_error = []\nt_I = []\nt_time = []\nt_error = []\nfor n in N:\n \n start = time.time()\n I = rect_rule_x(n)\n end = time.time()\n x_I.append(I)\n x_time.append(end-start)\n \n start = time.time()\n I = rect_rule_t(n)\n end = time.time()\n t_I.append(I)\n t_time.append(end-start)\n \n x_error.append(abs(rect_rule_x(n)-accurate_I[0]))\n t_error.append(abs(rect_rule_t(n)-accurate_I[0]))\n \nprint(\"The average time to run function to find both types of Integrals are:\")\nprint(\"Using x =\", abs(np.mean(x_time)))\nprint(\"Using t =\", abs(np.mean(t_time)))\n\n# Plot the absolute error in log-log plot as a function of n.\nplt.rcParams[\"figure.figsize\"] = [5,4]\nplt.loglog(N, x_error, color='r', label='using x')\nplt.loglog(N, t_error, color='b', label='using t')\nplt.legend()\nplt.grid()\nplt.xlabel('n')\nplt.ylabel('Error')\nplt.title('Absolute Error vs n in log-log plot')\nplt.show()\n```\n\n- The Integral obtained using Rectangular rule after making a change of variable to 't' is way more accurate than with 'x'.\n- The Slope in log-log plot of error vs number of intervals using 'x' is 0.5 whereas that for 't' is 2, signifying multiple orders accurate than with just using 'x'.\n- On average the cost of computation is more for integrating $\\frac{e^{-x}}{\\sqrt{x}}$ than $e^{-t^2}$ owing to the requirement of performing a square root and division operation which requires more effort and done using newton raphson method.\n\n# 
Q3.\n\nIt is given that f(x) is a d+1 times continuously differentiable function. Householder's method is given by,\n\\begin{equation}\n    x_{n+1} = x_n + d\\frac{(1/f)^{(d-1)}(x_n)}{(1/f)^{(d)}(x_n)}\n\\end{equation}\n\n(A small numerical sketch of this iteration, for illustration only, is included just before the code for Q5 below.)\n\nIf the sequence of iterates converges to a root a, we know $f(a) = 0$. Then near $x = a$ the function $(1/f)(x)$ has a pole, i.e. it is a meromorphic function near the root.\n\nWriting the Taylor series expansion for $(1/f)(x)$ about a point $b$ near the root,\n\n\\begin{equation}\n    (1/f)(x) = \\sum_{d=0}^{\\infty} \\frac{(1/f)^{(d)}(b)}{d!} (x-b)^d\n\\end{equation}\n\nK\u00f6nig's theorem states that, for a meromorphic function as defined above, the limit below as $d\\rightarrow \\infty$ is\n\\begin{equation}\n    \\lim_{d\\rightarrow\\infty} \\frac{\\frac{(1/f)^{(d-1)}(b)}{(d-1)!}}{\\frac{(1/f)^{(d)}(b)}{d!}} = \\lim_{d\\rightarrow\\infty} d\\frac{(1/f)^{(d-1)}(b)}{(1/f)^{(d)}(b)} = a-b\n\\end{equation}\n\nwhere a-b is the error. Hence for any d, by the power of the error term $(b-a)$ in the Taylor series expansion containing the $d^{th}$ derivative, we have\n\\begin{equation}\n    |x_{n+1}-a| \\leq K|x_n-a|^{d+1}\n\\end{equation}\n\n\n# Q4.\n\nSince the function is strictly convex, twice differentiable and has a single simple root, its first derivative must satisfy either $f^{\\prime}(x) \\gt 0 \\text{ } \\forall x$ or $f^{\\prime}(x) \\lt 0 \\text{ } \\forall x$. This also means that the function value has opposite signs on the two sides of the root (x=a).\n\nCase 1: $f^{\\prime}(x) \\gt 0$,\n- When the initial guess is to the left of the root ($x \\lt a$),\n    - f(x) is negative and this leads to the term $-\\frac{f(x_n)}{f^{\\prime}(x_n)}$ being positive and hence $x_{n+1} \\gt x_n$, making the next iterate move toward the root.\n- Similarly, when the initial guess is to the right of the root ($x \\gt a$),\n    - f(x) is positive and this leads to the term $-\\frac{f(x_n)}{f^{\\prime}(x_n)}$ being negative and hence $x_{n+1} \\lt x_n$, making the next iterate move toward the root as well.\n\nCase 2: $f^{\\prime}(x) \\lt 0$,\n- When the initial guess is to the left of the root ($x \\lt a$),\n    - Since f is decreasing, $f(x) \\gt f(a)$, making f(x) positive as f(a) = 0.\n    - This leads to the term $-\\frac{f(x_n)}{f^{\\prime}(x_n)}$ being positive and hence $x_{n+1} \\gt x_n$, making the next iterate move toward the root.\n- Similarly, when the initial guess is to the right of the root ($x \\gt a$),\n    - f(x) is negative and this leads to the term $-\\frac{f(x_n)}{f^{\\prime}(x_n)}$ being negative and hence $x_{n+1} \\lt x_n$, making the next iterate move toward the root as well.\n\nWhen the initial guess is in fact the root, regardless of the case, the subsequent iterations produce no change in x since f(x)=0.\n\nHence the Newton method converges to the root irrespective of the initial guess.\n\n# Q5.\n\nTo show $w(x) = xe^{x} - a$ has only one real root for $a\\gt0$.\n\nWhen $a\\gt0$ the root, say $x^*$, must also be $\\gt0$. The derivative of $w(x)$ is given by,\n\n\\begin{equation}\n    w^{\\prime}(x) = (x+1)e^x \\gt 0 \\quad \\forall x \\gt 0\n\\end{equation}\n\nSince $w^{\\prime}(x) \\gt 0$, the function $w(x)$ is monotonically increasing for $x \\gt 0$. Moreover we know,\n\n\\begin{equation}\n    w(0) = -a \\lt 0 \\quad \\text{and} \\quad w(a) = a(e^a-1) \\gt 0\n\\end{equation}\n\nSo the root lies in the real interval [0,a] and, since the function is monotonically increasing there, it has only one real root. 
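\n\nAs promised in Q3, here is a small numerical sketch of the Householder iteration before moving to the code for Q5. It is an illustration only, not part of the assignment solution: the derivatives of $(1/f)$ are formed symbolically with sympy (an assumed dependency), and the test function $f(x) = xe^x - 2$, the starting point $x_0 = 1$ and the iteration count are hypothetical choices. For $d=1$ the update reduces to Newton's method and for $d=2$ to Halley's method, so the printed iterates should make the higher convergence order visible within the first few steps.\n\n\n```python\nimport sympy as sp\n\ndef householder(f_expr, x_sym, x0, d=2, n_iter=5):\n    # Householder iteration of order d (d=1 is Newton, d=2 is Halley).\n    # The (d-1)-th and d-th derivatives of 1/f are built symbolically once,\n    # then evaluated numerically at each iterate.\n    g = 1 / f_expr\n    f_num = sp.lambdify(x_sym, f_expr, 'numpy')\n    num = sp.lambdify(x_sym, sp.diff(g, x_sym, d - 1), 'numpy')\n    den = sp.lambdify(x_sym, sp.diff(g, x_sym, d), 'numpy')\n    iterates = [float(x0)]\n    for _ in range(n_iter):\n        x = iterates[-1]\n        if f_num(x) == 0.0:\n            # x is already a root to machine precision; 1/f is singular there.\n            break\n        iterates.append(float(x + d * num(x) / den(x)))\n    return iterates\n\n# Hypothetical test problem: the root of x*exp(x) = 2 (about 0.8526).\nx_sym = sp.symbols('x')\nf_expr = x_sym * sp.exp(x_sym) - 2\nfor d in (1, 2, 3):\n    print('d =', d, ['%.12f' % v for v in householder(f_expr, x_sym, x0=1.0, d=d)])\n```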
\n\n\n\n```python\ndef f_x(x, a):\n f = x*np.exp(x) - a\n return f\n \ndef f_prime_x(x):\n f_prime = (x+1)*np.exp(x)\n return f_prime\n\ndef Bisect(a, x1, x2, tol_x = 10**-3, tol_y = 10**-3):\n iterations = 0\n while True:\n iterations += 1\n if np.abs(x2-x1) < tol_x and np.abs(f_x(x1, a)-f_x(x2, a)) < tol_y: break\n x_new = (x1+x2)/2\n if f_x(x_new, a)*f_x(x2, a) <= 0:\n x1 = x_new\n elif f_x(x_new, a)*f_x(x1, a) <= 0:\n x2 = x_new\n else:\n break\n return [x1, x2], iterations\n\ndef Newton(a, x1, x2, tol_x = 10**-3, tol_y = 10**-3):\n x_old = (x1+x2)/2\n iterations = 0\n while True:\n iterations += 1\n x_new = x_old - (f_x(x_old, a)/f_prime_x(x_old))\n if np.abs(x_new-x_old) < tol_x and np.abs(f_x(x_new, a)-f_x(x_old, a)) < tol_y: break\n x_old = x_new\n if x_new >= x_old: range_x = [x_old, x_new]\n else: range_x = [x_new, x_old]\n return range_x, iterations\n\ndef Secant(a, x1, x2, tol_x = 10**-3, tol_y = 10**-3):\n x_n_1 = x1 + 0.75*(x2-x1)\n x_n_2 = x1 + 0.25*(x2-x1)\n iterations = 0\n while True:\n iterations += 1\n x_n = x_n_1 - f_x(x_n_1, a)*(x_n_1 - x_n_2)/(f_x(x_n_1, a) - f_x(x_n_2, a))\n if np.abs(x_n-x_n_1) < tol_x and np.abs(f_x(x_n, a)-f_x(x_n_1, a)) < tol_y: break\n x_n_2 = x_n_1\n x_n_1 = x_n\n if x_n >= x_n_1: range_x = [x_n_1, x_n]\n else: range_x = [x_n, x_n_1]\n return range_x, iterations\n```\n\n\n```python\nA = np.logspace(-4, 1.5, 10)\n\nfor a in A:\n print('a =', a)\n range_x, iterations = Bisect(a, x1=0, x2=a, tol_x = 10**-5, tol_y = 10**-5)\n print('Bisect Method: \\t Range =', [np.around(x, 10) for x in range_x], '\\t Iterations =', iterations)\n range_x, iterations = Newton(a, x1=0, x2=a, tol_x = 10**-5, tol_y = 10**-5)\n print('Newton Method: \\t Range =', [np.around(x, 10) for x in range_x], '\\t Iterations =', iterations)\n range_x, iterations = Secant(a, x1=0, x2=a, tol_x = 10**-5, tol_y = 10**-5)\n print('Secant Method: \\t Range =', [np.around(x, 10) for x in range_x], '\\t Iterations =', iterations)\n print('-'*75)\n```\n\n a = 0.0001\n Bisect Method: \t Range = [9.375e-05, 0.0001] \t Iterations = 5\n Newton Method: \t Range = [9.999e-05, 9.99925e-05] \t Iterations = 2\n Secant Method: \t Range = [9.999e-05, 9.99919e-05] \t Iterations = 2\n ---------------------------------------------------------------------------\n a = 0.00040842386526745213\n Bisect Method: \t Range = [0.0004020422, 0.0004084239] \t Iterations = 7\n Newton Method: \t Range = [0.0004082572, 0.0004082988] \t Iterations = 2\n Secant Method: \t Range = [0.0004082572, 0.0004082884] \t Iterations = 2\n ---------------------------------------------------------------------------\n a = 0.0016681005372000592\n Bisect Method: \t Range = [0.0016615845, 0.0016681005] \t Iterations = 9\n Newton Method: \t Range = [0.0016653249, 0.0016660159] \t Iterations = 2\n Secant Method: \t Range = [0.0016653247, 0.001665842] \t Iterations = 2\n ---------------------------------------------------------------------------\n a = 0.006812920690579615\n Bisect Method: \t Range = [0.006766348, 0.0067730012] \t Iterations = 11\n Newton Method: \t Range = [0.0067669735, 0.0067669736] \t Iterations = 3\n Secant Method: \t Range = [0.0067669596, 0.0067753654] \t Iterations = 2\n ---------------------------------------------------------------------------\n a = 0.02782559402207126\n Bisect Method: \t Range = [0.0270783247, 0.027085118] \t Iterations = 13\n Newton Method: \t Range = [0.0270821304, 0.02708216] \t Iterations = 3\n Secant Method: \t Range = [0.0270813617, 0.0270821303] \t Iterations = 3\n 
---------------------------------------------------------------------------\n a = 0.11364636663857254\n Bisect Method: \t Range = [0.1025619615, 0.1025688979] \t Iterations = 15\n Newton Method: \t Range = [0.1025677755, 0.1025719053] \t Iterations = 3\n Secant Method: \t Range = [0.1025677495, 0.1025677755] \t Iterations = 4\n ---------------------------------------------------------------------------\n a = 0.4641588833612782\n Bisect Method: \t Range = [0.3327678384, 0.3327713796] \t Iterations = 18\n Newton Method: \t Range = [0.3327713368, 0.3327713425] \t Iterations = 4\n Secant Method: \t Range = [0.3327713368, 0.3327714532] \t Iterations = 4\n ---------------------------------------------------------------------------\n a = 1.8957356524063793\n Bisect Method: \t Range = [0.828156774, 0.828158582] \t Iterations = 21\n Newton Method: \t Range = [0.8281581322, 0.8281581373] \t Iterations = 4\n Secant Method: \t Range = [0.8281565661, 0.8281581326] \t Iterations = 6\n ---------------------------------------------------------------------------\n a = 7.742636826811277\n Bisect Method: \t Range = [1.5857096343, 1.5857100958] \t Iterations = 25\n Newton Method: \t Range = [1.5857100296, 1.5857100297] \t Iterations = 8\n Secant Method: \t Range = [1.5857100296, 1.5857100318] \t Iterations = 8\n ---------------------------------------------------------------------------\n a = 31.622776601683793\n Bisect Method: \t Range = [2.5268887514, 2.5268888692] \t Iterations = 29\n Newton Method: \t Range = [2.526888812, 2.5268888121] \t Iterations = 20\n Secant Method: \t Range = [2.526888812, 2.5268888506] \t Iterations = 17\n ---------------------------------------------------------------------------\n\n\n#### Criteria for Convergence\n\n- Bisection Method\n\n - When the guess for intial interval lies in range [0, a] (as derived earlier the root lies in [0,a]) the Bisection method always converges to a specified tolerance. In fact when the initial guess is any interval containing [0, a] the Bisection method converges.\n \n - This is because we know that only one real root is present and $w(x) \\geq 0$ for all $x \\geq root$ and $w(x) \\leq 0$ for all $x \\leq root$. Thus the function value at extreme points of the interval are of opposite sign. This ensures that we always eliminate the interval not containing the root when bisecting and thus always keep the interval containing root while at the same time shrinking it to half at each iteration.\n\n\n- Newton Method\n\n - When the initial guess is any point $ x_0 \\gt -1$ the method always converges. This is due to the fact that at $x=-1$ the first derivative $f^{\\prime}(x=-1)=0$ and $\\forall x \\lt -1 \\quad f^{\\prime}(x) < 0$. THis causes the iterations to diverge as both $f(x)$ and $f^{\\prime}(x)$ are negative.\n\n\n- Secant Method\n\n - Similar to the Newton method the condition is that the first 2 initial points ($x_0 \\& x_1$) that we consider should both be $\\gt -1$. Definitely both points should not be $ \\leq -1$. \n \n - Intuitively it is because this method tries to aprroximate the first derivative using 2 points on either side and if both of them are less than -1 the approximation resembles for a point less than -1. 
If say $x_0$ is towards the left of -1 the convergence might not be guaranteed as it depends on the values that functions take on subsequent iterations.\n\n#### When $a<0$\n\nEquating the first derivative of the function to 0,\n\\begin{equation}\n w^{\\prime}(x) = (x+1)e^x = 0 \\quad \\Rightarrow x = -1\n\\end{equation}\n\nThe second derivative at x = -1 is,\n\\begin{equation}\n w^{\\prime\\prime}(x=-1) = (x+2)e^x|_{x=-1} = e^{-1} \\gt 0\n\\end{equation}\n\n\nSo the function $w(x)$ reaches the minimum value at $x=-1$ and as $a<0$, the plot of function will be tangent to the x-axis at \n$$a_0 = xe^x|_{x=-1} = -e^{-1}$$\n\nHence, given $a < 0$, for all values of $a > a_0$ there will be 2 roots and for all values of $a < a_0$ there will be no roots. For when $a=a_0$ there will be one root given by $x^* = -1$.\n\n#### Criteria for Convergence\n\n- Bisection Method\n\n - When $w(-1)=0 \\Rightarrow$ one root exists at $x=-1$, \n - the bisection method cannot be used due to the fact that the function value is non-negative for all values of x and cannot choose an interval with opposite signs.\n \n - When $w(-1) \\lt 0 \\Rightarrow$ two roots exist on either side of $x = -1$. \n - The Bisection method would only converge if we choose the initial range such that one of the extremes is at $x = -1$.\n \n - When $w(-1) \\gt 0 \\Rightarrow$ no root exists and function value is always positive. \n - So Bisection method cannot be used.\n\n\n- Newton Method\n\n - When $w(-1)=0 \\Rightarrow$ one root exists at $x=-1$, \n - Newtons method would converge for any initial guess other than $x=-1$. When the point is to the left of $x=-1$, $f(x)>0 \\& f^{\\prime}(x)<0$ pushing further iterations towards the root. Similarly for when the point is to the right of $x=-1$ further iterations will come closer to the root.\n \n - When $w(-1) \\lt 0 \\Rightarrow$ two roots exist on either side of $x = -1$. \n - The Newtons method converges to the roots when guess is other than $x=-1$. When the initial guess is $\\lt-1$ it converges to the root to the left of $x=-1$ and when the guess is $\\gt-1$ it converges to the other root.\n \n - When $w(-1) \\gt 0 \\Rightarrow$ no root exists.\n - Newtons method will not converge as no root exists.\n \n \n- Secant Method\n\n - When $w(-1)=0 \\Rightarrow$ one root exists at $x=-1$,\n - Similar to that of Newtons method. It will converge when initial guesses are towards the right or left of $x=-1. \n \n - When $w(-1) \\lt 0 \\Rightarrow$ two roots exist on either side of $x = -1$.\n - It will converge to the root depending on whether the initial guesses are both either $\\lt -1$ or $\\gt -1$ to the root that is $\\lt -1$ or $\\gt -1$ respectively. 
When initial guesses are on either side of $x=-1$ we cannot say to which root it would converge as it depends on the function values aling the iterations.\n \n - When $w(-1) \\gt 0 \\Rightarrow$ no root exist.\n - Would not converge.\n", "meta": {"hexsha": "9fdccdf33768b4b5132a9dbf8a590d44e338872b", "size": 99519, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "me16b077_6.ipynb", "max_stars_repo_name": "ENaveen98/Numerical-methods-and-Scientific-computing", "max_stars_repo_head_hexsha": "5b931621e307386c8c20430db9cb8dae243d38ba", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-01-05T12:31:51.000Z", "max_stars_repo_stars_event_max_datetime": "2021-01-05T12:31:51.000Z", "max_issues_repo_path": "me16b077_6.ipynb", "max_issues_repo_name": "ENaveen98/Numerical-methods-and-Scientific-computing", "max_issues_repo_head_hexsha": "5b931621e307386c8c20430db9cb8dae243d38ba", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "me16b077_6.ipynb", "max_forks_repo_name": "ENaveen98/Numerical-methods-and-Scientific-computing", "max_forks_repo_head_hexsha": "5b931621e307386c8c20430db9cb8dae243d38ba", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 148.3144560358, "max_line_length": 36932, "alphanum_fraction": 0.8333685025, "converted": true, "num_tokens": 7323, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9511422227627598, "lm_q2_score": 0.880797068590724, "lm_q1q2_score": 0.8377632816223043}} {"text": "\n\n\n```python\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline\n```\n\n### Goals of this Lesson\n \n- Present the fundamentals of Linear Regression for Prediction\n - Notation and Framework\n - Gradient Descent for Linear Regression\n - Advantages and Issues\n - Closed form Matrix Solutions for Linear Regression\n - Advantages and Issues\n- Demonstrate Python \n - Exploratory Plotting\n - Simple plotting with `pyplot` from `matplotlib`\n - Code Gradient Descent\n - Code Closed Form Matrix Solution\n - Perform Linear Regression in scikit-learn\n\n\n### References for Linear Regression\n\n\n- Elements of Statistical Learning by Hastie, Tibshriani, Friedman - Chapter 3 \n- Alex Ihler's Course Notes on Linear Models for Regression - http://sli.ics.uci.edu/Classes/2015W-273a\n- scikit-learn Documentation - http://scikit-learn.org/stable/modules/linear_model.html#ordinary-least-squares\n- Linear Regression Analysis By Seber and Lee - http://www.wiley.com/WileyCDA/WileyTitle/productCd-0471415405,subjectCd-ST24.html\n- Applied Linear Regression by Weisberg - http://onlinelibrary.wiley.com/book/10.1002/0471704091\n- Wikipedia - http://en.wikipedia.org/wiki/Linear_regression\n\n### Linear Regression Notation and Framework\n\nLinear Regression is a supervised learning technique that is interested in predicting a response or target $\\mathbf{y}$, based on a linear combination of a set $D$ predictors or features, $\\mathbf{x}= (1, x_1,\\dots, x_D)$ such that,\n\n\\begin{equation*}\ny = \\beta_0 + \\beta_1 x_1 + \\dots + \\beta_D x_D = \\mathbf{x_i}^T\\mathbf{\\beta}\n\\end{equation*}\n\n_**Data We Observe**_\n\n\\begin{eqnarray*}\ny &:& \\mbox{response or target variable} \\\\\n\\mathbf{x} &:& \\mbox{set 
of $D$ predictor or explanatory variables } \\mathbf{x}^T = (1, x_1, \\dots, x_D) \n\\end{eqnarray*}\n\n_** What We Are Trying to Learn**_\n\n\\begin{eqnarray*}\n\\beta^T = (\\beta_0, \\beta_1, \\dots, \\beta_D) : \\mbox{Parameter values for a \"best\" prediction of } y \\rightarrow \\hat y\n\\end{eqnarray*}\n\n_**Outcomes We are Trying to Predict**_\n\n\\begin{eqnarray*}\n\\hat y : \\mbox{Prediction for the data that we observe}\n\\end{eqnarray*}\n\n_**Matrix Notation**_\n\n\\begin{equation*}\n\\mathbf{Y} = \\left( \\begin{array}{ccc}\ny_1 \\\\\ny_2 \\\\\n\\vdots \\\\\ny_i \\\\\n\\vdots \\\\\ny_N\n\\end{array} \\right)\n\\qquad\n\\mathbf{X} = \\left( \\begin{array}{ccc}\n1 & x_{1,1} & x_{1,2} & \\dots & x_{1,D} \\\\\n1 & x_{2,1} & x_{2,2} & \\dots & x_{2,D} \\\\\n\\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\\\n1 & x_{i,1} & x_{i,2} & \\dots & x_{i,D} \\\\\n\\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\\\n1 & x_{N,1} & x_{N,2} & \\dots & x_{N,D} \\\\\n\\end{array} \\right)\n\\qquad\n\\beta = \\left( \\begin{array}{ccc}\n\\beta_0 \\\\\n\\beta_1 \\\\\n\\vdots \\\\\n\\beta_j \\\\\n\\vdots \\\\\n\\beta_D\n\\end{array} \\right)\n\\end{equation*}\n\n\n_Why is it called Linear Regression?_\n\nIt is often asked, why is it called linear regression if we can use polynomial terms and other transformations as the predictors. That is \n\n\\begin{equation*}\n y = \\beta_0 + \\beta_1 x_1 + \\beta_2 x_1^2 + \\beta_3 x_1^3 + \\beta_4 \\sin(x_1)\n\\end{equation*}\n\nis still a linear regression, though it contains polynomial and trigonometric transformations of $x_1$. This is due to the fact that the term _linear_ applies to the learned coefficients $\\beta$ and not the input features $\\mathbf{x}$. \n\n\n_** How can we Learn $\\beta$? **_\n\nLinear Regression can be thought of as an optimization problem where we want to minimize some loss function of the error between the prediction $\\hat y$ and the observed data $y$. \n\n\\begin{eqnarray*}\n error_i &=& y_i - \\hat y_i \\\\\n &=& y_i - \\mathbf{x_i^T}\\beta\n\\end{eqnarray*}\n\n_Let's see what these errors look like..._\n\nBelow we show a simulation where the observed $y$ was generated such that $y= 1 + 0.5 x + \\epsilon$ and $\\epsilon \\sim N(0,1)$. If we assume that know the truth that $y=1 + 0.5 x$, the red lines demonstrate the error (or residuals) between the observed and the truth. 
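\n\nAs a quick numerical aside before the plotted demonstration below (a minimal sketch with made-up numbers, not the lesson's data), the residual vector for a candidate line can be computed directly:\n\n\n```python\nimport numpy as np\n\n# Hypothetical observations and a candidate line (illustrative values only)\nx_obs = np.array([0.0, 1.0, 2.0, 3.0])\ny_obs = np.array([1.2, 1.4, 2.3, 2.2])\nbeta0, beta1 = 1.0, 0.5                      # candidate intercept and slope\nresiduals = y_obs - (beta0 + beta1 * x_obs)  # error_i = y_i - x_i^T beta\nprint(residuals)\n```\n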
\n\n\n```python\n#############################################################\n# Demonstration - What do Residuals Look Like\n#############################################################\n\nnp.random.seed(33) # Setting a seed allows reproducability of experiments\n\nbeta0 = 1 # Creating an intercept\nbeta1 = 0.5 # Creating a slope\n\n# Randomly sampling data points\nx_example = np.random.uniform(0,5,10)\ny_example = beta0 + beta1 * x_example + np.random.normal(0,1,10)\nline1 = beta0 + beta1 * np.arange(-1, 6)\n\nf = plt.figure()\nplt.scatter(x_example,y_example) # Plotting observed data\nplt.plot(np.arange(-1,6), line1) # Plotting the true line\nfor i, xi in enumerate(x_example):\n plt.vlines(xi, beta0 + beta1 * xi, y_example[i], colors='red') # Plotting Residual Lines\nplt.annotate('Error or \"residual\"', xy = (x_example[5], 2), xytext = (-1.5,2.1),\n arrowprops=dict(width=1,headwidth=7,facecolor='black', shrink=0.01))\nf.set_size_inches(10,5)\nplt.title('Errors in Linear Regression')\nplt.show()\n```\n\n_Choosing a Loss Function to Optimize_\n\nHistorically Linear Regression has been solved using the method of Least Squares where we are interested in minimizing the mean squared error loss function of the form:\n\n\\begin{eqnarray*}\n Loss(\\beta) = MSE &=& \\frac{1}{N} \\sum_{i=1}^{N} (y_i - \\hat y_i)^2 \\\\\n &=& \\frac{1}{N} \\sum_{i=1}^{N} (y_i - \\mathbf{x_i^T}\\beta)^2 \\\\\n\\end{eqnarray*}\n\nWhere $N$ is the total number of observations. Other loss functions can be used, but using mean squared error (also referred to sum of the squared residuals in other text) has very nice properities for closed form solutions. We will use this loss function for both gradient descent and to create a closed form matrix solution.\n\n### Before We Present Solutions for Linear Regression: Introducing a Baseball Dataset\n\nWe'll use this dataset to investigate Linear Regression. The dataset consists of 337 observations and 18 variables from the set of Major League Baseball players who played at least one game in both the 1991 and 1992\nseasons, excluding pitchers. The dataset contains the 1992 salaries for that population, along with performance measures for each player. Four categorical variables indicate how free each player was to move to other teams.\n\n** Reference **\n\n- Pay for Play: Are Baseball Salaries Based on Performance?\n - http://www.amstat.org/publications/jse/v6n2/datasets.watnik.html\n\n**Filename**\n\n- 'baseball.dat.txt'.\n\n**Variables**\n\n- _Salary_: Thousands of dollars\n- _AVG_: Batting average\n- _OBP_: On-base percentage\n- _Runs_: Number of runs\n- _Hits_: Number of hits\n- _Doubles_: Number of doubles\n- _Triples_: Number of triples\n- _HR_: Number of home runs\n- _RBI_: Number of runs batted in\n- _Walks_: Number of walks\n- _SO_: Number of strike-outs\n- _SB_: Number of stolen bases\n- _Errs_: Number of errors\n- _free agency eligibility_: Indicator of \"free agency eligibility\"\n- _free agent in 1991/2_: Indicator of \"free agent in 1991/2\"\n- _arbitration eligibility_: Indicator of \"arbitration eligibility\"\n- _arbitration in 1991/2_: Indicator of \"arbitration in 1991/2\"\n- _Name_: Player's name (in quotation marks)\n\n** What we will try to predict **\n\nWe will attempt to predict the players salary based upon some predictor variables such as Hits, OBP, Walks, RBIs, etc. \n\n\n#### Load The Data\n\nLoading data in python from csv files in python can be done by a few different ways. 
The numpy package has a function called 'genfromtxt' that can read csv files, while the pandas library has the 'read_csv' function. Remember that we have imported numpy and pandas as `np` and `pd` respectively at the top of this notebook. An example using pandas is as follows:\n\n pd.read_csv(filename, **args)\n\nhttp://pandas.pydata.org/pandas-docs/dev/generated/pandas.io.parsers.read_csv.html\n\n\n###STUDENT ACTIVITY (2 MINS) \n_**Student Action - Load the 'baseball.dat.txt' file into a variable called 'baseball'. Then use baseball.head() to view the first few entries**_\n\n\n```python\n#######################################################################\n# Student Action - Load the file 'baseball.dat.txt' using pd.read_csv()\n#######################################################################\nbaseball = pd.read_csv('data/baseball.dat.txt')\n```\n\n_**Crash Course: Plotting with Matplotlib**_\n\nAt the top of this notebook we have imported the the package `pyplot as plt` from the `matplotlib` library. `matplotlib` is a great package for creating simple plots in Python. Below is a link to their tutorial for basic plotting.\n\n_Tutorials_\n\n- http://matplotlib.org/users/pyplot_tutorial.html\n- https://scipy-lectures.github.io/intro/matplotlib/matplotlib.html\n\n_Simple Plotting_\n\n- Step 0: Import the packge pyplot from matplotlib for plotting \n - `import matplotlib.pyplot as plt`\n- Step 1: Create a variable to store a new figure object\n - `fig = plt.figure()`\n- Step 2: Create the plot of your choice\n - Common Plots\n - `plt.plot(x,y)` - A line plot\n - `plt.scatter(x,y)` - Scatter Plots\n - `plt.hist(x)` - Histogram of a variable\n - Example Plots: http://matplotlib.org/gallery.html\n- Step 3: Create labels for your plot for better interpretability\n - X Label\n - `plt.xlabel('String')`\n - Y Label\n - `plt.ylabel('String')`\n - Title\n - `plt.title('String')`\n- Step 4: Change the figure size for better viewing within the iPython Notebook\n - `fig.set_size_inches(width, height)`\n- Step 5: Show the plot\n - `plt.show()`\n - The above command allows the plot to be shown below the cell that you're currently in. This is made possible by the `magic` command `%matplotlib inline`. \n- _NOTE: This may not always be the best way to create plots, but it is a quick template to get you started._\n \n_Transforming Variables_\n\nWe'll talk more about numpy later, but to perform the logarithmic transformation use the command\n\n- `np.log(`$array$`)`\n\n\n```python\n#############################################################\n# Demonstration - Plot a Histogram of Hits \n#############################################################\nf = plt.figure()\nplt.hist(baseball['Hits'], bins=15)\nplt.xlabel('Number of Hits')\nplt.ylabel('Frequency')\nplt.title('Histogram of Number of Hits')\nf.set_size_inches(10, 5)\nplt.show()\n```\n\n##STUDENT ACTIVITY (7 MINS) \n\n### Data Exploration - Investigating Variables\n\nWork in pairs to import the package `matplotlib.pyplot`, create the following two plots. 
\n\n- A histogram of the $log(Salary)$\n - hint: `np.log()`\n- a scatterplot of $log(Salary)$ vs $Hits$.\n\n\n```python\n#############################################################\n# Student Action - import matplotlib.pylot \n# - Plot a Histogram of log(Salaries)\n#############################################################\n\nf = plt.figure()\nplt.hist(np.log(baseball['Salary']), bins = 15)\nplt.xlabel('log(Salaries)')\nplt.ylabel('Frequency')\nplt.title('Histogram of log Salaries')\nf.set_size_inches(10, 5)\nplt.show()\n```\n\n\n```python\n#############################################################\n# Studdent Action - Plot a Scatter Plot of Salarie vs. Hitting\n#############################################################\n\nf = plt.figure()\nplt.scatter(baseball['Hits'], np.log(baseball['Salary']))\nplt.xlabel('Hits')\nplt.ylabel('log(Salaries)')\nplt.title('Scatter Plot of Salarie vs. Hitting')\nf.set_size_inches(10, 5)\nplt.show()\n```\n\n## Gradient Descent for Linear Regression\n\nIn Linear Regression we are interested in optimizing our loss function $Loss(\\beta)$ to find the optimatal $\\beta$ such that \n\n\\begin{eqnarray*}\n\\hat \\beta &=& \\arg \\min_{\\beta} \\frac{1}{N} \\sum_{i=1}^{N} (y_i - \\mathbf{x_i^T}\\beta)^2 \\\\\n&=& \\arg \\min_{\\beta} \\frac{1}{N} \\mathbf{(Y - X\\beta)^T (Y - X\\beta)} \\\\\n\\end{eqnarray*}\n\nOne optimization technique called 'Gradient Descent' is useful for finding an optimal solution to this problem. Gradient descent is a first order optimization technique that attempts to find a local minimum of a function by updating its position by taking steps proportional to the negative gradient of the function at its current point. The gradient at the point indicates the direction of steepest ascent and is the best guess for which direction the algorithm should go. \n\nIf we consider $\\theta$ to be some parameters we are interested in optimizing, $L(\\theta)$ to be our loss function, and $\\alpha$ to be our step size proportionality, then we have the following algorithm:\n\n_________\n\n_**Algorithm - Gradient Descent**_\n\n- Initialize $\\theta$\n- Until $\\alpha || \\nabla L(\\theta) || < tol $:\n - $\\theta^{(t+1)} = \\theta^{(t)} - \\alpha \\nabla_{\\theta} L(\\theta^{(t)})$\n__________\n\nFor our problem at hand, we therefore need to find $\\nabla L(\\beta)$. 
The deriviative of $L(\\beta)$ due to the $j^{th}$ feature is:\n\n\\begin{eqnarray*}\n \\frac{\\partial L(\\beta)}{\\partial \\beta_j} = -\\frac{2}{N}\\sum_{i=1}^{N} (y_i - \\mathbf{x_i^T}\\beta)\\cdot{x_{i,j}}\n\\end{eqnarray*}\n\nIn matrix notation this can be written:\n\n\\begin{eqnarray*}\nLoss(\\beta) &=& \\frac{1}{N}\\mathbf{(Y - X\\beta)^T (Y - X\\beta)} \\\\\n&=& \\frac{1}{N}\\mathbf{(Y^TY} - 2 \\mathbf{\\beta^T X^T Y + \\beta^T X^T X\\beta)} \\\\\n\\nabla_{\\beta} L(\\beta) &=& \\frac{1}{N} (-2 \\mathbf{X^T Y} + 2 \\mathbf{X^T X \\beta)} \\\\\n&=& -\\frac{2}{N} \\mathbf{X^T (Y - X \\beta)} \\\\\n\\end{eqnarray*}\n\n###STUDENT ACTIVITY (7 MINS) \n### Create a function that returns the gradient of $L(\\beta)$\n\n\n```python\n###################################################################\n# Student Action - Programming the Gradient\n###################################################################\n\ndef gradient(X, y, betas):\n #****************************\n # Your code here!\n return -2.0/len(X)*np.dot(X.T, y - np.dot(X, betas))\n #****************************\n \n\n#########################################################\n# Testing your gradient function\n#########################################################\nnp.random.seed(33)\nX = pd.DataFrame({'ones':1, \n 'X1':np.random.uniform(0,1,50)})\ny = np.random.normal(0,1,50)\nbetas = np.array([-1,4])\ngrad_expected = np.array([ 2.98018138, 7.09758971])\ngrad = gradient(X,y,betas)\ntry:\n np.testing.assert_almost_equal(grad, grad_expected)\n print \"Test Passed!\"\nexcept AssertionError:\n print \"*******************************************\"\n print \"ERROR: Something isn't right... Try Again!\"\n print \"*******************************************\"\n\n```\n\n Test Passed!\n\n\n###STUDENT ACTIVITY (15 MINS) \n\n_** Student Action - Use your Gradient Function to complete the Gradient Descent for the Baseball Dataset**_\n\n#### Code Gradient Descent Here\n\nWe have set-up the all necessary matrices and starting values. In the designated section below code the algorithm from the previous section above. \n\n\n```python\n# Setting up our matrices \nY = np.log(baseball['Salary'])\nN = len(Y)\nX = pd.DataFrame({'ones' : np.ones(N), \n 'Hits' : baseball['Hits']})\np = len(X.columns)\n\n# Initializing the beta vector \nbetas = np.array([0.015,5.13])\n\n# Initializing Alpha\nalph = 0.00001\n\n# Setting a tolerance \ntol = 1e-8\n\n###################################################################\n# Student Action - Programming the Gradient Descent Algorithm Below\n###################################################################\n\nniter = 1.\nwhile (alph*np.linalg.norm(gradient(X,Y,betas)) > tol) and (niter < 20000):\n #****************************\n # Your code here!\n betas -= alph*gradient(X, Y, betas)\n niter += 1\n \n #****************************\n\nprint niter, betas\n\ntry:\n beta_expected = np.array([ 0.01513772, 5.13000121])\n np.testing.assert_almost_equal(betas, beta_expected)\n print \"Test Passed!\"\nexcept AssertionError:\n print \"*******************************************\"\n print \"ERROR: Something isn't right... 
Try Again!\"\n print \"*******************************************\"\n\n```\n\n 33.0 [ 0.01513772 5.13000121]\n Test Passed!\n\n\n** Comments on Gradient Descent**\n\n- Advantage: Very General Algorithm $\\rightarrow$ Gradient Descent and its variants are used throughout Machine Learning and Statistics\n- Disadvantage: Highly Sensitive to Initial Starting Conditions\n - Not gauranteed to find the global optima\n- Disadvantage: How do you choose step size $\\alpha$?\n - Too small $\\rightarrow$ May never find the minima\n - Too large $\\rightarrow$ May step past the minima\n - Can we fix it?\n - Adaptive step sizes\n - Newton's Method for Optimization\n - http://en.wikipedia.org/wiki/Newton%27s_method_in_optimization\n - Each correction obviously comes with it's own computational considerations.\n\nSee the Supplementary Material for any help necessary with scripting this in Python.\n\n### Visualizing Gradient Descent to Understand its Limitations \n\nLet's try to find the value of $X$ that maximizes the following function:\n\n\\begin{equation}\n f(x) = w \\times \\frac{1}{\\sqrt{2\\pi \\sigma_1^2}} \\exp \\left( - \\frac{(x-\\mu_1)^2}{2\\sigma_1^2}\\right) + (1-w) \\times \\frac{1}{\\sqrt{2\\pi \\sigma_2^2}} \\exp \\left( - \\frac{(x-\\mu_2)^2}{2\\sigma_2^2}\\right)\n\\end{equation}\n\nwhere $w=0.3$, $\\mu_1 = 3, \\sigma_1^2=1$ and $\\mu_2 = -1, \\sigma_2^2=0.5$\n\nLet's visualize this function\n\n\n```python\nx1 = np.arange(-10, 15, 0.05)\nmu1 = 6.5 \nvar1 = 3\nmu2 = -1\nvar2 = 10\nweight = 0.3\ndef mixed_normal_distribution(x, mu1, var1, mu2, var2):\n pdf1 = np.exp( - (x - mu1)**2 / (2*var1) ) / np.sqrt(2 * np.pi * var1)\n pdf2 = np.exp( - (x - mu2)**2 / (2*var2) ) / np.sqrt(2 * np.pi * var2)\n return weight * pdf1 + (1-weight )*pdf2\n\npdf = mixed_normal_distribution(x1, mu1, var1, mu2, var2)\nfig = plt.figure()\nplt.plot(x1, pdf)\nfig.set_size_inches([10,5])\nplt.show()\n```\n\n### Now let's show visualize happens for different starting conditions and different step sizes\n\n\n\n```python\ndef mixed_gradient(x, mu1, var1, mu2, var2):\n grad_pdf1 = np.exp( - (x - mu1)**2 / (2*var1) ) * ((x-mu1)/var1) / np.sqrt(2 * np.pi * var1)\n grad_pdf2 = np.exp( - (x - mu2)**2 / (2*var2) ) * ((x-mu2)/var2) / np.sqrt(2 * np.pi * var2)\n return weight * grad_pdf1 + (1-weight)*grad_pdf2\n\n# Initialize X\nx = 3.25\n# Initializing Alpha\nalph = 5\n# Setting a tolerance \ntol = 1e-8\nniter = 1.\nresults = []\nwhile (alph*np.linalg.norm(mixed_gradient(x, mu1, var1, mu2, var2)) > tol) and (niter < 500000):\n #****************************\n results.append(x)\n x = x - alph * mixed_gradient(x, mu1, var1, mu2, var2)\n niter += 1\n \n #****************************\nprint x, niter\n\nif niter < 500000:\n exes = mixed_normal_distribution(np.array(results), mu1, var1, mu2, var2)\n fig = plt.figure()\n plt.plot(x1, pdf)\n plt.plot(results, exes, color='red', marker='x')\n plt.ylim([0,0.1])\n fig.set_size_inches([20,10])\n plt.show()\n```\n\n## Linear Regression Matrix Solution\n\nFrom the last section, you may have recognized that we could actually solve for $\\beta$ directly. 
\n\n\\begin{eqnarray*}\nLoss(\\beta) &=& \\frac{1}{N}\\mathbf{(Y - X\\beta)^T (Y - X\\beta)} \\\\\n\\nabla_{\\beta} L(\\beta) &=& \\frac{1}{N} (-2 \\mathbf{X^T Y} + 2 \\mathbf{X^T X \\beta}) \\\\\n\\end{eqnarray*}\n\nSetting to zero\n\n\\begin{eqnarray*}\n-2 \\mathbf{X^T Y} + 2 \\mathbf{X^T X} \\beta &=& 0 \\\\\n\\mathbf{X^T X \\beta} &=& \\mathbf{X^T Y} \\\\\n\\end{eqnarray*}\n\nIf we assume that the columns $X$ are linearly independent then\n\n\\begin{eqnarray*}\n \\hat \\beta &=& \\mathbf{(X^T X)^{-1}X^T Y} \\\\\n\\end{eqnarray*}\n\nThis is called the _Ordinary Least Squares_ (OLS) Estimator \n\n###STUDENT ACTIVITY (10 MINS) \n\n_** Student Action - Solve for $\\hat \\beta$ directly using OLS on the Baseball Dataset - 10 mins** _\n \n- Review the Supplementary Materials for help with Linear Algebra\n\n\n```python\n# Setting up our matrices \ny = np.log(baseball['Salary'])\nN = len(Y)\nX = pd.DataFrame({'ones' : np.ones(N), \n 'Hits' : baseball['Hits']})\n\n#############################################################\n# Student Action - Program a closed form solution for \n# Linear Regression. Compare with Gradient\n# Descent.\n#############################################################\n\ndef solve_linear_regression(X, y):\n #****************************\n return np.dot(np.linalg.inv(np.dot(X.T, X)), np.dot(X.T, y))\n \n #****************************\n\nbetas = solve_linear_regression(X,y)\n\ntry:\n beta_expected = np.array([ 0.01513353, 5.13051682])\n np.testing.assert_almost_equal(betas, beta_expected)\n print \"Betas: \", betas\n print \"Test Passed!\"\nexcept AssertionError:\n print \"*******************************************\"\n print \"ERROR: Something isn't right... Try Again!\"\n print \"*******************************************\"\n```\n\n Betas: [ 0.01513353 5.13051682]\n Test Passed!\n\n\n\n** Comments on solving the loss function directly **\n\n- Advantage: Simple solution to code \n- Disadvantage: The Design Matrix must be Full Rank to invert\n - Can be corrected with a Generalized Inverse Solution\n- Disadvantage: Inverting a Matrix can be a computational expensive operation\n - If we have a design matrix that has $N$ observations and $D$ predictors, then X is $(N\\times D)$ it follows then that\n \n \\begin{eqnarray*}\n \\mathbf{X^TX} \\mbox{ is of size } (D \\times N) \\times (N \\times D) = (D \\times D) \\\\\n \\end{eqnarray*}\n \n - If a matrix is of size $(D\\times D)$, the computational cost of inverting it is $O(D^3)$. \n - Thus inverting a matrix is directly related to the number of predictors that are included in the analysis. \n\n\n## Sci-Kit Learn Linear Regression\n\nAs we've shown in the previous two exercises, when coding these algorithms ourselves, we must consider many things such as selecting step sizes, considering the computational cost of inverting matrices. For many applications though, packages have been created that have taken into consideration many of these parameter selections. We now turn our attention to the Python package for Machine Learning called 'scikit-learn'. 
\n\n- http://scikit-learn.org/stable/\n\nIncluded is the documentation for the scikit-learn implementation of Ordinary Least Squares from their linear models package\n\n- _Generalized Linear Models Documentation:_ \n - http://scikit-learn.org/stable/modules/linear_model.html#ordinary-least-squares\n\n- _LinearRegression Class Documentation:_ \n - http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html#sklearn.linear_model.LinearRegression\n\nFrom this we that we'll need to import the module `linear_model` using the following:\n\n from sklearn import linear_model\n \nLet's examine an example using the `LinearRegression` class from scikit-learn. We'll continue with the simulated data from the beginning of the exercise.\n\n### _Example using the variables from the Residual Example_\n\n** Notes ** \n\n- Calling `linear_model.LinearRegression()` creates an object of class `sklearn.linear_model.base.LinearRegression`\n - Defaults \n - `fit_intercept = True`: automatically adds a column vector of ones for an intercept\n - `normalize = False`: defaults to not normalizing the input predictors\n - `copy_X = False`: defaults to not copying X\n - `n_jobs = 1`: The number of jobs to use for the computation. If -1 all CPUs are used. This will only provide speedup for n_targets > 1 and sufficient large problems.\n - Example\n - `lmr = linear_model.LinearRegression()\n- To fit a model, the method `.fit(X,y)` can be used\n - X must be a column vector for scikit-learn\n - This can be accomplished by creating a DataFrame using `pd.DataFrame()`\n - Example\n - lmr.fit(X,y)\n- To predict out of sample values, the method `.predict(X)` can be used\n- To see the $\\beta$ estimates use `.coef_` for the coefficients for the predictors and `.intercept` for $\\beta_0$\n\n\n```python\n#############################################################\n# Demonstration - scikit-learn with Regression Example\n#############################################################\n\nfrom sklearn import linear_model\n\nlmr = linear_model.LinearRegression()\nlmr.fit(pd.DataFrame(x_example), pd.DataFrame(y_example))\n\nxTest = pd.DataFrame(np.arange(-1,6))\nyHat = lmr.predict(xTest)\n\nf = plt.figure()\nplt.scatter(x_example, y_example)\np1, = plt.plot(np.arange(-1,6), line1)\np2, = plt.plot(xTest, yHat)\nplt.legend([p1, p2], ['y = 1 + 0.5x', 'OLS Estimate'], loc=2)\nf.set_size_inches(10,5)\nplt.show()\n\nprint lmr.coef_, lmr.intercept_\n```\n\n###STUDENT ACTIVITY (15 MINS) \n\n### _**Final Student Task**_\n\nProgramming Linear Regression using the scikit-learn method. For the ambitious students, plot all results on one plot. \n\n\n```python\n#######################################################################\n# Student Action - Use scikit-learn to calculate the beta coefficients\n#\n# Note: You no longer need the intercept column in your X matrix for \n# sci-kit Learn. 
It will add that column automatically.\n#######################################################################\n\nlmr2 = linear_model.LinearRegression(fit_intercept=True)\nlmr2.fit(pd.DataFrame(baseball['Hits']), np.log(baseball['Salary']))\n\nxtest = np.arange(0,200)\nytest = lmr2.intercept_ + lmr2.coef_*xtest\n\nf = plt.figure()\nplt.scatter(baseball['Hits'], np.log(baseball['Salary']))\nplt.plot(xtest, ytest, color='r', linewidth=3)\nf.set_size_inches(10,5)\nplt.show()\nprint lmr2.coef_, lmr2.intercept_\n```\n\n## Linear Regression in the Real World\n\nIn the real world, Linear Regression for predictive modeling doesn't end once you've fit the model. Models are often fit and used to predict user behavior, used to quantify business metrics, or sometimes used to identify cats faces for internet points. In that pursuit, it isn't really interesting to fit a model and assess its performance on data that has already been observed. The real interest lies in _**how it predicts future observations!**_\n\nOften times then, we may be susceptible to creating a model that is perfected for our observed data, but that does not generalize well to new data. In order to assess how we perform to new data, we can _score_ the model on both the old and new data, and compare the models performance with the hope that the it generalizes well to the new data. After lunch we'll introduce some techniques and other methods to better our chances of performing well on new data. \n\nBefore we break for lunch though, let's take a look at a simulated dataset to see what we mean...\n\n_Situation_\n\nImagine that last year a talent management company managed 400 celebrities and tracked how popular they were within the public eye, as well various predictors for that metric. The company is now interested in managing a few new celebrities, but wants to sign those stars that are above a certain 'popularity' threshold to maintain their image.\n\nOur job is to predict how popular each new celebrity will be over the course of the coming year so that we make that best decision about who to manage. For this analysis we'll use a function `l2_error` to compare our errors on a training set, and on a test set of celebrity data.\n\nThe variable `celeb_data_old` represents things we know about the previous batch of celebrities. Each row represents one celeb. Each column represents some tangible measure about them -- their age at the time, number of Twitter followers, voice squeakiness, etc. The specifics of what each column represents aren't important.\n\nSimilarly, `popularity_score_old` is a previous measure of the celebrities popularity.\n\nFinally, `celeb_data_new` represents the same information that we had from `celeb_data_old` but for the new batch of internet wonders that we're considering.\n\nHow can we predict how popular the NEW batch of celebrities will be ahead of time so that we can decide who to sign? And are these estimates stable from year to year?\n\n\n```python\nwith np.load('data/mystery_data_old.npz') as data:\n celeb_data_old = data['celeb_data_old']\n popularity_old = data['popularity_old']\n celeb_data_new = data['celeb_data_new']\n\nlmr3 = linear_model.LinearRegression()\nlmr3.fit(celeb_data_old, popularity_old)\npredicted_popularity_old = lmr3.predict(celeb_data_old)\npredicted_popularity_new = lmr3.predict(celeb_data_new)\n\ndef l2_error(y_true, y_pred):\n \"\"\"\n calculate the sum of squared errors (i.e. 
\"L2 error\") \n given a vector of true ys and a vector of predicted ys\n \"\"\"\n diff = (y_true-y_pred)\n return np.sqrt(np.dot(diff, diff))\n\nprint \"Predicted L2 Error:\", l2_error(popularity_old, predicted_popularity_old)\n```\n\n Predicted L2 Error: 18.1262825607\n\n\n### Checking How We Did\nAt the end of the year, we tally up the popularity numbers for each celeb and check how well we did on our predictions.\n\n\n```python\nwith np.load('data/mystery_data_new.npz') as data:\n popularity_new = data['popularity_new']\n\nprint \"Predicted L2 Error:\", l2_error(popularity_new, predicted_popularity_new)\n```\n\n Predicted L2 Error: 24.173135433\n\n\nSomething's not right... our model seems to be performing worse on this data! Our model performed so well on last year's data, why didn't it work on the data from this year?\n", "meta": {"hexsha": "20dd458bd91e953aecfac2ce86bbfb6d10943437", "size": 204715, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Session 1 - Linear_Regression.ipynb", "max_stars_repo_name": "dinrker/PredictiveModeling", "max_stars_repo_head_hexsha": "af69864fbe506d095d15049ee2fea6ecd770af36", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Session 1 - Linear_Regression.ipynb", "max_issues_repo_name": "dinrker/PredictiveModeling", "max_issues_repo_head_hexsha": "af69864fbe506d095d15049ee2fea6ecd770af36", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Session 1 - Linear_Regression.ipynb", "max_forks_repo_name": "dinrker/PredictiveModeling", "max_forks_repo_head_hexsha": "af69864fbe506d095d15049ee2fea6ecd770af36", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 180.2068661972, "max_line_length": 38724, "alphanum_fraction": 0.8701853797, "converted": true, "num_tokens": 7713, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.927363293639213, "lm_q2_score": 0.903294196941332, "lm_q1q2_score": 0.8376818816007016}} {"text": "# Symbolic mathematics with Sympy\n\n[Sympy](http://www.sympy.org/en/index.html) is described as a:\n\n> \"... Python library for symbolic mathematics.\"\n\nThis means it can be used to:\n\n- Manipulate symbolic expressions;\n- Solve symbolic equations;\n- Carry out symbolic Calculus;\n- Plot symbolic function.\n\nIt has other capabilities that we will not go in to in this handbook. But you can read more about it here: http://www.sympy.org/en/index.html\n\n## Manipulating symbolic expressions\n\nBefore we can start using the library to manipulate expressions, we need to import it.\n\n\n```python\nimport sympy as sym\n```\n\nThe above imports the library and gives us access to it's commands using the shortand `sym` which is conventially used.\n\nIf we wanted to get Python to check that $x - x = 0$ we would get an error if we did not tell Python what $x$ was.\n\nThis is where Sympy comes in, we can tell Python to create $x$ as a symbolic variable:\n\n\n```python\nx = sym.symbols('x')\n```\n\nNow we can calculate $x - x$:\n\n\n```python\nx - x\n```\n\n\n\n\n 0\n\n\n\nWe can create and manipulate expressions in Sympy. 
Let us for example verify:\n\n$$(a + b) ^ 2 = a ^ 2 + 2ab + b ^2$$\n\nFirst, we create the symbolic variables $a, b$:\n\n\n```python\na, b = sym.symbols('a, b')\n```\n\nNow let's create our expression:\n\n\n```python\nexpr = (a + b) ** 2 \nexpr\n```\n\n\n\n\n (a + b)**2\n\n\n\n**Note** we can get Sympy to use LaTeX so that the output looks nice in a notebook:\n\n\n```python\nsym.init_printing()\n```\n\n\n```python\nexpr\n```\n\nLet us expand our expression:\n\n\n```python\nexpr.expand()\n```\n\nNote that we can also get Sympy to produce the LaTeX code for future use:\n\n\n```python\nsym.latex(expr.expand())\n```\n\n\n\n\n 'a^{2} + 2 a b + b^{2}'\n\n\n\n---\n**EXERCISE** Use Sympy to verify the following expressions:\n\n- $(a - b) ^ 2 = a ^ 2 - 2 a b + b^2$\n- $a ^ 2 - b ^ 2 = (a - b) (a + b)$ (instead of using `expand`, try `factor`)\n\n## Solving symbolic equations\n\nWe can use Sympy to solve symbolic expression. For example let's find the solution in $x$ of the quadratic equation:\n\n$$a x ^ 2 + b x + c = 0$$\n\n\n```python\n# We only really need to define `c` but doing them all again.\na, b, c, x = sym.symbols('a, b, c, x') \n```\n\nThe Sympy command for solving equations is `solveset`. The first argument is an expression for which the root will be found. The second argument is the value that we are solving for.\n\n\n```python\nsym.solveset(a * x ** 2 + b * x + c, x)\n```\n\n---\n**EXERCISE** Use Sympy to find the solutions to the generic cubic equation:\n\n$$a x ^ 3 + b x ^ 2 + c x + d = 0$$\n\n---\n\nIt is possible to pass more arguments to `solveset` for example to constrain the solution space. Let us see what the solution of the following is in $\\mathbb{R}$:\n\n$$x^2=-1$$\n\n\n```python\nsym.solveset(x ** 2 + 1, x, domain=sym.S.Reals)\n```\n\n---\n**EXERCISE** Use Sympy to find the solutions to the following equations:\n\n- $x ^ 2 == 2$ in $\\mathbb{N}$;\n- $x ^ 3 + 2 x = 0$ in $\\mathbb{R}$.\n\n---\n\n## Symbolic calculus\n\nWe can use Sympy to compute limits. Let us calculate:\n\n$$\\lim_{x\\to 0^+}\\frac{1}{x}$$\n\n\n```python\nsym.limit(1/x, x, 0, dir=\"+\")\n```\n\n---\n**EXERCISE** Compute the following limits:\n\n1. $\\lim_{x\\to 0^-}\\frac{1}{x}$\n2. $\\lim_{x\\to 0}\\frac{1}{x^2}$\n\n---\n\nWe can use also Sympy to differentiate and integrate. Let us experiment with differentiating the following expression:\n\n$$x ^ 2 - \\cos(x)$$\n\n\n```python\nsym.diff(x ** 2 - sym.cos(x), x)\n```\n\nSimilarly we can integrate:\n\n\n```python\nsym.integrate(x ** 2 - sym.cos(x), x)\n```\n\nWe can also carry out definite integrals:\n\n\n```python\nsym.integrate(x ** 2 - sym.cos(x), (x, 0, 5))\n```\n\n---\n\n**EXERCISE** Use Sympy to calculate the following:\n\n1. $\\frac{d\\sin(x ^2)}{dx}$\n2. $\\frac{d(x ^2 + xy - \\ln(y))}{dy}$\n3. $\\int e^x \\cos(x)\\;dx$\n4. $\\int_0^5 e^{2x}\\;dx$\n\n## Plotting with Sympy\n\nFinally Sympy can be used to plot functions. Note that this makes use of another Python library called [matplotlib](http://matplotlib.org/). 
Whilst Sympy allows us to not directly need to make use of matplotlib it could be worth learning to use as it's a very powerful and versatile library.\n\nBefore plotting in Jupyter we need to run a command to tell it to display the plots directly in the notebook:\n\n\n```python\n%matplotlib inline\n```\n\nLet us plot $x^2$:\n\n\n```python\nexpr = x ** 2\np = sym.plot(expr);\n```\n\nWe can directly save that plot to a file if we wish to:\n\n\n```python\np.save(\"x_squared.pdf\");\n```\n\n---\n**EXERCISE** Plot the following functions:\n\n- $y=x + cos(x)$\n- $y=x ^ 2 - e^x$ (you might find `ylim` helpful as an argument)\n\nExperiment with saving your plots to a file.\n\n---\n\n## Summary\n\nThis section has discussed using Sympy to:\n\n- Manipulate symbolic expressions;\n- Calculate limits, derivates and integrals;\n- Plot a symbolic expression.\n \nThis just touches the surface of what Sympy can do.\n\nLet us move on to using [Numpy](02 - Linear algebra with Numpy.ipynb) to do Linear Algebra.\n", "meta": {"hexsha": "236091c13b28fd2b599095ab15dca8bc798693dd", "size": 55927, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "01-Symbolic-mathematics-with-Sympy.ipynb", "max_stars_repo_name": "sierxue/Python-Mathematics-Handbook", "max_stars_repo_head_hexsha": "447502fe05503ccc588c2b9c8abb3207668d6f07", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 147, "max_stars_repo_stars_event_min_datetime": "2017-02-14T20:07:00.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-01T10:41:28.000Z", "max_issues_repo_path": "01-Symbolic-mathematics-with-Sympy.ipynb", "max_issues_repo_name": "sierxue/Python-Mathematics-Handbook", "max_issues_repo_head_hexsha": "447502fe05503ccc588c2b9c8abb3207668d6f07", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 8, "max_issues_repo_issues_event_min_datetime": "2017-02-14T20:07:53.000Z", "max_issues_repo_issues_event_max_datetime": "2018-11-16T11:11:40.000Z", "max_forks_repo_path": "01-Symbolic-mathematics-with-Sympy.ipynb", "max_forks_repo_name": "sierxue/Python-Mathematics-Handbook", "max_forks_repo_head_hexsha": "447502fe05503ccc588c2b9c8abb3207668d6f07", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 87, "max_forks_repo_forks_event_min_datetime": "2017-02-15T02:16:16.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-01T10:41:21.000Z", "avg_line_length": 84.6096822995, "max_line_length": 17448, "alphanum_fraction": 0.8436890947, "converted": true, "num_tokens": 1452, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9032941988938414, "lm_q2_score": 0.9273632876167045, "lm_q1q2_score": 0.8376818779712901}} {"text": "---\nauthor: Nathan Carter (ncarter@bentley.edu)\n---\n\nThis answer assumes you have imported SymPy as follows.\n\n\n```python\nfrom sympy import * # load all math functions\ninit_printing( use_latex='mathjax' ) # use pretty math output\n```\n\nLet's create an example function to work with.\n\n\n```python\nvar( 'x' )\nformula = sqrt( x - 1 ) - x\nformula\n```\n\n\n\n\n$\\displaystyle - x + \\sqrt{x - 1}$\n\n\n\nCritical numbers come in two kinds. First, where is the derivative zero?\nSecond, where is the derivative undefined but the function is defined?\n\nLet's begin by finding where the derivative is zero. 
We'll use the same\ntechniques introduced in how to write symbolic equations and\nhow to solve symbolic equations.\n\n\n```python\nderivative = diff( formula )\nderivative\n```\n\n\n\n\n$\displaystyle -1 + \frac{1}{2 \sqrt{x - 1}}$\n\n\n\n\n```python\nsolve( Eq( derivative, 0 ) )\n```\n\n\n\n\n$\displaystyle \left[ \frac{5}{4}\right]$\n\n\n\nSo one critical number, where the derivative is zero, is $x=\frac54$.\n\nNow where is the function defined but the derivative undefined?\nWe compute the domain of both functions and subtract them, using the techniques\nfrom how to compute the domain of a function.\n\n\n```python\nfrom sympy.calculus.util import continuous_domain\nf_domain = continuous_domain( formula, x, S.Reals )\nderiv_domain = continuous_domain( derivative, x, S.Reals )\nComplement( f_domain, deriv_domain )\n```\n\n\n\n\n$\displaystyle \left\{1\right\}$\n\n\n\nSo another critical number, where the function is defined but the derivative is not,\nis $x=1$.\n\nThus the full set of critical numbers for this function is $\left\{1,\frac54\right\}$.\n", "meta": {"hexsha": "fca0ce303129986fbdee75b3665db1d3ef8e42fa", "size": 4194, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "database/tasks/How to find the critical numbers of a function/Python, using SymPy.ipynb", "max_stars_repo_name": "nathancarter/how2data", "max_stars_repo_head_hexsha": "7d4f2838661f7ce98deb1b8081470cec5671b03a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "database/tasks/How to find the critical numbers of a function/Python, using SymPy.ipynb", "max_issues_repo_name": "nathancarter/how2data", "max_issues_repo_head_hexsha": "7d4f2838661f7ce98deb1b8081470cec5671b03a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "database/tasks/How to find the critical numbers of a function/Python, using SymPy.ipynb", "max_forks_repo_name": "nathancarter/how2data", "max_forks_repo_head_hexsha": "7d4f2838661f7ce98deb1b8081470cec5671b03a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-07-18T19:01:29.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-29T06:47:11.000Z", "avg_line_length": 23.3, "max_line_length": 99, "alphanum_fraction": 0.5236051502, "converted": true, "num_tokens": 409, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9546474220263198, "lm_q2_score": 0.877476793890012, "lm_q1q2_score": 0.8376809591750203}} {"text": "We want to solve the ODE of the harmonic oscillator for a particle of mass $m$ in one dimension,\n\n$$\n\ddot{x}(t) = - \omega^2 x(t), \quad \omega = \sqrt{\frac{k}{m}}\n$$\n\nThis is a second-order ODE. We now need to rewrite it as a system of first-order ODEs. \nA first idea is to set $x_1 = x$ and $x_2 = \dot{x}$. \n\nAnother idea is to use the formalism of Hamiltonian mechanics. 
Indeed, thanks to the canonical equations, this formalism lets us go from $N$ second-order ODEs to $2N$ first-order ODEs.\n\nConsider the Hamiltonian of the harmonic oscillator,\n\n$$\n \mathcal{H}(q,p) = \frac{p^2}{2m} + \frac{1}{2}kq^2\n$$\n\nLet us write the canonical equations,\n\n$$\n\dot{q} = \frac{\partial \mathcal{H}}{\partial p} = \frac{p}{m}\n$$\n\n$$\n\quad \dot{p} = - \frac{\partial \mathcal{H}}{\partial q} = -kq\n$$\n\nWe can rewrite this in matrix form,\n$$\n\begin{pmatrix}\n \dot{q} \\ \dot{p}\n\end{pmatrix}\n=\n\underbrace{\n\begin{pmatrix}\n 0 & 1/m \\ -k & 0\n\end{pmatrix}\n}_{M}\n\begin{pmatrix}\n q \\ p\n\end{pmatrix}\n, \quad \vec{x} = (q, p)\n$$\n\nWe write the discretized system for the forward Euler scheme,\n$$\n\dot{x} \approx \frac{x_{i+1} - x_{i}}{\Delta t} \implies x_{i+1} = x_{i} + \Delta t \dot{x}\n$$\n\n$$\n\begin{align}\n x_{i+1} \n &= x_{i} + \Delta t \cdot Mx_{i} \\\n &= x_{i}(1 + \Delta t \cdot M)\n\end{align}\n$$\n\n\n```python\n#%matplotlib widget\nimport numpy as np\nimport matplotlib.pyplot as plt\n```\n\n\n```python\nk = 1. # spring constant\nm = 1. # mass \nw = np.sqrt(k/m) # angular frequency\n\n# matrix M for our ODE system: \dot{x} = Mx\nM = np.array([[0, 1/m], [-k, 0]])\n\n# initial conditions: x(0) = (q(0), p(0))\nq0 = 1.\np0 = 0.\n\n# time\nt0 = 0\ntf = 50\ndeltaT = 0.1\n\nnt = int((tf - t0) / deltaT)\n\n# time mesh\ntime_mesh = np.linspace(start=t0, stop=tf, num=nt)\n\n# set initial conditions\n# create a big matrix that will contain\n# our position and speed solutions\n# for each time step\n# [ \"t0, ti, tN\" \n# q: [q0, ..., qN]\n# p: [p0, ..., pN]\n# ]\n# row, column\nx = np.zeros((2, nt))\nx[:,0] = [q0, p0]\n\n# we can find it easily\n# by solving the canonical equations\n# and applying the initial conditions: q(t=0) = q0 ; p(t=0) = p0\ndef analytical_solution(x, q0, p0, t):\n    # q(t)\n    x[0,:] = q0 * np.cos(w * t) + (p0 / (m * w)) * np.sin(w * t)\n    # p(t)\n    x[1,:] = - m * w * q0 * np.sin(w * t) + m * w * (p0 / (m * w)) * np.cos(w * t)\n    return x\n\n# here, the Hamiltonian doesn't explicitly depend on time\n# and it coincides with the energy\ndef energy(q, p, k, m):\n    return (p ** 2) / (2 * m) + (1 / 2) * k * (q ** 2)\n\ndef forward_euler(x, nt, deltaT, M):\n    # set t+1 values in the loop => range from 0 to nt-1\n    for t in range(0, nt-1): \n        # x_{n+1} = x_n(1 + \Delta T M)\n        # x is a vector \in \R^2\n        # 1 is the identity matrix\n        x[:,t+1] = np.matmul(x[:,t], (np.eye(2) + deltaT * M))\n    return x.copy()\n\ndef backward_euler(x, nt, deltaT, M):\n    for t in range(0, nt-1):\n        invert_matrix = np.linalg.inv(np.eye(2) - deltaT * M)\n\n        x[:,t+1] = np.matmul(x[:,t], invert_matrix)\n    return x.copy()\n\ndef centered_euler(x, nt, deltaT, M):\n    for t in range(0, nt-1):\n        normal_matrix = np.eye(2) + (deltaT/2) * M \n        invert_matrix = np.linalg.inv(np.eye(2) - (deltaT/2) * M)\n        #matrix = np.matmul(normal_matrix, invert_matrix)\n\n        x[:,t+1] = np.linalg.multi_dot([x[:,t], normal_matrix, invert_matrix])\n    return x.copy()\n\nforward_euler_sol = forward_euler(x, nt, deltaT, M)\nbackward_euler_sol = backward_euler(x, nt, deltaT, M)\ncentered_euler_sol = centered_euler(x, nt, deltaT, M)\nanalytical_solution = analytical_solution(x, q0, p0, time_mesh)\n \ndef plot(k, m):\n    fig, ax = plt.subplots(figsize=(7,7))\n    ax.plot(analytical_solution[0,:], analytical_solution[1,:]/m, label=\"Th\u00e9orique\")\n    ax.plot(forward_euler_sol[0,:], forward_euler_sol[1,:]/m, label=\"Euler avant\")\n
ax.plot(backward_euler_sol[0,:], backward_euler_sol[1,:]/m, label=\"Euler arri\u00e8re\")\n ax.plot(centered_euler_sol[0,:], centered_euler_sol[1,:], label=\"Euler centr\u00e9\")\n ax.legend(fontsize=\"small\")\n ax.set_xlabel(\"$q$\")\n ax.set_ylabel(\"$\\dot{q}$\")\n ax.set_xlim(-2, 2)\n ax.set_ylim(-2, 2)\n ax.set_title(f\"Harmonic Oscillator: $k={k}$, $m={m}$\")\n plt.show()\n\nplot(k, m)\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "da19b36dfd5b3b638742146dd9dcaae51e1ae75a", "size": 82478, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "python/notebooks/harmonic_oscillator.ipynb", "max_stars_repo_name": "Mathieu-R/computational-physics", "max_stars_repo_head_hexsha": "7b1a39ffb65f864bc741a3e709892ddb18723a58", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "python/notebooks/harmonic_oscillator.ipynb", "max_issues_repo_name": "Mathieu-R/computational-physics", "max_issues_repo_head_hexsha": "7b1a39ffb65f864bc741a3e709892ddb18723a58", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2021-06-08T23:08:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-12T00:54:45.000Z", "max_forks_repo_path": "python/notebooks/harmonic_oscillator.ipynb", "max_forks_repo_name": "Mathieu-R/computational-physics", "max_forks_repo_head_hexsha": "7b1a39ffb65f864bc741a3e709892ddb18723a58", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 368.2053571429, "max_line_length": 75612, "alphanum_fraction": 0.9267683503, "converted": true, "num_tokens": 1581, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9546474168650673, "lm_q2_score": 0.8774767970940975, "lm_q1q2_score": 0.837680957704913}} {"text": "# Activation Functions\n\nThis function introduces activation functions in TensorFlow\n\nWe start by loading the necessary libraries for this script.\n\n\n```python\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport tensorflow as tf\n# from tensorflow.python.framework import ops\n# ops.reset_default_graph()\ntf.reset_default_graph()\n```\n\n### Start a graph session\n\n\n```python\nsess = tf.Session()\n```\n\n### Initialize the X range values for plotting\n\n\n```python\nx_vals = np.linspace(start=-10., stop=10., num=100)\n```\n\n### Activation Functions:\n\nReLU activation\n\n\n```python\nprint(sess.run(tf.nn.relu([-3., 3., 10.])))\ny_relu = sess.run(tf.nn.relu(x_vals))\n```\n\n [ 0. 3. 10.]\n\n\nReLU-6 activation\n\n\n```python\nprint(sess.run(tf.nn.relu6([-3., 3., 10.])))\ny_relu6 = sess.run(tf.nn.relu6(x_vals))\n```\n\n [0. 3. 6.]\n\n\nReLU-6 refers to the following function\n\n\\begin{equation}\n \\min\\left(\\max(0, x), 6\\right)\n\\end{equation}\n\n\nSigmoid activation\n\n\n```python\nprint(sess.run(tf.nn.sigmoid([-1., 0., 1.])))\ny_sigmoid = sess.run(tf.nn.sigmoid(x_vals))\n```\n\n [0.26894143 0.5 0.7310586 ]\n\n\nHyper Tangent activation\n\n\n```python\nprint(sess.run(tf.nn.tanh([-1., 0., 1.])))\ny_tanh = sess.run(tf.nn.tanh(x_vals))\n```\n\n [-0.7615942 0. 0.7615942]\n\n\nSoftsign activation\n\n\n```python\nprint(sess.run(tf.nn.softsign([-1., 0., 1.])))\ny_softsign = sess.run(tf.nn.softsign(x_vals))\n```\n\n [-0.5 0. 
0.5]\n\n\nsoftsign refers to the following function\n\n\\begin{equation}\n \\frac{x}{1 + |x|}\n\\end{equation}\n\n
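As a quick sanity check (a sketch assuming the cells above have been run), the same curve can be reproduced with plain NumPy and compared against the TensorFlow output:\n\n\n```python\n# Softsign computed directly from its formula, x / (1 + |x|)\ny_softsign_np = x_vals / (1 + np.abs(x_vals))\nprint(np.allclose(y_softsign_np, y_softsign))\n```\n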
    \n\n\n\nSoftplus activation\n\n\n\n\n```python\nprint(sess.run(tf.nn.softplus([-1., 0., 1.])))\ny_softplus = sess.run(tf.nn.softplus(x_vals))\n```\n\n [0.31326166 0.6931472 1.3132616 ]\n\n\nSoftplus refers to the following function\n\n\\begin{equation}\n \\log\\left(\\exp(x) + 1\\right)\n\\end{equation}\n\n\nExponential linear activation\n\n\n```python\nprint(sess.run(tf.nn.elu([-1., 0., 1.])))\ny_elu = sess.run(tf.nn.elu(x_vals))\n```\n\n [-0.63212055 0. 1. ]\n\n\nELU refers to the following function\n\n\\begin{equation}\\label{eq:}\n f = \\begin{cases}\n \\exp(x) - 1 &(x < 0 )\\\\\n 0 &(x \\geq 0 )\\\\\n \\end{cases}\n\\end{equation}\n\n\n### Plot the different functions\n\n\n```python\nplt.style.use('ggplot')\nplt.plot(x_vals, y_softplus, 'r--', label='Softplus', linewidth=2)\nplt.plot(x_vals, y_relu, 'b:', label='ReLU', linewidth=2)\nplt.plot(x_vals, y_relu6, 'g-.', label='ReLU6', linewidth=2)\nplt.plot(x_vals, y_elu, 'k-', label='ExpLU', linewidth=0.5)\nplt.ylim([-1.5,7])\nplt.legend(loc='upper left', shadow=True, edgecolor='k')\nplt.show()\n\nplt.plot(x_vals, y_sigmoid, 'r--', label='Sigmoid', linewidth=2)\nplt.plot(x_vals, y_tanh, 'b:', label='Tanh', linewidth=2)\nplt.plot(x_vals, y_softsign, 'g-.', label='Softsign', linewidth=2)\nplt.ylim([-1.3,1.3])\nplt.legend(loc='upper left', shadow=True, edgecolor='k')\nplt.show()\n```\n\n\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "4aecf5e2aea19df62d47768b5fcf2465a61c4c86", "size": 50963, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Chap01/06_activation_functions.ipynb", "max_stars_repo_name": "haru-256/tensorflow_cookbook", "max_stars_repo_head_hexsha": "18923111eaccb57b47d07160ae5c202c945da750", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-02-27T16:16:02.000Z", "max_stars_repo_stars_event_max_datetime": "2021-02-27T16:16:02.000Z", "max_issues_repo_path": "01_Introduction/06_Implementing_Activation_Functions/06_activation_functions.ipynb", "max_issues_repo_name": "haru-256/tensorflow_cookbook", "max_issues_repo_head_hexsha": "18923111eaccb57b47d07160ae5c202c945da750", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2018-03-07T14:31:22.000Z", "max_issues_repo_issues_event_max_datetime": "2018-03-07T15:04:17.000Z", "max_forks_repo_path": "Chap01/06_activation_functions.ipynb", "max_forks_repo_name": "haru-256/tensorflow_cookbook", "max_forks_repo_head_hexsha": "18923111eaccb57b47d07160ae5c202c945da750", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 121.9210526316, "max_line_length": 23456, "alphanum_fraction": 0.884151247, "converted": true, "num_tokens": 902, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.944176863577751, "lm_q2_score": 0.8872046026642944, "lm_q1q2_score": 0.8376780590953182}} {"text": "# The Taylor Series\n** October 2017 **\n\n** Andrew Riberio @ [AndrewRib.com](http://www.andrewrib.com) **\n\nIn this notebook we will explore the taylor series using sympy. 
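\n\nFor reference, SymPy also ships a built-in `series` method that we can compare our hand-rolled expansion against later on; a small self-contained sketch, added here for comparison and not part of the original notebook:\n\n\n```python\nimport sympy as sp\nx = sp.symbols('x')\n# Built-in Maclaurin expansion of exp(x), for comparison with the taylor() function defined below\nsp.exp(x).series(x, 0, 6)\n```\n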
\n\nResources:\n* https://en.wikipedia.org/wiki/Series_(mathematics)\n* https://en.wikipedia.org/wiki/Taylor_series\n\n## Libraries\n\n\n```python\nimport sympy as sp\nimport sympy.functions.combinatorial.factorials as fact\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom IPython.display import display\nfrom decimal import *\n\n# Pretty Latex printing for Sympy with an argument to show the polynomials the way we write them \n# from small term to larger. May give you problems if you don't have a latex distribution. \nsp.init_printing(order='rev-lex',use_latex='mathjax')\n```\n\n## Taylor Series\nThe taylor series is an intriguing power series that makes use of acending derivitives to approximate a function. \nPlease see the [Taylor Series](https://en.wikipedia.org/wiki/Taylor_series) wiki for a formal definition. Here we will implement the taylor series function and use it to approximate various functions. \n\n\n```python\ndef taylor(fn,depth, x=None,a = None, printSteps = True):\n # If no x or a is provided, we will be symbolically computing the taylor series. \n if(x == None):\n x = sp.symbols('x')\n if(a == None):\n a = sp.symbols('a')\n \n # The start of the taylor series \n result = fn(a)\n \n if depth == 0:\n return result \n elif depth > 0:\n cummulativeDYDX = fn(a)\n for i in range(depth):\n cummulativeDYDX = sp.diff(cummulativeDYDX)\n \n if(printSteps):\n print(\"Derivitive of order d^{0}y/dx^[0|: {1}\".format(i+1,cummulativeDYDX))\n \n result = result + (cummulativeDYDX / fact.factorial(i+1))*(x - a)**(i+1)\n \n return result\n else:\n # Malformed case.\n return 0\n```\n\nConsider the beloved Euler Number: $e^{x}$. It has a most interesting taylor series. Let's show it here expanded to 5 steps. \n\n\n```python\neTaylor5 = taylor(sp.exp,5)\neTaylor5 \n```\n\n Derivitive of order d^1y/dx^[0|: exp(a)\n Derivitive of order d^2y/dx^[0|: exp(a)\n Derivitive of order d^3y/dx^[0|: exp(a)\n Derivitive of order d^4y/dx^[0|: exp(a)\n Derivitive of order d^5y/dx^[0|: exp(a)\n\n\n\n\n\n$$e^{a} + \\left(x - a\\right) e^{a} + \\frac{e^{a}}{2} \\left(x - a\\right)^{2} + \\frac{e^{a}}{6} \\left(x - a\\right)^{3} + \\frac{e^{a}}{24} \\left(x - a\\right)^{4} + \\frac{e^{a}}{120} \\left(x - a\\right)^{5}$$\n\n\n\nAs we see, one of the nifty features of the $e^{x}$ is that it is equal to its derivative. Thus, when we chain higher order derivatives of $e^{x}$, we simply get the same thing ( as shown in the printout). This leads to the simplest type of taylor series, as the derivitive term remains constant. \n\nSo what does this taylor series of $e^{x}$ look like with only 5 terms? 
We will make use of sympy's symbolic computation abilities to substitute in values for x and a.\n\nIn the case of a = 0, we get:\n\n\n```python\nx, a = sp.symbols('x a')\neTaylor5_0 = eTaylor5.subs(a,0)\neTaylor5_0 \n```\n\n\n\n\n$$1 + x + \\frac{x^{2}}{2} + \\frac{x^{3}}{6} + \\frac{x^{4}}{24} + \\frac{x^{5}}{120}$$\n\n\n\n** Theorem **: When we let a = 0, and x = 1, the taylor series as its number of terms approaches infinity = $e$ and is [the definition of $e$](https://en.wikipedia.org/wiki/Exponential_function).\n\n\n```python\nxs = np.arange(-2,4,0.1)\nys = np.exp(xs)\nyTaylor = []\n\nfor xi in xs:\n yTaylor.append( eTaylor5_0.subs(x,xi) )\n \nplt.figure(figsize=(15,15))\nplt.plot(xs,ys,color = \"blue\")\nplt.plot(xs,yTaylor,color=\"red\")\n\nplt.title(\"Value of $e$ and Taylor Series Approximation\",fontsize=20)\nplt.ylabel(\"Y\",fontsize=20)\nplt.xlabel(\"X\",fontsize=20)\n\nplt.show()\n```\n\nAs we see here, due to only having 5 terms in our taylor series, we get an approximation of $e^x$ with some error. As we increase our number of terms in our taylor series, we increase the approximation precision. \n\nLet's see what the value of $e$ is in our taylor series and compare that to what numpy defines $e$ as:\n\n\n```python\nres = eTaylor5_0.subs(x,1)\ndecimalVal = res.p/res.q\nnpE = np.exp(1)\n\nprint(\"Our approximated e value: {0}\".format(res.p/res.q) )\nprint(\"Numpy's approximated e value: {0}\".format(npE))\nprint(\"Error: {0}\".format(npE - decimalVal))\n```\n\n Our approximated e value: 2.716666666666667\n Numpy's approximated e value: 2.718281828459045\n Error: 0.0016151617923783057\n\n\nLet's beef up our taylor series to include more terms instead of the measly five we've been dealing with. We will produce 100 taylor series with a = 0, starting with a one termed taylor series to a 100 termed one. \n\n\n```python\ntaylors = []\nfor i in range(1,101):\n taylors.append(taylor(sp.exp,i,printSteps=False))\n taylors[i-1] = taylors[i-1].subs(a,0)\n```\n\nWe can pretty print one of the taylor series in the list by:\n\n\n```python\ntaylors[4]\n```\n\n\n\n\n$$1 + x + \\frac{x^{2}}{2} + \\frac{x^{3}}{6} + \\frac{x^{4}}{24} + \\frac{x^{5}}{120}$$\n\n\n\n\n```python\n# Showing the first ten taylors. There must be a better way to pretty print this. 
\nfor i in range(0,10):\n display(taylors[i])\n```\n\n\n$$1 + x$$\n\n\n\n$$1 + x + \\frac{x^{2}}{2}$$\n\n\n\n$$1 + x + \\frac{x^{2}}{2} + \\frac{x^{3}}{6}$$\n\n\n\n$$1 + x + \\frac{x^{2}}{2} + \\frac{x^{3}}{6} + \\frac{x^{4}}{24}$$\n\n\n\n$$1 + x + \\frac{x^{2}}{2} + \\frac{x^{3}}{6} + \\frac{x^{4}}{24} + \\frac{x^{5}}{120}$$\n\n\n\n$$1 + x + \\frac{x^{2}}{2} + \\frac{x^{3}}{6} + \\frac{x^{4}}{24} + \\frac{x^{5}}{120} + \\frac{x^{6}}{720}$$\n\n\n\n$$1 + x + \\frac{x^{2}}{2} + \\frac{x^{3}}{6} + \\frac{x^{4}}{24} + \\frac{x^{5}}{120} + \\frac{x^{6}}{720} + \\frac{x^{7}}{5040}$$\n\n\n\n$$1 + x + \\frac{x^{2}}{2} + \\frac{x^{3}}{6} + \\frac{x^{4}}{24} + \\frac{x^{5}}{120} + \\frac{x^{6}}{720} + \\frac{x^{7}}{5040} + \\frac{x^{8}}{40320}$$\n\n\n\n$$1 + x + \\frac{x^{2}}{2} + \\frac{x^{3}}{6} + \\frac{x^{4}}{24} + \\frac{x^{5}}{120} + \\frac{x^{6}}{720} + \\frac{x^{7}}{5040} + \\frac{x^{8}}{40320} + \\frac{x^{9}}{362880}$$\n\n\n\n$$1 + x + \\frac{x^{2}}{2} + \\frac{x^{3}}{6} + \\frac{x^{4}}{24} + \\frac{x^{5}}{120} + \\frac{x^{6}}{720} + \\frac{x^{7}}{5040} + \\frac{x^{8}}{40320} + \\frac{x^{9}}{362880} + \\frac{x^{10}}{3628800}$$\n\n\n\n```python\n# Showing the first ten computed rationals given x = 1\nfor i in range(0,10):\n res = taylors[i].subs(x,1)\n print(\"Series with {0} terms: {1} = {2}\".format(i,res,res.p/res.q))\n```\n\n Series with 0 terms: 2 = 2.0\n Series with 1 terms: 5/2 = 2.5\n Series with 2 terms: 8/3 = 2.6666666666666665\n Series with 3 terms: 65/24 = 2.7083333333333335\n Series with 4 terms: 163/60 = 2.716666666666667\n Series with 5 terms: 1957/720 = 2.7180555555555554\n Series with 6 terms: 685/252 = 2.7182539682539684\n Series with 7 terms: 109601/40320 = 2.71827876984127\n Series with 8 terms: 98641/36288 = 2.7182815255731922\n Series with 9 terms: 9864101/3628800 = 2.7182818011463845\n\n\nIf we proceed in this fashion, by showing the value of the taylor series rational by calling res.p/res.q, we will run up against pythons default precision for floating point numbers, and it will seem like we reach a point where adding another term to the taylor series adds no further precision. Let's show this by showing the result of the taylor series with 99 vs 98 terms:\n\n\n```python\nr1 = taylors[99].subs(x,1)\nr2 = taylors[98].subs(x,1)\n\nprint(\"Result 1: {0}\".format(r1.p/r1.q))\nprint(\"Result 2: {0}\".format(r2.p/r2.q) )\nprint(\"Numpy exp: {0}\".format(np.exp(1)))\n```\n\n Result 1: 2.718281828459045\n Result 2: 2.718281828459045\n Numpy exp: 2.718281828459045\n\n\nAs we see, it shows the same value for all results. This is a result of truncation. We can use a higher precision floating point object by using the Decimal class. Let's do the same thing, but use that class. \n\n\n```python\nr1Dec = Decimal(r1.p)/Decimal(r1.q)\nr2Dec = Decimal(r1.p)/Decimal(r1.q)\nnpE = Decimal(np.exp(1))\n\nprint(\"Result 1: {0}\".format(r1Dec))\nprint(\"Result 2: {0}\".format(r2Dec))\nprint(\"Numpy exp: {0}\".format(npE))\n```\n\n Result 1: 2.718281828459045235360287471\n Result 2: 2.718281828459045235360287471\n Numpy exp: 2.718281828459045090795598298427648842334747314453125\n\n\nAt this point, we still don't enough precision to show the difference between the last two taylor series in our list which have 98 and 99 terms, nor can we match numpy's precision. Let's go a little deeper here. We can do arithmetic up to any precision we'd like. This is called arbitrary precision. Sympy uses a library called [mpmath](http://mpmath.org/) behind the scenes to accomplish this. 
\n\nSee: http://docs.sympy.org/latest/modules/evalf.html\n\n\n```python\nhigherPrecision = sp.N(r1.p/r1.q,52)\nprint(\"No sub: {0}\".format(higherPrecision))\nprint(\"Numpy exp: {0}\".format(Decimal(np.exp(1))))\nprint(\"Sub {0}\".format(sp.N(taylors[99].subs(x,1),52)))\n```\n\n No sub: 2.718281828459045090795598298427648842334747314453125\n Numpy exp: 2.718281828459045090795598298427648842334747314453125\n Sub 2.718281828459045235360287471352662497757247093699960\n\n\nSo this result is a bit tricky. Somewhere in our computation we are truncating results when we computer higherPrecision. It turns out, we must call the Numeric evaluation function sp.N when performing our x substitution, so that we do not truncate results when dividing during the substitution; however, if we symbolically compute the substitution of x, we get a rational approximation that is more in line with standard results ( like arriving at numpy's $e$). Sympy uses different libraries for performing different things. Namely, the numeric evaluation module sp.N uses mpmath to do arbitrary precision arithmetic, and the symbolic computation uses another module. This explains the difference in values, but since much is hidden in the nuts and bolts of the package, I cannot provide a satisfactory answer why at this time. \n\nAs observed above, in the special case of a = 0, it transforms the series into the definition of $e$. No division or truncation happens during this substitution, so the symbolic solution should equal the numeric one. \n\nHere we will print the results of our taylor series' with differing precision to show how it changes the values. \n\n\n```python\np = 50\nprint(\"Precision = {0}\".format(p))\n\ncntr = 0\nfor taylor_i in taylors:\n res = sp.N(taylor_i.subs(x,1),p)\n print(\"taylors[{0}] = {1}\".format(cntr,res))\n cntr+=1\n```\n\n Precision = 50\n taylors[0] = 2.0000000000000000000000000000000000000000000000000\n taylors[1] = 2.5000000000000000000000000000000000000000000000000\n taylors[2] = 2.6666666666666666666666666666666666666666666666667\n taylors[3] = 2.7083333333333333333333333333333333333333333333333\n taylors[4] = 2.7166666666666666666666666666666666666666666666667\n taylors[5] = 2.7180555555555555555555555555555555555555555555556\n taylors[6] = 2.7182539682539682539682539682539682539682539682540\n taylors[7] = 2.7182787698412698412698412698412698412698412698413\n taylors[8] = 2.7182815255731922398589065255731922398589065255732\n taylors[9] = 2.7182818011463844797178130511463844797178130511464\n taylors[10] = 2.7182818261984928651595318261984928651595318261985\n taylors[11] = 2.7182818282861685639463417241195018972796750574528\n taylors[12] = 2.7182818284467590023145578701134256689812245367801\n taylors[13] = 2.7182818284582297479122875948272773669599066424463\n taylors[14] = 2.7182818284589944642854695764748674801584854494907\n taylors[15] = 2.7182818284590422590587934503278418622333966249310\n taylors[16] = 2.7182818284590450705160477958486050611789796352510\n taylors[17] = 2.7182818284590452267081174817108696833426231358244\n taylors[18] = 2.7182818284590452349287527283351994002986043726966\n taylors[19] = 2.7182818284590452353397844906664158861464034345403\n taylors[20] = 2.7182818284590452353593574317298071473772510089138\n taylors[21] = 2.7182818284590452353602471108690522047059258986580\n taylors[22] = 2.7182818284590452353602857925707585115463030677773\n taylors[23] = 2.7182818284590452353602874043083296076646521164906\n taylors[24] = 2.7182818284590452353602874687778324515093860784392\n 
taylors[25] = 2.7182818284590452353602874712574287147341835385141\n taylors[26] = 2.7182818284590452353602874713492656133721389999984\n taylors[27] = 2.7182818284590452353602874713525455026092088379085\n taylors[28] = 2.7182818284590452353602874713526586022380733150778\n taylors[29] = 2.7182818284590452353602874713526623722257021309835\n taylors[30] = 2.7182818284590452353602874713526624938382062863353\n taylors[31] = 2.7182818284590452353602874713526624976385970411900\n taylors[32] = 2.7182818284590452353602874713526624977537603973977\n taylors[33] = 2.7182818284590452353602874713526624977571475549333\n taylors[34] = 2.7182818284590452353602874713526624977572443308628\n taylors[35] = 2.7182818284590452353602874713526624977572470190831\n taylors[36] = 2.7182818284590452353602874713526624977572470917377\n taylors[37] = 2.7182818284590452353602874713526624977572470936497\n taylors[38] = 2.7182818284590452353602874713526624977572470936987\n taylors[39] = 2.7182818284590452353602874713526624977572470936999\n taylors[40] = 2.7182818284590452353602874713526624977572470937000\n taylors[41] = 2.7182818284590452353602874713526624977572470937000\n taylors[42] = 2.7182818284590452353602874713526624977572470937000\n taylors[43] = 2.7182818284590452353602874713526624977572470937000\n taylors[44] = 2.7182818284590452353602874713526624977572470937000\n taylors[45] = 2.7182818284590452353602874713526624977572470937000\n taylors[46] = 2.7182818284590452353602874713526624977572470937000\n taylors[47] = 2.7182818284590452353602874713526624977572470937000\n taylors[48] = 2.7182818284590452353602874713526624977572470937000\n taylors[49] = 2.7182818284590452353602874713526624977572470937000\n taylors[50] = 2.7182818284590452353602874713526624977572470937000\n taylors[51] = 2.7182818284590452353602874713526624977572470937000\n taylors[52] = 2.7182818284590452353602874713526624977572470937000\n taylors[53] = 2.7182818284590452353602874713526624977572470937000\n taylors[54] = 2.7182818284590452353602874713526624977572470937000\n taylors[55] = 2.7182818284590452353602874713526624977572470937000\n taylors[56] = 2.7182818284590452353602874713526624977572470937000\n taylors[57] = 2.7182818284590452353602874713526624977572470937000\n taylors[58] = 2.7182818284590452353602874713526624977572470937000\n taylors[59] = 2.7182818284590452353602874713526624977572470937000\n taylors[60] = 2.7182818284590452353602874713526624977572470937000\n taylors[61] = 2.7182818284590452353602874713526624977572470937000\n taylors[62] = 2.7182818284590452353602874713526624977572470937000\n taylors[63] = 2.7182818284590452353602874713526624977572470937000\n taylors[64] = 2.7182818284590452353602874713526624977572470937000\n taylors[65] = 2.7182818284590452353602874713526624977572470937000\n taylors[66] = 2.7182818284590452353602874713526624977572470937000\n taylors[67] = 2.7182818284590452353602874713526624977572470937000\n taylors[68] = 2.7182818284590452353602874713526624977572470937000\n taylors[69] = 2.7182818284590452353602874713526624977572470937000\n taylors[70] = 2.7182818284590452353602874713526624977572470937000\n taylors[71] = 2.7182818284590452353602874713526624977572470937000\n taylors[72] = 2.7182818284590452353602874713526624977572470937000\n taylors[73] = 2.7182818284590452353602874713526624977572470937000\n taylors[74] = 2.7182818284590452353602874713526624977572470937000\n taylors[75] = 2.7182818284590452353602874713526624977572470937000\n taylors[76] = 2.7182818284590452353602874713526624977572470937000\n taylors[77] = 
2.7182818284590452353602874713526624977572470937000\n taylors[78] = 2.7182818284590452353602874713526624977572470937000\n taylors[79] = 2.7182818284590452353602874713526624977572470937000\n taylors[80] = 2.7182818284590452353602874713526624977572470937000\n taylors[81] = 2.7182818284590452353602874713526624977572470937000\n taylors[82] = 2.7182818284590452353602874713526624977572470937000\n taylors[83] = 2.7182818284590452353602874713526624977572470937000\n taylors[84] = 2.7182818284590452353602874713526624977572470937000\n taylors[85] = 2.7182818284590452353602874713526624977572470937000\n taylors[86] = 2.7182818284590452353602874713526624977572470937000\n taylors[87] = 2.7182818284590452353602874713526624977572470937000\n taylors[88] = 2.7182818284590452353602874713526624977572470937000\n taylors[89] = 2.7182818284590452353602874713526624977572470937000\n taylors[90] = 2.7182818284590452353602874713526624977572470937000\n taylors[91] = 2.7182818284590452353602874713526624977572470937000\n taylors[92] = 2.7182818284590452353602874713526624977572470937000\n taylors[93] = 2.7182818284590452353602874713526624977572470937000\n taylors[94] = 2.7182818284590452353602874713526624977572470937000\n taylors[95] = 2.7182818284590452353602874713526624977572470937000\n taylors[96] = 2.7182818284590452353602874713526624977572470937000\n taylors[97] = 2.7182818284590452353602874713526624977572470937000\n taylors[98] = 2.7182818284590452353602874713526624977572470937000\n taylors[99] = 2.7182818284590452353602874713526624977572470937000\n\n\nAs we see, at expansion 40 we arrive at the closest value given this precision. After 40 we see that the differences are chopped off. Let's start at expansion 40 with a higher precision. \n\n\n```python\np = 100\nprint(\"Precision = {0}\".format(p))\n\ncntr = 38\nfor taylor_i in taylors[38:100]:\n res = sp.N(taylor_i.subs(x,1),p)\n print(\"taylors[{0}] = {1}\".format(cntr,res))\n cntr+=1\n```\n\n Precision = 100\n taylors[38] = 2.718281828459045235360287471352662497757247093698703335742056391892310837864596648186030883920472746\n taylors[39] = 2.718281828459045235360287471352662497757247093699928953181184777741734377849536405109756445884951071\n taylors[40] = 2.718281828459045235360287471352662497757247093699958846289456201786842269068681277229847313249938347\n taylors[41] = 2.718281828459045235360287471352662497757247093699959558030129330930773409335803774185087571996723758\n taylors[42] = 2.718281828459045235360287471352662497757247093699959574582238008352725296318760111323581531502462954\n taylors[43] = 2.718281828459045235360287471352662497757247093699959574958422296475951475568372755349456394218502481\n taylors[44] = 2.718281828459045235360287471352662497757247093699959574966781947323134279551697480772253613389970026\n taylors[45] = 2.718281828459045235360287471352662497757247093699959574966963678863290427464378453064053552937175842\n taylors[46] = 2.718281828459045235360287471352662497757247093699959574966967545491804388058265282261751423991371711\n taylors[47] = 2.718281828459045235360287471352662497757247093699959574966967626046565095570637924536703462971667458\n taylors[48] = 2.718281828459045235360287471352662497757247093699959574966967627690539803887216958052518810705959208\n taylors[49] = 2.718281828459045235360287471352662497757247093699959574966967627723419298053548538722835117660645043\n taylors[50] = 2.718281828459045235360287471352662497757247093699959574966967627724063994017594255990880535444070256\n taylors[51] = 
2.718281828459045235360287471352662497757247093699959574966967627724076392016902827476804485786059202\n taylors[52] = 2.718281828459045235360287471352662497757247093699959574966967627724076625941418083542576635792511824\n taylors[53] = 2.718281828459045235360287471352662497757247093699959574966967627724076630273353551247498342274112798\n taylors[54] = 2.718281828459045235360287471352662497757247093699959574966967627724076630352116014296678736937414634\n taylors[55] = 2.718281828459045235360287471352662497757247093699959574966967627724076630353522486851128386842116452\n taylors[56] = 2.718281828459045235360287471352662497757247093699959574966967627724076630353547161808223994735181397\n taylors[57] = 2.718281828459045235360287471352662497757247093699959574966967627724076630353547587238518746595406654\n taylors[58] = 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594449201708491342676\n taylors[59] = 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594569379757856274943\n taylors[60] = 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571349889813077111\n taylors[61] = 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571381666134961017\n taylors[62] = 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382170521022666\n taylors[63] = 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178402054879\n taylors[64] = 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178523301529\n taylors[65] = 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525138599\n taylors[66] = 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166018\n taylors[67] = 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166421\n taylors[68] = 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427\n taylors[69] = 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427\n taylors[70] = 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427\n taylors[71] = 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427\n taylors[72] = 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427\n taylors[73] = 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427\n taylors[74] = 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427\n taylors[75] = 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427\n taylors[76] = 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427\n taylors[77] = 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427\n taylors[78] = 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427\n taylors[79] = 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427\n taylors[80] = 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427\n taylors[81] = 
2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427\n taylors[82] = 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427\n taylors[83] = 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427\n taylors[84] = 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427\n taylors[85] = 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427\n taylors[86] = 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427\n taylors[87] = 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427\n taylors[88] = 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427\n taylors[89] = 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427\n taylors[90] = 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427\n taylors[91] = 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427\n taylors[92] = 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427\n taylors[93] = 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427\n taylors[94] = 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427\n taylors[95] = 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427\n taylors[96] = 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427\n taylors[97] = 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427\n taylors[98] = 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427\n taylors[99] = 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427\n\n\nThe trend: the higher the precision, the more we can approximate without truncation, but the precision changes the overall value of the computation.\n\nFeel free to use this section to experiment with values\n\n\n```python\n# Define the taylor series with depth = your value. \ndepth = 18\nt = taylor(sp.exp,depth,printSteps = False)\nt2 = t.subs(a,0)\n```\n\n\n```python\n# Perform the symbolic substitution with precision = p. 
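\n# sp.N(expr, p) evaluates the expression numerically to p significant digits (it is a thin wrapper\n# around evalf), so raising p below exposes digits that a plain double-precision float would truncate.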
\np = 50\nr = sp.N(t2.subs(x,1),p)\n\nnpE = Decimal(np.exp(1))\nprint(\"Result: {0}\".format(r))\nprint(\"Numpy exp: {0}\".format(npE))\n```\n\n Result: 2.7182818284590452267081174817108696833426231358244\n Numpy exp: 2.718281828459045090795598298427648842334747314453125\n\n\n\n```python\n# Let's tail this off with plotting the first 10 taylors derived above (up to ten units in the series)\nxs = np.arange(-2,4,0.1)\nys = np.exp(xs)\nyTaylor = []\n\nplt.figure(figsize=(15,15)),\n\n\nfor taylor_i in taylors[0:10]:\n print(\"Plotting: \")\n display(taylor_i)\n for xi in xs:\n yTaylor.append( sp.N(taylor_i.subs(x,xi) ) )\n \n plt.plot(xs,yTaylor,color=\"red\")\n yTaylor = []\n\nplt.plot(xs,ys,color = \"blue\")\n\nplt.title(\"Value of $e$ and Taylor Series Approximation\",fontsize=20)\nplt.ylabel(\"Y\",fontsize=20)\nplt.xlabel(\"X\",fontsize=20)\n\nplt.show()\n\n# Obviously the red lines that get closer to the blue line represent taylor series with more terms. \n```\n\nOur current implementation is a general one and doesn't take into consideration the structure of the series. With a little creativity, it can be clear how we can linearlize the approximation or use dynamic programming/sub-solutions to make it more efficient. \n\nI hope this notebook showed you a bit about using the taylor series for numerical function approximation and some considerations of precision and differences between computing a value using sympy's symbolic packages versus the numeric ones. \n", "meta": {"hexsha": "4428eb2fd6c208334e19f88afd775c60954a6794", "size": 165343, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Notebooks/The Taylor Series.ipynb", "max_stars_repo_name": "Andrewnetwork/WorkshopScipy", "max_stars_repo_head_hexsha": "739d24b9078fffb84408e7877862618d88d947dc", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 433, "max_stars_repo_stars_event_min_datetime": "2017-12-16T20:50:07.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-08T13:05:57.000Z", "max_issues_repo_path": "Notebooks/The Taylor Series.ipynb", "max_issues_repo_name": "Andrewnetwork/WorkshopScipy", "max_issues_repo_head_hexsha": "739d24b9078fffb84408e7877862618d88d947dc", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2017-12-17T06:10:28.000Z", "max_issues_repo_issues_event_max_datetime": "2018-11-14T15:50:10.000Z", "max_forks_repo_path": "Notebooks/The Taylor Series.ipynb", "max_forks_repo_name": "Andrewnetwork/WorkshopScipy", "max_forks_repo_head_hexsha": "739d24b9078fffb84408e7877862618d88d947dc", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 47, "max_forks_repo_forks_event_min_datetime": "2017-12-06T20:40:09.000Z", "max_forks_repo_forks_event_max_datetime": "2019-06-01T11:33:57.000Z", "avg_line_length": 142.4142980189, "max_line_length": 77156, "alphanum_fraction": 0.8519804286, "converted": true, "num_tokens": 10000, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9219218391455084, "lm_q2_score": 0.9086178963141964, "lm_q1q2_score": 0.8376746820505069}} {"text": "```python\n## This cell just imports necessary modules\n%pylab notebook\nfrom sympy import sin, cos, exp, Function, Symbol, diff, integrate, solve\nfrom mpl_toolkits import mplot3d\n```\n\n\n```python\n###### SCALAR FIELDS ######\n###### Lecture 3, slide 3 ######\n# Create a mesh of 2D Cartesian coordinates, where -5 <= x <= 5 and -5 <= y <= 5\nx = numpy.arange(-2., 2., 0.025)\ny = numpy.arange(-2., 2., 0.025)\nX, Y = numpy.meshgrid(x, y)\n\n# Computes the value of the scalar field at each (x,y) coordinate, and stores it in Z.\nf = 16 - 2*(X**2) - Y**2 + X*Y\n```\n\n\n```python\n# The scalar field on a contour plot\nprint(\"Plotting contours of the scalar field f(x,y) = 16 - 2*(x**2) - y**2 + x*y...\")\nfig = pylab.figure()\ncontour_plot = pylab.contour(X, Y, f)\npylab.clabel(contour_plot, inline=1)\npylab.xlabel(\"x\")\npylab.ylabel(\"y\")\n```\n\n\n```python\nfig = pylab.figure()\nax = pylab.axes(projection='3d')\nax.plot_surface(X, Y, f, cmap='gray', edgecolor = 'k', lw=0.25)\nax.set_xlabel(\"x\")\nax.set_ylabel(\"y\")\nax.set_zlabel(\"h\")\n```\n\n\n```python\nfig = pylab.figure()\nax = pylab.axes(projection='3d')\nax.contour3D(X, Y, f, 10, colors = 'k', linestyles='-')\nax.plot_surface(X, Y, f, cmap='gray', alpha=0.5)\nax.set_xlabel(\"x\")\nax.set_ylabel(\"y\")\nax.set_zlabel(\"h\")\n```\n\n\n```python\n###### VECTOR FIELDS ######\n###### Lecture 3, slide 4 ######\n# Create a mesh of 2D Cartesian coordinates, where -5 <= x <= 5 and -5 <= y <= 5\nx = numpy.arange(-5.0, 5.0, 0.25)\ny = numpy.arange(-5.0, 5.0, 0.25)\nX, Y = numpy.meshgrid(x, y)\n\n# Computes the value of the vector field at each (x,y) coordinate.\n# Z1 and Z2 hold the i and j component of the vector field respectively.\nZ1 = -(X**2)\nZ2 = -(Y**2)\n\nprint(\"Plotting the vector field f(x,y) = [-x**2, -y**2] ...\")\nfig = pylab.figure()\nplt = pylab.quiver(X,Y,Z1,Z2,angles='xy',scale=1000,color='r')\npylab.quiverkey(plt,-5,5.5,50,\"50 units\",coordinates='data',color='r')\npylab.xlabel(\"x\")\npylab.ylabel(\"y\")\n```\n\n\n```python\n###### GRADIENTS ######\n###### Lecture 3, slide 6 ######\n\n# Define the independent variables using Symbol\nx = Symbol('x')\ny = Symbol('y')\n# Define the function f(x,y)\nf = 16 - 2*(x**2) - y**2 + x*y\n\n# The gradient of f (a scalar field) is a vector field:\ngrad_f = [diff(f,x), diff(f,y)]\nprint(\"The gradient of the scalar field f(x,y) = 16 - 2*(x**2) - y**2 + x*y is: \")\nprint(grad_f)\n\nprint(\"The point where the gradient is zero is: \")\n# We solve a simultaneous equation such that grad_f[0] == 0 and grad_f[1] == 0\nprint(solve([grad_f[0], grad_f[1]], [x, y]))\n\n```\n\n\n```python\n###### DIRECTIONAL DERIVATIVES ######\n###### Lecture 3, slide 12 ######\n\n# Define the independent variables using Symbol\nx = Symbol('x')\ny = Symbol('y')\n# Define the function h(x,y)\nh = 3*x*(y**2)\n\n# The gradient of h\ngrad_h = [diff(h,x), diff(h,y)]\nprint(\"The gradient of h(x,y) = 3*x*(y**2) is: \")\nprint(grad_h, \"\\n\")\n\nprint(\"At the point (1,2), the gradient is: \")\n# Use evalf to evaluate a function, with subs to substitute in specific values for x and y\ngrad_h_at_point = [grad_h[0].evalf(subs={x:1, y:2}), grad_h[1].evalf(subs={x:1, y:2})]\nprint(grad_h_at_point, \"\\n\")\n\n# Find the unit vector in the direction 3i + 4j\na = numpy.array([3, 4])\na_magnitude = numpy.linalg.norm(a, ord=2)\nunit_a = a/a_magnitude\n\nprint(\"The unit vector in the direction 3i 
+ 4j is:\")\nprint(unit_a, \"\\n\")\n\n# Dot product to get the directional derivative \n# (i.e. the gradient of h in the direction of the vector unit_a)\nslope = numpy.dot(grad_h_at_point, unit_a)\nprint(\"The slope of h in the direction \", unit_a, \" at (1,2) is: \", slope)\n\n\n```\n\n\n```python\n###### DIVERGENCE ######\n###### Lecture 3, slide 15 ######\n\n# Define the independent variables using Symbol\nx = Symbol('x')\ny = Symbol('y')\n# Define the vector field v(x,y)\nv = [-(x**2), -(y**2)]\n\n# Compute the divergence using diff. \n# NOTE 1: A neater way would be to use SymPy's dot function. \n# However, there doesn't seem to be a way of defining a gradient vector\n# in SymPy without specifying the function we wish to operate on,\n# so we'll compute the divergence the long way.\n# NOTE 2: this is the dot product of the gradient vector and v,\n# which will always result in a scalar.\n# d/dx is applied to the first component of v (i.e. v[0]),\n# d/dy is applied to the second component of v (i.e. v[1])\ndivergence = diff(v[0],x) + diff(v[1],y)\nprint(\"The divergence of the vector field \", v, \" is: \")\nprint(divergence)\n\n```\n\n\n```python\n###### CURL ######\n###### Lecture 3, slide 19 ######\n\n# Define the independent variables using Symbol\nx = Symbol('x')\ny = Symbol('y')\n# Define the vector field v(x,y)\nv = [cos(pi*y), -cos(pi*x)]\n\n# Compute the curl using diff.\n# Remember: the curl of a vector always results in another vector.\n# The first two components of the curl are zero because v has a zero k-component.\ncurl = [0, 0, diff(v[1], x) - diff(v[0], y)]\nprint(\"The curl of the vector field \", v, \" is: \")\nprint(curl)\nprint()\nprint(\"At the point (0, -0.5), the curl is: \")\nprint([0, 0, curl[2].evalf(subs={x:0, y:-0.5})])\n```\n\n\n```python\n###### LAPLACIAN ######\n###### Lecture 3, slide 22 ######\n\n# Define the independent variables using Symbol\nx = Symbol('x')\ny = Symbol('y')\n# Define the scalar field f(x,y)\nf = x*y + 3*exp(x*y)\n\n# In SymPy we can specify the order of the derivative as an optional argument\n# (in this case, it is '2' to get the second derivative).\nprint(\"The Laplacian of \", f, \" is: \")\nlaplacian = diff(f, x, 2) + diff(f, y, 2)\nprint(laplacian)\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "c330f0a2866478f8a0ce6deb2e90f1cf4ae2fe4a", "size": 8848, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "mathematics/mm1/Lecture_3_Vector_Calculus.ipynb", "max_stars_repo_name": "jrper/thebe-test", "max_stars_repo_head_hexsha": "554484b1422204a23fe47da41c6dc596a681340f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "mathematics/mm1/Lecture_3_Vector_Calculus.ipynb", "max_issues_repo_name": "jrper/thebe-test", "max_issues_repo_head_hexsha": "554484b1422204a23fe47da41c6dc596a681340f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "mathematics/mm1/Lecture_3_Vector_Calculus.ipynb", "max_forks_repo_name": "jrper/thebe-test", "max_forks_repo_head_hexsha": "554484b1422204a23fe47da41c6dc596a681340f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.7222222222, "max_line_length": 99, "alphanum_fraction": 0.5215867993, 
"converted": true, "num_tokens": 1748, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9407897475985937, "lm_q2_score": 0.8902942304882371, "lm_q1q2_score": 0.8375796843895128}} {"text": "```python\nfrom sympy import *\ninit_printing(use_unicode=True)\nfrom sympy.codegen.ast import Assignment\n```\n\n\n```python\nm, n = symbols('m n')\nx = symbols('x')\ntheta = Symbol(\"theta\")\n```\n\n\n```python\nm = cos(theta)\nn = sin(theta)\n```\n\n# Definition of rotation matrix $A$\n\n\n```python\nA = Matrix([[m**2, n**2, 2*m*n], [n**2, m**2, -2*m*n], [-m*n, m*n,m**2-n**2]])\nAt = simplify(A.inv())\n\nAt, simplify(A)\n# The inverse of the rotation matrix in voigt notation is not the transpose\n```\n\n\n```python\nsimplify(A)\n```\n\n\n```python\nsimplify(A.subs(theta,pi/4))\n```\n\n# Definition of the Reuter Matrix $R$\n\n\n```python\nR = Matrix([[1, 0, 0], [0, 1, 0], [0, 0,2]])\nRt = Matrix([[1, 0, 0], [0, 1, 0], [0, 0,0.5]])\nR\n```\n\n# transversely isotropic stiffness tensor $C$\n\n$C' = A^{-1}CRAR^{-1}$\n\n\n```python\nC = Matrix( symarray('C', (3,3)) )\nC\n```\n\n\n```python\nC[2,0] = 0\nC[2,1] = 0\nC[0,2] = 0\nC[1,2] = 0\nC\n```\n\n\n```python\nCTrue = simplify(At*C*R*A*Rt)\nCTrue\n```\n\n\n```python\nCTrue[2,2]\n```\n\n\n```python\nCPrime = simplify(At*C*A)\nCPrime[2,2]\n```\n\n\n```python\nprint(ccode(Assignment(x,CTrue[2,2])))\n```\n\n x = C_2_2*pow(cos(2*theta), 2) + 0.125*(1 - cos(4*theta))*(C_0_0 - C_1_0) + 0.125*(1 - cos(4*theta))*(-C_0_1 + C_1_1);\n\n\n\n```python\nCTrue[2,2].subs(theta,0)\n```\n\n\n```python\nsimplify(CTrue[2,2].subs(theta,pi/2))\n```\n\n\n```python\nsimplify(CTrue[2,2].subs(theta,pi/4))\n```\n\n\n```python\n(R*A*Rt).subs(theta,0)\n```\n\n\n```python\nC\n```\n\n\n```python\nA\n```\n\n\n```python\nR*A*Rt\n```\n\n\n```python\nCTrue.subs(theta,pi/2)\n```\n\n\n```python\nv1 = Matrix([1,2,3])\nv1\n```\n\n\n```python\nC[0,0] = 1\nC[1,1] = 3\nC[0,1] = 2\nC[1,0] = 2\n```\n", "meta": {"hexsha": "75988d255094693f420ce83daaa6ca420ee69b72", "size": 154408, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "PythonCodes/Utilities/Sympy/.ipynb_checkpoints/StiffnessRotNPMblending-checkpoint.ipynb", "max_stars_repo_name": "Nicolucas/C-Scripts", "max_stars_repo_head_hexsha": "2608df5c2e635ad16f422877ff440af69f98f960", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-02-25T08:05:13.000Z", "max_stars_repo_stars_event_max_datetime": "2020-02-25T08:05:13.000Z", "max_issues_repo_path": "PythonCodes/Utilities/Sympy/.ipynb_checkpoints/StiffnessRotNPMblending-checkpoint.ipynb", "max_issues_repo_name": "Nicolucas/C-Scripts", "max_issues_repo_head_hexsha": "2608df5c2e635ad16f422877ff440af69f98f960", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "PythonCodes/Utilities/Sympy/.ipynb_checkpoints/StiffnessRotNPMblending-checkpoint.ipynb", "max_forks_repo_name": "Nicolucas/C-Scripts", "max_forks_repo_head_hexsha": "2608df5c2e635ad16f422877ff440af69f98f960", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 197.958974359, "max_line_length": 44996, "alphanum_fraction": 0.8622739754, "converted": true, "num_tokens": 681, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9473810525948927, "lm_q2_score": 0.8840392802184581, "lm_q1q2_score": 0.8375220638285942}} {"text": "```python\nimport jax\nimport jax.numpy as jnp\nfrom IPython.display import display, Latex\n```\n\n## If else condition with `lax`\n\n$$\nf(\\mathbf{x}) = \\sum_{x \\in \\mathbf{x}} \\begin{cases}\n x^2,& \\text{if } x \\gt 5\\\\\n x^3, & \\text{otherwise}\n\\end{cases}\n$$\n\n\n```python\nx = [jnp.array(10.0), jnp.array(2.0)]\n\n@jax.jit\n@jax.value_and_grad\ndef f(x):\n bool_val = jax.tree_map(lambda val: val > 5.0, x)\n ans = jax.tree_map(lambda val, bool: jax.lax.cond(bool, lambda: val**2, lambda: val**3), x, bool_val)\n return jax.tree_util.tree_reduce(lambda a, b: a + b, ans)\n\nvalue, grad = f(x)\n\ndisplay(Latex(f\"$f(\\mathbf{{x}}) = {value}$\"))\nfor idx in range(len(x)):\n display(Latex(f\"$\\\\frac{{df}}{{dx_{idx}}} = {grad[idx]}$\"))\n```\n\n WARNING:absl:No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)\n\n\n\n$f(\\mathbf{x}) = 108.0$\n\n\n\n$\\frac{df}{dx_0} = 20.0$\n\n\n\n$\\frac{df}{dx_1} = 12.0$\n\n\n## Pair-wise distance with `vmap`\n\n\n```python\n# create vour pairwise function\ndef distance(a, b):\n return jnp.linalg.norm(a - b)\n\n\n# map based combinator to operate on all pairs\ndef all_pairs(f):\n f = jax.vmap(f, in_axes=(None, 0))\n f = jax.vmap(f, in_axes=(0, None))\n return f\n\n\n# transform to operate over sets\ndistances = all_pairs(distance)\n\n# Example\nx = jnp.array([1.0, 2.0, 3.0])\ny = jnp.array([3.0, 4.0, 5.0])\ndistances(x, y)\n```\n\n\n\n\n DeviceArray([[2., 3., 4.],\n [1., 2., 3.],\n [0., 1., 2.]], dtype=float32)\n\n\n\n## Compute Hessian with `jax`\n\nLet us consider Linear regression loss function\n\n\\begin{align}\n\\mathcal{L}(\\boldsymbol{\\theta}) &= (\\boldsymbol{y} - X\\boldsymbol{\\theta})^T(\\boldsymbol{y} - X\\boldsymbol{\\theta})\\\\\n\\frac{d\\mathcal{L}}{d\\boldsymbol{\\theta}} &= -2X^T\\boldsymbol{y} + 2X^TX\\boldsymbol{\\theta}\\\\\nH_{\\mathcal{L}}(\\boldsymbol{\\theta}) &= 2X^TX\n\\end{align}\n\n\n```python\ndef loss_function_per_point(theta, x, y):\n y_pred = x.T@theta\n return jnp.square(y_pred - y)\n\ndef loss_function(theta, x, y):\n loss_per_point = jax.vmap(loss_function_per_point, in_axes=(None, 0, 0))(theta, x, y)\n return jnp.sum(loss_per_point)\n\ndef gt_loss(theta, x, y):\n return jnp.sum(jnp.square(x@theta - y))\n\ndef gt_grad(theta, x, y):\n return 2 * (x.T@x@theta - x.T@y)\n\ndef gt_hess(theta, x, y):\n return 2 * x.T@x \n```\n\n### Simulate dataset \n\n\n```python\nkey = jax.random.PRNGKey(0)\nkey, subkey1, subkey2 = jax.random.split(key, num=3)\nN = 100\nD = 11\nx = jax.random.uniform(key, shape=(N, D))\ny = jax.random.uniform(subkey1, shape=(N,))\ntheta = jax.random.uniform(subkey2, shape=(D,))\n```\n\n### Verify loss and gradient values\n\n\n```python\nloss_and_grad_function = jax.value_and_grad(loss_function)\n\nloss_val, grad = loss_and_grad_function(theta, x, y)\n\nassert jnp.allclose(loss_val, gt_loss(theta, x, y))\nassert jnp.allclose(grad, gt_grad(theta, x, y))\n```\n\n### Verify hessian matrix\n\n#### Way-1 \n\n\n```python\nhess = jax.hessian(loss_function)(theta, x, y)\n\nassert jnp.allclose(hess, gt_hess(theta, x, y))\n```\n\n#### Way-2\n\n\n```python\nhess = jax.jacfwd(jax.jacrev(loss_function))(theta, x, y)\n\nassert jnp.allclose(hess, gt_hess(theta, x, y))\n```\n", "meta": {"hexsha": "9dc0b0b98d0012afa2aca9c216d40852dc948226", "size": 8373, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": 
"notebooks/Practical_JAX_Tips.ipynb", "max_stars_repo_name": "susnato/probml-notebooks", "max_stars_repo_head_hexsha": "95a1a1045ed96ce8ca9f59b8664b1356098d427f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notebooks/Practical_JAX_Tips.ipynb", "max_issues_repo_name": "susnato/probml-notebooks", "max_issues_repo_head_hexsha": "95a1a1045ed96ce8ca9f59b8664b1356098d427f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/Practical_JAX_Tips.ipynb", "max_forks_repo_name": "susnato/probml-notebooks", "max_forks_repo_head_hexsha": "95a1a1045ed96ce8ca9f59b8664b1356098d427f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 26.3301886792, "max_line_length": 142, "alphanum_fraction": 0.4263704765, "converted": true, "num_tokens": 1090, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9230391558356, "lm_q2_score": 0.9073122113355092, "lm_q1q2_score": 0.83748469763046}} {"text": "# What's up with polynomial regression?\n\nWhy do we have to use this `PolynomialFeatures` thing from scikit? What does it do?\n\nLet's imagine we have some data from which we know the true function we want our model to learn. This function is:\n\\begin{align}\ny = 3x + 1x^2 -2\n\\end{align}\n\n\n\n\n```python\nimport numpy\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.preprocessing import PolynomialFeatures \n\n\nx_values = numpy.array([[-2], [-1], [0], [1], [2]])\ny_values = numpy.array([[-4], [-4], [-2], [2], [8]])\n```\n\n\n```python\nx_values\n```\n\n\n\n\n array([[-2],\n [-1],\n [ 0],\n [ 1],\n [ 2]])\n\n\n\nPolynomial regression extends the linear model by adding extra predictors, obtained by raising each of the original predictors to a power. Our original predictors were:\n```\n[-2, -1, 0, 1, 2]\n```\n\n\n```python\ntransformer = PolynomialFeatures(degree=2)\nx_values_transformed = transformer.fit_transform(x_values)\n```\n\n\n```python\nx_values_transformed\n```\n\n\n\n\n array([[ 1., -2., 4.],\n [ 1., -1., 1.],\n [ 1., 0., 0.],\n [ 1., 1., 1.],\n [ 1., 2., 4.]])\n\n\n\nWhat the transformer has done for each value of x is to expand it from a single number into an array for three numbers:\n\n1. The bias (always 1.0, the feature in which all polynomial powers are zero )\n2. The original value\n3. The value, squared\n\nIt has *extended the linear model* by adding extra predictors. \n\nWe can now hand off the transformed values off to the linear regression model's ``fit`` method and it will understand that it needs to fit a second-degree polynomial instead of a straight line.\n\n\n\n```python\nmodel = LinearRegression()\nmodel.fit(x_values_transformed,y_values)\n```\n\n\n\n\n LinearRegression()\n\n\n\nNow we can predict. 
In the same way we transformed out `x` inputs with the `PolynomialFeatures` class before training the model, we will need to transform any `x` values for which we want to predict a `y`:\n\n\n```python\n# Values of x for which we want to predict a y\nx_pred = [[3], [-3]]\n\nx_pred_transformed = transformer.fit_transform(x_pred)\n\nmodel.predict(x_pred_transformed)\n```\n\n\n\n\n array([[16.],\n [-2.]])\n\n\n\nAre these correct?\n\n\\begin{align}\ny = (3 * 3) + 3^2 -2 \\\\\ny = (3 * -3) + -3^2 -2\n\\end{align}\n\nYes, they are! Our model correctly learned the function! If you still aren't convinced from just two examples that the model has correctly learned the true function:\n\n\n\n```python\nintercept = model.intercept_[0]\nslope = model.coef_[0]\nprint(f\"Intercept is: {intercept}\")\nprint(f\"Slope is: {slope}\")\n```\n\n Intercept is: -2.000000000000004\n Slope is: [0. 3. 1.]\n\n\nAnd our true function, again:\n\\begin{align}\ny = 3x + 1x^2 -2\n\\end{align}\n\n\n```python\n\n```\n", "meta": {"hexsha": "44b4249dfb1cb69b0751cbd16b3b094d7f0e845b", "size": 5803, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "cmsc_210/examples/lecture_25/notebooks/PolynomialFeatureTransforms.ipynb", "max_stars_repo_name": "mazelife/cmsc-210", "max_stars_repo_head_hexsha": "dbaa1604ef49bcfe5a70e09c17fbd243a8b80220", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "cmsc_210/examples/lecture_25/notebooks/PolynomialFeatureTransforms.ipynb", "max_issues_repo_name": "mazelife/cmsc-210", "max_issues_repo_head_hexsha": "dbaa1604ef49bcfe5a70e09c17fbd243a8b80220", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 5, "max_issues_repo_issues_event_min_datetime": "2022-01-16T23:30:12.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-30T23:03:21.000Z", "max_forks_repo_path": "cmsc_210/examples/lecture_25/notebooks/PolynomialFeatureTransforms.ipynb", "max_forks_repo_name": "mazelife/cmsc-210", "max_forks_repo_head_hexsha": "dbaa1604ef49bcfe5a70e09c17fbd243a8b80220", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 23.5894308943, "max_line_length": 211, "alphanum_fraction": 0.5135274858, "converted": true, "num_tokens": 757, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.952574122783325, "lm_q2_score": 0.8791467564270271, "lm_q1q2_score": 0.8374524503012808}} {"text": "# 2022-02-14 QR Factorization\n\n* Sahitya's office hours: Friday 11-12:30\n\n## Last time\n\n* Revisit projections, rotations, and reflections\n* Constructing orthogonal bases\n\n## Today\n\n* Gram-Schmidt process\n* QR factorization\n* Stability and ill conditioning\n* Intro to performance modeling\n\n\n```julia\nusing LinearAlgebra\nusing Plots\nusing Polynomials\ndefault(linewidth=4, legendfontsize=12)\n\nfunction vander(x, k=nothing)\n if isnothing(k)\n k = length(x)\n end\n m = length(x)\n V = ones(m, k)\n for j in 2:k\n V[:, j] = V[:, j-1] .* x\n end\n V\nend\n```\n\n\n\n\n vander (generic function with 2 methods)\n\n\n\n# Gram-Schmidt orthogonalization\n\nSuppose we're given some vectors and want to find an orthogonal basis for their span.\n\n$$ \\Bigg[ a_1 \\Bigg| a_2 \\Bigg] = \\Bigg[ q_1 \\Bigg| q_2 \\Bigg] \\begin{bmatrix} r_{11} & r_{12} \\\\ 0 & r_{22} \\end{bmatrix} $$\n\n# A naive algorithm\n\n\n```julia\nfunction gram_schmidt_naive(A)\n m, n = size(A)\n Q = zeros(m, n)\n R = zeros(n, n)\n for j in 1:n\n v = A[:,j]\n for k in 1:j-1\n r = Q[:,k]' * v\n v -= Q[:,k] * r\n R[k,j] = r\n end\n R[j,j] = norm(v)\n Q[:,j] = v / R[j,j]\n end\n Q, R\nend\n```\n\n\n\n\n gram_schmidt_naive (generic function with 1 method)\n\n\n\n\n```julia\nx = LinRange(-1, 1, 10)\nA = vander(x, 4)\nQ, R = gram_schmidt_naive(A)\n@show norm(Q' * Q - I)\n@show norm(Q * R - A);\n```\n\n norm(Q' * Q - I) = 3.684346652564232e-16\n norm(Q * R - A) = 1.6779947319649215e-16\n\n\n# What do orthogonal polynomials look like?\n\n\n```julia\nx = LinRange(-1, 1, 200)\nA = vander(x, 6)\nQ, R = gram_schmidt_naive(A)\nplot(x, Q)\n```\n\n\n\n\n \n\n \n\n\n\nWhat happens if we use more than 50 values of $x$? 
Is there a continuous limit?\n\n# Theorem\n## Every full-rank $m\\times n$ matrix ($m \\ge n$) has a unique reduced $Q R$ factorization with $R_{j,j} > 0$\n\nThe algorithm we're using generates this matrix due to the line:\n```julia\n R[j,j] = norm(v)\n```\n\n# Solving equations using $QR = A$\n\nIf $A x = b$ then $Rx = Q^T b$.\n\n\n```julia\nx1 = [-0.9, 0.1, 0.5, 0.8] # points where we know values\ny1 = [1, 2.4, -0.2, 1.3]\nscatter(x1, y1)\nA = vander(x1, 3)\nQ, R = gram_schmidt_naive(A)\np = R \\ (Q' * y1)\np = A \\ y1\nplot!(x, vander(x, 3) * p)\n```\n\n\n\n\n \n\n \n\n\n\n# How accurate is it?\n\n\n```julia\nm = 30\nx = LinRange(-1, 1, m)\nA = vander(x, m)\nQ, R = gram_schmidt_naive(A)\n@show norm(Q' * Q - I)\n@show norm(Q * R - A)\n```\n\n norm(Q' * Q - I) = 0.00043830040698919693\n norm(Q * R - A) = 1.25091691161899e-15\n\n\n\n\n\n 1.25091691161899e-15\n\n\n\n# A variant with more parallelism\n\n\\begin{align}\n(I - q_2 q_2^T) (I - q_1 q_1^T) v &= (I - q_1 q_1^T - q_2 q_2^T + q_2 q_2^T q_1 q_1^T) v \\\\\n&= \\Bigg( I - \\Big[ q_1 \\Big| q_2 \\Big] \\begin{bmatrix} q_1^T \\\\ q_2^T \\end{bmatrix} \\Bigg) v\n\\end{align}\n\n\n```julia\nfunction gram_schmidt_classical(A)\n m, n = size(A)\n Q = zeros(m, n)\n R = zeros(n, n)\n for j in 1:n\n v = A[:,j]\n R[1:j-1,j] = Q[:,1:j-1]' * v\n v -= Q[:,1:j-1] * R[1:j-1,j]\n R[j,j] = norm(v)\n Q[:,j] = v / norm(v)\n end\n Q, R\nend\n```\n\n\n\n\n gram_schmidt_classical (generic function with 1 method)\n\n\n\n\n```julia\nm = 10\nx = LinRange(-1, 1, m)\nA = vander(x, m)\nQ, R = gram_schmidt_classical(A)\n@show norm(Q' * Q - I)\n@show norm(Q * R - A)\n```\n\n norm(Q' * Q - I) = 6.339875256299394e-11\n norm(Q * R - A) = 1.217027619812654e-16\n\n\n\n\n\n 1.217027619812654e-16\n\n\n\n# Cost of Gram-Schmidt?\n\n* We'll count flops (addition, multiplication, division*)\n* Inner product $\\sum_{i=1}^m x_i y_i$?\n* Vector \"axpy\": $y_i = a x_i + y_i$, $i \\in [1, 2, \\dotsc, m]$.\n* Look at the inner loop:\n```julia\nfor k in 1:j-1\n r = Q[:,k]' * v\n v -= Q[:,k] * r\n R[k,j] = r\nend\n```\n\n\n\n# Counting flops is a bad model\n\n\n* We load a single entry (8 bytes) and do 2 flops (add + multiply). 
That's an **arithmetic intensity** of 0.25 flops/byte.\n* Current hardware can do about 10 flops per byte, so our best algorithms will run at about 2% efficiency.\n* Need to focus on memory bandwidth, not flops.\n\n## Dense matrix-matrix mulitply\n\n* [BLIS project](https://github.com/flame/blis/)\n* [Analytic modeling](https://www.cs.utexas.edu/users/flame/pubs/TOMS-BLIS-Analytical.pdf)\n\n\n", "meta": {"hexsha": "ede2cbbf07f13daa074465c0f3202e0f01274edb", "size": 87690, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "slides/2022-02-14-qr-factorization.ipynb", "max_stars_repo_name": "cu-numcomp/spring22", "max_stars_repo_head_hexsha": "f4c1f9287bff2c10645809e65c21829064493a66", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "slides/2022-02-14-qr-factorization.ipynb", "max_issues_repo_name": "cu-numcomp/spring22", "max_issues_repo_head_hexsha": "f4c1f9287bff2c10645809e65c21829064493a66", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "slides/2022-02-14-qr-factorization.ipynb", "max_forks_repo_name": "cu-numcomp/spring22", "max_forks_repo_head_hexsha": "f4c1f9287bff2c10645809e65c21829064493a66", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2022-02-09T21:05:12.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-11T20:34:46.000Z", "avg_line_length": 100.4467353952, "max_line_length": 7507, "alphanum_fraction": 0.6533698255, "converted": true, "num_tokens": 1602, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9390248157222396, "lm_q2_score": 0.8918110468756548, "lm_q1q2_score": 0.8374327039514694}} {"text": "# Hopf Bifurcation: The Emergence of Limit-cycle Dynamics\n\n*Cem \u00d6zen*, May 2017.\n\nA *Hopf bifurcation* is a critical point in which a periodic orbit appears or disappears through a local change in the stability of a fixed point in a dynamical system as one of the system parameters is varied. Hopf bifurcations occur in many of the well-known dynamical systems such as the Lotka-Volterra model, the Lorenz model, the Selkov model of glycolysis, the Belousov-Zhabotinsky reaction model, and the Hodgkin-Huxley model for nerve membrane.\n\nIn this notebook, I will consider a system of chemical reactions known by the name *Brusselator* in literature (see: https://en.wikipedia.org/wiki/Brusselator for more information) as a model for Hopf bifurcations. The Brusselator reactions are given by\n\n$A \\longrightarrow X$
    \n$2X + Y\\longrightarrow 3X$
    \n$B + X \\longrightarrow Y + D$
    \n$X \\longrightarrow E$
    \n\nFor simplicity, we will assume that the rate constants of all these reactions are unity (i.e., $k=1$ for every step). Furthermore, let's assume that the reactant concentrations $A$ and $B$ are held so large that they remain effectively constant. Therefore, only the $X$ and $Y$ concentrations are dynamical. \n\nThe rate equations for $X$ and $Y$ are then given by
    \n\n\n$$\n\\begin{eqnarray}\n\\dot{X} & = & A + X^2Y - BX - X, \\\\\n\\dot{Y} & = & BX - X^2Y\n\\end{eqnarray}\n$$\n\nThe X-nullcline and the Y-nullcline are given by the conditions of $0 = A + X^2Y - BX - X$ and $0 = BX - X^2Y$ respectively. From these equations, we obtain:\n\n$$\n\\begin{eqnarray}\nY(X) & = & \\frac{-A + X(B+1)}{X^2}, & \\quad \\textrm{(X-nullcline)} \\\\\nY(X) & = & \\frac{B}{X}, & \\quad \\textrm{(Y-nullcline)} \n\\end{eqnarray}\n$$\n\nIn this notebook, I will also demonstrate how one can perform symbolical computations using Python's `SymPy` library. We also need extra Jupyter Notebook functionality to render nice display of the resulting equations. (Notice that we are using LaTex in typesetting this document particularly for the purpose of producing nice looking equations). \n\n\n```python\nimport numpy as np \nfrom numpy.linalg import eig\nfrom scipy import integrate\nimport sympy\nfrom IPython.display import display, Math, Latex\nimport matplotlib.pyplot as plt\nsympy.init_printing(use_latex='mathjax')\n%matplotlib inline\n```\n\nLet's obtain the nullcline equations using `SymPy`: \n\n\n```python\nX, Y, A, B = sympy.symbols('X Y A B') # you need to introduce the sysmbols first\n\n# let's get the X-nullcline as a function of X:\nsympy.solve(sympy.Eq(A + X**2 * Y - B * X - X, 0), Y)\n```\n\n\n\n\n$$\\left [ \\frac{1}{X^{2}} \\left(- A + X \\left(B + 1\\right)\\right)\\right ]$$\n\n\n\n\n```python\n# let's get the Y-nullcline as a function of X:\nsympy.solve(sympy.Eq(B * X - X**2 * Y, 0), Y)\n```\n\n\n\n\n$$\\left [ \\frac{B}{X}\\right ]$$\n\n\n\nNow let's find the fixed points ($X^*, Y^*$) of this 2-D system (there is only one, actually). The fixed point is given by the simultaneous solution of the X-nullcline and Y-nullcline equations, therefore \n\n$$ (X^*, Y^*) = (A, \\frac{B}{A}) $$\n\nFor the sake of using `SymPy`, let's obtain this solution once again:\n\n\n```python\n# Solve the system of equations defined by the X-nullcline and Y-nullcline with respect to X and Y: \nsympy.solve([A + X**2 * Y - B * X - X, B * X - X**2 * Y], [X, Y])\n```\n\n\n\n\n$$\\left [ \\left ( A, \\quad \\frac{B}{A}\\right )\\right ]$$\n\n\n\nNow, a bifurcation analysis of the Brusselator model requires us to keep track of the local stability of its fixed point. This can be done, according to the *linearized stability analysis* by evaluating the Jacobian matrix at the fixed point.
    \n\n\nThe Jacobian matrix at the fixed point is given by:\n\n$$\n\\begin{eqnarray}\nJ & = & \\left\\vert\\matrix{{\\partial f \\over \\partial x} & {\\partial f\\over \\partial y} \\cr \n {\\partial g \\over \\partial x} & {\\partial g\\over \\partial y}\n }\\right\\vert_{(X^*, Y^*)} \\\\\n & = & \\left\\vert\\matrix{ -B + 2XY - 1 & X^2 \\cr \n B - 2XY & -X^2\n }\\right\\vert_{(X^*, Y^*)} \\\\\n & = & \\left\\vert\\matrix{ B - 1 & A^2 \\cr \n -B & -A^2\n }\\right\\vert\n\\end{eqnarray}\n$$\n\nThis result can also be obtained very easily using `SymPy`:\n\n\n```python\n# define the Brusselator dynamical system as a SymPy matrix\nbrusselator = sympy.Matrix([A + X**2 * Y - B * X - X, B * X - X**2 * Y])\n```\n\n\n```python\n# Jacobian matrix with respect to X and Y\nJ = brusselator.jacobian([X, Y])\nJ\n```\n\n\n\n\n$$\\left[\\begin{matrix}- B + 2 X Y - 1 & X^{2}\\\\B - 2 X Y & - X^{2}\\end{matrix}\\right]$$\n\n\n\n\n```python\n# Jacobian matrix evaluated at the coordinates of the fixed point\nJ_at_fp = J.subs({X:A, Y:B/A}) # substitute X with A and Y with B/A\nJ_at_fp\n```\n\n\n\n\n$$\\left[\\begin{matrix}B - 1 & A^{2}\\\\- B & - A^{2}\\end{matrix}\\right]$$\n\n\n\nA limit-cycle can emerge in a 2-dimensional, attractive dynamical system if the fixed point of the system goes unstable. In this case, trajectories must be pulled by a limit cycle. (According to the Poincare-Bendixson theorem, a 2-dimensional system cannot have strange attractors). In this case, the Hopf bifurcation is called a *supercritical Hopf bifurcation*, because limit cycle is stable. \n\nIn the following, we will see how the stable fixed point (spiral) of the Brusselator goes unstable, giving rise to a limit cycle in turn. Conditions for the stability are determined by the trace and the determinant of the Jacobian. So let's evaluate them:\n\n\n```python\nDelta = J_at_fp.det() # determinant of the Jacobian\nDelta.simplify()\n```\n\n\n\n\n$$A^{2}$$\n\n\n\n\n```python\ntau = J_at_fp.trace() # trace of the Jacobian\ntau\n```\n\n\n\n\n$$- A^{2} + B - 1$$\n\n\n\nTo have an unstable spiral we need:\n\n$$\n\\begin{eqnarray}\n\\tau & > & 0 \\quad \\Rightarrow \\quad & B > A^2 + 1 \\quad \\textrm{required} \\\\\n\\Delta & > & 0 \\quad {} \\quad & \\textrm{automatically satisfied} \\\\\n\\tau^2 & < & 4 \\Delta \\quad {} \\quad & \\textrm{automatically satisfied}\n\\end{eqnarray}\n$$\n\nThe second and third conditions were satisfied because of the first condition, automatically. Therefore we need to have:\n\n$$ B > A^2 + 1 $$ for limit cycles.\n\n## Birth of A Limit Cycle: Hopf Bifurcation\n### Numerical Simulation of the Brusselator System\nIn the following, I perform a numerical simulation of the (supercritical) Hopf bifurcation in the Brusselator system by varying the parameter $B$ while the value of $A=1$. \n\n\n```python\n# Brusselator System:\ndef dX_dt(A, B, X, t):\n x, y = X[0], X[1]\n return np.array([A + x**2 * y - B * x -x, \n B * x - x**2 * y])\n\nT = 50 * np.pi # simulation time\ndt = 0.01 # integration time step\n\n# time steps to be used in integration of the Brusselator system\nt=np.arange(0, T, dt) \n\n# create a canvas and 3 subplots..we will use each one for different choice of A and B paraeters\nfig, ax = plt.subplots(1, 3)\nfig.set_size_inches(13,5)\n\ndef plotter(A, B, ax):\n \"\"\"\n This function draws a phase portrait by assigning a vector characterizing how the concentrations\n change at a given value of X and Y. 
It also draws a couple of example trajectories.\n \"\"\"\n # Drow direction fields using matplotlib 's quiver function, similar to what we did\n # in class but qualitatively\n xmin, xmax = 0, 5 # min and max values of x axis in the plot\n ymin, ymax = 0, 5 # min and max values of y axis in the plot\n x = np.linspace(xmin, xmax, 10) # divide x axis to intervals\n y = np.linspace(ymin, ymax, 10) # divide y axis to intervals\n X1 , Y1 = np.meshgrid(x, y) # from these intervals create a grid\n DX1, DY1 = dX_dt(A, B, [X1, Y1], t) # compute rate of change of the concentrations on grid points\n M = (np.hypot(DX1, DY1)) # norm of the rate of changes \n M[ M == 0] = 1. # prevention against divisions by zero\n DX1 /= M # we normalize the direction field vector (each has unit length now)\n DY1 /= M # we normalize the direction field vector (each has unit length now)\n Q = ax.quiver(X1, Y1, DX1, DY1, M, pivot='mid', cmap=plt.cm.jet)\n\n num_traj = 5 # number of trajectories \n\n # choose several initial points (x_i, y_j), for i and j chosen as in linspace calls below\n X0 = np.asarray(list(zip(np.linspace(xmin, xmax, num_traj), np.linspace(ymin, ymax, num_traj)))) \n vcolors = plt.cm.jet_r(np.linspace(0., 1., num_traj)) # colors for each trajectory\n\n # integrate the Brusellator ODE's using all initial points to produce corresponding trajectories\n X = np.asarray([integrate.odeint(lambda x, t: dX_dt(A, B, x, t), X0i,t)\n for X0i in X0])\n # plot the trajectories we obtained above \n for i in range(num_traj):\n x, y = X[i, :, :].T # x and y histories for trajectory i\n ax.plot(x, y, '-', c=vcolors[i], lw=2)\n # set limits, put labels etc.. \n ax.set_xlim(xmin=xmin, xmax=xmax)\n ax.set_ylim(ymin=ymin, ymax=ymax)\n ax.set_xlabel(\"X\", fontsize = 20)\n ax.set_ylabel(\"Y\", fontsize = 20)\n ax.annotate(\"A={}, B={}\".format(A, B), xy = (0.4, 0.9), xycoords = 'axes fraction', fontsize = 20, color = \"k\")\n\n \n# Now let's prepare plots for the following choices of A and B: \nplotter(A=1, B=1, ax=ax[0])\nplotter(A=1, B=2, ax=ax[1])\nplotter(A=1, B=3, ax=ax[2])\n```\n\nAbove you see how a limit cycle can be created in a dynamical system, as one of the system parameters is varied.\nHere we have kept $A=1$ but varied $B$ from 1 to 2.5. Note that $B=2$ is the borderline value, marking the change in the stability of the fixed point. For $B<1$ the fixed point is stable but as we cross the value $B=2$, it changes its character to unstable and a limit cycle is born. This phenomenon is an example of *Hopf Bifurcation*.\n\nOn the leftmost panel we have a stable spiral. Technically, this means that the Jacobian at the fixed point has two complex eigenvalues (a complex conjugate pair). The fact that the eigenvalues are complex is responsible for the spiralling effect. In stable spirals, the real part of the eigenvalues are negative, which is why these spiralling solutions decay, that is, trajectories nearby fall on to the fixed point. As the bifurcation parameter (here $B$) varies, the real part of the complex eigenvalues increase, reach zero at certain value of $B$, and keep growing now on the positive side. If the real part of the eigenvalues are positive, the fixed point is an unstable spiral; trajectories nearby are pushed out of the fixed point (see the rightmost panel and plot below). Since this 2-D dynamical system is attractive, by the Poincare-Bendixson theorem, the emergence of the unstable spiral accompanies the birth of a limit-cycle. 
Notice that the panel in the middle is the borderline case between the stable and unstable spirals: in this case the real part of the eigenvalues is exactly zero (see plots below); the linear stabilization theory falsely predicts a neutral oscillation (i.e a center) at $B=2$---due to purely imaginary eigenvalues. However, the fixed point is still a stable spiral then.\n\n### Eigenvalues of the Jacobian \n\n\n```python\n# Eigenvalues of the Jacobian at A=1, B=1 (fixed point is stable spiral)\nJ_numeric = np.asarray(J_at_fp.evalf(subs={A:1, B:1})).astype(np.float64)\nw, _ = eig(J_numeric)\nw\n```\n\n\n\n\n array([-0.5+0.8660254j, -0.5-0.8660254j])\n\n\n\n\n```python\n# Eigenvalues of the Jacobian at A=1, B=3 (fixed point is unstable spiral)\nJ_numeric = np.asarray(J_at_fp.evalf(subs={A:1, B:3})).astype(np.float64)\nw, _ = eig(J_numeric)\nw\n```\n\n\n\n\n array([ 0.5+0.8660254j, 0.5-0.8660254j])\n\n\n\nLet's prepare plots showing how the real and imaginary parts of the eigenvalues change as $B$ is varied.\n\n\n```python\nfrom numpy.linalg import eig\n\na = 1\neigen_real, eigen_imag = [], []\nB_vals = np.linspace(1, 3, 20)\nfor b in B_vals: \n J_numeric = np.asarray(J_at_fp.evalf(subs={A:a, B:b})).astype(np.float64)\n w, _ = eig(J_numeric)\n eigen_real.append(w[0].real)\n eigen_imag.append(abs(w[0].imag))\neigen_real = np.asanyarray(eigen_real)\neigen_imag = np.asarray(eigen_imag)\n```\n\n\n```python\nfig, ax = plt.subplots(1, 2)\nfig.set_size_inches(10,5)\nfig.subplots_adjust(wspace=0.5)\nax[0].axhline(y=0, c=\"k\", ls=\"dashed\")\nax[0].plot(B_vals, eigen_real)\nax[0].set_ylabel(r\"$\\mathfrak{Re}(\\lambda)$\", fontsize = 20)\nax[0].set_xlabel(r\"$B$\", fontsize = 20)\nax[1].set_ylabel(r\"$|\\mathfrak{Im}(\\lambda)|$\", fontsize = 20)\nax[1].set_xlabel(r\"$B$\", fontsize = 20)\nax[1].plot(B_vals, eigen_imag)\n```\n\nHopf bifurcation, is only one type of bifurcation, albeit a very important one for physical and biological systems. There are other types of bifurcation in which one can create or destroy fixed points or alter their properties in different ways than a Hopf bifurcations does. If you are curious, I suggest you to perform your own numerical experiements by playing with the values of $A$, $B$\nor both.\n\n### An Animation of the Hopf Bifurcation \n\n\n```python\nfrom matplotlib import animation, rc\nfrom IPython.display import HTML\n\n# Brusselator System:\ndef dX_dt(A, B, X, t):\n x, y = X[0], X[1]\n return np.array([A + x**2 * y - B * x -x, B * x - x**2 * y])\n\nT = 50 * np.pi # simulation time\ndt = 0.01 # integration time step\n\n# time steps to be used in integration of the Brusselator system\nt=np.arange(0, T, dt) \n\n\nnum_traj = 5 # number of trajectories\nxmin, xmax = 0, 5 # min and max values of x axis in the plot\nymin, ymax = 0, 5 # min and max values of y axis in the plot\nA = 1. 
# we will keep A parameter constant \n\n# vary B parameter \nBmin, Bmax, numB = 1., 3., 100 # min, max, number of steps for varying B\nBvals = np.linspace(Bmin, Bmax, numB)\n\n# set up the figure, the axis, and the plot element we want to animate\nfig = plt.figure()\nfig.set_size_inches(8,8)\nax = plt.axes(xlim=(xmin, xmax), ylim=(ymin, ymax))\nax.set_ylabel(\"Y\", fontsize = 20)\nax.set_xlabel(\"X\", fontsize = 20)\n\n \n# choose a set of initial points for our trajectories (in each frame we will use the same set)\nX0 = list(zip(np.linspace(xmin, xmax, num_traj), np.linspace(ymin, ymax, num_traj))) \n\n# choose a color set for our trajectories\nvcolors = plt.cm.jet_r(np.linspace(0., 1., num_traj)) \n\n# prepare the mesh grid \nx = np.linspace(xmin, xmax, 15) # divide x axis to intervals\ny = np.linspace(ymin, ymax, 15) # divide y axis to intervals\nX1 , Y1 = np.meshgrid(x, y) # from these intervals create a grid\n \n# set up the lines, the quiver and the text object\nlines = [ax.plot([], [], [], '-', c=c, lw=2)[0] for c in vcolors]\nQ = ax.quiver(X1, Y1, [], [], [], pivot='mid', cmap=plt.cm.jet)\ntext = ax.text(0.02, 0.95, '', fontsize=20, transform=ax.transAxes)\n\n# initialization function: plot the background of each frame. Needs to return each object to be updated\ndef init():\n for line in lines:\n line.set_data([], [])\n Q.set_UVC([], [], [])\n text.set_text(\"\")\n return Q, lines, text\n\n# animation function. This is called sequentially\ndef animate(i):\n B = Bvals[i]\n DX1, DY1 = dX_dt(A, B, [X1, Y1], t) # compute rate of change of the concentrations on grid points\n M = (np.hypot(DX1, DY1)) # norm of the rate of changes \n M[ M == 0] = 1. # prevention against divisions by zero\n DX1 /= M # we normalize the direction field vector (each has unit length now)\n DY1 /= M # we normalize the direction field vector (each has unit length now)\n Q.set_UVC(DX1, DY1, M)\n # integrate the Brusellator ODE's for the set of trajectories, store them in X\n for line, X0i in zip(lines, X0):\n X = integrate.odeint(lambda x, t: dX_dt(A, B, x, t), X0i,t) \n x, y = X.T # get x and y for current trajectory\n line.set_data(x, y) \n text.set_text(\"A={:.2f}, B={:.2f}\".format(A, B))\n return Q, lines, text \n \n# call the animator. blit=True means only re-draw the parts that have changed.\nanim = animation.FuncAnimation(fig, animate, init_func=init, frames=100, interval=30, blit=False)\n# instantiate the animator.\n#anim = animation.FuncAnimation(fig, animate, init_func=init, frames=1000, interval=200, blit=True)\n#HTML(anim.to_html5_video())\nrc('animation', html='html5')\nplt.close()\nanim\n```\n\n\n\n\n\n\n\n\nIn the animation above, we see how the direction field gets modified as $B$ is varied. Also shown several trajectories that are initalized at various points (I have chosen them on the $Y=X$ line here).\n\n## Notes:\n\nShould you encounter difficulty in running the embedded animation, try to launch Jupter Notebook using the command:
    \n`jupyter notebook --NotebookApp.iopub_data_rate_limit=10000000000`\n\n## References:\n\n1) Strogatz, S.H (2015). *Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering, Second Edition*, Boulder, USA: Westview Press.
    \n2) https://en.wikipedia.org/wiki/Brusselator\n", "meta": {"hexsha": "ea2d14f88f0858fe87c6488a202633885c6ef02c", "size": 523717, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": ".ipynb_checkpoints/brusselator_hopf-checkpoint.ipynb", "max_stars_repo_name": "cemozen/pattern_formation_in_reaction-diffusion_systems", "max_stars_repo_head_hexsha": "7788c2dec71bcbe47758cabdc9816d99f88df589", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-04-04T02:01:50.000Z", "max_stars_repo_stars_event_max_datetime": "2021-04-04T02:01:50.000Z", "max_issues_repo_path": ".ipynb_checkpoints/brusselator_hopf-checkpoint.ipynb", "max_issues_repo_name": "cemozen/pattern_formation_in_reaction-diffusion_systems", "max_issues_repo_head_hexsha": "7788c2dec71bcbe47758cabdc9816d99f88df589", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": ".ipynb_checkpoints/brusselator_hopf-checkpoint.ipynb", "max_forks_repo_name": "cemozen/pattern_formation_in_reaction-diffusion_systems", "max_forks_repo_head_hexsha": "7788c2dec71bcbe47758cabdc9816d99f88df589", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 107.6499486125, "max_line_length": 100204, "alphanum_fraction": 0.8510722394, "converted": true, "num_tokens": 4955, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9390248191350352, "lm_q2_score": 0.8918110353738529, "lm_q1q2_score": 0.8374326961945607}} {"text": "# 11 ODE integrators: Verlet\n\n\n```python\nfrom importlib import reload\n\nimport numpy as np\nimport matplotlib\nimport matplotlib.pyplot as plt\n%matplotlib inline\nmatplotlib.style.use('ggplot')\n\nimport integrators2\n```\n\n\n```python\nreload(integrators2)\n```\n\n\n\n\n \n\n\n\n## Velocity Verlet\n\nUse expansion *forward* and *backward* (!) in time (Hamiltons (i.e. 
Newton without friction) equations are time symmetric)\n\n\\begin{align}\nr(t + \\Delta r) &\\approx r(t) + \\Delta t\\, v(t) + \\frac{1}{2m} \\Delta t^2 F(t)\\\\\nr(t) &\\approx r(t + \\Delta t) - \\Delta t\\, v(t + \\Delta t) + \\frac{1}{2m} \\Delta t^2 F(t+\\Delta t)\n\\end{align}\n\nSolve for $v$:\n\\begin{align}\nv(t+\\Delta t) &\\approx v(t) + \\frac{1}{2m} \\Delta t \\big(F(t) + F(t+\\Delta t)\\big)\n\\end{align}\n\nComplete **Velocity Verlet** integrator consists of the first and third equation.\n\nIn practice, split into three steps (calculate the velocity at the half time step):\n\\begin{align}\nv(t+\\frac{\\Delta t}{2}) &= v(t) + \\frac{\\Delta t}{2} \\frac{F(t)}{m} \\\\\nr(t + \\Delta r) &= r(t) + \\Delta t\\, v(t+\\frac{\\Delta t}{2})\\\\\nv(t+\\Delta t) &= v(t+\\frac{\\Delta t}{2}) + \\frac{\\Delta t}{2} \\frac{F(t+\\Delta t)}{m}\n\\end{align}\n\nWhen writing production-level code, remember to re-use $F(t+\\Delta t)$ als the \"new\" starting $F(t)$ in the next iteration (and don't recompute).\n\n### Integration of planetary motion \nGravitational potential energy:\n$$\nU(r) = -\\frac{GMm}{r}\n$$\nwith $r$ the distance between the two masses $m$ and $M$.\n\n#### Central forces\n$$\nU(\\mathbf{r}) = f(r) = f(\\sqrt{\\mathbf{r}\\cdot\\mathbf{r}})\\\\\n\\mathbf{F} = -\\nabla U(\\mathbf{r}) = -\\frac{\\partial f(r)}{\\partial r} \\, \\frac{\\mathbf{r}}{r} \n$$\n\n#### Force of gravity\n\\begin{align}\n\\mathbf{F} &= -\\frac{G m M}{r^2} \\hat{\\mathbf{r}}\\\\\n\\hat{\\mathbf{r}} &= \\frac{1}{\\sqrt{x^2 + y^2}} \\left(\\begin{array}{c} x \\\\ y \\end{array}\\right)\n\\end{align}\n\n#### Integrate simple planetary orbits \nSet $M = 1$ (one solar mass) and $m = 3.003467\u00d710^{-6}$ (one Earth mass in solar masses) and try initial conditions\n\n\\begin{alignat}{1}\nx(0) &= 1,\\quad y(0) &= 0\\\\\nv_x(0) &= 0,\\quad v_y(0) &= 6.179\n\\end{alignat}\n\nNote that we use the following units:\n* length in astronomical units (1 AU = 149,597,870,700 m )\n* mass in solar masses (1 $M_\u2609 = 1.988435\u00d710^{30}$ kg)\n* time in years (1 year = 365.25 days, 1 day = 86400 seconds)\n\nIn these units, the gravitational constant is $G = 4\\pi^2$ (in SI units $G = 6.674\u00d710^{-11}\\, \\text{N}\\cdot\\text{m}^2\\cdot\\text{kg}^{-2}$).\n\n\n```python\nM_earth = 3.003467e-6\nM_sun = 1.0\n```\n\n\n```python\nG_grav = 4*np.pi**2\n\ndef F_gravity(r, m=M_earth, M=M_sun):\n rr = np.sum(r*r)\n rhat = r/np.sqrt(rr)\n return -G_grav*m*M/rr * rhat\n\ndef U_gravity(r, m=M_earth, M=M_sun):\n return -G_grav*m*M/np.sqrt(np.sum(r*r))\n```\n\nLet's now integrate the equations of motions under gravity with the **Velocity Verlet** algorithm:\n\n\n```python\ndef planet_orbit(r0=np.array([1, 0]), v0=np.array([0, 6.179]), mass=M_earth, dt=0.001, t_max=1):\n \"\"\"2D planetary motion with velocity verlet\"\"\"\n dim = len(r0)\n assert len(v0) == dim\n\n nsteps = int(t_max/dt)\n\n r = np.zeros((nsteps, dim))\n v = np.zeros_like(r)\n\n r[0, :] = r0\n v[0, :] = v0\n\n # start force evaluation for first step\n Ft = F_gravity(r[0], m=mass)\n for i in range(nsteps-1):\n vhalf = v[i] + 0.5*dt * Ft/mass\n r[i+1, :] = r[i] + dt * vhalf\n Ftdt = F_gravity(r[i+1], m=mass)\n v[i+1] = vhalf + 0.5*dt * Ftdt/mass\n # new force becomes old force\n Ft = Ftdt\n \n return r, v\n```\n\n\n```python\nr, v = planet_orbit(dt=0.1, t_max=10)\n```\n\n\n```python\nrx, ry = r.T\nax = plt.subplot(1,1,1)\nax.set_aspect(1)\nax.plot(rx, ry)\n```\n\nThese are not closed orbits (as we would expect from a $1/r$ potential). 
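\n\nBefore switching to the `integrators2` module below, we can quantify how rough the `dt=0.1` run is by looking directly at the drift of the total energy along the trajectory. The following is a small sketch, assuming the `r`, `v` arrays from the cell above and the `U_gravity` function defined earlier are still in scope:\n\n\n```python\n# total energy E = kinetic + potential at every stored step of the dt=0.1 run\nKE = 0.5 * M_earth * np.sum(v**2, axis=1)      # kinetic energy per step\nPE = np.array([U_gravity(ri) for ri in r])     # potential energy per step\nE = KE + PE\nprint('relative energy drift:', abs((E[-1] - E[0]) / E[0]))\n```\n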
But it gets much besser when stepsize is reduced to 0.01 (just rerun the code with `dt = 0.01` and replot):\n\n\n```python\nr, v = planet_orbit(dt=0.01, t_max=10)\n```\n\n\n```python\nrx, ry = r.T\nax = plt.subplot(1,1,1)\nax.set_aspect(1)\nax.plot(rx, ry)\n```\n\n## Velocity Verlet vs RK4: Energy conservation\nAssess the stability of `rk4` and `Velocity Verlet` by checking energy conservation over longer simulation times.\n\nThe file `integrators2.py` contains almost all code that you will need.\n\n### Implement gravity force in `integrators2.py`\nAdd `F_gravity` to the `integrators2.py` module. Use the new function `unitvector()`.\n\n### Planetary orbits with `integrators2.py` \n\n\n```python\nr0 = np.array([1, 0])\nv0 = np.array([0, 6.179])\n```\n\n\n```python\nimport integrators2\nfrom importlib import reload\nreload(integrators2)\n```\n\n\n\n\n \n\n\n\nUse the new function `integrators2.integrate_newton_2d()` to integrate 2d coordinates.\n\n#### RK4\n\n\n```python\ntrk4, yrk4 = integrators2.integrate_newton_2d(x0=r0, v0=v0, t_max=30, mass=M_earth,\n h=0.01,\n force=integrators2.F_gravity, \n integrator=integrators2.rk4)\n```\n\n\n```python\nrxrk4, ryrk4 = yrk4[:, 0, 0], yrk4[:, 0, 1]\nax = plt.subplot(1,1,1)\nax.set_aspect(1)\nax.plot(rxrk4, ryrk4)\n```\n\n\n```python\nintegrators2.analyze_energies(trk4, yrk4, integrators2.U_gravity, m=M_earth)\n```\n\n\n```python\nprint(\"Energy conservation RK4 for {} steps: {}\".format(\n len(trk4),\n integrators2.energy_conservation(trk4, yrk4, integrators2.U_gravity, m=M_earth)))\n```\n\n Energy conservation RK4 for 3000 steps: 3.5366039779498594e-06\n\n\n#### Euler\n\n\n```python\nte, ye = integrators2.integrate_newton_2d(x0=r0, v0=v0, t_max=30, mass=M_earth,\n h=0.01,\n force=F_gravity, \n integrator=integrators2.euler)\nrex, rey = ye[:, 0].T\n```\n\n\n```python\nax = plt.subplot(1,1,1)\nax.plot(rx, ry, label=\"RK4\")\nax.plot(rex, rey, label=\"Euler\")\nax.legend(loc=\"best\")\nax.set_aspect(1)\n```\n\n\n```python\nintegrators2.analyze_energies(te, ye, integrators2.U_gravity, m=M_earth)\n```\n\n\n```python\nprint(\"Energy conservation Euler for {} steps: {}\".format(\n len(te),\n integrators2.energy_conservation(te, ye, integrators2.U_gravity, m=M_earth)))\n```\n\n Energy conservation Euler for 3000 steps: 0.6763156457080265\n\n\n*Euler* is just awful... 
but we knew that already.\n\n#### Velocity Verlet\n\n\n```python\ntv, yv = integrators2.integrate_newton_2d(x0=r0, v0=v0, t_max=30, mass=M_earth,\n h=0.01,\n force=F_gravity, \n integrator=integrators2.velocity_verlet)\n```\n\n\n```python\nrxv, ryv = yv[:, 0].T\nax = plt.subplot(1,1,1)\nax.set_aspect(1)\nax.plot(rxv, ryv, label=\"velocity Verlet\")\nax.plot(rxrk4, ryrk4, label=\"RK4\")\nax.legend(loc=\"best\")\n```\n\n\n```python\nintegrators2.analyze_energies(tv, yv, integrators2.U_gravity, m=M_earth)\n```\n\n\n```python\nprint(\"Energy conservation Velocity Verlet for {} steps: {}\".format(\n len(tv),\n integrators2.energy_conservation(tv, yv, integrators2.U_gravity, m=M_earth)))\n```\n\n Energy conservation Velocity Verlet for 3000 steps: 6.37822713158452e-05\n\n\n*Velocity Verlet* only has moderate accuracy, especially when compared to *RK4*.\n\nHowever, let's look at energy conservation over longer times:\n\n#### Longer time scale stability\nRun RK4 and Velocity Verlet for longer.\n\n\n```python\ntv2, yv2 = integrators2.integrate_newton_2d(x0=r0, v0=v0, t_max=1000, mass=M_earth,\n h=0.01,\n force=F_gravity, \n integrator=integrators2.velocity_verlet)\n```\n\n\n```python\nprint(\"Energy conservation Velocity Verlet for {} steps: {}\".format(\n len(tv2),\n integrators2.energy_conservation(tv2, yv2, integrators2.U_gravity, m=M_earth)))\n```\n\n Energy conservation Velocity Verlet for 100000 steps: 6.379584728912368e-05\n\n\n\n```python\nt4, y4 = integrators2.integrate_newton_2d(x0=r0, v0=v0, t_max=1000, mass=M_earth,\n h=0.01,\n force=F_gravity, \n integrator=integrators2.rk4)\n```\n\n\n```python\nprint(\"Energy conservation RK4 for {} steps: {}\".format(\n len(t4),\n integrators2.energy_conservation(t4, y4, integrators2.U_gravity, m=M_earth)))\n```\n\n Energy conservation RK4 for 100000 steps: 0.00011860288209883538\n\n\nVelocity Verlet shows **good long-term stability** but relative low precision. On the other hand, RK4 has high precision but the accuracy decreases over time.\n\n* Use a *Verlet* integrator when energy conservation is important and long term stability is required (e.g. molecular dynamics simulations). It is generally recommended to use an integrator that conserves some of the inherent symmetries and structures of the governing physical equations (e.g. 
for Hamilton's equations of motion, time reversal symmetry and the symplectic and area-preserving structure).\n* Use *RK4* for high short-term accuracy (but may be difficult to know what \"short term\" should mean) or when solving general differential equations.\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "c5c8af659758c73da54b92987b7083cbaebf881b", "size": 350360, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "11_ODEs/11-ODE-integrators-verlet.ipynb", "max_stars_repo_name": "nachrisman/PHY494", "max_stars_repo_head_hexsha": "bac0dd5a7fe6f59f9e2ccaee56ebafcb7d97e2e7", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "11_ODEs/11-ODE-integrators-verlet.ipynb", "max_issues_repo_name": "nachrisman/PHY494", "max_issues_repo_head_hexsha": "bac0dd5a7fe6f59f9e2ccaee56ebafcb7d97e2e7", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "11_ODEs/11-ODE-integrators-verlet.ipynb", "max_forks_repo_name": "nachrisman/PHY494", "max_forks_repo_head_hexsha": "bac0dd5a7fe6f59f9e2ccaee56ebafcb7d97e2e7", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 420.0959232614, "max_line_length": 66392, "alphanum_fraction": 0.9272890741, "converted": true, "num_tokens": 2910, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9019206870747658, "lm_q2_score": 0.9284087980793998, "lm_q1q2_score": 0.8373511010500297}} {"text": "```python\nfrom sympy import symbols, Integral, integrate\n\nx = symbols('x')\n\nf = x**4*(1 - x)**4/(1 + x**2)\nf\n```\n\n\n\n\n$\\displaystyle \\frac{x^{4} \\left(1 - x\\right)^{4}}{x^{2} + 1}$\n\n\n\n\n```python\nF = Integral(f, x)\nF\n```\n\n\n\n\n$\\displaystyle \\int \\frac{x^{4} \\left(1 - x\\right)^{4}}{x^{2} + 1}\\, dx$\n\n\n\n\n```python\nF.doit()\n```\n\n\n\n\n$\\displaystyle \\frac{x^{7}}{7} - \\frac{2 x^{6}}{3} + x^{5} - \\frac{4 x^{3}}{3} + 4 x - 4 \\operatorname{atan}{\\left(x \\right)}$\n\n\n\n\n```python\nF = integrate(f, x)\nF\n```\n\n\n\n\n$\\displaystyle \\frac{x^{7}}{7} - \\frac{2 x^{6}}{3} + x^{5} - \\frac{4 x^{3}}{3} + 4 x - 4 \\operatorname{atan}{\\left(x \\right)}$\n\n\n\n\n```python\nF = integrate(f, (x, 0, 1))\nF\n```\n\n\n\n\n$\\displaystyle \\frac{22}{7} - \\pi$\n\n\n", "meta": {"hexsha": "37367c890323b4f40a5bf48f6fb6de56e1f0355c", "size": 3196, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Integral.ipynb", "max_stars_repo_name": "mfkiwl/GMPE340", "max_stars_repo_head_hexsha": "3602b8ba859a2c7db2cab96862472597dc1ac793", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-09-07T09:36:36.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-07T09:36:36.000Z", "max_issues_repo_path": "Integral.ipynb", "max_issues_repo_name": "mfkiwl/GMPE340", "max_issues_repo_head_hexsha": "3602b8ba859a2c7db2cab96862472597dc1ac793", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Integral.ipynb", "max_forks_repo_name": "mfkiwl/GMPE340", "max_forks_repo_head_hexsha": "3602b8ba859a2c7db2cab96862472597dc1ac793", 
"max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-09-20T18:48:20.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-20T18:48:20.000Z", "avg_line_length": 20.7532467532, "max_line_length": 142, "alphanum_fraction": 0.4446182728, "converted": true, "num_tokens": 305, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9579122720843811, "lm_q2_score": 0.8740772400852111, "lm_q1q2_score": 0.8372893150272697}} {"text": "## Numerical integration of Ordinary Differential Equations\nThis notebook serves as a quick refresher on ordinary differential equations. If you are familiar with the topic: feel free to skim this notebook.\n\nWe will first consider the decay of tritium as an example:\n\n$$\n\\mathrm{^3H \\overset{\\lambda}\\rightarrow\\ ^3He + e^- + \\bar{\\nu_e}}\n$$\n\nWe will not concern ourselves with the products, instead we will only take interest in the number density of $\\mathrm{^3H}$ as function of time, let's call it $y(t)$. The rate of change of $y(t)$ is proportional to itself and the decay constant ($\\lambda$):\n\n$$\n\\frac{dy(t)}{dt} = -\\lambda y(t)\n$$\n\nyou probably know the solution to this class of differential equations (either from experience or by guessing an appropriate ansatz). SymPy can of course also solve this equation:\n\n\n```python\nimport sympy as sym\nsym.init_printing()\nt, l = sym.symbols('t lambda')\ny = sym.Function('y')(t)\ndydt = y.diff(t)\nexpr = sym.Eq(dydt, -l*y)\nexpr\n```\n\n\n```python\nsym.dsolve(expr)\n```\n\nNow, pretend for a while that this function lacked an analytic solution. We could then integrate this equation *numerically* from an initial state for a predetermined amount of time by discretizing the time into a seriers of small steps.\n\n### Explicit methods\nFor each step taken we would update $y$ by multiplying the derivative with the step size (assuming that the derivate is approximately constant on the scale of the step-size), formally this method is known as \"forward Euler\":\n\n$$\ny_{n+1} = y_n + y'(t_n)\\cdot \\Delta h\n$$\n\nthis is known as an *explicit* method, i.e. 
the derivative at the current time step is used to calculate the next step *forward*.\n\nFor demonstration purposes only, we implement this in Python:\n\n\n```python\nimport numpy as np\ndef euler_fw(rhs, y0, tout, params):\n y0 = np.atleast_1d(np.asarray(y0, dtype=np.float64))\n dydt = np.empty_like(y0)\n yout = np.zeros((len(tout), len(y0)))\n yout[0] = y0\n t_old = tout[0]\n for i, t in enumerate(tout[1:], 1):\n dydt[:] = rhs(yout[i-1], t, *params)\n h = t - t_old\n yout[i] = yout[i-1] + dydt*h\n t_old = t\n return yout\n```\n\napplying this function on our model problem:\n\n\n```python\ndef rhs(y, t, decay_constant):\n return -decay_constant*y # the rate does not depend on time (\"t\")\n```\n\n\n```python\ntout = np.linspace(0, 2e9, 100)\ny0 = 3\nparams = (1.78e-9,) # 1 parameter, decay constant of tritium\nyout = euler_fw(rhs, y0, tout, params)\n```\n\nand plotting the solution & the numerical error using matplotlib:\n\n\n```python\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\ndef my_plot(tout, yout, params, xlbl='time / a.u.', ylabel=None, analytic=None):\n fig, axes = plt.subplots(1, 2 if analytic else 1, figsize=(14, 4))\n axes = np.atleast_1d(axes)\n for i in range(yout.shape[1]):\n axes[0].plot(tout, yout[:, i], label='y%d' % i)\n if ylabel:\n axes[0].set_ylabel(ylabel)\n for ax in axes:\n ax.set_xlabel(xlbl)\n if analytic:\n axes[0].plot(tout, analytic(tout, yout, params), '--')\n axes[1].plot(tout, yout[:, 0] - yout[0]*np.exp(-params[0]*(tout-tout[0])))\n if ylabel:\n axes[1].set_ylabel('Error in ' + ylabel)\n\n```\n\n\n```python\ndef analytic(tout, yout, params):\n return yout[0, 0]*np.exp(-params[0]*tout)\nmy_plot(tout, yout, params, analytic=analytic, ylabel='number density / a.u.')\n```\n\nWe see that 100 points gave us almost plotting accuracy.\n\nUnfortunately, Euler forward is not practical for most real world problems. Usually we want a higher order formula (the error in Euler forward scales only as $n^{-1}$), and we want to use an adaptive step size (larger steps when the function is smooth). So we use the well tested LSODA algorithm (provided in scipy as ``odeint``):\n\n\n```python\nfrom scipy.integrate import odeint\nyout, info = odeint(rhs, y0, tout, params, full_output=True)\nmy_plot(tout, yout, params, analytic=analytic)\nprint(\"Number of function evaluations: %d\" % info['nfe'][-1])\n```\n\nWe can see that ``odeint`` was able to achieve a much higher precision using fewer number of function evaluations.\n\n### Implicit methods\nFor a large class of problems we need to base the step not on the derivative at the current time point, but rather at the next one (giving rise to an implicit expression). The simplest implicit stepper is \"backward euler\":\n\n$$\ny_{n+1} = y_n + y'(t_{n+1})\\cdot \\Delta h\n$$\n\nProblems requiring this type of steppers are known as \"stiff\". We will not go into the details of this (LSODA actually uses something more refined and switches between explicit and implicit steppers).\n\nIn the upcoming notebooks we will use ``odeint`` to solve systems of ODEs (and not only linear equations as in this notebook). The emphasis is not on the numerical methods, but rather on how we, from symbolic expressions, can generate fast functions for the solver.\n\n### Systems of differential equations\nIn order to show how we would formulate a system of differential equations we will here briefly look at the [van der Pol osciallator](https://en.wikipedia.org/wiki/Van_der_Pol_oscillator). 
It is a second order differential equation:\n\n$$\n{d^2y_0 \\over dx^2}-\\mu(1-y_0^2){dy_0 \\over dx}+y_0= 0\n$$\n\nOne way to reduce the order of our second order differential equation is to formulate it as a system of first order ODEs, using:\n\n$$ y_1 = \\dot y_0 $$\n\nwhich gives us:\n\n$$\n\\begin{cases}\n\\dot y_0 = y_1 \\\\\n\\dot y_1 = \\mu(1-y_0^2) y_1-y_0\n\\end{cases}\n$$\n\nLet's call the function for this system of ordinary differential equations ``vdp``:\n\n\n```python\ndef vdp(y, t, mu):\n return [\n y[1],\n mu*(1-y[0]**2)*y[1] - y[0]\n ]\n```\n\nusing \"Euler forward\":\n\n\n```python\ntout = np.linspace(0, 200, 1024)\ny_init, params = [1, 0], (17,)\ny_euler = euler_fw(vdp, y_init, tout, params) # never mind the warnings emitted here...\n```\n\n\n```python\nmy_plot(tout, y_euler, params)\n```\n\nThat does not look like an oscillator. (we see that Euler forward has deviated to values with enormous magnitude), here the advanced treatment by the ``odeint`` solver is far superior:\n\n\n```python\ny_odeint, info = odeint(vdp, y_init, tout, params, full_output=True)\nprint(\"Number of function evaluations: %d, number of Jacobian evaluations: %d\" % (info['nfe'][-1], info['nje'][-1]))\nmy_plot(tout, y_odeint, params)\n```\n\nWe see that LSODA has evaluated the Jacobian. But we never gave it an explicit representation of it\u2015so how could it?\n\nIt estimated the Jacobian matrix by using finite differences. Let's see if we can do better if we provide a function to calculate the (analytic) Jacobian.\n\n## Exercise: manually write a function evaluating a Jacobian\nFirst we need to know what signature ``odeint`` expects, we look at the documentation by using the ``help`` command: (or using ``?`` in IPython)\n\n\n```python\nhelp(odeint) # just skip to \"Dfun\"\n```\n\nso the signature needs to be: ``(state-vector, time, parameters) -> matrix``\n\n\n```python\n%load_ext scipy2017codegen.exercise\n```\n\nUse either the * ``%exercise`` * or * ``%load`` * magic to get the exercise / solution respecitvely (*i.e.* delete the whole contents of the cell except for the uncommented magic command). Replace **???** with the correct expression.\n\nRemember that our system is defined as:\n$$\n\\begin{cases}\n\\dot y_0 = y_1 \\\\\n\\dot y_1 = \\mu(1-y_0^2) y_1-y_0\n\\end{cases}\n$$\n\n\n```python\n%exercise exercise_jac_func.py\n```\n\n\n```python\nJ_func(y_init, tout[0], params[0])\n```\n\n\n```python\ny_odeint, info = odeint(vdp, y_init, tout, params, full_output=True, Dfun=J_func)\n```\n\n\n```python\nmy_plot(tout, y_odeint, params)\nprint(\"Number of function evaluations: %d, number of Jacobian evaluations: %d\" % (info['nfe'][-1], info['nje'][-1]))\n```\n\nSo this time the integration needed to evaluate both the ODE system function and its Jacobian fewer times than when using finite difference approximations. The reason for this is that the more accurate the Jacobian is, the better is the convergence in the iterative (Newton's) method solving the implicit system of equations.\n\nFor larger systems of ODEs the importance of providing a (correct) analytic Jacobian can be much bigger. \n\n### SymPy to the rescue\nInstead of writing the jacobian function by hand we could have used SymPy's ``lambdify`` which we will introduce next. 
Here is a sneak peak on how it could be achieved:\n\n\n```python\ny = y0, y1 = sym.symbols('y0 y1')\nmu = sym.symbols('mu')\nJ = sym.Matrix(vdp(y, None, mu)).jacobian(y)\nJ_func = sym.lambdify((y, t, mu), J)\nJ\n```\n", "meta": {"hexsha": "771196f4b107a0298eb0333dd848f3c7aeb6a9dc", "size": 13469, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/20-ordinary-differential-equations.ipynb", "max_stars_repo_name": "gvvynplaine/scipy-2017-codegen-tutorial", "max_stars_repo_head_hexsha": "4bd0cdb1bdbdc796bb90c08114a00e390b3d3026", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 56, "max_stars_repo_stars_event_min_datetime": "2017-05-31T21:01:22.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-14T04:26:01.000Z", "max_issues_repo_path": "notebooks/20-ordinary-differential-equations.ipynb", "max_issues_repo_name": "gvvynplaine/scipy-2017-codegen-tutorial", "max_issues_repo_head_hexsha": "4bd0cdb1bdbdc796bb90c08114a00e390b3d3026", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 26, "max_issues_repo_issues_event_min_datetime": "2017-06-06T19:05:04.000Z", "max_issues_repo_issues_event_max_datetime": "2018-08-29T23:15:19.000Z", "max_forks_repo_path": "notebooks/20-ordinary-differential-equations.ipynb", "max_forks_repo_name": "gvvynplaine/scipy-2017-codegen-tutorial", "max_forks_repo_head_hexsha": "4bd0cdb1bdbdc796bb90c08114a00e390b3d3026", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 29, "max_forks_repo_forks_event_min_datetime": "2017-06-06T14:45:12.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-25T14:11:06.000Z", "avg_line_length": 30.8922018349, "max_line_length": 336, "alphanum_fraction": 0.5748756404, "converted": true, "num_tokens": 2355, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9241418241572635, "lm_q2_score": 0.9059898191142621, "lm_q1q2_score": 0.8372630841041634}} {"text": "A random variable is a numerical description of the outcome of a statistical experiment. A random variable that may assume only a finite number or an infinite sequence of values is said to be discrete; one that may assume any value in some interval on the real number line is said to be continuous.\n\n# Expected value\n\nThe expected value for a variable X is the value of X we would **expect** to get after performing the experiment once. ***It is also called the expectation, average, and mean value***. Mathematically speaking, for a random variable X that can take values x1,x2,x3,...........,xn, the expected value (EV) is given by:\n\n\\begin{align}\nEV(X) = {x_1*P(X=x_1)+x_2*P(X=x_2)+x_3*P(X=x_3)+......+x_n*P(X=x_n)}\n\\end{align}\n\n\n\nSample question:\n\nHugo is considering buying a scratch-off lottery ticket that costs 3 dollars. If the ticket wins, he can redeem the ticket for 50 dollars. If the ticket loses, the ticket is worthless. According to the lottery's website, 2%, percent of all tickets are winners.\n\nFind the expected value of buying a scratch-off lottery ticket. \n\n1. There is a 2%, percent chance of the ticket winning. If this happens, the value is 50\u22123=47 (the redemption value minus the cost of the ticket).\n2. There is a 98%, percent chance of the ticket losing. In this case, the value is \u22123 (just the cost of the ticket).\n3. 
Let's multiply each value by its probability because expected value is the sum of all possible values, each value multiplied by its probability.\n \n| Type | Value | Probability | Value \u00d7 Probability |\n| -------------- | ----- | ----------- | ----------------- |\n| Winning ticket | 47 | 0.020 | 0.94 |\n| Losing ticket | -3 | 0.980 | -2.94 |\n\nThe expected value is 0.94 + (-2.94) = -2\n\n# Binomial Probability\n\nThe formula for finding binomial probability is given by:\n\n\\begin{align}\nP(X=r) = {C^n_r \\cdot p^r(1-p)^{n-r}}\n\\end{align}\n\nSample question:\n\nSteph's little brother Luke only has a 20% chance of making a free-throw. He is going to shoot 4 free-throws.\nWhat is the probability that he makes exactly 2 of the 4 free throws?\n\n1. n=4 trials (shots)\n2. each shot is either a make or a miss\n3. probability that he makes a shot is p=0.20\n4. shots are independent\n\nP(exactly 2 makes) = 4C2 * (0.20)^2 * (0.80)^2\n```\n= 4!/(2!*(4-2)!) * 0.04 * 0.64\n\n= (4*3*2*1/(2*2)) * 0.04 * 0.64\n\n= 6 * 0.04 * 0.64\n\n= 0.1536\n```\n\n# Cumulative probability\n\nA cumulative probability refers to the probability that the value of a random variable falls within a specified range. Frequently, cumulative probabilities refer to the probability that a random variable is less than or equal to a specified value.\nCumulative probability of X, denoted by F(x), is defined as the probability of the variable being less than or equal to x.\n\nConsider a coin flip experiment. If we flip a coin two times, we might ask: What is the probability that the coin flips would result in one or fewer heads? The answer would be a cumulative probability. It would be the probability that the coin flip results in zero heads plus the probability that the coin flip results in one head. Thus, the cumulative probability would equal:\n\nP(X <= 1) = P(X = 0) + P(X = 1) = 0.25 + 0.50 = 0.75 \n\n| Number of heads | Probability | Cumulative Probability |\n| --------------- | ----------- | ---------------------- |\n| 0 | 0.25 | 0.25 |\n| 1 | 0.50 | 0.75 |\n| 2 | 0.25 | 1.00 |\n\nA CDF, or a cumulative distribution function, is a distribution which plots the cumulative probability of X against X.\n\nA PDF, or Probability Density Function, however, is a function in which the area under the curve gives you the cumulative probability.\n\nThe main difference between the cumulative probability distribution of a continuous random variable and that of a discrete one is the way we plot them. While a continuous variable's cumulative distribution is a curve, the distribution for discrete variables looks more like a bar chart.\n\nA commonly observed type of distribution among continuous variables is the uniform distribution. For a continuous random variable following a uniform distribution, the value of the probability density is equal for all possible values.\n\nPDFs are more commonly used in real life. The reason is that it is much easier to see patterns in PDFs as compared to CDFs. For example, here are the PDF and the CDF of a uniformly distributed continuous random variable.\n\n\n\nThe PDF clearly shows uniformity, as the probability density's value remains constant for all possible values.
However, the CDF does not show any trends that help you identify quickly that the variable is uniformly distributed.\n\n\n\n\nit is clear that the symmetrical nature of the variable is much more apparent in the PDF than in the CDF.\n\n# Normal Distribution\n\nAll data that is normally distributed follows the **1-2-3** rule. This rule states that there is:\n\n1. 68% probability of the variable lying within 1 standard deviation of the mean\n\n2. 95% probability of the variable lying within 2 standard deviations of the mean\n\n3. 99.7% probability of the variable lying within 3 standard deviations of the mean\n\n\n\n\n\n***Standardised random variable (Z-Score)*** tells us how many standard deviations away from the mean the random variable is.\n\n\\begin{align}\nZ = {X - \\mu \\over \\sigma}\n\\end{align}\n\nCumulative probability from Z-Score is calculated using Z Table.\n\nIf we need to find the cumulative probability for Z = 0.68\n\n\n\n\nThe intersection of row **0.6** and column **0.08**, i.e. **0.7517**, is the answer.\n\n# Central Limit Theorem\n\n\n\n\nSo, there are two important properties for a sampling distribution of the mean:\n\n1. Sampling distribution\u2019s mean (\u03bcX\u0304) = Population mean (\u03bc)\n\n2. Sampling distribution\u2019s standard deviation (Standard error) = \u03c3/\u221an, where \u03c3 is the population\u2019s standard deviation and n is the sample size\n\n\n### The central limit theorem says that, for any kind of data, provided a high number of samples has been taken, the following properties hold true:\n\n1. Sampling distribution\u2019s mean (\u03bcX\u0304) = Population mean (\u03bc)\n\n2. Sampling distribution\u2019s standard deviation (Standard error) = \u03c3/\u221an\n\n3. For n > 30, the sampling distribution becomes a normal distribution\n\n## Confidence Interval\n\n\\begin{align}\nConfidence Interval = {(X-\\frac {Z^*S}{\\sqrt(n)},X+\\frac {Z^*S}{\\sqrt(n)})}\n\\end{align}\n\n\n\\begin{align}\nMargin of error = {\\frac {Z^*S}{\\sqrt(n)}}\n\\end{align}\n\n\n\n#### Z* is the Z-score associated\n\nSome commonly used Z* values are:\n\n\n| Confidence Level| Z* |\n| --------------- | ----------- | \n| 90% | \u00b11.65 | \n| 95% | \u00b11.96 | \n| 99% | \u00b12.58 | \n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "9297d93fdbe4f192a7545bd96cb2a90479f90ca8", "size": 10338, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Statistics Essential /inferential_statistics.ipynb", "max_stars_repo_name": "ashwanikumar04/ml-ai-notes", "max_stars_repo_head_hexsha": "f6164230db555cdb4606b55f787aa1e4cb099bf2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Statistics Essential /inferential_statistics.ipynb", "max_issues_repo_name": "ashwanikumar04/ml-ai-notes", "max_issues_repo_head_hexsha": "f6164230db555cdb4606b55f787aa1e4cb099bf2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Statistics Essential /inferential_statistics.ipynb", "max_forks_repo_name": "ashwanikumar04/ml-ai-notes", "max_forks_repo_head_hexsha": "f6164230db555cdb4606b55f787aa1e4cb099bf2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.5746268657, "max_line_length": 387, 
"alphanum_fraction": 0.5815438189, "converted": true, "num_tokens": 1741, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9059898102301019, "lm_q2_score": 0.9241418126663686, "lm_q1q2_score": 0.8372630654833058}} {"text": "# Base rate fallacy example\n\nIn this notenook we work an example of the base rate fallacy using Bayes Theorem. \n\nAssume that we have two random variables $HasDisease$ and $FailsTest$. $HasDisease=y$ indicates that a person has the disease while $HasDisease=n$ indicates that the person in disease free. In addition, we have a test which attempts to detect the disease. $FailsTest=y$ indicates that our test says a person hasthe disease while $FailsTest=n$ indicates that our test says a person does not have the disease. \n\nIn this notebook you can play around with the probabilities of interest and see now likely it is that, given you fail the test, that you actually have the disease.\n\nSuppose we know the following probabilities:\n\n\\begin{align}\nPr(FailsTest=y | HasDisease=y) &= FailAndHasDisease \\\\\nPr(FailsTest=n | HasDisease=y) &= NotFailAndHasDisease \\\\\nPr(FailsTest=y | HasDisease=n) &= FailAndNotHasDisease \\\\\nPr(FailsTest=n | HasDisease=n) &= NotFailAndNotHasDisease \\\\\n\\end{align}\n\nAnd we know the prior probability of the disease in the population\n\n$$\nPr(HasDisease=y).\n$$\n\nNote, the point of the base rate fallacy is that you need all 5 probabilities to compute what you are interested in, namely the probability you have the disease given you fail the test, denoted\n\n$$\nPr(HasDisease=y | FailsTest=y).\n$$\n\nWithout, $Pr(HasDisease=y)$ you cannot truly understand $Pr(HasDisease=y | FailsTest=y)$.\n\nYou can play aroun with the numbers in the next cell to see how things work out.\n\n\n```python\nFailAndHasDisease = 1.0\nNotFailAndHasDisease = 0.0\nFailAndNotHasDisease = 0.01\nNotFailAndNotHasDisease = 0.99\nHasDisease = 1./1000\n```\n\nBayes theorem says that\n\n$$\nPr(HasDisease=y | FailsTest=y) = \\frac{Pr(FailsTest=y | HasDisease=y) Pr(HasDisease=y)}{Pr(FailsTest=y)}\n$$\n\nOur table gives us the two terms in the numerator, we get the demoninator from the Law of total probability.\n\n\\begin{align}\nPr(FailsTest=y) & = Pr(FailsTest=y | HasDisease=y) Pr(HasDisease=y) + Pr(FailsTest=y | HasDisease=n) Pr(HasDisease=n) \\\\\n & = Pr(FailsTest=y | HasDisease=y) Pr(HasDisease=y) + Pr(FailsTest=y | HasDisease=n) (1- Pr(HasDisease=y))\n\\end{align}\n\nSo, the whole thing is\n\n$$\nPr(HasDisease=y | FailsTest=y) = \\frac{Pr(FailsTest=y | HasDisease=y) Pr(HasDisease=y)}{(Pr(FailsTest=y | HasDisease=y) Pr(HasDisease=y) + Pr(FailsTest=y | HasDisease=n) (1- Pr(HasDisease=y)))}\n$$\n\n\n```python\nFailAndHasDisease*HasDisease/(FailAndHasDisease*HasDisease + FailAndNotHasDisease*(1-HasDisease))\n```\n\n\n\n\n 0.09099181073703368\n\n\n\nThis matches the result we did by hand in class. 
Play around with the probabilities and see what you discover.\n\n\n```python\n\n```\n", "meta": {"hexsha": "73072720b11ffb3df6bd2ac886fe415400411a2b", "size": 4450, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "lectures/05 Basic Statistics, Probability, and Linear Algebra/BaseRateFallacy.ipynb", "max_stars_repo_name": "thomasmeagher/DS-501", "max_stars_repo_head_hexsha": "b5c697c3bc4f44903af16219f242b5728c9a82d1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2017-07-27T02:52:06.000Z", "max_stars_repo_stars_event_max_datetime": "2020-04-01T09:25:14.000Z", "max_issues_repo_path": "lectures/05 Basic Statistics, Probability, and Linear Algebra/BaseRateFallacy.ipynb", "max_issues_repo_name": "thomasmeagher/DS-501", "max_issues_repo_head_hexsha": "b5c697c3bc4f44903af16219f242b5728c9a82d1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "lectures/05 Basic Statistics, Probability, and Linear Algebra/BaseRateFallacy.ipynb", "max_forks_repo_name": "thomasmeagher/DS-501", "max_forks_repo_head_hexsha": "b5c697c3bc4f44903af16219f242b5728c9a82d1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2017-07-18T21:50:17.000Z", "max_forks_repo_forks_event_max_datetime": "2020-04-01T09:25:18.000Z", "avg_line_length": 30.4794520548, "max_line_length": 420, "alphanum_fraction": 0.591011236, "converted": true, "num_tokens": 798, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9597620596782468, "lm_q2_score": 0.8723473813156294, "lm_q1q2_score": 0.8372459194464134}} {"text": "# Logistic Regression\n\nA binary classifier.\n\n## Logistic Regression Prediction\n\nTo perform a prediction using logistic regression we use the following formula: \n\n\\begin{equation}\n\\hat{y} = \\sigma (w^Tx + b)\n\\end{equation}\n\n\\begin{equation}\n\\sigma(z) = \\frac{1}{1 + e^{-z}}\n\\end{equation}\n\n$\\sigma(z)$: Also called the sigmoid function. The output will never be greater than one\n\n$\\hat{y}^{(i)}$: The predicted label for example $i$\n\n$w$: Vector of weights\n\n$b$: bias (a real number)\n\n$x^{(i)}$: The $i$th input or training example\n\n${y}^{(i)}$: Ground truth label for example $i$\n\nWe want $\\hat{y}$ to be as close as possible to $y$, or $\\hat{y} \\approx y$\n\n## Loss (error) function $\\mathcal{L}$\n\n\\begin{equation*}\n\\mathcal{L}(\\hat{y}^{(i)},y^{(i)}) = -(y^{(i)}\\log{\\hat{y}^{(i)}} + (1 - y^{(i)}) \\log({1 - \\hat{y}^{(i)}}))\n\\end{equation*}\n\nThis function will measure how close our output $\\hat{y}$ is to the true label $y$.\n\n### Intuition \nWe want $\\mathcal{L}$ to be as small as possible. \n\nIf $y = 1$ then $\\mathcal{L}(\\hat{y},y) = - \\log \\hat{y}$. \n\nThe second expression cancels out because $(1-1) = 0$. So now we want $- \\log \\hat{y}$ to be as large as possible. That means we want $\\hat{y}$ to be large. 
The sigmoid function above, $\\sigma (z)$ can never be greater than one.\n\nIf $y = 0$ then $\\mathcal{L}(\\hat{y},y) = - \\log (1 - \\hat{y})$\n\nSimilar reasoning, now we want $\\hat{y}$ as small as possible because we still want $\\log 1 -\\hat{y}$ large\n\nAnother option is to use the squared error function: $\\mathcal{L}(\\hat{y},y) = \\frac{1}{2}(\\hat{y} - y)^2$ But this will produce a non-convex surface which is not good for gradient descent because it may not find the global optimum, but rather only a local optimum.\n\n## Cost function $J$ used in Logistic Regression\n\n\\begin{equation*}\nJ(w,b) = - \\frac{1}{m}\\sum_{i=1}^m \\mathcal{L}(\\hat{y}^{(i)},y^{(i)})\n\\end{equation*}\n\nThe cost function is the average, the sum, over the loss functions, divided by the number of examples $m$. This measures how well your parameters, $w$ and $b$ are performing on the training set. So the optimization problem is to minimize $J(w,b)$. We want to find $w$ and $b$ that make $J$ as small as possible.\n\n## Gradient Descent\n\n_Repeat until convergence:_\n\\begin{equation}\nw = w - \\alpha \\frac{\\partial J(w, b)}{\\partial w}\n\\end{equation}\n\n\\begin{equation}\nb = b - \\alpha \\frac{\\partial J(w, b)}{\\partial b}\n\\end{equation}\n\n$\\alpha$: Learning rate. How big of a step we take towards the minimum with each iteration\n\n### Initializing $w$\nTypically this is initialized to zero. Since the error surface is convex, it should arrive at the same minimum given any initialization.\n", "meta": {"hexsha": "b4a9c183b238dfe72c4363b25a0775a333266082", "size": 4543, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/Logistic_Regression.ipynb", "max_stars_repo_name": "jandersson/ML-Notebooks", "max_stars_repo_head_hexsha": "0962e5b44a25dd0bba1aab58c4800dd56db46bab", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notebooks/Logistic_Regression.ipynb", "max_issues_repo_name": "jandersson/ML-Notebooks", "max_issues_repo_head_hexsha": "0962e5b44a25dd0bba1aab58c4800dd56db46bab", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 5, "max_issues_repo_issues_event_min_datetime": "2020-03-24T15:28:15.000Z", "max_issues_repo_issues_event_max_datetime": "2021-06-01T21:47:08.000Z", "max_forks_repo_path": "notebooks/Logistic_Regression.ipynb", "max_forks_repo_name": "jandersson/ML-Notebooks", "max_forks_repo_head_hexsha": "0962e5b44a25dd0bba1aab58c4800dd56db46bab", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.6518518519, "max_line_length": 320, "alphanum_fraction": 0.5496368039, "converted": true, "num_tokens": 832, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9559813463747181, "lm_q2_score": 0.8757870013740061, "lm_q1q2_score": 0.8372360367109994}} {"text": "# Solution {-}\n\nPVA model\n\n\n```python\nfrom sympy import Matrix, symbols, sqrt, eye, integrate\nfrom lib.vanloan import numeval\n\nq, dt = symbols('q dt')\n\nF = Matrix([[0, 1, 0],\n [0, 0, 1],\n [0, 0, 0]])\n\nG = Matrix([[0],\n [0],\n [sqrt(q)]])\n\nphi = eye(3) + F*dt + (F*dt)**2/2\nphi\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & dt & \\frac{dt^{2}}{2}\\\\0 & 1 & dt\\\\0 & 0 & 1\\end{matrix}\\right]$\n\n\n\n\n```python\n# Process noise\nQ = integrate(phi@G@G.T@phi.T, dt)\nQ\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}\\frac{dt^{5} q}{20} & \\frac{dt^{4} q}{8} & \\frac{dt^{3} q}{6}\\\\\\frac{dt^{4} q}{8} & \\frac{dt^{3} q}{3} & \\frac{dt^{2} q}{2}\\\\\\frac{dt^{3} q}{6} & \\frac{dt^{2} q}{2} & dt q\\end{matrix}\\right]$\n\n\n\nNumerical example\n\n\n```python\nfrom numpy import array, zeros, arange\nimport matplotlib.pyplot as plt\n\n# System parameters\ndt = 0.1\nq = 1\n\nF = array([[0, 1, 0],\n [0, 0, 1],\n [0, 0, 0]])\n\nG = array([[0],\n [0],\n [sqrt(q)]])\n\n# Numerical evaluation\n[phi, Q_true] = numeval(F, G, dt)\n\n# Process noise approximation\nQ_approx = array([[0, 0, 0],\n [0, 0, 0],\n [0, 0, q*dt]])\n\n# Plot vectors\nvar_true = []\nvar_approx = []\n\n# Initial covariance\nP = zeros([3, 3])\nfor i in range(0, 200):\n P = phi@P@phi.T + Q_true\n var_true.append(P)\n\n# Initial covariance\nP = zeros([3, 3])\nfor i in range(0, 200):\n P = phi@P@phi.T + Q_approx\n var_approx.append(P)\n\n# Difference at step 200\nprint(var_true[199] - var_approx[199])\n```\n\n [[1.99333335e+03 1.99500000e+02 9.98333333e+00]\n [1.99500000e+02 1.99666667e+01 1.00000000e+00]\n [9.98333333e+00 1.00000000e+00 0.00000000e+00]]\n\n", "meta": {"hexsha": "e6656a7073886e2b8a508b0f733ded7d3b045eeb", "size": 3913, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Problem 4.2.ipynb", "max_stars_repo_name": "mfkiwl/GMPE340", "max_stars_repo_head_hexsha": "3602b8ba859a2c7db2cab96862472597dc1ac793", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Problem 4.2.ipynb", "max_issues_repo_name": "mfkiwl/GMPE340", "max_issues_repo_head_hexsha": "3602b8ba859a2c7db2cab96862472597dc1ac793", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Problem 4.2.ipynb", "max_forks_repo_name": "mfkiwl/GMPE340", "max_forks_repo_head_hexsha": "3602b8ba859a2c7db2cab96862472597dc1ac793", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 23.1538461538, "max_line_length": 252, "alphanum_fraction": 0.4244824942, "converted": true, "num_tokens": 647, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9559813526452772, "lm_q2_score": 0.8757869900269367, "lm_q1q2_score": 0.8372360313550868}} {"text": "```python\nfrom sympy.physics.mechanics import *\nimport sympy as sp\nmechanics_printing(pretty_print=True)\n```\n\n\n```python\nm1, m2, m3, m4, m5, l1, l2, l3, l4, l5 = sp.symbols(r'm_1 m_2 m_3 m_4 m_5 l_1 l_2 l_3 l_4 l_5')\nt, g, h = sp.symbols('t g h')\nv1, v2, v3, v4, v5 = dynamicsymbols(r'\\theta_1 \\theta_2 \\theta_3 \\theta_4 \\theta_5')\ndv1, dv2, dv3, dv4, dv5 = dynamicsymbols(r'\\theta_1 \\theta_2 \\theta_3 \\theta_4 \\theta_5', 1)\n```\n\n\n```python\nx1 = l1*sp.sin(v1)\ny1 = -l1*sp.cos(v1)\nx2 = x1+l2*sp.sin(v2)\ny2 = y1+-l2*sp.cos(v2)\nx3 = x2+l3*sp.sin(v3)\ny3 = y2+-l3*sp.cos(v3)\nx4 = x3+l4*sp.sin(v4)\ny4 = y3+-l4*sp.cos(v4)\nx5 = x4+l5*sp.sin(v5)\ny5 = y4+-l5*sp.cos(v5)\n\ndx1 = x1.diff(t)\ndy1 = y1.diff(t)\ndx2 = x2.diff(t)\ndy2 = y2.diff(t)\ndx3 = x3.diff(t)\ndy3 = y3.diff(t)\ndx4 = x4.diff(t)\ndy4 = y4.diff(t)\ndx5 = x5.diff(t)\ndy5 = y5.diff(t)\n```\n\n\n```python\nV = (m1*g*y1)+(m2*g*y2)+(m3*g*y3)+(m4*g*y4)+(m5*g*y5)\nT = (sp.Rational(1,2)*m1*(dx1**2+dy1**2))+(sp.Rational(1,2)*m2*(dx2**2+dy2**2))+(sp.Rational(1,2)*m3*(dx3**2+dy3**2))+(sp.Rational(1,2)*m4*(dx4**2+dy4**2))+(sp.Rational(1,2)*m5*(dx5**2+dy5**2))\nL = T-V\n```\n\n\n```python\nLM = LagrangesMethod(L,[v1,v2,v3,v4,v5])\n```\n\n\n```python\nsoln = LM.form_lagranges_equations()\n```\n\n\n```python\nsoln\n```\n\n\n```python\n# solvedsoln = sp.solve((sp.Eq(soln[0]),sp.Eq(soln[1]),sp.Eq(soln[2]),sp.Eq(soln[3]),sp.Eq(soln[4])),(v1.diff(t,t),v2.diff(t,t),v3.diff(t,t),v4.diff(t,t),v5.diff(t,t)))\n```\n\n\n```python\n# solvedsoln\n```\n", "meta": {"hexsha": "0293745d4cddd76c8616f3a23fd3f9624d290ca5", "size": 196794, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Pendula/Simple/5Pendulum/QuintuplePendulum.ipynb", "max_stars_repo_name": "ethank5149/Classical-Mechanics", "max_stars_repo_head_hexsha": "4684cc91abcf65a684237c6ec21246d5cebd232a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Pendula/Simple/5Pendulum/QuintuplePendulum.ipynb", "max_issues_repo_name": "ethank5149/Classical-Mechanics", "max_issues_repo_head_hexsha": "4684cc91abcf65a684237c6ec21246d5cebd232a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Pendula/Simple/5Pendulum/QuintuplePendulum.ipynb", "max_forks_repo_name": "ethank5149/Classical-Mechanics", "max_forks_repo_head_hexsha": "4684cc91abcf65a684237c6ec21246d5cebd232a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 1366.625, "max_line_length": 132060, "alphanum_fraction": 0.7885707898, "converted": true, "num_tokens": 698, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.95598134762883, "lm_q2_score": 0.8757869803008764, "lm_q1q2_score": 0.8372360176638155}} {"text": "# Probit Maximum Likelihood\n\nIn this homework you should implement the maximum likelihood estimator for the probit model. 
To remind you, this model is defined as follows:\n $$\n \\begin{align} \n y_i &\\in \\{0,1\\} \\\\\n \\Pr\\{y_i=1\\} &= \\Phi(x_i \\beta) \\\\\n L(\\beta) & = \\Pi_{i=1}^N \\Phi(x_i \\beta)^{y_i} (1-\\Phi(x_i \\beta))^{1-y_i} \\\\\n \\beta & \\in \\mathbb{R}^k \\\\\n x_i & \\sim N\\left([0,0,0],\\left[ \\begin{array}{ccc} 1 & 0 & 0 \\\\ 0 & 1 & 0 \\\\ 0 & 0 & 1\\end{array} \\right] \\right) \\\\\n k & = 3 \n \\end{align}\n $$\n \nWhere $\\Phi$ is the standard Normal cdf. Think of $x_i$ as a row-vector. You should proceed as follows:\n\n1. define a data generating function with default argument `N=10000`, generating `N` simulated data points from this model. Generate the data using $\\beta=[1,1.5,-0.5]$. The function should return a `Dict` as outlined in the code.\n1. Define the log likelihood function, $l(\\beta) = \\log(L)$\n1. Write a function `plotLike` to plot the log likelihood function for different parameter values. Follow the outline of that function.\n1. Fill in the similare function `plotGrad`, plotting the gradient.\n1. Define the function `maximize_like`. this should optimize your log likelihood function.\n1. (Optional) Define the gradient of the log likelihood function and use it in another optimization `maximize_like_grad`.\n1. (Optional) Define the hessian of the log likelihood function and use it in another optimization `maximize_like_grad_hess`.\n1. (Optional) Use the hessian of the log likelihood function to compute the standard errors of your estimates and use it in `maximize_like_grad_se`\n\n## Tests (NOT optional)\n\n* The code comes with a test suite that you should fill out. \n* There are some example tests, you should make those work and maybe add other ones. \n* Please do not change anything in the file structure.\n\n\n```julia\n\n```\n", "meta": {"hexsha": "887a250b48c7cde40cac87800c3b9d8590368b9d", "size": 2697, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "questions.ipynb", "max_stars_repo_name": "jeannesorin/HWunconstrained.jl", "max_stars_repo_head_hexsha": "0806ee96f74c989d44e81b7d78559095cd092c43", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "questions.ipynb", "max_issues_repo_name": "jeannesorin/HWunconstrained.jl", "max_issues_repo_head_hexsha": "0806ee96f74c989d44e81b7d78559095cd092c43", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 7, "max_issues_repo_issues_event_min_datetime": "2019-03-23T16:14:45.000Z", "max_issues_repo_issues_event_max_datetime": "2019-03-28T17:00:29.000Z", "max_forks_repo_path": "questions.ipynb", "max_forks_repo_name": "jeannesorin/HWunconstrained.jl", "max_forks_repo_head_hexsha": "0806ee96f74c989d44e81b7d78559095cd092c43", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 11, "max_forks_repo_forks_event_min_datetime": "2019-03-18T17:07:39.000Z", "max_forks_repo_forks_event_max_datetime": "2019-03-24T20:51:34.000Z", "avg_line_length": 41.4923076923, "max_line_length": 240, "alphanum_fraction": 0.5943641083, "converted": true, "num_tokens": 531, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9372107896491796, "lm_q2_score": 0.8933094067644466, "lm_q1q2_score": 0.8372192145147471}} {"text": "# Conditioning of evaluating tan()\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as pt\n```\n\nLet us estimate the sensitivity of evaluating the $\\tan$ function:\n\n\n```python\nx = np.linspace(-5, 5, 1000)\npt.ylim([-10, 10])\npt.plot(x, np.tan(x))\n```\n\n\n```python\nx = np.pi/2 - 0.0001\n#x = 0.1\nx\n```\n\n\n\n\n 1.5706963267948966\n\n\n\n\n```python\nnp.tan(x)\n```\n\n\n\n\n 9999.9999666616441\n\n\n\n\n```python\ndx = 0.00005\nnp.tan(x+dx)\n```\n\n\n\n\n 19999.99998335545\n\n\n\n## Condition number estimates\n\n### From evaluation data\n\n\n\n```python\n#clear\n\nnp.abs(np.tan(x+dx) - np.tan(x))/np.abs(np.tan(x)) / (np.abs(dx) / np.abs(x))\n```\n\n\n\n\n 31413.926693068603\n\n\n\n### Using the derivative estimate\n\n\n```python\nimport sympy as sp\n\nxsym = sp.Symbol(\"x\")\n\nf = sp.tan(xsym)\ndf = f.diff(xsym)\ndf\n```\n\n\n\n\n tan(x)**2 + 1\n\n\n\nEvaluate the derivative estimate. Use `.subs(xsym, x)` to substitute in the value of `x`.\n\n\n```python\n#clear\n(xsym*df/f).subs(xsym, x)\n```\n\n\n\n\n 15706.9633726542\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "86537b9f3cdaff85d305188b7d2699650c5cb06b", "size": 16436, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "demos/error_and_fp/Conditioning of Evaluating tan.ipynb", "max_stars_repo_name": "xywei/numerics-notes", "max_stars_repo_head_hexsha": "70e67e17d855b7bb06a0de7e3570d40ad50f941b", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": 20, "max_stars_repo_stars_event_min_datetime": "2021-01-24T21:12:30.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-02T19:58:25.000Z", "max_issues_repo_path": "demos/error_and_fp/Conditioning of Evaluating tan.ipynb", "max_issues_repo_name": "xywei/numerics-notes", "max_issues_repo_head_hexsha": "70e67e17d855b7bb06a0de7e3570d40ad50f941b", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": 5, "max_issues_repo_issues_event_min_datetime": "2021-08-24T17:48:50.000Z", "max_issues_repo_issues_event_max_datetime": "2021-10-14T21:22:02.000Z", "max_forks_repo_path": "demos/error_and_fp/Conditioning of Evaluating tan.ipynb", "max_forks_repo_name": "xywei/numerics-notes", "max_forks_repo_head_hexsha": "70e67e17d855b7bb06a0de7e3570d40ad50f941b", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": 7, "max_forks_repo_forks_event_min_datetime": "2020-11-23T09:56:26.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-24T17:30:26.000Z", "avg_line_length": 61.5580524345, "max_line_length": 11888, "alphanum_fraction": 0.8174129959, "converted": true, "num_tokens": 331, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9372107966642557, "lm_q2_score": 0.8933093968230773, "lm_q1q2_score": 0.837219211464222}} {"text": "# Week 1\n### Section 0.3, problem 7\n\n[link to relevant section of the book](http://scidiv.bellevuecollege.edu/dh/Calculus_all/CC_0_3_functions)\n\n$f(x) = x^2 + 3$ , $g(x) = \\sqrt{x - 5}$ , and $h(x) = \\frac{x}{x - 2}$\n\n### (a) evaluate $f(1)$, $g(1)$, and $h(1)$\n\n\n```python\nimport math\nimport numpy as np\n\ndef f(x):\n return x**2 + 3\n\ndef g(x):\n if x-5 >= 0:\n return math.sqrt(x-5)\n else:\n return str(abs(x-5))+'i'\n\n\ndef h(x):\n try:\n return x/(x-2)\n except ZeroDivisionError:\n return np.NaN\n\nprint(\n \"f of 1:\", f(1),\n \"\\ng of 1:\", g(1),\n \"\\nh of 1:\", h(1)\n)\n```\n\n f of 1: 4 \n g of 1: 4i \n h of 1: -1.0\n\n\n### (b) graph f(x), g(x) and h(x) for $-5 \\le x \\le 10$\n\n\n```python\nfrom bokeh.plotting import figure, show\nfrom bokeh.io import output_notebook\nfrom bokeh.layouts import row\n\noutput_notebook()\n```\n\n\n\n
    \n\n\n\n\n\n```python\n# plot of f(x)\np = figure(plot_width=300, plot_height=300, \n tools='pan,wheel_zoom,reset', title=\"f(x) plot\")\n\nx = [x/100 for x in range(-500, 1100)]\ny = [f(num) for num in x]\n\np.line(x, y, line_width=2)\n\n\n#plot of g(x)\np2 = figure(plot_width=300, plot_height=300, \n tools='pan,wheel_zoom,reset', title=\"g(x) plot\")\n\nx = [x/100 for x in range(-500, 1100)]\ny = [g(num) for num in x]\n\np2.line(x, y, line_width=2)\n\n\n#plot of h(x)\np3 = figure(plot_width=300, plot_height=300, \n tools='pan,wheel_zoom,reset', title=\"h(x) plot\")\n\nx = [x/100 for x in range(-500, 1100)]\ny = [h(num) for num in x]\n\np3.line(x, y, line_width=2)\n\nshow(row(p, p2, p3))\n```\n\n\n\n\n\n\n\n\n
    \n\n\n\n\n\n### (c) evaluate f(3x), g(3x), and h(3x)\n\n\n```python\nimport sympy\n\n# set printing to unicode/latex\nsympy.init_printing(use_unicode=True)\n```\n\n\n```python\n# define our symbols\nx = sympy.symbols('x')\n\n# make our functions using sympy\nf = x**2 + 3\ng = sympy.sqrt(x-5)\nh = x / (x-2)\n```\n\n\n```python\n# evaluate functions\nf.subs(x, 3*x)\n```\n\n\n```python\ng.subs(x, 3*x)\n```\n\n\n```python\nh.subs(x, 3*x)\n```\n\n### evaluate $f(x+h)$, $g(x+h)$, and $h(x+h)$\n(assuming the $h$ in $h(x+h)$ is a separate number)\n\n\n```python\n# I'll assign the symbol 'h' to the variable y\ny = sympy.symbols(\"h\") \n\nf.subs(x, x+y)\n```\n\n\n```python\ng.subs(x, x+y)\n```\n\n\n```python\nh.subs(x, x+y)\n```\n", "meta": {"hexsha": "91ff138e87eba8da3250788367e37974af3b66cd", "size": 137902, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "week1/section_03_problem_07.ipynb", "max_stars_repo_name": "MaraudingAvenger/Math140", "max_stars_repo_head_hexsha": "bddf788d370d1b1fe9db6f495261cbfc096197e0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "week1/section_03_problem_07.ipynb", "max_issues_repo_name": "MaraudingAvenger/Math140", "max_issues_repo_head_hexsha": "bddf788d370d1b1fe9db6f495261cbfc096197e0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "week1/section_03_problem_07.ipynb", "max_forks_repo_name": "MaraudingAvenger/Math140", "max_forks_repo_head_hexsha": "bddf788d370d1b1fe9db6f495261cbfc096197e0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 194.2281690141, "max_line_length": 103782, "alphanum_fraction": 0.6844425752, "converted": true, "num_tokens": 926, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9504109728022221, "lm_q2_score": 0.8807970873650401, "lm_q1q2_score": 0.8371192166439716}} {"text": "```python\n%matplotlib notebook\n```\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib.gridspec import GridSpec\nfrom matplotlib.ticker import ScalarFormatter\nimport math\n```\n\nThis notebook assumes you have completed the notebook [Introduction of sine waves](TDS_Introduction-sine_waves.ipynb). This notebook follows the same pattern of time domain waveform generation: instantaneous frequency -> phase step -> total phase -> time domain waveform. \n\nOur goal is to track features of different acoustic impedance in material using a low power time domain waveform. Time delay spectrometry (TDS) is one implementation of this goal. To understand TDS we need to understand the waveform which is used by TDS called a chirp. A chirp is a sinusoid that is constantly varying in frequency. The chirp is generated by integrating a varying phase step which is derived from an instantaneous frequency profile. 
We will generate a chirp in this notebook.\nThe phase of the chirp can be found by integrating the instantaneous frequency:\n\n\\begin{equation}\n\tf(t)=\\frac{f_{end}-f_{start}}{T_c}t + f_{start}\n\\end{equation}\n\n\\begin{equation}\n\\Delta\\phi(t) = 2\\pi f(t)\\Delta t\n\\end{equation}\n\n\\begin{equation}\n\\phi (t)=\\int_{}^{} \\Delta\\phi(t) = \\int_{}^{} 2\\pi f(t) dt = \\int_{}^{}\\frac{f_{end}-f_{start}}{T_c}tdt + \\int_{}^{}f_{start}dt\n\\end{equation}\n\n\\begin{equation}\n \\phi (t)= \\frac{f_{end}-f_{start}}{T_c}\\int_{}^{}tdt + f_{start}\\int_{}^{}dt\n\\end{equation}\n\n\\begin{equation}\n \\phi (t)= \\frac{f_{end}-f_{start}}{T_c}\\frac{t^2}{2} + f_{start}t\n\\end{equation}\n\nThis gives the time series value of\n\n\\begin{equation}\nx(t) = e^{j\\phi (t)} = e^{j(\\frac{f_{end}-f_{start}}{T_c}\\frac{t^2}{2} + f_{start}t)} \n\\end{equation}\n\nBut the formula for phase requires squaring time which will cause numeric errors as the time increases. Another approach is to implement the formula for phase as a cummulative summation. \n\n\\begin{equation}\n\\phi_{sum} (N)=\\sum_{k=1}^{N} \\Delta\\phi(k) = \\sum_{k=1}^{N} 2\\pi f(k) t_s = \\sum_{k=1}^{N}(\\frac{f_{end}-f_{start}}{T_c}k + f_{start})t_s\n\\end{equation}\n\n\nThis allow for the phase always stay between 0 and two pi by subtracting two phi whenever the phase exceeds the value. We will work with the cummlative sum of phase, but then compare it to the integral to determine how accurate the cummulative sum is.\n\n\n\n\n```python\n#max free 8 points per sample\n\n#Tc is the max depth we are interested in\nTc_sec=5\n\nf_start_Hz=3000\n#talk about difference and similarity of sine wave example, answer why not 32 samples\nf_stop_Hz=20000\n\n#sample rate\nfs=44100\nsamplesPerCycle=fs/f_stop_Hz\nts=1/fs\n\ntotal_samples= math.ceil(fs*Tc_sec)\nn = np.arange(0,total_samples, step=1, dtype=np.float64)\nt_sec=n*ts\nt_usec = t_sec *1e6\n\n#This is the frequency of the chirp over time. We assume linear change in frequency\nchirp_freq_slope_HzPerSec=(f_stop_Hz-f_start_Hz)/Tc_sec\n\n#Compute the instantaneous frequency which is a linear function\nchirp_instantaneous_freq_Hz=chirp_freq_slope_HzPerSec*t_sec+f_start_Hz\nchirp_instantaneous_angular_freq_radPerSec=2*np.pi*chirp_instantaneous_freq_Hz\n\n#Since frequency is a change in phase the we can plot it as a phase step\nchirp_phase_step_rad=chirp_instantaneous_angular_freq_radPerSec*ts\n\n#The phase step can be summed (or integrated) to produce the total phase which is the phase value \n#for each point in time for the chirp function\nchirp_phase_rad=np.cumsum(chirp_phase_step_rad)\n#The time domain chirp function\nchirp = np.exp(1j*chirp_phase_rad)\n```\n\n\n```python\ndef calc_chirp_phase_step(f_start_Hz, f_stop_Hz, Tc_sec,fs):\n ts=1/fs\n total_samples= math.ceil(fs*Tc_sec)\n n = np.arange(0,total_samples, step=(total_samples-1), dtype=np.float64)\n t_sec=n*ts\n\n #This is the frequency of the chirp over time. 
We assume linear change in frequency\n chirp_freq_slope_HzPerSec=(f_stop_Hz-f_start_Hz)/Tc_sec\n\n #Compute the instantaneous frequency which is a linear function\n chirp_instantaneous_freq_Hz=chirp_freq_slope_HzPerSec*t_sec+f_start_Hz\n chirp_instantaneous_angular_freq_radPerSec=2*np.pi*chirp_instantaneous_freq_Hz\n\n #Since frequency is a change in phase the we can plot it as a phase step\n chirp_phase_step_rad=chirp_instantaneous_angular_freq_radPerSec*ts\n return chirp_phase_step_rad\n\nchirp_phase_step_rad=calc_chirp_phase_step(f_start_Hz, f_stop_Hz, Tc_sec,fs)\nprint(f'chirp_phase_step_rad[0] = {chirp_phase_step_rad[0]} rad and the end is chirp_phase_step_rad[-1] = {chirp_phase_step_rad[-1]}')\nprint(f'amplitude = {chirp_phase_step_rad[-1] - chirp_phase_step_rad[0]} rad')\nprint(f'offset = {(chirp_phase_step_rad[-1] - chirp_phase_step_rad[0])/2+chirp_phase_step_rad[0]} rad')\n\n```\n\n chirp_phase_step_rad[0] = 0.42742757191697867 rad and the end is chirp_phase_step_rad[-1] = 2.8495061615799755\n amplitude = 2.4220785896629966 rad\n offset = 1.638466866748477 rad\n\n\n\n```python\n#We can see, unlike the complex exponential, the chirp's instantaneous frequency is linearly increasing. \n#This corresponds with the linearly increasing phase step. \nfig, ax = plt.subplots(2, 1, sharex=True,figsize = [8, 8])\nlns1=ax[0].plot(t_usec,chirp_instantaneous_freq_Hz,linewidth=4, label='instantanous frequency');\nax[0].set_title('Comparing the instantaneous frequency and phase step')\nax[0].set_ylabel('instantaneous frequency (Hz)')\n\naxt = ax[0].twinx()\nlns2=axt.plot(t_usec,chirp_phase_step_rad,linewidth=2,color='black', linestyle=':', label='phase step');\naxt.set_ylabel('phase step (rad)')\n\n#ref: https://stackoverflow.com/questions/5484922/secondary-axis-with-twinx-how-to-add-to-legend\nlns = lns1+lns2\nlabs = [l.get_label() for l in lns]\nax[0].legend(lns, labs, loc=0)\n\n#We see that summing or integrating the linearly increasing phase step gives a quadratic function of total phase.\nax[1].plot(t_usec,chirp_phase_rad,linewidth=4,label='chirp');\nax[1].plot([t_usec[0], t_usec[-1]],[chirp_phase_rad[0], chirp_phase_rad[-1]],linewidth=1, linestyle=':',label='linear (x=y)');\nax[1].set_title('Cumulative quandratic phase function of chirp')\nax[1].set_xlabel('time ($\\mu$sec)')\nax[1].set_ylabel('total phase (rad)')\nax[1].legend();\n\n\n```\n\n\n \n\n\n\n\n\n\n\n```python\nprint(f'chirp_phase_step_rad[0] = {chirp_phase_step_rad[0]} rad and the end is chirp_phase_step_rad[-1] = {chirp_phase_step_rad[-1]}')\nprint(f'amplitude = {chirp_phase_step_rad[-1] - chirp_phase_step_rad[0]} rad')\nprint(f'offset = {(chirp_phase_step_rad[-1] - chirp_phase_step_rad[0])/2+chirp_phase_step_rad[0]} rad')\n```\n\n chirp_phase_step_rad[0] = 0.42742757191697867 rad and the end is chirp_phase_step_rad[-1] = 2.8495061615799755\n amplitude = 2.4220785896629966 rad\n offset = 1.638466866748477 rad\n\n", "meta": {"hexsha": "edfc6dc36705341905f2aaca6f30e83c1db8a569", "size": 201846, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "course/tds-200/week_01/notebooks/make_chirp_for_GNURadio.ipynb", "max_stars_repo_name": "potto216/tds-tutorials", "max_stars_repo_head_hexsha": "2acf2002ac5514dc60781c3e2e6797a4595104e6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2020-07-12T19:17:59.000Z", "max_stars_repo_stars_event_max_datetime": "2020-09-24T22:19:02.000Z", "max_issues_repo_path": "course/tds-200/week_01/notebooks/make_chirp_for_GNURadio.ipynb", 
"max_issues_repo_name": "potto216/tds-tutorials", "max_issues_repo_head_hexsha": "2acf2002ac5514dc60781c3e2e6797a4595104e6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 7, "max_issues_repo_issues_event_min_datetime": "2020-09-16T12:18:01.000Z", "max_issues_repo_issues_event_max_datetime": "2020-12-17T23:04:37.000Z", "max_forks_repo_path": "course/tds-200/week_01/notebooks/make_chirp_for_GNURadio.ipynb", "max_forks_repo_name": "potto216/tds-tutorials", "max_forks_repo_head_hexsha": "2acf2002ac5514dc60781c3e2e6797a4595104e6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 198.2770137525, "max_line_length": 157803, "alphanum_fraction": 0.8674633136, "converted": true, "num_tokens": 2077, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.95041097139764, "lm_q2_score": 0.880797071719777, "lm_q1q2_score": 0.83711920053739}} {"text": "# COURSE: Master math by coding in Python\n## SECTION: Introduction to Sympy and LaTeX\n\n#### https://www.udemy.com/course/math-with-python/?couponCode=MXC-DISC4ALL\n#### INSTRUCTOR: sincxpress.com\n\nNote about this code: Each video in this section of the course corresponds to a section of code below. Please note that this code roughly matches the code shown in the live recording, but is not exactly the same -- the variable names, order of lines, and parameters may be slightly different. \n\n\n```python\n\n```\n\n# VIDEO: Intro to sympy, part 1\n\n\n```python\n# import the sympy package\nimport sympy as sym\n\n# optional setup for \"fancy\" printing (used later)\n# sym.init_printing()\n\n# import additional functions for nice printing\nfrom IPython.display import display, Math\n```\n\n\n```python\n# create symbolic variables\nx,y,z = sym.symbols('x,y,z')\n\nx\n\n```\n\n\n```python\n# x doesn't have a value; it's just a symbol\nx + 4\n```\n\n\n```python\n# for \"fancy\" printing (note: you need to run this line only once)\nsym.init_printing()\nx + 4\n```\n\n\n```python\n# more fun...\ndisplay( x**y )\ndisplay( y/z )\n```\n\n\n```python\n# let's compare with numpy\nimport numpy as np\n\ndisplay(sym.sqrt(2)) # square root in symbolic math\ndisplay(np.sqrt(2)) # square root in numeric/precision math\n\n```\n\n### Exercises\n\n\n```python\n# 1)\ndisplay(y*x**2)\n\n# 2)\ndisplay( sym.sqrt(4)*x )\n\n# 3)\ndisplay( sym.sqrt(x)*sym.sqrt(x) )\n```\n\n# VIDEO: Intro to Latex\n\n\n```python\n# basic latex coding is easy:\ndisplay(Math('4+5=7'))\n```\n\n\n```python\n# special characters are indicated using \\\\\ndisplay(Math('\\\\sigma = \\\\mu \\\\times \\\\sqrt{5}'))\n\n# outside Python, use one \\\ndisplay(Math('\\\\sigma = \\\\mu \\times \\\\sqrt{5}'))\n\n# subscripts and superscripts\ndisplay(Math('x_n + y^m - z^{m+n}'))\n\n# fractions\ndisplay(Math('\\\\frac{1+x}{2e^{\\pi}}'))\n\n# right-click to change properties\n```\n\n\n```python\n# regular text requires a special tag\nf = 4\ndisplay(Math('Set x equal to %g'%f))\n\ndisplay(Math('\\\\text{Set x equal to %g}'%f))\n```\n\n### Latex code in a markdown cell\n\nNote: this is not for evaluating variables or other numerical code!\n\n\n$$ \\frac{1+\\sqrt{2x}}{e^{i\\pi}} $$\n\n### Exercises!\n\n\n```python\n# 1) \ndisplay(Math('4x+5y-8z=17'))\n\n# 2) \ndisplay(Math('\\\\sin(2\\\\pi f t+\\\\theta)'))\n\n# 3)\ndisplay(Math('e=mc^2'))\n\n# 4)\ndisplay(Math('\\\\frac{4+5x^2}{(1+x)(1-x)}'))\n```\n\n# VIDEO: Intro to sympy, part 2\n\n\n```python\n# using Greek 
characters as variable names\n\nmu,alpha,sigma = sym.symbols('mu,alpha,sigma')\n\nexpr = sym.exp( (mu-alpha)**2/ (2*sigma**2) )\n\ndisplay(expr)\n\n```\n\n\n```python\n# can also use longer and more informative variable names\nhello = sym.symbols('hello')\n\nhello/3\n\n```\n\n\n```python\n# substituting numbers for variables\n\n# don't forget to define variables before using them!\ndisplay(x+4)\n\n(x+4).subs(x,3)\n```\n\n\n```python\n# substituting multiple variables\n\nx,y = sym.symbols('x,y') # can create multiple variables in one line\nexpr = x+y+5\n\n# substituting one variable\nprint( expr.subs(x,5) )\n\n# substituting multiple variables\nprint( expr.subs({x:5,y:4}) )\n\n```\n\n\n```python\n# using sympy to print latex code\nexpr = 3/x\n\nprint( sym.latex(expr) )\n\n# notice:\nprint( sym.latex(3/4) )\nprint( sym.latex('3/4') )\n# but\nprint( sym.latex(sym.sympify('3/4')) )\n\n```\n\n### Exercise!\n\n\n```python\nfor i in range(-2,3):\n ans = (x+4).subs(x,i**2)\n display(Math('\\\\text{With }x=%g:\\; x^2+4 \\\\quad \\\\Rightarrow \\\\quad %g^2+4 =%g' %(i,i,ans)))\n```\n\n# VIDEO: Example: Use sympy to understand the law of exponents\n\n\n```python\nx,y,z = sym.symbols('x,y,z')\n\nex = x**y * x**z\n\ndisplay(ex)\ndisplay( sym.simplify(ex) )\n\n```\n\n\n```python\nexpr1 = x**y * x**z\nexpr2 = x**y / x**z\nexpr3 = x**y * y**z\n\ndisplay(Math('%s = %s' %( sym.latex(expr1),sym.latex(sym.simplify(expr1)) ) ))\ndisplay(Math('%s = %s' %( sym.latex(expr2),sym.latex(sym.simplify(expr2)) ) ))\ndisplay(Math('%s = %s' %( sym.latex(expr3),sym.latex(sym.simplify(expr3)) ) ))\n\n```\n\n\n```python\n# using sym.Eq\n\nsym.Eq(4,2+2)\n```\n\n\n```python\ndisplay( sym.Eq(x-3,4) )\n\n# using variables\nlhs = x-3\nrhs = x\ndisplay( sym.Eq(lhs,rhs) )\n```\n\n\n```python\nlhs = x-3\nrhs = x+4#-7\ndisplay( sym.Eq(lhs,rhs) )\n\n# or\nsym.Eq( lhs-rhs )\n```\n\n#### NOTE ABOUT `SYM.EQ`\nSympy changed the behavior of `sym.Eq` since I made this course. To test against zero, you need to write `sym.Eq(expr,0)` instead of just `sym.Eq(expr)`.\n\nIf you are using a sympy version earlier than 1.5, you don't need to change anything. 
If you get a warning or error message, then simply add `,0` as the second input to `sym.Eq`.\n\nTo test which version of sympy you have, type\n`sym.__version__`\n\n\n```python\ndisplay( sym.Eq(expr1,expr1) )\n\n# but...\ndisplay( sym.Eq(expr1 - sym.simplify(expr1)) )\n\ndisplay( sym.Eq(sym.expand( expr1-sym.simplify(expr1) )) )\n\n```\n\n\n```python\n# btw, there's also a powsimp function:\ndisplay( sym.powsimp(expr1) )\n\n# and its converse\nres = sym.powsimp(expr1)\ndisplay( sym.expand_power_exp(res) )\n```\n\n# VIDEO: printing with f-strings\n\n\n```python\n# basic intro to f-strings\n\nsvar = 'Mike'\nnvar = 7\n\nprint(f'Hi my name is {svar} and I eat {nvar} chocolates every day.')\n\n```\n\n\n```python\n# now with latex integration\n\nx,y = sym.symbols('x,y')\nexpr = 3/x\n\n\n# trying to print using symbolic variables\ndisplay(Math(f'\\\\frac{x}{y}'))\ndisplay(Math(f'\\\\frac{{sym.latex(expr)}}{{y}}'))\ndisplay(Math(f'\\\\frac{{{x}}}{{{y}}}'))\ndisplay(Math(f'\\\\frac{{{sym.latex(expr)}}}{{{y}}}'))\n\n\n# my preference for mixing replacements with latex\ndisplay(Math('\\\\frac{%s}{%s}'%(sym.latex(expr),y)))\n```\n\n\n```python\n# print using numeric variables\nu,w = 504,438\n\ndisplay(Math(f'\\\\frac{u}{w}'))\ndisplay(Math(f'\\\\frac{{u}}{{w}}'))\ndisplay(Math(f'\\\\frac{{{u}}}{{{w}}}'))\n\n# my preference for mixing replacements with latex\ndisplay(Math('\\\\frac{%g}{%g}'%(u,w)))\n```\n\n\n```python\n\n```\n\n# VIDEO: Sympy and LaTex: Bug hunt!\n\n\n```python\nmu,alpha = sym.symbols('mu,alpha')\n\nexpr = 2*sym.exp(mu**2/alpha)\n\ndisplay(Math( sym.latex(expr) ))\n```\n\n\n```python\nMath('1234 + \\\\frac{3x}{\\sin(2\\pi t+\\\\theta)}')\n```\n\n\n```python\na = '3'\nb = '4'\n\n# answer should be 7\nprint(sym.sympify(a)+sym.sympify(b))\n\n```\n\n\n```python\nx = sym.symbols('x')\nsym.solve( 4*x - 2 )\n```\n\n\n```python\n# part 1 of 2\n\nq = x**2\nr = x**2\n\ndisplay(q)\ndisplay(r)\n```\n\n\n```python\n# part 2 of 2\n\nq,r = sym.symbols('q,r')\n\nq = sym.sympify('x^2')\nr = sym.sympify('x**2')\n\ndisplay(Math(sym.latex(q)))\ndisplay(r)\n\nsym.Eq(q,r)\n```\n\n\n```python\nx = sym.symbols('x')\n\nequation = (4*x**2 - 5*x + 10)**(1/2)\ndisplay(equation)\nequation.subs(x,3)\n```\n\n\n```python\nx,y = sym.symbols('x,y')\n\nequation = 1/4*x*y**2 - x*(5*x + 10*y**2)**(3)\ndisplay(equation)\n```\n", "meta": {"hexsha": "74dd2f01c9f5dcf9e5764e8a260ea06f6287fe74", "size": 13791, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "MXC_pymath_sympy.ipynb", "max_stars_repo_name": "stefan-cross/mathematics-with-python", "max_stars_repo_head_hexsha": "40a130b0d7da98c4bd54406c61e99daeca57680f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "MXC_pymath_sympy.ipynb", "max_issues_repo_name": "stefan-cross/mathematics-with-python", "max_issues_repo_head_hexsha": "40a130b0d7da98c4bd54406c61e99daeca57680f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "MXC_pymath_sympy.ipynb", "max_forks_repo_name": "stefan-cross/mathematics-with-python", "max_forks_repo_head_hexsha": "40a130b0d7da98c4bd54406c61e99daeca57680f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 21.6499215071, "max_line_length": 
299, "alphanum_fraction": 0.4870567762, "converted": true, "num_tokens": 2093, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9324533022906133, "lm_q2_score": 0.8976952975813453, "lm_q1q2_score": 0.8370589446804801}} {"text": "# Mandatory exercises\n\n1. Consider the ODE\n\\begin{equation}\n \\begin{split}\n y'-y-t=0,\\\\\n y(0)=0\n \\end{split}\n\\end{equation}\n\n 1. Use forward Euler method to compute an approximation of the solution at $t=0.2$ with step size $h=0.1$.\n 2. Use backward Euler method to compute an approximation of the solution at $t=0.2$ with step size $h=0.1$.\n 3. Use classical Runge-Kutta method to compute an approximation of the solution at $t=0.1$ with step size $h=0.1$.\n\n2. Consider the ODE\n\\begin{equation}\n \\begin{split}\n y'-cos(yt)=0,\\\\\n y(0)=0\n \\end{split}\n\\end{equation}\nWrite down the backward Euler formula to solve the ODE. Comparing with problem 1.2, what is the difficulty here?\n\n3. Ordinary differential equations can be used to describe the movement of a satellite around a planet. Let $x(t)$ and $y(t)$ denote the $x$ and $y$ coordinate, respectively, for the satellite position at time $t$. The planet\u2019s position is at the origin. Then the following model describes how the position of the satellite changes with time:\n\\begin{equation}\n \\begin{split}\n x''(t)= -k\\frac{x(t)}{\\left(\\sqrt{x(t)^2+y(t)^2}\\right)^3}\\\\\n y''(t)= -k\\frac{y(t)}{\\left(\\sqrt{x(t)^2+y(t)^2}\\right)^3}\n \\end{split}\n\\end{equation}\nFor simplicity, assume that $k = 1$. The initial time is $t = 0$, the initial satellite position $(x(0), y(0)) = (a, b)$ and the initial velocity in the $x$-and $y$-directions $(x'(0), y'(0)) = (c, d)$.\n\n 1. Formulate the model above as a system of first order ordinary differential equations. Remember initial conditions.\n 2. (*) Describe how you can use the model to simulate the satellite orbit around the planet for $0\\leq t\\leq100$ with the help of Matlab\u2019s built-in function `ode45` or Python function `scipy.integrate.solve_ivp`. You do not have to write a complete program. It is enough to write the code for representing the mathematical model and to show how `ode45` or `scipy.integrate.solve_ivp` can be called to solve the ODE system.\n 3. (*) You wrote a code for the SIR model in the lab session. Can you use that code to simulate the movement of a satellite around a planet?\n\n# Non-mandatory exercises\n\n4. Implement the forward Euler method in Matlab/Python. For simplicity we consider scalar ODEs. The function header should be: \n```Fortran\nfunction [t, y] = euler(f,a,b,y0,n)\n```\nor \n```python\ndef euler(f,a,b,y0,n):\n ...\n return [t, y]\n```\nThe input parameter `f` should be a function handle to a Matlab/Python function evaluating the right-hand side of the ODE. The values `a` and `b` represent\ninitial time and final time, respectively. The last input parameter, `n`, is the number of time steps.\nThe output parameter `y` is a vector, where $y_k$ is the approximate solution at time step `k`, $k = 0,\\dots, n$.\n\n5. What modifications would be required in your function `euler`, to make it able to handle *a system* of ODEs?\n\n6. Your next task is to simulate how an object is winched from the ground and into a helicopter. As the object is winched, it will make a pendular movement under the helicopter. 
Consequently, this process can be modelled as a variable\u2013length pendulum:\n\\begin{equation}\n \\theta''(t)+2\\frac{r'(t)}{r(t)}\\theta'(t)+\\frac{g}{r(t)}\\sin(\\theta(t))=0\n\\end{equation}\nwhere $\\theta(t)$ is the angle of the pendulum, $r(t)$ is the length of the pendulum, and $g$ is the gravity. The winching is represented by a \u201dwinching\nstrategy\u201d in the form of a model for how $r'(t)$ varies with time. This gives an additional differential equation:\n\\begin{equation}\n r'(t)=f(t,\\theta(t),\\theta'(t)).\n\\end{equation}\nThe purpose of the simulation is to investigate the effect of different winching strategies.\nSuppose that you are to use Heun\u2019s method to simulate how the object is winched to the helicopter. Your present task is to give a detailed\ndescription of how to apply Heun\u2019s method to this problem. (Note: this is a purely theoretical exercise. You are not required to carry out the\ncomputations, only to describe the approach in principle.)\n", "meta": {"hexsha": "cf3d477ea1459fc837559283e7d04e69b38185de", "size": 7406, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Workouts/Workout2.ipynb", "max_stars_repo_name": "enigne/ScientificComputingBridging", "max_stars_repo_head_hexsha": "920f3c9688ae0e7d17cffce5763289864b9cac80", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-05-04T01:15:32.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-08T15:08:27.000Z", "max_issues_repo_path": "Workouts/Workout2.ipynb", "max_issues_repo_name": "enigne/ScientificComputingBridging", "max_issues_repo_head_hexsha": "920f3c9688ae0e7d17cffce5763289864b9cac80", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Workouts/Workout2.ipynb", "max_forks_repo_name": "enigne/ScientificComputingBridging", "max_forks_repo_head_hexsha": "920f3c9688ae0e7d17cffce5763289864b9cac80", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.2666666667, "max_line_length": 437, "alphanum_fraction": 0.6009991898, "converted": true, "num_tokens": 1136, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9149009526726544, "lm_q2_score": 0.9149009515124917, "lm_q1q2_score": 0.8370437521398967}} {"text": "This is my attempt at trying to resolve the issue of different results from $P_{n}\\prime cos \\theta$ in eqn. 12.98. \n\n\n\n\nFor this solution App II, eqn. 65 is referred to. \n\n\n\nApplying the appendix II eqn. 65, to 12.98, I'd done the following. 
\n\n\n```python\n\n\nfrom sympy import symbols, diff, legendre, cos, sin, simplify, pi\nalpha, m,n,z,theta = symbols('alpha m n z theta')\n\n```\n\n\n```python\nPnz = legendre(n,z) # the 'generic term'\n```\n\n## 1) The 'naive' substitution where $z = cos \\theta$\nThis is what I'd done before, and shown Gaurav.\n\n\n\n```python\ngeneric_subs = simplify(diff(Pnz, z).subs(z, cos(theta)))\ngeneric_subs\n\n```\n\n\n\n\n$\\displaystyle - \\frac{n \\left(\\cos{\\left(\\theta \\right)} P_{n}\\left(\\cos{\\left(\\theta \\right)}\\right) - P_{n - 1}\\left(\\cos{\\left(\\theta \\right)}\\right)\\right)}{\\sin^{2}{\\left(\\theta \\right)}}$\n\n\n\n## 2) Do the partial derivative ( $\\frac{\\partial}{\\partial \\theta}P_{n}\\prime cos \\theta$ ) anew with sympy\n\n\n```python\nPncostheta = legendre(n, cos(theta))\nspecific_partialdiff = simplify(diff(Pncostheta, theta))\nspecific_partialdiff\n```\n\n\n\n\n$\\displaystyle \\frac{n \\left(\\cos{\\left(\\theta \\right)} P_{n}\\left(\\cos{\\left(\\theta \\right)}\\right) - P_{n - 1}\\left(\\cos{\\left(\\theta \\right)}\\right)\\right)}{\\sin{\\left(\\theta \\right)}}$\n\n\n\nThis already gives something different!!\n\n## 3) eqn. 12.98 as written in textbook\n\n\n```python\neqn1298 = - (n*(n+1)/((2*n+1)*sin(theta)))*(legendre(n-1, cos(theta))-legendre(n+1, cos(theta)))\neqn1298\n```\n\n\n\n\n$\\displaystyle - \\frac{n \\left(n + 1\\right) \\left(P_{n - 1}\\left(\\cos{\\left(\\theta \\right)}\\right) - P_{n + 1}\\left(\\cos{\\left(\\theta \\right)}\\right)\\right)}{\\left(2 n + 1\\right) \\sin{\\left(\\theta \\right)}}$\n\n\n\n\n```python\nn_values = [2, 10, 21]\ntheta_values = [pi/4, pi/15, 1.98*pi]\n\nfor nval, thetaval in zip(n_values, theta_values):\n values = {'theta': thetaval,\n 'n': nval}\n # substitute numeric values and see what happens\n subst_values = [ eqn1298.subs(values).evalf(), specific_partialdiff.subs(values).evalf(), generic_subs.subs(values).evalf()]\n print(subst_values)\n```\n\n [-1.50000000000000, -1.50000000000000, 2.12132034355964]\n [-5.85744699177188, -5.85744699177188, 28.1727639688434]\n [11.4510073170455, 11.4510073170455, 182.368411710620]\n\n\nThe order of numeric values is from the equations defined by cases 3),2) and 1) in that order.\n\n## The textbook eqn. 12.98 is CORRECT! \nIt was my naive substitution that led to different results somewhere. \n\n## Matter 2: *m,n* index ordering. \nThis is me verifying stuff for the [Mathematics Exchange post](https://math.stackexchange.com/q/4156607/933933) I opened. \n\n\nIn the textbook, the solution for $K_{mn}$ (eqn. 
12.107), when $m \\neq n$ is given by: \n\n$\\frac{sin\\:\\alpha( P_{m}(cos\\:\\alpha)P^{\\prime}_{n}(cos\\:\\alpha) - P_{n}(cos\\:\\alpha)P^{\\prime}_{m}(cos\\:\\alpha))}{m(m+1) - n(n+1)}$\n\nHowever, the Mathematica code implementation has the equivalent of:\n\n$\\frac{sin\\:\\alpha( P_{n}(cos\\:\\alpha)P^{\\prime}_{m}(cos\\:\\alpha) - P_{m}(cos\\:\\alpha)P^{\\prime}_{n}(cos\\:\\alpha))}{m(m+1) - n(n+1)}$\n\n\n\n\n```python\nPncosalpha = legendre(n, cos(alpha))\nPmcosalpha = Pncosalpha.subs(n,m)\n\nPnpr_cosalpha = diff(Pncosalpha, alpha)\nPmpr_cosalpha = diff(Pmcosalpha, alpha)\n\n```\n\n\n```python\nintextbook = (sin(alpha)/(m*(m+1)-n*(n+1)))*(Pmcosalpha*Pnpr_cosalpha - Pncosalpha*Pmpr_cosalpha)\nintextbook\n```\n\n\n\n\n$\\displaystyle \\frac{\\left(\\frac{m \\left(\\cos{\\left(\\alpha \\right)} P_{m}\\left(\\cos{\\left(\\alpha \\right)}\\right) - P_{m - 1}\\left(\\cos{\\left(\\alpha \\right)}\\right)\\right) \\sin{\\left(\\alpha \\right)} P_{n}\\left(\\cos{\\left(\\alpha \\right)}\\right)}{\\cos^{2}{\\left(\\alpha \\right)} - 1} - \\frac{n \\left(\\cos{\\left(\\alpha \\right)} P_{n}\\left(\\cos{\\left(\\alpha \\right)}\\right) - P_{n - 1}\\left(\\cos{\\left(\\alpha \\right)}\\right)\\right) \\sin{\\left(\\alpha \\right)} P_{m}\\left(\\cos{\\left(\\alpha \\right)}\\right)}{\\cos^{2}{\\left(\\alpha \\right)} - 1}\\right) \\sin{\\left(\\alpha \\right)}}{m \\left(m + 1\\right) - n \\left(n + 1\\right)}$\n\n\n\n\n```python\nincode = (sin(alpha)/(m*(m+1)-n*(n+1)))*(Pncosalpha*Pmpr_cosalpha - Pmcosalpha*Pnpr_cosalpha)\nincode\n```\n\n\n\n\n$\\displaystyle \\frac{\\left(- \\frac{m \\left(\\cos{\\left(\\alpha \\right)} P_{m}\\left(\\cos{\\left(\\alpha \\right)}\\right) - P_{m - 1}\\left(\\cos{\\left(\\alpha \\right)}\\right)\\right) \\sin{\\left(\\alpha \\right)} P_{n}\\left(\\cos{\\left(\\alpha \\right)}\\right)}{\\cos^{2}{\\left(\\alpha \\right)} - 1} + \\frac{n \\left(\\cos{\\left(\\alpha \\right)} P_{n}\\left(\\cos{\\left(\\alpha \\right)}\\right) - P_{n - 1}\\left(\\cos{\\left(\\alpha \\right)}\\right)\\right) \\sin{\\left(\\alpha \\right)} P_{m}\\left(\\cos{\\left(\\alpha \\right)}\\right)}{\\cos^{2}{\\left(\\alpha \\right)} - 1}\\right) \\sin{\\left(\\alpha \\right)}}{m \\left(m + 1\\right) - n \\left(n + 1\\right)}$\n\n\n\n\n```python\nsubs_vals = {'alpha':pi/3, 'm':5,'n':2}\nincode.subs(subs_vals)\n```\n\n\n\n\n$\\displaystyle - \\frac{147}{32768}$\n\n\n\n\n```python\nintextbook.subs(subs_vals)\n```\n\n\n\n\n$\\displaystyle \\frac{147}{32768}$\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "c96ab46bdc09217b22b32d07dc9fc4a7e34eaded", "size": 11162, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "beamshapes/workshop/derivative-issue.ipynb", "max_stars_repo_name": "faroit/bat_beamshapes", "max_stars_repo_head_hexsha": "9c2919b4e56dbd0cfc1039edc608ce1592b78df3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2022-01-05T02:06:33.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-16T09:00:08.000Z", "max_issues_repo_path": "beamshapes/workshop/derivative-issue.ipynb", "max_issues_repo_name": "faroit/bat_beamshapes", "max_issues_repo_head_hexsha": "9c2919b4e56dbd0cfc1039edc608ce1592b78df3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 19, "max_issues_repo_issues_event_min_datetime": "2021-08-10T20:48:34.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-05T08:02:04.000Z", "max_forks_repo_path": "beamshapes/workshop/derivative-issue.ipynb", "max_forks_repo_name": "faroit/bat_beamshapes", 
"max_forks_repo_head_hexsha": "9c2919b4e56dbd0cfc1039edc608ce1592b78df3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2022-01-04T14:50:17.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-28T02:33:34.000Z", "avg_line_length": 29.6074270557, "max_line_length": 704, "alphanum_fraction": 0.5195305501, "converted": true, "num_tokens": 1736, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9136765210631689, "lm_q2_score": 0.9161096130168221, "lm_q1q2_score": 0.837027844133736}} {"text": "# Functions\nSo far in this course we've explored equations that perform algebraic operations to produce one or more results. A *function* is a way of encapsulating an operation that takes an input and produces exactly one ouput.\n\nFor example, consider the following function definition:\n\n\\begin{equation}f(x) = x^{2} + 2\\end{equation}\n\nThis defines a function named ***f*** that accepts one input (***x***) and returns a single value that is the result calculated by the expression *x2 + 2*.\n\nHaving defined the function, we can use it for any input value. For example:\n\n\\begin{equation}f(3) = 11\\end{equation}\n\nYou've already seen a few examples of Python functions, which are defined using the **def** keyword. However, the strict definition of an algebraic function is that it must return a single value. Here's an example of defining and using a Python function that meets this criteria:\n\n\n```python\n# define a function to return x^2 + 2\ndef f(x):\n return x**2 + 2\n\n# call the function\nf(3)\n```\n\n\n\n\n 11\n\n\n\nYou can use functions in equations, just like any other term. For example, consider the following equation:\n\n\\begin{equation}y = f(x) - 1\\end{equation}\n\nTo calculate a value for ***y***, we take the ***f*** of ***x*** and subtract 1. So assuming that ***f*** is defined as previously, given an ***x*** value of 4, this equation returns a ***y*** value of **17** (*f*(4) returns 42 + 2, so 16 + 2 = 18; and then the equation subtracts 1 to give us 17). Here it is in Python:\n\n\n```python\nx = 4\ny = f(x) - 1\nprint(y)\n```\n\n 17\n\n\nOf course, the value returned by a function depends on the input; and you can graph this with the iput (let's call it ***x***) on one axis and the output (***f(x)***) on the other.\n\n\n```python\n%matplotlib inline\n\nimport numpy as np\nfrom matplotlib import pyplot as plt\n\n# Create an array of x values from -100 to 100\nx = np.array(range(-100, 101))\n\n# Set up the graph\nplt.xlabel('x')\nplt.ylabel('f(x)')\nplt.grid()\n\n# Plot x against f(x)\nplt.plot(x,f(x), color='purple')\n\nplt.show()\n```\n\nAs you can see (if you hadn't already figured it out), our function is a *quadratic function* - it returns a squared value that results in a parabolic graph when the output for multiple input values are plotted.\n\n## Bounds of a Function\nSome functions will work for any input and may return any output. For example, consider the function ***u*** defined here:\n\n\\begin{equation}u(x) = x + 1\\end{equation}\n\nThis function simply adds 1 to whatever input is passed to it, so it will produce a defined output for any value of ***x*** that is a *real* number; in other words, any \"regular\" number - but not an *imaginary* number like √-1, or ∞ (infinity). You can specify the set of real numbers using the symbol ${\\rm I\\!R}$ (note the double stroke). 
The values that can be used for ***x*** can be expressed as a *set*, which we indicate by enclosing all of the members of the set in \"{...}\" braces; so to indicate the set of all possible values for x such that x is a member of the set of all real numbers, we can use the following expression:\n\n\\begin{equation}\\{x \\in \\rm I\\!R\\}\\end{equation}\n\n\n### Domain of a Function\nWe call the set of numbers for which a function can return value it's *domain*, and in this case, the domain of ***u*** is the set of all real numbers; which is actually the default assumption for most functions.\n\nNow consider the following function ***g***:\n\n\\begin{equation}g(x) = (\\frac{12}{2x})^{2}\\end{equation}\n\nIf we use this function with an ***x*** value of **2**, we would get the output **9**; because (12 ÷ (2•2))2 is 9. Similarly, if we use the value **-3** for ***x***, the output will be **4**. However, what happens when we apply this function to an ***x*** value of **0**? Anything divided by 0 is undefined, so the function ***g*** doesn't work for an ***x*** value of 0.\n\nSo we need a way to denote the domain of the function ***g*** by indicating the input values for which a defined output can be returned. Specifically, we need to restrict ***x*** to a specific list of values - specifically any real number that is not 0. To indicate this, we can use the following notation:\n\n\\begin{equation}\\{x \\in \\rm I\\!R\\;\\;|\\;\\; x \\ne 0 \\}\\end{equation}\n\nThis is interpreted as *Any value for x where x is in the set of real numbers such that x is not equal to 0*, and we can incorporate this into the function's definition like this:\n\n\\begin{equation}g(x) = (\\frac{12}{2x})^{2}, \\{x \\in \\rm I\\!R\\;\\;|\\;\\; x \\ne 0 \\}\\end{equation}\n\nOr more simply:\n\n\\begin{equation}g(x) = (\\frac{12}{2x})^{2},\\;\\; x \\ne 0\\end{equation}\n\nWhen you plot the output of a function, you can indicate the gaps caused by input values that are not in the function's domain by plotting an empty circle to show that the function is not defined at this point:\n\n\n```python\n%matplotlib inline\n\n# Define function g\ndef g(x):\n if x != 0:\n return (12/(2*x))**2\n\n# Plot output from function g\nimport numpy as np\nfrom matplotlib import pyplot as plt\n\n# Create an array of x values from -100 to 100\nx = range(-100, 101)\n\n# Get the corresponding y values from the function\ny = [g(a) for a in x]\n\n# Set up the graph\nplt.xlabel('x')\nplt.ylabel('g(x)')\nplt.grid()\n\n# Plot x against g(x)\nplt.plot(x,y, color='purple')\n\n# plot an empty circle to show the undefined point\nplt.plot(0,g(0.0000001), color='purple', marker='o', markerfacecolor='w', markersize=8)\n\nplt.show()\n```\n\nNote that the function works for every value other than 0; so the function is defined for x = 0.000000001, and for x = -0.000000001; it only fails to return a defined value for exactly 0.\n\nOK, let's take another example. 
Consider this function:\n\n\\begin{equation}h(x) = 2\\sqrt{x}\\end{equation}\n\nApplying this function to a non-negative ***x*** value returns a meaningful output; but for any value where ***x*** is negative, the output is undefined.\n\nWe can indicate the domain of this function in its definition like this:\n\n\\begin{equation}h(x) = 2\\sqrt{x}, \\{x \\in \\rm I\\!R\\;\\;|\\;\\; x \\ge 0 \\}\\end{equation}\n\nThis is interpreted as *Any value for x where x is in the set of real numbers such that x is greater than or equal to 0*.\n\nOr, you might see this in a simpler format:\n\n\\begin{equation}h(x) = 2\\sqrt{x},\\;\\; x \\ge 0\\end{equation}\n\nNote that the symbol ≥ is used to indicate that the value must be *greater than **or equal to*** 0; and this means that **0** is included in the set of valid values. To indicate that the value must be *greater than 0, **not including 0***, use the > symbol. You can also use the equivalent symbols for *less than or equal to* (≤) and *less than* (<).\n\nWhen plotting a function line that marks the end of a continuous range, the end of the line is shown as a circle, which is filled if the function includes the value at that point, and unfilled if it does not.\n\nHere's the Python to plot function ***h***:\n\n\n```python\n%matplotlib inline\n\ndef h(x):\n if x >= 0:\n import numpy as np\n return 2 * np.sqrt(x)\n\n# Plot output from function h\nimport numpy as np\nfrom matplotlib import pyplot as plt\n\n# Create an array of x values from -100 to 100\nx = range(-100, 101)\n\n# Get the corresponding y values from the function\ny = [h(a) for a in x]\n\n# Set up the graph\nplt.xlabel('x')\nplt.ylabel('h(x)')\nplt.grid()\n\n# Plot x against h(x)\nplt.plot(x,y, color='purple')\n\n# plot a filled circle at the end to indicate a closed interval\nplt.plot(0, h(0), color='purple', marker='o', markerfacecolor='purple', markersize=8)\n\nplt.show()\n```\n\nSometimes, a function may be defined for a specific *interval*; for example, for all values between 0 and 5:\n\n\\begin{equation}j(x) = x + 2,\\;\\; x \\ge 0 \\text{ and } x \\le 5\\end{equation}\n\nIn this case, the function is defined for ***x*** values between 0 and 5 *inclusive*; in other words, **0** and **5** are included in the set of defined values. This is known as a *closed* interval and can be indicated like this:\n\n\\begin{equation}\\{x \\in \\rm I\\!R\\;\\;|\\;\\; 0 \\le x \\le 5 \\}\\end{equation}\n\nIt could also be indicated like this:\n\n\\begin{equation}\\{x \\in \\rm I\\!R\\;\\;|\\;\\; [0,5] \\}\\end{equation}\n\nIf the condition in the function was **x > 0 and x < 5**, then the interval would be described as *open* and 0 and 5 would *not* be included in the set of defined values. 
This would be indicated using one of the following expressions:\n\n\\begin{equation}\\{x \\in \\rm I\\!R\\;\\;|\\;\\; 0 \\lt x \\lt 5 \\}\\end{equation}\n\\begin{equation}\\{x \\in \\rm I\\!R\\;\\;|\\;\\; (0,5) \\}\\end{equation}\n\nHere's function ***j*** in Python:\n\n\n```python\n%matplotlib inline\n\ndef j(x):\n if x >= 0 and x <= 5:\n return x + 2\n\n \n# Plot output from function j\nimport numpy as np\nfrom matplotlib import pyplot as plt\n\n# Create an array of x values from -100 to 100\nx = range(-100, 101)\ny = [j(a) for a in x]\n\n# Set up the graph\nplt.xlabel('x')\nplt.ylabel('j(x)')\nplt.grid()\n\n# Plot x against k(x)\nplt.plot(x, y, color='purple')\n\n# plot a filled circle at the ends to indicate an open interval\nplt.plot(0, j(0), color='purple', marker='o', markerfacecolor='purple', markersize=8)\nplt.plot(5, j(5), color='purple', marker='o', markerfacecolor='purple', markersize=8)\n\nplt.show()\n```\n\nNow, suppose we have a function like this:\n\n\\begin{equation}\nk(x) = \\begin{cases}\n 0, & \\text{if } x = 0, \\\\\n 1, & \\text{if } x = 100\n\\end{cases}\n\\end{equation}\n\nIn this case, the function has highly restricted domain; it only returns a defined output for 0 and 100. No output for any other ***x*** value is defined. In this case, the set of the domain is:\n\n\\begin{equation}\\{0,100\\}\\end{equation}\n\nNote that this does not include all real numbers, it only includes 0 and 100.\n\nWhen we use Python to plot this function, note that it only makes sense to plot a scatter plot showing the individual values returned, there is no line in between because the function is not continuous between the values within the domain. \n\n\n```python\n%matplotlib inline\n\ndef k(x):\n if x == 0:\n return 0\n elif x == 100:\n return 1\n\n \n# Plot output from function k\nfrom matplotlib import pyplot as plt\n\n# Create an array of x values from -100 to 100\nx = range(-100, 101)\n# Get the k(x) values for every value in x\ny = [k(a) for a in x]\n\n# Set up the graph\nplt.xlabel('x')\nplt.ylabel('k(x)')\nplt.grid()\n\n# Plot x against k(x)\nplt.scatter(x, y, color='purple')\n\nplt.show()\n```\n\n### Range of a Function\nJust as the domain of a function defines the set of values for which the function is defined, the *range* of a function defines the set of possible outputs from the function.\n\nFor example, consider the following function:\n\n\\begin{equation}p(x) = x^{2} + 1\\end{equation}\n\nThe domain of this function is all real numbers. However, this is a quadratic function, so the output values will form a parabola; and since the function has no negative coefficient or constant, it will be an upward opening parabola with a vertex that has a y value of 1.\n\nSo what does that tell us? 
Well, the minimum value that will be returned by this function is 1, so it's range is:\n\n\\begin{equation}\\{p(x) \\in \\rm I\\!R\\;\\;|\\;\\; p(x) \\ge 1 \\}\\end{equation}\n\nLet's create and plot the function for a range of ***x*** values in Python:\n\n\n```python\n%matplotlib inline\n\n# define a function to return x^2 + 1\ndef p(x):\n return x**2 + 1\n\n\n# Plot the function\nimport numpy as np\nfrom matplotlib import pyplot as plt\n\n# Create an array of x values from -100 to 100\nx = np.array(range(-100, 101))\n\n# Set up the graph\nplt.xlabel('x')\nplt.ylabel('p(x)')\nplt.grid()\n\n# Plot x against f(x)\nplt.plot(x,p(x), color='purple')\n\nplt.show()\n```\n\nNote that the ***p(x)*** values in the plot drop exponentially for ***x*** values that are negative, and then rise exponentially for positive ***x*** values; but the minimum value returned by the function (for an *x* value of 0) is **1**.\n\n\n```python\n\n```\n", "meta": {"hexsha": "3b0652fbaca431099f942b892eb13386a591e9b2", "size": 95574, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Basics Of Algebra by Hiren/01-08-Functions.ipynb", "max_stars_repo_name": "serkin/Basic-Mathematics-for-Machine-Learning", "max_stars_repo_head_hexsha": "ac0ae9fad82a9f0429c93e3da744af6e6d63e5ab", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Basics Of Algebra by Hiren/01-08-Functions.ipynb", "max_issues_repo_name": "serkin/Basic-Mathematics-for-Machine-Learning", "max_issues_repo_head_hexsha": "ac0ae9fad82a9f0429c93e3da744af6e6d63e5ab", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Basics Of Algebra by Hiren/01-08-Functions.ipynb", "max_forks_repo_name": "serkin/Basic-Mathematics-for-Machine-Learning", "max_forks_repo_head_hexsha": "ac0ae9fad82a9f0429c93e3da744af6e6d63e5ab", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 176.9888888889, "max_line_length": 17580, "alphanum_fraction": 0.8895306255, "converted": true, "num_tokens": 3361, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9173026663679976, "lm_q2_score": 0.9124361521430956, "lm_q1q2_score": 0.8369801152514176}} {"text": "```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n```\n\n\n```python\nplt.style.use(['ggplot'])\n```\n\n# Create Data\n\n
    Generate some data with:\n\\begin{equation} \\theta_0= 4 \\end{equation} \n\\begin{equation} \\theta_1= 3 \\end{equation} \n\nAdd some Gaussian noise to the data\n\n\n```python\nX = 2 * np.random.rand(100,1)\ny = 4 +3 * X+np.random.randn(100,1)\n```\n\nLet's plot our data to check the relation between X and Y\n\n\n```python\n\nplt.plot(X,y,'b.')\nplt.xlabel(\"$x$\", fontsize=18)\nplt.ylabel(\"$y$\", rotation=0, fontsize=18)\n_ =plt.axis([0,2,0,15])\n```\n\n# Analytical way of Linear Regression\n\n\n```python\nX_b = np.c_[np.ones((100,1)),X]\ntheta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y)\nprint(theta_best)\n```\n\n [[3.87706422]\n [2.96335497]]\n\n\n
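The same least-squares solution can be cross-checked without explicitly inverting $X^TX$. This is a minimal sketch that reuses the X_b and y arrays defined above; np.linalg.lstsq solves the same problem by a more numerically stable route:\n\n\n```python\n# cross-check of the normal-equation result with np.linalg.lstsq\ntheta_best_lstsq, residuals, rank, sv = np.linalg.lstsq(X_b, y, rcond=None)\nprint(theta_best_lstsq)  # should agree with theta_best up to floating-point error\n```\n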
    This is close to our real thetas, 4 and 3. It cannot match them exactly because of the Gaussian noise I added to the data.\n\n\n```python\nX_new = np.array([[0],[2]])\nX_new_b = np.c_[np.ones((2,1)),X_new]\ny_predict = X_new_b.dot(theta_best)\ny_predict\n```\n\n\n\n\n    array([[3.87706422],\n           [9.80377417]])\n\n\n\n
    Let's plot the prediction line obtained with the calculated theta:\n\n\n```python\nplt.plot(X_new,y_predict,'r-')\nplt.plot(X,y,'b.')\nplt.xlabel(\"$x_1$\", fontsize=18)\nplt.ylabel(\"$y$\", rotation=0, fontsize=18)\nplt.axis([0,2,0,15])\n\n```\n\n# Gradient Descent\n\n## Cost Function & Gradients\n\n

    The equations for calculating the cost function and the gradients are shown below. Please note the cost function is for Linear regression. For other algorithms the cost function will be different and the gradients would have to be derived from the cost functions\n\n\n\nCost\n\\begin{equation}\nJ(\\theta) = \\frac{1}{2m} \\sum_{i=1}^{m} (h(\\theta)^{(i)} - y^{(i)})^2 \n\\end{equation}\n\nGradient\n\n\\begin{equation}\n\\frac{\\partial J(\\theta)}{\\partial \\theta_j} = \\frac{1}{m}\\sum_{i=1}^{m}(h(\\theta)^{(i)} - y^{(i)})\\cdot X_j^{(i)}\n\\end{equation}\n\nGradients\n\\begin{equation}\n\\theta_0 := \\theta_0 -\\alpha \\cdot \\frac{1}{m}\\sum_{i=1}^{m}(h(\\theta)^{(i)} - y^{(i)})\\cdot X_0^{(i)}\n\\end{equation}\n\\begin{equation}\n\\theta_1 := \\theta_1 -\\alpha \\cdot \\frac{1}{m}\\sum_{i=1}^{m}(h(\\theta)^{(i)} - y^{(i)})\\cdot X_1^{(i)}\n\\end{equation}\n\\begin{equation}\n\\theta_2 := \\theta_2 -\\alpha \\cdot \\frac{1}{m}\\sum_{i=1}^{m}(h(\\theta)^{(i)} - y^{(i)})\\cdot X_2^{(i)}\n\\end{equation}\n\n\\begin{equation}\n\\theta_j := \\theta_j -\\alpha \\cdot \\frac{1}{m}\\sum_{i=1}^{m}(h(\\theta)^{(i)} - y^{(i)})\\cdot X_j^{(i)}\n\\end{equation}\n\n\n```python\n\ndef cal_cost(theta,X,y):\n    '''\n    \n    Calculates the cost for given X and Y. The following shows an example of a single dimensional X\n    theta = Vector of thetas \n    X = Row of X's np.zeros((2,j))\n    y = Actual y's np.zeros((2,1))\n    \n    where:\n        j is the no of features\n    '''\n    \n    m = len(y)\n    \n    predictions = X.dot(theta)\n    # note: (1/2*m) evaluates to m/2 in Python, so this is a scaled squared error rather than the 1/(2m)-normalized J(theta) above; the minimizer is the same\n    cost = (1/2*m) * np.sum(np.square(predictions-y))\n    return cost\n\n```\n\n\n```python\ndef gradient_descent(X,y,theta,learning_rate=0.01,iterations=100):\n    '''\n    X = Matrix of X with added bias units\n    y = Vector of Y\n    theta=Vector of thetas np.random.randn(j,1)\n    learning_rate \n    iterations = no of iterations\n    \n    Returns the final theta vector and array of cost history over no of iterations\n    '''\n    m = len(y)\n    cost_history = np.zeros(iterations)\n    theta_history = np.zeros((iterations,2))\n    for it in range(iterations):\n        \n        prediction = np.dot(X,theta)\n        \n        theta = theta -(1/m)*learning_rate*( X.T.dot((prediction - y)))\n        theta_history[it,:] =theta.T\n        cost_history[it] = cal_cost(theta,X,y)\n        \n    return theta, cost_history, theta_history\n    \n    \n    \n```\n\n
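As a quick sanity check on the update rule, the analytic gradient can be compared with a central finite-difference approximation. This is a minimal sketch that assumes the X_b and y arrays from the earlier cells are still in scope; it defines its own cost with the conventional 1/(2m) scaling so that its exact gradient is (1/m) * X.T.dot(X.dot(theta) - y), the same term used inside gradient_descent:\n\n\n```python\ndef mse_cost(theta, X, y):\n    # squared-error cost with 1/(2m) scaling\n    m = len(y)\n    return np.sum(np.square(X.dot(theta) - y)) / (2*m)\n\ndef mse_grad(theta, X, y):\n    # analytic gradient of mse_cost\n    m = len(y)\n    return (1/m) * X.T.dot(X.dot(theta) - y)\n\ntheta_check = np.random.randn(2,1)\neps = 1e-6\nnum_grad = np.zeros_like(theta_check)\nfor j in range(theta_check.shape[0]):\n    step = np.zeros_like(theta_check)\n    step[j] = eps\n    num_grad[j] = (mse_cost(theta_check + step, X_b, y) - mse_cost(theta_check - step, X_b, y)) / (2*eps)\n\n# True if the analytic and numerical gradients agree closely\nprint(np.allclose(num_grad, mse_grad(theta_check, X_b, y), rtol=1e-4))\n```\n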

    Let's start with 1000 iterations and a learning rate of 0.01. Start with theta from a Gaussian distribution\n\n\n```python\nlr =0.01\nn_iter = 1000\n\ntheta = np.random.randn(2,1)\n\nX_b = np.c_[np.ones((len(X),1)),X]\ntheta,cost_history,theta_history = gradient_descent(X_b,y,theta,lr,n_iter)\n\n\nprint('Theta0: {:0.3f},\\nTheta1: {:0.3f}'.format(theta[0][0],theta[1][0]))\nprint('Final cost/MSE: {:0.3f}'.format(cost_history[-1]))\n```\n\n Theta0: 3.635,\n Theta1: 3.172\n Final cost/MSE: 6584.880\n\n\n
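Instead of fixing the number of iterations in advance, the same loop can also be stopped once the cost change per iteration becomes negligible. The following is a minimal sketch of such a variant, assuming cal_cost, X_b and y from the cells above:\n\n\n```python\ndef gradient_descent_tol(X, y, theta, learning_rate=0.01, max_iterations=10000, tol=1e-6):\n    # same update as gradient_descent, but stop early when the cost stops improving\n    m = len(y)\n    prev_cost = float('inf')\n    for it in range(max_iterations):\n        prediction = np.dot(X, theta)\n        theta = theta - (1/m)*learning_rate*(X.T.dot(prediction - y))\n        cost = cal_cost(theta, X, y)\n        if abs(prev_cost - cost) < tol:\n            break\n        prev_cost = cost\n    return theta, it\n\ntheta_tol, stopped_at = gradient_descent_tol(X_b, y, np.random.randn(2,1), learning_rate=0.01)\nprint(stopped_at, theta_tol.ravel())\n```\n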

    Let's plot the cost history over iterations\n\n\n```python\nfig,ax = plt.subplots(figsize=(12,8))\n\nax.set_ylabel('J(Theta)')\nax.set_xlabel('Iterations')\n_=ax.plot(range(n_iter),cost_history,'b.')\n```\n\n

    After around 150 iterations the cost is flat so the remaining iterations are not needed or will not result in any further optimization. Let us zoom in till iteration 200 and see the curve\n\n\n```python\n\nfig,ax = plt.subplots(figsize=(10,8))\n_=ax.plot(range(200),cost_history[:200],'b.')\n```\n\nIt is worth while to note that the cost drops faster initially and then the gain in cost reduction is not as much\n\n### It would be great to see the effect of different learning rates and iterations together\n\n### Let us build a function which can show the effects together and also show how gradient decent actually is working\n\n\n```python\n\ndef plot_GD(n_iter,lr,ax,ax1=None):\n \"\"\"\n n_iter = no of iterations\n lr = Learning Rate\n ax = Axis to plot the Gradient Descent\n ax1 = Axis to plot cost_history vs Iterations plot\n\n \"\"\"\n _ = ax.plot(X,y,'b.')\n theta = np.random.randn(2,1)\n\n tr =0.1\n cost_history = np.zeros(n_iter)\n for i in range(n_iter):\n pred_prev = X_b.dot(theta)\n theta,h,_ = gradient_descent(X_b,y,theta,lr,1)\n pred = X_b.dot(theta)\n\n cost_history[i] = h[0]\n\n if ((i % 25 == 0) ):\n _ = ax.plot(X,pred,'r-',alpha=tr)\n if tr < 0.8:\n tr = tr+0.2\n if not ax1== None:\n _ = ax1.plot(range(n_iter),cost_history,'b.') \n```\n\n### Plot the graphs for different iterations and learning rates combination\n\n\n```python\nfig = plt.figure(figsize=(30,25),dpi=200)\nfig.subplots_adjust(hspace=0.4, wspace=0.4)\n\nit_lr =[(2000,0.001),(500,0.01),(200,0.05),(100,0.1)]\ncount =0\nfor n_iter, lr in it_lr:\n count += 1\n \n ax = fig.add_subplot(4, 2, count)\n count += 1\n \n ax1 = fig.add_subplot(4,2,count)\n \n ax.set_title(\"lr:{}\".format(lr))\n ax1.set_title(\"Iterations:{}\".format(n_iter))\n plot_GD(n_iter,lr,ax,ax1)\n \n```\n\n See how useful it is to visualize the effect of learning rates and iterations on gradient descent. 
The red lines show how the gradient descent starts and then slowly gets closer to the final value\n\n## You can always plot Indiviual graphs to zoom in\n\n\n```python\n_,ax = plt.subplots(figsize=(14,10))\nplot_GD(100,0.1,ax)\n```\n\n# Stochastic Gradient Descent\n\n\n```python\ndef stocashtic_gradient_descent(X,y,theta,learning_rate=0.01,iterations=10):\n '''\n X = Matrix of X with added bias units\n y = Vector of Y\n theta=Vector of thetas np.random.randn(j,1)\n learning_rate \n iterations = no of iterations\n \n Returns the final theta vector and array of cost history over no of iterations\n '''\n m = len(y)\n cost_history = np.zeros(iterations)\n \n \n for it in range(iterations):\n cost =0.0\n for i in range(m):\n rand_ind = np.random.randint(0,m)\n X_i = X[rand_ind,:].reshape(1,X.shape[1])\n y_i = y[rand_ind].reshape(1,1)\n prediction = np.dot(X_i,theta)\n\n theta = theta -(1/m)*learning_rate*( X_i.T.dot((prediction - y_i)))\n cost += cal_cost(theta,X_i,y_i)\n cost_history[it] = cost\n \n return theta, cost_history\n```\n\n\n```python\nlr =0.5\nn_iter = 50\n\ntheta = np.random.randn(2,1)\n\nX_b = np.c_[np.ones((len(X),1)),X]\ntheta,cost_history = stocashtic_gradient_descent(X_b,y,theta,lr,n_iter)\n\n\nprint('Theta0: {:0.3f},\\nTheta1: {:0.3f}'.format(theta[0][0],theta[1][0]))\nprint('Final cost/MSE: {:0.3f}'.format(cost_history[-1]))\n```\n\n Theta0: 3.878,\n Theta1: 3.017\n Final cost/MSE: 57.764\n\n\n\n```python\nfig,ax = plt.subplots(figsize=(10,8))\n\nax.set_ylabel('{J(Theta)}',rotation=0)\nax.set_xlabel('{Iterations}')\ntheta = np.random.randn(2,1)\n\n_=ax.plot(range(n_iter),cost_history,'b.')\n```\n\n# Mini Batch Gradient Descent\n\n\n```python\ndef minibatch_gradient_descent(X,y,theta,learning_rate=0.01,iterations=10,batch_size =20):\n '''\n X = Matrix of X without added bias units\n y = Vector of Y\n theta=Vector of thetas np.random.randn(j,1)\n learning_rate \n iterations = no of iterations\n \n Returns the final theta vector and array of cost history over no of iterations\n '''\n m = len(y)\n cost_history = np.zeros(iterations)\n n_batches = int(m/batch_size)\n \n for it in range(iterations):\n cost =0.0\n indices = np.random.permutation(m)\n X = X[indices]\n y = y[indices]\n for i in range(0,m,batch_size):\n X_i = X[i:i+batch_size]\n y_i = y[i:i+batch_size]\n \n X_i = np.c_[np.ones(len(X_i)),X_i]\n \n prediction = np.dot(X_i,theta)\n\n theta = theta -(1/m)*learning_rate*( X_i.T.dot((prediction - y_i)))\n cost += cal_cost(theta,X_i,y_i)\n cost_history[it] = cost\n \n return theta, cost_history\n```\n\n\n```python\nlr =0.1\nn_iter = 200\n\ntheta = np.random.randn(2,1)\n\n\ntheta,cost_history = minibatch_gradient_descent(X,y,theta,lr,n_iter)\n\n\nprint('Theta0: {:0.3f},\\nTheta1: {:0.3f}'.format(theta[0][0],theta[1][0]))\nprint('Final cost/MSE: {:0.3f}'.format(cost_history[-1]))\n```\n\n Theta0: 3.885,\n Theta1: 2.961\n Final cost/MSE: 1296.250\n\n\n\n```python\nfig,ax = plt.subplots(figsize=(10,8))\n\nax.set_ylabel('{J(Theta)}',rotation=0)\nax.set_xlabel('{Iterations}')\ntheta = np.random.randn(2,1)\n\n_=ax.plot(range(n_iter),cost_history,'b.')\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "846cfb91115e205c287d58377af3d31ec07d177a", "size": 974016, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Day 24 - Gradient Descent Manual/GradientDescent.ipynb", "max_stars_repo_name": "Satyam-Bhalla/Machine-Learning-Practice", "max_stars_repo_head_hexsha": "0ae4b8ae9501fb0a22b236dbc508fe6b32e21f42", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, 
"max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Day 24 - Gradient Descent Manual/GradientDescent.ipynb", "max_issues_repo_name": "Satyam-Bhalla/Machine-Learning-Practice", "max_issues_repo_head_hexsha": "0ae4b8ae9501fb0a22b236dbc508fe6b32e21f42", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Day 24 - Gradient Descent Manual/GradientDescent.ipynb", "max_forks_repo_name": "Satyam-Bhalla/Machine-Learning-Practice", "max_forks_repo_head_hexsha": "0ae4b8ae9501fb0a22b236dbc508fe6b32e21f42", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2019-03-28T12:31:48.000Z", "max_forks_repo_forks_event_max_datetime": "2019-10-31T05:40:08.000Z", "avg_line_length": 1054.1298701299, "max_line_length": 818536, "alphanum_fraction": 0.9536927525, "converted": true, "num_tokens": 2943, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9124361604769413, "lm_q2_score": 0.9173026528034426, "lm_q1q2_score": 0.8369801105192859}} {"text": "# Simulation of variance of OLS parameters\nRecently, I've spend some time to understand what *variance* of parameters actually means. If I understand it correctly, the variance of an estimate refers to how much we expect our estimate ($\\hat{\\beta}$) to vary from the true population parameter ($\\beta$). In OLS, two terms contribute to the variance estimate: the unexplained variance and the design variance. \n\nUnexplained variance, also referred to as $\\hat{\\sigma}^{2}$, is defined as:\n\n\\begin{align}\n\\hat{\\sigma}^{2} = \\frac{\\sum_{i=1}^{N} (y_{i} - \\hat{y}_{i})^{2}}{N - K}\n\\end{align}\n\nThe design variance, for any contrast $c$, is defined as:\n\n\\begin{align}\ndesvar = c(X'X)^{-1}c'\n\\end{align}\n\nAs such, the variance of any contrast of beta-parameter(s) is defined as:\n\n\\begin{align}\n\\mathrm{var}[c\\beta] = \\frac{\\sum_{i=1}^{N} (y_{i} - \\hat{y}_{i})^{2}}{N - K} c(X'X)^{-1}c'\n\\end{align}\n\nWhile mathematically, I understand *how* variance is computed. But it's conceptual meaning -- i.e. the expected squared deviation from the true population parameter -- is still a bit muddy for me. As such, I decided to simulate the process, which often helps me to understand statistical issues.\n\nFirst, some imports.\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n```\n\nNow, let's define an arbitrary *true* model of some dependent variable $y$. Let's assume the true function is:\n\n\\begin{align}\ny = 10 + 5x\n\\end{align}\n\n\n```python\nx = np.arange(100)\ny = 10 + 5 * x\nplt.plot(x, y)\nplt.title(\"True function of y\")\nplt.show()\n```\n\nLet's now add some random gaussian noise ($\\mu = 0, \\sigma=1$) to our dependent variable and estimate the parameter of the intercept and $x$ using linear regression. We'll also calculate the variance of this estimate (using the contrast-vector `[1, 0]` for the intercept-parameter and `[0, 1]` for the $x$-parameter). 
We'll iterate this process 50,000 times and keep track of the parameter-estimates and their variance-estimates.\n\n\n```python\niters = 50000\nbetas = np.zeros((iters, 2))\nvars = np.zeros((iters, 2))\nc = np.array([[1, 0], [0, 1]])\nX = np.hstack((np.ones((100, 1)), x[:, np.newaxis]))\n \nfor i in range(iters):\n y2 = y + np.random.normal(0, 1, 100) * 100\n betas[i, :] = np.linalg.lstsq(X, y2)[0]\n # use the estimate from *this* iteration to compute the residuals\n y_hat = X.dot(betas[i, :])\n sigma_hat = np.sum((y_hat - y2) ** 2) / (X.shape[0] - X.shape[1])\n desvar = c.dot(np.linalg.pinv(X.T.dot(X))).dot(c.T)\n vars[i, :] = np.diag(sigma_hat * desvar)\n```\n\nLet's plot the histograms of the two parameter-estimates:\n\n\n```python\nplt.figure(figsize=(15, 5))\nplt.subplot(1, 2, 1)\nplt.hist(betas[:, 0], bins=50)\nplt.title(r\"$\\hat{\\beta}_{0}$\", fontsize=20)\n\nplt.subplot(1, 2, 2)\nplt.hist(betas[:, 1], bins=50)\nplt.title(r\"$\\hat{\\beta}_{x}$\", fontsize=20)\n\nplt.show()\n```\n\nNow, the central limit theorem (CLT) predicts that the mean of this distribution should approximate the true parameters (I think). Let's check that:\n\n\n```python\nbetas.mean(axis=0)\n```\n\n\n\n\n array([ 10.10447338, 4.99799692])\n\n\n\nNow, let's calculate the variance of our parameter-estimates across the simulations:\n\n\n```python\nbetas.var(axis=0)\n```\n\n\n\n\n array([ 3.94466169e+02, 1.19993340e-01])\n\n\n\nThis estimate from our simulation should then (I think) approximate the average variance estimate across simulations:\n\n\n```python\nvars.mean(axis=0)\n```\n\n\n\n\n array([ 3.90509682e+03, 1.18930922e+00])\n\n\n\n... which seems indeed the case!\n\n\n```python\nplt.figure(figsize=(15, 5))\nplt.subplot(1, 2, 1)\nplt.hist(vars[:, 0], bins=50)\nplt.title(r\"$\\hat{\\mathrm{var}}[\\beta_{0}]$\", fontsize=20)\n\nplt.subplot(1, 2, 2)\nplt.hist(vars[:, 1], bins=50)\nplt.title(r\"$\\hat{\\mathrm{var}}[\\beta_{x}]$\", fontsize=20)\n\nplt.show()\n```\n
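\nAs a final sanity check -- a small sketch that reuses the design matrix `X` defined above together with the known noise standard deviation of 100 -- we can compare the simulated variances against the analytic OLS result $\\mathrm{var}[\\hat{\\beta}] = \\sigma^{2}(X'X)^{-1}$:\n\n\n```python\n# Analytic variance of the OLS estimates: sigma^2 * (X'X)^-1 with sigma = 100.\n# The diagonal should be close to betas.var(axis=0) computed above.\nanalytic_vars = np.diag(100**2 * np.linalg.pinv(X.T.dot(X)))\nprint(analytic_vars)\n```\n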
YES", "lm_q1_score": 0.9433475730993027, "lm_q2_score": 0.8872046011730964, "lm_q1q2_score": 0.8369423073591753}} {"text": "# Z-Score\n\nA z-score measures how many standard deviations above or below the mean a data point is.\n\n> \n\\begin{align}\nz = \\frac{data\\ point - mean}{standard\\ deviation} \\\\\n\\end{align}\n\n\n## Important facts\n\n- A positive z-score says the data point is above average.\n- A negative z-score says the data point is below average.\n- A z-score close to 0 says the data point is close to average.\n\n__Source:__ [Khan Academy](https://www.khanacademy.org/math/statistics-probability/modeling-distributions-of-data/z-scores/a/z-scores-review)\n\n## Code\n\n### Packages\n\n\n```python\nimport numpy as np\n```\n\n### Function\n\n\n```python\ndef z_score(sample, data_point):\n # data point needs to come from the sample\n assert data_point in sample\n avg = np.mean(sample)\n sd = np.std(sample)\n z_score = (data_point - avg)/sd\n return z_score\n```\n\n### Input\n\n\n```python\nsample = np.random.randn(30)\nsample\n```\n\n\n\n\n array([-0.59877513, -0.51223475, -1.40174708, -1.86207935, -0.10035927,\n 0.12655593, 0.32697515, -0.35919666, -1.865143 , -0.27180892,\n 0.42933215, -0.44281586, -0.30340738, 1.17923063, -0.09077022,\n -0.60174546, 0.34032583, -0.58059233, -1.90742451, -1.00759512,\n 0.73630683, -0.13672296, 1.54279119, -0.4798372 , 1.41396989,\n 0.87904765, -0.9446896 , 1.02256431, 0.77302047, -0.15333515])\n\n\n\n\n```python\n# grab the first data point\nfirst_data_point = sample[0]\nfirst_data_point\n```\n\n\n\n\n -0.59877512996891891\n\n\n\n### Output\n\n\n```python\nz_score(sample, first_data_point)\n```\n\n\n\n\n -0.4795330537549915\n\n\n", "meta": {"hexsha": "1ee1a6b981dc3488418501c541f274f070425f19", "size": 4613, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/statistics/z_score.ipynb", "max_stars_repo_name": "kvn219/notebooks", "max_stars_repo_head_hexsha": "8d218c48e28d1cebdd39152b6496335a587e96eb", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notebooks/statistics/z_score.ipynb", "max_issues_repo_name": "kvn219/notebooks", "max_issues_repo_head_hexsha": "8d218c48e28d1cebdd39152b6496335a587e96eb", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/statistics/z_score.ipynb", "max_forks_repo_name": "kvn219/notebooks", "max_forks_repo_head_hexsha": "8d218c48e28d1cebdd39152b6496335a587e96eb", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 21.7594339623, "max_line_length": 147, "alphanum_fraction": 0.5168003468, "converted": true, "num_tokens": 522, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9449947163538936, "lm_q2_score": 0.8856314738181875, "lm_q1q2_score": 0.8369170633948989}} {"text": "# Package\n\n\n```python\nimport numpy as np \nimport pandas as pd \nimport matplotlib.pyplot as plt \nimport yfinance as yf\nimport yahoofinancials\nfrom IPython.core.display import HTML\nHTML(\"\"\"\n\n\"\"\")\npd.options.plotting.backend = 'plotly'\n```\n\n\n\n\n\n\n\n\n\n\n# Random Walk\n\nPerhatikan suatu partikel bergerak sepanjang garis, berangkat dari titik 0. 
Every unit of time, the particle moves one unit of distance to the left or to the right, with the probabilities of moving left or right given by \n\n\\begin{equation}\np_{i,i+1} = \\frac{1}{2} = p_{i,i-1}, \\quad i = 0, \\pm 1, \\pm 2, \\dots\n\\end{equation}\n\nNext, the process is sped up by taking ever shorter steps over ever shorter time intervals. In each time interval $\\Delta t$ the particle steps to the left or to the right by $\\Delta x$, with equal probability. \n \nLet $X(t)$ denote the position of the particle at time $t$ with $t = n \\Delta t (n=0,1,2,\\dots)$; then \n\\begin{equation}\nX(t) = 0 + X_1 \\Delta x + X_2 \\Delta x + \\dots + X_n \\Delta x\n\\end{equation}\n\n\n```python\nn = 500 # number of steps \nsigma = 1 # volatility\nT = 3 # total time\ndt = T/n # delta t\ndx = sigma*dt**0.5 # delta x\nt_i = np.linspace(0, T, n) # discretization of t\nX_i = 2*(np.random.rand(1,n)<0.5) - 1 # each X_i is either 1 or -1 \nX = dx*np.cumsum(X_i)\n\nplt.plot(t_i, X)\nplt.grid(True)\nplt.xlabel('t')\nplt.ylabel('X(t): particle position')\nplt.title('Simulation of a Symmetric 1-D Random Walk')\nplt.show()\n```\n\n# Log-Normal Random Variable\n\n\n```python\nx = np.linspace(0.01,4,500)\nmu = 0.05\nsigma = 0.3\na = ((np.log(x)-mu)**2)/(2*sigma**2)\nb = x*sigma*(2*np.pi)**0.5\ny1 = np.exp(-a)/b\n\nsigma = 0.5\na = ((np.log(x)-mu)**2)/(2*sigma**2)\nb = x*sigma*(2*np.pi)**0.5\ny2 = np.exp(-a)/b\n\nplt.plot(x,y1, 'r-')\nplt.plot(x,y2, 'b:')\nplt.ylim([0, 1.5])\nplt.xlabel('x')\nplt.ylabel('f(x)')\nplt.legend([r'$\\sigma = 0.3$', r'$\\sigma = 0.5$' ])\nplt.title(r'Log-Normal Probability Density Function, $\\mu$ = 0.05')\nplt.show()\n```\n\n# Application to a Stock Price Model\n\n\n```python\nS = 6987 # initial stock price\nmu = 0.3\nsigma = 0.3 # volatility\nT = 1 # total time\nL = 252\ndt = T/L\nM = 1\n```\n
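\nAs a sketch of how these parameters could be used -- assuming the standard geometric Brownian motion model $S_{t+\\Delta t} = S_{t}\\exp\\big((\\mu - \\tfrac{1}{2}\\sigma^{2})\\Delta t + \\sigma\\sqrt{\\Delta t}\\,Z\\big)$ with $Z \\sim N(0,1)$, and that `M` is meant as the number of simulated paths -- we can simulate and plot the price paths:\n\n\n```python\n# Simulate M geometric Brownian motion paths over L time steps (a sketch,\n# using the parameters defined in the cell above).\nZ = np.random.randn(M, L)\nincrements = (mu - 0.5*sigma**2)*dt + sigma*(dt**0.5)*Z\nS_t = S*np.exp(np.cumsum(increments, axis=1))\n\nplt.plot(np.linspace(0, T, L), S_t.T)\nplt.xlabel('t')\nplt.ylabel('S(t): stock price')\nplt.title('Simulated Geometric Brownian Motion Paths')\nplt.show()\n```\n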
YES", "lm_q1_score": 0.9416541626630937, "lm_q2_score": 0.8887588038050466, "lm_q1q2_score": 0.8369034272064939}} {"text": "# Factorization\nFactorization is the process of restating an expression as the *product* of two expressions (in other words, expressions multiplied together).\n\nFor example, you can make the value **16** by performing the following multiplications of integer numbers:\n- 1 x 16\n- 2 x 8\n- 4 x 4\n\nAnother way of saying this is that 1, 2, 4, 8, and 16 are all factors of 16.\n\n## Factors of Polynomial Expressions\nWe can apply the same logic to polynomial expressions. For example, consider the following monomial expression:\n\n\\begin{equation}-6x^{2}y^{3} \\end{equation}\n\nYou can get this value by performing the following multiplication:\n\n\\begin{equation}(2xy^{2})(-3xy) \\end{equation}\n\nRun the following Python code to test this with arbitrary ***x*** and ***y*** values:\n\n\n```python\nfrom random import randint\nx = randint(1,100)\ny = randint(1,100)\n\n(2*x*y**2)*(-3*x*y) == -6*x**2*y**3\n```\n\nSo, we can say that **2xy2** and **-3xy** are both factors of **-6x2y3**.\n\nThis also applies to polynomials with more than one term. For example, consider the following expression:\n\n\\begin{equation}(x + 2)(2x^{2} - 3y + 2) = 2x^{3} + 4x^{2} - 3xy + 2x - 6y + 4 \\end{equation}\n\nBased on this, **x+2** and **2x2 - 3y + 2** are both factors of **2x3 + 4x2 - 3xy + 2x - 6y + 4**.\n\n(and if you don't believe me, you can try this with random values for x and y with the following Python code):\n\n\n```python\nfrom random import randint\nx = randint(1,100)\ny = randint(1,100)\n\n(x + 2)*(2*x**2 - 3*y + 2) == 2*x**3 + 4*x**2 - 3*x*y + 2*x - 6*y + 4\n```\n\n## Greatest Common Factor\nOf course, these may not be the only factors of **-6x2y3**, just as 8 and 2 are not the only factors of 16.\n\nAdditionally, 2 and 8 aren't just factors of 16; they're factors of other numbers too - for example, they're both factors of 24 (because 2 x 12 = 24 and 8 x 3 = 24). Which leads us to the question, what is the highest number that is a factor of both 16 and 24? Well, let's look at all the numbers that multiply evenly into 12 and all the numbers that multiply evenly into 24:\n\n| 16 | 24 |\n|--------|--------|\n| 1 x 16 | 1 x 24 |\n| 2 x **8** | 2 x 12 |\n| | 3 x **8** |\n| 4 x 4 | 4 x 6 |\n\nThe highest value that is a multiple of both 16 and 24 is **8**, so 8 is the *Greatest Common Factor* (or GCF) of 16 and 24.\n\nOK, let's apply that logic to the following expressions:\n\n\\begin{equation}15x^{2}y\\;\\;\\;\\;\\;\\;\\;\\;9xy^{3}\\end{equation}\n\nSo what's the greatest common factor of these two expressions?\n\nIt helps to break the expressions into their consitituent components. Let's deal with the coefficients first; we have 15 and 9. The highest value that divides evenly into both of these is **3** (3 x 5 = 15 and 3 x 3 = 9).\n\nNow let's look at the ***x*** terms; we have x2 and x. The highest value that divides evenly into both is these is **x** (*x* goes into *x* once and into *x*2 *x* times).\n\nFinally, for our ***y*** terms, we have y and y3. 
The highest value that divides evenly into both of these is **y** (*y* goes into *y* once and into *y*<sup>3</sup> *y•y* times).\n\nPutting all of that together, the GCF of both of our expressions is:\n\n\\begin{equation}3xy\\end{equation}\n\nAn easy shortcut to identifying the GCF of an expression that includes variables with exponents is that it will always consist of:\n- The *largest* numeric factor of the numeric coefficients in the polynomial expressions (in this case 3)\n- The *smallest* exponent of each variable (in this case, x and y, which technically are x<sup>1</sup> and y<sup>1</sup>).\n\nYou can check your answer by dividing the original expressions by the GCF to find the coefficient expressions for the GCF (in other words, how many times the GCF divides into the original expression). The result, when multiplied by the GCF, will always produce the original expression. So in this case, we need to perform the following divisions:\n\n\\begin{equation}\\frac{15x^{2}y}{3xy}\\;\\;\\;\\;\\;\\;\\;\\;\\frac{9xy^{3}}{3xy}\\end{equation}\n\nThese fractions simplify to **5x** and **3y<sup>2</sup>**, giving us the following calculations to prove our factorization:\n\n\\begin{equation}3xy(5x) = 15x^{2}y\\end{equation}\n\\begin{equation}3xy(3y^{2}) = 9xy^{3}\\end{equation}\n\nLet's try both of those in Python:\n\n\n```python\nfrom random import randint\nx = randint(1,100)\ny = randint(1,100)\n\nprint((3*x*y)*(5*x) == 15*x**2*y)\nprint((3*x*y)*(3*y**2) == 9*x*y**3)\n```\n\n## Distributing Factors\nLet's look at another example. Here is a binomial expression:\n\n\\begin{equation}6x + 15y \\end{equation}\n\nTo factor this, we need to find a value that divides evenly into both of these terms. In this case, we can use **3** to factor the coefficients, because 3 • 2x = 6x and 3 • 5y = 15y, so we can write our original expression as:\n\n\\begin{equation}6x + 15y = 3(2x) + 3(5y) \\end{equation}\n\nNow, remember the distributive property? It enables us to multiply each term of an expression by the same factor to calculate the product of the expression multiplied by the factor. We can *factor-out* the common factor in this expression to distribute it like this:\n\n\\begin{equation}6x + 15y = 3(2x) + 3(5y) = \\mathbf{3(2x + 5y)} \\end{equation}\n\nLet's prove to ourselves that these all evaluate to the same thing:\n\n\n```python\nfrom random import randint\nx = randint(1,100)\ny = randint(1,100)\n\n(6*x + 15*y) == (3*(2*x) + 3*(5*y)) == (3*(2*x + 5*y))\n```\n\nFor something a little more complex, let's return to our previous example. Suppose we want to add our original 15x<sup>2</sup>y and 9xy<sup>3</sup> expressions:\n\n\\begin{equation}15x^{2}y + 9xy^{3}\\end{equation}\n\nWe've already calculated the common factor, so we know that:\n\n\\begin{equation}3xy(5x) = 15x^{2}y\\end{equation}\n\\begin{equation}3xy(3y^{2}) = 9xy^{3}\\end{equation}\n\nNow we can factor-out the common factor to produce a single expression:\n\n\\begin{equation}15x^{2}y + 9xy^{3} = \\mathbf{3xy(5x + 3y^{2})}\\end{equation}\n\nAnd here's the Python test code:\n\n\n```python\nfrom random import randint\nx = randint(1,100)\ny = randint(1,100)\n\n(15*x**2*y + 9*x*y**3) == (3*x*y*(5*x + 3*y**2))\n```\n
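\nAs a cross-check, we can also ask a computer algebra system for the factorization. This is just a sketch using the `sympy` library, which isn't used elsewhere in this notebook:\n\n\n```python\nfrom sympy import symbols, factor\n\nx, y = symbols('x y')\n\n# factor() recovers the greatest common factor automatically\nfactor(15*x**2*y + 9*x*y**3)\n```\n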
\nSo you might be wondering what's so great about being able to distribute the common factor like this. The answer is that it can often be useful to apply a common factor to multiple terms in order to solve seemingly complex problems.\n\nFor example, consider this:\n\n\\begin{equation}x^{2} + y^{2} + z^{2} = 127\\end{equation}\n\nNow solve this equation:\n\n\\begin{equation}a = 5x^{2} + 5y^{2} + 5z^{2}\\end{equation}\n\nAt first glance, this seems tricky because there are three unknown variables, and even though we know that their squares add up to 127, we don't know their individual values. However, we can distribute the common factor and apply what we *do* know. Let's restate the problem like this:\n\n\\begin{equation}a = 5(x^{2} + y^{2} + z^{2})\\end{equation}\n\nNow it becomes easier to solve, because we know that the expression in parentheses is equal to 127, so actually our equation is:\n\n\\begin{equation}a = 5(127)\\end{equation}\n\nSo ***a*** is 5 times 127, which is 635.\n\n## Formulae for Factoring Squares\nThere are some useful ways that you can employ factoring to deal with expressions that contain squared values (that is, values with an exponent of 2).\n\n### Differences of Squares\nConsider the following expression:\n\n\\begin{equation}x^{2} - 9\\end{equation}\n\nThe constant *9* is 3<sup>2</sup>, so we could rewrite this as:\n\n\\begin{equation}x^{2} - 3^{2}\\end{equation}\n\nWhenever you need to subtract one squared term from another, you can use an approach called the *difference of squares*, whereby we can factor *a<sup>2</sup> - b<sup>2</sup>* as *(a - b)(a + b)*; so we can rewrite the expression as:\n\n\\begin{equation}(x - 3)(x + 3)\\end{equation}\n\nRun the code below to check this:\n\n\n```python\nfrom random import randint\nx = randint(1,100)\n\n(x**2 - 9) == (x - 3)*(x + 3)\n```\n\n### Perfect Squares\nA *perfect square* is a number multiplied by itself, for example 3 multiplied by 3 is 9, so 9 is a perfect square.\n\nWhen working with equations, the ability to factor between polynomial expressions and binomial perfect square expressions can be a useful tool. For example, consider this expression:\n\n\\begin{equation}x^{2} + 10x + 25\\end{equation}\n\nWe can factor this as a pair of identical binomials:\n\n\\begin{equation}(x + 5)(x + 5)\\end{equation}\n\nSo what happened here?\n\nWell, first we noticed the value 5 hiding in our coefficients: 5 goes into 10 twice, and 25 is 5 squared. Then we just expressed this factoring as a multiplication of two identical binomials *(x + 5)(x + 5)*.\n\nRemember, the rule for multiplication of polynomials is to multiply each term in the first polynomial by each term in the second polynomial and then add the results; so you can do this to verify the factorization:\n\n- x • x = x<sup>2</sup>\n- x • 5 = 5x\n- 5 • x = 5x\n- 5 • 5 = 25\n\nWhen you combine the two 5x terms, we get back to our original expression of x<sup>2</sup> + 10x + 25.\n\nNow we have an expression multiplied by itself; in other words, a perfect square. We can therefore rewrite this as:\n\n\\begin{equation}(x + 5)^{2}\\end{equation}\n\nFactorization of perfect squares is a useful technique, as you'll see when we start to tackle quadratic equations in the next section. In fact, it's so useful that it's worth memorizing its formula:\n\n\\begin{equation}(a + b)^{2} = a^{2} + b^{2} + 2ab \\end{equation}\n\nIn our example, the *a* term is ***x*** and the *b* term is ***5***, and in standard form, our equation *x<sup>2</sup> + 10x + 25* is actually *a<sup>2</sup> + 2ab + b<sup>2</sup>*. 
The operations are all additions, so the order isn't actually important!\n\nRun the following code with random values for *a* and *b* to verify that the formula works:\n\n\n```python\nfrom random import randint\na = randint(1,100)\nb = randint(1,100)\n\na**2 + b**2 + (2*a*b) == (a + b)**2\n```\n", "meta": {"hexsha": "16a0c3ae606a8e6cff40f2f68588cce068e2c04d", "size": 12601, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "MathsToML/Module01-Equations, Graphs, and Functions/01-06-Factorization.ipynb", "max_stars_repo_name": "hpaucar/data-mining-repo", "max_stars_repo_head_hexsha": "d0e48520bc6c01d7cb72e882154cde08020e1d33", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "MathsToML/Module01-Equations, Graphs, and Functions/01-06-Factorization.ipynb", "max_issues_repo_name": "hpaucar/data-mining-repo", "max_issues_repo_head_hexsha": "d0e48520bc6c01d7cb72e882154cde08020e1d33", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "MathsToML/Module01-Equations, Graphs, and Functions/01-06-Factorization.ipynb", "max_forks_repo_name": "hpaucar/data-mining-repo", "max_forks_repo_head_hexsha": "d0e48520bc6c01d7cb72e882154cde08020e1d33", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 94.0373134328, "max_line_length": 2840, "alphanum_fraction": 0.6467740656, "converted": true, "num_tokens": 3110, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8887587934924569, "lm_q2_score": 0.9416541634817872, "lm_q1q2_score": 0.836903418223222}} {"text": "# Simulating a Yo-Yo\n\n*Modeling and Simulation in Python*\n\nCopyright 2021 Allen Downey\n\nLicense: [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International](https://creativecommons.org/licenses/by-nc-sa/4.0/)\n\n\n```python\n# install Pint if necessary\n\ntry:\n import pint\nexcept ImportError:\n !pip install pint\n```\n\n\n```python\n# download modsim.py if necessary\n\nfrom os.path import exists\n\nfilename = 'modsim.py'\nif not exists(filename):\n from urllib.request import urlretrieve\n url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/'\n local, _ = urlretrieve(url+filename, filename)\n print('Downloaded ' + local)\n```\n\n\n```python\n# import functions from modsim\n\nfrom modsim import *\n```\n\n## Yo-yo\n\nSuppose you are holding a yo-yo with a length of string wound around its axle, and you drop it while holding the end of the string stationary. As gravity accelerates the yo-yo downward, tension in the string exerts a force upward. Since this force acts on a point offset from the center of mass, it exerts a torque that causes the yo-yo to spin.\n\nThe following diagram shows the forces on the yo-yo and the resulting torque. The outer shaded area shows the body of the yo-yo. 
The inner shaded area shows the rolled up string, the radius of which changes as the yo-yo unrolls.\n\n\n\nIn this system, we can't figure out the linear and angular acceleration independently; we have to solve a system of equations:\n\n$\\sum F = m a $\n\n$\\sum \\tau = I \\alpha$\n\nwhere the summations indicate that we are adding up forces and torques.\n\nAs in the previous examples, linear and angular velocity are related because of the way the string unrolls:\n\n$\\frac{dy}{dt} = -r \\frac{d \\theta}{dt} $\n\nIn this example, the linear and angular accelerations have opposite sign. As the yo-yo rotates counter-clockwise, $\\theta$ increases and $y$, which is the length of the rolled part of the string, decreases.\n\nTaking the derivative of both sides yields a similar relationship between linear and angular acceleration:\n\n$\\frac{d^2 y}{dt^2} = -r \\frac{d^2 \\theta}{dt^2} $\n\nWhich we can write more concisely:\n\n$ a = -r \\alpha $\n\nThis relationship is not a general law of nature; it is specific to scenarios like this where there is rolling without stretching or slipping.\n\nBecause of the way we've set up the problem, $y$ actually has two meanings: it represents the length of the rolled string and the height of the yo-yo, which decreases as the yo-yo falls. Similarly, $a$ represents acceleration in the length of the rolled string and the height of the yo-yo.\n\nWe can compute the acceleration of the yo-yo by adding up the linear forces:\n\n$\\sum F = T - mg = ma $\n\nWhere $T$ is positive because the tension force points up, and $mg$ is negative because gravity points down.\n\nBecause gravity acts on the center of mass, it creates no torque, so the only torque is due to tension:\n\n$\\sum \\tau = T r = I \\alpha $\n\nPositive (upward) tension yields positive (counter-clockwise) angular acceleration.\n\nNow we have three equations in three unknowns, $T$, $a$, and $\\alpha$, with $I$, $m$, $g$, and $r$ as known parameters. We could solve these equations by hand, but we can also get SymPy to do it for us.\n\n\n```python\nfrom sympy import symbols, Eq, solve\n\nT, a, alpha, I, m, g, r = symbols('T a alpha I m g r')\n```\n\n\n```python\neq1 = Eq(a, -r * alpha)\neq1\n```\n\n\n```python\neq2 = Eq(T - m * g, m * a)\neq2\n```\n\n\n```python\neq3 = Eq(T * r, I * alpha)\neq3\n```\n\n\n```python\nsoln = solve([eq1, eq2, eq3], [T, a, alpha])\n```\n\n\n```python\nsoln[T]\n```\n\n\n```python\nsoln[a]\n```\n\n\n```python\nsoln[alpha]\n```\n\nThe results are\n\n$T = m g I / I^* $\n\n$a = -m g r^2 / I^* $\n\n$\\alpha = m g r / I^* $\n\nwhere $I^*$ is the augmented moment of inertia, $I + m r^2$.\n\nYou can also see [the derivation of these equations in this video](https://www.youtube.com/watch?v=chC7xVDKl4Q).\n\nWe can use these equations for $a$ and $\\alpha$ to write a slope function and simulate this system.\n\n**Exercise:** Simulate the descent of a yo-yo. How long does it take to reach the end of the string?\n\nHere are the system parameters:\n\n\n```python\nRmin = 8e-3 # m\nRmax = 16e-3 # m\nRout = 35e-3 # m\nmass = 50e-3 # kg\nL = 1 # m\ng = 9.8 # m / s**2\n```\n\n* `Rmin` is the radius of the axle. `Rmax` is the radius of the axle plus rolled string.\n\n* `Rout` is the radius of the yo-yo body. `mass` is the total mass of the yo-yo, ignoring the string. 
\n\n* `L` is the length of the string.\n\n* `g` is the acceleration of gravity.\n\n\n```python\n1 / (Rmax)\n```\n\nBased on these parameters, we can compute the moment of inertia for the yo-yo, modeling it as a solid cylinder with uniform density ([see here](https://en.wikipedia.org/wiki/List_of_moments_of_inertia)).\n\nIn reality, the distribution of weight in a yo-yo is often designed to achieve desired effects. But we'll keep it simple.\n\n\n```python\nI = mass * Rout**2 / 2\nI\n```\n\nAnd we can compute `k`, which is the constant that determines how the radius of the spooled string decreases as it unwinds.\n\n\n```python\nk = (Rmax**2 - Rmin**2) / 2 / L \nk\n```\n\nThe state variables we'll use are angle, `theta`, angular velocity, `omega`, the length of the spooled string, `y`, and the linear velocity of the yo-yo, `v`.\n\nHere is a `State` object with the the initial conditions.\n\n\n```python\ninit = State(theta=0, omega=0, y=L, v=0)\n```\n\nAnd here's a `System` object with `init` and `t_end` (chosen to be longer than I expect for the yo-yo to drop 1 m).\n\n\n```python\nsystem = System(init=init, t_end=2)\n```\n\nWrite a slope function for this system, using these results from the book:\n\n$ r = \\sqrt{2 k y + R_{min}^2} $ \n\n$ T = m g I / I^* $\n\n$ a = -m g r^2 / I^* $\n\n$ \\alpha = m g r / I^* $\n\nwhere $I^*$ is the augmented moment of inertia, $I + m r^2$.\n\n\n```python\n# Solution goes here\n```\n\nTest your slope function with the initial conditions.\nThe results should be approximately\n\n```\n0, 180.5, 0, -2.9\n```\n\n\n```python\n# Solution goes here\n```\n\nNotice that the initial acceleration is substantially smaller than `g` because the yo-yo has to start spinning before it can fall.\n\nWrite an event function that will stop the simulation when `y` is 0.\n\n\n```python\n# Solution goes here\n```\n\nTest your event function:\n\n\n```python\n# Solution goes here\n```\n\nThen run the simulation.\n\n\n```python\n# Solution goes here\n```\n\nCheck the final state. If things have gone according to plan, the final value of `y` should be close to 0.\n\n\n```python\n# Solution goes here\n```\n\nHow long does it take for the yo-yo to fall 1 m? Does the answer seem reasonable?\n\nThe following cells plot the results.\n\n`theta` should increase and accelerate.\n\n\n```python\nresults.theta.plot(color='C0', label='theta')\ndecorate(xlabel='Time (s)',\n ylabel='Angle (rad)')\n```\n\n`y` should decrease and accelerate down.\n\n\n```python\nresults.y.plot(color='C1', label='y')\n\ndecorate(xlabel='Time (s)',\n ylabel='Length (m)')\n \n```\n\nPlot velocity as a function of time; is the acceleration constant?\n\n\n```python\nresults.v.plot(label='velocity', color='C3')\n\ndecorate(xlabel='Time (s)',\n ylabel='Velocity (m/s)')\n```\n\nWe can use `gradient` to estimate the derivative of `v`. 
How does the acceleration of the yo-yo compare to `g`?\n\n\n```python\na = gradient(results.v)\na.plot(label='acceleration', color='C4')\ndecorate(xlabel='Time (s)',\n ylabel='Acceleration (m/$s^2$)')\n```\n\nAnd we can use the formula for `r` to plot the radius of the spooled thread over time.\n\n\n```python\nr = np.sqrt(2*k*results.y + Rmin**2)\nr.plot(label='radius')\n\ndecorate(xlabel='Time (s)',\n ylabel='Radius of spooled thread (m)')\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "8ecf3111d94fe79ab3b13d9d3a40e84a7ba280d0", "size": 14773, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "examples/yoyo.ipynb", "max_stars_repo_name": "maciejkos/ModSimPy", "max_stars_repo_head_hexsha": "fe80a994689dafd282c3d479b19c90c34c590eb5", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2019-04-27T22:43:12.000Z", "max_stars_repo_stars_event_max_datetime": "2019-11-11T15:12:23.000Z", "max_issues_repo_path": "examples/yoyo.ipynb", "max_issues_repo_name": "Dicaromonroy/ModSimPy", "max_issues_repo_head_hexsha": "fe80a994689dafd282c3d479b19c90c34c590eb5", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 33, "max_issues_repo_issues_event_min_datetime": "2019-10-09T18:50:22.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-21T01:39:48.000Z", "max_forks_repo_path": "examples/yoyo.ipynb", "max_forks_repo_name": "Dicaromonroy/ModSimPy", "max_forks_repo_head_hexsha": "fe80a994689dafd282c3d479b19c90c34c590eb5", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 24.2976973684, "max_line_length": 354, "alphanum_fraction": 0.5361808705, "converted": true, "num_tokens": 2109, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9184802507195636, "lm_q2_score": 0.911179702173019, "lm_q1q2_score": 0.8369005613024518}} {"text": "# Summary of Quantum Operations \n\n\n```python\n# Useful additional packages \nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport numpy as np\nfrom math import pi\n```\n\n\n```python\nfrom qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister, execute\nfrom qiskit.tools.visualization import circuit_drawer\nfrom qiskit.quantum_info import state_fidelity\nfrom qiskit import BasicAer\n\nbackend = BasicAer.get_backend('unitary_simulator')\n```\n\n## Multi-Qubit Gates\n\n### Mathematical Preliminaries\n\nThe space of quantum computer grows exponential with the number of qubits. For $n$ qubits the complex vector space has dimensions $d=2^n$. To describe states of a multi-qubit system, the tensor product is used to \"glue together\" operators and basis vectors.\n\nLet's start by considering a 2-qubit system. 
Given two operators $A$ and $B$ that each act on one qubit, the joint operator $A \\otimes B$ acting on two qubits is\n\n$$\\begin{equation}\n\tA\\otimes B = \n\t\\begin{pmatrix} \n\t\tA_{00} \\begin{pmatrix} \n\t\t\tB_{00} & B_{01} \\\\\n\t\t\tB_{10} & B_{11}\n\t\t\\end{pmatrix} & A_{01} \t\\begin{pmatrix} \n\t\t\t\tB_{00} & B_{01} \\\\\n\t\t\t\tB_{10} & B_{11}\n\t\t\t\\end{pmatrix} \\\\\n\t\tA_{10} \t\\begin{pmatrix} \n\t\t\t\t\tB_{00} & B_{01} \\\\\n\t\t\t\t\tB_{10} & B_{11}\n\t\t\t\t\\end{pmatrix} & A_{11} \t\\begin{pmatrix} \n\t\t\t\t\t\t\tB_{00} & B_{01} \\\\\n\t\t\t\t\t\t\tB_{10} & B_{11}\n\t\t\t\t\t\t\\end{pmatrix}\n\t\\end{pmatrix},\t\t\t\t\t\t\n\\end{equation}$$\n\nwhere $A_{jk}$ and $B_{lm}$ are the matrix elements of $A$ and $B$, respectively.\n\nAnalogously, the basis vectors for the 2-qubit system are formed using the tensor product of basis vectors for a single qubit:\n$$\\begin{equation}\\begin{split}\n\t\\left|{00}\\right\\rangle &= \\begin{pmatrix} \n\t\t1 \\begin{pmatrix} \n\t\t\t1 \\\\\n\t\t\t0\n\t\t\\end{pmatrix} \\\\\n\t\t0 \\begin{pmatrix} \n\t\t\t1 \\\\\n\t\t\t0 \n\t\t\\end{pmatrix}\n\t\\end{pmatrix} = \\begin{pmatrix} 1 \\\\ 0 \\\\ 0 \\\\0 \\end{pmatrix}~~~\\left|{01}\\right\\rangle = \\begin{pmatrix} \n\t1 \\begin{pmatrix} \n\t0 \\\\\n\t1\n\t\\end{pmatrix} \\\\\n\t0 \\begin{pmatrix} \n\t0 \\\\\n\t1 \n\t\\end{pmatrix}\n\t\\end{pmatrix} = \\begin{pmatrix}0 \\\\ 1 \\\\ 0 \\\\ 0 \\end{pmatrix}\\end{split}\n\\end{equation}$$\n \n$$\\begin{equation}\\begin{split}\\left|{10}\\right\\rangle = \\begin{pmatrix} \n\t0\\begin{pmatrix} \n\t1 \\\\\n\t0\n\t\\end{pmatrix} \\\\\n\t1\\begin{pmatrix} \n\t1 \\\\\n\t0 \n\t\\end{pmatrix}\n\t\\end{pmatrix} = \\begin{pmatrix} 0 \\\\ 0 \\\\ 1 \\\\ 0 \\end{pmatrix}~~~ \t\\left|{11}\\right\\rangle = \\begin{pmatrix} \n\t0 \\begin{pmatrix} \n\t0 \\\\\n\t1\n\t\\end{pmatrix} \\\\\n\t1\\begin{pmatrix} \n\t0 \\\\\n\t1 \n\t\\end{pmatrix}\n\t\\end{pmatrix} = \\begin{pmatrix} 0 \\\\ 0 \\\\ 0 \\\\1 \\end{pmatrix}\\end{split}\n\\end{equation}.$$\n\nNote we've introduced a shorthand for the tensor product of basis vectors, wherein $\\left|0\\right\\rangle \\otimes \\left|0\\right\\rangle$ is written as $\\left|00\\right\\rangle$. The state of an $n$-qubit system can described using the $n$-fold tensor product of single-qubit basis vectors. Notice that the basis vectors for a 2-qubit system are 4-dimensional; in general, the basis vectors of an $n$-qubit sytsem are $2^{n}$-dimensional, as noted earlier.\n\n### Basis vector ordering in Qiskit\n\nWithin the physics community, the qubits of a multi-qubit systems are typically ordered with the first qubit on the left-most side of the tensor product and the last qubit on the right-most side. For instance, if the first qubit is in state $\\left|0\\right\\rangle$ and second is in state $\\left|1\\right\\rangle$, their joint state would be $\\left|01\\right\\rangle$. Qiskit uses a slightly different ordering of the qubits, in which the qubits are represented from the most significant bit (MSB) on the left to the least significant bit (LSB) on the right (big-endian). This is similar to bitstring representation on classical computers, and enables easy conversion from bitstrings to integers after measurements are performed. For the example just given, the joint state would be represented as $\\left|10\\right\\rangle$. 
Importantly, _this change in the representation of multi-qubit states affects the way multi-qubit gates are represented in Qiskit_, as discussed below.\n\nThe representation used in Qiskit enumerates the basis vectors in increasing order of the integers they represent. For instance, the basis vectors for a 2-qubit system would be ordered as $\\left|00\\right\\rangle$, $\\left|01\\right\\rangle$, $\\left|10\\right\\rangle$, and $\\left|11\\right\\rangle$. Thinking of the basis vectors as bit strings, they encode the integers 0,1,2 and 3, respectively.\n\n\n### Controlled operations on qubits\n\nA common multi-qubit gate involves the application of a gate to one qubit, conditioned on the state of another qubit. For instance, we might want to flip the state of the second qubit when the first qubit is in $\\left|0\\right\\rangle$. Such gates are known as _controlled gates_. The standard multi-qubit gates consist of two-qubit gates and three-qubit gates. The two-qubit gates are:\n- controlled Pauli gates\n- controlled Hadamard gate\n- controlled rotation gates\n- controlled phase gate\n- controlled u3 gate\n- swap gate\n\nThe three-qubit gates are: \n- Toffoli gate \n- Fredkin gate\n\n## Two-qubit gates\n\nMost of the two-gates are of the controlled type (the SWAP gate being the exception). In general, a controlled two-qubit gate $C_{U}$ acts to apply the single-qubit unitary $U$ to the second qubit when the state of the first qubit is in $\\left|1\\right\\rangle$. Suppose $U$ has a matrix representation\n\n$$U = \\begin{pmatrix} u_{00} & u_{01} \\\\ u_{10} & u_{11}\\end{pmatrix}.$$\n\nWe can work out the action of $C_{U}$ as follows. Recall that the basis vectors for a two-qubit system are ordered as $\\left|00\\right\\rangle, \\left|01\\right\\rangle, \\left|10\\right\\rangle, \\left|11\\right\\rangle$. Suppose the **control qubit** is **qubit 0** (which, according to Qiskit's convention, is one the _right-hand_ side of the tensor product). If the control qubit is in $\\left|1\\right\\rangle$, $U$ should be applied to the **target** (qubit 1, on the _left-hand_ side of the tensor product). 
Therefore, under the action of $C_{U}$, the basis vectors are transformed according to\n\n$$\\begin{align*}\nC_{U}: \\underset{\\text{qubit}~1}{\\left|0\\right\\rangle}\\otimes \\underset{\\text{qubit}~0}{\\left|0\\right\\rangle} &\\rightarrow \\underset{\\text{qubit}~1}{\\left|0\\right\\rangle}\\otimes \\underset{\\text{qubit}~0}{\\left|0\\right\\rangle}\\\\\nC_{U}: \\underset{\\text{qubit}~1}{\\left|0\\right\\rangle}\\otimes \\underset{\\text{qubit}~0}{\\left|1\\right\\rangle} &\\rightarrow \\underset{\\text{qubit}~1}{U\\left|0\\right\\rangle}\\otimes \\underset{\\text{qubit}~0}{\\left|1\\right\\rangle}\\\\\nC_{U}: \\underset{\\text{qubit}~1}{\\left|1\\right\\rangle}\\otimes \\underset{\\text{qubit}~0}{\\left|0\\right\\rangle} &\\rightarrow \\underset{\\text{qubit}~1}{\\left|1\\right\\rangle}\\otimes \\underset{\\text{qubit}~0}{\\left|0\\right\\rangle}\\\\\nC_{U}: \\underset{\\text{qubit}~1}{\\left|1\\right\\rangle}\\otimes \\underset{\\text{qubit}~0}{\\left|1\\right\\rangle} &\\rightarrow \\underset{\\text{qubit}~1}{U\\left|1\\right\\rangle}\\otimes \\underset{\\text{qubit}~0}{\\left|1\\right\\rangle}\\\\\n\\end{align*}.$$\n\nIn matrix form, the action of $C_{U}$ is\n\n$$\\begin{equation}\n\tC_U = \\begin{pmatrix}\n\t1 & 0 & 0 & 0 \\\\\n\t0 & u_{00} & 0 & u_{01} \\\\\n\t0 & 0 & 1 & 0 \\\\\n\t0 & u_{10} &0 & u_{11}\n\t\t\\end{pmatrix}.\n\\end{equation}$$\n\nTo work out these matrix elements, let\n\n$$C_{(jk), (lm)} = \\left(\\underset{\\text{qubit}~1}{\\left\\langle j \\right|} \\otimes \\underset{\\text{qubit}~0}{\\left\\langle k \\right|}\\right) C_{U} \\left(\\underset{\\text{qubit}~1}{\\left| l \\right\\rangle} \\otimes \\underset{\\text{qubit}~0}{\\left| k \\right\\rangle}\\right),$$\n\ncompute the action of $C_{U}$ (given above), and compute the inner products.\n\nAs shown in the examples below, this operation is implemented in Qiskit as `cU(q[0],q[1])`.\n\n\nIf **qubit 1 is the control and qubit 0 is the target**, then the basis vectors are transformed according to\n$$\\begin{align*}\nC_{U}: \\underset{\\text{qubit}~1}{\\left|0\\right\\rangle}\\otimes \\underset{\\text{qubit}~0}{\\left|0\\right\\rangle} &\\rightarrow \\underset{\\text{qubit}~1}{\\left|0\\right\\rangle}\\otimes \\underset{\\text{qubit}~0}{\\left|0\\right\\rangle}\\\\\nC_{U}: \\underset{\\text{qubit}~1}{\\left|0\\right\\rangle}\\otimes \\underset{\\text{qubit}~0}{\\left|1\\right\\rangle} &\\rightarrow \\underset{\\text{qubit}~1}{\\left|0\\right\\rangle}\\otimes \\underset{\\text{qubit}~0}{\\left|1\\right\\rangle}\\\\\nC_{U}: \\underset{\\text{qubit}~1}{\\left|1\\right\\rangle}\\otimes \\underset{\\text{qubit}~0}{\\left|0\\right\\rangle} &\\rightarrow \\underset{\\text{qubit}~1}{\\left|1\\right\\rangle}\\otimes \\underset{\\text{qubit}~0}{U\\left|0\\right\\rangle}\\\\\nC_{U}: \\underset{\\text{qubit}~1}{\\left|1\\right\\rangle}\\otimes \\underset{\\text{qubit}~0}{\\left|1\\right\\rangle} &\\rightarrow \\underset{\\text{qubit}~1}{\\left|1\\right\\rangle}\\otimes \\underset{\\text{qubit}~0}{U\\left|1\\right\\rangle}\\\\\n\\end{align*},$$\n\n\nwhich implies the matrix form of $C_{U}$ is\n$$\\begin{equation}\n\tC_U = \\begin{pmatrix}\n\t1 & 0 & 0 & 0 \\\\\n\t0 & 1 & 0 & 0 \\\\\n\t0 & 0 & u_{00} & u_{01} \\\\\n\t0 & 0 & u_{10} & u_{11}\n\t\t\\end{pmatrix}.\n\\end{equation}$$\n\n\n```python\nq = QuantumRegister(2)\n```\n\n### Controlled Pauli Gates\n\n#### Controlled-X (or, controlled-NOT) gate\nThe controlled-not gate flips the `target` qubit when the control qubit is in the state $\\left|1\\right\\rangle$. 
If we take the MSB as the control qubit (e.g. `cx(q[1],q[0])`), then the matrix would look like\n\n$$\nC_X = \n\\begin{pmatrix}\n1 & 0 & 0 & 0\\\\\n0 & 1 & 0 & 0\\\\\n0 & 0 & 0 & 1\\\\\n0 & 0 & 1 & 0\n\\end{pmatrix}. \n$$\n\nHowever, when the LSB is the control qubit, (e.g. `cx(q[0],q[1])`), this gate is equivalent to the following matrix:\n\n$$\nC_X = \n\\begin{pmatrix}\n1 & 0 & 0 & 0\\\\\n0 & 0 & 0 & 1\\\\\n0 & 0 & 1 & 0\\\\\n0 & 1 & 0 & 0\n\\end{pmatrix}. \n$$\n\n\n\n\n```python\nqc = QuantumCircuit(q)\nqc.cx(q[0],q[1])\nqc.draw(output='mpl')\n```\n\n\n```python\njob = execute(qc, backend)\njob.result().get_unitary(qc, decimals=3)\n```\n\n\n\n\n array([[1.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],\n [0.+0.j, 0.+0.j, 0.+0.j, 1.+0.j],\n [0.+0.j, 0.+0.j, 1.+0.j, 0.+0.j],\n [0.+0.j, 1.+0.j, 0.+0.j, 0.+0.j]])\n\n\n\n#### Controlled $Y$ gate\n\nApply the $Y$ gate to the target qubit if the control qubit is the MSB\n\n$$\nC_Y = \n\\begin{pmatrix}\n1 & 0 & 0 & 0\\\\\n0 & 1 & 0 & 0\\\\\n0 & 0 & 0 & -i\\\\\n0 & 0 & i & 0\n\\end{pmatrix},\n$$\n\nor when the LSB is the control\n\n$$\nC_Y = \n\\begin{pmatrix}\n1 & 0 & 0 & 0\\\\\n0 & 0 & 0 & -i\\\\\n0 & 0 & 1 & 0\\\\\n0 & i & 0 & 0\n\\end{pmatrix}.\n$$\n\n\n```python\nqc = QuantumCircuit(q)\nqc.cy(q[0],q[1])\nqc.draw(output='mpl')\n```\n\n\n```python\njob = execute(qc, backend)\njob.result().get_unitary(qc, decimals=3)\n```\n\n\n\n\n array([[1.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],\n [0.+0.j, 0.+0.j, 0.+0.j, 0.-1.j],\n [0.+0.j, 0.+0.j, 1.+0.j, 0.+0.j],\n [0.+0.j, 0.+1.j, 0.+0.j, 0.+0.j]])\n\n\n\n#### Controlled $Z$ (or, controlled Phase) gate\n\nSimilarly, the controlled Z gate flips the phase of the target qubit if the control qubit is $\\left|1\\right\\rangle$. The matrix looks the same regardless of whether the MSB or LSB is the control qubit:\n\n$$\nC_Z = \n\\begin{pmatrix}\n1 & 0 & 0 & 0\\\\\n0 & 1 & 0 & 0\\\\\n0 & 0 & 1 & 0\\\\\n0 & 0 & 0 & -1\n\\end{pmatrix}\n$$\n\n\n\n```python\nqc = QuantumCircuit(q)\nqc.cz(q[0],q[1])\nqc.draw(output='mpl')\n```\n\n\n```python\njob = execute(qc, backend)\njob.result().get_unitary(qc, decimals=3)\n```\n\n\n\n\n array([[ 1.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],\n [ 0.+0.j, 1.+0.j, 0.+0.j, 0.+0.j],\n [ 0.+0.j, 0.+0.j, 1.+0.j, 0.+0.j],\n [ 0.+0.j, 0.+0.j, 0.+0.j, -1.+0.j]])\n\n\n\n### Controlled Hadamard gate\n\nApply $H$ gate to the target qubit if the control qubit is $\\left|1\\right\\rangle$. Below is the case where the control is the LSB qubit.\n\n$$\nC_H = \n\\begin{pmatrix}\n1 & 0 & 0 & 0\\\\\n0 & \\frac{1}{\\sqrt{2}} & 0 & \\frac{1}{\\sqrt{2}}\\\\\n0 & 0 & 1 & 0\\\\\n0 & \\frac{1}{\\sqrt{2}} & 0& -\\frac{1}{\\sqrt{2}}\n\\end{pmatrix}\n$$\n\n\n```python\nqc = QuantumCircuit(q)\nqc.ch(q[0],q[1])\nqc.draw(output='mpl')\n```\n\n\n```python\njob = execute(qc, backend)\njob.result().get_unitary(qc, decimals=3)\n```\n\n\n\n\n array([[ 0.707+0.707j, 0. +0.j , 0. +0.j , 0. +0.j ],\n [ 0. +0.j , 0.5 +0.5j , 0. +0.j , 0.5 +0.5j ],\n [ 0. +0.j , 0. +0.j , 0.707+0.707j, 0. +0.j ],\n [ 0. +0.j , 0.5 +0.5j , 0. 
+0.j , -0.5 -0.5j ]])\n\n\n\n### Controlled rotation gates\n\n#### Controlled rotation around Z-axis\n\nPerform rotation around Z-axis on the target qubit if the control qubit (here LSB) is $\\left|1\\right\\rangle$.\n\n$$\nC_{Rz}(\\lambda) = \n\\begin{pmatrix}\n1 & 0 & 0 & 0\\\\\n0 & e^{-i\\lambda/2} & 0 & 0\\\\\n0 & 0 & 1 & 0\\\\\n0 & 0 & 0 & e^{i\\lambda/2}\n\\end{pmatrix}\n$$\n\n\n```python\nqc = QuantumCircuit(q)\nqc.crz(pi/2,q[0],q[1])\nqc.draw(output='mpl')\n```\n\n\n```python\njob = execute(qc, backend)\njob.result().get_unitary(qc, decimals=3)\n```\n\n\n\n\n array([[1. +0.j , 0. +0.j , 0. +0.j , 0. +0.j ],\n [0. +0.j , 0.707-0.707j, 0. +0.j , 0. +0.j ],\n [0. +0.j , 0. +0.j , 1. +0.j , 0. +0.j ],\n [0. +0.j , 0. +0.j , 0. +0.j , 0.707+0.707j]])\n\n\n\n### Controlled phase rotation\n\nPerform a phase rotation if both qubits are in the $\\left|11\\right\\rangle$ state. The matrix looks the same regardless of whether the MSB or LSB is the control qubit.\n\n$$\nC_{u1}(\\lambda) = \n\\begin{pmatrix}\n1 & 0 & 0 & 0\\\\\n0 & 1 & 0 & 0\\\\\n0 & 0 & 1 & 0\\\\\n0 & 0 & 0 & e^{i\\lambda}\n\\end{pmatrix}\n$$\n\n\n```python\nqc = QuantumCircuit(q)\nqc.cu1(pi/2,q[0], q[1])\nqc.draw(output='mpl')\n```\n\n\n```python\njob = execute(qc, backend)\njob.result().get_unitary(qc, decimals=3)\n```\n\n\n\n\n array([[1.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],\n [0.+0.j, 1.+0.j, 0.+0.j, 0.+0.j],\n [0.+0.j, 0.+0.j, 1.+0.j, 0.+0.j],\n [0.+0.j, 0.+0.j, 0.+0.j, 0.+1.j]])\n\n\n\n### Controlled $u3$ rotation\n\nPerform controlled-$u3$ rotation on the target qubit if the control qubit (here LSB) is $\\left|1\\right\\rangle$. \n\n$$\nC_{u3}(\\theta, \\phi, \\lambda) \\equiv \n\\begin{pmatrix}\n1 & 0 & 0 & 0\\\\\n0 & e^{-i(\\phi+\\lambda)/2}\\cos(\\theta/2) & 0 & -e^{-i(\\phi-\\lambda)/2}\\sin(\\theta/2)\\\\\n0 & 0 & 1 & 0\\\\\n0 & e^{i(\\phi-\\lambda)/2}\\sin(\\theta/2) & 0 & e^{i(\\phi+\\lambda)/2}\\cos(\\theta/2)\n\\end{pmatrix}.\n$$\n\n\n```python\nqc = QuantumCircuit(q)\nqc.cu3(pi/2, pi/2, pi/2, q[0], q[1])\nqc.draw(output='mpl')\n```\n\n\n```python\njob = execute(qc, backend)\njob.result().get_unitary(qc, decimals=3)\n```\n\n\n\n\n array([[ 1. +0.j , 0. +0.j , 0. +0.j , 0. +0.j ],\n [ 0. +0.j , 0. -0.707j, 0. +0.j , -0.707+0.j ],\n [ 0. +0.j , 0. +0.j , 1. +0.j , 0. +0.j ],\n [ 0. +0.j , 0.707+0.j , 0. +0.j , 0. +0.707j]])\n\n\n\n### SWAP gate\n\nThe SWAP gate exchanges the two qubits. It transforms the basis vectors as\n\n$$\\left|00\\right\\rangle \\rightarrow \\left|00\\right\\rangle~,~\\left|01\\right\\rangle \\rightarrow \\left|10\\right\\rangle~,~\\left|10\\right\\rangle \\rightarrow \\left|01\\right\\rangle~,~\\left|11\\right\\rangle \\rightarrow \\left|11\\right\\rangle,$$\n\nwhich gives a matrix representation of the form\n\n$$\n\\mathrm{SWAP} = \n\\begin{pmatrix}\n1 & 0 & 0 & 0\\\\\n0 & 0 & 1 & 0\\\\\n0 & 1 & 0 & 0\\\\\n0 & 0 & 0 & 1\n\\end{pmatrix}.\n$$\n\n\n```python\nqc = QuantumCircuit(q)\nqc.swap(q[0], q[1])\nqc.draw(output='mpl')\n```\n\n\n```python\njob = execute(qc, backend)\njob.result().get_unitary(qc, decimals=3)\n```\n\n\n\n\n array([[1.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],\n [0.+0.j, 0.+0.j, 1.+0.j, 0.+0.j],\n [0.+0.j, 1.+0.j, 0.+0.j, 0.+0.j],\n [0.+0.j, 0.+0.j, 0.+0.j, 1.+0.j]])\n\n\n\n------------\n\n\n```python\nimport qiskit.tools.jupyter\n%qiskit_version_table\n%qiskit_copyright\n```\n\n UsageError: Line magic function `%qiskit_version_table` not found.\n\n\nThis code is a part of Qiskit\n\u00a9 Copyright IBM 2017, 2019.\n\nThis code is licensed under the Apache License, Version 2.0. 
You may\nobtain a copy of this license in the LICENSE.txt file in the root directory\nof this source tree or at http://www.apache.org/licenses/LICENSE-2.0.\n\nAny modifications or derivative works of this code must retain this\ncopyright notice, and modified files need to carry a notice indicating\nthat they have been altered from the originals.\n\n\n```python\n\n```\n", "meta": {"hexsha": "0d74b77e987e50b514c5bb1f04072c0be7589831", "size": 63310, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "2-Two-qubits-gates.ipynb", "max_stars_repo_name": "koiralakp5/Quantum_computation", "max_stars_repo_head_hexsha": "bccfaf00f92a14021cc8cb8e762c65424da9c4a2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "2-Two-qubits-gates.ipynb", "max_issues_repo_name": "koiralakp5/Quantum_computation", "max_issues_repo_head_hexsha": "bccfaf00f92a14021cc8cb8e762c65424da9c4a2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "2-Two-qubits-gates.ipynb", "max_forks_repo_name": "koiralakp5/Quantum_computation", "max_forks_repo_head_hexsha": "bccfaf00f92a14021cc8cb8e762c65424da9c4a2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 65.3353973168, "max_line_length": 5157, "alphanum_fraction": 0.7384773338, "converted": true, "num_tokens": 5960, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9184802395624257, "lm_q2_score": 0.9111797039819735, "lm_q1q2_score": 0.8369005527977832}} {"text": "\n\n# Multivariate Gaussian with full covariance\n\nIn this reading you will learn how you can use TensorFlow to specify any multivariate Gaussian distribution.\n\n\n```python\nimport tensorflow as tf\nimport tensorflow_probability as tfp\ntfd = tfp.distributions\n\nprint(\"TF version:\", tf.__version__)\nprint(\"TFP version:\", tfp.__version__)\n```\n\n TF version: 2.5.0\n TFP version: 0.13.0\n\n\nSo far, you've seen how to define multivariate Gaussian distributions using `tfd.MultivariateNormalDiag`. This class allows you to specify a multivariate Gaussian with a diagonal covariance matrix $\\Sigma$. \n\nIn cases where the variance is the same for each component, i.e. $\\Sigma = \\sigma^2 I$, this is known as a _spherical_ or _isotropic_ Gaussian. This name comes from the spherical (or circular) contours of its probability density function, as you can see from the plot below for the two-dimensional case. \n\n\n```python\n# Plot the approximate density contours of a 2d spherical Gaussian\n\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nspherical_2d_gaussian = tfd.MultivariateNormalDiag(loc=[0., 0.])\n\nN = 100000\nx = spherical_2d_gaussian.sample(N)\nx1 = x[:, 0]\nx2 = x[:, 1]\nsns.jointplot(x=x1, y=x2, kind='kde', space=0);\n```\n\nAs you know, a diagonal covariance matrix results in the components of the random vector being independent. 
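As a quick numerical check -- a small sketch that reuses the samples `x` drawn in the cell above -- the sample covariance matrix should be close to the identity, with off-diagonal entries near zero:\n\n\n```python\nimport numpy as np\n\n# Sample covariance of the draws from the spherical Gaussian above;\n# the off-diagonal terms should be approximately zero.\nprint(np.cov(x.numpy(), rowvar=False))\n```\n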
\n\n## Full covariance with `MultivariateNormalTriL`\n\nYou can define a full covariance Gaussian distribution in TensorFlow using the Distribution `tfd.MultivariateNormalTriL`.\n\nMathematically, the parameters of a multivariate Gaussian are a mean $\\mu$ and a covariance matrix $\\Sigma$, and so the `tfd.MultivariateNormalTriL` constructor requires two arguments:\n\n- `loc`, a Tensor of floats corresponding to $\\mu$,\n- `scale_tril`, a lower-triangular matrix $L$ such that $LL^T = \\Sigma$.\n\nFor a $d$-dimensional random variable, the lower-triangular matrix $L$ looks like this:\n\n\\begin{equation}\n L = \\begin{bmatrix}\n l_{1, 1} & 0 & 0 & \\cdots & 0 \\\\\n l_{2, 1} & l_{2, 2} & 0 & \\cdots & 0 \\\\\n l_{3, 1} & l_{3, 2} & l_{3, 3} & \\cdots & 0 \\\\\n \\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\\\n l_{d, 1} & l_{d, 2} & l_{d, 3} & \\cdots & l_{d, d}\n \\end{bmatrix},\n\\end{equation}\n\nwhere the diagonal entries are positive: $l_{i, i} > 0$ for $i=1,\\ldots,d$.\n\nHere is an example of creating a two-dimensional Gaussian with non-diagonal covariance:\n\n\n```python\n# Set the mean and covariance parameters\n\nmu = [0., 0.] # mean\nscale_tril = [[1., 0.],\n [0.6, 0.8]]\n\nsigma = tf.matmul(tf.constant(scale_tril), tf.transpose(tf.constant(scale_tril))) # covariance matrix\nprint(sigma)\n```\n\n tf.Tensor(\n [[1. 0.6]\n [0.6 1. ]], shape=(2, 2), dtype=float32)\n\n\n\n```python\n# Create the 2D Gaussian with full covariance\n\nnonspherical_2d_gaussian = tfd.MultivariateNormalTriL(loc=mu, scale_tril=scale_tril)\nnonspherical_2d_gaussian\n```\n\n\n\n\n \n\n\n\n\n```python\n# Check the Distribution mean\n\nnonspherical_2d_gaussian.mean()\n```\n\n\n\n\n \n\n\n\n\n```python\n# Check the Distribution covariance\n\nnonspherical_2d_gaussian.covariance()\n```\n\n\n\n\n \n\n\n\n\n```python\n# Plot its approximate density contours\n\nx = nonspherical_2d_gaussian.sample(N)\nx1 = x[:, 0]\nx2 = x[:, 1]\nsns.jointplot(x=x1, y=x2, kind='kde', space=0, color='r');\n```\n\nAs you can see, the approximate density contours are now elliptical rather than circular. This is because the components of the Gaussian are correlated.\n\nAlso note that the marginal distributions (shown on the sides of the plot) are both univariate Gaussian distributions.\n\n## The Cholesky decomposition\n\nIn the above example, we defined the lower triangular matrix $L$ and used that to build the multivariate Gaussian distribution. The covariance matrix is easily computed from $L$ as $\\Sigma = LL^T$.\n\nThe reason that we define the multivariate Gaussian distribution in this way - as opposed to directly passing in the covariance matrix - is that not every matrix is a valid covariance matrix. The covariance matrix must have the following properties:\n\n1. It is symmetric\n2. It is positive (semi-)definite\n\n_NB: A symmetric matrix $M \\in \\mathbb{R}^{d\\times d}$ is positive semi-definite if it satisfies $b^TMb \\ge 0$ for all nonzero $b\\in\\mathbb{R}^d$. If, in addition, we have $b^TMb = 0 \\Rightarrow b=0$ then $M$ is positive definite._\n\nThe Cholesky decomposition is a useful way of writing a covariance matrix. 
The decomposition is described by this result:\n\n> For every real-valued symmetric positive-definite matrix $M$, there is a unique lower-diagonal matrix $L$ that has positive diagonal entries for which \n>\n> \\begin{equation}\n LL^T = M\n \\end{equation}\n> This is called the _Cholesky decomposition_ of $M$.\n\nThis result shows us why Gaussian distributions with full covariance are completely represented by the `MultivariateNormalTriL` Distribution.\n\n### `tf.linalg.cholesky`\n\nIn case you have a valid covariance matrix $\\Sigma$ and would like to compute the lower triangular matrix $L$ above to instantiate a `MultivariateNormalTriL` object, this can be done with the `tf.linalg.cholesky` function. \n\n\n```python\n# Define a symmetric positive-definite matrix\n\nsigma = [[10., 5.], [5., 10.]]\n```\n\n\n```python\n# Compute the lower triangular matrix L from the Cholesky decomposition\n\nscale_tril = tf.linalg.cholesky(sigma)\nscale_tril\n```\n\n\n```python\n# Check that LL^T = Sigma\n\ntf.linalg.matmul(scale_tril, tf.transpose(scale_tril))\n```\n\n\n\n\n \n\n\n\nIf the argument to the `tf.linalg.cholesky` is not positive definite, then it will fail:\n\n\n```python\n# Try to compute the Cholesky decomposition for a matrix with negative eigenvalues\n\nbad_sigma = [[10., 11.], [11., 10.]]\n\ntry:\n scale_tril = tf.linalg.cholesky(bad_sigma)\nexcept Exception as e:\n print(e)\n```\n\n Cholesky decomposition was not successful. The input might not be valid. [Op:Cholesky]\n\n\n### What about positive semi-definite matrices?\n\nIn cases where the matrix is only positive semi-definite, the Cholesky decomposition exists (if the diagonal entries of $L$ can be zero) but it is not unique.\n\nFor covariance matrices, this corresponds to the degenerate case where the probability density function collapses to a subspace of the event space. This is demonstrated in the following example:\n\n\n```python\n# Create a multivariate Gaussian with a positive semi-definite covariance matrix\n\npsd_mvn = tfd.MultivariateNormalTriL(loc=[0., 0.], scale_tril=[[1., 0.], [0.4, 0.]])\npsd_mvn\n```\n\n\n\n\n \n\n\n\n\n```python\n# Plot samples from this distribution\n\nx = psd_mvn.sample(N)\nx1 = x[:, 0]\nx2 = x[:, 1]\nplt.xlim(-5, 5)\nplt.ylim(-5, 5)\nplt.title(\"Scatter plot of samples\")\nplt.scatter(x1, x2, alpha=0.5);\n```\n\nIf the input to the function `tf.linalg.cholesky` is positive semi-definite but not positive definite, it will also fail:\n\n\n```python\n# Try to compute the Cholesky decomposition for a positive semi-definite matrix\n\nanother_bad_sigma = [[10., 0.], [0., 0.]]\n\ntry:\n scale_tril = tf.linalg.cholesky(another_bad_sigma)\nexcept Exception as e:\n print(e)\n```\n\n Cholesky decomposition was not successful. The input might not be valid. [Op:Cholesky]\n\n\nIn summary: if the covariance matrix $\\Sigma$ for your multivariate Gaussian distribution is positive-definite, then an algorithm that computes the Cholesky decomposition of $\\Sigma$ returns a lower-triangular matrix $L$ such that $LL^T = \\Sigma$. This $L$ can then be passed as the `scale_tril` of `MultivariateNormalTriL`.\n\n## Putting it all together\n\nYou are now ready to put everything that you have learned in this reading together.\n\nTo create a multivariate Gaussian distribution with full covariance you need to:\n\n1. Specify parameters $\\mu$ and either $\\Sigma$ (a symmetric positive definite matrix) or $L$ (a lower triangular matrix with positive diagonal elements), such that $\\Sigma = LL^T$.\n\n2. 
If only $\\Sigma$ is specified, compute `scale_tril = tf.linalg.cholesky(sigma)`.\n\n3. Create the distribution: `multivariate_normal = tfd.MultivariateNormalTriL(loc=mu, scale_tril=scale_tril)`.\n\n\n```python\n# Create a multivariate Gaussian distribution\n\nmu = [1., 2., 3.]\nsigma = [[0.5, 0.1, 0.1],\n [0.1, 1., 0.6],\n [0.1, 0.6, 2.]]\n\nscale_tril = tf.linalg.cholesky(sigma)\n\nmultivariate_normal = tfd.MultivariateNormalTriL(loc=mu, scale_tril=scale_tril)\n```\n\n\n```python\n# Check the covariance matrix\n\nmultivariate_normal.covariance()\n```\n\n\n\n\n \n\n\n\n\n```python\n# Check the mean\n\nmultivariate_normal.mean()\n```\n\n\n\n\n \n\n\n\n## Deprecated: `MultivariateNormalFullCovariance`\n\nThere was previously a class called `tfd.MultivariateNormalFullCovariance` which takes the full covariance matrix in its constructor, but this is being deprecated. Two reasons for this are:\n\n* covariance matrices are symmetric, so specifying one directly involves passing redundant information, which involves writing unnecessary code. \n* it is easier to enforce positive-definiteness through constraints on the elements of a decomposition than through a covariance matrix itself. The decomposition's only constraint is that its diagonal elements are positive, a condition that is easy to parameterize for.\n\n### Further reading and resources\n* https://www.tensorflow.org/probability/api_docs/python/tfp/distributions/MultivariateNormalTriL\n* https://www.tensorflow.org/api_docs/python/tf/linalg/cholesky\n", "meta": {"hexsha": "8bccbd096c48ba95907ac127b9b7eaba39de9a32", "size": 120209, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Course3-Probabilistic_Deep_Learning_with_Tensorflow2/week1_Multivariate_Gaussian_with_full_covariance.ipynb", "max_stars_repo_name": "mella30/Probabilistic-Deep-Learning-with-TensorFlow-2", "max_stars_repo_head_hexsha": "e9748316547d7f433632f4735990306d6e15da72", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Course3-Probabilistic_Deep_Learning_with_Tensorflow2/week1_Multivariate_Gaussian_with_full_covariance.ipynb", "max_issues_repo_name": "mella30/Probabilistic-Deep-Learning-with-TensorFlow-2", "max_issues_repo_head_hexsha": "e9748316547d7f433632f4735990306d6e15da72", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Course3-Probabilistic_Deep_Learning_with_Tensorflow2/week1_Multivariate_Gaussian_with_full_covariance.ipynb", "max_forks_repo_name": "mella30/Probabilistic-Deep-Learning-with-TensorFlow-2", "max_forks_repo_head_hexsha": "e9748316547d7f433632f4735990306d6e15da72", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 153.9167733675, "max_line_length": 52350, "alphanum_fraction": 0.8730793867, "converted": true, "num_tokens": 2807, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9465966671870766, "lm_q2_score": 0.8840392893839085, "lm_q1q2_score": 0.8368286449932394}} {"text": "# Pyomo - Getting started\n\nPyomo installation: see http://www.pyomo.org/installation\n\n```\npip install pyomo\n```\n\n\n```python\nfrom pyomo.environ import *\n```\n\n## Definition of the problem\n\n$$\n\\begin{align}\n \\min_{x_0,x_1} & \\quad -x_0 + 4 x_1 \\\\\n \\text{s.t.} & \\quad -3 x_0 + x_1 \\leq 6 \\\\\n & \\quad -x_0 - 2 x_1 \\geq -4 \\\\\n & \\quad x_1 \\geq -3\n\\end{align}\n$$\n\n## Solution\n\n### Version 1: naive implementation\n\n\n```python\nmodel = ConcreteModel(name=\"Getting started\")\n\nc = [-1, 4]\nb = [ 6, 4, 3]\nA = [[-3, 1],\n [ 1, 2],\n [ 0, -1]]\n\nN = list(range(len(c))) # num decision variables\nM = list(range(len(A))) # num constraints\n\nmodel.x = Var(N)\n\nmodel.obj = Objective(expr = sum(c[n] * model.x[n] for n in N) )\n\nm = 0\nmodel.const_0 = Constraint(expr = sum(A[m][n] * model.x[n] for n in N) <= b[m] )\nm = 1\nmodel.const_1 = Constraint(expr = sum(A[m][n] * model.x[n] for n in N) <= b[m] )\nm = 2\nmodel.const_2 = Constraint(expr = sum(A[m][n] * model.x[n] for n in N) <= b[m] )\n\nmodel.pprint()\n\n# @tail:\nprint()\nprint(\"-\" * 60)\nprint()\n\nopt = SolverFactory('glpk')\n\nresults = opt.solve(model)\n\nmodel.display()\n\nprint()\nprint(\"Optimal solution: \", [value(model.x[n]) for n in N])\nprint(\"Gain of the optimal solution: \", value(model.obj))\n# @:tail\n```\n\n### Version 2: without construction rules\n\n\n```python\nmodel = ConcreteModel(name=\"Getting started\")\n\nc = [-1, 4]\nb = [ 6, 4, 3]\nA = [[-3, 1],\n [ 1, 2],\n [ 0, -1]]\n\nN = list(range(len(c))) # num decision variables\nM = list(range(len(A))) # num constraints\n\nmodel.x = Var(N)\n\nmodel.obj = Objective(expr = sum(c[n] * model.x[n] for n in N) )\n\nfor m in M:\n name = \"const_{}\".format(m)\n val = Constraint(expr = sum(A[m][n] * model.x[n] for n in N) <= b[m])\n model.add_component(name=name, val=val)\n\nmodel.pprint()\n\n# @tail:\nprint()\nprint(\"-\" * 60)\nprint()\n\nopt = SolverFactory('glpk')\n\nresults = opt.solve(model)\n\nmodel.display()\n\nprint()\nprint(\"Optimal solution: \", [value(model.x[n]) for n in N])\nprint(\"Gain of the optimal solution: \", value(model.obj))\n# @:tail\n```\n\n### Version 3: with construction rules\n\n\n```python\nmodel = ConcreteModel(name=\"Getting started\")\n\nc = [-1, 4]\nb = [ 6, 4, 3]\nA = [[-3, 1],\n [ 1, 2],\n [ 0, -1]]\n\nN = list(range(len(c))) # num decision variables\nM = list(range(len(A))) # num constraints\n\nmodel.x = Var(N)\n\nmodel.obj = Objective(expr = sum(c[n] * model.x[n] for n in N) )\n\ndef constraint_fn(model, m):\n return sum(A[m][n] * model.x[n] for n in N) <= b[m]\n\nmodel.constraint = Constraint(M, rule=constraint_fn)\n\nmodel.pprint()\n\n# @tail:\nprint()\nprint(\"-\" * 60)\nprint()\n\nopt = SolverFactory('glpk')\n\nresults = opt.solve(model)\n\nmodel.display()\n\nprint()\nprint(\"Optimal solution: \", [value(model.x[n]) for n in N])\nprint(\"Gain of the optimal solution: \", value(model.obj))\n# @:tail\n```\n\n### Version 4: construction rules on Objective too\n\n\n```python\nmodel = ConcreteModel(name=\"Getting started\")\n\nc = [-1, 4]\nb = [ 6, 4, 3]\nA = [[-3, 1],\n [ 1, 2],\n [ 0, -1]]\n\nN = list(range(len(c))) # num decision variables\nM = list(range(len(A))) # num constraints\n\nmodel.x = Var(N)\n\n\ndef objective_fn(model):\n return sum(c[n] * model.x[n] for n in N)\n\nmodel.obj = Objective(rule=objective_fn)\n\n\ndef constraint_fn(model, m):\n return sum(A[m][n] * 
model.x[n] for n in N) <= b[m]\n\nmodel.constraint = Constraint(M, rule=constraint_fn)\n\n\nmodel.pprint()\n\n# @tail:\nprint()\nprint(\"-\" * 60)\nprint()\n\nopt = SolverFactory('glpk')\n\nresults = opt.solve(model)\n\nmodel.display()\n\nprint()\nprint(\"Optimal solution: \", [value(model.x[n]) for n in N])\nprint(\"Gain of the optimal solution: \", value(model.obj))\n# @:tail\n```\n", "meta": {"hexsha": "029cca72eda55e1b3249a55800e5d1602d0fcd2c", "size": 6912, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "nb_dev_python/python_pyomo_getting_started_2.ipynb", "max_stars_repo_name": "jdhp-docs/python-notebooks", "max_stars_repo_head_hexsha": "91a97ea5cf374337efa7409e4992ea3f26b99179", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2017-05-03T12:23:36.000Z", "max_stars_repo_stars_event_max_datetime": "2020-10-26T17:30:56.000Z", "max_issues_repo_path": "nb_dev_python/python_pyomo_getting_started_2.ipynb", "max_issues_repo_name": "jdhp-docs/python-notebooks", "max_issues_repo_head_hexsha": "91a97ea5cf374337efa7409e4992ea3f26b99179", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "nb_dev_python/python_pyomo_getting_started_2.ipynb", "max_forks_repo_name": "jdhp-docs/python-notebooks", "max_forks_repo_head_hexsha": "91a97ea5cf374337efa7409e4992ea3f26b99179", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-10-26T17:30:57.000Z", "max_forks_repo_forks_event_max_datetime": "2020-10-26T17:30:57.000Z", "avg_line_length": 23.2727272727, "max_line_length": 89, "alphanum_fraction": 0.44921875, "converted": true, "num_tokens": 1176, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9196425267730008, "lm_q2_score": 0.9099070023734244, "lm_q1q2_score": 0.8367891747911429}} {"text": "\n\n# Differences in Matrix Representation - SymPy vs. NumPy\nSymPy stores entries in 1D arrays, while NumPy stores entries in 2D arrays.\n\n\n```python\nimport sympy as sp \nimport numpy as np\nsp.init_printing(use_unicode=True)\n```\n\n\n```python\n#Sympy version of our demo matrix\nAs = sp.Matrix([[1,2,3], [4,5,6]])\nAs\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 2 & 3\\\\4 & 5 & 6\\end{matrix}\\right]$\n\n\n\n\n```python\n#Numpy version of our demo matrix - Libraries are inter-compatible\nAn = np. 
array(As)\nAn\n```\n\n\n\n\n array([[1, 2, 3],\n [4, 5, 6]], dtype=object)\n\n\n\n\n```python\n#We can't obtain the dimensions of the SymPy Matrix easily\nlen(As)\n```\n\n\n```python\n#We can obtain the size of the NumPy Matrix:\nsize= [len(An), len(An[0])]\nsize\n```\n\n\n```python\n#We can also obtain the size of the NumPy Matrix using shape property\nAn.shape\n```\n\n\n```python\nAn\n```\n\n\n\n\n array([[1, 2, 3],\n [4, 5, 6]], dtype=object)\n\n\n\n\n```python\nAn+1\n```\n\n\n\n\n array([[2, 3, 4],\n [5, 6, 7]], dtype=object)\n\n\n\n\n```python\nAs.T\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 4\\\\2 & 5\\\\3 & 6\\end{matrix}\\right]$\n\n\n\n\n```python\nAn.T\n```\n\n\n\n\n array([[1, 4],\n [2, 5],\n [3, 6]], dtype=object)\n\n\n\n\n```python\n3*As\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}3 & 6 & 9\\\\12 & 15 & 18\\end{matrix}\\right]$\n\n\n\n\n```python\nAs/2\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}\\frac{1}{2} & 1 & \\frac{3}{2}\\\\2 & \\frac{5}{2} & 3\\end{matrix}\\right]$\n\n\n\n\n```python\nBs = 3*As\nBs\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}3 & 6 & 9\\\\12 & 15 & 18\\end{matrix}\\right]$\n\n\n\n\n```python\n#As is 2x3 Bs^T is 3x2, so the product is 2x2 ...\nAs*Bs.T\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}42 & 96\\\\96 & 231\\end{matrix}\\right]$\n\n\n\n\n```python\n#Try the following code, it will generate ShapeError: Matrix size mismatch: (2, 3) * (2, 3).\n#As*Bs\n```\n\n\n```python\n# You can use Numpy library as well\nBn = 3*An\nBn\n```\n\n\n\n\n array([[3, 6, 9],\n [12, 15, 18]], dtype=object)\n\n\n\n\n```python\n# we use dot method in Numpy for Matrix multiplications\nnp.dot(An, Bn. T)\n```\n\n\n\n\n array([[42, 96],\n [96, 231]], dtype=object)\n\n\n\n\n```python\n# if you use * for multiplication in Numpy, it is element-wise, it is NOT a matrix product\nAn*Bn\n```\n\n\n\n\n array([[3, 12, 27],\n [48, 75, 108]], dtype=object)\n\n\n\n# Matrix Inverse\n\n\n```python\nx = np.array([[1,2],[3,4]]) \ny = np.linalg.inv(x) \nprint (x) \nprint (y) \nprint (np.dot(x,y))\n```\n\n [[1 2]\n [3 4]]\n [[-2. 1. ]\n [ 1.5 -0.5]]\n [[1.0000000e+00 0.0000000e+00]\n [8.8817842e-16 1.0000000e+00]]\n\n", "meta": {"hexsha": "58f0489583d1829cf61c6de0de5e8b3eb11d8cd4", "size": 16663, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Matrix_Operations.ipynb", "max_stars_repo_name": "tofighi/Linear-Algebra", "max_stars_repo_head_hexsha": "bea7d2a4a81e0c49b324f23c47cf03db72e376cf", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Matrix_Operations.ipynb", "max_issues_repo_name": "tofighi/Linear-Algebra", "max_issues_repo_head_hexsha": "bea7d2a4a81e0c49b324f23c47cf03db72e376cf", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Matrix_Operations.ipynb", "max_forks_repo_name": "tofighi/Linear-Algebra", "max_forks_repo_head_hexsha": "bea7d2a4a81e0c49b324f23c47cf03db72e376cf", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 28.7293103448, "max_line_length": 1010, "alphanum_fraction": 0.471763788, "converted": true, "num_tokens": 958, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.926303724190573, "lm_q2_score": 0.9032942041005327, "lm_q1q2_score": 0.836724785298083}} {"text": "# Solutions for the Lane-Emden with n=0\n\n\n```python\nfrom distutils.spawn import find_executable\n\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nimport seaborn\n\nrem = 12\n\nseaborn.set(context='notebook', style='darkgrid')\n\nplt.ioff()\n\nplt.rc('lines', linewidth=1)\nplt.rc('font', family='serif')\nplt.rc('font', size=rem)\nplt.rc('axes', titlepad=1.500*rem)\nplt.rc('axes', titlesize=1.728*rem)\nplt.rc('axes', labelsize=1.200*rem)\nplt.rc('legend', fontsize=1.000*rem)\nplt.rc('xtick', labelsize=0.833*rem)\nplt.rc('ytick', labelsize=0.833*rem)\n\nif find_executable('latex'):\n plt.rc('text', usetex=True)\n\nmaterial_palette = {\n -1: \"#212121\",\n 0: \"#F44336\",\n 1: \"#E91E63\",\n 2: \"#9C27B0\",\n 3: \"#673AB7\",\n 4: \"#3F51B5\",\n 5: \"#2196F3\",\n 6: \"#03A9F4\",\n 7: \"#00BCD4\",\n 8: \"#009688\",\n 9: \"#4CAF50\",\n 10: \"#8BC34A\",\n 11: \"#CDDC39\",\n 12: \"#FFEB3B\",\n 13: \"#FFC107\",\n 14: \"#FF9800\",\n 15: \"#FF5722\",\n}\n```\n\n\n```python\n%matplotlib inline\n```\n\n\n```python\nfrom sympy import (\n Eq as Equation,\n Derivative,\n Function,\n Symbol,\n lambdify,\n simplify,\n symbols,\n dsolve,\n solve,\n)\n```\n\n\n```python\nn = Symbol(\"n\")\nxi = Symbol(\"xi\")\ntheta = Function(\"theta\")\n```\n\n\n```python\nlhs = simplify(\n (1 / xi ** 2) * Derivative(\n (xi ** 2) * Derivative(theta(xi), xi), xi\n ).doit()\n)\nlhs\n```\n\n\n\n\n$\\displaystyle \\frac{d^{2}}{d \\xi^{2}} \\theta{\\left(\\xi \\right)} + \\frac{2 \\frac{d}{d \\xi} \\theta{\\left(\\xi \\right)}}{\\xi}$\n\n\n\n\n```python\nrhs = -theta(xi) ** n\nrhs\n```\n\n\n\n\n$\\displaystyle - \\theta^{n}{\\left(\\xi \\right)}$\n\n\n\n\n```python\nlane_endem_eq = Equation(lhs, rhs)\nlane_endem_eq\n```\n\n\n\n\n$\\displaystyle \\frac{d^{2}}{d \\xi^{2}} \\theta{\\left(\\xi \\right)} + \\frac{2 \\frac{d}{d \\xi} \\theta{\\left(\\xi \\right)}}{\\xi} = - \\theta^{n}{\\left(\\xi \\right)}$\n\n\n\nReemplazando n=0:\n\n\n```python\nlane_endem_eq_0 = lane_endem_eq.subs(n, 0)\nlane_endem_eq_0\n```\n\n\n\n\n$\\displaystyle \\frac{d^{2}}{d \\xi^{2}} \\theta{\\left(\\xi \\right)} + \\frac{2 \\frac{d}{d \\xi} \\theta{\\left(\\xi \\right)}}{\\xi} = -1$\n\n\n\n\n```python\nsolution = dsolve(lane_endem_eq_0, theta(xi))\nsolution\n```\n\n\n\n\n$\\displaystyle \\theta{\\left(\\xi \\right)} = C_{1} + \\frac{C_{2}}{\\xi} - \\frac{\\xi^{2}}{6}$\n\n\n\n\n```python\nconstants = solve(\n [\n simplify(xi * solution.rhs).subs(xi, 0),\n Derivative(\n simplify(xi * solution.rhs), xi\n ).doit().subs(xi, 0) - 1,\n ],\n symbols('C1 C2'),\n)\nconstants\n```\n\n\n\n\n {C1: 1, C2: 0}\n\n\n\n$$C1= 1, C2=0$$\n\n\n```python\nsolution = solution.subs(constants)\nsolution\n```\n\n\n\n\n$\\displaystyle \\theta{\\left(\\xi \\right)} = 1 - \\frac{\\xi^{2}}{6}$\n\n\n\n\n```python\ntheta_zeros = solve(solution.rhs, xi)\ntheta_zeros\n```\n\n\n\n\n [-sqrt(6), sqrt(6)]\n\n\n\n\n```python\nnum_theta_f = lambdify(xi, solution.rhs, \"numpy\")\n```\n\n\n```python\nn_xi = np.linspace(0, 10, 101)\nn_theta = num_theta_f(n_xi)\n\nfig = plt.figure(figsize=(11.25, 4.5), frameon=False)\naxs = fig.add_subplot(1, 1, 1)\n\naxs.plot(\n n_xi, \n n_theta, \n color=material_palette[1],\n label=r\"$\\theta_0(\\xi)$\"\n)\naxs.legend()\naxs.set_title(r\"Solution for Lane-Endem equation for $n=0$\")\naxs.set_xlim([0, 10])\naxs.set_xlabel(r\"$\\xi$\")\naxs.set_ylim([-2, 
2])\naxs.set_ylabel(r\"$\\theta_0(\\xi)$\")\nplt.tight_layout()\nplt.show()\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "5098f6696bc85846e51467c22b23577fa334aa21", "size": 29357, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "01_polytropic_index_0.ipynb", "max_stars_repo_name": "RcrdPhysics/Estructura-Estelar", "max_stars_repo_head_hexsha": "23fe348ab0a49caf7f0c6c25c9bed4b669930d1e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "01_polytropic_index_0.ipynb", "max_issues_repo_name": "RcrdPhysics/Estructura-Estelar", "max_issues_repo_head_hexsha": "23fe348ab0a49caf7f0c6c25c9bed4b669930d1e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "01_polytropic_index_0.ipynb", "max_forks_repo_name": "RcrdPhysics/Estructura-Estelar", "max_forks_repo_head_hexsha": "23fe348ab0a49caf7f0c6c25c9bed4b669930d1e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 65.6756152125, "max_line_length": 20096, "alphanum_fraction": 0.8019893041, "converted": true, "num_tokens": 1182, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9381240142763573, "lm_q2_score": 0.8918110555020057, "lm_q1q2_score": 0.8366293673635768}} {"text": "\n\n[Qiita \u306e\u89e3\u8aac\u8a18\u4e8b](https://qiita.com/tobira-code/items/d76ed91a88f112b4a474)\u3092\u53c2\u8003\u306b\u3057\u305f ODE \u306e\u30b5\u30f3\u30d7\u30eb\u30ce\u30fc\u30c8\u3002\n\u306f\u3058\u3081\u306b\u5fc5\u8981\u306a\u30d1\u30c3\u30b1\u30fc\u30b8\u3092 import \u3059\u308b.\n- numpy \u3067\u591a\u6b21\u5143\u914d\u5217\u30c7\u30fc\u30bf\u51e6\u7406\n- matplotlib \u3067\u30c7\u30fc\u30bf\u53ef\u8996\u5316\n- scipy.integrate \u3067\u7a4d\u5206\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.integrate import ode\n```\n\n\u5e38\u5fae\u5206\u65b9\u7a0b\u5f0f $y''=-y$ \u3092\u89e3\u304f\u305f\u3081\u306b\u4ee5\u4e0b\u306e\u3088\u3046\u306b\u9023\u7acb1\u968e\u5fae\u5206\u65b9\u7a0b\u5f0f\u306b\u5206\u89e3\u3059\u308b.\n\n\\begin{align}\ny'&=z \\\\\nz'&=-y\n\\end{align}\n\n\u30b3\u30fc\u30c9\u4e0a\u306f $v\\equiv(y,z)$ \u3068\u3059\u308b.\n\u5c0e\u95a2\u6570\u3092 $v'\\equiv f(t,v)$ \u3068\u3057\u3001\u521d\u671f\u6761\u4ef6\u3092 $v(t=0)\\equiv v_0 \\equiv (1.0, 0.0)$ \u3068\u3059\u308b\u3068\u4ee5\u4e0b\u306e\u30b3\u30fc\u30c9\u306e\u3088\u3046\u306b\u306a\u308b\u3002\n\n\n```python\ndef f(t, v): # t, y:v[0], y'=z:v[1]\n return [v[1], -v[0]] # y':return[0] y''=z':return[1]\n\nv0 = [1.0, 0.0] # y0, y'0=z0\n```\n\nNonstiff \u306a\u5fae\u5206\u65b9\u7a0b\u5f0f\u3092\u89e3\u304f\u305f\u3081\u306e\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\u300c8(5,3)\u6b21\u306e Runge-Kutta \u6cd5(Dorman-Prince \u6cd5)\u300d\u306b\u3088\u308b\u95a2\u6570 dop853 \u3092\u5229\u7528\u3059\u308b\u3002\n\n\u8a73\u7d30\u306f[\u3053\u3061\u3089](https://org-technology.com/posts/ordinary-differential-equations.html).\n\u516c\u5f0f\u30c9\u30ad\u30e5\u30e1\u30f3\u30c8\u306f[\u3053\u3061\u3089](https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.ode.html).\n\n\n```python\nsolver = ode(f)\nsolver.set_integrator(name=\"dop853\")\nsolver.set_initial_value(v0)\n```\n\n\n\n\n \n\n\n\n\u7a4d\u5206\u533a\u9593 
$01.25)]))\n```\n", "meta": {"hexsha": "61996085a5b8b0604954c226a9a90e2cc0f93d5a", "size": 95528, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/sampling/2021-03-10-importance-sampling.ipynb", "max_stars_repo_name": "yadav-sachin/prog-ml.github.io", "max_stars_repo_head_hexsha": "aeb2a22ec11f6ee97a3e2ef05b503a6578f2b931", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2022-03-04T16:29:48.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-27T18:14:18.000Z", "max_issues_repo_path": "notebooks/sampling/2021-03-10-importance-sampling.ipynb", "max_issues_repo_name": "yadav-sachin/prog-ml.github.io", "max_issues_repo_head_hexsha": "aeb2a22ec11f6ee97a3e2ef05b503a6578f2b931", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2022-03-04T16:27:46.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T04:38:53.000Z", "max_forks_repo_path": "notebooks/sampling/2021-03-10-importance-sampling.ipynb", "max_forks_repo_name": "yadav-sachin/prog-ml.github.io", "max_forks_repo_head_hexsha": "aeb2a22ec11f6ee97a3e2ef05b503a6578f2b931", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2022-03-13T11:14:10.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-23T21:22:17.000Z", "avg_line_length": 188.0472440945, "max_line_length": 15592, "alphanum_fraction": 0.9166317729, "converted": true, "num_tokens": 1070, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9381240090865197, "lm_q2_score": 0.8918110382493034, "lm_q1q2_score": 0.8366293465500481}} {"text": "```python\nfrom sympy import *\n```\n\n\n```python\nx = Symbol('x')\ny = Function('y')\nec = Eq(x**2*y(x).diff(x,2) - 3*x*y(x).diff(x) + 6*y(x),0)\n```\n\n\n```python\nec\n```\n\n\n\n\n$\\displaystyle x^{2} \\frac{d^{2}}{d x^{2}} y{\\left(x \\right)} - 3 x \\frac{d}{d x} y{\\left(x \\right)} + 6 y{\\left(x \\right)} = 0$\n\n\n\n\n```python\nprint('Ejercicio a ')\ndsolve(ec)\n```\n\n Ejercicio a \n\n\n\n\n\n$\\displaystyle y{\\left(x \\right)} = C_{2} e^{- 2 x} + \\left(C_{1} + 2 x\\right) e^{2 x}$\n\n\n\n\n```python\n\nec = Eq(x**2*y(x).diff(x,2) + 5*x*y(x).diff(x) + 4*y(x),0)\n```\n\n\n```python\nprint('ejrcicio b')\nec = Eq(x**2*y(x).diff(x,2) + 5*x*y(x).diff(x) + 4*y(x),0)\ndsolve(ec)\n```\n\n ejrcicio b\n\n\n\n\n\n$\\displaystyle y{\\left(x \\right)} = \\frac{C_{1} + C_{2} \\log{\\left(x \\right)}}{x^{2}}$\n\n\n\n\n```python\nec = Eq(x**2*y(x).diff(x,2) + 9*x*y(x).diff(x) +17*y(x),0)\nprint('Ejercicio C')\ndsolve(ec)\n```\n\n Ejercicio C\n\n\n\n\n\n$\\displaystyle y{\\left(x \\right)} = \\frac{C_{1} \\sin{\\left(\\log{\\left(x \\right)} \\right)} + C_{2} \\cos{\\left(\\log{\\left(x \\right)} \\right)}}{x^{4}}$\n\n\n\n\n```python\nprint('Ejercicio d')\nec = Eq(y(x).diff(x,2) - 4*y(x),8*exp(2*x))\nec\n```\n\n Ejercicio d\n\n\n\n\n\n$\\displaystyle - 4 y{\\left(x \\right)} + \\frac{d^{2}}{d x^{2}} y{\\left(x \\right)} = 8 e^{2 x}$\n\n\n\n\n```python\ndsolve(ec)\n```\n\n\n\n\n$\\displaystyle y{\\left(x \\right)} = C_{2} e^{- 2 x} + \\left(C_{1} + 2 x\\right) e^{2 x}$\n\n\n\n\n```python\nprint(' Ejercicio e ')\nec = Eq(y(x).diff(x,2)-4*y(x).diff(x)+ 4*y(x),exp(-2*x))\ndsolve(ec)\n```\n\n Ejercicio e \n\n\n\n\n\n$\\displaystyle y{\\left(x \\right)} = \\left(C_{1} + C_{2} x\\right) e^{2 x} + \\frac{e^{- 2 x}}{16}$\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": 
"9d03d4068c79759f8f5f7138180603bb579be148", "size": 7794, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "talllerprogramacion.ipynb", "max_stars_repo_name": "sergiorodri234/sergiorodri234.github.io", "max_stars_repo_head_hexsha": "311597033b7ce94aaa8dcf85c34461ffdc7aa661", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "talllerprogramacion.ipynb", "max_issues_repo_name": "sergiorodri234/sergiorodri234.github.io", "max_issues_repo_head_hexsha": "311597033b7ce94aaa8dcf85c34461ffdc7aa661", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "talllerprogramacion.ipynb", "max_forks_repo_name": "sergiorodri234/sergiorodri234.github.io", "max_forks_repo_head_hexsha": "311597033b7ce94aaa8dcf85c34461ffdc7aa661", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-02-24T20:47:09.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-24T20:47:09.000Z", "avg_line_length": 25.3876221498, "max_line_length": 192, "alphanum_fraction": 0.3864511162, "converted": true, "num_tokens": 666, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9609517083920618, "lm_q2_score": 0.8705972633721708, "lm_q1q2_score": 0.8366019275589412}} {"text": "# 15.4. Computing exact probabilities and manipulating random variables\n\n\n```python\nfrom sympy import *\nfrom sympy.stats import *\ninit_printing()\n```\n\n\n```python\nX, Y = Die('X', 6), Die('Y', 6)\n```\n\n\n```python\nP(Eq(X, 3))\n```\n\n\n```python\nP(X > 3)\n```\n\n\n```python\nP(X > Y)\n```\n\n\n```python\nP(X + Y > 6, X < 5)\n```\n\n\n```python\nZ = Normal('Z', 0, 1) # Gaussian variable\n```\n\n\n```python\nP(Z > pi)\n```\n\n\n```python\nE(Z**2), variance(Z**2)\n```\n\n\n```python\nf = density(Z)\n```\n\n\n```python\nvar('x')\nf(x)\n```\n\n\n```python\n%matplotlib inline\nplot(f(x), (x, -6, 6))\n```\n\n\n```python\nEq(Integral(f(x), (x, pi, oo)),\n simplify(integrate(f(x), (x, pi, oo))))\n```\n", "meta": {"hexsha": "96036cffe50a9f994152ad9cad1027b2dbe501a0", "size": 3133, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "001-Jupyter/001-Tutorials/002-IPython-Cookbook/chapter15_symbolic/04_stats.ipynb", "max_stars_repo_name": "jhgoebbert/jupyter-jsc-notebooks", "max_stars_repo_head_hexsha": "bcd08ced04db00e7a66473b146f8f31f2e657539", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "001-Jupyter/001-Tutorials/002-IPython-Cookbook/chapter15_symbolic/04_stats.ipynb", "max_issues_repo_name": "jhgoebbert/jupyter-jsc-notebooks", "max_issues_repo_head_hexsha": "bcd08ced04db00e7a66473b146f8f31f2e657539", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "001-Jupyter/001-Tutorials/002-IPython-Cookbook/chapter15_symbolic/04_stats.ipynb", "max_forks_repo_name": "jhgoebbert/jupyter-jsc-notebooks", "max_forks_repo_head_hexsha": "bcd08ced04db00e7a66473b146f8f31f2e657539", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-01-13T18:49:12.000Z", 
"max_forks_repo_forks_event_max_datetime": "2022-01-13T18:49:12.000Z", "avg_line_length": 16.4031413613, "max_line_length": 77, "alphanum_fraction": 0.4586658155, "converted": true, "num_tokens": 232, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9648551495568569, "lm_q2_score": 0.8670357512127872, "lm_q1q2_score": 0.8365639094075555}} {"text": "```python\n%%markdown\n# Overview\nIn a linear regression, the outcome, $y$, is approximated by a hidden function of the type $\\forall x \\in S, f(x) = a + bx$,\nand the error function is $J(x) = E_{x\\in S} (f(x) - y)^2$\n\nFor a $n$ samples of $x$:\n\\begin{align}\nJ_{a,b}(x) = \\frac{1}{n} \\sum_{i=1}^{i=n} (f(x_i) - y_i)^2 = \\frac{1}{n} \\sum_{i=1}^{i=n} (a + bx_i - y_i)^2\n\\end{align}\n\nThe gradient of $J(x)$ is then:\n\\begin{align}\n\\frac{\\partial J(x)}{\\partial a} &= \\frac{2}{n} \\sum_{i=1}^{i=n} (a + bx_i - y_i) \\\\\n\\frac{\\partial J(x)}{\\partial b} &= \\frac{2}{n} \\sum_{i=1}^{i=n} (a + bx_i - y_i)x_i\n\\end{align}\n\n# References\n## Gradient descent\n* http://en.wikipedia.org/wiki/Gradient_descent\n* http://en.wikipedia.org/wiki/Stochastic_gradient_descent#Example\n## Formatting\n* [Tex/$\\LaTeX$ support in Jupyter Markdown](http://jupyter-notebook.readthedocs.io/en/stable/examples/Notebook/Typesetting%20Equations.html)\n```\n\n\n# Overview\nIn a linear regression, the outcome, $y$, is approximated by a hidden function of the type $\\forall x \\in S, f(x) = a + bx$,\nand the error function is $J(x) = E_{x\\in S} (f(x) - y)^2$\n\nFor a $n$ samples of $x$:\n\\begin{align}\nJ_{a,b}(x) = \\frac{1}{n} \\sum_{i=1}^{i=n} (f(x_i) - y_i)^2 = \\frac{1}{n} \\sum_{i=1}^{i=n} (a + bx_i - y_i)^2\n\\end{align}\n\nThe gradient of $J(x)$ is then:\n\\begin{align}\n\\frac{\\partial J(x)}{\\partial a} &= \\frac{2}{n} \\sum_{i=1}^{i=n} (a + bx_i - y_i) \\\\\n\\frac{\\partial J(x)}{\\partial b} &= \\frac{2}{n} \\sum_{i=1}^{i=n} (a + bx_i - y_i)x_i\n\\end{align}\n\n# References\n## Gradient descent\n* http://en.wikipedia.org/wiki/Gradient_descent\n* http://en.wikipedia.org/wiki/Stochastic_gradient_descent#Example\n## Formatting\n* [Tex/$\\LaTeX$ support in Jupyter Markdown](http://jupyter-notebook.readthedocs.io/en/stable/examples/Notebook/Typesetting%20Equations.html)\n\n\n\n```python\nfrom mpl_toolkits import mplot3d\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n```\n\n\n```python\n# Error function for x = {1, 2}: 1/2 [(a+b-1)^2 + (a+2b-2)^2]\n# Derivative:\n# a. dJa = (a+b-1) + (a+2b-2) = 2a + 3b - 3\n# b. 
dJb = (a+b-1) + 2(a+2b-2) = 3a + 5b - 5\n# a = 0, b = 0 => dJa = -3 ; dJb = -5\ndef error_function(a, b):\n total_error = 0.0\n n = 2\n for x in range(1, n):\n total_error += (a + b*x - x) ** 2\n \n return total_error / n\n\nx = np.linspace(-5, 5, 30)\ny = np.linspace(-5, 5, 30)\n\nX, Y = np.meshgrid(x, y)\nZ = error_function(X, Y)\n```\n\n\n```python\nfig = plt.figure()\nax = plt.axes(projection='3d')\nax.contour3D(X, Y, Z, 50, cmap='binary')\nax.set_xlabel('x')\nax.set_ylabel('y')\nax.set_zlabel('z')\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "e71285b47f08afa1842d7407eefcd198652ed601", "size": 50252, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "general/error-function-for-linear-regression.ipynb", "max_stars_repo_name": "machine-learning-helpers/induction-books-python", "max_stars_repo_head_hexsha": "d26816f92d4f6a64e8c4c2ed6c7c8343c77cd3ad", "max_stars_repo_licenses": ["RSA-MD"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2018-02-11T12:34:19.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-22T18:06:01.000Z", "max_issues_repo_path": "general/error-function-for-linear-regression.ipynb", "max_issues_repo_name": "machine-learning-helpers/induction-books-python", "max_issues_repo_head_hexsha": "d26816f92d4f6a64e8c4c2ed6c7c8343c77cd3ad", "max_issues_repo_licenses": ["RSA-MD"], "max_issues_count": 17, "max_issues_repo_issues_event_min_datetime": "2019-11-22T00:48:20.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-16T11:00:50.000Z", "max_forks_repo_path": "general/error-function-for-linear-regression.ipynb", "max_forks_repo_name": "machine-learning-helpers/induction-python", "max_forks_repo_head_hexsha": "631a735a155f0feb7012472fbca13efbc273dfb0", "max_forks_repo_licenses": ["RSA-MD"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 299.119047619, "max_line_length": 45448, "alphanum_fraction": 0.9235453315, "converted": true, "num_tokens": 996, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9532750440288019, "lm_q2_score": 0.877476793890012, "lm_q1q2_score": 0.8364767293297531}} {"text": "# Logistic Regression Example Notebook\nThis example shows how to predict if a student passes a test using Logistic Regression.\n\nThe goal is to give a basic introduction to classification problems, so many things will be simplified.\n\nRegarding notation, $a$ is a scalar, $\\mathbf{a}$ is a vector (column vector), and $\\mathbf{A}$ is a matrix.\n\n\n# Imports\n\n\n```python\nimport matplotlib.pyplot as plt # Plotting\nimport numpy as np # Array computation\nimport seaborn as sns # Plotting \nfrom sklearn.linear_model import LinearRegression # Linear Regression model\nfrom IPython.display import display, clear_output # Plotting\n\n%matplotlib inline\nsns.set_theme()\nsns.set_style(\"ticks\")\n```\n\n# Create and Process data\nWe use fake data for this example. 
The input is the number of hours a student has studied for a test and the target is a binary label indicating if they have passed the test or not.\n\nWe want train_xs to be a matrix with shape ($m$, $n + 1$) with the first column being all ones (in our case $m = 12$ and $n = 1$): \n\\begin{align}\n\\mathbf{X} = \\begin{bmatrix} x^1_{0} && \\ldots && x^1_{n} \\\\ \\vdots && \\ddots && \\vdots \\\\ x^m_{0} && \\ldots && x^m_{n} \\end{bmatrix}\n\\end{align}\n\nWe want train_ys to be a vector with shape ($m$, 1):\n\\begin{align}\n\\mathbf{y} = \\begin{bmatrix} y^1 \\\\ \\vdots \\\\ y^m \\end{bmatrix}\n\\end{align}\n\n\n```python\ntrain_xs = np.array([1, 2, 3.5, 5, 6, 7.2, 8, 10.2, 12, 15.7, 17, 20]).reshape(-1, 1)\ntrain_ys = np.array([0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1]).reshape(-1, 1)\n\n# Plot data\nsns.relplot(train_xs.flatten(), train_ys.flatten())\nplt.title(\"Students Passing Exam\")\nplt.ylabel(\"Passed Exam\")\nplt.xlabel(\"Hours Studied\")\nplt.xlim([0, 25])\n\n```\n\nThis is a classification problem but for educational purposes let's try fitting a line to this data using sklearn.\n\n\n```python\nregressor = LinearRegression()\nregressor = regressor.fit(train_xs, train_ys) # Remove the extra features from x\n\nplot_xs = np.linspace(-10, 30, 500)\nplot_ys = regressor.intercept_ + regressor.coef_[0] * plot_xs\nsns.relplot(train_xs.flatten(), train_ys.flatten())\nsns.lineplot(plot_xs, plot_ys, color=\"red\",linestyle=\"--\")\nplt.title(\"Students Passing Exam\")\nplt.ylabel(\"Passed Exam\")\nplt.xlabel(\"Hours Studied\")\nplt.xlim([-10, 30])\n```\n\nWe quickly see that regression can't be directly used for classification in this way. Linear regression produces a line that is not restricted to 0 and 1.\n\nTherefore, we need to define a different hypothesis.\n\n\n```python\n# Add extra feature to xs\nones = np.ones(shape=(12, 1))\ntrain_xs = np.c_[ones, train_xs] # Shape is now (12, 2)\n```\n\n# Hypothesis Definition\nThe hypothesis function $h$ is defined as follows for a single example $\\mathbf{x}^k$:\n\n\\begin{align}\nh_\\theta(\\mathbf{x}^k) & = g(\\boldsymbol{\\theta}^T\\mathbf{x}^k) = \\frac{1}{1 + e^{-\\boldsymbol{\\theta}^T\\mathbf{x}^k}}\n\\end{align}\n\nwhere $g$ is the logistic or sigmoid function:\n\n\\begin{align}\ng(z) &= \\frac{1}{1 + e^{-z}}\n\\end{align}\n\nWe want to implement a batched method which computes the hypothesis for all examples in one go. 
To do so, we can implement the following version:\n\n\\begin{align}\nh_\\theta(\\mathbf{X}) & = g(\\mathbf{X} \\boldsymbol{\\theta}) = \\begin{bmatrix} h_\\theta(\\mathbf{x}^1) \\\\ \\vdots \\\\ h_\\theta(\\mathbf{x}^m) \\end{bmatrix}\n\\end{align}\n\n\n\n```python\ndef logistic(z):\n return 1/(1 + np.e**-z)\n\ndef hypothesis(thetas, input_data):\n return logistic(np.dot(input_data, thetas))\n```\n\n\n```python\n# Plot logistic function\nx_plot = np.linspace(-10, 10, 500)\ny_plot = logistic(x_plot)\n\nsns.lineplot(x_plot, y_plot)\n```\n\nWe use the hypothesis to define the probabilities of the classes:\n\n\\begin{align}\nP(y^k = 1 | \\mathbf{x}^k; \\boldsymbol{\\theta}) &= h_\\theta(\\mathbf{x}^k) \\\\\nP(y^k = 0 | \\mathbf{x}^k; \\boldsymbol{\\theta}) &= 1 - h_\\theta(\\mathbf{x}^k) \\\\\nP(y^k| \\mathbf{x}^k; \\boldsymbol{\\theta}) &= (h_\\theta(\\mathbf{x}^k))^{y^{k}} (1- h_\\theta(\\mathbf{x}^k))^{1 - y^{k}} \\\\\n\\end{align}\n\n\n```python\ndef class_prob(thetas, input_data, class_n):\n hypothesis_estimate = hypothesis(thetas, input_data)\n return (hypothesis_estimate ** class_n) * (1-hypothesis_estimate)**(1-class_n)\n```\n\n# Learning Algorithm\n\n## Cost Function\nWe want to maximise the likelihood of our training data:\n\\begin{align}\nL(\\boldsymbol\\theta) &= P(\\mathbf{y}| \\mathbf{X}; \\boldsymbol{\\theta}) \\\\\n &= \\prod_{k=1}^m P(y^k| \\mathbf{x}^k; \\boldsymbol{\\theta})\n\\end{align}\n\nTo make things easier, we will instead minimise the negative log likelihood:\n\\begin{align}\nl(\\boldsymbol\\theta) &= -log(L(\\boldsymbol\\theta)) \\\\\n &= -\\left[\\sum_{k=1}^m y^klog(h_\\theta(\\mathbf{x}^k)) + (1 - y^k)log(1 - h_\\theta(\\mathbf{x}^k))\\right]\n\\end{align}\n\nWe can instead implement a vectorized form which returns the negative log likelihood per example:\n\\begin{align}\nl(\\boldsymbol\\theta) &= -\\left[\\mathbf{y} * log(h_\\theta(\\mathbf{X})) + (\\mathbf{1} - \\mathbf{y}) * log(\\mathbf{1} - h_\\theta(\\mathbf{X}))\\right]\n\\end{align}\n\n\n```python\ndef cost_function(thetas, input_data, target_data):\n hypothesis_result = hypothesis(thetas, input_data)\n return -(target_data * np.log(hypothesis_result) + (1 - target_data) * np.log(1 - hypothesis_result))\n```\n\n## Gradient Descent\nSecond step, define gradient descent per epoch, where each parameter $\\theta_i$ is updated following:\n\\begin{align}\n\\theta_i & \u2254 \\theta_i - \\alpha \\frac{\\partial}{\\partial \\theta_i} l(\\boldsymbol\\theta) \\\\\n& \u2254 \\theta_i - \\alpha \\sum_{k=1}^m (h_\\theta(\\mathbf{x}^k) - y^k)x^k_i\n\\end{align}\nAs before, we can instead implement a vectorized operation:\n\n\\begin{align}\n\\boldsymbol{\\theta} & \u2254 \\boldsymbol{\\theta} - \\alpha \\mathbf{X}^T(h_\\theta(\\mathbf{X}) - \\mathbf{y})\n\\end{align}\n\n\n```python\ndef gradient_descent(thetas, input_data, target_data, alpha=0.1):\n \"\"\"\n thetas: parameters for the hypothesis, expected shape is (2, 1)\n input_data: full matrix of input data, expected shape is (n_samples, 2)\n target_data: full vector of target data, expected shape is (n_samples, 1)\n alpha: learning rate, defaults to 0.1\n\n return: New parameters, shape is (2, 1)\n \"\"\"\n return thetas - alpha * np.dot(input_data.T, hypothesis(thetas, input_data) - target_data)\n```\n\n# Train\n\n\n\n```python\nparameters = np.array([[0.01], [0.01]])\n\n# Create array of xs to use for plotting \nx_space = np.linspace(0, 25, 500).reshape(500, 1)\nones = np.ones(shape=(500, 1))\nx_added = np.c_[ones, x_space]\n\ndef create_fig():\n fig = 
plt.figure()\n ax = fig.add_subplot(1, 1, 1) \n\n return fig, ax\n\ndef plot(ax, fig, parameters):\n current_curve = hypothesis(parameters, x_added)\n \n ax.cla()\n \n ax.set_xlim([0, 25])\n ax.set_ylim([-0.05, 1.05])\n ax.plot(x_space.flatten(), current_curve.flatten(), \"r--\")\n ax.scatter(train_xs[:, 1], train_ys.flatten())\n ax.hlines(0.5, 0, 25)\n ax.set_title(\"Students Passing Exam\")\n ax.set_ylabel(\"Probability of Passing Exam\")\n ax.set_xlabel(\"Hours Studied\")\n display(fig)\n \n clear_output(wait = True)\n plt.pause(0.0001)\n\ndef train(parameters, n_epochs=300):\n # Create empty figure for plotting\n fig, ax = create_fig()\n \n # Iterate over n_epochs\n for epoch in range(n_epochs): \n parameters = gradient_descent(parameters, train_xs, train_ys, alpha=0.005)\n plot(ax, fig, parameters)\n \n return parameters\n\ntrained_parameters = train(parameters)\n```\n\n\n```python\n# Compute accuracy for the training set\ndef class_accuracy(thetas, input_data, targets):\n probs = class_prob(thetas, input_data, targets)\n correct = (probs >= 0.5).sum() # Using >= means that we say passing when the probability is 50%\n return correct/probs.size\n\nprint(f\"Final training accuracy: {class_accuracy(trained_parameters, train_xs, train_ys):.2f}\")\n```\n\n Final training accuracy: 0.75\n\n\n## Library Version\n\n\n```python\nimport sklearn.linear_model\n\nlr_classifier = sklearn.linear_model.LogisticRegression()\nlr_classifier.fit(train_xs[:, 1:], train_ys)\n\nsklearn_params = np.array([lr_classifier.intercept_[0], lr_classifier.coef_[0, 0]]) # Write the learned parameters as before\nfig, ax = create_fig()\nplot(ax, fig, sklearn_params)\n```\n\n\n```python\nlr_acc = lr_classifier.score(train_xs[:, 1:], train_ys)\nlr_acc\n```\n\n\n\n\n 0.75\n\n\n", "meta": {"hexsha": "2d27e747bb2fb1b9777321c06f45907989e639f9", "size": 278324, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Logistic Regression/Logistic Regression.ipynb", "max_stars_repo_name": "GIlunga/MLWorkshop2020", "max_stars_repo_head_hexsha": "de5b8a25197afcb816ab9686c008017c8b034576", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2020-10-30T17:54:13.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-20T15:39:30.000Z", "max_issues_repo_path": "Logistic Regression/Logistic Regression.ipynb", "max_issues_repo_name": "GIlunga/MLWorkshop2020", "max_issues_repo_head_hexsha": "de5b8a25197afcb816ab9686c008017c8b034576", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Logistic Regression/Logistic Regression.ipynb", "max_forks_repo_name": "GIlunga/MLWorkshop2020", "max_forks_repo_head_hexsha": "de5b8a25197afcb816ab9686c008017c8b034576", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 560.0080482897, "max_line_length": 39812, "alphanum_fraction": 0.7362570242, "converted": true, "num_tokens": 2486, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9362850057480347, "lm_q2_score": 0.8933093968230773, "lm_q1q2_score": 0.8363921937392684}} {"text": "```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport torch\n```\n\n$\\textbf{Definitions:}$ $\\\\$\n\n$\\mbox{---Gradient---}$ $\\\\$ $\\\\$\nVector formed by partial derivatives of scalar function, f(x) in which $x = \\begin{bmatrix} x_1 \\\\ x_2 \\\\ \\vdots \\\\x_n \\end{bmatrix}$ $\\\\$\nGradient maps $\\mathbb{R}^n \\rightarrow \\mathbb{R}$ $\\\\$\n\n$$\\nabla f(\\mathbf{x})=\\frac{\\partial f(\\mathbf{x})}{\\partial x_1}\\hat{x}_1+\\frac{\\partial f(\\mathbf{x})}{\\partial x_2}\\hat{x}_2+\\ldots+\\frac{\\partial f(\\mathbf{x})}{\\partial x_n}\\hat{x}_n$$\n\n$$\\nabla f(x) = \\left[\\frac{\\partial f}{\\partial x_1}\\frac{\\partial f}{\\partial x_2}\\dots\\frac{\\partial f}{\\partial x_n}\\right]$$\n\nNote: Input is a column vector, outputs a row vector. $\\\\$\nGradient is the rate of change wrt each dimension/component and corresponds to steepest slope due to linear independence used for gradient descent. $\\\\$\n\n$\\mbox{---Jacobian---}$ $\\\\$\nMatrix formed by partial derivatives of vector function of scalar functions, maps $\\mathbb{R}^n \\rightarrow \\mathbb{R}^m$ $\\\\$\n\n$$J_\\mathbf{f} = \\frac{\\partial (f_1,\\ldots,f_m)}{\\partial(x_1,\\ldots,x_n)} = \\left[\n\\begin{matrix}\n\\frac{\\partial f_1}{\\partial x_1} & \\cdots & \\frac{\\partial f_1}{\\partial x_n} \\\\\n\\vdots & \\ddots & \\vdots \\\\\n\\frac{\\partial f_m}{\\partial x_1} & \\cdots & \\frac{\\partial f_m}{\\partial x_n}\n\\end{matrix}\n\\right]$$\nThe Jacobian is the gradient applied to multiple rows, commonly used as a change of basis/unit conversion:\n\n$$\\iiint_R f(x,y,z) \\,dx\\,dy\\,dz = \\iiint_S f(x(u,v,w),y(u,v,w),z(u,v,w))\\left|\\frac{\\partial (x,y,z)}{\\partial(u,v,w)}\\right|\\,du\\,dv\\,dw$$\nNote: Gradient = Jacobian if $m = 1$.\n\n$\\mbox{---Hessian---}$ $\\\\$\nGradient applied to Gradient, Double Gradient: $\\\\$\n$$\\begin{align}D[\\nabla f(\\mathbf x)] &= D[D[f(\\mathbf x)]]\\\\\n&=\\left(D\\left[\\frac{\\partial f}{\\partial x_1}\\right]^T, \\ldots, D\\left[\\frac{\\partial f}{\\partial x_n}\\right]^T\\right)\\end{align}$$\nWhich expands to give us the Hessian matrix:\n$$D^2[f(\\mathbf x)]=\\left(\\begin{matrix}\\frac{\\partial^2 f}{\\partial x_1^2} & \\ldots & \\frac{\\partial^2 f}{\\partial x_1\\partial x_n}\\\\\n\\vdots & \\ddots & \\vdots \\\\\n\\frac{\\partial^2 f}{\\partial x_n\\partial x_1}& \\ldots & \\frac{\\partial^2 f}{\\partial x_n^2}\\end{matrix}\\right)$$\n\nNote: Inputs are column vectors, outputs are row vectors. (Transposed first because the first gradient outputs a row vector) $\\\\$\nThe Hessian represents the rate of change of gradient, analogous to curvature. Used to computationally determine the position of a min/max point in optimization, which is darn impossible to visualize past 2 dimensions.\n\n$\\textbf{Analytic Gradient:}$ $\\\\$\n$\\mbox{---Linear Form---}$ $\\\\$\n$f(x) = a^T x$ $\\\\$\nComponent-wise derivative yields corresponding dot-product coefficent of each component k. 
$\\\\$\nAssemble each partial derivatives into vector: $\\\\$\n$\\nabla f(x) = \\begin{bmatrix} a_1 \\\\ a_2 \\\\ \\vdots \\\\ a_n \\end{bmatrix} = a$\n\n-General Linear Form: $\\\\$\n$f(x) = a^Tx + b$ $\\\\$\n$\\nabla f(x) = a$\n\n\n$\\mbox{---Quadratic Form---}$ $\\\\$\n$f(x) = x^T A x$ $\\\\$\nTracing through 2x2 example: $\\\\$\n$\\nabla f(x) = (A + A^T)x$ $\\\\$\nFor pd matrices, $A = A^T$ so: $\\\\$\n$\\nabla f(x) = 2Ax$\n\n-General Quadratic Form, which builds from gradient of general linear form: $\\\\$\n$f(x) = \\frac{1}{2}x^T A x + b^Tx + c$ $\\\\$\n$\\nabla f(x) = \\frac{1}{2}(A^T + A)x + b$ $\\\\$\nFor symmteric matrix A: $\\\\$\n$\\nabla f(x) = Ax + b$\n\n-Mixed Quadratic Form: $\\\\$\n$f(x,y) = x^T A y$ $\\\\$\nWrt x: $\\nabla_x f(x,y) = Ay$ $\\\\$\nWrt y: $\\nabla_y f(x,y) = A^Tx$ $\\\\$\nTaking the right partial derivative, transpose. $\\\\$\nMatrices are pd, so wrt y: $\\nabla_y f(x,y) = Ax$\n\n$\\textbf{Analytic Hessian:}$ $\\\\$\nTracing through 2x2 example again: $\\\\$\n$\\mbox{---Linear Form---}$ $\\\\$\n$f(x) = a^T x$ $\\\\$\n$\\nabla f(x)$ does not depend on x, so $\\nabla^2 f(x) = 0$.\n\n$\\mbox{---Quadratic Form---}$ $\\\\$\n$f(x) = x^T A x$ $\\\\$\n$\\nabla^2 f(x) = A + A^T$ $\\\\$\nFor symmteric matrix A: $\\\\$\n$\\nabla^2 f(x) = 2A$\n\nMixed Quadratic Form: $\\\\$\n$f(x,y) = x^T A y$ $\\\\$\nWrt xx, yy: $H_{xx} = H_{yy} = 0$ $\\\\$\nWrt xy, yx: $H_{xy} = H_{yx} = 2A$\n\n\nSimultaneous gradient descent (continuous time):\n $\\dot x = -D_1f_1(x,y),\\ \\dot y = -D_2f_2(x,y)$, simgrad Jacobian\n $J(x,y) = \\begin{bmatrix} D_1^2f_1(x,y) & D_{12}f_1(x,y) \\\\ D_{21}f_2(x,y) & D_2^2f_2(x,y) \\end{bmatrix}$\n \n(discrete time): \n$x^+ = x - \\gamma_x D_1f_1(x,y),\\ \ny^+ = y - \\gamma_y D_2f_2(x,y)$\n\n\n```python\n\n```\n\n\n```python\nm = 2\nn = 2\n\n#Random pd matrices, Cholesky form:\nnp.random.seed(0)\nA1 = np.random.randn(n,n)\nA1 = A1.T @ A1\n\nA2 = np.random.randn(n,n)\nA2 = A2.T @ A2\n\n#Random matricies, not pd:\nB1 = np.random.randn(n,m)\nB2 = np.random.randn(n,m)\nC1 = np.random.randn(m,m)\nC2 = np.random.randn(m,m)\n\n#Define e,h vectors:\ne1 = np.random.randn(n)\ne2 = np.random.randn(m)\nh1 = np.random.randn(m)\nh2 = np.random.randn(n)\n\n#Convert Matrices into tensors\nA1 = torch.tensor(A1, dtype = torch.float)\nA2 = torch.tensor(A2, dtype = torch.float)\nB1 = torch.tensor(B1, dtype = torch.float)\nB2 = torch.tensor(B2, dtype = torch.float)\nC1 = torch.tensor(C1, dtype = torch.float)\nC2 = torch.tensor(C2, dtype = torch.float)\ne1 = torch.tensor(e1, dtype = torch.float)\ne2 = torch.tensor(e2, dtype = torch.float)\nh1 = torch.tensor(h1, dtype = torch.float)\nh2 = torch.tensor(h2, dtype = torch.float)\n\nx1 = torch.ones((n, 1), requires_grad=True)\nx2 = torch.ones((m, 1), requires_grad=True)\n\n#Generic Quadratic Cost:\n#B_ij, C_ij still rather vague\ndef f1(x1,x2):\n return (0.5 * x1.t() @ A1 @ x1) + (x1.t() @ B1 @ x2) + (0.5 * x2.t() @ C1 @ x2) + (e1.t() @ x1) + (h1.t() @ x2)\n\ndef f2(x1,x2):\n return (0.5 * x2.t() @ A2 @ x2) + (x2.t() @ B2 @ x1) + (0.5 * x1.t() @ C2 @ x1) + (e2.t() @ x2) + (h2.t() @ x1)\n```\n\n\n```python\n#Analytical Gradient:\n#D wrt x1:\ndef D1f1(x1,x2):\n return (A @ x1) + 0.5 * (B1 @ x2) + e1\n\ndef D1f2(x1,x2):\n return 0.5 * (B2.t() @ x2) + (C2 @ x1) + h2\n\n#D wrt x2:\ndef D2f1(x1,x2):\n return 0.5 * (B1.t() @ x1) + (C1 @ x2) + h1\n\ndef D2f2(x1,x2):\n return (A2 @ x2) + 0.5 * (B2 @ x1) + e2\n\n#Analytical Hessian:\n#H wrt x1:\ndef H11f1(x1, x2):\n return A1\n\ndef H11f2(x1, x2):\n return C2\n\n#H wrt x2:\ndef H22f1(x1, 
x2):\n return C1\n\ndef H22f2(x1, x2):\n return A2\n\n```\n\n\n```python\n#Computational Gradient:\n#tensors = [tensor.zero_grad() for tensor in tensors]\n\n'''\n-Possible solutions:\n-Make functions just expressions\n-use backward\n-Seeing an example would be nice\n'''\nprint(f1(x1,x2).grad)\n\n\n#Computational Hessian:\n#print(torch.autograd(x1).autograd(x1))\n#print(torch.autograd(x2).autograd(x2))\n```\n\n None\n\n\n /projects/ac66b0d1-d01e-42cc-9d9b-3799ed5ac4d1/.local/lib/python3.6/site-packages/torch/tensor.py:746: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the gradient for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations.\n warnings.warn(\"The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad \"\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "cbc162fb4c70f94f6a95f3db2635b7c9805eed2a", "size": 10771, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Derivatives/Derivative Tests.ipynb", "max_stars_repo_name": "JWongDude/FruitLoops", "max_stars_repo_head_hexsha": "f4346d9db16ba619d71ce5bb819f5da08a88a120", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Derivatives/Derivative Tests.ipynb", "max_issues_repo_name": "JWongDude/FruitLoops", "max_issues_repo_head_hexsha": "f4346d9db16ba619d71ce5bb819f5da08a88a120", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-07-09T19:52:36.000Z", "max_issues_repo_issues_event_max_datetime": "2020-07-09T19:52:36.000Z", "max_forks_repo_path": "Derivatives/Derivative Tests.ipynb", "max_forks_repo_name": "JWongDude/FruitLoops", "max_forks_repo_head_hexsha": "f4346d9db16ba619d71ce5bb819f5da08a88a120", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.857605178, "max_line_length": 525, "alphanum_fraction": 0.5121158667, "converted": true, "num_tokens": 2675, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9230391685381605, "lm_q2_score": 0.9059898159413479, "lm_q1q2_score": 0.8362640864105428}} {"text": "## EIGENVECTORS AND EIGENVALUES FROM A MATRIX\n\n\n```python\nimport numpy as np\nfrom numpy import linalg as LA\nimport matplotlib.pyplot as plt\nfrom plotnine import *\nimport scipy.linalg as la\nfrom sympy.interactive.printing import init_printing\nfrom sympy.matrices import Matrix, eye, zeros, ones, diag, GramSchmidt\n%matplotlib inline\nplt.style.use('ggplot')\n```\n\n\n```python\nprint(plt.style.available)\n```\n\n ['seaborn-dark', 'seaborn-darkgrid', 'seaborn-ticks', 'fivethirtyeight', 'seaborn-whitegrid', 'classic', '_classic_test', 'fast', 'seaborn-talk', 'seaborn-dark-palette', 'seaborn-bright', 'seaborn-pastel', 'grayscale', 'seaborn-notebook', 'ggplot', 'seaborn-colorblind', 'seaborn-muted', 'seaborn', 'Solarize_Light2', 'seaborn-paper', 'bmh', 'tableau-colorblind10', 'seaborn-white', 'dark_background', 'seaborn-poster', 'seaborn-deep']\n\n\n\n```python\n# import jtplot module in notebook\n#from jupyterthemes import jtplot\n\n# choose which theme to inherit plotting style from\n# onedork | grade3 | oceans16 | chesterish | monokai | solarizedl | solarizedd\n#jtplot.style(theme='oceans16')\n\n# set \"context\" (paper, notebook, talk, poster)\n# scale font-size of ticklabels, legend, etc.\n# remove spines from x and y axes and make grid dashed\n#jtplot.style(context='notebook', fscale=1.2, spines=False, ticks=True, gridlines='-')\n\n# turn on X- and Y-axis tick marks (default=False)\n# turn off the axis grid lines (default=True)\n# and set the default figure size\n# jtplot.style(ticks=True, grid=True)\n\n# reset default matplotlib rcParams\n\n```\n\n\n```python\ninit_printing(use_unicode=True, wrap_line=True)\n```\n\nLet **A** be a square matrix.\nS non-zero vector **v** is an eigenvector for A with eigenvalue $\\lambda$ if:\n$$Av = \\lambda v$$\nRearranging the equations, we see that **v** is a solution of the homogenous system of equations:\n$$(A - \\lambda I)v = 0$$\nwhere I is the identity matrix of size n.\n\nNon-trivial solutions exist only if the matrix $A- \\lambda I$ is singular => $det(A - \\lambda I) = 0$.\n\nEigenvalues of A are roots of the CHARACTERISTIC POLYNOMIAL:\n$$p(\\lambda) = det(A - \\lambda I)$$\n\n***\nScipy package helps in computing the eigenvalues and eigenvectors of a square matrix A.\n\n**scipy.linalg.eig** - computes eigenvalues and eigenvectors of a square matrix A\n***\n\n**!!! In numpy linalg and scipy the eigenvectors are normalized so their Euclidean norms are 1** \n\n### How it works step by step\n\n\n```python\n# crate an array (vectors are in the columns, so x coord go first, y in the second)\nM = np.array([[3,4], [0,5]])\nM\n```\n\n\n\n\n array([[3, 4],\n [0, 5]])\n\n\n\n\n```python\n# USING SCIPY the formula results a tuple in the form (eigvals, eigvecs)\nresults = la.eig(M)\nresults\n```\n\n\n\n\n (array([3.+0.j, 5.+0.j]), array([[1. , 0.89442719],\n [0. , 0.4472136 ]]))\n\n\n\n\n```python\n# print the eigenvalues of A (these are real numbers so we can print them accordingly)\nresults[0].real\n```\n\n\n\n\n array([3., 5.])\n\n\n\n\n```python\n# print the corresponding eigenvectors\nresults[1]\n```\n\n\n\n\n array([[1. , 0.89442719],\n [0. , 0.4472136 ]])\n\n\n\n\n```python\n# unpacking the tuple\neigvals, eigvecs = LA.eig(M)\n\nprint(\"The eigenvalues are: \", eigvals.real)\nprint(\"The eigenvectors are: \\n\", eigvecs)\n```\n\n The eigenvalues are: [3. 5.]\n The eigenvectors are: \n [[1. 0.89442719]\n [0. 
0.4472136 ]]\n\n\n**Check the eigenvector/eigenvalue condition $Au = \\lambda u$ where $u$ is the eigenvector and $\\lambda$ is its eigenvalue** \n- from the previous results, we have lambda1 = 3 and lambda2 = 5\n- we can index them to extract each individually\n- **we multiply the eigenvector by M (original matrix) and check that it is the same as multiplying the same eigenvector by its eigenvalue**\n\n$u$ - one of the eigenvectors resulting from the calculations \n$\\lambda$ -corresponding eigenvalue for the previous vector \n$M$ - matrix given as input for the calculations\n\n\n```python\n# take the second eigenvector\nu = eigvecs[:, 1]\nu\n```\n\n\n\n\n array([0.89442719, 0.4472136 ])\n\n\n\n\n```python\n# take the second lambda\nlambdaM = eigvals[1].real\nlambdaM\n```\n\n\n```python\n# multiply the input matrix with the eigenvector selected\nnp.dot(M, u)\n```\n\n\n\n\n array([4.47213595, 2.23606798])\n\n\n\nWe see that the result of multiplying the input matrix with the eigenvector and the eigenvalue with the eigenvector are the same. So $Au = \\lambda u$.\n\n\n```python\n# multiply same eigenvector by its corresponding eigenvalue\nlambdaM*u\n```\n\n\n\n\n array([4.47213595, 2.23606798])\n\n\n\n### Function to calculate the eigenvalues and vectors and display them nicely\n\n\n```python\n# helper function\ndef isSquareMatrix(matrix):\n \"\"\"\n checks is a matrix is square (length of all rows must equal the number of rows)\n \"\"\"\n return matrix.shape[0] == matrix.shape[1]\n```\n\n\n```python\n# helper function\ndef characteristic_polynomial(matrix):\n \"\"\"\n display the characteristic polynomial of a matrix\n \"\"\"\n \n m = Matrix(matrix)\n \n return m.charpoly()\n```\n\n\n```python\ndef solve_sympy(matrix):\n \"\"\"\n matrix: a numpy array in a square shape\n \"\"\"\n \n # for sympy calculations, transform the array into a Matrix object\n m = Matrix(matrix)\n print(m)\n \n # check that the matrix is square\n if isSquareMatrix(matrix):\n \n # calculate the eigenvalues and eigenvectors using sympy\n sympy_eigvals = m.eigenvals()\n sympy_eigvect = m.eigenvects()\n \n # calculate the characteristic polynomical\n char_poly = m.charpoly()\n \n # error for matrix input\n else:\n print(\"Your input matrix is not square or is empty.\")\n \n print(\"Eigenvalues:\")\n for i in sympy_eigvals.items():\n print(i[0])\n \n for i in sympy_eigvect:\n print(\"Eigenvalue: \", i[0], \"\\nCorresponding eigenvector: \", i[2])\n print(\"Characteristic polynomial: \", char_poly)\n \n # loop through the results and return the eigenvectors to plot\n l = []\n for i in range(len(sympy_eigvect)):\n l.append(sympy_eigvect[i][2][0])\n \n to_plot = np.array(l).astype(np.float64).T\n \n #if len(sympy_eigvect) ==2:\n # p = sympy_eigvect[0][2][0], sympy_eigvect[1][2][0]\n # to_plot = np.array(p).astype(np.float64).T\n #else:\n # p = sympy_eigvect[0][2][0]\n # to_plot = np.array(p).astype(np.float64).T\n \n return sympy_eigvect, char_poly, to_plot\n```\n\n\n```python\ndef solve_scipy(matrix):\n \"\"\"\n matrix: a numpy array in a square shape\n \"\"\"\n \n # check that the matrix is square\n if isSquareMatrix(matrix):\n \n # calculate the eigenvalues and eigenvectors - using scipy\n eigvals, eigvecs = la.eig(matrix)\n \n # error for matrix input\n else:\n print(\"Your input matrix is not square or is empty.\")\n \n print(\"The eigenvalues are: \", eigvals.real, \"\\nThe eigenvectors are: \\n\", eigvecs)\n return eigvals, eigvecs\n```\n\n\n```python\ndef plot_vectors(original, eigen):\n \"\"\"\n Create an overaly of 
the vectors in 2D space.\n original -> given matrix\n eigen -> calculated eigen vectors\n \"\"\"\n \n # set the values in this order: 1st row for x, 2nd row for y\n #for origin\n X = np.array((0)) \n Y = np.array((0))\n \n # for axes\n U = original[0,:]\n V = original[1,:]\n U1 = eigen[0,:]\n V1 = eigen[1,:]\n \n # check the limits for x and y axes and add 1\n # this can be condensed into a one liner if you want to make it square - here only for display purposes\n xlimit = max(np.amax(abs(U))+1, np.amax(abs(U1))+1)\n ylimit = max(np.amax(abs(V))+1, np.amax(abs(V1))+1)\n \n # choose the bigger number between x and y limits to have a square graph\n lim = xlimit if xlimit > ylimit else ylimit\n \n # set figure size\n fig, ax = plt.subplots(figsize=(20,10))#(1, 2, sharex=True, sharey=True, figsize=(20, 10))\n \n # IMPORTANT FOR ASPECT RATIO\n ax.set_aspect('equal')\n \n #ax[0]\n # plot the vectors\n plt.quiver(X, Y, U, V, \n color =[\"r\", \"black\"],\n units='xy', scale=1, width = 0.03, alpha=0.7)\n #ax[1]\n plt.quiver(X, Y, U1, V1, \n color =[\"green\", \"orange\"],\n units='xy', scale=1, headlength=5.5, width=0.05, alpha=0.7)\n \n # set the x,y limits automatically and limit them at 1 interval ticks\n plt.xlim(-lim, lim)\n plt.ylim(-lim, lim)\n plt.xticks(np.arange(-lim, lim, step=1)) \n plt.yticks(np.arange(-lim, lim, step=1))\n #ax[0]\n \n # annotate the vectors\n plt.annotate(\"v1\", xy = (original[0,:][0], original[1,:][0]),\n color = \"red\", weight = \"semibold\", size = 12)\n #ax[0]\n plt.annotate(\"v2\", xy = (original[0,:][1], original[1,:][1]), \n color = \"black\", weight = \"semibold\", size =12)\n #ax[1]\n if len(eigen[0])>1:\n plt.annotate(\"eigen_v1\", xy = ((eigen[0,:][0]), (eigen[1,:][0])-0.5),\n color = \"green\", weight = \"bold\", size = 13)\n #ax[1]\n plt.annotate(\"eigen_v2\", xy = (eigen[0,:][1], eigen[1,:][1]), \n color = \"orange\", weight = \"bold\", size =13)\n else:\n plt.annotate(\"eigen_v1\", xy = ((eigen[0,:][0]), (eigen[1,:][0])-0.5),\n color = \"green\", weight = \"bold\", size = 13)\n plt.show()\n```\n\n## COURSERA QUIZ\n\n1. **For the matrix A = [[1,0],[0,2]], what is the caracteristic polynomical and the solutions to it?**\n\n\n```python\nA = Matrix(2,2, (1, 0, 0, 2))\nA1 = np.array([[1,0], [0, 2]])\n\neveca, epa, to_plota = solve_sympy(A1)\n\nplot_vectors(A1, to_plota)\n```\n\n### 3. 
For the matrix A = [[3,4], [0,5]], what is the characteristic polynomial and the solutions to it?\n\n\n```python\nB1 = np.array([[3,4], [0,5]])\n\nevecb1, epb1, to_plotb1 = solve_sympy(B1)\n\nplot_vectors(B1, to_plotb1)\n```\n\n### For matrix A = [[1,0], [-1,4]] what is the charatceristic polynomial and the solutions?\n\n\n```python\nC1 = np.array([[1,0], [-1,4]])\n\nevec_c1, ep_c1, to_plot_c1 = solve_sympy(C1)\n\nplot_vectors(C1, to_plot_c1)\n```\n\n\n```python\nto_plot_c1\n```\n\n\n\n\n array([[3., 0.],\n [1., 1.]])\n\n\n\n### For matrix A = [[-3, 8], [2, 3]] what is the characteristic polynomical and the solutions?\n\n\n```python\nD1 = np.array([[-3, 8], [2, 3]])\n\nevec_d1, ep_d1, to_plot_d1 = solve_sympy(D1)\n\nplot_vectors(D1, to_plot_d1)\n```\n\n### For the matrix A = [[1,0], [-1,4]] what is the characteristic polynomial and the solutions?\n\n\n```python\nE1 = np.array([[ 1, 0], [-1, 4]])\n\nevec_E1, ep_E1, to_plot_e1 = solve_sympy(E1)\n\nplot_vectors(E1, to_plot_e1)\n```\n\n\n```python\nF = np.array([[5, 4], [ -4, -3]])\nevec_F, ep_F, to_plot_f = solve_sympy(F)\n\nplot_vectors(F, to_plot_f)\n```\n\n\n```python\nG = np.array([[-2, -3], [1,1]])\nevec_G, ep_G, to_plot_g = solve_sympy(G)\n```\n", "meta": {"hexsha": "8c5cbc8945a35569c79a7405da9ff1fca596055d", "size": 129080, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "helper notebooks/Eigenvalues_eigenvectors.ipynb", "max_stars_repo_name": "kriystinne/math-ML-ImperialCollegeLondon", "max_stars_repo_head_hexsha": "9d917b0aa4443b50150f6151698c1f346eeee876", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "helper notebooks/Eigenvalues_eigenvectors.ipynb", "max_issues_repo_name": "kriystinne/math-ML-ImperialCollegeLondon", "max_issues_repo_head_hexsha": "9d917b0aa4443b50150f6151698c1f346eeee876", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "helper notebooks/Eigenvalues_eigenvectors.ipynb", "max_forks_repo_name": "kriystinne/math-ML-ImperialCollegeLondon", "max_forks_repo_head_hexsha": "9d917b0aa4443b50150f6151698c1f346eeee876", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 117.3454545455, "max_line_length": 19320, "alphanum_fraction": 0.8575457081, "converted": true, "num_tokens": 3274, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9425067195846918, "lm_q2_score": 0.8872046026642944, "lm_q1q2_score": 0.8361962996575639}} {"text": "## Chapter 4 - Training Models\n\n### Linear Regression\n\nLinear regression is usually the first machine learning model in modern statistical learning. Although dull, it is widely used and serves as a good jumping-off point for newer approaches - many fancy statistical learning approaches can be seen as generalisations or extensions of linear regression. 
Hence, having a good understanding of linear regression before studying more complex learning methods cannot be overstated.\n\n\n```python\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nimport statsmodels.api as sm\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.linear_model import LinearRegression, SGDRegressor\n```\n\n\n```python\n# Ingest, preprocessing\ndf = pd.read_csv('Advertising.csv', index_col=0)\n\nX1 = df.iloc[:,:1]\ndf1 = df.iloc[:,[0,1,2,3]]\n\nX2 = df.iloc[:,:3]\ndf2 = df.iloc[:, [0,3]]\n\ny = df.iloc[:, 3]\n```\n\n\n```python\n# # For testing\n# print('df')\n# display(df.head())\n# print('X1')\n# display(X1.head())\n# print('df1')\n# display(df1.head())\n# print('X2')\n# display(X2.head())\n# print('df2')\n# display(df2.head())\n# print('y')\n# display(y.head())\n```\n\n\n```python\n# Plot each of the variables against each other\nsns.pairplot(df)\n```\n\nUnivariate / Simple Linear Regression\n\nGiven $n$ samples, the model assumes there is approximately a linear relationship between $X$ and $Y$. Mathematically, it is in the form:\n$$\ny \\approx \\beta_0 + \\beta_1x\n$$\n$\\beta_0$ and $\\beta_1$ are model coefficients or parameters that need to be estimated. The estimated values are $\\hat{\\beta_0}$ and $\\hat{\\beta_1}$ and the predicted value of $y$, $\\hat{y}$ is\n\n$$\\hat{y} = \\hat{\\beta_0} + \\hat{\\beta_1}x$$ \n\nTo obtain the parameter estimates, we find these values such that the mean squared error or MSE is minimised. Let the observations be represented in an $n$ by $p$ matrix, $\\mathbf X$. The MSE is then calculated as\n\n$$\\begin{align}\\text{MSE} (\\mathbf X) &= \\frac 1 n \\sum_{i=1}^n \\begin{pmatrix} \\hat{y^{(i)}} - y^{(i)}\\end{pmatrix}^2\\\\&= \\frac 1 n \\sum_{i=1}^n \\begin{pmatrix} \\hat{\\beta_0} + \\hat{\\beta_1}x^{(i)}- y^{(i)}\\end{pmatrix}^2\\end{align}$$\n\nNote that minimising the MSE is the same as minimising the residual sum of squares, RSS. This is because $\\frac 1n \\text{RSS} = \\text{MSE}$\n\nThe closed-form solution of univariate linear regression is:\n\n| $$\\hat{\\beta_1}$$| $$\n\\hat{\\beta_1} = \n\\frac{\\sum^n_{i=1}\\begin{bmatrix}\n\\begin{pmatrix}\nx_i-\\bar{x}\n\\end{pmatrix}\n\\begin{pmatrix}\ny_i-\\bar{y}\n\\end{pmatrix}\n\\end{bmatrix}}\n{\\sum^n_{i=1}\\begin{pmatrix}x_i - \\bar{x}\\end{pmatrix}^2}\n$$ |\n|:-|:-|\n| $$\\hat{\\beta_0}$$ | $$\n\\hat{\\beta_0}=\\bar{y}-\\hat{\\beta_1}\\bar{x} \n$$|\n\nwhere $\\bar{x} = \\frac {\\sum_i x}{n}$ and $\\bar{y} = \\frac {\\sum_i y}{N}$, the respective sample means. 
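\n\nAs a brief aside (an illustrative sketch with made-up numbers and variable names, separate from the Advertising data analysed below), these two formulas can be checked on a tiny synthetic sample whose true intercept and slope are known:\n\n\n```python\nimport numpy as np\n\n# Hypothetical sample drawn from y = 2 + 3x plus small perturbations\nx_demo = np.array([0.0, 1.0, 2.0, 3.0, 4.0])\ny_demo = 2 + 3*x_demo + np.array([0.1, -0.2, 0.05, 0.0, -0.1])\n\n# Closed-form estimates from the formulas above\nbeta1_hat = np.sum((x_demo - x_demo.mean())*(y_demo - y_demo.mean())) / np.sum((x_demo - x_demo.mean())**2)\nbeta0_hat = y_demo.mean() - beta1_hat*x_demo.mean()\nprint(beta0_hat, beta1_hat)  # close to the true values 2 and 3\n```\n\nThe resulting $\\hat{\\beta_0}$ and $\\hat{\\beta_1}$ land close to the true intercept and slope. 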
They are also considered the least-squares coefficient estimates for univariate linear regression.\n\n\n```python\nXarray1 = np.c_[np.ones((X1.shape[0],1)), X1] # Add x0=1\n```\n\n\n```python\n# Obtaining the coefficients using the closed-form solution\ndf_train1 = df.copy()\ndf_train1['TV_mean'] = df_train1['TV'].mean()\ndf_train1['sales_mean'] = df_train1['sales'].mean()\ndf_train1['model_beta1_xmxbar'] = df_train1['TV'] - df_train1['TV_mean']\ndf_train1['model_beta1_ymybar'] = df_train1['sales'] - df_train1['sales_mean']\ndf_train1['model_beta1_numer'] = df_train1['model_beta1_xmxbar'] * df_train1['model_beta1_ymybar']\ndf_train1['model_beta1_denom'] = df_train1['model_beta1_xmxbar']**2\ndf_train1[['model_beta1_numer', 'model_beta1_denom']].head()\nbeta1 = df_train1['model_beta1_numer'].sum()/df_train1['model_beta1_denom'].sum()\nbeta0 = df_train1['sales'].mean()-beta1*df_train1['TV'].mean()\nTheta_hat11 = [beta0, beta1]\nprint(Theta_hat11)\n```\n\n [7.0325935491276965, 0.047536640433019736]\n\n\n\n```python\n# Predict\nnp.dot(Xarray1[:2], Theta_hat11)\n```\n\n\n\n\n array([17.97077451, 9.14797405])\n\n\n\n\n```python\n# For univariate linear regression, obtain the coefficients using the normal equations\nTheta_hat12 = np.dot(np.dot(np.linalg.inv(np.dot(Xarray1.T, Xarray1)), Xarray1.T),y)\nTheta_hat12 = list(Theta_hat12)\nprint(Theta_hat12)\n```\n\n [7.032593549127698, 0.047536640433019736]\n\n\n\n```python\n# Predict\nnp.dot(Xarray1[:2], Theta_hat12)\n```\n\n\n\n\n array([17.97077451, 9.14797405])\n\n\n\n\n```python\n# Obtaining the coefficients using statsmodels\nreg11 = sm.OLS(y, Xarray1)\nresults11 = reg11.fit()\nTheta_hat13 = list(results11.params)\nprint(Theta_hat13)\n```\n\n [7.032593549127698, 0.047536640433019764]\n\n\n\n```python\n# Predict\nprint(results11.predict(Xarray1[:2]))\n```\n\n [17.97077451 9.14797405]\n\n\n\n```python\n# Obtaining the coefficients using sklearn\nreg = LinearRegression()\nreg.fit(Xarray1, y)\nTheta_hat14 = reg.coef_.copy()\nTheta_hat14[0] = reg.intercept_\nprint(Theta_hat14)\n```\n\n [7.03259355 0.04753664]\n\n\n\n```python\n# Predict\nprint(reg.predict(Xarray1[:2]))\n```\n\n [17.97077451 9.14797405]\n\n\nMultiple Linear Regression\n\nThe multiple linear regression model extends the univariate model. It is in the form:\n\n$$\ny \\approx \\beta_0 + \\beta_1x_{1} + \\beta_2x_{2}+ \\cdots + \\beta_px_{p}\n$$\n\nwhere $p$ is the number of features, $x_{j}$ is the value of the $j$th feature for the sample and $\\beta_j$ is the corresponding model coefficient for that feature. Letting $\\Theta = (\\beta_1, \\cdots, \\beta_p)^T$ and $\\mathbf x = (x_1, \\cdots, x_p)^T$, we can use the more concise notation:\n\n$$\ny = \\Theta^T \\mathbf x\n$$\nWhere $\\Theta$ is the parameter column vector containing $\\theta_0$ and the feature weights $\\theta_1$ to $\\theta_n$. and $\\mathbf x_i$ is the feature column vector containing $x_{ti}$ for the $i$th sample.\n\nTo obtain $\\hat{\\Theta}$, we use the same approach as univariate linear regression. Find values $\\hat{\\beta_0}, \\cdots, \\hat{\\beta_p}$ to minimise the MSE (and consequently RSS):\n\n$$\\begin{align}\\text{MSE} (\\mathbf X) &= \\frac 1 n \\sum_{i=1}^n \\begin{bmatrix} \\hat{y^{(i)}} - y^{(i)})\\end{bmatrix}^2\\\\&= \\frac 1 n \\sum_{i=1}^n \\begin{bmatrix} \\hat{\\beta_0} + \\hat{\\beta_1}x^{(i)}_1 + \\hat{\\beta_2}x^{(i)}_2 + \\cdots + \\hat{\\beta_p}x^{(i)}_p - y^{(i)})\\end{bmatrix}^2\\end{align}$$\n\nThe closed-form solution for multiple linear regression is the normal equation. 
Let $\\mathbf y = (y^{(1)}, \\cdots, y^{(n)})^T$:\n\n$$\\hat{\\Theta} = (\\mathbf X ^T \\mathbf X)^{-1} \\mathbf X^T \\mathbf y$$\n\n\n```python\nXarray2 = np.c_[np.ones((X1.shape[0],1)), X2] # Add x0 = 1\n```\n\n\n```python\n# For multiple linear regression, obtain the coefficients using the normal equations\nTheta_hat21 = np.dot(np.dot(np.linalg.inv(np.dot(Xarray2.T, Xarray2)), Xarray2.T),y)\nfloat_formatter = \"{:.6f}\".format\nnp.set_printoptions(formatter={'float_kind':float_formatter})\nprint(Theta_hat21)\n```\n\n [2.938889 0.045765 0.188530 -0.001037]\n\n\n\n```python\n# Predict\nprint(np.dot(Xarray2[:2], Theta_hat21))\n```\n\n [20.523974 12.337855]\n\n\n\n```python\n# Obtaining the coefficients using statsmodels\nreg21 = sm.OLS(y, Xarray2)\nresults21 = reg21.fit()\nTheta_hat22 = list(results21.params)\nprint([\"{:.6f}\".format(t) for t in Theta_hat22])\n```\n\n ['2.938889', '0.045765', '0.188530', '-0.001037']\n\n\n\n```python\n# Predict\nresults21.predict(Xarray2[:2])\n```\n\n\n\n\n array([20.523974, 12.337855])\n\n\n\n\n```python\n# Obtaining the coefficients using sklearn\nreg2 = LinearRegression()\nreg2.fit(Xarray2, y)\nTheta_hat23 = reg2.coef_.copy()\nTheta_hat23[0] = reg2.intercept_\nprint(Theta_hat23)\n```\n\n [2.938889 0.045765 0.188530 -0.001037]\n\n\n\n```python\n# Predict\nprint(reg2.predict(Xarray2[:2]))\n```\n\n [20.523974 12.337855]\n\n\n### Gradient Descent - Batch Gradient Descent\n\nGradient descent is a generic optimization problem capable of finding optimal solutions to many problems. The idea is to tweak parameters iteratively to minimize a cost function. \n\n- Note that when using gradient descent, the features must have a similar scale or it will take longer to converge.\n\nHence, to use gradient descent on a linear regression problem, we revisit the cost function, the MSE: $\\text{MSE} (\\Theta) = \\frac 1 n \\sum_{i=1}^n \\begin{bmatrix} \\Theta^T \\mathbf x^{(i)} - y^{(i)})\\end{bmatrix}^2$ and now compute the gradient w.r.t. the parameters $\\beta_j$. Each expression is the partial derivative for every parameter $\\beta_j$.\n$$\\frac{\\partial}{\\partial \\beta_j}\\text{MSE}(\\Theta) = \\frac 2n \\sum_{i=1}^n \\begin{bmatrix} \\Theta^T \\mathbf x^{(i)} - y^{(i)})\\end{bmatrix} x_j^{(i)}$$\n\nCombining them all, we obtain the gradient vector of the cost function:\n\n$$\\nabla_\\beta \\text{MSE}(\\Theta) = \\begin{bmatrix}\\frac{\\partial}{\\partial \\beta_0}\\text{MSE}(\\Theta)\\\\\\frac{\\partial}{\\partial \\beta_1}\\text{MSE}(\\Theta)\\\\\\ \\vdots\\\\\\frac{\\partial}{\\partial \\beta_p}\\text{MSE}(\\Theta)\\end{bmatrix}= \\begin{bmatrix}\\frac 2n \\sum_{i=1}^n \\begin{bmatrix} \\Theta^T \\mathbf x^{(i)} - y^{(i)})\\end{bmatrix} x_0^{(i)}\\\\\\frac 2n \\sum_{i=1}^n \\begin{bmatrix} \\Theta^T \\mathbf x^{(i)} - y^{(i)})\\end{bmatrix} x_1^{(i)}\\\\\\ \\vdots\\\\\\frac 2n \\sum_{i=1}^n \\begin{bmatrix} \\Theta^T \\mathbf x^{(i)} - y^{(i)})\\end{bmatrix} x_n^{(i)}\\end{bmatrix} = \\frac 2n \\mathbf X^T (X\\Theta - y)$$\n\nIn each step of gradient descent, we calculate $\\nabla_\\beta \\text{MSE}(\\Theta)$, the direction of movement for the gradients after going through the batch once. 
Then, we multiply this by the learning rate $\\eta$, and update the parameter vector:\n$$\\Theta^{\\text{new}} = \\Theta - \\eta \\cdot \\nabla_\\theta \\text{MSE}(\\Theta)$$\n\n\n```python\n# GD for the multivariate case\neta = 0.00001 # learning rate\nn_iterations = 50000 # number of iterations for GD\n\n# Initialise the parameter vector\nn, p = Xarray2.shape[0], Xarray2.shape[1]\nTheta_hat2gd = np.random.normal(0,1,p)\nprint(Theta_hat2gd)\n```\n\n [0.787003 0.068653 1.612421 -1.582863]\n\n\n\n```python\nfor itn in range(n_iterations):\n gradients = 2/n * np.dot(Xarray2.T,np.dot(Xarray2, Theta_hat2gd) - np.array(y))\n Theta_hat2gd = Theta_hat2gd - eta * gradients\n```\n\n\n```python\nprint(Theta_hat2gd)\n```\n\n [1.073751 0.050858 0.209926 0.010291]\n\n\nAnd it can be verified that the values obtained via gradient descent is the same as that of the normal equations.\n\n### Gradient Descent - Stochastic Gradient Descent\n\nBatch gradient descent uses the whole training set when running every epoch. In contrast, Stochastic gradient descent picks a random sample from the training set at every epoch and computes the gradients based only on that single instance.\n\nBy convention, we iterate by rounds of $m$ iterations, and each round is called an epoch. \n\n\n```python\n# SGD for the univariate case, using SKLearn\n# Train\nscaler = StandardScaler()\n\nreg_sgd1 = SGDRegressor(max_iter=1000000, penalty=None, eta0=0.0025)\nXarray1_s = scaler.fit_transform(Xarray1)\nreg_sgd1.fit(Xarray1_s, y)\n```\n\n\n\n\n SGDRegressor(alpha=0.0001, average=False, early_stopping=False, epsilon=0.1,\n eta0=0.0025, fit_intercept=True, l1_ratio=0.15,\n learning_rate='invscaling', loss='squared_loss', max_iter=1000000,\n n_iter_no_change=5, penalty=None, power_t=0.25, random_state=None,\n shuffle=True, tol=0.001, validation_fraction=0.1, verbose=0,\n warm_start=False)\n\n\n\n\n```python\n# SGD for the multivariate case, using SKLearn\n# Train\nscaler2 = StandardScaler()\nreg_sgd = SGDRegressor(max_iter=5000000, penalty=None, eta0=0.0025)\nXarray2_s = scaler2.fit_transform(Xarray2)\nreg_sgd.fit(Xarray2_s, y)\n```\n\n\n\n\n SGDRegressor(alpha=0.0001, average=False, early_stopping=False, epsilon=0.1,\n eta0=0.0025, fit_intercept=True, l1_ratio=0.15,\n learning_rate='invscaling', loss='squared_loss', max_iter=5000000,\n n_iter_no_change=5, penalty=None, power_t=0.25, random_state=None,\n shuffle=True, tol=0.001, validation_fraction=0.1, verbose=0,\n warm_start=False)\n\n\n\n\n```python\n# Univariate Case\nprint(np.dot(Xarray1[:3], Theta_hat11)) # Closed-form solution\nprint(np.dot(Xarray1[:3], Theta_hat12)) # Closed-form solution, normal equations\nprint(results11.predict(Xarray1[:3])) # statsmodels\nprint(reg.predict(Xarray1[:3])) # sklearn LinearRegression\nprint(reg_sgd1.predict(Xarray1_s[:3])) # Stochastic Gradient Descent (SGDRegressor from sklearn)\n```\n\n [17.970775 9.147974 7.850224]\n [17.970775 9.147974 7.850224]\n [17.970775 9.147974 7.850224]\n [17.970775 9.147974 7.850224]\n [17.826052 9.074389 7.787103]\n\n\n\n```python\n# Multivariate Case\nprint(np.dot(Xarray2[10:14],Theta_hat21)) # Normal equations\nprint(np.dot(Xarray2[10:14],Theta_hat22)) # statsmodels\nprint(reg2.predict(Xarray2[10:14])) # sklearn LinearRegression\nprint(np.dot(Xarray2[10:14],Theta_hat2gd)) # Batch Gradient Descent\nprint(reg_sgd.predict(Xarray2_s[10:14])) # Stochastic Gradient Descent (SGDRegressor from sklearn)\n```\n\n [7.032299 17.285129 10.577121 8.826300]\n [7.032299 17.285129 10.577121 8.826300]\n [7.032299 17.285129 
10.577121 8.826300]\n [5.902096 17.072383 10.330784 7.701954]\n [7.004401 17.085039 10.565385 8.734636]\n\n", "meta": {"hexsha": "5adbb61f8252196ee8711048d85d32b25dc2ce7e", "size": 165806, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "chap04/04-textbook-training-models-01a.ipynb", "max_stars_repo_name": "bryanblackbee/topic__hands-on-machine-learning", "max_stars_repo_head_hexsha": "3b9a2cfa011099178dd73c3366331958d49ad96f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chap04/04-textbook-training-models-01a.ipynb", "max_issues_repo_name": "bryanblackbee/topic__hands-on-machine-learning", "max_issues_repo_head_hexsha": "3b9a2cfa011099178dd73c3366331958d49ad96f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chap04/04-textbook-training-models-01a.ipynb", "max_forks_repo_name": "bryanblackbee/topic__hands-on-machine-learning", "max_forks_repo_head_hexsha": "3b9a2cfa011099178dd73c3366331958d49ad96f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-06-20T05:38:43.000Z", "max_forks_repo_forks_event_max_datetime": "2020-06-20T05:38:43.000Z", "avg_line_length": 189.709382151, "max_line_length": 140636, "alphanum_fraction": 0.906637878, "converted": true, "num_tokens": 4152, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.942506726044381, "lm_q2_score": 0.8872045862611166, "lm_q1q2_score": 0.8361962899285247}} {"text": "# Solution {-}\nFactor spectral density function $S(s)$:\n\n\n```python\nfrom sympy import symbols, factor\n\ns = symbols('s')\n\nS = (-s**2 + 1)/(s**4 - 8*s**2 + 16)\nS\n```\n\n\n\n\n$\\displaystyle \\frac{1 - s^{2}}{s^{4} - 8 s^{2} + 16}$\n\n\n\n\n```python\nfactor(S, s)\n```\n\n\n\n\n$\\displaystyle - \\frac{\\left(s - 1\\right) \\left(s + 1\\right)}{\\left(s - 2\\right)^{2} \\left(s + 2\\right)^{2}}$\n\n\n", "meta": {"hexsha": "b5010f81bdbee3316ebdd7f6aa7b387d5eeb69a4", "size": 1783, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Problem 3.6.ipynb", "max_stars_repo_name": "mfkiwl/GMPE340", "max_stars_repo_head_hexsha": "3602b8ba859a2c7db2cab96862472597dc1ac793", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-09-07T09:36:36.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-07T09:36:36.000Z", "max_issues_repo_path": "Problem 3.6.ipynb", "max_issues_repo_name": "mfkiwl/GMPE340", "max_issues_repo_head_hexsha": "3602b8ba859a2c7db2cab96862472597dc1ac793", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Problem 3.6.ipynb", "max_forks_repo_name": "mfkiwl/GMPE340", "max_forks_repo_head_hexsha": "3602b8ba859a2c7db2cab96862472597dc1ac793", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-09-20T18:48:20.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-20T18:48:20.000Z", "avg_line_length": 20.2613636364, "max_line_length": 128, "alphanum_fraction": 0.4621424565, "converted": true, "num_tokens": 146, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9566342061815148, "lm_q2_score": 0.87407724336544, "lm_q1q2_score": 0.8361721898482245}} {"text": "# Introduction to some of the functions in SymPy\n\n\n```python\nimport sympy as smp\nfrom sympy import *\n```\n\n\n```python\nx = smp.symbols('x')\n```\n\n\n```python\nx**3\n```\n\n\n\n\n$\\displaystyle x^{3}$\n\n\n\n\n```python\nx , y = smp.symbols('x y')\n```\n\n\n```python\nf = x**2 + y **2 \n```\n\n\n```python\nf.subs(x,10) # substituting the value of x\n```\n\n\n\n\n$\\displaystyle y^{2} + 100$\n\n\n\n\n```python\nf.subs(y,3) # substituting the value of y\n```\n\n\n\n\n$\\displaystyle x^{2} + 9$\n\n\n\n\n```python\nsmp.sin(x)\n```\n\n\n\n\n$\\displaystyle \\sin{\\left(x \\right)}$\n\n\n\n\n```python\nsmp.exp(1/x) # exponent\n```\n\n\n\n\n$\\displaystyle e^{\\frac{1}{x}}$\n\n\n\n\n```python\nsmp.log(x) #Natural log\n```\n\n\n\n\n$\\displaystyle \\log{\\left(x \\right)}$\n\n\n\n\n```python\nsmp.log(x,10) # log with a base 10\n```\n\n\n\n\n$\\displaystyle \\frac{\\log{\\left(x \\right)}}{\\log{\\left(10 \\right)}}$\n\n\n\n\n```python\nx**(smp.Rational(7,2)) # keeping the power odf x in fraction\n```\n\n\n\n\n$\\displaystyle x^{\\frac{7}{2}}$\n\n\n\n# Lets test our Fortune with Limits.\n\n\n```python\nsmp.sin(x/2+smp.sin(x))\n```\n\n\n\n\n$\\displaystyle \\sin{\\left(\\frac{x}{2} + \\sin{\\left(x \\right)} \\right)}$\n\n\n\n\n```python\nsmp.limit(smp.sin(x/2+smp.sin(x)),x,smp.pi)\n```\n\n\n\n\n$\\displaystyle 1$\n\n\n\n\n```python\n2*smp.exp(1/x)/(smp.exp(1/x)+1)\n```\n\n\n\n\n$\\displaystyle \\frac{2 e^{\\frac{1}{x}}}{e^{\\frac{1}{x}} + 1}$\n\n\n\n\n```python\nsmp.limit(2*smp.exp(1/x)/(smp.exp(1/x)+1),x,0,dir ='+')\n```\n\n\n\n\n$\\displaystyle 2$\n\n\n\n\n```python\nsmp.limit(2*smp.exp(1/x)/(smp.exp(1/x)+1),x,0,dir ='-')\n```\n\n\n\n\n$\\displaystyle 0$\n\n\n\n\n```python\n(smp.cos(x)-1)/x\n```\n\n\n\n\n$\\displaystyle \\frac{\\cos{\\left(x \\right)} - 1}{x}$\n\n\n\n\n```python\nsmp.limit(((smp.cos(x)-1)/x),x,smp.oo) # Here smp.oo defins a limit of x with indinity..\n```\n\n\n\n\n$\\displaystyle 0$\n\n\n\n# Derivatives.........\n\n\n```python\n((1 + smp.sin(x))/(1-smp.cos(x)))**2\n```\n\n\n\n\n$\\displaystyle \\frac{\\left(\\sin{\\left(x \\right)} + 1\\right)^{2}}{\\left(1 - \\cos{\\left(x \\right)}\\right)^{2}}$\n\n\n\n\n```python\nsmp.diff(((1 + smp.sin(x))/(1-smp.cos(x)))**2,x)\n```\n\n\n\n\n$\\displaystyle \\frac{2 \\left(\\sin{\\left(x \\right)} + 1\\right) \\cos{\\left(x \\right)}}{\\left(1 - \\cos{\\left(x \\right)}\\right)^{2}} - \\frac{2 \\left(\\sin{\\left(x \\right)} + 1\\right)^{2} \\sin{\\left(x \\right)}}{\\left(1 - \\cos{\\left(x \\right)}\\right)^{3}}$\n\n\n\n\n```python\nsmp.log(x,5)**(x/2)\n```\n\n\n\n\n$\\displaystyle \\left(\\frac{\\log{\\left(x \\right)}}{\\log{\\left(5 \\right)}}\\right)^{\\frac{x}{2}}$\n\n\n\n\n```python\nsmp.diff(smp.log(x,5)**(x/2),x)\n```\n\n\n\n\n$\\displaystyle \\left(\\frac{\\log{\\left(x \\right)}}{\\log{\\left(5 \\right)}}\\right)^{\\frac{x}{2}} \\left(\\frac{\\log{\\left(\\frac{\\log{\\left(x \\right)}}{\\log{\\left(5 \\right)}} \\right)}}{2} + \\frac{1}{2 \\log{\\left(x \\right)}}\\right)$\n\n\n\n\n```python\n# Let's Start with an abstract approach........../////\n\nf,g = smp.symbols('f g', cls = smp.Function)\ng = g(x)\nf = f(x+g)\n```\n\n\n```python\nf\n```\n\n\n\n\n$\\displaystyle f{\\left(x + g{\\left(x \\right)} \\right)}$\n\n\n\n\n```python\nsmp.diff(f,x) #Derivative of f with respect to x\n```\n\n\n\n\n$\\displaystyle \\left(\\frac{d}{d x} g{\\left(x \\right)} + 1\\right) \\left. 
\\frac{d}{d \\xi_{1}} f{\\left(\\xi_{1} \\right)} \\right|_{\\substack{ \\xi_{1}=x + g{\\left(x \\right)} }}$\n\n\n\n# Intergration {Indefinate}\n\n\n```python\nsmp.cos(x)*smp.cot(x)\n```\n\n\n\n\n$\\displaystyle \\cos{\\left(x \\right)} \\cot{\\left(x \\right)}$\n\n\n\n\n```python\nsmp.integrate(smp.cos(x)*smp.cot(x),x) #By defalt SciPy do not give the conatant term. Hence, we have to keep this in mind \n```\n\n\n\n\n$\\displaystyle \\frac{\\log{\\left(\\cos{\\left(x \\right)} - 1 \\right)}}{2} - \\frac{\\log{\\left(\\cos{\\left(x \\right)} + 1 \\right)}}{2} + \\cos{\\left(x \\right)}$\n\n\n\n\n```python\nsmp.csc(x) #Cosec is coined as csc\n```\n\n\n\n\n$\\displaystyle \\csc{\\left(x \\right)}$\n\n\n\n\n```python\n4*smp.sec(3*x)*smp.tan(3*x)\n```\n\n\n\n\n$\\displaystyle 4 \\tan{\\left(3 x \\right)} \\sec{\\left(3 x \\right)}$\n\n\n\n\n```python\nsmp.integrate(4*smp.sec(3*x)*smp.tan(3*x),x)\n```\n\n\n\n\n$\\displaystyle \\frac{4}{3 \\cos{\\left(3 x \\right)}}$\n\n\n\n\n```python\n2/smp.sqrt(1-x**2) - 1/(x)**(smp.Rational(1,4))\n```\n\n\n\n\n$\\displaystyle \\frac{2}{\\sqrt{1 - x^{2}}} - \\frac{1}{\\sqrt[4]{x}}$\n\n\n\n\n```python\nsmp.integrate(2/smp.sqrt(1-x**2) - 1/(x)**(smp.Rational(1,4)),x)\n```\n\n\n\n\n$\\displaystyle - \\frac{4 x^{\\frac{3}{4}}}{3} + 2 \\operatorname{asin}{\\left(x \\right)}$\n\n\n\n# Initial Value Function\n\n#1 Given [dy/dx = 8x+csc^2(x)] with y(pi/2)= -7 solve for y(x)\n\n\n```python\nintegral = 8*x + (smp.csc(x))**2 # in the give equation we Take Integral Both Sides.\n```\n\n\n```python\nintegral = smp.integrate((integral),x) # now, we will try to calculate the constat term {C}.\n```\n\n\n```python\nintegral.subs(x,smp.pi/2) # substuiting the value of x with pi/2 we get pi^2\n```\n\n\n\n\n$\\displaystyle \\pi^{2}$\n\n\n\n\n```python\nC = -integral.subs(x,smp.pi/2) - 7\ny = integral + C\n```\n\n\n```python\ny.subs(x,smp.pi/2) # Verification\n```\n\n\n\n\n$\\displaystyle -7$\n\n\n\n\n```python\ny # as function of x\n```\n\n\n\n\n$\\displaystyle 4 x^{2} - \\pi^{2} - 7 - \\frac{\\cos{\\left(x \\right)}}{\\sin{\\left(x \\right)}}$\n\n\n\n#2 let me just define it--\n\n\n```python\n(1+smp.sqrt(x))**(smp.Rational(1,3)) / x**(smp.Rational(1,2))\n```\n\n\n\n\n$\\displaystyle \\frac{\\sqrt[3]{\\sqrt{x} + 1}}{\\sqrt{x}}$\n\n\n\n\n```python\nsmp.integrate((1+smp.sqrt(x))**(smp.Rational(1,3)) / x**(smp.Rational(1,2)),x)\n```\n\n\n\n\n$\\displaystyle \\frac{3 \\sqrt{x} \\sqrt[3]{\\sqrt{x} + 1}}{2} + \\frac{3 \\sqrt[3]{\\sqrt{x} + 1}}{2}$\n\n\n\n#3\n\n\n```python\nx*(1-x**2)**smp.Rational(1/4)\n```\n\n\n\n\n$\\displaystyle x \\sqrt[4]{1 - x^{2}}$\n\n\n\n\n```python\nsmp.integrate(x*(1-x**2)**smp.Rational(1/4),x)\n```\n\n\n\n\n$\\displaystyle \\frac{2 x^{2} \\sqrt[4]{1 - x^{2}}}{5} - \\frac{2 \\sqrt[4]{1 - x^{2}}}{5}$\n\n\n\n\n```python\n(2*x-1)*smp.cos((3*(2*x-1)**2+6)**smp.Rational(1/2)) /(3*(2*x-1)**2+6)**smp.Rational(1/2)\n\n```\n\n\n\n\n$\\displaystyle \\frac{\\left(2 x - 1\\right) \\cos{\\left(\\sqrt{3 \\left(2 x - 1\\right)^{2} + 6} \\right)}}{\\sqrt{3 \\left(2 x - 1\\right)^{2} + 6}}$\n\n\n\n\n```python\nsmp.integrate((2*x-1)*smp.cos((3*(2*x-1)**2+6)**smp.Rational(1/2)) /(3*(2*x-1)**2+6)**smp.Rational(1/2),x)\n```\n\n\n\n\n$\\displaystyle \\frac{\\sin{\\left(\\sqrt{3 \\left(2 x - 1\\right)^{2} + 6} \\right)}}{6}$\n\n\n\n# Integration{Definate}\n\n\n```python\nsmp.exp(x)/(smp.exp(2*x)+9)**smp.Rational(1,2)\n```\n\n\n\n\n$\\displaystyle \\frac{e^{x}}{\\sqrt{e^{2 x} + 9}}$\n\n\n\n\n```python\nsmp.integrate(smp.exp(x)/(smp.exp(2*x)+9)**(smp.Rational(1,2)),(x,0,smp.log(4))) 
#asin(x) is inverse of sin(x)\n```\n\n\n\n\n$\\displaystyle - \\operatorname{asinh}{\\left(\\frac{1}{3} \\right)} + \\operatorname{asinh}{\\left(\\frac{4}{3} \\right)}$\n\n\n\n\n```python\nx**10 * smp.exp(x)\n```\n\n\n\n\n$\\displaystyle x^{10} e^{x}$\n\n\n\n\n```python\nt = smp.symbols('t')\n```\n\n\n```python\nsmp.integrate(x**10 * smp.exp(x),(x,1,t))\n```\n\n\n\n\n$\\displaystyle \\left(t^{10} - 10 t^{9} + 90 t^{8} - 720 t^{7} + 5040 t^{6} - 30240 t^{5} + 151200 t^{4} - 604800 t^{3} + 1814400 t^{2} - 3628800 t + 3628800\\right) e^{t} - 1334961 e$\n\n\n\n# Improper Integrals\n\n\n```python\n(16*smp.atan(x))/ (1+x**2)\n```\n\n\n\n\n$\\displaystyle \\frac{16 \\operatorname{atan}{\\left(x \\right)}}{x^{2} + 1}$\n\n\n\n\n```python\nsmp.integrate((16*smp.atan(x))/ (1+x**2),(x,0,smp.oo)) #Very Cool\n```\n\n\n\n\n$\\displaystyle 2 \\pi^{2}$\n\n\n\n# Summation Functions\n\n\n```python\nn = smp.symbols('n')\n```\n\n\n```python\n6/4**n\n```\n\n\n\n\n$\\displaystyle 6 \\cdot 4^{- n}$\n\n\n\n\n```python\nsmp.Sum(6/4**n, (n,0,smp.oo))\n```\n\n\n\n\n$\\displaystyle \\sum_{n=0}^{\\infty} 6 \\cdot 4^{- n}$\n\n\n\n\n```python\n#Bro Solve it ///////\n\nsmp.Sum(6/4**n, (n,0,smp.oo)).doit()\n\n#///// WE have TO TELL SCIPY TO doit()\n```\n\n\n\n\n$\\displaystyle 8$\n\n\n\n\n```python\nsmp.Sum(2**(n+1)/5**n, (n,0,smp.oo))\n```\n\n\n\n\n$\\displaystyle \\sum_{n=0}^{\\infty} 2^{n + 1} \\cdot 5^{- n}$\n\n\n\n\n```python\nsmp.Sum(2**(n+1)/5**n, (n,0,smp.oo)).doit()\n```\n\n\n\n\n$\\displaystyle \\frac{10}{3}$\n\n\n\n\n```python\nsmp.Sum(smp.atan(n)/n**smp.Rational(11,10), (n,1,smp.oo))\n```\n\n\n\n\n$\\displaystyle \\sum_{n=1}^{\\infty} \\frac{\\operatorname{atan}{\\left(n \\right)}}{n^{\\frac{11}{10}}}$\n\n\n\n\n```python\nsmp.Sum(smp.atan(n)/n**smp.Rational(11,10), (n,1,smp.oo)).doit() # Some time it does not work..\n```\n\n\n\n\n$\\displaystyle \\sum_{n=1}^{\\infty} \\frac{\\operatorname{atan}{\\left(n \\right)}}{n^{\\frac{11}{10}}}$\n\n\n\n\n```python\n#Lets try this!! 
{.n()}\nsmp.Sum(smp.atan(n)/n**(smp.Rational(11,10)), (n,1,smp.oo)).n()\n#////////Here, we are actually dealing with approximations./////////////////\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\n```\n\n\n\n\n$\\displaystyle 15.3028821020457$\n\n\n\n\n```python\nsmp.Sum((1+smp.cos(n))/n**2, (n,1,smp.oo)).n()\n```\n\n\n\n\n$\\displaystyle 1.969$\n\n\n\n\n```python\n# Happy Learning\n```\n", "meta": {"hexsha": "991a207e09313687b6096800faa9ececf8f21131", "size": 29899, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Calclus_BASICS.ipynb", "max_stars_repo_name": "DhruvKumarPHY/Mathematics_With_Python", "max_stars_repo_head_hexsha": "90823f2385a2ef515d9172744a7168fa752d1cc8", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Calclus_BASICS.ipynb", "max_issues_repo_name": "DhruvKumarPHY/Mathematics_With_Python", "max_issues_repo_head_hexsha": "90823f2385a2ef515d9172744a7168fa752d1cc8", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Calclus_BASICS.ipynb", "max_forks_repo_name": "DhruvKumarPHY/Mathematics_With_Python", "max_forks_repo_head_hexsha": "90823f2385a2ef515d9172744a7168fa752d1cc8", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 20.147574124, "max_line_length": 287, "alphanum_fraction": 0.4373055955, "converted": true, "num_tokens": 3266, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9566342037088041, "lm_q2_score": 0.8740772318846386, "lm_q1q2_score": 0.8361721767039569}} {"text": "```python\nimport sympy\nsympy.init_printing()\nimport verify_3\n```\n\n# Section 1 - Linear Alebra Types\n\nYou can define both a vector and a matrix by creating an instance of Matrix from an array\n\n\n```python\ndef demo_1a():\n \n vector = sympy.Matrix([1,2,3])\n matrix = sympy.Matrix([[1,2],[3,5]])\n display(vector)\n display(matrix)\ndemo_1a()\n```\n\n\n$\\displaystyle \\left[\\begin{matrix}1\\\\2\\\\3\\end{matrix}\\right]$\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 2\\\\3 & 5\\end{matrix}\\right]$\n\n\nTranspositions\n\n\n```python\ndef demo_1b():\n \n vector = sympy.Matrix([1,2,3])\n matrix = sympy.Matrix([[1,2],[3,5]])\n display(vector)\n display(vector.transpose())\n display(matrix)\n display(matrix.transpose())\ndemo_1b()\n```\n\n\n$\\displaystyle \\left[\\begin{matrix}1\\\\2\\\\3\\end{matrix}\\right]$\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 2 & 3\\end{matrix}\\right]$\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 2\\\\3 & 5\\end{matrix}\\right]$\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 3\\\\2 & 5\\end{matrix}\\right]$\n\n\nAccess individual entries\n\n\n```python\ndef demo_1c():\n \n matrix = sympy.Matrix([[1,2],[3,5]])\n display(matrix)\n display(matrix[3]) # Read\n display(matrix[1,0])\n matrix[3] = 7 # Write\n display(matrix)\ndemo_1c()\n```\n\n__Exercise a__\n\nCreate the column vector $\\left[x,y\\right]^T$\n\n\n```python\ndef exercise_1a():\n \n x = sympy.Symbol('x')\n y = sympy.Symbol('y')\n \n # Enter answer here\n answer = 0\n display(answer)\n print(verify_3.verify_1a(answer))\nexercise_1a()\n```\n\nSlightly more interesting example, Jacobian matrix for polar coordinates\n\n\n```python\ndef demo_1d():\n \n r = 
sympy.Symbol('r', positive=True)\n q = sympy.Symbol('theta', positive=True)\n x = r*sympy.cos(q)\n y = r*sympy.sin(q)\n jac = sympy.Matrix([[var1.diff(var2) for var1 in [x,y]] for var2 in [r,q]])\n display(jac)\n\ndemo_1d()\n```\n\n\n$\\displaystyle \\left[\\begin{matrix}\\cos{\\left(\\theta \\right)} & \\sin{\\left(\\theta \\right)}\\\\- r \\sin{\\left(\\theta \\right)} & r \\cos{\\left(\\theta \\right)}\\end{matrix}\\right]$\n\n\n# Section 2 - Multiplication\n\nTo multiply matrices, use the operator *\n\n\n```python\ndef demo_2a():\n \n matrix_a = sympy.Matrix([[1,1],[1,1]])\n matrix_b = sympy.Matrix([[1,-1],[-1,1]])\n display(matrix_a)\n display(matrix_b)\n display(matrix_a*matrix_b)\ndemo_2a()\n```\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 1\\\\1 & 1\\end{matrix}\\right]$\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & -1\\\\-1 & 1\\end{matrix}\\right]$\n\n\n\n$\\displaystyle \\left[\\begin{matrix}0 & 0\\\\0 & 0\\end{matrix}\\right]$\n\n\nSimilarly, matrices and vectors\n\n\n```python\ndef demo_2b():\n \n matrix = sympy.Matrix([[1,2],[3,4]])\n vector = sympy.Matrix([1,1])\n display(matrix)\n display(vector)\n display(matrix*vector)\ndemo_2b()\n```\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 2\\\\3 & 4\\end{matrix}\\right]$\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1\\\\1\\end{matrix}\\right]$\n\n\n\n$\\displaystyle \\left[\\begin{matrix}3\\\\7\\end{matrix}\\right]$\n\n\nWhen multiplying vectors, and order and orientation is important\n\n\n```python\ndef demo_1c():\n \n vector_a = sympy.Matrix([1,2])\n vector_b = sympy.Matrix([3,4])\n display(vector_a)\n display(vector_b)\n display(vector_a.transpose()*vector_b)\n display(vector_a*vector_b.transpose())\ndemo_1c()\n```\n\n\n$\\displaystyle \\left[\\begin{matrix}1\\\\2\\end{matrix}\\right]$\n\n\n\n$\\displaystyle \\left[\\begin{matrix}3\\\\4\\end{matrix}\\right]$\n\n\n\n$\\displaystyle \\left[\\begin{matrix}11\\end{matrix}\\right]$\n\n\n\n$\\displaystyle \\left[\\begin{matrix}3 & 4\\\\6 & 8\\end{matrix}\\right]$\n\n\n__Exercise a__\n\nCalculate the product $R \\cdot b$ where $R$ is the 2D rotation matrix and $b = [1,0]$\n\n\n```python\ndef exercise_2a():\n \n q = sympy.Symbol('theta', positive=True)# Rotation angle\n b = sympy.Matrix([1,0])\n R = sympy.Matrix([[sympy.cos(q), -sympy.sin(q)],[sympy.sin(q), sympy.cos(q)]])\n display(R,b)\n \n # Enter answer here\n answer = 0\n display(answer)\n print(verify_3.verify_2a(answer))\nexercise_2a()\n```\n\n# Section 3 - Inversion and Null Spaces\n\nTo get the matrix use .inv()\n\n\n```python\ndef demo_3a():\n \n matrix = sympy.Matrix([[1,2],[3,4]])\n display(matrix)\n display(matrix.inv())\ndemo_3a()\n```\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 2\\\\3 & 4\\end{matrix}\\right]$\n\n\n\n$\\displaystyle \\left[\\begin{matrix}-2 & 1\\\\\\frac{3}{2} & - \\frac{1}{2}\\end{matrix}\\right]$\n\n\n__Exercise a__\n\nSolve the following system of equations $A \\cdot x = b$\n\n\n```python\ndef exercise_3a():\n \n A = sympy.Matrix([[1,2],[3,4]])\n b = sympy.Matrix([5,6])\n \n sol = 0\n print(verify_3.verify_3a(sol))\nexercise_3a()\n```\n\n False\n\n\nIf the matrix is not invertible, we can find its null space\n\n\n```python\ndef demo_3b():\n \n matrix = sympy.Matrix([[1,1],[1,1]])\n display(matrix)\n display(matrix.nullspace())\ndemo_3b()\n```\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 1\\\\1 & 1\\end{matrix}\\right]$\n\n\n\n$\\displaystyle \\left[ \\left[\\begin{matrix}-1\\\\1\\end{matrix}\\right]\\right]$\n\n\n# Section 4 - Determinant\n\nTaking the determinant\n\n\n```python\ndef 
demo_4a():\n \n matrix = sympy.Matrix([[1,2],[3,4]])\n display(matrix)\n display(matrix.det())\ndemo_4a()\n```\n\n__Exercise a__\n\nFind x for which the following matrix is singular\n\n\n```python\ndef exercise_4a():\n \n x = sympy.Symbol('x', positive=True)\n matrix = sympy.Matrix([[1,2-x],[3,4]])\n \n answer = 0\n print(verify_3.verify_4a(answer))\n \nexercise_4a()\n```\n\n False\n\n\n# Section 5 - Eigenvalue system\n\nTo find the eigenvalues\n\n\n```python\ndef demo_5a():\n \n matrix = sympy.Matrix([[1,2],[3,4]])\n display(matrix.eigenvals())\n\ndemo_5a()\n```\n\nThe first number is the eigenvalue, and the second number is the multiplicity.\n\nTo get the eigenvectors\n\n\n```python\ndef demo_5b():\n \n matrix = sympy.Matrix([[1,2],[3,4]])\n display(matrix.eigenvects())\n\ndemo_5b()\n```\n\n\n$\\displaystyle \\left[ \\left( \\frac{5}{2} - \\frac{\\sqrt{33}}{2}, \\ 1, \\ \\left[ \\left[\\begin{matrix}- \\frac{2}{- \\frac{3}{2} + \\frac{\\sqrt{33}}{2}}\\\\1\\end{matrix}\\right]\\right]\\right), \\ \\left( \\frac{5}{2} + \\frac{\\sqrt{33}}{2}, \\ 1, \\ \\left[ \\left[\\begin{matrix}- \\frac{2}{- \\frac{\\sqrt{33}}{2} - \\frac{3}{2}}\\\\1\\end{matrix}\\right]\\right]\\right)\\right]$\n\n\nIn some cases it is also useful to find the characteristic polynomial\n\n\n```python\ndef demo_5c():\n \n x = sympy.Symbol('x')\n matrix = sympy.Matrix([[1,2],[3,4]])\n display(matrix.charpoly('x'))\ndemo_5c()\n```\n\n__Exercise a__\n\nA series of aligned point masses, with springs connecting the two adjacent pairs of masses. Their equations of motion can be described by \n\n$\\ddot{\\vec{x}} = \\vec{M} \\cdot \\vec{x}$. Find the eigensystem (eigenvectors + eigenvalues) for the matrix $M$. What does each of the modes represent?\n\n\n```python\ndef exercise_5a():\n \n M = sympy.Matrix([[-1,1,0],\n [1,-2,1],\n [0,1,-1]])\n \n display(M.eigenvects())\n \n answer = 0\n print(verify_3.verify_5a(answer))\nexercise_5a()\n```\n\n\n$\\displaystyle \\left[ \\left( -3, \\ 1, \\ \\left[ \\left[\\begin{matrix}1\\\\-2\\\\1\\end{matrix}\\right]\\right]\\right), \\ \\left( -1, \\ 1, \\ \\left[ \\left[\\begin{matrix}-1\\\\0\\\\1\\end{matrix}\\right]\\right]\\right), \\ \\left( 0, \\ 1, \\ \\left[ \\left[\\begin{matrix}1\\\\1\\\\1\\end{matrix}\\right]\\right]\\right)\\right]$\n\n\n False\n\n\n# Section 6 - Matrix functions\n\nsympy interprets functions on matrices as matrix functions (rather than operations on the \n\n\n```python\ndef demo_6a():\n \n matrix = sympy.Matrix([[0,-1],[1,0]])\n display(matrix)\n display(sympy.exp(matrix))\n \ndemo_6a()\n```\n\n\n$\\displaystyle \\left[\\begin{matrix}0 & -1\\\\1 & 0\\end{matrix}\\right]$\n\n\n\n$\\displaystyle \\left[\\begin{matrix}\\cos{\\left(1 \\right)} & - \\sin{\\left(1 \\right)}\\\\\\sin{\\left(1 \\right)} & \\cos{\\left(1 \\right)}\\end{matrix}\\right]$\n\n\n\n```python\ndef demo_6b():\n \n matrix = sympy.Matrix([[0,-1],[1,0]])\n display(matrix)\n display(matrix**2)\n \ndemo_6b()\n```\n\n\n$\\displaystyle \\left[\\begin{matrix}0 & -1\\\\1 & 0\\end{matrix}\\right]$\n\n\n\n$\\displaystyle \\left[\\begin{matrix}-1 & 0\\\\0 & -1\\end{matrix}\\right]$\n\n\n__Exercise a__\n\nIn this problem we will explore the generator to the Lorentz boost. 
Calculate $\\exp \\left(\\psi Z\\right)$ ,where $Z$ is the anti diagonal unit matrix and $\\tanh \\psi = \\beta$ is the rapidity.\n\n\n```python\ndef exercise_6a():\n \n psi = sympy.Symbol('psi', positive=True) # Rapidity\n gen = sympy.Matrix([[0,-1],[-1,0]])\n \n # Enter anser here\n temp = sympy.exp(psi*gen)\n temp.simplify()\n display(temp)\n answer = sympy.Rational(0,1)\n print(verify_3.verify_6a(answer))\n \nexercise_6a()\n```\n\n\n$\\displaystyle \\left[\\begin{matrix}\\frac{e^{\\psi}}{2} + \\frac{e^{- \\psi}}{2} & - \\frac{e^{\\psi}}{2} + \\frac{e^{- \\psi}}{2}\\\\- \\frac{e^{\\psi}}{2} + \\frac{e^{- \\psi}}{2} & \\frac{e^{\\psi}}{2} + \\frac{e^{- \\psi}}{2}\\end{matrix}\\right]$\n\n\n False\n\n\n# Section 7 - Worked example\n\n## Tidal Field\n\nIn this section we derive the tidal field of a point mass. Close to some distant point the potential behaves roughly as $\\varphi \\approx \\frac{1}{2} \\vec{x} T \\vec{x}$ where $T$ is the tidal tensor.\n\n\n```python\ndef demo_7a():\n \n x = sympy.Symbol('x', real=True)\n y = sympy.Symbol('y', real=True)\n z = sympy.Symbol('z', real=True)\n G = sympy.Symbol('G', positive=True) # Gravitation constant\n M = sympy.Symbol('M', positive=True) # Mass\n a = sympy.Symbol('a', positive=True) # Distance\n \n potential = G*M/sympy.sqrt(x**2+y**2+z**2)\n \n display(potential)\n \n hessian = sympy.Matrix([[potential.diff(var1).diff(var2).simplify()\n for var1 in [x,y,z]] for var2 in [x,y,z]])\n \n display(hessian)\n \ndemo_7a()\n```\n\n## Two Body Problem\n\n\n```python\ndef demo_7b():\n \n r = sympy.Function('r', positive=True)\n s = sympy.Function('s', positive=True)\n qf = sympy.Function('theta', real=True)\n q = sympy.Symbol('theta', real=True)\n t = sympy.Symbol('t', real=True)\n pos = sympy.Matrix([r(t)*sympy.cos(qf(t)),r(t)*sympy.sin(qf(t))])\n G = sympy.Symbol('G', positive=True) # Gravitation constant\n M = sympy.Symbol('M', positive=True) # Mass\n l = sympy.Symbol('l', positive=True) # Angular momentum\n u = sympy.Symbol('u', real=True) # Energy\n xi = sympy.Symbol('xi', positive=True) # Auxiliary variable\n energy = -G*M/r(t)+(r(t).diff(t)**2+qf(t).diff(t)**2*r(t)**2)/2-u\n eom = pos.diff(t,2)+G*M*pos/(pos.transpose()*pos)**sympy.Rational(3,2)\n \n v = pos.diff(t)\n LRL = l*sympy.Matrix([v[1],-v[0]])-G*M*pos/sympy.sqrt((pos.transpose()*pos)[0])\n LRL = sympy.Matrix([LRL[0].simplify(), LRL[1].simplify()])\n LRL = LRL.subs(r(t),xi).subs(xi,r(t))\n \n print('conservation of angular momentum')\n temp = eom.transpose()*sympy.Matrix([sympy.sin(qf(t)), -sympy.cos(qf(t))])\n temp = temp[0].simplify()\n display(temp)\n temp = (r(t)**2*qf(t).diff(t)).diff(t).subs(sympy.solve(temp,qf(t).diff(t,2),dict=True)[0])\n display(temp)\n \n print('conservation of the LRL vector')\n temp = LRL\n display(LRL)\n temp = temp.diff(t) \n temp = temp.subs(sympy.solve(eom,[r(t).diff(t,2), qf(t).diff(t,2)],dict=True)[0])\n temp = temp.subs(qf(t).diff(t),l/r(t)**2)\n temp = sympy.Matrix([temp[0].simplify(), temp[1].simplify()])\n display(temp)\n \n print('trajectory')\n temp = energy\n temp = temp.subs(r(t).diff(t), r(q).diff(q)*qf(t).diff(t))\n temp = temp.subs(qf(t).diff(t), l/r(q)**2)\n temp = temp.subs(r(t), r(q))\n energy_r_q = temp\n temp = temp.subs(r(q),1/s(q))\n temp = temp.doit()\n temp = temp.diff(q)\n temp = temp/s(q).diff(q)\n temp = temp.simplify()\n temp = sympy.dsolve(temp,s(q))\n temp = 1/sympy.solve(temp,s(q))[0]\n temp = temp.subs(sympy.solve(temp.diff(q).subs(q,0),sympy.Symbol('C1'),dict=True)[0])\n display(temp)\n temp = 
temp.subs(sympy.solve(energy_r_q.subs(r(q),temp).doit(),sympy.Symbol('C2'),dict=True)[1])\n display(temp)\n \ndemo_7b()\n```\n\n## Sound Waves\n\n\n```python\ndef demo_7c():\n \n x = sympy.Symbol('x', real=True) # Position\n t = sympy.Symbol('t', real=True) # Time\n rho = sympy.Symbol('rho') # Density\n v = sympy.Symbol('v') # Velocity\n p = sympy.Symbol('p') # Pressure\n s = sympy.Symbol('s') # Entropy\n J = sympy.Symbol('J') # Mass flux\n xi = sympy.Symbol('xi') # Auxiliary variable\n gamma = sympy.Symbol('gamma', positive=True) # Adiabatic index\n \n # Hydrodynamic equations\n continuity_equation = sympy.Derivative(rho,t)+sympy.Derivative(J,x)\n momentum_conservatoin = sympy.Derivative(v,t)+v*sympy.Derivative(v,x)+sympy.Derivative(p,x)/rho\n entropy_conservation = sympy.Derivative(s,t)+v*sympy.Derivative(s,x)\n hydro_eqns = [continuity_equation,\n momentum_conservatoin,\n entropy_conservation]\n print('equations of motion')\n display(hydro_eqns)\n \n # Perturbation\n rho_0 = sympy.Symbol('rho_0', positive=True) # Ambient density\n p_0 = sympy.Symbol('p_0', positive=True) # Ambient pressure\n delta_rho = sympy.Symbol(r'\\delta \\rho', positive=True) # Density fluctuation amplitude\n delta_p = sympy.Symbol(r'\\delta p', positive=True) # Pressure fluctuation amplitude\n delta_v = sympy.Symbol(r'\\delta v', positive=True) # Velocity fluctuation amplitude\n omega = sympy.Symbol('omega', real=True) # Frequency\n k = sympy.Symbol('k', positive=True) # Wavenumber\n epsilon = sympy.Symbol('epsilon') # Perturbative parameter\n temp = hydro_eqns\n mode = sympy.exp(sympy.I*(omega*t-k*x))\n temp = [itm.subs({rho:rho_0 + epsilon*delta_rho*mode,\n p:p_0 + epsilon*delta_p*mode,\n v:epsilon*delta_v*mode,\n J:(rho_0 + epsilon*delta_rho*mode)*(epsilon*delta_v*mode), # sympy spoonfeed\n s:sympy.log(p_0 + epsilon*delta_p*mode) - gamma*sympy.log(rho_0 + epsilon*delta_rho*mode)}) \n for itm in temp]\n temp = [itm.doit() for itm in temp]\n temp = [itm.diff(epsilon).subs(epsilon,0) for itm in temp]\n temp = [itm/mode for itm in temp]\n temp = [sympy.expand(itm) for itm in temp]\n temp = sympy.Matrix([[itm.diff(var) for itm in temp] for var in [delta_rho, delta_p, delta_v]])\n print('Solutions are only possible if this matrix is singular')\n display(temp)\n print('The determinant is')\n temp = temp.det().simplify()\n display(temp)\n print('And the frequencies are')\n display(sympy.solve(temp,omega))\n print('These solutions represent a left going wave, a right going wave and a density perturbation that does not move')\n \ndemo_7c()\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "28c23d1036fa8cf975b61f84f0ac7ca2c9d9cd9b", "size": 70444, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "lesson_3.ipynb", "max_stars_repo_name": "bolverk/cyborg_math", "max_stars_repo_head_hexsha": "298224dadd4218ebcc266b0a135e15595a359343", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 14, "max_stars_repo_stars_event_min_datetime": "2020-02-25T22:29:21.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-01T21:49:08.000Z", "max_issues_repo_path": "lesson_3.ipynb", "max_issues_repo_name": "bolverk/cyborg_math", "max_issues_repo_head_hexsha": "298224dadd4218ebcc266b0a135e15595a359343", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "lesson_3.ipynb", "max_forks_repo_name": "bolverk/cyborg_math", "max_forks_repo_head_hexsha": 
"298224dadd4218ebcc266b0a135e15595a359343", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2020-05-14T21:02:56.000Z", "max_forks_repo_forks_event_max_datetime": "2021-10-05T19:12:11.000Z", "avg_line_length": 41.6581904199, "max_line_length": 4180, "alphanum_fraction": 0.6039265232, "converted": true, "num_tokens": 4627, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9458012686491107, "lm_q2_score": 0.8840392924390585, "lm_q1q2_score": 0.8361254843245238}} {"text": "```python\nimport numpy as np\nimport numpy.linalg as LA\nfrom sklearn import datasets\nfrom sklearn.linear_model import LogisticRegression\nimport matplotlib.pyplot as plt\n```\n\n# Logistic Regression\n\n## Maximim Log-likelihood\n\nHere we will use logistic regression to conduct a binary classification. The logistic regression process will be formulated using Maximum Likelihood estimation. To begin, consider a random variable $y \\in\\{0,1\\}$. Let the $p$ be the probability that $y=1$. Given a set of $N$ trials, the liklihood of the sequence $y_{1}, y_{2}, \\dots, y_{N}$ is given by:\n\n$\\mathcal{L} = \\Pi_{i}^{N}p^{y_{i}}(1-p)^{1 - y_{i}}$\n\nGiven a set of labeled training data, the goal of maximum liklihood estimation is to determine a probability distribution that best recreates the empirical distribution of the training set. \n\nThe log-likelihood is the logarithmic transformation of the likelihood function. As logarithms are strictly increasing functions, the resulting solution from maximizing the likelihood vs. the log-likelihood are the equivalent. Given a dataset of cardinality $N$, the log-likelihood (normalized by $N$) is given by:\n\n$l = \\frac{1}{N}\\sum_{i=1}^{N}\\Big(y_{i}\\log(p) + (1 - y_{i})\\log(1 - p)\\Big)$\n\n## Logistic function\n\nLogistic regression performs binary classification based on a probabilistic interpretation of the data. Essentially, the process seeks to assign a probability to new observations. If the probability associated with the new instance of data is greater than 0.5, then the new observation is assigned to 1 (for example). If the probability associated with the new instance of the data is less than 0.5, then it is assigned to 0. To map the real numerical values into probabilities (which must lie between 0 and 1), logistic regression makes use of the logistic (sigmoid) function, given by:\n\n$\\sigma(t) = \\frac{1}{1 + e^{-t}}$\n\nNote that by setting $t=0$, $\\sigma(0) = 0.5$, which is the decision boundary. We should also note that the derivative of the logistic function with respect to the parameter $t$ is:\n\n$\\frac{d}{dt}\\sigma(t) = \\sigma(t)(1 - \\sigma(t))$\n\n## Logistic Regression and Derivation of the Gradient\n\nLet's assume the training data consists of $N$ observations, where observation $i$ is denoted by the pair $(y_{i},\\mathbf{x}_{i})$, where $y \\in \\{0,1\\}$ is the label for the feature vector $\\mathbf{x}$. We wish to compute a linear decision boundary that best seperates the labeled observations. Let $\\mathbf{\\theta}$ denote the vector of coefficients to be estimated. 
In this problem, the log likelihood can be expressed as:\n\n$l = \\frac{1}{N}\\sum_{i=1}^{N}\\Big(y_{i}\\log\\big(\\sigma(\\mathbf{\\theta}^{T}\\mathbf{x}_{i})\\big) + (1 - y_{i}) \\log\\big( 1 - \\sigma(\\mathbf{\\theta}^{T}\\mathbf{x}_{i})\\big)\\Big)$\n\nThe gradient of the objective with respect to the $j^{th}$ element of $\\mathbf{\\theta}$ is:\n$$\n\\begin{aligned}\n\\frac{d}{d\\theta^{(j)}} l &= \\frac{1}{N}\\sum_{i=1}^{N}\\Bigg( \\frac{d}{d\\theta^{(j)}} y_{i}\\log\\big(\\sigma(\\mathbf{\\theta}^{T}\\mathbf{x}_{i})\\big) + \\frac{d}{d\\theta^{(j)}}(1 - y_{i}) \\log\\big( 1 - \\sigma(\\mathbf{\\theta}^{T}\\mathbf{x}_{i})\\big)\\Bigg) \\\\\n&= \\frac{1}{N}\\sum_{i=1}^{N}\\Bigg(\\frac{y_{i}}{\\sigma(\\mathbf{\\theta}^{T}\\mathbf{x}_{i})} - \\frac{1 - y_{i}}{1 - \\sigma(\\mathbf{\\theta}^{T}\\mathbf{x}_{i})} \\Bigg)\\frac{d}{d\\theta^{(j)}}\\sigma(\\mathbf{\\theta}^{T}\\mathbf{x}_{i})\\\\\n&= \\frac{1}{N}\\sum_{i=1}^{N}\\Bigg(\\frac{y_{i}}{\\sigma(\\mathbf{\\theta}^{T}\\mathbf{x}_{i})} - \\frac{1 - y_{i}}{1 - \\sigma(\\mathbf{\\theta}^{T}\\mathbf{x}_{i})} \\Bigg)\\sigma(\\mathbf{\\theta}^{T}\\mathbf{x}_{i})\\Big(1 - \\sigma(\\mathbf{\\theta}^{T}\\mathbf{x}_{i})\\Big)x_{i}^{(j)}\\\\\n&= \\frac{1}{N}\\sum_{i=1}^{N}\\Bigg(\\frac{y_{i} - \\sigma(\\mathbf{\\theta}^{T}\\mathbf{x}_{i})}{\\sigma(\\mathbf{\\theta}^{T}\\mathbf{x}_{i})\\Big(1 - \\sigma(\\mathbf{\\theta}^{T}\\mathbf{x}_{i})\\Big)}\\Bigg)\\sigma(\\mathbf{\\theta}^{T}\\mathbf{x}_{i})\\Big(1 - \\sigma(\\mathbf{\\theta}^{T}\\mathbf{x}_{i})\\Big)x_{i}^{(j)}\\\\\n&= \\frac{1}{N}\\sum_{i=1}^{N}\\Bigg(y_{i} - \\sigma(\\mathbf{\\theta}^{T}\\mathbf{x}_{i})\\Bigg)x_{i}^{(j)}\n\\end{aligned}\n$$\n\nwhere the last equation has the familiar form of the product of the prediciton error and the $j^{th}$ feature. 
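\n\nAs a quick numerical sanity check of this expression (an illustrative aside: the toy data, the parameter vector and the helper names `sig` and `ll` are made up for the check, not part of the iris example that follows), the analytic gradient can be compared with a central finite-difference approximation of the log likelihood:\n\n\n```python\nimport numpy as np\n\n# Hypothetical toy problem: 4 observations, 2 features, arbitrary parameter vector\nX = np.array([[ 1.0,  2.0],\n              [-1.0,  0.5],\n              [ 0.5, -1.0],\n              [ 2.0,  1.0]])\ny = np.array([1.0, 0.0, 0.0, 1.0])\ntheta = np.array([0.3, -0.2])\n\ndef sig(a):\n    return 1/(1 + np.exp(-a))\n\ndef ll(t):\n    p = sig(X @ t)\n    return np.mean(y*np.log(p) + (1 - y)*np.log(1 - p))\n\n# Analytic gradient from the derivation: (1/N) * X^T (y - sigma(X theta))\ngrad_analytic = X.T @ (y - sig(X @ theta)) / len(y)\n\n# Central finite-difference approximation of the same gradient\neps = 1e-6\ngrad_fd = np.array([(ll(theta + eps*e) - ll(theta - eps*e))/(2*eps) for e in np.eye(2)])\n\nprint(grad_analytic)\nprint(grad_fd)  # the two vectors agree to several decimal places\n```\n\nSeeing the two gradients agree is a useful check before moving on to the update rule. 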
With the gradient of the log likelihood function, the parameter vector $\\mathbf{\\theta}$ can now be calculated via gradient ascent (as we're maximizing the log likelihood):\n\n$$\n\\begin{equation}\n \\mathbf{\\theta}^{(j)}(k+1) = \\mathbf{\\theta}^{(j)}(k) + \\alpha \\frac{1}{N}\\sum_{i=1}^{N}\\Bigg( y_{i} - \\sigma(\\mathbf{\\theta}^{T}(k)\\mathbf{x}_{i}))\\Bigg)x_{i}^{(j)}\n\\end{equation}\n$$\n\n\n```python\n# Supporting Methods\n\n#logistic function\ndef sigmoid(a):\n return 1/(1 + np.exp(-a))\n\n#ll function\ndef log_likelihood(x, y, theta):\n logits = np.dot(x, theta)\n log_like = np.sum(y * logits - np.log(1 + np.exp(logits)))\n return log_like\n```\n\n\n```python\n#Load the data\niris = datasets.load_iris()\nx = iris.data[:,2:] #features will be petal width and petal length\ny = (iris.target==2).astype(int).reshape(len(x),1) #1 of iris-virginica, and 0 ow\n\n#Prepare Data for Regression\n#pad x with a vector of ones for computation of intercept\nx_aug = np.concatenate( (x,np.ones((len(x),1))) , axis=1)\n```\n\n\n```python\n#sklearn logistic regression\nlog_reg = LogisticRegression(penalty='none')\nlog_reg.fit(x,y.ravel())\nlog_reg.get_params()\ncoefs = log_reg.coef_.reshape(-1,1)\nintercept = log_reg.intercept_\ntheta_sklearn = np.concatenate((coefs, intercept.reshape(-1,1)), axis=0)\nprint(\"sklearn coefficients:\")\nprint(theta_sklearn)\nprint(\"sklearn log likelihood: \", log_likelihood(x_aug, y, theta_sklearn))\n```\n\n sklearn coefficients:\n [[ 5.75452053]\n [ 10.44681116]\n [-45.27248307]]\n sklearn log likelihood: -10.281754052558687\n\n\n\n```python\n#Perform Logistic Regression\nnum_iterations = 40000\nalpha = 1e-2\n\ntheta0 = np.ones((3,1))\ntheta = []\ntheta.append(theta0)\n\nk=0\nwhile k < num_iterations:\n \n #compute prediction error\n e = y - sigmoid(np.dot(x_aug, theta[k]))\n\n #compute the gradient of the log-likelihood\n grad_ll = np.dot(x_aug.T, e)\n\n #gradient ascent\n theta.append(theta[k] + alpha * grad_ll)\n\n #update iteration step\n k += 1\n\n if k % 4000 == 0:\n #print(\"iteration: \", k, \" delta: \", delta)\n print(\"iteration: \", k, \" log_likelihood:\", log_likelihood(x_aug, y, theta[k]))\n \ntheta_final = theta[k]\nprint(\"scratch coefficients:\")\nprint(theta_final)\n```\n\n iteration: 4000 log_likelihood: -11.529302688612685\n iteration: 8000 log_likelihood: -10.800986140073512\n iteration: 12000 log_likelihood: -10.543197464480874\n iteration: 16000 log_likelihood: -10.425775111214602\n iteration: 20000 log_likelihood: -10.36535749825992\n iteration: 24000 log_likelihood: -10.331961877189842\n iteration: 28000 log_likelihood: -10.312622172168293\n iteration: 32000 log_likelihood: -10.301055609225223\n iteration: 36000 log_likelihood: -10.293975480174714\n iteration: 40000 log_likelihood: -10.289566343598294\n scratch coefficients:\n [[ 5.5044475 ]\n [ 10.17562424]\n [-43.60242548]]\n\n\n\n```python\n#Plot the data and the decision boundary\n\n#create feature data for plotting\nx_dec_bnd = np.linspace(0,7,100).reshape(-1,1)\n#classification boundary from sklearn\ny_sklearn = (theta_sklearn[2] * np.ones((100,1)) + theta_sklearn[0] * x_dec_bnd) / -theta_sklearn[1]\n#classification boundary from scratch\ny_scratch = (theta_final[2] * np.ones((100,1)) + theta_final[0] * x_dec_bnd) / -theta_final[1]\n\ny_1 = np.where(y==1)[0] #training data, iris-virginica\ny_0 = np.where(y==0)[0] #training data, not iris-virginica\nplt.plot(x[y_0,0],x[y_0,1],'bo',label=\"not 
iris-virginica\")\nplt.plot(x[y_1,0],x[y_1,1],'k+',label=\"iris-virginica\")\nplt.plot(x_dec_bnd,y_sklearn,'r',label=\"sklearn dec. boundary\")\nplt.plot(x_dec_bnd,y_scratch,'g',label=\"scratch dec. boundary\")\nplt.xlabel('petal length')\nplt.ylabel('petal width')\nplt.title('Logistic Regression Classification')\nplt.xlim((3,7))\nplt.ylim((0,3.5))\nplt.legend()\nplt.show()\n```\n", "meta": {"hexsha": "314c5ff7a15076b6624236804d729e8863579356", "size": 38077, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Logistic_Regression.ipynb", "max_stars_repo_name": "dbarnold0220/from_scratch", "max_stars_repo_head_hexsha": "088b059377fd735326daaff6fc924123a13f8324", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Logistic_Regression.ipynb", "max_issues_repo_name": "dbarnold0220/from_scratch", "max_issues_repo_head_hexsha": "088b059377fd735326daaff6fc924123a13f8324", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Logistic_Regression.ipynb", "max_forks_repo_name": "dbarnold0220/from_scratch", "max_forks_repo_head_hexsha": "088b059377fd735326daaff6fc924123a13f8324", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 146.45, "max_line_length": 27026, "alphanum_fraction": 0.8519316123, "converted": true, "num_tokens": 2582, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9458012686491108, "lm_q2_score": 0.8840392802184582, "lm_q1q2_score": 0.8361254727662646}} {"text": "# Bose Einstein Coefficients\n\nA collection of $n$ objects that can have one of $k$ different classes are selected. The classes have equal probability. How many ways are there to select the objects? \n\nAlternatively: how many ways are there to partition $n$ objects into $k$ buckets? \n\nIf $n_i$ is the number of objects in bucket $i$, then $\\sum_i n_i = n$.\n\n\n## Stars and Bars\n\nThis is a neat way to approach counting problems.\n\nDividing objects into $k$ classes requires $k-1$ separators. The division can be visualized of a sequence of $n+k-1$ objects and dividers.\n\nConsider 10 objects divided over 4 classes. \n\nThat makes 10 objects and 3 dividers that are allocated to $n+k-1$ slots:\n\n``[ , , , , , , , , , , , , ]``\n\nFor example:\n\n``[|,* ,* ,* ,|,* ,| ,* ,* ,* ,* ,* ,* ]``\n\nCorresponds to an allocation of $\\{0,3,1,6\\}$ across the 4 classes. \n\nThe amount of ways to distribute $n$ objects over $n+k-1$ is:\n\n\\begin{equation}\n\\left(\\begin{array}{c}n+k-1\\\\n\\end{array}\\right)\n\\end{equation}\n\nWhich is the result.\n\nThe Bose Einstein Coefficient is also known as the multichoose coefficient. It counts the ways that $n$ indistinct objects can be assigned to $k$ classes. \n\n# Rising Factorials\n\nWhat if the the objects that are distributed are distinct, so that they can have an internal ordering? For example, consider the case of $n$ books being distributed over $k$ library shelves.\n\nIn that case, there are \n\n\\begin{equation}\n\\left(\\begin{array}{c}n+k-1\\\\n\\end{array}\\right)\n\\end{equation}\n\nWays of dividing the books up between the shelves, and \n\n\\begin{equation}\nn! \n\\end{equation}\n\nways for the books to be ordered internally. 
\n\n\nFor example:\n\n``[|,1 ,3 ,6 ,|,4 ,| ,5 ,7 ,9 ,8 ,11 ,10 ]``\n\nThe amount of combinations is:\n\n\\begin{equation}\nn! {n + k - 1 \\choose n} = \\frac{(n+k-1)!}{(k-1)!} = (n+k-1)^\\underline{n} = k^\\overline{n}\n\\end{equation}\n\n\n", "meta": {"hexsha": "9e9fc300f4c474e97821a1ae04a2e49bcab37b62", "size": 2991, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Combinatorics - Stars and Bars, Bose Einstein Coefficients.ipynb", "max_stars_repo_name": "jpbm/probabilism", "max_stars_repo_head_hexsha": "a2f5c1595aed616236b2b889195604f365175899", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Combinatorics - Stars and Bars, Bose Einstein Coefficients.ipynb", "max_issues_repo_name": "jpbm/probabilism", "max_issues_repo_head_hexsha": "a2f5c1595aed616236b2b889195604f365175899", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Combinatorics - Stars and Bars, Bose Einstein Coefficients.ipynb", "max_forks_repo_name": "jpbm/probabilism", "max_forks_repo_head_hexsha": "a2f5c1595aed616236b2b889195604f365175899", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.5204081633, "max_line_length": 199, "alphanum_fraction": 0.5386158475, "converted": true, "num_tokens": 564, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9546474220263198, "lm_q2_score": 0.8757869981319863, "lm_q1q2_score": 0.8360678000108701}} {"text": "# Circular motion\n\n## A simple one-link manipulator\nConsider the simple manipulator below, consisting of a homogenous rod of mass $m=0.1$kg and length $l=1$m. The rod is rotating about the single joint with a constant angular velocity of $\\omega=2$rad/s. \n\n\n\n\n\n### What is the centripetal acceleration at the endpoint?\n\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n```\n\n\n```python\n# Parameters. SI units, of course\nm = 0.1\nl = 1.0\nomega = 2.0\n\n# The formular for centripetal acceleration is\na_c = omega**2 * l\nprint \"The centripetal acceleration at the endpoint is %1.2f m/s^2\" % a_c\n```\n\n The centripetal acceleration at the endpoint is 4.00 m/s^2\n\n\n### What is the centripetal acceleration at the center of mass?\n\n\n```python\n# %load com-acceleration.py\n# The center of mass for a homogeneous rod is of course in the middle of the rod\nfrom IPython.display import display, Math, Latex\ndisplay(Latex(r\"The centripetal acceleration at the center of mass is %1.2f $m/s^2$\" % (omega**2*l/2)))\n```\n\n\nThe centripetal acceleration at the center of mass is 2.00 $m/s^2$\n\n\n## Derivation of the acceleration for uniform circular motion\nDefine the x- and y axis as in the figure below. 
The angle $\\theta$ to be the angle to the x-axis\n\n\n\n\nWe have defined unit vectors in a polar coordinate system attached to the moving link\n\\begin{align}\ne_r &= \\begin{bmatrix}\\cos\\theta\\\\\\sin\\theta\\end{bmatrix}\\\\\ne_\\theta &= \\begin{bmatrix}-\\sin\\theta\\\\\\cos\\theta\\end{bmatrix}\n\\end{align}\nWith these unit vectors it is easy to express the position $p$ of the endpoint with respect to the origin, which is located at the joint:\n$$ p(t) = l e_r(t) = l \\begin{bmatrix}\\cos\\theta\\\\\\sin\\theta\\end{bmatrix}. $$\n\nIn order to find the velocity of the endpoint we take the time-derivative of the position\n\\begin{align}\n v(t) &= \\frac{d}{dt} p(t) = \\left( \\frac{d}{dt} l \\right) e_r + l \\frac{d}{dt} e_r\\\\\n &= l \\frac{d}{dt} \\begin{bmatrix}\\cos\\theta\\\\\\sin\\theta\\end{bmatrix}\\\\\n &= l \\frac{d}{d\\theta}\\begin{bmatrix}\\cos\\theta\\\\\\sin\\theta\\end{bmatrix} \\frac{d\\theta}{dt}\\\\\n &= l \\omega \\begin{bmatrix}-\\sin\\theta\\\\\\cos\\theta\\end{bmatrix} = l \\omega e_\\theta.\n\\end{align}\nWe see that the *speed*, the magnitude of the velocity, $|v| = l \\omega $ is proportional to both the distance from the center of rotation and to the angular velocity. The velocity vector is tangential to the path of the endpoint, and if $\\omega$ is positive, then the direction of the velocity is in the positive direction of $e_\\theta$. \n\n### Exercise: derive the centripetal acceleration by differentiating the velocity\nAlso: Discuss the magnitude and direction of the centripetal acceleration.\n\n\n```python\n# %load centripetal-acceleration-solution.py\ndisplay(Latex(\"\"\"\nThe acceleration becomes\n\\\\begin{align}\na(t) &= \\\\frac{d}{dt} v(t) = \\\\frac{d}{dt} l \\\\omega e_\\\\theta = \\\\frac{d}{dt} l \\\\omega \\\\begin{bmatrix} -\\\\sin\\\\theta \\\\\\\\ \\\\cos\\\\theta \\\\end{bmatrix} \\\\\\\\\n &= \\\\frac{d}{dt} \\left( l \\\\ omega \\\\right) e_\\\\theta + l \\\\omega \\\\frac{d}{dt} \\\\begin{bmatrix} -\\\\sin\\\\theta \\\\\\\\ \\\\cos\\\\theta \\\\end{bmatrix} \\\\\\\\\n &= 0 + l \\\\omega^2 \\\\begin{bmatrix} -\\\\cos\\\\theta \\\\\\\\ -\\\\sin\\\\theta \\\\end{bmatrix} = - l \\omega^2 e_r. \n\\\\end{align}\n\"\"\"))\n```\n\n\n\nThe acceleration becomes\n\\begin{align}\na(t) &= \\frac{d}{dt} v(t) = \\frac{d}{dt} l \\omega e_\\theta = \\frac{d}{dt} l \\omega \\begin{bmatrix} -\\sin\\theta \\\\ \\cos\\theta \\end{bmatrix} \\\\\n &= \\frac{d}{dt} \\left( l \\ omega \\right) e_\\theta + l \\omega \\frac{d}{dt} \\begin{bmatrix} -\\sin\\theta \\\\ \\cos\\theta \\end{bmatrix} \\\\\n &= 0 + l \\omega^2 \\begin{bmatrix} -\\cos\\theta \\\\ -\\sin\\theta \\end{bmatrix} = - l \\omega^2 e_r. 
\n\\end{align}\n\n\n\n\n```python\n\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "f952bd671399cfca99a9adb8379f121e11bba48f", "size": 6825, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "introduction/notebooks/circular-motion.ipynb", "max_stars_repo_name": "alfkjartan/robotica-aplicada", "max_stars_repo_head_hexsha": "515265059792d0e8586e2f450bd1af34a47fb963", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "introduction/notebooks/circular-motion.ipynb", "max_issues_repo_name": "alfkjartan/robotica-aplicada", "max_issues_repo_head_hexsha": "515265059792d0e8586e2f450bd1af34a47fb963", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "introduction/notebooks/circular-motion.ipynb", "max_forks_repo_name": "alfkjartan/robotica-aplicada", "max_forks_repo_head_hexsha": "515265059792d0e8586e2f450bd1af34a47fb963", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.7871287129, "max_line_length": 352, "alphanum_fraction": 0.5443223443, "converted": true, "num_tokens": 1176, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9284088045171237, "lm_q2_score": 0.9005297874526778, "lm_q1q2_score": 0.8360597834010001}} {"text": "# Market Equilibrium under different market forms\n\nImport various packages\n\n\n```python\nimport numpy as np\nimport scipy as sp\nfrom scipy import linalg\nfrom scipy import optimize\nfrom scipy import interpolate\nimport sympy as sm\nfrom scipy import optimize,arange\nfrom numpy import array\nimport ipywidgets as widgets # Import a package for interactive plots\n%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom matplotlib import cm\nfrom mpl_toolkits.mplot3d import Axes3D\n```\n\n# Model Description\n\nWe consider the standard economic promlem for a Monopoly firm maximizing it's profits.\n\nThe aggregate market demand is given by\n$$Q(p)=A-\\alpha\\cdot p$$\nwhich corresponds to the inverse market demand function\n$$p(Q)=\\frac{A}{\\alpha}-\\frac{1}{\\alpha}\\cdot Q$$\n\nand the Monopoly profits are given $$\\pi(q)=p\\cdot q-c(q)=\\left(\\frac{A}{\\alpha}-\\frac{1}{\\alpha}\\cdot q\\right)\\cdot q-c\\cdot q$$\n\nwhere $q=Q$, $p(Q)$ is a linear market demand curve and $c(q)$ is the firms cost-function with constant cost $c$. \n\n# Market Equilibrium\n\n## Analytical Solution\n\nUsing Sympy, we seek to find an analytical expression for the market equilibrium when one firm has monopoly power, i.e. 
solve the monopoly firm's maximization problem\n\n\\\\[ \\max_{q}\\pi(q)=\\max_{q} \\left(\\frac{A}{\\alpha}-\\frac{1}{\\alpha}\\cdot q\\right)\\cdot q-c\\cdot q \\\\]\n\nWhich has the standard solution given by:\n$$q^{M\\ast}=\\frac{A-\\alpha\\cdot c}{2}\\wedge p^{\\ast}=\\frac{A+\\alpha\\cdot c}{2\\cdot\\alpha}$$\n\n\n```python\nsm.init_printing(use_unicode=True) # sets printing on\n# Defining variables;\nA = sm.symbols('A')\nq = sm.symbols('q')\nc = sm.symbols('c')\nalpha=sm.symbols('alpha')\n\n```\n\n\n```python\nPi = (A/alpha-q/alpha)*q-c*q # Define the firms profit function\nF = sm.diff(Pi,q) # Take the first order condition\nsm.solve(F,q)[0]\n```\n\n\n```python\nMq = sm.solve(F,q)[0] # Solves F for market quantity\n# And the market price is given by;\nMp=(A-Mq)*1/alpha\nsm.Matrix([Mq,Mp]) # Prints the market quantity and price\n\n```\n\n\n```python\n#For later use, We turn the above solution into a Python function\nMq_func = sm.lambdify((A,alpha,c),Mq)\nMp_func = sm.lambdify((A,alpha,c),Mp)\n```\n\n## Numerical Solution\n\nAs a brief introduction to solving the problem numerically, we use a solver like fsolve to solve the first-order condition given the following parameter values:\n\nRemember, the first-order condition is given by:\n$$\\frac{A}{\\alpha}-c-\\frac{2q}{\\alpha}=0$$\n\n\n```python\nA = 4\nalpha = 2 \nc = 1\noutput = optimize.fsolve(lambda q: 2-q-1,0)\nprint(f'analytical solution for market quantity is: {Mq_func(A,alpha,c):.2f}')\nprint(f' Solution with fsolve for market quantity is: {output}')\nprint(f'analytical solution for market price is: {Mp_func(A,alpha,c):.2f}')\n```\n\n analytical solution for market quantity is: 1.00\n Solution with fsolve for market quantity is: [1.]\n analytical solution for market price is: 1.50\n\n\nHowever for later use, It is perhaps more efficent to make Python maximize the firm's profits directly. However, as scipy only has minimization procedueres. We continue to minimize $-\\pi(q)$, i.e. minimizing negative profits is the same as maximizing profits. \n\nBelow we first define functions for market demand and costs in python\n\n\n```python\ndef demand(Q):\n return A/alpha-1/alpha*Q\n\ndef cost(q,c): # c is constant marginal cost\n return c*q\n```\n\n\n```python\ndef minus_profits(q,*args): # we want to see profits as a function of q when we maximize profits or\n return -(demand(q)*q-cost(q,c)) # minimize minus_profits; hence c is specified as \"*args\", when calling fmin\n # we specify the c in the \"args=(c,)\"\nx0 = 0 # Initial guess\nc = 1.0 # Specify the value of the constant cost 'c'\nA=4.0 # Specify the value of the Constant in the market demand function Q(p)\nalpha=2.0 # Specify the slope coefficient in Q(p)\n\noutput = optimize.fmin(minus_profits,x0,args=(c,)) # note the comma in \"args(c,)\"; it needs to be there!\nprice=A/alpha-1/alpha*output\nprint(output,price)\n```\n\n Optimization terminated successfully.\n Current function value: -0.500000\n Iterations: 25\n Function evaluations: 50\n [1.] 
[1.5]\n\n\nHence, the optimal output is 1, which yields the maximum profits of $-\\cdot(-0.5)=0.5$\n\nFor the specified parameter values, we have plotted the monopoly firm's profit function below.\n\n\n```python\n# Define the expression whose roots we want to find\n\nA = 4.0 # Specify the value of the Constant in the market demand function Q(p) \nalpha = 2.0 # Specify the slope coefficient in Q(p)\nc = 1.0 # Specify the value of the constant cost 'c'\n\nfunc = lambda q : (A/alpha-q/alpha)*q-c*q # Defines the profit function using a lambda function.\n\n# Plot the profit function\n\nq = np.linspace(0, 2, 200) # Return evenly spaced numbers over a specified interval from 0 to 2 .\n\nplt.plot(q, func(q)) # -minus_profits(q) could have been used instead of func(q). But we wanted to show the lambda function. \nplt.axhline(y=0.5,linestyle='dashed',color='k') # creates a horizontal line in the plot at func(q)=0.5\nplt.axvline(x=1,linestyle='dashed',color='k') # creates a vertical line in the plot at q=0.5\nplt.xlabel(\"Quantity produced q \")\nplt.ylabel(\"Firm Profits\")\nplt.grid()\nplt.title('The Monopoly Firms profit function')\nplt.show()\n```\n\nAnd we can plot the market equilibrium price and output in a standard diagram as shown below.\n\n\n```python\n# Define marginal Revenue:\ndef MR(Q):\n return A/alpha-2/alpha*Q\n\n\nplt.plot(q, demand(q)) \nplt.plot(q, MR(q))\nplt.axhline(y=c,color='k') # creates a horizontal line in the plot at func(q)=0.5\nplt.axvline(x=output,ymin=0,ymax=0.73,linestyle='dashed',color='k') # creates a vertical line in the plot at q=0.5\nplt.axhline(y=price,xmin=0, xmax=0.5,linestyle='dashed',color='k')\nplt.xlabel(\"Quantity produced q \")\nplt.ylabel(\"Price\")\nplt.grid()\nplt.title('The Demand Function')\nplt.show()\n```\n\nBoth plottet side by side.\n\n\n```python\nf = plt.figure(figsize=(13,6))\nax = f.add_subplot(121)\nax2 = f.add_subplot(122)\nax.plot(q, func(q))\nax.set_title('The Monopoly Firms profit function')\nax.set_xlabel('Quantity Produced q')\nax.set_ylabel('Firm Profits')\nax.axhline(y=0.5,linestyle='dashed',color='k') # creates a horizontal line in the plot at func(q)=0.5\nax.axvline(x=1,linestyle='dashed',color='k') # creates a vertical line in the plot at q=0.5\nax.set_ylim(0,) # set a lower limit for y-axis at zero.\nax.grid()\n\nax2.plot(q, demand(q),label='Demand')\nax2.plot(q,MR(q), label='Marginal Revenue')\nax2.legend(loc='upper right') # Place the graph descriptions in the upper right corner\nax2.grid()\nax2.axhline(y=c,color='k', label='Marginal Cost') # creates a horizontal line in the plot at func(q)=0.5\nax2.axvline(x=output,ymin=0,ymax=0.71,linestyle='dashed',color='k') # creates a vertical line in the plot at q=0.5\nax2.axhline(y=price,xmin=0, xmax=0.5,linestyle='dashed',color='k')\nax2.set_xlabel(\"Quantity produced q \")\nax2.set_ylabel(\"Market Price p\")\nax2.set_ylim(0,)\nax2.set_title('The Market Equilibrium')\n```\n\nWe see that when the monopoly firm is producing $q=1$, they get the maximum profits of 0.5. In the right figure, we see that the monopoly firm maximises profits in the point, where the marginal revenue curve intersect the marginal cost curve(black line). 
The two curves intersect at $q=1$, and the market price is $p=1.5$ as given by the demand curve.\n\n\n**Extention at the exam**: We have extended the above plot with fixed parameter values with an interactive feature such that you can change the two graphs by changing the parameters $(A,\\alpha,c)$.\n\n\n```python\ndef interactive_figure(A,alpha,c):\n \n \"\"\"This function makes a intertive figure of the profit function, demand function and marginal revenue function\n with A,alpha and c as free variables\"\"\"\n \n # a. Specify the functions.\n \n \n func = lambda q : (A/alpha-q/alpha)*q-c*q # Defines the profit function using a lambda function.\n \n def demand(Q): # defies demand function\n return A/alpha-1/alpha*Q\n \n def MR(Q): # defines the marginal revenue function\n return A/alpha-2/alpha*Q\n \n # b. Create values for quantity.\n\n q = np.linspace(0, A-alpha*c, 200) # Return evenly spaced numbers over a specified interval from 0 to the point, where profits equal zero .\n qM = np.linspace(0, A/2, 200) # Return evenly spaced numbers over the interval from 0 to where MR(Q) intersect the x-axis.\n qD = np.linspace(0, A, 200) # Return evenly spaced numbers over the interval from 0 to where DM(Q) intersect the x-axis.\n \n # c. plot the figures\n f = plt.figure(figsize=(13,6))\n ax = f.add_subplot(121)\n ax2 = f.add_subplot(122)\n ax.plot(q, func(q))\n ax.set_title('The Monopoly Firms profit function')\n ax.set_xlabel('Quantity Produced q')\n ax.set_ylabel('Firm Profits')\n ax.axhline(y=np.max(func(q)),linestyle='dashed',color='k') # creates a horizontal line in the plot at func(q)=0.5\n ax.axvline(x=(A-alpha*c)/2,linestyle='dashed',color='k') # creates a vertical line in the plot at q=0.5\n ax.set_xlim(0,)\n ax.set_ylim(0,np.max(func(q))*1.05) # set a lower limit for y-axis at zero and max-limit at max profit.\n ax.grid()\n\n ax2.plot(qD, demand(qD),label='Demand')\n ax2.plot(qM,MR(qM), label='Marginal Revenue')\n ax2.legend(loc='upper right') # Place the graph descriptions in the upper right corner\n ax2.grid()\n ax2.axhline(y=c,color='k', label='Marginal Cost') # creates a horizontal line in the plot at func(q)=0.5\n ax2.axvline(x=(A-alpha*c)/2,linestyle='dashed',color='k') # creates a vertical line in the plot at q=0.5\n ax2.axhline(y=(A+c*alpha)/(2*alpha),linestyle='dashed',color='k')\n ax2.set_xlabel(\"Quantity produced q \")\n ax2.set_ylabel(\"Market Price p\")\n ax2.set_xlim(0,A/2) # set x axis to go from zero to where MR(Q) intersect the x-axis\n ax2.set_ylim(0,np.max(demand(q))*1.05) # set y axis to go from zero to max of demand(q).\n ax2.set_title('The Market Equilibrium')\n```\n\nAnd the interactive figure is illustrated below.\n\n\n```python\nwidgets.interact(interactive_figure,\n A=widgets.FloatSlider(description=\"A\",min=0.1,max=10,step=0.1,value=4), # \n alpha=widgets.FloatSlider(description=\"alpha\",min=0.1,max=5,step=0.1,value=2), \n c=widgets.FloatSlider(description=\"c\",min=0,max=1.999,step=0.1,value=1) \n);\n```\n\n\n interactive(children=(FloatSlider(value=4.0, description='A', max=10.0, min=0.1), FloatSlider(value=2.0, descr\u2026\n\n\n# Extentions: Solving for market equilibrium in a duopoly setting\n\n## Market Equilibrium with Cournot Competition\n\nConsider the inverse demand funcion with identical goods\n$$p(Q)=\\frac{A}{\\alpha}-\\frac{1}{\\alpha}\\cdot Q=\\frac{A}{\\alpha}-\\frac{1}{\\alpha}\\cdot(q_1+q_2)$$\n\nwhere $q_1$ is firm 1's output and $q_2$ is firm 2's output. $Q=q_1+q_2$. \n\nBoth firms have identical cost-function $c(q_i)=c\\cdot q_i$. 
So given cost and demand, each firm have the following profit function:\n$$\\pi_{i}(q_{i},q_{j})=p_i(q_i,q_j)q_i-c(q_i)$$,\n$i,j\\in\\{0,1\\},i\\neq j$, which they seek to maximize.\n\nAs this is the standard Cournot problem with two firms competing in quantities, we know that in equilibrium both firms produces the same Cournot output level given by:\n$$q_1^{C}=q_2^{C}=\\frac{A-\\alpha c}{3}$$\n\n### Analytical Solution\n\nWe can use **sympy** to find an analytical expression for the market equilibrium/ the Cournot Nash Equilibrium, i.e. solving for the pair $(q_1^{C},q_2^{C})$ in which both firms play a best-response to the other firms equilibrium strategy. Hence\n$$\\max_{q_{i}} \\pi_{i}(q_i,q_j^{\\ast})=\\max \\left(\\frac{A}{\\alpha}-\\frac{1}{\\alpha}\\cdot(q_i+q_j^{\\ast})-c\\right)q_i $$\n\n\n\n\n```python\n# Defining variables;\nA = sm.symbols('A') # Constant in Q(p)\nq1 = sm.symbols('q1') # Firm 1's output\nq2 = sm.symbols('q2') # Firm 2's output\nc = sm.symbols('c') # Contant cost\nalpha=sm.symbols('alpha') # Slope coefficient in Q(p)\n\nPi1 = (A/alpha-1/alpha*(q1+q2))*q1-c*q1 # Firm 1's profit function\n\nPi2 = (A/alpha-1/alpha*(q1+q2))*q2-c*q2 # Frim 2's profit function\n\nF1 = sm.diff(Pi1,q1) # Take the first order condition for firm 1\nF2 = sm.diff(Pi2,q2) # Take the first order condition for firm 2\nsm.Matrix([F1,F2]) # Prints the first order conditions\n```\n\n\n```python\nCq2 = sm.solve(F2,q2)[0] # Solves Firm 2's FOC for q2.\nCq2\n```\n\n\n```python\nCq1 = sm.solve(F1,q2)[0] # Solves Firm 1's FOC for q2.\nCq1\n```\n\n\n```python\nCON=sm.solve(Cq1-Cq2,q1)[0] # In Eq Cq1=Cq2, so solve Cq1=Cq2=0 for q1\nCON\n```\n\nGiven the standard symmetry argument, we know that both firms produce the same in equilibrium. Hence\n$$q_1^{C}=q_2^{C}=\\frac{A-\\alpha c}{3}$$\nas given above.\n\nThe total market quantity and price are found below.\n\n\n```python\nMCQ = 2*CON # market quantiy\n# And the market price is given by;\nMCP=(A-MCQ)*1/alpha\nsm.Matrix([MCQ,MCP]) # Prints the market quantity and price\n```\n\nThese can again by turned into python-functions to compare the analytical solution with the numerical solution\n\n\n```python\nCON_func = sm.lambdify((A,alpha,c),CON) # Cournot quantity\nMCP_func = sm.lambdify((A,alpha,c),MCP) # Market price\n```\n\n### Numerical Solution\n\n\n```python\ndef demand(q1,q2,b): # Define demand \n return A/alpha-1/alpha*(q1+b*q2) # b is in place to allow for potential heterogeneous goods.\n\ndef cost(q,c):\n if q == 0:\n cost = 0\n else:\n cost = c*q\n return cost\n```\n\n\n```python\ndef profit(q1,q2,c1,b): # Define a function for profits\n return demand(q1,q2,b)*q1-cost(q1,c1)\n```\n\nDefine reaction functions.\n\nAs we know scipy has various methods to optimize function. 
However as they are defined as minimization problems, maximizing a function $f(x)$ is the same as minimzing $-f(x)$.\n\n\n```python\ndef reaction(q2,c1,b):\n q1 = optimize.brute(lambda q: -profit(q,q2,c1,b), ((0,1,),)) # brute minimizes the function;\n # when we minimize -profits, we maximize profits\n return q1[0]\n```\n\nA solution method which can be used to solve many economic problems to find the Nash equilibrium, is to solve for the equilibirum as fixed point.\n\nHence we are looking for a fixed point in which the following is true.\n$$\\left(\\begin{matrix}\n q_{1}^{\\ast} \\\\\n q_{2}^{\\ast} \n \\end{matrix}\\right)=\\left(\\begin{matrix}\n r_{1}(q_2^{\\ast}) \\\\\n r_{2}(q_1^{\\ast}) \n \\end{matrix}\\right) \n $$\n \nwhere $r_1(q_2)$ is firm 1's reaction-function to firm 2's production level and vice versa.\n\nNumerically this can be solved by defining a vector function:\n$$f(q)=\\left(\\begin{matrix}\n r_{1}(q_2^{\\ast}) \\\\\n r_{2}(q_1^{\\ast}) \n \\end{matrix}\\right)$$\nand solve for a point $q^{\\ast}=(q_1^{\\ast},q_2^{\\ast})$ such that $f(q^{\\ast})=q^{\\ast}$ \n\nWe then define a function defined as $x-f(x)$ and look for the solution $x^{\\ast}-f(x^{\\ast})=0$\n\n\n```python\ndef vector_reaction(q,param): # vector parameters = (b,c1,c2)\n return array(q)-array([reaction(q[1],param[1],param[0]),reaction(q[0],param[2],param[0])])\n```\n\n\n```python\nparam = [1.0,1.0,1.0] # Specify the parameters (b,c1,c2)\nq0 = [0.3, 0.3] # Initial guess for quantities\nalpha=2\nA=4\nans = optimize.fsolve(vector_reaction, q0, args = (param))\nprint(ans)\n```\n\n [0.6666581 0.6666581]\n\n\n\n```python\nA = 4\nalpha = 2 \nc = 1\nprint(f'analytical solution for Cournot quantity is: {CON_func(A,alpha,c):.2f}')\nprint(f'analytical solution for market price is: {MCP_func(A,alpha,c):.2f}')\nprint(f' Solution with fsolve for market quantity is: {ans}')\n\n```\n\n analytical solution for Cournot quantity is: 0.67\n analytical solution for market price is: 1.33\n Solution with fsolve for market quantity is: [0.6666581 0.6666581]\n\n\nAnd we see that the numerical solution for the market quantity is fairly close to the analytical solution at $q_1^{C}=q_2^{C}=\\frac{2}{3}$\n\nBelow we illustrate the equilibrium quantities visually by plotting the two firms reaction functions/best-response functions. 
The equilibrium quantities is found in the point in which they intersect.\n\n\n```python\n# Define the expression whose roots we want to find\n\nA = 4.0 # Specify the value of the Constant in the market demand function Q(p) \nalpha = 2.0 # Specify the slope coefficient in Q(p)\nc = 1 # Specify the value of the constant cost 'c'\n\nfunc1 = lambda q : 1/2*(A-alpha*c-q) # Defines the best-response function for firm 1using a lambda function.\nfunc2 = lambda q : A-alpha*c-2*q\n\n# Plot the profit function\n\nq = np.linspace(0, 5, 200) # Return evenly spaced numbers over a specified interval from 0 to 2 .\n\nplt.clf()\nplt.plot(q, func1(q),'-', color = 'r', linewidth = 2)\nplt.plot(q,func2(q),'-', color = 'b', linewidth = 2)\nplt.title(\"Cournot Nash Equilibrium\",fontsize = 15)\nplt.xlabel(\"$q_1$\",fontsize = 15)\nplt.ylabel(\"$q_2$\",fontsize = 15,rotation = 90)\nplt.axvline(x=CON_func(A,alpha,c),ymin=0,ymax=1/3,linestyle='dashed',color='k') # creates a vertical line in the plot at q=2/3\nplt.axhline(y=CON_func(A,alpha,c),xmin=0,xmax=1/3,linestyle='dashed',color='k') # creates a horizontal line in the plot at q=2/3\n\nplt.annotate('$R_2(q_1)$', xy=(1,0.5), xycoords='data', # here we define the labels and arrows in the graph\n xytext=(30, 50), textcoords='offset points', size = 20,\n arrowprops=dict(arrowstyle=\"->\", linewidth = 2,\n connectionstyle=\"arc3,rad=.2\"),\n )\nplt.annotate('$R_1(q_2)$', xy=(0.5,1), xycoords='data', # here we define the labels and arrows in the graph\n xytext=(30, 50), textcoords='offset points', size = 20,\n arrowprops=dict(arrowstyle=\"->\", linewidth = 2,\n connectionstyle=\"arc3,rad=.2\"),\n )\nplt.xlim(0,2) # sets the x-axis\nplt.ylim(0,2) # Sets the y-axis\n```\n\nWe see that when both firms have symmetric cost $c_1=c_2=c$ and produce homogeneous goods, both firms produce the same in the Cournot Nash equilibrium. We see that both firms individually produce less than if they have monopoly power due to the small increase in competition in the market. Hence when no collusion is possible as this is a one period static problem, the total market output is larger than the monopoly outcome and the associated market price is lower. However assuming no externalities and the standard economic assumptions, the market outcome is still inefficient seen from a social planners perspective as it is not equal to the social optimum, where all firms produce in the point in which marginal costs equal the market price.\n\n## Market Equilibrium with Betrand Competition with differentiated goods\n\nLastly we will investiate the market outcome in the duopoly setting with two firms is competing in prices rather than quanties. 
This competition type is called Bertrand competition, and we will consider the Bertrand model with differentiated products as well as the standard Bertrand model of duopoly with identical firms producing homogeneous products with the same cost functions with constant marginal cost c.\n\nThe market demand function is the same given by\n$$Q(p)=A-\alpha\cdot p$$\n\nHowever, from the perspective of firm i, the consumers' demand for firm i's good is:\n$$q_i(p_i,p_j)=A-p_i+b\cdot p_j$$, $i,j\in\{1,2\}, i\neq j$, where $b$ reflects that the goods are imperfect substitutes.\n\nThe profit of firm i when it chooses the price $p_i$ and firm j chooses the price $p_j$ is given by:\n$$\pi_i(p_i,p_j)=q_i(p_i,p_j)[p_i-c]$$\n\nThe price pair $(p_1^{\ast},p_2^{\ast})$ constitutes a Nash equilibrium if, for each firm i, the chosen price $p_i^{\ast}$ solves the firm's maximization problem, i.e.\n$$\max_{0\leq p_i<\infty}\pi_i(p_i,p_j^{\ast})=\max_{0\leq p_i<\infty}\left[A-p_i+b\cdot p_j^{\ast}\right][p_i-c]$$\n\n\n### Analytical Solution with differentiated goods\n\nWe can use **sympy** to find an analytical expression for the market equilibrium/the Bertrand Nash equilibrium, i.e. solving for the pair $(p_1^{B},p_2^{B})$ for which both firms play a best response to the other firm's equilibrium strategy.\n\nAs this is the Bertrand problem with differentiated goods, we already know that both firms will try to underbid each other in prices. Hence in equilibrium, $p_1^{B}=p_2^{B}$ and it is assumed that each firm serves half of the market demand, i.e.\n$$p_1^{B}=p_2^B=\frac{A+c}{2-b}$$\n\n\n\n```python\n# Defining variables;\nA = sm.symbols('A') # Constant in Q(p)\np1 = sm.symbols('p1') # Firm 1's price\np2 = sm.symbols('p2') # Firm 2's price\nc = sm.symbols('c') # Constant cost\nb = sm.symbols('b') # constant reflecting that the goods are differentiated\nalpha=sm.symbols('alpha') # Slope coefficient in Q(p)\n\nPi1 = (A-p1+b*p2)*(p1-c) # Firm 1's profit function\n\nPi2 = (A-p2+b*p1)*(p2-c) # Firm 2's profit function\n\nF1 = sm.diff(Pi1,p1) # Take the first order condition for firm 1\nF2 = sm.diff(Pi2,p2) # Take the first order condition for firm 2\nsm.Matrix([F1,F2]) # Prints the first order conditions\n```\n\nWe can then use the first order conditions to find the best-response functions by using sympy's solve function to solve $$F1=0$$ for $p_1$.\n\n\n```python\nBR1 = sm.solve(F1,p1)[0] # Solves Firm 1's FOC for p1.\nBR2 = sm.solve(F2,p2)[0] # Solves Firm 2's FOC for p2.\nsm.Matrix([BR1,BR2]) # Prints the best-response functions \n```\n\nHowever, to solve the system in an easier way, we solve firm 2's FOC for $p_1$. Call this BR12. We know that both firms' FOCs must hold in equilibrium. 
Hence the equilibrium price is found by solving $BR1=BR12\Leftrightarrow BR1-BR12=0$, which can be solved with sympy's solve function:\n\n\n```python\nBR12=sm.solve(F2,p1)[0] # Solves Firm 2's FOC for p1.\nMP=sm.solve(BR1-BR12,p2)[0] # In Eq p1=p2, so solve BR1-BR12=0 for p2\nMP\n\n```\n\nHence both firms charge the price $p_1^{B}=p_2^{B}=-\frac{A+c}{b-2}=\frac{A+c}{2-b}$\n\nTurned into a function:\n\n\n```python\nMP_func = sm.lambdify((A,b,c),MP) # Market price\n\nA = 4\nb = 0.5 # parameter different from 1 to indicate imperfect substitutes\nc = 1\nprint(f'analytical solution for Bertrand price is: {MP_func(A,b,c):.2f}')\n\n```\n\n analytical solution for Bertrand price is: 3.33\n\n\n### Numerical Solution\n\n\n```python\nA = 4\nb=0.5\nc = 1\nB1 = 1/2*(A+b*p2+c)\nB2 = 1/b*(2*p2-A-c)\nSOL = optimize.fsolve(lambda p: 1/2*(A+b*p+c)-1/b*(2*p-A-c),0)\nprint(f' Solution with fsolve for Bertrand price is: {SOL:}')\n\n```\n\n Solution with fsolve for Bertrand price is: [3.33333333]\n\n\nThus when the firms' products are imperfect substitutes, the market price is still above the firms' marginal cost. Hence the market outcome is Pareto inefficient, which differs from the result of the standard Bertrand model with homogeneous products.\n\n## Market Equilibrium with Bertrand Competition and homogeneous goods\n\nThe market equilibrium with Bertrand competition, homogeneous goods and identical firms has some interesting properties. Most importantly, the equilibrium is Pareto efficient and equal to the perfectly competitive market outcome. \n\nThe market demand function is the same given by\n$$Q(p)=A-\alpha\cdot p$$\n\n\nBoth firms compete in prices and seek to maximize the profit function:\n$$\max_{p_i}\pi_i(p_i,p_j)=\n\begin{cases}\nQ(p)\cdot (p_i-c) &p_j>p_i\\\n\frac{Q(p)\cdot (p_i-c)}{2} &p_i=p_j\\\n0 &p_i>p_j\n\end{cases}\n$$\nIt is a standard assumption that the market is divided evenly between the two firms if they set the same price, but there is no reason why it couldn't be different in practice.\n\nBoth firms have the symmetric best-response functions:\n$$BR_i(p_j)=\n\begin{cases}\np_i=p_m &p_j>p_m\\\np_i=p_j-\epsilon &p_i>p_j>c\\\np_i=c &p_j\leq c\n\end{cases}\n$$\nwhere $p_m$ is the monopoly price and $\epsilon$ is an arbitrarily small price undercut.\n\n\n```python\ndef total_demand(p): # total market demand\n return A-alpha*p\n\ndef profits(p1,p2,c1): # firm 1's profits given both prices\n if p1 > p2:\n profits = 0 # firm 2 takes all the market\n elif p1 == p2:\n profits = 0.5*total_demand(p1)*(p1-c1) # The firms split the market in two\n else:\n profits = total_demand(p1)*(p1-c1) # firm 1 takes all the market\n return profits\n\ndef reaction(p2,c1): # Reaction function\n if p2 > c1:\n reaction = c1+0.8*(p2-c1)\n else:\n reaction = c1\n return reaction\n```\n\n\n```python\ndef vector_reaction(p,param): # vector param = (c1,c2)\n return array(p)-array([reaction(p[1],param[0]),reaction(p[0],param[1])])\n\nparam = [2.0,2.0] # c1 = c2 =2\nalpha=2\nA=4\np0 = [0.5, 0.5] # initial guess: p1 = p2 = 0.5\n\nPsol = optimize.fsolve(vector_reaction, p0, args = (param))\nprint(Psol) # Bertrand prices\n```\n\n [2. 2.]\n\n\nAs should become clear from this little numerical demonstration, the two firms' price-setting decision is in a practical sense invariant to the shape of the demand curve, as long as it is downward sloping. Regardless of the demand function, the two firms will try to underbid each other in price as long as the market price is above the marginal cost c. Hence, as long as there are positive profits to be made in the market, both firms will try to capture the entire market by setting a price just below the other's. This process will continue until the market price reaches the firms' marginal cost, which is assumed identical. 
Thus the betrand price solely depends on the value of the costs $c_1=c_2$, as the firms compete the market price $p^{B}$ down to $c_1=c_2$. Because only then, no firm has an incentive to deviate, i.e. the pair of prices ${(p_1^{B},p_2^{B}})={c}$ constitute a Nash equilibrium. This is also known as the bertrand paradox.\n\n# Conclusion\n\nWe see that the assumption about the market structure have a critical impact on the market equilibrium. We have shown that under the standard assumptions, when there is only one firm in the market, which utilizes it's monopoly power, the market equilibrium output is inefficiently low and the equilibrium price is ineffciently high from a social welfare perspective. When the number of firms increases to two, we show that the market inefficiency decreases but at different degrees depending on competition type. If the firms compete in quantities (Cournot), the market output is still lower than the social optimum. However there is still some competition between the firms, which results in a lower market price and higher market output compared to the monopoly case. Lastly, we show that when the two firms compete in prices(Bertrand) the market equilibrium is pareto efficient. As both firms seek to undercut the other firm resulting in both firms asking a price equal to their marginal costs (assumed identical). Hence even though there are only two firms, the market equilibrium is efficient as it is identical to social optimum with a market price equal to the marginal costs. However when allowing for the two firms to produce different goods that are imperfect substitutes, both firms raise their prices above marginal cost. Thus they earn positive profit and the market outcome is once again inefficient.\n", "meta": {"hexsha": "3e275fd9c58ce69c4cc1bc5a687524a13dc4a7d7", "size": 186654, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "modelproject/modelproject.ipynb", "max_stars_repo_name": "NumEconCopenhagen/projects-2019-peter-larsens-kaffe-m-havre", "max_stars_repo_head_hexsha": "5e20f4f592ec38e84e85dd6f5713dc575caa9341", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "modelproject/modelproject.ipynb", "max_issues_repo_name": "NumEconCopenhagen/projects-2019-peter-larsens-kaffe-m-havre", "max_issues_repo_head_hexsha": "5e20f4f592ec38e84e85dd6f5713dc575caa9341", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 8, "max_issues_repo_issues_event_min_datetime": "2019-04-15T06:56:14.000Z", "max_issues_repo_issues_event_max_datetime": "2019-05-24T18:59:19.000Z", "max_forks_repo_path": "modelproject/modelproject.ipynb", "max_forks_repo_name": "NumEconCopenhagen/projects-2019-peter-larsens-kaffe-m-havre", "max_forks_repo_head_hexsha": "5e20f4f592ec38e84e85dd6f5713dc575caa9341", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 134.3801295896, "max_line_length": 54836, "alphanum_fraction": 0.8666998832, "converted": true, "num_tokens": 7864, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9390248225478306, "lm_q2_score": 0.8902942261220292, "lm_q1q2_score": 0.8360083776995967}} {"text": "```python\nfrom sympy import *\nx, y, z, t = symbols('x y z t')\nk, m, n = symbols('k m n', integer=True)\ninit_printing()\n```\n\n## Calculus\n\nCalculus is the study of the properties of functions. The operations of\ncalculus are used to describe the limit behaviour of functions, calculate\ntheir rates of change, and calculate the areas under their graphs. In\nthis section we'll learn about the `SymPy` functions for calculating\nlimits, derivatives, integrals, and summations.\n\n### Infinity\n\nThe infinity symbol is denoted `oo` (two lowercase `o`s) in `SymPy`. Infinity\nis not a number but a process: the process of counting forever. Thus,\n$\\infty + 1 = \\infty$, $\\infty$ is greater than any finite number, and $1/\\infty$ is an\ninfinitely small number. `Sympy` knows how to correctly treat infinity\nin expressions:\n\n\n```python\noo+1\n```\n\n\n```python\n5000 < oo\n```\n\n\n```python\n1/oo\n```\n\n### Limits\n\nWe use limits to describe, with mathematical precision, infinitely large\nquantities, infinitely small quantities, and procedures with infinitely\nmany steps.\n\nThe number $e$ is defined as the limit $e \\equiv \\lim_{n\\to\\infty}\\left(1+\\frac{1}{n}\\right)^n$:\n\n\n```python\nlimit( (1+1/n)**n, n, oo)\n```\n\nThis limit expression describes the annual growth rate of a loan with\na nominal interest rate of 100% and infinitely frequent compounding.\nBorrow \\$1000 in such a scheme, and you'll owe $2718.28 after one year.\n\nLimits are also useful to describe the behaviour of functions. Consider\nthe function $f(x) = \\frac{1}{x}$. The `limit` command shows us what happens\nto $f(x)$ near $x = 0$ and as $x$ goes to infinity:\n\n\n```python\nlimit( 1/x, x, 0, dir=\"+\")\n```\n\n\n```python\nlimit( 1/x, x, 0, dir=\"-\")\n```\n\n\n```python\nlimit( 1/x, x, oo)\n```\n\nAs $x$ becomes larger and larger, the fraction $\\frac{1}{x}$ becomes smaller\nand smaller. In the limit where $x$ goes to infinity, $\\frac{1}{x}$ approaches\nzero: $\\lim_{x\\to\\infty}\\frac{1}{x} = 0$. On the other hand, when $x$ takes on smaller\nand smaller positive values, the expression $\\frac{1}{x}$ becomes infinite:\n$\\lim_{x\\to0^+}\\frac{1}{x} = \\infty$. When $x$ approaches 0 from the left, we have\n$\\lim_{x\\to0^-}\\frac{1}{x}=-\\infty$. 
If these calculations are not clear to you, study\nthe graph of $f(x) = \\frac{1}{x}$.\n\nHere are some other examples of limits:\n\n\n```python\nlimit(sin(x)/x, x, 0)\n```\n\n\n```python\nlimit(sin(x)**2/x, x, 0)\n```\n\n\n```python\nlimit(exp(x)/x**100,x,oo) # which is bigger e^x or x^100 ?\n # exp f >> all poly f for big x\n```\n\nLimits are used to define the derivative and the integral operations.\n\n### Derivatives\n\nThe derivative function, denoted $f'(x)$, $\\frac{d}{dx}f(x)$, $\\frac{df}{dx}$, or $\\frac{dy}{dx}$, \ndescribes the *rate of change* of the function $f(x)$.\nThe `SymPy` function `diff` computes the derivative of any expression:\n\n\n```python\ndiff(x**3, x)\n```\n\nThe differentiation operation knows about the product rule $[f(x)g(x)]^\\prime=f^\\prime(x)g(x)+f(x)g^\\prime(x)$, \nthe chain rule $f(g(x))' = f'(g(x))g'(x)$, \nand the quotient rule $\\left[\\frac{f(x)}{g(x)}\\right]^\\prime = \\frac{f'(x)g(x) - f(x)g'(x)}{g(x)^2}$:\n\n\n```python\ndiff( x**2*sin(x), x )\n```\n\n\n```python\ndiff( sin(x**2), x )\n```\n\n\n```python\ndiff( x**2/sin(x), x )\n```\n\nThe second derivative of a function `f` is `diff(f,x,2)`:\n\n\n```python\ndiff(x**3, x, 2) # same as diff(diff(x**3, x), x)\n```\n\nThe exponential function $f(x)=e^x$ is special because it is equal to its derivative:\n\n\n```python\ndiff( exp(x), x ) # same as diff( E**x, x )\n```\n\nA differential equation is an equation that relates some unknown function $f(x)$ to its derivative. \nAn example of a differential equation is $f'(x)=f(x)$.\nWhat is the function $f(x)$ which is equal to its derivative?\nYou can either try to guess what $f(x)$ is or use the `dsolve` function:\n\n\n```python\nx = symbols('x')\nf = symbols('f', cls=Function) # can now use f(x)\ndsolve( f(x) - diff(f(x),x), f(x) )\n```\n\nWe'll discuss `dsolve` again in the section on mechanics.\n\n### Tangent lines\n\nThe *tangent line* to the function $f(x)$ at $x=x_0$ is \nthe line that passes through the point $(x_0, f(x_0))$ and has \nthe same slope as the function at that point.\nThe tangent line to the function $f(x)$ at the point $x=x_0$ is described by the equation\n\n$$\n T_1(x) = f(x_0) \\ + \\ f'(x_0)(x-x_0).\n$$\n\nWhat is the equation of the tangent line to $f(x)=\\frac{1}{2}x^2$ at $x_0=1$?\n\n\n```python\nf = S('1/2')*x**2\nf\n```\n\n\n```python\ndf = diff(f,x)\ndf\n```\n\n\n```python\nT_1 = f.subs({x:1}) + df.subs({x:1})*(x - 1)\nT_1\n```\n\nThe tangent line $T_1(x)$ has the same value and slope as the function $f(x)$ at $x=1$:\n\n\n```python\nT_1.subs({x:1}) == f.subs({x:1})\n```\n\n\n\n\n True\n\n\n\n\n```python\ndiff(T_1,x).subs({x:1}) == diff(f,x).subs({x:1})\n```\n\n\n\n\n True\n\n\n\n### Optimization\n\nOptimization is about choosing an input for a function $f(x)$ that results in the best value for $f(x)$.\nThe best value usually means the *maximum* value \n(if the function represents something desirable like profits) \nor the *minimum* value \n(if the function represents something undesirable like costs).\n\nThe derivative $f'(x)$ encodes the information about the *slope* of $f(x)$.\nPositive slope $f'(x)>0$ means $f(x)$ is increasing,\nnegative slope $f'(x)<0$ means $f(x)$ is decreasing, \nand zero slope $f'(x)=0$ means the graph of the function is horizontal.\nThe *critical points* of a function $f(x)$ are the solutions to the equation $f'(x)=0$.\nEach critical point is a candidate to be either a maximum or a minimum of the function.\n\nThe second derivative $f^{\\prime\\prime}(x)$ encodes the information about the 
*curvature* of $f(x)$.\nPositive curvature means the function looks like $x^2$,\nnegative curvature means the function looks like $-x^2$.\n\nLet's find the critical points of the function $f(x)=x^3-2x^2+x$ \nand use the information from its second derivative \nto find the maximum of the function \non the interval $x \\in [0,1]$.\n\n\n```python\nx = Symbol('x')\nf = x**3-2*x**2+x\ndiff(f, x)\n```\n\n\n```python\nsols = solve( diff(f,x), x)\nsols\n```\n\n\n```python\ndiff(diff(f,x), x).subs( {x:sols[0]} )\n```\n\n\n```python\ndiff(diff(f,x), x).subs( {x:sols[1]} )\n```\n\n[It will help to look at the graph of this function.](https://www.google.com/#safe=off&q=plot+x**3-2*x**2%2Bx)\nThe point $x=\\frac{1}{3}$ is a local maximum because it is a critical point of $f(x)$\nwhere the curvature is negative, meaning $f(x)$ looks like the peak of a mountain at $x=\\frac{1}{3}$.\nThe maximum value of $f(x)$ on the interval $x\\in [0,1]$ is $f\\!\\left(\\frac{1}{3}\\right)=\\frac{4}{27}$.\nThe point $x=1$ is a local minimum because it is a critical point\nwith positive curvature, meaning $f(x)$ looks like the bottom of a valley at $x=1$.\n\n### Integrals\n\nThe *integral* of $f(x)$ corresponds to the computation of the area under the graph of $f(x)$.\nThe area under $f(x)$ between the points $x=a$ and $x=b$ is denoted as follows:\n\n$$\n A(a,b) = \\int_a^b f(x) \\: dx.\n$$\n\nThe *integral function* $F$ corresponds to the area calculation as a function \nof the upper limit of integration:\n\n$$\n F(c) \\equiv \\int_0^c \\! f(x)\\:dx\\,.\n$$\n\nThe area under $f(x)$ between $x=a$ and $x=b$ is obtained by \ncalculating the *change* in the integral function:\n\n$$\n A(a,b) = \\int_a^b \\! f(x)\\:dx = F(b)-F(a).\n$$\n\nIn `SymPy` we use `integrate(f, x)` to obtain the integral function $F(x)$ of any function $f(x)$:\n$F(x) = \\int_0^x f(u)\\,du$.\n\n\n```python\nintegrate(x**3, x)\n```\n\n\n```python\nintegrate(sin(x), x)\n```\n\n\n```python\nintegrate(ln(x), x)\n```\n\nThis is known as an *indefinite integral* since the limits of integration are not defined. 
\n\nIn contrast, \na *definite integral* computes the area under $f(x)$ between $x=a$ and $x=b$.\nUse `integrate(f, (x,a,b))` to compute the definite integrals of the form $A(a,b)=\\int_a^b f(x) \\, dx$:\n\n\n```python\nintegrate(x**3, (x,0,1)) # the area under x^3 from x=0 to x=1\n```\n\nWe can obtain the same area by first calculating the indefinite integral $F(c)=\\int_0^c \\!f(x)\\,dx$,\nthen using $A(a,b) = F(x)\\big\\vert_a^b \\equiv F(b) - F(a)$:\n\n\n```python\nF = integrate(x**3, x)\nF.subs({x:1}) - F.subs({x:0})\n```\n\nIntegrals correspond to *signed* area calculations:\n\n\n```python\nintegrate(sin(x), (x,0,pi))\n```\n\n\n```python\nintegrate(sin(x), (x,pi,2*pi))\n```\n\n\n```python\nintegrate(sin(x), (x,0,2*pi))\n```\n\nDuring the first half of its $2\\pi$-cycle,\nthe graph of $\\sin(x)$ is above the $x$-axis, so it has a positive contribution to the area under the curve.\nDuring the second half of its cycle (from $x=\\pi$ to $x=2\\pi$),\n$\\sin(x)$ is below the $x$-axis, so it contributes negative area.\nDraw a graph of $\\sin(x)$ to see what is going on.\n\n### Fundamental theorem of calculus\n\nThe integral is the “inverse operation” of the derivative.\nIf you perform the integral operation followed by the derivative operation on some function, \nyou'll obtain the same function:\n\n$$\n \\left(\\frac{d}{dx} \\circ \\int dx \\right) f(x) \\equiv \\frac{d}{dx} \\int_c^x f(u)\\:du = f(x).\n$$\n\n\n```python\nf = x**2\nF = integrate(f, x)\nF\n```\n\n\n```python\ndiff(F,x)\n```\n\nAlternately, if you compute the derivative of a function followed by the integral,\nyou will obtain the original function $f(x)$ (up to a constant):\n\n$$\n \\left( \\int dx \\circ \\frac{d}{dx}\\right) f(x) \\equiv \\int_c^x f'(u)\\;du = f(x) + C.\n$$\n\n\n```python\nf = x**2\ndf = diff(f,x)\ndf\n```\n\n\n```python\nintegrate(df, x)\n```\n\nThe fundamental theorem of calculus is important because it tells us how to solve differential equations.\nIf we have to solve for $f(x)$ in the differential equation $\\frac{d}{dx}f(x) = g(x)$,\nwe can take the integral on both sides of the equation to obtain the answer $f(x) = \\int g(x)\\,dx + C$.\n\n### Sequences\n\nSequences are functions that take whole numbers as inputs.\nInstead of continuous inputs $x\\in \\mathbb{R}$,\nsequences take natural numbers $n\\in\\mathbb{N}$ as inputs.\nWe denote sequences as $a_n$ instead of the usual function notation $a(n)$.\n\nWe define a sequence by specifying an expression for its $n^\\mathrm{th}$ term:\n\n\n```python\na_n = 1/n\nb_n = 1/factorial(n)\n```\n\nSubstitute the desired value of $n$ to see the value of the $n^\\mathrm{th}$ term:\n\n\n```python\na_n.subs({n:5})\n```\n\nThe `Python` list comprehension syntax `[item for item in list]`\ncan be used to print the sequence values for some range of indices:\n\n\n```python\n[ a_n.subs({n:i}) for i in range(0,8) ]\n```\n\n\n```python\n[ b_n.subs({n:i}) for i in range(0,8) ]\n```\n\nObserve that $a_n$ is not properly defined for $n=0$ since $\\frac{1}{0}$ is a division-by-zero error.\nTo be precise, we should say $a_n$'s domain is the positive naturals $a_n:\\mathbb{N}^+ \\to \\mathbb{R}$.\nObserve how quickly the `factorial` function $n!=1\\cdot2\\cdot3\\cdots(n-1)\\cdot n$ grows:\n$7!= 5040$, $10!=3628800$, $20! 
> 10^{18}$.\n\nWe're often interested in calculating the limits of sequences as $n\\to \\infty$.\nWhat happens to the terms in the sequence when $n$ becomes large?\n\n\n```python\nlimit(a_n, n, oo)\n```\n\n\n```python\nlimit(b_n, n, oo)\n```\n\nBoth $a_n=\\frac{1}{n}$ and $b_n = \\frac{1}{n!}$ *converge* to $0$ as $n\\to\\infty$. \n\nMany important math quantities are defined as limit expressions.\nAn interesting example to consider is the number $\\pi$, \nwhich is defined as the area of a circle of radius $1$.\nWe can approximate the area of the unit circle by drawing a many-sided regular polygon around the circle.\nSplitting the $n$-sided regular polygon into identical triangular splices,\nwe can obtain a formula for its area $A_n$.\nIn the limit as $n\\to \\infty$, \nthe $n$-sided-polygon approximation to the area of the unit-circle becomes exact:\n\n\n```python\nA_n = n*tan(2*pi/(2*n))\nlimit(A_n, n, oo)\n```\n\n### Series\n\nSuppose we're given a sequence $a_n$ and we want to compute the sum of all the values in this sequence $\\sum_{n}^\\infty a_n$.\nSeries are sums of sequences.\nSumming the values of a sequence $a_n:\\mathbb{N}\\to \\mathbb{R}$\nis analogous to taking the integral of a function $f:\\mathbb{R}\\to \\mathbb{R}$.\n\nTo work with series in `SymPy`,\nuse the `summation` function whose syntax is analogous to the `integrate` function: \n\n\n```python\na_n = 1/n\nsummation(a_n, [n, 1, oo])\n```\n\n\n```python\nb_n = 1/factorial(n)\nsummation(b_n, [n, 0, oo])\n```\n\nWe say the series $\\sum a_n$ *diverges* to infinity (or *is divergent*) while the series $\\sum b_n$ converges (or *is convergent*).\nAs we sum together more and more terms of the sequence $b_n$, the total becomes closer and closer to some finite number.\nIn this case, the infinite sum $\\sum_{n=0}^\\infty \\frac{1}{n!}$ converges to the number $e=2.71828\\ldots$.\n\n\nThe `summation` command is useful because it allows us to compute `infinite` sums,\nbut for most practical applications we don't need to take an infinite number of terms in a series to obtain a good approximation. \nThis is why series are so neat: they represent a great way to obtain approximations.\n\nUsing standard `Python` commands, \nwe can obtain an approximation to $e$ that is accurate to six decimals by summing 10 terms in the series:\n\n\n```python\nimport math\ndef b_nf(n): \n return 1.0/math.factorial(n)\nsum( [b_nf(n) for n in range(0,10)] )\n```\n\n\n```python\nE.evalf() # true value\n```\n\n### Taylor series\n\nWait, there's more! 
\nNot only can we use series to approximate numbers,\nwe can also use them to approximate functions.\n\nA *power series* is a series whose terms contain different powers of the variable $x$.\nThe $n^\\mathrm{th}$ term in a power series is a function of both the sequence index $n$ and the input variable $x$.\n\nFor example, the power series of the function $\\exp(x)=e^x$ is \n\n$$\n \\exp(x) \\equiv 1 + x + \\frac{x^2}{2} + \\frac{x^3}{3!} + \\frac{x^4}{4!} + \\frac{x^5}{5!} + \\cdots \n = \\sum_{n=0}^\\infty \\frac{x^n}{n!}.\n$$\n\nThis is, IMHO, one of the most important ideas in calculus:\nyou can compute the value of $\\exp(5)$ by taking the infinite sum of the terms in the power series with $x=5$:\n\n\n```python\nexp_xn = x**n/factorial(n)\nsummation( exp_xn.subs({x:5}), [n, 0, oo] ).evalf()\n```\n\n\n```python\nexp(5).evalf() # the true value\n```\n\nNote that `SymPy` is actually smart enough to recognize that the infinite series\nyou're computing corresponds to the closed-form expression $e^5$:\n\n\n```python\nsummation( exp_xn.subs({x:5}), [n, 0, oo])\n```\n\nTaking as few as 35 terms in the series is sufficient to obtain an approximation to $e$\nthat is accurate to 16 decimals:\n\n\n```python\nimport math # redo using only python \ndef exp_xnf(x,n): \n return x**n/math.factorial(n)\nsum( [exp_xnf(5.0,i) for i in range(0,35)] )\n```\n\nThe coefficients in the power series of a function (also known as the *Taylor series*) \nThe formula for the $n^\\mathrm{th}$ term in the Taylor series of $f(x)$ expanded at $x=c$ is $a_n(x) = \\frac{f^{(n)}(c)}{n!}(x-c)^n$,\nwhere $f^{(n)}(c)$ is the value of the $n^\\mathrm{th}$ derivative of $f(x)$ evaluated at $x=c$.\nThe term *Maclaurin series* refers to Taylor series expansions at $x=0$.\n\nThe `SymPy` function `series` is a convenient way to obtain the series of any function.\nCalling `series(expr,var,at,nmax)` \nwill show you the series expansion of `expr` \nnear `var`=`at` \nup to power `nmax`:\n\n\n```python\nseries( sin(x), x, 0, 8)\n```\n\n\n```python\nseries( cos(x), x, 0, 8)\n```\n\n\n```python\nseries( sinh(x), x, 0, 8)\n```\n\n\n```python\nseries( cosh(x), x, 0, 8)\n```\n\nSome functions are not defined at $x=0$, so we expand them at a different value of $x$.\nFor example, the power series of $\\ln(x)$ expanded at $x=1$ is\n\n\n```python\nseries(ln(x), x, 1, 6) # Taylor series of ln(x) at x=1\n```\n\nHere, the result `SymPy` returns is misleading.\nThe Taylor series of $\\ln(x)$ expanded at $x=1$ has terms of the form $(x-1)^n$:\n\n$$\n \\ln(x) = (x-1) - \\frac{(x-1)^2}{2} + \\frac{(x-1)^3}{3} - \\frac{(x-1)^4}{4} + \\frac{(x-1)^5}{5} + \\cdots.\n$$\n\nVerify this is the correct formula by substituting $x=1$.\n`SymPy` returns an answer in terms of coordinates `relative` to $x=1$.\n\nInstead of expanding $\\ln(x)$ around $x=1$,\nwe can obtain an equivalent expression if we expand $\\ln(x+1)$ around $x=0$:\n\n\n```python\nseries(ln(x+1), x, 0, 6) # Maclaurin series of ln(x+1)\n```\n", "meta": {"hexsha": "6f4ae8eaced70d5094d074a7c09243f2c8b0cee3", "size": 130410, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/Calculus.ipynb", "max_stars_repo_name": "minireference/sympytut_notebooks", "max_stars_repo_head_hexsha": "6669e7bfccef9e70ae029ac5cbb54cb6cbc31652", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2016-08-29T12:04:19.000Z", "max_stars_repo_stars_event_max_datetime": "2020-02-23T05:14:52.000Z", "max_issues_repo_path": 
"notebooks/Calculus.ipynb", "max_issues_repo_name": "minireference/sympytut_notebooks", "max_issues_repo_head_hexsha": "6669e7bfccef9e70ae029ac5cbb54cb6cbc31652", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/Calculus.ipynb", "max_forks_repo_name": "minireference/sympytut_notebooks", "max_forks_repo_head_hexsha": "6669e7bfccef9e70ae029ac5cbb54cb6cbc31652", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 56.3813229572, "max_line_length": 7096, "alphanum_fraction": 0.7739360478, "converted": true, "num_tokens": 5069, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.939024820841433, "lm_q2_score": 0.8902942239389252, "lm_q1q2_score": 0.8360083741304118}} {"text": "# Algebra and Symbolic Math With Sympy\nThe mathematical problems and solutions\nin our programs so far have all involved the\nmanipulation of numbers. But there\u2019s another\nway math is taught, learned, and practiced, and that\u2019s\nin terms of symbols and the operations between them.\nJust think of all the xs and ys in a typical algebra prob-\nlem. We refer to this type of math as symbolic math.\n---\n\n## Defining Symbols and Symbolic Operations\n\n\n```python\nfrom sympy import Symbol\na = Symbol('x')\na + a + 1\n```\n\n\n\n\n$\\displaystyle 2 x + 1$\n\n\n\n\n```python\np = x*(x + x)\np\n```\n\n\n\n\n$\\displaystyle 2 x^{2}$\n\n\n\n\n```python\np = (x + 2)*(x + 3)\np\n```\n\n\n\n\n$\\displaystyle \\left(x + 2\\right) \\left(x + 3\\right)$\n\n\n\n## Working with Expressions\n\n### Factorizing and Expanding Expressions\n\n\n```python\nfrom sympy import Symbol\nfrom sympy import factor\n\nx = Symbol('x')\ny = Symbol('y')\n\nexpr = x**2 - y**2\nfactor(expr)\n```\n\n\n\n\n$\\displaystyle \\left(x - y\\right) \\left(x + y\\right)$\n\n\n\n## Pretty Printing\n\n\n\n```python\nfrom sympy import Symbol\nfrom sympy import factor\nfrom sympy import init_printing\n\ninit_printing(order='rev-lex')\n\nx = Symbol('x')\ny = Symbol('y')\n\nexpr = x*x + 2*x*y + y*y\n\npprint(expr)\n```\n\n### Printing a Series\n\n\n\n```python\nfrom sympy import Symbol, pprint, init_printing\ndef print_series(n): \n # Initialize printing system with reverse order\n init_printing(order='rev-lex')\n\n x = Symbol('x')\n series = x\n for i in range(2, n+1):\n series = series + (x**i)/i\n pprint(series)\n\nif __name__ == '__main__':\n n = input('Enter the number of terms you want in the series: ')\n print_series(int(n))\n```\n\n## Substituting in Values\nLet\u2019s see how we can use SymPy to plug values into an algebraic expression.\nThis will let us calculate the value of the expression for certain values of the\nvariables. Consider the mathematical expression x 2 + 2xy + y 2 , which can be\ndefined as follows:\n\n\n```python\nfrom sympy import Symbol\n\nx = Symbol('x')\ny = Symbol('y')\nexpr = x*x + x*y + x*y + y*y\n\nres = expr.subs({x:1, y:2})\nprint(\"the answer is: \" + str(res))\n```\n\n### On Python Dictionaries\n\n\n\n```python\nsampledict = {\"key1\":5, \"key2\":20}\nsampledict['key1']\nexpr_subs = expr.subs({x:1-y})\n```\n\n### Calculating the Value of a Series\nLet\u2019s revisit the series-printing program. 
In addition to printing the series,\nwe want our program to be able to find the value of the series for a par-\nticular value of x. That is, our program will now take two inputs from the\nuser\u2014the number of terms in the series and the value of x for which the\nvalue of the series will be calculated. Then, the program will output both\nthe series and the sum. The following program extends the series printing\nprogram to include these enhancements:\n\n\n```python\nfrom sympy import Symbol, pprint, init_printing\ndef print_series(n, x_value):\n # Initialize printing system with reverse order\n init_printing(order='rev-lex')\n\n x = Symbol('x')\n series = x\n for i in range(2, n+1):\n series = series + (x**i)/i \n\n pprint(series)\n\n # Evaluate the series at x_value\n series_value = series.subs({x:x_value})\n print('Value of the series at {0}: {1}'.format(x_value, series_value))\n\nif __name__ == '__main__':\n n = input('Enter the number of terms you want in the series: ')\n x_value = input('Enter the value of x at which you want to evaluate the series: ')\n print_series(int(n), float(x_value))\n```\n", "meta": {"hexsha": "2a5426b91a3018c930230e3013b90335a682f028", "size": 5725, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "2021/Python-Maths/chapter4.ipynb", "max_stars_repo_name": "Muramatsu2602/python-study", "max_stars_repo_head_hexsha": "c81eb5d2c343817bc29b2763dcdcabed0f6a42c6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "2021/Python-Maths/chapter4.ipynb", "max_issues_repo_name": "Muramatsu2602/python-study", "max_issues_repo_head_hexsha": "c81eb5d2c343817bc29b2763dcdcabed0f6a42c6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "2021/Python-Maths/chapter4.ipynb", "max_forks_repo_name": "Muramatsu2602/python-study", "max_forks_repo_head_hexsha": "c81eb5d2c343817bc29b2763dcdcabed0f6a42c6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 5725.0, "max_line_length": 5725, "alphanum_fraction": 0.6508296943, "converted": true, "num_tokens": 933, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.939024820841433, "lm_q2_score": 0.8902942195727173, "lm_q1q2_score": 0.8360083700304343}} {"text": "```python\nimport loader\nfrom sympy import *\ninit_printing()\nfrom root.solver import *\n```\n\nFind the general solution of\n$$\n \\vec{\\mathbf{x'}} = \\begin{bmatrix}\n 1 & 1 & 1 \\\\\n 2 & 1 & -1 \\\\\n 0 & -1 & 1 \\\\\n \\end{bmatrix} \\vec{\\mathbf{x}}\n$$\n\n\n```python\n_, p = system([\n [1, 1, 1],\n [2, 1, -1],\n [0, -1, 1]\n])\np.display()\n```\n\n\n$\\displaystyle \\text{Characteristic equation: }$\n\n\n\n$\\displaystyle 3 \\lambda - \\left(\\lambda - 1\\right)^{3} - 5 = 0$\n\n\n\n$\\displaystyle \\text{Eigenvalues and eigenvectors}$\n\n\n\n$\\displaystyle \\lambda_1 = -1$\n\n\n\n$\\displaystyle \\left[\\begin{matrix}2 & 1 & 1 & 0\\\\2 & 2 & -1 & 0\\\\0 & -1 & 2 & 0\\end{matrix}\\right]\\text{ ~ }\\left[\\begin{matrix}1 & 0 & \\frac{3}{2} & 0\\\\0 & 1 & -2 & 0\\\\0 & 0 & 0 & 0\\end{matrix}\\right]\\Rightarrow v = \\left[\\begin{matrix}- \\frac{3}{2}\\\\2\\\\1\\end{matrix}\\right]$\n\n\n\n$\\displaystyle \\lambda_2 = 2$\n\n\n\n$\\displaystyle \\left[\\begin{matrix}-1 & 1 & 1 & 0\\\\2 & -1 & -1 & 0\\\\0 & -1 & -1 & 0\\end{matrix}\\right]\\text{ ~ }\\left[\\begin{matrix}1 & 0 & 0 & 0\\\\0 & 1 & 1 & 0\\\\0 & 0 & 0 & 0\\end{matrix}\\right]\\Rightarrow v = \\left[\\begin{matrix}0\\\\-1\\\\1\\end{matrix}\\right]$\n\n\n\n$\\displaystyle \\text{Find the generalized eigenvector}\\left( M - \\lambda I \\right) w = v $\n\n\n\n$\\displaystyle \\left[\\begin{matrix}-1 & 1 & 1 & 0\\\\2 & -1 & -1 & -1\\\\0 & -1 & -1 & 1\\end{matrix}\\right]\\text{ ~ }\\left[\\begin{matrix}1 & 0 & 0 & -1\\\\0 & 1 & 1 & -1\\\\0 & 0 & 0 & 0\\end{matrix}\\right]\\Rightarrow w = \\left[\\begin{matrix}-1\\\\-1\\\\0\\end{matrix}\\right]$\n\n\n\n$\\displaystyle \\text{General solution: }$\n\n\n\n$\\displaystyle \\vec{\\mathbf{x}} = C_{1}e^{- t}\\left[\\begin{matrix}- \\frac{3}{2}\\\\2\\\\1\\end{matrix}\\right]+C_{2}e^{2 t}\\left[\\begin{matrix}0\\\\-1\\\\1\\end{matrix}\\right]+C_{3}e^{2 t}\\left(\\left[\\begin{matrix}0\\\\-1\\\\1\\end{matrix}\\right]t + \\left[\\begin{matrix}-1\\\\-1\\\\0\\end{matrix}\\right]\\right)$\n\n\n\n```python\n_, p = system([\n [3, -2],\n [4, -1]\n])\np.display()\n```\n\n\n$\\displaystyle \\text{Characteristic equation: }$\n\n\n\n$\\displaystyle \\left(\\lambda - 3\\right) \\left(\\lambda + 1\\right) + 8 = 0$\n\n\n\n$\\displaystyle \\text{Eigenvalues and eigenvectors}$\n\n\n\n$\\displaystyle \\lambda_1 = 1 - 2*I$\n\n\n\n$\\displaystyle \\left[\\begin{matrix}2 + 2 i & -2 & 0\\\\4 & -2 + 2 i & 0\\end{matrix}\\right]\\text{ ~ }\\left[\\begin{matrix}1 & - \\frac{2 - 2 i}{4} & 0\\\\0 & 0 & 0\\end{matrix}\\right]\\Rightarrow v = \\left[\\begin{matrix}\\frac{2 - 2 i}{4}\\\\1\\end{matrix}\\right]$\n\n\n\n$\\displaystyle \\text{Use Euler's formula to expand the imaginary part}$\n\n\n\n$\\displaystyle \\left[\\begin{matrix}\\frac{2 - 2 i}{4}\\\\1\\end{matrix}\\right] e^{t - 2 i t} = e^{t} \\left[\\begin{matrix}\\frac{1}{4} \\left(2 - 2 i\\right) \\left(- i \\sin{\\left(2 t \\right)} + \\cos{\\left(2 t \\right)}\\right)\\\\- i \\sin{\\left(2 t \\right)} + \\cos{\\left(2 t \\right)}\\end{matrix}\\right] = e^{t}\\left( \\left[\\begin{matrix}- \\frac{\\sin{\\left(2 t \\right)}}{2} + \\frac{\\cos{\\left(2 t \\right)}}{2}\\\\\\cos{\\left(2 t \\right)}\\end{matrix}\\right] + \\left[\\begin{matrix}- \\frac{\\sin{\\left(2 t \\right)}}{2} - \\frac{\\cos{\\left(2 t \\right)}}{2}\\\\- \\sin{\\left(2 t \\right)}\\end{matrix}\\right]\\right)$\n\n\n\n$\\displaystyle \\text{General solution: 
}$\n\n\n\n$\\displaystyle \\vec{\\mathbf{x}} = e^{t}\\left(C_{1}\\left[\\begin{matrix}- \\frac{\\sin{\\left(2 t \\right)}}{2} + \\frac{\\cos{\\left(2 t \\right)}}{2}\\\\\\cos{\\left(2 t \\right)}\\end{matrix}\\right] + C_{2}\\left[\\begin{matrix}- \\frac{\\sin{\\left(2 t \\right)}}{2} - \\frac{\\cos{\\left(2 t \\right)}}{2}\\\\- \\sin{\\left(2 t \\right)}\\end{matrix}\\right]\\right)$\n\n\nFind the solution and draw the phase portrait for the following system of ode\n$$\n \\vec{\\mathbf{x'}} = \\begin{bmatrix}\n 2 & -5 \\\\\n 1 & -2 \\\\\n \\end{bmatrix} \\vec{\\mathbf{x}}\n$$\n\n\n```python\n_, p = system([\n [2, -5],\n [1, -2]\n])\np.display()\n\n%matplotlib inline\nphase_portrait([\n [2, -5],\n [1, -2]\n ],\n -4, 4, -4, 4, 15, \"stream\"\n)\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "c18e734e5ad85d5188e368922f4bb60d61eaa29d", "size": 158282, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/system-of-ode.ipynb", "max_stars_repo_name": "kaiyingshan/ode-solver", "max_stars_repo_head_hexsha": "30c6798efe9c35a088b2c6043493470701641042", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2019-02-17T23:15:20.000Z", "max_stars_repo_stars_event_max_datetime": "2019-02-17T23:15:27.000Z", "max_issues_repo_path": "notebooks/system-of-ode.ipynb", "max_issues_repo_name": "kaiyingshan/ode-solver", "max_issues_repo_head_hexsha": "30c6798efe9c35a088b2c6043493470701641042", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/system-of-ode.ipynb", "max_forks_repo_name": "kaiyingshan/ode-solver", "max_forks_repo_head_hexsha": "30c6798efe9c35a088b2c6043493470701641042", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 326.3546391753, "max_line_length": 145100, "alphanum_fraction": 0.9261381585, "converted": true, "num_tokens": 1519, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9390248225478307, "lm_q2_score": 0.8902942159342104, "lm_q1q2_score": 0.8360083681329821}} {"text": "# Second Order ODEs in Python\n\nOur last assignment consisted of a series of first order ODEs. This assignment will be similarly structured; however, you will be using the same techniques to solve second order ODEs. In particular, the examples I will give will be for a damped spring-mass system and coupled spring mass system then the problems you will solve will be for a single damped pendulum and a coupled pendulum.\n\n## ODEs in Python\n\nIn this class, we will make use for the [scipy.integrate toolbox](https://docs.scipy.org/doc/scipy/reference/integrate.html). Primarily, we will be accessing the [solve_ivp](https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.solve_ivp.html#scipy.integrate.solve_ivp) (for *initial* value problems) and the [solve_bvp](https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.solve_bvp.html#scipy.integrate.solve_bvp) (for *boundary* value problems).\nThrough these two functions, Scipy seeks the best type of solver for a particular system. 
We'll import these and other necessary elements below.\n\n\n```python\n# Python Imports\n\n# 3rd Party Numerical Imports\nimport numpy as np\nfrom scipy.integrate import solve_ivp as ivp\n\n# 3rd Party Plotting Utilities\nfrom matplotlib import pyplot as plt\n```\n\n## Examples\n\n### The Damped Spring-Mass System\n\nYou all may remember that the equation of motion for a damped spring-mass system is\n\n\\begin{equation}\n m \\ddot{x} + c \\dot{x} + k x = 0\n\\end{equation}\n\nwhere $m$ is the mass, $c$ is the damping constant and $k$ is the spring constant. Just like we did in the first ODE assignment, we will have to rewrite this equation in terms of the highest order term.\n\n\\begin{equation}\n \\ddot{x} = -2\\gamma \\dot{x} - \\omega_0^2 x\n\\end{equation}\n\nwhere $2\\gamma = c/m$ is the normalized damping coefficient and $\\omega_0^2 = k/m$ is the natural frequency of the system. Now, the only problem with this equation is that we still have a term of $\\dot{x}$ which we will have to write out as a second equation. This way, the differential equation solver will know how $\\ddot{x}$, $\\dot{x}$ and $x$ are related. We know that $\\dot{x} = v$. Therefore, we will tell the solver this information by using a *system* of equations which we will write as\n\n\\begin{align}\n \\dot{x} &= v \\\\\n \\dot{v} &= -2\\gamma v - \\omega_0^2 x.\n\\end{align}\n\nNow, as it did on the last assignment, the ODE solver in Python expects as its first argument a function that packages all the necessary information about the problem we are trying to solve. And, the specified function is meant to take the independent variable as the first argument ($t$ in this case), the dependent variables as a tuple in the second argument ($X$ in this case) and any other variables after that. To see this visually, check the example and input below.\n\n```python\ndef odefunc(indepVar, depVars, *otherArgs)\n```\n\n\n\n```python\ndef dshm(t, X, gamma, omega0Sq):\n '''A function for the ODE solver that specifies the conditions for damped, simple harmonic oscillations.'''\n \n # Unpack the variables\n pos, vel = X\n \n # Return the Output of the Equations\n return [\n vel,\n -2*gamma*vel - omega0Sq*pos\n ]\n```\n\nNote from the equations and functions above that, as we have it written the first output will be the *integration* of $\\dot{x}$ with respect to time which is the position and the second output will be the integration of $\\dot{v} = \\ddot{x}$ with respect to time which is the velocity. Essentially, ODE solvers do not know how to handle anything other than a first order ODE. Therefore, we have to reframe our problems of higher orders into ones of a system of first order ODEs.\n\nAt this point, we are ready to begin testing this equation against different initial conditions. Before doing so, let's take a look at the necessary arguments for Scipy's IVP solver. Their documentation states that the function call looks like\n\n```python\nsolve_ivp(odefunc, tSpan, y0, args=None, tEval=None)\n```\n\nThe function call takes other arguments which we will not utilize here. However, those listed above are explained below.\n\n* `odefunc`: This represents the name of the function to be evaluated. The IVP solver will call this function to create the solution.\n* `tSpan`: This is meant to be a two-tuple meant to represent the start and stop times.\n* `y0`: The initial condition(s) of the system as a tuple or list\n* `args`: A tuple of arguments meant to be passed on to the `odefunc`. 
Note: **they must be in the same order as they appear in the `odefunc`**.\n* `tEval`: Used to specify specific times to evaluate.\n\nNow that we're aware of these, let's solve the charging problem under various initial conditions.\n\n\n```python\n# Set the values of the variables\nm = 1\nc = 0.5\nk = 2\ngamma = c/(2*m)\nomega0Sq = k/m\n\n# Set the initial value of the position and velocity\nX0 = [1, 0] # pos = 1, vel = 0\n\n# Set Time Specs\ntSpan = (0, 10) # The simulation times for our system\ntEval = np.linspace(tSpan[0], tSpan[1], 101)\n\n# Solve the Problem\nsol = ivp(dshm, tSpan, X0, args=(gamma, omega0Sq), t_eval=tEval)\n\n# Plot the Solution\nfig, ax = plt.subplots(figsize=(10, 7))\n_ = ax.plot(sol.t, sol.y[0], linewidth=2, label='Position (m)') # Plot the Position (which is the zeroth solution in the solution list)\n_ = ax.plot(sol.t, sol.y[1], linewidth=2, label='Velocity (m/s)') # Plot the Velocity (which is the first solution in the solution list)\n_ = ax.set_xlim(tSpan)\n_ = ax.set_xlabel('Time (s)', fontsize=14)\n_ = ax.set_ylabel('Amplitude', fontsize=14)\n_ = ax.set_title('Damped Harmonic Oscillator', fontsize=16)\n_ = ax.grid(True)\n_ = ax.legend()\n```\n\nNow, let's instead assume that the mass starts at the origin and is given some initial kick in the positive direction.\n\n\n```python\n# Set the values of the variables\nm = 1\nc = 0.5\nk = 2\ngamma = c/(2*m)\nomega0Sq = k/m\n\n# Set the initial value of the position and velocity\nX0 = [0, 2] # pos = 0, vel = 2\n\n# Set Time Specs\ntSpan = (0, 10) # The simulation times for our system\ntEval = np.linspace(tSpan[0], tSpan[1], 101)\n\n# Solve the Problem\nsol = ivp(dshm, tSpan, X0, args=(gamma, omega0Sq), t_eval=tEval)\n\n# Plot the Solution\nfig, ax = plt.subplots(figsize=(10, 7))\n_ = ax.plot(sol.t, sol.y[0], linewidth=2, label='Position (m)') # Plot the Position (which is the zeroth solution in the solution list)\n_ = ax.plot(sol.t, sol.y[1], linewidth=2, label='Velocity (m/s)') # Plot the Velocity (which is the first solution in the solution list)\n_ = ax.set_xlim(tSpan)\n_ = ax.set_xlabel('Time (s)', fontsize=14)\n_ = ax.set_ylabel('Amplitude', fontsize=14)\n_ = ax.set_title('Damped Harmonic Oscillator', fontsize=16)\n_ = ax.grid(True)\n_ = ax.legend()\n```\n\nFinally, let's take a look at the *critically damped* case when $\\gamma = \\omega_0$.\n\n\n```python\n# Set the values of the variables\ngamma = 1\nomega0 = 1\nomega0Sq = omega0**2\n\n# Set the initial value of the position and velocity\nX0 = [1, -3] # pos = 1, vel = -1\n\n# Set Time Specs\ntSpan = (0, 10) # The simulation times for our system\ntEval = np.linspace(tSpan[0], tSpan[1], 101)\n\n# Solve the Problem\nsol = ivp(dshm, tSpan, X0, args=(gamma, omega0Sq), t_eval=tEval)\n\n# Plot the Solution\nfig, ax = plt.subplots(figsize=(10, 7))\n_ = ax.plot(sol.t, sol.y[0], linewidth=2, label='Position (m)') # Plot the Position (which is the zeroth solution in the solution list)\n_ = ax.plot(sol.t, sol.y[1], linewidth=2, label='Velocity (m/s)') # Plot the Velocity (which is the first solution in the solution list)\n_ = ax.set_xlim(tSpan)\n_ = ax.set_xlabel('Time (s)', fontsize=14)\n_ = ax.set_ylabel('Amplitude', fontsize=14)\n_ = ax.set_title('Critically Damped Harmonic Oscillator', fontsize=16)\n_ = ax.grid(True)\n_ = ax.legend()\n```\n\nYou are welcome to play around with the three cells above to see how changing $k$, $m$, $c$ or the initial conditions affects the system.\n\n### Coupled, Damped Harmonic Oscillators\n\nNow, let's consider the case where we have a 
mass connected to a spring which is attached to the left wall and a spring attached between another mass and the right wall with a spring connecting the two masses in between. We can write the equations of motion for the system as being\n\n\\begin{align}\n m_1 \\ddot{x}_1 &= -c \\dot{x}_1 - k_1 x_1 - k_2 \\left( x_1 - x_2 \\right) \\\\\n m_2 \\ddot{x}_2 &= -c \\dot{x}_2 - k_2 \\left( x_2 - x_1 \\right) - k_3 x_2\n\\end{align}\n\nwe can then rewrite this as\n\n\\begin{align}\n \\dot{x}_1 &= v_1 \\\\\n \\dot{x}_2 &= v_2 \\\\\n \\dot{v}_1 &= -2\\gamma v_1 - \\omega_0^2 x_1 - \\omega_0^2 \\left( x_1 - x_2 \\right) \\\\\n \\dot{v}_2 &= -2\\gamma v_2 - \\omega_0^2 \\left( x_2 - x_1 \\right) - \\omega_0^2 x_2\n\\end{align}\n\nassuming $k_1=k_2=k$ and $m_1=m_2=m$.\n\n\n```python\n# The Predator-Prey Equation\ndef cdhm(t, X, gamma, omega0Sq):\n '''A function for the ODE solver that specifies the conditions for damped, simple harmonic oscillations.'''\n \n # Unpack the variables\n pos1, pos2, vel1, vel2 = X\n \n # Return the Output of the Equations\n return [\n vel1,\n vel2,\n -2*gamma*vel1 - omega0Sq*pos1 - omega0Sq*(pos1 - pos2),\n -2*gamma*vel2 - omega0Sq*(pos2 - pos1) - omega0Sq*pos2\n ]\n```\n\n\n```python\n# Set the values of the variables\nm = 1\nc = 0.5\nk = 2\ngamma = c/(2*m)\nomega0Sq = k/m\n\n# Set the initial value of the position and velocity\nX0 = [0, -1, 0, 0] # pos1 = 1, pos2 = 1.5, vel = 0\n\n# Set Time Specs\ntSpan = (0, 10) # The simulation times for our system\ntEval = np.linspace(tSpan[0], tSpan[1], 301)\n\n# Solve the Problem\nsol = ivp(cdhm, tSpan, X0, args=(gamma, omega0Sq), t_eval=tEval)\n\n# Plot the Solution\nfig, ax = plt.subplots(2, 1, figsize=(10, 12), sharex=True)\n_ = ax[0].plot(sol.t, sol.y[0], linewidth=2, label='$x_1$') # Plot the Position1 (which is the zeroth solution in the solution list)\n_ = ax[0].plot(sol.t, sol.y[1], linewidth=2, label='$x_2$') # Plot the Position2 (which is the first solution in the solution list)\n_ = ax[1].plot(sol.t, sol.y[2], linewidth=2, label='$v_1$') # Plot the Position1 (which is the zeroth solution in the solution list)\n_ = ax[1].plot(sol.t, sol.y[3], linewidth=2, label='$v_2$') # Plot the Position2 (which is the first solution in the solution list)\n_ = ax[0].set_xlim(tSpan)\n_ = ax[1].set_xlim(tSpan)\n_ = ax[1].set_xlabel('Time (s)', fontsize=14)\n_ = ax[0].set_ylabel('Position (m)', fontsize=14)\n_ = ax[1].set_ylabel('Velocity (m/s)', fontsize=14)\n_ = fig.suptitle('Coupled, Damped Harmonic Oscillator', fontsize=16)\n_ = ax[0].grid(True)\n_ = ax[1].grid(True)\n_ = ax[0].legend()\n_ = ax[1].legend()\n```\n\n\n```python\n# Set the values of the variables\nm = 1\nc = 0.5\nk = 2\ngamma = c/(2*m)\nomega0Sq = k/m\n\n# Set the initial value of the position and velocity\nX0 = [0, 0, 0, -2] # pos1 = 1, pos2 = 1.5, vel = 0\n\n# Set Time Specs\ntSpan = (0, 10) # The simulation times for our system\ntEval = np.linspace(tSpan[0], tSpan[1], 301)\n\n# Solve the Problem\nsol = ivp(cdhm, tSpan, X0, args=(gamma, omega0Sq), t_eval=tEval)\n\n# Plot the Solution\nfig, ax = plt.subplots(2, 1, figsize=(10, 12), sharex=True)\n_ = ax[0].plot(sol.t, sol.y[0], linewidth=2, label='$x_1$') # Plot the Position1 (which is the zeroth solution in the solution list)\n_ = ax[0].plot(sol.t, sol.y[1], linewidth=2, label='$x_2$') # Plot the Position2 (which is the first solution in the solution list)\n_ = ax[1].plot(sol.t, sol.y[2], linewidth=2, label='$v_1$') # Plot the Position1 (which is the zeroth solution in the solution list)\n_ = ax[1].plot(sol.t, sol.y[3], 
linewidth=2, label='$v_2$') # Plot the Position2 (which is the first solution in the solution list)\n_ = ax[0].set_xlim(tSpan)\n_ = ax[1].set_xlim(tSpan)\n_ = ax[1].set_xlabel('Time (s)', fontsize=14)\n_ = ax[0].set_ylabel('Position (m)', fontsize=14)\n_ = ax[1].set_ylabel('Velocity (m/s)', fontsize=14)\n_ = fig.suptitle('Coupled, Damped Harmonic Oscillator', fontsize=16)\n_ = ax[0].grid(True)\n_ = ax[1].grid(True)\n_ = ax[0].legend()\n_ = ax[1].legend()\n```\n\n## Assignment\n\n### The Pendulum\n\nThe equation of motion for a damped pendulum is\n\n\\begin{equation}\n l \\ddot{\\theta} + c \\dot{\\theta} + g \\sin \\theta = 0\n\\end{equation}\n\nwhich can be rewritten as\n\n\\begin{align}\n \\dot{\\theta} &= \\omega \\\\\n \\dot{\\omega} &= -\\frac{c}{l} \\omega - \\frac{g}{l} \\sin \\theta\n\\end{align}\n\nWrite the function for Scipy below.\n\n\n```python\n\n```\n\nWrite the code to solve the equation here for the initial conditions\n\n\\begin{align}\n \\theta(0) &= \\frac{9\\pi}{10} \\\\\n \\dot{\\theta}(0) &= 0\n\\end{align}\n\nand when $c=0.5$, $g=9.8$ and $l=1$. Show 10 seconds of simulation time.\n\n\n```python\n\n```\n\n### Coupled Pendulums\n\nIn the case when two pendula are connected to each other via a spring, the equations of motion are\n\n\\begin{align}\n \\ddot{\\theta}_1 + \\frac{g}{l} \\sin \\theta_1 + \\frac{k}{m} (\\theta_1 - \\theta_2) &= 0 \\\\\n \\ddot{\\theta}_2 + \\frac{g}{l} \\sin \\theta_2 + \\frac{k}{m} (\\theta_2 - \\theta_1) &= 0 \\\\\n\\end{align}\n\nwhich can be written as\n\n\\begin{align}\n \\dot{\\theta}_1 &= \\omega_1 \\\\\n \\dot{\\theta}_2 &= \\omega_2 \\\\\n \\dot{\\omega}_1 &= - \\frac{g}{l} \\sin \\theta_1 - \\frac{k}{m} (\\theta_1 - \\theta_2) \\\\\n \\dot{\\omega}_2 &= - \\frac{g}{l} \\sin \\theta_2 - \\frac{k}{m} (\\theta_2 - \\theta_1) \\\\\n\\end{align}\n\nWrite the function for Scipy below.\n\n\n```python\n\n```\n\nAnd write the code to solve the differential equation below for the initial conditions\n\n\\begin{align}\n \\theta_1(0) &= 0 \\\\\n \\theta_2(0) &= \\frac{\\pi}{6} \\\\\n \\dot{\\theta}_1(0) &= 0 \\\\\n \\dot{\\theta}_2(0) &= 0 \\\\\n\\end{align}\n\n\nand when $k=1$, $m=1$, $g=9.8$ and $l=2$. 
Make the simulation for 30 seconds.\n\n\n```python\n\n```\n", "meta": {"hexsha": "ac86b69260a297d1b3bafb0a30ce6132f6f560fe", "size": 316370, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "05-ODEs/SecondOrderODEs/SecondOrderODEs.ipynb", "max_stars_repo_name": "wwaldron/NumericalPythonGuide", "max_stars_repo_head_hexsha": "8e0c2947251b9639cbc66d6462dd495c180e3faa", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-09-22T02:29:11.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-22T02:29:11.000Z", "max_issues_repo_path": "05-ODEs/SecondOrderODEs/SecondOrderODEs.ipynb", "max_issues_repo_name": "wwaldron/NumericalPythonGuide", "max_issues_repo_head_hexsha": "8e0c2947251b9639cbc66d6462dd495c180e3faa", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "05-ODEs/SecondOrderODEs/SecondOrderODEs.ipynb", "max_forks_repo_name": "wwaldron/NumericalPythonGuide", "max_forks_repo_head_hexsha": "8e0c2947251b9639cbc66d6462dd495c180e3faa", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 572.0976491863, "max_line_length": 87440, "alphanum_fraction": 0.9410026235, "converted": true, "num_tokens": 4220, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9124361604769414, "lm_q2_score": 0.9161096107264305, "lm_q1q2_score": 0.8358915357872496}} {"text": "# Tutorial\n\nWe will solve the following problem using a computer to assist with the technical aspects:\n\n```{admonition} Problem\n\nThe matrix $A$ is given by $A=\\begin{pmatrix}a & 1 & 1\\\\ 1 & a & 1\\\\ 1 & 1 & 2\\end{pmatrix}$.\n\n1. Find the determinant of $A$\n2. Hence find the values of $a$ for which $A$ is singular.\n3. For the following values of $a$, when possible obtain $A ^ {- 1}$ and confirm\n the result by computing $AA^{-1}$:\n 1. $a = 0$;\n 2. $a = 1$;\n 3. $a = 2$;\n 4. $a = 3$.\n\n```\n\n`sympy` is once again the library we will use for this.\n\nWe will start by our matrix $A$:\n\n\n```python\nimport sympy as sym\n\na = sym.Symbol(\"a\")\nA = sym.Matrix([[a, 1, 1], [1, a, 1], [1, 1, 2]])\n```\n\nWe can now create a variable `determinant` and assign it the value of the\ndeterminant of $A$:\n\n\n```python\ndeterminant = A.det()\ndeterminant\n```\n\n\n\n\n$\\displaystyle 2 a^{2} - 2 a$\n\n\n\nA matrix is singular if it has determinant 0. 
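Since the determinant factors as a product, an optional intermediate step is to look at its factored form (a small added check, reusing the `determinant` variable defined above):


```python
# Factor 2*a**2 - 2*a; each factor vanishing makes A singular.
sym.factor(determinant)  # expected: 2*a*(a - 1)
```

Either factor vanishing makes the matrix singular.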
We can find the values of $a$ for\nwhich this occurs:\n\n\n```python\nsym.solveset(determinant, a)\n```\n\n\n\n\n$\\displaystyle \\left\\{0, 1\\right\\}$\n\n\n\nThus it is not possible to find the inverse of $A$ for $a\\in\\{0, 1\\}$.\n\nHowever for $a = 2$:\n\n\n```python\nA.subs({a: 2})\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}2 & 1 & 1\\\\1 & 2 & 1\\\\1 & 1 & 2\\end{matrix}\\right]$\n\n\n\n\n```python\nA.subs({a: 2}).inv()\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}\\frac{3}{4} & - \\frac{1}{4} & - \\frac{1}{4}\\\\- \\frac{1}{4} & \\frac{3}{4} & - \\frac{1}{4}\\\\- \\frac{1}{4} & - \\frac{1}{4} & \\frac{3}{4}\\end{matrix}\\right]$\n\n\n\nTo carry out matrix multiplication we use the `@` symbol:\n\n\n```python\nA.subs({a: 2}).inv() @ A.subs({a: 2})\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 0 & 0\\\\0 & 1 & 0\\\\0 & 0 & 1\\end{matrix}\\right]$\n\n\n\nand for $a = 3$:\n\n\n```python\nA.subs({a: 3}).inv()\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}\\frac{5}{12} & - \\frac{1}{12} & - \\frac{1}{6}\\\\- \\frac{1}{12} & \\frac{5}{12} & - \\frac{1}{6}\\\\- \\frac{1}{6} & - \\frac{1}{6} & \\frac{2}{3}\\end{matrix}\\right]$\n\n\n\n\n```python\nA.subs({a: 3}).inv() @ A.subs({a: 3})\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 0 & 0\\\\0 & 1 & 0\\\\0 & 0 & 1\\end{matrix}\\right]$\n\n\n\n```{important}\nIn this tutorial we have\n\n- Created a matrix.\n- Calculated the determinant of the matrix.\n- Substituted values in the matrix.\n- Inverted the matrix.\n```\n", "meta": {"hexsha": "55379c4199ba86e74f21c2ffed5a7996f9980b7e", "size": 6461, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "book/tools-for-mathematics/04-matrices/tutorial/.main.md.bcp.ipynb", "max_stars_repo_name": "daffidwilde/pfm", "max_stars_repo_head_hexsha": "dcf38faccee3c212c8394c36f4c093a2916d283e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 8, "max_stars_repo_stars_event_min_datetime": "2020-09-24T21:02:41.000Z", "max_stars_repo_stars_event_max_datetime": "2020-10-14T08:37:21.000Z", "max_issues_repo_path": "book/tools-for-mathematics/04-matrices/tutorial/.main.md.bcp.ipynb", "max_issues_repo_name": "daffidwilde/pfm", "max_issues_repo_head_hexsha": "dcf38faccee3c212c8394c36f4c093a2916d283e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 87, "max_issues_repo_issues_event_min_datetime": "2020-09-21T15:54:23.000Z", "max_issues_repo_issues_event_max_datetime": "2021-12-19T23:26:15.000Z", "max_forks_repo_path": "book/tools-for-mathematics/04-matrices/tutorial/.main.md.bcp.ipynb", "max_forks_repo_name": "daffidwilde/pfm", "max_forks_repo_head_hexsha": "dcf38faccee3c212c8394c36f4c093a2916d283e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-10-02T09:21:27.000Z", "max_forks_repo_forks_event_max_datetime": "2021-07-08T14:46:27.000Z", "avg_line_length": 21.6086956522, "max_line_length": 219, "alphanum_fraction": 0.4383222411, "converted": true, "num_tokens": 872, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9744347845918813, "lm_q2_score": 0.8577681122619883, "lm_q1q2_score": 0.8358390857017952}} {"text": "# Eigenvalue and Eigenvector\nGiven an $m \\times m$ matrix $A$, we say a ***nonzero vector*** $\\vec{x} \\in \\mathbb{C}^m$ is an **eigenvector** of $A$, and a number $\\lambda \\in \\mathbb{C}$ is its **eigenvalue**, if\n\n$$A\\vec{x} = \\lambda \\vec{x}$$\n\nHow to compute them?\n\n$$A\\vec{x} = \\lambda \\vec{x} \\Longleftrightarrow \\left( \\lambda I - A \\right)\\vec{x} = \\vec{0}$$\n\n\nwhich has a ***nonzero solution*** $\\vec{x}$, which requires that the matrix $\\lambda I - A$ is ***singular***, thus\n\n$$\\DeclareMathOperator*{\\det}{det} A\\vec{x} = \\lambda \\vec{x} \\Longleftrightarrow \\left( \\lambda I - A \\right)\\vec{x} = \\vec{0} \\Longleftrightarrow \\det\\left( \\lambda I - A \\right) = 0$$\n\nHere $\\det\\left( \\lambda I - A \\right)$ is actually a polynomial of degree $m$ in $\\lambda$, called the **characteristic polynomial** of matrix $A$.\n1. Solve the **characteristic polynomial** to find possible $\\lambda$s, the engenvalues\n2. For each **engenvalue**, solve the linear system $\\left( \\lambda I - A \\right)\\vec{x} = \\vec{0}$ to find the corresponding eigenvectors of $A$.\n\n>**e.g.** Compute the eigenvalue of the matrix $A = \\left[\\begin{array}{cc}\n2 & 1 \\\\\n0 & 1\n\\end{array}\\right]$\n>\n>>$$\\lambda I - A = \\lambda \\cdot \\left[\\begin{array}{cc}\n1 & 0 \\\\\n0 & 1\n\\end{array}\\right] - \\left[\\begin{array}{cc}\n2 & 1 \\\\\n0 & 1\n\\end{array}\\right] = \\left[\\begin{array}{cc}\n\\lambda - 2 & 1 \\\\\n0 & \\lambda - 1\n\\end{array}\\right]$$\n\n>> So that the chracteristic polynomial of $A$ is $\\left( \\lambda - 2 \\right)\\left( \\lambda - 1 \\right)$. Therefore the eigenvalues of $A$ are $\\lambda_1 = 2$ and $\\lambda_2 = 1$. Corresponding to the eigenvalue $2$, the solution of the linear system $(\\lambda_1 I - A) \\vec{x} = \\vec{0}$ would be\n\n>>$$\\left[\\begin{array}{cc}\n0 & -1 \\\\\n0 & 1\n\\end{array}\\right] \\vec{x} = \\left[\\begin{array}{c}\n0 \\\\\n0 \n\\end{array}\\right] \\Longrightarrow \\vec{x} = k_1 \\cdot \\left[\\begin{array}{c}\n1 \\\\\n0 \n\\end{array}\\right]\n$$\n\n>> where $k_1$ is an arbitrary constant. And the other one is $k_2 \\cdot \\left[\\begin{array}{c}\n1 \\\\\n-1 \n\\end{array}\\right]$\n\n# Eigenspace and multiplicity of eigenvalues\n$\\odot$ If a vector $\\vec{x}$ is an eigenvector of $A$ corresponding to an eigenvalue $\\lambda$, then any scalar multiple of $\\vec{x}$ is also an eigenvector of $A$ corresponding to $\\lambda$.$\\Join$\n\n**eigenspace** of $A$ corresponding to the eigenvalue $\\lambda$: the solution space of $\\left( A - \\lambda I\\right)\\vec{x} = \\vec{0}$, denoted a $E_{\\lambda}$. 
And the dimension of $E_{\\lambda}$ is also the **geometric multiplicity** of $\\lambda$, which represents the maximum number of linearly linearly independent eigenvectors corresponding to the eigenvalue $\\lambda$.\n\n**algebraic multiplicity**, the multiplicity of each root $\\lambda$.\n\nNow denote all the distinct eigenvalues of a matrix $A$ be $\\lambda_1, \\lambda_2, \\dots, \\lambda_k$, so that the characteristic polynomial $\\det(\\lambda I - A)$ would be then expressed as\n\n$$\\det(\\lambda I - A) = (\\lambda - \\lambda_1)^{l_1}(\\lambda - \\lambda_2)^{l_2 }\\cdots(\\lambda - \\lambda_k)^{l_k}$$\n\nHere $l_1, l_2, \\dots, l_k$ are the algebraic multiplicities of $\\lambda_1, \\lambda_2, \\dots, \\lambda_k$ respectively.\n\n$Conclusion$\n\nFor any eigenvalue $\\lambda^*$ of a matrix $A$, its geometric multiplicity is always less than or equal to its algebric multiplicity.\n\n$Proof$\n\nSuppose a $m \\times m$ matrix $A$ with a eigenvalue $\\lambda^*$ whose geomatrix multiplicity is $l^*$. Then let the orthonormal basis of its eigenspace $E_{\\lambda^*}$ be $\\vec{v}_1, \\vec{v}_2, \\dots, \\vec{v}_{l^*}$. And let the complementary orthonormal vector be $\\vec{v}_{l^*+1},\\vec{v}_{l^*+2},\\dots,\\vec{v}_{m}$, so that they can form the orthogonormal basis of the $m\\text{-dimensional}$ vector space.\n\nLet $V$ be the orthogonal matrix with $\\vec{v}_{1},\\vec{v}_{2},\\dots,\\vec{v}_{m}$ as its columns, then we have\n\n$$V^{\\mathrm{T}}AV = \\left[\\begin{array}{cc}\n\\lambda I & C \\\\\n\\mathbf{0} & D\n\\end{array}\\right]$$\n\nwhere $I$ is the $l^* \\times l^*$ identity matrix, $C$ is an $l^*\\times (m-l^*)$ matrix, and $D$ is an $(m-l^*) \\times (m-l^*)$ matrix. Note that $\\det(V^{\\mathrm{T}})\\det(V) = 1$. So we have\n\n$$\\det(\\lambda I - A) = \\det(\\lambda I - V^{\\mathrm{T}}AV) = \\det( (\\lambda - \\lambda^*)I ) \\cdot \\det(\\lambda I - D) = (\\lambda - \\lambda^*)^{l^*} \\det(\\lambda I - D)$$\n\nwhich implies that the algebraic multiplicity of $\\lambda^*$ is at least $l^*$.\n\n# Similarity and Diagonalizability\n$Def$\n\nA matrix $A$ is said to be **similar** to another $B$ $iff$ there exists a ***nonsigular matrix*** $X$ such that\n\n$$A = XBX^{-1}$$\n\n$Conclusion$\n\n$$\\det(A - \\lambda I) = \\det(XBX^{-1} - \\lambda XIX^{-1}) = \\det(B - \\lambda I)$$\n\nAnd therefore they have the same eigenvalues.\n\nHere $m \\times m$ matrix $A$ is called **diagonalizable** if it is similar to a diagonal matrix $\\Lambda $.\n\n$$A = X\\Lambda X^{-1} \\Longrightarrow AX = X\\Lambda $$\n\nWe can also see that the columns of $X$ are eigenvectors of the matrix $A$ corresponding to eigenvalues on the diagonal of $\\Lambda $: $A \\vec{x}^c_i = \\lambda_i \\vec{x}^c_i$.\n\n**defective**: An eigenvalue whose algebraic mutiplicity exceeds its geometric multiplicity is a **defective eigenvalue**; A matrix with defective eigenvalues is a **defective matrix**. This promises its eigenvalue decomposition. And of course any diagonal matrix is nondefective.\n\n***\nFor simplicity, now we only consider real symmetric matrix $A$ ($A^{\\mathrm{T}} = A$). So that the eigenvalues are all real numbers, and eigenvectors can form a orthonormal basis of the $m\\text{-dimensional}$ vector space. 
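As a quick numerical illustration of the diagonalization $A = X\Lambda X^{-1}$ described above (an added sketch, assuming NumPy is available, applied to the $2\times 2$ example from the first section):

```python
import numpy as np

# The 2x2 example from the first section, with eigenvalues 2 and 1.
A = np.array([[2.0, 1.0],
              [0.0, 1.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose columns are eigenvectors.
lam, X = np.linalg.eig(A)
print(lam)  # expect 2. and 1. (order may vary)

# A is nondefective here, so X is invertible and A = X diag(lam) X^{-1}.
print(np.allclose(A, X @ np.diag(lam) @ np.linalg.inv(X)))  # expect True
```

Because the two eigenvalues are distinct, the eigenvectors are linearly independent and such an $X$ exists; a defective matrix would not admit this factorization.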
\n\n$Theorem$ 1 \n\nFor any real symmetric $m \\times m$ matrix $A$, there exist a real orthogonal matrix $Q$, and a real diagonal matrix $\\Lambda$, such that \n\n$$A = Q \\Lambda Q^{\\mathrm{T}} \\Longrightarrow A\\left[\n\\begin{array}{cccc}\n\\vec{q}_1^c & \\vec{q}_2^c & \\cdots & \\vec{q}_m^c \\\\\n\\end{array}\n\\right] = \\left[\n\\begin{array}{cccc}\n\\vec{q}_1^c & \\vec{q}_2^c & \\cdots & \\vec{q}_m^c \\\\\n\\end{array}\n\\right] \\left[\n\\begin{array}{cccc}\n\\lambda_1 & & & \\\\\n& \\lambda_2 & & \\\\\n& & \\ddots & \\\\\n& & & \\lambda_m \\\\\n\\end{array}\n\\right]$$\n\nThis is the **eigenvalue decomposition** of matrix $A$. We can also give them an order so that $\\left| \\lambda_1 \\right| \\geq \\left| \\lambda_2 \\right| \\geq \\cdots \\geq \\left| \\lambda_1 \\right| \\geq\\left| \\lambda_m \\right| \\geq 0$\n\n# Conditioning of Eigenvalue Problems\nGiven an $m \\times m$ matrix $A$ which is diagonalizable with eigenvalues $\\lambda_1, \\lambda_2, \\dots \\lambda_m$, and a set of eigenvectors $\\vec{x}^c_1,\\vec{x}^c_2, \\dots, \\vec{x}^c_m$ so that we can write $A = X\\Lambda X^{-1}$ or equivalently $\\Lambda = X^{-1}AX$.\n\nAssume there is a small perturbation $\\Delta A$ on the matrix $A$. Let us consider eigenvalues of the perturbed matrix $A + \\Delta A$. We denote\n\n$$X^{-1}\\Delta A X = F$$\n\nSo that $X^{-1}\\left( A + \\Delta A \\right)X = \\Lambda + F$ and let $\\mu$ be the eigenvalue of $\\left( A + \\Delta A \\right)$ and $D+F$ corresponding to the eigenvector $\\vec{v}$.\n\n$$\\left( D + F \\right) \\vec{v} = \\mu \\vec{v} \\Longrightarrow \\mu \\vec{v} - D \\vec{v} = F\\vec{v}$$\n\nTherefore we have\n\n$$\\begin{align}\n\\left\\| \\vec{v} \\right\\|_2 &= \\left\\| \\left( \\mu I - D \\right)^{-1} F \\vec{v} \\right\\|_2 \\\\\n&\\leq \\left\\| \\left( \\mu I - D \\right)^{-1} \\right\\|_2 \\cdot \\left\\| F \\right\\|_2 \\cdot \\left\\| \\vec{v} \\right\\|_2 \\\\\n&= \\frac{\\left\\| F \\right\\|_2 \\cdot \\left\\| \\vec{v} \\right\\|_2} {\\left| \\mu - \\lambda_k \\right|}\n\\end{align}$$\n\nSo that \n\n$$\\DeclareMathOperator*{\\Cond}{Cond}\n\\begin{align}\n\\left| \\mu - \\lambda_k \\right| &\\leq \\left\\| F \\right\\|_2 \\\\\n&\\leq \\left\\| X \\right\\|_2 \\cdot \\left\\| \\Delta A \\right\\|_2 \\cdot \\left\\| X^{-1} \\right\\|_2 \\\\\n&= \\Cond(X) \\left\\|\\Delta A \\right\\|_2 \n\\end{align}$$\n\nwhere $\\lambda_k$ is the closest to $\\mu$ among the eigenvalues of the matrix $A$. \n\nIf the matrix $A$ is symmetric, then we know From $Theorem$ 1 that the matrix $Q$ is orthogonal. Therefore solving the eigenvalues of a symmetric matrix is always well-conditioned, since $\\Cond(Q) = 1$ for orthogonal matrix Q.\n\n# Algorithms for solving eigenvalue problems\nTo find the eigenvalue is to solve the corresponding characteristic polynomial. But that's an ill-conditioned problem. At the same time, there is no exact formula to express the roots of a general high order ($> 5$) polynomials. This means that in general finding the eigenvalues of a matrix cannot be achieved within a finite number of steps of operations. 
Therefore an eigensolver has to be *iterative*.\n\nSimilar to the idea from $QR$ and $LU$, we wish to multiply a sequence of orthogonal matrices and their transposes to the both sides of $A$, such that\n\n$$Q_m \\cdots Q_2 Q_1 A Q^{\\mathrm{T}}_1 Q^{\\mathrm{T}}_2 \\cdots Q^{\\mathrm{T}}_m = \\Lambda$$\n\nAnd then\n\n$$A= Q^{\\mathrm{T}}_1Q^{\\mathrm{T}}_2\\cdots Q^{\\mathrm{T}}_m \\Lambda Q_1Q_2 \\cdots Q_m = Q^{\\mathrm{T}}\\Lambda Q$$\n\nHowever such an approach is just an imagination. An eigenvalue problem cannot be solved in a finite number of steps in general. The approach to solve eigenvalue problem is in general to produce a sequence\n\n$$Q_j \\cdots Q_2 Q_1 A Q^{\\mathrm{T}}_1 Q^{\\mathrm{T}}_2 \\cdots Q^{\\mathrm{T}}_j$$\n\nsuch that it converges to an diagonal matrix $\\Lambda$ as $ j \\to 1$. In this sense we say\n\n**Any eigenvalue solver must be iterative.**\n", "meta": {"hexsha": "f8132f53dc720d462b8a8a82ec051f956ee3f519", "size": 12686, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Computational/Intro to Numerical Computing/Note_Chap08_Fundamentals of Eigenvalue Problems.ipynb", "max_stars_repo_name": "XavierOwen/Notes", "max_stars_repo_head_hexsha": "d262a9103b29ee043aa198b475654aabd7a2818d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2018-11-27T10:31:08.000Z", "max_stars_repo_stars_event_max_datetime": "2019-01-20T03:11:58.000Z", "max_issues_repo_path": "Computational/Intro to Numerical Computing/Note_Chap08_Fundamentals of Eigenvalue Problems.ipynb", "max_issues_repo_name": "XavierOwen/Notes", "max_issues_repo_head_hexsha": "d262a9103b29ee043aa198b475654aabd7a2818d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Computational/Intro to Numerical Computing/Note_Chap08_Fundamentals of Eigenvalue Problems.ipynb", "max_forks_repo_name": "XavierOwen/Notes", "max_forks_repo_head_hexsha": "d262a9103b29ee043aa198b475654aabd7a2818d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-07-14T19:57:23.000Z", "max_forks_repo_forks_event_max_datetime": "2020-07-14T19:57:23.000Z", "avg_line_length": 48.7923076923, "max_line_length": 428, "alphanum_fraction": 0.5594355983, "converted": true, "num_tokens": 3079, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9372107966642556, "lm_q2_score": 0.891811041124754, "lm_q1q2_score": 0.8358149363265098}} {"text": "```python\n#format the book\nfrom __future__ import division, print_function\n%matplotlib inline\nimport sys\nsys.path.insert(0, '..')\nimport book_format\nbook_format.set_style()\n```\n\n\n\n\n\n\n\n\n\n\n# Computing and plotting PDFs of discrete data\n\nSo let's investigate how to compute and plot probability distributions.\n\n\nFirst, let's make some data according to a normal distribution. We use `numpy.random.normal` for this. The parameters are not well named. `loc` is the mean of the distribution, and `scale` is the standard deviation. 
We can call this function to create an arbitrary number of data points that are distributed according to that mean and std.\n\n\n```python\nimport numpy as np\nimport numpy.random as random\n\nmean = 3\nstd = 2\n\ndata = random.normal(loc=mean, scale=std, size=50000)\nprint(len(data))\nprint(data.mean())\nprint(data.std())\n```\n\n 50000\n 3.004197839408925\n 1.995905183997337\n\n\n\n```python\ndata = 1.8 + np.random.randn(50000)*0.414\ndata.shape\n```\n\n\n\n\n (50000,)\n\n\n\n\n```python\ndata.std(), data.mean()\n```\n\n\n\n\n (0.413856930662519, 1.801027366859001)\n\n\n\nAs you can see from the print statements we got 5000 points that have a mean very close to 3, and a standard deviation close to 2.\n\nWe can plot this Gaussian by using `scipy.stats.norm` to create a frozen function that we will then use to compute the pdf (probability distribution function) of the Gaussian.\n\n\n```python\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport scipy.stats as stats\n\ndef plot_normal(xs, mean, std, **kwargs):\n norm = stats.norm(mean, std)\n plt.plot(xs, norm.pdf(xs), **kwargs)\n\nxs = np.linspace(-5, 15, num=200)\nplot_normal(xs, mean, std, color='k')\n```\n\n\n```python\nnorm = stats.norm(mean, std)\nnorm.pdf([1,2,3,4])\n```\n\n\n\n\n array([0.12098536, 0.17603266, 0.19947114, 0.17603266])\n\n\n\nBut we really want to plot the PDF of the discrete data, not the idealized function.\n\nThere are a couple of ways of doing that. First, we can take advantage of `matplotlib`'s `hist` method, which computes a histogram of a collection of data. Normally `hist` computes the number of points that fall in a bin, like so:\n\n\n```python\nplt.hist(data, bins=200)\nplt.show()\n```\n\nthat is not very useful to us - we want the PDF, not bin counts. Fortunately `hist` includes a `density` parameter which will plot the PDF for us.\n\n\n```python\nplt.hist(data, bins=200, density=True)\nplt.show()\n```\n\nI may not want bars, so I can specify the `histtype` as 'step' to get a line.\n\n\n```python\nplt.hist(data, bins=200, density=True, histtype='step', lw=2)\nplt.show()\n```\n\nTo be sure it is working, let's also plot the idealized Gaussian in black.\n\n\n```python\nplt.hist(data, bins=200, density=True, histtype='step', lw=2)\nnorm = stats.norm(mean, std)\nplt.plot(xs, norm.pdf(xs), color='k', lw=2)\nplt.show()\n```\n\nThere is another way to get the approximate distribution of a set of data. There is a technique called *kernel density estimate* that uses a kernel to estimate the probability distribution of a set of data. SciPy implements it with the function `gaussian_kde`. Do not be mislead by the name - Gaussian refers to the type of kernel used in the computation. This works for any distribution, not just Gaussians. In this section we have a Gaussian distribution, but soon we will not, and this same function will work.\n\n\n```python\nkde = stats.gaussian_kde(data)\n\nxs = np.linspace(-5, 15, num=200)\nplt.plot(xs, kde(xs))\nplt.show()\n```\n\n## Monte Carlo Simulations\n\n\nWe (well I) want to do this sort of thing because I want to use monte carlo simulations to compute distributions. It is easy to compute Gaussians when they pass through linear functions, but difficult to impossible to compute them analytically when passed through nonlinear functions. Techniques like particle filtering handle this by taking a large sample of points, passing them through a nonlinear function, and then computing statistics on the transformed points. 
Let's do that.\n\nWe will start with the linear function $f(x) = 2x + 12$ just to prove to ourselves that the code is working. I will alter the mean and std of the data we are working with to help ensure the numbers that are output are unique It is easy to be fooled, for example, if the formula multipies x by 2, the mean is 2, and the std is 2. If the output of something is 4, is that due to the multication factor, the mean, the std, or a bug? It's hard to tell. \n\n\n```python\ndef f(x):\n return 2*x + 12\n\nmean = 1.\nstd = 1.4\ndata = random.normal(loc=mean, scale=std, size=50000)\n\nd_t = f(data) # transform data through f(x)\n\nplt.hist(data, bins=200, density=True, histtype='step', lw=2)\nplt.hist(d_t, bins=200, density=True, histtype='step', lw=2)\n\nplt.ylim(0, .35)\nplt.show()\nprint('mean = {:.2f}'.format(d_t.mean()))\nprint('std = {:.2f}'.format(d_t.std()))\n```\n\nThis is what we expected. The input is the Gaussian $\\mathcal{N}(\\mu=1, \\sigma=1.4)$, and the function is $f(x) = 2x+12$. Therefore we expect the mean to be shifted to $f(\\mu) = 2*1+12=14$. We can see from the plot and the print statement that this is what happened. \n\nBefore I go on, can you explain what happened to the standard deviation? You may have thought that the new $\\sigma$ should be passed through $f(x)$ like so $2(1.4) + 12=14.81$. But that is not correct - the standard deviation is only affected by the multiplicative factor, not the shift. If you think about that for a moment you will see it makes sense. We multiply our samples by 2, so they are twice as spread out as before. Standard deviation is a measure of how spread out things are, so it should also double. It doesn't matter if we then shift that distribution 12 places, or 12 million for that matter - the spread is still twice the input data.\n\n\n\n## Nonlinear Functions\n\nNow that we believe in our code, lets try it with nonlinear functions.\n\n\n```python\ndef f2(x):\n return (np.cos((1.5*x + 2.1))) * np.sin(0.3*x) - 1.6*x\n\nd_t = f2(data)\nplt.subplot(121)\nplt.hist(d_t, bins=200, density=False, histtype='step', lw=2)\n# plt.hist(data, bins=200, density=True, histtype='step', lw=2)\n\nplt.subplot(122)\nkde = stats.gaussian_kde(d_t)\nxs = np.linspace(-10, 10, 200)\nplt.plot(xs, kde(xs), 'k')\nplot_normal(xs, d_t.mean(), d_t.std(), color='g', lw=3)\nplt.show()\nprint('mean = {:.2f}'.format(d_t.mean()))\nprint('std = {:.2f}'.format(d_t.std()))\n```\n\nHere I passed the data through the nonlinear function $f(x) = \\cos(1.5x+2.1)\\sin(\\frac{x}{3}) - 1.6x$. That function is quite close to linear, but we can see how much it alters the pdf of the sampled data. \n\nThere is a lot of computation going on behind the scenes to transform 50,000 points and then compute their PDF. The Extended Kalman Filter (EKF) gets around this by linearizing the function at the mean and then passing the Gaussian through the linear equation. We saw above how easy it is to pass a Gaussian through a linear function. So lets try that.\n\nWe can linearize this by taking the derivative of the function at x. We can use sympy to get the derivative. 
\n\n\n```python\n#its not necessary that the tangent \n```\n\n\n```python\nimport sympy\nx = sympy.symbols('x')\nf = sympy.cos(1.5*x+2.1) * sympy.sin(x/3) - 1.6*x\ndfx = sympy.diff(f, x)\ndfx\n```\n\n\n\n\n -1.5*sin(x/3)*sin(1.5*x + 2.1) + cos(x/3)*cos(1.5*x + 2.1)/3 - 1.6\n\n\n\nWe can now compute the slope of the function by evaluating the derivative at the mean.\n\n\n```python\nm = dfx.subs(x, mean)\nm\n```\n\n\n\n\n -1.66528051815545\n\n\n\nThe equation of a line is $y=mx+b$, so the new standard deviation should be $~1.67$ times the input std. We can compute the new mean by passing it through the original function because the linearized function is just the slope of f(x) evaluated at the mean. The slope is a tangent that touches the function at $x$, so both will return the same result. So, let's plot this and compare it to the results from the monte carlo simulation.\n\n\n```python\nplt.hist(d_t, bins=200, density=True, histtype='step', lw=2)\nplot_normal(xs, f2(mean), abs(float(m)*std), color='k', lw=3, label='EKF')\nplot_normal(xs, d_t.mean(), d_t.std(), color='r', lw=3, label='MC')\nplt.legend()\nplt.show()\n```\n\nWe can see from this that the estimate from the EKF (in red) is not exact, but it is not a bad approximation either. \n", "meta": {"hexsha": "0bcd565de52101c119e78e17591842c576216599", "size": 112085, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb", "max_stars_repo_name": "goel42/Kalman-and-Bayesian-Filters-in-Python", "max_stars_repo_head_hexsha": "b163a4df71acef393adfc69bcfc16bdc78de2f85", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb", "max_issues_repo_name": "goel42/Kalman-and-Bayesian-Filters-in-Python", "max_issues_repo_head_hexsha": "b163a4df71acef393adfc69bcfc16bdc78de2f85", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb", "max_forks_repo_name": "goel42/Kalman-and-Bayesian-Filters-in-Python", "max_forks_repo_head_hexsha": "b163a4df71acef393adfc69bcfc16bdc78de2f85", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 192.2555746141, "max_line_length": 17140, "alphanum_fraction": 0.9080608467, "converted": true, "num_tokens": 2293, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9099070158103778, "lm_q2_score": 0.9184802468145655, "lm_q1q2_score": 0.8357316204598206}} {"text": "# Poisson Distribution\n\n> ***GitHub***: https://github.com/czs108\n\n## Definition\n\n\\begin{equation}\nP(X = r) = \\frac{e^{-\\lambda} \\cdot {\\lambda}^{r}}{r!}\n\\end{equation}\n\n\\begin{equation}\n\\lambda = \\text{The mean number of occurrences in the interval or the rate of occurrence.}\n\\end{equation}\n\nIf a variable $X$ follows a *Poisson Distribution* where the *mean* number of occurrences in the interval or the rate of occurrence is $\\lambda$. 
This can be written as\n\n\\begin{equation}\nX \\sim Po(\\lambda)\n\\end{equation}\n\n## Expectation\n\n\\begin{equation}\nE(X) = \\lambda\n\\end{equation}\n\n## Variance\n\n\\begin{equation}\nVar(X) = \\lambda\n\\end{equation}\n\n## Approximation\n\nA *Binomial Distribution* $X \\sim B(n,\\, p)$ can be approximated by $X \\sim Po(np)$ if $n$ is *large* and $p$ is *small*.\n\nBecause both the *expectation* and *variance* of $X \\sim Po(np)$ are $np$. When $n$ is large and $p$ is small, $(1 - p) \\approx 1$ and for $X \\sim B(n,\\, p)$:\n\n\\begin{equation}\nE(X) = np\n\\end{equation}\n\n\\begin{equation}\nVar(X) \\approx np\n\\end{equation}\n\nThe approximation is typically very close if $n > 50$ and $p < 0.1$.\n", "meta": {"hexsha": "28bd062f42e394a77e59fa24b9aceb6a9aecc546", "size": 2222, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "exercises/Poisson Distribution.ipynb", "max_stars_repo_name": "czs108/Probability-Theory-Exercises", "max_stars_repo_head_hexsha": "60c6546db1e7f075b311d1e59b0afc3a13d93229", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "exercises/Poisson Distribution.ipynb", "max_issues_repo_name": "czs108/Probability-Theory-Exercises", "max_issues_repo_head_hexsha": "60c6546db1e7f075b311d1e59b0afc3a13d93229", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "exercises/Poisson Distribution.ipynb", "max_forks_repo_name": "czs108/Probability-Theory-Exercises", "max_forks_repo_head_hexsha": "60c6546db1e7f075b311d1e59b0afc3a13d93229", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-03-21T05:04:07.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-21T05:04:07.000Z", "avg_line_length": 23.6382978723, "max_line_length": 178, "alphanum_fraction": 0.498649865, "converted": true, "num_tokens": 366, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9099070060380482, "lm_q2_score": 0.9184802412359966, "lm_q1q2_score": 0.8357316064081499}} {"text": "# Quadratic Equations\n\nConsider the following equation:\n\n\\begin{equation}y = 2(x - 1)(x + 2)\\end{equation}\n\nIf you multiply out the factored ***x*** expressions, this equates to:\n\n\\begin{equation}y = 2x^{2} + 2x - 4\\end{equation}\n\nNote that the highest ordered term includes a squared variable (x2).\n\nLet's graph this equation for a range of ***x*** values:\n\n\n```python\nimport pandas as pd\n\n# Create a dataframe with an x column containing values to plot\ndf = pd.DataFrame ({'x': range(-9, 9)})\n\n# Add a y column by applying the quadratic equation to x\ndf['y'] = 2*df['x']**2 + 2 *df['x'] - 4\n\n# Plot the line\n%matplotlib inline\nfrom matplotlib import pyplot as plt\n\nplt.plot(df.x, df.y, color=\"grey\")\nplt.xlabel('x')\nplt.ylabel('y')\nplt.grid()\nplt.axhline()\nplt.axvline()\nplt.show()\n```\n\nNote that the graph shows a *parabola*, which is an arc-shaped line that reflects the $x$ and $y$ values calculated for the equation.\n\nNow let's look at another equation that includes an ***x2*** term:\n\n\\begin{equation}y = -2x^{2} + 6x + 7\\end{equation}\n\nWhat does that look like as a graph?:\n\n\n```python\nimport pandas as pd\n\n# Create a dataframe with an x column containing values to plot\ndf = pd.DataFrame ({'x': range(-8, 12)})\n\n# Add a y column by applying the quadratic equation to x\ndf['y'] = -2*df['x']**2 + 6*df['x'] + 7\n\n# Plot the line\n%matplotlib inline\nfrom matplotlib import pyplot as plt\n\nplt.plot(df.x, df.y, color=\"grey\")\nplt.xlabel('x')\nplt.ylabel('y')\nplt.grid()\nplt.axhline()\nplt.axvline()\nplt.show()\n```\n\nAgain, the graph shows a parabola, but this time instead of being open at the top, the parabola is open at the bottom.\n\nEquations that assign a value to ***y*** based on an expression that includes a squared value for ***x*** create parabolas. If the relationship between ***y*** and ***x*** is such that ***y*** is a *positive* multiple of the ***x2*** term, the parabola will be open at the top; when ***y*** is a *negative* multiple of the ***x2*** term, then the parabola will be open at the bottom.\n\nThese kinds of equations are known as *quadratic* equations, and they have some interesting characteristics. There are several ways quadratic equations can be written, but the *standard form* for quadratic equation is:\n\n\\begin{equation}y = ax^{2} + bx + c\\end{equation}\n\nWhere ***a***, ***b***, and ***c*** are numeric coefficients or constants.\n\nLet's start by examining the parabolas generated by quadratic equations in more detail.\n\n## Parabola Vertex and Line of Symmetry\nParabolas are symmetrical, with x and y values converging exponentially towards the highest point (in the case of a downward opening parabola) or lowest point (in the case of an upward opening parabola). 
The point where the parabola meets the line of symmetry is known as the *vertex*.\n\nRun the following cell to see the line of symmetry and vertex for the two parabolas described previously (don't worry about the calculations used to find the line of symmetry and vertex - we'll explore that later):\n\n\n```python\n%matplotlib inline\n\ndef plot_parabola(a, b, c):\n import pandas as pd\n import numpy as np\n from matplotlib import pyplot as plt\n \n # get the x value for the line of symmetry\n vx = (-1*b)/(2*a)\n \n # get the y value when x is at the line of symmetry\n vy = a*vx**2 + b*vx + c\n\n # Create a dataframe with an x column containing values from x-10 to x+10\n minx = int(vx - 10)\n maxx = int(vx + 11)\n df = pd.DataFrame ({'x': range(minx, maxx)})\n\n # Add a y column by applying the quadratic equation to x\n df['y'] = a*df['x']**2 + b *df['x'] + c\n\n # get min and max y values\n miny = df.y.min()\n maxy = df.y.max()\n\n # Plot the line\n plt.plot(df.x, df.y, color=\"grey\")\n plt.xlabel('x')\n plt.ylabel('y')\n plt.grid()\n plt.axhline()\n plt.axvline()\n\n # plot the line of symmetry\n sx = [vx, vx]\n sy = [miny, maxy]\n plt.plot(sx,sy, color='magenta')\n\n # Annotate the vertex\n plt.scatter(vx,vy, color=\"red\")\n plt.annotate('vertex',(vx, vy), xytext=(vx - 1, (vy + 5)* np.sign(a)))\n\n plt.show()\n\n\nplot_parabola(2, 2, -4) \n\nplot_parabola(-2, 3, 5) \n```\n\n## Parabola Intercepts\nRecall that linear equations create lines that intersect the **x** and **y** axis of a graph, and we call the points where these intersections occur *intercepts*. Now look at the graphs of the parabolas we've worked with so far. Note that these parabolas both have a y-intercept; a point where the line intersects the y axis of the graph (in other words, when x is 0). However, note that the parabolas have *two* x-intercepts; in other words there are two points at which the line crosses the x axis (and y is 0). Additionally, imagine a downward opening parabola with its vertex at -1, -1. This is perfectly possible, and the line would never have an x value greater than -1, so it would have *no* x-intercepts.\n\nRegardless of whether the parabola crosses the x axis or not, other than the vertex, for every ***y*** point in the parabola, there are *two* ***x*** points; one on the right (or positive) side of the axis of symmetry, and one of the left (or negative) side. The implications of this are what make quadratic equations so interesting. When we solve the equation for ***x***, there are *two* correct answers.\n\nLet's take a look at an example to demonstrate this. Let's return to the first of our quadratic equations, and we'll look at it in its *factored* form:\n\n\\begin{equation}y = 2(x - 1)(x + 2)\\end{equation}\n\nNow, let's solve this equation for a ***y*** value of 0. We can restate the equation like this:\n\n\\begin{equation}2(x - 1)(x + 2) = 0\\end{equation}\n\nThe equation is the product of two expressions **2(x - 1)** and **(x + 2)**. 
In this case, we know that the product of these expressions is 0, so logically *one or both of the expressions must return 0*.\n\nLet's try the first one:\n\n\\begin{equation}2(x - 1) = 0\\end{equation}\n\nIf we distribute this, we get:\n\n\\begin{equation}2x - 2 = 0\\end{equation}\n\nThis simplifies to:\n\n\\begin{equation}2x = 2\\end{equation}\n\nWhich gives us a value for *x* of **1**.\n\nNow let's try the other expression:\n\n\\begin{equation}x + 2 = 0\\end{equation}\n\nThis gives us a value for *x* of **-2**.\n\nSo, when *y* is **0**, *x* is **-2** or **1**. Let's plot these points on our parabola:\n\n\n```python\nimport pandas as pd\n\n# Assign the calculated x values\nx1 = -2\nx2 = 1\n\n# Create a dataframe with an x column containing some values to plot\ndf = pd.DataFrame ({'x': range(x1-5, x2+6)})\n\n# Add a y column by applying the quadratic equation to x\ndf['y'] = 2*(df['x'] - 1) * (df['x'] + 2)\n\n# Get x at the line of symmetry (halfway between x1 and x2)\nvx = (x1 + x2) / 2\n\n# Get y when x is at the line of symmetry\nvy = 2*(vx -1)*(vx + 2)\n\n# get min and max y values\nminy = df.y.min()\nmaxy = df.y.max()\n\n# Plot the line\n%matplotlib inline\nfrom matplotlib import pyplot as plt\n\nplt.plot(df.x, df.y, color=\"grey\")\nplt.xlabel('x')\nplt.ylabel('y')\nplt.grid()\nplt.axhline()\nplt.axvline()\n\n# Plot calculated x values for y = 0\nplt.scatter([x1,x2],[0,0], color=\"green\")\nplt.annotate('x1',(x1, 0))\nplt.annotate('x2',(x2, 0))\n\n# plot the line of symmetry\nsx = [vx, vx]\nsy = [miny, maxy]\nplt.plot(sx,sy, color='magenta')\n\n# Annotate the vertex\nplt.scatter(vx,vy, color=\"red\")\nplt.annotate('vertex',(vx, vy), xytext=(vx - 1, (vy - 5)))\n\nplt.show()\n```\n\nSo from the plot, we can see that both of the values we calculated for ***x*** align with the parabola when ***y*** is 0. Additionally, because the parabola is symmetrical, we know that every pair of ***x*** values for each ***y*** value will be equidistant from the line of symmetry, so we can calculate the ***x*** value for the line of symmetry as the average of the ***x*** values for any value of ***y***. This in turn means that we know the ***x*** coordinate for the vertex (it's on the line of symmetry), and we can use the quadratic equation to calculate ***y*** for this point.\n\n## Solving Quadratics Using the Square Root Method\nThe technique we just looked at makes it easy to calculate the two possible values for ***x*** when ***y*** is 0 if the equation is presented as the product of two expressions. If the equation is in standard form, and it can be factored, you could do the necessary manipulation to restate it as the product of two expressions. Otherwise, you can calculate the possible values for x by applying a different method that takes advantage of the relationship between squared values and the square root.\n\nLet's consider this equation:\n\n\\begin{equation}y = 3x^{2} - 12\\end{equation}\n\nNote that this is in the standard quadratic form, but there is no *b* term; in other words, there's no term that contains a coefficient for ***x*** to the first power. This type of equation can be easily solved using the square root method. 
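Before working through the algebra by hand, here's a quick cross-check. The short sketch below uses the SymPy library (an extra dependency that isn't used anywhere else in this notebook) to solve the same equation symbolically, so we can confirm the two values we're about to derive manually:\n\n\n```python\n# Cross-check of the square root method: solve 3x^2 - 12 = 0 symbolically\nfrom sympy import symbols, Eq, solve\n\nx = symbols('x')\nroots = solve(Eq(3*x**2 - 12, 0), x)\nprint(roots)  # expected output: [-2, 2]\n```\n\nIf the algebra below is right, the two values we calculate by hand will match this list. 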
Let's restate it so we're solving for ***x*** when ***y*** is 0:\n\n\\begin{equation}3x^{2} - 12 = 0\\end{equation}\n\nThe first thing we need to do is to isolate the $x^{2}$ term, so we'll remove the constant on the left by adding 12 to both sides:\n\n\\begin{equation}3x^{2} = 12\\end{equation}\n\nThen we'll divide both sides by 3 to isolate $x^{2}$:\n\n\\begin{equation}x^{2} = 4\\end{equation}\n\nNow we can isolate ***x*** by taking the square root of both sides. However, there's an additional consideration because this is a quadratic equation. The ***x*** variable can have two possible values, so we must calculate the *principal* and *negative* square roots of the expression on the right:\n\n\\begin{equation}x = \\pm\\sqrt{4}\\end{equation}\n\nThe principal square root of 4 is 2 (because $2^{2}$ is 4), and the corresponding negative root is -2 (because $(-2)^{2}$ is also 4); so *x* is **2** or **-2**.\n\nLet's see this in Python, and use the results to calculate and plot the parabola with its line of symmetry and vertex:\n\n\n```python\nimport pandas as pd\nimport math\n\ny = 0\nx1 = int(- math.sqrt((y + 12) / 3))\nx2 = int(math.sqrt((y + 12) / 3))\n\n# Create a dataframe with an x column containing some values to plot\ndf = pd.DataFrame ({'x': range(x1-10, x2+11)})\n\n# Add a y column by applying the quadratic equation to x\ndf['y'] = 3*df['x']**2 - 12\n\n# Get x at the line of symmetry (halfway between x1 and x2)\nvx = (x1 + x2) / 2\n\n# Get y when x is at the line of symmetry\nvy = 3*vx**2 - 12\n\n# get min and max y values\nminy = df.y.min()\nmaxy = df.y.max()\n\n# Plot the line\n%matplotlib inline\nfrom matplotlib import pyplot as plt\n\nplt.plot(df.x, df.y, color=\"grey\")\nplt.xlabel('x')\nplt.ylabel('y')\nplt.grid()\nplt.axhline()\nplt.axvline()\n\n# Plot calculated x values for y = 0\nplt.scatter([x1,x2],[0,0], color=\"green\")\nplt.annotate('x1',(x1, 0))\nplt.annotate('x2',(x2, 0))\n\n# plot the line of symmetry\nsx = [vx, vx]\nsy = [miny, maxy]\nplt.plot(sx,sy, color='magenta')\n\n# Annotate the vertex\nplt.scatter(vx,vy, color=\"red\")\nplt.annotate('vertex',(vx, vy), xytext=(vx - 1, (vy - 20)))\n\nplt.show()\n```\n\n## Solving Quadratics Using the Completing the Square Method\nIn quadratic equations where there is a *b* term; that is, a term containing **x** to the first power, it is impossible to directly calculate the square root. However, with some algebraic manipulation, you can take advantage of the ability to factor a polynomial expression in the form $a^{2} + 2ab + b^{2}$ as a binomial *perfect square* expression in the form $(a + b)^{2}$.\n\nAt first this might seem like some sort of mathematical sleight of hand, but follow through the steps carefully and you'll see that there's nothing up my sleeve!\n\nThe underlying basis of this approach is that a trinomial expression like this:\n\n\\begin{equation}x^{2} + 24x + 12^{2}\\end{equation}\n\nCan be factored to this:\n\n\\begin{equation}(x + 12)^{2}\\end{equation}\n\nOK, so how does this help us solve a quadratic equation? Well, let's look at an example:\n\n\\begin{equation}y = x^{2} + 6x - 7\\end{equation}\n\nLet's start as we've always done so far by restating the equation to solve ***x*** for a ***y*** value of 0:\n\n\\begin{equation}x^{2} + 6x - 7 = 0\\end{equation}\n\nNow we can move the constant term to the right by adding 7 to both sides:\n\n\\begin{equation}x^{2} + 6x = 7\\end{equation}\n\nOK, now let's look at the expression on the left: $x^{2} + 6x$. 
We can't take the square root of this, but we can turn it into a trinomial that will factor into a perfect square by adding a squared constant. The question is, what should that constant be? Well, we know that we're looking for an expression like $x^{2} + 2cx + c^{2}$, so our constant **c** is half of the coefficient we currently have for ***x***. This is **6**, making our constant **3**, which when squared is **9**. So we can create a trinomial expression that will easily factor to a perfect square by adding 9; giving us the expression $x^{2} + 6x + 9$.\n\nHowever, we can't just add something to one side without also adding it to the other, so our equation becomes:\n\n\\begin{equation}x^{2} + 6x + 9 = 16\\end{equation}\n\nSo, how does that help? Well, we can now factor the trinomial expression as a perfect square binomial expression:\n\n\\begin{equation}(x + 3)^{2} = 16\\end{equation}\n\nAnd now, we can use the square root method to find x + 3:\n\n\\begin{equation}x + 3 =\\pm\\sqrt{16}\\end{equation}\n\nSo, x + 3 is **-4** or **4**. We isolate ***x*** by subtracting 3 from both sides, so ***x*** is **-7** or **1**:\n\n\\begin{equation}x = -7, 1\\end{equation}\n\nLet's see what the parabola for this equation looks like in Python:\n\n\n```python\nimport pandas as pd\nimport math\n\nx1 = int(- math.sqrt(16) - 3)\nx2 = int(math.sqrt(16) - 3)\n\n# Create a dataframe with an x column containing some values to plot\ndf = pd.DataFrame ({'x': range(x1-10, x2+11)})\n\n# Add a y column by applying the quadratic equation to x\ndf['y'] = ((df['x'] + 3)**2) - 16\n\n# Get x at the line of symmetry (halfway between x1 and x2)\nvx = (x1 + x2) / 2\n\n# Get y when x is at the line of symmetry\nvy = ((vx + 3)**2) - 16\n\n# get min and max y values\nminy = df.y.min()\nmaxy = df.y.max()\n\n# Plot the line\n%matplotlib inline\nfrom matplotlib import pyplot as plt\n\nplt.plot(df.x, df.y, color=\"grey\")\nplt.xlabel('x')\nplt.ylabel('y')\nplt.grid()\nplt.axhline()\nplt.axvline()\n\n# Plot calculated x values for y = 0\nplt.scatter([x1,x2],[0,0], color=\"green\")\nplt.annotate('x1',(x1, 0))\nplt.annotate('x2',(x2, 0))\n\n# plot the line of symmetry\nsx = [vx, vx]\nsy = [miny, maxy]\nplt.plot(sx,sy, color='magenta')\n\n# Annotate the vertex\nplt.scatter(vx,vy, color=\"red\")\nplt.annotate('vertex',(vx, vy), xytext=(vx - 1, (vy - 10)))\n\nplt.show()\n```\n\n## Vertex Form\nLet's look at another example of a quadratic equation in standard form:\n\n\\begin{equation}y = 2x^{2} - 16x + 2\\end{equation}\n\nWe can start to solve this by subtracting 2 from both sides to move the constant term from the right to the left:\n\n\\begin{equation}y - 2 = 2x^{2} - 16x\\end{equation}\n\nNow we can factor out the coefficient for $x^{2}$, which is **2**. $2x^{2}$ is 2 • $x^{2}$, and -16x is 2 • -8x:\n\n\\begin{equation}y - 2 = 2(x^{2} - 8x)\\end{equation}\n\nNow we're ready to complete the square, so we add the square of half of the -8x coefficient on the right side to the parenthesis. Half of -8 is -4, and $(-4)^{2}$ is 16, so the right side of the equation becomes $2(x^{2} - 8x + 16)$. 
Of course, we can't add something to one side of the equation without also adding it to the other side, and we've just added 2 • 16 (which is 32) to the right, so we must also add that to the left.\n\n\\begin{equation}y - 2 + 32 = 2(x^{2} - 8x + 16)\\end{equation}\n\nNow we can simplify the left and factor out a perfect square binomial expression on the right:\n\n\\begin{equation}y + 30 = 2(x - 4)^{2}\\end{equation}\n\nWe now have a squared term for ***x***, so we could use the square root method to solve the equation. However, we can also isolate ***y*** by subtracting 30 from both sides. So we end up restating the original equation as:\n\n\\begin{equation}y = 2(x - 4)^{2} - 30\\end{equation}\n\nLet's just quickly check our math with Python:\n\n\n```python\nfrom random import randint\nx = randint(1,100)\n\n2*x**2 - 16*x + 2 == 2*(x - 4)**2 - 30\n```\n\n\n\n\n    True\n\n\n\nSo we've managed to take the expression $2x^{2} - 16x + 2$ and change it to $2(x - 4)^{2} - 30$. How does that help?\n\nWell, when a quadratic equation is stated this way, it's in *vertex form*, which is generically described as:\n\n\\begin{equation}y = a(x - h)^{2} + k\\end{equation}\n\nThe neat thing about this form of the equation is that it tells us the coordinates of the vertex - it's at ***h,k***.\n\nSo in this case, we know that the vertex of our equation is 4, -30. Moreover, we know that the line of symmetry is at ***x = 4***.\n\nWe can then just use the equation to calculate two more points, and the three points will be enough for us to determine the shape of the parabola. We can simply choose any ***x*** value we like and substitute it into the equation to calculate the corresponding ***y*** value. For example, let's calculate ***y*** when x is **0**:\n\n\\begin{equation}y = 2(0 - 4)^{2} - 30\\end{equation}\n\nWhen we work through the equation, it gives us the answer **2**, so we know that the point (0, 2) is in our parabola.\n\nSo, we know that the line of symmetry is at ***x = h*** (which is 4), and we now know that the ***y*** value when ***x*** is 0 (***h*** - ***h***) is 2. 
The ***y*** value at the same distance from the line of symmetry in the negative direction will be the same as the value in the positive direction, so when ***x*** is ***h*** + ***h***, the ***y*** value will also be 2.\n\nThe following Python code encapulates all of this in a function that draws and annotates a parabola using only the ***a***, ***h***, and ***k*** values from a quadratic equation in vertex form:\n\n\n```python\ndef plot_parabola_from_vertex_form(a, h, k):\n import pandas as pd\n import math\n\n # Create a dataframe with an x column a range of x values to plot\n df = pd.DataFrame ({'x': range(h-10, h+11)})\n\n # Add a y column by applying the quadratic equation to x\n df['y'] = (a*(df['x'] - h)**2) + k\n\n # get min and max y values\n miny = df.y.min()\n maxy = df.y.max()\n\n # calculate y when x is 0 (h+-h)\n y = a*(0 - h)**2 + k\n\n # Plot the line\n %matplotlib inline\n from matplotlib import pyplot as plt\n\n plt.plot(df.x, df.y, color=\"grey\")\n plt.xlabel('x')\n plt.ylabel('y')\n plt.grid()\n plt.axhline()\n plt.axvline()\n\n # Plot calculated y values for x = 0 (h-h and h+h)\n plt.scatter([h-h, h+h],[y,y], color=\"green\")\n plt.annotate(str(h-h) + ',' + str(y),(h-h, y))\n plt.annotate(str(h+h) + ',' + str(y),(h+h, y))\n\n # plot the line of symmetry (x = h)\n sx = [h, h]\n sy = [miny, maxy]\n plt.plot(sx,sy, color='magenta')\n\n # Annotate the vertex (h,k)\n plt.scatter(h,k, color=\"red\")\n plt.annotate('v=' + str(h) + ',' + str(k),(h, k), xytext=(h - 1, (k - 10)))\n\n plt.show()\n\n \n# Call the function for the example discussed above\nplot_parabola_from_vertex_form(2, 4, -30)\n```\n\nIt's important to note that the vertex form specifically requires a *subtraction* operation in the factored perfect square term. For example, consider the following equation in the standard form:\n\n\\begin{equation}y = 3x^{2} + 6x + 2\\end{equation}\n\nThe steps to solve this are:\n1. Move the constant to the left side:\n\\begin{equation}y - 2 = 3x^{2} + 6x\\end{equation}\n2. Factor the ***x*** expressions on the right:\n\\begin{equation}y - 2 = 3(x^{2} + 2x)\\end{equation}\n3. Add the square of half the x coefficient to the right, and the corresponding multiple on the left:\n\\begin{equation}y - 2 + 3 = 3(x^{2} + 2x + 1)\\end{equation}\n4. Factor out a perfect square binomial:\n\\begin{equation}y + 1 = 3(x + 1)^{2}\\end{equation}\n5. Move the constant back to the right side:\n\\begin{equation}y = 3(x + 1)^{2} - 1\\end{equation}\n\nTo express this in vertex form, we need to convert the addition in the parenthesis to a subtraction:\n\n\\begin{equation}y = 3(x - -1)^{2} - 1\\end{equation}\n\nNow, we can use the a, h, and k values to define a parabola:\n\n\n```python\nplot_parabola_from_vertex_form(3, -1, -1)\n```\n\n## Shortcuts for Solving Quadratic Equations\nWe've spent some time in this notebook discussing how to solve quadratic equations to determine the vertex of a parabola and the ***x*** values in relation to ***y***. 
It's important to understand the techniques we've used, which include:\n- Factoring\n- Calculating the Square Root\n- Completing the Square\n- Using the vertex form of the equation\n\nThe underlying algebra for all of these techniques is the same, and this consistent algebra results in some shortcuts that you can memorize to make it easier to solve quadratic equations without going through all of the steps:\n\n### Calculating the Vertex from Standard Form\nYou've already seen that converting a quadratic equation to the vertex form makes it easy to identify the vertex coordinates, as they're encoded as ***h*** and ***k*** in the equation itself - like this:\n\n\\begin{equation}y = a(x - \\textbf{h})^{2} + \\textbf{k}\\end{equation}\n\nHowever, what if you have an equation in standard form?\n\n\\begin{equation}y = ax^{2} + bx + c\\end{equation}\n\nThere's a quick and easy technique you can apply to get the vertex coordinates. \n\n1. To find ***h*** (which is the x-coordinate of the vertex), apply the following formula:\n\\begin{equation}h = \\frac{-b}{2a}\\end{equation}\n2. After you've found ***h***, use it in the quadratic equation to solve for ***k***:\n\\begin{equation}\\textbf{k} = a\\textbf{h}^{2} + b\\textbf{h} + c\\end{equation}\n\nFor example, here's the quadratic equation in standard form that we previously converted to the vertex form:\n\n\\begin{equation}y = 2x^{2} - 16x + 2\\end{equation}\n\nTo find ***h***, we perform the following calculation:\n\n\\begin{equation}h = \\frac{-b}{2a}\\;\\;\\;\\;=\\;\\;\\;\\;\\frac{-(-16)}{2\\cdot2}\\;\\;\\;\\;=\\;\\;\\;\\;\\frac{16}{4}\\;\\;\\;\\;=\\;\\;\\;\\;4\\end{equation}\n\nThen we simply plug the value we've obtained for ***h*** into the quadratic equation in order to find ***k***:\n\n\\begin{equation}k = 2\\cdot(4^{2}) - 16\\cdot4 + 2\\;\\;\\;\\;=\\;\\;\\;\\;32 - 64 + 2\\;\\;\\;\\;=\\;\\;\\;\\;-30\\end{equation}\n\nNote that a vertex at 4,-30 is also what we previously calculated for the vertex form of the same equation:\n\n\\begin{equation}y = 2(x - 4)^{2} - 30\\end{equation}\n\n### The Quadratic Formula\nAnother useful formula to remember is the *quadratic formula*, which makes it easy to calculate values for ***x*** when ***y*** is **0**; or in other words:\n\n\\begin{equation}ax^{2} + bx + c = 0\\end{equation}\n\nHere's the formula:\n\n\\begin{equation}x = \\frac{-b \\pm \\sqrt{b^{2} - 4ac}}{2a}\\end{equation}\n\nLet's apply that formula to our equation, which you may remember looks like this:\n\n\\begin{equation}y = 2x^{2} - 16x + 2\\end{equation}\n\nOK, let's plug the ***a***, ***b***, and ***c*** variables from our equation into the quadratic formula:\n\n\\begin{equation}x = \\frac{-(-16) \\pm \\sqrt{(-16)^{2} - 4\\cdot2\\cdot2}}{2\\cdot2}\\end{equation}\n\nThis simplifies to:\n\n\\begin{equation}x = \\frac{16 \\pm \\sqrt{256 - 16}}{4}\\end{equation}\n\nThis in turn (with the help of a calculator) simplifies to:\n\n\\begin{equation}x = \\frac{16 \\pm 15.491933384829668}{4}\\end{equation}\n\nSo our positive value for ***x*** is:\n\n\\begin{equation}x = \\frac{16 + 15.491933384829668}{4}\\;\\;\\;\\;=7.872983346207417\\end{equation}\n\nAnd the negative value for ***x*** is:\n\n\\begin{equation}x = \\frac{16 - 15.491933384829668}{4}\\;\\;\\;\\;=0.12701665379258298\\end{equation}\n\n\n\nThe following Python code uses the vertex formula and the quadratic formula to calculate the vertex and the -x and +x for y = 0, and then plots the resulting parabola:\n\n\n```python\ndef plot_parabola_from_formula (a, b, c):\n    import math\n\n    # Get 
vertex\n print('CALCULATING THE VERTEX')\n print('vx = -b / 2a')\n\n nb = -b\n a2 = 2*a\n print('vx = ' + str(nb) + ' / ' + str(a2))\n\n vx = -b/(2*a)\n print('vx = ' + str(vx))\n\n print('\\nvy = ax^2 + bx + c')\n print('vy =' + str(a) + '(' + str(vx) + '^2) + ' + str(b) + '(' + str(vx) + ') + ' + str(c))\n\n avx2 = a*vx**2\n bvx = b*vx\n print('vy =' + str(avx2) + ' + ' + str(bvx) + ' + ' + str(c))\n\n vy = avx2 + bvx + c\n print('vy = ' + str(vy))\n\n print ('\\nv = ' + str(vx) + ',' + str(vy))\n\n # Get +x and -x (showing intermediate calculations)\n print('\\nCALCULATING -x AND +x FOR y=0')\n print('x = -b +- sqrt(b^2 - 4ac) / 2a')\n\n\n b2 = b**2\n ac4 = 4*a*c\n print('x = ' + str(nb) + '+-sqrt(' + str(b2) + ' - ' + str(ac4) + ')/' + str(a2))\n\n sr = math.sqrt(b2 - ac4)\n print('x = ' + str(nb) + ' +- ' + str(sr) + ' / ' + str(a2))\n print('-x = ' + str(nb) + ' - ' + str(sr) + ' / ' + str(a2))\n print('+x = ' + str(nb) + ' + ' + str(sr) + ' / ' + str(a2))\n\n posx = (nb + sr) / a2\n negx = (nb - sr) / a2\n print('-x = ' + str(negx))\n print('+x = ' + str(posx))\n\n\n print('\\nPLOTTING THE PARABOLA')\n import pandas as pd\n\n # Create a dataframe with an x column a range of x values to plot\n df = pd.DataFrame ({'x': range(round(vx)-10, round(vx)+11)})\n\n # Add a y column by applying the quadratic equation to x\n df['y'] = a*df['x']**2 + b*df['x'] + c\n\n # get min and max y values\n miny = df.y.min()\n maxy = df.y.max()\n\n # Plot the line\n %matplotlib inline\n from matplotlib import pyplot as plt\n\n plt.plot(df.x, df.y, color=\"grey\")\n plt.xlabel('x')\n plt.ylabel('y')\n plt.grid()\n plt.axhline()\n plt.axvline()\n\n # Plot calculated x values for y = 0\n plt.scatter([negx, posx],[0,0], color=\"green\")\n plt.annotate('-x=' + str(negx) + ',' + str(0),(negx, 0), xytext=(negx - 3, 5))\n plt.annotate('+x=' + str(posx) + ',' + str(0),(posx, 0), xytext=(posx - 3, -10))\n\n # plot the line of symmetry\n sx = [vx, vx]\n sy = [miny, maxy]\n plt.plot(sx,sy, color='magenta')\n\n # Annotate the vertex\n plt.scatter(vx,vy, color=\"red\")\n plt.annotate('v=' + str(vx) + ',' + str(vy),(vx, vy), xytext=(vx - 1, vy - 10))\n\n plt.show()\n \n\nplot_parabola_from_formula (2, -16, 2)\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "a2088abd699a6e36ce000af91eab15cf6c1113cd", "size": 128252, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "MathsToML/Module01-Equations, Graphs, and Functions/01-07-Quadratic Equations.ipynb", "max_stars_repo_name": "hpaucar/data-mining-repo", "max_stars_repo_head_hexsha": "d0e48520bc6c01d7cb72e882154cde08020e1d33", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "MathsToML/Module01-Equations, Graphs, and Functions/01-07-Quadratic Equations.ipynb", "max_issues_repo_name": "hpaucar/data-mining-repo", "max_issues_repo_head_hexsha": "d0e48520bc6c01d7cb72e882154cde08020e1d33", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "MathsToML/Module01-Equations, Graphs, and Functions/01-07-Quadratic Equations.ipynb", "max_forks_repo_name": "hpaucar/data-mining-repo", "max_forks_repo_head_hexsha": "d0e48520bc6c01d7cb72e882154cde08020e1d33", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, 
"max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 461.3381294964, "max_line_length": 24546, "alphanum_fraction": 0.8784814272, "converted": true, "num_tokens": 8095, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.909907001151883, "lm_q2_score": 0.9184802390045688, "lm_q1q2_score": 0.835731599889912}} {"text": "## Learning Objectives\n\nBy the end of this session you should be able to...\n\n1. Take the derivative of a function over one variable\n1. Take the partial derivative of a function over all of its variables \n1. Find the minimum of the function to obtain the best line that represents relationships between two variables in a dataset\n\n## Why are derivatives important?\n\nDerivatives are the foundation for Linear Regression (a topic we'll cover later in the course) that allows us to obtain the best line that represents relationships between two variables in a dataset.\n\n## Introduction to Derivatives\n\nThe process of fidning a derivative is called **Differentiation**, which is a technique used to calculate the slope of a graph at different points.\n\n### Activity - Derivative Tutorial:\n\n1. Go through this [Derivative tutorial from Math Is Fun](https://www.mathsisfun.com/calculus/derivatives-introduction.html) (15 min)\n1. When you're done, talk with a partner about topics you still have questions on. See if you can answer each other's questions. (5 min)\n1. We'll then go over questions on the tutorial as a class (10 min)\n\n### Review Diagram\n\nReview the below diagram as a class, and compare with what you just learned in the above Derivative Tutorial. Note that a Gradient Function is just another name for the Derivative of a function:\n\n\n\n\n## Derivative Formula\n\n- Choose small $\\Delta x$\n\n- $f^\\prime(x) = \\frac{d}{dx}f(x) = \\frac{\\Delta y}{\\Delta x} = \\frac{f(x + \\Delta x) - f(x)}{\\Delta x}$\n\nRemember that $\\Delta x$ approaches 0. So if plugging in a value in the above formula, choose a _very_ small number, or simplify the equation further such that all $\\Delta x = 0$, like we saw in the tutorial\n\n## Activity: Write a Python function that calculates the gradient of $x^2$ at $x = 3$ and $x = -2$ using the above definition\n\n\n```python\ndef f(x):\n return x**2\n\n\neps = 1e-6\nx = 3\nprint((f(x + eps) - f(x)) / eps)\nx = -2\nprint((f(x + eps) - f(x)) / eps)\n```\n\n 6.000001000927568\n -3.999998999582033\n\n\nNote that these values match $2x$, our derivative of $x^2$:\n\n$2*3 = 6$\n\n$2 * -2 = -4$\n\n## Derivative Table\n\nAs a shortcut, use the second page of this PDF to find the derivative for common formulas. Utilize this as a resource going forward!\n\n- https://www.qc.edu.hk/math/Resource/AL/Derivative%20Table.pdf\n\n## Extend Gradient into Two-Dimensional Space\n\nNow we know how to calculate a derivative of one variable. But what if we have two?\n\nTo do this, we need to utilize **Partial Derivatives**. 
Calculating a partial derivative is essentially calculating two derivatives for a function: one for each variable, where they other variable is set to a constant.\n\n### Activity - Partial Derivative Video\n\nLets watch this video about Partial Derivative Intro from Khan Academy: https://youtu.be/AXqhWeUEtQU\n\n**Note:** Here are some derivative shortcuts that will help in the video:\n\n$\\frac{d}{dx}x^2 = 2x$\n\n$\\frac{d}{x}sin(x) = cos(x)$\n\n$\\frac{d}{dx}x = 1$\n\n### Activity - Now You Try!\nConsider the function $f(x, y) = \\frac{x^2}{y}$\n\n- Calculate the first order partial derivatives ($\\frac{\\partial f}{\\partial x}$ and $\\frac{\\partial f}{\\partial y}$) and evaluate them at the point $P(2, 1)$.\n\n## We can use the Symbolic Python package (library) to compute the derivatives and partial derivatives\n\n\n```python\nfrom sympy import symbols, diff\n# initialize x and y to be symbols to use in a function\nx, y = symbols('x y', real=True)\nf = (x**2)/y\n# Find the partial derivatives of x and y\nfx = diff(f, x, evaluate=True) # partial derivative of f(x,y) with respect to x\nfy = diff(f, y, evaluate=True) # partial derivative of f(x,y) with respect to y\nprint(fx)\nprint(fy)\n# print(f.evalf(subs={x: 2, y: 1}))\nprint(fx.evalf(subs={x: 2, y: 1}))\nprint(fy.evalf(subs={x: 2, y: 1}))\n```\n\n 2*x/y\n -x**2/y**2\n 4.00000000000000\n -4.00000000000000\n\n\n## Optional Reading: Tensorflow is a powerful package from Google that calculates the derivatives and partial derivatives numerically \n\n\n```python\nimport tensorflow as tf \n\nx = tf.Variable(2.0)\ny = tf.Variable(1.0)\n\nwith tf.GradientTape(persistent=True) as t:\n z = tf.divide(tf.multiply(x, x), y)\n\n# Use the tape to compute the derivative of z with respect to the\n# intermediate value x and y.\ndz_dx = t.gradient(z, x)\ndz_dy = t.gradient(z, y)\n\n\nprint(dz_dx)\nprint(dz_dy)\n\n# All at once:\ngradients = t.gradient(z, [x, y])\nprint(gradients)\n\n\ndel t\n```\n\n## Optional Reading: When x and y are declared as constant, we should add `t.watch(x)` and `t.watch(y)`\n\n\n```python\nimport tensorflow as tf \n\nx = tf.constant(2.0)\ny = tf.constant(1.0)\n\nwith tf.GradientTape(persistent=True) as t:\n t.watch(x)\n t.watch(y)\n z = tf.divide(tf.multiply(x, x), y)\n\n# Use the tape to compute the derivative of z with respect to the\n# intermediate value y.\ndz_dx = t.gradient(z, x)\ndz_dy = t.gradient(z, y)\n```\n\n# Calculate Partial Derivative from Definition\n\n\n```python\ndef f(x, y):\n return x**2/y\n\n\neps = 1e-6\nx = 2\ny = 1\nprint((f(x + eps, y) - f(x, y)) / eps)\nprint((f(x, y + eps) - f(x, y)) / eps)\n```\n\n 4.0000010006480125\n -3.9999959997594203\n\n\nLooks about right! This works rather well, but it is just an approximation. Also, you need to call `f()` at least once per parameter (not twice, since we could compute `f(x, y)` just once). This makes this approach difficult to control for large systems (for example neural networks).\n\n## Why Do we need Partial Gradients?\n\nIn many applications, more specifically DS applications, we want to find the Minimum of a cost function\n\n- **Cost Function:** a function used in machine learning to help correct / change behaviour to minimize mistakes. Or in other words, a measure of how wrong the model is in terms of its ability to estimate the relationship between x and y. [Source](https://towardsdatascience.com/machine-learning-fundamentals-via-linear-regression-41a5d11f5220)\n\n\nWhy do we want to find the minimum for a cost function? 
Given that a cost function mearues how wrong a model is, we want to _minimize_ that error!\n\nIn Machine Learning, we frequently use models to run our data through, and cost functions help us figure out how badly our models are performing. We want to find parameters (also known as **weights**) to minimize our cost function, therefore minimizing error!\n\nWe find find these optimal weights by using a **Gradient Descent**, which is an algorithm that tries to find the minimum of a function (exactly what we needed!). The gradient descent tells the model which direction it should take in order to minimize errors, and it does this by selecting more and more optimal weights until we've minimized the function! We'll learn more about models when we talk about Linear Regression in a future lesson, but for now, let's review the Gradient Descent process with the below images, given weights $w_0$ and $w_1$:\n\n\n\nLook at that bottom right image. Looks like we're using partial derivatives to find out optimal weights. And we know exactly how to do that!\n\n## Finding minimum of a function\n\nAssume we want to minimize the function $J$ which has two weights $w_0$ and $w_1$\n\nWe have two options to find the minimum of $J(w_0, w_1)$:\n\n1. Take partial derivatives of $J(w_0, w_1)$ with relation to $w_0$ and $w_1$:\n\n$\\frac{\\partial J(w_0, w_1)}{\\partial w_0}$\n\n$\\frac{\\partial J(w_0, w_1)}{\\partial w_1}$\n\nAnd find the appropriate weights such that the partial derivatives equal 0:\n\n$\\frac{\\partial J(w_0, w_1)}{\\partial w_0} = 0$\n\n$\\frac{\\partial J(w_0, w_1)}{\\partial w_1} = 0$\n\nIn this approach we should solve system of linear or non-linear equation\n\n2. Use the Gradient Descent algorithm:\n\nFirst we need to define two things:\n\n- A step-size alpha ($\\alpha$) -- also called the *learning rate* -- as a small number (like $1.e-5$)\n- An arbitrary random initial value for $w_0$ and $w_1$: $w_0 = np.random.randn()$ and $w_1 = np.random.randn()$\n\nFinally, we need to search for the most optimal $w_0$ and $w_1$ by using a loop to update the weights until we find the most optimal weights. We'll need to establish a threshold to compare weights to know when to stop the loop. For example, if the weight update -- the change in the weight parameter -- from one iteration is within 0.0001 of the weight from the previous iteration, we can stop the loop (0.0001 is our threshold here)\n\nLet's review some pseudocode for how to implement this algorithm:\n\n```\n# initialization\ninitialize the following:\n a starting weight value -- an initial guess, could be random\n the learning rate (alpha), a small number (we'll choose 1.e-5)\n the threshold -- set this to 1.e-4\n the current weight update -- initialize to 1\n\n# weight update loop\nwhile the weight update is greater than the threshold:\n store the current values of the weights into a previous value variable \n set the weight values to new values based on the algorithm, by adding the weight updates\n```\n\nHow do we `set the weight values to new values based on the algorithm`? by using the below equations:\n \n$w_0 = w_0 - \\alpha \\frac{\\partial J(w_0, w_1)}{\\partial w_0}$\n \n$w_1 = w_1 - \\alpha \\frac{\\partial J(w_0, w_1)}{\\partial w_1}$\n\n\nFinish the \"starter code\" block below, creating real code from the pseudocode!\n\n\n**Stretch Challenge:** We may also want to limit the number of loops we do, in addition to checking the threshold. 
Determine how we may go about doing that\n\n\n## Resources\n\n- [Derivative tutorial from Math Is Fun](https://www.mathsisfun.com/calculus/derivatives-introduction.html) \n- [Derivative Table](https://www.qc.edu.hk/math/Resource/AL/Derivative%20Table.pdf)\n- [Khan Academy - Partial Derivatives video](https://www.youtube.com/watch?v=AXqhWeUEtQU&feature=youtu.be)\n- [Towards Data Science - Machine Learning Fundamentals: cost functions and gradient Descent](https://towardsdatascience.com/machine-learning-fundamentals-via-linear-regression-41a5d11f5220)\n\n## Gradient descent in one dimension\n\n\n```python\n# pseudo-code\ndef minimize(f):\n # Initialize\n \n # run the weight update loop until it terminates\n \n # return the current weights\n```\n\n\n```python\nimport matplotlib.pyplot as plt\nimport numpy as np\nnp.random.seed(42)\neps = 1.e-10\n\n# define an interesting function to minimize\nx = np.linspace(-10,10, 100)\nfx =lambda x: (1/50)*x**4 - 2*x**2 + x + 1 \ndf_dx = lambda x: (4/50)*x**3 - 4*x + 1\nplt.plot(x,fx(x),label = 'function f(x)')\nplt.plot(x,df_dx(x),'r--',label = 'derivative df/dx')\nplt.legend()\nplt.grid()\n```\n\n### gradient descent function\n\n\n```python\ndef minimize(fx,x_init):\n # Initialize\n alpha = 1.e-6\n thresh = 1.e-4\n weight_update = 1.\n x_values = []\n max_iter = 1000\n n_iter = 0\n x = x_init\n eps = 1.e-10\n \n # run the weight update loop until it terminates\n while np.abs(weight_update) > thresh: #and n_iter < max_iter:\n n_iter+=1\n df_dx = (fx(x+eps) - fx(x))/eps\n weight_update = -alpha*df_dx\n x = x + weight_update\n x_values.append(x) \n \n # return the final value of the weight -- which should correspond to the minimum of the function\n return x, x_init, x_values, n_iter\n```\n\n### Choose an initial value for x, then run gradient descent\n\n\n```python\nnp.random.seed(42)\nx_init = np.random.uniform(-10,10,1) # choose a random starting point\nprint(x_init)\nx_star, x_init, x_values, n_iter = minimize(f,x_init)\nprint(f'Started at {x_init}, found minimum {x_star} after {n_iter} iterations')\n```\n\n [-2.50919762]\n Started at [-2.50919762], found minimum [-2.5092074] after 1 iterations\n\n\n### Check that the derivative of the function is indeed zero at the minimum found by gradient descent\n\n\n```python\n# derivative, from calculus \nprint(f'derivative from calculus: {df_dx(x_star)}')\n\n# derivative, from definition\nprint(f'derivative from calculus: {(fx(x_star + eps) - fx(x_star) )/eps}')\n```\n\n derivative from calculus: [10.01321418]\n derivative from calculus: [10.01321692]\n\n\n\n```python\nx_values = np.array(x_values)\nplt.plot(x,fx(x),label = 'function f(x)')\nplt.plot(np.array(x_values),fx(x_values),'.',markersize = 1, label='gradient descent updates')\nplt.plot(x_init,fx(x_init),'r.',markersize = 10, label = '$x_{init}$')\nplt.plot(x_star,fx(x_star),'k*',markersize = 10, label = '$x_{star}$')\n\nplt.xlabel('x')\nplt.ylabel('f(x)')\nplt.grid()\nplt.legend(loc='best');\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "2a72778584005da7348bab9a7b0ae3d3d4f96596", "size": 68552, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "site/public/courses/QL-1.1/Notebooks/Calculus/.ipynb_checkpoints/partial_derivative_jcat-checkpoint.ipynb", "max_stars_repo_name": "KitsuneNoctus/makeschool", "max_stars_repo_head_hexsha": "5eec1a18146abf70bb78b4ee3d301f6a43c9ede4", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-08-24T20:22:19.000Z", 
"max_stars_repo_stars_event_max_datetime": "2021-08-24T20:22:19.000Z", "max_issues_repo_path": "site/public/courses/QL-1.1/Notebooks/Calculus/.ipynb_checkpoints/partial_derivative_jcat-checkpoint.ipynb", "max_issues_repo_name": "KitsuneNoctus/makeschool", "max_issues_repo_head_hexsha": "5eec1a18146abf70bb78b4ee3d301f6a43c9ede4", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "site/public/courses/QL-1.1/Notebooks/Calculus/.ipynb_checkpoints/partial_derivative_jcat-checkpoint.ipynb", "max_forks_repo_name": "KitsuneNoctus/makeschool", "max_forks_repo_head_hexsha": "5eec1a18146abf70bb78b4ee3d301f6a43c9ede4", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 115.9932318105, "max_line_length": 24777, "alphanum_fraction": 0.8492385343, "converted": true, "num_tokens": 3353, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9196425289753969, "lm_q2_score": 0.9086178926024028, "lm_q1q2_score": 0.8356036566251692}} {"text": "```python\nfrom sympy.integrals.quadrature import gauss_legendre\n```\n\n\n```python\ngauss_legendre(4, 17)\n```\n\n\n\n\n ([-0.86113631159405258,\n -0.33998104358485626,\n 0.33998104358485626,\n 0.86113631159405258],\n [0.34785484513745386,\n 0.65214515486254614,\n 0.65214515486254614,\n 0.34785484513745386])\n\n\n\nCompare against:\n\nhttps://github.com/certik/hfsolver/blob/0693c60abc8da820b9c626f2092bf9d767644a8b/src/quadrature.f90#L30\n\nPoints:\n\n```fortran\n case(4)\n xi(1)=-0.86113631159405258_dp\n xi(2)=-0.33998104358485626_dp\n xi(3)=0.33998104358485626_dp\n xi(4)=0.86113631159405258_dp\n```\n\nWeights:\n\n```fortran\n case(4)\n w(1)=0.34785484513745386_dp\n w(2)=0.65214515486254614_dp\n w(3)=0.65214515486254614_dp\n w(4)=0.34785484513745386_dp\n```\n\n\n```python\ngauss_legendre??\n```\n", "meta": {"hexsha": "a8f5b2bfa1f3f86bb7c83bca805f3dca6f2b7301", "size": 2166, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "applications/Applications -- Numerical Quadrature.ipynb", "max_stars_repo_name": "gvvynplaine/scipy-2016-tutorial", "max_stars_repo_head_hexsha": "aa417427a1de2dcab2a9640b631b809d525d7929", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 53, "max_stars_repo_stars_event_min_datetime": "2016-06-21T21:11:02.000Z", "max_stars_repo_stars_event_max_datetime": "2021-02-04T07:51:03.000Z", "max_issues_repo_path": "applications/Applications -- Numerical Quadrature.ipynb", "max_issues_repo_name": "gvvynplaine/scipy-2016-tutorial", "max_issues_repo_head_hexsha": "aa417427a1de2dcab2a9640b631b809d525d7929", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 11, "max_issues_repo_issues_event_min_datetime": "2016-07-02T20:24:06.000Z", "max_issues_repo_issues_event_max_datetime": "2016-07-11T11:31:44.000Z", "max_forks_repo_path": "applications/Applications -- Numerical Quadrature.ipynb", "max_forks_repo_name": "gvvynplaine/scipy-2016-tutorial", "max_forks_repo_head_hexsha": "aa417427a1de2dcab2a9640b631b809d525d7929", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 36, "max_forks_repo_forks_event_min_datetime": "2016-06-25T09:04:24.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-09T06:46:01.000Z", "avg_line_length": 20.4339622642, "max_line_length": 112, "alphanum_fraction": 0.5106186519, "converted": true, 
"num_tokens": 331, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9086178944582997, "lm_q2_score": 0.9196425240200057, "lm_q1q2_score": 0.8356036538293738}} {"text": "\n\n\n```python\nfrom sympy import * \n```\n\n\n```python\nx,y,d,z = symbols('x,y,d,z')\n```\n\necuaci\u00f3n\n\n\n```python\n d = (x + y) ** 2\n d\n```\n\n\n\n\n$\\displaystyle \\left(x + y\\right)^{2}$\n\n\n\nExpandimos la ecuaci\u00f3n\n\n\n```python\nd = expand((x+y)**2)\nd\n```\n\n\n\n\n$\\displaystyle x^{2} + 2 x y + y^{2}$\n\n\n\n\n```python\nd= expand((x+y)**3)\nd\n```\n\n\n\n\n$\\displaystyle x^{3} + 3 x^{2} y + 3 x y^{2} + y^{3}$\n\n\n\nEcuaci\u00f3n con dos variables\n\n\n```python\nd = 5 * x + 3 * y\nd\n```\n\n\n\n\n$\\displaystyle 5 x + 3 y$\n\n\n\nEcuaci\u00f3n con 3 variables \n\n\n```python\nd = 5 * x + 2 * y + 12 * z \nd\n```\n\n\n\n\n$\\displaystyle 5 x + 2 y + 12 z$\n\n\n", "meta": {"hexsha": "5aa52fcc3a22c8bcce117eca4c4376d2e1811fbd", "size": 5471, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "ecuacion_lineal_2.ipynb", "max_stars_repo_name": "andresvillamayor/Python-Algebra-Lineal", "max_stars_repo_head_hexsha": "40a60296ec9245572fafa9e57e20c764182d0b0b", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "ecuacion_lineal_2.ipynb", "max_issues_repo_name": "andresvillamayor/Python-Algebra-Lineal", "max_issues_repo_head_hexsha": "40a60296ec9245572fafa9e57e20c764182d0b0b", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ecuacion_lineal_2.ipynb", "max_forks_repo_name": "andresvillamayor/Python-Algebra-Lineal", "max_forks_repo_head_hexsha": "40a60296ec9245572fafa9e57e20c764182d0b0b", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 22.4221311475, "max_line_length": 252, "alphanum_fraction": 0.3907877902, "converted": true, "num_tokens": 283, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9597620596782468, "lm_q2_score": 0.8705972633721708, "lm_q1q2_score": 0.8355662226443198}} {"text": "# Pyomo - Stock problem v2\n\nPyomo installation: see http://www.pyomo.org/installation\n\n```\npip install pyomo\n```\n\n\n```python\n%matplotlib inline\n\nimport matplotlib.pyplot as plt\n```\n\n\n```python\nfrom pyomo.environ import *\n```\n\n* $s_t$ = the battery level at time $t$\n\n$$\n\\begin{align}\n \\min_{x_{inj}, x_{ext}, x_{grid}} & \\quad \\sum_t p_t x_{inj,t} + \\sum_t p_t x_{grid,t} \\quad\\quad\\quad\\quad \\text{(P1)} \\\\\n \\text{s.t.} & \\quad s_0 = 0 \\\\\n & \\quad s_t = s_{t-1} + x_{inj,t} - x_{ext,t} \\\\\n & \\quad x_{inj,t} \\leq s_{\\max} - s_{t-1} \\quad\\quad\\quad \\quad ~~ (L4) \\\\\n & \\quad x_{inj,t} \\geq 0 \\\\\n & \\quad x_{ext,t} \\leq s_{t-1} \\quad\\quad \\quad \\quad \\quad \\quad \\quad (L6) \\\\\n & \\quad x_{ext,t} \\geq 0 \\\\\n & \\quad x_{ext,t} + x_{grid,t} = \\text{needs}_{t} \\quad \\quad \\quad ~ (L8)\n\\end{align}\n$$\n\n- L4: can't inject more than the \"free space\" of the battery\n- L6: can't extract more than what have been stored in the battery\n- L8: fulfill the needs at time t\n\n\n```python\n# Cost of energy on the market\n\nprice = [10, 20, 10, 20]\nneeds = [ 0, 30, 20, 50]\n\nstock_max = 100 # battery capacity\n```\n\n\n```python\nT = list(range(len(price) + 1)) # num decision variables\n```\n\n\n```python\nplt.plot(T[:-1], price, \"o-\")\nplt.xlabel(\"Unit of time (t)\")\nplt.ylabel(\"Price of one unit of energy (p)\")\nplt.title(\"Price of energy on the market\")\nplt.show();\n```\n\n\n```python\nplt.plot(T[:-1], needs, \"o-\")\nplt.xlabel(\"Unit of time (t)\")\nplt.ylabel(\"Needs\")\nplt.title(\"Needs\")\nplt.show();\n```\n\n\n```python\nmodel = ConcreteModel(name=\"Stock problem v1\")\n\nmodel.x_inj = Var(T, within=NonNegativeReals)\nmodel.x_ext = Var(T, within=NonNegativeReals)\nmodel.x_grid = Var(T, within=NonNegativeReals)\nmodel.s = Var(T, within=NonNegativeReals)\n\n\ndef objective_fn(model):\n return sum(price[t-1] * model.x_inj[t] + price[t-1] * model.x_grid[t] for t in T if t != 0)\n\nmodel.obj = Objective(rule=objective_fn, sense=minimize)\n\n########\n\nmodel.constraint_s0 = Constraint(expr = model.s[0] == 0)\n\ndef constraint_stock_level(model, t):\n if t > 0:\n return model.s[t] == model.s[t-1] + model.x_inj[t] - model.x_ext[t]\n else:\n return Constraint.Skip\n\nmodel.constraint_st = Constraint(T, rule=constraint_stock_level)\n\n\ndef constraint_x_inj_sup(model, t):\n if t > 0:\n return model.x_inj[t] <= stock_max - model.s[t-1]\n else:\n return Constraint.Skip\n\nmodel.constraint_x_inj_sup = Constraint(T, rule=constraint_x_inj_sup)\n\n\ndef constraint_x_ext_sup(model, t):\n if t > 0:\n return model.x_ext[t] <= model.s[t-1]\n else:\n return Constraint.Skip\n\nmodel.constraint_x_ext_sup = Constraint(T, rule=constraint_x_ext_sup)\n\n\ndef constraint_needs(model, t):\n if t > 0:\n return model.x_ext[t] + model.x_grid[t] == needs[t-1]\n else:\n return Constraint.Skip\n\nmodel.constraint_needs = Constraint(T, rule=constraint_needs)\n\n########\n\nmodel.pprint()\n\n# @tail:\nprint()\nprint(\"-\" * 60)\nprint()\n\nopt = SolverFactory('glpk')\n\nresults = opt.solve(model) # solves and updates instance\n\nmodel.display()\n\nprint()\nprint(\"Optimal solution (x_inj): \", [value(model.x_inj[t]) for t in T if t != 0])\nprint(\"Optimal solution (x_ext): \", [value(model.x_ext[t]) for t in T if t != 0])\nprint(\"Optimal solution (x_grid): \", [value(model.x_grid[t]) for t in T if t != 0])\nprint(\"Gain 
of the optimal solution: \", value(model.obj))\n# @:tail\n```\n", "meta": {"hexsha": "ac8b9fafd1adcc254048a47e76160629e0201d48", "size": 6101, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "nb_dev_python/python_pyomo_stock_problem_2.ipynb", "max_stars_repo_name": "jdhp-docs/python-notebooks", "max_stars_repo_head_hexsha": "91a97ea5cf374337efa7409e4992ea3f26b99179", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2017-05-03T12:23:36.000Z", "max_stars_repo_stars_event_max_datetime": "2020-10-26T17:30:56.000Z", "max_issues_repo_path": "nb_dev_python/python_pyomo_stock_problem_2.ipynb", "max_issues_repo_name": "jdhp-docs/python-notebooks", "max_issues_repo_head_hexsha": "91a97ea5cf374337efa7409e4992ea3f26b99179", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "nb_dev_python/python_pyomo_stock_problem_2.ipynb", "max_forks_repo_name": "jdhp-docs/python-notebooks", "max_forks_repo_head_hexsha": "91a97ea5cf374337efa7409e4992ea3f26b99179", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-10-26T17:30:57.000Z", "max_forks_repo_forks_event_max_datetime": "2020-10-26T17:30:57.000Z", "avg_line_length": 27.3587443946, "max_line_length": 146, "alphanum_fraction": 0.4841829208, "converted": true, "num_tokens": 1073, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9504109770159682, "lm_q2_score": 0.8791467659263148, "lm_q1q2_score": 0.8355507367444576}} {"text": "# Systems of Equations\nImagine you are at a casino, and you have a mixture of \u00a310 and \u00a325 chips. You know that you have a total of 16 chips, and you also know that the total value of chips you have is \u00a3250. Is this enough information to determine how many of each denomination of chip you have?\n\nWell, we can express each of the facts that we have as an equation. 
The first equation deals with the total number of chips - we know that this is 16, and that it is the number of \u00a310 chips (which we'll call ***x*** ) added to the number of \u00a325 chips (***y***).\n\nThe second equation deals with the total value of the chips (\u00a3250), and we know that this is made up of ***x*** chips worth \u00a310 and ***y*** chips worth \u00a325.\n\nHere are the equations\n\n\\begin{equation}x + y = 16 \\end{equation}\n\\begin{equation}10x + 25y = 250 \\end{equation}\n\nTaken together, these equations form a *system of equations* that will enable us to determine how many of each chip denomination we have.\n\n## Graphing Lines to Find the Intersection Point\nOne approach is to determine all possible values for x and y in each equation and plot them.\n\nA collection of 16 chips could be made up of 16 \u00a310 chips and no \u00a325 chips, no \u00a310 chips and 16 \u00a325 chips, or any combination between these.\n\nSimilarly, a total of \u00a3250 could be made up of 25 \u00a310 chips and no \u00a325 chips, no \u00a310 chips and 10 \u00a325 chips, or a combination in between.\n\nLet's plot each of these ranges of values as lines on a graph:\n\n\n```python\n%matplotlib inline\nfrom matplotlib import pyplot as plt\n\n# Get the extremes for number of chips\nchipsAll10s = [16, 0]\nchipsAll25s = [0, 16]\n\n# Get the extremes for values\nvalueAll10s = [25,0]\nvalueAll25s = [0,10]\n\n# Plot the lines\nplt.plot(chipsAll10s,chipsAll25s, color='blue')\nplt.plot(valueAll10s, valueAll25s, color=\"orange\")\nplt.xlabel('x (\u00a310 chips)')\nplt.ylabel('y (\u00a325 chips)')\nplt.grid()\n\nplt.show()\n```\n\nLooking at the graph, you can see that there is only a single combination of \u00a310 and \u00a325 chips that is on both the line for all possible combinations of 16 chips and the line for all possible combinations of \u00a3250. The point where the line intersects is (10, 6); or put another way, there are ten \u00a310 chips and six \u00a325 chips.\n\n### Solving a System of Equations with Elimination\nYou can also solve a system of equations mathematically. Let's take a look at our two equations:\n\n\\begin{equation}x + y = 16 \\end{equation}\n\\begin{equation}10x + 25y = 250 \\end{equation}\n\nWe can combine these equations to eliminate one of the variable terms and solve the resulting equation to find the value of one of the variables. Let's start by combining the equations and eliminating the x term.\n\nWe can combine the equations by adding them together, but first, we need to manipulate one of the equations so that adding them will eliminate the x term. The first equation includes the term ***x***, and the second includes the term ***10x***, so if we multiply the first equation by -10, the two x terms will cancel each other out. So here are the equations with the first one multiplied by -10:\n\n\\begin{equation}-10(x + y) = -10(16) \\end{equation}\n\\begin{equation}10x + 25y = 250 \\end{equation}\n\nAfter we apply the multiplication to all of the terms in the first equation, the system of equations look like this:\n\n\\begin{equation}-10x + -10y = -160 \\end{equation}\n\\begin{equation}10x + 25y = 250 \\end{equation}\n\nNow we can combine the equations by adding them. 
The ***-10x*** and ***10x*** cancel one another, leaving us with a single equation like this:\n\n\\begin{equation}15y = 90 \\end{equation}\n\nWe can isolate ***y*** by dividing both sides by 15:\n\n\\begin{equation}y = \\frac{90}{15} \\end{equation}\n\nSo now we have a value for ***y***:\n\n\\begin{equation}y = 6 \\end{equation}\n\nSo how does that help us? Well, now we have a value for ***y*** that satisfies both equations. We can simply use it in either of the equations to determine the value of ***x***. Let's use the first one:\n\n\\begin{equation}x + 6 = 16 \\end{equation}\n\nWhen we work through this equation, we get a value for ***x***:\n\n\\begin{equation}x = 10 \\end{equation}\n\nSo now we've calculated values for ***x*** and ***y***, and we find, just as we did with the graphical intersection method, that there are ten \u00a310 chips and six \u00a325 chips.\n\nYou can run the following Python code to verify that the equations are both true with an ***x*** value of 10 and a ***y*** value of 6.\n\n\n```python\nx = 10\ny = 6\nprint ((x + y == 16) & ((10*x) + (25*y) == 250))\n```\n", "meta": {"hexsha": "0aeefaaa15d70b19ce38aff56ba4aab378768e7a", "size": 6077, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Basics Of Algebra by Hiren/01-03-Systems of Equations.ipynb", "max_stars_repo_name": "awesome-archive/Basic-Mathematics-for-Machine-Learning", "max_stars_repo_head_hexsha": "b6699a9c29ec070a0b1615c46952cb0deeb73b54", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 401, "max_stars_repo_stars_event_min_datetime": "2018-08-29T04:55:26.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-29T11:03:39.000Z", "max_issues_repo_path": "Basics Of Algebra by Hiren/01-03-Systems of Equations.ipynb", "max_issues_repo_name": "aligeekk/Basic-Mathematics-for-Machine-Learning", "max_issues_repo_head_hexsha": "8662076d60e89f58a6e81e4ca1377569472760a2", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 5, "max_issues_repo_issues_event_min_datetime": "2018-11-19T23:54:27.000Z", "max_issues_repo_issues_event_max_datetime": "2018-11-20T00:15:39.000Z", "max_forks_repo_path": "Basics Of Algebra by Hiren/01-03-Systems of Equations.ipynb", "max_forks_repo_name": "aligeekk/Basic-Mathematics-for-Machine-Learning", "max_forks_repo_head_hexsha": "8662076d60e89f58a6e81e4ca1377569472760a2", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 135, "max_forks_repo_forks_event_min_datetime": "2018-08-29T05:04:00.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-30T07:04:25.000Z", "avg_line_length": 43.0992907801, "max_line_length": 406, "alphanum_fraction": 0.6129669245, "converted": true, "num_tokens": 1219, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8791467643431001, "lm_q2_score": 0.9504109750846679, "lm_q1q2_score": 0.8355507335418565}} {"text": "# Introduction to probabilistic programming with PyMC3\n\n\n```python\nimport os,sys\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport pymc3 as pm\nimport scipy.stats as st\n\n%matplotlib inline\n%precision 4\nplt.style.use('bmh')\n```\n\n## Coin toss example\n\nWe are looking to determine if our coin is biased. We can investigate the fairness of a coin in several ways. One useful distribution is the Binomial. 
It is parameterized by the number of trials ($n$) and the success probability in each trial ($p$).\n\n\n```python\nfrom IPython.display import Image\nImage(filename='../binomial.png')\n```\n\n\n```python\n## lets start with some data\nn = 100\npcoin = 0.62 # actual value of p for coin\nresults = st.bernoulli(pcoin).rvs(n)\nh = sum(results)\nprint(\"We observed %s heads out of %s\"%(h,n))\n\n## the expected distribution\np = 0.5\nrv = st.binom(n,p)\nmu = rv.mean()\nsd = rv.std()\nprint(\"The expected distribution for a fair coin is mu=%s, sd=%s\"%(mu,sd))\n\n## p-value by simulation\nnsamples = 100000\nxs = np.random.binomial(n, p, nsamples)\nprint(\"Simulation p-value - %s\"%(2*np.sum(xs >= h)/(xs.size + 0.0)))\n\n## p-value by binomial test\nprint(\"Binomial test - %s\"%st.binom_test(h, n, p))\n\n## MLE\nprint(\"Maximum likelihood %s\"%(np.sum(results)/float(len(results))))\n\n## bootstrap\nbs_samples = np.random.choice(results, (nsamples, len(results)), replace=True)\nbs_ps = np.mean(bs_samples, axis=1)\nbs_ps.sort()\nprint(\"Bootstrap CI: (%.4f, %.4f)\" % (bs_ps[int(0.025*nsamples)], bs_ps[int(0.975*nsamples)]))\n```\n\n We observed 59 heads out of 100\n The expected distribution for a fair coin is mu=50.0, sd=5.0\n Simulation p-value - 0.08668\n Binomial test - 0.0886260801141\n Maximum likelihood 0.59\n Bootstrap CI: (0.4900, 0.6900)\n\n\n### Lets perform inference with PyMC3\n\n\n```python\nprint(\"n = %s\"%n)\nprint(\"h = %s\"%h)\nalpha = 2\nbeta = 2\n\nniter = 1000\nwith pm.Model() as model: # context management\n # define priors\n p = pm.Beta('p', alpha=alpha, beta=beta)\n\n # define likelihood\n y = pm.Binomial('y', n=n, p=p, observed=h)\n\n # inference\n start = pm.find_MAP() # Use MAP estimate (optimization) as the initial state for MCMC\n step = pm.Metropolis() # Have a choice of samplers\n trace = pm.sample(niter, step, start, random_seed=123, progressbar=True)\n```\n\n n = 100\n h = 59\n\n\n Applied logodds-transform to p and added transformed p_logodds_ to model.\n 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1000/1000 [00:00<00:00, 8636.83it/s]\n\n\n\n```python\nfig = plt.figure(figsize=(8,6))\nax = fig.add_subplot(111)\n\nax.hist(trace['p'], 15, histtype='step', normed=True, label='post');\nx = np.linspace(0, 1, 100)\nax.plot(x, st.beta.pdf(x, alpha, beta), label='prior');\nax.legend(loc='best');\n```\n\n## Estimating the mean and standard deviation of a normal\n\nOne of the amazing things about probabilistic programming in general is that to evaluate a complex model it is just an extension of what we would do with simple models---we go about it using the same process.\n\n$$\nX \\sim \\mathcal{N}(\\mu,\\sigma^{2})\n$$\n\n\n```python\n# generate observed data\nN = 100\n_mu = np.array([10])\n_sigma = np.array([2])\ny = np.random.normal(_mu, _sigma, N)\n\nniter = 1000\nwith pm.Model() as model:\n # define priors\n mu = pm.Uniform('mu', lower=0, upper=100, shape=_mu.shape)\n sigma = pm.Uniform('sigma', lower=0, upper=10, shape=_sigma.shape)\n\n # define likelihood\n y_obs = pm.Normal('Y_obs', mu=mu, sd=sigma, observed=y)\n\n # inference\n start = pm.find_MAP()\n step = pm.Slice()\n trace = pm.sample(niter, step, start, random_seed=123, progressbar=True)\n```\n\n Applied interval-transform to mu and added transformed mu_interval_ to model.\n Applied interval-transform to sigma and added transformed sigma_interval_ to model.\n 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1000/1000 [00:01<00:00, 918.42it/s]\n\n\n\n```python\nfig = 
plt.figure(figsize=(10,6))\nax1 = fig.add_subplot(1,2,1);\nax2 = fig.add_subplot(1,2,2);\n\nax1.hist(trace['mu'][-niter/2:,0], 25, histtype='step');\nax2.hist(trace['sigma'][-niter/2:,0], 25, histtype='step');\n```\n\n## Switchpoint analysis\n\nThis example comes from [Cam Davidson-Pilon's book](https://github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers)\n\nWe will be looking at count data---specifically at the frequency of text messages recieved over a period of time. We will use the Poisson distribution to help use investigate this series of events.\n\n\n```python\nfrom IPython.display import Image\nImage(filename='../poisson.png')\n```\n\n\n```python\nfig = plt.figure(figsize=(12.5, 3.5))\nax = fig.add_subplot(111)\ncount_data = np.loadtxt(\"data/txtdata.csv\")\nn_count_data = len(count_data)\nax.bar(np.arange(n_count_data), count_data, color=\"#348ABD\")\nax.set_xlabel(\"Time (days)\")\nax.set_ylabel(\"count of text-msgs received\")\nax.set_title(\"Did the user's texting habits change over time?\")\nax.set_xlim(0, n_count_data);\n```\n\nThese are time-course data. Can we infer when a behavioral change occurred?\n\nLets denote the day with $i$ and text-message count by $C_i$,\n\n$$ C_i \\sim \\text{Poisson}(\\lambda) $$\n\n * Does the rate $\\lambda$ change?\n * Is there a day (call it $\\tau$) where the parameter $\\lambda$ suddenly jumps to a higher value?\n * We are looking for a _switchpoint_ s.t.\n\n$$\n\\lambda = \n\\begin{cases}\n\\lambda_1 & \\text{if } t \\lt \\tau \\cr\n\\lambda_2 & \\text{if } t \\ge \\tau\n\\end{cases}\n$$\n\nIf no sudden change occurred and indeed $\\lambda_1 = \\lambda_2$, then the $\\lambda$s posterior distributions should look about equal.\n\nWe are interested in inferring the unknown $\\lambda$s. We are using Bayesian inference so we need priors. The *exponential* distribution provides a continuous density function for positive numbers.\n\n\\begin{align}\n&\\lambda_1 \\sim \\text{Exp}( \\alpha ) \\\\\\\n&\\lambda_2 \\sim \\text{Exp}( \\alpha )\n\\end{align}\n\n$\\alpha$ is a *hyper-parameter*. In literal terms, it is a parameter that influences other parameters.\n\n$$\\frac{1}{N}\\sum_{i=0}^N \\;C_i \\approx E[\\; \\lambda \\; |\\; \\alpha ] = \\frac{1}{\\alpha}$$ \n\nWhat about $\\tau$? Because of the noisiness of the data, it's difficult to pick out a priori when $\\tau$ might have occurred. Instead, we can assign a *uniform prior belief* to every possible day. 
This is equivalent to saying\n\n\\begin{align}\n& \\tau \\sim \\text{DiscreteUniform(1,70) }\\\\\\\\\n& \\Rightarrow P( \\tau = k ) = \\frac{1}{70}\n\\end{align}\n\nSo what does this look like in PyMC3?\n\n\n```python\nimport pymc3 as pm\nimport theano.tensor as tt\n\n## assign lambdas and tau to stochastic variables\nwith pm.Model() as model:\n alpha = 1.0/count_data.mean() # Recall count_data is the\n # variable that holds our txt counts\n lambda_1 = pm.Exponential(\"lambda_1\", alpha)\n lambda_2 = pm.Exponential(\"lambda_2\", alpha)\n tau = pm.DiscreteUniform(\"tau\", lower=0, upper=n_count_data)\n\n## create a combined function for lambda (it is still a RV) \nwith model:\n idx = np.arange(n_count_data) # Index\n lambda_ = pm.math.switch(tau >= idx, lambda_1, lambda_2)\n\n## combine the data with our proposed data generation scheme \nwith model:\n observation = pm.Poisson(\"obs\", lambda_, observed=count_data)\n\n## inference\nwith model:\n step = pm.Metropolis()\n trace = pm.sample(10000, tune=5000,step=step)\n```\n\n Applied log-transform to lambda_1 and added transformed lambda_1_log_ to model.\n Applied log-transform to lambda_2 and added transformed lambda_2_log_ to model.\n 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 10000/10000 [00:03<00:00, 3171.19it/s]\n\n\n\n```python\n## get the variables we want to plot from our trace\nlambda_1_samples = trace['lambda_1']\nlambda_2_samples = trace['lambda_2']\ntau_samples = trace['tau']\n\n# draw histogram of the samples:\nfig = plt.figure(figsize=(12.5,10))\nax1 = fig.add_subplot(311)\nax2 = fig.add_subplot(312)\nax3 = fig.add_subplot(313)\n\nfor ax in [ax1,ax2]:\n ax.set_autoscaley_on(False)\n\n## axis 1 \nax1.hist(lambda_1_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=\"posterior of $\\lambda_1$\", color=\"#A60628\", normed=True)\nax1.legend(loc=\"upper left\")\nax1.set_title(r\"\"\"Posterior distributions of the variables\n $\\lambda_1,\\;\\lambda_2,\\;\\tau$\"\"\")\nax1.set_xlim([15, 30])\nax1.set_xlabel(\"$\\lambda_1$ value\")\n\n## axis 2\nax2.hist(lambda_2_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=\"posterior of $\\lambda_2$\", color=\"#7A68A6\", normed=True)\nax2.legend(loc=\"upper left\")\nax2.set_xlim([15, 30])\nax2.set_xlabel(\"$\\lambda_2$ value\")\n\n## axis 3\nw = 1.0 / tau_samples.shape[0] * np.ones_like(tau_samples)\nax3.hist(tau_samples, bins=n_count_data, alpha=1,\n label=r\"posterior of $\\tau$\",\n color=\"#467821\", weights=w, rwidth=2.)\nax3.set_xticks(np.arange(n_count_data))\n\nax3.legend(loc=\"upper left\")\nax3.set_ylim([0, .75])\nax3.set_xlim([35, len(count_data)-20])\nax3.set_xlabel(r\"$\\tau$ (in days)\")\nax3.set_ylabel(\"probability\");\nplt.subplots_adjust(hspace=0.4)\n```\n\n### How do we plot the expection distribution of this model?\n\n\n```python\nfig = plt.figure(figsize=(12.5,5))\nax = fig.add_subplot(111)\n\nN = tau_samples.shape[0]\nexpected_texts_per_day = np.zeros(n_count_data)\nfor day in range(0, n_count_data):\n ix = day < tau_samples\n expected_texts_per_day[day] = (lambda_1_samples[ix].sum()\n + lambda_2_samples[~ix].sum()) / N\n\nax.plot(range(n_count_data), expected_texts_per_day, lw=4, color=\"#E24A33\",\n label=\"expected number of text-messages received\")\nax.set_xlim(0, n_count_data)\nax.set_xlabel(\"Day\")\nax.set_ylabel(\"Expected # text-messages\")\nax.set_title(\"Expected number of text-messages received\")\nax.set_ylim(0, 60)\nax.bar(np.arange(len(count_data)), count_data, color=\"#348ABD\", alpha=0.65,label=\"observed texts 
per day\")\n\nax.legend(loc=\"upper left\");\n```\n\nA distribution allows us to see the uncertainty in our estimates. The two distribution for $\\lambda$ are distinct. Near day 45, there was a 50% chance that the user's behavior changed.\n\n> In fact, the 45th day corresponds to Christmas, and I moved away to Toronto the next month, leaving a girlfriend behind\n\n -Cam Davidson Pilon\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "9facdfaecb72b06f9eb337c0d05c25462b374bfd", "size": 604304, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/introduction.ipynb", "max_stars_repo_name": "GalvanizeDataScience/probabilistic-programming-intro", "max_stars_repo_head_hexsha": "9e007b6ae8e9d0ca2aff6c3c5e00c0aa4c012595", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 9, "max_stars_repo_stars_event_min_datetime": "2017-02-08T04:18:46.000Z", "max_stars_repo_stars_event_max_datetime": "2018-09-10T14:33:13.000Z", "max_issues_repo_path": "notebooks/introduction.ipynb", "max_issues_repo_name": "zipfian/probabilistic-programming-intro", "max_issues_repo_head_hexsha": "9e007b6ae8e9d0ca2aff6c3c5e00c0aa4c012595", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/introduction.ipynb", "max_forks_repo_name": "zipfian/probabilistic-programming-intro", "max_forks_repo_head_hexsha": "9e007b6ae8e9d0ca2aff6c3c5e00c0aa4c012595", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 26, "max_forks_repo_forks_event_min_datetime": "2017-02-09T01:06:04.000Z", "max_forks_repo_forks_event_max_datetime": "2018-02-09T06:09:29.000Z", "avg_line_length": 1032.9982905983, "max_line_length": 194682, "alphanum_fraction": 0.9453453891, "converted": true, "num_tokens": 2921, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9433475794701961, "lm_q2_score": 0.8856314828740729, "lm_q1q2_score": 0.8354583156718571}} {"text": "## How do distributions transform under a change of variables ?\n\nKyle Cranmer, March 2016\n\n\n```python\n%pylab inline --no-import-all\n```\n\n Populating the interactive namespace from numpy and matplotlib\n\n\nWe are interested in understanding how distributions transofrm under a change of variables.\nLet's start with a simple example. Think of a spinner like on a game of twister. \n\n\n\nWe flick the spinner and it stops. Let's call the angle of the pointer $x$. It seems a safe assumption that the distribution of $x$ is uniform between $[0,2\\pi)$... so $p_x(x) = 1/\\sqrt{2\\pi}$\n\nNow let's say that we change variables to $y=\\cos(x)$ (sorry if the names are confusing here, don't think about x- and y-coordinates, these are just names for generic variables). 
The question is this:\n** what is the distribution of y?** Let's call it $p_y(y)$\n\nWell it's easy to do with a simulation, let's try it out\n\n\n```python\n# generate samples for x, evaluate y=cos(x)\nn_samples = 100000\nx = np.random.uniform(0,2*np.pi,n_samples)\ny = np.cos(x)\n```\n\n\n```python\n# make a histogram of x\nn_bins = 50\ncounts, bins, patches = plt.hist(x, bins=50, normed=True, alpha=0.3)\nplt.plot([0,2*np.pi], (1./2/np.pi, 1./2/np.pi), lw=2, c='r')\nplt.xlim(0,2*np.pi)\nplt.xlabel('x')\nplt.ylabel('$p_x(x)$')\n```\n\nOk, now let's make a histogram for $y=\\cos(x)$\n\n\n```python\ncounts, y_bins, patches = plt.hist(y, bins=50, normed=True, alpha=0.3)\nplt.xlabel('y')\nplt.ylabel('$p_y(y)$')\n```\n\nIt's not uniform! Why is that? Let's look at the $x-y$ relationship\n\n\n```python\n# make a scatter of x,y\nplt.scatter(x[:300],y[:300]) #just the first 300 points\n\nxtest = .2\nplt.plot((-1,xtest),(np.cos(xtest),np.cos(xtest)), c='r')\nplt.plot((xtest,xtest),(-1.5,np.cos(xtest)), c='r')\nxtest = xtest+.1\nplt.plot((-1,xtest),(np.cos(xtest),np.cos(xtest)), c='r')\nplt.plot((xtest,xtest),(-1.5,np.cos(xtest)), c='r')\n\nxtest = 2*np.pi-xtest\nplt.plot((-1,xtest),(np.cos(xtest),np.cos(xtest)), c='g')\nplt.plot((xtest,xtest),(-1.5,np.cos(xtest)), c='g')\nxtest = xtest+.1\nplt.plot((-1,xtest),(np.cos(xtest),np.cos(xtest)), c='g')\nplt.plot((xtest,xtest),(-1.5,np.cos(xtest)), c='g')\n\n\nxtest = np.pi/2\nplt.plot((-1,xtest),(np.cos(xtest),np.cos(xtest)), c='r')\nplt.plot((xtest,xtest),(-1.5,np.cos(xtest)), c='r')\nxtest = xtest+.1\nplt.plot((-1,xtest),(np.cos(xtest),np.cos(xtest)), c='r')\nplt.plot((xtest,xtest),(-1.5,np.cos(xtest)), c='r')\n\nxtest = 2*np.pi-xtest\nplt.plot((-1,xtest),(np.cos(xtest),np.cos(xtest)), c='g')\nplt.plot((xtest,xtest),(-1.5,np.cos(xtest)), c='g')\nxtest = xtest+.1\nplt.plot((-1,xtest),(np.cos(xtest),np.cos(xtest)), c='g')\nplt.plot((xtest,xtest),(-1.5,np.cos(xtest)), c='g')\n\n\nplt.ylim(-1.5,1.5)\nplt.xlim(-1,7)\n```\n\nThe two sets of vertical lines are both separated by $0.1$. The probability $P(a < x < b)$ must equal the probability of $P( cos(b) < y < cos(a) )$. In this example there are two different values of $x$ that give the same $y$ (see green and red lines), so we need to take that into account. For now, let's just focus on the first part of the curve with $x<\\pi$.\n\nSo we can write (this is the important equation):\n\n\\begin{equation}\n\\int_a^b p_x(x) dx = \\int_{y_b}^{y_a} p_y(y) dy \n\\end{equation}\nwhere $y_a = \\cos(a)$ and $y_b = \\cos(b)$.\n\nand we can re-write the integral on the right by using a change of variables (pure calculus)\n\n\\begin{equation}\n\\int_a^b p_x(x) dx = \\int_{y_b}^{y_a} p_y(y) dy = \\int_a^b p_y(y(x)) \\left| \\frac{dy}{dx}\\right| dx \n\\end{equation}\n\nnotice that the limits of integration and integration variable are the same for the left and right sides of the equation, so the integrands must be the same too. Therefore:\n\n\\begin{equation}\np_x(x) = p_y(y) \\left| \\frac{dy}{dx}\\right| \n\\end{equation}\nand equivalently\n\\begin{equation}\np_y(y) = p_x(x) \\,/ \\,\\left| \\, {dy}/{dx}\\, \\right | \n\\end{equation}\n\nThe factor $\\left|\\frac{dy}{dx} \\right|$ is called a Jacobian. 
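\n\nAs a quick worked illustration of this relation (an added aside using a simpler, strictly monotonic change of variables, separate from the spinner example), take $y=2x$ with $x$ uniform on $[0,2\\pi)$:\n\n\\begin{equation}\n\\left|\\frac{dy}{dx}\\right| = 2 \\qquad \\Rightarrow \\qquad p_y(y) = \\frac{p_x(x)}{2} = \\frac{1}{2\\pi}\\cdot\\frac{1}{2} = \\frac{1}{4\\pi}, \\quad y \\in [0,4\\pi)\n\\end{equation}\n\nwhich still integrates to 1 because the probability has been stretched over twice the range.\n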
When it is large it is stretching the probability in $x$ over a large range of $y$, so it makes sense that it is in the denominator.\n\n\n```python\nplt.plot((0.,1), (0,.3))\nplt.plot((0.,1), (0,0), lw=2)\nplt.plot((1.,1), (0,.3))\nplt.ylim(-.1,.4)\nplt.xlim(-.1,1.6)\nplt.text(0.5,0.2, '1', color='b')\nplt.text(0.2,0.03, 'x', color='black')\nplt.text(0.5,-0.05, 'y=cos(x)', color='g')\nplt.text(1.02,0.1, '$\\sin(x)=\\sqrt{1-y^2}$', color='r')\n```\n\nIn our case:\n\\begin{equation}\n\\left|\\frac{dy}{dx} \\right| = \\sin(x)\n\\end{equation}\n\nLooking at the right-triangle above you can see $\\sin(x)=\\sqrt{1-y^2}$ and finally there will be an extra factor of 2 for $p_y(y)$ to take into account $x>\\pi$. So we arrive at\n\\begin{equation}\np_y(y) = 2 \\times \\frac{1}{2 \\pi} \\frac{1}{\\sin(x)} = \\frac{1}{\\pi} \\frac{1}{\\sin(\\arccos(y))} = \\frac{1}{\\pi} \\frac{1}{\\sqrt{1-y^2}}\n\\end{equation}\n\n Notice that when $y=\\pm 1$ the pdf is diverging. This is called a [caustic](http://www.phikwadraat.nl/huygens_cusp_of_tea/) and you see them in your coffee and rainbows!\n\n| | |\n|---|---|\n| | | \n\n\n**Let's check our prediction**\n\n\n```python\ncounts, y_bins, patches = plt.hist(y, bins=50, normed=True, alpha=0.3)\npdf_y = (1./np.pi)/np.sqrt(1.-y_bins**2)\nplt.plot(y_bins, pdf_y, c='r', lw=2)\nplt.ylim(0,5)\nplt.xlabel('y')\nplt.ylabel('$p_y(y)$')\n```\n\nPerfect!\n\n## A trick using the cumulative distribution function (cdf) to generate random numbers\n\nLet's consider a different variable transformation now -- it is a special one that we can use to our advantage. \n\\begin{equation}\ny(x) = \\textrm{cdf}(x) = \\int_{-\\infty}^x p_x(x') dx'\n\\end{equation}\n\nHere's a plot of a distribution and cdf for a Gaussian.\n\n(NOte: the axes are different for the pdf and the cdf http://matplotlib.org/examples/api/two_scales.html\n\n\n```python\nfrom scipy.stats import norm\n```\n\n\n```python\nx_for_plot = np.linspace(-3,3, 30)\nfig, ax1 = plt.subplots()\n\nax1.plot(x_for_plot, norm.pdf(x_for_plot), c='b')\nax1.set_ylabel('p(x)', color='b')\nfor tl in ax1.get_yticklabels():\n tl.set_color('b')\n \nax2 = ax1.twinx()\nax2.plot(x_for_plot, norm.cdf(x_for_plot), c='r')\nax2.set_ylabel('cdf(x)', color='r')\nfor tl in ax2.get_yticklabels():\n tl.set_color('r')\n```\n\nOk, so let's use our result about how distributions transform under a change of variables to predict the distribution of $y=cdf(x)$. We need to calculate \n\n\\begin{equation}\n\\frac{dy}{dx} = \\frac{d}{dx} \\int_{-\\infty}^x p_x(x') dx'\n\\end{equation}\n\nJust like particles and anti-particles, when derivatives meet anti-derivatives they annihilate. So $\\frac{dy}{dx} = p_x(x)$, which shouldn't be a surprise.. the slope of the cdf is the pdf.\n\nSo putting these together we find the distribution for $y$ is:\n\n\\begin{equation}\np_y(y) = p_x(x) \\, / \\, \\frac{dy}{dx} = p_x(x) /p_x(x) = 1\n\\end{equation}\n\nSo it's just a uniform distribution from $[0,1]$, which is perfect for random numbers.\n\nWe can turn this around and generate a uniformly random number between $[0,1]$, take the inverse of the cdf and we should have the distribution we want for $x$.\n\nLet's try it for a Gaussian. 
The inverse of the cdf for a Gaussian is called [ppf](http://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.stats.norm.html)\n\n\n\n```python\nnorm.ppf.__doc__\n```\n\n\n\n\n '\\n Percent point function (inverse of `cdf`) at q of the given RV.\\n\\n Parameters\\n ----------\\n q : array_like\\n lower tail probability\\n arg1, arg2, arg3,... : array_like\\n The shape parameter(s) for the distribution (see docstring of the\\n instance object for more information)\\n loc : array_like, optional\\n location parameter (default=0)\\n scale : array_like, optional\\n scale parameter (default=1)\\n\\n Returns\\n -------\\n x : array_like\\n quantile corresponding to the lower tail probability q.\\n\\n '\n\n\n\n\n```python\n#check it out\nnorm.cdf(0), norm.ppf(0.5)\n```\n\n\n\n\n (0.5, 0.0)\n\n\n\nOk, let's use CDF trick to generate Normally-distributed (aka Gaussian-distributed) random numbers\n\n\n```python\nrand_cdf = np.random.uniform(0,1,10000)\nrand_norm = norm.ppf(rand_cdf)\n```\n\n\n```python\n_ = plt.hist(rand_norm, bins=30, normed=True, alpha=0.3)\nplt.xlabel('x')\n```\n\n**Pros**: The great thing about this technique is it is very efficient. You only generate one random number per random $x$.\n\n**Cons**: the downside is you need to know how to compute the inverse cdf for $p_x(x)$ and that can be difficult. It works for a distribution like a Gaussian, but for some random distribution this might be even more computationally expensive than the accept/reject approach. This approach also doesn't really work if your distribution is for more than one variable.\n\n## Going full circle\n\nOk, let's try it for our distribution of $y=\\cos(x)$ above. We found \n\n\\begin{equation}\np_y(y) = \\frac{1}{\\pi} \\frac{1}{\\sqrt{1-y^2}}\n\\end{equation}\n\nSo the CDF is (see Wolfram alpha for [integral](http://www.wolframalpha.com/input/?i=integrate%5B1%2Fsqrt%5B1-x%5E2%5D%2FPi%5D) )\n\\begin{equation}\ncdf(y') = \\int_{-1}^{y'} \\frac{1}{\\pi} \\frac{1}{\\sqrt{1-y^2}} = \\frac{1}{\\pi}\\arcsin(y') + C\n\\end{equation}\nand we know that for $y=-1$ the CDF must be 0, so the constant is $1/2$ and by looking at the plot or remembering some trig you know that it's also $cdf(y') = (1/\\pi) \\arccos(y')$.\n\nSo to apply the trick, we need to generate uniformly random variables $z$ between 0 and 1, and then take the inverse of the cdf to get $y$. Ok, so what would that be:\n\\begin{equation}\ny = \\textrm{cdf}^{-1}(z) = \\cos(\\pi z)\n\\end{equation}\n\n**Of course!** that's how we started in the first place, we started with a uniform $x$ in $[0,2\\pi]$ and then defined $y=\\cos(x)$. So we just worked backwards to get where we started. 
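\n\nWe can verify this numerically (a short added check, reusing the numpy/matplotlib setup loaded at the top of this notebook): sample $z$ uniformly, push it through $\\cos(\\pi z)$, and compare the histogram against the density $p_y(y)$ derived above.\n\n\n```python\n# inverse-cdf sampling: z ~ Uniform(0,1), then y = cos(pi*z)\nz = np.random.uniform(0, 1, 100000)\ny_samples = np.cos(np.pi*z)\n\n# histogram of the samples against the predicted pdf 1/(pi*sqrt(1-y^2))\n_ = plt.hist(y_samples, bins=50, normed=True, alpha=0.3)\ny_grid = np.linspace(-0.999, 0.999, 500)\nplt.plot(y_grid, (1./np.pi)/np.sqrt(1. - y_grid**2), c='r', lw=2)\nplt.ylim(0, 5)\nplt.xlabel('y')\n```\n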
The only difference here is that we only evaluate the first half: $\\cos(x < \\pi)$\n\n\n\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "2cafa7b948f86d198a829ccca2051c175fe37d75", "size": 96657, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "_site/content/distributions/change-of-variables.ipynb", "max_stars_repo_name": "cranmer/intro-exp-phys-book", "max_stars_repo_head_hexsha": "255c8b99bd172d1274f0dac04d735290da421c83", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2020-03-02T20:35:57.000Z", "max_stars_repo_stars_event_max_datetime": "2020-03-06T16:14:53.000Z", "max_issues_repo_path": "_site/content/distributions/change-of-variables.ipynb", "max_issues_repo_name": "cranmer/intro-exp-phys-book", "max_issues_repo_head_hexsha": "255c8b99bd172d1274f0dac04d735290da421c83", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2020-03-03T18:41:07.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-26T06:17:16.000Z", "max_forks_repo_path": "content/distributions/change-of-variables.ipynb", "max_forks_repo_name": "cranmer/intro-exp-phys-book", "max_forks_repo_head_hexsha": "255c8b99bd172d1274f0dac04d735290da421c83", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-04-13T00:22:03.000Z", "max_forks_repo_forks_event_max_datetime": "2020-04-13T00:22:03.000Z", "avg_line_length": 164.9436860068, "max_line_length": 20438, "alphanum_fraction": 0.8784568112, "converted": true, "num_tokens": 3137, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9263037262250327, "lm_q2_score": 0.9019206666433899, "lm_q1q2_score": 0.8354524742711376}} {"text": "# Normal and tangential coordinates\n\nIn the normal and tangential coordinate system, the (vector) equation of motion\n\n$$\n\\sum \\mathbf{F} = m \\mathbf{a}\n$$\n\ndecomposes into the three scalar equations for the tangential ($t$), \nnormal ($n$), and binormal ($b$) directons:\n\n$$\n\\begin{align}\n\\sum F_t &= m a_t = m \\dot{v} = m \\ddot{s} \\\\\n\\sum F_n &= m a_n = m \\frac{v^2}{\\rho} \\\\\n\\sum F_b &= m a_b = 0 \\;,\n\\end{align} \n$$\n\nwhere $F_i$ is a force in the $i$ direction, $m$ is the particle mass,\n$v$ is the velocity, and $\\rho$ is the radius of curvature.\n\nThe tangential acceleration $a_t$ is positive or negative in the direction of motion,\nthe normal acceleration $a_n$ is **always** positive in the normal direction,\nand the binormal acceleration $a_b$ is **always** zero, because motion lies in the plane\nformed by the normal and tangential directions.\n\n## Example: race car at banking angle (no friction)\n\nA Formula 1 race car of mass $m$ = 740 kg is traveling on a track at \nconstant velocity $v$ = 60 m/s, where the radius of curvature is $\\rho$ = 400 m.\nWhat is the banking angle $\\theta$ necessary for the car to avoid sliding as it\ngoes around this curve?\n\nFirst, let's draw a free-body diagram for the car:\n\n\nNow, write the three scalar equations of motion for the car. 
\nThe equation in the tangential direction does not really tell us much,\nsince the car is moving at a constant speed in its direction of motion:\n\n$$\n\\sum F_t = m a_t = m \\dot{v} = 0 \\;.\n$$\n\nIn the normal direction, the only force is the normal component of the resultant force:\n\n$$\n\\begin{align}\n\\sum F_n &= m a_n \\\\\nN_C \\sin \\theta &= m \\frac{v^2}{\\rho}\n\\end{align}\n$$\n\nand in the binormal direction, we have both the component of the resultant force and also the car's weight, but the binormal acceleration is zero:\n\n$$\n\\begin{align}\n\\sum F_b &= m a_b = 0 \\\\\nN_C \\cos \\theta - m g &= 0 \\\\\n\\rightarrow N_C \\cos \\theta &= m g\n\\end{align} \n$$\n\nIf we divide the latter equation from the former, we can solve for the banking angle:\n\n$$\n\\begin{align}\n\\frac{N_C \\sin \\theta}{N_C \\cos \\theta} &= \\frac{m v^2 / \\rho}{mg} \\\\\n\\tan \\theta &= \\frac{v^2}{\\rho g} \\\\\n\\therefore \\theta &= \\tan^{-1} \\left(\\frac{v^2}{\\rho g} \\right)\n\\end{align}\n$$\n\nFor the parameters given:\n\n\n```python\nimport numpy as np\n\nmass = 740 # kg\nvelocity = 60 # m/s\nrho = 400 # m\ng = 9.81 # m/s^2\n\ntheta = np.arctan(velocity**2 / (rho * g))\nprint(f'theta ={theta * 180/np.pi: .2f}\u00b0')\n```\n\n theta = 42.53\u00b0\n\n\n## Example: race car at banking angle (with friction)\n\nNow, consider the same situation, but account for the effect of friction,\nwhich will counter the car's motion in the outward direction. \nWhat is the new banking angle needed to avoid the car sliding in this case?\nAssume the coefficient of static friction is $\\mu_s = 0.2$.\n\n\n```python\nmu = 0.2\n```\n\nIn this case, we now have to account for components of the friction force in the normal and binormal directions, where the friction force is\n\n$$\nf = \\mu_s N_C \\;.\n$$\n\nIn the normal direction, we have\n\n$$\n\\begin{align}\n\\sum F_n &= m a_n \\\\\nN_C \\sin \\theta + f \\cos \\theta &= m \\frac{v^2}{\\rho}\n\\end{align}\n$$\n\nand in the binormal direction\n\n$$\n\\begin{align}\n\\sum F_b &= m a_b = 0 \\\\\nN_C \\cos \\theta - f \\sin \\theta - m g &= 0 \\\\\n\\rightarrow N_C \\cos \\theta - f \\sin \\theta &= m g\n\\end{align} \n$$\n\nCombining the two equations (again, by dividing the first by the second) and recalling that $f = \\mu_s N_c$:\n\n$$\n\\begin{align}\n\\frac{N_C \\sin \\theta + f \\cos \\theta}{N_C \\cos \\theta - f \\sin \\theta} &= \\frac{m v^2 / \\rho}{m g} \\\\\n\\frac{\\sin \\theta + \\mu_s \\cos \\theta}{\\cos \\theta - \\mu_s \\sin \\theta} &= \\frac{v^2}{\\rho g} \\;.\n\\end{align}\n$$\n\nThis is our equation to find the banking angle, but unfortunately it has no closed-form solution. So, how do we find $\\theta$? 
Using a numerical method!\n\n### Method 1: manual iteration\n\nWe could first attack this problem by manually guessing and checking different values of $\\theta$, until the left-hand side of the equation equals the right-hand side.\nFor example, trying different values from 20\u00b0 to 40\u00b0:\n\n\n```python\n# need to convert to radians\nvals = np.arange(20, 41, 2) * np.pi / 180\n\nprint('Theta LHS RHS')\nfor theta in vals:\n lhs = (np.sin(theta) + mu*np.cos(theta)) / (np.cos(theta) - mu*np.sin(theta))\n rhs = velocity**2 / (rho*g)\n print(f'{theta*180/np.pi: 4.1f}\u00b0 {lhs: 5.3f} {rhs: 5.3f}')\n```\n\n Theta LHS RHS\n 20.0\u00b0 0.608 0.917\n 22.0\u00b0 0.657 0.917\n 24.0\u00b0 0.708 0.917\n 26.0\u00b0 0.762 0.917\n 28.0\u00b0 0.819 0.917\n 30.0\u00b0 0.879 0.917\n 32.0\u00b0 0.943 0.917\n 34.0\u00b0 1.011 0.917\n 36.0\u00b0 1.084 0.917\n 38.0\u00b0 1.163 0.917\n 40.0\u00b0 1.249 0.917\n\n\nSo, clearly the correct value is between 30\u00b0 and 32\u00b0. A bit more manual iteration shows that the correct angle is just about 31.2\u00b0:\n\n\n```python\ntheta = 31.2 * np.pi / 180\n\nlhs = (np.sin(theta) + mu*np.cos(theta)) / (np.cos(theta) - mu*np.sin(theta))\nrhs = velocity**2 / (rho*g)\nprint(lhs, rhs)\n```\n\n 0.9166501382856722 0.9174311926605505\n\n\n### Method 2: `root_scalar`\n\nManually solving like this would be quite tedious; fortunately, there are numerical methods for solving scalar equations like this. We refer to this at **root finding**, since to solve we\nformulate the equation like $F(x) = 0$, and find the value of the unknown variable that makes the function zero (i.e., the root).\n\nIn this case, we make the equation into the form\n\n$$\nF(\\theta) = \\frac{\\sin \\theta + \\mu_s \\cos \\theta}{\\cos \\theta - \\mu_s \\sin \\theta} - \\frac{v^2}{\\rho g} = 0 \\;,\n$$\n\nThen, to solve, we can use the `root_scalar` function provided in the `scipy.optimize` module,\nwhich needs us to provide it with a function that returns $F(\\theta)$ for candidate values of $\\theta$ (with the goal of finding the one that makes $F(\\theta)=0$), along with a few guess values:\n\n\n```python\nfrom scipy.optimize import root_scalar\n\ndef f(theta):\n '''This function evaluates the equation for finding theta.\n '''\n return (\n (np.sin(theta) + mu*np.cos(theta)) / (np.cos(theta) - mu*np.sin(theta)) -\n velocity**2 / (rho*g)\n )\n\nsol = root_scalar(f, x0=(20*np.pi/180), x1=(40*np.pi/180))\nprint(f'theta ={sol.root * 180/np.pi: .2f}\u00b0')\n```\n\n theta = 31.22\u00b0\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "15bc03ec75880a1ab6f22fe93864956997684d6b", "size": 10317, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "book/particle-kinetics/normal-tangential.ipynb", "max_stars_repo_name": "kyleniemeyer/dynamics-book", "max_stars_repo_head_hexsha": "c1b416164ba2a773706892afa4871786a3d3929c", "max_stars_repo_licenses": ["CC-BY-4.0", "BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-03-27T20:34:20.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-27T20:34:20.000Z", "max_issues_repo_path": "book/particle-kinetics/normal-tangential.ipynb", "max_issues_repo_name": "kyleniemeyer/dynamics-book", "max_issues_repo_head_hexsha": "c1b416164ba2a773706892afa4871786a3d3929c", "max_issues_repo_licenses": ["CC-BY-4.0", "BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "book/particle-kinetics/normal-tangential.ipynb", 
"max_forks_repo_name": "kyleniemeyer/dynamics-book", "max_forks_repo_head_hexsha": "c1b416164ba2a773706892afa4871786a3d3929c", "max_forks_repo_licenses": ["CC-BY-4.0", "BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 29.3931623932, "max_line_length": 203, "alphanum_fraction": 0.517980033, "converted": true, "num_tokens": 2007, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9073122238669026, "lm_q2_score": 0.9207896726304802, "lm_q1q2_score": 0.8354437255880381}} {"text": "# Technical Details\n\nAs with all packages, numerous technical details that are abstracted away from the user. Now to ensure a clean interface, this abstraction is entirely necessary. However, it can sometimes be confusing when navigating a package's source code to pin down what's going on when there's so many _under the hood_ operations taking place. In this notebook, I'll aim to shed some light on all of the tricks that we do in GPJax to help elucidate the code to anyone wishing to extend GPJax for their own uses.\n\n\n## Parameter Transformations\n\n### Motivations\n\nMany parameters in a Gaussian process are what we call a _constrained parameter_. By this, we mean that the parameter's value is only defined on a subset of $\\mathbb{R}$. One example of this is the lengthscale parameter in any of the stationary kernels. It would not make sense to have a negative lengthscale, and as such the parameter's value is constrained to exist only on the positive real line. \n\nWhilst mathematically correct, constrained parameters can become a pain when optimising as many optimisers are designed to operate on an unconstrained space. Further, it can often be computationally inefficient to restrict the search space of an optimiser. For these reasons, we instead transform the constrained parameter to exist in an unconstrained space. The optimisation is then done on this unconstrained parameter before we transform it back when we need to evaluate its value. \n\nOnly bijective transformations are valid as we cannot afford to lose our original parameter value when transforming. As such, we have to be careful about which transformations we use. Some common choices include the log-exponential bijection and the softplus transform. We, by default, opt for the softplus transformation in GPJax as it is less prone to numerical overflow in comparison to log-exp transformations.\n\n\n### Implementation\n\nWhen it comes to implementations, we attach the transformation directly to the `Parameter` class. It is an optional argument that one can specify when instantiating their parameter. To see this, consider the following example\n\n\n```python\nfrom gpjax.parameters import Parameter\nfrom gpjax.transforms import Softplus\nimport jax.numpy as jnp\n\nx = Parameter(jnp.array(1.0), transform=Softplus())\n```\n\n WARNING:absl:No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)\n\n\nNow we know that the softplus transformation operation on an input $x \\in \\mathbb{R}_{>0}$ can be written as \n$$\\alpha(x) = \\log(\\exp(x)-1)$$\nwhere $\\alpha(x) \\in \\mathbb{R}$. In this instance, it can be seen that $\\alpha(1)=0.54$. 
Now this unconstrained value is stored within the parameter's `value` property\n\n\n```python\nprint(x.value)\n```\n\n 0.541324854612918\n\n\nwhilst the original constrained value can be computed by accesing the parameter's `untransform` property\n\n\n```python\nprint(x.untransform)\n```\n\n 1.0\n\n\n### Custom transformation\n\nShould you wish to define your own custom transformation, then this can easily be done by extending the `Transform` class within `gpjax.transforms` and defining a forward transformation and a backward transformation.\n\n\n```python\nclass Transform:\n def __init__(self, name=\"Transformation\"):\n self.name = name\n\n @staticmethod\n def forward(x: jnp.ndarray) -> jnp.ndarray:\n raise NotImplementedError\n\n @staticmethod\n def backward(x: jnp.ndarray) -> jnp.ndarray:\n raise NotImplementedError\n```\n\nThe `forward` method is the transformation that maps from a constrained space to an unconstrained space, whilst the `backward` method is the transformation that reverses this. A nice example of this can be seen for the earlier used softplus transformation\n\n\n```python\nfrom jax.nn import softplus\n\nclass Softplus(Transform):\n def __init__(self):\n super().__init__(name='Softplus')\n\n @staticmethod\n def forward(x: jnp.ndarray) -> jnp.ndarray:\n return jnp.log(jnp.exp(x) - 1.)\n\n @staticmethod\n def backward(x: jnp.ndarray) -> jnp.ndarray:\n return softplus(x)\n```\n\n## Prior distributions\n\n### Motivations\n\nOften when we use Gaussian processes, we do so as they facilitate incorporation of prior information into the model. Implicitly, by the very use of a Gaussian process, we are incorporating our prior information around the functional behaviour of the latent function that we are seeking to recover. However, we can take this one step further by placing priors on the hyperparameters of the Gaussian process. Going into the details of which priors are recommended and how to go about selecting them goes beyond the scope of this article, but it suffices to say that doing so can greatly enhance the utility of a Gaussian process. \n\nAt least in my own experience, when priors are placed on the hyperparameters of a Gaussian process they are specified with respect to the constrained parameter value. As an example of this, consider the lengthscale parameter $\\ell \\in \\mathbb{R}_{>0}$. When specifying a prior distribution $p_{0}(\\ell)$, I would typically select a distribution that has support on the positive real line, such as the Gamma distribution. An opposing approach would be to transform the parameter so that it is defined on the entire real line (as discussed in \u00a71) and then specify a prior distribution such as a Gaussian that has unconstrained support. Deciding which of these two approaches to adopt in GPJax is somewhat a moot point to me, so I've opted for priors to be defined on the constrained parameter. That being said, I'd be more than open to altering this opinion is people felt strongly that priors should be defined on the unconstrained parameter value.\n\n### Implementation\n\nRegarding the implementational details of enabling prior specification, this is hopefully a more lucid concept upon code inspection. As with the earlier discussed parameter transformations, the notion of a prior distribution is acknowledged in the definition of a parameter. To exactly specify a prior distribution, one should simply call in the relevant distribution from TensorFlow probability's distributions module. 
For an example of this, consider the parameter `x` that was earlier defined.\n\n\n```python\nfrom tensorflow_probability.substrates.jax import distributions as tfd\n\nx.prior = tfd.Gamma(concentration = 3., rate = 2.)\n```\n\nIf we momentarily pause to consider the state of this parameter now, then we have a constrained parameter value with a corresponding prior distribution. When it comes to deriving our posterior distribution, then we know that it is proportional to the product of the likelihood and the prior density function. As addition is less prone to numerical overflow than multiplication, we take the log of this produce. The log of a product is just a sum of logs, meaning that our log-posterior is then proportional to the sum of our log-likelihood and the log-prior density. Therefore, to connect the value of our parameter and its respective prior distribution, the only implementational point left to cover is how to evaluate the parameters log-prior density. This can be done through the following `@property`\n\n\n```python\nprint(x.log_density)\n```\n\n -0.613706111907959\n\n\nNaturally, should one wish to evaluate the prior density of the parameter, then the exponent can be taken\n\n\n```python\nprint(jnp.exp(x.log_density))\n```\n\n 0.5413408768770793\n\n\n## Cholesky decomposition\n\n### Motivation\n\n\n#### Single-step of the Cholesky decomposition \n\nBefore we examine the use of Cholesky decompositions in GPJax, I think it's worthwhile to first see why we even need the Cholesky decomposition. In mathematics and statistics, we are often presented with the task of solving a linear system of equations $A\\mathbf{x}=\\mathbf{y}$ for $A \\in \\mathbb{R}^{m \\times n}$ and $\\mathbf{x} \\in \\mathbb{R}^{n}$, $\\mathbf{y}\\in\\mathbb{R}^{m}$. In this scenario, our goal is to identify the values of $\\mathbf{x}$ and for people who studied numerical methods, this probably evokes flashbacks to manually calculating $\\mathbf{x}$ using Gaussian elimination. Whilst manageable for a handful of unknown variables, we certainly would not want to do Gaussian elimination for systems containing thousands of variables. Fortunately, computers are quite good at solving these types of problem.\n\nLet our matrix $A$ now be symmetric in $\\mathbb{R}^{n \\times n}$ (analogously termed Hermitian in $\\mathbb{C}^{n \\times n}$) and positive definite (SPD) i.e. \n$$\\begin{align*} A & = A^{\\top} \\quad & \\text{(symmetry)} \\\\\n\\lambda^{\\top}A\\lambda & > 0 \\text{ , for all } \\lambda \\in \\mathbb{R}^{n}\\quad & \\text{(positive-definite)}\\end{align*}.$$\nIf we were to now begin to apply Gaussian elimination to $A$ with a 1 element in the upper-left, then the first step would yield\n$$A = \\begin{bmatrix}1 & \\mathbf{v}^{\\top}\\\\ \n\\mathbf{v} & K \\end{bmatrix} = \\begin{bmatrix}1 & \\mathbf{0} \\\\ \n\\mathbf{v} & I \\end{bmatrix}\\begin{bmatrix}1 & \\mathbf{v}^{\\top} \\\\ \n\\mathbf{0} & K - \\mathbf{v} \\mathbf{v}^{\\top} \\end{bmatrix}. \\tag{1}$$\nUnder the process of regular Gaussian elimination, we would now move onto the second column and introduce $(n-2)$-length $\\mathbf{0}$ vector here too. However, doing so would invalidate the matrix's symmetry. Therefore, before proceeding to the second column, a Cholesky decomposition introduce an of $(n-1)$-length $\\mathbf{0}$ vector into the fist row too. 
To achieve this upper-right triangulation of our reduced matrix system in (1) we can write\n$$\\begin{bmatrix}1 & \\mathbf{v}^{\\top} \\\\ \n\\mathbf{0} & K - \\mathbf{vv}^{\\top} \\end{bmatrix} = \\begin{bmatrix}1 & \\mathbf{0} \\\\ \n\\mathbf{0} & K - \\mathbf{vv}^{\\top} \\end{bmatrix}\\begin{bmatrix}1 & \\mathbf{v}^{\\top} \\\\ \\mathbf{0} & I \\end{bmatrix}. \\tag{2}$$\nBringing the expressions in (1) and (2) together now we get the following factorisation of $A$\n$$\\begin{align}A & = \\begin{bmatrix}1 & \\mathbf{v}^{\\top}\\\\ \n\\mathbf{v} & K \\end{bmatrix} \\nonumber \\\\ & = \\begin{bmatrix}1 & \\mathbf{0} \\\\ \n\\mathbf{v} & I \\end{bmatrix} \\begin{bmatrix}1 & \\mathbf{0} \\\\ \n\\mathbf{0} & K - \\mathbf{vv}^{\\top} \\end{bmatrix}\\begin{bmatrix}1 & \\mathbf{v}^{\\top} \\\\ \\mathbf{0} & I \\end{bmatrix}. \\tag{3}\\end{align}$$\n\nA Cholesky factorisation will apply the operation described in (3) iteratively for all remaining row-column pairs.\n\n#### Generalisation\n\nAt this point, you may well ask the question of how this factorisation can ever be generalisable as it is by no means guaranteed that $A_{1, 1} = 1$ is true. Simply put, we can introduce a constant $\\alpha$ such that $\\alpha = \\sqrt{A_{1,1}}$. Working through (3) with an upper-left entry of $\\alpha^2$ now instead of 1 we get the general form of a Cholesky decomposition\n$$\n\\begin{align}A & = \\begin{bmatrix}\\alpha^2 & \\mathbf{v}^{\\top}\\\\ \n\\mathbf{v} & K \\end{bmatrix} \\nonumber \\\\ \n& = \\begin{bmatrix}\\alpha & \\mathbf{0} \\\\ \n\\frac{\\mathbf{v}}{\\alpha} & I \\end{bmatrix} \\begin{bmatrix}1 & \\mathbf{0} \\\\ \n\\mathbf{0} & K - \\frac{\\mathbf{vv}^{\\top}}{\\alpha^2} \\end{bmatrix}\\begin{bmatrix}1 & \\frac{\\mathbf{v}^{\\top}}{\\alpha} \\\\ \\mathbf{0} & I \\end{bmatrix} \\\\ \n& = L_{1}^{\\top}A_{1}L_{1}\\end{align}$$\nWhen this process is iteratively applied from the upper-left element of $A$ down to the lower-right element, we get the complete factorisation which we know as the Cholesky decomposition \n$$\\begin{align}A & = L_1^{\\top}L_2^{\\top}\\ldots L_n^{\\top} L_n \\ldots L_2 L_1 \\\\ \n& = L^{\\top}L\\end{align}$$\nfor an upper-triangular $L$. \n\n#### Uniqueness\n\nFor a Cholesky decomposition to work, the upper-left element of $K-\\frac{\\mathbf{vv}^{\\top}}{\\alpha^2}$ must be positive, and it's not trivial as to why this is true. However, when considering the first step of the Cholesky decomposition, we established at that $A$ and $L_1^{-\\top}A L_1^{-1}$ are both SPD. Further, $K-\\frac{\\mathbf{vv}^{\\top}}{\\alpha^2}$ is clearly the principal submatrix of $L_1^{-\\top}A L_1^{-1}$ and is therefore positive definite. Right at the start when we defined a positive-definite matrix we saw that for a PD matrix the diagonals are all positive, and therefore the upper-left element of $K-\\frac{\\mathbf{vv}^{\\top}}{\\alpha^2}$ is guaranteed to be positive. Having proved this now for $n=1$, we can apply proof by induction to show that this positiveness is true for all steps of a Cholesky factorisation.\n\nA generalisation of this result is that every SPD matrix has a unique Cholesky factorisation. This can be proved by considering the fact that each $\\alpha$ is unique as it is determined by the form of $L^{\\top}L$. Once $\\alpha$ is determined, the first row of $L^{\\top}$ is also available. 
This is true for each step of the decomposition, and therefore the factorisation is unique.\n\n#### Computational details\n\nIn the words of [Linus Torvalds](https://en.wikipedia.org/wiki/Linus_Torvalds), _\u201cTalk is cheap. Show me the code.\u201d_, it helps at this point to see the Cholesky decomposition through an algorithmic lens.\n```python\nL = A\nfor i in range(n):\n for j in range(i+1, n):\n L[j, j:n] -= L[i, j:n] * R[i, j]/R[i, i]\n R[i, i:n] = R[i, i:n]/R[i, i]**0.5\n```\n\nThe triangular nature of L is unveiled here the inner for loop's decreasing span. It is also the multiplication and subtraction operations of this inner loop that dominate the computational cost of a Cholesky decomposition. At the $i^{\\text{th}}$ step of the loop, there are $i-j+1$ subtractions and $i-j+1$ multiplications required. By then summing over $i \\in 1:n$ and $j \\in (i+1):n$ we get a scaling of $\\frac{1}{3}n^3$ floating-point operations (flops) with $n$. If we wanted to achieve the same factorisation using Gaussian elimination, then our computational complexity would scale $\\frac{2}{3}n^3$ flops; twice as slow in comparison to a Cholesky factorisation.\n\n#### More stable computations\n\nNow that we've established what a Cholesky factorisation is and how one might go about computing the factors, a natural question one might ask is why we would go to this effort? The answer lies in the increased stability of a Cholesky factor in comparison to Gaussian elimination. The stabilisation comes from the fact that the values of $L$ are upper-bounded by the values of $A$. When working with the $2$-norm operator, we can actually go one further and state that $\\lVert L^{\\top} \\rVert := \\lVert L \\rVert := \\lVert A \\rVert^{\\frac{1}{2}}$. More generally, for any finite norm, the values $\\lVert L\\rVert$ and $\\lVert A \\rVert^{\\frac{1}{2}}$ cannot differ by more than a factor of $\\sqrt{n}$.\n\n#### Matrix inversion\n\nDrawing upon the concept of Cholesky factorisation, we can recast our original problem of solving a system of linear equations using the lower-triangular matrices that are given in a Cholesky factorisation\n$$\\begin{align} A\\mathbf{x} & = \\mathbf{b} \\nonumber \\\\\n\\implies LL^{\\top}\\mathbf{x} & = \\mathbf{b} \\nonumber \\\\\n\\implies LL^{\\top}\\mathbf{x} & = L\\mathbf{y} \\ \\text{ for some } \\mathbf{y} \\nonumber \\\\\n\\implies L^{\\top}\\mathbf{x} & = \\mathbf{y}\n\\end{align}$$\nFor conciseness, we'll now adopt the backslash notation $A\\backslash \\mathbf{b}$ to indicate that the vector $\\mathbf{x}$ solves the linear system of equations $A \\mathbf{x} = \\mathbf{b}$. By this, we can equate $\\mathbf{x}$ such that\n$$\\mathbf{x} = L^{\\top} \\backslash (L \\backslash \\mathbf{b}).$$\nAs $L$ is a lower-triangular matrix, we can use forward-substitution to determine the values of $\\mathbf{b} = L\\mathbf{y}$. 
Similarily, $\\mathbf{x} = L^{\\top}\\mathbf{y}$ can be solved using backward-substitution due to the upper-triangular structure in $L^{\\top}$.\n\n#### Determinants\n\nA final note on the Cholesky factorisation that is relavent within the context of Gaussian processes is that for a $n \\times n$ SPD matrix $A$, the matrix's determinant and log-determinant can be computed by\n$$\\lvert A \\rvert = \\prod_{i=1}^n L^2_{ii} \\quad \\text{and} \\quad \\log\\lvert A \\rvert = 2\\log\\sum_{i=1}^n L_{i,i}$$\nrespectively.\n\n### Implementation\n\nDue to the enhanced stability that is experience when working with Cholesky factors in comparison to regular SPD matrices, all matrix inversions in GPJax are done using the Cholesky factors. To see this, consider the evaluation of the conjugate Gaussian process' marginal log-likelihood term. Mathematically, this quantity is written as\n$$\\log p(\\mathbf{y} | X, \\theta) = -0.5 \\left(\\operatorname{logdet}(K_{xx}+\\sigma_n^2 I) + \\mathbf{y}^{\\top}(K_{xx} + \\sigma_n^2 I)^{-1}\\mathbf{y} + N \\log (2 \\pi) \\right).$$\nHowever, by using the Cholesky factorisation $L = \\operatorname{cholesky}(K_{xx}+\\sigma_n^2 I)$ to compute the matrix inverses and determinants, we instead write\n$$\\log p(\\mathbf{y}| X, \\theta) = -0.5 \\left(\\mathbf{y}^{\\top}\\boldsymbol{\\alpha} + \\sum\\limits_{i=1}^{n}\\log L_{ii} + N \\log (2 \\pi) \\right)$$\nwhere \n$$\\boldsymbol{\\alpha} = L^{\\top}\\backslash (L \\backslash \\mathbf{y}).$$\n\nIt is this second form of the marginal log-likelihood that is used within GPJax due to its enhanced numerical stability; a crucial attribute when optimising the model's parameters with respect to it.\n\n## Floating point precision\n\nBy default, Jax uses 32-bit floats. From the excellent Jax documentation, the rationale behind this is _\" to mitigate the Numpy API\u2019s tendency to aggressively promote operands to double\"_. For many machine learning algorithms, 32-bit precision is probably sufficient. However, for Gaussian processes, we have to deal with challenging matrix inversion and we want to mitigate the effects of numerical rounding as much as possible.\n\nTo see this, consider a SPD matrix $A \\in \\mathbb{R}^{n \\times n}$ and the vectors $\\mathbf{x}, \\mathbf{y} \\in \\mathbb{R}^{n}$ which, together, form a linear system of equations\n$$ \\mathbf{y} = A\\mathbf{x}.$$\nAs the matrix is PD, we know that it is then invertible so we can then write\n$$\\begin{align*}\\mathbf{y}^{\\top}A^{-1}\\mathbf{y} & = \\mathbf{x}K^{\\top}K^{-1}K\\mathbf{x}\\\\\n& = \\mathbf{x}^{\\top}K\\mathbf{x}>0\\end{align*}$$\ni.e., for a PD matrix $A$ that is invertible, its inverse $A^{-1}$ is also invertible. \n\nWhilst we might be happy with this fact theoretically, the result breaks down computationally due to the floating-point rounding that occurs in our machines. 
To convince ourself of this, we can define a function that checks if a matrix is PD.\n\n\n```python\nfrom typing import Callable\nimport jax\n\n\ndef is_positive_definte(A: jnp.ndarray):\n return (jnp.linalg.eigvals(A)>0).all()\n\n\ndef sqexp_kernel(x: jnp.ndarray, y: jnp.ndarray, variance: float = 1.0, lengthscale: float = 1.0):\n tau = jnp.square(x-y)\n return jnp.square(variance)*jnp.exp(-0.5*tau/jnp.square(lengthscale))\n\n\ndef kernel(func: Callable, x: jnp.ndarray, y: jnp.ndarray):\n return jax.vmap(lambda x: jax.vmap(lambda y: func(x, y))(y))(x)\n```\n\nIn the above chunk of code, we have defined the squared exponential kernel for which we know the resultant Gram matrix is PD. However, as we'll see below, even for a small number of points, the rounding introduced by 32-bit precision yields a non-SPD matrix.\n\n\n```python\nx = jnp.linspace(-3., 3., 20, dtype=jnp.float32)\nK = kernel(sqexp_kernel, x, x)\nKinv = jnp.linalg.inv(K)\n\nprint(is_positive_definte(K))\nprint(is_positive_definte(Kinv))\n```\n\n False\n False\n\n\nIf we now redefine our original data vector `x` as an array with 64bit precision and force Jax to use 64-bit precision, then this same matrix is now PD.\n\n\n```python\nfrom jax.config import config; config.update(\"jax_enable_x64\", True)\n\nx = jnp.linspace(-3., 3., 20, dtype=jnp.float64)\nK = kernel(sqexp_kernel, x, x)\nKinv = jnp.linalg.inv(K)\n\nprint(is_positive_definte(K))\nprint(is_positive_definte(Kinv))\n```\n\n True\n True\n\n\nFor this reason, we enforce 64bit precision in GPJax by loading the configuration import above when one imports GPJax. To further enforce numerical stability in the kernel matrix, we add some _jitter_ to the matrix's diagonal. In doing this, we do introduce some numerical error, but the amount is often tolerable. \n\nAgain, we can see the effect of this jitter in the above example by reducing the distance between our points in `x`. This will reduce the size of the kernel matrix's eigenvalues and increase the chance of numerical rounding being a problem, even with 64-bit precision.\n\n\n```python\nx = jnp.linspace(-3., 3., 200, dtype=jnp.float64)\nK = kernel(sqexp_kernel, x, x)\nKinv = jnp.linalg.inv(K)\n\nprint(is_positive_definte(K))\nprint(is_positive_definte(Kinv))\n```\n\n False\n False\n\n\nIf we now add a tiny amount ($10^{-12}$) of jitter to the diagonal of K, then our inversion should restabilise.\n\n\n```python\njitter = jnp.eye(K.shape[0])*1e-12\nK += jitter\nKinv = jnp.linalg.inv(K)\n\nprint(is_positive_definte(K))\nprint(is_positive_definte(Kinv))\n```\n\n True\n True\n\n\nAs can be seen, our matrix inversions are now much more stable.\n\nIn GPJax, we set the default amount of jitter as $10^{6}$. Often you'll be able to use a far smaller amount of jitter, particularly if you have a small amount of well-spaced input data. Conversely, you may need more jitter for some of your problems. Fortunately, this value can be altered upon instantiation of the GP prior through the option `jitter` argument. 
For example, this is how you could manually set the jitter to $10^{-10}$\n\n\n```python\nfrom gpjax.gps import Prior\nfrom gpjax import RBF\n\nf = Prior(RBF, jitter=1e-10)\n```\n", "meta": {"hexsha": "7b468b28f3fe86e1f0e3cbe2bc7d8eed3eac754e", "size": 28544, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "docs/nbs/technical_details.ipynb", "max_stars_repo_name": "jejjohnson/GPJax-1", "max_stars_repo_head_hexsha": "1ac2c1d5cfd52ca4d58875a83422e6230026b582", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 44, "max_stars_repo_stars_event_min_datetime": "2020-12-03T14:07:39.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-14T17:45:34.000Z", "max_issues_repo_path": "docs/nbs/technical_details.ipynb", "max_issues_repo_name": "jejjohnson/GPJax-1", "max_issues_repo_head_hexsha": "1ac2c1d5cfd52ca4d58875a83422e6230026b582", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 28, "max_issues_repo_issues_event_min_datetime": "2020-12-05T08:54:45.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-01T09:56:50.000Z", "max_forks_repo_path": "docs/nbs/technical_details.ipynb", "max_forks_repo_name": "jejjohnson/GPJax-1", "max_forks_repo_head_hexsha": "1ac2c1d5cfd52ca4d58875a83422e6230026b582", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 7, "max_forks_repo_forks_event_min_datetime": "2021-02-05T12:37:57.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-13T13:00:20.000Z", "avg_line_length": 46.262560778, "max_line_length": 957, "alphanum_fraction": 0.6334431054, "converted": true, "num_tokens": 5718, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9073122163480667, "lm_q2_score": 0.920789673173896, "lm_q1q2_score": 0.8354437191578196}} {"text": "# Section 5.6 $\\quad$ Least Squares\n\n**Recall** An $m\\times n$ linear system $A\\mathbf{x}=\\mathbf{b}$ is consistent if and only if
$\\mathbf{b}$ is in the column space of $A$ (equivalently, $A$ and the augmented matrix $[\\,A \\;\\; \\mathbf{b}\\,]$ have the same rank).
    \n\n**Question** What can we do if the system $A\\mathbf{x}=\\mathbf{b}$ is inconsistent?
We can look for the vector $\\mathbf{x}$ that makes $A\\mathbf{x}$ as close as possible to $\\mathbf{b}$, that is, the $\\mathbf{x}$ that minimizes $\\|A\\mathbf{x} - \\mathbf{b}\\|$.
    \n\n>The **least square solution** to the linear system $A\\mathbf{x}=\\mathbf{b}$ is the solution to the system
> $A^TA\\mathbf{x} = A^T\\mathbf{b}$ (the *normal equations*).
    \n\n**Remark** If $A$ is an $m\\times n$ matrix,
then $A^TA$ is an $n \\times n$ symmetric matrix and the normal equations $A^TA\\mathbf{x} = A^T\\mathbf{b}$ are always consistent. If the columns of $A$ are linearly independent, $A^TA$ is nonsingular and the least square solution is unique.
    \n\n### Example 1\n\nDetermine the least square solution to $A\\mathbf{x}=\\mathbf{b}$, where\n\\begin{equation*}\n A =\n \\left[\n \\begin{array}{cc}\n 2 & 1 \\\\\n 1 & 0 \\\\\n 0 & -1 \\\\\n -1 & 1 \\\\\n \\end{array}\n \\right],~~~~~~\n \\mathbf{b} =\n \\left[\n \\begin{array}{c}\n 3 \\\\\n 1 \\\\\n 2 \\\\\n -1\\\\\n \\end{array}\n \\right].\n\\end{equation*}\n\n\n```python\nfrom sympy import *\n\nA = Matrix([[2, 1], [1, 0], [0, -1], [-1, 1]]);\nb = Matrix([3, 1, 2, -1]);\n\nA.LDLsolve(b)\n```\n\n\n\n\n Matrix([\n [24/17],\n [-8/17]])\n\n\n\nLeast square problems often arise in constructing a mathematical model from discrete data.\n\n### Example 2\n\nThe following data shows U.S. per capita health care expenditures\n\nYear | Per Capita Expenditures (in \\$) \n-----|------\n1960 | $\\qquad\\qquad$ 143 \n1970 | $\\qquad\\qquad$ 348 \n1980 | $\\qquad\\qquad$ 1,067 \n1990 | $\\qquad\\qquad$ 2,738\n1995 | $\\qquad\\qquad$ 3,698 \n2000 | $\\qquad\\qquad$ 4,560\n\n- Determine the line of best fit to the given data.\n- Predict the per capita expenditure for the year 2005, 2010, and 2015.\n\n\n```python\nfrom sympy import *\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nA = Matrix([[1960, 1], [1970, 1], [1980, 1], [1990, 1], [1995, 1], [2000, 1]]);\nb = Matrix([143, 348, 1067, 2738, 3698, 4560]);\nlinePara = A.LDLsolve(b);\n\nplt.xlabel('Year');\nplt.ylabel('Per Capita Expenditures (in $)');\nplt.title('U.S. per capita health care expenditures');\nplt.plot(A[:,0], b, 'o', label = 'Data');\nx = np.linspace(1950, 2030, 1000);\ny = x * linePara[0] + linePara[1];\nplt.plot(x, y, label = 'Prediction');\nplt.legend();\nplt.show()\n\n2005*linePara[0] + linePara[1], 2010*linePara[0] + linePara[1], 2015*linePara[0] + linePara[1]\n```\n\n\n\n\n (14026/3, 15748/3, 17470/3)\n\n\n", "meta": {"hexsha": "26d84f61393c5574b734ca30b8dcbf8e41ce0865", "size": 4611, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Jupyter_Notes/Lecture29_Sec5-6_LeastSquares.ipynb", "max_stars_repo_name": "xiuquan0418/MAT341", "max_stars_repo_head_hexsha": "2fb7ec4e5f0771f10719cb5e4a00a7ab07c49b59", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Jupyter_Notes/Lecture29_Sec5-6_LeastSquares.ipynb", "max_issues_repo_name": "xiuquan0418/MAT341", "max_issues_repo_head_hexsha": "2fb7ec4e5f0771f10719cb5e4a00a7ab07c49b59", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Jupyter_Notes/Lecture29_Sec5-6_LeastSquares.ipynb", "max_forks_repo_name": "xiuquan0418/MAT341", "max_forks_repo_head_hexsha": "2fb7ec4e5f0771f10719cb5e4a00a7ab07c49b59", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 23.4060913706, "max_line_length": 138, "alphanum_fraction": 0.4660594231, "converted": true, "num_tokens": 800, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9207896693699845, "lm_q2_score": 0.9073122163480667, "lm_q1q2_score": 0.8354437157064842}} {"text": "$\\newcommand{\\pr}{\\textrm{Pr}}$\n$\\newcommand{\\l}{\\left}$\n$\\newcommand{\\r}{\\right}$\n$\\newcommand\\given[1][]{\\:#1\\vert\\:}$\n$\\newcommand{\\var}{\\textrm{Var}}$\n$\\newcommand{\\mc}{\\mathcal}$\n$\\newcommand{\\lp}{\\left(}$\n$\\newcommand{\\rp}{\\right)}$\n$\\newcommand{\\lb}{\\left\\{}$\n$\\newcommand{\\rb}{\\right\\}}$\n$\\newcommand{\\iid}{\\textrm{i.i.d. }}$\n$\\newcommand{\\ev}{\\textrm{E}}$\n\n# 3.1 The binomial model\n\nFor exchangeable binary data, our joint beliefs about the i.i.d. responses $Y_1, \\cdots, Y_n$\nare well approximated by\n\n* our beliefs about $\\theta = \\sum_{i=1}^N Y_i / N$\n* the model that, conditional on $\\theta$, the $Y_i$'s are i.i.d. binary random variables \nwith expectation $\\theta$\n\nIf $Y_1, \\cdots, Y_n \\given \\theta$ are i.i.d. binary$\\lp\\theta\\rp$,\n\n$$p\\lp\\theta \\given y_1, \\cdots, y_n \\rp = \\theta^{\\sum y_i} \\lp 1-\\theta \\rp^{n-\\sum y_i}\n\\times p\\lp \\theta \\rp / p\\lp y_1, \\cdots, y_n \\rp$$\n\nGiven observations $y_1, \\cdots, y_n$, we can use Bayes' rule to calculate \n$p\\lp \\theta | \\given y_1, \\cdots, y_n \\rp$. If we have a uniform prior, the denominator $p\\lp y_1, \\cdots, y_n \\rp$\nis known from calculus to be the beta distribution. An uncertain quantity $\\theta$, known \nto be between 0 and 1, has a $\\textrm{beta}\\lp a,b\\rp$ distribution if\n$$p\\lp\\theta\\rp = \\textrm{dbeta}\\lp\\theta, a, b\\rp = \n\\frac{\\Gamma\\lp a+b \\rp}{\\Gamma\\lp a \\rp \\Gamma\\lp b \\rp}\\theta^{a-1} \\lp 1-\\theta \\rp^{b-1}$$\n\nfor $0 \\leq \\theta \\leq 1$. In the case of our binary data, $a$ is the number of success (e.g. $Y_i = 1$) and \n$b$ is the number of failures (e.g. $Y_i = 0$). For a random variable with a beta distribution,\n\n* $\\textrm{mode}\\l[\\theta\\r] = \\frac{ a-1}{\\lp a-1 \\rp + \\lp b-1 \\rp}$ if $a>1$, $b>1$\n* $\\ev\\l[\\theta\\r] = \\frac{a}{a + b}$\n* $\\var\\l[\\theta\\r] = \\frac{ab}{\\lp a+b+1 \\rp\\lp a+b\\rp^2} \n= \\frac{\\ev\\l[\\theta\\r] \\ev\\l[1 - \\theta\\r]}{a + b + 1}$\n\n## 3.1.1 Inference for exchangeable binary data\n\nIf we compare the relative probabilities of any two $\\theta$ values $\\theta_a$ and $\\theta_b$ \nby calculating $\\frac{p\\lp\\theta_a \\given y_1, \\cdots, y_n\\rp}{p\\lp\\theta_b \\given y_1, \\cdots, y_n\\rp}$,\nwe will see that the resulting expression is only a function of $\\sum_{i=1}^n y_i$. This means that\n$\\sum_{i=1}^n Y_i$ contains all the information about $\\theta$ available from the data, so we say\nthat $\\sum_{i=1}^n Y_i$ is a **sufficient statistic** for $\\theta$ and $p\\lp y_1, \\cdots, y_n \\given \\theta\\rp$.\nBasically, the individual binary responses don't give any more info than knowing in total how many\nresponses were 1. It is sufficient to know $\\sum_{i=1}^n Y_i$ in order to make inferences about $\\theta$. \nIn this case where $Y_1, \\cdots, Y_n \\given \\theta$ are i.i.d. 
$\\textrm{binary}\\lp\\theta\\rp$ random variables,\nthe sufficient statistic $\\sum_{i=1}^n Y_i$ has a binomial distribution with parameters $\\lp n, \\theta\\rp$.\n\nA random variable $Y\\in \\lb 0,1,\\cdots,n\\rb$ has a $\\textrm{binomial}\\lp n, \\theta\\rp$ distribution if\n$$\\pr\\lp Y=y\\given\\theta\\rp = \\textrm{dbinom}\\lp y,n,\\theta\\rp = {{n}\\choose{y}} \\theta^y\\lp 1 - \\theta\\rp^{n-y},\ny\\in\\lb 0,1,\\cdots,n\\rb$$\n* $\\ev\\l[Y \\given\\theta\\r] = n\\theta$\n* $\\var\\l[Y\\given\\theta\\r] = n\\theta\\lp 1 - \\theta\\rp$\n\n### Posterior inference under a uniform prior distribution\n\nHaving observed $Y=y$, our task is to obtain the posterior distribution of $\\theta$:\n$$p\\lp \\theta \\given y\\rp = \\frac{p\\lp y \\given \\theta\\rp p\\lp \\theta \\rp}{p\\lp y \\rp}$$\nUsing the binomial pdf and the fact that the denominator is the beta distribution, we \nfind that \n$$p\\lp\\theta\\given y\\rp = \\textrm{beta}\\lp y + 1, n-y+1 \\rp$$\n\n### Posterior distributions under beta prior distributions\n\nThe uniform prior distribution has $p\\lp\\theta\\rp = 1$ for all $\\theta \\in \\l[0,1\\r]$. \nThis is equivalent to a beta distribution with $a=1,b=1$. Note that $\\Gamma\\lp 1 \\rp = 1$\nby convention. We saw above that if $\\theta \\sim \\textrm{beta}\\lp 1,1\\rp \\textrm{(uniform)}$\nand $Y \\sim \\textrm{binomial}\\lp n, \\theta \\rp$, then \n$\\lb \\theta \\given Y=y \\rp \\sim \\textrm{beta}\\lp 1+y, 1+n-y\\rp$. If we use a beta prior \n$\\theta \\sim \\textrm{beta}\\lp a,b \\rp$, then\n$p\\lp\\theta \\given y\\rp = \\textrm{dbeta}\\lp\\theta, a + y, b + n - y\\rp$. So the ones are \nreplaced with $a$ and $b$. We can see this from\n\\begin{align}\np\\lp \\theta \\given y\\rp &= \\frac{p\\lp \\theta \\rp p\\lp y \\given \\theta\\rp}{p\\lp y\\rp} \\\\\n&= \\frac{1}{p\\lp y \\rp} \\times \\frac{\\Gamma\\lp a+b \\rp}{\\Gamma\\lp a \\rp \\Gamma\\lp b \\rp}\\theta^{a-1} \\lp 1-\\theta \\rp^{b-1} \\times {{n}\\choose{y}} \\theta^y\\lp 1 - \\theta\\rp^{n-y}\n\\end{align}\n\nThe first part of this expression $\\frac{1}{p\\lp y \\rp}$ is the normalizing factor. The second part \ncomes from the fact that the prior $p\\lp \\theta \\rp$ is a beta distribution. The third part\nis the binomial pdf. We can then write\n\\begin{align}\np\\lp \\theta \\given y\\rp &= c\\lp n, y, a, b\\rp \\times\\theta^{a+y-1}\\lp 1 - \\theta\\rp^{b+n-y-1} \\\\\n&= \\textrm{dbeta}\\lp \\theta, a + y, b + n -y\\rp\n\\end{align}\n\nThe first line says that $p\\lp \\theta \\given y\\rp$ is proportional to \n$\\theta^{a+y-1}\\lp 1 - \\theta\\rp^{b+n-y-1}$ which has the shape of a beta distribution. \nSince $p\\lp \\theta \\given y\\rp$ and the beta density must both integrate to 1, this means\nthat $p\\lp \\theta \\given y\\rp$ and the beta density are in fact the same. This **trick** \nwill be used throughout the book:\n\n* We will recognize the posterior distribution is proportional to a known probability density,\nand therefore must equal that density.\n\n### Conjugacy\n\nWe have shown that a beta prior distribution and a binomial sampling model lead to a beta\nposterior distribution. 
We say that the class of beta priors is **conjugate** for the \nbinomial sampling model.\n\n**Definition 4 (Conjugate)**: A class $\\mathcal{P}$ of prior distributions for $\\theta$\nis called conjugate for a sampling model $p\\lp y\\given\\theta\\rp$ if \n$$p\\lp \\theta \\rp \\in \\mathcal{P} \\rightarrow p\\lp\\theta\\given y\\rp \\in \\mathcal{P}$$\nIn other words, the distribution is conjugate when the prior and posterior are in the same family.\n\nWhile conjugate priors are easy to calculate, they may not always be the best model.\nMixtures of conjugate priors are very flexible and also computationally tractable.\n\n### Combining information\n\nIf $\\theta | Y=y \\sim \\textrm{beta}\\lp a+y, b+n-y\\rp$, then $E\\l[ \\theta \\given y\\r] = \\frac{a+y}{a+b+n}$.\nThis **posterior expectation** is a combination of prior and data information:\n\\begin{align}\nE\\l[ \\theta \\given y\\r] &= \\frac{a+y}{a+b+n} \\\\\n&= \\frac{a+b}{a+b+n}\\frac{a}{a+b} + \\frac{n}{a+b+n}\\frac{y}{n} \\\\\n&=\\frac{a+b}{a+b+n} \\times \\textrm{ prior expectation } + \\frac{n}{a+b+n} \\times \\textrm{ data average}\n\\end{align}\nFor this model, the posterior expectation (or posterior mean) is a weighted average of the prior expectation\nand the sample average, with weights proportional to $a+b$ and $n$ respectively. This leads to an\ninterpretation of $a$ and $b$ as \"prior data\":\n\n* $a \\approx$ \"prior number of 1s\"\n* $b \\approx$ \"prior number of 0s\"\n* $a + b \\approx$ \"prior sample size\"\n\nIf our sample size $n$ is much larger than $a+b$, then we would probably want a majority of our\ninformation about $\\theta$ to come from the data as opposed to the prior. We can see that is the\ncase if we look back at\n$$ E\\l[ \\theta \\given y\\r] \n=\\frac{a+b}{a+b+n} \\times \\textrm{ prior expectation } + \\frac{n}{a+b+n} \\times \\textrm{ data average}$$\nWhen $n >> a+b$, $E\\l[ \\theta \\given y\\r] \\approx \\frac{y}{n}$, the data average.\n\nWe can also reparameterize $a$ and $b$ in terms of a \"prior sample size\" $n' = a + b$ and \"prior frequency\"\n$\\theta' = a / n$. This may be easier to use when we are thinking of defining our prior because we can\ndefine what we think the frequency might be ($\\theta'$) and how many samples worth of data we are sure\n($n'$). In particular, we can choose $n'$ based on how many samples we know we will get so we can sort\nof specify how much weight we are putting on our prior.\n\n### Prediction\n\nWe can create a predictive distribution for data we have yet to observe.\nLet's consider binary data with $y_1, \\cdots, y_n$ as the observed outcomes\nof $n$ binary random variables. Let $\\tilde{Y} \\in \\lb0, 1\\rb$ be an additional\noutcome yet to be observed. The **predictive distributon** of $\\tilde{Y}$ is\nthe conditional distribution of $\\tilde{Y}$ given $\\lb Y_1=y_1, \\cdots, Y_n=y_n\\rb$.\nFor conditionally i.i.d. 
binary variables this distribution can be derived from\nthe distribution of $\\tilde{Y}$ given $\\theta$ and the posterior distribution of \n$\\theta$: \n\\begin{align}\n\\pr\\lp\\tilde{Y} = 1 \\given y_1, \\cdots, y_n\\rp &= \\ev\\l[\\theta \\given y_1, \\cdots, y_n\\r] \\\\\n&= \\frac{a + \\sum_{i=1}^n y_i}{a+b+n} \\\\\n\\pr\\lp\\tilde{Y} = 0 \\given y_1, \\cdots, y_n\\rp &= \\frac{a + \\sum_{i=1}^n \\lp 1-y_i \\rp}{a+b+n}\n\\end{align}\n\nYou can see that the predictive distribution does not rely on unknown quantities but does\nrely on the observed data which makes sense.\n\n## 3.1.2 Confidence regions\n\n**Definition 5 (Bayesian coverage)**: An interval $\\l[l\\lp y\\rp, u\\lp y\\rp\\r]$, based on the\nobserved data $Y=y$, has 95% Bayesian coverage for $\\theta$ if \n$$\\pr \\lp l\\lp y\\rp \\leq \\theta \\leq u\\lp y\\rp \\given Y=y\\rp =0.95$$\n\n**Definition 6 (frequentist coverage)**: A random interval $\\l[l\\lp Y\\rp, u\\lp Y\\rp\\r]$\nhas 95% frequentist coverage for $\\theta$ if \n$$\\pr \\lp l\\lp Y\\rp \\leq \\theta \\leq u\\lp Y\\rp \\given \\theta\\rp =0.95$$\n\nIn a sense, the frequentist and Bayesian notions of coverage describe pre- and post-experimental\ncoverage respectively. \n\nMy interpretation of this is that when you calculate a frequentist coverage region (same\nas confidence interval as far as I know), the interpretation is based on *future* experiments. \nFor instance, typically we think of the frequentist 95% confidence interval as having a 95%\nchance of encompassing the true parameter value. However, the confidence interval actually means\nthat there is a 95% chance that the confidence interval from a *future* experiment will contain\nthe true parameter value. When we perform the experiment, the formula for the confidence interval\nis specified before the experiment, so the experiment itself is the \"future\" experiment. The\ncalculation of the confidence interval is the realization of the idea that the confidence interval\ncalculated from this \"future\" experiment has a 95% chance of containing the true parameter value.\n\nIn other words, in the frequentist case, you believe that there is a \"true\" value of $\\theta$, \nso before you even do your experiment,\nyou expect that the resulting confidence interval will contain the \"true\" value of $\\theta$\nwith 95% probability.\n\nThe types of intervals constructed in this book are equal to the frequentist confidence intervals\nasymptotically. Bayesian confidence intervals are also known as **credible regions** or credible intervals.\n\n### Quantile-based interval\n\nTo make a $100 \\times \\lp 1- \\alpha\\rp \\%$ quantile-based confidence interval,\nfind numbers $\\theta_{\\alpha / 2} < \\theta_{1 - \\alpha / 2}$ such that\n\n1. $\\pr\\lp \\theta \\leq \\theta_{\\alpha / 2} \\given Y=y\\rp = \\alpha / 2$\n2. $\\pr\\lp \\theta \\geq \\theta_{1 - \\alpha / 2} \\given Y=y\\rp = \\alpha / 2$\n\nThe numbers $\\theta_{\\alpha / 2}$, $\\theta_{1 - \\alpha / 2}$ are the $\\alpha / 2$\nand $1 - \\alpha / 2$ posterior quantiles of $\\theta$, so \n$$\\pr\\lp \\theta \\in \\l[\\theta_{\\alpha / 2}, \\theta_{1 - \\alpha / 2}\\r] \\given Y=y \\rp = 1 - \\alpha$$\n\n### Highest posterior density (HPD) region\n\n**Definition 7 (HPD region)**: A $100 \\times \\lp 1-\\alpha\\rp\\%$ HPD region consists of a \nsubset of the parameter space, $s\\lp y\\rp \\subset \\Theta$ such that \n\n1. $\\pr\\lp\\theta \\in s\\lp y\\rp \\given Y=y\\rp = 1 - \\alpha$\n2. 
If $\\theta_a \\in s\\lp y \\rp$, and $\\theta_b \\notin s\\lp y \\rp$, then \n$p\\lp \\theta_a \\given Y=y\\rp > p\\lp\\theta_b \\given Y=y\\rp$\n\nThe first point says that the subset $s\\lp y\\rp$ contains $1-\\alpha$ of the posterior probability.\nThis is the same as the $1-\\alpha$ confidence interval. The second point says that every point in\n$s\\lp y \\rp$ has a higher posterior probability than every point outside of $s\\lp y \\rp$. This\nis not necessarily true for the $1-\\alpha$ confidence interval. If the posterior is bimodal, the HPD\nmay not be a continuous region. You can imagine calculate the HPD by starting a horizontal line\nat the top of a plot of the posterior distribution and moving it down until there is $1-\\alpha$ of the\nposterior above the line. This would be the HPD region.\n\n# 3.2 The Poisson model\n\nWhen our sample space is $\\mathcal{Y} = \\lb 0, 1, 2, \\cdots \\rb$, perhaps the simplest model\nis the Poisson model. A random variable $Y$ has a Poisson distribution with mean $\\theta$ if\n$$p\\lp Y=y \\given \\theta \\rp = \\textrm{dpois}\\lp y,\\theta\\rp = \\theta^ye^{-\\theta}/y!$$\nfor $y \\in \\lb 0, 1, 2, \\cdots \\rb$. A Poisson random variable has\n\n* $\\ev \\l[Y\\given\\theta\\r] = \\theta$\n* $\\textrm{Var}\\l[Y\\given\\theta\\r] = \\theta$\n\nThe Poisson family of distributions is said to have a mean-variance relationship because\nif one Poisson has a larger mean than another, it will also have a larger variance.\n\n## 3.2.1 Posterior inference\n\nIf we model $Y_1, \\cdots, Y_n$ as i.i.d. Poisson with mean $\\theta$, then the joint pdf\nof our sample data is\n\\begin{align}\n\\pr\\lp Y_1=y_1, \\cdots, Y_n=y_n \\given\\theta\\rp &= \\prod_{i=1}^n p\\lp y_i\\given\\theta\\rp \\\\\n&=\\prod_{i=1}^n\\frac{1}{y_i!}\\theta^{y_i}e^{-\\theta} \\\\\n&=c\\lp y_1, \\cdots, y_n \\rp\\theta^{\\sum y_i}e^{-n\\theta}\n\\end{align}\nWe can compare two values of $\\theta$ *a posteriori* and see\n\\begin{align}\n\\frac{p\\lp \\theta_a \\given y \\rp}{p \\lp \\theta_b \\given y \\rp} &= \n\\frac{e^{-n\\theta_a} \\theta_a^{\\sum y_i} p\\lp\\theta_a\\rp}{e^{-n\\theta_b} \\theta_b^{\\sum y_i} p\\lp\\theta_b\\rp}\n\\end{align}\nAs with the binary model, $\\sum_{i=1}^n Y_i$ contains all the information about $\\theta$ that is \navailable in the data, so $\\sum_{i=1}^n Y_i$ is a sufficient statistic. Furthermore,\n$\\lb\\sum_{i=1}^n Y_i\\rb \\sim \\textrm{Poisson}\\lp n\\theta\\rp$.\n\n### Conjugate prior\n\nA class of prior densities is conjugate for the sampling model $p\\lp y_1, \\cdots, y_n\\rp$ if the\nposterior distribution is also in the class. For the Poisson sampling model, our posterior for\n$\\theta$ has the following form\n\\begin{align}\np\\lp y_1, \\cdots, y_n\\rp &\\propto p\\lp \\theta\\rp \\times p\\lp y_1, \\cdots, y_n \\given \\theta\\rp\n&\\propto p\\lp \\theta\\rp \\times \\theta^{\\sum y_i} e^{-n\\theta}\n\\end{align}\nWe need to find a conjugate class of densities. The simplest class of densities of this form\nis known as the family of gamma distributions.\n\n### Gamma distribution\n\nAn uncertain positive quantity $\\theta$ has a $\\textrm{gamma}\\lp a,b \\rp$ distribution if \n$$p\\lp \\theta \\rp = \\textrm{dgamma}\\lp\\theta,a,b\\rp = \\frac{b^a}{\\Gamma\\lp a\\rp} \\theta^{a-1}e^{-b\\theta}$$\nfor $\\theta,a,b > 0$. 
For such a random variable\n\n* $\\ev\\l[\\theta\\r] = a/b$\n* $\\var\\l[\\theta\\r] = a/b^2$\n* $\\textrm{mode}\\l[\\theta\\r] = \\begin{cases} \\lp a-1\\rp/b & \\mbox{if }a>1 \\\\\n0 & \\mbox{if }a\\leq 1\\end{cases}$\n\n### Posterior distribution of $\\theta$\n\nThe gamma family is conjugate for the Poisson sampling model:\n$$\n\\begin{array}{c}\n\\theta \\sim \\textrm{gamma}\\lp a,b \\rp \\\\ \nY_1, \\cdots, Y_n \\given \\theta \\sim \\textrm{Poisson}\\lp\\theta\\rp\n\\end{array} \n\\Rightarrow\n\\lb \\theta \\given Y_1, \\cdots, Y_n\\rb \\sim \\textrm{gamma}\\lp a + \\sum_{i=1}^n Y_i, b+n\\rp\n$$\nThe posterior expectation of the $\\theta$ is a convex combination of the prior expectation\nand the sample average\n$$\\ev\\l[\\theta\\given y_1, \\cdots, y_n\\r] = \\frac{b}{b+n}\\frac{a}{b} + \\frac{n}{b+n}\\frac{\\sum y_i}{n}$$\n\n* $b$ is interpreted as the number of prior observations\n* $a$ is interpreted as the sum of counts from $b$ prior observations\n\nAs for the binary model, the information from the data dominates for large $n$.\n\nWe can derive the posterior predictive distribution as\n$$p\\lp\\tilde{y} | y_1, \\cdots, y_n\\rp =\n\\frac{\\Gamma\\lp a + \\sum y_i + \\tilde{y}\\rp}{\\Gamma\\lp\\tilde{y} + 1\\rp\\Gamma\\lp a + \\sum y_i\\rp}\n\\lp\\frac{b + n}{b + n + 1}\\rp^{a + \\sum y_i}\\lp\\frac{1}{b+n_1}\\rp^\\tilde{y}$$\nfor $y\\in\\lb 0,1,2,\\cdots\\rb$. This is the negative binomial distribution with parameters\n$\\lp a+\\sum y_i, b+n\\rp$ for which \n\n* $\\ev\\l[\\tilde{Y} \\given y_1, \\cdots, y_n\\r] = \\frac{a + \\sum y_i}{b + n} = \\ev\\l[\\theta \\given y_1, \\cdots, y_n\\r]$\n* $\\var\\l[\\tilde{Y} \\given y_1, \\cdots, y_n\\r] = \\frac{a + \\sum y_i}{b + n}\\frac{b+n+1}{b+n} =\n\\var\\l[\\theta \\given y_1, \\cdots, y_n\\r] \\times \\lp b+n+1\\rp = \\ev\\l[\\tilde{Y} \\given y_1, \\cdots, y_n\\r] \\times\n\\frac{b+n+1}{b+n}$\n\nThe predictive variance is to some extent a measure of our posterior uncertainty about a new sample\n$\\tilde{Y}$ from the population. Uncertainty about $\\tilde{Y}$ stems from uncertainty about the population\n(uncertainty in our estimate of $\\theta$) and variability in sampling. For large $n$, the uncertainty about\n$\\theta$ is small ($\\lp b+n+1\\rp/\\lp b+n\\rp \\approx 1$) and so the predictive variability \nbecomes $\\theta$, the sampling variability from the Poisson. For small $n$, the uncertainty for $\\tilde{Y}$\nincludes additional uncertainty about $\\theta$.\n\n# 3.3 Exponential families and conjugate priors\n\nThe binomial and Poisson models are both instances of one-parameter **exponential family models**.\nA one-parameter exponential family model is any model whose densities can be expressed as\n$$p\\lp y\\given \\phi\\rp = h\\lp y\\rp c\\lp\\phi\\rp e^{\\phi t\\lp y\\rp}$$ where $\\phi$ is the unknown\nparameter and $t\\lp y\\rp$ is the sufficient statistic.\n\nCombining priors of the form \n$$p\\lp \\phi \\given n_0, t_0\\rp = \\kappa\\lp n_0, t_0\\rp c\\lp \\phi \\rp^{n_0} e^{n_0 t_0 \\phi}$$\nwith information from $Y_1, \\cdots, Y_n \\sim \\iid p\\lp y\\given \\phi\\rp$ yields the following\nposterior distribution\n\\begin{align}\np\\lp \\phi \\given y_1, \\cdots, y_n\\rp &\\propto p\\lp\\phi\\rp p\\lp y_1, \\cdots, y_n \\given \\phi\\rp \\\\\n&\\propto c\\lp \\phi\\rp^{n_0 + n} \\exp\\lb\\phi\\times\\l[ n_0t_0 + \\sum_{i=1}^n t\\lp y_i \\rp\\r]\\rb \\\\\n&\\propto p\\lp \\phi \\given n_0 + n, n_0t_0 + n\\bar{t}\\lp \\mathbf{y} \\rp\\rp\n\\end{align}\nwhere $t\\lp \\mathbf{y} \\rp = \\sum t\\lp y_i \\rp/n$. 
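To make the general form concrete, here is the Poisson model of section 3.2 written in this notation (the identification below is a standard exercise, added for illustration):

$$ p\lp y\given \theta\rp = \frac{1}{y!}\, e^{-\theta}\, e^{y \log\theta},
\qquad \phi = \log\theta, \quad t\lp y\rp = y, \quad c\lp\phi\rp = \exp\lp -e^{\phi}\rp, \quad h\lp y\rp = \frac{1}{y!} . $$

With this choice the prior $p\lp\phi\given n_0, t_0\rp \propto \exp\lp -n_0 e^{\phi} + n_0 t_0 \phi\rp$ corresponds, after changing variables back to $\theta$, to a $\textrm{gamma}\lp n_0 t_0,\, n_0\rp$ prior, which matches the interpretation of $a$ and $b$ given in section 3.2.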
The similarity between the posterior and prior\ndistributions suggests that $n_0$ can be interpreted as a \"prior sample size\" and $t_0$ as a \n\"prior guess\" of $t\\lp Y\\rp$. In fact\n$$\\ev\\l[t\\lp Y\\rp\\r] = \\ev\\l[\\ev\\l[t\\lp Y\\rp \\given \\phi\\r]\\r] =\n\\ev\\l[-c'\\lp \\phi\\rp/c\\lp\\phi\\rp\\r] = t_0$$\nso $t_0$ represents the expected value of $t\\lp Y\\rp$. The parameter $n_0$ is a measure of how\ninformative the prior is. If you want to think of the relative strength of the prior versus the\ndata, you can think of it that the prior distribution $p\\lp \\phi \\given n_0,t_0\\rp$\ncontains the same information as $n_0$ independent samples from the population.\n\nYou can take the binomial and Poisson pdfs and rewrite them in the exponential family form.\n\n\n```python\n\n```\n", "meta": {"hexsha": "c9d71b8d6d39486f06eb7a106c1fa0dbda9393e5", "size": 23529, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "readings/A First Course in Bayesian Statistical Methods/3. One-parameter models.ipynb", "max_stars_repo_name": "rivas-lab/presentations-readings", "max_stars_repo_head_hexsha": "3d9af7fba9c01ec67e80c12468106ab2cbd0ab24", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2017-02-09T19:23:06.000Z", "max_stars_repo_stars_event_max_datetime": "2017-03-09T18:23:37.000Z", "max_issues_repo_path": "readings/A First Course in Bayesian Statistical Methods/3. One-parameter models.ipynb", "max_issues_repo_name": "rivas-lab/reading", "max_issues_repo_head_hexsha": "3d9af7fba9c01ec67e80c12468106ab2cbd0ab24", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "readings/A First Course in Bayesian Statistical Methods/3. One-parameter models.ipynb", "max_forks_repo_name": "rivas-lab/reading", "max_forks_repo_head_hexsha": "3d9af7fba9c01ec67e80c12468106ab2cbd0ab24", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 60.4858611825, "max_line_length": 211, "alphanum_fraction": 0.5795826427, "converted": true, "num_tokens": 6402, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9449947086083138, "lm_q2_score": 0.8840392939666335, "lm_q1q2_score": 0.8354124550002983}} {"text": "## Setting up python environment\n\n\n```python\n# reset all previously defined varibles\n%reset -f\n\n# import everything from sympy moduleb \nfrom sympy import *\n\n# pretty math formatting\ninit_printing() # latex\n```\n\n### Symbolic variables must be declared\n\n\n```python\nx,y,z = symbols('x y z')\nt = symbols('t')\n```\n\n## Example\n\nEvaluate the line integral $\\int_C y \\, ds$ along the parabola $y = 2\\sqrt{x}$ from $A(3,2\\sqrt{3})$ to $B (24, 4\\sqrt{6})$.\n\n\n```python\nx = t\ny = 2*sqrt(t)\n\nf = y \n\ns = sqrt( diff(x,t)**2 + diff(y,t)**2 )\n\n# integration\nintegrate(f*s,[t,3,24])\n\n```\n\n---------------------------------\n## Example \n\nFind the work done by the vector field $\\mathbf{F} = r^2 \\mathbf{i}$ in moving a particle along the helix $x = \\cos{t}, y = \\sin{t}, z = t$ from the point $(1,0,0)$ to $(1,0,4 \\pi)$.\n\nNote: $r^2 = x^2 + y^2 + z^2$\n\n\n\n```python\nx = cos(t)\ny = sin(t)\nz = t\n\nF = [x**2 + y**2 + z**2 , 0 , 0]\n\nintegrate(\n F[0]*diff(x,t) + F[1]*diff(y,t) + F[2]*diff(z,t),\n [t,0,4*pi]\n )\n\n\n```\n\n---------------------------\n## Example\n\nEvaluate $\\displaystyle \\int_A^B 2xy\\,dx + (x^2 - y^2)\\, dy$ along the arc of the circle $x^2 + y^2 = 1$ in the first quadrant from $A(1,0)$ to $B(1,0)$.\n\n\n```python\nx = cos(t)\ny = sin(t)\n\nF = [2*x*y, x**2 - y**2]\n\nintegrate(\n F[0]*diff(x,t) + F[1]*diff(y,t),\n [t,0,pi/2]\n )\n```\n\n-------------------\n## Example\n\nProve that $\\displaystyle \\mathbf{F} = (y^2 \\cos{x} + z^3)\\mathbf{i} + (2y \\sin{x} - 4)\\mathbf{j} +(3xz^2 + z) \\mathbf{k}$ is a conservative force field. Hence find the work done in moving an object in this field from point $(0,1,-1)$ to $(\\pi/2, -1, 2)$.\n\n### Note:\n\nIf a vector field $\\mathbf{F} = F_1 \\mathbf{i} + F_2 \\mathbf{j} + F_3 \\mathbf{k\n} $ is conservative then\n\n$$\n\\mathbf{curl} (\\mathbf{F}) = \\left| \n\\begin{array}{ccc}\n\\mathbf{i} & \\mathbf{j} & \\mathbf{k} \\\\\n\\frac{\\partial}{\\partial x} & \\frac{\\partial}{\\partial y} & \\frac{\\partial}{\\partial z} \\\\\nF_1 & F_2 & F_3\n\\end{array}\n\\right| = \\vec{0}\n$$\n\n\n\n```python\n# reset variables from previous examples\nx,y,z = symbols('x y z') \n\ndef curl(F):\n c1 = diff(F[2],y) - diff(F[1],z)\n c2 = diff(F[0],z) - diff(F[2],x)\n c3 = diff(F[1],x) - diff(F[0],y)\n return [c1,c2,c3]\n```\n\n\n```python\nF = [(y**2 *cos(x) + z**3), 2*y* sin(x) - 4. , 3*x*z**2 + z ]\n```\n\n\n```python\ncurl(F)\n\n```\n\nZero curl implies conservative vector field. \n\n-----------------------------------------\n\n## Example\n\nEvaluate \n$$\n\\underset{R}{\n\\int\\!\\!\\!\\!\\int} (x^2 + y^2) \\, dA\n$$\nover the triangle with vertices $(0,0)$, $(2,0)$, and $(1,1)$.\n\n\n\n\n\n```python\nx,y = symbols('x y')\n\nintegrate( \n integrate(x**2 + y**2, [x,y,2-y]),\n [y,0,1])\n\n```\n\n--------------------\n\n## Example\n\nEvaluate \n\n$$\n\\underset{R}{\n\\int \\!\\!\\! 
\\int} (x + 2y )^{-1/2} \\, dA\n$$\nover the region $x - 2y \\le 1$ and $x \\ge y^2 +1$.\n\n\n\n\n```python\nx,y = symbols('x y')\n\n\n# does not work in one shot\nintegrate((x + 2*y)**(-Rational(1,2)), [x, y**2+1, 1+2*y])\n\n```\n\n\n```python\n#manually simlipy one radical\n\nintegrate(2*sqrt(4*y + 1) - 2*(y+1),[y,0,2])\n```\n\n\n```python\nintegrate(\nintegrate(x**2 + y**2, [y,0,2 - x]),\n [x,1,2])\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "5763fd498f099a86251a6af739c14520760eb518", "size": 13887, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "sympy/integration.ipynb", "max_stars_repo_name": "krajit/krajit.github.io", "max_stars_repo_head_hexsha": "221c8bcdf0612b3ae28c827809aa309ea6a7b0c2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2018-09-29T07:40:07.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-28T22:17:04.000Z", "max_issues_repo_path": "sympy/integration.ipynb", "max_issues_repo_name": "krajit/krajit.github.io", "max_issues_repo_head_hexsha": "221c8bcdf0612b3ae28c827809aa309ea6a7b0c2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2019-08-26T08:42:47.000Z", "max_issues_repo_issues_event_max_datetime": "2019-08-26T09:48:21.000Z", "max_forks_repo_path": "sympy/integration.ipynb", "max_forks_repo_name": "krajit/krajit.github.io", "max_forks_repo_head_hexsha": "221c8bcdf0612b3ae28c827809aa309ea6a7b0c2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 8, "max_forks_repo_forks_event_min_datetime": "2017-09-09T23:32:09.000Z", "max_forks_repo_forks_event_max_datetime": "2020-01-28T21:11:39.000Z", "avg_line_length": 30.0584415584, "max_line_length": 1394, "alphanum_fraction": 0.5907683445, "converted": true, "num_tokens": 1219, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.961533812390815, "lm_q2_score": 0.8688267813328977, "lm_q1q2_score": 0.8354063273622622}} {"text": "#### Jupyter notebooks\n\nThis is a [Jupyter](http://jupyter.org/) notebook using Python. You can install Jupyter locally to edit and interact with this notebook.\n\n# Finite difference/collocation methods\n\nConsider the boundary value problem\n\n$$ \\begin{gather} -\\frac{d^2 u}{dx^2} = f(x) \\quad x \\in \\Omega = (-1,1) \\\\\nu(-1) = a \\quad \\frac{du}{dx}(1) = b . \\end{gather} $$\n\n$f(x)$ is the \"forcing\" term and we have a Dirichlet boundary condition at the left end of the domain and a Neumann condition on the right end. We need to choose\n* how to represent $u(x)$, including evaluating it on the boundary,\n* how to compute derivatives of $u$,\n* in what sense to ask for the differential equation to be satisfied,\n* where to evaluate $f(x)$ or integrals thereof,\n* how to enforce boundary conditions.\n\n## The framework\n\n* Finite difference (FD) methods choose to represent the function $u(x)$ by its values $u_i = u(x_i)$ at a discrete set of points $$ -1 = x_0 < x_1 < \\dotsb < x_n = 1 . $$\nThe FD framework does not uniquely specify the solution values at points outside this discrete set.\n* FD computes derivatives at $x_i$ via differencing formulas involving a finite number of neighbor points (independent of the total number of points $n$). 
This will be our main focus.\n* FD methods ask for the differential equation to be satisfied pointwise at each $x_i$ in the interior of the domain.\n* FD methods evaluate the forcing term $f$ pointwise at $x_i$.\n* FD methods approximate derivatives at discrete boundary points ($x_n = 1$ above), typically using one-sided differencing formulas.\n\n## Differencing formulas\n\nHow can we compute $\\frac{du}{dx}(x_i)$ using $u_i$ and $u_{i+1}$? How accurate is this approximation?\n\n### Taylor series\n\nWithout loss of generality, we may assume $x_i = 0$ by shifting our function. For notational convenience, we will also define $h = x_{i+1} - x_i$.\nTo determine the order of accuracy, we represent the function $u(x)$ by its Taylor series\n$$ u(x) = u(0) + u'(0)x + u''(0)x^2/2! + O(x^3)$$\nand substitute into the differencing formula\n$$ \\begin{split} u'(0) \\approx \\frac{u(h) - u(0)}{h} = h^{-1} \\Big( u(0) + u'(0) h + u''(0)h^2/2 + O(h^3) - u(0) \\Big) \\\\\n= u'(0) + u''(0)h/2 + O(h^2) . \\end{split}$$\nEvidently the error in this approximation is $u''(0)h/2 + O(h^2)$. We say this method is *first order accurate*.\n\n\n```python\n%matplotlib inline\nimport numpy\nfrom matplotlib import pyplot\npyplot.style.use('ggplot')\n\nn = 10\nh = 2/(n-1)\nx = numpy.linspace(-1,1,n)\nu = numpy.sin(x)\npyplot.plot(x[:-1], (u[1:] - u[:-1]) / h, 'o')\npyplot.plot(x[:-1], numpy.cos(x[:-1]));\n```\n\nThis \"has the right shape\", but the numerical approximation is \"shifted\" to the left of the analytic derivative. If we differenced to the left, it would be shifted the other way.\n\n#### Is the centered difference formula better?\n\nHere we try\n\n$$ \\begin{split} u'(0) \\approx \\frac{u(h) - u(-h)}{2h} \\\\\n= (2h)^{-1} \\Big( \\big[ u(0) + u'(0)h + u''(0)h^2/2 + u'''(0)h^3/6 + O(h^4) \\Big] - \\big[ u(0) - u'(0)h + u''(0)h^2/2 - u'''(0)h^3/6 + O(h^4) \\big] \\Big) \\\\\n= u'(0) + u'''(0)h^2/6 + O(h^3) \\end{split} $$\n\n\n```python\ndef diff1l(x, u):\n return x[:-1], (u[1:] - u[:-1]) / (x[1:] - x[:-1])\n\ndef diff1r(x, u):\n return x[1:], (u[1:] - u[:-1]) / (x[1:] - x[:-1])\n\ndef diff1c(x, u):\n return x[1:-1], (u[2:] - u[:-2]) / (x[2:] - x[:-2])\n\nfor diff in (diff1l, diff1r, diff1c):\n xx, yy = diff(x, u)\n pyplot.plot(xx, yy, 'o', label=diff.__name__)\npyplot.plot(x, numpy.cos(x), label='exact')\npyplot.legend(loc='upper right');\n```\n\n\n```python\ngrids = 2**numpy.arange(3,10)\ndef grid_refinement_error(f, fp, diff):\n error = []\n for n in grids:\n x = numpy.linspace(-1, 1, n)\n xx, yy = diff(x, f(x))\n error.append(numpy.linalg.norm(yy - fp(xx), numpy.inf))\n return grids, error\n\nfor diff in (diff1l, diff1r, diff1c):\n ns, error = grid_refinement_error(numpy.sin, numpy.cos, diff)\n pyplot.loglog(1/ns, error, 'o', label=diff.__name__)\nhs = 2 / (grids - 1)\npyplot.loglog(hs, hs, label='$h$')\npyplot.loglog(hs, hs**2, label='$h^2$')\npyplot.xlabel('Resolution $h$')\npyplot.ylabel('Derivative error')\npyplot.legend(loc='upper right');\n```\n\n#### Stability\n\nAre there \"rough\" functions for which these differencing formulas estimate $u'(x_i) = 0$?\n\n\n```python\nx = numpy.linspace(-1, 1, 9)\nxfine = numpy.linspace(-1, 1, 200)\ndef f_rough(x):\n return numpy.cos(.5+4*numpy.pi*x)\ndef fp_rough(x):\n return -4*numpy.pi * numpy.sin(.5+4*numpy.pi*x)\n\npyplot.plot(x, f_rough(x), 'o-')\npyplot.plot(xfine, f_rough(xfine), '-');\n```\n\n\n```python\nfor diff in (diff1r, diff1l, diff1c):\n xx, yy = diff(x, f_rough(x))\n pyplot.plot(xx, yy, 'o', label=diff.__name__)\npyplot.plot(xfine, fp_rough(xfine), 
label='exact')\npyplot.legend(loc='upper right');\n```\n\nWe have this function $f_{\\text{rough}}(x)$ that is rough at the grid scale and has derivative identically zero according to the centered difference formula. Except perhaps for boundary conditions (which we'll consider later), given any solution $u(x)$ to a differential equation discretized using the centered difference approximation, $$\\tilde u(x) = u(x) + f_{\\text{rough}}(x)$$ would also be a solution. This non-uniqueness is a disaster for numerical algorithms and in most cases, the centered difference formula is not used directly to compute first derivatives.\n\nWhen assessing a discretization we require **both accuracy and stability**.\n\n### Second derivatives\n\nWe will need at least three grid points to compute a second derivative.\n* Why?\n\nAgain, there is more than one possible approach.\n* One method is to use a first derivative formula to compute $u'(x_{i+1})$ and $u'(x_{i-1})$, then apply the centered first derivative again.\n* Another is to define some \"staggered\" points $x_{i+1/2}$ and $x_{i-1/2}$.\n\nWhy should we choose one over the other? Can we understand this in terms of accuracy and stability?\n\n\n```python\ndef diff2a(x, u):\n xx, yy = diff1c(x, u)\n return diff1c(xx, yy)\n\ndef diff2b(x, u):\n xx, yy = diff1l(x, u)\n return diff1r(xx, yy)\n\nx = numpy.linspace(-1, 1, 11)\nu = numpy.cos(x)\nfor diff2 in (diff2a, diff2b):\n xx, yy = diff2(x, u)\n pyplot.plot(xx, yy, 'o', label=diff2.__name__)\npyplot.plot(x, -numpy.cos(x), label='exact')\npyplot.legend(loc='upper right');\n```\n\n\n```python\ndef grid_refinement_error2(f, fpp, diff):\n error = []\n for n in grids:\n x = numpy.linspace(-1, 1, n)\n xx, yy = diff(x, f(x))\n error.append(numpy.linalg.norm(yy - fpp(xx), numpy.inf))\n return grids, error\n\nfor diff2 in (diff2a, diff2b):\n ns, error = grid_refinement_error2(numpy.cos, lambda x: -numpy.cos(x), diff2)\n pyplot.loglog(ns, error, 'o', label=diff.__name__)\npyplot.loglog(grids, (grids-1)**(-1.), label='$h$')\npyplot.loglog(grids, (grids-1)**(-2.), label='$h^2$')\npyplot.loglog(grids, .25*(grids-1)**(-2.), label='$h^2/4$')\npyplot.xlabel('Resolution $n$')\npyplot.ylabel('Second derivative error')\npyplot.legend(loc='upper right');\n```\n\n#### Observations\n\n* Both methods are second order accurate\n* The `diff2b` method is more accurate than `diff2a` by a factor of 4\n* The `diff2a` method cannot compute the derivative at points next to the boundary\n* We don't know yet whether either method is stable\n\n## Differentiation as matrices\n\nSo far we have written functions of the form `diff(x, u)` that compute derivatives. These functions happen to have been linear in `u`. We should be able to write differentiation as a matrix $D$ such that\n\n$$ u'(x) = D u(x) $$\n\nwhere $x$ is the vector of $n$ discrete points, thus $u(x)$ is also a vector of length $n$.\n\n### Homework 1: Due 2018-09-16 (Sunday)\n\n1. Fork the class repository, clone, and create a directory `hw1` inside the repository. Add your source file(s) to that directory.\n2. Write a function `diffmat(x)` that returns a square matrix $D$ that computes first derivatives at all points.\n3. Write a function `diff2mat(x)` that returns a square matrix $D_2$ that computes second derivatives at all points.\n4. Use test solutions to determine the order of accuracy of your methods for evenly and non-evenly spaced points. Which norm did you use?\n5. 
Add `README.md` in the `hw1` directory and summarize your results (one paragraph or a few bullet items is fine).\n6. Commit (`git commit`) your source code and `README.md` in the `hw1` directory and push to your fork. I'll pull from there after the due date.\n\n\n* You may assume that the points `x` are monotonically increasing.\n* You'll need to think about what to do at the endpoints.\n\n## Boundary conditions\n\nOur boundary value problem states a differential equation to be satisfied in the interior of the domain, combined with Dirichlet conditions at the left endpoint and Neumann at the right endpoint.\n\n### Dirichlet\nThe left endpoint in our example BVP has a Dirichlet boundary condition,\n$$u(-1) = a . $$\nWith finite difference methods, we have an explicit degree of freedom $u_0 = u(x_0 = -1)$ at that endpoint.\nWhen building a matrix system for the BVP, we can implement this boundary condition by modifying the first row of the matrix,\n$$ \\begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\\\ \\\\ & & A_{1:,:} & & \\\\ \\\\ \\end{bmatrix} \\begin{bmatrix} u_0 \\\\ \\\\ u_{1:} \\\\ \\\\ \\end{bmatrix} = \\begin{bmatrix} a \\\\ \\\\ f_{1:} \\\\ \\\\ \\end{bmatrix} . $$\n\n* This matrix is not symmetric even if $A$ is.\n* We can eliminate $u_0$ and create a reduced system for $u_{1:}$. We will describe this for a more general $2\\times 2$ block system of the form\n$$ \\begin{bmatrix} I & 0 \\\\ A_{10} & A_{11} \\end{bmatrix} \\begin{bmatrix} u_0 \\\\ u_1 \\end{bmatrix} = \\begin{bmatrix} f_0 \\\\ f_1 \\end{bmatrix} .$$\nWe can rearrange as\n$$ A_{11} u_1 = f_1 - A_{10} f_0 $$\nwhich is symmetric if $A_{11}$ is. This is sometimes called \"lifting\" and is often done implicitly in the mathematics literature. It is convenient for linear solvers and eigenvalue solvers, but inconvenient for IO and postprocessing, as well as some nonlinear problems. For this reason, it may be preferable to write\n$$ \\begin{bmatrix} I & 0 \\\\ 0 & A_{11} \\end{bmatrix} \\begin{bmatrix} u_0 \\\\ u_1 \\end{bmatrix} = \\begin{bmatrix} f_0 \\\\ f_1 - A_{10} f_0 \\end{bmatrix} $$\nwhich is symmetric and entirely decouples the degrees of freedom associated with the boundary. This method turns out to be relatively elegant for nonlinear solvers.\n\n* It is sometimes useful to scale the identity by some scalar related to the norm of $A_{11}$.\n\n## Neumann\nThe right endpoint in our example BVP has a Neumann boundary condition,\n$$ \\frac{du}{dx}(1) = b . $$\nWith finite difference methods, there are generally two ways to derive an equation for the value $u_n = u(x_n = 1)$. The first is to use a one-sided difference formula as in\n$$ \\frac{u_n - u_{n-1}}{h} = b . $$\nThis requires an extra discretization choice and it may not be of the same order of accuracy as the interior discretization and may destroy symmetry.\n\nAn alternative is to temporarily introduce a ghost value $u_{n+1} = u(x_{n+1} = 1 + h)$ (possibly more) and define it to be a reflection of the values from inside the domain. In the case $b=0$, this reflection is $u_{n+i} = u_{n-i}$. More generally, we can bias the reflection by the slope $b$,\n$$ u_{n+i} = u_{n-i} + 2b(x_n - x_{n-i}) . $$\nAfter this definition of ghost values, we apply the interior discretization at the boundary. 
For our reference equation, we would write\n\n$$ \\frac{-u_{n-1} + 2 u_n - u_{n+1}}{h^2} = f(x_n) $$\n\nwhich simplifies to $$ \\frac{u_n - u_{n-1}}{h^2} = f(x_n)/2 + b/h $$\nafter dividing by 2 and moving the boundary term to the right hand side.\n\n\n```python\ndef laplacian(n, rhsfunc, a, b):\n x = numpy.linspace(-1, 1, n+1)\n h = 2 / n\n rhs = rhsfunc(x)\n e = numpy.ones(n)\n L = (2 * numpy.eye(n+1) - numpy.diag(e, 1) - numpy.diag(e, -1)) / h**2\n # Dirichlet condition\n L[0,:] = numpy.eye(1,n+1)\n rhs[0] = a\n rhs[1:] -= L[1:,0] * a\n L[1:,0] = 0\n # Neumann condition\n L[-1,-1] /= 2\n rhs[-1] = b / h + rhs[-1] / 2\n return x, L, rhs\n \nlaplacian(5, lambda x: 0*x+1, .5, -1)\n```\n\n\n\n\n (array([-1. , -0.6, -0.2, 0.2, 0.6, 1. ]),\n array([[ 1. , 0. , 0. , 0. , 0. , 0. ],\n [ 0. , 12.5 , -6.25, 0. , 0. , 0. ],\n [ 0. , -6.25, 12.5 , -6.25, 0. , 0. ],\n [ 0. , 0. , -6.25, 12.5 , -6.25, 0. ],\n [ 0. , 0. , 0. , -6.25, 12.5 , -6.25],\n [ 0. , 0. , 0. , 0. , -6.25, 6.25]]),\n array([ 0.5 , 4.125, 1. , 1. , 1. , -2. ]))\n\n\n\n\n```python\nx, L, rhs = laplacian(160, lambda x: 0*x-1, 1, 0)\nu = numpy.linalg.solve(L, rhs)\npyplot.plot(x, u);\npyplot.xlabel('x')\npyplot.ylabel('u');\n```\n\n#### Observations\n\n* The reflection formulation maintains symmetry and order of accuracy.\n* We'll need to be careful in case of nonlinearity, especially when bounds or positivity are in play. The reflected values could produce negative pressure or density, for example.\n\n## Green's functions\n\nA continuous [Green's function](https://en.wikipedia.org/wiki/Green%27s_function) is the solution of a differential equation where the right hand side is a [Dirac delta function](https://en.wikipedia.org/wiki/Dirac_delta_function). We can compute discrete Green's functions by solving against columns of the identity.\n\n\n```python\nLinv = numpy.linalg.inv(L)\ncols = [5, 8, 13]\npyplot.plot(x, Linv[:,cols])\npyplot.xlabel('x')\npyplot.ylabel('Solution u')\npyplot.legend(['$L^{-1} e_{%d}$' % c for c in cols]);\n```\n\n## Spectra and eigenfunctions\n\nAn eigenvalue $\\lambda$ of the matrix $L$ satisfies\n$$ L u = \\lambda u $$\nand $u$ is called an eigenvector. The set of all eigenvalues $\\lambda$ is called the spectrum of $L$.\n\n\n```python\nx, L, _ = laplacian(160, lambda x: 0*x, 1, 0)\nlam, U = numpy.linalg.eigh(L)\n\npyplot.plot(lam, 'o')\npyplot.xlabel('Eigenvalue number')\npyplot.ylabel('Eigenvalue');\n```\n\n\n```python\nx, L, rhs = laplacian(40, lambda x: 0*x+1, 1, -.5)\nlam, U = numpy.linalg.eigh(L)\nfor i in (0, 1, 2, 3):\n pyplot.plot(x, U[:,i], label='$\\lambda_{%d} = %5f$'%(i, lam[i]))\npyplot.legend(loc='upper right');\n```\n\n### Note on stability\n\nAbove, we see evidence of stability in that all the eigenvalues are positive and the eigenvectors associated with the smallest eigenvalues are low frequency (ignoring the mode corresponding to the Dirichlet boundary). Specifically, there are no \"noisy\" functions with eigenvalues close to zero.\n\n\n```python\nfor i in (-3, -2, -1):\n pyplot.plot(x, U[:,i], label='$\\lambda_{%d} = %5f$'%(i, lam[i]))\npyplot.legend(loc='upper right');\n```\n\n## Manufactured solutions\n\nLet's choose a smooth function with rich derivatives,\n$$ u(x) = \\tanh(x) . $$\nThen $$ u'(x) = \\cosh^{-2}(x) $$ and $$ u''(x) = -2 \\tanh(x) \\cosh^{-2}(x) . 
$$\n\n\n```python\ndef uexact(x):\n return numpy.tanh(3*x)\ndef duexact(x):\n return 3*numpy.cosh(3*x)**(-2)\ndef negdduexact(x):\n return 3**2 * 2 * numpy.tanh(3*x) * numpy.cosh(3*x)**(-2)\n\nx, L, f = laplacian(20, negdduexact, uexact(-1), duexact(1))\nu = numpy.linalg.solve(L, f)\npyplot.plot(x, uexact(x), label='exact')\npyplot.plot(x, u, 'o', label='computed')\npyplot.legend(loc='upper left');\n```\n\n\n```python\ndef mms_error(n, discretize):\n x, L, f = discretize(n, negdduexact, uexact(-1), duexact(1))\n u = numpy.linalg.solve(L, f)\n return numpy.linalg.norm(u - uexact(x), numpy.inf)\n\nns = 2**numpy.arange(3,8)\nerrors = [mms_error(n, laplacian) for n in ns]\npyplot.loglog(2/ns, errors, 'o', label='numerical')\npyplot.loglog(2/ns, (2/ns), label='$h$')\npyplot.loglog(2/ns, (2/ns)**2, label='$h^2$')\npyplot.legend(loc='upper left');\n```\n\n## The other second order centered discretization\n\nWe have been using the discretization\n\n$$ -u''(x_i) \\approx \\frac{-u_{i-1} + 2 u_i - u_{i+1}}{h^2} $$\n\nwhich is sometimes summarized using the stencil\n\n$$ h^{-2} \\begin{bmatrix} -1 & 2 & -1 \\end{bmatrix} . $$\n\nThere is also the discretization arising from applying centered first derivatives twice,\n\n$$ -u''(x_i) \\approx \\frac{-u_{i-2} + 2 u_i - u{i+2}}{4 h^2} $$\n\nwhich corresponds to the stencil\n\n$$ (2h)^{-2} \\begin{bmatrix} -1 & 0 & 2 & 0 & -1 \\end{bmatrix} . $$\n\nWe'll reuse the boundary condition handling and keep the first stencil at the points adjacent to the boundaries, $x_1$ and $x_{n-1}$, but apply this discretization for true interior points.\n\n\n```python\ndef laplacian2(n, rhsfunc, a, b):\n x, L, rhs = laplacian(n, rhsfunc, a, b)\n h = 2 / n\n L[2:-2,:] = (2 * numpy.eye(n-3, n+1, k=2)\n - numpy.eye(n-3, n+1, k=0)\n - numpy.eye(n-3, n+1, k=4)) / (2*h)**2\n return x, L, rhs\n\nlaplacian2(6, numpy.cos, 1, 0)\n```\n\n\n\n\n (array([-1. , -0.66666667, -0.33333333, 0. , 0.33333333,\n 0.66666667, 1. ]),\n array([[ 1. , 0. , 0. , 0. , 0. , 0. , 0. ],\n [ 0. , 18. , -9. , 0. , 0. , 0. , 0. ],\n [-2.25, 0. , 4.5 , 0. , -2.25, 0. , 0. ],\n [ 0. , -2.25, 0. , 4.5 , 0. , -2.25, 0. ],\n [ 0. , 0. , -2.25, 0. , 4.5 , 0. , -2.25],\n [ 0. , 0. , 0. , 0. , -9. , 18. , -9. ],\n [ 0. , 0. , 0. , 0. , 0. , -9. , 9. ]]),\n array([1. , 9.78588726, 0.94495695, 1. 
, 0.94495695,\n 0.78588726, 0.27015115]))\n\n\n\nThis matrix is not symmetric due to the discretization used near the boundaries.\n\n\n```python\nx, L, f = laplacian2(20, negdduexact, uexact(-1), duexact(1))\nu = numpy.linalg.solve(L, f)\npyplot.plot(x, uexact(x), label='exact')\npyplot.plot(x, u, 'o', label='computed')\npyplot.legend(loc='upper left');\n```\n\n\n```python\nerrors = [mms_error(n, laplacian) for n in ns]\nerrors2 = [mms_error(n, laplacian2) for n in ns]\npyplot.loglog(2/ns, errors, 'o', label='numerical')\npyplot.loglog(2/ns, errors2, 's', label='numerical2')\npyplot.loglog(2/ns, (2/ns), label='$h$')\npyplot.loglog(2/ns, (2/ns)**2, label='$h^2$')\npyplot.legend(loc='upper left');\n```\n\n\n```python\nn = 40\nx, L, _ = laplacian(n, lambda x: 0*x+1, 1, -1)\n_, L2, _ = laplacian2(n, lambda x: 0*x+1, 1, -1)\n\nlam, U = numpy.linalg.eigh(L)\nlam2, U2 = numpy.linalg.eig(L2) # Need to use nonsymmetric eigensolver\nidx = numpy.argsort(lam2)\nlam2 = lam2[idx]\nU2 = U2[:,idx]\npyplot.plot(lam, 'o', label='original')\npyplot.plot(lam2, 'o', label='version 2')\npyplot.legend(loc='lower right');\n```\n\n\n```python\npyplot.figure()\nfor i in (0, 1, 2, 3):\n pyplot.plot(x, U[:,i], label='$\\lambda_{%d} = %5f$'%(i, lam[i]))\npyplot.legend(loc='upper right')\npyplot.title('Original')\n\nfor i in (0, 1, 2, 3):\n pyplot.plot(x, U2[:,i], label='$\\lambda_{%d} = %5f$'%(i, lam2[i]))\npyplot.legend(loc='upper right')\npyplot.title('Version 2');\n```\n\n### Observations about `laplacian2`\n\n* `laplacian2` converges at the same order of accuracy as `laplacian`\n* `laplacian2` is less accurate, especially on coarse/under-resolved grids\n* `laplacian2` (with this boundary condition implementation) is nonsingular\n* `laplacian2` is a weakly unstable discretization, even away from boundaries\n\n# Fourier analysis of FD methods\n\nSuppose we are operating on an infinite domain with uniform grid spacing $h=1$. Some finite difference stencils that we have worked with are\n\n\n```python\nstencil_1L = [-1, 1, 0]\nstencil_1C = [-1/2, 0, 1/2]\nstencil_1R = [0, -1, 1]\nstencil_2a = [1, -2, 1]\nstencil_2b = [1/4, 0, -1/2, 0, 1/4]\n```\n\nWe consider the action of these stencils on the grid functions\n$$ \\phi(x, \\theta) = e^{i \\theta x} . $$\nFor example, take `stencil_2a`\n\\begin{align}\nS \\phi(x, \\theta) &= \\phi(x-1, \\theta) - 2 \\phi(x, \\theta) + \\phi(x+1, \\theta) \\\\\n&= \\underbrace{(2 \\cos\\theta - 2)}_{\\hat S(\\theta)} \\phi(x, \\theta) .\n\\end{align}\n\n* Show this\n\nEvidently $\\phi(x, \\theta)$ is an eigenvector of $S$ for every value of $\\theta$ and $\\hat S(\\theta)$ is the corresponding eigenvalue. We call the function $\\hat S(\\theta)$ the *symbol* of the stencil $S$.\n\n* We need only consider $-\\pi < \\theta \\le \\pi$. Why?\n\nThe exact derivatives of $\\phi$ are\n\\begin{align}\n\\phi'(x, \\theta) &= i \\theta \\phi(x, \\theta) \\\\\n\\phi''(x, \\theta) &= -\\theta^2 \\phi(x, \\theta) .\n\\end{align}\n\n* What is the Taylor series for the symbol $2 - 2\\cos\\theta$?\n\n\n```python\ndef symbol(S, theta):\n \"\"\"Compute the symbol of the stencil S(theta)\"\"\"\n if len(S) % 2 != 1:\n raise RuntimeError('Stencil length must be odd')\n w = len(S) // 2 # integer division rounds down\n phi = numpy.exp(1j * numpy.outer(range(-w, w+1), theta))\n return S @ phi\n\nsymbol(stencil_2a, [0, numpy.pi/4])\n```\n\n\n\n\n array([ 0. 
+0.j, -0.58578644+0.j])\n\n\n\n\n```python\ndef plot_symbol(S, deriv=2):\n theta = numpy.linspace(-numpy.pi, numpy.pi)\n sym = symbol(S, theta)\n rsym = numpy.real_if_close((-1j)**deriv * sym)\n pyplot.plot(theta, rsym, '.', label='stencil')\n pyplot.plot(theta, theta**deriv, '-', label='exact')\n pyplot.legend()\n pyplot.title('Symbol of S={}'.format(S))\n pyplot.xlabel(r'$\\theta$')\n pyplot.ylabel('approximate and $\\\\theta^{%d}$' % deriv)\n return theta, rsym\n\ntheta, _ = plot_symbol(stencil_2a)\npyplot.plot(theta, 2 - 2*numpy.cos(theta));\n```\n\n\n```python\nplot_symbol(stencil_2b);\n```\n\n\n```python\nplot_symbol(stencil_1C, deriv=1);\n```\n\n### Observations\n\n* Plotting the symbol tells us about stability (eigenvalues near 0 for high frequency).\n* The order of agreement with the exact symbol is the order of the method.\n* We can't use it for variable coefficient or boundary conditions.\n* We'll use this technique again to optimize solvers like multigrid.\n", "meta": {"hexsha": "081d5376561e7466f925dd7e599256175475d979", "size": 537680, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "FiniteDifference.ipynb", "max_stars_repo_name": "dkesseli/numpde", "max_stars_repo_head_hexsha": "c2f005b81c2f687dfecdd63d37da3591d61c9598", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 10, "max_stars_repo_stars_event_min_datetime": "2018-08-27T16:13:44.000Z", "max_stars_repo_stars_event_max_datetime": "2020-01-03T02:55:24.000Z", "max_issues_repo_path": "FiniteDifference.ipynb", "max_issues_repo_name": "dkesseli/numpde", "max_issues_repo_head_hexsha": "c2f005b81c2f687dfecdd63d37da3591d61c9598", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "FiniteDifference.ipynb", "max_forks_repo_name": "dkesseli/numpde", "max_forks_repo_head_hexsha": "c2f005b81c2f687dfecdd63d37da3591d61c9598", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 12, "max_forks_repo_forks_event_min_datetime": "2018-08-27T21:51:53.000Z", "max_forks_repo_forks_event_max_datetime": "2020-01-03T02:55:35.000Z", "avg_line_length": 488.3560399637, "max_line_length": 78100, "alphanum_fraction": 0.9352923672, "converted": true, "num_tokens": 7124, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8976952948443461, "lm_q2_score": 0.9304582612793113, "lm_q1q2_score": 0.8352680031994889}} {"text": "# Conjugate gradient method\n\nFrom https://en.wikipedia.org/wiki/Conjugate_gradient_method:\n\nIn mathematics, the conjugate gradient method is an algorithm for the numerical solution of particular systems of linear equations, namely those whose matrix is symmetric and positive-definite.\n\n#### Description\nLet $A\\mathbf x = \\mathbf b$ be a square system of $n$ linear equations.\n
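A useful way to view the method (a standard observation, added here for context): when $A$ is symmetric positive-definite, solving $A\mathbf{x} = \mathbf{b}$ is equivalent to minimizing the quadratic form

$$ f(\mathbf{x}) = \tfrac{1}{2}\,\mathbf{x}^\mathsf{T} A \mathbf{x} - \mathbf{b}^\mathsf{T}\mathbf{x},
\qquad \nabla f(\mathbf{x}) = A\mathbf{x} - \mathbf{b} , $$

so the residual $\mathbf{r} = \mathbf{b} - A\mathbf{x}$ is the negative gradient of $f$, and the algorithm below performs exact line searches along mutually $A$-conjugate search directions.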
    \n#### The Algorithm:\n
    \n$\\begin{align}\n& \\mathbf{r}_0 := \\mathbf{b} - \\mathbf{A x}_0 \\\\\n& \\hbox{if } \\mathbf{r}_{0} \\text{ is sufficiently small, then return } \\mathbf{x}_{0} \\text{ as the result}\\\\\n& \\mathbf{p}_0 := \\mathbf{r}_0 \\\\\n& k := 0 \\\\\n& \\text{repeat} \\\\\n& \\qquad \\alpha_k := \\frac{\\mathbf{r}_k^\\mathsf{T} \\mathbf{r}_k}{\\mathbf{p}_k^\\mathsf{T} \\mathbf{A p}_k} \\\\\n& \\qquad \\mathbf{x}_{k+1} := \\mathbf{x}_k + \\alpha_k \\mathbf{p}_k \\\\\n& \\qquad \\mathbf{r}_{k+1} := \\mathbf{r}_k - \\alpha_k \\mathbf{A p}_k \\\\\n& \\qquad \\hbox{if } \\mathbf{r}_{k+1} \\text{ is sufficiently small, then exit loop} \\\\\n& \\qquad \\beta_k := \\frac{\\mathbf{r}_{k+1}^\\mathsf{T} \\mathbf{r}_{k+1}}{\\mathbf{r}_k^\\mathsf{T} \\mathbf{r}_k} \\\\\n& \\qquad \\mathbf{p}_{k+1} := \\mathbf{r}_{k+1} + \\beta_k \\mathbf{p}_k \\\\\n& \\qquad k := k + 1 \\\\\n& \\text{end repeat} \\\\\n& \\text{return } \\mathbf{x}_{k+1} \\text{ as the result}\n\\end{align}$\n
    \n\n#### Stop Condition:\n$$ \\lVert X^{(k+1)} - X^{(k)} \\rVert_2 \\le 10^{-4}$$\n\n\n\n```python\nimport numpy as np\n```\n\n\n```python\ndef conjgrad(A, b, initial_guess):\n x = initial_guess\n #Symbol of matrix multiplication in numpy is @\n r = b - A@x\n p = r\n \n while(1):\n Ap = A@p\n alpha = (r.T@r)/(p.T@Ap)\n alpha = np.asscalar(alpha)\n x_old = x\n x = x + alpha*p\n x_new = x\n #using norm2:\n if np.linalg.norm(x_new-x_old) < 10**(-4):\n break\n r_old = r\n r = r - alpha*Ap\n beta = (r.T@r)/(r_old.T@r_old) \n beta = np.asscalar(beta)\n p = r + beta*p\n return x\n\n```\n\n\n```python\nA = np.matrix([[2.0,5.0],\n [5.0,7.0]])\n\nb = np.matrix([[11.0],[13.0]])\n\ninitialGuess = np.matrix([[1.0],[1.0]])\n\nsol = conjgrad(A,b,initialGuess)\n\nprint ('A:')\nprint(A)\n\nprint ('\\nb:')\nprint(b)\n\nprint('\\nSolution:')\nprint(sol)\n```\n\n A:\n [[2. 5.]\n [5. 7.]]\n \n b:\n [[11.]\n [13.]]\n \n Solution:\n [[-1.09090909]\n [ 2.63636364]]\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "2675e05a6d53c8d9c6237cc293615eb67c84b840", "size": 4264, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": ".ipynb_checkpoints/Conjugate_Gradient_Method-checkpoint.ipynb", "max_stars_repo_name": "enazari/iterative-methods-for-solvling-linear-systems-of-equations", "max_stars_repo_head_hexsha": "1529c135923e616b7893b602688a748e037ddffd", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-10-02T19:33:07.000Z", "max_stars_repo_stars_event_max_datetime": "2021-01-03T21:04:06.000Z", "max_issues_repo_path": ".ipynb_checkpoints/Conjugate_Gradient_Method-checkpoint.ipynb", "max_issues_repo_name": "enazari/iterative-methods-for-solvling-linear-systems-of-equations", "max_issues_repo_head_hexsha": "1529c135923e616b7893b602688a748e037ddffd", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": ".ipynb_checkpoints/Conjugate_Gradient_Method-checkpoint.ipynb", "max_forks_repo_name": "enazari/iterative-methods-for-solvling-linear-systems-of-equations", "max_forks_repo_head_hexsha": "1529c135923e616b7893b602688a748e037ddffd", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-03-01T17:27:50.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-01T17:27:50.000Z", "avg_line_length": 26.65, "max_line_length": 202, "alphanum_fraction": 0.4500469043, "converted": true, "num_tokens": 861, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.96741025335478, "lm_q2_score": 0.8633916117313211, "lm_q1q2_score": 0.8352538978493892}} {"text": "# Circuits for constructing entangled states\n\n\n```python\nimport numpy as np\n\nfrom qiskit import QuantumRegister, ClassicalRegister\nfrom qiskit import QuantumCircuit\nfrom qiskit import execute, BasicAer\n\nimport qiskit.tools.visualization as qvis\n\nimport warnings\nwarnings.filterwarnings(\"ignore\", category=DeprecationWarning) \n```\n\nIn the lecture I showed you the entangled state\n\\begin{eqnarray}\n |\\Psi_\\rangle &=& \\frac{1}{\\sqrt{2}} \\left( |00\\rangle + |11 \\rangle \\right).\n\\end{eqnarray}\n\nBut this isn't the only two-qubit entangled states. In fact, this state is just one of the four included in the so-called 'Bell basis', a set of 4 orthonormal 2-qubit entangled states. 
The Bell basis is especially important in the quantum teleporation protocol found in the other notebooks.\n\nYour job here is to write small quantum circuits to construct the four states in the Bell basis\n\\begin{eqnarray}\n|\\Psi_{0}\\rangle &=& \\frac{1}{\\sqrt{2}} \\left( |00\\rangle + |11 \\rangle \\right) \\\\\n|\\Psi_{1}\\rangle &=& \\frac{1}{\\sqrt{2}} \\left( |01\\rangle + |10 \\rangle \\right) \\\\\n|\\Psi_{2}\\rangle &=& \\frac{1}{\\sqrt{2}} \\left( |00\\rangle - |11 \\rangle \\right) \\\\\n|\\Psi_{3}\\rangle &=& \\frac{1}{\\sqrt{2}} \\left( |01\\rangle - |10 \\rangle \\right) \\\\\n\\end{eqnarray}\nBelow is some boilerplate code for creating the circuits and outputting the state vector so you can be sure you got the right one.\n\n\n```python\nq = QuantumRegister(2, name='q')\nbell_circuit = QuantumCircuit(q)\n\n# YOUR CODE GOES HERE \n# Construct the Bell states here by applying Qiskit gates\n# You can copy the code into 4 different cells or just modify in here\n# to get the different states\nbell_circuit.h(q[0])\nbell_circuit.cx(q[0], q[1])\n\n# Draw the circuit\nprint(bell_circuit.draw())\n\n# Execute the quantum circuit and print the state vector\nbackend = BasicAer.get_backend('statevector_simulator')\nresult = execute(bell_circuit, backend).result()\nprint(\"The state vector in the computational basis is:\")\nprint(result.get_statevector(bell_circuit).reshape(4, 1))\n```\n\n## Higher-dimensional entangled states\n\nEntanglement is not limited to two qubits, but can become much more complicated as there are many different 'separability structures' that a multi-qubit entangled state can take. For example, a 3-qubit state might be a tensor product of 3 single qubit states, a tensor product of a two-qubit entangled state with a single-qubit state, or even a purely entangled 3-qubit state (i.e. 'genuine multipartite entanglement').\n\nOne such state is the GHZ state,\n\\begin{equation}\n |GHZ\\rangle = \\frac{1}{\\sqrt{2}} \\left( |000\\rangle + |111\\rangle \\right)\n\\end{equation}\n\n\n```python\n# YOUR CODE HERE\n# Create a circuit and construct the GHZ state. 
\n# You can copy the boilerplate code from the 2-qubit case but will of course have to\n# alter the register size and the gate sequence\n```\n", "meta": {"hexsha": "c243ef9b69fd4b3402c12ba0f7c7d4c8fe5abf1e", "size": 4416, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "01-gate-model-theory/notebooks/Making-Entangled-States.ipynb", "max_stars_repo_name": "a-capra/Intro-QC-TRIUMF", "max_stars_repo_head_hexsha": "9738e6a49f226367247cf7bc05a00751f7bf2fe5", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 27, "max_stars_repo_stars_event_min_datetime": "2019-05-09T17:40:20.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-15T12:23:17.000Z", "max_issues_repo_path": "01-gate-model-theory/notebooks/Making-Entangled-States.ipynb", "max_issues_repo_name": "a-capra/Intro-QC-TRIUMF", "max_issues_repo_head_hexsha": "9738e6a49f226367247cf7bc05a00751f7bf2fe5", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-09-29T07:34:09.000Z", "max_issues_repo_issues_event_max_datetime": "2021-09-29T21:01:29.000Z", "max_forks_repo_path": "01-gate-model-theory/notebooks/Making-Entangled-States.ipynb", "max_forks_repo_name": "a-capra/Intro-QC-TRIUMF", "max_forks_repo_head_hexsha": "9738e6a49f226367247cf7bc05a00751f7bf2fe5", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 14, "max_forks_repo_forks_event_min_datetime": "2019-05-09T18:45:49.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-15T12:23:21.000Z", "avg_line_length": 34.2325581395, "max_line_length": 428, "alphanum_fraction": 0.6030344203, "converted": true, "num_tokens": 779, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.921921841290738, "lm_q2_score": 0.9059898273638387, "lm_q1q2_score": 0.835251809833948}} {"text": "# Game instructions\nConsider the following board game: A game board has 12 spaces. The swine senses the Christmas spirit and manages to run away from home couple of weeks beforehand. Fortunately for it, the butcher is a bit of a drunkard and easily distracted. The swine starts on space 7, and a butcher on space 1. On each game turn a 6-sided die is rolled. On a result of 1 to 3, the swine moves that many spaces forward. On a result of 5 or 6, the butcher moves that many spaces forward. On result 4, both advance one space forward. The swine wins if it reaches the river at space 12 (the final roll does not have to be exact, moving past space 12 is OK). The butcher wins if he catches up with the swine (or moves past it).\n\nWhat are the probabilities of winning for the swine and the butcher?\n\nYour assignment is to create a mathematical or statistical model to find these probabilities, and implement the solution as a computer program in whatever language you like. You will present it during the interview and we will discuss it with you. 
\n\nConsider the following questions as well: \n- Can you make your model easily extendable for different initial conditions (board size and initial positions)?\n- Pros and cons of the approach?\n- Can you say something about how long the game takes (also under different initial conditions)?\n\n\n```python\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport itertools\n```\n\n# Model: dynamic programming\n\n#### Initialize game parameters\n\n\n```python\nboard_size = 12\nswine_start = 7\nbutcher_start = 1\n\nif butcher_start >= swine_start:\n raise ValueError('Error in starting positions: The swine has to start ahead of the butcher.')\nelif swine_start >= board_size:\n raise ValueError('Error in starting positions: The river has to lie ahead of the swine.')\n```\n\n#### DP formulation\nAssume a fixed board size. Let $s$ be the position of the swine on the board, $b$ the position of the butcher on the board, and $F(s, b)$ the probability that the swine wins the game if the swine is at space $s$ and the butcher at space $b$.\n\nThe recurrence relation can be formulated as follows:\n\n\\begin{align}\n F(s, b \\mid s \\geq \\text{board size}) = & 1 & \\text{Swine has won with 100% probability if end of board is reached} \\\\\n F(s, b \\mid s \\leq b) = & 0 & \\text{Swine wins with 0% probability if the butcher is ahead or at the same square} \\\\\n F(s, b) = & 1/6 \\cdot F(s+1, b) + & \\text{DP recurrence relation} \\\\\n & 1/6 \\cdot F(s+2, b) + \\\\\n & 1/6 \\cdot F(s+3, b) + \\\\\n & 1/6 \\cdot F(s+1, b+1) + \\\\\n & 1/6 \\cdot F(s, b+5) + \\\\\n & 1/6 \\cdot F(s, b+6)\n\\end{align}\n\nThe equations express that the swine's probability of winning is a weighted combination of its winning chances in all possible subsequent states.\n\n#### Prefill winning probabilities known at start\n\n\n```python\nswine_positions = np.arange(swine_start, board_size+1)\nbutcher_positions = np.arange(butcher_start, board_size+1)\nprobs = pd.DataFrame(np.nan*np.ones(shape=(len(swine_positions), len(butcher_positions))), index=swine_positions, columns=butcher_positions)\n\n# Swine has won with 100% probability if end of board is reached. \nprobs.loc[board_size, :] = 1.0 \n\n# Swine wins with 0% probability if the butcher is ahead or at the same square.\n# I.e. upper right triangle should be zeros.\nmask = np.ones(probs.shape, dtype='bool')\ntriu = np.triu_indices(n=probs.shape[0], m=probs.shape[0])\nmask[tuple([triu[0], triu[1] + probs.shape[1] - probs.shape[0]])] = False\nprobs.where(mask, other=0.0, inplace=True)\n```\n\n\n```python\nprobs\n```\n\n\n\n\n
            1    2    3    4    5    6    7    8    9   10   11   12
    7     NaN  NaN  NaN  NaN  NaN  NaN  0.0  0.0  0.0  0.0  0.0  0.0
    8     NaN  NaN  NaN  NaN  NaN  NaN  NaN  0.0  0.0  0.0  0.0  0.0
    9     NaN  NaN  NaN  NaN  NaN  NaN  NaN  NaN  0.0  0.0  0.0  0.0
    10    NaN  NaN  NaN  NaN  NaN  NaN  NaN  NaN  NaN  0.0  0.0  0.0
    11    NaN  NaN  NaN  NaN  NaN  NaN  NaN  NaN  NaN  NaN  0.0  0.0
    12    1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  1.0  0.0
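As an aside (this cross-check is an addition, not part of the original notebook), a brute-force Monte Carlo simulation of the game rules gives an independent estimate to compare the DP answer against; the `simulate` helper below is hypothetical and only assumes NumPy:


```python
import numpy as np

def simulate(board_size=12, swine_start=7, butcher_start=1, n_games=200_000, seed=0):
    # Hypothetical helper, not from the original notebook:
    # estimate the swine's winning probability by simulating many games.
    rng = np.random.default_rng(seed)
    swine_wins = 0
    for _ in range(n_games):
        s, b = swine_start, butcher_start
        while True:
            roll = rng.integers(1, 7)   # fair six-sided die: 1..6
            if roll <= 3:
                s += roll               # swine moves 1-3 spaces
            elif roll == 4:
                s += 1                  # both advance one space
                b += 1
            else:
                b += roll               # butcher moves 5 or 6 spaces
            if s >= board_size:         # swine reached the river
                swine_wins += 1
                break
            if b >= s:                  # butcher caught up or moved past
                break
    return swine_wins / n_games

# simulate() should be close to the DP result computed below.
```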
    \n\n\n\n#### Solve DP\n\n\n```python\ndef F(probs, sp, bp):\n \"\"\"Calculate the swine's probability of winning based on the current swine and butcher position.\n \n Arguments\n - probs: dataframe of swine's winning chances (known and unknown)\n - sp: swine's current position\n - bp: butcher's current position\n \"\"\"\n \n # Check that the current positions are not lower than the starting positions.\n if sp < probs.index.min():\n sp = probs.index.min()\n print('Swine position lower than starting position: using swine starting position ({}) instead.'.format(swine_start))\n if bp < probs.columns.min():\n bp = probs.columns.min()\n print('Butcher position lower than starting position: using butcher starting position ({}) instead.'.format(butcher_start))\n \n # Check that neither the swine nor the butcher has already reached the end of the board.\n # If so, reset position to the last space on the board.\n if sp > probs.index.max():\n# print('Swine position exceeds board length: using highest possible position instead.')\n sp = probs.index.max()\n if bp > probs.columns.max():\n# print('Butcher position exceeds board length: using highest possible position instead.')\n bp = probs.columns.max()\n \n # If the requested probability is already known: return from storage.\n if not np.isnan(probs.loc[sp, bp]):\n return probs.loc[sp, bp]\n \n # Else: calculate the requested probability according to the DP recurrence, store and return.\n else:\n prob = 1/6 * F(probs, sp+1, bp) \\\n + 1/6 * F(probs, sp+2, bp) \\\n + 1/6 * F(probs, sp+3, bp) \\\n + 1/6 * F(probs, sp+1, bp+1) \\\n + 1/6 * F(probs, sp, bp+5) \\\n + 1/6 * F(probs, sp, bp+6)\n \n probs.loc[sp, bp] = prob\n \n return prob\n```\n\n\n```python\nswine_winning_prob = F(probs, swine_start, butcher_start)\nprint('The swine and the butcher start at space {} and {} respectively. The board is {} spaces long.'.format(swine_start, butcher_start, board_size))\nprint('The probability that the swine wins is {:.1f}%.'.format(swine_winning_prob*100))\n```\n\n The swine and the butcher start at space 7 and 1 respectively. The board is 12 spaces long.\n The probability that the swine wins is 51.2%.\n\n", "meta": {"hexsha": "b0b1f31011d80ff94471c0a6d78e4ab2d8afb62a", "size": 12936, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "dp.ipynb", "max_stars_repo_name": "MeekeRoet/swine-escape", "max_stars_repo_head_hexsha": "4201354dbc6cef7f84b6c6b7ad395292b29cccf8", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "dp.ipynb", "max_issues_repo_name": "MeekeRoet/swine-escape", "max_issues_repo_head_hexsha": "4201354dbc6cef7f84b6c6b7ad395292b29cccf8", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "dp.ipynb", "max_forks_repo_name": "MeekeRoet/swine-escape", "max_forks_repo_head_hexsha": "4201354dbc6cef7f84b6c6b7ad395292b29cccf8", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.152173913, "max_line_length": 716, "alphanum_fraction": 0.4741032777, "converted": true, "num_tokens": 2486, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9219218348550491, "lm_q2_score": 0.9059898254600902, "lm_q1q2_score": 0.8352518022481721}} {"text": "```python\n# Importing the necessary packages\nfrom sympy import *\nimport matplotlib.pyplot as plt\n```\n\n\n```python\nr_1,r_2,n = symbols('r_1,r_2,n') # Defining new symbols. You can use LaTeX here\n\nM = Matrix([[1-r_1,r_2],[r_1,1-r_2]]) # Defining the transition matrix\n\ndisplay(M) # Display the answer (it's like printing, for symbolic objects)\n```\n\n\n$\\displaystyle \\left[\\begin{matrix}1 - r_{1} & r_{2}\\\\r_{1} & 1 - r_{2}\\end{matrix}\\right]$\n\n\n\n```python\n# Numerical values of parameters ############\n\nrONE=0.3\nrTWO=0.2\n\n#############################################\n\n\nM = M.subs(r_1,rONE).subs(r_2,rTWO) # Substitute the values of r_1 and r_2 in the matrix\n\npsi_0 = Matrix([9,1]) # Initial state\n\npsi_n = M**n*psi_0 # Arbitrary state psi_n\n\n\n# Determining psi_\\inf ######################\n\npsi_n_inf = [] # Start off with an empty vector, psi_n_inf\n\nfor el in psi_n: # For each element in psi_n\n psi_n_inf.append(Limit(el,n,oo).doit()) # take the limit n -> \\infty, and add it to psi_n_inf\n\n\npsi_n_inf = Matrix(psi_n_inf) # Make psi_n_inf into a matrix object\n\n# Calculating psi_n at different n ##########\n\ntemp = psi_0 # Starting state\n\nS_list = [] # Lists for S, I,and t\nI_list = []\nt_list = []\n\nS_list.append(9) # Initialising them\nI_list.append(1)\nt_list.append(0)\n\n\nfor i in range(0,100,3): # For every time-step\n temp = M*temp # Find M^i temp\n S_list.append(temp[0]) # Append the values of the lists\n I_list.append(temp[1])\n t_list.append(i)\n\n# Graph the results #########################\nplt.xlabel('Time(n)') \nplt.ylabel('Number')\nplt.plot(t_list,S_list,'o-',label='Susceptible')\nplt.plot(t_list,I_list,'o-',label='Infected')\nplt.legend()\nplt.show()\n#############################################\n```\n\n\n```python\ndef SIS(r_1,r_2,init_S,init_I):\n \n M = Matrix([[1-r_1,r_2],[r_1,1-r_2]]) # Defining the transition matrix\n\n psi_0 = Matrix([init_S,init_I]) # Initial state\n \n psi_n = M**n*psi_0 # Arbitrary state psi_n\n\n # Determining psi_\\inf ######################\n\n psi_n_inf = [] # Start off with an empty vector, psi_n_inf\n\n for el in psi_n: # For each element in psi_n\n psi_n_inf.append(Limit(el,n,oo).doit()) # take the limit n -> \\infty, and add it to psi_n_inf\n\n\n psi_n_inf = Matrix(psi_n_inf) # Make psi_n_inf into a matrix object\n\n # Calculating psi_n at different n ##########\n\n temp = psi_0 # Starting state\n\n S_list = [] # Lists for S, I,and t\n I_list = []\n t_list = []\n\n S_list.append(init_S) # Initialising them\n I_list.append(init_I)\n t_list.append(0)\n\n for i in range(1,20,1): # For every time-step\n temp = M*temp # Find M^i temp\n S_list.append(temp[0]) # Append the values of the list\n I_list.append(temp[1])\n t_list.append(i)\n\n # Graph the results #########################\n plt.xlabel('Time(n)') \n plt.ylabel('Number')\n plt.plot(t_list,S_list,'o-')\n plt.plot(t_list,I_list,'o-')\n plt.show()\n #############################################\n```\n\n\n```python\nSIS(0.3,0.2,55,45)\nSIS(0.3,0.2,70,30)\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "60ebd62da6d90cd869d1bb84e93dee1ad3700331", "size": 51557, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "teaching/PHY1110/DS/Code/DS09/DS09_B_SIS_Model.ipynb", "max_stars_repo_name": "dpcherian/dpcherian.github.io", "max_stars_repo_head_hexsha": "6110da8c20e05edc106882db8b1c3795ade37708", 
"max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "teaching/PHY1110/DS/Code/DS09/DS09_B_SIS_Model.ipynb", "max_issues_repo_name": "dpcherian/dpcherian.github.io", "max_issues_repo_head_hexsha": "6110da8c20e05edc106882db8b1c3795ade37708", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "teaching/PHY1110/DS/Code/DS09/DS09_B_SIS_Model.ipynb", "max_forks_repo_name": "dpcherian/dpcherian.github.io", "max_forks_repo_head_hexsha": "6110da8c20e05edc106882db8b1c3795ade37708", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 223.1904761905, "max_line_length": 16096, "alphanum_fraction": 0.8942917548, "converted": true, "num_tokens": 947, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9518632343454896, "lm_q2_score": 0.8774767922879692, "lm_q1q2_score": 0.8352378975703316}} {"text": "# *Sobol'* sensitivity indices\n\nIn this example we are going to quantify the correlation between the input variables and the output variable of a model thanks to the *Sobol'* indices.\n\nThe *Sobol'* indices allow to evaluate the importance of a single variable or a specific set of variables. \n\n*Sobol'* indices range is $\\left[0; 1\\right]$ ; the more the indice value is close to 1 the more the variable is important toward the output variance of the function. The *Sobol'* indices can be computed at different orders.\n\nThe first order indices evaluate the importance of one variable at a time ($d$ indices, with $d$ the input dimension of the model). The $d$ total indices give the relative importance of every variables except the variable $x_i$, for every variable.\n\nHere the *Sobol'* indices are estimated on an analytical function: *Ishigami*. 
It writes\n\n$$ F(\\mathbf{x}) = \\sin(x_1)+7\\sin(x_2)^2+0.1x_3^4\\sin(x_1), \\quad \\mathbf{x}\\in [-\\pi, \\pi]^3 $$\nAnalytical values of *Sobol'* indices for this function are available:\n\n\\begin{align}\n S_{1, 2, 3} &= [0.314, 0.442, 0.], \\\\\n S_{T_{1, 2, 3}} &= [0.558, 0.442, 0.244].\n\\end{align}\n\nThis function is interesting because it exhibits an interaction between $x_1$ and $x_3$ although $x_3$ by itself do not play a role at first order.\n\n## The Ishigami function\n\n\n```python\nimport openturns as ot\nimport numpy as np\n```\n\n\n```python\n# Create the model and input distribution\nformula = ['sin(X1)+7*sin(X2)^2+0.1*X3^4*sin(X1)']\ninput_names = ['X1', 'X2', 'X3']\ndimension = 3\ncorners = [[-np.pi] * dimension, [np.pi] * dimension]\nmodel = ot.SymbolicFunction(input_names, formula)\ndistribution = ot.ComposedDistribution([ot.Uniform(corners[0][i], corners[1][i])\n for i in range(dimension)])\ntrue_sobol = [[0.314, 0.442, 0.], [0.558, 0.442, 0.244]] # [first, total]\n```\n\n## Without Surrogate\n\n\n```python\n# Create X/Y data\not.RandomGenerator.SetSeed(0)\nsize = 10000\nsample = ot.SobolIndicesExperiment(distribution, size, True).generate()\noutput = model(sample)\n```\n\n\n```python\n# Compute Sobol' indices using the Saltelli estimator\n## first order indices\nsensitivityAnalysis = ot.SaltelliSensitivityAlgorithm(sample, output, size)\nfirst_order = sensitivityAnalysis.getFirstOrderIndices()\nprint(f\"First order Sobol' indices: {first_order}\\n\"\n f\"Relative error: {abs(np.array(first_order) - true_sobol[0]) * 100}%\") # maximum is 1\n\n## total order indices\ntotal_order = sensitivityAnalysis.getTotalOrderIndices()\nprint(f\"Total order Sobol' indices: {total_order}\\n\"\n f\"Relative error: {abs(np.array(total_order) - true_sobol[1]) * 100}%\") # maximum is 1\n```\n\n First order Sobol' indices: [0.302751,0.460825,0.00669407]\n Relative error: [1.12489872 1.88252502 0.66940742]%\n Total order Sobol' indices: [0.57499,0.427147,0.256687]\n Relative error: [1.69897217 1.48529071 1.26868602]%\n\n\n## With Surrogate\n\n\n```python\n# Create a Gaussian process surrogate model\nfrom otsurrogate import SurrogateModel\nsurrogate = SurrogateModel('kriging', corners, input_names)\n```\n\n\n```python\n# Generation of data to fit the surrogate model\nsequence_type = ot.LowDiscrepancyExperiment(ot.SobolSequence(), distribution, 256)\nlearning_sample = sequence_type.generate()\nlearning_output = model(learning_sample)\n```\n\n\n```python\nsurrogate.fit(learning_sample, learning_output)\n```\n\n\n```python\n# Compute sensitivity indices\n## first order indices\noutput, _ = surrogate(sample) # Do not return the information about the variance\nsensitivityAnalysis = ot.SaltelliSensitivityAlgorithm(sample, output, size)\nfirst_order = sensitivityAnalysis.getFirstOrderIndices()\nprint(f\"First order Sobol' indices: {first_order}\\n\"\n f\"Relative error: {abs(np.array(first_order) - true_sobol[0]) * 100}%\") # maximum is 1\n\n## total order indices\ntotal_order = sensitivityAnalysis.getTotalOrderIndices()\nprint(f\"Total order Sobol' indices: {total_order}\\n\"\n f\"Relative error: {abs(np.array(total_order) - true_sobol[1]) * 100}%\") # maximum is 1\n```\n\n First order Sobol' indices: [0.312765,0.472573,0.0070339]\n Relative error: [0.1235497 3.05730132 0.70338973]%\n Total order Sobol' indices: [0.562337,0.447125,0.237793]\n Relative error: [0.43372069 0.51250457 0.62068279]%\n\n\n## Using otsklearn\n\n\n```python\nsurrogate = SurrogateModel('otsklearn.Kriging()', corners, 
input_names)\nsurrogate.fit(learning_sample, learning_output)\n```\n\n\n```python\n# Compute sensitivity indices\n## first order indices\n#output = surrogate.evaluate(sample)\noutput, _ = surrogate(sample) # Do not return the information about the variance\nsensitivityAnalysis = ot.SaltelliSensitivityAlgorithm(sample, output, size)\nfirst_order = sensitivityAnalysis.getFirstOrderIndices()\nprint(f\"First order Sobol' indices: {first_order}\\n\"\n f\"Relative error: {abs(np.array(first_order) - true_sobol[0]) * 100}%\") # maximum is 1\n\n## total order indices\ntotal_order = sensitivityAnalysis.getTotalOrderIndices()\nprint(f\"Total order Sobol' indices: {total_order}\\n\"\n f\"Relative error: {abs(np.array(total_order) - true_sobol[1]) * 100}%\") # maximum is 1\n```\n\n First order Sobol' indices: [0.303545,0.459592,0.00691174]\n Relative error: [1.04549598 1.75916887 0.69117411]%\n Total order Sobol' indices: [0.577597,0.426664,0.257912]\n Relative error: [1.95973662 1.53359806 1.3912165 ]%\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "7d059de69ee1490032e15076773fe8ad9ba1e43a", "size": 8780, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "doc/examples/sensitivity_sobol.ipynb", "max_stars_repo_name": "mbaudin47/otsurrogate", "max_stars_repo_head_hexsha": "2c2f043f456b8e8a752a38980c6c1515c0c90fd5", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/examples/sensitivity_sobol.ipynb", "max_issues_repo_name": "mbaudin47/otsurrogate", "max_issues_repo_head_hexsha": "2c2f043f456b8e8a752a38980c6c1515c0c90fd5", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-02-18T14:57:50.000Z", "max_issues_repo_issues_event_max_datetime": "2019-03-13T16:18:56.000Z", "max_forks_repo_path": "doc/examples/sensitivity_sobol.ipynb", "max_forks_repo_name": "tupui/otsurrogate", "max_forks_repo_head_hexsha": "2c2f043f456b8e8a752a38980c6c1515c0c90fd5", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-03-29T08:21:58.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-29T08:21:58.000Z", "avg_line_length": 31.2455516014, "max_line_length": 257, "alphanum_fraction": 0.5883826879, "converted": true, "num_tokens": 1568, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9518632261523028, "lm_q2_score": 0.8774767906859264, "lm_q1q2_score": 0.8352378888560749}} {"text": "## [MathJax](http://www.mathjax.org)\n\nTimes: $$28 \\times 28$$\n\nSum and subscripts: $$\\begin{eqnarray}\\sigma\\left(b + \\sum_{l=0}^4 \\sum_{m=0}^4 w_{l,m} a_{j+l, k+m} \\right).\\tag{125}\\end{eqnarray}$$\n\nsupperscripts: $$a^1 = \\sigma(b + w * a^0)$$\n\n$$\\eta = 0.1 \\dot{F}$$\n\n$$f(z)\\equiv \\max(0, z)$$\n\n$$\\lambda = 0.1$$\n\n$$c^{L-1}$$\n\n$$x = {-b \\pm \\sqrt{b^2-4ac} \\over 2a}.$$\n\n$$f(a) = \\frac{1}{2\\pi i} \\oint\\frac{f(z)}{z-a}dz$$\n\n$$\\cos(\u03b8+\u03c6)=\\cos(\u03b8)\\cos(\u03c6)\u2212\\sin(\u03b8)\\sin(\u03c6)$$\n\n$$\\int_D ({\\nabla\\cdot} F)dV=\\int_{\\partial D} F\\cdot ndS$$\n\n$$\\vec{\\nabla} \\times \\vec{F} = \\left( \\frac{\\partial F_z}{\\partial y} - \\frac{\\partial F_y}{\\partial z} \\right) \\mathbf{i}\n + \\left( \\frac{\\partial F_x}{\\partial z} - \\frac{\\partial F_z}{\\partial x} \\right) \\mathbf{j} + \\left( \\frac{\\partial\n F_y}{\\partial x} - \\frac{\\partial F_x}{\\partial y} \\right) \\mathbf{k}$$\n\n$$\\sigma = \\sqrt{ \\frac{1}{N} \\sum_{i=1}^N (x_i -\\mu)^2}$$\n\n$$(\\nabla_X Y)^k = X^i (\\nabla_i Y)^k = X^i \\left( \\frac{\\partial Y^k}{\\partial x^i} + \\Gamma_{im}^k Y^m \\right)$$\n\n$$\n\\left[\n\\begin{array}{lcr}\na1 & b22 & c333 \\\\\nd444 & e555555 & f6\n\\end{array}\n\\right]\n$$\n\n$$\n\\left\\{\n\\begin{aligned}\n&a+b=c\\\\\n&d=e+f+g\n\\end{aligned}\n\\right.\n$$\n\n$$\n\\begin{align}\na+b&=c\\\\\nd&=e+f+g\n\\end{align}\n$$\n\n$$\\frac{\\partial}{\\partial a}(a+b) = \\frac{\\partial a}{\\partial a} + \\frac{\\partial b}{\\partial a} = 1$$\n\n$$\\frac{\\partial}{\\partial u}uv = u\\frac{\\partial v}{\\partial u} + v\\frac{\\partial u}{\\partial u} = v$$\n\n$$\\frac{\\partial Z}{\\partial X} = (\\alpha + \\beta + \\gamma)(\\delta + \\epsilon + \\zeta)$$\n\n\n```python\n\n```\n", "meta": {"hexsha": "bfc0af4e184ea87101230a9568cbc845be0348e1", "size": 4119, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "fast.ai/markdown.ipynb", "max_stars_repo_name": "Rockung/sma", "max_stars_repo_head_hexsha": "4e2dad5bff698c78fbb4beea1cfe45534ed5ff1c", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "fast.ai/markdown.ipynb", "max_issues_repo_name": "Rockung/sma", "max_issues_repo_head_hexsha": "4e2dad5bff698c78fbb4beea1cfe45534ed5ff1c", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "fast.ai/markdown.ipynb", "max_forks_repo_name": "Rockung/sma", "max_forks_repo_head_hexsha": "4e2dad5bff698c78fbb4beea1cfe45534ed5ff1c", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 20.4925373134, "max_line_length": 149, "alphanum_fraction": 0.443068706, "converted": true, "num_tokens": 709, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9518632234212402, "lm_q2_score": 0.8774767858797979, "lm_q1q2_score": 0.8352378818848538}} {"text": "# \"Jupyter notebooks for remote science\"\n> \"Using Jupyter notebooks for remote science\"\n- toc: true \n- badges: false\n- comments: false\n\nComputational notebooks have massively gained in popularity in the past years. 
They allow to share rich documents with text, code, mathematical formulas, images, videos etc. both in static and interactive ways. They are heavily used both in research to document projects and in teaching to provide content support. Notebooks exist in different flavors, but currently the most popular form is the Jupyter (**Ju**lia, **Py**thon, **R**) notebook. Here, we briefly describe 1. [What type of content they can be used for](#1.-What-is-a-notebook), 2. [Solutions to write/run notebooks online (no installation required)](#2.-Using-Jupyter-notebooks), 3. [How to share the documents, interactively or not and](#3.-Sharing-documents) 4. [A few examples of courses given as notebooks](#4.-Examples-of-courses).\n\n## 1. What is a notebook\n\nNotebooks are interactive documents with rich content that run directly in the internet browser. They are composed of series of blocks, called cells that can be executed individually. Notebooks are typically used when computer coding is involved but they provide a rich environment to create interactive documentation in general. So even if you don't need the coding part, you can still use notebooks to create documents and use the many existing resources to share them. \n\nNotebooks are composed of mainly two types of cells: text cells and code cells.\n\n### Text cells\n\n#### Markdown\nThe content of text cells is formatted using the Markdown language, a very simple markup language allowing to create formatted text. For example *italic* and **bold** are generated by surrounding text with one ```*italic*```or two stars ```**bold**```. Titles and subtitles are generated by preceding text by a series of hash signs ```# My Title```, ```## My subtitle```.\n\n#### Images\nImages can be embedded using either a simple markdown syntax `````` or directly html language which can be customized (e.g. for image size) `````` where ```myimage``` is a path or a weblink to an image (e.g. https://img.myswitzerland.com/671846/407):\n\n\n#### Equations\nFortunately, Jupyter also allows to embed $\\LaTeX$ text. There a multiple solutions for this: surround your equation $\\vec F = m*\\vec{a}$ with dollar signs ```$\\vec F = m*\\vec{a}$```, use double dollar ```$$\\vec F = m*\\vec{a}$$``` to create a block:\n$$\\vec F = m*\\vec{a}$$\nor in a separate cell use the standard ```\\begin{equation} ... \\end{equation}```:\n\n\\begin{equation}\n\\vec F = m*\\vec{a}\n\\end{equation}\n\n### Code cells\n\n#### Computational code\nThe most common type of code cell, is a cell with actual computational code demonstrating some algorithm or function. It has an input and if some output is generated also an output cell:\n\n\n```python\ndef my_fun(x):\n y = x**2 + 3*x + 2\n return y\n\nmy_fun(5)\n```\n\n\n\n\n 42\n\n\n\nNot only numerical calculations are possible but also symbolic ones, e.g. 
using sympy:\n\n\n```python\nimport sympy\nimport warnings\nwarnings.filterwarnings(\"ignore\")\nfrom ipywidgets import interact, FloatSlider\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nt, k, c, y_0, ydot_0 = sympy.symbols('t k c y_0 ydot_0')\ny = sympy.Function('y')\ndamped_oscillator = sympy.Eq(y(t).diff(t, t) + c* y(t).diff(t) + k*y(t), 0)\nfun_solved = sympy.dsolve(damped_oscillator, y(t), ics={y(0):y_0, y(t).diff(t,1).subs(t, 0): ydot_0})\n```\n\n\n```python\ndamped_oscillator\n```\n\n\n\n\n$\\displaystyle c \\frac{d}{d t} y{\\left(t \\right)} + k y{\\left(t \\right)} + \\frac{d^{2}}{d t^{2}} y{\\left(t \\right)} = 0$\n\n\n\n\n```python\nfun_solved\n```\n\n\n\n\n$\\displaystyle y{\\left(t \\right)} = \\frac{\\left(\\frac{y_{0} \\left(c + \\sqrt{c^{2} - 4 k}\\right)}{2} + \\dot{y}_{0}\\right) e^{\\frac{t \\left(- c + \\sqrt{c^{2} - 4 k}\\right)}{2}}}{\\sqrt{c^{2} - 4 k}} + \\frac{\\left(- \\frac{c y_{0}}{2} + \\frac{y_{0} \\sqrt{c^{2} - 4 k}}{2} - \\dot{y}_{0}\\right) e^{\\frac{t \\left(- c - \\sqrt{c^{2} - 4 k}\\right)}{2}}}{\\sqrt{c^{2} - 4 k}}$\n\n\n\n#### Output code\nCode cells can however also be used to display interactive content, in particular the output of an analysis e.g. with Matplotlib:\n\n\n```python\nplt.plot(np.arange(-10,11),my_fun(np.arange(-10,11)));\n```\n\nResults can also be shown as interactive plots which the user can play with to get an intuition:\n\n\n```python\ndef myfun(var):\n solved = fun_solved.subs({y_0:1, ydot_0:0, k:var, c : 1})\n sympy.plot(solved.rhs,ylim = (-1,1),xlim = (0,10));\ninteract(myfun, var = FloatSlider(min=1, max = 10,step = .5));\n```\n\n\n interactive(children=(FloatSlider(value=1.0, description='var', max=10.0, min=1.0, step=0.5), Output()), _dom_\u2026\n\n\n#### Multimedia output\n\nIn addition to these standard ways of showing code, Jupyter can also be used to display other types of information. For example a movie uploaded from Youtube:\n\n\n```python\nfrom IPython.display import YouTubeVideo\nYouTubeVideo(\"6FZ5k9_6Vlw\",560,rel=0)\n```\n\n\n\n\n\n\n\n\n\n\n## 2. Using Jupyter notebooks\n\nOf course Jupyter can be installed on any computer, the simplest route being to use Anaconda or Miniconda as a base for installation. Even though this works flawlessly in a large majority of cases, it is often desirable to avoid any installation procedure, especially in the current period where in-person meetings should be avoided. As Jupyter notebooks have become very popular and as they are entirely rendered in the browser, there are many solutions to **run** the notebook on a remote computer and simply display it in the browser. The different solutions vary in the resources available, price etc.\n\n### Renku from the Swiss Data Science Center\n\nThe SDSC provides for all academic institutions free access to a Jupyter environment on a platform called [Renku](https://renkulab.io/). It allows to create entire projects including code and data and provides a full fledged access to a Jupyter environment entirely identical to one running on a laptop. One can login with a switch.edu account (your regular @xxx.unibe.ch account) or a Github account and create both private and public projects which are in fact repositories on Gitlab. This service is free.\n\n### Google colab\n\nGoogle has developed it's own flavor of Jupyter called [Colab](https://colab.research.google.com/). It works exactly in the same way as a regular notebook, only the interface looks slightly different. 
Here notebooks are saved on the GoogleDrive of the account you are using to connect to Colab. One of the main advantages of this solution is the access to GPUs. This service is free and you can purchase a subscription to enjoy longer sessions and more computing power.\n\n### Mybinder\n\nFor those already familiar with Python, Github and Jupyter, an interesting solution is the [mybinder](https://mybinder.org/) service. This allows you to turn any repository on e.g. on GitHub into an interactive Jupyter session. For that, you only have to include in your repository a file specifying the necessary packages to run your code. One negative side is that sessions are temporary, and any changes are lost on session end. This service is free.\n\n### Private JupyterHub\n\nIf you need e.g. a highly customized Jupyter environment or the possibility to share large amounts of data you might want to create your own Jupyter service called JupyterHub. Users of such a service are simply provided a link and again connect to the service via their browser, getting access to a standard Jupyter session. ScITS has experience in setting up such services for example using [SWITCHengines](https://www.switch.ch/fr/engines/) computing resources.\n\n## 3. Sharing documents\n\n### File exchange\nJupyter notebooks are simple text files. So you can simply share them as any other file type. For example you can upload all of them on ilias. This means however that to open them, people have to locally install Jupyter. Again in the perspective of avoiding local installations, it is easier to give access to them on some type of server.\n\n### Code repositories and notebook renderer\nThe most common solution for this is to use a code repository such as GitHub or Gitlab. If you are worried about publicly releasing your work, you can leave repositories private and share them with your students. This however requires all of them to have e.g. a GitHub account. Notebooks can be visualized as static objects by browsing through the repository. Note that even though notebooks are rendered as static objects, the links and videos will still work. \nA more efficient way of displaying notebooks is however to use the [nbviewer](https://nbviewer.jupyter.org/) service, which efficiently renders any notebook. Notebooks created on Renku (see above) can be visualized on Gitlab directly.\n\n### Interactive sharing\nAs mentioned above, if you want to go beyond rendering static notebooks, you can use mybinder or Google Colab to provide a quick access to interactive versions of your notebooks that collaborators and students can test and modify. You can embed directly in the notebooks badges to further simplify this procedure. Here for Colab:\n[](https://colab.research.google.com/github/guiwitz/RemoteJupyter/blob/master/Jupyter_in_remote_times_colab.ipynb)\nand here for mybinder:\n[](https://github.com/guiwitz/RemoteJupyter/blob/master)\n\n### Create a course website with GitHub pages\nImagine now that you have a series of notebooks for a course and you would like to assemble them into a simple website. If you are not familiar with web technologies, it can be a daunting task to create such a simple website. [GitHub pages](https://pages.github.com/) is a service integrated in GitHub which takes a lot of the pain away from that process i.e. no database, no domain to setup, no HTML to write etc. However it still involves setting up the appropriate environment to create content. 
This is where [Fastpages](https://github.com/fastai/fastpages) a solution built on top of GitHub pages comes to the rescue: it completely automates the process of creating and publishing the web page. The only requirement is to be minimally familiar with GitHub (forks, pull requests etc.). Faspages offers a very detailed step by step guide on how to proceed and turns your collection of notebooks into a blog. See for example [this notebook](https://guiwitz.github.io/blog/2020/03/23/Jupyter-in-remote-times.html) on my blog.\n\n## 4. Examples of courses\n\nWe finish here by giving a few examples of courses of different nature.\n\n### Computational courses\n\nAt ScITS we heavily use notebooks to teach programming in various areas, e.g. image processing. For that course, a [Github repository](https://github.com/guiwitz/PyImageCourse) contains all the course material. One can visualize it directly in the repository or using [nbviewer](https://nbviewer.jupyter.org/github/guiwitz/PyImageCourse/tree/master/). It can even be run interactively using the [mybinder service](https://mybinder.org/v2/gh/guiwitz/PyImageCourse/master).\n\n### Scientific courses\n\nVirtually all disciplines can benefit from being presented at least partially in interactive forms. To see for example how notebooks are used to teach chemistry, you can have a look at [this specific notebook](https://nbviewer.jupyter.org/github/jckantor/CBE20255/blob/master/notebooks/02.01-Balancing-Reactions.ipynb#Stoichiometry) from [this course](http://jckantor.github.io/CBE20255/). It contains videos, text, equations, symbolic calculations and is directly executable on Google Colab.\n\nHere's another [great example](https://nbviewer.jupyter.org/github/engineersCode/EngComp3_tourdynamics/blob/master/notebooks_en/3_Get_Oscillations.ipynb) of a course on mechanics and numerical approximations. Again it uses all the resources available in notebooks. 
Note that if you visit the Github page, you will see this sort of badge [](https://mybinder.org/v2/gh/engineersCode/EngComp3_tourdynamics/master) telling you, you can run the material in a mybinder session.\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "441c6ce538c4497a10b40b40c8f59fb8443561f9", "size": 45054, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "_notebooks/2020-03-23-Jupyter-in-remote-times.ipynb", "max_stars_repo_name": "guiwitz/blog", "max_stars_repo_head_hexsha": "0c1258f643114bb68c67a7fb80a4b8ae15d119ed", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "_notebooks/2020-03-23-Jupyter-in-remote-times.ipynb", "max_issues_repo_name": "guiwitz/blog", "max_issues_repo_head_hexsha": "0c1258f643114bb68c67a7fb80a4b8ae15d119ed", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-09-28T00:51:08.000Z", "max_issues_repo_issues_event_max_datetime": "2021-09-28T00:51:08.000Z", "max_forks_repo_path": "_notebooks/2020-03-23-Jupyter-in-remote-times.ipynb", "max_forks_repo_name": "guiwitz/blog", "max_forks_repo_head_hexsha": "0c1258f643114bb68c67a7fb80a4b8ae15d119ed", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 118.2519685039, "max_line_length": 14016, "alphanum_fraction": 0.8397700537, "converted": true, "num_tokens": 2917, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8947894632969137, "lm_q2_score": 0.9334308091776496, "lm_q1q2_score": 0.835224052768873}} {"text": "# For today's code challenge you will be reviewing yesterdays lecture material. Have fun!\n\n### if you get done early check out [these videos](https://www.3blue1brown.com/neural-networks).\n\n# The Perceptron\n\nThe first and simplest kind of neural network that we could talk about is the perceptron. A perceptron is just a single node or neuron of a neural network with nothing else. It can take any number of inputs and spit out an output. What a neuron does is it takes each of the input values, multplies each of them by a weight, sums all of these products up, and then passes the sum through what is called an \"activation function\" the result of which is the final value.\n\nI really like figure 2.1 found in this [pdf](http://www.uta.fi/sis/tie/neuro/index/Neurocomputing2.pdf) even though it doesn't have bias term represented there.\n\n\n\nIf we were to write what is happening in some verbose mathematical notation, it might look something like this:\n\n\\begin{align}\n y = sigmoid(\\sum(weight_{1}input_{1} + weight_{2}input_{2} + weight_{3}input_{3}) + bias)\n\\end{align}\n\nUnderstanding what happens with a single neuron is important because this is the same pattern that will take place for all of our networks. \n\nWhen imagining a neural network I like to think about the arrows as representing the weights, like a wire that has a certain amount of resistance and only lets a certain amount of current through. And I like to think about the node itselef as containing the prescribed activation function that neuron will use to decide how much signal to pass onto the next layer.\n\n# Activation Functions (transfer functions)\n\nIn Neural Networks, each node has an activation function. 
Each node in a given layer typically has the same activation function. These activation functions are the biggest piece of neural networks that have been inspired by actual biology. The activation function decides whether a cell \"fires\" or not. Sometimes it is said that the cell is \"activated\" or not. In Artificial Neural Networks activation functions decide how much signal to pass onto the next layer. This is why they are sometimes referred to as transfer functions because they determine how much signal is transferred to the next layer.\n\n## Common Activation Functions:\n\n\n\n# Implementing a Perceptron from scratch in Python\n\n### Establish training data\n\n\n```python\nimport numpy as np\n\nnp.random.seed(812)\n\ninputs = np.array([\n [0, 0, 1],\n [1, 1, 1],\n [1, 0, 1],\n [0, 1, 1]\n])\n\ncorrect_outputs = [[0], [1], [1], [0]]\n```\n\n### Sigmoid activation function and its derivative for updating weights\n\n\n```python\ndef sigmoid(x):\n return 1 / (1 + np.exp(-x))\n\ndef sigmoid_derivative(x):\n sx = sigmoid(x)\n return sx * (1 - sx)\n```\n\n## Updating weights with derivative of sigmoid function:\n\n\n\n### Initialize random weights for our three inputs\n\n\n```python\nweights = 2 * np.random.random((3, 1)) - 1\nweights\n```\n\n\n\n\n array([[-0.80108081],\n [ 0.5104585 ],\n [ 0.59516067]])\n\n\n\n### Calculate weighted sum of inputs and weights\n\n\n```python\nweighted_sum = np.dot(inputs, weights)\nweighted_sum\n```\n\n\n\n\n array([[ 0.59516067],\n [ 0.30453837],\n [-0.20592014],\n [ 1.10561918]])\n\n\n\n### Output the activated value for the end of 1 training epoch\n\n\n```python\nactivated_output = sigmoid(weighted_sum)\nactivated_output\n```\n\n\n\n\n array([[0.64454837],\n [0.57555158],\n [0.44870111],\n [0.75131149]])\n\n\n\n### take difference of output and true values to calculate error\n\n\n```python\nerror = correct_outputs - activated_output\nerror\n```\n\n\n\n\n array([[-0.64454837],\n [ 0.42444842],\n [ 0.55129889],\n [-0.75131149]])\n\n\n\n\n```python\nadjustments = error * sigmoid_derivative(activated_output)\nadjustments\n```\n\n\n\n\n array([[-0.14549539],\n [ 0.09778778],\n [ 0.13111388],\n [-0.16363003]])\n\n\n\n\n```python\nweights += np.dot(inputs.T, adjustments)\nweights\n```\n\n\n\n\n array([[-0.57217915],\n [ 0.44461626],\n [ 0.51493691]])\n\n\n\n### Put it all together\n\n\n```python\nfor iteration in range(10000):\n \n # Weighted sum of inputs/weights\n weighted_sum = np.dot(inputs, weights)\n \n # Activate!\n activated_output = sigmoid(weighted_sum)\n \n # Calculate error\n error = correct_outputs - activated_output\n \n # Adjustments\n adjustments = error * sigmoid_derivative(activated_output)\n \n # Update weights\n weights += np.dot(inputs.T, adjustments)\n \nprint(\"Weights after training\")\nprint(weights)\n\nprint(\"Output after training\")\nprint(activated_output)\n```\n\n Weights after training\n [[15.03729668]\n [-0.40682518]\n [-7.23231091]]\n Output after training\n [[7.22398796e-04]\n [9.99387935e-01]\n [9.99592428e-01]\n [4.81060734e-04]]\n\n", "meta": {"hexsha": "2207293169d32e524a37d0e975e508dab5f06581", "size": 11580, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Tuesday Solution.ipynb", "max_stars_repo_name": "eyvonne/Neural_network_foundations_code_challenges", "max_stars_repo_head_hexsha": "fa9ef104cb4e5dc43e3b6649659778ce10ff17b3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, 
"max_issues_repo_path": "Tuesday Solution.ipynb", "max_issues_repo_name": "eyvonne/Neural_network_foundations_code_challenges", "max_issues_repo_head_hexsha": "fa9ef104cb4e5dc43e3b6649659778ce10ff17b3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Tuesday Solution.ipynb", "max_forks_repo_name": "eyvonne/Neural_network_foundations_code_challenges", "max_forks_repo_head_hexsha": "fa9ef104cb4e5dc43e3b6649659778ce10ff17b3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 10, "max_forks_repo_forks_event_min_datetime": "2019-11-05T16:41:55.000Z", "max_forks_repo_forks_event_max_datetime": "2019-11-05T17:09:13.000Z", "avg_line_length": 24.7965738758, "max_line_length": 614, "alphanum_fraction": 0.5389464594, "converted": true, "num_tokens": 1194, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639694252316, "lm_q2_score": 0.8596637487122111, "lm_q1q2_score": 0.8352183240698107}} {"text": "# Systems of Equations\nImagine you are at a casino, and you have a mixture of \u00a310 and \u00a325 chips. You know that you have a total of 16 chips, and you also know that the total value of chips you have is \u00a3250. Is this enough information to determine how many of each denomination of chip you have?\n\nWell, we can express each of the facts that we have as an equation. The first equation deals with the total number of chips - we know that this is 16, and that it is the number of \u00a310 chips (which we'll call ***x*** ) added to the number of \u00a325 chips (***y***).\n\nThe second equation deals with the total value of the chips (\u00a3250), and we know that this is made up of ***x*** chips worth \u00a310 and ***y*** chips worth \u00a325.\n\nHere are the equations\n\n\\begin{equation}x + y = 16 \\end{equation}\n\\begin{equation}10x + 25y = 250 \\end{equation}\n\nTaken together, these equations form a *system of equations* that will enable us to determine how many of each chip denomination we have.\n\n## Graphing Lines to Find the Intersection Point\nOne approach is to determine all possible values for x and y in each equation and plot them.\n\nA collection of 16 chips could be made up of 16 \u00a310 chips and no \u00a325 chips, no \u00a310 chips and 16 \u00a325 chips, or any combination between these.\n\nSimilarly, a total of \u00a3250 could be made up of 25 \u00a310 chips and no \u00a325 chips, no \u00a310 chips and 10 \u00a325 chips, or a combination in between.\n\nLet's plot each of these ranges of values as lines on a graph:\n\n\n```python\n%matplotlib inline\nfrom matplotlib import pyplot as plt\n\n# Get the extremes for number of chips\nchipsAll10s = [16, 0]\nchipsAll25s = [0, 16]\n\n# Get the extremes for values\nvalueAll10s = [25,0]\nvalueAll25s = [0,10]\n\n# Plot the lines\nplt.plot(chipsAll10s,chipsAll25s, color='blue')\nplt.plot(valueAll10s, valueAll25s, color=\"orange\")\nplt.xlabel('x (\u00a310 chips)')\nplt.ylabel('y (\u00a325 chips)')\nplt.grid()\n\nplt.show()\n```\n\nLooking at the graph, you can see that there is only a single combination of \u00a310 and \u00a325 chips that is on both the line for all possible combinations of 16 chips and the line for all possible combinations of \u00a3250. 
The point where the line intersects is (10, 6); or put another way, there are ten \u00a310 chips and six \u00a325 chips.\n\n### Solving a System of Equations with Elimination\nYou can also solve a system of equations mathematically. Let's take a look at our two equations:\n\n\\begin{equation}x + y = 16 \\end{equation}\n\\begin{equation}10x + 25y = 250 \\end{equation}\n\nWe can combine these equations to eliminate one of the variable terms and solve the resulting equation to find the value of one of the variables. Let's start by combining the equations and eliminating the x term.\n\nWe can combine the equations by adding them together, but first, we need to manipulate one of the equations so that adding them will eliminate the x term. The first equation includes the term ***x***, and the second includes the term ***10x***, so if we multiply the first equation by -10, the two x terms will cancel each other out. So here are the equations with the first one multiplied by -10:\n\n\\begin{equation}-10(x + y) = -10(16) \\end{equation}\n\\begin{equation}10x + 25y = 250 \\end{equation}\n\nAfter we apply the multiplication to all of the terms in the first equation, the system of equations look like this:\n\n\\begin{equation}-10x + -10y = -160 \\end{equation}\n\\begin{equation}10x + 25y = 250 \\end{equation}\n\nNow we can combine the equations by adding them. The ***-10x*** and ***10x*** cancel one another, leaving us with a single equation like this:\n\n\\begin{equation}15y = 90 \\end{equation}\n\nWe can isolate ***y*** by dividing both sides by 15:\n\n\\begin{equation}y = \\frac{90}{15} \\end{equation}\n\nSo now we have a value for ***y***:\n\n\\begin{equation}y = 6 \\end{equation}\n\nSo how does that help us? Well, now we have a value for ***y*** that satisfies both equations. We can simply use it in either of the equations to determine the value of ***x***. 
Let's use the first one:\n\n\\begin{equation}x + 6 = 16 \\end{equation}\n\nWhen we work through this equation, we get a value for ***x***:\n\n\\begin{equation}x = 10 \\end{equation}\n\nSo now we've calculated values for ***x*** and ***y***, and we find, just as we did with the graphical intersection method, that there are ten \u00a310 chips and six \u00a325 chips.\n\nYou can run the following Python code to verify that the equations are both true with an ***x*** value of 10 and a ***y*** value of 6.\n\n\n```python\nx = 10\ny = 6\nprint ((x + y == 16) & ((10*x) + (25*y) == 250))\n```\n\n True\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "e2fd813bf3a15e4f00abb15851564191653ff53c", "size": 23733, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Basics Of Algebra by Hiren/01-03-Systems of Equations.ipynb", "max_stars_repo_name": "serkin/Basic-Mathematics-for-Machine-Learning", "max_stars_repo_head_hexsha": "ac0ae9fad82a9f0429c93e3da744af6e6d63e5ab", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Basics Of Algebra by Hiren/01-03-Systems of Equations.ipynb", "max_issues_repo_name": "serkin/Basic-Mathematics-for-Machine-Learning", "max_issues_repo_head_hexsha": "ac0ae9fad82a9f0429c93e3da744af6e6d63e5ab", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Basics Of Algebra by Hiren/01-03-Systems of Equations.ipynb", "max_forks_repo_name": "serkin/Basic-Mathematics-for-Machine-Learning", "max_forks_repo_head_hexsha": "ac0ae9fad82a9f0429c93e3da744af6e6d63e5ab", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 140.4319526627, "max_line_length": 17236, "alphanum_fraction": 0.8617536763, "converted": true, "num_tokens": 1227, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8807970811069351, "lm_q2_score": 0.9481545366464883, "lm_q1q2_score": 0.8351317483165255}} {"text": "# Sympy Solver\n\n## CHE 341\n\nThis notebook introduces SymPy, the Python package for symbolic mathematics. First, let's import everything and set up a nice solve function:\n\n\n```python\nimport sympy as sm\nsm.init_printing()\nimport numpy as np\nfrom copy import copy\nimport matplotlib.pyplot as plt\nfrom sympy.abc import *\n\ndef solve(equation, variable, subs=None, unwrap=True):\n \"\"\"Solve equation for the given variable; if given, a dictionary of subs\n (substitutions) can be given. This is useful if you want to solve numerically\n rather than symbolically. 
\n \n Parameters:\n equation : the sympy equation to solve\n variable : the sympy variable to solve for\n subs : the dictionary of substitutions\n unwrap : if there is only one solution, return it directly rather than returning a list.\n \n Returns:\n The solution (if one solution and unwrap=True), or a list of all possible solutions.\n \n Examples: Jupyter notebooks will give prettier printing of the output\n >>> solve(a*x**2 + b*x + c, x)\n [(-b + sqrt(-4*a*c + b**2))/(2*a), -(b + sqrt(-4*a*c + b**2))/(2*a)]\n \n \"\"\"\n if subs is not None:\n subs = copy(subs)\n subs.pop(variable.name, None)\n out = sm.solve(equation.subs(subs), variable)\n else:\n out = sm.solve(equation, variable)\n if unwrap and len(out) == 1:\n out = out[0]\n return out\n```\n\nAll capital and lowercase single letter variables (`a, b, c, A, B, C`) and Greek letters (`alpha, beta, gamma`) are defined as Sympy variables by default because of the line `from sympy.abc import *`.\n\nFor example, below we solve the quadratic equation for $x$:\n\n\n```python\nsolns = solve(a*x**2 + b*x+c, x)\nsolns\n```\n\nHere's another example, solving the Boltzmann equation for $T$. In order to use the normal variables, we define $n_i$, $n_j$, and $k_B$ first:\n\n\n```python\nn_i, n_j, k_B = sm.symbols(\"n_i n_j k_B\", positive=True) # Define the 3 symbols with subscripts in the Boltzmann equation\n\nboltzmann_eqn = n_j/n_i-sm.exp(-epsilon/(k_B*T)) # We use sympy's exponential function, sm.exp\nsolve(boltzmann_eqn, T)\n```\n\nOne final example - let's do an integral? \n\n\n```python\nsm.sequence(sm.exp(-epsilon*i/(k_B*T)), (i, 0, 5))\n```\n\n\n```python\nepsilon, T = sm.symbols('epsilon T', positive=True)\n```\n\n\n```python\n\n```\n\n\n```python\nsm.Sum(x**i, (i, 0, sm.oo)).doit()\n```\n\n\n```python\nsubs=dict(\nP = 1.0,\nR = 0.08206,\nT = 298,\nn = 1,\nV=22.4)\nequation = P*V-n*R*T\nsolve(equation, V, subs)\n```\n\n\n```python\nn_i, n_j, k_B = sm.symbols(\"n_i n_j k_B\")\n\n# The sympy solver!\nsubs=dict(\nn_i=0.5,\nn_j=1,\nepsilon=1e-21,\nk_B=1.381e-23,\nT=298)\nboltzmann = n_i/n_j-sm.exp(-epsilon/(k_B*T))\nboltzmann\n\n```\n\n\n```python\nsolve(boltzmann, n_i, subs)\n```\n\n\n```python\n# Let's \nsolve(boltzmann, T)`\n```\n\n\n```python\nsolve(boltzmann, epsilon)\n```\n\n\n```python\nrotational_energy_eqn = i*(i+1) * h**2/(8 * pi**2) * (1/(mu*R**2)) - epsilon\nrotational_energy_eqn\n```\n\n\n```python\nsolve(rotational_energy_eqn, R)[1]\n```\n\n\n```python\n!pip install pint\n```\n\n\n```python\nimport pint\nu = pint.UnitRegistry()\n```\n\n\n```python\n(u.h * 1.0).to('J s')\n```\n\n\n```python\ndef reduced_mass(m1, m2):\n return m1 * m2 / (m1 + m2)\n```\n\n\n```python\nmu_CO = reduced_mass(12, 16)*u.amu\n```\n\n\n```python\nmu_CO\n```\n\n\n```python\ndef boltzmann_ni(nj, energy, T):\n return nj*np.exp(-energy/ (T*1.381e-23))\n```\n\n\n```python\ntemps = np.linspace(100, 300, 5)\ntemps\n```\n\n\n```python\nboltzmann_ni(1, 1e-21, temps)\n```\n\nWhat if I need to do this for many values of temperature? 
Make a new function!\n\n\n```python\ndef boltzmann_n_i_temp(T, subs):\n subs = copy(subs)\n # The sympy solver!\n subs['T'] = T\n boltzmann = n_i/n_j-sm.exp(-epsilon/(k_B*T))\n return solve(boltzmann, n_i, subs)\n```\n\n\n```python\ntemps = np.linspace(10, 300,30)\npops = [boltzmann_n_i_temp(t, subs) for t in temps]\n```\n\n\n```python\nplt.plot(temps, pops, 'o')\n```\n\n\n```python\n# subs is a dictionary of substitutions - variables and their values...\nsubs=dict(\nP = , # atm\nR = 0.08206, # L atm/mol-K\nT = 293, # K\nn = 9.32, # mol\nV= # L\n)\ngas_law = P*V-n*R*T\nsolve(gas_law, V, subs) #\n```\n\nn_i_func(1, 1e-21, 1.381e-23, 343)\n", "meta": {"hexsha": "707483b202642068401882f4ff315743d4c01a96", "size": 32324, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/The SymPy Solver.ipynb", "max_stars_repo_name": "ryanpdwyer/pchem", "max_stars_repo_head_hexsha": "ad097d7fce07669f4ad269e895e2185fa51ac2d2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notebooks/The SymPy Solver.ipynb", "max_issues_repo_name": "ryanpdwyer/pchem", "max_issues_repo_head_hexsha": "ad097d7fce07669f4ad269e895e2185fa51ac2d2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/The SymPy Solver.ipynb", "max_forks_repo_name": "ryanpdwyer/pchem", "max_forks_repo_head_hexsha": "ad097d7fce07669f4ad269e895e2185fa51ac2d2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 60.4186915888, "max_line_length": 5176, "alphanum_fraction": 0.7663036753, "converted": true, "num_tokens": 1328, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9273632996617212, "lm_q2_score": 0.9005297947939938, "lm_q1q2_score": 0.8351182819438507}} {"text": "# Lecture 6: Monty Hall, Simpson's Paradox\n\n## The Monty Hall Problem\n\nYou know this problem.\n\n* There are three doors.\n* A car is behind one of the doors.\n* The other two doors have goats behind them.\n* You choose a door, but before you see what's behind your choice, Monty opens one of the other doors to reveal a goat.\n* Monty offers you the chance to switch doors.\n\n_Should you switch?_\n\n### Defining the problem\n\nLet $S$ be the event of winning when you switch.\n\nLet $D_j$ be the event of the car being behind door $j$.\n\n### Solving with a probability tree\n\n\n\nWith a probability tree, it is easy to represent the case where you condition on Monty opening door 2. 
Given that you initially choose door 1, you can quickly see that if you stick with door 1, you have a $\\frac{1}{3}~$ chance of winning.\n\nYou have a $\\frac{2}{3}~$ chance of winning if you switch.\n\n### Solving with the Law of Total Probability\n\nThis is even easier to solve using the Law of Total Probability.\n\n\\begin{align}\n P(S) &= P(S|D_1)P(D_1) + P(S|D_2)P(D_2) + P(S|D_3)P(D_3) \\\\\n &= 0 \\frac{1}{3} + 1 \\frac{1}{3} + 1 \\frac{1}{3} \\\\\n &= \\frac{2}{3}\n\\end{align}\n\n### A more general solution\n\nLet $n = 7$ be the number of doors in the game.\n\nLet $m=3$ be the number of doors with goats that Monty opens after you select your initial door choice.\n\nLet $S$ be the event where you win _by sticking with your original door choice of door 1_.\n\nLet $C_j$ be the event that the car is actually behind door $j$.\n\nConditioning only on which door has the car, we have\n\\begin{align}\n & &P(S) &= P(S|C_1)P(C_1) + \\dots + P(S|C_n)P(C_n) & &\\text{Law of Total Probability} \\\\\n & & &= P(C_1) \\\\\n & & &= \\frac{1}{7} \\\\\n\\end{align}\n\nLet $M_{i,j,k}$ be the event that Monty opens doors $i,j,k$. Conditioning on Monty opening up doors $i,j,k$, we have\n\n\\begin{align}\n & &P(S) &= \\sum_{i,j,k} P(S|M_{i,j,k})P(M_{i,j,k}) & &\\text{summed over all i, j, k with } 2 \\le i \\lt j \\lt k \\le 7 \\\\\n \\\\\n & &\\Rightarrow P(S|M_{i,j,k}) &= P(S) & &\\text{by symmetry} \\\\\n & & &=\\frac{1}{7}\n\\end{align}\n\nNote that we can now generalize this to the case where:\n* there are $n \\ge 3$ doors\n* after you choose a door, Monty opens $m$ of the remaining doors $n-1$ doors to reveal a goat (with $1 \\le m \\le n-m-2$)\n\nThe probability of winning with the strategy of _sticking to your initial choice_ is $\\frac{1}{n}$, whether __unconditional or conditioning on the doors Monty opens__.\n\nAfter Monty opens $m$ doors, each of the remaining $n-m-1$ doors has __conditional__ probability of $\\left(\\frac{n-1}{n-m-1}\\right) \\left(\\frac{1}{n}\\right)$.\n\nSince $\\frac{1}{n} \\lt \\left(\\frac{n-1}{n-m-1}\\right) \\left(\\frac{1}{n}\\right)$, you will always have a greater chance of winning if you switch.\n\n\n## Simpson's Paradox\n\n_Is it possible for a certain set of events to be more (or less) probable than another without conditioning, and then be less (or more) probable with conditioning?_\n\n\n\nAssume that we have the above rates of success/failure for Drs. Hibbert and Nick for two types of surgery: heart surgery and band-aid removal.\n\n### Defining the problem\n\nLet $A$ be the event of a successful operation.\n\nLet $B$ be the event of treatment by Dr. Nick.\n\nLet $C$ be the event of heart surgery.\n\n\\begin{align}\n P(A|B,C) &< P(A|B^c,C) & &\\text{Dr. Nick is not as skilled as Dr. Hibbert in heart surgery} \\\\\n P(A|B,C^c) &< P(A|B^c,C^c) & &\\text{neither is he that good at band-aid removal} \\\\\n\\end{align}\n\nAnd yet $P(A|B) > P(A|B^c)$?\n\n### Explaining with the Law of Total Probability\n\nTo explain this paradox, let's try to use the Law of Total Probability.\n\n\\begin{align}\n P(A|B) &= P(A|B,C)P(C|B) + P(A|B,C^c)P(C^c|B) \\\\\n \\\\\n \\text{but } P(A|B,C) &< P(A|B^c,C) \\\\\n \\text{and } P(A|B,C^c) &< P(A|B^c,C^c)\n\\end{align}\n\nLook at $P(C|B$ and $P(C|B^c)$. 
These weights are what makes this paradox possible, as they are what make the inequality relation sign flip.\n\nEvent $C$ is a case of __confounding__\n\n\n### Another example\n\n_Is it possible to have events $A_1, A_2, B, C$ such that_ \n\n\\begin{align}\n P(A_1|B) &\\gt P(A_1|C) \\text{ and } P(A_2|B) \\gt P(A_2|C) & &\\text{ ... yet...} \\\\ \n P(A_1 \\cup A_2|B) &\\lt P(A_1 \\cup A_2|C)\n\\end{align}\n\nYes, and this is just another case of Simpson's Paradox.\n\nNote that \n\n\\begin{align}\n P(A_1 \\cup A_2|B) &= P(A_1|B) + P(A_2|B) - P(A_1 \\cap A_2|B)\n\\end{align}\n\nSo this is _not_ possible if $A_1$ and $A_2$ are disjoint and $P(A_1 \\cup A_2|B) = P(A_1|B) + P(A_2|B)$.\n\nIt is crucial, therefore, to consider the _intersection_ $P(A_1 \\cap A_2|B)$, so let's looks at the following example where $P(A_1 \\cap A_2|B) \\gg P(A_1 \\cap A_2|C)$ in order to offset the other inequalities.\n\nConsider two basketball players each shooting a pair of free throws.\n\nLet $A_j$ be the event basketball free throw scores on the $j^{th}$ try.\n\nPlayer $B$ always either makes both $P(A_1 \\cap A_2|B) = 0.8$, or misses both.\n\n\\begin{align}\n P(A_1|B) = P(A_2|B) = P(A_1 \\cap A_2|B) = P(A_1 \\cup A_2|B) = 0.8\n\\end{align} \n\nPlayer $C$ makes free throw shots with probability $P(A_j|C) = 0.7$, independently, so we have\n\n\\begin{align}\n P(A_1|C) &= P(A_2|C) = 0.7 \\\\\n P(A_1 \\cap A_2|C) &= P(A_1|C) P(A_2|C) = 0.49 \\\\\n P(A_1 \\cup A_2|C) &= P(A_1|C) + P(A_2|C) - P(A_1 \\cap A_2|C) \\\\\n &= 2 \\times 0.7 - 0.49 \\\\\n &= 0.91\n\\end{align} \n\nAnd so we have our case where\n\n\\begin{align}\n P(A_1|B) = 0.8 &\\gt P(A_1|C) = 0.7 \\\\\n P(A_2|B) = 0.8 &\\gt P(A_2|C) = 0.7 \\\\\n \\\\\n \\text{ ... and yet... } \\\\\n \\\\\n P(A_1 \\cup A_2|B) &\\lt P(A_1 \\cup A_2|C) ~~~~ \\blacksquare\n\\end{align}\n", "meta": {"hexsha": "1101208cb1edba4a90f77869b0b671b471c81161", "size": 8543, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Lecture_06.ipynb", "max_stars_repo_name": "dirtScrapper/Stats-110-master", "max_stars_repo_head_hexsha": "a123692d039193a048ff92f5a7389e97e479eb7e", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Lecture_06.ipynb", "max_issues_repo_name": "dirtScrapper/Stats-110-master", "max_issues_repo_head_hexsha": "a123692d039193a048ff92f5a7389e97e479eb7e", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Lecture_06.ipynb", "max_forks_repo_name": "dirtScrapper/Stats-110-master", "max_forks_repo_head_hexsha": "a123692d039193a048ff92f5a7389e97e479eb7e", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.3531914894, "max_line_length": 248, "alphanum_fraction": 0.5234695072, "converted": true, "num_tokens": 1994, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.89181104831338, "lm_q2_score": 0.9362850057480346, "lm_q1q2_score": 0.8349893124962537}} {"text": "# Nonlinear Equations\n\nWe want to find a root of the nonlinear function $f$ using different methods.\n\n1. Bisection method\n2. Newton method\n3. Chord method\n4. Secant method\n5. 
Fixed point iterations\n\n\n\n\n\n\n```python\n%matplotlib inline\nfrom numpy import *\nfrom matplotlib.pyplot import *\nimport sympy as sym\n\n```\n\n\n```python\nt = sym.symbols('t')\n\nf_sym = t/8. * (63.*t**4 - 70.*t**2. +15.) # Legendre polynomial of order 5\n\nf_prime_sym = sym.diff(f_sym,t)\n\nf = sym.lambdify(t, f_sym, 'numpy')\nf_prime = sym.lambdify(t,f_prime_sym, 'numpy')\n\nphi = lambda x : 63./70.*x**3 + 15./(70.*x)\n#phi = lambda x : 70.0/15.0*x**3 - 63.0/15.0*x**5\n#phi = lambda x : sqrt((63.*x**4 + 15.0)/70.)\n\n# Let's plot\nn = 1025\n\nx = linspace(-1,1,n)\nc = zeros_like(x)\n\n_ = plot(x,f(x))\n_ = plot(x,c)\n_ = grid()\n\n```\n\n\n```python\n# Initial data for the variuos algorithms\n\n# interval in which we seek the solution \na = 0.7\nb = 1.\n\n# initial points\nx0 = (a+b)/2.0\nx00 = b\n\n```\n\n\n```python\n# stopping criteria\neps = 1e-10\nn_max = 1000\n```\n\n## Bisection method\n\n$$\nx^k = \\frac{a^k+b^k}{2}\n$$\n```\n if (f(a_k) * f(x_k)) < 0:\n b_k1 = x_k\n a_k1 = a_k\n else:\n a_k1 = x_k\n b_k1 = b_k\n```\n\n\n```python\ndef bisect(f,a,b,eps,n_max):\n assert f(a)*f(b)<0\n a_new = a\n b_new = b\n x = mean([a,b])\n err = eps + 1.\n errors = [err]\n it = 0\n while (err > eps and it < n_max):\n if ( f(a_new) * f(x) < 0 ):\n # root in (a_new,x)\n b_new = x\n else:\n # root in (x,b_new)\n a_new = x\n \n x_new = mean([a_new,b_new])\n \n #err = 0.5 *(b_new -a_new)\n err = abs(f(x_new))\n #err = abs(x-x_new)\n \n errors.append(err)\n x = x_new\n it += 1\n \n semilogy(errors)\n print(it)\n print(x)\n print(err)\n return errors\n \nerrors_bisect = bisect(f,a,b,eps,n_max)\n\n\n \n \n```\n\n\n```python\n# is the number of iterations coherent with the theoretical estimation?\n```\n\nIn order to find out other methods for solving non-linear equations, let's compute the Taylor's series of $f(x^k)$ up to the first order \n\n$$\nf(x^k) \\simeq f(x^k) + (x-x^k)f^{\\prime}(x^k)\n$$\nwhich suggests the following iterative scheme\n$$\nx^{k+1} = x^k - \\frac{f(x^k)}{f^{\\prime}(x^k)}\n$$\n\nThe following methods are obtained applying the above scheme where\n\n$$\nf^{\\prime}(x^k) \\approx q^k\n$$\n\n## Newton's method\n$$\nq^k = f^{\\prime}(x^k)\n$$\n\n$$\nx^{k+1} = x^k - \\frac{f(x^k)}{q^k}\n$$\n\n\n```python\ndef newton(f,f_prime,x0,eps,n_max):\n pass # TODO\n\n%time errors_newton = newton(f,f_prime,1.0,eps,n_max)\n```\n\n## Chord method\n\n$$\nq^k \\equiv q = \\frac{f(b)-f(a)}{b-a}\n$$\n\n$$\nx^{k+1} = x^k - \\frac{f(x^k)}{q}\n$$\n\n\n```python\ndef chord(f,a,b,x0,eps,n_max):\n pass # TODO\n\nerrors_chord = chord (f,a,b,x0,eps,n_max)\n```\n\n## Secant method\n\n$$\nq^k = \\frac{f(x^k)-f(x^{k-1})}{x^k - x^{k-1}}\n$$\n\n$$\nx^{k+1} = x^k - \\frac{f(x^k)}{q^k}\n$$\n\nNote that this algorithm requirs **two** initial points\n\n\n```python\ndef secant(f,x0,x00,eps,n_max):\n pass # TODO\n \nerrors_secant = secant(f,x0,x00,eps,n_max)\n```\n\n## Fixed point iterations\n\n$$\nf(x)=0 \\to x-\\phi(x)=0\n$$\n\n$$\nx^{k+1} = \\phi(x^k)\n$$\n\n\n```python\ndef fixed_point(phi,x0,eps,n_max):\n pass # TODO\n\nerrors_fixed = fixed_point(phi,0.3,eps,n_max)\n \n```\n\n## Comparison\n\n\n```python\n# plot the error convergence for the methods\nloglog(errors_bisect, label='bisect')\nloglog(errors_chord, label='chord')\nloglog(errors_secant, label='secant')\nloglog(errors_newton, label ='newton')\nloglog(errors_fixed, label ='fixed')\n_ = legend()\n```\n\n\n```python\n# Let's compare the scipy implmentation of Newton's method with our..\n```\n\n\n```python\nimport scipy.optimize as opt\n%time 
opt.newton(f, 1.0, f_prime, tol = eps)\n```\n", "meta": {"hexsha": "729dcf9afae02c68f7c154ee109936d1746b9254", "size": 8242, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/03-nonlinear-equations.ipynb", "max_stars_repo_name": "debarshibanerjee/numerical-analysis-2021-2022", "max_stars_repo_head_hexsha": "e23043a56a66ff119301088c2a85ba9ca1ba37ce", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-01-12T23:19:50.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-12T23:19:50.000Z", "max_issues_repo_path": "notebooks/03-nonlinear-equations.ipynb", "max_issues_repo_name": "giovastabile/numerical-analysis-2021-2022", "max_issues_repo_head_hexsha": "15b4557cc06eb089077931e08367845a7c10935c", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/03-nonlinear-equations.ipynb", "max_forks_repo_name": "giovastabile/numerical-analysis-2021-2022", "max_forks_repo_head_hexsha": "15b4557cc06eb089077931e08367845a7c10935c", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 21.4635416667, "max_line_length": 146, "alphanum_fraction": 0.4400630915, "converted": true, "num_tokens": 1325, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8962513675912912, "lm_q2_score": 0.9314625050654264, "lm_q1q2_score": 0.8348245440248984}} {"text": "# Minimal distance between point and plane\n\nConsider the case that $V = \\mathbb R^3$ and $S$ is the plane spanned by the vectors\n\n$$\nb_0 = (-1,-1,0), \\quad b_1 = (1,0,1).\n$$\n\nLet $u = (0,0,3)$ and let us find the closest point $s \\in S$ to $u$. \nThere are $c_0, c_1 \\in \\mathbb R$ such that \n\n$$\ns = c_0 b_0 + c_1 b_1 = B c,\n$$\n\nwhere $B$ is the matrix with columns $b_0$ and $b_1$ and $c$ is the vector with entries $c_0$ and $c_1$. 
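Written out, with $b_0$ and $b_1$ as columns,\n\n$$\nB=\\begin{bmatrix} -1 & 1 \\\\ -1 & 0 \\\\ 0 & 1 \\end{bmatrix}, \\qquad c = \\begin{bmatrix} c_0 \\\\ c_1 \\end{bmatrix}.\n$$\n\n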
Perhaps the most natural way to find the closest point is to solve the least squares problem to minimize $|Bc - u|^2$.\n\n\n```python\nimport numpy as np\nfrom scipy import linalg as la\n```\n\n\n```python\nu = np.array([0, 0, 3], dtype=float)\nb0 = np.array([-1, -1, 0], dtype=float)\nb1 = np.array([1, 0, 1], dtype=float)\n\nB = np.stack((b0, b1), axis=-1)\nc, _, _, _ = la.lstsq(B, u)\ns = B @ c\nc\n```\n\n\n```python\nfrom matplotlib import pyplot as plt\n\nimport plot_vec3d as p3d\ndef prettify():\n p3d.plot_plane(s, c[0] * b0)\n p3d.plot_vec(u - s, 'k--', o=s)\n p3d.plot_coeffs(c, b0, b1, 'k:')\n p3d.set_equal_aspect()\n p3d.hide_ticks()\n```\n\n\n```python\nplt.figure(dpi=120)\nplt.axes(projection='3d')\np3d.plot_vec(u, 'b')\np3d.plot_vec(s, 'g')\np3d.plot_vec(b0, 'r')\np3d.plot_vec(b1, 'r')\nprettify()\n```\n\nThe finite element method leads to problems that are similar to the above distance minimization, but \n\n* $V$ will then be an infinite dimensional vector space\n* The corresponding least squares problem can not be solved directly\n\nHowever,\n\n* $S$ will still be finite dimensional\n* We will be able to form a finite dimensional system of linear equations \n\nAs a predule, let's demonstrate this idea in the context of the above distance minimization.\n\nBy the _orthogonality implies minimality_ lemma, the closest point $s$ is characterized by \n\n\\begin{align}\\tag{1}\n(s,v) = (u, v) \\quad \\text{for all $v \\in S$}.\n\\end{align}\n\nIn particular, equation (1) needs to hold for $v=b_0$ and $v=b_1$.\nOn the other hand, if (1) holds for $v=b_0$ and $v=b_1$ then it holds for all $v \\in S$ by taking linear combinations. Writing again $s = c_0 b_0 + c_1 b_1$, we get the following system of equations for vector $c$ with entries $c_0$ and $c_1$\n\n$$\n\\sum_{j=0}^1 c_j (b_j, b_i) = (u, b_i), \\quad i=0,1.\n$$\n\nWe write $K$ for the matrix with entries $(b_i, b_j)$ and $F$ for the vector with entries $(u, b_i)$. 
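For this particular choice of $b_0$, $b_1$ and $u$ the entries are easy to compute by hand:\n\n$$\nK=\\begin{bmatrix} (b_0,b_0) & (b_0,b_1) \\\\ (b_1,b_0) & (b_1,b_1) \\end{bmatrix} = \\begin{bmatrix} 2 & -1 \\\\ -1 & 2 \\end{bmatrix}, \\qquad F = \\begin{bmatrix} (u,b_0) \\\\ (u,b_1) \\end{bmatrix} = \\begin{bmatrix} 0 \\\\ 3 \\end{bmatrix}.\n$$\n\n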
Then (1) is equivalent with $Kc = F$.\n\n\n```python\n# Assemble matrix K\nK = np.zeros((2,2))\nfor i in range(2):\n for j in range(2):\n K[i, j] = np.dot(B[:, i], B[:, j])\n\n# Assemble vector F\nF = np.zeros(2)\nfor i in range(2):\n F[i] = np.dot(u, B[:, i])\n \nc_alt = la.solve(K, F)\n\n# Compare to the solution using least squares\neps = np.finfo(float).eps # Machine epsilon\nla.norm(c_alt - c) / la.norm(c) < 10*eps\n\n```\n\n# P1 finite element\n\n## Definition: P1 basis functions on reference domain \n>$$\n\\psi_0(x) \n= \\begin{cases}\n1 - x & x \\in [0,1]\n\\\\\n0 & \\text{otherwise},\n\\end{cases} \n\\qquad\n\\psi_1(x) \n= \\begin{cases}\nx & x \\in [0,1]\n\\\\\n0 & \\text{otherwise}.\n\\end{cases} \n$$\n\nHere the reference domain means the unit interval $[0,1]$.\nViewing $\\psi_0$ and $\\psi_1$ as functions on $[0,1]$, they form a basis of $\\mathbb P_1$.\nIndeed, the standard basis of $\\mathbb P_1$ is obtained as \n\n$$\n1 = \\psi_0(x) + \\psi_1(x), \\quad x = \\psi_1(x).\n$$\n\n\n\n```python\ndef cutoff(x):\n return (x > 0) * (x < 1)\n\ndef psi0(x):\n return cutoff(x) * (1 - x)\ndef psi1(x):\n return cutoff(x) * x\n\nxs_plot = np.linspace(-0.2,1.2, 200)\nplt.plot(xs_plot, psi0(xs_plot), 'r', label='$\\psi_0$')\nplt.plot(xs_plot, psi1(xs_plot), 'b--', label='$\\psi_1$')\nplt.legend();\n```\n\n## Definition: local basis functions\n>Let \n>\n>\\begin{align*}\n0 = x_0 < x_1 < \\dots < x_n = 1,\n\\end{align*}\n>\n> write $I_e = [x_{e-1}, x_e]$, $e=1,\\dots,n$, and consider the affine function\n>\n>\\begin{align*}\n\\Phi_e : \\mathbb R \\to \\mathbb R, \\quad \\Phi_e(x) = \\frac{x - x_{e-1}}{x_e - x_{e-1}},\n\\end{align*}\n>\n> mapping $I_e$ to $[0,1]$. The local basis functions on $I_e$ are\n>\n>\\begin{align*}\n\\psi_{j,e}(x) = \\psi_j (\\Phi_e (x)), \\quad j = 0,1.\n\\end{align*}\n\nThe linear interpolant $\\mathcal I_h u$ of a function $u \\in C(I)$, with $I = [0,1]$, can be written on $I_e \\subset I$ as \n\n$$\n\\mathcal I_h u(x) = u(x_{e-1}) \\psi_{0,e}(x) + u(x_{e}) \\psi_{1,e}(x).\n$$\n\n\n```python\nn = 4\nxs = np.linspace(0,1, n+1)\n\ndef Phi(psi, e, x):\n return psi((x - xs[e-1]) / (xs[e] - xs[e-1]))\n\nxs_plot = np.linspace(0,1, 200)\nfor e in range(1, n+1):\n plt.plot(xs_plot, Phi(psi0, e, xs_plot), label=f'$\\psi_0^{e}$')\nplt.gca().set_xticks(xs)\nplt.legend();\n```\n\n\n```python\nfor e in range(1, n+1):\n plt.plot(xs_plot, Phi(psi1, e, xs_plot), label=f'$\\psi_1^{e}$')\nplt.gca().set_xticks(xs)\nplt.legend();\n```\n\n\n```python\ndef u(x):\n return x * (1 - x)\n\ndef I_loc(u, x):\n out = np.zeros(np.shape(x))\n for e in range(1, n+1):\n out += u(xs[e-1])*Phi(psi0, e, x) + u(xs[e])*Phi(psi1, e, x)\n return out\n\nplt.plot(xs_plot, u(xs_plot), label='u')\nplt.plot(xs_plot, I_loc(u, xs_plot), label='$I_h u$')\nplt.gca().set_xticks(xs)\nplt.grid(axis='x')\nplt.legend();\n```\n\nRecall that the basis functions $\\phi_i$ for the space $S = P_{h,0}^1$ were defined in the _P1 finite element space_ section of the lecture notes.\nThey can be written as\n\n$$\n\\phi_i(x) = \\psi_{1,i}(x) + \\psi_{0,i+1}(x), \\quad i = 1,\\dots,n-1.\n$$\n\nFor $u$ satisfying $u(0) = 0 = u(1)$ there holds\n\n$$\n\\mathcal I_h u(x) = \\sum_{i=1}^{n-1} u(x_i) \\phi_i(x).\n$$\n\n\n```python\ndef phi(i, x):\n return Phi(psi1, i, x) + Phi(psi0, i+1, x)\n\nfor i in range(1, n):\n plt.plot(xs_plot, phi(i, xs_plot), label=f'$\\phi_{i}$')\nplt.gca().set_xticks(xs)\nplt.grid(axis='x')\nplt.legend();\n```\n\n\n```python\ndef I(u, x):\n out = np.zeros(np.shape(x))\n for i in range(1, n):\n out += u(xs[i]) 
* phi(i, x)\n return out\n\nplt.plot(xs_plot, u(xs_plot), label='u')\nplt.plot(xs_plot, I(u, xs_plot), label='$I_h u$')\nplt.gca().set_xticks(xs)\nplt.grid(axis='x')\nplt.legend();\n```\n\n# Implementation of the P1 finite element method\n\nLet us consider the bilinear and linear forms $a$ and $L$ defined in the _a boundary value problem_ section of the lecture notes, that is,\n\n$$\na(u, v) = \\int_0^1 u'(x) v'(x) dx, \\quad L(v) = \\int_0^1 f(x) v(x) dx.\n$$\n\nFollowing the proof the _Galerkin solution_ theorem in the lecture notes, we define\n\n$$\nu_S = \\sum_{j=1}^{n-1} U_j \\phi_j, \n\\quad\nK_{ij} = a(\\phi_i, \\phi_j),\n\\quad\nF_i = L(\\phi_i), \n\\qquad i,j=1,\\dots,n-1,\n$$\n\nand write $U$ and $F$ for the vectors with elements $U_i$ and $F_i$, and $K$ for the matrix with elements $K_{ij}$.\n\nThe equation for the Galerkin solution \n\n$$\na(u_S,v) = L(v) \\quad \\text{for all $v \\in S$}.\n$$\n\nis equivalent with $K U = F$.\n\nLet us assemble the matrix $K$. As the global basis functions $\\phi_j$ are defined piecewisely on intervals $I_e$, we will consider each interval separately. On $I_e$ there holds\n\n$$\n\\phi_e(x) = \\psi_1^e(x), \\quad \\phi_{e-1}(x) = \\psi_0^e(x), \\quad \\phi_i(x) = 0, \\quad i \\ne e, e-1.\n$$\n\nDifferentiating the local basis functions on $I_e$, we get \n\n$$\n\\phi_e'(x) = \\frac{1}{x_e - x_{e-1}}, \\quad \\phi_{e-1}'(x) = \\frac{-1}{x_e - x_{e-1}}, \\quad \\phi_i'(x) = 0, \\quad i \\ne e, e-1.\n$$\n\nIn order to handle the first and last interval in the same way as the rest, we introduce the auxiliary basis functions\n\n$$\n\\phi_0(x) = \\psi_{0,1}(x), \\quad \\phi_n(x) = \\psi_{1,n}(x).\n$$\n\n\n```python\nplt.plot(xs_plot, Phi(psi0, 1, xs_plot), label=f'$\\phi_0$')\nplt.plot(xs_plot, Phi(psi1, n, xs_plot), label=f'$\\phi_n$')\nplt.gca().set_xticks(xs)\nplt.grid(axis='x')\nplt.legend();\n```\n\n\n```python\ndef assemble_K_demo(x):\n n = np.size(x) - 1\n K = np.zeros((n+1, n+1))\n for e in range(1, n+1):\n dx = x[e] - x[e-1]\n dphis = [-1 / dx, 1 / dx]\n ix = [e - 1, e]\n for i in range(2):\n for j in range(2):\n K[ix[i], ix[j]] += dphis[i] * dphis[j] * dx\n \n return K \n\nK = assemble_K_demo(xs)\n# Condense K, that is, \n# throw away the values corresponding to the auxiliary functions\nK = K[1:-1,1:-1]\nprint(K)\n```\n\nWe turn to assembly of the vector $F$. We need to evaluate the integrals $(f, \\phi_j)$ numerically. 
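Recall that the trapezium rule approximates $\\int_{x_{e-1}}^{x_e} g(x) dx \\approx \\frac{x_e - x_{e-1}}{2}(g(x_{e-1}) + g(x_e))$. 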
Let us use the trapezium rule separately on each interval $I_e$:\n\n$$\n\\int_{I_e} f(x) \\phi_e(x) dx \\approx \\frac{x_e - x_{e-1}}{2} f(x_e),\n\\quad\n\\int_{I_e} f(x) \\phi_{e-1}(x) dx \\approx \\frac{x_e - x_{e-1}}{2} f(x_{e-1}),\n$$\n\nand\n\n$$\n\\int_{I_e} f(x) \\phi_{j}(x) dx = 0, \\quad j\\ne e, e-1.\n$$\n\n\n```python\ndef assemble_F_demo(x, f):\n n = np.size(x) - 1\n F = np.zeros(n+1)\n for e in range(1, n+1):\n dx = x[e] - x[e-1]\n F[e] += dx / 2 * f(x[e])\n F[e-1] += dx / 2 * f(x[e-1])\n return F\n\ndef f(x):\n return 1\n\nF = assemble_F_demo(xs, f)\n# Condense F\nF = F[1:-1] \nprint(F)\n```\n\n## Example: Poisson's equation in 1d with unit load\n\nLet us solve the problem\n\n$$\n\\begin{cases}\n-u'' = 1 & \\text{on $(0,1)$},\n\\\\\nu(0) = 0 = u(1).\n\\end{cases}\n$$\n\n\n```python\ndef assemble_poisson_demo(xs):\n K = assemble_K_demo(xs)\n K = K[1:-1,1:-1]\n F = assemble_F_demo(xs, f)\n F = F[1:-1] \n return K, F\n\ndef solve_poisson_demo(xs):\n K, F = assemble_poisson_demo(xs)\n u = np.zeros(np.size(xs))\n u[1:-1] = la.solve(K, F)\n return u\n```\n\nThe solution vector `u` contains the coefficients of the Galerkin solution $u_S$ in the global basis $\\phi_0, \\dots, \\phi_n$.\nRecall that the global basis functions satisfy \n\n$$\n\\phi_j(x_k) = \\delta_{jk} = \\begin{cases}\n1 & j=k,\n\\\\\n0 & \\text{otherwise}.\n\\end{cases}\n$$\n\nHence we can interpret the coefficients `u` as the point values of $u_S$ at the nodes $x_j$, $j=0,\\dots,n$.\n\n\n```python\nplt.plot(xs, solve_poisson_demo(xs), label='$u_S$ coarse')\nxs_fine = np.linspace(0,1)\nplt.plot(xs_fine, solve_poisson_demo(xs_fine), label='$u_S$ fine')\nplt.gca().set_xticks(xs)\nplt.grid(axis='x')\nplt.legend();\n```\n\n# Sparsity\n\nMost of the elements of $K$ vanish, that is, $K$ is [sparse](https://en.wikipedia.org/wiki/Sparse_matrix). (In fact, in this particular case, $K$ is [tridiagonal](https://en.wikipedia.org/wiki/Tridiagonal_matrix).)\nSolving the system $KU = F$ is typically the most costly part of the finite element method, and efficiency of the method boils down to this system being sparse.\n\n\n```python\nK, F = assemble_poisson_demo(xs_fine)\nK\n```\n\n\n```python\nplt.spy(K);\n```\n\nWe could (and should) reimplement `assemble_K_demo` using the [sparse matrix package](https://docs.scipy.org/doc/scipy/reference/sparse.html) of SciPy.\n\n# Implementation using Scikit-fem\n\nLet us now assemble the linear system for the same problem by using the Scikit-fem package.\n\n\n```python\nimport skfem as fem\nfrom skfem.helpers import dot, grad # helpers make forms look nice\n```\n\n\n```python\n@fem.BilinearForm\ndef a(u, v, w):\n # Return the integrand in the bilinear form a(u, v)\n # using a notation that works in all dimensions\n return dot(grad(u), grad(v)) \n\n@fem.LinearForm\ndef L(v, w):\n x = w.x[0] # quadrature points for numerical integration\n # Return the integrand in the linear form L(v) \n return f(x) * v # use the same f as before\n\ndef assemble_poisson(xs): \n mesh = fem.MeshLine(xs) \n basis = fem.Basis(mesh, fem.ElementLineP1())\n K = a.assemble(basis)\n F = L.assemble(basis)\n # Empty call to get_dofs() matches all boundary degrees-of-freedom\n boundary_dofs = basis.get_dofs()\n K, F, _, _ = fem.condense(K, F, D=boundary_dofs)\n return K, F\n```\n\n\n```python\nK, F = assemble_poisson(xs)\n# K is a sparse matrix, convert it to a standard matrix\nprint(K.toarray()) \nprint(F)\n```\n\nWe get the same linear system as before. 
Let's check that this is also the case when we use a finer mesh.\n\n\n```python\nK1, F1 = assemble_poisson_demo(xs_fine)\nK2, F2 = assemble_poisson(xs_fine)\nprint(la.norm(K1 - K2))\nprint(la.norm(F1 - F2))\n```\n\nWe can solve the Poisson problem in even briefer way using Scikit-fem.\n\n\n```python\nfrom skfem.models.poisson import laplace, unit_load\nimport skfem.visuals.matplotlib as fem_plt\n\nmesh = fem.MeshLine(xs) \nbasis = fem.Basis(mesh, fem.ElementLineP1())\nK = laplace.assemble(basis)\nF = unit_load.assemble(basis)\nfem_plt.plot(mesh, fem.solve(*fem.condense(K, F, D=basis.get_dofs())));\n```\n\n# Computational study of convergence\n\nLet $I = [0,1]$ and let $u$ solve the problem \n\n$$\n\\begin{cases}\n-u'' = f & \\text{on $I$},\n\\\\\nu(0) = 0 = u(1),\n\\end{cases}\n$$\n\nand consider the corresponding Galerkin solution $u_S$ computed in a mesh\n\n$$\n0 = x_0 < x_1 < \\dots < x_n = 1.\n$$ \n\nIt is typical to think that $u_S$ is parametrized by the mesh constant\n\n$$\nh = \\max_{e=1,\\dots,n} |x_e - x_{e-1}|,\n$$\n\nand write $u_h = u_S$. \n\nWe will study computationally how the error $u_h - u$ converges in the space $L^2(I)$ as $h \\to 0$.\nIn order to do this, we need to set up the problem so that we know the exact solution $u$.\nA way to achieve this is to choose first $u$ satisfying the boundary conditions and then compute $f$.\n\nWe set \n\n$$\nu(x) = \\sin(\\pi x).\n$$\n\nThen $u(0) = u(1) = 0$ and \n\n$$\n-u''(x) = \\pi^2 \\sin(\\pi x) =: f(x). \n$$\n\n\n```python\ndef u_exact(x):\n return np.sin(np.pi * x)\n\ndef f(x):\n return np.pi**2 * np.sin(np.pi * x)\n\n# We need to redefine this, as we didn't parametrize f\ndef solve_poisson_demo(xs):\n K = assemble_K_demo(xs)\n K = K[1:-1,1:-1]\n F = assemble_F_demo(xs, f)\n F = F[1:-1] \n u = np.zeros(np.size(xs))\n u[1:-1] = la.solve(K, F)\n return u\n```\n\n\n```python\nns = [4, 8]\nfor n in ns:\n xs, h = np.linspace(0,1, n+1, retstep=True)\n u = solve_poisson_demo(xs)\n plt.plot(xs, u, label=f'{h = }')\nxs_plot = np.linspace(0, 1)\nplt.plot(xs_plot, u_exact(xs_plot), label='exact')\nplt.legend();\n```\n\nRecalling that on $I_e$\n\n$$\n\\phi_e(x) = \\psi_1^e(x), \\quad \\phi_{e-1}(x) = \\psi_0^e(x), \\quad \\phi_i(x) = 0, \\quad i \\ne e, e-1,\n$$\n\nwe have\n\n$$\n\\|u_h - u\\|^2 \n= \\sum_{e=1}^n \\int_{I_e} |u_h(x) - u(x)|^2 dx\n= \\sum_{e=1}^n \\int_{I_e} |U_{e} \\psi_1^e(x) + U_{e-1} \\psi_0^e(x) - u(x)|^2 dx.\n$$\n\nEvaluating the integral using the trapezium rule yields\n\n$$\n\\int_{I_e} |U_{e} \\psi_1^e(x) + U_{e-1} \\psi_0^e(x) - u(x)|^2 dx\n\\approx \\frac{x_e - x_{e-1}}{2} \\left( |U_{e-1} - u(x_{e-1})|^2 + |U_{e} - u(x_{e})|^2 \\right).\n$$\n\nIn principle, we could expand the square and compute the terms\n\n$$\nU_i U_j \\int_{I_e} \\psi_1^i(x) \\psi_0^j(x) dx, \\quad i,j=e,e-1\n$$\n\nanalytically. In practice, we can use a numerical integration method, that yields an exact result for polynomials of degree 2 or less, to achieve the same accuracy as the analytic computation. Here we will use the trapezium rule for simplicity. 
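(Simpson's rule, $\\int_a^b g(x) dx \\approx \\frac{b-a}{6}\\left(g(a) + 4g(\\frac{a+b}{2}) + g(b)\\right)$, is one example: it is exact for polynomials of degree up to three, so it would reproduce the analytic value here.) 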
\n\n\n```python\ndef error_sq_demo(x, u):\n '''Squared L2 error'''\n n = np.size(x) - 1\n error = 0\n for e in range(1, n+1):\n dx = x[e] - x[e-1]\n error += dx / 2 * ( (u[e-1] - u_exact(x[e-1]))**2 + (u[e] - u_exact(x[e]))**2 ) \n return error\n\nns = [2**n for n in range(2,6)]\nN = len(ns)\nerrs_demo = np.zeros(N)\nhs = np.zeros(N)\nfor i in range(N):\n n = ns[i]\n xs, h = np.linspace(0,1, n+1, retstep=True)\n u = solve_poisson_demo(xs)\n hs[i] = h\n errs_demo[i] = np.sqrt(error_sq_demo(xs, u))\n```\n\n\n```python\nplt.loglog(hs, errs_demo, '-o', label='error')\nplt.loglog(hs, hs**2, label='$h^2$')\nplt.legend();\n```\n\n# Evaluation of error using Scikit-fem\n\n\n```python\n# We need to redefine this, as we didn't parametrize f\n@fem.LinearForm\ndef L(v, w):\n x = w.x[0] # quadrature points for numerical integration\n # Return the integrand in the linear form L(v) \n return f(x) * v \n\n@fem.Functional\ndef error_sq(w):\n x = w.x[0] \n uh = w['uh'] # uh is given in the assemble call below\n # Return the integrand in the squared L2 error\n return (uh - u_exact(x))**2\n\ndef error_poisson(xs): \n mesh = fem.MeshLine(xs) \n basis = fem.Basis(mesh, fem.ElementLineP1())\n K = a.assemble(basis)\n F = L.assemble(basis)\n boundary_dofs = basis.get_dofs()\n u = fem.solve(*fem.condense(K, F, D=boundary_dofs))\n return np.sqrt(error_sq.assemble(basis, uh=basis.interpolate(u)))\n\nerrs = np.array([\n error_poisson(np.linspace(0,1, n+1)) for n in ns])\n```\n\n\n```python\nplt.loglog(hs, errs, 'ob-', label='skfem')\nplt.loglog(hs, errs_demo, '.r--', label='demo')\nplt.legend();\n```\n\nThe two errors are slightly different since Scikit-fem is using more accurate numerical integration method than the trapezium rule. \n\n\n```python\n\n```\n", "meta": {"hexsha": "626d3737a962907de8b7a1e6503d71679d717488", "size": 32268, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "implementation.ipynb", "max_stars_repo_name": "uh-comp-methods2/notebooks", "max_stars_repo_head_hexsha": "ce55d0e3e6b3d23341ca9d54630b37832a11d1d9", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "implementation.ipynb", "max_issues_repo_name": "uh-comp-methods2/notebooks", "max_issues_repo_head_hexsha": "ce55d0e3e6b3d23341ca9d54630b37832a11d1d9", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "implementation.ipynb", "max_forks_repo_name": "uh-comp-methods2/notebooks", "max_forks_repo_head_hexsha": "ce55d0e3e6b3d23341ca9d54630b37832a11d1d9", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 25.8765036087, "max_line_length": 252, "alphanum_fraction": 0.50068179, "converted": true, "num_tokens": 5747, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9136765281148513, "lm_q2_score": 0.9136765175373275, "lm_q1q2_score": 0.8348047883635734}} {"text": "# MATH 452: Final Project\n\nRemark: \n\nPlease upload your solutions for this project to Canvas with a file named \"Final_Project_yourname.ipynb\".\n\n=================================================================================================================\n\n## Problem 1 [20%]: \n\nConsider the following linear system\n\n\\begin{equation}\\label{matrix}\nA\\ast u =f,\n\\end{equation}\nor equivalently $u=\\arg\\min \\frac{1}{2} (A* v,v)_F-(f,v)_F$, where $(f,v)_F =\\sum\\limits_{i,j=1}^{n}f_{i,j}v_{i,j}$ is the Frobenius inner product.\nHere $\\ast$ represents a convolution with one channel, stride one and zero padding one. The convolution kernel $A$ is given by\n$$ \nA=\\begin{bmatrix} 0 & -1 & 0 \\\\ -1 & 4 & -1 \\\\ 0 & -1 & 0 \\end{bmatrix},~~\n$$\nthe solution $ u \\in \\mathbb{R}^{n\\times n} $, and the RHS $ f\\in \\mathbb{R}^{n\\times n}$ is given by $f_{i,j}=\\dfrac{1}{(n+1)^2}.$\n\n\n### Tasks:\nSet $J=4$, $n=2^J-1$ and the number of iterations $M=100$. Use the gradient descent method and the multigrid method to solve the above problem with a random initial guess $u^0$. Let $u_{GD}$ and $u_{MG}$ denote the solutions obtained by gradient descent and multigrid respectively.\n \n* [5%] Plot the surface of solution $u_{GD}$ and $u_{MG}$.\n\n* [10%] Define error $e_{GD}^m = \\|A * u^{m}_{GD}- f\\|_F=\\sqrt{\\sum\\limits_{i=1}^{n}\\sum\\limits_{j=1}^{n} |(A * u^{m}_{GD}- f)_{i,j}}|^2 $ for $m=0,1,2,3,...,M$. Similarly, we define the multigrid error $e_{MG}^m$. Plot the errors $e_{GD}^m$ and $e_{MG}^m$ as a function of the iteration $m$ (your x-axis is $m$ and your y-axis is the error). Put both plots together in the same figure.\n\n* [5%] Find the minimal $m_1$ for which $e^{m_1}_{GD} <10^{-5}$ and the minimal $m_2$ for which $e^{m_2}_{MG} <10^{-5}$, and report the computational time for each method. Note that $m_1$ or $m_2$ may be greater than $M=100$, in this case you will have to run more iterations.\n\n### Remark:\n\nBelow are examples of using gradient descent and multigrid iterations for M-times \n* #### For gradient descent method with $\\eta=\\frac{1}{8}$, you need to write a code:\n\n Given initial guess $u^0$\n$$\n\\begin{align}\n&\\text{for } m = 1,2,...,M\\\\\n&~~~~\\text{for } i,j = 1: n\\\\\n&~~~~~~~~u_{i,j}^{m} = u_{i,j}^{m-1}-\\eta(f_{i,j}-(A\\ast u^{m-1})_{i,j})\\\\\n&~~~~\\text{endfor}\\\\\n&\\text{endfor}\n\\end{align} \n$$\n\n* #### For multigrid method, we have provided the framework code in F02_MultigridandMgNet.ipynb:\n\n Given initial guess $u^0$\n$$\n\\begin{align}\n&\\text{for } m = 1,2,...,M\\\\\n&~~~~u^{m} = MG1(u^{m-1},f, J, \\nu)\\\\\n&\\text{endfor}\n\\end{align} \n$$\n\n=================================================================================================================\n\n## Problem 2 [50%]: \n\nUse SGD with momentum and weight decay to train MgNet on the Cifar10 dataset. Use 120 epochs, set the initial learning rate to 0.1, momentum to 0.9, weight decay to 0.0005, and divide the learning rate by 10 every 30 epochs. (The code to do this has been provided.) 
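For reference, here is a minimal sketch of what such a training setup typically looks like in PyTorch; the provided course code may differ in its details, and `my_model` below stands for the MgNet instance.\n\n```python\nimport torch\n\noptimizer = torch.optim.SGD(my_model.parameters(), lr=0.1,\n                            momentum=0.9, weight_decay=0.0005)\n# divide the learning rate by 10 every 30 epochs\nscheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)\n\nfor epoch in range(120):\n    # ... train for one epoch, then evaluate training and test accuracy ...\n    scheduler.step()\n```\n\n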
Let $b_i$ denote the test accuracy of the model after $i$ epochs, and let $b^*$ = $\\max_i(b_i)$ be the best test accuracy attained during training.\n\n\n### Tasks:\n * [30%] Train MgNet with the following three sets of hyper-parameters (As a reminder, the hyper-parameters of MgNet are $\\nu$, the number of iterations of each layer, $c_u$, the number of channels for $u$, and $c_f$, the number of channels for $f$.):\n \n (1) $\\nu=$[1,1,1,1], $c_u=c_f=64$.\n \n (2) $\\nu=$[2,2,2,2], $c_u=c_f=64$.\n\n (3) $\\nu=$[2,2,2,2], $c_u=c_f=64$, try to improve the test accuracy by implementing MgNet with $S^{l,i}$, which means different iterations in the same layer do not share the same $S^{l}$. \n \n \n * For each numerical experiment above, print the results with the following format:\n\n \"Epoch: i, Learning rate: lr$_i$, Training accuracy: $a_i$, Test accuracy: $b_i$\"\n\n where $i=1,2,3,...$ means the $i$-th epoch, $a_i$ and $b_i$ are the training accuracy and test accuracy computed at the end of $i$-th epoch, and lr$_i$ is the learning rate of $i$-th epoch.\n \n \n * [10%] For each numerical experiment above, plot the test accuracy against the epoch count, i.e. the x-axis is the number of epochs $i$ and y-axis is the test accuracy $b_i$. An example plot is shown in the next cell.\n \n \n * [10%] Calculate the number of parameters that each of the above models has. Discuss why the number of parameters is different (or the same) for each of the models.\n \n\n\n```python\nfrom IPython.display import Image\nImage(filename='plot_sample_code.png')\n\n```\n\n\n```python\n# You can calculate the number of parameters of my_model by:\nmodel_size = sum(param.numel() for param in my_model.parameters())\n\n```\n\n=================================================================================================================\n\n## Problem 3 [25 %]:\n\nTry to improve the MgNet Accuracy by increasing the number of channels. (We use the same notation as in the previous problem.) Double the number of channels to $c_u=c_f=128$ and try different $\\nu$ to maximize the test accuracy.\n\n### Tasks:\n * [20%] Report $b^{*}$, $\\nu$ and the number of parameters of your model for each of the experiments you run.\n * [5%] For the best experiment, plot the test accuracy against the epoch count, i.e. the x-axis is the number of epochs $i$ and y-axis is the test accuracy $b_i$. (Same as for the previous problem.)\n\n\n```python\n# You can calculate the number of parameters of my_model by:\nmodel_size = sum(param.numel() for param in my_model.parameters())\n\n```\n\n=================================================================================================================\n\n## Problem 4 [5%]:\n\nContinue testing larger MgNet models (i.e. increase the number of channels) to maximize the test accuracy. (Again, we use the same notation as in problem 2.)\n\n### Tasks: \n \n+ [5%] Try different training strategies and MgNet architectures with the goal of achieving $b^*>$ 95%. 
Hint: you can tune the number of epochs, the learning rate schedule, $c_u$, $c_f$, $\\nu$, try different $S^{l,i}$ in the same layer $l$, etc...\n\n=================================================================================================================\n", "meta": {"hexsha": "29f6065ac3e10a9a2e63c1d78dba2bdd1afe1343", "size": 15504, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "_build/jupyter_execute/Module6/Final_Project.ipynb", "max_stars_repo_name": "liuzhengqi1996/math452_Spring2022", "max_stars_repo_head_hexsha": "b01d1d9bee4778b3069e314c775a54f16dd44053", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "_build/jupyter_execute/Module6/Final_Project.ipynb", "max_issues_repo_name": "liuzhengqi1996/math452_Spring2022", "max_issues_repo_head_hexsha": "b01d1d9bee4778b3069e314c775a54f16dd44053", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "_build/jupyter_execute/Module6/Final_Project.ipynb", "max_forks_repo_name": "liuzhengqi1996/math452_Spring2022", "max_forks_repo_head_hexsha": "b01d1d9bee4778b3069e314c775a54f16dd44053", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 54.4, "max_line_length": 1592, "alphanum_fraction": 0.5893317853, "converted": true, "num_tokens": 1788, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9149009549929797, "lm_q2_score": 0.9124361652391386, "lm_q1q2_score": 0.8347887189474202}} {"text": "```python\nimport numpy as np\nimport sympy as sy\nimport matplotlib.pyplot as plt\nfrom openmm import unit\n```\n\n# Harmonic well potential\n\nThe harmonic well potential is described by the following expression:\n\n$$\nf(x)=\\frac{1}{2} k \\left(x^2 + y^2 + z^2 \\right)\n$$ (harmonic-well-potential)\n\nWhere $k$ is the only parameter and represents the stiffness of the harmonic potential -or the stiffness of the harmonic spring described by the hookes' law-. Notice that the potential for potential dimensions $Y$ and $Z$, has the same shape. In this way we have a three dimensional harmonic well. But let's see here only the proyection over a single dimension $X$ since $Y$ and $Z$ are decorrelated, and there by will behave as $X$.\n\n\n```python\ndef harmonic_well(x,k):\n return 0.5*k*x**2\n```\n\n\n```python\nk=5.0 * unit.kilocalories_per_mole/ unit.nanometers**2 # stiffness of the harmonic potential\n\nx_serie = np.arange(-5., 5., 0.05) * unit.nanometers\n\nplt.plot(x_serie, harmonic_well(x_serie, k), 'r-')\nplt.ylim(-1,5)\nplt.xlim(-2,2)\nplt.grid()\nplt.xlabel(\"X ({})\".format(unit.nanometers))\nplt.ylabel(\"Energy ({})\".format(unit.kilocalories_per_mole))\nplt.title(\"Harmonic Well\")\nplt.show()\n```\n\nDifferent values of $k$ can be tested to graphically see how this parameter accounts for the openness of the well's arms. Or, as it is described below, the period of oscillations of a particle of mass $m$ in a newtonian dynamics.\n\nThe hooks' law describes the force suffered by a mass attached to an ideal spring as:\n\n$$\nF(x) = -k(x-x_{0})\n$$ (hooks_law)\n\nWhere $k$ is the stiffness of the spring and $x_{0}$ is the equilibrium position. 
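As a quick sketch, a hypothetical helper mirroring the `harmonic_well` function above (with the equilibrium position taken as $x_{0}=0$) could be:\n\n```python\ndef harmonic_force(x, k, x0=0.0*unit.nanometers):\n    return -k*(x-x0)\n```\n\n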
Now, since the force is minus the gradient of the potential energy $V(x)$,\n\n$$\nF(x) = -\\frac{d V(x)}{dx},\n$$ (grad_potential)\n\nwe can proof that the spring force is the result of the first harmonic potential derivative:\n\n$$\nV(x) = \\frac{1}{2} k (x-x_{0})^{2}\n$$ (x_potential)\n\nAnd the angular frequency of oscillations of a spring, or a particle goberned by the former potential, is:\n\n$$\n\\omega = \\sqrt{\\frac{k}{m}}\n$$ (omega)\n\nWhere $m$ is the mass of the particle. This way the potential can also be written as:\n\n$$\nV(x) = \\frac{1}{2} k (x-x_{0})^{2} = \\frac{1}{2} m \\omega^{2} (x-x_{0})^{2}\n$$ (x_potential_2)\n\nFinnally, the time period of these oscillations are immediately computed from the mass of the particle, $m$, and the stiffness parameter $k$. Given that by definition:\n\n$$\nT = 2\\pi / \\omega\n$$ (period)\n\nThen:\n\n$$\nT = 2\\pi \\sqrt{\\frac{m}{k}}\n$$ (period_2)\n", "meta": {"hexsha": "3f1456e2c33ac5555922a93dbaa8ee7fa2e33d59", "size": 20297, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "docs/contents/molecular_systems/harmonic_well/harmonic_well.ipynb", "max_stars_repo_name": "uibcdf/Molecular-Systems", "max_stars_repo_head_hexsha": "74c4313ae25584ad24bea65f961280f187eda9cb", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "docs/contents/molecular_systems/harmonic_well/harmonic_well.ipynb", "max_issues_repo_name": "uibcdf/Molecular-Systems", "max_issues_repo_head_hexsha": "74c4313ae25584ad24bea65f961280f187eda9cb", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/contents/molecular_systems/harmonic_well/harmonic_well.ipynb", "max_forks_repo_name": "uibcdf/Molecular-Systems", "max_forks_repo_head_hexsha": "74c4313ae25584ad24bea65f961280f187eda9cb", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 135.3133333333, "max_line_length": 15992, "alphanum_fraction": 0.8802778736, "converted": true, "num_tokens": 729, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9032942171172603, "lm_q2_score": 0.9241418158002492, "lm_q1q2_score": 0.8347719580086095}} {"text": "# Iterative Solvers 3 - The Conjugate Gradient Method\n\n## Symmetric positive definite matrices\n\nA very frequent type of matrices are symmetric positive definite matrices. Let $A\\in\\mathbb{R}^{n\\times n}$ be a symmetric matrix (that is $A^T=A$). $A$ is called symmetric positive definite if \n\n$$\nx^TAx > 0, \\forall x\\neq 0.\n$$\n\nThis is equivalent to the condition that all eigenvalues of $A$ are larger than zero (remember that symmetric matrices only have real eigenvalues).\n\nOne application of symmetric positive definite matrices are energy functionals. The expression $x^TAx$ arises when discretising functional involving kinetic energies (e.g. 
energies of the from $E = \\frac{1}{2}m|\\nabla f|^2$ for f a given function).\n\nFor linear systems involving symmetric positive definite matrices we can derive a special algorithm, namely the Method of Conjugate Gradients (CG).\n\n## Lanczos - Arnoldi for symmetric matrices\n\nLet us start with the Arnoldi recurrence relation\n\n$$\nAV_m = V_mH_m + h_{m+1,m}v_{m+1}e_m^T\n$$\n\nWe know that $H_m$ is an upper Hessenberg matrix (i.e. the upper triangular part plus the first lower triangular diagonal can only be nonzero). Also, we know from the orthogonality of the $v_k$ vectors that\n\n$$\nV_m^TAV_m = H_m.\n$$\n\nLet $A$ now be symmetric. From the symmetry of $A$ an even nicer structure for $H_m$ arises. $H_m$ is upper Hessenberg, but now it is also symmetric. The only possible type of matrices to satisfy this condition are tridional matrices. These are matrices, where only the diagonal and the first upper and lower super/subdiagonals are nonzero.\n\nLet us test this out. Below you find our simple implementation of Arnoldi's method. We then plot the resulting matrix $H_m$.\n\n\n```python\nimport numpy as np\n\ndef arnoldi(A, r0, m):\n \"\"\"Perform m-1 step of the Arnoldi method.\"\"\"\n n = A.shape[0]\n \n V = np.empty((n, m + 1), dtype=np.float64)\n H = np.zeros((m+1, m), dtype=np.float64)\n \n V[:, 0] = r0 / np.linalg.norm(r0)\n \n for index in range(m):\n # Multiply the previous vector with A\n tmp = A @ V[:, index]\n # Now orthogonalise against the previous basis vectors\n h = V[:, :index + 1].T @ tmp # h contains all inner products against previous vectors\n H[:index + 1, index] = h\n w = tmp - V[:, :index + 1] @ h # Subtract the components in the directions of the previous vectors\n # Normalise and store\n H[index + 1, index] = np.linalg.norm(w)\n V[:, index + 1] = w[:] / H[index + 1, index]\n \n return V, H\n```\n\nThe following code creates a random symmetric positive definite matrix.\n\n\n```python\nfrom numpy.random import RandomState\n\nn = 500\n\nrand = RandomState(0)\nQ, _ = np.linalg.qr(rand.randn(n, n))\nD = np.diag(rand.rand(n))\nA = Q.T @ D @ Q\n```\n\nNow let's run Arnoldi's method and plot the matrix H. We are adding some artificial noise so as to ensure for the log-plot that all values are nonzero. The colorscale shows the logarithm of the magnitude of the entries.\n\n\n```python\n%matplotlib inline\nfrom matplotlib import pyplot as plt\n\nm = 50\nr0 = rand.randn(n)\nV, H = arnoldi(A, r0, m)\n\nfig = plt.figure(figsize=(8, 8))\nax = fig.add_subplot(111)\n\nim = ax.imshow(np.log10(1E-15 + np.abs(H)))\nfig.colorbar(im)\n```\n\nIt is clearly visible that only the main diagonal and the first upper and lower off-diagonal are nonzero, as expected. This hugely simplifies the Arnoldi iteration. Instead of orthogonalising $Av_m$ against all previous vectors we only need to orthogonalise against $v_m$ and $v_{m-1}$. All other inner products are already zero. Hence, the main orthogonalisation step now takes the form\n\n$$\nw = Av_m - (v_m^TAv_m)v_m - (v_{m-1}^TAv_m)v_{m-1}.\n$$\n\nSince the new vector $w$ is composed of only 3 vectors. This is also called a 3-term recurrence. The big advantage is that in addition to $Av_m$ we only need to keep $v_m$ and $v_{m-1}$ in memory. Hence, no matter how many iterations we do, the memory requirement remains constant, in contrast to Arnoldi for nonsymmetric matrices, where we need to keep all previous vectors in memory.\n\nArnoldi with a short recurrence relation for symmetric matrices has a special name. 
It is called **Lanczos method**.\n\n## Solving linear systems of equations with Lanczos\n\nWe can now proceed exactly as in the Full orthogonalisation method and arrive at the linear system of equations\n\n$$\nT_my_m = \\|r_0\\|_2e_1,\n$$\n\nwhere $x_m = x_0 + V_my_m$ and $T_m = V_m^TAV_m$ is the tridiagonal matrix obtained from the Lanczos method.\n\nThe conjugate gradient method is an implementation of this approach. A very good derivation from Lanczos to CG is obtained in the beautiful book by Yousef Saad \"[Iterative Methods for Sparse Linear Systems](https://www-users.cs.umn.edu/~saad/IterMethBook_2ndEd.pdf)\", which is available online for free. Here, we will briefly motivate another approach to CG, which is a bit more intuitive and reveals more about the structure of the method, namely CG as an optimisation algorithm for a quadratic minimisation problem. One of the most beautiful summaries of this approach is contained in the paper [An introduction to the Conjugate Gradient Method Without the Agonizing Pain](https://www.cs.cmu.edu/~quake-papers/painless-conjugate-gradient.pdf) by Jonathan Shewchuk.\n\n## A quadratic optimisation problem\n\nWe consider the quadratic minimisation problem\n\n$$\n\\min_{x\\in\\mathbb{R}^n} f(x)\n$$\n\nwith $f(x)=\\frac{1}{2}x^TAx - b^Tx$. We have\n\n$$\n\\nabla f(x) = Ax - b\n$$\n\nand hence the only stationary point is the solution of the linear system $Ax=b$. Furthermore, it is really a minimiser since $f''(x)$ is positive definite for all $x\\in\\mathbb{R}^n$ as $A$ is positive definite.\n\n## The Method of Steepest Descent\n\nOur first idea is the method of steepest descent. Remember that the negative gradient is a descent direction. Given a point $x_k$. We have\n\n$$\n-\\nabla f(x_k) = b - Ax_k := r_k.\n$$\n\nThe negative gradient is hence just the residual. Hence, we need to minimise along the direction of the residual, that is we will have\n$x_{k+1} = x_k + \\alpha_k r_k$ for some value $\\alpha_k$. To compute $\\alpha_k$ we just solve\n\n$$\n\\frac{d}{d\\alpha}f(x_k + \\alpha r_k) = 0\n$$\n\nSince $\\frac{d}{d\\alpha}f(x_k + \\alpha r_k) = r_{k+1}^Tr_k$ we just need to choose $\\alpha_k$ such that $r_{k+1}$ is orthogonal to $r_k$. The solution is given by $\\alpha_k = \\frac{r_k^Tr_k}{r_k^TAr_k}$. This gives us a complete method consisting of three steps to get from $x_k$ to $x_{k+1}$.\n\n$$\n\\begin{align}\nr_k &= b - Ax_k\\nonumber\\\\\n\\alpha_k &= \\frac{r_k^Tr_k}{r_k^TAr_k}\\nonumber\\\\\nx_{k+1} &= x_k + \\alpha_k r_k\n\\end{align}\n$$\n\nWe are not going to derive the complete convergence analysis here but only state the final result. Let $\\kappa := \\frac{\\lambda_{max}}{\\lambda_{min}}$, where $\\lambda_{max}$ and $\\lambda_{min}$ are the largest, respectively smallest eigenvalue of $A$ (remember that all eigenvalues are positive since $A$ is symmetric positive definite). The number $\\kappa$ is called the condition number of $A$. Let $e_k = x_k - x^*$ be the difference of the exact solution $x^*$ satisfying $Ax^*=b$ and our current iterate $x_k$. Note that $r_k = -Ae_k$.\n\nWe now have that\n\n$$\n\\|e_k\\|_A\\leq \\left(\\frac{\\kappa - 1}{\\kappa + 1}\\right)^k\\|e_0\\|_A,\n$$\n\nwhere $\\|e_k\\|_A := \\left(e_k^TAe_k\\right)^{1/2}$.\n\nThis is an extremely slow rate of convergence. Let $\\kappa=10$, which is a fairly small number. 
Then the error reduces in each step only by a factor of $\\frac{9}{11}\\approx 0.81$ and we need 11 iterations for each digit of accuracy.\n\n## The method of conjugate directions\n\nThe steepest descent approach was not bad. But we want to improve on it. The problem with the steepest descent method is that we have no guarantee that we are reducing the error $e_{k+1}$ as much as possible along our current direction $r_k$ when we minimize. But we can fix this.\n\nLet us pick a set of directions $d_0, d_1, \\dots, d_{n-1}$, which are mutually orthogonal, that is $d_i^Td_j =0$ for $i\\neq j$. We now want to enforce the condition that\n\n$$\ne_{k+1}^Td_k = 0.\n$$\n\nThis means that the remaining error is orthogonal to $d_k$ and hence is a linear combination of all the other search directions. We have therefore exhausted all the information from $d_k$. Let's play this through.\n\nWe know that $e_{k+1} = x_{k+1} - x^* = x_k -x^* + \\alpha_k d_k = e_k + \\alpha_kd_k$.\n\nIt follows that\n\n$$\n\\begin{align}\ne_{k+1}^Td_k &= d_k^T(e_k + \\alpha_kd_k)\\nonumber\\\\\n &= d_k^Te_k + \\alpha_kd_k^Td_k = 0\\nonumber\n\\end{align}\n$$\n\nand therefore $\\alpha_k = -\\frac{d_k^Te_k}{d_k^Td_k}$.\n\nUnfortunately, this does not quite work in practice as we don't know $e_k$. But there is a solution. Remember that $r_k = -Ae_k$. We just need an $A$ in the right place. To achieve this we choose **conjugate directions**, that is we impose the condition that\n\n$$\nd_i^TAd_j = 0\n$$\n\nfor $i\\neq j$. We also impose the condition that $e_{k+1}^TAd_k = 0$. Writing this out we obtain\n\n$$\n\\alpha_k = \\frac{d_k^Tr_k}{d_k^TAd_k}.\n$$\n\nThis expression is computable if we have a suitable set of conjugate directions $d_k$. Moreoever, it guarantees that the method converges in at most $n$ steps since in every iteration we are annihiliating the error in the direction of $d_k$ and there are only $n$ different directions.\n\n## Conjugate Gradients - Mixing steepest descent with conjugate directions\n\nThe idea of conjugate gradients is to obtain the conjugate directions $d_i$ by taking the $r_i$ (the gradients) and to $A$-orthogonalise (conjugate) them against the previous directions. We are leaving out the details of the derivation and refer to the Shewchuk paper. But the final algorithm now takes the following form.\n\n$$\n\\begin{align}\nd_0 &= r_0 = b - Ax_0\\nonumber\\\\\n\\alpha_i &= \\frac{r_i^Tr_i}{d_i^TAd_i}\\nonumber\\\\\nx_{i+1} &=x_i + \\alpha_id_i\\nonumber\\\\\nr_{i+1} &= r_i - \\alpha_i Ad_i\\nonumber\\\\\n\\beta_{i+1} &= \\frac{r_{i+1}^Tr_{i+1}}{r_i^Tr_i}\\nonumber\\\\\nd_{i+1} &= r_{i+1} + \\beta_{i+1}d_i\\nonumber\n\\end{align}\n$$\n\nConjugate Gradients has a much more favourable convergence bound than steepest descent. One can derive that\n\n$$\n\\|e_i\\|_A\\leq 2\\left(\\frac{\\sqrt{\\kappa} - 1}{\\sqrt{\\kappa} + 1}\\right)^i\\|e_0\\|_A.\n$$\n\nIf we choose again the example that $\\kappa=10$ we obtain\n\n$$\n\\|e_i\\|A\\lessapprox 0.52^i\\|e_0\\|_A.\n$$\n\nHence, we need around $4$ iterations for each digits of accuracy instead of 11 for the method of steepest descent.\n\n## A numerical example\n\nThe following code creates a symmetric positive definite matrix.\n\n\n```python\nfrom scipy.sparse import diags\n\nn = 10000\n\ndata = [2.1 * np.ones(n),\n -1. * np.ones(n - 1),\n -1. 
* np.ones(n - 1)]\n\noffsets = [0, 1, -1]\n\nA = diags(data, offsets=offsets, shape=(n, n), format='csr')\n```\n\nWe now solve the associated linear system with CG and plot the convergence.\n\n\n```python\nfrom scipy.sparse.linalg import cg\n\nb = rand.randn(n)\nresiduals = []\n\ncallback = lambda x: residuals.append(np.linalg.norm(b - A @ x) / np.linalg.norm(b))\n\nsol, _ = cg(A, b, tol=1E-6, callback=callback, maxiter=1000)\n\nfig = plt.figure(figsize=(8, 8))\nax = fig.add_subplot(111)\nax.semilogy(1 + np.arange(len(residuals)), residuals, 'k--')\nax.set_title('CG Convergence')\nax.set_xlabel('Iteration Step')\n_ = ax.set_ylabel('$\\|Ax-b\\|_2 / \\|b\\|_2$')\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "ee9e0b67e6e2c0a32505af0db074da66cac4b3dd", "size": 52826, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "hpc_lecture_notes/it_solvers3.ipynb", "max_stars_repo_name": "tbetcke/hpc_lecture_notes", "max_stars_repo_head_hexsha": "f061401a54ef467c8f8d0fb90294d63d83e3a9e1", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2020-10-02T11:11:58.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-14T10:40:51.000Z", "max_issues_repo_path": "hpc_lecture_notes/it_solvers3.ipynb", "max_issues_repo_name": "tbetcke/hpc_lecture_notes", "max_issues_repo_head_hexsha": "f061401a54ef467c8f8d0fb90294d63d83e3a9e1", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "hpc_lecture_notes/it_solvers3.ipynb", "max_forks_repo_name": "tbetcke/hpc_lecture_notes", "max_forks_repo_head_hexsha": "f061401a54ef467c8f8d0fb90294d63d83e3a9e1", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-11-18T15:21:30.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-26T12:38:25.000Z", "avg_line_length": 111.2126315789, "max_line_length": 22652, "alphanum_fraction": 0.8415363647, "converted": true, "num_tokens": 3326, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9032942014971871, "lm_q2_score": 0.9241418158002491, "lm_q1q2_score": 0.8347719435734466}} {"text": "# Basis for grayscale images\n\n## Introduction\n\nConsider the set of real-valued matrices of size $M\\times N$; we can turn this into a vector space by defining addition and scalar multiplication in the usual way:\n\n\\begin{align}\n\\mathbf{A} + \\mathbf{B} &= \n \\left[ \n \\begin{array}{ccc} \n a_{0,0} & \\dots & a_{0,N-1} \\\\ \n \\vdots & & \\vdots \\\\ \n a_{M-1,0} & \\dots & b_{M-1,N-1} \n \\end{array}\n \\right]\n + \n \\left[ \n \\begin{array}{ccc} \n b_{0,0} & \\dots & b_{0,N-1} \\\\ \n \\vdots & & \\vdots \\\\ \n b_{M-1,0} & \\dots & b_{M-1,N-1} \n \\end{array}\n \\right]\n \\\\\n &=\n \\left[ \n \\begin{array}{ccc} \n a_{0,0}+b_{0,0} & \\dots & a_{0,N-1}+b_{0,N-1} \\\\ \n \\vdots & & \\vdots \\\\ \n a_{M-1,0}+b_{M-1,0} & \\dots & a_{M-1,N-1}+b_{M-1,N-1} \n \\end{array}\n \\right] \n \\\\ \\\\ \\\\\n\\beta\\mathbf{A} &= \n \\left[ \n \\begin{array}{ccc} \n \\beta a_{0,0} & \\dots & \\beta a_{0,N-1} \\\\ \n \\vdots & & \\vdots \\\\ \n \\beta a_{M-1,0} & \\dots & \\beta a_{M-1,N-1}\n \\end{array}\n \\right]\n\\end{align}\n\n\nAs a matter of fact, the space of real-valued $M\\times N$ matrices is completely equivalent to $\\mathbb{R}^{MN}$ and we can always \"unroll\" a matrix into a vector. 
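In NumPy this unrolling, and its inverse, is a one-liner; as a quick sketch (using column-major order, the convention adopted later for transmission):\n\n```python\nimport numpy as np\n\nA = np.arange(6).reshape(2, 3)        # a small 2x3 example matrix\na = np.ravel(A, \"F\")                  # unroll column by column\nA_back = np.reshape(a, (2, 3), \"F\")   # fold the vector back into a matrix\n```\n\n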
Assume we proceed column by column; then the matrix becomes\n\n$$\n \\mathbf{a} = \\mathbf{A}[:] = [\n \\begin{array}{ccccccc}\n a_{0,0} & \\dots & a_{M-1,0} & a_{0,1} & \\dots & a_{M-1,1} & \\ldots & a_{0, N-1} & \\dots & a_{M-1,N-1}\n \\end{array}]^T\n$$\n\nAlthough the matrix and vector forms represent exactly the same data, the matrix form allows us to display the data in the form of an image. Assume each value in the matrix is a grayscale intensity, where zero is black and 255 is white; for example we can create a checkerboard pattern of any size with the following function:\n\n\n```python\n# usual pyton bookkeeping...\n%pylab inline\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport IPython\nfrom IPython.display import Image\nimport math\nfrom __future__ import print_function\n\n# ensure all images will be grayscale\ngray();\n```\n\n /opt/conda/envs/python2/lib/python2.7/site-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.\n warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.')\n\n\n Populating the interactive namespace from numpy and matplotlib\n\n\n\n \n\n\n\n```python\n# let's create a checkerboard pattern\nSIZE = 4\nimg = np.zeros((SIZE, SIZE))\nfor n in range(0, SIZE):\n for m in range(0, SIZE):\n if (n & 0x1) ^ (m & 0x1):\n img[n, m] = 255\n\n# now display the matrix as an image\nplt.matshow(img); \n```\n\nGiven the equivalence between the space of $M\\times N$ matrices and $\\mathbb{R}^{MN}$ we can easily define the inner product between two matrices in the usual way:\n\n$$\n\\langle \\mathbf{A}, \\mathbf{B} \\rangle = \\sum_{m=0}^{M-1} \\sum_{n=0}^{N-1} a_{m,n} b_{m, n}\n$$\n\n(where we have neglected the conjugation since we'll only deal with real-valued matrices); in other words, we can take the inner product between two matrices as the standard inner product of their unrolled versions. The inner product allows us to define orthogonality between images and this is rather useful since we're going to explore a couple of bases for this space.\n\n\n## Actual images\n\nConveniently, using IPython, we can read images from disk in any given format and convert them to numpy arrays; let's load and display for instance a JPEG image:\n\n\n```python\nimg = np.array(plt.imread('cameraman.jpg'), dtype=int)\nplt.matshow(img);\n```\n\nThe image is a $64\\times 64$ low-resolution version of the famous \"cameraman\" test picture. Out of curiosity, we can look at the first column of this image, which is is a $64\u00d71$ vector:\n\n\n```python\nimg[:,0]\n```\n\n\n\n\n array([156, 157, 157, 152, 154, 155, 151, 157, 152, 155, 158, 159, 159,\n 160, 160, 161, 155, 160, 161, 161, 164, 162, 160, 162, 158, 160,\n 158, 157, 160, 160, 159, 158, 163, 162, 162, 157, 160, 114, 114,\n 103, 88, 62, 109, 82, 108, 128, 138, 140, 136, 128, 122, 137,\n 147, 114, 114, 144, 112, 115, 117, 131, 112, 141, 99, 97])\n\n\n\nThe values are integers between zero and 255, meaning that each pixel is encoded over 8 bits (or 256 gray levels).\n\n## The canonical basis\n\nThe canonical basis for any matrix space $\\mathbb{R}^{M\\times N}$ is the set of \"delta\" matrices where only one element equals to one while all the others are 0. Let's call them $\\mathbf{E}_n$ with $0 \\leq n < MN$. 
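With the same column-by-column ordering as above, $\\mathbf{E}_n$ has its single nonzero entry in row $n \\bmod M$ and column $\\lfloor n/M \\rfloor$. 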
Here is a function to create the canonical basis vector given its index:\n\n\n```python\ndef canonical(n, M=5, N=10):\n e = np.zeros((M, N))\n e[(n % M), int(n / M)] = 1\n return e\n```\n\nHere are some basis vectors: look for the position of white pixel, which differentiates them and note that we enumerate pixels column-wise:\n\n\n```python\nplt.matshow(canonical(0));\nplt.matshow(canonical(1));\nplt.matshow(canonical(49));\n```\n\n## Transmitting images\n\nSuppose we want to transmit the \"cameraman\" image over a communication channel. The intuitive way to do so is to send the pixel values one by one, which corresponds to sending the coefficients of the decomposition of the image over the canonical basis. So far, nothing complicated: to send the cameraman image, for instance, we will send $64\\times 64 = 4096$ coefficients in a row. \n\nNow suppose that a communication failure takes place after the first half of the pixels have been sent. The received data will allow us to display an approximation of the original image only. If we replace the missing data with zeros, here is what we would see, which is not very pretty:\n\n\n```python\n# unrolling of the image for transmission (we go column by column, hence \"F\")\ntx_img = np.ravel(img, \"F\")\n\n# oops, we lose half the data\ntx_img[int(len(tx_img)/2):] = 0\n\n# rebuild matrix\nrx_img = np.reshape(tx_img, (64, 64), \"F\")\nplt.matshow(rx_img);\n```\n\nCan we come up with a trasmission scheme that is more robust in the face of channel loss? Interestingly, the answer is yes, and it involves a different, more versatile basis for the space of images. What we will do is the following: \n\n* describe the Haar basis, a new basis for the image space\n* project the image in the new basis\n* transmit the projection coefficients\n* rebuild the image using the basis vectors\n\nWe know a few things: if we choose an orthonormal basis, the analysis and synthesis formulas will be super easy (a simple inner product and a scalar multiplication respectively). The trick is to find a basis that will be robust to the loss of some coefficients. \n\nOne such basis is the **Haar basis**. We cannot go into too many details in this notebook but, for the curious, a good starting point is [here](https://chengtsolin.wordpress.com/2015/04/15/real-time-2d-discrete-wavelet-transform-using-opengl-compute-shader/). Mathematical formulas aside, the Haar basis works by encoding the information in a *hierarchical* way: the first basis vectors encode the broad information and the higher coefficients encode the detail. Let's have a look. \n\nFirst of all, to keep things simple, we will remain in the space of square matrices whose size is a power of two. 
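The building block is the step-shaped prototype\n\n$$\n\\psi(t) \n= \\begin{cases}\n1 & 0 \\leq t < 1/2\n\\\\\n-1 & 1/2 \\leq t < 1\n\\\\\n0 & \\text{otherwise.}\n\\end{cases}\n$$\n\nApart from the constant vector, every 1D Haar basis vector is (up to normalization) a shifted and compressed copy of this prototype.\n\n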
The code to generate the Haar basis matrices is the following: first we generate a 1D Haar vector and then we obtain the basis matrices by taking the outer product of all possible 1D vectors (don't worry if it's not clear, the results are what's important):\n\n\n```python\ndef haar1D(n, SIZE):\n # check power of two\n if math.floor(math.log(SIZE) / math.log(2)) != math.log(SIZE) / math.log(2):\n print(\"Haar defined only for lengths that are a power of two\")\n return None\n if n >= SIZE or n < 0:\n print(\"invalid Haar index\")\n return None\n \n # zero basis vector\n if n == 0:\n return np.ones(SIZE)\n \n # express n > 1 as 2^p + q with p as large as possible;\n # then k = SIZE/2^p is the length of the support\n # and s = qk is the shift\n p = math.floor(math.log(n) / math.log(2))\n pp = int(pow(2, p))\n k = SIZE / pp\n s = (n - pp) * k\n \n h = np.zeros(SIZE)\n h[int(s):int(s+k/2)] = 1\n h[int(s+k/2):int(s+k)] = -1\n # these are not normalized\n return h\n\n\ndef haar2D(n, SIZE=8):\n # get horizontal and vertical indices\n hr = haar1D(n % SIZE, SIZE)\n hv = haar1D(int(n / SIZE), SIZE)\n # 2D Haar basis matrix is separable, so we can\n # just take the column-row product\n H = np.outer(hr, hv)\n H = H / math.sqrt(np.sum(H * H))\n return H\n```\n\nFirst of all, let's look at a few basis matrices; note that the matrices have positive and negative values, so that the value of zero will be represented as gray:\n\n\n```python\nplt.matshow(haar2D(0));\nplt.matshow(haar2D(1));\nplt.matshow(haar2D(10));\nplt.matshow(haar2D(63));\n```\n\nWe can notice two key properties\n\n* each basis matrix has positive and negative values in some symmetric patter: this means that the basis matrix will implicitly compute the difference between image areas\n* low-index basis matrices take differences between large areas, while high-index ones take differences in smaller **localized** areas of the image\n\nWe can immediately verify that the Haar matrices are orthogonal:\n\n\n```python\n# let's use an 8x8 space; there will be 64 basis vectors\n# compute all possible inner product and only print the nonzero results\nfor m in range(0,64):\n for n in range(0,64):\n r = np.sum(haar2D(m, 8) * haar2D(n, 8))\n if r != 0:\n print(\"[%dx%d -> %f] \" % (m, n, r), end=\"\")\n```\n\n [0x0 -> 1.000000] [1x1 -> 1.000000] [2x2 -> 1.000000] [3x3 -> 1.000000] [4x4 -> 1.000000] [5x5 -> 1.000000] [6x6 -> 1.000000] [7x7 -> 1.000000] [8x8 -> 1.000000] [9x9 -> 1.000000] [10x10 -> 1.000000] [11x11 -> 1.000000] [12x12 -> 1.000000] [13x13 -> 1.000000] [14x14 -> 1.000000] [15x15 -> 1.000000] [16x16 -> 1.000000] [16x17 -> -0.000000] [17x16 -> -0.000000] [17x17 -> 1.000000] [18x18 -> 1.000000] [19x19 -> 1.000000] [20x20 -> 1.000000] [21x21 -> 1.000000] [22x22 -> 1.000000] [23x23 -> 1.000000] [24x24 -> 1.000000] [24x25 -> -0.000000] [25x24 -> -0.000000] [25x25 -> 1.000000] [26x26 -> 1.000000] [27x27 -> 1.000000] [28x28 -> 1.000000] [29x29 -> 1.000000] [30x30 -> 1.000000] [31x31 -> 1.000000] [32x32 -> 1.000000] [33x33 -> 1.000000] [34x34 -> 1.000000] [35x35 -> 1.000000] [36x36 -> 1.000000] [37x37 -> 1.000000] [38x38 -> 1.000000] [39x39 -> 1.000000] [40x40 -> 1.000000] [41x41 -> 1.000000] [42x42 -> 1.000000] [43x43 -> 1.000000] [44x44 -> 1.000000] [45x45 -> 1.000000] [46x46 -> 1.000000] [47x47 -> 1.000000] [48x48 -> 1.000000] [49x49 -> 1.000000] [50x50 -> 1.000000] [51x51 -> 1.000000] [52x52 -> 1.000000] [53x53 -> 1.000000] [54x54 -> 1.000000] [55x55 -> 1.000000] [56x56 -> 1.000000] [57x57 -> 1.000000] [58x58 -> 1.000000] [59x59 -> 1.000000] 
[60x60 -> 1.000000] [61x61 -> 1.000000] [62x62 -> 1.000000] [63x63 -> 1.000000] \n\nOK! Everything's fine. Now let's transmit the \"cameraman\" image: first, let's verify that it works\n\n\n```python\n# project the image onto the Haar basis, obtaining a vector of 4096 coefficients\n# this is simply the analysis formula for the vector space with an orthogonal basis\ntx_img = np.zeros(64*64)\nfor k in range(0, (64*64)):\n tx_img[k] = np.sum(img * haar2D(k, 64))\n\n# now rebuild the image with the synthesis formula; since the basis is orthonormal\n# we just need to scale the basis matrices by the projection coefficients\nrx_img = np.zeros((64, 64))\nfor k in range(0, (64*64)):\n rx_img += tx_img[k] * haar2D(k, 64)\n\nplt.matshow(rx_img);\n```\n\nCool, it works! Now let's see what happens if we lose the second half of the coefficients:\n\n\n```python\n# oops, we lose half the data\nlossy_img = np.copy(tx_img);\nlossy_img[int(len(tx_img)/2):] = 0\n\n# rebuild matrix\nrx_img = np.zeros((64, 64))\nfor k in range(0, (64*64)):\n rx_img += lossy_img[k] * haar2D(k, 64)\n\nplt.matshow(rx_img);\n```\n\nThat's quite remarkable, no? We've lost the same amount of information as before but the image is still acceptable. This is because we lost the coefficients associated to the fine details of the image but we retained the \"broad strokes\" encoded by the first half. \n\nNote that if we lose the first half of the coefficients the result is markedly different:\n\n\n```python\nlossy_img = np.copy(tx_img);\nlossy_img[0:int(len(tx_img)/2)] = 0\n\nrx_img = np.zeros((64, 64))\nfor k in range(0, (64*64)):\n rx_img += lossy_img[k] * haar2D(k, 64)\n\nplt.matshow(rx_img);\n```\n\nIn fact, schemes like this one are used in *progressive encoding*: send the most important information first and add details if the channel permits it. You may have experienced this while browsing the interned over a slow connection. \n\nAll in all, a great application of a change of basis!\n", "meta": {"hexsha": "4c3f25d53eb81821ac8b7ba6c46e9e8c14b057b3", "size": 164679, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "HaarBasis/.ipynb_checkpoints/hb-checkpoint.ipynb", "max_stars_repo_name": "AkshayPR244/Coursera-EPFL-Digital-Signal-Processing", "max_stars_repo_head_hexsha": "bdf9c65e2c02f0a99336cbe60ebac919891e05e3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-07-24T03:16:36.000Z", "max_stars_repo_stars_event_max_datetime": "2020-09-25T10:21:00.000Z", "max_issues_repo_path": "HaarBasis/.ipynb_checkpoints/hb-checkpoint.ipynb", "max_issues_repo_name": "AkshayPR244/Coursera-EPFL-Digital-Signal-Processing", "max_issues_repo_head_hexsha": "bdf9c65e2c02f0a99336cbe60ebac919891e05e3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "HaarBasis/.ipynb_checkpoints/hb-checkpoint.ipynb", "max_forks_repo_name": "AkshayPR244/Coursera-EPFL-Digital-Signal-Processing", "max_forks_repo_head_hexsha": "bdf9c65e2c02f0a99336cbe60ebac919891e05e3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-03-23T19:37:53.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-23T19:37:53.000Z", "avg_line_length": 253.3523076923, "max_line_length": 22520, "alphanum_fraction": 0.9058957123, "converted": true, "num_tokens": 4079, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9241418178895028, "lm_q2_score": 0.903294196941332, "lm_q1q2_score": 0.8347719412504011}} {"text": "# Demystifying `meshgrid()`\n\nWe use the Numpy [meshgrid()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.meshgrid.html) function to plot 2D arrays but it can be a bit mysterious as to what the \"meshgrid\" actually does or look like.\n\nThe docs say: \n\n_Return coordinate matrices from coordinate vectors._\n\n_Make N-D coordinate arrays for vectorized evaluations of N-D scalar/vector fields over N-D grids, given one-dimensional coordinate arrays x1, x2,..., xn._\n\n## Visualization with a Gaussian\nTesting with the initial condition, an isotropic 2D [Gaussian](http://mathworld.wolfram.com/GaussianFunction.html)\n\n$$\ng_2(x, y) = \\frac{1}{2\\pi\\sigma^2} \\exp\\left[-\\frac{(x-x_0)^2 + (y-y_0)^2}{2\\sigma^2}\\right]\n$$\n\nNote that the 2D Gaussian can be written as a product of two 1D Gaussians\n\n\\begin{align}\ng_2(x, y) &= \\frac{1}{\\sqrt{2\\pi\\sigma^2}} \\exp\\left[-\\frac{(x-x_0)^2}{2\\sigma^2}\\right] \\cdot \n \\frac{1}{\\sqrt{2\\pi\\sigma^2}} \\exp\\left[-\\frac{(y-y_0)^2}{2\\sigma^2}\\right]\\\\\n &= g_1(x) \\cdot g_1(y)\n\\end{align}\n\nPlot $g_2(x, y)$ by (1) constructing it from the product of $g_1$s and (2) from the explicit formula. Use [`np.meshgrid`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.meshgrid.html) to evaluate the function on a grid.\n\nUse an *asymmetric* grid ($L_x=0.5, N_x=50$ and $L_y=1, N_y=100$) to clearly see the shape of the underlying arrays.\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n```\n\n\n```python\n%matplotlib inline\n```\n\n\n```python\nL = 0.5\nLx, Ly = 0.5, 1\nNx, Ny = 50, 100\n\nx = np.linspace(0, Lx, Nx+1) # need N+1!\ny = np.linspace(0, Ly, Ny+1)\nX, Y = np.meshgrid(x, y)\n \ndef gaussian2D(x, y, u0=0.05, x0=None, y0=None, sigma=0.1*L):\n x0 = np.mean(x) if x0 is None else x0\n y0 = np.mean(y) if y0 is None else y0\n return u0/(2*np.pi*sigma) * np.exp(-((x-x0)**2 + (y-y0)**2)/(2*sigma**2))\n\ndef gaussian(x, u0=0.05, x0=None, sigma=0.1*L):\n x0 = np.mean(x) if x0 is None else x0\n return u0/np.sqrt(2*np.pi*sigma) * np.exp(-(x-x0)**2 / (2*sigma**2))\n```\n\n#### Meshgrid \n\n\n```python\nprint(x.shape, y.shape)\nprint(X.shape, Y.shape)\nprint(X)\nprint(Y)\n```\n\n (51,) (101,)\n (101, 51) (101, 51)\n [[0. 0.01 0.02 ... 0.48 0.49 0.5 ]\n [0. 0.01 0.02 ... 0.48 0.49 0.5 ]\n [0. 0.01 0.02 ... 0.48 0.49 0.5 ]\n ...\n [0. 0.01 0.02 ... 0.48 0.49 0.5 ]\n [0. 0.01 0.02 ... 0.48 0.49 0.5 ]\n [0. 0.01 0.02 ... 0.48 0.49 0.5 ]]\n [[0. 0. 0. ... 0. 0. 0. ]\n [0.01 0.01 0.01 ... 0.01 0.01 0.01]\n [0.02 0.02 0.02 ... 0.02 0.02 0.02]\n ...\n [0.98 0.98 0.98 ... 0.98 0.98 0.98]\n [0.99 0.99 0.99 ... 0.99 0.99 0.99]\n [1. 1. 1. ... 1. 1. 1. 
]]\n\n\n#### Plot 1D Gaussians\nPlot the 1D Gaussians using the meshgrid components.\n\n\n```python\ngaussian(X).shape\n```\n\n\n\n\n (101, 51)\n\n\n\n\n```python\nfig = plt.figure()\nax = fig.add_subplot(111)\nax.contourf(X, Y, gaussian(X, sigma=0.1), 30, cmap=plt.cm.viridis)\nax.set_aspect(1)\nplt.show()\n```\n\n\n```python\ngaussian(Y).shape\n```\n\n\n\n\n (101, 51)\n\n\n\n\n```python\nax = plt.subplot(131)\nax.contourf(X, Y, gaussian(X, sigma=0.1), 30, cmap=plt.cm.viridis)\nax.set_aspect(1)\nax = plt.subplot(132)\nax.contourf(X, Y, gaussian(Y, sigma=0.1), 30, cmap=plt.cm.viridis)\nax.set_aspect(1)\nax = plt.subplot(133)\nax.contourf(X, Y, gaussian(X, sigma=0.1) * gaussian(Y, sigma=0.1), 30, cmap=plt.cm.viridis)\nax.set_aspect(1)\n```\n\n#### 2D gaussian \n\n\n```python\ngaussian2D(X, Y).shape\n```\n\n\n\n\n (101, 51)\n\n\n\n\n```python\nax = plt.subplot(111)\nax.contourf(X, Y, gaussian2D(X, Y, sigma=0.1), 30, cmap=plt.cm.viridis)\nax.set_aspect(1)\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "2ae0e972e0fab8cfafc650f39306c04d40be1641", "size": 30524, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Module_7/CaseStudyIII/meshgrid.ipynb", "max_stars_repo_name": "Py4Physics/PHY194", "max_stars_repo_head_hexsha": "68966ad96bbf2756ca3c0c39210be69c379c7619", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2019-10-26T00:39:14.000Z", "max_stars_repo_stars_event_max_datetime": "2019-10-29T19:35:20.000Z", "max_issues_repo_path": "Module_7/CaseStudyIII/meshgrid.ipynb", "max_issues_repo_name": "Py4Phy/PHY202", "max_issues_repo_head_hexsha": "ec3a0b0285f2601accfdbf0c30416e1351430342", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Module_7/CaseStudyIII/meshgrid.ipynb", "max_forks_repo_name": "Py4Phy/PHY202", "max_forks_repo_head_hexsha": "ec3a0b0285f2601accfdbf0c30416e1351430342", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 97.8333333333, "max_line_length": 11244, "alphanum_fraction": 0.8504127899, "converted": true, "num_tokens": 1426, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9284088005554475, "lm_q2_score": 0.8991213772699435, "lm_q1q2_score": 0.8347521994249502}} {"text": "---\nauthor: Nathan Carter (ncarter@bentley.edu)\n---\n\nThis answer assumes you have imported SymPy as follows.\n\n\n```python\nfrom sympy import * # load all math functions\ninit_printing( use_latex='mathjax' ) # use pretty math output\n```\n\nLet's create a simple example. We'll be approximating $f(x)=\\sin x$\ncentered at $a=0$ with a Taylor series of degree $n=5$. We will be\napplying our approximation at $x_0=1$. What is the error bound?\n\n\n```python\nvar( 'x' )\nformula = sin(x)\na = 0\nx_0 = 1\nn = 5\n```\n\nWe will not ask SymPy to compute the formula exactly, but will instead\nhave it sample a large number of $c$ values from the interval in question,\nand compute the maximum over those samples. 
(The exact solution can be too\nhard for SymPy to compute.)\n\n\n```python\n# Get 1000 evenly-spaced c values:\ncs = [ Min(x_0,a) + abs(x_0-a)*i/1000 for i in range(1001) ]\n# Create the formula |f^(n+1)(x)|:\nformula2 = abs( diff( formula, x, n+1 ) )\n# Find the max of it on all the 1000 values:\nm = Max( *[ formula2.subs(x,c) for c in cs ] )\n# Compute the error bound:\nN( abs(x_0-a)**(n+1) / factorial(n+1) * m )\n```\n\n\n\n\n$\\displaystyle 0.00116870970112208$\n\n\n\nThe error is at most $0.00116871\\ldots$.\n", "meta": {"hexsha": "4993b8178d24d78fc53cec86f83d4a6933804dad", "size": 2780, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "database/tasks/How to compute the error bounds on a Taylor approximation/Python, using SymPy.ipynb", "max_stars_repo_name": "nathancarter/how2data", "max_stars_repo_head_hexsha": "7d4f2838661f7ce98deb1b8081470cec5671b03a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "database/tasks/How to compute the error bounds on a Taylor approximation/Python, using SymPy.ipynb", "max_issues_repo_name": "nathancarter/how2data", "max_issues_repo_head_hexsha": "7d4f2838661f7ce98deb1b8081470cec5671b03a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "database/tasks/How to compute the error bounds on a Taylor approximation/Python, using SymPy.ipynb", "max_forks_repo_name": "nathancarter/how2data", "max_forks_repo_head_hexsha": "7d4f2838661f7ce98deb1b8081470cec5671b03a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-07-18T19:01:29.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-29T06:47:11.000Z", "avg_line_length": 24.6017699115, "max_line_length": 99, "alphanum_fraction": 0.5438848921, "converted": true, "num_tokens": 382, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9425067211996142, "lm_q2_score": 0.8856314753275017, "lm_q1q2_score": 0.8347136180021006}} {"text": "> This is one of the 100 recipes of the [IPython Cookbook](http://ipython-books.github.io/), the definitive guide to high-performance scientific computing and data science in Python.\n\n\n# 15.5. A bit of number theory with SymPy\n\n\n```python\nfrom sympy import *\ninit_printing()\n```\n\n\n```python\nimport sympy.ntheory as nt\n```\n\n## Prime numbers\n\nTest whether a number is prime.\n\n\n```python\nnt.isprime(2011)\n```\n\nFind the next prime after a given number.\n\n\n```python\nnt.nextprime(2011)\n```\n\nWhat is the 1000th prime number?\n\n\n```python\nnt.prime(1000)\n```\n\nHow many primes less than 2011 are there?\n\n\n```python\nnt.primepi(2011)\n```\n\nWe can plot $\\pi(x)$, the prime-counting function (the number of prime numbers less than or equal to some number x). The famous *prime number theorem* states that this function is asymptotically equivalent to $x/\\log(x)$. 
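In symbols, the theorem says\n\n$$\n\\lim_{x \\to \\infty} \\frac{\\pi(x)}{x/\\log(x)} = 1,\n$$\n\nso $x/\\log(x)$ becomes an ever better relative approximation of $\\pi(x)$ as $x$ grows.\n\n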
This expression approximately quantifies the distribution of the prime numbers among all integers.\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n```\n\n\n```python\nx = np.arange(2, 10000)\nplt.plot(x, list(map(nt.primepi, x)), '-k', label='$\\pi(x)$');\nplt.plot(x, x / np.log(x), '--k', label='$x/\\log(x)$');\nplt.legend(loc=2);\n```\n\nLet's compute the integer factorization of some number.\n\n\n```python\nnt.factorint(1998)\n```\n\n\n```python\n2 * 3**3 * 37\n```\n\n## Chinese Remainder Theorem\n\nA lazy mathematician is counting his marbles. When they are arranged in three rows, the last column contains one marble. When they form four rows, there are two marbles in the last column, and there are three with five rows. The Chinese Remainer Theorem can give him the answer directly.\n\n\n```python\nfrom sympy.ntheory.modular import solve_congruence\n```\n\n\n```python\nsolve_congruence((1, 3), (2, 4), (3, 5))\n```\n\nThere are infinitely many solutions: 58, and 58 plus any multiple of 60. Since 118 seems visually too high, 58 is the right answer.\n\n> You'll find all the explanations, figures, references, and much more in the book (to be released later this summer).\n\n> [IPython Cookbook](http://ipython-books.github.io/), by [Cyrille Rossant](http://cyrille.rossant.net), Packt Publishing, 2014 (500 pages).\n", "meta": {"hexsha": "08bb133dad476abb4fa85a06d715cab81115b171", "size": 5363, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/chapter15_symbolic/05_number_theory.ipynb", "max_stars_repo_name": "hidenori-t/cookbook-code", "max_stars_repo_head_hexsha": "750f546ed87b09d28532884b8074b96cd8d32a38", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 820, "max_stars_repo_stars_event_min_datetime": "2015-01-01T18:15:54.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-06T16:15:07.000Z", "max_issues_repo_path": "notebooks/chapter15_symbolic/05_number_theory.ipynb", "max_issues_repo_name": "bndxn/cookbook-code", "max_issues_repo_head_hexsha": "90c31341edccf039187e6a3809fb336f83bb758f", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": 31, "max_issues_repo_issues_event_min_datetime": "2015-02-25T22:08:09.000Z", "max_issues_repo_issues_event_max_datetime": "2018-09-28T08:41:38.000Z", "max_forks_repo_path": "notebooks/chapter15_symbolic/05_number_theory.ipynb", "max_forks_repo_name": "bndxn/cookbook-code", "max_forks_repo_head_hexsha": "90c31341edccf039187e6a3809fb336f83bb758f", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 483, "max_forks_repo_forks_event_min_datetime": "2015-01-02T13:53:11.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-18T21:05:16.000Z", "avg_line_length": 20.8677042802, "max_line_length": 328, "alphanum_fraction": 0.5489464852, "converted": true, "num_tokens": 582, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9343951698485602, "lm_q2_score": 0.893309411735131, "lm_q1q2_score": 0.8347039995055652}} {"text": "# day06: Gradient Descent for Linear Regression\n\n# Objectives\n\n* Learn how to fit weight parameters of Linear Regression to a simple dataset via gradient descent\n* Understand impact of step size\n* Understand impact of initialization\n\n\n# Outline\n* [Part 1: Loss and Gradient for 1-dim. 
Linear Regression](#part1)\n* [Part 2: Gradient Descent Algorithm in a few lines of Python](#part2)\n* [Part 3: Debugging with Trace Plots](#part3)\n* [Part 4: Selecting the step size](#part4)\n* [Part 5: Selecting the initialization](#part5)\n* [Part 6: Using SciPy's built-in routines](#part6)\n\n# Takeaways\n\n\n* Gradient descent is a simple algorithm that can be implemented in a few lines of Python\n* * Practical issues include selecting step size and initialization\n* Step size matters a lot\n* * Need to select carefully for each problem\n\n* Initialization of the parameters can matter too!\n\n* scipy offers some useful tools for gradient-based optimization\n* * scipy's toolbox cannot do scalable \"stochastic\" methods (requires a modest size dataset, not too big)\n* * \"L-BFGS-B\" method is highly recommended if you have your loss and gradient functions available\n\n\n\n```python\nimport numpy as np\n```\n\n\n```python\n# import plotting libraries\nimport matplotlib\nimport matplotlib.pyplot as plt\n\n%matplotlib inline\nplt.style.use('seaborn') # pretty matplotlib plots\n\nimport seaborn as sns\nsns.set('notebook', font_scale=1.25, style='whitegrid')\n```\n\n# Create simple dataset: y = 1.234 * x + noise\n\nWe will *intentionally* create a toy dataset where we know that a good solution has slope near 1.234.\n\nNaturally, the best slope for the finite dataset of N=100 examples we create won't be exactly 1.234 (because of the noise added plus the fact that our dataset size is limited).\n\n\n```python\ndef create_dataset(N=100, slope=1.234, noise_stddev=0.1, random_state=0):\n random_state = np.random.RandomState(int(random_state))\n\n # input features\n x_N = np.linspace(-2, 2, N)\n \n # output features\n y_N = slope * x_N + random_state.randn(N) * noise_stddev\n \n return x_N, y_N\n```\n\n\n```python\nx_N, y_N = create_dataset(N=50, noise_stddev=0.3)\n```\n\n\n```python\nfig, ax = plt.subplots(nrows=1, ncols=1, figsize=(5,5))\nplt.plot(x_N, y_N, 'k.');\nplt.xlabel('x');\nplt.ylabel('y');\n```\n\n# Part 1: Gradient Descent for 1-dim. Linear Regression\n\n## Define model\n\nConsider the *simplest* linear regression model. A single weight parameter $w \\in \\mathbb{R}$ representing the slope of the prediction line. 
No bias/intercept.\n\nTo make predictions, we just compute the weight multiplied by the input feature\n$$\n\\hat{y}(x) = w \\cdot x\n$$\n\n## Define loss function\n\nWe want to minimize the total *squared error* across all N observed data examples (input features $x_n$, output responses $y_n$)\n\n\\begin{align}\n \\min_{w \\in \\mathbb{R}} ~~ &\\ell(w)\n \\\\\n \\text{calc_loss}(w) = \\ell(w) &= \\sum_{n=1}^N (y_n - w x_n)^2\n\\end{align}\n\n### Exercise 1A: Complete the code below\n\nYou should make it match the math expression above.\n\n\n```python\ndef calc_loss(w):\n ''' Compute loss for slope-only least-squares linear regression\n \n Args\n ----\n w : float\n Value of slope parameter\n\n Returns\n -------\n loss : float\n Sum of squared error loss at provided w value\n '''\n yhat_N = x_N * w\n sum_squared_error = np.sum((y_N - yhat_N)**2) # todo compute the sum of squared error between y and yhat\n return sum_squared_error\n```\n\n# Define the gradient function\n\n\\begin{align}\n\\text{calc_grad}(w) = \\ell'(w) &= \\frac{\\partial}{\\partial w} [ \\sum_{n=1}^N (y_n - w x_n)^2] \n\\\\\n&= \\sum_{n=1}^N 2 (y_n - w x_n) (-x_n)\n\\\\\n&= 2 \\sum_{n=1}^N (w x_n - y_n) (x_n)\n\\\\\n&= 2 w \\left( \\sum_{n=1}^N x_n^2 \\right) - 2 \\sum_{n=1}^N y_n x_n\n\\end{align}\n\nBelow, we've implemented the gradient calculation in code for you\n\n\n```python\ndef calc_grad(w):\n ''' Compute gradient for slope-only least-squares linear regression\n \n Args\n ----\n w : float\n Value of slope parameter\n\n Returns\n -------\n g : float\n Value of derivative of loss function at provided w value\n '''\n g = 2.0 * w * np.sum(np.square(x_N)) - 2.0 * np.sum(x_N * y_N)\n return g\n```\n\n## Plot loss evaluated at each w from -3 to 8\n\nWe should see a \"bowl\" shape with one *global* minima, because our optimization problem is \"convex\"\n\n\n```python\nw_grid = np.linspace(-3, 8, 300) # create array of 300 values between -3 and 8\n```\n\n\n```python\nloss_grid = np.asarray([calc_loss(w) for w in w_grid])\nplt.plot(w_grid, loss_grid, 'b.-');\nplt.xlabel('w');\nplt.ylabel('loss(w)');\n```\n\n### Discussion 1b: Visually, at what value of $w$ does the loss function have a minima? Is it near where you would expect (hint: look above for the \"true\" slope value used to generate the data)\n\n### Exercise 1c: Write NumPy code to identify which entry in the w_grid array corresponds to the lowest entry in the loss_grid array\n\nHint: use np.argmin\n\n\n```python\n# TODO write code here\nindex = np.argmin(loss_grid)\nprint(index)\nprint(w_grid[index])\n```\n\n 112\n 1.120401337792643\n\n\n## Sanity check: plot gradient evaluated at each w from -3 to 8\n\n\n```python\ngrad_grid = np.asarray([calc_grad(w) for w in w_grid])\nplt.plot(w_grid, grad_grid, 'b.-');\nplt.xlabel('w');\nplt.ylabel('grad(w)');\n```\n\n### Discussion 1d: Visually, at what value of $w$ does the gradient function cross zero? 
Is it the same place as the location of the minimum in the loss above?\n\nTODO interpret the graph above and write your answer here, then discuss with your group\n\n### Exercise 1d: Numerically, at which value of w does grad_grid cross zero?\n\nWe might try to estimate numerically where the gradient crosses zero.\n\nWe could do this in a few steps:\n\n1) Compute the distance from each gradient in `grad_grid` to 0.0 (we could use just absolute distance)\n\n2) Find the index of `grad_grid` with smallest distance (using `np.argmin`)\n\n3) Plug that index into `w_grid` to get the $w$ value corresponding to that zero-crossing\n\n\n```python\ndist_from_zero_G = np.abs(grad_grid - 0.0)\n\nzero_cross_index = 0 # TODO fix me for step 2 above\n\nprint(\"Zero crossing occurs at w = %.4f\" % w_grid[0]) # TODO fix me for step 3 above\n```\n\n Zero crossing occurs at w = -3.0000\n\n\n## Part 2: Gradient Descent (GD) as an algorithm in Python\n\n\n### Define minimize_via_grad_descent algorithm\n\nCan you understand what each step of this algorithm does?\n\n\n```python\ndef minimize_via_grad_descent(calc_loss, calc_grad, init_w=0.0, step_size=0.001, max_iters=100):\n ''' Perform minimization of provided loss function via gradient descent\n \n Args\n ----\n calc_loss : function\n calc_grad : function\n init_w : float\n step_size : float\n max_iters : positive int\n \n Return\n ----\n wopt: float\n array of optimized weights that approximately gives the least error\n info_dict : dict\n Contains information about the optimization procedure useful for debugging\n Entries include:\n * trace_loss_list : list of loss values\n * trace_grad_list : list of gradient values\n '''\n w = 1.0 * init_w \n grad = calc_grad(w)\n\n # Create some lists to track progress over time (for debugging)\n trace_loss_list = []\n trace_w_list = []\n trace_grad_list = []\n\n for iter_id in range(max_iters):\n if iter_id > 0:\n w = w - step_size * grad\n \n loss = calc_loss(w)\n grad = calc_grad(w) \n\n print(\" iter %5d/%d | w % 13.5f | loss % 13.4f | grad % 13.4f\" % (\n iter_id, max_iters, w, loss, grad))\n \n trace_loss_list.append(loss)\n trace_w_list.append(w)\n trace_grad_list.append(grad)\n \n wopt = w\n info_dict = dict(\n trace_loss_list=trace_loss_list,\n trace_w_list=trace_w_list, \n trace_grad_list=trace_grad_list)\n \n return wopt, info_dict\n```\n\n### Discussion 2a: Which line of the above function does the *parameter update* happen?\n\nRemember, in math, the parameter update of gradient descent is this:\n$$\nw \\gets w - \\alpha \\nabla_w \\ell(w)\n$$\n\nwhere $\\alpha > 0$ is the step size.\n\nIn words, this math says *move* the parameter $w$ from its current value a *small step* in the \"downhill\" direction (indicated by gradient).\n\nTODO write down here which line above *you* think it is, then discuss with your group\n\n\n```python\n\n```\n\n### Try it! 
Run GD with step_size = 0.001\n\nRunning the cell below will have the following effects:\n\n1) one line will be printed for every iteration, indicating the current w value and its associated loss\n\n2) the \"optimal\" value of w will be stored in the variable named `wopt` returned by this function\n\n3) a dictionary of information useful for debugging will be stored in the `info_dict` returned by this function\n\n\n```python\nwopt, info_dict = minimize_via_grad_descent(calc_loss, calc_grad, step_size=0.001);\n```\n\n iter 0/100 | w 0.00000 | loss 93.3197 | grad -156.5566\n iter 1/100 | w 0.15656 | loss 70.5104 | grad -134.8304\n iter 2/100 | w 0.29139 | loss 53.5926 | grad -116.1192\n iter 3/100 | w 0.40751 | loss 41.0446 | grad -100.0047\n iter 4/100 | w 0.50751 | loss 31.7376 | grad -86.1265\n iter 5/100 | w 0.59364 | loss 24.8345 | grad -74.1743\n iter 6/100 | w 0.66781 | loss 19.7144 | grad -63.8807\n iter 7/100 | w 0.73169 | loss 15.9168 | grad -55.0156\n iter 8/100 | w 0.78671 | loss 13.1001 | grad -47.3808\n iter 9/100 | w 0.83409 | loss 11.0110 | grad -40.8055\n iter 10/100 | w 0.87489 | loss 9.4614 | grad -35.1427\n iter 11/100 | w 0.91004 | loss 8.3121 | grad -30.2657\n iter 12/100 | w 0.94030 | loss 7.4597 | grad -26.0656\n iter 13/100 | w 0.96637 | loss 6.8274 | grad -22.4483\n iter 14/100 | w 0.98882 | loss 6.3584 | grad -19.3331\n iter 15/100 | w 1.00815 | loss 6.0106 | grad -16.6501\n iter 16/100 | w 1.02480 | loss 5.7526 | grad -14.3395\n iter 17/100 | w 1.03914 | loss 5.5612 | grad -12.3495\n iter 18/100 | w 1.05149 | loss 5.4193 | grad -10.6357\n iter 19/100 | w 1.06212 | loss 5.3140 | grad -9.1597\n iter 20/100 | w 1.07128 | loss 5.2360 | grad -7.8886\n iter 21/100 | w 1.07917 | loss 5.1781 | grad -6.7938\n iter 22/100 | w 1.08597 | loss 5.1351 | grad -5.8510\n iter 23/100 | w 1.09182 | loss 5.1032 | grad -5.0390\n iter 24/100 | w 1.09686 | loss 5.0796 | grad -4.3397\n iter 25/100 | w 1.10120 | loss 5.0621 | grad -3.7375\n iter 26/100 | w 1.10493 | loss 5.0491 | grad -3.2188\n iter 27/100 | w 1.10815 | loss 5.0394 | grad -2.7721\n iter 28/100 | w 1.11092 | loss 5.0323 | grad -2.3874\n iter 29/100 | w 1.11331 | loss 5.0270 | grad -2.0561\n iter 30/100 | w 1.11537 | loss 5.0231 | grad -1.7708\n iter 31/100 | w 1.11714 | loss 5.0201 | grad -1.5250\n iter 32/100 | w 1.11866 | loss 5.0180 | grad -1.3134\n iter 33/100 | w 1.11998 | loss 5.0164 | grad -1.1311\n iter 34/100 | w 1.12111 | loss 5.0152 | grad -0.9742\n iter 35/100 | w 1.12208 | loss 5.0143 | grad -0.8390\n iter 36/100 | w 1.12292 | loss 5.0136 | grad -0.7225\n iter 37/100 | w 1.12364 | loss 5.0132 | grad -0.6223\n iter 38/100 | w 1.12427 | loss 5.0128 | grad -0.5359\n iter 39/100 | w 1.12480 | loss 5.0125 | grad -0.4615\n iter 40/100 | w 1.12526 | loss 5.0123 | grad -0.3975\n iter 41/100 | w 1.12566 | loss 5.0122 | grad -0.3423\n iter 42/100 | w 1.12600 | loss 5.0121 | grad -0.2948\n iter 43/100 | w 1.12630 | loss 5.0120 | grad -0.2539\n iter 44/100 | w 1.12655 | loss 5.0119 | grad -0.2187\n iter 45/100 | w 1.12677 | loss 5.0119 | grad -0.1883\n iter 46/100 | w 1.12696 | loss 5.0119 | grad -0.1622\n iter 47/100 | w 1.12712 | loss 5.0118 | grad -0.1397\n iter 48/100 | w 1.12726 | loss 5.0118 | grad -0.1203\n iter 49/100 | w 1.12738 | loss 5.0118 | grad -0.1036\n iter 50/100 | w 1.12749 | loss 5.0118 | grad -0.0892\n iter 51/100 | w 1.12757 | loss 5.0118 | grad -0.0768\n iter 52/100 | w 1.12765 | loss 5.0118 | grad -0.0662\n iter 53/100 | w 1.12772 | loss 5.0118 | grad -0.0570\n iter 54/100 | w 1.12777 | loss 5.0118 
| grad -0.0491\n iter 55/100 | w 1.12782 | loss 5.0118 | grad -0.0423\n iter 56/100 | w 1.12787 | loss 5.0118 | grad -0.0364\n iter 57/100 | w 1.12790 | loss 5.0118 | grad -0.0314\n iter 58/100 | w 1.12793 | loss 5.0118 | grad -0.0270\n iter 59/100 | w 1.12796 | loss 5.0118 | grad -0.0233\n iter 60/100 | w 1.12798 | loss 5.0118 | grad -0.0200\n iter 61/100 | w 1.12800 | loss 5.0118 | grad -0.0172\n iter 62/100 | w 1.12802 | loss 5.0118 | grad -0.0149\n iter 63/100 | w 1.12804 | loss 5.0118 | grad -0.0128\n iter 64/100 | w 1.12805 | loss 5.0118 | grad -0.0110\n iter 65/100 | w 1.12806 | loss 5.0118 | grad -0.0095\n iter 66/100 | w 1.12807 | loss 5.0118 | grad -0.0082\n iter 67/100 | w 1.12808 | loss 5.0118 | grad -0.0070\n iter 68/100 | w 1.12808 | loss 5.0118 | grad -0.0061\n iter 69/100 | w 1.12809 | loss 5.0118 | grad -0.0052\n iter 70/100 | w 1.12810 | loss 5.0118 | grad -0.0045\n iter 71/100 | w 1.12810 | loss 5.0118 | grad -0.0039\n iter 72/100 | w 1.12810 | loss 5.0118 | grad -0.0033\n iter 73/100 | w 1.12811 | loss 5.0118 | grad -0.0029\n iter 74/100 | w 1.12811 | loss 5.0118 | grad -0.0025\n iter 75/100 | w 1.12811 | loss 5.0118 | grad -0.0021\n iter 76/100 | w 1.12812 | loss 5.0118 | grad -0.0018\n iter 77/100 | w 1.12812 | loss 5.0118 | grad -0.0016\n iter 78/100 | w 1.12812 | loss 5.0118 | grad -0.0014\n iter 79/100 | w 1.12812 | loss 5.0118 | grad -0.0012\n iter 80/100 | w 1.12812 | loss 5.0118 | grad -0.0010\n iter 81/100 | w 1.12812 | loss 5.0118 | grad -0.0009\n iter 82/100 | w 1.12812 | loss 5.0118 | grad -0.0007\n iter 83/100 | w 1.12812 | loss 5.0118 | grad -0.0006\n iter 84/100 | w 1.12812 | loss 5.0118 | grad -0.0006\n iter 85/100 | w 1.12812 | loss 5.0118 | grad -0.0005\n iter 86/100 | w 1.12813 | loss 5.0118 | grad -0.0004\n iter 87/100 | w 1.12813 | loss 5.0118 | grad -0.0004\n iter 88/100 | w 1.12813 | loss 5.0118 | grad -0.0003\n iter 89/100 | w 1.12813 | loss 5.0118 | grad -0.0003\n iter 90/100 | w 1.12813 | loss 5.0118 | grad -0.0002\n iter 91/100 | w 1.12813 | loss 5.0118 | grad -0.0002\n iter 92/100 | w 1.12813 | loss 5.0118 | grad -0.0002\n iter 93/100 | w 1.12813 | loss 5.0118 | grad -0.0001\n iter 94/100 | w 1.12813 | loss 5.0118 | grad -0.0001\n iter 95/100 | w 1.12813 | loss 5.0118 | grad -0.0001\n iter 96/100 | w 1.12813 | loss 5.0118 | grad -0.0001\n iter 97/100 | w 1.12813 | loss 5.0118 | grad -0.0001\n iter 98/100 | w 1.12813 | loss 5.0118 | grad -0.0001\n iter 99/100 | w 1.12813 | loss 5.0118 | grad -0.0001\n\n\n### Discussion 2b: Does it appear from the *loss* values in trace above that the GD procedure converged?\n\n### Discussion 2c: Does it appear from the *parameter* values in trace above that the GD procedure converged?\n\n### Exercise 2d: What exactly is the gradient of the returned \"optimal\" value of w?\n\nUse your `calc_grad` function to check the result. What is the gradient of the returned `wopt`?\n\nDoes this look totally converged? 
Can you find a $w$ value that would be even better?\n\n\n```python\n# TODO call calc_grad on the return value from above\n```\n\n## Part 3: Diagnostic plots for gradient descent\n\nLet's look at some trace functions.\n\nWhenever you run gradient descent, an *excellent* debugging strategy is the ability to plot the loss, the gradient magnitude, and the parameter of interest at every step of the algorithm.\n\n\n```python\nfig, axes = plt.subplots(nrows=1, ncols=3, sharex=True, sharey=False, figsize=(18,3.6))\n\naxes[0].plot(info_dict['trace_loss_list']);\naxes[0].set_title('loss');\naxes[1].plot(info_dict['trace_grad_list']);\naxes[1].set_title('grad');\naxes[2].plot(info_dict['trace_w_list']);\naxes[2].set_title('w');\n\nplt.xlim([0, 100]);\n```\n\n### Discussion 3a: What value do we expect the *loss* to converge to? Should it always be zero?\n\n### Discussion 3b: What value do we expect the *gradient* to converge to? Should it always be zero?\n\n# Part 4: Larger step sizes\n\n## Try with larger step_size = 0.014\n\n\n```python\nwopt, info_dict = minimize_via_grad_descent(calc_loss, calc_grad, step_size=0.014);\n```\n\n\n```python\nfig, axes = plt.subplots(nrows=1, ncols=3, sharex=True, sharey=False, figsize=(12,3))\n\naxes[0].plot(info_dict['trace_loss_list'], '.-');\naxes[0].set_title('loss');\naxes[1].plot(info_dict['trace_grad_list'], '.-');\naxes[1].set_title('grad');\naxes[2].plot(info_dict['trace_w_list'], '.-');\naxes[2].set_title('w');\n```\n\n### Discussion 4a: What happens here? How is this step size different than in Part 3 above?\n\nTODO discuss with your group\n\n## Try with even larger step size 0.1\n\n\n```python\nwopt, info_dict = minimize_via_grad_descent(calc_loss, calc_grad, step_size=0.1, max_iters=25);\n```\n\n### Discussion 3b: What happens here with this even larger step size? Is it converging?\n\n### Exercise 3c: What is the largest step size you can get to converge reasonably?\n\n\n```python\n# TODO try some other step sizes here\nwopt, info_dict = minimize_via_grad_descent(calc_loss, calc_grad, step_size=0) # TODO fix step_size\n```\n\n# Part 5: Sensitivity to initial conditions\n\n\n\n### Exercise 5a: Try to call the defined procedure with a different initial condition for $w$. What happens?\n\nYou could try $w = 5.0$ or something else.\n\n\n```python\n# TODO try some other initial condition for init_w\nwopt2, info_dict2 = minimize_via_grad_descent(calc_loss, calc_grad, init_w=0, step_size=0.001, max_iters=10) # TODO fix step_size\n```\n\n### Exercise 5b: Try again with another initial value. \n\n\n```python\n# TODO try some other initial condition for init_w\nwopt3, info_dict3 = minimize_via_grad_descent(calc_loss, calc_grad, init_w=0, step_size=0.001, max_iters=10) # TODO fix\n```\n\n### Exercise 5c: Make a trace plot\n\nMake a trace plot showing convergence from multiple different starting values for $w$. 
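One possible sketch (this assumes the two cells above were run with working values for `init_w` and `step_size`, e.g. `step_size=0.001`): simply overlay the `trace_w_list` entries stored in `info_dict2` and `info_dict3`.\n\n\n```python\n# One possible way to compare runs started from different initial values of w:\n# plot the parameter trace of each run on the same axes\nplt.plot(info_dict2['trace_w_list'], '.-', label='run from exercise 5a')\nplt.plot(info_dict3['trace_w_list'], '.-', label='run from exercise 5b')\nplt.xlabel('iteration')\nplt.ylabel('w')\nplt.legend()\nplt.show()\n```\n\n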
What do you notice?\n\n\n```python\n# TODO\n```\n\n# Part 6: Using scipy's built-in gradient optimization tools\n\n\n\n```python\nimport scipy.optimize\n```\n\nTake a look at SciPy's built in minimization toolbox\n\n\n\nWe'll use \"L-BFGS\", a second-order method that uses the function and its gradient.\n\nThis is a \"quasi-newton\" method, which you can get an intuition for here:\n\nhttps://en.wikipedia.org/wiki/Newton%27s_method_in_optimization\n\n\n```python\nresult = scipy.optimize.minimize(calc_loss, 0.0, jac=calc_grad, method='L-BFGS-B')\n\n# Returns an object with several fields, let's print the result to get an idea\nprint(result)\n```\n\n\n```python\nprint(str(result.message))\n```\n\n\n```python\nbest_w = result.x\nprint(best_w)\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "dad2c324691ef10106cec34a390a75eb52a05509", "size": 81870, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "labs/day06_GradientDescent_LinearRegression.ipynb", "max_stars_repo_name": "brawnerquan/comp135-20f-assignments", "max_stars_repo_head_hexsha": "9570c17b872b7334b0e5b86160d868e3b854c71a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "labs/day06_GradientDescent_LinearRegression.ipynb", "max_issues_repo_name": "brawnerquan/comp135-20f-assignments", "max_issues_repo_head_hexsha": "9570c17b872b7334b0e5b86160d868e3b854c71a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "labs/day06_GradientDescent_LinearRegression.ipynb", "max_forks_repo_name": "brawnerquan/comp135-20f-assignments", "max_forks_repo_head_hexsha": "9570c17b872b7334b0e5b86160d868e3b854c71a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 84.2283950617, "max_line_length": 22224, "alphanum_fraction": 0.7886283132, "converted": true, "num_tokens": 6674, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.934395168021653, "lm_q2_score": 0.8933094060543488, "lm_q1q2_score": 0.8347039925654763}} {"text": "# In depth: Gradients \n\nThis tutorial shows how to supply gradient information about an objective to simplenlopt in SciPy or NLopt style. One example for modern automatic differentiation via the external package autograd is also included. The studied optimization problem is again the Rosenbrock function. Its objective and partial derivatives are given by\n\n$$\n\\begin{align}\nf(x, y) & =(1-x)^2+100(y-x^2)^2\\\\\n\\frac{\\partial f}{\\partial x} &=-2(1-x)-400x(y-x^2) \\\\\n\\frac{\\partial f}{\\partial y} &=200(y-x^2)\n\\end{align}\n$$\n\n## jac=callable\n\nThe easiest case which is also shown in the quickstart example. Objective and gradient are supplied as two individual functions:\n\n\n```python\nimport numpy as np\nimport simplenlopt\nfrom time import time\n\ndef rosenbrock(pos):\n\n x, y = pos\n return (1-x)**2 + 100 * (y - x**2)**2\n\ndef rosenbrock_grad(pos):\n\n x, y = pos\n dx = 2 * x -2 - 400 * x * (y-x**2)\n dy = 200 * (y-x**2)\n\n return np.array([dx, dy])\n\nx0=np.array([-1.5, 2.25])\n%timeit -n 1000 res = simplenlopt.minimize(rosenbrock, x0, jac = rosenbrock_grad)\n```\n\n 2.33 ms \u00b1 51.8 \u00b5s per loop (mean \u00b1 std. dev. 
of 7 runs, 1000 loops each)\n\n\n## jac = True\n\nTaking another look at the objective and its partial derivatives, you can see that the bracketed expressions appear in both the objective and the partial derivatives. If both are calculated in separate functions, these terms are recomputed unnecessarily. This can be avoided by supplying both the objective and its gradient in one function; setting jac=True indicates that the objective function also returns the gradient. Let's see how this works and how it affects the performance.\n\n\n```python\ndef rosenbrock_incl_grad(pos):\n \n x, y = pos\n first_bracket = 1-x\n second_bracket = y-x*x\n \n obj = first_bracket*first_bracket+100*second_bracket*second_bracket\n dx = -2*first_bracket-400*x*second_bracket\n dy = 200 * second_bracket\n \n return obj, np.array([dx, dy])\n\n%timeit -n 1000 res = simplenlopt.minimize(rosenbrock_incl_grad, x0, jac = True)\n```\n\n 2.25 ms ± 44.3 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)\n\n\nWe see a small performance improvement. For more complicated objective functions, the performance gains can be much larger because repeated computations are avoided.\n\n## jac='nlopt'\nThis flag is mostly for former NLopt users. It indicates that the objective and its gradient are supplied in vanilla [NLopt style](https://nlopt.readthedocs.io/en/latest/NLopt_Tutorial/#example-in-python). NLopt requires a different signature for the objective: ``f(x, grad)`` instead of ``f(x)``. The gradient is passed in as a NumPy array ``grad`` which must be modified in place. For the Rosenbrock example this looks like:\n\n\n```python\ndef rosenbrock_nlopt_style(pos, grad):\n \n x, y = pos\n first_bracket = 1-x\n second_bracket = y-x*x\n \n obj = first_bracket*first_bracket+100*second_bracket*second_bracket\n \n if grad.size > 0:\n \n dx = -2*first_bracket-400*x*second_bracket\n dy = 200 * second_bracket\n grad[0] = dx\n grad[1] = dy\n \n return obj\n\n%timeit -n 1000 res = simplenlopt.minimize(rosenbrock_nlopt_style, x0, jac = 'nlopt')\n```\n\n 2.14 ms ± 13.5 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)\n\n\nThe in-place update led to another small performance gain. Side note: while the if statement might seem unnecessary, it is required by some of the optimizers, so you are on the safe side if you include it in your objective function.\n\n## jac = '3-point'/'2-point'\nThese flags tell simplenlopt which finite difference scheme to use. Finite differencing is borrowed from [SciPy](https://github.com/scipy/scipy/blob/v1.6.3/scipy/optimize/_numdiff.py). Note that '2-point' requires fewer function evaluations but is less precise and therefore more prone to cause optimization failures.\n\n\n```python\n%timeit -n 100 res = simplenlopt.minimize(rosenbrock, x0, jac = '3-point')\n```\n\n 7.39 ms ± 245 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\n\n\nThis example shows that finite differences are not competitive with analytical gradients. For simple cases such as low-dimensional curve fitting they are often still useful. If possible, automatic differentiation represents a powerful alternative.\n\n## Autodiff\n\nIn recent years, automatic differentiation (autodiff) has become one of the building blocks of machine learning. Many frameworks such as PyTorch and TensorFlow are actually centered around autodiff. 
Here, we will use the slightly older [autograd](https://github.com/hips/autograd) package to automatigally compute the gradient for us and feed it to simplenlopt. \n\n\n```python\nimport autograd.numpy as anp\nfrom autograd import value_and_grad\n\nrosen_and_grad = value_and_grad(rosenbrock)\n%timeit -n 10 res = simplenlopt.minimize(rosen_and_grad, x0, jac = True)\n```\n\n 17.3 ms \u00b1 392 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 10 loops each)\n\n\nWhile autograd results in the worst performance for this example, autodiff shines when it comes to high dimensional problems where the inaccuracies of finite differences are much more severe. To circumvent autograd's performance issues, another candidate could be for example autograd's succesor [jax](https://github.com/google/jax) which additionally provides just-in-time compilation.\n", "meta": {"hexsha": "dd79595a4fe3feaa61ea17c56f4a317bc26628a6", "size": 8305, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "docs/source/InDepth_Gradients.ipynb", "max_stars_repo_name": "dschmitz89/simplenlopt", "max_stars_repo_head_hexsha": "238000f6f799a2275b1b155046b3740ab5f23014", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-06-28T23:01:39.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-28T23:01:39.000Z", "max_issues_repo_path": "docs/source/InDepth_Gradients.ipynb", "max_issues_repo_name": "dschmitz89/simplenlopt", "max_issues_repo_head_hexsha": "238000f6f799a2275b1b155046b3740ab5f23014", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-07-13T19:00:23.000Z", "max_issues_repo_issues_event_max_datetime": "2021-07-13T19:00:23.000Z", "max_forks_repo_path": "docs/source/InDepth_Gradients.ipynb", "max_forks_repo_name": "dschmitz89/simplenlopt", "max_forks_repo_head_hexsha": "238000f6f799a2275b1b155046b3740ab5f23014", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.568627451, "max_line_length": 505, "alphanum_fraction": 0.5980734497, "converted": true, "num_tokens": 1438, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9099070109242131, "lm_q2_score": 0.9173026618464796, "lm_q1q2_score": 0.8346601231535544}} {"text": "# Miscellaneous Modules\n## SymPy\n**SymPy** is a symbolic mathematics module for Python. \n\n\n```\nfrom __future__ import print_function, division\nfrom sympy import *\nx, y, z, t = symbols('x y z t')\nk, m, n = symbols('k m n', integer=True)\nf, g, h = symbols('f g h', cls=Function)\ninit_printing()\n\na = Integral(cos(x), x)\nEq(a, a.doit())\n```\n\n\n```\nb = Integral(sqrt(1/x), x)\nEq(b, b.doit())\n```\n\n\n```\ns = \"x**2 + 3*x - 1/2\"\nc = sympify(s)\nd = diff(c)\npretty_print(d)\nprint(d.subs(x, 2))\n```\n\n 2\u22c5x + 3\n 7\n\n\n\n```\nprint(integrate(exp(-x), (x, 0, oo)))\n```\n\n 1\n\n\n\n```\nintegrate(exp(-x**2 - y**2), (x, -oo, oo), (y, -oo, oo))\n```\n\n## Vectorize Calculations\nNumPy functions are optimized to work with arrays and are much faster than performing the computations in loops. 
This can be demonstrated by calculating the $\\sin$ of an array of numbers using any **math** function inside a loop and using the corresponding NumPy function.\n\n\n```\nimport math\nimport numpy as np\n\ndef f1(x):\n y = np.zeros(x.shape, dtype=float)\n for i in range(len(x)):\n y[i] = math.sin(x[i])\n return x\n\ndef f2(x):\n return np.sin(x)\n\nx = np.linspace(0, 10, 1001)\n%timeit f1(x)\n%timeit f2(x)\n```\n\n 1000 loops, best of 3: 427 \u00b5s per loop\n 10000 loops, best of 3: 24.8 \u00b5s per loop\n\n\n## Numba\n\nNumba provides a Just In Time compiler to generate optimized machine code to speed up computations in Python. It uses LLVM compiler infrastructure.\n\n\n```\nimport numpy as np\nfrom numba import f8, jit\n\ndef sum1d(x):\n s = 0.0\n for i in range(x.shape[0]):\n s += x[i]\n return s\n\n@jit(f8(f8[:]))\ndef fast_sum1d(x):\n s = 0.0\n for i in range(x.shape[0]):\n s += x[i]\n return s\n\nx = np.linspace(0, 100, 1001)\n\n%timeit sum1d(x)\n%timeit fast_sum1d(x)\n```\n\n 1000 loops, best of 3: 196 \u00b5s per loop\n 1000000 loops, best of 3: 1.22 \u00b5s per loop\n\n\nLet us compute the approximate value of $\\pi$ using the following sum:\n\n$$\\pi \\approx \\sqrt{6 \\sum_{k=1}^{n+1} \\frac{1}{k^2}}$$\n\n\n```\ndef approx_pi(n=10000):\n s = 0\n for k in range(1, n+1):\n s += 1.0 / k**2\n return (6 * s) ** 0.5\n\n%timeit approx_pi(1000)\n%timeit approx_pi(10000)\n\nprint(approx_pi(1000))\nprint(approx_pi(10000))\n```\n\n 10000 loops, best of 3: 174 \u00b5s per loop\n 1000 loops, best of 3: 1.8 ms per loop\n 3.14063805621\n 3.14149716395\n\n\n\n```\nfrom numba.decorators import autojit\n\n@autojit\ndef fast_approx_pi(n):\n s = 0\n for k in range(1, n+1):\n s += 1.0 / k**2\n return (6 * s) ** 0.5\n\nn = 10000\n%timeit approx_pi(n)\n%timeit fast_approx_pi(n)\n```\n\n 100 loops, best of 3: 1.83 ms per loop\n 10000 loops, best of 3: 44.8 \u00b5s per loop\n\n\n## Other Modules\n\n1. **Data frames, data munging:** pandas\n2. **Machine learning:** scikit-learn\n3. **Statistical modelling:** statsmodels\n4. 
**Image processing:** scikits-image\n\n# Image Processing using scikit-image\nscikit-image is an image proessing module for SciPy.While SciPy has some image processing functions, scikit-image greatly enhances image processing capabilities of Python.\n\n\n```\nimport matplotlib.pyplot as plt\nfrom scipy import ndimage\nfrom skimage import data, io, filter\n%matplotlib inline\n\n\n```\n\n\n```\nimage = data.coins() # or any NumPy array!\nplt.subplot(211)\nio.imshow(image)\nplt.subplot(212)\nedges = filter.sobel(image)\nio.imshow(edges)\nplt.show()\n```\n\n\n```\n\n```\n", "meta": {"hexsha": "e7089de1707dc9fec29389521d0593da894bf7d7", "size": 83240, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "gnunify2015/gnunify06_Miscellaneous.ipynb", "max_stars_repo_name": "satish-annigeri/Notebooks", "max_stars_repo_head_hexsha": "92a7dc1d4cf4aebf73bba159d735a2e912fc88bb", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "gnunify2015/gnunify06_Miscellaneous.ipynb", "max_issues_repo_name": "satish-annigeri/Notebooks", "max_issues_repo_head_hexsha": "92a7dc1d4cf4aebf73bba159d735a2e912fc88bb", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "gnunify2015/gnunify06_Miscellaneous.ipynb", "max_forks_repo_name": "satish-annigeri/Notebooks", "max_forks_repo_head_hexsha": "92a7dc1d4cf4aebf73bba159d735a2e912fc88bb", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 203.0243902439, "max_line_length": 69359, "alphanum_fraction": 0.8955309947, "converted": true, "num_tokens": 1103, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9099070060380482, "lm_q2_score": 0.9173026646724284, "lm_q1q2_score": 0.834660121242813}} {"text": "# Initial Conditions\n## Data\n\n\n```python\nm = 42.0; k = 32000.0; z = 0.02\nx0 = 200/1000; v0 = -800/1000\nP = 700; a = 1.3\n```\n\n## Particular Integral\nThe loading and the assumed particular integral, as well its derivatives\n$$p(t) = P_0 \\, \\exp(-a t),$$\n$$\\xi(t)=C\\exp(-a t),\\quad\\dot\\xi=-aC\\exp(-a t),\\quad\\ddot\\xi=a^2C\\exp(-a t).$$\nSubstituting $\\xi$ into the EoM we can simplify and solve for $C$:\n\\begin{align*}\n C(k-ac+^2m)\\exp(-a t) = P \\exp(-at)\\ \\rightarrow\\ C=\\frac{P}{k-ac+a^2m}.\n\\end{align*}\n\n\n```python\nc = 2*z*np.sqrt(k*m)\nC = P/(k-a*c+a*a*m)\n```\n\n\n```python\ndisplay(Math('C=%.2f\\\\,\\\\text{mm}'%(C*1000)))\n```\n\n\n$\\displaystyle C=21.87\\,\\text{mm}$\n\n\n## General Integral\n\n$$x = \\exp(-\\zeta\\omega_nt) (A\\cos(\\omega_Dt)+B\\sin(\\omega_Dt)) + C\\exp(-a t),$$$$\n x(0) = A+C, \\quad \\dot x(0)= \\omega_DB-\\zeta\\omega_nA-aC.$$\n\n\n```python\nwn2 = k/m\nwn = np.sqrt(wn2)\nwd = wn*np.sqrt(1-z*z)\n# A+C=x0; wd B - z wn A -a C = v0\nA = x0-C; B = (v0 + a*C + z*wn*A)/wd\n```\n\n\n```python\nexpw = f'\\\\exp(-{(z*wn):.3f}\\\\,t)' ; expa = f'\\\\exp(-{a:.3f}\\\\,t)'\ncosd = f'\\\\cos({wd:.3f}\\\\,t)' ; sind = f'\\\\sin({wd:.3f}\\\\,t)'\nlp = '\\\\big(' ; rp = '\\\\big)'\ndef f(v): return f'{v:+.4f}'\ndisplay(Latex(r'\\begin{align*}x(t) &=' + \n expw+lp+f(A)+cosd+f(B)+sind+rp+f(A)+expa+r'\\\\' +\n r'\\dot x(t) &=' + \n expw+lp+f(B*wd-z*wn*A)+cosd+f(-A*wd-z*wn*B)+sind+rp + \n f(-a*A)+expa + r'\\end{align*}'))\n```\n\n\n\\begin{align*}x(t) &=\\exp(-0.552\\,t)\\big(+0.1781\\cos(27.597\\,t)-0.0244\\sin(27.597\\,t)\\big)+0.1781\\exp(-1.300\\,t)\\\\\\dot x(t) &=\\exp(-0.552\\,t)\\big(-0.7716\\cos(27.597\\,t)-4.9025\\sin(27.597\\,t)\\big)-0.2316\\exp(-1.300\\,t)\\end{align*}\n\n\n\n```python\nt = np.linspace(0,2,1001)\ndef x(t): return np.exp(-z*wn*t)*(A*np.cos(wd*t)+B*np.sin(wd*t))+C*np.exp(-a*t)\nplt.plot(t,100*x(t), label='$x(t)$')\nplt.plot(t,100*P*np.exp(-a*t)/k, label='$\\Delta_\\\\mathrm{stat}$')\nplt.grid()\nplt.legend()\nplt.xlabel('$t/s$')\nplt.ylabel('$x/cm$')\nNone\n```\n\n\n \n\n \n\n\n# Rayleigh Quotient\n\nWe need some symbols (`sp` is an alias for the `sympy` module, aka library, that provides symbolic math to Python)\n\n\n```python\nL, k, mu0, eta, x0, x1, omega = sp.symbols('L k mu_0 eta x_0 x_1 omega')\n```\n\nWe write an expression for $\\mu(\\eta)$ and we integrate it to compute the total mass, $m=\\int_0^L\\mu(x)\\,dx=\\mu_0 L \\int_0^1\\mu(\\eta)\\,d\\eta$\n\n\n```python\nmu = 2-eta\nmass = sp.integrate(mu,(eta, 0, 1))\n```\n\n\n```python\ndisplay(Math('m='+sp.latex(mass)+r'\\,\\mu_0L'))\n```\n\n\n$\\displaystyle m=\\frac{3}{2}\\,\\mu_0L$\n\n\nWe write the expression for $V_\\mathrm{max}$ (that's easy) and the one for $T_\\mathrm{max}$, here we start computing the integral\n$\\int_0^L\\mu(x)v^2(x)\\,dx=L\\int_0^1\\mu(\\eta)(x_0+(x_1-x_0)\\eta)^2\\,d\\eta$ to finally adjust our expression collecting terms and multiplying by $\\omega^2$\n\n\n```python\nV = (2*x0**2+3*x1**2)*k/2\ntemp = sp.integrate(mu*(x0+eta*(x1-x0))**2,(eta,0,1)).expand()*L\nT = omega**2*sp.collect(temp,mu0)/2\n```\n\n\n```python\ndisplay(Math('T_\\mathrm{max}=V_\\mathrm{max}\\Rightarrow'+sp.latex(sp.Eq(T,V))))\n```\n\n\n$\\displaystyle T_\\mathrm{max}=V_\\mathrm{max}\\Rightarrow\\frac{L \\omega^{2} \\left(\\frac{7 x_{0}^{2}}{12} + \\frac{x_{0} x_{1}}{2} + \\frac{5 x_{1}^{2}}{12}\\right)}{2} = \\frac{k \\left(2 x_{0}^{2} + 3 
x_{1}^{2}\\right)}{2}$\n\n\nSolving the previous equation we have $\\omega^2=\\omega^2(x_0, x_1)$, \n\n\n```python\nw2 = sp.solve(sp.Eq(T,V),omega)[1]**2\n```\n\n\n```python\ndisplay(sp.Eq(omega**2,w2))\n```\n\nOur next step is to remove all dependencies except $x_1$ and finally plot our function of the single variable $x_1$\n\n\n```python\nvals = {k:1, mu0:1, L:1, x0:1}\ndisplay(Math(r'\\left.\\frac{\\omega^2}{\\omega_0^2}\\right|_{x_0=1} = '+sp.latex(w2.subs(vals))))\n\nwith rc_context(rc={'axes.grid':True}):\n plot = sp.plot(w2.subs(vals), (x1, 0,1), show=0)\n plot.ylabel = r'$\\omega^2/\\omega_0^2$'\n plot.show()\n```\n\n\n$\\displaystyle \\left.\\frac{\\omega^2}{\\omega_0^2}\\right|_{x_0=1} = \\frac{12 \\left(3 x_{1}^{2} + 2\\right)}{5 x_{1}^{2} + 6 x_{1} + 7}$\n\n\n\n \n\n \n\n\nIt's apparent that the minimum is near the point $x_1=0.4$ but also $x_1=0.5$ should give us a very good approximation...\n\n\n```python\nw_1_2 = w2.subs(vals).subs(x1,sp.Rational(1,2))\nw_4_10 = w2.subs(vals).subs(x1,sp.Rational(4,10))\ndisplay(Math(r'\\omega^2='+sp.latex(w_1_2)+'\\\\frac{k}{\\\\mu_0L}'+\n '=%.6f\\\\frac{k}{\\\\mu_0L}'%w_1_2.evalf()+\n ',\\\\quad \\\\text{for }x_0=1,\\ x_1=\\\\frac12'))\ndisplay(Math(r'\\omega^2='+sp.latex(w_4_10)+'\\\\frac{k}{\\\\mu_0L}'+\n '=%.6f\\\\frac{k}{\\\\mu_0L}'%w_4_10.evalf()+\n ',\\\\quad \\\\text{for }x_0=1,\\ x_1=0.4'))\n```\n\n\n$\\displaystyle \\omega^2=\\frac{44}{15}\\frac{k}{\\mu_0L}=2.933333\\frac{k}{\\mu_0L},\\quad \\text{for }x_0=1,\\ x_1=\\frac12$\n\n\n\n$\\displaystyle \\omega^2=\\frac{248}{85}\\frac{k}{\\mu_0L}=2.917647\\frac{k}{\\mu_0L},\\quad \\text{for }x_0=1,\\ x_1=0.4$\n\n\nIt's not difficult to find the location of the minimum (and hence the first eigenvector) and the corresponding eigenvalue. First we derive $\\omega^2(x_1)$ with respect to $x_1$ and find the roots of the numerator\n\n\n```python\ndiffw = w2.subs(vals).diff(x1)\nnum, _ = sp.fraction(diffw.simplify())\ndisplay(sp.Eq(num, 0))\nx1min = sp.solve(num)[0]\ndisplay(Math(r'\\hat x_1 = '+ sp.latex(x1min)+'='+sp.latex(x1min.evalf(6))))\n```\n\nso we have an eigenvector, $\\{1, \\hat x_1\\}$ and substituting $\\hat x_1$ in $\\omega(1,x_1)$ we have the corresponding eigenvalue\n\n\n```python\ndisplay(Math(r'\\omega^2(1,\\ %.6f)=%.6f\\,\\omega_0^2'%\n (x1min,w2.subs(vals).subs(x1,x1min))))\n```\n\n\n$\\displaystyle \\omega^2(1,\\ 0.408753)=2.917486\\,\\omega_0^2$\n\n\n# Structural Testing\n\nWe can estimate the stiffness and the damping ratio.\n\nThe stiffness is easy\n\n\n```python\nP = 48*1000; x0 = 12/1000; x4 = 9.81/1000\nk = P/x0\n```\n\n\n```python\ndisplay(Math(r'k = %.1f\\,\\text{kN/m}'%(k/1000)))\n```\n\n\n$\\displaystyle k = 4000.0\\,\\text{kN/m}$\n\n\nand the damping ratio is easy too, mind that we don't need the period of vibration to apply our formula\n$$\\zeta_{n+1} = \\frac{\\log\\frac{x_0}{x_4}}{4\\cdot2\\pi}\\,\\sqrt{1-\\zeta_n^2}.$$\n\nHere we start with $\\zeta_0=0$ and stop at $\\zeta_2$ because we have a meaningful value.\n\n\n```python\nd = np.log(x0/x4); m = 4\nz0 = 0\nz1 = d/(2*m*np.pi)*np.sqrt(1-z0**2)\nz2 = d/(2*m*np.pi)*np.sqrt(1-z1**2)\n```\n\n\n```python\ndisplay(Math(r'\\zeta_0=%d\\,\\%%,\\ \\zeta_1=%.6f\\,\\%%,\\ \\zeta_2=%.6f\\,\\%%.'%(z0*100,z1*100,z2*100)))\n```\n\n\n$\\displaystyle \\zeta_0=0\\,\\%,\\ \\zeta_1=0.801760\\,\\%,\\ \\zeta_2=0.801735\\,\\%.$\n\n\nJust for fun, we can plot the decrement after $m$ cycles, $x_m/x_0$, as a function of $\\zeta$. 
On the plot also a horizontal line for our $x_4/x_0=0.8175$, the intersection with the red curve gives the estimated value of the damping ratio.\n\n\n```python\nz = np.linspace(0,0.03,31)\nfor n in range(1,6):\n plt.plot(z*100, np.exp(-z*n*2*np.pi/np.sqrt(1-z*z)), label='m=%d'%n)\nplt.plot(z*100, z**0*x4/x0, '--', color='xkcd:green', label='$x_4/x_0$ in test')\nplt.grid()\nplt.legend()\nplt.xlabel('$\\zeta\\,\\%$')\nplt.title(r'$x_m/x_0$ for varying $\\zeta$.')\nNone\n```\n\n\n \n\n \n\n\n# Vibration Isolation & Numerical Integration\n\n## Data, TR\n\n\n```python\nm = 18E3; P0 = 1E3; t0 = 6; w0 = 2*np.pi*10; Pmax = 300\nz00 = 0.00; z01 = 0.01; z12 = 0.12\nTR = Pmax/P0\n```\n\n## Plot the phase,\nthe angular velocity, the angular acceleration and the unbalanced load.\n\nIn the graph of the angular velocity I have put in evidence the natural frequency of vibration of an undamped suspension system (dashed line), a horizontal band of angular velocities around such frequency (in red) that are expected to excite the system in near-resonance and a vertical band (in blue) that highlights the time interval in which this near-resonance excitation is expected to take place.\n\nThe same vertical band is drawn on the plot of the unbalanced load and, especially, will be drawn on the plots of the transmitted force, to highlight the dynamic amplification of the response corresponding to a load that is just a fraction of the steady state load.\n\n\n```python\nt = np.linspace(0,8,1001)\na = t/t0\nphi0 = w0*t0*np.where(t\ndiv.prompt{display:none}\nh1 {\n color: #ffffff!important;\n background-color: #606090;\n text-align:center;\n margin-top: 2cm;\n margin-bottom: 1cm;\n padding: 15px;\n font-size: x-large!important ;\n}\n\n```\n\n\n\n\n\n", "meta": {"hexsha": "db9cf23393f76ed88013b424a7c32431626fe8e6", "size": 424934, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "dati_2019/hw/Homework1.ipynb", "max_stars_repo_name": "shishitao/boffi_dynamics", "max_stars_repo_head_hexsha": "365f16d047fb2dbfc21a2874790f8bef563e0947", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "dati_2019/hw/Homework1.ipynb", "max_issues_repo_name": "shishitao/boffi_dynamics", "max_issues_repo_head_hexsha": "365f16d047fb2dbfc21a2874790f8bef563e0947", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "dati_2019/hw/Homework1.ipynb", "max_forks_repo_name": "shishitao/boffi_dynamics", "max_forks_repo_head_hexsha": "365f16d047fb2dbfc21a2874790f8bef563e0947", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2019-06-23T12:32:39.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-15T18:33:55.000Z", "avg_line_length": 41.5015138197, "max_line_length": 2400, "alphanum_fraction": 0.4938390432, "converted": true, "num_tokens": 5189, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9099070011518829, "lm_q2_score": 0.9173026646724284, "lm_q1q2_score": 0.8346601167607206}} {"text": "```python\nfrom ipywidgets import interactive, interact\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport ipywidgets as widgets\nimport sympy as sym\nimport seaborn as sns\nimport plotly.graph_objects as go\nfrom plotly.offline import init_notebook_mode, iplot\nfrom numba import jit\n\ninit_notebook_mode(connected=True)\njit(nopython=True, parallel=True)\nsns.set()\n\n```\n\n\n\n\n\n\n\n```python\n\n\nclass plot():\n \n def __init__(self, preWidgetN):\n \n self.N = preWidgetN\n \n x,y,n ,k = sym.symbols('x, y,n,k', real=True)\n X=np.linspace(0, 10, 100)\n \n f = sym.Sum((-1)**k*(x**(2*k+1))/(sym.factorial(2*k+1)),(k,0, n))\n #f = sym.Sum((-1)**k*(x**(2*k))/(sym.factorial(2*k)),(k,0, n))\n #print(sym.latex(f))\n f = f.subs(n, self.N.value)\n f = sym.lambdify(x, f)\n self.trace1 = go.Scatter(x=X, y=np.sin(X),\n mode='lines+markers',\n name='sin'\n )\n self.trace2 = go.Scatter(x=X, y=f(X),\n mode='lines',\n name=r'$\\sum_{k=0}^{%s} \\frac{\\left(-1\\right)^{k} x^{2 k + 1}}{\\left(2 k + 1\\right)!}$' %(self.N.value)\n )\n \n layout = go.Layout(template='plotly_dark')\n\n self.fig = go.FigureWidget(data=[self.trace1, self.trace2], \n layout = layout,\n layout_yaxis_range=[-3 , 3]\n )\n\n\n def sineSeries(self, change):\n\n x,y,n ,k = sym.symbols('x, y,n,k', real=True)\n X=np.linspace(0, 10, 100)\n f = sym.Sum((-1)**k*(x**(2*k+1))/(sym.factorial(2*k+1)),(k,0, n))\n #f = sym.Sum((-1)**k*(x**(2*k))/(sym.factorial(2*k)),(k,0, n))\n f = f.subs(n, self.N.value)\n f = sym.lambdify(x, f)\n\n\n\n\n with self.fig.batch_update():\n self.fig.data[1].x = X\n self.fig.data[1].y = f(X)\n self.fig.data[1].name = r'$\\sum_{k=0}^{%s} \\frac{\\left(-1\\right)^{k} x^{2 k + 1}}{\\left(2 k + 1\\right)!}$' %(self.N.value)\n\n return \n \n def show(self):\n self.N.observe(self.sineSeries, names='value')\n display(self.N, self.fig)\n return\n```\n\n\n```python\nN = widgets.IntSlider(min=0, max=20, step=1, value=0, description='partial sum order')\n\np = plot(N)\n\np.show()\n```\n\n\n IntSlider(value=0, description='partial sum order', max=20)\n\n\n\n FigureWidget({\n 'data': [{'mode': 'lines+markers',\n 'name': 'sin',\n 'type': 'scat\u2026\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "ecc5719530c0343604e9c0013abe6e3ca037e4ed", "size": 5574, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": ".ipynb_checkpoints/interactive_sinus-checkpoint.ipynb", "max_stars_repo_name": "zolabar/Interactive-Calculus", "max_stars_repo_head_hexsha": "5b4b01124eba7a981e4e9df7afcb6ab33cd7341f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-03-11T01:26:50.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-11T01:26:50.000Z", "max_issues_repo_path": ".ipynb_checkpoints/interactive_sinus-checkpoint.ipynb", "max_issues_repo_name": "zolabar/Interactive-Calculus", "max_issues_repo_head_hexsha": "5b4b01124eba7a981e4e9df7afcb6ab33cd7341f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": ".ipynb_checkpoints/interactive_sinus-checkpoint.ipynb", "max_forks_repo_name": "zolabar/Interactive-Calculus", "max_forks_repo_head_hexsha": "5b4b01124eba7a981e4e9df7afcb6ab33cd7341f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, 
"max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 29.6489361702, "max_line_length": 149, "alphanum_fraction": 0.4269824184, "converted": true, "num_tokens": 747, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9458012732322216, "lm_q2_score": 0.8824278587245936, "lm_q1q2_score": 0.8346013923173037}} {"text": "# Example #1: Neural Network for $y = \\sin(x)$\n\nSame example as yesterday, a sine-curve with 10 points as training values:\n\n\n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nx = np.arange(0,6.6, 0.6)\ny = np.sin(x)\n\nxplot = np.arange(0, 6.6, 0.01)\nyplot = np.sin(xplot)\n\nplt.scatter(x,y, color=\"b\", label=\"Training\")\nplt.plot(xplot, yplot, color=\"g\", label=\"sin(x)\")\n\nplt.legend()\nplt.show()\n```\n\n## Defining the architecture of our neural network:\n\nFully connected with 1 input node, 1 hidden layer, 1 output node.\n\n\n\n\n\n\n\nLayer connections:\n\\begin{equation}\ny = b+\\sum_i x_i w_i\n\\end{equation}\n\n\n**Question:** \"How many weights are there in the above example?\"\n\n### Defining the Activation function (sigmoid):\n\\begin{equation}\n\\sigma\\left(x\\right) = \\frac{1}{1 + \\exp\\left(-x\\right)}\n\\end{equation}\nPopular because the derivative of the sigmoid function is simple:\n\\begin{equation}\n\\frac{\\mathrm{d}}{\\mathrm{d}x}\\sigma\\left(x\\right) = \\sigma\\left(x\\right)\\left(1 - \\sigma\\left(x\\right)\\right)\n\\end{equation}\n\n\n```\ndef activation(val):\n\n sigmoid = 1.0 / (1.0 + np.exp(-val))\n return sigmoid\n```\n\n### Defining the architecture (i.e. the layers):\n\n* `input_value` - Input value\n* `w_ih` - Weights that connect input layer with hidden layer\n* `w_io` - Weights that connect hidden layer with output layer\n\n\n\n\n```\ndef model(input_value, w_ih, w_ho):\n\n hidden_layer = activation(input_value * w_ih)\n output_value = np.sum(hidden_layer*w_ho)\n\n return output_value\n```\n\nLet's start by testing the neural network with random weights:\n\n\n```\nnp.random.seed(1000)\nrandom_weights_ih = np.random.random(10)\nrandom_weights_ho = np.random.random(10)\n\nprint(random_weights_ih)\nprint(random_weights_ho)\nprint()\n\nval = 2.0\nsinx_predicted = model(val, random_weights_ih, random_weights_ho)\n\nprint(\"Predicted:\", sinx_predicted)\nprint(\"True: \", np.sin(2.0))\n```\n\nSetting our Model parameters:\n\n\n```\n# The number of nodes in the hidden layer\nHIDDEN_LAYER_SIZE = 40\n\n# L2-norm regularization\nL2REG = 0.01\n```\n\n## Optimizing the weights:\n\nWe want to find the best set of weights $\\mathbf{w}$ that minimizes some loss function. 
For example we can minimize the squared error (like we did in least squares fitting):\n\n\\begin{equation}\nL\\left(\\mathbf{w}\\right) = \\sum_i \\left(y_i^\\mathrm{true} - y_i^\\mathrm{predicted}(\\mathbf{w}) \\right)^{2}\n\\end{equation}\nOr with L2-regularization:\n\\begin{equation}\nL\\left(\\mathbf{w}\\right) = \\sum_i \\left(y_i^\\mathrm{true} - y_i^\\mathrm{predicted}(\\mathbf{w}) \\right)^{2} + \\lambda\\sum_j w_j^{2}\n\\end{equation}\nJust like in the numerics lectures and exercises, we can use a function from SciPy to do this minimization: `scipy.optimize.minimize()`.\n\n\n```\ndef loss_function(parameters):\n\n w_ih = parameters[:HIDDEN_LAYER_SIZE]\n w_ho = parameters[HIDDEN_LAYER_SIZE:]\n\n squared_error = 0.0\n\n for i in range(len(x)):\n\n # Predict y for x[i]\n y_predicted = model(x[i], w_ih, w_ho)\n \n # Without # Regularization\n squared_error = squared_error + (y[i] - y_predicted)**2 \n \n # With regularization\n # rmse += (z - y[i])**2 + np.linalg.norm(parameters) * L2REG\n \n return squared_error\n```\n\n## Running the minimization with `scipy.optimize.minimize()`:\n\nDocumentation: https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html\n\nSince we haven't implemented the gradient of the neural network, we can't use optimizers that require the gradient. One algorithm we can use is the Nelder-Mead optimizer.\n\n\n```\nfrom scipy.optimize import minimize\n\n# Define random initial weights\nnp.random.seed(666)\np = np.random.random(size=2*HIDDEN_LAYER_SIZE)\n\n# Minimize the loss function with parameters p\nresult = minimize(loss_function, p, method=\"Nelder-Mead\",\n options={\"maxiter\": 100000, \"disp\": True})\n\nwfinal_in = result.x[:HIDDEN_LAYER_SIZE]\nwfinal_hl = result.x[HIDDEN_LAYER_SIZE:]\n\nprint(wfinal_in)\nprint(wfinal_hl)\n```\n\n\n```\n\n# Print sin(2.5) and model(2.5)\nval = 2.5\nsinx_predicted = model(val, wfinal_in, wfinal_hl)\n\nprint(\"Predicted:\", sinx_predicted)\nprint(\"True: \", np.sin(val))\n```\n\nLets make a plot with pyplot!\n\n\n```\nxplot = np.arange(0,6.6, 0.01)\nyplot = np.sin(xplot)\n\nypred = np.array([model(val, wfinal_in, wfinal_hl) for val in xplot])\n\nimport matplotlib.pyplot as plt\n\nplt.plot(xplot,yplot, color=\"g\", label=\"sin(x)\")\nplt.scatter(x, y, color=\"b\", label=\"Training\")\nplt.plot(xplot, ypred, color=\"r\", label=\"Predicted\")\n\nplt.ylim([-2,2])\nplt.show()\n\n\n```\n\n## What to do about \"crazy\" behaviour?\n* Regularization\n* Adjust hyperparameters (hidden layer size)\n", "meta": {"hexsha": "b276771ba95a51c67f9e41b7538da84b8eddf1b0", "size": 10841, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "machine_learning_example_sinx.ipynb", "max_stars_repo_name": "andersx/python-intro", "max_stars_repo_head_hexsha": "8409c89da7dd9cea21e3702a0f0f47aae816eb58", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 11, "max_stars_repo_stars_event_min_datetime": "2020-05-03T11:59:01.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-15T12:33:39.000Z", "max_issues_repo_path": "machine_learning_example_sinx.ipynb", "max_issues_repo_name": "andersx/python-intro", "max_issues_repo_head_hexsha": "8409c89da7dd9cea21e3702a0f0f47aae816eb58", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "machine_learning_example_sinx.ipynb", "max_forks_repo_name": "andersx/python-intro", "max_forks_repo_head_hexsha": 
"8409c89da7dd9cea21e3702a0f0f47aae816eb58", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 7, "max_forks_repo_forks_event_min_datetime": "2020-05-10T21:15:15.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-05T15:13:54.000Z", "avg_line_length": 27.726342711, "max_line_length": 187, "alphanum_fraction": 0.4632413984, "converted": true, "num_tokens": 1328, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9458012655937034, "lm_q2_score": 0.8824278556326344, "lm_q1q2_score": 0.8346013826524834}} {"text": "# Lecture 23\n\n# Beta distribution, Bayes' Billiards\n\n## Beta Distribution\n\nThe Beta distribution is characterized as $Beta(\\alpha, \\beta)$ where $\\alpha \\gt 0 \\text{, } \\beta \\gt 0$.\n\nThe Beta distribution PDF is given by:\n\n\\begin{align}\n f(x) &= c \\, x^{\\alpha-1} \\, (1 - x)^{\\beta-1} \\quad \\text{where } 0 \\lt x \\lt 1 \n\\end{align}\n\nWe will put aside the normalization constant $c$ for now (wait until lecture 25!).\n\n$Beta(\\alpha ,\\beta)$ distribution\n\n* is a flexible family of continuous distributions over $(0,1)$ (see graph below for some examples)\n* often used as a _prior_ for a parameter in range $(0,1)$\n* _conjugate prior_ for $Binomial$ distribution\n\n\n```python\nimport matplotlib\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.stats import beta\n%matplotlib inline \n\nplt.xkcd()\n\nx = np.linspace(0, 1.0, 500)\n\nalphas = [0.5, 5, 1, 1, 2, 3]\nbetas = [0.5, 3, 2, 1, 1, 5]\nparams = map(list, zip(alphas, betas))\n\n_, ax = plt.subplots(figsize=(12,8))\n\nfor a,b in params:\n y = beta.pdf(x, a, b)\n ax.plot(x, y, label=r'$\\alpha$={}, $\\beta$={}'.format(a,b))\nax.grid(True, which='both')\nax.axhline(y=0, color='k')\nax.axvline(x=0, color='k')\n\nlegend = ax.legend()\nfor label in legend.get_texts():\n label.set_fontsize('large')\nfor label in legend.get_lines():\n label.set_linewidth(1.5)\n\nplt.title(r'Beta Distribution $f(x) = \\frac{x^{\\alpha-1} \\, (1-x)^{\\beta-1}}{B(\\alpha,\\beta)}$')\nplt.xlabel(r'x')\nplt.ylabel(r'f(x)')\nplt.xlim((0,1.0))\nplt.ylim((0,5.0))\n \nplt.show()\n```\n\n### Conjugate prior to the Binomial\n\nRecall Laplace's Rule of Sucession dealing with the problem of the sun rising: there, we probability $p$ that the sun will rise on any given day $X_k$, given a consecutive string of days $X_1,X_2, \\dots, X_{k-1}$, to be i.i.d. $Bern(p)$. \n\nWe made an assumption that $p \\sim Unif(0,1)$.\n\n$Beta(1,1)$ is the same as $Unif(0,1)$, and so we will show how to generalize using the $Beta$ distribution.\n\nGiven $X|p \\sim Bin(n,p)$. We get to observe $X$, but we do not know the true value of $p$. \n\nIn such a case, we can assume that the _prior_ $p \\sim Beta(\\alpha, \\beta)$. 
After observing $n$ further trials, where perhaps $k$ are successes and $n-k$ are failures, we can use this information to update our beliefs on the nature of $p$ using Bayes Theorem.\n\nSo we what we want is the _posterior_ $p|X$, since we will get to observe more values of $X$ and want to update our understanding of $p$.\n\n\\begin{align}\n f(p|X=k) &= \\frac{P(X=k|p) \\, f(p)}{P(X=k)} \\\\\n &= \\frac{\\binom{n}{k} \\, p^k \\, (1-p)^{n-k} \\, c \\, p^{\\alpha-1}(1-p)^{\\beta-1}}{P(X=k)} \\\\\n &\\propto p^{\\alpha + k - 1} (1-p)^{\\beta + n - k - 1} \\quad \\text{since }\\binom{n}{k} \\text{, }c \\text{ and }P(X=k) \\text{ do not depend on }p \\\\\n \\\\\n \\Rightarrow p|X &\\sim Beta(\\alpha + X, \\beta + n - X) \n\\end{align}\n\n_Conjugate_ refers to the fact that we are looking at an entire _family_ of $Beta$ distributions as the _prior_. We started off with $Beta(\\alpha, \\beta)$, and after an additional $n$ more observations of $X$ we end up with $Beta(\\alpha + X, \\beta + n - X)$.\n\n### Bayes' Billiards\n\nThe $Beta(\\alpha, \\beta)$ distribution has PDF:\n\n\\begin{align}\n f(x) &= c \\, x^{\\alpha-1} \\, (1 - x)^{\\beta-1}\n\\end{align}\n\nLet's try to find the normalizing constant $c$ for the case where $\\alpha \\gt 0 \\text{, } \\beta \\gt 0$, and $\\alpha,\\beta \\in \\mathbb{Z}$\n\nIn order to do that, we need to find out \n\n\\begin{align}\n \\int_0^1 x^k \\, (1 - x)^{n-k} \\, dx \\\\\n \\\\\n \\rightarrow \\int_0^1 \\binom{n}{k} x^k \\, (1 - x)^{n-k} \\, dx\n\\end{align}\n\n#### Story 1\nWe have $n+1$ white billiard balls, and we paint one of them pink. Now we throw them down on the number line from 0 to 1.\n\n#### Story 2\nWe throw our $n+1$ white billiard balls down on the number line from 0 to 1, and _then_ randomly select one of them to paint in pink. \n\n\n\n_Note that the image above could have resulted from either of the stories, so both stories are actually equivalent._\n\n#### Doing calculus without using calculus\n\nAt this point, we know exactly how to the above integral _without using any calculus._\n\nLet $X = \\text{# balls to left of pink}$, so $X$ ranges from $0$ to $n$. 
If we condition on $p$ (pink billiard ball), we can consider this to be a binomial distribution problem, where \"success\" means being to the left of the pink ball.\n\n\\begin{align}\n P(X = k) &= \\int_0^1 P(X=k|p) \\, f(p) \\, dp &\\quad \\text{conditioning on } p \\\\\n &= \\int_0^1 \\binom{n}{k} p^k \\, (1-p)^{n-k} \\, dp &\\quad \\text{since } f(p) \\sim Unif(0,1) \\\\\n \\\\\n & &\\quad \\text{but from Story 2, } k \\text{ could be any value in } \\{0,n\\} \\\\\n \\\\\n &= \\boxed{ \\frac{1}{n+1} }\n\\end{align}\n\nAnd so now we have the normalizing constant when $\\alpha, \\beta$ are positive integers.\n", "meta": {"hexsha": "910ee8476995f5db36702d48d2812b3b63e3a660", "size": 128341, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Lecture_23.ipynb", "max_stars_repo_name": "dirtScrapper/Stats-110-master", "max_stars_repo_head_hexsha": "a123692d039193a048ff92f5a7389e97e479eb7e", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Lecture_23.ipynb", "max_issues_repo_name": "dirtScrapper/Stats-110-master", "max_issues_repo_head_hexsha": "a123692d039193a048ff92f5a7389e97e479eb7e", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Lecture_23.ipynb", "max_forks_repo_name": "dirtScrapper/Stats-110-master", "max_forks_repo_head_hexsha": "a123692d039193a048ff92f5a7389e97e479eb7e", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 661.5515463918, "max_line_length": 121312, "alphanum_fraction": 0.9307937448, "converted": true, "num_tokens": 1593, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9086179043564153, "lm_q2_score": 0.9184802484881361, "lm_q1q2_score": 0.8345475985740498}} {"text": "# Chapter 2 Exercises\nIn this notebook we will go through the exercises of chapter 2 of Introduction to Stochastic Processes with R by Robert Dobrow.\n\n\n```python\nimport numpy as np\n```\n\n## 2.1\nA Markov chain has transition Matrix\n$$\np=\\left(\\begin{array}{cc} \n0.1 & 0.3&0.6\\\\\n0 & 0.4& 0.6 \\\\\n0.3 & 0.2 &0.5\n\\end{array}\\right)\n$$ \nWith initial distribution $\\alpha =(0.2,0.3,0.5)$. Find the following:\n\na) $P(X_7=3|X_6=2)$\n\nb) $P(X_9=2|X_1=2,X_5=1,X_7=3)$\n\nc) $P(X_0=3|X_1=1)$\n\nd) $E[X_2]$\n\n\n\n```python\na=np.array([0.2,0.3,0.5])\np = np.matrix([[0.1,0.3,0.6],[0,0.4,0.6],[0.3,0.2,0.5]])\np[1,2]\n```\n\n\n\n\n 0.6\n\n\n\n\n```python\n(p*p)[2,1]\n```\n\n\n\n\n 0.27\n\n\n\n\n```python\na[2]*p[2,0]/(a[0]*p[0,0]+a[1]*p[1,0]+a[2]*p[2,0])\n```\n\n\n\n\n 0.8823529411764707\n\n\n\n\n```python\ne=0\npr = a*p*p\nfor i in range(pr.shape[1]):\n e += (i+1)*pr[0,i]\ne\n```\n\n\n\n\n 2.3630000000000004\n\n\n\n## 2.1\nA Markov chain has transition Matrix\n$$\np=\\left(\\begin{array}{cc} \n0 & 1/2&1/2\\\\\n1 & 0& 0 \\\\\n1/3 & 1/3 &1/3\n\\end{array}\\right)\n$$ \nWith initial distribution $\\alpha =(1/2,0,1/2)$. 
Find the following:\n\na) $P(X_2=1|X_1=3)$\n\nb) $P(X_1=3,X_2=1)$\n\nc) $P(X_1=3|X_2=1)$\n\nd) $P(X_9=1|X_1=3, X_4=1, X_7=2)$\n\n\n```python\na=np.array([1/2,0,1/2])\np = np.matrix([[0,1/2,1/2],[1,0,0],[1/3,1/3,1/3]])\np[2,0]\n```\n\n\n\n\n 0.3333333333333333\n\n\n\n\n```python\n(a*p)[0,2]*p[2,0]\n```\n\n\n\n\n 0.13888888888888887\n\n\n\n\n```python\n(a*p)[0,2]*p[2,0]/((a*p)[0,0]*p[0,0]+(a*p)[0,1]*p[1,0]+(a*p)[0,2]*p[2,0])\n```\n\n\n\n\n 0.25\n\n\n\n\n```python\n(p*p)[1,0]\n```\n\n\n\n\n 0.0\n\n\n\n## 2.3\nConsider the Wright-Fisher model with population k=3. If the initial population has one A allele, what is the probability that there are no alleles at time 3.\n\n\n```python\ndef calculate_combinations(n, r):\n from math import factorial\n return factorial(n) // factorial(r) // factorial(n-r)\nmat = []\nk = 3\nfor j in range(k+1):\n vec = []\n for i in range(k+1):\n vec.append(calculate_combinations(k, i)*(j/k)**i*(1-j/k)**(k-i))\n mat.append(vec)\np=np.matrix(mat)\n(p**3)[1,0]\n```\n\n\n\n\n 0.5166895290352083\n\n\n\n## 2.4\nFor the General\u00f1 two-state chain with transformation matrix \n$$\\boldsymbol{P}=\\left(\\begin{array}{cc} \n1-p & p\\\\\nq & 1-q\n\\end{array}\\right)$$\n\nAnd initial distribution $\\alpha=(\\alpha_1,\\alpha_2)$, find the following:\n\n(a) the two step transition matrix\n\n(b) the distribution of $X_1$\n\n### Answer\n(a)\n\nFor this case the result comes from doing the matrix multiplication once i.e. finding $\\boldsymbol{P}*\\boldsymbol{P}$ that is:\n$$\\boldsymbol{P}*\\boldsymbol{P}=\\left(\\begin{array}{cc} \n(1-p)^2+pq & (1-p)p+(1-q)p\\\\\n(1-p)q+(1-q)q & (1-q)^2+pq\n\\end{array}\\right)$$\n\n(b) \nin this case we need to take into account alpha which would be $X_0$ and then multiply with the transition matrix, then:\n\n$\\boldsymbol{\\alpha}*\\boldsymbol{P}$ that is:\n$$\\boldsymbol{\\alpha}*\\boldsymbol{P}=( \n(1-p)\\alpha_1+q\\alpha_2 , p\\alpha_1+(1-q)\\alpha_2)$$\n\n## 2.5\nConsider a random walk on $\\{0,...,k\\}$, which moves left and right with respective probabilities q and p. If the walk is at 0 it transitions to 1 on the next step. If the walk is at k it transitions to k-1 on the next step. This is calledrandom walk with reflecting bounderies. Assume that $k=3,q=1/4, p =3/4$ and the initial distribution is uniform. For the following, use technology if needed.\n\n(a) Exhibit the transition Matrix\n\n(b) Find $P(X_7=1|X_0=3,X_2=2,X_4=2)$\n\n(c) Find $P(X_3=1,X_5=3)$\n\n### Answer\n(a) $$\\boldsymbol{Pr}=\\left(\\begin{array}{cc}\n0 & 1 & 0 & 0 \\\\\n1/4 & 0 & 3/4 & 0\\\\\n0 & 1/4 & 0 & 3/4 \\\\\n0 & 0 & 1 & 0\n\\end{array}\\right)$$\n\n(b) since it is a Markov Chain, then $P(X_7=1|X_0=3,X_2=2,X_4=2)=P(X_7=1|X_4=2)=\\boldsymbol{Pr}^3_{2,1}$\n\n\n\n```python\np = np.matrix([[0,1,0,0],[1/4,0,3/4,0],[0,1/4,0,3/4],[0,0,1,0]])\n(p**3)[2,1]\n```\n\n\n\n\n 0.296875\n\n\n\n(c) $P(X_3=1,X_5=3)=(\\alpha \\boldsymbol{Pr}^3)_{1}*\\boldsymbol{Pr}^2_{1,3}$\n\n\n```python\na=np.matrix([1/4,1/4,1/4,1/4])\n(a*p**3)[0,1]*(p**2)[1,3]\n```\n\n\n\n\n 0.103271484375\n\n\n\n## 2.6\nA tetrahedron die has four faces labeled 1,2,3 and 4. In repeated independent rolls of the die $R_0, R_1,...,$ let $X_n=max\\{R_0,...,R_n\\}$ be the maximum value after $n+1$ rolls, for $n\\geq0$\n\n(a) Give an intuitive argument for why $X_0, X_1, ... 
$ is a Markov Chain and exhibit its transition Matrix\n\n(b) Find $P(X_3 \\geq 3)$\n\n### Answer\n(a) This is a Markov chain because the running maximum after the next roll depends only on the current maximum and on the new, independent roll; once the current maximum is known, earlier rolls cannot change it. Then $P(X_n=i|X_{n-1}=j,...)=P(X_n=i|X_{n-1}=j)$\nand the transition matrix is:\n$$\\boldsymbol{Pr}=\\left(\\begin{array}{cc}\n1/4 & 1/4 & 1/4 & 1/4 \\\\\n0 & 1/2 & 1/4 & 1/4 \\\\\n0 & 0 & 3/4 & 1/4 \\\\\n0 & 0 & 0 & 1\n\\end{array}\\right)$$\n\n\n(b) We know that the tetrahedron shows each face with uniform probability on the first roll, so $\\alpha=(1/4,1/4,1/4,1/4)$;\nthen we want $\\alpha*\\boldsymbol{Pr}^3$\n\n\n```python\np = np.matrix([[1/4,1/4,1/4,1/4],[0,1/2,1/4,1/4],[0,0,3/4,1/4],[0,0,0,1]])\na=np.matrix([1/4,1/4,1/4,1/4])\n(a*p**3)[0,2:].sum()\n```\n\n\n\n\n    0.9375\n\n\n\n## 2.7\nLet $X_0, X_1,...$ be a Markov chain with transition matrix $P$. Let $Y_n=X_{3n}$, for $n = 0,1,2,...$ Show that $Y_0,Y_1,...$ is a Markov chain and exhibit its transition matrix.\n\n### Answer\nLet's unravel $P(Y_k|Y_{k-1},...,Y_0)$\n\n$$P(Y_k|Y_{k-1},...,Y_0)=P(X_{3k}|X_{3k-3},...,X_0)$$\n\nBut since $X_0, X_1,...$ is a Markov chain, $P(X_{3k}|X_{3k-3},...,X_0) = P(X_{3k}|X_{3k-3}) = P(Y_k|Y_{k-1})$.\nThis means that $Y_0,Y_1,...$ is also a Markov chain,\n\nand the transition matrix for $Y$ is $P_Y=P_X^3$\n\n## 2.8\nGive the Markov transition matrix for a random walk on the weighted graph in the next figure:\n
*(figure: weighted graph)*\n\n### Answer \n$$\\boldsymbol{Pr}=\\left(\\begin{array}{cc}\n0 & 1/3 & 0 & 0 & 2/3 \\\\\n1/10 & 1/5 & 1/5 & 1/10 & 2/5 \\\\\n1/2 & 1/3 & 0 & 1/6 & 0\\\\\n0 & 1/2 & 1/2 & 0 & 0\\\\\n1/3 & 2/3 & 0 & 0 & 0\n\\end{array}\\right)$$\n\n## 2.9\nGive the Markov transition matrix for a random walk on the weighted graph in the next figure:\n
*(figure: weighted graph)*\n
    \n\n### Answer \n$$\\boldsymbol{Pr}=\\left(\\begin{array}{cc}\n0 & 0 & 3/5 & 0 & 2/5 \\\\\n1/7 & 2/7 & 0 & 0 & 4/7 \\\\\n0 & 2/9 & 2/3 & 1/9 & 0\\\\\n0 & 1 & 0 & 0 & 0\\\\\n3/4 & 0 & 0 & 1/4 & 0\n\\end{array}\\right)$$\n\n## 2.10\nConsider a Markov Chain with transition Matrix\n\n$$\\boldsymbol{Pr}=\\left(\\begin{array}{cc}\n0 & 3/5 & 1/5 & 2/5 \\\\\n3/4 & 0 & 1/4 & 0 \\\\\n1/4 & 1/4 & 1/4 & 1/4\\\\\n1/4 & 0 & 1/4 & 1/2\n\\end{array}\\right)$$\n(a) Exhibit the directed, weighted transition graph for the chain\n\n(b) The transition graph for this chain can be given as a weighted graph without directed edges. Exhibit the graph\n\n(a)\n
*(figure: directed, weighted transition graph for part (a))*\n\n(b)\n\n*(figure: weighted graph without directed edges for part (b))*\n
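\nThe two graphs above were embedded as images; as a rough substitute, a transition graph can be drawn directly from a transition matrix. The sketch below is not part of the original solution: it assumes `networkx` and `matplotlib` are installed (neither is used elsewhere in this notebook) and uses a small made-up 3-state matrix purely for illustration — calling the same code with the 2.10 matrix instead produces the directed, weighted graph asked for in part (a).\n\n\n```python\n# Sketch (illustrative only): draw a directed, weighted transition graph from a\n# transition matrix using networkx. P_toy is a made-up 3-state chain.\nimport numpy as np\nimport networkx as nx\nimport matplotlib.pyplot as plt\n\nP_toy = np.array([[0.0, 0.5, 0.5],\n                  [0.6, 0.0, 0.4],\n                  [0.5, 0.5, 0.0]])\n\nG = nx.DiGraph()\nfor i in range(P_toy.shape[0]):\n    for j in range(P_toy.shape[1]):\n        if P_toy[i, j] > 0:\n            # one directed edge per positive transition probability, states labelled 1,2,3\n            G.add_edge(i + 1, j + 1, weight=P_toy[i, j])\n\npos = nx.circular_layout(G)\nnx.draw(G, pos, with_labels=True, node_color='lightgrey', node_size=700)\nedge_labels = {(u, v): '%.2f' % d['weight'] for u, v, d in G.edges(data=True)}\nnx.draw_networkx_edge_labels(G, pos, edge_labels=edge_labels)\nplt.show()\n```\n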
    \n\n## 2.11\nYou start with five dice. Roll all the dice and put aside those dice that come up 6. Then roll the remaining dice, putting aside thise dice that come up 6. and so on. Let $X_n$ be the number of dice that are sixes after n rolls. \n\n(a) Describe the transition matrix $\\boldsymbol{Pr}$ for this Markov chain\n\n(b) Find the probability of getting all sixes by the third play.\n\n(c) what do you expect $\\boldsymbol{Pr}^{100}$ to look like? Use tech to confirm your answer **(Good that we are doing this in jupyter notebook haha)**\n\n(a) Note that the space for $X_n$ is $0,1,2,3,4,5$\n\n$$\\boldsymbol{Pr}=\\left(\\begin{array}{cc}\n1 & 0 & 0 & 0 & 0 & 0 \\\\\n1/6 & 5/6 & 0 & 0 & 0 & 0\\\\\n\\frac{1^2}{6^2}{2\\choose 2} & \\frac{5*1}{6^2}{2\\choose 1} & \\frac{5^2}{6^2}{2\\choose 0} & 0 & 0 & 0\\\\\n\\frac{1^3}{6^3}{3\\choose 3} & \\frac{5*1^2}{6^3}{3\\choose 2} & \\frac{5^2*1}{6^3}{3\\choose 1} & \\frac{5^3}{6^3}{3\\choose 0} & 0 & 0\\\\\n\\frac{1^4}{6^4}{4\\choose 4} & \\frac{5*1^3}{6^4}{4\\choose 3} & \\frac{5^2*1^2}{6^4}{4\\choose 2} & \\frac{5^3*1}{6^4}{4\\choose 1} & \\frac{5^4}{6^4}{4\\choose 0} & 0\\\\\n\\frac{1^5}{6^5}{5\\choose 5} & \\frac{5*1^4}{6^5}{5\\choose 4} & \\frac{5^2*1^3}{6^5}{5\\choose 3} & \\frac{5^3*1^2}{6^5}{5\\choose 2} & \\frac{5^4*1}{6^5}{5\\choose 1} & \\frac{5^5}{6^5}{5\\choose 0}\\\\\n\\end{array}\\right) = \\left(\\begin{array}{cc}\n1 & 0 & 0 & 0 & 0 & 0 \\\\\n\\frac{1}{6} & \\frac{5}{6} & 0 & 0 & 0 & 0\\\\\n\\frac{1}{36} & \\frac{10}{36} & \\frac{25}{36} & 0 & 0 & 0\\\\\n\\frac{1}{216} & \\frac{15}{216} & \\frac{75}{216} & \\frac{125}{216} & 0 & 0\\\\\n\\frac{1}{1296} & \\frac{20}{1296} & \\frac{150}{1296} & \\frac{500}{1296} & \\frac{625}{1296} & 0\\\\\n\\frac{1}{7776} & \\frac{25}{7776} & \\frac{250}{7776} & \\frac{1250}{7776} & \\frac{3125}{7776} & \\frac{3125}{7776}\\\\\n\\end{array}\\right)$$\n\nIt is interesting that the distribution when $X_n=i$ is the components of the binomial expansion $(1/6+5/6)^i$\n\n(b)\n\n\n```python\np = np.matrix([[1, 0, 0, 0, 0, 0],\n [1/6, 5/6, 0, 0, 0, 0], [1/36, 10/36, 25/36, 0, 0, 0], \n [1/216, 15/216, 75/216, 125/216, 0, 0], \n [1/1296, 20/1296, 150/1296, 500/1296, 625/1296, 0],\n [1/7776, 25/7776, 250/7776, 1250/7776 ,3125/7776, 3125/7776]])\n(p**3)[5,0]\n```\n\n\n\n\n 0.013272056011374654\n\n\n\n(c) I would expect that there is basically 100 percent posibility that there are 0 dices left without regarding how many dices you started with.\n\n\n```python\n(p**100).round()\n```\n\n\n\n\n matrix([[1., 0., 0., 0., 0., 0.],\n [1., 0., 0., 0., 0., 0.],\n [1., 0., 0., 0., 0., 0.],\n [1., 0., 0., 0., 0., 0.],\n [1., 0., 0., 0., 0., 0.],\n [1., 0., 0., 0., 0., 0.]])\n\n\n\n## 2.12\ntwo urns contain k balls each. At the beginning of the experiment the urn on the left countains k red balls and the one on the right contains k blue balls. At each step, pick a ball at random from each urn and excange them. Let $X_n$ be the number of blue balls left in the urn on the left (Note that $X_0=0, X_1 = 1$). Argue the process is a Markov Chain. Find the transition matrix. This model is called the Bernoulli - Laplace process. \n\n### Answer\nThis process has to be a random process because the only thing that matters at step n is to know how many blue balls are in the urn on the left. This means that even if you know all the history only the last state is the one that matters to know the probability for the next step.\n\n$$P = \\left(\\begin{array}{cc}\n0 & 1 & 0 & ... &0 & 0 & 0 \\\\\n\\frac{1}{k} & 0 & \\frac{k-1}{k} & ... 
& 0 & 0 & 0\\\\\n0 & \\frac{2}{k} & 0 & ... & 0 & 0 & 0\\\\\n... & ... & ... & ... & ... & ... & ...\\\\\n0 & 0 & 0 & ... & \\frac{k-1}{k} & 0 & \\frac{1}{k}\\\\\n0 & 0 & 0 & ... & 0 & 1 & 0\\\\\n\\end{array}\\right)$$\n\n## 2.13\nsee the move-to-front process in Example 2.10. Here is anorher way to organize the bookshelf. when a book is returned it is put bacl on the library shelf one position forward from where it was originally. If the book at the fron of the shelf is returned it is put back at the front of the shelf. This, if the order of books is (a,b,c,d,e) and the book d is picjed, the new order is (a,b,d,c,e). This reorganization method us called the *transposition* or *move-ahead-1* scheme. Give the transition matrix for the transportation scheme for a shelf with three books.\n\n### Answer\nremember the states are $(abc, acb, bac,bca, cab, cba)$\n$$P = \\left(\\begin{array}{cc}\n0 & p_c & p_b &p_a & 0 & 0 \\\\\np_b & 0 & 0 & 0 & p_c & p_a\\\\\np_a & p_b & 0 & p_c & 0 & 0\\\\\n0 & 0 & p_a & 0 & p_b & p_c\\\\\np_c & p_a & 0 & 0 & 0 & p_b\\\\\n0 & 0 & p_c & p_b & p_a & 0\n\\end{array}\\right)$$\n\n## 2.14\nThere are k songs in Mary's music player. The player is set to *Shuffle* mode, which plays songs uniformly at random, sampling with replacement. Thus, repeats are possible. Let $X_n$ denote the number of *unique* songs that have been heard after the nth play. \n\n(a) show that $X_0,X_1, ...$ is a Markov chain and give the transition matrix\n\n(b) if Mary has four songs on her music player, find the probability that all songs are heard after six plays.\n\n### Answer\n(a) \nimagine that you have $P(X_n|X_{n-1},...,X_0)$ note that $X_{n+1}=max\\{\\xi_{n+1},X_n\\}$ where $\\xi_{n+1}$ is the result of the n+1 and it is independent to all $X_i$, $0\\leq i\\leq n$, then this means that $X_0,...,X_n$ is a Markov chain with transition matrix:\n$$P = \\left(\\begin{array}{cc}\n1/k & (k-1)/k & 0 & ... &0 & 0 & 0 \\\\\n0 & 2/k & (k-2)/k & ... & 0 & 0 & 0\\\\\n0 & 0 & 3/k & ... & 0 & 0 & 0\\\\\n... & ... & ... & ... & ... & ... & ...\\\\\n0 & 0 & 0 & ... & 0 & (k-1)/k & 1/k\\\\\n0 & 0 & 0 & ... & 0 & 0 & 1\\\\\n\\end{array}\\right)$$\n\n## 2.15\nAssume that $X_0,X_1,...$ is a two state Markov chain in $\\mathcal{S}=\\{0,1\\}$ with transition matrix:\n\n$$P=\\left(\\begin{array}{cc}\n1-p & p \\\\\nq & 1-q \n\\end{array}\\right)$$\n\nThe present state of the chain only depends on the previous state. We can create a bivariate process that looks back two time periods by the following contruction. Let $Z_n=(X_{n-1},X_n)$, for $n\\geq1$. The sequence $Z_1,Z_2,...$ is a Markov chain with state space $\\mathcal{S}X\\mathcal{S}={(0,0),(1,0),(0,1),(1,1)}$ Give the transition matrix of the new chain. \n\n### Answer\n$$P = \\left(\\begin{array}{cc}\n1-p & p & 0 & 0 \\\\\n0 & 0 & q & 1-q \\\\\n1-p & p & 0 & 0 \\\\\n0 & 0 & q & 1-q \n\\end{array}\\right)$$\n\n## 2.16\nAssume that $P$ is a stochastic matrix with equal rows. Show that $P^n=P$, for all $n\\geq1$\n\n### Answer\nlet's then see $P$ as:\n$$P = \\left(\\begin{array}{cc}\np_1 & p_2 & ... & p_k \\\\\np_1 & p_2 & ... & p_k \\\\\np_1 & p_2 & ... & p_k \\\\\n... & ... & ... & ... \\\\\np_1 & p_2 & ... & p_k \\\\\np_1 & p_2 & ... & p_k \\\\\n\\end{array}\\right)$$\n\nFirst let's calculate $P^2$, then\n\n$$P^2 = \\left(\\begin{array}{cc}\np_1*p_1+p_2*p_1+...+p_k*p_1 & p_1*p_2+p_2*p_2+...+p_k*p_2 & ... & p_1*p_k+p_2*p_k+...+p_k*p_k \\\\\np_1*p_1+p_2*p_1+...+p_k*p_1 & p_1*p_2+p_2*p_2+...+p_k*p_2 & ... 
& p_1*p_k+p_2*p_k+...+p_k*p_k \\\\\np_1*p_1+p_2*p_1+...+p_k*p_1 & p_1*p_2+p_2*p_2+...+p_k*p_2 & ... & p_1*p_k+p_2*p_k+...+p_k*p_k \\\\\n... & ... & ... & ... \\\\\np_1*p_1+p_2*p_1+...+p_k*p_1 & p_1*p_2+p_2*p_2+...+p_k*p_2 & ... & p_1*p_k+p_2*p_k+...+p_k*p_k \\\\\np_1*p_1+p_2*p_1+...+p_k*p_1 & p_1*p_2+p_2*p_2+...+p_k*p_2 & ... & p_1*p_k+p_2*p_k+...+p_k*p_k \\\\\n\\end{array}\\right)=\n\\left(\\begin{array}{cc}\np_1*(p_1+p_2+...+p_k) & ... & p_k(p_1+p_2+...+p_k) \\\\\np_1*(p_1+p_2+...+p_k) & ... & p_k(p_1+p_2+...+p_k) \\\\\np_1*(p_1+p_2+...+p_k) & ... & p_k(p_1+p_2+...+p_k) \\\\\n... & ... & ... \\\\\np_1*(p_1+p_2+...+p_k) & ... & p_k(p_1+p_2+...+p_k) \\\\\np_1*(p_1+p_2+...+p_k) & ... & p_k(p_1+p_2+...+p_k) \\\\\n\\end{array}\\right)=\n\\left(\\begin{array}{cc}\np_1*(1) & ... & p_k(1) \\\\\np_1*(1) & ... & p_k(1) \\\\\np_1*(1) & ... & p_k(1) \\\\\n... & ... & ... \\\\\np_1*(1) & ... & p_k(1) \\\\\np_1*(1) & ... & p_k(1) \\\\\n\\end{array}\\right) = P$$\n\nThen let's see what happens for $P^3$, then \n$$P^3=P^2*P=P*P=P^2=P$$\n\nAssume it is true for $P^n$ and see what happens at $P^{n+1}$\n$$P^{n+1}=P^n*P=P*P=P^2=P$$\nThen this means that $P^n=P$, for all $n\\geq1$\n\n## 2.17\nLet **$P$** be the stochastic matrix. Show that $\\lambda = 1$ is an eigenvalue of **$P$**. What is the associated eigenvector?\n\n## Answer \nLet's first think that it is clear that the rows all sum to one, and remember that an eigenvector asociated to an eigen value looks of the form:\n$$A\\overline{x}=\\lambda \\overline{x}$$\n\nit is easy to note that the eigenvector we are looking at is the vector $\\overline{x}=(1,1,...,1)^T$ this means that the eigenvalue asociated to this is also one\n\nwe can check this by doing the multiplication. Say we have\n$$A= \\left(\\begin{array}{cc}\np_{1,1} & p_{1,2} & ... & p_{1,k} \\\\\np_{2,1} & p_{2,2} & ... & p_{2,k} \\\\\n... & ... & ... & ... \\\\\np_{k,1} & p_{k,2} & ... & p_{k,k} \\\\\n\\end{array}\\right)$$\nwhere A is an stochastic matrix. Then:\n$$Ax= \\left(\\begin{array}{cc}\np_{1,1} & p_{1,2} & ... & p_{1,k} \\\\\np_{2,1} & p_{2,2} & ... & p_{2,k} \\\\\n... & ... & ... & ... \\\\\np_{k,1} & p_{k,2} & ... & p_{k,k} \\\\\n\\end{array}\\right)\\left(\\begin{array}{cc}\n1 \\\\\n1 \\\\\n... \\\\\n1\\\\\n\\end{array}\\right)=\n\\left(\\begin{array}{cc}\np_{1,1} + p_{1,2} + ... + p_{1,k} \\\\\np_{2,1} + p_{2,2} + ... + p_{2,k} \\\\\n... \\\\\np_{k,1} + p_{k,2} + ... + p_{k,k} \\\\\n\\end{array}\\right)=\n\\left(\\begin{array}{cc}\n1 \\\\\n1 \\\\\n... \\\\\n1\\\\\n\\end{array}\\right)$$\n\nThe way to prove this is to remember that if $A$ is an square matrix, then the eigenvalues of A are the same as its transpose's. Then, let's check if 1 is an eigenvalue by doing $det(A^T-\\lambda I)$ where $\\lambda$ is the eigenvalue (in this case 1) and $I$ is the identity matrix, when doing this we get that:\n$$A-I= \\left(\\begin{array}{cc}\np_{1,1} -1 & p_{1,2} & ... & p_{1,k} \\\\\np_{2,1} & p_{2,2}- 1 & ... & p_{2,k} \\\\\n... & ... & ... & ... \\\\\np_{k,1} & p_{k,2} & ... & p_{k,k}-1 \\\\\n\\end{array}\\right)$$\n\nAnd since we are dealing with the transpose, then if we sum all rows to the first row (i.e. $R_1 -> R_1+R_2+...+R_k$), then all values in the first row are zero, which means that the matrix is linearly dependand and $det(A^T- 1I)=0$, which means that 1 is an eigenvalue for $A$\n\n## 2.18\nA stochastic matrix is called *doubly* stochastic if its columns sum to 1. 
Let $X_0,X_1,...$ be a Markov chain on $\\{ 1,...,k \\}$ with doubly stochastic transition matrix and initial distribution that is uniform on $\\{ 1,...,k \\}$. Show that the distribution of $X_n$ is uniform on $\\{ 1,...,k \\}$, for all $n\\geq0$\n\n### Answer\nLet's remember that then the distribution at $X_n$ is $\\alpha P^n$, let's see what happens at $n=1$\n\n$n=1$\n$$X_1=\\alpha P = \\left(\\begin{array}{cc}\n1/k &1/k&...& 1/k\n\\end{array}\\right)\\left(\\begin{array}{cc}\np_{1,1} & p_{1,2} & ... & p_{1,k} \\\\\np_{2,1} & p_{2,2} & ... & p_{2,k} \\\\\n... \\\\\np_{k,1} & p_{k,2} & ... & p_{k,k} \\\\\n\\end{array}\\right)=\\left(\\begin{array}{cc}\np_{1,1}*1/k + p_{2,1}*1/k + ... + p_{k,1}*1/k \\\\\np_{1,2}*1/k + p_{2,2}*1/k + ... + p_{k,2}*1/k \\\\\n... \\\\\np_{1,k}*1/k + p_{2,k}*1/k + ... + p_{k,k}*1/k \\\\\n\\end{array}\\right)^T=\\left(\\begin{array}{cc}\n(p_{1,1} + p_{2,1} + ... + p_{k,1})*1/k\\\\\n(p_{1,2} + p_{2,2} + ... + p_{k,2})*1/k \\\\\n... \\\\\n(p_{1,k} + p_{2,k} + ... + p_{k,k})*1/k \\\\\n\\end{array}\\right)^T = \\left(\\begin{array}{cc}\n(1)*1/k\\\\\n(1)*1/k \\\\\n... \\\\\n(1)*1/k \\\\\n\\end{array}\\right)^T=\\alpha\n$$\nThis last step is due to the matrix being double stochastic\n\nNow let's $n=2$\n$$X_1=\\alpha P^2 = (\\alpha P)*P=\\alpha*p=\\alpha$$\n\nThen we assume it happens for $n$, let's see then for $n+1$\n$$X_{n+1}=\\alpha P^{n+1} = (\\alpha P^n)*P=\\alpha*p=\\alpha$$\n\n## 2.19\nLet **$P$** be the transition matrix of a Markov chain on $k$ states. Let **$I$** denote the $kXk$ identity matrix. consider the matrix\n\n$$\\boldsymbol{Q}=(1-p)\\boldsymbol{I}+p\\boldsymbol{P}\\text{ , for }0-1$, $(1+x)^n\\geq 1+nx$\n\n### Answer\n$#a$\n\nFor $n=1$\n\n$$\n\\begin{align}\n2(1)-1 &= 1 \\\\&=1^2 \n\\end{align}$$\n\nAssume it works for $n-1$\n\n$$\n\\begin{align}\n1+3+...+(2n-3) &= (n-1)^2 \\\\\n &=n^2 -2n+1\n\\end{align}$$\n\nThen for $n$\n$$\n\\begin{align}\n1+3+...+(2n-3)+(2n-1) &= n^2 -2n+1 +2n-1\\\\\n &=n^2 \n\\end{align}$$\n\nTherefore is proved.\n\n$#b$\n\nFor $n=1$\n\n$$\n\\begin{align}\n1(1+1)(2*1+1)/6 &= 6/6 \\\\&=1^2 \n\\end{align}$$\n\nAssume it works for $n-1$\n\n$$\n\\begin{align}\n1^2+2^2+...+(n-1)n^2&=(n-1)(n)(2n-1)/6\n\\end{align}$$\n\nThen for $n$\n$$\n\\begin{align}\n1^2+2^2+...+n^2 &=\\frac{(n-1)(n)(2n-1)}{6}+n^2\\\\\n &=\\frac{2n^-3n^2+n+6n^2}{6}\\\\\n &=\\frac{n(2n^2+3n+1)}{6}\\\\\n &=\\frac{n(n+1)(2n+1)}{6}\n\\end{align}$$\n\nTherefore is proved.\n\n$#c$\n\n\nFor this is important to notice that for $x>-1$ means that $(1+x)>0$ for all $x$. 
Now let's see what happens for $n=1$\n\n$$\n\\begin{align}\n(1+x)^1 &= (1+1*x)\\\\\n1+x &\\geq 1+x\n\\end{align}$$\n\nAssume it works for $n-1$\n\n$$\n\\begin{align}\n(1+x)^{n-1}&\\geq1+(n-1)x\n\\end{align}$$\n\nThen for $n$\n$$\n\\begin{align}\n(1+x)^{n} &= (1+x)^{n-1}*(1+x)\\\\\n &\\geq 1+(n-1)x*(1+x)\\\\\n\\\\ \\text{this last step because $1+x > 0$} \\\\ \\\\\n1+(n-1)x*(1+x) &=1+(n-1)x+x+(n-1)x^2\\\\\n &=1+n*x+ (n-1)x^2\\\\\n\\\\ \\text{but since $(n-1)x^2\\geq 0$ for all $x>1$}\\\\ \\\\\n1+n*x+ (n-1)x^2 &\\geq 1+n*x \\\\\n\\\\ \\text{=>} \\\\ \\\\\n(1+x)^{n} &\\geq 1+n*x+ (n-1)x^2 \\\\\n &\\geq 1+n*x \\\\\n\\\\ \\text{by transitivity relation} \\\\ \\\\\n(1+x)^{n} &\\geq 1+n*x \\\\\n\\end{align}$$\n\nTherefore is proved.\n\n\n## 2.23\nSimulate the first 20 letters (vowel/consonant) of the Pushkin poem Markov chain of Example 2.2.\n\n$$P=\\left(\\begin{array}{cc}\n0.175 & 0.825 \\\\\n0.526 & 0.474\n\\end{array}\\right)$$\n\n### Answer\n\n\n```python\nclass PushkinLetters():\n def __init__(self, P=np.matrix([[0.175,0.825],[0.526,0.474]]), init= np.array([8.638/20, 11.362/20])):\n self.P=P\n self.init = init\n def sample(self, simlist, dist):\n vowel = 0\n consonant = 1\n if (np.random.random()\n\n\n\nWith transition matrix uniform across vertex as described in the frog example, then:\n$$P=\\left(\\begin{array}{cc}\n0 & 1 & 0 & 0 & 0 & 0\\\\\n1/4 & 0 & 1/4 & 1/4 & 1/4 & 0\\\\\n0 & 1/4 & 0 & 1/4 & 1/4 & 1/4\\\\\n0 & 1/4 & 1/4 & 0 & 1/4 & 1/4\\\\\n0 & 1/3 & 1/3 & 1/3 & 0 & 0\\\\\n0 & 0 & 1/2 & 1/2 & 0 & 0\n\\end{array}\\right)$$\n\n### Answer\n\n\n```python\nclass FrogJump():\n def __init__(self, P=np.matrix([[0 , 1 , 0 , 0 , 0 , 0],\n [1/4 , 0 , 1/4 , 1/4 , 1/4 , 0],\n [0 , 1/4 , 0 , 1/4 , 1/4 , 1/4],\n [0 , 1/4 , 1/4 , 0 , 1/4 , 1/4],\n [0 , 1/3 , 1/3 , 1/3 , 0 , 0],\n [0 , 0 , 1/2 , 1/2 , 0 , 0]]), \n init= np.array([1/6]*6)):\n self.P=P\n self.init = init\n def sample(self, simlist, dist):\n simlist.append(np.random.choice(len(dist),p=dist))\n def OneSimulation(self, steps = 50):\n sim = []\n self.sample(sim, self.init)\n for i in range(1,steps):\n self.sample(sim, np.array(self.P[sim[i-1]]).reshape(-1))\n return sim\n def Simulation(self, simulations):\n vowel = 0\n consonant = 1\n results = []\n for i in range(0,simulations):\n results.append(self.OneSimulation())\n return results\n```\n\n\n```python\nnp.array(FrogJump().Simulation(10))[:,-1]\n```\n\n\n\n\n array([3, 2, 4, 2, 3, 4, 1, 2, 3, 3])\n\n\n\n\n```python\nP=np.matrix([[0 , 1 , 0 , 0 , 0 , 0],\n [1/4 , 0 , 1/4 , 1/4 , 1/4 , 0],\n [0 , 1/4 , 0 , 1/4 , 1/4 , 1/4],\n [0 , 1/4 , 1/4 , 0 , 1/4 , 1/4],\n [0 , 1/3 , 1/3 , 1/3 , 0 , 0],\n [0 , 0 , 1/2 , 1/2 , 0 , 0]])\n(P**30)[0,:]\n```\n\n\n\n\n matrix([[0.05555595, 0.22222121, 0.22222273, 0.22222273, 0.16666667,\n 0.11111072]])\n\n\n\n## 2.25 \nThe behavior of dolphins in the presence of tour boats in Patagonia, Argentina is studied in Dans et al. (2012). A Markov chain model is developed, with state space consisting of five primary dolphin activities (socializing, traveling, milling, feeding, and resting). 
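\n        # (the remainder of this method is missing from this copy; presumably the\n        #  uniform draw is compared with the vowel probability in dist and either a\n        #  vowel (0) or a consonant (1) is appended to simlist)\n```\n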
The following transition matrix is\nobtained.\n\n$$P=\\left(\\begin{array}{cc}\n0.84 & 0.11 & 0.01 & 0.04 & 0 \\\\\n0.03 & 0.8 & 0.04 & 0.1 & 0.03 \\\\\n0.01 & 0.15 & 0.7 & 0.07 & 0.07 \\\\\n0.03 & 0.19 & 0.02 & 0.75 & 0.01 \\\\\n0.03 & 0.09 & 0.05 & 0 & 0.83 \n\\end{array}\\right)$$\n\n### Answer\n\n\n```python\n(np.matrix([\n[0.84 , 0.11 , 0.01 , 0.04 , 0],\n[0.03 , 0.8 , 0.04 , 0.1 , 0.03],\n[0.01 , 0.15 , 0.7 , 0.07 , 0.07],\n[0.03 , 0.19 , 0.02 , 0.75 , 0.01],\n[0.03 , 0.09 , 0.05 , 0 , 0.83]\n])**100)[0]\n```\n\n\n\n\n matrix([[0.14783582, 0.41492537, 0.0955597 , 0.2163806 , 0.1252985 ]])\n\n\n\n## 2.26 \nIn computer security applications, a honeypot is a trap set on a network to detect and counteract computer hackers. Honeypot data are studied in Kimou et al. (2010) using Markov chains. The authors obtain honeypot data from a central database and observe attacks against four computer ports\u201480, 135, 139, and 445\u2014over 1 year. The ports are the states of a Markov chain along with a state corresponding to no port is attacked. Weekly data are monitored, and the port most often attacked during the week is recorded. The estimated Markov transition matrix for weekly attacks is\n$$P=\\left(\\begin{array}{cc}\n0 & 0 & 0 & 0 & 1 \\\\\n0 & 8/13 & 3/13 & 1/13 & 1/13 \\\\\n1/16 & 3/16 & 3/8 & 1/4 & 1/8 \\\\\n0 & 1/11 & 4/11 & 5/11 & 1/11 \\\\\n0 & 1/8 & 1/2 & 1/8 & 1/4 \n\\end{array}\\right)$$\nwith initial distribution $\\alpha = (0, 0, 0, 0, 1)$.\n\n(a) Which are the least and most likely attacked ports after 2 weeks? \n\n(b) Find the long-term distribution of attacked ports.\n\n### Answer\n\n\n```python\nP=np.matrix([\n[0 , 0 , 0, 0 , 1],\n[0 , 8/13 , 3/13 , 1/13, 1/13 ],\n[1/16 , 3/16 , 3/8 , 1/4, 1/8 ],\n[0 , 1/11 , 4/11 , 5/11 , 1/11],\n[0 , 1/8 , 1/2 , 1/8 , 1/4 ]])\na=np.array([0,0,0,0,1])\nparser=[80,135, 139, 445, 'No attack']\n```\n\n\n```python\nprint(\"the most likely port to be attacked after two weeks is:\",parser[(a*P**2).argmax()],\"\\n\"+\n \"the least likely port to be attacked after two weeks is:\",parser[(a*P**2).argmin()])\n```\n\n the most likely port to be attacked after two weeks is: 139 \n the least likely port to be attacked after two weeks is: 80\n\n\n\n```python\nprint(\"the long last distribution is:\",(P**100)[0])\n```\n\n the long last distribution is: [[0.02146667 0.26693333 0.34346667 0.22733333 0.1408 ]]\n\n\n## 2.27 \nSee gamblersruin.R. Simulate gambler\u2019s ruin for a gambler with initial stake $\\$2$, playing a fair game.\n\n(a) Estimate the probability that the gambler is ruined before he wins $\\$5$.\n\n(b) Construct the transition matrix for the associated Markov chain. Estimate\nthe desired probability in (a) by taking high matrix powers. 
\n\n(c) Compare your results with the exact probability.\n\n### Answer\n\n\n```python\nclass GamblingWalk():\n '''\n This class is used to simulate a random walk, it is flexible to take different \n outcomes for the results of throwing the coin and you can have different limits to when stop the experiment\n The plotting functions are handy to see the final results.\n '''\n def __init__(self, initial_money=50, prob=1/2, min_state=0, max_state=100, outcomes = [-1,1]):\n self.prob = prob\n self.init_money = initial_money\n self.walk = []\n self.min_state = min_state\n self.max_state = max_state\n self.outcomes = outcomes\n def P_w(self, sim=100):\n results = []\n for i in range(sim):\n res = self.randomWalk()\n results.append(res[1])\n return sum(results)/len(results)\n def randomWalk(self):\n money = self.init_money\n self.walk = [int(money)]\n win = False\n while True:\n money += np.random.choice(self.outcomes,1,p=[1-self.prob,self.prob])\n self.walk.append(int(money))\n if (money <= self.min_state) or (money >= self.max_state):\n win=(money >= self.max_state)\n break\n return len(self.walk), win\n def plotWalk(self):\n if(len(self.walk)==0):\n self.randomWalk()\n plt.plot(range(len### Answer(self.walk)), self.walk)\n plt.xlabel(\"Time\")\n plt.ylabel(\"Money\")\n plt.show()\n```\n\n\n```python\nwlk = GamblingWalk(initial_money=2, max_state=5)\n1-wlk.P_w(10000)\n```\n\n\n\n\n array([0.5985])\n\n\n\n(b) The transition Matrix is given by:\n\n$$P=\\left(\\begin{array}{cc}\n1 & 0 & 0 & 0 & 0 & 0 \\\\\n1/2 & 0 & 1/2 & 0 & 0 & 0 \\\\\n0 & 1/2 & 0 & 1/2 & 0 & 0 \\\\\n0 & 0 & 1/2 & 0 & 1/2 & 0 \\\\\n0 & 0 & 0 & 1/2 & 0 & 1/2 \\\\ \n0 & 0 & 0 & 0 & 0 & 1 \\\\\n\\end{array}\\right)$$\n\n\n```python\nP=np.matrix([[1 , 0 , 0 , 0 , 0 , 0],\n [1/2 , 0 , 1/2 , 0 , 0 , 0],\n [0 , 1/2 , 0 , 1/2 , 0 , 0],\n [0 , 0 , 1/2 , 0 , 1/2 , 0],\n [0 , 0 , 0 , 1/2 , 0 , 1/2],\n [0 , 0 , 0 , 0 , 0 ,1]])\na=np.array([0,0,1,0,0,0])\n(a*P**100).round(4)\n```\n\n\n\n\n array([[0.6, 0. , 0. , 0. , 0. , 0.4]])\n\n\n\n(c) As seen from the results from (a) and (b) we can see that the simulation is a bit off, even after 10,000 simulations.\n", "meta": {"hexsha": "ce9e388822f72a9455c50b39e459bb0cf61cc436", "size": 56321, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Chapter02_py.ipynb", "max_stars_repo_name": "larispardo/StochasticProcessR", "max_stars_repo_head_hexsha": "a2f8b6c41f2fe451629209317fc32f2c28e0e4ee", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Chapter02_py.ipynb", "max_issues_repo_name": "larispardo/StochasticProcessR", "max_issues_repo_head_hexsha": "a2f8b6c41f2fe451629209317fc32f2c28e0e4ee", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Chapter02_py.ipynb", "max_forks_repo_name": "larispardo/StochasticProcessR", "max_forks_repo_head_hexsha": "a2f8b6c41f2fe451629209317fc32f2c28e0e4ee", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.0479603087, "max_line_length": 584, "alphanum_fraction": 0.4714404929, "converted": true, "num_tokens": 13771, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9184802484881361, "lm_q2_score": 0.9086178895092414, "lm_q1q2_score": 0.8345475849372138}} {"text": "# sympy\n\nSince we want to work interactively with `sympy`, we will import the complete module. Note that this polutes the namespace, and is not recommended in general.\n\n\n```python\nfrom sympy import *\n```\n\nEnable pretty printing in this notebook.\n\n\n```python\ninit_printing()\n```\n\n## Expression manipulation\n\nDefine a number of symbols to work with, as well as an example expression.\n\n\n```python\nx, y, a, b, c = symbols('x y a b c')\n```\n\n\n```python\nexpr = (a*x**2 - b*y**2 + 5)/(c*x + y)\n```\n\nCheck the expression's type.\n\n\n```python\nexpr.func\n```\n\n\n\n\n sympy.core.mul.Mul\n\n\n\nAlthough the expression was defined as a divisino, it is represented as a multiplicatino by `sympy`. The `args` attribute of an expressions stores the operands of the top-level operator.\n\n\n```python\nexpr.args\n```\n\nAlthough the first factor appears to be a division, it is in fact a power. The denominator of this expression would be given by:\n\n\n```python\nexpr.args[0].func\n```\n\n\n\n\n sympy.core.power.Pow\n\n\n\n\n```python\nexpr.args[0].args[0]\n```\n\nThe expression $\\frac{1}{a x + b}$ can alternatively be defined as follows, which highlights the internal representation of expressions.\n\n\n```python\nexpr = Pow(Add(Mul(a, x), b), -1)\n```\n\n\n```python\npprint(expr)\n```\n\n 1 \n \u2500\u2500\u2500\u2500\u2500\u2500\u2500\n a\u22c5x + b\n\n\n\n```python\nexpr.args\n```\n\n\n\n\n (a*x + b, -1)\n\n\n\n\n```python\nexpr.args[0].args[0]\n```\n\n\n\n\n b\n\n\n\n\n```python\nexpr = x**2 + 2*a*x + y**2\n```\n\n\n```python\nexpr2 = expr.subs(a, y)\nexpr2\n```\n\nMost expression manipulation algorithms can be called as functions, or as methods on expressions.\n\n\n```python\nfactor(expr2)\n```\n\n\n```python\nexpr2.factor()\n```\n\n\n```python\nx, y = symbols('x y', positive=True)\n```\n\n\n```python\n(log(x) + log(y)).simplify()\n```\n\n## Calculus\n\n### Series expansion\n\n\n```python\nx, a = symbols('x a')\n```\n\n\n```python\nexpr = sin(a*x)/x\n```\n\n\n```python\nexpr2 = series(expr, x, 0, n=7)\n```\n\n\n```python\nexpr2\n```\n\nA term of a specific order in a given variable can be selected easily.\n\n\n```python\nexpr2.taylor_term(2, x)\n```\n\nWhen the order is unimportant, or when the expression should be used to define a function, the order term can be removed.\n\n\n```python\nexpr2.removeO()\n```\n\nAdding two series deals with the order correctly.\n\n\n```python\ns1 = series(sin(x), x, 0, n=7)\n```\n\n\n```python\ns2 = series(cos(x), x, 0, n=4)\n```\n\n\n```python\ns1 + s2\n```\n\n### Derivatives and integrals\n\n\n```python\nexpr = a*x**2 + b*x + c\n```\n\n\n```python\nexpr.diff(x)\n```\n\n\n```python\nexpr.integrate(x)\n```\n", "meta": {"hexsha": "bc4dddf0cb38ea390f1d59cd6ece1635e1686d1e", "size": 35882, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Python/Sympy/sympy.ipynb", "max_stars_repo_name": "Gjacquenot/training-material", "max_stars_repo_head_hexsha": "16b29962bf5683f97a1072d961dd9f31e7468b8d", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 115, "max_stars_repo_stars_event_min_datetime": "2015-03-23T13:34:42.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-21T00:27:21.000Z", "max_issues_repo_path": "Python/Sympy/sympy.ipynb", "max_issues_repo_name": "Gjacquenot/training-material", "max_issues_repo_head_hexsha": "16b29962bf5683f97a1072d961dd9f31e7468b8d", "max_issues_repo_licenses": 
["CC-BY-4.0"], "max_issues_count": 56, "max_issues_repo_issues_event_min_datetime": "2015-02-25T15:04:26.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-03T07:42:48.000Z", "max_forks_repo_path": "Python/Sympy/sympy.ipynb", "max_forks_repo_name": "Gjacquenot/training-material", "max_forks_repo_head_hexsha": "16b29962bf5683f97a1072d961dd9f31e7468b8d", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 59, "max_forks_repo_forks_event_min_datetime": "2015-11-26T11:44:51.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-21T00:27:22.000Z", "avg_line_length": 49.5607734807, "max_line_length": 4034, "alphanum_fraction": 0.7642271891, "converted": true, "num_tokens": 717, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9184802350995703, "lm_q2_score": 0.9086178882719769, "lm_q1q2_score": 0.8345475716357205}} {"text": "# Normal distribution\n\nGiven normal density and distribution functions for N(0,1)\n\n\\begin{equation}\n f_X(x)=\\frac{1}{\\sqrt{2\\pi}}e^{-x^2/2}\n\\end{equation}\n\n\\begin{equation}\n F_X(x)=\\int_\\infty^x \\frac{1}{\\sqrt{2\\pi}}e^{-u^2/2}du\n\\end{equation}\n\nWhat is the probability that $4.0 < x < 4.5$?\n\n\n```python\nprint(\"Probability from table: %.6f\" % (0.999997 - 0.999968))\n```\n\n Probability from table: 0.000029\n\n\n\n```python\nfrom numpy import pi, sqrt, exp\nfrom scipy.integrate import quad\n\nfx = lambda x: 1/sqrt(2*pi)*exp(-x**2/2)\nprob = quad(fx, 4, 4.5)\nprint(\"Probability from integration: %.6f (%.2e)\" % prob)\n```\n\n Probability from integration: 0.000028 (3.14e-19)\n\n", "meta": {"hexsha": "de4122a375672e0d2b3a95682d56a1d6bb854ca3", "size": 1763, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Example 1.7.ipynb", "max_stars_repo_name": "mfkiwl/GMPE340", "max_stars_repo_head_hexsha": "3602b8ba859a2c7db2cab96862472597dc1ac793", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-09-07T09:36:36.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-07T09:36:36.000Z", "max_issues_repo_path": "Example 1.7.ipynb", "max_issues_repo_name": "mfkiwl/GMPE340", "max_issues_repo_head_hexsha": "3602b8ba859a2c7db2cab96862472597dc1ac793", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Example 1.7.ipynb", "max_forks_repo_name": "mfkiwl/GMPE340", "max_forks_repo_head_hexsha": "3602b8ba859a2c7db2cab96862472597dc1ac793", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-09-20T18:48:20.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-20T18:48:20.000Z", "avg_line_length": 20.9880952381, "max_line_length": 70, "alphanum_fraction": 0.4991491775, "converted": true, "num_tokens": 248, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9566341999997378, "lm_q2_score": 0.8723473796562744, "lm_q1q2_score": 0.8345173376593475}} {"text": "### DEMSLV04\n\n# Compute fixedpoint of $f(x, y)= [x^2 + y^3; xy - 0.5]$\n\nCompute fixedpoint of \n\n\\begin{equation}\nf(x, y)= \\begin{bmatrix}x^2 + y^3 \\\\ xy - 0.5 \\end{bmatrix}\n\\end{equation}\n\nusing Newton, Broyden, and function iteration methods.\n\nInitial values generated randomly. 
Some algorithms may fail to converge, depending on the initial value.\n\nTrue fixedpoint is $x = -0.09$, $y=-0.46$.\n\n\n```python\nfrom demos.setup import np, tic, toc\nfrom compecon import NLP\nnp.random.seed(12)\n```\n\n### Set up the problem\n\n\n```python\ndef g(z):\n x, y = z\n return np.array([x **2 + y ** 3, x * y - 0.5])\n\nproblem_as_fixpoint = NLP(g, maxit=1500)\n```\n\n### Equivalent Rootfinding Formulation\n\n\n```python\ndef f(z):\n x, y = z\n fval = [x - x ** 2 - y ** 3,\n y - x * y + 0.5]\n fjac = [[1 - 2 * x, -3 * y **2],\n [-y, 1 - x]]\n\n return np.array(fval), np.array(fjac)\n\nproblem_as_zero = NLP(f, maxit=1500)\n```\n\n### Randomly generate starting point\n\n\n```python\nxinit = np.random.randn(2)\n```\n\n### Compute fixed-point using Newton method\n\n\n```python\nt0 = tic()\nz1 = problem_as_zero.newton(xinit)\nt1 = 100 * toc(t0)\nn1 = problem_as_zero.fnorm\n```\n\n### Compute fixed-point using Broyden method\n\n\n```python\nt0 = tic()\nz2 = problem_as_zero.broyden(xinit)\nt2 = 100 * toc(t0)\nn2 = problem_as_zero.fnorm\n```\n\n### Compute fixed-point using function iteration\n\n\n```python\nt0 = tic()\nz3 = problem_as_fixpoint.fixpoint(xinit)\nt3 = 100 * toc(t0)\nn3 = np.linalg.norm(problem_as_fixpoint.fx - z3)\n```\n\n\n\n\n```python\nprint('Hundredths of seconds required to compute fixed-point of ')\nprint('\\n\\t\\tg(x1,x2)=[x1^2+x2^3; x1*x2-0.5]')\nprint('\\nusing Newton, Broyden, and function iteration methods, starting at')\nprint('\\n\\t\\tx1 = {:4.2f} x2 = {:4.2f}\\n\\n'.format(*xinit))\nprint('Method Time Norm of f x1 x2\\n', '-' * 48)\nprint('Newton {:8.2f} {:8.0e} {:5.2f} {:5.2f}'.format(t1, n1, *z1))\nprint('Broyden {:8.2f} {:8.0e} {:5.2f} {:5.2f}'.format(t2, n2, *z2))\nprint('Function {:8.2f} {:8.0e} {:5.2f} {:5.2f}'.format(t3, n3, *z3))\n```\n\n Hundredths of seconds required to compute fixed-point of \n \n \t\tg(x1,x2)=[x1^2+x2^3; x1*x2-0.5]\n \n using Newton, Broyden, and function iteration methods, starting at\n \n \t\tx1 = 0.47 x2 = -0.68\n \n \n Method Time Norm of f x1 x2\n ------------------------------------------------\n Newton 0.80 2e-15 -0.09 -0.46\n Broyden 0.10 8e-10 -0.09 -0.46\n Function 0.05 8e-09 -0.09 -0.46\n\n", "meta": {"hexsha": "4e3b5263bc7c3f27bfe80a7159bd93dcd5565196", "size": 5454, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/slv/04 Compute fixedpoint of f(x1,x2)= [x1 2+x2 3; x1 x2 - 0.5].ipynb", "max_stars_repo_name": "daniel-schaefer/CompEcon-python", "max_stars_repo_head_hexsha": "d3f66e04a7e02be648fc5a68065806ec7cc6ffd6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notebooks/slv/04 Compute fixedpoint of f(x1,x2)= [x1 2+x2 3; x1 x2 - 0.5].ipynb", "max_issues_repo_name": "daniel-schaefer/CompEcon-python", "max_issues_repo_head_hexsha": "d3f66e04a7e02be648fc5a68065806ec7cc6ffd6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/slv/04 Compute fixedpoint of f(x1,x2)= [x1 2+x2 3; x1 x2 - 0.5].ipynb", "max_forks_repo_name": "daniel-schaefer/CompEcon-python", "max_forks_repo_head_hexsha": "d3f66e04a7e02be648fc5a68065806ec7cc6ffd6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-06-01T03:47:35.000Z", "max_forks_repo_forks_event_max_datetime": 
"2021-06-01T03:47:35.000Z", "avg_line_length": 22.3524590164, "max_line_length": 114, "alphanum_fraction": 0.470663733, "converted": true, "num_tokens": 916, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9372107949104866, "lm_q2_score": 0.8902942319436397, "lm_q1q2_score": 0.8343933648241197}} {"text": "# The Jones Calculus: Symbolic Algebra\n\n**Scott Prahl**\n\n**April 2020**\n\n\n```python\n#this must be run first\n%matplotlib inline\n\nimport matplotlib\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport sympy\nimport pypolar.sym_jones as sym_jones\nsympy.init_printing(use_latex='mathjax')\n\n```\n\n## Background\n\n### Jones Vector for Linearly Polarized Light\n\nA Jones vector is a 2x1 matrix that can represent fully polarized light. Linearly polarized light is relatively simple\n$$\n\\mbox{horizontal linearly polarized light}=\n\\left[\\begin{array}{c} \n1\\\\\n0\\\\\n\\end{array}\\right]\n$$\nand\n\n\n```python\nsym_jones.field_horizontal()\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1\\\\0\\end{matrix}\\right]$\n\n\n\n$$\n\\mbox{vertical linearly polarized light}=\n\\left[\\begin{array}{c} \n0\\\\\n1\\\\\n\\end{array}\\right]\n$$\n\n\n```python\nsym_jones.field_vertical()\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}0\\\\1\\end{matrix}\\right]$\n\n\n\n$$\n\\mbox{linearly polarized at $\\theta$ from horizontal}=\n\\left[\\begin{array}{c} \n\\cos\\theta\\\\\n\\sin\\theta\\\\\n\\end{array}\\right]\n$$\nSo the vector for linearly polarized light at 45 degrees can be written in python as\n\n\n```python\ntheta = sympy.Symbol('theta', real=True)\nsym_jones.field_linear(theta)\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}\\cos{\\left(\\theta \\right)}\\\\\\sin{\\left(\\theta \\right)}\\end{matrix}\\right]$\n\n\n\n### Jones Vectors for Circularly Polarized Light\n\nFor circularly polarized light, the direction of polarization rotates through an entire circle every wavelength, and retains a constant magnitude. \n\nRight circularly polarized light has the direction of polarization rotating clockwise when looking into the beam. 
\n$$\n\\mbox{right linearly polarized light}=\n{1\\over\\sqrt{2}}\\left[\\begin{array}{c} \nj\\\\\n1\\\\\n\\end{array}\\right]\n$$\n\n\n```python\nsym_jones.field_right_circular()\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}\\frac{\\sqrt{2}}{2}\\\\- \\frac{\\sqrt{2} i}{2}\\end{matrix}\\right]$\n\n\n\n$$\n\\mbox{vertical linearly polarized light}=\n{1\\over\\sqrt{2}}\\left[\\begin{array}{c} \n1\\\\\nj\\\\\n\\end{array}\\right]\n$$\nSo the vector for left circularly polarized light can be written in python as\n\n\n```python\nsym_jones.field_left_circular()\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}\\frac{\\sqrt{2}}{2}\\\\\\frac{\\sqrt{2} i}{2}\\end{matrix}\\right]$\n\n\n\n### Elliptically polarized light\nIn general, the expression for elliptically polarized light is\n$$\n\\mathbf{I}=\n\\left[\\begin{array}{c} \nA\\\\\nBe^{j\\delta}\\\\\n\\end{array}\\right]\n$$\n\n### Irradiance\nThe scalar intensity of the beam is the square of the field $E\\cdot E^*$,\n$$\n\\left[\\begin{array}{c} \nA\\\\\nBe^{j\\delta}\\\\\n\\end{array}\\right]\n\\cdot\n\\left[\\begin{array}{c} \nA^* &\nB^*e^{-j\\delta}\\\\\n\\end{array}\\right]\n$$\nor\n$$\nI= A A^* + B B^*\n$$ \nwhere $A^*$ is the complex conjugate of $A$.\n\n## Jones Matrix for Linear Polarizer\n\nThe Jones matrix representing a perfect horizontal polarizer is\n$$\n\\mathbf{H}=\\mbox{Horizontal Linear Polarizer}=\\left[\\begin{array}{cc}\n1 & 0\\\\\n0 & 0\\\\\n\\end{array}\\right]\n$$\n\nThe matrix representing a rotation through an angle $\\theta$ is\n$$\n\\mathbf{R}(\\theta)=\\mbox{Rotation by $\\theta$}=\\left[\\begin{array}{cc}\n\\cos\\theta & \\sin \\theta\\\\\n-\\sin\\theta & \\cos\\theta\\\\\n\\end{array}\\right]\n$$\n\nThe matrix representing a linear polarizer oriented at an angle of $\\theta$ to the horizontal axis can by rotating the beam so the axis coincides with the horizontal axis and then unrotating by the same amount\n$$\n\\mbox{Linear Polarizer at $\\theta$} = \\mathbf{R}(-\\theta)\\cdot\\mathbf{H}\\cdot\\mathbf{R}(\\theta)\n$$\n\n\n\n```python\n## By rotation\ntheta = sympy.Symbol('theta', real=True)\nH = sym_jones.op_linear_polarizer(0)\nR = sym_jones.op_rotation(theta)\nlinear = sym_jones.op_rotation(-theta) * H * sym_jones.op_rotation(theta)\nlinear\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}\\cos^{2}{\\left(\\theta \\right)} & \\sin{\\left(\\theta \\right)} \\cos{\\left(\\theta \\right)}\\\\\\sin{\\left(\\theta \\right)} \\cos{\\left(\\theta \\right)} & \\sin^{2}{\\left(\\theta \\right)}\\end{matrix}\\right]$\n\n\n\n\n```python\n## Directly\nsym_jones.op_linear_polarizer(theta)\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}\\cos^{2}{\\left(\\theta \\right)} & \\sin{\\left(\\theta \\right)} \\cos{\\left(\\theta \\right)}\\\\\\sin{\\left(\\theta \\right)} \\cos{\\left(\\theta \\right)} & \\sin^{2}{\\left(\\theta \\right)}\\end{matrix}\\right]$\n\n\n\n### Retarders and Wave Plates\n\nAn optical retarder has a fast and a slow axis. Light polarized parallel to the fast axis moves through the wave plate faster than light polarized perpendicular to the fast axis. A phase lag (or retardance) $\\phi$ is created between the two polarization states. 
The Jones matrix for a retarder with its fast axis oriented at $\\theta$ is\n$$\n\\left[\\begin{array}{cc}\n\\cos{\\phi\\over2} + j\\sin{\\phi\\over2}\\cos2\\theta & j \\sin{\\phi\\over2}\\sin 2\\theta\\\\\nj \\sin{\\phi\\over2}\\sin 2\\theta & \\cos{\\phi\\over2} - j\\sin{\\phi\\over2}\\cos2\\theta \n\\end{array}\\right]\n$$\n\n\n```python\n## are these the same??\nphi = sympy.Symbol('phi', real=True)\nsym_jones.op_retarder(theta,phi)\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}e^{\\frac{i \\phi}{2}} \\cos^{2}{\\left(\\theta \\right)} + e^{- \\frac{i \\phi}{2}} \\sin^{2}{\\left(\\theta \\right)} & 2 i \\sin{\\left(\\frac{\\phi}{2} \\right)} \\sin{\\left(\\theta \\right)} \\cos{\\left(\\theta \\right)}\\\\2 i \\sin{\\left(\\frac{\\phi}{2} \\right)} \\sin{\\left(\\theta \\right)} \\cos{\\left(\\theta \\right)} & e^{\\frac{i \\phi}{2}} \\sin^{2}{\\left(\\theta \\right)} + e^{- \\frac{i \\phi}{2}} \\cos^{2}{\\left(\\theta \\right)}\\end{matrix}\\right]$\n\n\n\n\n```python\ntheta = sympy.Symbol('theta', real=True)\n\nslm = sym_jones.op_retarder(sympy.pi/2,phi)\ni0 = sym_jones.field_linear(sympy.pi/4)\nanal = sym_jones.op_linear_polarizer(theta)\nresult = slm * i0\nresult/result[0]\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1\\\\e^{i \\phi}\\end{matrix}\\right]$\n\n\n\nHaIf the thickness of the retarder is $d$ and the fast and slow indices of refraction are $n_o$ and $n_e$, then the retardance for a wavelength $\\lambda$ is\n$$\n\\phi(\\lambda) = {2\\pi\\over\\lambda} d (n_o-n_e)\n$$\nA half-wave plate has $\\phi(\\lambda)=\\pi$ and a Jones matrix\n$$\n\\mathbf{M}_\\mathrm{\\lambda/2}=j \\left[\\begin{array}{cc}\n\\cos2\\theta & \\sin 2\\theta\\\\\n\\sin2\\theta & -\\cos2\\theta\n\\end{array}\\right]\n$$\n\n\n```python\nsympy.simplify(sym_jones.op_half_wave_plate(theta))\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}i \\cos{\\left(2 \\theta \\right)} & i \\sin{\\left(2 \\theta \\right)}\\\\i \\sin{\\left(2 \\theta \\right)} & - i \\cos{\\left(2 \\theta \\right)}\\end{matrix}\\right]$\n\n\n\nA quarter-wave plate has $\\phi(\\lambda)=\\pi/2$ and\n$$\n\\mathbf{M}_\\mathrm{\\lambda/4}={1\\over\\sqrt{2}}\\left[\\begin{array}{cc}\n1 + j\\cos2\\theta & j\\sin2\\theta\\\\\nj\\sin2\\theta & 1-j\\cos 2\\theta\n\\end{array}\\right]\n$$\n\n\n```python\n## same??\nsym_jones.op_quarter_wave_plate(theta)\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}e^{- \\frac{i \\pi}{4}} \\sin^{2}{\\left(\\theta \\right)} + e^{\\frac{i \\pi}{4}} \\cos^{2}{\\left(\\theta \\right)} & \\sqrt{2} i \\sin{\\left(\\theta \\right)} \\cos{\\left(\\theta \\right)}\\\\\\sqrt{2} i \\sin{\\left(\\theta \\right)} \\cos{\\left(\\theta \\right)} & e^{\\frac{i \\pi}{4}} \\sin^{2}{\\left(\\theta \\right)} + e^{- \\frac{i \\pi}{4}} \\cos^{2}{\\left(\\theta \\right)}\\end{matrix}\\right]$\n\n\n\nA retarder exhibits different retardances at a different wavelengths. Because $n_o-n_e$ changes slowly with wavelength, the phase at $\\lambda_1$ of a wave plated designed for $\\lambda_0$ is\n$$\n\\phi(\\lambda_1) = \\phi(\\lambda_0) {\\lambda_0\\over\\lambda_1}\n$$\nThis is relevant because you will be using phase plates that are created for wavelengths other than 633nm.\n\n### Optical Isolator\n\n\n\n\nLinearly polarized light passes through the polarizer and is converted to right hand circularly polarized light by the quarter wave plate. 
Upon reflection, this beam will change handedness, and upon passage back through the quarter wave plate it is linear again but orthogonal to the polarizer.\n\n\n## Malus's Law\n\nShow that the transmission of linearly-polarized light as it passes through a linear analyzer at an angle $\\theta$.\n\n\n```python\nincident = sym_jones.field_horizontal()\nlinear = sym_jones.op_linear_polarizer(theta)\nresult = linear * incident\nintensity = sym_jones.intensity(result)\nfraction = sympy.simplify(intensity)\nfraction\n```\n\n\n\n\n$\\displaystyle \\cos^{2}{\\left(\\theta \\right)}$\n\n\n\n\n```python\ntheta = np.linspace(0, 180, 100)\nplt.plot(theta, np.cos(np.radians(theta))**2)\n\nplt.title('Malus\\'s Law')\nplt.xlabel( \"Analyzer Angle (degrees)\" )\nplt.ylabel( \"Transmission\" )\nplt.show()\n```\n\n## Crossed polarizers with quarter-wave plate between\n\nShow that the total transmission of horizontally-polarized light that passes through a quarter-wave plate oriented at an angle $\\theta$ and then through a vertical analyzer is $2\\cos^2\\theta\\sin^2\\theta$ and plot the transmitted light as the polarizer is rotated through 180 degrees. \n\n\n\n```python\ntheta = sympy.Symbol('theta', real=True)\nincident = sym_jones.field_horizontal()\n\nplate = sym_jones.op_quarter_wave_plate(theta)\nanalyzer = sym_jones.op_linear_polarizer(sympy.pi/2)\nresult = analyzer * plate * incident\nfraction = sym_jones.intensity(result)\nfraction\n\n```\n\n\n\n\n$\\displaystyle 2 \\sin^{2}{\\left(\\theta \\right)} \\cos^{2}{\\left(\\theta \\right)}$\n\n\n\n\n```python\ntheta = np.linspace(0, 180, 180)\nth = 2*np.radians(theta)\nplt.plot(theta, 0.5*np.sin(th)**2)\n\nplt.title('Horizontal -> Quarter Wave Plate -> Vertical Analyzer')\nplt.xlabel( \"Quarter Waveplate angle (degrees)\" )\nplt.ylabel( \"Transmission\" )\nplt.show()\n```\n\n## Crossed polarizers with half-wave plate between\n\nShow that the total transmission of horizontally-polarized light that passes through a half-wave plate and then through a vertical linear analyzer is $4\\sin^2\\theta\\cos^2\\theta = \\sin^2 2\\theta$ and plot as the analyzer is rotated through 180 degrees. \n\n\n\n```python\ntheta = sympy.Symbol('theta', real=True)\n\nincident = sym_jones.field_horizontal()\nplate = sym_jones.op_half_wave_plate(theta)\nanalyzer = sym_jones.op_linear_polarizer(sympy.pi/2)\n\nresult = analyzer * plate * incident\n\nfraction = sym_jones.intensity(result)\nfraction\n```\n\n\n\n\n$\\displaystyle 4 \\sin^{2}{\\left(\\theta \\right)} \\cos^{2}{\\left(\\theta \\right)}$\n\n\n\n\n```python\ntheta = np.linspace(0, 180, 500)\n\nplt.plot(theta, np.sin(np.radians(2*theta))**2 )\n\nplt.title('Horizontal -> Half Wave Plate -> Vertical Analyzer')\nplt.xlabel( \"Half waveplate Angle (degrees)\" )\nplt.ylabel( \"Transmission\" )\nplt.show()\n```\n\n## Waveplates not at design wavelength\n\nThe phase shift in a halfwave plate depends on the wavelength. If the halfwave plates are designed for 532nm light, but we use them for 633nm light then how will the measurements be affected? 
(Assume the speeds along the fast and slow axis are independent of wavelength).\n\nPlot the transmitted light for both 633 and 532nm light.\n\n\n```python\nf = sympy.Symbol('f', real=True)\ntheta = sympy.Symbol('theta', real=True)\nlambda_0 = sympy.Symbol('lambda_0', real=True)\nlambda_1 = sympy.Symbol('lambda_1', real=True)\nf = lambda_0/lambda_1\n\nincident = sym_jones.field_horizontal()\nplate_1 = sym_jones.op_retarder(theta,f*sympy.pi)\nanalyzer = sym_jones.op_linear_polarizer(sympy.pi/2)\n\nresult_1 = analyzer * plate_1 * incident\n\nfraction_1 = sym_jones.intensity(result_1)\n\nfraction_1\n```\n\n\n\n\n$\\displaystyle 4 \\sin^{2}{\\left(\\theta \\right)} \\sin^{2}{\\left(\\frac{\\pi \\lambda_{0}}{2 \\lambda_{1}} \\right)} \\cos^{2}{\\left(\\theta \\right)}$\n\n\n\n\n```python\nlambda_0 = 532e-9\nlambda_1 = 633e-9\ntheta = np.linspace(0, 180, 500)\nplt.plot(theta, np.sin(np.radians(2*theta))**2 ,label='532nm')\nplt.plot(theta, np.sin(lambda_0*np.pi/2/lambda_1)**2*np.sin(np.radians(2*theta))**2, label='633nm' )\n\nplt.title('Horizontal -> Half Wave Plate -> Vertical Analyzer')\nplt.xlabel( \"Half waveplate Angle (degrees)\" )\nplt.ylabel( \"Transmission\" )\nplt.legend()\nplt.show()\n```\n", "meta": {"hexsha": "e0e82084b337ab52f6e03a2bc2e04bb924028b6c", "size": 128060, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "docs/05a-Symbolic-Jones.ipynb", "max_stars_repo_name": "gyger/pypolar", "max_stars_repo_head_hexsha": "2c212a4dbf0971003a2872db53d257a614f02ace", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 14, "max_stars_repo_stars_event_min_datetime": "2018-07-26T02:46:23.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-14T08:40:24.000Z", "max_issues_repo_path": "docs/05a-Symbolic-Jones.ipynb", "max_issues_repo_name": "gyger/pypolar", "max_issues_repo_head_hexsha": "2c212a4dbf0971003a2872db53d257a614f02ace", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2020-08-04T09:59:51.000Z", "max_issues_repo_issues_event_max_datetime": "2021-06-03T20:02:25.000Z", "max_forks_repo_path": "docs/05a-Symbolic-Jones.ipynb", "max_forks_repo_name": "gyger/pypolar", "max_forks_repo_head_hexsha": "2c212a4dbf0971003a2872db53d257a614f02ace", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2018-07-26T12:43:29.000Z", "max_forks_repo_forks_event_max_datetime": "2020-10-09T18:56:38.000Z", "avg_line_length": 144.5372460497, "max_line_length": 32708, "alphanum_fraction": 0.8716695299, "converted": true, "num_tokens": 3552, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.960361157495521, "lm_q2_score": 0.8688267660487573, "lm_q1q2_score": 0.8343874787056748}} {"text": "$\\newcommand{\\bkt}[1]{\\left(#1\\right)}$\n$\\newcommand{\\dsum}[1]{\\displaystyle\\sum}$\n$\\newcommand{\\spade}{\\bkt{\\spadesuit}}$\n$\\newcommand{\\club}{\\bkt{\\clubsuit}}$\n\nPolynomial Interpolation\n==\n\n1.1 **Introduction**\n\nGiven the values of a function $f(x)$ at $n+1$ distinct locations of $x$, say $\\{x_i\\}_{i=0}^n$, we could approximate $f$ by a polynomial function $p_n(x)$ of degree $n$ that satisfies\n\n$$p_n\\bkt{x_i} = f\\bkt{x_i}$$\n\nWe can construct the polynomial $p_n(x)$ as $p_n(x) = a_0 + a_1 x + a_2 x^2 + \\cdots + a_n x^n$. The $n+1$ coefficients are determined by forcing $p_n(x)$ to pass through the data points. 
This leads to $n+1$ equations in $n+1$ unknowns, $a_0,a_1,\\ldots,a_n$, i.e.,\n$$y_i = a_0 + a_1 x_i + a_2 x_i^2 + \\cdots + a_n x_i^n$$\nfor $i \\in \\{0,1,2,\\ldots,n\\}$. This procedure for finding the coefficients of the polynomial is not very attractive. It involves solving a linear system, whose matrix is extremely ill-conditioned. See below.\n\n\n```python\nimport scipy as sp;\nimport numpy as np;\n\nfrom scipy.stats import binom;\nimport matplotlib.pylab as pl;\nfrom scipy import linalg\nfrom numpy import linalg\nfrom ipywidgets import interact;\n```\n\n\n```python\nNmax = 41;\nN = np.arange(2,Nmax);\nc = np.zeros(Nmax-2);\nfor n in N:\n x = np.linspace(-1,1,n);\n V = np.vander(x,increasing=\"True\");\n c[n-2] = np.linalg.cond(V);\n\npl.semilogy(N,c,'k-+');\n```\n\nA better way to go about interpolating with polynomials is via Lagrange interpolation. Define the Lagrange polynomial $L_i(x)$ to be $1$ when $x=x_i$ and is zero at all the other nodes, i.e.,\n$$L_i\\bkt{x_j} = \\delta_{ij}$$\nWe then have\n$$p_n(x) = \\sum_{i=0}^n f_i L_i(x)$$\nSince we want $L_i(x)$ to vanish at all $x_j$, where $j \\neq i$, we have $L_i(x) = c_i \\prod_{j \\neq i}\\bkt{x-x_j}$. Further, since $L_i\\bkt{x_i} = 1$, we get that $c = \\dfrac1{\\prod_{j \\neq i} \\bkt{x_i-x_j}}$. Hence, we see that\n$$L_i\\bkt{x} = \\prod_{j \\neq i} \\bkt{\\dfrac{x-x_j}{x_i-x_j}}$$\nIf we call $l_i(x) = \\prod_{j \\neq i} \\bkt{x-x_j}$ and $w_i = \\prod_{j \\neq i} \\bkt{\\dfrac1{x_i-x_j}}$, we see that $L_i(x) = w_i l_i(x)$.\n\nFurther, if we set $l(x) = \\prod_{j=0}^n \\bkt{x-x_j}$, we see that $l_i(x) = \\dfrac{l(x)}{x-x_i}$, and hence $L_i(x) = \\dfrac{w_il(x)}{x-x_i}$ and hence we see that\n\\begin{align}\np_n(x) = l(x) \\bkt{\\sum_{i=0}^n \\dfrac{w_i f_i}{x-x_i}} \\,\\,\\, \\spade\n\\end{align}\nNote that $\\spade$ is an attractive way to compute the Lagrange interpolant. It requires $\\mathcal{O}\\bkt{n^2}$ work to calculate $w_i$'s, followed by $\\mathcal{O}(n)$ work to compute the interpolant for each $x$. This is called as the ***first form of Barycentric interpolation***.\n\nWhat about updating when a new interpolation node $x_{n+1}$ is added? There are only two steps involved.\n\n- For $i \\in\\{0,1,\\ldots,n\\}$, divide each $w_i$ by $\\bkt{x_i-x_{n+1}}$. Cost is $n+1$ flops.\n- Compute $w_{n+1}$ for another $n+1$ flops.\n\nHence, we see that the Lagrange interpolant can also be updated at $\\mathcal{O}(n)$ flops.\n\nThe above barycentric formula can be made even more elegant in practice. Note that the function $1$ gets interpolated by any polynomial exactly. Hence, we see that\n$$1 = l(x) \\bkt{\\sum_{i=0}^n \\dfrac{w_i}{x-x_i}}$$\nwhich gives us that\n$$l(x) = \\dfrac1{\\bkt{\\displaystyle\\sum_{i=0}^n \\dfrac{w_i}{x-x_i}}}$$\nHence, we obtain the ***second form of Barycentric interpolation***\n\\begin{align}\np_n(x) = \\dfrac{\\bkt{\\displaystyle\\sum_{i=0}^n \\dfrac{w_i f_i}{x-x_i}}}{\\bkt{\\displaystyle\\sum_{i=0}^n \\dfrac{w_i}{x-x_i}}} \\,\\,\\, \\club\n\\end{align}\nAgain note that it is fairly easy to incorporate a new intepolation node at a cost of $\\mathcal{O}(n)$. 
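As an aside, here is a minimal sketch (added for illustration; it is not part of the original notes) of the second form of barycentric interpolation. The function names are ours, and the check against a cubic is just a sanity test:

```python
import numpy as np

def barycentric_weights(xnodes):
    # w_i = 1 / prod_{j != i} (x_i - x_j)
    w = np.ones(len(xnodes))
    for i in range(len(xnodes)):
        for j in range(len(xnodes)):
            if j != i:
                w[i] /= (xnodes[i] - xnodes[j])
    return w

def barycentric_eval(x, xnodes, fnodes, w):
    # Second barycentric form; return the nodal value if x hits a node exactly.
    diff = x - xnodes
    hit = np.isclose(diff, 0.0)
    if hit.any():
        return fnodes[hit][0]
    terms = w / diff
    return np.sum(terms * fnodes) / np.sum(terms)

# Sanity check: a degree-3 polynomial is reproduced exactly by 5 nodes.
xn = np.linspace(-1, 1, 5)
fn = xn**3 - xn
w = barycentric_weights(xn)
print(barycentric_eval(0.3, xn, fn, w), 0.3**3 - 0.3)
```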
Below is the Lagrange interpolation using the original form.\n\n\n```python\ndef function(x):\n# f = np.abs(x)+x/2-x**2;\n f = 1.0/(1+25*x*x);\n# f = np.abs(x+0.3) + np.abs(x-0.2) + + np.abs(x*x*x*x-0.8);\n return f;\n\ndef Lagrange(xnodes,x,i):\n f = 1;\n nnodes = np.size(xnodes);\n for j in range(0,i):\n f = f*(x-xnodes[j])/(xnodes[i]-xnodes[j]);\n for j in range(i+1,nnodes):\n f = f*(x-xnodes[j])/(xnodes[i]-xnodes[j]);\n return f;\n \ndef Chebyshev(nnodes,xplot):\n # Chebyshev node interpolation\n xnodes = np.cos(np.arange(0,nnodes)*np.pi/(nnodes-1));\n fnodes = function(xnodes);\n fplot = 0;\n for i in range(0,nnodes):\n fplot = fplot + fnodes[i]*Lagrange(xnodes,xplot,i);\n return xnodes, fnodes, fplot;\n\ndef Uniform(nnodes,xplot):\n # Uniform node interpolation\n xnodes = np.linspace(-1,1,nnodes);\n fnodes = function(xnodes);\n fplot = 0;\n for i in range(0,nnodes):\n fplot = fplot + fnodes[i]*Lagrange(xnodes,xplot,i);\n return xnodes, fnodes, fplot;\n\ndef Bernstein(n,xplot):\n xnodes = np.linspace(-1,1,nnodes);\n fnodes = function(xnodes);\n fplot = 0;\n for i in range(0,nnodes):\n fplot = fplot + sp.stats.binom.pmf(i,nnodes-1,0.5+0.5*xplot)*function(xnodes[i]);\n return fplot;\n \nnplot = 1001;\nxplot = np.linspace(-1,1,nplot);\nf_actual = function(xplot);\n@interact\ndef inter(nnodes=(5,45,2)):\n xnodes, fnodes, fplot = Chebyshev(nnodes,xplot);\n# xnodes, fnodes, fplot = Uniform(nnodes,xplot);\n # fplot = Bernstein(nnodes,xplot);\n\n error = f_actual-fplot;\n print(np.amax(np.abs(error)))\n pl.plot(xplot,f_actual,'-');\n pl.plot(xplot,fplot,'r');\n pl.rcParams[\"figure.figsize\"] = [16,4];\n```\n\n\n interactive(children=(IntSlider(value=25, description='nnodes', max=45, min=5, step=2), Output()), _dom_classe\u2026\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "f8a0d27f8bbe43b7484d7952e2b7725e2bcf396c", "size": 20575, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Lectures/Univariate_Interpolation.ipynb", "max_stars_repo_name": "sivaramambikasaran/2019_NMSC", "max_stars_repo_head_hexsha": "05ab65ccfa8ddca5d19e4ac86bf581d37781213d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2019-09-25T05:22:07.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-30T16:04:44.000Z", "max_issues_repo_path": "Lectures/Univariate_Interpolation.ipynb", "max_issues_repo_name": "sivaramambikasaran/NMSC_19", "max_issues_repo_head_hexsha": "05ab65ccfa8ddca5d19e4ac86bf581d37781213d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Lectures/Univariate_Interpolation.ipynb", "max_forks_repo_name": "sivaramambikasaran/NMSC_19", "max_forks_repo_head_hexsha": "05ab65ccfa8ddca5d19e4ac86bf581d37781213d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 95.2546296296, "max_line_length": 12408, "alphanum_fraction": 0.7906196841, "converted": true, "num_tokens": 1948, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.8976952838963489, "lm_q2_score": 0.9294403989265462, "lm_q1q2_score": 0.8343542627791016}} {"text": "# Singular value decomposition (SVD)\n\nThe singular value decompostion of a real-valued $m \\times n$ matrix $\\boldsymbol{A}$ is:\n\n$$\n\\boldsymbol{A} = \\boldsymbol{U} \\boldsymbol{\\Sigma} \\boldsymbol{V}^{T}\n$$\n\nwhere\n\n- $\\boldsymbol{U}$ is an $m \\times m$ orthogonal matrix;\n- $\\boldsymbol{\\Sigma}$ is an $m \\times n$ diagonal matrix with diagonal entries $\\sigma_{1} \\ge \\sigma_{2} \\ge \\ldots \\ge \\sigma_{p} \\ge 0$, where $p = \\min(m, n)$; and\n- $\\boldsymbol{U}$ is an $n \\times n$ orthogonal matrix.\n\nWe will use NumPy to compute the SVD and Matplotlib to visualise results, so we first import some modules:\n\n\n```python\n%matplotlib inline\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nfrom PIL import Image\n```\n\n**Note:** If you run this notebook yourself it can take sometime because it computes number of moderate size SVD problems.\n\n## Low rank approximations\n\nRecall that we can represent a matrix as a sum of rank-1 matrices:\n\n$$\n\\boldsymbol{A} = \\sum_{i} \\sigma_{i} \\boldsymbol{u}_{i} \\boldsymbol{v}^{T}_{i}\n$$\n\nwhere $\\sigma_{i}$ is the $i$th singular value and $\\boldsymbol{u}_{i}$ and $\\boldsymbol{v}_{i}$ are the $i$th columns vectors of $\\boldsymbol{U}$ and $\\boldsymbol{V}$, respectively from the SVD. Clearly, for any $\\sigma_{i} = 0$ we can avoid storing the data that makes no contribution. If $\\sigma_{i}$ is small, then the contribution of $\\boldsymbol{u}_{i} \\boldsymbol{v}^{T}_{i}$ is small and we discard it and introduce only a small 'error' to the matrix. We will use low rank approximations in a number of examples in this notebook.\n\n## Data compression\n\nWe start with a $100 \\times 200$ matrix that has entries equal to one or zero. 
We create a matrix with all entries set to zero, and we then set some entries equal to one in the pattern of rectangle.\n\n\n```python\nA = np.ones((100, 200))\nA[33:33 + 4, 33:133] = 0.0\nA[78:78 + 4, 33:133] = 0.0\nA[33:78+4, 33:33+4] = 0.0\nA[33:78+4, 129:129+4] = 0.0\nplt.imshow(A, cmap='gray', interpolation='none')\nplt.show()\n```\n\nPerforming the SVD and counting the number of singular values that are greater than $10^{-9}$:\n\n\n```python\nU, s, V = np.linalg.svd(A, full_matrices=False)\nprint(\"Number of singular values greater than 1.0e-9: {}\".format((s > 1.0e-9).sum()))\n```\n\nWith only three nonzero singular values, we could reconstruct the matrix with very little data - just three singular values and six vectors.\n\n### Removing noise\n\nWe consider the same matrix problem again, this time with some back ground noise in the white regions.\n\n\n```python\nA = np.ones((100, 200))\nA = A - 1.0e-1*np.random.rand(100, 200)\nA[33:33 + 4, 33:133] = 0.0\nA[78:78 + 4, 33:133] = 0.0\nA[33:78+4, 33:33+4] = 0.0\nA[33:78+4, 129:129+4] = 0.0\nplt.imshow(A, cmap='gray', interpolation='none'); \n```\n\nThe effect of the noise is clear in the image.\n\nWe can try to eliminate much of the background noise via a low-rank approximation of the noisy image that discards information associated with small singular values of the matrix.\n\n\n```python\n# Compute SVD of nois matrix\nU, s, V = np.linalg.svd(A, full_matrices=False)\n\n# Set any singular values less than 1.0 equation zero\ns[s < 1.0] = 0.0\n\n# Reconstruct low rank approximation and display\nA_denoised = np.dot(U, np.dot(np.diag(s), V))\nplt.imshow(A_denoised, cmap='gray', interpolation='none')\nplt.show();\n```\n\nWe can see that much of the noise in the image has been eliminated.\n\n## Image compression\n\n### Gray scale image\n\nWe load a colour PNG file. It uses three colour channels (red/green/blue), with at each pixel an 8-bit unsigned integer (in the range $[0, 255]$, but sometimes represented as a float) for each colour for the colour intensity. This is know as 24-bit colour - three channels times 8 bit.\n\nWe load the image as three matrices (red, green, blue), each with dimension equal to the number pixels in each direction: \n\n\n```python\nfrom urllib.request import urlopen\nurl = \"https://github.com/garth-wells/notebooks-3M1/raw/master/photo/2020-1.png\"\nimg_colour = Image.open(urlopen(url))\nimg_colour = img_colour.convert('RGB')\n\nprint(\"Image size (pixels):\", img_colour.size)\nprint(\"Image array shape: \", np.array(img_colour).shape)\n\nplt.figure(figsize=(15, 15/1.77))\nplt.imshow(img_colour);\n```\n\nWe could work with the colour image, but it is simpler to work with a gray scale image because then we have only one value for the colour intensity at each pixel rather than three (red/green/blue).\n\n\n```python\nimg_bw = img_colour.convert('L')\n\nplt.figure(figsize=(15, 15/1.77))\nplt.imshow(img_bw, cmap='gray'); \nprint(\"Image array shape: {}\".format(img_bw.size))\n\nplt.savefig(\"bw.pdf\")\n```\n\nWe can convert the image to a regular matrix with values between 0 and 255, with each entry corresponding to a pixel in the image. Creating the matrix and inspecting first four rows and three columns (top left corner of the image):\n\n\n```python\nimg_array = np.array(img_bw)\nprint(\"Image shape:\", img_array.shape)\nprint(img_array[:4, :3])\n```\n\nNow, maybe we can discard information associated with small singular values without perceiving any visual change in the image. 
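It is worth noting what a low-rank approximation would save: storing $r$ singular values together with the first $r$ columns of $\boldsymbol{U}$ and $\boldsymbol{V}$ requires roughly $r(m + n + 1)$ numbers instead of $mn$. A rough estimate for this image (a sketch added here, not part of the original notebook):

```python
# Rough storage estimate for a rank-r approximation of the gray scale image
# (illustrative sketch, not part of the original notebook).
m, n = img_array.shape
for r in (10, 50, 100):
    kept = r*(m + n + 1)
    print("r = {:4d}: {} numbers vs {} ({:.1%} of the original)".format(r, kept, m*n, kept/(m*n)))
```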
To explore this, we compute the SVD of the gray scale image:\n\n\n```python\nU, s, V = np.linalg.svd(img_array, full_matrices=False)\n```\n\nThe argument `full_matrices=False` tells NumPy to not store all the redundant zero terms in the $\\boldsymbol{\\Sigma}$ array. This is the normal approach in practice, but not in most text books. Note that NumPy return the singular values as a one-dimendional array, not as a matrix.\n\nWe now print the largest and smallest singular values, and plot all the singular values $\\sigma_{i}$ on a log-scale:\n\n\n```python\nprint(\"Number of singular values: {}\".format(len(s)))\nprint(\"Max, min singular values: {}, {}\".format(s[0], s[-1]))\n\nplt.xlabel('$i$')\nplt.ylabel('$\\sigma_i$')\nplt.title('Singular values')\nplt.yscale('log')\nplt.plot(s, 'bo');\n\nplt.savefig(\"bw-svd.pdf\")\n```\n\nWe can now try compressing the image. We first try retaining using only the largest 25% of values: \n\n\n```python\n# Compute num_sigma/4 (25%) and zero values \nr = int(0.25*len(s)) \n\n# Re-construct low rank approximation (this may look a little cryptic, but we use the below \n# expression to avoid unecessary computation)\ncompressed = U[:,:r].dot(s[:r, np.newaxis]*V[:r,:])\ncompressed = compressed.astype(int)\n\n# Plot compressed and original image\nfig, axes = plt.subplots(nrows=1, ncols=2, figsize=(18, 18/1.77));\naxes[0].set_title('Compressed image with largest 25% of singular values retained')\naxes[0].imshow(compressed, cmap='gray');\naxes[1].set_title('Original image')\naxes[1].imshow(img_array, cmap='gray');\n```\n\nWe have discarded 3/4 of the singular values, but can barely perceive a difference in the image.\n\nTo explore other levels of compression, we write a function that takes the fraction of singular values we wish to retain:\n\n\n```python\ndef compress_image(U, s, V, f):\n \"Compress image where 0 < f <= 1 is the fraction on singular values to retain\"\n r = int(f*len(s))\n return (U[:,:r].dot(s[:r, np.newaxis]*V[:r,:])).astype(int)\n```\n\nLet's try retaining just 10% of the singular values:\n\n\n```python\n# Compress image/matrix\ncompressed = compress_image(U, s, V, 0.1)\n\n# Plot compressed and original image\nfig, axes = plt.subplots(nrows=1, ncols=2, figsize=(20, 20/1.77))\naxes[0].set_title('Compressed image with largest 10% of singular values retained')\naxes[0].imshow(compressed, cmap='gray');\naxes[1].set_title('Original image')\naxes[1].imshow(img_array, cmap='gray');\n\nplt.savefig(\"bw-0-10.pdf\")\n```\n\nEven with only 10% if the singular values retains, it is hard to perceive a difference between the images. Next we try keeping only 2%: \n\n\n```python\n# Compress image/matrix\ncompressed = compress_image(U, s, V, 0.02)\n\n# Plot compressed and original image\nfig, axes = plt.subplots(nrows=1, ncols=2, figsize=(20, 20/1.77))\naxes[0].set_title('Compressed image with largest 2% of singular values retained')\naxes[0].imshow(compressed, cmap='gray');\naxes[1].set_title('Original image')\naxes[1].imshow(img_array, cmap='gray');\n\nplt.savefig(\"bw-0-02.pdf\")\n```\n\nWe now see some image clear degradation, but the image is sill recognisable. We'll try one more case where we retain only 0.5% of the singular values. 
\n\n\n```python\n# Compress image/matrix\ncompressed = compress_image(U, s, V, 0.005)\n\n# Plot compressed and original image\nfig, axes = plt.subplots(nrows=1, ncols=2, figsize=(20, 20/1.77))\naxes[0].set_title('Compressed image with largest 0.5% of singular values retained')\naxes[0].imshow(compressed, cmap='gray');\naxes[1].set_title('Original image')\naxes[1].imshow(img_array, cmap='gray');\n\nplt.savefig(\"bw-0-005.pdf\")\n```\n\nThe image quality is now quite poor.\n\n### Colour image: RGB\n\nWe'll now try compressing a colour image.\n\n\n```python\nprint(\"Image array shape: {}\".format(img_colour.size))\n\nplt.figure(figsize=(20,20/1.77))\nplt.title('This is a photo of 2020 3M1 class members')\nplt.imshow(img_colour);\n```\n\nWe can extract the red, green and blue components to have a look:\n\n\n```python\n# Display red, green and blue channels by zeroing other channels\nfig, axes = plt.subplots(nrows=1, ncols=3, figsize=(20, 20/1.77))\nimg_array = np.array(img_colour)\n\n# Zero the g/b channels\nred = img_array.copy()\nred[:,:,(1,2)] = 0.0\naxes[0].imshow(red);\n\n# Zero the r/b channels\ngreen = img_array.copy()\ngreen[:,:,(0,2)] = 0.0\naxes[1].imshow(green);\n\n# Zero the r/g channels\nblue = img_array.copy()\nblue[:,:,(0,1)] = 0.0\naxes[2].imshow(blue);\n```\n\nWe now compute an SVD for the matrix of each colour:\n\n\n```python\n# Compute SVD for each colour\nU, s, V = [0]*3, [0]*3, [0]*3\nfor i in range(3):\n U[i], s[i], V[i] = np.linalg.svd(img_array[:, :, i], full_matrices=False)\n```\n\nCompressing the matrix for each colouring separately and then reconstructing the three-dimensional array:\n\n\n```python\n# Compress each colour separately\ncompressed = [compress_image(U[i], s[i], V[i], 0.1) for i in range(3)]\n\n# Reconstruct 3D RGB array and filter any values outside of (0, 1)\ncompressed = np.dstack(compressed)\n```\n\nComparing the compressed and original images side-by-side:\n\n\n```python\nfig, axes = plt.subplots(nrows=1, ncols=2, figsize=(20, 20/1.77))\naxes[0].set_title('Image with largest 10% of singular values retained')\naxes[0].imshow(compressed, interpolation=\"nearest\");\naxes[1].set_title('Original image')\naxes[1].imshow(img_colour);\n```\n\nRetaining 10% of the singular values for each colour, we can see some artifacts in the compressed image, which indicates that using the SVD for each colour independently is probably not a good idea.\n\n### Colour image: YCbCr\n\nA better approach is to split the image into [YCbCr](https://en.wikipedia.org/wiki/YCbCr), rather than RGB.\nYCbCr is splits the image into luminance (Y), and chrominance (Cb and Cr) colour values.\n\n\n```python\nimg_colour_ycbcr = np.array(img_colour.convert(\"YCbCr\"))\n```\n\n\n```python\n# Display Luminance(Y), Blue Chroma(Cb) and Red Chroma(Cr) channels\nfig, axes = plt.subplots(nrows=1, ncols=3, figsize=(20, 20/1.77))\n\nY = img_colour_ycbcr[:,:,0]\naxes[0].imshow(Y, cmap='gray');\n\nCb = img_colour_ycbcr[:,:,1]\naxes[1].imshow(Cb, cmap='gray');\n\nCr = img_colour_ycbcr[:,:,2]\naxes[2].imshow(Cr, cmap='gray');\n```\n\nCompute the SVD of each channel:\n\n\n```python\n# Compute SVD for each channel\nU, s, V = [0]*3, [0]*3, [0]*3\nfor i in range(3):\n U[i], s[i], V[i] = np.linalg.svd(img_colour_ycbcr[:, :, i], full_matrices=False)\n```\n\nCompress each channel, and display compressed channels in gray scale:\n\n\n```python\n# Compress each component separately\ncompressed = [compress_image(U[0], s[0], V[0], 0.05),\n compress_image(U[1], s[1], V[1], 0.005),\n compress_image(U[2], s[2], V[2], 
0.005)]\n# Reconstruct 3D YCbCr array\ncompressed = np.dstack(compressed)\nfig, axes = plt.subplots(nrows=1, ncols=3, figsize=(20, 20/1.77))\nY = compressed[:,:,0]\naxes[0].imshow(Y, cmap='gray');\n\nCb = compressed[:,:,1]\naxes[1].imshow(Cb, cmap='gray');\n\nCr = compressed[:,:,2]\naxes[2].imshow(Cr, cmap='gray');\n```\n\nCombine compressed channels:\n\n\n```python\nfig, axes = plt.subplots(nrows=1, ncols=2, figsize=(20, 20/1.77))\naxes[0].set_title('Image with largest 20% of brightness singular values retained and 0.5% colours')\nim = Image.fromarray(np.uint8(compressed), mode=\"YCbCr\")\naxes[0].imshow(im)\naxes[1].set_title('Original image')\naxes[1].imshow(img_colour);\n```\n\n### Interactive compression\n\nWe'll now create an interactive image with sliders to interactively control the compression level.\n\n\n```python\nfrom ipywidgets import widgets\nfrom ipywidgets import interact\n\nurl = \"https://github.com/garth-wells/notebooks-3M1/raw/master/photo/IMG_20190117_141222563.png\"\nimg = Image.open(urlopen(url))\nimg_colour_ycbcr = np.array(img.convert(\"YCbCr\"))\n\n# Compute SVD for each channel\nU0, s0, V0 = [0]*3, [0]*3, [0]*3\nfor i in range(3):\n U0[i], s0[i], V0[i] = np.linalg.svd(img_colour_ycbcr[:, :, i], full_matrices=False)\n\n@interact(ratio_Y=(0.005, 0.4, 0.02), \n ratio_Cb=(0.001, 0.1, 0.01), \n ratio_Cr=(0.001, 0.1, 0.01))\ndef plot_image(ratio_Y=0.1, ratio_Cb=0.01, ratio_Cr=0.01):\n\n compressed = [compress_image(U0[0], s0[0], V0[0], ratio_Y),\n compress_image(U0[1], s0[1], V0[1], ratio_Cb),\n compress_image(U0[2], s0[2], V0[2], ratio_Cr)]\n\n # Reconstruct 3D YCbCr array\n compressed = np.dstack(compressed) \n img_compressed = Image.fromarray(np.uint8(compressed), mode=\"YCbCr\")\n\n # Show\n fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(20, 20/1.77))\n\n axes[0].set_title('Compressed image')\n axes[0].imshow(img_compressed)\n\n axes[1].set_title('Original image')\n axes[1].imshow(img)\n```\n\n## Effective rank\n\nDetermining the rank of a matrix is not a binary question in the context of floating point arithmetic or measurement errors. The SVD can be used to determine the 'effective rank' of a matrix. \n\nConsider the matrix:\n\n\n```python\nA = np.array([[1, 1, 1], [2, 2, 2], [1, 0 ,1]])\nprint(A)\n```\n\nClearly the first two rows are linearly dependent and the rank of this matrix is 2. We can verify this using NumPy:\n\n\n```python\nprint(\"Rank of A is: {}\".format(np.linalg.matrix_rank(A)))\n```\n\nWe now add some noise in the range $(0, 10^{-6})$ to the matrix entries:\n\n\n```python\nnp.random.seed(10)\nA = A + 1.0e-6*np.random.rand(A.shape[0], A.shape[1])\n```\n\nWe now test the rank:\n\n\n```python\nprint(\"Rank of A (with noise) is: {}\".format(np.linalg.matrix_rank(A)))\n```\n\nThe problem is that we have a 'data set' that is linearly dependent, but this is being masked by very small measurement noise. 
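A pragmatic check (a small sketch added here, not from the original notebook) is to pass an explicit tolerance when asking NumPy for the rank, so that singular values below the expected noise level are ignored:

```python
# With a tolerance well above the ~1e-6 noise level, the rank is reported as 2 again.
print(np.linalg.matrix_rank(A, tol=1.0e-5))
```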
\n\nComputing the SVD of the matrix with noise and printing the singular values:\n\n\n```python\nU, s, V = np.linalg.svd(A)\nprint(\"The singular values of A (with noise) are: {}\".format(s))\n```\n\nIf we define the effective rank as the number of singular values that are greater than the noise level, the effective rank of $\\boldsymbol{A}$ is 2.\n\n## Rank deficient least-squares problems\n\nFor least squares problem, we have seen before that we solve\n\n$$\n\\boldsymbol{A}^{T} \\boldsymbol{A} \\hat{\\boldsymbol{x}} = \\boldsymbol{A}^{T} \\boldsymbol{b}\n$$\n\nor\n\n$$\n\\begin{align}\n\\hat{\\boldsymbol{x}} &= (\\boldsymbol{A}^{T} \\boldsymbol{A})^{-1} \\boldsymbol{A}^{T} \\boldsymbol{b}\n\\\\\n&= \\boldsymbol{A}^{+}\\boldsymbol{b}\n\\end{align}\n$$\n\nEverything is fine as long as $\\boldsymbol{A}$ is full rank. The problem is that we might have data that leads to $\\boldsymbol{A}$ not being full rank. For example, if we try to fit a polynomial in $x$ and $y$, but the data lies on a line. \n\nWe have covered in the lectures how to handle least-squares problems that are rank deficient. Here we present an example.\n\n### Example: fitting points in a two-dimensional space\n\nSay we are given four data points that depend on $x$ and $y$, and we are asked to fit a polynomial of the form\n\n$$\nf(x, y) = c_{00} + c_{10}x + c_{01}y + c_{11}xy\n$$\n\nto the data points. Normally, we would expect to be able to fit the above polynomial to four data points by interpolation, i.e. solving $\\boldsymbol{A} \\boldsymbol{c} = \\boldsymbol{f}$ where\n$\\boldsymbol{A}$ a square Vandermonde matrix. However, if the points happened to lie on a line, then $\\boldsymbol{A}$ will be singular. If the points happen to almost lie on a line, then $\\boldsymbol{A}$ will be close to singular. \n\nA possibility is to exclude zero or small singular values from the process, thereby finding a least-squares fit with minimal $\\|\\boldsymbol{c}\\|_{2}$. We test this for the data set \n\n\\begin{equation}\nf_{1}(1, 0) = 3, \\\\\nf_{2}(2, 0) = 5, \\\\\nf_{3}(3, 0) = 7, \\\\\nf_{4}(4, 0) = 9.\n\\end{equation}\n\nThe data lies on the line $y = 0$, and is in fact is linear in $x$.\n\nWe create arrays to hold this data, and visualise the points:\n\n\n```python\nx, y, f = np.zeros(4), np.zeros(4), np.zeros(4)\nx[0], y[0], f[0] = 1.0, 0.0, 3.0\nx[1], y[1], f[1] = 2.0, 0.0, 5.0\nx[2], y[2], f[2] = 3.0, 0.0, 7.0\nx[3], y[3], f[3] = 4.0, 0.0, 9.0\n\nfrom mpl_toolkits.mplot3d import Axes3D\nfig = plt.figure()\nax = fig.add_subplot(111, projection='3d')\nax.set_xlabel('x')\nax.set_ylabel('y')\nax.set_zlabel('f')\nax.scatter(x, y, f)\nplt.show()\n```\n\nTo find the polynomial coefficients we want to solve\n\n\\begin{equation}\n\\begin{bmatrix}\n1 & x_{1} & y_{1} & x_{1}y_{1} \\\\ \n1 & x_{2} & y_{2} & x_{2}y_{2} \\\\ \n1 & x_{3} & y_{3} & x_{3}y_{3} \\\\ \n1 & x_{4} & y_{4} & x_{4}y_{4} \\\\ \n\\end{bmatrix}\n\\begin{bmatrix}\nc_{00} \\\\ c_{10} \\\\ c_{01} \\\\ c_{11} \n\\end{bmatrix}\n=\n\\begin{bmatrix}\nf_{1} \\\\ f_{2} \\\\ f_{3} \\\\ f_{4} \n\\end{bmatrix}\n\\end{equation}\n\nwhere the matrix is the Vandermonde matrix. 
We can use a NumPy function to create the Vandermonde matrix:\n\n\n```python\nA = np.polynomial.polynomial.polyvander2d(y, x, [1, 1])\nprint(A)\n```\n\nIt is clear by inspection that $\\boldsymbol{A}$ is not full rank, and is rank 2.\n\nComputing the SVD of $\\boldsymbol{A}$ and printing the singular values:\n\n\n```python\nU, s, V = np.linalg.svd(A)\nprint(s)\n```\n\nWe can see that two of the singular values are zero. To find a least-squares fit to the data with minimal $\\| \\boldsymbol{c}\\|_{2}$ we compute\n\n$$\n\\hat{\\boldsymbol{c}} = \\boldsymbol{V}_{1} \\boldsymbol{\\Sigma}^{+} \n\\boldsymbol{U}_{1}^{T}\\boldsymbol{b}\n$$\n\nCreating $\\boldsymbol{V}_{1}$, $\\boldsymbol{\\Sigma}^{+}$ and $\\boldsymbol{U}_{1}$ (recall that the NumPy SVD returns $\\boldsymbol{V}^{T}$ rather than $\\boldsymbol{V}$): \n\n\n```python\n# Create view of U with last two columns removed \nU1 = U[:, :2]\n\n# Create view of V with last two columns removed \nV1 = V[:2,:]\n\n# Create Sigma^{+} by inverting the nonzero singular values and \n# discarding the zero singular values\nS1 = np.diag(1.0/s[:-2])\nprint(S1)\n```\n\nComputing the least-squares solution from $\\hat{\\boldsymbol{c}} = \\boldsymbol{V}_{1} \\boldsymbol{\\Sigma}^{+} \\boldsymbol{U}_{1}^{T}\\boldsymbol{b}$:\n\n\n```python\nc = np.transpose(V1).dot(S1.dot(U1.T).dot(f))\nprint(c)\n```\n\nThe solution is $f(x, y) = 1 + 2x$, which in this case in fact interpolates the data points. Plotting the function, we have a plane that passes through the points.\n\n\n```python\nfig = plt.figure()\nax = fig.add_subplot(111, projection='3d')\n\n# Plot points\nax.set_xlabel('$x$')\nax.set_ylabel('$y$')\nax.set_zlabel('$f$')\nax.scatter(x, y, f)\n\n# Plot surface\nX = np.arange(0, 5, 0.2)\nY = np.arange(-5, 5, 0.2)\nX, Y = np.meshgrid(X, Y)\nZ = 1.0 + 2.0*X + 0.0*Y\nsurf = ax.plot_surface(X, Y, Z, rstride=5, cstride=5, alpha=0.1)\nax.view_init(elev=30, azim=80)\nplt.show()\n```\n\nWe now try adding some noise to the sample positions and the measured values. The Vandermonde matrix is no longer singular so we can solve $\\boldsymbol{A} \\boldsymbol{c} = \\boldsymbol{f}$ to get the polynomial coefficients:\n\n\n```python\nnp.random.seed(20)\nxn = x + 1.0e-3*(1.0 - np.random.rand(len(x)))\nyn = y + 1.0e-3*(1.0 - np.random.rand(len(y)))\nfn = f + 1.0e-3*(1.0 - np.random.rand(len(f)))\n\nA = np.polynomial.polynomial.polyvander2d(yn, xn, [1, 1])\nc = np.linalg.solve(A, fn)\nprint(c)\n```\n\nWe now see significant coefficients for the $y$ and $xy$ terms in the interpolating polynomial just as a consequence of adding small amount of noise. Plotting the surface and the points, we see in dramatic impact of the noise.\n\n\n```python\nfig = plt.figure()\nax = fig.add_subplot(111, projection='3d')\n\n# Plot points\nax.set_xlabel('$x$')\nax.set_ylabel('$y$')\nax.set_zlabel('$f$')\nax.scatter(xn, yn, fn)\n\n# Plot surface\nX = np.arange(0, 5, 0.2)\nY = np.arange(-5, 5, 0.2)\nX, Y = np.meshgrid(X, Y)\nZ = c[0] + c[1]*X + c[2]*Y + c[3]*X*Y\nsurf = ax.plot_surface(X, Y, Z, rstride=5, cstride=5, alpha=0.1)\nax.view_init(elev=30, azim=80)\nplt.show()\n```\n\nPerforming an SVD on the matrix with noise and printing the singular values:\n\n\n```python\nU, s, V = np.linalg.svd(A)\nprint(s)\n```\n\nWe see that two of the values are considerably small than the others. 
If we set these to zero and follow the least-squares procedure for rank-deficient problems:\n\n\n```python\n# Create view of U with last two columns removed \nU1 = U[:, :2]\n\n# Create view of V with last two columns removed \nV1 = V[:2,:]\n\n# Create \\Sigma^{+}\nS1 = np.diag(1.0/s[:-2])\n\nc = np.transpose(V1).dot(S1.dot(U1.T).dot(f))\nprint(c)\n```\n\nWe see that the fitting polynomial is very close to the noise-free case.\n\n## Principal component analysis \n\nPrincipal component analysis finds a transformation such that the covariance of a data set is zero in the transformed directions, and the variance in these directions is greatest. From a dataset this tells us which are the 'important' parameters in a system. \n\nConsider taking $N = 200$ measurements of two quantities $x_{1}$ and $x_{2}$. We model the system by:\n\n\n```python\nnp.random.seed(1)\nx0 = np.random.randn(200) + 5.0\nx1 = 1.5*x0 + np.random.rand(len(x0))\n\nax = plt.axes()\nax.scatter(x0, x1, alpha=0.5);\nax.set_xlabel('$x_{1}$');\nax.set_ylabel('$x_{2}$');\n```\n\nWe collect the data in a $200 \\times 2$ matrix $\\boldsymbol{X}$ (200 measurements, 2 variables):\n\n\n```python\nX = np.column_stack((x0, x1))\n```\n\nWe can compute the covariance matrix $\\boldsymbol{C}$ by making the columns of $\\boldsymbol{X}$ zero mean and computing $\\boldsymbol{X}^{T}\\boldsymbol{X}/(N-1)$:\n\n\n```python\nfor c in range(X.shape[1]):\n    X[:,c] = X[:,c] - np.mean(X[:,c])\nC = (X.T).dot(X)/(len(x0)-1.0)\n```\n\nThe covariance matrix is square and symmetric, so we can diagonalise it by computing the eigenvalues and eigenvectors.\n\nWe can do this via the SVD of $\\boldsymbol{C}$, since for a symmetric matrix $\\boldsymbol{V}$ is made of the eigenvectors of $\\boldsymbol{C}$ (and hence of $\\boldsymbol{X}^{T}\\boldsymbol{X}$):\n\n\n```python\nU, s, V = np.linalg.svd(C)\nprint(s)\n```\n\nPlotting the data set and the principal directions:\n\n\n```python\nax = plt.axes()\nax.set_aspect(1.0);\nax.set_ylim(-4.0, 4.0);\nax.set_xlabel('$x_{1}$')\nax.set_ylabel('$x_{2}$')\nax.quiver(V[0, 0], V[0, 1], angles='xy',scale_units='xy',scale=0.3);\nax.quiver(V[1, 0], V[1, 1], angles='xy',scale_units='xy',scale=1);\nax.scatter(X[:,0], X[:,1], alpha=0.2);\n```\n\nPCA effectively detects correlation in a data set. 
In the above example it suggest that the system could be modelled with one variable in the direction of the first column of $\\boldsymbol{V}$.\n", "meta": {"hexsha": "2ed90c29409b0085ee86d280f634b944c75d2c91", "size": 37739, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "04-SingularValueDecomposition.ipynb", "max_stars_repo_name": "garth-wells/3m1-notebooks", "max_stars_repo_head_hexsha": "2f3d65b10a40da66270549dfdd901c7ec5c928aa", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 20, "max_stars_repo_stars_event_min_datetime": "2017-01-24T21:10:34.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-23T22:50:05.000Z", "max_issues_repo_path": "04-SingularValueDecomposition.ipynb", "max_issues_repo_name": "garth-wells/3m1-notebooks", "max_issues_repo_head_hexsha": "2f3d65b10a40da66270549dfdd901c7ec5c928aa", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2016-01-21T20:45:14.000Z", "max_issues_repo_issues_event_max_datetime": "2021-01-28T16:32:38.000Z", "max_forks_repo_path": "04-SingularValueDecomposition.ipynb", "max_forks_repo_name": "garth-wells/3m1-notebooks", "max_forks_repo_head_hexsha": "2f3d65b10a40da66270549dfdd901c7ec5c928aa", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 29, "max_forks_repo_forks_event_min_datetime": "2015-01-15T15:08:46.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-06T20:59:59.000Z", "avg_line_length": 28.7425742574, "max_line_length": 552, "alphanum_fraction": 0.5525053658, "converted": true, "num_tokens": 7009, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9207896824119662, "lm_q2_score": 0.9059898299021697, "lm_q1q2_score": 0.8342260877440901}} {"text": "About the author:\nThis notebook was forked from this [project](https://github.com/fonnesbeck/scipy2014_tutorial). The original author is Chris Fonnesbeck, Assistant Professor of Biostatistics. You can follow Chris on Twitter [@fonnesbeck](https://twitter.com/fonnesbeck).\n\n#### Introduction\n\nFor most problems of interest, Bayesian analysis requires integration over multiple parameters, making the calculation of a [posterior](https://en.wikipedia.org/wiki/Posterior_probability) intractable whether via analytic methods or standard methods of numerical integration.\n\nHowever, it is often possible to *approximate* these integrals by drawing samples\nfrom posterior distributions. For example, consider the expected value (mean) of a vector-valued random variable $\\mathbf{x}$:\n\n$$\nE[\\mathbf{x}] = \\int \\mathbf{x} f(\\mathbf{x}) \\mathrm{d}\\mathbf{x}\\,, \\quad\n\\mathbf{x} = \\{x_1, \\ldots, x_k\\}\n$$\n\nwhere $k$ (dimension of vector $\\mathbf{x}$) is perhaps very large.\n\nIf we can produce a reasonable number of random vectors $\\{{\\bf x_i}\\}$, we can use these values to approximate the unknown integral. This process is known as [**Monte Carlo integration**](https://en.wikipedia.org/wiki/Monte_Carlo_integration). In general, Monte Carlo integration allows integrals against probability density functions\n\n$$\nI = \\int h(\\mathbf{x}) f(\\mathbf{x}) \\mathrm{d}\\mathbf{x}\n$$\n\nto be estimated by finite sums\n\n$$\n\\hat{I} = \\frac{1}{n}\\sum_{i=1}^n h(\\mathbf{x}_i),\n$$\n\nwhere $\\mathbf{x}_i$ is a sample from $f$. 
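As a quick illustration (a sketch added here; it is not part of the original notebook), the expectation of $h(x) = x^2$ under a standard normal distribution is exactly 1, and the finite-sum estimate gets close with a modest number of draws:

```python
import numpy as np

# Monte Carlo estimate of E[x^2] with x ~ N(0, 1); the exact value is 1.
x = np.random.normal(size=100000)
print((x**2).mean())
```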
This estimate is valid and useful because:\n\n- $\\hat{I} \\rightarrow I$ with probability $1$ by the [strong law of large numbers](https://en.wikipedia.org/wiki/Law_of_large_numbers#Strong_law);\n\n- simulation error can be measured and controlled.\n\n### Example (Negative Binomial Distribution)\n\nWe can use this kind of simulation to estimate the expected value of a random variable that is negative binomial-distributed. The [negative binomial distribution](https://en.wikipedia.org/wiki/Negative_binomial_distribution) applies to discrete positive random variables. It can be used to model the number of Bernoulli trials that one can expect to conduct until $r$ failures occur.\n\nThe [probability mass function](https://en.wikipedia.org/wiki/Probability_mass_function) reads\n\n$$\nf(k \\mid p, r) = {k + r - 1 \\choose k} (1 - p)^k p^r\\,,\n$$\n\nwhere $k \\in \\{0, 1, 2, \\ldots \\}$ is the value taken by our non-negative discrete random variable and\n$p$ is the probability of success ($0 < p < 1$).\n\n\n\n\nMost frequently, this distribution is used to model *overdispersed counts*, that is, counts that have variance larger\nthan the mean (i.e., what would be predicted under a\n[Poisson distribution](http://en.wikipedia.org/wiki/Poisson_distribution)).\n\nIn fact, the negative binomial can be expressed as a continuous mixture of Poisson distributions,\nwhere a [gamma distributions](http://en.wikipedia.org/wiki/Gamma_distribution) act as mixing weights:\n\n$$\nf(k \\mid p, r) = \\int_0^{\\infty} \\text{Poisson}(k \\mid \\lambda) \\,\n\\text{Gamma}_{(r, (1 - p)/p)}(\\lambda) \\, \\mathrm{d}\\lambda,\n$$\n\nwhere the parameters of the gamma distribution are denoted as (shape parameter, inverse scale parameter).\n\nLet's resort to simulation to estimate the mean of a negative binomial distribution with $p = 0.7$ and $r = 3$:\n\n\n```python\nimport numpy as np\n\nr = 3\np = 0.7\n```\n\n\n```python\n# Simulate Gamma means (r: shape parameter; p / (1 - p): scale parameter).\nlam = np.random.gamma(r, p / (1 - p), size=100)\n```\n\n\n```python\n# Simulate sample Poisson conditional on lambda.\nsim_vals = np.random.poisson(lam)\n```\n\n\n```python\nsim_vals.mean()\n```\n\n\n\n\n 6.3399999999999999\n\n\n\nThe actual expected value of the negative binomial distribution is $r p / (1 - p)$, which in this case is 7. That's pretty close, though we can do better if we draw more samples:\n\n\n```python\nlam = np.random.gamma(r, p / (1 - p), size=100000)\nsim_vals = np.random.poisson(lam)\nsim_vals.mean()\n```\n\n\n\n\n 7.0135199999999998\n\n\n\nThis approach of drawing repeated random samples in order to obtain a desired numerical result is generally known as **Monte Carlo simulation**.\n\nClearly, this is a convenient, simplistic example that did not require simuation to obtain an answer. For most problems, it is simply not possible to draw independent random samples from the posterior distribution because they will generally be (1) multivariate and (2) not of a known functional form for which there is a pre-existing random number generator.\n\nHowever, we are not going to give up on simulation. Though we cannot generally draw independent samples for our model, we can usually generate *dependent* samples, and it turns out that if we do this in a particular way, we can obtain samples from almost any posterior distribution.\n\n## Markov Chains\n\nA Markov chain is a special type of *stochastic process*. 
The standard definition of a stochastic process is an ordered collection of random variables:\n\n$$\n\\{X_t:t \\in T\\}\n$$\n\nwhere $t$ is frequently (but not necessarily) a time index. If we think of $X_t$ as a state $X$ at time $t$, and invoke the following dependence condition on each state:\n\n\\begin{align*}\n&Pr(X_{t+1}=x_{t+1} | X_t=x_t, X_{t-1}=x_{t-1},\\ldots,X_0=x_0) \\\\\n&= Pr(X_{t+1}=x_{t+1} | X_t=x_t)\n\\end{align*}\n\nthen the stochastic process is known as a Markov chain. This conditioning specifies that the future depends on the current state, but not past states. Thus, the Markov chain wanders about the state space,\nremembering only where it has just been in the last time step. \n\nThe collection of transition probabilities is sometimes called a *transition matrix* when dealing with discrete states, or more generally, a *transition kernel*.\n\nIt is useful to think of the Markovian property as **mild non-independence**. \n\nIf we use Monte Carlo simulation to generate a Markov chain, this is called **Markov chain Monte Carlo**, or MCMC. If the resulting Markov chain obeys some important properties, then it allows us to indirectly generate independent samples from a particular posterior distribution.\n\n\n> ### Why MCMC Works: Reversible Markov Chains\n> \n> Markov chain Monte Carlo simulates a Markov chain for which some function of interest\n> (e.g., the joint distribution of the parameters of some model) is the unique, invariant limiting distribution. An invariant distribution with respect to some Markov chain with transition kernel $Pr(y \\mid x)$ implies that:\n> \n> $$\\int_x Pr(y \\mid x) \\pi(x) dx = \\pi(y).$$\n> \n> Invariance is guaranteed for any *reversible* Markov chain. Consider a Markov chain in reverse sequence:\n> $\\{\\theta^{(n)},\\theta^{(n-1)},...,\\theta^{(0)}\\}$. This sequence is still Markovian, because:\n> \n> $$Pr(\\theta^{(k)}=y \\mid \\theta^{(k+1)}=x,\\theta^{(k+2)}=x_1,\\ldots ) = Pr(\\theta^{(k)}=y \\mid \\theta^{(k+1)}=x)$$\n> \n> Forward and reverse transition probabilities may be related through Bayes theorem:\n> \n> $$\\frac{Pr(\\theta^{(k+1)}=x \\mid \\theta^{(k)}=y) \\pi^{(k)}(y)}{\\pi^{(k+1)}(x)}$$\n> \n> Though not homogeneous in general, $\\pi$ becomes homogeneous if:\n> \n> - $n \\rightarrow \\infty$\n> \n> - $\\pi^{(i)}=\\pi$ for some $i < k$\n> \n> If this chain is homogeneous it is called reversible, because it satisfies the ***detailed balance equation***:\n> \n> $$\\pi(x)Pr(y \\mid x) = \\pi(y) Pr(x \\mid y)$$\n> \n> Reversibility is important because it has the effect of balancing movement through the entire state space. When a Markov chain is reversible, $\\pi$ is the unique, invariant, stationary distribution of that chain. Hence, if $\\pi$ is of interest, we need only find the reversible Markov chain for which $\\pi$ is the limiting distribution.\n> This is what MCMC does!\n\n## Gibbs Sampling\n\nThe Gibbs sampler is the simplest and most prevalent MCMC algorithm. If a posterior has $k$ parameters to be estimated, we may condition each parameter on current values of the other $k-1$ parameters, and sample from the resultant distributional form (usually easier), and repeat this operation on the other parameters in turn. This procedure generates samples from the posterior distribution. Note that we have now combined Markov chains (conditional independence) and Monte Carlo techniques (estimation by simulation) to yield Markov chain Monte Carlo.\n\nHere is a stereotypical Gibbs sampling algorithm:\n\n1. 
Choose starting values for states (parameters):\n ${\\bf \\theta} = [\\theta_1^{(0)},\\theta_2^{(0)},\\ldots,\\theta_k^{(0)}]$.\n\n2. Initialize counter $j=1$.\n\n3. Draw the following values from each of the $k$ conditional\n distributions:\n\n $$\\begin{aligned}\n \\theta_1^{(j)} &\\sim& \\pi(\\theta_1 | \\theta_2^{(j-1)},\\theta_3^{(j-1)},\\ldots,\\theta_{k-1}^{(j-1)},\\theta_k^{(j-1)}) \\\\\n \\theta_2^{(j)} &\\sim& \\pi(\\theta_2 | \\theta_1^{(j)},\\theta_3^{(j-1)},\\ldots,\\theta_{k-1}^{(j-1)},\\theta_k^{(j-1)}) \\\\\n \\theta_3^{(j)} &\\sim& \\pi(\\theta_3 | \\theta_1^{(j)},\\theta_2^{(j)},\\ldots,\\theta_{k-1}^{(j-1)},\\theta_k^{(j-1)}) \\\\\n \\vdots \\\\\n \\theta_{k-1}^{(j)} &\\sim& \\pi(\\theta_{k-1} | \\theta_1^{(j)},\\theta_2^{(j)},\\ldots,\\theta_{k-2}^{(j)},\\theta_k^{(j-1)}) \\\\\n \\theta_k^{(j)} &\\sim& \\pi(\\theta_k | \\theta_1^{(j)},\\theta_2^{(j)},\\theta_4^{(j)},\\ldots,\\theta_{k-2}^{(j)},\\theta_{k-1}^{(j)})\\end{aligned}$$\n\n4. Increment $j$ and repeat until convergence occurs.\n\nAs we can see from the algorithm, each distribution is conditioned on the last iteration of its chain values, constituting a Markov chain as advertised. The Gibbs sampler has all of the important properties outlined in the previous section: it is aperiodic, homogeneous and ergodic. Once the sampler converges, all subsequent samples are from the target distribution. This convergence occurs at a geometric rate.\n\n## Example: Inferring patterns in UK coal mining disasters\n\nLet's try to model a more interesting example, a time series of recorded coal mining \ndisasters in the UK from 1851 to 1962.\n\nOccurrences of disasters in the time series is thought to be derived from a \nPoisson process with a large rate parameter in the early part of the time \nseries, and from one with a smaller rate in the later part. We are interested \nin locating the change point in the series, which perhaps is related to changes \nin mining safety regulations.\n\n\n```python\ndisasters_array = np.array([4, 5, 4, 0, 1, 4, 3, 4, 0, 6, 3, 3, 4, 0, 2, 6,\n 3, 3, 5, 4, 5, 3, 1, 4, 4, 1, 5, 5, 3, 4, 2, 5,\n 2, 2, 3, 4, 2, 1, 3, 2, 2, 1, 1, 1, 1, 3, 0, 0,\n 1, 0, 1, 1, 0, 0, 3, 1, 0, 3, 2, 2, 0, 1, 1, 1,\n 0, 1, 0, 1, 0, 0, 0, 2, 1, 0, 0, 0, 1, 1, 0, 2,\n 3, 3, 1, 1, 2, 1, 1, 1, 1, 2, 4, 2, 0, 0, 1, 4,\n 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1])\n\nn_count_data = len(disasters_array)\n```\n\n\n```python\nimport plotly.plotly as py\nimport plotly.graph_objs as pgo\n```\n\n\n```python\ndata = pgo.Data([\n pgo.Scatter(\n x=[str(year) + '-01-01' for year in np.arange(1851, 1962)],\n y=disasters_array,\n mode='lines+markers'\n )\n])\n```\n\n\n```python\nlayout = pgo.Layout(\n title='UK coal mining disasters (per year), 1851--1962',\n xaxis=pgo.XAxis(title='Year', type='date', range=['1851-01-01', '1962-01-01']),\n yaxis=pgo.YAxis(title='Disaster count')\n)\n```\n\n\n```python\nfig = pgo.Figure(data=data, layout=layout)\n```\n\n\n```python\npy.iplot(fig, filename='coal_mining_disasters')\n```\n\n\n\n\n\n\n\n\nWe are going to use Poisson random variables for this type of count data. 
Denoting year $i$'s accident count by $y_i$, \n\n$$y_i \\sim \\text{Poisson}(\\lambda).$$\n\nFor those unfamiliar, Poisson random variables look like this:\n\n\n```python\ndata2 = pgo.Data([\n pgo.Histogram(\n x=np.random.poisson(l, 1000),\n opacity=0.75,\n name=u'\u03bb=%i' % l\n ) for l in [1, 5, 12, 25]\n])\n```\n\n\n```python\nlayout_grey_bg = pgo.Layout(\n xaxis=pgo.XAxis(zeroline=False, showgrid=True, gridcolor='rgb(255, 255, 255)'),\n yaxis=pgo.YAxis(zeroline=False, showgrid=True, gridcolor='rgb(255, 255, 255)'),\n paper_bgcolor='rgb(255, 255, 255)',\n plot_bgcolor='rgba(204, 204, 204, 0.5)'\n)\n```\n\n\n```python\nlayout2 = layout_grey_bg.copy()\n```\n\n\n```python\nlayout2.update(\n barmode='overlay',\n title='Poisson Means',\n xaxis=pgo.XAxis(range=[0, 50]),\n yaxis=pgo.YAxis(range=[0, 400])\n)\n```\n\n\n```python\nfig2 = pgo.Figure(data=data2, layout=layout2)\n```\n\n\n```python\npy.iplot(fig2, filename='poisson_means')\n```\n\n\n\n\n\n\n\n\nThe modeling problem is about estimating the values of the $\\lambda$ parameters. Looking at the time series above, it appears that the rate declines over time.\n\nA **changepoint model** identifies a point (here, a year) after which the parameter $\\lambda$ drops to a lower value. Let us call this point in time $\\tau$. So we are estimating two $\\lambda$ parameters:\n$\\lambda = \\lambda_1$ if $t \\lt \\tau$ and $\\lambda = \\lambda_2$ if $t \\geq \\tau$.\n\nWe need to assign prior probabilities to both $\\{\\lambda_1, \\lambda_2\\}$. The gamma distribution not only provides a continuous density function for positive numbers, but it is also *conjugate* with the Poisson sampling distribution. \n\n\n```python\nlambda1_lambda2 = [(0.1, 100), (1, 100), (1, 10), (10, 10)]\n```\n\n\n```python\ndata3 = pgo.Data([\n pgo.Histogram(\n x=np.random.gamma(*p, size=1000),\n opacity=0.75,\n name=u'\u03b1=%i, \u03b2=%i' % (p[0], p[1]))\n for p in lambda1_lambda2\n])\n```\n\n\n```python\nlayout3 = layout_grey_bg.copy()\nlayout3.update(\n barmode='overlay',\n xaxis=pgo.XAxis(range=[0, 300])\n)\n```\n\n\n```python\nfig3 = pgo.Figure(data=data3, layout=layout3)\n```\n\n\n```python\npy.iplot(fig3, filename='gamma_distributions')\n```\n\n\n\n\n\n\n\n\nWe will specify suitably vague hyperparameters $\\alpha$ and $\\beta$ for both priors:\n\n\\begin{align}\n\\lambda_1 &\\sim \\text{Gamma}(1, 10), \\\\\n\\lambda_2 &\\sim \\text{Gamma}(1, 10).\n\\end{align}\n\nSince we do not have any intuition about the location of the changepoint (unless we visualize the data), we will assign a discrete uniform prior over the entire observation period [1851, 1962]:\n\n\\begin{align}\n&\\tau \\sim \\text{DiscreteUniform(1851, 1962)}\\\\\n&\\Rightarrow P(\\tau = k) = \\frac{1}{111}.\n\\end{align}\n\n### Implementing Gibbs sampling\n\nWe are interested in estimating the joint posterior of $\\lambda_1, \\lambda_2$ and $\\tau$ given the array of annnual disaster counts $\\mathbf{y}$. 
This gives:\n\n$$\n P( \\lambda_1, \\lambda_2, \\tau | \\mathbf{y} ) \\propto P(\\mathbf{y} | \\lambda_1, \\lambda_2, \\tau ) P(\\lambda_1, \\lambda_2, \\tau) \n$$\n\nTo employ Gibbs sampling, we need to factor the joint posterior into the product of conditional expressions:\n\n$$\n P(\\lambda_1, \\lambda_2, \\tau | \\mathbf{y}) \\propto P(y_{t \\lt \\tau} | \\lambda_1, \\tau) P(y_{t \\geq \\tau} | \\lambda_2, \\tau) P(\\lambda_1) P(\\lambda_2) P(\\tau)\n$$\n\nwhich we have specified as:\n\n$$\\begin{aligned}\nP( \\lambda_1, \\lambda_2, \\tau | \\mathbf{y} ) &\\propto \\left[\\prod_{t=1851}^{\\tau} \\text{Poi}(y_t|\\lambda_1) \\prod_{t=\\tau+1}^{1962} \\text{Poi}(y_t|\\lambda_2) \\right] \\text{Gamma}(\\lambda_1|\\alpha,\\beta) \\text{Gamma}(\\lambda_2|\\alpha, \\beta) \\frac{1}{111} \\\\\n&\\propto \\left[\\prod_{t=1851}^{\\tau} e^{-\\lambda_1}\\lambda_1^{y_t} \\prod_{t=\\tau+1}^{1962} e^{-\\lambda_2} \\lambda_2^{y_t} \\right] \\lambda_1^{\\alpha-1} e^{-\\beta\\lambda_1} \\lambda_2^{\\alpha-1} e^{-\\beta\\lambda_2} \\\\\n&\\propto \\lambda_1^{\\sum_{t=1851}^{\\tau} y_t +\\alpha-1} e^{-(\\beta+\\tau)\\lambda_1} \\lambda_2^{\\sum_{t=\\tau+1}^{1962} y_i + \\alpha-1} e^{-\\beta\\lambda_2}\n\\end{aligned}$$\n\nSo, the full conditionals are known, and critically for Gibbs, can easily be sampled from.\n\n$$\\lambda_1 \\sim \\text{Gamma}(\\sum_{t=1851}^{\\tau} y_t +\\alpha, \\tau+\\beta)$$\n$$\\lambda_2 \\sim \\text{Gamma}(\\sum_{t=\\tau+1}^{1962} y_i + \\alpha, 1962-\\tau+\\beta)$$\n$$\\tau \\sim \\text{Categorical}\\left( \\frac{\\lambda_1^{\\sum_{t=1851}^{\\tau} y_t +\\alpha-1} e^{-(\\beta+\\tau)\\lambda_1} \\lambda_2^{\\sum_{t=\\tau+1}^{1962} y_i + \\alpha-1} e^{-\\beta\\lambda_2}}{\\sum_{k=1851}^{1962} \\lambda_1^{\\sum_{t=1851}^{\\tau} y_t +\\alpha-1} e^{-(\\beta+\\tau)\\lambda_1} \\lambda_2^{\\sum_{t=\\tau+1}^{1962} y_i + \\alpha-1} e^{-\\beta\\lambda_2}} \\right)$$\n\nImplementing this in Python requires random number generators for both the gamma and discrete uniform distributions. We can leverage NumPy for this:\n\n\n```python\n# Function to draw random gamma variate\nrgamma = np.random.gamma\n\ndef rcategorical(probs, n=None):\n # Function to draw random categorical variate\n return np.array(probs).cumsum().searchsorted(np.random.sample(n))\n```\n\nNext, in order to generate probabilities for the conditional posterior of $\\tau$, we need the kernel of the gamma density:\n\n\\\\[\\lambda^{\\alpha-1} e^{-\\beta \\lambda}\\\\]\n\n\n```python\ndgamma = lambda lam, a, b: lam**(a - 1) * np.exp(-b * lam)\n```\n\nDiffuse hyperpriors for the gamma priors on $\\{\\lambda_1, \\lambda_2\\}$:\n\n\n```python\nalpha, beta = 1., 10\n```\n\nFor computational efficiency, it is best to pre-allocate memory to store the sampled values. 
We need 3 arrays, each with length equal to the number of iterations we plan to run:\n\n\n```python\n# Specify number of iterations\nn_iterations = 1000\n\n# Initialize trace of samples\nlambda1, lambda2, tau = np.empty((3, n_iterations + 1))\n```\n\nThe penultimate step initializes the model paramters to arbitrary values:\n\n\n```python\nlambda1[0] = 6\nlambda2[0] = 2\ntau[0] = 50\n```\n\nNow we can run the Gibbs sampler.\n\n\n```python\n# Sample from conditionals\nfor i in range(n_iterations):\n \n # Sample early mean\n lambda1[i + 1] = rgamma(disasters_array[:tau[i]].sum() + alpha, 1./(tau[i] + beta))\n \n # Sample late mean\n lambda2[i + 1] = rgamma(disasters_array[tau[i]:].sum() + alpha,\n 1./(n_count_data - tau[i] + beta))\n \n # Sample changepoint: first calculate probabilities (conditional)\n p = np.array([dgamma(lambda1[i + 1], disasters_array[:t].sum() + alpha, t + beta) *\n dgamma(lambda2[i + 1], disasters_array[t:].sum() + alpha, n_count_data - t + beta)\n for t in range(n_count_data)])\n \n # ... then draw sample\n tau[i + 1] = rcategorical(p/p.sum())\n```\n\nPlotting the trace and histogram of the samples reveals the marginal posteriors of each parameter in the model.\n\n\n```python\ncolor = '#3182bd'\n```\n\n\n```python\ntrace1 = pgo.Scatter(\n y=lambda1,\n xaxis='x1',\n yaxis='y1',\n line=pgo.Line(width=1),\n marker=pgo.Marker(color=color)\n)\n\ntrace2 = pgo.Histogram(\n x=lambda1,\n xaxis='x2',\n yaxis='y2',\n line=pgo.Line(width=0.5),\n marker=pgo.Marker(color=color)\n)\n\ntrace3 = pgo.Scatter(\n y=lambda2,\n xaxis='x3',\n yaxis='y3',\n line=pgo.Line(width=1),\n marker=pgo.Marker(color=color)\n)\n\ntrace4 = pgo.Histogram(\n x=lambda2,\n xaxis='x4',\n yaxis='y4',\n marker=pgo.Marker(color=color)\n)\n\ntrace5 = pgo.Scatter(\n y=tau,\n xaxis='x5',\n yaxis='y5',\n line=pgo.Line(width=1),\n marker=pgo.Marker(color=color)\n)\n\ntrace6 = pgo.Histogram(\n x=tau,\n xaxis='x6',\n yaxis='y6',\n marker=pgo.Marker(color=color)\n)\n```\n\n\n```python\ndata4 = pgo.Data([trace1, trace2, trace3, trace4, trace5, trace6])\n```\n\n\n```python\nimport plotly.tools as tls\n```\n\n\n```python\nfig4 = tls.make_subplots(3, 2)\n```\n\n This is the format of your plot grid:\n [ (1,1) x1,y1 ] [ (1,2) x2,y2 ]\n [ (2,1) x3,y3 ] [ (2,2) x4,y4 ]\n [ (3,1) x5,y5 ] [ (3,2) x6,y6 ]\n \n\n\n\n```python\nfig4['data'] += data4\n```\n\n\n```python\ndef add_style(fig):\n for i in fig['layout'].keys():\n fig['layout'][i]['zeroline'] = False\n fig['layout'][i]['showgrid'] = True\n fig['layout'][i]['gridcolor'] = 'rgb(255, 255, 255)'\n fig['layout']['paper_bgcolor'] = 'rgb(255, 255, 255)'\n fig['layout']['plot_bgcolor'] = 'rgba(204, 204, 204, 0.5)'\n fig['layout']['showlegend']=False\n```\n\n\n```python\nadd_style(fig4)\n```\n\n\n```python\nfig4['layout'].update(\n yaxis1=pgo.YAxis(title=r'$\\lambda_1$'),\n yaxis3=pgo.YAxis(title=r'$\\lambda_2$'),\n yaxis5=pgo.YAxis(title=r'$\\tau$'))\n```\n\n\n```python\npy.iplot(fig4, filename='modelling_params')\n```\n\n\n\n\n\n\n\n\n## The Metropolis-Hastings Algorithm\n\nThe key to success in applying the Gibbs sampler to the estimation of Bayesian posteriors is being able to specify the form of the complete conditionals of\n${\\bf \\theta}$, because the algorithm cannot be implemented without them. In practice, the posterior conditionals cannot always be neatly specified. 
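To see concretely how this can happen, consider a hypothetical variation of the coal-mining model above (not something we fit here): if $\\lambda_1$ were given a lognormal prior with parameters $\\mu$ and $\\sigma$ rather than a gamma prior, its full conditional would be proportional to\n\n$$\\lambda_1^{\\sum_{t=1851}^{\\tau} y_t}\\, e^{-\\tau \\lambda_1} \\times \\frac{1}{\\lambda_1} \\exp\\left(-\\frac{(\\log \\lambda_1 - \\mu)^2}{2\\sigma^2}\\right),$$\n\nwhich is not a standard density that can be sampled from directly.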
\n\n\nTaking a different approach, the Metropolis-Hastings algorithm generates ***candidate*** state transitions from an alternate distribution, and *accepts* or *rejects* each candidate probabilistically.\n\nLet us first consider a simple Metropolis-Hastings algorithm for a single parameter, $\\theta$. We will use a standard sampling distribution, referred to as the *proposal distribution*, to produce candidate variables $q_t(\\theta^{\\prime} | \\theta)$. That is, the generated value, $\\theta^{\\prime}$, is a *possible* next value for\n$\\theta$ at step $t+1$. We also need to be able to calculate the probability of moving back to the original value from the candidate, or\n$q_t(\\theta | \\theta^{\\prime})$. These probabilistic ingredients are used to define an *acceptance ratio*:\n\n$$\\begin{gathered}\n\\begin{split}a(\\theta^{\\prime},\\theta) = \\frac{q_t(\\theta^{\\prime} | \\theta) \\pi(\\theta^{\\prime})}{q_t(\\theta | \\theta^{\\prime}) \\pi(\\theta)}\\end{split}\\notag\\\\\\begin{split}\\end{split}\\notag\\end{gathered}$$\n\nThe value of $\\theta^{(t+1)}$ is then determined by:\n\n$$\\theta^{(t+1)} = \\left\\{\\begin{array}{l@{\\quad \\mbox{with prob.} \\quad}l}\\theta^{\\prime} & \\text{with probability } \\min(a(\\theta^{\\prime},\\theta^{(t)}),1) \\\\ \\theta^{(t)} & \\text{with probability } 1 - \\min(a(\\theta^{\\prime},\\theta^{(t)}),1) \\end{array}\\right.$$\n\nThis transition kernel implies that movement is not guaranteed at every step. It only occurs if the suggested transition is likely based on the acceptance ratio.\n\nA single iteration of the Metropolis-Hastings algorithm proceeds as follows:\n\n1. Sample $\\theta^{\\prime}$ from $q(\\theta^{\\prime} | \\theta^{(t)})$.\n\n2. Generate a Uniform[0,1] random variate $u$.\n\n3. If $a(\\theta^{\\prime},\\theta) > u$ then\n $\\theta^{(t+1)} = \\theta^{\\prime}$, otherwise\n $\\theta^{(t+1)} = \\theta^{(t)}$.\n\nThe original form of the algorithm specified by Metropolis required that\n$q_t(\\theta^{\\prime} | \\theta) = q_t(\\theta | \\theta^{\\prime})$, which reduces $a(\\theta^{\\prime},\\theta)$ to\n$\\pi(\\theta^{\\prime})/\\pi(\\theta)$, but this is not necessary. In either case, the state moves to high-density points in the distribution with high probability, and to low-density points with low probability. After convergence, the Metropolis-Hastings algorithm describes the full target posterior density, so all points are recurrent.\n\n### Random-walk Metropolis-Hastings\n\nA practical implementation of the Metropolis-Hastings algorithm makes use of a random-walk proposal.\nRecall that a random walk is a Markov chain that evolves according to:\n\n$$\n\\theta^{(t+1)} = \\theta^{(t)} + \\epsilon_t \\\\\n\\epsilon_t \\sim f(\\phi)\n$$\n\nAs applied to the MCMC sampling, the random walk is used as a proposal distribution, whereby dependent proposals are generated according to:\n\n$$\\begin{gathered}\n\\begin{split}q(\\theta^{\\prime} | \\theta^{(t)}) = f(\\theta^{\\prime} - \\theta^{(t)}) = \\theta^{(t)} + \\epsilon_t\\end{split}\\notag\\\\\\begin{split}\\end{split}\\notag\\end{gathered}$$\n\nGenerally, the density generating $\\epsilon_t$ is symmetric about zero,\nresulting in a symmetric chain. 
Chain symmetry implies that\n$q(\\theta^{\\prime} | \\theta^{(t)}) = q(\\theta^{(t)} | \\theta^{\\prime})$,\nwhich reduces the Metropolis-Hastings acceptance ratio to:\n\n$$\\begin{gathered}\n\\begin{split}a(\\theta^{\\prime},\\theta) = \\frac{\\pi(\\theta^{\\prime})}{\\pi(\\theta)}\\end{split}\\notag\\\\\\begin{split}\\end{split}\\notag\\end{gathered}$$\n\nThe choice of the random walk distribution for $\\epsilon_t$ is frequently a normal or Student\u2019s $t$ density, but it may be any distribution that generates an irreducible proposal chain.\n\nAn important consideration is the specification of the **scale parameter** for the random walk error distribution. Large values produce random walk steps that are highly exploratory, but tend to produce proposal values in the tails of the target distribution, potentially resulting in very small acceptance rates. Conversely, small values tend to be accepted more frequently, since they tend to produce proposals close to the current parameter value, but may result in chains that ***mix*** very slowly.\n\nSome simulation studies suggest optimal acceptance rates in the range of 20-50%. It is often worthwhile to optimize the proposal variance by iteratively adjusting its value, according to observed acceptance rates early in the MCMC simulation .\n\n## Example: Linear model estimation\n\nThis very simple dataset is a selection of real estate prices \\\\(p\\\\), with the associated age \\\\(a\\\\) of each house. We wish to estimate a simple linear relationship between the two variables, using the Metropolis-Hastings algorithm.\n\n**Linear model**:\n\n$$\\mu_i = \\beta_0 + \\beta_1 a_i$$\n\n**Sampling distribution**:\n\n$$p_i \\sim N(\\mu_i, \\tau)$$\n\n**Prior distributions**:\n\n$$\\begin{aligned}\n& \\beta_i \\sim N(0, 10000) \\cr\n& \\tau \\sim \\text{Gamma}(0.001, 0.001)\n\\end{aligned}$$\n\n\n```python\nage = np.array([13, 14, 14,12, 9, 15, 10, 14, 9, 14, 13, 12, 9, 10, 15, 11, \n 15, 11, 7, 13, 13, 10, 9, 6, 11, 15, 13, 10, 9, 9, 15, 14, \n 14, 10, 14, 11, 13, 14, 10])\n\nprice = np.array([2950, 2300, 3900, 2800, 5000, 2999, 3950, 2995, 4500, 2800, \n 1990, 3500, 5100, 3900, 2900, 4950, 2000, 3400, 8999, 4000, \n 2950, 3250, 3950, 4600, 4500, 1600, 3900, 4200, 6500, 3500, \n 2999, 2600, 3250, 2500, 2400, 3990, 4600, 450,4700])/1000.\n```\n\nTo avoid numerical underflow issues, we typically work with log-transformed likelihoods, so the joint posterior can be calculated as sums of log-probabilities and log-likelihoods.\n\nThis function calculates the joint log-posterior, conditional on values for each parameter:\n\n\n```python\nfrom scipy.stats import distributions\ndgamma = distributions.gamma.logpdf\ndnorm = distributions.norm.logpdf\n\ndef calc_posterior(a, b, t, y=price, x=age):\n # Calculate joint posterior, given values for a, b and t\n\n # Priors on a,b\n logp = dnorm(a, 0, 10000) + dnorm(b, 0, 10000)\n # Prior on t\n logp += dgamma(t, 0.001, 0.001)\n # Calculate mu\n mu = a + b*x\n # Data likelihood\n logp += sum(dnorm(y, mu, t**-2))\n \n return logp\n```\n\nThe `metropolis` function implements a simple random-walk Metropolis-Hastings sampler for this problem. 
It accepts as arguments:\n\n- the number of iterations to run\n- initial values for the unknown parameters\n- the variance parameter of the proposal distribution (normal)\n\n\n```python\nrnorm = np.random.normal\nrunif = np.random.rand\n\ndef metropolis(n_iterations, initial_values, prop_var=1):\n\n n_params = len(initial_values)\n \n # Initial proposal standard deviations\n prop_sd = [prop_var]*n_params\n \n # Initialize trace for parameters\n trace = np.empty((n_iterations+1, n_params))\n \n # Set initial values\n trace[0] = initial_values\n \n # Calculate joint posterior for initial values\n current_log_prob = calc_posterior(*trace[0])\n \n # Initialize acceptance counts\n accepted = [0]*n_params\n \n for i in range(n_iterations):\n \n if not i%1000: print('Iteration %i' % i)\n \n # Grab current parameter values\n current_params = trace[i]\n \n for j in range(n_params):\n \n # Get current value for parameter j\n p = trace[i].copy()\n \n # Propose new value\n if j==2:\n # Ensure tau is positive\n theta = np.exp(rnorm(np.log(current_params[j]), prop_sd[j]))\n else:\n theta = rnorm(current_params[j], prop_sd[j])\n \n # Insert new value \n p[j] = theta\n \n # Calculate log posterior with proposed value\n proposed_log_prob = calc_posterior(*p)\n \n # Log-acceptance rate\n alpha = proposed_log_prob - current_log_prob\n \n # Sample a uniform random variate\n u = runif()\n \n # Test proposed value\n if np.log(u) < alpha:\n # Accept\n trace[i+1,j] = theta\n current_log_prob = proposed_log_prob\n accepted[j] += 1\n else:\n # Reject\n trace[i+1,j] = trace[i,j]\n \n return trace, accepted\n```\n\nLet's run the MH algorithm with a very small proposal variance:\n\n\n```python\nn_iter = 10000\ntrace, acc = metropolis(n_iter, initial_values=(1,0,1), prop_var=0.001)\n```\n\n Iteration 0\n Iteration 1000\n Iteration 2000\n Iteration 3000\n Iteration 4000\n Iteration 5000\n Iteration 6000\n Iteration 7000\n Iteration 8000\n Iteration 9000\n\n\nWe can see that the acceptance rate is way too high:\n\n\n```python\nnp.array(acc, float)/n_iter\n```\n\n\n\n\n array([ 0.9768, 0.9689, 0.961 ])\n\n\n\n\n```python\ntrace1 = pgo.Scatter(\n y=trace.T[0],\n xaxis='x1',\n yaxis='y1',\n marker=pgo.Marker(color=color)\n)\n\ntrace2 = pgo.Histogram(\n x=trace.T[0],\n xaxis='x2',\n yaxis='y2',\n marker=pgo.Marker(color=color)\n)\n\ntrace3 = pgo.Scatter(\n y=trace.T[1],\n xaxis='x3',\n yaxis='y3',\n marker=pgo.Marker(color=color)\n)\n\ntrace4 = pgo.Histogram(\n x=trace.T[1],\n xaxis='x4',\n yaxis='y4',\n marker=pgo.Marker(color=color)\n)\n\ntrace5 = pgo.Scatter(\n y=trace.T[2],\n xaxis='x5',\n yaxis='y5',\n marker=pgo.Marker(color=color)\n)\n\ntrace6 = pgo.Histogram(\n x=trace.T[2],\n xaxis='x6',\n yaxis='y6',\n marker=pgo.Marker(color=color)\n)\n```\n\n\n```python\ndata5 = pgo.Data([trace1, trace2, trace3, trace4, trace5, trace6])\n```\n\n\n```python\nfig5 = tls.make_subplots(3, 2)\n```\n\n This is the format of your plot grid:\n [ (1,1) x1,y1 ] [ (1,2) x2,y2 ]\n [ (2,1) x3,y3 ] [ (2,2) x4,y4 ]\n [ (3,1) x5,y5 ] [ (3,2) x6,y6 ]\n \n\n\n\n```python\nfig5['data'] += data5\n```\n\n\n```python\nadd_style(fig5)\n```\n\n\n```python\nfig5['layout'].update(showlegend=False,\n yaxis1=pgo.YAxis(title='intercept'),\n yaxis3=pgo.YAxis(title='slope'),\n yaxis5=pgo.YAxis(title='precision')\n)\n```\n\n\n```python\npy.iplot(fig5, filename='MH algorithm small proposal variance')\n```\n\n\n\n\n\n\n\n\nNow, with a very large proposal variance:\n\n\n```python\ntrace_hivar, acc = metropolis(n_iter, initial_values=(1,0,1), prop_var=100)\n```\n\n 
Iteration 0\n Iteration 1000\n Iteration 2000\n Iteration 3000\n Iteration 4000\n Iteration 5000\n Iteration 6000\n Iteration 7000\n Iteration 8000\n Iteration 9000\n\n\n\n```python\nnp.array(acc, float)/n_iter\n```\n\n\n\n\n array([ 0.003 , 0.0001, 0.0009])\n\n\n\n\n```python\ntrace1 = pgo.Scatter(\n y=trace_hivar.T[0],\n xaxis='x1',\n yaxis='y1',\n marker=pgo.Marker(color=color)\n)\n\ntrace2 = pgo.Histogram(\n x=trace_hivar.T[0],\n xaxis='x2',\n yaxis='y2',\n marker=pgo.Marker(color=color)\n)\n\ntrace3 = pgo.Scatter(\n y=trace_hivar.T[1],\n xaxis='x3',\n yaxis='y3',\n marker=pgo.Marker(color=color)\n)\n\ntrace4 = pgo.Histogram(\n x=trace_hivar.T[1],\n xaxis='x4',\n yaxis='y4',\n marker=pgo.Marker(color=color)\n)\n\ntrace5 = pgo.Scatter(\n y=trace_hivar.T[2],\n xaxis='x5',\n yaxis='y5',\n marker=pgo.Marker(color=color)\n)\n\ntrace6 = pgo.Histogram(\n x=trace_hivar.T[2],\n xaxis='x6',\n yaxis='y6',\n marker=pgo.Marker(color=color)\n)\n```\n\n\n```python\ndata6 = pgo.Data([trace1, trace2, trace3, trace4, trace5, trace6])\n```\n\n\n```python\nfig6 = tls.make_subplots(3, 2)\n```\n\n This is the format of your plot grid:\n [ (1,1) x1,y1 ] [ (1,2) x2,y2 ]\n [ (2,1) x3,y3 ] [ (2,2) x4,y4 ]\n [ (3,1) x5,y5 ] [ (3,2) x6,y6 ]\n \n\n\n\n```python\nfig6['data'] += data6\n```\n\n\n```python\nadd_style(fig6)\n```\n\n\n```python\nfig6['layout'].update(\n yaxis1=pgo.YAxis(title='intercept'),\n yaxis3=pgo.YAxis(title='slope'),\n yaxis5=pgo.YAxis(title='precision')\n)\n```\n\n\n```python\npy.iplot(fig6, filename='MH algorithm large proposal variance')\n```\n\n\n\n\n\n\n\n\n### Adaptive Metropolis\n\nIn order to avoid having to set the proposal variance by trial-and-error, we can add some tuning logic to the algorithm. The following implementation of Metropolis-Hastings reduces proposal variances by 10% when the acceptance rate is low, and increases it by 10% when the acceptance rate is high.\n\n\n```python\ndef metropolis_tuned(n_iterations, initial_values, f=calc_posterior, prop_var=1, \n tune_for=None, tune_interval=100):\n \n n_params = len(initial_values)\n \n # Initial proposal standard deviations\n prop_sd = [prop_var] * n_params\n \n # Initialize trace for parameters\n trace = np.empty((n_iterations+1, n_params))\n \n # Set initial values\n trace[0] = initial_values\n # Initialize acceptance counts\n accepted = [0]*n_params\n \n # Calculate joint posterior for initial values\n current_log_prob = f(*trace[0])\n \n if tune_for is None:\n tune_for = n_iterations/2\n\n for i in range(n_iterations):\n \n if not i%1000: print('Iteration %i' % i)\n \n # Grab current parameter values\n current_params = trace[i]\n \n for j in range(n_params):\n \n # Get current value for parameter j\n p = trace[i].copy()\n \n # Propose new value\n if j==2:\n # Ensure tau is positive\n theta = np.exp(rnorm(np.log(current_params[j]), prop_sd[j]))\n else:\n theta = rnorm(current_params[j], prop_sd[j])\n \n # Insert new value \n p[j] = theta\n \n # Calculate log posterior with proposed value\n proposed_log_prob = f(*p)\n \n # Log-acceptance rate\n alpha = proposed_log_prob - current_log_prob\n \n # Sample a uniform random variate\n u = runif()\n \n # Test proposed value\n if np.log(u) < alpha:\n # Accept\n trace[i+1,j] = theta\n current_log_prob = proposed_log_prob\n accepted[j] += 1\n else:\n # Reject\n trace[i+1,j] = trace[i,j]\n \n # Tune every 100 iterations\n if (not (i+1) % tune_interval) and (i < tune_for):\n \n # Calculate aceptance rate\n acceptance_rate = (1.*accepted[j])/tune_interval\n if acceptance_rate<0.1:\n 
prop_sd[j] *= 0.9\n if acceptance_rate<0.2:\n prop_sd[j] *= 0.95\n if acceptance_rate>0.4:\n prop_sd[j] *= 1.05\n elif acceptance_rate>0.6:\n prop_sd[j] *= 1.1\n \n accepted[j] = 0\n \n return trace[tune_for:], accepted\n```\n\n\n```python\ntrace_tuned, acc = metropolis_tuned(n_iter*2, initial_values=(1,0,1), prop_var=5, tune_interval=25, tune_for=n_iter)\n```\n\n Iteration 0\n Iteration 1000\n Iteration 2000\n Iteration 3000\n Iteration 4000\n Iteration 5000\n Iteration 6000\n Iteration 7000\n Iteration 8000\n Iteration 9000\n Iteration 10000\n Iteration 11000\n Iteration 12000\n Iteration 13000\n Iteration 14000\n Iteration 15000\n Iteration 16000\n Iteration 17000\n Iteration 18000\n Iteration 19000\n\n\n\n```python\nnp.array(acc, float)/(n_iter)\n```\n\n\n\n\n array([ 0.2888, 0.312 , 0.3421])\n\n\n\n\n```python\ntrace1 = pgo.Scatter(\n y=trace_tuned.T[0],\n xaxis='x1',\n yaxis='y1',\n line=pgo.Line(width=1),\n marker=pgo.Marker(color=color)\n)\n\ntrace2 = pgo.Histogram(\n x=trace_tuned.T[0],\n xaxis='x2',\n yaxis='y2',\n marker=pgo.Marker(color=color)\n)\n\ntrace3 = pgo.Scatter(\n y=trace_tuned.T[1],\n xaxis='x3',\n yaxis='y3',\n line=pgo.Line(width=1),\n marker=pgo.Marker(color=color)\n)\n\ntrace4 = pgo.Histogram(\n x=trace_tuned.T[1],\n xaxis='x4',\n yaxis='y4',\n marker=pgo.Marker(color=color)\n)\n\ntrace5 = pgo.Scatter(\n y=trace_tuned.T[2],\n xaxis='x5',\n yaxis='y5',\n line=pgo.Line(width=0.5),\n marker=pgo.Marker(color=color)\n)\n\ntrace6 = pgo.Histogram(\n x=trace_tuned.T[2],\n xaxis='x6',\n yaxis='y6',\n marker=pgo.Marker(color=color)\n)\n```\n\n\n```python\ndata7 = pgo.Data([trace1, trace2, trace3, trace4, trace5, trace6])\n```\n\n\n```python\nfig7 = tls.make_subplots(3, 2)\n```\n\n This is the format of your plot grid:\n [ (1,1) x1,y1 ] [ (1,2) x2,y2 ]\n [ (2,1) x3,y3 ] [ (2,2) x4,y4 ]\n [ (3,1) x5,y5 ] [ (3,2) x6,y6 ]\n \n\n\n\n```python\nfig7['data'] += data7\n```\n\n\n```python\nadd_style(fig7)\n```\n\n\n```python\nfig7['layout'].update(\n yaxis1=pgo.YAxis(title='intercept'),\n yaxis3=pgo.YAxis(title='slope'),\n yaxis5=pgo.YAxis(title='precision')\n)\n```\n\n\n```python\npy.iplot(fig7, filename='adaptive-metropolis')\n```\n\n\n\n\n\n\n\n\n50 random regression lines drawn from the posterior:\n\n\n```python\n# Data points\npoints = pgo.Scatter(\n x=age,\n y=price,\n mode='markers'\n)\n\n# Sample models from posterior\nxvals = np.linspace(age.min(), age.max())\nline_data = [np.column_stack([np.ones(50), xvals]).dot(trace_tuned[np.random.randint(0, 1000), :2]) for i in range(50)]\n\n# Generate Scatter obejcts\nlines = [pgo.Scatter(x=xvals, y=line, opacity=0.5, marker=pgo.Marker(color='#e34a33'),\n line=pgo.Line(width=0.5)) for line in line_data]\n\ndata8 = pgo.Data([points] + lines)\n\nlayout8 = layout_grey_bg.copy()\nlayout8.update(\n showlegend=False,\n hovermode='closest',\n xaxis=pgo.XAxis(title='Age', showgrid=False, zeroline=False),\n yaxis=pgo.YAxis(title='Price', showline=False, zeroline=False)\n)\n\nfig8 = pgo.Figure(data=data8, layout=layout8)\npy.iplot(fig8, filename='regression_lines')\n```\n\n\n\n\n\n\n\n\n## Exercise: Bioassay analysis\n\nGelman et al. (2003) present an example of an acute toxicity test, commonly performed on animals to estimate the toxicity of various compounds.\n\nIn this dataset `log_dose` includes 4 levels of dosage, on the log scale, each administered to 5 rats during the experiment. 
The response variable is `deaths`, the number of deaths observed out of the five animals in each dose group.\n\nThe number of deaths can be modeled as a binomial response, with the logit of the death probability modeled as a linear function of dose:\n\n
    \n$$\\begin{aligned}\ny_i &\\sim \\text{Bin}(n_i, p_i) \\\\\n\\text{logit}(p_i) &= a + b x_i\n\\end{aligned}$$\n
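\nA small worked consequence of this parameterization (added here for reference): setting $\\text{logit}(p) = a + b\\,x = 0$ gives $x = -a/b$, the dose at which the predicted probability of death is one half. Once posterior samples of $a$ and $b$ are available, the posterior for this dose can be computed directly, for example as\n\n\n```python\n# Illustrative one-liner (assumes `bioassay_trace` from the sampler below,\n# with column 0 the intercept a and column 1 the slope b)\nld50_samples = -bioassay_trace[:, 0] / bioassay_trace[:, 1]\n```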
    \n\nThe common statistic of interest in such experiments is the **LD50**, the dosage at which the probability of death is 50%.\n\nUse Metropolis-Hastings sampling to fit a Bayesian model to analyze this bioassay data, and to estimate LD50.\n\n\n```python\n# Log dose in each group\nlog_dose = [-.86, -.3, -.05, .73]\n\n# Sample size in each group\nn = 5\n\n# Outcomes\ndeaths = [0, 1, 3, 5]\n```\n\n\n```python\nfrom scipy.stats import distributions\ndbin = distributions.binom.logpmf\ndnorm = distributions.norm.logpdf\n\ninvlogit = lambda x: 1./(1 + np.exp(-x))\n\ndef calc_posterior(a, b, y=deaths, x=log_dose):\n\n # Priors on a,b\n logp = dnorm(a, 0, 10000) + dnorm(b, 0, 10000)\n # Calculate p\n p = invlogit(a + b*np.array(x))\n # Data likelihood\n logp += sum([dbin(yi, n, pi) for yi,pi in zip(y,p)])\n \n return logp\n```\n\n\n```python\nbioassay_trace, acc = metropolis_tuned(n_iter, f=calc_posterior, initial_values=(1,0), prop_var=5, tune_for=9000)\n```\n\n Iteration 0\n Iteration 1000\n Iteration 2000\n Iteration 3000\n Iteration 4000\n Iteration 5000\n Iteration 6000\n Iteration 7000\n Iteration 8000\n Iteration 9000\n\n\n\n```python\ntrace1 = pgo.Scatter(\n y=bioassay_trace.T[0],\n xaxis='x1',\n yaxis='y1',\n marker=pgo.Marker(color=color)\n)\n\ntrace2 = pgo.Histogram(\n x=bioassay_trace.T[0],\n xaxis='x2',\n yaxis='y2',\n marker=pgo.Marker(color=color)\n)\n\ntrace3 = pgo.Scatter(\n y=bioassay_trace.T[1],\n xaxis='x3',\n yaxis='y3',\n marker=pgo.Marker(color=color)\n)\n\ntrace4 = pgo.Histogram(\n x=bioassay_trace.T[1],\n xaxis='x4',\n yaxis='y4',\n marker=pgo.Marker(color=color)\n)\n```\n\n\n```python\ndata9 = pgo.Data([trace1, trace2, trace3, trace4])\n```\n\n\n```python\nfig9 = tls.make_subplots(2, 2)\n```\n\n This is the format of your plot grid:\n [ (1,1) x1,y1 ] [ (1,2) x2,y2 ]\n [ (2,1) x3,y3 ] [ (2,2) x4,y4 ]\n \n\n\n\n```python\nfig9['data'] += data9\n```\n\n\n```python\nadd_style(fig9)\n```\n\n\n```python\nfig9['layout'].update(\n yaxis1=pgo.YAxis(title='intercept'),\n yaxis3=pgo.YAxis(title='slope')\n)\n```\n\n\n```python\npy.iplot(fig9, filename='bioassay')\n```\n\n\n\n\n\n\n\n\n\n```python\nfrom IPython.display import display, HTML\n\ndisplay(HTML(''))\ndisplay(HTML(''))\n\n! pip install publisher --upgrade\nimport publisher\npublisher.publish(\n 'montecarlo.ipynb', 'ipython-notebooks/computational-bayesian-analysis/',\n 'Computational Methods in Bayesian Analysis', \n 'Monte Carlo simulations, Markov chains, Gibbs sampling illustrated in Plotly',\n name='Computational Methods in Bayesian Analysis')\n```\n\n\n\n\n\n\n\n\n\n Requirement already up-to-date: publisher in /Users/chelsea/venv/venv2.7/lib/python2.7/site-packages\r\n\n\n /Users/chelsea/venv/venv2.7/lib/python2.7/site-packages/IPython/nbconvert.py:13: ShimWarning: The `IPython.nbconvert` package has been deprecated since IPython 4.0. You should import from nbconvert instead.\n \"You should import from nbconvert instead.\", ShimWarning)\n /Users/chelsea/venv/venv2.7/lib/python2.7/site-packages/publisher/publisher.py:53: UserWarning: Did you \"Save\" this notebook before running this command? Remember to save, always save.\n warnings.warn('Did you \"Save\" this notebook before running this command? 
'\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "73df114a10c817383c7a44b99811ab34cf0469c5", "size": 69194, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "_posts/python-v3/ipython-notebooks/montecarlo.ipynb", "max_stars_repo_name": "kremuwa/graphing-library-docs", "max_stars_repo_head_hexsha": "1b3f303084f7a656c9a8b9ab7ee73c9ca6e748de", "max_stars_repo_licenses": ["CC-BY-3.0"], "max_stars_count": 43, "max_stars_repo_stars_event_min_datetime": "2020-02-06T00:54:56.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-24T22:29:33.000Z", "max_issues_repo_path": "_posts/python-v3/ipython-notebooks/montecarlo.ipynb", "max_issues_repo_name": "kremuwa/graphing-library-docs", "max_issues_repo_head_hexsha": "1b3f303084f7a656c9a8b9ab7ee73c9ca6e748de", "max_issues_repo_licenses": ["CC-BY-3.0"], "max_issues_count": 92, "max_issues_repo_issues_event_min_datetime": "2020-01-31T16:23:50.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-21T05:31:39.000Z", "max_forks_repo_path": "_posts/python-v3/ipython-notebooks/montecarlo.ipynb", "max_forks_repo_name": "kremuwa/graphing-library-docs", "max_forks_repo_head_hexsha": "1b3f303084f7a656c9a8b9ab7ee73c9ca6e748de", "max_forks_repo_licenses": ["CC-BY-3.0"], "max_forks_count": 63, "max_forks_repo_forks_event_min_datetime": "2020-02-08T15:16:06.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-29T17:24:38.000Z", "avg_line_length": 31.0565529623, "max_line_length": 563, "alphanum_fraction": 0.5315489782, "converted": true, "num_tokens": 12678, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9263037302939515, "lm_q2_score": 0.9005297841157157, "lm_q1q2_score": 0.8341640982671943}} {"text": "## \uc801\ubd84\n- \ubbf8\ubd84\uacfc \ubc18\ub300\ub418\ub294 \uac1c\ub150\n- \ubd80\uc815\uc801\ubd84(indefinite integral)\n- \uc815\uc801\ubd84(definite integral)\n\n### \ubd80\uc815\uc801\ubd84(indefinite integral)\n- **\uc815\ud655\ud558\uac8c \ubbf8\ubd84\uacfc \ubc18\ub300\ub418\ub294 \uac1c\ub150, \ubc18-\ubbf8\ubd84(anti-derivative)**\n\n- \ubd80\uc815\uc801\ubd84\uc73c\ub85c \ucc3e\uc740 \uc6d0\ub798\uc758 \ud568\uc218\ub97c \ud45c\uae30\n - $\\int$ \uae30\ud638(integral)\ub85c \ub098\ud0c0\ub0b4\ub294 \uac83\uc774 \uc77c\ubc18\uc801\n\n- \ub3c4\ud568\uc218\uac00 $f(x)$\uc774\ubbc0\ub85c \ubbf8\ubd84\ud558\uae30 \uc804\uc758 \ud568\uc218\ub97c $F(x)$ \ub610\ub294 $\\int f(x)dx$\ub85c \uc4f4\ub2e4. 
\n- $dx$\ub294 $x$\ub77c\ub294 \ubcc0\uc218\ub85c \uc801\ubd84\ud588\ub2e4\ub294 \uac83\uc744 \ub098\ud0c0\ub0b4\ub294 \uae30\ud638\ub85c \ud3b8\ubbf8\ubd84\uc5d0 \ub300\uc751\ud558\ub294 \uc801\ubd84\uc744 \ud45c\uae30\ud560 \ub54c \ud544\uc694\ud558\ub2e4.\n\n $ \n \\dfrac{dF(x)}{dx} = f(x) \\;\\;\\leftrightarrow\\;\\; F(x) = \\int_{}^{} f(x) dx + C \n $\n\n- $C$\ub294 \uc0c1\uc218\ud56d\uc744 \uc758\ubbf8\n - \uc0c1\uc218\ud56d\uc740 \ubbf8\ubd84\ud558\uba74 0\uc774 \ub418\ubbc0\ub85c \ubd80\uc815\uc801\ubd84\uc740 \ubb34\ud55c \uac1c\uc758 \ud574\uac00 \uc874\uc7ac \n - $C$\ub294 \ub2f9\uc5f0\ud558\ubbc0\ub85c \uc0dd\ub7b5\ud558\uace0 \uc4f0\ub294 \uacbd\uc6b0\ub3c4 \uc788\ub2e4.\n\n### \ud3b8\ubbf8\ubd84\uc758 \ubd80\uc815\uc801\ubd84\n- \ud3b8\ubbf8\ubd84\uc744 \ud55c \ub3c4\ud568\uc218\uc5d0\uc11c \uc6d0\ub798\uc758 \ud568\uc218\ub97c \ucc3e\uc744 \uc218\ub3c4 \uc788\ub2e4.\n- $f(x, y)$\uac00 \ud568\uc218 $F(x, y)$\ub97c $x$\ub85c \ud3b8\ubbf8\ubd84\ud55c \ud568\uc218\uc77c \ub54c\n\n $\n \\dfrac{\\partial F(x, y)}{\\partial x} = f(x, y) \\; \\leftrightarrow \\; F(x, y) = \\int_{}^{} f(x, y) dx + C(y)\n $\n\n- $f(x, y)$\uac00 \ud568\uc218 $F(x, y)$\ub97c $y$\ub85c \ud3b8\ubbf8\ubd84\ud55c \ud568\uc218\uc77c \ub54c\n\n $\n \\dfrac{\\partial F(x, y)}{\\partial y} = f(x, y) \\; \\leftrightarrow \\; F(x, y) = \\int_{}^{} f(x, y) dy + C(x) \n $\n\n\n### \ub2e4\ucc28 \ub3c4\ud568\uc218\uc640 \ub2e4\uc911\uc801\ubd84\n- \ubbf8\ubd84\uc744 \uc5ec\ub7ec\ubc88 \ud55c \uacb0\uacfc\ub85c \ub098\uc628 \ub2e4\ucc28 \ub3c4\ud568\uc218\ub85c\ubd80\ud130 \uc6d0\ub798\uc758 \ud568\uc218\ub97c \ucc3e\uc544\ub0bc\ub54c\n\n $ \n \\iint f(x, y) dydx\n $\n\n## \uc2ec\ud30c\uc774\ub97c \uc774\uc6a9\ud55c \ubd80\uc815\uc801\ubd84\n- \uc2ec\ud30c\uc774\uc758 `integrate()` \uba85\ub839\uc744 \uc0ac\uc6a9\n\n\n```python\nimport sympy\n\nsympy.init_printing(use_latex='mathjax')\n\nx = sympy.symbols('x')\nf = x * sympy.exp(x)\nf\n```\n\n\n\n\n$$x e^{x}$$\n\n\n\n\n```python\nsympy.integrate(f)\n```\n\n\n\n\n$$\\left(x - 1\\right) e^{x}$$\n\n\n\n### \uc815\uc801\ubd84\n- **\ub3c5\ub9bd\ubcc0\uc218 $x$\uac00 \uc5b4\ub5a4 \uad6c\uac04 $[a, b]$ \uc0ac\uc774\uc77c \ub54c \uadf8 \uad6c\uac04\uc5d0\uc11c \ud568\uc218 $f(x)$\uc758 \uac12\uacfc x \ucd95\uc774 \uc774\ub8e8\ub294 \uba74\uc801**\uc744 \uad6c\ud558\ub294 \ud589\uc704 or \uac12\n\n $\n \\int_{a}^{b} f(x) dx \n $\n- \uc815\uc801\ubd84 \uad6c\ud558\ub294 \ubc29\ubc95 \n - \ubbf8\uc801\ubd84\ud559\uc758 \uae30\ubcf8 \uc815\ub9ac\ub97c \uc0ac\uc6a9\ud558\uc5ec \ud478\ub294 \ubc29\ubc95\n - \uc218\uce58\uc801\ubd84(numerical integration)\ubc29\ubc95\n\n#### \ubbf8\uc801\ubd84\ud559\uc758 \uae30\ubcf8 \uc815\ub9ac(Fundamental Theorem of Calculus)\n- \ubd80\uc815\uc801\ubd84\uc73c\ub85c \uad6c\ud55c \ud568\uc218 $F(x)$\ub97c \uc774\uc6a9\ud558\uba74 \ub2e4\uc74c\ucc98\ub7fc \uc815\uc801\ubd84\uc758 \uac12\uc744 \uad6c\ud560 \uc218 \uc788\ub2e4. 
\n\n $ \n \\int_{a}^{b} f(x) dx = F(b) - F(a) \n $\n\n### \uc218\uce58\uc801\ubd84\n- \ud568\uc218\ub97c \uc544\uc8fc \uc791\uc740 \uad6c\uac04\uc73c\ub85c \ub098\ub204\uc5b4 \uc2e4\uc81c \uba74\uc801\uc744 \uacc4\uc0b0\ud574 \uac12\uc744 \uad6c\ud558\ub294 \ubc29\ubc95.\n\n\n```python\nimport scipy as sp\ndef f(x):\n return 2 * x ** 3 - 4 * x ** 2 + x + 8\nsp.integrate.quad(f, 0, 2) # \uc218\uce58\uc801\ubd84, Scipy\uc758 integrate \uc11c\ube0c\ud328\ud0a4\uc9c0\uc758 quad \uba85\ub839\n```\n\n\n\n\n$$\\left ( 15.333333333333332, \\quad 1.7023419710919066e-13\\right )$$\n\n\n\n### \ub2e4\ubcc0\uc218 \uc815\uc801\ubd84\n- \uc785\ub825 \ubcc0\uc218\uac00 2\uac1c\uc778 2\ucc28\uc6d0 \ud568\uc218 $ f(x, y) $\n- \ub450 \ubcc0\uc218\ub85c \ubaa8\ub450 \uc801\ubd84\ud558\ub294 \uac83\uc740 2\ucc28\uc6d0 \ud3c9\uba74\uc5d0\uc11c \uc8fc\uc5b4\uc9c4 \uc0ac\uac01\ud615 \uc601\uc5ed \uc544\ub798\uc758 \ubd80\ud53c\ub97c \uad6c\ud558\ub294 \uac83\uacfc \uac19\ub2e4.\n\n $ \n \\int_{y=c}^{y=d} \\int_{x=a}^{x=b} f(x, y) dx dy \n $\n\n### \ub2e4\ucc28\uc6d0 \ud568\uc218\uc758 \ub2e8\uc77c \uc815\uc801\ubd84\n- 2\ucc28\uc6d0 \ud568\uc218\uc774\uc9c0\ub9cc \uc774\uc911\uc801\ubd84\uc744 \ud558\uc9c0 \uc54a\uace0 \ub2e8\uc77c \uc815\uc801\ubd84\uc744 \ud558\ub294 \uacbd\uc6b0\n - \ud558\ub098\uc758 \ubcc0\uc218\ub9cc \uc9c4\uc9dc \ubcc0\uc218\ub85c \ubcf4\uace0 \ub098\uba38\uc9c0 \ud558\ub098\ub294 \uc0c1\uc218\ub77c\uace0 \uac04\uc8fc\ud558\ub294 \uacbd\uc6b0 \n\n $\n \\int_a^b f(x, y) dx\n $\n", "meta": {"hexsha": "bbcf0f454778ff29a40606e6fe6918642ff9551e", "size": 5161, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Math-Study/Integral.ipynb", "max_stars_repo_name": "kimchangkyu/TIL", "max_stars_repo_head_hexsha": "721a46417734431f83bb8bd1f3fa831633b2419a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-12-28T10:33:45.000Z", "max_stars_repo_stars_event_max_datetime": "2019-12-28T10:33:45.000Z", "max_issues_repo_path": "Math-Study/Integral.ipynb", "max_issues_repo_name": "kimchangkyu/TIL", "max_issues_repo_head_hexsha": "721a46417734431f83bb8bd1f3fa831633b2419a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Math-Study/Integral.ipynb", "max_forks_repo_name": "kimchangkyu/TIL", "max_forks_repo_head_hexsha": "721a46417734431f83bb8bd1f3fa831633b2419a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 21.6848739496, "max_line_length": 129, "alphanum_fraction": 0.4297616741, "converted": true, "num_tokens": 1306, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9263037323284109, "lm_q2_score": 0.9005297761070066, "lm_q1q2_score": 0.8341640926807884}} {"text": "## The index of a saddle point\n\nWe consider a *n* degree-of-freedom Hamiltonian of the following form:\n\n\\begin{equation}\nH(q, p) = \\sum_{i=1}^{n} \\frac{p_i^2}{2} + V(q), \\quad (q,p) \\in \\mathbb{R}^n \\times \\mathbb{R}^n,\n\\label{ham_int}\n\\end{equation}\n\n\nwhere $q \\in \\mathbb{R}^n$ denote the configuration space variables and $p \\in \\mathbb{R}^n$ denote the corresponding conjugate momentum variables. 
This Hamiltonian function gives rise to the corresponding Hamilton's differential equations (or just ''Hamilton's equations'') having the following form:\n\n\\begin{eqnarray}\n\\dot{q}_i & = & p_i, \\nonumber \\\\\n\\dot{p}_i & = & -\\frac{\\partial V}{\\partial q_i} (q), \\quad i=1. \\ldots , n.\n\\label{hameq_int}\n\\end{eqnarray}\n\n\nThese are a set of *2n* first order differential equations defined on the phase space \n$\\mathbb{R}^n \\times \\mathbb{R}^n$. \n\n \nA critical point of the potential energy function is a point $\\bar{q} \\in \\mathbb{R}^n$ satisfying the following equations:\n\n\n\\begin{equation}\n\\frac{\\partial V}{\\partial q_i} (\\bar{q}) =0, \\quad i=1, \\ldots n.\n\\end{equation}\n\nOnce a critical point of the potential energy function is located, we want to ''classify'' it. This is done by examining the second derivative of the potential energy function evaluated at the critical point. The second derivative matrix is referred to as the *Hessian matrix*, and it is given by:\n\n\\begin{equation}\n\\frac{\\partial^2 V}{\\partial q_i \\partial q_j} (\\bar{q}) =0, \\quad i,j=1, \\ldots n,\n\\label{hessian}\n\\end{equation}\n\n\nwhich is a $n \\times n$ symmetric matrix. Hence \\eqref{hessian} has *n* real eigenvalues, which we denote by:\n\n\\begin{equation}\n\\sigma_k, \\quad k=1, \\ldots, n.\n\\label{eiv_Hess}\n\\end{equation}\n\nHowever, returning to dynamics as given by Hamilton's equations \\eqref{hameq_int}, the point $(\\bar{q}, 0)$ is an equilibrium point of Hamilton's equations, i.e. when this point is substituted into the right-hand-side of \\eqref{hameq_int} we obtain $(\\dot{q}_1, \\ldots, \\dot{q}_n, \\dot{p}_1, \\ldots, \\dot{p}_n) = (0, \\ldots, 0, 0, \\ldots, 0)$, i.e. the point $(\\bar{q}, 0)$ does not change in time.\n\nNext, we want to determine the nature of the stability of this equilibrium point. Linearized stability is determined by computing the Jacobian of the right hand side of \\eqref{hameq_int}, which we will denote by $M$, evaluating it at the equilibrium point $(\\bar{q}, 0)$, and determining its eigenvalues. The following calculation is from {% cite ezra2004impenetrable --file reaction_dynamics %}. \nThe Jacobian of the Hamiltonian vector field \\eqref{hameq_int} evaluated at $(\\bar{q}, 0)$ is given by:\n\n\n\\begin{equation}\nM = \n\\left(\n\\begin{array}{cc}\n0_{n\\times n} & \\rm{id}_{n \\times n} \\\\\n-\\frac{\\partial^2 V}{\\partial q_i \\partial q_j} (\\bar{q}) & 0_{n\\times n} \n\\end{array}\n\\right),\n\\end{equation}\n\n\nwhich is a $2n \\times 2n$ matrix. The eigenvalues of $M$, denoted by $\\lambda$, are given by the solutions of the following characteristic equation:\n\n\\begin{equation}\n{\\rm det} \\, \\left( M - \\lambda \\, {\\rm id}_{2n \\times 2n} \\right) =0,\n\\label{eivM}\n\\end{equation}\n\n\nwhere ${\\rm id}_{2n \\times 2n}$ denoted the $2n \\times 2n$ identity matrix. Writing \\eqref{eivM} in detail (i.e. 
using the explicit expression for the Jacobian of \\eqref{hameq_int}) gives:\n\n\n\n\\begin{equation}\n{\\rm det} \\, \n\\left(\n\\begin{array}{cc}\n-\\lambda \\, \\rm{id}_{n \\times n} & \\rm{id}_{n \\times n} \\\\\n -\\frac{\\partial^2 V}{\\partial q_i \\partial q_j} (\\bar{q}) & -\\lambda \\rm{id}_{n \\times n}\n \\end{array}\n \\right) = {\\rm det} \\, \\left(\\lambda^2 \\, \\rm{id}_{n \\times n} + \\frac{\\partial^2 V}{\\partial q_i \\partial q_j} (\\bar{q}) \\right) =0.\n \\end{equation}\n \nWe can conclude from this calculation that the eigenvalues of the $n \\times n$ symmetric matrix $\\frac{\\partial^2 V}{\\partial q_i \\partial q_j} (\\bar{q})$ are $-\\lambda^2$, where $\\lambda$ are the eigenvalues of the $n \\times n$ matrix $M$. Hence, the eigenvalues of $M$ occur in pairs, denoted by\n $\\lambda_k, \\, \\lambda_{k+n}, \\, k=1, \\ldots n$, which have the form:\n \n\n\n\\begin{equation}\n\\lambda_k, \\, \\lambda_{k+n} = \\pm \\sqrt{-\\sigma_k}, \\quad k=1, \\ldots, n,\n\\end{equation}\n\n\n\nwhere $\\sigma_k$ are the eigenvalues of the Hessian of the potential energy evaluated at the critical point $\\bar{q}$ as denoted in \\eqref{eiv_Hess}. Hence,\nwe see that the existence of equilibrium points of Hamilton's equations of ''saddle-like stability'' implies that there must be *at least* one negative eigenvalue of \\eqref{hessian}. In fact, we have the following classification of the linearized stability of saddle-type equilibrium points of Hamilton's equations in terms of the critical points of the potential energy surface.\n\n\n+ **Index 1 saddle.** One eigenvalue of \\eqref{hessian} is positive, the rest are negative. We will assume that none of the eigenvalues of \\eqref{hessian} are zero. Zero eigenvalues give rise to special cases that must be dealt with separately. 
In the mathematics literature, these are often referred to as *saddle-center-$\\cdots$-center equilibria*, with the number of center-$\\cdots$-center terms equal to the number of pairs of pure imaginary eigenvalues.\n\n+ **Index 2 saddle.** Two eigenvalues of \\eqref{hessian} are positive, the rest are negative\n\n\nand in general,\n\n\n+ **Index k saddle.** *k* eigenvalues of \\eqref{hessian} are positive,thevrestvare negativev($k \\le n$).\n\n## References\n\n{% bibliography --file reaction_dynamics --cited %}\n", "meta": {"hexsha": "5766769da29eba14a41f7819f78c2e827cf6a1f1", "size": 6919, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "content/prologue/index_of_a_saddle_point-jekyll.ipynb", "max_stars_repo_name": "champsproject/chem_react_dyn", "max_stars_repo_head_hexsha": "53ee9b30fbcfa4316eb08fd3ca69cba82cf7b598", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 11, "max_stars_repo_stars_event_min_datetime": "2019-12-09T11:23:13.000Z", "max_stars_repo_stars_event_max_datetime": "2020-12-16T09:49:55.000Z", "max_issues_repo_path": "content/prologue/index_of_a_saddle_point-jekyll.ipynb", "max_issues_repo_name": "champsproject/chem_react_dyn", "max_issues_repo_head_hexsha": "53ee9b30fbcfa4316eb08fd3ca69cba82cf7b598", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 40, "max_issues_repo_issues_event_min_datetime": "2019-12-09T14:52:38.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-26T06:10:08.000Z", "max_forks_repo_path": "content/prologue/index_of_a_saddle_point-jekyll.ipynb", "max_forks_repo_name": "champsproject/chem_react_dyn", "max_forks_repo_head_hexsha": "53ee9b30fbcfa4316eb08fd3ca69cba82cf7b598", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-05-12T06:27:20.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-08T05:29:56.000Z", "avg_line_length": 6919.0, "max_line_length": 6919, "alphanum_fraction": 0.6424338777, "converted": true, "num_tokens": 1700, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9353465098415279, "lm_q2_score": 0.891811053345418, "lm_q1q2_score": 0.8341523561847334}} {"text": "# Exercise 2\nWrite a function to compute the roots of a mathematical equation of the form\n\\begin{align}\n ax^{2} + bx + c = 0.\n\\end{align}\nYour function should be sensitive enough to adapt to situations in which a user might accidentally set $a=0$, or $b=0$, or even $a=b=0$. For example, if $a=0, b\\neq 0$, your function should print a warning and compute the roots of the resulting linear function. It is up to you on how to handle the function header: feel free to use default keyword arguments, variable positional arguments, variable keyword arguments, or something else as you see fit. Try to make it user friendly.\n\nYour function should return a tuple containing the roots of the provided equation.\n\n**Hint:** Quadratic equations can have complex roots of the form $r = a + ib$ where $i=\\sqrt{-1}$ (Python uses the notation $j=\\sqrt{-1}$). To deal with complex roots, you should import the `cmath` library and use `cmath.sqrt` when computing square roots. `cmath` will return a complex number for you. 
You could handle complex roots yourself if you want, but you might as well use available libraries to save some work.\n\n\n```python\nimport cmath\ndef find_root(a,b,c):\n if (a==0 and b==0 and c==0):\n print(\"warning!\\n x has infinite numbers\")\n return()\n elif (a==0 and b==0 and c!=0):\n print(\"error!\\n no x\")\n return()\n elif (a==0 and b!=0):\n print(\"warning!\\n x=\",-c/b)\n return(-c/b)\n else: \n x1=(-b+cmath.sqrt(b*b-4*a*c))/(2*a)\n x2=(-b-cmath.sqrt(b*b-4*a*c))/(2*a)\n print(\"x1=\",x1)\n print(\"x2=\",x2)\n return(x1,x2)\n \nfind_root(0,0,0)\n```\n\n warning!\n x has infinite numbers\n\n\n\n\n\n ()\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "3ab5036975894c0b6baef7bab0dee28ea8669d18", "size": 2978, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "lectures/L5/Exercise_2.ipynb", "max_stars_repo_name": "crystalzhaizhai/cs207_yi_zhai", "max_stars_repo_head_hexsha": "faabdc5dd1171af04eed6639225adddc26402bf1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "lectures/L5/Exercise_2.ipynb", "max_issues_repo_name": "crystalzhaizhai/cs207_yi_zhai", "max_issues_repo_head_hexsha": "faabdc5dd1171af04eed6639225adddc26402bf1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "lectures/L5/Exercise_2.ipynb", "max_forks_repo_name": "crystalzhaizhai/cs207_yi_zhai", "max_forks_repo_head_hexsha": "faabdc5dd1171af04eed6639225adddc26402bf1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.7010309278, "max_line_length": 495, "alphanum_fraction": 0.5433176629, "converted": true, "num_tokens": 468, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.921921834855049, "lm_q2_score": 0.9046505338155469, "lm_q1q2_score": 0.8340170800378286}} {"text": "# Neural Network Fundamentals\n\n## Gradient Descent Introduction:\nhttps://www.youtube.com/watch?v=IxBYhjS295w\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom sklearn.linear_model import LinearRegression\n\nnp.random.seed(1)\n\n%matplotlib inline\nnp.random.seed(1)\n```\n\n\n```python\nN = 100\nx = np.random.rand(N,1)*5\n# Let the following command be the true function\ny = 2.3 + 5.1*x\n# Get some noisy observations\ny_obs = y + 2*np.random.randn(N,1)\n```\n\n\n```python\nplt.scatter(x,y_obs,label='Observations')\nplt.plot(x,y,c='r',label='True function')\nplt.legend()\nplt.show()\n```\n\n## Gradient Descent\n\nWe are trying to minimise $\\sum \\xi_i^2$.\n\n\\begin{align}\n\\mathcal{L} & = \\frac{1}{N}\\sum_{i=1}^N (y_i-f(x_i,w,b))^2 \\\\\n\\frac{\\delta\\mathcal{L}}{\\delta w} & = -\\frac{1}{N}\\sum_{i=1}^N 2(y_i-f(x_i,w,b))\\frac{\\delta f(x_i,w,b)}{\\delta w} \\\\ \n& = -\\frac{1}{N}\\sum_{i=1}^N 2\\xi_i\\frac{\\delta f(x_i,w,b)}{\\delta w}\n\\end{align}\nwhere $\\xi_i$ is the error term $y_i-f(x_i,w,b)$ and \n$$\n\\frac{\\delta f(x_i,w,b)}{\\delta w} = x_i\n$$\n\nSimilar expression can be found for $\\frac{\\delta\\mathcal{L}}{\\delta b}$ (exercise).\n\nFinally the weights can be updated as $w_{new} = w_{current} - \\gamma \\frac{\\delta\\mathcal{L}}{\\delta w}$ where $\\gamma$ is a learning rate between 0 and 1.\n\n\n```python\n# Helper functions\ndef f(w,b):\n return w*x+b\n\ndef loss_function(e):\n L = np.sum(np.square(e))/N\n return L\n\ndef dL_dw(e,w,b):\n return -2*np.sum(e*df_dw(w,b))/N\n\ndef df_dw(w,b):\n return x\n\ndef dL_db(e,w,b):\n return -2*np.sum(e*df_db(w,b))/N\n\ndef df_db(w,b):\n return np.ones(x.shape)\n```\n\n\n```python\n# The Actual Gradient Descent\ndef gradient_descent(iter=100,gamma=0.1):\n # get starting conditions\n w = 10*np.random.randn()\n b = 10*np.random.randn()\n \n params = []\n loss = np.zeros((iter,1))\n for i in range(iter):\n# from IPython.core.debugger import Tracer; Tracer()()\n params.append([w,b])\n e = y_obs - f(w,b) # Really important that you use y_obs and not y (you do not have access to true y)\n loss[i] = loss_function(e)\n\n #update parameters\n w_new = w - gamma*dL_dw(e,w,b)\n b_new = b - gamma*dL_db(e,w,b)\n w = w_new\n b = b_new\n \n return params, loss\n \nparams, loss = gradient_descent()\n```\n\n\n```python\niter=100\ngamma = 0.1\nw = 10*np.random.randn()\nb = 10*np.random.randn()\n\nparams = []\nloss = np.zeros((iter,1))\nfor i in range(iter):\n# from IPython.core.debugger import Tracer; Tracer()()\n params.append([w,b])\n e = y_obs - f(w,b) # Really important that you use y_obs and not y (you do not have access to true y)\n loss[i] = loss_function(e)\n\n #update parameters\n w_new = w - gamma*dL_dw(e,w,b)\n b_new = b - gamma*dL_db(e,w,b)\n w = w_new\n b = b_new\n```\n\n\n```python\ndL_dw(e,w,b)\n```\n\n\n\n\n 0.0078296405377948283\n\n\n\n\n```python\nplt.plot(loss)\n```\n\n\n```python\nparams = np.array(params)\nplt.plot(params[:,0],params[:,1])\nplt.title('Gradient descent')\nplt.xlabel('w')\nplt.ylabel('b')\nplt.show()\n```\n\n\n```python\nparams[-1]\n```\n\n\n\n\n array([ 4.98099133, 2.75130442])\n\n\n\n## Multivariate case\n\nWe are trying to minimise $\\sum \\xi_i^2$. 
This time $ f = Xw$ where $w$ is Dx1 and $X$ is NxD.\n\n\\begin{align}\n\\mathcal{L} & = \\frac{1}{N} (y-Xw)^T(y-Xw) \\\\\n\\frac{\\delta\\mathcal{L}}{\\delta w} & = -\\frac{1}{N} 2\\left(\\frac{\\delta f(X,w)}{\\delta w}\\right)^T(y-Xw) \\\\ \n& = -\\frac{2}{N} \\left(\\frac{\\delta f(X,w)}{\\delta w}\\right)^T\\xi\n\\end{align}\nwhere $\\xi_i$ is the error term $y_i-f(X,w)$ and \n$$\n\\frac{\\delta f(X,w)}{\\delta w} = X\n$$\n\nFinally the weights can be updated as $w_{new} = w_{current} - \\gamma \\frac{\\delta\\mathcal{L}}{\\delta w}$ where $\\gamma$ is a learning rate between 0 and 1.\n\n\n```python\nN = 1000\nD = 5\nX = 5*np.random.randn(N,D)\nw = np.random.randn(D,1)\ny = X.dot(w)\ny_obs = y + np.random.randn(N,1)\n```\n\n\n```python\nw\n```\n\n\n\n\n array([[-0.19670626],\n [-0.74645194],\n [ 0.93774813],\n [-2.62540124],\n [ 0.74616483]])\n\n\n\n\n```python\nX.shape\n```\n\n\n\n\n (1000, 5)\n\n\n\n\n```python\nw.shape\n```\n\n\n\n\n (5, 1)\n\n\n\n\n```python\n(X*w.T).shape\n```\n\n\n\n\n (1000, 5)\n\n\n\n\n```python\n# Helper functions\ndef f(w):\n return X.dot(w)\n\ndef loss_function(e):\n L = e.T.dot(e)/N\n return L\n\ndef dL_dw(e,w):\n return -2*X.T.dot(e)/N \n```\n\n\n```python\ndef gradient_descent(iter=100,gamma=1e-3):\n # get starting conditions\n w = np.random.randn(D,1)\n params = []\n loss = np.zeros((iter,1))\n for i in range(iter):\n params.append(w)\n e = y_obs - f(w) # Really important that you use y_obs and not y (you do not have access to true y)\n loss[i] = loss_function(e)\n\n #update parameters\n w = w - gamma*dL_dw(e,w)\n \n return params, loss\n \nparams, loss = gradient_descent()\n```\n\n\n```python\nplt.plot(loss)\n```\n\n\n```python\nparams[-1]\n```\n\n\n\n\n array([[-0.18613581],\n [-0.74929568],\n [ 0.95245495],\n [-2.602146 ],\n [ 0.73246543]])\n\n\n\n\n```python\nmodel = LinearRegression(fit_intercept=False)\nmodel.fit(X,y)\nmodel.coef_.T\n```\n\n\n\n\n array([[-0.19670626],\n [-0.74645194],\n [ 0.93774813],\n [-2.62540124],\n [ 0.74616483]])\n\n\n\n\n```python\n# compare parameters side by side\nnp.hstack([params[-1],model.coef_.T])\n```\n\n\n\n\n array([[ 0.93191854, 0.93774813],\n [-2.62216904, -2.62540124],\n [ 0.74418434, 0.74616483],\n [ 0.64963419, 0.67411002],\n [ 1.00760672, 1.0142675 ]])\n\n\n\n## Stochastic Gradient Descent\n\n\n```python\ndef dL_dw(X,e,w):\n return -2*X.T.dot(e)/len(X)\n\ndef gradient_descent(gamma=1e-3, n_epochs=100, batch_size=20, decay=0.9):\n epoch_run = int(len(X)/batch_size)\n \n # get starting conditions\n w = np.random.randn(D,1)\n params = []\n loss = np.zeros((n_epochs,1))\n for i in range(n_epochs):\n params.append(w)\n \n for j in range(epoch_run):\n idx = np.random.choice(len(X),batch_size,replace=False)\n e = y_obs[idx] - X[idx].dot(w) # Really important that you use y_obs and not y (you do not have access to true y)\n #update parameters\n w = w - gamma*dL_dw(X[idx],e,w)\n loss[i] = e.T.dot(e)/len(e) \n gamma = gamma*decay #decay the learning parameter\n \n return params, loss\n \nparams, loss = gradient_descent()\n```\n\n\n```python\nplt.plot(loss)\n```\n\n\n```python\nnp.hstack([params[-1],model.coef_.T])\n```\n\n\n\n\n array([[-0.18387464, -0.19670626],\n [-0.74895223, -0.74645194],\n [ 0.9476572 , 0.93774813],\n [-2.62055252, -2.62540124],\n [ 0.75009766, 0.74616483]])\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "6ce3f0064dcd660df63480e4f781da02ae1381c0", "size": 128513, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "deepschool.io/Lesson 02 - GradientDescent.ipynb", 
"max_stars_repo_name": "gopala-kr/ds-notebooks", "max_stars_repo_head_hexsha": "bc35430ecdd851f2ceab8f2437eec4d77cb59423", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2018-03-28T09:08:02.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-17T11:15:38.000Z", "max_issues_repo_path": "Lesson 02 - GradientDescent.ipynb", "max_issues_repo_name": "OWLYone/deepschool.io", "max_issues_repo_head_hexsha": "ae6718fc14f3ac499697c97edc97a66dad9d9a6c", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Lesson 02 - GradientDescent.ipynb", "max_forks_repo_name": "OWLYone/deepschool.io", "max_forks_repo_head_hexsha": "ae6718fc14f3ac499697c97edc97a66dad9d9a6c", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2019-03-24T19:29:08.000Z", "max_forks_repo_forks_event_max_datetime": "2019-07-24T13:38:40.000Z", "avg_line_length": 190.3896296296, "max_line_length": 32450, "alphanum_fraction": 0.8979636301, "converted": true, "num_tokens": 2181, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9744347875615794, "lm_q2_score": 0.855851143290548, "lm_q1q2_score": 0.83397112699666}} {"text": "```python\nimport sympy as sym\nfrom sympy.polys import subresultants_qq_zz\n\nsym.init_printing()\n```\n\nResultant\n----------\n\nIf $p$ and $q$ are two polynomials over a commutative ring with identity which can be factored into linear factors,\n\n\n$$p(x)= a_0 (x - r_1) (x- r_2) \\dots (x - r_m) $$\n\n$$q(x)=b_0 (x - s_1)(x - s_2) \\dots (x - s_n)$$\n\n\n\n\n\n\nthen the resultant $R(p,q)$ of $p$ and $q$ is defined as:\n\n$$R(p,q)=a^n_{0}b^m_{0}\\prod_{i=1}^{m}\\prod_{j=1}^{n}(r_i - s_j)$$\n\nSince the resultant is a symmetric function of the roots of the polynomials $p$ and $q$, it can be expressed as a polynomial in the coefficients of $p$ and $q$.\n\nFrom the definition, it is clear that the resultant will equal zero if and only if $p$ and $q$ have at least one common root. Thus, the resultant becomes very useful in identifying whether common roots exist. \n\nSylvester's Resultant\n---------------------\n\nIt was proven that the determinant of the Sylvester's matrix is equal to the resultant. Assume the two polynomials:\n\n- $$p(x) = a_0 x_m + a_1 x_{m-1}+\\dots+a_{m-1}x+a_m$$\n\n\n\n\n- $$q(x)=b_0x_n + b_1x_{n-1}+\\dots+b_{n-1}x+b_n$$\n\nThen the Sylverster matrix in the $(m+n)\\times(m+n)$ matrix:\n\n$$\n\\left|\n\\begin{array}{cccccc} \na_{0} & a_{1} & a_{2} & \\ldots & a_{m} & 0 & \\ldots &0 \\\\ \n0 & a_{0} & a_{1} & \\ldots &a_{m-1} & a_{m} & \\ldots &0 \\\\\n\\vdots & \\ddots & \\ddots& \\ddots& \\ddots& \\ddots& \\ddots&\\vdots \\\\\n0 & 0 & \\ddots & \\ddots& \\ddots& \\ddots& \\ddots&a_{m}\\\\\nb_{0} & b_{1} & b_{2} & \\ldots & b_{n} & 0 & \\ldots & 0 \\\\\n0 & b_{0} & b_{1} & \\ldots & b_{n-1} & b_{n} & \\ldots & 0\\\\\n\\ddots &\\ddots & \\ddots& \\ddots& \\ddots& \\ddots& \\ddots&\\ddots \\\\\n0 & 0 & \\ldots& \\ldots& \\ldots& \\ldots& \\ldots& b_{n}\\\\\n\\end{array}\n\\right| = \\Delta $$\n\nThus $\\Delta$ is equal to the $R(p, q)$.\n\nExample: Existence of common roots\n------------------------------------------\n\nTwo examples are consider here. 
Note that if the system has a common root we are expecting the resultant/determinant to equal to zero.\n\n\n```python\nx = sym.symbols('x')\n```\n\n**A common root exists.**\n\n\n```python\nf = x ** 2 - 5 * x + 6\ng = x ** 2 - 3 * x + 2\n```\n\n\n```python\nf, g\n```\n\n\n```python\nsubresultants_qq_zz.sylvester(f, g, x)\n```\n\n\n```python\nsubresultants_qq_zz.sylvester(f, g, x).det()\n```\n\n**A common root does not exist.**\n\n\n```python\nz = x ** 2 - 7 * x + 12\nh = x ** 2 - x\n```\n\n\n```python\nz, h\n```\n\n\n```python\nmatrix = subresultants_qq_zz.sylvester(z, h, x)\nmatrix\n```\n\n\n```python\nmatrix.det()\n```\n\nExample: Two variables, eliminator\n----------\n\nWhen we have system of two variables we solve for one and the second is kept as a coefficient.Thus we can find the roots of the equations, that is why the resultant is often refeered to as the eliminator.\n\n\n```python\ny = sym.symbols('y')\n```\n\n\n```python\nf = x ** 2 + x * y + 2 * x + y -1\ng = x ** 2 + 3 * x - y ** 2 + 2 * y - 1\nf, g\n```\n\n\n```python\nmatrix = subresultants_qq_zz.sylvester(f, g, y)\n```\n\n\n```python\nmatrix\n```\n\n\n```python\nmatrix.det().factor()\n```\n\nThree roots for x $\\in \\{-3, 0, 1\\}$.\n\nFor $x=-3$ then $y=1$.\n\n\n```python\nf.subs({x:-3}).factor(), g.subs({x:-3}).factor()\n```\n\n\n```python\nf.subs({x:-3, y:1}), g.subs({x:-3, y:1})\n```\n\nFor $x=0$ the $y=1$.\n\n\n```python\nf.subs({x:0}).factor(), g.subs({x:0}).factor()\n```\n\n\n```python\nf.subs({x:0, y:1}), g.subs({x:0, y:1})\n```\n\nFor $x=1$ then $y=-1$ is the common root,\n\n\n```python\nf.subs({x:1}).factor(), g.subs({x:1}).factor()\n```\n\n\n```python\nf.subs({x:1, y:-1}), g.subs({x:1, y:-1})\n```\n\n\n```python\nf.subs({x:1, y:3}), g.subs({x:1, y:3})\n```\n\nExample: Generic Case\n-----------------\n\n\n```python\na = sym.IndexedBase(\"a\")\nb = sym.IndexedBase(\"b\")\n```\n\n\n```python\nf = a[1] * x + a[0]\ng = b[2] * x ** 2 + b[1] * x + b[0]\n```\n\n\n```python\nmatrix = subresultants_qq_zz.sylvester(f, g, x)\n```\n\n\n```python\nmatrix.det()\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "e3274c6155405f1d9ed2401f4d52c4ce143f00c2", "size": 35812, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "examples/notebooks/Sylvester_resultant.ipynb", "max_stars_repo_name": "Michal-Gagala/sympy", "max_stars_repo_head_hexsha": "3cc756c2af73b5506102abaeefd1b654e286e2c8", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "examples/notebooks/Sylvester_resultant.ipynb", "max_issues_repo_name": "Michal-Gagala/sympy", "max_issues_repo_head_hexsha": "3cc756c2af73b5506102abaeefd1b654e286e2c8", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "examples/notebooks/Sylvester_resultant.ipynb", "max_forks_repo_name": "Michal-Gagala/sympy", "max_forks_repo_head_hexsha": "3cc756c2af73b5506102abaeefd1b654e286e2c8", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 47.0591327201, "max_line_length": 2241, "alphanum_fraction": 0.6983692617, "converted": true, "num_tokens": 1480, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9416541577509316, "lm_q2_score": 0.885631484383387, "lm_q1q2_score": 0.8339585695047457}} {"text": "# Parameter Distributions\n\n## Gamma Distribution: DD1\nHistology studies show that axons do not have only one fixed diameter throughout the brain, but follow an axon diameter distribution *(Aboitiz et al. 1992)*.\nIn Microstructure Imaging, the true axon diameter distribution is usually modeled as a Gamma distribution $\\Gamma(R;\\alpha,\\beta)$ with $R$ the axon radius as half its diameter and $\\alpha$ and $\\beta$ its scale and rate parameters *(Assaf et al. 2008)*.\nThe probability density of a Gamma distribution is given as\n\n\\begin{equation}\n \\Gamma(R;\\alpha,\\beta)=\\frac{\\beta^\\alpha R^{\\alpha-1}e^{-R\\beta}}{\\Gamma(\\alpha)}\n\\end{equation}\n\nwith $\\Gamma(\\alpha)$ a Gamma function.\nHowever, we must take the cross-sectional area of these cylinders into account to relate this Gamma distribution of cylinder radii to the signal attenuation of this distribution of cylinders.\nThe reason for this is that it is not the cylinders themselves, but the (simulated) particles diffusing inside these cylinders that are contributing to the signal attenuation.\nThe final perpendicular, intra-cylindrical signal attenuation for a Gamma-distributed cylinder ensemble, using any of the previously describe cylinder representations, is then given as\n\n\\begin{equation}\nE^{\\Gamma}_\\perp(q,\\Delta,\\delta;\\alpha,\\beta)=\\frac{\\int_{\\mathbb{R}^+}\\overbrace{\\Gamma(R;\\alpha,\\beta)}^{\\textrm{Gamma Distribution}}\\times\\overbrace{E_\\perp(q,\\Delta,\\delta,R)}^{\\textrm{Cylinder Signal Attenuation}}\\times \\overbrace{\\pi R^2}^{\\textrm{Surface Correction}} dR}{\\underbrace{\\int_{\\mathbb{R}^+}\\Gamma(R;\\alpha,\\beta)\\times \\textrm{N}(R) dR}_{\\textrm{Normalization}}}.\n\\end{equation}\n\nWhen modeling cylinders, $\\textrm{N}(R)$ is the surface function $\\textrm{N}(R)=\\pi R^2$. But, if we were to model a Gamma distribution of spheres, then it is the volume function $\\textrm{N}(R)=(4/3)\\pi R^3$. In fact, Dmipy internally checks what model parameter it is distributing, and normalizes the Gamma distribution accordingly.\n\n\n```python\nfrom dmipy.distributions import distributions\ngamma_standard = distributions.DD1Gamma(normalization='standard')\ngamma_plane = distributions.DD1Gamma(normalization='plane')\ngamma_cylinder = distributions.DD1Gamma(normalization='cylinder')\ngamma_sphere = distributions.DD1Gamma(normalization='sphere')\n\nradii_std, Pstd = gamma_standard(alpha=2., beta=1e-6)\nradii_pln, Pstd = gamma_plane(alpha=2., beta=1e-6)\nradii_cyl, Pcyl = gamma_cylinder(alpha=2., beta=1e-6)\nradii_sph, Psph = gamma_sphere(alpha=2., beta=1e-6)\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\nplt.plot(radii_std, Pstd, label='Gamma Standard')\nplt.plot(radii_pln, Pstd, label='Gamma Plane')\nplt.plot(radii_cyl, Pcyl, label='Gamma Cylinder')\nplt.plot(radii_sph, Psph, label='Gamma Sphere')\nplt.legend()\nplt.xlabel('Radius [m]')\nplt.ylabel('Probability Density [-]');\n```\n\nNotice that the normalization changes a lot in the apparent shape of the Gamma distribution. The distance between planes linearly with radius, the surface of cylinders grows quadratically with radius, and the volume of spheres grows cubically with radius. 
This is why the re-normalized distributions shift increasingly to the right.\n\nUpon initialization the Gamma distribution pre-calculates between which radii there is 99% of the volume under the distribution, and only samples there. This is why the different distributions produce samples at different positions, depending on $\\alpha, \\beta$ and the normalization.\n\n## References\n\n- Aboitiz, Francisco, et al. \"Fiber composition of the human corpus callosum.\" Brain research 598.1 (1992): 143-153.\n- Assaf, Yaniv, et al. \"AxCaliber: a method for measuring axon diameter distribution from diffusion MRI.\" Magnetic resonance in medicine 59.6 (2008): 1347-1354. \n", "meta": {"hexsha": "ed58f43472a8f404c181e3702f9c601e97396b3c", "size": 48139, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "examples/example_diameter_distributions.ipynb", "max_stars_repo_name": "AthenaEPI/mipy", "max_stars_repo_head_hexsha": "dbbca4066a6c162dcb05865df5ff666af0e4020a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 59, "max_stars_repo_stars_event_min_datetime": "2018-02-22T19:14:19.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-22T05:40:27.000Z", "max_issues_repo_path": "examples/example_diameter_distributions.ipynb", "max_issues_repo_name": "AthenaEPI/mipy", "max_issues_repo_head_hexsha": "dbbca4066a6c162dcb05865df5ff666af0e4020a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 95, "max_issues_repo_issues_event_min_datetime": "2018-02-03T11:55:30.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T15:10:39.000Z", "max_forks_repo_path": "examples/example_diameter_distributions.ipynb", "max_forks_repo_name": "AthenaEPI/mipy", "max_forks_repo_head_hexsha": "dbbca4066a6c162dcb05865df5ff666af0e4020a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 23, "max_forks_repo_forks_event_min_datetime": "2018-02-13T07:21:01.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-22T20:12:08.000Z", "avg_line_length": 414.9913793103, "max_line_length": 42886, "alphanum_fraction": 0.9206464613, "converted": true, "num_tokens": 969, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9416541610257062, "lm_q2_score": 0.8856314753275017, "lm_q1q2_score": 0.8339585638774771}} {"text": "```python\nimport sympy as sm\nsm.init_printing()\nfrom sympy.abc import x # Normal assumptions\nfrom sympy import pi, sin, cos, integrate, solve\n```\n\n\n```python\nn = sm.symbols('n', positive=True, integer=True)\nL = sm.symbols('L', positive=True)\nintegrand = A**2 * sin(n*pi*x/L)**2\nintegrand\n```\n\n\n```python\npsi_integral = sm.integrate(integrand, (x, 0, L))\n```\n\n\n```python\nsm.solve(psi_integral - 1, A)\n```\n\n\n```python\nsin(x).diff(x)\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "e4e37ef27716613063ad6d593a7f86ad3ec44381", "size": 8830, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/Untitled.ipynb", "max_stars_repo_name": "ryanpdwyer/pchem", "max_stars_repo_head_hexsha": "ad097d7fce07669f4ad269e895e2185fa51ac2d2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notebooks/Untitled.ipynb", "max_issues_repo_name": "ryanpdwyer/pchem", "max_issues_repo_head_hexsha": "ad097d7fce07669f4ad269e895e2185fa51ac2d2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/Untitled.ipynb", "max_forks_repo_name": "ryanpdwyer/pchem", "max_forks_repo_head_hexsha": "ad097d7fce07669f4ad269e895e2185fa51ac2d2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 66.3909774436, "max_line_length": 2836, "alphanum_fraction": 0.8177802945, "converted": true, "num_tokens": 146, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9658995782141546, "lm_q2_score": 0.8633916152464017, "lm_q1q2_score": 0.8339495970001372}} {"text": "# sympy\n\n*sympy* library can be used to handle mathematical formula. 
**mathemathica** and **matlab** are the famous language in this genre, but they are not open source and expensive software.\n\nThe advantage of sympy is that\n\n- calculate algebra\n- copy the formula in *latex* format, which can be paste in thesis or technical paper\n- (and many more)\n\n### fraction (Rational)\n\n*sympy* supports fraction(Function name is *Rational*) such as 1/3.\n\n\n```python\nimport sympy as smp\n\nx = smp.Rational(1,3)\ny = smp.Rational(1,2)\n\nprint(x, y, x+y)\nprint(x*3, (x+y)*6)\n```\n\n 1/3 1/2 5/6\n 1 5\n\n\n### Irrational Number\n\n*sympy* support irrational number such as $\\sqrt{2}, \\pi, e$\n\n\n```python\nroot2 = smp.sqrt(2)\nprint('Root 2:', root2.evalf(100))\nprint('Pi: ', smp.pi.evalf(100))\nprint('e: ', smp.E.evalf(100))\n```\n\n Root 2: 1.414213562373095048801688724209698078569671875376948073176679737990732478462107038850387534327641573\n Pi: 3.141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117068\n e: 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427\n\n\n### algebra\n\n*sympy* is powerful when solving algebra.\n\n\n```python\nx = smp.Symbol('x') # Define 1 symbol\ny, z = smp.symbols('y z') # Define 2 or more symbols\n\nf = x**2 - 2*x + 1\nprint('Factor', smp.factor(f))\nprint('Solve', smp.solve(f, x))\n```\n\n Factor (x - 1)**2\n Solve [1]\n\n\n\n```python\nf = x + y -7 # x+y = 7\ng = x * y -10 # x*y = 10\nsmp.solve([f,g])\n```\n\n\n\n\n [{x: 2, y: 5}, {x: 5, y: 2}]\n\n\n\n\n```python\nfrom sympy import symbols, solve, factor\n\nf = x**7 - 1\nprint(factor(f))\nprint(solve(f))\n```\n\n (x - 1)*(x**6 + x**5 + x**4 + x**3 + x**2 + x + 1)\n [1, -cos(pi/7) - I*sin(pi/7), -cos(pi/7) + I*sin(pi/7), cos(2*pi/7) - I*sin(2*pi/7), cos(2*pi/7) + I*sin(2*pi/7), -cos(3*pi/7) - I*sin(3*pi/7), -cos(3*pi/7) + I*sin(3*pi/7)]\n\n\n### Prime factorization\n\nfactorint() function of *sympy* performs prime factorization(decomposition).\n\n\n```python\nfrom sympy import factorint\n\nyears = [2017,2018,2019]\n\nfor y in years:\n print(factorint(y))\n```\n\n {2017: 1}\n {2: 1, 1009: 1}\n {3: 1, 673: 1}\n\n\n\n```python\n# print years that are product of 3 different prims\n\nfor year in range(1900, 2019):\n primes = factorint(year)\n if sum(primes.values()) == 3 and len(primes) == 3:\n print('Year:', year, 'Prime', primes)\n```\n\n Year: 1902 Prime {2: 1, 3: 1, 317: 1}\n Year: 1905 Prime {3: 1, 5: 1, 127: 1}\n Year: 1910 Prime {2: 1, 5: 1, 191: 1}\n Year: 1918 Prime {2: 1, 7: 1, 137: 1}\n Year: 1930 Prime {2: 1, 5: 1, 193: 1}\n Year: 1946 Prime {2: 1, 7: 1, 139: 1}\n Year: 1947 Prime {3: 1, 11: 1, 59: 1}\n Year: 1955 Prime {5: 1, 17: 1, 23: 1}\n Year: 1958 Prime {2: 1, 11: 1, 89: 1}\n Year: 1965 Prime {3: 1, 5: 1, 131: 1}\n Year: 1970 Prime {2: 1, 5: 1, 197: 1}\n Year: 1978 Prime {2: 1, 23: 1, 43: 1}\n Year: 1986 Prime {2: 1, 3: 1, 331: 1}\n Year: 1990 Prime {2: 1, 5: 1, 199: 1}\n Year: 2001 Prime {3: 1, 23: 1, 29: 1}\n Year: 2006 Prime {2: 1, 17: 1, 59: 1}\n Year: 2013 Prime {3: 1, 11: 1, 61: 1}\n Year: 2014 Prime {2: 1, 19: 1, 53: 1}\n Year: 2015 Prime {5: 1, 13: 1, 31: 1}\n\n\n\n```python\nfactorint(1234567890123456789012345678913)\n```\n\n\n\n\n {13: 1, 199: 1, 263: 1, 467: 1, 184567: 1, 67947437: 1, 309826661: 1}\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "f0c52a9c5badcb9b3f3fabac0a85a0174da3adb3", "size": 6629, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "sympy.ipynb", "max_stars_repo_name": "ShinjiKatoA16/UCSY-sw-eng", 
"max_stars_repo_head_hexsha": "ef8375522e1c18642b05039cf24ec91053c829ff", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "sympy.ipynb", "max_issues_repo_name": "ShinjiKatoA16/UCSY-sw-eng", "max_issues_repo_head_hexsha": "ef8375522e1c18642b05039cf24ec91053c829ff", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "sympy.ipynb", "max_forks_repo_name": "ShinjiKatoA16/UCSY-sw-eng", "max_forks_repo_head_hexsha": "ef8375522e1c18642b05039cf24ec91053c829ff", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 23.590747331, "max_line_length": 192, "alphanum_fraction": 0.4866495701, "converted": true, "num_tokens": 1431, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9559813488829418, "lm_q2_score": 0.8723473630627235, "lm_q1q2_score": 0.8339478088351798}} {"text": "# **Una breve introducci\u00f3n al ecosisteam de Python**\n\n
    \n\n*Por: Martin Vuelta ([zodiacfireworks](https://github.com/zodiacfireworks))*\n\n*Email:* `martin.vuelta@softbutterfly.io`\n\n
    \n\n## 1. \u00bfQu\u00e9 es Python?\n\n Python is a programming language that lets you work more quickly and integrate your systems more effectively.\n\n### *Ejemplos*\n\na. Hola Mundo\n\n\n```python\nprint(\"Hola Mundo!\")\n```\n\nb. Saludos\n\n\n```python\nname = input(\"What's your name? \")\ngreetings = f\"Hello {name}!\"\nprint(greetings)\n```\n\n## 2. M\u00f3dulos\n\n Consider a module to be the same as a code library. A file or set of files containing a set of functions you want to include in your application.\n\n### *Ejemplos*\n\na. Gravity\n\n\n```python\nfrom math import pi, sin, sqrt\n\ndef g(phi):\n sin2phi = sin(phi) ** 2\n return 9.7803253359 * ( 1 + 0.001931850400 * sin2phi ) / sqrt(1 - 0.006694384442 * sin2phi)\n\nlatitude_deg = input(\"What's your latitude? [deg]\")\nlatitude_rad = float(latitude_deg) * pi / 180\ngravity = f\"La gravedad a {latitude_deg}deg es de {g(latitude_rad)} m/s^2\"\nprint(gravity)\n```\n\n## 3. Paquetes\n\n A package, in essence, is a module or a set of modules prepared to be distributed. The most common way of distribution is through the Python Package Index (PyPI).\n\n\n\n### 3.1. Paquetes comunes en el Python club\n\n#### Numpy\n\n Base N-dimensional array package\n\n##### *Ejemplos*\n\n\n```python\nimport numpy as np\n\nsample_array = np.array([(1.5,2,3), (4,5,6)])\nsample_array\n```\n\n\n```python\nimport numpy as np\n\nsample_array = np.array([(1.5,2,3), (4,5,6)])\nprint(sample_array)\n```\n\n\n```python\nzeros_array = np.zeros((3,4))\nprint(\"Array of zeros\")\nprint(zeros_array)\nprint()\n\nones_array = np.ones((2,3,4), dtype=np.int16)\nprint(\"Array of ones\")\nprint(ones_array)\nprint()\n\nempty_array = np.empty((2,3,4))\nprint(\"Empty array\")\nprint(empty_array)\n```\n\n### Scipy \n\n Fundamental library for scientific computing\n\n##### *Ejemplos*\n\n\n```python\nfrom numpy import poly1d\n\np = poly1d([3,4,5])\np\n```\n\n\n```python\np = poly1d([3,4,5])\nprint(\"Polinomio\")\nprint(p)\nprint()\nprint(\"Coeficientes\")\nprint(p.coeffs)\nprint()\nprint(\"Raices\")\nprint(p.roots)\nprint()\nprint(\"Integrate\")\nprint(p.integ())\nprint()\nprint(\"Detivative\")\nprint(p.deriv())\n```\n\n\n```python\nimport numpy as np\nfrom scipy.fft import fft, ifft\nx = np.array([1.0, 2.0, 1.0, -1.0, 1.5])\ny = fft(x)\nyinv = ifft(y)\n\nprint(x)\nprint(y)\nprint(yinv)\n```\n\n#### Matplotlib y seaborn\n\n Matplotlib: Comprehensive 2-D plotting\n\n Seaborn: Statistical data visualization\n\n##### *Ejemplos*\n\n\n```python\n%matplotlib inline\n```\n\n\n```python\nimport numpy as np\nfrom scipy.fft import fft\nimport matplotlib.pyplot as plt\n\nN = 600\nT = 1.0 / 800.0\nx = np.linspace(0.0, N*T, N)\ny = np.sin(50.0*2.0*np.pi*x) + 0.5 * np.sin(80.0*2.0*np.pi*x)\nyf = fft(y)\nxf = np.linspace(0.0, 1.0/(2.0*T), N//2)\n\nplt.plot(xf, 2.0/N * np.abs(yf[0:N//2]))\nplt.grid()\nplt.show()\n```\n\n\n```python\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nsns.set(style=\"whitegrid\")\n\n# Load the example diamonds dataset\ndiamonds = sns.load_dataset(\"diamonds\")\n\n# Draw a scatter plot while assigning point colors and sizes to different\n# variables in the dataset\nf, ax = plt.subplots(figsize=(6.5, 6.5))\nsns.despine(f, left=True, bottom=True)\nclarity_ranking = [\"I1\", \"SI2\", \"SI1\", \"VS2\", \"VS1\", \"VVS2\", \"VVS1\", \"IF\"]\nsns.scatterplot(\n x=\"carat\", \n y=\"price\",\n hue=\"clarity\", \n size=\"depth\",\n palette=\"ch:r=-.2,d=.3_r\",\n hue_order=clarity_ranking,\n sizes=(1, 8), \n linewidth=0,\n data=diamonds, \n 
ax=ax\n)\nplt.show()\n```\n\n#### Pandas\n\n Data structures & analysis\n\n##### *Ejemplos*\n\n\n```python\nimport pandas as pd\n\ndataframe = pd.read_csv(\"../datasets/neos.csv\")\ndataframe\n```\n\n\n```python\ndataframe[\"Perihelion Distance\"].min() * 150 * (10 ** 6) / (12.7 * (10 ** 3) )\n```\n\n#### Sympy\n\n Symbolic mathematics\n\n##### *Ejemplos*\n\n\n```python\nfrom sympy import *\n\ninit_printing(use_unicode=True)\n```\n\n\n```python\nx = symbols('x')\n\nsolveset(3*x**2 + 4*x + 5, x)\n```\n\n\n```python\nfrom sympy import *\n\nf = symbols('f', cls=Function)\nx = symbols('x')\n```\n\n\n```python\nf(x)\n```\n\n\n```python\nf(x).diff(x)\n```\n\n\n```python\ndiffeq = Eq(f(x).diff(x, x) - 2*f(x).diff(x) + f(x), sin(x))\ndiffeq\n```\n\n\n```python\ndsolve(diffeq, f(x))\n```\n\n#### Scikit learn\n\n Machine Learning in Python\n\n##### *Ejemplos*\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n# Though the following import is not directly being used, it is required\n# for 3D projection to work\nfrom mpl_toolkits.mplot3d import Axes3D\n\nfrom sklearn.cluster import KMeans\nfrom sklearn import datasets\n\nnp.random.seed(5)\n\niris = datasets.load_iris()\nX = iris.data\ny = iris.target\n\nestimators = [\n ('k_means_iris_8', KMeans(n_clusters=8)),\n ('k_means_iris_3', KMeans(n_clusters=3)),\n ('k_means_iris_bad_init', KMeans(n_clusters=3, n_init=1, init='random'))\n]\n\nfignum = 1\ntitles = ['8 clusters', '3 clusters', '3 clusters, bad initialization']\n\nfor name, est in estimators:\n fig = plt.figure(fignum, figsize=(10, 10))\n ax = Axes3D(fig, rect=[0, 0, .95, 1], elev=48, azim=134)\n \n est.fit(X)\n labels = est.labels_\n\n ax.scatter(\n X[:, 3], \n X[:, 0], \n X[:, 2],\n c=labels.astype(np.float), \n edgecolor='k'\n )\n\n ax.w_xaxis.set_ticklabels([])\n ax.w_yaxis.set_ticklabels([])\n ax.w_zaxis.set_ticklabels([])\n ax.set_xlabel('Petal width')\n ax.set_ylabel('Sepal length')\n ax.set_zlabel('Petal length')\n ax.set_title(titles[fignum - 1])\n ax.dist = 12\n fignum = fignum + 1\n\n# Plot the ground truth\nfig = plt.figure(fignum, figsize=(10, 10))\nax = Axes3D(fig, rect=[0, 0, .95, 1], elev=48, azim=134)\n\nfor name, label in [('Setosa', 0), ('Versicolour', 1), ('Virginica', 2)]:\n ax.text3D(\n X[y == label, 3].mean(),\n X[y == label, 0].mean(),\n X[y == label, 2].mean() + 2, name,\n horizontalalignment='center',\n bbox=dict(alpha=.2, edgecolor='w', facecolor='w')\n )\n\n# Reorder the labels to have colors matching the cluster results\ny = np.choose(y, [1, 2, 0]).astype(np.float)\nax.scatter(X[:, 3], X[:, 0], X[:, 2], c=y, edgecolor='k')\n\nax.w_xaxis.set_ticklabels([])\nax.w_yaxis.set_ticklabels([])\nax.w_zaxis.set_ticklabels([])\nax.set_xlabel('Petal width')\nax.set_ylabel('Sepal length')\nax.set_zlabel('Petal length')\nax.set_title('Ground Truth')\nax.dist = 12\n\nplt.show()\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "e6aef97c81ebb00cba575d02c2c706bd97f0b14e", "size": 13987, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "week-2/1-libraries/Libraries.ipynb", "max_stars_repo_name": "PCPUNMSM/Summer-School-2020", "max_stars_repo_head_hexsha": "49fdf5b40a1f71e5da57836a98eb996d2beb17a1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 10, "max_stars_repo_stars_event_min_datetime": "2020-02-22T18:30:12.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-14T19:24:23.000Z", "max_issues_repo_path": "week-2/1-libraries/Libraries.ipynb", "max_issues_repo_name": "PCPUNMSM/Summer-School-2020", "max_issues_repo_head_hexsha": 
"49fdf5b40a1f71e5da57836a98eb996d2beb17a1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "week-2/1-libraries/Libraries.ipynb", "max_forks_repo_name": "PCPUNMSM/Summer-School-2020", "max_forks_repo_head_hexsha": "49fdf5b40a1f71e5da57836a98eb996d2beb17a1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2020-01-31T22:14:00.000Z", "max_forks_repo_forks_event_max_datetime": "2020-02-08T16:37:08.000Z", "avg_line_length": 20.9700149925, "max_line_length": 172, "alphanum_fraction": 0.4835919068, "converted": true, "num_tokens": 2032, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9304582593509315, "lm_q2_score": 0.8962513738264114, "lm_q1q2_score": 0.8339244932314038}} {"text": "#Hydrogen functions\nStart with some imports fro Symbolic Python library:\n\n\n```\nfrom sympy.physics.hydrogen import R_nl\nfrom sympy.functions.special.spherical_harmonics import Ynm\nfrom sympy import *\n```\n\nDefine some variables, radial, polar, azimuthal, time, and two frequencies:\n\n\n```\nvar(\"r theta phi t w1 w2\")\n```\n\n\n\n\n (r, theta, phi, t, w1, w2)\n\n\n\n## Look at a few of the radial equations and the spherical harmonics\nNotice that instead of Ylm the name is Ynm... the arguments to the function are still the quantum numbers `l` and `m`\n\n\n```\nR_nl(1, 0, r, 1) # the n = 1, l = 0 radial function\n```\n\n\n\n\n 2*exp(-r)\n\n\n\n\n```\nYnm(0,0,theta,phi).expand(func=True) # the l = 0, m = 0 spherical harmonic\n```\n\n\n\n\n 1/(2*sqrt(pi))\n\n\n\nWrite the equation for the $|nlm\\rangle = |100\\rangle$ state. Use the sympy method `.expand(func=True)` to convert to the actual expression. To create this state, we combine the Radial function and the Ylm function. Make sure to set n, l, and m to the correct values. The fourth argument to `R_nl` is `Z` which we set to 1 since we are talking about a 1-proton nucleus.\n\nThe combination of R_nl and Ynm should look like the following (replace N, L, and M with the appropriate values):\n\n`R_nl(N, L, r, 1)*Ynm(L, M, theta, phi).expand(func=True)`\n\n\n```\n# this is the |100> state:\npsi100 = R_nl(1, 0, r, 1)*Ynm(0,0,theta,phi).expand(func=True)\n```\n\n\n```\npsi100 # check to see how it looks as an expression\n```\n\n\n\n\n exp(-r)/sqrt(pi)\n\n\n\n##Integrating over all space\nRemember spherical coordinate integrals of function $f(r,\\theta,\\phi)$ over all space look like: $$\\int_0^\\infty\\int_0^\\pi\\int_0^{2\\pi}r^2\\sin(\\theta)drd\\theta d\\phi \\,\\,f(r,\\theta,\\phi)$$ so you alwasy need to add a factor of `r**2*sin(theta)` and then integrate `r` from 0 to infinity, `theta` from $0-\\pi$ and `phi` from $0-2\\pi$. As a check, you should integrate the square of the `psi100` wavefunction over all space to see that it equals 1 (i.e. it is normalized)\n\n\n```\nintegrate(r**2*sin(theta) * (psi100)**2 ,(r,0,oo),(theta,0,pi),(phi,0,2*pi))\n```\n\n\n\n\n 1\n\n\n\n## Now do the $|210\\rangle$ state:\n\n\n```\npsi210 = R_nl(2, 1, r, 1)*Ynm(1,0,theta,phi).expand(func=True)\n```\n\n\n```\npsi210 # check how it looks\n```\n\n\n\n\n sqrt(2)*r*exp(-r/2)*cos(theta)/(8*sqrt(pi))\n\n\n\nNote, if you compare these to listed solutions (for example at http://hyperphysics.phy-astr.gsu.edu/hbase/quantum/hydwf.html#c3) you see that there are not any factors of $a_0$. This is because the `R_nl` function is defined in units of $a_0$. 
$a_0$ is the Bohr Radius: http://en.wikipedia.org/wiki/Bohr_radius\n\n\n```\npsi211 = R_nl(2, 1, r, 1)*Ynm(1,1,theta,phi).expand(func=True)\npsi211\n```\n\n\n\n\n -r*sqrt(-cos(theta)**2 + 1)*exp(-r/2)*exp(I*phi)/(8*sqrt(pi))\n\n\n\n## Now calculate $\\langle z \\rangle$:\nTo calculate $\\langle z \\rangle$ we need to convert to spherical coordinates: $z = r\\cos\\theta$. The terms in the following integral are the $r^2\\sin\\theta$ then $z$ (in spherical coords) then the wave function squared.\n\n\n```\nexpect = integrate(r**2*sin(theta)* (r*cos(theta)) * (psi100*psi100),(r,0,oo),(theta,0,pi),(phi,0,2*pi))\n```\n\n\n```\nexpect\n```\n\n\n\n\n 0\n\n\n\nNo surprise, the average z position of the electron in the hydrogen atom is 0.\n\n## Now for problem 13.21\nfind $\\langle z \\rangle(t)$. Use the same integral, but add a time-dependent piece to each term in the wavefunction, add them together and multiply by the complex conjugate.\n\n\n```\npsi = 1/sqrt(2)*(psi100*exp(1j*w1*t) + psi210*exp(1j*w2*t))\npsi_conj = 1/sqrt(2)*(psi100*exp(-1j*w1*t) + psi210*exp(-1j*w2*t))\n```\n\n\n```\nexpect2 = integrate(r**2*sin(theta)* (r*cos(theta)) * psi*psi_conj,(r,0,oo),(theta,0,pi),(phi,0,2*pi))\n```\n\n\n```\nexpect2\n```\n\n\n\n\n 2*pi*(32*sqrt(2)*exp(-1.0*I*t*w1)*exp(1.0*I*t*w2)/(243*pi) + 32*sqrt(2)*exp(1.0*I*t*w1)*exp(-1.0*I*t*w2)/(243*pi))\n\n\n\nWe need to interpret this result. First you should show that this expression is simply a constant amplitude factor times $\\cos((w2-w1)t)$, in other words $\\langle z \\rangle$ oscillates at frequency `w2-w1`.\n\n## Your assignment:\n\nExplore other combinations of states and draw conclusions about the z behavior from the results. You may not be able to get these expressions to simplify, but the important thing is to look for the time dependence and simplify that part.\n\n- Does $\\langle z \\rangle$ oscillate for any combination of two Hydrogen states $|nlm\\rangle$?\n- Are there restrictions on what n values give oscillating $\\langle z \\rangle$ expressions? 
(hint, to keep it simple, always let one state be the n=1 state)\n- How does $\\langle z \\rangle$ change with different l and m values are used in the state?\n\nHints for interpreting your results:\n- What are the relavant frequencies in your expression for $\\langle z \\rangle$ and why?\n- Simplify one of your $\\langle z \\rangle$ expressions and write the time dependence in terms of the frequencies w2 and w1.\n\n\n```\n\n```\n", "meta": {"hexsha": "fdb24d0dfbf25bcbff8791b2b5b401a2077285e9", "size": 10694, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Chapter 13 - Hydrogen.ipynb", "max_stars_repo_name": "mniehus/QMlabs", "max_stars_repo_head_hexsha": "bf9ba41bafdac7625ce1810d67d79072219b32b9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 28, "max_stars_repo_stars_event_min_datetime": "2016-08-21T16:59:18.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-21T03:22:14.000Z", "max_issues_repo_path": "Chapter 13 - Hydrogen.ipynb", "max_issues_repo_name": "Ashardalon125/Quantum-Resources", "max_issues_repo_head_hexsha": "8dd95f6316758c3169d5c2db60e0e14aa357ad75", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-04-23T16:37:40.000Z", "max_issues_repo_issues_event_max_datetime": "2020-04-23T17:42:01.000Z", "max_forks_repo_path": "Chapter 13 - Hydrogen.ipynb", "max_forks_repo_name": "Ashardalon125/Quantum-Resources", "max_forks_repo_head_hexsha": "8dd95f6316758c3169d5c2db60e0e14aa357ad75", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 16, "max_forks_repo_forks_event_min_datetime": "2015-09-27T04:35:02.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-16T23:03:34.000Z", "avg_line_length": 27.6330749354, "max_line_length": 495, "alphanum_fraction": 0.5151486815, "converted": true, "num_tokens": 1518, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9304582574225517, "lm_q2_score": 0.8962513648201267, "lm_q1q2_score": 0.8339244831231187}} {"text": "## TNK\n\nTanaka suggested the following two-variable problem:\n\n**Definition**\n\n\\begin{equation}\n\\newcommand{\\boldx}{\\mathbf{x}}\n\\begin{array}\n\\mbox{Minimize} & f_1(\\boldx) = x_1, \\\\\n\\mbox{Minimize} & f_2(\\boldx) = x_2, \\\\\n\\mbox{subject to} & C_1(\\boldx) \\equiv x_1^2 + x_2^2 - 1 - \n0.1\\cos \\left(16\\arctan \\frac{x_1}{x_2}\\right) \\geq 0, \\\\\n& C_2(\\boldx) \\equiv (x_1-0.5)^2 + (x_2-0.5)^2 \\leq 0.5,\\\\\n& 0 \\leq x_1 \\leq \\pi, \\\\\n& 0 \\leq x_2 \\leq \\pi.\n\\end{array}\n\\end{equation}\n\n**Optimum**\n\nSince $f_1=x_1$ and $f_2=x_2$, the feasible objective space is also\nthe same as the feasible decision variable space. The unconstrained \ndecision variable space consists of all solutions in the square\n$0\\leq (x_1,x_2)\\leq \\pi$. Thus, the only unconstrained Pareto-optimal \nsolution is $x_1^{\\ast}=x_2^{\\ast}=0$. \nHowever, the inclusion of the first constraint makes this solution\ninfeasible. The constrained Pareto-optimal solutions lie on the boundary\nof the first constraint. Since the constraint function is periodic and\nthe second constraint function must also be satisfied,\nnot all solutions on the boundary of the first constraint are Pareto-optimal. 
The \nPareto-optimal set is disconnected.\nSince the Pareto-optimal\nsolutions lie on a nonlinear constraint surface, an optimization\nalgorithm may have difficulty in finding a good spread of solutions across\nall of the discontinuous Pareto-optimal sets.\n\n**Plot**\n\n\n```python\nfrom pymoo.factory import get_problem\nfrom pymoo.util.plotting import plot\n\nproblem = get_problem(\"tnk\")\nplot(problem.pareto_front(), no_fill=True)\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "90ad0e856f12c15ff3cd0707f61d03f5e3b33687", "size": 35454, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "doc/source/problems/multi/tnk.ipynb", "max_stars_repo_name": "AIasd/pymoo", "max_stars_repo_head_hexsha": "08705ca866367d9fab675c30ffe585c837df9654", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 11, "max_stars_repo_stars_event_min_datetime": "2018-05-22T17:38:02.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-28T03:34:33.000Z", "max_issues_repo_path": "doc/source/problems/multi/tnk.ipynb", "max_issues_repo_name": "AIasd/pymoo", "max_issues_repo_head_hexsha": "08705ca866367d9fab675c30ffe585c837df9654", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 15, "max_issues_repo_issues_event_min_datetime": "2022-01-03T19:36:36.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-30T03:57:58.000Z", "max_forks_repo_path": "doc/source/problems/multi/tnk.ipynb", "max_forks_repo_name": "AIasd/pymoo", "max_forks_repo_head_hexsha": "08705ca866367d9fab675c30ffe585c837df9654", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2021-11-22T08:01:47.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-11T08:53:58.000Z", "avg_line_length": 246.2083333333, "max_line_length": 31936, "alphanum_fraction": 0.9196141479, "converted": true, "num_tokens": 501, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9304582497090321, "lm_q2_score": 0.8962513655129177, "lm_q1q2_score": 0.8339244768544795}} {"text": "```python\n# imports\nimport sympy as sm\nimport numpy as np\nimport math\nimport matplotlib.pyplot as plt\n%matplotlib inline\n```\n\nIn order to recast the Huber loss as a valid likelihood function to be used in likelihood ratio tests, we need to derive the associated normalisation constant. We can achieve this using sympy, a symbolic computation package.\n\nUsing the same notation as in the PyTorchDIA paper, we can start by defining the class of distributions,\n\n$f(x) = Q \\times \\frac{1}{\\sigma} \\times \\exp({-\\rho(x)})$.\n\n$Q$ is the normalisation constant we're interested in deriving. In order to find $Q$, we need to integrate everything to its right over the entire range of $x$. Setting $Q$ equal to the reciprocal of the result of this integration will ensure that when $f(x)$ is integrated over the entire range of $x$ the result will always be $1$ i.e. $f(x)$ is now a valid probability distribution.\n\nWe'll start with the Gaussian case, where\n\n$\\rho(x) = \\frac{1}{2}\\frac{x^2}{\\sigma^2}\\;.$\n\nN.B. Under the above definition for $f(x)$, we already have the $\\sigma$ term that turns up in the usual derivation of the normalisation constant for a Gaussian, so integrating this expression should give us $\\sqrt{2\\pi}$. 
Let's use sympy to do this for us.\n\n\n```python\n# first, test this out for the gaussian case\nx, sigma = sm.symbols(\"x, sigma\", real=True, positive=True)\nrho = ((x / sigma) ** 2 )/ 2\nloss = sm.exp(-rho) / sigma\nnorm = 2*sm.integrate(loss, (x, 0, sm.oo))\nsm.simplify(norm)\n```\n\n\n\n\n$\\displaystyle \\sqrt{2} \\sqrt{\\pi}$\n\n\n\nOK great, that seems to work! Now let's do the same for the Huber loss.\n\n\\begin{equation}\n \\rho_{\\text{Huber}, c}(x) =\n \\begin{cases}\n \\frac{1}{2}\\frac{x^2}{\\sigma^2}, & \\text{for } |\\frac{x}{\\sigma}|\\leq c \\\\\n c(|\\frac{x}{\\sigma}| - \\frac{1}{2}c), & \\text{for } |\\frac{x}{\\sigma}| > c \\;.\n \\end{cases}\n \\label{eq:huber_loss}\n\\end{equation}\n\nI don't expect the result will be quite as tidy as above!\n\n\n```python\n# normalisation constant for Huber 'likelihood'\nx, c, sigma = sm.symbols(\"x, c, sigma\", real=True, positive=True)\nrho = sm.Piecewise(\n ((x / sigma) ** 2 / 2, (x / sigma) <= c),\n (c * (sm.Abs(x / sigma) - c / 2), ((x / sigma) > c))\n)\nloss = sm.exp(-rho) / sigma\nnorm = 2 * sm.integrate(loss, (x, 0, sm.oo))\nsm.simplify(norm)\n```\n\n\n\n\n$\\displaystyle \\sqrt{2} \\sqrt{\\pi} \\operatorname{erf}{\\left(\\frac{\\sqrt{2} c}{2} \\right)} + \\frac{2 e^{- \\frac{c^{2}}{2}}}{c}$\n\n\n\n\n```python\n#const = (np.sqrt(2 * np.pi) * math.erf(c / np.sqrt(2))) + ((2 / c) * np.exp(-0.5 * c**2)) \n```\n\nOK, on to the meat of this notebook. I'm not certain that it makes sense to compare a Huber likelihood against a Gaussian, even for instances in which there are no outlying data points. We can probe this question with a toy example; fitting a straight line to data with known Gaussian uncertainties.\n\nIt may be that the Huber likelihood (evaluated at the MLE) is strongly dependent on the tuning paramter, $c$. In the case where $c$ tends to infinity, we should expect the same value for the likelihood as for the Gaussian case. In this special case, not only are all residuals from the model treated as 'inliers', but note too that the normalisation constant we found above would also tend to $\\sqrt{2\\pi}$; the error function in the first term becomes unity and the latter term becomes neglibily small. However, for useful, smaller values of $c$ e.g. 1.345, in which 'outlying' residuals are treated linearly rather than quadratically, should we expect to get roughly the same values for the likelihoods? In other words, does the linear treatment of outliers balance with the change to the normalisation constant, which is itself dependent on $c$. I don't see how it could... 
but's let's verify this numerically.\n\n\n```python\n## Generate some mock data\n\n# The linear model with slope 2 and intercept 1:\ntrue_params = [2.0, 1.0]\n\n# points drawn from true model\nnp.random.seed(42)\nx = np.sort(np.random.uniform(-2, 2, 30))\nyerr = 0.4 * np.ones_like(x)\ny = true_params[0] * x + true_params[1] + yerr * np.random.randn(len(x))\n\n# true line\nx0 = np.linspace(-2.1, 2.1, 200)\ny0 = np.dot(np.vander(x0, 2), true_params)\n\n# plot\nplt.errorbar(x, y, yerr=yerr, fmt=\",k\", ms=0, capsize=0, lw=1, zorder=999)\nplt.scatter(x, y, marker=\"s\", s=22, c=\"k\", zorder=1000)\nplt.plot(x0, y0, c='k')\n```\n\n\n```python\n# analytic solution - MLE for a priori known Gaussian noise\ndef linear_regression(x, y, yerr):\n A = np.vander(x, 2)\n result = np.linalg.solve(np.dot(A.T, A / yerr[:, None]**2), np.dot(A.T, y / yerr**2))\n return result\n\nres = linear_regression(x, y, yerr)\nm, b = res\n```\n\n\n```python\n# optimise Huber loss\n\nimport torch\n# initialise model parameters (y = m*x + b)\nm_robust = torch.nn.Parameter(1e-3*torch.ones(1), requires_grad = True)\nb_robust = torch.nn.Parameter(1e-3*torch.ones(1), requires_grad = True)\nparams_robust = list([m_robust, b_robust])\n\n# negative log-likelihood for Huber likelihood (excluding the irrelevant normalisation constant)\ndef nll_Huber(model, data, yerr, c):\n \n # ln_sigma is same as above\n ln_sigma = torch.log(yerr).sum()\n\n # define inliers and outliers with a threshold, c\n resid = torch.abs((model - data)/yerr)\n cond1 = resid <= c\n cond2 = resid > c\n \n inliers = ((model - data)/yerr)[cond1]\n outliers = ((model - data)/yerr)[cond2]\n \n # Huber loss can be thought of as a hybrid of l2 and l1 loss\n # apply l2 (i.e. normal) loss to inliers, and l1 to outliers\n l2 = 0.5*torch.pow(inliers, 2).sum()\n l1 = (c * torch.abs(outliers) - (0.5 * c**2)).sum()\n \n nll = ln_sigma + l2 + l1\n \n return nll\n\n# pass paramterers to optimizer\noptimizer_robust = torch.optim.Adam(params_robust, lr = 0.01)\n```\n\n\n```python\n# tuning parameter, c\nc = 1.345\n\nxt, yt, yerrt = torch.from_numpy(x), torch.from_numpy(y), torch.from_numpy(yerr)\n\nfor epoch in range(2000): \n\n model = m_robust*xt + b_robust\n \n # negative loglikelihood\n loss = nll_Huber(model, yt, yerrt, c)\n \n optimizer_robust.zero_grad() \n loss.backward() \n optimizer_robust.step()\n \n if np.mod(epoch, 100) == 0:\n # You can see the alpha+scale parameters moving around.\n print('{:<4}: loss={:03f} m={:03f} b={:03f}'.format(\n epoch, loss.item(), m_robust.item(), b_robust.item())) \n```\n\n 0 : loss=-16.117150 m=2.061463 b=0.942204\n 100 : loss=-16.138877 m=2.075715 b=0.943876\n 200 : loss=-16.138879 m=2.075600 b=0.943839\n 300 : loss=-16.138879 m=2.075600 b=0.943839\n 400 : loss=-16.138879 m=2.075600 b=0.943839\n 500 : loss=-16.138879 m=2.075600 b=0.943839\n 600 : loss=-16.138879 m=2.075600 b=0.943839\n 700 : loss=-16.138879 m=2.075600 b=0.943839\n 800 : loss=-16.138879 m=2.075600 b=0.943839\n 900 : loss=-16.138879 m=2.075600 b=0.943839\n 1000: loss=-16.138879 m=2.075600 b=0.943839\n 1100: loss=-16.138879 m=2.075600 b=0.943839\n 1200: loss=-16.138879 m=2.075600 b=0.943839\n 1300: loss=-16.138879 m=2.075600 b=0.943839\n 1400: loss=-16.138879 m=2.075600 b=0.943839\n 1500: loss=-16.138879 m=2.075600 b=0.943839\n 1600: loss=-16.138879 m=2.075600 b=0.943839\n 1700: loss=-16.138879 m=2.075600 b=0.943839\n 1800: loss=-16.138879 m=2.075601 b=0.943839\n 1900: loss=-16.138879 m=2.075601 b=0.943839\n\n\n\n```python\nm_r, b_r = m_robust.item(), 
b_robust.item()\n```\n\n\n```python\ndef evaluate_gaussian_log_likelihood(data, model, var):\n print('\\nGaussian log-likelihood')\n chi2 = 0.5 * ((data - model)**2 / var).sum()\n lnsigma = np.log(np.sqrt(var)).sum()\n norm_constant = len(data.flatten()) * 0.5 * np.log(2 * np.pi)\n print('chi2, lnsigma, norm_constant:', chi2, lnsigma, norm_constant)\n return -(chi2 + lnsigma + norm_constant)\n\ndef evaluate_huber_log_likelihood(data, model, var, c):\n print('\\nHuber log-likelihood')\n \n ## PyTorchDIA - 'Huber' likelihood\n sigma = np.sqrt(var)\n ln_sigma = np.log(sigma).sum()\n\n # gaussian when (model - targ)/sigma <= c\n # absolute deviation when (model - targ)/sigma > c\n cond1 = np.abs((model - data)/sigma) <= c\n cond2 = np.abs((model - data)/sigma) > c\n inliers = ((model - data)/sigma)[cond1]\n outliers = ((model - data)/sigma)[cond2]\n\n l2 = 0.5*(np.power(inliers, 2)).sum()\n l1 = (c *(np.abs(outliers)) - (0.5 * c**2)).sum()\n\n constant = (np.sqrt(2 * np.pi) * math.erf(c / np.sqrt(2))) + ((2 / c) * np.exp(-0.5 * c**2)) \n norm_constant = len(data.flatten()) * np.log(constant)\n ll = -(l2 + l1 + ln_sigma + norm_constant)\n print('l2, l1, ln_sigma, norm_constant:', l2, l1, ln_sigma, norm_constant)\n return ll\n```\n\n\n```python\n# MLE models\nmodel = m*x + b # Gaussian\nmodel_r = m_r*x + b_r # Huber\n```\n\n\n```python\nll_gaussian = evaluate_gaussian_log_likelihood(x, model, yerr**2)\nprint(ll_gaussian)\nll_huber = evaluate_huber_log_likelihood(x, model_r, yerr**2, c=c)\nprint(ll_huber)\n```\n\n \n Gaussian log-likelihood\n chi2, lnsigma, norm_constant: 173.83346014331022 -27.488721956224655 27.56815599614018\n -173.91289418322575\n \n Huber log-likelihood\n l2, l1, ln_sigma, norm_constant: 4.538741913408627 81.53652455056039 -27.488721956224655 29.357945837155103\n -87.94449034489946\n\n\nUnless I've blundered, it seems like the Huber log-likelihood is always going to exceed the Gaussian due to how the numerics are treated. 
If this is indeed correct, then any comparison between the two would be meaningless.\n\n\n```python\n\n```\n", "meta": {"hexsha": "39267b158d3e2ae2a3c4ec8904266a5a35e78a22", "size": 25346, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Huber Loss recast as a valid Likelihood.ipynb", "max_stars_repo_name": "jah1994/PyTorchDIA", "max_stars_repo_head_hexsha": "205ad68a84ce4044009cd0ed6470e3627d250e27", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 8, "max_stars_repo_stars_event_min_datetime": "2021-04-29T09:00:27.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-04T03:01:08.000Z", "max_issues_repo_path": "Huber Loss recast as a valid Likelihood.ipynb", "max_issues_repo_name": "jah1994/PyTorchDIA", "max_issues_repo_head_hexsha": "205ad68a84ce4044009cd0ed6470e3627d250e27", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-10-11T12:26:37.000Z", "max_issues_repo_issues_event_max_datetime": "2020-10-11T14:02:49.000Z", "max_forks_repo_path": "Huber Loss recast as a valid Likelihood.ipynb", "max_forks_repo_name": "jah1994/PyTorchDIA", "max_forks_repo_head_hexsha": "205ad68a84ce4044009cd0ed6470e3627d250e27", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-04-29T17:09:03.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-15T17:30:31.000Z", "avg_line_length": 60.3476190476, "max_line_length": 10996, "alphanum_fraction": 0.7293853073, "converted": true, "num_tokens": 3110, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9399133531922388, "lm_q2_score": 0.8872045862611166, "lm_q1q2_score": 0.833895437640219}} {"text": "```python\nimport sympy\nsympy.init_printing()\nimport verify_2\n```\n\n# Section 1 - Differentiation\n\nWe can derive symbolic expressions\n\n\n```python\ndef demo_1a():\n x = sympy.Symbol('x',real=True)\n func = 1/(sympy.exp(x)-1)\n display([func, func.diff(x), func.diff(x,2).simplify()])\ndemo_1a()\n```\n\n__Exercise a__\n\nAn airline has a limit on the sum of the dimensions of checked luggage (height+width+length) $H+W+L=S$. Also, from asthetic reasons, the height has to be twice as large as the width. What is the width of a suitcase that meets this criterion that would maximised the volume?\n\n\n```python\ndef exercise_1a():\n \n H = sympy.Symbol('H', positive=True) # Height\n W = sympy.Symbol('W', positive=True) # Width\n L = sympy.Symbol('L', positive=True) # Length\n S = sympy.Symbol('S', positive=True) # Sum of all dimensions\n cond1 = sympy.Eq(S,H+W+L)\n cond2 = sympy.Eq(H,2*W)\n \n temp = H*W*L\n temp = temp.subs(sympy.solve(cond1,L,dict=True)[0])\n temp = temp.subs(sympy.solve(cond2,H,dict=True)[0])\n display(temp)\n temp = temp.diff(W)\n display(temp)\n temp = sympy.solve(temp, W)[0]\n display(temp)\n \n # Enter answer here\n answer = temp\n display(answer)\n print(verify_2.verify_1a(answer))\nexercise_1a()\n```\n\n__Exercise b__\n\nIn this exercise we reproduce the reflection and transmission coefficients of a quantum mechanical particle wave according to the Schroedinger equation. The potential is $V\\left(x\\right) = V_0 \\Theta \\left(x\\right)$, so waves are coming from the left and scatter off from the potential step. 
Use continuity and differentiability to\n\nI) Find the reflection coefficient\n\nII) Find the transmission coefficent\n\n\n```python\ndef exercise_1b():\n \n m = sympy.Symbol('m', positive=True) # Particle mass\n h = sympy.Symbol('h', positive=True) # Planck constant\n E = sympy.Symbol('E', positive=True) # Particle energy\n V_0 = sympy.Symbol('V_0', positive=True) # Potential step\n R = sympy.Symbol('R') # Reflection coefficient\n T = sympy.Symbol('T') # Transmission coefficient\n x = sympy.Symbol('x', real=True) # Position\n left_wave = sympy.exp(sympy.I*x*sympy.sqrt(2*m*E)/h) + R*sympy.exp(-sympy.I*x*sympy.sqrt(2*m*E)/h)\n right_wave = T*sympy.exp(sympy.I*x*sympy.sqrt(2*m*(E-V_0))/h)\n \n # Enter answer here\n answer_I = 0\n print(verify_2.verify_1bI(answer_I))\n \n answer_II = 0\n print(verify_2.verify_1bII(answer_II))\n\nexercise_1b()\n```\n\n False\n False\n\n\nsympy will delay the excution of derivatives as much as possible. You can force it to perform differentiations with the doit command\n\n\n```python\ndef demo_1b():\n \n f = sympy.Function('f')\n g = sympy.Function('g')\n x = sympy.Symbol('x')\n temp = f(x)\n display(temp)\n temp = temp.diff(x)\n display(temp)\n temp = temp.subs(f(x),g(x)**2)\n display(temp)\n temp = temp.doit()\n display(temp)\ndemo_1b()\n```\n\nTo create an unevaludated derivative, use the Derivative class\n\n\n```python\ndef demo_1c():\n \n x = sympy.Symbol('x', real=True)\n func = x**2\n temp = sympy.Derivative(func,x)\n display(temp)\n display(temp.doit())\n \ndemo_1c()\n```\n\n# Section 2 - Integration\n\nIndefinite integrals\n\n\n```python\ndef demo_2a():\n x = sympy.Symbol('x', real=True)\n func = x**2+x*sympy.cos(2*x)\n display([func, func.integrate(x)])\ndemo_2a()\n```\n\nDefinite integrals\n\n\n```python\ndef demo_2b():\n x = sympy.Symbol('x', real=True)\n func = x**3+x**4\n display(func.integrate((x,5,6)))\ndemo_2b()\n```\n\nImproper integrals\n\n\n```python\ndef demo_2c():\n x = sympy.Symbol('x', real=True)\n func = sympy.exp(-x**2)\n display(func.integrate((x,0,sympy.oo)))\ndemo_2c()\n```\n\nAgain, sympy can't do magic, so if an integral is not straightforward sympy will fail. As a rule of thumb, if you won't be able to do the integral, so sympy won't as well. Another issue is that the integral might give branching results depending on the values of the integration parameters\n\n\n```python\ndef demo_2d():\n x = sympy.Symbol('x', real=True)\n a = sympy.Symbol('a', real=True)\n func = x**a\n display(func.integrate(x))\ndemo_2d()\n```\n\n\n$\\displaystyle \\begin{cases} \\frac{x^{a + 1}}{a + 1} & \\text{for}\\: a \\neq -1 \\\\\\log{\\left(x \\right)} & \\text{otherwise} \\end{cases}$\n\n\nThe ambiguity can be eliminated by imposing qualifiers on the variables\n\n\n```python\ndef demo_2e():\n x = sympy.Symbol('x', real=True)\n a = sympy.Symbol('a', positive=True)\n func = x**a\n display(func.integrate(x))\ndemo_2e()\n```\n\n__Exercise a__\n\nA massless particle moves in a straight line next to a gravitating mass $M$. 
Neglecting the changes to the particle's trajectory, find the net change in downward velocity\n\n\n```python\ndef exercise_2a():\n \n G = sympy.Symbol('G', positive=True) # Gravitation constant\n M = sympy.Symbol('M', positive=True) # Mass\n t = sympy.Symbol('t', positive=True) # Time\n b = sympy.Symbol('b', positive=True) # Impact parameter\n v = sympy.Symbol('v', positive=True) # Particle velocity\n \n acceleration = G*M*b/(v**2*t**2+b**2)**sympy.Rational(3,2)\n \n # Answer\n answer = 0\n display(answer)\n print(verify_2.verify_2a(answer))\nexercise_2a()\n```\n\n# Section 3 - Series Expansion\n\nExpansing a function as a power series\n\n\n```python\ndef demo_3a():\n x = sympy.Symbol('x', real=True)\n func = sympy.exp(x)\n display(sympy.series(func,x,0,6))\ndemo_3a()\n```\n\nTo get rid of that annoying big O at the end use removeO\n\n\n```python\ndef demo_3b():\n x = sympy.Symbol('x', real=True)\n func = sympy.exp(x)\n display(sympy.series(func,x,0,6).removeO())\ndemo_3b()\n```\n\nYou can also expand about infinity\n\n\n```python\ndef demo_3c():\n x = sympy.Symbol('x', real=True)\n func = 1/(x+1)\n display(sympy.series(func,x,sympy.oo,6).removeO())\ndemo_3c()\n```\n\nSeries expansion doesn't handle exponentials and logarithms very well. Also, if we have an unknown power $x^{\\alpha}$ then sympy wouldn't know how to treat it.\n\n__Exercise a__\n\nTwo negative charge $-Q$ are placed at $x=\\pm s$ and $y=0$. One positive charge $2 Q$ is placed at the origing. Find the leading term in the potential at large distances $r \\gg s$\n\n\n```python\ndef exercise_3a():\n \n s = sympy.Symbol('s', positive=True) # Separation\n Q = sympy.Symbol('Q', positive=True) # Charge\n r = sympy.Symbol('r', positive=True) # Radius\n q = sympy.Symbol('theta', positive=True) # Angle\n \n potential = 2*Q/r-Q/sympy.sqrt(r**2+s**2+2*s*r*sympy.cos(q))-Q/sympy.sqrt(r**2+s**2-2*s*r*sympy.cos(q))\n print('potential')\n display(potential)\n \n # Answer\n print('expansion')\n answer = 0\n display(answer)\n print(verify_2.verify_3a(answer))\n \nexercise_3a()\n```\n\n# Section 4 - Limits\n\nSympy can also calculate limits\n\n\n```python\ndef demo_4a():\n \n x = sympy.Symbol('x', real=True)\n func = sympy.sin(3*x**2)/(sympy.exp(4*sympy.log(1+x)**2)-1)\n display([func,sympy.limit(func,x,0)])\ndemo_4a()\n```\n\nIt is also possible to define the direction of the limit\n\n\n```python\ndef demo_4b():\n \n x = sympy.Symbol('x', real=True)\n display([sympy.limit(1/x, x, 0, '+'),sympy.limit(1/x, x, 0, '-')])\ndemo_4b()\n```\n\nThe limit from Mean Girls!\n\n\n\n\n```python\ndef mean_girls_limit():\n \n x = sympy.Symbol('x')\n func = (sympy.log(1-x) - sympy.sin(x))/(1-sympy.cos(x)**2)\n display([func,sympy.limit(func,x,0,'+'),sympy.limit(func,x,0,'-')])\nmean_girls_limit()\n```\n\n__Exercise a__\n\nFind the limit $\\lim_{x\\rightarrow 0}\\frac{\\sin x}{x}$\n\n\n```python\ndef exercise_4a():\n \n x = sympy.Symbol('x', real=True)\n \n func = sympy.sin(x)/x\n \n answer = 0\n display(answer)\n print(verify_2.verify_4a(answer))\nexercise_4a()\n```\n\n# Section 5 - Integral Transforms\n\nFourier transform\n\n\n```python\ndef demo_5a():\n \n x = sympy.Symbol('x', real=True)\n k = sympy.Symbol('k', real=True)\n func = sympy.exp(-sympy.Abs(x))\n ft = sympy.fourier_transform(func,x,k)\n display([func,ft])\ndemo_5a()\n```\n\n__Exercise a__\n\nFind the Fourier transform of a Gaussian\n\n\n```python\ndef exercise_5a():\n \n x = sympy.Symbol('x', real=True)\n k = sympy.Symbol('k', real=True)\n func = sympy.exp(-x**2)\n \n answer = 0\n 
display(answer)\n print(verify_2.verify_5a(answer))\nexercise_5a()\n```\n\n# Section 6 - Differential Equations\n\nSolving differential equations\n\n\n```python\ndef demo_6a():\n \n x = sympy.Function('x', real=True)\n t = sympy.Symbol('t', real=True)\n \n eqn = sympy.Eq(x(t).diff(t,2), -x(t))\n sol = sympy.dsolve(eqn, x(t))\n \n display(eqn)\n display(sol)\n \ndemo_6a()\n```\n\nIncluding initial conditions\n\n\n```python\ndef demo_6b():\n \n x = sympy.Function('x', real=True)\n t = sympy.Symbol('t', real=True)\n \n init_cond = {x(0):0,\n x(t).diff(t).subs(t,0):1}\n eqn = sympy.Eq(x(t).diff(t,2), -x(t))\n sol = sympy.dsolve(eqn, x(t),\n ics=init_cond)\n \n display(eqn)\n display(init_cond)\n display(sol)\n \ndemo_6b()\n```\n\nCoupled differential equations\n\n\n```python\ndef demo_6c():\n \n x = sympy.Function('x', real=True)\n y = sympy.Function('y', real=True)\n t = sympy.Symbol('t', real=True)\n \n eqns = [sympy.Eq(x(t).diff(t),y(t)),\n sympy.Eq(y(t).diff(t),-x(t))]\n display(eqns)\n sol = sympy.dsolve(eqns, [x(t), y(t)])\n display(sol)\ndemo_6c()\n```\n\n__Exercise a__\n\nFind the solution to the equation $\\ddot{x} = - x + \\sin \\left(2 t\\right)$ with initial conditions $x \\left(0\\right) = \\dot{x} \\left(0\\right) = 0$\n\n\n```python\ndef exercise_6a():\n \n x = sympy.Function('x', real=True)\n t = sympy.Symbol('t', real=True)\n \n answer = sympy.Eq(x(t),0)\n display(answer)\n print(verify_2.verify_6a(answer))\n \nexercise_6a()\n```\n\n# Review Problems\n\n## Shapiro Time Delay\n\nWhen light passes closer to a massive object, it moves slower and takes longer to reach a distant observer. Let us consider a photon passing within a distance $b$ of a point mass $M$. Let us find the time as a function of the angle between the photon's velocity and position relative to the massive object. 
We can assume that $GM/c^2 b \\ll 1$, so we can only consider leading terms in mass.\n\n\n```python\ndef try_it_yourself_7a():\n \n ds = sympy.Symbol('ds', real=True) # Distance differential\n G = sympy.Symbol('G', positive=True) # Gravitation constant\n M = sympy.Symbol('M', positive=True) # Mass\n c = sympy.Symbol('c', positive=True) # Speed of light\n r = sympy.Symbol('r', positive=True) # Distance\n dt = sympy.Symbol('dt', positive=True) # Time differential\n dr = sympy.Symbol('dr', positive=True) # Radius differential\n df = sympy.Symbol(r'd\\phi', positive=True) # Angle differential\n f = sympy.Symbol('phi', positive=True) # Angle\n b = sympy.Symbol('b', positive=True) # Impact parameter\n xi = sympy.Symbol('xi', positive=True) # Auxiliary variable\n \n schwartzschild_metric = sympy.Eq(ds**2,(1-2*G*M/c**2/r)*c**2*dt**2 - dr**2/(1-2*G*M/c**2/r)-df**2*r**2)\n print('Schwartzschild metric')\n display(schwartzschild_metric)\n \n light_like = sympy.Eq(ds,0)\n print('Light like trajectory')\n display(light_like)\n \n trajctory = sympy.Eq(r*sympy.sin(f),b)\n print('trajectory')\n display(trajctory)\n \n print('shapiro time delay')\n # Enter you solution\n answer = 0\n display(answer)\n \ntry_it_yourself_7a()\n```\n\nThe solution\n\n\n```python\ndef demo_7a():\n \n ds = sympy.Symbol('ds', real=True) # Distance differential\n G = sympy.Symbol('G', positive=True) # Gravitation constant\n M = sympy.Symbol('M', positive=True) # Mass\n c = sympy.Symbol('c', positive=True) # Speed of light\n r = sympy.Symbol('r', positive=True) # Distance\n dt = sympy.Symbol('dt', positive=True) # Time differential\n dr = sympy.Symbol('dr', positive=True) # Radius differential\n df = sympy.Symbol(r'd\\phi', positive=True) # Angle differential\n f = sympy.Symbol('phi', positive=True) # Angle\n b = sympy.Symbol('b', positive=True) # Impact parameter\n xi = sympy.Symbol('xi', positive=True) # Auxiliary variable\n \n schwartzschild_metric = sympy.Eq(ds**2,(1-2*G*M/c**2/r)*c**2*dt**2 - dr**2/(1-2*G*M/c**2/r)-df**2*r**2)\n print('Schwartzschild metric')\n display(schwartzschild_metric)\n \n light_like = sympy.Eq(ds,0)\n print('Light like trajectory')\n display(light_like)\n \n trajctory = sympy.Eq(r*sympy.sin(f),b)\n print('trajectory')\n display(trajctory)\n \n print('shapiro time delay')\n # Enter you solution\n temp = schwartzschild_metric\n temp = temp.subs(light_like.lhs, light_like.rhs)\n temp = temp.subs(sympy.solve(trajctory,r,dict=True)[0])\n temp = temp.subs(dr,df*sympy.solve(trajctory,r)[0].diff(f))\n temp = sympy.solve(temp, dt)[0]/df\n temp = sympy.series(temp,M,0,3).removeO()\n temp = temp.integrate(f).simplify()\n display(temp)\n print(\"We have three kinds of terms. The first kind is proportional to b/c. It represents the light travel time in flat spacetime, and perstits even without a mass, so it is uninteresting. The second kind are proportional to GM/c^3, but not on the impact parameter. These represent a constant time delay for photons at all impact parameters, so they don't carry any important information. 
Finally, we have terms of the form G^2 M^2/c^4 b that are the only relevnat ones.\")\n temp = temp.diff(M,2)*M**2/2\n temp = temp.subs(f,sympy.pi) - temp.subs(f,0)\n display(temp)\n \ndemo_7a()\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "69df9dc671c6720ca550a3659b09888a9b7070da", "size": 122785, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "lesson_2.ipynb", "max_stars_repo_name": "bolverk/cyborg_math", "max_stars_repo_head_hexsha": "298224dadd4218ebcc266b0a135e15595a359343", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 14, "max_stars_repo_stars_event_min_datetime": "2020-02-25T22:29:21.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-01T21:49:08.000Z", "max_issues_repo_path": "lesson_2.ipynb", "max_issues_repo_name": "bolverk/cyborg_math", "max_issues_repo_head_hexsha": "298224dadd4218ebcc266b0a135e15595a359343", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "lesson_2.ipynb", "max_forks_repo_name": "bolverk/cyborg_math", "max_forks_repo_head_hexsha": "298224dadd4218ebcc266b0a135e15595a359343", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2020-05-14T21:02:56.000Z", "max_forks_repo_forks_event_max_datetime": "2021-10-05T19:12:11.000Z", "avg_line_length": 72.8694362018, "max_line_length": 5760, "alphanum_fraction": 0.7760719958, "converted": true, "num_tokens": 4088, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9334308073258007, "lm_q2_score": 0.89330940889474, "lm_q1q2_score": 0.833842522736351}} {"text": "# Initialize Enviroment\n\n\n```python\nfrom IPython.display import display, Math, Latex \nfrom sympy import *\ninit_printing()\n\nfrom helper import comparator_factory, comparator_eval_factory, comparator_method_factory\n\nx,y,z = symbols('x y z')\n\nfunc_comparator = comparator_factory('Before applying {}:','After:')\nmethod_comparator = comparator_method_factory('Before calling {}:','After:')\neval_comparator = comparator_eval_factory('Before evaluation:','After evaluation:')\n```\n\n# Matrices\n\n## Construct matrix\n### Construct matrix from list\nTo make a matrix in Sympy, initialize a ```Matrix``` class by providing a list of row vectors (nested list).\n\n\n```python\nm = Matrix(\n [\n [1, -1], \n [3, 4], \n [0, 2]\n ]\n)\n\nm\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & -1\\\\3 & 4\\\\0 & 2\\end{matrix}\\right]$\n\n\n\nPass a single layer list of elements will create a $n \\times 1$ matrix.\n\n\n```python\nm = Matrix([1,2,3])\n\nm\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1\\\\2\\\\3\\end{matrix}\\right]$\n\n\n\n## Common Matrix Constructors\n\nSeveral constructors exist for creating common matrices.\n\n### Identity Matrix\nTo create identity matrix, use ```eye(n)```\n\n\n```python\nM = eye(3)\n\nM\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 0 & 0\\\\0 & 1 & 0\\\\0 & 0 & 1\\end{matrix}\\right]$\n\n\n\n### Matrix of Zero\nTo create a zero matrix, use ```zero(n)```\n\n\n```python\nM = zeros(3,4)\n\ndisplay(M)\n```\n\n\n$\\displaystyle \\left[\\begin{matrix}0 & 0 & 0 & 0\\\\0 & 0 & 0 & 0\\\\0 & 0 & 0 & 0\\end{matrix}\\right]$\n\n\n### Matrix of Ones\nTo create a zero matrix, use ```one(n)```\n\n\n```python\nM = ones(3,2)\n\nM\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 1\\\\1 & 1\\\\1 & 1\\end{matrix}\\right]$\n\n\n\n### Diagonal\nPass multiple matrix or numbers to 
construct a diagonal matrix. Numbers will be intepreted as $1 \\times 1$ matrix.\n\n\n```python\nM = diag(-1, ones(2, 2), Matrix([5, 7, 5]))\n\nM\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}-1 & 0 & 0 & 0\\\\0 & 1 & 1 & 0\\\\0 & 1 & 1 & 0\\\\0 & 0 & 0 & 5\\\\0 & 0 & 0 & 7\\\\0 & 0 & 0 & 5\\end{matrix}\\right]$\n\n\n\n## Basic operations\n\n### Shape\n\nTo get the shape of a matrix, use ```shape```\n\n\n```python\nM.shape\n```\n\n### Get rows and columns\nUse ```row()``` and ```col()``` to get single row and column. \n\nTo get the first row\n\n\n```python\nM.row(0)\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}-1 & 0 & 0 & 0\\end{matrix}\\right]$\n\n\n\nTo get the last column\n\n\n```python\nM.col(-1)\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}0\\\\0\\\\0\\\\5\\\\7\\\\5\\end{matrix}\\right]$\n\n\n\n### Delete Rows and Columns\nTo delete a row or column, use ```row_del``` and ```col_del```. These operaitons modify the matrix inplace.\n\nFor example, we create a matrix then delete the first row and last column.\n\n\n```python\nM = Matrix([[1, 2, 3], [3, 2, 1]])\n\ndisplay('Before deletion:',M)\n\nM.row_del(0)\nM.col_del(-1)\n\ndisplay('After deletion:',M)\n```\n\n\n 'Before deletion:'\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 2 & 3\\\\3 & 2 & 1\\end{matrix}\\right]$\n\n\n\n 'After deletion:'\n\n\n\n$\\displaystyle \\left[\\begin{matrix}3 & 2\\end{matrix}\\right]$\n\n\n### Transpose of Matrix\nUse ```T```\n\n\n```python\nM = Matrix([[1, 2, 3], [4, 5, 6]])\n\ndisplay('Original matrix:',M)\ndisplay('Transposed matrix:',M.T)\n```\n\n\n 'Original matrix:'\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 2 & 3\\\\4 & 5 & 6\\end{matrix}\\right]$\n\n\n\n 'Transposed matrix:'\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 4\\\\2 & 5\\\\3 & 6\\end{matrix}\\right]$\n\n\n### Insert Rows and Columns\n\nTo insert rows and columns, use ```row_insert``` and ```col_insert```. 
These operations don't operate inplace and return a new matrix with modified data.\n\n```row_insert()``` accepts a position and a $1 \\times n$ matrix.\n\n\n```python\nM = Matrix([[1, 2, 3], [3, 2, 1]])\n\ndisplay('Before insertion:',M)\n\nM = M.row_insert(0, Matrix([[6,6,6]]))\n\ndisplay('After Insertion:',M)\n```\n\n\n 'Before insertion:'\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 2 & 3\\\\3 & 2 & 1\\end{matrix}\\right]$\n\n\n\n 'After Insertion:'\n\n\n\n$\\displaystyle \\left[\\begin{matrix}6 & 6 & 6\\\\1 & 2 & 3\\\\3 & 2 & 1\\end{matrix}\\right]$\n\n\n ```col_insert()``` accepts position and $n \\times 1$ matrix.\n\n\n```python\nM = Matrix([[1, 2, 3], [3, 2, 1]])\n\ndisplay('Before insertion:',M)\n\nM = M.col_insert(-1, Matrix([6,6]))\n\ndisplay('After Insertion:',M)\n```\n\n\n 'Before insertion:'\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 2 & 3\\\\3 & 2 & 1\\end{matrix}\\right]$\n\n\n\n 'After Insertion:'\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 2 & 6 & 3\\\\3 & 2 & 6 & 1\\end{matrix}\\right]$\n\n\n## Matrix Computation\nBasic matrix computation like addition, substraction, multiplication and inversion can be achived through python operators ```+```,```-```,```*```,```**```\n\n\n### Addition\n\n\n```python\nM = Matrix([[1, 3], [-2, 3]])\nN = Matrix([[0, 3], [0, 7]])\n\nprint('M')\ndisplay(M)\nprint('N')\ndisplay(N)\nprint('M+N')\ndisplay(M+N)\n```\n\n M\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 3\\\\-2 & 3\\end{matrix}\\right]$\n\n\n N\n\n\n\n$\\displaystyle \\left[\\begin{matrix}0 & 3\\\\0 & 7\\end{matrix}\\right]$\n\n\n M+N\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 6\\\\-2 & 10\\end{matrix}\\right]$\n\n\n### Substraction\n\n\n```python\nM = Matrix([[1, 3], [-2, 3]])\nN = Matrix([[0, 3], [0, 7]])\n\nprint('M')\ndisplay(M)\nprint('N')\ndisplay(N)\nprint('M-N')\ndisplay(M-N)\n```\n\n M\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 3\\\\-2 & 3\\end{matrix}\\right]$\n\n\n N\n\n\n\n$\\displaystyle \\left[\\begin{matrix}0 & 3\\\\0 & 7\\end{matrix}\\right]$\n\n\n M-N\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 0\\\\-2 & -4\\end{matrix}\\right]$\n\n\n### Multiplication\n\n\n```python\nM = Matrix([[1, 2, 3], [3, 2, 1]])\nN = Matrix([0, 1, 1])\n\nprint('M')\ndisplay(M)\nprint('N')\ndisplay(N)\ndisplay('MxN')\ndisplay(M*N)\n```\n\n M\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 2 & 3\\\\3 & 2 & 1\\end{matrix}\\right]$\n\n\n N\n\n\n\n$\\displaystyle \\left[\\begin{matrix}0\\\\1\\\\1\\end{matrix}\\right]$\n\n\n\n 'MxN'\n\n\n\n$\\displaystyle \\left[\\begin{matrix}5\\\\3\\end{matrix}\\right]$\n\n\n### Inverse\n\n\n```python\nM = Matrix([[1, 3], [-2, 3]])\n\nprint('M')\ndisplay(M)\nprint('inverse of M:')\ndisplay(M**(-1))\n```\n\n M\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 3\\\\-2 & 3\\end{matrix}\\right]$\n\n\n inverse of M:\n\n\n\n$\\displaystyle \\left[\\begin{matrix}\\frac{1}{3} & - \\frac{1}{3}\\\\\\frac{2}{9} & \\frac{1}{9}\\end{matrix}\\right]$\n\n\n## Determinant\n\nTo get determinant of matrix, use ```det```\n\n\n```python\nM = Matrix([[1, 0, 1], [2, -1, 3], [4, 3, 2]])\n\nmethod_comparator(M,'det')\n```\n\n### RREF\nTo put a matrix into reduced echelon form, use ```rref()```. 
It returns a tuple of two elements: the first is the reduced echelon form and the second is a list of indices of the pivot columns.\n\n\n```python\nM = Matrix([[1, 0, 1, 3], [2, 3, 4, 7], [-1, -3, -3, -4]])\n\nM_rref, pivot_columns = M.rref()\n\nprint('Original Matrix:')\ndisplay(M)\n\nprint('Reduced echelon form:')\ndisplay(M_rref)\n\nprint('Pivot columns:')\ndisplay(pivot_columns)\n```\n\n### Nullspace\n\n```nullspace()``` returns a list of column vectors which span the nullspace of a matrix.\n\n\n```python\nM = Matrix([[1, 2, 3, 0, 0], [4, 10, 0, 0, 1]])\n\nprint('Matrix M:')\ndisplay(M)\nprint('Spanning vectors for nullspace of M:')\ndisplay(M.nullspace())\n```\n\n    Matrix M:\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 2 & 3 & 0 & 0\\\\4 & 10 & 0 & 0 & 1\\end{matrix}\\right]$\n\n\n    Spanning vectors for nullspace of M:\n\n\n\n$\\displaystyle \\left[ \\left[\\begin{matrix}-15\\\\6\\\\1\\\\0\\\\0\\end{matrix}\\right], \\ \\left[\\begin{matrix}0\\\\0\\\\0\\\\1\\\\0\\end{matrix}\\right], \\ \\left[\\begin{matrix}1\\\\- \\frac{1}{2}\\\\0\\\\0\\\\1\\end{matrix}\\right]\\right]$\n\n\n### Column space\n\n```columnspace()``` returns a list of column vectors which span the columnspace of a matrix.\n\n\n```python\nM = Matrix([[1, 1, 2], [2 ,1 , 3], [3 , 1, 4]])\n\nprint('Matrix M:')\ndisplay(M)\nprint('Spanning vectors for columnspace of M:')\ndisplay(M.columnspace())\n```\n\n    Matrix M:\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 1 & 2\\\\2 & 1 & 3\\\\3 & 1 & 4\\end{matrix}\\right]$\n\n\n    Spanning vectors for columnspace of M:\n\n\n\n$\\displaystyle \\left[ \\left[\\begin{matrix}1\\\\2\\\\3\\end{matrix}\\right], \\ \\left[\\begin{matrix}1\\\\1\\\\1\\end{matrix}\\right]\\right]$\n\n\n### Eigenvalues and Eigenvectors\nTo find the eigenvalues of a matrix, use ```eigenvals```. It returns a dictionary of eigenvalue:algebraic multiplicity pairs.\n\n\n```python\nM = Matrix([[3, -2, 4, -2], [5, 3, -3, -2], [5, -2, 2, -2], [5, -2, -3, 3]])\n\nprint('Matrix M')\ndisplay(M)\n\nprint('Eigenvalues and their algebraic multiplicity:')\ndisplay(M.eigenvals())\n```\n\n```eigenvects()``` returns a list of tuples of the form ```(eigenvalue, algebraic multiplicity, [eigenvectors])```\n\n\n```python\nM = Matrix(\n    [\n        [3, -2, 4, -2], \n        [5, 3, -3, -2], \n        [5, -2, 2, -2], \n        [5, -2, -3, 3]\n    ]\n)\n\nprint('Matrix M')\ndisplay(M)\n\nprint('Eigenvalues, their algebraic multiplicity and eigenvectors:')\ndisplay(M.eigenvects())\n```\n\n    Matrix M\n\n\n\n$\\displaystyle \\left[\\begin{matrix}3 & -2 & 4 & -2\\\\5 & 3 & -3 & -2\\\\5 & -2 & 2 & -2\\\\5 & -2 & -3 & 3\\end{matrix}\\right]$\n\n\n    Eigenvalues, their algebraic multiplicity and eigenvectors:\n\n\n\n$\\displaystyle \\left[ \\left( -2, \\ 1, \\ \\left[ \\left[\\begin{matrix}0\\\\1\\\\1\\\\1\\end{matrix}\\right]\\right]\\right), \\ \\left( 3, \\ 1, \\ \\left[ \\left[\\begin{matrix}1\\\\1\\\\1\\\\1\\end{matrix}\\right]\\right]\\right), \\ \\left( 5, \\ 2, \\ \\left[ \\left[\\begin{matrix}1\\\\1\\\\1\\\\0\\end{matrix}\\right], \\ \\left[\\begin{matrix}0\\\\-1\\\\0\\\\1\\end{matrix}\\right]\\right]\\right)\\right]$\n\n\nIf all you want is the characteristic polynomial, use ```charpoly()``` and ```factor()```\n\n\n```python\nprint('Matrix M')\ndisplay(M)\n\nlamda = symbols('lamda')\np = M.charpoly(lamda).factor()\n\nprint('characteristic polynomial of M')\ndisplay(p)\n```\n\n## Diagonalization\n\nTo diagonalize a matrix, use ```diagonalize()```. 
```diagonalize()``` returns a tuple ($P$, $D$), where $D$ is diagonal and $M = PDP^{-1}$.\n\n\n```python\nM = Matrix(\n    [\n        [3, -2, 4, -2], \n        [5, 3, -3, -2], \n        [5, -2, 2, -2], \n        [5, -2, -3, 3]\n    ]\n)\n\nP, D = M.diagonalize()\n\nprint('Matrix M')\ndisplay(M)\n\nprint('Matrix P')\ndisplay(P)\n\nprint('Matrix D')\ndisplay(D)\n\ndisplay(Latex('$PDP^{-1}$'))\ndisplay(P*D*P**-1)\n```\n\n    Matrix M\n\n\n\n$\\displaystyle \\left[\\begin{matrix}3 & -2 & 4 & -2\\\\5 & 3 & -3 & -2\\\\5 & -2 & 2 & -2\\\\5 & -2 & -3 & 3\\end{matrix}\\right]$\n\n\n    Matrix P\n\n\n\n$\\displaystyle \\left[\\begin{matrix}0 & 1 & 1 & 0\\\\1 & 1 & 1 & -1\\\\1 & 1 & 1 & 0\\\\1 & 1 & 0 & 1\\end{matrix}\\right]$\n\n\n    Matrix D\n\n\n\n$\\displaystyle \\left[\\begin{matrix}-2 & 0 & 0 & 0\\\\0 & 3 & 0 & 0\\\\0 & 0 & 5 & 0\\\\0 & 0 & 0 & 5\\end{matrix}\\right]$\n\n\n\n$PDP^{-1}$\n\n\n\n$\\displaystyle \\left[\\begin{matrix}3 & -2 & 4 & -2\\\\5 & 3 & -3 & -2\\\\5 & -2 & 2 & -2\\\\5 & -2 & -3 & 3\\end{matrix}\\right]$\n\n\n# Solvers\n\n## Basic equations\nThe main function for solving algebraic equations is ```solveset()```. The syntax for ```solveset()``` is ```solveset(equation, variable=None, domain=S.Complexes)```.\n\n\n```python\neq = Eq(x**2-1, 0)\n\nfunc_comparator(eq, solveset, x)\n```\n\nIf an expression instead of an Eq instance is passed to ```solveset()```, it is automatically assumed to be equal to zero. \n\n\n```python\nexpr = x**2-1\n\nfunc_comparator(expr, solveset, x)\n```\n\nThe result of the solver can be an infinite set.\n\n\n```python\neq = Eq(sin(x) - 1, 0)\n\nfunc_comparator(eq, solveset, x)\n```\n\nThe result of the solver can be an empty set.\n\n\n```python\neq = Eq(exp(x), 0)\n\nfunc_comparator(eq, solveset, x)\n```\n\nWhen the solver fails to find solutions, it returns a conditional set.\n\n\n```python\neq = Eq(cos(x) - x, 0)\n\nfunc_comparator(eq, solveset, x)\n```\n\n```solveset()``` reports each solution only once. \n\n\n```python\neq = Eq(x**3 - 6*x**2 + 9*x,0)\n\nfunc_comparator(eq, solveset, x)\n```\n\nTo get the solutions of a polynomial equation together with their multiplicities, use ```roots()```\n\n\n```python\neq = Eq(x**3 - 6*x**2 + 9*x,0)\n\nfunc_comparator(eq, roots, x)\n```\n\n## Linear System\n\n```linsolve()``` is the solver for linear systems. A linear system can be defined in several different ways.\n\n### List of equations\n\n\n```python\nls = [\n    Eq(x + y + z - 1, 0), \n    Eq(x + y + 2*z - 3, 0)\n]\n\nfunc_comparator(ls,linsolve, x, z)\n```\n\nA list of equations can be simplified into a list of expressions if all of them are equal to zero.\n\n\n```python\nls = [\n    x + y + z - 1, \n    x + y + 2*z - 3\n]\n```\n\n### Augmented matrix form\n\n\n```python\nM = Matrix(\n    [\n        [1, 1, 1, 1], \n        [1, 1, 2, 3]\n    ]\n)\n\nfunc_comparator(M,linsolve, x, y, z)\n```\n\n### $Ax = b$ form\n\n\n```python\nM = Matrix(\n    [\n        [1, 1, 1, 1], \n        [1, 1, 2, 3]\n    ]\n)\nls = A, b = M[:, :-1], M[:, -1]\n\nfunc_comparator(ls, linsolve, x, y, z)  # pass the (A, b) pair to linsolve\n```\n\n## Non-linear Systems\n\n```solveset()``` is not capable of solving multiple-variable non-linear systems. In this situation, use ```solve()``` instead.\n\n\n```python\neq_list = [\n    Eq(x*y - 1, 0),\n    Eq(x-2, 0)\n]\n\nfunc_comparator(eq_list, solve, x, y)\n```\n\n## Differential equations\n\nFirst define the needed function symbols.\n\n\n```python\nf = symbols('f', cls = Function)\n```\n\nThen define the differential equation. 
For example, $f''(x) - 2f'(x) + f(x) = \\sin(x)$\n\n\n```python\ndiffeq = Eq(f(x).diff(x, 2) - 2*f(x).diff(x) + f(x), sin(x))\n```\n\nThen use ```dsolve()``` to solve the equation.\n\n\n```python\nfunc_comparator(diffeq, dsolve, f(x))\n```\n\nIf the equation cannot be solved explicitly, ```dsolve()``` returns an implicit equation.\n\n\n```python\ndiffeq = Eq(f(x).diff(x)*(1 - sin(f(x))), 0)\n\nfunc_comparator(diffeq, dsolve, f(x))\n```\n\n# Reference\n[Sympy Documentation](http://docs.sympy.org/latest/index.html)\n\n# Related Articles\n* [Sympy Notes I]({filename}0026_sympy_intro_1_en.ipynb)\n* [Sympy Notes II]({filename}0027_sympy_intro_2_en.ipynb)\n* [Sympy Notes III]({filename}0028_sympy_intro_3_en.ipynb)\n* [Sympy Notes IV]({filename}0029_sympy_intro_4_en.ipynb)\n", "meta": {"hexsha": "ed59bcea22625d8d97c9090eed32bf768a26727e", "size": 100523, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "0011_sympy/0028_sympy_intro_3_en.ipynb", "max_stars_repo_name": "junjiecai/jupyter_demos", "max_stars_repo_head_hexsha": "8aa8a0320545c0ea09e05e94aea82bc8aa537750", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2019-09-16T10:44:39.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-04T18:55:52.000Z", "max_issues_repo_path": "0011_sympy/0028_sympy_intro_3_en.ipynb", "max_issues_repo_name": "junjiecai/jupyter_demos", "max_issues_repo_head_hexsha": "8aa8a0320545c0ea09e05e94aea82bc8aa537750", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "0011_sympy/0028_sympy_intro_3_en.ipynb", "max_forks_repo_name": "junjiecai/jupyter_demos", "max_forks_repo_head_hexsha": "8aa8a0320545c0ea09e05e94aea82bc8aa537750", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2020-10-24T16:19:29.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-04T18:55:57.000Z", "avg_line_length": 37.5927449514, "max_line_length": 3016, "alphanum_fraction": 0.650149717, "converted": true, "num_tokens": 4677, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.927363293639213, "lm_q2_score": 0.8991213860551286, "lm_q1q2_score": 0.8338121699535384}} {"text": "```python\n#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n###\n# Name: Enea Dodi\n# Student ID: 2296306\n# Email: dodi@chapman.edu\n# Course: PHYS220/MATH220/CPSC220 Fall 2018 Assignment: CW03\n###\n\n'''Eratosthenes module\nThis module contains the eratosthenes function that generates\nprimes up to the value of positive integer input n'''\nimport math\ndef eratosthenes(n):\n    '''Eratosthenes function\n    Args: n - positive integer representing the ceiling number\n    Returns: list of all primes less than n'''\n    \n    # check if the integer passed in is positive\n    if n < 1:\n        raise Exception(\"Please enter an integer > 0\")\n    \n    # generate all integers from 2 up to n-1\n    nums = list(range(n))\n    nums = nums[2:]\n    \n    # remove all even numbers except 2\n    nums = [a for a in nums if ((a % 2 != 0) or (a==2))]\n    \n    # remove everything that isn't a prime number\n    for i in nums:\n        if i > math.sqrt(n):\n            break\n        else:\n            for counter in list(nums):  # iterate over a copy since we remove from nums\n                if(counter!=i and counter%i==0):\n                    nums.remove(counter)\n    \n    return nums\n\ndef gen_eratosthenes():\n    n = set()\n    i = 0\n    a = 2\n    checker = False\n    while True:\n        i = i + 1\n        if i == 1:\n            continue\n        elif i == 2:\n            n.add(2)\n            yield 2\n        else:\n            checker = True\n            while checker:\n                checker = False\n                for j in n:\n                    if (a%j) == 0:\n                        a = a+1\n                        checker = True\n                        break\n            n.add(a)\n            yield a\n\n```\n\n# The Sieve of Eratosthenes\n#### Name: Enea Dodi, Gwyneth Casey, Jack Savage\n#### Date: 9/27/2018\n\nThe goal of both eratosthenes and gen_eratosthenes is to narrow down to only the prime numbers up to a certain point. eratosthenes takes in a parameter\nn which serves as the cap where the function stops. This function will return a list containing every prime up to that number.\ngen_eratosthenes is a little different because it is a generator. It figures out the next prime when next(generatorfunction) is called. It is infinite because\nit will always compute the next value.\n\nI used a list in the function because that was the best way to output a list (in my eyes).\nFor the function eratosthenes, the design follows this order:\n1. Checks if the input value is less than 1. If not, continue; if so, raise an Exception.\n2. Makes a list with all values in range n.\n3. Removes all values before 2.\n4. Removes all even numbers except 2.\n5. Checks if the value in nums is > the sqrt of the input value. If it is, the operation stops. If it is not, then removes a later value b in the list if b % the current value is equal to 0\n\n
    \n
    \n\n##### if\n\\begin{align}0 & = b \\% a\\end{align}\n#### then remove b\n\n
    \n
    \n
    \n\n\n\n6. Repeats step 5 until the end of list\n\nFor the generator gen_erastothenes, the design follows this order:\n1. makes a set which will hold all the prime numbers\n2. uses an int for the special cases of the value being 1 ore two. Checks if the values are 1 and 2 and yields accordingly\n3. checks if a number represented by 'a' is prime by checking if any of the numbers in the prime set % 0 equals 0. If any number in the set does, then simply it adds 1 to a and tries again! Once there is no number in the set that divides equally to a, a is added to the set and is yielded.\n\n\n\n\n\n# Generating Prime Numbers\nThe prime number generator logic is completely different from the original function.\n\nFor the generator gen_erastothenes, the design follows this order:\n1. makes a set which will hold all the prime numbers\n2. uses an int for the special cases of the value being 1 ore two. Checks if the values are 1 and 2 and yields accordingly\n3. checks if a number represented by 'a' is prime by checking if any of the numbers in the prime set % 0 equals 0. If any number in the set does, then simply it adds 1 to a and tries again! Once there is no number in the set that divides equally to a, a is added to the set and is yielded.\n\nRather than removing all numbers in a list that are not primes, this makes a set where all prime numbers are held and has a singular integer that increments by one each time a division by a number in set equals a whole number.\n\nThe next section will demostrate the generator working!\n\n\n\n\n```python\n#!/usr/bin/env python3 \n#-*- coding: utf-8 -*-\nfrom primesE import gen_eratosthenes as gen_eratos\ng = gen_eratos()\nfor i in range(20):\n print(next(g))\n\n\n```\n", "meta": {"hexsha": "6b1c667938edc988ae9cc10420ece59e916d3d28", "size": 6510, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "cw04-primes.ipynb", "max_stars_repo_name": "chapman-phys220-2018f/cw04-thepenguins", "max_stars_repo_head_hexsha": "42b2f87b1d67ebd4f5f8868c7eb7eb2cc286e1e5", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "cw04-primes.ipynb", "max_issues_repo_name": "chapman-phys220-2018f/cw04-thepenguins", "max_issues_repo_head_hexsha": "42b2f87b1d67ebd4f5f8868c7eb7eb2cc286e1e5", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "cw04-primes.ipynb", "max_forks_repo_name": "chapman-phys220-2018f/cw04-thepenguins", "max_forks_repo_head_hexsha": "42b2f87b1d67ebd4f5f8868c7eb7eb2cc286e1e5", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.3804347826, "max_line_length": 298, "alphanum_fraction": 0.5502304147, "converted": true, "num_tokens": 1157, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9273633016692238, "lm_q2_score": 0.8991213759183765, "lm_q1q2_score": 0.8338121677730409}} {"text": "```python\nimport pandas as pd\nimport numpy as np\nimport scipy.stats as stats\nfrom typing import Tuple\nfrom nptyping import Array\nfrom collections import defaultdict\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nsns.set_style(\"darkgrid\")\n```\n\n## Confidence Intervals\n\nA point estimate can give us a rough approximation of a population parameter. A confidence interval is a range of values above and below a point estimate that captures the true population parameter at some predetermined confidence level. \n\n\n\n$$ \\begin{align} \\text{Confidence Interval} = \\text{Point Estimate } \\pm \\text{Margin of Error}\\end{align} $$\n$$ \\begin{align} \\text{Margin of Error = 'a few' Standard Errors}\\end{align} $$\n \n$$ \\begin{align} \\text{point estimate} \\pm z * SE \\end{align} $$\n \n* $z$ is called the critical value and it corresponds to the confidence level that we chose. For instance, we know that roughly 95% of the data in a normal distribution lies within 2 standard deviations from the mean, so we could use 2 as the z-critical value for a 95% confidence interval\n* Standard error for a point estimate is estimated from the data and computed using a formula\n* The value $z * SE$ is called the margin of error\n\n### Proportion\n\n**Assumptions** \n1) $n*\\hat{p}=10$ and $n*(1-\\hat{p})=10$ \n2) Random Sample\n \n$$\\text{Confidence Interval = } \\text{point estimate} \\pm z * \\sqrt{\\frac{\\hat{p}(1-\\hat{p})}{n}}$$\n\nWe can enforce a *conservative* confidence interval by setting $\\hat{p}$ equal to 0.5 which will increase the interval.\n\n$$\\text{Confidence Interval = } \\text{point estimate} \\pm z * \\sqrt{\\frac{1}{2n}}$$\n\n\n```python\ndef confidence_interval_one_proportion(\n nobs: int,\n proportion: float,\n confidence: float = 0.975\n) -> Tuple[float, float]:\n \n z = stats.norm.ppf(confidence)\n standard_error = np.sqrt((proportion * (1-proportion))/nobs)\n margin_of_error = z * standard_error\n \n lower_confidence_interval = proportion - margin_of_error\n upper_confidence_interval = proportion + margin_of_error\n \n return (lower_confidence_interval, upper_confidence_interval)\n\nnobs = 659\nproportion = 0.85\n\nconfidence_interval = confidence_interval_one_proportion(\n nobs=nobs, \n proportion=proportion\n)\n\nprint(f\"Confidence Interval: {confidence_interval}\")\n```\n\n Confidence Interval: (0.8227378265796143, 0.8772621734203857)\n\n\n### Difference in Proportions for Independent Groups\n\n**Assumptions** \n1) $n_1*\\hat{p_1}\\geq10$ and $n_1*(1-\\hat{p_1})\\geq10$ and $n_2*\\hat{p_2}\\geq10$ and $n_2*(1-\\hat{p_2})\\geq10$ \n2) Random Sample\n\n$$\\text{Confidence Interval = } (\\hat{p_1} - \\hat{p_2}) \\pm z * \\sqrt{\\frac{\\hat{p_1}(1-\\hat{p_1})}{n_1} + \\frac{\\hat{p_2}(1-\\hat{p_2})}{n_2}}$$\n\n\n```python\ndef confidence_interval_two_proportions(\n nobs_1: int,\n proportion_1: float,\n nobs_2: int,\n proportion_2: float,\n confidence: float = 0.975\n) -> Tuple[float, float]:\n \n z = stats.norm.ppf(confidence)\n standard_error_1 = np.sqrt((proportion_1*(1-proportion_1))/nobs_1)\n standard_error_2 = np.sqrt((proportion_2*(1-proportion_2))/nobs_2)\n standard_error_diff = np.sqrt(standard_error_1**2 + standard_error_2**2)\n \n margin_of_error = z * standard_error_diff\n proportion_difference = proportion_1 - proportion_2\n \n lower_confidence_interval = proportion_difference - margin_of_error\n upper_confidence_interval 
= proportion_difference + margin_of_error\n \n return (lower_confidence_interval, upper_confidence_interval)\n\nnobs_1 = 2972\nproportion_1 = 0.304845\nnobs_2 = 2753\nproportion_2 = 0.513258\n\nconfidence_interval = confidence_interval_two_proportions(\n nobs_1=nobs_1, \n proportion_1=proportion_1,\n nobs_2=nobs_2,\n proportion_2=proportion_2\n)\n\nprint(f\"Confidence Interval: {confidence_interval}\")\n```\n\n Confidence Interval: (-0.2333631069853917, -0.18346289301460833)\n\n\n### Mean\n\n1) Population normal (or $n\\geq25$ enforce CLT) \n2) Random Sample\n\n$$ \\overline{x} \\pm t * \\frac{s}{ \\sqrt{n} }$$ , degrees of freedom: $n-1$\n\n\n\n\n```python\ndef confidence_interval_one_mean(\n nobs: int,\n mean: float,\n std: float,\n confidence: float = 0.975\n) -> Tuple[float, float]:\n \n degrees_freedom = nobs-1\n t = stats.t.ppf(confidence, degrees_freedom)\n \n standard_error = std/np.sqrt(nobs)\n margin_of_error = t * standard_error\n \n lower_confidence_interval = mean - margin_of_error\n upper_confidence_interval = mean + margin_of_error\n \n return (lower_confidence_interval, upper_confidence_interval)\n\nnobs = 25\nmean = 82.48\nstd = 15.058552387264852\n\n\nconfidence_interval = confidence_interval_one_mean(\n nobs=nobs, \n mean=mean, \n std=std\n)\n\nprint(f\"Confidence Interval: {confidence_interval}\")\n```\n\n Confidence Interval: (76.26413507754478, 88.69586492245523)\n\n\n### Difference in Means for Paired Data\n\n$$ \\overline{x_d} \\pm t * \\frac{s_d}{ \\sqrt{n} }$$ , degrees of freedom: $n-1$\n\n\n```python\nurl = \"https://raw.githubusercontent.com/Opensourcefordatascience/Data-sets/master/blood_pressure.csv\"\npaired_data = pd.read_csv(url)\npaired_data[\"difference\"] = paired_data[\"bp_before\"] - paired_data[\"bp_after\"]\ndisplay(paired_data.head(4))\n\nnobs = paired_data.shape[0]\nmean = paired_data[\"difference\"].mean()\nstd = paired_data[\"difference\"].std()\n\nconfidence_interval = confidence_interval_one_mean(\n nobs=nobs, \n mean=mean, \n std=std\n)\n\n\nprint(f\"Confidence Interval: {confidence_interval}\")\n```\n\n\n
    \n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
    patientsexagegrpbp_beforebp_afterdifference
    01Male30-45143153-10
    12Male30-45163170-7
    23Male30-45153168-15
    34Male30-4515314211
    \n
    \n\n\n Confidence Interval: (2.0705568567756942, 8.11277647655764)\n\n\n### Difference in Means for Independent Groups\n\n**Assumptions** \n1) Population normal (or $n_1\\geq25$, $n_2\\geq25$ enforce CLT) \n2) Random Sample\n\n*Unpooled* $\\sigma_1 \\neq \\sigma_2$:\n\n$$ (\\overline{x_1} - \\overline{x_2}) \\pm t * \\sqrt{\\frac{s_1^2}{n_1} + \\frac{s_2^2}{n_2}} $$\n\n, degrees of freedom: $\\min(n_1-1,n_2-1)$ or Welch approximation\n\n*Pooled* $\\sigma_1 = \\sigma_2$:\n\n$$ (\\overline{x_1} - \\overline{x_2}) \\pm t * \\sqrt{\\frac{(n_1-1)s_1^2+(n_2-1)s_2^2}{n_1+n_2-2}}*\\sqrt{\\frac{1}{n_1}+\\frac{1}{n_2}} $$\n\n, degrees of freedom: $n_1+n_2-2$\n\n\n```python\ndef confidence_intervals_two_means(\n nobs_1: int,\n mean_1: float,\n std_1: float,\n nobs_2: int,\n mean_2: float,\n std_2: float,\n unpooled: bool = True,\n confidence: float = 0.975\n) -> Tuple[float, float]:\n \n\n if unpooled:\n degrees_freedom = np.min([nobs_1-1, nobs_2-1])\n t = stats.t.ppf(confidence, degrees_freedom)\n \n standard_error_1 = std_1/np.sqrt(nobs_1)\n standard_error_2 = std_2/np.sqrt(nobs_2)\n standard_error_diff = np.sqrt(standard_error_1**2 + standard_error_2**2)\n \n margin_of_error = t * standard_error_diff\n \n else:\n degrees_freedom = nobs_1 + nobs_2 - 2\n t = stats.t.ppf(confidence, degrees_freedom)\n \n margin_of_error = t \\\n * np.sqrt(((nobs_1 - 1)*(std_1**2) + (nobs_2 - 1)*(std_2**2))/ (nobs_1 + nobs_2 - 2) ) \\\n * np.sqrt(1/nobs_1 + 1/nobs_2)\n \n \n mean_difference = mean_1 - mean_2\n \n lower_confidence_interval = mean_difference - margin_of_error\n upper_confidence_interval = mean_difference + margin_of_error\n \n return (lower_confidence_interval, upper_confidence_interval) \n \nnobs_1 = 2976\nmean_1 = 29.939946\nstd_1 = 7.753319\nnobs_2 = 2759\nmean_2 = 28.778072\nstd_2 = 6.252568\n \nunpooled_confidence_intervals = confidence_intervals_two_means(\n nobs_1=nobs_1,\n mean_1=mean_1,\n std_1=std_1,\n nobs_2=nobs_2,\n mean_2=mean_2,\n std_2=std_2,\n unpooled=True\n)\n\npooled_confidence_intervals = confidence_intervals_two_means(\n nobs_1=nobs_1,\n mean_1=mean_1,\n std_1=std_1,\n nobs_2=nobs_2,\n mean_2=mean_2,\n std_2=std_2,\n unpooled=False\n)\n\nprint(f\"unpooled_confidence_intervals: {unpooled_confidence_intervals}\")\nprint(f\"pooled_confidence_intervals: {pooled_confidence_intervals}\")\n```\n\n unpooled_confidence_intervals: (0.798356871923686, 1.5253911280763088)\n pooled_confidence_intervals: (0.7955138433444433, 1.5282341566555515)\n\n\n## Confidence Interval Interpretation\n\nConfidence interval with a confidence of 95% can be interpreted in the following way. \n\nIf we repeat the study many times each time producing a new sample (of same size) from which 95% confidence interval is computed, then 95% of the confidence intervals are expected to have the population parameter. The simulation below illustrates this, as we observe that not all confidence intervals overlap the orange line which marks the true mean. 
\n\n\n```python\ndef simulate_confidence_intervals(\n array: Array, \n sample_size: int,\n confidence: float = 0.95,\n seed: int = 10,\n simulations: int = 50\n) -> pd.DataFrame:\n \n np.random.seed(seed)\n \n simulation = defaultdict(list)\n\n for i in range(0, simulations):\n simulation[\"sample_id\"].append(i)\n sample = np.random.choice(orders, size = sample_size)\n sample_mean = sample.mean()\n simulation[\"sample_mean\"].append(sample_mean)\n \n degrees_freedom = sample_size - 1\n t = stats.t.ppf(confidence, degrees_freedom)\n sample_std = sample.std()\n margin_error = t * (sample_std / np.sqrt(sample_size))\n condfidence_interval = sample_mean - margin_error, sample_mean + margin_error\n simulation[\"sample_confidence_interval\"].append(condfidence_interval)\n\n return pd.DataFrame(simulation)\n\ndef visualise_confidence_interval_simulation(\n df: pd.DataFrame,\n):\n fig = plt.figure(figsize=(15,8))\n ax = plt.subplot(1, 1, 1)\n ax.errorbar(\n x=np.arange(0.1, df.shape[0]), \n y=df[\"sample_mean\"], \n yerr=[(top - bot) / 2 for top, bot in df[\"sample_confidence_interval\"]], fmt = 'o',\n color=\"navy\"\n )\n \n ax.hlines(\n xmin = 0.1,\n xmax = df.shape[0],\n y=orders.mean(),\n color=\"red\",\n linewidth=2\n )\n \n ax.set_title(\"Simulation of Confidence Intervals\", fontsize=20)\n ax.set_ylabel(\"Orders\", fontsize= 14)\n\nnp.random.seed(10)\n\norders_1 = stats.poisson.rvs(mu=40, size=200000)\norders_2 = stats.poisson.rvs(mu=10, size=150000)\norders = np.concatenate([orders_1, orders_2])\n \nsimulation_data = simulate_confidence_intervals(\n array=orders,\n confidence = 0.95,\n sample_size = 1000,\n)\n\nvisualise_confidence_interval_simulation(df=simulation_data)\n```\n", "meta": {"hexsha": "530eea8be8743c357df807e6af437c1a152bab6c", "size": 45395, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "statistics/confidence_intervals.ipynb", "max_stars_repo_name": "AndonisPavlidis/portfolio", "max_stars_repo_head_hexsha": "48de86ed33d703f17388e9e0f619ab225d4d6dde", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "statistics/confidence_intervals.ipynb", "max_issues_repo_name": "AndonisPavlidis/portfolio", "max_issues_repo_head_hexsha": "48de86ed33d703f17388e9e0f619ab225d4d6dde", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "statistics/confidence_intervals.ipynb", "max_forks_repo_name": "AndonisPavlidis/portfolio", "max_forks_repo_head_hexsha": "48de86ed33d703f17388e9e0f619ab225d4d6dde", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 78.8107638889, "max_line_length": 27320, "alphanum_fraction": 0.7818041635, "converted": true, "num_tokens": 3475, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9273632876167044, "lm_q2_score": 0.8991213671331906, "lm_q1q2_score": 0.8338121469910614}} {"text": "#### Jupyter notebooks\n\nThis is a [Jupyter](http://jupyter.org/) notebook using Python. 
You can install Jupyter locally to edit and interact with this notebook.\n\n# Higher order finite difference methods\n\n## Lagrange Interpolating Polynomials\n\nSuppose we are given function values $u_0, \\dotsc, u_n$ at the distinct points $x_0, \\dotsc, x_n$ and we would like to build a polynomial of degree $n$ that goes through all these points. This explicit construction is attributed to Lagrange (though he was not first):\n\n$$ p(x) = \\sum_{i=0}^n u_i \\prod_{j \\ne i} \\frac{x - x_j}{x_i - x_j} $$\n\n* What is the degree of this polynomial?\n* Why is $p(x_i) = u_i$?\n* How expensive (in terms of $n$) is it to evaluate $p(x)$?\n* How expensive (in terms of $n$) is it to convert to standard form $p(x) = \\sum_{i=0}^n a_i x^i$?\n* Can we easily evaluate the derivative $p'(x)$?\n* What can go wrong? Is this formulation numerically stable?\n\nA general derivation of finite difference methods for approximating $p^{(k)}(x)$ using function values $u(x_i)$ is to construct the Lagrange interpolating polynomial $p(x)$ from the function values $u_i = u(x_i)$ and evaluate it or its derivatives at the target point $x$. We can do this directly from the formula above, but a more linear algebraic approach will turn out to be more reusable.\n\n#### Uniqueness\n\nIs the polynomial $p(x)$ of degree $m$ that interpolates $m+1$ points unique? Why?\n\n### Vandermonde matrices\n\nWe can compute a polynomial\n\n$$ p(x) = c_0 + c_1 x + c_2 x^2 + \\dotsb $$\n\nthat assumes function values $p(x_i) = u_i$ by solving a linear system with the Vandermonde matrix.\n\n$$ \\underbrace{\\begin{bmatrix} 1 & x_0 & x_0^2 & \\dotsb \\\\\n 1 & x_1 & x_1^2 & \\dotsb \\\\\n 1 & x_2 & x_2^2 & \\dotsb \\\\\n \\vdots & & & \\ddots \\end{bmatrix}}_V \\begin{bmatrix} c_0 \\\\ c_1 \\\\ c_2 \\\\ \\vdots \\end{bmatrix} = \\begin{bmatrix} u_0 \\\\ u_1 \\\\ u_2 \\\\ \\vdots \\end{bmatrix} .$$\n\n\n```python\n%matplotlib inline\nimport numpy\nfrom matplotlib import pyplot\npyplot.style.use('ggplot')\n\nx = numpy.linspace(-2,2,4)\nu = numpy.sin(x)\nxx = numpy.linspace(-3,3,40)\nc = numpy.linalg.solve(numpy.vander(x), u)\npyplot.plot(x, u, '*')\npyplot.plot(xx, numpy.vander(xx, 4).dot(c), label='p(x)')\npyplot.plot(xx, numpy.sin(xx), label='sin(x)')\npyplot.legend(loc='upper left');\n```\n\nGiven the coefficients $c = V^{-1} u$, we find\n\n$$ \\begin{align} p(0) &= c_0 \\\\ p'(0) &= c_1 \\\\ p''(0) &= c_2 \\cdot 2! \\\\ p^{(k)}(0) &= c_k \\cdot k! . \\end{align} $$\n\nTo compute the stencil coefficients $s_i^0$ for interpolation to $x=0$,\n$$ p(0) = s_0^0 u_0 + s_1^0 u_1 + \\dotsb = \\sum_i s_i^0 u_i $$\nwe can write\n$$ p(0) = e_0^T \\underbrace{V^{-1} u}_c = \\underbrace{e_0^T V^{-1}}_{(s^0)^T} u $$\nwhere $e_0$ is the first column of the identity. Evidently $s^0$ can also be expressed as\n$$ s^0 = V^{-T} e_0 . $$\nWe can compute stencil coefficients for any order derivative $p^{(k)}(0) = (s^k)^T u$ by solving the linear system\n$$ s^k = V^{-T} e_k \\cdot k! . 
$$\nAlternatively, invert the Vandermonde matrix $V$ and scale row $k$ of $V^{-1}$ by $k!$.\n\n\n```python\ndef fdstencil(z, x):\n x = numpy.array(x)\n V = numpy.vander(x - z, increasing=True)\n scaling = numpy.array([numpy.math.factorial(i) for i in range(len(x))])\n return numpy.linalg.inv(V) * scaling[:, None]\n\nx = numpy.linspace(0, 3, 4)\nS = fdstencil(0, x)\nprint(S)\n\nhs = 2.**(-numpy.arange(6))\nerrors = numpy.zeros((3,len(hs)))\nfor i,h in enumerate(hs):\n z = 1 + .3*h\n S = fdstencil(z, 1+x*h)\n u = numpy.sin(1+x*h)\n errors[:,i] = S[:3].dot(u) - numpy.array([numpy.sin(z),\n numpy.cos(z),\n -numpy.sin(z)])\n\npyplot.loglog(hs, numpy.abs(errors[0]), 'o', label=\"$p(0)$\")\npyplot.loglog(hs, numpy.abs(errors[1]), '<', label=\"$p'(0)$\")\npyplot.loglog(hs, numpy.abs(errors[2]), 's', label=\"$p''(0)$\")\nfor k in (1,2,3):\n pyplot.loglog(hs, hs**k, label='$h^{%d}$' % k)\npyplot.xlabel('h')\npyplot.ylabel('error')\npyplot.legend(loc='upper left');\n```\n\n### Notes on accuracy\n\n* When using three points, we fit a polynomial of degree 2. The leading error term for interpolation $p(0)$ is thus $O(h^3)$.\n* Each derivative gives up one order of accuracy, therefore differencing to a general (non-centered or non-uniform grid) point is $O(h^2)$ for the first derivative and $O(h)$ for the second derivative.\n* Centered differences on uniform grids can provide cancelation, raising the order of accuracy by one. So our standard 3-point centered second derivative is $O(h^2)$ as we have seen in the Taylor analysis and numerically.\n* The Vandermonde matrix is notoriously ill-conditioned when using many points. For such cases, we recommend using a more numerically stable method from [Fornberg](https://doi.org/10.1137/S0036144596322507).\n\n\n```python\n-fdstencil(0, numpy.linspace(-1,4,6))[2]\n```\n\n\n\n\n array([-0.83333333, 1.25 , 0.33333333, -1.16666667, 0.5 ,\n -0.08333333])\n\n\n\n### Solving BVPs\n\nThis `fdstencil` gives us a way to compute derivatives of arbitrary accuracy on arbitrary grids. We will need to use uncentered rules near boundaries, usually with more points to maintain order of accuracy. This will usually cost us symmetry. Implementation of boundary conditions is the bane of high order finite difference methods.\n\n### Discretization stability measures: $h$-ellipticity\n\nConsider the test function $\\phi(\\theta, x) = e^{i\\theta x}$ and apply the difference stencil centered at an arbitrary point $x$ with element size $h=1$:\n\n$$ \\begin{bmatrix} -1 & 2 & -1 \\end{bmatrix} \\begin{bmatrix} e^{i \\theta (x - 1)} \\\\ e^{i \\theta x} \\\\ e^{i \\theta (x+1)} \\end{bmatrix}\n= \\big( 2 - (e^{i\\theta} + e^{-i\\theta}) \\big) e^{i\\theta x}= 2 (1 - \\cos \\theta) e^{i \\theta x} . $$\n\nEvidently $\\phi(\\theta,x) = e^{i \\theta x}$ is an eigenfunction of the discrete differencing operator on an infinite grid and the corresponding eigenvalue is\n$$ L(\\theta) = 2 (1 - \\cos \\theta), $$\nalso known as the \"symbol\" of the operator. That $\\phi(\\theta,x)$ is an eigenfunction of the discrete differencing formula will generally be true for uniform grids.\n\nThe highest frequency that is distinguishable using this stencil is $\\theta_{\\max} = \\pi$ which results in a wave at the Nyquist frequency. 
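A quick way to see why $\\pi$ is the limit (a small sketch; the particular frequency below is an arbitrary illustrative choice) is to note that on a unit grid the test function cannot distinguish a frequency from the same frequency shifted by $2\\pi$:\n\n\n```python\nimport numpy\ntheta = 2.5                       # any frequency in [-pi, pi)\nxgrid = numpy.arange(-3, 4)       # integer grid points, h = 1\nphi      = numpy.exp(1j*theta*xgrid)\nphi_high = numpy.exp(1j*(theta + 2*numpy.pi)*xgrid)\nnumpy.max(numpy.abs(phi_high - phi))   # ~1e-15: the two waves sample identically on this grid\n```\n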
If a higher frequency wave is sampled onto this grid, it will be aliased back into the interval $[-\\pi, \\pi)$.\n\n\n```python\nx = numpy.linspace(-3, 3, 7)\ns2 = -fdstencil(0, x)[4]\nprint(s2)\ntheta = numpy.linspace(-numpy.pi, numpy.pi)\nphi = numpy.exp(1j*numpy.outer(x, theta))\npyplot.plot(theta, numpy.abs(s2.dot(phi)), '.')\npyplot.plot(theta, 2*(1-numpy.cos(theta)))\npyplot.plot(theta, theta**2);\n```\n\nA measure of internal stability known as $h$-ellipticity is defined by\n\n$$ E^h(L) = \\frac{\\min_{\\pi/2 \\le |\\theta| \\le \\pi} L(\\theta)}{\\max_{|\\theta| \\le \\pi} L(\\theta)} . $$\n\n* What is $E^h(L)$ for the second order \"version 2\" stencil?\n* How about for uncentered formulas and higher order?\n\n# Spectral collocation\n\nSuppose that instead of using only a fixed number of neighbors in our differencing stencil, we use all points in the domain?\n\n\n```python\nn = 20\nx = numpy.linspace(-1, 1, n)\nL = numpy.zeros((n,n))\nfor i in range(n):\n L[i] = -fdstencil(x[i], x)[2]\n\nu = numpy.cos(3*x)\npyplot.plot(x, L.dot(u), 'o')\npyplot.plot(x, 9*u);\n```\n\n\n```python\nnumpy.linalg.cond(numpy.vander(numpy.linspace(-1, 1, 30)))\n```\n\n\n\n\n 18390290533097.15\n\n\n\nWe are suffering from two problems here. The first is that the monomial basis is very ill-conditioned when using many terms. This is true as continuous functions, not just when sampled onto a particular grid.\n\n\n```python\nx = numpy.linspace(-1, 1, 50)\nV = numpy.vander(x, 15)\npyplot.plot(x, V)\nnumpy.linalg.cond(V)\n```\n\n## Chebyshev polynomials\n\nDefine $$ T_n(x) = \\cos (n \\arccos(x)) .$$\nThis turns out to be a polynomial, though it may not be obvious why.\nRecall $$ \\cos(a + b) = \\cos a \\cos b - \\sin a \\sin b .$$\nLet $y = \\arccos x$ and check\n$$ \\begin{split}\n T_{n+1}(x) &= \\cos (n+1) y = \\cos ny \\cos y - \\sin ny \\sin y \\\\\n T_{n-1}(x) &= \\cos (n-1) y = \\cos ny \\cos y + \\sin ny \\sin y\n\\end{split}$$\nAdding these together produces a similar recurrence:\n$$\\begin{split}\nT_0(x) &= 1 \\\\\nT_1(x) &= x \\\\\nT_{n+1}(x) &= 2 x T_n(x) - T_{n-1}(x)\n\\end{split}$$\nwhich we can also implement in code\n\n\n```python\ndef vander_chebyshev(x, n=None):\n if n is None:\n n = len(x)\n T = numpy.ones((len(x), n))\n if n > 1:\n T[:,1] = x\n for k in range(2,n):\n T[:,k] = 2 * x * T[:,k-1] - T[:,k-2]\n return T\n\nx = numpy.linspace(-1, 1, 20)\nV = numpy.vander(x, 20)\npyplot.plot(x, V)\nprint('cond(V) = {:.4e}'.format(numpy.linalg.cond(V)))\n```\n\n\n```python\n# We can use the Chebyshev basis for interpolation\nx = numpy.linspace(-2, 2, 4)\nu = numpy.sin(x)\nc = numpy.linalg.solve(vander_chebyshev(x), u)\npyplot.plot(x, u, '*')\npyplot.plot(xx, vander_chebyshev(xx, 4).dot(c), label='p(x)')\npyplot.plot(xx, numpy.sin(xx), label='sin(x)')\npyplot.legend(loc='upper left');\n```\n\n### Differentiation\n\nWe can differentiate Chebyshev polynomials using the recurrence\n\n$$ \\frac{T_n'(x)}{n} = 2 T_{n-1}(x) + \\frac{T_{n-2}'(x)}{n-2} $$\n\nwhich we can differentiate to evaluate higher derivatives.\n\n\n```python\ndef chebeval(z, n=None):\n \"\"\"Build matrices to evaluate the n-term Chebyshev expansion and its derivatives at point(s) z\"\"\"\n z = numpy.array(z, ndmin=1)\n if n is None:\n n = len(z)\n Tz = vander_chebyshev(z, n)\n dTz = numpy.zeros_like(Tz)\n dTz[:,1] = 1\n dTz[:,2] = 4*z\n ddTz = numpy.zeros_like(Tz)\n ddTz[:,2] = 4\n for n in range(3,n):\n dTz[:,n] = n * (2*Tz[:,n-1] + dTz[:,n-2]/(n-2))\n ddTz[:,n] = n * (2*dTz[:,n-1] + ddTz[:,n-2]/(n-2))\n return [Tz, dTz, ddTz]\n\nn = 
20\nx = numpy.linspace(-1, 1, n)\nT = vander_chebyshev(x)\nprint('cond = {:e}'.format(numpy.linalg.cond(T)))\nTinv = numpy.linalg.inv(T)\nL = numpy.zeros((n,n))\nfor i in range(n):\n L[i] = chebeval(x[i], n)[2] @ Tinv\n\nu = numpy.cos(3*x)\npyplot.plot(x, L.dot(u), 'o')\nxx = numpy.linspace(-1, 1, 100)\npyplot.plot(xx, -9*numpy.cos(3*xx));\n```\n\n### Runge Effect\n\nPolynomial interpolation on equally spaced points is very ill-conditioned as the number of points grows. We've seen that in the growth of the condition number of the Vandermonde matrix, both for monomials and Chebyshev polynomials, but it's also true if the polynomials are measured in a different norm, such as pointwise values or merely the eyeball norm.\n\n\n```python\ndef chebyshev_interp_and_eval(x, xx):\n \"\"\"Matrix mapping from values at points x to values\n of Chebyshev interpolating polynomial at points xx\"\"\"\n vander = vander_chebyshev\n A = vander(x)\n B = vander(xx, len(x))\n return B @ numpy.linalg.inv(A)\n\ndef runge1(x):\n return 1 / (1 + 10*x**2)\nx = numpy.linspace(-1,1,20)\nxx = numpy.linspace(-1,1,100)\npyplot.plot(x, runge1(x), 'o')\npyplot.plot(xx, chebyshev_interp_and_eval(x, xx).dot(runge1(x)))\nnumpy.linalg.cond(chebyshev_interp_and_eval(x, xx))\n```\n\n\n```python\nnumpy.linalg.svd(numpy.vander(numpy.linspace(-1,1,100), 20))[1]\n```\n\n\n\n\n array([1.15106015e+01, 8.99357245e+00, 5.90736013e+00, 3.60291548e+00,\n 1.97697293e+00, 1.09831038e+00, 5.39232202e-01, 2.82004223e-01,\n 1.24250660e-01, 6.18308612e-02, 2.42429291e-02, 1.15563315e-02,\n 3.96443558e-03, 1.81898367e-03, 5.29882299e-04, 2.34876610e-04,\n 5.48708939e-05, 2.35666941e-05, 3.82863653e-06, 1.59719098e-06])\n\n\n\n\n```python\ndef cosspace(a, b, n=50):\n return (a + b)/2 + (b - a)/2 * (numpy.cos(numpy.linspace(-numpy.pi, 0, n)))\n\nx = cosspace(-1,1,8)\npyplot.plot(xx, chebyshev_interp_and_eval(x,xx))\nnumpy.linalg.cond(chebyshev_interp_and_eval(x, xx))\n```\n\n\n```python\nnumpy.outer(numpy.arange(4), [10,20])\n```\n\n\n\n\n array([[ 0, 0],\n [10, 20],\n [20, 40],\n [30, 60]])\n\n\n\n\n```python\nns = numpy.arange(5,20)\nconds = [numpy.linalg.cond(chebyshev_interp_and_eval(numpy.linspace(-1,1,n),\n numpy.linspace(-1,1,100)))\n for n in ns]\npyplot.semilogy(ns, conds);\n```\n\nThis ill-conditioning cannot be fixed when using polynomial *interpolation* on equally spaced grids.\n\n### Chebyshev nodes\n\nThe Chebyshev polynomials assume their maximum value of 1 at points where their derivatives are zero (plus the endpoints). Choosing the roots of $T_n'(x)$ (plus endpoints) will control the polynomials and should lead to a well-conditioned formulation.\n\n\n```python\ndef cosspace(a, b, n=50):\n return (a + b)/2 + (b - a)/2 * (numpy.cos(numpy.linspace(-numpy.pi, 0, n)))\n\nconds = [numpy.linalg.cond(chebyshev_interp_and_eval(cosspace(-1,1,n),\n numpy.linspace(-1,1,100)))\n for n in ns]\npyplot.figure()\npyplot.plot(ns, conds);\n```\n\n\n```python\nx = cosspace(-1, 1, 7)\npyplot.plot(x, 0*x, 'o')\npyplot.plot(xx, chebeval(xx, 7)[1]);\n```\n\n\n```python\nx = cosspace(-1,1,20)\nxx = numpy.linspace(-1,1,100)\npyplot.figure()\npyplot.plot(x, runge1(x), 'o')\npyplot.plot(xx, chebyshev_interp_and_eval(x, xx).dot(runge1(x)));\n```\n\n## Chebyshev solution of Boundary Value Problems\n\nIf instead of an equally (or arbitrarily) spaced grid, we choose the Chebyshev nodes and compute derivatives in a stable way (e.g., via interpolating into the Chebyshev basis), we should have a very accurate method. 
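As a quick check of that claim, here is a small sketch using NumPy's `numpy.polynomial.chebyshev` helpers (the test function and the node counts are arbitrary illustrative choices): interpolate at the Chebyshev points, differentiate the expansion, and watch the maximum error of the derivative collapse as the number of nodes grows.\n\n\n```python\nimport numpy\nfrom numpy.polynomial import chebyshev as cheb\n\ng = lambda x: numpy.tanh(2*x)\ndg = lambda x: 2 / numpy.cosh(2*x)**2     # exact derivative\n\nxx = numpy.linspace(-1, 1, 200)           # evaluation grid\nfor n in (5, 10, 20, 40):\n    x = numpy.cos(numpy.linspace(-numpy.pi, 0, n))  # same nodes as cosspace(-1, 1, n)\n    c = cheb.chebfit(x, g(x), n-1)        # interpolating Chebyshev coefficients\n    err = numpy.max(numpy.abs(cheb.chebval(xx, cheb.chebder(c)) - dg(xx)))\n    print(n, err)\n```\n\nThe error falls faster than any fixed power of $n$ for this smooth function.\n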
Let's return to our test equation\n\n$$ -u''(x) = f(x) $$\n\nsubject to some combination of Neumann and Dirichlet boundary conditions.\n\n\n```python\ndef laplacian_cheb(n, rhsfunc, left, right):\n \"\"\"Solve the Laplacian boundary value problem on (-1,1) using n elements with rhsfunc(x) forcing.\n The left and right boundary conditions are specified as a pair (deriv, func) where\n * deriv=0 for Dirichlet u(x_endpoint) = func(x_endpoint)\n * deriv=1 for Neumann u'(x_endpoint) = func(x_endpoint)\"\"\"\n x = cosspace(-1, 1, n+1) # n+1 points is n \"elements\"\n T = chebeval(x)\n L = -T[2]\n rhs = rhsfunc(x)\n for i,deriv,func in [(0, *left), (-1, *right)]:\n L[i] = T[deriv][i]\n rhs[i] = func(x[i])\n return x, L @ numpy.linalg.inv(T[0]), rhs\n\nclass exact_tanh:\n def __init__(self, k=1, x0=0):\n self.k = k\n self.x0 = x0\n def u(self, x):\n return numpy.tanh(self.k*(x - self.x0))\n def du(self, x):\n return self.k * numpy.cosh(self.k*(x - self.x0))**(-2)\n def ddu(self, x):\n return -2 * self.k**2 * numpy.tanh(self.k*(x - self.x0)) * numpy.cosh(self.k*(x - self.x0))**(-2)\n \nex = exact_tanh(5, 0.3)\nx, L, rhs = laplacian_cheb(20, lambda x: -ex.ddu(x),\n left=(0,ex.u), right=(0,ex.u))\nuu = numpy.linalg.solve(L, rhs)\npyplot.plot(x, uu, 'o')\npyplot.plot(xx, ex.u(xx))\npyplot.plot(xx, -ex.ddu(xx), '-.')\nprint(numpy.linalg.norm(uu - ex.u(x), numpy.inf))\n```\n\n\n```python\ndef mms_error(n, discretize, sol):\n x, L, f = discretize(n, lambda x: -sol.ddu(x), left=(0,sol.u), right=(1,sol.du))\n u = numpy.linalg.solve(L, f)\n return numpy.linalg.norm(u - sol.u(x), numpy.inf)\n\nns = numpy.geomspace(10,100,10, dtype=int)\nerrors = [mms_error(n, laplacian_cheb, ex) for n in ns]\npyplot.figure()\npyplot.loglog(ns, errors, 'o', label='numerical')\nfor p in range(1,5):\n pyplot.loglog(ns, 1/ns**(p), label='$n^{-%d}$'%p)\npyplot.xlabel('n')\npyplot.ylabel('error')\n \npyplot.legend(loc='lower left');\n```\n\n# Homework 2: Due 2018-09-30\n\nUse a Chebyshev method to solve the second order ordinary differential equation\n\n$$ u''(t) + a u'(t) + b u(t) = f(t) $$\n\nfrom $t=0$ to $t=1$ with initial conditions $u(0) = 1$ and $u'(0) = 0$.\n\n1. 
Do a grid convergence study to test the accuracy of your method.\n* Setting $f(t)=0$, experiment with the values $a$ and $b$ to identify two regimes with qualitatively different dynamics.\n", "meta": {"hexsha": "0288c0be9119aafb5f5dd4cf6dfdd2aefa5cef50", "size": 434186, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "FDHighOrder.ipynb", "max_stars_repo_name": "dkesseli/numpde", "max_stars_repo_head_hexsha": "c2f005b81c2f687dfecdd63d37da3591d61c9598", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 10, "max_stars_repo_stars_event_min_datetime": "2018-08-27T16:13:44.000Z", "max_stars_repo_stars_event_max_datetime": "2020-01-03T02:55:24.000Z", "max_issues_repo_path": "FDHighOrder.ipynb", "max_issues_repo_name": "dkesseli/numpde", "max_issues_repo_head_hexsha": "c2f005b81c2f687dfecdd63d37da3591d61c9598", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "FDHighOrder.ipynb", "max_forks_repo_name": "dkesseli/numpde", "max_forks_repo_head_hexsha": "c2f005b81c2f687dfecdd63d37da3591d61c9598", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 12, "max_forks_repo_forks_event_min_datetime": "2018-08-27T21:51:53.000Z", "max_forks_repo_forks_event_max_datetime": "2020-01-03T02:55:35.000Z", "avg_line_length": 458.4857444562, "max_line_length": 63292, "alphanum_fraction": 0.9359099556, "converted": true, "num_tokens": 5151, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9032942067038784, "lm_q2_score": 0.9230391706552538, "lm_q1q2_score": 0.8337759354136433}} {"text": "# Utility Theory\n\nThis notebook summarizes basic utility theory and how it can be used in portfolio optimisation.\n\n## Load Packages and Extra Functions\n\n\n```julia\nusing Printf, LinearAlgebra, Optim #Optim is an optimization package\n\ninclude(\"jlFiles/printmat.jl\")\n```\n\n\n\n\n printyellow (generic function with 1 method)\n\n\n\n\n```julia\nusing Plots\n\n#pyplot(size=(600,400))\ngr(size=(480,320))\ndefault(fmt = :png)\n```\n\n# Utility Function\n\nThe CRRA utility function is $U(x) = \\frac{x^{1-\\gamma}}{1-\\gamma}$\n\n### A Remark on the Code\n\n1. A Julia function can be created in several ways. For a simple one-liner, it is enough to do like this:\n`MySum(x,y) = x + y`\n\n2. If the function `U(x,\u03b3)` is written for a scalar `x` value, then we can calculate its value at each element of a vector `x_range` by using `U.(x_range,y)`.\n\n\n```julia\n\"\"\"\n U(x,\u03b3)\n\nCRRA utility function, \u03b3 is the risk aversion.\n\"\"\"\nU(x,\u03b3) = x^(1-\u03b3)/(1-\u03b3)\n\n\n\n\"\"\"\n U_1(u,\u03b3)\n\nInverse of CRRA utility function. Solves for x st. U(x,\u03b3) = u.\n\"\"\"\nU_1(u,\u03b3) = (u*(1-\u03b3))^(1/(1-\u03b3))\n```\n\n\n\n\n U_1\n\n\n\n\n```julia\nx_range = range(0.8,1.2,length=25) #possible different outcomes\n\u03b3 = 2\n\np1 = plot( x_range,U.(x_range,\u03b3), #notice the dot(.)\n linecolor = :red,\n linewidth = 2,\n legend = false,\n title = \"Utility function, CRRA, \u03b3 = $\u03b3\",\n xlabel = \"x\" )\ndisplay(p1)\n```\n\n# Expected Utility\n\nRecall: if $\\pi_s$ is the probability of outcome (\"state\") $x_s$ and there are $S$ possible outcomes, then expected utility is\n\n$\\text{E}U(x) = \\sum\\nolimits_{s=1}^{S} \\pi_{s}U(x_s)$\n\n### A Remark on the Code\n\nWhen `x` is a vector, then `U.(x,\u03b3)` is a vector of the corresponding utility values. 
`dot(\u03c0,U.(x,\u03b3))` calculates the inner product of the two vectors, that is, summing `\u03c0[i]*U(x[i],\u03b3)` across all `i`.\n\n\n```julia\n\"\"\"\n EU(\u03c0,x,\u03b3)\n\nCalculate expected CRRA utility from a vector of outcomes `x` and a vector of their probabilities `\u03c0`.\n\"\"\"\nEU(\u03c0,x,\u03b3) = dot(\u03c0,U.(x,\u03b3)) #could also do sum(\u03c0.*U.(x,\u03b3))\n```\n\n\n\n\n EU\n\n\n\n\n```julia\n\u03b3 = 2 #risk aversion\n\n(x\u2081,x\u2082) = (0.85,1.15) #possible outcomes\n(\u03c0\u2081,\u03c0\u2082) = (0.5,0.5) #probabilities of outcomes \n\nstate\u2081 = [x\u2081,U(x\u2081,\u03b3),\u03c0\u2081] #for printing\nstate\u2082 = [x\u2082,U(x\u2082,\u03b3),\u03c0\u2082]\n\nprintblue(\"Different states: wealth, utility and probability:\\n\")\nprintmat([state\u2081 state\u2082],colNames=[\"state 1\",\"state 2\"],rowNames=[\"wealth\",\"utility\",\"probability\"])\n\nEx = dot([\u03c0\u2081,\u03c0\u2082],[x\u2081,x\u2082]) #expected wealth\nExpUtil = EU([\u03c0\u2081,\u03c0\u2082],[x\u2081,x\u2082],\u03b3) #expected utility\n\nprintmat([Ex,ExpUtil],rowNames=[\"Expected wealth\",\"Expected utility\"])\n```\n\n \u001b[34m\u001b[1mDifferent states: wealth, utility and probability:\u001b[22m\u001b[39m\n \n state 1 state 2\n wealth 0.850 1.150\n utility -1.176 -0.870\n probability 0.500 0.500\n \n Expected wealth 1.000\n Expected utility -1.023\n \n\n\n\n```julia\np1 = plot( x_range,U.(x_range,\u03b3),\n linecolor = :red,\n linewidth = 2,\n label = \"Utility fn\",\n ylim = (-1.3,-0.7),\n legend = :topleft,\n title = \"Utility function, CRRA, \u03b3=$\u03b3\",\n xlabel = \"x\" )\nscatter!([x\u2081,x\u2082],U.([x\u2081,x\u2082],\u03b3),markercolor=:green,label=\"possible utility outcomes\")\nhline!([ExpUtil],linecolor=:magenta,line=(:dash,1),label=\"Expected utility\")\ndisplay(p1)\n```\n\n# Certainty Equivalent\n\nThe certainty equivalent (here denoted $P$) is the sure value that solves \n\n$U(P) = \\text{E}U(x)$,\n\nwhere the right hand side is the expected utility from the random $x$. $P$ is the highest price the investor is willing to pay for \"asset\" $x$.\n\nThe code below solves for $P$ by inverting the utility function, first for the same $\\gamma$ as above, and later for different values of the risk aversion.\n\nWe can think of $\\textrm{E}(x)/P-1$ as the expected return on $x$ that the investor requires in order to buy the asset. 
It is increasing in the risk aversion $\\gamma$.\n\n\n```julia\nEU_i = EU([\u03c0\u2081,\u03c0\u2082],[x\u2081,x\u2082],\u03b3) #expected utility\nP = U_1(EU_i,\u03b3) #certainty equivalent (inverting the utility fn)\n\np1 = plot( x_range,U.(x_range,\u03b3),\n linecolor = :red,\n linewidth = 2,\n label = \"Utility fn\",\n ylim = (-1.3,-0.7),\n legend = :bottomright,\n title = \"Utility function, CRRA, \u03b3=$\u03b3\",\n xlabel = \"x\" )\nscatter!([x\u2081,x\u2082],U.([x\u2081,x\u2082],\u03b3),markercolor=:green,label=\"possible utility outcomes\")\nhline!([ExpUtil],linecolor=:magenta,line=(:dash,1),label=\"Expected utility\")\nvline!([P],linecolor=:blue,line=(:dash,1),label=\"certainty equivalent\")\ndisplay(p1)\n```\n\n\n```julia\n\u03b3 = [0,2,5,10,25,50,100] #different risk aversions\nL = length(\u03b3)\n\n(P,ERx) = (fill(NaN,L),fill(NaN,L))\nfor i = 1:L\n #local EU_i #local/global is needed in script\n EU_i = EU([\u03c0\u2081,\u03c0\u2082],[x\u2081,x\u2082],\u03b3[i]) #expected utility with \u03b3[i]\n P[i] = U_1(EU_i,\u03b3[i]) #inverting the utility fn\n ERx[i] = Ex/P[i] - 1 #required expected net return\nend\n\nprintblue(\"risk aversion and certainly equivalent (recall: E(wealth) = $Ex):\\n\")\nprintmat([\u03b3 P ERx],colNames= [\"\u03b3\",\"certainty eq\",\"expected return\"],width=20)\n```\n\n \u001b[34m\u001b[1mrisk aversion and certainly equivalent (recall: E(wealth) = 1.0):\u001b[22m\u001b[39m\n \n \u03b3 certainty eq expected return\n 0.000 1.000 0.000\n 2.000 0.977 0.023\n 5.000 0.947 0.056\n 10.000 0.912 0.097\n 25.000 0.875 0.143\n 50.000 0.862 0.160\n 100.000 0.856 0.168\n \n\n\n# Portfolio Choice with One Risky Asset\n\nIn the example below, the investor maximizes $\\text{E}\\ln (1+R_{p})\\text{, with }R_{p}=vR_{i} + (1-v)R_{f}$ by choosing $v$. There are two possible outcomes for $R_{i}$ with equal probabilities.\n\nThis particular problem can be solved by pen and paper, but this becomes very difficult when the number of states increases - and even worse when there are many assets. To prepare for these tricker cases, we apply a numerical optimization algorithm already to this simple problem.\n\n### Remark on the Code\n\nTo solve the optimization problem we use `optimize()` from the [Optim.jl](https://github.com/JuliaNLSolvers/Optim.jl) package. The key steps are:\n\n1. Define a function for expected utility, `EUlog(v,\u03c0,Re,Rf)`. The value depends on the portfolio choice `v`, as well as the properties of the asset (probabilities and returns for different states).\n\n2. To create data for the plot, we loop over `v[i]` values and calculate expected utility as `EUlog(v[i],\u03c0,Re,Rf)` where \n`\u03c0` and `Re` are vectors of probabilities and returns in the different states, and the riskfree rate `Rf` is the same in all states. (Warning: you can assign a value to `\u03c0` provided you do not use the built-in constant `\u03c0` (3.14156...) first.)\n\n3. For the optimization, we minimize the anonymous function `v->-EUlog(v,\u03c0,Re,Rf)`. 
This is a function of `v` only and we use the negative value since `optimize()` is a *minimization* routine.\n\n\n```julia\n\"\"\"\n EUlog(v,\u03c0,Re,Rf)\n\nCalculate expected utility (log(1+Rp)) from investing into one risky and one riskfree asset\n\nv: scalar\n\u03c0: S vector probabilities of the different states\nRe: S vector, excess returns of the risky asset in different states\nRf: scalar, riskfree rate\n\n\"\"\"\nfunction EUlog(v,\u03c0,Re,Rf) #expected utility, utility fn is logarithmic\n R = Re .+ Rf\n Rp = v*R .+ (1-v)*Rf #portfolio return\n eu = dot(\u03c0,log.(1.0.+Rp)) #expected utility\n return eu\nend\n```\n\n\n\n\n EUlog\n\n\n\n\n```julia\n\u03c0 = [0.5,0.5] #probabilities for different states\nRe = [-0.10,0.12] #excess returns in different states\nRf = 0 #riskfree rate\n\nv_range = range(-1,1.5,length=101) #try different weights on risky asset\nL = length(v_range)\nEUv = fill(NaN,L)\nfor i = 1:L\n EUv[i] = EUlog(v_range[i],\u03c0,Re,Rf)\nend\n\np1 = plot( v_range,EUv,\n linecolor = :red,\n linewidth = 2,\n legend = false,\n title = \"Expected utility as a function of v\",\n xlabel = \"weight (v) on risky asset\" )\ndisplay(p1)\n```\n\n\n```julia\nSol = optimize(v->-EUlog(v,\u03c0,Re,Rf),-1,1) #minimize -EUlog\nprintlnPs(\"Optimum at: \",Optim.minimizer(Sol))\n\nprintred(\"\\nCompare with the figure\")\n```\n\n Optimum at: 0.833\n \n \u001b[31m\u001b[1mCompare with the figure\u001b[22m\u001b[39m\n\n\n# Portfolio Choice with Several Risky Assets\n\nThis optimization problem has several risky assets and states and a general CRRA utility function. Numerical optimization is still straightforward.\n\n### A Remark on the Code\n\nThe code below is fairly similar to the log utility case solved before, but extended to handle CRRA utility and several assets and states.\n\nWith several choice variables, the call to `optimize()` requires a vector of starting guesses as input.\n\n\n```julia\n\"\"\"\n EUcrra(v,\u03c0,R,Rf,\u03b3)\n\nCalculate expected utility from investing into n risky assets and one riskfree asset\n\nv: n vector (weights on the n risky assets)\n\u03c0: S vector (S possible \"states\")\nR: nxS matrix, each column is the n vector of returns in one of the states\nRf: scalar, riskfree rate\n\u03b3: scalar, risk aversion\n\"\"\"\nfunction EUcrra(v,\u03c0,R,Rf,\u03b3)\n S = length(\u03c0)\n Rp = fill(NaN,S)\n for i = 1:S #portfolio return in each state\n Rp[i] = v'R[:,i] + (1-sum(v))*Rf\n end\n eu = EU(\u03c0,1.0.+Rp,\u03b3) #expected utility when using portfolio v\n return eu\nend\n```\n\n\n\n\n EUcrra\n\n\n\n\n```julia\nRe = [-0.03 0.08 0.20; #2 assets, 3 states\n -0.04 0.22 0.15] #Re[i,j] is the excess return of asset i in state j\n\u03c0 = [1/3,1/3,1/3] #probs of the states\nRf = 0.065\n\u03b3 = 5\n\nSol = optimize(v->-EUcrra(v,\u03c0,Re,Rf,\u03b3),[-0.6,1.2]) #minimize -EUcrra\nv = Optim.minimizer(Sol)\n\nprintblue(\"optimal portfolio weights from max EUcrra():\\n\")\nprintmat([v;1-sum(v)],rowNames=[\"asset 1\",\"asset 2\",\"riskfree\"])\n```\n\n \u001b[34m\u001b[1moptimal portfolio weights from max EUcrra():\u001b[22m\u001b[39m\n \n asset 1 -0.726\n asset 2 1.317\n riskfree 0.409\n \n\n\n# Mean-Variance and the Telser Criterion\n\nLet $\\mu$ be a vector of expected returns and $\\Sigma$ be the covariance matrix of the investible assets.\n\nThe Telser criterion solves the problem\n\n$\\max_{v} \\mu_{p} \\: \\text{ subject to} \\: \\text{VaR}_{95\\%} < 0.1$,\n\nwhere $\\mu_{p} = v'\\mu+(1-v)R_f$ is the expected portfolio return.\n\nIf the returns are normally 
distributed then \n\n$\\text{VaR}_{95\\%} = -(\\mu_p - 1.64\\sigma_p)$,\n\nwhere $\\sigma_p = \\sqrt{v'\\Sigma v}$ is the standard deviation of the portfolio return. It follows that the VaR restriction can be written $-(\\mu_p - 1.64\\sigma_p) < 0.1$, which implies that the following must hold\n\n$\\mu_p > -0.1 + 1.64\\sigma_p$.\n\nThe figure below illustrates that the optimal portfolio is on the CLM (when the returns are normally distributed).\n\nIt can be shown (see lecture notes) that the return of the optimal portfolio is \n$R_{opt} = vR_T + (1-v)R_f$, \nwhere $R_T$ is the return of the tangency portfolio. The $v$ value is\n\n$\n\\begin{equation}\nv=-\\frac{R_{f}+V^{\\ast}}{c\\sigma_{T}+\\mu_{T}^{e}},\n\\end{equation}\n$\n\nwhere $V^{\\ast}$ is the VaR restriction (here 0.1) and $c$ is the critical value corresponding to the 1- confidence level of the VaR (here $c=-1.64$ since we use the 95% confidence level).\n\n\n```julia\ninclude(\"jlFiles/MvCalculations.jl\") #functions for traditional MV frontiers\n```\n\n\n\n\n MVTangencyP\n\n\n\n\n```julia\n\u03bc = [9, 6]/100 #means of investable assets\n\u03a3 = [ 256 0; #covariance matrix\n 0 144]/10000\nRf = 1/100\n\n\u03bcstar_range = range(Rf,0.1,length=101) #required average returns\nL = length(\u03bcstar_range)\n\n(StdRp_a,StdRp_b) = (fill(NaN,L),fill(NaN,L))\nfor i = 1:L\n StdRp_a[i] = MVCalc(\u03bcstar_range[i],\u03bc,\u03a3)[1] #std of MVF (risky only) at \u03bcstar\n StdRp_b[i] = MVCalcRf(\u03bcstar_range[i],\u03bc,\u03a3,Rf)[1] #std of MVF (risky&riskfree) at \u03bcstar\nend\n\nVaRRestr = -0.1 .+ 1.64*StdRp_b; #portfolio mean return must be above this\n```\n\n\n```julia\np1 = plot( [StdRp_b StdRp_a StdRp_b]*100,[\u03bcstar_range \u03bcstar_range VaRRestr]*100,\n linestyle = [:dash :solid :dot],\n linecolor = [:blue :red :black],\n linewidth = 2,\n label = [\"CML\" \"MVF\" \"VaR restriction\"],\n xlim = (0,15),\n ylim = (0,10),\n legend = :topleft,\n title = \"Mean vs std\",\n xlabel = \"Std(Rp), %\",\n ylabel = \"ERp, %\" )\ndisplay(p1)\n```\n\nThe next few cells shows the explicit solution of Telser problem, assuming normally distributed returns.\n\n\n```julia\n\"\"\"\n TelserSolution(\u03bceT,\u03c3T,Rf,Varstar,c)\n\nCalculate v in Rp = v*RT + (1-v)Rf which maximizes the Telser criterion,\nassuming normally distributed returns and where RT is the return on the tangency\nportfolio and Rf is the riskfree rate. 
\u03bceT and \u03c3T are the expected excess return \nand the standard deviation of the tangency portfolio.\n\nSee the lecture notes for more details.\n\"\"\"\nfunction TelserSolution(\u03bceT,\u03c3T,Rf,Varstar,c)\n v = - (Rf+Varstar)/(c*\u03c3T+\u03bceT)\n return v\nend\n```\n\n\n\n\n TelserSolution\n\n\n\n\n```julia\n(wT,\u03bcT,\u03c3T) = MVTangencyP(\u03bc,\u03a3,Rf) #tangency portfolio from (\u03bc,\u03a3)\nprintlnPs(\"Mean and std of tangency portfolio \",\u03bcT,\u03c3T)\n\nv = TelserSolution(\u03bcT-Rf,\u03c3T,Rf,0.1,-1.64)\n\nprintblue(\"\\nOptimal portfolio, weight on tangency portfolio and riskfree:\")\nprintmat([v,1-v],rowNames=[\"tangency\",\"riskfree\"])\n\n\u03bcOpt = v*\u03bcT + (1-v)*Rf #mean and std of optimal portfolio\n\u03c3Opt = abs(v)*\u03c3T\n\nprintblue(\"\\nMean and std of optimal portfolio:\")\nprintlnPs(\u03bcOpt,\u03c3Opt)\n\nprintred(\"\\ncompare with the figure\")\n```\n\n Mean and std of tangency portfolio 0.074 0.099\n \n \u001b[34m\u001b[1mOptimal portfolio, weight on tangency portfolio and riskfree:\u001b[22m\u001b[39m\n tangency 1.127\n riskfree -0.127\n \n \n \u001b[34m\u001b[1mMean and std of optimal portfolio:\u001b[22m\u001b[39m\n 0.082 0.111\n \n \u001b[31m\u001b[1mcompare with the figure\u001b[22m\u001b[39m\n\n\n\n```julia\n\n```\n", "meta": {"hexsha": "822a8896649d2daeb573af62f811d50fd99d88a4", "size": 164664, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Ch10_UtilityTheory.ipynb", "max_stars_repo_name": "PaulSoderlind/FinancialTheoryMSc", "max_stars_repo_head_hexsha": "bbdbeaedea4feb25b019608e65e0d8a5ecde946d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 31, "max_stars_repo_stars_event_min_datetime": "2017-10-22T20:52:31.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-03T22:53:06.000Z", "max_issues_repo_path": "Ch10_UtilityTheory.ipynb", "max_issues_repo_name": "PaulSoderlind/FinancialTheoryMSc", "max_issues_repo_head_hexsha": "bbdbeaedea4feb25b019608e65e0d8a5ecde946d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Ch10_UtilityTheory.ipynb", "max_forks_repo_name": "PaulSoderlind/FinancialTheoryMSc", "max_forks_repo_head_hexsha": "bbdbeaedea4feb25b019608e65e0d8a5ecde946d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 21, "max_forks_repo_forks_event_min_datetime": "2017-11-27T21:34:48.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-11T05:58:09.000Z", "avg_line_length": 212.7441860465, "max_line_length": 32669, "alphanum_fraction": 0.9013202643, "converted": true, "num_tokens": 4379, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9381240108164656, "lm_q2_score": 0.8887587890727755, "lm_q1q2_score": 0.8337659598533372}} {"text": "```python\nimport random\n\ndef stick(number_goats=2):\n \"\"\"A function to simulate a play of the game when we stick\"\"\"\n doors = ['Car'] + number_goats * ['Goat']\n \n initial_choice = random.choice(doors) # make a choice\n return initial_choice == 'Car'\n\ndef switch(number_goats=2):\n \"\"\"A function to simulate a play of the game when we swap\"\"\"\n doors = ['Car'] + number_goats * ['Goat'] \n \n initial_choice = random.choice(doors) # make a choice\n \n doors.remove(initial_choice) # Switching: remove initial choice\n doors.remove('Goat') # The game show host shows us a goat\n \n new_choice = random.choice(doors) # We choose our one remaining option\n \n return new_choice == 'Car'\n```\n\nChecking the initial probabilities:\n\n\n```python\nrepetitions = 10000\nrandom.seed(0)\nprob_win_stick = sum([stick() for rep in range(repetitions)]) / repetitions\nprob_win_switch = sum([switch() for rep in range(repetitions)]) / repetitions\nprob_win_stick, prob_win_switch\n```\n\n\n\n\n (0.3346, 0.6636)\n\n\n\nVerifying the mathematical formula:\n\n\n```python\nimport sympy as sym\nn = sym.symbols('n')\np_n = (1 - 1 / (n + 1)) * (1 / (n - 1))\np_n.simplify()\n```\n\n\n\n\n n/(n**2 - 1)\n\n\n\n\n```python\n(p_n / (1 / (n + 1))).simplify() \n```\n\n\n\n\n n/(n - 1)\n\n\n\nA function for the ratio:\n\n\n```python\ndef ratio(repetitions=50000, number_goats=2):\n \"\"\"Obtain the ratio of win probabilities\"\"\"\n prob_win_stick = sum([stick(number_goats=number_goats) \n for rep in range(repetitions)]) / repetitions\n prob_win_switch = sum([switch(number_goats=number_goats) \n for rep in range(repetitions)]) / repetitions\n return prob_win_switch / prob_win_stick \n```\n\nDraw the plot:\n\n\n```python\nimport matplotlib.pyplot as plt\nrandom.seed(0)\ngoats = range(2, 25 + 1)\nratios = [ratio(number_goats=n) for n in goats]\ntheoretic_ratio = [(n / (n - 1)) for n in goats]\n\nplt.figure()\nplt.scatter(goats, ratios, label=\"simulated\")\nplt.plot(goats, theoretic_ratio, color=\"C1\", label=\"theoretic\")\nplt.xlabel(\"Number of goats\")\nplt.ylabel(\"Ratio\")\nplt.legend()\nplt.savefig(\"simulated_v_expected_ratio_of_win_probability.pdf\");\n```\n\n\n```python\nsym.limit((p_n / (1 / (n + 1))), n, sym.oo)\n```\n\n\n\n\n 1\n\n\n", "meta": {"hexsha": "a9376716bd0dc543869de23aee2a238b1fb07e67", "size": 4930, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "assets/rsc/monty-hall/main.ipynb", "max_stars_repo_name": "geraintpalmer/cfm", "max_stars_repo_head_hexsha": "fa3f98cf45b225015f28be461e8ae661fa966b61", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "assets/rsc/monty-hall/main.ipynb", "max_issues_repo_name": "geraintpalmer/cfm", "max_issues_repo_head_hexsha": "fa3f98cf45b225015f28be461e8ae661fa966b61", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "assets/rsc/monty-hall/main.ipynb", "max_forks_repo_name": "geraintpalmer/cfm", "max_forks_repo_head_hexsha": "fa3f98cf45b225015f28be461e8ae661fa966b61", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 
23.2547169811, "max_line_length": 86, "alphanum_fraction": 0.5036511156, "converted": true, "num_tokens": 627, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9124361700013356, "lm_q2_score": 0.9136765234137297, "lm_q1q2_score": 0.8336715076437592}} {"text": "# 2012 Pascal Contest - Question 24\n\n\n\nUse the variables $q, r, s, t, u, v, w, x, y$ to denote the values in all nine circles.\n\n\n\nLet $C$ denote the sum of the three integers along each line. \nThe following four equations hold.\n\n$$\n\\begin{align}\nq + r + s &= C \\\\\ns + t + u &= C \\\\\nu + v + w &= C \\\\\nw + x + y &= C\n\\end{align}\n$$\n\nWe are going to do some arithmetic, but the values $2012, 2013, \\dots, 2020$ are somewhat large.\nHowever, we can solve the problem using reduced values $q', r', \\dots, y'$ and then convert that answer to the solution of the original problem.\nObserve that if we reduce the nine consecutive integers each by the same amount, \nsay $M$, so \n\n$$\n\\begin{align}\nq' &= q - M \\\\\nr' &= r - M \\\\\n &\\vdots \\\\\ny' &= y - M\n\\end{align}\n$$\n\nthen the reduced integers are still consecutive and their sums along the four lines, will still be equal, say $C'$, although now this constant will be reduced by $3M$\ncompared to $C$. \nFor example, consider the line formed by $q', r', s'$:\n\n$$\n\\begin{split}\nC' &= q' + r' + s' \\\\\n &= q - M + r - M + s - M \\\\\n &= q + r + s - 3M \\\\\n &= C - 3M\n\\end{split}\n$$\n\nand similarly for the other three lines.\n\nTake $M$ to be the average value of the original integers:\n\n$$\nM = (2012 + 2020)/2 = 2016\n$$\n\nWith $M = 2016$, the reduced integers are $-4, -3, -2, -1, 0, 1, 2, 3, 4$.\nNote that the sum of these values is $0$.\nTherefore any arrangement of these integers also sums to $0$:\n\n$$\nq' + r' + s' + t' + u' + v' + w' + x' + y' = 0\n$$\n\nThe line sum equations in terms of the reduced variables are:\n\n$$\n\\begin{align}\nq' + r' + s' &= C' \\\\\ns' + t' + u' &= C' \\\\\nu' + v' + w' &= C' \\\\\nw' + x' + y' &= C'\n\\end{align}\n$$\n\nAdd these four equations together.\n\n$$\n\\begin{split}\n4C' &= q' + r' + s' + s' + t' + u' + u' + v' + w' + w' + x' + y' \\\\\n &= (q' + r' + s' + t' + u' + v' + w' + x' + y') + s' + u' + w' \\\\\n &= s' + u' + w'\n\\end{split}\n$$\n\nTherefore in any solution, $s' + u' + w'$ must be a multiple of $4$.\n\nThe line sum $C'$ takes its smallest value when $s', u', w'$ are as negative as possible, while still summing to a multiple of $4$.\nThe smallest multiple of $4$ that can be formed from the reduced integers occurs for the values $-4, -3, -1$.\nIn this case we have:\n\n$$\n4C' = -4 + -3 + -1 = -8\n$$\n\nTherefore $C' = -2$, which may or may not actually occur for some solution.\n\nNow try to find solutions that have $C' = -2$.\n\nNote that the problem has left-right symmetry in the sense that if we have one solution\nwe can find another one by interchanging the values under left-right reflection by a mirror through\n$u'$. Therefore we can just look for solutions with $s' < w'$ since we obtain the other\nsolution by reflection.\n\nAlso note that, given a solution, we can swap $q'$ with $r'$ or $x'$ with $y'$\nto get another solution. 
So just look for solutions with $q' < r'$ and $x' < y'$.\n\n## Case 1: $u' = -4, s' = -3, w' = -1$\n\nThis case is impossible since\n\n$$\n\\begin{split}\n-2 &= s' + t' + u' \\\\\n &= -3 + t' + -4 \\\\\n &= -7 + t' \\\\\n &\\implies t' = 5\n\\end{split}\n$$\n\nBut $t' = 5$ is not allowed.\n\n## Case 2: $u' = -3, s' = -4, w' = -1$\n\nThis case is also impossible for the same reason as Case 1.\n\n## Case 3: $u' = -1, s' = -4, w' = -3$\n\nSolve the line equations to find the other values.\n\nSolve for $t' \\in \\{ -2, 0, 1, 2, 3, 4 \\}$:\n\n$$\n\\begin{split}\n-2 &= s' + t' + u' \\\\\n &= -4 + t' + -1 \\\\\n &= -5 + t' \\\\\n &\\implies t' = 3\n\\end{split}\n$$\n\nSolve for $v' \\in \\{ -2, 0, 1, 2, 4 \\}$:\n\n$$\n\\begin{split}\n-2 &= u' + v' + w' \\\\\n &= -1 + v' + -3 \\\\\n &= -4 + v' \\\\\n &\\implies v' = 2\n\\end{split}\n$$\n\nSolve for $q', r' \\in \\{ -2, 0, 1, 4 \\}$ with $q' < r'$:\n\n$$\n\\begin{split}\n-2 &= q' + r' + s' \\\\\n &= q' + r' + -4 \\\\\n &\\implies q' + r' = 2 \\\\\n &\\implies q' = -2, r' = 4\n\\end{split}\n$$\n\nSolve for $x', y'\\in \\{ 0, 1 \\}$ with $x' < y'$:\n\n$$\n\\begin{split}\n-2 &= w' + x' + y' \\\\\n &= -3 + x' + y' \\\\\n &\\implies x' + y' = 1 \\\\\n &\\implies x' = 0, y' = 1\n\\end{split}\n$$\n\nTherefore $u' = -1$ for the solutions that give the smallest value of $C'$. \n\nThe solution to the original problem is therefore:\n\n$$\n\\begin{split}\nu &= u' + M \\\\\n &= -1 + 2016 \\\\\n &= 2015\n\\end{split}\n$$\n\n## Python Solution\n\nLet's now check the preceding solution by writing a Python program.\nFirst let's try a brute force approach, namely we'll simply generate every possible arrangement of \nthe nine consecutive integers, check the line sums, and keep the solutions.\nThe total number of arrangements of nine things is $9!$.\nLet's see how big this number is using the built-in function `math.factorial`.\n\n\n```python\nimport math\n\nmath.factorial(9)\n```\n\n\n\n\n 362880\n\n\n\nWe see that $9! = 362880$, which is not huge. We could easily store a list of that length in memory.\n\nPython has a built-in function `itertools.permutaions`\nto generate all permutations of a list.\n\n\n```python\nimport itertools\n\nlist(itertools.permutations([2012, 2013, 2014]))\n```\n\n\n\n\n [(2012, 2013, 2014),\n (2012, 2014, 2013),\n (2013, 2012, 2014),\n (2013, 2014, 2012),\n (2014, 2012, 2013),\n (2014, 2013, 2012)]\n\n\n\nNext, let's define a function `isSolution` that detects permutations of nine integers that have the same sum along each of the four lines.\n\n\n```python\ndef isSolution(t):\n C = t[0] + t[1] + t[2]\n\n if C != t[2] + t[3] + t[4]:\n return False\n\n if C != t[4] + t[5] + t[6]:\n return False\n \n return C == t[6] + t[7] + t[8]\n\nnon_solution1 = (-4, -3, -2, -1, 0, 1, 2, 3, 4)\nprint('non_solution1:', isSolution(non_solution1))\n\nsolution1 = (-2, 4, -4, 3, -1, 2, -3, 0, 1)\nprint('solution1:', isSolution(solution1))\n```\n\n non_solution1: False\n solution1: True\n\n\nWe can generate the list of nine integers using the `range` function.\n\n\n```python\nlist(range(2012, 2021))\n```\n\n\n\n\n [2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020]\n\n\n\n\n```python\nsolutions = [(t, t[0] + t[1] + t[2]) \n for t in itertools.permutations(range(2012, 2021)) \n if isSolution(t)]\nlen(solutions)\n```\n\n\n\n\n 96\n\n\n\nNote that the number of solutions should be a muliple of 8 because of the symmetries of the problem. \nGiven a solution, we can swap q and r, swap x and y, and reflect left to right to get other solutions. 
Thus there are 2 * 2 * 2 = 8 solutions that are related to eachother\nby the symmetries.\n\n\n```python\nlen(solutions) % 8\n```\n\n\n\n\n 0\n\n\n\nNext, find the minimum value of $C$\n\n\n```python\nC_min = min([C for (t, C) in solutions])\nC_min\n```\n\n\n\n\n 6046\n\n\n\n\n```python\nsolutions[0]\n```\n\n\n\n\n ((2012, 2018, 2017, 2016, 2014, 2020, 2013, 2015, 2019), 6047)\n\n\n\n\n```python\n\n```\n\nNow select all the solutions that attain the minimum.\n\n\n```python\nmin_solutions = [t for (t, C) in solutions if C == C_min]\nlen(min_solutions)\n```\n\n\n\n\n 8\n\n\n\n\n```python\nprint(min_solutions)\n```\n\n [(2014, 2020, 2012, 2019, 2015, 2018, 2013, 2016, 2017), (2014, 2020, 2012, 2019, 2015, 2018, 2013, 2017, 2016), (2016, 2017, 2013, 2018, 2015, 2019, 2012, 2014, 2020), (2016, 2017, 2013, 2018, 2015, 2019, 2012, 2020, 2014), (2017, 2016, 2013, 2018, 2015, 2019, 2012, 2014, 2020), (2017, 2016, 2013, 2018, 2015, 2019, 2012, 2020, 2014), (2020, 2014, 2012, 2019, 2015, 2018, 2013, 2016, 2017), (2020, 2014, 2012, 2019, 2015, 2018, 2013, 2017, 2016)]\n\n\nFinally, get the value of $u$ for each of these solutions.\n\n\n```python\nmin_us = [t[4] for t in min_solutions]\nprint(min_us)\n```\n\n [2015, 2015, 2015, 2015, 2015, 2015, 2015, 2015]\n\n\nWe can eliminate the duplicates by converting this list to a set.\n\n\n```python\nset(min_us)\n```\n\n\n\n\n {2015}\n\n\n", "meta": {"hexsha": "bd2853f527ade45ed48a0c6d90babea2095ab2b5", "size": 14492, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Meeting Notes/2020/2020-06-13/2012-pascal-contest-q24.ipynb", "max_stars_repo_name": "agryman/sean", "max_stars_repo_head_hexsha": "11baf69c6eb9308266126bf9c8b1c67c6fd33afc", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-03-28T18:17:52.000Z", "max_stars_repo_stars_event_max_datetime": "2020-03-28T18:17:52.000Z", "max_issues_repo_path": "Meeting Notes/2020/2020-06-13/2012-pascal-contest-q24.ipynb", "max_issues_repo_name": "agryman/sean", "max_issues_repo_head_hexsha": "11baf69c6eb9308266126bf9c8b1c67c6fd33afc", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2022-01-21T21:33:00.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-21T21:33:00.000Z", "max_forks_repo_path": "Meeting Notes/2020/2020-06-13/2012-pascal-contest-q24.ipynb", "max_forks_repo_name": "agryman/sean", "max_forks_repo_head_hexsha": "11baf69c6eb9308266126bf9c8b1c67c6fd33afc", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 24.3973063973, "max_line_length": 458, "alphanum_fraction": 0.4647391664, "converted": true, "num_tokens": 2785, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.912436153333645, "lm_q2_score": 0.9136765322283324, "lm_q1q2_score": 0.8336715004576438}} {"text": "# Runge-Kutta methods and higher-order ODEs\n\nRunge-Kutta methods are a broad class of useful ODE solvers. In this notebook we look at a few of them, their convergence rates, and how to apply them to second-order equations\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n```\n\n\n```python\n# The below commands make the font and image size bigger\nplt.rcParams.update({'font.size': 22})\nplt.rcParams[\"figure.figsize\"] = (15,10)\n```\n\nWe will define an `ODESolve` function that can use different methods. 
First let's define the stepping functions for the Euler, RK2 and RK4 methods.\n\n\n```python\ndef EulerStep(f, dx, xi, yi):\n return yi + dx*f(xi, yi)\n```\n\n\n```python\ndef RK2Step(f, dx, xi, yi):\n k1 = dx*f(xi, yi)\n k2 = dx*f(xi + dx, yi + k1)\n \n return yi + 0.5*(k1 + k2)\n```\n\n\n```python\ndef RK4Step(f, dx, xi, yi):\n k1 = dx*f(xi,yi)\n k2 = dx*f(xi + 0.5*dx, yi + 0.5*k1)\n k3 = dx*f(xi + 0.5*dx, yi + 0.5*k2)\n k4 = dx*f(xi + dx, yi + k3)\n \n return yi + 1/6*(k1 + 2*k2 + 2*k3 + k4)\n```\n\nThe method in the below function can be set using the optional 6th argument.\n\n\n```python\ndef ODESolve(f, dx, x0, y0, imax, method='RK4', plotSteps=False):\n \n xi = x0\n yi = y0\n \n # Create arrays to store the steps in\n steps = np.zeros((imax+1,2))\n steps[0,0] = x0\n steps[0,1] = y0\n \n stepper = RK4Step\n if(method == 'RK2'):\n stepper = RK2Step\n elif(method == 'Euler'):\n stepper = EulerStep\n \n i = 0\n while i < imax:\n yi = stepper(f, dx, xi, yi)\n\n xi += dx\n i += 1\n \n # Store the steps for plotting\n steps[i, 0] = xi\n steps[i, 1] = yi \n \n if(plotSteps):\n plt.scatter(steps[:,0], steps[:,1], color='red', linewidth=10)\n \n return [xi, yi]\n```\n\n\n```python\ndef dydx(x,y):\n return -2*x - y\n```\n\n\n```python\ndef yExact(x):\n return - 3*np.exp(-x) - 2*x + 2\n```\n\n\n```python\nx = np.linspace(0, 0.5, 100)\ny = yExact(x)\nplt.xlabel('x')\nplt.ylabel('y')\nplt.grid(True)\nplt.plot(x,y)\n\nODESolve(dydx, 0.1, 0, -1, 5, 'RK2', True)\n```\n\n## Convergence of the methods\nLet's look at the rate of convergence of the three methods: Euler, RK2 and RK4\n\n\n```python\nnmax = 12\n\ndiffEuler = np.zeros(nmax)\ndiffRK2 = np.zeros(nmax)\ndiffRK4 = np.zeros(nmax)\n\nn = 1\nwhile n < nmax:\n deltax = 0.1/2**n\n nsteps = 5*2**n\n resEuler = ODESolve(dydx, deltax, 0, -1, nsteps, 'Euler')\n resRK2 = ODESolve(dydx, deltax, 0, -1, nsteps, 'RK2')\n resRK4 = ODESolve(dydx, deltax, 0, -1, nsteps, 'RK4')\n \n diffEuler[n] = np.abs(resEuler[1] - yExact(0.5))\n diffRK2[n] = np.abs(resRK2[1] - yExact(0.5))\n diffRK4[n] = np.abs(resRK4[1] - yExact(0.5))\n n += 1\n \n# Plot the results\nplt.grid(True)\nplt.yscale('log')\nplt.xlabel('n')\nplt.ylabel('y_i - y(0.5)')\nplt.ylim([1e-24, 1])\n\n# Compute and plot reference curves for the convergence rate\nx = np.linspace(1, nmax, 12)\ndeltax = (0.1/2**x)\nfirstOrder = deltax**1\nsecondOrder = deltax**2\nforthOrder = deltax**4\n\nplt.plot(x, firstOrder)\nplt.plot(x, secondOrder)\nplt.plot(x, forthOrder)\n\nplt.scatter(np.arange(1,nmax+1), diffEuler, color='red', linewidth=10)\nplt.scatter(np.arange(1,nmax+1), diffRK2, color='orange', linewidth=10)\nplt.scatter(np.arange(1,nmax+1), diffRK4, color='green', linewidth=10)\n\nplt.legend(['First-order convergence reference', \n 'Second-order convergence reference', \n 'Forth-order convergence reference',\n 'Convergence of Euler method', \n 'Convergence of the RK2 method',\n 'Convergence of the RK4 method'\n ]);\n```\n\nThus we see that the RK4 method is rapidly convergent. It does require 4 evaluations of the right-hand side of the equations. The RK4 method has a good balance between between taking the least number of steps and achieving the highest accuracy.\n\n## Second-order ODEs\n\nAs we discussed in the lectures we can write any an $n^\\text{th}$-order ODE as a coupled system of $n$ first-order ODEs. Let's look at implementing it in practice. \n\nLet's consider a second-order ODE. 
We want to write this in the form:\n\n$$\\begin{align}\n y_0'(x) &= f_0(x, y_0, y_1)\\\\\n y_1'(x) &= f_1(x, y_0, y_1)\n\\end{align}$$\n\nWe thus want to write a function that when passed $x$ and an array $\\textbf{y} = [y_0, y_1]$ returns an array $\\textbf{f} = [f_0, f_1]$. Let's look at a specific example. Consider $y''(x) = -y$ with $y(0) = 1$. This has the analytic solution $y(x) = \\cos(x)$. Let's write this in first-order form. \n\nLet $y_0(x) = y(x)$ and $y_1 = y_0'(x)$. Then we have $y_1' = -y_0$. Thus\n\n$$\\begin{align}\n y_0'(x) &= y_1\\\\\n y_1'(x) &= -y_0\n\\end{align}$$\n\nso $\\textbf{f} = [y_1, -y_0]$. Let's define a Python function for this.\n\n\n```python\ndef f2(x, y):\n return np.array([y[1], -y[0]])\n```\n\n\n```python\ndef SecondOrderRK2(f, dx, x0, y0, imax):\n output = np.empty((imax, 3))\n i = 0\n xi = x0\n yi = y0\n while(i < imax):\n k1 = dx*f(xi, yi)\n k2 = dx*f(xi + dx, yi + k1)\n yi = yi + 0.5*(k1 + k2)\n xi += dx\n output[i, 0] = xi\n output[i, 1] = yi[0]\n output[i, 2] = yi[1]\n i += 1\n return output\n```\n\n\n```python\nres = SecondOrderRK2(f2, 0.1, 0, [1,0], 200);\n```\n\n\n```python\nx = np.linspace(0,20,400)\ny = np.cos(x)\n\nplt.grid(True)\nplt.scatter(res[:,0], res[:,1], color='r')\nplt.plot(x, y);\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "4e3c37f367185fa872869969ec32dc83d6ac9808", "size": 236105, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "OrdinaryDifferentialEquations/RungeKuttaMethods.ipynb", "max_stars_repo_name": "oisinohearcain/ACM20030-Examples", "max_stars_repo_head_hexsha": "be524b93d1fd9a1ce013492af45c048eb33a6bac", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 13, "max_stars_repo_stars_event_min_datetime": "2020-02-15T21:30:37.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-21T12:03:13.000Z", "max_issues_repo_path": "OrdinaryDifferentialEquations/RungeKuttaMethods.ipynb", "max_issues_repo_name": "oisinohearcain/ACM20030-Examples", "max_issues_repo_head_hexsha": "be524b93d1fd9a1ce013492af45c048eb33a6bac", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "OrdinaryDifferentialEquations/RungeKuttaMethods.ipynb", "max_forks_repo_name": "oisinohearcain/ACM20030-Examples", "max_forks_repo_head_hexsha": "be524b93d1fd9a1ce013492af45c048eb33a6bac", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 24, "max_forks_repo_forks_event_min_datetime": "2020-02-13T14:27:47.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-05T14:17:10.000Z", "avg_line_length": 600.7760814249, "max_line_length": 110368, "alphanum_fraction": 0.945397175, "converted": true, "num_tokens": 1865, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9124361628580401, "lm_q2_score": 0.9136765210631689, "lm_q1q2_score": 0.833671498972361}} {"text": "# Vortex panel solution method\n\nThis section will use vortex panels to solve to the flow around general objects by setting up and solving a system of linear equations.\n\n## General vortex sheets\n\nA curved vortex sheet with a variable strength can describe the flow around any immersed object. This is achieved by having the sheet act as an infinitely thin version of the boundary layer to enforce the no-slip boundary condition. 
\n\n---\n\n\n\n---\n\nIn other words we use the sheets to force the tangential velocity $u_s$ to zero at every point $s$ on the body surface $\\cal S$\n\n\\begin{equation}\nu_s = \\vec u \\cdot \\hat s = 0 \\quad s \\in \\cal S\n\\end{equation}\n\nSubstituting the definition of velocity induced by the sheet, the tangential velocity condition is\n\n\\begin{equation}\n\\left[\\vec U+\\frac{\\partial}{\\partial \\vec x}\\oint_{\\cal S} \\frac{\\gamma(s')}{2\\pi}\\theta(s,s')\\ ds'\\right] \\cdot\\hat s = 0 \n\\end{equation}\n\nwhere $\\vec U$ is the background velocity that has been added by superposition. \n\n**If we can use this equation to determine the strength distribution $\\gamma(s)$ along the sheet then we will have solved for the potential flow around the body!**\n\n## Discrete vortex panels\n\nFor general body surface shapes the velocity is a highly nonlinear function of $\\gamma(s)$, rendering analytic solution unlikely. We could attempt some complex analytic expansions, but why would we want to do that?\n\n##### Numerical fundamental: Discretization\n##### Replace continuous functions with linear approximations\n\nWe already know that the velocity depends **linearly** on $\\gamma$ for a vortex panel. This makes it easy to solve for $\\gamma$ as a function of $u_s$.\n\nIf we break up the continuous sheet into a series of vortex panels, we can add their influence together using superposition. We can then apply the no slip boundary condition, and use it to solve for $\\gamma$. \n\n\n\n\nThis is the essence of the *vortex panel method*.\n\n---\n\n## Array of panels\n\nTo help make this more concrete, lets use a unit circle as an example. The `make_circle` function has already been implemented in the `VortexPanel` module. Since we're going to use many functions in that module lets import the whole thing (like we do for `numpy`), and since it has a long name, lets give it the nick-name `vp`:\n\n\n```python\n# Annoying stuff to find files in other directories (only needed once)\nimport os\nimport sys\nmodule_path = os.path.abspath(os.path.join('..'))\nif module_path not in sys.path:\n sys.path.append(module_path)\n\n# Actual import code\nimport numpy as np\nfrom matplotlib import pyplot as plt\n%matplotlib inline\nfrom vortexpanel import VortexPanel as vp\n```\n\nSince you need to build your own shapes for the project, lets `inspect` how `make_circle` works\n\n\n```python\nfrom inspect import getsource\nprint(getsource(vp.make_circle))\n```\n\nMost of that is a comment block explaining the inputs and giving an example of using the function. Only the last three lines are code. An array of `theta` values are defined and then converted into `x,y` positions. The last line converts those points into an array of vortex panels using a function called `panelize`. \n\nLets use the two lines in the example to make a circle of panels and plot it:\n\n\n```python\ncircle = vp.make_circle(N=32) # make\ncircle.plot(style='o-') # plot\nplt.axis('equal'); # show its a circle\n```\n\nPlay around with the `style` argument to see the points and panels that make up the shape. 
\n\nNote that in addition to holding the array of `panels`, the `PanelArray` object also has some really useful functions like `PanelArray.plot()`.\n\nThe `PanelArray` class (and `Panel` within it) is written in an [Object Oriented Programming](https://realpython.com/python3-object-oriented-programming/#what-is-object-oriented-programming-oop) style.\n\n##### Coding fundamental: Objects\n##### Object-Oriented Programming helps organize data and functions\n\nLets see what the help says\n\n\n```python\nhelp(vp.PanelArray)\n```\n\nFrom the help, we can see that `PanelArray` contains the array of panels as well as an angle off attack. `Panel` also contains functions (the `methods`) which can be applied to it`self` such as plotting the panel and determining the panel's induced velocity. This avoids needing to pass the data about the geometry to the function - it's already all built-in!\n\n---\nNow that we have an example, what is the velocity induced by these panels and the uniform flow? \n\nUsing superposition, the total velocity at any point $x,y$ is simply\n\n\\begin{equation}\n\\vec u(x,y) = \\vec U+\\sum_{j=0}^{N-1} \\vec u_j(x,y)\n\\end{equation}\n\nwhere we use the index $j$ to label each of the $N$ panels. This is easily implemented for an array of panels:\n\n\n```python\nprint(getsource(vp.PanelArray.velocity))\n```\n\nwhere the `for-in` notation loops through every panel in the array.\n\n---\nThe function `PanelArray.plot_flow(size=2)` uses this superposition code to visualize the flow:\n\n\n```python\ncircle.plot_flow()\n```\n\n##### Quiz\n\nWhy is the flow going through the body above?\n\n1. We haven't applied the flow tangency condition\n1. We haven't applied the no-slip condition\n1. We haven't determined the correct `gamma` for each panel\n\n## System of linear equations\n\nOne key to the solution method is that the velocity $\\vec u_j$ induced by vortex panel $j$ depends __linearly__ on $\\gamma_j$. So lets write $\\vec u_j$ in a way that makes the linearity explicit:\n\n$$ \\vec u_j(x,y)=\\gamma_j\\ \\vec f_j(x,y)$$\n\nwhere $\\vec f_j$ is the part of the velocity function which depends only on the panel geometry.\n\nSubstituting this linear relation for the velocity $\\vec u_j$ into the total velocity equation and applying the no-slip boundary condition we have:\n\n$$ u_s = \\left[\\vec U + \\sum_{j=0}^{N-1} \\gamma_j \\ \\vec f_j(x,y)\\right]\\cdot\\hat s=0 $$\n\nSo the goal is to set $\\gamma$ on each panel such that this condition is enforced on the body.\n\n##### Quiz\n\nHow many unknowns are there?\n\n1. $1$\n1. $N$\n1. $N^2$\n\nBut we only have one equation, the no-slip condition... right?\n\n##### Numerical fundamental: Consistency\n##### Develop enough equations to match the unknowns\n\nWe need to have as many equations as we have unknowns to be consistent and to be able to determine a solution.\n\nLuckily the no-slip condition is a continuous equation - it applies to *every* point on the body. Therefore, we can evaluate the boundary equation **on each panel**. 
Then we will have a consistent linear system.\n\n---\n\nLet's start with one panel, say panel $i$, and set the total tangential velocity at the center of the panel to zero:\n\n$$ \\frac 12 \\gamma_i + \\vec U\\cdot\\hat s_i + \\sum_{j=0, j\\ne i}^{N-1} \\gamma_j \\ \\vec f_j(x_i,y_i)\\cdot\\hat s_i= 0 $$\n\nNote that we've used the simple relation for the velocity a panel induces on itself\n\n$$ \\vec u_i(x_i,y_i) \\cdot \\hat s_i = \\frac 12 \\gamma_i $$\n\nfor the tangential velocity that the panel induces on itself.\n\nNext, let's write the summation as an inner product of two arrays in order to separate out the knowns from the unknowns:\n\n$$\n\\begin{bmatrix} \\vec f_0(x_i,y_i)\\cdot\\hat s_i & \\vec f_1(x_i,y_i)\\cdot\\hat s_i & \\cdots & \\frac 12 & \\cdots & \\vec f_{N-1}(x_i,y_i)\\cdot\\hat s_i\\end{bmatrix} \\times \\begin{bmatrix} \\gamma_0 \\\\ \\gamma_1 \\\\ \\vdots \\\\ \\gamma_i \\\\ \\vdots \\\\ \\gamma_{N-1} \\end{bmatrix} + \\vec U \\cdot \\hat s_i = 0\n$$\n\nWritten like this, we can see two things:\n\n - The no-slip condition on panel $i$ depends on the strength at every panel, and\n - This is just the $i$th row of a matrix of equations\n\n\\begin{equation}\n\\begin{bmatrix} \n\\frac 12 & \\vec f_1(x_0,y_0)\\cdot\\hat s_0 & \\cdots & \\vec f_{N-1}(x_0,y_0)\\cdot\\hat s_0\\\\[0.5em]\n\\vec f_0(x_1,y_1)\\cdot\\hat s_1 & \\frac 12 & \\cdots & \\vec f_{N-1}(x_1,y_1)\\cdot\\hat s_1 \\\\[0.5em]\n\\vdots & \\vdots & \\ddots & \\vdots \\\\[0.5em] \n\\vec f_0(x_{N-1},y_{N-1})\\cdot\\hat s_{N-1} & \\vec f_1(x_{N-1},y_{N-1})\\cdot\\hat s_{N-1} & \\cdots & \\frac 12\n\\end{bmatrix} \n\\times \\begin{bmatrix} \\gamma_0 \\\\[0.9em] \\gamma_1 \\\\[0.9em] \\vdots \\\\[0.9em] \\gamma_{N-1} \\end{bmatrix} \n= -\\begin{bmatrix} \\vec U\\cdot\\hat s_0 \\\\[0.7em] \\vec U\\cdot\\hat s_1 \\\\[0.7em] \\vdots \\\\[0.7em] \\vec U\\cdot\\hat s_{N-1} \\end{bmatrix}\n\\end{equation}\n\nwhich we can summarize as: \n\n$$ A \\gamma = b$$\n\nthis defines the complete linear system.\n\n---\n\n## Summary\n\nLets review the key concepts:\n\n1. The no-slip vortex sheet equation implicitly defines the analytic $\\gamma(s)$, but is highly nonlinear.\n1. Breaking the sheet up into a set of panels ensures the velocity is a linear function of $\\gamma_i$. \n1. Applying the no-slip condition to each panel results in a linear system of equations for $\\gamma_i$. \n\nTherefore, the solution $\\gamma_i$ of $A\\gamma=b$ is a numerical approximation to the analytic solution $\\gamma(s)$. \n\n*This same basic procedure is applied whenever numerically approximating partial differential equations!*\n\n---\n\n## Implementation\n\nWe can construct `A` and `b` in only a few lines of code:\n```python\n # construct the matrix `A`\n for j, p_j in enumerate(panels): # loop through columns\n fx,fy = p_j.constant(xc,yc) # f_j at all panel centers\n A[:,j] = fx*sx+fy*sy # tangential component\n np.fill_diagonal(A, 0.5) # fill diagonal with 1/2\n\n # construct the vector `b`\n b = -(np.cos(alpha)*sx+np.sin(alpha)*sy)\n```\n\nThat seems like the easy part. After all, this is a dense linear algebra problem. How in the world will we determine `gamma`? \n\nIn fact, this is trivial:\n```python\n gamma = np.linalg.solve(A, b)\n```\n##### Numerical fundamental: Linear Algebra Packages\n##### Never write your own matrix solver\n\nEvery worthwhile numerical language has a set of linear algebra solution routines like the [linalg package](http://docs.scipy.org/doc/numpy/reference/routines.linalg.html). 
Use them!\n\n---\n\nThe function `PanelArray.solve_gamma(alpha=0)` combines the construction and solution code above. After defining __any__ `PanelArray` and an angle of attack `alpha`, this function sets $\\gamma_i$ on each panel such that the no-slip condition is enforced.\n\nLet's put all the `VortexPanel` functions together and test them out!\n\n\n```python\ncircle = vp.make_circle(N=10) #1) define geometry\ncircle.solve_gamma() #2) solve for gamma\ncircle.plot_flow() #3) compute flow field and plot\n```\n\nMuch better! The flow is generally going around the shape. And the flow is (pretty much) stationary within the body. But the external flow doesn't look __exactly__ like the potential flow solution...\n\n##### Numerical Fundamental: Convergence with resolution\n##### The numerical solution is improved by using more panels\n\n---\n\n## Quantitative testing\n\nLet's make this explicitly clear by plotting the distance around the shape $s$ versus $\\gamma$ for a number of resolutions.\n\nThe function `PanelArray.get_array` lets you get any attribute from the panels. What attributes are there?\n\n\n```python\nhelp(vp.Panel)\n```\n\nSo each panel knows all about its geometry (size, location, orrientation) as well as the strength $\\gamma$.\n\nLet's try getting this information. Say, the half-width of each panel:\n\n\n```python\nprint(circle.get_array('S'))\n```\n\nThe `distance` function uses this to get the cumulative distance.\n\n\n```python\nprint(getsource(circle.distance))\n```\n\nThe two functions `get_array` and `distance` are all we need to make a quantitative plot of our results:\n\n\n```python\n# Exact solution\ns = np.linspace(0,2*np.pi)\nplt.plot(s,2*np.sin(s),'k',label='exact')\n\n# Loop over resolutions\nfor N in 2**np.arange(6,2,-1): # N in powers of 2\n circle = vp.make_circle(N) # define geometry\n s = circle.distance() # get distance array\n\n circle.solve_gamma() # solve for gamma\n gamma = circle.get_array('gamma') # get gamma array\n plt.plot(s,gamma,'--',label=N) # plot\n\n# finish gamma(s) plot\nplt.legend(title='N')\nplt.xlabel(r'$s$', fontsize=20)\nplt.ylabel(r'$\\gamma$', fontsize=20, rotation=0)\nplt.show()\n```\n\nAll of the results for $N>32$ panels look pretty much identical, meaning our results __converge__ as we increase the number of panels. But we can't judge if the converged solution is __correct__ unless we compare to the known result.\n\n##### Numerical fundamental: Validation\n##### A method is validated by comparing the result to a known solution\n\nThat's easy in this case. The exact solution from analytic potential flow around a circle is\n\n$$\\tilde\\gamma = -\\tilde u_e = 2\\sin\\theta $$\n\nwhich is also in the plot and matches the converged solution perfectly.\n\n## Other shapes\n\nNotice that there was nothing special to the circle in this set-up. By giving a different set of points to `panelize`, we can make any shape we need.\n\n##### Quiz\n\nThis vortex panel method can be used to solve for the flow around:\n\n1. an ellipse\n1. a pair of tandem bodies\n1. 
a rudder\n\n---\n##### Your turn\n\n - ** Modify ** the `make_ellipse` function below to generate an ellipse instead of a circle when supplied with an aspect ratio `t_c`=$t/c$.\n - ** Discuss ** whether the maximum speed around the ellipse is greater or less than that around the circle.\n - ** Move ** a 2:1 ellipse geometry by giving it a different center\n - ** Combine ** the circle and ellipse geometry together into one set of panels using `vp.concatenate` and solve for the flow.\n - ** Discuss ** if there is a *wake* between the bodies. Why or why not?\n\n---\n\n \n##### Solution\n\n\n```python\ndef make_ellipse(N, t_c, xcen=0, ycen=0):\n theta = np.linspace(0, -2*np.pi, N+1)\n # your code here to define the points for an ellipse\n x = xcen+np.cos(theta) # adjust?\n y = ycen+np.sin(theta) # adjust?\n return vp.panelize(x,y)\n\n# ellipse = make_ellipse(N=32,t_c=2,xcen=2) # make the shape\n# ellipse.solve_gamma() # solve for gamma\n# ellipse.plot_flow(size=4) # compute flow field and plot\n```\n\n\n```python\n# pair = ? your code using vp.concatenate(a1,a2)\n```\n", "meta": {"hexsha": "d174fb6021cb18124b48ed698bc1fa9dc6d8d29c", "size": 20640, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "pre2021/notebooks/3_1_SolutionMethod.ipynb", "max_stars_repo_name": "Maselko/MarineHydro-Tesing", "max_stars_repo_head_hexsha": "7620a492b629ffad59b30ab300cfb4032f125be5", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 17, "max_stars_repo_stars_event_min_datetime": "2015-03-13T18:42:18.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-30T11:30:11.000Z", "max_issues_repo_path": "pre2021/notebooks/3_1_SolutionMethod.ipynb", "max_issues_repo_name": "Maselko/MarineHydro-Tesing", "max_issues_repo_head_hexsha": "7620a492b629ffad59b30ab300cfb4032f125be5", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 5, "max_issues_repo_issues_event_min_datetime": "2020-05-06T12:57:35.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-15T16:28:55.000Z", "max_forks_repo_path": "pre2021/notebooks/3_1_SolutionMethod.ipynb", "max_forks_repo_name": "Maselko/MarineHydro-Tesing", "max_forks_repo_head_hexsha": "7620a492b629ffad59b30ab300cfb4032f125be5", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 35, "max_forks_repo_forks_event_min_datetime": "2015-04-10T11:40:13.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-22T12:42:03.000Z", "avg_line_length": 34.6890756303, "max_line_length": 365, "alphanum_fraction": 0.581879845, "converted": true, "num_tokens": 3650, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9111797148356995, "lm_q2_score": 0.9149009573133051, "lm_q1q2_score": 0.8336391933876458}} {"text": "### Smooth transition between analytic functions\n\nThe idea is to create a smooth transition in a place where a function is discontinuous.\n\nThis way to create smoothness is useful because it doesn't have to evaluate the original piecewise functions outside their original domains.\n\nThe original discontinuity usually happens because a function is already defined like this:\n\n$$\n f(x) = \\left\\{\n \\begin{array}{r}\n f_{left}(x) &: x < x_{threshold}\\\\\n f_{right}(x) &: x \\geq x_{threshold}\\\\\n \\end{array}\n \\right.\n$$\n\nFor example:\n\n\n```python\nimport sympy\nfrom sympy import Piecewise\nimport numpy as np\n\n\n# For example:\nx_ = sympy.symbols('x', real=True)\n\nf_left_ = x_**1.2\nf_right_ = 10.0 / x_**0.2\n\nx_threshold = 10.0 ** (5 / 7)\n\nf_ = Piecewise(\n (f_left_, x_ < x_threshold),\n (f_right_, True)\n)\n\nf = sympy.lambdify(x_, f_)\n```\n\n\n```python\nimport seaborn\nimport matplotlib.pyplot as plt\nfrom ipywidgets import interact, widgets\n\nx = np.linspace(0.001, 20, 1000)\ny = f(x)\n\nplt.plot(x, y)\nplt.show()\n```\n\nWhat we want is $C^1$ continuity, i.e., first-derivatives are continuous, without needing values of `f_left` and `f_right` outside where they are defined.\n\nTo accomplish this, we'll join the second derivatives of both functions by a line, around a region $[x_0, x_1]$ that contains the threshold:\n\n\n```python\nd_dx_f_left_ = sympy.diff(f_left_, x_, 1)\nd_dx_f_right_ = sympy.diff(f_right_, x_, 1)\n\nd_dx_f_ = Piecewise(\n (d_dx_f_left_, x_ < x_threshold),\n (d_dx_f_right_, True)\n)\n\nd_dx_f = sympy.lambdify(x_, d_dx_f_)\n\n\nd_dx_f_left = sympy.lambdify(x_, d_dx_f_left_)\nd_dx_f_right = sympy.lambdify(x_, d_dx_f_right_)\n\nplt.plot(x, d_dx_f(x))\nplt.xlim((x_threshold * 0.8, x_threshold * 1.2))\nplt.ylim((-0.4, 1.8))\nplt.show()\n```\n\nThe derivative is indeed very discontinuous! 
Let's fix this!\n\nThey do not need to be symmetric around the threshold\n\n\n```python\nx_0 = x_threshold * 0.8\nx_1 = x_threshold * 1.2\n\nd_dx_at_x_0 = d_dx_f_left_.evalf(subs={x_: x_0})\nd_dx_at_x_1 = d_dx_f_right_.evalf(subs={x_: x_1})\n\nd_dx_f_center_ = d_dx_at_x_0 + ((x_ - x_0) / (x_1 - x_0)) * (d_dx_at_x_1 - d_dx_at_x_0)\n\nd_dx_f_smooth_ = Piecewise(\n (d_dx_f_left_, x_ < x_0),\n (d_dx_f_center_, x_ < x_1),\n (d_dx_f_right_, True)\n)\n\nd_dx_f_smooth = sympy.lambdify(x_, d_dx_f_smooth_)\n\n```\n\n\n```python\nplt.plot(x, d_dx_f_smooth(x))\nplt.xlim((x_threshold * 0.6, x_threshold * 1.4))\nplt.ylim((-0.4, 1.8))\nplt.show()\n```\n\nSo now we just have to integrate `d_dx_f_center` and adjust it's integral constant to create a better piecewise `f_smooth`\n\n\n\n\n```python\nf_center_ = sympy.integrate(d_dx_f_center_, x_)\n\nf_left = sympy.lambdify(x_, f_left_)\nf_center = sympy.lambdify(x_, f_center_)\n\n# f_center(x0) == f_left(x0)\nf_center_ = f_center_ + (f_left(x_0) - f_center(x_0))\n\nf_smooth_ = Piecewise(\n (f_left_, x_ < x_0),\n (f_center_, x_ < x_1),\n (f_right_, True)\n)\n\nf_smooth = sympy.lambdify(x_, f_smooth_)\n```\n\n\n```python\nplt.plot(x, f_smooth(x))\nplt.show()\n```\n\nMuch better!\nNow let's generalize a way to create `f_center`:\n\n\n```python\nsympy.init_printing()\n\nx_0_, x_1_, f_0_ = sympy.symbols('x_0,x_1,f_{left}(x_0)', real=True)\n\n# df_0 is f_left'(x_0)\n# df_1 is f_right'(x_1)\n\ndf_0_, df_1_ = sympy.symbols('df_0,df_1', real=True)\n\na = (x_ - x_0_) / (x_1_ - x_0_)\n\n# d_dx_f_center_ = d_dx_at_x_0 + a * (d_dx_at_x_1 - d_dx_at_x_0)\nd_dx_f_center_ = (1 - a) * df_0_ + a * df_1_\nf_center_ = sympy.integrate(d_dx_f_center_, x_)\n\n# f_center_ = f_center_ + f_0_ - f_center_.subs(x_, x_0_)\n\nf_center_\n```\n\n\n```python\n# Simplifying f_center to be an equation like x * (a*x + b) + c:\n\ndef get_f_center_coefficients(x0, x1, f_left, df_left, df_right):\n dx = x1 - x0\n df0 = df_left(x0)\n df1 = df_right(x1)\n a = 0.5 * (df1 - df0) / dx\n b = (df0*x1 - df1*x0) / dx\n c = f_left(x0) - (x0 * (a*x0 + b))\n return a, b, c\n\nx_0 = x_threshold * 0.8\nx_1 = x_threshold * 1.2\n\n# So, for example:\na, b, c = get_f_center_coefficients(\n x_0,\n x_1,\n f_left=lambda x: x**1.2,\n df_left=lambda x: 1.2 * x**0.2,\n df_right=lambda x: -2.0*x**-1.2,\n)\n\nprint (a, b, c)\n```\n\n -0.4387286573389664 5.230431342173345 -8.63388046349694\n\n\n\n```python\ndef f_smooth(x):\n return np.piecewise(x, [x < x_0, (x_0 <= x) & (x <= x_1), x > x_1], [\n lambda x: x**1.2,\n lambda x: x * (-0.4387286573389664 * x + 5.230431342173345) - 8.63388046349694,\n lambda x: 10.0 / x**0.2,\n ])\n\n```\n\n\n```python\nplt.plot(x, f_smooth(x))\nplt.show()\n```\n", "meta": {"hexsha": "cc6345fedfa2e59429f43dd59a986b2b69ea9f04", "size": 94230, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "smooth_transition_between_analytic_functions.ipynb", "max_stars_repo_name": "arthursoprano/notebooks", "max_stars_repo_head_hexsha": "f83c662fecd23c913c40b256d198190fbdcda45e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 19, "max_stars_repo_stars_event_min_datetime": "2017-05-09T16:25:16.000Z", "max_stars_repo_stars_event_max_datetime": "2020-06-30T23:22:38.000Z", "max_issues_repo_path": "smooth_transition_between_analytic_functions.ipynb", "max_issues_repo_name": "arthursoprano/notebooks", "max_issues_repo_head_hexsha": "f83c662fecd23c913c40b256d198190fbdcda45e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": 
"2019-01-25T13:50:39.000Z", "max_issues_repo_issues_event_max_datetime": "2019-01-25T14:30:39.000Z", "max_forks_repo_path": "smooth_transition_between_analytic_functions.ipynb", "max_forks_repo_name": "arthursoprano/notebooks", "max_forks_repo_head_hexsha": "f83c662fecd23c913c40b256d198190fbdcda45e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 7, "max_forks_repo_forks_event_min_datetime": "2017-06-21T12:30:32.000Z", "max_forks_repo_forks_event_max_datetime": "2019-01-06T23:48:04.000Z", "avg_line_length": 241.6153846154, "max_line_length": 18686, "alphanum_fraction": 0.9101347766, "converted": true, "num_tokens": 1617, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9362850110816423, "lm_q2_score": 0.8902942152065091, "lm_q1q2_score": 0.8335691291505484}} {"text": "# Lecture 14, Algebraic Modeling Languages\n\nAlgebraic Modeling Languages (AML) are high-level computer programming languages for describing and solving high complexity problems for large scale mathematical computation (i.e. large scale optimization type problems). Their syntax mimics the mathematical notation of optimization problems, which allows one to express optimization problems in a familiar, concise and readable way. \n\n**AMLs do not directly solve the problem, but they call appropriate external solvers to find the solution.**\n\nExamples of AMLs are\n* A Mathematical Programming Language (AMPL),\n* General Algebraic Modeling System (GAMS),\n* Optimization Programming Language (OPL),\n* Advanced Interactive Multidimensional Modeling System (AIMMS), and\n* Pyomo.\n\nIn addition to the ease of modelling, one of the advantages of AMLs is that you can model the problem once and then solve it with multiple solvers.\n\n## Pyomo\n\nOn this course, we use Pyomo as an example of AMLs. Pyomo is a Python-based, open-source optimization modeling language with a diverse set of optimization capabilities.\n\nPyomo may not be a completely typical AML, because Pyomo's modeling objects are embedded within a full-featured high-level programming language providing a rich set of supporting libraries, which distinguishes Pyomo from other AMLs.\n\nPyomo supports a wide range of problem types, including:\n* Linear programming\n* Quadratic programming\n* Nonlinear programming\n* Mixed-integer linear programming\n* Mixed-integer quadratic programming\n* Mixed-integer nonlinear programming\n* Stochastic programming\n* Generalized disjunctive programming\n* Differential algebraic equations\n* Bilevel programming\n* Mathematical programs with equilibrium constraints\n\n# Installing Pyomo\n\nThe easiest way to install Pyomo is to call\n```\npip install pyomo\n```\nwhen pip has been installed on your machine.\n\n## Example 1, linear optimization\n\nLet us start with a very simple linear problem\n$$\n\\begin{align}\n\\min &\\qquad 2x_1+3x_2\\\\\n\\text{s.t. }& \\qquad 2x_1+3x_2\\geq 1\\\\\n& \\qquad x_1,x_2\\geq 0.\n\\end{align}\n$$\n\n\n```python\nfrom pyomo.environ import *\n\n\nmodel = ConcreteModel()\n\nmodel.x = Var([1,2], domain=NonNegativeReals) #Non-negative variables x[1] and x[2]\n\nmodel.OBJ = Objective(expr = 2*model.x[1] + 3*model.x[2]) #Objective function\n\nmodel.Constraint1 = Constraint(expr = 3*model.x[1] + 4*model.x[2] >= 1) #Constraint\n\n```\n\nOnce we have defined the problem, we can solve it. Let us start by using glpk, which is an open source linear programming program.\n\nYou need to have glpk installed on your system. For details, see https://www.gnu.org/software/glpk/#TOCdownloading. 
For many Linux distributions, you can install glpk from the repositories by typing\n```\nsudo yum install glpk\n```\n```\nsudo apt-get install glpk,\n```\nor whatever your distribution needs.\n\n\n\n```python\nfrom pyomo.opt import SolverFactory #Import interfaces to solvers\nopt = SolverFactory(\"glpk\") #Use glpk\nres = opt.solve(model, tee=True) #Solve the problem and print the output\nprint \"Solution:\"\nprint \"=========\"\nmodel.x.display() #Print values of x\n```\n\nNow, if you have other linear solvers installed on your system, you can use them too. Let us use Cplex, which is a commercial solvers (academic license available).\n\n\n```python\nopt = SolverFactory(\"cplex\")\nres = opt.solve(model, tee=True)\nprint \"Solution:\"\nmodel.x.display()\n```\n\nWe can use also gurobi, which is another commercial solver with academic license.\n\n\n```python\nopt = SolverFactory(\"gurobi\")\nres = opt.solve(model, tee=True)\nprint \"Solution:\"\nmodel.x.display()\n```\n\n## Example 2, nonlinear optimization\n\nLet use define optimization problem\n$$\n\\begin{align}\n\\min &\\qquad c_d\\\\\n\\text{s.t. }& \\qquad c_{af}s_v - s_vc_a-k_1c_a=0\\\\\n&\\qquad s_vc_b+k_1c_a-k_2c_b=0\\\\\n&\\qquad s_vc_c+k_2c_b=0\\\\\n&\\qquad s_vc_d+k_3c_a^2=0,\\\\\n&\\qquad s_v,c_a,c_b,c_c,c_d\\geq0\n\\end{align}\n$$\nwhere $k_1=5/6$, $k_2=5/3$, $k_3=1/6000$, and $c_{af}=10000$.\n\n\n```python\nfrom pyomo.environ import *\n# create the concrete model\nmodel = ConcreteModel()\n# set the data \nk1 = 5.0/6.0 \nk2 = 5.0/3.0 \nk3 = 1.0/6000.0 \ncaf = 10000.0 \n# create the variables\nmodel.sv = Var(initialize = 1.0, within=PositiveReals)\nmodel.ca = Var(initialize = 5000.0, within=PositiveReals)\nmodel.cb = Var(initialize = 2000.0, within=PositiveReals)\nmodel.cc = Var(initialize = 2000.0, within=PositiveReals)\nmodel.cd = Var(initialize = 1000.0, within=PositiveReals)\n\n# create the objective\nmodel.obj = Objective(expr = model.cb, sense=maximize)\n# create the constraints\nmodel.ca_bal = Constraint(expr = (0 == model.sv * caf \\\n - model.sv * model.ca - k1 * model.ca \\\n - 2.0 * k3 * model.ca ** 2.0))\nmodel.cb_bal = Constraint(expr=(0 == -model.sv * model.cb \\\n + k1 * model.ca - k2 * model.cb))\nmodel.cc_bal = Constraint(expr=(0 == -model.sv * model.cc \\\n + k2 * model.cb))\nmodel.cd_bal = Constraint(expr=(0 == -model.sv * model.cd \\\n + k3 * model.ca ** 2.0))\n```\n\n## Solving with Ipopt\n\nInstall IPopt following http://www.coin-or.org/Ipopt/documentation/node10.html.\n\n\n```python\nopt = SolverFactory(\"ipopt\",solver_io=\"nl\")\n\nopt.solve(model,tee=True)\n\nprint \"Solution is \"\nmodel.sv.display()\nmodel.ca.display()\nmodel.cb.display()\nmodel.cc.display()\nmodel.cd.display()\n```\n\n# Example 3, Nonlinear multiobjective optimization\n\nLet us study optimization problem\n$$\n\\begin{align}\n\\min \\ & \\left(\\sum_{i=1}^{48}\\frac{\\sqrt{1+x_i^2}}{v_i},\\sum_{i=1}^{48}\\left(\\left(\\frac{x_iv_i}{\\sqrt{1+x_i^2}}+v_w\\right)^2+\\frac{v_i^2}{1+x_i^2}\\right)\\right., \\\\\n&\\qquad\\left.\\sum_{i=1}^{47}\\big|(x_{i+1}-x_i\\big|\\right)\\\\\n\\text{s.t. 
} & \\sum_{i=1}^{j}x_i\\leq -1\\text{ for all }j=10,11,12,13,14\\\\\n& \\left|\\sum_{i=1}^{j}x_i\\right|\\geq 2\\text{ for all }j=20,21,22,23,24\\\\\n& \\sum_{i=1}^{j}x_i\\geq 1\\text{ for all }j=30,31,32,33,34\\\\\n&\\sum_{i=1}^{48}\\frac{\\sqrt{1+x_i^2}}{v_i} \\leq 5\\\\\n&\\sum_{i=1}^{48}x_i=0\\\\\n&-10\\leq\\sum_{i=1}^{j}x_i\\leq10\\text{ for all }j=1,\\ldots,48\n&0\\leq v_i\\leq 25\\text{ for all }i=1,\\ldots,48\\\\\n&-10\\leq x_i\\leq 10\\text{ for all }i=1,\\ldots,48\\\\\n\\end{align}\n$$\n\n\n```python\n\nfrom pyomo.environ import *\n# create the concrete model9\ndef solve_ach(reference,lb,ub):\n model = ConcreteModel()\n\n vwind = 5.0\n min_speed = 0.01\n\n\n #f1, time used\n def f1(model):\n return sum([sqrt(1+model.y[i]**2)/model.v[i] for i in range(48)])\n #f2, wind drag, directly proportional to square of speed wrt. wind\n def f2(model):\n return sum([((model.y[i]*model.v[i])/sqrt(1+model.y[i]**2)+vwind)**2/\n +model.v[i]**2*((1+model.y[i])**2) for i in range(48)])\n #f3, maximal course changes\n def f3(model):\n return sum([abs(model.y[i+1]-model.y[i]) for i in range(47)])\n\n def h1_rule(model,i):\n return sum(model.y[j] for j in range(i))<=-1\n def h2_rule(model,i):\n return abs(sum(model.y[j] for j in range(i)))>=2\n def h3_rule(model,i):\n return sum(model.y[j] for j in range(i))>=1\n def h4_rule(model):\n return sum([sqrt(1+model.y[i]**2)/model.v[i] for i in range(48)])<=25\n def h5_rule(model):\n return sum(model.y[i] for i in range(48))==0\n\n def f_rule(model):\n return t\n\n def y_init(model,i):\n if i==0:\n return -1\n if i==18:\n return -1\n if i==24:\n return 1\n if i==25:\n return 1\n if i==26:\n return 1\n if i==34:\n return -1\n return 0\n model.y = Var(range(48),bounds = (-10,10),initialize=y_init)\n model.v = Var(range(48),domain=NonNegativeReals,bounds=(min_speed,25),initialize=25)\n model.t = Var()\n model.h1=Constraint(range(9,14),rule=h1_rule)\n model.h2=Constraint(range(19,24),rule=h2_rule)\n model.h3=Constraint(range(29,34),rule=h3_rule)\n model.h4=Constraint(rule=h4_rule)\n model.h5=Constraint(rule=h5_rule)\n \n def h6_rule(model,i):\n return -10<=sum([model.y[j] for j in range(i)])<=10\n \n model.h6 = Constraint(range(1,48),rule=h6_rule)\n def t_con_f1_rule(model):\n return model.t>=(f1(model)-reference[0]-lb[0])/(ub[0]-lb[0])\n model.t_con_f1 = Constraint(rule = t_con_f1_rule)\n def t_con_f2_rule(model):\n return model.t>=(f2(model)-reference[1]-lb[1])/(ub[1]-lb[1])\n model.t_con_f2 = Constraint(rule = t_con_f2_rule)\n def t_con_f3_rule(model):\n return model.t>=(f3(model)-reference[2]-lb[2])/(ub[2]-lb[2])\n model.t_con_f3 = Constraint(rule = t_con_f3_rule)\n model.f = Objective(expr = model.t+1e-10*(f1(model)+f2(model)+f3(model)))\n tee =False\n opt = SolverFactory(\"ipopt\",solver_io=\"nl\")\n opt.options.max_iter=100000\n #opt.options.constr_viol_tol=0.01\n #opt.options.halt_on_ampl_error = \"yes\"\n\n opt.solve(model,tee=tee)\n return [[value(f1(model)),value(f2(model)),value(f3(model))],[model.y,model.v]]\n\n```\n\n\n```python\nlb_ = [0,0,0]\nub_ = [1,1,1]\nvalues =[]\nfor i in range(3):\n reference = [1e10,1e10,1e10]\n reference[i]=0\n values.append(solve_ach(reference,ub_,lb_)[0])\nprint values\n```\n\n WARNING - Loading a SolverResults object with a warning status into model=unknown; message from solver=Ipopt 3.12\\x3a Maximum Number of Iterations Exceeded.\n WARNING - Loading a SolverResults object with a warning status into model=unknown; message from solver=Ipopt 3.12\\x3a Maximum Number of Iterations Exceeded.\n [[25133.93493837363, 
759264160.1226324, 860.0], [25.000000247539568, 11188.12151236145, 147.55358533550918], [12673.93579526275, 289156195.6539606, 151.2406436251979]]\n\n\n\n```python\nlb = [0,0,0]\nub = [1,1,1]\nfor i in range(3):\n lb[i] = min([values[j][i] for j in range(3)])\n ub[i] = max([values[j][i] for j in range(3)])\nprint lb\nprint ub\n```\n\n [25.000000247539568, 11188.12151236145, 147.55358533550918]\n [25133.93493837363, 759264160.1226324, 860.0]\n\n\n\n```python\n[f,x] = solve_ach([(a+b)/2 for (a,b) in zip(lb,ub)],lb,ub) #Compromise solution\n#[f,x] = solve_ach([1e10,1e10,0],lb,ub) #Minimize the third objective\n\n```\n\n\n```python\nimport matplotlib.pyplot as plt\nfrom matplotlib.patches import Rectangle\nplt.plot([sum(value(x[0][j]) for j in range(i)) for i in range(49)])\ncurrentAxis = plt.gca()\ncurrentAxis.add_patch(Rectangle((10, -1),4,10))\ncurrentAxis.add_patch(Rectangle((20, -2),4,4))\ncurrentAxis.add_patch(Rectangle((30, -10),4,11))\nplt.show()\n```\n\n## Black-box optimization problem\n\n\n```python\nimport sys\nfrom pyomo.opt.blackbox import RealOptProblem\n\nclass RealProblem1(RealOptProblem):\n\n def __init__(self):\n RealOptProblem.__init__(self)\n self.lower=[0.0, -1.0, 1.0, None]\n self.upper=[None, 0.0, 2.0, -1.0]\n self.nvars=4\n\n def function_value(self, point):\n self.validate(point)\n return point.vars[0] - point.vars[1] + (point.vars[2]-1.5)**2 + (point.vars[3]+2)**4\n\nproblem = RealProblem1()\n\n```\n", "meta": {"hexsha": "9083d5861c33e8c53bf6d70bbb2937cf5fc14a5d", "size": 17511, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Lecture 14, Algebraic Modeling Languages, especially Pyomo.ipynb", "max_stars_repo_name": "AwelEshetu/edxCourses", "max_stars_repo_head_hexsha": "2fb072d8e45d203080d40ca383de14257a1fcbd2", "max_stars_repo_licenses": ["CC-BY-3.0"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2016-07-18T12:49:18.000Z", "max_stars_repo_stars_event_max_datetime": "2017-02-21T12:58:16.000Z", "max_issues_repo_path": "Lecture 14, Algebraic Modeling Languages, especially Pyomo.ipynb", "max_issues_repo_name": "AwelEshetu/edxCourses", "max_issues_repo_head_hexsha": "2fb072d8e45d203080d40ca383de14257a1fcbd2", "max_issues_repo_licenses": ["CC-BY-3.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Lecture 14, Algebraic Modeling Languages, especially Pyomo.ipynb", "max_forks_repo_name": "AwelEshetu/edxCourses", "max_forks_repo_head_hexsha": "2fb072d8e45d203080d40ca383de14257a1fcbd2", "max_forks_repo_licenses": ["CC-BY-3.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 29.6796610169, "max_line_length": 394, "alphanum_fraction": 0.5413739935, "converted": true, "num_tokens": 3411, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9481545348152283, "lm_q2_score": 0.8791467627598857, "lm_q1q2_score": 0.8335669898789132}} {"text": "# Item III\n\n*Solving a very (un)known problem*\n\n1. Implement a function that finds the two roots of the quadratic equation $a x^2 + b x + c = 0$ given $a$,$b$, and $c$, i.e. implement $x_{\\pm} = \\frac{-b \\pm \\sqrt{b^2-4ac}}{2a}$.\n2. What are the roots of $2x^2+10^9+1 = 0$? How many digits of significance can you get for the two roots? Is there any problem?\n3. Design a code that finds the correct roots for $x^2+Bx+C=0$, given $B \\gg C$ to at least 2 digits of significance.\n4. 
Solve the previous equation using this new code and find the new roots. *I hope it works!*\n5. From the well-known solution $x_{\\pm}$ of the quadratic equation design an algorithm that approximates the two roots of $x^2+Bx+C = 0$, given $B \\gg C$. Hint: *A Taylor expansion may work*.\n---\n\n## Part 1\n\n\n```python\ndef solve_quadratic(a,b,c):\n assert(a!=0)\n disc = (b**2-4*a*c+0j)**0.5\n x1 = (-b-disc)/(2*a)\n x2 = (-b+disc)/(2*a)\n return (x1,x2)\n```\n\n## Part 2\n\nThe analytical solution is:\n$$\nx_{\\pm} = \\frac{-10^{9}\\pm\\sqrt{10^{18}-8}}{4}\n$$\n\n\n```python\n# if we use the solver\nx1,x2 = solve_quadratic(2,1e9,1)\nprint(\"x-:\",x1)\nprint(\"x+:\",x2)\n```\n\n x-: (-500000000+0j)\n x+: 0j\n\n\nThe approximation given for $x_{-}$ is good since the relative error is very little (the approximation $\\sqrt{10^{18}-8} \\approx 10^{9}$ that the computer does because of the *absorption* of the much smaller $-8$ doesn't affect the relative error too much).\n\n\nFor $x_{+}$ however, the error is equal to the value of the root, since \n\n## Part 3\n\nWe make the following change to the equation:\n\\begin{align}\nx_{\\pm} = \\frac{-b \\pm \\sqrt{b^2-4ac}}{2a} &= \\frac{-b \\pm \\sqrt{b^2-4ac}}{2a} \\cdot \\frac{-b \\mp \\sqrt{b^2-4ac}}{-b \\mp \\sqrt{b^2-4ac}}\n\\\\ &= \\frac{4ac}{2a \\left(-b \\mp \\sqrt{b^2-4ac}\\right)}\n\\\\ &= \\frac{-2c}{\\left(b \\pm \\sqrt{b^2-4ac}\\right)}\n\\end{align}\nwe can use it for $x_{+}$ if $b>0$ or $x_{-}$ otherwise.\n\n\n```python\ndef solve_quadratic_2(a,b,c):\n assert(a!=0)\n disc = (b**2-4*a*c+0j)**0.5\n if b>0:\n x1 = (-b-disc)/(2*a)\n x2 = -2*c/(b+disc)\n else:\n x1 = -2*c/(b-disc)\n x2 = (-b+disc)/(2*a)\n return (x1,x2)\n```\n\n## Part 4\n\nWe solve using the new method and see that it works.\n\n\n```python\nx1,x2 = solve_quadratic_2(2,1e9,1)\nprint(\"x-:\",x1)\nprint(\"x+:\",x2)\n```\n\n x-: (-500000000+0j)\n x+: (-1e-09+0j)\n\n\n## Part 5\n\nIf $C$ is small, we can approximate the solution:\n$$\nx = x_0 + C x_1 + C^2 x_2 + ...\n$$\n\nThen\n$$\n\\begin{align}\nx^2 + B x + C &= 0\n\\\\ (x_0^2 + C (2x_0x_1) + C^2 (x_1^2 + 2x_0x_2) + \\dots) + (Bx_0 + C B x_1 + C^2 B x_2 + \\dots) + C &= 0\n\\end{align}\n$$\nAnd we have the following equations:\n\\begin{align}\nO(C^0) : &\\qquad x_0^2 + B x_0 = 0 \n\\\\ O(C^1) : &\\qquad 2x_0x_1 + B x_1 +1 = 0 \n\\\\ O(C^2) : &\\qquad x_1^2 + 2 x_0x_2 + B x_2 = 0 \n\\end{align}\nwhich have the following solutions:\n$$\n(x_0,x_1,x_2)_1 = \\left(0,\\frac{-1}{B},\\frac{-1}{B^3}\\right)\n$$\n$$\n(x_0,x_1,x_2)_2 = \\left(-B,\\frac{1}{B},\\frac{1}{B^3}\\right)\n$$\nwhich result in the following approximations for $x$:\n$$\n\\begin{align}\nx_{1} &= 0 + C \\frac{-1}{B} + C^2 \\frac{-1}{B^3} + \\cdots\n\\\\ x_{2} &= -B + C \\frac{1}{B} + C^2 \\frac{1}{B^3} + \\cdots\n\\end{align}\n$$\n\n\n```python\ndef solve_quadratic_3(a,b,c):\n # In case a!=1 we just have to scale the equation:\n b /= a\n c /= a\n # Approximations:\n x1 = 0+c*(-1/b)+c**2*(-1/b**3)\n x2 = -b+c*(1/b)+c**2*(1/b**3)\n return (x1,x2)\n```\n\n\n```python\nx1,x2 = solve_quadratic_3(2,1e9,1)\nprint(\"x1:\",x1)\nprint(\"x2:\",x2)\n```\n\n x1: -1e-09\n x2: -500000000.0\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "7252c1c30da54c99ea864d11b3892cd24ec394e4", "size": 6539, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "t1_questions/item_03_alpha.ipynb", "max_stars_repo_name": "autopawn/cc5-works", "max_stars_repo_head_hexsha": "63775574c82da85ed0e750a4d6978a071096f6e7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, 
"max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "t1_questions/item_03_alpha.ipynb", "max_issues_repo_name": "autopawn/cc5-works", "max_issues_repo_head_hexsha": "63775574c82da85ed0e750a4d6978a071096f6e7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "t1_questions/item_03_alpha.ipynb", "max_forks_repo_name": "autopawn/cc5-works", "max_forks_repo_head_hexsha": "63775574c82da85ed0e750a4d6978a071096f6e7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 26.0517928287, "max_line_length": 268, "alphanum_fraction": 0.4699495336, "converted": true, "num_tokens": 1471, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9241418178895029, "lm_q2_score": 0.9019206771886166, "lm_q1q2_score": 0.8335026142092197}} {"text": "# integrate functions\n\n\n```python\nfrom scipy import integrate\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport sympy as sy\nsy.init_printing()\n```\n\n\n```python\na,b=-1,1\nx=np.linspace(a,b,17)\ny=np.exp(-x)\n```\n\n\n```python\nval_trapz=integrate.trapz(y,x)\nval_trapz\n```\n\n\n```python\nval_simps=integrate.simps(y,x)\nval_simps\n```\n\n\n```python\nval_true=-np.exp(-b)+np.exp(-a)\n```\n\n\n```python\nprint(val_true-val_trapz)\n```\n\n -0.00305962308718\n\n\n\n```python\nprint(val_true-val_simps)\n```\n\n -3.18201703609e-06\n\n\n\n```python\nx=np.linspace(a,b,1+2**4)\ny=np.exp(-x)\ndx=x[1]-x[0]\nval_romb=integrate.romb(y,dx=dx)\nval_romb\n```\n\n\n```python\nprint(val_true-val_romb)\n```\n\n -4.20898871312e-11\n\n\n\n```python\nf=lambda x:np.exp(-x**2) * (x**12-x**5)\nval,err=integrate.quad(f,0,np.inf)\nval\n```\n\n\n```python\nerr\n```\n\n# \u591a\u91cd\u7a4d\u5206\n$$\\int_{x=a}^b \\int_{y=g_1(x)}^{g_2(x)} f(x,y) dxdy$$\nwhere $g_1=-1$ and $g_2=0$\n\n\n```python\ndef f(x,y):\n return 4-x**2-y**2\na,b=0,1\ng1,g2=lambda x:-1,lambda x:0\nintegrate.dblquad(f,a,b,g1,g2)\n```\n\n\n```python\nval,err=integrate.dblquad(f,0,1,lambda x:x-1,lambda x:1-x)\nval,err\n```\n\n$$\\int_{a}^{b} \\int_{y=g_1(x)}^{g_2(x)} \\int_{z=h_1(x,y)}^{h_2(x,y)} f(x,y,z)dxdydz$$\n\n\n```python\nff=lambda x,y,z:(x+y+z)**2\na,b=-1,1\ng1,g2=lambda x:-1,lambda x:1\nh1,h2=lambda x,y:-1,lambda x,y:1\nvar,err=integrate.tplquad(ff,a,b,g1,g2,h1,h2)\nval\n```\n\n\n```python\nval,err=integrate.nquad(f,[(-1,1),(-1,1),(-1,1)])\nval\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "97665a627d0e3554795621db2bf0f3685b28f4d3", "size": 18663, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "scipyExercise/integrate functions.ipynb", "max_stars_repo_name": "terasakisatoshi/pythonCodes", "max_stars_repo_head_hexsha": "baee095ecee96f6b5ec6431267cdc6c40512a542", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "scipyExercise/integrate functions.ipynb", "max_issues_repo_name": "terasakisatoshi/pythonCodes", "max_issues_repo_head_hexsha": "baee095ecee96f6b5ec6431267cdc6c40512a542", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "scipyExercise/integrate 
functions.ipynb", "max_forks_repo_name": "terasakisatoshi/pythonCodes", "max_forks_repo_head_hexsha": "baee095ecee96f6b5ec6431267cdc6c40512a542", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 45.7426470588, "max_line_length": 2092, "alphanum_fraction": 0.764453732, "converted": true, "num_tokens": 587, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9241418116217418, "lm_q2_score": 0.9019206785067698, "lm_q1q2_score": 0.8335026097743569}} {"text": "# Solving equations and inequalities\n\n\n```\nfrom sympy import *\ninit_printing()\n```\n\n\n```\nvar('x y z a')\n```\n\n\n```\nsolve(x**2 - a, x)\n```\n\n\n```\nx = Symbol('x')\nsolve_univariate_inequality(x**2 > 4, x)\n```\n\n\n```\nsolve([x + 2*y + 1, x - 3*y - 2], x, y)\n```\n\n\n```\nsolve([x**2 + y**2 - 1, x**2 - y**2 - S(1) / 2], x, y)\n```\n\n\n```\nsolve([x + 2*y + 1, -x - 2*y - 1], x, y)\n```\n\n\n```\nvar('a b c d u v')\n```\n\n\n```\nM = Matrix([[a, b, u], [c, d, v]])\nM\n```\n\n\n```\nsolve_linear_system(M, x, y)\n```\n\n\n```\ndet(M[:2, :2])\n```\n", "meta": {"hexsha": "9d735ac1d8fed12a85163820701f6552c377b2fe", "size": 57693, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Chapter15/02_solvers.ipynb", "max_stars_repo_name": "PacktPublishing/IPython-Interactive-Computing-and-Visualization-Cookbook-Second-Edition", "max_stars_repo_head_hexsha": "7c41466113641abf7070f7bd14cc687c19c9c1ea", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 16, "max_stars_repo_stars_event_min_datetime": "2018-03-06T19:38:30.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-25T06:54:38.000Z", "max_issues_repo_path": "Chapter15/02_solvers.ipynb", "max_issues_repo_name": "PacktPublishing/IPython-Interactive-Computing-and-Visualization-Cookbook-Second-Edition", "max_issues_repo_head_hexsha": "7c41466113641abf7070f7bd14cc687c19c9c1ea", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Chapter15/02_solvers.ipynb", "max_forks_repo_name": "PacktPublishing/IPython-Interactive-Computing-and-Visualization-Cookbook-Second-Edition", "max_forks_repo_head_hexsha": "7c41466113641abf7070f7bd14cc687c19c9c1ea", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 17, "max_forks_repo_forks_event_min_datetime": "2018-02-19T16:11:16.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-25T06:54:40.000Z", "avg_line_length": 216.0786516854, "max_line_length": 13110, "alphanum_fraction": 0.9203022897, "converted": true, "num_tokens": 224, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9674102589923635, "lm_q2_score": 0.8615382147637196, "lm_q1q2_score": 0.8334609074763886}} {"text": "# Runge-Kutta methods and higher-order ODEs\n\nRunge-Kutta methods are a broad class of useful ODE solvers. In this notebook we look at a few of them, their convergence rates, and how to apply them to second-order equations\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# The below commands make the font and image size bigger\nplt.rcParams.update({'font.size': 22})\nplt.rcParams[\"figure.figsize\"] = (15,10)\n```\n\nWe will define an `ODESolve` function that can use different methods. 
First let's define the stepping functions for the Euler, RK2 and RK4 methods.\n\n\n```python\ndef EulerStep(f, dx, xi, yi):\n return yi + dx*f(xi, yi)\n```\n\n\n```python\ndef RK2Step(f, dx, xi, yi):\n k1 = dx*f(xi, yi)\n k2 = dx*f(xi + dx, yi + k1)\n \n return yi + 0.5*(k1 + k2)\n```\n\n\n```python\ndef RK4Step(f, dx, xi, yi):\n k1 = dx*f(xi,yi)\n k2 = dx*f(xi + 0.5*dx, yi + 0.5*k1)\n k3 = dx*f(xi + 0.5*dx, yi + 0.5*k2)\n k4 = dx*f(xi + dx, yi + k3)\n \n return yi + 1/6*(k1 + 2*k2 + 2*k3 + k4)\n```\n\nThe method in the below function can be set using the optional 6th argument.\n\n\n```python\ndef ODESolve(f, dx, x0, y0, imax, method='RK4', plotSteps=False):\n \n xi = x0\n yi = y0\n \n # Create arrays to store the steps in\n steps = np.zeros((imax+1,2))\n steps[0,0] = x0\n steps[0,1] = y0\n \n i = 0\n while i < imax:\n if(method == 'RK4'):\n yi = RK4Step(f, dx, xi, yi)\n elif(method == 'RK2'):\n yi = RK2Step(f, dx, xi, yi)\n elif(method == 'Euler'):\n yi = EulerStep(f, dx, xi, yi)\n \n xi += dx\n i += 1\n \n # Store the steps for plotting\n steps[i, 0] = xi\n steps[i, 1] = yi \n \n if(plotSteps):\n plt.scatter(steps[:,0], steps[:,1], color='red', linewidth='10')\n \n return [xi, yi]\n```\n\n\n```python\ndef dydx(x,y):\n return -2*x - y\n```\n\n\n```python\ndef yExact(x):\n return - 3*np.exp(-x) - 2*x + 2\n```\n\n\n```python\nx = np.linspace(0, 0.5, 100)\ny = yExact(x)\nplt.xlabel('x')\nplt.ylabel('y')\nplt.grid(True)\nplt.plot(x,y)\n\nODESolve(dydx, 0.1, 0, -1, 5, 'RK2', True)\n```\n\n## Convergence of the methods\nLet's look at the rate of convergence of the three methods: Euler, RK2 and RK4\n\n\n```python\nnmax = 12\n\ndiffEuler = np.zeros(nmax)\ndiffRK2 = np.zeros(nmax)\ndiffRK4 = np.zeros(nmax)\n\nn = 1\nwhile n < nmax:\n deltax = 0.1/2**n\n nsteps = 5*2**n\n resEuler = ODESolve(dydx, deltax, 0, -1, nsteps, 'Euler')\n resRK2 = ODESolve(dydx, deltax, 0, -1, nsteps, 'RK2')\n resRK4 = ODESolve(dydx, deltax, 0, -1, nsteps, 'RK4')\n \n diffEuler[n] = np.abs(resEuler[1] - yExact(0.5))\n diffRK2[n] = np.abs(resRK2[1] - yExact(0.5))\n diffRK4[n] = np.abs(resRK4[1] - yExact(0.5))\n n += 1\n \n# Plot the results\nplt.grid(True)\nplt.yscale('log')\nplt.xlabel('n')\nplt.ylabel('y_i - y(0.5)')\nplt.ylim([1e-24, 1])\n\n# Compute and plot reference curves for the convergence rate\nx = np.linspace(1, nmax, 12)\ndeltax = (0.1/2**x)\nfirstOrder = deltax**1\nsecondOrder = deltax**2\nforthOrder = deltax**4\n\nplt.plot(x, firstOrder)\nplt.plot(x, secondOrder)\nplt.plot(x, forthOrder)\n\nplt.scatter(np.arange(1,nmax+1), diffEuler, color='red', linewidth='10')\nplt.scatter(np.arange(1,nmax+1), diffRK2, color='orange', linewidth='10')\nplt.scatter(np.arange(1,nmax+1), diffRK4, color='green', linewidth='10')\n\nplt.legend(['First-order convergence reference', \n 'Second-order convergence reference', \n 'Forth-order convergence reference',\n 'Convergence of Euler method', \n 'Convergence of the RK2 method',\n 'Convergence of the RK4 method'\n ]);\n```\n\nThus we see that the RK4 method is rapidly convergent. It does require 4 evaluations of the right-hand side of the equations. The RK4 method has a good balance between between taking the least number of steps and achieving the highest accuracy.\n\n## Second-order ODEs\n\nAs we discussed in the lectures we can write any an $n^\\text{th}$-order ODE as a coupled system of $n$ first-order ODEs. Let's look at implementing it in practice. \n\nLet's consider a second-order ODE. 
We want to write this in the form:\n\n$$\\begin{align}\n y_0'(x) &= f_0(x, y_0, y_1)\\\\\n y_1'(x) &= f_1(x, y_0, y_1)\n\\end{align}$$\n\nWe thus want to write a function that when passed $x$ and an array $\\textbf{y} = [y_0, y_1]$ returns an array $\\textbf{f} = [f_0, f_1]$. Let's look at a specific example. Consider $y''(x) = -y$ with $y(0) = 1$. This has the analytic solution $y(x) = \\cos(x)$. Let's write this in first-order form. \n\nLet $y_0(x) = y(x)$ and $y_1 = y_0'(x)$. Then we have $y_1' = -y_0$. Thus\n\n$$\\begin{align}\n y_0'(x) &= y_1\\\\\n y_1'(x) &= -y_0\n\\end{align}$$\n\nso $\\textbf{f} = [y_1, -y_0]$. Let's define a Python function for this.\n\n\n```python\ndef f2(x, y):\n return np.array([y[1], -y[0]])\n```\n\n\n```python\ndef SecondOrderRK2(f, dx, x0, y0, imax):\n output = np.empty((imax, 3))\n i = 0\n xi = x0\n yi = y0\n while(i < imax):\n k1 = dx*f(xi, yi)\n k2 = dx*f(xi + dx, yi + k1)\n yi = yi + 0.5*(k1 + k2)\n xi += dx\n output[i, 0] = xi\n output[i, 1] = yi[0]\n output[i, 2] = yi[1]\n i += 1\n return output\n```\n\n\n```python\nres = SecondOrderRK2(f2, 0.1, 0, [1,0], 200);\n```\n\n\n```python\nx = np.linspace(0,20,400)\ny = np.cos(x)\n\nplt.grid(True)\nplt.scatter(res[:,0], res[:,1], color='r')\nplt.plot(x, y);\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "675b00bafa004380731531d923a2a0fe2d993f74", "size": 242675, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "OrdinaryDifferentialEquations/RungeKuttaMethods.ipynb", "max_stars_repo_name": "CianCoyle/ACM20030-Examples", "max_stars_repo_head_hexsha": "fb81abf24d066717900657c1de4f2c6f87806413", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "OrdinaryDifferentialEquations/RungeKuttaMethods.ipynb", "max_issues_repo_name": "CianCoyle/ACM20030-Examples", "max_issues_repo_head_hexsha": "fb81abf24d066717900657c1de4f2c6f87806413", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "OrdinaryDifferentialEquations/RungeKuttaMethods.ipynb", "max_forks_repo_name": "CianCoyle/ACM20030-Examples", "max_forks_repo_head_hexsha": "fb81abf24d066717900657c1de4f2c6f87806413", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 630.3246753247, "max_line_length": 114156, "alphanum_fraction": 0.9458741115, "converted": true, "num_tokens": 1878, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9184802417938533, "lm_q2_score": 0.9073122232403329, "lm_q1q2_score": 0.8333483501842996}} {"text": "# Discrete Fourier Transform in Python\n\nThis notebook is a quick refresher on how to perform FFT in python/scipy.\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n```\n\n\n```python\nfrom scipy.fftpack import fft\n```\n\nWe define:\n\n- $N$: number of samples\n- $f_s$: sampling frequency/rate in samples/second\n\n\n```python\nN = 1000\nf_s = 100\n```\n\nPeriod between samples $T_s$:\n\n\n```python\nT_s = 1/f_s\nprint(T_s, \"seconds\")\nprint(T_s*1000, \"ms\")\n```\n\n 0.01 seconds\n 10.0 ms\n\n\nCreate time vector, each element corresponds to a measurement\n\n\n```python\nt = np.linspace(0, T_s*N, N)\n```\n\nThe signal which we are sampling:\n\n\\begin{align}\ns(t) = 0.1 sin(2\\pi 5t) + sin(2\\pi 3t - 0.25\\pi)\n\\end{align}\n\n\n```python\nx_t = 0.1*np.sin(2*np.pi*5*t) + np.sin(2*np.pi*3*t-np.pi/4)\n```\n\n\n```python\nplt.figure(figsize=(15,5))\nplt.plot(t, x_t)\nplt.plot(t, x_t, \"k+\")\nplt.xlabel(\"time [s]\")\nplt.xlim([0, 2])\nplt.grid()\nplt.title(\"Visualizing samples\")\n```\n\nNote that we can describe the **period** of each sinus component in number of samples:\n\n- $0.1 sin(2\\pi 5t)$: **20** samples ($f=5Hz$ leads to $T=1/5Hz=200ms$ with $T_s = 10ms$, $T/T_s = 20$)\n- $sin(2\\pi 3t - 0.25\\pi)$ : **33** samples\n\n\nAlternatively we can express the frequency in the reciprocal:\n\n- $0.1 sin(2\\pi 5t)$: **1/20 = 0.05**\n- $sin(2\\pi 3t - 0.25\\pi)$ : **1/33 = 0.0303**\n\nAlternatively we can express the frequency relative to the number of samples $N=1000$:\n\n- $0.1 sin(2\\pi 5t)$: **1000/20 = 50**\n- $sin(2\\pi 3t - 0.25\\pi)$ : **1000/33 = 30.30**\n\nYou can think of the last representation as a reference of the highest $T_s$ (or lowest $f_s$) we can extract from FFT. I.e. the FFT method cannot extract frequency information lower than $\\frac{f_s}{2}$ (ignore the $\\frac{1}{2}$ for now).\n\n## FFT\n\nWe perform the FFT on the sample array, note that the time vector ${t}$ is not used in the `fft` call:\n\n\n```python\na_f = fft(x_t)\n```\n\n\n```python\na_f.dtype\n```\n\n\n\n\n dtype('complex128')\n\n\n\nFFT returns a symmetric shape with positive frequencies on the right side and negative on the left:\n\n\n```python\nplt.figure(figsize=(10,5))\nplt.plot(np.abs(a_f)) # we take abs in order to get the magnitude of a complex number\nplt.axvline(N//2, color=\"red\", label=\"left: positive frequencies | right: negative, from high to low\")\nplt.xlabel(\"index k\")\nplt.legend();\n```\n\nThe index $k$ represents a frequency component.\n\nBecause we are interested in positive frequencies for now we cut the returned array in half:\n\n\n```python\na_f_positive = a_f[:N//2]\n```\n\n\n```python\na_f_positive.shape\n```\n\n\n\n\n (500,)\n\n\n\nEach element in `a_f` represents the real and imaginary part (amplitude $A_i$ and phase $\\phi_i$) for a specific frequency $f_i$.\n\nThe \"frequency\" after the FFT is defined as $\\frac{N}{s_i}$ in the period of specific sinus component. The period $s_i$ is expressed in number of samples.\n\nI.e. a sinus component with a frequency of $5 Hz$ or period of $\\frac{1}{5Hz} = 0.2s$ is $\\frac{0.2s}{T_s} = \\frac{0.2s}{0.01s} = 20$ samples long. 
Thus its magnitude peak should appear at $\\frac{N}{s_i} = \\frac{1000}{20} = 50$.\n\n- $0.1 sin(2\\pi 5t)$: low peak (because of $0.1$) at $k=50$\n- $sin(2\\pi 3t - 0.25\\pi)$: greater peak at $k= 30.303 \\approx 30$\n\n\n```python\nplt.figure(figsize=(10,5))\nplt.plot(np.abs(a_f_positive))\nplt.xlim([0, 100])\nplt.xticks(range(0, 101, 10))\nplt.grid()\nplt.xlabel(\"frequency in $k = N/s_i$\")\n```\n\nIn order to relate the sample-frequencies (as $N/1$) into time domain we need to convert the $k$ into frequencies as $1/s$.\n\n\n\n\\begin{align}\nk = \\frac{N}{s_i} = \\frac{N}{T_i/T_s} = \\frac{N f_i}{1/T_s} = \\frac{N f_i}{f_s}\n\\end{align}\n\nOur translation formula from $k$ to frequency is the following\n\n\\begin{align}\n\\Rightarrow f_i =& f_s\\frac{k}{N}\n\\end{align}\n\n\n```python\nf_i = np.arange(0, N//2)*f_s/N\n```\n\n\n```python\nplt.figure(figsize=(10,5))\nplt.plot(f_i, np.abs(a_f_positive))\nplt.grid()\nplt.xlabel(\"frequency in $1/s$\")\nplt.xticks(range(0, f_s//2, 1));\nplt.xlim([0, 10]);\n```\n\nWe need to normalize the magnitude of the peaks by the factor of $\\frac{2}{N}$:\n\n\n```python\nplt.figure(figsize=(10,5))\nplt.plot(f_i, 2/N*np.abs(a_f_positive))\nplt.grid()\nplt.xlabel(\"frequency in $1/s$ (Hz)\")\nplt.ylabel(\"amplitude [1]\")\nplt.xticks(range(0, f_s//2, 1));\nplt.xlim([0, 10]);\nplt.ylim([-0.2, 1.2]);\nplt.title(\"Final DFT result.\")\nplt.text(3, 1.02, \"$sin(2\\pi 3t - 0.25\\pi)$\", fontdict={\"size\": 15})\nplt.text(5, 0.12, \"$0.1 sin(2\\pi 5t)$\", fontdict={\"size\": 15});\n```\n\nAs you can see we found both sinus components.\n\n## Phase\n\nWe could find the magnitudes and the frequencies of both signals but not the $45^\\circ$ phase of the slower $3Hz$ signal.\n\nIn the previous section we saw that the result of the FFT algorithm is a complex array. 
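\n\nEach complex coefficient can be written as $a_k = |a_k|e^{i\\varphi_k}$, so it carries an amplitude *and* a phase. As a quick sketch (reusing the arrays defined above), both quantities can be read off a single coefficient, e.g. the $3Hz$ component at $k=30$:\n\n\n```python\nk = 30  # index of the 3 Hz component (k = N*f/f_s = 1000*3/100)\namplitude_k = 2/N*np.abs(a_f_positive[k])      # expected to be close to 1\nphase_k = np.angle(a_f_positive[k], deg=True)  # expected to be close to -135 degrees\nprint(amplitude_k, phase_k)\n```\n\n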
Let's plot the real and imaginary parts relative to frequency.\n\n\n```python\nplt.figure(figsize=(15, 5))\nplt.subplot(2, 1, 1)\nplt.title(\"real\")\nplt.plot(f_i, 2/N*np.real(a_f_positive))\nplt.grid()\nplt.xlim([0, 10])\nplt.subplot(2, 1, 2)\nplt.title(\"imag\")\nplt.plot(f_i, 2/N*np.imag(a_f_positive))\nplt.grid()\nplt.xlim([0, 10])\n```\n\nLets calculate the angle of the complex number:\n\n\\begin{align}\n\\alpha = \\text{arctan} \\frac{imag}{real}\n\\end{align}\n\nThere is a handy function: `np.angle` which does it for us.\n\n\n```python\nangle = np.angle(a_f_positive, deg=True)\n\n# OR manually\n# angle = np.arctan2(2/N*np.imag(a_f_positive),(2/N*np.real(a_f_positive)))*grad_to_degree_factor\n```\n\nand plot it again\n\n\n```python\nplt.figure(figsize=(15, 10))\nplt.subplot(3, 1, 1)\nplt.ylabel(\"real-component [1]\")\nplt.plot(f_i, 2/N*np.real(a_f_positive))\nplt.grid()\nplt.xlim([0, 10])\nplt.subplot(3, 1, 2)\nplt.ylabel(\"imag component [1]\")\nplt.plot(f_i, 2/N*np.imag(a_f_positive))\nplt.grid()\nplt.xlim([0, 10])\nplt.subplot(3, 1, 3)\nplt.plot(f_i, angle)\nplt.grid()\nplt.ylabel(\"phase [\u00b0]\")\nplt.xlabel(\"frequency [Hz]\")\nplt.xlim([0, 10])\n\nplt.scatter(f_i[[30, 50]], angle[[30, 50]], color=\"k\")\nplt.text(f_i[30] + 0.1 , angle[30], \"%d\u00b0\" % int(angle[30]))\nplt.text(f_i[50] + 0.1 , angle[50], \"%d\u00b0\" % int(angle[50]))\nplt.ylim([-150, 100])\n```\n\nThe $5Hz$ sinus wave with zero phase has an $\\alpha \\approx -90^\\circ$, since a sine wave is a $90^\\circ$-shifted cos wave.\n\nThe $3Hz$ sinus component with $45^\\circ$ phase has an $\\alpha \\approx -90^\\circ-45^\\circ = -135^\\circ$ \n\n## FFT on complex numbers\n\nBecause within the multi-chirp FMCW algorithm we do a FFT on a series of complex numbers we want to make a simple example here.\n\nOur example function of interest will be:\n\n\\begin{align}\nf(t) = 0.25\\text{sin}(2\\pi 5 t + \\phi) \\\\\n\\phi = \\phi(t) = -\\frac{\\pi}{8}t = vt\n\\end{align}\n\nThe phase shift is time dependent in this example.\n\n**Goal**: find parameter $v$ via FFT.\n\n\n```python\ndef f(t, phi=0):\n return 0.25*np.sin(2*np.pi*5*t + phi)\n```\n\nLet's visualize how the sinus wave develops over time ...\n\n\n```python\nt = np.linspace(0, 10, 10000)\nplt.figure(figsize=(15,5))\nplt.plot(t, f(t), label=\"$\\phi=0$\")\nplt.plot(t, f(t, -np.pi/8*t), label=\"$\\phi=-\\pi/8 \\cdot t$\")\nplt.xlim([0, 4])\nplt.xlabel(\"$t$ [s]\")\nplt.grid()\nplt.legend();\n```\n\nFor the sake of our example we will run the FFT each $T_{cycle}$ seconds.\n\n\n```python\nT_cycle = 2 # seconds\nn_cycles = 200\nf_cycle = 1/T_cycle\n```\n\nPer cycle FFT config\n\n\n```python\nf_s = 100\nT_s = 1/f_s\nN = int(T_cycle/T_s)\nprint(\"Sample frequency:\", f_s, \"Hz\")\nprint(\"Sample period:\", T_s, \"sec\")\nprint(\"Number samples:\", N)\n```\n\n Sample frequency: 100 Hz\n Sample period: 0.01 sec\n Number samples: 200\n\n\nWe run FFT in each cycle and save the results in a list.\n\n\n```python\nfft_cycle_results = list() # result list\n\n# for each cycle\nfor c in range(n_cycles):\n \n # determine start and end of a cycle\n t_start = c*T_cycle\n t_end = (c+1)*T_cycle\n \n # sample the signal at according timesteps\n t_sample = np.arange(t_start, t_end, T_s)\n f_sample = f(t_sample, -np.pi/8*t_sample)\n \n # run FFT and append results\n fft_res = fft(f_sample)\n fft_cycle_results.append(fft_res)\n```\n\nWe cut the positive frequency range and normalize the amplitudes (see introdcutory example above).\n\n\n```python\nfft_cycle_results = [2/N*r[:N//2] for r 
in fft_cycle_results]\n```\n\n\n```python\nfreq = np.arange(0, N//2)*f_s/N\n```\n\n\n```python\nfreq\n```\n\n\n\n\n array([ 0. , 0.5, 1. , 1.5, 2. , 2.5, 3. , 3.5, 4. , 4.5, 5. ,\n 5.5, 6. , 6.5, 7. , 7.5, 8. , 8.5, 9. , 9.5, 10. , 10.5,\n 11. , 11.5, 12. , 12.5, 13. , 13.5, 14. , 14.5, 15. , 15.5, 16. ,\n 16.5, 17. , 17.5, 18. , 18.5, 19. , 19.5, 20. , 20.5, 21. , 21.5,\n 22. , 22.5, 23. , 23.5, 24. , 24.5, 25. , 25.5, 26. , 26.5, 27. ,\n 27.5, 28. , 28.5, 29. , 29.5, 30. , 30.5, 31. , 31.5, 32. , 32.5,\n 33. , 33.5, 34. , 34.5, 35. , 35.5, 36. , 36.5, 37. , 37.5, 38. ,\n 38.5, 39. , 39.5, 40. , 40.5, 41. , 41.5, 42. , 42.5, 43. , 43.5,\n 44. , 44.5, 45. , 45.5, 46. , 46.5, 47. , 47.5, 48. , 48.5, 49. ,\n 49.5])\n\n\n\n**Note**: The FFT frequency resolution is at 1Hz. That's important because the frequency shift by $-\\frac{1}{8}Hz$ introduced by $\\phi(t)$ is not visible in the FFT!\n\nThe FFT will show a peak at 5Hz with a different phase each time.\n\nBecause the frequency is almost the same in each cycle, we expect the same behaviour in each result:\n\n\n```python\nn_cycles_to_display = 4\nfft_res_display = fft_cycle_results[:n_cycles_to_display]\n\nfig, ax = plt.subplots(ncols=len(fft_res_display), figsize=(15, 3), sharex=True, sharey=True)\nfor i, ax, res in zip(range(n_cycles_to_display), ax, fft_res_display):\n res_abs = np.abs(res)\n ax.plot(freq, res_abs)\n ax.grid(True)\n ax.set_xlim([0, 10])\n ax.set_xlabel(\"frequency [Hz]\")\n \n k = np.argmax(res_abs)\n magn_max = res_abs[k]\n freq_max = freq[k]\n \n ax.set_title(\"Cycle %d:\\n%.2f at %.2f Hz\" % (i, magn_max, freq_max))\n```\n\nLooks fine for the first 4 cycles ... Let's look at all cycles by picking the frequency with max. magnitude from each cycle:\n\n\n```python\nfreq_list = list()\nfor res in fft_cycle_results:\n res_abs = np.abs(res)\n k = np.argmax(res_abs)\n freq_list.append(freq[k])\n \nplt.figure(figsize=(15,3))\nplt.plot(freq_list)\nplt.xlabel(\"cycle nr.\")\nplt.ylabel(\"frequency [Hz]\")\nplt.title(\"Frequency with max. peak in FFT domain vs. cycle\");\n```\n\nIt seems that the position (frequency) of the peaks remains **eqal**, despite the changing real and imaginary components.\n\nLet's collect the max. frequency component from each cycle\n\n\n```python\ncycle_max_list = list()\n\nfor res in fft_cycle_results:\n # calc. the magnitude\n res_abs = np.abs(res)\n \n # find frequency index\n k = np.argmax(res_abs)\n cycle_max_list.append(res[k])\n```\n\n... 
and visualize the complex numbers:\n\n\n```python\nn_cycles_to_display = 4\ncycle_max_list_display = cycle_max_list[:n_cycles_to_display]\n\nfig, ax = plt.subplots(ncols=len(cycle_max_list_display), figsize=(15, 30), \n subplot_kw={'projection': \"polar\"}, sharey=True)\n\nfor i, ax, res in zip(range(n_cycles_to_display), ax, cycle_max_list_display):\n ax.plot([0, np.angle(res)], [0, np.abs(res)], marker=\"o\")\n ax.text(np.angle(res)+0.1, np.abs(res), \"%d\u00b0\" % int(np.angle(res, deg=True)))\n ax.set_ylim([0, 0.4])\n ax.set_title(\"Cycle %d:\\n\" % (i, ))\n```\n\nWe can observe that the angle moves in negative direction with $-45^\\circ = T_{cycle}v = 2\\frac{\\pi}{8} = \\pi/4$ per cycle.\n\n### Solution via phase differences\n\nNow we could calculate ange velocity by taking differences between cycles and put them relative to cycle duration:\n\n\n```python\nangle_diff = np.diff(np.angle(cycle_max_list, deg=True))\nangle_vel = angle_diff/T_cycle\nprint(angle_vel[:10])\n```\n\n [-22.57811002 157.25241595 -22.41998496 -22.25432097 -22.57811002\n -22.74758405 -22.41998496 -22.25432097 -22.57811002 157.25241595]\n\n\nLet's look at the parameter $v = -\\frac{\\pi}{8}$\n\n\n```python\nv = -np.pi/8*360/(2*np.pi)\nprint(v)\n```\n\n -22.5\n\n\nLet's calculate the differences right (to remove the $157^\\circ-(-157^\\circ)$ effect).\n\n\n```python\nangle_vel[angle_vel>0] -= 180\nprint(\"Angle velocities:\", angle_vel[:10])\n```\n\n Angle velocities: [-22.57811002 -22.74758405 -22.41998496 -22.25432097 -22.57811002\n -22.74758405 -22.41998496 -22.25432097 -22.57811002 -22.74758405]\n\n\n\n```python\nplt.figure(figsize=(15,3))\nplt.plot(angle_vel)\nplt.xlabel(\"cycle nr.\")\nplt.ylabel(\"\u00b0/s\")\nplt.title(\"angular velocity derived by cycle FFT phase differences\")\nplt.ylim([-40, 0]);\n```\n\nAs you can see, the phases of the FFT output from each cycle give a hint over the phase velocity $v$ of the signal in time domain.\n\n**Summary**: We found $v$!\n\n### Solution via second FFT\n\nThe core idea of this alternative approach is to extract the periodic change of phase $\\phi(t)$.\n\nWe can find the phase velocity via a **second FFT over the cycle results**, too. Consider the first FFT result as a measurement/sample for the second FFT.\n\nRemember, those are our results (FFT-magnitude from the $5Hz$-component):\n\n\n```python\ncycle_max_list[:5]\n```\n\n\n\n\n [(-0.09177962152642351-0.22644808906297761j),\n (-0.22334337399156695-0.09379786966319806j),\n (-0.22407560703847842+0.09379786966349972j),\n (-0.09354738847923255+0.22644808906302294j),\n (0.09177962152656505+0.22644808906289618j)]\n\n\n\n\n```python\n# here, we take only the positive side of fft\nsecond_fft_res = fft(cycle_max_list)[:n_cycles//2]\n```\n\n\n```python\nsecond_fft_res[:5]\n```\n\n\n\n\n array([ 2.60416688e-13-3.46972451e-13j, 5.82752770e-12-1.78615130e-11j,\n -1.55815396e-11-1.21250472e-11j, -7.42998983e-13+9.69827294e-12j,\n 1.79379736e-11-1.00089983e-11j])\n\n\n\nLike in the introductory example, each element of `second_fft_res` represents a frequency component.\n\n\n```python\nfreq_second = np.arange(0, n_cycles//2)*f_cycle/n_cycles\nomega_second = 360*freq_second # same as 2*np.pi*\n```\n\n\n```python\nomega_second\n```\n\n\n\n\n array([ 0. , 0.9, 1.8, 2.7, 3.6, 4.5, 5.4, 6.3, 7.2, 8.1, 9. ,\n 9.9, 10.8, 11.7, 12.6, 13.5, 14.4, 15.3, 16.2, 17.1, 18. , 18.9,\n 19.8, 20.7, 21.6, 22.5, 23.4, 24.3, 25.2, 26.1, 27. , 27.9, 28.8,\n 29.7, 30.6, 31.5, 32.4, 33.3, 34.2, 35.1, 36. 
, 36.9, 37.8, 38.7,\n 39.6, 40.5, 41.4, 42.3, 43.2, 44.1, 45. , 45.9, 46.8, 47.7, 48.6,\n 49.5, 50.4, 51.3, 52.2, 53.1, 54. , 54.9, 55.8, 56.7, 57.6, 58.5,\n 59.4, 60.3, 61.2, 62.1, 63. , 63.9, 64.8, 65.7, 66.6, 67.5, 68.4,\n 69.3, 70.2, 71.1, 72. , 72.9, 73.8, 74.7, 75.6, 76.5, 77.4, 78.3,\n 79.2, 80.1, 81. , 81.9, 82.8, 83.7, 84.6, 85.5, 86.4, 87.3, 88.2,\n 89.1])\n\n\n\n\n```python\nplt.figure(figsize=(10,5))\nplt.plot(omega_second, np.abs(second_fft_res))\nplt.grid()\nplt.xlabel(\"angle velocity $\\omega$ [\u00b0/s]\")\nplt.xticks(range(0, 90, 5));\n```\n\nAs you could see we could detect the phase change $v=22.5^{\\circ}$ with a second FFT on the results of the first FFT.\n\n\n```python\n\n```\n", "meta": {"hexsha": "0e36df22e703dd4598ca0460eee86d3ae4ef3de3", "size": 536715, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "DFT.ipynb", "max_stars_repo_name": "kopytjuk/fmcw", "max_stars_repo_head_hexsha": "ceba9f71e41d54c3b339c7e40a840a3d8db542d8", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 31, "max_stars_repo_stars_event_min_datetime": "2019-12-23T05:06:19.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-22T17:19:01.000Z", "max_issues_repo_path": "DFT.ipynb", "max_issues_repo_name": "kopytjuk/fmcw", "max_issues_repo_head_hexsha": "ceba9f71e41d54c3b339c7e40a840a3d8db542d8", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "DFT.ipynb", "max_forks_repo_name": "kopytjuk/fmcw", "max_forks_repo_head_hexsha": "ceba9f71e41d54c3b339c7e40a840a3d8db542d8", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 9, "max_forks_repo_forks_event_min_datetime": "2020-05-06T20:54:58.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-13T09:42:35.000Z", "avg_line_length": 431.0963855422, "max_line_length": 133280, "alphanum_fraction": 0.9415723429, "converted": true, "num_tokens": 5211, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9390248157222396, "lm_q2_score": 0.8872045952083047, "lm_q1q2_score": 0.8331071315234025}} {"text": "## What is a Percepton\n\nA percepton is a type of binary classifier. A binary classfier is a function\nwhich can be decide weather or not an input, represented by a vector of numbers,\nbelong to a specific class. \n\nThe perceptron maps its input $x$ ( a real value vector ) to an output value $f(x)$ ( a single binary value ):\n\n\\begin{equation}\n f(x) = \\chi(\\langle w, x \\rangle + b)\n\\end{equation}\n\nwhere $w$ is the vector of the weights with real value, $\\langle \\cdot , \\cdot \\rangle$ is the scalar product,\n$b$ is the bias, a constant term that doesn't dipend on any input value and $\\chi(x)$ is the output fuction, also called \nactivation function. \nThe most common choices for $\\chi(x)$ are:\n\n- $\\chi(x)$ = $sing(x)$\n- $\\chi(x)$ = $\u0398(x)$\n- $\\chi(x)$ = $x$\n\nwhere $\u0398(x)$ is the Heavside Function.\n\n\n\nThe perceptron works weel when the learning set is linearly separable, while when the learning isn't linearly separable\nits learning algorithm doesn't terminate. If the vector are not linearly separable will never reach a point where all vectors are \nclassified properly. 
The most famous example of perceptron inability to solve problems with linearly separable vector is the \nboolean **XOR** problem.\n\n\n\n## How to train the Perceptron\n\nThere are various way to train a perceptron, one of the most effienct ( the one that used here )\nis the **Delta Rule**\n\nThe Delta Rule is a gradient descend learning rule for updating the weights of the inputs of an artifical \nneuron in a single-layer neural network. It is a special case of the more general backpropagation algorithm. For\na neuron $j$ with activation function $g(x)$, the delta rule for $j$'s weight $w_{ji}$ is given by:\n\n\\begin{equation}\n\\Delta w_{ji} = \\eta (t_j - y_j)g'(h_i)x_i\n\\end{equation}\n\nwhere:\n\n- $\\eta$ is a small constant, called learning rate\n- $g(x)$ is the neuron activation function\n- $g'$ is the derivative of $g$\n- $t_j$ us the target output\n- $h_i$ is the weighted sum of the neuron's inputs\n- $y_j$ is the actual output\n- $x_i$ is the ith input\n\nIt holds that $h_j = \\sum x_i w_{ji}$ and $ y_j = g(h_j)$. \nThe delta rule is commonly stated is a simplified forn for a neuron with \na linear activation function as \n\n\\begin{equation}\n\\Delta w_{ji} = \\eta (t_j - y_j)x_i\n\\end{equation}\n\nWhile the delta rule is similar to the percetron's training rule, the derivation is different.\nIf the percepton uses the Heaviside step function as the activation function , it turn out that \n$g(h)$ is no differantiable in zero, which make the direct application of the delta rule impossible.\n", "meta": {"hexsha": "c11a0afea853f65bff9ed51e8b36f01f0bdf3bcd", "size": 4030, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "demo/What_is_a_perceptron.ipynb", "max_stars_repo_name": "paolodelia99/Python-Perceptron", "max_stars_repo_head_hexsha": "9a4e9ab47634fd7a81d54c5792658b77a715b4b4", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-03-16T11:18:01.000Z", "max_stars_repo_stars_event_max_datetime": "2021-03-16T11:18:01.000Z", "max_issues_repo_path": "demo/What_is_a_perceptron.ipynb", "max_issues_repo_name": "paolodelia99/Python-Perceptron", "max_issues_repo_head_hexsha": "9a4e9ab47634fd7a81d54c5792658b77a715b4b4", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "demo/What_is_a_perceptron.ipynb", "max_forks_repo_name": "paolodelia99/Python-Perceptron", "max_forks_repo_head_hexsha": "9a4e9ab47634fd7a81d54c5792658b77a715b4b4", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.3063063063, "max_line_length": 139, "alphanum_fraction": 0.5947890819, "converted": true, "num_tokens": 691, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9458012686491107, "lm_q2_score": 0.8807970889295663, "lm_q1q2_score": 0.8330590041320274}} {"text": "# Van der Pol oscillator\nWe will look at the second order differentual equation (see https://en.wikipedia.org/wiki/Van_der_Pol_oscillator):\n\n$$\n{d^2y_0 \\over dx^2}-\\mu(1-y_0^2){dy_0 \\over dx}+y_0= 0\n$$\n\n\n```python\nfrom __future__ import division, print_function\nimport itertools\nimport numpy as np\nimport sympy as sp\nimport matplotlib.pyplot as plt\nfrom pyodesys.symbolic import SymbolicSys\nsp.init_printing()\n%matplotlib inline\nprint(sp.__version__)\n```\n\nOne way to reduce the order of our second order differential equation is to formulate a system of first order ODEs, using:\n\n$$ y_1 = \\dot y_0 $$\n\nwhich gives us:\n\n$$\n\\begin{cases}\n\\dot y_0 = y_1 \\\\\n\\dot y_1 = \\mu(1-y_0^2) y_1-y_0\n\\end{cases}\n$$\n\nLet's call this system of ordinary differential equations vdp1:\n\n\n```python\nvdp1 = lambda x, y, p: [y[1], -y[0] + p[0]*y[1]*(1 - y[0]**2)]\n```\n\n\n```python\ny0 = [0, 1]\nmu = 2.5\ntend = 25\n```\n\n\n```python\nodesys1 = SymbolicSys.from_callback(vdp1, 2, 1, names='y0 y1'.split())\nodesys1.exprs\n```\n\n\n```python\n# Let us plot using 30 data points\nres1 = odesys1.integrate(np.linspace(0, tend, 20), y0, [mu], name='vode')\nres1.plot()\nprint(res1.yout.shape)\n```\n\n\n```python\n# Let us interpolate between data points\nres2 = odesys1.integrate(np.linspace(0, tend, 20), y0, [mu], integrator='cvode', nderiv=1)\nres2.plot(m_lim=21)\nprint(res2.yout.shape)\n```\n\n\n```python\nodesys1.integrate(np.linspace(0, tend, 20), y0, [mu], integrator='cvode', nderiv=2)\nxplt, yplt = odesys1.plot_result(m_lim=21, interpolate=30)\nprint(odesys1._internal[1].shape, yplt.shape)\n```\n\nEquidistant points are not optimal for plotting this function. 
Using ``roots`` kwarg we can make the solver report the output where either the function value, its first or second derivative is zero.\n\n\n```python\nodesys2 = SymbolicSys.from_other(odesys1, roots=odesys1.exprs + (odesys1.dep[0],))\n# We could also add a higher derivative: tuple(odesys1.get_jac().dot(odesys1.exprs)))\n```\n\n\n```python\n# Let us plot using 10 data points\nres2 = odesys2.integrate(np.linspace(0, tend, 20), y0, [mu], integrator='cvode',\n nderiv=1, atol=1e-4, rtol=1e-4)\nxout, yout, info = res2\nxplt, yplt = odesys2.plot_result(m_lim=21, interpolate=30, indices=[0])\nxroots, yroots = info['roots_output'][0], info['roots_output'][1][:, 0]\nplt.plot(xroots, yroots, 'bd')\nprint(odesys2._internal[1].shape, yplt.shape, xroots.size)\n```\n\n\n```python\nodesys2.roots\n```\n\n\n```python\nres2.plot(indices=[0])\nplt.plot(xplt, [res2.at(_)[0][0, 0] for _ in xplt])\n```\n\n\n```python\nres1.plot(indices=[0])\nplt.plot(xplt, [res1.at(_, use_deriv=True)[0][0] for _ in xplt])\nplt.plot(xplt, [res1.at(_, use_deriv=False)[0][0] for _ in xplt])\n```\n", "meta": {"hexsha": "e6336ccdcfe2988e4d81ef075dbd84dc2ce922c3", "size": 5108, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "examples/van_der_pol_interpolation.ipynb", "max_stars_repo_name": "slayoo/pyodesys", "max_stars_repo_head_hexsha": "8e1afb195dadf6c6f8e765873bc9dd0fae067c39", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 82, "max_stars_repo_stars_event_min_datetime": "2015-09-29T16:51:03.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-02T13:26:50.000Z", "max_issues_repo_path": "examples/van_der_pol_interpolation.ipynb", "max_issues_repo_name": "slayoo/pyodesys", "max_issues_repo_head_hexsha": "8e1afb195dadf6c6f8e765873bc9dd0fae067c39", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": 28, "max_issues_repo_issues_event_min_datetime": "2015-09-29T14:40:45.000Z", "max_issues_repo_issues_event_max_datetime": "2021-09-18T19:29:50.000Z", "max_forks_repo_path": "examples/van_der_pol_interpolation.ipynb", "max_forks_repo_name": "slayoo/pyodesys", "max_forks_repo_head_hexsha": "8e1afb195dadf6c6f8e765873bc9dd0fae067c39", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 13, "max_forks_repo_forks_event_min_datetime": "2016-03-18T14:00:39.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-17T13:54:29.000Z", "avg_line_length": 25.1625615764, "max_line_length": 204, "alphanum_fraction": 0.5463978074, "converted": true, "num_tokens": 910, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9294404096760996, "lm_q2_score": 0.896251377983158, "lm_q1q2_score": 0.8330122479254353}} {"text": "# Robust Kalman filtering for vehicle tracking\n\nWe will try to pinpoint the location of a moving vehicle with high accuracy from noisy sensor data. We'll do this by modeling the vehicle state as a discrete-time linear dynamical system. Standard **Kalman filtering** can be used to approach this problem when the sensor noise is assumed to be Gaussian. 
We'll use **robust Kalman filtering** to get a more accurate estimate of the vehicle state for a non-Gaussian case with outliers.\n\n# Problem statement\n \nA discrete-time linear dynamical system consists of a sequence of state vectors $x_t \\in \\mathbf{R}^n$, indexed by time $t \\in \\lbrace 0, \\ldots, N-1 \\rbrace$ and dynamics equations\n\n\\begin{align}\nx_{t+1} &= Ax_t + Bw_t\\\\\ny_t &=Cx_t + v_t,\n\\end{align}\n\nwhere $w_t \\in \\mathbf{R}^m$ is an input to the dynamical system (say, a drive force on the vehicle), $y_t \\in \\mathbf{R}^r$ is a state measurement, $v_t \\in \\mathbf{R}^r$ is noise, $A$ is the drift matrix, $B$ is the input matrix, and $C$ is the observation matrix.\n\nGiven $A$, $B$, $C$, and $y_t$ for $t = 0, \\ldots, N-1$, the goal is to estimate $x_t$ for $t = 0, \\ldots, N-1$.\n\n# Kalman filtering\n\nA Kalman filter estimates $x_t$ by solving the optimization problem\n\n\\begin{array}{ll}\n\\mbox{minimize} & \\sum_{t=0}^{N-1} \\left( \n\\|w_t\\|_2^2 + \\tau \\|v_t\\|_2^2\\right)\\\\\n\\mbox{subject to} & x_{t+1} = Ax_t + Bw_t,\\quad t=0,\\ldots, N-1\\\\\n& y_t = Cx_t+v_t,\\quad t = 0, \\ldots, N-1,\n\\end{array}\n\nwhere $\\tau$ is a tuning parameter. This problem is actually a least squares problem, and can be solved via linear algebra, without the need for more general convex optimization. Note that since we have no observation $y_{N}$, $x_N$ is only constrained via $x_{N} = Ax_{N-1} + Bw_{N-1}$, which is trivially resolved when $w_{N-1} = 0$ and $x_{N} = Ax_{N-1}$. We maintain this vestigial constraint only because it offers a concise problem statement.\n\nThis model performs well when $w_t$ and $v_t$ are Gaussian. However, the quadratic objective can be influenced by large outliers, which degrades the accuracy of the recovery. To improve estimation in the presence of outliers, we can use **robust Kalman filtering**.\n\n# Robust Kalman filtering\n\nTo handle outliers in $v_t$, robust Kalman filtering replaces the quadratic cost with a Huber cost, which results in the convex optimization problem\n\n\\begin{array}{ll}\n\\mbox{minimize} & \\sum_{t=0}^{N-1} \\left( \\|w_t\\|^2_2 + \\tau \\phi_\\rho(v_t) \\right)\\\\\n\\mbox{subject to} & x_{t+1} = Ax_t + Bw_t,\\quad t=0,\\ldots, N-1\\\\\n& y_t = Cx_t+v_t,\\quad t=0,\\ldots, N-1,\n\\end{array}\n\nwhere $\\phi_\\rho$ is the Huber function\n$$\n\\phi_\\rho(a)= \\left\\{ \\begin{array}{ll} \\|a\\|_2^2 & \\|a\\|_2\\leq \\rho\\\\\n2\\rho \\|a\\|_2-\\rho^2 & \\|a\\|_2>\\rho.\n\\end{array}\\right.\n$$\n\nThe Huber penalty function penalizes estimation error linearly outside of a ball of radius $\\rho$, whereas in standard Kalman filtering, all errors are penalized quadratically. 
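\n\nAs a small illustration (a minimal NumPy sketch, using the same value $\\rho = 2$ that appears in the recovery code further below), we can compare the two penalties on a few residuals:\n\n\n```python\nimport numpy as np\n\ndef huber(a, rho=2.0):\n    # quadratic inside the ball of radius rho, linear outside it\n    a = np.abs(a)\n    return np.where(a <= rho, a**2, 2*rho*a - rho**2)\n\nresiduals = np.array([0.5, 2.0, 10.0, 50.0])\nprint(huber(residuals))  # grows only linearly for the large residuals\nprint(residuals**2)      # the quadratic penalty grows much faster\n```\n\n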
Thus, large errors are penalized less harshly, making this model more robust to outliers.\n\n# Vehicle tracking example\n\nWe'll apply standard and robust Kalman filtering to a vehicle tracking problem with state $x_t \\in \\mathbf{R}^4$, where\n$(x_{t,0}, x_{t,1})$ is the position of the vehicle in two dimensions, and $(x_{t,2}, x_{t,3})$ is the vehicle velocity.\nThe vehicle has unknown drive force $w_t$, and we observe noisy measurements of the vehicle's position, $y_t \\in \\mathbf{R}^2$.\n\nThe matrices for the dynamics are\n\n$$\nA = \\begin{bmatrix}\n1 & 0 & \\left(1-\\frac{\\gamma}{2}\\Delta t\\right) \\Delta t & 0 \\\\\n0 & 1 & 0 & \\left(1-\\frac{\\gamma}{2} \\Delta t\\right) \\Delta t\\\\\n0 & 0 & 1-\\gamma \\Delta t & 0 \\\\\n0 & 0 & 0 & 1-\\gamma \\Delta t\n\\end{bmatrix},\n$$\n\n$$\nB = \\begin{bmatrix}\n\\frac{1}{2}\\Delta t^2 & 0 \\\\\n0 & \\frac{1}{2}\\Delta t^2 \\\\\n\\Delta t & 0 \\\\\n0 & \\Delta t \\\\\n\\end{bmatrix},\n$$\n\n$$\nC = \\begin{bmatrix}\n1 & 0 & 0 & 0 \\\\\n0 & 1 & 0 & 0\n\\end{bmatrix},\n$$\nwhere $\\gamma$ is a velocity damping parameter.\n\n# 1D Model\nThe recurrence is derived from the following relations in a single dimension. For this subsection, let $x_t, v_t, w_t$ be the vehicle position, velocity, and input drive force. The resulting acceleration of the vehicle is $w_t - \\gamma v_t$, with $- \\gamma v_t$ is a damping term depending on velocity with parameter $\\gamma$. \n\nThe discretized dynamics are obtained from numerically integrating:\n$$\n\\begin{align}\nx_{t+1} &= x_t + \\left(1-\\frac{\\gamma \\Delta t}{2}\\right)v_t \\Delta t + \\frac{1}{2}w_{t} \\Delta t^2\\\\\nv_{t+1} &= \\left(1-\\gamma\\right)v_t + w_t \\Delta t.\n\\end{align}\n$$\n\nExtending these relations to two dimensions gives us the dynamics matrices $A$ and $B$.\n\n## Helper Functions\n\n\n```python\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport numpy as np\n\ndef plot_state(t,actual, estimated=None):\n '''\n plot position, speed, and acceleration in the x and y coordinates for\n the actual data, and optionally for the estimated data\n '''\n trajectories = [actual]\n if estimated is not None:\n trajectories.append(estimated)\n \n fig, ax = plt.subplots(3, 2, sharex='col', sharey='row', figsize=(8,8))\n for x, w in trajectories: \n ax[0,0].plot(t,x[0,:-1])\n ax[0,1].plot(t,x[1,:-1])\n ax[1,0].plot(t,x[2,:-1])\n ax[1,1].plot(t,x[3,:-1])\n ax[2,0].plot(t,w[0,:])\n ax[2,1].plot(t,w[1,:])\n \n ax[0,0].set_ylabel('x position')\n ax[1,0].set_ylabel('x velocity')\n ax[2,0].set_ylabel('x input')\n \n ax[0,1].set_ylabel('y position')\n ax[1,1].set_ylabel('y velocity')\n ax[2,1].set_ylabel('y input')\n \n ax[0,1].yaxis.tick_right()\n ax[1,1].yaxis.tick_right()\n ax[2,1].yaxis.tick_right()\n \n ax[0,1].yaxis.set_label_position(\"right\")\n ax[1,1].yaxis.set_label_position(\"right\")\n ax[2,1].yaxis.set_label_position(\"right\")\n \n ax[2,0].set_xlabel('time')\n ax[2,1].set_xlabel('time')\n\ndef plot_positions(traj, labels, axis=None,filename=None):\n '''\n show point clouds for true, observed, and recovered positions\n '''\n matplotlib.rcParams.update({'font.size': 14})\n n = len(traj)\n\n fig, ax = plt.subplots(1, n, sharex=True, sharey=True,figsize=(12, 5))\n if n == 1:\n ax = [ax]\n \n for i,x in enumerate(traj):\n ax[i].plot(x[0,:], x[1,:], 'ro', alpha=.1)\n ax[i].set_title(labels[i])\n if axis:\n ax[i].axis(axis)\n \n if filename:\n fig.savefig(filename, bbox_inches='tight')\n```\n\n## Problem Data\n\nWe generate the data for the vehicle tracking problem. 
We'll have $N=1000$, $w_t$ a standard Gaussian, and $v_t$ a standard Guassian, except $20\\%$ of the points will be outliers with $\\sigma = 20$.\n\nBelow, we set the problem parameters and define the matrices $A$, $B$, and $C$.\n\n\n```python\nn = 1000 # number of timesteps\nT = 50 # time will vary from 0 to T with step delt\nts, delt = np.linspace(0,T,n,endpoint=True, retstep=True)\ngamma = .05 # damping, 0 is no damping\n\nA = np.zeros((4,4))\nB = np.zeros((4,2))\nC = np.zeros((2,4))\n\nA[0,0] = 1\nA[1,1] = 1\nA[0,2] = (1-gamma*delt/2)*delt\nA[1,3] = (1-gamma*delt/2)*delt\nA[2,2] = 1 - gamma*delt\nA[3,3] = 1 - gamma*delt\n\nB[0,0] = delt**2/2\nB[1,1] = delt**2/2\nB[2,0] = delt\nB[3,1] = delt\n\nC[0,0] = 1\nC[1,1] = 1\n```\n\n# Simulation\n\nWe seed $x_0 = 0$ (starting at the origin with zero velocity) and simulate the system forward in time. The results are the true vehicle positions `x_true` (which we will use to judge our recovery) and the observed positions `y`.\n\nWe plot the position, velocity, and system input $w$ in both dimensions as a function of time.\nWe also plot the sets of true and observed vehicle positions.\n\n\n```python\nsigma = 20\np = .20\nnp.random.seed(6)\n\nx = np.zeros((4,n+1))\nx[:,0] = [0,0,0,0]\ny = np.zeros((2,n))\n\n# generate random input and noise vectors\nw = np.random.randn(2,n)\nv = np.random.randn(2,n)\n\n# add outliers to v\nnp.random.seed(0)\ninds = np.random.rand(n) <= p\nv[:,inds] = sigma*np.random.randn(2,n)[:,inds]\n\n# simulate the system forward in time\nfor t in range(n):\n y[:,t] = C.dot(x[:,t]) + v[:,t]\n x[:,t+1] = A.dot(x[:,t]) + B.dot(w[:,t])\n \nx_true = x.copy()\nw_true = w.copy()\n\nplot_state(ts,(x_true,w_true))\nplot_positions([x_true,y], ['True', 'Observed'],[-4,14,-5,20],'rkf1.pdf')\n```\n\n# Kalman filtering recovery\n\nThe code below solves the standard Kalman filtering problem using CVXPY. We plot and compare the true and recovered vehicle states. Note that the recovery is distorted by outliers in the measurements.\n\n\n```python\n%%time\nimport cvxpy as cp\n\nx = cp.Variable(shape=(4, n+1))\nw = cp.Variable(shape=(2, n))\nv = cp.Variable(shape=(2, n))\n\ntau = .08\n \nobj = cp.sum_squares(w) + tau*cp.sum_squares(v)\nobj = cp.Minimize(obj)\n\nconstr = []\nfor t in range(n):\n constr += [ x[:,t+1] == A*x[:,t] + B*w[:,t] ,\n y[:,t] == C*x[:,t] + v[:,t] ]\n\ncp.Problem(obj, constr).solve(verbose=True)\n\nx = np.array(x.value)\nw = np.array(w.value)\n\nplot_state(ts,(x_true,w_true),(x,w))\nplot_positions([x_true,y], ['True', 'Noisy'], [-4,14,-5,20])\nplot_positions([x_true,x], ['True', 'KF recovery'], [-4,14,-5,20], 'rkf2.pdf')\n\nprint(\"optimal objective value: {}\".format(obj.value))\n```\n\n# Robust Kalman filtering recovery\n\nHere we implement robust Kalman filtering with CVXPY. 
We get a better recovery than the standard Kalman filtering, which can be seen in the plots below.\n\n\n```python\n%%time\nimport cvxpy as cp\n\nx = cp.Variable(shape=(4, n+1))\nw = cp.Variable(shape=(2, n))\nv = cp.Variable(shape=(2, n))\n\ntau = 2\nrho = 2\n \nobj = cp.sum_squares(w)\nobj += cp.sum([tau*cp.huber(cp.norm(v[:,t]),rho) for t in range(n)])\nobj = cp.Minimize(obj)\n\nconstr = []\nfor t in range(n):\n constr += [ x[:,t+1] == A*x[:,t] + B*w[:,t] ,\n y[:,t] == C*x[:,t] + v[:,t] ]\n\ncp.Problem(obj, constr).solve(verbose=True)\n\nx = np.array(x.value)\nw = np.array(w.value)\n\nplot_state(ts,(x_true,w_true),(x,w))\nplot_positions([x_true,y], ['True', 'Noisy'], [-4,14,-5,20])\nplot_positions([x_true,x], ['True', 'Robust KF recovery'], [-4,14,-5,20],'rkf3.pdf')\n\nprint(\"optimal objective value: {}\".format(obj.value))\n```\n", "meta": {"hexsha": "193be6c515ec90fbb89eafe3a0eeb190421254eb", "size": 444078, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "examples/notebooks/WWW/robust_kalman.ipynb", "max_stars_repo_name": "jasondark/cvxpy", "max_stars_repo_head_hexsha": "56aaa01b0e9d98ae5a91a923708129a7b37a6f18", "max_stars_repo_licenses": ["ECL-2.0", "Apache-2.0"], "max_stars_count": 3285, "max_stars_repo_stars_event_min_datetime": "2015-01-03T04:02:29.000Z", "max_stars_repo_stars_event_max_datetime": "2021-04-19T14:51:29.000Z", "max_issues_repo_path": "examples/notebooks/WWW/robust_kalman.ipynb", "max_issues_repo_name": "h-vetinari/cvxpy", "max_issues_repo_head_hexsha": "86307f271819bb78fcdf64a9c3a424773e8269fa", "max_issues_repo_licenses": ["ECL-2.0", "Apache-2.0"], "max_issues_count": 1138, "max_issues_repo_issues_event_min_datetime": "2015-01-01T19:40:14.000Z", "max_issues_repo_issues_event_max_datetime": "2021-04-18T23:37:31.000Z", "max_forks_repo_path": "examples/notebooks/WWW/robust_kalman.ipynb", "max_forks_repo_name": "h-vetinari/cvxpy", "max_forks_repo_head_hexsha": "86307f271819bb78fcdf64a9c3a424773e8269fa", "max_forks_repo_licenses": ["ECL-2.0", "Apache-2.0"], "max_forks_count": 765, "max_forks_repo_forks_event_min_datetime": "2015-01-02T19:29:39.000Z", "max_forks_repo_forks_event_max_datetime": "2021-04-20T00:50:43.000Z", "avg_line_length": 784.5901060071, "max_line_length": 72528, "alphanum_fraction": 0.9443611257, "converted": true, "num_tokens": 3258, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9294404018582426, "lm_q2_score": 0.8962513800615312, "lm_q1q2_score": 0.8330122428503941}} {"text": "```python\nimport sympy as sp\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib.animation import FuncAnimation\n```\n\n\n```python\n# import symbols\ng, b, t, d, A, delt, l0, gam, A0 = sp.symbols(r'g \\beta t \\rho A \\delta l_0, \\gamma, A_0')\n```\n\n# The motion of the pendulum can be attributed to the superposition of 2 damped perpendicular SHMs who have angular frequencies as multiples of the variable angular frequency of the pendulum.\n\n$$x = A_1(t)\\;sin(n\\omega t)\\newline\ny = A_2(t)\\;sin(m\\omega t + \\delta)$$\n\n\n```python\n# Define the values of n amd m. 
By default, these are set two 6 and 5 respesctively.\ntry:\n n = int(input('Please enter n: '))\n m = int(input('Please enter m: '))\nexcept:\n n = 6\n m = 5 \n```\n\n### $$Let\\; \\delta = \\text{initial phase difference of the perpendicular SHMs}, \\newline\nl_0 = \\text{initial length of pendulum}\\newline\n\\beta = \\text{rate of mass outflow}\\newline\n\\rho = \\text{density of sand}\\newline\nA = \\text{area of container}$$\n\n\n```python\n#Defining the substitutions for later use\nsubst = {A0 : 5, #initial amplitude of pendulum \n g: 9.8, # acceleration due to gravity\n d: 10, # density of sand\n l0: 1, #initial length of pendulum\n gam: 0.1, #damping coefficient\n b: 0.5, #rate of mass outflow\n A: 0.4, # c.s area of the container\n delt:0 # initial phase difference of the two SHMs\n }\n```\n\n$$\\omega(t) =\\sqrt{\\frac{g}{l_0 + \\frac{\\beta t}{\\rho a}}}\\newline\nx = A_1(t)\\;sin(\\omega t)\\newline\ny = A_2(t)\\; sin(\\omega t + \\delta)\\newline\n\\frac{x^2}{A_1^2} + \\frac{y^2}{A_2^2} - \\frac{2xy\\;cos(\\delta)}{A_1A_2} = sin^2(\\delta) - Equation\\;of\\;the\\;curves \n$$\n\n\n```python\n# define the time dependent symbolic functions\nw = sp.Function(r'\\omega')(t)\nA1 = sp.Function(r'A_1')(t)\nA2 = sp.Function(r'A_2')(t)\n```\n\n\n```python\n# Define the angular frequency and the amplitudes\nw = sp.sqrt(g/(l0 + (b*t/(d*A))))\nA1 = A0*sp.exp(-gam*t)\nA2 = A0*sp.exp(-gam*t)\n```\n\n# The amplitudes of the perpendicular SHMs are given as\n## $$A_1(t) = A_0e^{-\\beta t}\\newline\nA_2(t) = A_0e^{-\\beta t}$$\n## The individual perpendicular SHMs are damped and hence follow the general equation:\n## $$ \\ddot{x} + 2\\beta\\dot{x} + kx = 0, \\newline \\;where\\; a,and\\; \\beta\\; are \\;constants \\;and\\; \\beta =\\;damping\\; parameter.$$ \n\nThe working formula of the angular frequency of the pendulum is \n$$ \\omega(t) =\\sqrt{\\frac{g}{l_0 + \\frac{\\beta t}{\\rho a}}}$$\n\nThe effective length of the pendulum is also changing. 
Hence, the working formula for the effective length of the pendulum is\n$$ l_{eff}(t) = L_0 + \\frac{\\beta}{A\\rho}t$$\n\n## The x and y components of the curves are given by the individual damped SHMs:\n### $$x = A_1(t)\\;sin(n\\omega t)\\newline\ny = A_2(t)\\;sin(m\\omega t + \\delta)$$\n\n\n```python\n# Define the symbolic x values and the y values\nx = A1*sp.sin(n*w*t)\ny = A2*sp.sin(m*w*t + delt)\n```\n\n\n```python\n# Convert the symbolic expressions to functions for plotting\nw_f = sp.lambdify(t, w.subs(subst))\nx_pts = sp.lambdify(t, x.subs(subst))\ny_pts = sp.lambdify(t, y.subs(subst))\n\n# Defining the time of plotting\ntime = np.linspace(0,20,1000)\n\n# create the data points\nx_div = x_pts(time)\ny_div = y_pts(time)\n```\n\n\n```python\n# animating the graph\n\n#Set the figure\nfig, ax = plt.subplots(figsize=(12,10))\nax.grid()\nax.set_xlabel(r'$x\\rightarrow$')\nax.set_ylabel(r'$y\\rightarrow$')\nax.set(xlim = (-5, 5), ylim = (-5,5))\nline, = ax.plot([],[], 'ro-') # blank line object for the lines\n\n# define the animating function which append to the empty line object for every x and y value\ndef animate(i):\n line.set_data(x_div[:i], y_div[:i])\n return line,\n\n# Create the animation\nanim = FuncAnimation(fig, animate, frames = len(time), blit = True, interval = 50)\n\n# Change the output format to JavaScript Html 'jshtml'\nplt.rcParams['animation.html'] = 'jshtml'\n\nplt.rcParams['animation.embed_limit'] = 50\n\n# Call the animation\nanim\n```\n", "meta": {"hexsha": "239665ecf56665e9248f012de37d177df98d870b", "size": 7058, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Lissajous figures.ipynb", "max_stars_repo_name": "ScientificArchisman/Simulations", "max_stars_repo_head_hexsha": "b9f3e7cc5d94a150931c12dac5fa21391736c47f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Lissajous figures.ipynb", "max_issues_repo_name": "ScientificArchisman/Simulations", "max_issues_repo_head_hexsha": "b9f3e7cc5d94a150931c12dac5fa21391736c47f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Lissajous figures.ipynb", "max_forks_repo_name": "ScientificArchisman/Simulations", "max_forks_repo_head_hexsha": "b9f3e7cc5d94a150931c12dac5fa21391736c47f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 28.0079365079, "max_line_length": 197, "alphanum_fraction": 0.5266364409, "converted": true, "num_tokens": 1276, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9294404038127071, "lm_q2_score": 0.8962513745192024, "lm_q1q2_score": 0.8330122394508213}} {"text": "### Evaluate a polynomial string\n\n\n```python\ndef symbolize(s):\n \"\"\"\n Converts a a string (equation) to a SymPy symbol object\n \"\"\"\n from sympy import sympify\n s1=s.replace('.','*')\n s2=s1.replace('^','**')\n s3=sympify(s2)\n \n return(s3)\n```\n\n\n```python\ndef eval_multinomial(s,vals=None,symbolic_eval=False):\n \"\"\"\n Evaluates polynomial at vals.\n vals can be simple list, dictionary, or tuple of values.\n vals can also contain symbols instead of real values provided those symbols have been declared before using SymPy\n \"\"\"\n from sympy import Symbol\n sym_s=symbolize(s)\n sym_set=sym_s.atoms(Symbol)\n sym_lst=[]\n for s in sym_set:\n sym_lst.append(str(s))\n sym_lst.sort()\n if symbolic_eval==False and len(sym_set)!=len(vals):\n print(\"Length of the input values did not match number of variables and symbolic evaluation is not selected\")\n return None\n else:\n if type(vals)==list:\n sub=list(zip(sym_lst,vals))\n elif type(vals)==dict:\n l=list(vals.keys())\n l.sort()\n lst=[]\n for i in l:\n lst.append(vals[i])\n sub=list(zip(sym_lst,lst))\n elif type(vals)==tuple:\n sub=list(zip(sym_lst,list(vals)))\n result=sym_s.subs(sub)\n \n return result\n```\n\n### Helper function for flipping binary values of a _ndarray_\n\n\n```python\ndef flip(y,p):\n import numpy as np\n lst=[]\n for i in range(len(y)):\n f=np.random.choice([1,0],p=[p,1-p])\n lst.append(f)\n lst=np.array(lst)\n return np.array(np.logical_xor(y,lst),dtype=int)\n```\n\n### Classification sample generation based on a symbolic expression\n\n\n```python\ndef gen_classification_symbolic(m=None,n_samples=100,n_features=2,flip_y=0.0):\n \"\"\"\n Generates classification sample based on a symbolic expression.\n Calculates the output of the symbolic expression at randomly generated (Gaussian distribution) points and\n assigns binary classification based on sign.\n m: The symbolic expression. Needs x1, x2, etc as variables and regular python arithmatic symbols to be used.\n n_samples: Number of samples to be generated\n n_features: Number of variables. This is automatically inferred from the symbolic expression. So this is ignored \n in case a symbolic expression is supplied. However if no symbolic expression is supplied then a \n default simple polynomial can be invoked to generate classification samples with n_features.\n flip_y: Probability of flipping the classification labels randomly. A higher value introduces more noise and make\n the classification problem harder.\n Returns a numpy ndarray with dimension (n_samples,n_features+1). 
Last column is the response vector.\n \"\"\"\n \n import numpy as np\n from sympy import Symbol,sympify\n \n if m==None:\n m=''\n for i in range(1,n_features+1):\n c='x'+str(i)\n c+=np.random.choice(['+','-'],p=[0.5,0.5])\n m+=c\n m=m[:-1]\n sym_m=sympify(m)\n n_features=len(sym_m.atoms(Symbol))\n evals=[]\n lst_features=[]\n for i in range(n_features):\n lst_features.append(np.random.normal(scale=5,size=n_samples))\n lst_features=np.array(lst_features)\n lst_features=lst_features.T\n for i in range(n_samples):\n evals.append(eval_multinomial(m,vals=list(lst_features[i])))\n \n evals=np.array(evals)\n evals_binary=evals>0\n evals_binary=evals_binary.flatten()\n evals_binary=np.array(evals_binary,dtype=int)\n evals_binary=flip(evals_binary,p=flip_y)\n evals_binary=evals_binary.reshape(n_samples,1)\n \n lst_features=lst_features.reshape(n_samples,n_features)\n x=np.hstack((lst_features,evals_binary))\n \n return (x)\n```\n\n\n```python\nx=gen_classification_symbolic(m='2*x1+3*x2+5*x3',n_samples=10,flip_y=0.0)\n```\n\n\n```python\nimport pandas as pd\ndf=pd.DataFrame(x)\n```\n\n\n```python\ndf\n```\n\n\n\n\n
    \n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
    0123
    0-0.442614-0.5237972.3003781.0
    1-0.4636873.2005224.3094491.0
    2-2.580825-2.6068775.8452461.0
    30.8885343.727563-3.1749440.0
    4-1.120233-2.737827-8.3081710.0
    5-4.832498-6.5914975.5652510.0
    6-2.3266884.7518033.3095591.0
    7-1.213919-10.2092682.4474710.0
    8-1.3208910.770222-8.7358170.0
    92.1504376.061752-6.7331770.0
    \n
    \n\n\n\n\n```python\nx=gen_classification_symbolic(m='12*x1/(x2+5*x3)',n_samples=10,flip_y=0.2)\ndf=pd.DataFrame(x)\ndf\n```\n\n\n\n\n
    \n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
    0123
    05.5667077.6456910.5598111.0
    1-10.5287962.241197-4.2163590.0
    25.0008660.6159972.0210551.0
    32.9194583.5932800.4984381.0
    44.671159-0.679506-5.1252421.0
    5-4.484902-12.8810672.3409851.0
    61.302499-5.054502-1.3965780.0
    7-2.321861-2.5483142.8816540.0
    8-7.0216604.5301752.7015440.0
    9-9.778964-0.1326424.0219560.0
    \n
    \n\n\n\n### Regression sample generation based on a symbolic expression\n\n\n```python\ndef gen_regression_symbolic(m=None,n_samples=100,n_features=2,noise=0.0,noise_dist='normal'):\n \"\"\"\n Generates regression sample based on a symbolic expression. Calculates the output of the symbolic expression \n at randomly generated (drawn from a Gaussian distribution) points\n m: The symbolic expression. Needs x1, x2, etc as variables and regular python arithmatic symbols to be used.\n n_samples: Number of samples to be generated\n n_features: Number of variables. This is automatically inferred from the symbolic expression. So this is ignored \n in case a symbolic expression is supplied. However if no symbolic expression is supplied then a \n default simple polynomial can be invoked to generate regression samples with n_features.\n noise: Magnitude of Gaussian noise to be introduced (added to the output).\n noise_dist: Type of the probability distribution of the noise signal. \n Currently supports: Normal, Uniform, t, Beta, Gamma, Poission, Laplace\n\n Returns a numpy ndarray with dimension (n_samples,n_features+1). Last column is the response vector.\n \"\"\"\n \n import numpy as np\n from sympy import Symbol,sympify\n \n if m==None:\n m=''\n for i in range(1,n_features+1):\n c='x'+str(i)\n c+=np.random.choice(['+','-'],p=[0.5,0.5])\n m+=c\n m=m[:-1]\n \n sym_m=sympify(m)\n n_features=len(sym_m.atoms(Symbol))\n evals=[]\n lst_features=[]\n \n for i in range(n_features):\n lst_features.append(np.random.normal(scale=5,size=n_samples))\n lst_features=np.array(lst_features)\n lst_features=lst_features.T\n lst_features=lst_features.reshape(n_samples,n_features)\n \n for i in range(n_samples):\n evals.append(eval_multinomial(m,vals=list(lst_features[i])))\n \n evals=np.array(evals)\n evals=evals.reshape(n_samples,1)\n \n if noise_dist=='normal':\n noise_sample=noise*np.random.normal(loc=0,scale=1.0,size=n_samples)\n elif noise_dist=='uniform':\n noise_sample=noise*np.random.uniform(low=0,high=1.0,size=n_samples)\n elif noise_dist=='beta':\n noise_sample=noise*np.random.beta(a=0.5,b=1.0,size=n_samples)\n elif noise_dist=='Gamma':\n noise_sample=noise*np.random.gamma(shape=1.0,scale=1.0,size=n_samples)\n elif noise_dist=='laplace':\n noise_sample=noise*np.random.laplace(loc=0.0,scale=1.0,size=n_samples)\n \n noise_sample=noise_sample.reshape(n_samples,1)\n evals=evals+noise_sample\n \n x=np.hstack((lst_features,evals))\n \n return (x)\n```\n\n#### Generate samples with a rational function as input \n### $$\\frac{10x_1}{(3x_2+4x_3)}$$\n\n\n```python\nx=gen_regression_symbolic(m='10*x1/(3*x2+4*x3)',n_samples=10,noise=0.1)\ndf=pd.DataFrame(x)\ndf\n```\n\n\n\n\n
    \n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
    0123
    0-7.08499-4.358398.10732-3.67783156896633
    1-10.66840.779056-6.646514.45796155973331
    2-6.69542-5.3071.179355.95108211351032
    3-3.835816.46401-5.996788.28104884665168
    4-1.50995-1.46056-4.102130.493224249022704
    52.22995-1.121634.410091.51213441402200
    64.262610.9826877.06761.31581579041464
    70.7871952.20071-4.68889-0.724650906335961
    80.720728-2.32469-0.468637-0.917106940761226
    9-3.04485-3.26701-3.190831.40239700700260
    \n
    \n\n\n\n#### Generate samples with no symbolic input and with 10 features\n\n\n```python\nx=gen_regression_symbolic(n_features=10,n_samples=10,noise=0.1)\ndf=pd.DataFrame(x)\ndf\n```\n\n\n\n\n
    \n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
    012345678910
    0-3.55302-3.60695-10.07297.85252-3.762767.78251-0.638193-4.75034-0.9896030.181414-10.2675547899900
    1-6.34778-2.74753-4.751874.704892.700145.29475-0.1900954.85289-4.50207-1.22839-1.93507315813350
    22.15057-0.43572-5.478056.960743.10096-1.508981.099134.73097-0.216513-3.256294.91632094325713
    31.111815.621560.06600366.665651.88656-4.57167-0.84389-2.445451.666497.3616918.0607243993836
    4-3.27857-4.135454.36299-3.73405-1.3958-6.392371.99202-0.216661-10.39053.43896-23.7181534803553
    5-6.78726-7.76895-2.2653-3.4326-16.8937-1.791114.79866-4.436922.01342-3.46766-49.7886776301048
    61.66157-4.344246.69391-3.42321-8.639760.8627222.70897-13.5047-7.14555-3.16143-33.7432854958296
    73.469497.11633-1.085236.41709-1.231617.079566.63776.06688-4.992623.8500519.9862211512126
    81.131554.02597-3.056074.9271-3.604546.062520.9366734.07096-2.040525.1109315.8002295267657
    9-1.60514.97768-7.69743-0.658654.070532.98333-10.93414.94022.97710.6835721.5880195549236
    \n
    \n\n\n\n\n```python\nimport matplotlib.pyplot as plt\n```\n\n#### Generate samples with less noise and plot: $0.2x^2+1.2x+6+f_{noise}(x\\mid{N=0.1})$\n\n\n```python\nx=gen_regression_symbolic(m='0.2*x**2+1.2*x+6',n_samples=100,noise=0.1)\ndf=pd.DataFrame(x)\nplt.scatter(df[0],df[1])\nplt.show()\n```\n\n#### Generate samples with more noise and plo: $0.2x^2+1.2x+6+f_{noise}(x\\mid{N=10})$\n\n\n```python\nx=gen_regression_symbolic(m='0.2*x**2+1.2*x+6',n_samples=100,noise=10)\ndf=pd.DataFrame(x)\nplt.scatter(df[0],df[1])\nplt.show()\n```\n\n#### Generate samples with larger coefficent for the quadratic term and plot: $1.3x^2+1.2x+6+f_{noise}(x\\mid{N=10})$\n\n\n```python\nx=gen_regression_symbolic(m='1.3*x**2+1.2*x+6',n_samples=100,noise=10)\ndf=pd.DataFrame(x)\nplt.scatter(df[0],df[1])\nplt.show()\n```\n", "meta": {"hexsha": "725888460eae554617efe613ef07595548bd595b", "size": 57763, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Random Function Generator/Symbolic regression classification generator.ipynb", "max_stars_repo_name": "jaymedina/Machine-Learning-with-Python", "max_stars_repo_head_hexsha": "203ef467cfa6d180c5cb67ffa2fcbb0ee514047e", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 1803, "max_stars_repo_stars_event_min_datetime": "2018-11-26T20:53:23.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T15:25:29.000Z", "max_issues_repo_path": "Random Function Generator/Symbolic regression classification generator.ipynb", "max_issues_repo_name": "jaymedina/Machine-Learning-with-Python", "max_issues_repo_head_hexsha": "203ef467cfa6d180c5cb67ffa2fcbb0ee514047e", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": 8, "max_issues_repo_issues_event_min_datetime": "2019-02-05T04:09:57.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-19T23:46:27.000Z", "max_forks_repo_path": "Random Function Generator/Symbolic regression classification generator.ipynb", "max_forks_repo_name": "jaymedina/Machine-Learning-with-Python", "max_forks_repo_head_hexsha": "203ef467cfa6d180c5cb67ffa2fcbb0ee514047e", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 1237, "max_forks_repo_forks_event_min_datetime": "2018-11-28T19:48:55.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-31T15:25:07.000Z", "avg_line_length": 56.685966634, "max_line_length": 9416, "alphanum_fraction": 0.662153974, "converted": true, "num_tokens": 5896, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9294404096760998, "lm_q2_score": 0.8962513627417531, "lm_q1q2_score": 0.8330122337594577}} {"text": "# 2021-08-30 High order and boundaries\n\n## Last time\n\n* Stability\n* Derivatives as matrices\n* Boundary conditions\n* Discrete eigenvalues/eigenvectors\n\n## Today\n* High order on arbitrary grids\n* Method of manufactured solutions\n* More satisfying approach to boundary conditions\n\n\n```julia\nusing Plots\n```\n\n# A few methods on grids\n\n\n```julia\ndiff1l(x, u) = x[2:end], (u[2:end] - u[1:end-1]) ./ (x[2:end] - x[1:end-1])\ndiff1r(x, u) = x[1:end-1], (u[2:end] - u[1:end-1]) ./ (x[2:end] - x[1:end-1])\ndiff1c(x, u) = x[2:end-1], (u[3:end] - u[1:end-2]) ./ (x[3:end] - x[1:end-2])\ndifflist = [diff1l, diff1r, diff1c]\n\nn = 20\nh = 2 / (n - 1)\nx = LinRange(-3, 3, n)\nu = sin.(x)\nfig = plot(cos, xlims=(-3, 3))\nfor d in difflist\n xx, yy = d(x, u)\n plot!(fig, xx, yy, marker=:circle, label=d)\nend\n```\n\n\n```julia\nfig\n```\n\n\n\n\n \n\n \n\n\n\n# Measuring error on grids\n\n\n```julia\nusing LinearAlgebra\n\ngrids = 2 .^ (2:10)\nhs = 1 ./ grids\nfunction refinement_error(f, fprime, d)\n error = []\n for n in grids\n x = LinRange(-3, 3, n)\n xx, yy = d(x, f.(x))\n push!(error, norm(yy - fprime.(xx), Inf)) \n end\n error\nend\n```\n\n\n\n\n refinement_error (generic function with 1 method)\n\n\n\n\n```julia\nfig = plot(xscale=:log10, yscale=:log10)\nfor d in difflist\n error = refinement_error(sin, cos, d)\n plot!(fig, hs, error, marker=:circle, label=d)\nend\nplot!(fig, hs, hs .^ 2)\n```\n\n\n\n\n \n\n \n\n\n\n# Stability\n\nAre there \"rough\" functions for which these formulas estimate $u'(x_i) = 0$?\n\n\n```julia\nx = LinRange(-1, 1, 9)\nf_rough(x) = cos(.1 + 4\u03c0*x)\nfp_rough(x) = -4\u03c0*sin(.1 + 4\u03c0*x)\n\nplot(x, f_rough, marker=:circle)\nplot!(f_rough)\n```\n\n\n\n\n \n\n \n\n\n\n\n```julia\nfig = plot(fp_rough, xlims=(-1, 1))\nfor d in difflist\n xx, yy = d(x, f_rough.(x))\n plot!(fig, xx, yy, label=d, marker=:circle)\nend\nfig\n```\n\n\n\n\n \n\n \n\n\n\nIf we have a solution $u(x)$, then $u(x) + f_{\\text{rough}}(x)$ is indistinguishable to our FD method.\n\n# Second derivatives\n\nWe can compute a second derivative by applying first derivatives twice.\n\n\n```julia\nfunction diff2a(x, u)\n xx, yy = diff1c(x, u)\n diff1c(xx, yy)\nend\n\nfunction diff2b(x, u)\n xx, yy = diff1l(x, u)\n diff1r(xx, yy)\nend\n\ndiff2list = [diff2a, diff2b]\nn = 10\nx = LinRange(-3, 3, n)\nu = - cos.(x);\n```\n\n\n```julia\nfig = plot(cos, xlims=(-3, 3))\nfor d2 in diff2list\n xx, yy = d2(x, u)\n plot!(fig, xx, yy, marker=:circle, label=d2)\nend\nfig\n```\n\n\n\n\n \n\n \n\n\n\n# How fast do these approximations converge?\n\n\n```julia\ngrids = 2 .^ (3:10)\nhs = 1 ./ grids\nfunction refinement_error2(f, f_xx, d2)\n error = []\n for n in grids\n x = LinRange(-3, 3, n)\n xx, yy = d2(x, f.(x))\n push!(error, norm(yy - f_xx.(xx), Inf))\n end\n error\nend\n```\n\n\n\n\n refinement_error2 (generic function with 1 method)\n\n\n\n\n```julia\nfig = plot(xscale=:log10, yscale=:log10)\nfor d2 in diff2list\n error = refinement_error2(x -> -cos(x), cos, d2)\n plot!(fig, hs, error, marker=:circle, label=d2)\nend\nplot!(fig, hs, hs .^ 2) \n```\n\n\n\n\n \n\n \n\n\n\n# Differentiation matrices\n\nAll our `diff*` functions thus far have been linear in `u`, therefore they can be represented as matrices.\n$$\\frac{u_{i+1} - u_i}{x_{i+1} - x_i} = \\begin{bmatrix} -1/h & 1/h \\end{bmatrix} \\begin{bmatrix} u_i \\\\ u_{i+1} \\end{bmatrix}$$\n\n\n```julia\nfunction diff1_mat(x)\n n 
= length(x)\n D = zeros(n, n)\n h = x[2] - x[1]\n D[1, 1:2] = [-1/h 1/h]\n for i in 2:n-1\n D[i, i-1:i+1] = [-1/2h 0 1/2h]\n end\n D[n, n-1:n] = [-1/h 1/h]\n D\nend\n```\n\n\n\n\n diff1_mat (generic function with 1 method)\n\n\n\n\n```julia\nx = LinRange(-3, 3, 10)\nplot(x, diff1_mat(x) * sin.(x), marker=:circle)\nplot!(cos)\n```\n\n\n\n\n \n\n \n\n\n\n# How accurate is this derivative matrix?\n\n\n```julia\nfig = plot(xscale=:log10, yscale=:log10, legend=:topleft)\nerror = refinement_error(sin, cos, (x, u) -> (x, diff1_mat(x) * u))\nplot!(fig, hs, error, marker=:circle)\nplot!(fig, hs, hs, label=\"\\$h\\$\")\nplot!(fig, hs, hs .^ 2, label=\"\\$h^2\\$\")\n```\n\n\n\n\n \n\n \n\n\n\n# Can we study it as a matrix?\n\n\n```julia\nD = diff1_mat(x)\nspy(D, marker=(:square, 10), c=:bwr)\n```\n\n\n\n\n \n\n \n\n\n\n\n```julia\nsvdvals(D)\n```\n\n\n\n\n 10-element Vector{Float64}:\n 2.268133218393964\n 2.2674392839412794\n 1.4265847744427302\n 1.3683737968309653\n 1.2135254915624207\n 1.0228485194005281\n 0.8816778784387094\n 0.5437139466339259\n 0.46352549156242107\n 3.204643550177702e-17\n\n\n\n# Second derivative with Dirichlet boundary conditions\n\nThe left endpoint in our example boundary value problem has a Dirichlet boundary condition,\n$$u(-1) = a . $$\nWith finite difference methods, we have an explicit degree of freedom $u_0 = u(x_0 = -1)$ at that endpoint.\nWhen building a matrix system for the BVP, we can implement this boundary condition by modifying the first row of the matrix,\n$$ \\begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\\\ \\\\ & & A_{2:,:} & & \\\\ \\\\ \\end{bmatrix} \\begin{bmatrix} u_0 \\\\ \\\\ u_{2:} \\\\ \\\\ \\end{bmatrix} = \\begin{bmatrix} a \\\\ \\\\ f_{2:} \\\\ \\\\ \\end{bmatrix} . $$\n\n* This matrix is not symmetric even if $A$ is.\n\n\n```julia\nfunction laplacian_dirichlet(x)\n n = length(x)\n D = zeros(n, n)\n h = x[2] - x[1]\n D[1, 1] = 1\n for i in 2:n-1\n D[i, i-1:i+1] = (1/h^2) * [-1, 2, -1]\n end\n D[n, n] = 1\n D\nend\n```\n\n\n\n\n laplacian_dirichlet (generic function with 1 method)\n\n\n\n# Laplacian as a matrix\n\n\n```julia\nL = laplacian_dirichlet(x)\nspy(L, marker=(:square, 10), c=:bwr)\n```\n\n\n\n\n \n\n \n\n\n\n\n```julia\nsvdvals(L)\n```\n\n\n\n\n 10-element Vector{Float64}:\n 8.745632414048261\n 8.01252539624902\n 6.887027178840083\n 5.501492841921461\n 4.019212498940009\n 2.620230430399528\n 1.5101748989145012\n 0.9274512680423916\n 0.630930249171091\n 0.23924866094931405\n\n\n\n# Shortcomings of our previous methods\n\n* Only second order accurate (at best)\n* Worse than second order on non-uniform grids\n* Worse than second order at Neumann boundaries\n* Boundary conditions break symmetry\n\n# Interpolation by Vandermonde matrices\n\nWe can compute a polynomial\n\n$$ p(x) = c_0 + c_1 x + c_2 x^2 + \\dotsb $$\n\nthat assumes function values $p(x_i) = u_i$ by solving a linear system with the Vandermonde matrix.\n\n$$ \\underbrace{\\begin{bmatrix} 1 & x_0 & x_0^2 & \\dotsb \\\\\n 1 & x_1 & x_1^2 & \\dotsb \\\\\n 1 & x_2 & x_2^2 & \\dotsb \\\\\n \\vdots & & & \\ddots \\end{bmatrix}}_V \\begin{bmatrix} c_0 \\\\ c_1 \\\\ c_2 \\\\ \\vdots \\end{bmatrix} = \\begin{bmatrix} u_0 \\\\ u_1 \\\\ u_2 \\\\ \\vdots \\end{bmatrix} .$$\n\n\n```julia\nfunction vander(x, k=nothing)\n if k === nothing\n k = length(x)\n end\n V = ones(length(x), k)\n for j = 2:k\n V[:, j] = V[:, j-1] .* x\n end\n V\nend\n```\n\n\n\n\n vander (generic function with 2 methods)\n\n\n\n\n```julia\nvander(LinRange(-1, 1, 5))\n```\n\n\n\n\n 5\u00d75 Matrix{Float64}:\n 1.0 -1.0 1.0 -1.0 
1.0\n    1.0 -0.5 0.25 -0.125 0.0625\n    1.0 0.0 0.0 0.0 0.0\n    1.0 0.5 0.25 0.125 0.0625\n    1.0 1.0 1.0 1.0 1.0\n\n\n\n# Fitting a polynomial\n\n\n```julia\nk = 4\nx = LinRange(-2, 2, k)\nu = sin.(x)\nV = vander(x)\nc = V \ u\nscatter(x, u, label=\"\$u_i\$\", legend=:topleft)\nplot!(x -> (vander(x, k) * c)[1,1], label=\"\$p(x)\$\")\nplot!(sin, label=sin)\n```\n\n\n\n# Differentiating\n\nWe're given the coefficients $c = V^{-1} u$ of the polynomial\n$$p(x) = c_0 + c_1 x + c_2 x^2 + \dotsb.$$\nWhat is\n\begin{align} p(0) &= c_0 \\\np'(0) &= c_1 \\ \np''(0) &= c_2 \cdot 2\\\np^{(k)}(0) &= c_k \cdot k! .\n\end{align}\n\n\n```julia\nfunction fdstencil1(source, target)\n    \"first derivative stencil from source to target\"\n    x = source .- target\n    V = vander(x)\n    inv(V)[2, :]' # as a row vector\nend\nplot([z -> fdstencil1(x, z) * u, cos], xlims=(-3,3))\nscatter!(x, 0*x, label=\"grid points\")\n```\n\n# Arbitrary order\n\n\n```julia\nfunction fdstencil(source, target, k)\n    \"kth derivative stencil from source to target\"\n    x = source .- target\n    V = vander(x)\n    rhs = zero(x)'\n    rhs[k+1] = factorial(k)\n    rhs / V\nend\nfdstencil(x, 0.5, 2)\n\n```\n\n\n\n\n    1\u00d74 adjoint(::Vector{Float64}) with eltype Float64:\n     0.0703125 0.351563 -0.914062 0.492188\n\n\n\n\n```julia\nplot([z -> fdstencil(x, z, 2) * u,\n      z -> -sin(z)]) \n```\n\n\n\n## We didn't call `inv(V)`; what's up?\n$$p(0) = s_0^0 u_0 + s_1^0 u_1 + s_2^0 u_2 + \dotsb = e_0^T \underbrace{V^{-1} u}_c = \underbrace{e_0^T V^{-1}}_{s^0} u$$\n\n# Convergence order\n\n\n```julia\nhs = 2 .^ -LinRange(-4, 10, 10)\nfunction diff_error(u, du, h; n, k, z=0)\n    x = LinRange(-h, h, n) .+ .5\n    fdstencil(x, z, k) * u.(x) - du.(z)\nend\nerrors = [diff_error(sin, t -> -sin(t), h, n=5, k=2, z=.5+0.1*h)\n          for h in hs]\nplot(hs, abs.(errors), marker=:circle)\nplot!(h -> h^3, label=\"\$h^?\$\", xscale=:log10, yscale=:log10)\n```\n\n\n\n## Observations\n\n* When using $n=3$ points, we fit a polynomial of degree 2 and have error $O(h^3)$ for interpolation $p(0)$.\n* Each derivative gives up one order of accuracy in general.\n* Centered diff on uniform grids can have extra cancellation (superconvergence)\n* The Vandermonde matrix is notoriously ill-conditioned with many points $n$. We recommend using a [stable algorithm from Fornberg](https://doi.org/10.1137/S0036144596322507).\n\n# High order discretization of the Laplacian\n## The Poisson problem $-u_{xx} = f$ with boundary conditions\n\n\n```julia\nfunction poisson(x, spoints, forcing; left=(0, zero), right=(0, zero))\n    n = length(x)\n    L = zeros(n, n)\n    rhs = forcing.(x)\n    for i in 2:n-1\n        jleft = min(max(1, i-spoints\u00f72), n-spoints+1)\n        js = jleft : jleft + spoints - 1\n        L[i, js] = -fdstencil(x[js], x[i], 2)\n    end\n    L[1,1:spoints] = fdstencil(x[1:spoints], x[1], left[1])\n    L[n,n-spoints+1:n] = fdstencil(x[n-spoints+1:n], x[n], right[1])\n    rhs[1] = left[2](x[1])\n    rhs[n] = right[2](x[n])\n    L, rhs\nend\n```\n\n\n\n\n    poisson (generic function with 1 method)\n\n\n\n\n```julia\nL, b = poisson(LinRange(-1, 1, 6), 3, zero, left=(1, zero))\nL\n```\n\n\n\n\n    6\u00d76 Matrix{Float64}:\n     -3.75 5.0 -1.25 0.0 0.0 0.0\n     -6.25 12.5 -6.25 0.0 0.0 0.0\n      0.0 -6.25 12.5 -6.25 0.0 0.0\n      0.0 0.0 -6.25 12.5 -6.25 0.0\n      0.0 0.0 0.0 -6.25 12.5 -6.25\n      0.0 0.0 0.0 9.25186e-18 -2.31296e-16 1.0\n\n\n\n# Method of manufactured solutions\n\n## Problem: analytic solutions to PDEs are hard to find\n\nLet's choose a smooth function with rich derivatives,\n$$ u(x) = \tanh(x) . 
$$\nThen $$ u'(x) = \\cosh^{-2}(x) $$ and $$ u''(x) = -2 \\tanh(x) \\cosh^{-2}(x) . $$\n\n* This works for nonlinear too.\n\n\n```julia\nx = LinRange(-2, 2, 10)\nL, rhs = poisson(x, 3,\n x -> 2 * tanh(x) / cosh(x)^2,\n left=(0, tanh), \n right=(1, x -> cosh(x)^-2))\nu = L \\ rhs\nplot(x, u, marker=:circle, legend=:topleft)\nplot!(tanh)\n```\n\n\n\n\n \n\n \n\n\n\n# Convergence rate\n\n\n```julia\nns = 2 .^ (2:10)\nhs = 1 ./ ns\nfunction poisson_error(n)\n x = LinRange(-2, 2, n)\n L, rhs = poisson(x, 3, x -> 2 * tanh(x) / cosh(x)^2,\n left = (0, tanh),\n right = (1, x -> cosh(x)^-2))\n u = L \\ rhs\n norm(u - tanh.(x), Inf)\nend\n```\n\n\n\n\n poisson_error (generic function with 1 method)\n\n\n\n\n```julia\nplot(hs, [poisson_error(n) for n in ns], marker=:circle)\nplot!(h -> h^2, xscale=:log10, yscale=:log10)\n```\n\n\n\n\n \n\n \n\n\n\n# Symmetry in boundary conditions: Dirichlet\n\nWe have implemented Dirichlet conditions by modifying the first row of the matrix,\n$$ \\begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\\\ \\\\ & & A_{1:,:} & & \\\\ \\\\ \\end{bmatrix} \\begin{bmatrix} u_0 \\\\ \\\\ u_{1:} \\\\ \\\\ \\end{bmatrix} = \\begin{bmatrix} a \\\\ \\\\ f_{1:} \\\\ \\\\ \\end{bmatrix} . $$\n\n* This matrix is not symmetric even if $A$ is.\n* We can eliminate $u_0$ and create a reduced system for $u_{1:}$.\n* Generalize: consider a $2\\times 2$ block system\n$$ \\begin{bmatrix} I & 0 \\\\ A_{10} & A_{11} \\end{bmatrix} \\begin{bmatrix} u_0 \\\\ u_1 \\end{bmatrix} = \\begin{bmatrix} f_0 \\\\ f_1 \\end{bmatrix} .$$\n\nWe can rearrange as\n$$ A_{11} u_1 = f_1 - A_{10} f_0, $$\nwhich is symmetric if $A_{11}$ is.\n* This is called \"lifting\" and is often done implicitly in the mathematics literature. It is convenient for linear solvers and eigenvalue solvers, but inconvenient for IO and postprocessing, as well as some nonlinear problems.\n* Convenient alternative: write\n$$ \\begin{bmatrix} I & 0 \\\\ 0 & A_{11} \\end{bmatrix} \\begin{bmatrix} u_0 \\\\ u_1 \\end{bmatrix} = \\begin{bmatrix} f_0 \\\\ f_1 - A_{10} f_0 \\end{bmatrix}, $$\nwhich is symmetric and decouples the degrees of freedom associated with the boundary. This method applies cleanly to nonlinear problems.\n* Optionally scale the identity by some scalar related to the norm of $A_{11}$.\n\n# Symmetry in boundary conditions: Neumann\n\nConsider FD discretization of the Neumann boundary condition\n$$ \\frac{du}{dx}(1) = b . $$\n1. Use a one-sided difference formula as in\n$$ \\frac{u_n - u_{n-1}}{h} = b . $$\n * an extra discretization choice\n * may reduce order of accuracy compared to interior discretization, lose symmetry.\n2. Temporarily introduce a ghost value $u_{n+1} = u(x_{n+1} = 1 + h)$ (possibly more) and define it to be a reflection of the values from inside the domain. In the case $b=0$, this reflection is $u_{n+i} = u_{n-i}$. More generally,\n$$ u_{n+i} = u_{n-i} + 2b(x_n - x_{n-i}) . $$\n\nAfter this definition of ghost values, we apply the interior discretization at the boundary. 
For our reference equation, we would write\n\n$$ \\frac{-u_{n-1} + 2 u_n - u_{n+1}}{h^2} = f(x_n) $$\n\nwhich simplifies to $$ \\frac{u_n - u_{n-1}}{h^2} = f(x_n)/2 + b/h $$\nafter dividing by 2 and moving the boundary term to the right hand side.\n\n# Fourier analysis of stencils\n\nConsider the plane waves $\\phi(x, \\theta) = e^{i\\theta x}$.\n\nSample $\\phi$ on a discrete grid $x = \\mathbb Z$ and apply the stencil\n\\begin{align}\nS \\phi(x, \\theta) &= s_{-1} \\phi(x-1, \\theta) + s_{0} \\phi(x, \\theta) + s_1 \\phi(x+1,\\theta) \\\\\n&= \\Big( s_{-1} e^{-i\\theta} + s_0 + s_{1} e^{i\\theta} \\Big) \\phi(x, \\theta)\n\\end{align}\nWith $S = \\begin{bmatrix} -1 & 2 & -1 \\end{bmatrix}$, we get\n$$S \\phi(x, \\theta) = \\underbrace{(2 - 2 \\cos\\theta)}_{\\hat S(\\theta)} \\phi(x, \\theta)$$\nWe call $\\hat S(\\theta)$ the *symbol* of the operator.\nWhat is the symbol of the continuous second derivative?\n\n\n```julia\nplot(theta -> 2 - 2*cos(theta), xlims=(-pi, pi))\n```\n\n\n\n\n \n\n \n\n\n", "meta": {"hexsha": "4fce9a4733241ea5e9db67a49672568baa304669", "size": 503510, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "slides/2021-08-30-high-order.ipynb", "max_stars_repo_name": "cu-numpde/numpde", "max_stars_repo_head_hexsha": "e5e1a465a622eba56900004f9a503412407cdccf", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-01-01T20:54:51.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-01T20:54:51.000Z", "max_issues_repo_path": "slides/2021-08-30-high-order.ipynb", "max_issues_repo_name": "amta3208/fall21", "max_issues_repo_head_hexsha": "e5e1a465a622eba56900004f9a503412407cdccf", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "slides/2021-08-30-high-order.ipynb", "max_forks_repo_name": "amta3208/fall21", "max_forks_repo_head_hexsha": "e5e1a465a622eba56900004f9a503412407cdccf", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-01-01T20:54:46.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-01T20:54:46.000Z", "avg_line_length": 141.4751334645, "max_line_length": 12713, "alphanum_fraction": 0.6605668209, "converted": true, "num_tokens": 5337, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9207896693699845, "lm_q2_score": 0.9046505331728751, "lm_q1q2_score": 0.8329928653356319}} {"text": "```julia\nusing CSV\nusing DataFrames\nusing PyPlot\nusing ScikitLearn # machine learning package\nusing StatsBase\nusing Random\nusing LaTeXStrings # for L\"$x$\" to work instead of needing to do \"\\$x\\$\"\nusing Printf\n\n# (optional)change settings for all plots at once, e.g. font size\nrcParams = PyPlot.PyDict(PyPlot.matplotlib.\"rcParams\")\nrcParams[\"font.size\"] = 16\n\n# (optional) change the style. see styles here: https://matplotlib.org/3.1.1/gallery/style_sheets/style_sheets_reference.html\nPyPlot.matplotlib.style.use(\"seaborn-white\") \n```\n\n## classifying breast tumors as malignant or benign\n\nsource: [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+(Diagnostic))\n\n> Features are computed from a digitized image of a fine needle aspirate (FNA) of a breast mass. 
They describe characteristics of the cell nuclei present in the image.\n\nThe mean radius and smoothness of the cell nuclei (the two features) and the outcome (M = malignant, B = benign) of the tumor are in the `breast_cancer_data.csv`.\n\n\n```julia\ndf = CSV.read(\"breast_cancer_data.csv\")\ndf[!, :class] = map(row -> row == \"B\" ? 0 : 1, df[:, :outcome])\nfirst(df, 5)\n```\n\n\n\n\n

    5 rows \u00d7 4 columns

    mean_radiusmean_smoothnessoutcomeclass
    Float64Float64StringInt64
    113.851.495B0
    29.6682.275B0
    39.2952.388B0
    419.694.585M1
    59.7551.243B0
    \n\n\n\n## visualize the two classes distributed in feature space\n\n\n```julia\nmarkers = Dict(\"M\" => \"x\", \"B\" => \"o\")\n\nfigure()\nxlabel(\"mean radius\")\nylabel(\"mean smoothness\")\nfor df_c in groupby(df, :outcome)\n outcome = df_c[1, :outcome]\n scatter(df_c[:, :mean_radius], df_c[:, :mean_smoothness], label=\"$outcome\", \n marker=markers[outcome])\nend\nlegend()\naxis(\"equal\")\n```\n\n## get data ready for classifiation in scikitlearn\n\nscikitlearn takes as input:\n* a feature matrix `X`, which must be `n_samples` by `n_features`\n* a target vector `y`, which must be `n_samples` long (of course)\n\n\n```julia\nn_tumors = nrow(df)\n\nX = zeros(n_tumors, 2)\ny = zeros(n_tumors)\nfor (i, tumor) in enumerate(eachrow(df))\n X[i, 1] = tumor[:mean_radius]\n X[i, 2] = tumor[:mean_smoothness]\n y[i] = tumor[:class]\nend\nX # look at y too!\n```\n\n\n\n\n 300\u00d72 Array{Float64,2}:\n 13.85 1.495\n 9.668 2.275\n 9.295 2.388\n 19.69 4.585\n 9.755 1.243\n 16.11 4.533\n 14.78 2.45 \n 15.78 3.598\n 15.71 1.972\n 14.68 3.195\n 13.71 3.856\n 21.09 4.414\n 11.31 1.831\n \u22ee \n 11.08 1.719\n 18.94 5.486\n 15.32 4.061\n 14.25 5.373\n 20.6 5.772\n 8.671 1.435\n 11.64 2.155\n 12.06 1.171\n 13.88 1.709\n 14.9 3.466\n 19.59 2.916\n 14.81 1.677\n\n\n\n## logistic regression\n\nlet $\\mathbf{x} \\in \\mathbb{R}^2$ be the feature vector describing a tumor. let $T$ be the random variable that denotes whether the tumor is benign (0) or malignant (1). the logistic model is a probabilistic model for the probability that a tumor is malignant given its feature vector:\n\n\\begin{equation}\n \\log \\frac{Pr(T=1 | \\mathbf{x})}{1-Pr(T=1 | \\mathbf{x})} = \\beta_0 + \\boldsymbol \\beta^\\intercal \\mathbf{x}\n\\end{equation}\nwhere $\\beta_0$ is the intercept and $\\boldsymbol \\beta \\in \\mathbb{R}$ are the weights for the features. \n\nwe will use scikitlearn to learn the $\\beta_0$ and $\\boldsymbol \\beta$ that maximize the likelihood.\n\n\n```julia\n@sk_import linear_model : LogisticRegression\n```\n\n WARNING: redefining constant LogisticRegression\n\n\n\n\n\n PyObject \n\n\n\n\n```julia\nlr = LogisticRegression(penalty=\"none\", solver=\"newton-cg\")\n\nlr.fit(X, y)\n\nprintln(\"\u03b2 = \", lr.coef_)\nprintln(\"\u03b2\u2080 = \", lr.intercept_)\n```\n\n \u03b2 = [1.1686607782175527 0.9420681231447378]\n \u03b2\u2080 = [-19.387890643955814]\n\n\nprediction of the probability that a new tumor is 0 (benign) or 1 (malignant)\n\n\n```julia\nlr.predict_proba([10.0 2.5])\n```\n\n\n\n\n 1\u00d72 Array{Float64,2}:\n 0.995256 0.00474403\n\n\n\n## visualize the learned model $Pr(T=1|\\mathbf{x})$\n\n\n```julia\nradius = 5:0.25:30\nsmoothness = 0.0:0.25:20.0\n\nlr_prediction = zeros(length(smoothness), length(radius))\nfor i = 1:length(radius)\n for j = 1:length(smoothness)\n x = [radius[i] smoothness[j]]\n lr_prediction[j, i] = lr.predict_proba(x)[2] # proba for probability\n end\nend\n```\n\n\n```julia\nfigure()\npcolor(radius, smoothness, lr_prediction, cmap=\"viridis\", \n vmin=0.0, vmax=1.0) # vmin,vmax enforce colorbar scale\ncolorbar()\n\nfor df_c in groupby(df, :outcome)\n outcome = df_c[1, :outcome]\n scatter(df_c[:, :mean_radius], df_c[:, :mean_smoothness], label=\"$outcome\", \n marker=markers[outcome])\nend\nlegend()\n```\n\n## making decisions: the ROC curve\n\nthis depends on the cost of a false positive versus false negative. 
(here, \"positive\" is defined as testing positive for \"malignant\")\n\n> \"I equally value minimizing (1) false positives and (2) false negatives.\"\n\n$\\implies$ choose $Pr(T=1|\\mathbf{x})=0.5$ as the decision boundary.\n\n> \"I'd rather predict that a benign tumor is malignant (false positive) than predict that a malignant tumor is benign (false negative).\"\n\n$\\implies$ choose $Pr(T=1|\\mathbf{x})=0.2$ as the decision boundary. Even if there is a relatively small chance that the tumor is malignant, we still take action and classify it as malignant...\n\nthe receiver operator characteristic (ROC) curve is a way we can evaluate a classification algorithm without imposing our values and specifying where the decision boundary should be.\n\n\n```julia\n@sk_import metrics : roc_curve\n@sk_import metrics : auc\n```\n\n WARNING: redefining constant roc_curve\n\n\n\n\n\n PyObject \n\n\n\n\n```julia\nscores = lr.predict_proba(X)[:, 2]\n```\n\n\n\n\n 300-element Array{Float64,1}:\n 0.14263839860751615 \n 0.0026092670076477047\n 0.0018782588829001088\n 0.9996447815854005 \n 0.0010942250321098578\n 0.9760986635528334 \n 0.5480964640686227 \n 0.9200581620768566 \n 0.6962552287733829 \n 0.6852396841339906 \n 0.5663718098629624 \n 0.9999187138695917 \n 0.011596238415532208 \n \u22ee \n 0.008004504524685206 \n 0.9996348109797668 \n 0.9122747253539892 \n 0.9111094723703275 \n 0.9999599017831867 \n 0.0003696570017181287\n 0.022876063860596256 \n 0.014910308332017176 \n 0.17409413839661506 \n 0.7842086355766936 \n 0.9980794947953938 \n 0.37749924416866815 \n\n\n\n\n```julia\nfpr, tpr, thresholds = roc_curve(y, scores)\n```\n\n\n\n\n ([0.0, 0.0, 0.0, 0.005555555555555556, 0.005555555555555556, 0.011111111111111112, 0.011111111111111112, 0.027777777777777776, 0.027777777777777776, 0.03333333333333333 \u2026 0.37222222222222223, 0.43333333333333335, 0.43333333333333335, 0.4722222222222222, 0.4722222222222222, 0.5055555555555555, 0.5055555555555555, 0.7666666666666667, 0.7666666666666667, 1.0], [0.0, 0.008333333333333333, 0.7416666666666667, 0.7416666666666667, 0.7583333333333333, 0.7583333333333333, 0.7666666666666667, 0.7666666666666667, 0.8083333333333333, 0.8083333333333333 \u2026 0.9583333333333334, 0.9583333333333334, 0.975, 0.975, 0.9833333333333333, 0.9833333333333333, 0.9916666666666667, 0.9916666666666667, 1.0, 1.0], [1.9999999999999254, 0.9999999999999254, 0.7959831048160733, 0.7882403199356263, 0.7732403837747389, 0.7622568750185913, 0.7614526612308641, 0.6962552287733829, 0.6282401802701606, 0.5888691371793174 \u2026 0.09889839902698462, 0.065245657335008, 0.0601149551828247, 0.05261077303643403, 0.0518823253258252, 0.043527327155129115, 0.04345926758794682, 0.008416240229296301, 0.008004504524685206, 0.00011874278075119236])\n\n\n\n\n```julia\nfigure()\nplot([0, 1], [0, 1], color=\"k\")\nplot(fpr, tpr, color=\"darkorange\",\n label=@sprintf(\"ROC curve (area = %0.2f)\", auc(fpr, tpr))\n )\nlegend()\nxlabel(\"false positive rate\")\nylabel(\"true positive rate\")\n```\n\ntradeoff:\n* threshold too small: classify all of the tumors as malignant, false positive rate very high\n* threshold too large: classify all of the tumors as benign, false negative rate very high\n\nsomewhere in the middle (but still depending on the cost of a false positive versus false negative) is where we should operate.\n\nthe `auc`, area under the curve, has a probabilistic interpretation:\n> the area under the curve (often referred to as simply the AUC) is equal to the probability that a classifier will rank a 
randomly chosen positive instance higher than a randomly chosen negative one (assuming 'positive' ranks higher than 'negative') -[Wikipedia](https://en.wikipedia.org/wiki/Receiver_operating_characteristic)\n\n**warning**: always split your data into test or train or do cross-validation to assess model performance. we trained on all data here to see the mechanics of fitting a logistic regression model to data, visualizing the model, and creating an ROC curve.\n", "meta": {"hexsha": "ec08332d2b3304ba2eae5df9a5ca8baa28969650", "size": 167765, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "CHE599-IntroDataScience/lectures/logistic_regression/logistic regression.ipynb", "max_stars_repo_name": "leanth/OSUCoursework", "max_stars_repo_head_hexsha": "ccfbf5f9daa8f6d3818bb5e4cc8df7c5135a5f34", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "CHE599-IntroDataScience/lectures/logistic_regression/logistic regression.ipynb", "max_issues_repo_name": "leanth/OSUCoursework", "max_issues_repo_head_hexsha": "ccfbf5f9daa8f6d3818bb5e4cc8df7c5135a5f34", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "CHE599-IntroDataScience/lectures/logistic_regression/logistic regression.ipynb", "max_forks_repo_name": "leanth/OSUCoursework", "max_forks_repo_head_hexsha": "ccfbf5f9daa8f6d3818bb5e4cc8df7c5135a5f34", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 307.2619047619, "max_line_length": 58514, "alphanum_fraction": 0.923184216, "converted": true, "num_tokens": 3166, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9207896715436483, "lm_q2_score": 0.9046505261034854, "lm_q1q2_score": 0.832992860792617}} {"text": "# Initialize Enviroment\n\n\n```python\nfrom IPython.display import display, Math\nfrom sympy import *\ninit_printing()\n\nfrom helper import comparator_factory, comparator_eval_factory, comparator_method_factory\n\nx,y,z = symbols('x y z')\n\ncomparator = comparator_factory('Before applying {}:','After:')\nmethod_comparator = comparator_method_factory('Before calling {}:','After:')\neval_comparator = comparator_eval_factory('Before evaluation:','After evaluation:')\n```\n\n# Calculus\n\nIn this section, we will cover how to perform baisc calculus tasks such as derivatives, integrals,litmis and series expansion.\n\n## Derivatives\nTo take derivatives, use diff() \n\n### basic derivatives\nPass in an expresison and one symbol with respect to which the derivative is applied.\n\n\n```python\nexpr = sin(x)\n\nexpr_diff = diff(expr,x)\n\ncomparator(expr, diff, x)\n```\n\ndiff() can also be called as a method\n\n\n```python\nexpr = sin(x)\n\nmethod_comparator(expr, 'diff', x)\n```\n\nTo create an unevaluated derivative, use Derivative class and initiate it with the same syntax with diff()\n\n\n```python\nexpr = sin(x)\n\ndiff_expr = Derivative(expr,x)\n\nprint('Before evaluation:')\ndisplay(diff_expr)\n```\n\nAnd call ```doit()``` method to evaluate it.\n\n\n```python\nprint('After evaluation:')\ndisplay(diff_expr.doit())\n```\n\nTo reduce code duplication, we user ```eval_comparator``` to handle the comparasion in later notes\n\n### higher order derivative\nTo take n order derivatives, pass the symbol n times or pass the symbol once and then follows it with n.\n\n\n```python\nexpr = Derivative(x**4,x,x,x)\n\neval_comparator(expr)\n```\n\nYou can achive the same calculation with the second syntax.\n\n\n```python\nexpr = Derivative(x**4,x,3)\n\neval_comparator(expr)\n```\n\n### higher order partial derivatives.\n\nJust pass the symbols in order, with the same syntax with single variable derivative.\n\n\n```python\nexpr = Derivative(exp(x*y*z),x, y, y, z, z, z, z)\n\neval_comparator(expr)\n```\n\nOr user number to control the derivatives order for each symbol\n\n\n```python\nexpr = Derivative(exp(x*y*z),x, y, 2, z, 4)\n\neval_comparator(expr)\n```\n\n# Integrals\n\n## indefinte integrals\nSimiliar with derivatives, integrals can be performed through integral() by function or method. To construct an unevaluated expression, initialize an Integral class and call doit() to evaludate the calculation.\n\n\n```python\nexpr = Integral(cos(x),x)\n\neval_comparator(expr)\n```\n\n### definite integrals\n\nTo perform a definite integral, pass in a tuple of variable symbol, lower bound and upper bound.\n\n\n```python\nexpr = Integral(exp(-x),(x,0,oo))\n\neval_comparator(expr)\n```\n\nNote, $\\infty$ in Sympy is oo (double lower case 'O')\n\n### multiple integrals\nTo perform a multiple integral, pass multiple tuples of symbol variable and integral bounds.\n\n\n```python\nexpr = Integral(exp(-x**2 - y**2), (x, -oo, oo), (y, -oo, oo))\n\neval_comparator(expr)\n```\n\nIf Sympy cannot calculate an integral on an expression, it returns an unevaluated integral expression.\n\n\n```python\nexpr = Integral(x**x)\n\neval_comparator(expr)\n```\n\n# Limits\n\nSimiliar with derivatives, limits can be performed through limit() by function or method. 
To construct an unevaluated expression, initialize an Limit class and call doit() to evaludate the calculation.\n\nBy default the limit is calculated from right to left with default setting ```dir = '+'```.\n\n\n```python\nexpr = Limit(sin(x)/x, x, 0)\n\neval_comparator(expr)\n```\n\nTo caluculate a limit from left to right. Pass ```dir='-'```\n\n\n```python\nexpr = Limit(sin(x)/x, x, 0, dir = '-')\n\neval_comparator(expr)\n```\n\n# Series Expansion\n\nSympy can calculate an asymptotic series expansion of a funciton around a point $x_0$ with respect to $O(x-x_0)^n$ by calling ```series``` method of an expression.\n\n\n```python\nexpr = sin(x)\n\nmethod_comparator(expr, 'series', x,0,6)\n```\n\nTo remove the order term, call ```removeO()``` method\n\n\n```python\nprint('Expanded expression without order term:')\nexpr.series(x,0,6).removeO()\n```\n\n# Reference\n[Sympy Documentation](http://docs.sympy.org/latest/index.html)\n\n# Related Articles\n* [Sympy Notes I]({filename}0026_sympy_intro_1_en.ipynb)\n* [Sympy Notes II]({filename}0027_sympy_intro_2_en.ipynb)\n* [Sympy Notes III]({filename}0028_sympy_intro_3_en.ipynb)\n* [Sympy Notes IV]({filename}0029_sympy_intro_4_en.ipynb)\n", "meta": {"hexsha": "f4464673f3728733fe39760a94f4131e20a209cf", "size": 63558, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "0011_sympy/0027_sympy_intro_2_en.ipynb", "max_stars_repo_name": "junjiecai/jupyter_demos", "max_stars_repo_head_hexsha": "8aa8a0320545c0ea09e05e94aea82bc8aa537750", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2019-09-16T10:44:39.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-04T18:55:52.000Z", "max_issues_repo_path": "0011_sympy/0027_sympy_intro_2_en.ipynb", "max_issues_repo_name": "junjiecai/jupyter_demos", "max_issues_repo_head_hexsha": "8aa8a0320545c0ea09e05e94aea82bc8aa537750", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "0011_sympy/0027_sympy_intro_2_en.ipynb", "max_forks_repo_name": "junjiecai/jupyter_demos", "max_forks_repo_head_hexsha": "8aa8a0320545c0ea09e05e94aea82bc8aa537750", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2020-10-24T16:19:29.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-04T18:55:57.000Z", "avg_line_length": 58.1500457457, "max_line_length": 3792, "alphanum_fraction": 0.7835992322, "converted": true, "num_tokens": 1088, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9324533126145178, "lm_q2_score": 0.8933094145755218, "lm_q1q2_score": 0.8329693228106809}} {"text": "```python\nimport numpy as np\n\nimport numpy as np\nimport scipy as sp\nfrom scipy import linalg\nfrom scipy import optimize\nfrom scipy import interpolate\nimport sympy as sm\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\nplt.style.use('seaborn-whitegrid')\nfrom matplotlib import cm\nfrom mpl_toolkits.mplot3d import Axes3D\n```\n\n# Gradient descent\n\nLet $\\boldsymbol{x} = \\left[\\begin{array}{c}\nx_1 \\\\\nx_2\\\\\n\\end{array}\\right]$ be a two-dimensional vector. Consider the following algorithm:\n\n**Algorithm:** `gradient_descent()`\n\n**Goal:** Minimize the function $f(\\boldsymbol{x})$.\n\n1. Choose a tolerance $\\epsilon>0$, a scale factor $ \\Theta > 0$, and a small number $\\Delta > 0$\n2. Guess on $\\boldsymbol{x}_0$ and set $n=1$\n3. 
Compute a numerical approximation of the jacobian for $f$ by\n\n $$\n \\nabla f(\\boldsymbol{x}_{n-1}) \\approx \\frac{1}{\\Delta}\\left[\\begin{array}{c}\n f\\left(\\boldsymbol{x}_{n-1}+\\left[\\begin{array}{c}\n \\Delta\\\\\n 0\n \\end{array}\\right]\\right)-f(\\boldsymbol{x}_{n-1})\\\\\n f\\left(\\boldsymbol{x}_{n-1}+\\left[\\begin{array}{c}\n 0\\\\\n \\Delta\n \\end{array}\\right]\\right)-f(\\boldsymbol{x}_{n-1})\n \\end{array}\\right]\n $$\n\n4. Stop if the maximum element in $|\\nabla f(\\boldsymbol{x}_{n-1})|$ is less than $\\epsilon$\n5. Set $\\theta = \\Theta$ \n6. Compute $f^{\\theta}_{n} = f(\\boldsymbol{x}_{n-1} - \\theta \\nabla f(\\boldsymbol{x}_{n-1}))$\n7. If $f^{\\theta}_{n} < f(\\boldsymbol{x}_{n-1})$ continue to step 9\n8. Set $\\theta = \\frac{\\theta}{2}$ and return to step 6 \n9. Set $x_{n} = x_{n-1} - \\theta \\nabla f(\\boldsymbol{x}_{n-1})$\n10. Set $n = n + 1$ and return to step 3\n\n**Question:** Implement the algorithm above such that the code below can run.\n\nDefine the rosenbrock function\n\n\n```python\ndef _rosen(x1,x2):\n f = (1.0-x1)**2 + 2*(x2-x1**2)**2\n\nx1 = sm.symbols('x_1')\nx2 = sm.symbols('x_2')\nf = (1.0-x1)**2 + 2*(x2-x1**2)**2\n\n```\n\n\n```python\n\ndef gradient_descent(f,x0,epsilon=1e-6,Theta=0.1,Delta=1e-8,max_iter=10_000):\n\n # step 1: initialize\n x = x0\n fx = f(x0)\n n = 1\n \n # step 2-6: iteration\n while n < max_iter:\n\n x1_variable=[x[0]+Delta,x[1]]\n x2_variable=[x[0],x[1]+Delta]\n # step 2: function and derivatives\n def rosen_jac(x):\n jac = np.zeros(2)\n jac[0] = f(x1_variable)-f(x)\n jac[1] = f(x2_variable)-f(x)\n return jac*(1/Delta)\n # step 3: check convergence\n if np.max(np.array(abs(rosen_jac(x)))) < epsilon:\n break\n\n x_prev = x\n fx_prev = fx\n \n # step 2: evaluate gradient\n jacx = rosen_jac(x)\n \n # step 3: find good step size (line search)\n\n fx_ast = np.inf\n theta_ast = Theta\n theta = Theta / 2\n if fx < fx_ast:\n fx_ast = fx\n theta_ast = theta\n \n # step 4: update guess\n x = x_prev - theta_ast * jacx\n \n # step 5: check convergence\n fx = f(x)\n if fx > fx_prev:\n break\n \n # d. 
update i\n n += 1\n \n return x,n\n```\n\n**Test case:**\n\n\n```python\ndef rosen(x):\n return (1.0-x[0])**2+2*(x[1]-x[0]**2)**2\n\nx0 = np.array([1.1,1.1])\ntry:\n x,it = gradient_descent(rosen,x0)\n print(f'minimum found at ({x[0]:.4f},{x[1]:.4f}) after {it} iterations')\n assert np.allclose(x,[1,1])\nexcept:\n print('not implemented yet')\n```\n\n minimum found at (1.0000,1.0000) after 578 iterations\n\n\n\n```python\ngradient_descent(rosen,x0)\n```\n\n\n\n\n (array([1.00000114, 1.00000253]), 578)\n\n\n\nCan use Nelder-Mead without analytical hessian\n\nNewton if we have the analytical hessian\n\nBFGS is the best without analytical\n\nMaybe potential step sizes are the theta function with theta/2\n\n##\n", "meta": {"hexsha": "abe6d20964c9ca0b0711c9197b349f7c34ab434e", "size": 6609, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Exam project/marcus33.ipynb", "max_stars_repo_name": "notnasobe666/BlackHatGang", "max_stars_repo_head_hexsha": "e6ec842cfaedaa3752003d7dc1d50ef66091fe6d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Exam project/marcus33.ipynb", "max_issues_repo_name": "notnasobe666/BlackHatGang", "max_issues_repo_head_hexsha": "e6ec842cfaedaa3752003d7dc1d50ef66091fe6d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Exam project/marcus33.ipynb", "max_forks_repo_name": "notnasobe666/BlackHatGang", "max_forks_repo_head_hexsha": "e6ec842cfaedaa3752003d7dc1d50ef66091fe6d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 26.6491935484, "max_line_length": 109, "alphanum_fraction": 0.4707217431, "converted": true, "num_tokens": 1225, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9263037363973295, "lm_q2_score": 0.8991213840277783, "lm_q1q2_score": 0.8328594974996693}} {"text": "Neccesary packages to run the project:\n\n\n```python\nimport matplotlib.pyplot as plt # A Python 2D plotting library to produce figures \nimport numpy as np # Package for scientific computing with Python\nfrom scipy import linalg # Used to do linear algebra functions\nfrom scipy import optimize # For maximizing and minimizing functions\nimport sympy as sm # Used to do symbolic mathematics\nfrom sympy import *\nimport ipywidgets as widgets # Package for making interactive figures\nsm.init_printing(use_unicode=True) # This will make all expressions pretty print with unicode characters.\n```\n\n# Analysis of the Solow \n\nThis project will try to analyse a wide range of Solow models. More specificly the project will try to analyse the models of chapter 3, 5, 6 and 7 from the book \"Introducing Advanced Macroeconomics - Growth and business cycles\".\n\n## The basic Solow model (Chapter 3)\n\n### Introducing the model\n\nThe basic Solow model consists of 4 essential equations which is stated below. \n1. $Y_t = F(K_t^d, L_t^d) = B K_t^\\alpha L_t^{1-\\alpha}$ \n2. $S_t = sY_t$ \n3. $L_{t+1} = (1+n)L_t$\n4. $K_{t+1} - K_t = S_t - \\delta K_t$\n\nThe model consists of the following variables and parameters:\n1. $Y_t$ is GDP which is given as a Cobb-Douglas function.\n2. $S_t$ is the amount of saving in the economy and is given as a constant fraction, $00$.\n6. 
$\alpha$ is the share of income/output spent on capital, $0<\alpha<1$\n\n### Analysing the model\n\nIn the first part of the analysis of the basic Solow model we find the steady state values for $k_t$ and $y_t$ respectively. In order to do that we use the sympy package which is a great tool for doing symbolic mathematics. We start by defining the symbols of the model:\n\n\n```python\nY = sm.symbols('Y_t')\nK = sm.symbols('K_t')\nL = sm.symbols('L_t')\ny = sm.symbols('y_t')\nk = sm.symbols('k_t')\nB = sm.symbols('B')\ns = sm.symbols('s')\nn = sm.symbols('n')\nalpha = sm.symbols('alpha')\ndelta = sm.symbols('delta')\n```\n\n### Finding steady state analytically\n\nTo find the steady state value of capital per worker we'll use the transition equation:\n\n$k_{t+1} = \frac{1}{1+n}\left(sBk_t^\alpha+(1-\delta)k_t\right)$\n\n\n```python\nTransition_equation = sm.Eq(k,(s*B*k**alpha+(1-delta)*k)/(1+n))\nkss = sm.solve(Transition_equation, k)\nprint('The steady state value of capital per worker is given as')\nkss\n```\n\nTo find the steady state value of $y_t$ we first define the output per capita production function and then insert our steady state expression for $k^*$ into it:\n\n\n```python\ny = B*k**alpha\nyss = y.subs({k:kss[0]})\nprint('The steady state value of income per worker is given as')\nyss\n```\n\nWe now have an expression for both capital per worker and income per worker in steady state. In the next part of the analysis we'll find a numerical solution for the steady state value.\n\n### Numerical solution to the model\n\nWe can find a numerical solution to the model as well, which we only do for capital per worker. There are several methods to do this, but we will only demonstrate the Brent, Newton and Bisect method. \n\n**Brent method:**\n\n\n```python\ndef brent_basic_solow(a,b, B=1, alpha=1/3, delta=0.01, n=0.02, s=0.5):\n\n    return optimize.brentq(lambda k: s*B*k**alpha - (n+delta)*k, a,b)\n\nprint(f'The steady state value with the Brent method is: {brent_basic_solow(1,100):.20f}')\n```\n\n    The steady state value with the Brent method is: 68.04138174397705540741\n\n\n**Newton method:**\n\n\n```python\ndef newton_basic_solow(x, B=1, alpha=1/3, delta=0.01, n=0.02, s=0.5):\n\n    return optimize.newton(lambda k: s*B*k**alpha - (n+delta)*k, x)\n\nprint(f'The steady state value with the Newton method is: {newton_basic_solow(100):.20f}')\n```\n\n    The steady state value with the Newton method is: 68.04138174397716909425\n\n\n**Bisect method:**\n\n\n```python\ndef bisect_basic_solow(a,b, B=1, alpha=1/3, delta=0.01, n=0.02, s=0.5):\n\n    return optimize.bisect(lambda k: s*B*k**alpha - (n+delta)*k, a,b)\nprint(f'The steady state with the Bisect method is: {bisect_basic_solow(1,100):.20f}')\n```\n\n    The steady state with the Bisect method is: 68.04138174397708382912\n\n\nWe see that the steady state value of capital per worker is roughly the same for all three methods, although you might notice they change slightly around the 12th digit. From this it is safe to say that the steady state value is around 68.04.\n\n### Visualisation of Solow diagram\n\nWe'll now try to visualise how capital converges towards its steady state value.\n\n\n```python\n# We define the variables of the Solow diagram\ndef solowdiagram_basic_solow(k,B,s,n,delta,alpha,T):\n\n# We make two empty lists. 
One for k and the 45-degree line\n    k_next_period = [k]\n    Linear_line = [0]\n    \n# And we create our functions which grow as time goes.\n    for t in range(0,T):\n        line = (n+delta)*t\n        Linear_line.append(line)\n        k_plus1 = s*B*t**alpha\n        k_next_period.append(k_plus1)\n\n# Plotting figure, limits and labels.\n    plt.figure(dpi=120)\n    plt.plot(k_next_period[:T], label='Growth, $sBk_t^a$', color='b')\n    plt.plot(Linear_line[:T], label = 'Depreciation, $(n+\delta)*k_t$', color='r')\n    plt.xlim(0,T)\n    plt.ylim(0,Linear_line[-1])\n    plt.xlabel('$k_t$')\n    plt.ylabel('')\n    plt.grid()\n    plt.legend()\n    plt.show()\n    \n# Showing figure. Change yourself to obtain a higher or lower steady state.\nsolowdiagram_basic_solow(0,1,0.5,0.02,0.01,1/3,200)\n```\n\nThe blue line is the growth in capital. On the positive side we have that the higher the savings rate is, the more capital is accumulated; the higher total factor productivity is, the more workers produce; and the higher alpha is, the higher capital's share is. The red line represents the depreciation. We have that population growth decreases capital per worker. Furthermore, the higher $\delta$ is, the more depreciation there is on capital, which also decreases capital per worker. You can change the values yourself to see how the steady state value of capital per worker changes.\n\nAs you probably notice the steady state value is the same as we found in the numerical analysis of the model. \n\nWith a constant technology level the basic model cannot generate long-run growth in output per capita; for that we need growth in technology. This will conclude the analysis of the basic Solow model. In the next part of the analysis we'll expand the model by including technological growth from the general Solow model.\n\n## The general Solow model (chapter 5)\nThe next part of the project is going to analyze the Solow model with technology from chapter 5. The model looks a lot like the model from chapter 3 except that total factor productivity now grows at an exogenous rate. For a better overview we'll introduce the full model although a lot of the equations are the same as in chapter 3.\n\n### Introducing the model\n\n1. $Y_t = K_t^\alpha (A_t L_t)^{1-\alpha}$ \n2. $S_t = sY_t$\n3. $K_{t+1} - K_t = S_t - \delta K_t$\n4. $L_{t+1} = (1+n)L_t$ \n5. $A_{t+1} = (1+g)A_t$ \n\nwhere $A_{t+1} = (1+g)A_t$ describes the exogenous growth in technology, $\tilde{k}_t = \frac{K_t}{A_t L_t}$ is technology-adjusted capital per worker, and the model has the parameters $\alpha$, $n$, $g$ and $\delta$. The rest of the parameters have been described previously.\n\n### Analysing the model\n\nIn the first part of the analysis of the general Solow model we find the marginal product with respect to capital and labour. Following that we'll once again find the steady state values for $k_t$ and $y_t$ respectively. We start by defining the symbols of the model.\n\n\n```python\nY = sm.symbols('Y_t')\ny = sm.symbols('y_t')\nK = sm.symbols('K_t')\nk = sm.symbols('k_t')\nA = sm.symbols('A')\ns = sm.symbols('s')\nn = sm.symbols('n')\ng = sm.symbols('g')\nL = sm.symbols('L_t')\nalpha = sm.symbols('alpha')\ndelta = sm.symbols('delta')\nkstar = sm.symbols('k_t^*')\nystar = sm.symbols('y_t^*')\nktilde = sm.symbols('ktilde_t')\nytilde = sm.symbols('ytilde_t')\nytildestar = sm.symbols('ytilde_t^*')\n```\n\n### Finding the marginal product with respect to capital and labour\n\nTo find the marginal product of capital we take the derivative of the production function with respect to K. 
Following this we take the derivative of the production function with respect to L to find the marginal product of labour.\n\n\n```python\nY = K**alpha*A**(1-alpha)*L**(1-alpha)\n\nr = diff(Y, K)\nprint('The marginal product of capital which equals the real rental rate is given as')\nsimplify(r)\n```\n\nThis can also be written as:\n\n$r_t = \\alpha \\left(\\frac{K_t}{A_tL_t} \\right)^{\\alpha-1}$\n\nWe find the marginal product with respect to L.\n\n\n```python\nw = diff(Y, L)\nprint('The marginal product of labour which equals the real wage is given as')\nsimplify(w)\n```\n\nWhich is an expression for the real wage. This can also be written as:\n\n$w_t = (1-\\alpha) \\left(\\frac{K_t}{A_tL_t} \\right)^\\alpha A_t$\n\n### Finding the steady state analytically\n\nThis time we'll take our starting point in the Solow equation which is stated below:\n\n$\\tilde{k}_{t+1}-\\tilde{k}_t = \\frac{1}{(1+n)(1+g)} \\left(s\\tilde{k}_t^\\alpha-(n+g+\\delta+ng)\\tilde{k}_t\\right)$\n\nWe know that in steady state $\\tilde{k}_{t+1} = \\tilde{k}_t = \\tilde{k}^*$, which we use to find the steady state:\n\n\n```python\neq = sm.Eq((s*ktilde**alpha-(n+g+delta+n*g)*ktilde)/((1+n)*(1+g)), 0)\neq\n```\n\nWe then isolate k which gives us the result $\\tilde{k}^*$:\n\n\n```python\nk_tilde_star = sm.solve(eq,ktilde)[0]\nk_tilde_star\n```\n\nWhat we're really interested in is capital per worker and not technology-adjusted capital per worker. Since $k_t^* = A_t\\tilde{k}^*$ we get the following result:\n\n\n```python\nk_star = sm.Eq(kstar, A*k_tilde_star)\nk_star\n```\n\nWe now have the steady state value for capital per worker. It's quite easy to find $\\tilde{y}_t^*$ now since we know that $\\tilde{y}_t^* = \\tilde{k}_t^\\alpha$:\n\n\n```python\ny_tilde_star = sm.Eq(ytildestar, k_tilde_star**alpha)\ny_tilde_star\n```\n\nOutput per worker is then given as:\n\n\n```python\ny_star = sm.Eq(ystar, A*k_tilde_star**alpha)\ny_star\n```\n\n### Numerical solution to the model\n\nWe will also find a numerical solution to the general Solow model, where we once again use the Brent, Newton and Bisect methods.\n\n**Brent method:**\n\n\n```python\ndef brent_general_solow(a,b, alpha=1/3, delta=0.01, n=0.02, s=0.5, g=0.01):\n\n return optimize.brentq(lambda k: s*k**alpha - (n+delta+g+n*g)*k, a,b)\n\nprint(f'The steady state value with the Brent method is: {brent_general_solow(1,100):.20f}')\n```\n\n The steady state value with the Brent method is: 43.86477710563418241918\n\n\n**Newton method:**\n\n\n```python\ndef newton_general_solow(x, alpha=1/3, delta=0.01, n=0.02, s=0.5, g=0.01):\n\n return optimize.newton(lambda k: s*k**alpha - (n+delta+g+n*g)*k, x)\n\nprint(f'The steady state value with the Newton method is: {newton_general_solow(100):.20f}')\n```\n\n The steady state value with the Newton method is: 43.86477710563421794632\n\n\n**Bisect method:**\n\n\n```python\ndef bisect_general_solow(a,b, alpha=1/3, delta=0.01, n=0.02, s=0.5, g=0.01):\n\n return optimize.bisect(lambda k: s*k**alpha - (n+delta+g+n*g)*k, a,b)\nprint(f'The steady state with the Bisect method is: {bisect_general_solow(1,100):.20f}')\n```\n\n The steady state with the Bisect method is: 43.86477710563546850153\n\n\nOnce again we see that the steady state value of technology-adjusted capital per worker is roughly the same up to the 12th digit.\n
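\n\nAs an added cross-check (a sketch that is not part of the original analysis), the closed-form solution of the steady state condition, $\\tilde{k}^* = \\left(\\frac{s}{n+g+\\delta+ng}\\right)^{\\frac{1}{1-\\alpha}}$, can be evaluated directly at the parameter values used in the functions above:\n\n\n```python\n# Added sanity check: evaluate the closed form at s=0.5, n=0.02, g=0.01, delta=0.01, alpha=1/3\ns_val, n_val, g_val, delta_val, alpha_val = 0.5, 0.02, 0.01, 0.01, 1/3\nk_tilde_closed_form = (s_val/(n_val + g_val + delta_val + n_val*g_val))**(1/(1 - alpha_val))\nprint(k_tilde_closed_form)  # roughly 43.86, matching the three root-finding results\n```\n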
\n\n### Visualization of the general Solow diagram\n\nWe visualise the Solow diagram of the model.\n\n\n```python\n# We define the variables of the Solow diagram\ndef solowdiagram_general_solow(k,alpha,delta,s,n,g,T):\n\n# We make a list for the capital accumulation curve and one for the effective-depreciation line\n k_next_period = [k]\n Linear_line = [0]\n \n# And we fill them with the values of the two curves for capital values from 1 up to T\n for t in range(1,T):\n line = (n+delta+g+n*g)*t\n Linear_line.append(line)\n k_plus = s*t**alpha\n k_next_period.append(k_plus)\n \n# Plotting figure, limits and labels.\n plt.figure(dpi=120)\n plt.plot(k_next_period[:T], label='Growth, $sk_t^a$', color='b')\n plt.plot(Linear_line[:T], label = 'Depreciation, $(n+\\delta+g+ng)k_t$', color='r')\n plt.xlim(0,T)\n plt.ylim(0,Linear_line[-1])\n plt.xlabel('$k_t$')\n plt.ylabel('')\n plt.grid()\n plt.legend()\n plt.show()\n \n# Showing figure. Change the values yourself to obtain a higher or lower steady state.\nsolowdiagram_general_solow(0,1/3,0.01,0.5,0.02,0.01,200)\n```\n\nWe've set the parameters to the same values as before and set $g=0.01$. As before, you can change the parameters yourself. Although it might be hard to see, the steady state value of $k_t$ has decreased compared with the model in chapter 3. This is because we now look at the technology-adjusted capital per worker.\n\nAdding technology to the model ensures that there is growth in steady state, but it also brings the technology-adjusted steady state to a lower level. This makes sense, as countries that are considered to be in steady state still experience growth in GDP and GDP per capita. You may also notice that the steady state value in the diagram is the same as we found in the numerical solution. This concludes the analysis of the general Solow model. \n\nIn the next part of the project we will be focusing on the Solow model from chapter 6, which includes human capital. \n\n## The Solow model with human capital (Chapter 6)\n\nThe third part of our project will focus on the Solow model with human capital. This model addresses the two discrepancies between the empirical evidence and the model's predictions.\n\n### The model is given as follows\n\n1. $Y_t = K_t^\\alpha H_t^\\varphi (A_t L_t)^{1-\\alpha-\\varphi}$\n2. $L_{t+1} = (1+n)L_t$\n3. $A_{t+1} = (1+g)A_t$\n4. $H_{t+1} = s_H Y_t + (1-\\delta)H_t$\n5. $K_{t+1} = s_K Y_t +(1-\\delta)K_t$\n\n$r_t = \\alpha \\left(\\frac{K_t}{A_t L_t}\\right)^{\\alpha-1}\\left(\\frac{H_t}{A_t L_t}\\right)^\\varphi$\n\n$w_t = (1-\\alpha-\\varphi) \\left(\\frac{K_t}{A_t L_t}\\right)^\\alpha \\left(\\frac{H_t}{A_t L_t}\\right)^\\varphi A_t$\n\nThe model consists of the following variables and parameters:\n1. $Y_t$ is GDP, which is given as a Cobb-Douglas function.\n2. $L_t$ is the amount of labour. The labour force has a growth rate of $n$.\n3. $A_t$ is the technological level and it has a growth rate of $g$.\n4. $K_t$ is the amount of physical capital and $H_t$ is the amount of human capital.\n5. $s_H$ and $s_K$ are the fractions of GDP saved in human capital and physical capital respectively; both are constant.\n6. $\\varphi$ is human capital's contribution to GDP.\n7. $\\alpha$ is physical capital's contribution to GDP.\n
\n\n```python\n# Defining symbols for sympy \n\n# period t\nK = sm.symbols(\"K_t\")\nH = sm.symbols(\"H_t\")\nA = sm.symbols(\"A_t\")\nL = sm.symbols(\"L_t\")\n\nktilde = sm.symbols(\"ktilde_t\")\nhtilde = sm.symbols(\"htilde_t\")\nytilde = sm.symbols(\"ytilde_t\")\n\n# Steady state \nkstar = sm.symbols(\"ktilde^*\")\nhstar = sm.symbols(\"htilde^*\")\nystar = sm.symbols(\"ytilde^*\")\n\n# Parameters\nphi = sm.symbols(\"varphi\")\nsK = sm.symbols(\"s_K\")\nsH = sm.symbols(\"s_H\")\nalpha = sm.symbols('alpha')\ndelta = sm.symbols('delta')\nvarphi = sm.symbols('varphi')\ng = sm.symbols('g')\nn = sm.symbols('n')\n```\n\n\n```python\n# Defining the variables in sympy\n\n# GDP\nY = K**alpha*H**phi*(A*L)**(1-alpha-phi)\n\n# Physical capital\nKt1 = sK*Y +(1-delta)*K\n\n# Human capital\nHt1 = sH*Y +(1-delta)*H\n\n# Technology level and labor force\nAt1 = (1+g)*A\nLt1 = (1+n)*L\n```\n\n\n```python\n# By differentiating Y_t (GDP) with respect to K and L you get the interest rate and the wage in the economy.\nr = sm.simplify(sm.diff(Y,K))\nw = sm.simplify(sm.diff(Y,L))\n```\n\n\n```python\n# interest rate\nr \n```\n\nThe interest rate is equivalent to the one shown in the model presentation:\n\n$ H_t^\\varphi K_t^{\\alpha-1}\\alpha(A_t L_t)^{-\\alpha-\\varphi+1} = \n\\alpha \\left(\\frac{K_t}{A_t L_t}\\right)^{\\alpha-1} \\left(\\frac{H_t}{A_t L_t}\\right)^\\varphi$\n\n\n```python\n# real wage\nw\n```\n\nThe real wage is equivalent to the one shown in the model presentation:\n\n$ \\frac{H_t^\\varphi K_t^{\\alpha}(A_t L_t)^{-\\alpha-\\varphi+1} (-\\alpha-\\varphi+1)}{L_t} = \n(1-\\alpha-\\varphi) \\left(\\frac{K_t}{A_t L_t}\\right)^{\\alpha} \\left(\\frac{H_t}{A_t L_t}\\right)^\\varphi A_t$\n\n### Analysing the model (the law of motion) \nThe transition equation is a bit more complicated than the one in chapter 5, since this model includes two first-order difference equations. \nEven so, the models are very alike: by setting $\\varphi = 0$ and dropping equation 4, the model from chapter 5 appears. \n\nThe technology-adjusted variables per effective worker are defined as follows: \n\n$ \\tilde{y}_t \\equiv \\frac{y_t}{A_t} \\equiv \\frac{Y_t}{A_tL_t} \\hspace{1cm} \\tilde{k}_t \\equiv \\frac{k_t}{A_t} \\equiv \\frac{K_t}{A_tL_t} \\hspace{1cm} \\tilde{h}_t \\equiv \\frac{h_t}{A_t} \\equiv \\frac{H_t}{A_tL_t} $\n\nWe derive the technology-adjusted variables by dividing both sides of equations (1), (4) and (5) by $A_{t+1}L_{t+1}$ (and by $A_t L_t$ for output). \n\n\n```python\n# Calculation of the technology adjusted GDP per effective worker - Equation (1) / (A_t L_t)\nytilde = sm.simplify((Y/(A*L)))\nytilde\n```\n\nPer definition the technology-adjusted GDP per effective worker can also be expressed as:\n\n$ \\tilde y_t = \\tilde k_t^\\alpha \\tilde h_t^\\varphi$\n\n\n```python\n# Calculation of the technology adjusted physical capital per effective worker - Equation (5) / [(2)(3)]\nktildet1 = sm.simplify(Kt1/(At1*Lt1))\nktildet1\n```\n\n\n```python\n# Calculation of the technology adjusted human capital per effective worker - Equation (4) / [(2)(3)]\nhtildet1 = sm.simplify(Ht1/(At1*Lt1))\nhtildet1\n```\n\nBy using the definitions of $\\tilde k_t$ and $\\tilde h_t$ and substituting $Y_t$ into the technology-adjusted human capital and physical capital equations, they can be reduced to the following transition equations: \n\n$\\text{6.} \\hspace{1cm} \\tilde k_{t+1} = \\frac{1}{(1+n)(1+g)} (s_K \\tilde k_t^\\alpha \\tilde h_t^\\varphi +(1-\\delta)\\tilde k_t)\\\\ \n\\text{7.} \\hspace{1cm} \\tilde h_{t+1} = \\frac{1}{(1+n)(1+g)} (s_H \\tilde k_t^\\alpha \\tilde h_t^\\varphi +(1-\\delta)\\tilde h_t)$\n
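\n\nTo make the reduction step concrete, here is a small added check (a sketch that is not part of the original derivation; the numeric values are arbitrary positive test values chosen only for illustration). Evaluating the expression for $\\tilde k_{t+1}$ computed above and the right-hand side of equation 6 at the same numbers should give the same result:\n\n\n```python\n# Added sanity check of the reduction to equation 6, using arbitrary positive test values\ntest_values = {K: 2.3, H: 1.7, A: 1.1, L: 0.9, sK: 0.2, sH: 0.2,\n               delta: 0.01, n: 0.02, g: 0.01, alpha: 0.3, phi: 0.3}\nlhs = (Kt1/(At1*Lt1)).subs(test_values)\nktilde_val = (K/(A*L)).subs(test_values)\nhtilde_val = (H/(A*L)).subs(test_values)\nrhs = ((sK*ktilde_val**alpha*htilde_val**phi + (1-delta)*ktilde_val)/((1+n)*(1+g))).subs(test_values)\nprint(sm.N(lhs), sm.N(rhs))  # both evaluate to the same number (about 2.53 with these values)\n```\n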
\n\n### Finding steady state values for the key endogenous variables \n\n**Step 1:** Subtract $\\tilde k_t$ from both sides of equation 6 and $\\tilde h_t$ from both sides of equation 7 to rewrite them as Solow equations.\n\n\n\n```python\n# Defining/calculating the Solow equation for physical capital\nksolow = sm.simplify(ktildet1-K/(A*L))\nksolow\n```\n\n\n```python\n# Defining/calculating the Solow equation for human capital\nhsolow = sm.simplify(htildet1 - H/(A*L))\nhsolow\n```\n\n**Step 2:** In steady state $ \\tilde k_{t+1} = \\tilde k_t$ and $\\tilde h_{t+1} = \\tilde h_t$, so we can set both equations equal to 0.\n\n\n```python\nksolowss= sm.Eq(ksolow, 0)\nksolowss\n```\n\n\n```python\nhsolowss = sm.Eq(hsolow, 0)\nhsolowss\n```\n\n\n```python\n# By using the definitions listed throughout this chapter of the project the two equations above can be reduced to: \n\nksolowss1 = sm.Eq(sK*ktilde**alpha*htilde**varphi - (n + g + delta + n*g)*ktilde, 0)\nhsolowss1= sm.Eq(sH*ktilde**alpha*htilde**varphi - (n + g + delta + n*g)*htilde, 0)\n```\n\n\n```python\n# The Solow equation for physical capital in steady state\nksolowss1\n```\n\n\n```python\n# The Solow equation for human capital in steady state\nhsolowss1\n```\n\n**Step 3:** Isolating $\\tilde h_t$ and $\\tilde k_t$ \n\n\n```python\n# Finding steady state for physical capital\nkss = sm.solve(ksolowss1, ktilde)[0]\nkSS = sm.Eq(kstar, sm.solve(ksolowss1, ktilde)[0])\nkSS\n```\n\nSubstituting $\\tilde h_t = \\frac{s_H}{s_K}\\tilde k_t$ into the expression: \n\n\n```python\nkss1 = kss.subs(htilde, (ktilde*sH)/sK)\nkss2 = sm.Eq(kstar, kss.subs(htilde, (ktilde*sH)/sK))\nkss2\n```\n\n\n```python\n# Finding steady state for human capital\nhss = sm.solve(hsolowss1, htilde)[0]\nhSS = sm.Eq(hstar, sm.solve(hsolowss1, htilde)[0])\nhSS\n```\n\nSubstituting $\\tilde k_t = \\frac{s_K}{s_H}\\tilde h_t$ into the expression: \n\n\n```python\nhss1 = hss.subs(ktilde, (htilde*sK)/sH)\nhss2 = sm.Eq(hstar, hss.subs(ktilde, (htilde*sK)/sH))\nhss2\n```\n\n\n```python\n# Substituting the steady state variables into technology adjusted GDP per effective worker\n\nys = sm.Eq(ystar, kss1**alpha * hss1 **varphi)\nys\n```\n\n### Numerical solution to the model\n\nOnce again we find a numerical solution with the Brent, Newton and Bisect methods. 
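\n\nOne detail worth spelling out first (an added explanatory note): dividing the steady state condition for physical capital by the one for human capital eliminates the common term $\\tilde k_t^\\alpha \\tilde h_t^\\varphi$, which leaves\n\n$\\frac{s_K}{s_H} = \\frac{\\tilde k^*}{\\tilde h^*} \\quad \\Rightarrow \\quad \\tilde h^* = \\frac{s_H}{s_K}\\tilde k^*$\n\nThis is why the functions below can reduce the problem to one dimension by substituting $\\tilde h = \\frac{s_H}{s_K}\\tilde k$ (and, symmetrically, $\\tilde k = \\frac{s_K}{s_H}\\tilde h$) before running the root finders.\n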
\n\n\n```python\ndef human_brentq(a=1, b=100, alpha=0.3, phi=0.3, delta=0.01, sK=0.2, sH=0.2, n=0.02, g=0.01):\n\n return print(f'The steady state value for k with the Brent method is: {optimize.brentq(lambda k: sK*k**alpha*((sH/sK)*k)**phi-(n+g+delta+n*g)*k, a, b):.20f} \\nThe steady state value for h with the Brent method is: {optimize.brentq(lambda h: sH*((sK/sH)*h)**alpha*h**phi-(n+g+delta+n*g)*h, a, b):.20f}')\nhuman_brentq()\n```\n\n The steady state value for k with the Brent method is: 55.20899689926860531841 \n The steady state value for h with the Brent method is: 55.20899689926860531841\n\n\n\n```python\ndef human_newton(x=1000, alpha=0.3, phi=0.3, delta=0.01, sK=0.2, sH=0.2, n=0.02, g=0.01):\n\n return print(f'The steady state value for k with the Newton method is: {optimize.newton(lambda k: sK*k**alpha*((sH/sK)*k)**phi-(n+g+delta+n*g)*k, x):.20f} \\nThe steady state value for h with the Newton method is: {optimize.newton(lambda h: sH*((sK/sH)*h)**alpha*h**phi-(n+g+delta+n*g)*h, x):.20f}')\nhuman_newton()\n```\n\n The steady state value for k with the Newton method is: 55.20899689926865505640 \n The steady state value for h with the Newton method is: 55.20899689926865505640\n\n\n\n```python\ndef human_bisect(a=1, b=100, alpha=0.3, phi=0.3, delta=0.01, sK=0.2, sH=0.2, n=0.02, g=0.01):\n\n return print(f'The steady state value for k with the Bisect method is: {optimize.bisect(lambda k: sK*k**alpha*((sH/sK)*k)**phi-(n+g+delta+n*g)*k, a, b):.20f} \\nThe steady state value for h with the Bisect method is: {optimize.bisect(lambda h: sH*((sK/sH)*h)**alpha*h**phi-(n+g+delta+n*g)*h, a, b):.20f}')\nhuman_bisect()\n```\n\n The steady state value for k with the Bisect method is: 55.20899689926760345315 \n The steady state value for h with the Bisect method is: 55.20899689926760345315\n\n\nAnd we can conclude that the steady state value is roughly the same regardless of the method. 
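\n\nAs an added cross-check (not part of the original project), inserting $\\tilde h = \\frac{s_H}{s_K}\\tilde k$ into the steady state condition gives a closed form that can be evaluated at the parameter values used above ($\\alpha=\\varphi=0.3$, $s_K=s_H=0.2$, $n=0.02$, $g=0.01$, $\\delta=0.01$):\n\n$\\tilde k^* = \\left(\\frac{s_K^{1-\\varphi}s_H^{\\varphi}}{n+g+\\delta+ng}\\right)^{\\frac{1}{1-\\alpha-\\varphi}} = \\left(\\frac{0.2}{0.0402}\\right)^{2.5} \\approx 55.2$\n\nwhich agrees with the value of about 55.209 returned by all three root-finding methods, so the symbolic and the numerical results are consistent.\n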
\n\n### Simulation and visual presentation of results - Solow with human capital\n\nWe now present the model by visualising the Solow diagram.\n\n\n```python\ndef simulate_human_capital(sK, sH, g, n, T, alpha, phi, delta, htilde, ktilde): \n \n# Defining arrays to contain the values of the two nullclines \n ktilde_movement = [ktilde]\n htilde_movement = [htilde]\n \n# Defining the functions to add to our arrays\n for t in range(1,T):\n ktilde = t\n ktilde_np = ((ktilde**(-alpha+1)*(delta+n*g+n+g))/sK)**(1/(phi))\n ktilde_movement.append(ktilde_np)\n htilde = t\n htilde_np = ((htilde**(-alpha)*(delta+n*g+n+g))/sH)**(1/(phi-1))\n htilde_movement.append(htilde_np)\n \n# Setting up the data plot\n plt.figure(dpi=120)\n plt.plot(htilde_movement[:T], label='$\\Delta \\~{h} = 0$', color='b')\n plt.plot(ktilde_movement[:T], label='$\\Delta \\~{k} = 0$', color='r')\n plt.xlim(0,T)\n plt.ylim(0,T)\n plt.xlabel('$\\~{k}$')\n plt.ylabel('$\\~{h}$')\n plt.grid()\n plt.legend()\n plt.show()\n \nwidgets.interact(simulate_human_capital, \n htilde = widgets.fixed(0), \n ktilde = widgets.fixed(0), \n alpha = widgets.FloatSlider(description='$\\u03B1$', min=0, max=0.5, step=0.05, value=0.3),\n phi = widgets.FloatSlider(description='$\\u03C6$', min=0, max=0.5, step=0.05, value=0.3), \n delta = widgets.FloatSlider(description='$\\u03B4$', min=0.01, max=0.1, step=0.01, value=0.01), \n sK = widgets.FloatSlider(description='$s_K$', min=0.1, max=0.4, step=0.01, value=0.2), \n sH = widgets.FloatSlider(description='$s_H$', min=0.1, max=0.4, step=0.01, value=0.2),\n n = widgets.FloatSlider(description='$n$', min=0.01, max=0.1, step=0.005, value=0.02), \n g = widgets.FloatSlider(description='$g$', min=0.01, max=0.1, step=0.005, value=0.01), \n T = widgets.IntSlider(description='$T$', min=1, max=1000, step=10, value=100))\n```\n\n\n interactive(children=(FloatSlider(value=0.2, description='$s_K$', max=0.4, min=0.1, step=0.01), FloatSlider(va\u2026\n\n\n\n\n\n \n\n\n\nAs shown in the visual representation, adding an extra state variable, human capital, makes the convergence towards the steady state slower. This means that for a given initial amount of human and physical capital, it is going to take a longer time for a country to get to its steady state compared to the previous chapters. Once again we see that the steady state value equals the one we found in the numerical analysis.\n\n## Solow Model with land (Chapter 7)\n\n\n```python\nimport matplotlib.pyplot as plt\nfrom sympy.plotting import plot\nimport numpy as np # Package for scientific computing with Python\nfrom scipy import linalg # Used to do linear algebra functions\nfrom scipy import optimize # For maximizing and minimizing functions\nimport sympy as sm # For solving mathematics with symbols \nfrom sympy import *\nfrom sympy import latex # Used to print results as LaTeX code\nimport ipywidgets as widgets\nsm.init_printing(use_unicode=True) # This will make all expressions pretty print with unicode characters.\n```\n\n### The model\n\nThe model with land consists of the following equations: \n\n1. $Y_t=K^\\alpha_t(A_tL_t)^\\beta X^\\kappa$,$\\hspace{0.5cm}$ $\\alpha>0,$ $\\hspace{0.5cm}$ $\\beta>0$,$\\hspace{0.5cm}$$\\kappa>0$,$\\hspace{0.5cm}$ $\\alpha+\\beta+\\kappa=1$\n2. 
$K_{t+1}=sY_t+(1-\\delta)K_t$,$\\hspace{0.5cm}$ $0 Linear Regression using Gradient Descent\n\nVideo Explanation : https://youtu.be/CzW0w7guvFo\n\n\n```python\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport seaborn as sns\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.metrics import mean_squared_error\n```\n\n\n```python\nX = [3,4,6,12,9,15,10,1,8,13] ## Experience\ny = [16,29,43,65,51,89,57,9,53,68] ## Salary\n```\n\n\n```python\n#X = [3,4,6]\n#y = [16,29,43]\n```\n\n\n```python\nplt.scatter(X,y)\nplt.xlabel('Experience')\nplt.ylabel('Salary')\nplt.grid(alpha=0.25)\nplt.show()\n```\n\n\n```python\n# Lets lienar regession line is y = mx + c\n# lets \n# y = mx +c\nc=0\n# # y_pred = mx\n# calcaulating the y_out we will mutiply the original X values by the slope to produce y_out values for each X\ndef multiply_matrix(x,m,c):\n y_out = []\n for i in range(len(x)):\n y_out.append(x[i]*m + c)\n return y_out\n```\n\n\n```python\n# Lets \nc = 0\nm = 1\ny_m1=multiply_matrix(X,m,c)\ny_m1\n```\n\n\n\n\n [3, 4, 6, 12, 9, 15, 10, 1, 8, 13]\n\n\n\n\n```python\n# line at m =1\nplt.scatter(X, y,label='Scatter Plot') \nplt.plot([min(X), max(X)], [min(y_m1), max(y_m1)], color='red',label='Regression best fit Line at m=1')\nplt.xlabel('Experience')\nplt.ylabel('Salary')\nplt.grid(alpha=0.25)\nplt.legend()\nplt.show()\n```\n\n\n```python\n# calculate cost function 1/n*sum(y-y_out)^2 by looping each sample\n# calculate mse 1/n*sum(y-y_out)^2 by looping each sample\ndef mse(actual, predicted):\n sum_error = 0.0\n # loop over all values\n for i in range(len(actual)):\n # the error is the sum of (actual - prediction)^2\n SSE = (actual[i] - predicted[i])**2\n sum_error = sum_error + SSE\n mean_error = sum_error / float(len(actual))\n return (mean_error)\n```\n\n\n```python\nmse_m1 = mse(y,y_m1)\nmse_m1\n```\n\n\n\n\n 1953.5\n\n\n\n\n```python\n# calculate mse for different m values ,lets take m = range(0,9)\nmse_values = [] # store the MSE for each m where m = range(0,9)\nm = range(0,9)\nc = 0\nfor i in m: \n y_out_values = multiply_matrix(X, m[i],c)\n mse_values.append(mse(y, y_out_values))\n print(\"mse for m :\",m[i], \"and c :\",c ,\" is \", mse(y, y_out_values))\n plt.figure(figsize=(3,2))\n plt.scatter(X, y,label='Scatter Plot') \n plt.plot([min(X), max(X)], [min(y_out_values), max(y_out_values)], color='red',label='regression line')\n plt.xlabel('Experience')\n plt.ylabel('Salary')\n plt.grid(alpha=0.25)\n #plt.legend()\n plt.show()\n```\n\n\n```python\nplt.plot(m,mse_values,label = \"cost function line\")\nplt.xlabel('slope')\nplt.ylabel('Cost Function')\nplt.grid(alpha=0.25)\nplt.legend()\n```\n\n\n```python\nnp.min(mse_values) # at this MSE we get the best bit line when m = 5 and c =0\n# but we have to iterate the both m & c to get the best bit line taht we can get with the help \n# of gradient descent(taking derivative of MSE wrt m and c)\n```\n\n\n\n\n 28.0\n\n\n\nCost Function\n\n\\begin{equation}\nLinear Equation ==> y = mx + c\n\\end{equation}\n\n\\begin{equation}\nJ(x) = 1/n \\sum_{i=1}^{n} (y^{(i)} - y^{(i)}predicted)^2 \n\\end{equation}\n\n\\begin{equation}\ny^{(i)}predicted = (mx^{(i)} + c)\n\\end{equation}\n\n\\begin{equation}\nJ(x) = 1/n \\sum_{i=1}^{n} (y^{(i)} - (mx^{(i)} + c))^2 \n\\end{equation}\n\nGradient\n\n**Derivative of cost function with respect to m ==> slope**\n\\begin{equation}\n\\frac{\\partial J(x^{(i)})}{\\partial m} = -2/n\\sum_{i=1}^{n}(x^{(i)}).(y^{(i)} - (mx^{(i)} + c))\n\\end{equation}\n\n**Derivative of cost function 
with respect to c ==> intercept**\n\\begin{equation}\n\\frac{\\partial J(x^{(i)})}{\\partial c} = -2/n\\sum_{i=1}^{n}(y^{(i)} - (mx^{(i)} + c))\n\\end{equation}\n\n**Why do we take derivative?**\n\nTo get the direction of where is global minimum. If we are on left side of minimum value of parabola then the derivate is negative and negative sign before derivate will make the new value of X go right and thus more closer to minimum value. Vice versa for right side.\n\n\n```python\nimport numpy as np\ndef Simple_LinearRegression_Gradient_Descent(x,y,epochs,learning_rate,n):\n m_curr = c_curr = 0\n for i in range(epochs):\n y_pred = m_curr * x + c_curr\n ## MSE equation == cost_function\n cost = (1/n) * sum([val**2 for val in (y-y_pred)])\n md = -(2/n)*sum(x*(y-y_pred))\n cd = -(2/n)*sum(y-y_pred)\n m_curr = m_curr - learning_rate * md # step_size = learning_rate * md \n c_curr = c_curr - learning_rate * cd # # step_size = learning_rate * cd \n print('>epoch=%d, m=%.3f,c=%.3f, md=%.3f,cd=%.3f,step_size_m=%.3f,step_size_c=%.3f,cost=%.3f' % \n (i,m_curr,c_curr, md,cd,md*learning_rate,cd*learning_rate,(cost)))\n```\n\n\n```python\nx = np.array([3,4,6,12,9,15,10,1,8,13])\ny = np.array([16,29,43,65,51,89,57,9,53,68])\nepochs = 30\nn = len(x)\nlearning_rate = 0.001\nSimple_LinearRegression_Gradient_Descent(x,y,epochs,learning_rate,n)\n```\n\n >epoch=0, m=0.977,c=0.096, md=-976.600,cd=-96.000,step_size_m=-0.977,step_size_c=-0.096,cost=2845.600\n >epoch=1, m=1.787,c=0.176, md=-809.999,cd=-79.987,step_size_m=-0.810,step_size_c=-0.080,cost=1964.756\n >epoch=2, m=2.458,c=0.243, md=-671.814,cd=-66.705,step_size_m=-0.672,step_size_c=-0.067,cost=1358.756\n >epoch=3, m=3.016,c=0.298, md=-557.197,cd=-55.688,step_size_m=-0.557,step_size_c=-0.056,cost=941.840\n >epoch=4, m=3.478,c=0.345, md=-462.128,cd=-46.550,step_size_m=-0.462,step_size_c=-0.047,cost=655.012\n >epoch=5, m=3.861,c=0.384, md=-383.274,cd=-38.971,step_size_m=-0.383,step_size_c=-0.039,cost=457.679\n >epoch=6, m=4.179,c=0.417, md=-317.870,cd=-32.684,step_size_m=-0.318,step_size_c=-0.033,cost=321.917\n >epoch=7, m=4.443,c=0.444, md=-263.620,cd=-27.469,step_size_m=-0.264,step_size_c=-0.027,cost=228.515\n >epoch=8, m=4.661,c=0.467, md=-218.623,cd=-23.143,step_size_m=-0.219,step_size_c=-0.023,cost=164.256\n >epoch=9, m=4.842,c=0.487, md=-181.301,cd=-19.555,step_size_m=-0.181,step_size_c=-0.020,cost=120.045\n >epoch=10, m=4.993,c=0.503, md=-150.344,cd=-16.579,step_size_m=-0.150,step_size_c=-0.017,cost=89.628\n >epoch=11, m=5.117,c=0.517, md=-124.668,cd=-14.110,step_size_m=-0.125,step_size_c=-0.014,cost=68.700\n >epoch=12, m=5.221,c=0.530, md=-103.370,cd=-12.063,step_size_m=-0.103,step_size_c=-0.012,cost=54.301\n >epoch=13, m=5.307,c=0.540, md=-85.705,cd=-10.364,step_size_m=-0.086,step_size_c=-0.010,cost=44.393\n >epoch=14, m=5.378,c=0.549, md=-71.053,cd=-8.955,step_size_m=-0.071,step_size_c=-0.009,cost=37.576\n >epoch=15, m=5.436,c=0.557, md=-58.900,cd=-7.786,step_size_m=-0.059,step_size_c=-0.008,cost=32.884\n >epoch=16, m=5.485,c=0.563, md=-48.820,cd=-6.816,step_size_m=-0.049,step_size_c=-0.007,cost=29.655\n >epoch=17, m=5.526,c=0.569, md=-40.459,cd=-6.011,step_size_m=-0.040,step_size_c=-0.006,cost=27.432\n >epoch=18, m=5.559,c=0.575, md=-33.524,cd=-5.344,step_size_m=-0.034,step_size_c=-0.005,cost=25.901\n >epoch=19, m=5.587,c=0.580, md=-27.772,cd=-4.790,step_size_m=-0.028,step_size_c=-0.005,cost=24.846\n >epoch=20, m=5.610,c=0.584, md=-23.001,cd=-4.331,step_size_m=-0.023,step_size_c=-0.004,cost=24.120\n >epoch=21, m=5.629,c=0.588, 
md=-19.043,cd=-3.949,step_size_m=-0.019,step_size_c=-0.004,cost=23.618\n >epoch=22, m=5.645,c=0.591, md=-15.761,cd=-3.633,step_size_m=-0.016,step_size_c=-0.004,cost=23.272\n >epoch=23, m=5.658,c=0.595, md=-13.039,cd=-3.370,step_size_m=-0.013,step_size_c=-0.003,cost=23.032\n >epoch=24, m=5.669,c=0.598, md=-10.781,cd=-3.153,step_size_m=-0.011,step_size_c=-0.003,cost=22.866\n >epoch=25, m=5.678,c=0.601, md=-8.908,cd=-2.972,step_size_m=-0.009,step_size_c=-0.003,cost=22.750\n >epoch=26, m=5.685,c=0.604, md=-7.354,cd=-2.821,step_size_m=-0.007,step_size_c=-0.003,cost=22.669\n >epoch=27, m=5.691,c=0.606, md=-6.065,cd=-2.697,step_size_m=-0.006,step_size_c=-0.003,cost=22.612\n >epoch=28, m=5.696,c=0.609, md=-4.997,cd=-2.593,step_size_m=-0.005,step_size_c=-0.003,cost=22.571\n >epoch=29, m=5.700,c=0.612, md=-4.110,cd=-2.507,step_size_m=-0.004,step_size_c=-0.003,cost=22.542\n\n\n\n```python\n## Plot the cost with each iteration\nimport matplotlib.pyplot as plt\ndef Simple_LinearRegression_Gradient_Descent_cost_plot(x,y,epochs,learning_rate,n):\n m_curr = 0\n c_curr = 0\n iter = epochs\n cost_val = [] # list to store the cost in each iteration\n intercept = []\n coefficient = []\n for i in range(iter):\n y_pred = (m_curr * x) + c_curr\n cost = 1/n * sum([data**2 for data in (y - y_pred)])\n cost_val.append(cost) ## append the cost_val \n md = -(2/n) * sum(x * (y - y_pred))\n cd = -(2/n) * sum(y - y_pred)\n m_curr = m_curr - (learning_rate * md) # \n coefficient.append(m_curr)\n c_curr = c_curr - (learning_rate * cd)\n intercept.append(c_curr)\n return intercept,coefficient,cost_val\n```\n\n\n```python\nx = np.array([3,4,6,12,9,15,10,1,8,13])\ny = np.array([16,29,43,65,51,89,57,9,53,68])\nepochs = 100\nn = len(x)\nlearning_rate = 0.001\nintercept,coefficient,cost_val = Simple_LinearRegression_Gradient_Descent_cost_plot(x,y,epochs,learning_rate,n)\n\nplt.figure(figsize=(5,5))\nplt.plot(coefficient, cost_val, '.r',color='blue',label=\"cost function curve\")\nplt.xlabel('coefficient')\nplt.ylabel('cost_val')\nplt.grid(alpha=0.25)\nplt.legend()\n```\n\n\n```python\n## Now plot grapg to show the regression best fit line \nimport matplotlib.pyplot as plt\n%matplotlib inline\n\ndef Simple_LinearRegression_Gradient_Descent_best_fit_line(x,y,epochs,learning_rate,n):\n m_curr = 0\n c_curr = 0\n iter = epochs\n plt.scatter(x,y,color = \"red\",label='Scatter Plot')\n for i in range(iter):\n ## y = mx + c\n y_pred = m_curr * x + c_curr\n ## MSE equation == cost_function\n cost = 1/n*sum([val**2 for val in (y - y_pred)])\n ## lets take derivative\n md = -2/n * sum(x*(y - y_pred))\n cd = -2/n * sum(y - y_pred)\n plt.plot(x,y_pred,color = \"blue\",linewidth='.1') \n plt.xlabel('Experience')\n plt.ylabel('Salary')\n #plt.style.use('fivethirtyeight')\n #plt.legend()\n m_curr = m_curr - learning_rate * md\n c_curr = c_curr - learning_rate * cd \n```\n\n\n```python\n#x = np.array([3,4,6,12,9,15,10,1,8,13])\n#y = np.array([16,29,43,65,51,89,57,9,53,68])\nepochs = 100\nn = len(x)\nlearning_rate = 0.001\nSimple_LinearRegression_Gradient_Descent_best_fit_line(x,y,epochs,learning_rate,n)\n```\n\n\n```python\nlr = LinearRegression()\nlr.fit(x.reshape(-1,1),y)\n```\n\n\n\n\n LinearRegression()\n\n\n\n\n```python\nlr.intercept_,lr.coef_\n```\n\n\n\n\n (5.334568554790884, array([5.26733722]))\n\n\n\n\n```python\ny_predict = lr.predict(x.reshape(-1,1))\ny_predict\n```\n\n\n\n\n array([21.1365802 , 26.40391742, 36.93859185, 68.54261514, 52.74060349,\n 84.34462679, 58.00794071, 10.60190577, 47.47326628, 
73.80995236])\n\n\n\n\n```python\nmean_squared_error(y,y_predict)\n```\n\n\n\n\n 17.499947061937544\n\n\n\n\n```python\nfrom sklearn.linear_model import ElasticNet,Ridge,Lasso\n```\n\n\n```python\nlre = ElasticNet()\nlre.fit(x.reshape(-1,1),y)\n```\n\n\n\n\n ElasticNet()\n\n\n\n\n```python\nlre.intercept_,lre.coef_\n```\n\n\n\n\n (6.643630737493552, array([5.1057246]))\n\n\n\n\n```python\ny_predict_e = lre.predict(x.reshape(-1,1))\ny_predict_e\n```\n\n\n\n\n array([21.96080454, 27.06652914, 37.27797834, 67.91232594, 52.59515214,\n 83.22949974, 57.70087674, 11.74935534, 47.48942754, 73.01805054])\n\n\n\n\n```python\nmean_squared_error(y,y_predict_e)\n```\n\n\n\n\n 17.993328121953727\n\n\n", "meta": {"hexsha": "7c59338daaab4eae34d0499c038e1fe09f4535ac", "size": 167533, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Machine_Learning/Linear_Regression/Gradient_Descent_mathematical_intuition_and_hands-on_Linear_Regression.ipynb", "max_stars_repo_name": "atulpatelDS/Youtube", "max_stars_repo_head_hexsha": "386204328b3d363d05f65f1f2381f4c2519e30d5", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2021-04-05T17:29:20.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-16T21:55:44.000Z", "max_issues_repo_path": "Machine_Learning/Linear_Regression/Gradient_Descent_mathematical_intuition_and_hands-on_Linear_Regression.ipynb", "max_issues_repo_name": "MichaelGhaly20/Youtube", "max_issues_repo_head_hexsha": "386204328b3d363d05f65f1f2381f4c2519e30d5", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Machine_Learning/Linear_Regression/Gradient_Descent_mathematical_intuition_and_hands-on_Linear_Regression.ipynb", "max_forks_repo_name": "MichaelGhaly20/Youtube", "max_forks_repo_head_hexsha": "386204328b3d363d05f65f1f2381f4c2519e30d5", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 26, "max_forks_repo_forks_event_min_datetime": "2021-05-22T10:40:51.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-07T01:35:20.000Z", "avg_line_length": 183.4972617744, "max_line_length": 33912, "alphanum_fraction": 0.9007717882, "converted": true, "num_tokens": 4212, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9566341975270266, "lm_q2_score": 0.8705972566572504, "lm_q1q2_score": 0.8328431079915396}} {"text": "---\nauthor: Nathan Carter (ncarter@bentley.edu)\n---\n\nThis answer assumes you have imported SymPy as follows.\n\n\n```python\nfrom sympy import * # load all math functions\ninit_printing( use_latex='mathjax' ) # use pretty math output\n```\n\nSymPy has support for piecewise functions built in, using `Piecewise`.\nThe function above would be written as follows.\n\n\n```python\nvar( 'x' )\nformula = Piecewise( (x**2, x>2), (1+x, x<=2) )\nformula\n```\n\n\n\n\n$\\displaystyle \\begin{cases} x^{2} & \\text{for}\\: x > 2 \\\\x + 1 & \\text{otherwise} \\end{cases}$\n\n\n\nWe can test to be sure the function works correctly by plugging in a few\n$x$ values and ensuring the correct $y$ values result.\nHere we're using the method from how to substitute a value for a symbolic variable.\n\n\n```python\nformula.subs(x,1), formula.subs(x,2), formula.subs(x,3)\n```\n\n\n\n\n$\\displaystyle \\left( 2, \\ 3, \\ 9\\right)$\n\n\n\nFor $x=1$ we got $1+1=2$. For $x=2$ we got $2+1=3$. 
For $x=3$, we got $3^2=9$.\n", "meta": {"hexsha": "31ceec111060cb3f4e5cc4b8201b7ae6e6981c70", "size": 2767, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "database/tasks/How to write a piecewise-defined function/Python, using SymPy.ipynb", "max_stars_repo_name": "nathancarter/how2data", "max_stars_repo_head_hexsha": "7d4f2838661f7ce98deb1b8081470cec5671b03a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "database/tasks/How to write a piecewise-defined function/Python, using SymPy.ipynb", "max_issues_repo_name": "nathancarter/how2data", "max_issues_repo_head_hexsha": "7d4f2838661f7ce98deb1b8081470cec5671b03a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "database/tasks/How to write a piecewise-defined function/Python, using SymPy.ipynb", "max_forks_repo_name": "nathancarter/how2data", "max_forks_repo_head_hexsha": "7d4f2838661f7ce98deb1b8081470cec5671b03a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-07-18T19:01:29.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-29T06:47:11.000Z", "avg_line_length": 23.4491525424, "max_line_length": 125, "alphanum_fraction": 0.5070473437, "converted": true, "num_tokens": 306, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9585377249197138, "lm_q2_score": 0.8688267796346599, "lm_q1q2_score": 0.8328032447003284}} {"text": "```python\n## This cell just imports necessary modules\n%pylab notebook\nfrom sympy import sin, cos, Function, Symbol, diff, integrate, exp\n```\n\n\n```python\n###### FIRST PARTIAL DERIVATIVES ######\n###### Lecture 2, slide 10 ######\n\n# Define the independent variables using Symbol\nr = Symbol('r')\nh = Symbol('h')\n# Define the function V(r,h)\nV = pi*(r**2)*h\n\n# The first partial derivative of V w.r.t h (i.e. r is kept constant)\nprint(\"The first partial derivative of V w.r.t. h is: \", diff(V, h))\n# The first partial derivative of V w.r.t r (i.e. h is kept constant)\nprint(\"The first partial derivative of V w.r.t. r is: \", diff(V, r))\n\n```\n\n\n```python\n###### SECOND PARTIAL DERIVATIVES ######\n###### Lecture 2, slide 12 ######\n\nx = Symbol('x')\ny = Symbol('y')\nf = (x**2)*sin(y)\n\nf_x = diff(f, x)\nf_y = diff(f, y)\n\nprint(\"The first partial derivatives of f = (x**2)*sin(y) are: \")\nprint(\"f_x = \", f_x)\nprint(\"f_y = \", f_y, \"\\n\")\n\nf_xx = diff(f_x, x)\nf_xy = diff(f_x, y)\nf_yx = diff(f_y, x)\nf_yy = diff(f_y, y)\n\nprint(\"The second partial derivatives of f = (x**2)*sin(y) are: \")\nprint(\"f_xx = \", f_xx)\nprint(\"f_xy = \", f_xy)\nprint(\"f_yy = \", f_yy)\nprint(\"f_yx = \", f_yx)\n```\n\n\n```python\n###### CHAIN RULE ######\n###### Lecture 2, slide 19 ######\na = Symbol('a')\nb = Symbol('b')\nt = Symbol('t')\n\nx = a*t\ny = b*t\n\nT = 3*y*sin(x)\n\n# SymPy automatically applies the chain rule here:\nprint(\"Differentiating T = 3*y*sin(x) wrt t using the chain rule: \\n\")\nprint(diff(T, t))\n```\n\n\n```python\n###### DEFINITE AND INDEFINITE INTEGRALS ######\n###### Lecture 2, slide 21 ######\nx = Symbol('x')\ny = Symbol('y')\n\n# Remember: Indefinite integrals result in a constant 'c'. 
SymPy sets this to zero.\n# f is the function we want to integrate.\nf = cos(x)\n\n# The second argument is the variable we want to integrate with respect to.\n# (in this example, it is 'x').\nprint(\"Integrating cos(x) yields:\")\nprint(integrate(f, x)) \n\n# Using integrate(2*x, (x, a, b)) evaluates a DEFINITE integral between x=a and x=b\na = 0\nb = 2\nprint(\"Integrating 2*x between x=0 and x=2 yields:\")\nprint(integrate(2*x, (x, a, b)))\n```\n\n\n```python\n###### DOUBLE INTEGRALS ######\n###### Lecture 2, slide 26 ######\n\n# The function we want to integrate.\nf = 2*(x**2)*y\n\n# First integrate f wrt y between y=0 and y=2.\ninner_integral = integrate(f, (y, 0, 2)) \nprint(\"The inner integral is: \", inner_integral)\n\n# Then integrate the inner_integral wrt x.\nouter_integral = integrate(inner_integral, (x, 0, 2)) \nprint(\"The outer integral is: \", outer_integral)\n```\n", "meta": {"hexsha": "8c6fdec406b0144d0ebb5d963843068ee88717de", "size": 4689, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "mathematics/mm1/Lecture_2_Multivariable_Calculus.ipynb", "max_stars_repo_name": "jrper/thebe-test", "max_stars_repo_head_hexsha": "554484b1422204a23fe47da41c6dc596a681340f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "mathematics/mm1/Lecture_2_Multivariable_Calculus.ipynb", "max_issues_repo_name": "jrper/thebe-test", "max_issues_repo_head_hexsha": "554484b1422204a23fe47da41c6dc596a681340f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "mathematics/mm1/Lecture_2_Multivariable_Calculus.ipynb", "max_forks_repo_name": "jrper/thebe-test", "max_forks_repo_head_hexsha": "554484b1422204a23fe47da41c6dc596a681340f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 26.9482758621, "max_line_length": 92, "alphanum_fraction": 0.5065045852, "converted": true, "num_tokens": 790, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9585377296574669, "lm_q2_score": 0.8688267643505193, "lm_q1q2_score": 0.8328032341661897}} {"text": "# Etivity 1.1.1: Modelling an infected cohort\n\nIn this etivity, you will code your first simple model and use it to answer a research question. To get used to the format for models in R, we'll start by giving you some parts of the code, and you need to fill in the missing numbers or code blocks by replacing the *#YOUR CODE#* placeholders. At the end of each etivity, we'll provide a solution file that contains the full code, annotated with comments. Make sure you understand each line, and soon you'll be able to write your own model from scratch!\n\nYour task is to find out how long it takes for\na cohort of infected people to recover. As you saw in the video,\nto answer this question you need to keep track of 2 populations: \nthose that are infected (compartment I), and those that have recovered (compartment R). \nInfected people recover at a rate *gamma*. 
The differential equations describing this are:\n\n\\begin{align}\n\\frac{dI}{dt} & = -\\gamma I \\\\\n\\frac{dR}{dt} & = \\gamma I\n\\end{align}\n\nLoad the packages which you need for this etivity, by running the following cell:\n\n\n```\nlibrary(deSolve) # package to solve the model\nlibrary(reshape2) # package to change the shape of the model output\nlibrary(ggplot2) # package for plotting\n```\n\nTo start, it is useful to code what we know about the situation we want to model.\nWe are looking at a cohort of 10$^6$ currently infected people, and no one has recovered so far. The average duration of infection is 10 days. The question we want to answer is how many people will recover from the infection over a 4-week period.\n\nGiven this data, fill in the correct values to the following variables, and run the cell:\n\n\n```\ninitial_number_infected <- 1000000 # the initial infected population size\ninitial_number_recovered <- 0#YOUR CODE# # the initial number of people in the recovered state\nrecovery_rate <- 0.1 # the rate of recovery gamma, in units of days^-1\nfollow_up_duration <- 4*7 #YOUR CODE# # the duration to run the model for, in units of days\n \n# Hint: the units of the recovery rate and the follow-up duration should be consistent.\n```\n\nNow, we combine this data into objects that are recognised by the deSolve package as **model input**. To do this, again run the code below.\n\n\n```\n# The initial state values are stored as a vector and each value is assigned a name.\ninitial_state_values <- c(I = initial_number_infected, \n R = initial_number_recovered)\n\n# Parameters are also stored as a vector with assigned names and values. \nparameters <- c(gamma = recovery_rate) \n# In this case we only have one parameter, gamma.\n```\n\nThink about: what kind of information is stored in the *initial_state_values* and *parameters* vectors?\n\nAdditionally, we need to specify the time we want the model to run for. This depends on the question we want to answer. In the cell below, the duration you specified earlier is automatically filled in when you run it.\n\n\n```\n# The times vector creates a sequence of timepoints at which we want to calculate the number of people \n# in the I and R compartment.\ntimes <- seq(from = 0, to = follow_up_duration, by = 1) \n```\n\nThink about: what kind of information is stored in the *times* vector?\n\nCheck your answers by having a look at each of these vectors to familiarise yourself with the structure: in the following code cell we have typed the object names, so you just need to press \"Run\" to see what each of them contains.\n\n\n```\ninitial_state_values\nparameters\ntimes\n```\n\n\n
    I: 1e+06    R: 0\n\n    gamma: 0.1\n\n    times: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28\n\n
\n\n\n\nThe next step is specifying the model. Using the example code in the introductory document, complete the following model function with the differential equations above.\n\n\n```\ncohort_model <- function(time, state, parameters) { \n \n with(as.list(c(state, parameters)), {\n \n dI <- -gamma*I\n dR <- gamma*I\n \n return(list(c(dI, dR)))\n })\n \n}\n```\n\nNow all that's left to do is solving this set of equations using the deSolve package. Fill in the following command, which calculates and stores the number of infected and recovered people at each timestep in the \"output\" dataframe. Don't forget to run it!\n\n\n```\n# Hint: if you can't remember what those arguments correspond to, just look up the ode help file:\n??ode\n```\n\n\n```\noutput <- as.data.frame(ode(y = initial_state_values, \n times = times, \n func = cohort_model,\n parms = parameters))\n```\n\nPrinting the **model output** returns a dataframe with columns time (containing our times vector), I (containing the number of infected people at each timestep) and R (containing the number of recovered people at each timestep):\n\n\n```\noutput\n```\n\n
    time             I          R\n       0  1.000000e+06        0.0\n       1  3.678794e+05   632120.6\n       2  1.353351e+05   864664.9\n       3  4.978697e+04   950213.0\n       4  1.831559e+04   981684.4\n       5  6.737923e+03   993262.1\n       6  2.478741e+03   997521.3\n       7  9.118772e+02   999088.1\n       8  3.354606e+02   999664.5\n       9  1.234090e+02   999876.6\n      10  4.539958e+01   999954.6\n      11  1.670156e+01   999983.3\n      12  6.144155e+00   999993.9\n      13  2.260306e+00   999997.7\n      14  8.315196e-01   999999.2\n      15  3.058986e-01   999999.7\n      16  1.125336e-01   999999.9\n      17  4.139872e-02  1000000.0\n      18  1.522972e-02  1000000.0\n      19  5.602676e-03  1000000.0\n      20  2.061173e-03  1000000.0\n      21  7.582818e-04  1000000.0\n      22  2.789371e-04  1000000.0\n      23  1.026131e-04  1000000.0\n      24  3.773270e-05  1000000.0\n      25  1.391230e-05  1000000.0\n      26  5.151106e-06  1000000.0\n      27  1.851387e-06  1000000.0\n      28  6.704086e-07  1000000.0\n
    \n\n\n\n### Question: Based on the output, how many people have recovered after 4 weeks? What proportion of the total population does this correspond to?\n\nNow, plot your model output in the following cell, with time on the x axis and the number of infected and recovered people on the y axis. You can use the introductory document for help with this.\n\n\n```\n# First turn the output dataset into a long format, so that the number in each compartment at each timestep\n# are all in the same column\noutput_long <- melt(as.data.frame(output), id = \"time\") \n\n# Plot the number of people in each compartment over time\nggplot(data = output_long, # specify object containing data to plot\n aes(x = time, y = value, colour = variable, group = variable))+ \n # in the long-format output dataset, the number in each compartment is in the \"value\" column and the \n # compartment the number relates to is in the \"variable\" column. We are telling ggplot to assign a different \n # colour to each compartment/group, which automatically generates a legend\n geom_line() + \n # we want to represent the data over time as lines. This command automatically looks to the specifications saved\n # in the ggplot command above to know which data to plot\n xlab(\"Time (days)\")+ # add label for x axis\n ylab(\"Number of people\") + # add label for y axis\n labs(title = paste(\"Number infected and recovered over time when gamma =\",parameters[\"gamma\"],\"days^-1\")) # add title\n# Using the paste() command, we can combine sentences with the values stored in variables (here gamma)\n```\n\n### Question: Based on the plot, at what timepoint were infected and recovered individuals equal in number?\n\nFor the last part of the etivity, try varying *gamma* to see how it affects the output. For example, in the cell below change *gamma* to correspond to an average infectious period of 2 days and 20 days. What is the recovery rate in these 2 cases?\n\n7 days\n\n\n```\nparameters <- c(gamma = 1.0)\noutput <- as.data.frame(ode(y = initial_state_values, \n times = times, \n func = cohort_model,\n parms = parameters))\n\noutput_long <- melt(as.data.frame(output), id = \"time\") # turn output dataset into long format\n\nggplot(data = output_long, # specify object containing data to plot\n aes(x = time, y = value, colour = variable, group = variable)) + # assign columns to axes and groups\n geom_line() + # represent data as lines\n xlab(\"Time (days)\")+ # add label for x axis\n ylab(\"Number of people\") + # add label for y axis\n labs(title = paste(\"Number infected and recovered over time when gamma =\",parameters[\"gamma\"],\"days^-1\")) # add title \n# Since the initial number in each compartment and the timesteps of interest haven't changed, \n# these are the only parts of the code we need to rerun.\n\n# Now, copy-paste your plot code from above here to visualise the output.\n```\n\n### Question: What changes do you observe in the transition to the recovered compartment if *gamma* is higher or lower? For example, how long does it take for everyone to recover in both cases?\n\nhigher gamma, faster recovery time\n\n## Well done on writing your first model code. Now, check the solutions!\nOnce you are done with an etivity, you can open the \"Solutions to etivities\" folder in your Jupyter workspace. In this folder, you will find feedback and model code for all the coding etivities throughout the course. After completing an exercise, you should always carefully compare your answers and code with the solutions! 
It is especially important to get the coding right from the start, because we are building on the same modelling framework throughout the course. Additionally, the solution files sometimes contain further information that will help you deepen your understanding of the lesson. \nTo make the most of the learning experience, we always recommend trying to complete the whole etivity before checking the solutions - but if you are stuck at any point, they can also help you to move on to the next part.\n\n\n```\n\n```\n", "meta": {"hexsha": "2f737c25fa23fdee9f90ca16369fe177a02388dc", "size": 119949, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "course-1/utf-8''IDM1.1.07 - Modelling an infected cohort.ipynb", "max_stars_repo_name": "RKiddle/IDM", "max_stars_repo_head_hexsha": "f32e0038df97f67acc47097827ed1430d8e13478", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "course-1/utf-8''IDM1.1.07 - Modelling an infected cohort.ipynb", "max_issues_repo_name": "RKiddle/IDM", "max_issues_repo_head_hexsha": "f32e0038df97f67acc47097827ed1430d8e13478", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "course-1/utf-8''IDM1.1.07 - Modelling an infected cohort.ipynb", "max_forks_repo_name": "RKiddle/IDM", "max_forks_repo_head_hexsha": "f32e0038df97f67acc47097827ed1430d8e13478", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 189.4928909953, "max_line_length": 47850, "alphanum_fraction": 0.8584148263, "converted": true, "num_tokens": 3321, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9219218305645894, "lm_q2_score": 0.9032942086563877, "lm_q1q2_score": 0.8327666503828891}} {"text": "# SymPy Tutorial\n\n*Arthur Ryman*
    \n*Last Updated: 2020-04-14*\n\n\nThis notebook contains the examples from the \n[SymPy Tutorial](https://docs.sympy.org/latest/tutorial/index.html).\n\n## Calculus\n\n\n```python\nfrom sympy import *\nx, y, z = symbols('x y z')\ninit_printing(use_unicode=True)\n```\n\n### Derivatives\n\n\n```python\ndiff(cos(x), x)\n```\n\n\n```python\ndiff(exp(x**2), x)\n```\n\n\n```python\ndiff(x**4, x, x, x)\n```\n\n\n```python\ndiff(x**4, x, 3)\n```\n\n\n```python\nexpr = exp(x*y*z)\ndiff(expr, x, y, y, z, z, z, z)\n```\n\n\n```python\ndiff(expr, x, y, 2, z, 4)\n```\n\n\n```python\ndiff(expr, x, y, y, z, 4)\n```\n\n\n```python\nexpr.diff(x, y, y, z, 4)\n```\n\n\n```python\nderiv = Derivative(expr, x, y, y, z, 4)\nderiv\n```\n\n\n```python\nderiv.doit()\n```\n\n\n```python\nm, n, a, b = symbols('m n a b')\nexpr = (a*x + b)**m\nexpr.diff((x,n))\n```\n\n### Integrals\n\n\n```python\nintegrate(cos(x), x)\n```\n\n\n```python\nintegrate(exp(-x), (x, 0, oo))\n```\n\n\n```python\nintegrate(exp(-x**2 - y**2), (x, -oo, oo), (y, -oo, oo))\n```\n\n\n```python\nexpr = integrate(x**x, x)\nprint(expr)\nexpr\n```\n\n\n```python\nexpr = Integral(log(x)**2, x)\nexpr\n```\n\n\n```python\nexpr.doit()\n```\n\n\n```python\ninteg = Integral((x**4 + x**2*exp(x) - x**2 - 2*x*exp(x) - 2*x -\n exp(x))*exp(x)/((x - 1)**2*(x + 1)**2*(exp(x) + 1)), x)\ninteg\n```\n\n\n```python\ninteg.doit()\n```\n\n\n```python\ninteg = Integral(sin(x**2), x)\ninteg\n```\n\n\n```python\ninteg.doit()\n```\n\n\n```python\ninteg = Integral(x**y*exp(-x), (x, 0, oo))\ninteg\n```\n\n\n```python\ninteg.doit()\n```\n\n### Limits\n\n\n```python\nlimit(sin(x)/x, x, 0)\n```\n\n\n```python\nexpr = x**2/exp(x)\nexpr.subs(x, oo)\n```\n\n\n```python\nlimit(expr, x, oo)\n```\n\n\n```python\nexpr = Limit((cos(x) - 1)/x, x, 0)\nexpr\n```\n\n\n```python\nexpr.doit()\n```\n\n\n```python\nlimit(1/x, x, 0, '+')\n```\n\n\n```python\nlimit(1/x, x, 0, '-')\n```\n\n### Series Expansion\n\n\n```python\nexpr = exp(sin(x))\nexpr.series(x, 0, 4)\n```\n\n\n```python\nx + x**3 + x**6 + O(x**4)\n```\n\n\n```python\nx*O(1)\n```\n\n\n```python\nexpr.series(x, 0, 4).removeO()\n```\n\n\n```python\nexp(x - 6).series(x, x0=6)\n```\n\n### Finite Differences\n\n\n```python\nf, g = symbols('f g', cls=Function)\ndifferentiate_finite(f(x)*g(x))\n```\n\n\n```python\ndifferentiate_finite(f(x)*g(x), evaluate=True)\n```\n\n\n```python\nf = Function('f')\ndfdx = f(x).diff(x)\ndfdx.as_finite_difference()\n```\n\n\n```python\nf = Function('f')\nd2fdx2 = f(x).diff(x, 2)\nh = Symbol('h')\nd2fdx2.as_finite_difference([-3*h,-h,2*h])\n```\n\n\n```python\nfinite_diff_weights(2, [-3, -1, 2], 0)[-1][-1]\n```\n\n\n```python\nx_list = [-3, 1, 2]\ny_list = symbols('a b c')\napply_finite_diff(1, x_list, y_list, 0)\n```\n", "meta": {"hexsha": "80e9477761bfedf48ed0f0e66b02f8fb43951eda", "size": 137253, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notes/docs.sympy.org/tutorial/calculus.ipynb", "max_stars_repo_name": "agryman/acmpy", "max_stars_repo_head_hexsha": "9566a0abce51cb610ec6c79eaea2e2b26b707419", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notes/docs.sympy.org/tutorial/calculus.ipynb", "max_issues_repo_name": "agryman/acmpy", "max_issues_repo_head_hexsha": "9566a0abce51cb610ec6c79eaea2e2b26b707419", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2022-01-21T22:17:28.000Z", 
"max_issues_repo_issues_event_max_datetime": "2022-01-21T22:17:28.000Z", "max_forks_repo_path": "notes/docs.sympy.org/tutorial/calculus.ipynb", "max_forks_repo_name": "agryman/acmpy", "max_forks_repo_head_hexsha": "9566a0abce51cb610ec6c79eaea2e2b26b707419", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 116.8110638298, "max_line_length": 8040, "alphanum_fraction": 0.8627425266, "converted": true, "num_tokens": 968, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9219218262741297, "lm_q2_score": 0.9032941995446778, "lm_q1q2_score": 0.8327666381070575}} {"text": "# Objective: Semi-supervised classification of MNIST Dataset\n\nLess than 1.5 % labelling in each class\n\n\n```python\n# load the required libs\nfrom tqdm import tqdm\nimport numpy as np\nimport mnist\nimport torch\nimport faiss\nfrom matplotlib import pyplot as plt\nfrom torch_pdegraph.utilities import *\nfrom torch_pdegraph.operators import MeanCurv, GradPlusInfNorm, GradMinusInfNorm\n```\n\n\n```python\n# create a smaller dataset\n# We will take all the 70000 images and label only 100 in each class later\nimages_tr = mnist.train_images()\nlabels_tr = mnist.train_labels()\nimages_te = mnist.test_images()\nlabels_te = mnist.test_labels()\nimages = np.concatenate((images_tr,images_te),axis=0)\nlabels = np.concatenate((labels_tr, labels_te),axis=0)\nimages_flatten = np.reshape(images, (images.shape[0], images.shape[1]*images.shape[2])).astype(np.float32) * 1000\n```\n\n# Graph construction:\n\n- In order to simply show the effectiveness of PDEs on graph, I am only creating a simple K-NN based graphs. This may or maynot be the best graph for a given problem at hand.\n\n- One can create graph using whatsoever apt approach or one can even use third-party network datasets and run a PDE on that graph. PDEs are extensible to any given graph/network at hand as long as that graph has edges and weights( edge_index and edge_attr).\n\nAlthough torch_cluster comes with a knn-graph method. I found it to be limited and slow when the node-features have high dimensions. 
We shall be using facebook's faiss library which is blazingly fast for a KNN-graph construction.\n\n\n```python\n# Create the intial front(level-sets) for each class with 100 labels(seeds) in each class\n\"\"\"\nInitial front creation process in literature it is \nalso known as intial seed or level-set creation process.\n\"\"\"\nFront = genInitialSeeds(**dict(labels = labels, num_seeds = 100))\n\n# Create the Knn graph of the flattened image features and \n# assign weights to the edges\nres = faiss.StandardGpuResources()\nindex = faiss.IndexFlatL2(images_flatten.shape[1])\ngpu_index_flat = faiss.index_cpu_to_gpu(res,0,index)\ngpu_index_flat.add(images_flatten/1000)\nk = 30\nD, I = gpu_index_flat.search(images_flatten/1000,k+1)\n\n#Graph \nedge_index = np.vstack((I[:,1:].flatten(), np.repeat(I[:,0].flatten(),k)))\nedge_attr = np.exp(-(D[:,1:].flatten()/505000))\n\nedge_index = torch.tensor(edge_index, dtype=torch.long).to('cuda:0')\nedge_attr = torch.tensor(edge_attr, dtype=torch.float32).to('cuda:0')\nedge_attr = edge_attr.view(-1,1)\ngraph = Graph(edge_index, edge_attr)\n```\n\n Success!\n\n\n# Run a manually defined PDE:\n\n\n\n\\begin{equation}\n\\mathbf{x}^{n+1}_{i} = \\mathbf{x}^{n}_{i} + \\Delta t \\kappa_w(i,\\mathbf{x}^{n}) \\|\\nabla^{+}_w\\mathbf{x}^{n}_{i}\\|_{\\infty}, \\quad if\\quad \\kappa_w(i,\\mathbf{x}^{n}) > 0\n\\end{equation}\n\n\n\n\\begin{equation}\n\\mathbf{x}^{n+1}_{i} = \\mathbf{x}^{n}_{i} + \\Delta t \\kappa_w(i,\\mathbf{x}^{n}) \\|\\nabla^{-}_w\\mathbf{x}^{n}_{i}\\|_{\\infty}, \\quad if\\quad \\kappa_w(i,\\mathbf{x}^{n}) < 0\n\\end{equation}\n\n- $\\mathbf{x}_{i}$ is the node feature/signal at the $i^{th}$ node\n- $\\nabla^{-}_{w}$, $\\nabla^{+}_{w}$ are the negative and positive directional gradients on weighted graphs respectively.\n- $\\kappa(i,\\mathbf{x})$ is the mean curvatrue operator.\n\n**Example:**\n\n```python\nfrom torch_pdegraph.operators import GradMinusInfNorm, GradPlusInfNorm, MeanCurv \n# Instantiate the operators\nope_normM = GradMinusInfNorm.OPE(graph) \nope_normP = GradPlusInfNorm.OPE(graph)\nope_curv = MeanCurv.OPE(graph)\n\n# Run the above explicit PDE on intial front(level-set) on graph\nIp = (ope_curv(fr) > 0.0)\nIm = (ope_curv(fr) < 0.0)\nfr = fr + dt * ope_curv(fr) * (ope_normP(fr) * Ip.type(torch.int) + ope_normM(fr) * Im.type(torch.int)\n```\n\n\nTo know more ref [DEL11](https://ieeexplore.ieee.org/document/6116433), PDE level-set method in section 6.3 of [M. 
Toutain's thesis](https://tel.archives-ouvertes.fr/tel-01258738)\n\n\n```python\n# Params\nitr = 500\ndt = 0.05\n\n# Instantiate the operators\nope_curv = MeanCurv.OPE(graph)\nope_normP = GradPlusInfNorm.OPE(graph)\nope_normM = GradMinusInfNorm.OPE(graph)\n\n#Evolved front\nnewFront = []\n\n#Run the custome PDE on each initial front (initial level-set)\nfor fr in tqdm(Front):\n fr = torch.tensor(fr, dtype=torch.float32).to('cuda:0')\n for i in range(itr):\n Ip = (ope_curv(fr) > 0.0)\n Im = (ope_curv(fr) < 0.0)\n fr = fr + dt * ope_curv(fr) * (ope_normP(fr) * Ip.type(torch.int) + ope_normM(fr) * Im.type(torch.int)) \n newfr = fr.to('cpu')\n newFront.append(newfr.numpy())\n```\n\n 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 10/10 [00:24<00:00, 2.42s/it]\n\n\n\n```python\n# Now see the results\nfrom sklearn.preprocessing import MinMaxScaler\nscalar = MinMaxScaler()\nnorm_fr = []\n\nfor fr in newFront:\n scalar.fit(fr) \n norm_fr.append(scalar.transform(fr))\n\n\nm = np.argmax(norm_fr, axis=0)\nm = m[:,0]\n\nimport collections\nfrom sklearn.metrics import confusion_matrix\nprint(f\"The original number of elements in each class is : \\n {collections.Counter(labels)}\")\nprint(f\"The predicted number of elements are: \\n {collections.Counter(m)}\")\nprint(f\"The matrix of confusion: \\n {confusion_matrix(labels,m)}\")\n\nnmask = (m == labels)\nnmask = nmask.astype(\"int\")\nprint(\"The accuracy of the classification is: \\t {acc}\".format(acc=np.sum(nmask)*100/len(labels)))\n```\n\n The original number of elements in each class is : \n Counter({1: 7877, 7: 7293, 3: 7141, 2: 6990, 9: 6958, 0: 6903, 6: 6876, 8: 6825, 4: 6824, 5: 6313})\n The predicted number of elements are: \n Counter({1: 8618, 7: 7521, 9: 7515, 0: 7207, 6: 7076, 3: 7016, 2: 6375, 4: 6353, 5: 6329, 8: 5990})\n The matrix of confusion: \n [[6839 7 4 1 1 9 34 4 1 3]\n [ 0 7825 15 1 3 2 6 9 1 15]\n [ 114 226 6290 21 25 12 34 223 30 15]\n [ 23 60 28 6717 6 136 6 78 41 46]\n [ 6 88 1 0 6135 0 34 10 1 549]\n [ 45 44 0 75 16 5945 111 7 6 64]\n [ 44 24 0 0 8 20 6779 0 1 0]\n [ 4 141 11 2 25 2 0 6997 1 110]\n [ 97 167 21 117 67 187 66 59 5898 146]\n [ 35 36 5 82 67 16 6 134 10 6567]]\n The accuracy of the classification is: \t 94.27428571428571\n\n\n### NOTE: Here I have used a simple knn-graph construction method but with a more sophisticated method of graph creation like [two-sided tangent distance](http://www.keysers.net/daniel/files/ICPR2000.pdf) one can achieve more than 98% of classification accuracy with 1% labeling using this PDE level-set method, as it was shown in the section 6.4.5 in the thesis of [M. 
Toutain](https://tel.archives-ouvertes.fr/tel-01258738)\n", "meta": {"hexsha": "0132502c42e16257364cbf48a290a972fb76173d", "size": 9485, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "applications/4_classification.ipynb", "max_stars_repo_name": "aGIToz/Pytorch_pdegraph", "max_stars_repo_head_hexsha": "fade6817e437b606c43221a5ca13bdaeec563fff", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 17, "max_stars_repo_stars_event_min_datetime": "2020-08-24T09:04:48.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-19T03:46:07.000Z", "max_issues_repo_path": "applications/4_classification.ipynb", "max_issues_repo_name": "aGIToz/Pytorch_pdegraph", "max_issues_repo_head_hexsha": "fade6817e437b606c43221a5ca13bdaeec563fff", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "applications/4_classification.ipynb", "max_forks_repo_name": "aGIToz/Pytorch_pdegraph", "max_forks_repo_head_hexsha": "fade6817e437b606c43221a5ca13bdaeec563fff", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-08-27T15:53:02.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-13T08:36:27.000Z", "avg_line_length": 36.4807692308, "max_line_length": 432, "alphanum_fraction": 0.5651027939, "converted": true, "num_tokens": 2169, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9353465062370313, "lm_q2_score": 0.8902942246666266, "lm_q1q2_score": 0.8327335925649357}} {"text": "# Minimum Square Error method\n\n\n```python\nimport numpy as np\nfrom matplotlib import pyplot as plt\nimport sympy as sy\nfrom scipy import linalg as sla\nsy.init_printing()\n%matplotlib inline\n```\n\n\n```python\nxs=np.array([0.0,0.2,0.4,0.6,0.8,1.2,1.4,1.6,1.8,2.0])\nys=np.array([2.0,2.1,1.6,2.6,1.5,2.7,0.67,3.5,0.94,2.0])\nsize=len(xs)\nx=sy.symbols('x')\n```\n\n\n```python\nn=10\nbase=[x**k for k in range(n)]\nfs=[np.vectorize(sy.Lambda(x,base[k].subs({x:x}))) for k in range(n)]\n```\n\n\n```python\nphi=sy.Matrix([[np.dot(fs[i](xs), fs[j](xs)) for i in range(n)] for j in range(n)])\n```\n\n\n```python\nb=np.array([np.dot(ys,fs[i](xs)) for i in range(n)])\nb\n```\n\n\n\n\n array([19.6100000000000, 19.289999999999999, 27.4428000000000,\n 43.7709600000000, 73.9736160000000, 129.494736000000,\n 232.319902080000, 424.628390016000, 787.705159449600,\n 1478.97815892480], dtype=object)\n\n\n\n\n```python\na=phi.solve(b)\na\n```\n\n\n```python\npoly=sum([a[k] * base[k] for k in range(n)])\npoly=sy.Lambda(x,poly.subs({x:x}))\n```\n\n\n```python\nxplt=np.linspace(0,2,100)\nyplt=np.vectorize(poly)(xplt)\nplt.plot(xplt,yplt)\nplt.plot(xs,ys,'gx')\n```\n\n\n```python\n\n```\n\n\n```python\nimport numpy as np\nfrom matplotlib import pyplot as plt\nimport sympy as sy\nsy.init_printing()\n%matplotlib inline\n```\n\n\n```python\nxs=np.array([0.0,0.2,0.4,0.6,0.8,1.2,1.4,1.6,1.8,2.0])\nys=np.array([2.0,2.1,1.6,2.6,1.5,2.7,0.67,3.5,0.94,2.0])\nx=sy.symbols('x')\ndef approx(n):\n base=[x**k for k in range(n)]\n fs=[np.vectorize(sy.Lambda(x,base[k].subs({x:x}))) for k in range(n)]\n phi=sy.Matrix([[np.dot(fs[i](xs), fs[j](xs)) for i in range(n)] for j in range(n)])\n b=np.array([np.dot(ys,fs[i](xs)) for i in range(n)])\n a=phi.solve(b)\n poly=sum([a[k] * base[k] for k in range(n)])\n poly=sy.Lambda(x,poly.subs({x:x}))\n return poly\n\nfor n in range(8,11):\n poly=approx(n)\n xplt=np.linspace(0,2,100)\n 
yplt=np.vectorize(poly)(xplt)\n plt.plot(xplt,yplt)\n plt.ylim(-8.0,8.0)\n plt.plot(xs,ys,'bx')\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "fc33978604520b23159565f4ed5d7e909cf18424", "size": 67533, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "interpolateFunction/Untitled.ipynb", "max_stars_repo_name": "terasakisatoshi/pythonCodes", "max_stars_repo_head_hexsha": "baee095ecee96f6b5ec6431267cdc6c40512a542", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "interpolateFunction/Untitled.ipynb", "max_issues_repo_name": "terasakisatoshi/pythonCodes", "max_issues_repo_head_hexsha": "baee095ecee96f6b5ec6431267cdc6c40512a542", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "interpolateFunction/Untitled.ipynb", "max_forks_repo_name": "terasakisatoshi/pythonCodes", "max_forks_repo_head_hexsha": "baee095ecee96f6b5ec6431267cdc6c40512a542", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 233.678200692, "max_line_length": 31860, "alphanum_fraction": 0.9098959027, "converted": true, "num_tokens": 796, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9603611620335328, "lm_q2_score": 0.8670357701094303, "lm_q1q2_score": 0.8326674797069316}} {"text": "# Question 1\n\n## The Multivariate Gaussian distribution\n\nThe multivariate Gaussian distribution has the form \n\n\\begin{align}\n\\mathcal{N}(x; \\mu, \\Sigma) &= |2\\pi \\Sigma|^{-1/2} \\exp\\left( -\\frac{1}{2} (x-\\mu)^\\top \\Sigma^{-1} (x-\\mu) \\right) \\\\\n& = \\exp\\left(-\\frac{1}{2} x^\\top \\Sigma^{-1} x + \\mu^\\top \\Sigma^{-1} x - \\frac{1}{2} \\mu^\\top \\Sigma^{-1} \\mu -\\frac{1}{2}\\log \\det(2\\pi \\Sigma) \\right) \\\\\n\\end{align}\n\n\n\n## Gaussian Processes Regression\n\nIn Bayesian machine learning, a frequent problem encountered is the regression problem where we are given a pairs of inputs $x_i \\in \\mathbb{R}^N$ and associated noisy observations $y_i \\in \\mathbb{R}$. We assume the following model\n\n\\begin{eqnarray*}\ny_i &\\sim& {\\cal N}(y_i; f(x_i), R)\n\\end{eqnarray*}\n\nThe interesting thing about a Gaussian process is that the function $f$ is not specified in close form, but we assume that the function values \n\\begin{eqnarray*}\nf_i & = & f(x_i)\n\\end{eqnarray*}\nare jointly Gaussian distributed as\n\\begin{eqnarray*}\n\\left(\n \\begin{array}{c}\n f_1 \\\\\n \\vdots \\\\\n f_L \\\\\n \\end{array}\n\\right) & = & f_{1:L} \\sim {\\cal N}(f_{1:L}; 0, \\Sigma(x_{1:L}))\n\\end{eqnarray*}\nHere, we define the entries of the covariance matrix $\\Sigma(x_{1:L})$ as\n\\begin{eqnarray*}\n\\Sigma_{i,j} & = & K(x_i, x_j)\n\\end{eqnarray*}\nfor $i,j \\in \\{1, \\dots, L\\}$. Here, $K$ is a given covariance function. 
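\nFor concreteness, the covariance matrix $\Sigma(x_{1:L})$ can be assembled directly from a chosen covariance function. The helper below is only a minimal sketch (the names `cov_matrix` and `K1` are illustrative and not part of the exercise):\n\n```python\nimport numpy as np\n\ndef cov_matrix(K, xs):\n    'Assemble Sigma with entries Sigma[i, j] = K(xs[i], xs[j]).'\n    L = len(xs)\n    Sigma = np.zeros((L, L))\n    for i in range(L):\n        for j in range(L):\n            Sigma[i, j] = K(xs[i], xs[j])\n    return Sigma\n\n# example with the Bell shaped covariance function K_1\nK1 = lambda xi, xj: np.exp(-0.5*np.abs(xi - xj)**2)\nSigma = cov_matrix(K1, np.linspace(-2, 4, 5))\n```\n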
Now, if we wish to predict the value of $f$ for a new $x$, we simply form the following joint distribution:\n\\begin{eqnarray*}\n\\left(\n \\begin{array}{c}\n f_1 \\\\\n f_2 \\\\\n \\vdots \\\\\n f_L \\\\\n f \\\\\n \\end{array}\n\\right) & \\sim & {\\cal N}\\left( \\left(\\begin{array}{c}\n 0 \\\\\n 0 \\\\\n \\vdots \\\\\n 0 \\\\\n 0 \\\\\n \\end{array}\\right)\n , \\left(\\begin{array}{cccccc}\n K(x_1,x_1) & K(x_1,x_2) & \\dots & K(x_1, x_L) & K(x_1, x) \\\\\n K(x_2,x_1) & K(x_2,x_2) & \\dots & K(x_2, x_L) & K(x_2, x) \\\\\n \\vdots &\\\\\n K(x_L,x_1) & K(x_L,x_2) & \\dots & K(x_L, x_L) & K(x_L, x) \\\\\n K(x,x_1) & K(x,x_2) & \\dots & K(x, x_L) & K(x, x) \\\\\n \\end{array}\\right) \\right) \\\\\n\\left(\n\\begin{array}{c}\n f_{1:L} \\\\\n f \n \\end{array}\n\\right) & \\sim & {\\cal N}\\left( \\left(\\begin{array}{c}\n \\mathbf{0} \\\\\n 0 \\\\\n \\end{array}\\right)\n , \\left(\\begin{array}{cc}\n \\Sigma(x_{1:L}) & k(x_{1:L}, x) \\\\\n k(x_{1:L}, x)^\\top & K(x, x) \\\\\n \\end{array}\\right) \\right) \\\\\n\\end{eqnarray*}\n\nHere, $k(x_{1:L}, x)$ is a $L \\times 1$ vector with entries $k_i$ where\n\n\\begin{eqnarray*}\nk_i = K(x_i, x) \n\\end{eqnarray*}\n\nPopular choices of covariance functions to generate smooth regression functions include a Bell shaped one\n\\begin{eqnarray*}\nK_1(x_i, x_j) & = & \\exp\\left(-\\frac{1}2 \\| x_i - x_j \\|^2 \\right)\n\\end{eqnarray*}\nand a Laplacian\n\\begin{eqnarray*}\nK_2(x_i, x_j) & = & \\exp\\left(-\\frac{1}2 \\| x_i - x_j \\| \\right)\n\\end{eqnarray*}\n\nwhere $\\| x \\| = \\sqrt{x^\\top x}$ is the Euclidian norm.\n\n## Part 1\nFrom your notes, derive the expressions to compute the predictive density\n\\begin{eqnarray*}\np(\\hat{y}| y_{1:L}, x_{1:L}, \\hat{x})\n\\end{eqnarray*}\n\n\n\\begin{eqnarray*}\np(y | y_{1:L}, x_{1:L}, x) &=& {\\cal N}(y; m, S) \\\\\nm & = & \\\\\nS & = & \n\\end{eqnarray*}\n\n## Part 2\nWrite a program to compute the mean and covariance of $p(\\hat{y}| y_{1:L}, x_{1:L}, \\hat{x})$ to generate a for the following data:\n\n x = [-2 -1 0 3.5 4]\n y = [4.1 0.9 2 12.3 15.8]\n \nTry different covariance functions $K_1$ and $K_2$ and observation noise covariances $R$ and comment on the nature of the approximation.\n\n## Part 3\nSuppose we are using a covariance function parameterised by\n\\begin{eqnarray*}\nK_\\beta(x_i, x_j) & = & \\exp\\left(-\\frac{1}\\beta \\| x_i - x_j \\|^2 \\right)\n\\end{eqnarray*}\nFind the optimum regularisation parameter $\\beta^*(R)$ as a function of observation noise variance via maximisation of the marginal likelihood, i.e.\n\\begin{eqnarray*}\n\\beta^* & = & \\arg\\max_{\\beta} p(y_{1:N}| x_{1:N}, \\beta, R)\n\\end{eqnarray*}\nGenerate a plot of $b^*(R)$ for $R = 0.01, 0.02, \\dots, 1$ for the dataset given in 2.\n\nHere, remember that \n\\begin{eqnarray*}\np(y_{1:N}| x_{1:N}, \\beta, R) = \\mathcal{N}(y_{1:N}; 0, K(x_{1:N})+R)\n\\end{eqnarray*}\n\nFor your reference, we give a basic implementation below:\n\n\n```python\ndef cov_fun_bell(x1,x2,delta=1):\n return np.exp(-0.5*np.abs(x1-x2)**2/delta) \n\ndef cov_fun_exp(x1,x2):\n return np.exp(-0.5*np.abs(x1-x2)) \n\ndef cov_fun(x1,x2):\n return cov_fun_bell(x1,x2,delta=1) \n\nR = 0.05\n\nx = np.array([-2, -1, 0, 3.5, 4]);\ny = np.array([4.1, 0.9, 2, 12.3, 15.8]);\n\nSig = cov_fun(x.reshape((len(x),1)),x.reshape((1,len(x)))) + R*np.eye(len(x))\nSigI = np.linalg.inv(Sig)\n\nxx = np.linspace(-5,5,100)\nyy = np.zeros_like(xx)\nss = np.zeros_like(xx)\nfor i in range(len(xx)):\n z = np.r_[x,xx[i]]\n CrossSig = cov_fun(x,xx[i])\n PriorSig = 
cov_fun(xx[i],xx[i]) + R\n \n yy[i] = np.dot(np.dot(CrossSig, SigI),y)\n ss[i] = PriorSig - np.dot(np.dot(CrossSig, SigI),CrossSig)\n \n\nplt.plot(x,y,'or')\nplt.plot(xx,yy,'b.')\nplt.plot(xx,yy+3*np.sqrt(ss),'b:')\nplt.plot(xx,yy-3*np.sqrt(ss),'b:')\nplt.show()\n```\n\n# Question 2\n\nRead http://mbmlbook.com/TrueSkill.html\nand\nhttp://mbmlbook.com/TrueSkill_Modelling_the_outcome_of_games.html\n\n* Write a program that produces 10,000 samples from a Gaussian with zero mean and a standard deviation of 1. Compute the percentage of these samples which lie between -1 and 1, between -2 and 2 and between -3 and 3. You should find that these percentages are close to those given in the caption of Figure 3.4.\n\n* Construct a histogram of the samples created in the previous exercise (like the ones in Figure 3.7) and verify that it resembles a bell-shaped curve.\n\n* Compute the mean, standard deviation and variance of your samples, referring to Panel 3.1. The mean should be close to zero and the standard deviation and variance should both be close to 1 (since 1^2=1).\n\n* Produce a second set of 10,000 samples from a Gaussian with mean 1 and standard deviation 1. Plot a scatter plot like Figure 3.8 where the X co-ordinate of each point is a sample from the first set and the Y co-ordinate is the corresponding sample from the second set (pairing the first sample from each set, the second sample from each set and so on). Compute the fraction of samples which lie above the diagonal line where X=Y.\n\n\n* Create double variables X and Y with priors of Gaussian(0,1) and Gaussian(1,1) respectively. Define a third random variable Ywins that equals to Y>X. Compute the posterior distribution numerically over Ywins and verify that it is close to the fraction of samples above the diagonal in the previous exercise.\n\n", "meta": {"hexsha": "d1fcffdbdf2efb8d5da0c15351f00dec5ddf3102", "size": 9390, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "swe582/SWE582 TakeHome 2016.ipynb", "max_stars_repo_name": "bkoyuncu/notes", "max_stars_repo_head_hexsha": "0e660f46b7d17fdfddc2cad1bb60dcf847f5d1e4", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 191, "max_stars_repo_stars_event_min_datetime": "2016-01-21T19:44:23.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-25T20:50:50.000Z", "max_issues_repo_path": "swe582/SWE582 TakeHome 2016.ipynb", "max_issues_repo_name": "onurboyar/notes", "max_issues_repo_head_hexsha": "2ec14820af044c2cfbc99bc989338346572a5e24", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2018-02-18T03:41:04.000Z", "max_issues_repo_issues_event_max_datetime": "2018-11-21T11:08:49.000Z", "max_forks_repo_path": "swe582/SWE582 TakeHome 2016.ipynb", "max_forks_repo_name": "onurboyar/notes", "max_forks_repo_head_hexsha": "2ec14820af044c2cfbc99bc989338346572a5e24", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 138, "max_forks_repo_forks_event_min_datetime": "2015-10-04T21:57:21.000Z", "max_forks_repo_forks_event_max_datetime": "2021-06-15T19:35:55.000Z", "avg_line_length": 38.9626556017, "max_line_length": 440, "alphanum_fraction": 0.526943557, "converted": true, "num_tokens": 2304, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9304582497090322, "lm_q2_score": 0.8947894703109853, "lm_q1q2_score": 0.8325642444036314}} {"text": "# Build a Circuit to Emulate the Scope's Filter\n\nMain reference: [Passive Low Pass Filter](https://www.electronics-tutorials.ws/filter/filter_2.html)\n\n## Relavant Equiations\n\n### RC Potential Divider Equation\n$$ V_{out}=V_{in}\\frac{X_C}{Z} \\; (V)$$\nwhere\n\\begin{align}\n X_C & = \\frac{1}{2\\pi fC} \\; (\\Omega)\\\\\n Z & = \\sqrt{R^2+X_C^2} \\; (\\Omega)\n\\end{align}\n\nVoltage gain can be written as\n\\begin{align}\n \\frac{V_{out}}{V_{in}} &= \\frac{1}{\\sqrt{1+\\left(\\frac{R}{X_C}\\right)^2}} \\\\\n &= \\frac{1}{\\sqrt{1+ \\left( 2\\pi fRC \\right)^2 }}\n\\label{eq:voltage_gain} \\tag{1}\n\\end{align}\n\n### Power Gain Level\n\nThe [power gain level](https://en.wikipedia.org/wiki/Decibel#Root-power_(field)_quantities) is defined as\n\\begin{align}\n L_G &= 10\\log\\left( \\frac{V_{out}}{V_{in}}\\right)^2 \\\\\n &= 20\\log\\left( \\frac{V_{out}}{V_{in}}\\right) \\; \\textrm{(dB)}\n\\label{eq:power_gain_level} \\tag{2}\n\\end{align}\n\nWe can make the famous [**Bode plot**](https://en.wikipedia.org/wiki/Bode_plot) with $RC=1$. (maybe not now...)\n\n### Cutoff Frequency\nThe cutoff frequency is defined as the frequency at which the power gain drops to $50\\%$. In decibel unit, this value is $-3$ dB, known as the [$3$ dB point](https://en.wikipedia.org/wiki/Cutoff_frequency#Electronics).\n\nHere is an ambiguity. In literature about Butterworth filter, frequency actually means **angular frequency**. However, by glancing through the scope's manual, I cannot figure out whether the frequency is the **usual frequency** or the **angular frequency**.\n\nFor now, I will assume the frequency used by the scope is also **angular frequency**.\n\n## The RC Values Leading to a 20 MHz Cutoff\n\nWith a power gain $\\alpha$, we have\n\\begin{equation}\n \\left( \\frac{V_{out}}{V_{in}} \\right)^2=\\alpha\n\\label{eq:cutoff_power} \\tag{3}\n\\end{equation}\n\nSubstitute eq.$~\\eqref{eq:voltage_gain}$ into eq.$~\\eqref{eq:cutoff_power}$ and rearrange, we have\n\\begin{equation}\n RC=\\frac{1}{\\omega}\\sqrt{\\frac{1}{\\alpha}-1} \\;\\; (\\omega=2\\pi f)\n\\end{equation}\n\nWith $\\alpha=0.5$, $\\omega =20$ MHz, we obtain\n\\begin{equation}\n \\boxed{RC=5\\times 10^{-8}}\n\\end{equation}\n\n\n```python\nfrom math import *\nRC=1/20e6*sqrt(1/.5-1)\nprint(RC)\n```\n\n 5e-08\n\n\n## Scope Measurement\n\nI inject sine waves with fixed frequency from the pulse generator. The pulse generator shows that the frequency it means is **normal frequency**.\n\nThe pulse amplitude I set up on the function generator is 1.65 V. 
However, I am measuring peak-to-peak voltage with the scope, which is enlarged by noise.\n\n\n```python\nimport matplotlib.pyplot as plt\nimport math\nimport numpy as np\nimport scipy.optimize\nx = [0.01e6, 0.1e6, 1e6, 5e6, 10e6, 15e6, 20e6, 25e6, 30e6, 35e6, 40e6, 45e6, 50e6, 60e6, 70e6, 80e6, 100e6]\ny = [1.88, 1.88, 1.88, 1.82, 1.72, 1.56, 1.32, 1.08, 0.84, 0.6, 0.4, 0.32, 0.266, 0.207, 0.184, 0.173, 0.162]\n```\n\n\n```python\ndef divider_eq(x, vin, RC, voffset):\n return vin/np.sqrt(1+(2*math.pi*x*RC)**2) + voffset\n# initial parameters\np_init = [1.88, 5e-8, 0.162]\nfit_par, fit_err = scipy.optimize.curve_fit(divider_eq, x, y, p_init)\nprint(fit_par[0], fit_par[1], fit_par[2])\n\nx_fit = np.linspace(.5e6, 51e6, 100)\ny_fit = divider_eq(x_fit, fit_par[0], fit_par[1], fit_par[2])\nplot_fit = plt.scatter(x=x_fit, y=y_fit, c='r', s=4)\n\nplot_raw_meas = plt.scatter(x=x, y=y)\ncache = plt.xlabel('frequency (not angular) (Hz)')\ncache = plt.ylabel('amplitude (V)')\n```\n\n\n```python\nvout = 1.32-0.162\nvin = 1.88-0.162\np_gain = (vout/vin)**2\nprint('power gain: {:.2f}'.format(p_gain))\nprint('RC: {:.2e}'.format(1/2/math.pi/20e6*sqrt(1/p_gain-1)))\nprint('2\u03c0RC: {:.2e}'.format(1/20e6*sqrt(1/p_gain-1)))\n```\n\n power gain: 0.45\n RC: 8.72e-09\n 2\u03c0RC: 5.48e-08\n\n", "meta": {"hexsha": "7d8eba0af0ef57bddba77a1fd197765ffa3815cd", "size": 19185, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/low_pass_filter_circuit.ipynb", "max_stars_repo_name": "kaikai581/t2k-mppc-daq", "max_stars_repo_head_hexsha": "6b4f7bf04d885e952d9fd653df8f9ca1dd31089e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notebooks/low_pass_filter_circuit.ipynb", "max_issues_repo_name": "kaikai581/t2k-mppc-daq", "max_issues_repo_head_hexsha": "6b4f7bf04d885e952d9fd653df8f9ca1dd31089e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/low_pass_filter_circuit.ipynb", "max_forks_repo_name": "kaikai581/t2k-mppc-daq", "max_forks_repo_head_hexsha": "6b4f7bf04d885e952d9fd653df8f9ca1dd31089e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 94.0441176471, "max_line_length": 12868, "alphanum_fraction": 0.8278863696, "converted": true, "num_tokens": 1357, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9273633016692238, "lm_q2_score": 0.8976952996340946, "lm_q1q2_score": 0.832489676961617}} {"text": "# Mish Derivatves\n\n\n```python\nimport torch\nfrom torch.nn import functional as F\n```\n\n\n```python\ninp = torch.randn(100) + (torch.arange(0, 1000, 10, dtype=torch.float)-500.)\ninp\n```\n\n\n\n\n tensor([-500.3069, -490.6361, -480.3858, -471.2755, -459.0872, -451.1570,\n -440.2400, -429.6230, -419.9467, -408.3055, -402.3395, -389.1660,\n -380.1614, -369.7649, -359.7261, -348.8759, -338.7170, -329.2680,\n -319.7470, -309.6079, -301.3083, -290.9236, -279.3832, -267.8622,\n -259.3479, -249.7400, -240.8742, -229.2343, -219.3999, -210.0166,\n -199.8259, -191.5603, -178.9595, -171.4488, -160.3362, -150.1327,\n -139.2230, -130.8046, -121.8909, -108.4913, -100.5724, -88.9087,\n -79.9365, -70.3478, -60.1005, -49.9595, -37.6322, -29.9353,\n -18.9407, -11.9213, -2.5633, 10.6869, 18.9005, 29.4622,\n 41.7188, 49.6080, 59.3583, 71.3071, 80.2604, 91.4908,\n 100.2913, 108.7626, 118.8391, 129.8859, 139.5593, 150.6612,\n 161.5152, 170.5409, 179.0472, 187.5896, 199.0938, 210.3955,\n 221.2551, 229.1151, 240.5497, 250.7286, 260.3474, 268.6524,\n 280.6704, 291.0199, 302.0525, 309.1079, 320.3692, 330.5589,\n 340.5503, 350.1638, 359.5840, 369.0214, 379.5835, 390.5975,\n 398.7851, 408.7201, 420.3418, 430.0718, 440.4431, 449.6514,\n 459.1114, 468.1187, 480.6921, 490.0807])\n\n\n\n\n```python\nimport sympy\nfrom sympy import Symbol, Function, Expr, diff, simplify, exp, log, tanh\nx = Symbol('x')\nf = Function('f')\n```\n\n## Overall Derivative\n\n\n```python\ndiff(x*tanh(log(exp(x)+1)))\n```\n\n\n\n\n$\\displaystyle \\frac{x \\left(1 - \\tanh^{2}{\\left(\\log{\\left(e^{x} + 1 \\right)} \\right)}\\right) e^{x}}{e^{x} + 1} + \\tanh{\\left(\\log{\\left(e^{x} + 1 \\right)} \\right)}$\n\n\n\n\n```python\nsimplify(diff(x*tanh(log(exp(x)+1))))\n```\n\n\n\n\n$\\displaystyle - \\frac{x \\left(\\tanh^{2}{\\left(\\log{\\left(e^{x} + 1 \\right)} \\right)} - 1\\right) e^{x} - \\left(e^{x} + 1\\right) \\tanh{\\left(\\log{\\left(e^{x} + 1 \\right)} \\right)}}{e^{x} + 1}$\n\n\n\n## Softplus\n\n$ \\Large \\frac{\\partial}{\\partial x} Softplus(x) = 1 - \\frac{1}{e^{x} + 1} $\n\nOr, from PyTorch:\n\n$ \\Large \\frac{\\partial}{\\partial x} Softplus(x) = 1 - e^{-Y} $\n\nWhere $Y$ is saved output\n\n\n```python\nclass SoftPlusTest(torch.autograd.Function):\n @staticmethod\n def forward(ctx, inp, threshold=20):\n y = torch.where(inp < threshold, torch.log1p(torch.exp(inp)), inp)\n ctx.save_for_backward(y)\n return y\n \n @staticmethod\n def backward(ctx, grad_out):\n y, = ctx.saved_tensors\n res = 1 - (-y).exp_()\n return grad_out * res\n\n```\n\n\n```python\ntorch.allclose(F.softplus(inp), SoftPlusTest.apply(inp))\n```\n\n\n\n\n True\n\n\n\n\n```python\ntorch.autograd.gradcheck(SoftPlusTest.apply, inp.to(torch.float64).requires_grad_())\n```\n\n\n\n\n True\n\n\n\n## $tanh(Softplus(x))$\n\n\n```python\ndiff(tanh(f(x)))\n```\n\n\n\n\n$\\displaystyle \\left(1 - \\tanh^{2}{\\left(f{\\left(x \\right)} \\right)}\\right) \\frac{d}{d x} f{\\left(x \\right)}$\n\n\n\n\n```python\nclass TanhSPTest(torch.autograd.Function):\n @staticmethod\n def forward(ctx, inp, threshold=20):\n ctx.save_for_backward(inp)\n sp = torch.where(inp < threshold, torch.log1p(torch.exp(inp)), inp)\n y = torch.tanh(sp)\n return y\n \n @staticmethod\n def backward(ctx, grad_out, threshold=20):\n inp, = ctx.saved_tensors\n sp = torch.where(inp < threshold, torch.log1p(torch.exp(inp)), inp)\n grad_sp = 1 - torch.exp(-sp)\n tanhsp = 
torch.tanh(sp)\n grad = (1 - tanhsp*tanhsp) * grad_sp\n return grad_out * grad\n\n```\n\n\n```python\ntorch.allclose(TanhSPTest.apply(inp), torch.tanh(F.softplus(inp)))\n```\n\n\n\n\n True\n\n\n\n\n```python\ntorch.autograd.gradcheck(TanhSPTest.apply, inp.to(torch.float64).requires_grad_())\n```\n\n\n\n\n True\n\n\n\n## Mish\n\n\n```python\ndiff(x * f(x))\n```\n\n\n\n\n$\\displaystyle x \\frac{d}{d x} f{\\left(x \\right)} + f{\\left(x \\right)}$\n\n\n\n\n```python\ndiff(x*tanh(f(x)))\n```\n\n\n\n\n$\\displaystyle x \\left(1 - \\tanh^{2}{\\left(f{\\left(x \\right)} \\right)}\\right) \\frac{d}{d x} f{\\left(x \\right)} + \\tanh{\\left(f{\\left(x \\right)} \\right)}$\n\n\n\n\n```python\nsimplify(diff(x*tanh(f(x))))\n```\n\n\n\n\n$\\displaystyle \\frac{x \\frac{d}{d x} f{\\left(x \\right)}}{\\cosh^{2}{\\left(f{\\left(x \\right)} \\right)}} + \\tanh{\\left(f{\\left(x \\right)} \\right)}$\n\n\n\n\n```python\ndiff(tanh(f(x)))\n```\n\n\n\n\n$\\displaystyle \\left(1 - \\tanh^{2}{\\left(f{\\left(x \\right)} \\right)}\\right) \\frac{d}{d x} f{\\left(x \\right)}$\n\n\n\n\n```python\nclass MishTest(torch.autograd.Function):\n @staticmethod\n def forward(ctx, inp, threshold=20):\n ctx.save_for_backward(inp)\n sp = torch.where(inp < threshold, torch.log1p(torch.exp(inp)), inp)\n tsp = torch.tanh(sp)\n y = inp.mul(tsp)\n return y\n \n @staticmethod\n def backward(ctx, grad_out, threshold=20):\n inp, = ctx.saved_tensors\n sp = torch.where(inp < threshold, torch.log1p(torch.exp(inp)), inp)\n grad_sp = 1 - torch.exp(-sp)\n tsp = torch.tanh(sp)\n grad_tsp = (1 - tsp*tsp) * grad_sp\n grad = inp * grad_tsp + tsp\n return grad_out * grad\n\n```\n\n\n```python\ntorch.allclose(MishTest.apply(inp), inp.mul(torch.tanh(F.softplus(inp))))\n```\n\n\n\n\n True\n\n\n\n\n```python\ntorch.autograd.gradcheck(TanhSPTest.apply, inp.to(torch.float64).requires_grad_())\n```\n\n\n\n\n True\n\n\n", "meta": {"hexsha": "53d868ef3912cc35ae9fdbb2c8085b7ab875169f", "size": 12605, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "extra/Derivatives.ipynb", "max_stars_repo_name": "hiyyg/mish-cuda", "max_stars_repo_head_hexsha": "b389b9f84433d8b9b4129d3e879ba746d248d8f2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 145, "max_stars_repo_stars_event_min_datetime": "2019-09-25T17:43:54.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-09T08:17:44.000Z", "max_issues_repo_path": "extra/Derivatives.ipynb", "max_issues_repo_name": "hiyyg/mish-cuda", "max_issues_repo_head_hexsha": "b389b9f84433d8b9b4129d3e879ba746d248d8f2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 20, "max_issues_repo_issues_event_min_datetime": "2019-11-18T22:20:02.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-16T03:04:30.000Z", "max_forks_repo_path": "extra/Derivatives.ipynb", "max_forks_repo_name": "hiyyg/mish-cuda", "max_forks_repo_head_hexsha": "b389b9f84433d8b9b4129d3e879ba746d248d8f2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 51, "max_forks_repo_forks_event_min_datetime": "2019-10-10T03:52:05.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-24T07:14:01.000Z", "avg_line_length": 23.9184060721, "max_line_length": 218, "alphanum_fraction": 0.4710829036, "converted": true, "num_tokens": 1950, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9273632956467157, "lm_q2_score": 0.8976952941600963, "lm_q1q2_score": 0.8324896664788548}} {"text": "## M\u00e9thode des Trap\u00e8zes\n\n\n\nOn sait que l'aire d'un trap\u00e8ze est donn\u00e9e par : $$Aire = \\frac{(B + b)h}{2}$$\n\nOn d\u00e9finit : $$W_i = \\frac{(f(x_i) + f(x_{i+1}))h}{2}$$\n\nOn trouve une formule pour approximer notre int\u00e9grale :\n\n$$\n\\begin{align*}\n I &\\approx \\sum_{i=0}^{n-1} W_i \\\\\n &\\approx \\sum_{i=0}^{n-1} \\frac{h}{2}(f(x_i) + f(x_{i+1}) \\\\ \n &\\approx \\frac{h}{2}(f(x_0) + f(x_1) + f(x_1) + ... + f(x_{n-1}) + f(x_{n-1}) + f(x_n)) \\\\\n &\\approx \\frac{h}{2} \\left[ f(x_0) + 2 \\sum_{i=0}^{n-1}f(x_i) + f(x_n)) \\right] \\\\\n &\\approx h \\left[ \\frac{1}{2}(f(x_0) + f(x_n)) + \\sum_{i=0}^{n-1}f(x_i)) \\right]\n\\end{align*}\n$$\n\n### It\u00e9ration \n- On diminue progressivement le pas de h de $\\frac{1}{2}$ \u00e0 chaque it\u00e9ration jusqu'\u00e0 atteindre la pr\u00e9cision souhait\u00e9e.\n- /!\\ une it\u00e9ration = une \u00e9valuation compl\u00e8te de l'int\u00e9grale.\n\nPour une fonction : $$\\int_a^b f(x)dx$$\n\n$n = 2^{k-1}$ => 1, 2, 4, 8,...\n\n$h = \\frac{b-a}{n}$ => $h$, $\\frac{h}{2}$, $\\frac{h}{4}$,...\n\nInconv\u00e9nient : convergence lente donc besoin de beaucoup d'it\u00e9rations pour \u00eatre pr\u00e9cis.\n\n\n```python\nimport numpy as np\nimport math\n\n# precision de l'\u00e9valuation num\u00e9rique de l'int\u00e9grale (trap\u00e8zes)\nTRESHOLD_PRECISION = 1e-9\nDIMENSIONS = [1] #[1, 2, 10, 15]\nX_0 = -3 \nX_N = 3\n\nclass Trapezes():\n def __init__(self, f, x_0, x_n):\n self.f = f\n self.x_0 = x_0\n self.x_n = x_n\n \n # borne inf\u00e9rieure : x_0\n # borne sup\u00e9rieure : x_n\n # en pratique on it\u00e8re sur la m\u00e9thode \n # et on diminue le pas h progressivement (de 1/2 \u00e0 chaque it\u00e9ration)\n # k : it\u00e9ration en cours\n def trapezes(self, k):\n # si k = 1, le nombre de trap\u00e8zes serait de 1 (en effet, 2^0 = 1)\n if k == 1: \n # Aire d'un trap\u00e8ze\n h = (self.x_n - self.x_0)\n return (self.f(self.x_0) + self.f(self.x_n)) * h / 2\n\n # nombre de trap\u00e8zes\n n = int(math.pow(2, k-1))\n # intervalle h\n h = (self.x_n - self.x_0) / n\n\n sum = 1/2 * (self.f(self.x_0) + self.f(self.x_n))\n for i in range(1, n):\n x_i = self.x_0 + (i * h)\n sum += self.f(x_i)\n\n return h * sum \n \n # N : nombre de dimensions\n def integrate(self, iterations, precision):\n new_integral = 0\n for k in range(1, iterations):\n old_integral = new_integral\n new_integral = self.trapezes(k)\n if abs(old_integral - new_integral) < precision and k > 1:\n return new_integral, k\n break\n\ndef gaussian(x):\n f = math.exp(-(x**2) / 2)\n return f\n\ndef gaussian_2_dim(x, y):\n return math.exp(-(x**2 + y**2) / 2)\n\ndef gaussian_theo(N):\n return math.sqrt((2*math.pi)**N)\n\nprint(f\"dimension 1\")\nprint(\"-----------\")\nprint(f\"Valeur th\u00e9orique approximative: {gaussian_theo(N=1)}\")\n\ngaussian_trapezes = Trapezes(gaussian, X_0, X_N)\ngaussian_trapezes_value = gaussian_trapezes.integrate(iterations = 20, precision = TRESHOLD_PRECISION)\nprint(f\"Valeur de la gaussienne \u00e0 1 dimension (trap\u00e8zes): {gaussian_trapezes_value}\")\n\nprint(f\"dimension 2\")\nprint(\"-------------\")\nprint(f\"Valeur th\u00e9orique approximative: {gaussian_theo(N=2)}\")\n\nprint(\"-------------\")\n\ndef universe_model(x, omega_m, omega_rad, omega_k, omega_lambda):\n \n\ngaussian_trapezes = Trapezes(gaussian, X_0, X_N)\ngaussian_trapezes_value = gaussian_trapezes.integrate(iterations = 20, 
precision = TRESHOLD_PRECISION)\nprint(f\"Valeur de la gaussienne \u00e0 1 dimension (trap\u00e8zes): {gaussian_trapezes_value}\")\n\n```\n\n dimension 1\n -----------\n Valeur th\u00e9orique approximative: 2.5066282746310002\n Valeur de la gaussienne \u00e0 1 dimension (trap\u00e8zes): (2.4998608892968623, 16)\n dimension 2\n -------------\n Valeur th\u00e9orique approximative: 6.283185307179586\n\n\n## M\u00e9thode de Romberg\n\nCombine la m\u00e9thode it\u00e9rative des trap\u00e8zes avec l'extrapolation de Richardson. \n**Erreur sur la m\u00e9thode des Trap\u00e8zes**: $E(h) = c_1h^2 + c_2h^4 + ... = \\sum_i c_ih^{2i}$\n\n### Extrapolation de Richardson\n\nPour un pas de h : \n$$G = I(h) + E(h)$$\n\nPour un pas de h/2 : \n$$G = I(h/2) + E(h/2)$$\n\nOn obtient le syst\u00e8me :\n\n$$\n\\begin{align}\n\\begin{cases}\n G = I(h) + ch^p \\\\ \n G = I(h/2) + c(h/2)^p \\implies 2^pG = 2^pI(h/2) + ch^p\n\\end{cases}\n\\end{align}\n$$\n\nOn soustrait l'\u00e9quation $(1)$ \u00e0 l'\u00e9quation $(2)$ :\n\n$$\n\\begin{align}\n 2^pG - G &= 2^pI(h/2) - I(h) \\\\\n (2^p - 1)G &= 2^p I(h/2) - I(h) \\\\\n G &= \\frac{2^p I(h/2) - I(h)}{2^p - 1}\n\\end{align}\n$$\n\nNuos avons donc une formule pour l'extrapolation de Richardson : \n\n$$\nG = \\frac{2^p I(h/2) - I(h)}{2^p - 1}\n$$\n\n**Concr\u00e8tement**:\n\nOn va se constituer une matrice $n x n$ avec _n : le nombre d'it\u00e9rations_. Elle contiendra dans sa premi\u00e8re colonne les approximations de l'int\u00e9grale par la m\u00e9thode des trap\u00e8zes. Ensuite on rempli \"le reste\" en cascade avec la formule de l'extrapolation de Richardson. \n\n**Algorithme**:\n\nOn remarque que plus le nombre d'it\u00e9rations augmente, plus la pr\u00e9cision augmente.\n\n\n\n\n```python\nimport numpy as np\nimport math\n\nclass Romberg():\n def __init__(self, f, x_0, x_n):\n self.f = f\n self.x_0 = x_0\n self.x_n = x_n\n\n def trapezes(self, k):\n # si k = 1, le nombre de trap\u00e8zes serait de 1 (en effet, 2^0 = 1)\n if k == 1: \n # Aire d'un trap\u00e8ze\n h = (self.x_n - self.x_0)\n return (self.f(self.x_0) + self.f(self.x_n)) * h / 2\n\n # nombre de trap\u00e8zes\n n = int(math.pow(2, k-1))\n # intervalle h\n h = (self.x_n - self.x_0) / n\n\n sum = 1/2 * (self.f(self.x_0) + self.f(self.x_n))\n for i in range(1, n):\n x_i = self.x_0 + (i * h)\n sum += self.f(x_i)\n\n return h * sum \n\n def richardson_extrapolation(self, I_prev, I_current, k):\n return ((2**k * I_current) - I_prev) / (2**k - 1)\n\n def integrate(self, iterations, precision):\n # R(n, m)\n self.map = np.zeros((iterations, iterations))\n\n \"\"\"\n iterations = 3\n j\n __________________\n i | trap x x |\n | trap rich x |\n | trap rich approx | \n ------------------\n \"\"\"\n for i in range(1, iterations):\n i_prime = i - 1\n # M\u00e9thode des trap\u00e8zes\n self.map[i_prime, 0] = self.trapezes(i)\n for j in range(1, i):\n # Extrapolation de Richardson\n self.map[i_prime, j] = self.richardson_extrapolation(self.map[i_prime-1, j-1], self.map[i_prime, j-1], 2*j)\n \n if abs(self.map[i_prime - 1, i_prime - 1] - self.map[i_prime, i_prime]) < precision:\n return self.map[i_prime, i_prime], i\n break\n\n# precision de l'\u00e9valuation num\u00e9rique de l'int\u00e9grale (trap\u00e8zes)\nTRESHOLD_PRECISION = 1e-9\nDIMENSIONS = [1] #[1, 2, 10, 15]\nX_0 = -3 \nX_N = 3\n\ndef gaussian(x):\n f = math.exp(-(x**2) / 2)\n return f\n \n# romberg\ngaussian_romberg = Romberg(gaussian, X_0, X_N)\ngaussian_romberg_value = gaussian_romberg.integrate(iterations = 10, precision = 
TRESHOLD_PRECISION)\nprint(f\"Valeur de la gaussienne \u00e0 2 dimension (romberg): {gaussian_romberg_value}\")\n```\n\n Valeur de la gaussienne \u00e0 2 dimension (romberg): (2.4998608894818006, 8)\n\n\n### M\u00e9thode de Monte-Carlo\n\n- Plus efficace pour calculer des int\u00e9grales \u00e0 plusieurs dimensions.\n- On tire les points al\u00e9atoirement dans le domaine de la fonction.\n\n$$\n\\begin{align}\n &= \\frac{1}{b-a} \\int_a^b f(x)dx \\\\\n \\int_a^b f(x)dx &= (b-a) \\\\\n \\int_a^b f(x)dx &\\approx (b-a) \\frac{1}{N} \\sum_{i=0}^{n-1} f(x_i)\n\\end{align}\n$$\n\n\n```python\nimport math\nimport random\nimport numpy as np\n\ndef gaussian_theo(N):\n return math.sqrt((2*math.pi)**N)\n\ndef gaussian(x_vec):\n sum = 0\n # x^2 + y^2 + z^2 + ...\n for x_i in x_vec:\n sum += x_i**2\n \n return math.exp(-(sum) / 2)\n\nclass MonteCarlo():\n def __init__(self, f, x_0, x_n):\n self.f = f\n self.x_0 = x_0\n self.x_n = x_n\n \n def integrate(self, iterations):\n sum = 0\n # e.g.: 3 = int\u00e9grale triple\n dimension = len(self.x_0) \n # N * N * N * ... = N^(dimension)\n volume = np.prod([x_n - x_0 for x_0, x_n in zip(self.x_0, self.x_n)]) \n # somme des f(x_i)\n sum = 0\n for i in range(iterations):\n x_i = [random.uniform(self.x_0[k], self.x_n[k]) for k in range(dimension)]\n sum += self.f(x_i)\n \n integral = volume * (1/iterations) * sum\n return integral\n \n\nprint(f\"predicted gaussian integral value: {gaussian_theo(3)}\")\nintegral = MonteCarlo(gaussian, [-3, -3, -3], [3, 3, 3]).integrate(1000000)\nprint(f\"approximated gaussian integral value: {integral}\")\n```\n\n predicted gaussian integral value: 15.749609945722419\n approximated gaussian integral value: 15.627813816362522\n\n\n#### Am\u00e9lioration \n\n- Utiliser une r\u00e9partition arbitraire selon la forme de la fonction \u00e0 int\u00e9grer.\n", "meta": {"hexsha": "c2dec1c9faff3285568bb150531554f51a9ae7e5", "size": 12624, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "python/notebooks/integration.ipynb", "max_stars_repo_name": "Mathieu-R/computational-physics", "max_stars_repo_head_hexsha": "7b1a39ffb65f864bc741a3e709892ddb18723a58", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "python/notebooks/integration.ipynb", "max_issues_repo_name": "Mathieu-R/computational-physics", "max_issues_repo_head_hexsha": "7b1a39ffb65f864bc741a3e709892ddb18723a58", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2021-06-08T23:08:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-12T00:54:45.000Z", "max_forks_repo_path": "python/notebooks/integration.ipynb", "max_forks_repo_name": "Mathieu-R/computational-physics", "max_forks_repo_head_hexsha": "7b1a39ffb65f864bc741a3e709892ddb18723a58", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.4524421594, "max_line_length": 280, "alphanum_fraction": 0.4865335868, "converted": true, "num_tokens": 2991, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9273632916317103, "lm_q2_score": 0.8976952838963489, "lm_q1q2_score": 0.8324896533563807}} {"text": "# Linear Advection Equation\n\n\\begin{equation}\n\\begin{aligned}\n \\partial_t u(t,x) + \\partial_x (a(x) u(t,x)) &= 0, && t \\in (0,T), x \\in (x_{min}, x_{max}), \\\\\n u(0,x) &= u_0(x), && x \\in (x_{min}, x_{max}), \\\\\n \\text{boundary conditions}, &&& x \\in \\partial (x_{min}, x_{max}).\n\\end{aligned}\n\\end{equation}\n\nThe boundary conditions depend on the sign of the transport velocity $a$ at the boundary. In particular, specifying a Dirichlet type boundary condition is only allowed for inflow boundaries, e.g. $a(x_{min}) > 0$ at $x = x_{min}$.\n\n\n```julia\nusing Revise\nusing SummationByPartsOperators, OrdinaryDiffEq\nusing Plots, LaTeXStrings, Printf\n\n# general parameters\nxmin = -1.\nxmax = +1.\ntspan = (0., 8.0)\nafunc(x) = one(x)\nu0func(x) = sinpi(x)\n# Dirichlet type boundary conditions; they are used only at inflow boundaries\nleft_bc(t) = t >= 3 ? sinpi(t) : zero(t)\nright_bc(t) = zero(t)\n\n# discretisation parameters\ninterior_order = 4\nN = 101\n# whether a split form should be applied or not\nsplit_form = Val(false)\n\n# setup spatial semidiscretization\nD = derivative_operator(MattssonSv\u00e4rdShoeybi2008(), 1, interior_order, xmin, xmax, N)\n# whether or not artificial dissipation should be applied: nothing, dissipation_operator(D)\nDi = nothing\nsemi = VariableLinearAdvectionNonperiodicSemidiscretization(D, Di, afunc, split_form, left_bc, right_bc)\node = semidiscretize(u0func, semi, tspan)\n\n# solve ode\nsol = solve(ode, SSPRK104(), dt=D.\u0394x, adaptive=false, \n saveat=range(first(tspan), stop=last(tspan), length=200))\n\n# visualise the result\nplot(xguide=L\"x\", yguide=L\"u\")\nplot!(evaluate_coefficients(sol[end], semi), label=\"\")\n```\n\n\n```julia\n# make a movie\nanim = Animation()\nidx = 1\nx, u = evaluate_coefficients(sol[idx], semi)\n\nfig = plot(x, u, xguide=L\"x\", yguide=L\"u\", xlim=extrema(x), ylim=(-1.05, 1.05),\n #size=(1024,768), dpi=250,\n label=\"\", title=@sprintf(\"\\$t = %6.2f \\$\", sol.t[idx]))\nfor idx in 1:length(sol.t)\n fig[1] = x, sol.u[idx]\n plot!(title=@sprintf(\"\\$t = %6.2f \\$\", sol.t[idx]))\n frame(anim)\nend\ngif(anim)\n```\n", "meta": {"hexsha": "d359bbbeef94d7e0ee26e8daf332a2d4e69a3c51", "size": 3358, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/Advection_equation.ipynb", "max_stars_repo_name": "ranocha/SummationByPartsOperators.jl", "max_stars_repo_head_hexsha": "2f6ec738e7387553024cd82f4abff9a38fcefc96", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 23, "max_stars_repo_stars_event_min_datetime": "2018-12-06T19:51:26.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-14T19:17:47.000Z", "max_issues_repo_path": "notebooks/Advection_equation.ipynb", "max_issues_repo_name": "ranocha/SummationByPartsOperators.jl", "max_issues_repo_head_hexsha": "2f6ec738e7387553024cd82f4abff9a38fcefc96", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 107, "max_issues_repo_issues_event_min_datetime": "2017-12-17T12:07:35.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-29T14:10:41.000Z", "max_forks_repo_path": "notebooks/Advection_equation.ipynb", "max_forks_repo_name": "ranocha/SummationByPartsOperators.jl", "max_forks_repo_head_hexsha": "2f6ec738e7387553024cd82f4abff9a38fcefc96", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-02-08T11:01:43.000Z", 
"max_forks_repo_forks_event_max_datetime": "2020-02-08T11:01:43.000Z", "avg_line_length": 31.980952381, "max_line_length": 236, "alphanum_fraction": 0.5461584276, "converted": true, "num_tokens": 664, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9273632896242073, "lm_q2_score": 0.897695283896349, "lm_q1q2_score": 0.8324896515542549}} {"text": "# Nonlinear Systems of Equations\n\nNon-linear systems are extensions of the linear systems cases except the systems involve products and powers of the unknown variables. Non-linear problems are often quite difficult to manage, especially when the systems are large (many rows and many variables).\nThe solution to non-linear systems, if non-trivial or even possible, are usually iterative. Within the iterative steps is a linearization component \u2013 these linear systems which are intermediate computations within the overall solution process are treated by an appropriate linear system method (direct or iterative).\nConsider the system below:\n\n\\begin{gather}\n\\begin{matrix}\nx^2 & +~y^2 \\\\\n e^x & +~y \\\\\n\\end{matrix}\n\\begin{matrix}\n= 4\\\\\n= 1\\\\\n\\end{matrix}\n\\end{gather}\n\nSuppose we have a solution guess $x_{k},y_{k}$, which of course could be wrong, but we could linearize about that guess as\n\n\\begin{gather}\n\\mathbf{A} =\n\\begin{pmatrix}\nx_k & + ~y_k \\\\\n0 & + ~1 \\\\\n\\end{pmatrix}\n~\\mathbf{x} = \n\\begin{pmatrix}\nx_{k+1} \\\\\ny_{k+1} \\\\\n\\end{pmatrix}\n~ \\mathbf{b} = \n\\begin{pmatrix}\n4\\\\\n1 - e^{x_k}\\\\\n\\end{pmatrix}\n\\end{gather}\n\nNow if we assemble the system in the usual fashion, $\\mathbf{A} \\cdot \\mathbf{x_{k+1}} = ~ \\mathbf{b}~$ we have a system of linear equations\\footnote{Linear in $\\mathbf{x_{k+1}}$}, which expanded look like:\n\n\\begin{gather}\n\\begin{pmatrix}\nx_k & + ~y_k \\\\\n0 & + ~1 \\\\\n\\end{pmatrix}\n\\cdot\n\\begin{pmatrix}\nx_{k+1} \\\\\ny_{k+1} \\\\\n\\end{pmatrix}\n~ = \n\\begin{pmatrix}\n4\\\\\n1 - e^{x_k}\\\\\n\\end{pmatrix}\n\\end{gather}\n\nNow that the system is linear, and we can solve for $\\mathbf{x_{k+1}}$ using our linear system solver for the new guess.\nIf the system is convergent (not all are) then if we update, and repeat we will eventually find a result.\n\nWhat one really needs is a way to construct the linear system that has a systematic update method, that is discussed below\n\n## Multiple-variable extension of Newton\u2019s Method\n\nThis section presents the Newton-Raphson method as a way to sometimes solve systems of non-linear equations.\n\nConsider an example where the function \\textbf{f} is a vector-valued function of a vector argument.\n\n\\begin{gather}\n\\mathbf{f(x)} = \n\\begin{matrix}\nf_1 = & x^2 & +~y^2 & - 4\\\\\nf_2 = & e^x & +~y & - 1\\\\\n\\end{matrix}\n\\end{gather}\n\nLet's also recall Newtons method for scalar valued function of a single variable.\n\n\\begin{equation}\nx_{k+1}=x_{k} - \\frac{ f(x_{k}) }{ \\frac{df}{dx}\\rvert_{x_k} } \n\\label{eqn:NewtonFormula}\n\\end{equation}\n\nWhen extending to higher dimensions, the analog for $x$ is the vector \\textbf{x} and the analog for the function $f()$ is the vector function \\textbf{f()}.\nWhat remains is an analog for the first derivative in the denominator (and the concept of division of a matrix).\n\nThe analog to the first derivative is a matrix called the Jacobian which is comprised of the first derivatives of the function \\textbf{f} with respect to the arguments \\textbf{x}. 
\nFor example for a 2-value function of 2 arguments (as our example above)\n\n\\begin{equation}\n\\frac{df}{dx}\\rvert_{x_k} =>\n\\begin{pmatrix}\n\\frac{\\partial f_1}{\\partial x_1} & \\frac{\\partial f_1}{\\partial x_2} \\\\\n~ & ~ \\\\\n\\frac{\\partial f_2}{\\partial x_1} & \\frac{\\partial f_2}{\\partial x_2} \\\\\n\\end{pmatrix}\n\\label{eqn:Jacobian}\n\\end{equation}\n\nNext recall that division is replaced by matrix multiplication with the multiplicative inverse, so the analogy continues as\n\n\\begin{equation}\n\\frac{1}{\\frac{df}{dx}\\rvert_{x_k}} =>\n{\\begin{pmatrix}\n\\frac{\\partial f_1}{\\partial x_1} & \\frac{\\partial f_1}{\\partial x_2} \\\\\n~ & ~ \\\\\n\\frac{\\partial f_2}{\\partial x_1} & \\frac{\\partial f_2}{\\partial x_2} \\\\\n\\end{pmatrix}}^{-1}\n\\label{eqn:JacobianInverse}\n\\end{equation}\n\nLet's name the Jacobian \\textbf{J(x)}.\n\nSo the multi-variate Newton's method can be written as\n\n\\begin{equation}\n\\mathbf{x}_{k+1}=\\mathbf{x}_{k} - \\mathbf{J(x)}^{-1}\\rvert_{x_k} \\cdot \\mathbf{f(x)}\\rvert_{x_k}\n\\label{eqn:VectorNewtonFormula}\n\\end{equation}\n\nIn the linear systems lessons we did find a way to solve for an inverse, but it's not necessary, and is computationally expensive to invert in these examples -- a series of rearrangement of the system above yields a nice scheme that does not require inversion of a matrix.\n\nFirst, move the $\\mathbf{x}_{k}$ to the left-hand side.\n\n\\begin{equation}\n\\mathbf{x}_{k+1}-\\mathbf{x}_{k} = - \\mathbf{J(x)}^{-1}\\rvert_{x_k} \\cdot \\mathbf{f(x)}\\rvert_{x_k}\n\\end{equation}\n\nNext multiply both sides by the Jacobian (The Jacobian must be non-singular otherwise we are dividing by zero)\n\n\\begin{equation}\n\\mathbf{J(x)}\\rvert_{x_k} \\cdot (\\mathbf{x}_{k+1}-\\mathbf{x}_{k}) = - \\mathbf{J(x)}\\rvert_{x_k} \\cdot \\mathbf{J(x)}^{-1}\\rvert_{x_k} \\cdot \\mathbf{f(x)}\\rvert_{x_k}\n\\end{equation}\n\nRecall a matrix multiplied by its inverse returns the identity matrix (the matrix equivalent of unity)\n\n\\begin{equation}\n-\\mathbf{J(x)}\\rvert_{x_k} \\cdot (\\mathbf{x}_{k+1}-\\mathbf{x}_{k}) = \\mathbf{f(x)}\\rvert_{x_k}\n\\end{equation}\n\nSo we now have an algorithm:\n\n1) Start with an initial guess $\\mathbf{x}_{k}$, compute $\\mathbf{f(x)}\\rvert_{x_k}$, and $\\mathbf{J(x)}\\rvert_{x_k}$.\n\n2) Test for stopping. Is $\\mathbf{f(x)}\\rvert_{x_k}$ close to zero? If yes, exit and report results, otherwise continue.\n\n3) Solve the linear system $\\mathbf{J(x)}\\rvert_{x_k} \\cdot (\\mathbf{x}_{k+1}-\\mathbf{x}_{k}) = \\mathbf{f(x)}\\rvert_{x_k}$.\n\n4) Test for stopping. Is $ (\\mathbf{x}_{k+1}-\\mathbf{x}_{k})$ close to zero? If yes, exit and report results, otherwise continue.\n\n5) Compute the update $\\mathbf{x}_{k+1} = \\mathbf{x}_{k} - (\\mathbf{x}_{k+1}-\\mathbf{x}_{k}) $, then\n\n6) Move the update into the guess vector $\\mathbf{x}_{k} <=\\mathbf{x}_{k+1}$ =and repeat step 1. 
Stop after too many steps.\n\n\n\n\n## Example using Analytical Derivatives\n\nNow to complete the example we will employ this algorithm.\n\nThe function (repeated)\n\n\\begin{gather}\n\\mathbf{f(x)} = \n\\begin{matrix}\nf_1 = & x^2 & +~y^2 & - 4\\\\\nf_2 = & e^x & +~y & - 1\\\\\n\\end{matrix}\n\\end{gather}\n\nThen the Jacobian, here we will compute it analytically because we can\n\n\\begin{equation}\n\\mathbf{J(x)}=>\n{\\begin{pmatrix}\n2x & 2y \\\\\n~ & ~ \\\\\ne^x & 1 \\\\\n\\end{pmatrix}}\n\\end{equation}\n\nNow for the scripts.\n\nWe will start by defining the two equations, and their derivatives, as well a a vector valued function `func` and its Jacobian `jacob` as below. Here the two modules `LinearSolverPivot` and `vector_matrix_lib` are just python source code files containing prototype functions.\n\n\n```python\n#################################################################\n# Newton Solver Example -- Analytical Derivatives #\n#################################################################\nimport math # This will import math module from python distribution\nfrom LinearSolverPivot import linearsolver # This will import our solver module\nfrom vector_matrix_lib import writeM,writeV,vdotv,vvsub # This will import our vector functions\n\ndef eq1(x,y):\n eq1 = x**2 + y**2 - 4.0\n return(eq1)\n\ndef eq2(x,y):\n eq2 = math.exp(x) + y - 1.0\n return(eq2)\n\ndef ddxeq1(x,y):\n ddxeq1 = 2.0*x\n return(ddxeq1)\n\ndef ddyeq1(x,y):\n ddyeq1 = 2.0*y\n return(ddyeq1)\n\ndef ddxeq2(x,y):\n ddxeq2 = math.exp(x)\n return(ddxeq2)\n\ndef ddyeq2(x,y):\n ddyeq2 = 1.0\n return(ddyeq2)\n\ndef func(x,y):\n func = [0.0 for i in range(2)] # null list\n # build the function\n func[0] = eq1(x,y)\n func[1] = eq2(x,y)\n return(func)\n\ndef jacob(x,y):\n jacob = [[0.0 for j in range(2)] for i in range(2)] # constructed list \n #build the jacobian\n jacob[0][0]=ddxeq1(x,y)\n jacob[0][1]=ddyeq1(x,y)\n jacob[1][0]=ddxeq2(x,y)\n jacob[1][1]=ddyeq2(x,y)\n return(jacob)\n```\n\nNext we create vectors to store values, and supply initial guesses to the system, and echo the inputs.\n\n\n```python\ndeltax = [0.0 for i in range(2)] # null list\nxguess = [0.0 for i in range(2)] # null list\nmyfunc = [0.0 for i in range(2)] # null list\nmyjacob = [[0.0 for j in range(2)] for i in range(2)] # constructed list \n# supply initial guess\nxguess[0] = float(input(\"Value for x : \"))\nxguess[1] = float(input(\"Value for y : \"))\n# build the initial function\nmyfunc = func(xguess[0],xguess[1])\n#build the initial jacobian\nmyjacob=jacob(xguess[0],xguess[1])\n#write initial results\nwriteV(xguess,2,\"Initial X vector \")\nwriteV(myfunc,2,\"Initial FUNC vector \")\nwriteM(myjacob,2,2,\"Initial Jacobian \")\n# solver parameters\ntolerancef = 1.0e-9\ntolerancex = 1.0e-9\n```\n\n Value for x : 2\n Value for y : 3\n\n\n ------ Initial X vector ------\n 2.0\n 3.0\n -----------------------------\n ------ Initial FUNC vector ------\n 9.0\n 9.38905609893065\n -----------------------------\n ------ Initial Jacobian ------\n [4.0, 6.0]\n [7.38905609893065, 1.0]\n -----------------------------\n\n\nNow we apply the algorithm a few times, here the count is set to 10. 
So we enter the loop, test for stopping, then update.\n\n\n```python\n# Newton-Raphson\nfor iteration in range(10):\n    myfunc = func(xguess[0],xguess[1])\n    testf = vdotv(myfunc,myfunc,2)\n    if testf <= tolerancef :\n        print(\"f(x) close to zero\\n test value : \", testf)\n        break\n    myjacob=jacob(xguess[0],xguess[1])\n    deltax=linearsolver(myjacob,myfunc)\n    testx = vdotv(deltax,deltax,2)\n    if testx <= tolerancex :\n        print(\"solution change small\\n test value : \", testx)\n        break\n    xguess=vvsub(xguess,deltax,2)\n## print(\"iteration : \",iteration)\n## writeV(xguess,2,\"Current X vector \")\n## writeV(myfunc,2,\"Current FUNC vector \")\nprint(\"Exiting Iteration : \",iteration)\nwriteV(xguess,2,\"Exiting X vector \")\nwriteV(myfunc,2,\"Exiting FUNC vector \")\n```\n\n    f(x) close to zero\n     test value :  4.741631714784361e-10\n    Exiting Iteration :  6\n     ------ Exiting X vector ------\n    -1.8162690125838175\n    0.8373700502918618\n    -----------------------------\n     ------ Exiting FUNC vector ------\n    2.1727197990095704e-05\n    1.446388252723807e-06\n    -----------------------------\n\n\n## Quasi-Newton Method using Finite Difference Approximations for the Derivative\nThe next variant is to approximate the derivatives -- usually a Finite-Difference approximation is used, either forward, backward, or centered differences -- generally determined based on the actual behavior of the functions themselves or by trial and error. \n\nFor really huge systems, we usually make the program itself make the adaptations as it proceeds.\n\nThe coding for a finite-difference representation of a Jacobian is shown in the listing that follows. \nIn constructing the Jacobian, we observe that each column of the Jacobian is simply the directional derivative of the function with respect to the variable associated with the column. \nFor instance, the first column of the Jacobian in the example is the first derivative of the function vector (all rows) with respect to the first variable, in this case $x$. The second column is the first derivative of the function vector (all rows) with respect to the second variable, $y$. 
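\nAs a sketch of that observation, a general-purpose finite-difference Jacobian builder for an $N$-dimensional system could look as below. This is only an illustrative sketch, not one of the course modules: it assumes the whole system is packaged as a single function `func` that accepts and returns a list of length $N$, and the name `approx_jacobian` is made up for this note.\n\n```python\ndef approx_jacobian(func, x, delta=1.0e-6):\n    'Forward-difference Jacobian: column j holds the derivative of func with respect to x[j].'\n    n = len(x)\n    f0 = func(x)\n    jac = [[0.0 for j in range(n)] for i in range(n)]\n    for j in range(n):\n        xp = list(x)\n        xp[j] = xp[j] + delta\n        fp = func(xp)\n        for i in range(n):\n            jac[i][j] = (fp[i] - f0[i]) / delta\n    return jac\n```\n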
\nThis structure is useful to generalize the Jacobian construction method because we could write (yet another) prototype function that can take the directional derivatives for us, and just insert the returns as columns; in the example we simply modified the `ddx` and `ddy` functions from analytical to simple finite differences.\n\nThe example listing is specific to the 2X2 function in the example, but the extension to more general cases is evident.\n\n\n\n```python\n#################################################################\n# Newton Solver Example -- Numerical Derivatives #\n#################################################################\nimport math # This will import math module from python distribution\nfrom LinearSolverPivot import linearsolver # This will import our solver module\nfrom vector_matrix_lib import writeM,writeV,vdotv,vvsub # This will import our vector functions\n\ndef eq1(x,y):\n eq1 = x**2 + y**2 - 4.0\n return(eq1)\n\ndef eq2(x,y):\n eq2 = math.exp(x) + y - 1.0\n return(eq2)\n##############################################################\n# This portion is changed for finite-difference method to evaluate derivatives #\n##############################################################\ndef ddxeq1(x,y):\n delta = 1.0e-6\n ddxeq1 = (eq1(x+delta,y)-eq1(x,y))/delta\n return(ddxeq1)\n\ndef ddyeq1(x,y):\n delta = 1.0e-6\n ddyeq1 = (eq1(x,y+delta)-eq1(x,y))/delta\n return(ddyeq1)\n\ndef ddxeq2(x,y):\n delta = 1.0e-6\n ddxeq2 = (eq2(x+delta,y)-eq2(x,y))/delta\n return(ddxeq2)\n\ndef ddyeq2(x,y):\n delta = 1.0e-6\n ddyeq2 = (eq2(x,y+delta)-eq2(x,y))/delta\n return(ddyeq2)\n##############################################################\ndef func(x,y):\n func = [0.0 for i in range(2)] # null list\n # build the function\n func[0] = eq1(x,y)\n func[1] = eq2(x,y)\n return(func)\n\ndef jacob(x,y):\n jacob = [[0.0 for j in range(2)] for i in range(2)] # constructed list \n #build the jacobian\n jacob[0][0]=ddxeq1(x,y)\n jacob[0][1]=ddyeq1(x,y)\n jacob[1][0]=ddxeq2(x,y)\n jacob[1][1]=ddyeq2(x,y)\n return(jacob)\ndeltax = [0.0 for i in range(2)] # null list\nxguess = [0.0 for i in range(2)] # null list\nmyfunc = [0.0 for i in range(2)] # null list\nmyjacob = [[0.0 for j in range(2)] for i in range(2)] # constructed list \n# supply initial guess\nxguess[0] = float(input(\"Value for x : \"))\nxguess[1] = float(input(\"Value for y : \"))\n# build the initial function\nmyfunc = func(xguess[0],xguess[1])\n#build the initial jacobian\nmyjacob=jacob(xguess[0],xguess[1])\n#write initial results\nwriteV(xguess,2,\"Initial X vector \")\nwriteV(myfunc,2,\"Initial FUNC vector \")\nwriteM(myjacob,2,2,\"Initial Jacobian \")\n# solver parameters\ntolerancef = 1.0e-9\ntolerancex = 1.0e-9\n# Newton-Raphson\nfor iteration in range(10):\n myfunc = func(xguess[0],xguess[1])\n testf = vdotv(myfunc,myfunc,2)\n if testf <= tolerancef :\n print(\"f(x) close to zero\\n test value : \", testf)\n break\n myjacob=jacob(xguess[0],xguess[1])\n deltax=linearsolver(myjacob,myfunc)\n testx = vdotv(deltax,deltax,2)\n if testx <= tolerancex :\n print(\"solution change small\\n test value : \", testx)\n break\n xguess=vvsub(xguess,deltax,2)\n## print(\"iteration : \",iteration)\n## writeV(xguess,2,\"Current X vector \")\n## writeV(myfunc,2,\"Current FUNC vector \")\nprint(\"Exiting Iteration : \",iteration)\nwriteV(xguess,2,\"Exiting X vector \")\nwriteV(myfunc,2,\"Exiting FUNC vector using Finite-Differences\")\n```\n\n Value for x : 0\n Value for y : 2\n\n\n ------ Initial X vector ------\n 0.0\n 2.0\n 
-----------------------------\n ------ Initial FUNC vector ------\n 0.0\n 2.0\n -----------------------------\n ------ Initial Jacobian ------\n [1.000088900582341e-06, 4.0000010006480125]\n [1.0000005001842283, 1.000000000139778]\n -----------------------------\n f(x) close to zero\n test value : 1.124762923231742e-16\n Exiting Iteration : 5\n ------ Exiting X vector ------\n -1.816264071231508\n 0.8373678009903361\n -----------------------------\n ------ Exiting FUNC vector using Finite-Differences ------\n 1.0581842957435583e-08\n 7.077372021768724e-10\n -----------------------------\n\n\n\n\n\n ()\n\n\n\n\n```python\n\n```\n\n## Exercises\n\nWrite a script that forward defines the multi-variate functions and implements the Newton-Raphson technique.\nImplement the method, using analytical derivatives, and find a solution to:\n\\begin{gather}\n\\begin{matrix}\n x^3 & +~3y^2 & = 21\\\\\nx^2& +~2y & = -2 \\\\\n\\end{matrix}\n\\end{gather}\n\nRepeat the exercise, except use finite-differences to approximate the derivatives.\n\n\n```python\n\n```\n\nWrite a script that forward defines the multi-variate functions and implements the Newton-Raphson technique.\nImplement the method, using analytical derivatives, and find a solution to:\n\\begin{gather}\n\\begin{matrix}\nx^2 & +~ y^2 & +~z^2 & =~ 9\\\\\n~ & ~ & xyz & =~ 1\\\\\nx & +~ y & -z^2 & =~ 0\\\\\n\\end{matrix}\n\\end{gather}\n\nRepeat the exercise, except use finite-differences to approximate the derivatives.\n\n\n```python\n\n```\n\nWrite a script that forward defines the multi-variate functions and implements the Newton-Raphson technique.\nImplement the method, using analytical derivatives, and find a solution to:\n\\begin{gather}\n\\begin{matrix}\nxyz & -~ x^2 & +~y^2 & =~ 1.34\\\\\n~ & xy &-~z^2 & =~ 0.09\\\\\ne^x & -~ e^y & +z & =~ 0.41\\\\\n\\end{matrix}\n\\end{gather}\n\nRepeat the exercise, except use finite-differences to approximate the derivatives.\n\n\n```python\n\n```\n", "meta": {"hexsha": "3111de0b4cd03c7928d83bfd659cbc2ae356c16b", "size": 23340, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "9-MyJupyterNotebooks/15-NonLinearSystems/NonLinearSystems.ipynb", "max_stars_repo_name": "dustykat/engr-1330-webroo", "max_stars_repo_head_hexsha": "32d40e661bc0e0a4b8e64f577c1b2171b12d38af", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "9-MyJupyterNotebooks/15-NonLinearSystems/NonLinearSystems.ipynb", "max_issues_repo_name": "dustykat/engr-1330-webroo", "max_issues_repo_head_hexsha": "32d40e661bc0e0a4b8e64f577c1b2171b12d38af", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "9-MyJupyterNotebooks/15-NonLinearSystems/NonLinearSystems.ipynb", "max_forks_repo_name": "dustykat/engr-1330-webroo", "max_forks_repo_head_hexsha": "32d40e661bc0e0a4b8e64f577c1b2171b12d38af", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.298600311, "max_line_length": 336, "alphanum_fraction": 0.5286632391, "converted": true, "num_tokens": 5004, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9149009573133051, "lm_q2_score": 0.9099070017626537, "lm_q1q2_score": 0.8324747869787311}} {"text": "# Sweep Signals and their Spectra\n\n*This Jupyter notebook is part of a [collection of notebooks](../index.ipynb) in the masters module Selected Topics in Audio Signal Processing, Communications Engineering, Universit\u00e4t Rostock. Please direct questions and suggestions to [Sascha.Spors@uni-rostock.de](mailto:Sascha.Spors@uni-rostock.de).*\n\n## The Linear Sweep\n\nA [linear sweep](https://en.wikipedia.org/wiki/Chirp#Linear) is an exponential signal with linear increase in its instantaneous frequency. It is defined as\n\n\\begin{equation}\nx(t) = e^{j \\omega(t) t}\n\\end{equation}\n\nwith \n\n\\begin{equation}\n\\omega(t) = \\omega_\\text{l} - \\frac{\\omega_\\text{u} - \\omega_\\text{l}}{2 T} t\n\\end{equation}\n\nwhere $\\omega_\\text{l}$ and $\\omega_\\text{u}$ denote its lower and upper frequency limit, and $T$ its total duration. The linear sweep is generated in the following by sampling the continuous time.\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport soundfile as sf\n\nfs = 48000 # sampling frequency\nom_l = 2*np.pi*200 # lower angluar frequency of sweep\nom_u = 2*np.pi*18000 # upper angular frequency of sweep\nT = 5 # duration of sweep\nk = (om_u - om_l)/(2*T)\n\nt = np.linspace(0, T, fs*T)\nx = np.exp(1j*(om_l + k*t)*t)\n```\n\nA short section of the linear sweep signal is plotted for illustration\n\n\n```python\nidx = range(5000) # portion of the signal to show\n\nplt.figure(figsize=(10, 5))\nplt.plot(t[idx], np.real(x[idx]))\nplt.xlabel(r'$t$ in s')\nplt.ylabel(r'$x(t)$')\nplt.grid()\n```\n\n### Auralization\n\nLets listen to the linear sweep. Please be careful with the volume of your speakers or headphones. Start with a very low volume and increase if necessary. This holds especially for the low and high frequencies which can damage your speakers at high levels. \n\n\n```python\nsf.write('linear_sweep.wav', np.real(x), fs)\n```\n\n\n[linear_sweep.wav](linear_sweep.wav)\n\n### Spectrogram\n\nThe spectrogram of the linear sweep is computed and plotted\n\n\n```python\nplt.figure(figsize=(8, 6))\nplt.specgram(x, Fs=fs, sides='onesided')\nplt.xlabel('$t$ in s')\nplt.ylabel('$f$ in Hz');\n```\n\n### Spectrum of a Linear Sweep (Analytic Solution)\n\nThe analytic solution of the Fourier transform of the linear sweep signal is used to compute and plot its overall magnitude spectrum $|X(j \\omega)|$\n\n\n```python\nfrom scipy.special import fresnel\n\nf = np.linspace(10, 20000, 1000)\nom = 2*np.pi*f\nom_s = 2*np.pi*200 # lower angluar frequency of sweep\nom_e = 2*np.pi*18000 # upper angular frequency of sweep\nT = 5 # duration of sweep\nk = (om_e - om_s)/(2*T)\n\n\na = (om-om_s)/np.sqrt(2*np.pi*k)\nb = (2*k*T-(om-om_s))/np.sqrt(2*np.pi*k)\nSa, Ca = fresnel(a)\nSb, Cb = fresnel(b)\n\nX = np.sqrt(np.pi/(2*k)) * np.sqrt((Ca + Cb)**2 + (Sa + Sb)**2)\n\nplt.figure(figsize=(8, 5))\nplt.plot(f, X)\nplt.xlabel('$f$ in Hz')\nplt.ylabel(r'$|X(f)|$')\nplt.grid()\n```\n\n### Crest Factor\n\nThe [Crest factor](https://en.wikipedia.org/wiki/Crest_factor) of the sweep is computed\n\n\n```python\nxrms = np.sqrt(1/T * np.sum(np.real(x)**2) * 1/fs)\nC = np.max(np.real(x)) / xrms\nprint('Crest factor C = {:<1.5f}'.format(C))\n```\n\n Crest factor C = 1.41421\n\n\n## Exponential Sweep\n\nAn [exponential sweep](https://en.wikipedia.org/wiki/Chirp#Exponential) is an exponential signal with an exponential increase in its instantaneous frequency. 
It is defined as\n\n\\begin{equation}\nx(t) = e^{j \\frac{\\omega_l}{\\ln(k)} (k^t - 1)}\n\\end{equation}\n\nwith\n\n\\begin{equation}\nk = \\left( \\frac{\\omega_\\text{u}}{\\omega_\\text{l}} \\right)^\\frac{1}{T}\n\\end{equation}\n\nwhere $\\omega_\\text{l}$ and $\\omega_\\text{u}$ denote its lower and upper frequency limit, and $T$ its total duration. The exponential sweep is generated in the following by sampling the continuous time.\n\n\n```python\nom_l = 2*np.pi*100 # lower angular frequency of sweep\nom_u = 2*np.pi*18000 # upper angular frequency of sweep\nT = 5 # duration of sweep\nk = (om_u / om_l)**(1/T)\n\nt = np.linspace(0, T, fs*T)\nx = np.exp(1j*om_l * (k**(t) - 1) / np.log(k))\n```\n\nA short section of the exponential sweep signal is plotted for illustration\n\n\n```python\nidx = range(5000) # portion of the signal to show\n\nplt.figure(figsize=(10, 5))\nplt.plot(t[idx], np.real(x[idx]))\nplt.xlabel(r'$t$ in s')\nplt.ylabel(r'$x(t)$')\nplt.grid()\n```\n\n### Spectrogram\n\nThe spectrogram of the logarithmic sweep is computed and plotted\n\n\n```python\nplt.figure(figsize=(8, 6))\nplt.specgram(x, Fs=fs, sides='onesided')\nplt.xlabel('$t$ in s')\nplt.ylabel('$f$ in Hz');\n```\n\n### Spectrum\n\nThe discrete Fourier transform of the logarithmic sweep is computed and its magnitude spectrum is plotted.\n\n\n```python\nX = np.fft.rfft(np.real(x))\nf = np.linspace(0, fs/2, len(X))\n\nplt.figure(figsize=(8, 6))\nplt.plot(f, 20*np.log10(np.abs(X)))\nplt.xlabel(r'$f$ in Hz')\nplt.ylabel(r'$|X(f)|$ in dB')\nplt.axis([20, 20000, 40, 70])\nplt.grid()\n```\n\n### Auralization\n\nLet's listen to the exponential sweep. Please be careful with the volume of your speakers or headphones. Start with a very low volume and increase if necessary. This holds especially for the low and high frequencies which can damage your speakers at high levels.\n\n\n```python\nsf.write('exponential_sweep.wav', np.real(x), fs)\n```\n\n\n[exponential_sweep.wav](exponential_sweep.wav)\n\n**Copyright**\n\nThis notebook is provided as [Open Educational Resources](https://en.wikipedia.org/wiki/Open_educational_resources). Feel free to use the notebook for your own purposes. The text/images/data are licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). 
Please attribute the work as follows: *Sascha Spors, Selected Topics in Audio Signal Processing - Supplementary Material*.\n", "meta": {"hexsha": "fde5143298e1d254127b57b95b908e62597dbbb9", "size": 491829, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "electroacoustics/sweep_spectrum.ipynb", "max_stars_repo_name": "spatialaudio/-selected-topics-in-audio-signal-processing-lecture-", "max_stars_repo_head_hexsha": "d56c54401ad15f72042baeba88a22809c6c9f85c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 14, "max_stars_repo_stars_event_min_datetime": "2017-10-19T14:54:02.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-30T12:39:02.000Z", "max_issues_repo_path": "electroacoustics/sweep_spectrum.ipynb", "max_issues_repo_name": "spatialaudio/-selected-topics-in-audio-signal-processing-lecture-", "max_issues_repo_head_hexsha": "d56c54401ad15f72042baeba88a22809c6c9f85c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "electroacoustics/sweep_spectrum.ipynb", "max_forks_repo_name": "spatialaudio/-selected-topics-in-audio-signal-processing-lecture-", "max_forks_repo_head_hexsha": "d56c54401ad15f72042baeba88a22809c6c9f85c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 1083.3237885463, "max_line_length": 146024, "alphanum_fraction": 0.957578752, "converted": true, "num_tokens": 1672, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9416541544761566, "lm_q2_score": 0.8840392756357327, "lm_q1q2_score": 0.8324592566224799}} {"text": "\n\n\n```python\nimport sympy as sp\n```\n\n\n```python\nsp.symbols('x')\n```\n\n\n\n\n$\\displaystyle x$\n\n\n\n\n```python\nx= sp.symbols('x')\n```\n\n\n```python\ny=sp.symbols('y')\n```\n\n\n```python\nx+x\n```\n\n\n\n\n$\\displaystyle 2 x$\n\n\n\n\n```python\n#2x+1=0\nsp.solve(2*x+1)\n```\n\n\n\n\n [-1/2]\n\n\n\n\n```python\n#3x+8=2x-3\necuaci\u00f3n = sp.Eq(3*x+8,2*x-3)\n```\n\n\n```python\nsp.solve(ecuaci\u00f3n)\n```\n\n\n\n\n [-11]\n\n\n\n\n```python\n#x^2-1=0\nsoluciones = sp.solve(x**2-1)\n```\n\n\n```python\nsoluciones\n```\n\n\n\n\n [-1, 1]\n\n\n\n\n```python\nsoluciones[0]\n```\n\n\n\n\n$\\displaystyle -1$\n\n\n\n\n```python\nsoluciones[1]\n```\n\n\n\n\n$\\displaystyle 1$\n\n\n\n\n```python\nsp.solve(x**2+1)\n```\n\n\n\n\n [-I, I]\n\n\n\n\n```python\nsp.solve(x**2-3*x+1)\n```\n\n\n\n\n [3/2 - sqrt(5)/2, sqrt(5)/2 + 3/2]\n\n\n\n\n```python\nsp.init_printing()\n```\n\n\n```python\nsp.solve(x**2-3*x+1)\n```\n\n\n```python\n#x^3-5*x^2=-6*x\n```\n\n\n```python\necuacion = sp.Eq(x**3-5*x**2,-6*x) #se usa coma \",\" no igual\n```\n\n\n```python\necuacion\n```\n\n\n```python\nsp.solve(ecuacion)\n```\n\n\n```python\nsp.plot(x**2)\n```\n\n\n```python\nsp.plot(x**2,x+1)\n```\n\n\n```python\nsp.plot(x**2,ylim=(0,9),xlim=(0,3), line_color='m')\n```\n\n\n```python\nf = 1/(x**2-4)\n```\n\n\n```python\nsp.solve(x**2-4)\n```\n\n\n```python\n#f(-1)=\nf.subs(x,-1)\n```\n\n\n```python\nf.subs(x,x+y)\n```\n\n\n```python\ng = x**2-3*x*y\ng\n```\n\n\n```python\ng.subs(x,1)\n```\n\n\n```python\ng.subs(y,2)\n```\n\n\n```python\ng.subs(x,1).subs(y,2)\n```\n\n\n```python\ng.subs([[x,1],[y,2]])\n```\n\n\n```python\nf.subs(y,1)\n```\n\n\n```python\nf.subs(x,g)\n```\n\n\n```python\nsp.factor(x**2-2*x+1)\n```\n\n\n```python\nsp.expand(x*(x-2)*(3*x-2)**2) #expand 
trata de simplificar siempre\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "687c6bfc3975851b4964e795a59df8537c11f080", "size": 92054, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Feb11Calc.ipynb", "max_stars_repo_name": "SantanaNicole/EjerciciosProgramacion1", "max_stars_repo_head_hexsha": "a1af78885a6c65c9e428789572a2daee3621192c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Feb11Calc.ipynb", "max_issues_repo_name": "SantanaNicole/EjerciciosProgramacion1", "max_issues_repo_head_hexsha": "a1af78885a6c65c9e428789572a2daee3621192c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Feb11Calc.ipynb", "max_forks_repo_name": "SantanaNicole/EjerciciosProgramacion1", "max_forks_repo_head_hexsha": "a1af78885a6c65c9e428789572a2daee3621192c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 96.0897703549, "max_line_length": 22126, "alphanum_fraction": 0.8240163382, "converted": true, "num_tokens": 680, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9416541528387692, "lm_q2_score": 0.8840392695254318, "lm_q1q2_score": 0.8324592494211748}} {"text": "```python\nimport numpy as np\nimport numpy.linalg as LA\nfrom sklearn import datasets\nfrom sklearn.linear_model import LogisticRegression\nimport matplotlib.pyplot as plt\n```\n\n# Logistic Regression\n\n## Maximim Log-likelihood\n\nHere we will use logistic regression to conduct a binary classification. The logistic regression process will be formulated using Maximum Likelihood estimation. To begin, consider a random variable $y \\in\\{0,1\\}$. Let the $p$ be the probability that $y=1$. Given a set of $N$ trials, the liklihood of the sequence $y_{1}, y_{2}, \\dots, y_{N}$ is given by:\n\n$\\mathcal{L} = \\Pi_{i}^{N}p^{y_{i}}(1-p)^{1 - y_{i}}$\n\nGiven a set of labeled training data, the goal of maximum liklihood estimation is to determine a probability distribution that best recreates the empirical distribution of the training set. \n\nThe log-likelihood is the logarithmic transformation of the likelihood function. As logarithms are strictly increasing functions, the resulting solution from maximizing the likelihood vs. the log-likelihood are the equivalent. Given a dataset of cardinality $N$, the log-likelihood (normalized by $N$) is given by:\n\n$l = \\frac{1}{N}\\sum_{i=1}^{N}\\Big(y_{i}\\log(p) + (1 - y_{i})\\log(1 - p)\\Big)$\n\n## Logistic function\n\nLogistic regression performs binary classification based on a probabilistic interpretation of the data. Essentially, the process seeks to assign a probability to new observations. If the probability associated with the new instance of data is greater than 0.5, then the new observation is assigned to 1 (for example). If the probability associated with the new instance of the data is less than 0.5, then it is assigned to 0. To map the real numerical values into probabilities (which must lie between 0 and 1), logistic regression makes use of the logistic (sigmoid) function, given by:\n\n$\\sigma(t) = \\frac{1}{1 + e^{-t}}$\n\nNote that by setting $t=0$, $\\sigma(0) = 0.5$, which is the decision boundary. 
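A quick numerical check of this decision boundary (a small sketch, not one of the original notebook cells) is:\n\n\n```python\nimport numpy as np\n\n# the logistic function evaluated at t = 0 returns exactly 0.5\nt0 = 0.0\nprint(1/(1 + np.exp(-t0)))\n```\n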
We should also note that the derivative of the logistic function with respect to the parameter $t$ is:\n\n$\\frac{d}{dt}\\sigma(t) = \\sigma(t)(1 - \\sigma(t))$\n\n## Logistic Regression and Derivation of the Gradient\n\nLet's assume the training data consists of $N$ observations, where observation $i$ is denoted by the pair $(y_{i},\\mathbf{x}_{i})$, where $y \\in \\{0,1\\}$ is the label for the feature vector $\\mathbf{x}$. We wish to compute a linear decision boundary that best seperates the labeled observations. Let $\\mathbf{\\theta}$ denote the vector of coefficients to be estimated. In this problem, the log likelihood can be expressed as:\n\n$l = \\frac{1}{N}\\sum_{i=1}^{N}\\Big(y_{i}\\log\\big(\\sigma(\\mathbf{\\theta}^{T}\\mathbf{x}_{i})\\big) + (1 - y_{i}) \\log\\big( 1 - \\sigma(\\mathbf{\\theta}^{T}\\mathbf{x}_{i})\\big)\\Big)$\n\nThe gradient of the objective with respect to the $j^{th}$ element of $\\mathbf{\\theta}$ is:\n$$\n\\begin{aligned}\n\\frac{d}{d\\theta^{(j)}} l &= \\frac{1}{N}\\sum_{i=1}^{N}\\Bigg( \\frac{d}{d\\theta^{(j)}} y_{i}\\log\\big(\\sigma(\\mathbf{\\theta}^{T}\\mathbf{x}_{i})\\big) + \\frac{d}{d\\theta^{(j)}}(1 - y_{i}) \\log\\big( 1 - \\sigma(\\mathbf{\\theta}^{T}\\mathbf{x}_{i})\\big)\\Bigg) \\\\\n&= \\frac{1}{N}\\sum_{i=1}^{N}\\Bigg(\\frac{y_{i}}{\\sigma(\\mathbf{\\theta}^{T}\\mathbf{x}_{i})} - \\frac{1 - y_{i}}{1 - \\sigma(\\mathbf{\\theta}^{T}\\mathbf{x}_{i})} \\Bigg)\\frac{d}{d\\theta^{(j)}}\\sigma(\\mathbf{\\theta}^{T}\\mathbf{x}_{i})\\\\\n&= \\frac{1}{N}\\sum_{i=1}^{N}\\Bigg(\\frac{y_{i}}{\\sigma(\\mathbf{\\theta}^{T}\\mathbf{x}_{i})} - \\frac{1 - y_{i}}{1 - \\sigma(\\mathbf{\\theta}^{T}\\mathbf{x}_{i})} \\Bigg)\\sigma(\\mathbf{\\theta}^{T}\\mathbf{x}_{i})\\Big(1 - \\sigma(\\mathbf{\\theta}^{T}\\mathbf{x}_{i})\\Big)x_{i}^{(j)}\\\\\n&= \\frac{1}{N}\\sum_{i=1}^{N}\\Bigg(\\frac{y_{i} - \\sigma(\\mathbf{\\theta}^{T}\\mathbf{x}_{i})}{\\sigma(\\mathbf{\\theta}^{T}\\mathbf{x}_{i})\\Big(1 - \\sigma(\\mathbf{\\theta}^{T}\\mathbf{x}_{i})\\Big)}\\Bigg)\\sigma(\\mathbf{\\theta}^{T}\\mathbf{x}_{i})\\Big(1 - \\sigma(\\mathbf{\\theta}^{T}\\mathbf{x}_{i})\\Big)x_{i}^{(j)}\\\\\n&= \\frac{1}{N}\\sum_{i=1}^{N}\\Bigg(y_{i} - \\sigma(\\mathbf{\\theta}^{T}\\mathbf{x}_{i})\\Bigg)x_{i}^{(j)}\n\\end{aligned}\n$$\n\nwhere the last equation has the familiar form of the product of the prediciton error and the $j^{th}$ feature. 
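In matrix form this gradient is a single line of code. The sketch below is illustrative only; it assumes a feature matrix X with one observation per row, a column vector y of labels, N observations, and the `sigmoid` helper that is defined later in this notebook:\n\n\n```python\n# gradient of the normalized log-likelihood: each feature weighted by the prediction error\ngrad = np.dot(X.T, y - sigmoid(np.dot(X, theta))) / N\n```\n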
With the gradient of the log likelihood function, the parameter vector $\\mathbf{\\theta}$ can now be calculated via gradient ascent (as we're maximizing the log likelihood):\n\n$$\n\\begin{equation}\n \\mathbf{\\theta}^{(j)}(k+1) = \\mathbf{\\theta}^{(j)}(k) + \\alpha \\frac{1}{N}\\sum_{i=1}^{N}\\Bigg( y_{i} - \\sigma(\\mathbf{\\theta}^{T}(k)\\mathbf{x}_{i}))\\Bigg)x_{i}^{(j)}\n\\end{equation}\n$$\n\n\n```python\n# Supporting Methods\n\n#logistic function\ndef sigmoid(a):\n return 1/(1 + np.exp(-a))\n\n#ll function\ndef log_likelihood(x, y, theta):\n logits = np.dot(x, theta)\n log_like = np.sum(y * logits - np.log(1 + np.exp(logits)))\n return log_like\n```\n\n\n```python\n#Load the data\niris = datasets.load_iris()\nx = iris.data[:,2:] #features will be petal width and petal length\ny = (iris.target==2).astype(np.int).reshape(len(x),1) #1 of iris-virginica, and 0 ow\n\n#Prepare Data for Regression\n#pad x with a vector of ones for computation of intercept\nx_aug = np.concatenate( (x,np.ones((len(x),1))) , axis=1)\n```\n\n\n```python\n#sklearn logistic regression\nlog_reg = LogisticRegression(penalty='none')\nlog_reg.fit(x,y)\nlog_reg.get_params()\ncoefs = log_reg.coef_.reshape(-1,1)\nintercept = log_reg.intercept_\ntheta_sklearn = np.concatenate((coefs, intercept.reshape(-1,1)), axis=0)\nprint(\"sklearn coefficients:\")\nprint(theta_sklearn)\nprint(\"sklearn log likelihood: \", log_likelihood(x_aug, y, theta_sklearn))\n```\n\n sklearn coefficients:\n [[ 5.75452053]\n [ 10.44681116]\n [-45.27248307]]\n sklearn log likelihood: -10.281754052558687\n\n\n C:\\Users\\danie\\Anaconda3\\Lib\\site-packages\\sklearn\\utils\\validation.py:63: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().\n return f(*args, **kwargs)\n\n\n\n```python\n#Perform Logistic Regression\nnum_iterations = 40000\nalpha = 1e-2\n\ntheta0 = np.ones((3,1))\ntheta = []\ntheta.append(theta0)\n\nk=0\nwhile k < num_iterations:\n \n #compute prediction error\n e = y - sigmoid(np.dot(x_aug, theta[k]))\n\n #compute the gradient of the log-likelihood\n grad_ll = np.dot(x_aug.T, e)\n\n #gradient ascent\n theta.append(theta[k] + alpha * grad_ll)\n\n #update iteration step\n k += 1\n\n if k % 4000 == 0:\n #print(\"iteration: \", k, \" delta: \", delta)\n print(\"iteration: \", k, \" log_likelihood:\", log_likelihood(x_aug, y, theta[k]))\n \ntheta_final = theta[k]\nprint(\"scratch coefficients:\")\nprint(theta_final)\n```\n\n iteration: 4000 log_likelihood: -11.529302688612685\n iteration: 8000 log_likelihood: -10.800986140073512\n iteration: 12000 log_likelihood: -10.543197464480874\n iteration: 16000 log_likelihood: -10.425775111214602\n iteration: 20000 log_likelihood: -10.36535749825992\n iteration: 24000 log_likelihood: -10.331961877189842\n iteration: 28000 log_likelihood: -10.312622172168293\n iteration: 32000 log_likelihood: -10.301055609225223\n iteration: 36000 log_likelihood: -10.293975480174714\n iteration: 40000 log_likelihood: -10.289566343598294\n scratch coefficients:\n [[ 5.5044475 ]\n [ 10.17562424]\n [-43.60242548]]\n\n\n\n```python\n#Plot the data and the decision boundary\n\n#create feature data for plotting\nx_dec_bnd = np.linspace(0,7,100).reshape(-1,1)\n#classification boundary from sklearn\ny_sklearn = (theta_sklearn[2] * np.ones((100,1)) + theta_sklearn[0] * x_dec_bnd) / -theta_sklearn[1]\n#classification boundary from scratch\ny_scratch = (theta_final[2] * np.ones((100,1)) + theta_final[0] * x_dec_bnd) / 
-theta_final[1]\n\ny_1 = np.where(y==1)[0] #training data, iris-virginica\ny_0 = np.where(y==0)[0] #training data, not iris-virginica\nplt.plot(x[y_0,0],x[y_0,1],'bo',label=\"not iris-virginica\")\nplt.plot(x[y_1,0],x[y_1,1],'k+',label=\"iris-virginica\")\nplt.plot(x_dec_bnd,y_sklearn,'r',label=\"sklearn dec. boundary\")\nplt.plot(x_dec_bnd,y_scratch,'g',label=\"scratch dec. boundary\")\nplt.xlabel('petal length')\nplt.ylabel('petal width')\nplt.title('Logistic Regression Classification')\nplt.xlim((3,7))\nplt.ylim((0,3.5))\nplt.legend()\nplt.show()\n```\n", "meta": {"hexsha": "9fafb598db9b786adf895a5ebc4b7bfb11eb67f8", "size": 38384, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": ".ipynb_checkpoints/Logistic_Regression-checkpoint.ipynb", "max_stars_repo_name": "dbarnold0220/from_scratch", "max_stars_repo_head_hexsha": "088b059377fd735326daaff6fc924123a13f8324", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": ".ipynb_checkpoints/Logistic_Regression-checkpoint.ipynb", "max_issues_repo_name": "dbarnold0220/from_scratch", "max_issues_repo_head_hexsha": "088b059377fd735326daaff6fc924123a13f8324", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": ".ipynb_checkpoints/Logistic_Regression-checkpoint.ipynb", "max_forks_repo_name": "dbarnold0220/from_scratch", "max_forks_repo_head_hexsha": "088b059377fd735326daaff6fc924123a13f8324", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-11-01T02:23:56.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-01T02:23:56.000Z", "avg_line_length": 143.223880597, "max_line_length": 26960, "alphanum_fraction": 0.8484003752, "converted": true, "num_tokens": 2653, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9334308165850443, "lm_q2_score": 0.8918110490322426, "lm_q1q2_score": 0.8324439157377312}} {"text": "# Solving ODEs with scipy.integrate.solve_ivp\n\n## Solving ordinary differential equations (ODEs)\n\nHere we will revisit the differential equations solved in 5300_Jupyter_Python_intro_01.ipynb with `odeint`, only now we'll use `solve_ivp` from Scipy. We'll compare the new and old solutions as we go.\n\n### First-order ODE\n\n\n```python\n# Import the required modules\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom scipy.integrate import solve_ivp # Now preferred to odeint\n```\n\nLet's try a one-dimensional first-order ODE, say:\n\n$\\begin{align}\n\\quad \n\\frac{dv}{dt} = -g, \\quad \\mbox{with} \\quad v(0) = 10\n\\end{align}$\n\nin some appropriate units (we'll use MKS units by default). 
This ODE can be separated and directly integrated:\n\n$\\begin{align}\n \\int_{v_0=10}^{v} dv' = - g \\int_{0}^{t} dt'\n \\quad\\Longrightarrow\\quad\n v - v_0 = - g (t - 0)\n \\quad\\Longrightarrow\\quad\n v(t) = 10 - gt\n\\end{align}$\n\n\n\nThe goal is to find the solution $v(t)$ as an array `v_pts` at the times in the array `t_pts`.\n\n\n```python\n# Define a function which calculates the derivative\ndef dv_dt_new(t, v, g=9.8):\n \"\"\"Returns the right side of a simple first-order ODE with default g.\"\"\"\n return -g \n\nt_start = 0.\nt_end = 10.\nt_pts = np.linspace(t_start, t_end, 20) # 20 points between t=0 and t=10.\n\nv_0 = np.array([10.0]) # initial condition, in form of a list or numpy array\n\nabserr = 1.e-8\nrelerr = 1.e-8\n\nsolution = solve_ivp(dv_dt_new, (t_start, t_end), v_0, t_eval=t_pts,\n rtol=relerr, atol=abserr) \n # solve_ivp( function for rhs with (t, v) argument (cf. (v,t) for odeint), \n # tspan=(starting t value, ending t value),\n # initial value of v(t), array of points we want to know v(t),\n # method='RK45' is the default method,\n # rtol=1.e-3, atol=1.e-6 are default tolerances\n # )\nv_pts = solution.y # array of results at t_pts\n```\n\n\n```python\nv_pts.shape # 1 x 100 matrix (row vector)\n```\n\n\n\n\n (1, 20)\n\n\n\nHere's how we did it before with odeint:\n\n\n```python\nfrom scipy.integrate import odeint \n\n# Define a function which calculates the derivative\ndef dv_dt(v, t, g=9.8):\n \"\"\"Returns the right side of a simple first-order ODE with default g.\"\"\"\n return -g \n\nt_pts = np.linspace(0., 10., 20) # 20 points between t=0 and t=10.\nv_0 = 10. # the initial condition\nv_pts_odeint = odeint(dv_dt, v_0, t_pts) # odeint( function for rhs, \n # initial value of v(t),\n # array of t values )\n\n```\n\n\n```python\nv_pts_odeint.shape # 100 x 1 matrix (column vector)\n```\n\n\n\n\n (20, 1)\n\n\n\nMake a table comparing results (using `flatten()` to make the matrices into arrays):\n\n\n```python\nprint(' t v(t) [solve_ivp] v(t) [odeint]')\nfor t, v_solve_ivp, v_odeint in zip(t_pts, \n v_pts.flatten(), \n v_pts_odeint.flatten()):\n print(f' {t:6.3f} {v_solve_ivp:12.7f} {v_odeint:12.7f}')\n```\n\n t v(t) [solve_ivp] v(t) [odeint]\n 0.000 10.0000000 10.0000000\n 0.526 4.8421053 4.8421053\n 1.053 -0.3157895 -0.3157895\n 1.579 -5.4736842 -5.4736842\n 2.105 -10.6315789 -10.6315789\n 2.632 -15.7894737 -15.7894737\n 3.158 -20.9473684 -20.9473684\n 3.684 -26.1052632 -26.1052632\n 4.211 -31.2631579 -31.2631579\n 4.737 -36.4210526 -36.4210526\n 5.263 -41.5789474 -41.5789474\n 5.789 -46.7368421 -46.7368421\n 6.316 -51.8947368 -51.8947368\n 6.842 -57.0526316 -57.0526316\n 7.368 -62.2105263 -62.2105263\n 7.895 -67.3684211 -67.3684211\n 8.421 -72.5263158 -72.5263158\n 8.947 -77.6842105 -77.6842105\n 9.474 -82.8421053 -82.8421053\n 10.000 -88.0000000 -88.0000000\n\n\nDifferences between `solve_ivp` and `odeint`:\n* `dv_dt(t, v)` vs. `dv_dt(v, t)`, i.e., the function definitions have the arguments reversed.\n* With `odeint`, you only specify the full array of $t$ points you want to know $v(t)$ at. 
With `solve_ivp`, you first specify the starting $t$ and ending $t$ as a tuple: `(t_start, t_end)` and then (optionally) specify `t_eval=t_pts` to evaluate $v$ at the points in the `t_pts` array.\n* `solve_ivp` returns an object from which $v(t)$ (and other results) can be found, while `ode_int` returns $v(t)$.\n* For this single first-order equation, $v(t)$ is returned for the $N$ requested $t$ points as a $1 \\times N$ two-dimensional array by `solve_ivp` and as a $N \\times 1$ array by `odeint`.\n* `odeint` has no choice of solver while the `solve_ivp` solver can be set by `method`. The default is `method='RK45'`, which is good, general-purpose Runge-Kutta solver. \n\n### Second-order ODE\n\nSuppose we have a second-order ODE such as:\n\n$$\n\\quad y'' + 2 y' + 2 y = \\cos(2x), \\quad \\quad y(0) = 0, \\; y'(0) = 0\n$$\n\nWe can turn this into two first-order equations by defining a new dependent variable. For example,\n\n$$\n\\quad z \\equiv y' \\quad \\Rightarrow \\quad z' + 2 z + 2y = \\cos(2x), \\quad z(0)=y(0) = 0.\n$$\n\nNow introduce the vector \n\n$$\n \\mathbf{U}(x) = \\left(\\begin{array}{c}\n y(x) \\\\\n z(x)\n \\end{array}\n \\right)\n \\quad\\Longrightarrow\\quad\n \\frac{d\\mathbf{U}}{dx} = \\left(\\begin{array}{c}\n z \\\\\n -2 y' - 2 y + \\cos(2x)\n \\end{array}\n \\right) \n$$\n\nWe can solve this system of ODEs using `solve_ivp` with lists, as follows. We will try it first without specifying the relative and absolute error tolerances rtol and atol.\n\n\n```python\n# Define a function for the right side\ndef dU_dx_new(x, U):\n \"\"\"Right side of the differential equation to be solved.\n U is a two-component vector with y=U[0] and z=U[1]. \n Thus this function should return [y', z']\n \"\"\"\n return [U[1], -2*U[1] - 2*U[0] + np.cos(2*x)]\n\n# initial condition U_0 = [y(0)=0, z(0)=y'(0)=0]\nU_0 = [0., 0.]\n\nx_pts = np.linspace(0, 15, 20) # Set up the mesh of x points\nresult = solve_ivp(dU_dx_new, (0, 15), U_0, t_eval=x_pts)\ny_pts = result.y[0,:] # Ok, this is tricky. For each x, result.y has two \n # components. We want the first component for all\n # x, which is y(x). The 0 means the first index and \n # the : means all of the x values.\n\n```\n\nHere's how we did it before with `odeint`:\n\n\n```python\n# Define a function for the right side\ndef dU_dx(U, x):\n \"\"\"Right side of the differential equation to be solved.\n U is a two-component vector with y=U[0] and z=U[1]. \n Thus this function should return [y', z']\n \"\"\"\n return [U[1], -2*U[1] - 2*U[0] + np.cos(2*x)]\n\n# initial condition U_0 = [y(0)=0, z(0)=y'(0)=0]\nU_0 = [0., 0.]\n\nx_pts = np.linspace(0, 15, 20) # Set up the mesh of x points\nU_pts = odeint(dU_dx, U_0, x_pts) # U_pts is a 2-dimensional array\ny_pts_odeint = U_pts[:,0] # Ok, this is tricky. For each x, U_pts has two \n # components. We want the upper component for all\n # x, which is y(x). 
The : means all of the first \n # index, which is x, and the 0 means the first\n # component in the other dimension.\n```\n\nMake a table comparing results (using `flatten()` to make the matrices into arrays):\n\n\n```python\nprint(' x y(x) [solve_ivp] y(x) [odeint]')\nfor x, y_solve_ivp, y_odeint in zip(x_pts, \n y_pts.flatten(), \n y_pts_odeint.flatten()):\n print(f' {x:6.3f} {y_solve_ivp:12.7f} {y_odeint:12.7f}')\n```\n\n x y(x) [solve_ivp] y(x) [odeint]\n 0.000 0.0000000 0.0000000\n 0.789 0.1360331 0.1360684\n 1.579 0.0346990 0.0347028\n 2.368 -0.2285865 -0.2287035\n 3.158 -0.0975122 -0.0974702\n 3.947 0.2065834 0.2067492\n 4.737 0.0927010 0.0927536\n 5.526 -0.2041225 -0.2042677\n 6.316 -0.0865181 -0.0865921\n 7.105 0.2064689 0.2066669\n 7.895 0.0832986 0.0832707\n 8.684 -0.2080277 -0.2081975\n 9.474 -0.0799116 -0.0799972\n 10.263 0.2092969 0.2094602\n 11.053 0.0765611 0.0765810\n 11.842 -0.2104970 -0.2107011\n 12.632 -0.0731547 -0.0731411\n 13.421 0.2117489 0.2118952\n 14.211 0.0695941 0.0696868\n 15.000 -0.2129369 -0.2130316\n\n\nNot very close agreement by the end. Run both again with greater accuracy.\n\n\n```python\nrelerr = 1.e-10\nabserr = 1.e-10\n\nresult = solve_ivp(dU_dx_new, (0, 15), U_0, t_eval=x_pts, \n rtol=relerr, atol=abserr)\ny_pts = result.y[0,:] \n\nU_pts = odeint(dU_dx, U_0, x_pts, \n rtol=relerr, atol=abserr) \ny_pts_odeint = U_pts[:,0] \n\nprint(' x y(x) [solve_ivp] y(x) [odeint]')\nfor x, y_solve_ivp, y_odeint in zip(x_pts, \n y_pts.flatten(), \n y_pts_odeint.flatten()):\n print(f' {x:6.3f} {y_solve_ivp:12.7f} {y_odeint:12.7f}')\n```\n\n x y(x) [solve_ivp] y(x) [odeint]\n 0.000 0.0000000 0.0000000\n 0.789 0.1360684 0.1360684\n 1.579 0.0347028 0.0347028\n 2.368 -0.2287035 -0.2287035\n 3.158 -0.0974702 -0.0974702\n 3.947 0.2067492 0.2067492\n 4.737 0.0927536 0.0927536\n 5.526 -0.2042678 -0.2042678\n 6.316 -0.0865921 -0.0865921\n 7.105 0.2066669 0.2066669\n 7.895 0.0832707 0.0832707\n 8.684 -0.2081975 -0.2081975\n 9.474 -0.0799972 -0.0799972\n 10.263 0.2094602 0.2094602\n 11.053 0.0765810 0.0765810\n 11.842 -0.2107011 -0.2107011\n 12.632 -0.0731411 -0.0731411\n 13.421 0.2118952 0.2118952\n 14.211 0.0696868 0.0696868\n 15.000 -0.2130316 -0.2130316\n\n\nComparing the results from when we didn't specify the errors we see that the default error tolerances for solve_ivp were insufficient. Moral: specify them explicitly. 
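As an extra sanity check (a sketch, not one of the original cells) the two tightened solutions can also be compared directly:\n\n\n```python\n# largest absolute difference between the solve_ivp and odeint results\nprint(np.max(np.abs(y_pts - y_pts_odeint)))\n```\n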
\n", "meta": {"hexsha": "a1167fae981759fdd686bf60c0bad38d601c2559", "size": 15776, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "week_4/ODEs_with_solve_ivp.ipynb", "max_stars_repo_name": "CLima86/Physics_5300_CDL", "max_stars_repo_head_hexsha": "d9e8ee0861d408a85b4be3adfc97e98afb4a1149", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "week_4/ODEs_with_solve_ivp.ipynb", "max_issues_repo_name": "CLima86/Physics_5300_CDL", "max_issues_repo_head_hexsha": "d9e8ee0861d408a85b4be3adfc97e98afb4a1149", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "week_4/ODEs_with_solve_ivp.ipynb", "max_forks_repo_name": "CLima86/Physics_5300_CDL", "max_forks_repo_head_hexsha": "d9e8ee0861d408a85b4be3adfc97e98afb4a1149", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.0, "max_line_length": 296, "alphanum_fraction": 0.4752155172, "converted": true, "num_tokens": 3635, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9399133531922388, "lm_q2_score": 0.8856314677809303, "lm_q1q2_score": 0.8324168425745384}} {"text": "# Introduction to Quantum Physics\n\n### A complex Number\n\n$c = a + ib$\n\nAcircle of radius 1: $e^{-i\\theta}$\n\n### Single Qubit System ($\\mathcal{C}^{2}$ -space)\n\n$|\\psi \\rangle = \\alpha |0 \\rangle + \\beta | 1 \\rangle $\n\n$ \\langle \\psi | \\psi \\rangle = 1 \\implies \\alpha^{2} + \\beta^{2} = 1 $\n\n- Operators are 2 by 2 matrices, vectors are 2 by 1 column vectors.\n\n#### General form of single qubit Unitary Operation\n\nA single qubit quantum state can be written as\n\n$$\\left|\\psi\\right\\rangle = \\alpha\\left|0\\right\\rangle + \\beta \\left|1\\right\\rangle$$\n\n\nwhere $\\alpha$ and $\\beta$ are complex numbers. In a measurement the probability of the bit being in $\\left|0\\right\\rangle$ is $|\\alpha|^2$ and $\\left|1\\right\\rangle$ is $|\\beta|^2$. As a vector this is\n\n$$\n\\left|\\psi\\right\\rangle = \n\\begin{pmatrix}\n\\alpha \\\\\n\\beta\n\\end{pmatrix}\n$$\nWhere\n$$\\left| 0 \\right\\rangle = \n\\begin{pmatrix}\n1 \\\\\n0\n\\end{pmatrix}; \\left|1\\right\\rangle = \n\\begin{pmatrix}\n0 \\\\\n1\n\\end{pmatrix}. \n$$\n\n\nNote due to conservation probability $|\\alpha|^2+ |\\beta|^2 = 1$ and since global phase is undetectable $\\left|\\psi\\right\\rangle := e^{i\\delta} \\left|\\psi\\right\\rangle$ we only requires two real numbers to describe a single qubit quantum state.\n\nA convenient representation is\n\n$$\\left|\\psi\\right\\rangle = \\cos(\\theta/2)\\left|0\\right\\rangle + \\sin(\\theta/2)e^{i\\phi}\\left|1\\right\\rangle$$\n\nwhere $0\\leq \\phi < 2\\pi$, and $0\\leq \\theta \\leq \\pi$. From this it is clear that there is a one-to-one correspondence between qubit states ($\\mathbb{C}^2$) and the points on the surface of a unit sphere ($\\mathbb{R}^3$). This is called the Bloch sphere representation of a qubit state.\n\nQuantum gates/operations are usually represented as matrices. A gate which acts on a qubit is represented by a $2\\times 2$ unitary matrix $U$. 
The action of the quantum gate is found by multiplying the matrix representing the gate with the vector which represents the quantum state.\n\n$$\\left|\\psi'\\right\\rangle = U\\left|\\psi\\right\\rangle$$\n\nA general unitary must be able to take the $\\left|0\\right\\rangle$ to the above state. That is \n\n$$\nU = \\begin{pmatrix}\n\\cos(\\theta/2) & a \\\\\ne^{i\\phi}\\sin(\\theta/2) & b \n\\end{pmatrix}\n$$ \n\nwhere $a$ and $b$ are complex numbers constrained such that $U^\\dagger U = I$ for all $0\\leq\\theta\\leq\\pi$ and $0\\leq \\phi<2\\pi$. This gives 3 constraints and as such $a\\rightarrow -e^{i\\lambda}\\sin(\\theta/2)$ and $b\\rightarrow e^{i\\lambda+i\\phi}\\cos(\\theta/2)$ where $0\\leq \\lambda<2\\pi$ giving \n\n$$\nU = \\begin{pmatrix}\n\\cos(\\theta/2) & -e^{i\\lambda}\\sin(\\theta/2) \\\\\ne^{i\\phi}\\sin(\\theta/2) & e^{i\\lambda+i\\phi}\\cos(\\theta/2) \n\\end{pmatrix}.\n$$\n\nThis is the most general form of a single qubit unitary.\n\n\n\n\n\n### Quantum Gates:\nQuantum gates are unitary transformations. There exists a universal set of quantum gates. The Hadamard gate, the Pauli X, Y and Z gates, and the CNOT gate are a few examples.\n\n\n\n### Multiqubit System ($\\mathcal{C}^{4}$ -space)\n\n$|\\psi \\rangle = \\alpha |00 \\rangle + \\beta | 01 \\rangle + \\gamma |10 \\rangle + \\delta | 11 \\rangle $\n\n$ \\langle \\psi | \\psi \\rangle = 1 \\implies \\alpha^{2} + \\beta^{2} + \\gamma^{2} + \\delta^{2} = 1 $\n\n- Operators are 4 by 4 matrices, vectors are 4 by 1 column vectors.\n\n#### Realization of multi-qubit through single qubit\n\nThe state space of a quantum computer grows exponentially with the number of qubits. For $n$ qubits the complex vector space has dimensions $d=2^n$. To describe states of a multi-qubit system, the tensor product is used to \"glue together\" operators and basis vectors.\n\nLet's start by considering a 2-qubit system. 
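In code this gluing is simply the Kronecker product; a small illustration (a sketch, not part of the original notebook) for two single-qubit basis states:\n\n\n```python\nimport numpy as np\n\nket0 = np.array([[1.0], [0.0]]) # the |0> basis state\nket1 = np.array([[0.0], [1.0]]) # the |1> basis state\nket01 = np.kron(ket0, ket1) # the 2-qubit basis vector |01>\nprint(ket01.T) # [[0. 1. 0. 0.]]\n```\n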
Given two operators $A$ and $B$ that each act on one qubit, the joint operator $A \\otimes B$ acting on two qubits is\n\n$$\\begin{equation}\n\tA\\otimes B = \n\t\\begin{pmatrix} \n\t\tA_{00} \\begin{pmatrix} \n\t\t\tB_{00} & B_{01} \\\\\n\t\t\tB_{10} & B_{11}\n\t\t\\end{pmatrix} & A_{01} \t\\begin{pmatrix} \n\t\t\t\tB_{00} & B_{01} \\\\\n\t\t\t\tB_{10} & B_{11}\n\t\t\t\\end{pmatrix} \\\\\n\t\tA_{10} \t\\begin{pmatrix} \n\t\t\t\t\tB_{00} & B_{01} \\\\\n\t\t\t\t\tB_{10} & B_{11}\n\t\t\t\t\\end{pmatrix} & A_{11} \t\\begin{pmatrix} \n\t\t\t\t\t\t\tB_{00} & B_{01} \\\\\n\t\t\t\t\t\t\tB_{10} & B_{11}\n\t\t\t\t\t\t\\end{pmatrix}\n\t\\end{pmatrix},\t\t\t\t\t\t\n\\end{equation}$$\n\nwhere $A_{jk}$ and $B_{lm}$ are the matrix elements of $A$ and $B$, respectively.\n\nAnalogously, the basis vectors for the 2-qubit system are formed using the tensor product of basis vectors for a single qubit:\n$$\\begin{equation}\\begin{split}\n\t\\left|{00}\\right\\rangle &= \\begin{pmatrix} \n\t\t1 \\begin{pmatrix} \n\t\t\t1 \\\\\n\t\t\t0\n\t\t\\end{pmatrix} \\\\\n\t\t0 \\begin{pmatrix} \n\t\t\t1 \\\\\n\t\t\t0 \n\t\t\\end{pmatrix}\n\t\\end{pmatrix} = \\begin{pmatrix} 1 \\\\ 0 \\\\ 0 \\\\0 \\end{pmatrix}~~~\\left|{01}\\right\\rangle = \\begin{pmatrix} \n\t1 \\begin{pmatrix} \n\t0 \\\\\n\t1\n\t\\end{pmatrix} \\\\\n\t0 \\begin{pmatrix} \n\t0 \\\\\n\t1 \n\t\\end{pmatrix}\n\t\\end{pmatrix} = \\begin{pmatrix}0 \\\\ 1 \\\\ 0 \\\\ 0 \\end{pmatrix}\\end{split}\n\\end{equation}$$\n \n$$\\begin{equation}\\begin{split}\\left|{10}\\right\\rangle = \\begin{pmatrix} \n\t0\\begin{pmatrix} \n\t1 \\\\\n\t0\n\t\\end{pmatrix} \\\\\n\t1\\begin{pmatrix} \n\t1 \\\\\n\t0 \n\t\\end{pmatrix}\n\t\\end{pmatrix} = \\begin{pmatrix} 0 \\\\ 0 \\\\ 1 \\\\ 0 \\end{pmatrix}~~~ \t\\left|{11}\\right\\rangle = \\begin{pmatrix} \n\t0 \\begin{pmatrix} \n\t0 \\\\\n\t1\n\t\\end{pmatrix} \\\\\n\t1\\begin{pmatrix} \n\t0 \\\\\n\t1 \n\t\\end{pmatrix}\n\t\\end{pmatrix} = \\begin{pmatrix} 0 \\\\ 0 \\\\ 0 \\\\1 \\end{pmatrix}\\end{split}\n\\end{equation}.$$\n\nNote we've introduced a shorthand for the tensor product of basis vectors, wherein $\\left|0\\right\\rangle \\otimes \\left|0\\right\\rangle$ is written as $\\left|00\\right\\rangle$. The state of an $n$-qubit system can described using the $n$-fold tensor product of single-qubit basis vectors. Notice that the basis vectors for a 2-qubit system are 4-dimensional; in general, the basis vectors of an $n$-qubit sytsem are $2^{n}$-dimensional.\n\n### Superposition and Entanglement\n\nSuperposition in 2 qubit system : $|\\psi \\rangle = \\alpha |00 \\rangle + \\beta | 01 \\rangle + \\gamma |10 \\rangle + \\delta | 11 \\rangle $\n\n\n\n- It can be written in direct product of lower qubit states.\n\n\n\nEntanglement in two qubit system: \n\n\n\nAn entanglement circuit: \n\n\n- It can not be written in direct product of lower qubit states\n\n\n$\\begin{bmatrix}\n p \\\\\n q \n\\end{bmatrix} \\otimes \\begin{bmatrix}\n r \\\\\n s \n\\end{bmatrix} = c \\begin{bmatrix}\n m \\\\\n 0 \\\\\n 0 \\\\\n n\n\\end{bmatrix}$\n\n\n### Quantum Circuits and Quantum Algorithms\n\nExecution of Quantum Algorithm (A unitary matrix), is challangeing task. 
This is because one need to find out product of single or multi-qubit gates to represent that algorithm.\n\n$$\nQFT: F_{N} = \\frac{1}{\\sqrt{N}} \\left( \\begin{array}{cccccc}\n 1 & 1 & 1 & 1 & \\cdots & 1 \\\\\n 1 & \\omega_{n} & \\omega_{n}^{2} & \\omega_{n}^{3} & \\cdots & \\omega_{n} ^{N-1}\\\\\n 1 & \\omega_{n}^{2} & \\omega_{n}^{4} & \\omega_{n}^{6} & \\cdots & \\omega_{n} ^{2(N-1)}\\\\\n 1 & \\omega_{n}^{3} & \\omega_{n}^{6} & \\omega_{n}^{9} & \\cdots & \\omega_{n} ^{3(N-1)}\\\\\n \\vdots & \\vdots & \\vdots & \\vdots & \\dots & \\vdots \\\\\n 1 & \\omega_{n}^{(N-1)} & \\omega_{n}^{2(N-1)} & \\omega_{n}^{3(N-1)} & \\cdots & \\omega_{n} ^{(N-1((N-1)}\\\\\n\\end{array}\\right )\n$$\n\n\nFigure: Execution of QFT algorithm\n\n### Measurement\n\n- Only observable can be measured.\n- What is being measured? E, P or X? Non of these are being measured. We measure probability through $<\\psi | \\psi>$.\n- After measurement, system collapse to one of the eigen state. Due to time evolution, it will superposed to many states in future.\n\n### Noise and Error\n\n- Current time Quantum Computers are **Noisy Intermediate-Scale Quantum (NISQ)**\n- Decoherence : Quantum decoherence is the loss of quantum coherence. Decoherence can be viewed as the loss of information from a system into the environment (often modeled as a heat bath).\n- Circuit depth: The depth of a circuit is the longest path in the circuit. The path length is always an integer number, representing the number of gates it has to execute in that path.\n- Fidelity: fidelity is the measure of the distance between two quantum states. Fidelity equal to 1, means that two states are equal. In the case of a density matrix, fidelity represents the overlap with a reference pure state.\n\n\n```python\n\n```\n", "meta": {"hexsha": "70ae0de7fab9092ebcbb172f36dcdc1fd3ab01e3", "size": 13210, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "day1/3. Quantum-Physics-of-Quantum-Computing.ipynb", "max_stars_repo_name": "srh-dhu/Quantum-Computing-2021", "max_stars_repo_head_hexsha": "5d6f99776f10224df237a2fadded25f63f5032c3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 12, "max_stars_repo_stars_event_min_datetime": "2021-07-23T13:38:20.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-07T00:40:09.000Z", "max_issues_repo_path": "day1/3. Quantum-Physics-of-Quantum-Computing.ipynb", "max_issues_repo_name": "Pratha-Me/Quantum-Computing-2021", "max_issues_repo_head_hexsha": "bd9cf9a1165a47c61f9277126f4df04ae5562d61", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2021-07-31T08:43:38.000Z", "max_issues_repo_issues_event_max_datetime": "2021-07-31T08:43:38.000Z", "max_forks_repo_path": "day1/3. Quantum-Physics-of-Quantum-Computing.ipynb", "max_forks_repo_name": "Pratha-Me/Quantum-Computing-2021", "max_forks_repo_head_hexsha": "bd9cf9a1165a47c61f9277126f4df04ae5562d61", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 7, "max_forks_repo_forks_event_min_datetime": "2021-07-24T06:14:36.000Z", "max_forks_repo_forks_event_max_datetime": "2021-07-29T22:02:12.000Z", "avg_line_length": 35.320855615, "max_line_length": 449, "alphanum_fraction": 0.5336866011, "converted": true, "num_tokens": 2752, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.97482115683641, "lm_q2_score": 0.8539127566694178, "lm_q1q2_score": 0.8324122212938497}} {"text": "```python\nimport numpy as np\nimport sympy as sy\n```\n\n\n```python\ns,z = sy.symbols('s,z', real=False)\nl_1, l_2 = sy.symbols('l1, l2')\n```\n\n\n```python\nPhi = sy.Matrix([[1.13, 0.52], [0.52, 1.13]])\nGamma = sy.Matrix([[0.13],[0.52]])\nL = sy.Matrix([[l_1, l_2]])\n\nM = z*sy.Matrix.eye(2) - (Phi - Gamma*L)\nM\n```\n\n\n\n\n Matrix([\n [0.13*l1 + z - 1.13, 0.13*l2 - 0.52],\n [ 0.52*l1 - 0.52, 0.52*l2 + z - 1.13]])\n\n\n\n\n```python\nchPoly = sy.poly(M.det(), z)\nchPoly\n```\n\n\n\n\n Poly(1.0*z**2 + (0.13*l1 + 0.52*l2 - 2.26)*z + 0.1235*l1 - 0.52*l2 + 1.0065, z, domain='RR[l1,l2]')\n\n\n\n\n```python\nchDesired = sy.simplify(sy.expand((z-np.exp(-0.5))**2))\n\nsol = sy.solve((chPoly - chDesired).coeffs(), [l_1, l_2])\n```\n\n\n```python\nsol\n```\n\n\n\n\n {l1: 1.61072237375216, l2: 1.61066302305182}\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "548cbc7d9a2dd207b75419b6fccd6530f33261c5", "size": 2618, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "misc-notebooks/MR2007-final-exam-fall-2017.ipynb", "max_stars_repo_name": "kjartan-at-tec/mr2007-computerized-control", "max_stars_repo_head_hexsha": "16e35f5007f53870eaf344eea1165507505ab4aa", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-11-07T05:20:37.000Z", "max_stars_repo_stars_event_max_datetime": "2020-12-22T09:46:13.000Z", "max_issues_repo_path": "misc-notebooks/MR2007-final-exam-fall-2017.ipynb", "max_issues_repo_name": "alfkjartan/control-computarizado", "max_issues_repo_head_hexsha": "5b9a3ae67602d131adf0b306f3ffce7a4914bf8e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2020-06-12T20:44:41.000Z", "max_issues_repo_issues_event_max_datetime": "2020-06-12T20:49:00.000Z", "max_forks_repo_path": "misc-notebooks/MR2007-final-exam-fall-2017.ipynb", "max_forks_repo_name": "alfkjartan/control-computarizado", "max_forks_repo_head_hexsha": "5b9a3ae67602d131adf0b306f3ffce7a4914bf8e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-09-25T20:02:23.000Z", "max_forks_repo_forks_event_max_datetime": "2019-09-25T20:02:23.000Z", "avg_line_length": 18.9710144928, "max_line_length": 108, "alphanum_fraction": 0.461802903, "converted": true, "num_tokens": 359, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9504109784205502, "lm_q2_score": 0.875787001374006, "lm_q1q2_score": 0.8323575808638688}} {"text": "# Weyl Tensor calculations using Symbolic module\n\n\n```python\nimport sympy\nfrom sympy import cos, sin, sinh\nfrom einsteinpy.symbolic import MetricTensor, WeylTensor\n\nsympy.init_printing()\n```\n\n### Defining the Anti-de Sitter spacetime Metric\n\n\n```python\nsyms = sympy.symbols(\"t chi theta phi\")\nt, ch, th, ph = syms\nm = sympy.diag(-1, cos(t) ** 2, cos(t) ** 2 * sinh(ch) ** 2, cos(t) ** 2 * sinh(ch) ** 2 * sin(th) ** 2).tolist()\nmetric = MetricTensor(m, syms)\n```\n\n### Calculating the Weyl Tensor (with all indices covariant)\n\n\n```python\nweyl = WeylTensor.from_metric(metric)\nweyl.tensor()\n```\n\n\n```python\nweyl.config\n```\n\n\n\n\n 'llll'\n\n\n", "meta": {"hexsha": "9a6ff37c47932ef7fe2ea9e731e9d51c7933f694", "size": 76481, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "docs/source/examples/Weyl Tensor symbolic calculation.ipynb", "max_stars_repo_name": "r0cketr1kky/einsteinpy", "max_stars_repo_head_hexsha": "d86f412736a42e2cf688a1e21d7b553868a14bc4", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2019-04-07T04:01:57.000Z", "max_stars_repo_stars_event_max_datetime": "2019-07-11T11:59:55.000Z", "max_issues_repo_path": "docs/source/examples/Weyl Tensor symbolic calculation.ipynb", "max_issues_repo_name": "r0cketr1kky/einsteinpy", "max_issues_repo_head_hexsha": "d86f412736a42e2cf688a1e21d7b553868a14bc4", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/source/examples/Weyl Tensor symbolic calculation.ipynb", "max_forks_repo_name": "r0cketr1kky/einsteinpy", "max_forks_repo_head_hexsha": "d86f412736a42e2cf688a1e21d7b553868a14bc4", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-03-19T18:46:13.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-19T18:46:13.000Z", "avg_line_length": 262.8213058419, "max_line_length": 56656, "alphanum_fraction": 0.7621762268, "converted": true, "num_tokens": 203, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9449947070591977, "lm_q2_score": 0.8807970873650401, "lm_q1q2_score": 0.8323485855531207}} {"text": "##Ejercicio 4 Practica 1\nPara cada uno de los siguientes sistemas encontrar todos los puntos de equilibrio y determinar el tipo de cada punto de equilibio aislado.\n * b) \n \n $$\\left\\{ \\begin{array}{lcc}\n \\dot{x}_{1}=-x_{1}+x_{2}\\\\\n \\\\ \\dot{x}_{2}=\\frac{x_{1}}{10}-2x_{1}-x_{1}^{2}-\\frac{x_{1}^{3}}{10}\n \\end{array}\n\\right.$$\n\n\n```python\nimport sympy as sym\n```\n\n\n```python\n#Con esto las salidas van a ser en LaTeX\nsym.init_printing(use_latex=True)\n```\n\n\n```python\nx_1, x_2 = sym.symbols('x_1 x_2')\n```\n\n\n```python\nX = sym.Matrix([x_1, x_2])\nX\n```\n\n\n```python\nf_1 = -x_1 + x_2\n```\n\n\n```python\nf_2 = sym.Rational(1,10) * x_1 - 2 * x_2 - x_1 ** 2 - sym.Rational(1,10) * x_1 ** 3\n```\n\n\n```python\nF = sym.Matrix([f_1,f_2])\n```\n\n\n```python\nF\n```\n\n\n```python\nA = F.jacobian(X)\nA\n```\n\n\n```python\n# puntos de equilibrio del sistema\npes = sym.solve([f_1,f_2])\npes\n```\n\n\n```python\ntype(pes[0])\n```\n\n\n\n\n dict\n\n\n\n\n```python\nA_1 = A.subs(pes[0])\nA_1\n```\n\n\n```python\nA_1.eigenvals() \n```\n\n\n```python\nA_2 = A.subs(pes[1])\nA_2\n```\n\n\n```python\nlambda_2 = A_2.eigenvals()\nlambda_2\n```\n\n\n```python\nsym.N(lambda_2.keys()[0])\n```\n\n\n```python\nsym.N(lambda_2.keys()[1])\n```\n\n\n```python\nA_3 = A.subs(pes[2])\nA_3\n```\n\n\n```python\nlambda_3 = A_3.eigenvals()\nlambda_3\n```\n\n\n```python\nsym.N(lambda_3.keys()[0])\n```\n\n\n```python\nsym.N(lambda_3.keys()[1])\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "026afe9a00cae5938e6af90f25cbca0cd7be3a7e", "size": 35532, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Practica1_eje4_b.ipynb", "max_stars_repo_name": "elsuizo/Nonlinear_systems", "max_stars_repo_head_hexsha": "9636d4a450339b8c735934923810c9539ac76042", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Practica1_eje4_b.ipynb", "max_issues_repo_name": "elsuizo/Nonlinear_systems", "max_issues_repo_head_hexsha": "9636d4a450339b8c735934923810c9539ac76042", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Practica1_eje4_b.ipynb", "max_forks_repo_name": "elsuizo/Nonlinear_systems", "max_forks_repo_head_hexsha": "9636d4a450339b8c735934923810c9539ac76042", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 63.9064748201, "max_line_length": 3060, "alphanum_fraction": 0.7558820218, "converted": true, "num_tokens": 536, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9381240125464115, "lm_q2_score": 0.8872045937171068, "lm_q1q2_score": 0.8323079334075011}} {"text": "# Solutions\n\n## Question 1\n\n> `1`. 
Simplify the following expressions:\n\n> $\\frac{3}{\\sqrt{3}}$:\n\n\n```python\nimport sympy as sym\n\nexpression = sym.S(3) / sym.sqrt(3)\nsym.simplify(expression)\n```\n\n\n\n\n$\\displaystyle \\sqrt{3}$\n\n\n\n> $\\frac{2 ^ {78}}{2 ^ {12}2^{-32}}$:\n\n\n```python\nsym.S(2) ** 78 / (sym.S(2) ** 12 * sym.S(2) ** (-32))\n```\n\n\n\n\n$\\displaystyle 316912650057057350374175801344$\n\n\n\n> $8^0$:\n\n\n```python\nsym.S(8) ** 0\n```\n\n\n\n\n$\\displaystyle 1$\n\n\n\n> $a^4b^{-2}+a^{3}b^{2}+a^{4}b^0$:\n\n\n```python\na = sym.Symbol(\"a\")\nb = sym.Symbol(\"b\")\nsym.factor(a ** 4 * b ** (-2) + a ** 3 * b ** 2 + a ** 4 * b ** 0)\n```\n\n\n\n\n$\\displaystyle \\frac{a^{3} \\left(a b^{2} + a + b^{4}\\right)}{b^{2}}$\n\n\n\n## Question 2\n\n> `2`. Solve the following equations:\n\n> $x + 3 = -1$:\n\n\n```python\nx = sym.Symbol(\"x\")\nequation = sym.Eq(x + 3, -1)\nsym.solveset(equation, x)\n```\n\n\n\n\n$\\displaystyle \\left\\{-4\\right\\}$\n\n\n\n> $3 x ^ 2 - 2 x = 5$:\n\n\n```python\nequation = sym.Eq(3 * x ** 2 - 2 * x, 5)\nsym.solveset(equation, x)\n```\n\n\n\n\n$\\displaystyle \\left\\{-1, \\frac{5}{3}\\right\\}$\n\n\n\n> $x (x - 1) (x + 3) = 0$:\n\n\n```python\nequation = sym.Eq(x * (x - 1) * (x + 3), 0)\nsym.solveset(equation, x)\n```\n\n\n\n\n$\\displaystyle \\left\\{-3, 0, 1\\right\\}$\n\n\n\n> $4 x ^3 + 7x - 24 = 1$:\n\n\n```python\nequation = sym.Eq(4 * x ** 3 + 7 * x - 24, 1)\nsym.solveset(equation, x)\n```\n\n\n\n\n$\\displaystyle \\left\\{- \\frac{7}{12 \\sqrt[3]{\\frac{25}{8} + \\frac{\\sqrt{51654}}{72}}} + \\sqrt[3]{\\frac{25}{8} + \\frac{\\sqrt{51654}}{72}}, - \\frac{\\sqrt[3]{\\frac{25}{8} + \\frac{\\sqrt{51654}}{72}}}{2} + \\frac{7}{24 \\sqrt[3]{\\frac{25}{8} + \\frac{\\sqrt{51654}}{72}}} + i \\left(\\frac{7 \\sqrt{3}}{24 \\sqrt[3]{\\frac{25}{8} + \\frac{\\sqrt{51654}}{72}}} + \\frac{\\sqrt{3} \\sqrt[3]{\\frac{25}{8} + \\frac{\\sqrt{51654}}{72}}}{2}\\right), - \\frac{\\sqrt[3]{\\frac{25}{8} + \\frac{\\sqrt{51654}}{72}}}{2} + \\frac{7}{24 \\sqrt[3]{\\frac{25}{8} + \\frac{\\sqrt{51654}}{72}}} + i \\left(- \\frac{\\sqrt{3} \\sqrt[3]{\\frac{25}{8} + \\frac{\\sqrt{51654}}{72}}}{2} - \\frac{7 \\sqrt{3}}{24 \\sqrt[3]{\\frac{25}{8} + \\frac{\\sqrt{51654}}{72}}}\\right)\\right\\}$\n\n\n\n## Question 3\n\n> `3`. Consider the equation: $x ^ 2 + 4 - y = \\frac{1}{y}$:\n\n> Find the solution to this equation for $x$.\n\n\n```python\ny = sym.Symbol(\"y\")\nequation = sym.Eq(x ** 2 + 4 - y, 1 / y)\nsolution = sym.solveset(equation, x)\nsolution\n```\n\n\n\n\n$\\displaystyle \\left\\{- \\sqrt{\\frac{y^{2} - 4 y + 1}{y}}, \\sqrt{\\frac{y^{2} - 4 y + 1}{y}}\\right\\}$\n\n\n\n> Obtain the specific solution when $y = 5$. Do this in two ways:\n> substitute the value in to your equation and substitute the value in to\n> your solution.\n\n\n```python\nsolution.subs({y: 5})\n```\n\n\n\n\n$\\displaystyle \\left\\{- \\frac{\\sqrt{30}}{5}, \\frac{\\sqrt{30}}{5}\\right\\}$\n\n\n\n\n```python\nsolution = sym.solveset(equation.subs({y: 5}), x)\nsolution\n```\n\n\n\n\n$\\displaystyle \\left\\{- \\frac{\\sqrt{30}}{5}, \\frac{\\sqrt{30}}{5}\\right\\}$\n\n\n\n## Question 4\n\n> `4`. Consider the quadratic: $f(x)=4x ^ 2 + 16x + 25$:\n\n> Calculate the discriminant of the quadratic equation $4x ^ 2 + 16x + 25 =\n> 0$. What does this tell us about the solutions to the equation? 
What\n> does this tell us about the graph of $f(x)$?\n\n\n```python\nquadratic = 4 * x ** 2 + 16 * x + 25\nsym.discriminant(quadratic)\n```\n\n\n\n\n$\\displaystyle -144$\n\n\n\n \nThis is negative so we know that the equation does not have any real solutions and\nhence the graph does not cross the x-axis. \nSince the coefficient of $x^2$ is positive it means that the graph is above \nthe $y=0$ line.\n\n\n> By completing the square, show that the minimum point of $f(x)$ is\n> $\\left(-2, 9\\right)$\n\n\n```python\na, b, c = sym.Symbol(\"a\"), sym.Symbol(\"b\"), sym.Symbol(\"c\")\ncompleted_square = a * (x - b) ** 2 + c\nsym.expand(completed_square)\n```\n\n\n\n\n$\\displaystyle a b^{2} - 2 a b x + a x^{2} + c$\n\n\n\nThis gives $a=4$.\n\n\n```python\ncompleted_square = completed_square.subs({a: 4})\nsym.expand(completed_square)\n```\n\n\n\n\n$\\displaystyle 4 b^{2} - 8 b x + c + 4 x^{2}$\n\n\n\nComparing the coefficients of $x$ we have the equation:\n\n$$\n - 8 b = 16\n$$\n\n\n```python\nequation = sym.Eq(-8 * b, 16)\nsym.solveset(equation, b)\n```\n\n\n\n\n$\\displaystyle \\left\\{-2\\right\\}$\n\n\n\nSubstituting:\n\n\n```python\ncompleted_square = completed_square.subs({b: -2})\nsym.expand(completed_square)\n```\n\n\n\n\n$\\displaystyle c + 4 x^{2} + 16 x + 16$\n\n\n\nComparing the coefficients of $x^0$ this gives:\n\n$$c+16=25$$\n\n\n```python\nequation = sym.Eq(c + 16, 25)\nsym.solveset(equation, c)\n```\n\n\n\n\n$\\displaystyle \\left\\{9\\right\\}$\n\n\n\n\n```python\ncompleted_square = completed_square.subs({c: 9})\ncompleted_square\n```\n\n\n\n\n$\\displaystyle 4 \\left(x + 2\\right)^{2} + 9$\n\n\n\nThe lowest value of $f(x)$ is for $x=-2$ which gives: $f(-2)=9$ as expected.\n\n## Question 5\n\n> `5`. Consider the quadratic: $f(x)=-3x ^ 2 + 24x - 97$:\n\n> Calculate the discriminant of the quadratic equation $-3x ^ 2 + 24x - 97 =\n> 0$. What does this tell us about the solutions to the equation? What\n> does this tell us about the graph of $f(x)$?\n\n\n```python\nquadratic = -3 * x ** 2 + 24 * x - 97\nsym.discriminant(quadratic)\n```\n\n\n\n\n$\\displaystyle -588$\n\n\n\nThis is negative so we know that the equation does not have any real solutions and\nhence the graph does not cross the x-axis. 
\nSince the coefficient of $x^2$ is negative it means that the graph is below \nthe $y=0$ line.\n\n\n> By completing the square, show that the maximum point of $f(x)$ is\n> $\left(4, -49\right)$\n\n\n```python\na, b, c = sym.Symbol(\"a\"), sym.Symbol(\"b\"), sym.Symbol(\"c\")\ncompleted_square = a * (x - b) ** 2 + c\nsym.expand(completed_square)\n```\n\n\n\n\n$\displaystyle a b^{2} - 2 a b x + a x^{2} + c$\n\n\n\nThis gives $a=-3$.\n\n\n```python\ncompleted_square = completed_square.subs({a: -3})\nsym.expand(completed_square)\n```\n\n\n\n\n$\displaystyle - 3 b^{2} + 6 b x + c - 3 x^{2}$\n\n\n\nComparing the coefficients of $x$ we have the equation:\n\n$$\n 6 b = 24\n$$\n\n\n```python\nequation = sym.Eq(6 * b, 24)\nsym.solveset(equation, b)\n```\n\n\n\n\n$\displaystyle \left\{4\right\}$\n\n\n\nSubstituting:\n\n\n```python\ncompleted_square = completed_square.subs({b: 4})\nsym.expand(completed_square)\n```\n\n\n\n\n$\displaystyle c - 3 x^{2} + 24 x - 48$\n\n\n\nComparing the coefficients of $x^0$ this gives:\n\n$$c-48=-97$$\n\n\n```python\nequation = sym.Eq(c - 48, -97)\nsym.solveset(equation, c)\n```\n\n\n\n\n$\displaystyle \left\{-49\right\}$\n\n\n\n\n```python\ncompleted_square = completed_square.subs({c: -49})\ncompleted_square\n```\n\n\n\n\n$\displaystyle - 3 \left(x - 4\right)^{2} - 49$\n\n\n\nThe highest value of $f(x)$ is for $x=4$ which gives: $f(4)=-49$ as expected.\n\n## Question 6\n\n> `6`. Consider the function $f(x) = x^ 2 + a x + b$.\n\n> Given that $f(0) = 0$ and $f(3) = 0$ obtain the values of $a$ and $b$.\n\nSubstituting 0 in to $f$ gives:\n\n\n```python\nexpression = x ** 2 + a * x + b\nexpression.subs({x: 0})\n```\n\n\n\n\n$\displaystyle b$\n\n\n\nThis implies that $b=0$. Substituting back in to the expression:\n\n\n```python\nexpression = expression.subs({b: 0})\nexpression\n```\n\n\n\n\n$\displaystyle a x + x^{2}$\n\n\n\nSubstituting $x=3$ in to this expression gives:\n\n\n```python\nexpression.subs({x: 3})\n```\n\n\n\n\n$\displaystyle 3 a + 9$\n\n\n\nThis gives the equation:\n\n$$\n 3 a + 9 = 0\n$$\n\n\n```python\nsym.solveset(expression.subs({x: 3}), a)\n```\n\n\n\n\n$\displaystyle \left\{-3\right\}$\n\n\n\nOur expression is thus:\n\n\n```python\nexpression = expression.subs({a: -3})\nexpression\n```\n\n\n\n\n$\displaystyle x^{2} - 3 x$\n\n\n\n> By completing the square confirm that the graph of $f(x)$ has a line of symmetry\n> at $x=\frac{3}{2}$\n\n\n```python\ncompleted_square = a * (x - b) ** 2 + c\nsym.expand(completed_square)\n```\n\n\n\n\n$\displaystyle a b^{2} - 2 a b x + a x^{2} + c$\n\n\n\nWe see that $a=1$. 
Substituting:\n\n\n```python\ncompleted_square = completed_square.subs({a: 1})\nsym.expand(completed_square)\n```\n\n\n\n\n$\\displaystyle b^{2} - 2 b x + c + x^{2}$\n\n\n\nThis gives:\n\n$$\n -2b=-3\n$$\n\n\n```python\nequation = sym.Eq(-2 * b, -3)\nsym.solveset(equation, b)\n```\n\n\n\n\n$\\displaystyle \\left\\{\\frac{3}{2}\\right\\}$\n\n\n\nSubstituting:\n\n\n```python\ncompleted_square = completed_square.subs({b: sym.S(3) / 2})\nsym.expand(completed_square)\n```\n\n\n\n\n$\\displaystyle c + x^{2} - 3 x + \\frac{9}{4}$\n\n\n\nWhich gives:\n\n$$\n c + 9 / 4 = 0\n$$\n\n\n```python\nequation = sym.Eq(c + sym.S(9) / 4, 0)\nsym.solveset(equation, c)\n```\n\n\n\n\n$\\displaystyle \\left\\{- \\frac{9}{4}\\right\\}$\n\n\n\nSubstituting:\n\n\n```python\ncompleted_square = completed_square.subs({c: -sym.S(9) / 4})\ncompleted_square\n```\n\n\n\n\n$\\displaystyle \\left(x - \\frac{3}{2}\\right)^{2} - \\frac{9}{4}$\n\n\n\nThus $x=3/2$ is a line of symmetry.\n", "meta": {"hexsha": "ec7a3ed76462fa4cf7c00d56927a67f0cd965058", "size": 24880, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "book/tools-for-mathematics/02-algebra/solutions/.main.md.bcp.ipynb", "max_stars_repo_name": "daffidwilde/pfm", "max_stars_repo_head_hexsha": "dcf38faccee3c212c8394c36f4c093a2916d283e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "book/tools-for-mathematics/02-algebra/solutions/.main.md.bcp.ipynb", "max_issues_repo_name": "daffidwilde/pfm", "max_issues_repo_head_hexsha": "dcf38faccee3c212c8394c36f4c093a2916d283e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "book/tools-for-mathematics/02-algebra/solutions/.main.md.bcp.ipynb", "max_forks_repo_name": "daffidwilde/pfm", "max_forks_repo_head_hexsha": "dcf38faccee3c212c8394c36f4c093a2916d283e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 20.5619834711, "max_line_length": 786, "alphanum_fraction": 0.4476286174, "converted": true, "num_tokens": 3043, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9381240142763574, "lm_q2_score": 0.8872045862611168, "lm_q1q2_score": 0.8323079279476736}} {"text": "# Image reconstruction in X-ray tomography\n\n**Authors**:\n\n* Alexandre Zouaoui (alexandre.zouaoui@ens-paris-saclay.fr)\n\n## X-ray tomography\n\nWe are interested in the following inverse problem:\n\n\\begin{align}\ny = H \\overline{x} + w\n\\end{align}\n\nwhere $y \\in \\mathbb{R}^M$ are the measurements from a set of projections at different angles of a X-ray tomography, $\\overline{x} \\in \\mathbb{R}^N$ is the sought absorption image and $w \\in \\mathbb{R}^M$ is the measurement noise assumed to be i.i.d. Gaussian with variance $\\sigma^2$. $H$ is a sparse tomography matrix that encodes the geometry of the measurements. \n\nIn our case $H$ models parallel projections of a 2-D object $\\overline{x}$. Here the tomography measures are acquired at fixed and regularly sampled rotational positions between the sample and the detector so that $H_{mn}$ models the intersection length between the $m$-th light-ray and the $n$-th pixel. 
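\nTo make this structure concrete, the following toy example (an added illustration, not the operator used below) builds such a matrix by hand for a hypothetical $2 \times 2$ image observed with one horizontal ray per image row and one vertical ray per column, all intersection lengths set to 1:\n\n\n```python\nimport numpy as np\n\n# Hypothetical 2 x 2 image flattened column-wise: [top-left, bottom-left, top-right, bottom-right]\nx_toy = np.array([1.0, 2.0, 3.0, 4.0])\n\n# One row per ray: two horizontal rays (one per image row), then two vertical rays (one per column)\nH_toy = np.array([[1, 0, 1, 0],   # ray through the top row\n                  [0, 1, 0, 1],   # ray through the bottom row\n                  [1, 1, 0, 0],   # ray through the left column\n                  [0, 0, 1, 1]])  # ray through the right column\n\nprint(H_toy @ x_toy)  # the four ray sums, i.e. a toy sinogram: [4. 6. 3. 7.]\n```\n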
We denote by $N_\\theta$ the number of different angular positions of the detector and by $L$ the linear size of the detector, hence the number of measurements are $M = L \\times N_\\theta$.\n\nWe study reconstruction approaches that do not require the linear system to be sufficiently determined for good results (i.e. $N_\\theta \\sim L$). We must then overcome the under-determinacy of the problem and make it robust to the presence of noise in the measurements.\n\n### Imports\n\n\n```python\nimport numpy as np\nfrom scipy.io import loadmat\nfrom scipy import sparse\nfrom scipy import linalg\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport os\nimport time\nfrom itertools import product\n```\n\n### Style\n\n\n```python\nsns.set_style(\"darkgrid\")\n```\n\n### Paths\n\n\n```python\nCWD = os.getcwd()\nDATA_DIR = os.path.join(CWD, \"data\")\n```\n\n### Question 1\n\nDownload the projection matrix $H$ and the image $\\overline{x}$ available on the website.\n\n---\n\n\n```python\n# Load files\nx = loadmat(os.path.join(DATA_DIR, \"x.mat\"))[\"x\"]\nH = loadmat(os.path.join(DATA_DIR, \"H.mat\"))[\"H\"]\n\n# Squeeze x\nx = x.squeeze()\n\n# Print type and shape information\nprint(f\"Type of x: {type(x)}\")\nprint(f\"Shape of x: {x.shape}\")\nprint(f\"Type of H: {type(H)}\")\nprint(f\"Shape of H: {H.shape}\")\n\n# Global variables setup\nN = 90 * 90\nM = 90 * 180\n\n# Sanity check\nassert x.shape[0] == N\nassert H.shape == (M, N)\n```\n\n Type of x: \n Shape of x: (8100,)\n Type of H: \n Shape of H: (16200, 8100)\n\n\n### Question 2\n\nConstruct y, according to the linear system defined above, using $\\sigma = 1$.\n\n---\n\n\n```python\n# Set a random seed for reproducibility\nnp.random.seed(0)\n\n# Noise setup\n\u03c3 = 1\nnoise = \u03c3 ** np.random.randn(M)\n\n# Construct y\ny = H.dot(x) + noise\n\n# Sanity check\nassert noise.shape[0] == M\nassert y.shape[0] == M\n```\n\n### Question 3\n\nHere $N = 90 \\times 90$ pixels and $M = 90 \\times 180$ measurements. Display a 2D version of $x$ and a 2D version of $y$, also known as sinogram. Bare in mind that $x$ and $y$ were constructed by concatenating the flattened the columns values.\n\n---\n\n\n```python\n# Display 2D signals\nfig, axes = plt.subplots(1, 2, figsize=(14, 5))\n# Display a 2D version of x\naxes[0].imshow(x.reshape((90, 90), order=\"F\"))\naxes[0].set_axis_off()\naxes[0].set_title(\"Original absorption image\");\n# Display a 2D version of y\naxes[1].imshow(y.reshape((90, 180), order=\"F\"))\naxes[1].set_axis_off()\naxes[1].set_title(\"Measurements image\");\n```\n\n## Optimization problem\n\nAn efficient strategy to address the reconstruction problem is to define $x$ as a minimizer of an appropriate cost function $f$. More specifically, we focus on the regularized least-squares criterion:\n\n$$\\forall x \\in \\mathbb{R}^N, \\quad f(x) = \\frac{1}{2} \\| Hx - y \\|^2 + \\lambda r(x)$$\n\nwhere $r$ is a regularization function incorporating a priori assumptions to guarantee the robustness of the solution with respect to noise. 
In order to promote images formed by smooth regions separated by sharp edges, we use:\n\n\n$$\\forall x \\in \\mathbb{R}^N, \\quad r(x) = \\sum_{i=1}^{2N} \\phi([Gx]^{(i)})$$\n\n\nwhere $G \\in \\mathbb{R}^{2N \\times N}$ is a sparse matrix such that $Gx \\in \\mathbb{R}^{2N}$ is the concatenation of the horizontal and vertical gradients of the image, and $\\phi$ is a potential function defined as:\n\n$$\\forall u \\in \\mathbb{R}, \\quad \\phi(u) = \\sqrt{1 + u^2 / \\delta^2}$$\nwith some parameter $\\delta > 0$ to guarantee the differentiability of $r$. \n\nIn the following, we will set $(\\lambda, \\delta) = (0.13, 0.02)$.\n\n\n```python\n# Setup\n\u03bb = 0.13\n\u03b4 = 0.02\n```\n\n### Question 1\n\nDownload the gradient operator $G$\n\n---\n\n\n```python\n# Load gradient operator\nG = loadmat(os.path.join(DATA_DIR, \"G.mat\"))[\"G\"]\n\n# Print type and shape information\nprint(f\"Type of x: {type(G)}\")\nprint(f\"Shape of x: {G.shape}\")\n\n# Sanity check\nassert G.shape == (2 * N, N)\n```\n\n Type of x: \n Shape of x: (16200, 8100)\n\n\n### Question 2\n\n* Give the expression of the gradient $\\nabla f$ at some point $x \\in \\mathbb{R}^N$. \n\n* Create a function which gives as an output the gradient of $f$ at some input vector $x$.\n\n---\n\n* We have:\n$$\\forall x \\in \\mathbb{R}^N, \\quad \\nabla f(x) = H^T(Hx - y) + \\lambda \\nabla r(x)$$\n\n\n* Moreover:\n\n\\begin{align*}\n\\forall x \\in \\mathbb{R}^N, \\, \\forall k \\in \\{1, \\ldots, N\\}, \\quad \\frac{d r(x)}{d x_k}\n& = \\sum_{i=1}^{2N} \\frac{d (\\sum_{j=1}^N G_{i, j} x_j)}{d x_k} \\phi'([Gx]_i) \\\\\n& = \\sum_{i=1}^{2N} G_{i, k} \\phi'([Gx]_i) \\\\\n& = \\sum_{i=1}^{2N} G_{k, i}^T \\phi'([Gx]_i) \\\\\n& = \\big[ G^T \\big( \\phi'([Gx]_i)\\big)_{i \\in \\{1, \\ldots, 2N\\}} \\big]_k\n\\end{align*}\n\n* Therefore:\n\n$$\\forall x \\in \\mathbb{R}^N, \\quad \\nabla r(x) = G^T \\big( \\phi'([Gx]_i)\\big)_{i \\in \\{1, \\ldots, 2N\\}}$$\n\n* Where:\n\n$$\\forall u \\in \\mathbb{R}, \\quad \\phi'(u) = \\frac{u}{\\delta^2 \\sqrt{1 + u^2 / \\delta^2}}$$\n\n* Finally:\n\n$$\\boxed{\\forall x \\in \\mathbb{R}^N, \\quad \\nabla f(x) = H^T (Hx - y) + \\lambda G^T \\big( \\phi'([Gx]_i)\\big)_{i \\in \\{1, \\ldots, 2N\\}}}$$\n\n\n```python\ndef grad_f(x, \u03bb=\u03bb, \u03b4=\u03b4):\n \"\"\"\n Compute the gradient of f (defined above)\n at some input x\n \n Parameters\n -----------\n - x : numpy.array\n Input vector\n \n - \u03bb : float (optional)\n Relaxation parameter\n Default: 0.13\n \n - \u03b4 : float (optional)\n Differentiability parameter\n Default: 0.02\n \n Returns\n -----------\n - g : numpy.array\n Gradient output vector\n \"\"\"\n \n # Compute the \u03d5' part\n \u03d5_grad = G.dot(x) / (\u03b4**2 * np.sqrt(1 + G.dot(x)**2 / \u03b4**2))\n\n # Compute the gradient\n grad = H.T.dot(H.dot(x) - y) + \u03bb * G.T.dot(\u03d5_grad)\n return grad\n```\n\n### Question 3\n\nShow that a Lipschitz constant of $\\nabla f$ is\n\n$$L = \\|H\\|^2 + \\frac{\\lambda}{\\delta^2} \\|G\\|^2$$\n\nCalculate it for the $(\\lambda, \\delta)$ values given above. 
In Python we can use ``scipy.sparse.linalg.svds`` that outputs the singular values of a sparse matrix, the norm of the matrix being the maximal singular value.\n\n---\n\nFirst of all, we have:\n\n$$\\forall x \\in \\mathbb{R}^N, \\quad \\nabla^2 f (x) = H^T H + \\lambda G^T G \\big( \\phi''([Gx]_i)\\big)_{i \\in \\{1, \\ldots, 2N\\}}$$\n\nIn addition:\n\n$$\\forall u \\in \\mathbb{R}, \\quad \\frac{d^2 \\phi(u)}{du^2} = \\frac{1 - \\frac{u^2}{\\delta^2(1 + \\frac{u^2}{\\delta^2})}}{\\delta^2 \\sqrt{1 + \\frac{u^2}{\\delta^2}}}$$\n\nTherefore:\n\n$$\\forall u \\in \\mathbb{R}, \\quad \\frac{d^2 \\phi(u)}{du^2} \\leq \\frac{1}{\\delta^2}$$\n\nAs a result:\n\n$$\\forall u, v \\in \\mathbb{R}, \\quad |\\phi'(u) - \\phi'(v)| \\leq \\frac{1}{\\delta^2} |u - v |$$\n\nCombining the previous results:\n\n$$\\boxed{L = \\|H\\|^2 + \\frac{\\lambda}{\\delta^2} \\|G\\|^2}$$\n\nWhere $\\|H\\|^2$ (respectively $\\|G\\|^2$) denotes the squared largest singular value of $H$ (respectively $G$).\n\n\n```python\n# Compute norms\nnorm_H = np.max(sparse.linalg.svds(H, return_singular_vectors=False))\nnorm_G = np.max(sparse.linalg.svds(G, return_singular_vectors=False))\n# Compute Lipschitz constant\nL = (norm_H ** 2) + \u03bb / (\u03b4**2) * (norm_G**2)\n```\n\n## Optimization algorithms\n\n### Gradient descent algorithm\n\n#### Question 1\n\nCreate $x_0 \\in \\mathbb{R}^N$ a vector with all entries equal to 0. This will be our initialization for all tested algorithms.\n\n---\n\n\n```python\n# Create xO as the null vector\nx0 = np.zeros_like(x)\n```\n\n#### Question 2\n\nImplement a gradient descent algorithm to minimize $f$\n\n---\n\n\n```python\n# Define \u03d5 and f\n\u03d5 = lambda u : np.sqrt(1 + u**2 / \u03b4 ** 2)\nf = lambda x : 1 / 2 * np.sum((H.dot(x) - y)**2) + \u03bb * np.sum(\u03d5(G.dot(x)))\n```\n\n\n```python\nclass Solver:\n \"\"\"\n Generic Solver class serving as a parent class\n to GradientDescent, MM quadratic, 3MG, Block Coordinate MMQ\n \n Implements the display method that prints information\n and display both the reconstruction and the iterations evolution\n \"\"\"\n def __init__(self):\n pass\n \n def solve(self, x0=x0, tol=1e-2, \u03bb=\u03bb, \u03b4=\u03b4):\n pass\n \n def display(self):\n # Display reconstruction\n fig, axes = plt.subplots(1, 2, figsize=(14, 5))\n axes[0].imshow(x.reshape((90, 90), order=\"F\"))\n axes[0].set_axis_off()\n axes[0].set_title(\"Original absorption image\");\n axes[1].set_axis_off()\n axes[1].imshow(self.x_hat.reshape((90, 90), order=\"F\"))\n axes[1].set_title(\"Reconstruction image\")\n\n # Display iterates evolution\n plt.figure(figsize=(12, 5))\n plt.title(f\"Value of $f$ along the {self.name} iterations\")\n plt.xlabel(\"Iterations time (in seconds)\")\n plt.ylabel(\"Value of $f$\")\n plt.plot(self.iterations_time, self.f_values, \n linestyle=\"--\", marker=\"o\", linewidth=3,\n label=f\"{self.short} iterates\")\n plt.yscale(\"log\")\n plt.legend(frameon=True);\n \n def __repr__(self):\n if self.counter is None:\n raise ValueError(\"Cannot call print before using ``solve``\")\n return f\"{self.name}: {self.counter} steps in {round(self.total_time, 2)} seconds\"\n \n def get_info(self):\n \"\"\"\n Returns\n ----------\n - x_hat : numpy.array\n Approximation result\n - counter : int\n Number of iterations\n - iterations_time : list of float\n List of computation time at regular intervals\n - f_values : list of float\n objectif function values at regular intervals\n - total_time : float\n Total computational time\n \"\"\"\n return self.x_hat, 
self.counter, self.iterations_time, self.f_values, self.total_time\n```\n\n\n```python\nclass GradientDescent(Solver):\n \n def __init__(self):\n self.name = \"Gradient Descent\"\n self.short = \"GD\"\n \n def solve(self, x0=x0, tol=1e-2):\n # Setup\n self.x0 = x0\n self.tol = tol\n self.counter = 0\n self.f_values = list()\n self.iterations_time = list()\n \n # Initialization\n t0 = time.time()\n x_curr = np.copy(self.x0)\n \n # Main loop\n while np.linalg.norm(grad_f(x_curr)) > np.sqrt(N) * self.tol:\n # First order approximation update\n x_curr = x_curr - 1 / L * grad_f(x_curr)\n # Save iterates\n if self.counter % 100 == 0:\n self.f_values.append(f(x_curr))\n self.iterations_time.append(time.time() - t0)\n self.counter += 1\n \n # Compute total time in seconds\n self.total_time = time.time() - t0\n # Store solution\n self.x_hat = x_curr \n```\n\n\n```python\n%%time\nsolver = GradientDescent()\nsolver.solve(x0=x0, tol=1e-2)\nprint(solver)\nsolver.display()\n```\n\n### Majorization Minimization (MM) quadratic algorithm\n\n#### Question 1\n\nConstruct, for all $x \\in \\mathbb{R}^N$, a quadratic majorant function of $f$ at $x$.\n\nCreate a function which gives, as an output, the curvature $A(x)$ of the majorant function at an input vector $x$.\n\nHint: In Python, we can use ``scipy.sparse.diags(d[:, 0]).tocsc()`` to create a sparse matrix from a diagonal vector $d \\in \\mathbb{R}^{n \\times 1}$ using the compressed sparse column (csc) format.\n\n---\n\nThe function $r: x \\in \\mathbb{R}^N \\mapsto r(x)$ is a separable function whose entries $r_i$ are even differentiable functions. As a result, we derive a majorant of $r_i$ using $w: u \\in \\mathbb{R} \\mapsto \\frac{r'(|u|)}{u} = \\frac{1}{\\delta^2 \\sqrt{1 + u^2 / \\delta^2}}$.\n\n\n```python\ndef curvature_A(x, \u03bb=\u03bb, \u03b4=\u03b4):\n \"\"\"\n Define the curvature of the majorant function\n at input vector x\n \n Parameters\n ------------\n - x : numpy.array\n Current iterate\n \n \n Returns\n -----------\n A function that takes an input vector $v$\n and outputs $Av$\n \"\"\"\n \n # Define a diagonal sparse matrix \n d = 1 / (\u03b4**2 * np.sqrt(1 + G.dot(x) ** 2 / \u03b4 ** 2)) \n \n D_G = sparse.diags(d).tocsc()\n \n # Combine intermediate results using sparse operations\n return lambda v: H.T.dot(H.dot(v)) + \u03bb * G.T.dot(D_G.dot(G.dot(v)))\n```\n\n#### Question 2\n\nDeduce a MM quadratic algorithm to minimize $f$. Implement it.\n\nHint: To invert the majorant matrix at each iteration, use ``bicg`` from ``scipy.sparse.linalg``.\n\n---\n\nSince $f$ is a differentiable function from $\\mathbb{R}^N$ to $\\mathbb{R}$, we can derive the Majoration Minimization algorithm updates:\n\n$$(\\forall n \\in \\mathbb{N}) \\quad x_{n+1} = x_n - \\theta_n A(x_n)^{-1} \\nabla f(x_n)$$\n\nwhere for all $n \\in \\mathbb{N}, \\; \\theta_n \\in \\; ]0, 2[$. 
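\nThe descent guarantee behind this scheme (and the reason any $\theta_n \in \, ]0, 2[$ is admissible) rests on the fact that the half-quadratic weight $w(u) = \phi'(u)/u$ used in `curvature_A` really defines a majorant of $\phi$. A quick standalone numerical check of this property (an added cell, only assuming $\delta = 0.02$ as above):\n\n\n```python\nimport numpy as np\n\n# Standalone check (added cell): the tangent parabola built from w(u0) = ϕ'(u0)/u0\n# upper-bounds ϕ(u) = sqrt(1 + u**2 / δ**2) for every u, with equality at u = +/- u0.\nδ_check = 0.02\nϕ_check = lambda u: np.sqrt(1 + u**2 / δ_check**2)\nw_check = lambda u: 1 / (δ_check**2 * np.sqrt(1 + u**2 / δ_check**2))\n\nu0 = 0.35                          # arbitrary expansion point\nu = np.linspace(-2, 2, 2001)       # test grid\nmajorant = ϕ_check(u0) + 0.5 * w_check(u0) * (u**2 - u0**2)\nprint(np.all(majorant >= ϕ_check(u) - 1e-9))  # expected: True\n```\n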
In the following, we take for all $n \in \mathbb{N}, \; \theta_n = 1$.\n\n\n```python\nclass MMQuadratic(Solver):\n    \n    def __init__(self):\n        self.name = \"MM quadratic\"\n        self.short = \"MMQ\"\n        \n    def solve(self, x0=x0, tol=1e-2):\n        # Setup\n        self.x0 = x0\n        self.tol = tol\n        self.counter = 0\n        self.f_values = list()\n        self.iterations_time = list()\n        \n        # Initialization\n        t0 = time.time()\n        x_curr = np.copy(self.x0)\n        \n        # Main loop\n        while np.linalg.norm(grad_f(x_curr)) > np.sqrt(N) * self.tol:\n            # Instantiate Linear Operator\n            A = sparse.linalg.LinearOperator(shape=(N, N), \n                                             matvec=curvature_A(x_curr), \n                                             rmatvec=curvature_A(x_curr))\n            # Solve Linear System\n            x_curr = x_curr - sparse.linalg.bicg(A=A, b=grad_f(x_curr))[0]\n            # Save iterates\n            if self.counter % 10 == 0:\n                self.f_values.append(f(x_curr))\n                self.iterations_time.append(time.time() - t0)\n            self.counter += 1\n        \n        # Compute total time in seconds\n        self.total_time = time.time() - t0\n        # Store solution\n        self.x_hat = x_curr \n```\n\n\n```python\n%%time\nsolver = MMQuadratic()\nsolver.solve(x0=x0, tol=1e-2)\nprint(solver)\nsolver.display()\n```\n\n### Majorization Minimization Memory Gradient (3MG) algorithm\n\nIn this section we use a subspace acceleration strategy. We focus on the 3MG approach which consists in defining the iterate $x_{k+1}$ as the minimizer of the quadratic majorant function at $x_k$ within a subspace spanned by the following directions:\n\n$$\forall k \in \mathbb{N}, \quad D_k = \big[ - \nabla f(x_k) \; | \; x_k - x_{k-1} \big]$$\n\nusing the convention $D_0 = - \nabla f (x_0)$. An iterate of 3MG reads:\n\n$$\forall k \in \mathbb{N}, \quad x_{k+1} = x_k + D_k u_k$$\n\nwith\n\n$$\forall k \in \mathbb{N}, \quad u_k = - \big( D_k^T A(x_k) D_k \big)^{\dagger} \big(D_k^T \nabla f(x_k) \big)$$\n\nwhere $A(x_k) \in \mathbb{R}^{N \times N}$ is the curvature of the majorant matrix at $x_k$ and $\dagger$ denotes the pseudo-inverse operation.\n\n#### Question 1\n\nImplement the 3MG algorithm using ``scipy.linalg.pinv`` and be mindful of the matrix operations given the size of the matrices.\n\n---\n\n\n```python\ndef compute_directions(x_curr, x_previous=None, first_iterate=False, λ=λ, δ=δ):\n    \"\"\"\n    Compute the set of directions that span the acceleration subspace\n    \n    Parameters\n    ---------\n    - x_curr : numpy.array\n        Current iterate\n        \n    - x_previous : numpy.array (optional)\n        Previous iterate\n        Default: None.\n        \n    - first_iterate : boolean (optional)\n        Whether we are running the first iteration\n        as it modifies the directions computation\n        Default: False.\n    \"\"\"\n    if first_iterate:\n        return - grad_f(x_curr, λ=λ, δ=δ).reshape(-1, 1)\n    else:\n        return np.hstack((-grad_f(x_curr, λ=λ, δ=δ).reshape(-1, 1), (x_curr - x_previous).reshape(-1, 1)))\n```\n\n\n```python\ndef compute_u(x_curr, D_curr, first_iterate=False, λ=λ, δ=δ):\n    \"\"\"\n    Compute current directions minimizer\n    \n    Parameters\n    ---------------\n    - x_curr : numpy.array\n        Current iterate\n        \n    - D_curr : numpy.ndarray\n        Current directions\n        \n    - first_iterate : boolean (optional)\n        Whether this is the first iterate\n        Default: False.\n    \n    \"\"\"\n    \n    # Intermediate computations\n    tmp1 = H.dot(D_curr)\n    tmp2 = G.dot(D_curr)\n    # Compute Diag(Gx)\n    d = 1 / (δ**2 * np.sqrt(1 + G.dot(x_curr) ** 2 / δ ** 2)) \n    D_G = sparse.diags(d).tocsc()\n    # Matrix to be inverted: D^T A(x) D = (HD)^T (HD) + λ (GD)^T Diag (GD)\n    tmp3 = tmp1.T.dot(tmp1) + λ * tmp2.T.dot(D_G.dot(tmp2))\n    # Compute u using 
Moore Penrose pseudo inverse\n if first_iterate:\n return - linalg.pinv(tmp3).dot(D_curr.T.dot(grad_f(x_curr, \u03bb=\u03bb, \u03b4=\u03b4).reshape(-1, 1)))\n #return - linalg.inv(tmp3).dot(D_curr.T.dot(grad_f(x_curr).reshape(-1, 1)))\n else:\n return - linalg.pinv(tmp3).dot(D_curr.T.dot(grad_f(x_curr, \u03bb=\u03bb, \u03b4=\u03b4)))\n #return - linalg.inv(tmp3).dot(D_curr.T.dot(grad_f(x_curr)))\n```\n\n\n```python\nclass MMMG(Solver):\n \n def __init__(self):\n self.name = \"MM Memory Gradient\"\n self.short = \"3MG\"\n \n def solve(self, x0=x0, tol=1e-2):\n # Setup\n self.x0 = x0\n self.tol = tol\n self.counter = 0\n self.f_values = list()\n self.iterations_time = list()\n \n # Initialization\n t0 = time.time()\n x_curr = np.copy(self.x0)\n \n # First iterate\n D0 = compute_directions(x_curr, first_iterate=True)\n u0 = compute_u(x_curr, D0, first_iterate=True)\n x_next = x_curr + D0.dot(u0).squeeze()\n self.counter += 1\n \n # Main loop\n while np.linalg.norm(grad_f(x_next)) > np.sqrt(N) * self.tol:\n x_previous = x_curr\n x_curr = x_next\n # Compute directions\n D_curr = compute_directions(x_curr, x_previous)\n # Compute minimizer\n u_curr = compute_u(x_curr, D_curr)\n # Next iterate using subspace strategy\n x_next = x_curr + D_curr.dot(u_curr).squeeze()\n\n # Store\n if self.counter % 50 == 1: # first step outside loop\n self.f_values.append(f(x_next))\n self.iterations_time.append(time.time() - t0)\n self.counter += 1\n \n # Compute total time in seconds\n self.total_time = time.time() - t0\n # Store solution\n self.x_hat = x_curr \n```\n\n\n```python\n%%time\nsolver = MMMG()\nsolver.solve(x0=x0, tol=1e-2)\nprint(solver)\nsolver.display()\n```\n\n### Block-coordinate MM quadratic algorithm\n\nAnother acceleration strategy consists in applying a block alternation technique. The vector $x$ is divided into $J \\geq 1$ blocks, with size $1 \\leq N_j \\leq N$. At each iteration $k \\in \\mathbb{N}$, a block index $j \\subset \\{1, \\ldots, J\\}$ is chosen, and the corresponding components of $x$, denoted $x^{(j)}$ are updated according to a MM quadratic rule. 
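\nBefore specifying how the blocks are chosen, here is a minimal, hypothetical illustration (sizes unrelated to the actual problem) of what updating only the $j$-th block of a flat vector means with the indexing convention used below:\n\n\n```python\nimport numpy as np\n\n# Hypothetical example: 8 unknowns split into J = 4 blocks of size Nj = 2.\n# Updating block j only touches entries (j - 1) * Nj ... j * Nj - 1 (0-based slicing).\nx_demo = np.zeros(8)\nj, Nj = 2, 2\nlower, upper = (j - 1) * Nj, j * Nj\nx_demo[lower:upper] += np.array([0.5, -0.1])  # a block-sized correction\nprint(x_demo)  # only entries 2 and 3 have changed\n```\n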
Here we will assume that the blocks are selected in a cyclic manner, that is:\n\n$$\\forall k \\in \\mathbb{N}, \\quad j = k \\text{mod} \\; J + 1$$\n\nFor a given block index $j$, the corresponding pixel indexes are updated in the image:\n\n$$n \\in \\mathbb{J}_j = \\{(j - 1) N_j + 1, \\ldots, j N_j\\}$$\n\n#### Question 1.\n\nCreate a function which gives, as an output, matrix $A_j(x) \\in \\mathbb{R}^{N_j \\times N_j}$ containing only the lines and rows of $A(x)$ with indexes $\\mathbb{J}_j$.\n\n---\n\n\n\n\n```python\n# Get A block\ndef A_block(x, j, Nj, \u03b4=\u03b4, \u03bb=\u03bb):\n \"\"\"\n Compute the matrix $Aj(x) \\in \\mathbb{R}^{N_j \\times N_j}$\n containing only the lines and rows of $A(x)$ with indexes $\\mathbb{J}_j$\n \n Parameters\n ------------\n - x : numpy.array\n Vector x\n \n - j : int\n Index to select block coordinates\n \n - Nj : int\n Size of submatrix\n \"\"\"\n # Select relevant block in H and G\n sub_H = H[:, (j - 1) * Nj : j * Nj]\n sub_G = G[:, (j - 1) * Nj : j * Nj]\n \n # Compute Diag(Gx)\n d = 1 / (\u03b4**2 * np.sqrt(1 + G.dot(x) ** 2 / \u03b4 ** 2)) \n D_G = sparse.diags(d).tocsc()\n \n return sub_H.T.dot(sub_H) + \u03bb * sub_G.T.dot(D_G.dot(sub_G))\n```\n\n##### Question 2.\n\nDeduce an implementation of a block coordinate MM quadratic algorithm for minimizing $f$.\n\nTest if for $N_j = N / K$ with $K \\in \\{1, 2, 3, 5, 6, 9\\}$.\n\n---\n\n\n```python\nclass BCMMQ(Solver):\n \n def __init__(self, K):\n self.name = \"Block Coordinate MMQ\"\n self.short = \"BCMMQ\"\n self.K = K\n \n def solve(self, x0=x0, tol=1e-2):\n # Setup\n self.x0 = x0\n self.tol = tol\n self.counter = 0\n self.f_values = list()\n self.iterations_time = list()\n \n # Initialization\n t0 = time.time()\n x_curr = np.copy(self.x0)\n \n # Implement MMQ algorithm\n # Use stopping criterion\n while np.linalg.norm(grad_f(x_curr)) > np.sqrt(N) * self.tol:\n # Create block\n j = self.counter % self.K + 1\n Nj = N // self.K\n lower, upper = (j - 1) * Nj , j * Nj\n A_j = A_block(x_curr, j, N // self.K) \n\n # Update block\n x_curr[lower:upper] += - sparse.linalg.bicg(A=A_j, b=grad_f(x_curr)[lower:upper])[0]\n\n if self.counter % 10 == 0:\n self.f_values.append(f(x_curr))\n self.iterations_time.append(time.time() - t0)\n self.counter += 1\n \n # Compute total time in seconds\n self.total_time = time.time() - t0\n # Store solution\n self.x_hat = x_curr \n \n def display(self):\n # Display reconstruction\n fig, axes = plt.subplots(1, 2, figsize=(14, 5))\n axes[0].imshow(x.reshape((90, 90), order=\"F\"))\n axes[0].set_axis_off()\n axes[0].set_title(\"Original absorption image\");\n axes[1].set_axis_off()\n axes[1].imshow(self.x_hat.reshape((90, 90), order=\"F\"))\n axes[1].set_title(f\"Reconstruction image (K={self.K})\")\n # Display iterates\n plt.figure(figsize=(12, 5))\n plt.title(f\"Value of $f$ along the iterations of {self.name} algorithm (K={self.K})\")\n plt.xlabel(\"Iterations time (in seconds)\")\n plt.ylabel(\"Value of $f$\")\n plt.plot(self.iterations_time, self.f_values, \n linestyle=\"--\", marker=\"o\", label=f\"{self.short} iterates\")\n plt.yscale(\"log\")\n plt.legend(frameon=True);\n \n def __repr__(self):\n if self.counter is None:\n raise ValueError(\"Cannot call print before using ``solve``\")\n return (f\"{self.name}: {self.counter} steps \"\n f\"in {round(self.total_time, 2)} seconds (K={self.K})\")\n \n```\n\n\n```python\n%%time\nsolver = BCMMQ(K=5)\nsolver.solve(x0=x0, tol=1e-2)\nprint(solver)\nsolver.display()\n```\n\n\n```python\n# Setup\nK_range = [5, 6, 9] # I was 
getting MemoryError using K=1 or K=2 or K=3\nreconstructions = list()\ncomputation_times = list()\nf_iterates = list()\n\nfor K in K_range:\n solver = BCMMQ(K=K)\n solver.solve(x0=x0, tol=1e-1)\n print(solver)\n x_hat, _ , iter_time, f_values, _ = solver.get_info()\n reconstructions.append(x_hat)\n computation_times.append(iter_time)\n f_iterates.append(f_values)\n \n# Grid plot for reconstruction images\nfig, axes = plt.subplots(1, len(K_range) + 1, figsize=(12, 5))\naxes[0].imshow(x.reshape((90, 90), order=\"F\"))\naxes[0].set_axis_off()\naxes[0].set_title(\"Original absorption image\");\nfor i in range(1, len(K_range) + 1):\n axes[i].imshow(reconstructions[i - 1].reshape((90, 90), order=\"F\"))\n axes[i].set_axis_off()\n axes[i].set_title(f\"Reconstruction image (K={K_range[i - 1]})\")\n \n# Single plot for iterates evolution\nplt.figure(figsize=(12, 5))\nplt.title(f\"Value of $f$ along the iterations of the BCMMQ algorithm for different K\")\nplt.xlabel(\"Iterations time (in seconds)\")\nplt.ylabel(\"Value of $f$\")\nfor i in range(len(K_range)):\n plt.plot(computation_times[i], f_iterates[i], \n linestyle=\"--\", marker=\"o\", label=f\"K={K_range[i]}\")\nplt.yscale(\"log\")\nplt.legend(frameon=True);\n```\n\n**Comment:**\n\n* It looks like the higher ``K`` is, faster is the convergence. In the last section, we will evaluate the Block Coordinate MM Quadratic algorithm using ``K=9``.\n\n### 3.5 Parallel MM quadratic algorithm (Bonus)\n\n### 3.6 Comparison of the methods\n\n#### Question 1.\n\nCreate a function that computes the value of the criterion f along the iterations of the algorithm\n\n---\n\nWe did that previously using ``f = lambda x : 1 / 2 * np.sum((H.dot(x) - y)**2) + \u03bb * np.sum(\u03d5(G.dot(x)))`` \n\nwhere $\\phi$ is defined as ``\u03d5 = lambda u : np.sqrt(1 + u**2 / \u03b4 ** 2)``\n\n#### Question 2.\n\nWe will consider that the convergence is reached when the following stopping criterion is fulfilled:\n\n$$\\|\\nabla f(x_k) \\| \\leq \\sqrt{N} \\times 10^{-4}$$\n\n* What is the required time for each method to achieve this condition?\n* For each method, plot the evolution of $\\big(f(x_k)\\big)_{k \\in \\mathbb{N}}$ until the stopping criterion is satisfied.\n\n---\n\n\n```python\n# Setup\ntol=1e-4\nsolvers = [GradientDescent(),\n MMQuadratic(),\n MMMG(),\n BCMMQ(K=9)]\n\nreconstructions = list()\niterates_times = list()\niterates_values = list()\nshort_names = list()\ntotal_times = dict()\n\n# Loop over solvers\nfor solver in solvers:\n solver.solve(x0=x0, tol=tol)\n print(solver)\n x_hat, _ , iter_time, f_values, total_time = solver.get_info()\n # Store results\n reconstructions.append(x_hat)\n iterates_times.append(iter_time)\n iterates_values.append(f_values)\n short_names.append(solver.short)\n total_times[solver.short] = total_time\n \n# Grid plot for reconstruction images\nfig, axes = plt.subplots(1, len(solvers) + 1, figsize=(12, 5))\naxes[0].imshow(x.reshape((90, 90), order=\"F\"))\naxes[0].set_axis_off()\naxes[0].set_title(\"Original\");\nfor i in range(1, len(solvers) + 1):\n axes[i].imshow(reconstructions[i - 1].reshape((90, 90), order=\"F\"))\n axes[i].set_axis_off()\n axes[i].set_title(f\"{short_names[i - 1]}\")\n \n# Single plot for iterates evolution\nplt.figure(figsize=(12, 5))\nplt.title(f\"Comparing the iterates of $f$ using different solvers (tol={tol})\")\nplt.xlabel(\"Iterations time (in seconds)\")\nplt.ylabel(\"Value of $f$\")\nfor i in range(len(solvers)):\n plt.plot(iterates_times[i], iterates_values[i], \n linewidth=2, linestyle=\"--\",\n 
marker=\"o\", markersize=6, \n label=f\"{short_names[i]}\")\nplt.yscale(\"log\")\nplt.xscale(\"log\")\nplt.legend(frameon=True);\n\n# Print sorted convergence times\nprint(\"-\"*10)\nprint(\"Sorted convergence times\")\nfor i, solver in enumerate(sorted(total_times, key=total_times.get)):\n print(f\"{i+1}) {solver} in {round(total_times[solver], 2)}s\")\nprint(\"-\"*10)\n```\n\n#### Question 3.\n\nThe Signal to Noise Ratio (SNR) of a restored image $\\hat{x}$ is defined as:\n\n$$\\text{SNR} = 10 \\log_{10} \\big( \\frac{\\|\\overline{x}\\|^2}{\\|\\overline{x} - \\hat{x}\\|^2} \\big)$$\n\n* Using the fastest method, look for the parameters $(\\lambda, \\delta)$ that optimize the SNR.\n\n---\n\nLet us redefine our ``MMQuadratic`` solver so that we can incorporate $\\lambda$ and $\\delta$ seamlessly.\n\n\n```python\nclass MMQuadraticFinal(Solver):\n \n def __init__(self):\n self.name = \"MM quadratic\"\n self.short = \"MMQ\"\n \n def solve(self, x0=x0, tol=1e-2, \u03bb=\u03bb, \u03b4=\u03b4):\n # Setup\n self.x0 = x0\n self.tol = tol\n self.counter = 0\n self.f_values = list()\n self.iterations_time = list()\n \n # Redefine f\n \u03d5 = lambda u : np.sqrt(1 + u**2 / \u03b4 ** 2)\n f = lambda x : 1 / 2 * np.sum((H.dot(x) - y)**2) + \u03bb * np.sum(\u03d5(G.dot(x)))\n \n # Initialization\n t0 = time.time()\n x_curr = np.copy(self.x0)\n \n # Main loop\n while np.linalg.norm(grad_f(x_curr, \u03bb=\u03bb, \u03b4=\u03b4)) > np.sqrt(N) * self.tol:\n # Instanciate Linear Operator\n A = sparse.linalg.LinearOperator(shape=(N, N), \n matvec=curvature_A(x_curr, \u03bb=\u03bb, \u03b4=\u03b4), \n rmatvec=curvature_A(x_curr, \u03bb=\u03bb, \u03b4=\u03b4))\n # Solve Linear System\n x_curr = x_curr - sparse.linalg.bicg(A=A, b=grad_f(x_curr, \u03bb=\u03bb, \u03b4=\u03b4))[0]\n # Save iterates\n if self.counter % 10 == 0:\n self.f_values.append(f(x_curr))\n self.iterations_time.append(time.time() - t0)\n self.counter += 1\n \n # Compute total time in seconds\n self.total_time = time.time() - t0\n # Store solution\n self.x_hat = x_curr\n```\n\n\n```python\n%%time\n# Compute the Signal to Noise Ratio between restored and original images\nSNR = lambda x_hat : 10 * np.log10(np.sum(x ** 2) / np.sum((x - x_hat) ** 2))\n\n# Setup\ntol = 1e-3\n\u03bb_range = np.linspace(0.5, 2, 4, endpoint=True)\n\u03b4_range = np.linspace(0.5, 2, 4, endpoint=True)\n\nsettings = list(product(\u03bb_range, \u03b4_range))\nn_settings = len(settings)\n\nbest_SNR = - np.inf\nbest_approx = None\nbest_\u03bb = None\nbest_\u03b4 = None\n\nsolver = MMQuadraticFinal()\n\n# Main loop\nfor i, setup in enumerate(settings):\n \u03bb, \u03b4 = setup\n print(\"-\"*30)\n print(f\"Run {i+1}/{n_settings}: \u03bb={\u03bb}, \u03b4={\u03b4}\")\n solver.solve(x0=x0, tol=tol, \u03bb=\u03bb, \u03b4=\u03b4)\n approx = solver.x_hat\n score = SNR(approx)\n \n # Update if new print\n if score > best_SNR:\n best_SNR = score\n best_approx = approx\n best_\u03bb = \u03bb\n best_\u03b4 = \u03b4\n print(f\"New best: SNR={round(best_SNR, 4)} (\u03bb={best_\u03bb}, \u03b4={best_\u03b4})\")\n\nprint(f\"Best SNR: {round(best_SNR, 4)} using \u03bb={best_\u03bb}, \u03b4={best_\u03b4}\")\n\n# Display reconstruction\nfig, axes = plt.subplots(1, 2, figsize=(12, 5))\naxes[0].imshow(x.reshape((90, 90), order=\"F\"))\naxes[0].set_axis_off()\naxes[0].set_title(\"Original absorption image\")\naxes[1].set_axis_off()\naxes[1].imshow(best_approx.reshape((90, 90), order=\"F\"))\naxes[1].set_title(f\"Best reconstruction image (SNR={round(best_SNR, 4)})\");\n```\n\n**Comment:**\n\n* We 
recover nicely the details compared to previous approximations that were using different values for $\\lambda$ and $\\delta$.\n", "meta": {"hexsha": "7224a2814e8a590eaacb05dd2d98d3561d92a191", "size": 725748, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "TP4/Tomography.ipynb", "max_stars_repo_name": "inzouzouwetrust/LSOPTIM", "max_stars_repo_head_hexsha": "99a0a3fbab54000044a978ab7e6f317abfe79414", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "TP4/Tomography.ipynb", "max_issues_repo_name": "inzouzouwetrust/LSOPTIM", "max_issues_repo_head_hexsha": "99a0a3fbab54000044a978ab7e6f317abfe79414", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "TP4/Tomography.ipynb", "max_forks_repo_name": "inzouzouwetrust/LSOPTIM", "max_forks_repo_head_hexsha": "99a0a3fbab54000044a978ab7e6f317abfe79414", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 455.8718592965, "max_line_length": 75592, "alphanum_fraction": 0.9321141774, "converted": true, "num_tokens": 9070, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9173026595857203, "lm_q2_score": 0.907312229506029, "lm_q1q2_score": 0.8322799212005298}} {"text": "# Least-Squares Fitting\n\n### Prof. Robert Quimby\n© 2018 Robert Quimby\n\n## In this tutorial you will...\n\n* find the best-fit model when you have more data than model parameters\n* learn how to use `numpy.matrix` objects to do least-square fits\n* estimate the uncertainty in the best-fit model parameters\n\n## Plot some Data\n\n\n```python\n%matplotlib inline\nimport matplotlib.pyplot as plt\nplt.rcParams['figure.figsize'] = (7, 5)\n\nx = [1.3, 3.4]\ny = [2.1, 5.9]\n\nplt.plot(x, y, 'ro')\nplt.xlim(0, 8)\nplt.ylim(0, 15)\n```\n\n## Fit a Line to these Data\n\n$y = mx + b$\n\n$$\n\\begin{align}\n 5.9 & = 3.4 m + b \\\\\n-( 2.1 & = 1.3 m + b) \\\\\n\\hline\n 3.8 & = 2.1 m \n\\end{align}\n$$\n\n\n```python\nm = \nb = \n```\n\n## Plot the linear relation\n\n\n```python\nplt.plot(x, y, 'ro')\nplt.xlim(0, 8)\nplt.ylim(0, 15)\n\nimport numpy as np\nmodelx = np.linspace(0, 8, 10)\nmodely = m * modelx + b\nplt.plot(modelx, modely, '--')\n```\n\n## What if we want to fit a line to three points?\n\n\n```python\nx = [1.3, 3.4, 6.4]\ny = [2.1, 5.9, 13.5]\nplt.plot(x, y, 'ro')\n\n# overlay the model\nplt.plot(modelx, modely, '--')\nplt.xlim(0, 8)\nplt.ylim(0, 15)\n```\n\n## Dealing with Overdetermined Data\n\n* Real world measurements are imperfect!\n* Your best attempts to measure something will still have some error\n* The quantity you are trying to measure may deviate from the predictions of your model\n\n#### Therefore...\n* Three or more data points will usually **not** fix *exactly* on a line, even if you think they should\n\n## Fitting Models to Overdetermined Data\n\n* **IF** we can assume that the deviations from our ideal model are random and follow a Gaussian distribution\n* **THEN** there is an ideal method for determining the best fitting model\n\n## Least-Squares Fitting\n\n* minimize the sum of the square of the deviations from the model\n\n### Fit a line to the data in the least-squares sense\n\ndata:\n$$x = [1.3, 3.4, 6.4]$$\n$$y = [2.1, 5.9, 
13.5]$$\n\nmodel: $$y = mx + b$$\n\n#### The sum of the squares of the deviations as a function of $m$ and $b$ is:\n$$S(m, b) = \\Sigma (y_i - \\theta_i)^2$$\n\nwhere $\\theta_i$ is the **model prediction** for the $i^{\\rm th}$ data point. With $\\theta_i = mx_i + b$, we have:\n\n$$S(m, b) = (2.1 - (1.3m + b))^2 \\\\ + (5.9 - (3.4m + b))^2 \\\\ + (13.5 - (6.4m + b))^2$$\n\nThis simplifies to:\n\n$$ S(m, b) = 54.21 m^2 + 22.2 m b - 218.38 m + 3 b^2 - 43 b + 221.47$$\n\n### Now we just have to minimize this...\n\n### Find the partial derivatives, $\\delta S / \\delta m$ and $\\delta S / \\delta b$\n\n$$ \\delta S / \\delta m = 108.42 m + 22.2 b - 218.38 $$\n$$ \\delta S / \\delta b = 6 b + 22.2 m - 43 $$\n\n### ...set these to zero and solve for $m$ and $b$\n\n\n```python\nm = 494. / 219.\nb = (43 - 22.2 * m) / 6\nprint(m, b)\n```\n\n\n```python\n# now we can plot the best fit line with our data\nmodely = m * modelx + b\nplt.plot(x, y, 'ro')\nplt.plot(modelx, modely, '--')\nplt.xlim(0, 8)\nplt.ylim(0, 15)\n```\n\n## That was for a two parameter model with 3 data points...\n\n* think about doing this for, say, 100 data points\n\n## Matricies to the Rescue!\n\nWe can take the three equations:\n\n$$\n\\begin{align}\n2.1 & = 1.3m + b\\\\\n5.9 & = 3.4m + b\\\\\n13.5 & = 6.4m + b\\\\\n\\end{align}\n$$\n\nand turn them into a single matrix equation:\n\n$$ Y = Xp $$\n\n$Y = \n\\left[ \\begin{array}{c}\n2.1 \\\\\n5.9 \\\\\n13.5 \\end{array} \\right] $, \n$X = \n\\left[ \\begin{array}{cc}\n1.3 & 1 \\\\\n3.4 & 1 \\\\\n6.4 & 1 \\end{array} \\right] \n$, and $p = \\left[ \\begin{array}{c}\nm \\\\\nb \\end{array} \\right] $\n\n### Now, just solve for $p$!\n\n$$ Xp = Y $$\n\n$$ X^T X p = X^T Y $$\n\n$$ (X^T X)^{-1} (X^T X) p = (X^T X)^{-1} X^T Y $$\n\n$$ p = (X^T X)^{-1} X^T Y $$\n\n## A quick intro to `numpy.matrix` objects\n\n### `numpy.matrix` is NOT the same as `numpy.array`\n\n\n```python\n# create an array and a matrix for testing\na1 = \nm1 = \nprint(\"the array is:\\n\", a1)\nprint(\"the matrix is:\\n\", m1)\n```\n\n\n```python\n# you can multiply them by scalars\nprint(\"the scaled array is: \\n\", a1 * 2)\nprint(\"the scaled matrix is: \\n\", m1 * 2)\n```\n\n\n```python\n# you can add them\nprint(\"the array sum is:\\n\", a1 + a1)\nprint(\"the matrix sum is:\\n\", m1 + m1)\n```\n\n### Key difference between `numpy.matrix` and `numpy.array`: multiplication!\n\n\n```python\n# note that array and matrix multiplacation is diferent\nprint(\"the array product is:\\n\", a1 * a1)\nprint(\"the matrix product is:\\n\", m1 * m1)\n```\n\n## Recall matrix multiplication\n\n$$\n\\left[ \\begin{array}{cc}\na & b \\\\\nc & d \\\\\n\\end{array} \\right]\n\\left[ \\begin{array}{cc}\nw & x \\\\\ny & z \\\\\n\\end{array} \\right] \n=\n\\left[ \\begin{array}{cc}\naw + by & ax + bz \\\\\ncw + dy & cx + dz \\\\ \n\\end{array} \\right] $$\n\n\n\nfor more see:\n * https://en.wikipedia.org/wiki/Matrix_multiplication\n * http://mathworld.wolfram.com/MatrixMultiplication.html\n\n## Matrix Math Makes Least-Squares Fitting a Snap!\n\nExpress the model, $y = mx + b$ as:\n$$ Y = Xp $$\n\nwhere $Y = \n\\left[ \\begin{array}{c}\ny_0 \\\\\ny_1 \\\\\n \\vdots \\\\\ny_{N-1} \\end{array} \\right] $, \n$X = \n\\left[ \\begin{array}{cc}\nx_0 & 1 \\\\\nx_1 & 1 \\\\\n \\vdots & \\vdots \\\\\nx_{N-1} & 1 \\end{array} \\right] \n$, and $p = \\left[ \\begin{array}{c}\nm \\\\\nb \\end{array} \\right] $\n\n### here's what that looks like with `numpy.matrix` objects\n\n\n```python\n# set up the x in a 3 row by two column (N x 2) matrix\nX = 
\n```\n\n\n```python\n# set up the y in a single column matrix\nY = \n```\n\n\n```python\n# solve for p\np = \n```\n\n## What if we have additional parameters in our model?\n\nAs long as you can express the model as a linear equation with independant model parameters\n$$ Y = Xp $$\nwill work.\n\n### for example, what about a quadratic equation...\n\n$$ y = ax^2 + bx + c $$\n\nno problem!\n\n$$\n\\left[ \\begin{array}{c}\ny_0 \\\\\ny_1 \\\\\n \\vdots \\\\\ny_N \\end{array} \\right] = \\left[ \\begin{array}{ccc}\nx_0^2 & x_0 & 1 \\\\\nx_1^2 & x_1 & 1 \\\\\n \\vdots & \\vdots \\\\\nx_N^2 & x_N & 1 \\end{array} \\right] \n\\left[ \\begin{array}{c}\na \\\\\nb \\\\\nc \\end{array} \\right] $$\n\n\n## Sample Variance\n\n* How close are the data points to the model?\n\n\n```python\n# predicted y values\nmodelY = \n\n# residuals from observed values\n\n```\n\n### Variance (${\\rm var} = \\sigma^2$) of the data from the model \n\n\n```python\nM = ???? # number of data samples\nN = ???? # number of model parameters\nsample_var = ????\nprint(sample_var)\n```\n\n\n```python\n# plot residuals\nplt.errorbar(x, y, np.sqrt(sample_var[0,0]), ls='None', marker='o', color='red', capsize=3)\nplt.plot(modelx, modely, '--')\n```\n\n## Uncertainty in the model parameters\n\n\n```python\n# now for the model parameter uncertainties\npvar = \npsig = \n```\n\n\n```python\n# best fit parameters\nm, b = \nmsig, bsig = \n\n# now for the parameter errors\nprint(\"m is {:.3f} +/- {:.3f}\".format(m, msig))\nprint(\"b is {:.3f} +/- {:.3f}\".format(b, bsig))\n```\n\n## For more details see...\n\n[\"Least-Squares and Chi-Square for the Budding Aficionado: Art and Practice\"](http://ugastro.berkeley.edu/radio/2015/handout_links/lsfit_2008.pdf) by Carl Heiles (UC Berkeley)\n", "meta": {"hexsha": "e287f2b9772ac0dbe7da1ad5725f5fe64d6d8997", "size": 18824, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "least-squares.ipynb", "max_stars_repo_name": "sandeep-rout/Astronomical_Techniques", "max_stars_repo_head_hexsha": "6f5b398676ced651e6349719a5e203ebb5f9b5b9", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 29, "max_stars_repo_stars_event_min_datetime": "2020-05-21T06:20:09.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-01T22:03:20.000Z", "max_issues_repo_path": "least-squares.ipynb", "max_issues_repo_name": "bridareiven/Astronomical_Techniques", "max_issues_repo_head_hexsha": "6f5b398676ced651e6349719a5e203ebb5f9b5b9", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "least-squares.ipynb", "max_forks_repo_name": "bridareiven/Astronomical_Techniques", "max_forks_repo_head_hexsha": "6f5b398676ced651e6349719a5e203ebb5f9b5b9", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 17, "max_forks_repo_forks_event_min_datetime": "2020-06-24T08:12:45.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-18T11:40:04.000Z", "avg_line_length": 19.3066666667, "max_line_length": 183, "alphanum_fraction": 0.4616447089, "converted": true, "num_tokens": 2410, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9173026641072386, "lm_q2_score": 0.9073122132152183, "lm_q1q2_score": 0.8322799103593546}} {"text": "# Sinais exponenciais\n\nNeste notebook avaliaremos os sinais exponenciais do tipo\n\n\\begin{equation}\nx(t) = A \\ \\mathrm{e}^{a \\ t}\n\\end{equation}\n\nEstamos interessados em 3 casos:\n\n1. 
$A \\ \\in \\ \\mathbb{R}$ e $a \\ \\in \\ \\mathbb{R}$ - As exponenciais reais.\n\n2. $A \\ \\in \\ \\mathbb{C}$ e $a \\ \\in \\ \\mathbb{C}, \\ \\mathrm{Re}\\left\\{a\\right\\} = 0$ - As exponenciais complexas.\n\n3. $A \\ \\in \\ \\mathbb{C}$ e $a \\ \\in \\ \\mathbb{C}, \\ \\mathrm{Re}\\left\\{a\\right\\} < 0$\n\n\n```python\n# importar as bibliotecas necess\u00e1rias\nimport numpy as np\nimport matplotlib.pyplot as plt\n```\n\n## 1. $A \\ \\in \\ \\mathbb{R}$ e $a \\ \\in \\ \\mathbb{R}$ - As exponenciais reais.\n\n\n```python\n# Sinal\nt = np.linspace(-1, 5, 1000) # vetor temporal\nA = 1\na = -5\n\nxt = A*np.exp(a*t)\n\n# Figura\nplt.figure()\nplt.title('Exponencial real')\nplt.plot(t, xt, '-b', linewidth = 2, label = 'Exp. Real')\nplt.legend(loc = 'best')\nplt.grid(linestyle = '--', which='both')\nplt.xlabel('Tempo [s]')\nplt.ylabel('Amplitude [-]')\nplt.tight_layout()\nplt.show()\n```\n\n## 2. $A \\ \\in \\ \\mathbb{C}$ e $a \\ \\in \\ \\mathbb{C}, \\ \\mathrm{Re}\\left\\{a\\right\\} = 0$ - As exponenciais complexas.\n\nTemos grande interesse neste caso. Ele est\u00e1 relacionado ao sinal\n\n\\begin{equation}\nx(t) = A \\ \\mathrm{cos}(2 \\pi f t - \\phi) = A \\ \\mathrm{cos}(\\omega t - \\phi).\n\\end{equation}\n\nA rela\u00e7\u00e3o de Euler nos diz que:\n\n\\begin{equation}\n\\mathrm{cos}(\\omega t - \\phi) = \\mathrm{Re}\\left\\{\\mathrm{e}^{\\mathrm{j}(\\omega t-\\phi)} \\right\\} .\n\\end{equation}\n\nAssim, o sinal $x(t)$ torna-se:\n\n\\begin{equation}\nx(t) = A \\ \\mathrm{cos}(\\omega t - \\phi) = A\\mathrm{Re}\\left\\{\\mathrm{e}^{\\mathrm{j}(\\omega t-\\phi)} \\right\\} = \\mathrm{Re}\\left\\{A \\mathrm{e}^{\\mathrm{j}(\\omega t-\\phi)} \\right\\}\n\\end{equation}\n\n\\begin{equation}\nx(t) = \\mathrm{Re}\\left\\{A\\mathrm{e}^{-\\mathrm{j}\\phi} \\ \\mathrm{e}^{\\mathrm{j}\\omega t} \\right\\}\n\\end{equation}\nem que $\\tilde{A} = A\\mathrm{e}^{-\\mathrm{j}\\phi}$ \u00e9 a amplitude complexa do cosseno e cont\u00eam as informa\u00e7\u00f5es de magnitude, $A$, e fase, $\\phi$.\n\n\n```python\n# Sinal\nt = np.linspace(-2, 2, 1000) # vetor temporal\nA = 1.5\nphi = 0\n\nA = A*np.exp(-1j*phi)\n\nf=1\nw = 2*np.pi*f\na = 1j*w\n\nxt = np.real(A*np.exp(a*t))\n\n# Figura\nplt.figure()\nplt.title('Exponencial complexa')\nplt.plot(t, xt, '-b', linewidth = 2, label = 'Exp. complexa')\nplt.legend(loc = 'upper right')\nplt.grid(linestyle = '--', which='both')\nplt.xlabel('Tempo [s]')\nplt.ylabel('Amplitude [-]')\nplt.ylim((-2, 2))\nplt.xlim((t[0], t[-1]))\nplt.tight_layout()\nplt.show()\n```\n\n3. $A \\ \\in \\ \\mathbb{C}$ e $a \\ \\in \\ \\mathbb{C}, \\ \\mathrm{Re}\\left\\{a\\right\\} < 0$\n\nNeste caso, se $a \\ \\in \\ \\mathbb{C}$ e $\\mathrm{Re}\\left\\{a\\right\\} < 0$, teremos um sinal oscilat\u00f3rio, cuja amplitude decai com o tempo. \u00c9 t\u00edpico de um sistema massa-mola-amortecedor.\n\n\n```python\n# Sinal\nt = np.linspace(0, 5, 1000) # vetor temporal\nA = 1.5\nphi = 0.3\nA = A*np.exp(-1j*phi)\n\nf=1\nw = 2*np.pi*f\na = -0.5+1j*w\n\nxt = np.real(A*np.exp(a*t))\n\n# Figura\nplt.figure()\nplt.title('Exponencial complexa vom decaimento')\nplt.plot(t, xt, '-b', linewidth = 2, label = 'Exp. complexa com decaimento')\nplt.legend(loc = 'upper right')\nplt.grid(linestyle = '--', which='both')\nplt.xlabel('Tempo [s]')\nplt.ylabel('Amplitude [-]')\nplt.ylim((-2, 2))\nplt.xlim((0, t[-1]))\nplt.tight_layout()\nplt.show()\n```\n\n## Exponenciais complexas em sinais discretos\n\nRetomamos agora as exponenciais complexas e discretas. 
Lembramos que para um sinal cont\u00ednuo, $x(t)=\\mathrm{e}^{\\mathrm{j} \\omega t}$, a taxa de oscila\u00e7\u00e3o aumenta quando $\\omega$ aumenta. Para sinais discretos do tipo\n\n\\begin{equation}\nx[n] = \\mathrm{e}^{\\mathrm{j} \\omega n} \n\\end{equation}\nveremos um aumento da taxa de oscila\u00e7\u00e3o para $0 \\leq \\omega < \\pi$ e uma diminui\u00e7\u00e3o da taxa de oscila\u00e7\u00e3o para $\\pi \\leq \\omega \\leq 2 \\pi$. Isto se relacionar\u00e1 depois com a amostragem de sinais. Veremos que para amostrar um sinal corretamente (representar bem suas componentes de frequ\u00eancia), precisaremos usar uma taxa de amostragem que seja pelo menos o dobro da maior frequ\u00eancia contida no sinal a ser amostrado.\n\n\n\n```python\nomega = [0, np.pi/8, np.pi/4, np.pi/2, np.pi, \n 3*np.pi/2, 7*np.pi/4, 15*np.pi/8, 2*np.pi]#np.linspace(0, 2*np.pi, 9) # Frequencias angulares\nn = np.arange(50) # amostras\n\nplt.figure(figsize=(15,10))\nfor jw,w in enumerate(omega):\n xn = np.real(np.exp(1j*w*n))\n \n plt.subplot(3,3,jw+1)\n plt.stem(n, xn, '-b', label = r'$\\omega$ = {:.3} [rad/s]'.format(float(w)), basefmt=\" \", use_line_collection= True)\n plt.legend(loc = 'upper right')\n plt.grid(linestyle = '--', which='both')\n plt.xlabel('Amostra [-]')\n plt.ylabel('Amplitude [-]')\n plt.ylim((-1.5, 1.5))\n plt.tight_layout()\n \nplt.show()\n```\n", "meta": {"hexsha": "2403e82a16f5776c818907980d72a15399ede1da", "size": 182961, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Aula 6 - Sinais exponenciais/sinais exponenciais.ipynb", "max_stars_repo_name": "RicardoGMSilveira/codes_proc_de_sinais", "max_stars_repo_head_hexsha": "e6a44d6322f95be3ac288c6f1bc4f7cfeb481ac0", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 8, "max_stars_repo_stars_event_min_datetime": "2020-10-01T20:59:33.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-27T22:46:58.000Z", "max_issues_repo_path": "Aula 6 - Sinais exponenciais/sinais exponenciais.ipynb", "max_issues_repo_name": "RicardoGMSilveira/codes_proc_de_sinais", "max_issues_repo_head_hexsha": "e6a44d6322f95be3ac288c6f1bc4f7cfeb481ac0", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Aula 6 - Sinais exponenciais/sinais exponenciais.ipynb", "max_forks_repo_name": "RicardoGMSilveira/codes_proc_de_sinais", "max_forks_repo_head_hexsha": "e6a44d6322f95be3ac288c6f1bc4f7cfeb481ac0", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 9, "max_forks_repo_forks_event_min_datetime": "2020-10-15T12:08:22.000Z", "max_forks_repo_forks_event_max_datetime": "2021-04-12T12:26:53.000Z", "avg_line_length": 639.7237762238, "max_line_length": 88804, "alphanum_fraction": 0.939839638, "converted": true, "num_tokens": 1727, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9241418262465169, "lm_q2_score": 0.9005297894548548, "lm_q1q2_score": 0.8322172442162009}} {"text": "```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport math\nimport scipy.integrate as inte\n```\n\n### ODE\n$$F(x,y,y',...,y^{(n-1)})=y^{(n)}$$\nwhere $y^{(n)}$ is the n-th derivative.\n- Order of an ODE is the highest order derivtive that appears\n- Mechanical systems are usually second order\n- Recall Newton's second $\\mathbf{F=ma=m\\ddot{x}}$\n\n### State-space for 2nd order systems\nDefine vector equation\n\n$$x = \\begin{pmatrix} x_1\\\\x_2\\end{pmatrix}=\\begin{pmatrix}\\chi\\\\\\dot{\\chi}\\end{pmatrix}\\Longrightarrow \\dot{x}=\\begin{pmatrix}x_2\\\\\\frac{F}{m}\\end{pmatrix}$$\n\n### Numerical ODE integration\n- Consider $\\dot{x}=\\alpha$\n- Could use fixed timestep, $dt$\n- Set $x(t_{k+1})=x(t_k)+\\alpha dt$\n- If $\\alpha(t)$ is not fixed, the results will be inaccurate\n- ode estimates how the right side is changing and picks the best timestep $dt$.\n- Only works when right hand side is smooth\n\n### Examples from MATLAB to python\nTry to solve the following ode problems from https://uk.mathworks.com/help/matlab/ref/ode45.html using python.\n\n### Solve ODE with Single Solution Component\n$$y'=2t$$\n\n\n```python\ndef odeSSC(t,y):\n return 2*t\n\nt0, t1 = 0, 5\nt = np.linspace(t0,t1,100)\ny0 = 0\ny = np.zeros((len(t),1))\ny[0,:] = y0 \nr = inte.ode(odeSSC).set_integrator(\"dopri5\") # choice of method\nr.set_initial_value(y0, t0) # initial values\nfor i in range(1, t.size):\n y[i, :] = r.integrate(t[i]) # get one more value, add it to the array\n if not r.successful():\n raise RuntimeError(\"Could not integrate\")\nplt.plot(t, y)\n```\n\n### Solve Nonstiff equation\nThe van der Pol equation is a second order ODE:\n$$y_1''-\\mu(1-y_1^2)y_1'+y_1=0$$\nwhere $\\mu>0$ is a scalar parameter. Rewrite this equation as first order ODEs by making the substitution $y_2=y_1'$. 
The resulting system of first-order ODEs is, \n\n\\begin{align}\ny_1' &= y_2 \\\\\ny_2' &= \\mu(1-y_1^2)y_2-y_1\n\\end{align}\n\n\n```python\ndef vdp1(t, y):\n return np.array([y[1], (1 - y[0]**2)*y[1] - y[0]])\n\nt0, t1 = 0, 20 # start and end\nt = np.linspace(t0, t1, 100) # the points of evaluation of solution\ny0 = [2, 0] # initial value\ny = np.zeros((len(t), len(y0))) # array for solution\ny[0, :] = y0\nr = inte.ode(vdp1).set_integrator(\"dopri5\") # choice of method\nr.set_initial_value(y0, t0) # initial values\nfor i in range(1, t.size):\n y[i, :] = r.integrate(t[i]) # get one more value, add it to the array\n if not r.successful():\n raise RuntimeError(\"Could not integrate\")\nplt.plot(t, y)\n```\n\n### Pass Extra Parameters to ODE Function\nSolve the ODE,\n$$y''=\\frac{A}{B}ty$$\nRewriting the equation as a 1st order ODE yields,\n\n\\begin{align}\ny_1' &= y_2\\\\\ny_2' &= \\frac{A}{B}ty_1\n\\end{align}\n\n\n```python\ndef odefcn(t,y,A,B):\n return np.array([y[1],A/B*t*y[0]])\nA,B = 1,2\nt0, t1 = 0, 5\nt = np.linspace(t0,t1,100)\ny0 = [0,0.01] # initial value\ny = np.zeros((len(t), len(y0))) # array for solution\nr = inte.ode(lambda t,y:odefcn(t,y,A,B)).set_integrator(\"dopri5\") # choice of method\nr.set_initial_value(y0, t0) # initial values\nfor i in range(1, t.size):\n y[i, :] = r.integrate(t[i]) # get one more value, add it to the array\n if not r.successful():\n raise RuntimeError(\"Could not integrate\")\nplt.plot(t, y)\n```\n\n### ODE with Time-Dependent Terms\nConsider the following ODE with time-dependent parameters,\n$$y'(t)+f(t)y(t) = g(t)$$\n\n\n```python\nft = np.linspace(0,5,25)\ngt = np.linspace(1,6,25)\nf = ft**2-ft-3\ng = 3*np.sin(gt-.25)\nt0, t1, ic, y0 = 1, 5, 1, 1\nt= np.linspace(t0,t1,100)\n```\n\n\n```python\ndef myode(t,y,ft,f,gt,g):\n f = np.interp(t,ft,f)\n g = np.interp(t,gt,g)\n dydt = -f*y+g\n return dydt\n```\n\n\n```python\ny = np.zeros((len(t), 1)) # array for solution\nr = inte.ode(lambda t,y:myode(t,y,ft,f,gt,g)).set_integrator(\"dopri5\",atol=1e-4,rtol=1e-2) # choice of method\nr.set_initial_value(y0, t0) # initial values\nfor i in range(1, t.size):\n tmp = r.integrate(t[i]) # get one more value, add it to the array\n y[i, :] = tmp\n if not r.successful():\n raise RuntimeError(\"Could not integrate\")\nplt.plot(t, y)\n```\n\n### Evaluate and Extend Solution Structure\nThe van der Pol equation is a second order ODE\n$$y_1''-\\mu(1-y_1^2)y_1'+y_1=0$$\n\n\n```python\nt0, t1 = 0, 20 # start and end\nt = np.linspace(t0, t1, 100) # the points of evaluation of solution\ny0 = [2, 0] # initial value\ny = np.zeros((len(t), len(y0))) # array for solution\ny[0, :] = y0\nr = inte.ode(vdp1).set_integrator(\"dopri5\") # choice of method\nr.set_initial_value(y0, t0) # initial values\nfor i in range(1, t.size):\n y[i, :] = r.integrate(t[i]) # get one more value, add it to the array\n if not r.successful():\n raise RuntimeError(\"Could not integrate\")\n```\n\n\n```python\nplt.plot(t,y[:,0])\n```\n\n\n```python\nx0, x1 = 20,35\nx = np.linspace(x0,x1,350)\nyx0 = y[-1,:]\nyx = np.zeros((len(x), len(yx0))) # array for solution\nyx[0,:] = yx0\n\nr.set_initial_value(yx0, x0) # initial values\nfor i in range(1, x.size):\n yx[i, :] = r.integrate(x[i]) # get one more value, add it to the array\n if not r.successful():\n raise RuntimeError(\"Could not integrate\")\n \nplt.plot(t,y[:,0])\nplt.plot(x,yx[:,0])\n```\n\n### Robotics Capstone \n#### 1. 
Solving ODE\n$$\\dot{x}=\\begin{pmatrix}\\dot{x_1}\\\\\\dot{x_2}\\end{pmatrix}=\\begin{pmatrix} x_2 \\\\ \\gamma\\cos(\\omega t)-\\alpha x_1-\\delta x_2 -\\beta x_1^3\\end{pmatrix}$$\n\n\n```python\ndef dyn(t,X,beta):\n delta,alpha,gamma,omega = 1,1,1,1\n x = X[0]; xd = X[1]\n return np.array([xd,gamma*np.cos(omega*t)-alpha*x-delta*xd-beta*x**3])\n```\n\n\n```python\ndef simode(beta,tend):\n t0, t1 = 0, tend # start and end\n t = np.linspace(t0, t1, 100) # the points of evaluation of solution\n y0 = [1, 1] # initial value\n y = np.zeros((len(t), len(y0))) # array for solution\n y[0, :] = y0\n r = inte.ode(lambda t,X: dyn(t,X,beta)).set_integrator(\"dopri5\") \n r.set_initial_value(y0, t0) # initial values\n for i in range(1, t.size):\n y[i, :] = r.integrate(t[i]) # get one more value, add it to the array\n if not r.successful():\n raise RuntimeError(\"Could not integrate\")\n \n return t,y\n```\n\n\n```python\nt,y = simode(1,50)\nplt.figure(figsize=(20,5))\nplt.subplot(1,2,1); plt.plot(t,y)\nplt.subplot(1,2,2); plt.plot(y[:,0], y[:,1])\n```\n\n\n```python\nt,y = simode(5,50)\nplt.figure(figsize=(20,5))\nplt.subplot(1,2,1); plt.plot(t,y)\nplt.subplot(1,2,2); plt.plot(y[:,0], y[:,1])\n```\n\n\n```python\nt,y = simode(10,50)\nplt.figure(figsize=(20,5))\nplt.subplot(1,2,1); plt.plot(t,y)\nplt.subplot(1,2,2); plt.plot(y[:,0], y[:,1])\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "6b84ddfebcb372591419b45fa23a0f74d1869f1f", "size": 394480, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Robotics/ODEsolver/.ipynb_checkpoints/Ordinary Differential Equation-checkpoint.ipynb", "max_stars_repo_name": "zcemycl/ProbabilisticPerspectiveMachineLearning", "max_stars_repo_head_hexsha": "8291bc6cb935c5b5f9a88f7b436e6e42716c21ae", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2019-11-20T10:20:29.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-09T11:15:23.000Z", "max_issues_repo_path": "Capstone/ODE solver/Ordinary Differential Equation.ipynb", "max_issues_repo_name": "kasiv008/Robotics", "max_issues_repo_head_hexsha": "302b3336005acd81202ebbbb0c52a4b2692fa9c7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Capstone/ODE solver/Ordinary Differential Equation.ipynb", "max_forks_repo_name": "kasiv008/Robotics", "max_forks_repo_head_hexsha": "302b3336005acd81202ebbbb0c52a4b2692fa9c7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2020-05-27T03:56:38.000Z", "max_forks_repo_forks_event_max_datetime": "2021-05-02T13:15:42.000Z", "avg_line_length": 669.7453310696, "max_line_length": 102556, "alphanum_fraction": 0.9497490367, "converted": true, "num_tokens": 2313, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9241418116217418, "lm_q2_score": 0.900529795461386, "lm_q1q2_score": 0.8322172365970418}} {"text": "# Practical 1 - Introduction to Jupyter lab and the ```robots``` library\n\n## Introduction to symbolic-numerical computation and the ```sympy``` library\n\nThis document describes the process of obtaining the dynamics of a robot manipulator (a simple pendulum) by means of the Euler-Lagrange equation. Let us begin by importing the necessary libraries:\n\n\n```python\nfrom sympy import var, sin, cos, pi, Matrix, Function, Rational\nfrom sympy.physics.mechanics import mechanics_printing\nmechanics_printing()\n```\n\nOnce we have imported the necessary functions, we can start by defining the variables to be used in the calculation:\n\n\n```python\nvar(\"l1\")\n```\n\nOnce the variables are defined, they can be referred to by the same name:\n\n\n```python\nl1\n```\n\nWe define all the necessary variables at once:\n\n\n```python\nvar(\"m1 J1 t g\")\n```\n\nAnd we define the variables that depend on another variable; specifically, in this calculation everything above is constant and only $q_1$ is a time-dependent variable:\n\n\n```python\nq1 = Function(\"q1\")(t)\n```\n\nWith the variables defined, we can start by writing the position of the center of mass of the first (and only) link:\n\n\n```python\nx1 = l1*cos(q1)\ny1 = l1*sin(q1)\n```\n\n\n```python\nx1\n```\n\n\n```python\ny1\n```\n\nSo, if we need to compute the time derivative of $x_1$, we have to do:\n\n\n```python\nx1.diff(t)\n```\n\nWe compute the square of the translational velocity of the first center of mass:\n\n\n```python\nv1c = x1.diff(t)**2 + y1.diff(t)**2\n```\n\n\n```python\nv1c\n```\n\nAs can be seen, the computed expression is not necessarily simplified completely; we can explicitly ask the symbolic algebra engine to try to simplify it further:\n\n\n```python\nv1c.simplify()\n```\n\nSaving this simplified expression:\n\n\n```python\nv1c = v1c.simplify()\n```\n\nAnd computing the height and the rotational velocity of the link:\n\n\n```python\nh1 = y1\n\u03c91 = q1.diff(t)\n```\n\nComputing the kinetic and potential energy:\n\n\n```python\nK = Rational(1,2)*m1*v1c + Rational(1,2)*J1*\u03c91**2\n```\n\n\n```python\nU = m1*g*h1\n```\n\nWith these energies we can compute the Lagrangian:\n\n\n```python\nL = K - U\n```\n\n\n```python\nL\n```\n\nOnce the Lagrangian has been obtained, we can start taking the derivatives $\\frac{\\partial L}{\\partial \\dot{q}_1}$, $\\frac{d}{dt}\\left( \\frac{\\partial L}{\\partial \\dot{q}_1} \\right)$ and $\\frac{\\partial L}{\\partial q_1}$\n\n\n```python\nL.diff(q1.diff(t))\n```\n\n\n```python\nL.diff(q1.diff(t)).diff(t)\n```\n\n\n```python\nL.diff(q1)\n```\n\nOr, grouping them into the Euler-Lagrange equation:\n\n$$\n\\tau_1 = \n\\frac{d}{dt}\\left( \\frac{\\partial L}{\\partial \\dot{q}_1} \\right) - \\frac{\\partial L}{\\partial q_1}\n$$\n\n\n```python\nL.diff(q1.diff(t)).diff(t) - L.diff(q1)\n```\n\nIn this case, we can use the collect method to factor with respect to certain terms, in this case $\\ddot{q}_1$:\n\n\n```python\n\u03c41 = (L.diff(q1.diff(t)).diff(t) - L.diff(q1)).collect(q1.diff(t).diff(t))\n```\n\n\n```python\n\u03c41\n```\n\nOnce we have completed this process, we can move on to the document called ```numerico.ipynb```\n", "meta": {"hexsha": "4dfd764fc1412064703a50f8c16cf6c8887a1c6b", "size": 
7743, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Practicas/practica1/simbolico.ipynb", "max_stars_repo_name": "robblack007/clase-dinamica-robot", "max_stars_repo_head_hexsha": "f38cb358f2681e9c0dce979acbdcd81bf63bd59c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Practicas/practica1/simbolico.ipynb", "max_issues_repo_name": "robblack007/clase-dinamica-robot", "max_issues_repo_head_hexsha": "f38cb358f2681e9c0dce979acbdcd81bf63bd59c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2016-01-26T18:33:11.000Z", "max_issues_repo_issues_event_max_datetime": "2016-05-30T23:58:07.000Z", "max_forks_repo_path": "Practicas/practica1/simbolico.ipynb", "max_forks_repo_name": "robblack007/clase-dinamica-robot", "max_forks_repo_head_hexsha": "f38cb358f2681e9c0dce979acbdcd81bf63bd59c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 20.1640625, "max_line_length": 225, "alphanum_fraction": 0.5302854191, "converted": true, "num_tokens": 935, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.946596665680527, "lm_q2_score": 0.8791467690927438, "lm_q1q2_score": 0.8321974002669995}} {"text": "# Solution {-}\n\n\n```python\nfrom sympy import Matrix, symbols, sqrt\n\nxA, yA, xB, yB, r1, r2, x, xdot, y, ydot = symbols('xA, yA, xB, yB, r1 r2 x xdot y ydot')\n\n# Measurement equation\nr1 = sqrt((xA - x)**2 + (yA - y)**2)\nr2 = sqrt((xB - x)**2 + (yB - y)**2)\n\n# State vector\nx = Matrix([[x],\n [xdot],\n [y],\n [ydot]])\n\nH = Matrix([[r1],\n [r2]])\ndH = H.jacobian(x)\ndisplay(dH)\n```\n\n\n$\\displaystyle \\left[\\begin{matrix}\\frac{x - xA}{\\sqrt{\\left(- x + xA\\right)^{2} + \\left(- y + yA\\right)^{2}}} & 0 & \\frac{y - yA}{\\sqrt{\\left(- x + xA\\right)^{2} + \\left(- y + yA\\right)^{2}}} & 0\\\\\\frac{x - xB}{\\sqrt{\\left(- x + xB\\right)^{2} + \\left(- y + yB\\right)^{2}}} & 0 & \\frac{y - yB}{\\sqrt{\\left(- x + xB\\right)^{2} + \\left(- y + yB\\right)^{2}}} & 0\\end{matrix}\\right]$\n\n\n\n```python\nfrom numpy import array, exp, sqrt, arange, eye, kron, diag, zeros, column_stack\nfrom numpy.linalg import inv, norm\nfrom lib.vanloan import numeval\nimport matplotlib.pyplot as plt\n\ndt = 1\nsamples = 200\nd = 10000\n\n# Initial standard deviation\nsigmap = sqrt(2000) # Position [meter]\nsigmav = 30 # Velocity [meter/second]\nsigmar = 15 # Range [meter]\n\n# Gauss-Markov process parameters\nbeta = 0.01 # Time constant [rad/second]\nsigma = 30 # [meter/second]\n\n# DME station coordinates (x, y)\ndme1 = [-10000, 0]\ndme2 = [ 10000, 0]\ndme = column_stack([dme1, dme2]) # Make columnvector\n\n# Dynamic matrix (2D)\nF0 = array([[0, 1],\n [0, -beta]])\nF = kron(eye(2), F0)\n\n# White noise coefficients (2D)\nG0 = array([[0],\n [sqrt(2*sigma**2*beta)]])\nG = kron(eye(2), G0)\n\n# Numerical evaluation (Van Loan)\nphi, Q = numeval(F, G, dt)\n\n# Inital state covariance matrix\nP0 = array([[sigmap**2, 0],\n [0, sigmav**2]])\nP = kron(eye(2), P0)\n\n# Initial measurement covariance matrix\nR = sigmar**2*diag([2, 2])\n\n# Linearized design matrix\ndef dH(xs, xnom):\n \n dH = zeros([2, 4])\n \n dx = xs - xnom\n \n dH[0] = [-dx[0, 0]/norm(dx[:, 0]), 1, -dx[1, 0]/norm(dx[:, 0]), 0]\n dH[1] = [-dx[0, 1]/norm(dx[:, 1]), 0, -dx[1, 1]/norm(dx[:, 1]), 
1]\n \n return dH\n \n\n# Initialize plot vectors\nP_all = []\n\n# Main loop\nfor k in range(0, samples):\n \n # Nominal trajectory\n xnom = array([[0],\n [-10000 + 100*k]])\n \n H = dH(dme, xnom)\n \n # Time update\n Pp = phi@P@phi.T + Q\n \n # Design matrix\n #h11 = d/sqrt(d**2 + ynom**2)\n #h12 = ynom/sqrt(d**2 + ynom**2)\n #h21 = -d/sqrt(d**2 + ynom**2)\n #h22 = ynom/sqrt(d**2 + ynom**2)\n #\n #H = array([[h11, 1, h12, 0],\n # [h21, 0, h22, 1]])\n \n # Measurement update\n K = Pp@H.T@inv(H@Pp@H.T + R)\n P = (eye(4) - K@H)@Pp\n \n # Accumulate plot vectors\n P_all.append(P)\n\n# Time vector\ntime = arange(0, samples)\n\n# Extract plot vectors\nstd_x = [sqrt(P[0, 0]) for P in P_all] # Standard deviation x-direction\nstd_y = [sqrt(P[2, 2]) for P in P_all] # Standard deviation y-direction\n\n# Plot results\nplt.title('Error Analysis')\nplt.plot(time, std_x, 'r', label='std_x')\nplt.plot(time, std_y, 'g', label='std_y')\nplt.xlabel('Time [second]')\nplt.ylabel('Standard deviation [meter]')\nplt.legend()\nplt.grid()\nplt.show()\n```\n\n### Comment\nThe increase in variance in y-direction is due to the lack of observability of the y-component of from the range measurements close to the origin. However, the geometry of the x-component is good throughout the whole flight.\n", "meta": {"hexsha": "ff87c18b7a71337a557f25963b98e1a4a077d412", "size": 28950, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Problem 7.4.ipynb", "max_stars_repo_name": "mfkiwl/GMPE340", "max_stars_repo_head_hexsha": "3602b8ba859a2c7db2cab96862472597dc1ac793", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Problem 7.4.ipynb", "max_issues_repo_name": "mfkiwl/GMPE340", "max_issues_repo_head_hexsha": "3602b8ba859a2c7db2cab96862472597dc1ac793", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Problem 7.4.ipynb", "max_forks_repo_name": "mfkiwl/GMPE340", "max_forks_repo_head_hexsha": "3602b8ba859a2c7db2cab96862472597dc1ac793", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 137.2037914692, "max_line_length": 22776, "alphanum_fraction": 0.8573747841, "converted": true, "num_tokens": 1237, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9465966732132748, "lm_q2_score": 0.8791467611766711, "lm_q1q2_score": 0.8321973993960623}} {"text": "# charging of capacitor\n\n\\begin{equation}\n\\frac{Q}{C}+R\\frac{dQ}{dt}=V\n\\end{equation}\n\n\\begin{equation}\n\\frac{dQ}{dt}=\\frac{V}{R}-\\frac{Q}{CR}\n\\end{equation}\n\n\n```python\nimport numpy as np\nfrom scipy.integrate import odeint\nimport matplotlib.pyplot as plt\n```\n\n\n```python\n# function that returns dy/dt\ndef model(y,t,C,R,V):\n dydt = V/R-y/(C*R)\n return dydt\n```\n\n\n```python\nV=10 #V\nC=1e-6 #F\nR1=100 #ohm\nR2 = 200\nR3=400\n```\n\n\n```python\n# time points\nt = np.arange(0,0.002,0.0001) #\n```\n\n\n```python\n# Charging\n# initial condition\ny0 = 0\ny1 = odeint(model,y0,t,args=(C,R1,V))\ny2 = odeint(model,y0,t,args=(C,R2,V))\ny3 = odeint(model,y0,t,args=(C,R3,V,))\n\n# plot results\nplt.plot(t,y1,'r-',linewidth=2,label='R1=100$\\Omega$')\nplt.plot(t,y2,'b--',linewidth=2,label='R2=200$\\Omega$')\nplt.plot(t,y3,'g:',linewidth=2,label='R3=300$\\Omega$')\nplt.xlabel('time(sec)')\nplt.xticks(rotation=45)\nplt.ylabel('Q(C)')\nplt.legend()\nplt.show()\n\n```\n\n\n```python\n#discharging\n\ny0 = V*C\ny11 = odeint(model,y0,t,args=(C,R1,0))\ny12 = odeint(model,y0,t,args=(C,R2,0))\ny13 = odeint(model,y0,t,args=(C,R3,0,))\n\n# plot results\nplt.plot(t,y11,'r-',linewidth=2,label='R1=100$\\Omega$')\nplt.plot(t,y12,'b--',linewidth=2,label='R2=200$\\Omega$')\nplt.plot(t,y13,'g:',linewidth=2,label='R3=300$\\Omega$')\nplt.xlabel('time(sec)')\nplt.xticks(rotation=45)\nplt.ylabel('Q(C)')\nplt.legend()\nplt.show()\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "30d52066e1f08968e50f7785537a2236dc457c60", "size": 56410, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "ode1.ipynb", "max_stars_repo_name": "AmbaPant/NPS", "max_stars_repo_head_hexsha": "0500f39f6708388d5c3f2b8d3e5ee5e56a1f646f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-09-16T03:21:55.000Z", "max_stars_repo_stars_event_max_datetime": "2020-09-16T03:21:55.000Z", "max_issues_repo_path": "ode1.ipynb", "max_issues_repo_name": "AmbaPant/NPS", "max_issues_repo_head_hexsha": "0500f39f6708388d5c3f2b8d3e5ee5e56a1f646f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ode1.ipynb", "max_forks_repo_name": "AmbaPant/NPS", "max_forks_repo_head_hexsha": "0500f39f6708388d5c3f2b8d3e5ee5e56a1f646f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2020-08-10T12:17:21.000Z", "max_forks_repo_forks_event_max_datetime": "2020-09-13T14:31:02.000Z", "avg_line_length": 316.9101123596, "max_line_length": 26472, "alphanum_fraction": 0.9353483425, "converted": true, "num_tokens": 519, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9539660989095221, "lm_q2_score": 0.8723473779969193, "lm_q1q2_score": 0.8321898250816714}} {"text": "# Assignment 1\n\nThe goal of this assignment is to supply you with machine learning models and algorithms. In this notebook, we will cover linear and nonlinear models, the concept of loss functions and some optimization techniques. All mathematical operations should be implemented in **NumPy** only. \n\n\n## Table of contents\n* [1. 
Logistic Regression](#1.-Logistic-Regression)\n * [1.1 Linear Mapping](#1.1-Linear-Mapping)\n * [1.2 Sigmoid](#1.2-Sigmoid)\n * [1.3 Negative Log Likelihood](#1.3-Negative-Log-Likelihood)\n * [1.4 Model](#1.4-Model)\n * [1.5 Simple Experiment](#1.5-Simple-Experiment)\n* [2. Decision Tree](#2.-Decision-Tree)\n * [2.1 Gini Index & Data Split](#2.1-Gini-Index-&-Data-Split)\n * [2.2 Terminal Node](#2.2-Terminal-Node)\n * [2.3 Build the Decision Tree](#2.3-Build-the-Decision-Tree)\n* [3. Experiments](#3.-Experiments)\n * [3.1 Decision Tree for Heart Disease Prediction](#3.1-Decision-Tree-for-Heart-Disease-Prediction) \n * [3.2 Logistic Regression for Heart Disease Prediction](#3.2-Logistic-Regression-for-Heart-Disease-Prediction)\n\n### Note\nSome of the concepts below have not (yet) been discussed during the lecture. These will be discussed further during the next lectures. \n\n### Before you begin\n\nTo check whether the code you've written is correct, we'll use **automark**. For this, we created for each of you an account with the username being your student number. \n\n\n```python\nimport automark as am\n\n# fill in you student number as your username\nusername = 'Your Username'\n\n# to check your progress, you can run this function\nam.get_progress(username)\n```\n\nSo far all your tests are 'not attempted'. At the end of this notebook you'll need to have completed all test. The output of `am.get_progress(username)` should at least match the example below. However, we encourage you to take a shot at the 'not attempted' tests!\n\n```\n---------------------------------------------\n| Your name / student number |\n| your_email@your_domain.whatever |\n---------------------------------------------\n| linear_forward | not attempted |\n| linear_grad_W | not attempted |\n| linear_grad_b | not attempted |\n| nll_forward | not attempted |\n| nll_grad_input | not attempted |\n| sigmoid_forward | not attempted |\n| sigmoid_grad_input | not attempted |\n| tree_data_split_left | not attempted |\n| tree_data_split_right | not attempted |\n| tree_gini_index | not attempted |\n| tree_to_terminal | not attempted |\n---------------------------------------------\n```\n\n\n```python\nfrom __future__ import print_function, absolute_import, division # You don't need to know what this is. \nimport numpy as np # this imports numpy, which is used for vector- and matrix calculations\n```\n\nThis notebook makes use of **classes** and their **instances** that we have already implemented for you. It allows us to write less code and make it more readable. If you are interested in it, here are some useful links:\n* The official [documentation](https://docs.python.org/3/tutorial/classes.html) \n* Video by *sentdex*: [Object Oriented Programming Introduction](https://www.youtube.com/watch?v=ekA6hvk-8H8)\n* Antipatterns in OOP: [Stop Writing Classes](https://www.youtube.com/watch?v=o9pEzgHorH0)\n\n# 1. Logistic Regression\n\nWe start with a very simple algorithm called **Logistic Regression**. It is a generalized linear model for 2-class classification.\nIt can be generalized to the case of many classes and to non-linear cases as well. However, here we consider only the simplest case. \n\nLet us consider a data with 2 classes. Class 0 and class 1. For a given test sample, logistic regression returns a value from $[0, 1]$ which is interpreted as a probability of belonging to class 1. The set of points for which the prediction is $0.5$ is called a *decision boundary*. 
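\n\nFor intuition, suppose a fitted model returned the following (made-up) probabilities of belonging to class 1 for four test samples; thresholding them at $0.5$ turns the probabilities into class labels:\n\n\n```python\nimport numpy as np\n\n# Hypothetical model outputs, invented purely for illustration\np_class1 = np.array([0.92, 0.31, 0.50, 0.77])\nlabels = (p_class1 > 0.5).astype(int)\nprint(labels)  # [1 0 0 1] -- the third sample sits exactly on the 0.5 threshold\n```\n\nGeometrically, the set of inputs for which the model outputs exactly $0.5$ forms that boundary. 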
It is a line on a plane or a hyper-plane in a space.\n\n\n\nLogistic regression has two trainable parameters: a weight $W$ and a bias $b$. For a vector of features $X$, the prediction of logistic regression is given by\n\n$$\nf(X) = \\frac{1}{1 + \\exp(-[XW + b])} = \\sigma(h(X))\n$$\nwhere $\\sigma(z) = \\frac{1}{1 + \\exp(-z)}$ and $h(X)=XW + b$.\n\nParameters $W$ and $b$ are fitted by maximizing the log-likelihood (or minimizing the negative log-likelihood) of the model on the training data. For a training subset $\\{X_j, Y_j\\}_{j=1}^N$ the normalized negative log likelihood (NLL) is given by \n\n$$\n\\mathcal{L} = -\\frac{1}{N}\\sum_j \\log\\Big[ f(X_j)^{Y_j} \\cdot (1-f(X_j))^{1-Y_j}\\Big]\n= -\\frac{1}{N}\\sum_j \\Big[ Y_j\\log f(X_j) + (1-Y_j)\\log(1-f(X_j))\\Big]\n$$\n\nThere are different ways of fitting this model. In this assignment we consider Logistic Regression as a one-layer neural network. We use the following algorithm for the **forward** pass:\n\n1. Linear mapping: $h=XW + b$\n2. Sigmoid activation function: $f=\\sigma(h)$\n3. Calculation of NLL: $\\mathcal{L} = -\\frac{1}{N}\\sum_j \\Big[ Y_j\\log f_j + (1-Y_j)\\log(1-f_j)\\Big]$\n\nIn order to fit $W$ and $b$ we perform Gradient Descent ([GD](https://en.wikipedia.org/wiki/Gradient_descent)). We choose a small learning rate $\\gamma$ and after each computation of forward pass, we update the parameters \n\n$$W_{\\text{new}} = W_{\\text{old}} - \\gamma \\frac{\\partial \\mathcal{L}}{\\partial W}$$\n\n$$b_{\\text{new}} = b_{\\text{old}} - \\gamma \\frac{\\partial \\mathcal{L}}{\\partial b}$$\n\nWe use Backpropagation method ([BP](https://en.wikipedia.org/wiki/Backpropagation)) to calculate the partial derivatives of the loss function with respect to the parameters of the model.\n\n$$\n\\frac{\\partial\\mathcal{L}}{\\partial W} = \n\\frac{\\partial\\mathcal{L}}{\\partial h} \\frac{\\partial h}{\\partial W} =\n\\frac{\\partial\\mathcal{L}}{\\partial f} \\frac{\\partial f}{\\partial h} \\frac{\\partial h}{\\partial W}\n$$\n\n$$\n\\frac{\\partial\\mathcal{L}}{\\partial b} = \n\\frac{\\partial\\mathcal{L}}{\\partial h} \\frac{\\partial h}{\\partial b} =\n\\frac{\\partial\\mathcal{L}}{\\partial f} \\frac{\\partial f}{\\partial h} \\frac{\\partial h}{\\partial b}\n$$\n\n## 1.1 Linear Mapping\nFirst of all, you need to implement the forward pass of a linear mapping:\n$$\nh(X) = XW +b\n$$\n\n**Note**: here we use `n_out` as the dimensionality of the output. For logisitc regression `n_out = 1`. However, we will work with cases of `n_out > 1` in next assignments. You will **pass** the current assignment even if your implementation works only in case `n_out = 1`. If your implementation works for the cases of `n_out > 1` then you will not have to modify your method next week. All **numpy** operations are generic. It is recommended to use numpy when is it possible.\n\n\n```python\ndef linear_forward(x_input, W, b):\n \"\"\"Perform the mapping of the input\n # Arguments\n x_input: input of the linear function - np.array of size `(n_objects, n_in)`\n W: np.array of size `(n_in, n_out)`\n b: np.array of size `(n_out,)`\n # Output\n the output of the linear function \n np.array of size `(n_objects, n_out)`\n \"\"\"\n #################\n ### YOUR CODE ###\n #################\n return output\n```\n\nLet's check your first function. 
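\n\nA quick way to sanity-check any implementation of this mapping is to compare it against NumPy's built-in matrix product, since $XW + b$ is just a matrix multiplication followed by broadcasting the bias over the rows. The snippet below is only an illustration with made-up numbers, not part of the graded exercise:\n\n\n```python\nimport numpy as np\n\n# Hypothetical toy inputs, chosen only for this illustration\nX_check = np.array([[1.0, -1.0],\n                    [2.0, 0.5]])  # (n_objects, n_in) = (2, 2)\nW_check = np.array([[4.0],\n                    [2.0]])       # (n_in, n_out) = (2, 1)\nb_check = np.array([3.0])         # (n_out,) = (1,)\n\n# Reference result: matrix product plus broadcasted bias\nreference = X_check @ W_check + b_check\nprint(reference)  # shape (2, 1)\n\n# Your linear_forward should reproduce this, e.g.\n# assert np.allclose(linear_forward(X_check, W_check, b_check), reference)\n```\n\nWith that in mind, let's verify the function on a small worked example. 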
We set the matrices $X, W, b$:\n$$\nX = \\begin{bmatrix}\n1 & -1 \\\\\n-1 & 0 \\\\\n1 & 1 \\\\\n\\end{bmatrix} \\quad\nW = \\begin{bmatrix}\n4 \\\\\n2 \\\\\n\\end{bmatrix} \\quad\nb = \\begin{bmatrix}\n3 \\\\\n\\end{bmatrix}\n$$\n\nAnd then compute \n$$\nXW = \\begin{bmatrix}\n1 & -1 \\\\\n-1 & 0 \\\\\n1 & 1 \\\\\n\\end{bmatrix}\n\\begin{bmatrix}\n4 \\\\\n2 \\\\\n\\end{bmatrix} =\n\\begin{bmatrix}\n2 \\\\\n-4 \\\\\n6 \\\\\n\\end{bmatrix} \\\\\nXW + b = \n\\begin{bmatrix}\n5 \\\\\n-1 \\\\\n9 \\\\\n\\end{bmatrix} \n$$\n\n\n```python\nX_test = np.array([[1, -1],\n [-1, 0],\n [1, 1]])\n\nW_test = np.array([[4],\n [2]])\n\nb_test = np.array([3])\n\nh_test = linear_forward(X_test, W_test, b_test)\nprint(h_test)\n```\n\n\n```python\nam.test_student_function(username, linear_forward, ['x_input', 'W', 'b'])\n```\n\nNow you need to implement the calculation of the partial derivative of the loss function with respect to the parameters of the model. As this expressions are used for the updates of the parameters, we refer to them as gradients.\n$$\n\\frac{\\partial \\mathcal{L}}{\\partial W} = \n\\frac{\\partial \\mathcal{L}}{\\partial h}\n\\frac{\\partial h}{\\partial W} \\\\\n\\frac{\\partial \\mathcal{L}}{\\partial b} = \n\\frac{\\partial \\mathcal{L}}{\\partial h}\n\\frac{\\partial h}{\\partial b} \\\\\n$$\n\n\n```python\ndef linear_grad_W(x_input, grad_output, W, b):\n \"\"\"Calculate the partial derivative of \n the loss with respect to W parameter of the function\n dL / dW = (dL / dh) * (dh / dW)\n # Arguments\n x_input: input of a dense layer - np.array of size `(n_objects, n_in)`\n grad_output: partial derivative of the loss functions with \n respect to the ouput of the dense layer (dL / dh)\n np.array of size `(n_objects, n_out)`\n W: np.array of size `(n_in, n_out)`\n b: np.array of size `(n_out,)`\n # Output\n the partial derivative of the loss \n with respect to W parameter of the function\n np.array of size `(n_in, n_out)`\n \"\"\"\n #################\n ### YOUR CODE ###\n #################\n return grad_W\n```\n\n\n```python\nam.test_student_function(username, linear_grad_W, ['x_input', 'grad_output', 'W', 'b'])\n```\n\n\n```python\ndef linear_grad_b(x_input, grad_output, W, b):\n \"\"\"Calculate the partial derivative of \n the loss with respect to b parameter of the function\n dL / db = (dL / dh) * (dh / db)\n # Arguments\n x_input: input of a dense layer - np.array of size `(n_objects, n_in)`\n grad_output: partial derivative of the loss functions with \n respect to the ouput of the linear function (dL / dh)\n np.array of size `(n_objects, n_out)`\n W: np.array of size `(n_in, n_out)`\n b: np.array of size `(n_out,)`\n # Output\n the partial derivative of the loss \n with respect to b parameter of the linear function\n np.array of size `(n_out,)`\n \"\"\"\n #################\n ### YOUR CODE ###\n #################\n return grad_b\n```\n\n\n```python\nam.test_student_function(username, linear_grad_b, ['x_input', 'grad_output', 'W', 'b'])\n```\n\n\n```python\nam.get_progress(username)\n```\n\n## 1.2 Sigmoid\n$$\nf = \\sigma(h) = \\frac{1}{1 + e^{-h}} \n$$\n\nSigmoid function is applied element-wise. 
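\n\nTo see what element-wise means in practice, here is a small illustration (the numbers are arbitrary):\n\n\n```python\nimport numpy as np\n\nh_demo = np.array([[-2.0, 0.0, 2.0],\n                   [1.0, -1.0, 3.0]])  # shape (2, 3)\n\n# The same scalar formula 1 / (1 + exp(-h)) is applied to every entry\nf_demo = 1.0 / (1.0 + np.exp(-h_demo))\nprint(f_demo.shape)  # (2, 3) -- same shape as the input\nprint(f_demo)        # every value lies strictly between 0 and 1\n```\n\nIn other words, the sigmoid just pushes every entry of the input array through the same scalar formula. 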
It does not change the dimensionality of the tensor and its implementation is shape-agnostic in general.\n\n\n```python\ndef sigmoid_forward(x_input):\n \"\"\"sigmoid nonlinearity\n # Arguments\n x_input: np.array of size `(n_objects, n_in)`\n # Output\n the output of relu layer\n np.array of size `(n_objects, n_in)`\n \"\"\"\n #################\n ### YOUR CODE ###\n #################\n return output\n```\n\n\n```python\nam.test_student_function(username, sigmoid_forward, ['x_input'])\n```\n\nNow you need to implement the calculation of the partial derivative of the loss function with respect to the input of sigmoid. \n\n$$\n\\frac{\\partial \\mathcal{L}}{\\partial h} = \n\\frac{\\partial \\mathcal{L}}{\\partial f}\n\\frac{\\partial f}{\\partial h} \n$$\n\nTensor $\\frac{\\partial \\mathcal{L}}{\\partial f}$ comes from the loss function. Let's calculate $\\frac{\\partial f}{\\partial h}$\n\n$$\n\\frac{\\partial f}{\\partial h} = \n\\frac{\\partial \\sigma(h)}{\\partial h} =\n\\frac{\\partial}{\\partial h} \\Big(\\frac{1}{1 + e^{-h}}\\Big)\n= \\frac{e^{-h}}{(1 + e^{-h})^2}\n= \\frac{1}{1 + e^{-h}} \\frac{e^{-h}}{1 + e^{-h}}\n= f(h) (1 - f(h))\n$$\n\nTherefore, in order to calculate the gradient of the loss with respect to the input of sigmoid function you need \nto \n1. calculate $f(h) (1 - f(h))$ \n2. multiply it element-wise by $\\frac{\\partial \\mathcal{L}}{\\partial f}$\n\n\n```python\ndef sigmoid_grad_input(x_input, grad_output):\n \"\"\"sigmoid nonlinearity gradient. \n Calculate the partial derivative of the loss \n with respect to the input of the layer\n # Arguments\n x_input: np.array of size `(n_objects, n_in)`\n grad_output: np.array of size `(n_objects, n_in)` \n dL / df\n # Output\n the partial derivative of the loss \n with respect to the input of the function\n np.array of size `(n_objects, n_in)` \n dL / dh\n \"\"\"\n #################\n ### YOUR CODE ###\n #################\n return grad_input\n```\n\n\n```python\nam.test_student_function(username, sigmoid_grad_input, ['x_input', 'grad_output'])\n```\n\n## 1.3 Negative Log Likelihood\n\n$$\n\\mathcal{L} \n= -\\frac{1}{N}\\sum_j \\Big[ Y_j\\log \\dot{Y}_j + (1-Y_j)\\log(1-\\dot{Y}_j)\\Big]\n$$\n\nHere $N$ is the number of objects. 
$Y_j$ is the real label of an object and $\\dot{Y}_j$ is the predicted one.\n\n\n```python\ndef nll_forward(target_pred, target_true):\n \"\"\"Compute the value of NLL\n for a given prediction and the ground truth\n # Arguments\n target_pred: predictions - np.array of size `(n_objects, 1)`\n target_true: ground truth - np.array of size `(n_objects, 1)`\n # Output\n the value of NLL for a given prediction and the ground truth\n scalar\n \"\"\"\n #################\n ### YOUR CODE ###\n ################# \n return output\n```\n\n\n```python\nam.test_student_function(username, nll_forward, ['target_pred', 'target_true'])\n```\n\nNow you need to calculate the partial derivative of NLL with with respect to its input.\n\n$$\n\\frac{\\partial \\mathcal{L}}{\\partial \\dot{Y}}\n=\n\\begin{pmatrix}\n\\frac{\\partial \\mathcal{L}}{\\partial \\dot{Y}_0} \\\\\n\\frac{\\partial \\mathcal{L}}{\\partial \\dot{Y}_1} \\\\\n\\vdots \\\\\n\\frac{\\partial \\mathcal{L}}{\\partial \\dot{Y}_N}\n\\end{pmatrix}\n$$\n\nLet's do it step-by-step\n\n\\begin{equation}\n\\begin{split}\n\\frac{\\partial \\mathcal{L}}{\\partial \\dot{Y}_0} \n&= \\frac{\\partial}{\\partial \\dot{Y}_0} \\Big(-\\frac{1}{N}\\sum_j \\Big[ Y_j\\log \\dot{Y}_j + (1-Y_j)\\log(1-\\dot{Y}_j)\\Big]\\Big) \\\\\n&= -\\frac{1}{N} \\frac{\\partial}{\\partial \\dot{Y}_0} \\Big(Y_0\\log \\dot{Y}_0 + (1-Y_0)\\log(1-\\dot{Y}_0)\\Big) \\\\\n&= -\\frac{1}{N} \\Big(\\frac{Y_0}{\\dot{Y}_0} - \\frac{1-Y_0}{1-\\dot{Y}_0}\\Big)\n= \\frac{1}{N} \\frac{\\dot{Y}_0 - Y_0}{\\dot{Y}_0 (1 - \\dot{Y}_0)}\n\\end{split}\n\\end{equation}\n\nAnd for the other components it can be done in exactly the same way. So the result is the vector where each component is given by \n$$\\frac{1}{N} \\frac{\\dot{Y}_j - Y_j}{\\dot{Y}_j (1 - \\dot{Y}_j)}$$\n\nOr if we assume all multiplications and divisions to be done element-wise the output can be calculated as\n$$\n\\frac{\\partial \\mathcal{L}}{\\partial \\dot{Y}} = \\frac{1}{N} \\frac{\\dot{Y} - Y}{\\dot{Y} (1 - \\dot{Y})}\n$$\n\n\n```python\ndef nll_grad_input(target_pred, target_true):\n \"\"\"Compute the partial derivative of NLL\n with respect to its input\n # Arguments\n target_pred: predictions - np.array of size `(n_objects, 1)`\n target_true: ground truth - np.array of size `(n_objects, 1)`\n # Output\n the partial derivative \n of NLL with respect to its input\n np.array of size `(n_objects, 1)`\n \"\"\"\n #################\n ### YOUR CODE ###\n ################# \n return grad_input\n```\n\n\n```python\nam.test_student_function(username, nll_grad_input, ['target_pred', 'target_true'])\n```\n\n\n```python\nam.get_progress(username)\n```\n\n## 1.4 Model\n\nHere we provide a model for your. 
It consist of the function which you have implmeneted above\n\n\n```python\nclass LogsticRegressionGD(object):\n \n def __init__(self, n_in, lr=0.05):\n super().__init__()\n self.lr = lr\n self.b = np.zeros(1, )\n self.W = np.random.randn(n_in, 1)\n \n def forward(self, x):\n self.h = linear_forward(x, self.W, self.b)\n y = sigmoid_forward(self.h)\n return y\n \n def update_params(self, x, nll_grad):\n # compute gradients\n grad_h = sigmoid_grad_input(self.h, nll_grad)\n grad_W = linear_grad_W(x, grad_h, self.W, self.b)\n grad_b = linear_grad_b(x, grad_h, self.W, self.b)\n # update params\n self.W = self.W - self.lr * grad_W\n self.b = self.b - self.lr * grad_b\n```\n\n## 1.5 Simple Experiment\n\n\n```python\nimport matplotlib.pyplot as plt\n%matplotlib inline\n```\n\n\n```python\n# Generate some data\ndef generate_2_circles(N=100):\n phi = np.linspace(0.0, np.pi * 2, 100)\n X1 = 1.1 * np.array([np.sin(phi), np.cos(phi)])\n X2 = 3.0 * np.array([np.sin(phi), np.cos(phi)])\n Y = np.concatenate([np.ones(N), np.zeros(N)]).reshape((-1, 1))\n X = np.hstack([X1,X2]).T\n return X, Y\n\n\ndef generate_2_gaussians(N=100):\n phi = np.linspace(0.0, np.pi * 2, 100)\n X1 = np.random.normal(loc=[1, 2], scale=[2.5, 0.9], size=(N, 2))\n X1 = X1.dot(np.array([[0.7, -0.7], [0.7, 0.7]]))\n X2 = np.random.normal(loc=[-2, 0], scale=[1, 1.5], size=(N, 2))\n X2 = X2.dot(np.array([[0.7, 0.7], [-0.7, 0.7]]))\n Y = np.concatenate([np.ones(N), np.zeros(N)]).reshape((-1, 1))\n X = np.vstack([X1,X2])\n return X, Y\n\ndef split(X, Y, train_ratio=0.7):\n size = len(X)\n train_size = int(size * train_ratio)\n indices = np.arange(size)\n np.random.shuffle(indices)\n train_indices = indices[:train_size]\n test_indices = indices[train_size:]\n return X[train_indices], Y[train_indices], X[test_indices], Y[test_indices]\n\nf, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))\n\n\nX, Y = generate_2_circles()\nax1.scatter(X[:,0], X[:,1], c=Y.ravel(), edgecolors= 'none')\nax1.set_aspect('equal')\n\n\nX, Y = generate_2_gaussians()\nax2.scatter(X[:,0], X[:,1], c=Y.ravel(), edgecolors= 'none')\nax2.set_aspect('equal')\n\n```\n\n\n```python\nX_train, Y_train, X_test, Y_test = split(*generate_2_gaussians(), 0.7)\n```\n\n\n```python\n# let's train our model\nmodel = LogsticRegressionGD(2, 0.05)\n\nfor step in range(30):\n Y_pred = model.forward(X_train)\n \n loss_value = nll_forward(Y_pred, Y_train)\n accuracy = ((Y_pred > 0.5) == Y_train).mean()\n print('Step: {} \\t Loss: {:.3f} \\t Acc: {:.1f}%'.format(step, loss_value, accuracy * 100))\n \n loss_grad = nll_grad_input(Y_pred, Y_train)\n model.update_params(X_train, loss_grad)\n\n \nprint('\\n\\nTesting...')\nY_test_pred = model.forward(X_test)\ntest_accuracy = ((Y_test_pred > 0.5) == Y_test).mean()\nprint('Acc: {:.1f}%'.format(test_accuracy * 100))\n```\n\n\n```python\ndef plot_model_prediction(prediction_func, X, Y, hard=True):\n u_min = X[:, 0].min()-1\n u_max = X[:, 0].max()+1\n v_min = X[:, 1].min()-1\n v_max = X[:, 1].max()+1\n\n U, V = np.meshgrid(np.linspace(u_min, u_max, 100), np.linspace(v_min, v_max, 100))\n UV = np.stack([U.ravel(), V.ravel()]).T\n c = prediction_func(UV).ravel()\n if hard:\n c = c > 0.5\n plt.scatter(UV[:,0], UV[:,1], c=c, edgecolors= 'none', alpha=0.15)\n plt.scatter(X[:,0], X[:,1], c=Y.ravel(), edgecolors= 'black')\n plt.xlim(left=u_min, right=u_max)\n plt.ylim(bottom=v_min, top=v_max)\n plt.axes().set_aspect('equal')\n plt.show()\n \nplot_model_prediction(lambda x: model.forward(x), X_train, Y_train, False)\n\nplot_model_prediction(lambda x: 
model.forward(x), X_train, Y_train, True)\n```\n\n\n```python\n# Now run the same experiment on 2 circles\n```\n\n# 2. Decision Tree\nThe next model we look at is called **Decision Tree**. This type of model is non-parametric, meaning in contrast to **Logistic Regression** we do not have any parameters here that need to be trained.\n\nLet us consider a simple binary decision tree for deciding on the two classes of \"creditable\" and \"Not creditable\".\n\n\n\nEach node, except the leafs, asks a question about the the client in question. A decision is made by going from the root node to a leaf node, while considering the clients situation. The situation of the client, in this case, is fully described by the features:\n1. Checking account balance\n2. Duration of requested credit\n3. Payment status of previous loan\n4. Length of current employment\n\nIn order to build a decision tree we need training data. To carry on the previous example: we need a number of clients for which we know the properties 1.-4. and their creditability.\nThe process of building a decision tree starts with the root node and involves the following steps:\n1. Choose a splitting criteria and add it to the current node.\n2. Split the dataset at the current node into those that fullfil the criteria and those that do not.\n3. Add a child node for each data split.\n4. For each child node decide on either A. or B.:\n 1. Repeat from 1. step\n 2. Make it a leaf node: The predicted class label is decided by the majority vote over the training data in the current split.\n\n## 2.1 Gini Index & Data Split\nDeciding on how to split your training data at each node is dominated by the following two criterias:\n1. Does the rule help me make a final decision?\n2. Is the rule general enough such that it applies not only to my training data, but also to new unseen examples?\n\nWhen considering our previous example, splitting the clients by their handedness would not help us deciding on their creditability. Knowning if a rule will generalize is usually a hard call to make, but in practice we rely on [Occam's razor](https://en.wikipedia.org/wiki/Occam%27s_razor) principle. Thus the less rules we use, the better we believe it to generalize to previously unseen examples.\n\nOne way to measure the quality of a rule is by the [**Gini Index**](https://en.wikipedia.org/wiki/Gini_coefficient).\nSince we only consider binary classification, it is calculated by:\n$$\nGini = \\sum_{n\\in\\{L,R\\}}\\frac{|S_n|}{|S|}\\left( 1 - \\sum_{c \\in C} p_{S_n}(c)^2\\right)\\\\\np_{S_n}(c) = \\frac{|\\{\\mathbf{x}_{i}\\in \\mathbf{X}|y_{i} = c, i \\in S_n\\}|}{|S_n|}, n \\in \\{L, R\\}\n$$\nwith $|C|=2$ being your set of class labels and $S_L$ and $S_R$ the two splits determined by the splitting criteria.\nThe lower the gini score, the better the split. 
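\n\nTo make the formula concrete, here is a hand-worked toy example (the labels are invented purely for illustration) that follows the weighted-impurity definition above:\n\n\n```python\nimport numpy as np\n\n# Hypothetical split: the left side is mostly class 0, the right side is purely class 1\nY_left_demo = np.array([0, 0, 1])\nY_right_demo = np.array([1, 1])\n\nn_total = len(Y_left_demo) + len(Y_right_demo)\ngini_demo = 0.0\nfor side in (Y_left_demo, Y_right_demo):\n    # class proportions p_{S_n}(c) within this side\n    proportions = np.array([np.mean(side == c) for c in (0, 1)])\n    impurity = 1.0 - np.sum(proportions ** 2)\n    gini_demo += (len(side) / n_total) * impurity\n\nprint(round(gini_demo, 4))  # ~0.2667: only the impure left side contributes\n```\n\nThe purer a side is, the less it adds to the score. 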
In the extreme case, where all class labels are the same in each split respectively, the gini index takes the value of $0$.\n\n\n```python\ndef tree_gini_index(Y_left, Y_right, classes):\n \"\"\"Compute the Gini Index.\n # Arguments\n Y_left: class labels of the data left set\n np.array of size `(n_objects, 1)`\n Y_right: class labels of the data right set\n np.array of size `(n_objects, 1)`\n classes: list of all class values\n # Output\n gini: scalar `float`\n \"\"\"\n gini = 0.0\n #################\n ### YOUR CODE ###\n #################\n return gini\n```\n\n\n```python\nam.test_student_function(username, tree_gini_index, ['Y_left', 'Y_right', 'classes'])\n```\n\nAt each node in the tree, the data is split according to a split criterion and each split is passed onto the left/right child respectively.\nImplement the following function to return all rows in `X` and `Y` such that the left child gets all examples that are less than the split value and vice versa. \n\n\n```python\ndef tree_split_data_left(X, Y, feature_index, split_value):\n \"\"\"Split the data `X` and `Y`, at the feature indexed by `feature_index`.\n If the value is less than `split_value` then return it as part of the left group.\n \n # Arguments\n X: np.array of size `(n_objects, n_in)`\n Y: np.array of size `(n_objects, 1)`\n feature_index: index of the feature to split at \n split_value: value to split between\n # Output\n (XY_left): np.array of size `(n_objects_left, n_in + 1)`\n \"\"\"\n X_left, Y_left = None, None\n #################\n ### YOUR CODE ###\n #################\n XY_left = np.concatenate([X_left, Y_left], axis=-1)\n return XY_left\n\n\ndef tree_split_data_right(X, Y, feature_index, split_value):\n \"\"\"Split the data `X` and `Y`, at the feature indexed by `feature_index`.\n If the value is greater or equal than `split_value` then return it as part of the right group.\n \n # Arguments\n X: np.array of size `(n_objects, n_in)`\n Y: np.array of size `(n_objects, 1)`\n feature_index: index of the feature to split at\n split_value: value to split between\n # Output\n (XY_left): np.array of size `(n_objects_left, n_in + 1)`\n \"\"\"\n X_right, Y_right = None, None\n #################\n ### YOUR CODE ###\n #################\n XY_right = np.concatenate([X_right, Y_right], axis=-1)\n return XY_right\n```\n\n\n```python\nam.test_student_function(username, tree_split_data_left, ['X', 'Y', 'feature_index', 'split_value'])\n```\n\n\n```python\nam.test_student_function(username, tree_split_data_right, ['X', 'Y', 'feature_index', 'split_value'])\n```\n\n\n```python\nam.get_progress(username)\n```\n\nNow to find the split rule with the lowest gini score, we brute-force search over all features and values to split by.\n\n\n```python\ndef tree_best_split(X, Y):\n class_values = list(set(Y.flatten().tolist()))\n r_index, r_value, r_score = float(\"inf\"), float(\"inf\"), float(\"inf\")\n r_XY_left, r_XY_right = (X,Y), (X,Y)\n for feature_index in range(X.shape[1]):\n for row in X:\n XY_left = tree_split_data_left(X, Y, feature_index, row[feature_index])\n XY_right = tree_split_data_right(X, Y, feature_index, row[feature_index])\n XY_left, XY_right = (XY_left[:,:-1], XY_left[:,-1:]), (XY_right[:,:-1], XY_right[:,-1:])\n gini = tree_gini_index(XY_left[1], XY_right[1], class_values)\n if gini < r_score:\n r_index, r_value, r_score = feature_index, row[feature_index], gini\n r_XY_left, r_XY_right = XY_left, XY_right\n return {'index':r_index, 'value':r_value, 'XY_left': r_XY_left, 'XY_right':r_XY_right}\n```\n\n## 2.2 
Terminal Node\nThe leaf nodes predict the label of an unseen example, by taking a majority vote over all training class labels in that node.\n\n\n```python\ndef tree_to_terminal(Y):\n \"\"\"The most frequent class label, out of the data points belonging to the leaf node,\n is selected as the predicted class.\n \n # Arguments\n Y: np.array of size `(n_objects)`\n \n # Output\n label: most frequent label of `Y.dtype`\n \"\"\"\n label = None\n #################\n ### YOUR CODE ###\n #################\n return label\n```\n\n\n```python\nam.test_student_function(username, tree_to_terminal, ['Y'])\n```\n\n\n```python\nam.get_progress(username)\n```\n\n## 2.3 Build the Decision Tree\nNow we recursively build the decision tree, by greedily splitting the data at each node according to the gini index.\nTo prevent the model from overfitting, we transform a node into a terminal/leaf node, if:\n1. a maximum depth is reached.\n2. the node does not reach a minimum number of training samples.\n\n\n\n```python\ndef tree_recursive_split(X, Y, node, max_depth, min_size, depth):\n XY_left, XY_right = node['XY_left'], node['XY_right']\n del(node['XY_left'])\n del(node['XY_right'])\n # check for a no split\n if XY_left[0].size <= 0 or XY_right[0].size <= 0:\n node['left_child'] = node['right_child'] = tree_to_terminal(np.concatenate((XY_left[1], XY_right[1])))\n return\n # check for max depth\n if depth >= max_depth:\n node['left_child'], node['right_child'] = tree_to_terminal(XY_left[1]), tree_to_terminal(XY_right[1])\n return\n # process left child\n if XY_left[0].shape[0] <= min_size:\n node['left_child'] = tree_to_terminal(XY_left[1])\n else:\n node['left_child'] = tree_best_split(*XY_left)\n tree_recursive_split(X, Y, node['left_child'], max_depth, min_size, depth+1)\n # process right child\n if XY_right[0].shape[0] <= min_size:\n node['right_child'] = tree_to_terminal(XY_right[1])\n else:\n node['right_child'] = tree_best_split(*XY_right)\n tree_recursive_split(X, Y, node['right_child'], max_depth, min_size, depth+1)\n\n\ndef build_tree(X, Y, max_depth, min_size):\n root = tree_best_split(X, Y)\n tree_recursive_split(X, Y, root, max_depth, min_size, 1)\n return root\n```\n\nBy printing the split criteria or the predicted class at each node, we can visualise the decising making process.\nBoth the tree and a a prediction can be implemented recursively, by going from the root to a leaf node.\n\n\n```python\ndef print_tree(node, depth=0):\n if isinstance(node, dict):\n print('%s[X%d < %.3f]' % ((depth*' ', (node['index']+1), node['value'])))\n print_tree(node['left_child'], depth+1)\n print_tree(node['right_child'], depth+1)\n else:\n print('%s[%s]' % ((depth*' ', node)))\n \ndef tree_predict_single(x, node):\n if isinstance(node, dict):\n if x[node['index']] < node['value']:\n return tree_predict_single(x, node['left_child'])\n else:\n return tree_predict_single(x, node['right_child'])\n \n return node\n\ndef tree_predict_multi(X, node):\n Y = np.array([tree_predict_single(row, node) for row in X])\n return Y[:, None] # size: (n_object,) -> (n_object, 1)\n```\n\nLet's test our decision tree model on some toy data.\n\n\n```python\nX_train, Y_train, X_test, Y_test = split(*generate_2_circles(), 0.7)\n\ntree = build_tree(X_train, Y_train, 4, 1)\nY_pred = tree_predict_multi(X_test, tree)\ntest_accuracy = (Y_pred == Y_test).mean()\nprint('Test Acc: {:.1f}%'.format(test_accuracy * 100))\n```\n\nWe print the decision tree in 
[pre-order](https://en.wikipedia.org/wiki/Tree_traversal#Pre-order_(NLR)).\n\n\n```python\nprint_tree(tree)\n```\n\n\n```python\nplot_model_prediction(lambda x: tree_predict_multi(x, tree), X_test, Y_test)\n```\n\n# 3. Experiments\nThe [Cleveland Heart Disease](https://archive.ics.uci.edu/ml/datasets/Heart+Disease) dataset aims at predicting the presence of heart disease based on other available medical information of the patient.\n\nAlthough the whole database contains 76 attributes, we focus on the following 14:\n1. Age: age in years \n2. Sex: \n * 0 = female\n * 1 = male \n3. Chest pain type: \n * 1 = typical angina\n * 2 = atypical angina\n * 3 = non-anginal pain\n * 4 = asymptomatic\n4. Trestbps: resting blood pressure in mm Hg on admission to the hospital \n5. Chol: serum cholestoral in mg/dl \n6. Fasting blood sugar: > 120 mg/dl\n * 0 = false\n * 1 = true\n7. Resting electrocardiographic results: \n * 0 = normal\n * 1 = having ST-T wave abnormality (T wave inversions and/or ST elevation or depression of > 0.05 mV) \n * 2 = showing probable or definite left ventricular hypertrophy by Estes' criteria \n8. Thalach: maximum heart rate achieved \n9. Exercise induced angina:\n * 0 = no\n * 1 = yes\n10. Oldpeak: ST depression induced by exercise relative to rest \n11. Slope: the slope of the peak exercise ST segment\n * 1 = upsloping\n * 2 = flat \n * 3 = downsloping \n12. Ca: number of major vessels (0-3) colored by flourosopy \n13. Thal: \n * 3 = normal\n * 6 = fixed defect\n * 7 = reversable defect \n14. Target: diagnosis of heart disease (angiographic disease status)\n * 0 = < 50% diameter narrowing \n * 1 = > 50% diameter narrowing\n \nThe 14. attribute is the target variable that we would like to predict based on the rest.\n\nWe have prepared some helper functions to download and pre-process the data in `heart_disease_data.py`\n\n\n```python\nimport heart_disease_data\n```\n\n\n```python\nX, Y = heart_disease_data.download_and_preprocess()\nX_train, Y_train, X_test, Y_test = split(X, Y, 0.7)\n```\n\nLet's have a look at some examples\n\n\n```python\nprint(X_train[0:2])\nprint(Y_train[0:2])\n\n# TODO feel free to explore more examples and see if you can predict the presence of a heart disease\n```\n\n## 3.1 Decision Tree for Heart Disease Prediction \nLet's build a decision tree model on the training data and see how well it performs\n\n\n```python\n# TODO: you are free to make use of code that we provide in previous cells\n# TODO: play around with different hyper parameters and see how these impact your performance\n\ntree = build_tree(X_train, Y_train, 5, 4)\nY_pred = tree_predict_multi(X_test, tree)\ntest_accuracy = (Y_pred == Y_test).mean()\nprint('Test Acc: {:.1f}%'.format(test_accuracy * 100))\n```\n\nHow did changing the hyper parameters affect the test performance? Usually hyper parameters are tuned using a hold-out [validation set](https://en.wikipedia.org/wiki/Training,_validation,_and_test_sets#Validation_dataset) instead of the test set.\n\n## 3.2 Logistic Regression for Heart Disease Prediction\n\nInstead of manually going through the data to find possible correlations, let's try training a logistic regression model on the data.\n\n\n```python\n# TODO: you are free to make use of code that we provide in previous cells\n# TODO: play around with different hyper parameters and see how these impact your performance\n```\n\nHow well did your model perform? Was it actually better then guessing? 
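\n\nOne way to answer that is to compare the test accuracy against the most naive baseline: always predicting the majority class. A sketch of such a check (assuming the `Y_train` and `Y_test` arrays from the split above) could look like this:\n\n\n```python\n# Majority-class baseline: predict the most common training label for every test sample\nmajority_class = 1.0 if Y_train.mean() > 0.5 else 0.0\nbaseline_accuracy = (Y_test == majority_class).mean()\nprint('Majority-class baseline: {:.1f}%'.format(baseline_accuracy * 100))\n```\n\nIf your logistic regression does not clearly beat this number, it has effectively learned nothing yet. 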
Let's look at the empirical mean of the target.\n\n\n```python\nY_train.mean()\n```\n\nSo what is the problem? Let's have a look at the learned parameters of our model.\n\n\n```python\nprint(model.W, model.b)\n```\n\nIf you trained sufficiently many steps you'll probably see how some weights are much larger than others. Have a look at what range the parameters were initialized and how much change we allow per step (learning rate). Compare this to the scale of the input features. Here an important concept arises, when we want to train on real world data: \n[Feature Scaling](https://en.wikipedia.org/wiki/Feature_scaling).\n\nLet's try applying it on our data and see how it affects our performance.\n\n\n```python\n# TODO: Rescale the input features and train again\n```\n\nNotice that we did not need any rescaling for the decision tree. Can you think of why?\n", "meta": {"hexsha": "573a6d2178c0300d5138735a1933012506c0e4c7", "size": 48702, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "week_2/ML.ipynb", "max_stars_repo_name": "shoemaker9/aml2019", "max_stars_repo_head_hexsha": "f09c3ac942158b6fe9748c76552d6ace73f47815", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "week_2/ML.ipynb", "max_issues_repo_name": "shoemaker9/aml2019", "max_issues_repo_head_hexsha": "f09c3ac942158b6fe9748c76552d6ace73f47815", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "week_2/ML.ipynb", "max_forks_repo_name": "shoemaker9/aml2019", "max_forks_repo_head_hexsha": "f09c3ac942158b6fe9748c76552d6ace73f47815", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.0122214234, "max_line_length": 483, "alphanum_fraction": 0.5446182908, "converted": true, "num_tokens": 9195, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9184802350995702, "lm_q2_score": 0.9059898279984214, "lm_q1q2_score": 0.8321337502178093}} {"text": "# How do micromorts add up?\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n```\n\nMicromorts are a convenient way to measure the dealiness of human activity.\nThey are defined as the amount of activity resulting in a 1:1 000 000 chance of dying.\n\nThese low probablities can be added up, due to the taylor expansion\n\n$\\begin{align}\np_{not\\_dead} &= (1 - p)^N \\\\\n &= 1 - N \\cdot (1 -0) \\cdot p + O(p^2)\\\\\n &\\approx 1 - Np\n\\end{align}$\n\nBut this is clearly not valid for $N\\approx \\frac{1}{p}$, as $p_{not\\_dead} = 0$ in this case.\n\nSo, lets simulate some numbers!\n\n\n```python\nn_long = np.logspace(1, 6.5, 100)\np_not_death = np.power(1 - 10**-6, n_long) \nplt.plot(n_long, p_not_death, label='exact')\n\nn = np.logspace(1, 6, 100)\ntaylored_not_death = 1 - n * 10**-6\nplt.plot(n, taylored_not_death, label='taylored')\nplt.legend()\nplt.show()\n```\n\nAs we can see, the exact probablity to zero asymptotically, and is about zero at $N\\approx \\frac{3}{p}$, while the taylored formula is exactly zero at $N = \\frac{1}{p}$.\n\nBut for which quantities of N can we use the first-order taylored formula?\n\n\n```python\nn = np.logspace(1, 5.7, 1000)\np_not_death = np.power(1 - 10**-6, n) \nplt.plot(n, p_not_death, label='exact')\n\ntaylored_not_death = 1 - n * 10**-6\nplt.plot(n, taylored_not_death, label='taylored')\nplt.legend()\nplt.show()\n```\n\nIt's probably safe for $N<\\frac{1}{p}/10$, and roughly correct for $N<\\frac{1}{p}/5$, 100 000 and 500 000 for the micromorts in this case.\n\nWhat if we add up bigger lumps of micromorts?\n\n\n```python\nn_long = np.logspace(1, 4.5, 100)\np_not_death = np.power(1 - 10**-4, n_long) \nplt.plot(n_long, p_not_death, label='exact')\n\nn = np.logspace(1, 4, 100)\ntaylored_not_death = 1 - n * 10**-4\nplt.plot(n, taylored_not_death, label='taylored')\nplt.legend()\n\nplt.figure()\nn = np.logspace(1, 3.7, 1000)\np_not_death = np.power(1 - 10**-4, n) \nplt.plot(n, p_not_death, label='exact')\n\ntaylored_not_death = 1 - n * 10**-4\nplt.plot(n, taylored_not_death, label='taylored')\nplt.figure()\nplt.show()\n```\n\nFor bigger lumps, the results are the same, just shifted downwards - i.e., we can add up 1000 doses of 100 micromorts safely.\n", "meta": {"hexsha": "95545caf432bdba659531f1b390e7fde650c8db2", "size": 72931, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "micromort_scaling.ipynb", "max_stars_repo_name": "lukaselflein/micromort_scaling", "max_stars_repo_head_hexsha": "5f3da2e355aadc47cfe298c1d2ba6c1485765368", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "micromort_scaling.ipynb", "max_issues_repo_name": "lukaselflein/micromort_scaling", "max_issues_repo_head_hexsha": "5f3da2e355aadc47cfe298c1d2ba6c1485765368", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "micromort_scaling.ipynb", "max_forks_repo_name": "lukaselflein/micromort_scaling", "max_forks_repo_head_hexsha": "5f3da2e355aadc47cfe298c1d2ba6c1485765368", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 
362.8407960199, "max_line_length": 18106, "alphanum_fraction": 0.9249838889, "converted": true, "num_tokens": 757, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9362850128595114, "lm_q2_score": 0.888758801595206, "lm_q1q2_score": 0.8321315459805714}} {"text": "# Solving ODEs with the Euler integrator\n\n**Ordinary differential equations** ([ODE](http://mathworld.wolfram.com/OrdinaryDifferentialEquation.html)s) describe many phenomena in physics. They describe the changes of a **dependent variable** $y(t)$ as a function of a **single independent variable** (e.g. $t$ or $x$).\n\n- An ODE of order $n$ contains $\\frac{d^n y}{dt^n}$ as the highest derivative.\n\n For example, **Newton's equations of motion** \n \n $$\n F = m \\frac{d^2 x(t)}{dt^2}\n $$\n \n are second order ODEs.\n\n- An ODE of order $n$ requires $n$ initial conditions to uniquely determine a solution $y(t)$.\n For Newton: we need initial position $x(t=0)$ and velocity $v(t=0)$.\n \n- Linear ODEs contain no higher powers than 1 of any of the derivatives (including the 0-th derivative $y$ term).\n\n- Non-linear ODEs can contain any powers in the dependent variable and its derivatives.\n\n## Integrating ODEs with Euler's algorithm\nFirst order ODE:\n\n$$\n\\frac{dy}{dt} = f(t, y)\n$$\n\nBasic idea: \n1. Start with initial conditions, $y_0 \\equiv y(t=0)$\n2. Use $\\frac{dy}{dt} = f(t, y)$ (the RHS!) to advance solution a small step $h$ forward in time: $y(t=h) \\equiv y_1$\n3. Repeat with $y_1$ to obtain $y_2 \\equiv y(t=2h)$... and for all future values of $t$.\n\n### Euler's algorithm\nUse the forward difference approximation for the derivative:\n\n$$\nf(t, y) = \\frac{dy(t)}{dt} \\approx \\frac{y(t_{n+1}) - y(t_n)}{h}\n$$\n\nSolve for the position in the future $y(t_{n+1})$, based on present *and known* values $y(t_n)$ and $f\\big(t_n, y(t_n)\\big)$:\n\n$$\ny_{n+1} \\approx y_n + h f(t_n, y_n) \\quad \\text{with} \\quad y_n := y(t_n)\n$$\n\n### Convert 2nd order ODE to 2 coupled 1st order ODEs\nThe 2nd order ODE is\n$$\n\\frac{d^2 y}{dt^2} = f(t, y)\n$$\n\nIntroduce \"dummy\" dependent variables $y_i$ with $y_0 \\equiv y$ and\n\n\\begin{alignat}{1}\n\\frac{dy}{dt} &= \\frac{dy_0}{dt} &= y_1\\\\\n\\frac{d^2y}{dt^2} &= \\frac{dy_1}{dt} &= {} f(t, y_0).\n\\end{alignat}\n\n\nThe first equation defines the velocity $y_1 = v$ and the second one is the original ODE.\n\n## Bouncing ball \n\nProblem: Integrate the equations of a bouncing ball under gravity\n* Drop from height $y_0 = 2$ within initial velocity $v_0 = 0$. 
\n* The ball bounces elastically off the ground at $y=0$.\n\nWe have to solve the *second order ODE* (Newton's equations of motion with constant acceleration)\n\n$$\n\\frac{d^2 y}{dt^2} = -g.\n$$\n\nThe Euler scheme for any *first order ODE* \n\n$$\n\\frac{dy}{dt} = f(y, t)\n$$\n\nis\n$$\ny(t + h) = y(t) + h f(y(t), t).\n$$\n\nIn order to solve the original 2nd order equation of motion we make use of the fact that one $n$-th order ODE can be written as $n$ coupled first order ODEs, namely \n\n\\begin{align}\n\\frac{dy}{dt} &= v\\\\\n\\frac{dv}{dt} &= -g.\n\\end{align}\n\nSolve each of the first order ODEs with Euler:\n\n\\begin{align}\ny(t + h) &= y(t) + h v(t)\\\\\nv(t + h) &= v(t) - h g.\n\\end{align}\n\n### Free fall \n\nStart with free fall as an even simpler problem.\n\n\n```python\nimport numpy as np\n```\n\n\n```python\n# parameters\ng = -9.81\n# initial conditions\ny = 2.0\nv = 0.0\n\nt = 0\ndt = 0.01\n\n# record initial conditions\ndata = [[t, y, v]]\n\n# start at first step\nt = dt\nwhile t < 10:\n y = y + v*dt\n v = v + g*dt\n data.append([t, y, v]) \n t += dt\n\ndata = np.array(data) \n```\n\n\n```python\ndata.shape\n```\n\n\n\n\n (1001, 3)\n\n\n\nLook at the first few values:\n\n\n```python\ndata[:4]\n```\n\n\n\n\n array([[ 0. , 2. , 0. ],\n [ 0.01 , 2. , -0.0981 ],\n [ 0.02 , 1.999019, -0.1962 ],\n [ 0.03 , 1.997057, -0.2943 ]])\n\n\n\nTo make it more convenient to get `t = data[:, 0]`, `y = data[:, 1]`, and `v = data[:, 2]` we use transposition to change the array from time x coordinates to coordinates x time and then use tuple assignment:\n\n\n```python\ndata = data.transpose()\ndata.shape # t, y, v\n```\n\n\n\n\n (3, 1001)\n\n\n\n\n```python\nt, y, v = data\n```\n\nPlot the trajectory $y(t)$ with matplotlib.\n\n(Using `t = data[0]` for time and `y = data[1]` for position, after the transposition!)\n\n\n```python\nimport matplotlib.pyplot as plt\n%matplotlib inline\n```\n\n\n```python\nplt.plot(data[0], data[1])\nplt.xlabel(\"time (s)\")\nplt.ylabel(\"position (m)\");\n```\n\n### Bouncing\nAdd a floor at $y = 0$.\n\nWhat happens at the floor? \u2013 The velocity changes (elastic collision).\n\n\n```python\n# parameters\ng = -9.81\ny_floor = 0\n\n# initial conditions\ny = 2.0\nv = 0.0\n\nt = 0\ndt = 0.01\n\n# record initial conditions\ndata = [[t, y, v]]\n\n# start at first step\nt = dt\nwhile t < 10:\n y += v*dt\n if y > y_floor:\n v += g*dt\n else:\n v = -v # bounce off floor\n data.append([t, y, v]) \n t += dt\n\ndata = np.array(data).transpose() \n```\n\n\n```python\nplt.plot(data[0], data[1])\nplt.xlabel(\"time (s)\")\nplt.ylabel(\"position (m)\");\n```\n\n## Summary: Euler integrator \n\n1. If the order of the ODE > 1 then write the ODE as a coupled system of n first order ODEs.\n\n For Newton's EOM ($F = m\\frac{d^2}{dt^2}$):\n \n \\begin{align}\n \\frac{dx}{dt} &= v\\\\\n \\frac{dv}{dt} &= m^{-1}F\n \\end{align}\n \n Note that $F$ typically depends on $x$, e.g., $F(x) = -\\frac{\\partial U}{\\partial x}$ when the force can be derived from a potential energy function $U(x)$.\n2. Solve all first order ODEs with the forward Euler algorithm for time step $\\Delta t$:\n\n \\begin{align}\n x_{t+1} &= x_t + v \\Delta t\\\\\n v_{t+1} &= v_t + m^{-1} F(x_t) \\Delta t\n \\end{align}\n\n\n\nThe time step of the Euler algorithm has to be chosen small because the error in $x(t)$ will go like $\\Delta t^2$ \u2013 Euler is really a terrible algorithm but for this introductory class it is good enough. 
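\n\nThe two update rules in step 2 are compact enough to wrap into a single helper function; the sketch below (with a constant gravitational force standing in for a general $F(x)$) shows the pattern that both the free-fall and the bouncing-ball loops above follow:\n\n\n```python\ndef euler_step(x, v, force, m, dt):\n    # one forward Euler step for Newton's equations of motion\n    x_new = x + v * dt\n    v_new = v + force(x) / m * dt\n    return x_new, v_new\n\nm = 1.0\nforce = lambda x: -9.81 * m   # constant gravity as a stand-in for any F(x)\n\nx, v = 2.0, 0.0               # initial position and velocity\nt, dt = 0.0, 0.01\nwhile t < 1.0:\n    x, v = euler_step(x, v, force, m, dt)\n    t += dt\n\nprint(x, v)                   # state after about 1 s of free fall\n```\n\nThat is really all the basic integrator amounts to. 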
Better algorithms exist and are not much more difficult (see, e.g., PHY432 Computational Methods).\n", "meta": {"hexsha": "ae052ff25ed7549a469f8eb6c88a29b1363b84f4", "size": 57018, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Module_5/euler_integrator.ipynb", "max_stars_repo_name": "Py4Phy/PHY202", "max_stars_repo_head_hexsha": "ec3a0b0285f2601accfdbf0c30416e1351430342", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2019-10-26T00:39:14.000Z", "max_stars_repo_stars_event_max_datetime": "2019-10-29T19:35:20.000Z", "max_issues_repo_path": "Module_5/euler_integrator.ipynb", "max_issues_repo_name": "Py4Phy/PHY202", "max_issues_repo_head_hexsha": "ec3a0b0285f2601accfdbf0c30416e1351430342", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Module_5/euler_integrator.ipynb", "max_forks_repo_name": "Py4Phy/PHY202", "max_forks_repo_head_hexsha": "ec3a0b0285f2601accfdbf0c30416e1351430342", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 98.4766839378, "max_line_length": 30352, "alphanum_fraction": 0.8577642148, "converted": true, "num_tokens": 1887, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9362850004144266, "lm_q2_score": 0.8887587831798666, "lm_q1q2_score": 0.8321315176778867}} {"text": "# 4.1 Introduction\n\n- Our predictor G(x) takes values in a discrete set $\\mathcal{G}$.\n\n- We can divide the input space into regions labeled according to the classification.\n\n- Decision boundaries of regions are linear; this is what we'll mean by *linear methods for classification*.\n\nSuppose there are K classes and the fitted linear model: $\\hat{f}_k(x)=\\hat{\\beta}_{k0} + \\hat{\\beta}_k^Tx$; then the decision boundary between class *k* and *l* is $\\hat{f}_k(x)=\\hat{f}_l(x)$ that is $\\{ x : (\\hat{\\beta}_{k0}-\\hat{\\beta}_{l0}) + (\\hat{\\beta}_{k}-\\hat{\\beta}_{l})^Tx = 0 \\}$. This regression approach is a member of a class of methods that model *discriminant functions $\\delta_k(x)$* for each class, and then classify x to the class with the largest value for its discriminant function. Methods that model the posterior probabilities Pr(G = k | X = x) are also in this class. 
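\n\nTo make the idea of classifying by the largest discriminant function concrete, here is a small illustrative sketch (an addition, not from the book) for two classes in one dimension: fit a linear model to each class indicator and assign every point to the class whose fitted value $\\hat{f}_k(x)$ is largest.\n\n\n```python\nimport numpy as np\n\n# toy 1-D data (made-up numbers): first three points belong to class 1, last three to class 2\nx = np.array([-0.5, 0.0, 0.5, 1.5, 2.0, 2.5])\nX = np.column_stack([np.ones_like(x), x])                  # add an intercept column\nY = np.column_stack([np.arange(6) < 3, np.arange(6) >= 3]).astype(float)\n\nB = np.linalg.lstsq(X, Y, rcond=None)[0]                   # one coefficient vector per class\nf_hat = X @ B                                              # fitted discriminant values\nprint(f_hat.argmax(axis=1) + 1)                            # classify to the largest f_k: [1 1 1 2 2 2]\n```\n\n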
Clearly, if either the $\\delta_k(x)$ or Pr(G = k| X = x) are linear in x, then the decision boundaries will be linear.\n\nFor example, a popular model for the posterior probabilities for two classes are (4.1):\n\n$$\n\\begin{equation}\nPr(G = 1|X=x)=\\cfrac{exp(\\beta_0 + \\beta^Tx)}{1+exp(\\beta_0 + \\beta^Tx)}\\\\\nPr(G = 2|X=x)=\\cfrac{1}{1+exp(\\beta_0 + \\beta^Tx)}\n\\end{equation}\n$$\n\nHere the monotone transformation is the *logit* (or *log-odds*) transformation: log[p/(1-p)], in fact (4.2):\n\n$$\nlog \\frac{Pr(G = 1|X=x)}{Pr(G = 2|X=x)} = \\beta_0 + \\beta^Tx\n$$\n\nThe decision boundary defined by $\\{x : \\beta_0 + \\beta^Tx = 0\\}$\n\n", "meta": {"hexsha": "7cfd9f17a7a340b69bc9ee650956ead8e71f857e", "size": 2350, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "chapter-04/4.1-introduction.ipynb", "max_stars_repo_name": "leduran/ESL", "max_stars_repo_head_hexsha": "fcb6c8268d6a64962c013006d9298c6f5a7104fe", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 360, "max_stars_repo_stars_event_min_datetime": "2019-01-28T14:05:02.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-27T00:11:21.000Z", "max_issues_repo_path": "chapter-04/4.1-introduction.ipynb", "max_issues_repo_name": "leduran/ESL", "max_issues_repo_head_hexsha": "fcb6c8268d6a64962c013006d9298c6f5a7104fe", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-07-06T16:51:40.000Z", "max_issues_repo_issues_event_max_datetime": "2020-07-06T16:51:40.000Z", "max_forks_repo_path": "chapter-04/4.1-introduction.ipynb", "max_forks_repo_name": "leduran/ESL", "max_forks_repo_head_hexsha": "fcb6c8268d6a64962c013006d9298c6f5a7104fe", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 79, "max_forks_repo_forks_event_min_datetime": "2019-03-21T23:48:35.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-31T13:05:10.000Z", "avg_line_length": 37.3015873016, "max_line_length": 738, "alphanum_fraction": 0.5740425532, "converted": true, "num_tokens": 476, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9615338057771058, "lm_q2_score": 0.8652240860523328, "lm_q1q2_score": 0.8319422083119177}} {"text": "# Newton's Method for Finding Equation Roots\n\nNewton's method, also known as Newton-Raphson, is an approach for finding the roots of nonlinear equations and is one of the most common root-finding algorithms due to its relative simplicity and speed. The [root of a function](http://mathworld.wolfram.com/Root.html) is the point at which $f(x) = 0$. Many equations have more than one root. Every real polynomial of odd degree has an odd number of real roots (\"Zero of a function,\" 2016). Newton-Raphson is an iterative method that begins with an initial guess of the root. The method uses the derivative of the function $f'(x)$ as well as the original function $f(x)$, and thus only works when the derivative can be determined.\n\n## Newton-Raphson Iteration\n\nThe initial guess of the root is typically denoted $x_0$ with the true root represented by $r$. The true root can thus be expressed as $r = x_0 + h$, and therefore $h = r - x_0$, where $h$ measures how far the guess is from the true value of the root. As $h$ will be small, a linear tangent line is used to approximate the location of the root and can be written as:\n\n$$ 0 = f(r) = f(x_0 + h) \\approx f(x_0) + hf'(x_0) $$\n\nwhere $h$ is approximately:\n\n$$ h \\approx -\\frac{f(x_0)}{f'(x_0)} $$\n\nUnless the derivative $f'(x_o)$ is close to 0. 
Combining this approximation with the value of the true root yields:\n\n$$ r = x_0 + h \\approx x_0 -\\frac{f(x_0)}{f'(x_0)} $$\n\nTherefore the new estimate of $r$, $x_1$, becomes:\n\n$$ x_1 = x_0 - \\frac{f(x_0)}{f'(x_0)} $$\n\nThen the new estimate $x_2$ is obtained from $x_1$ in the same manner:\n\n$$ x_2 = x_1 - \\frac{f(x_1)}{f'(x_1)} $$\n\nThe iteration of Newton-Raphson can thus be generalized:\n\n$$ x_{n + 1} = x_n - \\frac{f(x_n)}{f'(x_n)} $$\n\n## Newton-Raphson Convergence\n\nNewton-Raphson is not a foolproof method in that it can fail to converge to a solution. In fact, there are no 'perfect' numerical methods that will always converge on a solution. In particular, the assumption that $f''(x)$ exists and is continuous near $r$ must be made.\n\nThe method can also fail to converge when $f'(x)$ is close to 0. For example, 5.02 is close to 5, as it is only 0.4% different; however, $5.02/10^{-7}$ is quite different from $5/10^{-7}$, so dividing by a near-zero derivative can send the iterates far from the root.\n\n## Examples of Newton-Raphson\n\nAn example will help further illuminate the definitions and equations above. The NR method can be used to approximate square roots such as $\\sqrt{10}$. The square root of 10 is about three, so we can use that as a good starting value.\n\nIt often helps to plot the function to see where the roots occur. The function is first rearranged to be an expression of $f(x)$ before plotting.\n\n$$ x = \\sqrt{10} $$\n\n$$ x - \\sqrt{10} = 0 $$\n\n$$ f(x) = x^2 - 10 $$\n\n\n```python\nfrom sympy import symbols, limit, diff, sin, cos, log, tan, sqrt, init_printing, plot, oo\nfrom mpmath import ln, e, pi\n\ninit_printing()\nx = symbols('x')\ny = symbols('y')\n```\n\n\n```python\nplot(x ** 2 - 10)\n```\n\nIt can be seen from the graph that the function crosses the x-axis at $\\sqrt{10}$, on the interval [3, 4]. The derivative of the function is $2x$.\n\nSolving for the root of the function using the Newton-Raphson method proceeds as follows, using three as the initial guess.\n\n$$ x_{n+1} = x_n - \\frac{f(x_n)}{f'(x_n)} = x_n - \\frac{x_n^2 - 10}{2x_n} $$\n\n$$ x_{1} = 3 - \\frac{(3)^2 - 10}{2(3)} = 3.166667 $$\n\n$$ x_{2} = 3.166667 - \\frac{(3.166667)^2 - 10}{2(3.166667)} = 3.162281 $$\n\n$$ x_{3} = 3.162281 - \\frac{(3.162281)^2 - 10}{2(3.162281)} = 3.162278 $$\n\nAnd so on until the ${x_n}$ estimates are within a particular level of tolerance. This example converges in three iterations. Thus $3.162278$ is the estimated root of the function. This result can be verified by simply taking the square root of 10.\n\n\n```python\n10 ** (1/2) # math.sqrt(10)\n```\n\n## Newton-Raphson Method in Python\n\nAs an example, suppose the equation whose roots we are interested in locating is: \n\n$$ f(x) = x^3 - 2x - 5 $$\n\n\n```python\ndef f(x):\n    return x ** 3 - 2 * x - 5\n```\n\nPlot the function to visualize how the equation behaves and where any roots may be located.\n\n\n```python\nplot(x ** 3 - 2 * x - 5, xlim=(-5,5), ylim=(-10,5))\n```\n\n\n```python\nfrom mathpy.numerical.roots import newtonraph\nfrom scipy.optimize import newton\n```\n\nIt looks like the function equals 0 when $x$ is about 2. To find the root of the equation, use the `newtonraph` and `newton` functions imported above with a starting value of 2. \n\n\n```python\nnewtonraph(f, 2)[0]\n```\n\n\n```python\nnewton(f,2)\n```\n\nAs suspected, the root of the function is very close to 2 at $2.09455$.
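\n\nIf `mathpy` is not installed, the same value can be reproduced with a few lines of plain Python. This is a minimal hand-rolled sketch of the Newton-Raphson update (not the `mathpy` implementation), reusing the `f` defined above:\n\n\n```python\ndef f_prime(x):\n    return 3 * x ** 2 - 2\n\nx_n = 2.0\nfor i in range(1, 5):\n    x_n = x_n - f(x_n) / f_prime(x_n)   # Newton-Raphson update\n    print(i, x_n)                       # settles at about 2.09455 within a few steps\n```\n\n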
The iterations can be written as follows:\n\n$$ x_{n+1} = x_n - \\frac{f(x_n)}{f'(x_n)} = x_n - \\frac{x_n^3 - 2x_n - 5}{3x_n^2 - 2} $$\n\n$$ x_{1} = 2 - \\frac{(2)^3 - 2(2) - 5}{3(2)^2 - 2} = 2.1 $$\n\n$$ x_{2} = 2.1 - \\frac{(2.1)^3 - 2(2.1) - 5}{3(2.1)^2 - 2} = 2.094568 $$\n\n$$ x_{3} = 2.094568 - \\frac{(2.094568)^3 - 2(2.094568) - 5}{3(2.094568)^2 - 2} = 2.094551 $$\n\nAfter three iterations the estimate agrees with the root of $2.09455$ found above.\n\n## References\n\nAgresti, A. (2002). Categorical data analysis (2nd ed.). New York, NY: Wiley-Interscience.\n\nKiusalaas, J. (2013). Numerical methods in engineering with Python (2nd ed.). New York: Cambridge University Press.\n\nThe Newton-Raphson method. Retrieved from https://www.math.ubc.ca/~anstee/math104/104newtonmethod.pdf\n\nStewart, J. (2007). Essential calculus: Early transcendentals. Belmont, CA: Thomson Higher Education.\n\nZero of a function (2016). In Wikipedia. Retrieved from https://en.wikipedia.org/wiki/Zero_of_a_function#Polynomial_roots\n", "meta": {"hexsha": "eaa7ec8222ecdda5c489dfb26735be545f991561", "size": 43109, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Jupyter_Notes/Newton_s_Method_for_Root_Finding.ipynb", "max_stars_repo_name": "xiuquan0418/MAT221", "max_stars_repo_head_hexsha": "bca0bd815a6c82325ebbacbfca3a15c4cbdc24f0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-05-07T07:18:11.000Z", "max_stars_repo_stars_event_max_datetime": "2020-05-07T07:18:11.000Z", "max_issues_repo_path": "Jupyter_Notes/Newton_s_Method_for_Root_Finding.ipynb", "max_issues_repo_name": "xiuquan0418/MAT221", "max_issues_repo_head_hexsha": "bca0bd815a6c82325ebbacbfca3a15c4cbdc24f0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Jupyter_Notes/Newton_s_Method_for_Root_Finding.ipynb", "max_forks_repo_name": "xiuquan0418/MAT221", "max_forks_repo_head_hexsha": "bca0bd815a6c82325ebbacbfca3a15c4cbdc24f0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 109.6921119593, "max_line_length": 17174, "alphanum_fraction": 0.8517246979, "converted": true, "num_tokens": 1915, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8991213799730775, "lm_q2_score": 0.9252299586383206, "lm_q1q2_score": 0.8318940372033202}} {"text": "# SymPy\n**SymPy** is a Computer Algebra System (CAS) for Python. It does symbolic computation instead of numeric computation. This means that mathematical objects are represented exactly, not approximately as in the case of numerical representation.\n\nTake the example of $\\sqrt{8}$. When calculated numerically, we get the approximate answer 2.82842712475. But in SymPy it is represented as $2 \\sqrt{2}$. Further, performing operations on such representations will continue to retain accuracy. Note that $\\frac{\\sqrt{8}}{\\sqrt{3}}$ is simplified to $\\frac{2 \\sqrt{6}}{3}$, retaining full accuracy.
You can numerically evaluate any expression with the **`N()`** function in SymPy.\n\n\n```python\nfrom sympy import *\nimport math\n\nx, y, z, t = symbols('x y z t') # Symbols representing real numbers\nk, m, n = symbols('k m n', integer=True) # Symbols representing integers\nf, g, h = symbols('f g h', cls=Function) # Symbols repesenting function names\ninit_printing()\n\nprint math.sqrt(8)\nprint sqrt(8)\nprint math.sqrt(8) / math.sqrt(3)\nprint sqrt(8) / sqrt(3)\nprint N(sqrt(8) / sqrt(3)) # Numerical evaluation\nprint N(sqrt(8. / 3.))\n```\n\n 2.82842712475\n 2*sqrt(2)\n 1.63299316186\n 2*sqrt(6)/3\n 1.63299316185545\n 1.63299316185545\n\n\n## Numerical Simplification\n\n\n```python\nprint nsimplify(0.1)\nprint nsimplify(6.28, [pi], tolerance=0.01)\nprint nsimplify(pi, tolerance=0.1)\nprint nsimplify(pi, tolerance=0.001)\n```\n\n 1/10\n 2*pi\n 22/7\n 355/113\n\n\n## Algebra\nSymPy can handle algebraic expressions, simplify them and evaluate them.\n\n\n```python\neq = ((x+y)**2 * (x+1))\neq\n```\n\n\n```python\nexpand(eq)\n```\n\nYou can substitute a numerical value for any of the symbols and simplify the expression. The method to do this is **`subs()`**. It takes two arguments, the symbol and the numerical value it is to assume. If an expression has more than one symbol, substitution must be done one symbol at a time.\n\n\n```python\neq.subs(x, 1).subs(y,1)\n```\n\n\n```python\na = 1/x + (x*sin(x) - 1)/x\na\n```\n\n\n```python\nN(a.subs(x, 1))\n```\n\n\n```python\n\n```\n\n## Integral Calculus\n\nSymPy performs integration symbolically, like you would if you were doing so by hand rather than numerically.\n\n\n```python\na = Integral(cos(x), x)\nEq(a, a.doit())\n```\n\n\n```python\nb = Integral(sqrt(1/x), x)\nEq(b, b.doit())\n```\n\nHere is how we can evaluate the definite integral $\\int_0^{\\infty} e^{-x} dx$\n\n\n```python\nb = Integral(x**2+2*x+3, x)\nEq(b, b.doit())\nintegrate(b, (x, 0, 1))\n```\n\n\n```python\nintegrate(exp(-x), (x, 0, oo))\n```\n\nHere is the definite integral $\\int_{-\\infty}^{\\infty} -x^2 - y^2 dx$\n\n\n```python\nintegrate(exp(-x**2 - y**2), (x, -oo, oo), (y, -oo, oo))\n```\n\n\n```python\nc = Integral(exp(-x**2), (x, 0, 1))\nEq(c, c.doit())\n```\n\n\n```python\nN(c.doit())\n```\n\n## Differential Calculus\n\nSymPy can also perform differentiation symbolically. Derivative of $y = x^2 + 3x - \\frac{1}{2}$ is $y' = 2x + 3$. Substituting $x=2$ in the derivative results in the numerical value $y'(2) = 7$.\n\n\n```python\ns = \"x**2 + 3*x - 1/2\"\nc = sympify(s)\nprint c\nd = diff(c)\nprint d\nd.subs(x, 2)\n```\n\nIt is possible to differentiate a function multiple times and obtain the second or higher derivatives.\n\n\n```python\nprint diff(x**4)\nprint diff(x**4, x, x) # Differentiate w.r.t. x two times\nprint diff(x**4, x, 2) # Differentiate w.r.t. 
x two times\n```\n\n 4*x**3\n 12*x**2\n 12*x**2\n\n\nA function of two or more variables can be differentiated with respect to any of the variables any number of times.\n\n\n```python\nexpr = exp(x*y*z)\nderiv = diff(expr, x, y, z)\nprint deriv\n```\n\n (x**2*y**2*z**2 + 3*x*y*z + 1)*exp(x*y*z)\n\n\n## Limits\n\nSymPy can evaluate limits of functions.\n\n\n```python\nprint limit(sin(x)/x, x, 0)\nprint limit(tan(x)/x, x, 0)\n```\n\n 1\n 1\n\n\n\n```python\nexpr = x**2 / exp(x)\nprint expr.subs(x, oo)\nprint limit(expr, x, oo)\n```\n\n nan\n 0\n\n\n\n```python\nexpr = Limit((cos(x) -1) / x, x, 0)\nexpr\n```\n\n\n```python\nexpr.doit()\n```\n\n\n```python\nc = Limit((-x + sin(x)) / (x * cos(x) - sin(x)), x, 0)\nc\n```\n\n\n```python\nr = Limit(x * (sin(x) - x * cos(x)) / (2*(1-cos(x)) - x * sin(x)), x, 0)\nr\n```\n\n\n```python\nprint c.doit()\nprint r.doit()\n```\n\n 1/2\n 4\n\n\n## Solution of Equations\n\nTo solve the equation $x^2 = 1$, first form the equation with the **`Eq()`** SymPy function by defining the left and right hand sides. Then solve the equation by calling the SymPy function **`solve()`**.\n\n\n```python\nsolve(Eq(x**2, 1), x)\n```\n\nThe same equation could also be expressed as $x^2 - 1 = 0$ and solved as show below:\n\n\n```python\nsolve(Eq(x**2 - 1, 0), x)\n```\n\nSince it is a common form to have zero on the right hand side, SymPy allows you to dispense with the **`Eq()`** function call to form the equation and solve the equation directly as follows:\n\n\n```python\nsolve(x**2 - 1, x)\n```\n\nLet us now solve the polynomial equation $x^2 - x = 0$\n\n\n```python\nprint solve(x**2 - x, x)\n```\n\n [0, 1]\n\n\nFor polynomial equations, **`solve`** prints repeated roots, if any, only once. The function **`roots()`** prints the roots and their frequency.\n\n\n```python\nprint solve(x**3 - 6*x**2 + 9*x, x)\nprint roots(x**3 - 6*x**2 + 9*x, x)\n```\n\n [0, 3]\n {0: 1, 3: 2}\n\n\nDifferential equations can be solved using the SymPy function **`dsolve()`**. 
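\n\nAs a quick warm-up before the second-order problem, here is an illustrative sketch (an addition to the original notes) solving the simple first-order equation $f'(x) = f(x)$, reusing the `f` Function symbol defined at the top of this notebook:\n\n\n```python\nwarmup_eq = Eq(f(x).diff(x), f(x))   # represents f'(x) = f(x)\ndsolve(warmup_eq, f(x))              # returns f(x) = C1*exp(x)\n```\n\n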
Let us first represent the differential equation $f''(x) - 2 f'(x) + f(x) = \\sin(x)$ as follows using **`Eq()`**, and then solve it using **`dsolve()`**:\n\n\n```python\ndiffeq = Eq(f(x).diff(x, x) - 2 * f(x).diff(x) + f(x), sin(x))\ndiffeq\n```\n\n\n```python\ndsolve(diffeq, f(x))\n```\n\nIn the above solution $C_1$ and $C_2$ are arbitrary constants of integration which will have to be determined by applying known conditions.\n\n\n```python\n\n```\n", "meta": {"hexsha": "96e3ec4c43393178c3355d2615af427fa6b91c1e", "size": 49236, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "SymPy.ipynb", "max_stars_repo_name": "satish-annigeri/Notebooks", "max_stars_repo_head_hexsha": "92a7dc1d4cf4aebf73bba159d735a2e912fc88bb", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "SymPy.ipynb", "max_issues_repo_name": "satish-annigeri/Notebooks", "max_issues_repo_head_hexsha": "92a7dc1d4cf4aebf73bba159d735a2e912fc88bb", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "SymPy.ipynb", "max_forks_repo_name": "satish-annigeri/Notebooks", "max_forks_repo_head_hexsha": "92a7dc1d4cf4aebf73bba159d735a2e912fc88bb", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.1497797357, "max_line_length": 441, "alphanum_fraction": 0.6617312536, "converted": true, "num_tokens": 1832, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9252299570920387, "lm_q2_score": 0.8991213786215105, "lm_q1q2_score": 0.8318940345625149}} {"text": "# Predicting the energy in the one-dimensional Ising model\n\nWe will in this notebook use linear (ordinary least squares), ridge and LASSO regression to predict the energy in the nearest neighbor one-dimensional Ising model on a ring, i.e., the endpoints wrap around. We will use the linear regression models to fit a value for the coupling constant to achieve this.\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.axes_grid1 import make_axes_locatable\nimport seaborn as sns\nimport scipy.linalg as scl\nfrom sklearn.model_selection import train_test_split\nimport sklearn.linear_model as skl\nimport tqdm\n\n%matplotlib inline\n\nsns.set(color_codes=True)\ncmap_args=dict(vmin=-1., vmax=1., cmap='seismic')\n```\n\n## The one-dimensional Ising model\n\nThe one-dimensional Ising model with nearest neighbor interaction, no external field and a constant coupling constant $J$ is given by\n\n\\begin{align}\n H = -J \\sum_{k}^L s_k s_{k + 1},\n\\end{align}\n\nwhere $s_i \\in \\{-1, 1\\}$ and $s_{N + 1} = s_1$. The number of spins in the system is determined by $L$. For the low temperature limit there is no phase transition.\n\nWe will look at a system of $L = 40$ spins with a coupling constant of $J = 1$. 
To get enough training data we will generate 10000 states with their respective energies.\n\n\n```python\nL = 40\nn = int(1e4)\n\nspins = np.random.choice([-1, 1], size=(n, L))\nJ = 1.0\n\nenergies = np.zeros(n)\n\nfor i in range(n):\n energies[i] = - J * np.dot(spins[i], np.roll(spins[i], 1))\n```\n\n## Reformulating the problem to suit regression\n\nA more general form for the one-dimensional Ising model is\n\n\\begin{align}\n H = - \\sum_j^L \\sum_k^L s_j s_k J_{jk}.\n\\end{align}\n\nHere we allow for interactions beyond the nearest neighbors and a more adaptive coupling matrix. This latter expression can be formulated as a matrix-product on the form\n\n\\begin{align}\n H = X J,\n\\end{align}\n\nwhere $X_{jk} = s_j s_k$ and $J$ is the matrix consisting of the elements $-J_{jk}$. This form of writing the energy fits perfectly with the form utilized in linear regression, viz.\n\n\\begin{align}\n y = X\\omega + \\epsilon,\n\\end{align}\n\nwhere $\\omega$ are the weights we wish to fit and $\\epsilon$ is noise with zero-mean. In the case of the Ising model we have $\\sigma = 0$.\n\n\n```python\nX = np.zeros((n, L ** 2))\nfor i in range(n):\n X[i] = np.outer(spins[i], spins[i]).ravel()\n```\n\n\n```python\ny = energies\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.96)\n\nX_train_own = np.concatenate(\n (np.ones(len(X_train))[:, np.newaxis], X_train),\n axis=1\n)\n\nX_test_own = np.concatenate(\n (np.ones(len(X_test))[:, np.newaxis], X_test),\n axis=1\n)\n```\n\n## Linear regression\n\nThe problem at hand is to try to fit the equation\n\n\\begin{align}\n y = f(x) + \\epsilon,\n\\end{align}\n\nwhere $f(x)$ is some unknown function of the data $x$ and $\\epsilon$ is normally distributed with mean zero noise with standard deviation $\\sigma_{\\epsilon}$. Our job is to try to find a predictor which estimates the function $f(x)$. In linear regression we assume that we can formulate the problem as\n\n\\begin{align}\n y = X\\omega + \\epsilon,\n\\end{align}\n\nwhere $X$ and $\\omega$ are now matrices. Our job at hand is now to find a _cost function_ $C$, which we wish to minimize in order to find the best estimate of $\\omega$.\n\n### Ordinary least squares\n\nIn the ordinary least squares method we choose the cost function\n\n\\begin{align}\n C(X, \\omega) = ||X\\omega - y||^2\n = (X\\omega - y)^T(X\\omega - y)\n\\end{align}\n\nWe then find the extremal point of $C$ by taking the derivative with respect to $\\omega$ and setting it to zero, i.e.,\n\n\\begin{align}\n \\dfrac{\\mathrm{d}C}{\\mathrm{d}\\omega}\n = 0.\n\\end{align}\n\nThis yields the expression for $\\omega$ to be\n\n\\begin{align}\n \\omega = \\frac{X^T y}{X^T X},\n\\end{align}\n\nwhich immediately imposes some requirements on $X$ as there must exist an inverse of $X^T X$. If the expression we are modelling contains an intercept, i.e., a constant expression we must make sure that the first column of $X$ consists of $1$.\n\n\n```python\ndef get_ols_weights_naive(x: np.ndarray, y: np.ndarray) -> np.ndarray:\n return scl.inv(x.T @ x) @ (x.T @ y)\n```\n\n\n```python\nomega = get_ols_weights_naive(X_train_own, y_train)\n```\n\nHmmm, doing the inversion directly turns out to be a bad idea as the matrix $X^TX$ is singular. An alternative approach is to use the _singular value decomposition_. 
Using the definition of the Moore-Penrose pseudoinverse we can write the equation for $\\omega$ as\n\n\\begin{align}\n \\omega = X^{+}y,\n\\end{align}\n\nwhere the pseudoinverse of $X$ is given by\n\n\\begin{align}\n X^{+} = \\frac{X^T}{X^T X}.\n\\end{align}\n\nUsing singular value decomposition we have that $X = U\\Sigma V^T$, where $X^{+} = V\\Sigma^{+} U^T$. This reduces the equation for $\\omega$ to\n\n\\begin{align}\n \\omega = V\\Sigma^{+} U^T y.\n\\end{align}\n\nNote that solving this equation by actually doing the pseudoinverse (which is what we will do) is not a good idea as this operation scales as $\\mathcal{O}(n^3)$, where $n$ is the number of elements in a general matrix. Instead, doing $QR$-factorization and solving the linear system as an equation would reduce this down to $\\mathcal{O}(n^2)$ operations.\n\n\n```python\ndef get_ols_weights(x: np.ndarray, y: np.ndarray) -> np.ndarray:\n u, s, v = scl.svd(x)\n return v.T @ scl.pinv(scl.diagsvd(s, u.shape[0], v.shape[0])) @ u.T @ y\n```\n\nBefore passing in the data to the function we append a column with ones to the training data.\n\n\n```python\nomega = get_ols_weights(X_train_own,y_train)\n```\n\nNext we fit a `LinearRegression`-model from Scikit-learn for comparison.\n\n\n```python\nclf = skl.LinearRegression().fit(X_train, y_train)\n```\n\nExtracting the $J$-matrix from both our own method and the Scikit-learn model where we make sure to remove the intercept.\n\n\n```python\nJ_own = omega[1:].reshape(L, L)\nJ_sk = clf.coef_.reshape(L, L)\n```\n\nA way of looking at the coefficients in $J$ is to plot the matrices as images.\n\n\n```python\nfig = plt.figure(figsize=(20, 14))\nim = plt.imshow(J_own, **cmap_args)\nplt.title(\"Home-made OLS\", fontsize=18)\nplt.xticks(fontsize=18)\nplt.yticks(fontsize=18)\ncb = fig.colorbar(im)\ncb.ax.set_yticklabels(cb.ax.get_yticklabels(), fontsize=18)\n\nfig = plt.figure(figsize=(20, 14))\nim = plt.imshow(J_sk, **cmap_args)\nplt.title(\"LinearRegression from Scikit-learn\", fontsize=18)\nplt.xticks(fontsize=18)\nplt.yticks(fontsize=18)\ncb = fig.colorbar(im)\ncb.ax.set_yticklabels(cb.ax.get_yticklabels(), fontsize=18)\n\nplt.show()\n```\n\nWe can see that our model for the least squares method performes close to the benchmark from Scikit-learn. It is interesting to note that OLS considers both $J_{j, j + 1} = -0.5$ and $J_{j, j - 1} = -0.5$ as valid matrix elements for $J$.\n\n### Ridge regression\n\nHaving explored the ordinary least squares we move on to ridge regression. In ridge regression we include a _regularizer_. This involves a new cost function which leads to a new estimate for the weights $\\omega$. This results in a penalized regression problem. The cost function is given by\n\n\\begin{align}\n C(X, \\omega; \\lambda) = ||X\\omega - y||^2 + \\lambda ||\\omega||^2\n = (X\\omega - y)^T(X\\omega - y) + \\lambda \\omega^T\\omega.\n\\end{align}\n\nFinding the extremum of this function yields the weights\n\n\\begin{align}\n \\omega(\\lambda) = \\frac{X^Ty}{X^TX + \\lambda} \\to \\frac{\\omega_{\\text{LS}}}{1 + \\lambda},\n\\end{align}\n\nwhere $\\omega_{\\text{LS}}$ is the weights from ordinary least squares. The last assumption assumes that $X$ is orthogonal, which it is not. 
We will therefore resort to solving the equation as it stands on left hand side.\n\n\n```python\ndef get_ridge_weights(x: np.ndarray, y: np.ndarray, _lambda: float) -> np.ndarray:\n return x.T @ y @ scl.inv(\n x.T @ x + np.eye(x.shape[1], x.shape[1]) * _lambda\n )\n```\n\n\n```python\n_lambda = 0.1\n```\n\n\n```python\nomega_ridge = get_ridge_weights(X_train_own, y_train, np.array([_lambda]))\n```\n\n\n```python\nclf_ridge = skl.Ridge(alpha=_lambda).fit(X_train, y_train)\n```\n\n\n```python\nJ_ridge_own = omega_ridge[1:].reshape(L, L)\nJ_ridge_sk = clf_ridge.coef_.reshape(L, L)\n```\n\n\n```python\nfig = plt.figure(figsize=(20, 14))\nim = plt.imshow(J_ridge_own, **cmap_args)\nplt.title(\"Home-made ridge regression\", fontsize=18)\nplt.xticks(fontsize=18)\nplt.yticks(fontsize=18)\ncb = fig.colorbar(im)\ncb.ax.set_yticklabels(cb.ax.get_yticklabels(), fontsize=18)\n\nfig = plt.figure(figsize=(20, 14))\nim = plt.imshow(J_ridge_sk, **cmap_args)\nplt.title(\"Ridge from Scikit-learn\", fontsize=18)\nplt.xticks(fontsize=18)\nplt.yticks(fontsize=18)\ncb = fig.colorbar(im)\ncb.ax.set_yticklabels(cb.ax.get_yticklabels(), fontsize=18)\n\nplt.show()\n```\n\nIn other words, our implementation of ridge regression seems to match well with the benchmark from Scikit-learn. We can also see the same symmetry pattern as for OLS.\n\n### LASSO regression\n\nIn the _Least Absolute Shrinkage and Selection Operator_ (LASSO)-method we get a third cost function.\n\n\\begin{align}\n C(X, \\omega; \\lambda) =\n ||X\\omega - y||^2 + \\lambda ||\\omega||\n = (X\\omega - y)^T(X\\omega - y) + \\lambda \\sqrt{\\omega^T\\omega}.\n\\end{align}\n\nFinding the extremal point of this cost function is not so straight-forward as in least squares and ridge. We will therefore rely solely on the function ``Lasso`` from Scikit-learn.\n\n\n```python\nclf_lasso = skl.Lasso(alpha=_lambda).fit(X_train, y_train)\nJ_lasso_sk = clf_lasso.coef_.reshape(L, L)\n```\n\n\n```python\nfig = plt.figure(figsize=(20, 14))\nim = plt.imshow(J_lasso_sk, **cmap_args)\nplt.title(\"Lasso from Scikit-learn\", fontsize=18)\nplt.xticks(fontsize=18)\nplt.yticks(fontsize=18)\ncb = fig.colorbar(im)\ncb.ax.set_yticklabels(cb.ax.get_yticklabels(), fontsize=18)\n\nplt.show()\n```\n\nIt is quite striking how LASSO breaks the symmetry of the coupling constant as opposed to ridge and OLS. 
We get a sparse solution with $J_{j, j + 1} = -1$.\n\n## Performance of the different models\n\nIn order to judge which model performs best at varying values of $\\lambda$ (for ridge and LASSO) we compute $R^2$ which is given by\n\n\\begin{align}\n R^2 = 1 - \\frac{(y - \\hat{y})^2}{(y - \\bar{y})^2},\n\\end{align}\n\nwhere $y$ is a vector with the true values of the energy, $\\hat{y}$ is the predicted values of $y$ from the models and $\\bar{y}$ is the mean of $\\hat{y}$.\n\n\n```python\ndef r_squared(y, y_hat):\n return 1 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y_hat)) ** 2)\n```\n\nThis is the same metric used by Scikit-learn for their regression models when scoring.\n\n\n```python\ny_hat = clf.predict(X_test)\nr_test = r_squared(y_test, y_hat)\nsk_r_test = clf.score(X_test, y_test)\n\nassert abs(r_test - sk_r_test) < 1e-2\n```\n\n## Performance as function of the regularization parameter\n\nWe see how the different models perform for a different set of values for $\\lambda$.\n\n\n```python\nlambdas = np.logspace(-4, 5, 10)\n\ntrain_errors = {\n \"ols_own\": np.zeros(lambdas.size),\n \"ols_sk\": np.zeros(lambdas.size),\n \"ridge_own\": np.zeros(lambdas.size),\n \"ridge_sk\": np.zeros(lambdas.size),\n \"lasso_sk\": np.zeros(lambdas.size)\n}\n\ntest_errors = {\n \"ols_own\": np.zeros(lambdas.size),\n \"ols_sk\": np.zeros(lambdas.size),\n \"ridge_own\": np.zeros(lambdas.size),\n \"ridge_sk\": np.zeros(lambdas.size),\n \"lasso_sk\": np.zeros(lambdas.size)\n}\n\nplot_counter = 1\n\nfig = plt.figure(figsize=(32, 54))\n\nfor i, _lambda in enumerate(tqdm.tqdm(lambdas)):\n omega = get_ols_weights(X_train_own, y_train)\n y_hat_train = X_train_own @ omega\n y_hat_test = X_test_own @ omega\n\n train_errors[\"ols_own\"][i] = r_squared(y_train, y_hat_train)\n test_errors[\"ols_own\"][i] = r_squared(y_test, y_hat_test)\n\n plt.subplot(10, 5, plot_counter)\n plt.imshow(omega[1:].reshape(L, L), **cmap_args)\n plt.title(\"Home made OLS\")\n plot_counter += 1\n\n omega = get_ridge_weights(X_train_own, y_train, _lambda)\n y_hat_train = X_train_own @ omega\n y_hat_test = X_test_own @ omega\n\n train_errors[\"ridge_own\"][i] = r_squared(y_train, y_hat_train)\n test_errors[\"ridge_own\"][i] = r_squared(y_test, y_hat_test)\n\n plt.subplot(10, 5, plot_counter)\n plt.imshow(omega[1:].reshape(L, L), **cmap_args)\n plt.title(r\"Home made ridge, $\\lambda = %.4f$\" % _lambda)\n plot_counter += 1\n\n for key, method in zip(\n [\"ols_sk\", \"ridge_sk\", \"lasso_sk\"],\n [skl.LinearRegression(), skl.Ridge(alpha=_lambda), skl.Lasso(alpha=_lambda)]\n ):\n method = method.fit(X_train, y_train)\n\n train_errors[key][i] = method.score(X_train, y_train)\n test_errors[key][i] = method.score(X_test, y_test)\n\n omega = method.coef_.reshape(L, L)\n\n plt.subplot(10, 5, plot_counter)\n plt.imshow(omega, **cmap_args)\n plt.title(r\"%s, $\\lambda = %.4f$\" % (key, _lambda))\n plot_counter += 1\n\nplt.show()\n```\n\nWe can see that LASSO quite fast reaches a good solution for low values of $\\lambda$, but will \"wither\" when we increase $\\lambda$ too much. Ridge is more stable over a larger range of values for $\\lambda$, but eventually also fades away.\n\n## Finding the optimal value of $\\lambda$\n\nTo determine which value of $\\lambda$ is best we plot the accuracy of the models when predicting the training and the testing set. We expect the accuracy of the training set to be quite good, but if the accuracy of the testing set is much lower this tells us that we might be subject to an overfit model. 
The ideal scenario is an accuracy on the testing set that is close to the accuracy of the training set.\n\n\n```python\nfig = plt.figure(figsize=(20, 14))\n\ncolors = {\n \"ols_own\": \"b\",\n \"ridge_own\": \"g\",\n \"ols_sk\": \"r\",\n \"ridge_sk\": \"y\",\n \"lasso_sk\": \"c\"\n}\n\nfor key in train_errors:\n plt.semilogx(\n lambdas,\n train_errors[key],\n colors[key],\n label=\"Train {0}\".format(key),\n linewidth=4.0\n )\n\nfor key in test_errors:\n plt.semilogx(\n lambdas,\n test_errors[key],\n colors[key] + \"--\",\n label=\"Test {0}\".format(key),\n linewidth=4.0\n )\n#plt.semilogx(lambdas, train_errors[\"ols_own\"], label=\"Train (OLS own)\")\n#plt.semilogx(lambdas, test_errors[\"ols_own\"], label=\"Test (OLS own)\")\n\nplt.legend(loc=\"best\", fontsize=18)\nplt.xlabel(r\"$\\lambda$\", fontsize=18)\nplt.ylabel(r\"$R^2$\", fontsize=18)\nplt.tick_params(labelsize=18)\nplt.show()\n```\n\nFrom the above figure we can see that LASSO with $\\lambda = 10^{-2}$ achieve a very good accuracy on the test set. This by far surpases the other models for all values of $\\lambda$.\n", "meta": {"hexsha": "9145c9d754a5e15e7050bd97e29be80be6971ebc", "size": 813105, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "doc/Programs/IsingModel/onedim.ipynb", "max_stars_repo_name": "anacost/MachineLearning", "max_stars_repo_head_hexsha": "89e1c3637fe302c2b15b96bf89c8a01d2d693f29", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-08-24T18:42:36.000Z", "max_stars_repo_stars_event_max_datetime": "2020-08-24T18:42:36.000Z", "max_issues_repo_path": "doc/Programs/IsingModel/onedim.ipynb", "max_issues_repo_name": "bernharl/MachineLearning", "max_issues_repo_head_hexsha": "35389a23d0abe490fbb9cd653aa732eeac162262", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/Programs/IsingModel/onedim.ipynb", "max_forks_repo_name": "bernharl/MachineLearning", "max_forks_repo_head_hexsha": "35389a23d0abe490fbb9cd653aa732eeac162262", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-09-04T16:21:16.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-04T16:21:16.000Z", "avg_line_length": 977.2896634615, "max_line_length": 423210, "alphanum_fraction": 0.9357524551, "converted": true, "num_tokens": 4119, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.92522995296862, "lm_q2_score": 0.8991213691605412, "lm_q1q2_score": 0.8318940221014888}} {"text": "# Elliptic PDEs\n\n\n\nThe classic example of an elliptic PDE is **Laplace's equation** (yep, the same Laplace that gave us the Laplace transform), which in two dimensions for a variable $u(x,y)$ is\n\\begin{equation}\n\\frac{\\partial^2 u}{\\partial x^2} + \\frac{\\partial^2 u}{\\partial y^2} = \\nabla^2 u = 0 \\;,\n\\end{equation}\nwhere $\\nabla$ is del, or nabla, and represents the gradient operator: $\\nabla = \\frac{\\partial}{\\partial x} + \\frac{\\partial}{\\partial y}$.\n\nLaplace's equation shows up in a number of physical problems, including heat transfer, fluid dynamics, and electrostatics. 
For example, the heat equation for conduction in two dimensions is\n\\begin{equation}\n\\frac{\\partial u}{\\partial t} = \\alpha \\left( \\frac{\\partial^2 u}{\\partial x^2} + \\frac{\\partial^2 u}{\\partial y^2} \\right) \\;,\n\\end{equation}\nwhere $u(x,y,t)$ is temperature and $\\alpha$ is thermal diffusivity. Steady-state heat transfer (meaning after any initial transient period) is then described by Laplace's equation.\n\nA related elliptic PDE is **Poisson's equation**:\n\\begin{equation}\n\\nabla^2 u = f(x,y) \\;,\n\\end{equation}\nwhich also appears in multiple physical problems\u2014most notably, when solving for pressure in the Navier\u2013Stokes equations.\n\nTo numerically solve these equations, and any elliptic PDE, we can use finite differences, where we replace the continuous $x,y$ domain with a discrete grid of points. This is similar to what we did with boundary-value problems in one dimension\u2014but now we have two dimensions.\n\nTo approximate the second derivatives in Laplace's equation, we can use central differences in both the $x$ and $y$ directions, applied around the $u_{i,j}$ point:\n\\begin{align}\n\\frac{\\partial^2 u}{\\partial x^2} &\\approx \\frac{u_{i-1,j} - 2u_{i,j} + u_{i+1,j}}{\\Delta x^2} \\\\\n\\frac{\\partial^2 u}{\\partial y^2} &\\approx \\frac{u_{i,j-1} - 2u_{i,j} + u_{i,j+1}}{\\Delta y^2}\n\\end{align}\nwhere $i$ is the index used in the $x$ direction, $j$ is the index in the $y$ direction, and $\\Delta x$ and $\\Delta y$ are the step sizes in the $x$ and $y$ directions.\nIn other words, $x_i = (i-1) \\Delta x$ and $y_j = (j-1) \\Delta y$.\n\nThe following figure shows the points necessary to approximate the partial derivatives in the PDE at a location $(x_i, y_j)$, for a general 2D region. This is known as a **five-point stencil**:\n\n:::{figure-md} fig-stencil\n\n\nFive-point finite difference stencil\n:::\n\nApplying these finite differences gives us an approximation for Laplace's equation:\n\\begin{equation}\n\\frac{u_{i-1,j} - 2u_{i,j} + u_{i+1,j}}{\\Delta x^2} + \\frac{u_{i,j-1} - 2u_{i,j} + u_{i,j+1}}{\\Delta y^2} = 0 \\;.\n\\end{equation}\nIf we use a uniform grid where $\\Delta x = \\Delta y = h$, then we can simplify to \n\\begin{equation}\nu_{i+1,j} + u_{i,j+1} + u_{i-1,j} + u_{i,j-1} - 4 u_{i,j} = 0 \\;.\n\\end{equation}\n\n## Example: heat transfer in a square plate\n\nAs an example, let's consider the problem of steady-state heat transfer in a square solid object. If $u(x,y)$ is temperature, then this is described by Laplace's equation:\n\\begin{equation}\n\\frac{\\partial^2 u}{\\partial x^2} + \\frac{\\partial^2 u}{\\partial y^2} = \\nabla^2 u = 0 \\;,\n\\end{equation}\nand we can solve this using finite differences. 
Using a uniform grid where $\\Delta x = \\Delta y = h$, Laplace's equation gives us a recursion formula that relates the values at neighboring points:\n\\begin{equation}\nu_{i+1,j} + u_{i,j+1} + u_{i-1,j} + u_{i,j-1} - 4 u_{i,j} = 0 \\;.\n\\end{equation}\n\nConsider a case where the square has sides of length $L$, and the boundary conditions are that the temperature is fixed at 100 on the left, right, and bottom sides, and fixed at 0 on the top.\nFor now, we'll use two segments to discretize the domain in each directions, giving us nine total points in the grid.\nThe following figures show the example problem, and the grid of points we'll use.\n\n:::{figure-md} fig-heat-transfer-square\n\n\nHeat transfer in a square object\n:::\n\n:::{figure-md} fig-grid-three\n\n\nSimple 3x3 grid of points\n:::\n\nUsing the above recursion formula, we can write an equation for each of the nine unknown points (in the interior, not the boundary points):\n\\begin{align}\nu_{1,1} &= 100 \\\\\nu_{2,1} &= 100 \\\\\nu_{3,1} &= 100 \\\\\nu_{1,2} &= 100 \\\\\n\\text{for } u_{2,2}: \\quad u_{3,2} + u_{2,3} + u_{1,2} + u_{2,1} - 4u_{2,2} &= 0 \\\\\nu_{3,2} &= 100 \\\\\nu_{1,3} &= 100 \\\\\nu_{2,3} &= 0 \\\\\nu_{3,3} &= 100\n\\end{align}\nwhere $u_{i,j}$ are the unknowns. Note that in this we used the side boundary condition values for the corner points $u_{1,3}$ and $u_{3,3}$, rather than the top value. (In reality this would represent a discontinuity in temperature, so these aren't very realistic boundary conditions.)\n\nThis is a system of linear equations, that we can represent as a matrix-vector product:\n\\begin{align}\n\\begin{bmatrix} \n1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\\\\n0 & 1 & 0 & 1 & -4 & 1 & 0 & 1 & 0 \\\\\n0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\\\\n0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\\\\n0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\\\\\end{bmatrix}\n\\begin{bmatrix} u_{1,1} \\\\ u_{2,1} \\\\ u_{3,1} \\\\ u_{1,2} \\\\ u_{2,2} \\\\ u_{3,2} \\\\ u_{1,3} \\\\ u_{2,3} \\\\ u_{3,3} \\end{bmatrix} &= \n\\begin{bmatrix} 100 \\\\ 100 \\\\ 100 \\\\ \n100 \\\\ 0 \\\\ 100 \\\\\n100 \\\\ 0 \\\\ 100 \\end{bmatrix} \\\\\n\\text{or} \\quad A \\mathbf{u} &= \\mathbf{b}\n\\end{align}\nwhere $A$ is a $9\\times 9$ coefficient matrix, $\\mathbf{u}$ is a nine-element vector of unknown variables, and $\\mathbf{b}$ is a nine-element right-hand side vector.\nFor $\\mathbf{u}$, we had to take variables that physically represent points in a two-dimensional space and combine them in some order to form a one-dimensional column vector. Here, we used a **row-major** mapping, where we started with the point in the first row and first column, then added the remaining points in that row, before moving to the next row and repeating. We'll discuss this a bit more later.\n\nIf we set this up in Matlab, we can solve with `u = A \\ b`:\n\n\n```matlab\nclear all; clc\n\nA = [\n1 0 0 0 0 0 0 0 0;\n0 1 0 0 0 0 0 0 0;\n0 0 1 0 0 0 0 0 0;\n0 0 0 1 0 0 0 0 0;\n0 1 0 1 -4 1 0 1 0;\n0 0 0 0 0 1 0 0 0;\n0 0 0 0 0 0 1 0 0;\n0 0 0 0 0 0 0 1 0;\n0 0 0 0 0 0 0 0 1];\nb = [100; 100; 100; 100; 0; 100; 100; 0; 100];\n\n% solve system of linear equations\nu = A \\ b;\n\ndisp(u)\n```\n\n 100\n 100\n 100\n 100\n 75\n 100\n 100\n 0\n 100\n \n\n\nThis gives us the values for temperature at each of the nine points. In this example, we really only have one unknown temperature: $u_{2,2}$, located in the middle. 
Does the value given make sense? We can check by rearranging the recursion formula for Laplace's equation:\n\\begin{equation}\nu_{i,j} = \\frac{u_{i+1,j} + u_{i,j+1} + u_{i-1,j} + u_{i,j-1}}{4} \\;,\n\\end{equation}\nwhich shows that in such problems the value of the middle point should be the average of the four surrounding points. This matches the value of 75 found above.\n\nWe can use a contour plot to visualize the results, though we'll need to convert the one-dimensional solution array into a two-dimensional matrix to plot. The Matlab `reshape()` function can help us here: it reshapes an array into a matrix, by specifying the target number of desired columns and rows:\n\n\n```matlab\n% Example of using the reshape function, with a simple array going from 1 to 10\n\n% We want to convert it into a matrix with 5 columns and 2 rows.\n% The expected output is:\n% [1 2 3 4 5; \n% 6 7 8 9 10]\n\nb = (1 : 10)';\nA = reshape(b, [5, 2]);\ndisp('b array:')\ndisp(b)\ndisp('A matrix:')\ndisp(A)\n```\n\n b array:\n 1\n 2\n 3\n 4\n 5\n 6\n 7\n 8\n 9\n 10\n \n A matrix:\n 1 6\n 2 7\n 3 8\n 4 9\n 5 10\n \n\n\nThis behavior may be a bit unexpected, because `reshape()` uses a column-major mapping. We can fix this by taking the transpose of the resulting matrix:\n\n\n```matlab\ndisp('transpose of output matrix:')\ndisp(A')\n```\n\n transpose of output matrix:\n 1 2 3 4 5\n 6 7 8 9 10\n \n\n\n\n```matlab\n% We can use the reshape function to convert the calculated temperatures\n% into a 3x3 matrix:\n\nn = 3; m = 3;\nu_square = reshape(u, [n, m]);\n\ncontourf(u_square')\ncolorbar\n```\n\nOverall that looks correct: the boundary conditions are right, and we see that the center is the average of the boundaries.\n\nBut, clearly only using nine points (with eight of those being boundary conditions) doesn't give us a very good solution. To make this more accurate, we'll need to use more points, which also means we need to automate the construction of the system of equations.\n\n## Row-major mapping\n\nFor a two-dimensional elliptic PDE like Laplace's equation, we can generate a general recursion formula, but we need a way to take a grid of points where location is defined by row and column index and map these into a one-dimensional column vector, which has its own index.\n\nThe following figure shows a general 2D grid of points, with $n$ number of columns in the $x$ direction (using index $i$) and $m$ number of rows in the $y$ direction (using index $j$):\n\n:::{figure-md} fig-twodim-grid\n\n\n2D grid of points with *n* columns and *m* columns.\n:::\n\nWe want to convert the rows and columns of $u_{i,j}$ points defined by column and row index into a single column array using a different index, $k$ (this choice is arbitrary):\n\\begin{equation}\n\\begin{bmatrix} u_{1,1} \\\\ u_{2,1} \\\\ u_{3,1} \\\\ \\vdots \\\\ u_{n,1} \\\\\nu_{1,2} \\\\ u_{2,2} \\\\ u_{3,2} \\\\ \\vdots \\\\ u_{n, 2} \\\\ u_{1,3} \\\\ \\vdots \\\\\nu_{1,m} \\\\ u_{2,m} \\\\ \\vdots \\\\ u_{n,m}\n\\end{bmatrix}\n\\end{equation}\nwhere $k$ refers to the index used in that array.\n\nTo do this mapping, we can use this formula:\n\\begin{equation}\nk_{i,j} = (j-1)n + i\n\\end{equation}\nwhere $k_{i,j}$ refers to the 1D index $k$ mapped from the 2D indices $i$ and $j$.\n\n:::{figure-md} fig-grid-three\n\n\nSimple 3x3 grid of points\n:::\n\nFor example, in this $3\\times 3$ grid, where $n=3$ and $m=3$, consider the point where $i=2$ and $j=2$ (the point right in the center). 
Using our formula, \n\\begin{equation}\nk_{2,2} = (2-1)3 + 2 = 5\n\\end{equation}\nwhich matches what we can visually confirm.\n\nUsing that mapping, we can also identify the 1D indices associated with the points surrounding location $(i,j)$:\n\\begin{align}\nk_{i-1,j} &= (j-1)n + i - 1 \\\\\nk_{i+1,j} &= (j-1)n + i + 1 \\\\\nk_{i,j-1} &= (j-2)n + i \\\\\nk_{i,j+1} &= j n + i\n\\end{align}\nwhich we can use to determine the appropriate locations to place values in the coefficient and right-hand side matrices.\n\n## Example: heat transfer in a square plate (redux)\n\nLet's return to the example of steady-state heat transfer in a square plate\u2014but this time we'll set the solution up more generally so we can vary the step size $h = \\Delta x = \\Delta y$.\n\n\n```matlab\nclear; clc; close all\n\nh = 0.1;\nx = [0 : h : 1]; n = length(x);\ny = [0 : h : 1]; m = length(y);\n\n% The coefficient matrix A is now m*n by m*n, since that is the total number of points.\n% The right-hand side vector b is m*n by 1.\nA = zeros(m*n, m*n);\nb = zeros(m*n, 1);\n\nu_left = 100;\nu_right = 100;\nu_bottom = 100;\nu_top = 0;\n\nfor j = 1 : m\n for i = 1 : n\n % for convenience we calculate all the indices once\n kij = (j-1)*n + i;\n kim1j = (j-1)*n + i - 1;\n kip1j = (j-1)*n + i + 1;\n kijm1 = (j-2)*n + i;\n kijp1 = j*n + i;\n \n if i == 1 \n % this is the left boundary\n A(kij, kij) = 1;\n b(kij) = u_left;\n elseif i == n \n % right boundary\n A(kij, kij) = 1;\n b(kij) = u_right;\n elseif j == 1 \n % bottom boundary\n A(kij, kij) = 1;\n b(kij) = u_bottom;\n elseif j == m \n % top boundary\n A(kij, kij) = 1;\n b(kij) = u_top;\n else\n % these are the coefficients for the interior points,\n % based on the recursion formula\n A(kij, kim1j) = 1;\n A(kij, kip1j) = 1;\n A(kij, kijm1) = 1;\n A(kij, kijp1) = 1;\n A(kij, kij) = -4;\n end\n end\nend\nu = A \\ b;\n\nu_square = reshape(u, [n, m]);\ncontourf(x, y, u_square')\nc = colorbar;\nc.Label.String = 'Temperature';\n```\n\n## Neumann (derivative) boundary conditions\n\nSo far, we have only discussed cases where we have Dirichlet boundary conditions; in other words, when we have all fixed values at the boundary. Frequently we also encounter Neumann-style boundary conditions, where we have the *derivative* specified at the boundary.\n\nWe can handle this in the same way we do for one-dimensional boundary value problems: either with a forward or backward difference (both of which are first-order accurate), or with a central difference using an imaginary point/ghost node (which is second-order accurate). Let's focus on using the central difference, since it is more accurate.\n\n:::{figure-md} fig-ghost-node\n\n\nGhost/imaginary node beyond an upper boundary\n:::\n\nFor example, let's say that at the upper boundary, the derivative of temperature is zero:\n\\begin{equation}\n\\left. \\frac{\\partial u}{\\partial y} \\right|_{\\text{boundary}} = 0\n\\end{equation}\n\nLet's consider this boundary condition applied at the point shown, $u_{2,3}$.\nWe can approximate this derivative using a central difference:\n\\begin{align}\n\\frac{u_{2,3}}{\\partial y} \\approx \\frac{u_{2,4} - u_{2,2}}{\\Delta x} &= 0 \\\\\nu_{2,4} &= u_{2,2}\n\\end{align}\nThis tells us the value of the point above the boundary, $u_{2,4}$; however, this point is a \"ghost\" or imaginary point located outside the boundary, so we don't really care about its value. 
Instead, we can use this relationship to give us a usable equation for the boundary point, by incorporating it into the normal recursion formula for Laplace's equation:\n\\begin{align}\nu_{1,3} + u_{3,3} + u_{2,4} + u_{2,2} - 4u_{2,3} &= 0 \\\\\nu_{1,3} + u_{3,3} + u_{2,2} + u_{2,2} - 4u_{2,3} &= 0 \\\\\n\\rightarrow u_{1,3} + u_{3,3} + 2 u_{2,2} - 4u_{2,3} &= 0\n\\end{align}\n\nThe recursion formula for points along the upper boundary would then become\n\\begin{equation}\nu_{i+1,j} + u_{i-1,j} + 2 u_{i,j-1} - 4 u_{i,j} = 0 \\;.\n\\end{equation}\n\nNow let's try solving the above example, but with $\\frac{\\partial u}{\\partial y} = 0$ at the top boundary and $u = 0$ at the bottom boundary:\n\n\n```matlab\nclear; clc; close all\n\nh = 0.1;\nx = [0 : h : 1]; n = length(x);\ny = [0 : h : 1]; m = length(y);\n\n% The coefficient matrix A is now m*n by m*n, since that is the total number of points.\n% The right-hand side vector b is m*n by 1.\nA = zeros(m*n, m*n);\nb = zeros(m*n, 1);\n\nu_left = 100;\nu_right = 100;\nu_bottom = 0;\n\nfor j = 1 : m\n for i = 1 : n\n % for convenience we calculate all the indices once\n kij = (j-1)*n + i;\n kim1j = (j-1)*n + i - 1;\n kip1j = (j-1)*n + i + 1;\n kijm1 = (j-2)*n + i;\n kijp1 = j*n + i;\n \n if i == 1 \n % this is the left boundary\n A(kij, kij) = 1;\n b(kij) = u_left;\n elseif i == n \n % right boundary\n A(kij, kij) = 1;\n b(kij) = u_right;\n elseif j == 1 \n % bottom boundary\n A(kij, kij) = 1;\n b(kij) = u_bottom;\n elseif j == m \n % top boundary, using the ghost node + recursion formula\n A(kij, kim1j) = 1;\n A(kij, kip1j) = 1;\n A(kij, kijm1) = 2;\n A(kij, kij) = -4;\n else\n % these are the coefficients for the interior points,\n % based on the recursion formula\n A(kij, kim1j) = 1;\n A(kij, kip1j) = 1;\n A(kij, kijm1) = 1;\n A(kij, kijp1) = 1;\n A(kij, kij) = -4;\n end\n end\nend\nu = A \\ b;\n\nu_square = reshape(u, [n, m]);\n% the \"20\" indicates the number of levels for the contour plot\ncontourf(x, y, u_square', 20);\nc = colorbar;\nc.Label.String = 'Temperature';\n```\n\n## Iterative solutions for (very) large problems\n\nSo far, we've been able to solve our systems of linear equations in Matlab by using `y = A \\ b`, which directly finds the solution to the equation $A \\mathbf{y} = \\mathbf{b}$.\n\nHowever, this approach will become very slow as the grid resolution ($h = \\Delta x = \\Delta y$) becomes smaller, and eventually unfeasable due to the associated computational requirements. 
First, let's create a function that takes as input the segment size $h$, then returns the time ittakes to solve the problem for different sizes.\n\n\n```matlab\n%%file heat_equation.m\nfunction [time, num] = heat_equation(h)\n\nx = [0 : h : 1]; n = length(x);\ny = [0 : h : 1]; m = length(y);\n\n% The coefficient matrix A is now m*n by m*n, since that is the total number of points.\n% The right-hand side vector b is m*n by 1.\nA = zeros(m*n, m*n);\nb = zeros(m*n, 1);\n\nnum = m*n; % number of points\n\ntic;\n\nu_left = 100;\nu_right = 100;\nu_bottom = 100;\nu_top = 0;\n\nfor j = 1 : m\n for i = 1 : n\n % for convenience we calculate all the indices once\n kij = (j-1)*n + i;\n kim1j = (j-1)*n + i - 1;\n kip1j = (j-1)*n + i + 1;\n kijm1 = (j-2)*n + i;\n kijp1 = j*n + i;\n \n if i == 1 \n % this is the left boundary\n A(kij, kij) = 1;\n b(kij) = u_left;\n elseif i == n \n % right boundary\n A(kij, kij) = 1;\n b(kij) = u_right;\n elseif j == 1 \n % bottom boundary\n A(kij, kij) = 1;\n b(kij) = u_bottom;\n elseif j == m \n % top boundary\n A(kij, kij) = 1;\n b(kij) = u_top;\n else\n % these are the coefficients for the interior points,\n % based on the recursion formula\n A(kij, kim1j) = 1;\n A(kij, kip1j) = 1;\n A(kij, kijm1) = 1;\n A(kij, kijp1) = 1;\n A(kij, kij) = -4;\n end\n end\nend\nu = A \\ b;\n\nu_square = reshape(u, [n, m]);\n% the \"20\" indicates the number of levels for the contour plot\n%contourf(x, y, u_square', 20);\n%c = colorbar;\n%c.Label.String = 'Temperature';\n\ntime = toc;\n```\n\n Created file '/Users/kyle/projects/ME373-book/content/pdes/heat_equation.m'.\n\n\nNow, we can see how long it takes to solve as we increase the resolution, and get an idea about the relationship between time-to-solution and number of unknowns.\n\n\n\n\n```matlab\nclear all; clc;\n\nstep_sizes = [0.1, 0.05, 0.025, 0.02, 0.0125, 0.01];\n\nn = length(step_sizes);\nnums = zeros(n,1); times = zeros(n,1);\n\nfor i = 1 : n\n [times(i), nums(i)] = heat_equation(step_sizes(i));\nend\n\nloglog(nums, times, '-o')\nxlabel('Number of unknowns'); \nylabel('Time for direct solution (sec)')\nhold on\n\nx = nums(3:end);\nn2 = x.^2 * (times(3) / x(1)^2);\nn3 = x.^3 * (times(3) / x(1)^3);\nplot(x, n2, '-x');\nplot(x, n3, '-s');\nlegend('Actual cost', 'Quadratic cost', 'Cubic cost', 'Location', 'northwest')\n```\n\nInterestingly, we see that the slope in this log-log plot that after about 400 unknowns (so a coefficient matrix of about 160,000), the cost begins to increase exponentially, somewhere between quadratic ($\\mathcal{O}(n^2)$) and cubic ($\\mathcal{O}(n^3)$).\n\nIf we try to reduce the step size further, for example to 0.005, we'll see that we cannot get a solution in a reasonable amount of time. But, clearly we want to get solutions for large numbers of unknowns, so what can we do?\n\nWe can solve larger systems of linear equations using *iterative* methods. There are a number of these, and we'll focus on two:\n\n- Jacobi method\n- Gauss-Seidel method\n\n\n### Jacobi method\n\nThe Jacobi method essentially works by starting with an initial guess to the solution, then using the recursion formula to solve for values at each point, then repeating this until the values converge (i.e., stop changing). \n\nAn algorithm we can use to solve Laplace's equation:\n\n1. Set some initial guess for all unknowns: $u_{i,j}^{\\text{old}}$\n2. Set the boundary values\n3. 
For each point in the interior, use the recursion formula to solve for new values based on old values at the surrounding points: $u_{i,j} = \\left( u_{i+1,j}^{\\text{old}} + u_{i-1,j}^{\\text{old}} + u_{i,j+1}^{\\text{old}} + u_{i,j-1}^{\\text{old}} \\right)/4$.\n4. Check for convergence: is $\\epsilon$ less than some tolerance, such as $10^{-6}$? Where $\\epsilon = \\max \\left| u_{i,j} - u_{i,j}^{\\text{old}} \\right|$. If no, then return to step 2 and repeat.\n\nMore formally, if we have a system $A \\mathbf{x} = \\mathbf{b}$, where\n\\begin{equation}\nA = \\begin{bmatrix}\na_{11} & a_{12} & \\cdots & a_{1n} \\\\\na_{21} & a_{22} & \\cdots & a_{2n} \\\\\n\\vdots & \\vdots & \\ddots & \\vdots \\\\\na_{n1} & a_{n2} & \\cdots & a_{nn} \\end{bmatrix}\n\\quad \\mathbf{x} = \\begin{bmatrix} x_1 \\\\ x_2 \\\\ \\vdots \\\\ x_n \\end{bmatrix}\n\\quad \\mathbf{b} = \\begin{bmatrix} b_1 \\\\ b_2 \\\\ \\vdots \\\\ b_n \\end{bmatrix}\n\\end{equation}\nthen we can solve iterative for $\\mathbf{x}$ using\n\\begin{equation}\nx_i^{(k+1)} = \\frac{1}{a_{ii}} \\left( b_i - \\sum_{j \\neq i} a_{ij} x_j^{(k)} \\right) , \\quad i = 1,2,\\ldots, n \n\\end{equation}\nwhere $x_i^{(k)}$ is a value of the solution at iteration $k$ and $x_i^{(k+1)}$ is at the next iteration.\n\n\n```matlab\n%%file heat_equation_jacobi.m\nfunction [time, num_point, num_iter] = heat_equation_jacobi(h)\n\nx = [0 : h : 1]; n = length(x);\ny = [0 : h : 1]; m = length(y);\n\n% The coefficient matrix A is now m*n by m*n, since that is the total number of points.\n% The right-hand side vector b is m*n by 1.\nA = zeros(m*n, m*n);\nb = zeros(m*n, 1);\nnum_point = m*n;\n\ntic;\n\nu_left = 100;\nu_right = 100;\nu_bottom = 100;\nu_top = 0;\n\n% initial guess\nu = 100*ones(m*n, 1);\n\n% dummy value for residual variable\nepsilon = 1.0; \n\nnum_iter = 0;\nwhile epsilon > 1e-6\n u_old = u;\n \n epsilon = 0;\n for j = 1 : m\n for i = 1 : n\n kij = (j-1)*n + i;\n kim1j = (j-1)*n + i - 1;\n kip1j = (j-1)*n + i + 1;\n kijm1 = (j-2)*n + i;\n kijp1 = j*n + i;\n\n if i == 1 \n % this is the left boundary\n u(kij) = u_left;\n elseif i == n \n % right boundary\n u(kij) = u_right;\n elseif j == 1 \n % bottom boundary\n u(kij) = u_bottom;\n elseif j == m \n % top boundary\n u(kij) = u_top;\n else\n % interior points\n u(kij) = (u_old(kip1j) + u_old(kim1j) + u_old(kijm1) + u_old(kijp1))/4.0;\n end\n end\n end\n \n epsilon = max(abs(u - u_old));\n num_iter = num_iter + 1;\nend\n\nu_square = reshape(u, [n, m]);\n% the \"20\" indicates the number of levels for the contour plot\ncontourf(x, y, u_square', 20);\nc = colorbar;\nc.Label.String = 'Temperature';\n\ntime = toc;\nfprintf('Number of iterations: %d\\n', num_iter)\n```\n\n Created file '/Users/kyle/projects/ME373-book/content/pdes/heat_equation_jacobi.m'.\n\n\n\n```matlab\nstep_sizes = [0.1, 0.05, 0.025, 0.02, 0.0125, 0.01, 0.005];\nn = length(step_sizes);\n\nnums_jac = zeros(n,1); times_jac = zeros(n,1); num_iter_jac = zeros(n,1);\n\nfor i = 1 : n\n [times_jac(i), nums_jac(i), num_iter_jac(i)] = heat_equation_jacobi(step_sizes(i));\nend\n```\n\n### Gauss-Seidel method\n\nThe Gauss-Seidel method is very similar to the Jacobi method, but with one important difference: rather than using all old values to calculate the new values, incorporate updated values as they are available. 
Because the method incorporates newer information more quickly, it tends to converge faster (meaning, with fewer iterations) than the Jacobi method.\n\nFormally, if we have a system $A \\mathbf{x} = \\mathbf{b}$, where\n\\begin{equation}\nA = \\begin{bmatrix}\na_{11} & a_{12} & \\cdots & a_{1n} \\\\\na_{21} & a_{22} & \\cdots & a_{2n} \\\\\n\\vdots & \\vdots & \\ddots & \\vdots \\\\\na_{n1} & a_{n2} & \\cdots & a_{nn} \\end{bmatrix}\n\\quad \\mathbf{x} = \\begin{bmatrix} x_1 \\\\ x_2 \\\\ \\vdots \\\\ x_n \\end{bmatrix}\n\\quad \\mathbf{b} = \\begin{bmatrix} b_1 \\\\ b_2 \\\\ \\vdots \\\\ b_n \\end{bmatrix}\n\\end{equation}\nthen we can solve iterative for $\\mathbf{x}$ using\n\\begin{equation}\nx_i^{(k+1)} = \\frac{1}{a_{ii}} \\left( b_i - \\sum_{j=1}^{i-1} a_{ij} x_j^{(k+1)} - \\sum_{j =i+1}^n a_{ij} x_j^{(k)} \\right) , \\quad i = 1,2,\\ldots, n \n\\end{equation}\nwhere $x_i^{(k)}$ is a value of the solution at iteration $k$ and $x_i^{(k+1)}$ is at the next iteration.\n\n\n```matlab\n%%file heat_equation_gaussseidel.m\nfunction [time, num_point, num_iter] = heat_equation_gaussseidel(h)\n\nx = [0 : h : 1]; n = length(x);\ny = [0 : h : 1]; m = length(y);\n\n% The coefficient matrix A is now m*n by m*n, since that is the total number of points.\n% The right-hand side vector b is m*n by 1.\nA = zeros(m*n, m*n);\nb = zeros(m*n, 1);\nnum_point = m*n;\n\ntic;\n\nu_left = 100;\nu_right = 100;\nu_bottom = 100;\nu_top = 0;\n\n% initial guess\nu = 100*ones(m*n, 1);\n\n% dummy value for residual variable\nepsilon = 1.0; \n\nnum_iter = 0;\nwhile epsilon > 1e-6\n u_old = u;\n \n epsilon = 0;\n for j = 1 : m\n for i = 1 : n\n kij = (j-1)*n + i;\n kim1j = (j-1)*n + i - 1;\n kip1j = (j-1)*n + i + 1;\n kijm1 = (j-2)*n + i;\n kijp1 = j*n + i;\n\n if i == 1 \n % this is the left boundary\n u(kij) = u_left;\n elseif i == n \n % right boundary\n u(kij) = u_right;\n elseif j == 1 \n % bottom boundary\n u(kij) = u_bottom;\n elseif j == m \n % top boundary\n u(kij) = u_top;\n else\n % interior points\n u(kij) = (u(kip1j) + u(kim1j) + u(kijm1) + u(kijp1))/4.0;\n end\n end\n end\n \n epsilon = max(abs(u - u_old));\n num_iter = num_iter + 1;\nend\n\nu_square = reshape(u, [n, m]);\n%% the \"20\" indicates the number of levels for the contour plot\ncontourf(x, y, u_square', 20);\nc = colorbar;\nc.Label.String = 'Temperature';\n\ntime = toc;\nfprintf('Number of iterations: %d\\n', num_iter)\n```\n\n Created file '/Users/kyle/projects/ME373-book/content/pdes/heat_equation_gaussseidel.m'.\n\n\n\n```matlab\nstep_sizes = [0.1, 0.05, 0.025, 0.02, 0.0125, 0.01, 0.005];\nn = length(step_sizes);\n\nnums_gs = zeros(n,1); times_gs = zeros(n,1); num_iter_gs = zeros(n,1);\n\nfor i = 1 : n\n [times_gs(i), nums_gs(i), num_iter_gs(i)] = heat_equation_gaussseidel(step_sizes(i));\nend\n\nloglog(nums, times, '-o'); hold on\nloglog(nums_jac, times_jac, '-^')\nloglog(nums_gs, times_gs, '-x')\nxlabel('Number of unknowns'); \nylabel('Time for solution (sec)')\nlegend('Direct solution', 'Jacobi solution', 'Gauss-Seidel solution', 'Location', 'northwest')\n```\n\n\n```matlab\nloglog(nums_jac, num_iter_jac, '-^')\nhold on;\nloglog(nums_gs, num_iter_gs, '-x')\nxlabel('Number of unknowns')\nylabel('Number of iterations required')\nlegend('Jacobi method', 'Gauss-Seidel method', 'Location', 'northwest')\n```\n\nThese results show us a few things:\n\n- For very small problems, the direct solution method is faster.\n- For the heat equation, once we get to around 1000 unknowns, the methods perform similarly. 
Beyond this, the direct solution method becomes unreasonably slow, and even fails to solve in a reasonable time for a step size of 0.005.\n- The Gauss-Seidel method converges with around half the number of iterations than the Jacobi method.\n\nFor larger, more-realistic problems, iterative solution methods like Jacobi and Gauss-Seidel are essential.\n", "meta": {"hexsha": "1c0cbf56f77e0290bd5059ecd69d2c09fb5338ea", "size": 356595, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "docs/_sources/content/pdes/elliptic.ipynb", "max_stars_repo_name": "kyleniemeyer/ME373-book", "max_stars_repo_head_hexsha": "66a9ef0f69a8c4e1656c02080aebfb5704e1a089", "max_stars_repo_licenses": ["CC-BY-4.0", "BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "docs/_sources/content/pdes/elliptic.ipynb", "max_issues_repo_name": "kyleniemeyer/ME373-book", "max_issues_repo_head_hexsha": "66a9ef0f69a8c4e1656c02080aebfb5704e1a089", "max_issues_repo_licenses": ["CC-BY-4.0", "BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/_sources/content/pdes/elliptic.ipynb", "max_forks_repo_name": "kyleniemeyer/ME373-book", "max_forks_repo_head_hexsha": "66a9ef0f69a8c4e1656c02080aebfb5704e1a089", "max_forks_repo_licenses": ["CC-BY-4.0", "BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 327.1513761468, "max_line_length": 73340, "alphanum_fraction": 0.9160756601, "converted": true, "num_tokens": 8988, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.934395168021653, "lm_q2_score": 0.8902942319436395, "lm_q1q2_score": 0.8318866284456856}} {"text": "# SciPy\n## Solve Linear Equation\n\n\n```python\n#import \nfrom scipy import linalg as sla\nimport numpy as np\nimport sympy as sy\nsy.init_printing()\n\nfrom matplotlib import pyplot as plt\nfrom pprint import pprint\n```\n\n\n```python\nA=sy.Matrix([[3,2],[-3,5]])\nb=sy.Matrix([8,-1])\n```\n\n\n```python\nA, b\n```\n\n## try %timeit\n\n\n```python\n%timeit x=A.inv() * b\n```\n\n 998 \u00b5s \u00b1 60.9 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1000 loops each)\n\n\n\n```python\n%timeit x=A.solve(b)\n```\n\n 1.23 ms \u00b1 196 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1000 loops each)\n\n\n\n```python\n%timeit x=A.LUsolve(b)\n```\n\n 1.06 ms \u00b1 522 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1 loop each)\n\n\n\n```python\nA=np.array(A).astype(float)\nb=np.array(b).astype(float)\nprint(A)\nprint(b)\n```\n\n [[ 3. 2.]\n [-3. 5.]]\n [[ 8.]\n [-1.]]\n\n\n\n```python\n%timeit x=sla.solve(A,b)\n```\n\n 199 \u00b5s \u00b1 53.5 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1000 loops each)\n\n\n\n```python\n# Handle Exception\n```\n\n\n```python\nA=sy.Matrix([[3,2],[6,4 ]])\nb=sy.Matrix([8,-1])\n```\n\n\n```python\nA,b\n```\n\n\n```python\nx=A.LUsolve(b)\nx\n```\n\n\n```python\nA=np.array(A).astype(float)\nb=np.array(b).astype(float)\nprint(A)\nprint(b)\n```\n\n [[ 3. 2.]\n [ 6. 
4.]]\n [[ 8.]\n [-1.]]\n\n\n\n```python\nx=sla.solve(A,b)\n```\n", "meta": {"hexsha": "e353aa85bbea4c9f413e903494156f432237d72e", "size": 11083, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "scipyExercise/Solve Linear Equation.ipynb", "max_stars_repo_name": "terasakisatoshi/pythonCodes", "max_stars_repo_head_hexsha": "baee095ecee96f6b5ec6431267cdc6c40512a542", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "scipyExercise/Solve Linear Equation.ipynb", "max_issues_repo_name": "terasakisatoshi/pythonCodes", "max_issues_repo_head_hexsha": "baee095ecee96f6b5ec6431267cdc6c40512a542", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "scipyExercise/Solve Linear Equation.ipynb", "max_forks_repo_name": "terasakisatoshi/pythonCodes", "max_forks_repo_head_hexsha": "baee095ecee96f6b5ec6431267cdc6c40512a542", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.9837662338, "max_line_length": 1660, "alphanum_fraction": 0.6509067942, "converted": true, "num_tokens": 465, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9343951661947455, "lm_q2_score": 0.8902942304882371, "lm_q1q2_score": 0.8318866254592794}} {"text": "\n\n# Matem\u00e1tica: Fun\u00e7\u00f5es e Seus Usos\n\n## Equa\u00e7\u00e3o de segundo Grau com Python\n\n\n```\nfrom math import sqrt\nfrom sympy import *\nimport math\ninit_printing()\n\na = 1\nb = 8\nc = 6\n```\n\n\n```\n# Resolvendo equa\u00e7\u00e3o de segundo grau\ndef raizes(a, b, c):\n x = Symbol('x')\n # ax\u00b2 + bx + c\n return solve( a*x**2 + b*x + c , x)\n\n# Plotando equa\u00e7\u00e3o de segundo grau\ndef grafico_2grau(x1,x2):\n eixo_x = []\n eixo_y = []\n zero = []\n\n variacao = abs(x1 - x2)\n\n if variacao < 3:\n variacao = 3\n\n for x in np.arange(x1 - variacao, x2 + variacao, variacao / 100):\n y = a * (x ** 2 ) + b * (x) + c\n eixo_x.append(x)\n eixo_y.append(y)\n zero.append(0.0)\n\n # Desenha linha\n plt.plot(eixo_x,eixo_y,color=\"blue\")\n\n # Desenha pontos\n plt.plot((x1,x2),(0,0), marker='o',color='red')\n plt.plot(c, marker='o',color='red')\n\n # Desenha eixos\n plt.plot(eixo_x,zero,color=\"black\")\n plt.plot(zero,eixo_y,color='black')\n\n plt.show()\n\nr = raizes(a,b,c)\n```\n\n\n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\ntry:\n x1 = float(r[0])\n x2 = float(r[1])\n grafico_2grau(x1,x2)\n print('x1 = %0.3f' % x1, 'x2= %0.3f' % x2)\nexcept:\n try:\n x1 = complex(r[0])\n x2 = complex(r[1])\n print('O resultado das ra\u00edzes \u00e9 um n\u00famero complexo')\n print('x1 =', x1, '\\nx2 =', x2)\n except:\n print('O valor de a \u00e9 0, resultado indefinido.')\n```\n\n## Utiliza\u00e7\u00e3o do sympy para criar vari\u00e1veis simb\u00f3licas\n\n\n```\n# define x, y e z como vari\u00e1veis simb\u00f3licas\nvar('x y z')\n```\n\n# Fun\u00e7\u00f5es\n\n## Exemplo 01: Custo de opera\u00e7\u00e3o de uma m\u00e1quina\n\n\n* A empresa deseja saber o custo de energia el\u00e1trica de uma m\u00e1quina.\n* A pot\u00eancia da m\u00e1quina \u00e9 de 500 kW\n* O custo de cada kW*h \u00e9 de R$ 0,10\n\nQual a fun\u00e7\u00e3o matem\u00e1tica que descreve o consumo da m\u00e1quina em fun\u00e7\u00e3o da 
hora?\n\n\n\n\n\n```\n# Custo da m\u00e1quina por hora\ndef f(x): return 500*0.1*x\nx = 24 * 30\nprint('O custo \u00e9 de:', f(x), 'por %2.0f hora(s)' % x)\n\n# Custo da m\u00e1quina por dia\ndef C(x): return (500*0.1*x)*24\nx = 30\nprint('O custo \u00e9 de:', C(x), 'por %2.0f dia(s)' % x)\n```\n\n O custo \u00e9 de: 36000.0 por 720 hora(s)\n O custo \u00e9 de: 36000.0 por 30 dia(s)\n\n\n### Plotando Gr\u00e1ficos\n\n\n```\n# Calculando horas em um m\u00eas\ndias = 30\nhoras = dias*24\n\nlista_preco = list()\nfor hora in range(0,horas):\n lista_preco.append(f(hora))\n\n# Gr\u00e1fico\n\nfig, ax = plt.subplots()\n\n# T\u00edtulo do gr\u00e1fico\nplt.title('Pre\u00e7o kW*h em um m\u00eas')\n\n# Eixos \nax.set_xlabel('horas')\nax.set_ylabel('Pre\u00e7o em (R$)')\n\nplt.plot(lista_preco)\n```\n\n\n```\n# Podemos redefinir o pre\u00e7o em fun\u00e7\u00e3o dos dias\n\ndias = 30\n\nlista_preco2 = list()\n\nfor dia in range(0,dias):\n lista_preco2.append(C(dia))\n\n# Gr\u00e1fico \n\nfig, ax = plt.subplots()\n\n# T\u00edtulo do gr\u00e1fico\nplt.title('Pre\u00e7o kW*dia em um m\u00eas')\n\n# Eixos \nax.set_xlabel('dias')\nax.set_ylabel('Pre\u00e7o em (R$)')\n\nplt.plot(lista_preco2)\n```\n\n## Exemplo 02: Realizando Previs\u00f5es com fun\u00e7\u00f5es\n\n\n* A situa\u00e7\u00e3o problema da empresa \u00e9: temos a fun\u00e7\u00e3o custo C(x):\n\n

    C(x) = 0.02x\u00b2 + 80/x

    \n\n* Qual a dimens\u00e3o x da caixa, a partir da qual o custo \u00e9 maior ou igual a R$ 22,00?\n\n\n\n### Definindo Fun\u00e7\u00f5es para o exemplo\n\n\n```\ndef C(x): return (0.02*(x**2)) + (80/x)\n\ndef c(x): return round(C(x) - 22,2)\n\ndef find_raizes(y,x1,x2,p=0.005):\n lista_raizes = list()\n lista_index = list()\n raizes_= list()\n\n for i in np.arange(x1,x2,step=0.001):\n if (C(i)<= y and C(i)>= y-p):\n # Pre\u00e7o\n lista_raizes.append(round(C(i),4))\n # m\u00b2\n lista_index.append(round(i,2))\n\n return lista_index, lista_raizes\n```\n\n### Plotando Gr\u00e1ficos\n\n\n```\nresultado = find_raizes(22,3,5)\n\nimport itertools\n\nfor i, j in itertools.product(resultado[0], resultado[1]):\n valor_x = float(i)\n valor_y = float(j)\n\nprint('O valor de x \u00e9:', valor_x)\nprint('O valor de y \u00e9:', valor_y)\n\nresultado2 = find_raizes(22,30,35)\n\nprint()\n\nfor i, j in itertools.product(resultado2[0], resultado2[1]):\n valor_x2 = float(i)\n valor_y2 = float(j)\n\nprint('O valor de x2 \u00e9:', valor_x2)\nprint('O valor de y2 \u00e9:', valor_y2)\n\n```\n\n O valor de x \u00e9: 3.68\n O valor de y \u00e9: 21.9985\n \n O valor de x2 \u00e9: 31.17\n O valor de y2 \u00e9: 21.9991\n\n\n\n```\nlista_ex02 = list()\n\nx = 35\n\nzero_ex02 = []\neixo_x_ex02 = []\nraiz = []\n\nfor i in np.arange(0.5,x, step=0.01):\n lista_ex02.append(C(i))\n zero_ex02.append(22)\n eixo_x_ex02.append(i)\n\n# Gr\u00e1fico\nfig, ax = plt.subplots()\n\n# T\u00edtulo do gr\u00e1fico\nplt.title('Custo de produ\u00e7\u00e3o da caixa')\n\n# Eixos \nax.set_xlabel('cm')\nax.set_ylabel('Pre\u00e7o em (R$)')\n\nplt.plot(eixo_x_ex02,lista_ex02)\n\nplt.plot(eixo_x_ex02,zero_ex02,color='red')\n\nplt.plot(valor_x,valor_y, marker='o',color='black')\nplt.plot(valor_x2,valor_y2, marker='o',color='black')\n\nprint('Logo, para tamanhos menores que: %0.2fcm e maiores que %0.2fcm, o custo de R$22,00 \u00e9 invi\u00e1vel.' 
% (valor_x,valor_x2))\n```\n\n## Varia\u00e7\u00e3o Media de Fun\u00e7\u00e3o: Parte 1\n\nA varia\u00e7\u00e3o m\u00e9dia de uma fun\u00e7\u00e3o f(x), dentro do intervalo [a,b] \u00e9 definida por:\n\n$\n\\dfrac{\\Delta f(x)}{\\Delta x} = \\dfrac{f(b)-f(a)}{b-a}\n$\n\n\n```\nvar ('a b')\n\ndef f(a): return 2*a**2-1\nf(a)\n\neixo_x_exvm = []\neixo_y_exvm = []\nraiz = []\n\nfor i in np.arange(0,5,step=0.1):\n eixo_y_exvm .append(f(i))\n eixo_x_exvm.append(i)\n\n# Gr\u00e1fico\nfig, ax = plt.subplots()\n\n# T\u00edtulo do gr\u00e1fico\nplt.title('Gr\u00e1fico da Fun\u00e7\u00e3o')\n\nplt.plot(eixo_x_exvm,eixo_y_exvm)\n```\n\n\n```\ndef variacao(a,b): return (f(b)-f(a))/(b-a)\n\n# Alguns exemplos de varia\u00e7\u00e3o m\u00e9dia da fun\u00e7\u00e3o\nprint(variacao(4,2))\nprint(variacao(5,2))\nprint(variacao(1,2))\nprint(variacao(3,5))\n```\n\n 12.0\n 14.0\n 6.0\n 16.0\n\n\n## Varia\u00e7\u00e3o Media de Fun\u00e7\u00e3o: Parte 2\n\nUma descri\u00e7\u00e3o mais flex\u00edvel para varia\u00e7\u00e3o m\u00e9dia de uma fun\u00e7\u00e3o \u00e9: \n\nA varia\u00e7\u00e3o m\u00e9dia da fun\u00e7\u00e3o f(x), no intevalo delta(x) \u00e9 dada por:\n\n$\n\\dfrac{\\Delta f(x)}{\\Delta x} = \\dfrac{(f(x) + \\Delta x) - f(x)}{\\Delta x}\n$\n\n\n```\nvar ('x delta_x p')\ndef C(p): return 0.02*p**2+80/p\nprint(C(p))\ndef h(x,delta_x): return (C(x + delta_x) - C(x) )/ delta_x\n\n# A fun\u00e7\u00e3o h() pode ser usada para calcular a varia\u00e7\u00e3o da m\u00e9dia \nprint(h(10,0.5))\n```\n\n 0.02*p**2 + 80/p\n -0.3519047619047626\n\n\n## Dominio e Imagem de uma Fun\u00e7\u00e3o\n\nO dom\u00ednio da fun\u00e7\u00e3o f(x) corresponde ao conjunto de todos os valores x onde f(x) pode ser calculada.\n\nA imagem da fun\u00e7\u00e3o f(x) corresponde ao conjunto de todos os valores que a fun\u00e7\u00e3o assume.\n\n\n```\nvar ('x')\ndef f(x): return sqrt(x-1)\n\neixo_y_ex_dominio = list()\neixo_x_ex_dominio = list()\n\nfor i in np.arange(3,5, step=0.01):\n eixo_y_ex_dominio.append(f(i))\n eixo_x_ex_dominio.append(i)\n\nplt.plot(eixo_x_ex_dominio,eixo_y_ex_dominio)\n\n```\n\n## Inversa de uma Fun\u00e7\u00e3o\n\n### Fun\u00e7\u00e3o Injetora\n Uma fun\u00e7\u00e3o \u00e9 injetora, se e somente se, para quaisquer x1 e x2 pertencentes ao dom\u00ednio, temos que:\n \n $\n x_1 \\neq x_2 \\rightarrow f(x_1) \\neq f(x_2)\n $ \n\n**Exemplo :** Note que: $ f: \\mathbb{R} \\rightarrow \\mathbb{R} $ **tal que:** $ f(x) = 5 $ \u00e9 uma fun\u00e7\u00e3o, mas n\u00e3o \u00e9 injetora!\n\n\n```\nvar ('x')\ndef f(x): return 5\n\neixo_x = list()\neixo_y = list()\n\nfor i in range(-5,6):\n eixo_x.append(i)\n eixo_y.append(f(i))\n\nplt.plot(eixo_x, eixo_y)\n```\n\n**Exemplo02 :** Note que: $ f: \\mathbb{R} \\rightarrow \\mathbb{R} $ **tal que:** $ f(x) = 5 * x $ \u00e9 uma fun\u00e7\u00e3o injetora!\n\n\n```\nvar ('x')\ndef f(x): return 5*x\n\neixo_x = list()\neixo_y = list()\n\nfor i in range(-5,6):\n eixo_x.append(i)\n eixo_y.append(f(i))\n\nplt.plot(eixo_x, eixo_y)\n```\n\n### Fun\u00e7\u00e3o Sobrejetora\nUma fun\u00e7\u00e3o $ f: A \\rightarrow B $ \u00e9 sobrejetora, se cada elemento do conjunto imagem de B \u00e9 imagem de pelo menos um elemento de A. 
\n\nEm outras palavras, a imagem B de f \u00e9 a totalidade do conjunto B.\n \n$ f: \\mathbb{R} \\rightarrow \\mathbb{R} | f(x) = 3*x $ \n\n\n```\nvar ('x')\ndef f(x): return 3*x\n\neixo_x = list()\neixo_y = list()\n\nfor i in range(-5,6):\n eixo_x.append(i)\n eixo_y.append(f(i))\n\nplt.plot(eixo_x, eixo_y)\n```\n\n### Fun\u00e7\u00e3o Bijetora\n\nDefini\u00e7\u00e3o: Uma fun\u00e7\u00e3o que \u00e9 injetora e subjetora.\n\n$ f(x) $ bijetora \u00e9 revers\u00edvel $ f^{-1}(x)$\n\n$ f^{-1}(f(x)) = x$\n\n$ f^{-1}(x) = \\sqrt{x} $\n\n## Polin\u00f4mios e raizes\n\nUma fun\u00e7\u00e3o polinomia de grau n, \u00e9 toda fun\u00e7\u00e3o que pode ser escrita da seguinte forma:\n$ f(x) = a_0 + a_1x^1 + a_2x^2 + ... + a_nx^n $ \n\n**exemplo:** $ f(x) = -2x + 5x^3 + 3 $ \u00e9 uma fun\u00e7\u00e3o de grau 3.\n\n### Exemplo\nVamos mostrar alguns exemplos de como modelamos alguns problemas reais, usando fun\u00e7\u00f5es. Dessa vez, vamos descrever a queda livre de um objeto, o modelo matem\u00e1tico parte da segunda lei de Newton:\n\nA queda livre de um objeto \u00e9 solu\u00e7\u00e3o do seguinte problema:\n\n\n\n1. Solta-se um objeto, de uma altura $ h0 $, deixando-o cair em dire\u00e7\u00e3o ao solo.\n2. Considera-se desprez\u00edvel a resist\u00eancia do ar, durante o movimento da queda.\n3. A \u00fanica for\u00e7a que age sobre o objeto \u00e9 o peso do mesmo ( que descreve a atra\u00e7\u00e3o do planeta sobre esse corpo)\n4. Usa-se a segunda lei de Newton: \n\n $ \\overrightarrow{P} = m\\overrightarrow{g}=m\\overrightarrow{a}_(t)$\n\nA fun\u00e7\u00e3o que descreve o modelo do objeto em queda, \u00e9 solu\u00e7\u00e3o da lei de Newton e \u00e9 dada pela fun\u00e7\u00e3o polinomial de segundo grau:\n\n$h(t) = h_0 + v_{0}t - \\dfrac {g}{2}t^2 $\n\n\n```\nvar ('t h0 v0 g')\n# A acelera\u00e7\u00e3o da gravidade na Terra ao n\u00edvel do mar e \u00e0 latitude de 45\u00b0,\n# possui o valor aproximado de 9,80665 m/s\u00b2\ndef h(t,h0=h0,v0=v0,g=9.80665): return h0 + (v0 * t) - ((g/2)*t**2)\nh(t,h0,v0,g)\n```\n\n### Problema\n\nPodemos realizar um gr\u00e1fico para o seguinte exemplo de queda livre: \nUm objeto \u00e9 solto de uma altura inicial $ h_0 = 200m, v_0 = 0ms^{-1}.$\n\nPede-se o gr\u00e1fico da altura $ h(t) $ para $ t > 0 $ e quanto tempo esse objeto leva para alcan\u00e7ar o solo.\n\n\n```\nh(t,200,0)\n\neixo_x = list()\neixo_y = list()\n\nzero = list()\n\nfor i in np.arange(-20,20,step=0.1):\n eixo_y.append(h(i,200,0))\n eixo_x.append(i)\n zero.append(0)\n\nplt.plot(eixo_x,eixo_y)\n\n# Desenha eixos\nplt.plot(eixo_x,zero,color=\"black\")\nplt.plot(zero,eixo_y,color='black')\n\n# Ra\u00edz do polin\u00f4mio \u00e9 quando f(x) = 0\nround(h(6.382,200,0))\n```\n\n\n\n## Juros \n\nOs juros s\u00e3o divididos entre simples ou compostos. 
\n\n### Juros Simples\n\n$ J(C,i,n) = Cin$\n\n**Valor futuro de uma aplica\u00e7\u00e3o financeira:**\n\n$ C + J(C,i,n) = C(1+in) $\n\n**Onde:**\n\n* C --> Capital inicial\n* i --> Taxa de Juros\n* n --> Per\u00edodo\n\nCapital inicial tamb\u00e9m \u00e9 conhecido como Valor presente:\n\n$ VF = VP(1+in) $\n\n\n**Exemplo:** Considere um capital inicial $ C= 10000,00$ capitalizado com uma taxa de juros $ i = 0,01 $ ao m\u00eas.\n\n* Qual \u00e9 o valor montante VF depois de 10 meses?\n* Fa\u00e7a um gr\u00e1fico de VF pelo tempo.\n\n\n```\nvar ('i n vp')\ndef vf(i,n,vp): return vp*(1+i*n)\n\nVF = vf(0.01,10,10000)\nprint('O montante final foi de: R$', VF)\n```\n\n O montante final foi de: R$ 11000.0\n\n\n\n```\neixo_x = list()\neixo_y = list()\n\nfor t in np.arange(0,10, step=0.1):\n eixo_x.append(t)\n eixo_y.append(vf(0.01,t,10000))\n\nplt.plot(eixo_x,eixo_y)\n```\n\n### Juros Compostos\n\nF\u00f3rmula dos juros compostos e uma compara\u00e7\u00e3o:\n\n**Juros compostos:**\n\n$ VF(VP,i,n,) = VP(1+i)^2 $\n\n**Juros simples:**\n\n$ VF = C + J(C,i,n) = C(1+in) $\n\n\n\n```\n# Definindo fun\u00e7\u00e3o de juros compostos\ndef vf_c(i,n,vp): return vp*(1+i)**n\n```\n\n\n```\neixo_x = list()\neixo_y = list()\n\neixo_y2 = list()\n\nfor t in np.arange(0,30, step=0.1):\n eixo_x.append(t)\n eixo_y.append(vf(0.03,t,100))\n\n eixo_x2.append(t)\n eixo_y2.append(vf_c(0.03,t,100))\n\nplt.plot(eixo_x,eixo_y)\nplt.plot(eixo_x,eixo_y2)\n```\n\n## Fun\u00e7\u00e3o exponencial\n\n**Defini\u00e7\u00e3o:**\n\u00c9 uma fun\u00e7\u00e3o cont\u00ednua da vari\u00e1vel \u201cx\u201d que depende de um par\u00e2metro \u201cC\u201d, que \u00e9 chamado de base.\n\n Esse \u201cC\u201d \u00e9 um n\u00famero real/positivo e todo esse \u201cC\u201d est\u00e1 elevado a uma fun\u00e7\u00e3o cont\u00ednua e bem-comportada da vari\u00e1vel \u201cx\u201d. E \u201cg(x)\u201d pode ter sinal positivo ou negativo.\n\n $ f(x) = C ^{g(x)}$\n\n**Usos mais comuns:** \n\n$ C=2, C=10, C=e = 2,71828183... 
$\n\nEstimativas num\u00e9ricas para o $ e $:\n\n$ e = \\lim_{x \\to oo} ( 1 + \\dfrac{1}{x})^{x}$\n\n**Usos mais comuns:** \n\n$ f(x) = e^{x}$\n\n$h(x) = e^{-x}$\n\n**Exemplo:** Em uma reportagem de mar\u00e7o de 2020 do Jornal Nacional, foi mostrado o dado de que, a cada pessoa que contrai o virus coronavirus, ela passa para duas pessoas.\n\n Pensando em estimar o cont\u00e1gio na popula\u00e7\u00e3o, crie uma fun\u00e7\u00e3o que represente esse dado de contamina\u00e7\u00e3o.\n\n\n```\nvar('k')\ndef N(k): return 2**k\nN(10)\n```\n\n# Limites\n\nPodemos usar o SymPy para calcular o limite de uma fun\u00e7\u00e3o quando x tende a um ponto.\n\n## Exemplo 03: Calculando Limites\n\n\n```\n# define x e y como vari\u00e1veis simb\u00f3licas\nvar('x y')\n\n# Definindo fun\u00e7\u00e3o\nf = Lambda(x, (x**2 -1))\n\n# Limite da fun\u00e7\u00e3o f(x) quando x tende a 2\nr1 = limit(f(x),x,2)\n\n# Limite da fun\u00e7\u00e3o f(x) quando x tende a -1\nr2 = limit(f(x),x,-1)\n\n# Limite da fun\u00e7\u00e3o f(x) quando x tende a 5\nr3 = limit(f(x),x,5)\n\nprint(r1,r2,r3)\n```\n\n 3 0 24\n\n\n### Gr\u00e1fico da Fun\u00e7\u00e3o\n\n\n```\ndef f(x): return (x**2 -1)\n\nlista_ex03 = list()\n\nx = 5\n\neixo_x_ex03 = []\nraiz = []\n\nfor i in np.arange(-1,x,step=0.1):\n lista_ex03.append(f(i))\n eixo_x_ex03.append(i)\n\n# Gr\u00e1fico\nfig, ax = plt.subplots()\n\n# T\u00edtulo do gr\u00e1fico\nplt.title('Gr\u00e1fico da Fun\u00e7\u00e3o')\n\nplt.plot(eixo_x_ex03,lista_ex03)\n\n# Limites\nplt.plot(2,r1, marker='o',color='black')\nplt.plot(-1,r2, marker='o',color='black')\nplt.plot(5,r3, marker='o',color='black')\n```\n\n## Calculando Limites Laterais\n\nComo padr\u00e3o, a fun\u00e7\u00e3o ```limit()```\ncalcula o limite lateral pela direita.\n\n\n\n\n```\nlimit(1/y, y, 0)\n```\n\npara calcularmos o limite lateral pela esquerda, basta digitar:\n\n\n```\nlimit(1/y, y, 0, '-')\n```\n\n### Exemplo 04\n\nDevido aos limites laterais n\u00e3o coincidirem, o limite da fun\u00e7\u00e3o n\u00e3o existe.\n\n\n```\nlista_ex04 = list()\neixo_x_ex04 = list()\n\nfor x in np.arange(0,6, step=0.1):\n if (x<3):\n lista_ex04.append(1+x**2)\n else:\n lista_ex04.append(20)\n eixo_x_ex04.append(x)\n\nplt.plot(eixo_x_ex04,lista_ex04)\n```\n\n\n```\n# Limite de x quando x tende a 3\nlimite01 = limit(1+x**2, x, 3, '-')\n\n# Limite de x quando x tende a 3\nlimite02 = limit(20, x, 3)\n\nprint('O limite pela direita \u00e9 de:', limite01 )\nprint('enquanto o limite pela esquerda \u00e9 de:' , limite02 )\n```\n\n O limite pela direita \u00e9 de: 35.8100000000000\n enquanto o limite pela esquerda \u00e9 de: 20\n\n\n**Conclus\u00e3o:** Os limites laterais refletem o fato de que podemos nos aproximar de um dado valor, quer seja pela esquerda ou pela direita.\n\nEm alguns casos, o valor resultante do limite pode ser igual pela equerda e pela direita, mas em outros pode n\u00e3o ser.\n\n## Propriedades Limites: Parte 1\n\n1 - O limite da soma de duas fun\u00e7\u00f5es quando x se aproxima de x0, \u00e9 igual \u00e0 soma dos limites das fun\u00e7\u00f5es.\n\n$\n\\lim_{x \\to x0} (f(x) + g(x))= \\lim_{x \\to x0} f(x) + \\lim_{x \\to x0} g(x)\n$\n\n\n2 - Propriedade da multiplica\u00e7\u00e3o por escalar do Limite:\n\n$\n\\lim_{x \\to x0} C( f(x))= C \\lim_{x \\to x0} f(x)\n$\n\n### Soma de **Fun\u00e7\u00f5es**\n\n\n```\ndef f(z): return z + 1\ndef g(z): return z\n\nsoma_funcoes = f(z) + g(z)\nprint('O limite da soma das fun\u00e7\u00f5es \u00e9:',limit(soma_funcoes,z, 1))\n\nsoma_limites = limit(g(z),z,1) + limit(f(z),z,1)\n\nprint('A soma dos 
limites das fun\u00e7\u00f5es \u00e9:',soma_limites)\n\n```\n\n O limite da soma das fun\u00e7\u00f5es \u00e9: 3\n A soma dos limites das fun\u00e7\u00f5es \u00e9: 3\n\n\n### Constante vezes fun\u00e7\u00e3o\n\n\n```\nsoma_funcoes = 4 * (f(z) + g(z))\nprint('O limite da soma das fun\u00e7\u00f5es \u00e9:',limit(soma_funcoes,z, 1))\n\nsoma_limites = 4 * (limit(g(z),z,1) + limit(f(z),z,1))\n\nprint('A soma dos limites das fun\u00e7\u00f5es \u00e9:',soma_limites)\n```\n\n O limite da soma das fun\u00e7\u00f5es \u00e9: 12\n A soma dos limites das fun\u00e7\u00f5es \u00e9: 12\n\n\n## Propriedades Limites: Parte 2\n\n1 - O limite do produto das fun\u00e7\u00f5es \u00e9 equivalente ao produto dos limites.\n\n$\n\\lim_{x \\to x_0} f(x) *g(x) = \\lim_{x \\to x_0} f(x) * \\lim_{x \\to x_0} g(x)\n$\n\n2 - O limite da divis\u00e3o das fun\u00e7\u00f5es \u00e9 equivalente \u00e0 divis\u00e3o dos limites.\n\n$\n\\lim_{x \\to x_0} \\dfrac{f(x)}{g(x)} = \\dfrac{\\lim_{x \\to x_0}f(x)}{\\lim_{x \\to x_0} g(x)}\n$\n\n\n**Desde que:**\n\n$\n\\lim_{x \\to x_0} g(x) \\neq 0\n$\n\n\n```\n# Reutilizando as fun\u00e7\u00f5es j\u00e1 criadas na parte 01\nprint(f(z),g(z),'\\n')\n\nproduto_funcoes = f(z) * g(z)\nprint('O limite do produto das fun\u00e7\u00f5es \u00e9:',limit(produto_funcoes, z, 2))\n\nproduto_limites = limit(f(z),z, 2) * limit(g(z), z, 2)\nprint('O limite do produto dos limites \u00e9:',limit(produto_limites, z, 2))\n\nprint()\ntry: \n divisao_funcoes = f(z) / g(z)\n print('O limite da divis\u00e3o das fun\u00e7\u00f5es \u00e9:',limit(divisao_funcoes, z, 2))\n\n divisao_limites = limit(f(z),z, 2) / limit(g(z), z, 2)\n print('O limite da divis\u00e3o dos limites \u00e9:',limit(divisao_limites, z, 2))\nexcept:\n print('g(x) == 0')\n```\n\n z + 1 z \n \n O limite do produto das fun\u00e7\u00f5es \u00e9: 6\n O limite do produto dos limites \u00e9: 6\n \n O limite da divis\u00e3o das fun\u00e7\u00f5es \u00e9: 3/2\n O limite da divis\u00e3o dos limites \u00e9: 3/2\n\n\n## Exist\u00eancia do Limite\n\n\n```\nvar('x')\ndef f(x): return 2*x**2\n\neixo_x_ex05 = list()\neixo_y_ex05 = list()\n\nfor i in np.arange(0,5, step= 0.1):\n eixo_x_ex05.append(i)\n eixo_y_ex05.append(f(i))\n\nplt.plot(eixo_x_ex05,eixo_y_ex05)\n```\n\n\n```\nprint('Limite Lateral pela Direita:',limit(f(x),x,2))\nprint('Limite Lateral pela Esquerda:',limit(f(x),x,2,'-'))\n```\n\n Limite Lateral pela Direita: 8\n Limite Lateral pela Esquerda: 8\n\n", "meta": {"hexsha": "8f843b6dba52366a3a39469f1bfd50bb0344f0f2", "size": 250712, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "GoogleColab/Calculo/Funcoes.ipynb", "max_stars_repo_name": "gabrielvieiraf/ProjetosPython", "max_stars_repo_head_hexsha": "d8cf8eb28ebe1d0bf8319cb5500028c1d2a71ef9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-06-23T23:48:39.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-23T23:48:39.000Z", "max_issues_repo_path": "GoogleColab/Calculo/Funcoes.ipynb", "max_issues_repo_name": "gabrielvieiraf/ProjetosPython", "max_issues_repo_head_hexsha": "d8cf8eb28ebe1d0bf8319cb5500028c1d2a71ef9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "GoogleColab/Calculo/Funcoes.ipynb", "max_forks_repo_name": "gabrielvieiraf/ProjetosPython", "max_forks_repo_head_hexsha": "d8cf8eb28ebe1d0bf8319cb5500028c1d2a71ef9", "max_forks_repo_licenses": ["MIT"], 
"max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-06-23T21:59:10.000Z", "max_forks_repo_forks_event_max_datetime": "2021-06-23T23:49:00.000Z", "avg_line_length": 121.0584258812, "max_line_length": 18134, "alphanum_fraction": 0.8484197007, "converted": true, "num_tokens": 6076, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9207896824119663, "lm_q2_score": 0.9032942067038784, "lm_q1q2_score": 0.8317439857154333}} {"text": "# Curse of Dimensionality\n\n\n```julia\nusing Plots, SpecialFunctions\n```\n\n## 2D bionomial coefficients\n\nThis problem can be simplified as choosing M pieces out of 2M total pieces of the path. The selected pieces will form the right directions and others will be upward directions. \n\n$$\n\\begin{equation}\n\\binom{2M}{M} = \\frac{2M!}{M! \\times M!}\n\\end{equation}\n$$\n\n## 3D problem\n\nThis problem can be simplified as choosing $k_1=M$ pieces and $k_2=M$ out of 3M total pieces of the path. The selected pieces will form the directions in each dimension\n\n$$\n\\begin{equation}\n\\binom{3M}{M,M} = \\frac{3M!}{M! \\times M! \\times M!}\n\\end{equation}\n$$\n\n## ND problem\n\nThis problem can be simplified as choosing $k_1=M$ pieces, $k_2=M$, and $\\dots$, etc. out of NM total pieces of the path. The selected pieces will form the directions in each dimension\n\n$$\n\\begin{equation}\n\\binom{NM}{k_1=M,k_2=M,\\dots,k_{N-1}=M} = \\frac{NM!}{M!^N}\n\\end{equation}\n$$\n\n# Visualization\n\n\n```julia\nfunction npaths(N)\n nom = gamma(N*5+1)\n den = 1\n for n=1:N\n den *= gamma(6)\n end\n nom/den\nend\n\nN = 1:10\nscatter(N, npaths.(N), yaxis=:log, label=:false, title=\"Path through lattice of side 5\",\n xlabel=\"Numer of dimention\", ylabel=\"Path count\")\n```\n\n\n\n\n \n\n \n\n\n\n## Adjourn\n\n\n```julia\nusing Dates\nprintln(\"mahdiar\")\nDates.format(now(), \"Y/U/d HH:MM\") \n```\n\n mahdiar\n\n\n\n\n\n \"2021/April/17 17:05\"\n\n\n", "meta": {"hexsha": "c523a127fe3617dce8f3c7b4fffc7439aeb69eec", "size": 55047, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "HW12/3.ipynb", "max_stars_repo_name": "mahdiarsadeghi/NumericalAnalysis", "max_stars_repo_head_hexsha": "95a0914c06963b0510971388f006a6b2fc0c4ef9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "HW12/3.ipynb", "max_issues_repo_name": "mahdiarsadeghi/NumericalAnalysis", "max_issues_repo_head_hexsha": "95a0914c06963b0510971388f006a6b2fc0c4ef9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "HW12/3.ipynb", "max_forks_repo_name": "mahdiarsadeghi/NumericalAnalysis", "max_forks_repo_head_hexsha": "95a0914c06963b0510971388f006a6b2fc0c4ef9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 216.7204724409, "max_line_length": 31224, "alphanum_fraction": 0.6883208894, "converted": true, "num_tokens": 456, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9207896671963206, "lm_q2_score": 0.9032942125614059, "lm_q1q2_score": 0.8317439773647793}} {"text": "# Solving Linear Systems\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport scipy.linalg as la\n%matplotlib inline\n```\n\n## Linear Systems\n\nA [linear system of equations](https://en.wikipedia.org/wiki/System_of_linear_equations) is a collection of linear equations\n\n\\begin{align}\na_{0,0}x_0 + a_{0,1}x_2 + \\cdots + a_{0,n}x_n & = b_0 \\\\\\\na_{1,0}x_0 + a_{1,1}x_2 + \\cdots + a_{1,n}x_n & = b_1 \\\\\\\n& \\vdots \\\\\\\na_{m,0}x_0 + a_{m,1}x_2 + \\cdots + a_{m,n}x_n & = b_m \\\\\\\n\\end{align}\n\nIn matrix notation, a linear system is $A \\mathbf{x}= \\mathbf{b}$ where\n\n$$\nA = \\begin{bmatrix}\na_{0,0} & a_{0,1} & \\cdots & a_{0,n} \\\\\\\na_{1,0} & a_{1,1} & \\cdots & a_{1,n} \\\\\\\n\\vdots & & & \\vdots \\\\\\\na_{m,0} & a_{m,1} & \\cdots & a_{m,n} \\\\\\\n\\end{bmatrix}\n \\ \\ , \\ \\\n\\mathbf{x} = \\begin{bmatrix}\nx_0 \\\\\\ x_1 \\\\\\ \\vdots \\\\\\ x_n\n\\end{bmatrix}\n \\ \\ , \\ \\\n\\mathbf{b} = \\begin{bmatrix}\nb_0 \\\\\\ b_1 \\\\\\ \\vdots \\\\\\ b_m\n\\end{bmatrix} \n$$\n\n## Gaussian elimination\n\nThe general procedure to solve a linear system of equation is called [Gaussian elimination](https://en.wikipedia.org/wiki/Gaussian_elimination). The idea is to perform elementary row operations to reduce the system to its row echelon form and then solve.\n\n### Elementary Row Operations\n\n[Elementary row operations](https://en.wikipedia.org/wiki/Elementary_matrix#Elementary_row_operations) include:\n\n1. Add $k$ times row $j$ to row $i$.\n2. Multiply row $i$ by scalar $k$.\n3. Switch rows $i$ and $j$.\n\nEach of the elementary row operations is the result of matrix multiplication by an elementary matrix (on the left).\nTo add $k$ times row $i$ to row $j$ in a matrix $A$, we multiply $A$ by the matrix $E$ where $E$ is equal to the identity matrix except the $i,j$ entry is $E_{i,j} = k$. For example, if $A$ is 3 by 3 and we want to add 3 times row 2 to row 0 (using 0 indexing) then\n\n$$\nE_1 = \\begin{bmatrix}\n1 & 0 & 3 \\\\\\\n0 & 1 & 0 \\\\\\\n0 & 0 & 1\n\\end{bmatrix}\n$$\n\nLet's verify the calculation:\n\n\n```python\nA = np.array([[1,1,2],[-1,3,1],[0,5,2]])\nprint(A)\n```\n\n [[ 1 1 2]\n [-1 3 1]\n [ 0 5 2]]\n\n\n\n```python\nE1 = np.array([[1,0,3],[0,1,0],[0,0,1]])\nprint(E1)\n```\n\n [[1 0 3]\n [0 1 0]\n [0 0 1]]\n\n\n\n```python\nE1 @ A\n```\n\n\n\n\n array([[ 1, 16, 8],\n [-1, 3, 1],\n [ 0, 5, 2]])\n\n\n\nTo multiply $k$ times row $i$ in a matrix $A$, we multiply $A$ by the matrix $E$ where $E$ is equal to the identity matrix except the $,i,j$ entry is $E_{i,i} = k$. For example, if $A$ is 3 by 3 and we want to multiply row 1 by -2 then\n\n$$\nE_2 = \\begin{bmatrix}\n1 & 0 & 0 \\\\\\\n0 & -2 & 0 \\\\\\\n0 & 0 & 1\n\\end{bmatrix}\n$$\n\nLet's verify the calculation:\n\n\n```python\nE2 = np.array([[1,0,0],[0,-2,0],[0,0,1]])\nprint(E2)\n```\n\n [[ 1 0 0]\n [ 0 -2 0]\n [ 0 0 1]]\n\n\n\n```python\nE2 @ A\n```\n\n\n\n\n array([[ 1, 1, 2],\n [ 2, -6, -2],\n [ 0, 5, 2]])\n\n\n\nFinally, to switch row $i$ and row $j$ in a matrix $A$, we multiply $A$ by the matrix $E$ where $E$ is equal to the identity matrix except $E_{i,i} = 0$, $E_{j,j} = 0$, $E_{i,j} = 1$ and $E_{j,i} = 1$. 
For example, if $A$ is 3 by 3 and we want to switch row 1 and row 2 then\n\n$$\nE^3 = \\begin{bmatrix}\n1 & 0 & 0 \\\\\\\n0 & 0 & 1 \\\\\\\n0 & 1 & 0\n\\end{bmatrix}\n$$\n\nLet's verify the calculation:\n\n\n```python\nE3 = np.array([[1,0,0],[0,0,1],[0,1,0]])\nprint(E3)\n```\n\n [[1 0 0]\n [0 0 1]\n [0 1 0]]\n\n\n\n```python\nE3 @ A\n```\n\n\n\n\n array([[ 1, 1, 2],\n [ 0, 5, 2],\n [-1, 3, 1]])\n\n\n\n### Implementation\n\nLet's write function to implement the elementary row operations. First of all, let's write a function called `add_rows` which takes input parameters $A$, $k$, $i$ and $j$ and returns the NumPy array resulting from adding $k$ times row $j$ to row $i$ in the matrix $A$. If $i=j$, then let's say that the function scales row $i$ by $k+1$ since this would be the result of $k$ times row $i$ added to row $i$.\n\n\n```python\ndef add_row(A,k,i,j):\n \"Add k times row j to row i in matrix A.\"\n n = A.shape[0]\n E = np.eye(n)\n if i == j:\n E[i,i] = k + 1\n else:\n E[i,j] = k\n return E @ A\n```\n\nLet's test our function:\n\n\n```python\nM = np.array([[1,1],[3,2]])\nprint(M)\n```\n\n [[1 1]\n [3 2]]\n\n\n\n```python\nadd_row(M,2,0,1)\n```\n\n\n\n\n array([[7., 5.],\n [3., 2.]])\n\n\n\n\n```python\nadd_row(M,3,1,1)\n```\n\n\n\n\n array([[ 1., 1.],\n [12., 8.]])\n\n\n\nLet's write a function called `scale_row` which takes 3 input parameters $A$, $k$, and $i$ and returns the matrix that results from $k$ times row $i$ in the matrix $A$.\n\n\n```python\ndef scale_row(A,k,i):\n \"Multiply row i by k in matrix A.\"\n n = A.shape[0]\n E = np.eye(n)\n E[i,i] = k\n return E @ A\n```\n\n\n```python\nM = np.array([[3,1],[-2,7]])\nprint(M)\n```\n\n [[ 3 1]\n [-2 7]]\n\n\n\n```python\nscale_row(M,3,1)\n```\n\n\n\n\n array([[ 3., 1.],\n [-6., 21.]])\n\n\n\n\n```python\nA = np.array([[1,1,1],[1,-1,0]])\nprint(A)\n```\n\n [[ 1 1 1]\n [ 1 -1 0]]\n\n\n\n```python\nscale_row(A,5,1)\n```\n\n\n\n\n array([[ 1., 1., 1.],\n [ 5., -5., 0.]])\n\n\n\nLet's write a function called `switch_rows` which takes 3 input parameters $A$, $i$ and $j$ and returns the matrix that results from switching rows $i$ and $j$ in the matrix $A$.\n\n\n```python\ndef switch_rows(A,i,j):\n \"Switch rows i and j in matrix A.\"\n n = A.shape[0]\n E = np.eye(n)\n E[i,i] = 0\n E[j,j] = 0\n E[i,j] = 1\n E[j,i] = 1\n return E @ A\n```\n\n\n```python\nA = np.array([[1,1,1],[1,-1,0]])\nprint(A)\n```\n\n [[ 1 1 1]\n [ 1 -1 0]]\n\n\n\n```python\nswitch_rows(A,0,1)\n```\n\n\n\n\n array([[ 1., -1., 0.],\n [ 1., 1., 1.]])\n\n\n\n## Examples\n\n### Find the Inverse\n\nLet's apply our functions to the augmented matrix $[M \\ | \\ I]$ to find the inverse of the matrix $M$:\n\n\n```python\nM = np.array([[5,4,2],[-1,2,1],[1,1,1]])\nprint(M)\n```\n\n [[ 5 4 2]\n [-1 2 1]\n [ 1 1 1]]\n\n\n\n```python\nA = np.hstack([M,np.eye(3)])\nprint(A)\n```\n\n [[ 5. 4. 2. 1. 0. 0.]\n [-1. 2. 1. 0. 1. 0.]\n [ 1. 1. 1. 0. 0. 1.]]\n\n\n\n```python\nA1 = switch_rows(A,0,2)\nprint(A1)\n```\n\n [[ 1. 1. 1. 0. 0. 1.]\n [-1. 2. 1. 0. 1. 0.]\n [ 5. 4. 2. 1. 0. 0.]]\n\n\n\n```python\nA2 = add_row(A1,1,1,0)\nprint(A2)\n```\n\n [[1. 1. 1. 0. 0. 1.]\n [0. 3. 2. 0. 1. 1.]\n [5. 4. 2. 1. 0. 0.]]\n\n\n\n```python\nA3 = add_row(A2,-5,2,0)\nprint(A3)\n```\n\n [[ 1. 1. 1. 0. 0. 1.]\n [ 0. 3. 2. 0. 1. 1.]\n [ 0. -1. -3. 1. 0. -5.]]\n\n\n\n```python\nA4 = switch_rows(A3,1,2)\nprint(A4)\n```\n\n [[ 1. 1. 1. 0. 0. 1.]\n [ 0. -1. -3. 1. 0. -5.]\n [ 0. 3. 2. 0. 1. 1.]]\n\n\n\n```python\nA5 = scale_row(A4,-1,1)\nprint(A5)\n```\n\n [[ 1. 1. 1. 0. 0. 1.]\n [ 0. 1. 3. -1. 0. 5.]\n [ 0. 
3. 2. 0. 1. 1.]]\n\n\n\n```python\nA6 = add_row(A5,-3,2,1)\nprint(A6)\n```\n\n [[ 1. 1. 1. 0. 0. 1.]\n [ 0. 1. 3. -1. 0. 5.]\n [ 0. 0. -7. 3. 1. -14.]]\n\n\n\n```python\nA7 = scale_row(A6,-1/7,2)\nprint(A7)\n```\n\n [[ 1. 1. 1. 0. 0. 1. ]\n [ 0. 1. 3. -1. 0. 5. ]\n [ 0. 0. 1. -0.42857143 -0.14285714 2. ]]\n\n\n\n```python\nA8 = add_row(A7,-3,1,2)\nprint(A8)\n```\n\n [[ 1. 1. 1. 0. 0. 1. ]\n [ 0. 1. 0. 0.28571429 0.42857143 -1. ]\n [ 0. 0. 1. -0.42857143 -0.14285714 2. ]]\n\n\n\n```python\nA9 = add_row(A8,-1,0,2)\nprint(A9)\n```\n\n [[ 1. 1. 0. 0.42857143 0.14285714 -1. ]\n [ 0. 1. 0. 0.28571429 0.42857143 -1. ]\n [ 0. 0. 1. -0.42857143 -0.14285714 2. ]]\n\n\n\n```python\nA10 = add_row(A9,-1,0,1)\nprint(A10)\n```\n\n [[ 1. 0. 0. 0.14285714 -0.28571429 0. ]\n [ 0. 1. 0. 0.28571429 0.42857143 -1. ]\n [ 0. 0. 1. -0.42857143 -0.14285714 2. ]]\n\n\nLet's verify that we found the inverse $M^{-1}$ correctly:\n\n\n```python\nMinv = A10[:,3:]\nprint(Minv)\n```\n\n [[ 0.14285714 -0.28571429 0. ]\n [ 0.28571429 0.42857143 -1. ]\n [-0.42857143 -0.14285714 2. ]]\n\n\n\n```python\nresult = Minv @ M\nprint(result)\n```\n\n [[ 1.00000000e+00 4.44089210e-16 2.22044605e-16]\n [-6.66133815e-16 1.00000000e+00 -2.22044605e-16]\n [ 0.00000000e+00 0.00000000e+00 1.00000000e+00]]\n\n\nSuccess! We can see the result more clearly if we round to 15 decimal places:\n\n\n```python\nnp.round(result,15)\n```\n\n\n\n\n array([[ 1.e+00, 0.e+00, 0.e+00],\n [-1.e-15, 1.e+00, -0.e+00],\n [ 0.e+00, 0.e+00, 1.e+00]])\n\n\n\n### Solve a System\n\nLet's use our functions to perform Gaussian elimination and solve a linear system of equations $A \\mathbf{x} = \\mathbf{b}$.\n\n\n```python\nA = np.array([[6,15,1],[8,7,12],[2,7,8]])\nprint(A)\n```\n\n [[ 6 15 1]\n [ 8 7 12]\n [ 2 7 8]]\n\n\n\n```python\nb = np.array([[2],[14],[10]])\nprint(b)\n```\n\n [[ 2]\n [14]\n [10]]\n\n\nForm the augemented matrix $M$:\n\n\n```python\nM = np.hstack([A,b])\nprint(M)\n```\n\n [[ 6 15 1 2]\n [ 8 7 12 14]\n [ 2 7 8 10]]\n\n\nPerform row operations:\n\n\n```python\nM1 = scale_row(M,1/6,0)\nprint(M1)\n```\n\n [[ 1. 2.5 0.16666667 0.33333333]\n [ 8. 7. 12. 14. ]\n [ 2. 7. 8. 10. ]]\n\n\n\n```python\nM2 = add_row(M1,-8,1,0)\nprint(M2)\n```\n\n [[ 1. 2.5 0.16666667 0.33333333]\n [ 0. -13. 10.66666667 11.33333333]\n [ 2. 7. 8. 10. ]]\n\n\n\n```python\nM3 = add_row(M2,-2,2,0)\nprint(M3)\n```\n\n [[ 1. 2.5 0.16666667 0.33333333]\n [ 0. -13. 10.66666667 11.33333333]\n [ 0. 2. 7.66666667 9.33333333]]\n\n\n\n```python\nM4 = scale_row(M3,-1/13,1)\nprint(M4)\n```\n\n [[ 1. 2.5 0.16666667 0.33333333]\n [ 0. 1. -0.82051282 -0.87179487]\n [ 0. 2. 7.66666667 9.33333333]]\n\n\n\n```python\nM5 = add_row(M4,-2,2,1)\nprint(M5)\n```\n\n [[ 1. 2.5 0.16666667 0.33333333]\n [ 0. 1. -0.82051282 -0.87179487]\n [ 0. 0. 9.30769231 11.07692308]]\n\n\n\n```python\nM6 = scale_row(M5,1/M5[2,2],2)\nprint(M6)\n```\n\n [[ 1. 2.5 0.16666667 0.33333333]\n [ 0. 1. -0.82051282 -0.87179487]\n [ 0. 0. 1. 1.19008264]]\n\n\n\n```python\nM7 = add_row(M6,-M6[1,2],1,2)\nprint(M7)\n```\n\n [[1. 2.5 0.16666667 0.33333333]\n [0. 1. 0. 0.1046832 ]\n [0. 0. 1. 1.19008264]]\n\n\n\n```python\nM8 = add_row(M7,-M7[0,2],0,2)\nprint(M8)\n```\n\n [[1. 2.5 0. 0.13498623]\n [0. 1. 0. 0.1046832 ]\n [0. 0. 1. 1.19008264]]\n\n\n\n```python\nM9 = add_row(M8,-M8[0,1],0,1)\nprint(M9)\n```\n\n [[ 1. 0. 0. -0.12672176]\n [ 0. 1. 0. 0.1046832 ]\n [ 0. 0. 1. 1.19008264]]\n\n\nSuccess! 
The solution of $Ax=b$ is\n\n\n```python\nx = M9[:,3].reshape(3,1)\nprint(x)\n```\n\n [[-0.12672176]\n [ 0.1046832 ]\n [ 1.19008264]]\n\n\nOr, we can do it the easy way...\n\n\n```python\nx = la.solve(A,b)\nprint(x)\n```\n\n [[-0.12672176]\n [ 0.1046832 ]\n [ 1.19008264]]\n\n\n## `scipy.linalg.solve`\n\nWe are mostly interested in linear systems $A \\mathbf{x} = \\mathbf{b}$ where there is a unique solution $\\mathbf{x}$. This is the case when $A$ is a square matrix ($m=n$) and $\\mathrm{det}(A) \\not= 0$. To solve such a system, we can use the function [`scipy.linalg.solve`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.solve.html).\n\nThe function returns a solution of the system of equations $A \\mathbf{x} = \\mathbf{b}$. For example:\n\n\n```python\nA = np.array([[1,1],[1,-1]])\nprint(A)\n```\n\n [[ 1 1]\n [ 1 -1]]\n\n\n\n```python\nb1 = np.array([2,0])\nprint(b1)\n```\n\n [2 0]\n\n\nAnd solve:\n\n\n```python\nx1 = la.solve(A,b1)\nprint(x1)\n```\n\n [1. 1.]\n\n\nNote that the output $\\mathbf{x}$ is returned as a 1D NumPy array when the vector $\\mathbf{b}$ (the right hand side) is entered as a 1D NumPy array. If we input $\\mathbf{b}$ as a 2D NumPy array, then the output is a 2D NumPy array. For example:\n\n\n```python\nA = np.array([[1,1],[1,-1]])\nb2 = np.array([2,0]).reshape(2,1)\nx2 = la.solve(A,b2)\nprint(x2)\n```\n\n [[1.]\n [1.]]\n\n\nFinally, if the right hand side $\\mathbf{b}$ is a matrix, then the output is a matrix of the same size. It is the solution of $A \\mathbf{x} = \\mathbf{b}$ when $\\mathbf{b}$ is a matrix. For example:\n\n\n```python\nA = np.array([[1,1],[1,-1]])\nb3 = np.array([[2,2],[0,1]])\nx3 = la.solve(A,b3)\nprint(x3)\n```\n\n [[1. 1.5]\n [1. 0.5]]\n\n\n### Simple Example\n\nLet's compute the solution of the system of equations\n\n\\begin{align}\n2x + y &= 1 \\\\\\\nx + y &= 1\n\\end{align}\n\nCreate the matrix of coefficients:\n\n\n```python\nA = np.array([[2,1],[1,1]])\nprint(A)\n```\n\n [[2 1]\n [1 1]]\n\n\nAnd the vector $\\mathbf{b}$:\n\n\n```python\nb = np.array([1,-1]).reshape(2,1)\nprint(b)\n```\n\n [[ 1]\n [-1]]\n\n\nAnd solve:\n\n\n```python\nx = la.solve(A,b)\nprint(x)\n```\n\n [[ 2.]\n [-3.]]\n\n\nWe can verify the solution by computing the inverse of $A$:\n\n\n```python\nAinv = la.inv(A)\nprint(Ainv)\n```\n\n [[ 1. -1.]\n [-1. 2.]]\n\n\nAnd multiply $A^{-1} \\mathbf{b}$ to solve for $\\mathbf{x}$:\n\n\n```python\nx = Ainv @ b\nprint(x)\n```\n\n [[ 2.]\n [-3.]]\n\n\nWe get the same result. Success!\n\n### Inverse or Solve\n\nIt's a bad idea to use the inverse $A^{-1}$ to solve $A \\mathbf{x} = \\mathbf{b}$ if $A$ is large. It's too computationally expensive. Let's create a large random matrix $A$ and vector $\\mathbf{b}$ and compute the solution $\\mathbf{x}$ in 2 ways:\n\n\n```python\nN = 1000\nA = np.random.rand(N,N)\nb = np.random.rand(N,1)\n```\n\nCheck the first entries $A$:\n\n\n```python\nA[:3,:3]\n```\n\n\n\n\n array([[0.35754719, 0.63135432, 0.6572258 ],\n [0.18450506, 0.14639832, 0.23528745],\n [0.27576474, 0.46264005, 0.26589724]])\n\n\n\nAnd for $\\mathbf{b}$:\n\n\n```python\nb[:4,:]\n```\n\n\n\n\n array([[0.82726751],\n [0.96946096],\n [0.31351176],\n [0.63757837]])\n\n\n\nNow we compare the speed of `scipy.linalg.solve` with `scipy.linalg.inv`:\n\n\n```python\n%%timeit\nx = la.solve(A,b)\n```\n\n 2.77 s \u00b1 509 ms per loop (mean \u00b1 std. dev. of 7 runs, 1 loop each)\n\n\n\n```python\n%%timeit\nx = la.inv(A) @ b\n```\n\n 4.46 s \u00b1 2.04 s per loop (mean \u00b1 std. dev. 
of 7 runs, 1 loop each)\n\n\nSolving with `scipy.linalg.solve` is about twice as fast!\n", "meta": {"hexsha": "fd3ba13adeae56b7679f0dc8f2b6671678a7c8a3", "size": 29979, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Python/3. Computational Sciences and Mathematics/Linear Algebra/Solving Systems of Linear Equations.ipynb", "max_stars_repo_name": "okara83/Becoming-a-Data-Scientist", "max_stars_repo_head_hexsha": "f09a15f7f239b96b77a2f080c403b2f3e95c9650", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Python/3. Computational Sciences and Mathematics/Linear Algebra/Solving Systems of Linear Equations.ipynb", "max_issues_repo_name": "okara83/Becoming-a-Data-Scientist", "max_issues_repo_head_hexsha": "f09a15f7f239b96b77a2f080c403b2f3e95c9650", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Python/3. Computational Sciences and Mathematics/Linear Algebra/Solving Systems of Linear Equations.ipynb", "max_forks_repo_name": "okara83/Becoming-a-Data-Scientist", "max_forks_repo_head_hexsha": "f09a15f7f239b96b77a2f080c403b2f3e95c9650", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2022-02-09T15:41:33.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-11T07:47:40.000Z", "avg_line_length": 29979.0, "max_line_length": 29979, "alphanum_fraction": 0.5903465759, "converted": true, "num_tokens": 6002, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9496693659780477, "lm_q2_score": 0.8757869981319863, "lm_q1q2_score": 0.8317080832478212}} {"text": "# Neural Network\n\n## neuron\n\na neuron takes input $x \\in \\mathbb{R}^{d}$, multiply $x$ by weights $w$ and add bias term $b$, finally use a activation function $g$.\n\nthat is:\n\n$$f(x) = g(w^{T}x + b)$$\n\nit is analogous to the functionality of biological neuron.\n\n\n\nsome useful activation function:\n\n$$\n\\begin{equation}\n\\begin{split}\n\\text{sigmoid:}\\quad &g(z) = \\frac{1}{1 + e^{-z}} \\\\\n\\text{tanh:}\\quad &g(z) = \\frac{e^{z}-e^{-z}}{e^{z} + e^{-z}} \\\\\n\\text{relu:}\\quad &g(z) = max(z,0) \\\\\n\\text{leaky relu:}\\quad &g(z) = max(z, \\epsilon{z})\\ ,\\ \\epsilon\\text{ is a small positive number}\\\\\n\\text{identity:}\\quad &g(z) = z\n\\end{split}\n\\end{equation}\n$$\n\nlinear regression's forward process is a neuron with identity activation function.\n\nlogistic regression's forward process is a neuron with sigmoid activation function.\n\n## neural network\n\nbuilding neural network is analogous to lego bricks: you take individual bricks and stack them together to build complex structures.\n\n\n\nwe use bracket to denote layer, we take the above as example\n\n$[0]$ denote input layer, $[1]$ denote hidden layer, $[2]$ denote output layer\n\n$a^{[l]}$ denote the output of layer $l$, set $a^{[0]} := x$\n\n$z^{[l]}$ denote the affine result of layer $l$\n\nwe have:\n\n$$z^{[l]} = W^{[l]}a^{[l-1]} + b^{[l]}$$\n\n$$a^{[l]} = g^{[l]}(z^{[l]})$$\n\nwhere $W^{[l]} \\in \\mathbb{R}^{d[l] \\times d[l-1]}$, $b^{[l]} \\in \\mathbb{R}^{d[l]}$.\n\n## weight decay\n\nrecall that to mitigate overfitting, we use $l_{2}$ and $l_{1}$ regularization in linear and logistic regression.\n\nweight decay is a alias of $l_{2}$ regularization, can be generalize to neural network, 
we concatenate $W^{[l]}$ and flatten it to get $w$ in this setting.\n\nfirst adding $l_{2}$ norm penalty:\n\n$$J(w,b) = \\sum_{i=1}^{n}l(w, b, x^{(i)}, y^{(i)}) + \\frac{\\lambda}{2}\\left \\| w \\right \\|^{2} $$\n\nthen by gradient descent, we have:\n\n$$\n\\begin{equation}\n\\begin{split}\nw:=& w-\\eta\\frac{\\partial}{\\partial w}J(w, b) \\\\\n=& w-\\eta\\frac{\\partial}{\\partial w}\\left(\\sum_{i=1}^{n}l(w, b, x^{(i)}, y^{(i)}) + \\frac{\\lambda}{2}\\left \\| w \\right \\|^{2}\\right) \\\\\n=& (1 - \\eta\\lambda)w - \\eta\\frac{\\partial}{\\partial w}\\sum_{i=1}^{n}l(w, b, x^{(i)}, y^{(i)})\n\\end{split}\n\\end{equation}\n$$\n\nmultiply by $(1 - \\eta\\lambda)$ is weight decay.\n\noften we do not calculate bias term in regularization, so does weight decay.\n\n## dropout\n\nto strength robustness through perturbation, we can deliberately add perturbation in traning, dropout is one of that skill.\n\nwe actually do the following in hidden neuron:\n\n$$\na_{dropout} = \n\\begin{cases}\n0 &\\text{with probability }p \\\\\n\\frac{a}{1-p} &\\text{otherwise}\n\\end{cases}\n$$\n\nthis operation randomly dropout neuron with probability $p$ and keep the expectation unchanged:\n\n$$E(a_{dropout}) = E(a)$$\n\ndepict this process below:\n\n\n\none more thing: we do not use dropout in predicting.\n\n## prerequesities for back-propagation\n\nsuppose in forward-propagation $x \\to y \\to l$, where $x \\in \\mathbb{R}^{n}$, $y \\in \\mathbb{R} ^{m}$, loss $l \\in \\mathbb{R}$.\n\nthen:\n\n$$\n\\frac{\\partial l}{\\partial y} = \\begin{bmatrix}\n \\frac{\\partial l}{\\partial y_{1}} \\\\\n ...\\\\\n \\frac{\\partial l}{\\partial y_{m}}\n\\end{bmatrix}\n\\quad\n\\frac{\\partial l}{\\partial x} = \\begin{bmatrix}\n \\frac{\\partial l}{\\partial x_{1}} \\\\\n ...\\\\\n \\frac{\\partial l}{\\partial x_{n}}\n\\end{bmatrix}\n$$\n\nby total differential equation:\n\n$$\n\\frac{\\partial l}{\\partial x_{k}} = \\sum_{j=1}^{m}\\frac{\\partial l}{\\partial y_{j}}\\frac{\\partial y_{j}}{\\partial x_{k}}\n$$\n\nthen we can connect $\\frac{\\partial l}{\\partial x}$ and $\\frac{\\partial l}{\\partial y}$ by:\n\n$$\n\\frac{\\partial l}{\\partial x} = \\begin{bmatrix}\n \\frac{\\partial l}{\\partial x_{1}} \\\\\n ...\\\\\n \\frac{\\partial l}{\\partial x_{n}}\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n \\frac{\\partial y_{1}}{\\partial x_{1}} & ... & \\frac{\\partial y_{m}}{\\partial x_{1}}\\\\\n \\vdots & \\ddots & \\vdots \\\\\n \\frac{\\partial y_{1}}{\\partial x_{n}}& .... 
& \\frac{\\partial y_{m}}{\\partial x_{n}}\n\\end{bmatrix}\n\\begin{bmatrix}\n \\frac{\\partial l}{\\partial y_{1}} \\\\\n ...\\\\\n \\frac{\\partial l}{\\partial y_{m}}\n\\end{bmatrix}\n=\n(\\frac{\\partial y}{\\partial x})^{T}\\frac{\\partial l}{\\partial y}\n$$\n\nhere $\\frac{\\partial y}{\\partial x}$ is the jacobian matrix.\n\nunlike other activation functions, calculate softmax depend on other neurons, so jacobian of softmax.\n\n$$\n\\frac{\\partial a_{i}}{\\partial z_{j}} = \\frac{\\partial}{\\partial z_{j}}\\left(\\frac{exp(z_{i})}{\\sum_{s=1}^{k}exp(z_{s})}\\right)\n$$\n\nit is easy to check the jacobian of matrix-multiplication:\n\n$$\\frac{\\partial Mx}{\\partial x} = M$$\n\n## back-propagation\n\ngradient descent update rule:\n\n$$W^{[l]} = W^{[l]} - \\alpha\\frac{\\partial{L}}{\\partial{W^{[l]}}}$$\n\n$$b^{[l]} = b^{[l]} - \\alpha\\frac{\\partial{L}}{\\partial{b^{[l]}}}$$\n\nto proceed, we must compute the gradient with respect to the parameters.\n\nwe can define a three-step recipe for computing the gradients as follows:\n\n1.for output layer, we have:\n\n$$\n\\frac{\\partial L(\\hat{y}, y)}{\\partial z^{[N]}} = (\\frac{\\partial \\hat{y}}{\\partial z^{[N]}})^{T}\\frac{\\partial L(\\hat{y}, y)}{\\partial \\hat{y}}\n$$\n\nif $g^{[N]}$ is softmax.\n\n$$\n\\frac{\\partial L(\\hat{y}, y)}{\\partial z^{[N]}} = \\frac{\\partial L(\\hat{y}, y)}{\\partial \\hat{y}} \\odot {g^{[N]}}'(z^{[N]})\n$$\n\nif not softmax.\n\nthe above computations are all straight forward.\n\n2.for $l=N-1,...,1$, we have:\n\n$$z^{[l + 1]} = W^{[l + 1]}a^{[l]} + b^{[l + 1]}$$\n\nso by our prerequesities:\n\n$$\n\\frac{\\partial L}{\\partial a^{[l]}} = (\\frac{\\partial z^{[l+1]}}{\\partial a^{[l]}})^{T}\\frac{\\partial L}{\\partial z^{[l+1]}} = (W^{[l+1]})^{T}\\frac{\\partial L}{\\partial z^{[l+1]}}\n$$\n\nwe also have:\n\n$$a^{[l]} = g^{[l]}z^{[l]}$$\n\nwe do not use softmax activation in hidden layers, so the dependent is direct:\n\n$$\\frac{\\partial L}{\\partial z^{[l]}} = \\frac{\\partial L}{\\partial a^{[l]}} \\odot {g^{[l]}}'(z^{[l]})$$\n\ncombine two equations:\n\n$$\\frac{\\partial L}{\\partial z^{[l]}} = (W^{[l+1]})^{T}\\frac{\\partial L}{\\partial z^{[l+1]}} \\odot {g^{[l]}}'(z^{[l]})$$\n\n3.final step, because:\n\n$$z^{[l]} = W^{[l]}a^{[l - 1]} + b^{[l]}$$\n\nso:\n\n$$\\frac{\\partial L}{\\partial W^{[l]}} = \\frac{\\partial L}{\\partial z^{[l]}}(a^{[l - 1]})^{T}$$ \n\n$$\\frac{\\partial L}{\\partial b^{[l]}}=\\frac{\\partial L}{\\partial z^{[l]}}$$\n\n## xavier initialization\n\nto mitigate vanishing and exploding gradient, to insure breaking symmtry, we should carefully initialize weights.\n\nconsider a fully connected layer without bias term and activation function:\n\n$$o_{i} = \\sum_{j=1}^{n_{in}}w_{ij}x_{j}$$\n\nsuppose $w_{ij}$ draw from a distribution of 0 mean and $\\sigma^{2}$ variance, not necessarily guassian.\n\nsuppose $x_{j}$ draw from a distribution of 0 mean and $\\gamma^{2}$ variance, all $w_{ij}, x_{j}$ are independent.\n\nthen mean of $o_{i}$ is of course 0, variance:\n\n$$\n\\begin{equation}\n\\begin{split}\nVar[o_{i}] =& E[o_{i}^{2}] - (E[o_{i}])^{2}\\\\\n=&\\sum_{j=1}^{n_{in}}E[w_{ij}^{2}x_{j}^{2}] \\\\\n=&\\sum_{j=1}^{n_{in}}E[w_{ij}^{2}]E[x_{j}^{2}] \\\\\n=&n_{in}\\sigma^{2}\\gamma^{2}\n\\end{split}\n\\end{equation}\n$$\n\nto keep variance fixed, we need to set $n_{in}\\sigma^{2}=1$.\n\nconsider back-propagation, we have:\n\n$$\\frac{\\partial L}{\\partial x_{j}} = \\sum_{i=1}^{n_{out}}w_{ij}\\frac{\\partial L}{\\partial o_{i}}$$\n\nso by the same inference, we need to 
set $$n_{out}\\sigma^{2} = 1$$.\n\nwe cannot satisfy both conditions simutaneously, we simply try to satisfy:\n\n$$\\frac{1}{2}(n_{in} + n_{out})\\sigma^{2} = 1 \\ \\text{ or }\\ \\sigma = \\sqrt{\\frac{n_{in} + n_{out}}{2}}$$\n\nthis is the reasoning under xavier initialization.\n\n\n```python\n\n```\n", "meta": {"hexsha": "b3f83a3099cc325ee56598eaca629c3eb3b6e5a0", "size": 11299, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "_build/html/_sources/09_neural_network.ipynb", "max_stars_repo_name": "newfacade/machine-learning-notes", "max_stars_repo_head_hexsha": "1e59fe7f9b21e16151654dee888ceccc726274d3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "_build/html/_sources/09_neural_network.ipynb", "max_issues_repo_name": "newfacade/machine-learning-notes", "max_issues_repo_head_hexsha": "1e59fe7f9b21e16151654dee888ceccc726274d3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "_build/html/_sources/09_neural_network.ipynb", "max_forks_repo_name": "newfacade/machine-learning-notes", "max_forks_repo_head_hexsha": "1e59fe7f9b21e16151654dee888ceccc726274d3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.528189911, "max_line_length": 200, "alphanum_fraction": 0.484910169, "converted": true, "num_tokens": 2629, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9553191297273498, "lm_q2_score": 0.8705972566572504, "lm_q1q2_score": 0.8316982135728226}} {"text": "# Some basic discrete-time signals\nDiscrete-time signals are sequences, or functions with integer-valued arguments\n$$ f:\\, \\mathbb{Z} \\rightarrow \\mathbb{R} $$\n$$ f(k) \\in \\mathbb{k}, \\; k \\in \\mathbb{Z} = \\{\\ldots, -2,-1,0,1,2,\\ldots\\}$$\nBelow are some important examples\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n```\n\n## The impulse function\n$$ f(k) = \\delta (k) = \\begin{cases} 1 & k=0\\\\ 0 & \\text{otherwise} \\end{cases}$$\n\n\n```python\nk = np.arange(-4,5)\nplt.figure(figsize=(12,1))\nplt.stem(k, k==0)\nplt.xticks([-4,-2,0,2,4])\nplt.xlabel('$k$')\nplt.ylabel('$\\delta(k)$');\n```\n\n## The shifted and scaled impulse function\n$$ f(k) = \\mathrm{q}a\\delta (k) = a\\delta(k+1) = \\begin{cases} a & k=-1\\\\ 0 & \\text{otherwise} \\end{cases}$$\nNote the *shift operator* q, whose definition is that it shifts the sequence forward (advances it). The impulse function has the property that it is equal to one if and only if the argument is equal to zero. So in the one-step-ahead shifted sequence $\\mathrm{q}\\delta(k)$ the value is one when $k+1=0$. 
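To make the operator notation concrete, here is a small sketch (the helper names `delta` and `shift` are just for illustration) that represents a sequence as a Python function and applies q by advancing its argument:

```python
import numpy as np

def delta(k):
    """Unit impulse: 1 where k == 0, and 0 otherwise."""
    return np.where(k == 0, 1, 0)

def shift(f, n=1):
    """The shift operator q^n applied to a sequence f: (q^n f)(k) = f(k + n)."""
    return lambda k: f(k + n)

k = np.arange(-4, 5)
print(delta(k))            # impulse located at k = 0
print(shift(delta, 1)(k))  # q applied to delta: impulse now located at k = -1
```

The same idea is used when the plot below evaluates $a\delta(k+1)$ directly on the index vector.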
\n\n\n```python\na = 1.4\nk = np.arange(-4,5)\nplt.figure(figsize=(12,1))\nplt.stem(k, a*((k+1)==0))\nplt.xticks(k); plt.yticks([0, a],['0', '$a$'])\nplt.xlabel('$k$')\nplt.ylabel('$\\mathrm{q}\\delta(k) = \\delta(k+1)$');\n\n```\n\n## The unit step function\n$$ f(k) = u_s (k) = \\begin{cases} 1 & k\\ge 0\\\\ 0 & k < 0 \\end{cases}$$\n\n\n```python\nk = np.arange(-4,5)\nplt.figure(figsize=(12,1)); \nplt.stem(k, k>=0)\nplt.xticks(k)\nplt.xlabel('$k$')\nplt.ylabel('$u_s(k)$')\n```\n\n## Connection between the impulse and the step function\nThe impulse function is obtained by taking the first difference of the step function\n$$ \\delta(k) = (1-\\text{q}^{-1})u_s(k) = u_s(k)-u_s(k-1). $$\nNote the *shift operator* q, whose definition is that it shifts the sequence forward (advances it). Consequently the inverse shift operator $\\text{q}^{-1}$ shifts the sequence backwards (delays it) one step.\n\n\n```python\nk = np.arange(-4,5)\nus = k>=0\nus_shift1 = k>=1 # The shifted sequence is one when k-1 >=0 <=> k>= 1\nfig,axs = plt.subplots(3,1, sharex=True, figsize=(12,4))\naxs[0].stem(k, us); axs[0].set(ylabel = '$u_s(k)$')\naxs[1].stem(k, us_shift1); axs[1].set(ylabel = '$u_s(k-1)$')\naxs[2].stem(k, k==0) ; axs[2].set(ylabel = '$\\delta(k)$');\n```\n\nClearly the bottom sequence is obtained by subtracting the second sequence from the first.\n\nThe unit step function can be obtained by summing over the impulse function\n$$ u_s(k) = \\sum_{i=-\\infty}^k \\delta(i). $$\n\n\n```python\nk = np.arange(-4,5)\ndlta = k==0\nfig,axs = plt.subplots(2,1, sharex=True, figsize=(12,4))\naxs[0].stem(k, dlta); axs[0].set(ylabel = '$k < 0$')\naxs[1].stem(k, dlta); axs[1].set(ylabel = '$k > 1$')\naxs[0].plot([-4,-1.9], [1.01, 1.01], 'r--')\naxs[0].plot([-1.9,-1.9], [1.01, .01], 'r--')\naxs[0].text(-4,0.5, '$k=-2,\\;$ summing only zeros')\naxs[1].plot([-4, 2.1], [1.01, 1.01], 'r--')\naxs[1].plot([2.1, 2.1], [1.01, .01], 'r--')\naxs[1].text(-4,0.5, '$k=2,\\;$ summing zeros and a single one');\n```\n\n## Real exponential\nThe real exponential is\n$$ f(k) = a^k. $$\nLet's look at three cases \n### Example 1 $a=0.5$\n\n\n```python\na = 0.5\nk = np.arange(-2,5)\nplt.figure(figsize=(12,5))\nplt.stem(k, np.power(a, k))\nplt.xticks(k); plt.yticks(np.power(a,k[:-1]))\nplt.xlabel('$k$')\nplt.ylabel('$0.5^k$');\n```\n\n### Example 2 $a = 2$\n\n\n```python\na = 2.0\nk = np.arange(-2,5)\nplt.figure(figsize=(12,5))\nplt.stem(k, np.power(a, k))\nplt.xticks(k); plt.yticks(np.power(a,k[1:]))\nplt.xlabel('$k$')\nplt.ylabel('$2^k$');\n```\n\n### Example 3 $a = -0.8$\n\n\n```python\na = -.8\nk = np.arange(-2,5)\nplt.figure(figsize=(12,5))\nplt.stem(k, np.power(a, k))\nplt.xticks(k); plt.yticks(np.power(a,k[:]))\nplt.xlabel('$k$')\nplt.ylabel('$(-0.8)^k$');\n```\n\n## Sinusoid\n$$ f(k) = a\\sin(\\omega_0 k + \\phi), \\quad k \\in \\mathbb{Z},\\quad a,\\omega_0, \\phi \\in \\mathbb{R} $$\n\nThe discrete-time sinusoid has two important differences compared to the continuous-time sinusoid:\n\n### It has infinitely many *aliases* at frequencies that are multiples of $2\\pi$\n$$\\sin\\big((\\omega_0 + n2\\pi)k + \\phi\\big) = \\sin(\\omega_0k + \\phi), \\quad n\\in\\mathbb{Z}$$ \nsince $2\\pi nk$ is an integer multiple of $2\\pi$ for all integers $n$ and $k$. This means that the signal with frequency $\\omega_0$ and the signal with frequency $\\omega_0 + n2\\pi$, where $n$ is an integer, are identical.
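\n\nBefore plotting, a quick numerical sanity check (a sketch added here, not from the original text) confirms that the base frequency and one of its aliases give the same sample values up to floating point rounding:\n\n\n```python\nw0 = np.pi/8; phi = np.pi/3; n = 3\nk = np.arange(-14,15)\n# largest difference between the base frequency and the alias w0 + 2*pi*n at the integers k\nprint(np.max(np.abs(np.sin(w0*k+phi) - np.sin((w0 + 2*np.pi*n)*k+phi))))\n```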
\n\n\n```python\na = 1.0; w0 = np.pi/8; phi = np.pi/3; n=3\nk = np.arange(-14,15)\nfig,axs = plt.subplots(2, 1, sharex=True,figsize=(12,5))\naxs[0].stem(k, np.sin(w0*k+phi))\naxs[1].stem(k, np.sin((w0+2*np.pi*n)*k+phi))\naxs[0].set(yticks=[-1,1])\naxs[1].set(yticks=[-1,1])\naxs[0].set(ylabel='$\\omega_0=\\pi/8$')\naxs[1].set(ylabel='$\\omega_0=\\pi/8 + 6\\pi$')\nplt.xlabel('$k$');\n```\n\nIn the example above, wee see that we cannot know by just looking at a discrete-time sinusoid, which of the alias frequencies it has ($\\omega_0=\\pi/8$ and $\\omega_0+n2\\pi=\\pi/8 + 6\\pi$ in the example). \n\nSince the discrete-time sinusoid is often obtained by sampling a continuous-time sinusoid, we reach the very important conclusion that **continous-time sinusoids of frequency $\\omega_0$ and $\\omega_0 + n\\omega_s$ have the same sample values**, and so are not distinguishable when sampled. Here $\\omega_s$ is the sampling frequency in radians per second. This has profound consequences when dealing with sampled signals and systems.\n\n### The discrete-time sinusoid is not always periodic\nThis is due to the fact that the argument $\\omega_0k + \\phi$ to the sinusoid is only taking on discrete values, and not all real values as in the continuous-time case.\nThe discrete-time sinusoid is periodic when $f(k+N) = f(k)$ for some integer $N$. This requires that \n$$\\omega_0(k+N) = \\omega_0k + 2\\pi M, $$\n$$ \\omega_0k + \\omega_0N = \\omega_0k + 2\\pi M $$\nfor some integers $N$ and $M$. This gives \n$$\\omega_0N = 2\\pi M \\quad \\Leftrightarrow \\quad \\frac{\\omega_0}{2\\pi} = \\frac{M}{N},$$\nwhich means that the ratio of $\\omega_0$ to $2\\pi$ must be a rational number. The periodicity will be $N$, and the sequence will repeat after $M$ whole wavelengths.\n\nThe above analysis show that we can have discrete-time sinusoids that are periodic where the period length is longer than one wavelength. To obtain a discrete-time sinusoid in which the period equals one wavelength we must require that\n$$ \\omega_0k + \\omega_0N = \\omega_0k + 2\\pi \\quad \\Rightarrow \\quad \\omega_0 = \\frac{2\\pi}{N}. 
$$\n\n#### Example 1: Periodic with periodicity equal to one wavelength $\\omega_0 = \\pi/8$, $\\phi=\\pi/3$\n\n\n```python\nw0 = np.pi/8; phi = np.pi/3\nk = np.arange(-14,15)\nplt.figure(figsize=(12,5))\nplt.stem(k, np.sin(w0*k+phi))\nplt.xticks(k); plt.yticks([-1,1])\nplt.xlabel('$k$')\nplt.ylabel('');\n```\n\n#### Example 2: Periodic with periodicity equal to two wavelengths $\\omega_0 = 4\\pi/17$, $\\phi=\\pi/3$\nThis gives a sequence which has periodicity 17 and repeats after two wavelengths.\n\n\n```python\nN = 17; M = 2\nw0 = M*2*np.pi/N; phi = np.pi/3\nk = np.arange(-14,15)\nplt.figure(figsize=(12,5))\nplt.stem(k, np.sin(w0*k+phi))\nplt.plot([-6, -6], [0.3, 0.75], 'r:')\nplt.plot([-6+N, -6+N], [0.3, 0.75], 'r:')\nplt.plot([-6, -6+17], [0.7, 0.7], 'r:')\nplt.text(3, 0.8, '%d wavelengths' %M)\nplt.xticks(k); plt.yticks([-1,1])\nplt.xlabel('$k$');\n```\n\n#### Example 3: Aperiodic $\\omega_0 = 1$, $\\phi = \\pi/3$\n\n\n```python\nw0 = 1\nphi = np.pi/3\nk = np.arange(-14,15)\nplt.figure(figsize=(12,5))\nplt.stem(k, np.sin(w0*k+phi))\nplt.xticks(k); plt.yticks([-1,1])\nplt.xlabel('$k$')\nplt.ylabel('');\n```\n\n#### Exercise: Construct an example with periodicity of three wavelengths\n\n\n```python\n#Your code here\n```\n\n## Discrete complex exponential\n$$ f(k) = a^k, \\quad a \\in \\mathbb{C}, \\quad k \\in \\mathbb{Z} $$\n\nWriting the complex number $a$ in polar form gives\n$$ f(k) = \\big(r\\mathrm{e}^{i\\omega_0}\\big)^k = r^k\\mathrm{e}^{ik\\omega_0} = r^k\\cos(k\\omega_0) + ir^k\\sin(k\\omega_0), $$\nwhich gives a sequence that looks like a discrete spiral in the complex plane. If the magnitude of $a$ is less than one, $r<1$, then the spiral approaches the origin. If the magnitude is greater than one it spirals outwards to infinity. If $r=1$, the complex sequence consists of points on the unit circle, each with the same phase distance to the previous point.\n\n### Connection to sinusoids\nComplex exponentials and sinusoids are closely connected through Euler's formula:\n\\begin{align}\n \\sin(\\omega_0 k) &= \\frac{1}{2i} \\big(\\mathrm{e}^{i\\omega_0 k} - \\mathrm{e}^{-i\\omega_0 k} \\big) = \\mathrm{Im} \\{ \\mathrm{e}^{i\\omega_0 k} \\}\\\\\n \\cos(\\omega_0 k) &= \\frac{1}{2} \\big(\\mathrm{e}^{i\\omega_0 k} + \\mathrm{e}^{-i\\omega_0 k} \\big) = \\mathrm{Re} \\{ \\mathrm{e}^{i\\omega_0 k} \\} \n\\end{align}\nSo, as for the case of sinusoids, we see that discrete-time complex exponentials will be periodic with period $N$ if and only if $\\omega_0$ can be written \n$$ \\omega_0 = \\frac{2\\pi}{N}, \\quad N \\in \\mathbb{Z}.
$$ \nHowever, also $-\\omega_0 + n2\\pi$ are alias frequencies to $\\omega_0$, since \n\\begin{align}\n\\sin(\\omega_0 k) &= -\\sin(-\\omega_0k) = -\\sin\\big(-\\omega_0 + 2n\\pi)k\\big) = - \\frac{1}{2i} \\big( \\mathrm{e}^{i(-\\omega_0+ n2\\pi)k} -\\mathrm{e}^{-i(-\\omega_0 + 2\\pi) k} \\big).\n \\end{align}\n\n\n```python\nw0 = np.pi/6\nk = np.arange(-2,10)\nplt.figure(figsize=(8,8))\nfor r in [0.8, 1.0, 1.2]:\n a = r*np.complex(np.cos(w0), np.sin(w0))\n f = np.power(a, k)\n plt.polar(np.angle(f), np.abs(f), '.', markersize=14, label='r=%0.2f' %r)\nplt.legend(loc='right',fontsize='large', bbox_to_anchor=(1.1,1));\n```\n\n### Exercise\nPlot the function \n$$ f(k) = \\mathrm{Im} \\{ \\big(1.2 \\mathrm{e}^{i\\frac{\\pi}{12}}\\big)^k\\}$$\n\n\n```python\nw0 = np.pi/12; r=1.2\n#a = # your code here\nf = np.power(a, k)\nk = np.arange(-2,20)\nplt.figure(figsize=(12,4))\n#plt.stem(k, ) # Your code here\n```\n", "meta": {"hexsha": "1b0c2d65159086688c791236c291d200bc888a98", "size": 208366, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "discrete-time-systems/notebooks/Basic-dt-signals.ipynb", "max_stars_repo_name": "kjartan-at-tec/mr2007-computerized-control", "max_stars_repo_head_hexsha": "16e35f5007f53870eaf344eea1165507505ab4aa", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-11-07T05:20:37.000Z", "max_stars_repo_stars_event_max_datetime": "2020-12-22T09:46:13.000Z", "max_issues_repo_path": "discrete-time-systems/notebooks/Basic-dt-signals.ipynb", "max_issues_repo_name": "kjartan-at-tec/mr2007-computerized-control", "max_issues_repo_head_hexsha": "16e35f5007f53870eaf344eea1165507505ab4aa", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2020-06-12T20:44:41.000Z", "max_issues_repo_issues_event_max_datetime": "2020-06-12T20:49:00.000Z", "max_forks_repo_path": "discrete-time-systems/notebooks/Basic-dt-signals.ipynb", "max_forks_repo_name": "kjartan-at-tec/mr2007-computerized-control", "max_forks_repo_head_hexsha": "16e35f5007f53870eaf344eea1165507505ab4aa", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-03-14T03:55:27.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-14T03:55:27.000Z", "avg_line_length": 334.455858748, "max_line_length": 81024, "alphanum_fraction": 0.9281840607, "converted": true, "num_tokens": 3615, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9425067211996142, "lm_q2_score": 0.8824278587245935, "lm_q1q2_score": 0.831694187821713}} {"text": "```julia\nusing Symbolics\n```\n\n\n```julia\n@variables t x y\n```\n\n\n\n\n\\begin{equation}\n\\left[\n\\begin{array}{c}\nt \\\\\nx \\\\\ny \\\\\n\\end{array}\n\\right]\n\\end{equation}\n\n\n\n\n\n```julia\nD = Differential(t)\n```\n\n\n\n\n (::Differential) (generic function with 2 methods)\n\n\n\n\n```julia\nz = t + t^2\n```\n\n\n\n\n\\begin{equation}\nt + t^{2}\n\\end{equation}\n\n\n\n\n\n```julia\nD(z)\n```\n\n\n\n\n\\begin{equation}\n\\mathrm{\\frac{d}{d t}}\\left( t + t^{2} \\right)\n\\end{equation}\n\n\n\n\n\n```julia\nexpand_derivatives(D(z))\n```\n\n\n\n\n\\begin{equation}\n1 + 2 t\n\\end{equation}\n\n\n\n\n\n```julia\nSymbolics.jacobian([x + x*y, x^2 + y], [x, y])\n```\n\n\n\n\n\\begin{equation}\n\\left[\n\\begin{array}{cc}\n1 + y & x \\\\\n2 x & 1 \\\\\n\\end{array}\n\\right]\n\\end{equation}\n\n\n\n\n\n```julia\nB = simplify.([t^2 + t + t^2 2t + 4t; x + y + y + 2t x^2 - x^2 + y^2])\n```\n\n\n\n\n\\begin{equation}\n\\left[\n\\begin{array}{cc}\nt + 2 t^{2} & 6 t \\\\\nx + 2 t + 2 y & y^{2} \\\\\n\\end{array}\n\\right]\n\\end{equation}\n\n\n\n\n\n```julia\nsimplify.(substitute.(B, (Dict(x=>y^2),)))\n```\n\n\n\n\n\\begin{equation}\n\\left[\n\\begin{array}{cc}\nt + 2 t^{2} & 6 t \\\\\n2 t + y^{2} + 2 y & y^{2} \\\\\n\\end{array}\n\\right]\n\\end{equation}\n\n\n\n\n\n```julia\nsubstitute.(B, (Dict(x => 2.0, y=>3.0, t=>4.0),))\n```\n\n\n\n\n\\begin{equation}\n\\left[\n\\begin{array}{cc}\n36.0 & 24.0 \\\\\n16.0 & 9.0 \\\\\n\\end{array}\n\\right]\n\\end{equation}\n\n\n\n\n# SymbolivUtils\n\n\n```julia\nimport Pkg\nPkg.add(\"SymbolicUtils\")\n```\n\n \u001b[32m\u001b[1m Updating\u001b[22m\u001b[39m registry at `~/.julia/registries/General`\n \u001b[32m\u001b[1m Updating\u001b[22m\u001b[39m git-repo `https://github.com/JuliaRegistries/General.git`\n \u001b[32m\u001b[1m Resolving\u001b[22m\u001b[39m package versions...\n \u001b[32m\u001b[1m Installed\u001b[22m\u001b[39m SymbolicUtils \u2500 v0.18.2\n \u001b[32m\u001b[1m Updating\u001b[22m\u001b[39m `~/.julia/environments/v1.7/Project.toml`\n \u001b[90m [d1185830] \u001b[39m\u001b[92m+ SymbolicUtils v0.18.2\u001b[39m\n \u001b[32m\u001b[1m Updating\u001b[22m\u001b[39m `~/.julia/environments/v1.7/Manifest.toml`\n \u001b[90m [d1185830] \u001b[39m\u001b[93m\u2191 SymbolicUtils v0.18.1 \u21d2 v0.18.2\u001b[39m\n \u001b[32m\u001b[1mPrecompiling\u001b[22m\u001b[39m project...\n \u001b[33m \u2713 \u001b[39mSymbolicUtils\n \u001b[33m \u2713 \u001b[39mSymbolics\n 2 dependencies successfully precompiled in 30 seconds (192 already precompiled)\n \u001b[33m2\u001b[39m dependencies precompiled but different versions are currently loaded. 
Restart julia to access the new versions\n\n\n\n```julia\nusing SymbolicUtils\n```\n\n\n```julia\nSymbolicUtils.show_simplified[] = true\n```\n\n\n\n\n true\n\n\n\n\n```julia\n@syms x::Real y::Real z::Complex f(::Number)::Real\n```\n\n\n\n\n (x, y, z, f)\n\n\n\n\n```julia\n2x^2 - y + x^2\n```\n\n\n\n\n\\begin{equation}\n - y + 3 x^{2}\n\\end{equation}\n\n\n\n\n\n```julia\nf(sin(x)^2 + cos(x)^2) + z\n```\n\n\n\n\n\\begin{equation}\nz + f\\left( \\cos^{2}\\left( x \\right) + \\sin^{2}\\left( x \\right) \\right)\n\\end{equation}\n\n\n\n\n\n```julia\nr = @rule sinh(im * ~x) => sin(~x)\n```\n\n\n\n\n sinh(im * ~x) => sin(~x)\n\n\n\n\n```julia\nr(sinh(im*y))\n```\n\n\n\n\n\\begin{equation}\n\\sin\\left( y \\right)\n\\end{equation}\n\n\n\n\n\n```julia\nsimplify(cos(y)^2 + sinh(im*y)\n```\n", "meta": {"hexsha": "085d8134ddfa4ec7a6a9b1c57996723f4c022377", "size": 10573, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "all_repository/julia_lab/sym_test-Copy1.ipynb", "max_stars_repo_name": "jskDr/keraspp_2021", "max_stars_repo_head_hexsha": "dc46ebb4f4dea48612135136c9837da7c246534a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2021-09-21T15:35:04.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-14T12:14:44.000Z", "max_issues_repo_path": "all_repository/julia_lab/sym_test-Copy1.ipynb", "max_issues_repo_name": "jskDr/keraspp_2021", "max_issues_repo_head_hexsha": "dc46ebb4f4dea48612135136c9837da7c246534a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "all_repository/julia_lab/sym_test-Copy1.ipynb", "max_forks_repo_name": "jskDr/keraspp_2021", "max_forks_repo_head_hexsha": "dc46ebb4f4dea48612135136c9837da7c246534a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 21.3165322581, "max_line_length": 144, "alphanum_fraction": 0.4637283647, "converted": true, "num_tokens": 1138, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9294404018582427, "lm_q2_score": 0.8947894618940992, "lm_q1q2_score": 0.8316534770413724}} {"text": "# Maximum Likelihood and MAP continued...\n\n## MLE of Mean of Gaussian\n\n* Let $\\mathbf{x}_1, \\mathbf{x}_2, \\ldots, \\mathbf{x}_N$ be samples from a multi-variance Normal distribution with known covariance matrix and an unknown mean. Given this data, obtain the ML estimate of the mean vector. \n\t\\begin{equation}\n\tp(\\mathbf{x}_k| {{\\mu}}) = \\frac{1}{(2\\pi)^{\\frac{l}{2}}\\left| \\Sigma \\right|^{\\frac{1}{2}}}\\exp\\left( -\\frac{1}{2}(\\mathbf{x}_k - {{\\mu}})^T\\Sigma^{-1}(\\mathbf{x}_k - {\\mu})\\right)\n\t\\end{equation}\n* We can define our likelihood given the $N$ data points. 
We are assuming these data points are drawn independently but from an identical distribution (i.i.d.):\n\t\\begin{equation}\n\t\\prod_{n=1}^N p(\\mathbf{x}_n| {{\\mu}}) = \\prod_{n=1}^N \\frac{1}{(2\\pi)^{\\frac{l}{2}}\\left| \\Sigma \\right|^{\\frac{1}{2}}}\\exp\\left( -\\frac{1}{2}(\\mathbf{x}_n - {{\\mu}})^T\\Sigma^{-1}(\\mathbf{x}_n - {\\mu})\\right)\n\t\\end{equation}\n* We can apply our \"trick\" to simplify\n\t\\begin{eqnarray}\n\t\\mathscr{L} &=& \\ln \\prod_{n=1}^N p(\\mathbf{x}_n| {{\\mu}}) = \\ln \\prod_{n=1}^N \\frac{1}{(2\\pi)^{\\frac{l}{2}}\\left| \\Sigma \\right|^{\\frac{1}{2}}}\\exp\\left( -\\frac{1}{2}(\\mathbf{x}_n - {{\\mu}})^T\\Sigma^{-1}(\\mathbf{x}_n - {\\mu})\\right)\\\\\n\t&=& \\sum_{n=1}^N \\ln \\frac{1}{(2\\pi)^{\\frac{l}{2}}\\left| \\Sigma \\right|^{\\frac{1}{2}}}\\exp\\left( -\\frac{1}{2}(\\mathbf{x}_n - {{\\mu}})^T\\Sigma^{-1}(\\mathbf{x}_n - {\\mu})\\right)\\\\\n\t&=& \\sum_{n=1}^N \\left( \\ln \\frac{1}{(2\\pi)^{\\frac{l}{2}}\\left| \\Sigma \\right|^{\\frac{1}{2}}} + \\left( -\\frac{1}{2}(\\mathbf{x}_n - {{\\mu}})^T\\Sigma^{-1}(\\mathbf{x}_n - {\\mu})\\right) \\right) \\\\\n\t&=& - N \\ln (2\\pi)^{\\frac{l}{2}}\\left| \\Sigma \\right|^{\\frac{1}{2}} + \\sum_{n=1}^N \\left( -\\frac{1}{2}(\\mathbf{x}_n - {{\\mu}})^T\\Sigma^{-1}(\\mathbf{x}_n - {\\mu}) \\right) \n\t\\end{eqnarray}\n* Now, lets maximize:\n\t\\begin{eqnarray}\n\t\\frac{\\partial \\mathscr{L}}{\\partial \\mu} &=& \\frac{\\partial}{\\partial \\mu} \\left[- N \\ln (2\\pi)^{\\frac{l}{2}}\\left| \\Sigma \\right|^{\\frac{1}{2}} + \\sum_{n=1}^N \\left( -\\frac{1}{2}(\\mathbf{x}_n - {{\\mu}})^T\\Sigma^{-1}(\\mathbf{x}_n - {\\mu}) \\right)\\right] = 0 \\\\\n\t&\\rightarrow& \\sum_{n=1}^N \\Sigma^{-1}(\\mathbf{x}_n - {\\mu}) = 0\\\\\n\t&\\rightarrow& \\sum_{n=1}^N \\Sigma^{-1}\\mathbf{x}_n = \\sum_{n=1}^N \\Sigma^{-1} {\\mu}\\\\\n\t&\\rightarrow& \\Sigma^{-1} \\sum_{n=1}^N \\mathbf{x}_n = \\Sigma^{-1} {\\mu} N\\\\\n\t&\\rightarrow& \\sum_{n=1}^N \\mathbf{x}_n = {\\mu} N\\\\\n\t&\\rightarrow& \\frac{\\sum_{n=1}^N \\mathbf{x}_n}{N} = {\\mu}\\\\\n\t\\end{eqnarray}\n* So, the ML estimate of $\\mu$ is the sample mean!\n\n\n## MAP of Mean of Gaussian\n\n* To get a MAP estimate of the mean of a Gaussian, we apply a prior distribution and maximize the posterior. 
\n* Lets use a Gaussian prior on the mean (because it has a *conjugate prior* relationship)\n\n\\begin{eqnarray}\np(\\mu|X, \\mu_0, \\sigma_0^2, \\sigma^2) &\\propto& \\mathscr{N}(X|\\mu, \\sigma^2)\\mathscr{N}(\\mu|\\mu_0, \\sigma_0^2)\\\\\n&=& \\prod_{n=1}^N \\left(\\frac{1}{\\sqrt{2\\pi \\sigma^2}} \\exp\\left\\{-\\frac{1}{2\\sigma^2}\\left(x_n-\\mu\\right)^2 \\right\\}\\right)\\frac{1}{\\sqrt{2\\pi \\sigma_0^2}} \\exp\\left\\{-\\frac{1}{2\\sigma_0^2}\\left(\\mu-\\mu_0\\right)^2 \\right\\}\\nonumber\\\\\n\\mathscr{L} &=& -\\frac{N}{2}\\ln(2\\pi\\sigma^2) - \\frac{1}{2\\sigma^2}\\sum_{n=1}^N(x_n-\\mu)^2 - \\frac{1}{2}\\ln(2\\pi \\sigma_0^2) - \\frac{1}{2\\sigma_0^2}(\\mu - \\mu_0)^2\\\\\n\\frac{\\partial \\mathscr{L}}{\\partial \\mu} &=& \\frac{1}{\\sigma^2}\\sum_{n=1}^N x_n - \\frac{N}{\\sigma^2}\\mu - \\frac{1}{\\sigma_0^2}\\mu + \\frac{1}{\\sigma_0^2}\\mu_0 = 0\\\\\n\\mu\\left(\\frac{N\\sigma_0^2 + \\sigma^2}{\\sigma^2\\sigma_0^2} \\right) &=& \\frac{1}{\\sigma^2}\\sum_{n=1}^N x_n + \\frac{1}{\\sigma_0^2}\\mu_0 \\\\\n\\mu_{MAP} &=& \\frac{\\sigma_0^2}{N\\sigma_0^2 + \\sigma^2}\\sum_{n=1}^N x_n + \\frac{\\mu_0\\sigma^2}{N\\sigma_0^2 + \\sigma^2}\n\\end{eqnarray}\n\n* *Does this result make sense?*\n\n\n```python\n\n```\n", "meta": {"hexsha": "b9ae2dc50aff00022d6006f921eb970f05614f38", "size": 5193, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Lecture04_MLandMAPcont/Lecture 04 .ipynb", "max_stars_repo_name": "chenshenlv/LectureNotes", "max_stars_repo_head_hexsha": "6febeec7b93f915ba4608fe85ee13364e03db73b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-03-17T19:11:19.000Z", "max_stars_repo_stars_event_max_datetime": "2021-02-16T06:04:02.000Z", "max_issues_repo_path": "Lecture04_MLandMAPcont/Lecture 04 .ipynb", "max_issues_repo_name": "chenshenlv/LectureNotes", "max_issues_repo_head_hexsha": "6febeec7b93f915ba4608fe85ee13364e03db73b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2018-10-02T16:11:44.000Z", "max_issues_repo_issues_event_max_datetime": "2018-10-23T17:05:00.000Z", "max_forks_repo_path": "Lecture04_MLandMAPcont/Lecture 04 .ipynb", "max_forks_repo_name": "chenshenlv/LectureNotes", "max_forks_repo_head_hexsha": "6febeec7b93f915ba4608fe85ee13364e03db73b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2020-03-18T00:29:54.000Z", "max_forks_repo_forks_event_max_datetime": "2020-07-23T05:00:04.000Z", "avg_line_length": 57.0659340659, "max_line_length": 302, "alphanum_fraction": 0.5074138263, "converted": true, "num_tokens": 1677, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9532750453562491, "lm_q2_score": 0.8723473713594991, "lm_q1q2_score": 0.8315869799991312}} {"text": "\n \n
    \n Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Kyle T. Mandli
    \n\n\n```python\nfrom __future__ import print_function\nfrom __future__ import absolute_import\n\n%matplotlib inline\nimport numpy\nimport matplotlib.pyplot as plt\n```\n\n# Numerical Differentiation\n\n**GOAL:** Given a set of $N+1$ points $(x_i, y_i)$ compute the derivative of a given order to a specified accuracy.\n\n**Approaches:** \n * Find the interpolating polynomial $P_N(x)$ and differentiate that.\n * Use Taylor-series expansions and the method of undetermined coefficients to derive finite-difference weights and their error estimates\n \n**Issues:** Order vs accuracy...how to choose\n\n# Example 1: how to approximate the derivative $f'(x)$ given a discrete sampling of a function $f(x)$\n\nHere we will consider how to estimate $f'(x_k)$ given a $N$ point sampling of $f(x)=\\sin(\\pi x) + 1/2 \\sin(2\\pi x)$ sampled uniformly over the interval $x\\in [ 0,1]$\n\n\n```python\nN = 11\nx = numpy.linspace(0,1,N)\nxfine = numpy.linspace(0,1,101)\nf = lambda x: numpy.sin(numpy.pi*x) + 0.5*numpy.sin(4*numpy.pi*x)\n\nfig = plt.figure(figsize=(8,6))\naxes = fig.add_subplot(1,1,1)\naxes.plot(xfine, f(xfine),'b',label='$f(x)$')\naxes.plot(x, f(x), 'ro', markersize=12, label='$f(x_k)$')\naxes.grid()\naxes.set_xlabel('x')\np = numpy.polyfit(x,f(x),N-1)\naxes.plot(xfine,numpy.polyval(p,xfine),'g--',label='$P_{{{N}}}$'.format(N=N-1))\naxes.legend(fontsize=15)\nplt.show()\n```\n\n### Example 2: how to approximate derivative $f'(x)$ given a discrete sampling of a function $f(x)$\n\nHere we will consider how to estimate $f'(x_k)$ given a $N$ point sampling of Runge's function sampled uniformly over the interval $x\\in [ -1,1]$\n\n\n```python\nN = 11\nx = numpy.linspace(-1,1,N)\nxfine = numpy.linspace(-1,1,101)\nf = lambda x: 1./(1. + 25*x**2)\n\nfig = plt.figure(figsize=(8,6))\naxes = fig.add_subplot(1,1,1)\naxes.plot(xfine, f(xfine),'b',label='$f(x)$')\naxes.plot(x, f(x), 'ro', markersize=12, label='$f(x_k)$')\naxes.grid()\naxes.set_xlabel('x')\np = numpy.polyfit(x,f(x),N-1)\naxes.plot(xfine,numpy.polyval(p,xfine),'g--',label='$P_{{{N}}}$'.format(N=N-1))\naxes.legend(fontsize=15)\nplt.show()\n```\n\n### The interpolating polynomial: review\n\nFrom our previous lecture, we showed that we can approximate a function $f(x)$ over some interval in terms of a unique interpolating polynomial through $N+1$ points and a remainder term\n\n$$\n f(x) = P_N(x) + R_N(x)\n$$\n\nWhere the Lagrange remainder term is\n\n$$R_N(x) = (x - x_0)(x - x_1)\\cdots (x - x_{N})(x - x_{N+1}) \\frac{f^{(N+1)}(c)}{(N+1)!}$$\n\nWhile there are multiple ways to represent the interpolating polynomial, both $P_N(x)$ and $R_N(x)$ are polynomials in $x$ and therefore differentiable. Thus we should be able to calculate the first derivative and its error as\n\n$$\n f'(x) = P'_N(x) + R'_N(x)\n$$\n\nand likewise for higher order derivatives up to degree $N$.\n\n### Derivatives of the Lagrange Polynomials \n\nThe Lagrange basis, is a particularly nice basis for calculating numerical differentiation formulas because of their basic interpolating property that\n\n$$\n P_N(x) = \\sum_{i=0}^N f(x_i)\\ell_i(x)\n$$\n\nwhere $f(x_i)$ is just the value of our function $f$ at node $x_i$ and all of the $x$ dependence is contained in the Lagrange Polynomials $\\ell_i(x)$ (which only depend on the node coordinates $x_i$, $i=0,\\ldots,N$). 
Thus, the interpolating polynomial at any $x$ is simply a linear combination of the values at the nodes $f(x_i)$\n\nLikewise its first derivative\n$$\nP'_N(x) = \\sum_{i=0}^N f(x_i)\\ell'_i(x)\n$$\nis also just a linear combination of the values $f(x_i)$\n\n## Examples\n\nGiven the potentially, highly oscillatory nature of the interpolating polynomial, in practice we only use a small number of data points around a given point $x_k$ to derive a differentiation formula for the derivative $f'(x_k)$. In the context of differential equations we also often have $f(x)$ so that $f(x_k) = y_k$ and we can approximate the derivative of a known function $f(x)$.\n\n\n```python\nN = 9\nf = lambda x: 1./(1. + 25*x**2)\n#f = lambda x: numpy.cos(2.*numpy.pi*x)\n```\n\n\n```python\nx = numpy.linspace(-1,1,N)\nxfine = numpy.linspace(-1,1,101)\n\nfig = plt.figure(figsize=(8,6))\naxes = fig.add_subplot(1,1,1)\naxes.plot(xfine, f(xfine),'b',label='$f(x)$')\naxes.plot(x, f(x), 'ro', markersize=12, label='$f(x_k)$')\nx3 = x[5:8]\nx3fine = numpy.linspace(x3[0],x3[-1],20)\np = numpy.polyfit(x3,f(x3),2)\naxes.plot(x3,f(x3),'m',label = 'Piecewise $P_1(x)$')\naxes.plot(x3fine,numpy.polyval(p,x3fine),'k',label = 'Piecewise $P_2(x)$')\naxes.grid()\naxes.set_xlabel('x')\np = numpy.polyfit(x,f(x),N-1)\naxes.plot(xfine,numpy.polyval(p,xfine),'g--',label='$P_{{{N}}}$'.format(N=N-1))\naxes.legend(fontsize=14,loc='best')\nplt.show()\n```\n\n### Example: 1st order polynomial through 2 points $x=x_0, x_1$:\n\n\n$$\n P_1(x)=f_0\\ell_0(x) + f_1\\ell_1(x)\n$$\n\nOr written out in full\n\n$$\nP_1(x) = f_0\\frac{x-x_1}{x_0-x_1} + f_1\\frac{x-x_0}{x_1-x_0} \n$$\n\n\nThus the first derivative of this polynomial for all $x\\in[x_0,x_1]$ is\n\n$$\nP'_1(x) = \\frac{f_0}{x_0-x_1} + \\frac{f_1}{x_1-x_0} = \\frac{f_1 - f_0}{x_1 - x_0} = \\frac{f_1 - f_0}{\\Delta x}\n$$\n\nWhere $\\Delta x$ is the width of the interval. This formula is simply the slope of the chord connecting the points $(x_0, f_0)$ and $(x_1,f_1)$. Note also, that the estimate of the first-derivative is constant for all $x\\in[x_0,x_1]$.\n\n#### \"Forward\" and \"Backward\" first derivatives\n\nEven though the first derivative by this method is the same at both $x_0$ and $x_1$, we sometime make a distinction between the \"forward Derivative\"\n\n$$f'(x_n) \\approx D_1^+ = \\frac{f(x_{n+1}) - f(x_n)}{\\Delta x}$$\n\nand the \"backward\" finite-difference as\n\n$$f'(x_n) \\approx D_1^- = \\frac{f(x_n) - f(x_{n-1})}{\\Delta x}$$\n\n\n\nNote these approximations should be familiar to use as the limit as $\\Delta x \\rightarrow 0$ these are no longer approximations but equivalent definitions of the derivative at $x_n$.\n\n### Example: 2nd order polynomial through 3 points $x=x_0, x_1, x_2$:\n\n\n$$\n P_2(x)=f_0\\ell_0(x) + f_1\\ell_1(x) + f_2\\ell_2(x)\n$$\n\nOr written out in full\n\n$$\nP_2(x) = f_0\\frac{(x-x_1)(x-x_2)}{(x_0-x_1)(x_0-x_2)} + f_1\\frac{(x-x_0)(x-x_2)}{(x_1-x_0)(x_1-x_2)} + f_2\\frac{(x-x_0)(x-x_1)}{(x_2-x_0)(x_2-x_1)}\n$$\n\n\nThus the first derivative of this polynomial for all $x\\in[x_0,x_2]$ is\n\n$$\nP'_2(x) = f_0\\frac{(x-x_1)+(x-x_2)}{(x_0-x_1)(x_0-x_2)} + f_1\\frac{(x-x_0)+(x-x_2)}{(x_1-x_0)(x_1-x_2)} + f_2\\frac{(x-x_0)+(x-x_1)}{(x_2-x_0)(x_2-x_1)}\n$$\n\n\n\n**Exercise**: show that the second-derivative $P''_2(x)$ is a constant (find it!) 
but is also just a linear combination of the function values at the nodes.\n\n### Special case of equally spaced nodes $x = [-h, 0, h]$ where $h=\\Delta x$ is the grid spacing\n\n\nGeneral Case:\n$$\nP'_2(x) = f_0\\frac{(x-x_1)+(x-x_2)}{(x_0-x_1)(x_0-x_2)} + f_1\\frac{(x-x_0)+(x-x_2)}{(x_1-x_0)(x_1-x_2)} + f_2\\frac{(x-x_0)+(x-x_1)}{(x_2-x_0)(x_2-x_1)}\n$$\n\nBecomes:\n$$\nP'_2(x) = f_0\\frac{2x-h}{2h^2} + f_1\\frac{-2x}{h^2} + f_2\\frac{2x+h}{2h^2}\n$$\n\nwhich if we evaluate at the three nodes $-h,0,h$ yields\n\n$$\nP'_2(-h) = \\frac{-3f_0 + 4f_1 -1f_2}{2h}, \\quad\\quad P'_2(0) = \\frac{-f_0 + f_2}{2h}, \\quad\\quad P'_2(h) = \\frac{f_0 -4f_1 + 3f_2}{2h} \n$$\n\nAgain, just linear combinations of the values at the nodes $f(x_i)$\n\n#### Quick Checks\n\nIn general, all finite difference formulas can be written as linear combinations of the values of $f(x)$ at the nodes. The formula's can be hard to remember, but they are easy to check.\n\n* The sum of the coefficients must add to zero. Why?\n* The sign of the coefficients can be checked by inserting $f(x_i) = x_i$\n\n##### Example\n\nGiven \n$$\nP'_2(-h) =\\frac{-3f_0 + 4f_1 -1f_2}{2h}\n$$\n\nWhat is $P'_2(-h)$ if\n\n* $$f_0=f_1=f_2$$\n* $$f_0 = 0, ~f_1 = 1, ~f_2 = 2$$ \n\n### Error Analysis\n\nIn addition to calculating finite difference formulas, we can also estimate the error\n\nFrom Lagrange's Theorem, the remainder term looks like\n\n$$R_N(x) = (x - x_0)(x - x_1)\\cdots (x - x_{N})(x - x_{N+1}) \\frac{f^{(N+1)}(c)}{(N+1)!}$$\n\nThus the derivative of the remainder term $R_N(x)$ is\n\n$$R_N'(x) = \\left(\\sum^{N}_{i=0} \\left( \\prod^{N}_{j=0,~j\\neq i} (x - x_j) \\right )\\right ) \\frac{f^{(N+1)}(c)}{(N+1)!}$$\n\nThe remainder term contains a sum of $N$'th order polynomials and can be awkward to evaluate, however, if we restrict ourselves to the error at any given node $x_k$, the remainder simplifies to \n\n$$R_N'(x_k) = \\left( \\prod^{N}_{j=0,~j\\neq k} (x_k - x_j) \\right) \\frac{f^{(N+1)}(c)}{(N+1)!}$$\n\nIf we let $\\Delta x = \\max_i |x_k - x_i|$ we then know that the remainder term will be $\\mathcal{O}(\\Delta x^N)$ as $\\Delta x \\rightarrow 0$ thus showing that this approach converges and we can find arbitrarily high order approximations (ignoring floating point error).\n\n### Examples\n\n#### First order differences $N=1$\n\nFor our first order finite differences, the error term is simply\n\n$$R_1'(x_0) = -\\Delta x \\frac{f''(c)}{2}$$\n$$R_1'(x_1) = \\Delta x \\frac{f''(c)}{2}$$\n\nBoth of which are $O(\\Delta x f'')$\n\n#### Second order differences $N=2$\n\n\nFor general second order polynomial interpolation, the derivative of the remainder term is\n\n$$\\begin{aligned}\n R_2'(x) &= \\left(\\sum^{2}_{i=0} \\left( \\prod^{2}_{j=0,~j\\neq i} (x - x_j) \\right )\\right ) \\frac{f'''(c)}{3!} \\\\\n &= \\left ( (x - x_{i+1}) (x - x_{i-1}) + (x-x_i) (x-x_{i-1}) + (x-x_i)(x-x_{i+1}) \\right ) \\frac{f'''(c)}{3!}\n\\end{aligned}$$\n\nAgain evaluating this expression at the center point $x = x_i$ and assuming evenly space points we have\n\n$$R_2'(x_i) = -\\Delta x^2 \\frac{f'''(c)}{3!}$$\n\nshowing that our error is $\\mathcal{O}(\\Delta x^2)$.\n\n### Caution\n\nHigh order does not necessarily imply high-accuracy! \n\nAs always, the question remains as to whether the underlying function is well approximated by a high-order polynomial.\n\n\n### Convergence \n\nNevertheless, we can always check to see if the error reduces as expected as $\\Delta x\\rightarrow 0$. 
Here we estimate the 1st and 2nd order first-derivative for evenly spaced points\n\n\n```python\ndef D1_p(func, x_min, x_max, N):\n \"\"\" calculate consistent 1st order Forward difference of a function func(x) defined on the interval [x_min,xmax]\n and sampled at N evenly spaced points\"\"\"\n\n x = numpy.linspace(x_min, x_max, N)\n f = func(x)\n dx = x[1] - x[0]\n f_prime = numpy.zeros(N)\n f_prime[0:-1] = (f[1:] - f[0:-1])/dx\n # and patch up the end point with a backwards difference\n f_prime[-1] = f_prime[-2]\n\n return f_prime\n\ndef D1_2(func, x_min, x_max, N):\n \"\"\" calculate consistent 2nd order first derivative of a function func(x) defined on the interval [x_min,xmax]\n and sampled at N evenly spaced points\"\"\"\n\n x = numpy.linspace(x_min, x_max, N)\n f = func(x)\n dx = x[1] - x[0]\n f_prime = numpy.zeros(N)\n f_prime[0] = f[:3].dot(numpy.array([-3, 4, -1]))/(2*dx)\n f_prime[1:-1] = (f[2:N] - f[0:-2])/(2*dx)\n f_prime[-1] = f[-3:].dot(numpy.array([1, -4, 3]))/(2*dx)\n \n return f_prime\n```\n\n#### Note: \n\nThis first derivative operator can also be written as a Matrix $D$ such that $f'(\\mathbf{x}) = Df(\\mathbf{x})$ where $\\mathbf{x}$ is a vector of $x$ coordinates. (exercise left for the homework)\n\n\n```python\nN = 81\nxmin = 0.\nxmax = 1.\nfunc = lambda x: numpy.sin(numpy.pi*x) + 0.5*numpy.sin(4*numpy.pi*x)\nfunc_prime = lambda x: numpy.pi*numpy.cos(numpy.pi*x) + 2.*numpy.pi * numpy.cos(4*numpy.pi*x)\nD1f = D1_p(func, xmin, xmax, N)\nD2f = D1_2(func, xmin, xmax, N)\n```\n\n\n```python\nxa = numpy.linspace(xmin, xmax, 100)\nxi = numpy.linspace(xmin, xmax, N)\nfig = plt.figure(figsize=(16, 6))\naxes = fig.add_subplot(1, 2, 1)\naxes.plot(xa, func(xa), 'b', label=\"$f(x)$\")\naxes.plot(xa, func_prime(xa), 'k--', label=\"$f'(x)$\")\naxes.plot(xi, func(xi), 'ro')\naxes.plot(xi, D1f, 'ko',label='$D^+_1(f)$')\naxes.legend(loc='best')\naxes.set_title(\"$f'(x)$\")\naxes.set_xlabel(\"x\")\naxes.set_ylabel(\"$f'(x)$ and $\\hat{f}'(x)$\")\naxes.grid()\n\naxes = fig.add_subplot(1, 2, 2)\naxes.plot(xa, func(xa), 'b', label=\"$f(x)$\")\naxes.plot(xa, func_prime(xa), 'k--', label=\"$f'(x)$\")\naxes.plot(xi, func(xi), 'ro')\naxes.plot(xi, D2f, 'go',label='$D_1^2(f)$')\naxes.legend(loc='best')\naxes.set_title(\"$f'(x)$\")\naxes.set_xlabel(\"x\")\naxes.set_ylabel(\"$f'(x)$ and $\\hat{f}'(x)$\")\naxes.grid()\nplt.show()\n```\n\n\n```python\nN = 81\nxmin = -1\nxmax = 1.\nfunc = lambda x: 1./(1 + 25.*x**2)\nfunc_prime = lambda x: -50. * x / (1. 
+ 25.*x**2)**2\nD1f = D1_p(func, xmin, xmax, N)\nD2f = D1_2(func, xmin, xmax, N)\n```\n\n\n```python\nxa = numpy.linspace(xmin, xmax, 100)\nxi = numpy.linspace(xmin, xmax, N)\nfig = plt.figure(figsize=(16, 6))\naxes = fig.add_subplot(1, 2, 1)\naxes.plot(xa, func(xa), 'b', label=\"$f(x)$\")\naxes.plot(xa, func_prime(xa), 'k--', label=\"$f'(x)$\")\naxes.plot(xi, func(xi), 'ro')\naxes.plot(xi, D1f, 'ko',label='$D^+_1(f)$')\naxes.legend(loc='best')\naxes.set_title(\"$f'(x)$\")\naxes.set_xlabel(\"x\")\naxes.set_ylabel(\"$f'(x)$ and $\\hat{f}'(x)$\")\naxes.grid()\n\naxes = fig.add_subplot(1, 2, 2)\naxes.plot(xa, func(xa), 'b', label=\"$f(x)$\")\naxes.plot(xa, func_prime(xa), 'k--', label=\"$f'(x)$\")\naxes.plot(xi, func(xi), 'ro')\naxes.plot(xi, D2f, 'go',label='$D_1^2(f)$')\naxes.legend(loc='best')\naxes.set_title(\"$f'(x)$\")\naxes.set_xlabel(\"x\")\naxes.set_ylabel(\"$f'(x)$ and $\\hat{f}'(x)$\")\naxes.grid()\nplt.show()\n```\n\n#### Computing Order of Convergence\n\nSay we had the error $E(\\Delta x)$ and we wanted to make a statement about the rate of convergence (note we can replace $E$ here with the $R$ from above). Then we can do the following:\n$$\\begin{aligned}\n E(\\Delta x) &= C \\Delta x^n \\\\\n \\log E(\\Delta x) &= \\log C + n \\log \\Delta x\n\\end{aligned}$$\n\nThe slope of the line is $n$ when modeling the error like this! We can also match the first point by solving for $C$:\n\n$$\n C = e^{\\log E(\\Delta x) - n \\log \\Delta x}\n$$\n\n\n```python\n# Compute the error as a function of delta_x\nN_range = numpy.logspace(1, 4, 10, dtype=int)\ndelta_x = numpy.empty(N_range.shape)\nerror = numpy.empty((N_range.shape[0], 4))\nfor (i, N) in enumerate(N_range):\n x_hat = numpy.linspace(xmin, xmax, N)\n delta_x[i] = x_hat[1] - x_hat[0]\n\n # Compute forward difference\n D1f = D1_p(func, xmin, xmax, N)\n \n # Compute 2nd order difference\n D2f = D1_2(func, xmin, xmax, N)\n\n \n # Calculate the infinity norm or maximum error\n error[i, 0] = numpy.linalg.norm(numpy.abs(func_prime(x_hat) - D1f), ord=numpy.inf)\n error[i, 1] = numpy.linalg.norm(numpy.abs(func_prime(x_hat) - D2f), ord=numpy.inf)\n \nerror = numpy.array(error)\ndelta_x = numpy.array(delta_x)\n \norder_C = lambda delta_x, error, order: numpy.exp(numpy.log(error) - order * numpy.log(delta_x))\n \nfig = plt.figure(figsize=(8,6))\naxes = fig.add_subplot(1,1,1)\naxes.loglog(delta_x, error[:,0], 'ro', label='$D_1^+$')\naxes.loglog(delta_x, error[:,1], 'bo', label='$D_1^2$')\naxes.loglog(delta_x, order_C(delta_x[0], error[0, 0], 1.0) * delta_x**1.0, 'r--', label=\"1st Order\")\naxes.loglog(delta_x, order_C(delta_x[0], error[0, 1], 2.0) * delta_x**2.0, 'b--', label=\"2nd Order\")\naxes.legend(loc=4)\naxes.set_title(\"Convergence of Finite Differences\", fontsize=18)\naxes.set_xlabel(\"$\\Delta x$\", fontsize=16)\naxes.set_ylabel(\"$|f'(x) - \\hat{f}'(x)|$\", fontsize=16)\naxes.legend(loc='best', fontsize=14)\naxes.grid()\n\nplt.show()\n```\n\n# Another approach: The method of undetermined Coefficients\n\nAn alternative method for finding finite-difference formulas is by using Taylor series expansions about the point we want to approximate. The Taylor series about $x_n$ is\n\n$$f(x) = f(x_n) + (x - x_n) f'(x_n) + \\frac{(x - x_n)^2}{2!} f''(x_n) + \\frac{(x - x_n)^3}{3!} f'''(x_n) + \\mathcal{O}((x - x_n)^4)$$\n\nSay we want to derive the second order accurate, first derivative approximation that we just did, this requires the values $(x_{n+1}, f(x_{n+1})$ and $(x_{n-1}, f(x_{n-1})$. 
We can express these values via our Taylor series approximation above as\n\n\\begin{aligned}\n f(x_{n+1}) &= f(x_n) + (x_{n+1} - x_n) f'(x_n) + \\frac{(x_{n+1} - x_n)^2}{2!} f''(x_n) + \\frac{(x_{n+1} - x_n)^3}{3!} f'''(x_n) + \\mathcal{O}((x_{n+1} - x_n)^4) \\\\\n\\end{aligned}\n\nor\n\\begin{aligned}\n&= f(x_n) + \\Delta x f'(x_n) + \\frac{\\Delta x^2}{2!} f''(x_n) + \\frac{\\Delta x^3}{3!} f'''(x_n) + \\mathcal{O}(\\Delta x^4)\n\\end{aligned}\n\nand\n\n\\begin{align}\nf(x_{n-1}) &= f(x_n) + (x_{n-1} - x_n) f'(x_n) + \\frac{(x_{n-1} - x_n)^2}{2!} f''(x_n) + \\frac{(x_{n-1} - x_n)^3}{3!} f'''(x_n) + \\mathcal{O}((x_{n-1} - x_n)^4) \n\\end{align}\n\n\\begin{align} \n&= f(x_n) - \\Delta x f'(x_n) + \\frac{\\Delta x^2}{2!} f''(x_n) - \\frac{\\Delta x^3}{3!} f'''(x_n) + \\mathcal{O}(\\Delta x^4)\n\\end{align}\n\nOr all together (for regularly spaced points),\n\\begin{align} \nf(x_{n+1}) &= f(x_n) + \\Delta x f'(x_n) + \\frac{\\Delta x^2}{2!} f''(x_n) + \\frac{\\Delta x^3}{3!} f'''(x_n) + \\mathcal{O}(\\Delta x^4)\\\\\nf(x_n) &= f(x_n) \\\\\nf(x_{n-1})&= f(x_n) - \\Delta x f'(x_n) + \\frac{\\Delta x^2}{2!} f''(x_n) - \\frac{\\Delta x^3}{3!} f'''(x_n) + \\mathcal{O}(\\Delta x^4)\n\\end{align}\n\nNow to find out how to combine these into an expression for the derivative we assume our approximation looks like\n\n$$\n f'(x_n) + R(x_n) = A f(x_{n+1}) + B f(x_n) + C f(x_{n-1})\n$$\n\nwhere $R(x_n)$ is our error. \n\nPlugging in the Taylor series approximations we find\n\n$$\\begin{aligned}\n f'(x_n) + R(x_n) &= A \\left ( f(x_n) + \\Delta x f'(x_n) + \\frac{\\Delta x^2}{2!} f''(x_n) + \\frac{\\Delta x^3}{3!} f'''(x_n) + \\mathcal{O}(\\Delta x^4)\\right ) \\\\\n & + B ~~~~f(x_n) \\\\ \n & + C \\left ( f(x_n) - \\Delta x f'(x_n) + \\frac{\\Delta x^2}{2!} f''(x_n) - \\frac{\\Delta x^3}{3!} f'''(x_n) + \\mathcal{O}(\\Delta x^4) \\right )\n\\end{aligned}$$\n\nOr\n$$\nf'(x_n) + R(x_n)= (A + B + C) f(x_n) + (A\\Delta x +0B - C\\Delta x)f'(x_n) + (A\\frac{\\Delta x^2}{2!} + C\\frac{\\Delta x^2}{2!})f''(x_n) + O(\\Delta x^3)\n$$\n\nSince we want $R(x_n) = \\mathcal{O}(\\Delta x^2)$ we want all terms lower than this to cancel except for those multiplying $f'(x_n)$ as those should sum to 1 to give us our approximation. Collecting the terms with common evaluations of the derivatives on $f(x_n)$ we get a series of expressions for the coefficients $A$, $B$, and $C$ based on the fact we want an approximation to $f'(x_n)$. The $n=0$ terms collected are $A + B + C$ and are set to 0 as we want the $f(x_n)$ term to also cancel.\n\n$$\\begin{aligned}\n f(x_n):& &A + B + C &= 0 \\\\\n f'(x_n): & &A \\Delta x - C \\Delta x &= 1 \\\\\n f''(x_n): & &A \\frac{\\Delta x^2}{2} + C \\frac{\\Delta x^2}{2} &= 0\n\\end{aligned} $$\n\nOr as a linear algebra problem\n\n$$\\begin{bmatrix}\n1 & 1 & 1 \\\\\n\\Delta x & 0 &-\\Delta x \\\\\n\\frac{\\Delta x^2}{2} & 0 & \\frac{\\Delta x^2}{2} \\\\\n\\end{bmatrix}\n\\begin{bmatrix} A \\\\ B\\\\ C\\\\\\end{bmatrix} =\n\\begin{bmatrix} 0 \\\\ 1\\\\ 0\\\\\\end{bmatrix} \n$$\n\nThis last equation $\\Rightarrow A = -C$, using this in the second equation gives $A = \\frac{1}{2 \\Delta x}$ and $C = -\\frac{1}{2 \\Delta x}$. The first equation then leads to $B = 0$. 
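\n\nWe can sanity check this result numerically (a quick sketch, not part of the original derivation) by solving the same $3 \\times 3$ system with NumPy for a particular $\\Delta x$ and confirming that the weights come out to $A = 1/(2 \\Delta x)$, $B = 0$ and $C = -1/(2 \\Delta x)$:\n\n\n```python\nimport numpy\n\ndelta_x = 0.1\n# rows encode the conditions on the f(x_n), f'(x_n) and f''(x_n) terms derived above\nA_mat = numpy.array([[1.0, 1.0, 1.0],\n [delta_x, 0.0, -delta_x],\n [delta_x**2 / 2.0, 0.0, delta_x**2 / 2.0]])\nb = numpy.array([0.0, 1.0, 0.0]) # only the f'(x_n) term should survive\nprint(numpy.linalg.solve(A_mat, b)) # expect [1/(2*delta_x), 0, -1/(2*delta_x)] = [ 5. 0. -5.]\n```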
\n\nPutting this altogether then gives us our previous expression including an estimate for the error:\n\n$$\\begin{aligned}\n f'(x_n) + R(x_n) &= \\quad \\frac{1}{2 \\Delta x} \\left ( f(x_n) + \\Delta x f'(x_n) + \\frac{\\Delta x^2}{2!} f''(x_n) + \\frac{\\Delta x^3}{3!} f'''(x_n) + \\mathcal{O}(\\Delta x^4)\\right ) \\\\\n & \\quad + 0 \\cdot f(x_n) \\\\ \n & \\quad - \\frac{1}{2 \\Delta x} \\left ( f(x_n) - \\Delta x f'(x_n) + \\frac{\\Delta x^2}{2!} f''(x_n) - \\frac{\\Delta x^3}{3!} f'''(x_n) + \\mathcal{O}(\\Delta x^4) \\right ) \\\\\n &= f'(x_n) + \\frac{1}{2 \\Delta x} \\left ( \\frac{2 \\Delta x^3}{3!} f'''(x_n) + \\mathcal{O}(\\Delta x^4)\\right )\n\\end{aligned}$$\nso that we find\n$$\n R(x_n) = \\frac{\\Delta x^2}{3!} f'''(x_n) + \\mathcal{O}(\\Delta x^3) = \\mathcal{O}(\\Delta x^2)\n$$\n\n#### Another way...\n\nThere is one more way to derive the second order accurate, first order finite-difference formula. Consider the two first order forward and backward finite-differences averaged together:\n\n$$\\frac{D_1^+(f(x_n)) + D_1^-(f(x_n))}{2} = \\frac{f(x_{n+1}) - f(x_n) + f(x_n) - f(x_{n-1})}{2 \\Delta x} = \\frac{f(x_{n+1}) - f(x_{n-1})}{2 \\Delta x}$$\n\n### Example 4: Higher Order Derivatives\n\nUsing our Taylor series approach lets derive the second order accurate second derivative formula. Again we will use the same points and the Taylor series centered at $x = x_n$ so we end up with the same expression as before:\n\n$$\\begin{aligned}\n f''(x_n) + R(x_n) &= \\quad A \\left ( f(x_n) + \\Delta x f'(x_n) + \\frac{\\Delta x^2}{2!} f''(x_n) + \\frac{\\Delta x^3}{3!} f'''(x_n) + \\frac{\\Delta x^4}{4!} f^{(4)}(x_n) + \\mathcal{O}(\\Delta x^5)\\right ) \\\\\n &+ \\quad B \\cdot f(x_n) \\\\\n &+ \\quad C \\left ( f(x_n) - \\Delta x f'(x_n) + \\frac{\\Delta x^2}{2!} f''(x_n) - \\frac{\\Delta x^3}{3!} f'''(x_n) + \\frac{\\Delta x^4}{4!} f^{(4)}(x_n) + \\mathcal{O}(\\Delta x^5) \\right )\n\\end{aligned}$$\n\nexcept this time we want to leave $f''(x_n)$ on the right hand side. 
\n\nTry out the same trick as before and see if you can setup the equations that need to be solved.\n\nDoing the same trick as before we have the following expressions:\n\n$$\\begin{aligned}\n f(x_n): & & A + B + C &= 0\\\\\n f'(x_n): & & A \\Delta x - C \\Delta x &= 0\\\\\n f''(x_n): & & A \\frac{\\Delta x^2}{2} + C \\frac{\\Delta x^2}{2} &= 1\n\\end{aligned}$$\n\nOr again\n\n$$\\begin{bmatrix}\n1 & 1 & 1 \\\\\n\\Delta x & 0 &-\\Delta x \\\\\n\\frac{\\Delta x^2}{2} & 0 & \\frac{\\Delta x^2}{2} \\\\\n\\end{bmatrix}\n\\begin{bmatrix} A \\\\ B\\\\ C\\\\\\end{bmatrix} =\n\\begin{bmatrix} 0 \\\\ 0\\\\ 1\\\\\\end{bmatrix} \n$$\n\nNote, the Matrix remains, the same, only the right hand side has changed\n\nThe second equation implies $A = C$ which combined with the third implies\n\n$$A = C = \\frac{1}{\\Delta x^2}$$\n\nFinally the first equation gives\n\n$$B = -\\frac{2}{\\Delta x^2}$$\n\nleading to the final expression\n\n$$\\begin{aligned}\n f''(x_n) + R(x_n) &= \\quad \\frac{1}{\\Delta x^2} \\left ( f(x_n) + \\Delta x f'(x_n) + \\frac{\\Delta x^2}{2!} f''(x_n) + \\frac{\\Delta x^3}{3!} f'''(x_n) + \\frac{\\Delta x^4}{4!} f^{(4)}(x_n) + \\mathcal{O}(\\Delta x^5)\\right ) \\\\\n &+ \\quad -\\frac{2}{\\Delta x^2} \\cdot f(x_n) \\\\\n &+ \\quad \\frac{1}{\\Delta x^2} \\left ( f(x_n) - \\Delta x f'(x_n) + \\frac{\\Delta x^2}{2!} f''(x_n) - \\frac{\\Delta x^3}{3!} f'''(x_n) + \\frac{\\Delta x^4}{4!} f^{(4)}(x_n) + \\mathcal{O}(\\Delta x^5) \\right ) \\\\\n &= f''(x_n) + \\frac{1}{\\Delta x^2} \\left(\\frac{2 \\Delta x^4}{4!} f^{(4)}(x_n) + \\mathcal{O}(\\Delta x^5) \\right )\n\\end{aligned}\n$$\nso that\n\n$$\n R(x_n) = \\frac{\\Delta x^2}{12} f^{(4)}(x_n) + \\mathcal{O}(\\Delta x^3)\n$$\n\n\n```python\ndef D2(func, x_min, x_max, N):\n \"\"\" calculate consistent 2nd order second derivative of a function func(x) defined on the interval [x_min,xmax]\n and sampled at N evenly spaced points\"\"\"\n\n x = numpy.linspace(x_min, x_max, N)\n f = func(x)\n dx = x[1] - x[0]\n D2f = numpy.zeros(x.shape) \n D2f[1:-1] = (f[:-2] - 2*f[1:-1] + f[2:])/(dx**2)\n # patch up end points to be 1 sided 2nd derivatives\n D2f[0] = D2f[1]\n D2f[-1] = D2f[-2]\n\n \n return D2f\n```\n\n\n```python\nf = lambda x: numpy.sin(x)\nf_dubl_prime = lambda x: -numpy.sin(x)\n\n# Use uniform discretization\nx = numpy.linspace(-2 * numpy.pi, 2 * numpy.pi, 1000)\nN = 80\nx_hat = numpy.linspace(-2 * numpy.pi, 2 * numpy.pi, N)\ndelta_x = x_hat[1] - x_hat[0]\n\n# Compute derivative\nD2f = D2(f, x_hat[0], x_hat[-1], N)\n```\n\n\n```python\nfig = plt.figure(figsize=(8,6))\naxes = fig.add_subplot(1, 1, 1)\n\naxes.plot(x,f(x),'b',label='$f(x)$')\naxes.plot(x, f_dubl_prime(x), 'k--', label=\"$f'(x)$\")\naxes.plot(x_hat, D2f, 'ro', label='$D_2(f)$')\naxes.set_xlim((x[0], x[-1]))\naxes.set_ylim((-1.1, 1.1))\naxes.legend(loc='best',fontsize=14)\naxes.grid()\naxes.set_title('Discrete Second derivative',fontsize=18)\naxes.set_xlabel('$x$', fontsize=16)\n\nplt.show()\n```\n\n### The general case\n\nIn the general case we can use any $N+1$ points to calculated consistent finite difference coefficients for approximating any derivative of order $k \\leq N$. 
Relaxing the requirement of equal grid spacing (or the expectation that the location where the derivative is evaluated $\\bar{x}$, is one of the grid points) the Taylor series expansions become\n\n\n$$\\begin{aligned}\n f^{(k)}(\\bar{x}) + R(\\bar{x}) &= \\quad c_0 \\left ( f(\\bar{x}) + \\Delta x_0 f'(\\bar{x}) + \\frac{\\Delta x_0^2}{2!} f''(\\bar{x}) + \\frac{\\Delta x_0^3}{3!} f'''(\\bar{x}) + \\frac{\\Delta x_0^4}{4!} f^{(4)}(\\bar{x}) + \\mathcal{O}(\\Delta x_0^5)\\right ) \\\\\n &+ \\quad c_1 \\left ( f(\\bar{x}) + \\Delta x_1 f'(\\bar{x}) + \\frac{\\Delta x_1^2}{2!} f''(\\bar{x}) + \\frac{\\Delta x_1^3}{3!} f'''(\\bar{x}) + \\frac{\\Delta x_1^4}{4!} f^{(4)}(\\bar{x}) + \\mathcal{O}(\\Delta x_1^5)\\right )\\\\\n &+ \\quad c_2 \\left ( f(\\bar{x}) + \\Delta x_2 f'(\\bar{x}) + \\frac{\\Delta x_2^2}{2!} f''(\\bar{x}) + \\frac{\\Delta x_2^3}{3!} f'''(\\bar{x}) + \\frac{\\Delta x_2^4}{4!} f^{(4)}(\\bar{x}) + \\mathcal{O}(\\Delta x_2^5)\\right ) \\\\\n &+ \\quad \\vdots\\\\\n &+ \\quad c_N \\left ( f(\\bar{x}) + \\Delta x_N f'(\\bar{x}) + \\frac{\\Delta x_N^2}{2!} f''(\\bar{x}) + \\frac{\\Delta x_N^3}{3!} f'''(\\bar{x}) + \\frac{\\Delta x_N^4}{4!} f^{(4)}(\\bar{x}) + \\mathcal{O}(\\Delta x_N^5)\\right ) \\\\\n\\end{aligned}$$\nwhere $\\Delta\\mathbf{x} = \\bar{x} - \\mathbf{x}$ is the distance between the point $\\bar{x}$ and each grid point.\n\nEquating terms of equal order reduces the problem to another Vandermonde matrix problem\n$$\\begin{bmatrix}\n1 & 1 & 1 & \\cdots & 1 \\\\\n\\Delta x_0 & \\Delta x_1 & \\Delta x_2 & \\cdots & \\Delta x_N\\\\\n\\frac{\\Delta x_0^2}{2!} & \\frac{\\Delta x_1^2}{2!} & \\frac{\\Delta x_2^2}{2!} &\\cdots & \\frac{\\Delta x_N^2}{2!}\\\\\n & & \\vdots & \\cdots & \\\\\n\\frac{\\Delta x_0^N}{N!} & \\frac{\\Delta x_1^N}{N!} & \\frac{\\Delta x_2^N}{N!} & \\cdots & \\frac{\\Delta x_N^N}{N!}\\\\\n\\end{bmatrix}\n\\begin{bmatrix} c_0 \\\\ c_1\\\\ c_2 \\\\ \\vdots \\\\ c_N\\\\\\end{bmatrix} =\n\\mathbf{b}_k \n$$\n\nwhere $\\mathbf{b}_k$ is a vector of zeros with just a one in the $k$th position for the $k$th derivative.\n\nBy exactly accounting for the first $N+1$ terms of the Taylor series (with $N+1$ equations), we can get any order derivative $0 k. \n Usually the elements x(i) are monotonically increasing\n and x(1) <= xbar <= x(n), but neither condition is required.\n The x values need not be equally spaced but must be distinct. \n \n Modified rom http://www.amath.washington.edu/~rjl/fdmbook/ (2007)\n \"\"\"\n \n from scipy.special import factorial\n\n n = x.shape[0]\n assert k < n, \" The order of the derivative must be less than the stencil width\"\n\n # Generate the Vandermonde matrix from the Taylor series\n A = numpy.ones((n,n))\n xrow = (x - xbar) # displacements x-xbar \n for i in range(1,n):\n A[i,:] = (xrow**(i))/factorial(i);\n \n b = numpy.zeros(n) # b is right hand side,\n b[k] = 1 # so k'th derivative term remains\n\n c = numpy.linalg.solve(A,b) # solve n by n system for coefficients\n \n return c\n\n```\n\n\n```python\nN = 11\nx = numpy.linspace(-2*numpy.pi, 2.*numpy.pi, )\nk = 2\nscale = (x[1]-x[0])**k\n\nprint(fdcoeffV(k,x[0],x[:3])*scale)\nfor j in range(k,N-1):\n print(fdcoeffV(k, x[j], x[j-1:j+2])*scale)\nprint(fdcoeffV(k,x[-1],x[-3:])*scale)\n```\n\n [ 1. -2. 1.]\n [ 1. -2. 1.]\n [ 1. -2. 1.]\n [ 1. -2. 1.]\n [ 1. -2. 1.]\n [ 1. -2. 1.]\n [ 1. -2. 1.]\n [ 1. -2. 1.]\n [ 1. -2. 1.]\n [ 1. -2. 
1.]\n\n\n### Example: A variably spaced mesh\n\n\n```python\nN = 21\ny = numpy.linspace(-.95, .95,N)\nx = numpy.arctanh(y)\n```\n\n\n```python\nfig = plt.figure(figsize=(8,6))\naxes = fig.add_subplot(1, 1, 1)\naxes.plot(x,numpy.zeros(x.shape),'bo-')\naxes.plot(x,y,'ro-')\naxes.grid()\naxes.set_xlabel('$x$')\naxes.set_ylabel('$y$')\nplt.show()\n```\n\n\n```python\nk=1 \nfd = fdcoeffV(k,x[0],x[:3])\nprint('{}, sum={}'.format(fd,fd.sum()))\nfor j in range(1,N-1):\n fd = fdcoeffV(k, x[j], x[j-1:j+2])\n print('{}, sum={}'.format(fd,fd.sum()))\nfd = fdcoeffV(k,x[-1],x[-3:])\nprint('{}, sum={}'.format(fd,fd.sum()))\n\n```\n\n [-2.99107041 5.38832143 -2.39725102], sum=0.0\n [-0.59748257 -1.79976846 2.39725102], sum=0.0\n [-1.4786644 -1.54760369 3.02626809], sum=0.0\n [-2.27379013 -1.34334775 3.61713788], sum=0.0\n [-2.97889933 -1.14769751 4.12659684], sum=0.0\n [-3.59168893 -0.95484098 4.54652991], sum=0.0\n [-4.11109435 -0.76316008 4.87425443], sum=0.0\n [-4.53657954 -0.57205069 5.10863023], sum=0.0\n [-4.86785915 -0.38124048 5.24909963], sum=0.0\n [-5.10478308 -0.19058641 5.29536948], sum=0.0\n [-5.24728627 0. 5.24728627], sum=0.0\n [-5.29536948 0.19058641 5.10478308], sum=0.0\n [-5.24909963 0.38124048 4.86785915], sum=0.0\n [-5.10863023 0.57205069 4.53657954], sum=0.0\n [-4.87425443 0.76316008 4.11109435], sum=0.0\n [-4.54652991 0.95484098 3.59168893], sum=0.0\n [-4.12659684 1.14769751 2.97889933], sum=-4.440892098500626e-16\n [-3.61713788 1.34334775 2.27379013], sum=0.0\n [-3.02626809 1.54760369 1.4786644 ], sum=-2.220446049250313e-16\n [-2.39725102 1.79976846 0.59748257], sum=0.0\n [ 2.39725102 -5.38832143 2.99107041], sum=0.0\n\n\n### Application to Numerical PDE's\n\nGiven an efficent way to generate Finite Difference Coefficients these coefficients can be stored in a (usually sparse) matrix $D_k$ such that given any discrete vector $\\mathbf{f} = f(\\mathbf{x})$, We can calculate the approximate $k$th derivative as simply the matrix vector product\n\n$$\n \\mathbf{f}' = D_k\\mathbf{f}\n$$\n\nThis technique will become extremely useful when solving basic finite difference approximations to differential equations (as we will explore in future lectures and homeworks). 
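To check that these variably spaced stencils actually approximate a derivative, here is a short verification (an added sketch, assuming `fdcoeffV`, the stretched grid `x` and `N` from the cells above are still in scope): apply the 3-point first-derivative weights to $f(x) = \\sin(x)$ and compare against the exact derivative $\\cos(x)$.\n\n\n```python\nfvals = numpy.sin(x)\nfp_exact = numpy.cos(x)\nfp_approx = numpy.zeros(N)\n# one-sided stencils at the ends, centered 3-point stencils in the interior\nfp_approx[0] = fdcoeffV(1, x[0], x[:3]).dot(fvals[:3])\nfor j in range(1, N-1):\n    fp_approx[j] = fdcoeffV(1, x[j], x[j-1:j+2]).dot(fvals[j-1:j+2])\nfp_approx[-1] = fdcoeffV(1, x[-1], x[-3:]).dot(fvals[-3:])\n# the error is typically largest near the ends, where the stretched grid is coarsest\nprint(numpy.max(numpy.abs(fp_approx - fp_exact)))\n```\n\n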
\n\n### The Bigger idea\n\nMore generally, using finite differences we can transform a continuous differential operator on a function space\n\n$$\n v = \\frac{d}{dx} u(x)\n$$\nwhich maps a function to a function, to a discrete linear algebraic problem \n\n$$\n \\mathbf{v} = D\\mathbf{u}\n$$\nwhere $\\mathbf{v}, \\mathbf{u}$ are discrete approximations to the continous functions $v,u$ and $D$ is a discrete differential operator (Matrix) which maps a vector to a vector.\n", "meta": {"hexsha": "d1d74c200608383c9c0d9ea8a1aba618ed5fb6cc", "size": 364623, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "07_differentiation.ipynb", "max_stars_repo_name": "mspieg/intro-numerical-methods", "max_stars_repo_head_hexsha": "d267a075c95acfed6bbcbe91951a05539be61311", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2020-09-10T13:01:06.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-20T15:05:30.000Z", "max_issues_repo_path": "07_differentiation.ipynb", "max_issues_repo_name": "AinsleyChen/intro-numerical-methods", "max_issues_repo_head_hexsha": "2eda74cccbed5c0d4c57e24c3f4c96a1aa741f08", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "07_differentiation.ipynb", "max_forks_repo_name": "AinsleyChen/intro-numerical-methods", "max_forks_repo_head_hexsha": "2eda74cccbed5c0d4c57e24c3f4c96a1aa741f08", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 35, "max_forks_repo_forks_event_min_datetime": "2020-01-21T16:08:37.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-21T12:46:56.000Z", "avg_line_length": 214.6103590347, "max_line_length": 52488, "alphanum_fraction": 0.9008619862, "converted": true, "num_tokens": 11575, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8976952948443462, "lm_q2_score": 0.9263037348714851, "lm_q1q2_score": 0.8315385043908768}} {"text": "# Linear Regression\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nplt.style.use(['ggplot'])\n```\n\n### Create Data\n\n
    Generate some data with:\n\\begin{equation} \\theta_0= 4 \\end{equation} \n\\begin{equation} \\theta_1= 3 \\end{equation} \n\nAdd some Gaussian noise to the data\n\n\n```python\nX = 2 * np.random.rand(100,1)\ny = 4 +3 * X+np.random.randn(100,1)\n```\n\nLet's plot our data to check the relation between X and Y\n\n\n```python\nplt.plot(X,y,'b.')\nplt.xlabel(\"$x$\", fontsize=18)\nplt.ylabel(\"$y$\", rotation=0, fontsize=18)\n_ =plt.axis([0,2,0,15])\n```\n\n## Analytical way of Linear Regression\n\n\n\n\n```python\nX_b = np.c_[np.ones((100,1)),X]\ntheta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y)\nprint(theta_best)\n```\n\n [[3.78114801]\n [3.03518288]]\n\n\n
    This is close to our real thetas 4 and 3. It cannot be accurate due to the noise\n\n\n```python\nX_new = np.array([[0],[2]])\nX_new_b = np.c_[np.ones((2,1)),X_new]\ny_predict = X_new_b.dot(theta_best)\ny_predict\n```\n\n\n\n\n array([[3.78114801],\n [9.85151377]])\n\n\n\n
    Let's plot prediction line with calculated:theta\n\n\n```python\nplt.plot(X_new,y_predict,'r-')\nplt.plot(X,y,'b.')\nplt.xlabel(\"$x_1$\", fontsize=18)\nplt.ylabel(\"$y$\", rotation=0, fontsize=18)\nplt.axis([0,2,0,15])\n\n```\n\n## Gradient Descent\n\n### Cost Function & Gradients\n\nThe equation for calculating cost function and gradients are as shown below. Please note the cost function is for Linear regression. For other algorithms the cost function will be different and the gradients would have to be derived from the cost functions\n\n\n\nCost\n\\begin{equation}\nJ(\\theta) = 1/2m \\sum_{i=1}^{m} (h(\\theta)^{(i)} - y^{(i)})^2 \n\\end{equation}\n\nGradient\n\n\\begin{equation}\n\\frac{\\partial J(\\theta)}{\\partial \\theta_j} = 1/m\\sum_{i=1}^{m}(h(\\theta^{(i)} - y^{(i)}).X_j^{(i)}\n\\end{equation}\n\nGradients\n\\begin{equation}\n\\theta_0: = \\theta_0 -\\alpha . (1/m .\\sum_{i=1}^{m}(h(\\theta^{(i)} - y^{(i)}).X_0^{(i)})\n\\end{equation}\n\\begin{equation}\n\\theta_1: = \\theta_1 -\\alpha . (1/m .\\sum_{i=1}^{m}(h(\\theta^{(i)} - y^{(i)}).X_1^{(i)})\n\\end{equation}\n\\begin{equation}\n\\theta_2: = \\theta_2 -\\alpha . (1/m .\\sum_{i=1}^{m}(h(\\theta^{(i)} - y^{(i)}).X_2^{(i)})\n\\end{equation}\n\n\\begin{equation}\n\\theta_j: = \\theta_j -\\alpha . (1/m .\\sum_{i=1}^{m}(h(\\theta^{(i)} - y^{(i)}).X_0^{(i)})\n\\end{equation}\n\n\n```python\ndef cal_cost(theta,X,y):\n '''\n \n Calculates the cost for given X and Y. The following shows and example of a single dimensional X\n theta = Vector of thetas \n X = Row of X's np.zeros((2,j))\n y = Actual y's np.zeros((2,1))\n \n where:\n j is the no of features\n '''\n \n m = len(y)\n \n predictions = X.dot(theta)\n cost = (1/2*m) * np.sum(np.square(predictions-y))\n return cost\n```\n\n\n```python\ndef gradient_descent(X,y,theta,learning_rate=0.01,iterations=100):\n '''\n X = Matrix of X with added bias units\n y = Vector of Y\n theta=Vector of thetas np.random.randn(j,1)\n learning_rate \n iterations = no of iterations\n \n Returns the final theta vector and array of cost history over no of iterations\n '''\n m = len(y)\n cost_history = np.zeros(iterations)\n theta_history = np.zeros((iterations,2))\n for it in range(iterations):\n \n prediction = np.dot(X,theta)\n \n theta = theta -(1/m)*learning_rate*( X.T.dot((prediction - y)))\n theta_history[it,:] =theta.T\n cost_history[it] = cal_cost(theta,X,y)\n \n return theta, cost_history, theta_history\n```\n\n#### Let's start with 1000 iterations and a learning rate of 0.01. Start with theta from a Gaussian distribution\n\n\n```python\nlr =0.01\nn_iter = 1000\n\ntheta = np.random.randn(2,1)\n\nX_b = np.c_[np.ones((len(X),1)),X]\ntheta,cost_history,theta_history = gradient_descent(X_b,y,theta,lr,n_iter)\n\n\nprint('Theta0: {:0.3f},\\nTheta1: {:0.3f}'.format(theta[0][0],theta[1][0]))\nprint('Final cost/MSE: {:0.3f}'.format(cost_history[-1]))\n```\n\n Theta0: 3.599,\n Theta1: 3.184\n Final cost/MSE: 4600.829\n\n\n

    Let's plot the cost history over iterations\n\n\n```python\nfig,ax = plt.subplots(figsize=(12,8))\n\nax.set_ylabel('J(Theta)')\nax.set_xlabel('Iterations')\n_=ax.plot(range(n_iter),cost_history,'b.')\n```\n\n#### After around 150 iterations the cost is flat so the remaining iterations are not needed or will not result in any further optimization. Let us zoom in till iteration 200 and see the curve\n\n\n```python\nfig,ax = plt.subplots(figsize=(10,8))\n_=ax.plot(range(200),cost_history[:200],'b.')\n```\n\nIt is worth while to note that the cost drops faster initially and then the gain in cost reduction is not as much\n\nIt would be great to see the effect of different learning rates and iterations together\n\nLet us build a function which can show the effects together and also show how gradient decent actually is working\n\n\n```python\ndef plot_GD(n_iter,lr,ax,ax1=None):\n \"\"\"\n n_iter = no of iterations\n lr = Learning Rate\n ax = Axis to plot the Gradient Descent\n ax1 = Axis to plot cost_history vs Iterations plot\n\n \"\"\"\n _ = ax.plot(X,y,'b.')\n theta = np.random.randn(2,1)\n\n tr =0.1\n cost_history = np.zeros(n_iter)\n for i in range(n_iter):\n pred_prev = X_b.dot(theta)\n theta,h,_ = gradient_descent(X_b,y,theta,lr,1)\n pred = X_b.dot(theta)\n\n cost_history[i] = h[0]\n\n if ((i % 25 == 0) ):\n _ = ax.plot(X,pred,'r-',alpha=tr)\n if tr < 0.8:\n tr = tr+0.2\n if not ax1== None:\n _ = ax1.plot(range(n_iter),cost_history,'b.') \n```\n\n#### Plot the graphs for different iterations and learning rates combination\n\n\n```python\nfig = plt.figure(figsize=(15,12))\nfig.subplots_adjust(hspace=0.4, wspace=0.4)\n\nit_lr =[(2000,0.001),(500,0.01),(200,0.05),(100,0.1)]\ncount =0\nfor n_iter, lr in it_lr:\n count += 1\n \n ax = fig.add_subplot(4, 2, count)\n count += 1\n \n ax1 = fig.add_subplot(4,2,count)\n \n ax.set_title(\"lr:{}\".format(lr))\n ax1.set_title(\"Iterations:{}\".format(n_iter))\n plot_GD(n_iter,lr,ax,ax1)\n \n```\n\nSee how useful it is to visualize the effect of learning rates and iterations on gradient descent. 
The red lines show how the gradient descent starts and then slowly gets closer to the final value\n\nWe can always plot Indiviual graphs to zoom in\n\n\n```python\n_,ax = plt.subplots(figsize=(14,10))\nplot_GD(100,0.1,ax)\n```\n\n## Stochastic Gradient Descent\n\n\n```python\ndef stocashtic_gradient_descent(X,y,theta,learning_rate=0.01,iterations=10):\n '''\n X = Matrix of X with added bias units\n y = Vector of Y\n theta=Vector of thetas np.random.randn(j,1)\n learning_rate \n iterations = no of iterations\n \n Returns the final theta vector and array of cost history over no of iterations\n '''\n m = len(y)\n cost_history = np.zeros(iterations)\n \n \n for it in range(iterations):\n cost =0.0\n for i in range(m):\n rand_ind = np.random.randint(0,m)\n X_i = X[rand_ind,:].reshape(1,X.shape[1])\n y_i = y[rand_ind].reshape(1,1)\n prediction = np.dot(X_i,theta)\n\n theta = theta -(1/m)*learning_rate*( X_i.T.dot((prediction - y_i)))\n cost += cal_cost(theta,X_i,y_i)\n cost_history[it] = cost\n \n return theta, cost_history\n```\n\n\n```python\nlr =0.5\nn_iter = 50\n\ntheta = np.random.randn(2,1)\n\nX_b = np.c_[np.ones((len(X),1)),X]\ntheta,cost_history = stocashtic_gradient_descent(X_b,y,theta,lr,n_iter)\n\n\nprint('Theta0: {:0.3f},\\nTheta1: {:0.3f}'.format(theta[0][0],theta[1][0]))\nprint('Final cost/MSE: {:0.3f}'.format(cost_history[-1]))\n```\n\n Theta0: 3.807,\n Theta1: 2.948\n Final cost/MSE: 44.861\n\n\n\n```python\nfig,ax = plt.subplots(figsize=(10,8))\n\nax.set_ylabel('{J(Theta)}',rotation=0)\nax.set_xlabel('{Iterations}')\ntheta = np.random.randn(2,1)\n\n_=ax.plot(range(n_iter),cost_history,'b.')\n```\n\n## Mini Batch Gradient Descent\n\n\n```python\ndef minibatch_gradient_descent(X,y,theta,learning_rate=0.01,iterations=10,batch_size =20):\n '''\n X = Matrix of X without added bias units\n y = Vector of Y\n theta=Vector of thetas np.random.randn(j,1)\n learning_rate \n iterations = no of iterations\n \n Returns the final theta vector and array of cost history over no of iterations\n '''\n m = len(y)\n cost_history = np.zeros(iterations)\n n_batches = int(m/batch_size)\n \n for it in range(iterations):\n cost =0.0\n indices = np.random.permutation(m)\n X = X[indices]\n y = y[indices]\n for i in range(0,m,batch_size):\n X_i = X[i:i+batch_size]\n y_i = y[i:i+batch_size]\n \n X_i = np.c_[np.ones(len(X_i)),X_i]\n \n prediction = np.dot(X_i,theta)\n\n theta = theta -(1/m)*learning_rate*( X_i.T.dot((prediction - y_i)))\n cost += cal_cost(theta,X_i,y_i)\n cost_history[it] = cost\n \n return theta, cost_history\n```\n\n\n```python\nlr =0.1\nn_iter = 200\n\ntheta = np.random.randn(2,1)\n\n\ntheta,cost_history = minibatch_gradient_descent(X,y,theta,lr,n_iter)\n\n\nprint('Theta0: {:0.3f},\\nTheta1: {:0.3f}'.format(theta[0][0],theta[1][0]))\nprint('Final cost/MSE: {:0.3f}'.format(cost_history[-1]))\n```\n\n Theta0: 3.728,\n Theta1: 3.080\n Final cost/MSE: 911.121\n\n\n\n```python\nfig,ax = plt.subplots(figsize=(10,8))\n\nax.set_ylabel('{J(Theta)}',rotation=0)\nax.set_xlabel('{Iterations}')\ntheta = np.random.randn(2,1)\n\n_=ax.plot(range(n_iter),cost_history,'b.')\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "13e91a987892a3895c986dba11b9743360617810", "size": 243079, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Linear Regression.ipynb", "max_stars_repo_name": "archd3sai/Bayesian-Methods", "max_stars_repo_head_hexsha": "de270df64a3bdee1d53046e4d0596bc18327f407", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, 
"max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Linear Regression.ipynb", "max_issues_repo_name": "archd3sai/Bayesian-Methods", "max_issues_repo_head_hexsha": "de270df64a3bdee1d53046e4d0596bc18327f407", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Linear Regression.ipynb", "max_forks_repo_name": "archd3sai/Bayesian-Methods", "max_forks_repo_head_hexsha": "de270df64a3bdee1d53046e4d0596bc18327f407", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 251.6345755694, "max_line_length": 103912, "alphanum_fraction": 0.9214164942, "converted": true, "num_tokens": 2917, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9263037241905732, "lm_q2_score": 0.897695283896349, "lm_q1q2_score": 0.8315384846615019}} {"text": "### Example 2: Nonlinear convection in 2D\n\nFollowing the initial convection tutorial with a single state variable $u$, we will now look at non-linear convection (step 6 in the original). This brings one new crucial challenge: computing a pair of coupled equations and thus updating two time-dependent variables $u$ and $v$.\n\nThe full set of coupled equations is now\n\n\\begin{aligned}\n\\frac{\\partial u}{\\partial t} + u \\frac{\\partial u}{\\partial x} + v \\frac{\\partial u}{\\partial y} = 0 \\\\\n\\\\\n\\frac{\\partial v}{\\partial t} + u \\frac{\\partial v}{\\partial x} + v \\frac{\\partial v}{\\partial y} = 0\\\\\n\\end{aligned}\n\nand rearranging the discretized version gives us an expression for the update of both variables\n\n\\begin{aligned}\nu_{i,j}^{n+1} &= u_{i,j}^n - u_{i,j}^n \\frac{\\Delta t}{\\Delta x} (u_{i,j}^n-u_{i-1,j}^n) - v_{i,j}^n \\frac{\\Delta t}{\\Delta y} (u_{i,j}^n-u_{i,j-1}^n) \\\\\n\\\\\nv_{i,j}^{n+1} &= v_{i,j}^n - u_{i,j}^n \\frac{\\Delta t}{\\Delta x} (v_{i,j}^n-v_{i-1,j}^n) - v_{i,j}^n \\frac{\\Delta t}{\\Delta y} (v_{i,j}^n-v_{i,j-1}^n)\n\\end{aligned}\n\nSo, for starters we will re-create the original example run in pure NumPy array notation, before demonstrating \nthe Devito version. Let's start again with some utilities and parameters:\n\n\n```python\nfrom examples.cfd import plot_field, init_hat\nimport numpy as np\nimport sympy\n%matplotlib inline\n\n# Some variable declarations\nnx = 101\nny = 101\nnt = 80\nc = 1.\ndx = 2. / (nx - 1)\ndy = 2. / (ny - 1)\nsigma = .2\ndt = sigma * dx\n```\n\nLet's re-create the initial setup with a 2D \"hat function\", but this time for two state variables.\n\n\n```python\n#NBVAL_IGNORE_OUTPUT\n\n# Allocate fields and assign initial conditions\nu = np.empty((nx, ny))\nv = np.empty((nx, ny))\n\ninit_hat(field=u, dx=dx, dy=dy, value=2.)\ninit_hat(field=v, dx=dx, dy=dy, value=2.)\n\nplot_field(u)\n```\n\nNow we can create the two stencil expression for our two coupled equations according to the discretized equation above. 
We again use some simple Dirichlet boundary conditions to keep the values on all sides constant.\n\n\n```python\n#NBVAL_IGNORE_OUTPUT\nfor n in range(nt + 1): ##loop across number of time steps\n un = u.copy()\n vn = v.copy()\n u[1:, 1:] = (un[1:, 1:] - \n (un[1:, 1:] * c * dt / dy * (un[1:, 1:] - un[1:, :-1])) -\n vn[1:, 1:] * c * dt / dx * (un[1:, 1:] - un[:-1, 1:]))\n v[1:, 1:] = (vn[1:, 1:] -\n (un[1:, 1:] * c * dt / dy * (vn[1:, 1:] - vn[1:, :-1])) -\n vn[1:, 1:] * c * dt / dx * (vn[1:, 1:] - vn[:-1, 1:]))\n \n u[0, :] = 1\n u[-1, :] = 1\n u[:, 0] = 1\n u[:, -1] = 1\n \n v[0, :] = 1\n v[-1, :] = 1\n v[:, 0] = 1\n v[:, -1] = 1\n \nplot_field(u)\n```\n\nExcellent, we again get a wave that resembles the one from the oiginal examples.\n\nNow we can set up our coupled problem in Devito. Let's start by creating two initial state variables $u$ and $v$, as before, and initialising them with our \"hat function.\n\n\n```python\n#NBVAL_IGNORE_OUTPUT\nfrom devito import Grid, TimeFunction\n\n# First we need two time-dependent data fields, both initialized with the hat function\ngrid = Grid(shape=(nx, ny), extent=(2., 2.))\nu = TimeFunction(name='u', grid=grid)\ninit_hat(field=u.data[0], dx=dx, dy=dy, value=2.)\n\nv = TimeFunction(name='v', grid=grid)\ninit_hat(field=v.data[0], dx=dx, dy=dy, value=2.)\n\nplot_field(u.data[0])\n```\n\nUsing the two `TimeFunction` objects we can again derive our discretized equation, rearrange for the forward stencil point in time and define our variable update expression - only we have to do everything twice now! We again use forward differences for time via `u.dt` and backward differences in space via `u.dxl` and `u.dyl` to match the original tutorial.\n\n\n```python\nfrom devito import Eq, solve\n\neq_u = Eq(u.dt + u*u.dxl + v*u.dyl)\neq_v = Eq(v.dt + u*v.dxl + v*v.dyl)\n\n# We can use the same SymPy trick to generate two\n# stencil expressions, one for each field update.\nstencil_u = solve(eq_u, u.forward)\nstencil_v = solve(eq_v, v.forward)\nupdate_u = Eq(u.forward, stencil_u, subdomain=grid.interior)\nupdate_v = Eq(v.forward, stencil_v, subdomain=grid.interior)\n\nprint(\"U update:\\n%s\\n\" % update_u)\nprint(\"V update:\\n%s\\n\" % update_v)\n```\n\n U update:\n Eq(u(t + dt, x, y), dt*(-(u(t, x, y)/h_x - u(t, x - h_x, y)/h_x)*u(t, x, y) - (u(t, x, y)/h_y - u(t, x, y - h_y)/h_y)*v(t, x, y) + u(t, x, y)/dt))\n \n V update:\n Eq(v(t + dt, x, y), dt*(-(v(t, x, y)/h_x - v(t, x - h_x, y)/h_x)*u(t, x, y) - (v(t, x, y)/h_y - v(t, x, y - h_y)/h_y)*v(t, x, y) + v(t, x, y)/dt))\n \n\n\nWe then set Dirichlet boundary conditions at all sides of the domain to $1$.\n\n\n```python\nx, y = grid.dimensions\nt = grid.stepping_dim\nbc_u = [Eq(u[t+1, 0, y], 1.)] # left\nbc_u += [Eq(u[t+1, nx-1, y], 1.)] # right\nbc_u += [Eq(u[t+1, x, ny-1], 1.)] # top\nbc_u += [Eq(u[t+1, x, 0], 1.)] # bottom\nbc_v = [Eq(v[t+1, 0, y], 1.)] # left\nbc_v += [Eq(v[t+1, nx-1, y], 1.)] # right\nbc_v += [Eq(v[t+1, x, ny-1], 1.)] # top\nbc_v += [Eq(v[t+1, x, 0], 1.)] # bottom\n```\n\nAnd finally we can put it all together to build an operator and solve our coupled problem.\n\n\n```python\n#NBVAL_IGNORE_OUTPUT\nfrom devito import Operator\n\n# Reset our data field and ICs\ninit_hat(field=u.data[0], dx=dx, dy=dy, value=2.)\ninit_hat(field=v.data[0], dx=dx, dy=dy, value=2.)\n\nop = Operator([update_u, update_v] + bc_u + bc_v)\nop(time=nt, dt=dt)\n\nplot_field(u.data[0])\n```\n\nExcellent, we have now a scalar implementation of a convection problem, but this can be written as a single vectorial 
equation:\n\n$\\frac{d U}{dt} + \\nabla(U)U = 0$\n\nLet's now use devito vectorial utilities and implement the vectorial equation\n\n\n```python\nfrom devito import VectorTimeFunction, grad\n\nU = VectorTimeFunction(name='U', grid=grid)\ninit_hat(field=U[0].data[0], dx=dx, dy=dy, value=2.)\ninit_hat(field=U[1].data[0], dx=dx, dy=dy, value=2.)\n\nplot_field(U[1].data[0])\n\neq_u = Eq(U.dt + grad(U)*U)\n```\n\nWe now have a vectorial equation. Unlike in the previous case, we do not need to play with left/right derivatives\nas the automated staggering of the vectorial function takes care of this.\n\n\n```python\neq_u\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}\\operatorname{U_{x}}{\\left(t,x + \\frac{h_{x}}{2},y \\right)} \\frac{\\partial}{\\partial x} \\operatorname{U_{x}}{\\left(t,x + \\frac{h_{x}}{2},y \\right)} + \\operatorname{U_{y}}{\\left(t,x,y + \\frac{h_{y}}{2} \\right)} \\frac{\\partial}{\\partial y} \\operatorname{U_{x}}{\\left(t,x + \\frac{h_{x}}{2},y \\right)} + \\frac{\\partial}{\\partial t} \\operatorname{U_{x}}{\\left(t,x + \\frac{h_{x}}{2},y \\right)}\\\\\\operatorname{U_{x}}{\\left(t,x + \\frac{h_{x}}{2},y \\right)} \\frac{\\partial}{\\partial x} \\operatorname{U_{y}}{\\left(t,x,y + \\frac{h_{y}}{2} \\right)} + \\operatorname{U_{y}}{\\left(t,x,y + \\frac{h_{y}}{2} \\right)} \\frac{\\partial}{\\partial y} \\operatorname{U_{y}}{\\left(t,x,y + \\frac{h_{y}}{2} \\right)} + \\frac{\\partial}{\\partial t} \\operatorname{U_{y}}{\\left(t,x,y + \\frac{h_{y}}{2} \\right)}\\end{matrix}\\right] = 0$\n\n\n\nThen we set the nboundary conditions\n\n\n```python\nx, y = grid.dimensions\nt = grid.stepping_dim\nbc_u = [Eq(U[0][t+1, 0, y], 1.)] # left\nbc_u += [Eq(U[0][t+1, nx-1, y], 1.)] # right\nbc_u += [Eq(U[0][t+1, x, ny-1], 1.)] # top\nbc_u += [Eq(U[0][t+1, x, 0], 1.)] # bottom\nbc_v = [Eq(U[1][t+1, 0, y], 1.)] # left\nbc_v += [Eq(U[1][t+1, nx-1, y], 1.)] # right\nbc_v += [Eq(U[1][t+1, x, ny-1], 1.)] # top\nbc_v += [Eq(U[1][t+1, x, 0], 1.)] # bottom\n```\n\n\n```python\n# We can use the same SymPy trick to generate two\n# stencil expressions, one for each field update.\nstencil_U = solve(eq_u, U.forward)\nupdate_U = Eq(U.forward, stencil_U, subdomain=grid.interior)\n```\n\nAnd we have the updated (stencil) as a vectorial equation once again\n\n\n```python\nupdate_U\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}\\operatorname{U_{x}}{\\left(t + dt,x + \\frac{h_{x}}{2},y \\right)}\\\\\\operatorname{U_{y}}{\\left(t + dt,x,y + \\frac{h_{y}}{2} \\right)}\\end{matrix}\\right] = \\left[\\begin{matrix}dt \\left(- \\left(- \\frac{\\operatorname{U_{x}}{\\left(t,x - \\frac{h_{x}}{2},y \\right)}}{h_{x}} + \\frac{\\operatorname{U_{x}}{\\left(t,x + \\frac{h_{x}}{2},y \\right)}}{h_{x}}\\right) \\operatorname{U_{x}}{\\left(t,x + \\frac{h_{x}}{2},y \\right)} - \\left(\\frac{\\operatorname{U_{x}}{\\left(t,x + \\frac{h_{x}}{2},y \\right)}}{h_{y}} - \\frac{\\operatorname{U_{x}}{\\left(t,x + \\frac{h_{x}}{2},y - h_{y} \\right)}}{h_{y}}\\right) \\operatorname{U_{y}}{\\left(t,x,y + \\frac{h_{y}}{2} \\right)} + \\frac{\\operatorname{U_{x}}{\\left(t,x + \\frac{h_{x}}{2},y \\right)}}{dt}\\right)\\\\dt \\left(- \\left(\\frac{\\operatorname{U_{y}}{\\left(t,x,y + \\frac{h_{y}}{2} \\right)}}{h_{x}} - \\frac{\\operatorname{U_{y}}{\\left(t,x - h_{x},y + \\frac{h_{y}}{2} \\right)}}{h_{x}}\\right) \\operatorname{U_{x}}{\\left(t,x + \\frac{h_{x}}{2},y \\right)} - \\left(- \\frac{\\operatorname{U_{y}}{\\left(t,x,y - \\frac{h_{y}}{2} \\right)}}{h_{y}} + \\frac{\\operatorname{U_{y}}{\\left(t,x,y + 
\\frac{h_{y}}{2} \\right)}}{h_{y}}\\right) \\operatorname{U_{y}}{\\left(t,x,y + \\frac{h_{y}}{2} \\right)} + \\frac{\\operatorname{U_{y}}{\\left(t,x,y + \\frac{h_{y}}{2} \\right)}}{dt}\\right)\\end{matrix}\\right]$\n\n\n\nWe finally run the operator\n\n\n```python\n#NBVAL_IGNORE_OUTPUT\nop = Operator([update_U] + bc_u + bc_v)\nop(time=nt, dt=dt)\n\n# The result is indeed the expected one.\nplot_field(U[0].data[0])\n```\n\n\n```python\nfrom devito import norm\nassert np.isclose(norm(u), norm(U[0]), rtol=1e-5, atol=0)\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "eaf23c80315dea635930560abebf766ce12eaf66", "size": 612130, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "examples/cfd/02_convection_nonlinear.ipynb", "max_stars_repo_name": "jakubbober/devito", "max_stars_repo_head_hexsha": "5d364018e9002a87ee7d4896ca19bb65281d4669", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "examples/cfd/02_convection_nonlinear.ipynb", "max_issues_repo_name": "jakubbober/devito", "max_issues_repo_head_hexsha": "5d364018e9002a87ee7d4896ca19bb65281d4669", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 31, "max_issues_repo_issues_event_min_datetime": "2020-05-19T15:08:19.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-21T13:01:38.000Z", "max_forks_repo_path": "examples/cfd/02_convection_nonlinear.ipynb", "max_forks_repo_name": "mloubout/devito", "max_forks_repo_head_hexsha": "d02ab398e0e7b2619b54088de848d2e7a93ffc57", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 1127.3112338858, "max_line_length": 108832, "alphanum_fraction": 0.9567346805, "converted": true, "num_tokens": 3439, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9219218284193597, "lm_q2_score": 0.9019206778476933, "lm_q1q2_score": 0.8315003604105736}} {"text": "# M\u00e9todo de punto fijo\nJ.J\n---\nDado que se busca resolver $f(x) = 0$, la funci\u00f3n $f$ puede ser reescrita como $f(x) = g(x) - x = 0$. 
El m\u00e9todo de punto fijo est\u00e1 dado por las iteraciones:\n\n\\begin{equation}\nx_{k+1} = g(x_{k}).\n\\end{equation}\n\nEl m\u00e9todo converge a una ra\u00edz si los valores de $x_{k}$ en cada iteraci\u00f3n cumplen $|g'(x)| \\leq 1$.\n\nEjemplo: $f(x) = 2x^{3} - 9x^{2} + 7x + 6 = 0$\n\\begin{equation}\nx = \\frac{1}{7} \\{ -2x^{3} + 9x^{2} - 6 \\}\n\\end{equation}\n\n\n```python\nimport numpy as np\nfrom sympy import *\nfrom sympy.utilities.lambdify import lambdify\nimport matplotlib.pyplot as plt\ninit_printing(use_unicode=True)\n```\n\n\n```python\n#calcular la derivada de f\nx = symbols('x')\nfuncion = 2*x**3 - 9*x**2 + 7*x + 6\ngfuncion = (-2*x**3 + 9*x**2 - 6)/7 #escribir la funci\u00f3n aqui\ndgfuncion = diff(gfuncion, x)\nprint(str(dgfuncion))\n```\n\n -6*x**2/7 + 18*x/7\n\n\n\n```python\nf = lambdify(x, funcion)\ng = lambdify(x, gfuncion)\ndg = lambdify(x, dgfuncion)\n```\n\n\n```python\nX = np.linspace(-1, 4, 100)\nplt.plot(X,dg(X), label = 'g\u00b4(x)')\nplt.ylim(-1,1)\nplt.legend()\nplt.show()\n```\n\nLa desigualdad se cumple aproximadamente entre en los intervalos $[-0.44, 0.46]$ y $[2.5, 3.34]$, aun que en realidad, en el primer intervalo no converge.\n\n\n```python\ne = 0.0001 #error\nmaxit = 100 #iteraciones m\u00e1ximas\n```\n\n\n```python\ndef PuntoFijo(x0, func = g, error = e, iterations = maxit):\n it = 0\n while (abs(f(x0)) > e) and (it < maxit):\n it += 1\n xk = g(x0)\n x0 = xk\n return x0\n```\n\n\n```python\nsol = PuntoFijo(2.6)\nprint(sol)\n```\n\n 2.9999926857948673\n\n\n\n```python\nplt.plot(X, f(X), label='f(x)')\nplt.plot(sol,f(sol),'ro')\nplt.legend()\nplt.show()\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "a2b8fb80b8dd915054442a98f81be0155fd6a9c8", "size": 31673, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Ec. No lineales/Punto_Fijo.ipynb", "max_stars_repo_name": "JosueJuarez/M-todos-Num-ricos", "max_stars_repo_head_hexsha": "8e328ef0f70519be57163b556db1fd27c3b04560", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Ec. No lineales/Punto_Fijo.ipynb", "max_issues_repo_name": "JosueJuarez/M-todos-Num-ricos", "max_issues_repo_head_hexsha": "8e328ef0f70519be57163b556db1fd27c3b04560", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Ec. No lineales/Punto_Fijo.ipynb", "max_forks_repo_name": "JosueJuarez/M-todos-Num-ricos", "max_forks_repo_head_hexsha": "8e328ef0f70519be57163b556db1fd27c3b04560", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 143.9681818182, "max_line_length": 13560, "alphanum_fraction": 0.8964101916, "converted": true, "num_tokens": 645, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9111797148356994, "lm_q2_score": 0.9124361640485893, "lm_q1q2_score": 0.831393323763573}} {"text": "
    \n\n\n# Question\n\n**Question 1 :** Write a program which will find all such numbers which are divisible by 7 but are not a multiple of 5, between 2000 and 3200 (both included). The numbers obtained should be printed in a comma-separated sequence on a single line.\n\n**BreakDown Points:**\n1) Get the Numbers between 2000 and 3200
    \n2) Find all Numbers divisible by 7
    \n3) From the result exclude Numbers divisible by 5
    \n4) Print in comma-separated sequence
    \n\n## Answer \n\n\n```python\nfor i in range(2000,3200+1):\n if i%7 == 0 and i%5 != 0:\n print(i, end=',')\n```\n\n 2002,2009,2016,2023,2037,2044,2051,2058,2072,2079,2086,2093,2107,2114,2121,2128,2142,2149,2156,2163,2177,2184,2191,2198,2212,2219,2226,2233,2247,2254,2261,2268,2282,2289,2296,2303,2317,2324,2331,2338,2352,2359,2366,2373,2387,2394,2401,2408,2422,2429,2436,2443,2457,2464,2471,2478,2492,2499,2506,2513,2527,2534,2541,2548,2562,2569,2576,2583,2597,2604,2611,2618,2632,2639,2646,2653,2667,2674,2681,2688,2702,2709,2716,2723,2737,2744,2751,2758,2772,2779,2786,2793,2807,2814,2821,2828,2842,2849,2856,2863,2877,2884,2891,2898,2912,2919,2926,2933,2947,2954,2961,2968,2982,2989,2996,3003,3017,3024,3031,3038,3052,3059,3066,3073,3087,3094,3101,3108,3122,3129,3136,3143,3157,3164,3171,3178,3192,3199,\n\n# Question\n\n**Question 2 :** Write a Python program to accept the user's first and last name and then getting them printed in the the reverse order with a space between first name and last name.\n\n**BreakDown Points:**\n1) Read First Name
    \n2) Read Last Name
    \n3) Print in reverse order
    \n\n## Answer\n\n\n```python\nFullName = input('Please Enter First & Last Name : ').split()\n\nfor i in FullName[::-1]:\n print(i, end=' ')\n```\n\n Please Enter First & Last Name : V P\n P V \n\n# Question\n\n**Question 3 :** Write a Python program to find the volume of a sphere with diameter 12 cm\n\n\\begin{align}\nFormula : V = \\frac{4}{3} * {\\pi} * (\\frac{d}{2})^3\\\\\n\\end{align}\n\n**BreakDown Points:**\n1) Replce r with d/2
    \n2) Value of pi is available in math module
    \n\n## Answer\n\n\n```python\nimport math\nvolume = (4/3) * math.pi * (12/2)**3\nprint(volume)\n```\n\n 904.7786842338603\n\n\n\n```python\nvolume = (4/3) * 3.141 * (12/2)**3\nprint(volume)\n```\n\n 904.608\n\n", "meta": {"hexsha": "f47ace4704c614e5a830e3d8bda0bc1f39b99039", "size": 6444, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Assignments/Basic-Python-Assignment-1.ipynb", "max_stars_repo_name": "vigneshpalanivelr/MeachineLearningAI", "max_stars_repo_head_hexsha": "16741f08b8846fd27c319aff24d7e043acb843f3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Assignments/Basic-Python-Assignment-1.ipynb", "max_issues_repo_name": "vigneshpalanivelr/MeachineLearningAI", "max_issues_repo_head_hexsha": "16741f08b8846fd27c319aff24d7e043acb843f3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Assignments/Basic-Python-Assignment-1.ipynb", "max_forks_repo_name": "vigneshpalanivelr/MeachineLearningAI", "max_forks_repo_head_hexsha": "16741f08b8846fd27c319aff24d7e043acb843f3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 28.1397379913, "max_line_length": 983, "alphanum_fraction": 0.5595903166, "converted": true, "num_tokens": 1188, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9124361580958427, "lm_q2_score": 0.9111797075998823, "lm_q1q2_score": 0.8313933117373299}} {"text": "Tra\u00e7ar um esbo\u00e7o do gr\u00e1fico e obter uma equa\u00e7\u00e3o da par\u00e1bola que satisfa\u00e7a as condi\u00e7\u00f5es dadas.\n\n19. V\u00e9rtice: $V(2,-1)$; Foco: $F(5,-1)$

    \n\nComo o v\u00e9rtice da par\u00e1bola se encontra fora da origem e est\u00e1 paralela ao eixo $x$ a sua equa\u00e7\u00e3o \u00e9 dada por $(y-k)^2 = 2p(x-h)$

    \n\nSubstituindo os pontos do v\u00e9rtice por $x = 2$ e $y = -1$

    \n$(y-(-1))^2 = 2p(x-2))$

    \n$(y+1)^2 = 2p(x-2)$

    \nAchando o valor de $p$

    \n$\\frac{p}{2} = \\sqrt{(5-2)^2 + (-1-(-1))^2}$

    \n$\\frac{p}{2} = \\sqrt{3^2 + 0}$

    \n$\\frac{p}{2} = \\pm \\sqrt{9}$

    \n$\\frac{p}{2} = 3$

    \n$p = 6$

    \nSubstituindo $p$ na f\u00f3rmula

    \n$(y+1)^2 = 2 \\cdot 6 (x-2)$

    \n$(y+1)^2 = 12(x-2)$

    \n$y^2 + 2y + 1 = 12x - 24$

    \n$y^2 + 2y -12x +1 + 24 = 0$

    \n$y^2 + 2y -12x + 25 = 0 $

    \nGr\u00e1fico da par\u00e1bola

    \n\n\n```python\nfrom sympy import *\nfrom sympy.plotting import plot_implicit\nx, y = symbols(\"x y\")\nplot_implicit(Eq((y+1)**2, 6*(x-2)), (x,-20,20), (y,-10,10),\ntitle=u'Gr\u00e1fico da par\u00e1bola', xlabel='x', ylabel='y');\n```\n", "meta": {"hexsha": "badf4d055c6f20373c28898f7ff2f766265e91b8", "size": 16007, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Problemas Propostos. Pag. 172 - 175/19.ipynb", "max_stars_repo_name": "mateuschaves/GEOMETRIA-ANALITICA", "max_stars_repo_head_hexsha": "bc47ece7ebab154e2894226c6d939b7e7f332878", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-02-03T16:40:45.000Z", "max_stars_repo_stars_event_max_datetime": "2020-02-03T16:40:45.000Z", "max_issues_repo_path": "Problemas Propostos. Pag. 172 - 175/19.ipynb", "max_issues_repo_name": "mateuschaves/GEOMETRIA-ANALITICA", "max_issues_repo_head_hexsha": "bc47ece7ebab154e2894226c6d939b7e7f332878", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Problemas Propostos. Pag. 172 - 175/19.ipynb", "max_forks_repo_name": "mateuschaves/GEOMETRIA-ANALITICA", "max_forks_repo_head_hexsha": "bc47ece7ebab154e2894226c6d939b7e7f332878", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 172.1182795699, "max_line_length": 13672, "alphanum_fraction": 0.8896732679, "converted": true, "num_tokens": 542, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9511422227627597, "lm_q2_score": 0.8740772433654401, "lm_q1q2_score": 0.8313717721209504}} {"text": "#Taylor Series Expansion with Python from Data Science Fabric \n\nrecovered from **[Data Science Fabric](https://dsfabric.org/taylor-series-expansion-with-python)**\n\nNote: for ease and organization, titles were placed on the notebook for quick reading\n\n##Libraries\n\n\n```\nfrom sympy import series, Symbol\nfrom sympy.functions import sin, cos, exp\nfrom sympy.plotting import plot\nimport matplotlib.pyplot as plt\n```\n\n\n```\nfrom sympy.functions import ln\n```\n\n\n```\n# Define symbol\nx = Symbol('x')\n```\n\n\n```\n# Function for Taylor Series Expansion\n\ndef taylor(function, x0, n):\n \"\"\"\n Parameter \"function\" is our function which we want to approximate\n \"x0\" is the point where to approximate\n \"n\" is the order of approximation\n \"\"\"\n return function.series(x,x0,n).removeO()\n```\n\n##First's Cases of Use\n\n\n```\nprint('sin(x) \u2245', taylor(sin(x), 0, 4))\n\nprint('cos(x) \u2245', taylor(cos(x), 0, 4))\n\nprint('e(x) \u2245', taylor(exp(x), 0, 4))\n```\n\n sin(x) \u2245 -x**3/6 + x\n cos(x) \u2245 1 - x**2/2\n e(x) \u2245 x**3/6 + x**2/2 + x + 1\n\n\n\n```\nprint(\"Ejercicio\")\nprint('ln(x+1) \u2245', taylor(ln(x+1), 0, 4))\n```\n\n Ejercicio\n ln(x+1) \u2245 x**3/3 - x**2/2 + x\n\n\n\n```\nprint('sin(1) =', taylor(sin(x), 0, 4).subs(x,1))\n\nprint('cos(1) =', taylor(cos(x), 0, 4).subs(x,1))\n\nprint('e(1) =', taylor(exp(x), 0, 4).subs(x,1))\n```\n\n sin(1) = 5/6\n cos(1) = 1/2\n e(1) = 8/3\n\n\n\n```\nprint(\"Ejercicio\")\n\nprint('ln((1)+1) =', taylor(ln(x+1), 0, 4).subs(x,1))\n```\n\n Ejercicio\n ln((1)+1) = 5/6\n\n\n##Tests of Taylor's Series\n\n\n```\nprint('Taylor 0 exp(x) \u2245', taylor(exp(x), 0, 0))\nprint('Taylor 1 exp(x) \u2245', taylor(exp(x), 0, 1))\nprint('Taylor 
2 exp(x) \u2245', taylor(exp(x), 0, 2))\nprint('Taylor 3 exp(x) \u2245', taylor(exp(x), 0, 3))\nprint('Taylor 4 exp(x) \u2245', taylor(exp(x), 0, 4))\nprint('Taylor 5 exp(x) \u2245', taylor(exp(x), 0, 5))\nprint('Taylor 6 exp(x) \u2245', taylor(exp(x), 0, 6))\nprint('Taylor 7 exp(x) \u2245', taylor(exp(x), 0, 7))\nprint('Taylor 8 exp(x) \u2245', taylor(exp(x), 0, 8))\n```\n\n Taylor 0 exp(x) \u2245 0\n Taylor 1 exp(x) \u2245 1\n Taylor 2 exp(x) \u2245 x + 1\n Taylor 3 exp(x) \u2245 x**2/2 + x + 1\n Taylor 4 exp(x) \u2245 x**3/6 + x**2/2 + x + 1\n Taylor 5 exp(x) \u2245 x**4/24 + x**3/6 + x**2/2 + x + 1\n Taylor 6 exp(x) \u2245 x**5/120 + x**4/24 + x**3/6 + x**2/2 + x + 1\n Taylor 7 exp(x) \u2245 x**6/720 + x**5/120 + x**4/24 + x**3/6 + x**2/2 + x + 1\n Taylor 8 exp(x) \u2245 x**7/5040 + x**6/720 + x**5/120 + x**4/24 + x**3/6 + x**2/2 + x + 1\n\n\n\n```\nprint(\"Ejercicio\")\n\nfor i in range(1,10):\n print('Taylor', i,'ln(x+1) \u2245', taylor(ln(x+1), 0, i))\n```\n\n Ejercicio\n Taylor 1 ln(x+1) \u2245 0\n Taylor 2 ln(x+1) \u2245 x\n Taylor 3 ln(x+1) \u2245 -x**2/2 + x\n Taylor 4 ln(x+1) \u2245 x**3/3 - x**2/2 + x\n Taylor 5 ln(x+1) \u2245 -x**4/4 + x**3/3 - x**2/2 + x\n Taylor 6 ln(x+1) \u2245 x**5/5 - x**4/4 + x**3/3 - x**2/2 + x\n Taylor 7 ln(x+1) \u2245 -x**6/6 + x**5/5 - x**4/4 + x**3/3 - x**2/2 + x\n Taylor 8 ln(x+1) \u2245 x**7/7 - x**6/6 + x**5/5 - x**4/4 + x**3/3 - x**2/2 + x\n Taylor 9 ln(x+1) \u2245 -x**8/8 + x**7/7 - x**6/6 + x**5/5 - x**4/4 + x**3/3 - x**2/2 + x\n\n\n\n```\nprint(\"Ejercicio\")\n\nfor i in range(1,10):\n print('Taylor', i,'sin(x) \u2245', taylor(sin(x), 0, i))\n```\n\n Ejercicio\n Taylor 1 sin(x) \u2245 0\n Taylor 2 sin(x) \u2245 x\n Taylor 3 sin(x) \u2245 x\n Taylor 4 sin(x) \u2245 -x**3/6 + x\n Taylor 5 sin(x) \u2245 -x**3/6 + x\n Taylor 6 sin(x) \u2245 x**5/120 - x**3/6 + x\n Taylor 7 sin(x) \u2245 x**5/120 - x**3/6 + x\n Taylor 8 sin(x) \u2245 -x**7/5040 + x**5/120 - x**3/6 + x\n Taylor 9 sin(x) \u2245 -x**7/5040 + x**5/120 - x**3/6 + x\n\n\n\n```\nprint('Taylor 0 sin(x) \u2245', taylor(sin(x), 0, 0).subs(x,2),' = ',taylor(sin(x), 0, 0).subs(x,2).evalf())\nprint('Taylor 1 cos(x) \u2245', taylor(cos(x), 0, 1).subs(x,2),' = ',taylor(cos(x), 0, 1).subs(x,2).evalf())\nprint('Taylor 2 exp(x) \u2245', taylor(exp(x), 0, 2).subs(x,2),' = ',taylor(exp(x), 0, 2).subs(x,2).evalf())\nprint('Taylor 3 exp(x) \u2245', taylor(exp(x), 0, 3).subs(x,2),' = ',taylor(exp(x), 0, 3).subs(x,2).evalf())\nprint('Taylor 4 exp(x) \u2245', taylor(exp(x), 0, 4).subs(x,2),' = ',taylor(exp(x), 0, 4).subs(x,2).evalf())\nprint('Taylor 5 exp(x) \u2245', taylor(exp(x), 0, 5).subs(x,2),' = ',taylor(exp(x), 0, 5).subs(x,2).evalf())\nprint('Taylor 6 exp(x) \u2245', taylor(exp(x), 0, 6).subs(x,2),' = ',taylor(exp(x), 0, 6).subs(x,2).evalf())\nprint('Taylor 7 exp(x) \u2245', taylor(exp(x), 0, 8).subs(x,2),' = ',taylor(exp(x), 0, 7).subs(x,2).evalf())\n```\n\n Taylor 0 sin(x) \u2245 0 = 0\n Taylor 1 cos(x) \u2245 1 = 1.00000000000000\n Taylor 2 exp(x) \u2245 3 = 3.00000000000000\n Taylor 3 exp(x) \u2245 5 = 5.00000000000000\n Taylor 4 exp(x) \u2245 19/3 = 6.33333333333333\n Taylor 5 exp(x) \u2245 7 = 7.00000000000000\n Taylor 6 exp(x) \u2245 109/15 = 7.26666666666667\n Taylor 7 exp(x) \u2245 155/21 = 7.35555555555556\n\n\n\n```\nprint(\"Ejercicio\")\nprint('Taylor 0 sin(x) \u2245', taylor(sin(x), 0, 0).subs(x,2),' = ',taylor(sin(x), 0, 0).subs(x,2).evalf())\nprint('Taylor 1 sin(x) \u2245', taylor(sin(x), 0, 1).subs(x,2),' = ',taylor(sin(x), 0, 1).subs(x,2).evalf())\nprint('Taylor 2 sin(x) \u2245', 
taylor(sin(x), 0, 2).subs(x,2),' = ',taylor(sin(x), 0, 2).subs(x,2).evalf())\nprint('Taylor 3 sin(x) \u2245', taylor(sin(x), 0, 3).subs(x,2),' = ',taylor(sin(x), 0, 3).subs(x,2).evalf())\nprint('Taylor 4 ln(x+1) \u2245', taylor(ln(x+1), 0, 4).subs(x,2),' = ',taylor(ln(x+1), 0, 4).subs(x,2).evalf())\nprint('Taylor 5 ln(x+1) \u2245', taylor(ln(x+1), 0, 5).subs(x,2),' = ',taylor(ln(x+1), 0, 5).subs(x,2).evalf())\nprint('Taylor 6 ln(x+1) \u2245', taylor(ln(x+1), 0, 6).subs(x,2),' = ',taylor(ln(x+1), 0, 6).subs(x,2).evalf())\nprint('Taylor 7 ln(x+1) \u2245', taylor(ln(x+1), 0, 8).subs(x,2),' = ',taylor(ln(x+1), 0, 7).subs(x,2).evalf())\n```\n\n Ejercicio\n Taylor 0 sin(x) \u2245 0 = 0\n Taylor 1 sin(x) \u2245 0 = 0\n Taylor 2 sin(x) \u2245 2 = 2.00000000000000\n Taylor 3 sin(x) \u2245 2 = 2.00000000000000\n Taylor 4 ln(x+1) \u2245 8/3 = 2.66666666666667\n Taylor 5 ln(x+1) \u2245 -4/3 = -1.33333333333333\n Taylor 6 ln(x+1) \u2245 76/15 = 5.06666666666667\n Taylor 7 ln(x+1) \u2245 444/35 = -5.60000000000000\n\n\n##Comparison between methods\n\n\n```\nimport math\nprint('sympy exp(x)subs(x,2) =', exp(x).subs(x,2))\nprint('sympy exp(x).subs(x,2).evalf() =', exp(x).subs(x,2).evalf())\nprint('math.exp(2) =', math.exp(2))\n```\n\n sympy exp(x)subs(x,2) = exp(2)\n sympy exp(x).subs(x,2).evalf() = 7.38905609893065\n math.exp(2) = 7.38905609893065\n\n\n\n```\nprint(\"Ejercicio\")\nimport math\nprint('sympy ln(x+1)subs(x,2) =', ln(x+1).subs(x,2))\nprint('sympy ln(x+1).subs(x,2).evalf() =', ln(x+1).subs(x,2).evalf())\nprint('math.ln(2+1) =', math.log1p(2))\n```\n\n Ejercicio\n sympy ln(x+1)subs(x,2) = log(3)\n sympy ln(x+1).subs(x,2).evalf() = 1.09861228866811\n math.ln(2+1) = 1.0986122886681096\n\n\n\n```\nprint(\"Ejercicio\")\nimport math\nprint('sympy sin(x)subs(x,2) =', sin(x).subs(x,2))\nprint('sympy sin(x).subs(x,2).evalf() =', sin(x).subs(x,2).evalf())\nprint('math.sin(2) =', math.sin(2))\n```\n\n Ejercicio\n sympy sin(x)subs(x,2) = sin(2)\n sympy sin(x).subs(x,2).evalf() = 0.909297426825682\n math.sin(2) = 0.9092974268256817\n\n\n##Plots of `exp()`\n\n\n```\nimport math\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nvalues = np.arange(-5,5,0.1)\np_exp = np.exp(values)\nt_exp1 = [taylor(exp(x), 0, 1).subs(x,v) for v in values]\nlegends = ['exp() ','Taylor 1 (constant)']\n\nfig, ax = plt.subplots()\nax.plot(values,p_exp, color ='red')\nax.plot(values,t_exp1)\n\nax.set_ylim([-5,5])\nax.axhline(y=0.0, xmin=-5.0, xmax=5.0, color='black')\nax.axvline(x=0.0, ymin=-10.0, ymax=10.0, color='black')\nax.legend(legends)\n\nplt.show()\n```\n\n\n```\nimport math\nimport numpy as np\nimport matplotlib.pyplot as plt\n# if using a Jupyter notebook, include:\n%matplotlib inline\n\nvalues = np.arange(-5,5,0.1)\np_exp = np.exp(values)\nt_exp2 = [taylor(exp(x), 0, 2).subs(x,v) for v in values]\nlegends = ['exp() ','Taylor 2 (linear)']\n\nfig, ax = plt.subplots()\nax.plot(values,p_exp, color ='red')\nax.plot(values,t_exp2)\n\nax.set_ylim([-5,5])\nax.axhline(y=0.0, xmin=-5.0, xmax=5.0, color='black')\nax.axvline(x=0.0, ymin=-10.0, ymax=10.0, color='black')\nax.legend(legends)\n\nplt.show()\n```\n\n\n```\nimport math\nimport numpy as np\nimport matplotlib.pyplot as plt\n# if using a Jupyter notebook, include:\n%matplotlib inline\n\nvalues = np.arange(-5,5,0.1)\np_exp = np.exp(values)\nt_exp3 = [taylor(exp(x), 0, 3).subs(x,v) for v in values]\nlegends = ['exp() ','Taylor 3 (quadratic)']\n\nfig, ax = plt.subplots()\nax.plot(values,p_exp, color 
='red')\nax.plot(values,t_exp3)\n\nax.set_ylim([-5,5])\nax.axhline(y=0.0, xmin=-5.0, xmax=5.0, color='black')\nax.axvline(x=0.0, ymin=-10.0, ymax=10.0, color='black')\nax.legend(legends)\n\nplt.show()\n```\n\n\n```\nimport math\nimport numpy as np\nimport matplotlib.pyplot as plt\n# if using a Jupyter notebook, include:\n%matplotlib inline\n\nvalues = np.arange(-5,5,0.1)\np_exp = np.exp(values)\nt_exp4 = [taylor(exp(x), 0, 4).subs(x,v) for v in values]\nlegends = ['exp() ','Taylor 4 (cubic)']\n\nfig, ax = plt.subplots()\nax.plot(values,p_exp, color ='red')\nax.plot(values,t_exp4)\n\nax.set_ylim([-5,5])\nax.axhline(y=0.0, xmin=-5.0, xmax=5.0, color='black')\nax.axvline(x=0.0, ymin=-10.0, ymax=10.0, color='black')\nax.legend(legends)\n\nplt.show()\n```\n\n\n```\nimport math\nimport numpy as np\nimport matplotlib.pyplot as plt\n# if using a Jupyter notebook, include:\n%matplotlib inline\n\nvalues = np.arange(-5,5,0.1)\np_exp = np.exp(values)\nt_exp1 = [taylor(exp(x), 0, 1).subs(x,v) for v in values]\nt_exp2 = [taylor(exp(x), 0, 2).subs(x,v) for v in values]\nt_exp3 = [taylor(exp(x), 0, 3).subs(x,v) for v in values]\nt_exp4 = [taylor(exp(x), 0, 4).subs(x,v) for v in values]\nlegends = ['exp() ','Taylor 1 (constant)','Taylor 3 (linear)','Taylor 3 (quadratic)','Taylor 4 (cubic)']\n\nfig, ax = plt.subplots()\nax.plot(values,p_exp)\nax.plot(values,t_exp1)\nax.plot(values,t_exp2)\nax.plot(values,t_exp3)\nax.plot(values,t_exp4)\n\nax.set_ylim([-5,5])\nax.axhline(y=0.0, xmin=-5.0, xmax=5.0, color='black')\nax.axvline(x=0.0, ymin=-10.0, ymax=10.0, color='black')\nax.legend(legends)\n\nplt.show()\n```\n\n##Plots of $\\ln(x+1)$\n\n\n```\nimport math\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nvalues = np.arange(0,5,0.1)\np_ln = [math.log1p(value) for value in values]\nt_ln1 = [taylor(ln(x+1), 0, 1).subs(x,v) for v in values]\nlegends = ['ln(x+1) ','Taylor 1 (constant)']\n\nfig, ax = plt.subplots()\nax.plot(values,p_ln, color ='red')\nax.plot(values,t_ln1)\n\nax.set_ylim([-5,5])\n#ax.axhline(y=0.0, xmin=-5.0, xmax=5.0, color='black')\n#ax.axvline(x=0.0, ymin=-10.0, ymax=10.0, color='black')\nax.legend(legends)\n\nplt.show()\nprint(\"Note that the blue line is in y=0\")\n```\n\n\n```\nimport math\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nvalues = np.arange(0,5,0.1)\np_ln = [math.log1p(value) for value in values]\nt_ln2 = [taylor(ln(x+1), 0, 2).subs(x,v) for v in values]\nlegends = ['ln(x+1) ','Taylor 2 (Lineal)']\n\n\nfig, ax = plt.subplots()\nax.plot(values,p_ln, color ='red')\nax.plot(values,t_ln2)\n\nax.set_ylim([-5,5])\nax.axhline(y=0.0, xmin=-5.0, xmax=5.0, color='black')\nax.axvline(x=0.0, ymin=-10.0, ymax=10.0, color='black')\nax.legend(legends)\n\nplt.show()\n```\n\n\n```\nimport math\nimport numpy as np\nimport matplotlib.pyplot as plt\n# if using a Jupyter notebook, include:\n%matplotlib inline\n\nvalues = np.arange(0,5,0.1)\np_ln = [math.log1p(value) for value in values]\nt_ln3 = [taylor(ln(x+1), 0, 3).subs(x,v) for v in values]\nlegends = ['ln(x+1) ','Taylor 3 (Quadratic)']\n\nfig, ax = plt.subplots()\nax.plot(values,p_ln, color ='red')\nax.plot(values,t_ln3)\n\nax.set_ylim([-5,5])\nax.axhline(y=0.0, xmin=-5.0, xmax=5.0, color='black')\nax.axvline(x=0.0, ymin=-10.0, ymax=10.0, color='black')\nax.legend(legends)\n\nplt.show()\n```\n\n\n```\nimport math\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nvalues = np.arange(0,5,0.1)\np_ln = [math.log1p(value) for value in values]\nt_ln4 = [taylor(ln(x+1), 0, 4).subs(x,v) for v in values]\nlegends = ['ln(x+1) 
','Taylor 4 (Cubic)']\n\n\nfig, ax = plt.subplots()\nax.plot(values,p_ln, color ='red')\nax.plot(values,t_ln4)\n\nax.set_ylim([-5,5])\nax.axhline(y=0.0, xmin=-5.0, xmax=5.0, color='black')\nax.axvline(x=0.0, ymin=-10.0, ymax=10.0, color='black')\nax.legend(legends)\n\nplt.show()\n```\n\n\n```\nimport math\nimport numpy as np\nimport matplotlib.pyplot as plt\n# if using a Jupyter notebook, include:\n%matplotlib inline\n\nvalues = np.arange(0,5,0.1)\np_ln = [math.log1p(value) for value in values]\nt_ln1 = [taylor(ln(x+1), 0, 1).subs(x,v) for v in values]\nt_ln2 = [taylor(ln(x+1), 0, 2).subs(x,v) for v in values]\nt_ln3 = [taylor(ln(x+1), 0, 3).subs(x,v) for v in values]\nt_ln4 = [taylor(ln(x+1), 0, 4).subs(x,v) for v in values]\nlegends = ['ln(x+1) ','Taylor 1 (constant)','Taylor 3 (linear)','Taylor 3 (quadratic)','Taylor 4 (cubic)']\n\nfig, ax = plt.subplots()\nax.plot(values,p_ln)\nax.plot(values,t_ln1)\nax.plot(values,t_ln2)\nax.plot(values,t_ln3)\nax.plot(values,t_ln4)\n\nax.set_ylim([-2,3])\nax.axhline(y=0.0, xmin=-5.0, xmax=5.0, color='black')\nax.axvline(x=0.0, ymin=-10.0, ymax=10.0, color='black')\nax.legend(legends)\n\nplt.show()\n```\n\n##Plots of $\\sin(x)$\n\n\n```\nimport math\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nvalues = np.arange(-2*math.pi,2*math.pi,0.1)\np_sin = [math.sin(value) for value in values]\nt_sin1 = [taylor(sin(x), 0, 1).subs(x,v) for v in values]\nlegends = ['sin() ','Taylor 1 (constant)']\n\nfig, ax = plt.subplots()\nax.plot(values,p_sin, color ='red')\nax.plot(values,t_sin1)\n\nax.set_ylim([-5,5])\n#ax.axhline(y=0.0, xmin=-5.0, xmax=5.0, color='black')\n#ax.axvline(x=0.0, ymin=-10.0, ymax=10.0, color='black')\nax.legend(legends)\n\nplt.show()\n```\n\n\n```\nimport math\nimport numpy as np\nimport matplotlib.pyplot as plt\n# if using a Jupyter notebook, include:\n%matplotlib inline\n\nvalues = np.arange(-2*math.pi,2*math.pi,0.1)\np_sin = [math.sin(value) for value in values]\nt_sin2 = [taylor(sin(x), 0, 2).subs(x,v) for v in values]\nlegends = ['sin() ','Taylor 2 (linear)']\n\nfig, ax = plt.subplots()\nax.plot(values,p_sin, color ='red')\nax.plot(values,t_sin2)\n\nax.set_ylim([-5,5])\nax.axhline(y=0.0, xmin=-5.0, xmax=5.0, color='black')\nax.axvline(x=0.0, ymin=-10.0, ymax=10.0, color='black')\nax.legend(legends)\n\nplt.show()\n```\n\n\n```\nimport math\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nvalues = np.arange(-2*math.pi,2*math.pi,0.1)\np_sin = [math.sin(value) for value in values]\nt_sin3 = [taylor(sin(x), 0, 3).subs(x,v) for v in values]\nlegends = ['sin()','Taylor 3 (quadratic)']\n\nfig, ax = plt.subplots()\nax.plot(values,p_sin, color ='red')\nax.plot(values,t_sin3)\n\nax.set_ylim([-5,5])\nax.axhline(y=0.0, xmin=-5.0, xmax=5.0, color='black')\nax.axvline(x=0.0, ymin=-10.0, ymax=10.0, color='black')\nax.legend(legends)\n\nplt.show()\n```\n\n\n```\nimport math\nimport numpy as np\nimport matplotlib.pyplot as plt\n# if using a Jupyter notebook, include:\n%matplotlib inline\n\nvalues = np.arange(-2*math.pi,2*math.pi,0.1)\np_sin = [math.sin(value) for value in values]\nt_sin4 = [taylor(sin(x), 0, 4).subs(x,v) for v in values]\nlegends = ['sin() ','Taylor 4 (cubic)']\n\nfig, ax = plt.subplots()\nax.plot(values,p_sin, color ='red')\nax.plot(values,t_sin4)\n\nax.set_ylim([-5,5])\nax.axhline(y=0.0, xmin=-5.0, xmax=5.0, color='black')\nax.axvline(x=0.0, ymin=-10.0, ymax=10.0, color='black')\nax.legend(legends)\n\nplt.show()\n```\n\n\n```\nimport math\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nvalues = 
np.arange(-2*math.pi,2*math.pi,0.1)\np_sin = [math.sin(value) for value in values]\nt_sin1 = [taylor(sin(x), 0, 1).subs(x,v) for v in values]\nt_sin2 = [taylor(sin(x), 0, 2).subs(x,v) for v in values]\nt_sin3 = [taylor(sin(x), 0, 3).subs(x,v) for v in values]\nt_sin4 = [taylor(sin(x), 0, 4).subs(x,v) for v in values]\nlegends = ['sin() ','Taylor 1 (constant)','Taylor 3 (linear)','Taylor 3 (quadratic)','Taylor 4 (cubic)']\n\nfig, ax = plt.subplots()\nax.plot(values,p_sin)\nax.plot(values,t_sin1)\nax.plot(values,t_sin2)\nax.plot(values,t_sin3)\nax.plot(values,t_sin4)\n\nax.set_ylim([-5,5])\n#ax.axhline(y=0.0, xmin=-5.0, xmax=5.0, color='black')\n#ax.axvline(x=0.0, ymin=-10.0, ymax=10.0, color='black')\nax.legend(legends)\n\nplt.show()\n```\n", "meta": {"hexsha": "3ac0d047d8983a5050bb0ac63d9009d635808392", "size": 272428, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Lab9/TaylorSymPy.ipynb", "max_stars_repo_name": "crvargasm/MetNumUN2021I", "max_stars_repo_head_hexsha": "e8b56337482c51eb0ca397ebd173360979e9013c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-07-27T05:34:50.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-27T05:34:50.000Z", "max_issues_repo_path": "Lab9/TaylorSymPy.ipynb", "max_issues_repo_name": "crvargasm/MetNumUN2021I", "max_issues_repo_head_hexsha": "e8b56337482c51eb0ca397ebd173360979e9013c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Lab9/TaylorSymPy.ipynb", "max_forks_repo_name": "crvargasm/MetNumUN2021I", "max_forks_repo_head_hexsha": "e8b56337482c51eb0ca397ebd173360979e9013c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 208.1191749427, "max_line_length": 26102, "alphanum_fraction": 0.8891560339, "converted": true, "num_tokens": 6345, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.951142225532629, "lm_q2_score": 0.8740772236840656, "lm_q1q2_score": 0.8313717558222437}} {"text": "# Bead on a rotating circle\n\n\n\nWe consider a movement of a material point (bead) with the mass $m$ in a gravitational field, along a vertical circle with a radius $l$ rotating with the frequency $\\omega$ around a vertical axis, passing through the center of the circle.\n\nTurning circle creates a virtual sphere with a $R$ radius and a bead moves in this sphere. 
Therefore, it is most convenient to choose a reference system in $(r, \\theta, \\phi)$ spherical variables:\n\n\\begin{eqnarray}\nx&=&r \\sin \\theta \\cos \\phi \\\\\ny&=&r \\sin \\theta \\sin \\phi \\\\\nz&=&r \\cos \\theta\n\\end{eqnarray}\n\nIt is a system with constraints and therefore it is convenient to analyze this issue in the framework of Lagrange mechanics.\n\nThe Lagrange function is the difference in the kinetic energy of the bead and its potential energy (particles in the gravitational field):\n\n$$L=L(r, \\theta, \\phi, \\dot{r}, \\dot{\\theta}, \\dot{\\phi}) = E_k -E_p$$\n\nThe kinetic energy of the bead is given by the formula:\n\n\\begin{equation}\n\\label{eq:Ek_spherical}\nE_k =\\frac{m v^2}{2} = \\frac{m}{2} [\\dot{x}^2 + \\dot{y}^2 + \\dot{z}^2] = \\frac{m}{2}[\\dot{r}^2 + r^2 \\dot{\\theta}^2 + r^2 \\sin^2 (\\theta) \\dot{\\phi}^2]\n\\end{equation}\n\nThe potential energy of the bead has the form\n$$E_p = mgz = mgr \\cos(\\theta)$$\n\n## Lagrange aproach in spherical coordinates\n\nWe can write Lagrangian of the system in spherical coordinates. Contraints in spherical coordinates have simple form:\n\n - $r = l$ - radius is constant.\n - $\\phi(t)=\\omega_0 t$ - the azimuthal angle is forced by the constraint.\n \nEffectively, the only degree of freedom is a polar angle $\\theta$. \n \nWe will used CAS to derive both the form of the Lagrangian as well as equation of motion.\n\n\n```python\nload('cas_utils.sage')\n```\n\n\n```python\nvar('t')\nvar('l g w0')\nxy_names = [('x','x'),('y','y'),('z','z')]\nuv_names = [('r','r'),('phi',r'\\phi'),('theta',r'\\theta')]\n\nload('cas_utils.sage')\n\nto_fun, to_var = make_symbols(xy_names,uv_names)\nx2u = {x:r*sin(theta)*cos(phi),\n y: r*sin(theta)*sin(phi),\n z: r*cos(theta)}\n_ = transform_virtual_displacements(xy_names,uv_names,suffix='_uv')\n```\n\n\n```python\nEk = 1/2*sum([x_.subs(x2u).subs(to_fun).diff(t).subs(to_var)^2 \\\n for x_ in [x,y,z]])\n```\n\nThe symbolic variable is kinetic energy in spherical coordinates \\ref{eq:Ek_spherical},\n\n\n```python\nshowmath(Ek.trig_simplify())\n```\n\nAt this point we can substitute conditions resulting from constraints. Note that since we have already time derivatives $\\dot r$ and $\\dot phi$, we have to also substitute them explicitely.\n\n\n```python\nEk = Ek.subs({phi:w0*t,phid:w0,rd:0,r:l}).trig_simplify()\nshowmath(Ek)\n```\n\nThe potential energy can be computer in a similar way:\n\n\n```python\nEp = g*z.subs(x2u).subs({r:l})\nshowmath(Ep)\n```\n\nLangrange function is a difference between $E_k$ and $E_{pot}$:\n\n\n```python\nL = Ek-Ep\nshowmath(L.trig_simplify())\n```\n\nWe have single Euler-Lagrange equation in generalized coodinate: polar angle $\\theta$.\n\n\n```python\nEL = L.diff(thetad).subs(to_fun).diff(t).subs(to_var) - L.diff(theta)\n```\n\n\n```python\nshowmath(EL.trig_simplify())\n```\n\n## Analysis of the system\n\n### Effective potential \n\nFist, we can notice that the form of Lagrangian contains kinetic part, with term contatining $\\dot \\theta$ and two terms dependent only on $\\theta$. We can interpret the latter as an effective potential of 1d motion in $\\theta$ coordinate. 
The term comes from kinetic energy.\n\nWe can extract it symbolically in SageMath by setting $\\dot \\theta=0$ in Lagrangian:\n\n\n```python\nUeff = -L.subs(thetad==0)\nshowmath(Ueff)\n```\n\n\n```python\nplot?\n```\n\n\n```python\nw0s = [0,3,4,5]\np_u1 = plot([Ueff.subs({w0:w0_,l:1,g:9.81}) for w0_ in w0s],\\\n (theta,-3*pi/2-0.01,pi/2),\n legend_label=[r'$\\omega_0=%0.1f$'%w_ for w_ in w0s], \n tick_formatter=pi, gridlines=[[-pi],None])\np_u1.show(figsize=(6,3))\n```\n\n\\newpage\n", "meta": {"hexsha": "15a33a6cc0fa1ce88d3adab13b0ed8f6e5b53e7a", "size": 7517, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "061-Lagrange_bead_rotating_circle.ipynb", "max_stars_repo_name": "marcinofulus/Mechanics_with_SageMath", "max_stars_repo_head_hexsha": "6d13cb2e83cd4be063c9cfef6ce536564a25cf57", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "061-Lagrange_bead_rotating_circle.ipynb", "max_issues_repo_name": "marcinofulus/Mechanics_with_SageMath", "max_issues_repo_head_hexsha": "6d13cb2e83cd4be063c9cfef6ce536564a25cf57", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2022-01-30T16:45:58.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-30T16:45:58.000Z", "max_forks_repo_path": "061-Lagrange_bead_rotating_circle.ipynb", "max_forks_repo_name": "marcinofulus/Mechanics_with_SageMath", "max_forks_repo_head_hexsha": "6d13cb2e83cd4be063c9cfef6ce536564a25cf57", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-11-15T08:26:14.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-12T13:07:16.000Z", "avg_line_length": 26.2832167832, "max_line_length": 289, "alphanum_fraction": 0.545962485, "converted": true, "num_tokens": 1163, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9353465134460242, "lm_q2_score": 0.8887588052782736, "lm_q1q2_score": 0.831297449811487}} {"text": "# Multivariate Normal Distribution - Mahalanobis Distance\n\n> This document is written in *R*.\n>\n> ***GitHub***: https://github.com/czs108\n\n## Question A\n\n> Generate a sample of **1000** random points from a *Multivariate Normal Distribution* with *means* **0.0**, **0.0** and *standard deviations* of **1.5** along the $x$ axis and **0.5** along the $y$ axis.\n>\n> Plot the points on a graph with limits on the $x$ axis of $(-10,\\, 10)$ and limits on the $y$ axis of $(-10,\\, 10)$.\n\nUse the `rnorm` function. 
\n\n\n```R\ntotal <- 1000\nX.1 <- rnorm(n=total, mean=0, sd=1.5)\nY.1 <- rnorm(n=total, mean=0, sd=0.5)\n\nplot(Y.1 ~ X.1, xlim=c(-10, 10), ylim=c(-10, 10), xlab=\"X\", ylab=\"Y\")\n```\n\n## Question B\n\n> Generate a sample of **1000** random points from a *Multivariate Normal Distribution* with *means* **-4.0**, **0.0** and *standard deviations* of **0.5** along the $x$ axis and **1.5** along the $y$ axis.\n>\n> Plot the points on the current graph with color *blue*.\n\n\n```R\nX.2 <- rnorm(n=total, mean=-4.0, sd=0.5)\nY.2 <- rnorm(n=total, mean=0, sd=1.5)\n\nplot(Y.1 ~ X.1, xlim=c(-10, 10), ylim=c(-10, 10), xlab=\"X\", ylab=\"Y\")\npoints(Y.2 ~ X.2, col=\"blue\")\n```\n\n## Question C\n\n> Compute the *Mahalanobis Distance* of each of the **2000** points from each of the distributions.\n\n\\begin{equation}\nd^{2} = \\frac{(x - m_x)^2}{\\sigma^2_x} + \\frac{(y - m_y)^2}{\\sigma^2_y}\n\\end{equation}\n\n\n```R\nmahadist <- function(point, mean, sd) {\n d.x <- ((point[1] - mean[1]) ^ 2) / (sd[1] ^ 2)\n d.y <- ((point[2] - mean[2]) ^ 2) / (sd[2] ^ 2)\n return (d.x + d.y)\n}\n```\n\n\n```R\ndist.1 <- NULL\nfor (i in c(1:total)) {\n dist.1[i] <- mahadist(c(X.1[i], Y.1[i]), c(0, 0), c(1.5, 0.5))\n}\n\ndist.2 <- NULL\nfor (i in c(1:total)) {\n dist.2[i] <- mahadist(c(X.2[i], Y.2[i]), c(-4.0, 0), c(0.5, 1.5))\n}\n\ndist.1[c(1:5)]\ndist.2[c(1:5)]\n```\n\n\n
    1. 0.0835216640048668
    2. 0.525807369423523
    3. 2.89716097154469
    4. 2.76312320046508
    5. 0.570738446675666


    1. 0.849843382279618
    2. 0.782329010614814
    3. 0.0402248932660116
    4. 3.79224983124425
    5. 1.7885729380896
    \n\n\n\n## Question D\n\n> Classify each point using the *Mahalanobis Distance*. How accurate is your classifier?\n\n\n```R\ncorrect.1 <- NULL\nfor (i in c(1:total)) {\n if (dist.1[i] <= mahadist(c(X.1[i], Y.1[i]), c(-4.0, 0), c(0.5, 1.5))) {\n correct.1[i] = TRUE\n } else {\n correct.1[i] = FALSE\n }\n}\n\ncorrect.2 <- NULL\nfor (i in c(1:total)) {\n if (dist.2[i] <= mahadist(c(X.2[i], Y.2[i]), c(0, 0), c(1.5, 0.5))) {\n correct.2[i] = TRUE\n } else {\n correct.2[i] = FALSE\n }\n}\n```\n\nFor the *black* distribution, the accuracy is:\n\n\n```R\nsum(correct.1) / total\n```\n\n\n0.972\n\n\nFor the *blue* distribution, the accuracy is:\n\n\n```R\nsum(correct.2) / total\n```\n\n\n0.994\n\n\n## Question E\n\n> In order to visualise the feature space, plot a grid of points on the current graph covering the whole space. Compute the *Mahalanobis Distances* at each point.\n>\n> Plot a *black* circle if the point is closer to the *black* distribution and *blue* point if it is closer to the *blue* distribution.\n\n\n```R\nloc.1.x <- NULL\nloc.1.y <- NULL\nloc.2.x <- NULL\nloc.2.y <- NULL\n\nfor (x in c(-10:10)) {\n for (y in c(-10:10)) {\n dist.1 <- mahadist(c(x, y), c(0, 0), c(1.5, 0.5))\n dist.2 <- mahadist(c(x, y), c(-4.0, 0), c(0.5, 1.5))\n if (dist.1 <= dist.2) {\n loc.1.x[length(loc.1.x) + 1] <- x\n loc.1.y[length(loc.1.y) + 1] <- y\n } else {\n loc.2.x[length(loc.2.x) + 1] <- x\n loc.2.y[length(loc.2.y) + 1] <- y\n }\n }\n}\n\nplot(loc.1.y ~ loc.1.x, xlim=c(-10, 10), ylim=c(-10, 10), xlab=\"X\", ylab=\"Y\")\npoints(loc.2.y ~ loc.2.x, col=\"blue\")\n```\n", "meta": {"hexsha": "98cec76da1f577b373ceea3208e680e576f627d2", "size": 40615, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "exercises/Multivariate Normal Distribution - Mahalanobis Distance.ipynb", "max_stars_repo_name": "czs108/Probability-Theory-Exercises", "max_stars_repo_head_hexsha": "60c6546db1e7f075b311d1e59b0afc3a13d93229", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "exercises/Multivariate Normal Distribution - Mahalanobis Distance.ipynb", "max_issues_repo_name": "czs108/Probability-Theory-Exercises", "max_issues_repo_head_hexsha": "60c6546db1e7f075b311d1e59b0afc3a13d93229", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "exercises/Multivariate Normal Distribution - Mahalanobis Distance.ipynb", "max_forks_repo_name": "czs108/Probability-Theory-Exercises", "max_forks_repo_head_hexsha": "60c6546db1e7f075b311d1e59b0afc3a13d93229", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-03-21T05:04:07.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-21T05:04:07.000Z", "avg_line_length": 104.6778350515, "max_line_length": 13760, "alphanum_fraction": 0.8369321679, "converted": true, "num_tokens": 1467, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9149009642742805, "lm_q2_score": 0.9086178938396674, "lm_q1q2_score": 0.8312953872307776}} {"text": "# \u6d4b\u8bd5\u6587\u4ef6\n\n\u5728\u6b64\u5bf9\u7f16\u5199\u7684\u5404\u4e2a\u51fd\u6570\u8fdb\u884c\u6d4b\u8bd5\uff0c\u68c0\u67e5\u662f\u5426\u53ef\u4ee5\u6b63\u5e38\u8fd0\u884c\uff0c\u9996\u5148\u52a0\u8f7dbase\u6587\u4ef6\u91cc\u9762\u5b9a\u4e49\u7684\u5404\u79cd\u51fd\u6570\u3002\n\n\n```python\n%run base.py\n```\n\n\n```python\nfrom sympy import init_printing\ninit_printing()\nfrom IPython.core.interactiveshell import InteractiveShell \nInteractiveShell.ast_node_interactivity = \"all\"\n```\n\n## \u57fa\u672c\u68af\u5ea6\u529f\u80fd\n\n\u5728\u6b64\u53ef\u4ee5\u6c42\u68af\u5ea6\u4e0eHessian\u77e9\u9635\uff0c\u4f8b\u5b50\u4f7f\u7528\n\n$$\nx_1^2+x_2\\times cos(x_2)\n$$\n\n\n\n\n```python\nxvec = symbols('x1:3') # \u5b9a\u4e49\u53d8\u91cfx1,x2\nfexpr = xvec[0]**2+xvec[1]*cos(xvec[1])\ngexpr = get_g(fexpr, xvec, display=True)\nprint('-'*40)\nGexpr = get_G(fexpr, xvec, display=True)\n```\n\n\n```python\ng = get_fun(gexpr,xvec)\ng.__name__\ng(1, 1)\n```\n\n## \u9ec4\u91d1\u5206\u5272\u6cd5\u6d4b\u8bd5\n\n\u76ee\u6807\u51fd\u6570\u4e3a\uff1a\n\n$$\n1-x\\times e^{-x^2}\n$$\n\n\u5728$\\sqrt{2}/2$\u53d6\u5230\u89e3\u6790\u89e3\n\n\n```python\n%%time\ndef foo(x): return 1-x*np.exp(-x**2)\nprint('0.618\u6cd5', \n golden_section_search(foo, 0.01, interval=[0, 1], printout=True), \n '\u89e3\u6790\u89e3', np.sqrt(0.5))\n```\n\n## \u4fee\u6b63\u725b\u987f\u6cd5\u5728\u6b63\u5b9a\u4e8c\u6b21\u51fd\u6570\u60c5\u51b5\u4e0b\u7684\u6d4b\u8bd5\n\n\u51fd\u6570\u5f62\u5f0f\u4e3a\uff1a\n\n$$\nx_1^2+x_2^2+x_1\\cdot x_2\n$$\n\n\u5728(0,0)\u53d6\u5230\u6700\u5c0f\u503c\n\n\n```python\n%%time\nxvec = symbols('x1:3')\nfexpr = xvec[0]**2+xvec[1]**2+1*xvec[0]*xvec[1] # \u6b63\u5b9a\u7684\u4e8c\u6b21\u51fd\u6570\npprint(fexpr)\nx = modified_newton(fexpr, xvec, (1, 1), eps=1e-5)\nprint('\u7ed3\u679c\uff1a', x)\n```\n\n## \u963b\u5c3c\u725b\u987f\u6cd5\u5728\u6b63\u5b9a\u4e8c\u6b21\u51fd\u6570\u4e0a\u7684\u6d4b\u8bd5\n\n\u51fd\u6570\u5f62\u5f0f\u540c\u4e0a\n\n\n```python\n%%time\nfexpr = xvec[0]**2+xvec[1]**2+1*xvec[0]*xvec[1] # \u6b63\u5b9a\u7684\u4e8c\u6b21\u51fd\u6570\npprint(fexpr)\nx = damped_newton(fexpr, xvec, (1, 1), eps=1e-5)\nprint('\u7ed3\u679c\uff1a', x)\n```\n", "meta": {"hexsha": "728c8b3a308494b0781ab516e79b5ddd04c6532a", "size": 3314, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "test.ipynb", "max_stars_repo_name": "LingrenKong/Numerical-Optimization-Code", "max_stars_repo_head_hexsha": "598e2b5099e2ba57ea0aa7ff4a5f5547889828b2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-11-10T09:06:03.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-07T06:43:45.000Z", "max_issues_repo_path": "test.ipynb", "max_issues_repo_name": "LingrenKong/Numerical-Optimization-Code", "max_issues_repo_head_hexsha": "598e2b5099e2ba57ea0aa7ff4a5f5547889828b2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "test.ipynb", "max_forks_repo_name": "LingrenKong/Numerical-Optimization-Code", "max_forks_repo_head_hexsha": "598e2b5099e2ba57ea0aa7ff4a5f5547889828b2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 19.1560693642, "max_line_length": 79, 
"alphanum_fraction": 0.4791792396, "converted": true, "num_tokens": 619, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9149009480320036, "lm_q2_score": 0.9086179012632543, "lm_q1q2_score": 0.8312953792646007}} {"text": "# Pre-Calculus Notebook\n\n\n```python\n# Begin with some useful imports\nimport numpy as np\nimport pylab\nfrom fractions import Fraction\nfrom IPython.display import display, Math\nfrom sympy import init_printing, N, pi, sqrt, symbols, Symbol\nfrom sympy.abc import x, y, theta, phi\ninit_printing(use_latex='mathjax') # do this to allow sympy to print mathy stuff\n%matplotlib inline\n```\n\n## Trig Functions\n\nThis notebook will introduce basic trigonometric functions.\n\nGiven an angle, \u03b8, there are two units used to indicate the measure of the angle: degrees and radians.\n\n### First make a simple function to convert from degrees to radians\n\n\n```python\ndef deg_to_rad(angle, as_expr=False):\n \"\"\" Convert degrees to radians \n param: theta: degrees as a float or numpy array of values that can be cast to a float\n param: as_text: True/False output as text with multiple of '\u03c0' (Default = False)\n \n The formula for this is:\n 2\u03c0\u03b8/360\u00b0\n \n Note: it's usually bad form to allow a function argument to change an output type. Oh, well.\n \"\"\"\n radians = (angle * 2 * np.pi) / 360.0\n \n if as_expr:\n # note: this requires sympy\n radians = 2*pi*angle/360\n \n return radians\n```\n\n\n```python\n# Test our function\n# Note: normally you would print results, but with sympy expressions, \n# the `display` function of jupyter notebooks is used to print using MathJax.\n\ndisplay(deg_to_rad(210, as_expr=True))\ndisplay(deg_to_rad(360, as_expr=True))\n\n```\n\n\n$$\\frac{7 \\pi}{6}$$\n\n\n\n$$2 \\pi$$\n\n\n### Now, let's make a list of angles around a unit circle and convert them to radians\n\n\n```python\nbase_angles = [0, 30, 45, 60]\nunit_circle_angles = []\nfor i in range(4):\n unit_circle_angles.extend(np.array(base_angles)+(i*90))\nunit_circle_angles.append(360)\n```\n\n\n```python\n# make a list of the angles converted to radians (list items are a sympy expression)\nunit_circle_angles_rad = [deg_to_rad(x, False) for x in unit_circle_angles]\nunit_circle_angles_rad_expr = [deg_to_rad(x, True) for x in unit_circle_angles]\n```\n\n\n```python\nunit_circle_angles_rad\n```\n\n\n\n\n$$\\left [ 0.0, \\quad 0.523598775598, \\quad 0.785398163397, \\quad 1.0471975512, \\quad 1.57079632679, \\quad 2.09439510239, \\quad 2.35619449019, \\quad 2.61799387799, \\quad 3.14159265359, \\quad 3.66519142919, \\quad 3.92699081699, \\quad 4.18879020479, \\quad 4.71238898038, \\quad 5.23598775598, \\quad 5.49778714378, \\quad 5.75958653158, \\quad 6.28318530718\\right ]$$\n\n\n\n\n```python\nunit_circle_angles_rad_expr\n```\n\n\n\n\n$$\\left [ 0, \\quad \\frac{\\pi}{6}, \\quad \\frac{\\pi}{4}, \\quad \\frac{\\pi}{3}, \\quad \\frac{\\pi}{2}, \\quad \\frac{2 \\pi}{3}, \\quad \\frac{3 \\pi}{4}, \\quad \\frac{5 \\pi}{6}, \\quad \\pi, \\quad \\frac{7 \\pi}{6}, \\quad \\frac{5 \\pi}{4}, \\quad \\frac{4 \\pi}{3}, \\quad \\frac{3 \\pi}{2}, \\quad \\frac{5 \\pi}{3}, \\quad \\frac{7 \\pi}{4}, \\quad \\frac{11 \\pi}{6}, \\quad 2 \\pi\\right ]$$\n\n\n\n\n```python\n# show the list of angles in degrees\nunit_circle_angles\n```\n\n\n\n\n$$\\left [ 0, \\quad 30, \\quad 45, \\quad 60, \\quad 90, \\quad 120, \\quad 135, \\quad 150, \\quad 180, \\quad 210, \\quad 225, \\quad 240, \\quad 270, \\quad 300, \\quad 315, \\quad 330, \\quad 360\\right ]$$\n\n\n\n### 
So, what are radians\n\nRadians are a unit of angle measure (similar to degrees). Radians correspond to the arc length that an angle sweeps out on the unit circle. A complete circle with a radius of 1 (a unit circle) has a circumference of $2\\pi$ (from the formula for circumference: $C = 2\\pi r$).\n\nLet's import some things to help us create a plot to see this in action.\n\n\n```python\nfrom bokeh.plotting import figure, show, output_notebook\noutput_notebook()\n```\n\n\n\n
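Before plotting, here is a quick numeric check of the arc-length interpretation (an added aside, not in the original notebook). It uses NumPy's built-in `deg2rad` rather than the `deg_to_rad` function defined above, and the variable names are only illustrative:


```python
import numpy as np

angle_deg = 90
angle_rad = np.deg2rad(angle_deg)  # pi/2, approximately 1.5708

# On the unit circle (r = 1) the swept arc length s = r * theta equals the angle in radians
arc_length = 1.0 * angle_rad

angle_rad, arc_length
```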
    \n\n\n\n\nIf you were to map out our angle values on the circle drawn on a cartesian coordinate grid, it might look like this:\n\n\n```python\np = figure(plot_width=600, plot_height=600, match_aspect=True)\n\n\n# draw angle lines\nx = np.cos(unit_circle_angles_rad)\ny = np.sin(unit_circle_angles_rad)\norigin = np.zeros(x.shape)\n\n\nfor i in range(len(unit_circle_angles_rad)):\n #print (x[i], y[i])\n p.line((0, x[i]), (0, y[i]))\n\n# draw unit circle\np.circle(0,0, radius=1.0, color=\"white\", line_width=0.5, line_color='black',\n alpha=0.0, line_alpha=1.0)\n\n```\n\n\n\n\n
    GlyphRenderer(
    id = 'fc41fd69-f9b1-4846-b0a5-77bea065dad0',
    data_source = ColumnDataSource(id='d578de05-1c57-4568-bf26-45bbd7e91071', ...),
    glyph = Circle(id='c04870c5-f214-4e3c-8b2a-9ca556962ac3', ...),
    hover_glyph = None,
    js_event_callbacks = {},
    js_property_callbacks = {},
    level = 'glyph',
    muted = False,
    muted_glyph = None,
    name = None,
    nonselection_glyph = Circle(id='e17f1a11-aaf3-46df-8b18-671e96ef5549', ...),
    selection_glyph = None,
    subscribed_events = [],
    tags = [],
    view = CDSView(id='86474633-4aeb-4e71-bd74-6de289e9ebd8', ...),
    visible = True,
    x_range_name = 'default',
    y_range_name = 'default')
    \n\n\n\n\n\n\n```python\nshow(p)\n```\n\n\n\n
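Every line endpoint plotted above should land exactly on the circle, because cos(θ)² + sin(θ)² = 1 for any angle θ. A small added check (not part of the original notebook), with illustrative variable names:


```python
import numpy as np

angles = np.deg2rad([0, 30, 45, 60, 90, 120, 135, 150, 180])
radii = np.sqrt(np.cos(angles)**2 + np.sin(angles)**2)

radii  # every entry is 1.0 (up to floating-point error)
```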
    \n\n\n\n\n\n```python\nimport bokeh\n```\n\n\n```python\nbokeh.__version__\n```\n\n\n\n\n u'0.12.7'\n\n\n\n\n```python\n from bokeh.plotting import figure, output_file, show\n\n output_file(\"circle_v1.html\")\n\n p = figure(plot_width=400, plot_height=400, match_aspect=True, tools='save')\n p.circle(0, 0, radius=1.0, fill_alpha=0)\n p.line((0, 1), (0, 0))\n p.line((0, -1), (0, 0))\n p.line((0, 0), (0, 1))\n p.line((0, 0), (0, -1))\n #p.rect(0, 0, 2, 2, fill_alpha=0)\n\n show(p)\n \n```\n\n\n\n
    \n\n\n\n\n\n```python\nimport sys, IPython, bokeh\nprint \"Python: \", sys.version\nprint \"IPython: \", IPython.__version__\nprint \"Bokeh: \", bokeh.__version__\n```\n\n Python: 2.7.13 |Continuum Analytics, Inc.| (default, Dec 20 2016, 23:05:08) \n [GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.57)]\n IPython: 5.3.0\n Bokeh: 0.12.10\n\n\n\n```python\n%reload_ext version_information\n```\n\n\n```python\n%version_information\n```\n\n\n\n\n
    Software    Version
    Python      2.7.13 64bit [GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.57)]
    IPython     5.3.0
    Bokeh       0.12.10
    OS          Darwin 17.2.0 x86_64 i386 64bit
    Wed Nov 15 13:14:25 2017 CST
    \n\n\n\n\n```python\nfrom bokeh.plotting import figure, output_file, show\nfrom bokeh.layouts import layout\n\np1 = figure(match_aspect=True, title=\"Circle touches all 4 sides of square\")\n#p1.rect(0, 0, 300, 300, line_color='black')\np1.circle(x=0, y=0, radius=10, line_color='black', fill_color='grey',\n radius_units='data')\n\ndef draw_test_figure(aspect_scale=1, width=300, height=300):\n p = figure(\n plot_width=width,\n plot_height=height,\n match_aspect=True,\n aspect_scale=aspect_scale,\n title=\"Aspect scale = {0}\".format(aspect_scale),\n toolbar_location=None)\n p.circle([-1, +1, +1, -1], [-1, -1, +1, +1])\n return p\n\naspect_scales = [0.25, 0.5, 1, 2, 4]\np2s = [draw_test_figure(aspect_scale=i) for i in aspect_scales]\n\nsizes = [(100, 400), (200, 400), (400, 200), (400, 100)]\np3s = [draw_test_figure(width=a, height=b) for (a, b) in sizes]\n\nlayout = layout(children=[[p1], p2s, p3s])\n\noutput_file(\"aspect.html\")\nshow(layout)\n```\n\n\n\n
    \n\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "cb68782ba649ecec9dd9eae09a0be3f082aeb349", "size": 151177, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "math/PreCalculus.ipynb", "max_stars_repo_name": "tvaught/education", "max_stars_repo_head_hexsha": "f96f12c178ab353957bf1ab26eeb302c4f272eea", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "math/PreCalculus.ipynb", "max_issues_repo_name": "tvaught/education", "max_issues_repo_head_hexsha": "f96f12c178ab353957bf1ab26eeb302c4f272eea", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "math/PreCalculus.ipynb", "max_forks_repo_name": "tvaught/education", "max_forks_repo_head_hexsha": "f96f12c178ab353957bf1ab26eeb302c4f272eea", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 154.1049949032, "max_line_length": 68285, "alphanum_fraction": 0.5927092084, "converted": true, "num_tokens": 3727, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9230391643039739, "lm_q2_score": 0.9005297901222472, "lm_q1q2_score": 0.8312242649052721}} {"text": "# Problem 4.15 Additive Noise Models (ANMs)\n\n## a) ANM I\n\nGiven the SCM\n\\begin{align}\nX:=&N_X\\\\\nY:=&2X+N_Y\n\\end{align}\nwith $N_X\\sim U(1,3)$ and $N_Y\\sim U(-0.5,0.5)$.\n\n### Support of $P_{X,Y}$\n\n\n```python\n%matplotlib inline\nimport numpy as np\nimport matplotlib\nimport matplotlib.pyplot as plt\n```\n\n\n```python\nn = 3000\nx = np.random.uniform(1, 3, n)\ny = 2 * x + np.random.uniform(-.5, .5, n)\n```\n\n\n```python\ndef plot_support(x, y):\n fig = plt.figure()\n ax=fig.add_axes([0, 0, 2, 2])\n ax.scatter(x, y, color='b')\n ax.set_xlabel('$X$')\n ax.set_ylabel('$Y$')\n ax.set_title('$P_{X,Y}$')\n plt.show()\n```\n\n\n```python\nplot_support(x, y)\n```\n\nThere is no ANM from $Y$ to $X$ because the noise shape depends on $Y$, when the plot _is flipped by 90 degrees_.\n\n## b) ANM II\n\nANM with the same noise as ANM I, but the causal assignments\n\\begin{align}\nX:=&N_X\\\\\nY:=&X^2+N_Y\n\\end{align}\n\n\n```python\nx = np.random.uniform(1, 3, n)\ny = x**3 + np.random.uniform(-.5, .5, n)\n```\n\n\n```python\nplot_support(x, y)\n```\n\nVisually, the thickness seems to vary because me measure orthogonal to the tangent. However, as seen from the $X$ axis, the thickness remains the same.\n\nAgain, the height of the _noise bar_ in $Y$ direction is always the same regardless of $X$. When flipping the plot, i.e. 
changing the causal direction, the noise thickness depends on $Y$.\n", "meta": {"hexsha": "b15cbf800cf22c33ce9435c8e5d3e16bbf3f906d", "size": 99011, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "causal-inference/problems/problem-4.15-anms.ipynb", "max_stars_repo_name": "Simsso/Machine-Learning-Tinkering", "max_stars_repo_head_hexsha": "0a024aab0bb1ac5fbdd2f77380ab36d192278701", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-02-19T16:30:04.000Z", "max_stars_repo_stars_event_max_datetime": "2020-03-13T02:22:29.000Z", "max_issues_repo_path": "causal-inference/problems/problem-4.15-anms.ipynb", "max_issues_repo_name": "Simsso/Machine-Learning-Tinkering", "max_issues_repo_head_hexsha": "0a024aab0bb1ac5fbdd2f77380ab36d192278701", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2021-02-17T14:00:33.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-10T03:05:35.000Z", "max_forks_repo_path": "causal-inference/problems/problem-4.15-anms.ipynb", "max_forks_repo_name": "Simsso/Machine-Learning-Tinkering", "max_forks_repo_head_hexsha": "0a024aab0bb1ac5fbdd2f77380ab36d192278701", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 532.3172043011, "max_line_length": 72324, "alphanum_fraction": 0.9464806941, "converted": true, "num_tokens": 453, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9230391579526934, "lm_q2_score": 0.9005297847831082, "lm_q1q2_score": 0.8312242542575204}} {"text": "# Laboratory 09: Matrices and Rotations (Crystallography)\n\n## Background\n\nThe purpose of this laboratory is to understand and apply the concept of an agebraic group to produce one of the plane group patterns below. In this context, algebraic groups are mathematical objects that represent a collection of geometric transformations and are used in the study of crystallography. For example, when you say that is crystal is \"face centered cubic\" what you are really saying is that an atomic motif, when transformed by all the algebraic elements of a particular group, will be identical to the face centered cubic structure. The group is the mathematical machinary of crystal structures. The group itself is an abstract idea that is embodied in a crystal structure, a geometric design, a geometric series, a set of matrices, or a set of any kind where the rules that govern the set meet the criteria of a group. The idea of a group exists beyond the example I present here.\n\n## What Skills Will I Learn?\n\nUsing the context of two-dimensional plane groups, in this assignment you will practice the following skills:\n\n* Identifying a symmetry operation (rotation, translation, mirror) and determining the associated matrix representation.\n* Applying a symmetry operation to a position vector to transform the vector (or collection of vectors).\n* Seeing the inner workings of a class object and its attributes for the purpose of simplifying a computing task.\n* Using lists to collect and organize objects (such as transformation matrices)\n\n## What Steps Should I Take?\n\n1. Review the idea of \"symmetry operations\" and familarize yourself with mirror, translational, and rotational symmetry operations.\n1. 
Review the idea of matrix transformations and how they can be used to represent symmetry operations and how those symmetry operations can be represented as a matrix/vector dot product.\n1. Read about the metric tensor below and practice a few calculations of lengths and angles in unit cell coordinates.\n1. Read and practice writing down 2D, 3D and 3D augmented transformation matrices and learn how Euler angles are defined.\n1. For each case above, use a position vector pointing in a direction you select and apply a transformation using each matrix in turn. Does the resultant vector match your anticipated transformation? (e.g. If you take the position vector pointing at the [0,0,1] position in a cubic lattice and apply a 90 degree rotation about the x-axis, where should the resultant vector be pointing? Repeat this for mirror planes and translations.\n1. Review the concept of a \"class\" in object oriented programming and familarize yourself with the `Point` and `Triangle` classes I have defined below. Practice creating these objects and accessing their attributes.\n1. Review the code that accesses the Python Imaging Library (PIL) and look at the `polygon` function call and the `Triangle` points method. You can call the points method from within the polygon function to simplify the drawing of triangles.\n1. Using the `Point` and `Triangle` classes and your knowledge of symmetry transformations, reproduce one of the plane groups from the figure below. My suggestion is to start with a single triangle and apply a set of operations (transformation matrices) and collect new triangles into a list. Sketch a flowchart for how you might solve this problem.\n1. Review the very last block of code in this notebook and modify it to complete the assignment. You will need to use combinations of translations, reflections and rotations to accomplish this. \n1. Prepare a new notebook (not just modifications to this one) that describes your approach to reproducing one or more of the plane groups.\n\nYou may discover that a unique set of matrices will reproduce the whole pattern. This small set of matrices are an algebraic structure known as a **group**. So - the \"plane group\" is the group of symmetry operations that reproduces the structure in the plane starting from a single motif. The plane group representations are reproduced below for reference; this image comes from Hammond's book on crystallography, cited above.\n\n## A Sucessful Jupyter Notebook Will\n\n* Present a description of the essential elements of plane group symmetry, matrix algebra, and group theory to understand how these are related;\n* Identify the audience for which the work is intended;\n* Run the code necessary to draw one of the plane groups;\n* Provide a narrative and equations to explain why your approach is relevant to solving the problem;\n* Provide references and citations to any others' work you use to complete the assignment;\n* Be checked into your GitHub repository by the due date (one week from assignment).\n\nA high quality communication provides an organized, logically progressing, blend of narrative, equations, and code that teaches the reader a particular topic or idea. You will be assessed on:\n* The functionality of the code (i.e. it should perform the task assigned).\n* The narrative you present. I should be able to read and learn from it. 
Choose your audience wisely.\n* The supporting equations and figures you choose to include.\n\nIf your notebook is just computer code your assignment will be marked incomplete.\n\n## Reading and Reference\n\n* M. De Graef and M. McHenry, Structure of Materials, Cambridge University Press, 2nd ed.\n* C. Hammond, The Basics of Crystallography and Diffraction, Oxford Science Publications, 4th ed.\n\n## The Plane Groups\n\n\n\n### Introduction\n----\n\nOperations using vector-like data structures are an essential component of numerical computing, mathematics, science and engineering. In the field of crystallography the use of vectors and rotations in real and reciprocal space helps to simplify the description of and quantitative operations on crystal structures. Matrix operations are used in the solution of partial differential equations. The vector algebra and rotations are most easily performed using matrix tools and representations. The concepts and their mathematical properties will be reviewed and demonstrated using symbolic algebra and numerical methods. A review of matrix algebra will be helpful.\n\n### Rotations\n----\n\nA vector can be transformed by translations, rotations, and stretching/shrinking. A matrix multiplication operation can be used to define each individual operation. We can use matrix multiplication to perform combinations of these operations and then this composite operator can be applied to a vector. In general these types of transformations are called Affine Transformations. A rotation in two dimensions is given by:\n\n\\begin{equation*}\n\\left[\\begin{matrix}\\cos{\\left (\\theta \\right )} & - \\sin{\\left (\\theta \\right )}\\\\\\sin{\\left (\\theta \\right )} & \\cos{\\left (\\theta \\right )}\\end{matrix}\\right]\n\\end{equation*}\n\nWe can rotate a vector $x\\mathbf{i} + y\\mathbf{j}$ to the position $x'\\mathbf{i} + y'\\mathbf{j}$ using the following matrix operation:\n\n\\begin{equation*}\n\\left( \\begin{matrix} x' \\\\ y' \\end{matrix} \\right) = \\left[\\begin{matrix}\\cos{\\left (\\theta \\right )} & - \\sin{\\left (\\theta \\right )}\\\\\\sin{\\left (\\theta \\right )} & \\cos{\\left (\\theta \\right )}\\end{matrix}\\right] \\cdot \\left( \\begin{matrix} x \\\\ y \\end{matrix} \\right)\n\\end{equation*}\n\n\n```python\n%matplotlib inline\n\nimport numpy as np\nimport sympy as sp\nsp.init_session(quiet=True)\n```\n\n\n```python\n?sp.rot_axis3\n```\n\n\n```python\nx = sp.symbols('\\\\theta')\nsp.rot_axis3(x)\n```\n\nWe can look up definitions, but we can also do some simple tests to see which way things rotate. Let us take a vector pointing in the $\\mathbf{x}$ direction and rotate it about $\\mathbf{z}$ by 90 degrees and see what happens:\n\n\n```python\nxUnit = sp.Matrix([1,0,0])\nzRotation = sp.rot_axis3(sp.rad(90))\n```\n\n\n```python\nxUnit\n```\n\n\n```python\n# Each column can be viewed as where the unit vectors are moved to in the new space.\nzRotation\n```\n\n\n```python\nzRotation*xUnit\n```\n\n\n```python\n# This should not work.\nxUnit*zRotation\n```\n\n\n```python\nxUnit.T*zRotation\n```\n\nWhat can we learn from this result? It is now known that:\n\n* The convention for positive angles is a *counterclockwise* rotation.\n* The rotation axis function in `sympy` as defined rotates *clockwise*\n* There are conventions about active and passive rotations.\n* Don't assume module functions will do what you want - always check.\n\nRather than rely on module functions, we can define our own rotation function. 
Using a function called \"isclose\" or Boolean indexing it is possible to clean up the arrays and remove small numerical values that should be zeros.\n\n\n```python\ndef rotation2D(theta):\n return np.array([[np.cos(theta), np.sin(theta)],\n [-np.sin(theta), np.cos(theta)]])\n```\n\n\n```python\ntestArray = rotation2D(0)\ntestArray\n```\n\n\n```python\nnp.dot(np.array([1,0]),testArray)\n```\n\n### DIY: Computing Rotation Matrices\n----\n\nCompute the following rotation matrices:\n\n* A rotation of 0$^\\circ$ about the origin.\n* A rotation of 45$^\\circ$ about the origin.\n* A rotation of 60$^\\circ$ about the origin.\n* A rotation of 90$^\\circ$ about the origin.\n* A rotation of 180$^\\circ$ about the origin.\n* A rotation of -270$^\\circ$ about the origin.\n\n\n```python\n# Put your code here.\n```\n\n### Cleaning up the Small Values\n---\n\nSometimes Numpy returns very small numbers instead of zeros.\n\n\nOne strategy is to remove small numbers less than some tolerance and set them equal to zero. Algorithms like these where you compare your data to a tolerance and then operate on the entries that meet certain criteria are not uncommon in numerical methods. This is the tradeoff between symbolic computation and numeric computation.\n\n\n```python\ntestArray[np.abs(testArray) < 1e-5] = 0\n\ntestArray\n```\n\nThe key is in the Boolean comparision using the `<` symbol. The expression returns a `numpy` array of `dtype=bool`. Let me say here that it is good to check the results of expressions if you are unsure.\n\n\n```python\nnp.abs(testArray) < 1e-5\n```\n\nWe can write a function to do this that is a bit more robust. Modifications are done in-place (by reference) so we just return the array passed to the function after some manipulation that we do by Boolean indexing.\n\n\n```python\ndef zeroArray(testArray):\n testArray[np.isclose(testArray, np.zeros_like(testArray))] = 0.0\n return testArray\n```\n\n\n```python\nmodifiedArray = rotation2D(np.pi/2)\nmodifiedArray\n```\n\n\n```python\nmodifiedArray = zeroArray(rotation2D(np.pi/2))\nmodifiedArray\n```\n\n### Rotations (Continued)\n----\n\nUsing the new `rotation2D` function and `zeroArray` function we can now write:\n\n\n```python\nzeroArray(np.dot(np.array([1,0]),rotation2D(np.pi/2)))\n```\n\nA collection of functions for performing transformations is available at http://www.lfd.uci.edu/~gohlke/. This can be imported and the functions used in your code. 
The fastest way to explore what is available is to import the file and then use autocomplete and docstring viewing functions from the Jupyter notebook.\n\n\n```python\nimport transformations as tfm\n```\n\n\n```python\ntfm.rotation_matrix(np.pi/2, [0,1,0])\n```\n\n\n```python\nzeroArray(tfm.rotation_matrix(np.pi/2, [0,1,0]))\n```\n\n### Symmetry Operations and Translations in Crystals\n---\n\nA generalized affine transformation in two dimensions can be thought of as an augmented matrix as:\n\n$$\\begin{bmatrix}\nr_1 & r_2 & t_x\\\\\nr_3 & r_4 & t_y\\\\\n0 & 0 & 1\\\\\n\\end{bmatrix}$$\n\nso you could imagine the following:\n\n$$\\begin{bmatrix} x'\\\\ y'\\\\ 1\\\\ \\end{bmatrix} =\n\\begin{bmatrix} 1 & 0 & t_x\\\\ 0 & 1 & t_y\\\\ 0 & 0 & 1\\\\ \\end{bmatrix} \n\\begin{bmatrix}x\\\\ y\\\\ 1\\\\ \\end{bmatrix} $$\n\nexpanding to:\n\n$$x' = x + t_x $$\n\nand\n\n$$y' = y + t_y $$\n\nWe can explicitly write the rotation components as listed earlier:\n\n$$\\begin{bmatrix} x'\\\\ y'\\\\ 1\\\\ \\end{bmatrix} =\n\\begin{bmatrix} \\cos{\\theta} & \\sin{\\theta} & t_x\\\\ -\\sin{\\theta} & \\cos{\\theta} & t_y\\\\ 0 & 0 & 1\\\\ \\end{bmatrix} \n\\begin{bmatrix}x\\\\ y\\\\ 1\\\\ \\end{bmatrix} $$\n\nwhere the $r_i$ represent the rotation matrix components and the $t_i$ represent the translations components. This can be thought of as a shearing operation in 3D. The Wikipedia article on this [topic](https://en.wikipedia.org/wiki/Affine_transformation) expands this idea a bit more.\n\n\n\nIn this format we can use a point description that looks like $(x, y, t)$ and matrix algebra to generate our transformed points. Using SymPy:\n\n\n```python\nalpha, t_x, t_y, x, y = sp.symbols('alpha t_x t_y x y')\n```\n\n\n```python\nsa = sp.sin(alpha)\nca = sp.cos(alpha)\nM = sp.Matrix([[ca, sa, t_x], [-sa, ca, t_y], [0, 0, 1]])\nV = sp.Matrix([x, y, 1])\n\nM*V\n```\n\n### Helper Classes, Drawing and Plane Groups\n----\n\nLet us explore a bit of how we can draw an image - and then we have everything we need to start building the plane group representations. To build pictoral representations of plane groups, two classes have been created to simplify the organization of the motif. \n\nBelow are the `Point` and `Triangle` classes for your use. A `Point` has storage for an $(x,y)$ position. Rather than building arrays and referencing specific positions within the array a more natural referencing of points is possible. If we define a point `p1 = Point(10,20)` we can access the points by `p1.x` and `p1.y`. The code is more easily read and debugged with this syntax.\n\nBuilding on the `Point` class we define a `Triangle` class. The `Triangle` permits access of each point and defines an `affine()` method that will take a `Numpy` array that represents a transformation matrix. This method returns a new instance of a `Triangle` and preserves the original points. Writing the code this way avoids explicit handling of the transformation matrices.\n\nThese two classes and methods are demonstrated in the second and third code blocks. 
Building on this demonstration you will be able to complete the homework.\n\n\n```python\n# Class definitions\n\nfrom math import sqrt\nimport numpy as np\n\nclass Point:\n \"\"\"\n A Point object to simplify storage of (x,y) positions.\n p1.x, p1.y, etc...\n \"\"\"\n def __init__(self,x_init,y_init):\n self.x = x_init\n self.y = y_init\n\n def shift(self, x, y):\n self.x += x\n self.y += y\n\n def __repr__(self):\n return \"\".join([\"Point(\", str(self.x), \",\", str(self.y), \")\"])\n \nclass Triangle:\n \"\"\"\n A Triangle class constructed from points. Helps organize information on\n triangles. Has a points() method for returning points in a form that \n can be used with polygon drawing from Python Image Library (PIL) and a \n method, affine(), that applies a matrix transformation to the points.\n \"\"\"\n def __init__(self, p1_init, p2_init, p3_init):\n self.p1 = p1_init\n self.p2 = p2_init\n self.p3 = p3_init\n \n def points(self):\n x1, y1 = self.p1.x, self.p1.y\n x2, y2 = self.p2.x, self.p2.y\n x3, y3 = self.p3.x, self.p3.y\n \n return [(x1,y1),(x2,y2),(x3,y3)]\n \n def affine(self, affineMatrix):\n \"\"\"\n Applies an affine transformation to a triangle and changes the \n points of the triangle. This code returns a new triangle. Uses\n Points to simplify augmenting the matrix and dot products.\n \"\"\"\n x1, y1 = self.p1.x, self.p1.y\n x2, y2 = self.p2.x, self.p2.y\n x3, y3 = self.p3.x, self.p3.y\n \n p1Vector = np.array([[x1, y1, 1]])\n p2Vector = np.array([[x2, y2 , 1]])\n p3Vector = np.array([[x3, y3, 1]])\n \n p1New = np.dot(affineMatrix, p1Vector.T)\n p2New = np.dot(affineMatrix, p2Vector.T)\n p3New = np.dot(affineMatrix, p3Vector.T)\n \n # This line needs to be cleaned up.\n newTriangle = Triangle(Point(p1New[0,0],p1New[1,0]),Point(p2New[0,0],p2New[1,0]),Point(p3New[0,0],p3New[1,0]))\n \n return newTriangle\n```\n\n### Using the Python Imaging Library\n----\n\nIn order that our transformations can be visualized, we will use the Python Imaging Library and the `polygon()` method. The `polygon()` method takes a list of points in the format returned by our `Triangle` class method `points()`. 
The code below is a very simple starting point for the student to begin building a representation of the plane groups.\n\n\n```python\n# Class demonstrations\n\n%matplotlib inline\n\nimport numpy as np\nimport math\nfrom PIL import Image, ImageDraw\n\nimage = Image.new('RGB', (500, 500), 'white')\ndraw = ImageDraw.Draw(image)\n\nt1 = Triangle(Point(10,10),Point(40,10),Point(10,50))\nt2 = Triangle(Point(10,10),Point(40,10),Point(10,50))\ntranslationMatrix = np.array([[1,0,50],[0,1,0],[0,0,1]])\nreflectionMatrix = np.array([[-1,0,0],[0,1,0],[0,0,1]])\n\nt2 = t1.affine(reflectionMatrix*translationMatrix)\n\nprint(t1.points(), t2.points())\n\ndraw.polygon(t1.points(), outline='black', fill='red')\ndraw.polygon(t2.points(), outline='black', fill='green')\n\nimage\n```\n\n\n```python\n# A slightly more advanced demonstration\n\n%matplotlib inline\n\nimport numpy as np\nimport math\nfrom PIL import Image, ImageDraw\n\nimage = Image.new('RGB', (500, 500), 'white')\ndraw = ImageDraw.Draw(image)\n\nt1 = Triangle(Point(70,10),Point(100,10),Point(70,50))\n\ntriangleList = [t1]\n\ntranslationMatrix = np.array([[1,0,10],[0,1,0],[0,0,1]])\nreflectionMatrixY = np.array([[-1,0,0],[0,1,0],[0,0,1]])\nreflectionMatrixX = np.array([[1,0,0],[0,-1,0],[0,0,1]])\n\nr90 = np.array([[0,-1,0],[1,0,0],[0,0,1]])\n\ntriangleList.append(t1.affine(r90))\ntriangleList.append((t1.affine(r90)).affine(r90))\ntriangleList.append((t1.affine(r90)).affine(r90).affine(r90))\n\ntempList = [triangle.affine(reflectionMatrixX) for triangle in triangleList]\ntriangleList.extend(tempList)\n\n# Using an affine transformation to center the triangles in the drawing\n# as canvas coordinates are (0,0) at the top left.\ncenterMatrix = np.array([[1,0,250],[0,1,250],[0,0,1]])\ndrawList = [triangle.affine(centerMatrix) for triangle in triangleList]\nfor triangle in drawList:\n draw.polygon(triangle.points(), outline='black', fill='red')\n\nimage\n```\n\n# Advanced Topics: The Metric Tensor\n\nStudents who learn crystallography for the first time are introduced to the topic through study of cubic crystal structures. This permits an appeal to our intuition about orthonormal (Euclidian) coordinate systems. This approach misses out on a more general method for teaching the topic where the d-spacing and the angle between directions can be worked out for any general crystal system. The method for describing distances in a general reference frame involves the **metric tensor**. The metric tensor defines how distances are measured in every direction and its components are the dot product of every combination of basis vector in the system of interest. We use the standard lattice parameter designations:\n\n$$\n[a, b, c, \\alpha, \\beta, \\gamma]\n$$\n\nwhere $\\gamma$ is the angle between $\\mathbf{a}$ and $\\mathbf{b}$, etc. 
This general system has basis vectors:\n\n$$\n\\mathbf{a},\\mathbf{b}, \\mathbf{c}\n$$\n\nThe standard geometric interpretation of an inner product of vectors is:\n\n$$\n\\mathbf{a} \\cdot \\mathbf{b} = |a| \\; |b| \\; \\cos{\\gamma}\n$$\n\nso that the metric tensor is:\n\n$$\ng_{ij} = \n\\begin{bmatrix} \n\\mathbf{a} \\cdot \\mathbf{a} & \n\\mathbf{a} \\cdot \\mathbf{b} & \n\\mathbf{a} \\cdot \\mathbf{c} \\\\ \n\\mathbf{b} \\cdot \\mathbf{a} & \n\\mathbf{b} \\cdot \\mathbf{b} & \n\\mathbf{b} \\cdot \\mathbf{c} \\\\ \n\\mathbf{c} \\cdot \\mathbf{a} & \n\\mathbf{c} \\cdot \\mathbf{b} & \n\\mathbf{c} \\cdot \\mathbf{c}\n\\end{bmatrix}\n$$\n\nCommitting this to memory is simple and it has an intuitive meaning in that each entry measures a different projection of a vector component onto an axis in a general crystal system within lattice coordinates. It is possible to write a small function in SymPy to illustrate this.\n\n\n```python\ndef metricTensor(a=1, b=1, c=1, alpha=sp.pi/2, beta=sp.pi/2, gamma=sp.pi/2):\n return sp.Matrix([[a*a, a*b*sp.cos(gamma), a*c*sp.cos(beta)], \\\n [a*b*sp.cos(gamma), b*b, b*c*sp.cos(alpha)], \\\n [a*c*sp.cos(beta), b*c*sp.cos(alpha), c*c]])\n```\n\n\n```python\nsp.var('a b c alpha beta gamma u v w h k l')\n```\n\n\n```python\nM = metricTensor(a=a,\n b=b,\n c=c,\n alpha=alpha,\n beta=beta,\n gamma=gamma\n )\nM\n```\n\n### Two Common Computations\n----\n\nUsing the Einstein summation convention, the square of the distance between two points located at the end of vectors $\\mathbf{p}$ and $\\mathbf{q}$ is given by:\n\n$$\nD^2 = (\\mathbf{q} - \\mathbf{p})_i \\; g_{ij} \\; (\\mathbf{q} - \\mathbf{p})_j\n$$\n\nThe dot product between two vectors is given by:\n\n$$\n\\mathbf{p} \\cdot \\mathbf{q} = p_i \\; g_{ij} \\; q_j\n$$\n\nNote that the vectors $\\mathbf{p}$ and $\\mathbf{q}$ are in lattice coordinates.\n\n### DIY: Angle Between Two Vectors\n---\n\nFind the expression for and compute the angle between two vectors in a general coordinate system. Use the function defined above or write a new one. You are encouraged to use SymPy or Numpy in your calculations. Refer to the earlier lectures that cover these topics for the implementation and technical details.\n\n\n```python\n# Put your code or markdown here.\n```\n\n### DIY: Compute the angle between the $(123)$ plane and the $(112)$ plane in a trigonal crystal system\n---\n\nThe trigonal crystal system has the least symmetry. Refer to standard texts for the pattern of lattice parameters.\n\n\n```python\nvectorOne = np.array([1,2,3])\nvectorTwo = np.array([1,1,2])\n\n# Put your code here.\n```\n\n# Advanced Topics: Euler Angles\n\nThis discussion is derived primarily from Arfken, Chapter 3.3. The figure below is from Arfken:\n\n\n\nThere are three successive rotations used in the Euler angle formalism - the product of these three rotations is the single operation that transforms one set of coordinates $(x,y,z)$ into another, $(x',y',z')$. The order is important and is the difference between active and passive rotations.\n\nIn steps as shown in Figure 3.7 from Arfken:\n\n1. The first rotation is about $x_3$. In this case the $x'_3$ and $x_3$ axes coincide. The angle $\\alpha$ is taken to be positive (counterclockwise). Our new coordinate system is $(x'_1,x'_2,x'_3)$.\n1. The coordinates are now rotated through an angle $\\beta$ around the $x'_2$ axis. Our new coordinate system is now $(x''_1,x''_2,x''_3)$.\n1. The final rotation is through the angle $\\gamma$ about the $x'''_3$ axis. 
Our coordinate system is now the $(x'''_1,x'''_2,x'''_3)$. In the case pictured above the $x''_3$ and $x'''_3$ axes coincide.\n\nFor example:\n\n\n```python\nalpha, beta, gamma = sp.symbols('alpha beta gamma')\n\ndef rZ(angle):\n sa = sp.sin(angle)\n ca = sp.cos(angle)\n M = sp.Matrix([[ca, sa, 0],\n [-sa, ca, 0],\n [0, 0, 1]])\n return M\n\ndef rY(angle):\n sb = sp.sin(angle)\n cb = sp.cos(angle)\n M = sp.Matrix([[cb, 0, -sb],\n [0, 1, 0],\n [sb, 0, cb]])\n return M\n\n```\n\n\n```python\n(rZ(alpha), rY(beta), rZ(gamma))\n```\n\nYou'll find that the symbolic triple matrix product matches up with the results in Arfken for the definition of Euler angles $\\alpha$, $\\beta$, $\\gamma$. Note also that this is much easier to compute than by hand! Also - less likely to result in errors. \n\n\n```python\nrZ(gamma)*rY(beta)*rZ(alpha)\n```\n\nTo convert this symbolic expression to a numerical expression, one option is to use `lambdafy` from SymPy:\n\n\n```python\neulerAngles = sp.lambdify((alpha,beta,gamma), rZ(gamma)*rY(beta)*rZ(alpha), \"numpy\")\n```\n\n\n```python\nnp.array([1,0,0]).dot(eulerAngles(np.pi/2.0,0,0))\n```\n", "meta": {"hexsha": "6e59154ec1a3f02d62d10728e2f5fed0baf04ef9", "size": 34104, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Lecture-09-Matrices-and-Rotations.ipynb", "max_stars_repo_name": "mathinmse/mathinmse.github.io", "max_stars_repo_head_hexsha": "837e508bfeeb7d108019fb9bc499066b2b653551", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 23, "max_stars_repo_stars_event_min_datetime": "2017-07-19T04:04:38.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-18T19:33:43.000Z", "max_issues_repo_path": "Lecture-09-Matrices-and-Rotations.ipynb", "max_issues_repo_name": "mathinmse/mathinmse.github.io", "max_issues_repo_head_hexsha": "837e508bfeeb7d108019fb9bc499066b2b653551", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2019-04-08T15:21:45.000Z", "max_issues_repo_issues_event_max_datetime": "2020-03-03T20:19:00.000Z", "max_forks_repo_path": "Lecture-09-Matrices-and-Rotations.ipynb", "max_forks_repo_name": "mathinmse/mathinmse.github.io", "max_forks_repo_head_hexsha": "837e508bfeeb7d108019fb9bc499066b2b653551", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 11, "max_forks_repo_forks_event_min_datetime": "2017-07-27T02:27:49.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-27T08:16:40.000Z", "avg_line_length": 36.0888888889, "max_line_length": 909, "alphanum_fraction": 0.5905758855, "converted": true, "num_tokens": 6077, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9230391579526935, "lm_q2_score": 0.9005297821135385, "lm_q1q2_score": 0.8312242517934032}} {"text": "## CCNSS 2018 Module 1: Neurons, synapses and networks\n# Tutorial 1: Wilson-Cowan equations\n[source](https://colab.research.google.com/drive/16strzPZxTEqR2owgSh6NNLlj2j7MNOQb)\n\nPlease execute the cell below to initalise the notebook environment.\n\n\n```\nimport matplotlib.pyplot as plt # import matplotlib\nimport numpy as np # import numpy\nimport scipy as sp # import scipy\nimport math # import basic math functions\nimport random # import basic random number generator functions\n\nfig_w, fig_h = (6, 4)\nplt.rcParams.update({'figure.figsize': (fig_w, fig_h)})\n```\n\n## Objectives\nIn this notebook we will introduce the *Wilson-Cowan* rate model, and use it to learn more about phase planes, nullclines, and attractors.\n\n** Background paper:** \n* Wilson H and Cowan J (1972) Excitatory and inhibitory interactions in localized populations of model neurons. Biophysical Journal 12.\n\n\n## Background\n\nThe Wilson-Cowan equations model the mean-field (i.e., average across the population) dynamics of two coupled populations of excitatory (E) and inhibitory (I) neurons:\n\n\\begin{align}\n&\\tau_E \\frac{dE}{dt} = -E + (1 - r E) F(w_{EE}E -w_{EI}I + I_{ext};a,\\theta)\\\\\n&\\tau_I \\frac{dI}{dt} = -I + (1 - r I) F(w_{IE}E -w_{II}I;a,\\theta)\n\\end{align}\n\n$E(t)$ represents the average activation of the excitatory population, and $I(t)$ the activation of the inhibitory population. The parameters $\\tau_E$ and $\\tau_I$ control the timescales of each population. The connection strengths are given by: $w_{EE}$ (E to E), $w_{EI}$ (I to E), $w_{IE}$ (E to I), and $w_{II}$ (I to I). Refractory effects are modelled through the parameter $r$, and $I_{ext}$ represents external input to the excitatory population. \n\n\n\nThe function F describes the population activation function. We assume F to be a sigmoidal function, which is parameterized by its gain $a$ and threshold $\\theta$.\n\n$$ F(x;a,\\theta) = \\frac{1}{1+\\exp\\{-a(x-\\theta)\\}} - \\frac{1}{1+\\exp\\{a\\theta\\}}$$\n\nThe argument $x$ represents the input to the population. Note that the the second term is chosen so that $F(0;a,\\theta)=0$.\n\nTo start, execute the cell below to initialise the simulation parameters.\n\n\n```\ndt = 0.1\n\n# Connection weights\nwEE = 12\nwEI = 4\nwIE = 13\nwII = 11\n\n# Refractory parameter\nr = 1\n\n# External input\nI_ext = 0\n\n# Excitatory parameters\ntau_E = 1 # Timescale of excitatory population\na_E = 1.2 # Gain of excitatory population\ntheta_E = 2.8 # Threshold of excitatory population\n\n# Inhibitory parameters\ntau_I = 1 # Timescale of inhibitory population\na_I = 1 # Gain of inhibitory population\ntheta_I = 4 # Threshold of inhibitory population\n```\n\n**EXERCISE 1** \n\nFill in the function below to define the activation function F as a function of its input x, and arguments a, and $\\theta$. Verify your function by evaluating the excitatory activation function for $x = 0,3,6$. Then plot F for both E and I population parameters over $0 \\leq x \\leq 10$. 
\n\n\n```\ndef F(x,a,theta): \n \"\"\"Population activation function.\n\n Arguments:\n x -- the population input\n a -- the gain of the function\n theta -- the threshold of the function\n \n Returns:\n y -- the population activation response\n \"\"\"\n # insert your code here\n \n return y\n \n# insert your code here\n```\n\n**EXPECTED OUTPUT**\n\n```\n0.0\n0.5261444259857104\n0.9453894296980492\n```\n\n\n**Exercise 2:** Fill in the function below to simulate the dynamics of the Wilson-Cowan equation for up to $t_{max}=15$ with steps of $dt$. Remember from the LIF tutorial that we can numerically integrate the ODEs by replacing the derivatives with their discretized approximations:\n\n\\begin{align}\n&\\frac{dE}{dt} \\to \\frac{E[k+\\Delta t]-E[k]}{\\Delta t} \\hspace{5 mm}\\text{ and }\\hspace{5mm}\\frac{dI}{dt} \\to \\frac{I[k+\\Delta t]-I[k]}{\\Delta t}\\\\\n\\end{align}\n\nThen simulate the dynamics of the population starting from initial condition $E_0=I_0=0.2$ and plot the results. What is the steady state solution? Then, also plot the dynamics starting from $E_0=I_0=0.25$ and plot the solution (in dashed lines). Now what is the steady state solution?\n\n\n```\ndef simulate_wc(t,E0,I0):\n \"\"\"Simulate the Wilson-Cowan equations.\n \n Arguments:\n t -- time (vector)\n E0 -- initial condition weeof the excitatory population\n I0 -- initial condition of the inhibitory population\n \n Returns:\n E -- Activity of excitatory population (vector)\n I -- Activity of inhibitory population (vector)\n \"\"\"\n # insert your code here\n \n return E,I\n\n# insert your code here\n```\n\n**EXPECTED OUTPUT**\n\n\n\n**Exercise 3:** Now use the same function to simulate the Wilson Cowan equations for different initial conditions from $0.01 \\leq E_0 \\leq 1$ and $0.01 \\leq I_0 \\leq 1$ with stepsize 0.1. For each initial condition, find the steady state value to which $E$ and $I$ converge. There are several ways to do this. A simple way to do this is to check, for each initial condition, that the last two points in the simulation are within 1% of each other:\n\n$$ \\frac{E(t_{max})-E(t_{max}-dt)}{E(t_{max})} \\leq 0.01 $$\n\nUse the following code within your for loops to throw an error in case the trajectories have not converged:\n``raise ValueError('Has not converged.')``\n\nThen you can just keep increasing $t_{max}$ until every initial condition converges. Plot the steady state values ($E$ vs. $I$) What fixed points do you see?\n\n\n```\n# insert your code here\n```\n\n**EXPECTED OUTPUT**\n\n\n\n**Exercise 4**: To make the phase plane plot, we first need to determine the inverse of F. To calculate the inverse, set y = F(x), and then solve for x. Then, fill out the function below to define the inverse activation function $F^{-1}$. Check that this is the correct inverse function by testing $F^{-1}$ for $x=0,0.5,0.9$, and then plotting x against $F^{-1}(F(x))$ for $0\\leq x\\leq1$ (use the excitatory population parameters).\n\n\n```\ndef F_inv(x,a,theta): \n \"\"\"Define the inverse of the population activation function.\n\n Arguments:\n x -- the population input\n a -- the gain of the function\n theta -- the threshold of the function\n \n Returns:\n y -- value of the inverse function\n \"\"\"\n # insert your code here\n \n return y\n \n# insert your code here\n```\n\n**EXPECTED OUTPUT**\n\n```\n0.0\n2.9120659956266\n5.002378884081663\n```\n\n\n\n\n**Exercise 5:** Now, derive the E and I nullclines, in terms of the inverse function $F^{-1}$. 
To do this, set $\\frac{dE}{dt}=0$ and solve for $I$, then set $\\frac{dI}{dt}=0$ and solve for $E$. Then, fill out the two functions below to calculate the I nullcline (over $-0.01 \\leq I \\leq 0.3$) and E nullcline (over $-0.01 \\leq E \\leq 0.48$). First test the value of the I nullcline for $I=0.1$, then test the E nullcline for $E=0.1$. Then use these functions to plot the nullclines in phase space (E vs. I). What fixed points do you see? Compare the intersections of the nullclines with the steady state values you observed numerically in Exercise 3.\n\n\n\n```\ndef get_E_nullcline(E):\n \"\"\"Solve for I along the E nullcline (dE/dt = 0).\n \n Arguments:\n E -- values of E over which the nullcline is computed\n \n Returns:\n I -- values of I along the nullcline for each E\n \"\"\"\n # insert your code here\n \n return I\n \n \ndef get_I_nullcline(I):\n \"\"\"Solve for E along the I nullcline (dI/dt = 0).\n \n Arguments:\n I -- values of I over which the nullcline is computed\n \n Returns:\n E -- values of E along the nullcline for each I\n \"\"\"\n # insert your code here\n \n return E\n\n# insert your code here\n```\n\n**EXPECTED OUTPUT**\n```\n0.24546433162390224\n-0.029802383619274175\n```\n\n\n\n**Exercise 6:** Now, on top of the nullclines, plot some sample trajectories starting with different initial conditions, for $0 \\leq E_0 \\leq 1$ and $0 \\leq I_0 \\leq 1$. How many attractors do you see?\n\n\n```\n# insert your code here\n```\n\n**EXPECTED OUTPUT**\n\n\n\n**Exercise 7:** Repeat the previous exercise while varying the recurrent excitatory connectivity over the following values: $w_{EE}=5,10,12,15$. What is happening? Can you find a value of wEE where a qualitative transformation occurs? What does this tell you about how increasing recurrent connectivity affects the dynamics? \n\n\n```\n# insert your code here\n```\n\n**EXPECTED OUTPUT**\n\n\n\n\n\n", "meta": {"hexsha": "4f4c50568f75712613ffaeebc91903ae0f379607", "size": 17894, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "module1/1_wilson-cowan_equations/1_wilson-cowan_equations.ipynb", "max_stars_repo_name": "ruyuanzhang/ccnss2018_students", "max_stars_repo_head_hexsha": "978b2414ade6116da01c19a945304f9c514fb93f", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 12, "max_stars_repo_stars_event_min_datetime": "2018-07-01T10:51:09.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-15T22:57:17.000Z", "max_issues_repo_path": "module1/1_wilson-cowan_equations/1_wilson-cowan_equations.ipynb", "max_issues_repo_name": "marcelomattar/ccnss2018_students", "max_issues_repo_head_hexsha": "978b2414ade6116da01c19a945304f9c514fb93f", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "module1/1_wilson-cowan_equations/1_wilson-cowan_equations.ipynb", "max_forks_repo_name": "marcelomattar/ccnss2018_students", "max_forks_repo_head_hexsha": "978b2414ade6116da01c19a945304f9c514fb93f", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 13, "max_forks_repo_forks_event_min_datetime": "2018-05-15T02:54:07.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-15T22:57:19.000Z", "avg_line_length": 33.5093632959, "max_line_length": 668, "alphanum_fraction": 0.4988264223, "converted": true, "num_tokens": 2305, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9161096204605947, "lm_q2_score": 0.9073122301325987, "lm_q1q2_score": 0.8311974627860308}} {"text": "# 14 Partial Differential Equations \u2014 1\n\n## Solving Laplace's or Poisson's equation\n\n**Poisson's equation** for the electric potential $\\Phi(\\mathbf{r})$ and the charge density $\\rho(\\mathbf{r})$:\n\n$$\n\\nabla^2 \\Phi(x, y, z) = -4\\pi\\rho(x, y, z)\\\\\n$$\n\nFor a region of space without charges ($\\rho = 0$) this reduces to **Laplace's equation**\n\n$$\n\\nabla^2 \\Phi(x, y, z) = 0\n$$\n\n\nSolutions depend on the **boundary conditions**: \n\n* the *value of the potential* on the *boundary* or \n* the *electric field* (i.e. the derivative of the potential, $\\mathbf{E} = -\\nabla\\Phi$ *normal to the surface* ($\\mathbf{n}\\cdot\\mathbf{E}$), which directly follows from the charge distribution).\n\n### Example: 2D Laplace equation\n$$\n\\frac{\\partial^2 \\Phi(x,y)}{\\partial x^2} + \\frac{\\partial^2 \\Phi(x,y)}{\\partial y^2} = 0\n$$\n(\"elliptic PDE\")\n\nBoundary conditions:\n* square area surrounded by wires\n* three wires at ground (0 V), one wire at 100 V\n\n## Finite difference algorithm for Poisson's equation\nDiscretize space on a lattice (2D) and solve for $\\Phi$ on each lattice site.\n\nTaylor-expansion of the four neighbors of $\\Phi(x, y)$:\n\n\\begin{align}\n\\Phi(x \\pm \\Delta x, y) &= \\Phi(x, y) \\pm \\Phi_x \\Delta x + \\frac{1}{2} \\Phi_{xx} \\Delta x^2 + \\dots\\\\\n\\Phi(x, y \\pm \\Delta y) &= \\Phi(x, y) \\pm \\Phi_y \\Delta x + \\frac{1}{2} \\Phi_{yy} \\Delta x^2 + \\dots\\\\\n\\end{align}\n\nAdd equations in pairs: odd terms cancel, and **central difference approximation** for 2nd order partial derivatives (to $\\mathcal{O}(\\Delta^4)$):\n\n\\begin{align}\n\\Phi_{xx}(x,y) = \\frac{\\partial^2 \\Phi}{\\partial x^2} & \\approx \n \\frac{\\Phi(x+\\Delta x,y) + \\Phi(x-\\Delta x,y) - 2\\Phi(x,y)}{\\Delta x^2} \\\\\n\\Phi_{yy}(x,y) = \\frac{\\partial^2 \\Phi}{\\partial y^2} &\\approx \n \\frac{\\Phi(x,y+\\Delta y) + \\Phi(x,y-\\Delta y) - 2\\Phi(x,y)}{\\Delta y^2}\n\\end{align}\n\nTake $x$ and $y$ grids of equal spacing $\\Delta$: Discretized Poisson equation\n\n$$\n\\begin{split}\n\\Phi(x+\\Delta x,y) + \\Phi(x-\\Delta x,y) +\\Phi(x,y+\\Delta y) &+ \\\\\n +\\, \\Phi(x,y-\\Delta y) - 4\\Phi(x,y) &= -4\\pi\\rho(x,y)\\,\\Delta^2\n \\end{split}\n$$\n\nOr written for lattice sites $(i, j)$ where \n\n$$\nx = x_0 + i\\Delta\\quad\\text{and}\\quad y = y_0 + j\\Delta, \\quad 0 \\leq i,j < N_\\text{max}\n$$\n\n$$\n\\Phi_{i+1,j} + \\Phi_{i-1,j} + \\Phi_{i,j+1} + \\Phi_{i,j-1} - 4 \\Phi_{i,j} = -4\\pi\\rho_{ij} \\Delta^2\n$$\n\nDefines a system of $N_x \\times N_y$ simultaneous algebraic equations for $\\Phi_{ij}$ to be solved.\n\nCan be solved directly via matrix approaches (and then is the best solution) but can be unwieldy for large grids.\n\nAlternatively: **iterative solution**:\n\n$$\n\\begin{split}\n4\\Phi(x,y) &= \\Phi(x+\\Delta x,y) + \\Phi(x-\\Delta x,y) +\\\\\n &+ \\Phi(x,y+\\Delta y) + \\Phi(x,y-\\Delta y) + 4\\pi\\rho(x,y)\\,\\Delta^2\n\\end{split}\n$$\n\n$$\n\\Phi_{i,j} = \\frac{1}{4}\\Big(\\Phi_{i+1,j} + \\Phi_{i-1,j} + \\Phi_{i,j+1} + \\Phi_{i,j-1}\\Big)\n + \\pi\\rho_{i,j} \\Delta^2\n$$\n\n* Converged solution at $(i, j)$ will be the average potential from the four neighbor sites + charge density contribution.\n* *Not a direct solution*: iterate and hope for convergence.\n\n#### Jacobi method\nDo not change $\\Phi_{i,j}$ until a complete sweep has been completed.\n\n$$\n\\Phi_{i,j} = \\frac{1}{4}\\Big(\\Phi_{i+1,j} + 
\\Phi_{i-1,j} + \\Phi_{i,j+1} + \\Phi_{i,j-1}\\Big)\n + \\pi\\rho_{i,j} \\Delta^2\n$$\n\n#### Gauss-Seidel method\n\n$$\n\\Phi_{i,j} = \\frac{1}{4}\\Big(\\Phi_{i+1,j} + \\Phi_{i-1,j} + \\Phi_{i,j+1} + \\Phi_{i,j-1}\\Big)\n + \\pi\\rho_{i,j} \\Delta^2\n$$\n\nImmediately use updated new values for $\\Phi_{i-1, j}$ and $\\Phi_{i, j-1}$ (if starting from $\\Phi_{1, 1}$).\n\nLeads to *accelerated convergence* and therefore *less round-off error* (but distorts symmetry of boundary conditions... hopefully irrelevant when converged but check!)\n\n### Solution via relaxation (Gauss-Seidel) \n\nSolve the box-wire problem on a lattice: The wire at $x=0$ (the $y$-axis) is at 100 V, the other three sides of the box are grounded (0 V).\n\nNote: $\\rho=0$ inside the box.\n\nNote for Jupyter notebook use:\n* For interactive 3D plots, select\n ```\n %matplotlib widget\n ```\n or if the above fails, try\n ```\n %matplotlib notebook\n ```\n* For standard inline figures (e.g. for exporting the notebook to LaTeX/PDF or html) use \n ```\n %matplotlib inline\n ``` \n \nEnable a matplotlib-Jupyter integration that works for you (try `conda install ipympl` or `pip install ipympl` first to get `%matplotlib widget` working).\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\n```\n\n\n```python\n# for interactive work\n\n%matplotlib widget\n#%matplotlib inline\n#%matplotlib notebook\n```\n\n\n```python\n# for plotting/saving\n%matplotlib inline\n```\n\n#### Wire on a box: Solution of Laplace's equation with the Gauss-Seidel algorithm\n\n* set boundary conditions\n* Implement Gauss-Seidel algorithm\n* visualize solution\n* does it make sense?\n* try higher `Max_iter`\n\n\n```python\nNmax = 100\nMax_iter = 70\nPhi = np.zeros((Nmax, Nmax), dtype=np.float64)\n\n# initialize boundaries\n# everything starts out zero so nothing special for the grounded wires\nPhi[0, :] = 100 # wire at x=0 at 100 V\n\nNx, Ny = Phi.shape\n\nfor n_iter in range(Max_iter):\n for xi in range(1, Nx-1):\n for yj in range(1, Ny-1):\n Phi[xi, yj] = 0.25*(Phi[xi+1, yj] + Phi[xi-1, yj] \n + Phi[xi, yj+1] + Phi[xi, yj-1])\n```\n\n#### Visualization of the potential \n\n\n```python\n# plot Phi(x,y)\nx = np.arange(Phi.shape[0])\ny = np.arange(Phi.shape[1])\nX, Y = np.meshgrid(x, y)\n\nZ = Phi[X, Y]\n```\n\n\n```python\nfig = plt.figure()\nax = fig.add_subplot(111, projection='3d')\nax.plot_wireframe(X, Y, Z, rstride=2, cstride=2)\n\nax.set_xlabel('X')\nax.set_ylabel('Y')\nax.set_zlabel(r'potential $\\Phi$ (V)')\n\nax.view_init(elev=40, azim=-65)\n```\n\n### Surfaces and 2D contours \n\nNicer plot (use this code for other projects):\n\n\n\n```python\nfig = plt.figure()\nax = fig.add_subplot(111, projection='3d')\n\nax.plot_wireframe(X, Y, Z, rstride=2, cstride=2, linewidth=0.5, color=\"gray\")\nsurf = ax.plot_surface(X, Y, Z, cmap=plt.cm.coolwarm, alpha=0.6)\ncset = ax.contourf(X, Y, Z, zdir='z', offset=-50, cmap=plt.cm.coolwarm)\n\nax.set_xlabel('X')\nax.set_ylabel('Y')\nax.set_zlabel(r'potential $\\Phi$ (V)')\nax.set_zlim(-50, 100)\nax.view_init(elev=40, azim=-65)\n\ncb = fig.colorbar(surf, shrink=0.5, aspect=5)\ncb.set_label(r\"potential $\\Phi$ (V)\")\n```\n\n(Note that the calculation above is is *not converged* ... 
see next lecture.)\n", "meta": {"hexsha": "848233c434018e769e6e99afa9fe5ab841dc6201", "size": 155599, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "14_PDEs/14_PDEs-1.ipynb", "max_stars_repo_name": "Py4Phy/PHY432-resources", "max_stars_repo_head_hexsha": "c26d95eaf5c28e25da682a61190e12ad6758a938", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "14_PDEs/14_PDEs-1.ipynb", "max_issues_repo_name": "Py4Phy/PHY432-resources", "max_issues_repo_head_hexsha": "c26d95eaf5c28e25da682a61190e12ad6758a938", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2022-03-03T21:47:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-03T21:47:56.000Z", "max_forks_repo_path": "14_PDEs/14_PDEs-1.ipynb", "max_forks_repo_name": "Py4Phy/PHY432-resources", "max_forks_repo_head_hexsha": "c26d95eaf5c28e25da682a61190e12ad6758a938", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 294.6950757576, "max_line_length": 74728, "alphanum_fraction": 0.9277116177, "converted": true, "num_tokens": 2158, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.930458253565792, "lm_q2_score": 0.8933094003735664, "lm_q1q2_score": 0.8311871045654935}} {"text": "# \u4e60\u9898\n\n- Q-1\n\n\n\n```python\nfrom sympy import *\n\nx = symbols('x')\nf_x = x**3 - 3 * x**2 - x - 1\ng_x = 3 * x**2 - 2 * x + 1\nq_x, r_x = div(f_x, g_x)\nq_x.as_poly()\n```\n\n\n```python\nr_x.as_poly()\n```\n\n- Q-2\n\n\n\n```python\nfrom sympy import *\n\nx, m, p, q = symbols('x, m, p, q')\nf_x = x**3 + p * x + q\ng_x = x**2 + m * x - 1\nq_x, r_x = div(f_x, g_x)\nr_x.as_poly(x)\n```\n\n> \u8bf4\u660e=>r_x = 0, \u7531\u6b64\u5f97\u51fa\u5404\u9879\u7cfb\u6570\u4e3a 0\n\n\n- Q-3\n\n\n\n```python\nfrom sympy import *\n\nx = symbols('x')\nf_x = x**3 - x**2 - x\ng_x = x - 1 + 2 * I\nq_x, r_x = div(f_x, g_x, domain='ZZ')\nq_x\n```\n\n\n```python\nr_x\n```\n\n- Q-4\n- Q-4-3\n\n\n\n```python\nfrom sympy import *\n\nx = symbols('x')\nf_x = x**5\nx0 = 1\nseries(f_x, x=x, x0=x0)\n```\n\n- Q-4-3\n\n\n\n```python\nfrom sympy import *\n\nx = symbols('x')\nf_x = x**4 + 2 * I * x**3 - (1 + I) * x**2 - 3 * x + 7 + I\nseries(f_x, x=x, x0=-I)\n```\n\n- Q-5\n\n\n\n```python\nfrom sympy import *\n\nx = symbols('x')\nf_x = x**4 - 10 * x**2 + 1\ng_x = x**4 - 4 * sqrt(2) * x**3 + 6 * x**2 + 4 * sqrt(2) * x + 1\ngcd(f_x.as_poly(x), g_x.as_poly(x))\n\n```\n\n\n\n\n$\\displaystyle \\operatorname{Poly}{\\left( x^{2} - 2 \\sqrt{2} x - 1, x, domain=\\mathtt{\\text{EX}} \\right)}$\n\n\n\n- Q-6\n\n\n\n```python\nfrom sympy import *\n\nx = symbols('x')\nf_x = x**4 - x**3 - 4 * x**2 + 4 * x + 1\ng_x = x**2 - x - 1\ngcdex(f_x, g_x)\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "4f222c0beaadcfdd88186f4773f34e4878053238", "size": 35808, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "college/Linear-Algebra/theory/ch01/hw/homework.ipynb", "max_stars_repo_name": "dzylikecode/Math-Learning", "max_stars_repo_head_hexsha": "0948c4fd478529f1f1bab038d4cc950c12fb394e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": 
"college/Linear-Algebra/theory/ch01/hw/homework.ipynb", "max_issues_repo_name": "dzylikecode/Math-Learning", "max_issues_repo_head_hexsha": "0948c4fd478529f1f1bab038d4cc950c12fb394e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "college/Linear-Algebra/theory/ch01/hw/homework.ipynb", "max_forks_repo_name": "dzylikecode/Math-Learning", "max_forks_repo_head_hexsha": "0948c4fd478529f1f1bab038d4cc950c12fb394e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 101.7272727273, "max_line_length": 5902, "alphanum_fraction": 0.8552837355, "converted": true, "num_tokens": 602, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.927363293639213, "lm_q2_score": 0.896251371055247, "lm_q1q2_score": 0.8311506233904543}} {"text": "## Import libraries\n\n\n```python\nfrom matplotlib import pyplot as plt\nimport numpy as np\n\n%matplotlib inline\n```\n\n## Define function\nThe goal is to create a plot of the following function\n\\begin{equation}\nf(x)=0.2+0.4x^2+0.3x\\cdot\\sin(15x)+0.05\\cos(50x)\n\\end{equation}\n\n\n```python\nx = np.linspace(0, 1, 100)\ny = 0.2+0.4*x**2+0.3*x*np.sin(15*x)+0.05*np.cos(50*x)\n```\n\n## Produce figure\n\n\n```python\nplt.figure(figsize=(6, 6))\nplt.plot(x, y)\nplt.show()\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "b6dca8dda8d1d538be59f1e6e79a37bdff721664", "size": 1671, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "function-plot.ipynb", "max_stars_repo_name": "ibqn/jupyterlab-ext", "max_stars_repo_head_hexsha": "a4e6bb59331334de9996c2dcd0009959ba56fe34", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2019-09-14T01:23:56.000Z", "max_stars_repo_stars_event_max_datetime": "2021-04-28T05:59:15.000Z", "max_issues_repo_path": "function-plot.ipynb", "max_issues_repo_name": "ibqn/jupyterlab-ext", "max_issues_repo_head_hexsha": "a4e6bb59331334de9996c2dcd0009959ba56fe34", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2020-03-31T10:57:26.000Z", "max_issues_repo_issues_event_max_datetime": "2021-08-24T13:38:46.000Z", "max_forks_repo_path": "function-plot.ipynb", "max_forks_repo_name": "hso-nn/jupyterlab-codecellbtn-extra", "max_forks_repo_head_hexsha": "7e3b5e87d47add71536b29fc6e5f196c298c11ec", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2019-11-30T03:09:35.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-23T07:49:17.000Z", "avg_line_length": 18.3626373626, "max_line_length": 63, "alphanum_fraction": 0.499700778, "converted": true, "num_tokens": 171, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9566341987633823, "lm_q2_score": 0.8688267762381843, "lm_q1q2_score": 0.8311494069507879}} {"text": "```python\nfrom sympy import *\nimport pandas as pd\nimport numpy as np\ninit_printing(use_unicode=True)\n\nx, y = symbols('x y')\n```\n\n\n```python\ndef check_maximum(f,interval,symbol):\n possiveis_max = []\n borda1 = (f.subs(symbol,interval.args[0]).evalf())\n borda2 = (f.subs(symbol,interval.args[1]).evalf())\n\n possiveis_max.append(borda1)\n possiveis_max.append(borda2)\n f_ = diff(f)\n zeros = solve(f_)\n for zero in zeros:\n if str(type(zero)) == \"\":\n zero = zero.evalf()\n if zero in interval:\n possiveis_max.append(f.subs(symbol,zero).evalf())\n\n possiveis_sem_complex = []\n for ele in possiveis_max:\n if str(type(ele)) != \"\":\n possiveis_sem_complex.append(float(ele))\n return Matrix(possiveis_sem_complex)\n\ndef df_from_M(M, func = None, symb = symbols('x')):\n x = symb\n M = transpose(M)\n M = np.array(M).astype(np.float64)\n try:\n df = pd.DataFrame(M, columns=['x', 'f(x)'])\n except:\n df = pd.DataFrame(M, columns=['x'])\n df['f(x)'] = ''\n for i in range(df.shape[0]):\n df.loc[i, 'f(x)'] = Rational(func.subs(x, Rational(df.loc[i, 'x'])))\n return df\n\nclass f_newton:\n def __init__(self, ind, xlist, ylist):\n n = len(xlist)\n xlist = [Rational(n) for n in xlist]\n ylist = [Rational(n) for n in ylist]\n self.n = n\n name = 'f['\n for i in range(n):\n name += 'x{},'.format(ind + i)\n name = name[:-1]\n name += ']'\n self.name = name\n self.xlist = xlist\n self.buffer = np.array([ylist,[0 for i in range(len(ylist))]]).transpose()\n self.list_ = xlist\n self.nivel = 0\n self.acha_val()\n\n def acha_val(self):\n while self.buffer.shape[0] >1:\n self.nivel += 1\n xlist = self.xlist\n buffer = self.buffer\n for i in range(buffer.shape[0]-1):\n buffer[i,1] = (buffer[i+1,0] - buffer[i,0])/(xlist[i+self.nivel]-xlist[i])\n buffer = np.hstack([buffer[:-1,1:],np.zeros(buffer[:-1,1:].shape)])\n self.buffer = buffer\n self.val = self.buffer[0,0]\n return self.val\n\nclass interpolador():\n def __init__(self, matrix, func=None,symb = symbols('x')):\n df = df_from_M(matrix, func, symb)\n self.df = df\n self.symb = symb\n min_ = df['x'].min()\n max_ = df['x'].max()\n Inter = Interval(min_,max_)\n self.min_ = min_\n self.max_ = max_\n self.Inter = Inter\n self.func = func\n\n def lagrange(self):\n df = self.df\n x = self.symb\n\n\n df['Li(x)'] = ''\n p = 0\n for i in range(df.shape[0]):\n up = 1\n down = 1\n for j in range(df.shape[0]):\n if i != j:\n up *= (x-Rational(df.loc[j,'x']))\n down *= (Rational(df.loc[i,'x'])-Rational(df.loc[j,'x']))\n df.loc[i, 'Li(x)'] = simplify(up/down)\n p += (up/down)*Rational(df.loc[i, 'f(x)'])\n p = simplify(p)\n self.df = df\n self.p_lagr = p\n\n def newton(self):\n x = symbols('x')\n df = self.df\n xlist = df['x'].to_list()\n ylist = df['f(x)'].to_list()\n names = ['x','f(x)']\n n = len(xlist)\n arr = np.full((n,n-1), Rational(0))\n arr_ = np.full((n,n+1), Rational(0))\n for j in range(n):\n for i in range(n-j-1):\n if i == 0:\n names.append(f_newton(i, xlist[i:i+j+2],ylist[i:i+j+1]).name)\n arr[i,j] = Rational(f_newton(i, xlist[i:i+j+2],ylist[i:i+j+2]).acha_val())\n arr_[:,2:] = arr\n arr_[:,0:1] = np.array([xlist]).transpose()\n arr_[:,1:2] = np.array([ylist]).transpose()\n df = pd.DataFrame(arr_, columns=[names])\n p_new = 0\n termo = 1\n for i in range(arr_.shape[1]-1):\n p_new += Rational(arr_[0,i+1])*termo\n termo *= (x - Rational(xlist[i]))\n self.df = df\n self.p_new = simplify(p_new)\n\n def Erro(self):\n x = 
symbols('x')\n func = self.func\n df = self.df\n Inter = self.Inter\n\n if func != None:\n Erro = 1\n n = df.shape[0]\n func___ = func\n for i in range(n):\n try:\n Erro *= (x-Rational(df.loc[i,'x']))\n except:\n Erro *= (x-Rational(df.loc[i,'x'].values[0]))\n func___ = diff(func___)\n # Erro = abs(Erro)\n Erro /= Rational(factorial(n+1))\n maxi = max(abs(check_maximum(func___,Inter, x)))\n Erro *= maxi\n Erro = simplify(Erro)/2\n self.Erro = Erro\n return Erro\n\nclass romberg:\n def __init__(self, Ts):\n h = symbols('h')\n cols = ['h','T(h)','S(h)','W(h)']\n df = pd.DataFrame(columns = cols)\n df['T(h)'] = Ts\n for i in range(df.shape[0]):\n df.loc[i, 'h'] = h\n h *= 1/Rational(2)\n if i != df.shape[0] - 1:\n i += 1\n df.loc[i, 'S(h)'] = (4*df.loc[i, 'T(h)'] - df.loc[i-1, 'T(h)'])/Rational(3)\n df.loc[i, 'W(h)'] = (16*df.loc[i, 'S(h)'] - df.loc[i-1, 'S(h)'])/Rational(15)\n self.df = df\n\nclass gauss:\n def __init__(self, grau, Inter,func,symb = symbols('x')):\n x = symb\n t = symbols('t')\n cnj = {\n 2:{\n 0:1,\n 1:1\n },\n 3:{\n 0:0.5555555555555555555555,\n 1:0.8888888888888888888888,\n 2:0.5555555555555555555555\n },\n 4:{\n 0:0.3478548451,\n 1:0.6521451549,\n 2:0.6521451549,\n 3:0.3478548451\n }\n }\n xnj = {\n 2:{\n 0:0.5773502692,\n 1:-0.5773502692\n },\n 3:{\n 0:0.7745966692,\n 1:0,\n 2:-0.7745966692\n },\n 4:{\n 0:0.8611363116,\n 1:0.3399810436,\n 2:-0.3399810436,\n 3:-0.8611363116\n }\n }\n n = 0\n while 2*n-1 < grau:\n n +=1\n self.n = n\n res = 0\n a = Inter.args[0]\n b = Inter.args[1]\n var = ((b-a)*t + a + b)/2\n\n var_ = diff(var, t)\n func = func.subs(x, var)\n for i in range(n):\n res += cnj[n][i]*func.subs(t, xnj[n][i])\n res *= var_\n self.res = res\n\nclass euler1l:\n def __init__(self, x0, y0, h, func):\n x, y = symbols('x y')\n xlist = []\n ylist = []\n xlist.append(x0)\n ylist.append(y0)\n for i in range(1,11):\n ylist.append(ylist[-1] + h*func.subs(x, xlist[-1]).subs(y, ylist[-1]))\n xlist.append(xlist[-1] + h)\n df = pd.DataFrame()\n df['x'] = xlist\n df['y'] = ylist\n self.df = df\n self.xs = xlist\n self.ys = ylist\n\nclass eulermod:\n def __init__(self, x0, y0, h, func):\n x, y = symbols('x y')\n xlist = []\n ylist = []\n xlist.append(x0)\n ylist.append(y0)\n for i in range(1,10):\n ylist.append(ylist[-1] + (h/2)*(func.subs(x, xlist[-1]).subs(y, ylist[-1]) + func.subs(x, xlist[-1] + h).subs(y, ylist[-1] + h*func.subs(x, xlist[-1]).subs(y, ylist[-1]))))\n xlist.append(xlist[-1] + h)\n df = pd.DataFrame()\n df['x'] = xlist\n df['y'] = ylist\n self.df = df\n self.xs = xlist\n self.ys = ylist\n\nclass eulerM:\n def __init__(self, x0, y0, h,coef):\n xlist = []\n ylist = []\n xlist.append(x0)\n ylist.append(y0)\n dlist = []\n for i in range(1,11):\n dlist.append(ylist[-1]*coef)\n ylist.append(ylist[-1] + h*dlist[-1])\n xlist.append(xlist[-1] + h)\n df = pd.DataFrame()\n df['x'] = xlist\n df['y'] = ylist\n self.df = df\n self.xs = xlist\n self.ys = ylist\n```\n\n\n```python\nprint('-----------------------------------------------------')\nprint('Q2')\nA=[2/Rational(3), -5/Rational(3)]\nY0=[-7, -8]\nh=1/Rational(10)\n\nX0 = [0, 0]\na0 = eulerM(X0[0], Y0[0], h, A[0])\nn1 = a0.ys[2]\n\na1 = eulerM(X0[1], Y0[1], h, A[1])\nn2 = a1.ys[2]\n\nprint('Resp', n1+n2)\n\nerros = []\na0.ys[-1] + 7*exp(2/3)\n# for i in range(2):\n# segderi = Y0[i] * A[i]**2 * exp(A[i]*x)\n# Msegderi = max(abs(check_maximum(segderi,Interval(0,1),x)))\n# L = abs(A[i])\n# erro = h*Msegderi/(2*L)*(exp(L*1) - 1)\n# erros.append(erro.evalf())\n# 
print(erros)\n```\n\n\n```python\nprint('-----------------------------------------------------')\nprint('Q3')\n\ny0 = symbols('y0')\nfunc = y*x**(-2)\nx0 = 1/Rational(5)\nxf = 3/Rational(10)\nyf = -27/Rational(8)\nh = (xf - x0)/2\n\na = euler1l(x0, y0,h,func)\ny0_ = solve(a.df.loc[2,'y'] - yf)[0]\nprint(y0_)\na = eulermod(x0, y0_,h,func)\nprint(a.df.loc[2,'y'])\n```\n\n -5/6\n -1919/480\n\n\n\n```python\nprint('-----------------------------------------------------')\nprint('Q4')\nM = Matrix([\n [0, 1, 2, 3],\n])\nsube = 1/Rational(6) #Valor que deve ser substituido em X para obter a fra\u00e7\u00e3o desejada\na = interpolador(M, (Rational(8)/Rational(5))**x)\n\na.newton()\nprint()\nprint('Primeira resposta \u00e9 a primeira linha do dataframe abaixo.\\n', a.df)\nprint()\nprint()\nprint('Fra\u00e7\u00e3o que \u00e9 a resposta da quest\u00e3o do meio:')\nprint()\npprint(a.p_new.subs(x,Rational(sube)))\n\nprint()\nprint()\nb = a.Erro()\nprint('Erro no ponto x = 1/6: ', abs(b.subs(x, sube))*10)\nprint('Majoramento do erro no intervalo:', max(abs(check_maximum(b, Interval(M[0], M[-1]), x)))*10 )\n\n\n\n\n```\n\n Primeira linha df\n x f(x) f[x0,x1] f[x0,x1,x2] f[x0,x1,x2,x3]\n 0 -2 4/9 2/9 1/18 1/108\n 1 -1 2/3 1/3 1/12 0\n 2 0 1 1/2 0 0\n 3 1 3/2 0 0 0\n Fracao 353/288\n Debaixo 0.00168925015274316\n\n\n\n```python\nprint('-----------------------------------------------------')\nprint('Q5')\n\nx = symbols('x')\nfunc = Rational(1)/(Rational(16)/Rational(7) *x + Rational(6)/Rational(5))\nInter = Interval(8/3,49/18)\nfunc__ = func\npprint(func)\nfor i in range(3):\n func__ = diff(func__)\ncheck_maximum(func__, Inter, x)/(factorial(3))\n```\n\n 1 \n \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n 16\u22c5x 6\n \u2500\u2500\u2500\u2500 + \u2500\n 7 5\n\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}-0.00421607128879581\\\\-0.0039348666208735\\end{matrix}\\right]$\n\n\n\n\n```python\nprint('-----------------------------------------------------')\nprint('Q6')\nh, fa, fb, fx1, fx2, fx3 = symbols('h fa fb fx1, fx2, fx3')\n\nresps = [Rational(5)/2, Rational(5)/3, Rational(493)/336]\na = 1/Rational(2)\n\nEq1e = (fa + fb)*h\nEq1d = 2*a*resps[0]\n\nEq2e = h*(fa + fb) + h*2*fx2\nEq2d = 4*a*resps[1]\n\nEq3e = h*(fa + fb)+h*(+ 2*fx1 + 2*fx2+ 2*fx3)\nEq3d = 8*a*resps[2]\n\nEq3e *= 2\nEq3d *= 2\n\nEq3e += - Eq2e\nEq3d += - Eq2d\n\nEq3e *= 1/Rational(6)\nEq3d *= 1/Rational(6)\n\nEq3e = simplify(Eq3e)\npprint(Eq3d)\na = romberg(resps)\na.df\n\n```\n\n\n```python\nprint('-----------------------------------------------------')\nprint('Q7')\nx = symbols('x')\nInter = Interval(-3,7)\nM = Matrix([\n [-3, 2, 7],\n [2/5, -2, 4/9]\n])\n\nb = interpolador(M)\nb.lagrange()\nfunc = b.p_lagr\ngrau = 2\n\na = gauss(grau, Inter, func)\nprint(a.res)\nt = symbols('t')\na = Inter.args[0]\nb = Inter.args[1]\nvar = (2*x - a - b)/(b-a)\nvar_ = diff(var, t)\nLegendre = t**3 - (3/5)*t\nprint(solve(Legendre.subs(t, var)))\n```\n\n -11.9259259256358\n [-1.87298334620742, 2.00000000000000, 5.87298334620742]\n\n", "meta": {"hexsha": "155feb9baea656711215d0d8b6db4ed8ad88179f", "size": 21196, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Numerico/Progrmas_numerico/p2.ipynb", "max_stars_repo_name": "victorathanasio/Personal-projects", "max_stars_repo_head_hexsha": "94c870179cec32aa733a612a6faeb047df16d977", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": 
"Numerico/Progrmas_numerico/p2.ipynb", "max_issues_repo_name": "victorathanasio/Personal-projects", "max_issues_repo_head_hexsha": "94c870179cec32aa733a612a6faeb047df16d977", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Numerico/Progrmas_numerico/p2.ipynb", "max_forks_repo_name": "victorathanasio/Personal-projects", "max_forks_repo_head_hexsha": "94c870179cec32aa733a612a6faeb047df16d977", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.3266666667, "max_line_length": 2461, "alphanum_fraction": 0.4474429138, "converted": true, "num_tokens": 3846, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.956634196290671, "lm_q2_score": 0.8688267677469952, "lm_q1q2_score": 0.8311493966794683}} {"text": "\n# **1.3 Calculate elastic stiffness of a given composite**\n\n[](https://moodle.rwth-aachen.de/mod/page/view.php?id=551801)\n\n## Task\nPredict the tensile stiffness of a reinforced concrete cross section shown in the Figure with the thickness of 10~mm and width of 100 mm.\nThe cross-section is reinforced with 6 layers of textile fabrics made of CAR-EP3300 specified in the table below\n\n\n\n## Theory\nThe derived mixture rule will be used to solve the task\n\n\n\n## Data\n\n\\begin{array}{|c|c|c|c|c|c|}\n \\mathrm{Label}\n & \\mathrm{Material}\n & \\mathrm{Area }\n & \\mathrm{Grid\\; spacing}\n & \\mathrm{Stiffness}\n & \\mathrm{Strength\\; (characteristic)}\n \\\\ \n \\hline \n & \n & [\\mathrm{mm}^2]\n & [\\mathrm{mm}]\n & [\\mathrm{MPa}]\n & [\\mathrm{Mpa}]\n \\\\ \n \\hline\n \\mathrm{CAR-EP3300}\n & \\mathrm{carbon/proxy}\n & 1.84\n & 25.77\n & 240000\n & 3500\n \\\\\n \\hline\n \\mathrm{solidian\\; GRID \\; Q95}\n & \\mathrm{carbon/proxy}\n & 3.62\n & 36.0\n & 240000\n & 3200\n \\\\ \n\\end{array}\n\n# How to evaluate an expression?\n\n**Calculator:** Let us use Python language as a calculator and evalute the mixture rule for the exemplified cross-section.m\n\n\n```python\nA_roving = 1.84 # [mm**2] \nn_layers = 6 # - \nspacing = 25.77 # [mm] \nthickness = 10 # [mm] \nwidth = 100 # [mm] \nE_carbon = 240000 # [MPa] \nE_concrete = 28000 # [MPa]\n```\n\n\n```python\nA_composite = width * thickness\nn_rovings = width / spacing\nA_layer = n_rovings * A_roving\nA_carbon = n_layers * A_layer \nA_concrete = A_composite - A_carbon \nE_composite = (E_carbon * A_carbon + E_concrete * A_concrete) / A_composite\nE_composite\n```\n\n\n\n\n 37082.18859138533\n\n\n\nThus, the composite has an effective stiffness of 37 GPa.\n\n# How to construct a model?\n\nOnce we have derived the formula capturing the design question, i.e. \"what is the stiffness of the new composite?\" we want to learn from the model and develop a new intuition. To explore the possible composite designs let us rewrite the above equations as symbolic expressions. Then, we can construct a model which can be interactively used to study the available design options.\n\nIn contrast to the previously performed numerical evaluation, we now express the derived equations as mathematical symbols instead of numbers. To do this, let us use a Python package `sympy` to do symbolic algebra.\n\n\n```python\nimport sympy as sp\n```\n\nThe parameters of the model are now introduced as `sympy.symbols`. 
To distinguish the symbols from numbers introduced above, let us name the Python variables referring to the mathematical symbols with a trailing underscore `_`\n\n\n```python\nA_roving_ = sp.Symbol('A_r')\nn_layers_ = sp.Symbol('n_l')\nspacing_ = sp.Symbol('d')\nthickness_ = sp.Symbol('h')\nwidth_ = sp.Symbol('b')\nE_carbon_ = sp.Symbol('E_car')\nE_concrete_ = sp.Symbol('E_c')\n```\n\nTo see the difference between a variable referring to a number and to a symbol, let us display the variables `A_roving` and `A_roving_`\n\n\n```python\ndisplay(A_roving, A_roving_)\n```\n\n\n 1.84\n\n\n\n$\\displaystyle A_{r}$\n\n\nLet us now rephrase the the above derived equations in the symbolic form, i.e. using the symbols with the trailing underscore `_`\n\n\n```python\nA_composite_ = width_ * thickness_\nn_rovings_ = width_ / spacing_\nA_layer_ = n_rovings_ * A_roving_\nA_carbon_ = n_layers_ * A_layer_ \nA_concrete_ = A_composite_ - A_carbon_ \nE_composite_ = (E_carbon_ * A_carbon_ + E_concrete_ * A_concrete_) / A_composite_\nsp.simplify(E_composite_)\n```\n\n\n\n\n$\\displaystyle \\frac{A_{r} E_{car} n_{l} - E_{c} \\left(A_{r} n_{l} - d h\\right)}{d h}$\n\n\n\n**The power of modeling**\n\n - Instead of a number, we have now a symbolic expression showing the influence of the individual parameters on a design property, i.e. on the material stiffness $\\bar{E}$. \n\n - This expression represents the first model that we constructuted in the course using the conditions of compatibility, equilibrium and constitutive laws.\n\n - Using a model, we can explore the behavior of the composite, develop a design intuition, optimize the design.\n We learn how to construct simple models describing the mechanisms governing the behavior of a composite.\n \n - We will construct simplified analytical models for pull-out, that will help us to understand elementary types of\n material behavior.\n \n\n# Next steps\n\n - Login to jupyter.rwth-aachen.de\n - Navigate to this mixture rule example\n - Evaluate the cells by issueing the [Shift+Return] key combination\n\n# Why Jupyter Lab? Why Python?\n - [Jupyterlab introduction](https://youtu.be/A5YyoCKxEOU) [7 mins] video explaining the basic features of jupyter notebook within jupyter lab \n - [Most Popular Programming Languages 1965 - 2019](https://www.youtube.com/watch?v=Og847HVwRSI) Check this race of programming languages over the last 50 years and wait till the end of it ;-)\n - Useful packages\n - `matplotlib` - how to plot fancy diagrams \n - `sympy` - how to perform algebraic manipulations\n - `numpy` - how to manipulate data - beyond `Excel`\n will be shortly explained.\n\n
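
As a small illustration of the exploration promised above, the symbolic expression `E_composite_` can be turned into a fast numerical function and evaluated over a range of designs. This is only a sketch: the argument order passed to `lambdify` below is a choice made here for illustration, not part of the derivation.


```python
import sympy as sp

# Turn the symbolic stiffness expression derived above into a numerical function.
E_num = sp.lambdify((n_layers_, A_roving_, spacing_, thickness_, width_, E_carbon_, E_concrete_),
                    E_composite_)

# Sweep the number of fabric layers using the CAR-EP3300 data of the task.
for n in range(1, 7):
    print(n, 'layers ->', E_num(n, 1.84, 25.77, 10, 100, 240000, 28000), 'MPa')
```

For 6 layers this reproduces the value of about 37000 MPa obtained numerically above.
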
    \n\n\n```python\n\n```\n", "meta": {"hexsha": "68acd7e21c9d78e7333bbb46fb5d5608abfd9bb4", "size": 417064, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "tour1_intro/1_1_elastic_stiffness_of_the_composite.ipynb", "max_stars_repo_name": "bmcs-group/bmcs_tutorial", "max_stars_repo_head_hexsha": "4e008e72839fad8820a6b663a20d3f188610525d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tour1_intro/1_1_elastic_stiffness_of_the_composite.ipynb", "max_issues_repo_name": "bmcs-group/bmcs_tutorial", "max_issues_repo_head_hexsha": "4e008e72839fad8820a6b663a20d3f188610525d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tour1_intro/1_1_elastic_stiffness_of_the_composite.ipynb", "max_forks_repo_name": "bmcs-group/bmcs_tutorial", "max_forks_repo_head_hexsha": "4e008e72839fad8820a6b663a20d3f188610525d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 1022.2156862745, "max_line_length": 297152, "alphanum_fraction": 0.9534483916, "converted": true, "num_tokens": 1537, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9173026595857203, "lm_q2_score": 0.9059898210180105, "lm_q1q2_score": 0.8310668723774118}} {"text": "```python\nfrom sympy import Matrix, symbols, sqrt, asin\n\nx1, x2, x3, x4, K, r, Re, alpha0, theta = symbols('x1 x2 x3 x4 K r Re alpha0 theta')\n\nH = Matrix([[sqrt(x1**2 + x3**2)]])\ndisplay(H)\n\nx = Matrix([[x1],\n [x2],\n [x3]])\ndisplay(x)\n\ndH = H.jacobian(x)\ndisplay(dH)\n```\n\n\n$\\displaystyle \\left[\\begin{matrix}\\sqrt{x_{1}^{2} + x_{3}^{2}}\\end{matrix}\\right]$\n\n\n\n$\\displaystyle \\left[\\begin{matrix}x_{1}\\\\x_{2}\\\\x_{3}\\end{matrix}\\right]$\n\n\n\n$\\displaystyle \\left[\\begin{matrix}\\frac{x_{1}}{\\sqrt{x_{1}^{2} + x_{3}^{2}}} & 0 & \\frac{x_{3}}{\\sqrt{x_{1}^{2} + x_{3}^{2}}}\\end{matrix}\\right]$\n\n\n\n```python\nx = Matrix([[x1],\n [x2],\n [x3],\n [x4]])\ndisplay(x)\n\nF = Matrix([[x2],\n [x1*x4**2 - K/x1**2],\n [x4],\n [-2*x2*x4/x1]])\ndisplay(F)\n\ndF = F.jacobian(x)\ndisplay(dF)\n\nH = Matrix([[asin(Re/x1)],\n [alpha0 - x3]])\ndisplay(H)\n\ndH = H.jacobian(x)\ndisplay(dH)\n```\n\n\n$\\displaystyle \\left[\\begin{matrix}x_{1}\\\\x_{2}\\\\x_{3}\\\\x_{4}\\end{matrix}\\right]$\n\n\n\n$\\displaystyle \\left[\\begin{matrix}x_{2}\\\\- \\frac{K}{x_{1}^{2}} + x_{1} x_{4}^{2}\\\\x_{4}\\\\- \\frac{2 x_{2} x_{4}}{x_{1}}\\end{matrix}\\right]$\n\n\n\n$\\displaystyle \\left[\\begin{matrix}0 & 1 & 0 & 0\\\\\\frac{2 K}{x_{1}^{3}} + x_{4}^{2} & 0 & 0 & 2 x_{1} x_{4}\\\\0 & 0 & 0 & 1\\\\\\frac{2 x_{2} x_{4}}{x_{1}^{2}} & - \\frac{2 x_{4}}{x_{1}} & 0 & - \\frac{2 x_{2}}{x_{1}}\\end{matrix}\\right]$\n\n\n\n$\\displaystyle \\left[\\begin{matrix}\\operatorname{asin}{\\left(\\frac{Re}{x_{1}} \\right)}\\\\\\alpha_{0} - x_{3}\\end{matrix}\\right]$\n\n\n\n$\\displaystyle \\left[\\begin{matrix}- \\frac{Re}{x_{1}^{2} \\sqrt{- \\frac{Re^{2}}{x_{1}^{2}} + 1}} & 0 & 0 & 0\\\\0 & 0 & -1 & 0\\end{matrix}\\right]$\n\n", "meta": {"hexsha": "25fba9787024d8d66ce1fd814266c2fbe8a68d9d", "size": 4990, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Jacobian.ipynb", "max_stars_repo_name": "mfkiwl/GMPE340", 
"max_stars_repo_head_hexsha": "3602b8ba859a2c7db2cab96862472597dc1ac793", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Jacobian.ipynb", "max_issues_repo_name": "mfkiwl/GMPE340", "max_issues_repo_head_hexsha": "3602b8ba859a2c7db2cab96862472597dc1ac793", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Jacobian.ipynb", "max_forks_repo_name": "mfkiwl/GMPE340", "max_forks_repo_head_hexsha": "3602b8ba859a2c7db2cab96862472597dc1ac793", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 25.4591836735, "max_line_length": 255, "alphanum_fraction": 0.3907815631, "converted": true, "num_tokens": 703, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9688561703644736, "lm_q2_score": 0.8577681068080749, "lm_q1q2_score": 0.8310539230228562}} {"text": "# ODE initial value problems\n\nThis notebook will explore how oridinary differential equations may be solved using Julia and the DifferentialEquations.jl package. We will specifically focus on intial value problems; boundary value problems will be examined in another notebook. \n\n\n```julia\n#using Pkg\n#Pkg.add( \"DifferentialEquations\" )\n```\n\n\n```julia\nusing DifferentialEquations\n```\n\n \u250c Info: Precompiling DifferentialEquations [0c46a032-eb83-5123-abaf-570d42b7fbaa]\n \u2514 @ Base loading.jl:1260\n\n\nLets solve the equation\n$$\n\\frac{d u}{d t} = p u\n$$\non the interval $t \\in [0,1]$ with the initial condition $u(0)=u_0$ where $p$ is a constant. The exact solution is $u(t)=u_0 e^{p t}$.\n\n\n```julia\nfunction simple_function!(du, u, p, t)\n du[1] = p[1] * u[1]\nend\n\np = [1.1]\nu0 = [1.0]\ntspan = (0.0,1.0)\nprob = ODEProblem(simple_function!,u0,tspan,p)\nsol = solve(prob, Tsit5(), reltol=1e-8, abstol=1e-8)\n\nusing Plots\nplot(sol,linewidth=5, xaxis=\"t\",yaxis=\"u(t)\",label=\"Numerical solution\") # legend=false\nplot!(sol.t, t->u0[1]*exp(p[1]*t),lw=3,ls=:dash,label=\"Exact Solution\")\n```\n\n\n\n\n \n\n \n\n\n\nHere Tsit5() is the standard non-stiff solver. Now lets solve a system of ODEs such as the lorenz equations\n$$\n\\begin{align}\n\\frac{dx}{dt} &= \\sigma (y-x), \\\\\n\\frac{dy}{dt} &= x (\\rho-z) - y, \\\\\n\\frac{dz}{dt} &= xy - \\sigma z.\n\\end{align}\n$$\nThis simplified model for atmospheric convection has chaotic solutions when the parameters have the values $\\sigma = 10$, $\\rho = 28$ and $\\beta = 8/3$. We choose the initial conditions $x(0)=1$, $y(0)=0$ and $z(0)=0$.\n\n\n```julia\nfunction lorenz!(du,u,p,t)\n x,y,z = u\n \u03c3,\u03c1,\u03b2 = p\n du[1] = dx = \u03c3*(y-x)\n du[2] = dy = x*(\u03c1-z) - y\n du[3] = dz = x*y - \u03b2*z\nend\n```\n\n\n\n\n lorenz! 
(generic function with 1 method)\n\n\n\n\n```julia\np = [10.0,28.0,8/3]\nu0 = [1.0;0.0;0.0]\ntspan = (0.0,100.0)\nprob = ODEProblem(lorenz!,u0,tspan,p)\nsol = solve(prob)\nplot(sol,vars=(1,2,3), title=\"Lorenz attractor\", xaxis=\"x\", yaxis=\"y\",\n zaxis=\"z\")\n```\n\n\n\n\n \n\n \n\n\n\nWe can also plot the time series for each variable individually.\n\n\n```julia\nplot(sol, vars=(0,1), label=\"x(t)\")\nplot!(sol, vars=(0,2), label=\"y(t)\")\nplot!(sol, vars=(0,3), label=\"z(t)\")\n```\n\n\n\n\n \n\n \n\n\n\n\n```julia\n\n```\n", "meta": {"hexsha": "d62583c4d6debddb2725fa1e6e172c39e57d3dce", "size": 954987, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "ODE_IVP.ipynb", "max_stars_repo_name": "anthonyoneill/jupyter-notebooks", "max_stars_repo_head_hexsha": "9e1b0a6e1af51d38447adda01ac35060d1097c4e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "ODE_IVP.ipynb", "max_issues_repo_name": "anthonyoneill/jupyter-notebooks", "max_issues_repo_head_hexsha": "9e1b0a6e1af51d38447adda01ac35060d1097c4e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ODE_IVP.ipynb", "max_forks_repo_name": "anthonyoneill/jupyter-notebooks", "max_forks_repo_head_hexsha": "9e1b0a6e1af51d38447adda01ac35060d1097c4e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 158.7148080439, "max_line_length": 253, "alphanum_fraction": 0.6821475057, "converted": true, "num_tokens": 796, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.933430805473952, "lm_q2_score": 0.89029422102812, "lm_q1q2_score": 0.8310280518430826}} {"text": "# Kullback-Leibler Divergence\n\nThe Kullback-Leibler divergence (KLD) measures the distance between two probability distributions, $Q$ and $P$. KLD between $Q$ and $P$ is defined as follows.\n\n* $D_{\\mathrm{KL}}(P\\|Q) = \\sum_i P(i) \\, \\log\\frac{P(i)}{Q(i)}$\n* $D_{\\mathrm{KL}}(P\\|Q) \\geq 0$\n\nThe way to interpret the value of KLD is\n\n* as the KLD is closer to zero, $P$ and $Q$ are more similar\n* as the KLD is moves away from zero, $P$ and $Q$ are more dissimilar (diverging, more distant)\n\nIn the example below, we will calculate KLD against three distributions, each associated with different models. 
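
As a quick sanity check of the definition above, the discrete formula can be evaluated directly with `scipy.stats.entropy`, which returns the KL divergence when given two distributions. The three small distributions below are made up purely for illustration.


```python
import numpy as np
from scipy.stats import entropy

P = np.array([0.36, 0.48, 0.16])
Q1 = np.array([0.30, 0.50, 0.20])  # close to P
Q2 = np.array([0.80, 0.15, 0.05])  # far from P

# entropy(P, Q) computes sum_i P(i) * log(P(i) / Q(i)), i.e. the KLD between P and Q.
print(entropy(P, P))   # 0.0: a distribution does not diverge from itself
print(entropy(P, Q1))  # small: P and Q1 are similar
print(entropy(P, Q2))  # much larger: P and Q2 are dissimilar
```
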
Model 1 takes on the following form.\n\n* $X_1 \\sim \\mathcal{N}(0, 1)$\n* $X_2 \\sim \\mathcal{N}(1, 1)$\n* $X_3 \\sim \\mathcal{N}(2 + 0.8x_1 - 0.2x_2, 1)$\n\nModel 2 takes on the following form.\n\n* $X_1 \\sim \\mathcal{N}(0.85, 1)$\n* $X_2 \\sim \\mathcal{N}(1.05, 1)$\n* $X_3 \\sim \\mathcal{N}(2 + 0.9x_1 - 0.25x_2, 1)$\n\nModel 3 takes on the following form.\n\n* $X_1 \\sim \\mathcal{N}(2, 1)$\n* $X_2 \\sim \\mathcal{N}(5, 1)$\n* $X_3 \\sim \\mathcal{N}(4 + 0.8x_1 - 0.8x_2, 1)$\n\n\nNote how Models 1 and 2 were constructed to be very similar, and Model 3 to be very dissimilar to Models 1 and 2.\n\n## Generate data\n\n\n```python\n%matplotlib inline\nimport matplotlib.pylab as plt\nimport numpy as np\nimport seaborn as sns\nfrom numpy.random import normal\nfrom scipy.stats import multivariate_normal, norm, entropy\n\nnp.random.seed(37)\nsns.set_style('whitegrid')\nnum_samples = 1000\n\nx1 = normal(0, 1, num_samples)\nx2 = normal(1, 1, num_samples)\nx3 = normal(2 + 0.8 * x1 - 0.2 * x2, 1, num_samples)\n\ndata1 = data = np.column_stack((x1, x2, x3))\nmeans1 = data1.mean(axis=0)\ncovs1 = np.cov(data1, rowvar=False)\n\nx1 = normal(0.85, 1, num_samples)\nx2 = normal(1.05, 1, num_samples)\nx3 = normal(2 + 0.9 * x1 - 0.25 * x2, 1, num_samples)\n\ndata2 = np.column_stack((x1, x2, x3))\nmeans2 = data2.mean(axis=0)\ncovs2 = np.cov(data2, rowvar=False)\n\nx1 = normal(2, 1, num_samples)\nx2 = normal(5, 1, num_samples)\nx3 = normal(4 + 0.8 * x1 - 0.8 * x2, 1, num_samples)\n\ndata3 = np.column_stack((x1, x2, x3))\nmeans3 = data3.mean(axis=0)\ncovs3 = np.cov(data3, rowvar=False)\n\nprint('means_1 = {}'.format(means1))\nprint('covariance_1')\nprint(covs1)\nprint('')\nprint('means_2 = {}'.format(means2))\nprint('covariance_2')\nprint(covs2)\nprint('')\nprint('means_3 = {}'.format(means3))\nprint('covariance_3')\nprint(covs3)\n```\n\n means_1 = [0.01277839 0.9839153 1.80334137]\n covariance_1\n [[ 0.9634615 -0.00371354 0.76022725]\n [-0.00371354 0.97865653 -0.25181086]\n [ 0.76022725 -0.25181086 1.63064517]]\n \n means_2 = [0.85083876 1.07957661 2.46909572]\n covariance_2\n [[ 1.00406579 0.03774339 0.91788487]\n [ 0.03774339 1.00889847 -0.21973076]\n [ 0.91788487 -0.21973076 1.94124604]]\n \n means_3 = [2.00362816 4.97508849 1.65194765]\n covariance_3\n [[ 1.01322936 0.0112429 0.75369598]\n [ 0.0112429 0.96736793 -0.76265399]\n [ 0.75369598 -0.76265399 2.14695264]]\n\n\nNote how we estimate the means and covariance matrix of Models 1 and 2 from the sampled data. 
For any observation, ${\\mathbf X} = (x_{1}, \\ldots, x_{k})$, we can compute the probablity of such data point according to the following probability density function.\n\n$\\begin{align}\nf_{\\mathbf X}(x_1,\\ldots,x_k)\n& = \\frac{\\exp\\left(-\\frac 1 2 ({\\mathbf x}-{\\boldsymbol\\mu})^\\mathrm{T}{\\boldsymbol\\Sigma}^{-1}({\\mathbf x}-{\\boldsymbol\\mu})\\right)}{\\sqrt{(2\\pi)^k|\\boldsymbol\\Sigma|}}\n\\end{align}$\n\n## Visualize\n\nLet's visualize the density curves of each variable in the models.\n\n\n```python\nfig, ax = plt.subplots(1, 1, figsize=(10, 5))\nax.set_title('Model 1')\nax.set_xlim([-4, 8])\nsns.kdeplot(data1[:,0], bw=0.5, ax=ax, label=r'$x_{1}$')\nsns.kdeplot(data1[:,1], bw=0.5, ax=ax, label=r'$x_{2}$')\nsns.kdeplot(data1[:,2], bw=0.5, ax=ax, label=r'$x_{3}$')\n\nfig, ax = plt.subplots(1, 1, figsize=(10, 5))\nax.set_title('Model 2')\nax.set_xlim([-4, 8])\nsns.kdeplot(data2[:,0], bw=0.5, ax=ax, label=r'$x_{1}$')\nsns.kdeplot(data2[:,1], bw=0.5, ax=ax, label=r'$x_{2}$')\nsns.kdeplot(data2[:,2], bw=0.5, ax=ax, label=r'$x_{3}$')\n\nfig, ax = plt.subplots(1, 1, figsize=(10, 5))\nax.set_title('Model 3')\nax.set_xlim([-4, 8])\nsns.kdeplot(data3[:,0], bw=0.5, ax=ax, label=r'$x_{1}$')\nsns.kdeplot(data3[:,1], bw=0.5, ax=ax, label=r'$x_{2}$')\nsns.kdeplot(data3[:,2], bw=0.5, ax=ax, label=r'$x_{3}$')\n```\n\n## Measure divergence\n\nNow that we have estimated the parameters (means and covariance matrix) of the models, we can plug these back into the density function above to estimate the probability of each data point in the data simulated from Model 1. Note that $P$ is the density function associated with Model 1, $Q1$ is the density function associated with Model 2, and $Q2$ is the density function associated with Model 3. Also note\n\n* $D_{\\mathrm{KL}}(P\\|P) = 0$\n* $D_{\\mathrm{KL}}(P\\|Q) \\neq D_{\\mathrm{KL}}(Q\\|P)$ (the KLD is asymmetric)\n\n\n```python\nP = multivariate_normal.pdf(data1, mean=means1, cov=covs1)\nQ1 = multivariate_normal.pdf(data1, mean=means2, cov=covs2)\nQ2 = multivariate_normal.pdf(data1, mean=means3, cov=covs3)\n\nprint(entropy(P, P))\nprint(entropy(P, Q1))\nprint(entropy(P, Q2))\n```\n\n 0.0\n 0.17877316564929496\n 6.628549732040807\n\n\nThis time around, $P$ is the density function associated with Model 2 and $Q1$ is the density function associated with Model 1 and $Q2$ with Model 3.\n\n\n```python\nP = multivariate_normal.pdf(data2, mean=means2, cov=covs2)\nQ1 = multivariate_normal.pdf(data2, mean=means1, cov=covs1)\nQ2 = multivariate_normal.pdf(data2, mean=means3, cov=covs3)\n\nprint(entropy(P, P))\nprint(entropy(P, Q1))\nprint(entropy(P, Q2))\n```\n\n 0.0\n 0.18572628345083425\n 5.251771615080081\n\n\nFinally, $P$ is the density function associated with Model 3 and $Q1$ is the density function associated with Model 1 and $Q2$ with Model 2.\n\n\n```python\nP = multivariate_normal.pdf(data3, mean=means3, cov=covs3)\nQ1 = multivariate_normal.pdf(data3, mean=means1, cov=covs1)\nQ2 = multivariate_normal.pdf(data3, mean=means2, cov=covs2)\n\nprint(entropy(P, P))\nprint(entropy(P, Q1))\nprint(entropy(P, Q2))\n```\n\n 0.0\n 4.964071493531684\n 4.154646297473461\n\n\nSince Models 1 and 2 are very similar (as can be seen by how we constructed them), their KLD is closer to zero. On the other hand the KLDs between these two models and Model 3 are farther from zero. 
Though, it is interesting to note, that Model 2 is closer to Model 3 than Model 1 is to Model 3.\n", "meta": {"hexsha": "eb33d5c6d43c3fad74809463d9f29dfc86e66558", "size": 135096, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "sphinx/datascience/source/kullback-leibler-divergence.ipynb", "max_stars_repo_name": "oneoffcoder/books", "max_stars_repo_head_hexsha": "84619477294a3e37e0d7538adf819113c9e8dcb8", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 26, "max_stars_repo_stars_event_min_datetime": "2020-05-05T08:07:43.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-12T03:28:15.000Z", "max_issues_repo_path": "sphinx/datascience/source/kullback-leibler-divergence.ipynb", "max_issues_repo_name": "oneoffcoder/books", "max_issues_repo_head_hexsha": "84619477294a3e37e0d7538adf819113c9e8dcb8", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 19, "max_issues_repo_issues_event_min_datetime": "2021-03-10T00:33:51.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-02T13:04:32.000Z", "max_forks_repo_path": "sphinx/datascience/source/kullback-leibler-divergence.ipynb", "max_forks_repo_name": "oneoffcoder/books", "max_forks_repo_head_hexsha": "84619477294a3e37e0d7538adf819113c9e8dcb8", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2022-01-09T16:48:21.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-19T17:06:50.000Z", "avg_line_length": 322.4248210024, "max_line_length": 41236, "alphanum_fraction": 0.9302644046, "converted": true, "num_tokens": 2388, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9525741281688025, "lm_q2_score": 0.8723473829749844, "lm_q1q2_score": 0.8309755477977322}} {"text": "# Linear regression fitting noise versus structure\n\nLet \n$$\ny = y^\\ast + e, \\quad y^\\ast = X \\theta^\\ast\n$$\nwhere $X$ is a fat matrix. 
Consider optimizing the least squares objective \n$$\n\\frac{1}{2}\n\\| X \\theta - y \\|^2.\n$$\nThe gradient is \n$$\n\\nabla f(\\theta) = X^T X \\theta - X^T y.\n$$\nWe have\n$$\n\\begin{align}\n\\theta_{k+1} - \\theta^\\ast\n&=\n\\theta_k - \\alpha(X^T X \\theta - X^T y) - \\theta^\\ast \\\\\n&=\n\\theta_k - \\alpha(X^T X \\theta - X^T X \\theta^\\ast - X^T e) - \\theta^\\ast \\\\\n&=\n(I - \\alpha X^T X) (\\theta_k - \\theta^\\ast) + \\alpha X^T e\n\\end{align}\n$$\nThe difference in terms of residual therefore becomes\n$$\nX\\theta_{k+1} - X\\theta^\\ast\n=\n(I - \\alpha XX^T) (X\\theta_k - X\\theta^\\ast) + \\alpha X X^T e \\\\\n$$\n\n\n```python\nimport matplotlib.pyplot as plt\n#%matplotlib notebook\n#import matplotlib.pyplot as plt\nfrom numpy import *\nimport numpy as np\n```\n\n\n```python\ndef gradient_descent(A,b,niter = 10000,ytarget=None,stepsize=None):\n \n def f(x):\n return linalg.norm(dot(A,x) - b)**2\n\n def gradf(x):\n return 0.5*(dot(Q,x) - dot(b,A))\n\n Q = dot(A.T,A)\n eigenvalues = linalg.eigvals(Q)\n M = max(eigenvalues)\n m = min(eigenvalues)\n\n xopt = dot( linalg.inv( dot(A.T,A) ), dot( A.T , b ) ) \n \n print(\"optimal errors: \", linalg.norm( y - dot(A,xopt) ), linalg.norm( ytarget - dot(A,xopt) ) )\n \n if stepsize==None:\n stepsize = 2/(M+m)\n \n print(\"minimal and maximal eigenvalues: \", m, M)\n print(\"stepsize: \", stepsize)\n \n residuals = []\n gradients = []\n residual_target = []\n distances = []\n xk = zeros(n) #random.randn(n) # random initializer\n for k in range(niter):\n xk = xk - stepsize*gradf(xk)\n residuals.append( linalg.norm( y - dot(A,xk) ) )\n gradients.append( linalg.norm(gradf(xk)) )\n residual_target.append( linalg.norm( ytarget - dot(A,xk) ) )\n distances.append( linalg.norm(xk) )\n return array(residuals), array(gradients), array(residual_target),array(distances)\n```\n\n\n```python\n# generate a problem instance\nn = 100\nA = random.randn(n,n)\nU,S,VT = linalg.svd(A)\n\nytarget = U[:, int(0)]\nperturbation = random.randn(n) #U[:,n-10]\nperturbation = perturbation/linalg.norm(perturbation)\n#perturbation = U[:,n-10]\n\nnewS = np.array([s**3 for s in S])\nnewS = newS/np.max(newS)\nplt.plot(newS)\nplt.title(\"spectrum\")\nplt.show()\n\nS = np.diag(newS)\nA = U @ S @ VT\n\ny = ytarget + perturbation\n\nprint(linalg.norm(y), linalg.norm(ytarget), linalg.norm(perturbation), dot(ytarget,perturbation) )\n\nsteps = 1000\nresiduals,gradients,residual_target,distances = gradient_descent(A,y,niter=steps,ytarget=ytarget,stepsize=0.25)\n\nprint(\"logarithmic\")\nplt.plot( log(residuals) )\nplt.show()\nplt.plot( log(residual_target) )\nplt.show()\n\nprint(\"non logarithmic\")\nplt.plot( residuals )\nplt.show()\nplt.plot( residual_target )\nplt.show()\n\n\nks = np.array( [i for i in range(steps)] )\nnp.savetxt(\"ls_residuals.dat\", np.vstack([ ks ,np.array(residuals),np.array(residual_target) ] ).T , delimiter=\"\\t\")\n\nns = np.array( [i for i in range(n)] )\nnp.savetxt(\"ls_spectrum.dat\", np.vstack([ ns ,np.array(newS)] ).T , delimiter=\"\\t\")\n\n```\n", "meta": {"hexsha": "3765f35d8cb7e2d35b0c40ee6561c7f7fee6f144", "size": 64059, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "linear_least_squares_selective_fitting_warmup.ipynb", "max_stars_repo_name": "MLI-lab/overparameterized_convolutional_generators", "max_stars_repo_head_hexsha": "ef2fae85768f1954dbd1ead75b9ba8e214c13230", "max_stars_repo_licenses": ["Apache-1.1"], "max_stars_count": 13, "max_stars_repo_stars_event_min_datetime": "2019-11-02T11:41:42.000Z", 
"max_stars_repo_stars_event_max_datetime": "2022-03-21T02:53:18.000Z", "max_issues_repo_path": "linear_least_squares_selective_fitting_warmup.ipynb", "max_issues_repo_name": "MLI-lab/overparameterized_convolutional_generators", "max_issues_repo_head_hexsha": "ef2fae85768f1954dbd1ead75b9ba8e214c13230", "max_issues_repo_licenses": ["Apache-1.1"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "linear_least_squares_selective_fitting_warmup.ipynb", "max_forks_repo_name": "MLI-lab/overparameterized_convolutional_generators", "max_forks_repo_head_hexsha": "ef2fae85768f1954dbd1ead75b9ba8e214c13230", "max_forks_repo_licenses": ["Apache-1.1"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2019-11-17T13:33:02.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-19T10:02:12.000Z", "avg_line_length": 264.7066115702, "max_line_length": 12360, "alphanum_fraction": 0.9181067453, "converted": true, "num_tokens": 998, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9525741241296944, "lm_q2_score": 0.8723473862936942, "lm_q1q2_score": 0.830975547435544}} {"text": "# COURSE: A deep understanding of deep learning\n## SECTION: Math prerequisites\n### LECTURE: Derivatives: intuition and polynomials\n#### TEACHER: Mike X Cohen, sincxpress.com\n##### COURSE URL: udemy.com/course/dudl/?couponCode=202201\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# sympy = symbolic math in Python\nimport sympy as sym\nimport sympy.plotting.plot as symplot\n```\n\n\n```python\n# create symbolic variables in sympy\nx = sym.symbols('x')\n\n# create a function\nfx = 2*x**2\n\n# compute its derivative\ndf = sym.diff(fx,x)\n\n# print them\nprint(fx)\nprint(df)\n```\n\n\n```python\n# plot them\nsymplot(fx,(x,-4,4),title='The function')\nplt.show()\n\nsymplot(df,(x,-4,4),title='Its derivative')\nplt.show()\n\n```\n\n\n```python\n# repeat with relu and sigmoid\n\n# create symbolic functions\nrelu = sym.Max(0,x)\nsigmoid = 1 / (1+sym.exp(-x))\n\n# graph the functions\np = symplot(relu,(x,-4,4),label='ReLU',show=False,line_color='blue')\np.extend( symplot(sigmoid,(x,-4,4),label='Sigmoid',show=False,line_color='red') )\np.legend = True\np.title = 'The functions'\np.show()\n\n\n# graph their derivatives\np = symplot(sym.diff(relu),(x,-4,4),label='df(ReLU)',show=False,line_color='blue')\np.extend( symplot(sym.diff(sigmoid),(x,-4,4),label='df(Sigmoid)',show=False,line_color='red') )\np.legend = True\np.title = 'The derivatives'\np.show()\n\n```\n", "meta": {"hexsha": "b9e2eaaa14a06e0a03da93d0aeb71bb65058523c", "size": 2211, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "01_math/DUDL_math_derivatives1.ipynb", "max_stars_repo_name": "amitmeel/DUDL", "max_stars_repo_head_hexsha": "92348d5fc95b179dd2119f35f671366cd73718e6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "01_math/DUDL_math_derivatives1.ipynb", "max_issues_repo_name": "amitmeel/DUDL", "max_issues_repo_head_hexsha": "92348d5fc95b179dd2119f35f671366cd73718e6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "01_math/DUDL_math_derivatives1.ipynb", "max_forks_repo_name": "amitmeel/DUDL", "max_forks_repo_head_hexsha": 
"92348d5fc95b179dd2119f35f671366cd73718e6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 2211.0, "max_line_length": 2211, "alphanum_fraction": 0.6612392583, "converted": true, "num_tokens": 395, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9399133548753619, "lm_q2_score": 0.8840392848011834, "lm_q1q2_score": 0.8309203300190958}} {"text": "```python\n%matplotlib inline\n\nimport numpy as np\nimport scipy as sc\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport sympy as sp\n\nimport itertools\n\nsns.set();\n```\n\n# Euler's method\n\nGiven a first-order explicit ODE (autonomous or non-autonomous) on $\\vec{y} \\in \\mathbb{R}^{n}$ (It is often the case where $n = 1$, so we could drop the vector notation) in the general form\n\n$$\\frac{d}{dt}\\vec{y}(t) = \\vec{f}(t, y(t))$$\n\nthe Euler's method is a recursive method with the initial values given by the IVP's initial conditions. It attempts to approximate the solution by tracking its trajectory as ditacted by the ODE.\n\nThe approximate solution at time $t_{k+1} = t_{k} + h_{k}$ is given by the equation\n\n$$\\vec{y}_{k+1} = \\vec{y}_{k} + h_{k} \\vec{f}(t_{k}, \\vec{y}_{k})$$\n\nwhere\n\n$$\\vec{y}_{k}' = \\vec{f}(t_{k}, \\vec{y}_{k})$$\n\ngiven the initial condition\n\n$$\\vec{y}_{0} = \\vec{y}(0)$$\n\nNote that it is a first-order ODE and it would be great if we could use Euler's method to solve any order ODE. In fact, we can. The way to do that is to use some auxiliary variables as follows:\n\n$$\n \\begin{cases}\n \\vec{u}_{1} = \\vec{y} \\\\\n \\vec{u}_{2} = \\vec{y}' \\\\\n \\vdots \\\\\n \\vec{u}_{m} = \\vec{y}^{(m-1)}\n \\end{cases}\n$$\n\nThen\n\n$$\n \\begin{cases}\n \\vec{u}_{1}' = \\vec{y}' = \\vec{u}_{2}\\\\\n \\vec{u}_{2}' = \\vec{y}'' = \\vec{u}_{3}\\\\\n \\vdots \\\\\n \\vec{u}_{m}' = \\vec{y}^{(m)} = \\vec{f}(t, u_{1}, u_{2}, \\cdots, u_{m})\n \\end{cases}\n$$\n\nAnd so, we have a system of $m$ fist-order ODE. We can solve it using any method, eg. Euler's method, for first-order ODE.\n\nBelow we write a simple still general generator function that yields at each time the tuple $(t_{k}, y_{k})$ given the initial conditions $(t_{0}, \\vec{y}_{0})$, the function $\\vec{f}$, the step size $h_{k}$ and the maximum number of iterations. Note that both $\\vec{y}_{0}$ and $\\vec{f}$ can be list-like data structures since they represent objects in $\\mathbb{R}^{n}$.\n\n\n```python\ndef euler(x_0, y_0, f, step=0.001, k_max=None):\n r\"\"\"\n Euler's method for solving first-order ODE.\n \n The function computes `k_max` iterations from the initial conditions `x_0` and `y_0` with\n steps of size `step`. It yields a total of `k_max` + 1 values. 
The recorrent equation is:\n \n y_{k+1} = y_{k} + h_{k} * f(x_{k}, y_{k})\n \n Parameters\n ----------\n x_0 : float\n The initial value for the independent variable.\n y_0 : array_like\n 1-D array of initial values for the dependente variable evaluated at `x_0`.\n f : callable\n The function that represents the first derivative of y with respect to x.\n It must accept two arguments: the point x at which it will be evaluated and\n the value of y at this point.\n step : float, optional\n The size step between each iteration.\n k_max : number\n The maximum number of iterations.\n \n Yields\n ------\n x_k : float\n The point at which the function was evaluated in the last iteration.\n y_k : float\n The value of the function in the last iteration.\n \"\"\"\n \n if k_max is None: counter = itertools.count()\n else: counter = range(k_max)\n x_k = x_0\n y_k = y_0\n yield (x_k, y_k)\n for k in counter:\n y_k = y_k + step * f(x_k, y_k)\n x_k = x_k + step\n yield (x_k, y_k)\n```\n\n\n```python\ndef extract(it):\n r\"\"\"\n Extract the values from a iterable of iterables.\n \n The function extracts the values from a iterable of iterables (eg. a list of tuples) to a list\n of coordinates. For example,\n \n [(1, 10), (2, 20), (3, 30), (4, 40)] -> [[1, 2, 3, 4], [10, 20, 30, 40]]\n \n If `it` is a list of M tuples each one with N elements, then `extract` returns\n a list of N lists each one with M elements.\n \n Parameters\n ----------\n it : iterable\n An iterable of iterables.\n \n Returns\n ------\n A list with the lists of first-elements, second-elements and so on.\n \"\"\"\n \n return list(zip(*it))\n```\n\n## Example 1\n\nThe following example was taken from [Guidi], section 8.2, example 37.\n\nThe IVP is\n\n$$y' = x^{2} + y^{2}$$\n\nwith initial conditions\n\n$$y(0) = 0$$\n\nBy Euler's method\n\n$$y_{k+1} = y_{k} + h_{k} f(t, y_{k})$$\n\nSince $x_{k} = x_{0} + h_{k} k$ and $x_{0} = 0$, we have\n\n$$x_{k} = h_{k} k$$\n\nRemember that $y_{k}' = f(t, y_{k})$ and so\n\n$$y_{k+1} = y_{k} + h_{k} (h_{k}^{2} k^{2} + y_{k}^{2})$$\n\n\n```python\ndef example1(x_k, y_k):\n return x_k**2 + y_k**2\n\nresults = euler(x_0=0.0, y_0=0.0, f=example1, step=0.001, k_max=1000)\n\nresults = [(x, y) for k, (x, y) in enumerate(results) if k in range(0, 1001, 100)]\nx, y_euler = extract(results)\n\ndf1 = pd.DataFrame({'x': x, 'euler': y_euler})\n\ndf1 = df1[['x', 'euler']]\n\ndf1.head(15)\n```\n\n\n\n\n
           x     euler
    0    0.0  0.000000
    1    0.1  0.000328
    2    0.2  0.002647
    3    0.3  0.008958
    4    0.4  0.021279
    5    0.5  0.041664
    6    0.6  0.072263
    7    0.7  0.115402
    8    0.8  0.173730
    9    0.9  0.250438
    10   1.0  0.349605
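
Unlike the next examples, no exact-solution column is tabulated here, so a simple way to gauge the accuracy is to repeat the run with a smaller step and compare the end values. A quick sketch reusing the `euler` generator and `example1` defined above:


```python
# Compare the approximation at x = 1.0 for two different step sizes.
coarse = list(euler(x_0=0.0, y_0=0.0, f=example1, step=0.001, k_max=1000))[-1]
fine = list(euler(x_0=0.0, y_0=0.0, f=example1, step=0.0001, k_max=10000))[-1]
print(coarse)  # (x, y) pair obtained with the step used above
print(fine)    # a ten times smaller step changes the value only slightly (both near 0.35)
```
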
    \n\n\n\n\n```python\nfig, ax = plt.subplots(figsize=(13, 8))\nplt.plot(df1['x'], df1['euler'], label='Euler approximation with step 0.001')\nplt.legend(loc='upper left', fancybox=True, framealpha=1, shadow=True, borderpad=1, frameon=True)\nax.set(title=\"Solutions for the ODE\", xlabel=\"Time\", ylabel=\"y\");\n```\n\n## Example 2\n\nThe following example was taken from [Heath], section 9.3.1, example 9.8.\n\nThe IVP is\n\n$$y'(t) = y(t)$$\n\nwith initial conditions\n\n$$t_0 = 0$$\n\nRemembering Euler's method\n\n$$y_{k+1} = y_{k} + h_{k} y_{k}$$\n\n\n```python\ndef example2(x_k, y_k):\n return y_k\n\nresults = euler(x_0=0.0, y_0=1.0, f=example2, step=0.5, k_max=15)\nx, y_euler = extract(results)\n\ndf2 = pd.DataFrame({'x': x, 'euler': y_euler, 'actual': np.exp(np.arange(0, 8, 0.5))})\ndf2 = df2[['x', 'euler', 'actual']]\n\ndf2.head(15)\n```\n\n\n\n\n
           x       euler       actual
    0    0.0    1.000000     1.000000
    1    0.5    1.500000     1.648721
    2    1.0    2.250000     2.718282
    3    1.5    3.375000     4.481689
    4    2.0    5.062500     7.389056
    5    2.5    7.593750    12.182494
    6    3.0   11.390625    20.085537
    7    3.5   17.085938    33.115452
    8    4.0   25.628906    54.598150
    9    4.5   38.443359    90.017131
    10   5.0   57.665039   148.413159
    11   5.5   86.497559   244.691932
    12   6.0  129.746338   403.428793
    13   6.5  194.619507   665.141633
    14   7.0  291.929260  1096.633158
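
The table makes the drawback of such a large step visible: the approximation falls further behind the exact solution at every step. A small sketch quantifying the relative error from the `df2` columns above:


```python
# Relative error of the Euler approximation at a few of the tabulated points.
rel_err = (df2['actual'] - df2['euler']) / df2['actual']
print(rel_err.iloc[[2, 6, 14]])  # roughly 17% at x = 1, 43% at x = 3 and 73% at x = 7
```
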
    \n\n\n\n\n```python\nfig, ax = plt.subplots(figsize=(13, 8))\nplt.plot(df2['x'], df2['euler'], label='Euler approximation with step 0.5')\nplt.plot(df2['x'], df2['actual'], label='Actual values')\nax.legend(loc='upper left', fancybox=True, framealpha=1, shadow=True, borderpad=1, frameon=True)\nax.set(title=\"Solutions for the ODE\", xlabel=\"Time\", ylabel=\"y\");\n```\n\n## Example 3\n\nConsider the following ODE\n\n$$y'' + y = 0$$\n\nwith initial conditions\n\n$$y(0) = 1$$\n\nand\n\n$$y'(0) = 0$$\n\nThe exact solution for this IVP is $y(t) = \\mathit{cos}(t)$.\n\nLet's apply some transformations over this ODE to represent it as a system of first-order ODE. \n$u_{1} = y$ e $u_{2} = y'$. Note that $u_{2}' = y''$, so\n$y'' = f(x, y, y')$ becomes $u_{2}' = g(x, u_{1}, u_{2})$.\n\nThe previous ODE can now be written as\n\n$u_{2}' = -u_{1}$ com $u_{1}(0) = 1$ e $u_{2}(0) = 0$.\n\n\n```python\ndef example3(x_k, u_k):\n return np.array([u_k[1], -u_k[0]])\n\nresults = euler(x_0=0.0, y_0=np.array([1.0, 0.0]), f=example3, step=0.03, k_max=1000)\nx, ys = extract(results)\ny_euler, dy_euler = extract(ys)\n\ndf3 = pd.DataFrame({'x': x, 'euler': y_euler, 'dy_euler': dy_euler, 'actual': np.cos(x), 'dy_actual': -np.sin(x)})\ndf3 = df3[['x', 'euler', 'actual', 'dy_euler', 'dy_actual']]\n\ndf3.head(15)\n```\n\n\n\n\n
           x     euler    actual  dy_euler  dy_actual
    0   0.00  1.000000  1.000000  0.000000  -0.000000
    1   0.03  1.000000  0.999550 -0.030000  -0.029996
    2   0.06  0.999100  0.998201 -0.060000  -0.059964
    3   0.09  0.997300  0.995953 -0.089973  -0.089879
    4   0.12  0.994601  0.992809 -0.119892  -0.119712
    5   0.15  0.991004  0.988771 -0.149730  -0.149438
    6   0.18  0.986512  0.983844 -0.179460  -0.179030
    7   0.21  0.981128  0.978031 -0.209056  -0.208460
    8   0.24  0.974857  0.971338 -0.238489  -0.237703
    9   0.27  0.967702  0.963771 -0.267735  -0.266731
    10  0.30  0.959670  0.955336 -0.296766  -0.295520
    11  0.33  0.950767  0.946042 -0.325556  -0.324043
    12  0.36  0.941000  0.935897 -0.354079  -0.352274
    13  0.39  0.930378  0.924909 -0.382309  -0.380188
    14  0.42  0.918909  0.913089 -0.410221  -0.407760
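In vector form, the system solved above is $u_{1}' = u_{2}$, $u_{2}' = -u_{1}$, so each Euler step updates both components at once:\n\n$$u_{1}^{(k+1)} = u_{1}^{(k)} + h u_{2}^{(k)}, \\qquad u_{2}^{(k+1)} = u_{2}^{(k)} - h u_{1}^{(k)}$$\n\nA short calculation gives $\\left(u_{1}^{(k+1)}\\right)^{2} + \\left(u_{2}^{(k+1)}\\right)^{2} = (1 + h^{2}) \\left[ \\left(u_{1}^{(k)}\\right)^{2} + \\left(u_{2}^{(k)}\\right)^{2} \\right]$, so the Euler iterates slowly gain amplitude; in the phase-space plot below, the numerical trajectory spirals outward from the exact unit circle.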
    \n\n\n\n\n```python\nfig, ax = plt.subplots(figsize=(13, 8))\nplt.plot(df3['x'], df3['euler'], label='Euler approximation with step 0.03')\nplt.plot(df3['x'], df3['actual'], label='Actual values')\nax.legend(loc='lower right', fancybox=True, framealpha=1, shadow=True, borderpad=1, frameon=True)\nax.set(title=\"Solutions for the ODE\", xlabel=\"Time\", ylabel=\"y\");\n```\n\n\n```python\nfig, ax = plt.subplots(figsize=(13, 8))\nplt.plot(df3['euler'], df3['dy_euler'], label=\"Euler's solution\")\nplt.plot(df3['actual'], df3['dy_actual'], label='Exact solution')\nax.legend(loc='upper right', fancybox=True, framealpha=1, shadow=True, borderpad=1, frameon=True)\nax.set(title='Phase space', xlabel='y', ylabel='dy')\nax.axis('equal');\n```\n\n## Example 4: simple pendulum\n\nThe simple pendulum system can be represented as an ODE as follows:\n\n$$\\theta'' + \\frac{g}{l}\\mathit{sin}(\\theta) = 0$$\n\nwhere $\\theta$ is the angle in radians between the vertical line passing through the joint and the wire holding the mass, $g$ is the gravity acceleration and $l$ is the wire length.\n\nWe can turn it into an IVP by adding the following initial conditions:\n\n$$\\theta(0) = \\theta_{0}$$\nand\n$$\\theta'(0) = 0$$\n\nFor small angles ($\\mathit{sin}(\\theta) \\approx \\theta$), the solution of this IVP is well approximated by\n\n$$\\theta(t) = \\theta_0 \\cos\\left({\\sqrt{\\frac{g}{l}}}\\,t\\right)$$\n\nwhose first derivative relative to $t$ is\n\n$$\\theta'(t) = -\\theta_{0} \\sqrt{\\frac{g}{l}} \\sin\\left({\\sqrt{\\frac{g}{l}}}\\,t\\right)$$\n\nSince the IVP has a second-order ODE, we can apply the same transformations we did above to turn it into a system of first-order ODEs. Let's call $u_{1}(t) = y(t) = \\theta(t)$ and $u_{2}(t) = y'(t) = \\theta'(t)$.\n\nThis way we end up with a system of first-order ODEs that can be solved using Euler's method or any other method for first-order systems. Now Euler's method becomes:\n\n$$\\vec{u}^{(k+1)} = \\vec{u}^{(k)} + h_{k} f(t_{k}, \\vec{u}^{(k)})$$\n\nwhere $\\vec{u} = [u_{1}, u_{2}]^{T}$.\n\n\n```python\ng = 9.8\nl = 10\n\ndef example4(t_k, u_k):\n return np.array([u_k[1], -(g/l) * np.sin(u_k[0])])\n\ntheta_0 = 0.087 # About 5 degrees.\nresults = euler(x_0=0.0, y_0=np.array([theta_0, 0.0]), f=example4, step=0.02, k_max=1000)\nx, ys = extract(results)\ny_euler, dy_euler = extract(ys)\n\nimport math\ndf4 = pd.DataFrame({'x': x, 'euler': y_euler, 'dy_euler': dy_euler,\n 'actual': (theta_0 * np.cos(math.sqrt(g/l)*np.array(x))),\n 'dy_actual': (-theta_0 * math.sqrt(g/l) * np.sin(math.sqrt(g/l)*np.array(x)))})\n\ndf4 = df4[['x', 'euler', 'actual', 'dy_euler', 'dy_actual']]\n\ndf4.head(15)\n```\n\n\n\n\n
           x     euler    actual  dy_euler  dy_actual
    0   0.00  0.087000  0.087000  0.000000  -0.000000
    1   0.02  0.087000  0.086983 -0.001703  -0.001705
    2   0.04  0.086966  0.086932 -0.003406  -0.003410
    3   0.06  0.086898  0.086847 -0.005108  -0.005113
    4   0.08  0.086796  0.086727 -0.006810  -0.006814
    5   0.10  0.086659  0.086574 -0.008509  -0.008512
    6   0.12  0.086489  0.086387 -0.010205  -0.010207
    7   0.14  0.086285  0.086166 -0.011898  -0.011898
    8   0.16  0.086047  0.085911 -0.013587  -0.013585
    9   0.18  0.085775  0.085622 -0.015272  -0.015266
    10  0.20  0.085470  0.085300 -0.016951  -0.016941
    11  0.22  0.085131  0.084945 -0.018624  -0.018609
    12  0.24  0.084759  0.084556 -0.020290  -0.020270
    13  0.26  0.084353  0.084134 -0.021950  -0.021924
    14  0.28  0.083914  0.083679 -0.023601  -0.023568
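As a sanity check on the parameters used above ($g = 9.8$, $l = 10$, step $0.02$, $k_{max} = 1000$), the small-angle period is $T = 2 \\pi \\sqrt{l/g} \\approx 6.35$ s, so the simulated window of $1000 \\cdot 0.02 = 20$ s covers roughly three full oscillations. A minimal standalone check (independent of the `euler` helper used above):\n\n```python\nimport math\n\ng, l = 9.8, 10\nT = 2 * math.pi * math.sqrt(l / g) # small-angle period in seconds\nprint(T) # ~6.35\nprint(1000 * 0.02 / T) # ~3.15 oscillations covered by the run\n```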
    \n\n\n\n\n```python\nfig, ax = plt.subplots(figsize=(13, 8))\nplt.plot(df4['x'], df4['euler'], label='Euler approximation with step 0.02')\nplt.plot(df4['x'], df4['actual'], label='Actual values')\nax.legend(loc='lower right', fancybox=True, framealpha=1, shadow=True, borderpad=1, frameon=True)\nax.set(title=\"Solutions for the ODE\", xlabel=\"Time\", ylabel=r\"$\\theta$\");\n```\n\n\n```python\nfig, ax = plt.subplots(figsize=(13, 8))\nplt.plot(df4['euler'], df4['dy_euler'], label=\"Euler's solution\")\nplt.plot(df4['actual'], df4['dy_actual'], label=\"Exact solution\")\nax.legend(loc='upper right', fancybox=True, framealpha=1, shadow=True, borderpad=1, frameon=True)\nax.set(title=\"Phase space\", xlabel='y', ylabel=\"dy\")\nax.axis('equal');\n```\n\n## Errors\n\nThere are two types of errors when solving an ODE numerically. The first is the _rounding error_, which is due to the finite precision of floating-point arithmetic. The second type is _truncation error_, which is due to the method used.\n\nTruncation error can be measured in two ways: as a global error and as a local error.\n\nThe global truncation error in step $k$, $e_{k}$, is the difference between the solution computed by the method, $y_{k}$, and the true solution of the ODE, $y(t_{k})$:\n\n$$\\vec{e}_{k} = \\vec{y}_{k} - \\vec{y}(t_{k})$$\n\nThe local truncation error in step $k$, $\\tau_{k}$, is the difference between the exact increment per unit step and the slope used by the method:\n\n$$\\vec{\\tau}_{k} = \\frac{\\vec{y}(t_{k}) - \\vec{y}(t_{k-1})}{h_{k}} - f(t_{k-1}, \\vec{y}(t_{k-1}))$$\n\nWe follow with the pendulum example to demonstrate error computations in practice.\n\nFor the small initial angle used here, the solution of the simple pendulum IVP\n\n$$\\theta'' + \\frac{g}{l}\\mathit{sin}(\\theta) = 0$$\n\nwith initial values\n\n$$\\theta(0) = \\theta_{0}$$\nand\n$$\\theta'(0) = 0$$\n\nis well approximated by\n\n$$\\theta(t) = \\theta_0 \\cos\\left({\\sqrt{\\frac{g}{l}}}\\,t\\right)$$\n\nwhose first derivative relative to $t$ is\n\n$$\\theta'(t) = -\\theta_{0} \\sqrt{\\frac{g}{l}} \\sin\\left({\\sqrt{\\frac{g}{l}}}\\,t\\right)$$\n\nThen we know the reference solution $\\theta$ at each time $t_{k}$ and its first derivative. We can proceed and estimate the global and local errors and see how they behave.\n\n\n```python\n# Sign convention: these columns store y(t_k) - y_k, i.e. the negative of e_k as defined above.\ndf4['global_err'] = df4['actual'] - df4['euler']\ndf4.loc[0, 'local_err'] = 0.0\ndf4.loc[1:, 'local_err'] = (df4['actual'].diff() - df4['euler'].diff())\n\ndf4.head(15)\n```\n\n\n\n\n
           x     euler    actual  dy_euler  dy_actual  global_err  local_err
    0   0.00  0.087000  0.087000  0.000000  -0.000000    0.000000   0.000000
    1   0.02  0.087000  0.086983 -0.001703  -0.001705   -0.000017  -0.000017
    2   0.04  0.086966  0.086932 -0.003406  -0.003410   -0.000034  -0.000017
    3   0.06  0.086898  0.086847 -0.005108  -0.005113   -0.000051  -0.000017
    4   0.08  0.086796  0.086727 -0.006810  -0.006814   -0.000068  -0.000017
    5   0.10  0.086659  0.086574 -0.008509  -0.008512   -0.000085  -0.000017
    6   0.12  0.086489  0.086387 -0.010205  -0.010207   -0.000102  -0.000017
    7   0.14  0.086285  0.086166 -0.011898  -0.011898   -0.000119  -0.000017
    8   0.16  0.086047  0.085911 -0.013587  -0.013585   -0.000136  -0.000017
    9   0.18  0.085775  0.085622 -0.015272  -0.015266   -0.000153  -0.000017
    10  0.20  0.085470  0.085300 -0.016951  -0.016941   -0.000170  -0.000017
    11  0.22  0.085131  0.084945 -0.018624  -0.018609   -0.000186  -0.000016
    12  0.24  0.084759  0.084556 -0.020290  -0.020270   -0.000203  -0.000016
    13  0.26  0.084353  0.084134 -0.021950  -0.021924   -0.000219  -0.000016
    14  0.28  0.083914  0.083679 -0.023601  -0.023568   -0.000235  -0.000016
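The table above shows the typical first-order behaviour of Euler's method: the per-step contribution stays roughly constant (about $1.7 \\cdot 10^{-5}$ here), and the global error grows by about that amount every step. More generally, with the definition given above, Euler's local truncation error satisfies\n\n$$\\tau_{k} = \\frac{h_{k}}{2} y''(\\xi_{k}) = O(h),$$\n\nso the error committed in a single step is $O(h^{2})$ and, after the roughly $t/h$ steps needed to reach a fixed time $t$, the accumulated global error is $O(h)$: halving the step size roughly halves the global error.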
    \n\n\n\n\n```python\nfig, ax = plt.subplots(figsize=(13, 8))\nplt.plot(df4['x'], df4['global_err'], label=\"Global truncation error\")\nplt.plot(df4['x'], df4['local_err'], label=\"Local truncation error\")\nax.legend(loc='upper left', fancybox=True, framealpha=1, shadow=True, borderpad=1, frameon=True)\nax.set(title=\"Truncation errors\", xlabel=\"Time\", ylabel=\"Error\");\n```\n\nAs we can see, the global error grows indefinitely although the local errors remain near zero. This seems to be the case where the global error is the sum of the local errors. However, it is important to note that this is not always the case: The global error may be greater or smaller than the sum of local errors. In general, if the solutions of the ODE are diverging, then the global error is greater than the sum of local errors. On the other hand, if the solutions of the ODE are converging, then the global error is smaller than the sum of local errors.\n\n\n```python\nnp.allclose(df4['global_err'], df4['local_err'].cumsum()) # Equality for floats.\n```\n\n\n\n\n True\n\n\n\n## References\n\n* Guidi, L., Notas da disciplina C\u00e1lculo Num\u00e9rico. Dispon\u00edvel em [Notas da disciplina C\u00e1lculo Num\u00e9rico](http://www.mat.ufrgs.br/~guidi/grad/MAT01169/calculo_numerico.pdf)\n* Heath, M. T., Scientific Computing: An Introductory Survey, 2nd Edition, McGraw Hill, 2002.\n", "meta": {"hexsha": "a5b3171caade804e938c37543c2760a0234d2a43", "size": 503318, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/math/euler_method.ipynb", "max_stars_repo_name": "kmyokoyama/machine-learning", "max_stars_repo_head_hexsha": "05c41cfa1d2c070ce4f476a20f5ad0c5bd6a1fe7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notebooks/math/euler_method.ipynb", "max_issues_repo_name": "kmyokoyama/machine-learning", "max_issues_repo_head_hexsha": "05c41cfa1d2c070ce4f476a20f5ad0c5bd6a1fe7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/math/euler_method.ipynb", "max_forks_repo_name": "kmyokoyama/machine-learning", "max_forks_repo_head_hexsha": "05c41cfa1d2c070ce4f476a20f5ad0c5bd6a1fe7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 336.6675585284, "max_line_length": 99966, "alphanum_fraction": 0.9047739203, "converted": true, "num_tokens": 9604, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.8991213880824789, "lm_q2_score": 0.9241418210233832, "lm_q1q2_score": 0.830915676903614}} {"text": "## Chapter 3 problems\n\n\n```python\nfrom sympy import *\ninit_printing()\n```\n\n\n```python\n\n```\n\n\n```python\nA = Matrix([\n[3, 3],\n[2, S(3)/2]])\nA\n```\n\n\n```python\nb = Matrix([6,5])\n```\n\n\n```python\nAUG = A.row_join(b)\nAUG # the augmented matrix\n```\n\n### Alice\n\n\n```python\nAUGA = AUG.copy()\nAUGA[0,:] = AUGA[0,:]/3\nAUGA\n```\n\n\n```python\nAUGA[1,:] = AUGA[1,:] - 2*AUGA[0,:]\nAUGA\n```\n\n\n```python\nAUGA[1,:] = -2*AUGA[1,:]\nAUGA\n```\n\n\n```python\nAUGA[0,:] = AUGA[0,:] - AUGA[1,:]\nAUGA\n```\n\n### Bob\n\n\n```python\nAUGB = AUG.copy()\nAUGB[0,:] = AUGB[0,:] - AUGB[1,:]\nAUGB\n```\n\n\n```python\nAUGB[1,:] = AUGB[1,:] - 2*AUGB[0,:]\nAUGB\n```\n\n\n```python\nAUGB[1,:] = -1*S(2)/3*AUGB[1,:]\nAUGB\n```\n\n\n```python\nAUGB[0,:] = AUGB[0,:] - S(3)/2*AUGB[1,:]\nAUGB\n```\n\n\n```python\n\n```\n\n### Charlotte\n\n\n```python\nAUGC = AUG.copy()\nAUGC[0,:], AUGC[1,:] = AUGC[1,:], AUGC[0,:]\nAUGC\n```\n\n\n```python\nAUGC[0,:] = AUGC[0,:]/2\nAUGC\n```\n\n\n```python\nAUGC[1,:] = AUGC[1,:] - 3*AUGC[0,:]\nAUGC\n```\n\n\n```python\nAUGC[1,:] = S(4)/3*AUGC[1,:]\nAUGC\n```\n\n\n```python\nAUGC[0,:] = AUGC[0,:] - S(3)/4*AUGC[1,:]\nAUGC\n```\n\n### P3.3\n\n\n```python\n# define agmented matrices for three systems of eqns. with unique sol'ns\nA = Matrix([\n [ -1, -2, -2],\n [ 3, 3, 0]])\n \nB = Matrix([\n [ 1, -1, -2, 1],\n [-2, 3, 3, -1],\n [-1, 0, 1, 2]])\n\nC = Matrix([\n [ 2, -2, 3, 2],\n [ 1, -2, -1, 0],\n [-2, 2, 2, 1]])\n\n```\n\n\n```python\nA\n```\n\n\n```python\nA.rref()\n```\n\n\n```python\nB\n```\n\n\n```python\nB.rref()\n```\n\n\n```python\nC\n```\n\n\n```python\nC.rref()\n```\n\n### P3.4\n\n\n```python\n# now for three systems of eqns. with infinitely many sol'ns\nD = Matrix([\n [ -1, -2, -2],\n [ 3, 6, 6]])\n \nE = Matrix([\n [ 1, -1, -2, 1],\n [-2, 3, 3, -1],\n [-1, 2, 1, 0]])\n\nF = Matrix([\n [ 2, -2, 3, 2],\n [ 0, 0, 5, 3],\n [-2, 2, 2, 1]])\n```\n\n### Solving d)\n\n\n```python\nD\n```\n\n\n```python\nD.rref()\n```\n\n\n```python\nD[0:2,0:2].nullspace()\n```\n\n\n```python\n# the solutions to the sytem of equations represented by D\n# is of the form point + nullspace\npoint = D.rref()[0][:,2]\nnullspace = D[0:2,0:2].nullspace()\n```\n\n\n```python\n# the point is also called he particular solution\npoint\n```\n\n\n```python\n# if A aug matrix is [A|b], then the point satisfies A*point = b.\nprint( D[0:2,0:2]*point == D[:,2] )\nD[0:2,0:2]*point\n```\n\n### Null space\n\n\n```python\n# the nullspace of A in aug. 
matrix [A|b] is one dimensional and spanned by\nn = nullspace[0]\nn\n# every vector n in the nullspace of A satisfies A*n=0\n```\n\n\n```python\n# so solution to A*x=b is any (point+s*n) where s is any real number\n# since A*(point +s*n) = A*point + sA*n = A*point + 0 = b.\n# verify claim for 20 values of s in range -5,-4,-3,-2,-1,0,1,2,3,4,5\nfor s in range(-5,6):\n print( D[0:2,0:2]*(point + s*n), \n D[0:2,0:2]*(point + s*n) == D[:,2] )\n```\n\n Matrix([[-2], [6]]) True\n Matrix([[-2], [6]]) True\n Matrix([[-2], [6]]) True\n Matrix([[-2], [6]]) True\n Matrix([[-2], [6]]) True\n Matrix([[-2], [6]]) True\n Matrix([[-2], [6]]) True\n Matrix([[-2], [6]]) True\n Matrix([[-2], [6]]) True\n Matrix([[-2], [6]]) True\n Matrix([[-2], [6]]) True\n\n\n### Solving e)\n\n\n```python\nE\n```\n\n\n```python\nE.rref()\n```\n\n\n```python\npoint_E = E.rref()[0][:,3]\nnullspace_E = E[0:3,0:3].nullspace()[0]\ns = symbols('s')\npoint_E + s*nullspace_E\n```\n\n### Solving f)\n\n\n```python\nF\n```\n\n\n```python\nF.rref()\n```\n\n\n```python\npoint_F = F.rref()[0][:,3]\nnullspace_F = F[0:3,0:3].nullspace()[0]\ns = symbols('s')\npoint_F + s*nullspace_F\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "5ced05e8a26659d7135d7106ac58ada9f303c12b", "size": 63698, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "chapter03_problems.ipynb", "max_stars_repo_name": "ChidinmaKO/noBSLAnotebooks", "max_stars_repo_head_hexsha": "c0102473f1e6625fa5fb62768d4545059959fa26", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapter03_problems.ipynb", "max_issues_repo_name": "ChidinmaKO/noBSLAnotebooks", "max_issues_repo_head_hexsha": "c0102473f1e6625fa5fb62768d4545059959fa26", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapter03_problems.ipynb", "max_forks_repo_name": "ChidinmaKO/noBSLAnotebooks", "max_forks_repo_head_hexsha": "c0102473f1e6625fa5fb62768d4545059959fa26", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 55.8264680105, "max_line_length": 3188, "alphanum_fraction": 0.7510282897, "converted": true, "num_tokens": 1479, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9241418137109955, "lm_q2_score": 0.8991213711878918, "lm_q1q2_score": 0.8309156547158956}} {"text": "# AutoDiff by Symboic Representation in Julia\n\n\n```julia\nusing Symbolics\n```\n\n\n```julia\ni(x) = x\nf(x) = 3x^2\ng(x) = 2x^2\nh(x) = x^2\nw_vec = [i, h, g, f]\n\n@variables x \n```\n\n\n\n\n\\begin{equation}\n\\left[\n\\begin{array}{c}\nx \\\\\n\\end{array}\n\\right]\n\\end{equation}\n\n\n\n\n\n```julia\nfunction forward_fn(w_vec, x, i::Int)\n y = w_vec[i](x)\n i == size(w_vec)[1] ? 
y : [y; forward_fn(w_vec,y,i+1)] \nend\n```\n\n\n\n\n forward_fn (generic function with 1 method)\n\n\n\n\n```julia\nx_vec = forward_fn(w_vec, x, 1)\ndisplay(x_vec)\n```\n\n\n\\begin{equation}\n\\left[\n\\begin{array}{c}\nx \\\\\nx^{2} \\\\\n2 x^{4} \\\\\n12 x^{8} \\\\\n\\end{array}\n\\right]\n\\end{equation}\n\n\n\n\n```julia\nfunction gradient(w_i, x_i_1)\n @variables x\n dy = expand_derivatives(Differential(x)(w_i(x)))\n (substitute(dy, (Dict(x=>x_i_1,))),)\nend\n\nfunction reverse_autodiff(w_vec, x_vec, i::Int)\n i == 1 ? 1 :\n gradient(w_vec[i], x_vec[i-1])[1] * \n reverse_autodiff(w_vec, x_vec, i-1)\nend\n```\n\n\n\n\n reverse_autodiff (generic function with 1 method)\n\n\n\n\n```julia\ny_ad = x_vec[end]\ndisplay(y_ad)\ndy_ad = reverse_autodiff(w_vec, x_vec, size(w_vec)[1])\ndisplay(dy_ad)\n```\n\n\n\\begin{equation}\n12 x^{8}\n\\end{equation}\n\n\n\n\n\\begin{equation}\n96 x^{7}\n\\end{equation}\n\n\n\n## Check by theory\n\n\n```julia\ny_th = f(g(h(x)))\ndisplay(y_th)\ndy_th = expand_derivatives(Differential(x)(y_th))\ndisplay(dy_th)\n```\n\n\n\\begin{equation}\n12 x^{8}\n\\end{equation}\n\n\n\n\n\\begin{equation}\n96 x^{7}\n\\end{equation}\n\n\n\n## Check by Zygote\n\n\n```julia\nusing Symbolics\nusing Zygote\n\nf(x) = 3x^2\ng(x) = 2x^2\nh(x) = x^2\ny(x) = f(g(h(x)))\ndisplay(y(x))\n\ndy(x) = Zygote.gradient(y,x)[1]\ndisplay(dy(x))\n```\n\n\n\\begin{equation}\n12 x^{8}\n\\end{equation}\n\n\n\n\n\\begin{equation}\n96 x^{7}\n\\end{equation}\n\n\n\n\n```julia\nfunction y(x)\n N = 5\n y = 1\n for i=1:N\n y *= x\n end\n y\nend\n\ndisplay(y(x))\ndy(x) = Zygote.gradient(y,x)[1]\ndy(x)\n```\n\n\n\\begin{equation}\nx^{5}\n\\end{equation}\n\n\n\n\n\n\n\\begin{equation}\n5 x^{4}\n\\end{equation}\n\n\n\n\n\n```julia\nfunction y(x, N)\n # N = 5\n y = 1\n for i=1:N\n y *= x\n end\n y\nend\n\ndisplay(y(x, 5))\ndy(x,N) = Zygote.gradient(y,x,N)[1]\ndy(x,5)\n```\n\n\n\\begin{equation}\nx^{5}\n\\end{equation}\n\n\n\n\n\n\n\\begin{equation}\n5 x^{4}\n\\end{equation}\n\n\n\n\n## All Codes\n\n\n```julia\nfunction gradient(w_i, x_i_1) # 1) Newly added\n @variables x\n dy = expand_derivatives(Differential(x)(w_i(x)))\n (substitute(dy, (Dict(x=>x_i_1,))),)\nend\n\nfunction main(w_vec)\n @variables x # 2) Replaced from x = 2.0 \n x_vec = forward_fn(w_vec, x, 1)\n y_ad = x_vec[end]\n dy_ad = reverse_autodiff(w_vec, x_vec, size(w_vec)[1])\n return x_vec, y_ad, dy_ad\nend\n\ni(x) = x\nf(x) = 3x^2\ng(x) = 2x^2\nh(x) = x^2\nw_vec = [i, h, g, f]\nx_vec, y_ad, dy_ad = main(w_vec)\ndisplay(x_vec)\ndisplay(y_ad)\ndisplay(dy_ad)\n\n# 3) Verification code\n@variables x\ny_th = f(g(h(x)))\ndisplay(y_th)\ndy_th = expand_derivatives(Differential(x)(y_th))\ndisplay(dy_th)\n```\n\n\n\\begin{equation}\n\\left[\n\\begin{array}{c}\nx \\\\\nx^{2} \\\\\n2 x^{4} \\\\\n12 x^{8} \\\\\n\\end{array}\n\\right]\n\\end{equation}\n\n\n\n\n\\begin{equation}\n12 x^{8}\n\\end{equation}\n\n\n\n\n\\begin{equation}\n96 x^{7}\n\\end{equation}\n\n\n\n\n\\begin{equation}\n12 x^{8}\n\\end{equation}\n\n\n\n\n\\begin{equation}\n96 x^{7}\n\\end{equation}\n\n\n\n\n```julia\n\n```\n", "meta": {"hexsha": "3a1b7ff4c7fffa8397f2aaa76972e1f16fb89f52", "size": 10755, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "diffprog/julia_dp/autodiff_chain_rule-symb.ipynb", "max_stars_repo_name": "jskDr/keraspp_2021", "max_stars_repo_head_hexsha": "dc46ebb4f4dea48612135136c9837da7c246534a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2021-09-21T15:35:04.000Z", 
"max_stars_repo_stars_event_max_datetime": "2021-12-14T12:14:44.000Z", "max_issues_repo_path": "diffprog/julia_dp/autodiff_chain_rule-symb.ipynb", "max_issues_repo_name": "jskDr/keraspp_2021", "max_issues_repo_head_hexsha": "dc46ebb4f4dea48612135136c9837da7c246534a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "diffprog/julia_dp/autodiff_chain_rule-symb.ipynb", "max_forks_repo_name": "jskDr/keraspp_2021", "max_forks_repo_head_hexsha": "dc46ebb4f4dea48612135136c9837da7c246534a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 19.3783783784, "max_line_length": 68, "alphanum_fraction": 0.4065086007, "converted": true, "num_tokens": 1211, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9381240090865197, "lm_q2_score": 0.8856314738181875, "lm_q1q2_score": 0.8308321487915212}} {"text": "```python\n# Problem (2)\n# Part (a)\n\nfrom sympy.solvers import solve\nfrom sympy import Symbol\nfrom sympy import *\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy.sparse import *\nfrom numpy.linalg import inv\nfrom array import *\nfrom scipy import linalg\nx=Symbol('x')\n\n# function to find k and f for linear elements\ndef kfmatixLin(nEle,domainLen,a,c,f):\n xb=[]\n xa=[]\n for i in range(1,nEle+1):\n xb.append(i*(domainLen/nEle))\n xa.append((i-1)*(domainLen/nEle))\n k=[]\n Ftemp=[]\n # for ith element # NOTE: [[element 1 k's],[element 2 k's], ...]\n for i in range(nEle):\n psi_1 = (xb[i]-x)/(xb[i]-xa[i])\n psi_2 = (x-xa[i])/(xb[i]-xa[i])\n # print(psi_1)\n # print(psi_2)\n k.append([])\n Ftemp.append(integrate(f*psi_1 ,(x, xa[i], xb[i])))\n Ftemp.append(integrate(f*psi_2 ,(x, xa[i], xb[i])))\n k[i].append(integrate( a*diff(psi_1,x)*diff(psi_1,x)+c*psi_1*psi_1,(x, xa[i], xb[i])))\n k[i].append(integrate( a*diff(psi_1,x)*diff(psi_2,x)+c*psi_1*psi_2,(x, xa[i], xb[i])))\n k[i].append(integrate( a*diff(psi_2,x)*diff(psi_1,x)+c*psi_2*psi_1,(x, xa[i], xb[i])))\n k[i].append(integrate( a*diff(psi_2,x)*diff(psi_2,x)+c*psi_2*psi_2,(x, xa[i], xb[i])))\n\n F=[]\n F.append(Ftemp[0])\n for i in range(0,len(Ftemp)-2,2):\n F.append(Ftemp[i+1]+Ftemp[i+2])\n F.append(Ftemp[len(Ftemp)-1])\n\n # print('k = ',k)\n # print('Ftemp',Ftemp)\n # print('F = ',F)\n # two diagonals of the tridiagonal matrix\n diagA=[]\n diagB=[]\n # for three element 4*4 k matrix\n for i in range(nEle):\n diagA.append(k[i][1])\n # print('diagA',diagA)\n # NOTE: no need for diagC as it will always be same as diagA\n\n diagB.append(k[0][0])\n for i in range(nEle-1):\n diagB.append(k[i][3]+k[i+1][0])\n diagB.append(k[nEle-1][3])\n # print('diagB',diagB)\n diagA=np.array(diagA, dtype=np.float64)\n diagB=np.array(diagB, dtype=np.float64)\n\n K = np.array( diags([diagB,diagA,diagA], [0,-1, 1]).todense() )\n return(K,F)\n# function to find k and f for Quadratic elements\ndef kfmatixQad(nEle,domainLen,a,c,f):\n xb=[]\n xa=[]\n xc=[]\n for i in range(1,nEle+1):\n xb.append(i*(domainLen/nEle))\n xa.append((i-1)*(domainLen/nEle))\n for i in range(nEle):\n xc.append(0.5*xa[i]+0.5*xb[i])\n # print('xa',xa)\n # print('xb',xb)\n # print('xc',xc)\n k=[]\n Ftemp=[]\n # for ith element # NOTE: [[element 1 k's],[element 2 k's], ...]\n for i in range(nEle):\n psi_1 = ((x-xc[i])*(x-xb[i]))/((xa[i]-xc[i])*(xa[i]-xb[i]))\n psi_2 = 
((x-xa[i])*(x-xb[i]))/((xc[i]-xa[i])*(xc[i]-xb[i]))\n psi_3 = ((x-xa[i])*(x-xc[i]))/((xb[i]-xa[i])*(xb[i]-xc[i]))\n # print(psi_1)\n # print(psi_2)\n k.append([])\n Ftemp.append(integrate(f*psi_1 ,(x, xa[i], xb[i])))\n Ftemp.append(integrate(f*psi_2 ,(x, xa[i], xb[i])))\n Ftemp.append(integrate(f*psi_3 ,(x, xa[i], xb[i])))\n k[i].append(integrate( a*diff(psi_1,x)*diff(psi_1,x)+c*psi_1*psi_1,(x, xa[i], xb[i])))\n k[i].append(integrate( a*diff(psi_1,x)*diff(psi_2,x)+c*psi_1*psi_2,(x, xa[i], xb[i])))\n k[i].append(integrate( a*diff(psi_1,x)*diff(psi_3,x)+c*psi_1*psi_3,(x, xa[i], xb[i])))\n\n k[i].append(integrate( a*diff(psi_2,x)*diff(psi_1,x)+c*psi_2*psi_1,(x, xa[i], xb[i])))\n k[i].append(integrate( a*diff(psi_2,x)*diff(psi_2,x)+c*psi_2*psi_2,(x, xa[i], xb[i])))\n k[i].append(integrate( a*diff(psi_2,x)*diff(psi_3,x)+c*psi_2*psi_3,(x, xa[i], xb[i])))\n\n k[i].append(integrate( a*diff(psi_3,x)*diff(psi_1,x)+c*psi_3*psi_1,(x, xa[i], xb[i])))\n k[i].append(integrate( a*diff(psi_3,x)*diff(psi_2,x)+c*psi_3*psi_2,(x, xa[i], xb[i])))\n k[i].append(integrate( a*diff(psi_3,x)*diff(psi_3,x)+c*psi_3*psi_3,(x, xa[i], xb[i])))\n\n F=[]\n F.append(Ftemp[0])\n F.append(Ftemp[1])\n t=0\n i=0\n while t 0 $iff$ $\\operatorname{ker}(A)$ has infinit solution\n\n#### Dimension theorem\nFor any $m\\times n$ matrix $A$, \n$$\\operatorname{rank}(A) + \\operatorname{nullity}(A) = n.$$\n\n##### Exercise 5\nLet \n```python\nA = sympy.Matrix([[1,3,3,18], \n [5,15,16,95], \n [-5,-15,-15,-90]])\nR,pvts = A.rref()\n```\nUse the same technique as Exercise 4 to solve the kernel of $A$. \nThen verify your answer by `A.nullspace()`.\n\n\n```python\n### your answer here\n```\n\n##### Exercise 6\nLet \n```python\nA = np.array([[1,1,1], \n [1,1,1]])\nA_sym = sympy.Matrix(A)\n```\nUse the vectors in `A_sym.nullspace()` to draw the grid. \nCheck if the space is the same as what you did in Exercise 1.\n\n\n```python\n### your answer here\n```\n\n##### Exercise 7\nLet \n```python\nA = sympy.Matrix([[1,-1,0], \n [1,0,-1]])\nR,pvts = A.rref()\n```\nDraw the grid in red using the rows of $A$ and draw the grid in blue using the rows of $R$. 
\nAre they the same space?\n\n\n\n```python\n### your answer here\n```\n", "meta": {"hexsha": "2d745afd7eabd277adc8193e8643917416a71b66", "size": 10753, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Mathematics/Linear Algebra/0.0b Supplement of MIT Course - Concept Explanations with Problems_and_Questions to Solve/05-Solving-Ax-=-0.ipynb", "max_stars_repo_name": "okara83/Becoming-a-Data-Scientist", "max_stars_repo_head_hexsha": "f09a15f7f239b96b77a2f080c403b2f3e95c9650", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Mathematics/Linear Algebra/0.0b Supplement of MIT Course - Concept Explanations with Problems_and_Questions to Solve/05-Solving-Ax-=-0.ipynb", "max_issues_repo_name": "okara83/Becoming-a-Data-Scientist", "max_issues_repo_head_hexsha": "f09a15f7f239b96b77a2f080c403b2f3e95c9650", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Mathematics/Linear Algebra/0.0b Supplement of MIT Course - Concept Explanations with Problems_and_Questions to Solve/05-Solving-Ax-=-0.ipynb", "max_forks_repo_name": "okara83/Becoming-a-Data-Scientist", "max_forks_repo_head_hexsha": "f09a15f7f239b96b77a2f080c403b2f3e95c9650", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2022-02-09T15:41:33.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-11T07:47:40.000Z", "avg_line_length": 24.1098654709, "max_line_length": 242, "alphanum_fraction": 0.4834929787, "converted": true, "num_tokens": 1821, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9284088025362857, "lm_q2_score": 0.8947894611926921, "lm_q1q2_score": 0.8307304121879955}} {"text": "# Principal Component Analysis (PCA)\n\nIn one of the previous tutorial, we looked at line fitting using linear regression. Here, we were predicting $\\mathbf{y_i}$ from $\\mathbf{x_i}$ to minimize the following error:\n\n\\begin{equation}\n\\sum_{i=1}^\\mathbf{N} (\\mathbf{y_i - f(w, x_i)})^2\n\\end{equation}\n\nIn a related sense, we can interpret PCA performing dimensionality reduction from three different view points:\n\n1. Minimizing Orthogonal distance between input data points and decision boundaries.\n2. Maximizing variance along the new dimensions.\n3. A representation/compression from which one can reconstruct the original data with minimal error.\n\nPCA make three important assumptions:\n\n1. Linearity\n2. Principal components (or eigenvectors) are orthogonal\n3. PCs with large variances contain more information and small variances are considered as noise. \n\n## Minimizing Orthogonal distance\n\nTo get an intuitive understanding of this viewpoint, consider a point and a line in 2D space as shown below.\n\n\n\nOne can find the orthogonal distance, $\\mathbf{d}_{\\perp}$, between the point and the line by using Pythagoras theorem and principle of vector projection. The equation defined for Figure 1 is as follows:\n\n\\begin{equation}\npoint^{\\top}point = line^{\\top}point + \\mathbf{d}_{\\perp}\n\\end{equation}\n\nBut why do we need to minimize orthogonal distance? Note that when we minimize the orthogonal distance to the least value i.e. 0, the line passes through the point. 
Turns out that this line is the best fit line for this point.\n\nWe can generalize this idea by considering a set of points in 2D where $\\mathbf{x = (x_1, x_2, x_3,...., x_N)_i}$. Note that $\\mathbf{x_i = [x^1, x^2]^{\\top}}$ and $\\mathbf{p}$ is one of the possibilities for a line in a linear subspace.\n\n\n\nIn Figure 2, between the 2D points and the given line, the average orthogonal distance $= 1.402$.\n\nSimilarly, let's analyze Figure 3. \n\n\n\nHere, the given line is the best fit line achieved through numpy's linear regression. Hence, the average orthogonal distance $= 0.127$.\n\nAmong the possible set of lines in linear subspace, the average orthogonal distance calculated for Figure 2 is the maximum. Why? Because the given line in Figure 2 is perpendicular to the best fit line mentioned in Figure 3.\n\nBy using our understanding about minimizing orthogonal distance from the example above, we can now understand how PCA uses the above equation to perform dimensionality reduction (1st viewpoint).\n\nBefore we proceed, it would be good to know dimensions of $\\mathbf{X}$ and $\\mathbf{w}$. \n\n$dim(\\mathbf{X}) \\in (\\mathbf{n, m})$\n\n$dim(\\mathbf{w}) \\in (\\mathbf{n, r})$\n\nwhere, \n\n* $\\mathbf{n}:$ Number of dimensions in $\\mathbf{X}$\n* $\\mathbf{m}:$ Number of examples in $\\mathbf{X}$\n* $\\mathbf{r}:$ Number of dimensions to retain after performing PCA such that $\\mathbf{r \\leq n}$\n* $\\mathbf{X}:$ Input data matrix\n* $\\mathbf{w}:$ Set of eigen vectors\n\nLet us assume that we have an input $\\mathbf{X}$ where each of its dimensions has been centered.\n\nWe would like to minimize the orthogonal distance, $\\mathbf{d}_{\\perp}$, between a set of points and a line.\n\n\\begin{equation} \n\\min_{\\mathbf{w}} \\mathbf{d_{\\perp}^2(X, w)}\n\\end{equation}\n\nwhich is nothing but,\n\n\\begin{equation}\n\\min_{\\mathbf{w}} \\mathbf{X^{\\top}X - (w^{\\top}X)^2}\n\\end{equation}\n\nSince $\\mathbf{X^{\\top}X}$ doesn't contain any $w$ term, a compact notation of the above equation is as follows:\n\n\\begin{equation}\n\\max_{\\mathbf{w}} \\mathbf{w^{\\top} (X^{\\top}X) w}\n\\end{equation}\n\nWe see that this is an unconstrained optimization where $\\mathbf{w}$ can lead to infinity. Intuitively, as $\\mathbf{w\\to\\infty}$, so will $\\mathbf{d}_{\\perp} \\to\\infty$. To prevent this, we add a lagrangian constraint defined as $\\lambda(\\mathbf{w^{\\top}w - 1})$. This makes $\\mathbf{w^{\\top}w} = 1$, leading to orthogonality of w and maximizing our objective.\n\nFrom the lecture notes, our final optimization problem defined with lagrangian constraint looks as follows:\n\n\\begin{equation*}\n\\mathbf{(X^{\\top}X)w} = \\lambda \\mathbf{w}\n\\end{equation*}\n\nHere, $\\mathbf{w}$ and $\\lambda$ is a series of eigen vectors and eigen values, respectively, of the covariance matrix $\\Sigma = \\mathbf{X^{\\top}X}$.\n\nAt this point, we can make an important conclusion:\n\n* In an n-dimensional input space, the line that minimizes the orthogonal distance has a direction of the first eigen vector of the covariance matrix.\n\n## Maximizing Variance\n\nFrom this viewpoint, we will see how PCA performs dimensionality reduction by retaining maximum variance along new orthogonal axes.\n\nHere, the data $\\mathbf{X}$ is projected onto a set of new orthogonal axes $\\mathbf{u}$ to obtain a new representation $\\mathbf{Z}$. 
Note that the new orthogonal axes generated by PCA, $\\mathbf{u}$, is a linear combination of existing features or dimensions.\n\nLet's assume that $\\mathbf{X}$ is mean-centered. Our objective function for obtaining maximum variance alone new axes is,\n\n\\begin{equation}\n\\operatorname*{argmax} \\frac{1}{\\mathbf{N}} \\mathbf{u^{\\top}\\Sigma u}\n\\end{equation}\n\nWith the constraint of $\\|\\mathbf{u}\\| = 1$, the first principal component, i.e. the eigen vector corresponding to the largest eigen value of the covariance matrix $\\Sigma$, maximizes the variance.\n\nWe will see this from an example below.\n\n## Implementation\n\n#### Imports\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom numpy import pi\nimport seaborn as sns; sns.set();\nfrom sklearn.datasets import load_wine\nfrom mpl_toolkits.mplot3d import Axes3D\n\nsns.set();\n\nnp.random.seed(7)\n```\n\n#### Create synthetic dataset\n\n\n```python\nX = np.dot(np.random.rand(2, 2), np.random.randn(2, 50)).T\n\n# Calculating mean of 'X' across every dimension\nX_mean = np.mean(X, axis=0)\n\nplt.scatter(X[:, 0], X[:, 1])\nplt.xlabel('$X_0$')\nplt.ylabel('$X_1$')\nplt.axis('equal');\n```\n\n#### Implementing PCA\n\nBefore we dive in, let us walkthrough the pseudo code for the PCA algorithm.\n\n1. Center the data by subtracting the mean $\\mathbf{\\mu}$.\n2. Compute the covariance matrix $\\mathbf{\\Sigma = \\frac{1}{N} X^{\\top} X}$.\n3. Compute eigen values and eigenvectors of $\\Sigma$.\n4. Take the $k$ eigen vectors corresponding to the largest $k$ eigen values. Keep the eigen vectors as the rows and create a matrix $U$ of $k \\times d$.\n\n\n```python\ndef pca_transform(X_input, num_components):\n\n \"\"\" PCA algorithm as per our pseudo code above.\n\n Parameters:\n --------------\n\n X_input: ndarray (num_examples (rows) x num_features(columns))\n Our input data on which we would like to perform PCA.\n\n num_components: int\n Defines the kth number of principal components (or eigenvectors) to keep\n while performing PCA. These components will be chosen in decreasing \n order of variances (or eigenvalues).\n\n \"\"\"\n\n # Centering our data (Step 1)\n X_mean = np.mean(X_input, axis=0)\n X_mean = X_mean.reshape(1, -1)\n X_input -= X_mean\n\n num_examples = (X_input.shape)[0]\n constant = 1/(num_examples - 1)\n\n # Calculating covariance matrix (Step 2)\n cov_matrix = constant * np.dot(X_input.T, X_input)\n cov_matrix = np.array(cov_matrix, dtype=float)\n\n # Calculating eigen values and eigen vectors (or first n-principal components)\n # Step 3\n eigvals, eigvecs = np.linalg.eig(cov_matrix)\n\n # Step 4\n idx = eigvals.argsort()[::-1]\n eigvals = eigvals[idx][:num_components]\n eigvecs = np.atleast_1d(eigvecs[:, idx])[:, :num_components]\n\n X_projected = np.dot(X_input, eigvecs)\n eigvecs = eigvecs.T\n return X_projected, eigvecs, eigvals\n```\n\n#### Applying PCA to the data\n\n\n```python\n# Perform PCA on input data X\nmax_components = np.shape(X)[1]\nX_projected, principal_components, variances = pca_transform(X, max_components)\n\nprint(\"PCA components: \", principal_components)\nprint(\"PCA variance: \", variances)\n```\n\n PCA components: [[-0.61782231 -0.78631774]\n [-0.78631774 0.61782231]]\n PCA variance: [1.36739859 0.05648966]\n\n\nHere, PCA components are the eigen vectors of the covariance matrix $\\Sigma = \\mathbf{X^{\\top}X}$. PCA variances are the eigen values or the squared-length describing the importance of each principal component. 
Here, we see that $\\lambda_{\\mathbf{w_1}} > \\lambda_{\\mathbf{w_2}}$.\n\n#### Visualize the principle components\n\n\n```python\ndef draw_vector(v0, v1, eigvec, ax=None):\n ax = plt.gca()\n arrowprops=dict(arrowstyle='->',\n linewidth=2,\n shrinkA=0, shrinkB=0, color='black')\n ax.annotate('', v1, v0, arrowprops=arrowprops)\n ax.text(v1[0], v1[1], eigvec)\n```\n\n\n```python\n# plot data\nplt.scatter(X[:, 0], X[:, 1], alpha=0.35)\nplt.xlabel('$X_0$')\nplt.ylabel('$X_1$')\n\ni = 1\nfor length, vector in zip(variances, principal_components):\n v = vector * 1.8 * np.sqrt(length)\n draw_vector(X_mean, X_mean + v, \"w\"+str(i))\n i += 1\nplt.axis('equal');\n```\n\nFrom the above figure, we can see that $\\mathbf{w_1}$, the first eigen vector (or principal component), maximizes the variance by capturing the most information of the input data along its axis.\n\nAnother way to see the effect of first component maximizing variance is to visualize the reconstructed input data $\\mathbf{X}$ with only the first principal component.\n\n\n```python\n# Perform PCA on input data X and getting only the first eigen vector with maximum variance\nnum_components = 1\n# Projecting in the new space with principal components as axes\nX_new = np.dot(X_projected[:, :num_components], principal_components[:num_components, :])\n\n# Projecting the 2D data into a single dimension\nplt.scatter(X[:, 0], X[:, 1], alpha=0.3)\nplt.scatter(X_new[:, 0], X_new[:, 1], alpha=0.9)\nplt.xlabel('$X_0$')\nplt.ylabel('$X_1$')\nplt.axis('equal');\n```\n\n## Minimizing Reconstruction Loss\n\nWhile computing PCA, we transform the input data into a new feature space. This generates a series of orthogonal principal components and variances are generated from a covariance matrix $\\Sigma$, where each variance describe the importance of its respective principal component.\n\nIn this viewpoint, we posit that we only need the first k of n total principal components. These k components contribute to certain fraction of the total variance which is defined as follows:\n\n\\begin{equation}\n\\frac{\\sum_{\\mathbf{i=k+1}}^\\mathbf{d} \\lambda_{i}} {\\sum_{\\mathbf{i=1}}^\\mathbf{d} \\lambda_{i}}\n\\end{equation}\n\nFor instance, after computing PCA for a 10-dimensional input dataspace, we get 10 orthogonal principal components. However, out of these 10 components, 6 of them might explain 97% of the variance in the dataset. This means we can discard the remaining 4 components. 
Although we might slightly lose accuracy, such a dimensionality reduction can make downstream computations efficient.\n\nFrom the example below, we will see how we can perform dimensionality reduction using PCA.\n\n\n```python\nfrom sklearn.datasets import load_wine\n\nwine = load_wine()\nprint(\"Dataset(X) shape: \", wine.data.shape)\nprint(\"Target(y) shape: \", wine.target.shape)\nprint(\"Feature names: \", wine.feature_names)\nprint(\"Target names: \", wine.target_names)\n```\n\n Dataset(X) shape: (178, 13)\n Target(y) shape: (178,)\n Feature names: ['alcohol', 'malic_acid', 'ash', 'alcalinity_of_ash', 'magnesium', 'total_phenols', 'flavanoids', 'nonflavanoid_phenols', 'proanthocyanins', 'color_intensity', 'hue', 'od280/od315_of_diluted_wines', 'proline']\n Target names: ['class_0' 'class_1' 'class_2']\n\n\n\n```python\n# Perform PCA on input data 'wine.data'\nmax_components = (wine.data.shape)[1]\nX_projected, principal_components, variances = pca_transform(wine.data, max_components)\n```\n\n\n```python\ntotal_var = np.sum(variances)\nexplained_variance_ratio = variances/total_var\n\nplt.plot(range(1, 14),np.cumsum(explained_variance_ratio))\nplt.xlabel('number of components')\nplt.ylabel('cumulative explained variance');\n```\n\nFrom the above graph, we can see that the first four components capture almost 99% of the information from the input dataset. This means that we can discard the remaining 9 dimensions and still reconstruct our dataset with lesser dimensions and considerably minimum error.\n\n\n```python\n# project all 13 input dimensions to 4 new axes\nnum_components = 4\n# Projecting in the new space with principal components as axes\nX_new = np.dot(X_projected[:, :num_components], principal_components[:num_components, :])\n```\n\n\n```python\nfrom mpl_toolkits.mplot3d import Axes3D\n\nfig = plt.figure(num=None, figsize=(9, 7), dpi=80, facecolor='w', edgecolor='k')\nax = fig.add_subplot(111, projection='3d')\n\nx = X_new[:, 0]\ny = X_new[:, 1]\nz = X_new[:, 2]\nc = X_new[:, 3]\n\nimg = ax.scatter(x, y, z, c=c, cmap=plt.hot())\nfig.colorbar(img)\nplt.show()\n```\n", "meta": {"hexsha": "e115f0cf80c11d5efebb95c1c83232b25d9d6769", "size": 188156, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "HW5/PCA Tutorial.ipynb", "max_stars_repo_name": "avani17101/SMAI-Assignments", "max_stars_repo_head_hexsha": "8d408911f964768bf50d965f881d10d37ac8f7f7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2021-03-05T12:28:39.000Z", "max_stars_repo_stars_event_max_datetime": "2021-03-05T12:28:44.000Z", "max_issues_repo_path": "HW5/PCA Tutorial.ipynb", "max_issues_repo_name": "avani17101/Statistical-Methods-in-AI", "max_issues_repo_head_hexsha": "8d408911f964768bf50d965f881d10d37ac8f7f7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "HW5/PCA Tutorial.ipynb", "max_forks_repo_name": "avani17101/Statistical-Methods-in-AI", "max_forks_repo_head_hexsha": "8d408911f964768bf50d965f881d10d37ac8f7f7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-03-05T12:21:26.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-05T12:21:26.000Z", "avg_line_length": 234.0248756219, "max_line_length": 95280, "alphanum_fraction": 0.9136620676, "converted": true, "num_tokens": 3407, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.919642526773001, "lm_q2_score": 0.9032942158155877, "lm_q1q2_score": 0.8307077750520835}} {"text": "# Symbolic Mathematics in Python \n\nThere are times when you need to solve a difficult problem symbollically or analytically. If you have ever used Wolfram Alpha, then you have already done this. Sympy is a python library that allows you to do symbolic mathematics in python. \n\n\n```\nimport sympy as sym\n```\n\n## 1. Introduction\n\n### Example 1.1\n\nIf you try to write the follwing in python by itself, you will get an error telling you x is undefined:\n$$x-x$$\n\n\n```\nx-x\n```\n\n(The error above is on purpose). Variables in python need to be defined before you can say something specific about them\n\n\n```\nx=102\nx-x\n```\n\nIf you are trying to show that $x-x=0$ is true for any $x$, the above answer would not be valid. Instead you can use a symbolic expression to show that it is true\n\n**First we define the variable as a symmbolic expression**\n\n\n```\nx = sym.symbols('x')\n```\n\n\n```\n\n```\n\n**Now we can use the variable in a symbolic expression**\n\n\n```\nx-x\n```\n\n### Example 1.2\n\nSympy can be used to perform algebreic operations (among other things). Consider the following expression: $$(3a-4b)^3$$\n\n\nWe can use symppy to expand the expression algebraically.\n\n**First we need to define the variables as symbolic expressions**\n\n\n```\na,b = sym.symbols('a,b')\n```\n\n**Side note** Notice that the left hand side of the epression has two variables being defined. Python can define more than one variable at a time:\n\n\n```\nx1,y1 =10,20\nprint(x1)\nprint(y1)\n```\n\n 10\n 20\n\n\n**Back to the expression** We can define an expression using the variables $a$ and $b$.\n\n\n```\nexpr = (3*a-4*b)**3\nprint(expr)\n```\n\n (3*a - 4*b)**3\n\n\nWe can also make it look nicer in our notebook. This doesn't affect the math, but it makes our notebook more readable.\n\n\n```\nsym.init_printing()\n```\n\n\n```\nexpr\n```\n\n**Now we expand the function algebreically**\n\n\n```\nexpr.expand()\n```\n\nSympy can also factor the equation\n\n\n```\nsym.factor(26*a**3-108*a**2*b+144*a*b**2-64*b**3)\n```\n\nIf you want to copy and paste a result, you print the result.\n\n\n```\nprint(sym.factor(26*a**3-108*a**2*b+144*a*b**2-64*b**3))\n```\n\n 2*(a - 2*b)*(13*a**2 - 28*a*b + 16*b**2)\n\n\nYou can also chain together functions\n\n\n```\nexpr.expand().factor()\n```\n\n### Exercise 1.1\n\nShow that the following two expressions are true.\n$$(2w-3z)(2w+3z)=4w^2-9z^2$$\n$$(2w-3z)^2\\ne4w^2-9z^2$$\n\n\n```\nw,z = sym.symbols('w,z')\nexpr = (2*\ud835\udc64-3*\ud835\udc67)*(2*\ud835\udc64+3*\ud835\udc67)\nsym.init_printing()\nexpr\n```\n\n\n```\nexpr.expand()\n```\n\n\n```\nexpr = (2*\ud835\udc64-3*\ud835\udc67)**2\nexpr\n```\n\n\n```\nexpr.expand()\n```\n\n## 2. Solving Equations\n\nSympy can be used to symbolilically solve equations. 
As before, you need to define which variables are symbols\n\n### Example 2.1\n\nUse sympy to solve the following equation\n$$ax^3+bx^2+cx+d=0$$\n\n\n```\n# Define the variables\na,b,c,d,x = sym.symbols('a,b,c,d,x')\n```\n\n\n```\n# Define the expression\nexpr=a*x**3+b*x**2+c*x+d\nexpr\n```\n\nWe can use the `solvset` function to solve this equation\n\n\n```\nsolutions=sym.solveset(expr,x)\n```\n\n\n```\nprint(solutions)\n```\n\n {-(-3*c/a + b**2/a**2)/(3*(sqrt(-4*(-3*c/a + b**2/a**2)**3 + (27*d/a - 9*b*c/a**2 + 2*b**3/a**3)**2)/2 + 27*d/(2*a) - 9*b*c/(2*a**2) + b**3/a**3)**(1/3)) - (sqrt(-4*(-3*c/a + b**2/a**2)**3 + (27*d/a - 9*b*c/a**2 + 2*b**3/a**3)**2)/2 + 27*d/(2*a) - 9*b*c/(2*a**2) + b**3/a**3)**(1/3)/3 - b/(3*a), -(-3*c/a + b**2/a**2)/(3*(-1/2 - sqrt(3)*I/2)*(sqrt(-4*(-3*c/a + b**2/a**2)**3 + (27*d/a - 9*b*c/a**2 + 2*b**3/a**3)**2)/2 + 27*d/(2*a) - 9*b*c/(2*a**2) + b**3/a**3)**(1/3)) - (-1/2 - sqrt(3)*I/2)*(sqrt(-4*(-3*c/a + b**2/a**2)**3 + (27*d/a - 9*b*c/a**2 + 2*b**3/a**3)**2)/2 + 27*d/(2*a) - 9*b*c/(2*a**2) + b**3/a**3)**(1/3)/3 - b/(3*a), -(-3*c/a + b**2/a**2)/(3*(-1/2 + sqrt(3)*I/2)*(sqrt(-4*(-3*c/a + b**2/a**2)**3 + (27*d/a - 9*b*c/a**2 + 2*b**3/a**3)**2)/2 + 27*d/(2*a) - 9*b*c/(2*a**2) + b**3/a**3)**(1/3)) - (-1/2 + sqrt(3)*I/2)*(sqrt(-4*(-3*c/a + b**2/a**2)**3 + (27*d/a - 9*b*c/a**2 + 2*b**3/a**3)**2)/2 + 27*d/(2*a) - 9*b*c/(2*a**2) + b**3/a**3)**(1/3)/3 - b/(3*a)}\n\n\n\n```\nsolutions\n```\n\nWhat if I need help. You can do this with any python function. `function?`\n\n\n```\n# Run this command to see a help box\nsym.solveset?\n```\n\n### Exercise 2.1\n\nUse the `solveset` function to solve the following chemical problem.\n\nPhosgene gas, $\\text{COCl}_2$, dissociates at high temperatures according to the following equilibrium:\n \n$$ \\text{COCl}_2 \\rightleftharpoons \\text{CO} + \\text{Cl}_2 $$\n\nAt $\\text{400 C}$, the equilibrium constant $K_c=8.05$.\u00a0 \n\nIf you start with a $\\text{0.250 M}$ phosgene sample at $\\text{400 C}$, determine the concentrations of all species at equilibrium.\n\n\n```\nx = sym.symbols('x')\nexpr = ((x**2)/(0.250-x))-8.05\nexpr\n```\n\n\n```\nsolutions=sym.solveset(expr,x)\nprint(solutions)\n```\n\n {-8.29268379803378, 0.242683798033777}\n\n\nWhy did you pick your answer?\n\n\n```\n# Concentrations cannot be negative\n```\n\n\n\n## 3. Calculus\n\nWe can use also Sympy to differentiate and integrate. Let us experiment with differentiating the following expression:\n\n$$x ^ 2 - \\cos(x)$$\n\n\n```\nsym.diff(x ** 2 - sym.cos(x), x)\n```\n\nSimilarly we can integrate:\n\n\n```\nsym.integrate(x ** 2 - sym.cos(x), x)\n```\n\nWe can also carry out definite integrals:\n\n\n```\nsym.integrate(x ** 2 - sym.cos(x), (x, 0, 5))\n```\n\n### Exercise 3.1\n\nUse Sympy to calculate the following:\n\n1. $\\frac{d(x ^2 + xy - \\ln(y))}{dy}$\n1. 
$\\int_0^5 e^{2x}\\;dx$\n\n\n```\nx,y = sym.symbols('x,y')\nsym.diff(x**2+x*y-sym.ln(y), y)\n```\n\n\n```\nsym.integrate(sym.E**(2*x), (x,0,5))\n```\n\n### Exercise 3.2\nSolve the following definate integral\n$$\\int\\limits_{ - \\infty }^\\infty {\\frac{1}{{\\sigma \\sqrt {2\\pi } }}{e^{ - \\frac{1}{2}{{\\left( {\\frac{{x - \\mu }}{\\sigma }} \\right)}^2}}}}$$\n\nHint, the sympy symbol for infinity is `oo`\n\n\n```\nfrom sympy import oo\nfrom sympy import pi\nfrom sympy import sqrt\nfrom sympy import E\nfrom sympy import Symbol\n\no,u,x = sym.symbols('o,u,x')\n\nsym.integrate((1/(o*sqrt(2*pi))*(E**(-(1/2)*((x-u)/o)**2))), (x,-oo,oo))\n```\n\n\n\n\n$\\displaystyle \\begin{cases} 0.353553390593274 \\sqrt{2} \\left(2 - \\operatorname{erfc}{\\left(\\frac{0.707106781186548 u}{o} \\right)}\\right) + 0.353553390593274 \\sqrt{2} \\operatorname{erfc}{\\left(\\frac{0.707106781186548 u}{o} \\right)} & \\text{for}\\: \\left(\\left(2 \\left|{\\arg{\\left(o \\right)}}\\right| \\leq \\frac{\\pi}{2} \\wedge \\left|{4 \\arg{\\left(o \\right)} - 2 \\arg{\\left(u \\right)}}\\right| < \\pi\\right) \\vee \\left(\\left|{4 \\arg{\\left(o \\right)} - 2 \\arg{\\left(u \\right)}}\\right| \\leq \\pi \\wedge 2 \\left|{\\arg{\\left(o \\right)}}\\right| < \\frac{\\pi}{2}\\right) \\vee \\left(\\left|{4 \\arg{\\left(o \\right)} - 2 \\arg{\\left(u \\right)}}\\right| < \\pi \\wedge 2 \\left|{\\arg{\\left(o \\right)}}\\right| < \\frac{\\pi}{2}\\right) \\vee 2 \\left|{\\arg{\\left(o \\right)}}\\right| < \\frac{\\pi}{2}\\right) \\wedge \\left(\\left(2 \\left|{\\arg{\\left(o \\right)}}\\right| \\leq \\frac{\\pi}{2} \\wedge \\left|{- 4 \\arg{\\left(o \\right)} + 2 \\arg{\\left(u \\right)} + 2 \\pi}\\right| < \\pi\\right) \\vee \\left(\\left|{- 4 \\arg{\\left(o \\right)} + 2 \\arg{\\left(u \\right)} + 2 \\pi}\\right| \\leq \\pi \\wedge 2 \\left|{\\arg{\\left(o \\right)}}\\right| < \\frac{\\pi}{2}\\right) \\vee \\left(\\left|{- 4 \\arg{\\left(o \\right)} + 2 \\arg{\\left(u \\right)} + 2 \\pi}\\right| < \\pi \\wedge 2 \\left|{\\arg{\\left(o \\right)}}\\right| < \\frac{\\pi}{2}\\right) \\vee 2 \\left|{\\arg{\\left(o \\right)}}\\right| < \\frac{\\pi}{2}\\right) \\\\\\int\\limits_{-\\infty}^{\\infty} \\frac{\\sqrt{2} e^{- \\frac{0.5 \\left(- u + x\\right)^{2}}{o^{2}}}}{2 \\sqrt{\\pi} o}\\, dx & \\text{otherwise} \\end{cases}$\n\n\n\n\n```\nfrom sympy import oo\nfrom sympy import pi\nfrom sympy import sqrt\nfrom sympy import E\nfrom sympy import Symbol\n\no,u,x = sym.symbols('o,u,x')\n\nx = Symbol('x', real=True); x\nu = Symbol('u', real=True); u\no = Symbol('o', real=True); o\n\nsym.integrate((1/(o*sqrt(2*pi))*(E**(-(1/2)*((x-u)/o)**2))), (x,-oo,oo))\n\n```\n\n\n\n\n$\\displaystyle \\begin{cases} 0.707106781186547 \\sqrt{2} & \\text{for}\\: 2 \\left|{\\arg{\\left(o \\right)}}\\right| \\leq \\frac{\\pi}{2} \\\\\\int\\limits_{-\\infty}^{\\infty} \\frac{\\sqrt{2} e^{- \\frac{0.5 \\left(- u + x\\right)^{2}}{o^{2}}}}{2 \\sqrt{\\pi} o}\\, dx & \\text{otherwise} \\end{cases}$\n\n\n\n\n```\nexpr = 0.707106781186547*sqrt(2)\na = (0.707106781186547*sqrt(2).evalf())\nprint (expr,'=', a, '=', a.round(1))\n\n```\n\n 0.707106781186547*sqrt(2) = 0.999999999999999 = 1.\n\n\nLookup Gaussian functions: https://en.wikipedia.org/wiki/Gaussian_function\nDoes your answer maake sense?\n\n## 4. Plotting with Sympy\n\nFinally Sympy can be used to plot functions. Note that this makes use of [matplotlib](http://matplotlib.org/). \n\nLet us plot $x^2$:\n\n\n```\nexpr = x ** 2\np = sym.plot(expr)\n```\n\n\n
    \n\n\n### Exercise 4.1 Plot the following function:\n\n1. $y=x + cos(x)$\n1. ${\\frac{1}{{ \\sqrt {2\\pi } }}{e^{ - \\frac{x^2}{2}}}}$\n\n\n```\nexpr1 = x + sym.cos(x)\np = sym.plot(expr1)\n```\n\n\n```\nexpr = (1/sqrt(2*pi))*E**(-x**2/2)\np = sym.plot(expr)\n```\n\n# Lecture\n\n## L1. Hydrogen Atom\n\nSympy has built in modules for the eigenfunctions of the hydrogen atom. \n\n\n```\nimport sympy.physics.hydrogen\nimport numpy as np\n```\n\nYou can caluclate the eigenvalues ($E$) in Hartrees\n\n`sym.physics.hydrogen.E_nl(n,Z)`\n\n\n```\nsym.physics.hydrogen.E_nl(1,1)\n```\n\nWe can use a loop to print out many energies\n\n\n```\nfor n in range(1,5):\n print(sym.physics.hydrogen.E_nl(n,1))\n```\n\n -1/2\n -1/8\n -1/18\n -1/32\n\n\nWe can plot the hydrogen radial wavefunction (1s orbital)\n\n\n```\nr = sym.symbols('r')\nsympy.symbols('r')\nsympy.physics.hydrogen.R_nl(1, 0, r, 1)\n```\n\n\n```\nsym.plot(sympy.physics.hydrogen.R_nl(1, 0, r, 1),(r,0,10.50))\n```\n\nAnd the probablity distribution function\n\n\n```\nsympy.symbols('r')\nprob_1s=sympy.physics.hydrogen.R_nl(1, 0, r, 1)*sympy.physics.hydrogen.R_nl(1, 0, r, 1)\nprob_1s\n```\n\n\n```\nsym.plot(prob_1s,(r,0,10))\n```\n\nPlot a 2s orbital\n\n\n```\nsympy.symbols('r')\nprob_2s=sympy.physics.hydrogen.R_nl(2, 0, r, 1)*sympy.physics.hydrogen.R_nl(2, 0, r, 1)\nprob_2s\n```\n\n\n```\nsym.plot(prob_2s,(r,0,10))\n```\n\nWe can change the range to see the node better.\n\n\n```\nsym.plot(prob_2s,(r,1,8))\n```\n\nNotice the node!\n\n### Exercise L1.1\n\nPlot the radial distriubution function for a 2p, 3s, 4s, and 3d orbital. \n\n\n```\nsympy.symbols('r')\nprob_2p=sympy.physics.hydrogen.R_nl(2, 1, r, 1)*sympy.physics.hydrogen.R_nl(2, 1, r, 1)\nprob_2p\n```\n\n\n```\nsym.plot(prob_2p,(r,0,10))\n```\n\n\n```\nsympy.symbols('r')\nprob_3s=sympy.physics.hydrogen.R_nl(3, 0, r, 1)*sympy.physics.hydrogen.R_nl(3, 0, r, 1)\nprob_3s\n```\n\n\n```\nsym.plot(prob_3s,(r,1,8))\n```\n\n\n```\nsympy.symbols('r')\nprob_4s=sympy.physics.hydrogen.R_nl(4, 0, r, 1)*sympy.physics.hydrogen.R_nl(4, 0, r, 1)\nprob_4s\n```\n\n\n```\nsym.plot(prob_4s,(r,1,8))\n```\n\n\n```\nsympy.symbols('r')\nprob_3d=sympy.physics.hydrogen.R_nl(3, 2, r, 1)*sympy.physics.hydrogen.R_nl(3, 2, r, 1)\nprob_3d\n```\n\n\n```\nsym.plot(prob_3d,(r,0,20))\n```\n\n\n```\n\n```\n", "meta": {"hexsha": "0c08bbd47e83fd24e2dcd03d9777982da86c21b9", "size": 278614, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "symbolic_math.ipynb", "max_stars_repo_name": "sju-chem264-2019/9-26-2019-symbolic-math-laurensruiz", "max_stars_repo_head_hexsha": "79a4fd3c8ab87f8cde87a92b7738c9c5348fcc75", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "symbolic_math.ipynb", "max_issues_repo_name": "sju-chem264-2019/9-26-2019-symbolic-math-laurensruiz", "max_issues_repo_head_hexsha": "79a4fd3c8ab87f8cde87a92b7738c9c5348fcc75", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "symbolic_math.ipynb", "max_forks_repo_name": "sju-chem264-2019/9-26-2019-symbolic-math-laurensruiz", "max_forks_repo_head_hexsha": "79a4fd3c8ab87f8cde87a92b7738c9c5348fcc75", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 137.5192497532, 
"max_line_length": 25740, "alphanum_fraction": 0.8098659795, "converted": true, "num_tokens": 4103, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9314625069680098, "lm_q2_score": 0.8918110526265555, "lm_q1q2_score": 0.8306885588213111}} {"text": "### part (a)\n\nUsing Krichoff's current law, the currents through junction 1, 2 and 3 can be written as:\n\\begin{equation}\n I_{t1} = I_{12} + I_{13} \\\\\n I_{t2} + I_{12} = I_{23} + I_{2g} \\\\\n I_{13} + I_{23} = I_{3g}\n\\end{equation}\n\nwhere $t$ and $g$ refer to the positive terminal and the ground resp.\n\nMultiplying these by $R$, we get:\n\\begin{equation}\n V_{+} - V_{1} = (V_{1} - V_{2}) + (V_{1}-V_{3}) \\\\\n (V_{+} - V_{2}) + (V_{1} - V_{2}) = (V_{2} - V_{3}) + (V_{2} - 0) \\\\\n (V_{1} - V_{3}) + (V_{2} - V_{3}) = V_{3} - 0\n\\end{equation}\n\nRe-arranging the terms, this can be written as\n\n$$\n\\begin{bmatrix}\n 3 & -1 & -1 \\\\\n -1 & 4 & -1 \\\\\n -1 & -1 & 3\n\\end{bmatrix}\n\\begin{bmatrix}\n V_{1}\\\\\n V_{2}\\\\\n V_{3}\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n V_{+}\\\\\n V_{+}\\\\\n 0\n\\end{bmatrix}\n$$\n\nThe LU decomposition can be written as:\n\n$$\n\\begin{bmatrix}\n 3 & -1 & -1 \\\\\n -1 & 4 & -1 \\\\\n -1 & -1 & 3\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n 1 & 0 & 0 \\\\\n -\\frac{1}{3} & 1 & 0 \\\\\n -\\frac{1}{3} & -\\frac{13}{3} & 1\n\\end{bmatrix}\n\\begin{bmatrix}\n 3 & -1 & -1 \\\\\n 0 & \\frac{11}{3} & -\\frac{4}{3} \\\\\n 0 & 0 & \\frac{24}{11}\n\\end{bmatrix}\n$$\n\nUsing Gaussian elimination, the solution is given by:\n$$\n\\begin{bmatrix}\n V_{1}\\\\\n V_{2}\\\\\n V_{3}\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n \\frac{25}{8}\\\\\n \\frac{5}{2}\\\\\n \\frac{15}{8}\n\\end{bmatrix}\n$$\n\n\n```python\nimport numpy as np\n```\n\n\n```python\nnp.linalg.solve([[3,-1,-1],[-1,4,-1],[-1,-1,3]], [5, 5, 0])\n```\n\n\n\n\n array([3.125, 2.5 , 1.875])\n\n\n\n### part (b)\n\nFor N internal junctions, all except the first two and the last two junctions are 4-way junctions. They have 2 currents going in, and 2 coming out. \nThat is, for the $i^{th}$ junction (excluding first two and last two), Krichoff's law gives:\n$$\nI_{i,i-1} + I_{i,i-2} = I_{i+2,i} + I_{i+1,i} \n$$\nMultiplying by $R$, we get:\n$$\n(V_{i} - V_{i-1}) + (V_{i} - V_{i-2}) = (V_{i+2} - V_{i}) + (V_{i+1} - V_{i})\n$$\n\nThis equation can be rearranged to give:\n$$- V_{i-1} - V_{i-2} + 4V_{i} -V_{i+1} -V_{i+2} = 0 $$\n\nJunctions $1$, $2$, $N-1$ and $N$ are connected to the terminals, so for them the equations are slightly different.\n\nThese equations can be written as:\n\n$$\n\\begin{bmatrix}\n 3 & -1 & -1 & & & & \\\\\n -1 & 4 & -1 & -1 & & & \\\\\n -1 & -1 & 4 & -1 & -1 & & & ...\\\\\n & -1 & -1 & 4 & -1 & -1 & \\\\\n & & -1 & -1 & 4 & -1 & -1 \\\\\n & & & . & & & \\\\\n & & & . & & & \\\\\n & & & . 
& & & \\\\\n & & & & & & & & -1 & -1 & 4 & -1\\\\\n & & & & & & & & & -1 & -1 & 3 \n\\end{bmatrix}\n\\begin{bmatrix}\n V_{1}\\\\\n V_{2}\\\\\n V_{3}\\\\\n .\\\\\n .\\\\\n .\\\\\n V_{N-1}\\\\\n V_{N}\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n V_{+}\\\\\n V_{+}\\\\\n 0\\\\\n 0\\\\\n .\\\\\n .\\\\\n .\\\\\n\\end{bmatrix}\n$$\n\n\n### part (c)\n\n\n```python\nA = [[3,-1,-1,0,0,0],[-1,4,-1,-1,0,0],[-1,-1,4,-1,-1,0],[0,-1,-1,4,-1,-1],[0,0,-1,-1,4,-1],[0,0,0,-1,-1,3]]\nb = [5,5]+[0]*4\nprint(np.linalg.solve(A, b))\n```\n\n [3.7254902 3.43137255 2.74509804 2.25490196 1.56862745 1.2745098 ]\n\n\n### part (d)\n\n\n```python\n######################################################################\n#\n# Function to solve a banded system of linear equations using\n# Gaussian elimination and backsubstitution\n#\n# x = banded(A,v,up,down)\n#\n# This function returns the vector solution x of the equation A.x = v,\n# where v is an array representing a vector of N elements, either real\n# or complex, and A is an N by N banded matrix with \"up\" nonzero\n# elements above the diagonal and \"down\" nonzero elements below the\n# diagonal. The matrix is specified as a two-dimensional array of\n# (1+up+down) by N elements with the diagonals of the original matrix\n# along its rows, thus:\n#\n# ( - - A02 A13 A24 ...\n# ( - A01 A12 A23 A34 ...\n# ( A00 A11 A22 A33 A44 ...\n# ( A10 A21 A32 A43 A54 ...\n# ( A20 A31 A42 A53 A64 ...\n#\n# Elements represented by dashes are ignored -- it doesn't matter what\n# these elements contain. The size of the system is taken from the\n# size of the vector v. If the matrix A is larger than NxN then the\n# extras are ignored. If it is smaller, the program will produce an\n# error.\n#\n# The function is compatible with version 2 and version 3 of Python.\n#\n# Written by Mark Newman , September 4, 2011\n# You may use, share, or modify this file freely\n#\n######################################################################\n\nfrom numpy import copy\n\ndef banded(Aa,va,up,down):\n\n # Copy the inputs and determine the size of the system\n A = copy(Aa)\n v = copy(va)\n N = len(v)\n\n # Gaussian elimination\n for m in range(N):\n\n # Normalization factor\n div = A[up,m]\n\n # Update the vector first\n v[m] /= div\n for k in range(1,down+1):\n if m+k\n\n\n\nAs we can see from the progress bar, in the scalar case we managed about 24,000 iterations per second with the classical fourth order Runge-Kutta method. A successful numerical integration is called a *run* - let's find our experiment first by listing all runs.\n\n\n```python\nintegrator.list_runs()\n```\n\n | timestamp | run_id |\n |--------------------------|--------------------------------------|\n | Wed Dec 30 19:44:13 2020 | 78bd4868-fffd-4894-bc39-283601669562 |\n\n\nAnd there we have our runs, neatly tabulated. The more experiments you do, the more runs will show up in this table. Each run has its own identifier called *run_id*, which is a UUID generated during the run. \n\nWe also see that the output format of our step function is \"zipped\" - that means that internally, the step function returns a dictionary of floats instead of a dictionary of state vectors. This is beneficial for later analysis with pandas, but creates a performance hit. 
Try out the other option if you want - just supply output_format=\"variables\" to the RungeKutta4 constructor.\n\nLet us look at the result of our first run - instead of selecting it by the run's ID as you normally would, we can also type \"latest\" to indicate that we want the newest result.\n\n\n```python\nresult = integrator.return_result_data(run_id=\"latest\")\n\nresult\n```\n\n\n\n\n
|       |      t |        y |
|-------|--------|----------|
| 0     |  0.000 | 1.000000 |
| 1     |  0.001 | 0.999500 |
| 2     |  0.002 | 0.999000 |
| 3     |  0.003 | 0.998501 |
| 4     |  0.004 | 0.998002 |
| ...   |    ... |      ... |
| 9996  |  9.996 | 0.006751 |
| 9997  |  9.997 | 0.006748 |
| 9998  |  9.998 | 0.006745 |
| 9999  |  9.999 | 0.006741 |
| 10000 | 10.000 | 0.006738 |

10001 rows × 2 columns
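As a quick sanity check on the final row (a rough check, assuming the decay rate $\lambda = 0.5$ that this notebook uses for the model), the exact solution at $t = 10$ is $e^{-5}$, which agrees with the tabulated value of 0.006738:

```python
import numpy as np

# With lambda = 0.5 and y(0) = 1 (the first row of the table), the exact value
# at t = 10 is exp(-5), matching the last row above.
print(np.exp(-0.5 * 10))  # 0.006737946999085467
```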
    \n\n\n\nThat looks ok - although staring at the numbers is not really the best option here for assessing whether our solution actually makes sense. If you have worked with pandas before, you may have noticed by the print output that the returned \"result\" object is in fact a pandas DataFrame - which can be plotted immediately. Let's do it!\n\n\n```python\nresult.plot(x=\"t\", y=\"y\")\n```\n\nThat looks much more familiar! Great job, the general shape of the curve looks correct for our example. But how accurate is the solution really? That is something we cannot directly infer from this graph. And if you had to assert numbers like these for a Mars mission launch, you would surely want a way to validate that your algorithm does not go crazy - a lot could be at stake!\n\nLuckily, for an exponential decay model like this, we have a closed form solution available - literally the exponential function. So how about we check our calculations against the real solution?\n\n## Solution distance experiment\n\nWe define our solution, which mathematically is a function $y: \\mathbb{R}\\rightarrow\\mathbb{R}^n$. For the exponential decay model with decay rate $\\lambda$ and initial state $y_0\\in\\mathbb{R}^n$, the exact solution is given for every $t \\geq 0$ as\n\\begin{equation}\n\\hat{y}(t) = y_0 \\exp(-\\lambda t).\n\\end{equation}\n\n\n```python\ndef sol(t):\n return y_0 * np.exp(-lamb * t)\n```\n\nNice! Notice we reused the $y_0$ object from earlier in the notebook. Now we start another integrator run with the same parameters, but this time we add a *Metric*, the *Euclidean distance from the exact solution* $\\hat{y}$\n\\begin{equation}\nd(y, \\hat{y}, t) = \\left\\lVert y(t) - \\hat{y}(t) \\right\\rVert_2 = \\sqrt{\\sum_i (y_i - \\hat{y}_i)^2}.\n\\end{equation}\n\nIf our algorithm is correct, which it should be (it is a built-in!), the distance should be small throughout the run, indicating an accurate numerical approximation. Fingers crossed!\n\n*Tip: You can also calculate other norms, like the maximum norm or the $L_1$ distance between solution and approximation. To try out other norms, pass the **norm** argument in the DistanceToSolution constructor (norm=1 is $L_1$ distance, norm=\"inf\" is maximum norm distance)*\n\n\n```python\nintegrator.integrate_const(model=model,\n step_func=RungeKutta4(),\n initial_state=initial_state,\n h=0.001,\n max_steps=10000,\n verbosity=1,\n progress_bar=True,\n metrics=[DistanceToSolution(solution=sol, name=\"l2_distance\")])\n```\n\n I1230 19:44:14.256709 4548111872 integrator.py:191] Starting integration.\n 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 10000/10000 [00:00<00:00, 31518.39it/s]\n I1230 19:44:14.577193 4548111872 integrator.py:200] Finished integration.\n\n\n\n\n\n \n\n\n\nLooks like that worked! Let us quickly confirm.\n\n\n```python\nintegrator.list_runs()\n```\n\n | timestamp | run_id |\n |--------------------------|--------------------------------------|\n | Wed Dec 30 19:44:13 2020 | 78bd4868-fffd-4894-bc39-283601669562 |\n | Wed Dec 30 19:44:14 2020 | 97466b81-d7ab-4695-b00f-860fb9d6ff9f |\n\n\nPerfect! There is a new run listed in the table. It should be holding the distance to the solution as metric values. To request the metric data, we can use the *return_metrics* interface exposed by the integrator.\n\n\n```python\nmetrics = integrator.return_metrics(run_id=\"latest\")\n\nmetrics\n```\n\n\n\n\n
|       | l2_distance  |
|-------|--------------|
| 0     | 0.000000e+00 |
| 1     | 0.000000e+00 |
| 2     | 0.000000e+00 |
| 3     | 0.000000e+00 |
| 4     | 0.000000e+00 |
| ...   | ...          |
| 9996  | 3.417405e-16 |
| 9997  | 3.426079e-16 |
| 9998  | 3.443426e-16 |
| 9999  | 3.460773e-16 |
| 10000 | 3.478121e-16 |

10001 rows × 1 columns
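If you want to spot-check these numbers by hand, the same distances can be recomputed from the run data - a minimal sketch, assuming the `result` DataFrame and the `sol` function defined earlier in this notebook (for a scalar state the Euclidean norm reduces to an absolute value):

```python
import numpy as np

# Recompute the distance to the exact solution at every stored time step.
manual_distance = np.abs(result["y"] - sol(result["t"]))
print(manual_distance.max())
```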
    \n\n\n\nLots of zeros and numbers around machine precision there - that looks promising! The metrics object above is also a pandas DataFrame, so let us plot the solution distance over time:\n\n\n```python\nmetrics.plot()\n```\n\nFrom this plot, it appears that the Euclidean distance of our numerical solution to the exact solution never exceeds $3\\cdot 10^{-14}$ across all 10,000 iterations. Though the distance does not appear to be uniformly at around machine precision ($\\approx 10^{-16}$), but rather periodically increasing and decreasing - an interesting artifact. We can quickly print some statistics, too:\n\n\n```python\nmetrics.describe()\n```\n\n\n\n\n
|       | l2_distance  |
|-------|--------------|
| count | 1.000100e+04 |
| mean  | 9.958515e-15 |
| std   | 8.205534e-15 |
| min   | 0.000000e+00 |
| 25%   | 2.655862e-15 |
| 50%   | 9.103829e-15 |
| 75%   | 1.618150e-14 |
| max   | 2.473022e-14 |
    \n\n\n\nThat confirms what we were seeing. In theory, the classical Runge-Kutta method has order of consistency 4, meaning that the discretization error is globally of $\\mathcal{O}(h^4)$. For a step size of $h = 0.001$ that we chose, this would place us in the range of $10^{-12}$ relative to $h = 1$ (ignoring constants) - which seems all right given our results, and considering that the condition number of the exponential decay equation is not large for $\\lambda = 0.5$, no stiffness effects are coming to haunt us here, either. \n\nThis concludes the first small demo of the ode-explorer library - if you made it this far, thanks for trying it out, and see you in the next one!\n", "meta": {"hexsha": "13879ccf1d34dba9f2edbba4b654f4b5d1808e06", "size": 52806, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "ode_explorer/examples/exponential_decay/exponential_decay.ipynb", "max_stars_repo_name": "njunge94/ode-explorer", "max_stars_repo_head_hexsha": "0bcb5d4834f6001b2a3e54bd5e000e86bbedf221", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2020-12-21T20:09:59.000Z", "max_stars_repo_stars_event_max_datetime": "2021-03-03T14:27:45.000Z", "max_issues_repo_path": "ode_explorer/examples/exponential_decay/exponential_decay.ipynb", "max_issues_repo_name": "njunge94/ode-explorer", "max_issues_repo_head_hexsha": "0bcb5d4834f6001b2a3e54bd5e000e86bbedf221", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ode_explorer/examples/exponential_decay/exponential_decay.ipynb", "max_forks_repo_name": "njunge94/ode-explorer", "max_forks_repo_head_hexsha": "0bcb5d4834f6001b2a3e54bd5e000e86bbedf221", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 69.2083879423, "max_line_length": 18088, "alphanum_fraction": 0.7568836875, "converted": true, "num_tokens": 3714, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9362850093037731, "lm_q2_score": 0.8872045832787205, "lm_q1q2_score": 0.8306763515094671}} {"text": "# First a little bit of statistics review:\n\n# Variance\n\nVariance is a measure of the spread of numbers in a dataset. Variance is the average of the squared differences from the mean. So naturally, you can't find the variance of something unless you calculate it's mean first. Lets get some data and find its variance.\n\n\n```\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport random\n\n# Lets generate two variables with 50 random integers each.\nvariance_one = []\nvariance_two = []\nfor x in range(50):\n variance_one.append(random.randint(25,75))\n variance_two.append(random.randint(0,100))\n \nvariance_data = {'v1': variance_one, 'v2': variance_two}\n\nvariance_df = pd.DataFrame(variance_data)\nvariance_df['zeros'] = pd.Series(list(np.zeros(50)))\n\nvariance_df.head()\n```\n\n\n\n\n
|   | v1 | v2 | zeros |
|---|----|----|-------|
| 0 | 56 | 11 | 0.0   |
| 1 | 47 | 29 | 0.0   |
| 2 | 41 | 8  | 0.0   |
| 3 | 51 | 93 | 0.0   |
| 4 | 38 | 31 | 0.0   |
    \n\n\n\n\n```\n# Now some scatter plots\n\nplt.scatter(variance_df.v1, variance_df.zeros)\nplt.xlim(0,100)\nplt.title(\"Plot One\")\nplt.show()\n\nplt.scatter(variance_df.v2, variance_df.zeros)\nplt.xlim(0,100)\nplt.title(\"Plot Two\")\nplt.show()\n```\n\nNow I know this isn't complicated, but each of the above plots has the same number of points, but we can tell visually that \"Plot Two\" has the greater variance because its points are more spread out. What if we didn't trust our eyes though? Lets calculate the variance of each of these variables to prove it to ourselves\n\n$\\overline{X}$ is the symbol for the mean of the dataset.\n\n$N$ is the total number of observations.\n\n$v$ or variance is sometimes denoted by a lowercase v. But you'll also see it referred to as $\\sigma^{2}$.\n\n\\begin{align}\nv = \\frac{\\sum{(X_{i} - \\overline{X})^{2}} }{N}\n\\end{align}\n\nHow do we calculate a simple average? We add up all of the values and then divide by the total number of values. this is why there is a sum in the numerator and N in the denomenator. \n\nHowever in this calculation, we're not just summing the values like we would if we were calculateing the mean, we are summing the squared difference between each point and the mean. (The squared distance between each point in the mean.)\n\n\n```\n# Since we generated these random values in a range centered around 50, that's \n# about where their means should be.\n\n# Find the means for each variable\nv1_mean = variance_df.v1.mean()\nprint(\"v1 mean: \", v1_mean)\nv2_mean = variance_df.v2.mean()\nprint(\"v2 mean: \", v2_mean)\n\n# Find the distance between each point and its corresponding mean\n\nvariance_df['v1_distance'] = variance_df.v1-v1_mean\nvariance_df['v2_distance'] = variance_df.v2-v2_mean\n\nvariance_df.head()\n```\n\n v1 mean: 49.08\n v2 mean: 51.66\n\n\n\n\n\n
|   | v1 | v2 | zeros | v1_distance | v2_distance |
|---|----|----|-------|-------------|-------------|
| 0 | 56 | 11 | 0.0   | 6.92        | -40.66      |
| 1 | 47 | 29 | 0.0   | -2.08       | -22.66      |
| 2 | 41 | 8  | 0.0   | -8.08       | -43.66      |
| 3 | 51 | 93 | 0.0   | 1.92        | 41.34       |
| 4 | 38 | 31 | 0.0   | -11.08      | -20.66      |
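A quick aside that motivates the next step: the raw deviations from the mean always cancel each other out, so summing them as-is tells us nothing about spread. You can verify this with the columns we just created:

```
# The deviations sum to (numerically) zero, which is why we square them next.
print(variance_df.v1_distance.sum())
print(variance_df.v2_distance.sum())
```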
    \n\n\n\n\n```\n# Now we'll square the distances from the means\nvariance_df['v1_squared_distance'] = variance_df.v1_distance**2\nvariance_df['v2_squared_distance'] = variance_df.v2_distance**2\n\n# Notice that squaring the distances turns all of our negative values into positive ones?\n\nvariance_df.head()\n```\n\n\n\n\n
|   | v1 | v2 | zeros | v1_distance | v2_distance | v1_squared_distance | v2_squared_distance |
|---|----|----|-------|-------------|-------------|---------------------|---------------------|
| 0 | 56 | 11 | 0.0   | 6.92        | -40.66      | 47.8864             | 1653.2356           |
| 1 | 47 | 29 | 0.0   | -2.08       | -22.66      | 4.3264              | 513.4756            |
| 2 | 41 | 8  | 0.0   | -8.08       | -43.66      | 65.2864             | 1906.1956           |
| 3 | 51 | 93 | 0.0   | 1.92        | 41.34       | 3.6864              | 1708.9956           |
| 4 | 38 | 31 | 0.0   | -11.08      | -20.66      | 122.7664            | 426.8356            |
    \n\n\n\n\n```\n# Now we'll sum the squared distances and divide by the number of observations.\nobservations = len(variance_df)\nprint(\"Number of Observations: \", observations)\n\nVariance_One = variance_df.v1_squared_distance.sum()/observations\nVariance_Two = variance_df.v2_squared_distance.sum()/observations\n\nprint(\"Variance One: \", Variance_One)\nprint(\"Variance Two: \", Variance_Two)\n```\n\n Number of Observations: 50\n Variance One: 176.31360000000004\n Variance Two: 1032.2644\n\n\nWoah, so what is the domain of V1 and V2?\n\nWell, V1 goes from 25 to 75 so its range is ~50 and V2 goes from 0 to 100 so its range is about 100\n\nSo even though V2 is roughly twice as spread out, how much bigger is its variance than V1?\n\n\n```\nprint(\"How many times bigger is Variance_One than Variance_Two? \", Variance_Two/Variance_One)\n\n# About 3.86 times bigger! Why is that? \n```\n\n How many times bigger is Variance_One than Variance_Two? 5.854706613670187\n\n\n## A note about my code quality\n\nWhy did I go to the trouble of calculating all of that by hand, and add a bunch of extra useless rows to my dataframe? That is some bad code! \n\nBecause I wanted to make sure that you understood all of the parts of the equation. I didn't want the function to be some magic thing that you put numbers in and out popped a variance. Taking time to understand the equation will reinforce your intuition about the spread of the data. After all, I could have just done this:\n\n\n```\nprint(variance_df.v1.var(ddof=1))\nprint(variance_df.v2.var(ddof=1))\n```\n\n 179.91183673469385\n 1053.3310204081633\n\n\nBut wait! Those variance values are different than the ones we calculated above, oh no! This is because variance is calculated slightly differently for a population vs a sample. Lets clarify this a little bit. \n\nThe **POPULATION VARIANCE** $\\sigma^{2}$ is a **PARAMETER** (aspect, property, attribute, etc) of the population.\n\nThe **SAMPLE VARIANCE** $s^{2}$ is a **STATISTIC** (estimated attribute) of the sample.\n\nWe use the sample statistic to **estimate** the population parameter.\n\nThe sample variance $s^{2}$ is an estimate of the population variance $\\sigma^{2}$.\n\nBasically, if you're calculating a **sample** variance, you need to divide by $N-1$ or else your estimate will be a little biased. The equation that we were originally working from is for a **population variance**. \n\nIf we use the ddof=0 parameter (default is ddof=1) in our equation, we should get the same result. \"ddof\" stands for Denominator Degrees of Freedom.\n\n\n```\nprint(variance_df.v1.var(ddof=0))\nprint(variance_df.v2.var(ddof=0))\n```\n\n 176.31359999999998\n 1032.2644\n\n\n# Standard Deviation\n\nIf you understand how variance is calculated, then standard deviation is a cinch. The standard deviation is the square root $\\sqrt()$ of the variance.\n\n## So why would we use one over the other?\n\nRemember how we squared all of the distances from the mean before we added them all up? Well then taking the square root of the variance will put our measures back in the same units as the mean. So the Standard Deviation is a measure of spread of the data that is expressed in the same units as the mean of the data. Variance is the average squared distance from the mean, and the Standard Deviation is the average distance from the mean. 
You'll remember that when we did hypothesis testing and explored the normal distribution we talked in terms of standard deviations, and not in terms of variance for this reason.\n\n\n```\nprint(variance_df.v1.std(ddof=0))\nprint(variance_df.v2.std(ddof=0))\n```\n\n 13.27831314587813\n 32.128871751121295\n\n\n# Covariance\n\nCovariance is a measure of how changes in one variable are associated with changes in a second variable. It's a measure of how they Co (together) Vary (move) or how they move in relation to each other. For this topic we're not really going to dive into the formula, I just want you to be able to understand the topic intuitively. Since this measure is about two variables, graphs that will help us visualize things in two dimensions will help us demonstrate this idea. (scatterplots)\n\n\n\nLets look at the first scatterplot. the y variable has high values where the x variable has low values. This is a negative covariance because as one variable increases (moves), the other decreases (moves in the opposite direction).\n\nIn the second scatterplot we see no relation between high and low values of either variable, therefore this cloud of points has a near 0 covariance\n\nIn the third graph, we see that the y variable takes on low values in the same range where the x value takes on low values, and simiarly with high values. Because the areas of their high and low values match, we would expect this cloud of points to have a positive covariance.\n\n\n\n \n\nCheck out how popular this site is: \n\n\n\n\n\n## Interpeting Covariance\n\nA large positive or negative covariance indicates a strong relationship between two variables. However, you can't necessarily compare covariances between sets of variables that have a different scale, since the covariance of variables that take on high values will always be higher than since covariance values are unbounded, they could take on arbitrarily high or low values. This means that you can't compare the covariances between variables that have a different scale. Two variablespositive covariance variable that has a large scale will always have a higher covariance than a variable with an equally strong relationship, yet smaller scale. This means that we need a way to regularlize\n\nOne of the challenges with covariance is that its value is unbounded and variables that take on larger values will have a larger covariance irrespective of \n\nLet me show you what I mean:\n\n\n```\na = [1,2,3,4,5,6,7,8,9]\nb = [1,2,3,4,5,6,7,8,9]\nc = [10,20,30,40,50,60,70,80,90]\nd = [10,20,30,40,50,60,70,80,90]\n\nfake_data = {\"a\": a, \"b\": b, \"c\": c, \"d\": d,}\n\ndf = pd.DataFrame(fake_data)\n\nplt.scatter(df.a, df.b)\nplt.xlim(0,100)\nplt.ylim(0,100)\nplt.show()\n\nplt.scatter(df.c, df.d)\nplt.xlim(0,100)\nplt.ylim(0,100)\nplt.show()\n```\n\nWhich of the above sets of variables has a stronger relationship?\n\nWhich has the stronger covariance?\n\n# The Variance-Covariance Matrix\n\nIn order to answer this problem we're going to use a tool called a variance-covariance matrix. \n\nThis is matrix that compares each variable with every other variable in a dataset and returns to us variance values along the main diagonal, and covariance values everywhere else. \n\n\n```\ndf.cov()\n```\n\n\n\n\n
|   | a    | b    | c     | d     |
|---|------|------|-------|-------|
| a | 7.5  | 7.5  | 75.0  | 75.0  |
| b | 7.5  | 7.5  | 75.0  | 75.0  |
| c | 75.0 | 75.0 | 750.0 | 750.0 |
| d | 75.0 | 75.0 | 750.0 | 750.0 |
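To demystify where these numbers come from, we can rebuild the same matrix by hand: center each column at its mean and then compute $X^{T}X/(n-1)$. This is only a sketch using the `df` defined above, but it should reproduce the table entry by entry:

```
# Center the columns, then form (X transpose X) / (n - 1) to get the sample
# variance-covariance matrix - the same thing df.cov() computes.
X = df.values - df.values.mean(axis=0)
n = X.shape[0]
print(X.T.dot(X) / (n - 1))
```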
    \n\n\n\nWhat type of special square matrix is the variance-covariance matrix?\n\nThe two sets of variables above show relationships that are equal in their strength, yet their covariance values are wildly different. \n\nHow can we counteract this problem?\n\nWhat if there was some statistic of a distribution that represented how spread out the data was that we could use to standardize the units/scale of the variables?\n\n# Correlation Coefficient\n\nWell, it just so happens that we do have such a measure of spread of a variable. It's called the Standard Deviation! And we already learned about it. If we divide our covariance values by the product of the standard deviations of the two variables, we'll end up with what's called the Correlation Coefficient. (Sometimes just referred to as the correlation). \n\nCorrelation Coefficients have a fixed range from -1 to +1 with 0 representing no linear relationship between the data. \n\nIn most use cases the correlation coefficient is an improvement over measures of covariance because:\n\n- Covariance can take on practically any number while a correlation is limited: -1 to +1.\n- Because of it\u2019s numerical limitations, correlation is more useful for determining how strong the relationship is between the two variables.\n- Correlation does not have units. Covariance always has units\n- Correlation isn\u2019t affected by changes in the center (i.e. mean) or scale of the variables\n\n[Statistics How To - Covariance](https://www.statisticshowto.datasciencecentral.com/covariance/)\n\nThe correlation coefficient is usually represented by a lower case $r$.\n\n\\begin{align}\nr = \\frac{cov(X,Y)}{\\sigma_{X}\\sigma_{Y}}\n\\end{align}\n\n\n```\ndf.corr()\n```\n\n\n\n\n
|   | a   | b   | c   | d   |
|---|-----|-----|-----|-----|
| a | 1.0 | 1.0 | 1.0 | 1.0 |
| b | 1.0 | 1.0 | 1.0 | 1.0 |
| c | 1.0 | 1.0 | 1.0 | 1.0 |
| d | 1.0 | 1.0 | 1.0 | 1.0 |
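We can also verify the formula directly: dividing a covariance by the product of the two standard deviations recovers the matching entry of this correlation matrix. A quick check with the `df` from above:

```
# r = cov(a, c) / (std(a) * std(c)); pandas uses the sample version of both.
r_ac = df.a.cov(df.c) / (df.a.std() * df.c.std())
print(r_ac) # 1.0, since a and c are perfectly linearly related
```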
    \n\n\n\nCorrelation coefficients of 1 tell us that all of these varaibles have a perfectly linear positive correlation with one another. \n\n\n\nCorrelation and other sample statistics are somewhat limited in their ability to tell us about the shape/patterns in the data.\n\n[Anscombe's Quartet](https://en.wikipedia.org/wiki/Anscombe%27s_quartet)\n\n\n\nOr take it to the next level with the [Datasaurus Dozen](https://www.autodeskresearch.com/publications/samestats)\n\n\n# Orthogonality\n\nOrthogonality is another word for \"perpendicularity\" or things (vectors or matrices) existing at right angles to one another. Two vectors that are perpendicular to one another are orthogonal.\n\n## How to tell if two vectors are orthogonal\n\nTwo vectors are orthogonal to each other if their dot product will be zero. \n\nLets look at a couple of examples to see this in action:\n\n\n```\nvector_1 = [0, 2]\nvector_2 = [2, 0]\n\n# Plot the Scaled Vectors\nplt.arrow(0,0, vector_1[0], vector_1[1],head_width=.05, head_length=0.05, color ='red')\nplt.arrow(0,0, vector_2[0], vector_2[1],head_width=.05, head_length=0.05, color ='green')\nplt.xlim(-1,3) \nplt.ylim(-1,3)\nplt.title(\"Orthogonal Vectors\")\nplt.show()\n```\n\nClearly we can see that the above vectors are perpendicular to each other, what does the formula say?\n\n\\begin{align}\na = \\begin{bmatrix} 0 & 2\\end{bmatrix}\n\\qquad\nb = \\begin{bmatrix} 2 & 0\\end{bmatrix}\n\\\\\na \\cdot b = (0)(2) + (2)(0) = 0\n\\end{align}\n\n\n```\nvector_1 = [-2, 2]\nvector_2 = [2, 2]\n\n# Plot the Scaled Vectors\nplt.arrow(0,0, vector_1[0], vector_1[1],head_width=.05, head_length=0.05, color ='red')\nplt.arrow(0,0, vector_2[0], vector_2[1],head_width=.05, head_length=0.05, color ='green')\nplt.xlim(-3,3) \nplt.ylim(-1,3)\nplt.title(\"Orthogonal Vectors\")\nplt.show()\n```\n\nAgain the dot product is zero.\n\n\\begin{align}\na = \\begin{bmatrix} -2 & 2\\end{bmatrix}\n\\qquad\nb = \\begin{bmatrix} 2 & 2\\end{bmatrix}\n\\\\\na \\cdot b = (-2)(2) + (2)(2) = 0\n\\end{align}\n\n\n# Unit Vectors\n\nIn Linear Algebra a unit vector is any vector of \"unit length\" (1). You can turn any non-zero vector into a unit vector by dividing it by its norm (length/magnitude).\n\nfor example if I have the vector \n\n\\begin{align}\n b = \\begin{bmatrix} 1 \\\\ 2 \\\\ 2 \\end{bmatrix}\n\\end{align}\n\n and I want to turn it into a unit vector, first I will calculate its norm\n \n \\begin{align}\n ||b|| = \\sqrt{1^2 + 2^2 + 2^2} = \\sqrt{1 + 4 + 4} = \\sqrt{9} = 3\n\\end{align}\n\nI can turn b into a unit vector by dividing it by its norm. Once something has been turned into a unit vector we'll put a ^ \"hat\" symbol over it to denote that it is now a unit vector.\n\n \\begin{align}\n \\hat{b} = \\frac{1}{||b||}b = \\frac{1}{3}\\begin{bmatrix} 1 \\\\ 2 \\\\ 2 \\end{bmatrix} = \\begin{bmatrix} \\frac{1}{3} \\\\ \\frac{2}{3} \\\\ \\frac{2}{3} \\end{bmatrix}\n\\end{align}\n\nYou might frequently see mentioned the unit vectors used to denote a certain dimensional space. 
\n\n$\\mathbb{R}$ unit vector: $\\hat{i} = \\begin{bmatrix} 1 \\end{bmatrix}$\n\n\n$\\mathbb{R}^2$ unit vectors: $\\hat{i} = \\begin{bmatrix} 1 \\\\ 0 \\end{bmatrix}$, $\\hat{j} = \\begin{bmatrix} 0 \\\\ 1 \\end{bmatrix}$\n\n$\\mathbb{R}^3$ unit vectors: $\\hat{i} = \\begin{bmatrix} 1 \\\\ 0 \\\\ 0 \\end{bmatrix}$, $\\hat{j} = \\begin{bmatrix} 0 \\\\ 1 \\\\ 0 \\end{bmatrix}$, $\\hat{k} = \\begin{bmatrix} 0 \\\\ 0 \\\\ 1 \\end{bmatrix}$\n\nYou'll notice that in the corresponding space, these basis vectors are the rows/columns of the identity matrix.\n\n\n```\n# Axis Bounds\nplt.xlim(-1,2) \nplt.ylim(-1,2)\n\n# Unit Vectors\ni_hat = [1,0]\nj_hat = [0,1]\n\n# Fix Axes\nplt.axes().set_aspect('equal')\n\n# PLot Vectors\nplt.arrow(0, 0, i_hat[0], i_hat[1], linewidth=3, head_width=.05, head_length=0.05, color ='red')\nplt.arrow(0, 0, j_hat[0], j_hat[1], linewidth=3, head_width=.05, head_length=0.05, color ='blue')\nplt.title(\"basis vectors in R^2\")\nplt.show()\n```\n\n## Vectors as linear combinations of scalars and unit vectors\n\nAny vector (or matrix) can be be described in terms of a linear combination of scaled unit vectors. Lets look at an example.\n\n\\begin{align}\nc = \\begin{bmatrix} 2 \\\\ 3 \\end{bmatrix}\n\\end{align}\n\nWe think about a vector that starts at the origin and extends to point $(2,3)$\n\nLets rewrite this in terms of a linear combination of scaled unit vectors:\n\n\\begin{align}\nc = \\begin{bmatrix} 2 \\\\ 3 \\end{bmatrix} = 2\\begin{bmatrix} 1 \\\\ 0 \\end{bmatrix} + 3\\begin{bmatrix} 0 \\\\ 1 \\end{bmatrix} = 2\\hat{i} + 3\\hat{j}\n\\end{align}\n\nThis says that matrix $\\begin{bmatrix} 2 \\\\ 3 \\end{bmatrix}$ will result from scaling the $\\hat{i}$ unit vector by 2, the $\\hat{j}$ vector by 3 and then adding the two together.\n\nWe can describe any vector in $\\mathbb{R}^2$ in this way. Well, we can describe any vector in any dimensionality this way provided we use all of the unit vectors for that space and scale them all appropriately. In this examply we just happen to be using a vector whose dimension is 2.\n\n# Span\n\nThe span is the set of all possible vectors that can be created with a linear combination of two vectors (just as we described above).\n\nA linear combination of two vectors just means that we're composing to vectors (via addition or subtraction) to create a new vector. \n\n## Linearly Dependent Vectors\n\nTwo vectors that live on the same line are what's called linearly dependent. This means that there is no linear combination (no way to add, or subtract scaled version of these vectors from each other) that will ever allow us to create a vector that lies outside of that line. \n\nIn this case, the span of these vectors (lets say the green one and the red one for example - could be just those two or a whole set) is the line that they lie on, since that's what can be produced by scaling and composing them together.\n\nThe span is the graphical area that we're able to cover via a linear combination of a set of vectors.\n\n## Linearly Independent Vectors\n\nLinearly independent vectors are vectors that don't lie on the same line as each other. 
If two vectors are linearly independent, then there ought to be some linear combination of them that could represent any vector in the space ($\\mathbb{R}^2$ in this case).\n\n\n```\n# Plot Linearly Dependent Vectors\n\n# Axis Bounds\nplt.xlim(-1.1,4) \nplt.ylim(-1.1,4)\n\n# Original Vector\nv = [1,0] \n\n# Scaled Vectors\nv2 = np.multiply(3, v)\nv3 = np.multiply(-1,v)\n\n# Get Vals for L\naxes = plt.gca()\nx_vals = np.array(axes.get_xlim())\ny_vals = 0*x_vals\n\n# Plot Vectors and L\nplt.plot(x_vals, y_vals, '--', color='b', linewidth=1)\nplt.arrow(0,0, v2[0], v2[1], linewidth=3, head_width=.05, head_length=0.05, color ='yellow')\nplt.arrow(0,0, v[0], v[1], linewidth=3, head_width=.05, head_length=0.05, color ='green')\nplt.arrow(0,0, v3[0], v3[1], linewidth=3, head_width=.05, head_length=0.05, color ='red')\n\nplt.title(\"Linearly Dependent Vectors\")\nplt.show()\n```\n\n\n```\n# Plot Linearly Dependent Vectors\n\n# Axis Bounds\nplt.xlim(-2,3.5) \nplt.ylim(-1,3)\n\n# Original Vector\na = [-1.5,.5] \nb = [3, 1]\n\n# Plot Vectors\nplt.arrow(0,0, a[0], a[1], linewidth=3, head_width=.05, head_length=0.05, color ='blue')\nplt.arrow(0,0, b[0], b[1], linewidth=3, head_width=.05, head_length=0.05, color ='red')\n\nplt.title(\"Linearly Independent Vectors\")\nplt.show()\n```\n\n# Basis\n\nThe basis of a vector space $V$ is a set of vectors that are linearly independent and that span the vector space $V$.\n\nA set of vectors spans a space if their linear combinations fill the space.\n\nFor example, the unit vectors in the \"Linearly Independent Vectors\" plot above form a basis for the vector space $\\mathbb{R}^2$ becayse they are linearly independent and span that space.\n\n## Orthogonal Basis\n\nAn orthogonal basis is a set of vectors that are linearly independent, span the vector space, and are orthogonal to each other. Remember that vectors are orthogonal if their dot product equals zero.\n\n## Orthonormal Basis\n\nAn orthonormal basis is a set of vectors that are linearly independent, span the vector space, are orthogonal to eachother and each have unit length. \n\nFor more on this topic (it's thrilling, I know) you might research the Gram-Schmidt process -which is a method for orthonormalizing a set of vectors in an inner product space.\n\nThe unit vectors form an orthonormal basis for whatever vector space that they are spanning.\n\n# Rank\n\nThe rank of a matrix is the dimension of the vector space spanned by its columns. Just because a matrix has a certain number of rows or columns (dimensionality) doesn't neccessarily mean that it will span that dimensional space. Sometimes there exists a sort of redundancy within the rows/columns of a matrix (linear dependence) that becomes apparent when we reduce a matrix to row-echelon form via Gaussian Elimination.\n\n## Gaussian Elimination \n\nGaussian Elimination is a process that seeks to take any given matrix and reduce it down to what is called \"Row-Echelon form.\" A matrix is in Row-Echelon form when it has a 1 as its leading entry (furthest left) in each row, and zeroes at every position below that main entry. These matrices will usually wind up as a sort of upper-triangular matrix (not necessarly square) with ones on the main diagonal. 
\n\n\n\nGaussian Elimination takes a matrix and converts it to row-echelon form by doing combinations of three different row operations:\n\n1) You can swap any two rows\n\n2) You can multiply entire rows by scalars\n\n3) You can add/subtract rows from each other\n\nThis takes some practice to do by hand but once mastered becomes the fastest way to find the rank of a matrix.\n\nFor example lets look at the following matrix:\n\n\\begin{align}\n P = \\begin{bmatrix}\n 1 & 0 & 1 \\\\\n -2 & -3 & 1 \\\\\n 3 & 3 & 0 \n \\end{bmatrix}\n\\end{align}\n\nNow, lets use gaussian elimination to get this matrix in row-echelon form\n\nStep 1: Add 2 times the 1st row to the 2nd row\n\n\\begin{align}\n P = \\begin{bmatrix}\n 1 & 0 & 1 \\\\\n 0 & -3 & -3 \\\\\n 3 & 3 & 0 \n \\end{bmatrix}\n\\end{align}\n\nStep 2: Add -3 times the 1st row to the 3rd row\n\n\\begin{align}\n P = \\begin{bmatrix}\n 1 & 0 & 1 \\\\\n 0 & -3 & 3 \\\\\n 0 & 3 & -3 \n \\end{bmatrix}\n\\end{align}\n\nStep 3: Multiply the 2nd row by -1/3\n\n\\begin{align}\n P = \\begin{bmatrix}\n 1 & 0 & 1 \\\\\n 0 & 1 & -1 \\\\\n 0 & 3 & -3 \n \\end{bmatrix}\n\\end{align}\n\nStep 4: Add -3 times the 2nd row to the 3rd row\n\n\\begin{align}\n P = \\begin{bmatrix}\n 1 & 0 & 1 \\\\\n 0 & 1 & -1 \\\\\n 0 & 0 & 0 \n \\end{bmatrix}\n\\end{align}\n\nNow that we have this in row-echelon form we can see that we had one row that was linearly dependent (could be composed as a linear combination of other rows). That's why we were left with a row of zeros in place of it. If we look closely we will see that the first row equals the second row plus the third row. \n\nBecause we had two rows with leading 1s (these are called pivot values) left after the matrix was in row-echelon form, we know that its Rank is 2. \n\nWhat does this mean? This means that even though the original matrix is a 3x3 matrix, it can't span $\\mathbb{R}^3$, only $\\mathbb{R}^2$\n\n# Linear Projections in $\\mathbb{R}^{2}$\n\nAssume that we have some line $L$ in $\\mathbb{R}^{2}$.\n\n\n```\n# Plot a line\nplt.xlim(-1,4) \nplt.ylim(-1,4)\naxes = plt.gca()\nx_vals = np.array(axes.get_xlim())\ny_vals = 0*x_vals\nplt.plot(x_vals, y_vals, '--', color='b')\nplt.title(\"A Line\")\nplt.show()\n```\n\nWe know that if we have a vector $v$ that lies on that line, if we scale that vector in any direction, the resulting vectors can only exist on that line.\n\n\n```\n# Plot a line\n\n# Axis Bounds\nplt.xlim(-1.1,4) \nplt.ylim(-1.1,4)\n\n# Original Vector\nv = [1,0] \n\n# Scaled Vectors\nv2 = np.multiply(3, v)\nv3 = np.multiply(-1,v)\n\n# Get Vals for L\naxes = plt.gca()\nx_vals = np.array(axes.get_xlim())\ny_vals = 0*x_vals\n\n# Plot Vectors and L\nplt.plot(x_vals, y_vals, '--', color='b', linewidth=1)\nplt.arrow(0,0, v2[0], v2[1], linewidth=3, head_width=.05, head_length=0.05, color ='yellow')\nplt.arrow(0,0, v[0], v[1], linewidth=3, head_width=.05, head_length=0.05, color ='green')\nplt.arrow(0,0, v3[0], v3[1], linewidth=3, head_width=.05, head_length=0.05, color ='red')\n\nplt.title(\"v scaled two different ways\")\nplt.show()\n```\n\nLets call the green vector $v$\n\nThis means that line $L$ is equal to vector $v$ scaled by all of the potential scalars in $\\mathbb{R}$. We can represent this scaling factor by a constant $c$. 
Therefore, line $L$ is vector $v$ scaled by any scalar $c$.\n\n\\begin{align}\nL = cv\n\\end{align}\n\nNow, say that we have a second vector $w$ that we want to \"project\" onto line L\n\n\n```\n# Plot a line\n\n# Axis Bounds\nplt.xlim(-1.1,4) \nplt.ylim(-1.1,4)\n\n# Original Vector\nv = [1,0] \nw = [2,2]\n\n# Get Vals for L\naxes = plt.gca()\nx_vals = np.array(axes.get_xlim())\ny_vals = 0*x_vals\n\n# Plot Vectors and L\nplt.plot(x_vals, y_vals, '--', color='b', linewidth=1)\nplt.arrow(0, 0, v[0], v[1], linewidth=3, head_width=.05, head_length=0.05, color ='green')\nplt.arrow(0, 0, w[0], w[1], linewidth=3, head_width=.05, head_length=0.05, color ='red')\n\nplt.title(\"vector w\")\nplt.show()\n```\n\n## Projection as a shadow cast onto the target vector at a right angle\n\nThis is the intuition that I want you to develop. Imagine that we are shining a light down onto lin $L$ from a direction that is exactly orthogonal to it. In this case shining a light onto $L$ from a direction that is orthogonal to it is as if we were shining a light down from directly above. How long will the shadow be?\n\nImagine that you're **projecting** light from above to cast a shadow onto the x-axis.\n\nWell since $L$ is literally the x-axis you can probably tell that the length of the projection of $w$ onto $L$ is 2.\n\nA projection onto an axis is the same as just setting the variable that doesn't match the axis to 0. in our case the coordinates of vector $w$ is $(2,2)$ so it projects onto the x-axis at (2,0) -> just setting the y value to 0.\n\n### Notation\n\nIn linear algebra we write the projection of w onto L like this: \n\n\\begin{align}proj_{L}(\\vec{w})\\end{align}\n\n\n```\n# Axis Bounds\nplt.xlim(-1.1,4) \nplt.ylim(-1.1,4)\n\n# Original Vector\nv = [1,0] \nw = [2,2]\nproj = [2,0]\n\n# Get Vals for L\naxes = plt.gca()\nx_vals = np.array(axes.get_xlim())\ny_vals = 0*x_vals\n\n# Plot Vectors and L\nplt.plot(x_vals, y_vals, '--', color='b', linewidth=1)\nplt.arrow(0, 0, proj[0], proj[1], linewidth=3, head_width=.05, head_length=0.05, color ='gray')\nplt.arrow(0, 0, v[0], v[1], linewidth=3, head_width=.05, head_length=0.05, color ='green')\nplt.arrow(0, 0, w[0], w[1], linewidth=3, head_width=.05, head_length=0.05, color ='red')\n\nplt.title(\"Shadow of w\")\nplt.show()\n```\n\nThe problem here is that we can't just draw a vector and call it a day, we can only define that vector in terms of our $v$ (green) vector.\n\nOur gray vector is defined as:\n\n\\begin{align}\ncv = proj_{L}(w)\n\\end{align}\n\nBut what if $L$ wasn't on the x-axis? How would calculate the projection?\n\n\n```\n# Axis Bounds\nplt.xlim(-1.1,4) \nplt.ylim(-1.1,4)\n\n# Original Vector\nv = [1,1/2] \nw = [2,2]\nproj = np.multiply(2.4,v)\n\n# Set axes\naxes = plt.gca()\nplt.axes().set_aspect('equal')\n\n# Get Vals for L\nx_vals = np.array(axes.get_xlim())\ny_vals = 1/2*x_vals\n\n# Plot Vectors and L\nplt.plot(x_vals, y_vals, '--', color='b', linewidth=1)\nplt.arrow(0, 0, proj[0], proj[1], linewidth=3, head_width=.05, head_length=0.05, color ='gray')\nplt.arrow(0, 0, v[0], v[1], linewidth=3, head_width=.05, head_length=0.05, color ='green')\nplt.arrow(0, 0, w[0], w[1], linewidth=3, head_width=.05, head_length=0.05, color ='red')\n\nplt.title(\"non x-axis projection\")\nplt.show()\n```\n\nRemember, that it doesn't matter how long our $v$ (green) vectors is, we're just looking for the c value that can scale that vector to give us the gray vector $proj_{L}(w)$. 
\n\n\n```\n# Axis Bounds\nplt.xlim(-1.1,4) \nplt.ylim(-1.1,4)\n\n# Original Vector\nv = [1,1/2] \nw = [2,2]\nproj = np.multiply(2.4,v)\nx_minus_proj = w-proj\n\n# Set axes\naxes = plt.gca()\nplt.axes().set_aspect('equal')\n\n# Get Vals for L\nx_vals = np.array(axes.get_xlim())\ny_vals = 1/2*x_vals\n\n# Plot Vectors and L\nplt.plot(x_vals, y_vals, '--', color='b', linewidth=1)\nplt.arrow(0, 0, proj[0], proj[1], linewidth=3, head_width=.05, head_length=0.05, color ='gray')\nplt.arrow(0, 0, v[0], v[1], linewidth=3, head_width=.05, head_length=0.05, color ='green')\nplt.arrow(0, 0, w[0], w[1], linewidth=3, head_width=.05, head_length=0.05, color ='red')\nplt.arrow(proj[0], proj[1], x_minus_proj[0], x_minus_proj[1], linewidth=3, head_width=.05, head_length=0.05, color = 'yellow')\n\nplt.title(\"non x-axis projection\")\nplt.show()\n```\n\nLets use a trick. We're going to imagine that there is yellow vector that is orthogonal to $L$, that starts at the tip of our projection (gray) and ends at the tip of $w$ (red).\n\n### Here's the hard part\n\nThis may not be intuitive, but we can define that yellow vector as $w-proj_{L}(w)$. Remember how two vectors added together act like we had placed one at the end of the other? Well this is the opposite, if we take some vector and subtract another vector, the tip moves to the end of the subtracted vector.\n\nSince we defined $proj_{L}(w)$ as $cv$ (above). We then rewrite the yellow vector as:\n\n\\begin{align}\nyellow = w-cv\n\\end{align}\n\nSince we know that our yellow vector is orthogonal to $v$ we can then set up the following equation:\n\n\\begin{align}\nv \\cdot (w-cv) = 0\n\\end{align}\n\n(remember that the dot product of two orthogonal vectors is 0)\n\nNow solving for $c$ we get\n\n1) Distribute the dot product\n\n\\begin{align}\nv \\cdot w - c(v \\cdot v) = 0\n\\end{align} \n\n2) add $c(v \\cdot v)$ to both sides\n\n\\begin{align}\nv \\cdot w = c(v \\cdot v)\n\\end{align} \n\n3) divide by $v \\cdot v$\n\n\\begin{align}\nc = \\frac{w \\cdot v}{v \\cdot v}\n\\end{align}\n\nSince $cv = proj_{L}(w)$ we know that: \n\n\\begin{align}\nproj_{L}(w) = \\frac{w \\cdot v}{v \\cdot v}v\n\\end{align}\n\nThis is the equation for the projection of any vector $w$ onto any line $L$!\n\nThink about if we were trying to project an already orthogonal vector onto a line:\n\n\n```\n# Axis Bounds\nplt.xlim(-1.1,4) \nplt.ylim(-1.1,4)\n\n# Original Vector\n# v = [1,0] \nw = [0,2]\nproj = [2,0]\n\n# Get Vals for L\naxes = plt.gca()\nx_vals = np.array(axes.get_xlim())\ny_vals = 0*x_vals\n\n# Plot Vectors and L\nplt.plot(x_vals, y_vals, '--', color='b', linewidth=1)\nplt.arrow(0, 0, w[0], w[1], linewidth=3, head_width=.05, head_length=0.05, color ='red')\n\nplt.title(\"Shadow of w\")\nplt.show()\n```\n\nNow that you have a feel for linear projections, you can see that the $proj_{L}(w)$ is 0 mainly because $w \\cdot v$ is 0.\n\nWhy have I gone to all of this trouble to explain linear projections? Because I think the intuition behind it is one of the most important things to grasp in linear algebra. We can find the shortest distance between some data point (vector) and a line best via an orthogonal projection onto that line. We can now move data points onto any given line and be certain that they move as little as possible from their original position. \n\n\nThe square of the norm of a vector is equivalent to the dot product of a vector with itself. \n\nThe dot product of a vector and itself can be rewritten as that vector times the transpose of itself. 
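To close the loop on projections, the formula can be checked numerically against the plots above, where $v = (1, 1/2)$, $w = (2, 2)$, and the gray projection vector was drawn with a scaling factor of 2.4:

```
import numpy as np

# proj_L(w) = ((w . v) / (v . v)) v  -- the scalar c should come out to 2.4.
v = np.array([1, 1/2])
w = np.array([2, 2])
c = w.dot(v) / v.dot(v)
print(c)      # 2.4
print(c * v)  # [2.4, 1.2], the projection of w onto L
```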
\n", "meta": {"hexsha": "dcbb2170aabf00224422295e0d94dbc0a3d90ddf", "size": 249433, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "curriculum/unit-1-statistics-fundamentals/sprint-3-linear-algebra/module2-intermediate-linear-algebra/LS_DS_132_Intermediate_Linear_Algebra.ipynb", "max_stars_repo_name": "BrianThomasRoss/lambda-school", "max_stars_repo_head_hexsha": "6140db5cb5a43d0a367e9a08dc216e8bec9fb323", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-12-04T22:01:58.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-28T18:13:44.000Z", "max_issues_repo_path": "curriculum/unit-1-statistics-fundamentals/sprint-3-linear-algebra/module2-intermediate-linear-algebra/LS_DS_132_Intermediate_Linear_Algebra.ipynb", "max_issues_repo_name": "BrianThomasRoss/lambda-school", "max_issues_repo_head_hexsha": "6140db5cb5a43d0a367e9a08dc216e8bec9fb323", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "curriculum/unit-1-statistics-fundamentals/sprint-3-linear-algebra/module2-intermediate-linear-algebra/LS_DS_132_Intermediate_Linear_Algebra.ipynb", "max_forks_repo_name": "BrianThomasRoss/lambda-school", "max_forks_repo_head_hexsha": "6140db5cb5a43d0a367e9a08dc216e8bec9fb323", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 117.4908148846, "max_line_length": 17296, "alphanum_fraction": 0.804095689, "converted": true, "num_tokens": 10668, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9559813463747181, "lm_q2_score": 0.8688267745399465, "lm_q1q2_score": 0.8305821896911018}} {"text": "# Linear Algebra and Python Basics\n\nby Rob Hicks http://rlhick.people.wm.edu/stories/linear-algebra-python-basics.html\n\nIn this chapter, I will be discussing some linear algebra basics that will provide sufficient linear algebra background for effective programming in Python for our purposes. We will be doing very basic linear algebra that by no means covers the full breadth of this topic. Why linear algebra? Linear algebra allows us to express relatively complex linear expressions in a very compact way.\n\nBeing comfortable with the rules for scalar and matrix addition, subtraction, multiplication, and division (known as inversion) is important for our class.\n\nBefore we can implement any of these ideas in code, we need to talk a bit about python and how data is stored.\n\n## Python Primer\n\nThere are numerous ways to run python code. I will show you two and both are easily accessible after installing Anaconda:\n\n1. The Spyder integrated development environment. The major advantages of Spyder is that it provides a graphical way for viewing matrices, vectors, and other objects you want to check as you work on a problem. It also has the most intuitive way of debugging code.\n\n Spyder looks like this:\n \n Code can be run by clicking the green arrow (runs the entire file) or by blocking a subset and running it.\n In Windows or Mac, you can launch the Spyder by looking for the icon in the newly installed Program Folder Anaconda. \n \n2. The Ipython Notebook (now called Jupyter). 
The major advantages of this approach is that you use your web browser for all of your python work and you can mix code, videos, notes, graphics from the web, and mathematical notation to tell the whole story of your python project. In fact, I am using the ipython notebook for writing these notes. \n The Ipython Notebook looks like this:\n \n In Windows or Mac, you can launch the Ipython Notebook by looking in the newly installed Program Folder Anaconda.\n\nIn my work flow, I usually only use the Ipython Notebook, but for some coding problems where I need access to the easy debugging capabilities of Spyder, I use it. We will be using the Ipython Notebook interface (web browser) mostly in this class.\n\n### Loading libraries\n\nThe python universe has a huge number of libraries that extend the capabilities of python. Nearly all of these are open source, unlike packages like stata or matlab where some key libraries are proprietary (and can cost lots of money). In lots of my code, you will see this at the top:\n\n\n```python\n%matplotlib inline\n##import sympy as sympy\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sbn\nfrom scipy import *\n```\n\nThis code sets up Ipython Notebook environments (lines beginning with `%`), and loads several libraries and functions. The core scientific stack in python consists of a number of free libraries. The ones I have loaded above include:\n\n1. sympy: provides for symbolic computation (solving algebra problems)\n2. numpy: provides for linear algebra computations\n3. matplotlib.pyplot: provides for the ability to graph functions and draw figures\n4. scipy: scientific python provides a plethora of capabilities\n5. seaborn: makes matplotlib figures even pretties (another library like this is called bokeh). This is entirely optional and is purely for eye candy.\n\n### Creating arrays, scalars, and matrices in Python\n\nScalars can be created easily like this:\n\n\n```python\nx = .5\nprint x\n```\n\n 0.5\n\n\n#### Vectors and Lists\n\nThe numpy library (we will reference it by np) is the workhorse library for linear algebra in python. 
To creat a vector simply surround a python list ($[1,2,3]$) with the np.array function:\n\n\n```python\nx_vector = np.array([1,2,3])\nprint x_vector\n```\n\n [1 2 3]\n\n\nWe could have done this by defining a python list and converting it to an array:\n\n\n```python\nc_list = [1,2]\nprint \"The list:\",c_list\nprint \"Has length:\", len(c_list)\n\nc_vector = np.array(c_list)\nprint \"The vector:\", c_vector\nprint \"Has shape:\",c_vector.shape\n```\n\n The list: [1, 2]\n Has length: 2\n The vector: [1 2]\n Has shape: (2,)\n\n\n\n```python\nz = [5,6]\nprint \"This is a list, not an array:\",z\nprint type(z)\n```\n\n This is a list, not an array: [5, 6]\n \n\n\n\n```python\nzarray = np.array(z)\nprint \"This is an array, not a list\",zarray\nprint type(zarray)\n```\n\n This is an array, not a list [5 6]\n \n\n\n#### Matrices\n\n\n```python\nb = zip(z,c_vector)\nprint b\nprint \"Note that the length of our zipped list is 2 not (2 by 2):\",len(b)\n```\n\n [(5, 1), (6, 2)]\n Note that the length of our zipped list is 2 not (2 by 2): 2\n\n\n\n```python\nprint \"But we can convert the list to a matrix like this:\"\nA = np.array(b)\nprint A\nprint type(A)\nprint \"A has shape:\",A.shape\n```\n\n But we can convert the list to a matrix like this:\n [[5 1]\n [6 2]]\n \n A has shape: (2, 2)\n\n\n## Matrix Addition and Subtraction\n\n### Adding or subtracting a scalar value to a matrix\n\nTo learn the basics, consider a small matrix of dimension $2 \\times 2$, where $2 \\times 2$ denotes the number of rows $\\times$ the number of columns. Let $A$=$\\bigl( \\begin{smallmatrix} a_{11} & a_{12} \\\\ a_{21} & a_{22} \\end{smallmatrix} \\bigr)$. Consider adding a scalar value (e.g. 3) to the A.\n$$\n\\begin{equation}\n\tA+3=\\begin{bmatrix}\n\t a_{11} & a_{12} \\\\\n\t a_{21} & a_{22} \t\n\t\\end{bmatrix}+3\n\t=\\begin{bmatrix}\n\t a_{11}+3 & a_{12}+3 \\\\\n\t a_{21}+3 & a_{22}+3 \t\n\t\\end{bmatrix}\n\\end{equation}\n$$\nThe same basic principle holds true for A-3:\n$$\n\\begin{equation}\n\tA-3=\\begin{bmatrix}\n\t a_{11} & a_{12} \\\\\n\t a_{21} & a_{22} \t\n\t\\end{bmatrix}-3\n\t=\\begin{bmatrix}\n\t a_{11}-3 & a_{12}-3 \\\\\n\t a_{21}-3 & a_{22}-3 \t\n\t\\end{bmatrix}\n\\end{equation}\n$$\nNotice that we add (or subtract) the scalar value to each element in the matrix A. A can be of any dimension.\n\nThis is trivial to implement, now that we have defined our matrix A:\n\n\n```python\nresult = A + 3\n#or\nresult = 3 + A\nprint result\n```\n\n [[8 4]\n [9 5]]\n\n\n### Adding or subtracting two matrices\nConsider two small $2 \\times 2$ matrices, where $2 \\times 2$ denotes the \\# of rows $\\times$ the \\# of columns. Let $A$=$\\bigl( \\begin{smallmatrix} a_{11} & a_{12} \\\\ a_{21} & a_{22} \\end{smallmatrix} \\bigr)$ and $B$=$\\bigl( \\begin{smallmatrix} b_{11} & b_{12} \\\\ b_{21} & b_{22} \\end{smallmatrix} \\bigr)$. 
To find the result of $A-B$, simply subtract each element of A with the corresponding element of B:\n\n$$\n\\begin{equation}\n\tA -B =\n\t\\begin{bmatrix}\n\t a_{11} & a_{12} \\\\\n\t a_{21} & a_{22} \t\n\t\\end{bmatrix} -\n\t\\begin{bmatrix} b_{11} & b_{12} \\\\\n\t b_{21} & b_{22}\n\t\\end{bmatrix}\n\t=\n\t\\begin{bmatrix}\n\t a_{11}-b_{11} & a_{12}-b_{12} \\\\\n\t a_{21}-b_{21} & a_{22}-b_{22} \t\n\t\\end{bmatrix}\n\\end{equation}\n$$\n\nAddition works exactly the same way:\n\n$$\n\\begin{equation}\n\tA + B =\n\t\\begin{bmatrix}\n\t a_{11} & a_{12} \\\\\n\t a_{21} & a_{22} \t\n\t\\end{bmatrix} +\n\t\\begin{bmatrix} b_{11} & b_{12} \\\\\n\t b_{21} & b_{22}\n\t\\end{bmatrix}\n\t=\n\t\\begin{bmatrix}\n\t a_{11}+b_{11} & a_{12}+b_{12} \\\\\n\t a_{21}+b_{21} & a_{22}+b_{22} \t\n\t\\end{bmatrix}\n\\end{equation}\n$$\n\nAn important point to know about matrix addition and subtraction is that it is only defined when $A$ and $B$ are of the same size. Here, both are $2 \\times 2$. Since operations are performed element by element, these two matrices must be conformable- and for addition and subtraction that means they must have the same numbers of rows and columns. I like to be explicit about the dimensions of matrices for checking conformability as I write the equations, so write\n\n$$\nA_{2 \\times 2} + B_{2 \\times 2}= \\begin{bmatrix}\n a_{11}+b_{11} & a_{12}+b_{12} \\\\\n a_{21}+b_{21} & a_{22}+b_{22} \t\n\\end{bmatrix}_{2 \\times 2}\n$$\n\nNotice that the result of a matrix addition or subtraction operation is always of the same dimension as the two operands.\n\nLet's define another matrix, B, that is also $2 \\times 2$ and add it to A:\n\n\n```python\nB = np.random.randn(2,2)\nprint B\n```\n\n [[ 2.5056974 0.37029763]\n [ 0.94461604 -0.23399752]]\n\n\n\n```python\nresult = A + B\nresult\n```\n\n\n\n\n array([[ 7.5056974 , 1.37029763],\n [ 6.94461604, 1.76600248]])\n\n\n\n##Matrix Multiplication\n\n###Multiplying a scalar value times a matrix\n\nAs before, let $A$=$\\bigl( \\begin{smallmatrix} a_{11} & a_{12} \\\\ a_{21} & a_{22} \\end{smallmatrix} \\bigr)$. Suppose we want to multiply A times a scalar value (e.g. $3 \\times A$)\n\n$$\n\\begin{equation}\n\t3 \\times A = 3 \\times \\begin{bmatrix}\n\t a_{11} & a_{12} \\\\\n\t a_{21} & a_{22} \t\n\t\\end{bmatrix}\n\t=\n\t\\begin{bmatrix}\n\t 3a_{11} & 3a_{12} \\\\\n\t 3a_{21} & 3a_{22} \t\n\t\\end{bmatrix}\n\\end{equation}\n$$\n\nis of dimension (2,2). Scalar multiplication is commutative, so that $3 \\times A$=$A \\times 3$. Notice that the product is defined for a matrix A of any dimension.\n\nSimilar to scalar addition and subtration, the code is simple:\n\n\n```python\nA * 3\n```\n\n\n\n\n array([[15, 3],\n [18, 6]])\n\n\n\n### Multiplying two matricies\n\nNow, consider the $2 \\times 1$ vector $C=\\bigl( \\begin{smallmatrix} c_{11} \\\\\n c_{21}\n\\end{smallmatrix} \\bigr)$ \n\nConsider multiplying matrix $A_{2 \\times 2}$ and the vector $C_{2 \\times 1}$. Unlike the addition and subtraction case, this product is defined. Here, conformability depends not on the row **and** column dimensions, but rather on the column dimensions of the first operand and the row dimensions of the second operand. 
We can write this operation as follows\n\n$$\n\\begin{equation}\n\tA_{2 \\times 2} \\times C_{2 \\times 1} = \n\t\\begin{bmatrix}\n\t a_{11} & a_{12} \\\\\n\t a_{21} & a_{22} \t\n\t\\end{bmatrix}_{2 \\times 2}\n \\times\n \\begin{bmatrix}\n\tc_{11} \\\\\n\tc_{21}\n\t\\end{bmatrix}_{2 \\times 1}\n\t=\n\t\\begin{bmatrix}\n\t a_{11}c_{11} + a_{12}c_{21} \\\\\n\t a_{21}c_{11} + a_{22}c_{21} \t\n\t\\end{bmatrix}_{2 \\times 1}\n\\end{equation}\n$$\n\nAlternatively, consider a matrix C of dimension $2 \\times 3$ and a matrix A of dimension $3 \\times 2$\n\n$$\n\\begin{equation}\n\tA_{3 \\times 2}=\\begin{bmatrix}\n\t a_{11} & a_{12} \\\\\n\t a_{21} & a_{22} \\\\\n\t a_{31} & a_{32} \t\n\t\\end{bmatrix}_{3 \\times 2}\n\t,\n\tC_{2 \\times 3} = \n\t\\begin{bmatrix}\n\t\t c_{11} & c_{12} & c_{13} \\\\\n\t\t c_{21} & c_{22} & c_{23} \\\\\n\t\\end{bmatrix}_{2 \\times 3}\n\t\\end{equation}\n$$\n\nHere, A $\\times$ C is\n\n$$\n\\begin{align}\n\tA_{3 \\times 2} \\times C_{2 \\times 3}=&\n\t\\begin{bmatrix}\n\t a_{11} & a_{12} \\\\\n\t a_{21} & a_{22} \\\\\n\t a_{31} & a_{32} \t\n\t\\end{bmatrix}_{3 \\times 2}\n\t\\times\n\t\\begin{bmatrix}\n\t c_{11} & c_{12} & c_{13} \\\\\n\t c_{21} & c_{22} & c_{23} \n\t\\end{bmatrix}_{2 \\times 3} \\\\\n\t=&\n\t\\begin{bmatrix}\n\t a_{11} c_{11}+a_{12} c_{21} & a_{11} c_{12}+a_{12} c_{22} & a_{11} c_{13}+a_{12} c_{23} \\\\\n\t a_{21} c_{11}+a_{22} c_{21} & a_{21} c_{12}+a_{22} c_{22} & a_{21} c_{13}+a_{22} c_{23} \\\\\n\t a_{31} c_{11}+a_{32} c_{21} & a_{31} c_{12}+a_{32} c_{22} & a_{31} c_{13}+a_{32} c_{23}\n\t\\end{bmatrix}_{3 \\times 3}\t\n\\end{align}\n$$\n\nSo in general, $X_{r_x \\times c_x} \\times Y_{r_y \\times c_y}$ we have two important things to remember: \n\n* For conformability in matrix multiplication, $c_x=r_y$, or the columns in the first operand must be equal to the rows of the second operand.\n* The result will be of dimension $r_x \\times c_y$, or of dimensions equal to the rows of the first operand and columns equal to columns of the second operand.\n\nGiven these facts, you should convince yourself that matrix multiplication is not generally commutative, that the relationship $X \\times Y = Y \\times X$ does **not** hold in all cases.\nFor this reason, we will always be very explicit about whether we are pre multiplying ($X \\times Y$) or post multiplying ($Y \\times X$) the vectors/matrices $X$ and $Y$.\n\nFor more information on this topic, see this\nhttp://en.wikipedia.org/wiki/Matrix_multiplication.\n\n\n```python\n# Let's redefine A and C to demonstrate matrix multiplication:\nA = np.arange(6).reshape((3,2))\nC = np.random.randn(2,2)\n\nprint A.shape\nprint C.shape\n```\n\n (3, 2)\n (2, 2)\n\n\nWe will use the numpy dot operator to perform the these multiplications. You can use it two ways to yield the same result:\n\n\n```python\nprint A.dot(C)\nprint np.dot(A,C)\n```\n\n [[ 0.48080757 0.43511698]\n [ 1.47915018 0.72999774]\n [ 2.47749278 1.0248785 ]]\n [[ 0.48080757 0.43511698]\n [ 1.47915018 0.72999774]\n [ 2.47749278 1.0248785 ]]\n\n\nSuppose instead of pre-multiplying C by A, we post-multiply. The product doesn't exist because we don't have conformability as described above:\n\n\n```python\nC.dot(A)\n```\n\n## Matrix Division\nThe term matrix division is actually a misnomer. To divide in a matrix algebra world we first need to invert the matrix. It is useful to consider the analog case in a scalar work. Suppose we want to divide the $f$ by $g$. 
We could do this in two different ways:\n$$\n\\begin{equation}\n\t\\frac{f}{g}=f \\times g^{-1}.\n\\end{equation}\n$$\nIn a scalar setting, these are equivalent ways of solving the division problem. The second one requires two steps: first we invert $g$ and then we multiply $f$ by $g^{-1}$. In a matrix world, we need to think in terms of this second approach. First we have to invert the matrix $g$ and then we will need to pre- or post-multiply depending on the exact situation we encounter (this is intended to be vague for now).\n\n### Inverting a Matrix\n\nAs before, consider the square $2 \\times 2$ matrix $A$=$\\bigl( \\begin{smallmatrix} a_{11} & a_{12} \\\\ a_{21} & a_{22}\\end{smallmatrix} \\bigr)$. Let the inverse of matrix A (denoted as $A^{-1}$) be \n\n$$\n\\begin{equation}\n\tA^{-1}=\\begin{bmatrix}\n a_{11} & a_{12} \\\\\n\t\t a_{21} & a_{22} \n \\end{bmatrix}^{-1}=\\frac{1}{a_{11}a_{22}-a_{12}a_{21}}\t\\begin{bmatrix}\n\t\t a_{22} & -a_{12} \\\\\n\t\t\t\t -a_{21} & a_{11} \n\t\t \\end{bmatrix}\n\\end{equation}\n$$\n\nThe inverted matrix $A^{-1}$ has a useful property:\n$$\n\\begin{equation}\n\tA \\times A^{-1}=A^{-1} \\times A=I\n\\end{equation}\n$$\nwhere I, the identity matrix (the matrix equivalent of the scalar value 1), is\n$$\n\\begin{equation}\n\tI_{2 \\times 2}=\\begin{bmatrix}\n 1 & 0 \\\\\n\t\t 0 & 1 \n \\end{bmatrix}\n\\end{equation}\n$$\nFurthermore, $A \\times I = A$ and $I \\times A = A$.\n\nAn important feature of matrix inversion is that it is undefined if (in the $2 \\times 2$ case) $a_{11}a_{22}-a_{12}a_{21}=0$: when this quantity equals zero, the inverse of A does not exist. If this term is very close to zero, an inverse may exist, but $A^{-1}$ may be poorly conditioned, meaning it is prone to rounding error and is likely to be numerically unreliable. The term $a_{11}a_{22}-a_{12}a_{21}$ is the determinant of matrix A; for square matrices of size $2 \\times 2$ or greater, a determinant equal to zero indicates that you have a problem with your data matrix (some columns are linearly dependent on other columns). The inverse of matrix A exists if A is square and of full rank (i.e. the columns of A are not linear combinations of other columns of A).\n\nFor more information on inverting matrices, see, for example,\nhttp://en.wikipedia.org/wiki/Matrix_inversion.\n\n\n```python\n# note, we need a square matrix (# rows = # cols), use C:\nC_inverse = np.linalg.inv(C)\nprint C_inverse\n```\n\n [[ 2.97399042 1.966247 ]\n [-3.28628201 0.12551463]]\n\n\nCheck that $C\\times C^{-1} = I$:\n\n\n```python\nprint C.dot(C_inverse)\nprint \"Is identical to:\"\nprint C_inverse.dot(C)\n```\n\n [[ 1.00000000e+00 -4.61031414e-18]\n [ -6.43302442e-18 1.00000000e+00]]\n Is identical to:\n [[ 1.00000000e+00 6.11198607e-17]\n [ 6.54738800e-18 1.00000000e+00]]\n\n\n## Transposing a Matrix\n\nAt times it is useful to pivot a matrix for conformability; that is, in order to matrix divide or multiply, we need to switch the rows and column dimensions of matrices. 
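\n\nFor example, with a non-square matrix the product of the matrix with itself is not defined, but transposing one operand makes the shapes conformable. A minimal, illustrative sketch (the 3 x 2 array below simply mirrors the A defined in the next cell):\n\n```python\nA = np.arange(6).reshape((3,2))   # 3 x 2\n\n# A.dot(A) is not defined: a (3,2) by (3,2) product is not conformable\nA.T.dot(A)                        # (2,3) by (3,2) gives a 2 x 2 result\n```\n\n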
Consider the matrix\n$$\n\\begin{equation}\n\tA_{3 \\times 2}=\\begin{bmatrix}\n\t a_{11} & a_{12} \\\\\n\t a_{21} & a_{22} \\\\\n\t a_{31} & a_{32} \t\n\t\\end{bmatrix}_{3 \\times 2}\t\n\\end{equation}\n$$\nThe transpose of A (denoted as $A^{\\prime}$) is\n$$\n\\begin{equation}\n A^{\\prime}=\\begin{bmatrix}\n\t a_{11} & a_{21} & a_{31} \\\\\n\t a_{12} & a_{22} & a_{32} \\\\\n\t\\end{bmatrix}_{2 \\times 3}\n\\end{equation}\n$$\n\n\n```python\nA = np.arange(6).reshape((3,2))\nB = np.arange(8).reshape((2,4))\nprint \"A is\"\nprint A\n\nprint \"The Transpose of A is\"\nprint A.T\n```\n\n A is\n [[0 1]\n [2 3]\n [4 5]]\n The Transpose of A is\n [[0 2 4]\n [1 3 5]]\n\n\nOne important property of transposing a matrix is the transpose of a product of two matrices. Let matrix A be of dimension $N \\times M$ and let B of of dimension $M \\times P$. Then\n$$\n\\begin{equation}\n\t(AB)^{\\prime}=B^{\\prime}A^{\\prime}\n\\end{equation}\n$$\nFor more information, see this http://en.wikipedia.org/wiki/Matrix_transposition on matrix transposition. This is also easy to implement:\n\n\n```python\nprint B.T.dot(A.T)\nprint \"Is identical to:\"\nprint (A.dot(B)).T\n```\n\n [[ 4 12 20]\n [ 5 17 29]\n [ 6 22 38]\n [ 7 27 47]]\n Is identical to:\n [[ 4 12 20]\n [ 5 17 29]\n [ 6 22 38]\n [ 7 27 47]]\n\n\n## More python tools\n\n### Indexing\n\nPython begins indexing at 0 (not 1), therefore the first row and first column is referenced by 0,0 **not** 1,1.\n\n### Slicing \n\nAccessing elements of numpy matrices and arrays. This code grabs the first column of A:\n\n\n```python\nprint A\nA[:,0]\n```\n\n [[0 1]\n [2 3]\n [4 5]]\n\n\n\n\n\n array([0, 2, 4])\n\n\n\nor, we could grab a particular element (in this case, the second column, last row):\n\n\n```python\nA[2,1]\n```\n\n\n\n\n 5\n\n\n\n### Logical Checks to extract values from matrices/arrays:\n\n\n```python\nprint A\n```\n\n [[0 1]\n [2 3]\n [4 5]]\n\n\n\n```python\nprint A[:,1]>4\n\nA[A[:,1]>4]\n```\n\n [False False True]\n\n\n\n\n\n array([[4, 5]])\n\n\n\n### For loops\n\nCreate a $12 \\times 2$ matrix and print it out:\n\n\n```python\nA = np.arange(24).reshape((12,2))\nprint A\nprint A.shape\n```\n\n [[ 0 1]\n [ 2 3]\n [ 4 5]\n [ 6 7]\n [ 8 9]\n [10 11]\n [12 13]\n [14 15]\n [16 17]\n [18 19]\n [20 21]\n [22 23]]\n (12, 2)\n\n\nThe code below loops over the rows (12 of them) of our matrix A. For each row, it slices A and prints the row values across all columns. Notice the form of the for loop. The colon defines the statement we are looping over. For each iteration of the loop **idented** lines will be executed:\n\n\n```python\nfor rows in A:\n print rows\n```\n\n [0 1]\n [2 3]\n [4 5]\n [6 7]\n [8 9]\n [10 11]\n [12 13]\n [14 15]\n [16 17]\n [18 19]\n [20 21]\n [22 23]\n\n\n\n```python\nfor rows in A:\n print rows\n```\n\n [0 1]\n [2 3]\n [4 5]\n [6 7]\n [8 9]\n [10 11]\n [12 13]\n [14 15]\n [16 17]\n [18 19]\n [20 21]\n [22 23]\n\n\n\n```python\nfor cols in A.T:\n print cols\n```\n\n [ 0 2 4 6 8 10 12 14 16 18 20 22]\n [ 1 3 5 7 9 11 13 15 17 19 21 23]\n\n\n### If/then/else\n\nThe code below checks the value of x and categorizes it into one of three values. 
Like the for loop, each logical if check is ended with a colon, and any commands to be applied to that particular if check (if true) must be indented.\n\n\n```python\nx=.4\n\nif x<.5:\n print \"Heads\"\n print 100\nelif x>.5:\n print \"Tails\"\n print 0\nelse:\n print \"Tie\"\n print 50\n```\n\n Heads\n 100\n\n\n### While loops\n\nAgain, we have the same basic form for the statement (note the colons and indents). Here we use the shorthand notation `x+=1` for performing the calculation `x = x + 1`:\n\n\n```python\nx=0\nwhile x<10:\n x+=1 \n print x<10\n\nprint x\n```\n\n True\n True\n True\n True\n True\n True\n True\n True\n True\n False\n 10\n\n\n# Some more\n\n\n```python\nv = np.random.beta(56, 23, 100)\n```\n\n\n```python\nplt.hist(v)\n```\n\n\n```python\nnp.savetxt(\"myvextor.txt\", v.reshape(20,5))\n```\n\n\n```sh\n%%sh\nhead -10 myvextor.txt\n```\n\n 7.248008818950451015e-01 6.532244850357984411e-01 6.536199234633465194e-01 7.334928646986760281e-01 5.398165955580682684e-01\n 7.139216071922678264e-01 6.728443837682190898e-01 6.494566164553140508e-01 6.683509000127559885e-01 7.644556371656708871e-01\n 7.215153654452582943e-01 6.911006139808743010e-01 6.734786066554359074e-01 7.319844378166059373e-01 6.986996622331494988e-01\n 6.978137082576147954e-01 6.825797326701168455e-01 7.569021032715560482e-01 6.154717554965444259e-01 7.482810964973004575e-01\n 6.908519676137282461e-01 7.373778586062508245e-01 6.414907513325304178e-01 7.671835956420999247e-01 6.534701870485098985e-01\n 6.679163640763395859e-01 7.410336013997532723e-01 7.564381573540180925e-01 7.741644043396176400e-01 7.560739681550309177e-01\n 6.611087863698059675e-01 6.944074001813811403e-01 6.107849058765042471e-01 7.698996963306875552e-01 7.171169314025702679e-01\n 7.722202225663886699e-01 7.263728335966201932e-01 7.350093378174409331e-01 7.552882740562755215e-01 6.027107542833861631e-01\n 6.953737452564848764e-01 7.343557137075048535e-01 7.700578612228643482e-01 7.458161986630932327e-01 7.763693151512383039e-01\n 6.891582596380744219e-01 7.441501789777352771e-01 7.231777325956401103e-01 6.755128569529142979e-01 5.962737198081802248e-01\n\n\n\n```python\nw = np.loadtxt(\"myvextor.txt\")\n```\n\n\n```python\n(w*v).sum()/(w.)\n```\n\n\n\n\n 49.145946056843869\n\n\n\n\n```python\na = v.reshape(10,10)\n```\n\n\n```python\ntype(a)\n```\n\n\n\n\n numpy.ndarray\n\n\n\n\n```python\nnp.matrix(a)\n```\n\n\n\n\n matrix([[ 0.72480088, 0.65322449, 0.65361992, 0.73349286, 0.5398166 ,\n 0.71392161, 0.67284438, 0.64945662, 0.6683509 , 0.76445564],\n [ 0.72151537, 0.69110061, 0.67347861, 0.73198444, 0.69869966,\n 0.69781371, 0.68257973, 0.7569021 , 0.61547176, 0.7482811 ],\n [ 0.69085197, 0.73737786, 0.64149075, 0.7671836 , 0.65347019,\n 0.66791636, 0.7410336 , 0.75643816, 0.7741644 , 0.75607397],\n [ 0.66110879, 0.6944074 , 0.61078491, 0.7698997 , 0.71711693,\n 0.77222022, 0.72637283, 0.73500934, 0.75528827, 0.60271075],\n [ 0.69537375, 0.73435571, 0.77005786, 0.7458162 , 0.77636932,\n 0.68915826, 0.74415018, 0.72317773, 0.67551286, 0.59627372],\n [ 0.67069764, 0.6087569 , 0.71437864, 0.78013279, 0.62574326,\n 0.64370709, 0.71247713, 0.67404966, 0.67993396, 0.66871352],\n [ 0.66840764, 0.69701147, 0.68111859, 0.72275825, 0.75494115,\n 0.66595713, 0.71336278, 0.6192563 , 0.70614289, 0.71749587],\n [ 0.72002589, 0.72817938, 0.70982128, 0.75395697, 0.72369636,\n 0.67164351, 0.68503003, 0.67223346, 0.72685012, 0.68745557],\n [ 0.682259 , 0.72198239, 0.77744477, 0.71248248, 0.73918039,\n 0.75032694, 0.62653126, 0.67802253, 0.71520116, 
0.62238738],\n [ 0.65828396, 0.61692964, 0.72643121, 0.79209095, 0.74067574,\n 0.66903045, 0.69260704, 0.71129395, 0.6608246 , 0.66332163]])\n\n\n\n\n```python\nv = \ntype(v)\n```\n\n\n\n\n numpy.ndarray\n\n\n\n\n```python\nv = np.matrix(np.random.rand(10,1))\nw = np.matrix(np.random.rand(10,1))\n\n```\n\n\n```python\na = v.dot(w.transpose())\n```\n\n\n```python\na.shape\n```\n\n\n\n\n (10, 10)\n\n\n\n\n```python\ntype(a)\n```\n\n\n\n\n numpy.matrixlib.defmatrix.matrix\n\n\n\n\n```python\nlag = np.linalg\n```\n\n\n```python\n\n```\n\n\n\n\n matrix([[ 1.29562817e+17, -6.77337552e+16, -1.06075205e+17,\n 2.38896852e+16, -2.35331525e+17, 8.74792833e+16,\n -1.06547937e+17, -1.95293482e+16, -4.33592894e+16,\n 6.57006219e+16],\n [ -0.00000000e+00, -0.00000000e+00, 8.00000000e+00,\n -1.47117588e+17, 1.44115188e+17, 3.12249574e+17,\n -1.15292150e+18, 9.00719925e+16, -9.24714206e+16,\n 7.20575940e+16],\n [ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,\n 1.80143985e+16, 0.00000000e+00, -4.50359963e+16,\n 1.44115188e+17, -9.00719925e+15, 8.09096783e+15,\n 0.00000000e+00],\n [ 6.00479950e+15, -1.80143985e+16, -1.20095990e+16,\n 6.75539944e+15, -1.20095990e+16, -6.00479950e+15,\n 9.60767921e+16, 1.50119988e+15, 6.94501266e+14,\n 0.00000000e+00],\n [ -1.20095990e+16, -0.00000000e+00, 6.00479950e+15,\n 1.50119988e+15, 2.40191980e+16, 9.00719925e+15,\n -4.80383960e+16, 6.00479950e+15, -6.31845975e+15,\n -0.00000000e+00],\n [ -6.00479950e+15, -0.00000000e+00, 1.20095990e+16,\n -1.05083991e+16, -2.40191980e+16, 1.80143985e+16,\n -9.60767921e+16, 3.00239975e+15, 2.66842553e+15,\n -0.00000000e+00],\n [ 0.00000000e+00, 0.00000000e+00, -1.80143985e+16,\n 1.80143985e+16, 0.00000000e+00, -4.50359963e+16,\n 1.44115188e+17, -9.00719925e+15, 1.61044132e+16,\n 0.00000000e+00],\n [ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,\n -9.00719925e+15, 0.00000000e+00, 1.80143985e+16,\n 0.00000000e+00, 0.00000000e+00, -1.77888496e+15,\n 0.00000000e+00],\n [ -6.00479950e+15, 3.60287970e+16, 1.20095990e+16,\n -1.65131986e+16, -2.40191980e+16, 1.20095990e+16,\n -9.60767921e+16, 3.00239975e+15, 3.85207751e+14,\n -0.00000000e+00],\n [ 6.00479950e+15, 0.00000000e+00, 2.40191980e+16,\n 2.55203979e+16, 2.40191980e+16, -4.80383960e+16,\n 9.60767921e+16, -2.10167983e+16, 8.13151685e+15,\n -3.60287970e+16]])\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "0919795db7f5667f032b679b69750950be6f458b", "size": 58770, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "DataScienceProgramming/03-NumPy-and-Linear-Algebra/linear-algebra-python-basics.ipynb", "max_stars_repo_name": "cartermin/MSA8090", "max_stars_repo_head_hexsha": "c8d95fb7c71682b2197a391995b76f6043a905cc", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 7, "max_stars_repo_stars_event_min_datetime": "2017-08-21T23:23:59.000Z", "max_stars_repo_stars_event_max_datetime": "2020-12-16T23:57:11.000Z", "max_issues_repo_path": "DataScienceProgramming/03-NumPy-and-Linear-Algebra/linear-algebra-python-basics.ipynb", "max_issues_repo_name": "cartermin/MSA8090", "max_issues_repo_head_hexsha": "c8d95fb7c71682b2197a391995b76f6043a905cc", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "DataScienceProgramming/03-NumPy-and-Linear-Algebra/linear-algebra-python-basics.ipynb", "max_forks_repo_name": "cartermin/MSA8090", "max_forks_repo_head_hexsha": "c8d95fb7c71682b2197a391995b76f6043a905cc", "max_forks_repo_licenses": 
["CC0-1.0"], "max_forks_count": 7, "max_forks_repo_forks_event_min_datetime": "2017-08-22T00:39:01.000Z", "max_forks_repo_forks_event_max_datetime": "2019-09-09T01:46:29.000Z", "avg_line_length": 32.8507546115, "max_line_length": 8300, "alphanum_fraction": 0.5846520334, "converted": true, "num_tokens": 9115, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8705972784807408, "lm_q2_score": 0.953966097273569, "lm_q1q2_score": 0.8305202880492627}} {"text": "# Computing gradients and derivatives in PyTorch\n>\"Making good use of the `gradient` argument in PyTorch's `backward` function\"\n\n- toc: true \n- badges: true\n- comments: true\n- categories: [mathematics]\n\n---\ntags: mathematics pytorch gradients backward automatic differentiation vector-Jacobian product backpropagation\n\n---\n\n# tl;dr\nThe `backward` function in `PyTorch` can be used to compute the derivatives or gradients of functions. The `backward` function computes vector-Jacobian products so that the appropriate vector must be determined. In other words, the correct `gradient` argument must be passed to `backward`, although not passing `gradient` explicitly will cause `backward` to choose the appropriate value but only in the simplest cases.\n\nThis notebook explains vector-Jacobian products and how to choose the `gradient` argument in the `backward` function in the general case.\n\n# A brief overview\nIn the case of a function taking a scalar and returning a scalar, the use of the `backward` function is quite straight-forward:\n\n\n```python\n# collapse-hide\nimport torch\nx = torch.tensor(1., requires_grad=True)\ny = x**2\ny.backward()\nprint(f\"Derivative at a single point:\")\nprint(x.grad.data)\n```\n\n Derivative at a single point:\n tensor(2.)\n\n\nHowever, when \n- the function is **multi-valued** (e.g. vector- or matrix-valued); or \n- one wishes to compute the derivative of a function at **mulitple** points, \n\nthen the `gradient` argument in `backward` must be suitably chosen. For example:\n\n\n```python\n# collapse-hide\nimport torch\nx = torch.linspace(-2, 2, 5, requires_grad=True)\ny = x**2\ngradient = torch.ones_like(y)\ny.backward(gradient)\nprint(\"Derivative at multiple points:\")\nprint(x.grad.data)\n```\n\n Derivative at multiple points:\n tensor([-4., -2., 0., 2., 4.])\n\n\nIndeed, more precisely, the `backward` function computes vector-Jacobian products, which is not explicit in the function's doc string:\n\n\n```python\n# collapse-hide\nprint(\"First line of `torch.Tensor.backward` doc string:\")\nprint(\"\\\"\"+ torch.Tensor.backward.__doc__.split(\"\\n\")[0] + \"\\\"\")\n```\n\n First line of `torch.Tensor.backward` doc string:\n \"Computes the gradient of current tensor w.r.t. graph leaves.\"\n\n\nalthough some explanations are given [in this official tutorial](https://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html#gradients). The crucial point is therefore to choose the appropriate vector, which is passed to the `backward` function in its `gradient` argument:\n\n\n```python\n# collapse-hide\nimport inspect\nimport torch\nprint(f\"torch.Tensor.backward{inspect.signature(torch.Tensor.backward)}\")\nprint(\"...\")\nprint(\"\\n\".join(torch.Tensor.backward.__doc__.split(\"\\n\")[11:18]))\nprint(\"...\")\n```\n\n torch.Tensor.backward(self, gradient=None, retain_graph=None, create_graph=False)\n ...\n Arguments:\n gradient (Tensor or None): Gradient w.r.t. the\n tensor. 
If it is a tensor, it will be automatically converted\n to a Tensor that does not require grad unless ``create_graph`` is True.\n None values can be specified for scalar Tensors or ones that\n don't require grad. If a None value would be acceptable then\n this argument is optional.\n ...\n\n\nThere is a way around specifying the `gradient` argument. Revisiting the example above, the derivative at multiple points can be equivalently calculated by adding a `sum()`:\n\n\n```python\n# collapse-hide\nimport torch\nx = torch.linspace(-2, 2, 5, requires_grad=True)\ny = (x**2).sum()\n\ny.backward()\nprint(\"Derivative at multiple points:\")\nprint(x.grad.data)\n```\n\n Derivative at multiple points:\n tensor([-4., -2., 0., 2., 4.])\n\n\nHere, the `backward` method is invoked on a different `tensor`: \n```\n(x**2).backward()\n```\nif `x` contains a single input,\nvs\n```\n(x**2).sum().backward()\n```\nif `x` contains multiple inputs.\n\nOn the other hand, passing the `gradient` argument, whether `x` contains one or multiple inputs, the same command is used to compute the derivatives:\n```\ny = (x**2)\ny.backward(torch.ones_like(y))\n```\n\nRoughly speaking, the difference between the two methods, namely setting `gradient=torch.ones_like(y)` or adding `sum()`, is in the order of the summation and differentiation.\n\n# Usage examples of the `backward` function\nThe derivative of the **scalar**, **univariate** function $f(x)=x^2$ at a **single** point $x=1$:\n\n\n```python\nimport torch\nx = torch.tensor(1., requires_grad=True)\ny = x**2\ny.backward()\nx.grad\n```\n\n\n\n\n tensor(2.)\n\n\n\nThe derivative of the **scalar**, **univariate** function $f(x)=x^2$ at **multiple** points $x= -2, -1, \\dots, 2$:\n\n\n```python\nimport torch\nx = torch.linspace(-2, 2, 5, requires_grad=True)\ny = x**2\nv = torch.ones_like(y)\ny.backward(v)\nx.grad\n```\n\n\n\n\n tensor([-4., -2., 0., 2., 4.])\n\n\n\nThe gradient of the **scalar**, **multivariate** function $f(x_1, x_2)=3x_1^2 + 5x_2^2$ at a **single** point $(x_1, x_2)=(-1, 2)$:\n\n\n```python\nimport torch\nx = torch.tensor([-1., 2.], requires_grad=True)\nw = torch.tensor([3., 5.])\ny = (x*x*w).sum()\ny.backward()\nx.grad\n```\n\n\n\n\n tensor([-6., 20.])\n\n\n\nThe gradient of the **scalar**, **multivariate** function $f(x_1, x_2) = -x_1^2 + x_2^2$ at **multiple** points $(x_1, x_2)$:\n\n\n```python\nimport torch\nx = torch.arange(6, dtype=float).view(3, 2).requires_grad_(True)\nw = torch.tensor([-1, 1])\ny = (x*x*w).sum(1)\nv = torch.ones_like(y)\ny.backward(v)\nx.grad\n```\n\n\n\n\n tensor([[-0., 2.],\n [-4., 6.],\n [-8., 10.]], dtype=torch.float64)\n\n\n\nThe _derivatives_ of the **vector-valued**, **univariate** function $f(x)= (-x^3, 5x)$ at a **single** point $x=1$, i.e. the derivative of\n- its first component function $f_1(x)=-x^3$; and\n- its second component function $f_2(x)=5x$.\n\n\n```python\n# collapse-hide\nimport torch\nx = torch.tensor(1., requires_grad=True)\ny = torch.stack([-x**3, 5*x])\n\nv1 = torch.tensor([1., 0.])\ny.backward(v1, retain_graph=True)\n\nprint(f\"f_1'({x.data.item()}) = {x.grad.data.item():>4}\")\n\nx.grad.zero_()\n\nv2 = torch.tensor([0., 1.])\ny.backward(v2)\nprint(f\"f_2'({x.data.item()}) = {x.grad.data.item():>4}\")\n```\n\n f_1'(1.0) = -3.0\n f_2'(1.0) = 5.0\n\n\nThe _derivatives_ of the **vector-valued**, **univariate** function $f(x)= (-x^3, 5x)$ at **multiple** points, i.e. 
the derivative of\n- its first component function $f_1(x)=-x^3$; and\n- its second component function $f_2(x)=5x$.\n\n\n```python\n# collapse-hide\nimport torch\nimport itertools\nx = torch.arange(3, dtype=float, requires_grad=True)\ny = torch.stack([-x**3, 5*x])\n\nranges = [range(_) for _ in y.shape]\n\nv1 = torch.tensor([1. if i == 0 else 0. for i, j in itertools.product(*ranges)]).view(*y.shape)\ny.backward(v1, retain_graph=True)\nprint(f\"Derivative of f_1(x)=-3x^2 at the points {tuple(x.data.view(-1).tolist())}:\")\nprint(x.grad)\n\nx.grad.zero_()\n\nv2 = torch.tensor([1. if i == 1 else 0. for i, j in itertools.product(*ranges)]).view(*y.shape)\ny.backward(v2)\nprint(f\"\\nDerivative of f_2(x)=5x at the points {tuple(x.data.view(-1).tolist())}:\")\nprint(x.grad)\n```\n\n Derivative of f_1(x)=-3x^2 at the points (0.0, 1.0, 2.0):\n tensor([ 0., -3., -12.], dtype=torch.float64)\n \n Derivative of f_2(x)=5x at the points (0.0, 1.0, 2.0):\n tensor([5., 5., 5.], dtype=torch.float64)\n\n\nThe _gradients_ of the **vector-valued**, **multivariate** function\n$$\nf(x_1, \\dots, x_n) = (x_1 + \\dots + x_n\\,, x_1^2 + \\dots + x_n^2)\n$$\nat a **single** point $(x_1, \\dots, x_n)$, i.e. the gradient of\n- its first component function $f_1(x_1, \\dots, x_n) = x_1 + \\dots + x_n$; and\n- its second component function $f_2(x_1, \\dots, x_n) = x_1^2 + \\dots + x_n^2$.\n\n\n```python\n# collapse-show\nimport torch\nx = torch.arange(4, dtype=float, requires_grad=True)\ny = torch.stack([x.sum(), (x**2).sum()])\n\nprint(f\"x : {tuple(x.data.tolist())}\")\nprint(f\"y = (y_1, y_2) : {tuple(y.data.tolist())}\")\n\nv1 = torch.tensor([1., 0.])\ny.backward(v1, retain_graph=True)\nprint(f\"gradient of y_1 : {tuple(x.grad.data.tolist())}\")\n\nx.grad.zero_()\n\nv2 = torch.tensor([0., 1.])\ny.backward(v2)\nprint(f\"gradient of y_2 : {tuple(x.grad.data.tolist())}\")\n```\n\n x : (0.0, 1.0, 2.0, 3.0)\n y = (y_1, y_2) : (6.0, 14.0)\n gradient of y_1 : (1.0, 1.0, 1.0, 1.0)\n gradient of y_2 : (0.0, 2.0, 4.0, 6.0)\n\n\nThe _gradients_ of the **vector-valued**, **multivariate** function\n$$\nf(x_1, \\dots, x_n) = (x_1 + \\dots + x_n\\,, x_1^2 + \\dots + x_n^2)\n$$\nat **multiple** points, i.e. the gradient of\n- its first component function $f_1(x_1, \\dots, x_n) = x_1 + \\dots + x_n$; and\n- its second component function $f_2(x_1, \\dots, x_n) = x_1^2 + \\dots + x_n^2$.\n\n\n```python\n# collapse-show\nimport torch\nimport itertools\nx = torch.arange(4*3, dtype=float).view(-1,4).requires_grad_(True)\ny = torch.stack([x.sum(1), (x**2).sum(1)])\nprint(\"x:\")\nprint(x.data)\nprint(\"y:\")\nprint(y.data)\n\nprint()\n\nranges = [range(_) for _ in y.shape]\n\nv1 = torch.tensor([1. if i == 0 else 0. for i, j in itertools.product(*ranges)]).view(*y.shape)\ny.backward(v1, retain_graph=True)\nprint(\"Gradients of the f1 at multiple points:\")\nprint(x.grad)\n\nx.grad.zero_()\n\nprint()\nv2 = torch.tensor([1. if i == 1 else 0. 
for i, j in itertools.product(*ranges)]).view(*y.shape)\ny.backward(v2)\nprint(\"Gradients of the f2 at multiple points:\")\nprint(x.grad)\n\n\n```\n\n x:\n tensor([[ 0., 1., 2., 3.],\n [ 4., 5., 6., 7.],\n [ 8., 9., 10., 11.]], dtype=torch.float64)\n y:\n tensor([[ 6., 22., 38.],\n [ 14., 126., 366.]], dtype=torch.float64)\n \n Gradients of the f1 at multiple points:\n tensor([[1., 1., 1., 1.],\n [1., 1., 1., 1.],\n [1., 1., 1., 1.]], dtype=torch.float64)\n \n Gradients of the f2 at multiple points:\n tensor([[ 0., 2., 4., 6.],\n [ 8., 10., 12., 14.],\n [16., 18., 20., 22.]], dtype=torch.float64)\n\n\n# Mathematical preliminaries\n## Scalars, vectors, matrices, and tensors\n\n- A **scalar** is a real number. It is usually denoted with $x$. \n- An **$n$-dimensional vector** is a list $(x_1, \\dots, x_n)$ of scalars.\n- An **$m$-by-$n$ matrix** is an array with $m$ rows and $n$ columns of scalars:\n$$\n\\begin{bmatrix}w_{1,1}&\\dots&w_{1,n}\\\\\\vdots&\\ddots&\\vdots\\\\w_{m,1}&\\dots&w_{m,n}\\end{bmatrix}\n$$\n- A **column vector** of length $n$ is a $n$-by-$1$ matrix:\n$$\\begin{bmatrix}x_1\\\\\\vdots\\\\x_n\\end{bmatrix}$$\nNote that it is distinct from its vector counterpart $(x_1, \\dots, x_n)$.\n- A **row vector** of length $n$ is a $1$-by-$n$ matrix:\n$$\\begin{bmatrix}x_1&\\dots&x_n\\end{bmatrix}$$\nNote that it is distinct from its vector and column vector counterparts.\n\n>Note:\nFor convenience, we may denote a vector, a column vector, or a row vector with a single symbol, typically $x$.\n\nIn another post we establish the following correspondence between these mathematical entities and their `tensor` counterparts in `PyTorch`:\n\n|mathematical name|mathematical notation|`tensor` shape|`tensor` dimension|\n|---|---|---|---|\n|scalar|$x$|`()`|`0`|\n|vector|$(x_1, \\dots, x_n)$|`(n,)`|`1`|\n|matrix|$\\begin{bmatrix}w_{1,1}&\\dots&w_{1,n}\\\\\\vdots&\\ddots&\\vdots\\\\w_{m,1}&\\dots&w_{n,m}\\end{bmatrix}$|`(m,n)`| `2`|\n|column vector|$\\begin{bmatrix}x_1\\\\\\vdots\\\\x_n\\end{bmatrix}$|`(n,1)`|`2`|\n|row vector|$\\begin{bmatrix}x_1&\\dots&x_n\\end{bmatrix}$|`(1,n)`|`2`|\n\n## Mathematical functions\n- We consider functions which are mappings from scalars, vectors, or matrices to scalars, vectors, or matrices. It is generically denoted $y=f(x)$.\n- A **scalar** function $y=f(x)$ is a function returning a scalar, i.e. $y$ is a scalar. \n- A **vector-valued** function $y=f(x)$ is a function returning a vector, i.e. $y$ is a vector. We often write\n$$f(x) = (f_1(x), \\dots, f_m(x))$$\nif the output is $m$-dimensional, where each of $f_1(x), \\dots, f_m(x)$ is a scalar function.\n- A **univariate** function $y=f(x)$ is a function depending on a scalar $x$.\n- A **multivariate** function $y=f(x)$ is a function depending on a vector $x=(x_1, \\dots, x_n)$. \n\nIn summary\n\n|$y=f(x)$|scalar-valued|vector-valued|\n|---|---|---|\n|**univariate**|$x$ is a scalar
    $y$ is a scalar|$x$ is a scalar
    $y$ is a vector|\n|**multivariate**|$x$ is a vector
    $y$ is a scalar|$x$ is a vector
    $y$ is a vector|\n\n## Differentiation\n### Basic definitions\nWe do not recall the definitions for:\n- the **derivative** $f'(x)$ of a scalar, uni-variate function $y=f(x)$ evaluated at a scalar $x$;\n- the **partial derivatives** $\\frac{\\partial f}{\\partial x_i}(x)$, $i=1, \\dots, n$, of a scalar, multivariate function $y=f(x)$ with respect to the variables $x_1, \\dots, x_n$, and evaluated at $x=(x_1, \\dots, x_n)$.\n\n### Derivatives of vector-valued, univariate functions\nThe **derivative** of a vector-valued, uni-variate function $y=f(x)$ evaluated at a scalar $x$ is the vertical concatenation of the derivatives of its component functions:\n$$f'(x) = \\begin{bmatrix}f_1'(x)\\\\\\vdots\\\\f_m'(x)\\end{bmatrix}$$\n\n### Gradients\nThe **gradient** of a scalar-valued function $y=f(x)$, is the *row* vector of its partial derivatives:\n$$\\nabla f(x) = \\begin{bmatrix}\\frac{\\partial f}{\\partial x_1}(x)&\\dots&\\frac{\\partial f}{\\partial x_n}(x)\\end{bmatrix}$$\nwith length $n$ if $x$ is $n$-dimensional: $x=(x_1, \\dots, x_n)$.\n\n### Jacobians\nThe **Jacobian** of a vector-valued, multivariate function $y=f(x)$ is the vertical concatenation of the gradients of the component functions $f_1, \\dots, f_m$:\n$$J_f(x)\n\\,=\\,\n\\begin{bmatrix}\n\\nabla f_1(x)\\\\\\vdots\\\\\\nabla f_m(x)\n\\end{bmatrix}\n\\,=\\,\n\\begin{bmatrix}\n \\frac{\\partial f_1}{\\partial x_1}(x)&\\dots&\\frac{\\partial f_1}{\\partial x_n}(x)\\\\\n \\vdots&\\ddots&\\vdots\\\\\n \\frac{\\partial f_m}{\\partial x_1}(x)&\\dots&\\frac{\\partial f_m}{\\partial x_n}(x)\n\\end{bmatrix}\n$$\nIt is thus an $m$-by-$n$ matrix, i.e. with $m$ rows and $n$ columns.\n\n#### Special case: $m=1$\nIn case $m=1$, the Jacobian agrees with the gradient of a scalar, multivariate function:\n$$J_f(x) = \\nabla f(x)$$\n\n#### Special case: $n=1$\nIn case $n=1$, the Jacobian agrees with the derivative of a vector-valued, univariate function.\n$$J_f(x) = \\begin{bmatrix}f_1'(x)\\\\\\vdots\\\\f_m'(x)\\end{bmatrix}$$\n\n## Vector-Jacobian products\nGiven a vector-valued, multivariate function $y=f(x)$ and a _column_ vector\n$v=\\begin{bmatrix}v_1\\\\\\vdots\\\\v_m\\end{bmatrix}$,\nthe **vector-Jacobian product** is the matrix multiplication\n$$v^\\top J_f(x) \\,=\\,\n\\begin{bmatrix}\nv_1&\\dots&v_m\n\\end{bmatrix}\n\\begin{bmatrix}\n \\frac{\\partial f_1}{\\partial x_1}(x)&\\dots&\\frac{\\partial f_1}{\\partial x_n}(x)\\\\\n \\vdots&\\ddots&\\vdots\\\\\n \\frac{\\partial f_m}{\\partial x_1}(x)&\\dots&\\frac{\\partial f_m}{\\partial x_n}(x)\n\\end{bmatrix}\n$$\nwhich is then a _row_ vector of length $n$.\n\n### Special case\nIf $v^\\top$ happens to be the gradient of a scalar-valued function $z=\\ell(y)$ evaluated at $f(x)$, i.e. $v = \\nabla \\ell(y)$ where $y=f(x)$, then\n\\begin{equation}\nv^\\top J_f(x) \n\\,=\\,\\nabla (\\ell\\circ f)(x)\n\\end{equation}\nIn other words, $v^\\top J_f(x)$ is the gradient of the composition of the function $\\ell$ with the function $f$.\n\n>Note:\nThe vector-Jacobian product can be generalized to cases where $x$ and $y$ are (mathematical) tensors of higher dimensions. This generalization is in fact used in some of the examples of this post.\n\n### Application: Gradients of vector-valued functions\nIf $y=f(x)=(f_1(x), \\dots, f_m(x))$ is a vector-valued, multivariate function, one computes the gradients $\\nabla f_1(x), \\dots, \\nabla f_m(x)$ one at a time, each time with a suitable vector $v$. 
Indeed, fix $i$ between $1$ and $m$, and define $\\ell_i(y)=y_i$ the function selecting the $i$-th coordinate of $y=(y_1, \\dots, y_m)$, so that\n$$f_i(x) = \\ell_i(f(x))\\,.$$\nNoting that\n$$\\nabla \\ell_i(y) = \\begin{bmatrix}0&\\cdots&0&1&0&\\cdots&0\\end{bmatrix}$$\nwhere the only non-zero coordinate is in the $i$-th position, then \n$$\n\\begin{align}\n\\nabla \\ell_i(f(x))J_f(x)\n& =\n\\begin{bmatrix}0&\\cdots&0&1&0&\\cdots&0\\end{bmatrix}\n\\begin{bmatrix}\n \\frac{\\partial f_1}{\\partial x_1}(x)&\\dots&\\frac{\\partial f_1}{\\partial x_n}(x)\\\\\n \\vdots&\\ddots&\\vdots\\\\\n \\frac{\\partial f_m}{\\partial x_1}(x)&\\dots&\\frac{\\partial f_m}{\\partial x_n}(x)\n\\end{bmatrix}\\\\\n&=\n\\begin{bmatrix}\\frac{\\partial f_i}{\\partial x_1}(x)&\\dots&\\frac{\\partial f_i}{\\partial x_n}(x)\\end{bmatrix}\n\\end{align}\n$$\n\n\n### Application: Derivatives at multiple points\nTo evaluate the derivative of a scalar, univariate function $f(x)$ at multiple sample points $x^{(1)}, \\dots, x^{(N)}$, we create a *new*, vector-valued and multivariate function\n$$F(x)=\\begin{bmatrix}f\\left(x^{(1)}\\right)\\\\ \\vdots \\\\ f\\left(x^{(N)}\\right)\\end{bmatrix}\n\\qquad\\textrm{where}\\qquad\nx\\,=\\,(x^{(1)}, \\dots, x^{(N)})\\,.$$\nThus, its Jacobian is\n$$J_F(x)=\\begin{bmatrix}\nf'(x^{(1)})&&&&\\\\\n&\\ddots&&&\\\\\n&&f'(x^{(j)})&&\\\\\n&&&\\ddots&\\\\\n&&&&f'(x^{(N)})\\end{bmatrix}\n$$\nwhere all off-diagonal terms are $0$.\nThus, setting $v=\\begin{bmatrix}1\\\\\\vdots\\\\1\\end{bmatrix}$, we obtain the gradient of $f$ evaluated at the $N$ sample points $x^{(1)}\\,, \\dots\\,, x^{(N)}$:\n$$\\begin{bmatrix}f'(x^{(1)})&\\dots& f'(x^{(j)})&\\cdots& f'(x^{(N)})\\end{bmatrix}\n=\\left[1\\,,\\dots\\,,1\\right]\nJ_f(x)\\,.$$\nThe interpretation here is that the resulting row vector contains the derivative of $f$ at the samples $x^{(1)}$ to $x^{(N)}$.\n\n### The trick with `sum()`\nThe trick of adding `sum()` before calling `backward` differs with the previous application only in the order of operations performed: the summation is performed before differentiation.\n\nFrom a scalar, univariate function $y=f(x)$, construct a new scalar, multivariate function \n$$G(x_1, \\dots, x_N) = f(x_1) + \\dots + f(x_N)$$\nUsing the rules of vector calculus, the gradient of $G$ at an $n$-dimensional point $(x_1, \\dots, x_N)$ is\n$$\n\\begin{align}\n\\nabla G(x) & = \\begin{bmatrix}\\frac{\\partial G}{\\partial x_1}(x)&\\cdots&\\frac{\\partial G}{\\partial x_N}\\end{bmatrix}\\\\\n& = \\begin{bmatrix}f'(x_1)&\\cdots&f'(x_N)\\end{bmatrix}\n\\end{align}\n$$\nThe interpretation here is that the resulting row vector contains the gradient of $G$ at the $N$-dimensional point $(x_1, \\dots, x_N)$.\n\n# Computing gradients with `PyTorch`\nA mathematical function is a mapping, which strictly speaking one should denote $f$. The denotation $y=f(x)$ is simply to suggest that the typical input will be denoted $x$ and the corresponding output will be denoted $y$. Otherwise, $y=f(x)$ actually asserts the identity between a value $y$ and the evaluation of the function $f$ at the value $x$.\n\nIn `PyTorch`, the primary objects are `tensor`s, which can represent (mathematical) scalars, vectors, and matrices (as well as mathematical tensors). 
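\n\nAs a quick illustration of that correspondence (a minimal sketch; the particular values are arbitrary):\n\n```python\nimport torch\n\nscalar = torch.tensor(1.)      # shape (), dimension 0\nvector = torch.ones(3)         # shape (3,), dimension 1\nmatrix = torch.ones(2, 3)      # shape (2, 3), dimension 2\n\nscalar.shape, vector.shape, matrix.shape\n```\n\n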
The way a `PyTorch` function calculates a `tensor`, generically denoted `y` and called the output, from another `tensor`, generically denoted `x` and called the input, reflects the action of a mathematical function $f$ (or $y=f(x)$).\n\nConversely, a mathematical function $f$ can be evaluated at $x$ using `PyTorch`, and furthermore `PyTorch` allows to evaluate the derivative or gradient of $f$ at $x$ via the method `backward`. More specifically, the `backward` function performs vector-Jacobian products, where the vector correspond to the `gradient` argument. The key point in using the `backward` is thus to understand how to choose the `gradient` argument.\n\nThe mathematical preliminaries above show how `gradient` should be chosen. There are two key points: \n1. `gradient` has the same shape as `y`; \n1. `gradient` is populated with `0.`'s and `1.`'s, and the location of the `1.`'s corresponding to the inputs and outputs of interest.\n\n# Examples revisited\n\n>Note:\nThe variable `v` is passed to the `gradient` argument in all our examples.\n\nFor the derivative of a scalar, univariate function evaluated a single point, we choose `gradient=torch.tensor(1.)`, which is the default value:\n\n\n```python\nimport torch\nx = torch.tensor(1., requires_grad=True)\ny = x**2\nv = torch.ones_like(y)\ny.backward()\nprint(f\"Shape of x : {tuple(x.shape)}\")\nprint(f\"Shape of y : {tuple(y.shape)}\")\nprint(f\"gradient argument : {v}\")\n```\n\n Shape of x : ()\n Shape of y : ()\n gradient argument : 1.0\n\n\nNote that if `x` is cast as a `1`-dimensional `tensor`, then (in this particular example) `y` is also a `1`-dimensional `tensor`:\n\n\n```python\nimport torch\nx = torch.tensor([1.], requires_grad=True)\ny = x**2\nv = torch.ones_like(y)\ny.backward()\nprint(f\"Shape of x : {tuple(x.shape)}\")\nprint(f\"Shape of y : {tuple(y.shape)}\")\nprint(f\"gradient argument : {v}\")\n```\n\n Shape of x : (1,)\n Shape of y : (1,)\n gradient argument : tensor([1.])\n\n\nSimilarly if `x` is cast as `2`-dimensional `tensor`:\n\n\n```python\nimport torch\nx = torch.tensor([[1.]], requires_grad=True)\ny = x**2\nv = torch.ones_like(y)\ny.backward()\nprint(f\"Shape of x : {tuple(x.shape)}\")\nprint(f\"Shape of y : {tuple(y.shape)}\")\nprint(f\"gradient argument : {v}\")\n```\n\n Shape of x : (1, 1)\n Shape of y : (1, 1)\n gradient argument : tensor([[1.]])\n\n\nFor the derivative of a scalar, univariate function evaluated at multiple points, `gradient` contains all `1.`'s and is of same shape as `y`:\n\n\n```python\nimport torch\nx = torch.linspace(-1, 1, 5, requires_grad=True)\ny = x**2\nv = torch.ones_like(y)\ny.backward(v)\nprint(f\"Shape of x : {tuple(x.shape)}\")\nprint(f\"Shape of y : {tuple(y.shape)}\")\nprint(f\"gradient argument : {v}\")\n```\n\n Shape of x : (5,)\n Shape of y : (5,)\n gradient argument : tensor([1., 1., 1., 1., 1.])\n\n\nCasting `x` in a different shape changes the shape of `y`, and thus of `gradient`:\n\n\n```python\nimport torch\nx = torch.linspace(-2, 2, 5).view(-1,1).requires_grad_(True)\ny = x**2\nv = torch.ones_like(y)\ny.backward(v)\nprint(f\"Shape of x : {tuple(x.shape)}\")\nprint(f\"Shape of y : {tuple(y.shape)}\")\nprint(f\"gradient argument : \")\nprint(v)\n```\n\n Shape of x : (5, 1)\n Shape of y : (5, 1)\n gradient argument : \n tensor([[1.],\n [1.],\n [1.],\n [1.],\n [1.]])\n\n\nFor the derivative of a vector-valued, univariate function evaluated at a single point, the derivative of each component function is calculated one at a time, and `gradient` consists of all `0.`'s 
except for one `1.`, which is located at a position corresponding to the component function. In the example below, the function is in fact *matrix-valued*, namely we calculate the derivative of\n$$f(x) = \\begin{bmatrix}1&x\\\\x^2&x^3\\\\x^4&x^5\\end{bmatrix}\\qquad \\textrm{at}\\quad x\\,=\\,1\\,.$$\n\n\n```python\n# collapse-show\nimport torch\nimport itertools\nx = torch.tensor(1., requires_grad=True)\ny = torch.stack([x**i for i in range(6)]).view(3,2)\nranges = [range(_) for _ in y.shape]\n\nprint(\"x:\")\nprint(x.data)\nprint(\"\\ny:\")\nprint(y.data)\n\nderivatives = torch.zeros_like(y)\n\nfor i, j in itertools.product(*ranges):\n v = torch.zeros_like(y)\n v[i,j] = 1.\n if x.grad is not None: x.grad.zero_()\n \n y.backward(v, retain_graph=True)\n derivatives[i,j] = x.grad.item()\nprint(\"\\nDerivatives:\")\nprint(derivatives) \n```\n\n x:\n tensor(1.)\n \n y:\n tensor([[1., 1.],\n [1., 1.],\n [1., 1.]])\n \n Derivatives:\n tensor([[0., 1.],\n [2., 3.],\n [4., 5.]])\n\n\n>Note:\nThe use of `for` loops can be avoided.\n\nFor the gradient of a scalar, multivariate function evaluated at a single point, `gradient=torch.tensor(1.)`: \n\n\n```python\nimport torch\nx = torch.tensor([-1., 2.], requires_grad=True)\nw = torch.tensor([3., 5.])\ny = (x*x*w).sum()\nv = torch.ones_like(y)\ny.backward()\nprint(f\"Shape of x : {tuple(x.shape)}\")\nprint(f\"Shape of y : {tuple(y.shape)}\")\nprint(f\"gradient argument : {v}\")\n```\n\n Shape of x : (2,)\n Shape of y : ()\n gradient argument : 1.0\n\n\nIn the following example, the input `x` is a `(3,2)`-tensor:\n\n\n```python\nx = torch.arange(6, dtype=float).view(3,2).requires_grad_(True)\ny = (x**2).sum()\nv = torch.ones_like(y)\ny.backward(v)\n\nprint(f\"Shape of x: {tuple(x.shape)}\")\nprint(f\"Shape of y: {tuple(y.shape)}\")\nprint(f\"gradient argument: {v}\")\nprint(\"x:\")\nprint(x.data)\nprint(\"x.grad:\")\nprint(x.grad.data)\n```\n\n Shape of x: (3, 2)\n Shape of y: ()\n gradient argument: 1.0\n x:\n tensor([[0., 1.],\n [2., 3.],\n [4., 5.]], dtype=torch.float64)\n x.grad:\n tensor([[ 0., 2.],\n [ 4., 6.],\n [ 8., 10.]], dtype=torch.float64)\n\n", "meta": {"hexsha": "5e5de3072f6ae56ca77a9022ae70baa29ab1d77a", "size": 37130, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "_notebooks/2020-04-19-pytorch-backward-function-vector-Jacobian-product.ipynb", "max_stars_repo_name": "antoinechoffrut/fastai-companion", "max_stars_repo_head_hexsha": "97ff750d685ffc6dbe58be78b231f338d1742a5b", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "_notebooks/2020-04-19-pytorch-backward-function-vector-Jacobian-product.ipynb", "max_issues_repo_name": "antoinechoffrut/fastai-companion", "max_issues_repo_head_hexsha": "97ff750d685ffc6dbe58be78b231f338d1742a5b", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2021-03-30T08:36:47.000Z", "max_issues_repo_issues_event_max_datetime": "2021-09-28T01:23:56.000Z", "max_forks_repo_path": "_notebooks/2020-04-19-pytorch-backward-function-vector-Jacobian-product.ipynb", "max_forks_repo_name": "antoinechoffrut/fastai-companion", "max_forks_repo_head_hexsha": "97ff750d685ffc6dbe58be78b231f338d1742a5b", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-08-01T22:59:29.000Z", "max_forks_repo_forks_event_max_datetime": "2020-08-01T22:59:29.000Z", 
"avg_line_length": 32.3432055749, "max_line_length": 437, "alphanum_fraction": 0.5233773229, "converted": true, "num_tokens": 7821, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9207896780646393, "lm_q2_score": 0.9019206673024666, "lm_q1q2_score": 0.8304792408852828}} {"text": "# SciPy\n\n_Numpy_ provides a high-performance multidimensional array and basic tools to compute with and manipulate these arrays. **SciPy** builds on this, and provides a large number of functions that operate on numpy arrays and are useful for different types of scientific and engineering applications.\n\nThe best way to get familiar with SciPy is to browse the documentation (found at [https://docs.scipy.org/doc/scipy-1.1.0/reference/tutorial/index.html]). We will highlight some parts of SciPy that you might find useful for this class.\n\nSciPy is a collection of mathematical algorithms and convenience functions built on the Numpy extension of Python. It adds significant power to the interactive Python session by providing the user with high-level commands and classes for manipulating and visualizing data. With SciPy an interactive Python session becomes a data-processing and system-prototyping environment rivaling systems such as MATLAB, IDL, Octave, R-Lab, and SciLab.\n\nThe additional benefit of basing SciPy on Python is that this also makes a powerful programming language available for use in developing sophisticated programs and specialized applications. Scientific applications using SciPy benefit from the development of additional modules in numerous niches of the software landscape by developers across the world. Everything from parallel programming to web and data-base subroutines and classes have been made available to the Python programmer. All of this power is available in addition to the mathematical libraries in SciPy.\n\nThis tutorial will acquaint the first-time user of SciPy with some of its most important features. It assumes that the user has already installed the SciPy package. Some general Python facility is also assumed, such as could be acquired by working through the Python distribution\u2019s Tutorial. For further introductory help the user is directed to the Numpy documentation.\n\nFor brevity and convenience, we will often assume that the main packages (numpy, scipy, and matplotlib) have been imported as:\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n```\n\n## SciPy Organization\n\nSciPy is organized into subpackages covering different scientific computing domains. 
These are summarized in the following table:\n\n| Subpackage | Description | \n| ------ | ---- |\n| `cluster` | Clustering algorithms |\n| `constants` | Physical and mathematical constants |\n| `fftpack` | Fast Fourier Transform routines |\n| `integrate` | Integration and ordinary differential equation solvers |\n| `interpolate` | Interpolation and smoothing splines |\n| `io` | Input and Output |\n| `linalg` | Linear algebra |\n| `ndimage` | N-dimensional image processing |\n| `odr` | Orthogonal distance regression |\n| `optimize` | Optimization and root-finding routines |\n| `signal` | Signal processing |\n| `sparse` | Sparse matrices and associated routines |\n| `spatial` | Spatial data structures and algorithms |\n| `special` | Special functions |\n| `stats` | Statistical distributions and functions |\n\nWe will barely scratch the surface in terms of the huge expanse of libraries that **SciPy** offers, but it is recommended that each SciPy sub-package is imported separately, for example:\n\n\n```python\nfrom scipy import linalg, optimize\n```\n\n## Integration\n\nThe `scipy.integrate` sub-package provides several integration techniques including an ordinary differential equation integrator. An overview of the module is provided by the `help` command:\n\n\n```python\nfrom scipy import integrate\n```\n\n## General Integration (quad)\n\nThe function quad is provided to integrate a function of one variable between two points. The points can be ($\\pm \\infty$) to indicate infinite limits. For example, let's say you wish to integrate:\n\n$$\nI=\\int_0^{\\frac{\\pi}{2}}cos(x)dx\n$$\n\nThis can be trivially computed using `integrate.quad()`:\n\n\n```python\nresult = integrate.quad(lambda x: np.cos(x), 0, np.pi/2)\nresult\n```\n\nThe first value represents the *integral*, as we would expect it is extremely close to $1$. The second value represents the *absolute error* estimate within the result, as SciPy computes the integral **numerically**.\n\nIf the function to integrate takes *additional parameters*, this can be provided for in the **args** argument. These parameters must be considered *constants*. Suppose that the following integral shall be calculated:\n\n$$\nI(a,b)=\\int_0^1 ax^2 + b dx\n$$\n\nThis is implemented as follows:\n\n\n```python\ndef integrand(x, a, b):\n return a*x**2 + b\n\na = 2\nb = 1\nI = integrate.quad(integrand, 0, 1, args=(a,b))\nI\n```\n\n## General multiple integration\n\nThe mechanics for double and triple integration have been wrapped up into the functions `dblquad` and `tplquad`. These functions take the function to integrate and four, or six arguments, respectively. The limits of all inner integrals need to be defined as functions.\n\nAn example of using double integration to compute several values of $I_n$ is shown below:\n\n\n```python\ndef I(n):\n return integrate.dblquad(lambda t, x: np.exp(-x*t)/t**n, 0, np.inf, lambda x: 1, lambda x: np.inf)\n\nprint(I(2))\nprint(I(3))\nprint(I(4))\n```\n\n## Integration using samples\n\nIf we are working with data samples across some space, we can approximate an integral of both equally-spaced and arbitrarily-spaced samples using a variety of different methods. 
Two of the most common are `trapz` and `simps`:\n\n\n```python\nx = np.array([1,3,4])\ny = x**2\nintegrate.simps(y, x)\n```\n\nThis corresponds exactly to:\n\n$$\n\\int_1^4 x^2 dx=21\n$$\n\nwhereas integrating the following:\n\n\n```python\ny2 = x**3\nintegrate.simps(y2, x)\n```\n\nDoesn't correspond to:\n\n$$\n\\int_1^4 x^3 dx = 63.75\n$$\n\nThis is because Simpson's rule approximates the function between adjacent point as a parabola, as long as the function is a polynomial of order 2 or less with unequal spacing. Simpson's rule is more accurate than `trapz`, but `trapz` is considerably more reliable, as it interpolates *linearly* by integrating in small trapezoid parts along the sample space.\n\n## Ordinary differential equations (ODEs)\n\nIntegrating a set of ordinary differential equations (ODEs) given initial conditions is another useful example. The function `odeint` is available in SciPy for integrating a first-order vector differential equation:\n\n$$\n\\frac{d\\dot{y}}{dt}=f(\\dot{y},t)\n$$\n\ngiven initial conditions $\\dot{y}(0)=y_0$, where $\\dot{y}$ is a length $N$ vector and $f$ is a mapping from $\\mathcal{R}^N$ to $\\mathcal{R}^N$. A higher-order ordinary differential equation can always be reduced to a differential equation of this type by introducing intermediate derivatives into the $\\dot{y}$ vector.\n\n### Example\n\nThe second order differential equation for the angle theta of a pendulum acted on by gravity with friction can be written:\n\n$$\n\\theta''(t) + b \\theta'(t) + c \\sin(\\theta(t)) = 0\n$$\n\nwhere $b$ and $c$ are care positive constants, and a prime $'$ denotes a derivative. To solve this equation with `odeint`, we first convert it to a system of first-order equations. By defining angular velocity $\\omega(t)=\\theta'(t)$, we obtain the system:\n\n$$\n\\begin{equation}\n\\theta'(t)=\\omega(t) \\\\\n\\omega'(t)=-b \\omega(t) - c \\sin(\\theta(t))\n\\end{equation}\n$$\n\nLet $y$ be the vector $[\\theta, \\omega]$. We implement this system in Python as:\n\n\n```python\ndef pend(y, t, b, c):\n theta, omega = y\n dydt = [omega, -b*omega - c*np.sin(theta)]\n return dydt\n```\n\nWe assume for the initial conditions, the pendulum is nearly vertical with $\\theta(0)=\\pi - 0.1$, and is initially at rest, so $\\omega(0)=0$. Then the vector of initial conditions, with constants $b=0.25$ and $c=5.0$, is:\n\n\n\n\n```python\nb = 0.25\nc = 5.0\ny0 = [np.pi - 0.1, 0.0]\n```\n\nNow we generate a solution over a uniform-space sample set in the interval $t \\in [0, 10]$:\n\n\n```python\nt = np.linspace(0, 10, 101)\n```\n\nCalling `odeint` to generate the solution. We pass $b$ and $c$ to `odeint` using the *args* argument:\n\n\n```python\nsol = integrate.odeint(pend, y0, t, args=(b,c))\n```\n\nIn our solution, we have a $[101,2]$ array, whereby the first column is $\\theta(t)$ and the second is $\\omega(t)$. 
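\n\nAs an aside, recent versions of SciPy (1.0 and later) also provide `solve_ivp` as an alternative interface to `odeint`. The sketch below solves the same pendulum problem and is only meant as a pointer: it reuses the `pend`, `y0`, `b`, `c` and `t` objects defined above, and note that `solve_ivp` expects the right-hand side with signature `f(t, y)` rather than `f(y, t)`. We continue to plot the `odeint` solution `sol` below.\n\n```python\n# same pendulum via solve_ivp, wrapping pend to swap the argument order\nsol_ivp = integrate.solve_ivp(lambda t, y: pend(y, t, b, c), (0, 10), y0, t_eval=t)\nsol_ivp.y.shape   # (2, 101): first row is theta(t), second row is omega(t)\n```\n\n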
We plot as:\n\n\n```python\nplt.plot(t, sol[:,0], label=r\"$\\theta(t)$\")\nplt.plot(t, sol[:,1], label=r\"$\\omega(t)$\")\nplt.xlabel(r\"$t$\")\nplt.legend()\nplt.show()\n```\n\n## Interpolation\n\nThere are several general interpolation facilities available in SciPy, for data in 1, 2, and higher dimensions.\n\nThe `interp1d` class in `scipy.interpolate` is a convenient method to create a function based on fixed data points which can be evaluated anywhere within the domain defined by the given data using linear interpolation.\n\n\n```python\nfrom scipy import interpolate\n```\n\n\n```python\nx = np.linspace(0, 10, 11, endpoint=True)\ny = np.cos(-x**2/9.)\nf = interpolate.interp1d(x, y)\nf2 = interpolate.interp1d(x, y, kind=\"nearest\")\nf3 = interpolate.interp1d(x, y, kind=\"cubic\")\nf4 = interpolate.interp1d(x, y, kind=\"next\")\n\nxnew = np.linspace(0, 10, 71, endpoint=True)\n\nfig,ax=plt.subplots(ncols=4, figsize=(15,4))\nax[0].plot(x, y, 'o', xnew, f(xnew), 'r-', label=\"linear\")\nax[1].plot(x, y, 'o', xnew, f2(xnew), 'g-', label=\"nearest\")\nax[2].plot(x, y, 'o', xnew, f3(xnew), 'b-', label=\"cubic\")\nax[3].plot(x, y, 'o', xnew, f4(xnew), 'k-', label=\"next\")\nfor a in ax:\n a.legend()\nplt.show()\n```\n\n### Multivariate data interpolation\n\nSuppose you have multidimensional data, for instance for an underlying function $f(x, y)$ you only know the values at points ($x[i]$, $y[i]$) that do not form a regular grid.\n\nSuppose we want to interpolate the 2-D function:\n\n\n```python\ndef func(x, y):\n return x*(1-x)*np.cos(4*np.pi*x) * np.sin(4*np.pi*y**2)**2\n\ngrid_x, grid_y = np.mgrid[0:1:100j, 0:1:200j]\n```\n\nbut we only know its values at 1000 data points:\n\n\n```python\npoints = np.random.rand(1000, 2)\nvalues = func(points[:,0], points[:,1])\n```\n\nThis can be done with `griddata` \u2013 below we try out all of the interpolation methods:\n\n\n```python\ngrid_z0 = interpolate.griddata(points, values, (grid_x, grid_y), method=\"nearest\")\ngrid_z1 = interpolate.griddata(points, values, (grid_x, grid_y), method=\"linear\")\ngrid_z2 = interpolate.griddata(points, values, (grid_x, grid_y), method=\"cubic\")\n\nfig, ax = plt.subplots(ncols=3, figsize=(15,4))\n\nfor i,p in enumerate([grid_z0, grid_z1, grid_z2]):\n ax[i].imshow(p)\nfor i,c in enumerate([\"nearest\",\"linear\",\"cubic\"]):\n ax[i].set_title(c)\n ax[i].axis(\"off\")\n```\n\n### Spline interpolation\n\nSpline interpolation requires two essential steps: (1) a spline representation of the curve is computed, and (2) the spline is evaluated at the desired points. In order to find the spline representation, there are two different ways to represent a curve and obtain (smoothing) spline coefficients: directly and parametrically. The direct method finds the spline representation of a curve in a two- dimensional plane using the function `splrep`:\n\n\n```python\nx = np.arange(0, 2*np.pi+np.pi/4, 2*np.pi/8)\ny = np.sin(x)\ntck = interpolate.splrep(x, y, s = 0)\ntck\n```\n\nThe keyword argument, s , is used to specify the amount of smoothing to perform during the spline fit. The default value of $s$ is $s=m-\\sqrt{2m}$ where $m$ is the number of data points being fit. 
Thus if no smoothing is desired $s=0$.\n\nOnce the spline representation of the data has been determined, functions are available for evaluating the spline (`splev`) and its derivatives (`splev`, `spalde`) at any point and the integral of the spline between any two points ( `splint`):\n\n\n```python\nxnew = np.arange(0, 2*np.pi, np.pi/50)\nynew = interpolate.splev(xnew, tck, der=0)\n\nplt.plot(x, y, 'x', xnew, ynew, xnew, np.sin(xnew), x, y, 'b')\nplt.legend([\"Linear\",\"Cubic\",\"True\"])\n```\n\n## Multidimensional image processing\n\nImage processing and analysis are generally seen as operations on two-dimensional arrays of values. There are however a number of fields where images of higher dimensionality must be analyzed. Good examples of these are **medical imaging** and **biological imaging**. `numpy` is suited very well for this type of applications due its inherent multidimensional nature. The `scipy.ndimage` packages provides a number of general image processing and analysis functions that are designed to operate with arrays of arbitrary dimensionality. The packages currently includes functions for linear and non-linear filtering, binary morphology, B-spline interpolation, and object measurements.\n\nTo access this functionality, we import the `ndimage` package:\n\n\n```python\nfrom scipy import ndimage\n```\n\n### Importing images from file\n\nCreating a numpy array from an image file:\n\n\n```python\nfig,ax=plt.subplots(ncols=3, figsize=(15,5))\n\nfly = plt.imread(\"butterfly.jpg\")\nprint(fly.shape, fly.dtype)\nax[0].imshow(fly)\nfor a in ax:\n a.axis(\"off\")\n# different interpolations\nax[1].imshow(fly, interpolation=\"bilinear\")\nax[2].imshow(fly, interpolation=\"nearest\")\nplt.show()\n```\n\n### Basic Manipulations\n\nIncluding **masking** and **rotation**:\n\n\n```python\n# create a copy to manipulate\nporthole_fly = fly.copy()\n\nlx, ly, lz = fly.shape\nX, Y = np.ogrid[0:lx, 0:ly]\nmask = (X - lx / 2) **2 + (Y - ly / 2) **2 > lx * ly / 4\n\nporthole_fly[mask,:] = 0\n\nfig,ax=plt.subplots(ncols=2, figsize=(14,5))\n\nax[0].imshow(porthole_fly)\nax[0].axis(\"off\")\n\nfly_rot = ndimage.rotate(fly, 45, reshape=False)\nax[1].imshow(fly_rot)\nax[1].axis(\"off\")\nplt.show()\n```\n\n### Blurring/Smoothing\n\nNote that this has selected only on the *gray* channel:\n\n\n```python\nblurred = ndimage.gaussian_filter(fly, sigma=3)\nvery_blurred = ndimage.gaussian_filter(fly, sigma=5)\nunif_fly = ndimage.uniform_filter(fly, size=11)\n\nfig,ax=plt.subplots(ncols=3, figsize=(15,7))\nfor a in ax:\n a.axis(\"off\")\nax[0].imshow(blurred)\nax[1].imshow(very_blurred)\nax[2].imshow(unif_fly)\nplt.show()\n```\n\n### Sharpening\n\nTo sharpen an image, we apply a blurring filter and then remove the gaussian filter from the image:\n\n\n```python\nfilter_blurred = ndimage.gaussian_filter(blurred, 1)\n# select an alpha\nalpha = 5\nsharpened = blurred + alpha * (blurred - filter_blurred)\n\nfig,ax=plt.subplots(ncols=2, figsize=(15,7))\nax[0].imshow(fly)\nax[1].imshow(sharpened)\nplt.axis(\"off\")\nplt.show()\n```\n\n### Edge Detection\n\nWe can use a **gradient operator** (Sobel) to find high intensity variations:\n\n\n```python\nsq = np.zeros((256,256))\nsq[64:-64, 64:-64] = 1\nsq = ndimage.rotate(sq, 30, mode=\"constant\")\nsq = ndimage.gaussian_filter(sq, 8)\n\nsx = ndimage.sobel(sq, axis=0, mode=\"constant\")\nsy = ndimage.sobel(sq, axis=1, mode=\"constant\")\nsob = np.hypot(sx, sy)\n\nfig, ax=plt.subplots(ncols=3, figsize=(15,4))\nfor a in ax:\n a.axis(\"off\")\nfor i,p in enumerate([sx, 
sy, sob]):\n ax[i].imshow(p, cmap=\"hot\")\n```\n\n\n```python\n\n```\n\nThere is substantially more that can be found with processing images, however the scope of this session is just to cover some basic operations to show how things can be done.\n\n## Sparse Matrices\n\nNormal matrices are 2-D objects that store numerical values, and every value is stored in memory in a contiguous chunk. This provides benefits such as very fast access to individual items, but what about when most of the data values are null?\n\nWe can use `scipy.sparse` for a selection of different strategies for representing **sparse** data, and it even helps when we have cases where memory grows exponentially.\n\nSparse matrices act to *compress* the data to save memory usage, by not representing zero values. Applications include:\n\n- solution to partial differential equations (finite elements etc.)\n- graph theory (nodes and edges)\n\nSparsity can be visualised with `matplotlib` using `plt.spy`:\n\n\n```python\nX_sp = np.random.choice([0, 1], size=(200,200), p=[.95, .05])\nplt.spy(X_sp, cmap=\"Blues\")\nplt.show()\n```\n\nSparse matrices offer the data structure to store large, sparse matrices, and allows us to perform complex matrix computations. The ability to do such computations is incredibly powerful in a variety of data science problems. Learning to work with Sparse matrix, a large matrix or 2d-array with a lot elements being zero, can be extremely handy.\n\nPython\u2019s SciPy library has a lot of options for creating, storing, and operating with Sparse matrices. There are 7 different types of sparse matrices available.\n\n1. __csc_matrix__: Compressed Sparse Column format\n1. __csr_matrix__: Compressed Sparse Row format\n1. __bsr_matrix__: Block Sparse Row format\n1. __lil_matrix__: List of Lists format\n1. __dok_matrix__: Dictionary of Keys format\n1. __coo_matrix__: COOrdinate format\n1. __dia_matrix__: DIAgonal format\n\nThe default type is the **csr_matrix**, and NumPy converts your sparse matrix to this format before it conducts arithmetic operations on it. The table below highlights the opportunities of each format:\n\n\n| format | matrix `*` vector | get item | fancy get | set item | fancy set | solvers | note | \n| ------ | ---- | ------ | ---- | ------ | ---- | ------ | ---- |\n| DIA | sparsetools | . | . | . | . | iterative | has data array, specialized |\n| LIL | via CSR | yes | yes | yes | yes | iterative | arithmetics via CSR, incremental construction |\n| DOK | python | yes | one axis only | yes | yes | iterative | O(1) item access, incremental construction |\n| COO | sparsetools | . | . | . | . | iterative | has data array, facilitates fast conversion |\n| CSR | sparsetools | yes | yes | slow | . | any | has data array, fast row-wise operations |\n| CSC | sparsetools | yes | yes | slow | . | any | has data array, fast column-wise operations |\n| BSR | sparsetools | . | . | . | . 
\n\n\n```python\nfrom scipy import sparse\n```\n\n**WARNING**: When multiplying `scipy.sparse` matrices with `*`, the operation is *matrix multiplication* (i.e. the dot product), not element-wise multiplication.\n\n### Example\n\nHere we will create a **lil_matrix**, assign some random numbers, convert to CSR and use `sparse.linalg.spsolve`:\n\n\n```python\ndim_size = 10000\nsubsets = 1000\nA = sparse.lil_matrix((dim_size,dim_size))\nA[0, :subsets] = np.random.rand(subsets)\nA[1, subsets:subsets*2] = np.random.rand(subsets)\nA.setdiag(np.random.rand(dim_size))\n```\n\n\n```python\nA = A.tocsr()\nb = np.random.rand(dim_size)\n# solve using scipy.sparse.linalg\nx = sparse.linalg.spsolve(A, b)\n# solve non-sparse by converting A back to numpy!\nx_ = np.linalg.solve(A.toarray(), b)\n# error between methods\nerr = np.linalg.norm(x - x_)\nprint(err)\n```\n\n### CSC/CSR format\n\nThese are the best general-purpose formats, as they allow for fast matrix-vector products and other arithmetic along the appropriate axis, in addition to efficient row/column slicing.\n\n\n```python\nx = np.random.choice([0,1], size=(5,5), p=[.8, .2])\nx\n```\n\n\n```python\nsparse.csc_matrix(x)\n```\n\n\n```python\nsparse.csr_matrix(x)\n```\n\n### Diagonal sparse matrices\n\nThis format is natural, as a diagonal matrix is by definition mostly sparse, only containing non-zero values on the *diagonal* of the matrix.\n\n\n```python\nsparse.dia_matrix(np.ones((10000,10000)))\n```\n\n# Tasks\n\n## Task 1\n\nThe force $F$ on an area $A$ at a depth $y$ in a liquid of density $w$ is given by:\n\n$$\nF=wyA\n$$\n\nImagine this applied to a plate submerged vertically in a liquid.\n\nThe **total force** on the plate is given by:\n\n$$\nF=w\\int_a^b xy \\, dy\n$$\n\nwhere $x$ is the length (in m) of the element of area expressed in terms of $y$, $y$ is the depth (in m) of the element of area, $w$ is the density of the liquid (in $Nm^{-3}$), $a$ is the depth at the top of the area (in m), and $b$ is the depth at the bottom of the area (in m).\n\nCalculate the force on one side of a cubical container 10.0cm on an edge if the container is filled with water. The weight density of water is $w=9800Nm^{-3}$.\n\n\n```python\n# your codes here\n```\n\n## Task 2\n\nConsider the motion of a spring that is subject to a frictional force or a damping force. An example is the damping force supplied by a shock absorber in a car or a bicycle. We assume that the damping force is proportional to the velocity of the mass and acts in the direction opposite to the motion. Thus:\n\n$$\nF_d=-c\\frac{dx}{dt}\n$$\n\nwhere $c$ is a damping constant. Newton's second law thus gives:\n\n$$\nm\\frac{d^2x}{dt^2}=F_r + F_d=-kx-c\\frac{dx}{dt}\n$$\n\nwhich we re-arrange to:\n\n$$\nm\\frac{d^2x}{dt^2}+c\\frac{dx}{dt}+kx=0\n$$\n\nSolve this system of equations using `odeint`, with initial conditions $x(0)=0$ and $x'(0)=0.6$. Ensure that $m$, $c$ and $k$ are all positive constants, but initially test with $m=5$, $c=10$ and $k=128$. Create a time span $t \\in [0, 10]$ with a sensible number of steps.\n\nOnce you have done one run, try tweaking $m$ and $c$ and see the different plots you find.\n\nEnsure that you plot both $x(t)$ and $x'(t)$.\n\n\n```python\n# your codes here\n```\n\n## Task 3\n\nImport the image `bigcat.jpg`. Compute the Laplace transformation with the method `laplace()` found in `ndimage`. Plot the image of the big cat and its Laplace transformation. 
How well does the image capture the big cat from the scenery?\n\n\n```python\n# your codes here\n```\n\n## Task 4\n\nTrying different $\\sigma$, draw 9 Laplace-transformed cats using 9 different values of $\\sigma$ in the logspace range $\\sigma \\in [10^{-1}, 5]$, applied within a `gaussian_filter()`. Print the sigma at the top of each image.\n\n\n```python\n# your codes here\n```\n\n## Task 5\n\nWe can try to label groups of pixels in this image using `ndimage.label`, which accepts a boolean matrix. The boolean matrix to which it tries to associate groups can be generated in a number of ways; in this instance we will simply select points that are greater than the pixel mean across all pixels:\n\n$$\nB_{ij}=\n\\begin{cases}\n1 & B_{ij} > \\bar{B} \\\\\n0 & B_{ij} \\le \\bar{B} \n\\end{cases}\n$$\n\nPlot 3 images:\n1. The labelled unfiltered image\n2. The labelled laplacian-gaussian filtered image\n3. The labelled sobel-filtered image\n\nSobel filters can be generated using `ndimage.sobel`. You may choose to use a gaussian filter and/or laplace transform before using `sobel()`.\n\nYou may use any parameters to `gaussian_filter(sigma)` as necessary to get interesting results.\n\n\n```python\n# your codes here\n```\n\n## Solutions\n\n**WARNING**: _Please attempt to solve the problems before fetching the solutions!_\n\nSee the solutions to all of the problems here:\n\n\n```python\n%load solutions/02_solutions.py\n```\n", "meta": {"hexsha": "2236743f819383f1d8677fa3b8a23a063f28bc9d", "size": 33628, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "02-Simulation/02 SciPy Basics.ipynb", "max_stars_repo_name": "gregparkes/PythonTeaching", "max_stars_repo_head_hexsha": "f9c58bd13880f03c1f3552af743aa82f85f37140", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2022-03-10T17:02:22.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-19T14:49:30.000Z", "max_issues_repo_path": "02-Simulation/02 SciPy Basics.ipynb", "max_issues_repo_name": "gregparkes/PythonTeaching", "max_issues_repo_head_hexsha": "f9c58bd13880f03c1f3552af743aa82f85f37140", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "02-Simulation/02 SciPy Basics.ipynb", "max_forks_repo_name": "gregparkes/PythonTeaching", "max_forks_repo_head_hexsha": "f9c58bd13880f03c1f3552af743aa82f85f37140", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-05-14T14:11:48.000Z", "max_forks_repo_forks_event_max_datetime": "2021-05-14T14:11:48.000Z", "avg_line_length": 32.3346153846, "max_line_length": 691, "alphanum_fraction": 0.5871000357, "converted": true, "num_tokens": 5858, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9343951698485603, "lm_q2_score": 0.8887588052782736, "lm_q1q2_score": 0.830451934812396}} {"text": "### Logistic Regression From Scratch Using the Iris Data Set\nLogistic Regression is a classification algorithm that aims at predicting the probability that a certain observation $X_i$ belongs to the class $y=1$.\n\nHere we will first give the formulas with which we will implement LR using the gradient descent algorithm (following the likelihood approach).\n\nGenerally, we want to express the probability $p$ in terms of a linear combination of the input variables/features $X$; the most naive attempt would be $$p=\\theta_0+\\theta_1x_1+ .... +\\theta_nx_n.$$ 
We know that a probability satisfies $0\\leq p\\leq 1$, but from this function it is evident that $p$ might come out greater than $1$ or even less than $0$.\n\nWe can therefore, instead of modelling $p$ directly with this function, model the odds, i.e. $\\frac{p}{1-p}$, such that\n $$\\frac{p}{1-p}=\\theta_0+\\theta_1x_1+ .... +\\theta_nx_n$$\n \n We also know that $\\theta_0+\\theta_1x_1+ .... +\\theta_nx_n$ might sometimes be negative, while the odds are always non-negative. To take care of the negative values we apply $\\log$ to $\\frac{p}{1-p}$. This leads to\n $$\\log\\bigg(\\frac{p}{1-p}\\bigg)=\n \\theta_0+\\theta_1x_1+ .... +\\theta_nx_n=\\theta^TX \\quad \\text{(in vector notation)}$$\n \n The log-odds on the left can take any real value, so this equation is consistent, and solving it for $p$ guarantees an output in the range $0$ to $1$. To simplify things further and solve for $p$, we can exponentiate both sides of the equation above to obtain\n \\begin{align}\n &\\frac{p}{1-p}=e^{\\theta^TX}\\\\\n &p=e^{\\theta^TX}-pe^{\\theta^TX}\\\\\n &p+pe^{\\theta^TX}=e^{\\theta^TX}\\\\\n &p\\bigg(1+e^{\\theta^TX}\\bigg)=e^{\\theta^TX}\\\\\n &p=\\frac{e^{\\theta^TX}}{1+e^{\\theta^TX}}\\\\\n &p=\\frac{1}{1+e^{-\\theta^TX}}\n \\end{align}\nThe last equation is the famous sigmoid/logistic function.\n\nSince we have already found the probability $p$, which gives the probability that a certain observation $X_i$ belongs to a certain class, in this case $1$, the probability that the observation $X_i$ belongs to the class $0$ is $1-p$. This is because we have only two classes $\\{1,0\\}$. This is represented as:\n$$P(y=1|X;\\theta)=p\\\\\nP(y=0|X;\\theta)=1-p$$\nFor such an observation $X_i=x$, the probability above can be written in the more compact form $$P(y|x)=p^{y}(1-p)^{1-y}$$\n\nSince we have many observations in $X$, to obtain the probability of all the observations we multiply the individual probabilities together, i.e.\n$$P(y|x)=p^{y_1}(1-p)^{1-y_1}\\times p^{y_2}(1-p)^{1-y_2}\\times...\\times p^{y_n}(1-p)^{1-y_n}$$\nfor $n$ observations. The product above is called the likelihood of a certain outcome given the inputs, and it is the quantity we want to maximize. It is written as\n$$L(y;x)=\\prod_{i=1}^{n}p^{y_i}(1-p)^{1-y_i}$$\n\nGradient Descent (GD) uses derivatives, and it is not easy to differentiate a long product. We therefore apply the logarithm to the likelihood to obtain the (average) log-likelihood $l$; dividing by $n$ does not change the maximizer:\n$$l=\\frac{1}{n}\\log(L(y;x))=\\frac{1}{n}\\log\\bigg(\\prod_{i=1}^{n}p^{y_i}(1-p)^{1-y_i}\\bigg)=\\frac{1}{n}\\sum_{i=1}^{n}y_i\\log(p)+(1-y_i)\\log(1-p)$$\n\nRecall that $$p=\\frac{1}{1+e^{-\\theta^TX}}$$\nWe can therefore find the partial derivatives with respect to $\\theta_j$, since we want to find the parameters $\\theta_j$ that optimize the log-likelihood function. Maximizing $l$ is equivalent to minimizing the negative log-likelihood $-l$.\n\nAfter applying the chain rule to the negative log-likelihood, we obtain\n$$\\frac{\\partial (-l)}{\\partial \\theta_j}=\\frac{1}{n}\\sum_{i=1}^{n}x_{ij}(p_i-y_i)$$\n\nTo find the optimum $\\theta_j$ we use GD, which iteratively updates the values of $\\theta_j$ until convergence using:\n$$\\theta_j^{\\text{new}}=\\theta_j^{\\text{old}}-\\alpha \\frac{\\partial (-l)}{\\partial \\theta_j}$$\nwhere $\\alpha$ is a learning rate chosen experimentally.\n\nOnce the optimum values of $\\theta_j$ are found we can use $p=\\frac{1}{1+e^{-\\theta^TX}}$ to classify unseen data $X$.\n\nLet us now implement binary Logistic Regression from scratch. We will use the Iris data from Kaggle but with two categories instead of three. Kindly download the data here if you want to use the same data.
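\n\nBefore loading the data, here is a minimal, self-contained sketch (using NumPy and a made-up toy dataset, so the numbers are only illustrative) of the pieces derived above: the sigmoid, the average negative log-likelihood, and one gradient-descent update of $\\theta$:\n\n\n```python\nimport numpy as np\n\ndef sigmoid(z):\n    return 1 / (1 + np.exp(-z))\n\n# toy data: 4 observations, a column of ones for theta_0 plus 2 features\nX_toy = np.array([[1., 0.5, 1.2],\n                  [1., -1.0, 0.3],\n                  [1., 2.0, -0.5],\n                  [1., 0.1, 0.8]])\ny_toy = np.array([1, 0, 1, 0])\n\ntheta = np.zeros(3)   # initial parameters\nalpha = 0.1           # learning rate\n\np = sigmoid(X_toy @ theta)                                          # predicted probabilities\nloss = -np.mean(y_toy * np.log(p) + (1 - y_toy) * np.log(1 - p))    # negative average log-likelihood\ngrad = X_toy.T @ (p - y_toy) / len(y_toy)                           # gradient of the loss\ntheta = theta - alpha * grad                                        # one gradient-descent update\nprint(loss, theta)\n```\n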
\n\n```python\n## Import the necessary packages\nimport numpy as np # performing array calculations\nimport pandas as pd # loading and manipulating the dataset\nimport seaborn as sns # visualization\nimport matplotlib.pyplot as plt # visualization\n```\n\n\n```python\n# Load the data, which is local on my machine\ndata=pd.read_csv('Iris.csv')\n# check the first 5 rows\ndata.head()\n```\n\n\n\n\n       Id  SepalLengthCm  SepalWidthCm  PetalLengthCm  PetalWidthCm      Species\n    0   1            5.1           3.5            1.4           0.2  Iris-setosa\n    1   2            4.9           3.0            1.4           0.2  Iris-setosa\n    2   3            4.7           3.2            1.3           0.2  Iris-setosa\n    3   4            4.6           3.1            1.5           0.2  Iris-setosa\n    4   5            5.0           3.6            1.4           0.2  Iris-setosa\n\n\n\n\n```python\n# Check the shape of our dataset\ndata.shape\n```\n\n\n\n\n    (150, 6)\n\n\n\n#### The data has four features $X=\\{SepalLengthCm, SepalWidthCm, PetalLengthCm, PetalWidthCm\\}$ and one target variable $y=Species$, with $n=150$.\n\n\n```python\n# How many unique categories does the species column have?\ndata['Species'].unique()\n```\n\n\n\n\n    array(['Iris-setosa', 'Iris-versicolor', 'Iris-virginica'], dtype=object)\n\n\n\n#### As seen above, there are three categories ('Iris-setosa', 'Iris-versicolor', 'Iris-virginica'). We remove one so that it is a binary classification problem.\n\n\n```python\n# we remove one species using pandas\ndata1=data[data['Species']!='Iris-virginica']\nprint(data1.Species.unique()) # Two classes are left.\nprint(data1.shape)\n```\n\n    ['Iris-setosa' 'Iris-versicolor']\n    (100, 6)\n\n\n#### Now $n=100$\n\n\n```python\n# Next we drop the Id column and check whether there are missing values in the dataset\ndata2=data1.drop('Id', axis=1)\ndata2.info() \n# No null or missing values\n```\n\n    <class 'pandas.core.frame.DataFrame'>\n    Int64Index: 100 entries, 0 to 99\n    Data columns (total 5 columns):\n     #   Column         Non-Null Count  Dtype  \n    ---  ------         --------------  -----  \n     0   SepalLengthCm  100 non-null    float64\n     1   SepalWidthCm   100 non-null    float64\n     2   PetalLengthCm  100 non-null    float64\n     3   PetalWidthCm   100 non-null    float64\n     4   Species        100 non-null    object \n    dtypes: float64(4), object(1)\n    memory usage: 4.7+ KB\n\n\n#### Next we convert the species column to 0 and 1, since it is of object datatype, using the LabelEncoder from scikit-learn\n\n\n```python\n# encode the Species/target variable\nfrom sklearn.preprocessing import LabelEncoder\nle=LabelEncoder()\ndata2['Species']=le.fit_transform(data2['Species'])\ndata2.Species.unique()\n```\n\n\n\n\n    array([0, 1])\n\n\n\n#### Here we see that Iris-setosa was encoded as zero and Iris-versicolor as 1.\n\n#### The data appears to be recorded in a certain order, so we shuffle to mix the data well, using the sample method.\n\n\n```python\ndata2=data2.sample(frac=1)\n```\n\n\n```python\n# we can now assign X and y their respective values.\nX=data2.drop('Species',axis=1).values\ny=data2.Species.values\n```\n\n#### Since the dataset isn't that large we shall only train and check the score on the training data. Remember that it is of paramount importance to have a testing set in order to have a better model. 
In this tutorial, we just want to show how Logistic regression works with GD\n\n\n```python\n#hstack a column onf ones in X\nones=np.ones([100,1])\nX=np.hstack([ones,X])\n```\n\n\n```python\n#initialize the weights\nweights=np.zeros(5)\nweights\n```\n\n\n\n\n array([0., 0., 0., 0., 0.])\n\n\n\n\n```python\nepochs=1000\ncost=[]\ndef sigmoid(p):\n return 1/(1+np.exp(-p))\ndef gradient(X,y,p):\n return 1/X.shape[0]*(np.dot(X.T,p-y))\n\n```\n\n\n```python\ndef logistic(X,y,epochs,alpha):\n weights=np.zeros(5)\n for i in range(epochs):\n product=np.dot(X,weights)\n #print(product)\n p=sigmoid(product)\n l=-1/X.shape[0]*np.sum(y*np.log(p)+(1-y)*np.log(1-p))\n #print(l)\n cost.append(l)\n weights=weights-alpha*gradient(X,y,p)\n return weights\n```\n\n\n```python\nw1=logistic(X,y,500,0.02)\ndiff=np.round(sigmoid(X@w1))-y \n(len(y)-np.count_nonzero(diff))/len(y)*100\n```\n\n\n\n\n 100.0\n\n\n\n\n```python\nnp.linalg.pinv?\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "96fd512a15bb89383934fe20db7a5028ec31839a", "size": 15255, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Logistic_Regression_from_scratch.ipynb", "max_stars_repo_name": "Joemuthui/ML_theory_and_code", "max_stars_repo_head_hexsha": "ec5ae52809044fca3daf4064f1e3d09224f60ae4", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Logistic_Regression_from_scratch.ipynb", "max_issues_repo_name": "Joemuthui/ML_theory_and_code", "max_issues_repo_head_hexsha": "ec5ae52809044fca3daf4064f1e3d09224f60ae4", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Logistic_Regression_from_scratch.ipynb", "max_forks_repo_name": "Joemuthui/ML_theory_and_code", "max_forks_repo_head_hexsha": "ec5ae52809044fca3daf4064f1e3d09224f60ae4", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.2602459016, "max_line_length": 347, "alphanum_fraction": 0.5095378564, "converted": true, "num_tokens": 2794, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9481545274901875, "lm_q2_score": 0.8757869867849166, "lm_q1q2_score": 0.8303813966371076}} {"text": "# Regression\n\nRegression is the process of approximating noisy data with functions. The same numerical methods can also be used to approximate a complicated function with a simpler function. 
We'll begin by looking at some limited cases in terms of linear algebra.\n\n## Polynomial regression\n\nLong ago (in the Linear Algebra notebook), we solved an over-determined linear system to compute a polynomial approximation of a function.\nSometimes we make approximations because we are interested in the polynomial coefficients.\nThat is usually only for very low order (like linear or quadratic fits).\nInferring higher coefficients is ill-conditioned (as we saw with Vandermonde matrices) and probably not meaningful.\nWe now know that Chebyshev bases are good for representing high-degree polynomials, but if the points are arbitrarily spaced, how many do we need for the Chebyshev approximation to be well-conditioned?\n\n\n```python\n%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nplt.style.use('seaborn')\n\ndef cosspace(a, b, n=50):\n return (a + b)/2 + (b - a)/2 * (\n np.cos(np.linspace(-np.pi, 0, n)))\n\ndef vander_chebyshev(x, n=None):\n if n is None:\n n = len(x)\n T = np.ones((len(x), n))\n if n > 1:\n T[:,1] = x\n for k in range(1,n-1):\n T[:,k+1] = 2 * x * T[:,k] - T[:,k-1]\n return T\n\ndef runge1(x):\n return 1 / (1 + 10*x**2)\n```\n\n\n```python\ndef chebyshev_regress_eval(x, xx, n):\n V = vander_chebyshev(x, n)\n Q, R = np.linalg.qr(V)\n return vander_chebyshev(xx, n) @ np.linalg.solve(R, Q.T)\n\nxx = np.linspace(-1, 1, 100)\nx = np.linspace(-1, 1, 80)\nplt.plot(x, runge1(x), '.k')\nplt.plot(xx, chebyshev_regress_eval(x, xx, 40) @ runge1(x), label='p(x)')\nplt.plot(xx, runge1(xx), '-.', label='runge1(x)')\nplt.legend(loc='upper left');\n```\n\n* What is the degree $k$ of the polynomial that is used?\n* What distribution of points is used for the data?\n* Would this have artifacts if we used a degree $k$ polynomial to interpolate noise-free samples at $k+1$ points using the same distribution?\n\n\n```python\nns = np.geomspace(5, 1000, dtype=int)\nconds = [np.linalg.cond(vander_chebyshev(np.linspace(-1, 1, n),\n int(n**.5)))\n for n in ns]\nplt.loglog(ns, conds)\nplt.xlabel('$n$')\nplt.ylabel('cond');\n```\n\n* If we have $n$ data points, can we use a polynomial of degree $k = \\lfloor n/5 \\rfloor$?\n* What about $k = n^{3/4}$?\n* What expression $k = ?(n)$ appears to be sufficient?\n\n## Noisy data\n\nRegression really comes into its own when the data is noisy.\n\n\n```python\ndef runge1_noisy(x, sigma):\n return runge1(x) + np.random.randn(*x.shape)*sigma\n\nx = np.linspace(-1, 1, 500)\ny = runge1_noisy(x, 0.5)\nplt.plot(x, runge1(x), label='runge1(x)')\nplt.plot(x, y, '.')\nplt.plot(x, chebyshev_regress_eval(x, x, 7) @ y, label='regress(x)')\nplt.legend(loc='upper left');\n```\n\n### Probability distributions\n\nIn order to interpret real data, we need a model for the noise. We have chosen the most common and computationally convenient choice when creating the synthetic data above.\nThe function `randn` draws samples from the \"standard normal\" or Gaussian distribution.\n\n$$ p(t) = \\frac{1}{\\sqrt{2\\pi}} e^{-t^2/2} . 
$$\n\n\n```python\ndef stdnormal(t):\n return np.exp(-t**2/2) / np.sqrt(2*np.pi)\n\nn = 1000\nw = np.random.randn(n)\nplt.hist(w, bins=40, range=(-3,3), density=True)\nt = np.linspace(-3, 3)\nplt.plot(t, stdnormal(t))\nplt.xlabel('$t$')\nplt.ylabel('$p(t)$');\n```\n\n### Regression on noisy data\n\nWe can just go run our regression algorithm on the noisy data.\n\n\n```python\nx = np.linspace(-1, 1, 500)\ny = runge1_noisy(x, sigma=0.1)\n```\n\n\n```python\nplt.plot(x, y, '.')\nplt.plot(xx, chebyshev_regress_eval(x, xx, 9) @ y, label='p(x)')\nplt.plot(xx, runge1(xx), label='runge1(x)')\nplt.legend(loc='upper left');\n```\n\n## What problem are we solving?\n\n### Why do we call it a linear model?\n\nWe are currently working with algorithms that express the regression as a linear function of the model parameters. That is, we search for coefficients $c = [c_0, c_1, \\dotsc]^T$ such that\n\n$$ V(x) c \\approx y $$\n\nwhere the left hand side is linear in $c$. In different notation, we are searching for a predictive model\n\n$$ f(x_i, c) \\approx y_i \\text{ for all $(x_i, y_i)$} $$\n\nthat is linear in $c$.\n\n### Assumptions\n\n1. The independent variables $x$ are error-free\n1. The prediction (or \"response\") $f(x,c)$ is linear in $c$\n1. The noise in the measurements $y$ is independent (uncorrelated)\n1. The noise in the measurements $y$ has constant variance\n\nThere are reasons why all of these assumptions may be undesirable in practice, thus leading to more complicated methods.\n\n### Loss functions\n\nThe error in a single prediction $f(x_i,c)$ of an observation $(x_i, y_i)$ is often measured as\n$$ \\frac 1 2 \\big( f(x_i, c) - y_i \\big)^2, $$\nwhich turns out to have a statistical interpretation when the noise is normally distributed.\nIt is natural to define the error over the entire data set as\n\\begin{align} L(c; x, y) &= \\sum_i \\frac 1 2 \\big( f(x_i, c) - y_i \\big)^2 \\\\\n&= \\frac 1 2 \\lVert f(x, c) - y \\rVert^2\n\\end{align}\nwhere I've used the notation $f(x,c)$ to mean the vector resulting from gathering all of the outputs $f(x_i, c)$.\nThe function $L$ is called the \"loss function\" and is the key to relaxing the above assumptions.\n\n#### Optimization\nGiven data $(x,y)$ and loss function $L(c; x,y)$, we wish to find the coefficients $c$ that minimize the loss, thus yielding the \"best predictor\" (in a sense that can be made statistically precise). I.e.,\n$$ \\bar c = \\arg\\min_c L(c; x,y) . $$\n\nIt is usually desirable to design models such that the loss function is differentiable with respect to the coefficients $c$, because this allows the use of more efficient optimization methods. 
For our chosen model,\n\\begin{align} \\nabla_c L(c; x,y) = \\frac{\\partial L(c; x,y)}{\\partial c} &= \\sum_i \\big( f(x_i, c) - y_i \\big) \\frac{\\partial f(x_i, c)}{\\partial c} \\\\\n&= \\sum_i \\big( f(x_i, c) - y_i \\big) V(x_i)\n\\end{align}\nwhere $V(x_i)$ is the $i$th row of $V(x)$.\nA more linear algebraic way to write the same expression is\n\\begin{align} \\nabla_c L(c; x,y) &= \\big( f(x,c) - y \\big)^T V(x) \\\\\n&= \\big(V(x) c - y \\big)^T V(x) \\\\\n&= V(x)^T \\big( V(x) c - y \\big)\n\\end{align}\nA necessary condition for the loss function to be minimized is that $\\nabla_c L(c; x,y) = 0$.\n\n* Is the condition sufficient for general $f(x, c)$?\n* Is the condition sufficient for the linear model $V(x) c$?\n* Have we seen this sort of equation before?\n\n##### Uniqueness\nWe can read the expression $ V(x)^T \\big( V(x) c - y \\big) = 0$ as saying that the residual $V(x) c - y$ is orthogonal to the range of $V(x)$. Suppose $c$ satisfies this equation and $c' \\ne c$ is some other value of the coefficients. Then\n\\begin{align}\nV(x)^T \\big( V(x) c' - y \\big) &= V(x)^T V(x) c' - V(x)^T y \\\\\n&= V(x)^T V(x) c' - V(x)^T V(x) c \\\\\n&= V(x)^T V(x) (c' - c) \\ne 0\n\\end{align}\nwhenever $V(x)^T V(x)$ is nonsingular, which happens any time $V(x)$ has full column rank.\n\n##### Gradient descent\nInstead of solving the least squares problem using linear algebra (QR factorization), we could solve it using gradient descent. That is, on each iteration, we'll take a step in the direction of the negative gradient.\n\n\n```python\ndef grad_descent(loss, grad, c0, gamma=1e-3, tol=1e-5):\n \"\"\"Minimize loss(c) via gradient descent with initial guess c0\n using learning rate gamma. Declares convergence when gradient\n is less than tol or after 500 steps.\n \"\"\"\n c = c0.copy()\n chist = [c.copy()]\n lhist = [loss(c)]\n for it in range(500):\n g = grad(c)\n c -= gamma * g\n chist.append(c.copy())\n lhist.append(loss(c))\n if np.linalg.norm(g) < tol:\n break\n return c, np.array(chist), np.array(lhist)\n\nclass quadratic_loss:\n \"\"\"Test problem to give example of gradient descent.\"\"\"\n def __init__(self):\n self.A = np.array([[1, 1], [1, 4]])\n def loss(self, c):\n return .5 * c @ self.A @ c\n def grad(self, c):\n return self.A @ c\n\ndef test(gamma):\n q = quadratic_loss()\n c, chist, lhist = grad_descent(q.loss, q.grad, .9*np.ones(2), gamma=gamma)\n plt.semilogy(lhist)\n plt.ylabel('loss')\n plt.xlabel('cumulative iterations')\n\n plt.figure()\n l = np.linspace(-1, 1)\n x, y = np.meshgrid(l, l)\n z = [q.loss(np.array([x[i,j], y[i,j]])) for i in range(50) for j in range(50)]\n plt.contour(x, y, np.reshape(z, x.shape))\n plt.plot(chist[:,0], chist[:,1], 'o-')\n plt.title('gamma={}'.format(gamma))\n \ntest(.45)\n```\n\n\n```python\nclass chebyshev_regress:\n def __init__(self, x, y, n):\n self.V = vander_chebyshev(x, n)\n self.y = y\n self.n = n\n \n def init(self):\n return np.zeros(self.n)\n\n def loss(self, c):\n r = self.V @ c - y\n return 0.5 * (r @ r)\n \n def grad(self, c):\n r = self.V @ c - y\n return self.V.T @ r\n \nreg = chebyshev_regress(x, y, 6)\nc, _, lhist = grad_descent(reg.loss, reg.grad, reg.init(), gamma=3e-3)\nplt.semilogx(np.arange(1, 1+len(lhist)), lhist)\nplt.ylabel('loss')\nplt.xlabel('cumulative iterations')\nplt.ylim(bottom=0);\n```\n\n* How does changing the \"learning rate\" `gamma` affect convergence?\n* How does changing the number of basis functions affect convergence rate?\n\n#### Did we find the same solution?\nWe intend to solve the same 
problem as we previously solved using QR, so we hope to find the same solution.\n\n\n```python\nplt.plot(xx, chebyshev_regress_eval(x, xx, 6) @ y, '-k', label='QR')\nplt.plot(xx, vander_chebyshev(xx, 6) @ c, '--', label='grad_descent')\nplt.plot(x, y, '.')\nplt.legend();\n```\n\n#### Observations\n\n* This algorithm is kind of finnicky:\n * It takes many iterations to converge\n * The \"learning rate\" needs to be empirically tuned\n * When not tuned well, the algorithm can diverge\n* It could be made more robust using a line search\n* The $QR$ algorithm is a more efficient and robust way to solve these problems\n\n## Nonlinear regression\n\nInstead of the linear model\n$$ f(x,c) = V(x) c = c_0 + c_1 \\underbrace{x}_{T_1(x)} + c_2 T_2(x) + \\dotsb $$\nlet's consider a rational model with only three parameters\n$$ f(x,c) = \\frac{1}{c_0 + c_1 x + c_2 x^2} = (c_0 + c_1 x + c_2 x^2)^{-1} . $$\nWe'll use the same loss function\n$$ L(c; x,y) = \\frac 1 2 \\lVert f(x,c) - y \\rVert^2 . $$\n\nWe will also need the gradient\n$$ \\nabla_c L(c; x,y) = \\big( f(x,c) - y \\big)^T \\nabla_c f(x,c) $$\nwhere\n\\begin{align}\n\\frac{\\partial f(x,c)}{\\partial c_0} &= - f(x,c)^2 \\\\\n\\frac{\\partial f(x,c)}{\\partial c_1} &= - f(x,c)^2 x \\\\\n\\frac{\\partial f(x,c)}{\\partial c_2} &= - f(x,c)^2 x^2 .\n\\end{align}\n\n\n```python\nclass rational_regress:\n def __init__(self, x, y):\n self.x = x\n self.y = y\n self.n = 3\n \n def init(self):\n return np.ones(self.n)\n \n def f(self, c):\n x = self.x\n return 1 / (c[0] + c[1]*x + c[2]*x**2)\n \n def df(self, c):\n x = self.x\n f2 = self.f(c)**2\n return np.array([-f2, -f2*x, -f2*x**2]).T\n\n def loss(self, c):\n r = self.f(c) - self.y\n return 0.5 * (r @ r)\n \n def grad(self, c):\n r = self.f(c) - self.y\n return r @ self.df(c)\n\nreg = rational_regress(x, y)\nc, _, lhist = grad_descent(reg.loss, reg.grad, reg.init(), gamma=2e-2)\nplt.semilogx(np.arange(1, 1+len(lhist)), lhist)\nplt.ylabel('loss')\nplt.xlabel('cumulative iterations')\nplt.ylim(bottom=0)\nplt.title('rational c={}'.format(c));\n```\n\n\n```python\nx = np.linspace(-1, 1, 500)\ny = runge1_noisy(x, sigma=0.1)\n\nc0 = np.array([1, 0, 1.])\nc, _, lhist = grad_descent(reg.loss, reg.grad, c0, gamma=2e-2)\nplt.semilogx(np.arange(1, 1+len(lhist)), lhist)\nplt.ylabel('loss')\nplt.xlabel('cumulative iterations')\nplt.ylim(bottom=0)\nplt.title('rational c={}'.format(c));\n```\n\n\n```python\nplt.plot(x, y, '.')\nplt.plot(xx, runge1(xx), label='runge1(x)')\nplt.plot(xx, chebyshev_regress_eval(x, xx, 6) @ y, label='Polynomial')\nplt.plot(xx, 1/(c[0] + c[1]*xx + c[2]*xx**2), '--k', label='Rational')\nplt.legend();\n```\n\n#### Observations\n\n* There can be local minima or points that look an awful lot like local minima.\n* Convergence is sensitive to learning rate.\n* It takes a lot of iterations to converge.\n* A well-parametrized model (such as the rational model above) can accurately reconstruct the mean even with very noisy data.\n\n### Outlook\n\n* The optimization problem can be solved using a Newton method. 
It can be onerous to implement the needed derivatives.\n* The [Gauss-Newton method](https://en.wikipedia.org/wiki/Gauss%E2%80%93Newton_algorithm) (see homework) is often more practical than Newton while being faster than gradient descent, though it lacks robustness.\n* The [Levenberg-Marquardt method](https://en.wikipedia.org/wiki/Levenberg%E2%80%93Marquardt_algorithm) provides a sort of middle-ground between Gauss-Newton and gradient descent.\n* Many globalization techniques are used for models that possess many local minima.\n* One pervasive approach is stochastic gradient descent, where small batches (e.g., 1 or 10 or 20) are selected randomly from the corpus of observations (500 in our current example), and a step of gradient descent is applied to that reduced set of observations. This helps to escape saddle points and weak local minima.\n* Among expressive models $f(x,c)$, some may converge much more easily than others. Having a good optimization algorithm is essential for nonlinear regression with complicated models, especially those with many parameters $c$.\n* Classification is a very similar problem to regression, but the observations $y$ are discrete, thus\n\n * models $f(x,c)$ must have discrete output\n * the least squares loss function is not appropriate.\n* [Why momentum really works](https://distill.pub/2017/momentum/)\n", "meta": {"hexsha": "c257b4dcab0ec9bb693fcfc352470494253b468e", "size": 319624, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Regression.ipynb", "max_stars_repo_name": "justindeng21/numcomp-class-spring19", "max_stars_repo_head_hexsha": "4e1be516fc2f63e749c8553a0039b289135ec569", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 10, "max_stars_repo_stars_event_min_datetime": "2019-01-15T19:32:45.000Z", "max_stars_repo_stars_event_max_datetime": "2019-04-08T04:43:36.000Z", "max_issues_repo_path": "Regression.ipynb", "max_issues_repo_name": "justindeng21/numcomp-class-spring19", "max_issues_repo_head_hexsha": "4e1be516fc2f63e749c8553a0039b289135ec569", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": 21, "max_issues_repo_issues_event_min_datetime": "2019-01-25T17:37:39.000Z", "max_issues_repo_issues_event_max_datetime": "2019-04-29T22:19:40.000Z", "max_forks_repo_path": "Regression.ipynb", "max_forks_repo_name": "justindeng21/numcomp-class-spring19", "max_forks_repo_head_hexsha": "4e1be516fc2f63e749c8553a0039b289135ec569", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": 22, "max_forks_repo_forks_event_min_datetime": "2019-01-15T20:16:52.000Z", "max_forks_repo_forks_event_max_datetime": "2019-11-04T06:42:35.000Z", "avg_line_length": 463.2231884058, "max_line_length": 51712, "alphanum_fraction": 0.936722524, "converted": true, "num_tokens": 4126, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9124361580958427, "lm_q2_score": 0.9099069980980296, "lm_q1q2_score": 0.8302320455690873}} {"text": "```python\nimport sympy\nsympy.init_printing()\nimport numpy\nfrom sympy.plotting import plot #para plotear 2 variables\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nt, s = sympy.symbols('t, s')\na = sympy.symbols('a', real=True, positive=True)\n\nf = sympy.exp(-a*t)\nf\n```\n\n\n```python\nsympy.integrate(f*sympy.exp(-s*t), (t, 0, sympy.oo))\n```\n\n\n\n\n$\\displaystyle \\begin{cases} \\frac{1}{s \\left(\\frac{a}{s} + 1\\right)} & \\text{for}\\: \\left|{\\arg{\\left(s \\right)}}\\right| \\leq \\frac{\\pi}{2} \\\\\\int\\limits_{0}^{\\infty} e^{- a t} e^{- s t}\\, dt & \\text{otherwise} \\end{cases}$\n\n\n\n\n```python\nsympy.laplace_transform(f, t, s)\n```\n\n\n\n\n$\\displaystyle \\left( \\frac{1}{a + s}, \\ 0, \\ \\text{True}\\right)$\n\n\n\n\n```python\ndef L(f):\n return sympy.laplace_transform(f, t, s, noconds=True)\ndef invL(F):\n return sympy.inverse_laplace_transform(F, s, t)\n```\n\n\n```python\nF=L(f)\nF\n```\n\n\n```python\nf_=invL(F)\nf_\n```\n\n\n```python\nsympy.plot(sympy.Heaviside(t));\n```\n\n\n```python\nomega = sympy.Symbol('omega', real=True)\nexp = sympy.exp\nsin = sympy.sin\ncos = sympy.cos\nfunctions = [1,\n t,\n exp(-a*t),\n t*exp(-a*t),\n t**2*exp(-a*t),\n sin(omega*t),\n cos(omega*t),\n 1 - exp(-a*t),\n exp(-a*t)*sin(omega*t),\n exp(-a*t)*cos(omega*t),\n ]\nfunctions\n```\n\n\n```python\nFs = [L(f) for f in functions]\nFs\n```\n\n\n```python\nF = ((s + 1)*(s + 2)* (s + 3))/((s + 4)*(s + 5)*(s + 6))\ndisplay(F)\nF.apart(s)\n```\n\n\n```python\ndisplay(invL(F))\ninvL(F.apart(s))\n```\n\n\n```python\ns = sympy.Symbol('s')\nt = sympy.Symbol('t', real=True)\ntau = sympy.Symbol('tau', real=True, positive=True)\nG = K/(tau*s + 1)\nG\n```\n\n\n```python\nu = 1/s\nstepresponse = invL(G*u)\nstepresponse\n```\n\n\n```python\nu = 1/s**2\nrampresponse = invL(G*u)\nrampresponse\n```\n\n\n```python\ng = sympy.inverse_laplace_transform(G.subs({tau: 1,K:10}), s, t)\ndisplay(g)\nsympy.plot(g, (t, -1, 10), ylim=(0, 1.1))\n```\n\n\n```python\ns = sympy.symbols(\"s\")\nw = sympy.symbols(\"omega\", real=True)\ndisplay(G)\nGw = G.subs({tau : 1,K:10, s : sympy.I*w})\nGw\n```\n\n\n```python\nRA = abs(Gw)\nphi = sympy.arg(Gw)\nsympy.plot(RA, (w, 0.01, 100), xscale=\"log\", yscale=\"log\", ylabel=\"RA\", xlabel=\"$\\omega$\")\nsympy.plot(phi*180/sympy.pi, (w, 0.01, 100), xscale=\"log\", xlabel=\"$\\omega$\", ylabel=\"$\\phi$\")\n```\n\n\n```python\nfrom ipywidgets import interact\nevalfimpulse = sympy.lambdify((K, tau, t), impulseresponse, 'numpy')\nevalfstep = sympy.lambdify((K, tau, t), stepresponse, 'numpy')\nevalframp = sympy.lambdify((K, tau, t), rampresponse, 'numpy')\n```\n\n\n```python\nts = numpy.linspace(0, 10)\n\ndef firstorder(tau_in, K_in):\n plt.figure(figsize=(12, 6))\n ax_impulse = plt.subplot2grid((2, 2), (0, 0))\n ax_step = plt.subplot2grid((2, 2), (1, 0))\n ax_complex = plt.subplot2grid((2, 2), (0, 1), rowspan=2)\n\n ax_impulse.plot(ts, evalfimpulse(K_in, tau_in, ts))\n ax_impulse.set_title('Impulse response')\n ax_impulse.set_ylim(0, 10)\n\n tau_height = 1 - numpy.exp(-1)\n ax_step.set_title('Step response')\n ax_step.plot(ts, evalfstep(K_in, tau_in, ts))\n ax_step.axhline(K_in)\n ax_step.plot([0, tau_in, tau_in], [K_in*tau_height]*2 + [0], alpha=0.4)\n ax_step.text(0, K_in, '$K=${}'.format(K_in))\n ax_step.text(0, K_in*tau_height, '{:.3}$K$'.format(tau_height))\n ax_step.text(tau_in, 0, r'$\\tau={:.3}$'.format(tau_in))\n 
ax_step.set_ylim(0, 10)\n\n ax_complex.set_title('Poles plot')\n ax_complex.scatter(-1/tau_in, 0, marker='x', s=30)\n ax_complex.axhline(0, color='black')\n ax_complex.axvline(0, color='black')\n ax_complex.axis([-10, 10, -10, 10])\n```\n\n\n```python\nfirstorder(1., 10.)\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "9885c761b502ac6997d926555a449d0d3855e9d5", "size": 127279, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "python/0/Laplace.ipynb", "max_stars_repo_name": "WayraLHD/SRA21", "max_stars_repo_head_hexsha": "1b0447bf925678b8065c28b2767906d1daff2023", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-09-29T16:38:53.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-29T16:38:53.000Z", "max_issues_repo_path": "python/0/Laplace.ipynb", "max_issues_repo_name": "WayraLHD/SRA21", "max_issues_repo_head_hexsha": "1b0447bf925678b8065c28b2767906d1daff2023", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-08-10T08:24:57.000Z", "max_issues_repo_issues_event_max_datetime": "2021-08-10T08:24:57.000Z", "max_forks_repo_path": "python/0/Laplace.ipynb", "max_forks_repo_name": "WayraLHD/SRA21", "max_forks_repo_head_hexsha": "1b0447bf925678b8065c28b2767906d1daff2023", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 179.7725988701, "max_line_length": 26204, "alphanum_fraction": 0.8869020027, "converted": true, "num_tokens": 1324, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.939024825960626, "lm_q2_score": 0.8840392756357326, "lm_q1q2_score": 0.8301348269462017}} {"text": "# What is probability?\n1. Frequentist: Probabilities represent long run frequencies of events\n2. 
Bayesian: \n   - Probability is used to quantify our uncertainty about something\n   - It can be used to model our uncertainty about events that do not have long-term frequencies\n\n## Discrete random variables\n\nConsider an event $A \\in \\mathcal{X}$, like \"it will rain tomorrow\".\n - **Probability mass function (PMF)**: $p(A)$ denotes the probability that event $A$ is true\n     - $0 \\leq p(A) \\leq 1$\n     - $\\sum_{a \\in \\mathcal{X}}p(a) = 1$\n\n\n### Joint Probability\nGiven two events $A, B \\in \\mathcal{X}$, the probability of the joint event $A$ and $B$ is:\n\\begin{align}\np(A, B) = p(A \\cap B) = p(A | B)p(B) = p(B | A)p(A)\n\\end{align}\n\n### Marginal distribution\n\nIf we regard the distribution of each single variable as a probability distribution, then a marginal distribution is the distribution obtained by summing the joint probability over the other variables.\n\nGiven a joint distribution, the marginal distribution is defined as:\n\\begin{equation}\np(A) = \\sum_{b \\in \\mathcal{X}}p(A, B=b) = \\sum_{b \\in \\mathcal{X}}p(A|B=b)p(B=b)\n\\end{equation}\n\nFor example, suppose $p(B), p(C), p(A|B), p(A|C)$ are known and we want to find $p(A)$:\n\\begin{equation}\np(A) = \\sum_{b \\in [B, C]}p(A, B=b) = p(A, B) + p(A, C) = p(A|B)p(B) + p(A|C)p(C)\n\\end{equation}\n\n### Conditional Probability\n\nThe conditional probability is the probability that event A occurs given that another event B has already occurred. It is written $P(A|B)$ and read as \"the probability of A given B\".\n\nWe define the conditional probability of an event A, given that event B is true, as:\n\\begin{equation}\np(A|B) = \\frac{p(A, B)}{p(B)} \\; \\textrm{if} \\; p(B) \\gt 0 \n\\end{equation}\n\n### Bayes Rule\n\nCombining the definitions of conditional and marginal probability, Bayes' rule is given by: \n\\begin{equation}\n\\underbrace{p(X=x|Y=y) = \\frac{p(X=x, Y=y)}{p(Y=y)}}_{\\textrm{conditional probability}} \\; \\textrm{and} \\; \\underbrace{p(Y=y) = \\sum_{x'}p(X=x')p(Y=y | X=x')}_{\\textrm{marginal probability}} \\\\ \\Downarrow \\\\\np(X=x|Y=y) = \\frac{p(X=x, Y=y)}{\\sum_{x'}p(X=x')p(Y=y | X=x')}\n\\end{equation}\n\n### A medical diagnosis problem using Bayes Rule\n\n**Example**: Suppose you are a woman in your 40s, and you decide to have a medical test ($X = 1$) for breast cancer ($Y \\in \\{0, 1\\}$), called a mammogram. If the test is positive, what is the probability that you have cancer, $p(Y=1 | X=1)$? \n\n\\begin{proof}\nAssume that $p(X=1|Y=1) = 0.8, p(X=1|Y=0) = 0.1, p(Y=1) = 0.004$. Then\n\\begin{equation}\n\\begin{aligned}\np(Y=1|X=1) &= \\frac{p(X=1, Y=1)}{p(X=1)} \\\\\n&= \\frac{p(X=1|Y=1)p(Y=1)}{p(X=1, Y=1) + p(X=1, Y=0)} \\\\\n&= \\frac{p(X=1|Y=1)p(Y=1)}{p(X=1|Y=1)p(Y=1) + p(X=1|Y=0)p(Y=0)} \\\\\n&= \\frac{0.8 \\times 0.004}{0.8 \\times 0.004 + 0.1 \\times 0.996} = 0.031 \n\\end{aligned}\n\\end{equation}\n\\end{proof}
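\n\nAs a quick check (a small added sketch in plain Python; the numbers are exactly the ones assumed above), we can plug these values straight into Bayes' rule:\n\n\n```python\n# assumed numbers from the example above\np_x1_given_y1 = 0.8    # test positive given cancer (sensitivity)\np_x1_given_y0 = 0.1    # test positive given no cancer (false positive rate)\np_y1 = 0.004           # prior probability of cancer\n\n# marginal probability of a positive test\np_x1 = p_x1_given_y1 * p_y1 + p_x1_given_y0 * (1 - p_y1)\n\n# posterior probability of cancer given a positive test\np_y1_given_x1 = p_x1_given_y1 * p_y1 / p_x1\nprint(round(p_y1_given_x1, 3))   # approximately 0.031\n```\n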
\n\n\\begin{proof}\nAssume that $p(X=1|Y=1) = 0.8, p(X=1|Y=0) = 0.1, p(Y=1) = 0.004$\n\\begin{equation}\n\\begin{aligned}\np(Y=1|X=1) &= \\frac{p(X=1, Y=1)}{p(X=1)} \\\\\n&= \\frac{p(X=1|Y=1)p(Y=1)}{p(X=1, Y=1) + p(X=1, Y=0)} \\\\\n&= \\frac{p(X=1|Y=1)p(Y=1)}{p(X=1|Y=1)p(Y=1) + p(X=1|Y=0)p(Y=0)} \\\\\n&= \\frac{0.8 * 0.004}{0.8 * 0.0004 + 0.1*0.0996} = 0.031 \n\\end{aligned}\n\\end{equation}\n\\end{proof}\n\n### Conditional Independence\n\n- If random variables $X$ and $Y$ are said to be independent if $p(X, Y) = p(X)p(Y)$, alternatively, $p(X|Y) = p(X), or p(Y|X) = p(Y)$, which is denoted by $X \\perp Y$\n- Conditional independent: $X \\perp Y | Z \\Leftrightarrow p(X, Y | Z) = p(X|Z)p(Y|Z)$ \n\n## Continuous Random Variable\n\n- Probability density $p(x) (\\geq 0)$\n- The larger $p(x)$ for a variable $x$, the more likely that a variable around $x$ will be generated by this distribution. \n- The probability that $x$ falls into an interval $[a, b]$ can be computed as $\\int_{a}^{b} p(x) \\,dx$\n- The likelihood that any variable drawn from $p(x)$ must fall between postive and negative infinity $\\int_{-\\infty}^{\\infty}p(x)dx = 1$\n\n## Summary stastistic Property\n\nSuppose that we are dealing with a random variables X. The distribution itself can be hard to interpret. It is often useful to be able to summarize the behavior of a random variable concisely. Numbers that help us capture the behavior of a random variable are called summary statistics. The most commonly encountered ones are the mean, the variance, and the standard deviation.\n\n### Mean (a.k.a, expected value)\n\n- Discrete Random Variable: $\\mu = \\mathbb{E}[X] = \\sum_{\\mathcal{x} \\in X} x p(x)$. then the mean is given by the weighted average: sum the values times the probability that the random variable takes on that value\n- Continuous Random Variable: $\\mu = \\mathbb{E}[X] = \\int_{\\mathcal{x}}xp(x)dx$ \n\nBecause they are helpful, let us summarize a few properties.\n- $\\mu_{aX+b} = a\\mu_{X} + b$\n- $\\mu_{X + Y} = \\mu_X + \\mu_Y$\n\nMeans are useful for understanding the average behavior of a random variable, however the mean is not sufficient to even have a full intuitive understanding.\n\n### Variance (\u65b9\u5dee, \"spread\" of a distribution)\n\nThis is a quantitative measure of how far a random variable deviates from the mean\n\\begin{equation}\n\\begin{aligned}\n\\textrm{var}[X] &= \\mathbb{E}[(X-\\mu)^2] = \\mathbb{E}[X^2 - 2\\mu X + \\mu^2] \\\\\n&= \\mathbb{E}[X^2] - \\mathbb{E}[2\\mu X] + \\mathbb{E}[\\mu^2] \\\\\n&= \\mathbb{E}[X^2] - 2\\mu \\mathbb{E}[X] + \\mathbb{E}[\\mu^2] \\\\\n&= \\mathbb{E}[X^2] - 2\\mu^2 + \\mu^2 = \\mathbb{E}[X^2] - \\mu^2\n\\end{aligned}\n\\end{equation}\n\nAlternatively, proof as follow:\n\\begin{proof}\n\\begin{equation}\n\\begin{aligned}\n\\textrm{var}[X] &= \\mathbb{E}[(X-\\mu)^2] = \\int (x-\\mu)^2p(x)dx \\\\\n&= \\int x^2p(x) dx - \\int 2\\mu x p(x) dx + \\int \\mu^2 p(x)dx \\\\\n&= \\int x^2p(x) dx - 2\\mu \\int x p(x) dx + \\mu^2 \\int p(x)dx \\\\\n&= \\mathbb{E}[X^2] - 2\\mu \\mathbb{E}[X] + \\mu^2 * 1 \\\\\n&= \\mathbb{E}[X^2] - \\mu^2\n\\end{aligned}\n\\end{equation}\n\\end{proof}\n\n\nWe will list a few properties of variance below:\n- $Var[X] \\geq 0$; If X is constant, then $Var[X] = 0$\n- $Var[aX+b] = a^2Var[X]$\n- if X is independent Y, then $Var[X+Y] = Var[X] + Var[Y]$\n\n### Standard deviation\n\n- $\\delta_X = \\textrm{std}[X] = \\sqrt{\\textrm{var}[X]} $\n\nThe properties we had for the variance can be restated for the standard deviation.\n- $\\delta_X 
\n\n### Standard deviation\n\n- $\\delta_X = \\textrm{std}[X] = \\sqrt{\\textrm{var}[X]}$\n\nThe properties we had for the variance can be restated for the standard deviation.\n- $\\delta_X \\geq 0$\n- $\\delta_{aX + b} = |a|\\delta_X$\n- if X is independent of Y, then $\\delta_{X + Y} = \\sqrt{\\delta^2_X + \\delta^2_Y}$\n\n### Covariance\n\nFor two jointly distributed real-valued random variables X and Y with finite second moments, the covariance is defined as the expected value (or mean) of the product of their deviations from their individual expected values:\n\\begin{equation}\n\\begin{aligned}\n\\delta_{XY} = \\textrm{cov}(X, Y) &= \\mathbb{E}[(X-\\mathbb{E}(X))(Y-\\mathbb{E}(Y))] = \\sum_{i, j}(x_i - \\mu_X)(y_j - \\mu_Y)P_{ij}\\\\\n&= \\mathbb{E}[(X-\\mu_X)(Y- \\mu_Y)] \\\\\n&= \\mathbb{E}[XY-X\\mu_Y - Y\\mu_X + \\mu_X\\mu_Y] \\\\\n&= \\mathbb{E}[XY]-\\mathbb{E}[X\\mu_Y] - \\mathbb{E}[Y\\mu_X] + \\mathbb{E}[\\mu_X\\mu_Y] \\\\\n&= \\mathbb{E}[XY]-\\mathbb{E}[X]\\mu_Y - \\mathbb{E}[Y]\\mu_X + \\mu_X\\mu_Y \\\\\n&= \\mathbb{E}[XY]-\\mu_X\\mu_Y - \\mu_Y\\mu_X + \\mu_X\\mu_Y \\\\\n&= \\mathbb{E}[XY]-\\mu_X\\mu_Y \\\\\n&= \\mathbb{E}[XY]-\\mathbb{E}[X]\\mathbb{E}[Y]\n\\end{aligned}\\end{equation}\n\nIf X is a d-dimensional random vector, its covariance matrix is:\n\\begin{equation}\n\\begin{aligned}\n\\textrm{cov}[X] &= \\mathbb{E}[(X-\\mathbb{E}[X])(X-\\mathbb{E}[X])^T] \\\\\n&= \\begin{bmatrix}\n\\color{red}{\\textrm{cov}[X_1, X_1]} & \\textrm{cov}[X_1, X_2] & \\cdots &\\textrm{cov}[X_1, X_d]\\\\\n\\textrm{cov}[X_2, X_1] & \\color{red}{\\textrm{cov}[X_2, X_2]} & \\cdots &\\textrm{cov}[X_2, X_d]\\\\\n\\vdots & \\vdots & \\ddots & \\vdots \\\\\n\\textrm{cov}[X_d, X_1]& \\textrm{cov}[X_d, X_2] & \\cdots &\\color{red}{\\textrm{cov}[X_d, X_d]}\\\\\n\\end{bmatrix} = \\begin{bmatrix}\n\\color{red}{\\textrm{var}[X_1]} & \\textrm{cov}[X_1, X_2] & \\cdots &\\textrm{cov}[X_1, X_d]\\\\\n\\textrm{cov}[X_2, X_1] & \\color{red}{\\textrm{var}[X_2]} & \\cdots &\\textrm{cov}[X_2, X_d]\\\\\n\\vdots & \\vdots & \\ddots & \\vdots \\\\\n\\textrm{cov}[X_d, X_1]& \\textrm{cov}[X_d, X_2] & \\cdots &\\color{red}{\\textrm{var}[X_d]}\\\\\n\\end{bmatrix}\n\\end{aligned}\n\\end{equation}\n\nLet us list a few properties of covariances:\n\n- $Cov[X, X] = Var[X]$\n- $Cov[aX + b, Y] = Cov[X, aY + b] = a Cov[X, Y]$\n- if X is independent of Y, then $Cov[X, Y] = 0$.\n- $Var[X + Y] = Var[X] + 2Cov[X, Y] + Var[Y]$\n\n### Correlation Coefficient\n\n- The (Pearson) correlation coefficient between X and Y measures the linear relationship between them, and is defined by: \n\\begin{equation}\n-1 \\leq \\rho_{XY} = \\textrm{corr}[X, Y] = \\frac{\\textrm{cov}[X, Y]}{\\sqrt{\\textrm{var}[X]\\textrm{var}[Y]}} = \\frac{Cov[X, Y]}{\\delta_X\\delta_Y}\\leq 1\n\\end{equation}
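\n\nAs a small added numerical illustration (NumPy, with arbitrary synthetic data), we can check the covariance and correlation definitions against `np.cov` and `np.corrcoef`:\n\n\n```python\nimport numpy as np\n\nrng = np.random.default_rng(0)\nx = rng.normal(size=10_000)\ny = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=10_000)   # roughly linear in x\n\ncov_manual = np.mean(x * y) - x.mean() * y.mean()        # E[XY] - E[X]E[Y]\ncorr_manual = cov_manual / (x.std() * y.std())\n\nprint(cov_manual, np.cov(x, y, bias=True)[0, 1])          # should agree\nprint(corr_manual, np.corrcoef(x, y)[0, 1])                # close to +1\n```\n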
\n\nConsider $X$ as any random variable, and $Y = aX + b$ as any linear deterministic function of $X$. Then, one can compute that\n$$\\delta_Y = \\delta_{aX + b} = |a|\\delta_X$$\n$$Cov[X, Y] = Cov[X, aX+b] = a Cov[X, X] = a Var[X]$$\nand thus that:\n$$\\rho_{XY} = \\frac{a Var[X]}{|a|\\delta^2_X} = \\frac{a}{|a|} = \\mathrm{sign}(a)$$\nThus we see that the correlation is $+1$ for any $a > 0$, and $-1$ for any $a < 0$, illustrating that **correlation measures the degree and direction in which two random variables are related, not the scale on which the variation takes place.**\n\n\nLet us list a few properties of the correlation below.\n- $\\rho_{XX} = 1$\n- For all real $a, b$ and random variables $X, Y$: $\\rho[aX+b, Y] = \\rho[X, aY+b] = \\rho[X, Y]$\n- If X and Y are independent with non-zero variance then $\\rho[X, Y] = 0$\n- Indeed, if we think of norms as being related to standard deviations, and correlations as being cosines of angles, much of the intuition we have from geometry can be applied to thinking about random variables.\n\n## Common distributions (TODO)\n\n### Bernoulli Distribution (Discrete)\n- Toss a coin only once\n- Let X be a binary variable $X \\in \\{0, 1\\}$\n\n- The probability that the toss shows up as \"heads\" ($x=1$) is $\\theta$:\n\\begin{equation}\n\\textrm{Ber}(x|\\theta) = \\left\\{\n\t\\begin{array}{ll}\n\t\t\\theta & \\mbox{if } x = 1 \\\\\n\t\t1 - \\theta & \\mbox{if } x = 0\n\t\\end{array}\n\\right.\n\\end{equation}\n\n### Binomial Distribution (Discrete)\n\n\n### Multinomial Distribution (Discrete)\n\n### Gaussian Distribution (Continuous)\n\n### Laplace Distribution (Continuous)\n\n### Beta Distribution (Continuous)\n\n# Bayesian Theory\n\nThe so-called Bayesian method originates from an essay Bayes wrote during his lifetime to solve an \"inverse probability\" problem; the essay was only published after his death by one of his friends. Before Bayes wrote it, people already knew how to compute \"forward probabilities\", such as: \"suppose a bag contains N white balls and M black balls; if you reach in and draw one, what is the probability that it is black?\" A natural question runs the other way: \"if we do not know in advance the proportion of black and white balls in the bag, but instead draw one (or several) balls with our eyes closed and observe their colours, what can we then infer about the proportion of black and white balls inside?\" This is the so-called inverse probability problem.\n\nBayesian reasoning is one of the core methods of machine learning. The deeper reason is that the real world itself is uncertain and human observation is limited: what we observe day to day is only the surface outcome of things. Continuing the ball-drawing analogy, we usually only get to know the colour of the balls drawn out, and cannot directly see what is actually inside the bag. At that point we need to supply a hypothesis. Hypotheses are of course uncertain (there may be finitely many or infinitely many of them), and in order to decide which hypothesis is correct we need to do two things: 1. compute how plausible each different guess is; 2. work out which guess is the most credible. The first is computing the posterior probability of a particular guess (or, for a continuous hypothesis space, its probability density function). The second is so-called model comparison, and model comparison that ignores the prior probability is simply the maximum likelihood method.
\n\n## Generative Classifier\n\nGiven a training dataset $X$, how do we decide which class it belongs to?\n\nThe general approach is to compute, for the given dataset $X$, the probability of each class; the class with the largest probability is the most likely one. Here $\\theta$ denotes the parameters of the model.\n\nGiven a dataset $X$, the probability of $(Y=c)$ for this dataset is defined using the class conditional density ($p(X|Y=c)$) and the class prior $p(Y=c)$:\n\\begin{equation}\n\\label{eq:baysian_rule}\n\\begin{aligned}\np(Y=c|X, \\theta) &= \\frac{p(X, Y=c, \\theta)}{p(X, \\theta)} \\\\\n& = \\frac{p(X|Y=c, \\theta)p(Y=c, \\theta)}{\\sum_{c'}p(Y=c', \\theta)p(X|Y=c', \\theta)} \\\\\n&\\propto p(X|Y=c, \\theta)p(Y=c, \\theta)\n\\end{aligned}\n\\end{equation}\n- $p(Y=c | X, \\theta)$: posterior probability\n- $f(\\theta) = p(X|Y=c, \\theta)$: likelihood probability; we also call this function the likelihood function\n- $p(Y=c, \\theta)$: prior probability\n- $p(X, \\theta)$: normalizing factor (the evidence)\n\n**Solution:** the posterior equals the likelihood times the prior, up to a constant. \n\\begin{equation}\n\\begin{aligned}\n\\textrm{Posterior} &= \\frac{\\textrm{prior} \\times \\textrm{likelihood}}{\\textrm{evidence}} \\\\\n&\\propto \\textrm{prior} \\times \\textrm{likelihood}\n\\end{aligned}\n\\end{equation}\n\n## Example: Number Game (from Josh Tenenbaum's PhD thesis)\n\nInferring abstract patterns from a sequence of integers.\n\n**Problem:** There is a group of hypotheses $\\mathcal{H}$: {'Prime number', 'a number between 1 and 10', $\\ldots$}. 
In this task, a set of positive examples (a training dataset) $D = \\{x_1, x_2, \\ldots, x_n\\}$ is drawn from a reasonable arithmetical concept $h \\in \\mathcal{H}$. Finally, we ask whether a new test case $x$ belongs to $h$ (classifying $x$).\n- **Posterior predictive distribution**: what is the probability that $x \\in h$ given the dataset $D$, i.e. $p(x \\in h |D, \\theta)$?\n\\begin{equation}\n\\begin{aligned}\np(x \\in h | D, \\theta) & = \\frac{p(x \\in h, D, \\theta)}{p(D, \\theta)} \\\\\n&= \\frac{p(D|x \\in h, \\theta)p(x \\in h, \\theta)}{p(D, \\theta)} \\\\\n& = \\frac{p(D|x \\in h, \\theta)p(x \\in h, \\theta)}{\\sum_{h' \\in \\mathcal{H}}p(D | h', \\theta)p(h', \\theta)} \\\\\n&\\propto p(D|x \\in h, \\theta)p(x \\in h, \\theta)\n\\end{aligned}\n\\end{equation}\n\n**Example**\n- For simplicity, assume all numbers are integers between 1 and 100\n- $D = \\{2, 8, 16, 64\\}$, whose elements are sampled IID (independent and identically distributed)\n\n**Solution**\n\n1. **Hypothesis space of concepts $\\mathcal{H}$**\n\nUsually, the model favors the simplest hypothesis consistent with the data (Occam's razor). In this case, we choose $\\mathcal{H} = \\{h_1, h_2, h_3\\}$:\n- \"powers of two\", that is $h_1 = h_{two} = \\{2, 4, 8, 16, 32, 64\\}$, where $|h_{1}| = 6$ \n- \"even numbers\", that is $h_2 = h_{even} = \\{2, 4, 6, \\ldots, 100\\}$, where $|h_{2}|=50$\n- \"powers of two except 32\", that is $h_3 = h_{two-32} = \\{2, 4, 8, 16, 64\\}$, where $|h_{3}| = 5$ \n\n2. **Prior p(h)**\n\nUsually, the prior probability captures background knowledge, domain knowledge and pre-existing biases. In our case, it looks like $h_{two-32}$ is a much better fit than $h_{two}$; however, the former seems \"conceptually unnatural\". Therefore, we can capture this intuition by assigning lower prior probability to unnatural concepts. \n\\[ p(\\mathcal{H}) = \\begin{bmatrix} p(h_1) = 0.19, \\\\ p(h_2) = 0.8, \\\\ p(h_3) = 0.01 \\end{bmatrix}\\]\n\n\n3. **Likelihood p(D|h)**\n\nIn this case, we define the likelihood function (the statistical information in the examples) as follows:\n\\[ p(D | h) = [\\frac{1}{size(h)}]^n = [\\frac{1}{|h|}]^n \\; \\textrm{if } x_1, \\ldots, x_n \\in h\\] \nwhere $n$ is the total number of elements in $D$. Smaller hypotheses receive greater likelihood, and exponentially more so as $n$ increases.\n\nTherefore, we can compute the likelihood for $h_1$: \n\\begin{equation}\n\\begin{aligned}\np(D|h_1) &= p(x_1|h_1) \\times p(x_2|h_1) \\times p(x_3|h_1) \\times p(x_4|h_1) \\;\\;\\; (\\textrm{data from IID}) \\\\\n&= \\frac{1}{6} \\times \\frac{1}{6} \\times \\frac{1}{6} \\times \\frac{1}{6} \\\\\n&= [\\frac{1}{|h_1|}]^4 \\;\\;\\; (|h_1| = 6) \\\\\n&= [\\frac{1}{6}]^4\n\\end{aligned}\n\\end{equation}\nSimilarly, we can compute the likelihoods for $h_2$ and $h_3$:\n\\[p(D|h_2) = [\\frac{1}{|h_2|}]^4 = [\\frac{1}{50}]^4\\]\n\\[p(D|h_3) = [\\frac{1}{|h_3|}]^4 = [\\frac{1}{5}]^4\\]\n\n4. **Posterior p(h|D)**\n\nThe posterior is the prior times the likelihood, normalized over all hypotheses; for $h_1$, for example:\n\\begin{equation}\n\\begin{aligned}\np(h_1 | D) & = \\frac{p(h_1, D)}{p(D)} \\\\\n&= \\frac{p(D|h_1)p(h_1)}{p(D)} \\\\\n& = \\frac{p(D|h_1)p(h_1)}{\\sum_{h' \\in \\mathcal{H}}p(D|h')p(h')} \\\\\n&\\propto p(D|h_1)p(h_1)\n\\end{aligned}\n\\end{equation}\n
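\n\nThe remaining arithmetic is mechanical, so here is a small added sketch (plain Python, using exactly the priors and hypothesis sizes assumed above) that combines prior and likelihood and normalizes to get the posterior over the three hypotheses; with these numbers the \"powers of two\" hypothesis ends up dominating:\n\n\n```python\npriors = {'h1_powers_of_two': 0.19, 'h2_even': 0.8, 'h3_powers_of_two_minus_32': 0.01}\nsizes = {'h1_powers_of_two': 6, 'h2_even': 50, 'h3_powers_of_two_minus_32': 5}\nn = 4  # number of observed examples in D = {2, 8, 16, 64}\n\n# unnormalized posterior: prior * likelihood = p(h) * (1/|h|)^n\nunnorm = {h: priors[h] * (1.0 / sizes[h])**n for h in priors}\nevidence = sum(unnorm.values())\nposterior = {h: unnorm[h] / evidence for h in unnorm}\nprint(posterior)\n```\n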
\n\n## Approaches to Estimate Parameters from IID data\n\nUsually, the posterior is simply the likelihood times the prior, and then normalized:\n\\begin{equation}\n\\begin{aligned}\np(h|D, \\theta) &= \\frac{p(D, h, \\theta)}{p(D, \\theta)} \\\\\n& = \\frac{p(D|h, \\theta)p(h, \\theta)}{\\sum_{h'}p(Y=h', \\theta)p(D|h', \\theta)} \\\\\n&\\propto p(D|h, \\theta)p(h, \\theta)\n\\end{aligned}\n\\end{equation}\n\nMaximum Likelihood Estimation (MLE) and Maximum A Posteriori (MAP) estimation are both methods for estimating some variable in the setting of probability distributions or graphical models. They are similar in that they compute a single estimate instead of a full distribution.\n\n### Maximum Likelihood Estimation (MLE)\nMLE is a method that anyone who has dabbled in machine learning will already be familiar with. It is the idea that, when working with a probabilistic model with unknown parameters, the parameters which make the observed data most probable are the most likely ones.\n\n\nSuppose that we have a model with parameters $\\theta$ and a collection of data examples $X$. For concreteness, we can imagine that $\\theta$ is a single value representing the probability that a coin comes up heads when flipped, and $X$ is a sequence of independent coin flips. We will look at this example in depth later.\n\nIf we want to find the most likely value for the parameters of our model, that means we want to find:\n\\[ \\underset{\\theta}{\\text{arg max}}\\; P(\\theta | X) \\]\n\nBy Bayes' rule, this is the same thing as\n\\[ \\underset{\\theta}{\\text{arg max}}\\; P(\\theta | X) = \\underset{\\theta}{\\text{arg max}}\\; \\frac{P(X|\\theta)P(\\theta)}{P(X)} \\] where\n1. The evidence $P(X)$ is a parameter-agnostic probability of generating the data; it does not depend on $\\theta$ at all, so it can be dropped without changing the best choice of $\\theta$.\n2. The prior $P(\\theta)$ is assumed to be flat (an uninformative prior), so it does not favor any particular value of $\\theta$ and can also be dropped. \n3. The likelihood $P(X | \\theta)$ is the probability of the data given the parameters.\n\nGiven the above two assumptions, our application of Bayes' rule shows that our best choice of $\\theta$ is the maximum likelihood estimate for $\\theta$:\n\\[ \\hat{\\theta} = \\underset{\\theta}{\\text{arg max}}\\; P(X | \\theta)\\]\n\nSometimes, we even use it without knowing it. For example, when fitting a Gaussian to our dataset, we immediately take the sample mean and sample variance and use them as the parameters of our Gaussian. This is MLE: if we take the derivative of the Gaussian likelihood with respect to the mean and variance and maximize it (i.e. set the derivative to zero), what we get are exactly the formulas for the sample mean and sample variance. As another example, most of the optimization in machine learning and deep learning (neural nets, etc.) can be interpreted as MLE.\n
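\n\nTo make the Gaussian remark concrete, here is a small added sketch (NumPy, with synthetic data whose mean and spread are chosen arbitrarily) showing that the sample mean and the (biased) sample variance are essentially the values that maximize the Gaussian log-likelihood:\n\n\n```python\nimport numpy as np\n\nrng = np.random.default_rng(1)\ndata = rng.normal(loc=2.0, scale=1.5, size=2_000)   # assumed synthetic dataset\n\nmu_mle, var_mle = data.mean(), data.var()            # closed-form MLE (biased variance)\n\ndef gaussian_log_lik(mu, var):\n    return np.sum(-0.5 * np.log(2 * np.pi * var) - (data - mu)**2 / (2 * var))\n\n# brute-force search over a grid to confirm the closed form\nmus = np.linspace(1.5, 2.5, 101)\nvariances = np.linspace(1.5, 3.0, 101)\nll = np.array([[gaussian_log_lik(m, v) for v in variances] for m in mus])\ni, j = np.unravel_index(ll.argmax(), ll.shape)\nprint((mu_mle, var_mle), (mus[i], variances[j]))     # the two pairs should be close\n```\n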
Speaking in more abstract terms, suppose we have a likelihood function $P(X|\\theta)$. Then, the MLE for $\\theta$, the parameter we want to infer, is:\n\\begin{equation}\n\\begin{aligned}\n\\theta_{MLE} &= \\underset{\\theta}{\\textrm{argmax}} P(X|\\theta) \\\\\n&= \\underset{\\theta}{\\textrm{argmax}}\\prod_{i} P(x_i|\\theta) \\;\\;\\; (x_i \\in X \\;\\text{are IID samples})\n\\end{aligned}\n\\end{equation}\n\nSince a product of numbers between 0 and 1 ($ 0 \\leq P(x_i|\\theta) \\leq 1$) approaches 0 as the number of factors grows, it quickly becomes impractical to compute directly because of numerical underflow. Hence, we instead work in **log** space: the logarithm is monotonically increasing, so maximizing a function is equivalent to maximizing the log of that function.\n\n\\begin{equation}\n\\begin{aligned}\n\\theta_{MLE} &= \\underset{\\theta}{\\textrm{argmax}} \\log P(X|\\theta) \\\\\n&= \\underset{\\theta}{\\textrm{argmax}} \\log \\prod_{i} P(x_i|\\theta) \\;\\;\\; (x_i \\in X \\;\\text{are IID samples}) \\\\\n&= \\underset{\\theta}{\\textrm{argmax}} \\sum_{i} \\log P(x_i|\\theta)\n\\end{aligned}\n\\end{equation}\nTo use this framework, we just need to derive the log-likelihood of our model and then maximize it with respect to $\\theta$ using our favorite optimization algorithm, such as gradient descent.\n\n### Maximum A Posteriori (MAP)\n\nMAP usually comes up in a **Bayesian** setting because, as the name suggests, it works on the posterior distribution, not only the likelihood. When we have enough data, the posterior becomes peaked on a single concept.\n\n\\begin{equation}\n\\begin{aligned}\n\\hat{h}^{MAP} &= \\underset{h \\in \\mathcal{H}}{\\textrm{argmax}} [p(h|D, \\theta)] \\\\\n&= \\underset{h \\in \\mathcal{H}}{\\textrm{argmax}} [\\frac{p(D|h, \\theta)p(h, \\theta)}{\\sum_{h'}p(Y=h', \\theta)p(D|h', \\theta)}] \\\\\n&= \\underset{h \\in \\mathcal{H}}{\\textrm{argmax}}[\\log(p(D|h, \\theta)p(h, \\theta))] \\\\\n&= \\underset{h \\in \\mathcal{H}}{\\textrm{argmax}}[\\log p(D|h, \\theta) + \\log p(h, \\theta)]\n\\end{aligned}\n\\end{equation}\n\nAmong all the hypotheses in $\\mathcal{H}$, the one with the largest posterior probability is the concept that the current dataset most likely belongs to.\n\n\nRecall that, with Bayes' rule, we can write the posterior as a product of likelihood and prior:\n\\begin{equation}\n\\begin{aligned}\nP(\\theta|X) &= \\frac{P(X|\\theta)P(\\theta)}{P(X)} \\\\\n&\\propto P(X|\\theta)P(\\theta)\n\\end{aligned}\n\\end{equation}\nWe are ignoring the **normalizing constant** because we are strictly speaking about optimization here, so proportionality is sufficient. If we replace the likelihood in the MLE formula above with the posterior, we get:\n\n\\begin{equation}\n\\begin{aligned}\n\\theta_{MAP} &= \\underset{\\theta}{\\textrm{argmax}} \\log P(X|\\theta)P(\\theta) \\\\\n&= \\underset{\\theta}{\\textrm{argmax}} [\\log P(X|\\theta) + \\log P(\\theta)] \\\\\n&= \\underset{\\theta}{\\textrm{argmax}} [\\log \\prod_{i}{P(x_i | \\theta)} + \\log P(\\theta)] \\;\\;\\; (x_i \\in X \\;\\text{are IID samples})\\\\\n&= \\underset{\\theta}{\\textrm{argmax}} \\left[\\sum_{i} \\log P(x_i | \\theta) + \\log P(\\theta)\\right]\n\\end{aligned}\n\\end{equation}
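\n\nBefore comparing the two estimators below, here is a small grid-based sketch of this MAP formula (the coin-flip counts and the Beta(2, 2) prior are assumptions made purely for illustration; the prior's normalizing constant is dropped since it does not affect the argmax):\n\n\n```python\nimport numpy as np\n\n# Toy coin-flip data and an assumed Beta(2, 2) prior, purely for illustration\nn_h, n_t = 3, 7 # 3 heads, 7 tails\nalpha, beta = 2.0, 2.0\nthetas = np.linspace(1e-3, 1 - 1e-3, 10_000)\n\nloglik = n_h * np.log(thetas) + n_t * np.log(1 - thetas)\nlogprior = (alpha - 1) * np.log(thetas) + (beta - 1) * np.log(1 - thetas)\n\ntheta_mle = thetas[np.argmax(loglik)] # argmax of the log-likelihood\ntheta_map = thetas[np.argmax(loglik + logprior)] # argmax of log-likelihood + log-prior\nprint(theta_mle, theta_map) # MAP is pulled toward the prior mean of 0.5\n```\n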
\n\n### Comparison between MLE and MAP\nComparing the MLE and MAP equations, the only thing that differs is the inclusion of the prior $P(\\theta)$ in MAP; otherwise they are identical. What this means is that the likelihood is now weighted by the prior.\n\\begin{equation}\n\\begin{aligned}\n\\theta_{MLE} &= \\underset{\\theta}{\\textrm{argmax}} \\sum_{i} \\log P(x_i | \\theta) \\\\\n\\theta_{MAP} &= \\underset{\\theta}{\\textrm{argmax}} \\left[\\sum_{i} \\log P(x_i | \\theta) + \\log P(\\theta)\\right]\n\\end{aligned}\n\\end{equation}\n\nThere are several options for the prior distribution:\n\n- **Uniform Prior:** We assign equal weight everywhere, to all possible values of $\\theta$. The implication is that the likelihood is only multiplied by a constant; being constant, the prior can be ignored in our MAP equation, as it does not contribute to the maximization. Hence $\\theta_{MLE} = \\theta_{MAP}$\n- **Gaussian Prior:** The prior probability is high near the mean and low in the tails, so it is not constant and genuinely shifts the estimate.\n- Others, like the Beta distribution, etc. \n\nWhat we can conclude, then, **is that MLE is a special case of MAP, where the prior is uniform!**\n\n### Numerical Optimization and the Negative Log-Likelihood\nWe use $\\log$ space in MLE and MAP for the following reasons:\n1. Notice that if we make the assumption that all the data examples are independent, we can no longer practically consider the likelihood itself, as it is a product of many probabilities. For example, if $p(x) = 0.5$ for every example, then the product $(1/2)^{1000000000}$ is far below machine precision. We cannot work with that directly. We can leverage the log-likelihood to overcome this issue. \n\\[ \\log_{10}\\left((1/2)^{1000000000}\\right) = 1000000000\\cdot\\log_{10}(1/2) \\approx -301029995.6...\\]\nThis number fits perfectly within even a single precision 32-bit float. Thus, we should consider the log-likelihood, which is\n\\[ \\log P[X|\\theta]\\]\nSince the function $x \\rightarrow \\log(x)$ is increasing, maximizing the likelihood (i.e. $\\underset{\\theta}{\\text{arg max}}\\, P(X|\\theta)$) is the same thing as maximizing the log-likelihood (i.e. $\\underset{\\theta}{\\text{arg max}}\\, \\log P(X|\\theta)$).\nWe often work with loss functions, where we wish to minimize the loss. We may turn **maximum likelihood** into the **minimization** of a loss by taking $-\\log(P(X|\\theta))$, which is the **negative log-likelihood**. \n\nTo illustrate this, consider the coin flipping problem from before, and pretend that we do not know the closed form solution. We may compute that:\n\\[ -\\log (P(X | \\theta)) = -\\log(\\theta^{n_H}(1-\\theta)^{n_T}) = -(n_H\\log(\\theta)+n_T\\log(1-\\theta)) \\]\n\n\n```python\n# This can be written into code, and freely optimized even for billions of coin flips\n\nimport torch\n# Set up our data\nn_H = 8675309\nn_T = 25624\n# Initialize our parameters\ntheta = torch.tensor(0.5, requires_grad=True)\n\n# Perform gradient descent\nlr = 0.00000000001\nfor iter in range(10):\n loss = -(n_H * torch.log(theta) + n_T * torch.log(1 - theta))\n loss.backward()\n with torch.no_grad():\n theta -= lr * theta.grad\n theta.grad.zero_()\n# Check output\ntheta, n_H / (n_H + n_T)\n```\n\n\n\n\n (tensor(0.5017, requires_grad=True), 0.9970550284664874)\n\n
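\n\nTwo quick numerical checks of the points above (the array sizes and grid resolution are arbitrary choices for the sketch): first, that the raw likelihood product underflows while the sum of logs does not, and second, that a brute-force scan of the same negative log-likelihood used in the snippet above bottoms out at the closed-form answer $n_H/(n_H+n_T)$ (which the short gradient-descent run, with its very small learning rate, has not yet reached).\n\n\n```python\nimport numpy as np\n\n# 1. Underflow: a product of many probabilities vs. the sum of their logs\np = np.full(100_000, 0.5)\nprint(np.prod(p), np.sum(np.log(p))) # 0.0 (underflow) vs. 100000*log(0.5)\n\n# 2. Brute-force scan of the coin-flip negative log-likelihood from above\nn_H, n_T = 8675309, 25624\nthetas = np.linspace(1e-6, 1 - 1e-6, 100_000)\nnll = -(n_H * np.log(thetas) + n_T * np.log(1 - thetas))\nprint(thetas[np.argmin(nll)], n_H / (n_H + n_T)) # both should be about 0.997\n```\n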
2. The second reason we consider the log-likelihood is the simplified application of calculus rules. As discussed above, due to independence assumptions, most probabilities we encounter in machine learning are products of individual probabilities.\n\\[ P(X | \\theta) = p(x_1|\\theta)p(x_2|\\theta)\\ldots p(x_n|\\theta)\\]\nThis means that if we directly apply the product rule to compute a derivative, we get a significant computation, which requires $n(n-1)$ multiplications along with $n-1$ additions. So it is proportional to **quadratic time** in the inputs! For the negative log-likelihood we have instead:\n\\[ -\\log P(X | \\theta) = -(\\log(p(x_1|\\theta)) + \\log(p(x_2|\\theta)) + \\ldots+ \\log(p(x_n|\\theta)))\\]\nwhich then gives\n\\[ -\\frac{\\partial}{\\partial \\theta} \\log(P(X | \\theta)) = -\\left(\\frac{1}{P(x_1 | \\theta)}\\frac{\\partial}{\\partial \\theta} P(x_1 | \\theta) + \\ldots + \\frac{1}{P(x_n | \\theta)}\\frac{\\partial}{\\partial \\theta} P(x_n | \\theta)\\right) \\]\nThis requires only $n$ divisions and $n - 1$ additions, and thus is **linear time** in the inputs.\n3. The third and final reason to consider the negative log-likelihood is the relationship to information theory. This is a rigorous mathematical theory which gives a way to measure the degree of information or randomness in a random variable. The key object of study in that field is the entropy, $H(p) = - \\sum_{i}p_i\\log_2(p_i)$, which measures the randomness of a source. Notice that this is nothing more than the average negative log probability, and thus if we take our negative log-likelihood and divide by the number of data examples, we get a relative of entropy known as **cross-entropy**.\n\n### Example 1: Bernoulli model\n\nWe perform the following steps to estimate the parameters of the model: \n\n1. We observe $N$ IID coin tosses: $D = \\{x_1, x_2, \\ldots, x_n\\} \\;\\text{where}\\; x_i \\in \\{0, 1\\}$. For example, $D = \\{1, 0, 1, \\ldots, 0\\}$.\n2. The model for an instance $x \\in D$ is defined as follows:\n\\begin{equation}\nP(x|\\theta) = \\textrm{Ber}(x|\\theta) = \\left\\{\n\t\\begin{array}{ll}\n\t\t\\theta & \\mbox{if } x = 1 \\\\\n\t\t1 - \\theta & \\mbox{if } x = 0\n\t\\end{array}\n\\right. \\, \\Rightarrow \\, P(x|\\theta) = \\theta^x(1-\\theta)^{1-x}\n\\end{equation}\n3. **MLE:** We need to maximize the objective function, the log-likelihood of the dataset $D$:\n\\begin{equation}\n\\begin{aligned}\nL(\\theta; D) &= \\log P(D|\\theta) \\\\\n&= \\log \\prod_{i}P(x_i|\\theta) = \\log \\prod_{i} \\theta^{x_i}(1-\\theta)^{1-x_i} \\\\\n&= \\log \\theta^{\\sum_{i}x_i}(1-\\theta)^{\\sum_i(1-x_i)} \\\\\n&= \\log \\theta^{n_h}(1-\\theta)^{n_t} \\\\\n&= n_h \\log \\theta + n_t \\log (1 - \\theta) \\\\\n&= n_h \\log \\theta + (N - n_h) \\log (1 - \\theta)\n\\end{aligned}\n\\,\\Rightarrow\\, \\hat{\\theta} = \\underset{\\theta}{\\textrm{argmax}}\\, L(\\theta; D)\n\\end{equation}\n\n4. To find the maximum of this objective function, we set its derivative with respect to $\\theta$ to zero:\n\\begin{equation}\n\\frac{\\partial L(\\theta; D)}{\\partial \\theta} = \\frac{n_h}{\\theta} - \\frac{N-n_h}{1 - \\theta} = 0 \\\\ \\Downarrow \\\\ \\hat{\\theta}_{MLE} = \\frac{n_h}{N} = \\frac{n_h}{n_h + n_t} \\; \\text{or} \\; \\hat{\\theta}_{MLE} = \\frac{1}{N}\\sum_{i} x_i \n\\end{equation}\n\n**Overfitting Problem:** What if we tossed too few times, so that we saw zero heads? Then $\\hat{\\theta}_{MLE} = 0$. To overcome this problem, we can smooth the estimate:\n\\begin{equation}\n\\hat{\\theta}_{MLE} = \\frac{n_h + n'}{n_h + n_t + n'} \\; \\text{where} \\; n' \\text{ is known as the pseudo (imaginary) count.} \n\\end{equation}
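\n\nA tiny numerical sketch of this Bernoulli example (the simulated tosses and the pseudo-count value below are assumptions made purely for illustration):\n\n\n```python\nimport numpy as np\n\nrng = np.random.default_rng(1)\nD = rng.binomial(1, 0.7, size=20) # 20 simulated tosses with a true rate of 0.7\nn_h, N = D.sum(), D.size\n\ntheta_mle = n_h / N # plain MLE: fraction of heads\nn_prime = 1 # pseudo-count (illustrative choice)\ntheta_smoothed = (n_h + n_prime) / (N + n_prime) # smoothed estimate from the text\nprint(theta_mle, theta_smoothed)\n```\n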
\n\n5. ***MAP:*** If we take a Beta distribution as the prior of the model parameter $\\theta$:\n\\begin{equation}\nP(\\theta; \\alpha, \\beta) = \\frac{\\Gamma(\\alpha + \\beta)}{\\Gamma(\\alpha)\\Gamma(\\beta)}\\theta^{\\alpha - 1}(1-\\theta)^{\\beta - 1} = \\frac{1}{B(\\alpha, \\beta)}\\theta^{\\alpha - 1}(1-\\theta)^{\\beta - 1}\n\\end{equation}\nwhere $B(\\alpha, \\beta)$ is the Beta function. Then, according to Bayes' rule:\n\\begin{equation}\n\\begin{aligned}\nP(\\theta|D) &\\propto P(D|\\theta)P(\\theta) \\\\\n&= \\underbrace{\\theta^{n_h}(1-\\theta)^{n_t}}_{P(D|\\theta)} \\times \\underbrace{\\frac{1}{B(\\alpha, \\beta)}\\theta^{\\alpha - 1}(1-\\theta)^{\\beta - 1}}_{P(\\theta)} \\\\\n&= \\frac{1}{B(\\alpha, \\beta)}\\theta^{n_h + \\alpha - 1}(1-\\theta)^{n_t + \\beta - 1}\n\\end{aligned}\n\\end{equation}\nMaximum a posteriori (MAP) estimation:\n\\begin{equation}\n\\theta_{MAP} = \\underset{\\theta}{\\textrm{argmax}} \\log P(\\theta|D) = \\underset{\\theta}{\\textrm{argmax}} \\log P(D|\\theta)P(\\theta)\n\\end{equation}\nSetting the derivative of $\\log P(\\theta|D)$ with respect to $\\theta$ to zero, as in step 4, gives the posterior mode\n\\begin{equation}\n\\hat{\\theta}_{MAP} = \\frac{n_h + \\alpha - 1}{N + \\alpha + \\beta - 2},\n\\end{equation}\nwhile the closely related posterior-mean (Bayes) estimate is\n\\begin{equation}\n\\hat{\\theta}_{Bayes} = \\int \\theta \\, p(\\theta|D)d\\theta = C\\int \\theta \\times \\theta^{n_h + \\alpha - 1}(1-\\theta)^{n_t + \\beta - 1}d\\theta = \\frac{n_h + \\alpha}{N + \\alpha + \\beta}\n\\end{equation}\nwhere $A = \\alpha + \\beta$ is the prior strength, which can be interpreted as the size of an imaginary data set from which we obtain the **pseudo-counts**.\n\n### Example 2: Univariate Normal (Gaussian)\nWe perform the following steps to estimate the parameters of the model: \n\n1. We observe $N$ IID real-valued samples: $D = \\{x_1, x_2, \\ldots, x_n\\}$. For example, $D = \\{-0.1, 10, 1, \\ldots, 3\\}$.\n2. The model with parameters $\\mu, \\delta$ for an instance $x \\in D$ is defined as follows:\n\\begin{equation}\nP(x|\\mu, \\delta) = \\frac{1}{\\sqrt{2\\pi \\delta^2}}e^{-\\frac{(x-\\mu)^2}{2\\delta^2}}\n\\end{equation}\n3. **MLE:** We need to maximize the objective function, the log-likelihood of the dataset $D$:\n \\begin{equation}\n\\begin{aligned}\nL(\\mu, \\delta; D) &= \\log P(D|\\mu, \\delta) \\\\\n&= \\log \\prod_{i}P(x_i|\\mu, \\delta) \\\\\n&= \\sum_{i} \\log P(x_i | \\mu, \\delta) \\\\\n&= \\sum_{i} \\log \\frac{1}{\\sqrt{2\\pi \\delta^2}}e^{-\\frac{(x_i-\\mu)^2}{2\\delta^2}} \\\\\n&= \\sum_{i}(-\\frac{1}{2} \\log (2\\pi \\delta^2) - \\frac{(x_i-\\mu)^2}{2\\delta^2}) \\\\\n&= -\\frac{N}{2} \\log (2\\pi \\delta^2) - \\sum_{i}(\\frac{(x_i-\\mu)^2}{2\\delta^2})\n\\end{aligned}\n\\,\\Rightarrow\\, (\\hat{\\mu}, \\hat{\\delta}^2) = \\underset{\\mu, \\delta^2}{\\textrm{argmax}}\\, L(\\mu, \\delta; D)\n\\end{equation}\n\n4. To find the maximum of this objective function, we set the derivatives with respect to $\\mu$ and $\\delta^2$ to zero:\n\\begin{equation}\n\\frac{\\partial L}{\\partial \\mu} = \\frac{\\sum_{i}(x_i - \\mu)}{\\delta^2} = 0 \\Rightarrow \\mu_{MLE} = \\frac{\\sum_{i}{x_i}}{N} \\\\\n\\frac{\\partial L}{\\partial \\delta^2} = -\\frac{N}{2\\delta^2} + \\frac{1}{2\\delta^4}\\sum_{i}(x_i - \\mu)^2 = 0 \\Rightarrow \\delta^2_{MLE} = \\frac{1}{N}\\sum_{i}(x_i - \\mu_{MLE})^2\n\\end{equation}\n
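\nBefore moving on to the MAP step below, here is a minimal numerical check of these Gaussian MLE formulas (the synthetic data is an assumption made purely for illustration): the closed-form expressions are exactly the sample mean and the biased (divide-by-$N$) sample variance.\n\n\n```python\nimport numpy as np\n\nrng = np.random.default_rng(2)\nD = rng.normal(loc=1.0, scale=2.0, size=5_000) # synthetic observations\n\nmu_mle = np.sum(D) / D.size # (1/N) * sum_i x_i\nvar_mle = np.sum((D - mu_mle) ** 2) / D.size # (1/N) * sum_i (x_i - mu_MLE)^2\nprint(mu_mle, var_mle) # should match np.mean(D) and np.var(D) (ddof=0)\n```\n\n5. 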
**MAP:** Similarly, we can assume normal prior for model parameter $\\mu$: \n\\[ P(u) = \\frac{1}{\\sqrt{2\\pi \\tau^2}}e^{-\\frac{(x-\\mu)^2}{2\\tau^2}} \\]\nMaximum a posterior (MAP) estimation:\n\\begin{equation}\n\\theta_{MAP} = \\underset{\\theta}{\\textrm{argmax}} \\log P(\\mu, \\delta |D) \\propto \\underset{\\theta}{\\textrm{argmax}} \\log P(D|\\mu, \\delta)P(\\mu)\n\\end{equation}\nAccording to step 4 to get the corresponding derivatives of $\\mu$ and $\\delta$:\nTODO\n\n", "meta": {"hexsha": "1922f203f1874f960087c361e0dfba14ba80a8e3", "size": 40276, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "deep_learning/Probability Theory.ipynb", "max_stars_repo_name": "bingrao/notebook", "max_stars_repo_head_hexsha": "4bd74a09ffe86164e4bd318b25480c9ca0c6a462", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "deep_learning/Probability Theory.ipynb", "max_issues_repo_name": "bingrao/notebook", "max_issues_repo_head_hexsha": "4bd74a09ffe86164e4bd318b25480c9ca0c6a462", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "deep_learning/Probability Theory.ipynb", "max_forks_repo_name": "bingrao/notebook", "max_forks_repo_head_hexsha": "4bd74a09ffe86164e4bd318b25480c9ca0c6a462", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 51.8352638353, "max_line_length": 607, "alphanum_fraction": 0.5674843579, "converted": true, "num_tokens": 11117, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.939024825960626, "lm_q2_score": 0.8840392741081575, "lm_q1q2_score": 0.8301348255117708}} {"text": "# Artificial Neural Networks (ANNs) and Backpropagation\n\nClassify images into ten digits by using an artificial neural network. Let $x \\in \\Re^{784\\times N}$ be a collection of N input images (stacked in columns) and $y \\in [0, 1]^{10\\times N}$ be the collection of our class predictions made for all images and encoded by so-called one-hot codes meant to approximate class probabilities. \n\nA neural network makes the prediction y from the collection x of images by forward propagation which is defined recursively as follows\n$$\n\\boldsymbol{y}=\\boldsymbol{h}_L \\quad with \\quad \\boldsymbol{h}_k=g_k(\\boldsymbol{a}_k), \\quad \\boldsymbol{a}_k=\\boldsymbol{W}_k \\boldsymbol{h}_{k-1} + \\boldsymbol{b}_k \\quad and \\quad \\boldsymbol{h}_0=\\boldsymbol{x}\n$$\nwith $L$ the number of layers, $k$ the layer index, $g_k$ a so-called activation function, $\\boldsymbol{h}_k$ an array of $N$ hidden feature vectors (also stacked in columns), $\\boldsymbol{a}_k$ an array of $N$ vectors of activations (also called potentials), $\\boldsymbol{W}_k$ a matrix of synaptic weights and $\\boldsymbol{b}_k$ a vector of biases. \n\nWe will use a shallow neural network which means $L = 2$. The hidden layer will use ReLU activation function and the output layer will use Softmax. \n\nThe matrices $\\boldsymbol{W}_k$ and vectors $\\boldsymbol{b}_k$ will be learned/estimated from the MNIST training set by using logistic regression. 
This consists of minimizing the cross-entropy loss, between the collection d of desired one-hot codes ($d_{ij} = 1$ if the j-th image represents the digit i, 0 otherwise) and the prediction $y$, which is defined as follows\n$$\nE=-\\Sigma_{j=1}^{N} \\Sigma_{i=1}^{10} d_{ij} \\log{y_{ij}}\n$$ \n\nThe optimization will be performed by gradient descent with backpropagation that can be implemented\niteratively, for $t$ = 0, 1, ..., and some initializations $\\boldsymbol{W}_k^0$ and $\\boldsymbol{b}_k^0$ , as:\n\n\n```python\nimport numpy as np\nfrom MNISTtools import load, show\nfrom matplotlib import pyplot as plt\n```\n\n## 1. Load dataset\n\n\n```python\nxtrain, ltrain = load(dataset='training', path='dataset/')\n```\n\n\n```python\nprint(f'xtrain.shape = {xtrain.shape}')\nprint(f'ltrain.shape = {ltrain.shape}')\nprint(f'size of training set = {xtrain.shape[1]}')\nprint(f'feature dimension = {xtrain.shape[0]}')\n```\n\n xtrain.shape = (784, 60000)\n ltrain.shape = (60000,)\n size of training set = 60000\n feature dimension = 784\n\n\n## 2. Display the image of index 42\n\n\n```python\nprint(f'label of index 42 is {ltrain[42]}')\nshow(xtrain[:,42].reshape((28,28)))\n```\n\n## 3. The range and type of `xtrain`\n\n\n```python\nprint(f'range of xtrain = from {xtrain.min()} to {xtrain.max()}')\nprint(f'type of xtrain = {type(xtrain)}')\nprint(f'dtype of xtrain = {xtrain.dtype}')\n```\n\n range of xtrain = from 0 to 255\n type of xtrain = \n dtype of xtrain = uint8\n\n\n## 4. Normalize `xtrain`\nNormalize `xtrain` s.t. it is in the range `[-1,1]` of type `float32`.\n\n\n```python\ndef normalize_MNIST_images(x):\n '''\n Inputs:\n x: data\n '''\n x_norm = x.astype(np.float32)\n return x_norm*2/255-1\n```\n\n\n```python\nxtrain = normalize_MNIST_images(xtrain)\nprint(f'min normalized xtrain = {np.min(xtrain)}')\nprint(f'max normalized xtrain = {np.max(xtrain)}')\n```\n\n min normalized xtrain = -1.0\n max normalized xtrain = 1.0\n\n\n## 5. One-hot encoding\n\n\n```python\ndef label2onehot(lbl):\n '''\n Convert label (n,) to one-hot form (d,n).\n '''\n d = np.zeros((lbl.max() + 1, lbl.size))\n d[lbl, np.arange(lbl.size)] = 1\n return d\n```\n\n\n```python\ndtrain = label2onehot(ltrain)\nprint(f'dtrain shape = {dtrain.shape}')\nprint(f'{np.where(dtrain[:,42] == 1)}')\nprint(f'{np.where(dtrain[:,42] == 1) == ltrain[42]}')\n```\n\n dtrain shape = (10, 60000)\n (array([7]),)\n [[ True]]\n\n\n## 6. One-hot decoding\n\n\n```python\ndef onehot2label(d):\n '''\n Inputs:\n d: one-hot encoding labels\n '''\n lbl = d.argmax(axis=0)\n return lbl\n```\n\n\n```python\nall(ltrain == onehot2label(dtrain))\n```\n\n\n\n\n True\n\n\n\n## 7. The softmax function\n$$\ny_i=g(\\boldsymbol{a})_i=\\frac{e^{a_i}}{\\Sigma_{j=1}^{10} e^{a_j}} \\quad where \\quad i\\in[1,10]\n$$\n\n\n```python\ndef softmax(a):\n '''\n Inputs: \n a: data\n '''\n exp_a = np.exp(a - np.max(a, axis=0))\n return exp_a / np.sum(exp_a, axis=0)\n```\n\n## 8. 
$$ \n\\frac{\\partial g(\\boldsymbol{a})_i}{\\partial a_i} = {g(\\boldsymbol{a})}_i (1-{g(\\boldsymbol{a})_i})\n$$\n\nProof:\n\n$$\n\\begin{align}\n\\frac{\\partial g(\\boldsymbol{a})_i}{\\partial a_i} & = \n \\frac{\\partial}{\\partial a_i}\\frac{e^{a_i}}{\\Sigma_{k=1}^{N} e^{a_k}} \\\\\n& = \\frac{e^{a_i}\\Sigma_{k=1}^{N}e^{a_k}-e^{a_i}e^{a_i}}{(\\Sigma_{k=1}^{N}e^{a_k})^2} \\\\\n& = \\frac{e^{a_i}}{\\Sigma_{k=1}^{N}e^{a_k}} \\frac{\\Sigma_{k=1}^{N}e^{a_k}-e^{a_i}}{\\Sigma_{k=1}^{N}e^{a_k}} \\\\\n& = {g(\\boldsymbol{a})}_i (1-{g(\\boldsymbol{a})_i})\n\\end{align}\n$$\n\n## Q9$$\n\\frac{\\partial{g(\\boldsymbol{a})_i}}{\\partial a_j}=-g(\\boldsymbol{a})_i g(\\boldsymbol{a})_j \\quad for \\, j \\neq i\n$$\n\nProof:\n\n$$\n\\begin{align}\n\\frac{\\partial g(\\boldsymbol{a})_i}{\\partial a_j} & = \n \\frac{\\partial}{\\partial a_j} \\frac{e^{a_i}}{\\Sigma_{k=1}^{N} e^{a_k}} \\\\\n& = \\frac{0-e^{a_i}e^{a_j}}{(\\Sigma_{k=1}^{N}e^{a_k})^2} \\\\\n& = -\\frac{e^{a_i}}{\\Sigma_{k=1}^{N}e^{a_k}} \\frac{e^{a_j}}{\\Sigma_{k=1}^{N}e^{a_k}} \\\\\n& = -{g(\\boldsymbol{a})}_i g(\\boldsymbol{a})_j\n\\end{align}\n$$\n\n## 10. Compute the $\\delta$\n$$\n\\delta = g(a)\\otimes e-g(a)\n$$\n\n\n```python\ndef softmaxp(a, e):\n '''\n Compute delta for the backpropagation.\n \n Inputs:\n a: predictions before softmax\n e: onehot labels\n \n Returns:\n delta\n '''\n softmax_a = softmax(a)\n \n # element-wise product\n e_prod = softmax_a*e\n \n # dot product\n dot_prod = np.sum(e_prod, axis=0)\n \n return e_prod - dot_prod*softmax_a\n```\n\n# 11. Numerical approximations for $\\delta\\$\n$$\n\\delta=\\frac{\\partial g(a)}{\\partial a}\\times e = \\lim_{\\epsilon \\to 0}\\frac{g(a+\\epsilon e)-g(a)}{\\epsilon}\n$$\n\n\n```python\neps = 1e-6 # finite difference step\na = np.random.randn(10, 200) # random inputs\ne = np.random.randn(10, 200) # random directions\n\ndiff = softmaxp(a, e)\ndiff_approx = (softmax(a + eps*e) - softmax(a)) / eps\nrel_error = np.abs(diff-diff_approx).mean() / np.abs(diff_approx).mean()\nprint(rel_error, 'should be smaller than 1e-6')\n```\n\n 4.888515969765918e-07 should be smaller than 1e-6\n\n\n## 12. ReLU and its directional derivative\n\n\n```python\ndef relu(a):\n return np.maximum(a, 0)\n\ndef relup(a, e):\n '''\n The directional derivative of the ReLU function.\n '''\n eps = 1e-6\n return (relu(a + eps * e) - relu(a)) / eps\n```\n\n\n```python\neps = 1e-6 # finite difference step\na = np.random.randn(10, 200) # random inputs\ne = np.random.randn(10, 200) # random directions\n\ndiff = relup(a, e)\ndiff_approx = (relu(a + eps*e) - relu(a)) / eps\nrel_error = np.abs(diff-diff_approx).mean() / np.abs(diff_approx).mean()\nprint(rel_error, 'should be smaller than 1e-6')\n```\n\n 0.0 should be smaller than 1e-6\n\n\n## 13. Initialization of the net\nUtilize `He and Xavier initializations`.\n\n\n```python\ndef init_shallow(Ni, Nh, No):\n '''\n Inputs:\n Ni: dimension of input layer\n Nh: dimension of output layer\n No: number of unit of output layer\n \n Returns:\n parameters of the net\n '''\n b1 = np.random.randn(Nh, 1) / np.sqrt((Ni+1.)/2.)\n W1 = np.random.randn(Nh, Ni) / np.sqrt((Ni+1.)/2.)\n b2 = np.random.randn(No, 1) / np.sqrt((Nh+1.))\n W2 = np.random.randn(No, Nh) / np.sqrt((Nh+1.))\n return W1, b1, W2, b2\n```\n\n\n```python\nNi = xtrain.shape[0] # 784\nNh = 64\nNo = dtrain.shape[0] # 10\nnetinit = init_shallow(Ni, Nh, No)\n```\n\n## 14. 
Foward propagation\n\n\n```python\ndef forwardprop_shallow(x, net):\n '''\n Inputs:\n x: data\n net: parameters of the net\n \n Returns:\n prediction \n '''\n W1 = net[0] # (64, 784)\n b1 = net[1] # (64, 1)\n W2 = net[2] # (10, 64)\n b2 = net[3] # (10, 1)\n \n a1 = W1.dot(x) + b1 # (64, 60000)\n h1 = relu(a1) # (64, 60000)\n \n a2 = W2.dot(h1) + b2 # (10, 60000)\n y = softmax(a2) # (10, 60000)\n return y\n```\n\n\n```python\nyinit = forwardprop_shallow(xtrain, netinit)\n```\n\n## 15. Loss\n\n\n```python\ndef eval_loss(y, d):\n '''\n Inputs:\n y (10, 60000): prediction\n d (10, 60000): ground truth\n \n Returns:\n Average cross-entropy loss (1, 1)\n '''\n return -np.sum(d*np.log(y))/y.shape[0]/y.shape[1]\n```\n\n\n```python\nprint(eval_loss(yinit, dtrain), 'should be around .26')\n```\n\n 0.26297468338637725 should be around .26\n\n\n## 16. Error rate\n\n\n```python\ndef eval_perfs(y, lbl):\n '''\n Compute the percentage of misclassified samples.\n \n Inputs:\n y (10, 60000): prediction\n lbl (10, 60000): ground truth\n \n Returns:\n Error rate\n '''\n return np.sum(onehot2label(y)!=lbl)/lbl.size\n```\n\n\n```python\nprint(eval_perfs(yinit, ltrain))\n```\n\n 0.8715\n\n\n## 17. Update the parameters \nPerform one backpropagation update for the network.\n\n$$\nE=-\\Sigma_{j=1}^{N} \\Sigma_{i=1}^{10} d_{ij} \\log{y_{ij}} \\\\\n(\\nabla_y E)_i=-\\frac{d_i}{y_i}\n$$\n\n\n```python\ndef update_shallow(x, d, net, gamma=.05):\n '''\n Inputs:\n x: data\n d: ground truth\n net: parameters of the net\n gamma: learning rate\n \n Returns:\n parameters of the net\n '''\n W1 = net[0]\n b1 = net[1]\n W2 = net[2]\n b2 = net[3]\n Ni = W1.shape[1]\n Nh = W1.shape[0]\n No = W2.shape[0]\n gamma = gamma / x.shape[1] # normalized by the training dataset size\n \n a1 = W1.dot(x) + b1\n h1 = relu(a1)\n a2 = W2.dot(h1) + b2\n y = softmax(a2)\n \n # derivative of cross-entropy loss\n e = -d/y + (1-d)/(1-y)\n \n # backprop\n delta2 = softmaxp(a2, e)\n delta1 = relup(a1, W2.T.dot(delta2))\n \n W2 = W2 - gamma * delta2.dot(h1.T)\n W1 = W1 - gamma * delta1.dot(x.T)\n b2 = b2 - gamma * delta2.sum(axis=1, keepdims=True)\n b1 = b1 - gamma * delta1.sum(axis=1, keepdims=True)\n \n return W1, b1, W2, b2\n```\n\n## 18. 
Backpropagation\n\n\n```python\ndef backprop_shallow(x, d, net, T, gamma=.05):\n '''\n Inputs:\n x: data\n d: ground truth\n net: parameters of the net\n T: number of updates\n gamma: learning rate\n \n Returns:\n parameters of the net\n '''\n lbl = onehot2label(d)\n for t in range(T):\n # UPDATE NET\n y = forwardprop_shallow(x, net)\n # DISPLAY LOSS AND PERFS\n net = update_shallow(x, d, net, gamma)\n print(f'\\niter = {t+1}')\n print(f'loss = {eval_loss(y, d):.4f}')\n print(f'error rate = {eval_perfs(y, lbl):.4f}')\n return net\n```\n\n\n```python\nprint('\\niter_max = 100\\n')\nnettrain = backprop_shallow(xtrain, dtrain, netinit, 100)\n```\n\n \n iter_max = 100\n \n \n iter = 1\n loss = 0.2630\n error rate = 0.8715\n \n iter = 2\n loss = 0.2224\n error rate = 0.8073\n \n iter = 3\n loss = 0.2094\n error rate = 0.7125\n \n iter = 4\n loss = 0.2003\n error rate = 0.6544\n \n iter = 5\n loss = 0.1919\n error rate = 0.5986\n \n iter = 6\n loss = 0.1840\n error rate = 0.5502\n \n iter = 7\n loss = 0.1765\n error rate = 0.5087\n \n iter = 8\n loss = 0.1695\n error rate = 0.4803\n \n iter = 9\n loss = 0.1634\n error rate = 0.4573\n \n iter = 10\n loss = 0.1589\n error rate = 0.4576\n \n iter = 11\n loss = 0.1629\n error rate = 0.4871\n \n iter = 12\n loss = 0.1730\n error rate = 0.5871\n \n iter = 13\n loss = 0.1893\n error rate = 0.5526\n \n iter = 14\n loss = 0.1635\n error rate = 0.5360\n \n iter = 15\n loss = 0.1357\n error rate = 0.3711\n \n iter = 16\n loss = 0.1290\n error rate = 0.3540\n \n iter = 17\n loss = 0.1291\n error rate = 0.3766\n \n iter = 18\n loss = 0.1290\n error rate = 0.3786\n \n iter = 19\n loss = 0.1385\n error rate = 0.4335\n \n iter = 20\n loss = 0.1280\n error rate = 0.3814\n \n iter = 21\n loss = 0.1240\n error rate = 0.3857\n \n iter = 22\n loss = 0.1174\n error rate = 0.3435\n \n iter = 23\n loss = 0.1181\n error rate = 0.3681\n \n iter = 24\n loss = 0.1080\n error rate = 0.3079\n \n iter = 25\n loss = 0.1064\n error rate = 0.3271\n \n iter = 26\n loss = 0.1007\n error rate = 0.2932\n \n iter = 27\n loss = 0.1031\n error rate = 0.3241\n \n iter = 28\n loss = 0.0951\n error rate = 0.2821\n \n iter = 29\n loss = 0.0975\n error rate = 0.3120\n \n iter = 30\n loss = 0.0922\n error rate = 0.2815\n \n iter = 31\n loss = 0.0948\n error rate = 0.3071\n \n iter = 32\n loss = 0.0892\n error rate = 0.2742\n \n iter = 33\n loss = 0.0898\n error rate = 0.2942\n \n iter = 34\n loss = 0.0861\n error rate = 0.2661\n \n iter = 35\n loss = 0.0865\n error rate = 0.2833\n \n iter = 36\n loss = 0.0826\n error rate = 0.2547\n \n iter = 37\n loss = 0.0824\n error rate = 0.2675\n \n iter = 38\n loss = 0.0795\n error rate = 0.2461\n \n iter = 39\n loss = 0.0792\n error rate = 0.2559\n \n iter = 40\n loss = 0.0768\n error rate = 0.2387\n \n iter = 41\n loss = 0.0761\n error rate = 0.2450\n \n iter = 42\n loss = 0.0747\n error rate = 0.2339\n \n iter = 43\n loss = 0.0735\n error rate = 0.2349\n \n iter = 44\n loss = 0.0730\n error rate = 0.2298\n \n iter = 45\n loss = 0.0710\n error rate = 0.2251\n \n iter = 46\n loss = 0.0714\n error rate = 0.2264\n \n iter = 47\n loss = 0.0688\n error rate = 0.2159\n \n iter = 48\n loss = 0.0700\n error rate = 0.2228\n \n iter = 49\n loss = 0.0670\n error rate = 0.2084\n \n iter = 50\n loss = 0.0687\n error rate = 0.2192\n \n iter = 51\n loss = 0.0654\n error rate = 0.2009\n \n iter = 52\n loss = 0.0673\n error rate = 0.2145\n \n iter = 53\n loss = 0.0643\n error rate = 0.1947\n \n iter = 54\n loss = 0.0660\n error rate = 0.2091\n \n iter = 55\n loss = 
0.0633\n error rate = 0.1913\n \n iter = 56\n loss = 0.0646\n error rate = 0.2040\n \n iter = 57\n loss = 0.0624\n error rate = 0.1885\n \n iter = 58\n loss = 0.0632\n error rate = 0.1981\n \n iter = 59\n loss = 0.0614\n error rate = 0.1844\n \n iter = 60\n loss = 0.0617\n error rate = 0.1912\n \n iter = 61\n loss = 0.0602\n error rate = 0.1805\n \n iter = 62\n loss = 0.0601\n error rate = 0.1847\n \n iter = 63\n loss = 0.0587\n error rate = 0.1757\n \n iter = 64\n loss = 0.0585\n error rate = 0.1787\n \n iter = 65\n loss = 0.0572\n error rate = 0.1693\n \n iter = 66\n loss = 0.0570\n error rate = 0.1725\n \n iter = 67\n loss = 0.0557\n error rate = 0.1630\n \n iter = 68\n loss = 0.0555\n error rate = 0.1662\n \n iter = 69\n loss = 0.0542\n error rate = 0.1572\n \n iter = 70\n loss = 0.0541\n error rate = 0.1605\n \n iter = 71\n loss = 0.0529\n error rate = 0.1530\n \n iter = 72\n loss = 0.0531\n error rate = 0.1573\n \n iter = 73\n loss = 0.0519\n error rate = 0.1515\n \n iter = 74\n loss = 0.0529\n error rate = 0.1585\n \n iter = 75\n loss = 0.0519\n error rate = 0.1555\n \n iter = 76\n loss = 0.0541\n error rate = 0.1708\n \n iter = 77\n loss = 0.0540\n error rate = 0.1713\n \n iter = 78\n loss = 0.0566\n error rate = 0.1869\n \n iter = 79\n loss = 0.0578\n error rate = 0.1914\n \n iter = 80\n loss = 0.0568\n error rate = 0.1855\n \n iter = 81\n loss = 0.0580\n error rate = 0.1931\n \n iter = 82\n loss = 0.0551\n error rate = 0.1737\n \n iter = 83\n loss = 0.0554\n error rate = 0.1807\n \n iter = 84\n loss = 0.0527\n error rate = 0.1605\n \n iter = 85\n loss = 0.0525\n error rate = 0.1659\n \n iter = 86\n loss = 0.0506\n error rate = 0.1497\n \n iter = 87\n loss = 0.0503\n error rate = 0.1547\n \n iter = 88\n loss = 0.0491\n error rate = 0.1431\n \n iter = 89\n loss = 0.0487\n error rate = 0.1477\n \n iter = 90\n loss = 0.0479\n error rate = 0.1386\n \n iter = 91\n loss = 0.0476\n error rate = 0.1432\n \n iter = 92\n loss = 0.0471\n error rate = 0.1365\n \n iter = 93\n loss = 0.0469\n error rate = 0.1402\n \n iter = 94\n loss = 0.0465\n error rate = 0.1349\n \n iter = 95\n loss = 0.0463\n error rate = 0.1384\n \n iter = 96\n loss = 0.0461\n error rate = 0.1338\n \n iter = 97\n loss = 0.0459\n error rate = 0.1369\n \n iter = 98\n loss = 0.0458\n error rate = 0.1333\n \n iter = 99\n loss = 0.0455\n error rate = 0.1362\n \n iter = 100\n loss = 0.0455\n error rate = 0.1331\n\n\n## 19. Evaluate the performance on the testing dataset\n\n\n```python\nxtest, ltest = load(dataset='testing', path='dataset/')\nprint(f'xtest.shape = {xtest.shape}')\nprint(f'ltest.shape = {ltest.shape}')\n```\n\n xtest.shape = (784, 10000)\n ltest.shape = (10000,)\n\n\n\n```python\n# use trained network to evaluate the performance on testing data\nxtest = normalize_MNIST_images(xtest)\npred_test = forwardprop_shallow(xtest, nettrain)\nprint(f'error rate = {eval_perfs(pred_test, ltest):.4f}')\n```\n\n error rate = 0.1286\n\n\n## 20. 
Backpropagation based on minibatch gradient descent\n\n\n```python\ndef backprop_minibatch_shallow(x, d, net, T, B=100, gamma=.05):\n '''\n Inputs:\n x: data\n d: ground truth\n net: parameters of net\n T: number of epoch\n B: minibatches\n gamma: learning rate\n \n Returns:\n parameters of net\n '''\n N = x.shape[1]\n NB = int((N+B-1)/B)\n lbl = onehot2label(d)\n for t in range(T): # epoch\n shuffled_indices = np.random.permutation(range(N))\n for l in range(NB):\n minibatch_indices = shuffled_indices[B*l:min(B*(l+1), N)]\n # UPDATE NET\n net = update_shallow(x[:,minibatch_indices], d[:,minibatch_indices], net, gamma)\n y = forwardprop_shallow(x, net)\n # DISPLAY LOSS AND PERFS\n print(f'\\nepoch = {t+1}')\n print(f'loss = {eval_loss(y, d):.4f}')\n print(f'error rate = {eval_perfs(y, lbl):.4f}')\n return net\n```\n\n## 21. Evaluate the performance on the testing dataset based on network with minibatch\n\n\n```python\nnetminibatch = backprop_minibatch_shallow(xtrain, dtrain, netinit, 5, B=100)\n```\n\n \n epoch = 1\n loss = 0.0267\n error rate = 0.0790\n \n epoch = 2\n loss = 0.0205\n error rate = 0.0584\n \n epoch = 3\n loss = 0.0191\n error rate = 0.0574\n \n epoch = 4\n loss = 0.0148\n error rate = 0.0432\n \n epoch = 5\n loss = 0.0124\n error rate = 0.0361\n\n\n\n```python\n# use trained network to evaluate the performance on testing data\npred_test_mini = forwardprop_shallow(xtest, netminibatch)\nprint(f'error rate = {eval_perfs(pred_test_mini, ltest):.4f}')\n```\n\n error rate = 0.0399\n\n", "meta": {"hexsha": "25c59eb0a95f3e87212c9bdbf82412e4edad2204", "size": 36984, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "src/Artificial-Neural-Networks.ipynb", "max_stars_repo_name": "lychengr3x/Artificial-Neuarl-Networks", "max_stars_repo_head_hexsha": "f2d02b55e55361339940ffdf28e9c745f515160f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/Artificial-Neural-Networks.ipynb", "max_issues_repo_name": "lychengr3x/Artificial-Neuarl-Networks", "max_issues_repo_head_hexsha": "f2d02b55e55361339940ffdf28e9c745f515160f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "src/Artificial-Neural-Networks.ipynb", "max_forks_repo_name": "lychengr3x/Artificial-Neuarl-Networks", "max_forks_repo_head_hexsha": "f2d02b55e55361339940ffdf28e9c745f515160f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 27.6, "max_line_length": 4580, "alphanum_fraction": 0.5124378109, "converted": true, "num_tokens": 6769, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.949669371675949, "lm_q2_score": 0.8740772400852111, "lm_q1q2_score": 0.8300843833879701}} {"text": "# The geometry of linear equations\n\nUse the matrix form to study linear equations and the pictures associated with the matrix.\n\nFor example: \n\n\\begin{align}\n2x &-y &=0 \\\\ \n-x &+2y &=3\n\\end{align}\n\nIt can be represented as a matrix with 2 rows and 2 columns:\n\n$$\n\\begin{bmatrix}\n2 & -1 \\\\\n-1 & 2\n\\end{bmatrix} \\circ\n\\begin{bmatrix}\nx \\\\\ny\n\\end{bmatrix} = \n\\begin{bmatrix}\n0 \\\\\n3\n\\end{bmatrix}\n$$\n\nI cannot resist writing the matrix form, which at this point can be called a **Linear Equation**\n\n$$ A \\cdot X = b $$\n\nLet's write the **Row Picture** of the two equations and the solution at the point *P(1,2)*, the intersection of the two lines.\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nplt.xlim(-5, 5)\nplt.ylim(-5, 5)\nx = np.linspace(-5,5, 10)\nplt.quiver([0, -5], [-5, 0], [0, 10], [10, 0], angles='xy', scale_units='xy', scale=1)\nplt.plot(x, 2 * x, 'r-', label=\"2x - y = 0\")\nplt.plot(x, 0.5 * x + 1.5, 'b-', label = \"-x + 2y = 3\")\nplt.scatter(1,2, c='blue', marker='o')\nplt.annotate('P (1,2)', xy=(1, 2), xytext=(1, 1.5))\nplt.legend()\nplt.show()\n```\n\nNow let's write the column picture:\n\n$$\nX \n\\begin{bmatrix}\n2 \\\\\n-1 \n\\end{bmatrix} +\nY\n\\begin{bmatrix}\n-1 \\\\\n2\n\\end{bmatrix} = \n\\begin{bmatrix}\n0 \\\\\n3\n\\end{bmatrix}\n$$\n\nThis form is called a **Linear combination of columns**. Let's plot the **Column Picture**.\n\n\n```python\nplt.close()\nplt.xlim(-3, 3)\nplt.ylim(-3, 3)\nplt.quiver([0, -3], [-3, 0], [0, 6], [6, 0], angles='xy', scale_units='xy', scale=1)\nplt.quiver([0, 0], [0, 0], [2, -1], [-1, 2], color='b',angles='xy', scale_units='xy', scale=1)\nplt.show()\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "ba1a4c55d88ec68305f535b1c1c277868a19a3a5", "size": 27993, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "linear-algebra.ipynb", "max_stars_repo_name": "daval302/computer-science", "max_stars_repo_head_hexsha": "1e6a8a94e10a98109e2dadceedf666b3b9cb2474", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-04-02T12:16:46.000Z", "max_stars_repo_stars_event_max_datetime": "2020-04-02T12:16:46.000Z", "max_issues_repo_path": "linear-algebra.ipynb", "max_issues_repo_name": "daval302/computer-science", "max_issues_repo_head_hexsha": "1e6a8a94e10a98109e2dadceedf666b3b9cb2474", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "linear-algebra.ipynb", "max_forks_repo_name": "daval302/computer-science", "max_forks_repo_head_hexsha": "1e6a8a94e10a98109e2dadceedf666b3b9cb2474", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 163.701754386, "max_line_length": 15718, "alphanum_fraction": 0.8886150109, "converted": true, "num_tokens": 593, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9372107931567176, "lm_q2_score": 0.8856314828740728, "lm_q1q2_score": 0.8300233845089697}} {"text": "# Interpolaci\u00f3n polinomial\n## Vandermonde, Lagrange, Diferencias divididas de Newton\n### SCT 2020\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n```\n\n### Implementaci\u00f3n de m\u00e9todos de interpolaci\u00f3n\n\n\n```python\ndef vandermonde(x_i, y_i):\n n = x_i.shape[-1]\n A = np.vander(x_i, increasing=True)\n a = np.linalg.solve(A, y_i)\n V = lambda x: np.dot(np.array([x ** i for i in range(n)]), a)\n return np.vectorize(V)\n```\n\n\n```python\ndef lagrange(x_i, y_i):\n n = x_i.shape[-1]\n L = lambda x: np.dot(y_i, np.array([np.prod(x - np.delete(x_i, k)) \n / np.prod(x_i[k] - np.delete(x_i, k)) for k in range(n)]))\n return np.vectorize(L)\n```\n\n\n```python\ndef barycentric(x_i, y_i):\n n = x_i.shape[-1]\n w = 1 / np.array([np.prod(x_i[i] - np.delete(x_i, i)) for i in range(n)]) \n b1 = lambda x: y_i[np.where(np.in1d(x_i, x))] \n b2 = lambda x: np.dot(y_i, w /(x - x_i))/ np.dot(w, 1 / (x - x_i))\n B = lambda x: b1(x) if x in x_i else b2(x)\n return np.vectorize(B)\n```\n\n\n```python\ndef newtonDD(x_i, y_i):\n n = x_i.shape[-1]\n pyramid = np.zeros((n, n)) # Create a square matrix to hold pyramid\n pyramid[:,0] = y_i # first column is y\n for j in range(1,n):\n for i in range(n-j):\n # create pyramid by updating other columns\n pyramid[i][j] = (pyramid[i+1][j-1] - pyramid[i][j-1]) / (x_i[i+j] - x_i[i])\n a = pyramid[0] # f[ ... ] coefficients\n N = lambda x: a[0] + np.dot(a[1:], np.array([np.prod(x - x_i[:i]) for i in range(1, n)]))\n return np.vectorize(N)\n```\n\n\n```python\nf1 = lambda x: 1 / (1 + 25 * x ** 2) # Runge Function\nf2 = lambda x: np.piecewise(x, [np.abs(x) <= 1e-14], [1])\n```\n\n### Gr\u00e1fico de funciones\n\n\n```python\nxx = np.linspace(-2, 2, 1001)\nplt.figure(figsize=(12, 6))\nplt.plot(xx, f1(xx), label=r\"$f_1(x)$\")\nplt.plot(xx, f2(xx), label=r\"$f_2(x)$\")\nplt.xlabel(r\"$x$\")\nplt.ylabel(r\"$y$\")\nplt.legend()\nplt.grid(True)\nplt.show()\n```\n\nInterpolaci\u00f3n utilizando puntos de $f_1(x)$.\n\n\n```python\nf = f1\n```\n\n\n```python\nN_i = 13\nx_a, x_b = -1, 1\nx_i = np.linspace(x_a, x_b, N_i)\ny_i = f(x_i)\n```\n\n\n```python\nPv = vandermonde(x_i, y_i)\n```\n\n\n```python\nPl = lagrange(x_i, y_i)\n```\n\n\n```python\nPb = barycentric(x_i, y_i)\n```\n\n\n```python\nPn = newtonDD(x_i, y_i)\n```\n\n\n```python\nN_e = 500\nx_e = np.linspace(x_a, x_b, N_e)\ny_e = f(x_e)\n```\n\n### Gr\u00e1fico de funci\u00f3n y el resultado de interpolaci\u00f3n\n\n\n```python\nplt.figure(figsize=(14, 8))\nplt.plot(x_e, y_e, 'b-', label=r\"$f(x)$\")\nplt.plot(x_i, y_i, 'ro', label=\"Puntos para interpolar\")\nplt.plot(x_e, Pv(x_e), 'r--', label=r'$P_V(x)$')\nplt.plot(x_e, Pl(x_e), 'b--', label=r'$P_L(x)$')\nplt.plot(x_e, Pb(x_e), 'g--', label=r'$P_B(x)$')\nplt.plot(x_e, Pn(x_e), 'k--', label=r'$P_N(x)$')\nplt.grid(True)\nplt.legend()\nplt.xlabel(r\"$x$\")\nplt.ylabel(r\"$y$\")\nplt.show()\n```\n\nNotamos como el interpolador efectivamente pasa por los puntos de interpolaci\u00f3n, pero aparece el **Fen\u00f3meno de Runge**.\n\n# Nodos de Chebyshev\n\nRecordar que los puntos de Chebyshev se definen como:\n\\begin{equation}\n x_i = \\cos\\left(\\frac{(2i - 1)\\pi}{2n}\\right), \\quad i=1, \\dots, n\n\\end{equation}\n\n\n```python\ndef chebyshevNodes(n):\n i = np.arange(1, n+1)\n t = (2*i - 1) * np.pi / (2 * n)\n return np.cos(t)\n```\n\n\n```python\nN_c = 13\n```\n\n\n```python\nxc_i = 
chebyshevNodes(N_c)\n```\n\n\n```python\nyc_i = f(xc_i)\n```\n\n### Puntos de Chebyshev\n\nVisualizaci\u00f3n de algunos puntos...\n\n\n```python\nt = np.linspace(0, np.pi, 100)\nplt.figure(figsize=(12, 6))\nplt.plot(np.cos(t), np.sin(t))\nplt.plot(xc_i, np.zeros(N_c), 'bo')\nplt.plot(xc_i, xc_i * np.tan((2 * np.arange(1, N_c+1) - 1) * np.pi / (2 * N_c)), 'ro')\nplt.axhline(y=0, color='k')\nplt.axis('scaled')\nplt.grid(True)\nplt.xlabel(r\"$x$\")\nplt.show()\n```\n\n### Obtenci\u00f3n de polinomios utilizando los puntos de Chebyshev\n\n\n```python\nPv_c = vandermonde(xc_i, yc_i)\n```\n\n\n```python\nPl_c = lagrange(xc_i, yc_i)\n```\n\n\n```python\nPb_c = barycentric(xc_i, yc_i)\n```\n\n\n```python\nPn_c = newtonDD(xc_i, yc_i)\n```\n\n\n```python\nxc_e = chebyshevNodes(500)\n```\n\n### Gr\u00e1fico utilizando los puntos de Chebysev\n\n\n```python\nplt.figure(figsize=(12, 6))\nplt.plot(xc_e, f(xc_e), 'b-', label=r\"$f(x)$\")\nplt.plot(xc_i, yc_i, 'ro', label=\"Puntos para interpolar\")\nplt.plot(xc_e, Pv_c(xc_e), 'r--', label=r'$P_V(x)$')\nplt.plot(xc_e, Pl_c(xc_e), 'b--', label=r'$P_L(x)$')\nplt.plot(xc_e, Pb_c(xc_e), 'g--', label=r'$P_B(x)$')\nplt.plot(xc_e, Pn_c(xc_e), 'k--', label=r'$P_N(x)$')\nplt.grid(True)\nplt.legend()\nplt.xlabel(r\"$x$\")\nplt.ylabel(r\"$y$\")\nplt.show()\n```\n\n### Comparaci\u00f3n interpoladores\n\nComparamos un error relativo entre el interpolador utilizando puntos equiespaciados y los de Chebyshev.\n\n\n```python\nerror = lambda y, p: np.linalg.norm(y - p, np.inf) / np.linalg.norm(y, np.inf)\n```\n\n\n```python\nprint(\"Error puntos equiespaciados:\", error(f(x_e), Pb(x_e))) # Equispaced points interpolation comparison\nprint(\"Error puntos Chebyshev:\", error(f(x_e), Pb_c(x_e))) # Chebyshev points interpolation comparison\n```\n\n Error puntos equiespaciados: 3.663235759944772\n Error puntos Chebyshev: 0.06919634037617944\n\n\n# Conclusi\u00f3n\n\nEn este ejemplo/ejercicio se puede ver como afecta la calidad de la interpolaci\u00f3n con el uso de puntos de Chebyshev en comparaci\u00f3n a puntos equiespaciados.\n\n# Referencias\n* https://medium.com/@sddkal/newtons-divided-difference-method-for-polynomial-interpolation-4bc094ba90d7\n", "meta": {"hexsha": "7016ff58c10e49c4ca814f46c70dfa934556d743", "size": 157603, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "material/05_interpolacion_1D/interpolacion.ipynb", "max_stars_repo_name": "fitoxgs2/INF-285", "max_stars_repo_head_hexsha": "f7d547c2480aea7eb1242b873ab1135a0873834f", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 8, "max_stars_repo_stars_event_min_datetime": "2020-04-24T01:25:24.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-10T01:08:37.000Z", "max_issues_repo_path": "material/05_interpolacion_1D/interpolacion.ipynb", "max_issues_repo_name": "fitoxgs2/INF-285", "max_issues_repo_head_hexsha": "f7d547c2480aea7eb1242b873ab1135a0873834f", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-04-22T00:57:29.000Z", "max_issues_repo_issues_event_max_datetime": "2020-04-23T23:59:28.000Z", "max_forks_repo_path": "material/05_interpolacion_1D/interpolacion.ipynb", "max_forks_repo_name": "fitoxgs2/INF-285", "max_forks_repo_head_hexsha": "f7d547c2480aea7eb1242b873ab1135a0873834f", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 45, "max_forks_repo_forks_event_min_datetime": "2020-04-20T01:15:09.000Z", "max_forks_repo_forks_event_max_datetime": "2020-07-27T22:53:59.000Z", 
"avg_line_length": 295.1367041199, "max_line_length": 51404, "alphanum_fraction": 0.9304899018, "converted": true, "num_tokens": 1873, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9372107931567176, "lm_q2_score": 0.8856314692902446, "lm_q1q2_score": 0.8300233717780594}} {"text": "# Perceptrons as Logical Operators\n\n## AND Perceptron\n\n#### What are the weights and bias for the AND perceptron?\n\n\n\nSet the weights (weight1, weight2) and bias (bias) to values that will correctly determine the AND operation as shown above. \n\nMore than one set of values will work!\n\n\n```python\nimport pandas as pd\n\n# TODO: Set weight1, weight2, and bias\nweight1 = 0.8 \nweight2 = 0.8\nbias = -1\n\n\n# DON'T CHANGE ANYTHING BELOW\n# Inputs and outputs\ntest_inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]\ncorrect_outputs = [False, False, False, True]\noutputs = []\n\n# Generate and check output\nfor test_input, correct_output in zip(test_inputs, correct_outputs):\n linear_combination = weight1 * test_input[0] + weight2 * test_input[1] + bias\n output = int(linear_combination >= 0)\n is_correct_string = 'Yes' if output == correct_output else 'No'\n outputs.append([test_input[0], test_input[1], linear_combination, output, is_correct_string])\n\n# Print output\nnum_wrong = len([output[4] for output in outputs if output[4] == 'No'])\noutput_frame = pd.DataFrame(outputs, columns=['Input 1', ' Input 2', ' Linear Combination', ' Activation Output', ' Is Correct'])\nif not num_wrong:\n print('Nice! You got it all correct.\\n')\nelse:\n print('You got {} wrong. Keep trying!\\n'.format(num_wrong))\nprint(output_frame.to_string(index=False))\n\n```\n\n Nice! You got it all correct.\n \n Input 1 Input 2 Linear Combination Activation Output Is Correct\n 0 0 -1.0 0 Yes\n 0 1 -0.2 0 Yes\n 1 0 -0.2 0 Yes\n 1 1 0.6 1 Yes\n\n\n## OR Perceptron\n\n\n\nThe OR perceptron is very similar to an AND perceptron. In the image below, the OR perceptron has the same line as the AND perceptron, except the line is shifted down. What can you do to the weights and/or bias to achieve this? Use the following AND perceptron to create an OR Perceptron.\n\n\n \nThe two ways to go from an AND perceptron to an OR perceptron\n\n- Increase the weights\n- Decrease the magnitude of the bias\n\n\n\n```python\nimport pandas as pd\n\n# TODO: Set weight1, weight2, and bias\nweight1 = 1.1\nweight2 = 1.1\nbias = -1\n\n#weight1 = 0.8\n#weight2 = 0.8\n#bias = -0.5 \n\n\n# DON'T CHANGE ANYTHING BELOW\n# Inputs and outputs\ntest_inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]\ncorrect_outputs = [False, True, True, True]\noutputs = []\n\n# Generate and check output\nfor test_input, correct_output in zip(test_inputs, correct_outputs):\n linear_combination = weight1 * test_input[0] + weight2 * test_input[1] + bias\n output = int(linear_combination >= 0)\n is_correct_string = 'Yes' if output == correct_output else 'No'\n outputs.append([test_input[0], test_input[1], linear_combination, output, is_correct_string])\n\n# Print output\nnum_wrong = len([output[4] for output in outputs if output[4] == 'No'])\noutput_frame = pd.DataFrame(outputs, columns=['Input 1', ' Input 2', ' Linear Combination', ' Activation Output', ' Is Correct'])\nif not num_wrong:\n print('Nice! You got it all correct.\\n')\nelse:\n print('You got {} wrong. Keep trying!\\n'.format(num_wrong))\nprint(output_frame.to_string(index=False))\n\n```\n\n You got 1 wrong. 
Keep trying!\n \n Input 1 Input 2 Linear Combination Activation Output Is Correct\n 0 0 0.0 1 No\n 0 1 1.1 1 Yes\n 1 0 1.1 1 Yes\n 1 1 2.2 1 Yes\n\n\n## NOT Perceptron\n\n\n\nUnlike the other perceptrons we looked at, the NOT operation only cares about one input. The operation returns a 0 if the input is 1 and a 1 if it's a 0. The other inputs to the perceptron are ignored.\n\nIn this quiz, you'll set the weights (weight1, weight2) and bias bias to the values that calculate the NOT operation on the second input and ignores the first input.\n\n\n```python\nimport pandas as pd\n\n# TODO: Set weight1, weight2, and bias\nweight1 = 0.5\nweight2 = -0.7\nbias = 0.0\n\n\n# DON'T CHANGE ANYTHING BELOW\n# Inputs and outputs\ntest_inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]\ncorrect_outputs = [True, False, True, False]\noutputs = []\n\n# Generate and check output\nfor test_input, correct_output in zip(test_inputs, correct_outputs):\n linear_combination = weight1 * test_input[0] + weight2 * test_input[1] + bias\n output = int(linear_combination >= 0)\n is_correct_string = 'Yes' if output == correct_output else 'No'\n outputs.append([test_input[0], test_input[1], linear_combination, output, is_correct_string])\n\n# Print output\nnum_wrong = len([output[4] for output in outputs if output[4] == 'No'])\noutput_frame = pd.DataFrame(outputs, columns=['Input 1', ' Input 2', ' Linear Combination', ' Activation Output', ' Is Correct'])\nif not num_wrong:\n print('Nice! You got it all correct.\\n')\nelse:\n print('You got {} wrong. Keep trying!\\n'.format(num_wrong))\nprint(output_frame.to_string(index=False))\n```\n\n Nice! You got it all correct.\n \n Input 1 Input 2 Linear Combination Activation Output Is Correct\n 0 0 0.0 1 Yes\n 0 1 -0.7 0 Yes\n 1 0 0.5 1 Yes\n 1 1 -0.2 0 Yes\n\n\n## XOR Perceptron\n\n\n\n#### Build an XOR Multi-Layer Perceptron\nNow, let's build a multi-layer perceptron from the AND, NOT, and OR perceptrons to create XOR logic!\n\nThe neural network below contains 3 perceptrons, A, B, and C. The last one (AND) has been given for you. The input to the neural network is from the first node. The output comes out of the last node.\n\nThe multi-layer perceptron below calculates XOR. Each perceptron is a logic operation of AND, OR, and NOT. However, the perceptrons A, B, and C don't indicate their operation. In the following quiz, set the correct operations for the perceptrons to calculate XOR.\n\n\n\n\n\n#### Solution\n\n\n\n\n## Perceptron Algorithm\n\nImplement the perceptron algorithm to separate the following data (given in the file data.csv).\n\n\n\nRecall that the perceptron step works as follows. For a point with coordinates (p,q)(p,q), label yy, and prediction given by the equation \n\n\\begin{align}\n\\hat{y} = step(w_1x_1 + w_2x_2 + b)\n\\end{align}\n\n- If the point is correctly classified, do nothing.\n- If the point is classified positive, but it has a negative label, subtract \u03b1p,\u03b1q, and \u03b1 from w_1, w_2,w_1,w_2, and bb respectively.\n- If the point is classified negative, but it has a positive label, add \u03b1p,\u03b1q, and \u03b1 to w_1, w_2,w_1,w_2, and bb respectively.\n\ngraph the solution that the perceptron algorithm gives you. 
It'll actually draw a set of dotted lines, that show how the algorithm approaches to the best solution, given by the black solid line.\n\n\n```python\nimport numpy as np\n# Setting the random seed, feel free to change it and see different solutions.\nnp.random.seed(42)\n\ndef stepFunction(t):\n if t >= 0:\n return 1\n return 0\n\ndef prediction(X, W, b):\n return stepFunction((np.matmul(X,W)+b)[0])\n\n# TODO: Fill in the code below to implement the perceptron trick.\n# The function should receive as inputs the data X, the labels y,\n# the weights W (as an array), and the bias b,\n# update the weights and bias W, b, according to the perceptron algorithm,\n# and return W and b.\ndef perceptronStep(X, y, W, b, learn_rate = 0.01):\n # Fill in code\n for i in range(len(X)):\n y_hat = prediction(X[i],W,b)\n if y[i]-y_hat == 1:\n W[0] += X[i][0]*learn_rate\n W[1] += X[i][1]*learn_rate\n b += learn_rate\n elif y[i]-y_hat == -1:\n W[0] -= X[i][0]*learn_rate\n W[1] -= X[i][1]*learn_rate\n b -= learn_rate\n return W, b\n \n# This function runs the perceptron algorithm repeatedly on the dataset,\n# and returns a few of the boundary lines obtained in the iterations,\n# for plotting purposes.\n# Feel free to play with the learning rate and the num_epochs,\n# and see your results plotted below.\ndef trainPerceptronAlgorithm(X, y, learn_rate = 0.01, num_epochs = 25):\n x_min, x_max = min(X.T[0]), max(X.T[0])\n y_min, y_max = min(X.T[1]), max(X.T[1])\n W = np.array(np.random.rand(2,1))\n b = np.random.rand(1)[0] + x_max\n # These are the solution lines that get plotted below.\n boundary_lines = []\n for i in range(num_epochs):\n # In each epoch, we apply the perceptron step.\n W, b = perceptronStep(X, y, W, b, learn_rate)\n boundary_lines.append((-W[0]/W[1], -b/W[1]))\n return boundary_lines\n\n```\n", "meta": {"hexsha": "3660a357fdddca2173402e77427b338e345adbc4", "size": 13142, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Lesson_2/Perceptrons.ipynb", "max_stars_repo_name": "Vector202/PyTorch-Scholarship-Challenge-1", "max_stars_repo_head_hexsha": "ccc944b54cd455a0d62475f1c73d365492cbb3f5", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 21, "max_stars_repo_stars_event_min_datetime": "2018-11-24T10:47:31.000Z", "max_stars_repo_stars_event_max_datetime": "2020-01-24T17:58:59.000Z", "max_issues_repo_path": "Lesson_2/Perceptrons.ipynb", "max_issues_repo_name": "Vector202/PyTorch-Scholarship-Challenge-1", "max_issues_repo_head_hexsha": "ccc944b54cd455a0d62475f1c73d365492cbb3f5", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Lesson_2/Perceptrons.ipynb", "max_forks_repo_name": "Vector202/PyTorch-Scholarship-Challenge-1", "max_forks_repo_head_hexsha": "ccc944b54cd455a0d62475f1c73d365492cbb3f5", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 11, "max_forks_repo_forks_event_min_datetime": "2018-12-08T23:10:07.000Z", "max_forks_repo_forks_event_max_datetime": "2020-12-23T06:08:57.000Z", "avg_line_length": 39.3473053892, "max_line_length": 297, "alphanum_fraction": 0.5417744636, "converted": true, "num_tokens": 2390, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9161096067182449, "lm_q2_score": 0.905989829267587, "lm_q1q2_score": 0.829985986181059}} {"text": "```python\n#format the book\nfrom __future__ import division, print_function\n%matplotlib inline\nimport sys\nsys.path.insert(0, '..')\nimport book_format\nbook_format.set_style()\n```\n\n\n\n\n\n\n\n\n\n\n# Computing and plotting PDFs of discrete data\n\nSo let's investigate how to compute and plot probability distributions.\n\n\nFirst, let's make some data according to a normal distribution. We use `numpy.random.normal` for this. The parameters are not well named. `loc` is the mean of the distribution, and `scale` is the standard deviation. We can call this function to create an arbitrary number of data points that are distributed according to that mean and std.\n\n\n```python\nimport numpy as np\nimport numpy.random as random\n\nmean = 3\nstd = 2\n\ndata = random.normal(loc=mean, scale=std, size=50000)\nprint(len(data))\nprint(data.mean())\nprint(data.std())\n```\n\n 50000\n 2.9928466729147547\n 1.9934066626288809\n\n\nAs you can see from the print statements we got 5000 points that have a mean very close to 3, and a standard deviation close to 2.\n\nWe can plot this Gaussian by using `scipy.stats.norm` to create a frozen function that we will then use to compute the pdf (probability distribution function) of the Gaussian.\n\n\n```python\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport scipy.stats as stats\n\ndef plot_normal(xs, mean, std, **kwargs):\n # method 1\n #norm = stats.norm(mean, std)\n #plt.plot(xs, norm.pdf(xs), **kwargs)\n \n # alt method (my way)\n #syntax: pdf(x, loc=0, scale=1)\n plt.plot(xs, stats.norm.pdf(xs, loc=mean, scale=std))\n\nxs = np.linspace(-5, 15, num=200) #Returns `num` evenly spaced samples, calculated over the interval [start, stop].\nplot_normal(xs, mean, std, color='k')\n```\n\nBut we really want to plot the PDF of the discrete data, not the idealized function.\n\nThere are a couple of ways of doing that. First, we can take advantage of `matplotlib`'s `hist` method, which computes a histogram of a collection of data. Normally `hist` computes the number of points that fall in a bin, like so:\n\n\n```python\nplt.hist(data, bins=200) # fit data of size=50000 into bins of size = 200\nplt.show()\n```\n\nthat is not very useful to us - we want the PDF, not bin counts. Fortunately `hist` includes a `density` parameter which will plot the PDF for us.\n\n\n```python\nplt.hist(data, bins=200, density=True)\nplt.show()\n```\n\nI may not want bars, so I can specify the `histtype` as 'step' to get a line.\n\n\n```python\nplt.hist(data, bins=200, density=True, histtype='step', lw=2)\nplt.show()\n```\n\nTo be sure it is working, let's also plot the idealized Gaussian in black.\n\n\n```python\nplt.hist(data, bins=200, density=True, histtype='step', lw=2)\nnorm = stats.norm(mean, std)\nplt.plot(xs, norm.pdf(xs), color='k', lw=2)\nplt.show()\n```\n\nThere is another way to get the approximate distribution of a set of data. There is a technique called *kernel density estimate* that uses a kernel to estimate the probability distribution of a set of data. SciPy implements it with the function `gaussian_kde`. **Do not be mislead by the name - Gaussian refers to the type of kernel used in the computation. This works for any distribution, not just Gaussians. 
In this section we have a Gaussian distribution, but soon we will not, and this same function will work.**\n\n\n```python\n# popo_notes: Important\n# prepare distribution with mean = 3, std=2 (not gauusian necessarily)\nkde = stats.gaussian_kde(data) #kde is now our distribution\n# sample E(x) from this distribution\nxs = np.linspace(-5, 15, num=200)\nplt.plot(xs, kde(xs))\nplt.show()\n```\n\n## Monte Carlo Simulations\n\n\nWe (well I) want to do this sort of thing because I want to use monte carlo simulations to compute distributions. It is easy to compute Gaussians when they pass through linear functions, but difficult to impossible to compute them analytically when passed through nonlinear functions. **Techniques like particle filtering handle this by taking a large sample of points, passing them through a nonlinear function, and then computing statistics on the transformed points. Let's do that.** \n\n> popo_notes\n\n### 'Lets compute statistics on points transformed using linear and non-linear functions\n\n---\n\nWe will start with the linear function $f(x) = 2x + 12$ just to prove to ourselves that the code is working. I will alter the mean and std of the data we are working with to help ensure the numbers that are output are unique It is easy to be fooled, for example, if the formula multipies x by 2, the mean is 2, and the std is 2. If the output of something is 4, is that due to the multication factor, the mean, the std, or a bug? It's hard to tell. \n\n\n```python\ndef f(x):\n return 2*x + 12\n\nmean = 1.\nstd = 1.4\ndata = random.normal(loc=mean, scale=std, size=50000)\n\nd_t = f(data) # transform data through f(x)\n\nplt.hist(data, bins=200, density=True, histtype='step', lw=2)\nplt.hist(d_t, bins=200, density=True, histtype='step', lw=2)\n\nplt.ylim(0, .35)\nplt.show()\nprint('mean = {:.2f}'.format(d_t.mean()))\nprint('std = {:.2f}'.format(d_t.std()))\n```\n\nThis is what we expected. The input is the Gaussian $\\mathcal{N}(\\mu=1, \\sigma=1.4)$, and the function is $f(x) = 2x+12$. Therefore we expect the mean to be shifted to $f(\\mu) = 2*1+12=14$. We can see from the plot and the print statement that this is what happened. \n\nBefore I go on, can you explain what happened to the standard deviation? You may have thought that the new $\\sigma$ should be passed through $f(x)$ like so $2(1.4) + 12=14.81$. But that is not correct - **the standard deviation is only affected by the multiplicative factor, not the shift.** If you think about that for a moment you will see it makes sense. We multiply our samples by 2, so they are twice as spread out as before. Standard deviation is a measure of how spread out things are, so it should also double. 
It doesn't matter if we then shift that distribution 12 places, or 12 million for that matter - the spread is still twice the input data.\n\n\n\n## Nonlinear Functions\n\nNow that we believe in our code, lets try it with nonlinear functions.\n\n\n```python\ndef f2(x):\n return (np.cos((1.5*x + 2.1))) * np.sin(0.3*x) - 1.6*x\n\nd_t = f2(data)\nplt.subplot(121)\nplt.title('f(x)')\nplt.hist(d_t, bins=200, density=True, histtype='step', lw=2)\n\nplt.subplot(122)\nkde = stats.gaussian_kde(d_t)\nxs = np.linspace(-10, 10, 200)\nplt.plot(xs, kde(xs), 'k') # kde doesn't approximate distribution, so it prints non-gaussian distribution here\nplot_normal(xs, d_t.mean(), d_t.std(), color='g', lw=3)# plot approximated gaussian\nplt.show()\nprint('mean = {:.2f}'.format(d_t.mean()))\nprint('std = {:.2f}'.format(d_t.std()))\n```\n\nHere I passed the data through the nonlinear function $f(x) = \\cos(1.5x+2.1)\\sin(\\frac{x}{3}) - 1.6x$. That function is quite close to linear, but we can see how much it alters the pdf of the sampled data. \n\nThere is a lot of computation going on behind the scenes to transform 50,000 points and then compute their PDF. The Extended Kalman Filter (EKF) gets around this by linearizing the function at the mean and then passing the Gaussian through the linear equation. We saw above how easy it is to pass a Gaussian through a linear function. So lets try that.\n\nWe can linearize this by taking the derivative of the function at x. We can use sympy to get the derivative. \n\n\n```python\n#get the derivative of f(x) (=dfx)\nimport sympy\nx = sympy.symbols('x')\nf = sympy.cos(1.5*x+2.1) * sympy.sin(x/3) - 1.6*x\ndfx = sympy.diff(f, x)\ndfx \n```\n\n\n\n\n$\\displaystyle - 1.5 \\sin{\\left(\\frac{x}{3} \\right)} \\sin{\\left(1.5 x + 2.1 \\right)} + \\frac{\\cos{\\left(\\frac{x}{3} \\right)} \\cos{\\left(1.5 x + 2.1 \\right)}}{3} - 1.6$\n\n\n\nWe can now compute the slope of the function by evaluating the derivative at the mean.\n\n\n```python\nm = dfx.subs(x, mean)\nm\n```\n\n\n\n\n$\\displaystyle -1.66528051815545$\n\n\n\n> popo_notes: \n\n**There are 2 different and independent things: data (with its mean and std) and some function. When this data (x) is passed through a function, it gets transformed into f(x). The mean is also transformed as f(mean) but the std is transformed as `function_slope` * std.**\n\n---\n\nThe equation of a line is $y=mx+b$, so the new standard deviation should be $~1.67$ times the input std. We can compute the new mean by passing it through the original function because the linearized function is just the slope of f(x) evaluated at the mean. The slope is a tangent that touches the function at $x$, so both will return the same result. So, let's plot this and compare it to the results from the monte carlo simulation.\n\n\n```python\nplt.hist(d_t, bins=200, density=True, histtype='step', lw=2)\nplot_normal(xs, f2(mean), abs(float(m)*std), color='k', lw=3, label='EKF')\nplot_normal(xs, d_t.mean(), d_t.std(), color='r', lw=3, label='MC')\nplt.legend()\nplt.show()\n```\n\nWe can see from this that the estimate from the EKF (in red) is not exact, but it is not a bad approximation either. 
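\n\nIn equation form, the linearization used by the EKF replaces the transformed distribution with a Gaussian whose spread is scaled by the local slope. A worked version of the numbers printed above (the slope is the SymPy derivative evaluated at the mean, $|f'(\mu)| \approx 1.665$, and the input std is $\sigma = 1.4$):\n\n$$\sigma_{EKF} \approx |f'(\mu)| \, \sigma \approx 1.665 \times 1.4 \approx 2.33$$\n\nThis is the standard deviation of the black curve labeled 'EKF' in the last plot, while the Monte Carlo estimate uses the sample statistics of the 50,000 transformed points directly.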
\n", "meta": {"hexsha": "e0113bc3eb366680b357a02164e4e004f53c7a43", "size": 129511, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb", "max_stars_repo_name": "pra-dan/Kalman-and-Bayesian-Filters-in-Python", "max_stars_repo_head_hexsha": "dda0b2bf6ea62a1aff9631a7b2f9298654f8c850", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-03-24T17:56:02.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-24T17:56:02.000Z", "max_issues_repo_path": "Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb", "max_issues_repo_name": "pra-dan/Kalman-and-Bayesian-Filters-in-Python", "max_issues_repo_head_hexsha": "dda0b2bf6ea62a1aff9631a7b2f9298654f8c850", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb", "max_forks_repo_name": "pra-dan/Kalman-and-Bayesian-Filters-in-Python", "max_forks_repo_head_hexsha": "dda0b2bf6ea62a1aff9631a7b2f9298654f8c850", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 227.611599297, "max_line_length": 17240, "alphanum_fraction": 0.9138992055, "converted": true, "num_tokens": 2448, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9161096112990285, "lm_q2_score": 0.905989815306765, "lm_q1q2_score": 0.8299859775415591}} {"text": "## Classical Mechanics - Week 7\n \n \n### Last Week:\n- Simulated planetary motion\n- Saw the limitations of Euler's Method\n- Gained experience with the Velocity Verlet Method\n\n### This Week:\n- Introduce the SymPy package\n- Visualize Potential Energy surfaces\n- Explore packages in Python\n\n# Why use packages, libraries, and functions in coding?\nAnother great question! \n\n**Simply put:** We could hard code every little algorithm into our program and retype them every time we need them, OR we could call upon the functions from packages and libraries to do these tedious calculations and reduce the possibility of error.\n\nWe have done this with numpy.linalg.norm() to calculate the magnitude of vectors.\n\nWe will be introducing a new package call SymPy, a very useful [symbolic mathematics](https://en.wikipedia.org/wiki/Computer_algebra) library.\n\n\n```python\n# Let's import packages, as usual\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport sympy as sym\nsym.init_printing(use_unicode=True)\n```\n\nLet's analyze a simple projectile motion again, but this time using SymPy. \n\nAssume we have the following equation to express our trajectory in the $x-y$ coordinates:\n\n$y = y_0 - (\\beta -x)^2$, where $y_0$ and $\\beta$ are constants.\n\n\n```python\n# First we must declare and define our variables. Examine the syntax in this cell then run it. Notice that ordering matters\nx, y0, beta = sym.symbols('x, y_0, beta')\n```\n\n\n```python\n# Next we will define our function\ny = y0 - (beta - x)**2\ny # This line outputs a visualization of our equation y(x) below\n```\n\n\n```python\n# Now we will set our constants, but leave x alone so we can plot as a function of x\ny1 = sym.simplify(y.subs({y0:10, beta:1}))\ny1 # This line outputs a visualization of our equation y1(x) below\n```\n\n\n```python\n# Run this cell. 
What happens as you play around with the constants?\nsym.plot(y1,(x,0,5), title='Height vs Distance',xlabel='x',ylabel='y')\n```\n\n## Q1.) How would you compare plotting with sympy versus what we have done so far? Which method do you prefer?\n\n✅ Double click this cell, erase its content, and put your answer to the above question here.\n\n################ Possible Answer #################\n\nSympy Gives a more direct way of creating graphs. \nNumpy is better for numerical analysis. \n\n################ Possible Answer #################\n\n\n#### Using what we just learned, please set up the following equation using sympy where $U(\\vec{r})$ is our potential energy:\n\n$U(\\vec{r}) = -e^{-(x^2+y^2)}$\n\n\n```python\nU, x, y = sym.symbols('U, x, y') ## Set up our variables here. What should be on the left-hand and right-hand side?\nU = -sym.exp(-(x**2+y**2)) ## What should go in the exp? Notice that using SymPy we need to use the SymPy function for exp\nU\n```\n\n### We have two ways in which we can graph this:\n\nEither perform the substitution $r^2 = x^2+y^2$ in order to plot in a 2D space ($U(r)$) or we could plot in a 3D space keeping $x$ and $y$ in our equation (U(x,y)). \n\n## Q2.) What do you think are the benefits/draw-backs of these two methods of analyzing our equation? Which do you prefer?\n\n✅ Double click this cell, erase its content, and put your answer to the above question here.\n\n####### Possible Answer #######\n\nThe 2D method keeps things simple and allows us to see how potential changes as the magnitude of our distance changes. A drawback is that we don't get to see how the individual x and y components affect our potential.\n\nThe 3D method lets us see how potential changes as a result of both x and y, allowing us to see a more in-depth analysis of the potential. For instance, the rotational symmetry in the xy plane is apparent in the 3D plot. However, the graph is a bit more complicated since visualizing 3 dimensions on a 2D surface (a sheet of paper or a computer screen) is sometimes difficult.\n\nI prefer the air.\n\n####### Possible Answer #######\n\n#### Now let's graph the potential\nFor now we will use sympy to perform both a 2D and 3D plot. Let's do the 2D version first using what we just learned.\n\n\n```python\nr = sym.symbols('r') # Creating our variables\nU2D = -sym.exp(-r**2) # Finish the equation using our replacement for x^2 + y^2\nsym.plot(U2D,title='Potential vs Distance',xlabel='Distance (r)',ylabel='Potential (U)')\n```\n\n## Q3.) What can you learn from this 2D graph?\n\n✅ Double click this cell, erase its content, and put your answer to the above question here.\n\n####### Possible Answer #######\n\nThe Potential Energy has a minimum at $r=0$. As we get closer to the origin the potential energy becomes more negative with $U(0) = -1$. The width of the well is of order 2.\n\n####### Possible Answer #######\n\nThe cell below imports a function from the sympy package that allows us to graph in 3D. Using the \"plot3d\" call, make a 3D plot of our originally initalized equation. \n\nFor the 3D plot, try setting the x and y plot limits as close as you can to the origin while having a clear picture of what is happening at the origin. 
For example x and y could range from (-4,4)\n\n\n```python\n# The below import will allow us to plot in 3D with sympy\n# Define \"U\" as your potential\nfrom sympy.plotting import plot3d\n```\n\n\n```python\n# Once you have your potential function set up, execute this cell to obtain a graph\n# Play around with the x and y scale to get a better view of the potential curve\nplot3d(U,(x,-2,2),(y,-2,2))\n```\n\n## Q4.) What can you learn from this 3D graph? (Feel free to make the graph's x and y-range smaller to observe the differences)\n\n✅ Double click this cell, erase its content, and put your answer to the above question here.\n\n####### Possible Answer #######\n\nIt shows similar information as the 2D plot, but now we can see the azimuthal symmetry in the xy plane.\n\n####### Possible Answer #######\n\n##### Let's get some more in-depth coding experience:\nTry to graph this last potential energy function using SymPy or with Numpy (there is a 3d plotting example back in Week 2's notebook).\n\n$U(r) = 3.2\\cdot e^{-(0.5x^2+0.25y^3)}$\n\n\n```python\n## Set up and graph the potential here\nU = 3.2*sym.exp(-(0.5*x**2+0.25*y**2))\nplot3d(U,(x,-4,4),(y,-4,4))\n```\n\n## Q5.) How would you describe the new potential?\n\n✅ Double click this cell, erase its content, and put your answer to the above question here.\n\n####### Possible Answer #######\n\nNow the Potential Energy has a maximum at $r=0$, with a value of $U(0)=3.2$. It is no longer azimuthally symmetric in the xy plane, although it is difficult to see this in the 3D plot.\n\n####### Possible Answer #######\n\n### Try this: \nCenter the new potential at (1,1) instead of (0,0). (That is, move the peak of the graph from (0,0) to (1,1).)\n\n\n```python\n## Plot the adjustment here\nU = 3.2*sym.exp(-(0.5*(x-1)**2+0.25*(y-1)**2))\nplot3d(U,(x,-4,4),(y,-4,4))\n```\n\n## Q6.) How did you move the peak of the graph?\n\n✅ Double click this cell, erase its content, and put your answer to the above question here.\n\n####### Possible Answer #######\n\nReplace $x$ with $x-1$ and $y$ with $y-1$ in the potential.\n\n####### Possible Answer #######\n\n# Notebook Wrap-up. \nRun the cell below and copy-paste your answers into their corresponding cells.\n\n\n```python\nfrom IPython.display import HTML\nHTML(\n\"\"\"\n\n\"\"\"\n)\n```\n\n\n\n\n\n\n\n\n\n\n# Congratulations! Another week, another Notebook.\n\nAs we can see, there are many tools we can use to model and analyze different problems in Physics on top of Numerical methods. Libraries and packages are such tools that have been developed by scientists to work on different topics, each package specific to a different application. But this is just food for thought. 
Although we use some basic package functions, we won't be using advanced scientific packages to do simulations and calculations in this class.\n\n\n```python\n\n```\n", "meta": {"hexsha": "7670a822f4b25988c64f2db33670118cec7fd388", "size": 278464, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "doc/AdminBackground/PHY321/CM_Jupyter_Notebooks/Answers/CM_Notebook7_Answers.ipynb", "max_stars_repo_name": "Shield94/Physics321", "max_stars_repo_head_hexsha": "9875a3bf840b0fa164b865a3cb13073aff9094ca", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 20, "max_stars_repo_stars_event_min_datetime": "2020-01-09T17:41:16.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-09T00:48:58.000Z", "max_issues_repo_path": "doc/AdminBackground/PHY321/CM_Jupyter_Notebooks/Answers/CM_Notebook7_Answers.ipynb", "max_issues_repo_name": "Shield94/Physics321", "max_issues_repo_head_hexsha": "9875a3bf840b0fa164b865a3cb13073aff9094ca", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2020-01-08T03:47:53.000Z", "max_issues_repo_issues_event_max_datetime": "2020-12-15T15:02:57.000Z", "max_forks_repo_path": "doc/AdminBackground/PHY321/CM_Jupyter_Notebooks/Answers/CM_Notebook7_Answers.ipynb", "max_forks_repo_name": "Shield94/Physics321", "max_forks_repo_head_hexsha": "9875a3bf840b0fa164b865a3cb13073aff9094ca", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 33, "max_forks_repo_forks_event_min_datetime": "2020-01-10T20:40:55.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-11T20:28:41.000Z", "avg_line_length": 278464.0, "max_line_length": 278464, "alphanum_fraction": 0.9508230866, "converted": true, "num_tokens": 1990, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9230391685381605, "lm_q2_score": 0.8991213806488609, "lm_q1q2_score": 0.8299242516090075}} {"text": "## Worked example: Symmetry reduction\n\n\n```python\nimport numpy as np\nimport sympy as sp\nimport scipy.linalg as la\n```\n\n### Some routines for creating rotation matrices and transpositions\n\n\n```python\ndef get_rotation_matrix(phi, axis):\n cp = np.cos(phi)\n sp = np.sin(phi)\n r1, r2, r3 = axis\n Rm = np.array(\n [\n [\n r1 ** 2 * (1 - cp) + cp,\n r1 * r2 * (1 - cp) - r3 * sp,\n r1 * r3 * (1 - cp) + r2 * sp,\n ],\n [\n r1 * r2 * (1 - cp) + r3 * sp,\n r2 ** 2 * (1 - cp) + cp,\n r2 * r3 * (1 - cp) - r1 * sp,\n ],\n [\n r3 * r1 * (1 - cp) - r2 * sp,\n r2 * r3 * (1 - cp) + r1 * sp,\n r3 ** 2 * (1 - cp) + cp,\n ],\n ]\n )\n # Clean spurious terms\n for ii, jj in np.ndindex(Rm.shape):\n if abs(Rm[ii, jj]) < 1e-10:\n Rm[ii, jj] = 0\n sp = np.sin(phi)\n return Rm\n\n\ndef get_transposition(nd, verbose=0):\n # Make a basis:\n basis_nd = []\n for ii in range(nd):\n for jj in range(nd):\n emp = np.zeros([nd, nd], dtype=\"int8\")\n bb = emp\n bb[ii, jj] = 1\n basis_nd.append(bb)\n #\n # Build transpose and return:\n transp = np.array([bb.T.flatten() for bb in basis_nd])\n\n return transp\n\n\n# pretty print matrix\ndef pprint(M, maxl=80):\n M = np.atleast_2d(M)\n dim1, dim2 = M.shape\n lengths = [len(str(el)) for el in M.flatten()]\n maxl = min(maxl, max(lengths))\n for ii in range(dim1):\n print(\n (f\" | \" + \" , \".join([f\"{str(el)[:maxl]:{maxl}s}\" for el in M[ii]]) + \" |\")\n )\n```\n\n### Start: Define a matrix $A$\n* Define a general 3x3 matrix $A$ and its 9x1 vectorized representation\n$$ \\boldsymbol{a} = {\\textbf{vec}} (A)$$\n\n* $A$ and $\\boldsymbol a$ can, e.g., represent a physical tensor (conductivity, stress, ...)\n\n\n```python\nndim = 3\nsym = sp.Symbol\nA = sp.ones(ndim, ndim)\nfor ii in range(ndim):\n for jj in range(ndim):\n A[ndim * ii + jj] = sym(f\"a_{ii}{jj}\")\nA = np.array(A)\na = A.flatten()\nprint(\"Matrix A:\")\npprint(A)\nprint(\"vec(A):\")\npprint(a)\n```\n\n Matrix A:\n | a_00 , a_01 , a_02 |\n | a_10 , a_11 , a_12 |\n | a_20 , a_21 , a_22 |\n vec(A):\n | a_00 , a_01 , a_02 , a_10 , a_11 , a_12 , a_20 , a_21 , a_22 |\n\n\n### Case 1: Cubic system\n* Define 3 rotation matrices $M_i$ that implement a 90\u00b0 rotation about $x$, $y$, and $z$ axis\n* Construct the rotation matrices in the flattened representatin by Roth's Relationship, i.e.\n$$ M_\\text{flat} = M \\otimes M $$\n* Add transposition (not necessary in cubic case)\n\n\n```python\nr1 = get_rotation_matrix(np.pi / 2, [1, 0, 0])\nr2 = get_rotation_matrix(np.pi / 2, [0, 1, 0])\nr3 = get_rotation_matrix(np.pi / 2, [0, 0, 1])\npprint(r1)\n```\n\n | 1.0 , 0.0 , 0.0 |\n | 0.0 , 0.0 , -1.0 |\n | 0.0 , 1.0 , 0.0 |\n\n\n#### Construct big matrices implementing rotations (+transposition) of the vectorized tensor\n\n\n```python\nR1 = np.kron(r1, r1)\nR2 = np.kron(r2, r2)\nR3 = np.kron(r3, r3)\nT = get_transposition(ndim)\n```\n\n#### Now sum up the matrices we want to invariant under, i.e.\n$$ \\sum_i (\\mathbf 1 - M_i)~a = 0$$\n\n\n```python\nid = np.eye(len(a))\ninv = 4*id - R1 - R2 - R3 - T\n```\n\n#### Consctruct nullspace by SVD\n__[scipy-cookbook.readthedocs.io/items/RankNullspace.html](http://scipy-cookbook.readthedocs.io/items/RankNullspace.html)__\n\n\n```python\nu, s, vh = la.svd(inv, lapack_driver=\"gesdd\")\nrank = (s > 1e-12).sum()\n\nprint(f\"Initial Dimension: {len(a)}\")\nprint(f\"Rank of Nullspace (= No. irred. 
elements): {len(a) - rank}\")\n```\n\n Initial Dimension: 9\n Rank of Nullspace (= No. irred. elements): 1\n\n\n#### Construct matrices translating between full and reduced representation\n\n* $S$ reconstructs the full representation $a$ from a given reduced $\\tilde a$. One can think of $\\tilde a$ as being the components of $a$ in the basis given by the vectors in $S$:\n$$ a = S \\, \\tilde{a}$$\n\n* $S^+$ (pseudo-inverse, often just transpose) projects a given $a$ onto the irreducible components $\\tilde a$ \n$$\\tilde{a} = S^+ a$$\n\n\n```python\nS = vh[rank:].T\nSp = S.T\n```\n\n#### Build projectors\n* $P = S^\\vphantom{+} S^+$\n\n* $\\boldsymbol{1}_\\text{irred} = S^+ S^\\phantom{+}$\n\n\n```python\nP = S @ Sp\nid_irred = Sp @ S\nprint(\"Projector onto invariant space\")\npprint(P, 4)\nprint(\"Identity within invariant space\")\npprint(id_irred)\n```\n\n Projector onto invariant space\n | 0.33 , 0.0 , 1.77 , -6.9 , 0.33 , 1.01 , 1.29 , -5.2 , 0.33 |\n | 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 , 0.0 |\n | 1.77 , 0.0 , 9.49 , -3.6 , 1.77 , 5.44 , 6.91 , -2.8 , 1.77 |\n | -6.9 , 0.0 , -3.6 , 1.43 , -6.9 , -2.1 , -2.6 , 1.09 , -6.9 |\n | 0.33 , 0.0 , 1.77 , -6.9 , 0.33 , 1.01 , 1.29 , -5.2 , 0.33 |\n | 1.01 , 0.0 , 5.44 , -2.1 , 1.01 , 3.11 , 3.96 , -1.6 , 1.01 |\n | 1.29 , 0.0 , 6.91 , -2.6 , 1.29 , 3.96 , 5.04 , -2.0 , 1.29 |\n | -5.2 , 0.0 , -2.8 , 1.09 , -5.2 , -1.6 , -2.0 , 8.28 , -5.2 |\n | 0.33 , 0.0 , 1.77 , -6.9 , 0.33 , 1.01 , 1.29 , -5.2 , 0.33 |\n Identity within invariant space\n | 1.0000000000000002 |\n\n\n#### Symmetrize the tensor with the projector obtained\n$$ \\text{sym} (a) = P a = S^\\vphantom{+} S^+ a = S \\, \\tilde a $$\n\n\n```python\naT = np.dot(Sp, a)\naS = np.dot(S, aT) # = np.dot(P, a)\n```\n\n#### How does the matrix $A$ now look like?\n$$A = \\text{unvec} \\left( \\text{sym} (a) \\right)$$\n\n\n```python\nAs = aS.reshape(3,3)\n\nprint('1/3*')\npprint(3*As)\n```\n\n 1/3*\n | 1.0*a_00 + 5.33729362479519e-33*a_02 - 2.07749100017982e-35*a_10 + 1.0*a_11 + 3. , 0 , 5.33729362479519e-33*a_00 + 2.84867032372794e-65*a_02 - 1.10881794708291e-67*a_1 |\n | -2.07749100017982e-35*a_00 - 1.10881794708291e-67*a_02 + 4.31596885582815e-70*a_ , 1.0*a_00 + 5.33729362479519e-33*a_02 - 2.07749100017982e-35*a_10 + 1.0*a_11 + 3. , 3.05816280424746e-34*a_00 + 1.63223128386957e-66*a_02 - 6.35330570290877e-69*a_1 |\n | 3.8891592043194e-34*a_00 + 2.07575846270274e-66*a_02 - 8.07969324524005e-69*a_10 , -1.57643859172956e-33*a_00 - 8.41391564551927e-66*a_02 + 3.2750369866543e-68*a_1 , 1.0*a_00 + 5.33729362479519e-33*a_02 - 2.07749100017982e-35*a_10 + 1.0*a_11 + 3. 
|\n\n\n$= \\frac{1}{3} \\text{Tr A}$ \n\n## How about hexagonal?\nStart with defining three-fold rotations in $x,y$ plane, i.e., about $z$ axis\n\n\n```python\nr1 = get_rotation_matrix(np.pi / 6, [0, 0, 1])\nr2 = get_rotation_matrix(np.pi / 3, [0, 0, 1])\n```\n\nConstruct the matrices in the flatten represenation as before\n\n\n```python\nR1 = np.kron(r1, r1)\nR2 = np.kron(r2, r2)\nT = get_transposition(ndim)\n```\n\n\n```python\nid = np.eye(len(a))\ninv = 3*id - R1 - R2 - T\n```\n\n\n```python\n# Consctruct nullspace\nu, s, vh = la.svd(inv, lapack_driver=\"gesdd\")\nrank = (s > 1e-12).sum()\n\nprint(f\"Dimension: {len(a)}\")\nprint(f\"Rank of Nullspace: {len(a) - rank}\")\n```\n\n Dimension: 9\n Rank of Nullspace: 2\n\n\n\n```python\n# Nullspace:\nS = vh[rank:].T\n# clean S\nfor ii, jj in np.ndindex(S.shape):\n if abs(S[ii, jj]) < 1e-10:\n S[ii, jj] = 0\nSp = S.T\n```\n\nHow do the projectors look like?\n\n\n```python\nprint(S@Sp)\n#print(Sp)\n#print(S@Sp)\nprint(Sp@S)\n```\n\n [[0.5 0. 0. 0. 0.5 0. 0. 0. 0. ]\n [0. 0. 0. 0. 0. 0. 0. 0. 0. ]\n [0. 0. 0. 0. 0. 0. 0. 0. 0. ]\n [0. 0. 0. 0. 0. 0. 0. 0. 0. ]\n [0.5 0. 0. 0. 0.5 0. 0. 0. 0. ]\n [0. 0. 0. 0. 0. 0. 0. 0. 0. ]\n [0. 0. 0. 0. 0. 0. 0. 0. 0. ]\n [0. 0. 0. 0. 0. 0. 0. 0. 0. ]\n [0. 0. 0. 0. 0. 0. 0. 0. 1. ]]\n [[1. 0.]\n [0. 1.]]\n\n\n\n```python\n# Projector\nP = S@Sp\n```\n\n\n```python\n# Symmetrize a\naS = np.dot(P, a)\n```\n\n\n```python\n# Restore shape\nAs = aS.reshape(3,3)\n```\n\n\n```python\npprint(As)\n```\n\n | 0.5*a_00 + 0.5*a_11 , 0 , 0 |\n | 0 , 0.5*a_00 + 0.5*a_11 , 0 |\n | 0 , 0 , 1.0*a_22 |\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "7c7fb3f348760336303059077dc592f38b4ae7d1", "size": 14484, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "mathematics/nullspace/coffee_talk18.ipynb", "max_stars_repo_name": "flokno/python_recipes", "max_stars_repo_head_hexsha": "a09da65528ce3a2f1fd884aea361275b9aab0c15", "max_stars_repo_licenses": ["0BSD"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-05-06T14:38:16.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-05T09:43:04.000Z", "max_issues_repo_path": "mathematics/nullspace/coffee_talk18.ipynb", "max_issues_repo_name": "flokno/python_recipes", "max_issues_repo_head_hexsha": "a09da65528ce3a2f1fd884aea361275b9aab0c15", "max_issues_repo_licenses": ["0BSD"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "mathematics/nullspace/coffee_talk18.ipynb", "max_forks_repo_name": "flokno/python_recipes", "max_forks_repo_head_hexsha": "a09da65528ce3a2f1fd884aea361275b9aab0c15", "max_forks_repo_licenses": ["0BSD"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 25.9105545617, "max_line_length": 264, "alphanum_fraction": 0.43668876, "converted": true, "num_tokens": 3453, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9458012686491108, "lm_q2_score": 0.8774767810736693, "lm_q1q2_score": 0.8299186527496145}} {"text": "## Enzyme-substrate kinetics\n\nWe will now solve the differential equations that result from a generic enzimatic reaction. 
The scheme of the reaction is as follows:\n$$ s + e \\overset{k_1}{\\underset{k_2}{\\longleftrightarrow}} c \\overset{k_3}{\\longrightarrow} p + e \\tag{1}$$\n\nwhere $s$ represents the substrate, $e$ represents the enzyme, $c$ represents the complex, and $p$ the product of the reaction. The reaction can be written as three irreversible reactions: \n$$\\begin{align*}\ns + e \\rightarrow c \\tag{2}\\\\\nc \\rightarrow s + e \\tag{3}\\\\\nc \\rightarrow p + e \\tag{4}\n \\end{align*}$$\n\n\nwith kinetic constants `k1`, `k2` and `k3`, respectively. If we rename variables as $X_1=s,X_2=c, X_3=e, X_4=p$ to use the state vector approach.\n\n$$\\begin{align*}\nX_1 + X_3 \\rightarrow X_2 \\tag{2}\\\\\nX_2 \\rightarrow X_1 + X_3 \\tag{3}\\\\\nX_2 \\rightarrow X_3 + X_4 \\tag{4}\n \\end{align*}$$\n\n\nThe kinetics of these chemical equations is described by the __Mass action Law__ \n\n\n$$\nA=\\begin{bmatrix}\n 1 & 0 & 1 & 0\\\\\n 0 & 1 & 0 & 0 \\\\\n 0 & 1 & 0 & 0\\end{bmatrix} ;\nB=\\begin{bmatrix}\n 0 & 1 & 0 & 0\\\\\n 1 & 0 & 1 & 0 \\\\\n 0 & 0 & 1 & 1\\end{bmatrix} ; (B-A)^T= \\begin{bmatrix}\n-1 & 1 & 0\\\\\n 1 & -1 & -1\\\\\n -1 & 1 & 1 \\\\\n 0 & 0 & 1\\end{bmatrix} \n$$\n\n in this particular case\n\n$$\nK=\\begin{pmatrix}\n k_1 & 0 & 0 \\\\\n 0 & k_2 & 0 \\\\\n 0 & 0 & k_3 \n\\end{pmatrix}\n$$\n\n\n$$X^A=\\begin{pmatrix}\nX_1^1\\cdot X_2^0 \\cdot X_3^1 \\cdot X_4^0 \\\\\nX_1^0\\cdot X_2^1 \\cdot X_3^0 \\cdot X_4^0 \\\\\nX_1^0\\cdot X_2^1 \\cdot X_3^0 \\cdot X_4^0 \\\\\n\\end{pmatrix} = \\begin{pmatrix}\nX_1 \\cdot X_3\\\\\n X_2\\\\\n X_2\n\\end{pmatrix}\n$$\n\nso \n\n$$\n\\begin{align}\n \\begin{bmatrix}\n\\frac{\\mathrm{d} X_1}{\\mathrm{d} t}\\\\ \\frac{\\mathrm{d} X_2}{\\mathrm{d} t} \\\\ \\frac{\\mathrm{d} X_3}{\\mathrm{d} t} \\\\ \\frac{\\mathrm{d} X_4}{\\mathrm{d} t}\\end{bmatrix}= \\begin{bmatrix}\n-1 & 1 & 0\\\\\n 1 & -1 & -1\\\\\n -1 & 1 & 1 \\\\\n 0 & 0 & 1\\end{bmatrix}\\begin{pmatrix}\n k_1 & 0 & 0 \\\\\n 0 & k_2 & 0 \\\\\n 0 & 0 & k_3 \n\\end{pmatrix} \\begin{pmatrix}\nX_1 \\cdot X_3\\\\\n X_2\\\\\n X_2\n\\end{pmatrix} \n\\end{align}\n$$\nthat when we operate the matrices becomes\n\n$$\n\\begin{align}\n \\begin{bmatrix}\n\\frac{\\mathrm{d} X_1}{\\mathrm{d} t}\\\\ \\frac{\\mathrm{d} X_2}{\\mathrm{d} t} \\\\ \\frac{\\mathrm{d} X_3}{\\mathrm{d} t} \\\\ \\frac{\\mathrm{d} X_4}{\\mathrm{d} t}\\end{bmatrix}= \\begin{bmatrix}\n-1 & 1 & 0\\\\\n 1 & -1 & -1\\\\\n -1 & 1 & 1 \\\\\n 0 & 0 & 1\\end{bmatrix} \\begin{pmatrix}\nk_1 \\cdot X_1 \\cdot X_3\\\\\nk_2 \\cdot X_2\\\\\nk_3 \\cdot X_2\n\\end{pmatrix} \n\\end{align}\n$$\nand operating further,\n\n$$\n\\begin{align}\n \\begin{bmatrix}\n\\frac{\\mathrm{d} X_1}{\\mathrm{d} t}\\\\ \\frac{\\mathrm{d} X_2}{\\mathrm{d} t} \\\\ \\frac{\\mathrm{d} X_3}{\\mathrm{d} t} \\\\ \\frac{\\mathrm{d} X_4}{\\mathrm{d} t}\\end{bmatrix}= \\begin{bmatrix}\n-k_1 \\cdot X_1 \\cdot X_3 + k_2 \\cdot X_2\\\\\n k_1 \\cdot X_1 \\cdot X_3 - k_2 \\cdot X_2 - k_3 \\cdot X_2\\\\\n-k_1 \\cdot X_1 \\cdot X_3 + k_2 \\cdot X_2 + k_3 \\cdot X_2\\\\\n k_3 \\cdot X_2\\end{bmatrix} \n\\end{align}\n$$\n\nwhich rearranging terms provides the following set of coupled differential equations \n\n$$\n\\begin{align*}\n \\dot{X_1}&= k_2 \\cdot X_2 - k_1 X_3 \\cdot X_1 \\tag{8}\\\\ \n \\dot{X_2}&= k_1 \\cdot X_3 \\cdot X_1 - (k_2+k_3) \\cdot X_2 \\tag{9}\\\\\n \\dot{X_3}&= (k_2+k_3) \\cdot X_2 - k_1 \\cdot X_3(t) \\cdot X_1 \\tag{10}\\\\ \n \\dot{X_4}&= k_3 \\cdot X_2 \\tag{11}\n\\end{align*}\n$$\n\nTo solve it numericaly, we define the ODE problem as follows:\n 
\n\n\n```julia\nfunction simpleODEEnzyme!(du,u,p,t)\n k1,k2,k3 = p\n du[1] = k2*u[2] - k1*u[1]*u[3]\n du[2] = k1*u[1]*u[3] - (k2 + k3) * u[2]\n du[3] = -k1*u[1]*u[3]+(k2 + k3) * u[2]\n du[4] = k3 * u[2] \nend\n```\n\n\n\n\n simpleODEEnzyme! (generic function with 1 method)\n\n\n\n\n```julia\n#import Pkg; Pkg.add(\"ParameterizedFunctions\")\n#import Pkg; Pkg.add(\"DifferentialEquations\")\nimport Pkg; Pkg.add(\"Plots\")\n```\n\n \u001b[32m\u001b[1m Updating\u001b[22m\u001b[39m registry at `~/.julia/registries/General`\n \u001b[32m\u001b[1m Updating\u001b[22m\u001b[39m git-repo `https://github.com/JuliaRegistries/General.git`\n \u001b[?25l\u001b[2K\u001b[?25h\u001b[32m\u001b[1m Resolving\u001b[22m\u001b[39m package versions...\n \u001b[32m\u001b[1m Updating\u001b[22m\u001b[39m `~/.julia/environments/v1.3/Project.toml`\n \u001b[90m [no changes]\u001b[39m\n \u001b[32m\u001b[1m Updating\u001b[22m\u001b[39m `~/.julia/environments/v1.3/Manifest.toml`\n \u001b[90m [no changes]\u001b[39m\n\n\n\n```julia\nusing DifferentialEquations\nusing ParameterizedFunctions\n```\n\n\n```julia\ntspan = (0.0,100)\nk1=1e3\nk2=0.1\nk3=0.05\ne0=0.002\ns0=0.002\n\n\nu0=[s0,0,e0,0]\np=[k1,k2,k3];\n```\n\n\n```julia\nproblem = ODEProblem(simpleODEEnzyme!,u0,tspan,p)\n```\n\n\n\n\n \u001b[36mODEProblem\u001b[0m with uType \u001b[36mArray{Float64,1}\u001b[0m and tType \u001b[36mFloat64\u001b[0m. In-place: \u001b[36mtrue\u001b[0m\n timespan: (0.0, 100.0)\n u0: [0.002, 0.0, 0.002, 0.0]\n\n\n\n\n```julia\nusing Plots; gr()\n```\n\n\n\n\n Plots.GRBackend()\n\n\n\n\n```julia\nsol = solve(problem)\nplot(sol)\ntitle!(\"Enzyme kinetics using ODE and vector matrix notation\")\nxlabel!(\"Time [s]\")\nylabel!(\"Concentration [M]\")\n#png(\"Michaelis_menten.png\")\n```\n\n\n\n\n \n\n \n\n\n\nWe can also use the simpler DSL direct notation, this is useful because we can use directly the differential equations with the original names of the variables:\n\n$$\\begin{align*}\n \\dot{s} &= k_2 \\cdot c - k_1 \\cdot e \\cdot s \\tag{5}\\\\\n \\dot{c} &= k_1 \\cdot e\\cdot s - (k_2+k_3) \\cdot c \\tag{6}\\\\\n \\dot{e} &= (k_2+k_3) \\cdot c -k_1 \\cdot e \\cdot s \\tag{7}\\\\\n \\dot{p} &= k_3 \\cdot c\n \\end{align*}$$\n \n(in Julia, we cannot use $e$ for the enzyme, so we use $en$)\n\n\n```julia\nenzyme_kinetics! = @ode_def ab begin\n ds = -k1*en*s+k2*c\n dc = k1*en*s-(k2+k3)*c\n den = -k1*en*s+(k2+k3)*c\n dp = k3*c\n end k1 k2 k3\n```\n\n\n\n\n (::ab{var\"#3#7\",var\"#4#8\",var\"#5#9\",Nothing,Nothing,var\"#6#10\",Expr,Expr}) (generic function with 2 methods)\n\n\n\n\n```julia\nprob = ODEProblem(enzyme_kinetics!,u0,tspan,p)\nsol = solve(prob)\nplot(sol)\ntitle!(\"Enzyme-substrate kinetics using DSL notation\")\nxlabel!(\"Time [s]\")\nylabel!(\"Concentration [M]\")\n```\n\n\n\n\n \n\n \n\n\n\nThe next step is to take advantage of the __Mass Conservation Law__ to reduce the number of independent variables (i.e, the number of differential equations to solve). 
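\n\nBefore deriving it formally, we can check this numerically on the solution `sol` obtained above (a minimal sketch that reuses `sol` and the state ordering `s, c, en, p` of `enzyme_kinetics!`):\n\n```julia\n# Sketch: the total protein (s + c + p) and the total enzyme (en + c)\n# should stay constant along the simulated trajectory.\nprotein_total = [u[1] + u[2] + u[4] for (u,t) in tuples(sol)]\nenzyme_total = [u[2] + u[3] for (u,t) in tuples(sol)]\nprintln(extrema(protein_total)) # both values should be close to s0 = 0.002\nprintln(extrema(enzyme_total)) # both values should be close to e0 = 0.002\n```\n\nFormally, we look for constant linear combinations of the state variables: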
\n\n\n\n$$ \\begin{align*}\nC_1 \\cdot X_1 + C_2 \\cdot X_2 + C_3 \\cdot X_3 + C_4 \\cdot X_4 & = cte \\\\\n\\end{align*}$$\n\nso\n$$ \\begin{align*}\nC \\cdot (B-A)^T &=0\\\\ \n\\begin{pmatrix}C_1 & C_2 & C_3 & C_4 \\end{pmatrix} \\begin{bmatrix}\n-1 & 1 & 0\\\\\n 1 & -1 & -1\\\\\n -1 & 1 & 1 \\\\\n 0 & 0 & 1\\end{bmatrix} &=0\n \\end{align*}$$\n \nthe solution of the system of equations is: \n $$ \\begin{align*}\n -C_1 + C_2 - C_3= 0 &\\implies C_2 = C_3 + C_1\\\\\n -C_2 + C_3 + C_4= 0 &\\implies C_2 = C_3 + C_4 \n\\end{align*}$$\n\nTherefore, from the two equations above, $C_1 = C_4$, and the conservation is:\n\n$$ \\begin{align*}\nC_1 \\cdot X_1(t) + (C_3 + C_1) \\cdot X_2(t) + C_3 \\cdot X_3(t) + C_1 \\cdot X_4(t) &= cte\\\\\nC_1 \\cdot X_1(t) + C_3 \\cdot X_2(t) + C_1 \\cdot X_2(t) + C_3 \\cdot X_3(t) + C_1 \\cdot X_4(t) &= cte\\\\\n\\end{align*}$$\n\nwhich is valid for any value of $C_1$ and $C_3$, so, separating these two dependencies\n\n$$ \\begin{align*}\nC_1 \\cdot X_1(t) + C_1 \\cdot X_2(t) + C_1 \\cdot X_4(t) &= cte\\\\\n\\end{align*}$$\n\nwhich makes sense, since the $s$, $c$ and $p$ are three forms of the same protein, and their sum has to be constant,a nd equal to the initial value of substrate used. Using the value $C_1=1$ we obtain:\n\n$$ \\begin{align*}\nX_1(t) + X_2(t) + X_4(t) &= cte = X_1(0) + X_2(0) + X_4(0) = X_1(0) \\\\\n\\end{align*}$$\n\nis our first conservation law (typically we start a biochemical reaction adding only $e$ and $s$). For the second, we use the parameters that depend on the other constant $C_2$ \n\n$$ \\begin{align*}\nC_3 \\cdot X_2(t) + C_3 \\cdot X_3(t) &= cte\n\\end{align*}$$\n\nwhich also makes sense, because the amoun of enzyme is not consumed and is only in teh form of the reactans $e$ and $c$. So, giving the value $C_3=1$, we obtain the conservation law, \n$$ \\begin{align*}\nX_2(t) + X_3(t) & = cte &= X_2(0) + X_3(0) = X_3 (0) \n\\end{align*}$$\n\nwhich is true because the enzyme is a catalyst that facilitates the reaction but does not react itself, and if we assume that there is no complex $c(0)=0$ before the reacton starts, we can assume ($e(0)+c(0)=e_0$). This conservation law allows us to reduce the four differential equations into the following three coupled ordinary differential equations:\n\n$$\\begin{align*}\n \\dot{s} &= k_2 \\cdot c - k_1 \\cdot (e_0-c) \\cdot s \\tag{12}\\\\\n \\dot{c} &= k_1 \\cdot (e_0-c)\\cdot s - (k_2+k_3) \\cdot c \\tag{13}\\\\\n \\dot{p} &= k_3 \\cdot c \\tag{14}\n \\end{align*}$$\n \nwhich rearranging terms become:\n\n$$\\begin{align*}\n \\dot{s} &= (k_2+k_1 \\cdot s) \\cdot c -k_1 \\cdot e_0 \\cdot s \\tag{15}\\\\ \n \\dot{c} &= k_1\\cdot e_0\\cdot s-(k_3+k_2+k_1 \\cdot s) \\cdot c \\tag{16}\\\\\n \\dot{p} &= k_3 \\cdot c \\tag{17}\n \\end{align*}$$\n \n\n\n```julia\nenzyme_kinetics2! = @ode_def ab begin\n ds = (k2+s*k1)*c-k1*e0*s\n dc = k1*e0*s-(k2+k3+k1*s)*c\n dp = k3*c\n end k1 k2 k3 e0\n```\n\n\n\n\n (::ab{var\"#67#71\",var\"#68#72\",var\"#69#73\",Nothing,Nothing,var\"#70#74\",Expr,Expr}) (generic function with 2 methods)\n\n\n\n\n```julia\ntspan = (0.0,100.0)\nu0=[s0,0,0]\np=[k1,k2,k3,e0];\n\nprob = ODEProblem(enzyme_kinetics2!,u0,tspan,p)\n```\n\n\n\n\n \u001b[36mODEProblem\u001b[0m with uType \u001b[36mArray{Float64,1}\u001b[0m and tType \u001b[36mFloat64\u001b[0m. 
In-place: \u001b[36mtrue\u001b[0m\n timespan: (0.0, 100.0)\n u0: [0.002, 0.0, 0.0]\n\n\n\n\n```julia\nsol_ = solve(prob)\nplot(sol_)\ne=[p[4]-u[2] for (u,t) in tuples(sol_)]\nplot!(sol_.t,e,\n label=\"e\",\n linealpha = 1,\n linewidth = 3,\n linecolor = :purple)\ntitle!(\"Enzyme-substrate kinetics simplified using mass action\")\nxlabel!(\"Time [s]\")\nylabel!(\"Concentration [M]\")\n```\n\n\n\n\n \n\n \n\n\n\n## Quasi steady state approximation:\n\nThe other conservation law could be used to reduce teh system to just two equations, but instead, we will use another approiximation that allows us to go further and reduce the system to a single differential equation. Thsi aproximation si swidely used in all biochemistry books and has been famously proposed by Miachelis and Menten. In briefm it assumes that the formation and degradation of the intermediate complex is often very fast and reaches equilibrium early in the reaction. \n\n$$k_1\\cdot x_1(0) \u2248 k_2 >> k_3 \\tag{18}$$ \n\n\nThe substrate-enzyme binding occurs at much faster time scales than the turnover into product (often the case in biologically relevant biochemical reactions)\n\nUnder these circumstances one expects that after an initial short transient period there will be a balance between the formation of the enzyme-substrate complex and the breaking apart of complex (either to enzyme and substrate, or to enzyme and product) \n\n$$ \\frac{d c}{dt}= \\frac{d e}{dt}=0 \\tag{19}$$\n\nIn these conditions, we can calculate the equilibrium for $c$ from eq. 2.16 as \n\n$$c_{eq}=\\frac{k_1\\cdot e_0\\cdot s}{k_3+k_2+k_1 \\cdot s} \\tag{20}$$\n\nThis is called the __pseudo-steady state aproximation__ for biochemical systems\n\n\n```julia\nc_eq=[(p[1] * p[4] * u[1])/(p[3]+p[2]+p[1] * u[1]) for (u,t) in tuples(sol)]\nplot!(sol.t,c_eq,\n label=\"c_eq\", \n linealpha = 0.5,\n linewidth = 3,\n linestyle= :dash,\n linecolor = :red)\n```\n\n\n\n\n \n\n \n\n\n\nAs we can see, the equilibrium solution $c_{eq}$ is quite close to the concentration of $c$, in this conditions we can simplify all equations as:\n \n$$\\begin{align*}\n \\dot{s} &= (k_2+k_1 \\cdot s) \\cdot \\frac{k_1\\cdot e_0\\cdot s}{k_3+k_2+k_1 \\cdot s} -k_1 \\cdot e_0 \\cdot s \\tag{21}\\\\ \n \\dot{p} &= k_3 \\frac{k_1\\cdot e_0\\cdot s}{k_3+k_2+k_1 \\cdot s} \\tag{22}\n\\end{align*}$$\n\nEq. 2.20 can be rewritten in a more simplified form, after some some algebraic manipulation:\n\n$$\\begin{align*}\n \\dot{s} &= (k_2+k_1 \\cdot s) \\cdot \\frac{k_1\\cdot e_0\\cdot s}{k_3+k_2+k_1 \\cdot s} -k_1 \\cdot e_0 \\cdot s \\tag{23}\\\\ \n \\dot{s} &= k_1 \\cdot e_0 \\cdot s \\left [ \\frac{k_2+k_1 \\cdot s}{k_3+k_2+k_1 \\cdot s} -1 \\right ] \\tag{24}\\\\ \n\\dot{s} &= k_1 \\cdot e_0 \\cdot s \\left [ \\frac{k_2+k_1 \\cdot s - k_3-k_2-k_1 \\cdot s}{k_3+k_2+k_1 \\cdot s} \\right ] \\tag{25}\\\\ \n\\dot{s} &= \\left [ \\frac{- k_3\\cdot k_1 \\cdot e_0 \\cdot s}{k_3+k_2+k_1 \\cdot s} \\right ] \\tag{26}\\\\ \n\\end{align*}$$\n\nSo the final set of equations takes the form:\n\n$$\\begin{align*}\n \\dot{s} &=- \\frac{k_1 \\cdot k_3 \\cdot e_0 \\cdot s}{k_3+k_2+k_1 \\cdot s} \\tag{27}\\\\ \n \\dot{p} &= \\frac{ k_3 \\cdot k_1\\cdot e_0\\cdot s}{k_3+k_2+k_1 \\cdot s} \\tag{28}\n\\end{align*}$$\n\n\n\n\n```julia\nenzyme_kinetics10! 
= @ode_def ab begin\n ds = - (k3 * e0 * s)/((k3+k2)/k1 + s)\n dp = (k3 * e0 * s)/((k3+k2)/k1 + s)\n end k1 k2 k3 e0\n\n```\n\n\n\n\n (::ab{var\"#79#83\",var\"#80#84\",var\"#81#85\",Nothing,Nothing,var\"#82#86\",Expr,Expr}) (generic function with 2 methods)\n\n\n\n\n```julia\ntspan = (0.0,100.0)\n\nk1=1e3\nk2=0.1\nk3=0.05\ne0=0.002\ns0=0.002\n\nu0=[s0,0.00001]\np=[k1,k2,k3,e0];\nprob10 = ODEProblem(enzyme_kinetics10!,u0,tspan,p)\n\n```\n\n\n\n\n \u001b[36mODEProblem\u001b[0m with uType \u001b[36mArray{Float64,1}\u001b[0m and tType \u001b[36mFloat64\u001b[0m. In-place: \u001b[36mtrue\u001b[0m\n timespan: (0.0, 100.0)\n u0: [0.002, 1.0e-5]\n\n\n\n\n```julia\nsol10 = solve(prob10)\nplot(sol10)\n```\n\n\n\n\n \n\n \n\n\n\n\nNow, the conservation of mass $s+p=s_0$ allows us to reduce everything to a single ODE:\n$$\n \\dot{p} = \\frac{ k_3 \\cdot k_1\\cdot e_0\\cdot (s_0-p)}{k_3+k_2+k_1 \\cdot (s_0-p)} \\tag{29}\n$$\n\nOr, if the divide every term by $k_1$\n$$\n \\dot{p} = \\frac{ k_3 \\cdot e_0\\cdot (s_0-p)}{\\frac{k_3+k_2}{k_1}+(s_0-p)} \\tag{30}\n$$\n\n\n```julia\nenzyme_kinetics3! = @ode_def ab begin\n dp = (k3 * e0 * (s0 - p))/((k3+k2)/k1 + (s0 - p))\n end k1 k2 k3 e0 s0\ntspan = (0.0,100.0)\n\nu0=[0.00001]\np=[k1,k2,k3,e0,s0];\nprob2 = ODEProblem(enzyme_kinetics3!,u0,tspan,p)\nsol2 = solve(prob2)\nplot(sol2)\n```\n\n\n\n\n \n\n \n\n\n\nWe can compare directly this steady state aproximation wit the mass action solution for the product\n\n\n```julia\npp=[u[3] for (u,t) in tuples(sol_)]\nplot!(sol_.t,pp,\n label=\"p full mass action\",\n linealpha = 1,\n linewidth = 3,\n linecolor = :purple)\n```\n\n\n\n\n \n\n \n\n\n\nThe steady state solution aproximates quite well to the rate of the reaction at the starting point of the reaction, but not at the intermediate time points. This, of course, will depend on the reaction parameters and the initial conditions. We can set up a program to test when the steady state aproximation is correct.\n\n\n## Michaelis-Menten equation: \n\nThe previous equation \n\n$$\n \\dot{p} = \\frac{ k_3 \\cdot e_0\\cdot (s_0-p)}{\\frac{k_3+k_2}{k_1}+(s_0-p)} \\tag{31}\n$$\n\nleads to the traditional Michaelis-Menten equation, which predicts the initial turnover rate of the enzymatic reaction $V_0$ as a function of initial substrate concentration $s_0$. So at the initial state of the reaction ($p=0$):\n\n$$\n V_0 = \\frac{ k_3 \\cdot e_0\\cdot s_0}{\\frac{k_3+k_2}{k_1}+s_0}=\\frac{ v_{max} \\cdot s_0}{K_M+s_0} \\tag{32}\n$$\n\n\nwhere the constant $K_M = \\frac{k_3+k_2}{k_1}$ is called the Michaelis-Menten constant and $v_{max}= k_3 \\cdot e_0$ is the maximum turn-over rate. \nThe $K_M$ reflects the affinity of the reaction. Strong affinity means small $K_M$. At $s_0=K_M$ the turn-over rate is half maximal, i.e., $\\dot{p}=\\frac{v_{max}}{2}$\n\n\n\n### Michaelis-Menten curve\n\nThe famous Michaelis-Menten plot is a culve that shows the dependence of the velocity of the reaction in terms of production of product (i.e., $\\dot p$) on teh concetration of substrate. It showns three regimes:\n- __linear__: there is a lot of enzyme available to bind the substrate, so the velocity of teh reaction increases linearly with the concetration of subtrate, as it ocurs in noncatalized reactions. 
\n- __Constrained__: the intermediate regime \n- __Saturated__: there is a lot of substrate available, and the enzyme is the limiting factor for the catalysis, so the speed of the reaction does not increase if we further increase the concentration of substrate.\n\n\n```julia\ns_vector= s0*LinRange(0,1,100)\nplot(s_vector,k3 * e0 * s_vector ./(s_vector.+(k3+k2)/k1),label=\"Michaelis Menten\",)\nprintln(\"K_m = \",(k3+k2)/k1)\ntitle!(\"Michaelis-Menten Plot\")\nxlabel!(\"Concentration of substrate\")\nylabel!(\"Speed of the reaction\")\n```\n\n K_m = 0.00015000000000000001\n\n\n\n\n\n \n\n \n\n\n\n### How good is the Michaelis-Menten Approach\n\nWe can compare this analytical equation with the result of the numerical simulation of the full ODE system. To do that, we solve the numerical system and compare the rates of the reaction predicted by the Mass action and the Michaelis-Menten approaches.\n\n\n```julia\n# Alternative parameter set, kept commented out so that the values defined above are used:\n# k1=1e2; k2=1e0; k3=0.01; e0=1e-2; s0=1e-2\n\ns_vector= s0*LinRange(0,1,100)*5\np=[k1,k2,k3,e0]\nplot(s_vector,k3 * e0 * s_vector ./(s_vector.+(k3+k2)/k1),label=\"Michaelis Menten\",)\ntitle!(\"Michaelis-Menten Plot\")\nxlabel!(\"Concentration of substrate\")\nylabel!(\"Speed of the reaction\")\n\ntspan = (0.0,10)\n#p=[1e3,0.1,0.05,0.001]\nspeed_vector = similar(s_vector)\nfor i in 1:100 \n u0=[s_vector[i],0,0]\n prob = ODEProblem(enzyme_kinetics2!,u0,tspan,p)\n sol_ = solve(prob)\n # we should compute the speed at the maximum formation of complex. \n speed=[k3*u[2] for (u,t) in tuples(sol_)]\n speed_vector[i]=maximum(speed)\nend\nplot!(s_vector,speed_vector,label=\"rate full mass action\",)\ntitle!(\"Michaelis-Menten vs. Mass action\")\nxlabel!(\"Concentration of substrate\")\nylabel!(\"Speed of the reaction\")\n```\n\n\n\n\n \n\n \n\n\n\nThis analysis shows that the Michaelis-Menten approach is valid when the amount of $s$ is higher than the amount of $e$.\n\n## Kinase-Phosphatase systems\n\nIf we want to compute the equilibrium concentration of the product `p`, we have to set the last equation equal to `0`: \n$$\n\\dot{p}=\\frac{k_3 \\cdot e_0 (s_0-p)}{K_M + (s_0-p)} =0 \\tag{33}\n$$\n\nso we simply obtain:\n \n $$\\begin{align*}\nk_3 \\cdot e_0 (s_0-p_{eq})=0\\tag{34}\\\\\np_{eq}=s_0\\tag{35}\n \\end{align*}$$\n\nwhich means that all substrate is eventually converted into product. Once all the substrate is consumed, the reaction is finished, and there is no way to reverse it. Of course, living systems do not work like that: there is always a balance in the form of a dynamical equilibrium between reactions (as in reversible chemical reactions). In enzyme-catalyzed reactions, this equilibrium is achieved by combining pairs of enzymes that perform opposite tasks, i.e., that catalyze opposite reactions. 
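\n\nBefore moving to the two-enzyme system, we can verify numerically that the single-enzyme model is indeed irreversible (a minimal sketch reusing `enzyme_kinetics3!` and the parameter values defined above; only the final time is extended):\n\n```julia\n# Sketch: integrate the reduced equation for p to long times and check\n# that essentially all the substrate ends up as product, p -> s0.\nprob_eq = ODEProblem(enzyme_kinetics3!,[0.0],(0.0,1.0e4),[k1,k2,k3,e0,s0])\nsol_eq = solve(prob_eq)\nprintln(\"p at t = 1e4: \", sol_eq[end][1], \" (s0 = \", s0, \")\")\n```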
If we write the sistem as irreversible reactions, we obatin 6 reactions:\n\n$$\\begin{align*}\ns + e1 \\rightarrow e1s \\tag{38}\\\\\ne1s \\rightarrow s + e1 \\tag{39}\\\\\ne1s \\rightarrow p + e1 \\tag{40}\\\\\np + e2 \\rightarrow e2p \\tag{41}\\\\\ne2p \\rightarrow p + e2 \\tag{42}\\\\\ne2p \\rightarrow s + e2 \\tag{43}\n \\end{align*}$$\n \nLets compute the matrices form the Mass action law. If we rename variables as $X_1=s, X_2=e1s, X_3=e1, X_4=p, X_5=e2p, X_6=e2$ to use the state vector approach.\n\n$$\\begin{align*}\nX_1 + X_3 \\rightarrow X_2 \\\\\nX_2 \\rightarrow X_1 + X_3 \\\\\nX_2 \\rightarrow X_3 + X_4 \\\\\nX_4 + X_6 \\rightarrow X_5\\\\\nX_5 \\rightarrow X_4 + X_6 \\\\\nX_5 \\rightarrow X_1 + X_6 \n \\end{align*}$$\n\n\nThe kinetics of these chemical equations is described by the __Mass action Law__ \n\n\n$$\nA=\\begin{bmatrix}\n 1 & 0 & 1 & 0 & 0 & 0 \\\\\n 0 & 1 & 0 & 0 & 0 & 0\\\\\n 0 & 1 & 0 & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & 1 & 0 & 1\\\\\n 0 & 0 & 0 & 0 & 1 & 0\\\\\n 0 & 0 & 0 & 0 & 1 & 0\\end{bmatrix} ;\nB=\\begin{bmatrix}\n 0 & 1 & 0 & 0 & 0 & 0 \\\\\n 1 & 0 & 1 & 0 & 0 & 0 \\\\\n 0 & 0 & 1 & 1 & 0 & 0 \\\\\n 0 & 0 & 0 & 0 & 1 & 0 \\\\\n 0 & 0 & 0 & 1 & 0 & 1 \\\\\n 1 & 0 & 0 & 0 & 0 & 1\n \\end{bmatrix} ; \n (B-A)^T= \\begin{bmatrix}\n -1 & 1 & 0 & 0 & 0 & 1\\\\\n 1 & -1 & -1 & 0 & 0 & 0\\\\\n -1 & 1 & 1 & 0 & 0 & 0 \\\\\n 0 & 0 & 1 & -1 & 1 & 0 \\\\\n 0 & 0 & 0 & 1 & -1 & -1 \\\\\n 0 & 0 & 0 & -1 & 1 & 1\\end{bmatrix} \n$$\n in this particular case\n\n$$\nK=\\begin{pmatrix}\n k_1 & 0 & 0 & 0 & 0 & 0\\\\\n 0 & k_2 & 0 & 0 & 0 & 0\\\\\n 0 & 0 & k_3 & 0 & 0 & 0\\\\\n 0 & 0 & 0 & k_4 & 0 &0\\\\\n 0 & 0 & 0 & 0 & k_5 & 0 \\\\\n 0 & 0 & 0 & 0 & 0 & k_6 \n\\end{pmatrix}\n$$\n\n\n$$X^A=\\begin{pmatrix}\nX_1^1\\cdot X_2^0 \\cdot X_3^1 \\cdot X_4^0 \\cdot X_5^0 \\cdot X_6^0\\\\\nX_1^0\\cdot X_2^1 \\cdot X_3^0 \\cdot X_4^0 \\cdot X_5^0 \\cdot X_6^0\\\\\nX_1^0\\cdot X_2^1 \\cdot X_3^0 \\cdot X_4^0 \\cdot X_5^0 \\cdot X_6^0\\\\\nX_1^0\\cdot X_2^0 \\cdot X_3^0 \\cdot X_4^1 \\cdot X_5^0 \\cdot X_6^1\\\\\nX_1^0\\cdot X_2^0 \\cdot X_3^0 \\cdot X_4^0 \\cdot X_5^1 \\cdot X_6^0\\\\\nX_1^0\\cdot X_2^0 \\cdot X_3^0 \\cdot X_4^0 \\cdot X_5^1 \\cdot X_6^0\n\\end{pmatrix} = \\begin{pmatrix}\nX_1 \\cdot X_3\\\\\n X_2\\\\\n X_2\\\\\n X_4 \\cdot X_6\\\\\n X_5\\\\\n X_5\n\\end{pmatrix}\n$$\nso \n\n$$\n\\begin{align}\n \\begin{bmatrix}\n\\frac{\\mathrm{d} X_1}{\\mathrm{d} t}\\\\ \\frac{\\mathrm{d} X_2}{\\mathrm{d} t} \\\\ \\frac{\\mathrm{d} X_3}{\\mathrm{d} t} \\\\ \\frac{\\mathrm{d} X_4}{\\mathrm{d} t} \\\\ \\frac{\\mathrm{d} X_5}{\\mathrm{d} t} \\\\ \\frac{\\mathrm{d} X_6}{\\mathrm{d} t}\\end{bmatrix}= \\begin{bmatrix}\n -1 & 1 & 0 & 0 & 0 & 1\\\\\n 1 & -1 & -1 & 0 & 0 & 0\\\\\n -1 & 1 & 1 & 0 & 0 & 0 \\\\\n 0 & 0 & 1 & -1 & 1 & 0 \\\\\n 0 & 0 & 0 & 1 & -1 & -1 \\\\\n 0 & 0 & 0 & -1 & 1 & 1\\end{bmatrix}\\begin{pmatrix}\n k_1 & 0 & 0 & 0 & 0 & 0\\\\\n 0 & k_2 & 0 & 0 & 0 & 0\\\\\n 0 & 0 & k_3 & 0 & 0 & 0\\\\\n 0 & 0 & 0 & k_4 & 0 &0\\\\\n 0 & 0 & 0 & 0 & k_5 & 0 \\\\\n 0 & 0 & 0 & 0 & 0 & k_6 \n\\end{pmatrix} \\begin{pmatrix}\nX_1 \\cdot X_3\\\\\n X_2\\\\\n X_2\\\\\n X_4 \\cdot X_6\\\\\n X_5\\\\\n X_5\n\\end{pmatrix}\n\\end{align}\n$$\nthat when we operate the matrices becomes\n\n$$\n\\begin{align}\n \\begin{bmatrix}\n\\frac{\\mathrm{d} X_1}{\\mathrm{d} t}\\\\ \\frac{\\mathrm{d} X_2}{\\mathrm{d} t} \\\\ \\frac{\\mathrm{d} X_3}{\\mathrm{d} t} \\\\ \\frac{\\mathrm{d} X_4}{\\mathrm{d} t} \\\\ \\frac{\\mathrm{d} X_5}{\\mathrm{d} t} \\\\ \\frac{\\mathrm{d} X_6}{\\mathrm{d} t}\\end{bmatrix}= 
\\begin{bmatrix}\n -1 & 1 & 0 & 0 & 0 & 1\\\\\n 1 & -1 & -1 & 0 & 0 & 0\\\\\n -1 & 1 & 1 & 0 & 0 & 0 \\\\\n 0 & 0 & 1 & -1 & 1 & 0 \\\\\n 0 & 0 & 0 & 1 & -1 & -1 \\\\\n 0 & 0 & 0 & -1 & 1 & 1\\end{bmatrix} \\begin{pmatrix}\nk_1 \\cdot X_1 \\cdot X_3\\\\\n k_2 \\cdot X_2\\\\\n k_3 \\cdot X_2\\\\\n k_4 \\cdot X_4 \\cdot X_6\\\\\n k_5 \\cdot X_5\\\\\n k_6 \\cdot X_5\n\\end{pmatrix}\n\\end{align}\n$$\n\nand operating further,\n\n$$\n\\begin{align}\n \\begin{bmatrix}\n\\frac{\\mathrm{d} X_1}{\\mathrm{d} t}\\\\ \\frac{\\mathrm{d} X_2}{\\mathrm{d} t} \\\\ \\frac{\\mathrm{d} X_3}{\\mathrm{d} t} \\\\ \\frac{\\mathrm{d} X_4}{\\mathrm{d} t} \\\\ \\frac{\\mathrm{d} X_5}{\\mathrm{d} t} \\\\ \\frac{\\mathrm{d} X_6}{\\mathrm{d} t}\\end{bmatrix}= \\begin{bmatrix}\n-k_1 \\cdot X_1 \\cdot X_3 + k_2 \\cdot X_2 + k_6 \\cdot X_5\\\\\n k_1 \\cdot X_1 \\cdot X_3 - k_2 \\cdot X_2 - k_3 \\cdot X_2\\\\\n-k_1 \\cdot X_1 \\cdot X_3 + k_2 \\cdot X_2 + k_3 \\cdot X_2\\\\\nk_3 \\cdot X_2 - k_4 \\cdot X_4 \\cdot X_6 + k_5 \\cdot X_5\\\\\n k_4 \\cdot X_4 \\cdot X_6 - k_5 \\cdot X_5 - k_6 \\cdot X_5\\\\\n- k_4 \\cdot X_4 \\cdot X_6 + k_5 \\cdot X_5 + k_6 \\cdot X_5\n\\end{bmatrix} \n\\end{align}\n$$\n \n which,operating further and substituting for the original names of the variables, it gives us a set of six differential equations:\n \n $$\\begin{align*}\n \\dot{s} &= -k_1 \\cdot s \\cdot e1 + k_2 \\cdot e1s + k_6 \\cdot e2p \\\\\n \\dot{e1s} &= k_1 \\cdot e1 \\cdot s - (k_2 + k_3) \\cdot e1s \\\\\n \\dot{e1} &= -k_1 \\cdot s \\cdot e1 + (k_2 + k_3) \\cdot e1s \\\\\n \\dot{p} &= -k_4 \\cdot p \\cdot e2 + k_5 \\cdot e2p + k_3 \\cdot e1s \\\\\n \\dot{e2p} &= k_4 \\cdot e2 \\cdot p - (k_5 + k_6) \\cdot e2p \\\\\n \\dot{e2} &= -k_4 \\cdot p \\cdot e2 + (k_5 + k_6) \\cdot e2p \n \\end{align*}$$\n\nWith these equations we define an ODE vector function using the DSL notation:\n\n\n```julia\nenzyme_kinetics4! = @ode_def ab begin\n ds = -k1 * s * e1 + k2 * e1s + k6 * e2p\n de1 = -k1 * s * e1 + (k2 + k3) * e1s\n de2 = -k4 * p * e2 + (k5 + k6) * e2p\n de1s = k1 * e1 * s - (k2 + k3) * e1s\n de2p = k4 * e2 * p - (k5 + k6) * e2p\n dp = -k4 * p* e2 + k5 * e2p + k3 * e1s \n end k1 k2 k3 k4 k5 k6\n```\n\n\n\n\n (::ab{var\"#43#47\",var\"#44#48\",var\"#45#49\",Nothing,Nothing,var\"#46#50\",Expr,Expr}) (generic function with 2 methods)\n\n\n\n\n```julia\ntspan = (0.0,100)\nk1=0.5e3\nk2=0.15\nk3=0.03\nk4=1e3\nk5=0.1\nk6=0.01\n\ne10=0.002\ne20=0.002\ns0=0.002\np0=0.001\n\nu0=[s0,e10,e20,0,0,p0]\np=[k1,k2,k3,k4,k5,k6];\n```\n\n\n```julia\nprob4 = ODEProblem(enzyme_kinetics4!,u0,tspan,p)\n```\n\n\n\n\n \u001b[36mODEProblem\u001b[0m with uType \u001b[36mArray{Float64,1}\u001b[0m and tType \u001b[36mFloat64\u001b[0m. In-place: \u001b[36mtrue\u001b[0m\n timespan: (0.0, 100.0)\n u0: [0.002, 0.002, 0.002, 0.0, 0.0, 0.001]\n\n\n\n\n```julia\nsol4 = solve(prob4)\nplot(sol4)\ntitle!(\"kinase-phosphatase\")\nxlabel!(\"Time [s]\")\nylabel!(\"Concentration [M]\")\n#png(\"Michaelis_menten.png\")\n```\n\n\n\n\n \n\n \n\n\n\nWe can take advantage of the conservation of mass (as in systems with one enzyme) to go from six to four differential equations. 
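\n\nAgain, a quick numerical sanity check is possible before doing the algebra (a sketch using the solution `sol4` computed above; the state ordering is `s, e1, e2, e1s, e2p, p`, as in `enzyme_kinetics4!`):\n\n```julia\n# Sketch: the substrate in all its forms (s + e1s + e2p + p) and each\n# enzyme total (e1 + e1s and e2 + e2p) should remain constant in time.\ntotal_substrate = [u[1] + u[4] + u[5] + u[6] for (u,t) in tuples(sol4)]\ntotal_e1 = [u[2] + u[4] for (u,t) in tuples(sol4)]\ntotal_e2 = [u[3] + u[5] for (u,t) in tuples(sol4)]\nprintln(extrema(total_substrate)) # close to s0 + p0\nprintln(extrema(total_e1)) # close to e10\nprintln(extrema(total_e2)) # close to e20\n```\n\nFor the formal derivation, we again look for constant linear combinations: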
\n\n$$ \\begin{align*}\nC_1 \\cdot X_1 + C_2 \\cdot X_2 + C_3 \\cdot X_3 + C_4 \\cdot X_4 + C_5 \\cdot X_5 + C_6 \\cdot X_6 & = cte \\\\\n\\end{align*}$$\n\nso\n$$ \\begin{align*}\nC \\cdot (B-A)^T &=0\\\\ \n\\begin{pmatrix}C_1 & C_2 & C_3 & C_4 & C_5 & C_6\\end{pmatrix} \\begin{bmatrix}\n -1 & 1 & 0 & 0 & 0 & 1\\\\\n 1 & -1 & -1 & 0 & 0 & 0\\\\\n -1 & 1 & 1 & 0 & 0 & 0 \\\\\n 0 & 0 & 1 & -1 & 1 & 0 \\\\\n 0 & 0 & 0 & 1 & -1 & -1 \\\\\n 0 & 0 & 0 & -1 & 1 & 1\\end{bmatrix} &=0\n \\end{align*}$$\n \n \n the solution of the system of equations is: \n $$ \\begin{align*}\n -C_1 + C_2 - C_3= 0 &\\implies C_2 = C_3 + C_1\\\\\n -C_2 + C_3 + C_4= 0 &\\implies C_2 = C_3 + C_4\\\\\n - C_4 + C_5 - C_6 = 0 &\\implies C_5=C_4 + C_6\\\\\n C_1 - C_5 + C_6 =0 &\\implies C_5= C_1 + C_6\n\\end{align*}$$\n\nTherefore, from the set of equations above, we obatin $C_1 = C_4$ (as in the case of one enzyem and one substrate. Therefore, the conservation of mass is:\n\n$$ \\begin{align*}\nC_1 \\cdot X_1(t) + (C_3 + C_1) \\cdot X_2(t) + C_3 \\cdot X_3(t) + C_1 \\cdot X_4(t) + (C_6 + C_1) \\cdot X_5(t) + C_6 \\cdot X_6(t)&= cte\\\\\nC_1 \\cdot X_1(t) + C_3 \\cdot X_2(t) + C_1 \\cdot X_2(t) + C_3 \\cdot X_3(t) + C_1 \\cdot X_4(t) + C_6 \\cdot X_5(t) + C_1 \\cdot X_5(t) + C_6 \\cdot X_6(t)&= cte\\\\\n\\end{align*}$$\nwhich is valid for any value of $C_1$, $C_3$ and $C_6$, so, separating these dependencies\n\n\n$$ \\begin{align*}\nC_1 \\cdot X_1(t) + C_1 \\cdot X_2(t) + C_1 \\cdot X_4(t) + C_1 \\cdot X_5 (t)&= cte\\\\\n\\end{align*}$$\n\nwhich makes sense, since the $s$, $e1s$, $e2p$ and $p$ are four forms of the same protein, and their sum has to be constant, and equal to the initial value of substrate used. Using the value $C_1=1$ we obtain:\n\n$$ \\begin{align*}\nX_1(t) + X_2(t) + X_4(t) + X_5 (t) &= cte = X_1(0) + X_2(0) + X_4(0) + X_5(0)= X_1(0) + X_4(0)\\\\\n\\end{align*}$$\n\nis our first conservation law (typically we start a biochemical reaction adding only the two substrates $s$ and $p$ and the two enzymes $e1$ and $e2$). For the second, we use the parameters that depend on the other constant $C_2$ \n \n$$ \\begin{align*}\nC_3 \\cdot X_2(t) + C_3 \\cdot X_3(t) &= cte\n\\end{align*}$$\n\nThis is the same conservation law that we obtained for the simple single enzyme-substrate system studiend previously. So, giving the value $C_3=1$, we obtain the conservation law, \n\n$$ \\begin{align*}\nX_2(t) + X_3(t) & = cte &= X_2(0) + X_3(0) = X_3 (0) \n\\end{align*}$$\n\nFor the other constant $C_6$: \n$$ \\begin{align*}\nC_6 \\cdot X_5(t) + C_6 \\cdot X_6(t) &= cte\n\\end{align*}$$\n\nwhich is true because the enzyme $e2$ is also a catalyst that facilitates the reaction but does not react itself, and if we assume that there is no complex $e2p(0)=0$ before the reacton starts, we can assume ($e2(0)+e2p(0)=e2_0$). \n\nThis conservation law together with the previous one, allows us to reduce the system of 6 differential equations to just four coupled ordinary differential equations:\n \n $$\\begin{align*}\n \\dot{s} &= -k_1 \\cdot s \\cdot e1_0 + (k_2 + k_1 \\cdot s) \\cdot e1s + k_6 \\cdot e2p \\tag{52}\\\\\n \\dot{e1s} &= k_1 \\cdot e1_0 \\cdot s - (k_2 + k_3 + k_1 \\cdot s) \\cdot e1s \\tag{53}\\\\\n \\dot{e2p} &= k_4 \\cdot e2_0 \\cdot p - (k_5 + k_6 + k_4 \\cdot p) \\cdot e2p \\tag{54}\\\\\n \\dot{p} &= -k_4 \\cdot p \\cdot e2_0 + (k_5 + k_4 \\cdot p) \\cdot e2p + k_3 \\cdot e1s \\tag{55}\n \\end{align*}$$\n\n\n```julia\nenzyme_kinetics5! 
= @ode_def ab begin\n ds = -k1 * s * e10 + (k2 + k1 * s) * e1s + k6 * e2p\n de1s = k1 * e10 * s - (k2 + k3 + k1 * s) * e1s\n de2p = k4 * e20 * p - (k5 + k6 + k4 * p) * e2p\n dp = -k4 * p * e20 + (k5 + k4 * p) * e2p + k3 * e1s\n end k1 k2 k3 k4 k5 k6 e10 e20\n```\n\n\n\n\n (::ab{var\"#51#55\",var\"#52#56\",var\"#53#57\",Nothing,Nothing,var\"#54#58\",Expr,Expr}) (generic function with 2 methods)\n\n\n\n\n```julia\nu0=[s0,0,0,p0]\np=[k1,k2,k3,k4,k5,k6,e10,e20];\nprob5 = ODEProblem(enzyme_kinetics5!,u0,tspan,p)\nsol5 = solve(prob5)\nplot(sol5,ylims = (0,s0+p0))\ntitle!(\"kinase-phosphatase\")\nxlabel!(\"Time [s]\")\nylabel!(\"Concentration [M]\")\n#png(\"Michaelis_menten.png\")\n```\n\n\n\n\n \n\n \n\n\n\nThe concentration of `e1` and `e2` can be also plotted by simply using the mass conservation:;\n\n\n```julia\nplot(sol5.t,[e10.-sol5[2,:],e20.-sol5[3,:]],label = [\"e1\" \"e2\"],ylims = (0,s0+p0))\n```\n\n\n\n\n \n\n \n\n\n\n## Quasi steady state approximation:\n\nIf we consider the pseudo-steady state approximation typical for biochemical systems (i.e., dynamic of intermediate complexes very fast compared with the other reactions), \n\n$$ \n\\begin{align*}\nk_1 \\cdot s(0) \u2248 k_2 >> k_3 \\tag{56}\\\\\nk_4 \\cdot p(0) \u2248 k_5 >> k_6 \\tag{57}\n \\end{align*}$$\n \nif this is true, we wan simplify the system by using the following rules:\n\n$$ \\frac{d e1s}{dt}= \\frac{d e1}{dt}=\\frac{d e2p}{dt}=\\frac{d e2}{dt}=0 \\tag{58}$$\n\nso, the equilibrium concentrations for `e1s`and `e2p` are simply:\n\n $$\\begin{align*}\n e1s &= \\frac{k_1 \\cdot e1_0 \\cdot s}{k_2 + k_3 + k_1 \\cdot s} \\tag{59}\\\\\n e2p &= \\frac{k_4 \\cdot e2_0 \\cdot p }{k_5 + k_6 + k_4 \\cdot p} \\tag{60}\n \\end{align*}$$\n\n\n```julia\ne1s_eq=(p[1] .* p[7] .* sol5[1,:])./(p[2].+p[3].+p[1] .* sol5[1,:])\ne2sp_eq=(p[4] .* p[8] .* sol5[4,:])./(p[5].+p[6].+p[4] .* sol5[4,:])\n\nplot(sol5,label=[\"s\" \"e1s\" \"e2p\" \"p\"])\ntitle!(\"kinase-phosphatase\")\nxlabel!(\"Time [s]\")\nylabel!(\"Concentration [M]\")\n#png(\"Michaelis_menten.png\")\n\nplot!(sol5.t,e1s_eq,\n ylims = (0,s0+p0),\n label=\"e1s_eq\", \n linealpha = 0.5,\n linewidth = 3,\n linestyle= :dash,\n linecolor = :red)\n\n\nplot!(sol5.t,e2sp_eq,\n label=\"e1s_eq\", \n linealpha = 0.5,\n linewidth = 3,\n linestyle= :dash,\n linecolor = :green)\n\n```\n\n\n\n\n \n\n \n\n\n\nAs we can see, the two equilibrium solutions are again quite close to the concentration of `e1s` and `e2p`, in this conditions we can simplify all equations as:\n \n$$\\begin{align*}\n \\dot{s} &= -k_1 \\cdot s \\cdot e1_0 + (k_2 + k_1 \\cdot s) \\cdot \\frac{k_1 \\cdot e1_0 \\cdot s}{k_2 + k_3 + k_1 \\cdot s} + \\frac{k_6 \\cdot k_4 \\cdot e2_0 \\cdot p }{k_5 + k_6 + k_4 \\cdot p} \\tag{61}\\\\\n \\dot{p} &= -k_4 \\cdot p \\cdot e2_0 + (k_5 + k_4 \\cdot p) \\cdot \\frac{k_4 \\cdot e2_0 \\cdot p }{k_5 + k_6 + k_4 \\cdot p} + \\frac{ k_3 \\cdot k_1 \\cdot e1_0 \\cdot s}{k_2 + k_3 + k_1 \\cdot s} \\tag{62}\n\\end{align*}$$\n\nwhich, after some manipulation become\n\n$$\\begin{align*}\n \\dot{s} &= k_1 \\cdot e1_0 \\cdot s \\left [ \\frac{k_2 + k_1 \\cdot s}{k_2 + k_3 + k_1 \\cdot s} -1\\right ] + \\frac{k_6 \\cdot k_4 \\cdot e2_0 \\cdot p }{k_5 + k_6 + k_4 \\cdot p} \\tag{63}\\\\\n \\dot{p} &= k_4 \\cdot e2_0 \\cdot p \\left [ \\frac{k_5 + k_4 \\cdot p }{k_5 + k_6 + k_4 \\cdot p} -1 \\right ] + \\frac{ k_3 \\cdot k_1 \\cdot e1_0 \\cdot s}{k_2 + k_3 + k_1 \\cdot s} \\tag{64}\n\\end{align*}$$\n\n\n$$\\begin{align*}\n \\dot{s} &= k_1 \\cdot e1_0 \\cdot s \\left [ \\frac{k_2 + k_1 \\cdot s- k_2 - 
k_3 - k_1 \\cdot s}{k_2 + k_3 + k_1 \\cdot s} \\right ] + \\frac{k_6 \\cdot k_4 \\cdot e2_0 \\cdot p }{k_5 + k_6 + k_4 \\cdot p} \\tag{65}\\\\\n \\dot{p} &= k_4 \\cdot e2_0 \\cdot p \\left [ \\frac{k_5 + k_4 \\cdot p -k_5 - k_6- k_4 \\cdot p}{k_5 + k_6 + k_4 \\cdot p} \\right ] + \\frac{ k_3 \\cdot k_1 \\cdot e1_0 \\cdot s}{k_2 + k_3 + k_1 \\cdot s} \\tag{66}\n\\end{align*}$$\n\n$$\\begin{align*}\n \\dot{s} &= \\frac{k_6 \\cdot k_4 \\cdot e2_0 \\cdot p }{k_5 + k_6 + k_4 \\cdot p} - \\frac{ k_3 \\cdot k_1 \\cdot e1_0 \\cdot s }{k_2 + k_3 + k_1 \\cdot s} \\tag{67} \\\\\n \\dot{p} &= \\frac{ k_3 \\cdot k_1 \\cdot e1_0 \\cdot s}{k_2 + k_3 + k_1 \\cdot s} - \\frac{k_6 \\cdot k_4 \\cdot e2_0 \\cdot p}{k_5 + k_6 + k_4 \\cdot p} \\tag{68}\n\\end{align*}$$\n\nFinally, using the definition of the Michaelis-Menten constant for each enzyme-substrate pair, $K_{1M} = \\frac{k_3+k_2}{k_1}$, $K_{2M} = \\frac{k_5+k_6}{k_4}$, the previous equations can be simplified as: \n\n$$\\begin{align*}\n \\dot{s} &= \\frac{k_6 \\cdot e2_0 \\cdot p}{K_{2M} + p} - \\frac{ k_3 \\cdot e1_0 \\cdot s}{K_{1M}+ s} \\tag{69}\\\\\n \\dot{p} &= \\frac{ k_3 \\cdot e1_0 \\cdot s}{K_{1M}+ s} - \\frac{k_6 \\cdot e2_0 \\cdot p}{K_{2M} + p} \\tag{70}\n\\end{align*}$$\n\n\n\n\n```julia\nenzyme_kinetics6! = @ode_def ab begin\n ds = (k6 * e20 * p)/(K2M + p) - (k3 * e10 * s)/(K1M + s)\n dp = (k3 * e10 * s)/(K1M + s) - (k6 * e20 * p)/(K2M + p)\n end k3 k6 e10 e20 K1M K2M\n```\n\n\n\n\n (::ab{var\"#59#63\",var\"#60#64\",var\"#61#65\",Nothing,Nothing,var\"#62#66\",Expr,Expr}) (generic function with 2 methods)\n\n\n\n\n```julia\nu0=[s0,p0]\nK1M=(k3+k2)/k1\nK2M=(k5+k6)/k4\np=[k3,k6,e10,e20,K1M,K2M];\nprob6 = ODEProblem(enzyme_kinetics6!,u0,tspan,p)\nsol6 = solve(prob6)\nplot(sol6,label=[\"s\" \"p\"],ylims = (0,s0+p0))\ntitle!(\"kinase-phosphatase full simplified\")\nxlabel!(\"Time [s]\")\nylabel!(\"Concentration [M]\")\n#png(\"Michaelis_menten.png\")\n```\n\n\n\n\n \n\n \n\n\n\nWe can see, comparing the last set of curves, that the dynamics predicted using the `pseudo-steady state condition` are far from the expected behavior of the dual enzyme-substrate system. The approximation is much better when we change the chemical parameters of the reaction, and/or the initial concentration of enzyme. \n\n\n```julia\nk1=0.5e2\nk2=0.015\nk3=0.5\nk4=1e2\nk5=0.1\nk6=0.01\n\ne10=0.0002\ne20=0.0002\ns0=0.002\np0=0.001\ntspan = (0.0,500)\nu0=[s0,p0]\nK1M=(k3+k2)/k1\nK2M=(k5+k6)/k4\np=[k3,k6,e10,e20,K1M,K2M];\nprob6 = ODEProblem(enzyme_kinetics6!,u0,tspan,p)\nsol6 = solve(prob6)\nplot(sol6,label=[\"s\" \"p\"],ylims = (0,s0+p0))\ntitle!(\"kinase-phosphatase full simplified\")\nxlabel!(\"Time [s]\")\nylabel!(\"Concentration [M]\")\n#png(\"Michaelis_menten.png\")\n```\n\n\n\n\n \n\n \n\n\n\nwhich we can compare directly with the full system without the pseudo-steady state approximation:\n\n\n```julia\nu0=[s0,0,0,p0]\n\np=[k1,k2,k3,k4,k5,k6,e10,e20];\nprob5 = ODEProblem(enzyme_kinetics5!,u0,tspan,p)\nsol5 = solve(prob5)\nplot(sol5,label=[\"s\" \"e1s\" \"e2p\" \"p\"],ylims = (0,s0+p0))\ntitle!(\"kinase-phosphatase\")\nxlabel!(\"Time [s]\")\nylabel!(\"Concentration [M]\")\n#png(\"Michaelis_menten.png\")\n```\n\n\n\n\n \n\n \n\n\n\n## Further simplification:\n\nNow, with the system of just two equations, and in conditions of small concentration of the intermediate complexes $e1s$ and $e2p$, we can use the other conservation law $s(t)+p(t)+e1s(t)+e2p(t)=cte$ to obtain an analytical solution of the system. 
If we assume that the concentrations of $e1s(t)$ and $e2p(t)$ are small compared to the concentrations of $e1(t)$ and $e2(t)$, we can write $s(t)+p(t)+e1s(t)+e2p(t)=s(0)+p(0)$. Substituting this into the previous equations, we obtain:\n\n$$\\begin{align*}\n \\dot{p} &= \\frac{ k_3 \\cdot e1_0 \\cdot (s_0 + p_0 - p )}{K_{1M}+ s_0 + p_0 - p} - \\frac{k_6 \\cdot e2_0 \\cdot p}{K_{2M} + p} \\tag{71} \n\\end{align*}$$\n\nNow, we have only one equation, so we can calculate the equilibrium as \n$$\\begin{align*}\n \\dot{p} & = 0 \\rightarrow \\frac{ k_3 \\cdot e1_0 \\cdot (s_0 + p_0 - p )}{K_{1M}+ s_0 + p_0 - p} = \\frac{k_6 \\cdot e2_0 \\cdot p}{K_{2M} + p} \\tag{72} \n\\end{align*}$$\n\nwhich is a second order equation with solution:\n\n$$\\frac{k_3 \\cdot e1_0 \\cdot (s_0 + p_0 - p )}{k_6 \\cdot e2_0 \\cdot p}= \\frac{K_{1M}+ s_0 + p_0 - p}{K_{2M} + p }\\tag{73}\n$$\n\nwe can rename variables as:\n\n$$W=\\frac{k_3 \\cdot e1_0 }{k_6 \\cdot e2_0}\\tag{74}$$\nso \n\n$$W\\frac{(s_0 + p_0 - p )}{p}= \\frac{K_{1M}+ s_0 + p_0 - p}{K_{2M} + p }\\tag{75}\n$$\n\n$$W(s_0 + p_0 - p )(K_{2M} + p)= K_{1M} \\cdot p+ s_0 \\cdot p + p_0 \\cdot p - p^2\\tag{76}\n$$\n\n$$W(s_0 + p_0 - p )(K_{2M} + p)= (K_{1M} +p_0 +s_0) p - p^2\\tag{77}\n$$\n\n$$W (s_0 \\cdot K_{2M} + p_0 \\cdot K_{2M} - p \\cdot K_{2M} + s_0 \\cdot p + p_0 \\cdot p - p^2)= (K_{1M} +p_0 +s_0) p - p^2\\tag{78}\n$$\n\n$$W (s_0 \\cdot K_{2M} + p_0 \\cdot K_{2M} + (s_0 + p_0 - K_{2M}) p - p^2)= (K_{1M} +p_0 +s_0) p - p^2\\tag{79}\n$$\n\n$$ W \\cdot s_0 \\cdot K_{2M} + W \\cdot p_0 \\cdot K_{2M} + (s_0 + p_0 - K_{2M}) W \\cdot p - W \\cdot p^2= (K_{1M} +p_0 +s_0) p - p^2\\tag{80}\n$$\n\n$$ W \\cdot s_0 \\cdot K_{2M} + W \\cdot p_0 \\cdot K_{2M} + (s_0 + p_0 - K_{2M}) W \\cdot p + p^2(1-W)= (K_{1M} +p_0 +s_0) p \\tag{81}\n$$\n\n$$ W \\cdot s_0 \\cdot K_{2M} + W \\cdot p_0 \\cdot K_{2M} + (s_0 W+ p_0 W- W\\cdot K_{2M} -K_{1M} -p_0 -s_0) \\cdot p + p^2(1-W)= 0 \\tag{82}\n$$\n\n$$ W \\cdot s_0 \\cdot K_{2M} + W \\cdot p_0 \\cdot K_{2M} + (s_0 (W-1) + p_0 (W-1) - W \\cdot K_{2M} -K_{1M}) \\cdot p + p^2(1-W)= 0 \\tag{83}\n$$\n\nThis is a second order equation $A \\cdot p^2 + B \\cdot p +C=0$ with\n \n$$A=1-W \\tag{84}$$\n\n$$B=s_0 (W-1) + p_0 (W-1) - W \\cdot K_{2M} -K_{1M} \\tag{85}$$\n\n$$C=W \\cdot s_0 \\cdot K_{2M} + W \\cdot p_0 \\cdot K_{2M} \\tag{86}$$\n\n\n```julia\nfunction quadratic(a, b, c)\n discr = b^2 - 4*a*c\n discr >= 0 ? 
( (-b + sqrt(discr))/(2a), (-b - sqrt(discr))/(2a) ) : error(\"Only complex roots\")\n end\n```\n\n\n\n\n quadratic (generic function with 1 method)\n\n\n\n\n```julia\nW=k3*e10/k6/e20\nA=1-W\nB=s0*(W-1)+p0*(W-1)-W*K2M-K1M\nC=W*s0*K2M+W*p0*K2M\n```\n\n\n\n\n 0.00016500000000000003\n\n\n\n\n```julia\nroots=quadratic(A,B,C)\nroots[2]\n```\n\n\n\n\n 0.002849202777618357\n\n\n\n\n```julia\nplot(sol6,label=[\"s\" \"p\"],ylims = (0,s0+p0))\ntitle!(\"kinase-phosphatase full simplified steady state\")\nxlabel!(\"Time [s]\")\nylabel!(\"Concentration [M]\")\nhline!([roots[2]],label=\" \\\\ P_{ss}\")\n```\n\n\n\n\n \n\n \n\n\n\n\n\n## Conclusion \n\nEnzyme kinetics is one of the most basics sistems where Mass Action combined with mathematical modeling allows us to extract relevant information of the dynamics of the interactions that shape the responses of biological systems \n", "meta": {"hexsha": "9a8c34b459286fa04ece13b256ab99a1c3a955ff", "size": 1000394, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "4_EnzymeKinetics.ipynb", "max_stars_repo_name": "davidgmiguez/julia_notebooks", "max_stars_repo_head_hexsha": "b395fac8f73bf8d9d366d6354a561c722f37ce66", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "4_EnzymeKinetics.ipynb", "max_issues_repo_name": "davidgmiguez/julia_notebooks", "max_issues_repo_head_hexsha": "b395fac8f73bf8d9d366d6354a561c722f37ce66", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "4_EnzymeKinetics.ipynb", "max_forks_repo_name": "davidgmiguez/julia_notebooks", "max_forks_repo_head_hexsha": "b395fac8f73bf8d9d366d6354a561c722f37ce66", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 113.1667420814, "max_line_length": 531, "alphanum_fraction": 0.6526768453, "converted": true, "num_tokens": 16088, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9591542875927779, "lm_q2_score": 0.8652240825770432, "lm_q1q2_score": 0.8298833885322987}} {"text": "# Linear Discriminant Functions\nLinear Discriminant Functions distinguish (in the simple case) two different classes from each other (assuming they are linear seperable). For the twodimensional case (N = 2), we want to find a line $d(x) = w_1x_1+w_2x_2+w_3 = 0$ which outputs a value $>0$ for the first class and a value $<0$ for the second class. If the value is 0 (and thus the point lies on the decision line), we reject the decision*. \n\nThe goal is to learn the weights ($w_1$ and $w_2$) as well as the bias to discriminate the two classes. There are many algorithms to achieve this. In the lecture by Prof. Jiang two different approaches were shown: The perceptron algorithm and the pseudoinverse algorithm. \n\n## Perceptron Algorithm\nThe perceptron algorithm minimizes the number of misclassified training examples by using the gradient descend approach. This is done by using the model of a perceptron which has (in the twodimensional case) two inputs and a bias. The output of the perceptron is the sum of the inputs combined with some weights which need to be learned. For learning we use our initial solution and compute for every training example the output of the neuron. 
Every misclassified sample causes a slight change to the weights until no misclassification happens.\n\n$$\n\\begin{align}\nJ(w) &= \\sum_{y \\in \\text{misclassified examples}}(-wy^t)\\\\\n\\nabla J(w) &= \\sum_{y \\in \\text{misclassified examples}}(-y)\\\\\nw_{k+1} &= w_k - \\sum_{y \\in \\text{misclassified examples}}(-y)\\\\\nw_{k+1} &= w_k + \\sum_{y \\in \\text{misclassified examples}}(y)\n\\end{align}\n$$\n\n#### Example\nConsider the following data (I already added the bias)\n\n| x | y | bias | class | $d(w)$ shall be... |\n|---|---|------|--------|--------------------|\n| 0 | 0 | 1 | $C_1$ | $>0$ |\n| 0 | 1 | 1 | $C_1$ | $>0$ |\n| 1 | 0 | 1 | $C_2$ | $<0$ |\n| 1 | 1 | 1 | $C_2$ | $<0$ |\n\nFor simplicity reasons we multiply the values of the second class * -1 so we get\n\n| x | y | bias | class | $d(w)$ shall be... |\n|----|----|------|--------|--------------------|\n| 0 | 0 | 1 | $C_1$ | $>0$ |\n| 0 | 1 | 1 | $C_1$ | $>0$ |\n| -1 | 0 | -1 | $C_2$ | $>0$ |\n| -1 | -1 | -1 | $C_2$ | $>0$ |\n\nLet's imagine we initialize with $w_0 = (0,0,0)$. Now we can find out if an example is correctly classified or not, by multiplying the sample with the weight.\n\n| x | y | bias | class | $d(w)$ shall be... | $d(w_0)$ is be... |\n|----|----|------|--------|--------------------|-----------------|\n| 0 | 0 | 1 | $C_1$ | $>0$ | $\\leq 0$ |\n| 0 | 1 | 1 | $C_1$ | $>0$ | $\\leq 0$ |\n| -1 | 0 | -1 | $C_2$ | $>0$ | $\\leq 0$ |\n| -1 | -1 | -1 | $C_2$ | $>0$ | $\\leq 0$ |\n\nAs we can see, all four values are misclassified. Thus we can sum up all x's and y's and biases and we get as a $\\delta_0 = (-2,0,0)$. Now we calculate $w_1 = w_0 + p * \\delta_0 = (0,0,0) + 1*(-2,0,0) = (-2,0,0)$, assuming a learning rate $p$ of $1$.\n\nFor the second iteration we use the new weight and get...\n\n\n| x | y | bias | class | $d(w)$ shall be... | $d(w_1)$ is be... |\n|----|----|------|--------|--------------------|-------------------|\n| 0 | 0 | 1 | $C_1$ | $>0$ | $\\leq 0$ |\n| 0 | 1 | 1 | $C_1$ | $>0$ | $\\leq 0$ |\n| -1 | 0 | -1 | $C_2$ | $>0$ | $> 0$ |\n| -1 | -1 | -1 | $C_2$ | $>0$ | $> 0$ |\n\nNow there are only two misclassified examples (the first and the second one). If we now add up the two misclassified examples, we get $\\delta_1=(0,0,1)+(0,1,1)=(0,1,2)$. Secondly, we get $w_2 = w_1 + p * \\delta_1 = (-2,0,0) + 1 * (0,1,2) = (-2,1,2)$. The algorithm continues until it converges to $(-4,1,2)$.\n\n## Pseudoinverse Algorithm\nThe pseudoinverse algorithm works by introducing a vector $b$, so that\n$$\nwX = b\n$$\nwith $b$ having positive values. Thus, we replace the requirement to be greater than zero with concrete values.\n\n$X$ is a matrix with the trainingsdata, i.e.\n\n$$\nX=\n \\begin{pmatrix}\n 0 & 0 & -1 & -1 \\\\\n 0 & 1 & 0 & -1 \\\\\n 1 & 1 & -1 & -1\n \\end{pmatrix}\n$$\n\nIn this case the first column corresponds to the first example, the second to the second example etc. The third and the forth example are again multiplied with -1 since they belong to class 2 (just as before).\n\nTo calculate the corresponding $w$ we need to have the inverse of $X$, however there is no exact solution, so instead we calculate.\n\n$$\nw = bX^t(XX^t)^{-1}\n$$\n\nThe challenge here is to determine a suitable b. 
You might need mulitple trials or another optimization strategy for b.\n\n# Code-Part\n### Perceptron Algorithm\n\n\n```python\n# First we import the required packages\n\nimport numpy as np\n\nimport matplotlib.pyplot as plt\nfrom matplotlib import animation, rc\n\nfrom IPython.display import HTML\n\nfrom numpy.linalg import inv # For pseudoinverse algorithm\n```\n\nNow we set the data and the learning rate.\n\n\n```python\n# Two classes\nc1 = np.array([[0,0],[0,1]])\nc2 = np.array([[1,0],[1,1]])\nlearning_rate = 1\n\n# Assertion check, some kind of input validation\nassert c1[0].shape == c2[0].shape\n```\n\nNow we ...\n* initialize the weights vector w with (0,0,0)\n* add the bias terms to the samples\n* multiply the samples of class 2 with -1 as explained above.\n\n\n```python\n# Initialize w (i.e. with zeros)\nw = np.zeros(c1[0].shape[0] + 1) #+1 for bias\n\n# Add bias-term for classes\n_c1 = np.append(c1, np.repeat(1,c1[0].shape[0]).reshape((c1[0].shape[0], 1)), axis=1)\n_c2 = np.append(c2, np.repeat(1,c1[0].shape[0]).reshape((c1[0].shape[0], 1)), axis=1)\n\n# Multiply _c2*-1 so we can check if > 0 more conveniently\n_c2 = _c2*-1\n```\n\nNow this is the main loop. The main functionality is described above, not more magic is happening here. We save each weight in weights, so we can visualize the progress of the linear discriminant function later.\n\n\n```python\n# Main loop\n\nweights = [w] # We save the weights, so we can display the line fitting later\n\nwhile True:\n \n # Test which samples are wrong classified\n wrong_classified = []\n \n for sample in _c1:\n # Calculate the function value (i.e. w0 + w1*x1 + w2*x2 + ...)\n # It is correctly classified if the value is > 0, thus wrongly if it is <= 0\n if sum(sample*w) <= 0:\n wrong_classified.append(sample)\n \n for sample in _c2:\n # Same for _c2\n if sum(sample*w) <= 0:\n wrong_classified.append(sample)\n \n # Break if no samples were wrong classified\n if len(wrong_classified) == 0:\n break\n \n # Otherwise calculate delta\n delta = np.sum(wrong_classified, axis=0)\n \n # Add delta to weight\n w = w + learning_rate*delta\n weights.append(w)\n```\n\nThis part is just for creating the visualization of the data and the discriminant function. It creates a fancy animation. This works by using the weights which have been previously saved.\n\n\n```python\nfig, ax = plt.subplots()\nax.plot(c1[:,0], c1[:,1], 'ro')\nax.plot(c2[:,0], c2[:,1], 'bo')\nline, = ax.plot([], [])\n\nax.set_xlim(( -2, 2)) # Set x axis here\nax.set_ylim((-2, 2)) # Set y axis here\n\ndef init():\n return (line,)\n\ndef animate(i):\n x = np.linspace(*ax.get_xlim(), 1000)\n weight = weights[i]\n #w0x1 + w1x2 + w2 = 0\n # w1x2 = -w2 - w0x1\n # x2 = (-w2 -w0*x1) / w1\n if weight[1] == 0: # This would be a divide by zero.\n y = [0] * len(x)\n else:\n y = (-weight[0]*x-weight[2])/weight[1]\n line.set_data(x, y)\n return (line,)\n\nanim = animation.FuncAnimation(fig, animate, init_func=init,\n frames=len(weights), interval=1000, \n blit=True)\nHTML(anim.to_jshtml()) # Print the animation\n```\n\n\n\n\n\n\n\n\n
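Before moving on, a quick sanity check can confirm that the learned weights really separate the training data. The following is a minimal sketch (not part of the original notebook); it assumes the variables `w`, `_c1` and `_c2` defined in the cells above are still in scope, and simply verifies that every sign-adjusted training sample gives a positive perceptron output.\n\n\n```python\n# Sanity check (sketch): after convergence every sign-adjusted sample should satisfy w.y > 0\nsamples = np.vstack([_c1, _c2])  # stack the prepared samples of both classes\nvalues = samples @ w             # perceptron output for each sample\nprint(values)\nassert np.all(values > 0), \"some samples are still misclassified\"\n```\n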
    \n\n\n\n\n\n\n\n### Pseudoinverse Part\nThis is the part where we calculate the pseudoinverse. However, we do not optimize b. This is part of other algorithms, i.e. Ho-Kashyap algorithm. Instead, we set $b$ to a vector of ones.\n\n\n```python\n## Define X\n\n#(_c1.shape[0] + _c2.shape[0]) = Number of training examples\nX = np.array([_c1,_c2]).reshape((_c1.shape[0] + _c2.shape[0]),3).T\nb = np.ones((1,(_c1.shape[0] + _c2.shape[0]))) # Just fill b with ones (one 1 per training example)\n\n# Calculate the w with the pseudoinverse\n# We need to use the dotproduct, so we cannot write X*X.T, but i.e.\n# np.dot(X,X.T)\nw = np.dot(b, np.dot(X.T, inv(np.dot(X,X.T))))\n\n# For simplicity reasons we flatten the matrix to a vector\nw = w.flatten()\n```\n\nHere we just visualize again\n\n\n```python\nplt.clf()\nfig, ax = plt.subplots()\nax.plot(c1[:,0], c1[:,1], 'ro')\nax.plot(c2[:,0], c2[:,1], 'bo')\nline, = ax.plot([], [])\n\nax.set_xlim(( -2, 2)) # Set x axis here\nax.set_ylim((-2, 2)) # Set y axis here\n\nx = np.linspace(*ax.get_xlim(), 1000)\ny = (-w[0]*x-w[2])/w[1]\nline.set_data(x, y)\nplt.show()\n```\n\n#### Literature\n* Haykin, Simon S., et al. Neural networks and learning machines. Vol. 3. Upper Saddle River, NJ, USA:: Pearson, 2009.\n* Duda, Richard O., Peter E. Hart, and David G. Stork. Pattern classification. John Wiley & Sons, 2012.\n", "meta": {"hexsha": "834b2fc6ac10f63e40c7ec309693944f282c2fc9", "size": 90021, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Linear Discriminant Functions.ipynb", "max_stars_repo_name": "hija/DeepLearningNotebooks", "max_stars_repo_head_hexsha": "d95377c95a41369562e13d03b8fb0fcb34900f26", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Linear Discriminant Functions.ipynb", "max_issues_repo_name": "hija/DeepLearningNotebooks", "max_issues_repo_head_hexsha": "d95377c95a41369562e13d03b8fb0fcb34900f26", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Linear Discriminant Functions.ipynb", "max_forks_repo_name": "hija/DeepLearningNotebooks", "max_forks_repo_head_hexsha": "d95377c95a41369562e13d03b8fb0fcb34900f26", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 72.0168, "max_line_length": 6534, "alphanum_fraction": 0.7499583431, "converted": true, "num_tokens": 3510, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9173026663679976, "lm_q2_score": 0.9046505421702796, "lm_q1q2_score": 0.8298383544640522}} {"text": "\n\n\n\n# Introduction to Matrices\nIn general terms, a matrix is an array of numbers that are arranged into rows and columns.\n\n## Matrices and Matrix Notation\nA matrix arranges numbers into rows and columns, like this:\n\n\\begin{equation}A = \\begin{bmatrix}\n 1 & 2 & 3 \\\\\n 4 & 5 & 6\n \\end{bmatrix}\n\\end{equation}\n\nNote that matrices are generally named as a capital letter. 
We refer to the *elements* of the matrix using the lower case equivalent with a subscript row and column indicator, like this:\n\n\\begin{equation}A = \\begin{bmatrix}\n a_{1,1} & a_{1,2} & a_{1,3} \\\\\n a_{2,1} & a_{2,2} & a_{2,3}\n \\end{bmatrix}\n\\end{equation}\n\nIn Python, you can define a matrix as a 2-dimensional *numpy.**array***, like this:\n\n\n```python\nimport numpy as np\n\nA = np.array([[1,2,3],\n [4,5,6]])\nprint (A)\n```\n\n [[1 2 3]\n [4 5 6]]\n\n\nYou can also use the *numpy.**matrix*** type, which is a specialist subclass of ***array***:\n\n\n```python\nimport numpy as np\n\nM = np.matrix([[1,2,3],\n [4,5,6]])\nprint (M)\n```\n\n [[1 2 3]\n [4 5 6]]\n\n\nThere are some differences in behavior between ***array*** and ***matrix*** types - particularly with regards to multiplication (which we'll explore later). You can use either, but most experienced Python programmers who need to work with both vectors and matrices tend to prefer the ***array*** type for consistency.\n\n## Matrix Operations\nMatrices support common arithmetic operations.\n\n### Adding Matrices\nTo add two matrices of the same size together, just add the corresponding elements in each matrix:\n\n\\begin{equation}\\begin{bmatrix}1 & 2 & 3 \\\\4 & 5 & 6\\end{bmatrix}+ \\begin{bmatrix}6 & 5 & 4 \\\\3 & 2 & 1\\end{bmatrix} = \\begin{bmatrix}7 & 7 & 7 \\\\7 & 7 & 7\\end{bmatrix}\\end{equation}\n\nIn this example, we're adding two matrices (let's call them ***A*** and ***B***). Each matrix has two rows of three columns (so we describe them as 2x3 matrices). Adding these will create a new matrix of the same dimensions with the values a1,1 + b1,1, a1,2 + b1,2, a1,3 + b1,3,a2,1 + b2,1, a2,2 + b2,2, and a2,3 + b2,3. In this instance, each pair of corresponding elements(1 and 6, 2, and 5, 3 and 4, etc.) adds up to 7.\n\nLet's try that with Python:\n\n\n```python\nimport numpy as np\n\nA = np.array([[1,2,3],\n [4,5,6]])\nB = np.array([[6,5,4],\n [3,2,1]])\nprint(A + B)\n```\n\n [[7 7 7]\n [7 7 7]]\n\n\n### Subtracting Matrices\nMatrix subtraction works similarly to matrix addition:\n\n\\begin{equation}\\begin{bmatrix}1 & 2 & 3 \\\\4 & 5 & 6\\end{bmatrix}- \\begin{bmatrix}6 & 5 & 4 \\\\3 & 2 & 1\\end{bmatrix} = \\begin{bmatrix}-5 & -3 & -1 \\\\1 & 3 & 5\\end{bmatrix}\\end{equation}\n\nHere's the Python code to do this:\n\n\n```python\nimport numpy as np\n\nA = np.array([[1,2,3],\n [4,5,6]])\nB = np.array([[6,5,4],\n [3,2,1]])\nprint (A - B)\n```\n\n [[-5 -3 -1]\n [ 1 3 5]]\n\n\n#### Conformability\nIn the previous examples, we were able to add and subtract the matrices, because the *operands* (the matrices we are operating on) are ***conformable*** for the specific operation (in this case, addition or subtraction). To be conformable for addition and subtraction, the operands must have the same number of rows and columns. 
There are different conformability requirements for other operations, such as multiplication; which we'll explore later.\n\n### Negative Matrices\nThe nagative of a matrix, is just a matrix with the sign of each element reversed:\n\n\\begin{equation}C = \\begin{bmatrix}-5 & -3 & -1 \\\\1 & 3 & 5\\end{bmatrix}\\end{equation}\n\n\\begin{equation}-C = \\begin{bmatrix}5 & 3 & 1 \\\\-1 & -3 & -5\\end{bmatrix}\\end{equation}\n\nLet's see that with Python:\n\n\n```python\nimport numpy as np\n\nC = np.array([[-5,-3,-1],\n [1,3,5]])\nprint (C)\nprint (-C)\n```\n\n [[-5 -3 -1]\n [ 1 3 5]]\n [[ 5 3 1]\n [-1 -3 -5]]\n\n\n### Matrix Transposition\nYou can *transpose* a matrix, that is switch the orientation of its rows and columns. You indicate this with a superscript **T**, like this:\n\n\\begin{equation}\\begin{bmatrix}1 & 2 & 3 \\\\4 & 5 & 6\\end{bmatrix}^{T} = \\begin{bmatrix}1 & 4\\\\2 & 5\\\\3 & 6 \\end{bmatrix}\\end{equation}\n\nIn Python, both *numpy.**array*** and *numpy.**matrix*** have a **T** function:\n\n\n```python\nimport numpy as np\n\nA = np.array([[1,2,3],\n [4,5,6]])\nprint(A.T)\n# or use\nprint('\\n', A.transpose())\n```\n\n [[1 4]\n [2 5]\n [3 6]]\n \n [[1 4]\n [2 5]\n [3 6]]\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "9e9a20a81af0544e81df8ebf6bcac45d3a0225d8", "size": 11580, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "03_03_Matrices.ipynb", "max_stars_repo_name": "verryp/learning-phyton", "max_stars_repo_head_hexsha": "103470ae49652755f146baaa353d3c72e6588a4a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "03_03_Matrices.ipynb", "max_issues_repo_name": "verryp/learning-phyton", "max_issues_repo_head_hexsha": "103470ae49652755f146baaa353d3c72e6588a4a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "03_03_Matrices.ipynb", "max_forks_repo_name": "verryp/learning-phyton", "max_forks_repo_head_hexsha": "103470ae49652755f146baaa353d3c72e6588a4a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.0, "max_line_length": 567, "alphanum_fraction": 0.4507772021, "converted": true, "num_tokens": 1553, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9173026550642019, "lm_q2_score": 0.9046505415276079, "lm_q1q2_score": 0.8298383436485428}} {"text": "```python\n# Geometric Properties of Hollow Rectangle\n# E.Durham 5-Jul-2019\n```\n\n\n\n\n```python\nfrom sympy import *\n# from sympy import symbols\n```\n\n\n```python\n# define symbols\nb, d, t = symbols('b d t')\n```\n\n\n```python\nA = 2*t*(d+b-2*t) # Area\nI_x = (b*d**3 - (b-2*t) * (d-2*t)**3) / 12 # Second Moment of Inertia for Major Axis\nI_y = (d*b**3 - (d-2*t) * (b-2*t)**3) / 12 # Second Moment of Inertia for Minor Axis\nS_x = (b*d**3 - (b-2*t) * (d-2*t)**3) / (6*d) # Elastic Section Modulus for Major Axis\nS_y = (d*b**3 - (d-2*t) * (b-2*t)**3) / (6*b) # Elastic Section Modulus for Minor Axis\nr_x = sqrt((b*d**3 - (b-2*t) * (d-2*t)**3) / (24*t * (d+b-2*t))) # Radius of Gyration for Major Axis\nr_y = sqrt((d*b**3 - (d-2*t) * (b-2*t)**3) / (24*t * (b+d-2*t))) # Radius of Gyration for Minor Axis\nZ_x = b*t*(d-t) + 2*t * ((d/2)-t)**2 # Plastic Section Modulus for Major Axis\nZ_y = d*t*(b-t) + 2*t * ((b/2)-t)**2 # Plastic Section Modulus for Minor Axis\nJ = (2*t*(d-t)**2 * (b-t)**2) / (d+b-2*t) # St. Venant's Torsional Constant\n```\n\n\n```python\n# Specify Depth, d, Width, b, AND Wall Thickness, t in identical units\n# Example: \n# if case values are: d = 3.00, b = 2.00 and t = 0.250\n# then use case_values = [(d, 3.00), (b, 2.00), (t, 0.250)]\n# Enter actual case values below per above\ncase_values = [(d, 3.00), (b, 2.00), (t, 0.250)]\n```\n\n\n```python\nprint('Geometric Properties:')\nprint('Given:')\nprint('Depth, ', case_values[0], 'unit')\nprint('Width, ', case_values[1], 'unit')\nprint('Wall, ', case_values[2], 'unit')\nprint('A =', A.subs(case_values).evalf(3), 'unit**2')\nprint('I_x =', I_x.subs(case_values).evalf(3), 'unit**4')\nprint('S_x =', S_x.subs(case_values).evalf(3), 'unit**3')\nprint('Z_x =', Z_x.subs(case_values).evalf(3), 'unit**3')\nprint('r_x =', r_x.subs(case_values).evalf(3), 'unit')\nprint('I_y =', I_y.subs(case_values).evalf(3), 'unit**4')\nprint('S_y =', S_y.subs(case_values).evalf(3), 'unit**3')\nprint('Z_y =', Z_y.subs(case_values).evalf(3), 'unit**3')\nprint('r_y =', r_y.subs(case_values).evalf(3), 'unit')\nprint('J =', J.subs(case_values).evalf(3), 'unit**4')\n```\n\n Geometric Properties:\n Given:\n Depth, (d, 3.0) unit\n Width, (b, 2.0) unit\n Wall, (t, 0.25) unit\n A = 2.25 unit**2\n I_x = 2.55 unit**4\n S_x = 1.70 unit**3\n Z_x = 2.16 unit**3\n r_x = 1.06 unit\n I_y = 1.30 unit**4\n S_y = 1.30 unit**3\n Z_y = 1.59 unit**3\n r_y = 0.759 unit\n J = 2.57 unit**4\n\n\n\n```python\n# Display formulas in proper math formatting\ninit_printing()\nprint('Area , A ='); A\n\n```\n\n\n```python\nprint('Second Moment of Inertia, I ='); I_x\n```\n\n\n```python\nprint('Elastic Section Modulus, S ='); S_x\n```\n\n\n```python\nprint('Plastic Section Modulus, Z ='); Z_x\n```\n\n\n```python\nprint('Radius of Gyration, r ='); r_x\n```\n\n\n```python\nprint(\"St. Venant's Torsional Constant, J =\"); J\n```\n\n## Test Values and Expected Results\n### RT 3 x x 1/4 from Aluminum Design Manual, 2015 page V-39\nd = 3\nb = 2\nt = 0.25\nA = 2.25\nI_x = 2.55\nS_x = 1.70\nZ_x = 2.16\nr_x = 1.06\nI_y = 1.30\nS_y = 1.30\nZ_y = 1.59\nr_y = 0.759\nJ = 2.57\n\n## Glossary of Torsional Terms [1]\n### HSS Shear Constant\nThe shear constant, $C_{RT}$, is used for calculating the maximum shear stress due to an applied shear force. 
\nFor hollow structural section, the maximum shear stress in the cross section is given by: \n$\\tau_{max} = \\frac{V Q}{2 t I}$ \nwhere $V$ is the applied shear force, $Q$ is the statical moment of the portion of the section lying outside the neutral axis taken about the neutral axis, $I$ is the moment of inertia, and $t$ is the wall thickness. \nThe shear constant is expressed as the ratio of the applied shear force to the maximum shear stress [3]: \n$C_{RT} = \\frac{V}{\\tau_{max}} = \\frac{2 t I}{Q}$ \n### HSS Torsional Constant\nThe torsional constant, $C$, is used for calculating the shear stress due to an applied torqu. It is expressed as the ratio of the applied torque, $T$, to the shear stress in the cross section, $\\tau$: \n$C = \\frac{T}{\\tau}$ \n\n### St. Venant Torsional Constant\nThe St. Venant torsional constant, $J$, measures the resistance of a structural member to *pure* or *uniform* torsion. It is used in calculating the buckling moment resistance of laterally unsupported beams and torsional-flexural buckling of compression members in accordance with CSA S16. \n\nFor open cross section, the general formula is given by Galambos (1968): \n$J = \\sum(\\frac{b't^3}{3})$ \nwhere $b'$ are the plate lengths between points of intersection on their axes, and $t$ are the plate thicknesses. Summation includes all component plates. It is noted that the tabulated values in the Handbook of Steel Construction are based on net plate lengths instead of lengths between intersection points, a mostly conservative approach. \n\nThe expressions for $J$ given herein do not take into account the flange-to-web fillets. Formulas which account for this effect are given by El Darwish and Johnston (1965).\n\nFor thin-walled closed sections, the general formula is given by Salmon and Johnson (1980): \n$J = \\frac{4 A_o^2}{\\int_s ds/t}$ \n\nwhere $A_o$ is the enclosed area by the walls, $t$ is the wall thickness, $ds$ is a length element along the perimeter. Integration is performed over the entire perimeter $S$.\n\n### Warping Torsional Constant\nThe warping torsional constant, $C_w$, measures the resistance of a structural member to *nonuniform* or *warping* torsion. It is used in calculating the buckling moment resistance of laterally unsupported beams and torsional-flexural buckling of compression members in accordance with CSA Standard S16.\n\nFor open section, a general calculation method is given by Galambos (1968). For section in which all component plates meet at a single poitn, such as angles and T-sections, a calculation method is given by Bleich (1952). For hollow structural sections (HSS), warping deformations are small, and the warping torsional constant is generally taken as zero. \n\n\n\n### References for Torsional Constants\n[1] CISC, 2002. Torsional Section Properties of Steel Shapes. \nCanadian Institute of Steel Construction, 2002\n\n[2] Seaburg, P.A. and Carter, C.J. 1997. Torsional Analysis of Structural Steel Members. \nAmerican Institute of Steel Construction, Chicago, Ill.\n\n[3] Stelco. 1981. Hollow Structural Sections - Sizes and Section Properties, 6th Edition. \nStelco Inc., Hamilton, Ont.\n\n[4] CISC 2016. 
Handbook of Steel Construction, 11th Edition, page 7-84\nwww.cisc-icca.ca\n\n### Revision History\n0.1 - 2019-07-08 - E.Durham - added graphic image\n0.0 - 2019-07-05 - E.Durham - Created initial notebook\n", "meta": {"hexsha": "9986924094bb5e5eb990849713f8120fa9d80c20", "size": 35177, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Properties_of_Geometric_Sections/Hollow_Rectangle/Hollow_Rectangle.ipynb", "max_stars_repo_name": "Ewdy/Structural-Design", "max_stars_repo_head_hexsha": "b4bf079c2153e7fcac9f09e8b8aafd8400e411c1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2019-07-19T11:43:26.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-24T22:26:03.000Z", "max_issues_repo_path": "Properties_of_Geometric_Sections/Hollow_Rectangle/Hollow_Rectangle.ipynb", "max_issues_repo_name": "Ewdy/Structural-Design", "max_issues_repo_head_hexsha": "b4bf079c2153e7fcac9f09e8b8aafd8400e411c1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 5, "max_issues_repo_issues_event_min_datetime": "2019-07-19T11:31:47.000Z", "max_issues_repo_issues_event_max_datetime": "2019-07-26T12:11:39.000Z", "max_forks_repo_path": "Properties_of_Geometric_Sections/Hollow_Rectangle/Hollow_Rectangle.ipynb", "max_forks_repo_name": "Ewdy/Structural-Design", "max_forks_repo_head_hexsha": "b4bf079c2153e7fcac9f09e8b8aafd8400e411c1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-12-03T21:07:44.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-03T21:07:44.000Z", "avg_line_length": 72.3806584362, "max_line_length": 4674, "alphanum_fraction": 0.781561816, "converted": true, "num_tokens": 2131, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9046505351008904, "lm_q2_score": 0.9173026488471135, "lm_q1q2_score": 0.8298383321290055}} {"text": "```python\n%matplotlib inline\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy import stats\n```\n\n# Monte Carlo integration\n\n## Simple Monte Carlo integration\n\nThe basic idea of Monte Carlo integration is very simple and only requires elementary statistics. Suppose we want to find the value of \n$$\nI = \\int_a^b f(x) dx\n$$\nin some region with volume $V$. Monte Carlo integration estimates this integral by estimating the fraction of random points that fall below $f(x)$ multiplied by $V$. \n\n\nIn a statistical context, we use Monte Carlo integration to estimate the expectation\n$$\nE[g(X)] = \\int_X g(x) p(x) dx\n$$\n\nwith\n\n$$\n\\bar{g_n} = \\frac{1}{n} \\sum_{i=1}^n g(x_i)\n$$\nwhere $x_i \\sim p$ is a draw from the density $p$.\n\nWe can estimate the Monte Carlo variance of the approximation as\n$$\nv_n = \\frac{1}{n^2} \\sum_{o=1}^n (g(x_i) - \\bar{g_n})^2)\n$$\n\nAlso, from the Central Limit Theorem,\n\n$$\n\\frac{\\bar{g_n} - E[g(X)]}{\\sqrt{v_n}} \\sim \\mathcal{N}(0, 1)\n$$\n\nThe convergence of Monte Carlo integration is $\\mathcal{0}(n^{1/2})$ and independent of the dimensionality. Hence Monte Carlo integration generally beats numerical integration for moderate- and high-dimensional integration since numerical integration (quadrature) converges as $\\mathcal{0}(n^{d})$. 
Even for low dimensional problems, Monte Carlo integration may have an advantage when the volume to be integrated is concentrated in a very small region and we can use information from the distribution to draw samples more often in the region of importance.\n\nAn elementary, readable description of Monte Carlo integration and variance reduction techniques can be found [here](https://www.cs.dartmouth.edu/~wjarosz/publications/dissertation/appendixA.pdf).\n\n## Intuition behind Monte Carlo integration\n\nWe want to find some integral \n\n$$I = \\int{f(x)} \\, dx$$\n\nConsider the expectation of a function $g(x)$ with respect to some distribution $p(x)$. By definition, we have\n\n$$\nE[g(x)] = \\int{g(x) \\, p(x) \\, dx}\n$$\n\nIf we choose $g(x) = f(x)/p(x)$, then we have\n\n$$\n\\begin{align}\nE[g(x)] &= \\int{\\frac{f(x}{p(x)} \\, p(x) \\, dx} \\\\\n&= \\int{f(x) dx} \\\\\n&= I\n\\end{align}\n$$\n\nBy the law of large numbers, the average converges on the expectation, so we have\n\n$$\nI \\approx \\bar{g_n} = \\frac{1}{n} \\sum_{i=1}^n g(x_i)\n$$\n\nIf $f(x)$ is a proper integral (i.e. bounded), and $p(x)$ is the uniform distribution, then $g(x) = f(x)$ and this is known as ordinary Monte Carlo. If the integral of $f(x)$ is improper, then we need to use another distribution with the same support as $f(x)$.\n\n**Example: Estimating $\\pi$**\n\nWe have a function \n\n$$\nf(x, y) = \n\\begin{cases}\n1 & \\text{if}\\ x^2 + y^2 \\le 1 \\\\\n0 & \\text{otherwise}\n\\end{cases}\n$$\n\nwhose integral is\n$$\nI = \\int_{-1}^{1} \\int_{-1}^{1} f(x,y) dx dy = \\pi\n$$\n\nSo a Monte Carlo estimate of $\\pi$ is \n$$\nQ = 4 \\sum_{i=1}^{N} f(x, y)\n$$\n\nif we sample $p$ from the standard uniform distribution in $\\mathbb{R}^2$.\n\n\n```python\nx = np.linspace(-3,3,100)\ndist = stats.norm(0,1)\na = -2\nb = 0\nplt.plot(x, dist.pdf(x))\nplt.fill_between(np.linspace(a,b,100), dist.pdf(np.linspace(a,b,100)), alpha=0.5)\nplt.text(b+0.1, 0.1, 'p=%.4f' % (dist.cdf(b) - dist.cdf(a)), fontsize=14)\npass\n```\n\n#### Using quadrature\n\n\n```python\nfrom scipy.integrate import quad\n```\n\n\n```python\ny, err = quad(dist.pdf, a, b)\ny\n```\n\n\n\n\n 0.47724986805182085\n\n\n\n#### Simple Monte Carlo integration\n\nIf we can sample directly from the target distribution $N(0,1)$\n\n\n```python\nn = 1000000\nx = dist.rvs(n)\nnp.sum((a < x) & (x < b))/n\n```\n\n\n\n\n 0.477918\n\n\n\nIf we cannot sample directly from the target distribution $N(0,1)$ but can evaluate it at any point. \n\nRecall that $g(x) = \\frac{f(x)}{p(x)}$. Since $p(x)$ is $U(a, b)$, $p(x) = \\frac{1}{b-a}$. So we want to calculate\n\n$$\n\\frac{1}{n} \\sum_{i=1}^n (b-a) f(x)\n$$\n\n\n```python\nn = 10000\nx = np.random.uniform(a, b, n)\nnp.mean((b-a)*dist.pdf(x))\n```\n\n\n\n\n 0.48069595176138846\n\n\n\n## Intuition for error rate\n\nWe will just work this out for a proper integral $f(x)$ defined in the unit cube and bounded by $|f(x)| \\le 1$. Draw a random uniform vector $x$ in the unit cube. Then\n\n$$\n\\begin{align}\nE[f(x_i)] &= \\int{f(x) p(x) dx} = I \\\\\n\\text{Var}[f(x_i)] &= \\int{(f(x_i) - I )^2 p(x) \\, dx} \\\\\n&= \\int{f(x)^2 \\, p(x) \\, dx} - 2I \\int(f(x) \\, p(x) \\, dx + I^2 \\int{p(x) \\, dx} \\\\\n&= \\int{f(x)^2 \\, p(x) \\, dx} + I^2 \\\\\n& \\le \\int{f(x)^2 \\, p(x) \\, dx} \\\\\n& \\le \\int{p(x) \\, dx} = 1\n\\end{align}\n$$\n\nNow consider summing over many such IID draws $S_n = f(x_1) + f(x_2) + \\cdots + f(x_n)$. 
We have\n\n$$\n\\begin{align}\nE[S_n] &= nI \\\\\n\\text{Var}[S_n] & \\le n\n\\end{align}\n$$\n\nand as expected, we see that $I \\approx S_n/n$. From Chebyshev's inequality,\n\n$$\n\\begin{align}\nP \\left( \\left| \\frac{s_n}{n} - I \\right| \\ge \\epsilon \\right) &= \nP \\left( \\left| s_n - nI \\right| \\ge n \\epsilon \\right) & \\le \\frac{\\text{Var}[s_n]}{n^2 \\epsilon^2} & \\le\n\\frac{1}{n \\epsilon^2} = \\delta\n\\end{align}\n$$\n\nSuppose we want 1% accuracy and 99% confidence - i.e. set $\\epsilon = \\delta = 0.01$. The above inequality tells us that we can achieve this with just $n = 1/(\\delta \\epsilon^2) = 1,000,000$ samples, regardless of the data dimensionality.\n\n### Example\n\nWe want to estimate the following integral $\\int_0^1 e^x dx$. \n\n\n```python\nx = np.linspace(0, 1, 100)\nplt.plot(x, np.exp(x))\nplt.xlim([0,1])\nplt.ylim([0, np.exp(1)])\npass\n```\n\n#### Analytic solution\n\n\n```python\nfrom sympy import symbols, integrate, exp\n\nx = symbols('x')\nexpr = integrate(exp(x), (x,0,1))\nexpr.evalf()\n```\n\n\n\n\n 1.71828182845905\n\n\n\n#### Using quadrature\n\n\n```python\nfrom scipy import integrate\n\ny, err = integrate.quad(exp, 0, 1)\ny\n```\n\n\n\n\n 1.7182818284590453\n\n\n\n#### Monte Carlo integration\n\n\n```python\nfor n in 10**np.array([1,2,3,4,5,6,7,8]):\n x = np.random.uniform(0, 1, n)\n sol = np.mean(np.exp(x))\n print('%10d %.6f' % (n, sol))\n```\n\n 10 1.847075\n 100 1.845910\n 1000 1.731000\n 10000 1.727204\n 100000 1.719337\n 1000000 1.718142\n 10000000 1.718240\n 100000000 1.718388\n\n\n### Monitoring variance in Monte Carlo integration\n\nWe are often interested in knowing how many iterations it takes for Monte Carlo integration to \"converge\". To do this, we would like some estimate of the variance, and it is useful to inspect such plots. One simple way to get confidence intervals for the plot of Monte Carlo estimate against number of iterations is simply to do many such simulations.\n\nFor the example, we will try to estimate the function (again)\n\n$$\nf(x) = x \\cos 71 x + \\sin 13x, \\ \\ 0 \\le x \\le 1\n$$\n\n\n```python\ndef f(x):\n return x * np.cos(71*x) + np.sin(13*x)\n```\n\n\n```python\nx = np.linspace(0, 1, 100)\nplt.plot(x, f(x))\npass\n```\n\n#### Single MC integration estimate\n\n\n```python\nn = 100\nx = f(np.random.random(n))\ny = 1.0/n * np.sum(x)\ny\n```\n\n\n\n\n 0.03550964669105388\n\n\n\n#### Using multiple independent sequences to monitor convergence\n\nWe vary the sample size from 1 to 100 and calculate the value of $y = \\sum{x}/n$ for 1000 replicates. We then plot the 2.5th and 97.5th percentile of the 1000 values of $y$ to see how the variation in $y$ changes with sample size. 
The blue lines indicate the 2.5th and 97.5th percentiles, and the red line a sample path.\n\n\n```python\nn = 100\nreps = 1000\n\nx = f(np.random.random((n, reps)))\ny = 1/np.arange(1, n+1)[:, None] * np.cumsum(x, axis=0)\nupper, lower = np.percentile(y, [2.5, 97.5], axis=1)\n```\n\n\n```python\nplt.plot(np.arange(1, n+1), y, c='grey', alpha=0.02)\nplt.plot(np.arange(1, n+1), y[:, 0], c='red', linewidth=1);\nplt.plot(np.arange(1, n+1), upper, 'b', np.arange(1, n+1), lower, 'b')\npass\n```\n\n#### Using bootstrap to monitor convergence\n\nIf it is too expensive to do 1000 replicates, we can use a bootstrap instead.\n\n\n```python\nxb = np.random.choice(x[:,0], (n, reps), replace=True)\nyb = 1/np.arange(1, n+1)[:, None] * np.cumsum(xb, axis=0)\nupper, lower = np.percentile(yb, [2.5, 97.5], axis=1)\n```\n\n\n```python\nplt.plot(np.arange(1, n+1)[:, None], yb, c='grey', alpha=0.02)\nplt.plot(np.arange(1, n+1), yb[:, 0], c='red', linewidth=1)\nplt.plot(np.arange(1, n+1), upper, 'b', np.arange(1, n+1), lower, 'b')\npass\n```\n\n## Variance Reduction\n\nWith independent samples, the variance of the Monte Carlo estimate is \n\n\n$$\n\\begin{align}\n\\text{Var}[\\bar{g_n}] &= \\text{Var} \\left[ \\frac{1}{N}\\sum_{i=1}^{N} \\frac{f(x_i)}{p(x_i)} \\right] \\\\\n&= \\frac{1}{N^2} \\sum_{i=1}^{N} \\text{Var} \\left[ \\frac{f(x_i)}{p(x_i)} \\right] \\\\\n&= \\frac{1}{N^2} \\sum_{i=1}^{N} \\text{Var}[Y_i] \\\\\n&= \\frac{1}{N} \\text{Var}[Y_i]\n\\end{align}\n$$\n\nwhere $Y_i = f(x_i)/p(x_i)$. In general, we want to make $\\text{Var}[\\bar{g_n}]$ as small as possible for the same number of samples. There are several variance reduction techniques (also colorfully known as Monte Carlo swindles) that have been described - we illustrate the change of variables and importance sampling techniques here.\n\n### Change of variables\n\nThe Cauchy distribution is given by \n$$\nf(x) = \\frac{1}{\\pi (1 + x^2)}, \\ \\ -\\infty \\lt x \\lt \\infty \n$$\n\nSuppose we want to integrate the tail probability $P(X > 3)$ using Monte Carlo. One way to do this is to draw many samples form a Cauchy distribution, and count how many of them are greater than 3, but this is extremely inefficient.\n\n#### Only 10% of samples will be used\n\n\n```python\nimport scipy.stats as stats\n\nh_true = 1 - stats.cauchy().cdf(3)\nh_true\n```\n\n\n\n\n 0.10241638234956674\n\n\n\n\n```python\nn = 100\n\nx = stats.cauchy().rvs(n)\nh_mc = 1.0/n * np.sum(x > 3)\nh_mc, np.abs(h_mc - h_true)/h_true\n```\n\n\n\n\n (0.13, 0.26932817794994063)\n\n\n\n#### A change of variables lets us use 100% of draws\n\nWe are trying to estimate the quantity\n\n$$\n\\int_3^\\infty \\frac{1}{\\pi (1 + x^2)} dx\n$$\n\nUsing the substitution $y = 3/x$ (and a little algebra), we get\n\n$$\n\\int_0^1 \\frac{3}{\\pi(9 + y^2)} dy\n$$\n\nHence, a much more efficient MC estimator is \n\n$$\n\\frac{1}{n} \\sum_{i=1}^n \\frac{3}{\\pi(9 + y_i^2)}\n$$\n\nwhere $y_i \\sim \\mathcal{U}(0, 1)$.\n\n\n```python\ny = stats.uniform().rvs(n)\nh_cv = 1.0/n * np.sum(3.0/(np.pi * (9 + y**2)))\nh_cv, np.abs(h_cv - h_true)/h_true\n```\n\n\n\n\n (0.10219440906830025, 0.002167361082027339)\n\n\n\n### Importance sampling\n\nSuppose we want to evaluate\n\n$$\nI = \\int{h(x)\\,p(x) \\, dx}\n$$\n\nwhere $h(x)$ is some function and $p(x)$ is the PDF of $y$. 
If it is hard to sample directly from $p$, we can introduce a new density function $q(x)$ that is easy to sample from, and write\n\n$$\nI = \\int{h(x)\\, p(x)\\, dx} = \\int{h(x)\\, \\frac{p(x)}{q(x)} \\, q(x) \\, dx}\n$$\n\nIn other words, we sample from $h(y)$ where $y \\sim q$ and weight it by the likelihood ratio $\\frac{p(y)}{q(y)}$, estimating the integral as\n\n$$\n\\frac{1}{n}\\sum_{i=1}^n \\frac{p(y_i)}{q(y_i)} h(y_i)\n$$\n\nSometimes, even if we can sample from $p$ directly, it is more efficient to use another distribution.\n\n#### Example\n\nSuppose we want to estimate the tail probability of $\\mathcal{N}(0, 1)$ for $P(X > 5)$. Regular MC integration using samples from $\\mathcal{N}(0, 1)$ is hopeless since nearly all samples will be rejected. However, we can use the exponential density truncated at 5 as the importance function and use importance sampling. Note that $h$ here is simply the identify function.\n\n\n```python\nx = np.linspace(4, 10, 100)\nplt.plot(x, stats.expon(5).pdf(x))\nplt.plot(x, stats.norm().pdf(x))\npass\n```\n\n#### Expected answer\n\nWe expect about 3 draws out of 10,000,000 from $\\mathcal{N}(0, 1)$ to have a value greater than 5. Hence simply sampling from $\\mathcal{N}(0, 1)$ is hopelessly inefficient for Monte Carlo integration.\n\n\n```python\n%precision 10\n```\n\n\n\n\n '%.10f'\n\n\n\n\n```python\nv_true = 1 - stats.norm().cdf(5)\nv_true\n```\n\n\n\n\n 2.866515719235352e-07\n\n\n\n#### Using direct Monte Carlo integration\n\n\n```python\nn = 10000\ny = stats.norm().rvs(n)\nv_mc = 1.0/n * np.sum(y > 5)\n# estimate and relative error\nv_mc, np.abs(v_mc - v_true)/v_true \n```\n\n\n\n\n (0.0, 1.0)\n\n\n\n#### Using importance sampling\n\n\n```python\nn = 10000\ny = stats.expon(loc=5).rvs(n)\nv_is = 1.0/n * np.sum(stats.norm().pdf(y)/stats.expon(loc=5).pdf(y))\n# estimate and relative error\nv_is, np.abs(v_is- v_true)/v_true\n```\n\n\n\n\n (2.8290057563382236e-07, 0.01308555981236137)\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "1e843c1122c22a0ee754b384a3ad93a4d4401606", "size": 156699, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/copies/lectures/T08C_Monte_Carlo_Integration.ipynb", "max_stars_repo_name": "robkravec/sta-663-2021", "max_stars_repo_head_hexsha": "4dc8018f7b172eaf81da9edc33174768ff157939", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notebooks/copies/lectures/T08C_Monte_Carlo_Integration.ipynb", "max_issues_repo_name": "robkravec/sta-663-2021", "max_issues_repo_head_hexsha": "4dc8018f7b172eaf81da9edc33174768ff157939", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/copies/lectures/T08C_Monte_Carlo_Integration.ipynb", "max_forks_repo_name": "robkravec/sta-663-2021", "max_forks_repo_head_hexsha": "4dc8018f7b172eaf81da9edc33174768ff157939", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 165.6437632135, "max_line_length": 35284, "alphanum_fraction": 0.8975360404, "converted": true, "num_tokens": 4117, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.8947894632969137, "lm_q2_score": 0.9273632926354617, "lm_q1q2_score": 0.8297949028985434}} {"text": "Un sistema de orden $4$ se puede escribir en su forma controlador de la siguiente manera:\n\n$$\n\\frac{d}{dt} x_c =\n\\begin{pmatrix}\n0 & 1 & 0 & 0 \\\\\n0 & 0 & 1 & 0 \\\\\n0 & 0 & 0 & 1 \\\\\n-a_4 & -a_3 & -a_2 & -a_1\n\\end{pmatrix} x_c +\n\\begin{pmatrix}\n0 \\\\\n0 \\\\\n0 \\\\\n1\n\\end{pmatrix} u\n$$\n\n\n\n```\nfrom sympy import init_printing\ninit_printing()\n```\n\n\n```\nfrom sympy import Matrix, var, eye, collect, expand\n```\n\n\n```\nvar(\"s, a1, a2, a3, a4, p1, p2, p3, p4\")\n```\n\n\n```\nAc = Matrix([[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [-a4, -a3, -a2, -a1]])\nbc = Matrix([[0],[0],[0],[1]])\n```\n\n\n```\nAc\n```\n\n\n```\n(s*eye(4) - Ac).det()\n```\n\nEste es el polinomio caracteristico del sistema:\n\n$$\n\\det{(sI - A_c)} = s^4 + a_1 s^3 + a_2 s^2 + a_3 s + a_4\n$$\n\nTambien podemos definir una retroalimentaci\u00f3n de estado:\n\n$$\nu = f_c x_c + v\n$$\n\nen donde\n\n$$\nf_c =\n\\begin{pmatrix}\na_4 - \\bar{a}_4 & a_3 - \\bar{a}_3 & a_2 - \\bar{a}_2 & a_1 - \\bar{a}_1\n\\end{pmatrix}\n$$\n\nLa ecuaci\u00f3n de estado del sistema retroalimentada quedar\u00eda:\n\n$$\n\\frac{d}{dt} x_c = A_{f_c} x_c + b_c v\n$$\n\ndonde $A_{f_c} = A_c + b_c f_c$\n\n\n```\nfc = Matrix([[a4-p4, a3-p3, a2-p2, a1-p1]])\n```\n\n\n```\nAfc = Ac + bc*fc\n```\n\n\n```\nAfc\n```\n\ny su polinomio caracteristico es:\n\n\n```\n(s*eye(4) - Afc).det()\n```\n\nLa matriz de controlabilidad del par $(A_c, b_c)$ es:\n\n\n```\nCAcbc = ((bc.row_join(Ac*bc)).row_join(Ac*Ac*bc)).row_join(Ac*Ac*Ac*bc)\n```\n\n\n```\nCAcbc\n```\n\ny su determinante es:\n\n\n```\nCAcbc.det()\n```\n\nSi ahora analizamos la matriz de controlabilidad del sistema retroalimentado:\n\n\n\n\n```\nCAfcbc = ((bc.row_join(Afc*bc)).row_join(Afc*Afc*bc)).row_join(Afc*Afc*Afc*bc)\n```\n\n\n```\nCAfcbc\n```\n\nPero en general, podemos demostrar que la matriz de un sistema retroalimentado nos es mas que:\n\n$$\nC_{(A_f, b)} =\n\\begin{pmatrix}\nb & A_f b & \\dots & A_f^{n-1} b\n\\end{pmatrix} =\n\\begin{pmatrix}\nb & \\left( A + b f^T \\right) b & \\dots & \\left( A + b f^T \\right)^{n-1} b\n\\end{pmatrix}\n$$\n\nen donde los terminos van siendo:\n\n\n```\nvar(\"A, b, f\")\n```\n\n\n```\nb\n```\n\n\n```\nexpand((A + b*f)*b)\n```\n\n\n```\nexpand((A + b*f)**2*b)\n```\n\n\n```\nexpand((A + b*f)**3*b)\n```\n\nLo que en general, podemos ver como:\n\n$$\nC_{(A_f, b)} =\n\\begin{pmatrix}\nb & A b & \\dots & A^{n-1} b\n\\end{pmatrix} X\n$$\n\ndonde $X$ tiene que ser de la forma:\n\n$$\nX =\n\\begin{pmatrix}\n1 & b f & b^2 f^2 & b^3 f^3 \\\\\n0 & 1 & b f & b^2 f^2\\\\\n0 & 0 & 1 & b f \\\\\n0 & 0 & 0 & 1\n\\end{pmatrix}\n$$\n\n\n```\nX = Matrix([[1, b*f, b**2*f**2 , b**3*f**3], [0, 1, b*f, b**2*f**2], [0, 0, 1, b*f], [0, 0, 0, 1]])\n```\n\n\n```\nX\n```\n\n\n```\nCAb = Matrix([[b, A*b, A**2*b, A**3*b]])\n```\n\n\n```\nCAb\n```\n\n\n```\nCAb*X\n```\n\nY como habiamos visto anteriormente, el determinante de esta matriz, no cambiar\u00e1:\n\n\n```\nCAfcbc.det()\n```\n", "meta": {"hexsha": "382791fe23f6c29ee800deed84461ab90c1b70cb", "size": 42702, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "IPythonNotebooks/Teoria de Control I/Forma controlador.ipynb", "max_stars_repo_name": "chelizalde/DCA", "max_stars_repo_head_hexsha": "34fd4d500117a9c0a75b979b8b0f121c1992b9dc", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, 
"max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "IPythonNotebooks/Teoria de Control I/Forma controlador.ipynb", "max_issues_repo_name": "chelizalde/DCA", "max_issues_repo_head_hexsha": "34fd4d500117a9c0a75b979b8b0f121c1992b9dc", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "IPythonNotebooks/Teoria de Control I/Forma controlador.ipynb", "max_forks_repo_name": "chelizalde/DCA", "max_forks_repo_head_hexsha": "34fd4d500117a9c0a75b979b8b0f121c1992b9dc", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-03-20T12:44:13.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-20T12:44:13.000Z", "avg_line_length": 60.570212766, "max_line_length": 3385, "alphanum_fraction": 0.7201770409, "converted": true, "num_tokens": 1164, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9273632856092016, "lm_q2_score": 0.8947894583870633, "lm_q1q2_score": 0.8297948920583049}} {"text": "```python\n%matplotlib notebook\nimport numpy as np\nimport matplotlib.pyplot as pl\nimport matplotlib as mpl\n```\n\n# Simulating Reaction-Diffusion Systems\n\nIn this notebook we want to show step by step how to simulate a reaction-diffusion system in Python. Here, we decided for the [Gray-Scott model](http://www.karlsims.com/rd.html) because of the variety of shapes it can produce.\n\nIn this models, we have two chemicals $A$ and $B$ which are distributed across a grid of size $N$. The symbol $A_{ij}$ represents the concentration of chemical $A$ at grid coordinates $(i,j)$ (similar for $B$).\n\nThe discrete reaction-diffusion update equations are \n\n$$\n\\begin{align}\nA_{ij}(t+1) &= A_{ij}(t) + \\Big[D_A (\\nabla^2 A)_{ij} - A_{ij}B_{ij}^2 + f(1-A_{ij}) \\Big]\\times\\Delta t\\\\\nB_{ij}(t+1) &= B_{ij}(t) + \\Big[D_B (\\nabla^2 B)_{ij} + A_{ij}B_{ij}^2 - (k+f)B_{ij} \\Big]\\times\\Delta t\n\\end{align}\n$$\n\nThe single terms are explained in the tutorial linked above.\n\nFirst, we need to take care of the discretized Laplacian terms. \n\n## Discrete Laplacian\nAs explained on [Wikipedia](https://en.wikipedia.org/wiki/Discrete_Laplace_operator#Implementation_via_operator_discretization), the discretized Laplacian of a grid cell $(i,j)$ can be computed by summing over neighboring cells and subtract the value of the original cell with the total weight. One possible implementation is to only recognize direct neighbors of grid difference $\\Delta=1$, i.e. at $(i,j-1)$, $(i,j+1)$, $(i-1,j)$, and $(i+1,j)$.\n\nThe whole update formula is\n\n$$\n(\\nabla^2 A)_{ij} = A_{i,j-1} + A_{i,j+1} + A_{i-1,j} + A_{i+1,j} - 4A_{ij}\n$$\n\nHow can we do this efficiently in `numpy`? Let's first define a small concentration matrix `A`.\n\n\n```python\nA = np.ones((3,3))\nA[1,1] = 0\nA\n```\n\n\n\n\n array([[1., 1., 1.],\n [1., 0., 1.],\n [1., 1., 1.]])\n\n\n\nNow for each cell, we want to add its right neighbor. 
We can easily access this value in a matrix sense by doing a `numpy.roll`, which shifts all elements in a certain direction, with periodic boundary conditions.\n\n\n```python\nright_neighbor = np.roll(A, # the matrix to permute\n (0,-1), # we want the right neighbor, so we shift the whole matrix -1 in the x-direction)\n (0,1), # apply this in directions (y,x)\n )\nright_neighbor\n```\n\n\n\n\n array([[1., 1., 1.],\n [0., 1., 1.],\n [1., 1., 1.]])\n\n\n\nSo to compute the discrete Laplacian of a matrix $M$, one could use the following function.\n\n\n```python\ndef discrete_laplacian(M):\n \"\"\"Get the discrete Laplacian of matrix M\"\"\"\n L = -4*M\n L += np.roll(M, (0,-1), (0,1)) # right neighbor\n L += np.roll(M, (0,+1), (0,1)) # left neighbor\n L += np.roll(M, (-1,0), (0,1)) # top neighbor\n L += np.roll(M, (+1,0), (0,1)) # bottom neighbor\n \n return L\n```\n\nLet's test this with our example matrix\n\n\n```python\ndiscrete_laplacian(A)\n```\n\n\n\n\n array([[ 0., -1., 0.],\n [-1., 4., -1.],\n [ 0., -1., 0.]])\n\n\n\nSeems like it worked! Note that periodic boundary conditions were used, too.\n\n## Implement update formula\n\nComputing the Laplacian was the hardest part. The other parts are simple. Just take the concentration matrices and apply the update formula.\n\n\n```python\ndef gray_scott_update(A, B, DA, DB, f, k, delta_t):\n \"\"\"\n Updates a concentration configuration according to a Gray-Scott model\n with diffusion coefficients DA and DB, as well as feed rate f and\n kill rate k.\n \"\"\"\n \n # Let's get the discrete Laplacians first\n LA = discrete_laplacian(A)\n LB = discrete_laplacian(B)\n \n # Now apply the update formula\n diff_A = (DA*LA - A*B**2 + f*(1-A)) * delta_t\n diff_B = (DB*LB + A*B**2 - (k+f)*B) * delta_t\n \n A += diff_A\n B += diff_B\n \n return A, B\n```\n\n## Choosing initial conditions\n\nThe initial conditions are very important in the Gray-Scott model. If you just randomize the initial conditions it might happen that everything just dies out. It seems to be a good idea to assume a homogeneous distribution of chemicals with a small disturbance which can then produce some patterns. We can also add a bit of noise. We can do the same decisions as Rajesh Singh in his [somewhat more elaborate version](https://rajeshrinet.github.io/blog/2016/gray-scott/) and disturb with a square in the center of the grid. Let's do the following.\n\n\n```python\ndef get_initial_configuration(N, random_influence=0.2):\n \"\"\"\n Initialize a concentration configuration. 
N is the side length\n of the (N x N)-sized grid.\n `random_influence` describes how much noise is added.\n \"\"\"\n \n # We start with a configuration where on every grid cell \n # there's a lot of chemical A, so the concentration is high\n A = (1-random_influence) * np.ones((N,N)) + random_influence * np.random.random((N,N))\n \n # Let's assume there's only a bit of B everywhere\n B = random_influence * np.random.random((N,N))\n \n # Now let's add a disturbance in the center\n N2 = N//2\n radius = r = int(N/10.0)\n \n A[N2-r:N2+r, N2-r:N2+r] = 0.50\n B[N2-r:N2+r, N2-r:N2+r] = 0.25\n \n return A, B\n```\n\nLet's also add a function which makes nice drawings and then draw some initial configurations.\n\n\n```python\ndef draw(A,B):\n \"\"\"draw the concentrations\"\"\"\n fig, ax = pl.subplots(1,2,figsize=(5.65,4))\n ax[0].imshow(A, cmap='Greys')\n ax[1].imshow(B, cmap='Greys')\n ax[0].set_title('A')\n ax[1].set_title('B')\n ax[0].axis('off')\n ax[1].axis('off')\n```\n\n\n```python\nA, B = get_initial_configuration(200)\ndraw(A,B)\n```\n\nNow we can simulate! We should first decide for some parameter choices. Let's stick with [Rajesh's choices](https://rajeshrinet.github.io/blog/2016/gray-scott/).\n\n\n```python\n# update in time\ndelta_t = 1.0\n\n# Diffusion coefficients\nDA = 0.16\nDB = 0.08\n\n# define feed/kill rates\nf = 0.060\nk = 0.062\n\n# grid size\nN = 200\n\n# simulation steps\nN_simulation_steps = 10000\n```\n\nAnd for the simulation we simply update the concentrations for `N_simulation_steps` time steps.\n\n\n```python\nA, B = get_initial_configuration(200)\n\nfor t in range(N_simulation_steps):\n A, B = gray_scott_update(A, B, DA, DB, f, k, delta_t)\n \ndraw(A,B)\n```\n\nIsn't this nice? You might also want to try the following values (directly taken from Rajesh's version above):\n\n\n```python\nDA, DB, f, k = 0.14, 0.06, 0.035, 0.065 # bacteria\nA, B = get_initial_configuration(200)\n\nfor t in range(N_simulation_steps):\n A, B = gray_scott_update(A, B, DA, DB, f, k, delta_t)\n \ndraw(A,B)\n```\n", "meta": {"hexsha": "80e3a86625e31da01838be01a3dfa488ed0e47ef", "size": 376005, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "gray_scott.ipynb", "max_stars_repo_name": "benmaier/reaction-diffusion", "max_stars_repo_head_hexsha": "ce9be6121d36740235c9a152ebdd191f164e784f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 61, "max_stars_repo_stars_event_min_datetime": "2018-12-15T10:33:59.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-29T19:30:12.000Z", "max_issues_repo_path": "gray_scott.ipynb", "max_issues_repo_name": "benmaier/reaction-diffusion", "max_issues_repo_head_hexsha": "ce9be6121d36740235c9a152ebdd191f164e784f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "gray_scott.ipynb", "max_forks_repo_name": "benmaier/reaction-diffusion", "max_forks_repo_head_hexsha": "ce9be6121d36740235c9a152ebdd191f164e784f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 15, "max_forks_repo_forks_event_min_datetime": "2018-12-14T19:38:25.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-09T01:07:48.000Z", "avg_line_length": 901.690647482, "max_line_length": 158856, "alphanum_fraction": 0.9554234651, "converted": true, "num_tokens": 1938, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9304582497090321, "lm_q2_score": 0.8918110454379297, "lm_q1q2_score": 0.8297929444093581}} {"text": "# Rotation Curve of The Solar System\n\n\n\nWe learned in Module1/BonusChallenge1 about Kepler's laws of planetary motion, and how you can use the known periods and distances of satellites to calculate the mass of the body they are orbiting using Kepler's third law:\n\n\\begin{equation}\nP^2 = \\frac{4 \\pi^2}{G \\, M} a^3\n\\end{equation}\n\nWhere $M$ is the mass of the central object. Another way to write that equation is in terms of the $speed$ a satellite is traveling around the object they are orbiting:\n\n\\begin{equation}\nv = \\sqrt{\\frac{G \\, M}{R}}\n\\end{equation}\n\nWhere $G$ is the gravitational constant, $M$ is the mass of the central object, and $R$ is the distance to the satellite. (__Bonus:__ Can you demonstrate this?)\n\nBecause velocities are relatively easy to calculate using the $Doppler$ $shift$ of light, it is often convenient to use the latter equation. Let's make a \"rotation curve\" for the Solar System.\n\nFirst, let's calculate the expected rotation curve for our Solar System using the equations above. Let's start be defining a function that calculates the expected velocity of a planet in the Solar System at a given radius:\n\n\n```python\n#FILL IN CODE\ndef velocity(#FILL IN):\n #FILL IN CODE\n```\n\nNow create an array of radii that you want to calculate the velocity at. Then, calculate the expected velocities at all of those radii:\n\n\n```python\n# FILL IN CODE\nR = # FILL IN CODE\n\n# USE THIS ARRAY AND YOUR VELOCITY FUNCTION TO CALCULATE VELOCITY:\n\n```\n\nNext, let's make a plot of $v$ vs. $R$ for the Solar System. Make sure the axes are labelled properly! Also be sure to check your units throughout.\n\n\n```python\n\n```\n\nNow that we've done that, let's see how the expected values for the velocities of the planets in our Solar System compare with the actual values. Collect both $v$ and $R$ into lists or arrays below:\n\n\n```python\n#FILL IN CODE\nv_planets = # FILL IN CODE\nR_planets = # FILL IN CODE\n```\n\nAdd those values to the plot you created above:\n\n\n```python\n\n```\n\nHow do the actual values compare with the measured values? Hopefully they match well! 
What do you notice about how the velocity changes with radius?\n\n\n```python\n\n```\n", "meta": {"hexsha": "60d76e31ec23bc067c8af304654d548c383b75d9", "size": 4038, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "BonusProblems/Module2/BonusChallenge1.ipynb", "max_stars_repo_name": "psheehan/CIERA-HS-Program", "max_stars_repo_head_hexsha": "76f7f0ff994e74e646fa34bbb41c314bf7526e9b", "max_stars_repo_licenses": ["Naumen", "Condor-1.1", "MS-PL"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2019-06-25T02:36:49.000Z", "max_stars_repo_stars_event_max_datetime": "2020-06-09T21:44:41.000Z", "max_issues_repo_path": "BonusProblems/Module2/BonusChallenge1.ipynb", "max_issues_repo_name": "psheehan/CIERA-HS-Program", "max_issues_repo_head_hexsha": "76f7f0ff994e74e646fa34bbb41c314bf7526e9b", "max_issues_repo_licenses": ["Naumen", "Condor-1.1", "MS-PL"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "BonusProblems/Module2/BonusChallenge1.ipynb", "max_forks_repo_name": "psheehan/CIERA-HS-Program", "max_forks_repo_head_hexsha": "76f7f0ff994e74e646fa34bbb41c314bf7526e9b", "max_forks_repo_licenses": ["Naumen", "Condor-1.1", "MS-PL"], "max_forks_count": 7, "max_forks_repo_forks_event_min_datetime": "2019-06-25T15:33:10.000Z", "max_forks_repo_forks_event_max_datetime": "2021-05-12T18:04:36.000Z", "avg_line_length": 27.2837837838, "max_line_length": 231, "alphanum_fraction": 0.5891530461, "converted": true, "num_tokens": 532, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9511422199928904, "lm_q2_score": 0.8723473713594991, "lm_q1q2_score": 0.8297264153998364}} {"text": "# Astroinformatics \"Machine Learning Basics\"\n## Class 3: \nIn this tutorial, we'll see basics concepts of machine learning. (We will not see classification yet, but these concepts applies to those problems too). All this concepts are very well explained in the [Deep Learning Book, Chapter 5](http://www.deeplearningbook.org/contents/ml.html)\n\nFirst a brief discussion about the frequentist and bayesians approach of machine learning. Then a basic problem of linear regression in order to give some insight about the approach of frequentist and bayesians of this problem and its connection. Then we'll see the concept of capacity, overfitting and underfitting and how to solve it using regularization (explained from a frequentist and bayesian point of view). Finally, we use cross validation to select the hyperparameters of our optimization problem.\n\n# Frequentiest and Bayesians\n\n\nPlease read the discussion in [this great book](http://nbviewer.jupyter.org/github/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/blob/master/Chapter1_Introduction/Ch1_Introduction_PyMC3.ipynb), It'll give you a great insight about the difference between the two approaches. 
I also recommend [this reading](https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading20.pdf) from an MIT class.\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom numpy.linalg import inv\nfrom mpl_toolkits.mplot3d import axes3d\nfrom matplotlib import cm\n```\n\n# Linear regression: least squares\n\nGiven some data $(x_{i}, y_{i})_{i=1}^{N}$, we want to find the affine transformation of $x$ that best predicts $y$. The affine model is $\\hat{y} = w^{T}x$, where we append a 1 to $x$ in order to have an offset in the linear model (making it an affine transformation). Let's define $(X,Y)$ as the dataset where each row of $X$ is $x_{i}^{T}$ and $Y$ has $y_{i}$ as components. Now, we define the mean squared error ($MSE$) as our measure of performance of the model:\n\n$$ MSE = \\frac{1}{N}\\| \\hat{Y}-Y \\|^{2}_{2} $$\n\nwhere $\\hat{Y}$ contains the estimated values of the linear model. In order to find the best model in the least squares sense, we can minimize the MSE by setting its gradient to zero:\n\n\\begin{equation}\n\\nabla_{w} MSE = 0 \\\\\n\\nabla_{w} \\frac{1}{N} \\| \\hat{Y} - Y \\|^{2}_{2} = 0 \\\\\n\\frac{1}{N} \\nabla_{w} \\| Xw - Y \\|^{2}_{2} = 0 \\\\\n\\nabla_{w} (Xw-Y)^{T}(Xw-Y) = 0 \\\\\n\\nabla_{w} (w^{T}X^{T}Xw-2w^{T}X^{T}Y+Y^{T}Y) = 0 \\\\\n2X^{T}Xw-2X^{T}Y=0 \\\\\nw = (X^{T}X)^{-1}X^{T}Y\n\\end{equation}\n\nWe know that this is the solution of the linear regression problem because the $MSE$ is a convex function of $w$; we can check this by taking the second derivative, $\\nabla_{w}^{2}MSE = 2X^{T}X$, which is a positive semi-definite matrix (its eigenvalues are greater than or equal to zero). In this case we want all the eigenvalues to be strictly greater than zero: if at least one eigenvalue is equal to zero, the value of the MSE is \"flat\" in the direction of the corresponding eigenvector, and when an eigenvalue is \"close\" to zero it can produce numerical instability in the optimization (this can be fixed with regularization). 
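\n\nA small practical note (an editorial addition): rather than forming the inverse explicitly, the normal equations are usually handed to a linear solver, which is cheaper and numerically safer. A minimal sketch, assuming a design matrix `X` and target vector `Y` stored as NumPy arrays:\n\n\n```python\nimport numpy as np\n\ndef least_squares(X, Y):\n    # Solve the normal equations (X^T X) w = X^T Y without explicitly inverting X^T X\n    return np.linalg.solve(X.T @ X, X.T @ Y)\n```\n\nFor the example below this gives the same result as the closed-form expression with `inv`, up to floating-point error.\n\n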
Now let's make an example:\n\n\n```python\ndef linear_function(w, x):\n return np.dot(x, w)\n\nw = np.array([0.7, 0.3])[...,np.newaxis]\nprint(w.shape)\nnoise = 0.10\nn_points = 20\nnp.random.seed(500)\n\n# we add an extra dimension to make it a column vector\nx_samples = np.linspace(3, 5, n_points)[..., np.newaxis]\n# then we add a column of ones in order to have the constant term a*x + b*1 = y\naugmented_x = np.concatenate([x_samples, np.ones(shape=(n_points,1))], axis=1)\nprint(\"samples shape: \"+str(augmented_x.shape))\n# adding gaussian noise to the data\ny_samples = linear_function(w, augmented_x) + np.random.normal(loc=0.0, scale=noise, size=(n_points,1))\nprint(\"target shape: \"+str(y_samples.shape))\nfig, ax = plt.subplots(figsize=(12,7))\nax.plot(x_samples, linear_function(w, augmented_x), label=\"Real solution\")\nax.scatter(x_samples, y_samples, label=\"Samples\", s=70)\nax.legend(fontsize=14)\nax.set_xlabel(\"x\", fontsize=14)\nax.set_ylabel(\"y\", fontsize=14)\nplt.show()\n```\n\n\n```python\n# Least square solution\nestimated_w = inv(augmented_x.T @ augmented_x) @ augmented_x.T @ y_samples\n# MSE\nerror = np.linalg.norm(y_samples - linear_function(estimated_w, augmented_x))**2/len(y_samples)\n# eigenvectors and eigenvalues of the covariance matrix\neg_values, eg_vectors = np.linalg.eig(augmented_x.T @ augmented_x)\nprint(\"estimated w:\" +str(estimated_w))\nprint(\"mean squared error: \"+str(error))\nprint(\"eigenvalues: \"+str(eg_values))\nprint(\"eigenvectos: \"+str(eg_vectors))\n```\n\n estimated w:[[ 0.68565289]\n [ 0.33056476]]\n mean squared error: 0.0101285040086\n eigenvalues: [ 346.94365923 0.42476182]\n eigenvectos: [[ 0.97134386 -0.23767859]\n [ 0.23767859 0.97134386]]\n\n\n\n```python\n# making error maningfold\nX_array = np.arange(-1, 2.5, 0.05)\nY_array = np.arange(-1, 2.5, 0.05)\nX, Y = np.meshgrid(X_array, Y_array)\nZ = np.zeros(shape=(len(X_array), len(Y_array)))\n\nfor i, x in enumerate(X_array):\n for j, y in enumerate(Y_array):\n w_loop = np.array([x, y])[..., np.newaxis]\n Z[i, j] = np.linalg.norm(y_samples - linear_function(w_loop, augmented_x))**2/len(y_samples)\n```\n\n\n```python\nfig, (ax, ax2) = plt.subplots(1, 2, figsize=(15,7))\nax.plot(x_samples, linear_function(w, augmented_x), label=\"Real solution\")\nax.scatter(x_samples, y_samples, label=\"Samples\", s=70)\nax.plot(x_samples, linear_function(estimated_w, augmented_x), label=\"Estimated solution\")\nax.legend(fontsize=14)\nax.set_xlabel(\"x\", fontsize=14)\nax.set_ylabel(\"y\", fontsize=14)\n\nlevels = np.linspace(0, np.amax(Z), 100)\ncont = ax2.contourf(X, Y, Z, levels = levels)#,cmap=\"inferno\")\nsoa = np.concatenate([np.roll(np.repeat(estimated_w.T, 2, axis=0), shift=1), eg_vectors], axis=1)*1.0\nX2, Y2, U, V = zip(*soa)\nax2.quiver(X2, Y2, U, V, angles='xy', scale_units='xy', scale=1,\n color=\"y\", label=\"eigen vectors of covariance matrix\")\nax2.legend(fontsize=14)\nax2.set_xlabel(\"MSE for each w\", fontsize=14)\nplt.show()\n```\n\n\n```python\nfig = plt.figure(figsize=(12, 7))\nax2 = fig.add_subplot(1, 1, 1, projection='3d')\nsurf = ax2.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap=cm.coolwarm,\n linewidth=0, antialiased=False)\nax2.set_ylabel(\"w[1]\", fontsize=14)\nax2.set_xlabel(\"x[0]\", fontsize=14)\nplt.show()\n```\n\nAs we can see here, there is one direction where the MSE is relatively flat (small eigenvalue), that is why we see little variation in that direction by changing w, but it is still convex enough to have a unique solution. 
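\n\nAs a quick sanity check (an added aside, not in the original notebook), NumPy's built-in least-squares routine should reproduce the closed-form estimate computed above, assuming the same `augmented_x` and `y_samples` arrays are still in scope:\n\n\n```python\n# compare the closed-form solution with numpy's least-squares solver\nw_lstsq, *_ = np.linalg.lstsq(augmented_x, y_samples, rcond=None)\nprint(w_lstsq)\nprint(np.allclose(w_lstsq, estimated_w))  # expected: True, up to floating-point error\n```\n\n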
This is a frequentist way to solve a linear regression: it is just a data-driven solution whose only assumptions are the shape of the model and that the MSE is a good way to measure performance. Let's now see the Bayesian way, which involves some distribution assumptions expressed using probabilities.\n\n# Linear Regression: Maximum Likelihood Estimation\nNow, we define our model in a probabilistic way, that is, for example:\n\n$$ \\hat{y} = w^{T}x + \\epsilon \\hspace{0.5cm} \\text{with} \\hspace{0.5cm} \\epsilon \\sim \\mathcal{N}(0, \\sigma) $$\n\nThis means that we are assuming that the data has gaussian noise of a given variance $\\sigma^{2}$. Now, we can write the probability of occurrence of the data Y, for a given input X and a model:\n\n$$ P(Y \\mid X, w, \\sigma) $$\n\nWe call this the \"likelihood\". Now, it is very intuitive that if a particular value of w produces a high probability of seeing the data (a high likelihood value), then w is a good choice for our model, so we define the maximum likelihood estimate of w as:\n\n$$ \\hat{w} = \\text{argmax}_{w} P(Y \\mid X, w, \\sigma) = \\text{argmax}_{w} \\prod_{i}^{N}P(y_{i} \\mid x_{i},w,\\sigma) = \\text{argmax}_{w} \\prod_{i}^{N}\\mathcal{N}(y_{i}; wx_{i}, \\sigma) $$\n\nwhere the last equality comes from the assumption of i.i.d. samples. Now, this product over many probabilities can produce numerical underflow, so we look for a more convenient optimization problem. Instead of maximizing the likelihood, we minimize the \"negative log likelihood\", which is $NLL = -\\log(P(Y \\mid X, w, \\sigma))$, so the optimization problem is:\n\n$$ \\hat{w} = \\text{argmin}_{w} -\\log(P(Y \\mid X, w, \\sigma)) $$\n\nThis expression arises naturally when we instead start the optimization by minimizing the Kullback-Leibler divergence between the empirical distribution of the data and the distribution produced by the model (we'll skip this for now). Something interesting happens when we work the expression of the NLL a little bit:\n\n\\begin{equation}\n\\begin{split}\nNLL = & -\\log(\\prod_{i}^{N}\\mathcal{N}(y_{i}; wx_{i}, \\sigma)) \\\\\n= & -\\sum_{i=1}^{N}\\log(\\mathcal{N}(y_{i}; wx_{i}, \\sigma)) \\\\\n= & -\\sum_{i=1}^{N}\\left [ \\log \\frac{1}{\\sqrt{2\\pi}\\sigma} - \\frac{(y_{i}-wx_{i})^{2}}{2 \\sigma^{2}} \\right ] \\propto \\| \\hat{Y} - Y \\|^{2}_{2}\n\\end{split}\n\\end{equation}\n\nLeast mean squared error is equivalent to maximum likelihood estimation when gaussian noise in the data is assumed!\n\n# Capacity, Overfitting and Underfitting\n\nGenerally, in machine learning, the purpose of fitting a model to a dataset is to use the model on new data, not just the data we used to train it. The ability to perform well on new data is called generalization.\n\nIn order to measure the generalization capacity of a trained model, we set aside a subset of the original dataset which is not used to train the model but to test it. These sets are called the train set (used to find the parameters) and the test set (used to measure the generalization performance); then we can estimate the generalization ability of our model by measuring the error (or another metric of performance) on the test set. 
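\n\nFor reference (an added aside), scikit-learn ships a one-line helper for exactly this kind of split; a sketch, assuming scikit-learn is installed and the samples live in arrays `x` and `y`:\n\n\n```python\nfrom sklearn.model_selection import train_test_split\n\n# hold out 20% of the samples as a test set; fix the seed for reproducibility\nx_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=0)\n```\n\n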
Here we are assuming that both sets, train and test sets are generated by the same data-generating process, that means i.i.d assumptions over every sample (as we did before in the maximum likelihood case)\n\nThe objective of the optimization process is to make the training error small enough and make the gap between the training and test error small. The following concepts are useful to understand this process:\n\n- Capacity: ability of the model to fit a wide variety of functions (like relations between inputs and outputs)\n- Overfitting: Occurs when the gap between the training error and test error is too large, when the Capacity of the model is too high compared with the real complexity of the data-generating process\n- Underfitting: Happens when the model is not able to obtain a sufficiently low error value on the training set, may occur because the Capacity of the model is not enough to encode the complexity of the data-generating process\n\nLet's understand this with an example using a polinomial fitting over data. In the following example, the true complexity of the data is a quadratic polynomial. We fit models with low and high capacities compare with the real complexity\n\n\n```python\nw = np.array([-2, 0.6, 0.7])[...,np.newaxis]\nnoise = 1.2\nn_points = 20\ntrain_size = 10\ntest_size = n_points - train_size\n\nnp.random.seed(0)\n\nx_samples = np.linspace(-2, 2, n_points)[..., np.newaxis]\n# Making quadratic polinomial\naugmented_x = np.concatenate([x_samples**2, x_samples, x_samples**0], axis=1)\ny_samples = linear_function(w, augmented_x) + np.random.normal(loc=0.0, scale=noise, size=(n_points,1))\nx_plot = np.linspace(-2,2,100)[..., np.newaxis]\naug_x_plot = np.concatenate([x_plot**2, x_plot, x_plot**0], axis=1)\n\n# Dividing in train and test set\nindexes = np.arange(start=0, stop=n_points,step=1)\nnp.random.shuffle(indexes)\ntrain_index = indexes[:train_size]\ntest_index = indexes[train_size:]\nx_train = x_samples[train_index, ...]\naug_x_train = augmented_x[train_index, ...]\ny_train = y_samples[train_index, ...]\nx_test = x_samples[test_index, ...]\naug_x_test = augmented_x[test_index, ...]\ny_test = y_samples[test_index, ...]\n\nfig, ax = plt.subplots(figsize=(12,7))\nax.plot(x_plot, linear_function(w, aug_x_plot), label=\"Real solution\")\nax.scatter(x_train, y_train, label=\"train samples\", s=70)\nax.scatter(x_test, y_test, label=\"test samples\", s=70)\nax.legend(fontsize=14)\nax.set_xlabel(\"x\", fontsize=14)\nax.set_ylabel(\"y\", fontsize=14)\nplt.show()\n```\n\n\n```python\n# Linear, Quadratic 5 and 10 degree polynomial fit.\nlinear_coef = np.polyfit(x_train[:, 0], y_train[:, 0], deg=1, full=True) \nqd_coef = np.polyfit(x_train[:, 0], y_train[:, 0], deg=2,full=True)\ndeg5_coef = np.polyfit(x_train[:, 0], y_train[:, 0], deg=5, full=True) \ndeg10_coef = np.polyfit(x_train[:, 0], y_train[:, 0], deg=10, full=True) \np1 = np.poly1d(linear_coef[0])\np2 = np.poly1d(qd_coef[0])\np3 = np.poly1d(deg5_coef[0])\np4 = np.poly1d(deg10_coef[0])\nerror1 = np.linalg.norm(y_test[:, 0] - p1(x_test[:, 0]))**2/len(y_test)\nerror2 = np.linalg.norm(y_test[:, 0] - p2(x_test[:, 0]))**2/len(y_test)\nerror3 = np.linalg.norm(y_test[:, 0] - p3(x_test[:, 0]))**2/len(y_test)\nerror4 = np.linalg.norm(y_test[:, 0] - p4(x_test[:, 0]))**2/len(y_test)\n```\n\n\n```python\nprint(\"Generalization errors\")\nprint(\"linear: \"+str(error1))\nprint(\"quadratic: \"+str(error2))\nprint(\"deg5: \"+str(error3))\nprint(\"deg10: \"+str(error4))\nfig, ax = plt.subplots(figsize=(12,7))\nax.plot(x_plot, 
linear_function(w, aug_x_plot), label=\"Real solution\", lw=3)\nax.scatter(x_train, y_train, label=\"train samples\", s=70)\nax.scatter(x_test, y_test, label=\"test samples\", s=70)\nax.plot(x_plot, p1(x_plot),'--' ,label=\"linear (Underfitting)\")\nax.plot(x_plot, p2(x_plot),'--' , label=\"quadratic (Appropiate Capacity)\")\nax.plot(x_plot, p3(x_plot),'--' , label=\"5 deg (Not too overfitted)\" )\nax.plot(x_plot, p4(x_plot),'--' , label=\"10 deg (Overfitting)\")\nax.legend(fontsize=14)\nax.set_xlabel(\"x\", fontsize=14)\nax.set_ylabel(\"y\", fontsize=14)\nax.set_ylim([-15, 5])\nplt.show()\n```\n\nIn the last plot, we can see that the linear model does not have enough capacity to express the relation between x and y. Quadratic and deg 5 polynomial are good enough to find the relation. Some times, a high capacity model with a large family of functions that can represent (this is representational capacity), does not find the best solution during the optimization process, these additional limitations reduce the capacity of the actual solution, this is called the effective capacity. In the case of deg 10 polynomial, the capacity of the model is too high, so fits perfectly the train data but has a poor generalization ability.\n\nThe capacity of the model can be modified by modifying the model or changing the effective capacity by adding restrictions to the loss function. This is called Regularization.\n\n# Regularization\n## Regularized least squares\nIn this example, we'll use the weight decay regularization, which tends to choose parameters on the solution space that are close to the origin (small euclidean norm). We just need to modify the loss function by adding one term:\n\n\\begin{equation}\n\\begin{split}\n J(w) = & MSE + R \\\\\n = & \\frac{1}{N} \\| \\hat{Y} - Y \\|^{2}_{2} + \\lambda \\| w \\|^{2}_{2}\n\\end{split}\n\\end{equation}\n\nLet's see how the solution looks by taking the gradient of J with respect to w\n\n\\begin{equation}\n\\nabla_{w} J(w) = 0 \\\\\n2X^{T}Xw-2X^{T}Y + 2\\lambda w=0 \\\\\nw = (X^{T}X + \\lambda I)^{-1}X^{T}Y\n\\end{equation}\n\nAs we can see in the last expression, now we take the inverse of the correlation matrix plus the identity ponderated by $\\lambda$. This means that we are adding convexity to the problem because the eigenvalues of this new matrix are going to be bigger if we increase $\\lambda$ (we are making the matrix less singular), the convexity of this new manifold as a combination of a parabola because of the regularization term and the convexity of the original problem without the regularization. 
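\n\nOne way to see this numerically (an added illustration, assuming the two-column `augmented_x` from the first linear example is available): the eigenvalues of $X^{T}X + \\lambda I$ are simply the eigenvalues of $X^{T}X$ shifted up by $\\lambda$, so the \"flat\" direction becomes less flat.\n\n\n```python\nlam = 0.5\ncov = augmented_x.T @ augmented_x\nprint(np.linalg.eigvalsh(cov))                                  # one eigenvalue is close to zero\nprint(np.linalg.eigvalsh(cov + lam*np.identity(cov.shape[0])))  # every eigenvalue is increased by lam\n```\n\n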
The optimization will tend to choose small norm w if we increase $\\lambda$\n\nNow, let's see an example by fitting a high capacity model but with this regularization term in the loss function\n\n\n```python\n# Same model as before\nw = np.array([-2, 0.6, 0.7])[...,np.newaxis]\nnoise = 1.2\nn_points = 20\ntrain_size = 10\ntest_size = n_points - train_size\nnp.random.seed(0) \n\nx_samples = np.linspace(-2, 2, n_points)[..., np.newaxis]\naugmented_x = np.concatenate([x_samples**2, x_samples, x_samples**0], axis=1)\ny_samples = linear_function(w, augmented_x) + np.random.normal(loc=0.0, scale=noise, size=(n_points,1))\nx_plot = np.linspace(-2,2,100)[..., np.newaxis]\naug_x_plot = np.concatenate([x_plot**2, x_plot, x_plot**0], axis=1)\n\nindexes = np.arange(start=0, stop=n_points,step=1)\nnp.random.shuffle(indexes)\ntrain_index = indexes[:train_size]\ntest_index = indexes[train_size:]\nx_train = x_samples[train_index, ...]\naug_x_train = augmented_x[train_index, ...]\ny_train = y_samples[train_index, ...]\nx_test = x_samples[test_index, ...]\naug_x_test = augmented_x[test_index, ...]\ny_test = y_samples[test_index, ...]\n\n# Now we do it for high capacity model\ndeg = 10\nx_deg = []\nfor i in range(deg+1):\n x_deg.append(x_samples**(deg-i))\nx_deg = np.concatenate(x_deg, axis=1)\nx_deg_train = x_deg[train_index, ...]\nx_deg_test = x_deg[test_index, ...]\n```\n\n\n```python\n# Least square solution\nreg_values = [10**7, 0.5, 0]\nlabels = [\"Too large lambda (Underfitting)\", \"appropiate lambda\", \"no regularization (Overfitting)\"]\nreg_w = []\nsolution = []\nfor i, lam in enumerate(reg_values):\n # we save the regularized solution for each lambda\n reg_w.append(inv(x_deg_train.T @ x_deg_train + lam*np.identity(deg+1)) @ x_deg_train.T @ y_train)\n solution.append(np.poly1d(reg_w[-1][:,0]))\n```\n\n\n```python\nfig, ax_array = plt.subplots(1,3,figsize=(15,5))\nfor i, lam in enumerate(reg_values):\n ax_array[i].plot(x_plot, linear_function(w, aug_x_plot), label=\"Real solution\", lw=3)\n ax_array[i].scatter(x_train, y_train, label=\"train samples\", s=70)\n ax_array[i].scatter(x_test, y_test, label=\"test samples\", s=70)\n p = solution[i]\n ax_array[i].plot(x_plot, p(x_plot), label=\"Estimated solution\")\n ax_array[i].set_ylim([-10, 5])\n ax_array[i].set_title(labels[i], fontsize=14)\n ax_array[i].legend()\n \nplt.show()\n```\n\nThe last plot shows how the regularization term modifies the effective capacity of the model. For very high $\\lambda$ (left plot), the solutions are reduced to a very small region of the original space, so the capacity of the model is reduced too much and produce underfitting on the data. For very small $\\lambda$ (right plot), there is no penalization for the size of the weights and the model is able to look for solutions using its original capacity, so the model overfits. For a medium $\\lambda$ (middle plot), the effective capacity is probably close to the necessary one to find the correct function of the data. \n\n## Probabilistic Perspective of Regularization, Maximum a Posteriori\n\nIn this case, we will add information about the distribution of the parameters p(w) as a prior knowledge, by using Bayes' theorem to modify the likelihood in the following way:\n\n\\begin{equation}\n\\begin{split}\nP(\\theta \\mid D) = & \\frac{P(D \\mid \\theta) P(\\theta)}{P(D)} \\\\\n\\propto & P(D \\mid w) P(w)\n\\end{split}\n\\end{equation}\n\nWhere $D$ is the data and $\\theta$ the parameters of the model. 
$P(\\theta)$ is called the \"prior\" since it is prior knowledge added to the model about how the parameters are distributed; in some applications, the designer of the model might have some idea of where to look for the parameters of a particular problem. $P(D \\mid \\theta)$ is the likelihood, as we already know. $P(\\theta \\mid D)$ is called the \"posterior\" probability; it is the distribution of the parameters given the data, basically an update of our prior after seeing evidence of the real process (samples from the data-generating process).\n\nLet's consider our previous model and assume a gaussian distribution for the prior of the parameters\n\n$$ \\hat{y} = w^{T}x + \\epsilon \\hspace{0.5cm} \\text{with} \\hspace{0.5cm} \\epsilon \\sim \\mathcal{N}(0, \\sigma) \\hspace{0.5cm} \\text{and} \\hspace{0.5cm} w \\sim \\mathcal{N}(0, \\tau) $$\n\nThen, the posterior probability of the parameters is proportional to:\n\\begin{equation}\n\\begin{split}\nP(w \\mid Y,X,\\sigma , \\tau) \\propto P(Y \\mid X, w, \\sigma) P(w \\mid \\tau)\n\\end{split}\n\\end{equation}\n\nIf we find the $w$ where the posterior probability is maximized, it means that for the given dataset and the prior knowledge, there is a high probability that the model with that value of $w$ is the one that produced the data. So the maximum a posteriori (MAP) solution is:\n\n\\begin{equation}\n\\begin{split}\n\\hat{w} = & \\text{argmax}_{w} P(w \\mid Y, X, \\sigma, \\tau) \\\\\n= & \\text{argmin}_{w} -\\log(P(w \\mid Y, X, \\sigma, \\tau))\n\\end{split}\n\\end{equation}\n\nSomething interesting happens (again) when we work this expression a little bit (in the last step we drop the terms that do not depend on $w$ and multiply by $2\\sigma^{2}$):\n\n\\begin{equation}\n\\begin{split}\n-\\log(P(w \\mid Y, X, \\sigma, \\tau)) = & -\\sum_{i=1}^{N}\\log \\mathcal{N}(y_{i}; wx_{i}, \\sigma)-\\log \\mathcal{N}(w; 0, \\tau) \\\\\n= & N \\log \\sqrt{2 \\pi} \\sigma + \\sum_{i=1}^{N} \\frac{(y_{i}-wx_{i})^{2}}{2\\sigma^{2}} + \\log \\sqrt{2 \\pi} \\tau + \\frac{\\| w \\|^{2}_{2}}{2 \\tau^{2}} \\\\\n\\propto & \\| \\hat{Y} - Y \\|^{2}_{2} + \\frac{\\sigma^{2}}{\\tau^{2}} \\| w \\|^{2}_{2}\n\\end{split}\n\\end{equation}\n\nRegularized least mean squares is the same as maximum a posteriori with gaussian noise and a gaussian prior! The amount of regularization corresponds to $\\lambda = \\sigma^{2}/\\tau^{2}$, so it is controlled by the width $\\tau$ of the gaussian prior.\n\n# Hyperparameters and Cross-validation\n\nMany models and regularization schemes depend on hyperparameters that must be chosen by the designer. By hyperparameters I mean, for example, the lambda for regularization, the degree of the polynomial, the number of layers and neurons in a neural network, the gamma coefficient for support vector machines, etc.\n\nWe should choose the hyperparameters that produce the model that generalizes best to new data (the test set). A good way to do this is to use cross-validation, which is a procedure to obtain a better estimate of the generalization performance. 
Some times we do not have too many examples, so the random choice for the test set could be very sensitive to the realization, and of course, the generalization performance estimation too, cross-validation try to fix this problem by doing the following:\n\n#### K-fold cross-validation\n\nFor a given dataset $D$, performance metric F and number of subsets k, we do:\n- Split D into k mutually exclusive subsets $D_{i}$ with $\\bigcup_{i=1}^{K} D_{i} = D$\n- For i from 1 to k:\n - train the model with $D\\backslash D_{i}$\n - Compute performance F over $D_{i}$\n- end for\n- Return performance\n\n\n```python\ndef cross_validation(lam, x_subsets, y_subsets):\n train_error = []\n test_error = []\n for i, x_test in enumerate(x_subsets):\n x_train = np.concatenate([x for j, x in enumerate(x_subsets) if j!=i], axis=0)\n y_train = np.concatenate([y for j, y in enumerate(y_subsets) if j!=i], axis=0)\n y_test = y_subsets[i]\n w = inv(x_train.T @ x_train + lam*np.identity(x_train.shape[1])) @ x_train.T @ y_train\n p = np.poly1d(w[:,0])\n test_error.append(np.linalg.norm(y_test[:, 0] - p(x_test[:, -2]))**2/len(y_test))\n train_error.append(np.linalg.norm(y_train[:, 0] - p(x_train[:, -2]))**2/len(y_train))\n return np.array(train_error), np.array(test_error)\n\ndef kfold_cv(x_data, y_data, lam_array, kfold=4):\n x_subsets = np.split(x_data, kfold)\n y_subsets = np.split(y_data, kfold)\n \n train_error_mean = []\n test_error_mean = []\n train_error_std = []\n test_error_std = []\n for j, lam in enumerate(lam_array):\n print('\\r{}'.format(float(j/len(lam_array))*100), end='')\n train_error, test_error = cross_validation(lam, x_subsets, y_subsets)\n train_error_mean.append(np.mean(train_error))\n train_error_std.append(np.std(train_error))\n test_error_mean.append(np.mean(test_error))\n test_error_std.append(np.std(test_error))\n \n return [np.array(train_error_mean), \n np.array(train_error_std), \n np.array(test_error_mean), \n np.array(test_error_std)]\n```\n\n\n```python\nlam_array = np.linspace(0.01, 10**8, 10000)\none_over_lambda = 1.0/lam_array\ntrain_error_mean, train_error_std, test_error_mean, test_error_std = kfold_cv(x_deg, y_samples, lam_array)\noptimal_lambda = lam_array[np.where(test_error_mean==np.amin(test_error_mean))[0]]\n```\n\n 99.9900000000000143\n\n\n```python\nfig, ax = plt.subplots(figsize=(12,7))\nax.plot(lam_array, test_error_mean, label=\"test error\")\nax.plot(lam_array, train_error_mean, label=\"train error\")\nax.set_xscale(\"log\")\nax.set_yscale(\"log\")\nax.set_xlabel(\"lambda (Less capacity <--- ---> More capacity)\", fontsize=14)\nax.set_ylabel(\"errors log scale\", fontsize=14)\nax.set_title(\"Cross validation results\", fontsize=14)\nax.set_xlim([np.amax(lam_array), np.amin(lam_array)])\nax.axvline(x=optimal_lambda, color='r', linestyle='--',\n lw=4, label=\"Optimal lambda = \"+str(optimal_lambda))\nax.legend(fontsize=14)\nplt.show()\n```\n\nIt is easy to see how the parameter $\\lambda$ change the effective capacity and produce a smooth transition between underfitting (left side of the optimal $\\lambda$, too large $\\lambda$) and overfitting (right side of the optimal $\\lambda$, too small $\\lambda$) regime. 
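\n\nFor reference (an added aside), scikit-learn implements this kind of model selection directly; a sketch, assuming scikit-learn is installed and reusing `x_deg` and `y_samples` from above:\n\n\n```python\nfrom sklearn.linear_model import RidgeCV\n\n# cross-validated ridge regression over a log-spaced grid of regularization strengths\nridge = RidgeCV(alphas=np.logspace(-2, 8, 50), cv=4)\nridge.fit(x_deg, y_samples.ravel())\nprint(ridge.alpha_)  # the selected regularization strength\n```\n\nThe selected `alpha_` plays the same role as the optimal $\\lambda$ found above, although the exact value will differ because the penalty is applied to a slightly different parametrization (RidgeCV fits and does not penalize an intercept).\n\n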
\n\n\n```python\ndeg_w = (inv(x_deg_train.T @ x_deg_train + optimal_lambda*np.identity(deg+1)) @ x_deg_train.T @ y_train)\ndeg_p = np.poly1d(w[:,0])\n```\n\n\n```python\nfig, ax = plt.subplots(figsize=(12,7))\nax.plot(x_plot, linear_function(w, aug_x_plot), label=\"Real solution\", lw=4)\nax.scatter(x_train, y_train, label=\"train samples\", s=70)\nax.scatter(x_test, y_test, label=\"test samples\", s=70)\nax.plot(x_plot, deg_p(x_plot),'-o' ,label=\"regularized high capacity model\", lw=1,ms=4)\nax.legend(fontsize=14)\nax.set_xlabel(\"x\", fontsize=14)\nax.set_ylabel(\"y\", fontsize=14)\nax.set_ylim([-15, 5])\nplt.show()\n```\n", "meta": {"hexsha": "1858e7f7a0b78d615d70aac326448d4c9fd41570", "size": 532272, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "auxiliar3.ipynb", "max_stars_repo_name": "rodrigcd/Astroinformatics_AS4501", "max_stars_repo_head_hexsha": "4ac614ff5cfc15922df8592562e5fdad3151abe1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "auxiliar3.ipynb", "max_issues_repo_name": "rodrigcd/Astroinformatics_AS4501", "max_issues_repo_head_hexsha": "4ac614ff5cfc15922df8592562e5fdad3151abe1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "auxiliar3.ipynb", "max_forks_repo_name": "rodrigcd/Astroinformatics_AS4501", "max_forks_repo_head_hexsha": "4ac614ff5cfc15922df8592562e5fdad3151abe1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 666.1727158949, "max_line_length": 101916, "alphanum_fraction": 0.9316871825, "converted": true, "num_tokens": 7258, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9184802484881361, "lm_q2_score": 0.9032942060530421, "lm_q1q2_score": 0.8296578868334917}} {"text": "```python\n# Question 33, Assignment 1\n# firstorder partial derivatives of f(x,y)\nfrom sympy import plot_implicit, Eq\nfrom sympy.abc import x, y\np=plot_implicit (Eq(4*y*y*y+2*x,0) ,(x, -2, 2), (y, -2, 2),show=False, line_color='r')\np2=plot_implicit (Eq(2*y+2*x,0) ,(x, -2, 2), (y, -2, 2),show=False, line_color='b')\np.extend(p2)\np.show()\n```\n\n\n```python\n# Question 33, Assignment 1\n# 3D Graph of f(x,y)\nfrom mpl_toolkits import mplot3d\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndef z_function(x, y):\n return x**2+2*x*y+y**4\n\nx = np.linspace(-1, 1, 100)\ny = np.linspace(-1, 1, 100)\n\nX, Y = np.meshgrid(x, y)\nZ = z_function(X, Y)\n\nfig = plt.figure(figsize=(12,12))\nax = plt.axes(projection=\"3d\")\nax.plot_surface(X, Y, Z, rstride=1, cstride=1,\n cmap='coolwarm')\nax.set_xlabel('X')\nax.set_ylabel('Y')\nax.set_zlabel('f(x,y)')\nax.set_title('f(x,y)=x**2+2*x*y+y**4')\nax.set_zlim3d(-0.5, 2)\nax.view_init(-0,105)\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "6b83f2fd0540e162023c662c924dc16ff4d1d016", "size": 261019, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "MATH1111/Assignment1/Question33.ipynb", "max_stars_repo_name": "linyuan-dc/AIDI", "max_stars_repo_head_hexsha": "db7deef4f3d34b117db60051215eed4b1eb63db8", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "MATH1111/Assignment1/Question33.ipynb", "max_issues_repo_name": "linyuan-dc/AIDI", "max_issues_repo_head_hexsha": "db7deef4f3d34b117db60051215eed4b1eb63db8", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "MATH1111/Assignment1/Question33.ipynb", "max_forks_repo_name": "linyuan-dc/AIDI", "max_forks_repo_head_hexsha": "db7deef4f3d34b117db60051215eed4b1eb63db8", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 2394.6697247706, "max_line_length": 241852, "alphanum_fraction": 0.9644930063, "converted": true, "num_tokens": 361, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9334308147331956, "lm_q2_score": 0.888758786126321, "lm_q1q2_score": 0.8295948378351777}} {"text": "# Iterative Solvers 3 - The Conjugate Gradient Method\n\n## Symmetric positive definite matrices\n\nA very frequent type of matrices are symmetric positive definite matrices. Let $A\\in\\mathbb{R}^{n\\times n}$ be a symmetric matrix (that is $A^T=A$). $A$ is called symmetric positive definite if \n\n$$\nx^TAx > 0, \\forall x\\neq 0.\n$$\n\nThis is equivalent to the condition that all eigenvalues of $A$ are larger than zero (remember that symmetric matrices only have real eigenvalues).\n\nOne application of symmetric positive definite matrices are energy functionals. The expression $x^TAx$ arises when discretising functional involving kinetic energies (e.g. 
energies of the from $E = \\frac{1}{2}m|\\nabla f|^2$ for f a given function).\n\nFor linear systems involving symmetric positive definite matrices we can derive a special algorithm, namely the Method of Conjugate Gradients (CG).\n\n## Lanczos - Arnoldi for symmetric matrices\n\nLet us start with the Arnoldi recurrence relation\n\n$$\nAV_m = V_mH_m + h_{m+1,m}v_{m+1}e_m^T\n$$\n\nWe know that $H_m$ is an upper Hessenberg matrix (i.e. the upper triangular part plus the first lower triangular diagonal can only be nonzero). Also, we know from the orthogonality of the $v_k$ vectors that\n\n$$\nV_m^TAV_m = H_m.\n$$\n\nLet $A$ now be symmetric. From the symmetry of $A$ an even nicer structure for $H_m$ arises. $H_m$ is upper Hessenberg, but now it is also symmetric. The only possible type of matrices to satisfy this condition are tridional matrices. These are matrices, where only the diagonal and the first upper and lower super/subdiagonals are nonzero.\n\nLet us test this out. Below you find our simple implementation of Arnoldi's method. We then plot the resulting matrix $H_m$.\n\n\n```python\nimport numpy as np\n\ndef arnoldi(A, r0, m):\n \"\"\"Perform m-1 step of the Arnoldi method.\"\"\"\n n = A.shape[0]\n \n V = np.empty((n, m + 1), dtype=np.float64)\n H = np.zeros((m+1, m), dtype=np.float64)\n \n V[:, 0] = r0 / np.linalg.norm(r0)\n \n for index in range(m):\n # Multiply the previous vector with A\n tmp = A @ V[:, index]\n # Now orthogonalise against the previous basis vectors\n h = V[:, :index + 1].T @ tmp # h contains all inner products against previous vectors\n H[:index + 1, index] = h\n w = tmp - V[:, :index + 1] @ h # Subtract the components in the directions of the previous vectors\n # Normalise and store\n H[index + 1, index] = np.linalg.norm(w)\n V[:, index + 1] = w[:] / H[index + 1, index]\n \n return V, H\n```\n\nThe following code creates a random symmetric positive definite matrix.\n\n\n```python\nfrom numpy.random import RandomState\n\nn = 500\n\nrand = RandomState(0)\nQ, _ = np.linalg.qr(rand.randn(n, n))\nD = np.diag(rand.rand(n))\nA = Q.T @ D @ Q\n```\n\nNow let's run Arnoldi's method and plot the matrix H. We are adding some artificial noise so as to ensure for the log-plot that all values are nonzero. The colorscale shows the logarithm of the magnitude of the entries.\n\n\n```python\n%matplotlib inline\nfrom matplotlib import pyplot as plt\n\nm = 50\nr0 = rand.randn(n)\nV, H = arnoldi(A, r0, m)\n\nfig = plt.figure(figsize=(8, 8))\nax = fig.add_subplot(111)\n\nim = ax.imshow(np.log10(1E-15 + np.abs(H)))\nfig.colorbar(im)\n```\n\nIt is clearly visible that only the main diagonal and the first upper and lower off-diagonal are nonzero, as expected. This hugely simplifies the Arnoldi iteration. Instead of orthogonalising $Av_m$ against all previous vectors we only need to orthogonalise against $v_m$ and $v_{m-1}$. All other inner products are already zero. Hence, the main orthogonalisation step now takes the form\n\n$$\nw = Av_m - (v_m^TAv_m)v_m - (v_{m-1}^TAv_m)v_{m-1}.\n$$\n\nSince the new vector $w$ is composed of only 3 vectors. This is also called a 3-term recurrence. The big advantage is that in addition to $Av_m$ we only need to keep $v_m$ and $v_{m-1}$ in memory. Hence, no matter how many iterations we do, the memory requirement remains constant, in contrast to Arnoldi for nonsymmetric matrices, where we need to keep all previous vectors in memory.\n\nArnoldi with a short recurrence relation for symmetric matrices has a special name. 
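\n\nA minimal sketch of the resulting short-recurrence iteration (an added illustration, following the conventions of the `arnoldi` routine above, with `alpha` and `beta` holding the diagonal and off-diagonal entries of the tridiagonal matrix):\n\n\n```python\ndef lanczos(A, r0, m):\n    \"\"\"Sketch of Arnoldi specialised to symmetric A (three-term recurrence).\"\"\"\n    n = A.shape[0]\n    V = np.zeros((n, m + 1), dtype=np.float64)\n    alpha = np.zeros(m, dtype=np.float64)\n    beta = np.zeros(m, dtype=np.float64)\n    V[:, 0] = r0 / np.linalg.norm(r0)\n    for j in range(m):\n        w = A @ V[:, j]\n        # orthogonalise only against the current and the previous basis vector\n        alpha[j] = V[:, j] @ w\n        w = w - alpha[j] * V[:, j]\n        if j > 0:\n            w = w - beta[j - 1] * V[:, j - 1]\n        beta[j] = np.linalg.norm(w)\n        V[:, j + 1] = w / beta[j]\n    return V, alpha, beta\n```\n\n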
It is called **Lanczos method**.\n\n## Solving linear systems of equations with Lanczos\n\nWe can now proceed exactly as in the Full orthogonalisation method and arrive at the linear system of equations\n\n$$\nT_my_m = \\|r_0\\|_2e_1,\n$$\n\nwhere $x_m = x_0 + V_my_m$ and $T_m = V_m^TAV_m$ is the tridiagonal matrix obtained from the Lanczos method.\n\nThe conjugate gradient method is an implementation of this approach. A very good derivation from Lanczos to CG is obtained in the beautiful book by Yousef Saad \"[Iterative Methods for Sparse Linear Systems](https://www-users.cs.umn.edu/~saad/IterMethBook_2ndEd.pdf)\", which is available online for free. Here, we will briefly motivate another approach to CG, which is a bit more intuitive and reveals more about the structure of the method, namely CG as an optimisation algorithm for a quadratic minimisation problem. One of the most beautiful summaries of this approach is contained in the paper [An introduction to the Conjugate Gradient Method Without the Agonizing Pain](https://www.cs.cmu.edu/~quake-papers/painless-conjugate-gradient.pdf) by Jonathan Shewchuk.\n\n## A quadratic optimisation problem\n\nWe consider the quadratic minimisation problem\n\n$$\n\\min_{x\\in\\mathbb{R}^n} f(x)\n$$\n\nwith $f(x)=\\frac{1}{2}x^TAx - b^Tx$. We have\n\n$$\n\\nabla f(x) = Ax - b\n$$\n\nand hence the only stationary point is the solution of the linear system $Ax=b$. Furthermore, it is really a minimiser since $f''(x) > 0$ for all $x\\in\\mathbb{R}^n$ as $A$ is positive definite.\n\n## The Method of Steepest Descent\n\nOur first idea is the method of steepest descent. Remember that the negative gradient is a descent direction. Given a point $x_k$. We have\n\n$$\n-\\nabla f(x_k) = b - Ax_k := r_k.\n$$\n\nThe negative gradient is hence just the residual. Hence, we need to minimise along the direction of the residual, that is we will have\n$x_{k+1} = x_k + \\alpha_k r_k$ for some value $\\alpha_k$. To compute $\\alpha_k$ we just solve\n\n$$\n\\frac{d}{d\\alpha}f(x_k + \\alpha r_k) = 0\n$$\n\nSince $\\frac{d}{d\\alpha}f(x_k + \\alpha r_k) = r_{k+1}^Tr_k$ we just need to choose $\\alpha_k$ such that $r_{k+1}$ is orthogonal to $r_k$. The solution is given by $\\alpha_k = \\frac{r_k^Tr_k}{r_k^TAr_k}$. This gives us a complete method consisting of three steps to get from $x_k$ to $x_{k+1}$.\n\n$$\n\\begin{align}\nr_k &= b - Ax_k\\nonumber\\\\\n\\alpha_k &= \\frac{r_k^Tr_k}{r_k^TAr_k}\\nonumber\\\\\nx_{k+1} &= x_k + \\alpha_k r_k\n\\end{align}\n$$\n\nWe are not going to derive the complete convergence analysis here but only state the final result. Let $\\kappa := \\frac{\\lambda_{max}}{\\lambda_{min}}$, where $\\lambda_{max}$ and $\\lambda_{min}$ are the largest, respectively smallest eigenvalue of $A$ (remember that all eigenvalues are positive since $A$ is symmetric positive definite). The number $\\kappa$ is called the condition number of $A$. Let $e_k = x_k - x^*$ be the difference of the exact solution $x^*$ satisfying $Ax^*=b$ and our current iterate $x_k$. Note that $r_k = -Ae_k$.\n\nWe now have that\n\n$$\n\\|e_k\\|_A\\leq \\left(\\frac{\\kappa - 1}{\\kappa + 1}\\right)^k\\|e_0\\|_A,\n$$\n\nwhere $\\|e_k\\|_A := \\left(e_k^TAe_k\\right)^{1/2}$.\n\nThis is an extremely slow rate of convergence. Let $\\kappa=10$, which is a fairly small number. 
Then the error reduces in each step only by a factor of $\\frac{9}{11}\\approx 0.81$ and we need 11 iterations for each digit of accuracy.\n\n## The method of conjugate directions\n\nThe steepest descent approach was not bad. But we want to improve on it. The problem with the steepest descent method is that we have no guarantee that we are reducing the error $e_{k+1}$ as much as possible along our current direction $r_k$ when we minimize. But we can fix this.\n\nLet us pick a set of directions $d_0, d_1, \\dots, d_{n-1}$, which are mutually orthogonal, that is $d_i^Td_j =0$ for $i\\neq j$. We now want to enforce the condition that\n\n$$\ne_{k+1}^Td_k = 0.\n$$\n\nThis means that the remaining error is orthogonal to $d_k$ and hence is a linear combination of all the other search directions. We have therefore exhausted all the information from $d_k$. Let's play this through.\n\nWe know that $e_{k+1} = x_{k+1} - x^* = x_k -x^* + \\alpha_k d_k = e_k + \\alpha_kd_k$.\n\nIt follows that\n\n$$\n\\begin{align}\ne_{k+1}^Td_k &= d_k^T(e_k + \\alpha_kd_k)\\nonumber\\\\\n &= d_k^Te_k + \\alpha_kd_k^Td_k = 0\\nonumber\n\\end{align}\n$$\n\nand therefore $\\alpha_k = -\\frac{d_k^Te_k}{d_k^Td_k}$.\n\nUnfortunately, this does not quite work in practice as we don't know $e_k$. But there is a solution. Remember that $r_k = -Ae_k$. We just need an $A$ in the right place. To achieve this we choose **conjugate directions**, that is we impose the condition that\n\n$$\nd_i^TAd_j = 0\n$$\n\nfor $i\\neq j$. We also impose the condition that $e_{k+1}^TAd_k = 0$. Writing this out we obtain\n\n$$\n\\alpha_k = \\frac{d_k^Tr_k}{d_k^TAd_k}.\n$$\n\nThis expression is computable if we have a suitable set of conjugate directions $d_k$. Moreoever, it guarantees that the method converges in at most $n$ steps since in every iteration we are annihiliating the error in the direction of $d_k$ and there are only $n$ different directions.\n\n## Conjugate Gradients - Mixing steepest descent with conjugate directions\n\nThe idea of conjugate gradients is to obtain the conjugate directions $d_i$ by taking the $r_i$ (the gradients) and to $A$-orthogonalise (conjugate) them against the previous directions. We are leaving out the details of the derivation and refer to the Shewchuk paper. But the final algorithm now takes the following form.\n\n$$\n\\begin{align}\nd_0 &= r_0 = b - Ax_0\\nonumber\\\\\n\\alpha_i &= \\frac{r_i^Tr_i}{d_i^TAd_i}\\nonumber\\\\\nx_{i+1} &=x_i + \\alpha_id_i\\nonumber\\\\\nr_{i+1} &= r_i - \\alpha_i Ad_i\\nonumber\\\\\n\\beta_{i+1} &= \\frac{r_{i+1}^Tr_{i+1}}{r_i^Tr_i}\\nonumber\\\\\nd_{i+1} &= r_{i+1} + \\beta_{i+1}d_i\\nonumber\n\\end{align}\n$$\n\nConjugate Gradients has a much more favourable convergence bound than steepest descent. One can derive that\n\n$$\n\\|e_i\\|_A\\leq 2\\left(\\frac{\\sqrt{\\kappa} - 1}{\\sqrt{\\kappa} + 1}\\right)^i\\|e_0\\|_A.\n$$\n\nIf we choose again the example that $\\kappa=10$ we obtain\n\n$$\n\\|e_i\\|A\\lessapprox 0.52^i\\|e_0\\|_A.\n$$\n\nHence, we need around $4$ iterations for each digits of accuracy instead of 11 for the method of steepest descent.\n\n## A numerical example\n\nThe following code creates a symmetric positive definite matrix.\n\n\n```python\nfrom scipy.sparse import diags\n\nn = 10000\n\ndata = [2.1 * np.ones(n),\n -1. * np.ones(n - 1),\n -1. 
* np.ones(n - 1)]\n\noffsets = [0, 1, -1]\n\nA = diags(data, offsets=offsets, shape=(n, n), format='csr')\n```\n\nWe now solve the associated linear system with CG and plot the convergence.\n\n\n```python\nfrom scipy.sparse.linalg import cg\n\nb = rand.randn(n)\nresiduals = []\n\ncallback = lambda x: residuals.append(np.linalg.norm(b - A @ x) / np.linalg.norm(b))\n\nsol, _ = cg(A, b, tol=1E-6, callback=callback, maxiter=1000)\n\nfig = plt.figure(figsize=(8, 8))\nax = fig.add_subplot(111)\nax.semilogy(1 + np.arange(len(residuals)), residuals, 'k--')\nax.set_title('CG Convergence')\nax.set_xlabel('Iteration Step')\n_ = ax.set_ylabel('$\\|Ax-b\\|_2 / \\|b\\|_2$')\n```\n", "meta": {"hexsha": "666745cd2718f0c1d50a25b7a8127374c0a6c7d6", "size": 53007, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "hpc_lecture_notes/it_solvers3.ipynb", "max_stars_repo_name": "skailasa/hpc_lecture_notes", "max_stars_repo_head_hexsha": "bcabc86d97b7069df98e1efcc90f5408a7e5d4f4", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "hpc_lecture_notes/it_solvers3.ipynb", "max_issues_repo_name": "skailasa/hpc_lecture_notes", "max_issues_repo_head_hexsha": "bcabc86d97b7069df98e1efcc90f5408a7e5d4f4", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "hpc_lecture_notes/it_solvers3.ipynb", "max_forks_repo_name": "skailasa/hpc_lecture_notes", "max_forks_repo_head_hexsha": "bcabc86d97b7069df98e1efcc90f5408a7e5d4f4", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 113.2628205128, "max_line_length": 22916, "alphanum_fraction": 0.8439451393, "converted": true, "num_tokens": 3322, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9019206686206199, "lm_q2_score": 0.9196425372343816, "lm_q1q2_score": 0.8294446120743968}} {"text": "# Solving Ax = b\n\n \nThis work by Jephian Lin is licensed under a [Creative Commons Attribution 4.0 International License](http://creativecommons.org/licenses/by/4.0/).\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits import mplot3d\nimport sympy\n```\n\n## Main idea\n\nLet $A$ be an $m\\times n$ matrix. The solution set of $A{\\bf x} = {\\bf b}$ is the intersection of the affine planes $\\langle {\\bf r}_i, {\\bf x}\\rangle = b_i$ for all $i=1,\\ldots,m$, where ${\\bf r}_i$ is the $i$-th row of $A$ and $b_i$ is the $i$-th entry of ${\\bf b}$. \nTherefore, the solution set of $A$ is an affine space (shifted space).\n\nThe solutions set of $A{\\bf x} = {\\bf b}$ is of the form:\n\n general solutions = particular solution + homogeneous solutions\n (a shifted space) (a vector) (a space)\n \nHere \"general solution\" stands for all solutions of $A{\\bf x} = {\\bf b}$, \n\"particular solution\" stands for one arbitrary solution of $A{\\bf x} = {\\bf b}$, and \n\"homogeneous solutions\" stands for all solutions of $A{\\bf x} = {\\bf 0}$.\n\nEvery matrix lead to its **reduced echelon form** after some **row operations**. 
If $\\left[\\begin{array}{cc}R | {\\bf r}\\end{array}\\right]$ is the reduced echelon form of $\\left[\\begin{array}{cc}A | {\\bf b}\\end{array}\\right]$, then $A{\\bf x} = {\\bf b} = R{\\bf x} = {\\bf r}$.\n\n## Side stories\n\n- $A{\\bf x} = {\\bf b} \\iff {\\bf b}\\in\\operatorname{Col}(A)$\n- matrix inverse\n\n## Experiments\n\n###### Exercise 1\nThis exercise helps you to visualize the affine space $A{\\bf x} = {\\bf b}$. \nLet \n```python\nA = np.array([[1,1,1], \n [1,1,1]])\nb = np.array([5,5])\n```\n\n###### 1(a)\nUse the techniques you learned in Lesson 2 to draw some random solutions of $A{\\bf x} = {\\bf b}$. \nWhat is the nullity of $A$? What is the \"dimension\" of the affine space? \nHint: \n```python\nxs = 5*np.random.randn(3,10000)\nmask = (np.abs(b[:,np.newaxis] - A.dot(xs)) < 0.1).all(axis = 0)\n```\n\n\n```python\n### your answer here\n```\n\n###### 1(b)\nIt is known that \n```python\np = np.array([5,0,0])\n```\nis a particular solution of $A{\\bf x} = {\\bf b}$. \nAdd a vector of `p` upon your previous drawing.\n\n\n```python\n### your answer here\n```\n\n###### 1(c)\nDo the same for \n```python\nb = np.array([5,6])\n```\nHow many solutions are there?\n\n\n```python\n### your answer here\n```\n\n###### Exercise 2\nThis exercise helps you to visualize the affine space $A{\\bf x} = {\\bf b}$. \nLet \n```python\nA = np.array([[1,1,1], \n [1,1,1]])\nb = np.array([5,5])\n```\n\n###### 2(a)\nDraw the grid using the columns of $A$ and draw a vector for $b$. \nIs $b$ in the column space of $A$?\n\n\n```python\n### your answer here\n```\n\n###### 2(b)\nDo the same for \n```python\nb = np.array([5,6])\n```\nIs $b$ in the column space of $A$?\n\n\n```python\n### your answer here\n```\n\n#### Remark \nWhether a particular solution exists depends only on whether ${\\bf b}$ is in the column space of $A$ or not. \nWe say a equation $A{\\bf x} = {\\bf b}$ is **consistent** if it has at least a particular solution. \n\nWhether the homogeneous solutions contains only the trivial solution ${\\bf 0}$ depends only on $A$. \n\nThis table summarize the number of solutions of $A{\\bf x} = {\\bf b}$.\n\n hom \\ par | consistent | inconsistent \n --------- | ---------- | ------------ \n trivial | one | none \n nontrivial | infinite | none\n\n\n## Exercises\n\n##### Exercise 3\nLet \n```python\nA = sympy.Matrix([[1,1], \n [-1,0],\n [0,-1]])\nb = sympy.Matrix([3,-2,-1])\nAb = A.col_insert(2,b)\n```\n\n###### 3(a)\nCalculate the reduced echelon form of `Ab` . \nCan you tell if `b` is in the column space of `A` ?\n\n\n```python\n### your answer here\n```\n\n###### 3(b)\nLet \n```python\nb = sympy.Matrix([1,2,3])\n```\nand update `Ab` . \nCan you tell if `b` is in the column space of `A` ?\n\n\n```python\n### your answer here\n```\n\n##### Exercise 4\nLet \n```python\nA = sympy.Matrix([[1,1,1], \n [1,2,4], \n [1,3,9]])\nb1 = sympy.Matrix([1,0,0])\n```\n\n###### 4(a)\nIf a matrix has no free variable, then the homogeneous solution is trivial. 
\nFind the unique solution of $A{\\bf x} = {\\bf b}_1$.\n\n\n```python\n### your answer here\n```\n\n###### 4(b)\nLet \n```python\nb2 = sympy.Matrix([0,1,0])\nAb = A.col_insert(3,b1)\nAbb = Ab.col_insert(4,b2)\n```\nCan you use `Abb` to solve the solutions of $A{\\bf x} = {\\bf b}_1$ and $A{\\bf x} = {\\bf b}_2$ at once?\n\n\n```python\n### your answer here\n```\n\n###### 4(c)\nLet \n```python\nb3 = sympy.Matrix([0,0,1])\n``` \nSolve the solutions of $A{\\bf x} = {\\bf b}_1$, $A{\\bf x} = {\\bf b}_2$, and $A{\\bf x} = {\\bf b}_3$ at once.\n\n\n```python\n### your answer here\n```\n\n###### 4(d)\nLet \n$$ B = \\begin{bmatrix} \n | & ~ & | \\\\\n {\\bf b}_1 & \\cdots & {\\bf b}_3 \\\\\n | & ~ & | \n \\end{bmatrix}.$$\n Find a matrix $X$ such that $AX = B$. \n When $B$ is the identity matrix \n$$ I_n = \\begin{bmatrix} \n 1 & ~ & ~ \\\\\n ~ & \\ddots & ~ \\\\\n ~ & ~ & 1 \n \\end{bmatrix},$$\n the matrix $X$ with $AX = I_n$ is called the **inverse** of $A$, denoted by $A^{-1}$.\n\n\n```python\n### your answer here\n```\n\n###### 4(e)\nCompare your answer in 4(d) with the output of `np.linalg.inv(A)` .\n\n\n```python\n### your answer here\n```\n\n##### Exercise 5\nLet \n```python\nA = sympy.Matrix([[1,3,3,18], \n [5,15,16,95], \n [-5,-15,-15,-90]])\nR,pvts = A.rref()\n```\n\n###### 5(a)\nLet $B$ be the matrix whose columns are the columns of $A$ the corresponding to leading variables. \nPick a column of $A$ corresponding a free variable. \nCheck that the column is in the column space of $B$. \n(If yes, this means this column is redundant for generating the column space of $A$.)\n\n\n```python\n### your answer here\n```\n\n###### 5(b)\nCheck if $B$ itself has any redundant column.\n\n\n```python\n### your answer here\n```\n\n#### Remark\nLet $S = \\{{\\bf u}_1, \\ldots, {\\bf u}_n\\}$ be a collection of vectors and $A$ the matrix whose columns are $S$. \nWe say $S$ is **linearly independent** if one of the following equivalent condition holds:\n- $c_1{\\bf u}_1 + \\cdots + c_n{\\bf u}_n = {\\bf 0}$ only have trivial solution $c_1 = \\cdots = c_n = 0$.\n- $A{\\bf x} = {\\bf 0}$ only have trivial solution ${\\bf x} = 0$.\n- $A$ has no free variable.\n\nMoreover, if a space $V$ is equal to$\\operatorname{span}(S)$ and $S$ is linearly independent, then we say $S$ is a **basis** of the the space $V$.\n\n##### Exercise 6\nLet \n```python\nA = sympy.Matrix([[1,1,1], \n [-1,0,0],\n [0,-1,0],\n [0,0,-1]])\n```\nCheck if the columns of $A$ form a linearly independent set.\n\n\n```python\n### your answer here\n```\n\n##### Exercise 7\n```python\nA = sympy.Matrix([[1,3,3,18], \n [5,15,16,95], \n [-5,-15,-15,-90]])\nR,pvts = A.rref()\n```\nCheck what is `A.nullspace()`, `A.rowspace()`, and `A.columnspace()` and think about their meaning. \n\n\n```python\n### your answer here\n```\n\n#### Remark\nSince it is impossible to output a space, the three commands in Exercise 7 in fact outputs the basis of the space only, which is enough.\n\n**Nullspace**: its basis consists of ${\\bf h}$'s in the previous lesson. \n**Rowspace**: its basis consists of the rows of $R$ corresponding to the pivots. 
\n**Columnspace**: its basis consists of the columns of $A$ corresponding to the pivots.\n", "meta": {"hexsha": "5529cb4444076f3c228ec440e39a24ddd9326f12", "size": 13721, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Mathematics/Linear Algebra/0.0b Supplement of MIT Course - Concept Explanations with Problems_and_Questions to Solve/06-Solving-Ax-=-b.ipynb", "max_stars_repo_name": "okara83/Becoming-a-Data-Scientist", "max_stars_repo_head_hexsha": "f09a15f7f239b96b77a2f080c403b2f3e95c9650", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Mathematics/Linear Algebra/0.0b Supplement of MIT Course - Concept Explanations with Problems_and_Questions to Solve/06-Solving-Ax-=-b.ipynb", "max_issues_repo_name": "okara83/Becoming-a-Data-Scientist", "max_issues_repo_head_hexsha": "f09a15f7f239b96b77a2f080c403b2f3e95c9650", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Mathematics/Linear Algebra/0.0b Supplement of MIT Course - Concept Explanations with Problems_and_Questions to Solve/06-Solving-Ax-=-b.ipynb", "max_forks_repo_name": "okara83/Becoming-a-Data-Scientist", "max_forks_repo_head_hexsha": "f09a15f7f239b96b77a2f080c403b2f3e95c9650", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2022-02-09T15:41:33.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-11T07:47:40.000Z", "avg_line_length": 24.4581105169, "max_line_length": 294, "alphanum_fraction": 0.472706071, "converted": true, "num_tokens": 2289, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9196425267730008, "lm_q2_score": 0.9019206785067698, "lm_q1q2_score": 0.8294446117307852}} {"text": "## 2 Analyzing the Data\n\n\n```python\nimport numpy as np\nimport pandas as pd\nimport scipy.stats as st\nimport statsmodels.api as sm\nimport seaborn as sns\nfrom statsmodels.regression.rolling import RollingOLS\nimport warnings\nwarnings.filterwarnings('ignore')\n```\n\n\n```python\ndf = pd.read_excel(\"proshares_analysis_data.xlsx\",sheet_name=\"hedge_fund_series\",index_col=\"date\")\ndf.head()\n```\n\n\n\n\n
| date       | HFRIFWI Index | MLEIFCTR Index | MLEIFCTX Index | HDG US Equity | QAI US Equity |
|------------|---------------|----------------|----------------|---------------|---------------|
| 2011-08-31 | -0.032149     | -0.025588      | -0.025689      | -0.027035     | -0.006491     |
| 2011-09-30 | -0.038903     | -0.032414      | -0.032593      | -0.032466     | -0.022142     |
| 2011-10-31 | 0.026858      | 0.043593       | 0.043320       | 0.050532      | 0.025241      |
| 2011-11-30 | -0.013453     | -0.012142      | -0.012431      | -0.028608     | -0.007965     |
| 2011-12-31 | -0.004479     | 0.001938       | 0.001796       | 0.012875      | 0.001854      |
    \n\n\n\n### 1. For the series in the \u201chedge fund series\u201d tab, report the following summary statistics:
    \n(a) mean
    \n(b) volatility
    \n(c) Sharpe ratio
    \nAnnualize these statistics.\n\n\n```python\ndef summary(df):\n '''\n Given a pandas DataFrame of time series with monthly data, returns annualized mean, volatility and Sharpe Ratio\n '''\n mean = df.mean()*12\n volatility = df.std()*np.sqrt(12)\n out = pd.DataFrame({'mean':mean,'volatility':volatility})\n out['SharpeRatio'] = out['mean']/out['volatility']\n return out\nsummary(df)\n```\n\n\n\n\n
|                | mean     | volatility | SharpeRatio |
|----------------|----------|------------|-------------|
| HFRIFWI Index  | 0.050784 | 0.061507   | 0.825665    |
| MLEIFCTR Index | 0.038821 | 0.053848   | 0.720933    |
| MLEIFCTX Index | 0.037330 | 0.053682   | 0.695382    |
| HDG US Equity  | 0.028100 | 0.056380   | 0.498407    |
| QAI US Equity  | 0.025491 | 0.045484   | 0.560434    |
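For reference, the annualization used in `summary` above is the standard monthly-to-annual scaling; this restates what the code does rather than adding anything new:

$$
\mu_{\text{ann}} = 12\,\bar{r}_{\text{monthly}}, \qquad
\sigma_{\text{ann}} = \sqrt{12}\,\sigma_{\text{monthly}}, \qquad
\text{Sharpe} = \frac{\mu_{\text{ann}}}{\sigma_{\text{ann}}}.
$$

Note that, as written, the function does not subtract a risk-free rate, so the SharpeRatio column is a raw mean-to-volatility ratio.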
\n\n\n\n### 2. For the series in the \u201chedge fund series\u201d tab, calculate the following statistics related to tail risk.
    \n(a) Skewness
    \n(b) Excess Kurtosis (in excess of 3)
    \n(c) VaR (.05) - the fifth quantile of historic returns
\n(d) CVaR (.05) - the mean of the returns at or below the fifth quantile (see the short example after this list)
    \n(e) Maximum drawdown - include the dates of the max/min/recovery within the max drawdown period.
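Items (c) and (d) translate directly into pandas for a single series. This is only a minimal sketch: it assumes `r` is one column of the hedge-fund DataFrame loaded above and uses the at-or-below convention stated in (d).

```python
# Minimal single-series sketch (assumes df from the earlier cells).
r = df['HFRIFWI Index']

var_05 = r.quantile(0.05)        # (c) 5% VaR: fifth quantile of historic returns
cvar_05 = r[r <= var_05].mean()  # (d) 5% CVaR: mean of returns at or below that quantile
print(var_05, cvar_05)
```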
    \n\nThere is no need to annualize any of these statistics\n\n\n```python\ndef tailRisks(df):\n '''\n Given a pandas DataFrame of time series, returns Skewness, Excess Kurtosis, 5% VaR & CVar, Maximum Drawdown\n '''\n \n sk = df.skew()\n #kt = st.kurtosis(df,fisher=True)\n kt = df.kurtosis()\n VaR = df.quantile(0.05)\n CVaR = df[df < VaR].mean()\n\n levels = (1+df).cumprod()\n high = levels.cummax()\n drawdown = (levels - high)/high\n maxDrawdown = drawdown.min()\n \n bottom = pd.Series(pd.to_datetime([drawdown[col].idxmin() for col in drawdown]),index=df.columns)\n peak = pd.Series(pd.to_datetime([(levels[col][:bottom[col]].idxmax()) for col in levels]),index=df.columns)\n \n peakLevels = pd.Series([levels[col].loc[peak.loc[col]] for col in levels],index=df.columns)\n \n\n recovered = []\n for col in levels:\n for lev in levels[col][bottom[col]:]:\n if lev >= peakLevels[col]:\n recovered.append(levels.index[levels[col] == lev][0])\n break\n\n recovered = pd.Series(recovered,index=df.columns)\n \n drawdown.plot(figsize=(9,6), title = 'Drawdown') \n \n\n return pd.DataFrame({'Skewness':sk,'Excess Kurtosis':kt,'5% VaR':VaR,'5% CVaR':CVaR,'Max Drawdown':maxDrawdown,\\\n 'Peak':peak,'Bottom':bottom,'Recovered':recovered})\n \n \ntailRisks(df) \n```\n\n## 3. For the series in the \u201chedge fund series\u201d tab, run a regression of each against SPY (found in the \u201cmerrill factors\u201d tab.) Include an intercept. Report the following regression-based statistics:\n\n(a) Market Beta\n\n(b) Treynor Ratio\n\n(c) Information ratio\n\nAnnualize these three statistics as appropriate.\n\n\n```python\nmerrill_factors = pd.read_excel(\"proshares_analysis_data.xlsx\",sheet_name= \"merrill_factors\",index_col=\"date\")\nspy = merrill_factors['SPY US Equity']\nmerrill_factors.head()\n```\n\n\n\n\n
| date       | SPY US Equity | USGG3M Index | EEM US Equity | EFA US Equity | EUO US Equity | IWM US Equity |
|------------|---------------|--------------|---------------|---------------|---------------|---------------|
| 2011-08-31 | -0.054976     | 0.000009     | -0.092549     | -0.087549     | -0.005889     | -0.088913     |
| 2011-09-30 | -0.069449     | 0.000017     | -0.179064     | -0.108083     | 0.142180      | -0.111541     |
| 2011-10-31 | 0.109147      | -0.000013    | 0.162986      | 0.096275      | -0.069502     | 0.151012      |
| 2011-11-30 | -0.004064     | 0.000000     | -0.019723     | -0.021764     | 0.054627      | -0.003783     |
| 2011-12-31 | 0.010440      | 0.000009     | -0.043017     | -0.022139     | 0.075581      | 0.005114      |
    \n\n\n\n\n```python\ndef performanceMeasures(seriesY,seriesX):\n \n mean =seriesY.mean()*12\n sharpe = mean/(seriesY.std()*(12**0.5))\n model = sm.OLS(seriesY,sm.add_constant(seriesX)).fit()\n rsq = model.rsquared\n beta = model.params[1]\n treynor = mean/beta\n information = model.params[0]/(model.resid.std()*np.sqrt(12))\n \n return pd.DataFrame({'Mean Return':mean,'Sharpe Ratio':sharpe,'R Squared':rsq, 'Beta':beta,\\\n 'Treynor Ratio':treynor, 'Information Ratio':information},index= [seriesY.name])\n```\n\n\n```python\nframes = []\nfor col in df:\n p = performanceMeasures(df[col],spy)\n frames.append(p) \npd.concat(frames)\n```\n\n\n\n\n
|                | Mean Return | Sharpe Ratio | R Squared | Beta     | Treynor Ratio | Information Ratio |
|----------------|-------------|--------------|-----------|----------|---------------|-------------------|
| HFRIFWI Index  | 0.050784    | 0.825665     | 0.753090  | 0.394320 | 0.128789      | -0.020169         |
| MLEIFCTR Index | 0.038821    | 0.720933     | 0.816158  | 0.359382 | 0.108021      | -0.051272         |
| MLEIFCTX Index | 0.037330    | 0.695382     | 0.815047  | 0.358034 | 0.104263      | -0.055939         |
| HDG US Equity  | 0.028100    | 0.498407     | 0.785733  | 0.369199 | 0.076111      | -0.084218         |
| QAI US Equity  | 0.025491    | 0.560434     | 0.719379  | 0.284993 | 0.089443      | -0.057274         |
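As a quick sanity check on the table above, the Treynor Ratio column is just the annualized mean return divided by the market beta; for example, for the HFRI index:

```python
# Values taken from the HFRIFWI Index row of the table above.
mean_return, beta = 0.050784, 0.394320
print(mean_return / beta)  # roughly 0.1288, matching the reported Treynor Ratio
```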
\n\n\n\n### 4. Relative Performance\n\nDiscuss the previous statistics, and what they tell us about...\n\n(a) the differences between SPY and the hedge-fund series?\n\nA large part of each hedge-fund series can be explained by SPY: the R-squared from regressing each hedge-fund series on SPY is high (greater than 70% for every series).\n\n(b) which performs better between HDG and QAI.\n\nHDG has a higher mean return than QAI but a lower Sharpe ratio, Treynor ratio and information ratio. Thus the higher returns have come with higher volatility in HDG. From a mean-variance standpoint, QAI has performed better.\n\n(c) whether HDG and the ML series capture the most notable properties of HFRI.\n\nWe note (from question 5) that both HDG and ML are highly correlated with HFRI, with a correlation of about 90%. However, HFRI's excess kurtosis of 6.73 is substantially higher than the excess kurtosis of HDG and ML (about 2.4-2.5). Thus HDG and ML would fail to capture the very high tail risk of HFRI.
\nFurther, the HDG and ML series are highly correlated with each other (~98% correlation), so regressing HFRI on HDG and ML together would suffer from high multicollinearity.\n\n## 5. Report the correlation matrix for these assets.\n(a) Show the correlations as a heat map.\n(b) Which series have the highest and lowest correlations?\n\n\n```python\ncorrelations = df.corr()\ncorrelations\n```\n\n\n\n\n
|                | HFRIFWI Index | MLEIFCTR Index | MLEIFCTX Index | HDG US Equity | QAI US Equity |
|----------------|---------------|----------------|----------------|---------------|---------------|
| HFRIFWI Index  | 1.000000      | 0.910392       | 0.910063       | 0.897653      | 0.866714      |
| MLEIFCTR Index | 0.910392      | 1.000000       | 0.999939       | 0.984621      | 0.864431      |
| MLEIFCTX Index | 0.910063      | 0.999939       | 1.000000       | 0.984471      | 0.863734      |
| HDG US Equity  | 0.897653      | 0.984621       | 0.984471       | 1.000000      | 0.847597      |
| QAI US Equity  | 0.866714      | 0.864431       | 0.863734       | 0.847597      | 1.000000      |
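An alternative to scanning the matrix by eye (and to the helper function defined below) is to keep only the upper triangle and stack it, so every distinct pair can be ranked at once. This is just a sketch of an equivalent approach, not the notebook's own method; it assumes `correlations` is the matrix computed above.

```python
import numpy as np

# Keep each distinct pair once (upper triangle, excluding the diagonal), then stack.
mask = np.triu(np.ones(correlations.shape, dtype=bool), k=1)
pairs = correlations.where(mask).stack()

print('Highest correlation pair:', pairs.idxmax(), pairs.max())
print('Lowest correlation pair: ', pairs.idxmin(), pairs.min())
```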
    \n\n\n\n\n```python\nsns.heatmap(correlations)\n```\n\n\n```python\ndef findHighestLowestCorrelations(correlationsMatrix):\n high, low = [],[]\n for col in correlationsMatrix.columns:\n\n val_high = correlationsMatrix[col][(correlationsMatrix[col] == correlationsMatrix[col].nlargest(2).iloc[1])]\n high.append((val_high.index[0],val_high.name,val_high[0]))\n\n val_low = correlationsMatrix[col][(correlationsMatrix[col] == min(correlationsMatrix[col]))]\n low.append((val_low.index[0],val_low.name,val_low[0]))\n\n print(\"Highest correlation pair = \",max(high,key = lambda x:x[2]))\n print(\"Lowest correlation pair = \",min(low,key = lambda x:x[2]))\n \nfindHighestLowestCorrelations(correlations)\n```\n\n Highest correlation pair = ('MLEIFCTX Index', 'MLEIFCTR Index', 0.9999391791142366)\n Lowest correlation pair = ('QAI US Equity', 'HDG US Equity', 0.8475965688236916)\n\n\n## 6. Replicate HFRI with the six factors listed on the \u201cmerrill factors\u201d tab. Include a constant, and run the unrestricted regression,\n\n \\begin{align}\n r^{hfri}_{t} = \\alpha^{merr} + x^{merr}_{t} \\beta^{merr} + \\epsilon^{merr}_{t} (1)\n \\end{align}\n\n \\begin{align}\n \\hat r^{hfri}_{t} = \\hat\\alpha^{merr} + x^{merr}_{t} \\hat \\beta^{merr} (2)\n \\end{align}\n\n\nNote that the second equation is just our notation for the fitted replication.\n\n(a) Report the intercept and betas.\n\n(b) Are the betas realistic position sizes, or do they require huge long-short positions?\n\n(c) Report the R-squared.\n\n(d) Report the volatility of $\\epsilon^{merr}$, (the tracking error.)\n\n\n```python\nmodel = sm.OLS(df['HFRIFWI Index'],sm.add_constant(merrill_factors)).fit()\nmodel.params\n```\n\n\n\n\n const 0.001147\n SPY US Equity 0.072022\n USGG3M Index -0.400591\n EEM US Equity 0.072159\n EFA US Equity 0.106318\n EUO US Equity 0.022431\n IWM US Equity 0.130892\n dtype: float64\n\n\n\nThe Betas are realistic position sizes except for USGG3M Index\n\n\n```python\nprint(\"R-squared of the Model =\",round(model.rsquared,6))\n```\n\n R-squared of the Model = 0.855695\n\n\n\n```python\nprint(\"Volatility of Error term (annualized) =\",round(model.resid.std()*(12**0.5),6))\n```\n\n Volatility of Error term (annualized) = 0.023365\n\n\n### 7. Let\u2019s examine the replication out-of-sample.\nStarting with t = 61 month of the sample, do the following:\n\n\u2022 Use the previous 60 months of data to estimate the regression equation, (1). This gives time-t estimates of the regression parameters, $\\tilde \\alpha^{merr}_{t}$ and $\\tilde \\beta^{merr}_{t}$\n\n\u2022 Use the estimated regression parameters, along with the time-t regressor values, $x^{merr}_{t}$, to calculate the time-t replication value that is, with respect to the regression estimate, built\n\n\\begin{align}\n\\tilde r^{merr}_{t} \\equiv \\tilde \\alpha^{merr}_{t} + (x^{merr}_{t})\\prime \\tilde \\beta^{merr}_{t}\n\\end{align}\n\n\u2022 Step forward to t = 62, and now use t = 2 through t = 61 for the estimation. Re-run the\nsteps above, and continue this process throughout the data series. 
Thus, we are running a\nrolling, 60-month regression for each point-in-time.\nHow well does the out-of-sample replication perform with respect to the target?\n\n\n```python\ndef get_est_coef(Y, X):\n\n X_add_const = sm.add_constant(X)\n\n model = sm.OLS(Y, X_add_const)\n\n res = model.fit()\n\n return res.params[0], res.params[1:]\n\n\n\nfitted_values = []\nwindow_size = 60\n\n\n\nfor i in range(df.shape[0] - window_size - 1):\n\n alpha, beta = get_est_coef(df['HFRIFWI Index'][i : i+window_size], merrill_factors[i : i+window_size])\n\n fitted_values.append((alpha + np.dot(merrill_factors[i+window_size+1 : i+window_size+2], beta).item()))\n \ntable_7 = pd.DataFrame({'True Values': df['HFRIFWI Index'][61:], 'Fitted Values': fitted_values})\n\ntable_7.plot(figsize=(13,8))\n\n```\n\n\n```python\ntable_7.corr()\n```\n\n\n\n\n
|               | True Values | Fitted Values |
|---------------|-------------|---------------|
| True Values   | 1.000000    | 0.943363      |
| Fitted Values | 0.943363    | 1.000000      |
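As an aside, the manual estimation loop above could likely be replaced with `RollingOLS` from statsmodels, which is already imported at the top of this notebook. The sketch below is only illustrative and is not the notebook's approach; in particular, the exact alignment of the out-of-sample months would need to be checked against the loop's indexing.

```python
# Illustrative sketch only: rolling 60-month betas, applied one month ahead.
X = sm.add_constant(merrill_factors)
rolling_params = RollingOLS(df['HFRIFWI Index'], X, window=60).fit().params

# Betas estimated through month t multiply month t+1's regressors (out of sample).
oos_fitted = (X * rolling_params.shift(1)).sum(axis=1)
```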
\n\n\n\nFrom the correlation table above, the out-of-sample fitted values are 94.34% correlated with the actual HFRI returns, so the out-of-sample replication performs very well with respect to the target. The same can also be seen in the chart plotted above.\n\n## 8. We estimated the replications using an intercept. Try the full-sample estimation, but this time without an intercept.\n\n \\begin{align}\n r^{hfri}_{t} = x^{merr}_{t} \\beta^{merr} + \\epsilon^{merr}_{t}\n \\end{align}\n\n \\begin{align}\n \\check r^{hfri}_{t} = x^{merr}_{t} \\check \\beta^{merr}\n \\end{align}\n\nReport\n\n(a) the regression beta. How does it compare to the estimated beta with an intercept, $\\hat \\beta^{merr}$?\n\n\n\n```python\nmodel_without_intercept = sm.OLS(df['HFRIFWI Index'],merrill_factors).fit()\nregressionBetas = pd.DataFrame({\"With Intercept\":model.params,\"Without Intercept\":model_without_intercept.params})\nregressionBetas\n```\n\n\n\n\n
|               | With Intercept | Without Intercept |
|---------------|----------------|-------------------|
| EEM US Equity | 0.072159       | 0.069936          |
| EFA US Equity | 0.106318       | 0.101524          |
| EUO US Equity | 0.022431       | 0.023479          |
| IWM US Equity | 0.130892       | 0.128999          |
| SPY US Equity | 0.072022       | 0.087505          |
| USGG3M Index  | -0.400591      | 0.334503          |
| const         | 0.001147       | NaN               |
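One piece of standard OLS background that is useful for parts (b) and (c) below: when an intercept is included, the normal equations force the residuals to sum to zero, so the mean of the fitted values equals the sample mean of HFRI by construction. Without an intercept that constraint disappears, which is why the fitted mean can drift away from the HFRI mean.

$$
\text{with intercept: } \sum_t \hat{\epsilon}_t = 0 \;\Rightarrow\; \overline{\hat{r}} = \bar{r},
\qquad \text{without intercept: no such constraint.}
$$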
    \n\n\n\nThe Regression Betas are mostly similar in both the regressions except for USGG3M Index Beta. In the absence of intercept, there is a much higher weightage of USGG3M which could be thought of as a proxy of Risk Free Interest Rates\n\n(b) the mean of the fitted value, $\\check r^{hfri}_{t}$. How does it compare to the mean of the HFRI?\n\n\n```python\npd.Series([df['HFRIFWI Index'].mean()*12,model_without_intercept.predict(merrill_factors).mean()*12],\\\n index=['Mean of HFRI','Mean of Fitted Values'])\n```\n\n\n\n\n Mean of HFRI 0.050784\n Mean of Fitted Values 0.042888\n dtype: float64\n\n\n\nThe mean of the Fitted Values is lower than the mean of HFRI\n\n(c) the correlations of the fitted values, $\\check r^{hfri}_{t}$ to the HFRI. How does the correlation compare to that of the fitted values with an intercept, $\\hat r^{hfri}_{t}$?\n\n\n```python\nHFRIpredictions = pd.DataFrame({'HFRI Index':df['HFRIFWI Index'],\\\n 'Predict with intercept':model.predict(sm.add_constant(merrill_factors)),\\\n 'Predict without intercept':model_without_intercept.predict(merrill_factors)})\nHFRIpredictions.corr()\n```\n\n\n\n\n
|                           | HFRI Index | Predict with intercept | Predict without intercept |
|---------------------------|------------|------------------------|---------------------------|
| HFRI Index                | 1.000000   | 0.925038               | 0.924517                  |
| Predict with intercept    | 0.925038   | 1.000000               | 0.999437                  |
| Predict without intercept | 0.924517   | 0.999437               | 1.000000                  |
    \n\n\n\nThe correlations of HFRI in both the cases is almost equally good. 92.45% correlatation between HFRI Index and Prediction without intercept. 92.50% correlation between HFRI and prediction with intercept. The predicted values with and without intercept are highly correlated. Thusincluding the intercept or not doesn't affect much for the prediction. If including the intercept, the intercept part would be non-tradable asset. Without intercept, it's in accordance with our purpose of replicating(using all tradable assets).\n\nDo you think Merrill and ProShares fit their replicators with an intercept or not?\n\nWe think Merrill and ProShares would use the replicators without the Intercept as the purpose is to replicate the entire HFRI Index and not just mimic the returns from HFRI Index.\n", "meta": {"hexsha": "32da937a0a5b98619df59ec813d30a0a4344916f", "size": 219794, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "solutions/hw2/FINM36700_HW2_GroupA11v3.ipynb", "max_stars_repo_name": "tulyu96/finm-portfolio-2021", "max_stars_repo_head_hexsha": "6b26e235323064f2bf0a5bbc81f922b2150b75ef", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "solutions/hw2/FINM36700_HW2_GroupA11v3.ipynb", "max_issues_repo_name": "tulyu96/finm-portfolio-2021", "max_issues_repo_head_hexsha": "6b26e235323064f2bf0a5bbc81f922b2150b75ef", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "solutions/hw2/FINM36700_HW2_GroupA11v3.ipynb", "max_forks_repo_name": "tulyu96/finm-portfolio-2021", "max_forks_repo_head_hexsha": "6b26e235323064f2bf0a5bbc81f922b2150b75ef", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 139.9070655633, "max_line_length": 85420, "alphanum_fraction": 0.8508649008, "converted": true, "num_tokens": 6866, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9019206712569268, "lm_q2_score": 0.9196425306271939, "lm_q1q2_score": 0.8294446085396976}} {"text": "```python\nfrom sympy import Matrix\n```\n\n\n```python\nA = Matrix([[0,2,0,1],[2,2,3,2],[4,-3,0,1],[6,1,-6,-5]])\n```\n\n\n```python\nA\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}0 & 2 & 0 & 1\\\\2 & 2 & 3 & 2\\\\4 & -3 & 0 & 1\\\\6 & 1 & -6 & -5\\end{matrix}\\right]$\n\n\n\n\n```python\nB=Matrix([[0,-2,-7,6]])\n```\n\n\n```python\nB = B.transpose()\n```\n\n\n```python\nB\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}0\\\\-2\\\\-7\\\\6\\end{matrix}\\right]$\n\n\n\n\n```python\nA.gauss_jordan_solve(B)\n```\n\n\n\n\n (Matrix([\n [-1/2],\n [ 1],\n [ 1/3],\n [ -2]]),\n Matrix(0, 1, []))\n\n\n\nx1=-0.05 , x2=1.0, x3=0.33, x4=-2.0 \n", "meta": {"hexsha": "27874eb8157198b94b55a13966dd92740f3539e8", "size": 3070, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "gauss_elmination_2.ipynb", "max_stars_repo_name": "lgtejas/mfds", "max_stars_repo_head_hexsha": "dc9569a970f963d298dbe56db58b090684558d7b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "gauss_elmination_2.ipynb", "max_issues_repo_name": "lgtejas/mfds", "max_issues_repo_head_hexsha": "dc9569a970f963d298dbe56db58b090684558d7b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "gauss_elmination_2.ipynb", "max_forks_repo_name": "lgtejas/mfds", "max_forks_repo_head_hexsha": "dc9569a970f963d298dbe56db58b090684558d7b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 19.6794871795, "max_line_length": 136, "alphanum_fraction": 0.4465798046, "converted": true, "num_tokens": 268, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9489172572644806, "lm_q2_score": 0.8740772450055545, "lm_q1q2_score": 0.8294269819679643}} {"text": "# UMAP on sparse data\n\nSometimes datasets get very large, and potentially very very high dimensional. In many such cases, however, the data itself is sparse -- that is, while there are many many features, any given sample has only a small number of non-zero features observed. In such cases the data can be represented much more efficiently in terms of memory usage by a sparse matrix data structure. It can be hard to find dimension reduction techniques that work directly on such sparse data -- often one applies a basic linear technique such as ``TruncatedSVD`` from sklearn (which does accept sparse matrix input) to get the data in a format amenable to other more advanced dimension reduction techniques. In the case of UMAP this is not necessary -- UMAP can run directly on sparse matrix input. This tutorial will walk through a couple of examples of doing this. First we'll need some libraries loaded. We need ``numpy`` obviously, but we'll also make use of ``scipy.sparse`` which provides sparse matrix data structures. One of our examples will be purely mathematical, and we'll make use of ``sympy`` for that; the other example is test based and we'll use sklearn for that (specifically ``sklearn.feature_extraction.text``). 
Beyond that we'll need umap, and plotting tools.\n\n\n```python\nimport numpy as np\nimport scipy.sparse\nimport sympy\nimport sklearn.datasets\nimport sklearn.feature_extraction.text\nimport umap\nimport umap.plot\nimport matplotlib.pyplot as plt\n%matplotlib inline\n```\n\n /Users/leland/anaconda3/envs/umap_0.4dev/lib/python3.7/site-packages/datashader/transfer_functions.py:21: FutureWarning: xarray subclass Image should explicitly define __slots__\n class Image(xr.DataArray):\n\n\n## A mathematical example\n\nOur first example constructs a sparse matrix of data out of pure math. This example is inspired by the work of [John Williamson](https://johnhw.github.io/umap_primes/index.md.html), and if you haven't looked at that work you are strongly encouraged to do so. The dataset under consideration will be the integers. We will represent each integer by a vector of its divisibility by distinct primes. Thus our feature space is the space of prime numbers (less than or equal to the largest integer we will be considering) -- potentially very high dimensional. In practice a given integer is divisible by only a small number of distinct primes, so each sample will be mostly made up of zeros (all the primes that the number is not divisible by), and thus we will have a very sparse dataset.\n\nTo get started we'll need a list of all the primes. Fortunately we have ``sympy`` at our disposal and we can quickly get that information with a single call to ``primerange``. We'll also need a dictionary mapping the different primes to the column number they correspond to in our data structure; effectively we'll just be enumerating the primes.\n\n\n```python\nprimes = list(sympy.primerange(2, 110000))\nprime_to_column = {p:i for i, p in enumerate(primes)}\n```\n\nNow we need to construct our data in a format we can put into a sparse matrix easily. At this point a little background on sparse matrix data structures is useful. For this purpose we'll be using the so called [\"LIL\" format](https://scipy-lectures.org/advanced/scipy_sparse/lil_matrix.html). LIL is short for \"List of Lists\", since that is how the data is internally stored. There is a list of all the rows, and each row is stored as a list giving the column indices of the non-zero entries. To store the data values there is a parallel structure containing the value of the entry corresponding to a given row and column.\n\nTo put the data together in this sort of format we need to construct such a list of lists. We can do that by iterating over all the integers up to a fixed bound, and for each integer (i.e. each row in our dataset) generating the list of column indices which will be non-zero. The column indices will simply be the indices corresponding to the primes that divide the number. Since ``sympy`` has a function ``primefactors`` which returns a list of the unique prime factors of any integer we simply need to map those through our dictionary to covert the primes into column numbers.\n\nParallel to that we'll construct the corresponding structure of values to insert into a matrix. 
Since we are only concerned with divisibility this will simply be a one in every non-zero entry, so we can just add a list of ones of the appropriate length for each row.\n\n\n```python\n%%time\nlil_matrix_rows = []\nlil_matrix_data = []\nfor n in range(100000):\n prime_factors = sympy.primefactors(n)\n lil_matrix_rows.append([prime_to_column[p] for p in prime_factors])\n lil_matrix_data.append([1] * len(prime_factors))\n```\n\n CPU times: user 1.9 s, sys: 24.2 ms, total: 1.93 s\n Wall time: 1.93 s\n\n\nNow we need to get that into a sparse matrix. Fortunately the ``scipy.sparse`` package makes this easy, and we've already built the data in a fairly useful structure. First we create a sparse matrix of the correct format (LIL) and the right shape (as many rows as we have generated, and as many columns as there are primes). This is essentially just an empty matrix however. We can fix that by setting the ``rows`` attribute to be the rows we have generated, and the ``data`` attribute to be the corresponding structure of values (all ones). The result is a sparse matrix data structure which can then be easily manipulated and converted into other sparse matrix formats easily.\n\n\n```python\nfactor_matrix = scipy.sparse.lil_matrix((len(lil_matrix_rows), len(primes)), dtype=np.float32)\nfactor_matrix.rows = np.array(lil_matrix_rows)\nfactor_matrix.data = np.array(lil_matrix_data)\nfactor_matrix\n```\n\n\n\n\n <100000x10453 sparse matrix of type ''\n \twith 266398 stored elements in LInked List format>\n\n\n\nAs you can see we have a matrix with 100000 rows and over 10000 columns. If we were storing that as a numpy array it would take a great deal of memory. In practice, however, there are only 260000 or so entries that are not zero, and that's all we really need to store, making it much more compact.\n\nThe question now is how can we feed that sparse matrix structure into UMAP to have it learn an embedding. The answer is surprisingly straightforward -- we just hand it directly to the fit method. Just like other sklearn estimators that can handle sparse input UMAP will detect the sparse matrix and just do the right thing.\n\n\n```python\n%%time\nmapper = umap.UMAP(metric='cosine', random_state=42, low_memory=True).fit(factor_matrix)\n```\n\nThat was easy! But is it really working? We can easily plot the results:\n\n\n```python\numap.plot.points(mapper, values=np.arange(100000), theme='viridis')\n```\n\nAnd this looks very much in line with the results [John Williamson got](https://johnhw.github.io/umap_primes/index.md.html) with the proviso that we only used 100,000 integers instead of 1,000,000 to ensure that most users should be able to run this example (the full million may require a large memory compute node). So it seems like this is working well. The next question is whether we can use the ``transform`` functionality to map new data into this space. To test that we'll need some more data. Fortunately there are more integers. 
We'll grab the next 10,000 and put them in a sparse matrix, much as we did for the first 100,000.\n\n\n```python\n%%time\nlil_matrix_rows = []\nlil_matrix_data = []\nfor n in range(100000, 110000):\n prime_factors = sympy.primefactors(n)\n lil_matrix_rows.append([prime_to_column[p] for p in prime_factors])\n lil_matrix_data.append([1] * len(prime_factors))\n```\n\n\n```python\nnew_data = scipy.sparse.lil_matrix((len(lil_matrix_rows), len(primes)), dtype=np.float32)\nnew_data.rows = np.array(lil_matrix_rows)\nnew_data.data = np.array(lil_matrix_data)\nnew_data\n```\n\nTo map the new data we generated we can simply hand it to the ``transform`` method of our trained model. This is a little slow, but it does work.\n\n\n```python\nnew_data_embedding = mapper.transform(new_data)\n```\n\nAnd we can plot the results. Since we just got the locations of the points this time (rather than a model) we'll have to resort to matplotlib for plotting.\n\n\n```python\nfig = plt.figure(figsize=(12,12))\nax = fig.add_subplot(111)\nplt.scatter(new_data_embedding[:, 0], new_data_embedding[:, 1], s=0.1, c=np.arange(10000), cmap='viridis')\nax.set(xticks=[], yticks=[], facecolor='black');\n```\n\nThe color scale is different in this case, but you can see that the data has been mapped into locations corresponding to the various structures seen in the original embedding. Thus, even with large sparse data we can embed the data, and even add new data to the embedding.\n\n## A text analysis example\n\nLet's look at a more classical machine learning example of working with sparse high dimensional data -- working with text documents. Machine learning on text is hard, and there is a great deal of literature on the subject, but for now we'll just consider a basic approach. Part of the difficulty of machine learning with text is turning language into numbers, since numbers are really all most machine learning algorithms understand (at heart anyway). One of the most straightforward ways to do this for documents is what is known as the [\"bag-of-words\" model](https://en.wikipedia.org/wiki/Bag-of-words_model). In this model we view a document as simply a multi-set of the words contained in it -- we completely ignore word order. The result can be viewed as a matrix of data by setting the feature space to be the set of all words that appear in any document, and a document is represented by a vector where the value of the *i*th entry is the number of times the *i*th word occurs in that document. This is a very common approach, and is what you will get if you apply sklearn's ``CountVectorizer`` to a text dataset for example. The catch with this approach is that the feature space is often *very* large, since we have a feature for each and every word that ever occurs in the entire corpus of documents. The data is sparse however, since most documents only use a small portion of the total possible vocabulary. Thus the default output format of ``CountVectorizer`` (and other similar feature extraction tools in sklearn) is a ``scipy.sparse`` format matrix.\n\nFor this example we'll make use of the classic 20newsgroups dataset, a sampling of newsgroup messages from the old NNTP newsgroup system covering 20 different newsgroups. The ``sklearn.datasets`` module can easily fetch the data, and, in fact, we can fetch a pre-vectorized version to save us the trouble of running ``CountVectorizer`` ourselves. 
We'll grab both the training set, and the test set for later use.\n\n\n```python\nnews_train = sklearn.datasets.fetch_20newsgroups_vectorized(subset='train')\nnews_test = sklearn.datasets.fetch_20newsgroups_vectorized(subset='test')\n```\n\nIf we look at the actual data we have pulled back, we'll see that sklearn has run a ``CountVectorizer`` and produced the data is sparse matrix format.\n\n\n```python\nnews_train.data\n```\n\n\n\n\n <11314x130107 sparse matrix of type ''\n \twith 1787565 stored elements in Compressed Sparse Row format>\n\n\n\nThe value of the sparse matrix format is immediately obvious in this case; while there are only 11,000 samples there are 130,000 features! If the data were stored in a standard ``numpy`` array we would be using up 10GB of memory! And most of that memory would simply be storing the number zero, over and over again. In sparse matrix format it easily fits in memory on most machines. This sort of dimensionality of data is very common with text workloads.\n\nThe raw counts are, however, not ideal since common words the \"the\" and \"and\" will dominate the counts for most documents, while contributing very little information about the actual content of the document. We can correct for this by using a ``TfidfTransformer`` from sklearn, which will convert the data into [TF-IDF format](https://en.wikipedia.org/wiki/Tf%E2%80%93idf). There are lots of ways to think about the transformation done by TF-IDF, but I like to think of it intuitively as follows. The information content of a word can be thought of as (roughly) proportional to the negative log of the frequency of the word; the more often a word is used, the less information it tends to carry, and infrequent words carry more information. What TF-IDF is going to do can be thought of as akin to re-weighting the columns according to the information content of the word associated to that column. Thus the common words like \"the\" and \"and\" will get down-weighted, as carrying less information about the document, while infrequent words will be deemed more imporant and have their associated columns up-weighted. We can apply this transformation to both the train and test sets (using the same transformer trained on the training set).\n\n\n```python\ntfidf = sklearn.feature_extraction.text.TfidfTransformer(norm='l1').fit(news_train.data)\ntrain_data = tfidf.transform(news_train.data)\ntest_data = tfidf.transform(news_test.data)\n```\n\nThe result is still a sparse matrix, since TF-IDF doesn't change the zero elements at all, nor the number of features.\n\n\n```python\ntrain_data\n```\n\n\n\n\n <11314x130107 sparse matrix of type ''\n \twith 1787565 stored elements in Compressed Sparse Row format>\n\n\n\nNow we need to pass this very high dimensional data to UMAP. 
Unlike some other non-linear dimension reduction techniques we don't need to apply PCA first to get the data down to a reasonable dimensionality; nor do we need to use other techniques to reduce the data to be able to be represented as a dense ``numpy`` array; we can work directly on the 130,000 dimensional sparse matrix.\n\n\n```python\n%%time\nmapper = umap.UMAP(metric='hellinger', random_state=42).fit(train_data)\n```\n\n CPU times: user 8min 40s, sys: 3.07 s, total: 8min 44s\n Wall time: 8min 43s\n\n\nNow we can plot the results, with labels according to the target variable of the data -- which newsgroup the posting was drawn from.\n\n\n```python\numap.plot.points(mapper, labels=news_train.target)\n```\n\nWe can see that even going directly from a 130,000 dimensional space down to only 2 dimensions UMAP has done a decent job of separating out many of the different newsgroups.\n\nWe can now attempt to add the test data to the same space using the ``transform`` method. \n\n\n```python\ntest_embedding = mapper.transform(test_data)\n```\n\nWhile this is somewhat expensive computationally, it does work, and we can plot the end result:\n\n\n```python\nfig = plt.figure(figsize=(12,12))\nax = fig.add_subplot(111)\nplt.scatter(test_embedding[:, 0], test_embedding[:, 1], s=1, c=news_test.target, cmap='Spectral')\nax.set(xticks=[], yticks=[]);\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "cc4716efc8cac83e2d3450d460fd9a5da37c3b82", "size": 477105, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "umap_doc_notebooks/sparse.ipynb", "max_stars_repo_name": "JamesRH/umap", "max_stars_repo_head_hexsha": "4ba98ca6ecf0a3a03b92bd5bd8410a77145fc2a4", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-02-24T16:29:08.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-29T02:29:01.000Z", "max_issues_repo_path": "umap_doc_notebooks/sparse.ipynb", "max_issues_repo_name": "JamesRH/umap", "max_issues_repo_head_hexsha": "4ba98ca6ecf0a3a03b92bd5bd8410a77145fc2a4", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "umap_doc_notebooks/sparse.ipynb", "max_forks_repo_name": "JamesRH/umap", "max_forks_repo_head_hexsha": "4ba98ca6ecf0a3a03b92bd5bd8410a77145fc2a4", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-01-10T23:43:10.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-10T16:57:36.000Z", "avg_line_length": 958.0421686747, "max_line_length": 228656, "alphanum_fraction": 0.9507278272, "converted": true, "num_tokens": 3372, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9416541577509315, "lm_q2_score": 0.8807970904940926, "lm_q1q2_score": 0.8294062423986858}} {"text": "# Quaternion Triple Products and Distance\n\nby Doug Sweetser, sweetser@alum.mit.edu - please feel free to email\n\nIn this IPython notebook, efforts will be made to understand quaternion triple products and how they are related to distances in space and intervals in space-time as seen in special relativity. Rather than follow a historical story, I will try a more abstract approach. 
Initialize a few tools.\n\n\n```python\n%%capture\n%matplotlib inline\nimport numpy as np\nimport sympy as sp\nimport matplotlib.pyplot as plt\n\n# To get equations the look like, well, equations, use the following.\nfrom sympy.interactive import printing\nprinting.init_printing(use_latex=True)\nfrom IPython.display import display\n\n# Tools for manipulating quaternions.\nimport Q_tools as qt;\n```\n\n## Spatial Rotations\n\nDefine a triple product function modeled on what it takes to do a spatial rotation, $P R P^*$, where $R$ is a quaternion to be spatially rotated and $P$ is a quaternion parameter to do said rotation.\n\n\n```python\ndef triple_sandwich(r, p=qt.QH([1, 0, 0, 0])):\n \"\"\"A function that takes 2 quaternions but does a triple product. The default value for P leaves R unchanged.\"\"\"\n\n return p.product(r.product(p.conj()))\n```\n\n\n```python\nt, x, y, z = sp.symbols(\"t x y z\")\ns, u, v, w = sp.symbols(\"s u v w\")\n\nR = qt.QH([t, x, y, z])\nP = qt.QH([s, u, v, w])\nRP_sandwich = triple_sandwich(R, P)\nsp.simplify(RP_sandwich.t)\n```\n\nThe first term is just the norm of the parameter $P$ times the scalar value of $R$, how simple! Rotating a value is complicated.\n\n\n```python\nsp.simplify(RP_sandwich.x)\n```\n\nShow the interval of $R$ is unchanged up to the norm of the parameter $P$:\n\n\n```python\nsp.simplify(sp.factor(RP_sandwich.square().t))\n```\n\nThe interval will be invariant so long as the norm of the parameter $P$ is equal to one. A common way to do this is to use sine and cosine functions due to the trig identity $\\sin^2(\\theta) + \\cos^2(\\theta) = 1$.\n\n\n```python\ndef triple_trig_z(r, a):\n \"\"\"A rotation around the z axis only by the double angle of a.\"\"\"\n \n return triple_sandwich(r, qt.QH([sp.cos(a), 0, 0, sp.sin(a)]))\n\ndef is_quadratic(r):\n \"\"\"Tests if the the first term of the square of a quaternion is equal to t^2 - x^2 - y^2 - z^2.\"\"\"\n \n r2 = r.square()\n simple_r2 = sp.simplify(r2.t)\n it_is = ((simple_r2 == 1.0*t**2 - 1.0*x**2 - 1.0*y**2 - 1.0*z**2) \n or (simple_r2 == t**2 - x**2 - y**2 - z**2))\n \n if it_is:\n display(t**2 - x**2 - y**2 - z**2)\n else:\n display(simple_r2)\n \n return it_is\n```\n\n\n```python\na = sp.Symbol('a')\ndisplay(sp.simplify(triple_trig_z(R, a).t))\ndisplay(sp.simplify(triple_trig_z(R, a).x))\ndisplay(sp.simplify(triple_trig_z(R, a).y))\ndisplay(sp.simplify(triple_trig_z(R, a).z))\nis_quadratic(triple_trig_z(R, a))\n```\n\nAn important thing to notice is that rotations work for arbitrarily small values of an angle.\n\n\n```python\ndisplay(sp.simplify(triple_trig_z(R, 0.01).t))\ndisplay(sp.simplify(triple_trig_z(R, 0.01).x))\ndisplay(sp.simplify(triple_trig_z(R, 0.01).y))\ndisplay(sp.simplify(triple_trig_z(R, 0.01).z))\nis_quadratic(triple_trig_z(R, 0.01))\n```\n\nThis is relevant to the fact that the group $SO(3)$ is a compact group. It is easy to visualize the example above: it is a circle in the $xy$ plane with $t$ and $z$ unaltered. Circles are sets of points where the \"next\" point is an arbitrarily short distance away.\n\nCan we create a function that can take _any_ quaternion parameter $P$ yet still always generate another member of the group $SO(3)$? This can be done using the inverse of a quaternion which is the conjugate of a quaternion divided by the norm squared. Groups are about binary operations on a set. 
The binary operation can be a composite function, where the results of one rotation are fed into another.\n\n\n```python\ndef next_rotation(r, p=qt.QH([1, 0, 0, 0])):\n \"\"\"Generates another member of the rotation group given a quaternion parameter P.\"\"\"\n \n return p.product(r.product(p.invert()))\n\ndef composite_rotation(r, p1=qt.QH([1, 0, 0, 0]), p2=qt.QH([1, 0, 0, 0])):\n \"\"\"A composite function of next_rotation.\"\"\"\n \n return next_rotation(next_rotation(r, p1), p2)\n```\n\n\n```python\ndisplay(sp.simplify(composite_rotation(R, qt.QH([s, u, v, w])).t))\ndisplay(sp.simplify(composite_rotation(R, qt.QH([s, u, v, w])).x))\nis_quadratic(composite_rotation(R, qt.QH([s, u, v, w])))\n```\n\nThe next_rotation function can use any quaternion parameter $P$ as input and create another member of the group. This does not mean that rotations have four degrees of freedom. There is an equivalence relation involved since the product of a quaternion with its inverse has a norm of one. This algebraic constraint means the composite_rotation function has $4-1=3$ degrees of freedom.\n\nThe composite_rotation function could be used to show that there is a real-valued quaternion representation of the compact Lie group $SO(3)$. Since it is well known quaternions can do this, such an effort will be skipped.\n\n## Other Triple Products Lead to More Than Just Rotations\n\nOther triple products are possible. For example, the two quaternions could be on the same side. A number of years ago, a search for a real-valued quaternion function that could do a Lorentz boost turned up this difference between two one-sided triples, $ \\frac{1}{2}((P P R)^* - (P^* P^* R)^*)$:\n\n\n```python\ndef triple_2_on_1(r, p=qt.QH([1, 0, 0, 0])):\n \"\"\"The two are on one side, minus a different two on one side.\"\"\"\n \n ppr = p.product(p.product(r)).conj()\n pcpcr = p.conj().product(p.conj().product(r)).conj()\n pd = ppr.dif(pcpcr)\n pd_ave = pd.product(qt.QH([1/2, 0, 0, 0]))\n return pd_ave\n```\n\n\n```python\nrq_321 = triple_2_on_1(R, P)\ndisplay(sp.simplify(rq_321.t))\ndisplay(sp.simplify(rq_321.x))\ndisplay(sp.simplify(rq_321.y))\ndisplay(sp.simplify(rq_321.z))\n```\n\nIf $s=0$, then triple_2_on_1 would contribute nothing.\n\nExplore the hyperbolic sine and cosines:\n\n\n```python\nphx = qt.QH([sp.cosh(a), sp.sinh(a), 0, 0])\nppr = triple_2_on_1(R, phx)\ndisplay(sp.simplify(ppr.t))\n```\n\nThis is promising for doing a Lorentz boost. There is a direct link between hyperbolic trig functions and the relativistic velocity $\\beta$ and stretch factor $\\gamma$ of special relativity.\n\n$$\\gamma = \\cosh(\\alpha)$$\n$$\\beta \\gamma = \\sinh(\\alpha)$$\n\nThe trig functions are based on a circle in the plane, while the hyperbolic trig functions start with hyperbolas. The definitions are remarkably similar:\n\n$$\\sin(\\alpha) = \\frac{e^{i \\alpha} - e^{-i \\alpha}}{2 i}$$\n\n$$\\cos(\\alpha) = \\frac{e^{i \\alpha} + e^{-i \\alpha}}{2 i}$$\n\n$$\\sinh(\\alpha) = \\frac{e^{\\alpha} - e^{-\\alpha}}{2}$$\n\n$$\\cosh(\\alpha) = \\frac{e^{\\alpha} + e^{-\\alpha}}{2}$$\n\nThe hyperbolic trig functions oddly are \"more real\", never needing an imaginary factor. 
The hyperbola of the hyperbolic cosine does touch the unit circle at its minimum, suggesting a solitary link to the trig functions.\n\nCombine the three triples and test if they do all the work of a Lorentz boost:\n$$\\rm{triple-triple}(R, P) \\equiv P R P^* + \\frac{1}{2}((P P R)^* - (P^* P^* R)^*)$$\n\n\n```python\ndef triple_triple(r, p=qt.QH([1, 0, 0, 0])):\n \"\"\"Use three triple products for rotations and boosts.\"\"\"\n \n # Note: 'qtype' provides a record of what algrabric operations were done to create a quaternion.\n return triple_sandwich(r, p).add(triple_2_on_1(r, p), qtype=\"triple_triple\")\n```\n\nCan this function do a rotation? If the first value of $P$ is equal to zero, then the two one-sided triple terms, $PPR$, will make no contribution, leaving the triple sandwich $PRP^*$. So long as the norm is equal to unity, then spatial rotations result. Do a rotation:\n\n\n```python\njk = qt.QH([0, 0, 3/5, 4/5])\ndisplay(sp.simplify(triple_triple(R, jk).t))\ndisplay(sp.simplify(triple_triple(R, jk).x))\ndisplay(sp.simplify(triple_triple(R, jk).y))\ndisplay(sp.simplify(triple_triple(R, jk).z))\nis_quadratic(triple_triple(R, jk))\n```\n\nSomething important has changed going from the regular trig functions to these hyperbolic functions for rotations. The requirements that the first term must be zero while the other three terms are normalized to unity means that one cannot go an arbitrarily small distance away and find another transformation. If one wants a product of rotations, those rotations must be at right angles to each other.\n\n\n```python\nQi, Qj, Qk = qt.QH([0, 1, 0, 0]), qt.QH([0, 0, 1, 0]), qt.QH([0, 0, 0, 1])\nprint(triple_triple(triple_triple(R, Qi), Qj))\nprint(triple_triple(R, Qi.product(Qj)))\n```\n\n (t, -x, -y, z) triple_triple\n (t, -x, -y, z) triple_triple\n\n\nThe fact that one cannot find a super close neighbor is a big technical change.\n\nWhat is so special about setting the first term equal to zero? Is there a more general form? Perhaps all that is needed is for the first term of the square to be equal to negative one. Test this out:\n\n\n```python\nminus_1 = qt.QH([2, 2, 1, 0])\nprint(minus_1.square().t)\ndisplay((triple_triple(R, minus_1).t, triple_triple(R, minus_1).x, triple_triple(R, minus_1).y, triple_triple(R, minus_1).z))\nis_quadratic(triple_triple(R, minus_1))\n```\n\nTo be honest, this came as a surprise to me. Notice that the value for time changes, so a rotation is getting mixed in with a boost. This sort of mixing of rotations and boosts is known to happen when one does two boosts, one say along $x$, the other along $y$. Now we can say a similar thing is possible for rotations. If there scalar is zero then one gets a pure spatial rotation. When that is not the case, there is a mixture of rotations and boosts.\n\nDemonstrate that a boost along the $x$ axis works.\n\n\n```python\nbx = qt.QH([sp.cosh(a), sp.sinh(a), 0, 0])\ndisplay(sp.simplify(bx.square().t))\ndisplay(sp.simplify(triple_triple(R, bx).t))\ndisplay(sp.simplify(triple_triple(R, bx).x))\ndisplay(sp.simplify(triple_triple(R, bx).y))\ndisplay(sp.simplify(triple_triple(R, bx).z))\nis_quadratic(triple_triple(R, bx))\n```\n\nPerfect. It was this result that began my investigation of triple_triple quaternion products. This is what the boost looks like using gammas and betas: $$(\\gamma t - \\gamma \\beta x, \\gamma x - \\gamma \\beta t, y, z)$$\n\nThe first term of the square of the hyperbolic parameter $P=bx$ is equal to positive one. 
So long as the triple_triple function is fed a quaternion parameter $P$ whose first term of the square has an absolute value of one, the interval is invariant. That is surprisingly simple.\n\nNote the double angle in the hyperbolic trig function that appeared earlier for rotations.\n\n## Spatial Reflection and Time Reversal\n\nFor a spatial reflection, just one spatial term flips signs. The first term of the square will not be altered. Yet the triple_triple function cannot flip only one sign. It can flip two terms. Thus, using just the triple_triple function one can go from all positive, to two positive-two negative, to all negative terms, but never one or three negative terms starting from an all positive quaternion $R$. The conjugate operator can do odd sign changes. Do a spatial reflection on $x$ only by rotating using $i$ and using the conjugate operator like so:\n\n\n```python\nx_reflection = triple_triple(R, Qi).conj()\nprint(x_reflection)\nis_quadratic(x_reflection)\n```\n\nTime reversal also cannot be done using triple_triple. The parameter $P$ is used twice, so its sign is of no consequence for the scalar in $R$. The entire quaternion $R$ must be multiplied by $-1$ then take a conjugate like so:\n\n\n```python\nt_reversal = triple_triple(R).conj().product(qt.QH([-1, 0, 0, 0], qtype=\"sign_flip\"))\nprint(t_reversal)\nis_quadratic(t_reversal)\n```\n\nRotations and boosts do not do the work of time reversal. Time reversal requires different algebraic tricks.\n\n## Fixing the Limitations of the Triple_Triple Function\n\nThe triple_triple function must be fed quaternions whose square is either exactly equal to plus or minus one. Create a function that can take in _any_ quaternion as a parameter and generate the next quadratic. The function must be scaled to the square root of the first term of the quaternion parameter $P$ squared. Expand the parameters so both spatial reflections and time reversals can be done.\n\nIf the parameter $P$ is light-like, it cannot be used to do a boost. Feed the triple_triple function a light-like quaternion and it will always return zero. Light-like quaternions can do rotations. 
The next_rotation function is up to the task.\n\n\n```python\ndef next_quadratic(r, p=qt.QH([1, 0, 0, 0]), conj=False, sign_flip=False):\n \"\"\"Generates another quadratic using a quaternion parameter p, \n if given any quaternion and whether a conjugate or sign flip is needed.\"\"\"\n \n pt_squared = p.square().t\n \n # Avoid using sp.Abs() so equations can be simplified.\n if isinstance(pt_squared, (int, float)):\n if pt_squared < 0:\n pt_squared *= -1\n else:\n if pt_squared.is_negative:\n pt_squared *= -1\n \n sqrt_pt_squared = sp.sqrt(pt_squared)\n \n # A light-like parameter P can rotate but not boost R.\n if sqrt_pt_squared == 0:\n rot_calc = next_rotation(r, p) \n else:\n p_normalized = p.product(qt.QH([1/sqrt_pt_squared, 0, 0, 0]))\n rot_calc = triple_triple(r, p_normalized)\n \n if conj:\n conj_calc = rot_calc.conj()\n else:\n conj_calc = rot_calc\n \n if sign_flip:\n sign_calc = conj_calc.product(qt.QH([-1, 0, 0, 0]))\n else:\n sign_calc = conj_calc\n \n calc_t = sp.simplify(sp.expand(sign_calc.t))\n calc_x = sp.simplify(sp.expand(sign_calc.x))\n calc_y = sp.simplify(sp.expand(sign_calc.y))\n calc_z = sp.simplify(sp.expand(sign_calc.z))\n \n return qt.QH([calc_t, calc_x, calc_y, calc_z], qtype=\"L\")\n```\n\n\n```python\ndisplay(sp.simplify(next_quadratic(R, P, True, True).t))\ndisplay(sp.simplify(next_quadratic(R, P, True, True).x))\nis_quadratic(next_quadratic(R, P, True, True))\n```\n\nNo matter what values are used for the parameter $P$, the next_quadratic function will preserve the interval of $R$. Even a light-like interval works:\n\n\n```python\nprint(next_quadratic(R, qt.QH([s, s, 0, 0])))\nis_quadratic(next_quadratic(R, qt.QH([s, s, 0, 0])))\n```\n\nNotice how the $y$ and $z$ terms flip positions, but the squaring process will put both into their proper spots in the first term of the square.\n\n## The Lorentz Group and Functional Composition with the next_quadratic Function\n\nThe Lorentz group is all possible ways to transform an event in space-time yet preserve the quadratic form:\n$$(t, x, y, z) \\rightarrow t^2 - x^2 - y^2 - z^2$$\nThe elements of the group are the tuples (t, x, y, z) but not the rotation angles, boost velocities, conjugation and sign flips.\n\nA group is defined as a binary operation on a set of elements that has 4 qualities:\n1. Closure\n1. An inverse exists\n1. There is an identity element\n1. Associative\n\nThe next_quadratic function acts on one element of the group. The binary operation is a composite function built from two next_quadratic functions. 
Take the result of one action of the next_quadratic function, and have that result go into another round of the next_quadratic function.\n\n\n```python\ndef composite_quadratic(r, p1=qt.QH([1, 0, 0, 0]), p2=qt.QH([1, 0, 0, 0]), conj1=False, conj2=False, sign_flip1=False, sign_flip2=False):\n \"\"\"A composite function for the next_quadratic function.\"\"\"\n \n return next_quadratic(next_quadratic(r, p1, conj1, sign_flip1), p2, conj2, sign_flip2)\n```\n\n\n```python\nprint(composite_quadratic(R))\nis_quadratic(composite_quadratic(R))\nprint(composite_quadratic(R, Qi, Qj, True, True, True, False))\nis_quadratic(composite_quadratic(R, Qi, Qj, True, True, True, False))\nprint(composite_quadratic(R, minus_1, Qj, False, True, False, True))\nis_quadratic(composite_quadratic(R, minus_1, Qj, False, True, False, True))\nprint(composite_quadratic(R, bx, P, True, False, True, False))\nis_quadratic(composite_quadratic(R, bx, P, True, False, True, False))\nprint(composite_quadratic(composite_quadratic(R, bx, bx)))\nis_quadratic(composite_quadratic(composite_quadratic(R, bx, bx)))\n```\n\nEach of these composite functions generates exactly the same quadratic as required to be part of the Lorentz group. These five examples argue for closure: every possible choice for what one puts in the composite_quadratic function will have the same quadratic. I don't have the math skills to prove closure (unless one thinks the earlier general case is enough).\n\nQuaternions are a division algebra. As such, it is reasonable to expect an inverse to exist. Look for one for the $Qi$, $Qk$ parameter case:\n\n\n```python\nprint(composite_quadratic(R, Qi, Qj, True, True, True, False))\nprint(composite_quadratic(composite_quadratic(R, Qi, Qj, True, True, True, False), Qk))\n```\n\n (-t, x, y, -z) L\n (-t, -x, -y, -z) L\n\n\nClose, but not quite. Add a sign_flip.\n\n\n```python\nprint(composite_quadratic(composite_quadratic(R, Qi, Qj, True, True, True, False), Qk, sign_flip1=True))\n```\n\n (t, x, y, z) L\n\n\nThe is back where we started with the quaternion $R$. Again, this is just an example and not a proof. Some inverses are easier to find than others like pure rotations or pure boosts with a rotation or opposite velocity.\n\nThe identity composition was shown to do its fine work in the first composite_quadratic(R) example.\n\nComposite functions are associative, at least according to wikipedia.\n\n## The Difference Between composite_rotation and composite_quadratic\n\nBoth of these composite functions call another function twice, next_rotation and next_quadratic respectively. Both functions do a normalization. The next_rotation normalizes to the norm squared which can be zero if the parameter $P$ is zero, otherwise it is positive. The next_rotation function always does one thing, $P R P^{-1}$. The next_quadratic normalizes to the first term of the square of parameter $P$. That value can be positive, negative, or zero. When the first term of the square is positive or negative, the next_quadratic function treats both cases identically. Three triple quaternion products are used, $P R P^* + \\frac{1}{2}((P P R)^* - (P^* P^* R)^*)$. The first term is identical to a rotation so long as the norm is equal to one. Otherwise, it is off just by a scaling factor. The difference happens when it is zero which indicates the properties of light come into play. It is the lightcone that separates time-like events from space-like events. 
For a time-like value of the parameter $P$, the triple-triple returns zero which is not a member of the group. If one uses the first triple, no matter what its norm of light-like parameter $P$ happens to be, the resulting $R->R'$ remains in the group. The rotation group $SO(3)$ is compact, while the Lorentz group $O(1, 3)$ is not. The change in algebra needed for light-light parameter $P$ may be another way to view this difference.\n\n## Degrees of Freedom\n\nThe typical representation of the Lorentz group $O(1, 3)$ says there are six independent variables needed to represent the Lorentz group: three for rotations and three for boosts. Yet when one does two boosts in different directions, it is a mix between a boost and a rotation. This suggests there is no such thing as a completely separate notion of rotations and boosts, that they have a capacity to mix. If true, that decreases the degrees of freedom.\n\nTwo spacial rotations will result in spacial rotation:\n\n\n```python\nprint(composite_quadratic(R, qt.QH([0, 1,0,1]), qt.QH([0, 1,1,0])))\nis_quadratic(composite_quadratic(R, qt.QH([0, 1,0,1]), qt.QH([0, 1,1,0])))\n```\n\nNotice that the value of the first squared term is negative. That value gets normalized to negative one in the composite_quadratic function (via the next_quadratic function that gets called twice). What makes these rotations be only spacial is the zero in the first position of the parameter $P$. It is easy enough to look at situations where the first term of the square is negative, and the first term of the parameter is not equal to zero:\n\n\n```python\nprint(composite_quadratic(R, qt.QH([4, 5,0,0])))\nis_quadratic(composite_quadratic(R, qt.QH([4, 5,0,0])))\n```\n\nThis is both a boost and a rotation. The boost effect can be seen in the first and second terms where there is a positve and negative term (the negative being the term that \"doesn't belong\", seeing the $x$ in the first term and $t$ in the second). The rotation appears in the sign flips for $y$ and $z$. If the 4 and 5 are switched, there is no rotation of these terms:\n\n\n```python\nprint(composite_quadratic(R, qt.QH([5, 4,0,0])))\n```\n\n (4.55555555555556*t - 4.44444444444444*x, -4.44444444444444*t + 4.55555555555556*x, y, z) L\n\n\nThe first two terms are exactly the same. Now the last two terms don't flip signs because there is no rotation. Both the (4, 5) and (5, 4) parameter composites will have the same first term for the square. This real-valued quaternion representation makes it possible to see.\n\nAt first blush, one looks into the next_quadratic function and sees six degrees of freedom: four for the quaternion parameter $P$, one for the conjugate operator and one for the sign_flip. These last two are needed to generate spatial reflection and time reversal. The quaternion parameter $P$ normalizes to the first term of the square of the quaternion parameter $P$. This means that once three of the values are chosen, then the value of the fourth one is set by this algebraic constraint. The same thing happens with the composite_rotation function defined earlier: a 4D quaternion may go in, but they way it gets normalized means there is an equivalence class to those quaternions that have a norm of one, and thus only 3 degrees of freedom. 
Representing the Lorentz group with only five degrees of freedom with this real-valued quaternion representation would be an interesting result if it can be rigorously proved.\n", "meta": {"hexsha": "b61f65ba9c33948acf084b31a0b345fe4577c2a8", "size": 103888, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Notebooks/triple_products_and_distance.ipynb", "max_stars_repo_name": "dougsweetser/AIG", "max_stars_repo_head_hexsha": "ce23119bbde41671438fb805dfba4b04b42d84d6", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Notebooks/triple_products_and_distance.ipynb", "max_issues_repo_name": "dougsweetser/AIG", "max_issues_repo_head_hexsha": "ce23119bbde41671438fb805dfba4b04b42d84d6", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Notebooks/triple_products_and_distance.ipynb", "max_forks_repo_name": "dougsweetser/AIG", "max_forks_repo_head_hexsha": "ce23119bbde41671438fb805dfba4b04b42d84d6", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 57.6835091616, "max_line_length": 3368, "alphanum_fraction": 0.7331356846, "converted": true, "num_tokens": 5872, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9416541577509315, "lm_q2_score": 0.8807970826714613, "lm_q1q2_score": 0.8294062350324725}} {"text": "# Fast-Slow\u7cfb\n\\begin{equation}\n\\begin{array}{ll}\n\\dot{x}&=\\varepsilon (y+x-\\frac{x^3}{3})\\\\\n\\dot{y}&=-\\frac{1}{\\varepsilon}x\n\\end{array}\n\\label{fastslow_vdp}\n\\end{equation}\n\n\n```python\nimport numpy as np\nfrom scipy.integrate import solve_ivp\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline\nsns.set('poster', 'whitegrid', 'dark', rc={\"lines.linewidth\": 2, 'grid.linestyle': '--'})\n```\n\n\n```python\ndef vdp(t, x, eps):\n return [eps * (x[1] + x[0] - x[0]**3/3), -x[0]/eps]\n```\n\n\n```python\neps=10\nt0 = 0.0\n```\n\n\n```python\nt1 = 100.0\nx0 = [0.0, 1.5]\ns0 = solve_ivp(vdp, [t0, t1], x0, args=([eps]),dense_output=True)\n```\n\n\n```python\nT = np.linspace(t0, t1, 10000)\nsol = s0.sol(T)\nfig = plt.figure(figsize=(9,6))\n\nax = fig.add_subplot(111)\nax.set_xlabel(\"$x$\")\nax.set_ylabel(\"$y$\")\nax.set_xlim(-3,3)\nax.set_ylim(-2,2)\nax.set_yticks([-2,-1,0,1,2])\nX = np.linspace(-3,3,256)\nax.plot(np.zeros(2), np.linspace(-2,2,2), '-', color='gray')\nax.plot(X, -X + X**3/3, '--', color='gray')\nax.plot(sol.T[:,0], sol.T[:,1], '-k')\n# plt.savefig(\"fastslow.pdf\", bbox_inches='tight')\n```\n", "meta": {"hexsha": "4919461718b7492c6e4607777afa096af8520f80", "size": 30471, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/oscillation/fastslow.ipynb", "max_stars_repo_name": "tmiyaji/sgc164", "max_stars_repo_head_hexsha": "660f61b72a3898f8e287feb464134f5c48f9383e", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2021-02-01T15:29:43.000Z", "max_stars_repo_stars_event_max_datetime": "2021-10-01T13:20:21.000Z", "max_issues_repo_path": "notebooks/oscillation/fastslow.ipynb", "max_issues_repo_name": "tmiyaji/sgc164", "max_issues_repo_head_hexsha": "660f61b72a3898f8e287feb464134f5c48f9383e", 
"max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/oscillation/fastslow.ipynb", "max_forks_repo_name": "tmiyaji/sgc164", "max_forks_repo_head_hexsha": "660f61b72a3898f8e287feb464134f5c48f9383e", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-12-20T07:46:22.000Z", "max_forks_repo_forks_event_max_datetime": "2020-12-20T07:46:22.000Z", "avg_line_length": 236.2093023256, "max_line_length": 27668, "alphanum_fraction": 0.9219257655, "converted": true, "num_tokens": 421, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9399133548753619, "lm_q2_score": 0.8824278741843884, "lm_q1q2_score": 0.8294057436601823}} {"text": "# SymPy: Open Source Symbolic Mathematics\n\nThis notebook uses the [SymPy](http://sympy.org) package to perform symbolic manipulations,\nand combined with numpy and matplotlib, also displays numerical visualizations of symbolically\nconstructed expressions.\n\nWe first load sympy printing extensions, as well as all of sympy:\n\n\n```python\nfrom IPython.display import display\n\nfrom sympy.interactive import printing\nprinting.init_printing(use_latex='mathjax')\n\nimport sympy as sym\nx, y, z = sym.symbols(\"x y z\")\nk, m, n = sym.symbols(\"k m n\", integer=True)\nf, g, h = map(sym.Function, 'fgh')\n```\n\n

## Elementary operations

    \n\n\n```python\nsym.Rational(3,2)*sym.pi + sym.exp(sym.I*x) / (x**2 + y)\n```\n\n\n```python\nsym.exp(sym.I*x).subs(x,sym.pi).evalf()\n```\n\n\n```python\ne = x + 2*y\n```\n\n\n```python\nsym.srepr(e)\n```\n\n\n```python\nsym.exp(sym.pi * sym.sqrt(163)).evalf(50)\n```\n\n

## Algebra

    \n\n\n```python\neq = ((x+y)**2 * (x+1))\neq\n```\n\n\n```python\nsym.expand(eq)\n```\n\n\n```python\na = 1/x + (x*sym.sin(x) - 1)/x\na\n```\n\n\n```python\nsym.simplify(a)\n```\n\n\n```python\neq = sym.Eq(x**3 + 2*x**2 + 4*x + 8, 0)\neq\n```\n\n\n```python\nsym.solve(eq, x)\n```\n\n\n```python\na, b = sym.symbols('a b')\nsym.Sum(6*n**2 + 2**n, (n, a, b))\n```\n\n

## Calculus

    \n\n\n```python\nsym.limit((sym.sin(x)-x)/x**3, x, 0)\n```\n\n\n```python\n(1/sym.cos(x)).series(x, 0, 6)\n```\n\n\n```python\nsym.diff(sym.cos(x**2)**2 / (1+x), x)\n```\n\n\n```python\nsym.integrate(x**2 * sym.cos(x), (x, 0, sym.pi/2))\n```\n\n\n```python\neqn = sym.Eq(sym.Derivative(f(x),x,x) + 9*f(x), 1)\ndisplay(eqn)\nsym.dsolve(eqn, f(x))\n```\n\n# Illustrating Taylor series\n\nWe will define a function to compute the Taylor series expansions of a symbolically defined expression at\nvarious orders and visualize all the approximations together with the original function\n\n\n```python\n%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\n```\n\n\n```python\n# You can change the default figure size to be a bit larger if you want,\n# uncomment the next line for that:\n#plt.rc('figure', figsize=(10, 6))\n```\n\n\n```python\ndef plot_taylor_approximations(func, x0=None, orders=(2, 4), xrange=(0,1), yrange=None, npts=200):\n \"\"\"Plot the Taylor series approximations to a function at various orders.\n\n Parameters\n ----------\n func : a sympy function\n x0 : float\n Origin of the Taylor series expansion. If not given, x0=xrange[0].\n orders : list\n List of integers with the orders of Taylor series to show. Default is (2, 4).\n xrange : 2-tuple or array.\n Either an (xmin, xmax) tuple indicating the x range for the plot (default is (0, 1)),\n or the actual array of values to use.\n yrange : 2-tuple\n (ymin, ymax) tuple indicating the y range for the plot. If not given,\n the full range of values will be automatically used. \n npts : int\n Number of points to sample the x range with. Default is 200.\n \"\"\"\n if not callable(func):\n raise ValueError('func must be callable')\n if isinstance(xrange, (list, tuple)):\n x = np.linspace(float(xrange[0]), float(xrange[1]), npts)\n else:\n x = xrange\n if x0 is None: x0 = x[0]\n xs = sym.Symbol('x')\n # Make a numpy-callable form of the original function for plotting\n fx = func(xs)\n f = sym.lambdify(xs, fx, modules=['numpy'])\n # We could use latex(fx) instead of str(), but matploblib gets confused\n # with some of the (valid) latex constructs sympy emits. So we play it safe.\n plt.plot(x, f(x), label=str(fx), lw=2)\n # Build the Taylor approximations, plotting as we go\n apps = {}\n for order in orders:\n app = fx.series(xs, x0, n=order).removeO()\n apps[order] = app\n # Must be careful here: if the approximation is a constant, we can't\n # blindly use lambdify as it won't do the right thing. 
In that case, \n # evaluate the number as a float and fill the y array with that value.\n if isinstance(app, sym.numbers.Number):\n y = np.zeros_like(x)\n y.fill(app.evalf())\n else:\n fa = sym.lambdify(xs, app, modules=['numpy'])\n y = fa(x)\n tex = sym.latex(app).replace('$', '')\n plt.plot(x, y, label=r'$n=%s:\\, %s$' % (order, tex) )\n \n # Plot refinements\n if yrange is not None:\n plt.ylim(*yrange)\n plt.grid()\n plt.legend(loc='best').get_frame().set_alpha(0.8)\n```\n\nWith this function defined, we can now use it for any sympy function or expression\n\n\n```python\nplot_taylor_approximations(sym.sin, 0, [2, 4, 6], (0, 2*sym.pi), (-2,2))\n```\n\n\n```python\nplot_taylor_approximations(sym.cos, 0, [2, 4, 6], (0, 2*sym.pi), (-2,2))\n```\n\nThis shows easily how a Taylor series is useless beyond its convergence radius, illustrated by \na simple function that has singularities on the real axis:\n\n\n```python\n# For an expression made from elementary functions, we must first make it into\n# a callable function, the simplest way is to use the Python lambda construct.\nplot_taylor_approximations(lambda x: 1/sym.cos(x), 0, [2,4,6], (0, 2*sym.pi), (-5,5))\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "b2c2e53d9e6b8fc1c31f5d91fa5f02882881be63", "size": 10698, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "001-Jupyter/001-Tutorials/003-IPython-in-Depth/examples/IPython Kernel/SymPy.ipynb", "max_stars_repo_name": "jhgoebbert/jupyter-jsc-notebooks", "max_stars_repo_head_hexsha": "bcd08ced04db00e7a66473b146f8f31f2e657539", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "001-Jupyter/001-Tutorials/003-IPython-in-Depth/examples/IPython Kernel/SymPy.ipynb", "max_issues_repo_name": "jhgoebbert/jupyter-jsc-notebooks", "max_issues_repo_head_hexsha": "bcd08ced04db00e7a66473b146f8f31f2e657539", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "001-Jupyter/001-Tutorials/003-IPython-in-Depth/examples/IPython Kernel/SymPy.ipynb", "max_forks_repo_name": "jhgoebbert/jupyter-jsc-notebooks", "max_forks_repo_head_hexsha": "bcd08ced04db00e7a66473b146f8f31f2e657539", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-01-13T18:49:12.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-13T18:49:12.000Z", "avg_line_length": 22.9570815451, "max_line_length": 114, "alphanum_fraction": 0.5119648532, "converted": true, "num_tokens": 1497, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9399133498259924, "lm_q2_score": 0.8824278571786139, "lm_q1q2_score": 0.8294057232205234}} {"text": "Tra\u00e7ar um esbo\u00e7o do gr\u00e1fico e obter uma equa\u00e7\u00e3o da par\u00e1bola que satisfa\u00e7a as condi\u00e7\u00f5es dadas.\n\n17. V\u00e9rtice: $V(0,0)$; Eixo $y=0$; Passa pelo ponto $(4,5)$

Since the axis of the parabola is the $x$-axis (the line $y = 0$), its equation has the form $y^2 = 2px$.

Substituting the given point into the equation, we have:

$5^2 = 2\cdot p \cdot 4$

$25 = 8p$

$\frac{25}{8} = p$

Finding the value of the focus:

$F = \frac{p}{2}$

$F = \frac{\frac{25}{8}}{2}$

$F = \frac{25}{8} \cdot \frac{1}{2}$

$F = \frac{25}{16}$

$F(\frac{25}{16},0)$

Finding the value of the directrix:

$D = -\frac{p}{2}$

$D = -\frac{25}{16}$

$D : x = -\frac{25}{16}$

Assembling the equation:

$y^2 = 2 \cdot \frac{25}{8} \cdot x$

$y^2 = \frac{50}{8}x$

$y^2 = \frac{25}{4}x$

    \nGr\u00e1fico da par\u00e1bola

    \n\n\n```python\nfrom sympy import *\nfrom sympy.plotting import plot_implicit\nx, y = symbols(\"x y\")\nplot_implicit(Eq((y-0)**2, 25/4*(x+0)), (x,-10,10), (y,-10,10),\ntitle=u'Gr\u00e1fico da par\u00e1bola', xlabel='x', ylabel='y');\n```\n", "meta": {"hexsha": "27b084e9cda2042171bf2781ce26fbecd4ce3243", "size": 16719, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Problemas Propostos. Pag. 172 - 175/17.ipynb", "max_stars_repo_name": "mateuschaves/GEOMETRIA-ANALITICA", "max_stars_repo_head_hexsha": "bc47ece7ebab154e2894226c6d939b7e7f332878", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-02-03T16:40:45.000Z", "max_stars_repo_stars_event_max_datetime": "2020-02-03T16:40:45.000Z", "max_issues_repo_path": "Problemas Propostos. Pag. 172 - 175/17.ipynb", "max_issues_repo_name": "mateuschaves/GEOMETRIA-ANALITICA", "max_issues_repo_head_hexsha": "bc47ece7ebab154e2894226c6d939b7e7f332878", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Problemas Propostos. Pag. 172 - 175/17.ipynb", "max_forks_repo_name": "mateuschaves/GEOMETRIA-ANALITICA", "max_forks_repo_head_hexsha": "bc47ece7ebab154e2894226c6d939b7e7f332878", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 185.7666666667, "max_line_length": 14364, "alphanum_fraction": 0.8986781506, "converted": true, "num_tokens": 535, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9284088025362857, "lm_q2_score": 0.8933094032139576, "lm_q1q2_score": 0.8293563133322743}} {"text": "# Exercise 2\nWrite a function to compute the roots of a mathematical equation of the form\n\\begin{align}\n ax^{2} + bx + c = 0.\n\\end{align}\nYour function should be sensitive enough to adapt to situations in which a user might accidentally set $a=0$, or $b=0$, or even $a=b=0$. For example, if $a=0, b\\neq 0$, your function should print a warning and compute the roots of the resulting linear function. It is up to you on how to handle the function header: feel free to use default keyword arguments, variable positional arguments, variable keyword arguments, or something else as you see fit. Try to make it user friendly.\n\nYour function should return a tuple containing the roots of the provided equation.\n\n**Hint:** Quadratic equations can have complex roots of the form $r = a + ib$ where $i=\\sqrt{-1}$ (Python uses the notation $j=\\sqrt{-1}$). To deal with complex roots, you should import the `cmath` library and use `cmath.sqrt` when computing square roots. `cmath` will return a complex number for you. 
You could handle complex roots yourself if you want, but you might as well use available libraries to save some work.\n\n\n```python\nimport cmath\n\ndef compute_roots(a, b, c):\n    \"\"\" Returns roots for quadratic equation of form ax^2 + bx + c = 0 \"\"\"\n    if a == 0:\n        if b == 0:\n            if c == 0:\n                raise ValueError('Infinitely many solutions!')\n            else:\n                raise ValueError('No solutions!')\n        else:\n            # Linear case: bx + c = 0\n            return (-c/b,)\n    \n    discriminant = b**2 - (4 * a * c)\n    \n    sol1 = (-b + cmath.sqrt(discriminant)) / (2 * a)\n    sol2 = (-b - cmath.sqrt(discriminant)) / (2 * a)\n    return sol1, sol2\n\na = 0\nb = 2\nc = 4\ntry:\n    solution = compute_roots(a, b, c)\n    print('Roots = {}'.format(solution))\nexcept Exception as e:\n    print(e)\n```\n\n    Roots = (-2.0,)\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "08d7bf64db307869824b0030dcf95b3a58526220", "size": 3045, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "lectures/L5/Exercise_2.ipynb", "max_stars_repo_name": "nate-stein/cs207_nate_stein", "max_stars_repo_head_hexsha": "f8ce68f9d839a0bd0ab4a2e1ebaa7ae985f6b5d0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "lectures/L5/Exercise_2.ipynb", "max_issues_repo_name": "nate-stein/cs207_nate_stein", "max_issues_repo_head_hexsha": "f8ce68f9d839a0bd0ab4a2e1ebaa7ae985f6b5d0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "lectures/L5/Exercise_2.ipynb", "max_forks_repo_name": "nate-stein/cs207_nate_stein", "max_forks_repo_head_hexsha": "f8ce68f9d839a0bd0ab4a2e1ebaa7ae985f6b5d0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.71875, "max_line_length": 495, "alphanum_fraction": 0.5458128079, "converted": true, "num_tokens": 498, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9381240090865197, "lm_q2_score": 0.8840392741081575, "lm_q1q2_score": 0.8293384680162814}} {"text": "# Modeling data 2\n\n## Building a model\n\nRecall that in notebook 3, we saw that we could use a mathematical function to classify an image as an apple or a banana, based on the average amount of green in an image:\n\n\n\n\n\n\nA common function for performing this kind of **classification** is the sigmoid that we saw in the last notebook, and that we will now extend by adding two **parameters**, $w$ and $b$:\n\n$$\sigma(x; w, b) := \frac{1}{1 + \exp(-wx + b)}$$\n\n$$ x = \mathrm{data} $$\n\n\begin{align}\n\sigma(x;w,b) &\approx 0 \implies \mathrm{apple} \\\n\sigma(x;w,b) &\approx 1 \implies \mathrm{banana}\n\end{align}\n\nIn our mathematical notation above, the `;` in the function differentiates between the **data** and the **parameters**. `x` is the data and is determined from the image. 
The parameters, `w` and `b`, are numbers which we choose to make our function match the results it should be modeling.\n\nNote that in the code below, we don't distinguish between data and parameters - both are just inputs to our function, \u03c3!\n\n\n```julia\nusing Images, Statistics\n\napple = load(\"data/10_100.jpg\")\nbanana = load(\"data/104_100.jpg\")\n\napple_green_amount = mean(Float64.(green.(apple)))\nbanana_green_amount = mean(Float64.(green.(banana)))\n\nprintln(\"Average green for apple = $apple_green_amount\")\nprintln(\"Average green for banana = $banana_green_amount\")\n```\n\n Average green for apple = 0.3382027450980393\n Average green for banana = 0.8807972549019609\n\n\n\n```julia\n\u03c3(x, w, b) = 1 / (1 + exp(-w * x + b))\n```\n\n\n\n\n \u03c3 (generic function with 1 method)\n\n\n\nWhat we want is that when we give \u03c3 as input the average green for the apple, roughly `x = 0.3385`, it should return as output something close to 0, meaning \"apple\". And when we give \u03c3 the input `x = 0.8808`, it should output something close to 1, meaning \"banana\".\n\nBy changing the parameters of the function, we can change the shape of the function, and hence make it represent, or **fit**, the data better!\n\n## Data fitting by varying parameters\n\nWe can understand how our choice of `w` and `b` affects our model by seeing how our values for `w` and `b` change the plot of the $\\sigma$ function.\n\n\n```julia\nusing Plots; gr() # GR works better for interactive manipulations\n```\n\n\n\n\n Plots.GRBackend()\n\n\n\nRun the code in the next cell. You should see two \"sliders\" appear, one for `w` and one for `b`.\n\n**Game**:\nChange w and b around until the blue curve, labeled \"model\", which is the graph of the `\\sigma` function, passes through *both* of the data points at the same time.\n\n\n```julia\nw = 10.0 # try manipulating w between -10 and 30\nb = 10.0 # try manipulating b between 0 and 20\n\nplot(x -> \u03c3(x, w, b), xlim=(-0,1), ylim=(-0.1,1.1), label=\"model\", legend=:topleft, lw=3)\n\nscatter!([apple_green_amount], [0.0], label=\"apple\", ms=5) # marker size = 5\nscatter!([banana_green_amount], [1.0], label=\"banana\", ms=5)\n```\n\n\n\n\n \n\n \n\n\n\nNotice that the two parameters do two very different things. The **weight**, `w`, determines *how fast* the transition between 0 and 1 occurs. It encodes how trustworthy we think our data actually is, and in what range we should be putting points between 0 and 1 and thus calling them \"unsure\". The **bias**, `b`, encodes *where* on the $x$-axis the switch should take place. It can be seen as shifting the function left-right. We'll come to understand these *parameters* more in notebook 6.\n\nHere are some parameter choices that work well:\n\n\n```julia\nw = 25.58; b = 15.6\n\nplot(x -> \u03c3(x, w, b), xlim=(0,1), ylim=(-0.1,1.1), label=\"model\", legend=:topleft, lw=3)\n\nscatter!([apple_green_amount], [0.0], label=\"apple\")\nscatter!([banana_green_amount],[1.0], label=\"banana\")\n```\n\n\n\n\n \n\n \n\n\n\n(Note that in this problem there are many combinations of `w` and `b` that fit the data well.)\n\nOnce we have a model, we have a computational representation for how to choose between \"apple\" and \"banana\". 
So let's pull in some new images and see what our model says about them!\n\n\n```julia\napple2 = load(\"data/107_100.jpg\")\n```\n\n\n```julia\ngreen_amount = mean(Float64.(green.(apple2)))\n@show green_amount\n\nscatter!([green_amount], [0.0], label=\"new apple\")\n```\n\n green_amount = 0.4687666666666668\n\n\n\n\n\n \n\n \n\n\n\nOur model successfully says that our new image is an apple! Pat yourself on the back: you've actually just trained your first neural network!\n\n#### Exercise 1\n\nLoad the image of a banana in `data/8_100.jpg` as `mybanana`. Edit the code below to calculate the amount of green in `mybanana` and to overlay data for this image with the existing model and data points.\n\n#### Solution\n\nTo get the desired overlay, the code we need is\n\n\n```julia\nmybanana = load(\"data/8_100.jpg\")\nmybanana_green_amount = mean(Float64.(green.(banana)))\nscatter!([mybanana_green_amount], [1.0], label=\"my banana\")\n```\n\n\n\n\n \n\n \n\n\n\n## Closing remarks: bigger models, more data, more accuracy\n\nThat last apple should start making you think: not all apples are red; some are yellow. \"Redness\" is one attribute of being an apple, but isn't the whole thing. What we need to do is incorporate more ideas into our model by allowing more inputs. However, more inputs would mean more parameters to play with. Also, we would like to have the computer start \"learning\" on its own, instead of modifying the parameters ourselves until we think it \"looks right\". How do we take the next step?\n\nThe first thing to think about is, if you wanted to incorporate more data into the model, how would you change the sigmoid function? Play around with some ideas. But also, start thinking about how you chose parameters. What process did you do to finally end up at good parameters? These two problems (working with models with more data and automatically choosing parameters) are the last remaining step to understanding deep learning.\n", "meta": {"hexsha": "c60d143e1ef882fe2376a344bf78fccbaa7f7f36", "size": 207904, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "0500.Building-models.ipynb", "max_stars_repo_name": "OwenAnalytics/Foundations-of-Machine-Learning", "max_stars_repo_head_hexsha": "e3b16dd422d5ca169adedf72166aee461f66033a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 22, "max_stars_repo_stars_event_min_datetime": "2021-05-18T17:03:38.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-30T02:27:18.000Z", "max_issues_repo_path": "0500.Building-models.ipynb", "max_issues_repo_name": "OwenAnalytics/Foundations-of-Machine-Learning", "max_issues_repo_head_hexsha": "e3b16dd422d5ca169adedf72166aee461f66033a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-04-13T18:03:02.000Z", "max_issues_repo_issues_event_max_datetime": "2021-07-17T12:05:47.000Z", "max_forks_repo_path": "0500.Building-models.ipynb", "max_forks_repo_name": "JuliaAcademy/Foundations-of-Machine-Learning", "max_forks_repo_head_hexsha": "e3b16dd422d5ca169adedf72166aee461f66033a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 7, "max_forks_repo_forks_event_min_datetime": "2020-11-23T20:31:19.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-07T03:29:25.000Z", "avg_line_length": 246.9168646081, "max_line_length": 19222, "alphanum_fraction": 0.7143345005, "converted": true, "num_tokens": 1514, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9525741227833249, "lm_q2_score": 0.8705972684083609, "lm_q1q2_score": 0.8293084292516533}} {"text": "# Information Retrieval in High Dimensional Data\n## Lab #6, 23.11.2017\n\n## Principal Component Analysis\n\n### Task 1\n\nIn this task we will once again work with the MNIST training set as provided on Moodle. Choose three digit classes, e.g. 1, 2 and 3, and load N=1000 images from each of the classes to the workspace. Store the data in a normalized matrix $X$ of type double and size (784, 3\*N). Furthermore, generate a color label matrix $C$ of dimensions (3\*N, 3). Each row of $C$ assigns an RGB color vector to the respective column of $X$ as an indicator of the digit class. Choose [0, 0, 1], [0, 1, 0] and [1, 0, 0] for the three digit classes.\n\n\n```python\nimport os\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport imageio\n```\n\n\n```python\nN = 1000\nX = np.zeros((784, 3*N), dtype='float64')\nfor i in range(1, 4):\n    path = 'mnist/d{}/'.format(i)\n    filenames = sorted((fn for fn in os.listdir(path) if fn.endswith('.png')))\n    for idx, fn in enumerate(filenames):\n        im = imageio.imread(path + fn)\n        X[:, idx + N*(i-1)] = np.reshape(im, 784)\n        if idx == 999:\n            break\n```\n\n\n```python\nlabels = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]])\nC = np.zeros((3*N, 3))\nfor i in range(3):\n    for j in range(1000):\n        C[N*i + j, :] = labels[i]\nC\n```\n\n\n\n\n    array([[ 0.,  0.,  1.],\n           [ 0.,  0.,  1.],\n           [ 0.,  0.,  1.],\n           ..., \n           [ 1.,  0.,  0.],\n           [ 1.,  0.,  0.],\n           [ 1.,  0.,  0.]])\n\n\n\na) Compute the row-wise mean mu of $X$ and subtract it from each column of $X$.\nSave the results as X_c.\n\n\n```python\nmu = np.mean(X, axis=1)\nX_c = X-np.expand_dims(mu, axis=1)\n```\n\nb) Use np.linalg.svd with full_matrices=False to compute the singular value decomposition [U, Sigma, VT] of X_c. Make sure the matrices are sorted in descending order with respect to the singular values.\n\n\n```python\nU, Sigma, VT = np.linalg.svd(X_c, full_matrices=False)\n```\n\nc) Use reshape in order to convert mu and the first three columns of U to (28, 28)-matrices. Plot the resulting images. What do you see?\n\n\n```python\nplt.subplot(141)\nplt.imshow(np.reshape(mu, (28,28)))\nplt.subplot(142)\nplt.imshow(np.reshape(U[:, 0], (28,28)))\nplt.subplot(143)\nplt.imshow(np.reshape(U[:, 1], (28,28)))\nplt.subplot(144)\nplt.imshow(np.reshape(U[:, 2], (28,28)))\n\nplt.show()\n```\n\nd) Compute the matrix S=np.dot(np.diag(Sigma), VT). Note that this yields the same result as S=np.dot(U.T, X_c). The S matrix contains the 3\*N scores for the principal components 1 to 784. Create a 2D scatter plot with C as its color parameter in order to plot the scores for the first two principal components of the data.\n\n\n```python\nS = np.dot(np.diag(Sigma), VT)\nplt.scatter(S[0, :], S[1, :], c=C)\nplt.show()\n```\n\n### Task 2\n\nIn this task we consider the problem of choosing the number of principal vectors. 
Assuming that $\\mathbf{X} \\in \\mathbb{R}^{p\\times N}$ is the centered data matrix and $ \\mathbf{P} = \\mathbf{U}_k \\mathbf{U}_k^T $ is the projector onto the $k$-dimensional principal subspace, the dimension $k$ is chosen such that the fraction of overall energy contained in the projection error does not exceed $\\epsilon$, i.e.\n\n\\begin{equation} \\frac{\\|\\mathbf{X} - \\mathbf{PX}\\|_F^2}{\\| \\mathbf{X}\\|_F^2} = \\frac{\\sum_{i=1}^M \\|\\mathbf{x}_i - \\mathbf{Px}_i\\|^2}{\\sum_{i=1}^N \\| \\mathbf{x}_i\\|^2} \\leq \\epsilon ,\\end{equation}\n\nwhere $\\epsilon$ is usually chosen to be between $0.01$ and $0.2$.\n\nThe MIT VisTex database as provided on Moodle consists of a set of 167 RGB texture images of sizes (512, 512, 3). Download the ZIP file, unpack it and make yourself familiar with the directory structure.\n\na) After preprocessing the entire image set (converting to normalized grayscale matrices), divide the images into non overlapping tiles of sizes (64, 64) and create a centered data matrix X_c of size (p, N) from them, where P=64\\*64 and N=167\\*(512/64)\\*(512/64).\n\n\n```python\n\n```\n\nb) Compute the SVD of X_c and make sure the singular values are sorted in descending order.\n\n\n```python\n\n```\n\nc) Plot the fraction of energy contained in the projection error for the principal subspace dimensions 0 to p. How many principal vectors do you need to retain 80%, 90%, 95% or 99% of the original data energy?\n\n\n```python\n\n```\n\nd) Discuss: Can you imagine a scenario, where energy is a bad measure of useful information?\n\n\n```python\n\n```\n", "meta": {"hexsha": "0cf86713f443311e6980a06107baf12f561c033f", "size": 37081, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "information-retrieval-exercises/lab06.ipynb", "max_stars_repo_name": "achmart/inforet", "max_stars_repo_head_hexsha": "3596ff971207728a42b335e71608b0b96e241228", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "information-retrieval-exercises/lab06.ipynb", "max_issues_repo_name": "achmart/inforet", "max_issues_repo_head_hexsha": "3596ff971207728a42b335e71608b0b96e241228", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "information-retrieval-exercises/lab06.ipynb", "max_forks_repo_name": "achmart/inforet", "max_forks_repo_head_hexsha": "3596ff971207728a42b335e71608b0b96e241228", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 118.8493589744, "max_line_length": 16156, "alphanum_fraction": 0.8606564009, "converted": true, "num_tokens": 1340, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9314625145783431, "lm_q2_score": 0.890294230488237, "lm_q1q2_score": 0.8292757026451641}} {"text": "# Interpolation and Differentiation\n\nAll algorithms use the baryentric weights as described in the book \"Implementing Spectral Methods for PDEs\" by David Kopriva.\n\n- Interpolation can be performed either with `interpolate(dest, u, basis)` or via an \n `interpolation_matrix(dest, basis)`.\n- In a nodal basis, the derivative matrix is available as `basis.D`. 
Alternatively, `derivative_at(x, u, basis)`\n can be used.\n\n\n```julia\nusing Revise\nusing PolynomialBases\nusing LaTeXStrings, Plots\n\n# define nodal bases\np = 5 # polynomial degree\nbasis1 = LobattoLegendre(p)\nbasis2 = GaussLegendre(p)\n\n# the function that will be interpolated\nufunc(x) = sinpi(x); uprim(x) = \u03c0*cospi(x)\n#ufunc(x) = 1 / (1 + 25x^2); uprim(x) = -ufunc(x)^2*50x\n\nfor basis in (basis1, basis2)\n u = ufunc.(basis.nodes)\n\n xplot = range(-1, stop=1, length=500)\n uplot = interpolate(xplot, u, basis)\n\n fig1 = plot(xplot, ufunc.(xplot), label=\"u\", xguide=L\"x\", yguide=L\"u\")\n plot!(fig1, xplot, uplot, label=L\"\\mathrm{I}(u)\")\n\n fig2 = plot(xplot, uprim.(xplot), label=\"u'\", xguide=L\"x\", yguide=L\"u'\")\n plot!(fig2, xplot, interpolate(xplot, basis.D*u, basis), label=L\"\\mathrm{I}(u)'\")\n\n display(basis)\n display(plot(fig1, fig2))\nend\n```\n\n# Integration\n\nThe nodes and weights are from [FastGaussQuadrature.jl](https://github.com/ajt60gaibb/FastGaussQuadrature.jl).\n\n\n```julia\nusing Revise\nusing PolynomialBases\nusing LaTeXStrings, Plots; pyplot()\n\nufunc(x) = sinpi(x)^6\n\nfunction compute_error(p, basis_type)\n basis = basis_type(p)\n u = ufunc.(basis.nodes)\n abs(5/8 - integrate(u, basis))\nend\n\nps = 1:23\nscatter(ps, compute_error.(ps, LobattoLegendre), label=\"Lobatto\", xguide=L\"p\", yguide=\"Error\", yaxis=:log10)\nscatter!(ps, compute_error.(ps, GaussLegendre), label=\"Gauss\")\n```\n\n# Evaluation of Orthogonal Polynomials\n\n## Legendre Polynomials\n\nLegendre poylnomials $P_p$ are evaluated as `legendre(x, p)` using the three term recursion formula.\n\n\n```julia\nusing Revise\nusing PolynomialBases\nusing LaTeXStrings, Plots; pyplot()\n\nx = range(-1, stop=1, length=10^3)\nfig = plot(xguide=L\"x\")\nfor p in 0:5\n plot!(fig, x, legendre.(x, p), label=\"\\$ P_$p \\$\")\nend\nfig\n```\n\n## Gegenbauer Polynomials\n\nGegenbauer poylnomials $C_p^{(\\alpha)}$ are evaluated as `gegenbauer(x, p, \u03b1)` using the three term recursion formula.\n\n\n```julia\nusing Revise\nusing PolynomialBases\nusing LaTeXStrings, Plots; pyplot()\n\n\u03b1 = 0.5\nx = range(-1, stop=1, length=10^3)\nfig = plot(xguide=L\"x\")\nfor p in 0:5\n plot!(fig, x, gegenbauer.(x, p, \u03b1), label=\"\\$ C_$p^{($\u03b1)} \\$\")\nend\nfig\n```\n\n## Jacobi Polynomials\n\nJacobi poylnomials $P_p^{\\alpha,\\beta}$ are evaluated as `jacobi(x, p, \u03b1, \u03b2)` using the three term recursion formula.\n\n\n```julia\nusing Revise\nusing PolynomialBases\nusing LaTeXStrings, Plots; pyplot()\n\n\u03b1, \u03b2 = -0.5, -0.5\n#\u03b1, \u03b2 = 0, 0\nx = range(-1, stop=1, length=10^3)\nfig = plot(xguide=L\"x\")\nfor p in 0:5\n plot!(fig, x, jacobi.(x, p, \u03b1, \u03b2), label=\"\\$ P_$p^{$\u03b1, $\u03b2} \\$\")\nend\nfig\n```\n\n## Hermite Polynomials\n\nHermite poylnomials $H_p$ are evaluated as `hermite(x, p)` using the three term recursion formula.\n\n\n```julia\nusing Revise\nusing PolynomialBases\nusing LaTeXStrings, Plots; pyplot()\n\nx = range(-2, stop=2, length=10^3)\nfig = plot(xguide=L\"x\")\nfor p in 0:4\n plot!(fig, x, hermite.(x, p), label=\"\\$ H_$p \\$\")\nend\nfig\n```\n\n## Hahn Polynomials\n\n\n```julia\nusing Revise\nusing PolynomialBases\nusing LaTeXStrings, Plots; pyplot()\n\n\u03b1, \u03b2, N = -0.5, -0.5, 20\nx = range(0, stop=N, length=10^3)\nfig = plot(xguide=L\"x\")\nfor p in 0:5\n plot!(fig, x, hahn.(x, p, \u03b1, \u03b2, N), label=\"\\$ Q_$p(x; $\u03b1, $\u03b2, $N) \\$\")\nend\nfig\n```\n\n# Symbolic Computations\n\nSymbolic computations 
using [SymPy.jl](https://github.com/JuliaPy/SymPy.jl) and [SymEngine.jl](https://github.com/symengine/symengine) are supported.\n\n\n```julia\nusing Revise\nusing PolynomialBases\nimport SymPy\nimport SymEngine\n\nx_sympy = SymPy.symbols(\"x\")\nx_symengine = SymEngine.symbols(\"x\")\n\nlegendre(x_sympy, 6) |> SymPy.expand |> display\nlegendre(x_symengine, 6) |> SymEngine.expand |> display\n\nGaussLegendre(2, SymPy.Sym).D |> display\nGaussLegendre(2, SymEngine.Basic).D |> display\n```\n\n# Modal Matrices\n\nThe Vandermonde matrix $V$ can be computed as `legendre_vandermonde(basis)` and used to transform coeficients in a modal basis of Legendre polynomials to a nodal basis.\n\n\n```julia\nusing Revise\nusing PolynomialBases\nusing LinearAlgebra\n\np = 7\nbasis = GaussLegendre(p)\nV = legendre_vandermonde(basis)\n\n# the modal derivative matrix\nDhat = legendre_D(p)\n# they should be equal\nnorm( basis.D - V * Dhat / V)\n```\n\nSimilarly, the Vandermonde matrix with respect to the Jacobi polynomials is given as `jacobi_vandermonde(basis, \u03b1, \u03b2)`.\n\n\n```julia\nusing Revise\nusing PolynomialBases\n\np = 3\n\u03b1, \u03b2 = -0.5, -0.5\nbasis = GaussJacobi(p, \u03b1, \u03b2)\nV = jacobi_vandermonde(basis, \u03b1, \u03b2)\n```\n", "meta": {"hexsha": "a64a97b49a9b9005f76f93c9bda60dc86e9e4bec", "size": 8356, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/Tutorial.ipynb", "max_stars_repo_name": "ranocha/PolynomialBases.jl", "max_stars_repo_head_hexsha": "845531c8eee63f2c1608badd980872796fabbfc9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 8, "max_stars_repo_stars_event_min_datetime": "2019-09-11T18:12:07.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-13T21:08:56.000Z", "max_issues_repo_path": "notebooks/Tutorial.ipynb", "max_issues_repo_name": "ranocha/PolynomialBases.jl", "max_issues_repo_head_hexsha": "845531c8eee63f2c1608badd980872796fabbfc9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 17, "max_issues_repo_issues_event_min_datetime": "2018-02-15T06:32:55.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-11T15:59:07.000Z", "max_forks_repo_path": "notebooks/Tutorial.ipynb", "max_forks_repo_name": "ranocha/PolynomialBases.jl", "max_forks_repo_head_hexsha": "845531c8eee63f2c1608badd980872796fabbfc9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2018-02-26T18:34:02.000Z", "max_forks_repo_forks_event_max_datetime": "2020-02-08T11:01:42.000Z", "avg_line_length": 25.950310559, "max_line_length": 174, "alphanum_fraction": 0.5287218765, "converted": true, "num_tokens": 1605, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9314625069680097, "lm_q2_score": 0.8902942304882371, "lm_q1q2_score": 0.8292756958697284}} {"text": "## Exercise 10.1 (search)\n\nWe want to find the largest and smallest values in a long list of numbers. Implement\ntwo algorithms, based on:\n\n1. Iterating over the list entries; and \n1. First applying a built-in sort operation to the list.\n\nEncapsulate each algorithm in a function. 
To create lists of numbers for testing use, for example:\n```python\nx = np.random.rand(1000)\n```\n\n### Solution\n\nWe first create the list of random numbers\n\n\n```python\nimport numpy as np\nx = np.random.rand(1000)\n```\n\n#### Approach 1\n\n\n```python\ndef min_max1(x):\n # YOUR CODE HERE\n raise NotImplementedError()\n return x_min, x_max\n \nprint(min_max1(x))\n```\n\n#### Approach 2\n\n\n```python\ndef min_max2(x):\n # YOUR CODE HERE\n raise NotImplementedError()\n\nprint(min_max2(x))\n```\n\n\n```python\nassert min_max1(x) == min_max2(x)\n```\n\nIn practice, we would use the the NumPy function:\n\n\n```python\nprint(np.min(x), np.max(x))\n```\n\n## Exercise 10.2 (Newton's method for root finding)\n\n### Background\n\nNewton's method can be used to find a root $x$ of a function $f(x)$ such that\n$$\nf(x) = 0\n$$\nA Taylor series expansion of $f$ about $x_{i}$ reads:\n$$\nf(x_{i+1}) = f(x_{i}) + \\left. f^{\\prime} \\right|_{x_{i}} (x_{i+1} - x_{i}) + O((x_{i+1} - x_{i})^{2})\n$$\nIf we neglect the higher-order terms and set $f(x_{i+1})$ to zero, we have Newton's method:\n\\begin{align}\nx_{i + 1} &= - \\frac{f(x_{i})}{f^{\\prime}(x_{i})} + x_{i}\n\\\\\nx_{i} &\\leftarrow x_{i+1}\n\\end{align}\nIn Newton's method, the above is applied iteratively until $\\left|f(x_{i + 1})\\right|$ is below a tolerance value.\n\n### Task\n\nDevelop an implementation of Newton's method, with the following three functions in your implementation:\n```python\ndef newton(f, df, x0, tol, max_it):\n # Implement here\n \n return x1 # return root\n```\nwhere `x0` is the initial guess, `tol` is the stopping tolerance, `max_it` is the maximum number \nof iterations, and \n```python\ndef f(x):\n # Evaluate function at x and return value\n\n\ndef df(x):\n # Evaluate df/dx at x and return value\n\n```\n\nYour implementation should raise an exception if the maximum number of iterations (`max_it`)\nis exceeded.\n\nUse your program to find the roots of:\n$$\nf(x) = \\tan(x) - 2x\n$$\nbetween $-\\pi/2$ and $\\pi/2$. Plot $f(x)$ and $f^{\\prime}(x)$ on the same graph, \nand show the roots computed by Newton's method.\n\nNewton's method can be sensitive to the starting value. Make sure you find the root around $x = 1.2$. What happens if you start at $x = 0.9$? It may help to add a print statement in the iteration loop, showing $x$ and $f$ at each iteration.\n\n\n### Extension (optional)\n\nFor a complicated function we might not know how to compute the derivative, or it may be very complicated\nto evaluate. Write a function that computes the *numerical derivative* of $f(x)$ by evaluating \n$(f(x + dx) - f(x - dx)) / (2dx)$, where $dx$ is small. 
How should you choose $dx$?\n\n### Solution\n\nWe first implement a Newton solver function:\n\n\n```python\nimport numpy as np\n\ndef newton(f, df, x, tol=1e-8, max_it=20):\n \"\"\"Find root of equation defined by function f(x) where df(x) is\n first derivative and x is the initial guess.Optional arguments tol \n (tolerance) and max_it (maximum number of iterations)\"\"\"\n \n # YOUR CODE HERE\n raise NotImplementedError()\n```\n\nWe now provide implementations of `f` and `df`, and find the roots:\n\n\n```python\ndef f(x):\n # YOUR CODE HERE\n raise NotImplementedError()\n\ndef df(x):\n # YOUR CODE HERE\n raise NotImplementedError()\n```\n\n\n```python\n# YOUR CODE HERE\nraise NotImplementedError()\n```\n\nWe can visualise the result:\n\n\n```python\n%matplotlib inline\nimport matplotlib.pyplot as plt\n\n# Plot f and df/dx\nx = np.linspace(-1.5, 1.5, 100)\nplt.plot(x, f(x), label='$f(x)$')\nplt.plot(x, df(x), label=\"$f^{\\prime}(x)$\")\n\n# Add location of roots to plot\n# YOUR CODE HERE\nraise NotImplementedError()\n\nplt.show()\n```\n\nFor the extension, we can replace the function `df(x)` with a new version\n\n\n```python\ndef df(x):\n # Try changing dx to 1e-15 or smaller\n dx = 1e-9\n # YOUR CODE HERE\n raise NotImplementedError()\n```\n\n\n```python\n# Find roots near -1.2, 0.1, and 1.2\nxroots = np.array((newton(f, df, -1.2),\n newton(f, df, 0.1),\n newton(f, df, 1.2)))\nassert np.isclose(xroots, [-1.16556119e+00, 2.08575213e-10, 1.16556119e+00]).all()\n```\n\n\n```python\n# Plot f, f' and roots\n\n# YOUR CODE HERE\nraise NotImplementedError()\n```\n\nIn practice, we could use the Newton function `scipy.optimize.newton` from SciPy (http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.newton.html) rather than implementing our own function.\n\n## Exercise 10.3 (optional, low pass image filter)\n\nImages files can be loaded and displayed with Matplotlib. An imported image is stored as a \nthree-dimensional NumPy array of floats. The shape of the array is `[0:nx, 0:ny, 0:3]`. \nwhere `nx` is the number of pixels in the $x$-direction, `ny` is the number of pixels in the $y$-direction,\nand the third axis is for the colour component (RGB: red, green and blue) intensity. See http://matplotlib.org/users/image_tutorial.html for more background.\n\nBelow we fetch an image and display it:\n\n\n```python\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\n\n# Import image\nimg = mpimg.imread('https://raw.githubusercontent.com/matplotlib/matplotlib.github.com/master/_images/stinkbug.png')\n\n# Check type and shape\nprint(type(img))\nprint(\"Image array shape: {}\".format(img.shape))\n\n# Display image\nplt.imshow(img);\n```\n\nThe task is to write a *function* that applies a particular low-pass filter algorithm to an image array \nand returns the filtered image. With this particular filter, the value of a pixel in the filtered image \nis equal to the average value of the four neighbouring pixels in the original image. For the `[i, j, :]` pixel, \nthe neighbours are `[i, j+1, :]`, `[i, j-1, :]`, `[i+1, j, :]` and `[i-1, j, :]`. 
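One possible sketch of such a filter function is shown below. It is added here for illustration only and is not the notebook's official solution; in particular, how to treat the one-pixel border is a free choice, and here the border pixels are simply copied from the original image.

```python
import numpy as np

def low_pass_filter(A):
    # Each interior pixel of the output is the average of the four neighbouring
    # pixels of the original image; border pixels keep their original values.
    B = A.copy()
    B[1:-1, 1:-1, :] = 0.25 * (A[1:-1, 2:, :] + A[1:-1, :-2, :] +
                               A[2:, 1:-1, :] + A[:-2, 1:-1, :])
    return B

# Example use on the stinkbug image loaded above:
# img_filtered = low_pass_filter(img)
# plt.imshow(img_filtered)
```
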
\n\nRun the filter algorithm multiple times on the above image to explore the effect of the filter.\n\n*Hint*: To create a NumPy array of zeros, `B`, with the same shape as array `A`, use:\n```python\nimport numpy as np\nB = np.zeros_like(A)\n```\n\n\n```python\n# YOUR CODE HERE\nraise NotImplementedError()\n```\n", "meta": {"hexsha": "67328aaa748c65e4efad83f2d5915788c201d82b", "size": 13697, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Assignment/10 Exercises.ipynb", "max_stars_repo_name": "reddyprasade/PYTHON-BASIC-FOR-ALL", "max_stars_repo_head_hexsha": "4fa4bf850f065e9ac1cea0365b93257e1f04e2cb", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 21, "max_stars_repo_stars_event_min_datetime": "2019-06-28T05:11:17.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-16T02:02:28.000Z", "max_issues_repo_path": "Assignment/10 Exercises.ipynb", "max_issues_repo_name": "chandhukogila/Python-Basic-For-All-3.x", "max_issues_repo_head_hexsha": "f4105833759a271fa0777f3d6fb96db32bbfaaa4", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2021-12-28T14:15:58.000Z", "max_issues_repo_issues_event_max_datetime": "2021-12-28T14:16:02.000Z", "max_forks_repo_path": "Assignment/10 Exercises.ipynb", "max_forks_repo_name": "chandhukogila/Python-Basic-For-All-3.x", "max_forks_repo_head_hexsha": "f4105833759a271fa0777f3d6fb96db32bbfaaa4", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 18, "max_forks_repo_forks_event_min_datetime": "2019-07-07T03:20:33.000Z", "max_forks_repo_forks_event_max_datetime": "2021-05-08T10:44:18.000Z", "avg_line_length": 26.0895238095, "max_line_length": 249, "alphanum_fraction": 0.5446448127, "converted": true, "num_tokens": 1746, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.8902942144788076, "lm_q2_score": 0.9314625017359053, "lm_q1q2_score": 0.8292756762994328}} {"text": "```python\nimport loader\nfrom sympy import *\ninit_printing()\nfrom root.solver import *\n```\n\nFind the complementary and particular solution of the system of ode\n$$\n \\vec{\\mathbf{x'}} = \\begin{bmatrix}\n -2 & 1 \\\\\n 1 & -2 \\\\\n \\end{bmatrix} \\vec{\\mathbf{x}} + \\begin{bmatrix}\n 2 e^{-t} \\\\\n 3 t\n \\end{bmatrix}\n$$\nusing variation of parameters\n\n\n```python\nxc, p = system([\n [-2, 1],\n [1, -2]\n])\np.display()\n_, p = nonhomo_system_variation_of_parameters(xc, [2 * exp(-t), 3*t])\np.display()\n```\n\n\n$\\displaystyle \\text{Characteristic equation: }$\n\n\n\n$\\displaystyle \\left(\\lambda + 2\\right)^{2} - 1 = 0$\n\n\n\n$\\displaystyle \\text{Eigenvalues and eigenvectors}$\n\n\n\n$\\displaystyle \\lambda_1 = -3$\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 1 & 0\\\\1 & 1 & 0\\end{matrix}\\right]\\text{ ~ }\\left[\\begin{matrix}1 & 1 & 0\\\\0 & 0 & 0\\end{matrix}\\right]\\Rightarrow v = \\left[\\begin{matrix}-1\\\\1\\end{matrix}\\right]$\n\n\n\n$\\displaystyle \\lambda_2 = -1$\n\n\n\n$\\displaystyle \\left[\\begin{matrix}-1 & 1 & 0\\\\1 & -1 & 0\\end{matrix}\\right]\\text{ ~ }\\left[\\begin{matrix}1 & -1 & 0\\\\0 & 0 & 0\\end{matrix}\\right]\\Rightarrow v = \\left[\\begin{matrix}1\\\\1\\end{matrix}\\right]$\n\n\n\n$\\displaystyle \\text{General solution: }$\n\n\n\n$\\displaystyle \\vec{\\mathbf{x}} = C_{1}e^{- 3 t}\\left[\\begin{matrix}-1\\\\1\\end{matrix}\\right]+C_{2}e^{- t}\\left[\\begin{matrix}1\\\\1\\end{matrix}\\right]$\n\n\n\n$\\displaystyle \\text{Fundamental matrix }\\Psi$\n\n\n\n$\\displaystyle \\left[\\begin{matrix}- e^{- 3 t} & e^{- t}\\\\e^{- 3 t} & e^{- t}\\end{matrix}\\right]$\n\n\n\n$\\displaystyle \\text{Calculate the inverse of the fundamental matrix }\\Psi^{-1}$\n\n\n\n$\\displaystyle \\Psi^{-1} = \\left[\\begin{matrix}- \\frac{e^{3 t}}{2} & \\frac{e^{3 t}}{2}\\\\\\frac{e^{t}}{2} & \\frac{e^{t}}{2}\\end{matrix}\\right]$\n\n\n\n$\\displaystyle \\text{Compute }\\Psi^{-1} g(t)$\n\n\n\n$\\displaystyle \\Psi^{-1} g(t) = \\left[\\begin{matrix}\\frac{3 t e^{3 t}}{2} - e^{2 t}\\\\\\frac{3 t e^{t}}{2} + 1\\end{matrix}\\right]$\n\n\n\n$\\displaystyle \\text{Compute the integral}$\n\n\n\n$\\displaystyle \\int \\Psi^{-1} g(t) =\\left[\\begin{matrix}\\frac{t e^{3 t}}{2} - \\frac{e^{3 t}}{6} - \\frac{e^{2 t}}{2}\\\\\\frac{3 t e^{t}}{2} + t - \\frac{3 e^{t}}{2}\\end{matrix}\\right]$\n\n\n\n$\\displaystyle \\text{Finally, }\\vec{\\mathbf{x_p}} = \\Psi \\int \\Psi^{-1} g(t)$\n\n\n\n$\\displaystyle \\vec{\\mathbf{x_p}} =\\left[\\begin{matrix}t + t e^{- t} - \\frac{4}{3} + \\frac{e^{- t}}{2}\\\\2 t + t e^{- t} - \\frac{5}{3} - \\frac{e^{- t}}{2}\\end{matrix}\\right]$\n\n\nFind a particular solution of the system\n$$\n \\vec{\\mathbf{x'}} = \\begin{bmatrix}\n 10 & -5 \\\\\n 20 & -10 \\\\\n \\end{bmatrix} \\vec{\\mathbf{x}} + \\begin{bmatrix}\n t^{-3} \\\\\n -t^{-2}\n \\end{bmatrix}\n$$\nusing variation of parameters\n\n\n```python\nxc, p = system([\n [10, -5],\n [20, -10]\n])\np.display()\n_, p = nonhomo_system_variation_of_parameters(xc, [t**(-3),-t**(-2)])\np.display()\n```\n\n\n$\\displaystyle \\text{Characteristic equation: }$\n\n\n\n$\\displaystyle \\lambda^{2} = 0$\n\n\n\n$\\displaystyle \\text{Eigenvalues and eigenvectors}$\n\n\n\n$\\displaystyle \\lambda_1 = 0$\n\n\n\n$\\displaystyle \\left[\\begin{matrix}10 & -5 & 0\\\\20 & -10 & 0\\end{matrix}\\right]\\text{ ~ }\\left[\\begin{matrix}1 & - \\frac{1}{2} & 0\\\\0 & 0 & 
0\\end{matrix}\\right]\\Rightarrow v = \\left[\\begin{matrix}\\frac{1}{2}\\\\1\\end{matrix}\\right]$\n\n\n\n$\\displaystyle \\text{Find the generalized eigenvector}\\left( M - \\lambda I \\right) w = v $\n\n\n\n$\\displaystyle \\left[\\begin{matrix}10 & -5 & \\frac{1}{2}\\\\20 & -10 & 1\\end{matrix}\\right]\\text{ ~ }\\left[\\begin{matrix}1 & - \\frac{1}{2} & \\frac{1}{20}\\\\0 & 0 & 0\\end{matrix}\\right]\\Rightarrow w = \\left[\\begin{matrix}\\frac{1}{20}\\\\0\\end{matrix}\\right]$\n\n\n\n$\\displaystyle \\text{General solution: }$\n\n\n\n$\\displaystyle \\vec{\\mathbf{x}} = C_{1}1\\left[\\begin{matrix}\\frac{1}{2}\\\\1\\end{matrix}\\right]+C_{2}1\\left(\\left[\\begin{matrix}\\frac{1}{2}\\\\1\\end{matrix}\\right]t + \\left[\\begin{matrix}\\frac{1}{20}\\\\0\\end{matrix}\\right]\\right)$\n\n\n\n$\\displaystyle \\text{Fundamental matrix }\\Psi$\n\n\n\n$\\displaystyle \\left[\\begin{matrix}\\frac{1}{2} & \\frac{t}{2} + \\frac{1}{20}\\\\1 & t\\end{matrix}\\right]$\n\n\n\n$\\displaystyle \\text{Calculate the inverse of the fundamental matrix }\\Psi^{-1}$\n\n\n\n$\\displaystyle \\Psi^{-1} = \\left[\\begin{matrix}- 20 t & 10 t + 1\\\\20 & -10\\end{matrix}\\right]$\n\n\n\n$\\displaystyle \\text{Compute }\\Psi^{-1} g(t)$\n\n\n\n$\\displaystyle \\Psi^{-1} g(t) = \\left[\\begin{matrix}- \\frac{10}{t} - \\frac{21}{t^{2}}\\\\\\frac{10}{t^{2}} + \\frac{20}{t^{3}}\\end{matrix}\\right]$\n\n\n\n$\\displaystyle \\text{Compute the integral}$\n\n\n\n$\\displaystyle \\int \\Psi^{-1} g(t) =\\left[\\begin{matrix}- 10 \\ln{\\left(t \\right)} + \\frac{21}{t}\\\\- \\frac{10}{t} - \\frac{10}{t^{2}}\\end{matrix}\\right]$\n\n\n\n$\\displaystyle \\text{Finally, }\\vec{\\mathbf{x_p}} = \\Psi \\int \\Psi^{-1} g(t)$\n\n\n\n$\\displaystyle \\vec{\\mathbf{x_p}} =\\left[\\begin{matrix}- 5 \\ln{\\left(t \\right)} - 5 + \\frac{5}{t} - \\frac{1}{2 t^{2}}\\\\- 10 \\ln{\\left(t \\right)} - 10 + \\frac{11}{t}\\end{matrix}\\right]$\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "6f086d52057897734a8f43080e83af8ac9024e1b", "size": 14347, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/system-of-ode-variation-of-parameters.ipynb", "max_stars_repo_name": "kaiyingshan/ode-solver", "max_stars_repo_head_hexsha": "30c6798efe9c35a088b2c6043493470701641042", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2019-02-17T23:15:20.000Z", "max_stars_repo_stars_event_max_datetime": "2019-02-17T23:15:27.000Z", "max_issues_repo_path": "notebooks/system-of-ode-variation-of-parameters.ipynb", "max_issues_repo_name": "kaiyingshan/ode-solver", "max_issues_repo_head_hexsha": "30c6798efe9c35a088b2c6043493470701641042", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/system-of-ode-variation-of-parameters.ipynb", "max_forks_repo_name": "kaiyingshan/ode-solver", "max_forks_repo_head_hexsha": "30c6798efe9c35a088b2c6043493470701641042", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 25.1260945709, "max_line_length": 286, "alphanum_fraction": 0.4545201087, "converted": true, "num_tokens": 1819, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9504109798251321, "lm_q2_score": 0.8723473630627234, "lm_q1q2_score": 0.8290885120763133}} {"text": "# 8.2 Regularized linear softening law for stable calculation in finite element code\n\n\n```python\n%matplotlib inline\nimport sympy as sp\nimport numpy as np\nimport matplotlib.pyplot as plt\n```\n\nLet us assume that the fracture energy $G_\\mathrm{f}$ is known and we want to improve the above model to deliver stable, mesh-independent results. Knowing that the energy dissipation is only happening in the one softening element, we can require that it alwas dissipates the same amount of energy. Recall that the fracture energy dissipated by a unit area of a stress free crack is evaluated as \n\\begin{align}\nG_\\mathrm{f} = \\int_0^{\\infty} f(w) \\, \\mathrm{d}w\n\\end{align}\nwhere $w$ represents the crack opening.\nIn the studied case of linear softening with $f$ defined as\n\\begin{align}\nf(w) = f_\\mathrm{t}\\left( 1 - \\frac{1}{w_\\mathrm{f}} w \\right)\n\\end{align}\nand the corresponding fracture energy\n\\begin{align}\nG_\\mathrm{f} = \\int_0^{w_\\mathrm{f}} f(w) \\, \\mathrm{d}w = \\frac{1}{2} f_\\mathrm{t} w_\\mathrm{f}\n\\end{align}\nThis kind of model is working correctly and reproduces exactly the amount of fracture energy needed to produce the unit area of the stress free crack. \n\nHowever, in a finite element model, the crack is not represented as a discrete line but is represented by a strain within the softening element of the size $L_s = L / N$. Thus, the softening is not related to crack opening displacement (COD) but to a strain in the softening zone using a softening function $\\phi(\\varepsilon_\\mathrm{s})$. This model was used in the example above and delivered mesh-dependent results with varying amount of fracture energy.\n\nTo regularize the finite element model let us express the crack opening displacement as a product of the softening strain $\\varepsilon_\\mathrm{s}$ and the size of the softening zone $L_\\mathrm{s}$:\n\\begin{align}\nw = \\varepsilon_\\mathrm{s} L_\\mathrm{s}\n\\end{align}\nNow, the energy dissipated within the softening zone can be obtained as an integral over the history of the softening strain as\n\\begin{align}\nG_\\mathrm{f} = L_\\mathrm{s} \\int_0^{\\infty} \\phi(\\varepsilon_\\mathrm{s}) \\, \\mathrm{d}\\varepsilon_\\mathrm{s}\n\\end{align}\nComming back to the model with linear softening, the integral of the strain based softening function is expressed as\n\\begin{align}\n\\int_0^{\\infty} \\phi(\\varepsilon_\\mathrm{s}) \\, \\mathrm{d} \\varepsilon_\\mathrm{s} = \\frac{1}{2} \\varepsilon_\\mathrm{f} f_\\mathrm{t}\n\\end{align}\nso that\n\\begin{align}\nG_\\mathrm{f} = \\frac{1}{2} L_\\mathrm{s} \\varepsilon_\\mathrm{f}\nf_\\mathrm{t}\n\\implies\n\\varepsilon_\\mathrm{f} = \\frac{2}{L_\\mathrm{s}} \\frac{G_\\mathrm{f}}{ f_\\mathrm{t}}\n\\end{align}\n\n\n\n```python\nL = 10.0\nE = 20000.0\nf_t = 2.4\nG_f = 0.0125 \nn_E_list = [5,10,1000]\n# run a loop over the different discretizations\nfor n in n_E_list: # n: number of element\n L_s = L / n\n eps_f = 2 * G_f / f_t / L_s \n eps = np.array([0.0, f_t / E, eps_f / n])\n sig = np.array([0.0, f_t, 0.0])\n g_f = eps[-1] * sig[1] / 2\n print('Is the energy constant?', g_f)\n plt.plot(eps, sig, label='n=%i' % n)\n plt.legend(loc=1)\n\nplt.xlabel('strain')\nplt.ylabel('stress')\nplt.show()\n```\n\nConsider a tensile test with the dimensions of a cross section $100 \\times 100$ mm and length of $1000$ mm. 
The measured elongation of the beam during the test was as follows\nAssume the stiffness of $E = 28000$ MPa and strength $f_t = 3$ MPa. The length of the cohesive zone at which the crack localized was measured using the digital image correlation and set to $L_s = 0.05$ mm.\n\n## Task\n\nGiven the fracture energy, linear softening function, strength, E-modulus, bar length, manually calculate the force-displacement response of a tensile test. \n\n\n", "meta": {"hexsha": "5ceed4901b32d3a4af5f5bac88d29b6e3087986f", "size": 5792, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "tour6_energy/8_2_Regularized linear softening law for stable calculation in finite element code.ipynb", "max_stars_repo_name": "bmcs-group/bmcs_tutorial", "max_stars_repo_head_hexsha": "4e008e72839fad8820a6b663a20d3f188610525d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tour6_energy/8_2_Regularized linear softening law for stable calculation in finite element code.ipynb", "max_issues_repo_name": "bmcs-group/bmcs_tutorial", "max_issues_repo_head_hexsha": "4e008e72839fad8820a6b663a20d3f188610525d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tour6_energy/8_2_Regularized linear softening law for stable calculation in finite element code.ipynb", "max_forks_repo_name": "bmcs-group/bmcs_tutorial", "max_forks_repo_head_hexsha": "4e008e72839fad8820a6b663a20d3f188610525d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.103030303, "max_line_length": 468, "alphanum_fraction": 0.5980662983, "converted": true, "num_tokens": 1092, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9086179043564153, "lm_q2_score": 0.9124361598816667, "lm_q1q2_score": 0.8290558314506952}} {"text": "# Solvers\n\nBoilerplate to make the doctester work. \n\n\n```\nimport sys\nimport os\nsys.path.insert(1, os.path.join(os.path.pardir, \"ipython_doctester\"))\nfrom sympy import *\nfrom ipython_doctester import test\n# Work around a bug in IPython. This will disable the ability to paste things with >>>\ndef notransform(line): return line\nfrom IPython.core import inputsplitter\ninputsplitter.transform_classic_prompt = notransform\ninit_printing()\n```\n\nFor each exercise, fill in the function according to its docstring. Execute the cell to see if you did it right. 
\n\n\n```\na, b, c, d, x, y, z, t = symbols('a b c d x y z t')\nf, g, h = symbols('f g h', cls=Function)\n```\n\n## Algebraic Equations\n\nWrite a function that computes the [quadratic equation](http://en.wikipedia.org/wiki/Quadratic_equation).\n\n\n```\ndef quadratic():\n return ???\nquadratic()\n```\n\nWrite a function that computes the general solution to the cubic $x^3 + ax^2 + bx + c$.\n\n\n```\ndef cubic():\n return ???\ncubic()\n```\n\n## Differential Equations\n\nA population that grows without bound is modeled by the differential equation\n\n$$f'(t)=af(t)$$\n\nSolve this differential equation using SymPy.\n\n\n```\n\n```\n\nIf the population growth is bounded, it is modeled by \n\n$$f'(t) = f(t)(1 - f(t))$$\n\n\n```\n\n```\n", "meta": {"hexsha": "f1e5c0baf221ac427f3141658b5b8f4a24dd4276", "size": 4456, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "tutorial_exercises/Solvers.ipynb", "max_stars_repo_name": "certik/scipy-2013-tutorial", "max_stars_repo_head_hexsha": "26a1cab3a16402afdc20088cedf47acd9bc58483", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 23, "max_stars_repo_stars_event_min_datetime": "2015-02-28T08:53:05.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-05T05:37:59.000Z", "max_issues_repo_path": "sympy/Solvers.ipynb", "max_issues_repo_name": "certik/scipy-in-13", "max_issues_repo_head_hexsha": "418c139ab6e1b0c9acd53e7e1a02b8b930005096", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-04-17T15:05:46.000Z", "max_issues_repo_issues_event_max_datetime": "2021-04-17T15:05:46.000Z", "max_forks_repo_path": "sympy/Solvers.ipynb", "max_forks_repo_name": "certik/scipy-in-13", "max_forks_repo_head_hexsha": "418c139ab6e1b0c9acd53e7e1a02b8b930005096", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 14, "max_forks_repo_forks_event_min_datetime": "2015-03-11T00:25:21.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-25T14:52:40.000Z", "avg_line_length": 24.7555555556, "max_line_length": 259, "alphanum_fraction": 0.5008976661, "converted": true, "num_tokens": 344, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.934395157060208, "lm_q2_score": 0.8872045832787205, "lm_q1q2_score": 0.8289996659372565}} {"text": "# The Variational Autoencoder\n\nIn this notebook we are interested in the problem of inference in a probabilistic model that contains both observed and latent variables, which can be represented as the following graphical model:\n\n
    \n\nFor this model the joint probability distribution factorizes as\n\n$$\np(\\mathbf{x}, \\mathbf{z}\\vert\\boldsymbol{\\theta}) = p(\\mathbf{x}\\vert\\mathbf{z},\\boldsymbol{\\theta})p(\\mathbf{z}\\vert\\boldsymbol{\\theta})\n$$\n\nFor any distribution $q(\\mathbf{z}\\vert\\mathbf{x},\\boldsymbol{\\phi})$, we can write the marginal log-likelihood as the sum of two terms: the evidence lower bound $\\mathcal{L}(q, \\boldsymbol{\\theta},\\boldsymbol{\\phi})$ and the KL divergence between $q(\\mathbf{z}\\vert\\mathbf{x},\\boldsymbol{\\phi})$ and the posterior $p(\\mathbf{z}\\vert\\mathbf{x},\\boldsymbol{\\theta})$:\n\n$$\n\\log p(\\mathbf{x}\\vert\\boldsymbol{\\theta}) = \\mathcal{L}(q, \\boldsymbol{\\theta}, \\boldsymbol{\\phi}) + \\text{KL}(q(\\mathbf{z}\\vert\\mathbf{x},\\boldsymbol{\\phi})\\Vert p(\\mathbf{z\\vert\\mathbf{x}, \\boldsymbol{\\theta}}))\n$$\n\nOur goal is to find the values of $\\boldsymbol{\\theta}$ and $\\boldsymbol{\\phi}$ that maximize the marginal log-likelihood. In *variational inference*, we propose $q(\\mathbf{z}\\vert\\mathbf{x},\\boldsymbol{\\phi})$ as an approximation to the posterior, so that the KL divergence is minimized. The KL divergence is minimized when we maximize the lower bound, defined as\n\n$$\n\\begin{align}\n\\mathcal{L}(q, \\boldsymbol{\\theta}, \\boldsymbol{\\phi}) &= \\mathbb{E}_q[\\log p(\\mathbf{x},\\mathbf{z}\\vert\\boldsymbol{\\theta}) - \\log q(\\mathbf{z}\\vert\\mathbf{x},\\boldsymbol{\\phi})]\\\\\n&=\n\\mathbb{E}_q[\\log p(\\mathbf{x}\\vert\\mathbf{z},\\boldsymbol{\\theta})] - \\text{KL}(q(\\mathbf{z}\\vert\\mathbf{x},\\boldsymbol{\\phi})\\Vert p(\\mathbf{z}\\vert\\boldsymbol{\\theta}))\n\\end{align}\n$$\n\nWe can think of $q(\\mathbf{z}\\vert\\mathbf{x},\\boldsymbol{\\phi})$ as taking the variable $\\mathbf{x}$ and producing a distribution over the latent variable $\\mathbf{z}$. For this reason $q(\\mathbf{z}\\vert\\mathbf{x},\\boldsymbol{\\phi})$ is also known as the **encoder**: the latent variable $\\mathbf{z}$ acts as a code for the observation $\\mathbf{x}$. The parameters $\\boldsymbol{\\phi}$ are known as the **variational parameters**, because they correspond to the distribution $q$ that we want to use as an approximation to the true posterior.\n\nSimilarly, we can see that our model for $p(\\mathbf{x}\\vert\\mathbf{z},\\boldsymbol{\\theta})$ does the opposite: given a latent representation, a distribution over the observation is produced. Therefore $p(\\mathbf{x}\\vert\\mathbf{z},\\boldsymbol{\\theta})$ is also known as the **decoder**, which takes the code $\\mathbf{z}$ and reconstructs the observation $\\mathbf{x}$. The parameters $\\boldsymbol{\\theta}$ are known as the **generative parameters**.\n\nTo maximize the lower bound, we can obtain its gradient with respect to the parameters and then update them in that direction:\n\n$$\n\\nabla_{\\boldsymbol{\\theta},\\boldsymbol{\\phi}} \\mathcal{L}(q, \\boldsymbol{\\theta}, \\boldsymbol{\\phi}) = \\nabla_{\\boldsymbol{\\theta},\\boldsymbol{\\phi}}\n\\left[\n\\mathbb{E}_q[\\log p(\\mathbf{x}\\vert\\mathbf{z},\\boldsymbol{\\theta})] - \\text{KL}(q(\\mathbf{z}\\vert\\mathbf{x},\\boldsymbol{\\phi})\\Vert p(\\mathbf{z}\\vert\\boldsymbol{\\theta}))\n\\right]\n$$\n\nFor some cases, the KL divergence can be calculated analytically, as well as its gradient with respect to both the generative and variational parameters. The expectation term can be approximated with a *Monte Carlo estimate*, by taking samples and averaging the result. 
However, how do we calculate the derivative with respect to $\\boldsymbol{\\phi}$ of a sampling operation from a distribution whose parameter is $\\boldsymbol{\\phi}$ itself?\n\n## The reparameterization trick\n\nInstead of using $q(\\mathbf{z}\\vert\\mathbf{x},\\boldsymbol{\\phi})$ to obtain samples of $\\mathbf{z}$, we will introduce an auxiliary random variable $\\boldsymbol{\\epsilon}$ with a corresponding, known distribution $p(\\boldsymbol{\\epsilon})$. We obtain a sample $\\boldsymbol{\\epsilon}$ from this distribution, and then we let $\\mathbf{z}$ be a deterministic, differentiable function of it:\n\n$$\n\\mathbf{z} = g(\\mathbf{x},\\boldsymbol{\\epsilon},\\boldsymbol{\\phi})\n$$\n\nGiven an appropriate choice of $p(\\boldsymbol{\\epsilon})$ and $g$, $\\mathbf{z}$ would be as if we had sampled from $q(\\mathbf{z}\\vert\\mathbf{x},\\boldsymbol{\\phi})$, which is what we wanted. The difference now is that the sample $\\mathbf{z}$ was obtained from a differentiable function, and now we can obtain the gradient with respect to $\\boldsymbol{\\phi}$! We can take $L$ samples to obtain the Monte Carlo estimate of the expectation and then differentiate:\n\n$$\n\\nabla_{\\boldsymbol{\\theta},\\boldsymbol{\\phi}} \\mathcal{L}(q, \\boldsymbol{\\theta}, \\boldsymbol{\\phi}) \\approx \\nabla_{\\boldsymbol{\\theta},\\boldsymbol{\\phi}}\\frac{1}{L}\\sum_{i=1}^L \\log p(\\mathbf{x}\\vert\\mathbf{z}^{(i)},\\boldsymbol{\\theta})\n$$\n\nwith $\\mathbf{z}^{(i)}$ is obtained for each sample of $\\boldsymbol{\\epsilon}$.\n\n## The algorithm\n\nWe now have an algorithm to optimize the lower bound, known as **Autoencoding Variational Bayes** [1]:\n\n- Take an observation $\\mathbf{x}$\n- Take $L$ samples of $\\boldsymbol{\\epsilon}\\sim p(\\boldsymbol{\\epsilon})$ and let $\\mathbf{z}^{(i)}=g(\\mathbf{x},\\boldsymbol{\\epsilon}^{(i)},\\boldsymbol{\\phi})$\n- Calculate $\\mathbf{g} = \\nabla_{\\boldsymbol{\\theta},\\boldsymbol{\\phi}}\\lbrace\\frac{1}{L}\\sum_{i=1}^L \\log p(\\mathbf{x}\\vert\\mathbf{z}^{(i)},\\boldsymbol{\\theta}) - \\text{KL}(q(\\mathbf{z}\\vert\\mathbf{x},\\boldsymbol{\\phi})\\Vert p(\\mathbf{z}\\vert\\boldsymbol{\\theta}))\\rbrace$\n- Update $\\boldsymbol{\\theta}$ and $\\boldsymbol{\\phi}$ using $\\mathbf{g}$ and an optimizer like Stochastic Gradient Descent or Adam.\n\n## A practical example\n\nAs in the EM example, we will now define a generative model for the MNIST digits dataset. This time, however, we will assume the latent variables to be continuous instead of discrete, so that $\\mathbf{z}\\in\\mathbb{R}^K$ where $K$ is a hyperparameter that indicates the dimension of the latent space. We choose the prior distribution as a Gaussian with zero mean and unit covariance,\n\n$$\np(\\mathbf{z}) = \\mathcal{N}(\\mathbf{z}\\vert\\mathbf{0}, \\mathbf{I})\n$$\n\nGiven an observation $\\mathbf{x}$, the approximation of the posterior $p(\\mathbf{z}\\vert\\mathbf{x},\\boldsymbol{\\theta})$ (the encoder) will be a Gaussian distribution with diagonal covariance:\n\n$$\nq(\\mathbf{z}\\vert\\mathbf{x}, \\boldsymbol{\\phi}) = \\mathcal{N}(\\mathbf{z}\\vert\\boldsymbol{\\mu}_e,\\text{diag}(\\boldsymbol{\\sigma}_e))\n$$\n\nwith\n\n$$\n\\begin{align}\n\\boldsymbol{\\mu}_e &= f_{\\phi_\\mu}(\\mathbf{x})\\\\\n\\log(\\boldsymbol{\\sigma}_e^2) &= f_{\\phi_\\sigma}(\\mathbf{x})\n\\end{align}\n$$\n\nwhere $e$ in the subscripts refer to the *encoder*, and $f_{\\phi_\\mu}$ and $f_{\\phi_\\sigma}$ are neural networks with weights $\\boldsymbol{\\phi}_\\mu$ and $\\boldsymbol{\\phi}_\\sigma$, respectively. 
These parameters form the parameters of the encoder: $\\boldsymbol{\\phi} = \\lbrace\\boldsymbol{\\phi}_\\mu,\\boldsymbol{\\phi}_\\sigma\\rbrace$. The reparameterization trick for this case is\n\n$$\n\\begin{align}\n\\boldsymbol{\\epsilon}&\\sim\\mathcal{N}(\\mathbf{0}, \\mathbf{I})\\\\\n\\mathbf{z} &= \\boldsymbol{\\mu}_e + \\boldsymbol{\\sigma}_e\\odot\\boldsymbol{\\epsilon}\n\\end{align}\n$$\n\nwhere $\\odot$ denotes element-wise multiplication.\n\nFor the prior and approximate posterior that we have defined, the KL divergence is\n\n$$\n\\text{KL}(q(\\mathbf{z}\\vert\\mathbf{x},\\boldsymbol{\\phi})\\Vert p(\\mathbf{z}\\vert\\boldsymbol{\\theta})) = \\frac{1}{2}\\sum_{i=1}^K(\\sigma_{ei}^2 + \\mu_{ei}^2 - \\log(\\sigma_{ei}^2) - 1)\n$$\n\nSince the pixels in the images are binary, we model an observation $\\mathbf{x}\\in\\mathbb{R}^D$, given the latent variable $\\mathbf{z}$, as a multivariate Bernoulli random variable with mean $\\boldsymbol{\\mu}_d$. This corresponds to the *decoder*:\n\n$$\np(\\mathbf{x}\\vert\\mathbf{z},\\boldsymbol{\\theta}) = \\prod_{i=1}^D \\mu_{di}^{x_i}(1-\\mu_{di})^{(1-x_i)}\n$$\n\nwith\n\n$$\n\\boldsymbol{\\mu}_d = f_\\theta(\\mathbf{z})\n$$\n\nwhere $f_\\theta$ is a neural network with weights $\\boldsymbol{\\theta}$. Note that since the output of the decoder models a distribution over a multivariate Bernoulli, we must ensure that its values lie within 0 and 1. We do this with a sigmoid layer at the output.\n\nGiven this definition of the decoder, we have\n\n$$\n\\log p(\\mathbf{x}\\vert\\mathbf{z},\\boldsymbol{\\theta}) = \\sum_{i=1}^D x_i\\log\\mu_{di} + (1-x_i)\\log(1-\\mu_{di})\n$$\n\nwhich is the negative binary cross-entropy loss. We now have all the ingredients to implement and train the autoencoder, for which we will use PyTorch.\n\n\n```python\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.distributions.multivariate_normal import MultivariateNormal\n\nclass BernoulliVAE(nn.Module):\n def __init__(self, input_dim, latent_dim, enc_units, dec_units):\n super(BernoulliVAE, self).__init__()\n # Encoder parameters\n self.linear_enc = nn.Linear(input_dim, enc_units)\n self.enc_mu = nn.Linear(enc_units, latent_dim)\n self.enc_logvar = nn.Linear(enc_units, latent_dim)\n\n # Distribution to sample for the reparameterization trick\n self.normal_dist = MultivariateNormal(torch.zeros(latent_dim),\n torch.eye(latent_dim))\n\n # Decoder parameters\n self.linear_dec = nn.Linear(latent_dim, dec_units)\n self.dec_mu = nn.Linear(dec_units, input_dim)\n\n # Reconstruction loss: binary cross-entropy\n self.criterion = nn.BCELoss(reduction='sum')\n\n def encode(self, x):\n # Obtain the parameters of the latent variable distribution\n h = torch.relu(self.linear_enc(x))\n mu_e = self.enc_mu(h)\n logvar_e = self.enc_logvar(h)\n\n # Get a latent variable sample with the reparameterization trick\n epsilon = self.normal_dist.sample((x.shape[0],))\n z = mu_e + torch.sqrt(torch.exp(logvar_e)) * epsilon\n\n return z, mu_e, logvar_e\n\n def decode(self, z):\n # Obtain the parameters of the observation distribution\n h = torch.relu(self.linear_dec(z))\n mu_d = torch.sigmoid(self.dec_mu(h))\n\n return mu_d\n\n def forward(self, x):\n \"\"\" Calculate the negative lower bound for the given input \"\"\"\n z, mu_e, logvar_e = self.encode(x)\n mu_d = self.decode(z)\n neg_cross_entropy = self.criterion(mu_d, x)\n kl_div = -0.5* (1 + logvar_e - mu_e**2 - torch.exp(logvar_e)).sum()\n\n # Since the optimizer minimizes, we return the negative\n # of the 
lower bound that we need to maximize\n return neg_cross_entropy, kl_div\n```\n\nWe can now load the data, create a model and train it. We will choose 10 for the dimension of the latent space, and 300 units in the hidden layer for both the encoder and the decoder.\n\n\n```python\nfrom torchvision import datasets, transforms\nfrom torch.utils.data import DataLoader\n\ninput_dim = 28 * 28\nbatch_size = 128\nepochs = 5\n\ndataset = datasets.MNIST('data/', transform=transforms.ToTensor(), download=True)\nloader = DataLoader(dataset, batch_size, shuffle=True)\nmodel = BernoulliVAE(input_dim, latent_dim=10, enc_units=200, dec_units=200)\noptimizer = torch.optim.Adam(model.parameters(), lr=0.01)\n\nlog = '\\r[{:d}/{:d}] NCE: {:.6f} KL: {:.6f} Total: {:.6f}'\nfor epoch in range(1, epochs + 1):\n print(f'Epoch {epoch}')\n avg_loss = 0\n for i, (data, _) in enumerate(loader):\n model.zero_grad()\n # Reshape data so each image is an array with 784 elements\n data = data.view(-1, input_dim)\n \n nce, kl_div = model(data)\n loss = nce + kl_div\n loss.backward()\n optimizer.step()\n \n avg_loss += loss.item()/len(dataset)\n \n if i % 100 == 0:\n # Print average loss per sample in batch\n print(log.format(i + 1, len(loader),\n nce.item(), kl_div.item(), loss.item()),\n end='')\n \n print(f'\\nAverage loss: {avg_loss:.6f}'.format(avg_loss)) \n```\n\n Epoch 1\n [401/469] NCE: 14406.266602 KL: 1945.858276 Total: 16352.125000\n Average loss: 147.292228\n Epoch 2\n [401/469] NCE: 12620.369141 KL: 2087.734131 Total: 14708.103516\n Average loss: 123.565269\n Epoch 3\n [401/469] NCE: 13314.766602 KL: 2143.032471 Total: 15457.798828\n Average loss: 120.036630\n Epoch 4\n [401/469] NCE: 12593.125000 KL: 2143.898438 Total: 14737.023438\n Average loss: 117.831159\n Epoch 5\n [401/469] NCE: 13047.822266 KL: 2218.240967 Total: 15266.063477\n Average loss: 116.889384\n\n\nLet's now use the autoencoder to take observations and reconstruct them from their latent representation.\n\n\n```python\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nn_samples = 10\nfig = plt.figure(figsize=(14, 3))\nfig.suptitle('Observations (top row) and their reconstructions (bottom row)')\nfor i in range(n_samples):\n # Take a sample and view as mini-batch of size 1\n x = dataset[i][0].view(-1, input_dim)\n # Encode the observation\n z, mu_e, logvar_e = model.encode(x)\n # Get reconstruction\n x_d = model.decode(z)\n \n plt.subplot(2, n_samples, i + 1)\n plt.imshow(x.view(28, 28).data.numpy(), cmap='binary')\n plt.axis('off')\n plt.subplot(2, n_samples, i + 1 + n_samples)\n plt.imshow(x_d.view(28, 28).data.numpy(), cmap='binary')\n plt.axis('off')\n```\n\nWe can see that the model effectively creates a representation of the latent variable $\\mathbf{z}$ that contains enough information to reconstruct a digit very similar to the original observation.\n\nThe encoded representation acts as a low dimensional representation of the observation. The digit images have 784 pixels in total, with each pixel having values between 0 and 1. Therefore, the images lie in a certain region of $\\mathbb{R}^{784}$. Their encoded representation, on the other hand, lies in a region of $\\mathbb{R}^{10}$, which hopefully encodes a compact and meaningful representation of a digit. 
We can observed this with visualization techniques, such as t-SNE:\n\n\n```python\nfrom sklearn.manifold import TSNE\nimport numpy as np\n\n# Select 1000 random samples\nsample_idx = np.random.randint(0, len(dataset), size=1000)\nX = torch.cat([dataset[i][0].view(-1, input_dim) for i in sample_idx])\nlabels = dataset.train_labels[sample_idx].data.numpy()\n\nZ, _, _ = model.encode(X)\nZ_vis = TSNE(n_components=2).fit_transform(Z.data.numpy())\n\nplt.scatter(Z_vis[:, 0], Z_vis[:, 1], c=labels, cmap=plt.cm.get_cmap(\"jet\", 10))\nplt.colorbar();\n```\n\nAs we expected, numbers of the same class cluster together in some regions of the space. This is possible thanks to the latent space discovered by the autoencoder. \n\nThe variational autoencoder is a powerful model for unsupervised learning that can be used in many applications like visualization, machine learning models that work on top of the compact latent representation, and inference in models with latent variables as the one we have explored. A particular example of this last application is reflected in the Bayesian Skip-gram [2], which I plan to explore in the near future.\n\n### References\n\n[1] Kingma, Diederik P., and Max Welling. \"Auto-encoding variational bayes.\" arXiv preprint arXiv:1312.6114 (2013).\n\n[2] Bra\u017einskas, Arthur, Serhii Havrylov, and Ivan Titov. \"Embedding Words as Distributions with a Bayesian Skip-gram Model.\" arXiv preprint arXiv:1711.11027 (2017).\n\n### Some final notes\n\n1. In the Expectation Maximization algorithm, we get around the problem of inference in models with latent variables by calculating the posterior and simply setting $q(\\mathbf{z}\\vert\\boldsymbol{\\phi}) = p(\\mathbf{z}\\vert\\mathbf{x},\\boldsymbol{\\theta})$, which effectively makes the KL divergence equal to zero. However, for some models calculating the posterior is not possible. Furthermore, the EM algorithm calculates updates using the complete dataset, which might not scale up well when we have millions of data points. The VAE addresses these issues by proposing an approximation to the posterior, and optimizing the parameters of the approximation with stochastic gradient descent.\n\n2. Recall the expression for the evidence lower bound:\n $$\n \\begin{align}\n \\mathcal{L}(q, \\boldsymbol{\\theta}, \\boldsymbol{\\phi}) &= \\mathbb{E}_q[\\log p(\\mathbf{x},\\mathbf{z}\\vert\\boldsymbol{\\theta}) - \\log q(\\mathbf{z}\\vert\\mathbf{x},\\boldsymbol{\\phi})]\\\\\n &=\n \\mathbb{E}_q[\\log p(\\mathbf{x}\\vert\\mathbf{z},\\boldsymbol{\\theta})] - \\text{KL}(q(\\mathbf{z}\\vert\\mathbf{x},\\boldsymbol{\\phi})\\Vert p(\\mathbf{z}\\vert\\boldsymbol{\\theta}))\n \\end{align}\n $$\n\n This last formulation reveals what optimizing the lower bound does. Maximizing the first term finds parameters that given the latent variable $\\mathbf{z}$, assign high probability to the observation $\\mathbf{x}$. We can think of this as a negative reconstruction error, that is, a reconstruction from the latent variable $\\mathbf{z}$ to the observation $\\mathbf{x}$. Maximizing the second term (including the minus sign) minimizes the KL divergence between $q(\\mathbf{z}\\vert\\mathbf{x},\\boldsymbol{\\phi})$ and the prior $p(\\mathbf{z}\\vert\\boldsymbol{\\theta})$, thus acting as a regularizer that enforces a prior structure that we have specified. 
Briefly stated, we then have\n\n $$\n \\text{lower bound} = -\\text{reconstruction error} - \\text{regularization penalty}\n $$\n\n Therefore, values of $\\boldsymbol{\\theta}$ and $\\boldsymbol{\\phi}$ that maximize the lower bound will produce a low reconstruction error and a model that takes into account prior information.\n", "meta": {"hexsha": "83d0f3925a602652762f7a06d681057b0e50f4b8", "size": 103012, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "01-vae/01-vae.ipynb", "max_stars_repo_name": "dfdazac/studious-rotary-phone", "max_stars_repo_head_hexsha": "0cdabfd02901a433b43a11f39f2d3e5b53f7d4be", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2019-02-26T23:15:35.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-10T09:01:24.000Z", "max_issues_repo_path": "01-vae/01-vae.ipynb", "max_issues_repo_name": "dfdazac/studious-rotary-phone", "max_issues_repo_head_hexsha": "0cdabfd02901a433b43a11f39f2d3e5b53f7d4be", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "01-vae/01-vae.ipynb", "max_forks_repo_name": "dfdazac/studious-rotary-phone", "max_forks_repo_head_hexsha": "0cdabfd02901a433b43a11f39f2d3e5b53f7d4be", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-09-08T19:29:30.000Z", "max_forks_repo_forks_event_max_datetime": "2020-09-08T19:29:30.000Z", "avg_line_length": 242.3811764706, "max_line_length": 55756, "alphanum_fraction": 0.886285093, "converted": true, "num_tokens": 4929, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9643214455334864, "lm_q2_score": 0.8596637541053281, "lm_q1q2_score": 0.8289921940315936}} {"text": "### Suppose we have `n` features and `m` observations\n\n| Index | $X_{1}$ | $X_{2}$ | $X_{3}$ | .... | .... | $X_{n}$ | y |\n|--------------|---------------|---------------|---------------|------|------|----------------|----------|\n| 1 | $x_{1}^{1} $ | $x_{2}^{1}$ | $x_{3}^{1}$ | ... | ... | $x_{n}^{1}$ | $y^{1}$ |\n| 2 | $x_{1}^{2}$ | $x_{2}^{2}$ | $x_{3}^{2}$ | ... | ... | $x_{n}^{2}$ | $y^{2}$ |\n| 3 | $x_{1}^{3}$ | $x_{2}^{3}$ | $x_{3}^{3}$ | ... | ... | $x_{n}^{3}$ | $y^{3}$ |\n| . | . | . | . | ... | ... | . | |\n| . | . | . | . | ... | ... | . | |\n| . | . | . | . | ... | ... | . | |\n| m | $x_{1}^{m}$ | $x_{2}^{m}$ | $x_{3}^{m}$ | ... | ... | $x_{n}^{m}$ | $y^{m}$ |\n\nHere `subscript` denotes feature and `superscript` denote observation number\n\nSuppose the weights of matrix for n features is denoted by column vectors of shape [1, n]\n
    \n$$ \\beta = \\begin{bmatrix} \\beta_{1} & \\beta_{2} & \\beta_{3} & .... & \\beta_{n} \\end{bmatrix} $$\n\n### we need to calculate prediction for each observation\n\n\\begin{equation}\n\\hat{y^1} = \\beta_{0} + \\beta_{1}x_{1}^{1} + \\beta_{2}x_{2}^{1} + \\beta_{3}x_{3}^{1} + \\beta_{3}x_{3}^{1} + .... + \\beta_{n}x_{n}^{1}\n\\end{equation}\n\n\\begin{equation}\n\\hat{y^2} = \\beta_{0} + \\beta_{1}x_{1}^{2} + \\beta_{2}x_{2}^{2} + \\beta_{3}x_{3}^{2} + \\beta_{3}x_{3}^{2} + .... + \\beta_{n}x_{n}^{2}\n\\end{equation}\n\n\n\\begin{equation}\n\\hat{y^3} = \\beta_{0} +\\beta_{1}x_{1}^{3} + \\beta_{2}x_{2}^{3} + \\beta_{3}x_{3}^{3} + \\beta_{3}x_{3}^{3} + .... + \\beta_{n}x_{n}^{3}\n\\end{equation}\n\n\\begin{equation}\n ..................\n\\end{equation}\n\n\\begin{equation}\n ..................\n\\end{equation}\n\n\\begin{equation}\n\\hat{y^m} = \\beta_{0} +\\beta_{1}x_{1}^{m} + \\beta_{2}x_{2}^{m} + \\beta_{3}x_{3}^{m} + \\beta_{3}x_{3}^{m} + .... + \\beta_{n}x_{n}^{m}\n\\end{equation}\n\n### In matrix form \n\n\\begin{equation}\n\\begin{bmatrix} \\hat{y}^{1} \\\\ \\hat{y}^{2} \\\\ \\hat{y}^{3} \\\\ .. \\\\.. \\\\ \\hat{y}^{m} \\end{bmatrix} = \n\\begin{bmatrix} x_{1}^{1} & x_{2}^{1} & x_{3}^{1} & .... & x_{n}^{1}\n \\\\ x_{1}^{2} & x_{2}^{2} & x_{3}^{2} & .... & x_{n}^{2}\n \\\\ x_{1}^{3} & x_{2}^{3} & x_{3}^{3} & .... & x_{n}^{3}\n \\\\.... &.... & ... &.... &....\n \\\\.... &.... & ... &.... &....\n \\\\ x_{1}^{m} & x_{2}^{m} & x_{3}^{m} & .... & x_{n}^{m}\n\\end{bmatrix}\n*\n\\begin{bmatrix} \\beta_{1} \\\\ \\beta_{2} \\\\ \\beta_{3} \\\\ .. \\\\.. \\\\ \\beta_{n} \\end{bmatrix}\n+ \\beta_{0}\n\\end{equation}\n\n\\begin{equation} \\hat{y} = X.\\beta^{T} + \\beta_{0} \\end{equation}\n\n#### Equivalent numpy implementation is:\n\n `y_hat = np.dot(X, B.T) + b` \nWhere \n $$ B = \\beta $$\n $$ b = \\beta_{0}$$\n\n### The Mean Squared Error cost function is:\n\n$$ J(\\beta, \\beta_{0}) = \\frac{1}{2m}\\sum_{n=1}^{m} (y^{(i)} - \\hat{y}^{(i)})^{2} $$\n\n#### Equivalent numpy implementation is:\n\n`cost = (1/(2*m))*np.sum((y-y_hat)**2)`\n\n### Just for confirmation let us take an example\n\n| y | $\\hat{y}$ | (y - $\\hat{y}$)^2 |\n|---|---------|-----------------|\n| 2 | 1 | 1 |\n| 4 | 2 | 4 |\n| 6 | 3 | 9 |\n| | | 14 |\n\n\\begin{equation} 14/3 = 4.67 \\end{equation}\n\n\n```python\nimport numpy as np\n\ny=np.array([[2],\n [4],\n [6]])\ny_hat=np.array([[1],\n [2],\n [3]])\n```\n\n\n```python\nnp.sum((y-y_hat)**2)/3\n```\n\n\n\n\n 4.666666666666667\n\n\n\n### Now lets calculate the gradient descent\n\n### We have the loss function defined as\n\n$$ J(\\beta, \\beta_{0}) = \\frac{1}{2m}\\sum_{n=1}^{m} (y - \\hat{y})^{2} $$\n\nwhere: $$ \\hat{y} = \\beta_{0} + X*\\beta^{T} $$\n\nSo:\n$$ \\frac {\\partial J(\\beta, \\beta_{0})}{\\partial \\beta_{0}} = \n\\frac{-1}{m}\\sum_{n=1}^{m}(y-\\hat{y}) $$\n\nAgain: $$ J(\\beta, \\beta_{0}) = \\frac{1}{2m}\\sum_{n=1}^{m} (y - \\hat{y})^{2} $$\n\n$$ \\frac {\\partial J(\\beta, \\beta_{0})}{\\partial \\beta} = \\frac{-1}{m}(y - \\hat{y}) * \\frac{\\partial (\\beta_{0} + X*\\beta^{T})}{\\partial \\beta} $$\n\n$$ \\frac {\\partial J(\\beta, \\beta_{0})}{\\partial \\beta} = \n\\frac{-1}{m}(y-\\hat{y})*X * \\frac{\\partial \\beta^{T}}{\\partial \\beta} $$\n\nwhere $$ \\beta = \\begin{bmatrix} \\beta_{1} & \\beta_{2} & \\beta_{3} &.... & \\beta_{n} \\end{bmatrix}$$ \n\nand\n\n$$ \\beta^{T} = \\begin{bmatrix} \\beta_{1} \\\\ \\beta_{2} \\\\ \\beta_{3} \\\\.... 
\\\\ \\beta_{n} \\end{bmatrix}$$ \n\n\nSince: $$ \\frac{\\partial \\beta^{T}}{\\partial \\beta} = I $$\n\n### Just a side note for calculation of $$ \\frac{\\partial \\beta^{T}}{\\partial \\beta} $$\nwhere $$ \\beta = \\begin{bmatrix} \\beta_{1} & \\beta_{2} & \\beta_{3} &.... & \\beta_{n} \\end{bmatrix}$$ \n\nand\n\n$$ \\beta^{T} = \\begin{bmatrix} \\beta_{1} \\\\ \\beta_{2} \\\\ \\beta_{3} \\\\.... \\\\ \\beta_{n} \\end{bmatrix}$$ \n\n$$ \\frac{\\partial \\beta^{T}}{\\partial \\beta} = \\begin{bmatrix} \\frac{\\partial \\beta^{T}}{\\partial \\beta_{1}} & \\frac{\\partial \\beta^{T}}{\\partial \\beta_{2}} & \\frac{\\partial \\beta^{T}}{\\partial \\beta_{3}} & .... & \\frac{\\partial \\beta^{T}}{\\partial \\beta_{n}}\\end{bmatrix} $$\n\n\n\n\\begin{equation} \n\\frac{\\partial \\beta^{T}}{\\partial \\beta} = \n\\begin{bmatrix} \n\\frac{\\partial \\beta_{1}}{\\partial \\beta_{1}} & \\frac{\\partial \\beta_{1}}{\\partial \\beta_{2}} & \\frac{\\partial \\beta_{1}}{\\partial \\beta_{3}} & ..&..& \\frac{\\partial \\beta_{1}}{\\partial \\beta_{n}} \n\\\\ \n\\frac{\\partial \\beta_{2}}{\\partial \\beta_{1}} & \\frac{\\partial \\beta_{2}}{\\partial \\beta_{2}} & \\frac{\\partial \\beta_{2}}{\\partial \\beta_{3}} &..&..& \\frac{\\partial \\beta_{2}}{\\partial \\beta_{n}} \n\\\\ \n\\frac{\\partial \\beta_{2}}{\\partial \\beta_{1}} & \\frac{\\partial \\beta_{3}}{\\partial \\beta_{2}} & \\frac{\\partial \\beta_{3}}{\\partial \\beta_{3}} &..&..& \\frac{\\partial \\beta_{3}}{\\partial \\beta_{n}} \n\\\\ \n.. & .. & .. & .. & .. &..\n\\\\\n.. & .. & .. & .. & .. &..\n\\\\\n\\frac{\\partial \\beta_{n}}{\\partial \\beta_{1}} & \\frac{\\partial \\beta_{n}}{\\partial \\beta_{2}} & \\frac{\\partial \\beta_{n}}{\\partial \\beta_{3}} & ..&..& \\frac{\\partial \\beta_{n}}{\\partial \\beta_{n}} \n\\end{bmatrix}\n\\end{equation}\n\n\n\\begin{equation} \n\\frac{\\partial \\beta^{T}}{\\partial \\beta} = \n\\begin{bmatrix} \n1 & 0 & 0 & ..&..& 0 \n\\\\ \n0 & 1 & 0 &..&..& 0 \n\\\\ \n0 & 0 & 1 &..&..& 0 \n\\\\ \n.. & .. & .. & .. & .. &..\n\\\\\n.. & .. & .. & .. & .. &..\n\\\\\n0 & 0 & 0 & ..&..& 1 \n\\end{bmatrix} \n\\end{equation}\n\n\ni.e $$ \\frac{\\partial \\beta^{T}}{\\partial \\beta} = I_{n*n} $$\n\nSince: $$ X_{m*n} * I_{n*n} = X $$\n\nSo: \n $$ \\frac {\\partial J(\\beta, \\beta_{0})}{\\partial \\beta} = \n\\frac{-1}{m}(y-\\hat{y})*X $$\n\n### Shape of $$ (y-\\hat{y}) $$ `m rows and 1 cols [m, 1]`\n\n### Shape of `X is [m, n] ie m observations and n features`\n\n### Required shape of `dB is [1, n] `\n\n### [1, m] * [m, n] == [1, n]\n\n## Equivalent Numpy Implementation is:\n\n `dB = (-1/m)* np.dot((y-y_hat).T, X)`\n
    \n `db = (-1/m)*np.sum(y-y_hat)`\n\n\n```python\ndef propagate(B, b, X, Y):\n \"\"\"\n params:\n B: weights of size [1, X.shape[1]]\n b: bias\n X: matrix of observations and features size [X.shape[0], X.shape[1]]\n Y: matrix of actual observation size [Y.shape[0], 1]\n \n returns:\n grads: dict of gradients, dB of shape same as B and db of shape [1, 1].\n cost: MSE cost\n \"\"\"\n \n ## m is no of observations ie rows of X\n m = X.shape[0]\n \n #Calculate hypothesis\n y_hat = np.dot(X, B.T) + b\n \n y = Y.values.reshape(Y.shape[0],1)\n \n #Compute Cost\n cost = (1/(2*m))*np.sum((y-y_hat)**2)\n \n # BACKWARD PROPAGATION (TO FIND GRAD)\n dB = (-1/m)* np.dot((y-y_hat).T, X)\n \n db = -np.sum(y-y_hat)/m\n \n grads = {\"dB\": dB,\n \"db\": db}\n \n return grads, cost\n```\n\n\n```python\ndef optimize(B, b, X, Y, num_iterations, learning_rate):\n \"\"\"\n params:\n B: weights of size [1, X.shape[1]]\n b: bias\n X: matrix of observations and features size [X.shape[0], X.shape[1]]\n Y: matrix of actual observation size [Y.shape[0], 1]\n num_iterations: number of iterations\n learning_rate: learning rate\n returns:\n params: parameters B of shape [1, X.shape[1]] and bias\n grads: dict of gradients, dB of shape same as B and db\n costs: MSE cost \n \"\"\"\n costs = []\n \n for i in range(num_iterations):\n \n \n # Cost and gradient calculation call function propagate\n grads, cost = propagate(B,b,X,Y)\n \n # Retrieve derivatives from grads\n dB = grads[\"dB\"]\n db = grads[\"db\"]\n \n # update parameters\n B = B - learning_rate * dB\n b = b - learning_rate * db\n \n costs.append(cost)\n \n params = {\"B\": B,\n \"b\": b}\n \n grads = {\"dB\": dB,\n \"db\": db}\n \n return params, grads, costs\n```\n\n\n```python\ndef predict(B, b, X):\n \"\"\":param\n B: weights\n b: bias\n X: matrix of observations and features\n \"\"\"\n # Compute predictions for X\n Y_prediction = np.dot(X, B.T) + b\n return Y_prediction\n```\n\n\n```python\ndef model(X_train, Y_train, X_test, Y_test, num_iterations = 2000, learning_rate = 0.5):\n \"\"\"\n params: \n X_train: X_train\n Y_train: Y_train\n X_test: X_test\n Y_test: Y_test\n \n returns:\n d: dictionary\n \"\"\"\n \n \n # initialize parameters with zeros \n B = np.zeros(shape=(1, X_train.shape[1]))\n b = 0\n \n # Gradient descent\n parameters, grads, costs = optimize(B, b, X_train, Y_train, num_iterations, learning_rate)\n \n # Retrieve parameters w and b from dictionary \"parameters\"\n B = parameters[\"B\"]\n b = parameters[\"b\"]\n \n # Predict test/train set examples\n Y_prediction_test = predict(B, b, X_test)\n Y_prediction_train = predict(B, b, X_train)\n \n Y_train = Y_train.values.reshape(Y_train.shape[0], 1)\n Y_test = Y_test.values.reshape(Y_test.shape[0], 1)\n\n # Print train/test Errors\n print(\"train accuracy: {} %\".format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100))\n print(\"test accuracy: {} %\".format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100))\n\n \n d = {\"costs\": costs,\n \"Y_prediction_test\": Y_prediction_test, \n \"Y_prediction_train\" : Y_prediction_train, \n \"B\" : B, \n \"b\" : b,\n \"learning_rate\" : learning_rate,\n \"num_iterations\": num_iterations}\n \n return d\n```\n\n\n```python\nimport pandas as pd\ndf = pd.read_csv('../datasets/USA_Housing.csv')\n```\n\n\n```python\ndf\n```\n\n\n\n\n
| | Avg. Area Income | Avg. Area House Age | Avg. Area Number of Rooms | Avg. Area Number of Bedrooms | Area Population | Price | Address |
|---|---|---|---|---|---|---|---|
| 0 | 79545.458574 | 5.682861 | 7.009188 | 4.09 | 23086.800503 | 1.059034e+06 | 208 Michael Ferry Apt. 674\\nLaurabury, NE 3701... |
| 1 | 79248.642455 | 6.002900 | 6.730821 | 3.09 | 40173.072174 | 1.505891e+06 | 188 Johnson Views Suite 079\\nLake Kathleen, CA... |
| 2 | 61287.067179 | 5.865890 | 8.512727 | 5.13 | 36882.159400 | 1.058988e+06 | 9127 Elizabeth Stravenue\\nDanieltown, WI 06482... |
| 3 | 63345.240046 | 7.188236 | 5.586729 | 3.26 | 34310.242831 | 1.260617e+06 | USS Barnett\\nFPO AP 44820 |
| 4 | 59982.197226 | 5.040555 | 7.839388 | 4.23 | 26354.109472 | 6.309435e+05 | USNS Raymond\\nFPO AE 09386 |
| ... | ... | ... | ... | ... | ... | ... | ... |
| 4995 | 60567.944140 | 7.830362 | 6.137356 | 3.46 | 22837.361035 | 1.060194e+06 | USNS Williams\\nFPO AP 30153-7653 |
| 4996 | 78491.275435 | 6.999135 | 6.576763 | 4.02 | 25616.115489 | 1.482618e+06 | PSC 9258, Box 8489\\nAPO AA 42991-3352 |
| 4997 | 63390.686886 | 7.250591 | 4.805081 | 2.13 | 33266.145490 | 1.030730e+06 | 4215 Tracy Garden Suite 076\\nJoshualand, VA 01... |
| 4998 | 68001.331235 | 5.534388 | 7.130144 | 5.44 | 42625.620156 | 1.198657e+06 | USS Wallace\\nFPO AE 73316 |
| 4999 | 65510.581804 | 5.992305 | 6.792336 | 4.07 | 46501.283803 | 1.298950e+06 | 37778 George Ridges Apt. 509\\nEast Holly, NV 2... |

    5000 rows \u00d7 7 columns

    \n\n\n\n\n```python\ndf.drop(['Address'],axis=1,inplace=True)\n```\n\n\n```python\ndf.head()\n```\n\n\n\n\n
| | Avg. Area Income | Avg. Area House Age | Avg. Area Number of Rooms | Avg. Area Number of Bedrooms | Area Population | Price |
|---|---|---|---|---|---|---|
| 0 | 79545.458574 | 5.682861 | 7.009188 | 4.09 | 23086.800503 | 1.059034e+06 |
| 1 | 79248.642455 | 6.002900 | 6.730821 | 3.09 | 40173.072174 | 1.505891e+06 |
| 2 | 61287.067179 | 5.865890 | 8.512727 | 5.13 | 36882.159400 | 1.058988e+06 |
| 3 | 63345.240046 | 7.188236 | 5.586729 | 3.26 | 34310.242831 | 1.260617e+06 |
| 4 | 59982.197226 | 5.040555 | 7.839388 | 4.23 | 26354.109472 | 6.309435e+05 |
    \n\n\n\n\n```python\ndf.info()\n```\n\n \n RangeIndex: 5000 entries, 0 to 4999\n Data columns (total 6 columns):\n Avg. Area Income 5000 non-null float64\n Avg. Area House Age 5000 non-null float64\n Avg. Area Number of Rooms 5000 non-null float64\n Avg. Area Number of Bedrooms 5000 non-null float64\n Area Population 5000 non-null float64\n Price 5000 non-null float64\n dtypes: float64(6)\n memory usage: 234.5 KB\n\n\n\n```python\n#normalization of column values\ndf_norm = (df - df.mean()) / (df.max() - df.min())\n\n# Putting feature variable to X\nX = df_norm[['Avg. Area Income','Avg. Area House Age','Avg. Area Number of Rooms','Avg. Area Number of Bedrooms','Area Population']]\n\n# Putting response variable to y\ny = df_norm['Price']\n\n#random_state is the seed used by the random number generator, it can be any integer.\nfrom sklearn.model_selection import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.7 ,test_size = 0.3, random_state=2)\n\n\n```\n\n\n```python\ndf_norm.head()\n```\n\n\n\n\n
| | Avg. Area Income | Avg. Area House Age | Avg. Area Number of Rooms | Avg. Area Number of Bedrooms | Area Population | Price |
|---|---|---|---|---|---|---|
| 0 | 0.121932 | -0.042817 | 0.002844 | 0.024149 | -0.188292 | -0.070538 |
| 1 | 0.118631 | 0.003735 | -0.034156 | -0.198073 | 0.057734 | 0.111620 |
| 2 | -0.081153 | -0.016194 | 0.202692 | 0.255260 | 0.010348 | -0.070557 |
| 3 | -0.058260 | 0.176153 | -0.186228 | -0.160296 | -0.026685 | 0.011636 |
| 4 | -0.095667 | -0.136247 | 0.113193 | 0.055260 | -0.141246 | -0.245046 |
    \n\n\n\n\n```python\nX_train.shape\n```\n\n\n\n\n (3500, 5)\n\n\n\n\n```python\nX_train.head()\n```\n\n\n\n\n
| | Avg. Area Income | Avg. Area House Age | Avg. Area Number of Rooms | Avg. Area Number of Bedrooms | Area Population |
|---|---|---|---|---|---|
| 2416 | 0.129642 | -0.143456 | 0.003923 | -0.169184 | -0.027249 |
| 2417 | -0.094771 | -0.263002 | 0.052597 | -0.164740 | 0.132247 |
| 2513 | -0.019134 | 0.219268 | -0.100217 | -0.109184 | -0.300045 |
| 1698 | -0.033811 | -0.295470 | 0.058020 | 0.533038 | -0.104025 |
| 3322 | 0.030541 | -0.015484 | -0.129778 | -0.329184 | -0.135710 |
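Before fitting the model, it is worth sanity-checking the analytic gradients derived above against a finite-difference approximation. The snippet below is only a sketch: it assumes the `propagate` function, `X_train`, and `y_train` defined earlier, and compares the bias gradient plus one weight gradient on a handful of training rows.

```python
# Finite-difference check of the gradients returned by propagate()
Xs, ys = X_train.iloc[:5], y_train.iloc[:5]
B0 = np.zeros((1, Xs.shape[1]))
b0, eps = 0.1, 1e-6

grads, _ = propagate(B0, b0, Xs, ys)

# Numerical dJ/db via central differences
_, c_plus = propagate(B0, b0 + eps, Xs, ys)
_, c_minus = propagate(B0, b0 - eps, Xs, ys)
print(grads["db"], (c_plus - c_minus) / (2 * eps))       # the two values should agree

# Numerical dJ/dB for the first weight
B_plus, B_minus = B0.copy(), B0.copy()
B_plus[0, 0] += eps
B_minus[0, 0] -= eps
_, c_plus = propagate(B_plus, b0, Xs, ys)
_, c_minus = propagate(B_minus, b0, Xs, ys)
print(grads["dB"][0, 0], (c_plus - c_minus) / (2 * eps))  # the two values should agree
```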
    \n\n\n\n\n```python\ny_train.head()\n```\n\n\n\n\n 2416 -0.041265\n 2417 -0.182448\n 2513 -0.082988\n 1698 -0.196576\n 3322 -0.069286\n Name: Price, dtype: float64\n\n\n\n\n```python\ny_test.shape[0]\n```\n\n\n\n\n 1500\n\n\n\n\n```python\nmodel1 = model(X_train=X_train, Y_train=y_train, X_test=X_test, Y_test=y_test, num_iterations = 1000, learning_rate = 0.001)\n\nmodel2 = model(X_train=X_train, Y_train=y_train, X_test=X_test, Y_test=y_test, num_iterations = 1000, learning_rate = 0.01)\n\nmodel3 = model(X_train=X_train, Y_train=y_train, X_test=X_test, Y_test=y_test, num_iterations = 1000, learning_rate = 0.1)\n\nmodel4 = model(X_train=X_train, Y_train=y_train, X_test=X_test, Y_test=y_test, num_iterations = 1000, learning_rate = 0.3)\n```\n\n train accuracy: 88.62860037424024 %\n test accuracy: 88.8357850464578 %\n train accuracy: 90.16271542913721 %\n test accuracy: 90.33382459987604 %\n train accuracy: 95.97241738688837 %\n test accuracy: 96.00886188507282 %\n train accuracy: 96.6522103927996 %\n test accuracy: 96.73232357323806 %\n\n\n\n```python\nimport matplotlib.pyplot as plt\nplt.plot([i for i in range(1000)], model1['costs'])\nplt.plot([i for i in range(1000)], model2['costs'])\nplt.plot([i for i in range(1000)], model3['costs'])\nplt.plot([i for i in range(1000)], model4['costs'])\n\nplt.gca().legend(('alpha 0.001','alpha 0.01', 'alpha 0.1', 'alpha 0.3'))\nplt.show()\n```\n\n\n```python\n# model4 = model(X_train=X_train, Y_train=y_train, X_test=X_test, Y_test=y_test, num_iterations = 10, learning_rate = 0.01)\n\n# model5 = model(X_train=X_train, Y_train=y_train, X_test=X_test, Y_test=y_test, num_iterations = 100, learning_rate = 0.01)\n\n# model6 = model(X_train=X_train, Y_train=y_train, X_test=X_test, Y_test=y_test, num_iterations = 1000, learning_rate = 0.01)\n```\n", "meta": {"hexsha": "c07d77e74bcc483bb8b6170e68fd0e7519daef90", "size": 66555, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/Multivariate-Linear-Regression.ipynb", "max_stars_repo_name": "silencedsre/Multivariate-Linear-Regression", "max_stars_repo_head_hexsha": "9c17dd88d077ee58d3213d6b786dab1cf01b9325", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notebooks/Multivariate-Linear-Regression.ipynb", "max_issues_repo_name": "silencedsre/Multivariate-Linear-Regression", "max_issues_repo_head_hexsha": "9c17dd88d077ee58d3213d6b786dab1cf01b9325", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/Multivariate-Linear-Regression.ipynb", "max_forks_repo_name": "silencedsre/Multivariate-Linear-Regression", "max_forks_repo_head_hexsha": "9c17dd88d077ee58d3213d6b786dab1cf01b9325", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 47.9502881844, "max_line_length": 23764, "alphanum_fraction": 0.598647735, "converted": true, "num_tokens": 7834, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.964321450147636, "lm_q2_score": 0.8596637487122111, "lm_q1q2_score": 0.8289921927975124}} {"text": "# Coordinate systems\n\n## Introduction\n\nThis notebooks provides examples of curvilinear coordinates. 
Particularly,\nit presents the coordinate surfaces for three common coordinate systems.\n\n### Use of PyVista in this notebook\n\nWe use `PyVista` to represent the coordinate surfaces and `itkwidgets`\nto render them interactively in the notebook.\n\nIf installing things manually, the following would be required:\n\n\n    conda install -c conda-forge pyvista\n    conda install -c conda-forge itkwidgets\n\n\n**More information:** https://docs.pyvista.org/examples/index.html\n\n\n```python\nimport numpy as np\nimport pyvista as pv\nfrom itkwidgets import view\n```\n\n\n```python\nred = (0.9, 0.1, 0.1)\nblue = (0.2, 0.5, 0.7)\ngreen = (0.3, 0.7, 0.3)\n```\n\n## Cylindrical coordinates\n\n\nThe [ISO standard 80000-2](https://en.wikipedia.org/wiki/ISO_80000-2) recommends the use of $(\u03c1, \u03c6, z)$, where $\u03c1$ is the radial coordinate, $\\varphi$ the azimuth, and $z$ the height.\n\nFor the conversion between cylindrical and Cartesian coordinates, it is convenient to assume that the reference plane of the former is the Cartesian $xy$-plane (with equation $z=0$), and that the cylindrical axis is the Cartesian $z$-axis. Then the $z$-coordinate is the same in both systems, and the correspondence between cylindrical $(\\rho, \\varphi)$ and Cartesian $(x, y)$ is the same as for polar coordinates, namely\n\n\\begin{align}\nx &= \\rho \\cos \\varphi \\\\\ny &= \\rho \\sin \\varphi\n\\end{align}\n\nin one direction, and\n\n\\begin{align}\n \\rho &= \\sqrt{x^2+y^2} \\\\\n \\varphi &= \\begin{cases}\n 0 & \\mbox{if } x = 0 \\mbox{ and } y = 0\\\\\n \\arcsin\\left(\\frac{y}{\\rho}\\right) & \\mbox{if } x \\geq 0 \\\\\n \\arctan\\left(\\frac{y}{x}\\right) & \\mbox{if } x > 0 \\\\\n -\\arcsin\\left(\\frac{y}{\\rho}\\right) + \\pi & \\mbox{if } x < 0\n \\end{cases}\n\\end{align}\n\nin the other. 
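As a quick numerical sanity check of the two conversions above, they can be wrapped in a pair of small helper functions. This is only a minimal sketch for points off the $z$-axis; the helper names `cyl2cart` and `cart2cyl` are illustrative (they are not part of PyVista), and only the `numpy` import from the cells above is assumed.

```python
def cyl2cart(rho, phi, z):
    # Cylindrical (rho, phi, z) -> Cartesian (x, y, z)
    return rho * np.cos(phi), rho * np.sin(phi), z

def cart2cyl(x, y, z):
    # Cartesian (x, y, z) -> cylindrical, using the piecewise arcsin definition above
    rho = np.sqrt(x**2 + y**2)
    phi = np.where(x >= 0, np.arcsin(y / rho), np.pi - np.arcsin(y / rho))
    return rho, phi, z

# Round trip: we should recover the original (rho, phi, z)
rho, phi, z = 2.0, 2 * np.pi / 3, 1.5
print(np.allclose(cart2cyl(*cyl2cart(rho, phi, z)), (rho, phi, z)))  # True
```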
It is also common to use [$\\varphi = \\mathrm{atan2}(y, x)$](https://en.wikipedia.org/wiki/Atan2).\n\n\n```python\n# Cylinder\nphi, z = np.mgrid[0:2*np.pi:31j, -2:2*np.pi:31j]\nx = 2*np.cos(phi)\ny = 2*np.sin(phi)\ncylinder = pv.StructuredGrid(x, y, z)\n```\n\n\n```python\n# Vertical plane\nrho, z = np.mgrid[0:3:31j, -2:2*np.pi:31j]\nx = rho*np.cos(np.pi/4)\ny = rho*np.sin(np.pi/4)\nvert_plane = pv.StructuredGrid(x, y, z)\n```\n\n\n```python\n# Horizontal plane\nrho, phi = np.mgrid[0:3:31j, 0:2*np.pi:31j]\nx = rho*np.cos(phi)\ny = rho*np.sin(phi)\nz = np.ones_like(x)\nhor_plane = pv.StructuredGrid(x, y, z)\n```\n\n\n```python\nview(geometries=[cylinder, vert_plane, hor_plane],\n geometry_colors=[blue, red, green])\n```\n\n\n Viewer(geometries=[{'vtkClass': 'vtkPolyData', 'points': {'vtkClass': 'vtkPoints', 'name': '_points', 'numberO\u2026\n\n\n## Spherical coordinates\n\nThe use of $(r, \u03b8, \u03c6)$ to denote radial distance, inclination (or elevation), and azimuth, respectively, is common practice in physics, and is specified by [ISO standard 80000-2](https://en.wikipedia.org/wiki/ISO_80000-2).\n\n\nThe spherical coordinates of a point can be obtained from its [Cartesian coordinate system](https://en.wikipedia.org/wiki/ISO_80000-2) $(x, y, z)$\n\n\\begin{align}\nr&=\\sqrt{x^2 + y^2 + z^2} \\\\\n\\theta &= \\arccos\\frac{z}{\\sqrt{x^2 + y^2 + z^2}} = \\arccos\\frac{z}{r} \\\\\n\\varphi &= \\arctan \\frac{y}{x}\n\\end{align}\n\nThe inverse tangent denoted in $\\varphi = \\arctan\\left(\\frac{y}{x}\\right)$ must be suitably defined , taking into account the correct quadrant of $(x, y)$ (using the function ``atan2``).\n\nConversely, the Cartesian coordinates may be retrieved from the spherical coordinates (_radius_ $r$, _inclination_ $\\theta$, _azimuth_ $\\varphi$), where $r \\in [0, \\infty)$, $\\theta \\in [0, \\pi]$ and $\\varphi \\in [0, 2\\pi)$, by:\n\n\\begin{align}\nx&=r \\, \\sin\\theta \\, \\cos\\varphi \\\\\ny&=r \\, \\sin\\theta \\, \\sin\\varphi \\\\\nz&=r \\, \\cos\\theta\n\\end{align}\n\n\n```python\ntheta, phi = np.mgrid[0:np.pi:21j, 0:np.pi:21j]\n```\n\n\n```python\n# Sphere\nx = np.sin(phi) * np.cos(theta)\ny = np.sin(phi) * np.sin(theta)\nz = np.cos(phi)\nsphere = pv.StructuredGrid(x, y, z)\nsphere2 = pv.StructuredGrid(-x, -y, z)\n```\n\n\n```python\n# Cone\nx = theta/3 * np.cos(phi)\ny = theta/3 * np.sin(phi)\nz = theta/3\ncone = pv.StructuredGrid(x, y, z)\ncone2 = pv.StructuredGrid(-x, -y, z)\n```\n\n\n```python\n# Plane\nx = theta/np.pi\ny = theta/np.pi\nz = phi - np.pi/2\nplane = pv.StructuredGrid(x, y, z)\n```\n\n\n```python\nview(geometries=[sphere, sphere2, cone, cone2, plane],\n geometry_colors=[blue, blue, red, red, green],\n geometry_opacities=[1.0, 0.5, 1.0, 0.5, 1.0])\n```\n\n\n Viewer(geometries=[{'vtkClass': 'vtkPolyData', 'points': {'vtkClass': 'vtkPoints', 'name': '_points', 'numberO\u2026\n\n\n## Ellipsoidal coordinates\n\n\n```python\nv, theta = np.mgrid[0:np.pi/2:21j, 0:np.pi:21j]\na = 3\nb = 2\nc = 1\n```\n\n\n```python\n# Ellipsoid\nlam = 3\nx = np.sqrt(a**2 + lam) * np.cos(v) * np.cos(theta)\ny = np.sqrt(b**2 + lam)* np.cos(v) * np.sin(theta)\nz = np.sqrt(c**2 + lam) * np.sin(v)\nellipsoid = pv.StructuredGrid(x, y, z)\nellipsoid2 = pv.StructuredGrid(x, y, -z)\nellipsoid3 = pv.StructuredGrid(x, -y, z)\nellipsoid4 = pv.StructuredGrid(x, -y, -z)\n```\n\n\n```python\n# Hyperboloid of one sheet\nmu = 2\nx = np.sqrt(a**2 + mu) * np.cosh(v) * np.cos(theta)\ny = np.sqrt(b**2 + mu) * np.cosh(v) * np.sin(theta)\nz = np.sqrt(c**2 + mu) * np.sinh(v)\nz = 
np.sqrt(c**2 + mu) * np.sinh(v)\nhyper = pv.StructuredGrid(x, y, z)\nhyper2 = pv.StructuredGrid(x, y, -z)\nhyper3 = pv.StructuredGrid(x, -y, z)\nhyper4 = pv.StructuredGrid(x, -y, -z)\n```\n\n\n```python\n# Hyperboloid of two sheets\nnu = 1\nx = np.sqrt(a**2 + nu) * np.cosh(v)\ny = np.sqrt(c**2 + nu) * np.sinh(v) * np.sin(theta)\nz = np.sqrt(b**2 + nu) * np.sinh(v) * np.cos(theta)\nhyper_up = pv.StructuredGrid(x, y, z)\nhyper_down = pv.StructuredGrid(-x, y, z)\nhyper_up2 = pv.StructuredGrid(x, -y, z)\nhyper_down2 = pv.StructuredGrid(-x, -y, z)\n```\n\n\n```python\nview(geometries=[ellipsoid, ellipsoid2, ellipsoid3, ellipsoid4,\n hyper, hyper2, hyper3, hyper4,\n hyper_up, hyper_down, hyper_up2, hyper_down2],\n geometry_colors=[blue]*4 + [red]*4 + [green]*4)\n```\n\n\n Viewer(geometries=[{'vtkClass': 'vtkPolyData', 'points': {'vtkClass': 'vtkPoints', 'name': '_points', 'numberO\u2026\n\n\n## References\n\n1. Wikipedia contributors. [\"Cylindrical coordinate system.\"](https://en.wikipedia.org/wiki/Cylindrical_coordinate_system) Wikipedia, The Free Encyclopedia. Wikipedia, The Free Encyclopedia, 12 Dec. 2016. Web. 9 Feb. 2017.\n\n2. Wikipedia contributors. [\"Spherical coordinate system.\"](https://en.wikipedia.org/wiki/Spherical_coordinate_system) Wikipedia, The Free Encyclopedia. Wikipedia, The Free Encyclopedia, 29 Jan. 2017. Web. 9 Feb. 2017.\n\n3. Sullivan et al., (2019). [PyVista: 3D plotting and mesh analysis through a streamlined interface for the Visualization Toolkit (VTK)](https://joss.theoj.org/papers/10.21105/joss.01450). Journal of Open Source Software, 4(37), 1450, https://doi.org/10.21105/joss.01450\n\n\n```python\nfrom IPython.core.display import HTML\ndef css_styling():\n styles = open('./styles/custom_barba.css', 'r').read()\n return HTML(styles)\ncss_styling()\n```\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "763580f8371381e7ea3712e7f1fcd11a318cddf7", "size": 16259, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/vector_calculus-pyvista.ipynb", "max_stars_repo_name": "nicoguaro/AdvancedMath", "max_stars_repo_head_hexsha": "2749068de442f67b89d3f57827367193ce61a09c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 26, "max_stars_repo_stars_event_min_datetime": "2017-06-29T17:45:20.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-06T20:14:29.000Z", "max_issues_repo_path": "notebooks/vector_calculus-pyvista.ipynb", "max_issues_repo_name": "nicoguaro/AdvancedMath", "max_issues_repo_head_hexsha": "2749068de442f67b89d3f57827367193ce61a09c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/vector_calculus-pyvista.ipynb", "max_forks_repo_name": "nicoguaro/AdvancedMath", "max_forks_repo_head_hexsha": "2749068de442f67b89d3f57827367193ce61a09c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 13, "max_forks_repo_forks_event_min_datetime": "2019-04-22T08:08:56.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-27T08:15:53.000Z", "avg_line_length": 28.6754850088, "max_line_length": 427, "alphanum_fraction": 0.5204502122, "converted": true, "num_tokens": 2845, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9136765210631689, "lm_q2_score": 0.9073122207340544, "lm_q1q2_score": 0.8289898733583888}} {"text": "Solution to: [Day 2: More Dice](https://www.hackerrank.com/challenges/s10-mcq-2/problem)\n\n

**Table of Contents**
    \n\n- Table of Contents\n- Sample question\n- Math Solution\n- Monte Carlo Solution\n - Imports\n - Probability of unique dice rolls at target\n - Main\n\n\n```javascript\n%%javascript\n$.getScript('https://kmahelona.github.io/ipython_notebook_goodies/ipython_notebook_toc.js')\n```\n\n\n \n\n\n# Sample question \nFind the probability of getting 1 head and 1 tail when 2 fair coins are tossed.\n\n\nGiven the above question, we can extract the following:\n- Experiment: tossing 2 coins.\n- Sample space (S): The possible outcomes for the toss of 1 coin are {H, T}. \n \nAs our experiment tosses 2 coins, we have to consider all possible toss outcomes by finding the Cartesian Product of the possible outcomes for each coin: \n\n\\begin{equation} \n\\normalsize\nP(S) = \\{H, T\\} * \\{H * T\\}\n\\end{equation}\n\n\\begin{equation}\n\\normalsize\nP(S) = \\text{{(H, H), (H, T), (T, H), (T, T)}} = 4\n\\end{equation}\n\n\n- Event (A n B): that the outcome of 1 toss will be H, and the outcome of the other toss will be T (i.e., P(A) = {(H, T), (T, H)}}\n\n\\begin{equation}\n\\normalsize\nP(A) = \\frac\n{\\text{number of favorable events}}\n{\\text{probabiliy space}}\n\\end{equation}\n\n\\begin{equation}\n\\normalsize\n= \\frac{P(A)}{P(S)} = \\frac{2}{4} = \\frac{1}{2}\n\\end{equation}\n\n# Math Solution\n- Experiment: Roll 2 die and find the probability that sum == 6 with different die faces\n- Sample Space: possible outcomes == 36\n- Event: P( (A != B) n (A + B == 6))\n- Formula: \n\n\\begin{equation}\n\\large\n\\frac{P(\\text{number favorable events})}{P(\\text{total number of events})} = \\frac{P(A)}{P(S)}\n\\end{equation}\n\n\n\\begin{equation}\n\\large\nP(A) = \n\\frac{P(1,5) + P(2, 4) + P(4, 2) + P(1, 5)}\n{P(S)}\n\\end{equation}\n\n\n\\begin{equation}\n\\large\n= \\frac{1 + 1 + 1 + 1}{36}\n\\end{equation}\n\n\\begin{equation}\n\\large\n= \\frac{4}{36}\n\\end{equation}\n\n\n\\begin{equation}\n\\large\n= \\frac{1}{9}\n\\end{equation}\n\n\\begin{equation}\n\\large\n= 0.111\n\\end{equation}\n\n# Monte Carlo Solution\n \n\n## Imports\n\n\n```python\nfrom typing import List\n```\n\n## Probability of unique dice rolls at target\n\n\n```python\ndef get_all_possible_combos(num_dice: int, dice_face: int, target: int) -> List[List[int]]:\n\t\"\"\"Returns Cartesian product of all possible dice rolls.\n\t\n\tUses two helper functions:\n\t\t- get_possible_rolls, generator func to yield all die rolls.\n\t\t- recurse_roll_combos, helper recursion to create combinations for `num_dice`\n\t\"\"\"\n\tdef get_possible_rolls(dice_face: int) -> int:\n\t\t\"\"\"Generator funct to yield possible rolls.\"\"\"\n\t\tfor i in range(1, dice_face + 1): yield i\n\t\n\n\tdef recurse_roll_combos(processed_dice: int, all_roll_combos: list) -> List[List[int]]:\n\t\t\"\"\"Helper func to recurse through all dice.\"\"\"\n\t\tif processed_dice == num_dice:\t\t## Base case\n\t\t\treturn all_roll_combos\n\t\t\n\t\tnew_roll_combos = []\n\t\tfor roll in all_roll_combos:\n\t\t\troll_combo = [roll + [val] for val in get_possible_rolls(dice_face)]\n\t\t\tnew_roll_combos += (roll_combo)\n\t\t\n\t\tprocessed_dice += 1\n\t\treturn recurse_roll_combos(processed_dice= processed_dice, all_roll_combos = new_roll_combos)\n\n\t\n\tfirst_rolls = [[val] for val in get_possible_rolls(dice_face)]\n\troll_combinations = recurse_roll_combos( 1, first_rolls)\n\n\n\tassert (len(roll_combinations) == dice_face ** num_dice)\n\treturn roll_combinations\n```\n\n## Main\n\n\n```python\ndef main():\n\tnum_dice = 2\n\tdice_face = 6\n\ttarget = 6\n\n\t## Get 
Combinations\n\troll_combos = get_all_possible_combos(num_dice, dice_face, target)\n\n\t## Count unique combos in target\n\tnum_at_target = 0\n\tfor combo in roll_combos:\n\n\t\tflag_unique = True\n\t\tfor c in combo:\n\t\t\tif combo.count(c) > 1:\n\t\t\t\tflag_unique = False\n\t\t\t\tbreak\n\t\t\n\t\tif flag_unique and sum(combo) == target:\n\t\t\tnum_at_target += 1\n\t\n\t## Return P(A) / P(S)\n\tprob = num_at_target / len(roll_combos)\n\tprint( round(prob, 3))\n```\n\n\n```python\nif __name__ == \"__main__\":\n\tmain()\n```\n\n 0.111\n\n", "meta": {"hexsha": "0b573d34d5122b18916cb2fc26b0ff062114d9b2", "size": 7555, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "statistics/10_days/07_day2moredice.ipynb", "max_stars_repo_name": "jaimiles23/hacker_rank", "max_stars_repo_head_hexsha": "0580eac82e5d0989afabb5c2e66faf09713f891b", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "statistics/10_days/07_day2moredice.ipynb", "max_issues_repo_name": "jaimiles23/hacker_rank", "max_issues_repo_head_hexsha": "0580eac82e5d0989afabb5c2e66faf09713f891b", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "statistics/10_days/07_day2moredice.ipynb", "max_forks_repo_name": "jaimiles23/hacker_rank", "max_forks_repo_head_hexsha": "0580eac82e5d0989afabb5c2e66faf09713f891b", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2021-09-22T11:06:58.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-25T09:29:24.000Z", "avg_line_length": 25.5236486486, "max_line_length": 163, "alphanum_fraction": 0.5228325612, "converted": true, "num_tokens": 1232, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9136765210631688, "lm_q2_score": 0.9073122144683576, "lm_q1q2_score": 0.8289898676335686}} {"text": "### Example 4: Burgers' equation\n\nNow that we have seen how to construct the non-linear convection and diffusion examples, we can combine them to form Burgers' equations. 
We again create a set of coupled equations which are actually starting to form quite complicated stencil expressions, even if we are only using a low-order discretisations.\n\nLet's start with the definition fo the governing equations:\n$$ \\frac{\\partial u}{\\partial t} + u \\frac{\\partial u}{\\partial x} + v \\frac{\\partial u}{\\partial y} = \\nu \\; \\left(\\frac{\\partial ^2 u}{\\partial x^2} + \\frac{\\partial ^2 u}{\\partial y^2}\\right)$$\n \n$$ \\frac{\\partial v}{\\partial t} + u \\frac{\\partial v}{\\partial x} + v \\frac{\\partial v}{\\partial y} = \\nu \\; \\left(\\frac{\\partial ^2 v}{\\partial x^2} + \\frac{\\partial ^2 v}{\\partial y^2}\\right)$$\n\nThe discretized and rearranged form then looks like this:\n\n\\begin{aligned}\nu_{i,j}^{n+1} &= u_{i,j}^n - \\frac{\\Delta t}{\\Delta x} u_{i,j}^n (u_{i,j}^n - u_{i-1,j}^n) - \\frac{\\Delta t}{\\Delta y} v_{i,j}^n (u_{i,j}^n - u_{i,j-1}^n) \\\\\n&+ \\frac{\\nu \\Delta t}{\\Delta x^2}(u_{i+1,j}^n-2u_{i,j}^n+u_{i-1,j}^n) + \\frac{\\nu \\Delta t}{\\Delta y^2} (u_{i,j+1}^n - 2u_{i,j}^n + u_{i,j+1}^n)\n\\end{aligned}\n\n\\begin{aligned}\nv_{i,j}^{n+1} &= v_{i,j}^n - \\frac{\\Delta t}{\\Delta x} u_{i,j}^n (v_{i,j}^n - v_{i-1,j}^n) - \\frac{\\Delta t}{\\Delta y} v_{i,j}^n (v_{i,j}^n - v_{i,j-1}^n) \\\\\n&+ \\frac{\\nu \\Delta t}{\\Delta x^2}(v_{i+1,j}^n-2v_{i,j}^n+v_{i-1,j}^n) + \\frac{\\nu \\Delta t}{\\Delta y^2} (v_{i,j+1}^n - 2v_{i,j}^n + v_{i,j+1}^n)\n\\end{aligned}\n\nGreat. Now before we look at the Devito implementation, let's re-create the NumPy-based implementation form the original.\n\n\n```python\nfrom examples.cfd import plot_field, init_hat\nimport numpy as np\n%matplotlib inline\n\n# Some variable declarations\nnx = 41\nny = 41\nnt = 120\nc = 1\ndx = 2. / (nx - 1)\ndy = 2. / (ny - 1)\nsigma = .0009\nnu = 0.01\ndt = sigma * dx * dy / nu\n```\n\n\n```python\n#NBVAL_IGNORE_OUTPUT\n\n# Assign initial conditions\nu = np.empty((nx, ny))\nv = np.empty((nx, ny))\n\ninit_hat(field=u, dx=dx, dy=dy, value=2.)\ninit_hat(field=v, dx=dx, dy=dy, value=2.)\n\nplot_field(u)\n```\n\n\n```python\n#NBVAL_IGNORE_OUTPUT\n\nfor n in range(nt + 1): ##loop across number of time steps\n un = u.copy()\n vn = v.copy()\n\n u[1:-1, 1:-1] = (un[1:-1, 1:-1] -\n dt / dx * un[1:-1, 1:-1] * \n (un[1:-1, 1:-1] - un[1:-1, 0:-2]) - \n dt / dy * vn[1:-1, 1:-1] * \n (un[1:-1, 1:-1] - un[0:-2, 1:-1]) + \n nu * dt / dx**2 * \n (un[1:-1,2:] - 2 * un[1:-1, 1:-1] + un[1:-1, 0:-2]) + \n nu * dt / dy**2 * \n (un[2:, 1:-1] - 2 * un[1:-1, 1:-1] + un[0:-2, 1:-1]))\n \n v[1:-1, 1:-1] = (vn[1:-1, 1:-1] - \n dt / dx * un[1:-1, 1:-1] *\n (vn[1:-1, 1:-1] - vn[1:-1, 0:-2]) -\n dt / dy * vn[1:-1, 1:-1] * \n (vn[1:-1, 1:-1] - vn[0:-2, 1:-1]) + \n nu * dt / dx**2 * \n (vn[1:-1, 2:] - 2 * vn[1:-1, 1:-1] + vn[1:-1, 0:-2]) +\n nu * dt / dy**2 *\n (vn[2:, 1:-1] - 2 * vn[1:-1, 1:-1] + vn[0:-2, 1:-1]))\n \n u[0, :] = 1\n u[-1, :] = 1\n u[:, 0] = 1\n u[:, -1] = 1\n \n v[0, :] = 1\n v[-1, :] = 1\n v[:, 0] = 1\n v[:, -1] = 1\n \nplot_field(u)\n```\n\nNice, our wave looks just like the original. Now we shall attempt to write our entire Burgers' equation operator in a single cell - but before we can demonstrate this, there is one slight problem.\n\nThe diffusion term in our equation requires a second-order space discretisation on our velocity fields, which we set through the `TimeFunction` constructor for $u$ and $v$. The `TimeFunction` objects will store this dicretisation information and use it as default whenever we use the shorthand notations for derivative, like `u.dxl` or `u.dyl`. 
For the advection term, however, we want to use a first-order discretisation, which we now have to create by hand when combining terms with different stencil discretisations. To illustrate let's consider the following example: \n\n\n```python\nfrom devito import Grid, TimeFunction, first_derivative, left\n\ngrid = Grid(shape=(nx, ny), extent=(2., 2.))\nx, y = grid.dimensions\nt = grid.stepping_dim\n\nu1 = TimeFunction(name='u1', grid=grid, space_order=1)\nprint(\"Space order 1:\\n%s\\n\" % u1.dxl)\n\nu2 = TimeFunction(name='u2', grid=grid, space_order=2)\nprint(\"Space order 2:\\n%s\\n\" % u2.dxl)\n\n# We use u2 to create the explicit first-order derivative\nu1_dx = first_derivative(u2, dim=x, side=left, order=1)\nprint(\"Explicit space order 1:\\n%s\\n\" % u1_dx)\n```\n\n Space order 1:\n u1(t, x, y)/h_x - u1(t, x - h_x, y)/h_x\n \n Space order 2:\n 1.5*u2(t, x, y)/h_x + 0.5*u2(t, x - 2*h_x, y)/h_x - 2.0*u2(t, x - h_x, y)/h_x\n \n Explicit space order 1:\n u2(t, x, y)/h_x - u2(t, x - h_x, y)/h_x\n \n\n\nOk, so by constructing derivative terms explicitly we again have full control of the spatial discretisation - the power of symbolic computation. Armed with that trick, we can now build and execute our advection-diffusion operator from scratch in one cell.\n\n\n```python\n#NBVAL_IGNORE_OUTPUT\nfrom sympy import solve\nfrom devito import Operator, Constant, Eq, INTERIOR\n\n# Define our velocity fields and initialise with hat function\nu = TimeFunction(name='u', grid=grid, space_order=2)\nv = TimeFunction(name='v', grid=grid, space_order=2)\ninit_hat(field=u.data[0], dx=dx, dy=dy, value=2.)\ninit_hat(field=v.data[0], dx=dx, dy=dy, value=2.)\n\n# Write down the equations with explicit backward differences\na = Constant(name='a')\nu_dx = first_derivative(u, dim=x, side=left, order=1)\nu_dy = first_derivative(u, dim=y, side=left, order=1)\nv_dx = first_derivative(v, dim=x, side=left, order=1)\nv_dy = first_derivative(v, dim=y, side=left, order=1)\neq_u = Eq(u.dt + u*u_dx + v*u_dy, a*u.laplace, region=INTERIOR)\neq_v = Eq(v.dt + u*v_dx + v*v_dy, a*v.laplace, region=INTERIOR)\n\n# Let SymPy rearrange our stencils to form the update expressions\nstencil_u = solve(eq_u, u.forward)[0]\nstencil_v = solve(eq_v, v.forward)[0]\nupdate_u = Eq(u.forward, stencil_u)\nupdate_v = Eq(v.forward, stencil_v)\n\n# Create Dirichlet BC expressions using the low-level API\nbc_u = [Eq(u.indexed[t+1, 0, y], 1.)] # left\nbc_u += [Eq(u.indexed[t+1, nx-1, y], 1.)] # right\nbc_u += [Eq(u.indexed[t+1, x, ny-1], 1.)] # top\nbc_u += [Eq(u.indexed[t+1, x, 0], 1.)] # bottom\nbc_v = [Eq(v.indexed[t+1, 0, y], 1.)] # left\nbc_v += [Eq(v.indexed[t+1, nx-1, y], 1.)] # right\nbc_v += [Eq(v.indexed[t+1, x, ny-1], 1.)] # top\nbc_v += [Eq(v.indexed[t+1, x, 0], 1.)] # bottom\n\n# Create the operator\nop = Operator([update_u, update_v] + bc_u + bc_v)\n\n# Execute the operator for a number of timesteps\nop(time=nt, dt=dt, a=nu)\n\nplot_field(u.data[0])\n```\n", "meta": {"hexsha": "09b252225134d25ab27ba9e1331e1cf505e724bb", "size": 287014, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "examples/cfd/04_burgers.ipynb", "max_stars_repo_name": "RajatRasal/devito", "max_stars_repo_head_hexsha": "162abb6b318e77eaa4e8f719047327c45782056f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "examples/cfd/04_burgers.ipynb", "max_issues_repo_name": "RajatRasal/devito", "max_issues_repo_head_hexsha": 
"162abb6b318e77eaa4e8f719047327c45782056f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "examples/cfd/04_burgers.ipynb", "max_forks_repo_name": "RajatRasal/devito", "max_forks_repo_head_hexsha": "162abb6b318e77eaa4e8f719047327c45782056f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 986.3024054983, "max_line_length": 92732, "alphanum_fraction": 0.9507968252, "converted": true, "num_tokens": 2473, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9219218370002789, "lm_q2_score": 0.8991213813246444, "lm_q1q2_score": 0.8289196355570444}} {"text": "# Think Bayes\n\nThis notebook presents example code and exercise solutions for Think Bayes.\n\nCopyright 2018 Allen B. Downey\n\nMIT License: https://opensource.org/licenses/MIT\n\n\n```python\n# Configure Jupyter so figures appear in the notebook\n%matplotlib inline\n\n# Configure Jupyter to display the assigned value after an assignment\n%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'\n\n# import classes from thinkbayes2\nfrom thinkbayes2 import Hist, Pmf, Suite\n```\n\n**Exercise:** Let's consider [a more general version of the Monty Hall problem](https://en.wikipedia.org/wiki/Monty_Hall_problem#Other_host_behaviors) where Monty is more unpredictable. As before, Monty never opens the door you chose (let's call it A) and never opens the door with the prize. So if you choose the door with the prize, Monty has to decide which door to open. Suppose he opens B with probability `p` and C with probability `1-p`.\n\n1. If you choose A and Monty opens B, what is the probability that the car is behind A, in terms of `p`?\n\n2. What if Monty opens C?\n\nHint: you might want to use SymPy to do the algebra for you. 
\n\n\n```python\nfrom sympy import symbols\np = symbols('p')\n```\n\n\n\n\n p\n\n\n\n\n```python\n# Solution\n\n# Here's the solution if Monty opens B.\n\npmf = Pmf('ABC')\npmf['A'] *= p\npmf['B'] *= 0\npmf['C'] *= 1\npmf.Normalize()\npmf['A'].simplify()\n```\n\n\n\n\n 1.0*p/(p + 1)\n\n\n\n\n```python\n# Solution\n\n# When p=0.5, the result is what we saw before\n\npmf['A'].evalf(subs={p:0.5})\n```\n\n\n\n\n 0.333333333333333\n\n\n\n\n```python\n# Solution\n\n# When p=0.0, we know for sure that the prize is behind C\n\npmf['C'].evalf(subs={p:0.0})\n```\n\n\n\n\n 1.00000000000000\n\n\n\n\n```python\n# Solution\n\n# And here's the solution if Monty opens C.\n\npmf = Pmf('ABC')\npmf['A'] *= 1-p\npmf['B'] *= 1\npmf['C'] *= 0\npmf.Normalize()\npmf['A'].simplify()\n```\n\n\n\n\n 0.333333333333333*(p - 1)/(0.333333333333333*p - 0.666666666666667)\n\n\n", "meta": {"hexsha": "ecac451cfa95fa909ebb15ab14137d411ce59a71", "size": 4328, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "examples/monty_soln.ipynb", "max_stars_repo_name": "vickiwickinger/ThinkBayes2", "max_stars_repo_head_hexsha": "2259f1e83dba9a959b2cc84c7b83318b53b3ee24", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1337, "max_stars_repo_stars_event_min_datetime": "2015-01-06T06:23:55.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T21:06:21.000Z", "max_issues_repo_path": "examples/monty_soln.ipynb", "max_issues_repo_name": "vickiwickinger/ThinkBayes2", "max_issues_repo_head_hexsha": "2259f1e83dba9a959b2cc84c7b83318b53b3ee24", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 43, "max_issues_repo_issues_event_min_datetime": "2015-04-23T13:14:15.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-04T12:55:59.000Z", "max_forks_repo_path": "examples/monty_soln.ipynb", "max_forks_repo_name": "vickiwickinger/ThinkBayes2", "max_forks_repo_head_hexsha": "2259f1e83dba9a959b2cc84c7b83318b53b3ee24", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1497, "max_forks_repo_forks_event_min_datetime": "2015-01-13T22:05:32.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-30T09:19:53.000Z", "avg_line_length": 22.1948717949, "max_line_length": 456, "alphanum_fraction": 0.5184842884, "converted": true, "num_tokens": 576, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9149009573133051, "lm_q2_score": 0.9059898172105135, "lm_q1q2_score": 0.8288909510820052}} {"text": "# Week10\n## Random Number Generators\n\n### Exercise 1) Linear Congrugential Generators\n\nIn the lecture notes of L17, we showed several examples of LCGs. You will now be tasked with creating a more general framework for creating and using LCGs in an object-oriented manner.\n\nRecall that an LCG generates a new pseudorandom number according to the formula\n\n$$ X_{n+1} = a\\cdot X_n + c \\mod m.$$\n\n#### 1a) Defining the class\n\n(In Python) Define a class `LinearCongrugentialGenerator`. The class should have a constructor that takes the *seed* as input, and sets the state of the pRNG to be this seed.\n\nThe constructor should also take the parameters $a$, $c$ and $m$\u00a0as keyword arguments and store these in the class. As the default values, choose one of the LCG commonly used parameters found in this [table](https://en.wikipedia.org/wiki/Linear_congruential_generator#Parameters_in_common_use) on wikipedia. 
You can for example choose the one used by glibc, i.e., the gcc compiler.\n\n#### 1b) Adding a call method\n\nCreate a `__call__` special method that advances the state of the pRNG by one number, and returns this new number.\n\nTest your method by implementing an instance of your pRNG and producing ten numbers.\n\n#### 1c) Random floats\n\nAdd a new method called `rand()` that should be called with no input. The function should return a random floating point number on the interval $[0, 1)$.\n\nHint: To do this, the method should call on the `__call__` special method to advance the state and produce a random integer. This integer then needs to be scaled to a proper float by dividing it by some value.\n\n#### 1d) Uniform\n\nAdd a new method called `uniform(a, b)` that takes two numbers in: $a$ and $b$ and returns a uniformly distributed floating point number on the interval $[a, b)$.\n\nHint, you can first call `rand()` to get a number on the range $[0, 1)$, and then scale this number by multiplying and shifting it.\n\n#### 1e) Randint\n\nNow add a method called `randint(a, b)` (short for random integer). That should return a uniformly distributed integer on the interval $[a, b]$.\n\nTest your function by throwing 1000 dice, and computing the average result, which should be close to 3.5.\n\n#### 1f) Normally Distributed Numbers\n\nNow add a method `normal()` that should return a normally distributed number with standard deviation 1 and a mean of 0.\n\nTo compute a normally distributed number, use the Box-Muller metho, as described in the lecture notes. To do this, you need two uniformly distributed numbers on the interval $[0, 1)$: $U_1$ and $U_2$. Then you can compute the two independent normally distributed numbers based on the formulas:\n\n$$\\begin{align}\nZ_1 = \\sqrt{-2 \\ln U_1}\\cos (2\\pi U_2), \\\\\nZ_2 = \\sqrt{-2 \\ln U_1}\\sin (2\\pi U_2).\n\\end{align}$$\n\nNote that to compute and return *one* normally distributed number, you can simply draw two numbers with `rand()`, and compute $Z_1$, and ignore $Z_2$. \n\nHowever, slightly more efficient, is to compute both values, and then use the other number the next time `normal()` is called. See if you can find a nice way to do this, and if not, simply implement the simpler solution of only computing $Z_1$.\n\n### Exercise 2\n\n#### 2a) Implementing RANDU\n\nUse your `LinearCongrugentialGenerator` class to implement a RANDU pRNG. RANDU is a LCG with $a = 65539$, $c=0$ and $m=2^{31}.$$\n\n\n#### 2b) - Checking the mean of the generated numbers\n\nAs we have discussed in the lectures. For a pRNG to produce numbers that \"look\" random, they have to reproduce certain statistical properties. One of these is the *mean*. If we are drawing numbers on the interval $[0, 1)$, then the average over a large number of random numbers, i.e., the *sample mean*, should tend to exactly 0.5.\n\nGenerate $n=10^6$ samples from your RANDU class and find the sample mean. Does RANDU reproduce the expected mean?\n\n#### 2c) Checking the sample variance\n\nThe random numbers should, in addition to respecting the mean, reproduce the variance of the distribution. 
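\n\nOne possible shape for such a check is sketched below. This cell is only an illustration: it inlines the RANDU recurrence directly (so that it runs on its own) instead of calling the `LinearCongrugentialGenerator` class from Exercise 1, but your own class with the RANDU parameters should be usable in exactly the same way.\n\n\n```python\nimport numpy as np\n\n# RANDU parameters: a = 65539, c = 0, m = 2**31, with an odd seed\na, c, m = 65539, 0, 2**31\nstate = 1\n\n# draw a large number of samples, scaled to floats in [0, 1)\nsamples = np.empty(10**6)\nfor i in range(samples.size):\n    state = (a * state + c) % m\n    samples[i] = state / m\n\nprint('sample mean    :', samples.mean())   # should land close to 0.5\nprint('sample variance:', samples.var())    # should land close to 1/12, i.e. about 0.0833\n```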
For random numbers drawn from a uniform distribution between 0 and 1, this variance should be $1/12$.\n\nUsing your $n=10^6$ generated samples, check that the sample variance is reasonable.\n\n#### 2d) Moments of the pdf (Only for those who have taken or are taking STK1100) \n\nWe are attempting to draw random numbers from the probability density function\n\n$$f(x) = \\begin{cases}\n1 & \\mbox{ if } 0 \\leq x < 1, \\\\\n0 & \\mbox{else}.\n\\end{cases}$$\n\nFor any pdf, the *moments* of the sample are defined as:\n$$E(x^k) = \\int_{-\\infty}^\\infty x^k \\cdot f(x) \\ {\\rm d}x.$$\n\nShow that for our given pdf, that the $k$-th moment can be written as\n$$E(X^k) = \\frac{1}{k+1}.$$\n\nFrom this, use that fact that the variance can be written as \n$${\\rm Var}(X) = E(X^2) - E(X)^2,$$\nto show that the variance of the pdf is 1/12.\n\n\n\n### Plotting the correlation found in RANDU\n\nWe have seen that RANDU reproduces the mean and the variance of its distribution, which is good. However, there are other statistical properties it also needs to follow. One of these is that different samples from the distribution, i.e., different generated numbers, should be *uncorrelated*. It turns out that RANDU breaks this requirement, horribly. This is very briefly shown in L17.\n\nA common way to show how bad RANDU actually is, is to plot random points within a three dimensional unit cube. A unit cube is a cube with sides that goes from 0 to 1 in all three dimensions. We can place a randomly located point inside the cube by drawing three random numbers in the $[0, 1)$ range, one for each of the three coordinates $x$, $y$ and $z$. If all three coordinates are chosen randomly *and* independent of each other, the random point will have an equal likelihood of landing anywhere within the cube. For RANDU however, if we plot this out, we see that these \"random\" points aren't distributed uniformly throughout the cube at all, but rather all land in specific planes. Let us produce such a plot.\n\n#### 2e) Drawing random points\n\nUse your RANDU-generator to generate $n=1000$ such points. Note that for this to work you need to draw the three coordinates for each point consecutively, you cannot first draw all the x-positions, then all the y-positions for example.\n\n\n#### 2f) A three dimensional scatter plot\n\nNow, plot your $n=1000$ points in a 3D scatter plot. If you do not know how to draw a 3D scatter plot. Take a look at this [matplotlib example script](https://matplotlib.org/2.1.1/gallery/mplot3d/scatter3d.html).\n\nOnce you have your scatter plot, the points might *look* uniformly distributed. To properly see that they lie in planes, we need to look at it from the right angle. If you are plotting outside Jupyter, the plot window should be interactive and so you can simply drag the view around untill you find a good angle. If you are plotting inside Jupyter you can use `ax.view_init(elevation, method)` before you show the plot. For example `view_init(30, 60)` should be a good angle. At least for me.\n\nIncrease the number of points to $n=10000$ to really make the planes apparent.\n\n#### 2g) A proper uniformly unit cube\n\nRepeat the plot, but this time replace your RANDU generater with `numpy.random.rand()` which produces floats in the range $[0, 1)$ based on the Mersenne Twister algorithm. 
Verify that no planes are visible in the cube in this case.\n\n\n### Exercise 3) Randomness in C++\n\nBefore you do this exercise, be sure to read about generating random numbers in C++ in the lecture notes, or watch this video:\n* https://channel9.msdn.com/Events/GoingNative/2013/rand-Considered-Harmful\n\n\nFor this exercise, the C++ reference is a good resource.\n\n#### Exercise 3a) Uniform numbers\nCreate a C++ script that uses the Mersenne Twister algorithm to produce 10 uniformly distributed numbers in the range 0 and 1\n\n#### Exercise 3b) Normally distributed numbers\n\nUse a different distribution to produce normally distributed numbers.\n\n### Exercise 4) The Birthday Problem\n\nThe Birthday Problem concerns the probabilty that two or more people share a birthday in a room with $n$ people. This is also a good test of an RNG, because an RNG of truley uncorrelated numbers should produce duplicates with a given frequency.\n\n#### 4a) Drawing a random birthday\n\nUse [`numpy.random.randint`](https://docs.scipy.org/doc/numpy-1.15.1/reference/generated/numpy.random.randint.html) (click for the offical reference) to draw a birthday as a random integer in the range $[1, 366]$. We have 366 possibilities because we include leap years.\n\n#### 4b) Drawing $n$\u00a0random birthdays\n\nNow use the `size` keyword to draw a whole set of random birthdays, to simulate the birthdays of $n$\u00a0randomly people located in the same room. As an example, let's say we are $n=20$ people in the same room.\n\n#### 4c) Checking for duplicates\n\nIf the array we get from randint contains any *duplicate* values. Write a function for checking if any duplicates are contained in the array.\n\nHint: There are many ways to check for duplicates. You can for example do `len(np.unique(birthdays)` to check how many *unique* birthdays there are. Or you can use the fact that a Python *set* cannot contain any duplicates, and so on. Give it a go on your own, and if you cannot figure it out, google is sure to be helpful :)\n\n#### 4d) Repeated Simulation\n\nNow use a loop to repeat the experiment of drawing $n=23$\u00a0birthdays and checking wether there is a duplicate 1000 times. Count the number of times there is a duplicate, and the number of times there is not.\n\nThe probabilty should be close to 50/50 for 23 people, if the RNG returns the expected number of duplicates. Did you get a value close to 50/50?\n\n#### 4e) Edge cases\n\nFor $n=1$, the probability of a duplicate is \"obivously\" 0. And for $n=366$ it is \"obviously\" 1. Why is this?\n\n### Drawing out the full curve\n\nThe probability of a duplicate birthday can be found analytically, and produces the curve shown in this Figure:\n \n\nLet's try to produce a similar function ourselves.\n\n#### 4f) Drawing the Curve\n\nGeneralize your answer to 4D so that it instead finds the probability of $n$ people having a shared birthday.\n\n#### 4g) Plotting the curve\n\nFor $n$ in the range $[0, 100]$, simulate 1000 cases and count the number of shared birthdays for each case. Divide by the number of simulations, $1000$, to find the probability of a shared birthday for each $n$\n\nIf you have managed to find the probability for each $n$, use `plt.step` to plot it. The function `plt.step` works just like `plt.plot`, but plots the data as a discontinous stepwise function, just like in the analytical function above.\n\n#### 4h) Comparing the curves\n\nDoes your program look like the analytical function? Should it? 
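\n\nFor readers who want something concrete to compare their own implementation against, here is one possible sketch of 4a to 4g put together (added as an illustration only; the helper names and the choice of 1000 trials per $n$ are not prescribed by the exercise):\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndef has_duplicate(birthdays):\n    # any repeated value means fewer unique birthdays than people (4c)\n    return len(np.unique(birthdays)) < len(birthdays)\n\ndef shared_birthday_probability(n, trials=1000):\n    # repeat the experiment and count how often a duplicate shows up (4d and 4f)\n    hits = 0\n    for _ in range(trials):\n        birthdays = np.random.randint(1, 367, size=n)   # days 1 to 366, leap years included\n        if has_duplicate(birthdays):\n            hits += 1\n    return hits / trials\n\n# 4g) estimate the probability for n in [0, 100] and plot it as a step curve\nns = range(0, 101)\nprobs = [shared_birthday_probability(n) for n in ns]\n\nplt.step(ns, probs)\nplt.xlabel('number of people in the room')\nplt.ylabel('estimated probability of a shared birthday')\nplt.show()\n```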
\n\n\n\n### More exercises\n\nFor more exercises, turn to Langtangen Chapter 8. We suggest the following:\n* Exercise 8.8\n* Exercise 8.9\n* Exercise 8.19\n* Exercise 8.21\n", "meta": {"hexsha": "dd45053b36078060a578c5698180dbf1f35cd2ae", "size": 14477, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "book/docs/exercises/week10/E8_random_number_generators.ipynb", "max_stars_repo_name": "finsberg/IN1910_H21", "max_stars_repo_head_hexsha": "4bd8f49c0f6839884bfaf8a0b1e3a717c3041092", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "book/docs/exercises/week10/E8_random_number_generators.ipynb", "max_issues_repo_name": "finsberg/IN1910_H21", "max_issues_repo_head_hexsha": "4bd8f49c0f6839884bfaf8a0b1e3a717c3041092", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "book/docs/exercises/week10/E8_random_number_generators.ipynb", "max_forks_repo_name": "finsberg/IN1910_H21", "max_forks_repo_head_hexsha": "4bd8f49c0f6839884bfaf8a0b1e3a717c3041092", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-08-30T12:38:40.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-05T14:14:59.000Z", "avg_line_length": 45.0996884735, "max_line_length": 727, "alphanum_fraction": 0.6417766112, "converted": true, "num_tokens": 2741, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.905989810230102, "lm_q2_score": 0.9149009515124918, "lm_q1q2_score": 0.8288909394401422}} {"text": "```python\nimport sympy as sp\nimport numpy as np\n\nprint(f\"SymPy Version: {sp.__version__}\")\n\n# \u6570\u5f0f\u3092\u30ad\u30ec\u30a4\u306b\u8868\u793a\u3059\u308b\nsp.init_printing()\n\n# \u4e71\u6570\u306e\u7a2e\u306e\u56fa\u5b9a\u3057\u3066\u304a\u304f\nnp.random.seed(123)\n```\n\n SymPy Version: 1.8\n\n\n### C\u8a00\u8a9e\u306e\u30b3\u30fc\u30c9\u3092\u751f\u6210\u3059\u308b\u3002\n\n- C\u8a00\u8a9e\u306e\u30b3\u30fc\u30c9\u306f\u3001`sympy.ccode()`\u3067\u51fa\u529b\u3059\u308b\u3053\u3068\u304c\u3067\u304d\u308b\u3002\n\n\n```python\nx, y, z = sp.symbols('x y z')\n```\n\n\n```python\nsp.ccode(sp.exp(-x))\n```\n\n\n\n\n 'exp(-x)'\n\n\n\n\n```python\ndistance = (x ** 2 + y ** 2 + z ** 2) ** 0.5\ndistance\n```\n\n\n```python\nsp.ccode(distance)\n```\n\n\n\n\n 'sqrt(pow(x, 2) + pow(y, 2) + pow(z, 2))'\n\n\n\n\n```python\nx = sp.MatrixSymbol('x', 3, 1)\n\nA = sp.Matrix(np.random.randint(15, size=(3, 3)))\nA\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}14 & 13 & 14\\\\2 & 12 & 2\\\\6 & 1 & 3\\end{matrix}\\right]$\n\n\n\n\n```python\nAx = A * sp.Matrix(x)\nAx\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{array}{c}14 x_{0, 0} + 13 x_{1, 0} + 14 x_{2, 0}\\\\2 x_{0, 0} + 12 x_{1, 0} + 2 x_{2, 0}\\\\6 x_{0, 0} + x_{1, 0} + 3 x_{2, 0}\\end{array}\\right]$\n\n\n\n\n```python\nn_rows = 3\nfor i in range(n_rows):\n code = sp.ccode(Ax[i], assign_to=f\"y[{i}]\")\n print(code)\n```\n\n y[0] = 14*x[0] + 13*x[1] + 14*x[2];\n y[1] = 2*x[0] + 12*x[1] + 2*x[2];\n y[2] = 6*x[0] + x[1] + 3*x[2];\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "aef0d7a5250fff50c9b399532c96d63b059e55f6", "size": 6596, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/04-CodeGeneration.ipynb", "max_stars_repo_name": "codemajin/Introduction-to-SymPy", 
"max_stars_repo_head_hexsha": "a403b50ba384260a8d5845be1fd1fbd0e5f6b807", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notebooks/04-CodeGeneration.ipynb", "max_issues_repo_name": "codemajin/Introduction-to-SymPy", "max_issues_repo_head_hexsha": "a403b50ba384260a8d5845be1fd1fbd0e5f6b807", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/04-CodeGeneration.ipynb", "max_forks_repo_name": "codemajin/Introduction-to-SymPy", "max_forks_repo_head_hexsha": "a403b50ba384260a8d5845be1fd1fbd0e5f6b807", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 28.188034188, "max_line_length": 2020, "alphanum_fraction": 0.5826258338, "converted": true, "num_tokens": 541, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9263037323284109, "lm_q2_score": 0.8947894738180211, "lm_q1q2_score": 0.8288468292458079}} {"text": "# Basis for grayscale images\n\n## Introduction\n\nConsider the set of real-valued matrices of size $M\\times N$; we can turn this into a vector space by defining addition and scalar multiplication in the usual way:\n\n\\begin{align}\n\\mathbf{A} + \\mathbf{B} &= \n \\left[ \n \\begin{array}{ccc} \n a_{0,0} & \\dots & a_{0,N-1} \\\\ \n \\vdots & & \\vdots \\\\ \n a_{M-1,0} & \\dots & b_{M-1,N-1} \n \\end{array}\n \\right]\n + \n \\left[ \n \\begin{array}{ccc} \n b_{0,0} & \\dots & b_{0,N-1} \\\\ \n \\vdots & & \\vdots \\\\ \n b_{M-1,0} & \\dots & b_{M-1,N-1} \n \\end{array}\n \\right]\n \\\\\n &=\n \\left[ \n \\begin{array}{ccc} \n a_{0,0}+b_{0,0} & \\dots & a_{0,N-1}+b_{0,N-1} \\\\ \n \\vdots & & \\vdots \\\\ \n a_{M-1,0}+b_{M-1,0} & \\dots & a_{M-1,N-1}+b_{M-1,N-1} \n \\end{array}\n \\right] \n \\\\ \\\\ \\\\\n\\beta\\mathbf{A} &= \n \\left[ \n \\begin{array}{ccc} \n \\beta a_{0,0} & \\dots & \\beta a_{0,N-1} \\\\ \n \\vdots & & \\vdots \\\\ \n \\beta a_{M-1,0} & \\dots & \\beta a_{M-1,N-1}\n \\end{array}\n \\right]\n\\end{align}\n\n\nAs a matter of fact, the space of real-valued $M\\times N$ matrices is completely equivalent to $\\mathbb{R}^{MN}$ and we can always \"unroll\" a matrix into a vector. Assume we proceed column by column; then the matrix becomes\n\n$$\n \\mathbf{a} = \\mathbf{A}[:] = [\n \\begin{array}{ccccccc}\n a_{0,0} & \\dots & a_{M-1,0} & a_{0,1} & \\dots & a_{M-1,1} & \\ldots & a_{0, N-1} & \\dots & a_{M-1,N-1}\n \\end{array}]^T\n$$\n\nAlthough the matrix and vector forms represent exactly the same data, the matrix form allows us to display the data in the form of an image. 
Assume each value in the matrix is a grayscale intensity, where zero is black and 255 is white; for example we can create a checkerboard pattern of any size with the following function:\n\n\n```python\n# usual pyton bookkeeping...\n%matplotlib inline\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport IPython\nfrom IPython.display import Image\nimport math\n```\n\n\n```python\n# ensure all images will be grayscale\nplt.gray();\n```\n\n\n \n\n\n\n```python\n# let's create a checkerboard pattern\nSIZE = 4\nimg = np.zeros((SIZE, SIZE))\nfor n in range(0, SIZE):\n for m in range(0, SIZE):\n if (n & 0x1) ^ (m & 0x1):\n img[n, m] = 255\n\n# now display the matrix as an image\nplt.matshow(img); \n```\n\nGiven the equivalence between the space of $M\\times N$ matrices and $\\mathbb{R}^{MN}$ we can easily define the inner product between two matrices in the usual way:\n\n$$\n\\langle \\mathbf{A}, \\mathbf{B} \\rangle = \\sum_{m=0}^{M-1} \\sum_{n=0}^{N-1} a_{m,n} b_{m, n}\n$$\n\n(where we have neglected the conjugation since we'll only deal with real-valued matrices); in other words, we can take the inner product between two matrices as the standard inner product of their unrolled versions. The inner product allows us to define orthogonality between images and this is rather useful since we're going to explore a couple of bases for this space.\n\n\n## Actual images\n\nConveniently, using IPython, we can read images from disk in any given format and convert them to numpy arrays; let's load and display for instance a JPEG image:\n\n\n```python\nimg = np.array(plt.imread('cameraman.jpg'), dtype=int)\nplt.matshow(img);\n```\n\nThe image is a $64\\times 64$ low-resolution version of the famous \"cameraman\" test picture. Out of curiosity, we can look at the first column of this image, which is is a $64\u00d71$ vector:\n\n\n```python\nimg[:,0]\n```\n\n\n\n\n array([156, 157, 157, 152, 154, 155, 151, 157, 152, 155, 158, 159, 159,\n 160, 160, 161, 155, 160, 161, 161, 164, 162, 160, 162, 158, 160,\n 158, 157, 160, 160, 159, 158, 163, 162, 162, 157, 160, 114, 114,\n 103, 88, 62, 109, 82, 108, 128, 138, 140, 136, 128, 122, 137,\n 147, 114, 114, 144, 112, 115, 117, 131, 112, 141, 99, 97])\n\n\n\nThe values are integers between zero and 255, meaning that each pixel is encoded over 8 bits (or 256 gray levels).\n\n## The canonical basis\n\nThe canonical basis for any matrix space $\\mathbb{R}^{M\\times N}$ is the set of \"delta\" matrices where only one element equals to one while all the others are 0. Let's call them $\\mathbf{E}_n$ with $0 \\leq n < MN$. Here is a function to create the canonical basis vector given its index:\n\n\n```python\ndef canonical(n, M=5, N=10):\n e = np.zeros((M, N))\n e[(n % M), int(n / M)] = 1\n return e\n```\n\nHere are some basis vectors: look for the position of white pixel, which differentiates them and note that we enumerate pixels column-wise:\n\n\n```python\nplt.matshow(canonical(0));\nplt.matshow(canonical(1));\nplt.matshow(canonical(49));\n```\n\n## Transmitting images\n\nSuppose we want to transmit the \"cameraman\" image over a communication channel. The intuitive way to do so is to send the pixel values one by one, which corresponds to sending the coefficients of the decomposition of the image over the canonical basis. So far, nothing complicated: to send the cameraman image, for instance, we will send $64\\times 64 = 4096$ coefficients in a row. 
\n\nNow suppose that a communication failure takes place after the first half of the pixels have been sent. The received data will allow us to display an approximation of the original image only. If we replace the missing data with zeros, here is what we would see, which is not very pretty:\n\n\n```python\n# unrolling of the image for transmission (we go column by column, hence \"F\")\ntx_img = np.ravel(img, \"F\")\n\n# oops, we lose half the data\ntx_img[int(len(tx_img)/2):] = 0\n\n# rebuild matrix\nrx_img = np.reshape(tx_img, (64, 64), \"F\")\nplt.matshow(rx_img);\n```\n\nCan we come up with a trasmission scheme that is more robust in the face of channel loss? Interestingly, the answer is yes, and it involves a different, more versatile basis for the space of images. What we will do is the following: \n\n* describe the Haar basis, a new basis for the image space\n* project the image in the new basis\n* transmit the projection coefficients\n* rebuild the image using the basis vectors\n\nWe know a few things: if we choose an orthonormal basis, the analysis and synthesis formulas will be super easy (a simple inner product and a scalar multiplication respectively). The trick is to find a basis that will be robust to the loss of some coefficients. \n\nOne such basis is the **Haar basis**. We cannot go into too many details in this notebook but, for the curious, a good starting point is [here](https://chengtsolin.wordpress.com/2015/04/15/real-time-2d-discrete-wavelet-transform-using-opengl-compute-shader/). Mathematical formulas aside, the Haar basis works by encoding the information in a *hierarchical* way: the first basis vectors encode the broad information and the higher coefficients encode the detail. Let's have a look. \n\nFirst of all, to keep things simple, we will remain in the space of square matrices whose size is a power of two. 
The code to generate the Haar basis matrices is the following: first we generate a 1D Haar vector and then we obtain the basis matrices by taking the outer product of all possible 1D vectors (don't worry if it's not clear, the results are what's important):\n\n\n```python\ndef haar1D(n, SIZE):\n # check power of two\n if math.floor(math.log(SIZE) / math.log(2)) != math.log(SIZE) / math.log(2):\n print(\"Haar defined only for lengths that are a power of two\")\n return None\n if n >= SIZE or n < 0:\n print(\"invalid Haar index\")\n return None\n \n # zero basis vector\n if n == 0:\n return np.ones(SIZE)\n \n # express n > 1 as 2^p + q with p as large as possible;\n # then k = SIZE/2^p is the length of the support\n # and s = qk is the shift\n p = math.floor(math.log(n) / math.log(2))\n pp = int(pow(2, p))\n k = SIZE / pp\n s = (n - pp) * k\n \n h = np.zeros(SIZE)\n h[int(s):int(s+k/2)] = 1\n h[int(s+k/2):int(s+k)] = -1\n # these are not normalized\n return h\n\n\ndef haar2D(n, SIZE=8):\n # get horizontal and vertical indices\n hr = haar1D(n % SIZE, SIZE)\n hv = haar1D(int(n / SIZE), SIZE)\n # 2D Haar basis matrix is separable, so we can\n # just take the column-row product\n H = np.outer(hr, hv)\n H = H / math.sqrt(np.sum(H * H))\n return H\n```\n\nFirst of all, let's look at a few basis matrices; note that the matrices have positive and negative values, so that the value of zero will be represented as gray:\n\n\n```python\nplt.matshow(haar2D(0));\nplt.matshow(haar2D(1));\nplt.matshow(haar2D(10));\nplt.matshow(haar2D(63));\n```\n\nWe can notice two key properties\n\n* each basis matrix has positive and negative values in some symmetric patter: this means that the basis matrix will implicitly compute the difference between image areas\n* low-index basis matrices take differences between large areas, while high-index ones take differences in smaller **localized** areas of the image\n\nWe can immediately verify that the Haar matrices are orthogonal:\n\n\n```python\n# let's use an 8x8 space; there will be 64 basis vectors\n# compute all possible inner product and only print the nonzero results\nfor m in range(0,64):\n for n in range(0,64):\n r = np.sum(haar2D(m, 8) * haar2D(n, 8))\n if r != 0:\n print(\"[%dx%d -> %f] \" % (m, n, r), end=\"\")\n```\n\n [0x0 -> 1.000000] [1x1 -> 1.000000] [2x2 -> 1.000000] [3x3 -> 1.000000] [4x4 -> 1.000000] [5x5 -> 1.000000] [6x6 -> 1.000000] [7x7 -> 1.000000] [8x8 -> 1.000000] [9x9 -> 1.000000] [10x10 -> 1.000000] [11x11 -> 1.000000] [12x12 -> 1.000000] [13x13 -> 1.000000] [14x14 -> 1.000000] [15x15 -> 1.000000] [16x16 -> 1.000000] [16x17 -> -0.000000] [17x16 -> -0.000000] [17x17 -> 1.000000] [18x18 -> 1.000000] [19x19 -> 1.000000] [20x20 -> 1.000000] [21x21 -> 1.000000] [22x22 -> 1.000000] [23x23 -> 1.000000] [24x24 -> 1.000000] [24x25 -> -0.000000] [25x24 -> -0.000000] [25x25 -> 1.000000] [26x26 -> 1.000000] [27x27 -> 1.000000] [28x28 -> 1.000000] [29x29 -> 1.000000] [30x30 -> 1.000000] [31x31 -> 1.000000] [32x32 -> 1.000000] [33x33 -> 1.000000] [34x34 -> 1.000000] [35x35 -> 1.000000] [36x36 -> 1.000000] [37x37 -> 1.000000] [38x38 -> 1.000000] [39x39 -> 1.000000] [40x40 -> 1.000000] [41x41 -> 1.000000] [42x42 -> 1.000000] [43x43 -> 1.000000] [44x44 -> 1.000000] [45x45 -> 1.000000] [46x46 -> 1.000000] [47x47 -> 1.000000] [48x48 -> 1.000000] [49x49 -> 1.000000] [50x50 -> 1.000000] [51x51 -> 1.000000] [52x52 -> 1.000000] [53x53 -> 1.000000] [54x54 -> 1.000000] [55x55 -> 1.000000] [56x56 -> 1.000000] [57x57 -> 1.000000] [58x58 -> 1.000000] [59x59 -> 1.000000] 
[60x60 -> 1.000000] [61x61 -> 1.000000] [62x62 -> 1.000000] [63x63 -> 1.000000] \n\nOK! Everything's fine. Now let's transmit the \"cameraman\" image: first, let's verify that it works\n\n\n```python\n# project the image onto the Haar basis, obtaining a vector of 4096 coefficients\n# this is simply the analysis formula for the vector space with an orthogonal basis\ntx_img = np.zeros(64*64)\nfor k in range(0, (64*64)):\n tx_img[k] = np.sum(img * haar2D(k, 64))\n\n# now rebuild the image with the synthesis formula; since the basis is orthonormal\n# we just need to scale the basis matrices by the projection coefficients\nrx_img = np.zeros((64, 64))\nfor k in range(0, (64*64)):\n rx_img += tx_img[k] * haar2D(k, 64)\n\nplt.matshow(rx_img);\n```\n\nCool, it works! Now let's see what happens if we lose the second half of the coefficients:\n\n\n```python\n# oops, we lose half the data\nlossy_img = np.copy(tx_img);\nlossy_img[int(len(tx_img)/2):] = 0\n\n# rebuild matrix\nrx_img = np.zeros((64, 64))\nfor k in range(0, (64*64)):\n rx_img += lossy_img[k] * haar2D(k, 64)\n\nplt.matshow(rx_img);\n```\n\nThat's quite remarkable, no? We've lost the same amount of information as before but the image is still acceptable. This is because we lost the coefficients associated to the fine details of the image but we retained the \"broad strokes\" encoded by the first half. \n\nNote that if we lose the first half of the coefficients the result is markedly different:\n\n\n```python\nlossy_img = np.copy(tx_img);\nlossy_img[0:int(len(tx_img)/2)] = 0\n\nrx_img = np.zeros((64, 64))\nfor k in range(0, (64*64)):\n rx_img += lossy_img[k] * haar2D(k, 64)\n\nplt.matshow(rx_img);\n```\n\nIn fact, schemes like this one are used in *progressive encoding*: send the most important information first and add details if the channel permits it. You may have experienced this while browsing the interned over a slow connection. \n\nAll in all, a great application of a change of basis!\n", "meta": {"hexsha": "2b42c599db8f0180a080ef43aefc88343be386c7", "size": 124713, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "HaarBasis/hb.ipynb", "max_stars_repo_name": "AkshayPR244/Coursera-EPFL-Digital-Signal-Processing", "max_stars_repo_head_hexsha": "bdf9c65e2c02f0a99336cbe60ebac919891e05e3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-07-24T03:16:36.000Z", "max_stars_repo_stars_event_max_datetime": "2020-09-25T10:21:00.000Z", "max_issues_repo_path": "HaarBasis/hb.ipynb", "max_issues_repo_name": "AkshayPR244/Coursera-EPFL-Digital-Signal-Processing", "max_issues_repo_head_hexsha": "bdf9c65e2c02f0a99336cbe60ebac919891e05e3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "HaarBasis/hb.ipynb", "max_forks_repo_name": "AkshayPR244/Coursera-EPFL-Digital-Signal-Processing", "max_forks_repo_head_hexsha": "bdf9c65e2c02f0a99336cbe60ebac919891e05e3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-03-23T19:37:53.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-23T19:37:53.000Z", "avg_line_length": 200.8260869565, "max_line_length": 17388, "alphanum_fraction": 0.8908533994, "converted": true, "num_tokens": 3997, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.926303732328411, "lm_q2_score": 0.8947894682067639, "lm_q1q2_score": 0.8288468240480794}} {"text": "# Penalised Regression\n\n## YouTube Videos\n1. **Scikit Learn Linear Regression:** https://www.youtube.com/watch?v=EvnpoUTXA0E\n2. **Scikit Learn Linear Penalise Regression:** https://www.youtube.com/watch?v=RhsEAyDBkTQ\n\n## Introduction\nWe often do not want the coefficients/ weights to be too large. Hence we append the loss function with a penalty function to discourage large values of $w$.\n\n\\begin{align}\n\\mathcal{L} & = \\sum_{i=1}^N (y_i-f(x_i|w,b))^2 + \\alpha \\sum_{j=1}^D w_j^2 + \\beta \\sum_{j=1}^D |w_j|\n\\end{align}\nwhere, $f(x_i|w,b) = wx_i+b$. The values of $\\alpha$ and $\\beta$ are positive (or zero), with higher values enforcing the weights to be closer to zero.\n\n## Lesson Structure\n1. The task of this lesson is to infer the weights given the data (observations, $y$ and inputs $x$).\n2. We will be using the module `sklearn.linear_model`.\n\n\n```python\nfrom IPython.display import YouTubeVideo\nYouTubeVideo(\"EvnpoUTXA0E\")\n```\n\n\n\n\n\n\n\n\n\n\n\n```python\nYouTubeVideo(\"RhsEAyDBkTQ\")\n```\n\n\n\n\n\n\n\n\n\n\n\n```python\nimport numpy as np\nfrom sklearn.linear_model import LinearRegression\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n# In order to reproduce the exact same number we need to set the seed for random number generators:\nnp.random.seed(1)\n```\n\nA normally distributed random looks as follows:\n\n\n```python\ne = np.random.randn(10000,1)\nplt.hist(e,100) #histogram with 100 bins\nplt.ylabel('y')\nplt.xlabel('x')\nplt.title('Histogram of Normally Distributed Numbers')\nplt.show()\n```\n\nGenerate observations $y$ given feature (design) matrix $X$ according to:\n$$\ny = Xw + \\xi\\\\\n\\xi_i \\sim \\mathcal{N}(0,\\sigma^2)\n$$\n\nIn this particular case, $w$ is a 100 dimensional vector where 90% of the numbers are zero. i.e. only 10 of the numbers are non-zero.\n\n\n```python\n# Generate the data\nN = 40 # Number of observations\nD = 100 # Dimensionality\n\nx = np.random.randn(N,D) # get random observations of x\nw_true = np.zeros((D,1)) # create a weight vector of zeros\nidx = np.random.choice(100,10,replace=False) # randomly choose 10 of those weights\nw_true[idx] = np.random.randn(10,1) # populate then with 10 random weights\n\ne = np.random.randn(N,1) # have a noise vector\ny = np.matmul(x,w_true) + e # generate observations\n\n# create validation set:\nN_test = 50\nx_test = np.random.randn(50,D)\ny_test_true = np.matmul(x_test,w_true)\n```\n\n\n```python\nmodel = LinearRegression()\nmodel.fit(x,y)\n\n# plot the true vs estimated coeffiecients\nplt.plot(np.arange(100),np.squeeze(model.coef_))\nplt.plot(np.arange(100),w_true)\nplt.legend([\"Estimated\",\"True\"])\nplt.title('Estimated Weights')\nplt.show()\n```\n\nOne way of testing how good your model is to look at metrics. In the case of regression Mean Squared Error (MSE) is a common metric which is defined as:\n$$ \\frac{1}{N}\\sum_{i=1}^N \\xi_i^2$$ where, $\\xi_i = y_i-f(x_i|w,b)$. Furthermore it is best to look at the MSE on a validation set, rather than on the training dataset that we used to train the model.\n\n\n```python\ny_est = model.predict(x_test)\nmse = np.mean(np.square(y_test_true-y_est))\nprint(mse)\n```\n\n 6.808967330386871\n\n\nRidge regression is where you penalise the weights by setting the $\\alpha$ parameter right at the top. 
It penalises it so that the higher **the square of the weights** the higher the loss.\n\n\n```python\nfrom sklearn.linear_model import Ridge\n\nmodel = Ridge(alpha=5.0,fit_intercept = False)\nmodel.fit(x,y)\n\n# plot the true vs estimated coeffiecients\nplt.plot(np.arange(100),np.squeeze(model.coef_))\nplt.plot(np.arange(100),w_true)\nplt.legend([\"Estimated\",\"True\"])\nplt.show()\n```\n\nThis model is slightly better than without any penalty on the weights.\n\n\n```python\ny_est = model.predict(x_test)\nmse = np.mean(np.square(y_test_true-y_est))\nprint(mse)\n```\n\n 6.422880725012181\n\n\nLasso is a model that encourages weights to go to zero exactly, as opposed to Ridge regression which encourages small weights.\n\n\n```python\nfrom sklearn.linear_model import Lasso\n\nmodel = Lasso(alpha=0.1,fit_intercept = False)\nmodel.fit(x,y)\n\n# plot the true vs estimated coeffiecients\nplt.plot(np.arange(100),np.squeeze(model.coef_))\nplt.plot(np.arange(100),w_true)\nplt.legend([\"Estimated\",\"True\"])\nplt.title('Lasso regression weight inference')\nplt.show()\n```\n\nThe MSE is significantly better than both the above models.\n\n\n```python\ny_est = model.predict(x_test)[:,None]\nmse = np.mean(np.square(y_test_true-y_est))\nprint(mse)\n```\n\n 2.306001340842457\n\n\nAutomated Relevance Determination (ARD) regression is similar to lasso in that it encourages zero weights. However, the advantage is that you do not need to set a penalisation parameter, $\\alpha$, $\\beta$ in this model.\n\n\n```python\nfrom sklearn.linear_model import ARDRegression\n\nmodel = ARDRegression(fit_intercept = False)\nmodel.fit(x,y)\n\n# plot the true vs estimated coeffiecients\nplt.plot(np.arange(100),np.squeeze(model.coef_))\nplt.plot(np.arange(100),w_true)\nplt.legend([\"Estimated\",\"True\"])\nplt.show()\n```\n\n\n```python\ny_est = model.predict(x_test)[:,None]\nmse = np.mean(np.square(y_test_true-y_est))\nprint(mse)\n```\n\n 2.870211080521882\n\n\n### Note:\nRerun the above with setting N=400\n\n## Inverse Problems\nThe following section is optional and you may skip it. It is not necessary for understanding Deep Learning.\n\nInverse problems are where given the outputs you are required to infer the inputs. A typical example is X-rays. Given the x-ray sensor readings, the algorithm needs to build an image of an individuals bone structure.\n\nSee [here](http://scikit-learn.org/stable/auto_examples/applications/plot_tomography_l1_reconstruction.html#sphx-glr-auto-examples-applications-plot-tomography-l1-reconstruction-py) for an example of l1 reguralisation applied to a compressed sensing problem (has a resemblance to the x-ray problem). 
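\n\nAs a final, optional aside (this block is not part of the original lesson): the Lasso example above used a fixed penalty `alpha=0.1` without saying how that value might be chosen. One common approach is to sweep the penalty strength and track the validation MSE. The sketch below assumes the `x`, `y`, `x_test` and `y_test_true` arrays defined earlier in this notebook and reuses the same scikit-learn `Lasso` estimator; the particular grid of `alpha` values is an arbitrary choice.\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom sklearn.linear_model import Lasso\n\n# sweep the penalty strength and record the validation MSE for each value\nalphas = np.logspace(-3, 1, 30)\nmses = []\nfor alpha in alphas:\n    model = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)\n    model.fit(x, y)\n    y_est = model.predict(x_test)[:, None]\n    mses.append(np.mean(np.square(y_test_true - y_est)))\n\nplt.semilogx(alphas, mses)\nplt.xlabel('alpha')\nplt.ylabel('validation MSE')\nplt.title('Lasso: validation error vs penalty strength')\nplt.show()\n```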
\n\n\n```python\n\n```\n", "meta": {"hexsha": "cd0e99af334b97fc4b30cc3eb06112d86c732383", "size": 127139, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "jupyter/Keras_TensorFlow_Course/Lesson 01 - PenalisedRegression - Solutions.ipynb", "max_stars_repo_name": "multivacplatform/multivac-dl", "max_stars_repo_head_hexsha": "54cb33960ba14f32ed9ac185a4c151a6b72a97ca", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2018-11-24T10:47:49.000Z", "max_stars_repo_stars_event_max_datetime": "2018-11-24T10:47:49.000Z", "max_issues_repo_path": "jupyter/Keras_TensorFlow_Course/Lesson 01 - PenalisedRegression - Solutions.ipynb", "max_issues_repo_name": "multivacplatform/multivac-dl", "max_issues_repo_head_hexsha": "54cb33960ba14f32ed9ac185a4c151a6b72a97ca", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "jupyter/Keras_TensorFlow_Course/Lesson 01 - PenalisedRegression - Solutions.ipynb", "max_forks_repo_name": "multivacplatform/multivac-dl", "max_forks_repo_head_hexsha": "54cb33960ba14f32ed9ac185a4c151a6b72a97ca", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 263.7738589212, "max_line_length": 29968, "alphanum_fraction": 0.9214481788, "converted": true, "num_tokens": 1528, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9481545377452442, "lm_q2_score": 0.8740772253241802, "lm_q1q2_score": 0.8287602875308937}} {"text": "# SciPy\nSciPy is a collection of mathematical algorithms and convenience functions. In this this notebook there are just a few examples of the features that are most important to us. But if you want to see all that SciPy has to offer, have a look at the [official documentation](https://docs.scipy.org/doc/scipy/reference/).\n\nSince SciPy has several sublibraries, it is commom practice to import just the one we are going to use, as you'll in the following examples.\n\n\n```python\nimport numpy as np\nimport matplotlib as mpl # ignore this for now\nimport matplotlib.pyplot as plt # ignore this for now\n```\n\n# Interpolation\nThere are several general interpolation facilities available in SciPy, for data in 1, 2, and higher dimensions. First, let's generate some sample data.\n\n\n```python\nx = np.linspace(0, 10, num=11, endpoint=True)\ny = np.cos(-x**2/9.0)\n\nplt.scatter(x,y)\n```\n\nThe `interp1d` funtions grabs data points and **returns a *function***. The default interpolation method is the linear interpolation, but there are several to choose from.\n\n\n```python\nfrom scipy.interpolate import interp1d\n\nf1 = interp1d(x, y) # linear is the default\nf2 = interp1d(x, y, kind='cubic') # cubic splines\nf3 = interp1d(x, y, kind='nearest') # grab the nearest value\n```\n\nNow that we have the interpolated function, lets generate a tighter grid in the x axis and plot the resulto of the different interpolation methods.\n\n\n```python\nxnew = np.linspace(0, 10, num=101, endpoint=True)\nxnew\n```\n\n\n\n\n array([ 0. , 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1. ,\n 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2. , 2.1,\n 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9, 3. , 3.1, 3.2,\n 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, 4. , 4.1, 4.2, 4.3,\n 4.4, 4.5, 4.6, 4.7, 4.8, 4.9, 5. , 5.1, 5.2, 5.3, 5.4,\n 5.5, 5.6, 5.7, 5.8, 5.9, 6. 
, 6.1, 6.2, 6.3, 6.4, 6.5,\n 6.6, 6.7, 6.8, 6.9, 7. , 7.1, 7.2, 7.3, 7.4, 7.5, 7.6,\n 7.7, 7.8, 7.9, 8. , 8.1, 8.2, 8.3, 8.4, 8.5, 8.6, 8.7,\n 8.8, 8.9, 9. , 9.1, 9.2, 9.3, 9.4, 9.5, 9.6, 9.7, 9.8,\n 9.9, 10. ])\n\n\n\n\n```python\nplt.plot(x, y, 'o', xnew, f1(xnew), '-', xnew, f2(xnew), '--', xnew, f3(xnew), '-.')\nplt.legend(['data', 'linear', 'cubic', 'nearest'], loc='best')\nplt.show()\n```\n\nThe `interpolate` sublibrary also has interpolation methods for multivariate data and has **integration with pandas**. Have a look at the documentation.\n\n---\n# Definite Integrals\nThe function `quad` is provided to integrate a function of one variable between two points. This functions has 2 outputs, the first one is the computed integral value and the second is an estimate of the absolute error.\n\nAs an example consider the following integral:\n\n$$\n\\int_{0}^{2} x^{2}dx\n$$\n\n\n```python\nimport scipy.integrate as integrate\n\ndef my_func(x):\n return x**2\n\nintegrate.quad(my_func, 0, 2)\n```\n\n\n\n\n (2.666666666666667, 2.960594732333751e-14)\n\n\n\nThe `quad` functions also allows for infinite limits.\n\n$$\n\\int_{-\\infty}^{\\infty} e^{-x^{2}}dx\n$$\n\n\n```python\ndef my_func(x):\n return np.exp(-x**2)\n\nintegrate.quad(my_func, -np.inf, np.inf)\n```\n\n\n\n\n (1.7724538509055159, 1.4202636781830878e-08)\n\n\n\nSciPy's `integrate` library also has functions for double and triple integrals. Check them out in the documentations.\n\n# Optimization\nThe `scipy.optimize` package provides several commonly used optimization algorithms. Here we are going to use just one to illustrate.\n\nConsider that you have 3 assets available. Their expected returns, risks (standard-deviations) and betas are on the table bellow and $\\rho$ is the correlation matrix of the returns.\n\n| Asset | Return | Risk | Beta |\n|-------|--------|------|------|\n|A |3% | 10% | 0.5 |\n|B |3.5% | 11% | 1.2 |\n|C |5% | 15% | 1.8 |\n\n$$\n\\rho = \n\\begin{bmatrix}\n1 & 0.3 & -0.6 \\\\\n0.3 & 1 & 0 \\\\\n-0.6 & 0 & 1 \n\\end{bmatrix}\n$$\n\nUse the `minimize` function to find the weights of each asset that maximizes it's Sharpe index.\n\n\n```python\nretu = np.array([0.03, 0.035, 0.05])\nrisk = np.array([0.10, 0.11, 0.15])\nbeta = np.array([0.5, 1.2, 1.8])\n\ncorr = np.array([[1, 0.3, -0.6], \n [0.3, 1, 0],\n [-0.6, 0, 1]])\n\ndef port_return(w):\n return retu.dot(w)\n\ndef port_risk(w):\n covar = np.diag(risk).dot(corr).dot(np.diag(risk))\n return (w.dot(covar).dot(w))**0.5\n\ndef port_sharpe(w):\n return -1*(port_return(w) / port_risk(w)) # The -1 is because we want to MINIMIZE the negative of the Sharpe\n\ndef port_weight(w):\n return w.sum()\n```\n\n\n```python\nw_teste = np.array([0.5, 0.30, 0.2])\nprint(port_return(w_teste))\nprint(port_risk(w_teste))\nprint(port_sharpe(w_teste))\nprint(port_weight(w_teste))\n```\n\n 0.035500000000000004\n 0.06065476073648301\n -0.5852796972397789\n 1.0\n\n\nWhen declaring an optimization problem with inequality restrictions, they have the form of:\n\n$$\n\\begin{align*}\n\\min_{w} & f\\left(w\\right)\\\\\ns.t. & \\quad g\\left(w\\right)\\geq0\n\\end{align*}\n$$\n\n\n```python\nfrom scipy.optimize import minimize\n\neq_cons = {'type': 'eq',\n 'fun' : lambda w: port_weight(w) - 1}\n\nw0 = np.array([1, 0, 0])\n\nres = minimize(port_sharpe, w0, method='SLSQP', constraints=eq_cons, options={'ftol': 1e-9, 'disp': True})\n```\n\n Optimization terminated successfully. 
(Exit mode 0)\n Current function value: -0.7140791324512301\n Iterations: 7\n Function evaluations: 37\n Gradient evaluations: 7\n\n\n\n```python\nres.x\n```\n\n\n\n\n array([0.54864871, 0.06613309, 0.3852182 ])\n\n\n\n\n```python\nres.x.sum()\n```\n\n\n\n\n 1.0\n\n\n\n\n```python\nres.fun\n```\n\n\n\n\n -0.7140791324512301\n\n\n\n# Linear Algebra (again)\n`scipy.linalg` contains all the functions in `numpy.linalg` plus some more advanced ones.\n\n\n```python\nfrom scipy import linalg as la\n\nA = np.array([[1,3,5],[2,5,1],[2,3,8]])\nla.inv(A)\n```\n\n\n\n\n array([[-1.48, 0.36, 0.88],\n [ 0.56, 0.08, -0.36],\n [ 0.16, -0.12, 0.04]])\n\n\n\nMatrix and vector **norms** can also be computed with SciPy. A wide range of norm definitions are available using different parameters to the order argument of `linalg.norm`.\n\n\n```python\nA = np.array([[1, 2], [3, 4]])\nprint(la.norm(A)) # frobenius norm is the default.\nprint(la.norm(A, 1)) # L1 norm (max column sum)\nprint(la.norm(A, np.inf)) # L inf norm (max row sum)\n```\n\n 5.477225575051661\n 6.0\n 7.0\n\n\nSome more advanced matrix decompositions are also available, like the **Schur Decomposition**\n\n\n```python\nla.schur(A)\n```\n\n\n\n\n (array([[-0.37228132, -1. ],\n [ 0. , 5.37228132]]), array([[-0.82456484, -0.56576746],\n [ 0.56576746, -0.82456484]]))\n\n\n\nSome notable matrices can also be created, like block **diagonal matrices**.\n\n\n```python\nA = np.array([[1, 0],\n [0, 1]])\n\nB = np.array([[3, 4, 5],\n [6, 7, 8]])\n\nC = np.array([[7]])\n\nla.block_diag(A, B, C)\n```\n\n\n\n\n array([[1, 0, 0, 0, 0, 0],\n [0, 1, 0, 0, 0, 0],\n [0, 0, 3, 4, 5, 0],\n [0, 0, 6, 7, 8, 0],\n [0, 0, 0, 0, 0, 7]], dtype=int32)\n\n\n\n# Solving Linear Systems\n\n\n$$\n\\begin{align}\nx+3y+5 & =10\\\\\n2x+5y+z & =8\\\\\n2x+3y+8z & =3\n\\end{align}\n$$\n\nThe system above can be written with matrix notation as $AX=B$ and we know we can find the solution by doing $X=A^{-1}B$, but inverting a matrix is computationally expensive. When solving big linear system it is advised to use the `solve` method.\n\n\n```python\nA = np.array([[1, 3, 5], [2, 5, 1], [2, 3, 8]])\nB = np.array([[10], [8], [3]])\n```\n\nLets check the time that it takes to solve the system in both ways...\n\n\n```python\nla.inv(A).dot(B)\n```\n\n\n\n\n array([[-9.28],\n [ 5.16],\n [ 0.76]])\n\n\n\n\n```python\nla.solve(A, B)\n```\n\n\n\n\n array([[-9.28],\n [ 5.16],\n [ 0.76]])\n\n\n\nlet's try with a bigger matrix\n\n\n```python\nimport numpy.random as rnd\nA = rnd.random((2000, 2000))\nB = rnd.random((2000, 1))\n```\n\n\n```python\n%%timeit\nla.inv(A).dot(B)\n```\n\n 154 ms \u00b1 1.58 ms per loop (mean \u00b1 std. dev. of 7 runs, 10 loops each)\n\n\n\n```python\n%%timeit\nla.solve(A, B)\n```\n\n 123 ms \u00b1 539 \u00b5s per loop (mean \u00b1 std. dev. 
of 7 runs, 10 loops each)\n\n", "meta": {"hexsha": "c8c4340334702fee26a0126db2bce5c25922ff46", "size": 55728, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Python Lectures/Section 03 - SciPy.ipynb", "max_stars_repo_name": "Finance-Hub/FinanceHubMaterials", "max_stars_repo_head_hexsha": "e06cae52ac34873413e946810ad5e6bf79b1c0dc", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 38, "max_stars_repo_stars_event_min_datetime": "2019-11-12T04:52:31.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-12T09:27:08.000Z", "max_issues_repo_path": "Python Lectures/Section 03 - SciPy.ipynb", "max_issues_repo_name": "antoniosalomao/FinanceHubMaterials", "max_issues_repo_head_hexsha": "0e2aead9c2a7c92a6826b6b47970afbfa30fb1b2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-04-04T03:03:17.000Z", "max_issues_repo_issues_event_max_datetime": "2020-04-04T03:03:17.000Z", "max_forks_repo_path": "Python Lectures/Section 03 - SciPy.ipynb", "max_forks_repo_name": "antoniosalomao/FinanceHubMaterials", "max_forks_repo_head_hexsha": "0e2aead9c2a7c92a6826b6b47970afbfa30fb1b2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 20, "max_forks_repo_forks_event_min_datetime": "2019-06-28T15:35:10.000Z", "max_forks_repo_forks_event_max_datetime": "2021-04-04T02:34:10.000Z", "avg_line_length": 79.0468085106, "max_line_length": 31096, "alphanum_fraction": 0.8171655182, "converted": true, "num_tokens": 3016, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9161096204605946, "lm_q2_score": 0.9046505395995929, "lm_q1q2_score": 0.8287590624820551}} {"text": "Hello world.\n\n###A. Polarization of the liquid crystal\n\nIf the electric field $\\mathbf{E}$ is applied at some angle with respect to the director $\\hat{n}$, there will be a component of the electric field $\\mathbf{E}_\\parallel$ parallel to $\\hat{n}$ and a component $\\mathbf{E}_\\perp$ perpendicular to $\\hat{n}$. Show that the polarization $\\mathbf{P} = \\epsilon_0 ( \\chi_\\parallel \\mathbf{E}_\\parallel + \\chi_\\perp \\mathbf{E}_\\perp)$ induced in the liquid crystal material can be written as \n\n\\begin{equation}\n\\mathbf{P} = \\epsilon_0[\\chi_\\perp\\mathbf{E} + \\chi_a ( \\hat{n} \\cdot \\mathbf{E})\\hat{n}].\n\\end{equation}\n\nTo start, we have\n\n$$\\mathbf{P} = \\epsilon_0 ( \\chi_\\parallel \\mathbf{E}_\\parallel + \\chi_\\perp \\mathbf{E}_\\perp).$$\n\nSince $\\chi_a = \\chi_\\parallel - \\chi_\\perp$, we get $$\\chi_\\parallel = \\chi_a + \\chi_\\perp.$$\n\nNow $\\mathbf{P} = \\epsilon_0 ( \\chi_\\parallel \\mathbf{E}_\\parallel + \\chi_\\perp \\mathbf{E}_\\perp)$ becomes\n\n$$\\mathbf{P} = \\epsilon_0 ( (\\chi_a + \\chi_\\perp) \\mathbf{E}_\\parallel + \\chi_\\perp \\mathbf{E}_\\perp).$$\n\nDistributing and since $\\mathbf{E} = \\mathbf{E}_\\parallel + \\mathbf{E}_\\perp$, we get \n\n$$\\mathbf{P} = \\epsilon_0 ( \\chi_a \\mathbf{E}_\\parallel + \\chi_\\perp \\mathbf{E}).$$\n\nBy dot product rules, $$\\mathbf{E}_\\parallel = \\hat{n} \\cdot \\mathbf{E}.$$\n\nNow we come to the final equation $$\\mathbf{P} = \\epsilon_0[\\chi_\\perp\\mathbf{E} + \\chi_a ( \\hat{n} \\cdot \\mathbf{E})\\hat{n}]. \\: (1)$$\n\n###B. Electric potential energy of liquid crystals\n\nUse the expression in Eq. (1) to find the potential energy per unit volume of the system. 
This can be done by calculating the work done by the electric field in turning the director from perpendicular to the field (taking that the orientation to be $\\theta = 0$) to some angle $\\theta$. This is the integral with respect to $\\theta$ of the torque per unit volume, $\\mathbf{P} \\times \\mathbf{E}$. The electric potential energy density $U_e$, taken to be zero at $\\theta = 0$, will then be the negative of the work done by the field. Show that \n$$U_e = -\\frac{1}{2} \\epsilon_0 \\chi_a E^2 \\sin^2\\theta. \\: (2)$$\n\nTo start\n\n$$\\mathbf{P} \\times \\mathbf{E} = \\epsilon_0[\\chi_\\perp\\mathbf{E} + \\chi_a ( \\hat{n} \\cdot \\mathbf{E})\\hat{n}] \\times \\mathbf{E}.$$\n\nSome definitions:\n\n$$ \\hat{n} \\cdot \\mathbf{E} = \\lVert\\hat{n}\\rVert \\lVert\\mathbf{E}\\rVert \\cos\\theta = E\\cos\\theta$$\n\n$$ \\hat{n} \\times \\mathbf{E} = \\lVert\\hat{n}\\rVert \\lVert\\mathbf{E}\\rVert \\sin\\theta = E\\sin\\theta$$\n\nsince $\\lVert\\hat{n}\\rVert = 1.$\n\n$$ \\mathbf{E}\\times\\mathbf{E} = 0 $$\n\nWith these defined above, $\\mathbf{P} \\times \\mathbf{E}$ becomes\n\n$$\\mathbf{P} \\times \\mathbf{E} = \\epsilon_0\\chi_aE^2\\cos\\theta\\sin\\theta $$\n\nThe integral looks like this:\n\n$$ U_e = \\int_0^\\theta \\epsilon_0\\chi_aE^2\\cos\\theta\\sin\\theta d\\theta$$\n\nWith a $u$ sub of $\\sin$ and knowing the antiderivative of $\\sin$ is $-\\cos$, we get:\n\n$$U_e = -\\frac{1}{2} \\epsilon_0 \\chi_a E^2 \\sin^2\\theta.$$\n\n###C. Distortion energy produced by surfaces\n\nThe effect of the surfaces is to make the $x$ and $y$ components of the director, i.e. $n_x$ and $n_y$, vary as functions of $y$. In other words, $\\frac{dn_x}{dy}$ and $\\frac{dn_y}{dy}$ become nonzero; there is no variation of $\\hat{n}$ in the $x$ direction. These derivatives are measures of the amount of distortion introduced by applying an electric field to a liquid crystal that has the directions of its molecules anchored at the surfaces. If the distortion is not too large, the analogy with a simple harmonic oscillator, the energy density increases with the square of the distortion, and we can write the energy density due to distortion as $$U_d = \\frac{1}{2}k\\left(\\frac{dn_x}{dy}\\right)^2 + \\frac{1}{2}k\\left(\\frac{dn_y}{dy}\\right)^2. \\: (3)$$\n\nSince the two terms represent different types of distortion, there is no reason to expect that the coefficient $k$ is the same for both. Nevertheless, the simplifying assumption is that they are the same is a fairly good approximation in many cases.\n\nUse the fact that $\\hat{n}$ is a unit vector to show that the total energy density is, $U_d+U_e$, can be expressed as \n\n$$U = \\frac{1}{2}k\\left(\\frac{d\\theta}{dy}\\right)^2 - \\frac{1}{2}\\epsilon_0\\chi_aE^2\\sin^2\\theta. 
\\: (4)$$\n\nTo start\n\n$$ U = \\frac{1}{2}k\\left(\\frac{dn_x}{dy}\\right)^2 + \\frac{1}{2}k\\left(\\frac{dn_y}{dy}\\right)^2 - \\frac{1}{2}\\epsilon_0\\chi_aE^2\\sin^2\\theta$$\n\nThe components, $n_x$ and $n_y$, are\n\n$$n_x = \\cos\\theta$$ and $$n_y = \\sin\\theta$$ since $\\lVert\\hat{n}\\rVert = 1$.\n\nThen the derivatives of $\\theta$ with respect to $y$ are\n$$\\frac{dn_x}{dy} = -\\sin\\theta\\frac{d\\theta}{dy}$$ and $$\\frac{dn_y}{dy} = \\cos\\theta\\frac{d\\theta}{dy}.$$ \n\nSquaring and adding both of them yields the familiar trig identity\n\n$$\\left(-\\sin\\theta\\frac{d\\theta}{dy}\\right)^2 + \\left(\\cos\\theta\\frac{d\\theta}{dy}\\right)^2 = \\left(\\frac{d\\theta}{dy}\\right)^2(\\sin^2\\theta + \\cos^2\\theta) = \\left(\\frac{d\\theta}{dy}\\right)^2.$$\n\nNow\n\n$$ U = \\frac{1}{2}k\\left(\\frac{d\\theta}{dy}\\right)^2 - \\frac{1}{2}\\epsilon_0\\chi_aE^2\\sin^2\\theta.$$\n\n###D. Distribution of the director\n\n(a). Solving the extrema of director involves using calculus of variations and the Euler-Lagrange equation. The function $\\theta(y)$ minimizes the total energy.\n\nWe must minimize $U$\n\n$$ U = \\frac{1}{2}k\\left(\\frac{d\\theta}{dy}\\right)^2 - \\frac{1}{2}\\epsilon_0\\chi_aE^2\\sin^2\\theta.$$\n\nThe E-L equation is, for this problem, defined as \n\n$$\\frac{\\partial U}{\\partial \\theta} - \\frac{d}{dy}\\frac{\\partial U}{\\partial \\left(d\\theta/dy\\right)} = 0 $$\n\nThe first part is\n\n$$\\frac{\\partial U}{\\partial \\theta} = -\\epsilon_0\\chi_aE^2\\sin^2\\theta$$\n\nThe second is\n\n$$\\frac{d}{dy}\\frac{\\partial U}{\\partial \\left(d\\theta/dy\\right)} = \\frac{d}{dy}\\left(k\\left(\\dfrac{d\\theta}{dy}\\right)\\right) = k\\left(\\dfrac{d^2\\theta}{dy^2}\\right)$$\n\nPutting the parts together\n\n$$-\\epsilon_0\\chi_aE^2\\sin\\theta\\cos\\theta - k\\left(\\dfrac{d^2\\theta}{dy^2}\\right) = 0$$\n\nWhich can be written as $$k\\left(\\dfrac{d^2\\theta}{dy^2}\\right)+\\epsilon_0\\chi_aE^2\\sin\\theta\\cos\\theta=0$$\n\nSince $\\xi^2 = \\frac{k}{\\epsilon_0\\chi_aE^2}$, we can divide by the denominator and change the $y$ variable to $y/d$ to get $$\\xi_d\\left(\\dfrac{d^2\\theta}{dy^2}\\right)+\\sin\\theta\\cos\\theta=0. 
\\: (5)$$\n\nHUNTER\n", "meta": {"hexsha": "f4385a47ee648b4407a13cf5ff7b48ce8ad2ac2a", "size": 8484, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Nematic/Liquid_Crystal_Displays_Collings.ipynb", "max_stars_repo_name": "brettavedisian/Liquid-Crystals", "max_stars_repo_head_hexsha": "c7c6eaec594e0de8966408264ca7ee06c2fdb5d3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Nematic/Liquid_Crystal_Displays_Collings.ipynb", "max_issues_repo_name": "brettavedisian/Liquid-Crystals", "max_issues_repo_head_hexsha": "c7c6eaec594e0de8966408264ca7ee06c2fdb5d3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Nematic/Liquid_Crystal_Displays_Collings.ipynb", "max_forks_repo_name": "brettavedisian/Liquid-Crystals", "max_forks_repo_head_hexsha": "c7c6eaec594e0de8966408264ca7ee06c2fdb5d3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 48.2045454545, "max_line_length": 776, "alphanum_fraction": 0.5636492221, "converted": true, "num_tokens": 2194, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9046505376715774, "lm_q2_score": 0.9161096153072138, "lm_q1q2_score": 0.828759056053773}} {"text": "### Using Numpy & Sympy: Case Study\n(created by Chenkuan Liu; this notebook can also be found at https://github.com/ck44liu/scientific-computing-python-notes/tree/main/Note2)\n\nIn previous homework, there was a problem asking the expression of the plane given three points in $\\mathbb{R}^3$. The key to this problem is to set up the expression $ax+by+cz+d=0$ (some places write it as $ax+by+cz=d$, but having $d$ on the left side will make it more straightforward in this problem), and then plug in the points to solve the linear system.\n\nAt here, we are going to extend and look at this problem from both mathematical and scientific programming perspectives. Without further ado, let's import the libraries:\n\n\n```python\nimport numpy as np\nimport sympy\n```\n\n#### Example Demo\n\nLet's say we want to determine the plane containing the points $(1,-2,4)$, $(-2,5,-3)$, and $(2,-3,7)$. After plugging them into $ax+by+cz+d=0$, we get the following system: (note that we are plugging into $x,y,z$ and trying to solve for $a,b,c,d$ )\n\n$$a-2b+4c+d=0$$\n$$-2a+5b-3c+d=0$$\n$$2a-3b-7c+d=0$$\n\nTo solve this linear system, it's better to look at its reduced row echelon form. At here, we can use numpy and sympy to do this:\n\n\n```python\n# initializing coefficient matrix\nM = np.array([[1,-2,4,1],[-2,5,-3,1],[2,-3,7,1]])\nM\n```\n\n\n\n\n array([[ 1, -2, 4, 1],\n [-2, 5, -3, 1],\n [ 2, -3, 7, 1]])\n\n\n\n\n```python\n# using sympy to compute its rref\nM = sympy.Matrix(M)\nM.rref()\n```\n\n\n\n\n (Matrix([\n [1, 0, 0, -7/3],\n [0, 1, 0, -1/3],\n [0, 0, 1, 2/3]]),\n (0, 1, 2))\n\n\n\nNote that .rref() command returns two things: the first one is the rref matrix, and the second one lists the pivot columns. At here the pivot columns are the first three columns. Python starts counting from zero, so it returns (0,1,2). 
Also, since all the values on the right side of the linear system are zeros, we only need to look at the coefficient matrix here.\n\n#### \"Meaningless\" Derivation\n\nNow let's use the rref to derive some equations by hand and let the numbers tell the tale:\n\nthe rref tells us $a-\\frac{7}{3}d=0$, $b-\\frac{1}{3}d=0$, and $c+\\frac{2}{3}d=0$, so we have $a=\\frac{7}{3}d$, $b=\\frac{1}{3}d$, $c=-\\frac{2}{3}d$, and the plane becomes: \n\n$$\\frac{7}{3}dx + \\frac{1}{3}dy - \\frac{2}{3}dz + d = 0.$$\n\nTo have a meaningful plane, we need to set $d$ be non-zero. In this way, we can divide $d$ on both sides and get:\n\n$$\\frac{7}{3}x + \\frac{1}{3}y - \\frac{2}{3}z + 1 = 0,$$ \n\nwhich is the expression of our plane.\n\n \n\nAnd now, let's do something \"meaningless\": multiply both sides by $-1$: \n\n$$-\\frac{7}{3}x - \\frac{1}{3}y + \\frac{2}{3}z - 1 = 0$$\n\nThis is still the same plane. However, it actually makes it easier for our program: the last column entries of rref are exactly the $a,b,c$ we seek given that the expression of the plane is $ax+by+cz-1=0$. Our hand derivation above shows that this is true as long as the pivot columns are the first three columns.\n\nIn this way, our program becomes more straightforward: we can just extract the last column of rref and give the expression of the plane:\n\n\n```python\n# extract last column from rref\nlast_col = M.rref()[0].col(-1) # M.rref()[0] is the actual rref, while M.rref()[1] gives the pivot columns as shown above\nlast_col\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}- \\frac{7}{3}\\\\- \\frac{1}{3}\\\\\\frac{2}{3}\\end{matrix}\\right]$\n\n\n\n\n```python\n# assign the values from the last column to a, b and c\na, b, c = last_col\nprint(a,b,c)\n```\n\n -7/3 -1/3 2/3\n\n\n\n```python\n# print the expression of the plane\nprint(\"The plane is: ({})x + ({})y + ({})z - 1 = 0\".format(a, b, c))\n```\n\n The plane is: (-7/3)x + (-1/3)y + (2/3)z - 1 = 0\n\n\n#### Plot Twist\n\nJust like nothing is perfect, not all combination of three points can be solved in the same steps. Take a look at the next example: say we want to determine the plane containing $(1,1,2)$, $(-2,-2,-3)$, and $(3,3,5)$:\n\n\n```python\nN = np.array([[1,1,2,1],[-2,-2,-3,1],[3,3,5,1]])\nN = sympy.Matrix(N)\nN.rref()\n```\n\n\n\n\n (Matrix([\n [1, 1, 0, 0],\n [0, 0, 1, 0],\n [0, 0, 0, 1]]),\n (0, 2, 3))\n\n\n\nAt here, the pivot columns are no longer the first three columns, and we get zero entries in the last column. However, if you observe the points closely, you can find that for each of the three points, we have $x$ equals to $y$. Actually the plane is just $x=y$, namely $a=1,b=-1,c=0,d=0$ if we express it in the form of $ax+by+cz+d=0$. The picture below is generated from Geogebra, which visualizes the plane $x=y$ and our three points.\n\n\n```python\nfrom PIL import Image\nim = Image.open(\"img\\plane_visualization.png\")\nim.resize((700,600))\n```\n\nHence, in such case, it's better to take a look and solve by ourselves. Though the rref becomes \"irregular\", the plane actually gets simpler and easier to visualize. 
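\n\nThat said, if we would rather stay programmatic even in this special case, one option (a sketch of ours, not part of the original demo) is to solve the homogeneous system directly: any non-zero vector in the null space of the 3 by 4 coefficient matrix gives the coefficients $(a,b,c,d)$ of $ax+by+cz+d=0$, up to a scale factor.\n\n\n```python\n# Hypothetical alternative (not in the original notebook): recover the plane from\n# the null space of the homogeneous system N @ [a, b, c, d] = 0.\nimport numpy as np\nimport sympy\n\nN = sympy.Matrix(np.array([[1,1,2,1],[-2,-2,-3,1],[3,3,5,1]])) # same matrix as above\nbasis = N.nullspace() # basis vectors of the kernel of N\na, b, c, d = basis[0] # one non-trivial solution; here it is (-1, 1, 0, 0)\nprint(\"The plane is: ({})x + ({})y + ({})z + ({}) = 0\".format(a, b, c, d))\n```\n\nFor the three points above this recovers $-x+y=0$, i.e. the plane $x=y$, matching the manual analysis.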
\n\n#### Bring it Together\n\nNow we can combine what we've got so far into a single Python function.\n\n\n```python\ndef get_plane(p1, p2, p3):\n \"\"\"\n Arguments: \n p1, p2, p3 -- the three points in 3d plane, expressed in numpy array\n \n Returns:\n a, b, c -- the coefficients of the plane ax + by + cz + 1 = 0;\n returns -1 if the pivot columns are not the first three columns\n \"\"\"\n # concatenate the numpy arrays to form a 3 by 4 coefficient matrix\n M = np.concatenate((np.array([p1,p2,p3]), np.ones((3,1), dtype=int)), axis=1)\n # convert to sympy matrix\n M = sympy.Matrix(M)\n \n # compute and print rref\n rref = M.rref()\n print(\"The rref matrix is:\\n\", rref)\n \n # compute the plane or suggest further manual analysis\n if rref[1] == (0,1,2):\n a, b, c = rref[0].col(-1)\n print(\"\\nThe plane is: ({})x + ({})y + ({})z - 1 = 0\".format(a, b, c))\n else:\n a, b, c = -1, -1, -1\n print(\"\\nSpecial case, manual analysis needed.\")\n \n return a, b, c\n```\n\nWe can check the program by creating and running different set of points:\n\n\n```python\np1 = np.array([1,-2,4])\np2 = np.array([-2,5,-3])\np3 = np.array([2,-3,7])\nget_plane(p1,p2,p3)\nprint(\"\\n\")\n\np4 = np.array([3,3,3])\np5 = np.array([1,-1,1])\np6 = np.array([-2,-2,-2])\nget_plane(p4,p5,p6)\nprint(\"\\n\")\n```\n\n The rref matrix is:\n (Matrix([\n [1, 0, 0, -7/3],\n [0, 1, 0, -1/3],\n [0, 0, 1, 2/3]]), (0, 1, 2))\n \n The plane is: (-7/3)x + (-1/3)y + (2/3)z - 1 = 0\n \n \n The rref matrix is:\n (Matrix([\n [1, 0, 1, 0],\n [0, 1, 0, 0],\n [0, 0, 0, 1]]), (0, 1, 3))\n \n Special case, manual analysis needed.\n \n \n\n\n#### Extensions\n\nThere are some further extensions can be done to this program:\n\n- If given four points $A,B,C,D$ in $\\mathbb{R}^3$, we can determine whether they reside on the same plane. Namely, we first compute the plane determined by $A,B$ and $C$, and then plug in vertices of $D$ to see whether the equation holds true. \n- We can also extend our program into higher dimensions: we can modify it so that, given $n$ points in n-dimensional space, it can compute the $(n-1)$ dimensional hyperplane determined by these $n$ points.\n", "meta": {"hexsha": "aeba4f9ab33701ef07da91be21ff5d3ae9b3b0a7", "size": 151226, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Note2/Note2.ipynb", "max_stars_repo_name": "ck44liu/scientific-computing-python-notes", "max_stars_repo_head_hexsha": "f1d2f65bbc8707fca2777700b138a5d4ecf96eef", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-01-20T05:26:22.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-09T05:07:33.000Z", "max_issues_repo_path": "Note2/Note2.ipynb", "max_issues_repo_name": "ck44liu/scientific-computing-python-notes", "max_issues_repo_head_hexsha": "f1d2f65bbc8707fca2777700b138a5d4ecf96eef", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Note2/Note2.ipynb", "max_forks_repo_name": "ck44liu/scientific-computing-python-notes", "max_forks_repo_head_hexsha": "f1d2f65bbc8707fca2777700b138a5d4ecf96eef", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 333.8322295806, "max_line_length": 138764, "alphanum_fraction": 0.9303889543, "converted": true, "num_tokens": 2310, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. 
YES\n2. YES", "lm_q1_score": 0.9324533163686646, "lm_q2_score": 0.8887587993853654, "lm_q1q2_score": 0.8287260899387167}} {"text": "# Multidimentional data - Matrices and Images\n\n\n```python\n%matplotlib inline\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom scipy import linalg\n```\n\n\n```python\nplt.style.use('ggplot')\nplt.rc('axes', grid=False) # turn off the background grid for images\n```\n\nLet us work with the matrix:\n$\n\\left[\n\\begin{array}{cc}\n1 & 2 \\\\\n1 & 1\n\\end{array}\n\\right]\n$\n\n\n```python\nmy_matrix = np.array([[1,2],[1,1]])\n\nprint(my_matrix.shape)\n```\n\n\n```python\nprint(my_matrix)\n```\n\n\n```python\nmy_matrix_transposed = np.transpose(my_matrix)\n\nprint(my_matrix_transposed)\n```\n\n\n```python\nmy_matrix_inverse = linalg.inv(my_matrix)\n\nprint(my_matrix_inverse)\n```\n\n### numpy matrix multiply uses the `dot()` function:\n\n\n```python\nmy_matrix_inverse.dot(my_matrix)\n```\n\n### Caution the `*` will just multiply the matricies on an element-by-element basis:\n\n\n```python\nmy_matrix_inverse * my_matrix_inverse\n```\n\n### Solving system of linear equations\n\n$$\n\\begin{array}{c}\nx + 2y = 4 \\\\\nx + y = 3 \\\\\n\\end{array}\n\\hspace{2cm}\n\\left[\n\\begin{array}{cc}\n1 & 2 \\\\\n1 & 1 \\\\\n\\end{array}\n\\right]\n\\left[\n\\begin{array}{c}\nx\\\\\ny\n\\end{array}\n\\right]\n=\n\\left[\n\\begin{array}{c}\n4\\\\\n3\\\\ \n\\end{array}\n\\right]\n\\hspace{2cm}\n{\\bf A}x = {\\bf b}\n\\hspace{2cm}\n\\left[\n\\begin{array}{c}\nx\\\\\ny\n\\end{array}\n\\right]\n=\n\\left[\n\\begin{array}{cc}\n1 & 2 \\\\\n1 & 1 \\\\\n\\end{array}\n\\right]^{-1}\n\\left[\n\\begin{array}{c}\n4\\\\\n3\\\\ \n\\end{array}\n\\right]\n=\n\\left[\n\\begin{array}{c}\n2\\\\\n1\n\\end{array}\n\\right]\n$$\n\n\n```python\nA = np.array([[1,2],[1,1]])\n\nprint(A)\n```\n\n\n```python\nb = np.array([[4],[3]])\n\nprint(b)\n```\n\n\n```python\n# Solve by inverting A and then mulitply by b\n\nlinalg.inv(A).dot(b) \n```\n\n\n```python\n# Cleaner looking\n\nlinalg.solve(A,b)\n```\n\n### System of 3 equations example (Numpy):\n\n$$\n\\begin{array}{c}\nx + 3y + 5z = 10 \\\\\n2x + 5y + z = 8 \\\\\n2x + 3y + 8z = 3 \\\\\n\\end{array}\n\\hspace{3cm}\n\\left[\n\\begin{array}{ccc}\n1 & 3 & 5 \\\\\n2 & 5 & 1 \\\\\n2 & 3 & 8 \n\\end{array}\n\\right]\n\\left[\n\\begin{array}{c}\nx\\\\\ny\\\\\nz \n\\end{array}\n\\right]\n=\n\\left[\n\\begin{array}{c}\n10\\\\\n8\\\\\n3 \n\\end{array}\n\\right]\n$$\n\n\n```python\nA = np.array([[1,3,5],[2,5,1],[2,3,8]])\nb = np.array([[10],[8],[3]])\n\nprint(linalg.inv(A))\n\nprint(linalg.solve(A,b))\n```\n\n### System of 3 equations example (SymPy) - Python's Symbolic Math Package\n\n\n```python\nimport sympy as sym\n\nAA = sym.Matrix([[1,3,5],[2,5,1],[2,3,8]])\nbb = sym.Matrix([[10],[8],[3]])\n\nprint(AA**-1)\n\nprint(AA**-1 * bb)\n```\n\n### SymPy is slower than NumPy\n\n\n```python\n%timeit AA**-1 * bb\n%timeit linalg.solve(A,b)\n```\n\n# Images are just 2-d arrays - `imshow` will display 2-d arrays as images\n\n\n```python\nprint(A)\n\nplt.imshow(A, interpolation='nearest', cmap=plt.cm.Blues);\n```\n\n### Read in some data\n\n\n```python\nI = np.load(\"test_data.npy\") # load in a saved numpy array\n```\n\n\n```python\nI.ndim, I.shape, I.dtype\n```\n\n\n```python\nprint(\"The minimum value of the array I is {0:.2f}\".format(I.min()))\nprint(\"The maximum value of the array I is {0:.2f}\".format(I.max()))\nprint(\"The mean value of the array I is {0:.2f}\".format(I.mean()))\nprint(\"The standard deviation of the array I is 
{0:.2f}\".format(I.std()))\n```\n\n\n```python\n#flatten() collapses n-dimentional data into 1-d\n\nplt.hist(I.flatten(),bins=30);\n```\n\n### Math on images applies to every value (pixel)\n\n\n```python\nII = I + 8\n\nprint(\"The minimum value of the array II is {0:.2f}\".format(II.min()))\nprint(\"The maximum value of the array II is {0:.2f}\".format(II.max()))\nprint(\"The mean value of the array II is {0:.2f}\".format(II.mean()))\nprint(\"The standard deviation of the array II is {0:.2f}\".format(II.std()))\n```\n\n### Show the image represenation of `I` with a colorbar\n\n\n```python\nplt.imshow(I, cmap=plt.cm.gray)\nplt.colorbar();\n```\n\n### Colormap reference: http://matplotlib.org/examples/color/colormaps_reference.html\n\n\n```python\nfig, ax = plt.subplots(1,5,sharey=True)\n\nfig.set_size_inches(12,6)\n\nfig.tight_layout()\n\nax[0].imshow(I, cmap=plt.cm.viridis)\nax[0].set_xlabel('viridis')\n\nax[1].imshow(I, cmap=plt.cm.hot)\nax[1].set_xlabel('hot')\n\nax[2].imshow(I, cmap=plt.cm.magma)\nax[2].set_xlabel('magma')\n\nax[3].imshow(I, cmap=plt.cm.spectral)\nax[3].set_xlabel('spectral')\n\nax[4].imshow(I, cmap=plt.cm.gray)\nax[4].set_xlabel('gray')\n```\n\n## WARNING! Common image formats DO NOT preserve dynamic range of original data!!\n- Common image formats: jpg, gif, png, tiff\n- Common image formats will re-scale your data values to [0:1]\n- Common image formats are **NOT** suitable for scientific data!\n\n\n```python\nplt.imsave('Splash.png', I, cmap=plt.cm.gray) # Write the array I to a PNG file\n\nIpng = plt.imread('Splash.png') # Read in the PNG file\n\nprint(\"The original data has a min = {0:.2f} and a max = {1:.2f}\".format(I.min(), I.max()))\n\nprint(\"The PNG file has a min = {0:.2f} and a max = {1:.2f}\".format(Ipng.min(), Ipng.max()))\n```\n\n## Creating images from math\n\n\n```python\nX = np.linspace(-5, 5, 500)\nY = np.linspace(-5, 5, 500)\n\nX, Y = np.meshgrid(X, Y) # turns two 1-d arrays (X, Y) into one 2-d grid\n\nZ = np.sqrt(X**2+Y**2)+np.sin(X**2+Y**2)\n\nZ.min(), Z.max(), Z.mean()\n```\n\n### Fancy Image Display\n\n\n```python\nfrom matplotlib.colors import LightSource\n\nls = LightSource(azdeg=0,altdeg=40)\nshadedfig = ls.shade(Z,plt.cm.copper)\n\nfig, ax = plt.subplots(1,3)\n\nfig.set_size_inches(12,6)\n\nfig.tight_layout()\n\nax[0].imshow(shadedfig)\n\ncontlevels = [1,2,Z.mean()]\n\nax[1].axis('equal')\nax[1].contour(Z,contlevels)\n\nax[2].imshow(shadedfig)\nax[2].contour(Z,contlevels);\n```\n\n### Reading in images (`imread`) - Common Formats\n\n\n```python\nI2 = plt.imread('doctor5.png')\n\nprint(\"The image I2 has a shape [height,width] of {0}\".format(I2.shape))\nprint(\"The image I2 is made up of data of type {0}\".format(I2.dtype))\nprint(\"The image I2 has a maximum value of {0}\".format(I2.max()))\nprint(\"The image I2 has a minimum value of {0}\".format(I2.min()))\n```\n\n\n```python\nplt.imshow(I2,cmap=plt.cm.gray);\n```\n\n## Images are just arrays that can be sliced. 
\n\n- ### For common image formats the origin is the upper left hand corner\n\n\n```python\nfig, ax = plt.subplots(1,4)\nfig.set_size_inches(12,6)\n\nfig.tight_layout()\n\n# You can show just slices of the image - Rememeber: The origin is the upper left corner\n\nax[0].imshow(I2, cmap=plt.cm.gray)\nax[0].set_xlabel('Original')\n\nax[1].imshow(I2[0:300,0:100], cmap=plt.cm.gray)\nax[1].set_xlabel('[0:300,0:100]') # 300 rows, 100 columns\n\nax[2].imshow(I2[:,0:100], cmap=plt.cm.gray) # \":\" = whole range\nax[2].set_xlabel('[:,0:100]') # all rows, 100 columns\n\nax[3].imshow(I2[:,::-1], cmap=plt.cm.gray);\nax[3].set_xlabel('[:,::-1]') # reverse the columns\n```\n\n\n```python\nfig, ax = plt.subplots(1,2)\nfig.set_size_inches(12,6)\n\nfig.tight_layout()\n\nCutLine = 300\n\nax[0].imshow(I2, cmap=plt.cm.gray)\nax[0].hlines(CutLine, 0, 194, color='b', linewidth=3)\n\nax[1].plot(I2[CutLine,:], color='b', linewidth=3)\nax[1].set_xlabel(\"X Value\")\nax[1].set_ylabel(\"Pixel Value\")\n```\n\n## Simple image manipulation\n\n\n```python\nfrom scipy import ndimage\n```\n\n\n```python\nfig, ax = plt.subplots(1,5)\nfig.set_size_inches(14,6)\n\nfig.tight_layout()\n\nax[0].imshow(I2, cmap=plt.cm.gray)\n\nI3 = ndimage.rotate(I2,45,cval=0.75) # cval is the value to set pixels outside of image\nax[1].imshow(I3, cmap=plt.cm.gray) # Rotate and reshape\n\nI4 = ndimage.rotate(I2,45,reshape=False,cval=0.75) # Rotate and do not reshape\nax[2].imshow(I4, cmap=plt.cm.gray)\n\nI5 = ndimage.shift(I2,(10,30),cval=0.75) # Shift image \nax[3].imshow(I5, cmap=plt.cm.gray)\n\nI6 = ndimage.gaussian_filter(I2,5) # Blur image\nax[4].imshow(I6, cmap=plt.cm.gray);\n```\n\n### `ndimage` can do much more: http://scipy-lectures.github.io/advanced/image_processing/\n\n---\n\n## FITS file (Flexible Image Transport System) - Standard Astro File Format\n- **FITS format preserves dynamic range of data**\n- FITS format can include lists, tables, images, and combunations of different types of data\n\n\n```python\nimport astropy.io.fits as fits\n```\n\n\n```python\nx = fits.open('bsg01.fits')\n\nx.info()\n```\n\n\n```python\nx[0].header\n```\n\n\n```python\nxd = x[0].data\n\nprint(\"The image x has a shape [height,width] of {0}\".format(xd.shape))\nprint(\"The image x is made up of data of type {0}\".format(xd.dtype))\nprint(\"The image x has a maximum value of {0}\".format(xd.max()))\nprint(\"The image x has a minimum value of {0}\".format(xd.min()))\n```\n\n\n```python\nfig, ax = plt.subplots(1,2)\n\nfig.set_size_inches(12,6)\n\nfig.tight_layout()\n\nax[0].imshow(xd,cmap=plt.cm.gray)\n\nax[1].hist(xd.flatten(),bins=20);\n```\n\n## You can use masks on images\n\n\n```python\nCopyData = np.copy(xd)\n\nCutOff = 40\n\nmask = np.where(CopyData > CutOff)\nCopyData[mask] = 50 # You can not just throw data away, you have to set it to something.\n\nfig, ax = plt.subplots(1,2)\n\nfig.set_size_inches(12,6)\n\nfig.tight_layout()\n\nax[0].imshow(CopyData,cmap=plt.cm.gray)\n\nax[1].hist(CopyData.flatten(),bins=20);\n```\n\n## You can add and subtract images\n\n\n```python\nfig, ax = plt.subplots(1,2)\nfig.set_size_inches(12,6)\n\nfig.tight_layout()\n\nax[0].imshow(xd, cmap=plt.cm.gray)\n\n# Open another file 'bsg02.fits'\n\ny = fits.open('bsg02.fits')\nyd = y[0].data\n\nax[1].imshow(yd, cmap=plt.cm.gray);\n```\n\n### The two images above may look the same but they are not! 
Subtracting the two images reveals the truth.\n\n\n```python\nfig, ax = plt.subplots(1,3)\nfig.set_size_inches(12,6)\n\nfig.tight_layout()\n\nax[0].imshow(xd, cmap=plt.cm.gray)\nax[1].imshow(yd, cmap=plt.cm.gray)\n\nz = xd - yd # Subtract the images pixel by pixel\n\nax[2].imshow(z, cmap=plt.cm.gray);\n```\n\n## FITS Tables - An astronomical example\n\n* Stellar spectra data from the [ESO Library of Stellar Spectra](http://www.eso.org/sci/facilities/paranal/decommissioned/isaac/tools/lib.html)\n\n\n```python\nS = fits.open('SolarSpectra.fits')\n\nS.info()\n```\n\n\n```python\nData = S[0].data\n```\n\n\n```python\nHead = S[0].header\nHead\n```\n\n\n```python\n# The FITS header has the information to make an array of wavelengths\n\nStart = Head['CRVAL1']\nNumber = Head['NAXIS1']\nDelta = Head['CDELT1']\n\nEnd = Start + (Number * Delta)\n\nWavelength = np.arange(Start,End,Delta)\n```\n\n\n```python\nfig, ax = plt.subplots(2,1)\nfig.set_size_inches(11,8.5)\n\nfig.tight_layout()\n\n# Full spectra\n\nax[0].plot(Wavelength, Data, color='b')\nax[0].set_ylabel(\"Flux\")\nax[0].set_xlabel(\"Wavelength [angstroms]\")\n\n# Just the visible range with the hydrogen Balmer lines\n\nax[1].set_xlim(4000,7000)\nax[1].set_ylim(0.6,1.2)\nax[1].plot(Wavelength, Data, color='b')\nax[1].set_ylabel(\"Flux\")\nax[1].set_xlabel(\"Wavelength [angstroms]\")\n\nH_Balmer = [6563,4861,4341,4102,3970,3889,3835,3646]\n\nax[1].vlines(H_Balmer,0,2, color='r', linewidth=3, alpha = 0.25)\n```\n\n# Pseudocolor - All color astronomy images are fake.\n\n### Color images are composed of three 2-d images: \n\n### JPG images are 3-d, even grayscale images\n\n\n```python\nredfilter = plt.imread('sphereR.jpg')\n\nredfilter.shape,redfilter.dtype\n```\n\n### We just want to read in one of the three channels\n\n\n```python\nredfilter = plt.imread('sphereR.jpg')[:,:,0]\n\nredfilter.shape,redfilter.dtype\n```\n\n\n```python\nplt.imshow(redfilter,cmap=plt.cm.gray);\n```\n\n\n```python\ngreenfilter = plt.imread('sphereG.jpg')[:,:,0]\nbluefilter = plt.imread('sphereB.jpg')[:,:,0]\n```\n\n\n```python\nfig, ax = plt.subplots(1,3)\nfig.set_size_inches(12,3)\n\nfig.tight_layout()\n\nax[0].set_title(\"Red Filter\")\nax[1].set_title(\"Green Filter\")\nax[2].set_title(\"Blue Filter\")\n\nax[0].imshow(redfilter,cmap=plt.cm.gray)\nax[1].imshow(greenfilter,cmap=plt.cm.gray)\nax[2].imshow(bluefilter,cmap=plt.cm.gray);\n```\n\n### Need to create a blank 3-d array to hold all of the images\n\n\n```python\nrgb = np.zeros((480,640,3),dtype='uint8')\n\nprint(rgb.shape, rgb.dtype)\n\nplt.imshow(rgb,cmap=plt.cm.gray);\n```\n\n## Fill the array with the filtered images\n\n\n```python\nrgb[:,:,0] = redfilter\nrgb[:,:,1] = greenfilter\nrgb[:,:,2] = bluefilter\n```\n\n\n```python\nfig, ax = plt.subplots(1,4)\nfig.set_size_inches(14,3)\n\nfig.tight_layout()\n\nax[0].set_title(\"Red Filter\")\nax[1].set_title(\"Green Filter\")\nax[2].set_title(\"Blue Filter\")\nax[3].set_title(\"All Filters Stacked\")\n\nax[0].imshow(redfilter,cmap=plt.cm.gray)\nax[1].imshow(greenfilter,cmap=plt.cm.gray)\nax[2].imshow(bluefilter,cmap=plt.cm.gray)\nax[3].imshow(rgb,cmap=plt.cm.gray);\n```\n\n\n```python\nprint(\"The image rgb has a shape [height,width] of {0}\".format(rgb.shape))\nprint(\"The image rgb is made up of data of type {0}\".format(rgb.dtype))\nprint(\"The image rgb has a maximum value of {0}\".format(rgb.max()))\nprint(\"The image rgb has a minimum value of {0}\".format(rgb.min()))\n```\n\n\n```python\nrgb[:,:,0] = redfilter * 
1.5\n\nplt.imshow(rgb)\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "7ce581da3a015f8ce2d839b30ac818f8d98fdd57", "size": 26887, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "07_Images_In_Python.ipynb", "max_stars_repo_name": "UWashington-Astro300/Astro300-A16", "max_stars_repo_head_hexsha": "bb3ee938c905035dfd1ac123036a140a117e6b95", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "07_Images_In_Python.ipynb", "max_issues_repo_name": "UWashington-Astro300/Astro300-A16", "max_issues_repo_head_hexsha": "bb3ee938c905035dfd1ac123036a140a117e6b95", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "07_Images_In_Python.ipynb", "max_forks_repo_name": "UWashington-Astro300/Astro300-A16", "max_forks_repo_head_hexsha": "bb3ee938c905035dfd1ac123036a140a117e6b95", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 21.5786516854, "max_line_length": 149, "alphanum_fraction": 0.4955182802, "converted": true, "num_tokens": 3996, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9324533069832973, "lm_q2_score": 0.888758786126321, "lm_q1q2_score": 0.8287260692339491}} {"text": "# Case Study III: (1) Solving Laplace's equation with Python\n\nIn the first part of the case study we will solve a simple electrostatics problem with Python. In the second part, we will use NumPy.\n\n## Solving Laplace's or Poisson's equation\n\n**Poisson's equation** for the electric potential $\\Phi(\\mathbf{r})$ and the charge density $\\rho(\\mathbf{r})$:\n\n$$\n\\nabla^2 \\Phi(x, y, z) = -4\\pi\\rho(x, y, z)\\\\\n$$\n\nFor a region of space without charges ($\\rho = 0$) this reduces to **Laplace's equation**\n\n$$\n\\nabla^2 \\Phi(x, y, z) = 0\n$$\n\n\nSolutions depend on the **boundary conditions**: \n\n* the *value of the potential* on the *boundary* or \n* the *electric field* (i.e. the derivative of the potential, $\\mathbf{E} = -\\nabla\\Phi$ *normal to the surface* ($\\mathbf{n}\\cdot\\mathbf{E}$), which directly follows from the charge distribution).\n\n### Example: 2D Laplace equation for the \"Wire on the grounded box\" problem\n\nAs a problem. consider a grounded box where the left side is a wire held at a potential of 100 V. 
There are no charges _inside_ the box.\n\n\n\nBoundary conditions:\n* square area surrounded by wires\n* three wires at ground (0 V), one wire at 100 V (the line with $x=0$ is at 100 V)\n\n_Inside the box_ the **Laplace equation** applies: \n$$\n\\frac{\\partial^2 \\Phi(x,y)}{\\partial x^2} + \\frac{\\partial^2 \\Phi(x,y)}{\\partial y^2} = 0\n$$\n\ni.e., the Poisson equation with charges set to 0.\n\n## Finite difference algorithm for Poisson's equation\nDiscretize space on a lattice (2D) and solve for $\\Phi$ on each lattice site.\n\nTaylor-expansion of the four neighbors of $\\Phi(x, y)$:\n\n\\begin{align}\n\\Phi(x \\pm \\Delta x, y) &= \\Phi(x, y) \\pm \\Phi_x \\Delta x + \\frac{1}{2} \\Phi_{xx} \\Delta x^2 + \\dots\\\\\n\\Phi(x, y \\pm \\Delta y) &= \\Phi(x, y) \\pm \\Phi_y \\Delta x + \\frac{1}{2} \\Phi_{yy} \\Delta x^2 + \\dots\\\\\n\\end{align}\n\nAdd equations in pairs: odd terms cancel, and **central difference approximation** for 2nd order partial derivatives (to $\\mathcal{O}(\\Delta^4)$):\n\n\\begin{align}\n\\Phi_{xx}(x,y) = \\frac{\\partial^2 \\Phi}{\\partial x^2} & \\approx \n \\frac{\\Phi(x+\\Delta x,y) + \\Phi(x-\\Delta x,y) - 2\\Phi(x,y)}{\\Delta x^2} \\\\\n\\Phi_{yy}(x,y) = \\frac{\\partial^2 \\Phi}{\\partial y^2} &\\approx \n \\frac{\\Phi(x,y+\\Delta y) + \\Phi(x,y-\\Delta y) - 2\\Phi(x,y)}{\\Delta y^2}\n\\end{align}\n\nTake $x$ and $y$ grids of equal spacing $\\Delta$: Discretized Poisson equation\n\n$$\n\\begin{split}\n\\Phi(x+\\Delta x,y) + \\Phi(x-\\Delta x,y) +\\Phi(x,y+\\Delta y) &+ \\\\\n +\\, \\Phi(x,y-\\Delta y) - 4\\Phi(x,y) &= -4\\pi\\rho(x,y)\\,\\Delta^2\n \\end{split}\n$$\n\nDefines a system of $N_x \\times N_y$ simultaneous algebraic equations for $\\Phi_{ij}$ to be solved.\n\nCan be solved directly via matrix approaches (and then is the best solution) but can be unwieldy for large grids.\n\nAlternatively: **iterative solution**:\n\n$$\n\\begin{split}\n4\\Phi(x,y) &= \\Phi(x+\\Delta x,y) + \\Phi(x-\\Delta x,y) +\\\\\n &+ \\Phi(x,y+\\Delta y) + \\Phi(x,y-\\Delta y) + 4\\pi\\rho(x,y)\\,\\Delta^2\n\\end{split}\n$$\n\nCompute a new value for $\\Phi(x,y)$ (left hand site) from a guessed potential (right hand side).\n\nOr written for lattice sites $(i, j)$ where \n\n$$\nx = x_0 + i\\Delta\\quad\\text{and}\\quad y = y_0 + j\\Delta, \\quad 0 \\leq i,j < N_\\text{max}\n$$\n\n$$\n\\Phi_{i,j} = \\frac{1}{4}\\Big(\\Phi_{i+1,j} + \\Phi_{i-1,j} + \\Phi_{i,j+1} + \\Phi_{i,j-1}\\Big)\n + \\pi\\rho_{i,j} \\Delta^2\n$$\n\n* Converged solution at $(i, j)$ will be the average potential from the four neighbor sites + charge density contribution.\n* *Not a direct solution*: iterate and hope for convergence.\n\n#### Jacobi method\nDo not change $\\Phi_{i,j}$ until a complete sweep has been completed.\n\nThe Jacobi algorithm is the simplest iterative approach and much better solutions exist (which we explore in the PHY432 _Computational Methods_ class). Jacobi converges slower than better algorithms but it will eventually give the right answer. It has the advantage of being easier to understand and easier to speed up with NumPy.\n\n## Solution via relaxation (Jacobi method) in Python\n\nSolve the box-wire problem on a lattice: The wire at $x=0$ (the $y$-axis) is at 100 V, the other three sides of the box are grounded (0 V).\n\nNote: $\\rho=0$ inside the box.\n\nNote for Jupyter notebook use:\n* For interactive 3D plots, select\n ```\n %matplotlib notebook\n ```\n* For standard inline figures (e.g. 
for exporting the notebook to LaTeX/PDF or html) use \n ```\n %matplotlib inline\n ``` \n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\n# %matplotlib inline\n%matplotlib notebook\n```\n\n\n```python\n%matplotlib inline\n```\n\n#### Wire on a box: Solution of Laplace's equation with the Jacobi algorithm\n\nThe potential is represented as a numpy array `Phi[x, y]`. We will consider axis 0 to correspond to x and axis 1 to y. (Technically speaking, row indices are \"x\" and column indices are \"y\" but this becomes confusing when translating the equations into code. We just need to be careful when we plot...)\n\nWe only iterate to `Max_iter = 70`: this will *not* produce a converged solution. We will address this problem in the second part.\n\nNote how the actual update of the array does not update the boundaries of the array because they are set by the problem. It also means that the indices loop over `Nmax-2` values via `range(1, Nx-1)` and `range(1, Ny-1)`. This is important to remember for the numpyfication in Part 2.\n\n\n\n```python\nNmax = 100\nMax_iter = 70\nPhi = np.zeros((Nmax, Nmax), dtype=np.float64)\n\n# initialize boundaries\n# everything starts out zero so nothing special for the grounded wires\nPhi[0, :] = 100 # wire at x=0 at 100 V\n\n# Jacobi: do not change the potential during one update, so we need to work on a copy\nPhi_new = Phi.copy()\n\nNx, Ny = Phi.shape\nfor n_iter in range(Max_iter):\n for xi in range(1, Nx-1):\n for yj in range(1, Ny-1):\n Phi_new[xi, yj] = 0.25*(Phi[xi+1, yj] + Phi[xi-1, yj] \n + Phi[xi, yj+1] + Phi[xi, yj-1])\n # update the potential for the next iteration\n Phi[:, :] = Phi_new\n```\n\n#### Visualization of the potential \n\nWe use 2D and 3D plotting in matplotlib. The meshgrid function provides a convenient way to produce arrays that can be easily plotted with these functions. The notebook [meshgrid.ipynb](meshgrid.ipynb) provides additional information.\n\n\n```python\n# plot Phi(x,y)\nx = np.arange(Phi.shape[0])\ny = np.arange(Phi.shape[1])\nX, Y = np.meshgrid(x, y)\n\nZ = Phi[X, Y]\n```\n\n\n```python\nfig = plt.figure()\nax = fig.add_subplot(111, projection='3d')\nax.plot_wireframe(X, Y, Z, rstride=2, cstride=2)\n\nax.set_xlabel('X')\nax.set_ylabel('Y')\nax.set_zlabel(r'potential $\\Phi$ (V)')\n\nax.view_init(elev=40, azim=-65)\n```\n\nNicer plot (you can use this code for other projects):\n\n\n```python\nfig = plt.figure()\nax = fig.add_subplot(111, projection='3d')\n\nax.plot_wireframe(X, Y, Z, rstride=2, cstride=2, linewidth=0.5, color=\"gray\")\nsurf = ax.plot_surface(X, Y, Z, cmap=plt.cm.coolwarm, alpha=0.6)\ncset = ax.contourf(X, Y, Z, zdir='z', offset=-50, cmap=plt.cm.coolwarm)\n\nax.set_xlabel('X')\nax.set_ylabel('Y')\nax.set_zlabel(r'potential $\\Phi$ (V)')\nax.set_zlim(-50, 100)\nax.view_init(elev=40, azim=-65)\n\ncb = fig.colorbar(surf, shrink=0.5, aspect=5)\ncb.set_label(r\"potential $\\Phi$ (V)\")\n```\n\n(Note that the calculation above is is *not converged* ... 
see next lecture.)\n", "meta": {"hexsha": "13582d65943c1464b965df19ec39b6c41433ba3e", "size": 176335, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Module_7/CaseStudyIII/CaseStudyIII_Laplace_equation_Python.ipynb", "max_stars_repo_name": "Py4Physics/PHY194", "max_stars_repo_head_hexsha": "68966ad96bbf2756ca3c0c39210be69c379c7619", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2019-10-26T00:39:14.000Z", "max_stars_repo_stars_event_max_datetime": "2019-10-29T19:35:20.000Z", "max_issues_repo_path": "Module_7/CaseStudyIII/CaseStudyIII_Laplace_equation_Python.ipynb", "max_issues_repo_name": "Py4Phy/PHY202", "max_issues_repo_head_hexsha": "ec3a0b0285f2601accfdbf0c30416e1351430342", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Module_7/CaseStudyIII/CaseStudyIII_Laplace_equation_Python.ipynb", "max_forks_repo_name": "Py4Phy/PHY202", "max_forks_repo_head_hexsha": "ec3a0b0285f2601accfdbf0c30416e1351430342", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 346.4341846758, "max_line_length": 84696, "alphanum_fraction": 0.932860748, "converted": true, "num_tokens": 2245, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9324533088603709, "lm_q2_score": 0.8887587831798665, "lm_q1q2_score": 0.8287260681547834}} {"text": "# Kernel Logistic Classifier\n\n## Linear logistic classifier\n\nConsider the multi-dimensional logistic classifier\n\\begin{equation}\nP(c\\mid{\\bf x},\\Theta) = \\frac{e^{\\alpha_c+{\\bf\\beta}_c^{T}{\\bf x}}}\n{\\sum_{c'=1}^{C}e^{\\alpha_{c'}+{\\bf\\beta}_{c'}^{T}{\\bf x}}}\\,,\n\\end{equation}\nfor feature vector ${\\bf x}\\in\\mathbb{R}^{F}$.\nWe can, if we wish, notionally consider the prior of class $c$ to be $P(c\\mid\\Theta)\\propto e^{\\alpha_c}$, and\nthe class density of ${\\bf x}$ to be\n$p({\\bf x}\\mid c,\\Theta)\\propto e^{{\\bf\\beta}_c^{T}{\\bf x}}$, although the latter assumption poses some normalisation issues.\n\nNow consider supervised training data comprised of $N$ known class labels ${\\bf C}=[c_1,c_2,\\ldots,c_N]^{T}$\nand feature (or design) matrix ${\\bf X}=[{\\bf x}_1,{\\bf x}_2,\\ldots,{\\bf x}_N]^{T}$.\nThe discriminative likelihood is then given by\n\\begin{eqnarray}\nP({\\bf C}\\mid{\\bf X},\\Theta) & = & \\prod_{d=1}^{N}P(c_d\\mid{\\bf x}_d,\\Theta)\n~=~ \\prod_{d=1}^{N}\\prod_{c=1}^{C}P(c\\mid{\\bf x}_d,\\Theta)^{\\delta(c_d=c)}\\,,\n\\end{eqnarray}\nwhere $\\delta(A)=1$ and $\\delta(\\neg A)=0$ if proposition $A$ is true.\n\nFor notational convenience, we let $z_{cd}\\doteq\\delta(c_d=c)$ and $\\pi_{cd}\\doteq P(c\\mid{\\bf x}_d,\\Theta)$. 
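\nIn code, these two quantities are simply the one-hot label matrix and the softmax outputs of the classifier defined above; the following minimal NumPy sketch is our own illustration (the function and array names are not part of the notes, and the labels are assumed to be 0-based integer class indices), not a prescribed implementation.\n\n```python\nimport numpy as np\n\ndef one_hot(labels, C):\n    # z[c, d] = 1 if training sample d has class c, else 0\n    z = np.zeros((C, len(labels)))\n    z[labels, np.arange(len(labels))] = 1.0\n    return z\n\ndef posteriors(alpha, Beta, X):\n    # pi[c, d] = P(c | x_d, Theta); alpha has shape (C,), Beta has shape (C, F) with\n    # the beta_c as rows, and X has shape (N, F) with one feature vector per row.\n    scores = alpha[:, None] + Beta @ X.T                 # alpha_c + beta_c . x_d\n    scores = scores - scores.max(axis=0, keepdims=True)  # guard against overflow in exp\n    exps = np.exp(scores)\n    return exps / exps.sum(axis=0, keepdims=True)        # softmax over the classes\n```\n\nSubtracting the per-column maximum before exponentiating is only for numerical stability and does not change the resulting posteriors.\n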
\nThen the discriminative log-likelihood is just\n\\begin{eqnarray}\nL(\\Theta) & = & \\ln P({\\bf C}\\mid{\\bf X},\\Theta)\n~=~\\sum_{d=1}^{N}\\sum_{c=1}^{C}z_{cd}\\ln\\pi_{cd}\n\\nonumber\\\\\n& = & \\sum_{d=1}^{N}\\sum_{c=1}^{C}z_{cd}(\\alpha_c+{\\bf\\beta}_c^{T}{\\bf x}_d)\n-\\sum_{d=1}^{N}\\ln\\sum_{c=1}^{C}e^{\\alpha_c+{\\bf\\beta}_c^{T}{\\bf x}_d}\\,.\n\\end{eqnarray}\n\nSince discriminatively trained models are prone to overfitting, it is usual to regularise the parameters.\nIn the case of ridge regression, there is some dispute whether to penalise just the feature weights, proportional \n$\\|{\\bf\\beta}_c\\|^2$, or to also penalise the bias $\\alpha_c$, proportional to $\\|{\\bf\\gamma}_c\\|^2$,\nwhere ${\\bf\\gamma}_c\\doteq [\\alpha_c]\\oplus{\\bf\\beta}_c$ is the concatenate of all parameters for class $c$.\n\nThe former case is sometimes preferred on the basis that $\\alpha_c$ controls the prior on class $c$, and probably shouldn't be constrained beyond what the data suggest.\nIn particular, if we do not regularise the class weights, then it can be shown that\n\\begin{eqnarray}\n\\frac{1}{N}\\sum_{d=1}^{N}P(c\\mid{\\bf x}_d,\\Theta) & = & \\frac{N_c}{N}\\,,\n\\end{eqnarray}\nwhere $N_c$ is the number of training samples of class $c$.\n\nAlternatively, it could be noted that the observed class proportions at best only approximate the true class priors, and at worst are artificially constrained (e.g. by balancing class sizes). Hence, we might prefer the latter case of regularising all parameters.\n\nFor now we consider the more general, quadratic penalty ${\\bf\\gamma}_c^{T}{\\bf\\Lambda}_c{\\bf\\gamma}_c$, which allows us to not only \"turn off\" regularisation of some parameters, but additionally to handle differently-scaled features and correlations between features. We define the modified feature vector $\\tilde{\\bf x}\\doteq[1]\\oplus{\\bf x}$, such that\n\\begin{eqnarray}\n\\pi_{cd} & = & \\frac{e^{{\\bf\\gamma}_c^{T}\\tilde{\\bf x}_d}}\n{\\sum_{c'=1}^{C}e^{{\\bf\\gamma}_{c'}^{T}\\tilde{\\bf x}_d}}\\,.\n\\end{eqnarray}\nThe ridge-regularised discriminative log-likelihood is then\n\\begin{eqnarray}\n\\tilde{L}(\\Theta) & = & \n\\sum_{d=1}^{N}\\sum_{c=1}^{C}z_{cd}{\\bf\\gamma}_c^{T}\\tilde{\\bf x}_d\n-\\sum_{d=1}^{N}\\ln\\sum_{c=1}^{C}e^{{\\bf\\gamma}_c^{T}\\tilde{\\bf x}_d}\n-\\frac{1}{2}\\sum_{c=1}^{C}{\\bf\\gamma}_c^T{\\bf\\Lambda}_c\\mathbf{\\gamma}_c\n\\,.\n\\end{eqnarray}\n\nIt can be shown that its class-specific gradient vector is given by\n\\begin{eqnarray}\n{\\bf\\nabla}_{c}\\tilde{L} & = & \\frac{\\partial\\tilde{L}}{\\partial{\\bf\\gamma}_c}\n~=~\\sum_{d=1}^{N}z_{cd}\\tilde{\\bf x}_d-\\sum_{d=1}^{N}\\pi_{cd}\\tilde{\\bf x}_d-{\\bf\\Lambda}_c{\\bf\\gamma}_c\\,,\n\\end{eqnarray}\nand the class-specific Hessian matrix is given by\n\\begin{eqnarray}\n{\\bf\\nabla}_{c}^{T}{\\bf\\nabla}_{c}\\tilde{L} & = & \n\\frac{\\partial^2\\tilde{L}}{\\partial{\\bf\\gamma}_c^{T}\\partial{\\bf\\gamma}_c}\n~=~-\\sum_{d=1}^{N}\\pi_{cd}(1-\\pi_{cd})\\tilde{\\bf x}_d\\tilde{\\bf x}_d^{T}-{\\bf\\Lambda}_c\\,.\n\\end{eqnarray}\nNote that, for simplicity, we are going to ignore the explicit cross-class dependencies\n${\\bf\\nabla}_{c'}^{T}{\\bf\\nabla}_{c}\\tilde{L}$.\n\nIn matrix notation, let ${\\bf z}_c=[z_{c1},\\ldots,z_{cN}]^T$, ${\\bf\\pi}_c=[\\pi_{c1},\\ldots,\\pi_{cN}]^T$,\nand $\\tilde{\\bf X}={\\bf 1}\\oplus{\\bf X}$. 
Then the gradient becomes\n\\begin{eqnarray}\n{\\bf\\nabla}_{c}\\tilde{L} & = &\n\\tilde{\\bf X}^{T}({\\bf z}_c-{\\bf\\pi}_c)-{\\bf\\Lambda}_c{\\bf\\gamma}_c\\,.\n\\end{eqnarray}\nSimilarly, define $w_{cd}\\doteq\\pi_{cd}(1-\\pi_{cd})$, and let ${\\bf w}_c=[w_{c1},\\ldots,w_{cN}]^T$\nand ${\\bf W}_c={\\tt diag}\\{{\\bf w}_c\\}$. Then the Hessian becomes\n\\begin{eqnarray}\n{\\bf\\nabla}_{c}^{T}{\\bf\\nabla}_{c}\\tilde{L} & = & \n-\\tilde{\\bf X}^T{\\bf W}_c\\tilde{\\bf X}-{\\bf\\Lambda}_c\\,.\n\\end{eqnarray}\n\nNow, the class-specific update for parameters ${\\bf\\gamma}_c$ takes the form of a single iteration of the Newton-Raphson method, namely\n\\begin{eqnarray}\n{\\bf\\gamma}'_c & = & {\\bf\\gamma}_c - \\left[{\\bf\\nabla}_{c}^{T}{\\bf\\nabla}_{c}\\tilde{L}\\right]^{-1}\n{\\bf\\nabla}_{c}\\tilde{L}\n\\nonumber\\\\\n& = & {\\bf\\gamma}_c+\\left[\\tilde{\\bf X}^T{\\bf W}_c\\tilde{\\bf X}+{\\bf\\Lambda}_c\\right]^{-1}\n\\left[\\tilde{\\bf X}^{T}({\\bf z}_c-{\\bf\\pi}_c)-{\\bf\\Lambda}_c{\\bf\\gamma}_c\\right]\n\\nonumber\\\\\n& = & \\left[\\tilde{\\bf X}^T{\\bf W}_c\\tilde{\\bf X}+{\\bf\\Lambda}_c\\right]^{-1}\n\\left[\\tilde{\\bf X}^T{\\bf W}_c\\tilde{\\bf X}{\\bf\\gamma}_c+{\\bf\\Lambda}_c{\\bf\\gamma}_c+\n\\tilde{\\bf X}^{T}({\\bf z}_c-{\\bf\\pi}_c)-{\\bf\\Lambda\\gamma}_c\\right]\n\\nonumber\\\\\n& = & \\left[\\tilde{\\bf X}^T{\\bf W}_c\\tilde{\\bf X}+{\\bf\\Lambda}_c\\right]^{-1}\\tilde{\\bf X}^T{\\bf W}_c\n\\left[\\tilde{\\bf X}{\\bf\\gamma}_c+{\\bf W}_{c}^{-1}({\\bf z}_c-{\\bf\\pi}_c)\\right]\\,.\n\\end{eqnarray}\n\n## Dual optimisation\n\nObserve from above that ${\\bf z}_c-{\\bf\\pi}_c$ represents a vector of prediction errors for class $c$. \nHence, if we define the weighted errors\n${\\bf e}_c\\doteq{\\bf W}_c^{-1}({\\bf z}_c-{\\bf\\pi}_c)$\nand the linear system\n\\begin{equation}\n{\\bf y}_c = \\tilde{\\bf X}{\\bf\\gamma}_c+{\\bf e}_c\\,,\n\\end{equation}\nthen we note that the class-specific parameter update becomes\n\\begin{eqnarray}\n\\left[\\tilde{\\bf X}^T{\\bf W}_c\\tilde{\\bf X}+{\\bf\\Lambda}_c\\right]{\\bf\\gamma}'_c\n& = & \\tilde{\\bf X}^T{\\bf W}_c{\\bf y}_c\\,.\n\\end{eqnarray}\nIt is of interest that this update corresponds to a regularised form of the iteratively reweighted least-squares (IRLS) algorithm, except that here ${\\bf y}_c$ itself varies with each iteration.\n\nTo motivate this observation, note that if we knew ${\\bf y}_c$ in advance, then we could simply find the optimal\nparameters ${\\bf\\gamma}_c$ via a weighted least-squares (WLS)\nminimisation of the square error $\\|{\\bf W}_c{\\bf e}_c\\|^2$. However, we instead must obtain ${\\bf y}_c$ via the following steps:\n1. Choose initial parameters, ${\\bf\\gamma}_c$, for all class $c=1,2,\\ldots,C$.\n2. Compute the linear projection, $\\tilde{\\bf X}{\\bf\\gamma}_c$.\n3. Compute the posterior probabilities, ${\\bf\\pi}_c$.\n4. Compute the weighted prediction errors, ${\\bf e}_c$.\n5. Compute the 'observations', ${\\bf y}_c$.\n\nAt this juncture, we may now find update parameters ${\\bf\\gamma}'_c$ that minimise the\nsquare error $\\|{\\bf e}'_c\\|^2=\\|{\\bf y}_c-\\tilde{\\bf X}^T{\\bf\\gamma}'_c\\|^2$, satisfying the system\n${\\bf y}_c = \\tilde{\\bf X}{\\bf\\gamma}'_c+{\\bf e}'_c$. 
This gives rise to the IRLS update above.\n\nIn conclusion, Newton-Raphson maximisation of the discriminative log-likelihood (the primal problem)\ncorresponds to IRLS minimisation of the (weighted) prediction error (the dual problem).\n\n## Representer theorem\n\nNow, a consequence of this duality is that we may apply the representer theorem. Put simply, the ridge-penalised function $f_c(\\cdot)$ that minimises the square error $\\sum_{d=1}^{N}\\|y_{cd}-f_c(\\tilde{\\bf x}_d)\\|^2$\nsatisfies $f_c(\\tilde{\\bf x})=\\sum_{d=1}^{N}\\omega_{cd}k(\\tilde{\\bf x}_d,\\tilde{\\bf x})$ for some positive-definite kernel function $k(\\cdot,\\cdot)$. In other words, the least-squares function interpolates over the known data points.\n\nHence, since the parameter update chooses ${\\bf\\gamma}'_c$ to minimise the square error\n$\\|{\\bf y}_c-\\tilde{\\bf X}{\\bf\\gamma}'_c\\|^2$, then we may take \n$f_c(\\tilde{\\bf x})\\doteq{\\bf\\gamma}_c^{'T}\\tilde{\\bf x}$.\n\nLet us now consider the scalar-product kernel $k({\\bf x},{\\bf y})\\doteq {\\bf x}^{T}{\\bf y}$.\nThen it follows that\n\\begin{equation}\n{\\bf\\gamma}_c^{'T}\\tilde{\\bf x} = \\sum_{d=1}^{N}\\omega'_{cd}\\tilde{\\bf x}_d^{T}\\tilde{\\bf x}\n= \\left({\\bf\\omega}_c^{'T}\\tilde{\\bf X}\\right)\\tilde{\\bf x}\n\\Rightarrow {\\bf\\gamma}'_c = \\tilde{\\bf X}^{T}{\\bf\\omega}'_c\\,.\n\\end{equation}\nSubstituting this representation (for both the old and new parameter estimates) back into the parameter update then gives\n\\begin{eqnarray}\n{\\bf y}_c & = & \\tilde{\\bf X}\\tilde{\\bf X}^{T}{\\bf\\omega}_c+{\\bf W}_{c}^{-1}({\\bf z}_c-{\\bf\\pi}_c)\\,,\n\\nonumber\\\\\n\\left[\\tilde{\\bf X}^T{\\bf W}_c\\tilde{\\bf X}+{\\bf\\Lambda}_c\\right]\\tilde{\\bf X}^{T}{\\bf\\omega}'_c\n&=&\\tilde{\\bf X}^T{\\bf W}_c{\\bf y}_c\n\\nonumber\\\\\n\\Rightarrow\n\\left[\\tilde{\\bf X}\\tilde{\\bf X}^T{\\bf W}_c\\tilde{\\bf X}\\tilde{\\bf X}^{T}\n+\\tilde{\\bf X}{\\bf\\Lambda}_c\\tilde{\\bf X}^{T}\\right]{\\bf\\omega}'_c\n& = &\\tilde{\\bf X}\\tilde{\\bf X}^T{\\bf W}_c{\\bf y}_c\\,.\n\\end{eqnarray}\n\nIn order to simplify these expressions further, we note that the general regulariser ${\\bf\\Lambda}_c$ now gives us a problem.\nOne way to avoid the problem is to reverse our generalisation, and take ${\\bf\\Lambda}_c=\\lambda{\\bf I}$ as per usual.\nHence, we obtain\n\\begin{eqnarray}\n\\left[\\tilde{\\bf X}\\tilde{\\bf X}^T{\\bf W}_c\\tilde{\\bf X}\\tilde{\\bf X}^{T}\n+\\lambda\\tilde{\\bf X}\\tilde{\\bf X}^{T}\\right]{\\bf\\omega}'_c\n&=&\\tilde{\\bf X}\\tilde{\\bf X}^T{\\bf W}_c{\\bf y}_c\\,.\n\\end{eqnarray}\nNote that this simplifying assumption now regularises both the class weights $\\alpha_c$ and the feature weights\n${\\bf\\beta}_c$. 
With some effort, we can avoid regularisation of $\\alpha_c$ by instead choosing\n${\\bf\\Lambda}_c\\doteq\\lambda\\,{\\tt diag}\\{0,1,\\ldots,1\\}$, whereupon\n\\begin{eqnarray}\n\\left[\\tilde{\\bf X}\\tilde{\\bf X}^T{\\bf W}_c\\tilde{\\bf X}\\tilde{\\bf X}^{T}\n+\\lambda{\\bf X}{\\bf X}^{T}\\right]{\\bf\\omega}'_c\n&=&\\tilde{\\bf X}\\tilde{\\bf X}^T{\\bf W}_c{\\bf y}_c\\,,\n\\end{eqnarray}\nsince $\\tilde{\\bf X}={\\bf 1}\\oplus{\\bf X}$.\nWe shall return to this point later.\n\n## Kernel trick\n\nNow, we define the kernel matrix $\\tilde{\\bf K}\\doteq\\tilde{\\bf X}\\tilde{\\bf X}^{T}$, \nwhere $\\tilde{K}_{ij}=\\tilde{\\bf x}_i^{T}\\tilde{\\bf x}_j=k(\\tilde{\\bf x}_i,\\tilde{\\bf x}_j)$.\nConsequently, we obtain\n\\begin{eqnarray}\n{\\bf y}_c & = &\\tilde{\\bf K}{\\bf\\omega}_c+{\\bf W}_{c}^{-1}({\\bf z}_c-{\\bf\\pi}_c)\\,,\n\\nonumber\\\\\n\\left[\\tilde{\\bf K}{\\bf W}_c\\tilde{\\bf K}\n+\\lambda\\tilde{\\bf K}\\right]{\\bf\\omega}'_c\n& = & \\tilde{\\bf K}{\\bf W}_c{\\bf y}_c\n\\nonumber\\\\\n\\Rightarrow \\left[{\\bf W}_c\\tilde{\\bf K}+\\lambda{\\bf I}\\right]{\\bf\\omega}'_c & = & {\\bf W}_c{\\bf y}_c\n={\\bf W}_c\\tilde{\\bf K}{\\bf\\omega}_c+({\\bf z}_c-{\\bf\\pi}_c)\\,,\n\\end{eqnarray}\nsince $\\tilde{\\bf K}$ is invertible, from the definition that the kernel function $k(\\cdot,\\cdot)$\nis positive-definite.\nNote that we also assumed previously that the diagonal matrix\n${\\bf W}_c$ is invertible, hence we could simplify further to\n\\begin{eqnarray}\n\\left[\\tilde{\\bf K}+\\lambda{\\bf W}_c^{-1}\\right]{\\bf\\omega}'_c & = &\n\\tilde{\\bf K}{\\bf\\omega}_c+{\\bf W}_{c}^{-1}({\\bf z}_c-{\\bf\\pi}_c)\\,.\n\\end{eqnarray}\nWe should note at this point that we have now traded an $F\\times F$ inversion problem (in the parameter space) for an $N\\times N$ one (in the data space). \n\nWe have also, as noted earlier, lost the propery of being able to *not* penalise the $\\alpha_c$ bias (or class proportion) parameter.\nTo see what form the update would take if we did not regularise $\\alpha_c$, we instead take the alternative\nregularisation\n${\\bf\\Lambda}_c\\doteq\\lambda\\,{\\tt diag}\\{0,1,\\ldots,1\\}$. 
Then, from the derivation above, \nwe obtain the modified version\n\\begin{eqnarray}\n\\left[{\\bf W}_c\\tilde{\\bf K}+\\lambda\\tilde{\\bf K}^{-1}{\\bf K}\\right]{\\bf\\omega}'_c\n&=&{\\bf W}_c\\tilde{\\bf K}{\\bf \\omega}_c+({\\bf z}_c-\\mathbb{\\pi}_c)\\,,\n\\end{eqnarray}\nwhere $\\tilde{\\bf K}={\\bf K}+{\\bf 1}{\\bf 1}^T$.\nWe deduce from the Sherman-Morrison formula that $\\tilde{\\bf K}^{-1}{\\bf K}={\\bf I}-{\\bf v}{\\bf 1}^{T}$,\nwhere ${\\bf v}={\\bf u}/(1+{\\bf 1}^{T}{\\bf u})$ and ${\\bf K}{\\bf u}={\\bf 1}$.\nHence, the full version of the class-specific parameter update is\n\\begin{eqnarray}\n\\left[{\\bf W}_c({\\bf K}+{\\bf 1}{\\bf 1}^T)+\\lambda({\\bf I}-{\\bf v}{\\bf 1}^T)\\right]{\\bf\\omega}'_c\n&=&{\\bf W}_c({\\bf K}+{\\bf 1}{\\bf 1}^T){\\bf \\omega}_c+({\\bf z}_c-\\mathbb{\\pi}_c)\\,,\n\\end{eqnarray}\nor\n\\begin{eqnarray}\n\\left[{\\bf W}_c{\\bf K}+\\lambda{\\bf I}+({\\bf w}_c-\\lambda{\\bf v}){\\bf 1}^T\\right]{\\bf\\omega}'_c\n&=&({\\bf W}_c{\\bf K}+{\\bf w}_c{\\bf 1}^T){\\bf \\omega}_c+({\\bf z}_c-\\mathbb{\\pi}_c)\\,,\n\\end{eqnarray}\nsince ${\\bf W}_c{\\bf 1}={\\bf w}_c$.\n\nFinally, we recall that ${\\bf\\gamma}_c=\\tilde{\\bf X}^{T}{\\bf\\omega}_c$, such that our original parameters are recovered as\n\\begin{eqnarray}\n\\alpha_c~=~{\\bf 1}^{T}{\\bf\\omega}_c\\,, && {\\bf\\beta}_c~=~{\\bf X}^{T}{\\bf\\omega}_c\\,.\n\\end{eqnarray}\n", "meta": {"hexsha": "679844daf65d963ede4777262fafa50ab62864af", "size": 17865, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "LogisticModels/notebooks/kernel-logistic-classifier.ipynb", "max_stars_repo_name": "gaj67/gaj-data-science", "max_stars_repo_head_hexsha": "aadcf6ee2cd00606563f213167c2eeeb42430c59", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "LogisticModels/notebooks/kernel-logistic-classifier.ipynb", "max_issues_repo_name": "gaj67/gaj-data-science", "max_issues_repo_head_hexsha": "aadcf6ee2cd00606563f213167c2eeeb42430c59", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "LogisticModels/notebooks/kernel-logistic-classifier.ipynb", "max_forks_repo_name": "gaj67/gaj-data-science", "max_forks_repo_head_hexsha": "aadcf6ee2cd00606563f213167c2eeeb42430c59", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 48.4146341463, "max_line_length": 376, "alphanum_fraction": 0.5445843829, "converted": true, "num_tokens": 5097, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9597620527726579, "lm_q2_score": 0.863391617003942, "lm_q1q2_score": 0.8286505106824078}} {"text": "# Ricci Tensor and Scalar Curvature calculations using Symbolic module\n\n\n```python\nimport sympy\nfrom einsteinpy.symbolic import RicciTensor, RicciScalar\nfrom einsteinpy.symbolic.predefined import AntiDeSitter\n\nsympy.init_printing()\n```\n\n### Defining the Anti-de Sitter spacetime Metric\n\n\n```python\nmetric = AntiDeSitter()\nmetric.tensor()\n```\n\n### Calculating the Ricci Tensor(with both indices covariant)\n\n\n```python\nRic = RicciTensor.from_metric(metric)\nRic.tensor()\n```\n\n### Calculating the Ricci Scalar(Scalar Curvature) from the Ricci Tensor\n\n\n```python\nR = RicciScalar.from_riccitensor(Ric)\nR.simplify()\nR.expr\n```\n\nThe curavture is -12 which is in-line with the theoretical results\n", "meta": {"hexsha": "e9fa66012cf275f189a7ea890ab0490c1a18f6a7", "size": 27983, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "docs/source/examples/Ricci Tensor and Scalar Curvature symbolic calculation.ipynb", "max_stars_repo_name": "iamhardikat11/einsteinpy", "max_stars_repo_head_hexsha": "7bf0ca0020b273e616b6e7c19aed7a5e13925444", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 485, "max_stars_repo_stars_event_min_datetime": "2019-02-04T09:15:22.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-19T13:50:17.000Z", "max_issues_repo_path": "docs/source/examples/Ricci Tensor and Scalar Curvature symbolic calculation.ipynb", "max_issues_repo_name": "iamhardikat11/einsteinpy", "max_issues_repo_head_hexsha": "7bf0ca0020b273e616b6e7c19aed7a5e13925444", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 570, "max_issues_repo_issues_event_min_datetime": "2019-02-02T10:57:27.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-26T16:37:05.000Z", "max_forks_repo_path": "docs/source/examples/Ricci Tensor and Scalar Curvature symbolic calculation.ipynb", "max_forks_repo_name": "iamhardikat11/einsteinpy", "max_forks_repo_head_hexsha": "7bf0ca0020b273e616b6e7c19aed7a5e13925444", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 250, "max_forks_repo_forks_event_min_datetime": "2019-01-30T14:14:14.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-28T21:18:18.000Z", "avg_line_length": 166.5654761905, "max_line_length": 11968, "alphanum_fraction": 0.8649537219, "converted": true, "num_tokens": 177, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9597620539235895, "lm_q2_score": 0.8633916082162403, "lm_q1q2_score": 0.8286505032420098}} {"text": "# The resilu linearity / non-linearity\n\nThe function $resilu(x)=\\frac{x}{1-e^{-x}}$ can be written as the sum of a linear funciton and a function that limits to relu(x).\n\nBy using resilu(x) and 'non'-linearity in neural networks, the effect of a skip-connection should thus be included 'for free'.\n\n\n```python\nimport copy\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport math\nimport sympy\n```\n\n\n```python\nx=np.arange(-20,20,0.01)\n```\n\n\n```python\ndef resilu(x):\n return x/(1.0-np.exp(x*-1.0))\n\ndef relu(x):\n y=copy.copy(x)\n y[y<0]=0.0\n return y\n \n```\n\nComparing $resilu(x)=\\frac{x}{1-e^{-x}}$ with the $relu(x)$ function.\n\nBy introducing a parameter $a$, $relu(x)$ is a limit of $resilu(x,a)$:\n \n$$\\lim_{a\\to\\infty}\\frac{x}{1-e^{-x a}}=relu(x)$$\n\n\n```python\nfig=plt.figure()\nax=fig.add_axes([0,0,1,1])\nax.plot(x,resilu(x),label='resilu')\nax.plot(x,relu(x),linestyle=':',label='relu')\nax.legend()\n```\n\n\n```python\ndef resilu_nonlin(x):\n return x/(np.exp(x)-1.0)\n\ndef resilu_lin(x):\n return x\n```\n\nResilu(x) can be split into a sum of two functions:\n\n* linear part $f_1(x)=x$\n* non-linear part $f_2(x)=\\frac{x}{e^x-1}$, with exactly similar limit properties as the complete resilu function: $lim_{a\\to\\infty}\\frac{x}{e^{x a}-1}=relu(-x)$\n\n$$\nresilu(x)=f_1(x)+f_2(x)=\\frac{x}{e^x-1} + x = \\frac{x}{1-e^{-x}}\n$$\n\nThis should show that using resilu non-linearity should be \u00e4quivalent to using a 'standard' non-linearity and a skip connection that ommits the nonlinearity as additive residual connection. The only difference are arbitrary linear transformations which the net can easily cancel out.\n\n\n```python\nfig=plt.figure()\nax=fig.add_axes([0,0,1,1])\nax.plot(x,resilu_nonlin(x),label='resilu (non-lin part)')\nax.plot(x,resilu_lin(x),label='resilu (lin part)')\nax.plot(x,resilu_nonlin(x)+resilu_lin(x),label='non-lin + lin (= complete resilu)')\nax.plot(x,relu(x),linestyle=':',label='relu(x)')\nax.plot(x,relu(-x),linestyle=':',label='relu(-x)')\nax.legend()\n```\n\n## Numerical instability at $x=0$\n\n$\\frac{x}{1-e^{-x}}$ is obviously numerically unstable at $x=0$, which could be seen as a phase transition or an unstable fixed point.\n\nIn order to use the resilu(x) function, the function itself and it's derivate (which are both not numerically stable around $x=0$ are approximated with $\\mathcal{O}(4)$ taylor expansions for a small interval $-h<0www.tsaad.net)
    Department of Chemical Engineering
    University of Utah**\n\n# Example 1\nThe height of a person vs femur length is given by the following data:\n\n| femur length (cm) | height (cm) |\n| :----------------: |:-----------: |\n| 40\t | 163|\n|41\t|165|\n|43\t|167|\n|43\t|164|\n|44\t|168|\n|44\t|169|\n|45\t|170|\n|45\t|167|\n|48\t|170|\n|51\t|175|\n\nPlot this data and then perform regression to a line and a quadratic using the standard derivation as well as the normal equations. Also use `numpy`'s `polyfit` and compare to you solutions.\n\nLet's first get the boiler plate out of the way and import matplotlib and numpy\n\n\n```python\n%matplotlib inline\n%config InlineBackend.figure_format = 'svg'\nimport matplotlib.pyplot as plt\nimport numpy as np\n```\n\nNow place the input data in `numpy` arrays. We can enter these manually or import from a text file.\n\n\n```python\nxi = np.array([40,41,43,43,44,44,45,45,48,51])\nyi = np.array([163,165,167,164,168,169,170,167,170,175])\n```\n\nNow plot the data\n\n\n```python\nplt.plot(xi,yi,'ro')\nplt.xlabel('Femur length (cm)')\nplt.ylabel('Person\\' height (cm)')\nplt.title('Height vs femur length')\nplt.grid()\n```\n\n\n \n\n \n\n\n## Regression to a straight line\nHere, we regress the data to a straight line $a_0 + a_1 x$. Recall that, for regression to a straight line, we must solve the following system of equations:\n\\begin{equation}\n\\label{eq:regression-line}\n\\left[\n\\begin{array}{*{20}{c}}\nN& \\sum{x_i}\\\\\n\\sum{x_i} &\\sum{x_i^2}\n\\end{array}\n\\right]\n\\left(\n\\begin{array}{*{20}{c}}\na_0\\\\\na_1\n\\end{array} \\right)\n= \\left( \n\\begin{array}{*{20}{c}}\n\\sum{y_i}\\\\\n\\sum {x_i y_i}\n\\end{array} \n\\right)\n\\end{equation}\n\nfirst build the coefficient matrix as a list of lists [ [a,b], [c,d]]. We will use `numpy.sum` to compute the various summations in the system \\eqref{eq:regression-line}.\n\n\n```python\nN = len(xi)\nA = np.array([[N, np.sum(xi)],\n [np.sum(xi), np.sum(xi**2)]])\n```\n\nnext, construct the right-hand-side using a 1D numpy array\n\n\n```python\nb = np.array([np.sum(yi), np.sum(xi*yi)])\n```\n\nnow find the solution of $[\\mathbf{A}]\\mathbf{a} = \\mathbf{b}$, where $a = (a_0, a_1)$ are the coefficients of the regressed line. Use `numpy`'s built-in linear solver.\n\n\n```python\nsol = np.linalg.solve(A,b)\n# print the solution. Note that in this case, the solution should contain two values, a0 and a1, respectively.\nprint(sol)\n```\n\n [123.20779221 1.004329 ]\n\n\nThe variable `sol` contains the solution of the system of equations. It is a list of two entries corresponding to the coefficients $a_0$ and $a_1$ of the regressed line.\n\nWe can now plot the line, $a_0 + a_1 x$ that we just fitted into the data\n\n\n```python\n# first get the coefficients a0 and a1 from the variable sol\na0 = sol[0]\na1 = sol[1]\n\n# construct a function f(x) for the regressed line\ny_line = lambda x: a0 + a1*x\n\n# now plot the regressed line as a function of the input data xi\nplt.plot(xi, y_line(xi), label='least-squares fit')\n# plot the original data\nplt.plot(xi, yi,'ro', label='original data')\nplt.xlabel('Femur length (cm)')\nplt.ylabel('Person\\' height (cm)')\nplt.title('Height vs femur length')\nplt.legend()\nplt.grid()\n```\n\n\n \n\n \n\n\n## Standard Error\n\nThe standard error of the model quantifies the spread of the data around the regression curve. 
It is given by\n\\begin{equation}\nS_{y/x} = \\sqrt{\\frac{S_r}{n-2}} = \\sqrt{\\frac{\\sum{(y_i - f_i)^2}}{n-2}}\n\\end{equation}\n\n\n```python\nybar = np.average(yi)\nprint(ybar)\nfi = y_line(xi)\nstdev = np.sqrt(np.sum((yi - ybar)**2)/(N-1))\nprint(stdev - ybar)\nSr = np.sum((yi-fi)**2)\nSyx = np.sqrt(Sr/(N-2))\nprint(Syx)\n```\n\n 167.8\n -164.3103327124527\n 1.4317065166379408\n\n\n### R2 Value\n\nThe $R^2$ value can be computed as:\n\\begin{equation}R^2 = 1 - \\frac{\\sum{(y_i - f_i)^2}}{\\sum{(y_i - \\bar{y})^2}}\\end{equation}\nwhere\n\\begin{equation}\n\\bar{y} = \\frac{1}{N}\\sum{y_i}\n\\end{equation}\n\n\n```python\nybar = np.average(yi)\nfi = y_line(xi)\nrsq = 1 - np.sum( (yi - fi)**2 )/np.sum( (yi - ybar)**2 )\nprint(rsq)\n```\n\n 0.8503807627895221\n\n\nTo enable quick calculation of the R2 value for other functions, let's declare a function that computes the R2 value for an arbitrary model fit.\n\n\n```python\ndef rsquared(xi,yi,ymodel):\n '''\n xi: vector of length n representing the known x values.\n yi: vector of length n representing the known y values that correspond to xi.\n ymodel: a python function (of x only) that can be evaluated at xi and represents a model fit of \n the data (e.g. a regressed curve).\n '''\n ybar = np.average(yi)\n fi = ymodel(xi)\n result = 1 - np.sum( (yi - fi)**2 )/np.sum( (yi - ybar)**2 )\n return result\n```\n\n\n```python\nrsq = rsquared(xi,yi,y_line)\nprint(rsq)\n```\n\n 0.8503807627895221\n\n\n## Regression to a quadratic\nHere, we regress the data to a quadratic polynomial $a_0 + a_1 x + a_2 x^2$. Recall that, for regression to a quadratic, we must solve the following system of equations:\n\\begin{equation}\n\\label{eq:regression-line}\n\\left[\n\\begin{array}{*{20}{c}}\nN& \\sum{x_i} & \\sum{x_i^2}\\\\\n\\sum{x_i} &\\sum{x_i^2} & \\sum{x_i^3} \\\\\n\\sum{x_i^2} & \\sum{x_i^3} & \\sum{x_i^4}\n\\end{array}\n\\right]\n\\left(\n\\begin{array}{*{20}{c}}\na_0\\\\\na_1 \\\\\na_2\n\\end{array} \\right)\n= \\left( \n\\begin{array}{*{20}{c}}\n\\sum{y_i}\\\\\n\\sum {x_i y_i}\\\\\n\\sum {x_i^2 y_i}\n\\end{array} \n\\right)\n\\end{equation}\n\n\n```python\nA = np.array([[len(xi), np.sum(xi) , np.sum(xi**2) ],\n [np.sum(xi), np.sum(xi**2), np.sum(xi**3)],\n [np.sum(xi**2), np.sum(xi**3), np.sum(xi**4)] ])\nb = np.array([np.sum(yi), np.sum(xi*yi), np.sum(xi*xi*yi)])\nprint(A,b)\n```\n\n [[ 10 444 19806]\n [ 444 19806 887796]\n [ 19806 887796 39994422]] [ 1678 74596 3331896]\n\n\n\n```python\nsol = np.linalg.solve(A,b)\nprint(sol)\n```\n\n [1.28368173e+02 7.76379556e-01 2.50458155e-03]\n\n\n\n```python\na0 = sol[0]\na1 = sol[1]\na2 = sol[2]\ny_quad = lambda x: a0 + a1*x + a2*x**2\n# now plot the regressed line as a function of the input data xi\nplt.plot(xi, y_quad(xi), label='least-squares fit (quadratic)')\n# plot the original data\nplt.plot(xi, yi,'ko', label='original data')\nplt.xlabel('Femur length (cm)')\nplt.ylabel('Person\\' height (cm)')\nplt.title('Height vs femur length')\nplt.legend()\nplt.grid()\n```\n\n\n \n\n \n\n\nLet's now compute the R2 value for the quadratic fit\n\n\n```python\nrsq = rsquared(xi,yi,y_quad)\nprint(rsq)\n```\n\n 0.8504537705463819\n\n\nThis value of 85% is not much different than the one we got with a straight line fit. The data is very likely to be distributed linearly.\n\n## Using the normal equations\nFit a straight line to the data using the normal equations\n\n\n```python\nA = np.array([np.ones(len(xi)),xi]).T\nprint(A)\nATA = A.T @ A\nsol = np.linalg.solve(ATA,A.T@yi)\nprint(sol)\n```\n\n [[ 1. 40.]\n [ 1. 
41.]\n [ 1. 43.]\n [ 1. 43.]\n [ 1. 44.]\n [ 1. 44.]\n [ 1. 45.]\n [ 1. 45.]\n [ 1. 48.]\n [ 1. 51.]]\n [123.20779221 1.004329 ]\n\n\nFor a quadratic fit using the normal equations\n\n\n```python\nA = np.array([np.ones(len(xi)),xi,xi**2]).T\nATA = A.T @ A\nsol = np.linalg.solve(ATA,A.T@yi)\nprint(sol)\n```\n\n [1.28368173e+02 7.76379556e-01 2.50458155e-03]\n\n\n## Using Numpy's Polyfit\nIt is possible to repeat the previous analysis using `numpy's` `polyfit` function. Let's try it out and compare to our results\n\nWe first compare a straight line fit\n\n\n```python\ncoefs = np.polyfit(xi,yi,1)\nprint(coefs)\n```\n\n [ 1.004329 123.20779221]\n\n\nIndeed, the coefficients are the same as the ones we obtained from our straight-line fit. Polyfit returns the coefs sorted in reverse order.\n\nTo use the data returned by `polyfit`, we can construct a polynomial using `poly1d` and simply pass the coefficients returned by `polyfit`\n\n\n```python\np = np.poly1d(coefs)\n```\n\nno, simply call `p` on any value, say `p(43)`\n\n\n```python\nprint(p(43))\n```\n\n 166.39393939393938\n\n\nWe can also plot the fit\n\n\n```python\n# now plot the regressed line as a function of the input data xi\nplt.plot(xi, p(xi), label='least-squares fit (quadratic)')\n# plot the original data\nplt.plot(xi, yi,'ko', label='original data')\nplt.xlabel('Femur length (cm)')\nplt.ylabel('Person\\' height (cm)')\nplt.title('Height vs femur length')\nplt.legend()\nplt.grid()\n```\n\n\n \n\n \n\n\nWe can do the same for the quadratic fit - simply change the degree of the polynomial in `polyfit`\n\n\n```python\ncoefs = np.polyfit(xi,yi,2)\npquad = np.poly1d(coefs)\nprint(pquad(43))\n```\n\n 166.38346568926897\n\n\n## Using numpy's Least Squares\n\n\n```python\nnp.linalg.lstsq(A,yi,rcond=None)\n```\n\n\n\n\n (array([1.28368173e+02, 7.76379556e-01, 2.50458155e-03]),\n array([16.39026675]),\n 3,\n array([6.32567302e+03, 9.94238708e+00, 1.72620823e-02]))\n\n\n\n\n```python\nfrom IPython.core.display import HTML\ndef css_styling():\n styles = open(\"../../styles/custom.css\", \"r\").read()\n return HTML(styles)\ncss_styling()\n```\n\n\n\n\nCSS style adapted from https://github.com/barbagroup/CFDPython. 
Copyright (c) Barba group\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "0d78098f558ae5c530b03e13878f386dd7a4e365", "size": 229246, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "topics/regression/Least Squares Regression.ipynb", "max_stars_repo_name": "jomorodi/NumericalMethods", "max_stars_repo_head_hexsha": "e040693001941079b2e0acc12e0c3ee5c917671c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2019-03-27T05:22:34.000Z", "max_stars_repo_stars_event_max_datetime": "2021-01-27T10:49:13.000Z", "max_issues_repo_path": "topics/regression/Least Squares Regression.ipynb", "max_issues_repo_name": "jomorodi/NumericalMethods", "max_issues_repo_head_hexsha": "e040693001941079b2e0acc12e0c3ee5c917671c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "topics/regression/Least Squares Regression.ipynb", "max_forks_repo_name": "jomorodi/NumericalMethods", "max_forks_repo_head_hexsha": "e040693001941079b2e0acc12e0c3ee5c917671c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 7, "max_forks_repo_forks_event_min_datetime": "2019-12-29T23:31:56.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-28T19:04:10.000Z", "avg_line_length": 41.0100178891, "max_line_length": 209, "alphanum_fraction": 0.4860848172, "converted": true, "num_tokens": 3706, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9372107878954106, "lm_q2_score": 0.8840392710530071, "lm_q1q2_score": 0.8285311417540732}} {"text": "# Sympy\n\n\n```python\nfrom sympy import *\n\n# init_printing()\nx, y, z = symbols(\"x y z\")\n\n```\n\n\n```python\nsimplify(sin(x) ** 2 + cos(x) ** 2)\n```\n\n\n\n\n$\\displaystyle 1$\n\n\n\n\n```python\nexpand((x + 1) ** 3)\n```\n\n\n\n\n$\\displaystyle x^{3} + 3 x^{2} + 3 x + 1$\n\n\n\n\n```python\na = 3\nb = 8\nc = 2\ny = a * x ** 2 + b * x + c\n\nplot(y)\n```\n\n\n```python\nsolveset(y)\n```\n\n\n\n\n$\\displaystyle \\left\\{- \\frac{4}{3} - \\frac{\\sqrt{10}}{3}, - \\frac{4}{3} + \\frac{\\sqrt{10}}{3}\\right\\}$\n\n\n\n\n```python\ndy = y.diff(x)\n\nplot(dy)\n```\n\n\n```python\niy = integrate(y, x)\n\nplot(iy)\n```\n", "meta": {"hexsha": "b1fe1ede2a968d43399a6f4d8a8466b3b7a28ba7", "size": 163372, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "docs/Library/ThirdParty/sympy.ipynb", "max_stars_repo_name": "yoannmos/PythonGuide", "max_stars_repo_head_hexsha": "b7885f7da4193801e53edc441ecc4de9ee8ea6f7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-09-22T02:29:09.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-27T09:44:51.000Z", "max_issues_repo_path": "docs/Library/ThirdParty/sympy.ipynb", "max_issues_repo_name": "yoannmos/PythonGuide", "max_issues_repo_head_hexsha": "b7885f7da4193801e53edc441ecc4de9ee8ea6f7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/Library/ThirdParty/sympy.ipynb", "max_forks_repo_name": "yoannmos/PythonGuide", "max_forks_repo_head_hexsha": "b7885f7da4193801e53edc441ecc4de9ee8ea6f7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 781.6842105263, 
"max_line_length": 41542, "alphanum_fraction": 0.7296048282, "converted": true, "num_tokens": 225, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9637799482964025, "lm_q2_score": 0.8596637559030338, "lm_q1q2_score": 0.828526690216517}} {"text": "# \u041d\u043e\u0442\u0430\u0446\u0438\u044f \u0414\u0435\u043d\u0430\u0432\u0438\u0442\u0430-\u0425\u0430\u0440\u0442\u0435\u043d\u0431\u0435\u0440\u0433\u0430\n\n\n```python\nfrom sympy import *\ndef rz(a):\n return Matrix([\n [cos(a), -sin(a), 0, 0],\n [sin(a), cos(a), 0, 0],\n [0, 0, 1, 0],\n [0, 0, 0, 1]\n ])\n\ndef ry(a):\n return Matrix([\n [cos(a), 0, sin(a), 0],\n [0, 1, 0, 0],\n [-sin(a), 0, cos(a), 0],\n [0, 0, 0, 1]\n ])\n\ndef rx(a):\n return Matrix([\n [1, 0, 0, 0],\n [0, cos(a), -sin(a), 0],\n [0, sin(a), cos(a), 0],\n [0, 0, 0, 1]\n ])\n\ndef trs(x, y, z):\n return Matrix([\n [1, 0, 0, x],\n [0, 1, 0, y],\n [0, 0, 1, z],\n [0, 0, 0, 1]\n ])\n\ndef vec(x, y, z):\n return Matrix([\n [x],\n [y],\n [z],\n [1]\n ])\n```\n\n\u0415\u0441\u043b\u0438 \u0441\u043e\u0435\u0434\u0438\u043d\u0438\u0442\u044c \u043c\u0430\u0442\u0440\u0438\u0446\u044b \u0438 \u0432\u0438\u043d\u0442\u043e\u0432\u043e\u0435 \u0438\u0441\u0447\u0438\u0441\u043b\u0435\u043d\u0438\u0435, \u043e\u0434\u043d\u0438\u043c \u0438\u0437 \u0440\u0435\u0437\u0443\u043b\u044c\u0442\u0430\u0442\u043e\u0432 \u0431\u0443\u0434\u0435\u0442 \u043d\u043e\u0442\u0430\u0446\u0438\u044f \u0414\u0435\u043d\u0430\u0432\u0438\u0442\u0430-\u0425\u0430\u0440\u0442\u0435\u043d\u0431\u0435\u0440\u0433\u0430.\n\u041e\u043d\u0430 \u043f\u043e\u0434\u0440\u0430\u0437\u0443\u043c\u0435\u0432\u0430\u0435\u0442 \u0447\u0435\u0442\u044b\u0440\u0435 \u043f\u043e\u0441\u043b\u0435\u0434\u043e\u0432\u0430\u0442\u0435\u043b\u044c\u043d\u044b\u0445 \u043f\u0440\u0435\u043e\u0431\u0440\u0430\u0437\u043e\u0432\u043d\u0438\u044f:\n$$\nDH(\\theta, d, \\alpha, a) =\nR_z(\\theta) T_z(d) R_x(\\alpha) T_z(a)\n$$\n\n\n```python\ndef dh(theta, d, alpha, a):\n return rz(theta) * trs(0, 0, d) * rx(alpha) * trs(a, 0, 0)\n```\n\n\n```python\ntheta, d, alpha, a = symbols(\"theta_i, d_i, alpha_i, a_i\")\nsimplify(dh(theta, d, alpha, a))\n```\n\nDH-\u043f\u0430\u0440\u0430\u043c\u0435\u0442\u0440\u044b \u043e\u043f\u0438\u0441\u044b\u0432\u0430\u044e\u0442 \u043f\u043e\u0441\u043b\u0435\u0434\u043e\u0432\u0430\u0442\u0435\u043b\u044c\u043d\u044b\u0435\n- \u043f\u043e\u0432\u043e\u0440\u043e\u0442 \u0432\u043e\u043a\u0440\u0443\u0433 \u043e\u0441\u0438 $Z$ - $\\theta$\n- \u0441\u043c\u0435\u0449\u0435\u043d\u0438\u0435 \u0432\u0434\u043e\u043b\u044c \u043e\u0441\u0438 $Z$ - $d$\n- \u043f\u043e\u0432\u043e\u0440\u043e\u0442 \u0432\u043e\u043a\u0440\u0443\u0433 \u043d\u043e\u0432\u043e\u0439 \u043e\u0441\u0438 $X$ - $\\alpha$\n- \u0441\u043c\u0435\u0449\u0435\u043d\u0438\u0435 \u0432\u0434\u043e\u043b\u044c \u043d\u043e\u0432\u043e\u0439 \u043e\u0441\u0438 $X$ - $r$\n\n\u041e\u0431\u043e\u0431\u0449\u0435\u043d\u043d\u044b\u0435 \u043a\u043e\u043e\u0440\u0434\u0438\u043d\u0430\u0442\u044b \u0432\u044b\u0431\u0438\u0440\u0430\u044e\u0442\u0441\u044f \u0442\u0430\u043a, \u0447\u0442\u043e\u0431\u044b \u043f\u043e\u043f\u0430\u0434\u0430\u0442\u044c \u043d\u0430 \u0432\u0440\u0430\u0449\u0435\u043d\u0438\u0435 \u0432\u043e\u043a\u0440\u0443\u0433 / \u0441\u043c\u0435\u0449\u0435\u043d\u0438\u0435 \u0432\u0434\u043e\u043b\u044c \u043e\u0441\u0438 $Z$.\n\n\n", "meta": {"hexsha": "35dff3d0c15a3fd12a87713c721c51843dc68a5b", "size": 3242, "ext": "ipynb", "lang": "Jupyter Notebook", 
"max_stars_repo_path": "3 - DH notation.ipynb", "max_stars_repo_name": "red-hara/jupyter-dh-notation", "max_stars_repo_head_hexsha": "0ffd305b3e67ce7dd3c20f2d1c719b53251dbf58", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "3 - DH notation.ipynb", "max_issues_repo_name": "red-hara/jupyter-dh-notation", "max_issues_repo_head_hexsha": "0ffd305b3e67ce7dd3c20f2d1c719b53251dbf58", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "3 - DH notation.ipynb", "max_forks_repo_name": "red-hara/jupyter-dh-notation", "max_forks_repo_head_hexsha": "0ffd305b3e67ce7dd3c20f2d1c719b53251dbf58", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 23.4927536232, "max_line_length": 111, "alphanum_fraction": 0.4420111043, "converted": true, "num_tokens": 631, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9637799472560582, "lm_q2_score": 0.8596637487122111, "lm_q1q2_score": 0.8285266823918}} {"text": "```python\nimport loader\nfrom sympy import *\ninit_printing()\nfrom root.solver import *\n```\n\n#### Solve $y'' - 4y' + 4y = 2t^2 e^{2t}$ using the method of undetermined coefficients\n\n\n```python\ncoeffs = [1, -4, 4]\nyc, p = nth_order_const_coeff(*coeffs)\np.display()\nyp, p = undetermined_coefficients(yc, coeffs, 2 * t ** 2 * exp(2*t))\np.display()\n```\n\n\n$\\displaystyle \\text{Characteristic equation: }$\n\n\n\n$\\displaystyle r^{2} - 4 r + 4 = 0$\n\n\n\n$\\displaystyle \\text{Roots: }\\left\\{ \\begin{array}{ll}r_{1,2} = 2\\\\\\end{array} \\right.$\n\n\n\n$\\displaystyle \\text{General Solution: }$\n\n\n\n$\\displaystyle y = \\left(C_{1} + C_{2} t\\right) e^{2 t}$\n\n\n\n$\\displaystyle \\text{Find }Y(t)\\text{ that mimics the form of }g(t)$\n\n\n\n$\\displaystyle Y{\\left (t \\right )} = A_{0} t^{2} e^{2 t} + A_{1} t^{4} e^{2 t} + A_{2} t^{3} e^{2 t}$\n\n\n\n$\\displaystyle \\text{Compute successive derivatives of }Y(t)$\n\n\n\n$\\displaystyle \\begin{align*}&Y{\\left (t \\right )} = t^{2} \\left(A_{0} + A_{1} t^{2} + A_{2} t\\right) e^{2 t}\\\\&\\frac{d}{d t} Y{\\left (t \\right )} = t \\left(2 A_{0} t + 2 A_{0} + 2 A_{1} t^{3} + 4 A_{1} t^{2} + 2 A_{2} t^{2} + 3 A_{2} t\\right) e^{2 t}\\\\&\\frac{d^{2}}{d t^{2}} Y{\\left (t \\right )} = 2 \\left(2 A_{0} t^{2} + 4 A_{0} t + A_{0} + 2 A_{1} t^{4} + 8 A_{1} t^{3} + 6 A_{1} t^{2} + 2 A_{2} t^{3} + 6 A_{2} t^{2} + 3 A_{2} t\\right) e^{2 t}\\\\\\end{align*}$\n\n\n\n$\\displaystyle \\text{Plug the derivatives into the LHS and equate coefficients}$\n\n\n\n$\\displaystyle \\begin{align*}&4 t^{2} \\left(A_{0} + A_{1} t^{2} + A_{2} t\\right) e^{2 t} - 4 t \\left(2 A_{0} t + 2 A_{0} + 2 A_{1} t^{3} + 4 A_{1} t^{2} + 2 A_{2} t^{2} + 3 A_{2} t\\right) e^{2 t} + 2 \\left(2 A_{0} t^{2} + 4 A_{0} t + A_{0} + 2 A_{1} t^{4} + 8 A_{1} t^{3} + 6 A_{1} t^{2} + 2 A_{2} t^{3} + 6 A_{2} t^{2} + 3 A_{2} t\\right) e^{2 t} = 2 t^{2} e^{2 t}\\\\&2 A_{0} e^{2 t} + 12 A_{1} t^{2} e^{2 t} + 6 A_{2} t e^{2 t} = 2 t^{2} e^{2 t}\\\\\\end{align*}$\n\n\n\n$\\displaystyle \\left\\{ \\begin{array}{ll}12 A_{1} - 2 = 0\\\\0 = 0\\\\0 = 0\\\\6 A_{2} = 0\\\\2 A_{0} = 0\\\\\\end{array} \\right.$\n\n\n\n$\\displaystyle \\text{Solve for the undetermined 
coefficients}$\n\n\n\n$\\displaystyle \\left\\{ \\begin{array}{ll}A_{0} = 0\\\\A_{1} = \\frac{1}{6}\\\\A_{2} = 0\\\\\\end{array} \\right.$\n\n\n\n$\\displaystyle \\text{Substitute the coefficients to get the particular solution}$\n\n\n\n$\\displaystyle y_{p} = \\frac{t^{4} e^{2 t}}{6}$\n\n\n#### Given that ${1, \\cos(t), \\sin(t)}$ are the complementary solutions of $y''' + y' = 2\\cos(2t) + 3\\sin(t)$, find the particular solution. \n\n\n```python\nyp, p = undetermined_coefficients([1, cos(t), sin(t)], [1, 0, 1, 0], 2*cos(2*t) + 3*sin(t))\np.display()\n```\n\n\n$\\displaystyle \\text{Find }Y(t)\\text{ that mimics the form of }g(t)$\n\n\n\n$\\displaystyle Y{\\left (t \\right )} = A_{0} t \\cos{\\left (t \\right )} + A_{1} \\cos{\\left (2 t \\right )} + A_{2} t \\sin{\\left (t \\right )} + A_{3} \\sin{\\left (2 t \\right )}$\n\n\n\n$\\displaystyle \\text{Compute successive derivatives of }Y(t)$\n\n\n\n$\\displaystyle \\begin{align*}&Y{\\left (t \\right )} = A_{0} t \\cos{\\left (t \\right )} + A_{1} \\cos{\\left (2 t \\right )} + A_{2} t \\sin{\\left (t \\right )} + A_{3} \\sin{\\left (2 t \\right )}\\\\&\\frac{d}{d t} Y{\\left (t \\right )} = - A_{0} t \\sin{\\left (t \\right )} + A_{0} \\cos{\\left (t \\right )} - 2 A_{1} \\sin{\\left (2 t \\right )} + A_{2} t \\cos{\\left (t \\right )} + A_{2} \\sin{\\left (t \\right )} + 2 A_{3} \\cos{\\left (2 t \\right )}\\\\&\\frac{d^{2}}{d t^{2}} Y{\\left (t \\right )} = - A_{0} t \\cos{\\left (t \\right )} - 2 A_{0} \\sin{\\left (t \\right )} - 4 A_{1} \\cos{\\left (2 t \\right )} - A_{2} t \\sin{\\left (t \\right )} + 2 A_{2} \\cos{\\left (t \\right )} - 4 A_{3} \\sin{\\left (2 t \\right )}\\\\&\\frac{d^{3}}{d t^{3}} Y{\\left (t \\right )} = A_{0} t \\sin{\\left (t \\right )} - 3 A_{0} \\cos{\\left (t \\right )} + 8 A_{1} \\sin{\\left (2 t \\right )} - A_{2} t \\cos{\\left (t \\right )} - 3 A_{2} \\sin{\\left (t \\right )} - 8 A_{3} \\cos{\\left (2 t \\right )}\\\\\\end{align*}$\n\n\n\n$\\displaystyle \\text{Plug the derivatives into the LHS and equate coefficients}$\n\n\n\n$\\displaystyle \\begin{align*}&- 2 A_{0} \\cos{\\left (t \\right )} + 6 A_{1} \\sin{\\left (2 t \\right )} - 2 A_{2} \\sin{\\left (t \\right )} - 6 A_{3} \\cos{\\left (2 t \\right )} = 3 \\sin{\\left (t \\right )} + 2 \\cos{\\left (2 t \\right )}\\\\&- 2 A_{0} \\cos{\\left (t \\right )} + 6 A_{1} \\sin{\\left (2 t \\right )} - 2 A_{2} \\sin{\\left (t \\right )} - 6 A_{3} \\cos{\\left (2 t \\right )} = 3 \\sin{\\left (t \\right )} + 2 \\cos{\\left (2 t \\right )}\\\\\\end{align*}$\n\n\n\n$\\displaystyle \\left\\{ \\begin{array}{ll}0 = 0\\\\- 2 A_{2} - 3 = 0\\\\0 = 0\\\\6 A_{1} = 0\\\\- 6 A_{3} - 2 = 0\\\\- 2 A_{0} = 0\\\\\\end{array} \\right.$\n\n\n\n$\\displaystyle \\text{Solve for the undetermined coefficients}$\n\n\n\n$\\displaystyle \\left\\{ \\begin{array}{ll}A_{0} = 0\\\\A_{1} = 0\\\\A_{2} = - \\frac{3}{2}\\\\A_{3} = - \\frac{1}{3}\\\\\\end{array} \\right.$\n\n\n\n$\\displaystyle \\text{Substitute the coefficients to get the particular solution}$\n\n\n\n$\\displaystyle y_{p} = - \\frac{3 t}{2} \\sin{\\left (t \\right )} - \\frac{\\sin{\\left (2 t \\right )}}{3}$\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "6226536cc76e3610eccb479be3f4938623249562", "size": 11837, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/undetermined-coefficients.ipynb", "max_stars_repo_name": "kaiyingshan/ode-solver", "max_stars_repo_head_hexsha": "30c6798efe9c35a088b2c6043493470701641042", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, 
"max_stars_repo_stars_event_min_datetime": "2019-02-17T23:15:20.000Z", "max_stars_repo_stars_event_max_datetime": "2019-02-17T23:15:27.000Z", "max_issues_repo_path": "notebooks/undetermined-coefficients.ipynb", "max_issues_repo_name": "kaiyingshan/ode-solver", "max_issues_repo_head_hexsha": "30c6798efe9c35a088b2c6043493470701641042", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/undetermined-coefficients.ipynb", "max_forks_repo_name": "kaiyingshan/ode-solver", "max_forks_repo_head_hexsha": "30c6798efe9c35a088b2c6043493470701641042", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 28.8707317073, "max_line_length": 1049, "alphanum_fraction": 0.4556053054, "converted": true, "num_tokens": 2182, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9441768651485395, "lm_q2_score": 0.877476793890012, "lm_q1q2_score": 0.8284932884956626}} {"text": "```python\nfrom sympy import *\ninit_printing(use_latex='mathjax')\nx, y, z = symbols('x,y,z')\nr, theta = symbols('r,theta', positive=True)\n```\n\n## Matrices\n\nThe SymPy `Matrix` object helps us with small problems in linear algebra.\n\n\n```python\nrot = Matrix([[r*cos(theta), -r*sin(theta)],\n [r*sin(theta), r*cos(theta)]])\nrot\n```\n\n### Standard methods\n\n\n```python\nrot.det()\n```\n\n\n```python\nrot.inv()\n```\n\n\n```python\nrot.singular_values()\n```\n\n### Exercise\n\nFind the inverse of the following Matrix:\n\n$$ \\left[\\begin{matrix}1 & x\\\\y & 1\\end{matrix}\\right] $$\n\n\n```python\n# Create a matrix and use the `inv` method to find the inverse\n\n\n```\n\n### Operators\n\nThe standard SymPy operators work on matrices.\n\n\n```python\nrot * 2\n```\n\n\n```python\nrot**2\n```\n\n\n```python\nv = Matrix([[x], [y]])\nv\n```\n\n\n```python\nrot * v\n```\n\n### Exercise\n\nIn the last exercise you found the inverse of the following matrix\n\n\n```python\nM = Matrix([[1, x], [y, 1]])\nM\n```\n\n\n```python\nM.inv()\n```\n\nNow verify that this is the true inverse by multiplying the matrix times its inverse. Do you get the identity matrix back?\n\n\n```python\n# Multiply `M` by its inverse. Do you get back the identity matrix?\n\n```\n\n### Exercise\n\nWhat are the eigenvectors and eigenvalues of `M`?\n\n\n```python\n# Find the methods to compute eigenvectors and eigenvalues. Use these methods on `M`\n\n\n```\n\n### NumPy-like Item access\n\n\n```python\nrot[0, 0]\n```\n\n\n```python\nrot[:, 0]\n```\n\n\n```python\nrot[1, :]\n```\n\n### Mutation\n\nWe can change elements in the matrix.\n\n\n```python\nrot[0, 0] += 1\nrot\n```\n\n\n```python\nsimplify(rot.det())\n```\n\n\n```python\nrot.singular_values()\n```\n\n### Exercise\n\nPlay around with your matrix `M`, manipulating elements in a NumPy like way. Then try the various methods that we've talked about (or others). 
See what sort of answers you get.\n\n\n```python\n# Play with matrices\n\n\n```\n", "meta": {"hexsha": "6c4909176f1fb344ce5dc27380d9d04783219f17", "size": 6198, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "tutorial_exercises/04-Matrices.ipynb", "max_stars_repo_name": "gvvynplaine/scipy-2016-tutorial", "max_stars_repo_head_hexsha": "aa417427a1de2dcab2a9640b631b809d525d7929", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 53, "max_stars_repo_stars_event_min_datetime": "2016-06-21T21:11:02.000Z", "max_stars_repo_stars_event_max_datetime": "2021-02-04T07:51:03.000Z", "max_issues_repo_path": "tutorial_exercises/04-Matrices.ipynb", "max_issues_repo_name": "gvvynplaine/scipy-2016-tutorial", "max_issues_repo_head_hexsha": "aa417427a1de2dcab2a9640b631b809d525d7929", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 11, "max_issues_repo_issues_event_min_datetime": "2016-07-02T20:24:06.000Z", "max_issues_repo_issues_event_max_datetime": "2016-07-11T11:31:44.000Z", "max_forks_repo_path": "tutorial_exercises/04-Matrices.ipynb", "max_forks_repo_name": "gvvynplaine/scipy-2016-tutorial", "max_forks_repo_head_hexsha": "aa417427a1de2dcab2a9640b631b809d525d7929", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 36, "max_forks_repo_forks_event_min_datetime": "2016-06-25T09:04:24.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-09T06:46:01.000Z", "avg_line_length": 17.5084745763, "max_line_length": 184, "alphanum_fraction": 0.4906421426, "converted": true, "num_tokens": 518, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9441768604361741, "lm_q2_score": 0.8774767762675405, "lm_q1q2_score": 0.8284932677219415}} {"text": "### Example 3 , part B: Diffusion for non uniform material properties \n\nIn this example we will look at the diffusion equation for non uniform material properties and how to handle second-order derivatives. For this, we will reuse Devito's `.laplace` short-hand expression outlined in the previous example and demonstrate it using the examples from step 7 of the original tutorial. This example is an enhancement of `03_diffusion` in terms of having non-uniform viscosity as opposed to the constant $\\nu$. This example introduces the use of `Function` in order to create this non-uniform space.\n\nSo, the equation we are now trying to implement is\n\n$$\\frac{\\partial u}{\\partial t} = \\nu(x,y) \\frac{\\partial ^2 u}{\\partial x^2} + \\nu(x,y) \\frac{\\partial ^2 u}{\\partial y^2}$$\n\nIn our case $\\nu$ is not uniform and $\\nu(x,y)$ represents spatially variable viscosity.\nTo discretize this equation we will use central differences and reorganizing the terms yields\n\n\\begin{align}\nu_{i,j}^{n+1} = u_{i,j}^n &+ \\frac{\\nu(x,y) \\Delta t}{\\Delta x^2}(u_{i+1,j}^n - 2 u_{i,j}^n + u_{i-1,j}^n) \\\\\n&+ \\frac{\\nu(x,y) \\Delta t}{\\Delta y^2}(u_{i,j+1}^n-2 u_{i,j}^n + u_{i,j-1}^n)\n\\end{align}\n\nAs usual, we establish our baseline experiment by re-creating some of the original example runs. 
So let's start by defining some parameters.\n\n\n```python\nfrom examples.cfd import plot_field, init_hat\nimport numpy as np\n%matplotlib inline\n\n# Some variable declarations\nnx = 100\nny = 100\nnt = 1000\n\nnu = 0.15 #the value of base viscosity\n\noffset = 1 # Used for field definition\n\nvisc = np.full((nx, ny), nu) # Initialize viscosity\nvisc[nx//4-offset:nx//4+offset, 1:-1] = 0.0001 # Adding a material with different viscosity\nvisc[1:-1,nx//4-offset:nx//4+offset ] = 0.0001 \nvisc[3*nx//4-offset:3*nx//4+offset, 1:-1] = 0.0001 \n\nvisc_nb = visc[1:-1,1:-1]\n\ndx = 2. / (nx - 1)\ndy = 2. / (ny - 1)\nsigma = .25\ndt = sigma * dx * dy / nu\n\n\n# Initialize our field\n\n# Initialise u with hat function\nu_init = np.empty((nx, ny))\ninit_hat(field=u_init, dx=dx, dy=dy, value=1)\nu_init[10:-10, 10:-10] = 1.5\n\n\nzmax = 2.5 # zmax for plotting\n```\n\nWe now set up the diffusion operator as a separate function, so that we can re-use if for several runs.\n\n\n```python\ndef diffuse(u, nt ,visc):\n for n in range(nt + 1): \n un = u.copy()\n u[1:-1, 1:-1] = (un[1:-1,1:-1] + \n visc*dt / dy**2 * (un[1:-1, 2:] - 2 * un[1:-1, 1:-1] + un[1:-1, 0:-2]) +\n visc*dt / dx**2 * (un[2:,1: -1] - 2 * un[1:-1, 1:-1] + un[0:-2, 1:-1]))\n u[0, :] = 1\n u[-1, :] = 1\n u[:, 0] = 1\n u[:, -1] = 1\n```\n\nNow let's take this for a spin. In the next two cells we run the same diffusion operator for a varying number of timesteps to see our \"hat function\" dissipate to varying degrees.\n\n\n```python\n#NBVAL_IGNORE_OUTPUT\n\n# Plot material according to viscosity, uncomment to plot\nimport matplotlib.pyplot as plt\nplt.imshow(visc_nb, cmap='Greys', interpolation='nearest')\n\n# Field initialization\nu = u_init\n\nprint (\"Initial state\")\nplot_field(u, zmax=zmax)\n\ndiffuse(u, nt , visc_nb )\nprint (\"After\", nt, \"timesteps\")\nplot_field(u, zmax=zmax)\n\ndiffuse(u, nt, visc_nb)\nprint (\"After another\", nt, \"timesteps\")\nplot_field(u, zmax=zmax)\n```\n\nYou can notice that the area with lower viscosity is not diffusing its heat as quickly as the area with higher viscosity.\n\n\n```python\n#NBVAL_IGNORE_OUTPUT\n\n# Field initialization\nu = u_init\n\n\ndiffuse(u, nt , visc_nb)\nprint (\"After\", nt, \"timesteps\")\nplot_field(u, zmax=zmax)\n```\n\nExcellent. Now for the Devito part, we need to note one important detail to our previous examples: we now have a second-order derivative. So, when creating our `TimeFunction` object we need to tell it about our spatial discretization by setting `space_order=2`. 
We also use the notation `u.laplace` outlined previously to denote all second order derivatives in space, allowing us to reuse this code for 2D and 3D examples.\n\n\n```python\nfrom devito import Grid, TimeFunction, Eq, solve, Function\nfrom sympy.abc import a\nfrom sympy import nsimplify\n\n# Initialize `u` for space order 2\ngrid = Grid(shape=(nx, ny), extent=(2., 2.))\n\n# Create an operator with second-order derivatives\na = Function(name='a',grid = grid) # Define as Function\na.data[:]= visc # Pass the viscosity in order to be used in the operator.\n\n\n\nu = TimeFunction(name='u', grid=grid, space_order=2)\n\n# Create an equation with second-order derivatives\neq = Eq(u.dt, a * u.laplace)\nstencil = solve(eq, u.forward)\neq_stencil = Eq(u.forward, stencil)\n\nprint(nsimplify(eq_stencil))\n```\n\n Eq(u(t + dt, x, y), dt*((-2*u(t, x, y)/h_y**2 + u(t, x, y - h_y)/h_y**2 + u(t, x, y + h_y)/h_y**2 - 2*u(t, x, y)/h_x**2 + u(t, x - h_x, y)/h_x**2 + u(t, x + h_x, y)/h_x**2)*a(x, y) + u(t, x, y)/dt))\n\n\nGreat. Now all that is left is to put it all together to build the operator and use it on our examples. For illustration purposes we will do this in one cell, including update equation and boundary conditions.\n\n\n```python\n#NBVAL_IGNORE_OUTPUT\nfrom devito import Operator, Constant, Eq, solve, Function\n\n\n# Reset our data field and ICs\ninit_hat(field=u.data[0], dx=dx, dy=dy, value=1.)\n\n# Field initialization\nu.data[0] = u_init\n\n\n# Create an operator with second-order derivatives\na = Function(name='a',grid = grid)\na.data[:]= visc\n\neq = Eq(u.dt, a * u.laplace, subdomain=grid.interior)\nstencil = solve(eq, u.forward)\neq_stencil = Eq(u.forward, stencil)\n\n# Create boundary condition expressions\nx, y = grid.dimensions\nt = grid.stepping_dim\nbc = [Eq(u[t+1, 0, y], 1.)] # left\nbc += [Eq(u[t+1, nx-1, y], 1.)] # right\nbc += [Eq(u[t+1, x, ny-1], 1.)] # top\nbc += [Eq(u[t+1, x, 0], 1.)] # bottom\n\n\nop = Operator([eq_stencil] + bc)\nop(time=nt, dt=dt, a = a)\n\nprint (\"After\", nt, \"timesteps\")\nplot_field(u.data[0], zmax=zmax)\n\nop(time=nt, dt=dt, a = a)\nprint (\"After another\", nt, \"timesteps\")\nplot_field(u.data[0], zmax=zmax)\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "311e1da8a4a3b919849faea87deaeb9516816838", "size": 955112, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "examples/cfd/03_diffusion_nonuniform.ipynb", "max_stars_repo_name": "kristiantorres/devito", "max_stars_repo_head_hexsha": "9357d69448698fd2b7a57be6fbb400058716b532", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-01-31T10:35:49.000Z", "max_stars_repo_stars_event_max_datetime": "2020-01-31T10:35:49.000Z", "max_issues_repo_path": "examples/cfd/03_diffusion_nonuniform.ipynb", "max_issues_repo_name": "kristiantorres/devito", "max_issues_repo_head_hexsha": "9357d69448698fd2b7a57be6fbb400058716b532", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 53, "max_issues_repo_issues_event_min_datetime": "2020-11-30T07:50:14.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-10T17:06:03.000Z", "max_forks_repo_path": "examples/cfd/03_diffusion_nonuniform.ipynb", "max_forks_repo_name": "kristiantorres/devito", "max_forks_repo_head_hexsha": "9357d69448698fd2b7a57be6fbb400058716b532", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-06-02T03:31:11.000Z", "max_forks_repo_forks_event_max_datetime": "2020-06-02T03:31:11.000Z", "avg_line_length": 
2062.8768898488, "max_line_length": 168644, "alphanum_fraction": 0.9644146446, "converted": true, "num_tokens": 1876, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8933094145755218, "lm_q2_score": 0.9273632856092016, "lm_q1q2_score": 0.8284223537663883}} {"text": "This notebook is part of https://github.com/AudioSceneDescriptionFormat/splines, see also http://splines.readthedocs.io/.\n\n# Derivation of Non-Uniform Catmull--Rom Splines\n\nRecursive algorithm developed by\nBarry and Goldman (1988),\naccording to\nYuksel et al. (2011), figure 3.\n\n\n```python\nimport sympy as sp\nsp.init_printing()\n```\n\n\n```python\nfrom utility import NamedExpression, NamedMatrix\n```\n\n\n```python\nx_1, x0, x1, x2 = sp.symbols('xbm_-1 xbm:3')\n```\n\n\n```python\nt, t_1, t0, t1, t2 = sp.symbols('t t_-1 t:3')\n```\n\n\n```python\np_10 = NamedExpression('pbm_-1,0', x_1 * (t0 - t) / (t0 - t_1) + x0 * (t - t_1) / (t0 - t_1))\np_10\n```\n\n\n```python\np01 = NamedExpression('pbm_0,1', x0 * (t1 - t) / (t1 - t0) + x1 * (t - t0) / (t1 - t0))\np01\n```\n\n\n```python\np12 = NamedExpression('pbm_1,2', x1 * (t2 - t) / (t2 - t1) + x2 * (t - t1) / (t2 - t1))\np12\n```\n\n\n```python\np_101 = NamedExpression('pbm_-1,0,1', p_10.name * (t1 - t) / (t1 - t_1) + p01.name * (t - t_1) / (t1 - t_1))\np_101\n```\n\n\n```python\np012 = NamedExpression('pbm_0,1,2', p01.name * (t2 - t) / (t2 - t0) + p12.name * (t - t0) / (t2 - t0))\np012\n```\n\n\n```python\np = NamedExpression('pbm', p_101.name * (t1 - t) / (t1 - t0) + p012.name * (t - t0) / (t1 - t0))\np\n```\n\n\n```python\np = p.subs([p_101, p012]).subs([p_10, p01, p12])\np\n```\n\n\n```python\np_normalized = p.expr.subs(t, t * (t1 - t0) + t0)\n```\n\n\n```python\nM_CR = NamedMatrix(\n r'{M_\\text{CR}}',\n sp.Matrix([[c.expand().coeff(x).factor() for x in (x_1, x0, x1, x2)]\n for c in p_normalized.as_poly(t).all_coeffs()]))\n```\n\n\n```python\ndeltas = [\n (t_1, -sp.Symbol('Delta_-1')),\n (t0, 0),\n (t1, sp.Symbol('Delta0')),\n (t2, sp.Symbol('Delta0') + sp.Symbol('Delta1'))\n]\n```\n\n\n```python\nM_CR.simplify().subs(deltas).factor()\n```\n\n\n```python\nuniform = [\n (sp.Symbol('Delta_-1'), 1),\n (sp.Symbol('Delta0') , 1),\n (sp.Symbol('Delta1') , 1),\n]\n```\n\n\n```python\nM_CR.subs(deltas).subs(uniform).pull_out(sp.S.Half).expr\n```\n\n\n```python\nvelocity = p.expr.diff(t)\n```\n\n\n```python\nvelocity.subs(t, t0).subs(deltas).factor()\n```\n\n\n```python\nvelocity.subs(t, t1).subs(deltas).factor()\n```\n\nin general:\n\n\\begin{equation}\n\\boldsymbol{\\dot{x}}_i =\n\\frac{\n(t_{i+1} - t_i)^2 (\\boldsymbol{x}_i - \\boldsymbol{x}_{i-1}) +\n(t_i - t_{i-1})^2 (\\boldsymbol{x}_{i+1} - \\boldsymbol{x}_i)\n}{\n(t_{i+1} - t_i)(t_i - t_{i-1})(t_{i+1} - t_{i-1})\n}\n\\end{equation}\n\nYou might encounter another way to write the equation for $\\boldsymbol{\\dot{x}}_0$\n(e.g. at https://stackoverflow.com/a/23980479/):\n\n\n```python\n(x0 - x_1) / (t0 - t_1) - (x1 - x_1) / (t1 - t_1) + (x1 - x0) / (t1 - t0)\n```\n\n... but this is equivalent to the equation shown above:\n\n\n```python\n_.subs(deltas).factor()\n```\n\nYet another way to skin this cat -- sometimes referred to as Bessel--Overhauser -- is to define the velocity of the left and right chords:\n\n\n```python\nv_left = (x0 - x_1) / (t0 - t_1)\nv_right = (x1 - x0) / (t1 - t0)\n```\n\n... 
and then combine them in this way:\n\n\n```python\n((t1 - t0) * v_left + (t0 - t_1) * v_right) / (t1 - t_1)\n```\n\nAgain, that's the same as we had above:\n\n\n```python\n_.subs(deltas).factor()\n```\n", "meta": {"hexsha": "c9ca7563c421f0e3d40ffc577b12b7d0150afb4f", "size": 7439, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "doc/catmull-rom-non-uniform.ipynb", "max_stars_repo_name": "mgeier/splines", "max_stars_repo_head_hexsha": "f54b09479d98bf13f00a183fd9d664b5783e3864", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/catmull-rom-non-uniform.ipynb", "max_issues_repo_name": "mgeier/splines", "max_issues_repo_head_hexsha": "f54b09479d98bf13f00a183fd9d664b5783e3864", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/catmull-rom-non-uniform.ipynb", "max_forks_repo_name": "mgeier/splines", "max_forks_repo_head_hexsha": "f54b09479d98bf13f00a183fd9d664b5783e3864", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 21.1937321937, "max_line_length": 144, "alphanum_fraction": 0.4886409464, "converted": true, "num_tokens": 1245, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9196425223682085, "lm_q2_score": 0.9005297947939936, "lm_q1q2_score": 0.8281654919520736}} {"text": "```python\nfrom sympy import *\n```\n\n\n```python\nx,y,z = symbols('x y z')\ninit_printing(use_unicode=True)\n```\n\n### Basic *Calculus*\n\n\n```python\nprint('------')\n\nLimit((cos(x)-1),x,0)\nLimit((cos(x)-1),x,0).doit()\n\nprint('------')\n\nDerivative(5*x**2,x)\nDerivative(5*x**2,x).doit()\n\nprint('------')\n\nIntegral(log(x)**2,x)\nIntegral(log(x)**2,x).doit()\n```\n\n\n```python\nlimit(sin(x)/x,x,0)\nlimit(sin(x)/x,x,oo)\n```\n\n\n```python\n# use limit instead of subs (if there's a singularity)\n(x/x**x).subs(x,oo)\nlimit((x/x**x),x,oo)\n\n# 'limit' one side only \nlimit(1/x,x,0,'+')\nlimit(1/x,x,0,'-')\n```\n\n\n```python\ndiff(cos(x),x)\n\ndiff(5*x**3)\ndiff(5*x**3,x)\ndiff(5*x**3,x,1)\n\ndiff(x**2*y**2,y)\n```\n\n\n```python\ndiff(5*x**3,x,0)\ndiff(5*x**3*y**2,x,0,y,0)\n\ndiff(5*x**3*y**2,x,x,y,y) \ndiff(5*x**3*y**2,x,2,y,2)\n```\n\n\n```python\nintegrate(cos(x),x)\n\n# use 'oo' to indicates 'infinity'\nintegrate(exp(-x),(x,0,oo))\nintegrate(exp(-x**2-y**2),(x,-oo,oo),(y,-oo,oo))\n\n# oops!\nintegrate(x**x,x)\n```\n\n\n```python\nexp(sin(x))\nexp(sin(x)).series(x,0,5)\n\n# hell yeah!\nexp(x-5).series(x)\nexp(x-5).series(x,x0=5)\n```\n\n### *Solver*\n\n\n```python\n# In Sympy, any expression not in an 'Eq' \n# is automatically assumed to equal 0 by solving funcs.\n\nEq(x,y)\n\n# equals to 0? 
u can omit it!\nsolveset(Eq(x**2-5,0),x)\nsolveset(x**2-5,x)\n\n# use Eq or not is FINE \nsolveset(Eq(x**2,x),x)\nsolveset(x**2-x,x)\n```\n\n\n```python\nsolveset(Eq(x**2,x),x,domain=S.Reals)\nsolveset(sin(x)-1,x,domain=S.Reals)\n```\n\n\n```python\n# no solution exists\nsolveset(exp(x),x)\n\n# not able to find solution \n# ( C\u4ee3\u8868\u865a\u6570, \u53cdV\u4ee3\u8868\"\u4e0e\" )\nsolveset(cos(x)-x,x)\n```\n\n\n```python\nlinsolve([x+y+z-1,x+y+2*z-3],(x,y,z))\nlinsolve(Matrix(([1,1,1,1],[1,1,2,3])), (x,y,z))\n```\n\n\n```python\n# nonlinear shit\na,b,c,d = symbols('a b c d',real=True)\n\nnonlinsolve([a**2+a,a-b],[a,b])\nnonlinsolve([x*y-1,x-2],x,y)\n\nnonlinsolve([x**2+1,y**2+1],[x,y])\nnonlinsolve([x**2-2*y**2-2,x*y-2],[x,y])\n```\n\n\n```python\n# differential equations\nf,g = symbols('f g',cls=Function)\n\nf(x)\nf(g(x))\n```\n\n\n```python\neq = Eq(f(x).diff(x,2) - 2*f(x).diff(x) + f(x),sin(x))\n\neq\ndsolve(eq,f(x))\n\ndsolve(Eq(f(x).diff(x)))\ndsolve(f(x).diff(x)*(1-sin(f(x))),f(x))\n```\n", "meta": {"hexsha": "d7ae9098dbbb71f16462b87abe40888f4cb784c5", "size": 90152, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "sympy-part03-calc-solver.ipynb", "max_stars_repo_name": "codingEzio/code_python_learn_math", "max_stars_repo_head_hexsha": "bd7869d05e1b4ec250cc5fa13470a960b299654e", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "sympy-part03-calc-solver.ipynb", "max_issues_repo_name": "codingEzio/code_python_learn_math", "max_issues_repo_head_hexsha": "bd7869d05e1b4ec250cc5fa13470a960b299654e", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "sympy-part03-calc-solver.ipynb", "max_forks_repo_name": "codingEzio/code_python_learn_math", "max_forks_repo_head_hexsha": "bd7869d05e1b4ec250cc5fa13470a960b299654e", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 87.4413191077, "max_line_length": 5092, "alphanum_fraction": 0.826082616, "converted": true, "num_tokens": 871, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9591542840900508, "lm_q2_score": 0.8633916047011595, "lm_q1q2_score": 0.8281257564965008}} {"text": "# Getting Started with Equations\nEquations are calculations in which one or more variables represent unknown values. In this notebook, you'll learn some fundamental techniques for solving simple equations.\n\n## One Step Equations\nConsider the following equation:\n\n\\begin{equation}x + 16 = -25\\end{equation}\n\nThe challenge here is to find the value for **x**, and to do this we need to *isolate the variable*. In this case, we need to get **x** onto one side of the \"=\" sign, and all of the other values onto the other side. To accomplish this we'll follow these rules:\n1. Use opposite operations to cancel out the values we don't want on one side. In this case, the left side of the equation includes an addition of 16, so we'll cancel that out by subtracting 16 and the left side of the equation becomes **x + 16 - 16**.\n2. Whatever you do to one side, you must also do to the other side. 
In this case, we subtracted 16 from the left side, so we must also subtract 16 from the right side of the equation, which becomes **-25 - 16**.\nOur equation now looks like this:\n\n\\begin{equation}x + 16 - 16 = -25 - 16\\end{equation}\n\nNow we can calculate the values on both side. On the left side, 16 - 16 is 0, so we're left with:\n\n\\begin{equation}x = -25 - 16\\end{equation}\n\nWhich yields the result **-41**. Our equation is now solved, as you can see here:\n\n\\begin{equation}x = -41\\end{equation}\n\nIt's always good practice to verify your result by plugging the variable value you've calculated into the original equation and ensuring that it holds true. We can easily do that by using some simple Python code.\n\nTo verify the equation using Python code, place the cursor in the following cell and then click the ►| button in the toolbar.\n\n\n```python\nx = -41\nx + 16 == -25\n```\n\n## Two-Step Equations\nThe previous example was fairly simple - you could probably work it out in your head. So what about something a little more complex?\n\n\\begin{equation}3x - 2 = 10 \\end{equation}\n\nAs before, we need to isolate the variable **x**, but this time we'll do it in two steps. The first thing we'll do is to cancel out the *constants*. A constant is any number that stands on its own, so in this case the 2 that we're subtracting on the left side is a constant. We'll use an opposite operation to cancel it out on the left side, so since the current operation is to subtract 2, we'll add 2; and of course whatever we do on the left side we also need to do on the right side, so after the first step, our equation looks like this:\n\n\\begin{equation}3x - 2 + 2 = 10 + 2 \\end{equation}\n\nNow the -2 and +2 on the left cancel one another out, and on the right side, 10 + 2 is 12; so the equation is now:\n\n\\begin{equation}3x = 12 \\end{equation}\n\nOK, time for step two - we need to deal with the *coefficients* - a coefficient is a number that is applied to a variable. In this case, our expression on the left is 3x, which means x multiplied by 3; so we can apply the opposite operation to cancel it out as long as we do the same to the other side, like this:\n\n\\begin{equation}\\frac{3x}{3} = \\frac{12}{3} \\end{equation}\n\n3x ÷ 3 is x, so we've now isolated the variable\n\n\\begin{equation}x = \\frac{12}{3} \\end{equation}\n\nAnd we can calculate the result as 12/3 which is **4**:\n\n\\begin{equation}x = 4 \\end{equation}\n\nLet's verify that result using Python:\n\n\n```python\nx = 4\n3*x - 2 == 10\n```\n\n## Combining Like Terms\nLike terms are elements of an expression that relate to the same variable or constant (with the same *order* or *exponential*, which we'll discuss later). For example, consider the following equation:\n\n\\begin{equation}\\textbf{5x} + 1 \\textbf{- 2x} = 22 \\end{equation}\n\nIn this equation, the left side includes the terms **5x** and **- 2x**, both of which represent the variable **x** multiplied by a coefficent. Note that we include the sign (+ or -) in front of the value.\n\nWe can rewrite the equation to combine these like terms:\n\n\\begin{equation}\\textbf{5x - 2x} + 1 = 22 \\end{equation}\n\nWe can then simply perform the necessary operations on the like terms to consolidate them into a single term:\n\n\\begin{equation}\\textbf{3x} + 1 = 22 \\end{equation}\n\nNow, we can solve this like any other two-step equation. 
First we'll remove the constants from the left side - in this case, there's a constant expression that adds 1, so we'll use the opposite operation to remove it and do the same on the other side:\n\n\\begin{equation}3x + 1 - 1 = 22 - 1 \\end{equation}\n\nThat gives us:\n\n\\begin{equation}3x = 21 \\end{equation}\n\nThen we'll deal with the coefficients - in this case x is multiplied by 3, so we'll divide by 3 on boths sides to remove that:\n\n\\begin{equation}\\frac{3x}{3} = \\frac{21}{3} \\end{equation}\n\nThis give us our answer:\n\n\\begin{equation}x = 7 \\end{equation}\n\n\n```python\nx = 7\n5*x + 1 - 2*x == 22\n```\n\n## Working with Fractions\nSome of the steps in solving the equations above have involved working wth fractions - which in themselves are actually just division operations. Let's take a look at an example of an equation in which our variable is defined as a fraction:\n\n\\begin{equation}\\frac{x}{3} + 1 = 16 \\end{equation}\n\nWe follow the same approach as before, first removing the constants from the left side - so we'll subtract 1 from both sides.\n\n\\begin{equation}\\frac{x}{3} = 15 \\end{equation}\n\nNow we need to deal with the fraction on the left so that we're left with just **x**. The fraction is x/3 which is another way of saying *x divided by 3*, so we can apply the opposite operation to both sides. In this case, we need to multiply both sides by the denominator under our variable, which is 3. To make it easier to work with a term that contains fractions, we can express whole numbers as fractions with a denominator of 1; so on the left, we can express 3 as 3/1 and multiply it with x/3. Note that the notation for mutiplication is a **•** symbol rather than the standard *x* multiplication operator (which would cause confusion with the variable **x**) or the asterisk symbol used by most programming languages.\n\n\\begin{equation}\\frac{3}{1} \\cdot \\frac{x}{3} = 15 \\cdot 3 \\end{equation}\n\nThis gives us the following result:\n\n\\begin{equation}x = 45 \\end{equation}\n\nLet's verify that with some Python code:\n\n\n```python\nx = 45\nx/3 + 1 == 16\n```\n\nLet's look at another example, in which the variable is a whole number, but its coefficient is a fraction:\n\n\\begin{equation}\\frac{2}{5}x + 1 = 11 \\end{equation}\n\nAs usual, we'll start by removing the constants from the variable expression; so in this case we need to subtract 1 from both sides:\n\n\\begin{equation}\\frac{2}{5}x = 10 \\end{equation}\n\nNow we need to cancel out the fraction. The expression equates to two-fifths times x, so the opposite operation is to divide by 2/5; but a simpler way to do this with a fraction is to multiply it by its *reciprocal*, which is just the inverse of the fraction, in this case 5/2. Of course, we need to do this to both sides:\n\n\\begin{equation}\\frac{5}{2} \\cdot \\frac{2}{5}x = \\frac{10}{1} \\cdot \\frac{5}{2} \\end{equation}\n\nThat gives us the following result:\n\n\\begin{equation}x = \\frac{50}{2} \\end{equation}\n\nWhich we can simplify to:\n\n\\begin{equation}x = 25 \\end{equation}\n\nWe can confirm that with Python:\n\n\n```python\nx = 25\n2/5 * x + 1 ==11\n```\n\n## Equations with Variables on Both Sides\nSo far, all of our equations have had a variable term on only one side. However, variable terms can exist on both sides. \n\nConsider this equation:\n\n\\begin{equation}3x + 2 = 5x - 1 \\end{equation}\n\nThis time, we have terms that include **x** on both sides. 
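As an aside, equations like this one are also easy to check with a computer algebra system. Here's a minimal sketch using SymPy (assuming the `sympy` package is installed; it isn't used elsewhere in this notebook, so treat it purely as an optional cross-check):

```python
from sympy import symbols, Eq, solve

# Note: this rebinds the Python name x to a symbolic variable
x = symbols('x')

# Ask SymPy for the value of x that satisfies 3x + 2 = 5x - 1
solve(Eq(3*x + 2, 5*x - 1), x)
```

We'll arrive at the same value by working through the algebra by hand.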
Let's take exactly the same approach to solving this kind of equation as we did for the previous examples. First, let's deal with the constants by adding 1 to both sides. That gets rid of the -1 on the right:\n\n\\begin{equation}3x + 3 = 5x \\end{equation}\n\nNow we can eliminate the variable expression from one side by subtracting 3x from both sides. That gets rid of the 3x on the left:\n\n\\begin{equation}3 = 2x \\end{equation}\n\nNext, we can deal with the coefficient by dividing both sides by 2:\n\n\\begin{equation}\\frac{3}{2} = x \\end{equation}\n\nNow we've isolated x. It looks a little strange because we usually have the variable on the left side, so if it makes you more comfortable you can simply reverse the equation:\n\n\\begin{equation}x = \\frac{3}{2} \\end{equation}\n\nFinally, this answer is correct as it is; but 3/2 is an improper fraction. We can simplify it to:\n\n\\begin{equation}x = 1\\frac{1}{2} \\end{equation}\n\nSo x is 11/2 (which is of course 1.5 in decimal notation).\nLet's check it in Python:\n\n\n```python\nx = 1.5\n3*x + 2 == 5*x -1\n```\n\n## Using the Distributive Property\nThe distributive property is a mathematical law that enables us to distribute the same operation to terms within parenthesis. For example, consider the following equation:\n\n\\begin{equation}\\textbf{4(x + 2)} + \\textbf{3(x - 2)} = 16 \\end{equation}\n\nThe equation includes two operations in parenthesis: **4(*x* + 2)** and **3(*x* - 2)**. Each of these operations consists of a constant by which the contents of the parenthesis must be multipled: for example, 4 times (*x* + 2). The distributive property means that we can achieve the same result by multiplying each term in the parenthesis and adding the results, so for the first parenthetical operation, we can multiply 4 by *x* and add it to 4 times +2; and for the second parenthetical operation, we can calculate 3 times *x* + 3 times -2). 
Note that the constants in the parenthesis include the sign (+ or -) that preceed them:\n\n\\begin{equation}4x + 8 + 3x - 6 = 16 \\end{equation}\n\nNow we can group our like terms:\n\n\\begin{equation}7x + 2 = 16 \\end{equation}\n\nThen we move the constant to the other side:\n\n\\begin{equation}7x = 14 \\end{equation}\n\nAnd now we can deal with the coefficient:\n\n\\begin{equation}\\frac{7x}{7} = \\frac{14}{7} \\end{equation}\n\nWhich gives us our anwer:\n\n\\begin{equation}x = 2 \\end{equation}\n\nHere's the original equation with the calculated value for *x* in Python:\n\n\n```python\nx = 2\n4*(x + 2) + 3*(x - 2) == 16\n```\n", "meta": {"hexsha": "464dbf8847633b1846aede331500531d10e27bc3", "size": 13875, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Python/Module01/01-01-Introduction to Equations.ipynb", "max_stars_repo_name": "joelgenter/Essential-Math", "max_stars_repo_head_hexsha": "2e76546a82fb0ad2b8698c7dc0f48f0aad0762bb", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 33, "max_stars_repo_stars_event_min_datetime": "2018-01-11T20:44:52.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-17T16:10:41.000Z", "max_issues_repo_path": "Python/Module01/01-01-Introduction to Equations.ipynb", "max_issues_repo_name": "joelgenter/Essential-Math", "max_issues_repo_head_hexsha": "2e76546a82fb0ad2b8698c7dc0f48f0aad0762bb", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 5, "max_issues_repo_issues_event_min_datetime": "2018-11-19T23:54:27.000Z", "max_issues_repo_issues_event_max_datetime": "2018-11-20T00:15:39.000Z", "max_forks_repo_path": "Python/Module01/01-01-Introduction to Equations.ipynb", "max_forks_repo_name": "joelgenter/Essential-Math", "max_forks_repo_head_hexsha": "2e76546a82fb0ad2b8698c7dc0f48f0aad0762bb", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 35, "max_forks_repo_forks_event_min_datetime": "2018-03-08T15:42:32.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-03T06:11:43.000Z", "avg_line_length": 42.0454545455, "max_line_length": 805, "alphanum_fraction": 0.6111711712, "converted": true, "num_tokens": 2915, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9473810421953309, "lm_q2_score": 0.8740772236840656, "lm_q1q2_score": 0.8280841911330115}} {"text": "```python\nimport sympy\nfrom IPython.display import display\nimport sympy as sp\nfrom sympy import *\nsympy.init_printing(use_latex=True)\nxl, yl, d, x, y, theta, T, v_k, om_k, w_k, Q = symbols('xl yl d x y theta_k-1 T v_k om_k w_k Q')\nX_k = sp.Matrix([[x],[y],[theta]]) + (T * sp.Matrix([[cos(theta),0], [sin(theta), 0], [0, 1]])@(sp.Matrix([[v_k], [om_k]])))\nstate = sp.Matrix([x, y, theta])\ndisplay(state)\nF=X_k.jacobian(state)\ndisplay(F)\n```\n\n\n$$\\left[\\begin{matrix}x\\\\y\\\\\\theta_{k-1}\\end{matrix}\\right]$$\n\n\n\n$$\\left[\\begin{matrix}1 & 0 & - T v_{k} \\sin{\\left (\\theta_{k-1} \\right )}\\\\0 & 1 & T v_{k} \\cos{\\left (\\theta_{k-1} \\right )}\\\\0 & 0 & 1\\end{matrix}\\right]$$\n\n\n\n```python\n#xl, xk, theta, yl, yk, d = symbols('x_l x_k theta_k y_l y_k d')\nz = sp.Matrix([[sqrt(((xl-x-d*cos(theta))**2) + ((yl-y-d*sin(theta))**2))], [atan2((yl-y-d*sin(theta)), (xl-x-d*cos(theta))) - theta]])\ndisplay(z)\nH = z.jacobian(sp.Matrix([x, y, theta]))\ndisplay(H)\n```\n\n\n$$\\left[\\begin{matrix}\\sqrt{\\left(- d \\sin{\\left (\\theta_{k-1} \\right )} - y + yl\\right)^{2} + \\left(- d \\cos{\\left (\\theta_{k-1} \\right )} - x + xl\\right)^{2}}\\\\- \\theta_{k-1} + \\operatorname{atan_{2}}{\\left (- d \\sin{\\left (\\theta_{k-1} \\right )} - y + yl,- d \\cos{\\left (\\theta_{k-1} \\right )} - x + xl \\right )}\\end{matrix}\\right]$$\n\n\n\n$$\\left[\\begin{matrix}\\frac{d \\cos{\\left (\\theta_{k-1} \\right )} + x - xl}{\\sqrt{\\left(- d \\sin{\\left (\\theta_{k-1} \\right )} - y + yl\\right)^{2} + \\left(- d \\cos{\\left (\\theta_{k-1} \\right )} - x + xl\\right)^{2}}} & \\frac{d \\sin{\\left (\\theta_{k-1} \\right )} + y - yl}{\\sqrt{\\left(- d \\sin{\\left (\\theta_{k-1} \\right )} - y + yl\\right)^{2} + \\left(- d \\cos{\\left (\\theta_{k-1} \\right )} - x + xl\\right)^{2}}} & \\frac{- d \\left(- d \\sin{\\left (\\theta_{k-1} \\right )} - y + yl\\right) \\cos{\\left (\\theta_{k-1} \\right )} + d \\left(- d \\cos{\\left (\\theta_{k-1} \\right )} - x + xl\\right) \\sin{\\left (\\theta_{k-1} \\right )}}{\\sqrt{\\left(- d \\sin{\\left (\\theta_{k-1} \\right )} - y + yl\\right)^{2} + \\left(- d \\cos{\\left (\\theta_{k-1} \\right )} - x + xl\\right)^{2}}}\\\\- \\frac{d \\sin{\\left (\\theta_{k-1} \\right )} + y - yl}{\\left(- d \\sin{\\left (\\theta_{k-1} \\right )} - y + yl\\right)^{2} + \\left(- d \\cos{\\left (\\theta_{k-1} \\right )} - x + xl\\right)^{2}} & - \\frac{- d \\cos{\\left (\\theta_{k-1} \\right )} - x + xl}{\\left(- d \\sin{\\left (\\theta_{k-1} \\right )} - y + yl\\right)^{2} + \\left(- d \\cos{\\left (\\theta_{k-1} \\right )} - x + xl\\right)^{2}} & \\frac{d \\left(d \\sin{\\left (\\theta_{k-1} \\right )} + y - yl\\right) \\sin{\\left (\\theta_{k-1} \\right )}}{\\left(- d \\sin{\\left (\\theta_{k-1} \\right )} - y + yl\\right)^{2} + \\left(- d \\cos{\\left (\\theta_{k-1} \\right )} - x + xl\\right)^{2}} - \\frac{d \\left(- d \\cos{\\left (\\theta_{k-1} \\right )} - x + xl\\right) \\cos{\\left (\\theta_{k-1} \\right )}}{\\left(- d \\sin{\\left (\\theta_{k-1} \\right )} - y + yl\\right)^{2} + \\left(- d \\cos{\\left (\\theta_{k-1} \\right )} - x + xl\\right)^{2}} - 1\\end{matrix}\\right]$$\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "6d9033f8bd551401375ccf39ca2de162adf98212", "size": 8000, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": 
"Sate_Estimation_and_Localization_for_Self_Driving_Cars/module2/Jacobian.ipynb", "max_stars_repo_name": "arpan-99/self_driving_cars_specialization", "max_stars_repo_head_hexsha": "bbca885557898c4428bf84c3af47c327984e729d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-07-02T13:46:12.000Z", "max_stars_repo_stars_event_max_datetime": "2020-07-02T13:46:12.000Z", "max_issues_repo_path": "Sate_Estimation_and_Localization_for_Self_Driving_Cars/module2/Jacobian.ipynb", "max_issues_repo_name": "arpan-99/self_driving_cars_specialization", "max_issues_repo_head_hexsha": "bbca885557898c4428bf84c3af47c327984e729d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Sate_Estimation_and_Localization_for_Self_Driving_Cars/module2/Jacobian.ipynb", "max_forks_repo_name": "arpan-99/self_driving_cars_specialization", "max_forks_repo_head_hexsha": "bbca885557898c4428bf84c3af47c327984e729d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 50.6329113924, "max_line_length": 1807, "alphanum_fraction": 0.352, "converted": true, "num_tokens": 1241, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9653811571768047, "lm_q2_score": 0.8577680995361899, "lm_q1q2_score": 0.8280731605195957}} {"text": "# Fourier Transforms in Python\n\nA fundamental skill for anyone working on signal/image related data is the ability to analyze the frequencies (and strength of thsoe frequencies making up a signal). There are a few assumptions that we have to consider before taking a Fourier Transform.\n\n1. The underlying signal is periodic.\n2. The integral overal the entire input space (from $-\\infty$ to $\\infty$) is finite.\n\nIf you need a primer to remind you about Fourier Transforms, the [Wikipedia](https://en.wikipedia.org/wiki/Fourier_transform) and [Math World](https://mathworld.wolfram.com/FourierTransform.html) articles are a good place to start. The Fourier Transform and Inverse Fourier Transform are defined as\n\n\\begin{align}\n H(\\omega) &=\n \\mathcal{F}\\left[h(t)\\right] &=\n \\int_{-\\infty}^{\\infty} h(t) e^{-i \\omega t} dt \\\\\n h(t) &=\n \\mathcal{F}^{-1}\\left[H(\\omega)\\right] &=\n \\frac{1}{2\\pi} \\int_{-\\infty}^{\\infty} H(\\omega) e^{i \\omega t} dt \\\\\n\\end{align}\n\nrespectively.\n\nNow, when it comes to numerical programming and data analysis, we do not have a *continuous* signal to analyze (for which the equations above are derived). Instead, we have a *disccrete* signal for which we collect data at regular intervals. Therefore, we need likewise need a *discrete* Fourier Transform (DFT) which is defined as\n\n\\begin{align}\n F_n &=\n \\sum_{k=0}^{N-1} f_k e^{-2 \\pi i n k / N} \\\\\n f_k &=\n \\frac{1}{N} \\sum_{n=0}^{N-1} F_n e^{2 \\pi i n k / N} \\\\\n\\end{align}\n\nwhere $f_k$ and $F_n$ are the signals in the two different domains, respectively (such as time and frequency domains).\n\nThe final piece of information that we will need is the definition of the power spectrum which is what we will use to measure the strength of each given frequency. For the discreet transforms, the power spectrum is defined as \n\n\\begin{equation}\n S = F_n^* F_n.\n\\end{equation}\n\nPerhaps this will be more convenient to understand with an example. 
Let's dive right in.\n\n## Imports\n\n\n```python\n# Python Imports\n\n# 3rd Party Imports\nimport numpy as np\nimport pandas as pd\nfrom scipy.signal import periodogram\nfrom matplotlib import pyplot as plt\n```\n\n## Fourier Transform Example\n\n### Signal Creation\n\nLet's begin by creating a signal to analyze. I'll define the underlying signal as \n\n\\begin{equation}\n x(t) = 5 \\sin\\left( 2 \\pi f_1 t \\right) + 7 \\sin\\left( 2 \\pi f_2 t \\right)\n\\end{equation}\n\nwhere $f_1=2$ Hz and $f_2=5$ Hz. Again, since this is a *discrete* domain, we will also have to define the time step size which we will choose $\\Delta t = 0.01$ s and we'll plot the underlying signal below.\n\n\n```python\n# Define the Variables\nf1 = 2\nf2 = 5\ndt = 0.01\nt = np.arange(0, 2, dt)\nx = 5 * np.sin(2*np.pi*f1*t) + 7 * np.sin(2*np.pi*f2*t)\n\n# Plot the Signal\n_ = plt.plot(t, x, linewidth=2)\n_ = plt.xlabel('Time (s)')\n_ = plt.ylabel('Position (cm)')\n_ = plt.title('Underlying Signal')\n```\n\nNow, to make this a little more realistic, let's add in some random Gaussian noise to this signal.\n\n\n```python\n# Get the Random Number Generator\nrng = np.random.default_rng(0)\n\n# Add the Random Numbers to the Signal\nx += 4*rng.standard_normal(x.shape)\n\n# Plot the Noisy Signal\n_ = plt.plot(t, x, linewidth=2)\n_ = plt.xlabel('Time (s)')\n_ = plt.ylabel('Position (cm)')\n_ = plt.title('Underlying Signal')\n```\n\n### Signal Analysis\n\nAt this point we are ready to start analyzing the signal. For this, we will use the Numpy Fast Fourier Transform (FFT) library.\n\n\n```python\n# Get the Fourier Transform\nxT = np.fft.rfft(x)\n```\n\nNumpy provides several helper functions to parse through this data. We will use `rfftfreq` to get the frequencies of the transformed signal `xT`.\n\n\n```python\n# Get the measured frequencies\nf = np.fft.rfftfreq(x.size, dt)\n```\n\nNow, if you attempted to plot this signal that has been transformed, you would receive a Numpy warning. This would arise due to the complex nature of the data. Due to the definition of the Fourier transform, the outputs are going to be, in general, complex. Therefore, we need a way to represent the overall magnitude of the transform. To do that, we will compute the square root of the power spectrum.\n\nNow, the [rfft](https://docs.scipy.org/doc/numpy/reference/generated/numpy.fft.rfft.html) and [rfftfreq](https://docs.scipy.org/doc/numpy/reference/generated/numpy.fft.rfftfreq.html#numpy.fft.rfftfreq) have a few nuances that we have to consider.\n\n1. The Fourier Transform is defined over all space (positive and negative frequencies), but each of these functions only returns values in the positive frequencies (i.e., half of the possible values). Therefore, we will have to multiply all of the non-zero frequencies by 2.\n2. The DFT defined gets larger with the more data points we add. Therefore, we will have to divide the transformed signal by $N$ where is $N$ is the number of datapoints in $x$.\n\n\n\n```python\n# Get the Transform Magnitudes\nxT[1:] *= 2 # Multiply the non-zero frequencies by 2.\nmagT = np.abs(xT/x.size) # Get the Magnitude of the scaled transform.\n\n# Plot the \n_ = plt.plot(f, magT)\n_ = plt.title('Signal Magnitude')\n_ = plt.ylabel('Magnitude (cm)')\n_ = plt.xlabel('Frequency (Hz)')\n```\n\nScipy provides a convenient functon that calculates the RMS Power Spectrum. Therefore, we can use this function to wrap all the steps above into a single function call. 
However, since this is the *RMS* Power Spectrum, we will have to multiply this by two and take the square root to get the magnitudes we seek.\n\n\n```python\n# Get the Power Spectrum\nf, spec = periodogram(x, 1/dt, scaling='spectrum')\n\n# Plot the Magnitudes\n_ = plt.plot(f, np.sqrt(spec*2))\n_ = plt.title('Signal Magnitude')\n_ = plt.ylabel('Magnitude (cm)')\n_ = plt.xlabel('Frequency (Hz)')\n```\n\nNote that the signal we originally created was of the form\n\n\\begin{equation}\n x(t) = 5 \\sin\\left( 2 \\pi f_1 t \\right) + 7 \\sin\\left( 2 \\pi f_2 t \\right)\n\\end{equation}\n\nwhere $f_1=2$ Hz and $f_2=5$ Hz. From the figure you can see that we recovered the frequencies and amplitudes that were used to create this signal. On both of the figures above, there is a peak approximately equal to 5 cm at $f=2$ Hz, and there is a peak approximately equal to 7 cm at $f=5$ Hz.\n\n\n## Assignment\n\nYour assignment is to study the periodicity of the total number of sunspots. I have provided the data, input lines to read in the data and the lines needed to clean the data below. I downloaded this [data](http://www.sidc.be/silso/INFO/sndtotcsv.php) from the [Sunspot Index and Long-term Solar Observations Website](http://sidc.be/silso/home).\n\n\n```python\n# Read in the Values as a Numpy array\nssDat = pd.read_csv(\n 'SN_d_tot_V2.0.csv',\n sep=';',\n header=0,\n names=['Year', 'Month', 'Day', 'YearFraction', 'nSpots', 'std', 'nObs', 'Prov'],\n usecols=[3, 4],\n skiprows=6\n).values\n\n# Indicate -1 as missing data\nssN = ssDat[:, 1]\nssN[ssN == -1] = np.NaN\nssDat[:, 1] = ssN\n\n# Interpolate Missing Data\nmsk = np.isfinite(ssDat[:, 1])\nssDat[:, 1] = np.interp(ssDat[:, 0], ssDat[msk, 0], ssDat[msk, 1])\n\n# Get the Data into the form used above\ndt = np.diff(ssDat[:, 0]).mean()\nt = ssDat[:, 0]\nx = ssDat[:, 1]\n\n# Plot the Data\n_ = plt.plot(t, x, linewidth=1)\n_ = plt.xlabel('Year')\n_ = plt.ylabel('Number of Sunspots')\n_ = plt.title('Sunspot Data')\n```\n\n### Plot the Magnitude of the Fourier Transform\n\n\n```python\n# Get the Fourier Transform\nxT = np.fft.rfft(x)\n\n# Get the measured frequencies\nf = np.fft.rfftfreq(x.size, dt)\n\n# Get the Transform Magnitudes\nxT[1:] *= 2 # Multiply the non-zero frequencies by 2.\nmagT = np.abs(xT/x.size) # Get the Magnitude of the scaled transform.\n\n# Plot the \n_ = plt.plot(f[:100], magT[:100])\n_ = plt.title('Sunspot Spectral Analysis')\n_ = plt.ylabel('Magnitude')\n_ = plt.xlabel('Frequency (Yr$^{-1}$)')\n```\n\n### Plot the Signal Magnitude using Scipy\n\n\n```python\n# Get the Power Spectrum\nf, spec = periodogram(x, 1/dt, scaling='spectrum')\n\n# Plot the Magnitudes\n_ = plt.loglog(f[1:], np.sqrt(spec*2)[1:])\n_ = plt.title('Signal Magnitude')\n_ = plt.ylabel('Magnitude')\n_ = plt.xlabel('Frequency (Yr$^{-1}$)')\n```\n\nIn the cell below, insert the fundamental period (the inverse of the frequency with the highest magnitude) for the sunspot oscillations. 
If you are having a difficult time determining the correct frequency, you may want to plot a smaller window of data.\n\n\n```python\n11\n```\n\n\n\n\n 11\n\n\n", "meta": {"hexsha": "53eac62144bd05e2c1e348d5dc65d74406fde4d3", "size": 189650, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "07-FourierTransforms/FourierTransforms-Complete.ipynb", "max_stars_repo_name": "wwaldron/NumericalPythonGuide", "max_stars_repo_head_hexsha": "8e0c2947251b9639cbc66d6462dd495c180e3faa", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-09-22T02:29:11.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-22T02:29:11.000Z", "max_issues_repo_path": "07-FourierTransforms/FourierTransforms-Complete.ipynb", "max_issues_repo_name": "wwaldron/NumericalPythonGuide", "max_issues_repo_head_hexsha": "8e0c2947251b9639cbc66d6462dd495c180e3faa", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "07-FourierTransforms/FourierTransforms-Complete.ipynb", "max_forks_repo_name": "wwaldron/NumericalPythonGuide", "max_forks_repo_head_hexsha": "8e0c2947251b9639cbc66d6462dd495c180e3faa", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 408.7284482759, "max_line_length": 35304, "alphanum_fraction": 0.9370261007, "converted": true, "num_tokens": 2357, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.951142217223021, "lm_q2_score": 0.8705972734445508, "lm_q1q2_score": 0.8280618209723667}} {"text": "# Theoretical Question\n\n## The algorithm ad its mechanism\n\nWe write the recursive function to check what kind of operation it implements on the array a.\nWe can apply this function to an array a = [1,2,3,4,5,6,7,8], that has a power of 2 as length, and see what kinds of \noutputs it returns for different specific lenght n, as n=2, n=4, n=8.\n\n\n```python\ndef splitSwap( a, l, n ):\n if n <= 1:\n return\n splitSwap(a, l, n//2)\n splitSwap(a, l+ n//2, n//2)\n swapList(a, l, n)\n \ndef swapList(a, l, n):\n for i in range(0,n//2):\n tmp = a[l + i]\n a[l + i] = a[l + n//2 + i]\n a[l + n//2 + i] = tmp\n \na = [1,2,3,4,5,6,7,8]\n\nsplitSwap(a, 0, 2)\nprint(a)\n\na = [1,2,3,4,5,6,7,8]\n\nsplitSwap(a, 0, 4)\nprint(a)\n\na = [1,2,3,4,5,6,7,8]\n\nsplitSwap(a, 0, 8)\nprint(a)\n```\n\n [2, 1, 3, 4, 5, 6, 7, 8]\n [4, 3, 2, 1, 5, 6, 7, 8]\n [8, 7, 6, 5, 4, 3, 2, 1]\n\n\nWe can consider only the case when l = 0 as written in the homework text. 
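\nAs an extra check (a sketch that reuses the splitSwap defined above; the array and the choice n = 8 are arbitrary, with n a power of 2), we can compare the result against a plain slice reversal of the first n elements:\n\n\n```python\na = list(range(1, 17))           # 16 elements; n = 8 is a power of 2\nexpected = a[:8][::-1] + a[8:]   # reverse the first 8 entries, keep the rest\nsplitSwap(a, 0, 8)\nprint(a == expected)\n```\n\n    True\n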
\nAs shown in the previous code, the algorithm takes an array a, an index l and a specific length n that could be different from the total length of the array a, len(a): it then returns the first n components of the array a in reversed order, where the first element of the first n components becomes the last and so on.\n\nWe can consider the first n components of the array a as a new array, b.\nThe algorithm does the following things:\n\n- it splits the array b into 2 arrays, one composed of the first n/2 components and the other of the second half of the components; then it continues on the 2 new arrays and splits them again, and so on until this splitting mechanism brings us to arrays of only 1 component;\n\n\n- now the algorithm can go back to the second-to-last splitting and swap all the 2-component arrays in the for loop of the swapList() function, then it can join the swapped 2-component arrays and go back to the third-to-last splitting with the new 4-component arrays, and so on;\n\n\n- on the last step it returns to the first splitting and joins 2 arrays of n/2 components, each swapped with respect to the starting 2 arrays, so we finally obtain the reversed b array by applying swapList() to this final couple of arrays.\n \nFinally the algorithm returns the first n components of the array a in reversed order and leaves the others unchanged.\n\n## Big O analysis\n\nWe can imagine the procedure described above as a tree with many layers and $2^i$ branches (the number of split arrays at step $i$) for the layer $i$.\n\nIf we denote the input parameter as $n = n_0$, the algorithm performs $n_0/2$ swap steps for each layer of the tree, and the big O calculation is the following:\n\n\begin{equation}\nT(n) = 2T(n/2) + n/2 \n\end{equation}\n\n\begin{equation}\nT(n) = 2( 2T(n/4) + n/4) + n/2 = 4T(n/4) + n/2 + n/2\n\end{equation}\n\n\begin{equation}\nT(n) = 2^iT(n/2^i) + \sum_{k=1}^i n/2\n\end{equation}\n\nThe algorithm finishes splitting the array when it arrives at:\n\n\begin{equation}\nn/2^i = 1 \implies i = \log_2(n)\n\end{equation}\n\nso the total time is:\n\n\begin{equation}\nT(n) = nT(1) + {n\over 2}\log_2(n).\n\end{equation}\n\nFinally, we have the asymptotic behavior for the splitSwap algorithm:\n\n\begin{equation}\nT(n) = O( n\log_2(n) ),\n\end{equation}\n\nbecause the linear term is asymptotically smaller than $n\log_2(n)$, so we can neglect it.\n\n## Is the algorithm optimal?\n\nThe proposed algorithm is not optimal because we can write one version with running time $T(n) = O(n)$, as shown in the following cell.\n\n\n```python\n# The following version works in the case of interest, i.e. with l = 0, and reproduces the same outputs as the\n# given version\n\ndef splitSwap( a, l, n):\n \n for i in range( 0, n//2 ):\n tmp = a[i]\n a[i] = a[n-i-1]\n a[n-i-1] = tmp\n \na = [1,2,3,4,5,6,7,8]\n\nsplitSwap(a, 0, 2)\nprint(a)\n\na = [1,2,3,4,5,6,7,8]\n\nsplitSwap(a, 0, 4)\nprint(a)\n\na = [1,2,3,4,5,6,7,8]\n\nsplitSwap(a, 0, 8)\nprint(a)\n```\n\n [2, 1, 3, 4, 5, 6, 7, 8]\n [4, 3, 2, 1, 5, 6, 7, 8]\n [8, 7, 6, 5, 4, 3, 2, 1]\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "8fe7e34d35d49be30431fe18bb3261c9dc245507", "size": 6658, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "theory.ipynb", "max_stars_repo_name": "vedatk67/ADM-HW2", "max_stars_repo_head_hexsha": "1c899777a610dd03a8ed6adacc3558ffe0a629f2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, 
"max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "theory.ipynb", "max_issues_repo_name": "vedatk67/ADM-HW2", "max_issues_repo_head_hexsha": "1c899777a610dd03a8ed6adacc3558ffe0a629f2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "theory.ipynb", "max_forks_repo_name": "vedatk67/ADM-HW2", "max_forks_repo_head_hexsha": "1c899777a610dd03a8ed6adacc3558ffe0a629f2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 28.452991453, "max_line_length": 189, "alphanum_fraction": 0.5198257735, "converted": true, "num_tokens": 1371, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8918110540642805, "lm_q2_score": 0.928408800060238, "lm_q1q2_score": 0.8279652305842746}} {"text": "# Functions\nSo far in this course we've explored equations that perform algebraic operations to produce one or more results. A *function* is a way of encapsulating an operation that takes an input and produces exactly one ouput.\n\nFor example, consider the following function definition:\n\n\\begin{equation}f(x) = x^{2} + 2\\end{equation}\n\nThis defines a function named ***f*** that accepts one input (***x***) and returns a single value that is the result calculated by the expression *x2 + 2*.\n\nHaving defined the function, we can use it for any input value. For example:\n\n\\begin{equation}f(3) = 11\\end{equation}\n\nYou've already seen a few examples of Python functions, which are defined using the **def** keyword. However, the strict definition of an algebraic function is that it must return a single value. Here's an example of defining and using a Python function that meets this criteria:\n\n\n```R\n# define a function to return x^2 + 2\nf = function(x){x**2 + 2}\n\n# call the function\nf(3)\n```\n\n\n11\n\n\nYou can use functions in equations, just like any other term. For example, consider the following equation:\n\n\\begin{equation}y = f(x) - 1\\end{equation}\n\nTo calculate a value for ***y***, we take the ***f*** of ***x*** and subtract 1. So assuming that ***f*** is defined as previously, given an ***x*** value of 4, this equation returns a ***y*** value of **17** (*f*(4) returns 42 + 2, so 16 + 2 = 18; and then the equation subtracts 1 to give us 17). Here it is in Python:\n\n\n```R\nx = 4\ny = f(x) - 1\ncat(y)\n```\n\n 17\n\nOf course, the value returned by a function depends on the input; and you can graph this with the iput (let's call it ***x***) on one axis and the output (***f(x)***) on the other.\n\n\n```R\n# Create an array of x values from -100 to 100\ndf = data.frame(x = seq(-100, 100))\ndf$y = f(df$x)\n\nlibrary(ggplot2)\nlibrary(repr)\noptions(repr.plot.width=4, repr.plot.height=3)\nggplot(df, aes(x,y)) + geom_line(color = 'magenta', size = 1) +\n xlab('x') + ylab('f(x)')\n```\n\nAs you can see (if you hadn't already figured it out), our function is a *quadratic function* - it returns a squared value that results in a parabolic graph when the output for multiple input values are plotted.\n\n## Bounds of a Function\nSome functions will work for any input and may return any output. 
For example, consider the function ***u*** defined here:\n\n\begin{equation}u(x) = x + 1\end{equation}\n\nThis function simply adds 1 to whatever input is passed to it, so it will produce a defined output for any value of ***x*** that is a *real* number; in other words, any \"regular\" number - but not an *imaginary* number like √-1, or ∞ (infinity). You can specify the set of real numbers using the symbol ${\rm I\!R}$ (note the double stroke). The values that can be used for ***x*** can be expressed as a *set*, which we indicate by enclosing all of the members of the set in \"{...}\" braces; so to indicate the set of all possible values for x such that x is a member of the set of all real numbers, we can use the following expression:\n\n\begin{equation}\{x \in \rm I\!R\}\end{equation}\n\n\n### Domain of a Function\nWe call the set of numbers for which a function can return a value its *domain*, and in this case, the domain of ***u*** is the set of all real numbers; which is actually the default assumption for most functions.\n\nNow consider the following function ***g***:\n\n\begin{equation}g(x) = (\frac{12}{2x})^{2}\end{equation}\n\nIf we use this function with an ***x*** value of **2**, we would get the output **9**; because (12 ÷ (2•2))² is 9. Similarly, if we use the value **-3** for ***x***, the output will be **4**. However, what happens when we apply this function to an ***x*** value of **0**? Anything divided by 0 is undefined, so the function ***g*** doesn't work for an ***x*** value of 0.\n\nSo we need a way to denote the domain of the function ***g*** by indicating the input values for which a defined output can be returned. Specifically, we need to restrict ***x*** to a specific set of values - in this case, any real number that is not 0. To indicate this, we can use the following notation:\n\n\begin{equation}\{x \in \rm I\!R\;\;|\;\; x \ne 0 \}\end{equation}\n\nThis is interpreted as *Any value for x where x is in the set of real numbers such that x is not equal to 0*, and we can incorporate this into the function's definition like this:\n\n\begin{equation}g(x) = (\frac{12}{2x})^{2}, \{x \in \rm I\!R\;\;|\;\; x \ne 0 \}\end{equation}\n\nOr more simply:\n\n\begin{equation}g(x) = (\frac{12}{2x})^{2},\;\; x \ne 0\end{equation}\n\nWhen you plot the output of a function, you can indicate the gaps caused by input values that are not in the function's domain by plotting an empty circle to show that the function is not defined at this point:\n\n\n```R\ng = function(x){\n ## Use vectorized ifelse to return the value or the missing value, NA\n ifelse(x != 0, (12/(2*x))^2, NA)\n}\n\n## Construct the data frame.\ndf2 = data.frame(x = seq(-100,100))\ndf2$y = g(df2$x) ## Call g(x) with the vector df2$x\n\n## Make the plot\nggplot(df2, aes(x,y)) + geom_line(color = 'magenta', size = 1) +\n annotate(\"text\", x = 0, y = 0, label = \"O\") + ## Put a symbol at the origin\n xlab('x') + ylab('g(x)')\n```\n\nNote that the function works for every value other than 0; so the function is defined for x = 0.000000001, and for x = -0.000000001; it only fails to return a defined value for exactly 0.\n\nOK, let's take another example. 
Consider this function:\n\n\\begin{equation}h(x) = 2\\sqrt{x}\\end{equation}\n\nApplying this function to a non-negative ***x*** value returns a meaningful output; but for any value where ***x*** is negative, the output is undefined.\n\nWe can indicate the domain of this function in its definition like this:\n\n\\begin{equation}h(x) = 2\\sqrt{x}, \\{x \\in \\rm I\\!R\\;\\;|\\;\\; x \\ge 0 \\}\\end{equation}\n\nThis is interpreted as *Any value for x where x is in the set of real numbers such that x is greater than or equal to 0*.\n\nOr, you might see this in a simpler format:\n\n\\begin{equation}h(x) = 2\\sqrt{x},\\;\\; x \\ge 0\\end{equation}\n\nNote that the symbol ≥ is used to indicate that the value must be *greater than **or equal to*** 0; and this means that **0** is included in the set of valid values. To indicate that the value must be *greater than 0, **not including 0***, use the > symbol. You can also use the equivalent symbols for *less than or equal to* (≤) and *less than* (<).\n\nWhen plotting a function line that marks the end of a continuous range, the end of the line is shown as a circle, which is filled if the function includes the value at that point, and unfilled if it does not.\n\nHere's the Python to plot function ***h***:\n\n\n```R\noptions(warn=-1) ## Turn off warnings from attempts to plot NAs\n\nh = function(x){\n ## Use vectorized ifelse to return the value or the missing value, NA\n ifelse(x >= 0,(2 * sqrt(x)), NA)\n}\n\n## Construct the data frame.\ndf3 = data.frame(x = seq(-100,100))\ndf3$y = h(df3$x) ## Call g(x) with the vector df2$x\n\n## Make the plot\nggplot(df3, aes(x,y)) + geom_line(color = 'magenta', size = 1) +\n annotate(\"text\", x = 0, y = 0, label = \"O\") + # Put a symbol at the origin\n xlim(-1,101) + # Limit the range of x values displayed\n xlab('x') + ylab('h(x)')\n```\n\nSometimes, a function may be defined for a specific *interval*; for example, for all values between 0 and 5:\n\n\\begin{equation}j(x) = x + 2,\\;\\; x \\ge 0 \\text{ and } x \\le 5\\end{equation}\n\nIn this case, the function is defined for ***x*** values between 0 and 5 *inclusive*; in other words, **0** and **5** are included in the set of defined values. This is known as a *closed* interval and can be indicated like this:\n\n\\begin{equation}\\{x \\in \\rm I\\!R\\;\\;|\\;\\; 0 \\le x \\le 5 \\}\\end{equation}\n\nIt could also be indicated like this:\n\n\\begin{equation}\\{x \\in \\rm I\\!R\\;\\;|\\;\\; [0,5] \\}\\end{equation}\n\nIf the condition in the function was **x > 0 and x < 5**, then the interval would be described as *open* and 0 and 5 would *not* be included in the set of defined values. 
This would be indicated using one of the following expressions:\n\n\begin{equation}\{x \in \rm I\!R\;\;|\;\; 0 \lt x \lt 5 \}\end{equation}\n\begin{equation}\{x \in \rm I\!R\;\;|\;\; (0,5) \}\end{equation}\n\nHere's function ***j*** in R:\n\n\n```R\nj = function(x){\n ## Use vectorized ifelse to return the value or the missing value, NA\n ifelse(x >= 0 & x <= 5, x + 2, NA)\n}\n\n## Construct the data frame.\ndf4 = data.frame(x = seq(-100,100))\ndf4$y = j(df4$x) ## Call j(x) with the vector df4$x\n\n## Make the plot\nsuppressWarnings( # Suppress the warnings from attempts to plot NAs\n ggplot(df4, aes(x,y)) + geom_line(color = 'magenta', size = 1) +\n # Put symbols at the ends of the lines\n geom_point(data = data.frame(x = c(0,5), y =c(2,7)), aes(x,y), color = 'magenta', size = 2) + \n xlim(-1,6) + # Limit the range of x values displayed\n xlab('x') + ylab('j(x)')\n )\n```\n\nNow, suppose we have a function like this:\n\n\begin{equation}\nk(x) = \begin{cases}\n 0, & \text{if } x = 0, \\\n 1, & \text{if } x = 100\n\end{cases}\n\end{equation}\n\nIn this case, the function has a highly restricted domain; it only returns a defined output for 0 and 100. No output for any other ***x*** value is defined. In this case, the set of the domain is:\n\n\begin{equation}\{0,100\}\end{equation}\n\nNote that this does not include all real numbers, it only includes 0 and 100.\n\nWhen we use R to plot this function, note that it only makes sense to plot a scatter plot showing the individual values returned, there is no line in between because the function is not continuous between the values within the domain. \n\n\n```R\nk = function(x){\n ## Use vectorized ifelse to return the defined value (0 or 1) or the missing value, NA\n ifelse(x == 0, 0, ifelse(x == 100, 1, NA))\n}\n\n## Construct the data frame.\ndf5 = data.frame(x = seq(-100,100))\ndf5$y = k(df5$x) ## Call k(x) with the vector df5$x\n\n## Make the plot\nsuppressWarnings( # Suppress the warnings from attempts to plot NAs\n ggplot(df5, aes(x,y)) + geom_point(color = 'magenta', size = 2) + \n xlim(-1,101) + # Limit the range of x values displayed\n xlab('x') + ylab('k(x)')\n )\n```\n\n### Range of a Function\nJust as the domain of a function defines the set of values for which the function is defined, the *range* of a function defines the set of possible outputs from the function.\n\nFor example, consider the following function:\n\n\begin{equation}p(x) = x^{2} + 1\end{equation}\n\nThe domain of this function is all real numbers. However, this is a quadratic function, so the output values will form a parabola; and since the function has no negative coefficient or constant, it will be an upward opening parabola with a vertex that has a y value of 1.\n\nSo what does that tell us? 
Well, the minimum value that will be returned by this function is 1, so it's range is:\n\n\\begin{equation}\\{p(x) \\in \\rm I\\!R\\;\\;|\\;\\; p(x) \\ge 1 \\}\\end{equation}\n\nLet's create and plot the function for a range of ***x*** values in Python:\n\n\n```R\np = function(x){x**2 + 1}\n\n# Create an array of x values from -100 to 100\ndf6 = data.frame(x = seq(-100, 100))\ndf6$y = p(df$x)\n\n# Plot the function\nggplot(df6, aes(x,y)) + geom_line(color = 'magenta', size = 1) +\n xlab('x') + ylab('p(x)')\n```\n\nNote that the ***p(x)*** values in the plot drop exponentially for ***x*** values that are negative, and then rise exponentially for positive ***x*** values; but the minimum value returned by the function (for an *x* value of 0) is **1**.\n", "meta": {"hexsha": "26c31a78db9af2ccbfc0745ceac01cab0021e8a8", "size": 44283, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "R/Module01/01-08-Functions.ipynb", "max_stars_repo_name": "joelgenter/Essential-Math", "max_stars_repo_head_hexsha": "2e76546a82fb0ad2b8698c7dc0f48f0aad0762bb", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 33, "max_stars_repo_stars_event_min_datetime": "2018-01-11T20:44:52.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-17T16:10:41.000Z", "max_issues_repo_path": "R/Module01/01-08-Functions.ipynb", "max_issues_repo_name": "joelgenter/Essential-Math", "max_issues_repo_head_hexsha": "2e76546a82fb0ad2b8698c7dc0f48f0aad0762bb", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 5, "max_issues_repo_issues_event_min_datetime": "2018-11-19T23:54:27.000Z", "max_issues_repo_issues_event_max_datetime": "2018-11-20T00:15:39.000Z", "max_forks_repo_path": "R/Module01/01-08-Functions.ipynb", "max_forks_repo_name": "joelgenter/Essential-Math", "max_forks_repo_head_hexsha": "2e76546a82fb0ad2b8698c7dc0f48f0aad0762bb", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 35, "max_forks_repo_forks_event_min_datetime": "2018-03-08T15:42:32.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-03T06:11:43.000Z", "avg_line_length": 90.9301848049, "max_line_length": 5210, "alphanum_fraction": 0.7961971863, "converted": true, "num_tokens": 3387, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9284088025362857, "lm_q2_score": 0.8918110511888303, "lm_q1q2_score": 0.8279652301228481}} {"text": "```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nmatplotlib.rcParams['figure.figsize'] = (10.0, 8.0)\n```\n\n# Phase plane analysis of dynamical systems\nWhen we have a system of two differential equations in two variables, $\\mathbf{x}' = \\mathbf{f}(\\mathbf{x})$ we have a very useful analysis technique available to us, that of the *phase plane*. The phase plane is a two-dimensional plot with the two state variables of the ODE system on the axes. Each point on the plane corresponds to a particular choice of $\\mathbf{x}$ at which we can compute a velocity vector $\\mathbf{x}'$ and plot it on the graph (if we like) as an arrow originating at $\\mathbf{x}$.\n\n### Example: Fitzhugh-Nagumo equations\nWe're going to use the Fitzhugh-Nagumo equations as our example:\n\\begin{equation}\n \\begin{split}\n \\epsilon \\frac{dv}{dt} &= f(v) - w + I_{\\text{app}} \\\\\n \\frac{dw}{dt} &= v - \\gamma w\n \\end{split}\n\\end{equation}\nwhere\n\\begin{equation*}\n f(v) = v(1 - v)(v - \\alpha)\n\\end{equation*}\nand $0 < \\alpha < 1$, $\\epsilon \\ll 1$. 
We'll use $\\alpha = 0.1$, $\\gamma = 0.5$, and $\\epsilon = 0.01$ for this example.\n\nBelow we're going to define our equations in code so we can use them to illustrate various things about the system.\n\n\n```python\n# Parameters\nalpha = 0.1\ngamma = 0.5\nepsilon = 0.1\nI_app = 0\n\n# Define the f(v) function separately\ndef f(v):\n return v*(1 - v)*(v - alpha)\n\n# Fitzhugh-Nagumo equations - these will pick up the parameter values above\ndef fhn(x):\n v, w = x[0], x[1]\n dv = 1/epsilon*(f(v) - w + I_app)\n dw = v - gamma*w\n return array([dv, dw])\n\ndef plot_vector_field():\n V, W = meshgrid(np.linspace(-1, 2, 20), np.linspace(-1, 2, 20))\n dV, dW = fhn([V, W])\n plt.quiver(V, W, dV, dW)\n plt.title('Vector field for Fitzhugh-Nagumo equations', fontsize=16)\n plt.xlabel('$v$', fontsize=16)\n plt.ylabel('$w$', fontsize=16)\n\nplot_vector_field()\n```\n\nIf we simulate our system numerically for a given choice of initial conditions. e.g. $v(0) = -1$, $w(0) = -0.5$, we can plot the resulting $v$, $w$ against each other to give a *trajectory*\n\n\n```python\nfrom scipy.integrate import odeint\n# initial conditions and simulation time\n\nv0, w0 = -1, -0.5\nt = np.linspace(0, 10, 2000)\ndef compute_trajectory(v0, w0, t):\n # scipy's odeint does the heavy lifting for us. Its ode function requires an additional\n # t input, so that's what the lambda is doing. We could have just added this to fhn at the\n # start\n X = odeint(lambda x, t: fhn(x), [v0, w0], t)\n\n # Solution comes out as a big array with two columns, so pull v and w out\n v, w = X[:, 0], X[:, 1]\n return [v, w]\n\nplot_vector_field()\nv, w = compute_trajectory(v0, w0, t)\nplt.plot(v, w)\n```\n\nNote that the trajectory is tangent to the vector field at all points.\n\n## Nullclines\nThe first thing we do in the phase plane method is to compute the *nullclines* of our system. These are the curves were either $\\frac{dv}{dt} = 0$ or $\\frac{dw}{dt} = 0$. For our example:\n\\begin{equation*}\n \\frac{dv}{dt} = 0 \\implies w = f(v) + I_{\\text{app}} = 0 \n\\end{equation*}\nand\n\\begin{equation*}\n \\frac{dw}{dt} = 0 \\implies w = \\frac{1}{\\gamma} v.\n\\end{equation*}\n\nReferring to our example, the curve $\\frac{dv}{dt} = 0$ corresponds to all points where the velocity vector has no horizontal component, trajectories can only cross this nullcline vertically. Moreover, the sign of $\\frac{dv}{dt}$ is fixed on either side of the nullcline, as the sign cannot change without passing back through it. 
For our example, $\\frac{dv}{dt}$ is *positive* when we are below the nullcline, meaning trajectories move right when we are beneath it, and *negative* when we are above the nullcline, meaning trajectories will move left.\n\n \n\n\n\n```python\ndef null_v(v):\n return f(v) + I_app\ndef null_w(v):\n return 1/gamma*v\nv_trajectory, w_trajectory = compute_trajectory(v0, w0, t)\n\nv = np.linspace(-1, 2, 200)\nplot_vector_field()\nplt.plot(v, null_v(v), label='dv/dt = 0')\nplt.plot(v, null_w(v), label='dw/dt=0')\n\nplt.plot(v_trajectory, w_trajectory, label='sample trajectory')\nplt.ylim([-1, 2])\n\ntext(-1, -0.2, 'dv/dt > 0 \\ndw/dt < 0', fontsize=16, bbox=dict(alpha=1, facecolor='white'))\ntext(0.5, -0.8, 'dv/dt > 0 \\ndw/dt > 0', fontsize=16, bbox=dict(alpha=1, facecolor='white'))\ntext(1.5, 1, 'dv/dt < 0 \\ndw/dt > 0', fontsize=16, bbox=dict(alpha=1, facecolor='white'))\ntext(-0.1, 1.5, 'dv/dt < 0 \\ndw/dt < 0', fontsize=16, bbox=dict(alpha=1, facecolor='white'))\nplt.legend(fontsize=16)\n```\n\nYou can see at this point that we can already understand quite a bit about how the system behaves. For our given trajectory, it starts in a regime where it is moving right and downward. When the trajectory crosses the $w$ nullcline (orange), it begins to move upward, but continues moving right until it hits the $v$ nullcline(blue). It crosses this nullcline vertically, and then begins to move left, still moving upward. It starts to move down after crossing the orange nullcline again, and then gradually spirals into the origin.\n\nThis simple piece of analysis (figuring out the nullclines) already gives us a lot of qualitative insight into how the ODE system behaves. Now, the next thing we have to do is look at the *steady states*, the points where the nullclines intersect.\n\n## Steady states\nThe next thing that we want to do is to understand what kind of long term behaviour the system can exhibit. For our example we see that the trajectory we plotted spirals into the origin and seems to stay there. The origin is an example of what we call a *stable steady state*, i.e. is a point $\\mathbf{x}^\\star$ satisfying\n\\begin{equation}\n \\mathbf{f}(\\mathbf{x}^*) = \\mathbf{0}.\n\\end{equation}\nand for which trajectories starting near $\\mathbf{x}^\\star$ do not depart a small neighbourhood of it.\n\nSome steady states are stable, some are unstable, and some look like a mixture. We're going to take a small digression and look at linear systems now.\n\n## Digression: Linear autonomous systems\nForget our complicated example for a second, we're going to consider some simpler systems. Consider the system \n\\begin{equation}\n \\mathbf{x}' = A \\mathbf{x}\n\\end{equation}\nwhere $A$ is a 2x2 matrix. You may recall from your earlier studies that assuming $A$ has two linearly independent eigenvectors that the solution to this system is\n\\begin{equation}\n \\mathbf{x}(t) = c_1 \\mathbf{v}_1 \\exp(\\lambda_1 t) + c_2 \\mathbf{v}_2 \\exp(\\lambda_2 t)\n\\end{equation}\nwhere $\\lambda_1, \\lambda_2$ are the eigenvalues of $A$ (may be duplicate), and $\\mathbf{v}_1, \\mathbf{v}_2$ are the two corresponding eigenvectors. The constants are found by considering the initial conditions:\n\\begin{equation*}\n \\begin{bmatrix} x_1(0) \\\\ x_2(0) \\end{bmatrix} = \n c_1 \\mathbf{v_1} + c_2 \\mathbf{v}_2\n\\end{equation*}\ni.e. 
$c_1, c_2$ are the coefficients of $\\mathbf{v}_1$ and $\\mathbf{v}_2$ when expressing the initial condition using the eigenvectors as a basis.\n\nIf the eigenvectors are real, note that if $\\mathbf{x}(0)$ is lined up with an eigenvector, then the other coefficient will be zero, and the trajectory will track this eigenvector into the origin (if $\\lambda < 0$) or out to infinity (if $\\lambda > 0$) or just stay put (if $\\lambda = 0$).\n\nThere are a few typical cases that we frequently see (we'll ignore the zero real part possibilities):\n\n### Two negative real eigenvalues: stable node\nConsidering the form of the solution, it is evident that if both eigenvalues have negative real part that $\\mathbf{x}(t) \\to \\mathbf{0}$ as $t \\to \\infty$. In this case, the system is *stable* and we call the origin a *stable node*\n\n### Two positive real eigenvalues: unstable node\nThis situation is the opposite of the previous one, no matter the initial condition the exponentials in the solution will blow up and the long term solution is infinite. We say the origin is an *unstable node*\n\n### One positive, one negative real eigenvalue: saddle node\nThis one is interesting. If we start on the eigenvector corresponding to the negative eigenvalue, then the trajectory will hone in on the origin. This eigenvector is called the *stable manifold* of the steady state, the set of points for which the steady state is stable.\n\nIf we start anywhere else, the coefficient of the positive eigenvalue term will be nonzero and so eventually the positive exponential term will dominate.\n\n### Complex (conjugate) eigenvalues, negative real parts: stable spiral\nIf $\\lambda = a \\pm ib$ with $a < 0$, then the solution takes the form\n\\begin{equation*}\n \\mathbf{x}(t) = \\exp(at)\\left(\\mathbf{u}_1 \\cos bt + \\mathbf{u}_2 \\sin bt\\right)\n\\end{equation*}\nIn this case the solution oscillates with decaying amplitude, and so in the phase plane it spirals into the origin. We call this a *stable spiral*.\n\n### Complex (conjugate) eigenvalues, positive real parts: unstable spiral\nSame as previously, except the exponential term corresponds to a growing exponential, and so trajectories spiral away from the origin. We call this an *unstable spiral*.\n\n## Back to steady states: Hartman-Grobman Theorem\nSo why did we look at linear systems? Well, turns out that if we approximate our nonlinear system by a linear system near around a steady state, then due to a cool theorem called the *Hartman-Grobman Theorem*, that the nonlinear system is guaranteed to behave qualitatively the same as the linear system in some neighbourhood of the fixed point, so long as the real parts of the eigenvalues are nonzero.\n\n### Approximating our nonlinear system by a linear one\nSo, how do we do this? Well, we can expand our system around the steady state in a 2D Taylor series. Let's assume $\\mathbf{x}^\\star$ is a steady state, i.e. $\\mathbf{f}(\\mathbf{x}^\\star) = \\mathbf{0}$. 
Then\n\\begin{align*}\n \\mathbf{f}(\\mathbf{x} - \\mathbf{x}^\\star) &= \\mathbf{f}(\\mathbf{x}^\\star) + J(\\mathbf{x}^\\star)(\\mathbf{x} - \\mathbf{x}^\\star) + \\mathcal{O}(\\lVert \\mathbf{x} - \\mathbf{x}^\\star \\rVert^2) \\\\\n &= J(\\mathbf{x}^\\star)(\\mathbf{x} - \\mathbf{x}^\\star) + \\mathcal{O}(\\lVert \\mathbf{x} - \\mathbf{x}^\\star \\rVert^2)\n\\end{align*}\nwhere $J(\\mathbf{x})$ is the Jacobian of the system, evaluated at $\\mathbf{x}$.\n\nSo, defining new coordinates $\\mathbf{u} = \\mathbf{x} - \\mathbf{x}^\\star$ so that the origin in $\\mathbf{u}$-space is the steady state, we have that\n\\begin{align*}\n \\mathbf{u}' &= J(\\mathbf{x}^\\star)\\mathbf{u} + \\text{higher order terms} \\\\\n &\\approx J(\\mathbf{x}^\\star)\\mathbf{u}.\n\\end{align*}\nThis approximation is a linear system, and so long as the eigenvalues of $J(\\mathbf{x}^\\star)$ have nonzero real parts, this approximation behaves the same way as the nonlinear system near the steady state.\n\n### Fitzhugh-Nagumo equations\nSo let's look at the steady state of the Fitzhugh-Nagumo equations. We can see that the only steady state is at the origin by inspection, so we don't need to do any fancy variable translations. We need the Jacobian. If $\\mathbf{x} = \\begin{bmatrix} v & w \\end{bmatrix}^T$, and express our system as\n\\begin{equation}\n \\begin{split}\n \\frac{dv}{dt} &= \\frac{1}{\\epsilon}\\left(f(v) - w + I_{\\text{app}}\\right) = g(v, w) \\\\\n \\frac{dw}{dt} &= v - \\gamma w = h(v, w)\n \\end{split}\n\\end{equation}\n\n\\begin{equation*}\n J(\\mathbf{0}) = \\begin{bmatrix}\n \\frac{\\partial g}{\\partial v} & \\frac{\\partial g}{\\partial w} \\\\\n \\frac{\\partial h}{\\partial v} & \\frac{\\partial h}{\\partial w}\n \\end{bmatrix}(\\mathbf{0}) = \n \\begin{bmatrix}\n \\frac{1}{\\epsilon}\\left(-3v^2 + 2(\\alpha + 1)v - \\alpha\\right) & -\\frac{1}{\\epsilon} \\\\\n 1 & -\\gamma\n \\end{bmatrix}(\\mathbf{0}) = \\begin{bmatrix}\n -\\frac{\\alpha}{\\epsilon} & -\\frac{1}{\\epsilon} \\\\\n 1 & -\\gamma\n \\end{bmatrix}\n\\end{equation*}\n\nFor our choice of parameters, this matrix has eigenvalues $\\lambda \\approx -5.25 \\pm 8.8i$, which makes the steady state of the linear system, and hence locally the nonlinear system, a *stable spiral*.\n", "meta": {"hexsha": "ef151b805c883d5561349d1c74cfd276e10b98d4", "size": 322258, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notes/Notes on dynamical systems.ipynb", "max_stars_repo_name": "rgbrown/160319", "max_stars_repo_head_hexsha": "db42bca0f626cfcdb4327568ab60152b58a7bf00", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notes/Notes on dynamical systems.ipynb", "max_issues_repo_name": "rgbrown/160319", "max_issues_repo_head_hexsha": "db42bca0f626cfcdb4327568ab60152b58a7bf00", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notes/Notes on dynamical systems.ipynb", "max_forks_repo_name": "rgbrown/160319", "max_forks_repo_head_hexsha": "db42bca0f626cfcdb4327568ab60152b58a7bf00", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 942.2748538012, "max_line_length": 126100, "alphanum_fraction": 0.9365011885, "converted": true, "num_tokens": 
3543, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9111796979521253, "lm_q2_score": 0.9086178913651383, "lm_q1q2_score": 0.8279141758079837}} {"text": "```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n```\n\n# Lab 6 Monte Carlo Methods\n\n## Introduction: Random numbers and statistics\n\n### Normal distribution\n\nA nomral distribution can be generated by `numpy.eandom.normal`\nSee details [here](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.random.normal.html).\n\nThe probability density function of the normal distribution follows a Gaussian function\n\\begin{equation}\n f(x|\\mu,\\sigma^2)=\\frac{1}{\\sqrt{2\\pi\\sigma^2}}e^{-\\frac{(x-\\mu)^2}{2\\sigma^2}}\n\\end{equation}\n\n\n```python\n# mean\nmu = 0\n# Standard deviation\nsigma = 1\n# size of variables of the normal distribution, n can beint or tuple \n# depending on if x is a vector, or a matrix, etc.\nn = 10000\n# n = (100, 100)\n# Normal distribution\nx_normal = np.random.normal(mu, sigma, n)\n# plot\nfig, axs = plt.subplots(2, 1, figsize=(8, 8))\naxs[0].plot(x_normal, 'ro')\naxs[0].set_title(r'Normal Distribution with $\\mu=$'+str(mu)+' and $\\sigma=$'+str(sigma))\n\ncount, bins, ignored = axs[1].hist(x_normal, 100, density=True)\naxs[1].plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) * \n np.exp( - (bins - mu)**2 / (2 * sigma**2) ), \n linewidth=2, color='r')\naxs[1].set_title('Probability Density')\n\nfor i in range(2): axs[i].autoscale(enable=True, axis='both', tight=True)\n```\n\n### Uniform distribution\n\nSimilarly, a uniform distribution can be constructed by `numpy.random.uniform`.\nSee [here](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.random.uniform.html).\n\n\n```python\n# the half-open interval [low, high)\nlow, high = 0, 1\n# size of variables of the normal distribution, n can beint or tuple \n# depending on if x is a vector, or a matrix, etc.\nn = 500\n# uniform distribution\nx_uni = np.random.uniform(low, high, n)\n\n# plot\nfig, axs = plt.subplots(2, 1, figsize=(8, 8))\naxs[0].plot(x_uni, 'ro')\naxs[0].set_title(r'Uniform Distribution in $[$'+str(low)+','+str(high)+r'$)$')\n\ncount, bins, ignored = axs[1].hist(x_uni, 100, density=True)\naxs[1].plot(bins, np.ones_like(bins), linewidth=2, color='r')\naxs[1].set_title('Probability Density')\nfor i in range(2): axs[i].autoscale(enable=True, axis='both', tight=True)\n```\n\n## A simulation: Throw dice\n\nThe uniform distribution random function `numpy.random.uniform` generate real numbers. \nIn order to create only integers, one need to round the random value to the nearest integer with `numpy.floor` or `numpy.ceil`.\n\nDefine the following function to simulate dice throw\n\n\n```python\ndef throwDice(N):\n '''\n To simulate throwing dice N times\n '''\n return np.floor(1 + 6*np.random.uniform(0, 1, size=N))\n```\n\n\n```python\nN = 1000\nNrepeat = 10000\nr = [np.mean(throwDice(N)) for i in range(Nrepeat)] \n\n# plot\nfig, ax = plt.subplots(1, 1, figsize=(8, 4))\nax.hist(r, 100, density=True)\nax.autoscale(enable=True, axis='both', tight=True)\n```\n\n__NOTE__: There is a random integers generator in `numpy.random.randint` which create discrete uniform random integers from _low (inclusive)_ to __high (exclusive)__.\nSee [here](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.random.randint.html).\n\n\n```python\n# Range of the dice\n# NOTE!! 
high value is not included, for a dice with six sides, high=7\nlow, high = 1, 7\n# Number of throws\nn = 1000\nx_dice = np.random.randint(low, high, n)\n\n```\n\nThen, we can simulate the same event with the following code\n\n\n```python\nN = 1000\nNrepeat = 10000\nr = [np.mean(np.random.randint(low, high, n)) for i in range(Nrepeat)] \n\n# plot\nfig, ax = plt.subplots(1, 1, figsize=(8, 4))\nax.hist(r, 100, density=True)\nax.autoscale(enable=True, axis='both', tight=True)\n```\n\n## Application: Computing an integral with Monte Carlo\n\nAn example use Monte Carlo method to compute the integration of multivariate normal distribution funciton.\n\n\n```python\ndef mcint(func, domain, N, M=30):\n \"\"\" Numerical integration using Monte Carlo method\n Parameters\n ----------\n func : function, function handler of the integrand;\n domain : numpy.ndarray, the domain of computation, \n domain = array([[-5, 5],\n [-5, 5],\n [-5, 5]])\n The dimensions of the domain is given by domain.shape[0];\n N : integer, the number of points in each realization;\n M : integer, the number of repetitions used for error estimation,\n (Recommendation, M = 30+).\n Total number of points used is thus M*N\n Returns\n -------\n r.mean() : the integral value of func in the domain\n r.std() : the error in the result (the standard deviation)\n \"\"\"\n # Get the dimensions\n dim = domain.shape[0]\n # volume of the domain\n V = abs(domain.T[0] - domain.T[1]).prod()\n \n r = np.zeros(M)\n for i in range(M):\n # generate uniform distributed random numbers within the domain\n x = np.random.uniform(domain.T[0], domain.T[1], (N, dim))\n r[i] = V * np.mean(func(x), axis=0)\n \n return r.mean(), r.std()\n\ndef fnorm(x):\n \"\"\" Normal distribution function in d-dimensions\n Parameters\n ----------\n x : numpy.ndarray, of shape (N, d), where d is the dimension and \n N is the number of realizations\n Returns\n -------\n y : numpy.ndarray, of the shape (N, 1) \n \"\"\" \n d = x.shape[1]\n y = 1/((2*np.pi)**(d/2))*np.exp(-0.5*np.sum(x**2, axis=1))\n return y\n```\n\nTake the domain in $[-4,4]\\times[-4,4]$, compute the integral using funtion `fnorm`\n\n\n```python\n# numbers of samples\nN = 1000\nM = 50\n# domain\ndomain = np.array([[-4,4],[-4,4]])\n# integrate\nintF, err = mcint(fnorm, domain, N, M)\nprint('The result of the integral with N=', str(N), 'is', '{:.5f}'.format(intF),',')\nprint('with standard deviation', '{:.5f}'.format(err), 'for', str(M), 'realizations.')\n```\n\n### Check the order of accuracy $p$ for the Monte Carlo method.\n\n\n```python\n# Change the dimension to see different results\ndim = 3\n\n# domain [-5, 5]^dim\ndomain = np.array([[-5, 5] for i in range(dim)])\n\n# take an array of N\nn = 8\nNList = 500 * 2**np.array(range(n))\nM = 50\n\n# Save the error\nerrList = np.zeros_like(NList, dtype=float)\nfor i in range(n):\n intF, err = mcint(fnorm, domain, NList[i], M)\n errList[i] = err\n \n\n# Plot\nfig, ax = plt.subplots(1, 1, figsize=(8, 4))\nax.loglog(NList, errList)\nax.autoscale(enable=True, axis='both', tight=True)\nax.set_xlabel('N')\nax.set_ylabel('Error')\n\n# Compute the order\na = np.polyfit(np.log(NList), np.log(errList),1)\np = np.round(a[0], 1)\nprint('Order of accuracy is N^p, with p=', p)\n```\n\n## Programming: Brownian motion\n\n\n```python\ndef brownian(x0, tEnd, dt):\n \"\"\"\n Generate an instance of Brownian motion\n Parameters\n ----------\n x0 : float or numpy array (or something that can be converted to a numpy array\n using numpy.asarray(x0)).\n The initial condition(s) (i.e. 
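\nSince the increments above are drawn with `scale=(dt**0.5)`, the variance of the process should grow linearly with time. The following cell is only a quick sketch of that check (the number of paths, the end time and the step size are arbitrary choices):\n\n\n```python\n# Sanity check: for many independent paths, Var[x(t_end)] should be close to t_end\nt_end, dt_check = 4.0, 0.01\npaths = brownian(np.zeros(2000), t_end, dt_check)\nprint(paths[:, -1].var())   # expect a value near t_end = 4\n```\n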
position(s)) of the Brownian motion.\n tEnd : float, the final time.\n dt : float, the time step.\n Returns\n -------\n x: A numpy array of floats with shape `x0.shape + (n,)`.\n \"\"\"\n x0 = np.asarray(x0)\n n = int(tEnd/dt)\n\n # For each element of x0, generate a sample of n numbers from a\n # normal distribution.\n r = np.random.normal(size=x0.shape + (n,), scale=(dt**0.5))\n\n # This computes the Brownian motion by forming the cumulative sum of\n # the random samples. \n x = np.cumsum(r, axis=-1)\n\n # Add the initial condition.\n x += np.expand_dims(x0, axis=-1)\n \n return x\n```\n\n\n```python\n# Total time.\nT = 10.0\n# Number of steps.\nN = 500\n# Time step size\ndt = T/N\n# Initial values of x.\nx = np.empty((2,N+1))\nx[:, 0] = 0.0\n\n# Brownian motion\nx[:, 1:] = brownian(x[:,0], T, dt)\n\n# Plot the 2D trajectory.\nfig, ax = plt.subplots(1, 1, figsize=(8, 8))\nax.plot(x[0],x[1])\nax.plot(x[0,0],x[1,0], 'go')\nax.plot(x[0,-1], x[1,-1], 'ro')\nax.set_title('2D Brownian Motion')\nax.set_xlabel('x')\nax.set_ylabel('y')\nax.axis('equal')\nax.grid(True)\n```\n", "meta": {"hexsha": "273367b0a59ceaa7a4fdaa6028bf378efde98f19", "size": 12766, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Lab/L6/Lab6.ipynb", "max_stars_repo_name": "enigne/ScientificComputingBridging", "max_stars_repo_head_hexsha": "920f3c9688ae0e7d17cffce5763289864b9cac80", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-05-04T01:15:32.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-08T15:08:27.000Z", "max_issues_repo_path": "Lab/L6/Lab6.ipynb", "max_issues_repo_name": "enigne/ScientificComputingBridging", "max_issues_repo_head_hexsha": "920f3c9688ae0e7d17cffce5763289864b9cac80", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Lab/L6/Lab6.ipynb", "max_forks_repo_name": "enigne/ScientificComputingBridging", "max_forks_repo_head_hexsha": "920f3c9688ae0e7d17cffce5763289864b9cac80", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 28.6233183857, "max_line_length": 175, "alphanum_fraction": 0.5168416105, "converted": true, "num_tokens": 2326, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9207896715436482, "lm_q2_score": 0.8991213759183765, "lm_q1q2_score": 0.827901676409755}} {"text": "```python\nimport numpy as np\nimport scipy.stats as stats\nimport scipy.special as special\n```\n\n### Binomial Distribution\n\nTake the example of the flipping of a coin. Let us label the outcome as $\\gamma$ which can take two values $0$ or $1$. We then define a parameter $\\theta$ which will give us the probability of the outcome being $\\gamma$. This is written as:\n\n\\begin{equation}\nP\\left(\\gamma|\\theta\\right) = \\theta^{\\gamma}\\left(1-\\theta\\right)^{1-\\gamma}\n\\end{equation}\n\nThe above distribution is known as the Binomial distribution or the Bernoulli distribution. This particular distribution can also be inferred in a different manner. We can consider that $\\gamma$ to be fixed by an observation and the parameter $\\theta$ to be a variable. The above equation then gives the probability of getting $\\gamma$ output for different values of $\\theta$. 
In this scenario, this function is called the likelihood function of the parameter $\\theta$\n\nIn Bayesian inference, $P\\left(\\gamma|\\theta\\right)$ is usually thought of with the data $\\gamma$ being fixed and certain and the parameter $\\theta$ to be a variable and uncertain. In this particular case, this is known as the Bernoulli likelihood function of $\\theta$. \n\n#### Fixed set of outcomes\n\nConsider the case of multiple flips. Each flip can be considered independent of each other. The joint probability would be thus a product of all individual probabilities.\n\n\\begin{align}\nP\\left(\\{\\gamma_i\\} |\\theta\\right) &=& \\Pi_i p\\left(\\gamma_i|\\theta\\right)\\\\\n &=& \\theta^{\\Sigma_i \\gamma_i}\\left(1-\\theta\\right)^{\\Sigma_i\\left(1-\\gamma_i\\right)}\\\\\n &=& \\theta^{z}\\left(1-\\theta\\right)^{N-z}\n\\end{align}\n\nwhere $N$ is the total number of tosses and $z = \\Sigma_i\\gamma_i$ is the number of heads\n\n### Conjugate Priors\n\nThe Baye's rule is given by :\n\n\\begin{equation}\nP\\left(\\gamma|\\theta\\right) = \\frac{P\\left(\\theta|\\gamma\\right)P\\left(\\theta\\right)}{\\int d\\theta' P\\left(\\theta'|\\gamma\\right)P\\left(\\theta'\\right)}\n\\end{equation}\n\nBayesian statistics involve the idea of a prior and a posterior distribution of probabilities. In the Bayes's rule given above, $P\\left(\\theta\\right)$ is the prior probability and the posterior probability upon the outcome of an experiment or an observation is given by the term on the left hand side $P\\left(\\gamma|\\theta\\right)$. \n\nIdeally we can assume any form of function for the prior distribution as long as the outcome lies in the range $[0, 1]$. However it would be mathematically far more simpler if on multiplication by the likelihood function, the posterior distribution has the same functional form as the prior. This would indeed make future posterior probaility functions very easy to caluculate. It would also be helpful if the denominator $\\int d\\theta' P\\left(\\theta'|\\gamma\\right)P\\left(\\theta'\\right)$ be analytically solvable. In the particular scenarios where the prior and posterior distributions has the same functional form, the posterior is said to be conjugate of the prior.\n\n##### Note: A conjugate prior is related to the likelihood function under consideration. If you change the likelihood function will probably need a different conjugate prior.\n\n### Conjugate function for a Bernoulli likelihood function\n\nThe outcome of a single coin toss or a multiple coin toss can be decsribed by the Bernoulli function as depicted in the previous function. Bernoulli function has the form $\\theta^{\\gamma}\\left(1-\\theta\\right)^{1-\\gamma}$. An appropriate prior function should look like $\\theta^\\left(a-1\\right)\\left(1-\\theta\\right)^\\left(b-1\\right)$ which when multiplied by the Bernoulli likelihood will have the same functional form. \n\nThe Beta distribution function given by \n\\begin{equation}\n\\beta\\left(\\theta, a, b\\right) = \\frac{\\theta^\\left(a-1\\right)\\left(1-\\theta\\right)^\\left(b-1\\right)}{B\\left(a, b\\right)}\n\\end{equation}\n\nwill be a possible conjugate prior to the Bernoulli likelihood function, here $B\\left(a, b\\right)$ is a normalizing factor so that the area under the integration goes to 1. 
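As a quick numerical sanity check (a small sketch of my own, with arbitrarily chosen $a$ and $b$), the area under $\theta^{\left(a-1\right)}\left(1-\theta\right)^{\left(b-1\right)}$ matches `scipy.special.beta(a, b)`, so the normalized density integrates to one:

```python
# Sketch: verify the normalizing constant numerically (a and b chosen arbitrarily).
from scipy.integrate import quad
import scipy.special as special

a, b = 2, 3
area, _ = quad(lambda theta: theta**(a - 1) * (1 - theta)**(b - 1), 0, 1)
print(area, special.beta(a, b))   # both are ~0.08333
print(area / special.beta(a, b))  # ~1.0, i.e. the normalized pdf has unit area
```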
In effect we will have the prior probaility as $P\\left(\\theta\\right) = \\beta\\left(\\theta, a, b\\right)$\n\nAlso the normalizer $B\\left(\\theta\\right)$ is given by:\n\\begin{equation}\nB\\left(a, b\\right) = \\int d\\theta\\text{ } \\theta^a\\left(1-\\theta\\right)^b\n\\end{equation}\n\nwhich is the beta function.\n\n#### Choosing a prior\nThe choice of a prior depends on the information available to us. For example, if we do not know anything about the coin that is about to be tossed, then probable we will assume that it is equally likely to have all values of the parameter $\\theta$. That would correspond to $P\\left(\\theta\\right) = \\frac{\\theta^a\\left(1-\\theta\\right)^b}{\\beta\\left(a, b\\right)}$ with $a=1, b=1$\n\n\n```python\nimport matplotlib.pyplot as plt\n%matplotlib inline\n```\n\n\n```python\nfrom scipy.stats import beta\n```\n\n\n```python\na, b = 1, 1\nx = np.linspace(0, 1, 100)\n\n```\n\n\n```python\nfig, ax = plt.subplots(1, 1)\nax.plot(x, beta.pdf(x, a, b)/special.beta(a,b),'r-', lw=5, alpha=0.6, label='a = {}, b = {}'.format(a,b))\nax.legend(loc = 'best')\nax.set_xlabel(r'$\\theta$')\nax.set_ylabel(r'$P\\left(\\theta\\right)$')\n```\n\nLet us look at the various shapes of priors we can choose from depending upn the previous information that might be provided to us:\n\n\n\n```python\na, b = [[0.1,1,2,3,4], [0.1,1,2,3,4]]\nfig1, ax1 = plt.subplots(len(a), len(b), sharex = 'all', sharey = 'all', figsize = (12, 9))\nfor i in range(len(a)):\n for j in range(len(b)):\n ax1[i, j].plot(x, beta.pdf(x, a[i], b[j]), label='a = {}, b = {}'.format(a[i],b[j]))\n ax1[i, j].legend(loc = 'best')\n ax1[i, j].set_xlabel(r'$\\theta$')\n ax1[i, j].set_ylabel(r'$P\\left(\\theta|a,b\\right)$')\n ax1[i, j].set_ylim(0,3)\nfig1.tight_layout()\n \n```\n\nThe above plots shows the various shape of the prior distribution functions (not normalized) that might be available to us depending on our experience and available information. In the figures shown above the total no. of tosses are $n = a + b$. We can see that for $a=4, b=4$ the distrinution is narrow around $\\theta = 0.5$ compared to the case of $a=2, b = 2$. This means that with more information we can choose a prior that has more confidence. \n\n### Some more properties of the beta distribution function\n\nThe choice of a prior is quite often dependent on the information that we have acquired and our own biases. For e.g. if we know that a coin has been minted in a government facility, then we would reasonably assume that the parameter $\\theta $ is centred around $0.5$. Depending upon our confidence in the government minting facilty we could assume prior to performing any experiment that if we had performed 200 experiments then half of it would have turned up heads and the other half tails. That would correspond to a very narrow distribution of the probability $P\\left(\\theta|a,b\\right)$ centered around $\\theta = 0.5$. Thus it would be useful to know the central tendencies of this distribution (or as matter of fact any prior distribution one might assume.)\n\nThe mean of the distribution function $B\\left(\\theta|a, b\\right)$ is :\n\\begin{equation}\n\\mu = \\frac{a}{a+b}\n\\end{equation}\nThe mode is given by :\n\n\\begin{equation}\n\\omega = \\frac{a-1}{a+b-2}\n\\end{equation}\nfor $a>1$ and $b>1$. As expected from the plots of the distribution function when $a=b$, both the mean and the mode are the same and equal to $0.5$.\n\nAnother important parameter is the spread of the beta function. This is denoted by $\\kappa = a+b$. 
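To make these quantities concrete, here is a small numerical check (my own sketch; the shape parameters below are arbitrary). It compares the built-in moments of `scipy.stats.beta` against the formulas above, and shows what happens to the spread when $a$ and $b$ are scaled up at a fixed ratio, i.e. larger $\kappa$ with the same $\mu$:

```python
# Sketch: check the mean/mode formulas and the effect of kappa = a + b
# (shape parameters chosen arbitrarily).
from scipy.stats import beta

a, b = 4, 2
print(beta.mean(a, b), a / (a + b))            # mean both ways: 0.666...
print((a - 1) / (a + b - 2))                   # mode (valid for a, b > 1): 0.75
print(beta.std(a, b), beta.std(4 * a, 4 * b))  # same mean, larger kappa -> smaller spread
```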
The higher the value of $\\kappa$, narrower the spread of the distribution around the mean. \n\nThe following equations are useful to know the relation ships between the parameters and the choice of $a, b$.\n\\begin{align}\na & = & \\mu\\kappa\\\\\nb & = & \\left(1-\\mu\\right)\\kappa\\\\\na & = & \\omega\\left(\\kappa - 2\\right)+1\\\\\nb & = & \\left(1 - \\omega \\right)\\left(\\kappa - 2\\right)+1\n\\end{align}\n\nIn terms of standard deviation, $\\sigma$ the parameters $a, b$ can be written as:\n\n\\begin{align}\na &=& \\mu\\left( \\frac{\\mu\\left(1-\\mu\\right)}{\\sigma^2} -1\\right)\\\\\nb &=& \\left(1-\\mu\\right)\\left( \\frac{\\mu\\left(1-\\mu\\right)}{\\sigma^2} -1\\right)\n\\end{align}\n\n### Choice of prior or how to choose $\\kappa$\n\nSuppose I borrow coins from a friend. I have the previous information from someone else that he is susceptible to give you biased coins with $\\theta = 0.8$. However you are not sure how much this information is reliable. You this want a few tosses to sway the distribution towards a correct one. This can be achieved by assuming a smaller value of $\\kappa$. Choosing a very large value would $\\kappa$ is advisable only if your source of information is really reliable. Otherwise it will take a large number of tosses to sway the model towards the correct distribution.\n\nIn order to see how distribution varies with $\\kappa$ let us redefine a the beta function in terms of $\\kappa, \\mu, \\omega \\text{ and } \\sigma$. \n\n\n```python\ndef betaABfromMeanKappa(x, mean, kappa):\n a = mean*kappa\n b = (1-mean)*kappa\n return beta.pdf(x, a, b)\ndef betaABfromModeKappa(x, mode, kappa):\n a = mode*(kappa-2)+1\n b = (1-mode)*(kappa-2)+1\n return beta.pdf(x, a, b)\ndef betaABfromSDMean(x, sd, mean):\n a = mean*(mean*(1-mean)/sd**2-1)\n b = (1-mean)*(mean*(1-mean)/sd**2-1)\n return beta.pdf(x, a, b)\n```\n\n\n```python\nmean, kappa = 0.8, 7.0\nfig, ax = plt.subplots(1,1, figsize = (5,4))\nax.plot(x, betaABfromMeanKappa(x, mean, kappa), label = r'$\\mu=${}, $\\kappa=${}'.format(mean, kappa))\nax.set_xlabel(r'\\theta')\nax.set_ylabel(r'$P\\left(\\theta\\right)$')\nax.legend(loc = 'best')\n```\n\n\n```python\nmode, kappa = 0.8, 7\nfig, ax = plt.subplots(1,1, figsize = (5,4))\nax.plot(x, betaABfromModeKappa(x, mode, kappa), label = r'$\\omega=${}, $\\kappa=${}'.format(mode, kappa))\nax.set_xlabel(r'\\theta')\nax.set_ylabel(r'$P\\left(\\theta\\right)$')\nax.legend(loc = 'best')\n```\n\n### The Posterior beta distribution\n\nThe posterior distribution is obtained by applying the Bayes' rule to the prior distribution. Suppose we performed an experiment, consisting of $N$ coin flips out of which $z$ where heads. 
The likelihood function is thus given by:\n\\begin{equation}\nP\\left(z, N|\\theta\\right) = \\theta^z\\left(1-\\theta\\right)^\\left(N-z\\right)\n\\end{equation}\nLet us assume a beta distribution prior given by:\n\n\\begin{equation}\nP\\left(\\theta|a, b\\right) = \\frac{\\theta^\\left(a-1\\right)\\left(1-\\theta\\right)^\\left(b-1\\right)}{\\beta\\left(a, b\\right)}\n\\end{equation}\n\nNow let's start from the Bayes' rule:\n\\begin{align}\nP\\left(\\theta|z,N\\right) &=& \\frac{P\\left(z, N|\\theta\\right) P\\left(\\theta\\right)}{P\\left(z, N\\right)}\\\\\n &=& \\frac{P\\left(z, N|\\theta\\right) P\\left(\\theta\\right)}{\\int_{\\theta '}d\\theta 'P\\left(z, N|\\theta ' \\right)P\\left(\\theta '\\right)}\\\\\n &=& \\frac{\\theta^{\\left(z+a\\right)-1}\\left(1-\\theta\\right)^{\\left(N-z+b\\right)-1}}{\\beta\\left(z+a, N-z+b\\right)}\n\\end{align}\n\nThe posterior distribution as we can see is also a beta distribution function. However we have new value for the parameters of the beta distribution function $a'= a+z, b'= N-z+b$, which changes the overall model. \n\n### The significance of the posterior\n\nLet us consider a prior such with $a=5$ and $b=5$. Now suppose we do $N = 1$ flips out of which the number of heads $z=1$. Let us look at the prior probability, likelihood function and the posterior probability. The prior is given by the function:\n\n\\begin{equation}\nP\\left(\\theta|5, 5\\right) = \\frac{\\theta^\\left(5-1\\right)\\left(1-\\theta\\right)^\\left(5-1\\right)}{\\beta\\left(5, 5\\right)}\n\\end{equation}\n\n\n\n```python\n#prior\na,b = 5, 5\nfig, ax = plt.subplots(1, 1)\nax.plot(x, beta.pdf(x, a, b), 'b-', label='a = {}, b = {}'.format(a,b))\nax.legend(loc = 'best', fontsize = 15)\nax.set_xlabel(r'$\\theta$', fontsize = 15)\nax.set_ylabel(r'$dbeta\\left(\\theta|a,b\\right)$', fontsize = 15)\nax.fill(x, beta.pdf(x, a, b), 'b')\nax.set_facecolor('0.8')\n\n```\n\n### Likelihood function\nNow for the likelihood function let us assume that we performed and experiment where we had $N = 10$ flips and got only $z = 1$ heads. 
Thus we will have a Bernoulli likelihood given by:\n\\begin{equation}\nP\\left(D|\\theta\\right) = \\theta\\left(1-\\theta\\right)^9\n\\end{equation}\n\n\n```python\ndef likelihood(theta, N, z):\n return ((theta)**z)*(1-theta)**(N-z)\nfig, ax = plt.subplots(1, 1)\nax.plot(x, likelihood(x, 10, 1), 'b-', label = 'z = 1, N = 10')\nax.legend(loc = 'best', fontsize = 15)\nax.set_xlabel(r'$\\theta$', fontsize = 15)\nax.set_ylabel(r'$P\\left(D|\\theta\\right)$', fontsize = 15)\nax.fill(x, likelihood(x, 10, 1), 'b')\nax.set_facecolor('0.8')\n```\n\n### The posterior distribution\nThe posterior distribution can now be obtained from the function:\n\\begin{align}\nP\\left(\\theta|z,N\\right) &=& \\frac{P\\left(z, N|\\theta\\right) P\\left(\\theta\\right)}{P\\left(z, N\\right)}\\\\\n &=& \\frac{P\\left(z, N|\\theta\\right) P\\left(\\theta\\right)}{\\int_{\\theta '}d\\theta 'P\\left(z, N|\\theta ' \\right)P\\left(\\theta '\\right)}\\\\\n &=& \\frac{\\theta^{\\left(z+a\\right)-1}\\left(1-\\theta\\right)^{\\left(N-z+b\\right)-1}}{\\beta\\left(z+a, N-z+b\\right)}\n\\end{align}\n\nReplacing $a=5, b=5$ and $z=1, N=10$ we will have :\n\\begin{equation}\nP\\left(\\theta|z=1,N=10\\right) = \\frac{\\theta^{\\left(6-1\\right)}\\left(1-\\theta\\right)^{14-1}}{\\beta\\left(6,14\\right)}\n\\end{equation}\n\n\n\n```python\na,b = 6, 14\nfig, ax = plt.subplots(1, 1)\nax.plot(x, beta.pdf(x, a, b), 'b-', label='a = {}, b = {}'.format(a,b))\nax.legend(loc = 'best', fontsize = 15)\nax.set_xlabel(r'$\\theta$', fontsize = 15)\nax.set_ylabel(r'$dbeta\\left(\\theta|a,b\\right)$', fontsize = 15)\nax.fill(x, beta.pdf(x, a, b), 'xkcd:blue')\nax.set_facecolor('0.8')\n\n```\n\n### Prior knowledge that cannot be expressed as a beta distribution\n\nSuppose a company specializes in manufacturing coins of two types - one that has a probability of heads being 25% and the other having probability of head being 75 %. Or prior model thus has a distribution that is bimodal with peaks around 0.25 and 0.75. We can construct to be composed of two Gaussians with peaks at 0.25 and 0.75 respectively.\n\n\n```python\n# Creating the prior\ndef prior(x):\n return (np.exp(-(x - 0.25)**2/.002)/np.sqrt(2*np.pi*.001) + np.exp(-(x - 0.75)**2/.002)/np.sqrt(2*np.pi*.001))/2\n```\n\n\n```python\nx = np.linspace(0, 1, 1000)\n```\n\n\n```python\nfig, ax = plt.subplots(1, 1)\nax.plot(x, prior(x), 'b-', label='Double Peaked Prior Distribution')\nax.legend(loc = 'best', fontsize = 10)\nax.set_xlabel(r'$\\theta$', fontsize = 15)\nax.set_ylabel(r'$P\\left(\\theta\\right)$', fontsize = 15)\nax.fill(x, prior(x), 'xkcd:blue')\nax.set_facecolor('lightsteelblue')\n```\n\nLet us check whether our distrbution integrates to unity or not when intergrated all over the theta space as it should.\n\n\n```python\nfrom scipy.integrate import simps\nsimps(prior(x),x)\n```\n\n\n\n\n 0.9999999999999987\n\n\n\nWhich is close to unity so this prior distribution is valid. Now suppose we flip coins 27 times and we get 14 heads and 13 tails. 
The likelihood function is thus given by:\n\n\n```python\nfig, ax = plt.subplots(1, 1)\nax.plot(x, likelihood(x, 27, 14), 'b-', label = 'z = {}, N = {}'.format(27, 14) )\nax.legend(loc = 'best', fontsize = 10)\nax.set_xlabel(r'$\\theta$', fontsize = 15)\nax.set_ylabel(r'$P\\left(D|\\theta\\right)$', fontsize = 15)\nax.fill(x, likelihood(x, 27, 14), 'b')\nax.set_facecolor('0.8')\n```\n\nNow we will calculate the posterior distribution:\n\n\n```python\nfig, ax = plt.subplots(1, 1)\nax.plot(x, prior(x)*likelihood(x, 27, 14)/simps(prior(x)*likelihood(x, 27, 14),x), 'r-', label='Posterior Distribution')\nax.plot(x, prior(x), 'b-', label='Prior Distribution')\nax.legend(loc = 'best', fontsize = 10)\nax.set_xlabel(r'$\\theta$', fontsize = 15)\nax.set_ylabel(r'$P\\left(\\theta\\right)$', fontsize = 15)\nax.fill(x, prior(x)*likelihood(x, 27, 14)/simps(prior(x)*likelihood(x, 27, 14),x), 'xkcd:orange' )\nax.fill(x, prior(x), 'xkcd:blue', alpha = 0.8)\nax.set_facecolor('lightsteelblue')\n```\n\nAs can be seen the posterior distribution has been shifted inwards when compared to the prior distribution. This makes sense as the experiment performed has nearly equal number of heads and tails which comes up, making the central tendency of the theta distribution move towards a central peak at $theta = 0.5$. However that would need a really large number of experiments as we have a relatively strong prior. We can do that above calulcation using a relatively weak prior and see what happens\n\n\n```python\n# Creating the prior\ndef prior_weak(x):\n return (np.exp(-(x - 0.25)**2/.02)/np.sqrt(2*np.pi*.01) + np.exp(-(x - 0.75)**2/.02)/np.sqrt(2*np.pi*.01))/2\n```\n\n\n```python\nfig, ax = plt.subplots(1, 1)\nax.plot(x, prior_weak(x), 'b-', label='Double Peaked Prior Distribution')\nax.legend(loc = 'best', fontsize = 10)\nax.set_xlabel(r'$\\theta$', fontsize = 15)\nax.set_ylabel(r'$P\\left(\\theta\\right)$', fontsize = 15)\nax.fill(x, prior_weak(x), 'xkcd:blue')\nax.set_facecolor('lightsteelblue')\n```\n\nLet's us calculate the posterior:\n\n\n```python\nfig, ax = plt.subplots(1, 1)\nax.plot(x, prior_weak(x)*likelihood(x, 27, 14)/simps(prior(x)*likelihood(x, 27, 14),x), 'r-', label='Posterior Distribution')\nax.plot(x, prior_weak(x), 'b-', label='Prior Distribution')\nax.legend(loc = 'best', fontsize = 10)\nax.set_xlabel(r'$\\theta$', fontsize = 15)\nax.set_ylabel(r'$P\\left(\\theta\\right)$', fontsize = 15)\nax.fill(x, prior_weak(x)*likelihood(x, 27, 14)/simps(prior(x)*likelihood(x, 27, 14),x), 'xkcd:orange' )\nax.fill(x, prior_weak(x), 'xkcd:blue', alpha = 0.8)\nax.set_facecolor('lightsteelblue')\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "858c713a1f338d9e4de8c2be69acbaaf1950cffc", "size": 325253, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "BinomialProbailityWithExactmathematicalAnalysis/.ipynb_checkpoints/Inferring Binomial Probability-checkpoint.ipynb", "max_stars_repo_name": "nathdipankar/Bayesian-Statistics", "max_stars_repo_head_hexsha": "444d0476e61e2ce61ae39bf0bb30c6d0e1b3d2c9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "BinomialProbailityWithExactmathematicalAnalysis/.ipynb_checkpoints/Inferring Binomial Probability-checkpoint.ipynb", "max_issues_repo_name": "nathdipankar/Bayesian-Statistics", "max_issues_repo_head_hexsha": "444d0476e61e2ce61ae39bf0bb30c6d0e1b3d2c9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, 
"max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "BinomialProbailityWithExactmathematicalAnalysis/.ipynb_checkpoints/Inferring Binomial Probability-checkpoint.ipynb", "max_forks_repo_name": "nathdipankar/Bayesian-Statistics", "max_forks_repo_head_hexsha": "444d0476e61e2ce61ae39bf0bb30c6d0e1b3d2c9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 418.601029601, "max_line_length": 101558, "alphanum_fraction": 0.9218393066, "converted": true, "num_tokens": 5215, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9252299632771662, "lm_q2_score": 0.8947894668039496, "lm_q1q2_score": 0.8278860255118133}} {"text": "# Numerical integrals\n\nWhat about when we cannot integrate a function analytically? In other words, when there is no (obvious) closed-form solution. In these cases, we can use **numerical methods** to solve the problem.\n\nLet's use this problem:\n\\begin{align}\n\\frac{dy}{dx} &= e^{-x^2} \\\\\ny(x) &= \\int e^{-x^2} dx + C\n\\end{align}\n\n(You may recognize this as leading to the error function, $\\text{erf}$:\n$\\frac{1}{2} \\sqrt{\\pi} \\text{erf}(x) + C$,\nso the exact solution to the integral over the range $[0,1]$ is 0.7468.)\n\n\n```matlab\nx = linspace(0, 1);\nf = @(x) exp(-x.^2);\nplot(x, f(x))\naxis([0 1 0 1])\n```\n\n## Numerical integration: Trapezoidal rule\n\nIn such cases, we can find the integral by using the **trapezoidal rule**, which finds the area under the curve by creating trapezoids and summing their areas:\n\\begin{equation}\n\\text{area under curve} = \\sum \\left( \\frac{f(x_{i+1}) + f(x_i)}{2} \\right) \\Delta x\n\\end{equation}\n\nLet's see what this looks like with four trapezoids ($\\Delta x = 0.25$):\n\n\n```matlab\nhold off\nx = linspace(0, 1);\nplot(x, f(x)); hold on\naxis([0 1 0 1])\n\nx = 0 : 0.25 : 1;\n\n% plot the trapezoids\nfor i = 1 : length(x)-1\n xline = [x(i), x(i)];\n yline = [0, f(x(i))];\n line(xline, yline, 'Color','red','LineStyle','--')\n xline = [x(i+1), x(i+1)];\n yline = [0, f(x(i+1))];\n line(xline, yline, 'Color','red','LineStyle','--')\n xline = [x(i), x(i+1)];\n yline = [f(x(i)), f(x(i+1))];\n line(xline, yline, 'Color','red','LineStyle','--')\nend\nhold off\n```\n\nNow, let's integrate using the trapezoid formula given above:\n\n\n```matlab\ndx = 0.1;\nx = 0.0 : dx : 1.0;\n\narea = 0.0;\nfor i = 1 : length(x)-1\n area = area + (dx/2)*(f(x(i)) + f(x(i+1)));\nend\n\nfprintf('Numerical integral: %f\\n', area)\nexact = 0.5*sqrt(pi)*erf(1);\nfprintf('Exact integral: %f\\n', exact)\nfprintf('Error: %f %%\\n', 100.*abs(exact-area)/exact)\n```\n\n Numerical integral: 0.746211\n Exact integral: 0.746824\n Error: 0.082126 %\n\n\nWe can see that using the trapezoidal rule, a numerical integration method, with an internal size of $\\Delta x = 0.1$ leads to an approximation of the exact integral with an error of 0.08%.\n\nYou can make the trapezoidal rule more accurate by:\n\n- using more segments (that is, a smaller value of $\\Delta x$, or\n- using higher-order polynomials (such as with Simpson's rules) over the simpler trapezoids.\n\nFirst, how does reducing the segment size (step size) by a factor of 10 affect the error?\n\n\n```matlab\ndx = 0.01;\nx = 0.0 : dx : 1.0;\n\narea = 0.0;\nfor i = 1 : length(x)-1\n area = area + (dx/2)*(f(x(i)) + f(x(i+1)));\nend\n\nfprintf('Numerical integral: %f\\n', area)\nexact 
= 0.5*sqrt(pi)*erf(1);\nfprintf('Exact integral: %f\\n', exact)\nfprintf('Error: %f %%\\n', 100.*abs(exact-area)/exact)\n```\n\n Numerical integral: 0.746818\n Exact integral: 0.746824\n Error: 0.000821 %\n\n\nSo, reducing our step size by a factor of 10 (using 100 segments instead of 10) reduced our error by a factor of 100!\n\n## Numerical integration: Simpson's rule\n\nWe can increase the accuracy of our numerical integration approach by using a more sophisticated interpolation scheme with each segment. In other words, instead of using a straight line, we can use a polynomial. **Simpson's rule**, also known as Simpson's 1/3 rule, refers to using a quadratic polynomial to approximate the line in each segment.\n\nSimpson's rule defines the definite integral for our function $f(x)$ from point $a$ to point $b$ as\n\\begin{equation}\n\\int_a^b f(x) \\approx \\frac{1}{6} \\Delta x \\left( f(a) + 4 f \\left(\\frac{a+b}{2}\\right) + f(b) \\right)\n\\end{equation}\nwhere $\\Delta x = b - a$.\n\nThat equation comes from interpolating between points $a$ and $b$ with a third-degree polynomial, then integrating by parts.\n\n\n```matlab\nhold off\nx = linspace(0, 1);\nplot(x, f(x)); hold on\naxis([-0.1 1.1 0.2 1.1])\n\nplot([0 1], [f(0) f(1)], 'Color','black','LineStyle',':');\n\n% quadratic polynomial\na = 0; b = 1; m = (b-a)/2;\np = @(z) (f(a).*(z-m).*(z-b)/((a-m)*(a-b))+f(m).*(z-a).*(z-b)/((m-a)*(m-b))+f(b).*(z-a).*(z-m)/((b-a)*(b-m)));\nplot(x, p(x), 'Color','red','LineStyle','--');\n\nxp = [0 0.5 1];\nyp = [f(0) f(m) f(1)];\nplot(xp, yp, 'ok')\nhold off\nlegend('exact', 'trapezoid fit', 'polynomial fit', 'points used')\n```\n\nWe can see that the polynomial fit, used by Simpson's rule, does a better job of of approximating the exact function, and as a result Simpson's rule will be more accurate than the trapezoidal rule.\n\nNext let's apply Simpson's rule to perform the same integration as above:\n\n\n```matlab\ndx = 0.1;\nx = 0.0 : dx : 1.0;\n\narea = 0.0;\nfor i = 1 : length(x)-1\n area = area + (dx/6.)*(f(x(i)) + 4*f((x(i)+x(i+1))/2.) 
+ f(x(i+1)));\nend\n\nfprintf('Simpson rule integral: %f\\n', area)\nexact = 0.5*sqrt(pi)*erf(1);\nfprintf('Exact integral: %f\\n', exact)\nfprintf('Error: %f %%\\n', 100.*abs(exact-area)/exact)\n```\n\n Simpson rule integral: 0.746824\n Exact integral: 0.746824\n Error: 0.000007 %\n\n\nSimpson's rule is about three orders of magnitude (~1000x) more accurate than the trapezoidal rule.\n\nIn this case, using a more-accurate method allows us to significantly reduce the error while still using the same number of segments/steps.\n", "meta": {"hexsha": "271427fd518ff046295d6993582e53b9826baa7a", "size": 66763, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "docs/_sources/content/numerical-methods/integrals.ipynb", "max_stars_repo_name": "kyleniemeyer/ME373-book", "max_stars_repo_head_hexsha": "66a9ef0f69a8c4e1656c02080aebfb5704e1a089", "max_stars_repo_licenses": ["CC-BY-4.0", "BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "docs/_sources/content/numerical-methods/integrals.ipynb", "max_issues_repo_name": "kyleniemeyer/ME373-book", "max_issues_repo_head_hexsha": "66a9ef0f69a8c4e1656c02080aebfb5704e1a089", "max_issues_repo_licenses": ["CC-BY-4.0", "BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/_sources/content/numerical-methods/integrals.ipynb", "max_forks_repo_name": "kyleniemeyer/ME373-book", "max_forks_repo_head_hexsha": "66a9ef0f69a8c4e1656c02080aebfb5704e1a089", "max_forks_repo_licenses": ["CC-BY-4.0", "BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 214.6720257235, "max_line_length": 22520, "alphanum_fraction": 0.9058460525, "converted": true, "num_tokens": 1734, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9073122163480667, "lm_q2_score": 0.9124361521430956, "lm_q1q2_score": 0.8278644674770539}} {"text": "### Example 4: Burgers' equation\n\nNow that we have seen how to construct the non-linear convection and diffusion examples, we can combine them to form Burgers' equations. 
We again create a set of coupled equations which are actually starting to form quite complicated stencil expressions, even if we are only using a low-order discretisations.\n\nLet's start with the definition fo the governing equations:\n$$ \\frac{\\partial u}{\\partial t} + u \\frac{\\partial u}{\\partial x} + v \\frac{\\partial u}{\\partial y} = \\nu \\; \\left(\\frac{\\partial ^2 u}{\\partial x^2} + \\frac{\\partial ^2 u}{\\partial y^2}\\right)$$\n \n$$ \\frac{\\partial v}{\\partial t} + u \\frac{\\partial v}{\\partial x} + v \\frac{\\partial v}{\\partial y} = \\nu \\; \\left(\\frac{\\partial ^2 v}{\\partial x^2} + \\frac{\\partial ^2 v}{\\partial y^2}\\right)$$\n\nThe discretized and rearranged form then looks like this:\n\n\\begin{aligned}\nu_{i,j}^{n+1} &= u_{i,j}^n - \\frac{\\Delta t}{\\Delta x} u_{i,j}^n (u_{i,j}^n - u_{i-1,j}^n) - \\frac{\\Delta t}{\\Delta y} v_{i,j}^n (u_{i,j}^n - u_{i,j-1}^n) \\\\\n&+ \\frac{\\nu \\Delta t}{\\Delta x^2}(u_{i+1,j}^n-2u_{i,j}^n+u_{i-1,j}^n) + \\frac{\\nu \\Delta t}{\\Delta y^2} (u_{i,j+1}^n - 2u_{i,j}^n + u_{i,j+1}^n)\n\\end{aligned}\n\n\\begin{aligned}\nv_{i,j}^{n+1} &= v_{i,j}^n - \\frac{\\Delta t}{\\Delta x} u_{i,j}^n (v_{i,j}^n - v_{i-1,j}^n) - \\frac{\\Delta t}{\\Delta y} v_{i,j}^n (v_{i,j}^n - v_{i,j-1}^n) \\\\\n&+ \\frac{\\nu \\Delta t}{\\Delta x^2}(v_{i+1,j}^n-2v_{i,j}^n+v_{i-1,j}^n) + \\frac{\\nu \\Delta t}{\\Delta y^2} (v_{i,j+1}^n - 2v_{i,j}^n + v_{i,j+1}^n)\n\\end{aligned}\n\nGreat. Now before we look at the Devito implementation, let's re-create the NumPy-based implementation form the original.\n\n\n```python\nfrom examples.cfd import plot_field, init_hat\nimport numpy as np\n%matplotlib inline\n\n# Some variable declarations\nnx = 41\nny = 41\nnt = 120\nc = 1\ndx = 2. / (nx - 1)\ndy = 2. / (ny - 1)\nsigma = .0009\nnu = 0.01\ndt = sigma * dx * dy / nu\n```\n\n\n```python\n#NBVAL_IGNORE_OUTPUT\n\n# Assign initial conditions\nu = np.empty((nx, ny))\nv = np.empty((nx, ny))\n\ninit_hat(field=u, dx=dx, dy=dy, value=2.)\ninit_hat(field=v, dx=dx, dy=dy, value=2.)\n\nplot_field(u)\n```\n\n\n```python\n#NBVAL_IGNORE_OUTPUT\n\nfor n in range(nt + 1): ##loop across number of time steps\n un = u.copy()\n vn = v.copy()\n\n u[1:-1, 1:-1] = (un[1:-1, 1:-1] -\n dt / dx * un[1:-1, 1:-1] * \n (un[1:-1, 1:-1] - un[1:-1, 0:-2]) - \n dt / dy * vn[1:-1, 1:-1] * \n (un[1:-1, 1:-1] - un[0:-2, 1:-1]) + \n nu * dt / dx**2 * \n (un[1:-1,2:] - 2 * un[1:-1, 1:-1] + un[1:-1, 0:-2]) + \n nu * dt / dy**2 * \n (un[2:, 1:-1] - 2 * un[1:-1, 1:-1] + un[0:-2, 1:-1]))\n \n v[1:-1, 1:-1] = (vn[1:-1, 1:-1] - \n dt / dx * un[1:-1, 1:-1] *\n (vn[1:-1, 1:-1] - vn[1:-1, 0:-2]) -\n dt / dy * vn[1:-1, 1:-1] * \n (vn[1:-1, 1:-1] - vn[0:-2, 1:-1]) + \n nu * dt / dx**2 * \n (vn[1:-1, 2:] - 2 * vn[1:-1, 1:-1] + vn[1:-1, 0:-2]) +\n nu * dt / dy**2 *\n (vn[2:, 1:-1] - 2 * vn[1:-1, 1:-1] + vn[0:-2, 1:-1]))\n \n u[0, :] = 1\n u[-1, :] = 1\n u[:, 0] = 1\n u[:, -1] = 1\n \n v[0, :] = 1\n v[-1, :] = 1\n v[:, 0] = 1\n v[:, -1] = 1\n \nplot_field(u)\n```\n\nNice, our wave looks just like the original. Now we shall attempt to write our entire Burgers' equation operator in a single cell - but before we can demonstrate this, there is one slight problem.\n\nThe diffusion term in our equation requires a second-order space discretisation on our velocity fields, which we set through the `TimeFunction` constructor for $u$ and $v$. The `TimeFunction` objects will store this dicretisation information and use it as default whenever we use the shorthand notations for derivative, like `u.dxl` or `u.dyl`. 
For the advection term, however, we want to use a first-order discretisation, which we now have to create by hand when combining terms with different stencil discretisations. To illustrate let's consider the following example: \n\n\n```python\nfrom devito import Grid, TimeFunction, first_derivative, left\n\ngrid = Grid(shape=(nx, ny), extent=(2., 2.))\nx, y = grid.dimensions\nt = grid.stepping_dim\n\nu1 = TimeFunction(name='u1', grid=grid, space_order=1)\nprint(\"Space order 1:\\n%s\\n\" % u1.dxl)\n\nu2 = TimeFunction(name='u2', grid=grid, space_order=2)\nprint(\"Space order 2:\\n%s\\n\" % u2.dxl)\n\n# We use u2 to create the explicit first-order derivative\nu1_dx = first_derivative(u2, dim=x, side=left, order=1)\nprint(\"Explicit space order 1:\\n%s\\n\" % u1_dx)\n```\n\n Space order 1:\n u1(t, x, y)/h_x - u1(t, x - h_x, y)/h_x\n \n Space order 2:\n 3*u2(t, x, y)/(2*h_x) + u2(t, x - 2*h_x, y)/(2*h_x) - 2*u2(t, x - h_x, y)/h_x\n \n Explicit space order 1:\n u2(t, x, y)/h_x - u2(t, x - h_x, y)/h_x\n \n\n\nOk, so by constructing derivative terms explicitly we again have full control of the spatial discretisation - the power of symbolic computation. Armed with that trick, we can now build and execute our advection-diffusion operator from scratch in one cell.\n\n\n```python\n#NBVAL_IGNORE_OUTPUT\nfrom sympy import solve\nfrom devito import Operator, Constant, Eq, INTERIOR\n\n# Define our velocity fields and initialise with hat function\nu = TimeFunction(name='u', grid=grid, space_order=2)\nv = TimeFunction(name='v', grid=grid, space_order=2)\ninit_hat(field=u.data[0], dx=dx, dy=dy, value=2.)\ninit_hat(field=v.data[0], dx=dx, dy=dy, value=2.)\n\n# Write down the equations with explicit backward differences\na = Constant(name='a')\nu_dx = first_derivative(u, dim=x, side=left, order=1)\nu_dy = first_derivative(u, dim=y, side=left, order=1)\nv_dx = first_derivative(v, dim=x, side=left, order=1)\nv_dy = first_derivative(v, dim=y, side=left, order=1)\neq_u = Eq(u.dt + u*u_dx + v*u_dy, a*u.laplace, region=INTERIOR)\neq_v = Eq(v.dt + u*v_dx + v*v_dy, a*v.laplace, region=INTERIOR)\n\n# Let SymPy rearrange our stencils to form the update expressions\nstencil_u = solve(eq_u, u.forward)[0]\nstencil_v = solve(eq_v, v.forward)[0]\nupdate_u = Eq(u.forward, stencil_u)\nupdate_v = Eq(v.forward, stencil_v)\n\n# Create Dirichlet BC expressions using the low-level API\nbc_u = [Eq(u.indexed[t+1, 0, y], 1.)] # left\nbc_u += [Eq(u.indexed[t+1, nx-1, y], 1.)] # right\nbc_u += [Eq(u.indexed[t+1, x, ny-1], 1.)] # top\nbc_u += [Eq(u.indexed[t+1, x, 0], 1.)] # bottom\nbc_v = [Eq(v.indexed[t+1, 0, y], 1.)] # left\nbc_v += [Eq(v.indexed[t+1, nx-1, y], 1.)] # right\nbc_v += [Eq(v.indexed[t+1, x, ny-1], 1.)] # top\nbc_v += [Eq(v.indexed[t+1, x, 0], 1.)] # bottom\n\n# Create the operator\nop = Operator([update_u, update_v] + bc_u + bc_v)\n\n# Execute the operator for a number of timesteps\nop(time=nt, dt=dt, a=nu)\n\nplot_field(u.data[0])\n```\n", "meta": {"hexsha": "198e4fffa13ae86597246e4341898b55c6add5f2", "size": 285419, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "devito/examples/cfd/04_burgers.ipynb", "max_stars_repo_name": "LukasMosser/stochastic_seismic_waveform_inversion", "max_stars_repo_head_hexsha": "4976c3b9a39b8d246d3d220056f235df6fc7dbb3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 20, "max_stars_repo_stars_event_min_datetime": "2019-08-01T19:08:10.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-20T05:04:06.000Z", "max_issues_repo_path": 
"devito/examples/cfd/04_burgers.ipynb", "max_issues_repo_name": "esgomi/gan_seismic_waveform_inversion", "max_issues_repo_head_hexsha": "4976c3b9a39b8d246d3d220056f235df6fc7dbb3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-12-05T13:00:14.000Z", "max_issues_repo_issues_event_max_datetime": "2019-12-05T13:00:14.000Z", "max_forks_repo_path": "devito/examples/cfd/04_burgers.ipynb", "max_forks_repo_name": "esgomi/gan_seismic_waveform_inversion", "max_forks_repo_head_hexsha": "4976c3b9a39b8d246d3d220056f235df6fc7dbb3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 12, "max_forks_repo_forks_event_min_datetime": "2019-10-03T14:47:48.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-22T14:06:47.000Z", "avg_line_length": 974.1262798635, "max_line_length": 92136, "alphanum_fraction": 0.9504763173, "converted": true, "num_tokens": 2469, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9416541528387691, "lm_q2_score": 0.8791467643431001, "lm_q1q2_score": 0.8278522015984469}} {"text": "## Few examples from sympy.stats\n\n\n```python\nfrom sympy import *\ninit_session(quiet=True)\n%matplotlib inline\n```\n\n \n\n\n\n```python\nfrom sympy.stats import Die, P, E\n```\n\nLet's play with two random variables, each representing a throw with a 6-sided die.\n\n\n```python\nX, Y = Die(\"X\"), Die(\"Y\")\n```\n\nProbability that we get 6 on both dice\n\n\n```python\nP(Eq(X, 6) & Eq(Y, 6))\n```\n\n\n```python\nP(X>Y)\n```\n\nYou can ask about conditional probabilities\n\n\n```python\nP(X+Y>3, Y<6)\n```\n\nand conditional expectations\n\n\n```python\nE(X, X+Y>3)\n```\n\nThere are many more statistical distributions in the `sympy.stats` module\n\n\n```python\nimport sympy.stats\n\" \".join([p for p in dir(sympy.stats) if p[0].isupper() & (len(p)>1)])\n```\n\n\n\n\n 'Arcsin Benini Bernoulli Beta BetaPrime Binomial Cauchy Chi ChiNoncentral ChiSquared Coin ContinuousRV Covariance Dagum Die DiscreteUniform Erlang Expectation Exponential FDistribution FiniteRV FisherZ Frechet Gamma GammaInverse Geometric Gompertz Gumbel Hypergeometric Kumaraswamy Laplace LogNormal Logarithmic Logistic Maxwell Nakagami NegativeBinomial Normal Pareto Poisson Probability QuadraticU Rademacher RaisedCosine Rayleigh ShiftedGompertz StudentT Trapezoidal Triangular Uniform UniformSum Variance VonMises Weibull WignerSemicircle YuleSimon Zeta'\n\n\n\n\n```python\nfrom sympy.stats import Normal, sample, std, cdf\nmu1, mu2 = symbols(\"mu1, mu2\")\nsigma1, sigma2 = symbols(\"sigma1, sigma2\", positive=True)\n```\n\n\n```python\nX, Y = Normal(\"X\", mu1, sigma1), Normal(\"Y\", mu2, sigma2)\n```\n\n\n```python\ncdf(X)\n```\n\nProbability of encounterin random value larger than $\\mu+\\sigma$\n\n\n```python\nsimplify(P(X>mu1+sigma1))\n```\n\nits numerical value\n\n\n```python\nsimplify(P(X>mu1+sigma1)).evalf()\n```\n\nand the expression used to calculate it:\n\n\n```python\nP(X>mu1+sigma1, evaluate=False)\n```\n\nRandom samples from the distribution can be generated\n\n\n```python\nsample(X)\n```\n\nStandard deviation of sum of two independent normally distributed random variables...\n\n\n```python\nsimplify(std(X+Y))\n```\n\nIf the integrals cannot be calculated analytically in SymPy,\n\n\n```python\nX, Y = Normal(\"X\", 0, 1), Normal(\"Y\", 1, 1)\nP(X>Y, evaluate=False)\n```\n\nthe probabilities can be estimated by Monte Carlo method (random sampling)\n\n\n```python\nP(X>Y, numsamples=100000)\n```\n", "meta": {"hexsha": 
"99f71c0f879ff079a2c0ed56337737ee4adbee1e", "size": 37593, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/04 Statistics.ipynb", "max_stars_repo_name": "rouckas/sympy-slides", "max_stars_repo_head_hexsha": "c2777f0eddedd19c4bf094d40489f49c1ef8ad28", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2018-10-22T19:52:48.000Z", "max_stars_repo_stars_event_max_datetime": "2018-12-01T16:59:45.000Z", "max_issues_repo_path": "notebooks/04 Statistics.ipynb", "max_issues_repo_name": "rouckas/sympy-slides", "max_issues_repo_head_hexsha": "c2777f0eddedd19c4bf094d40489f49c1ef8ad28", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/04 Statistics.ipynb", "max_forks_repo_name": "rouckas/sympy-slides", "max_forks_repo_head_hexsha": "c2777f0eddedd19c4bf094d40489f49c1ef8ad28", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 71.879541109, "max_line_length": 6336, "alphanum_fraction": 0.7936317932, "converted": true, "num_tokens": 638, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9416541593883189, "lm_q2_score": 0.8791467580102418, "lm_q1q2_score": 0.8278522013931}} {"text": "# Effect of cancelling a process zero\nThe following exercise is taken from \u00c5str\u00f6m & Wittenmark (problem 5.3)\n\nConsider the system with pulse-transfer function \n$$ H(z) = \\frac{z+0.7}{z^2 - 1.8z + 0.81}.$$\nUse polynomial design to determine a controller such that the closed-loop system has the characteristic polynomial $$ A_c = z^2 -1.5z + 0.7. $$\nLet the observer polynomial have as low order as possible, and place all observer poles in the origin (dead-beat observer). Consider the following two cases \n\n(a) The process zero is cancelled\n\n(b) The process zero is not cancelled.\n\nSimulate the two cases and discuss the differences between the two controllers. Which one should be preferred?\n\n(c) Design an incremental controller for the system\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport control\nimport sympy as sy\n```\n\n## Checking the poles\nBefore solving the problem, let's look at the location of the poles of the plant and the desired closed-loop system.\n\n\n```python\nH = control.tf([1, 0.7], [1, -1.8, 0.81], 1)\ncontrol.pzmap(H)\n```\n\n\n```python\nz = sy.symbols(\"z\", real=False)\nAc = sy.Poly(z**2 - 1.5*z + 0.7,z)\nsy.roots(Ac)\n```\n\n\n\n\n {0.75 + 0.370809924354783*I: 1, 0.75 - 0.370809924354783*I: 1}\n\n\n\nSo, the plant has a double pole in $z=0.9$, and the desired closed-loop system has complex-conjugated poles in $z=0.75 \\pm i0.37$.\n\n## (a)\n### The feedback controller $F_b(z)$\nThe plant has numerator polynomial $B(z) = z+0.7$ and denominator polynomial $A(z) = z^2 - 1.8z + 0.81$. With the feedback controller $$F_b(z) = \\frac{S(z)}{R(z)}$$ and feedforward $$F_f(z) = \\frac{T(z)}{R(z)}$$ the closed-loop pulse-transfer function from the command signal to the output becomes \n$$ H_{c}(z) = \\frac{\\frac{T(z)}{R(z)} \\frac{B(z)}{A(z)}}{1 + \\frac{B(z)}{A(z)}\\frac{S(z)}{R(z)}} = \\frac{T(z)(z+0.7)}{A(z)R(z) + S(z)(z+0.7)}.$$\nTo cancel the process zero, $z+0.7$ should be a factor of $R(z)$. 
Write $R(z)= \\bar{R}(z)(z+0.7)$ to obtain the Diophantine equation\n$$ A(z)\\bar{R}(z) + S(z) = A_c(z)A_o(z).$$\n\n\nLet's try to find a minimum-order controller that solves the Diophantine equation. The degree of the left hand side (and hence also of the right-hand side) is \n$$ \\deg (A\\bar{R} + S) = \\deg A + \\deg \\bar{R} = 2 + \\deg\\bar{R}.$$\nThe number of equations obtained when setting the coefficients of the left- and right-hand side equal is the same as the degree of the polynomials on each side (taking into account that the leading coefficient is 1, by convention). \n\nThe feedback controller can be written\n$$ F_b(z) = \\frac{S(z)}{R(z)} = \\frac{s_0z^n + s_1z^{n-1} + \\cdots + s_n}{(z+0.7)(z^{n-1} + r_1z^{n-2} + \\cdots + r_{n-1}}, $$\nwhich has $(n-1) + (n+1) = 2n$ unknown parameters, where $n = \\deg\\bar{R} + 1$.\nSo to obtain a Diophantine equation which gives exactly as many equations in the coefficients as unknowns, we must have \n$$ 2 + \\deg\\bar{R} = 2\\deg\\bar{R} + 2 \\quad \\Rightarrow \\quad \\deg\\bar{R} = 0.$$\n\nThus, the controller becomes \n$$ F_b(z) = \\frac{s_0z + s_1}{z+0.7}, $$\nand the Diophantine equation \n$$ z^2 - 1.8z + 0.81 + (s_0z + s_1) = z^2 - 1.5z + 0.7$$\n$$ z^2 - (1.8-s_0)z + (0.81 + s_1) = z^2 - 1.5z + 0.7, $$\nwith solution \n$$ s_0 = 1.8 - 1.5 = 0.3, \\qquad s_1 = 0.7-0.81 = -0.11. $$\nThe right hand side of the Diophantine equation consists only of the desired characteristic polynomial $A_c(z)$, and the observer polynomial is $A_o(z) = 1$, in order for the degrees of the left- and right hand side to be the same.\n\nLet's verify by calculation using SymPy.\n\n\n```python\ns0,s1 = sy.symbols(\"s0, s1\")\nA = sy.Poly(z**2 -1.8*z + 0.81, z)\nB = sy.Poly(z + 0.7, z)\nS = sy.Poly(s0*z + s1, z)\nAc = sy.Poly(z**2 - 1.5*z + 0.7, z)\nAo = sy.Poly(1, z)\n\n# Diophantine equation\nDioph = A + S - Ac*Ao\n\n# Extract the coefficients\nDioph_coeffs = Dioph.all_coeffs()\n\n# Solve for s0 and s1,\nsol = sy.solve(Dioph_coeffs, (s0,s1))\nprint('s_0 = %f' % sol[s0])\nprint('s_1 = %f' % sol[s1])\n```\n\n s_0 = 0.300000\n s_1 = -0.110000\n\n\n### The feedforward controller $F_f(z)$\nPart of the methodology of the polynomial design, is that the forward controller $F_f(z) = \\frac{T(z)}{R(z)}$ should cancel the observer poles, so we set $T(z) = t_0A_o(z)$. In case (a) the observer poynomial is simply $A_o(z)=1$. However, since $R(z)=z+0.7$, we can choose $T(z) = t_0z$ and still have a causal controller $F_f(z)$.\nThe scalar factor $t_0$ is chosen to obtain unit DC-gain of $H_c(z)$, hence \n$$ H_c(1) = \\frac{t_0}{A_c(1)} = 1 \\quad \\Rightarrow \\quad t_0 = A_c(1) = 1-1.5+0.7 = 0.2$$\n\n\n```python\n\n```\n\n### Simulate\nLet's simulate a step-responses from the command signal, and plot both the output and the control signal. 
\n\n\n```python\nt0 = float(Ac.eval(1))\nScoeffs = [float(sol[s0]), float(sol[s1])]\nRcoeffs = [1, 0.7]\nFb = control.tf(Scoeffs, Rcoeffs, 1)\nFf = control.tf([t0], Rcoeffs, 1)\nHc = Ff * control.feedback(H, Fb) # From command-signal to output\nHcu = Ff * control.feedback(1, Fb*H)\n```\n\n\n```python\ntvec = np.arange(40)\n(t1, y1) = control.step_response(Hc,tvec)\nplt.figure(figsize=(14,4))\nplt.step(t1, y1[0])\nplt.xlabel('k')\nplt.ylabel('y')\nplt.title('Output')\n```\n\n\n```python\n(t1, y1) = control.step_response(Hcu,tvec)\nplt.figure(figsize=(14,4))\nplt.step(t1, y1[0])\nplt.xlabel('k')\nplt.ylabel('y')\nplt.title('Control signal')\n```\n\n## Solve (b) on your own\n\n\n```python\n\n```\n\n## Solve (c) on your own\n\n\n```python\n\n```\n", "meta": {"hexsha": "b929af0a709ac25682ba9809e27f65438f201093", "size": 36968, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "polynomial-design/notebooks/Polynomial design exercise.ipynb", "max_stars_repo_name": "kjartan-at-tec/mr2007-computerized-control", "max_stars_repo_head_hexsha": "16e35f5007f53870eaf344eea1165507505ab4aa", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-11-07T05:20:37.000Z", "max_stars_repo_stars_event_max_datetime": "2020-12-22T09:46:13.000Z", "max_issues_repo_path": "polynomial-design/notebooks/Polynomial design exercise.ipynb", "max_issues_repo_name": "alfkjartan/control-computarizado", "max_issues_repo_head_hexsha": "5b9a3ae67602d131adf0b306f3ffce7a4914bf8e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2020-06-12T20:44:41.000Z", "max_issues_repo_issues_event_max_datetime": "2020-06-12T20:49:00.000Z", "max_forks_repo_path": "polynomial-design/notebooks/Polynomial design exercise.ipynb", "max_forks_repo_name": "kjartan-at-tec/mr2007-computerized-control", "max_forks_repo_head_hexsha": "16e35f5007f53870eaf344eea1165507505ab4aa", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-03-14T03:55:27.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-14T03:55:27.000Z", "avg_line_length": 107.778425656, "max_line_length": 9444, "alphanum_fraction": 0.8525752002, "converted": true, "num_tokens": 1898, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8887587817066391, "lm_q2_score": 0.9314625102975307, "lm_q1q2_score": 0.8278454858574412}} {"text": "# Deutsch's algorithm\n\n\n```python\nimport numpy as np\n\nfrom qiskit import QuantumRegister, ClassicalRegister\nfrom qiskit import QuantumCircuit\nfrom qiskit import execute, BasicAer\n\nimport qiskit.tools.visualization as qvis\n\nimport warnings\nwarnings.filterwarnings(\"ignore\", category=DeprecationWarning) \n```\n\nDeutsch's algorithm allows us to perform a task using fewer queries quantumly than are needed classically.\n\nSuppose we have some function $f$, that we know to be one of the 4 following functions:\n\n| Function ($f$) | $f$(0) | $f$(1) |\n| --- | -------- | -------- |\n| `constant_zero`| 0 | 0|\n| `constant_one` | 1| 1 |\n| `balanced_id` | 0| 1 |\n| `balanced_not`| 1 | 0 |\n\nThe two \"constant\" functions output the same value no matter the input, whereas the balanced functions output different values for each input.\n\n\n## Our task\n\nWe are given a black box that we know implements one of: `constant_zero`, `constant_one`, `balanced_id`, `balanced_not`. We are allowed to query the box (i.e. 
ask for $f(x)$ for either $x= 0, 1$) as many times as we need. _How many queries are needed to determine with certainty which function the box implements_?\n\nLet's choose one of the four functions randomly.\n\n\n```python\nfunction_names = ['constant_zero', 'constant_one', 'balanced_id', 'balanced_not']\noracle = function_names[np.random.randint(4)]\n```\n\n### Classical solution\n\nWe _must_ query twice. For example, if we query $f(0)$ and receive $0$, we know we must have `constant_zero` or `balanced_id`, but we still can't pin down the exact function without querying a second time. \n\n### Quantum solution\n\nWe can perform this task with only a _single_ query using two qubits.\n\nOur oracle $U_f$ is going to implement the following transformation:\n\\begin{equation}\n U_f|x\\rangle|y\\rangle = |x\\rangle|y\\oplus f(x) \\rangle\n\\end{equation}\nwhere $f$ will be replaced with one of the four functions above, and $\\oplus$ indicates addition modulo 2.\n\nFor example, if $f$ is `constant_one`,\n\\begin{eqnarray}\n U_f|00\\rangle &=& |01\\rangle \\\\\n U_f|01\\rangle &=& |00\\rangle \\\\\n U_f|10\\rangle &=& |11\\rangle \\\\\n U_f|11\\rangle &=& |10\\rangle\n\\end{eqnarray}\n\nFrom this you can see that each function will produce a different permutation of the computational basis states. \n\n**Exercise**: Compute the unitary matrix associated with each function, and write out the quantum circuit that implements it. (The answers are shown further down because we need them for the demo, but you can still compute them yourself to understand how they work.)\n\nTo execute Deutsch's algorithm, we'll follow the steps as shown in class. We begin with the state \n\\begin{equation} |+-\\rangle =\n \\left(\\frac{|0\\rangle + |1\\rangle}{\\sqrt{2}}\\right) \\otimes \\left(\\frac{|0\\rangle - |1\\rangle}{\\sqrt{2}}\\right) = \\frac{1}{2}\\left(|00\\rangle - |01\\rangle + |10\\rangle - |11\\rangle \\right)\n\\end{equation}\n\nLet's set up the quantum and classical registers and prepare it:\n\n\n```python\nq = QuantumRegister(2)\nc = ClassicalRegister(1)\n\ncirc = QuantumCircuit(q, c)\n```\n\n\n```python\n# Prepares the state |+-> starting from |00>\ncirc.h(q[0])\ncirc.x(q[1])\ncirc.h(q[1]);\n```\n\nLet's now apply the oracle circuit:\n\n\n```python\n# Here are the quantum circuits for each of our functions\n# If the circuit is 'constant_zero', we do nothing.\nif oracle == 'constant_one':\n circ.x(q[1])\nelif oracle == 'balanced_id':\n circ.cx(q[0], q[1])\nelif oracle == 'balanced_not':\n circ.x(q[0])\n circ.cx(q[0], q[1])\n```\n\nWe saw in the lectures that after calling the oracle, if the function is constant, we get a $|+\\rangle$ on the first qubit, but when it is balanced, we get $|-\\rangle$. We can combine all these results into a compact form:\n\n\\begin{equation}\n U_f|+\\rangle |- \\rangle = \\left( \\frac{|0\\rangle + (-1)^{f(0) \\oplus f(1)}|1\\rangle}{\\sqrt{2}} \\right) \\otimes \\left( \\frac{|0\\rangle - |1\\rangle}{\\sqrt{2}} \\right)\n \\end{equation}\n\nWe can then determine the value of $f(0) \\oplus f(1)$, by applying a Hadamard to the first qubit to perform a basis change back to the computational basis, and then measuring. We should obtain:\n\\begin{equation}\n |f(0)\\oplus f(1) \\rangle \\otimes |-\\rangle\n\\end{equation}\n\nIf the function is constant, $f(0) \\oplus f(1) = 0$ and so the first qubit will be in state $|0\\rangle$. Similarly, if it is balanced, the first qubit will be in state $|1\\rangle$. 
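Before running the circuit, we can verify this claim with plain linear algebra. The sketch below is my own check, separate from the Qiskit circuit: it builds the $4\times 4$ oracle matrix for each of the four functions, applies it to $|+\rangle|-\rangle$, applies a Hadamard to the first qubit, and reports the probability of measuring that qubit in $|1\rangle$. The basis ordering $|xy\rangle = |00\rangle, |01\rangle, |10\rangle, |11\rangle$ and all helper names are assumptions of the sketch, not part of the original notebook.

```python
# Linear-algebra check of Deutsch's algorithm for all four oracles.
# Basis ordering |x y> = |00>, |01>, |10>, |11>; helpers are my own.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)

def oracle_matrix(f):
    """Permutation matrix implementing U_f |x>|y> = |x>|y XOR f(x)>."""
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

functions = {'constant_zero': lambda x: 0,
             'constant_one':  lambda x: 1,
             'balanced_id':   lambda x: x,
             'balanced_not':  lambda x: 1 - x}

plus_minus = np.kron(H @ np.array([1, 0]), H @ np.array([0, 1]))  # the state |+>|->
for name, f in functions.items():
    final = np.kron(H, I) @ oracle_matrix(f) @ plus_minus
    p1 = final[2]**2 + final[3]**2  # probability of measuring the first qubit as |1>
    print(f"{name:>13}: P(first qubit = 1) = {p1:.0f}")
```

The constant oracles give probability 0 and the balanced ones give probability 1, in agreement with the statement above.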
Does this work?\n\n\n```python\n# Convert the first qubit back to the computational basis and measure\ncirc.h(q[0])\ncirc.measure(q[0], c[0])\n\n# Execute the circuit a single time to get the measurement outcome\nbackend = BasicAer.get_backend('qasm_simulator')\nresult = execute(circ, backend, shots = 1).result()\ncounts = result.get_counts()\n\nif list(counts.keys())[0] == '0': # c[0] is a tuple of the register and the measurement value\n print(\"Measured first qubit in state |0>, function is constant.\")\nelse:\n print(\"Measured first qubit in state |1>, function is balanced.\")\n```\n\nWere we correct? Let's see which function the oracle had chosen to implement.\n\n\n```python\nprint(f\"The function implemented by the oracle is '{oracle}'\")\n```\n\nDeutsch's algorithm can also be generalized to the Deutsch-Josza algorithm when $f$ is a function on $n$ bits. No matter the value of $n$, it is still possible to determine whether $f$ is constant or balanced with a single query.\n", "meta": {"hexsha": "91a78c1b33b00d10557950a464fcbfdf25b70b9a", "size": 8385, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "01-gate-model-theory/notebooks/Deutsch-Algorithm.ipynb", "max_stars_repo_name": "abhishekabhishek/Intro-QC-TRIUMF", "max_stars_repo_head_hexsha": "9c780446988b327e32f62113d03b55d689876079", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "01-gate-model-theory/notebooks/Deutsch-Algorithm.ipynb", "max_issues_repo_name": "abhishekabhishek/Intro-QC-TRIUMF", "max_issues_repo_head_hexsha": "9c780446988b327e32f62113d03b55d689876079", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "01-gate-model-theory/notebooks/Deutsch-Algorithm.ipynb", "max_forks_repo_name": "abhishekabhishek/Intro-QC-TRIUMF", "max_forks_repo_head_hexsha": "9c780446988b327e32f62113d03b55d689876079", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.8821292776, "max_line_length": 323, "alphanum_fraction": 0.5771019678, "converted": true, "num_tokens": 1461, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9489172688214138, "lm_q2_score": 0.872347369700144, "lm_q1q2_score": 0.8277854835194048}} {"text": "# Least Squares for Response Surface Work\n\nLink: [http://charlesreid1.com/wiki/Empirical_Model-Building_and_Response_Surfaces#Chapter_3:_Least_Squares_for_Response_Surface_Work](http://charlesreid1.com/wiki/Empirical_Model-Building_and_Response_Surfaces#Chapter_3:_Least_Squares_for_Response_Surface_Work)\n\n## Method of Least Squares\n\nLeast squares helps you to understand a model of the form:\n\n$$\ny = f(x,t) + \\epsilon\n$$\n\nwhere:\n\n$$\nE(y) = \\eta = f(x,t)\n$$\n\nis the mean level of the response $y$ which is affected by $k$ variables $(x_1, x_2, ..., x_k) = \\mathbf{x}$\n\nIt also involves $p$ parameters $(t_1, t_2, ..., t_p) = \\mathbf{t}$\n\n$\\epsilon$ is experimental error\n\nTo examine this model, experiments would run at n different sets of conditions, $x_1, x_2, ..., x_n$\n\nwould then observe corresponding values of response $y_1, y_2, ..., y_n$\n\nTwo important questions:\n\n1. does postulated model accurately represent the data?\n\n2. 
if model does accurately represent data, what are best estimates of parameters t?\n\nstart with second question first\n\n-----\n\nGiven: function $f(x,t)$ for each experimental run\n\n$n$ discrepancies:\n\n$$\n{y_1 - f(x_1,t)}, {y_2 - f(x_2,t)}, ..., {y_n - f(x_n,t)}\n$$\n\nMethod of least squares selects best value of t that make the sum of squares smallest:\n\n$$\nS(t) = \\sum_{u=1}^{n} \\left[ y_n - f \\left( x_u, t \\right) \\right]^2\n$$\n\n$S(t)$ is the sum of squares function\n\nMinimizing choice of $t$ is denoted\n\n$$\n\\hat{t}\n$$\n\nare least-squares estimates of $t$ good?\n\nTheir goodness depends on the nature of the distribution of their errors\n\nLeast-squares estimates are appropriate if you can assume that experimental errors:\n\n$$\n\\epsilon_u = y_u - \\eta_u\n$$\n\nare statistically independent and with constant variance, and are normally distributed\n\nthese are \"standard assumptions\"\n\n## Linear models\n\nThis is a limiting case, where\n\n$$\n\\eta = f(x,t) = t_1 z_1 + t_2 z_2 + ... + t_p z_p\n$$\n\nadding experimental error $\\epsilon = y - \\eta$:\n\n$$\ny = t_1 z_1 + t_2 z_2 + ... + t_p z_p + \\epsilon\n$$\n\nmodel of this form is linear in the parameters\n\n### Algorithm\n\nFormulate a problem with $n$ observed responses, $p$ parameters...\n\nThis yields $n$ equations of the form:\n\n$$\ny_1 = t_1 z_{11} + t_2 z_{21} + ... \\\\\ny_2 = t_1 z_{21} + t_2 z_{22} + ...\n$$\n\netc...\n\nThis can be written in matrix form:\n\n$$\n\\mathbf{y} = \\mathbf{Z t} + \\boldsymbol{\\epsilon}\n$$\n\nand the dimensions of each matrix are:\n\n* $y = n \\times 1$\n* $Z = n \\times p$\n* $t = p \\times 1$\n* $\\epsilon = n \\times 1$\n\nthe sum of squares function is given by:\n\n$$\nS(\\mathbf{t}) = \\sum_{u=1}^{n} \\left( y_u - t_1 z_{1u} - t_2 z_{2u} - ... - t_p z_{pu} \\right)^2\n$$\n\nor,\n\n$$\nS(t) = ( y - Zt )^{\\prime} ( y - Zt )\n$$\n\nthis can be rewritten as:\n\n$$\n\\mathbf{ Z^{\\prime} Z t = Z^{\\prime} y }\n$$\n\n### Rank of Z\n\nIf there are relationships between the different input parameters $(z's)$, then the matrix $\\mathbf{Z}$ can become singular\n\ne.g. if there is a relationship $z_2 = c z_1$, then you can only estimate the linear combination $z_1 + c z_2$ \n\nreason: when $z_2 = c z_1$, changes in $z_1$ can't be distinguished from changes in $z_2$\n\n$Z$ (an $n \\times p$ matrix) is said to be full rank $p$ if there are no linear relationships of the form:\n\n$$\na_1 z_1 + a_2 z_2 + ... + a_p z_p l= 0\n$$\n\nif there are $q > 0$ independent linear relationships, then $Z$ has rank $p - q$\n\n## Analysis of Variance: 1 regressor\n\nAssume simple model $y = \\beta + \\epsilon$\n\nThis states that $y$ is varying about an unknown mean $\\beta$\n\nSuppose we have 3 observations of $y$, $\\mathbf{y} = (4, 1, 1)' $\n\nThen the model can be written as $y = z_1 t + \\epsilon$\n\nand $z_1 = (1, 1, 1) '$\n\nand $t = \\beta$\n\nso that\n\n```\n[ 4 ] [ 1 ] [ \\epsilon_1 ]\n[ 1 ] = [ 1 ] t + [ \\epsilon_2 ]\n[ 1 ] [ 1 ] [ \\epsilon_3 ]\n```\n\nSupposing the linear model posited a value of one of the regressors t, e.g. $t_0 = 0.5$\n\nThen you could check the null hypothesis, e.g. $H_0 : t = t_0 = 0.5$\n\nIf true, the mean observation vector given by $\\eta_0 = z_1 t_0$\n\nor,\n\n```\n[ 0.5 ] [ 1 ]\n[ 0.5 ] = [ 1 ] 0.5\n[ 0.5 ] [ 1 ]\n```\n\nand the appropriate \"observation breakdown\" (whatever that means?) 
is:\n\n$$\ny - \\eta_0 = ( \\hat{y} - \\eta_0 ) + ( y - \\hat{y} )\n$$\n\nAssociated with this observation breakdown is an analysis of variance table:\n\n{|\n|Source\n|Degrees of freedom (df)\n|Sum of squares (square of length), SS\n|Mean square, MS\n|Expected value of mean square, E(MS)\n|-\n|Model\n|1\n|$\\vert \\hat{y} - \\eta_0 \\vert^2 = ( \\hat{t} - t_0 )^2 \\sum z_1^2$\n|6.75\n|$\\sigma^2 + ( t - t_0 )^2 \\sum z_1^2$\n\n|-\n|Residual\n|2\n|$\\vert y - \\hat{y} \\vert^2 = \\sum ( y - \\hat{t} z_1 )^2$\n|3.00\n|$\\sigma^2$\n\n|-\n|Total\n|3\n|$\\vert y - \\eta_0 \\vert^2 = \\sum ( y - \\eta_0 )^2 = 12.75$\n|\n|\n|}\n\nSum of squares: squared lengths of vectors\n\nDegrees of freedom: number of dimensions in which vector can move (geometric interpretation)\n\nThe model $y = z_1 t + \\epsilon$ says whatever the data is, the systematic part $\\hat{y} - \\eta_0 = ( \\hat{t} - t_0) z_1$ of $y - \\eta_0$ must lie in the direction of $z_1$, which gives $\\hat{y} - \\eta_0$ only one degree of freedom.\n\nWhatever the data, the residual vector must be perpendicular to $z_1$ (why?), and so it can move in 2 directions and has 2 degrees of freedom\n\nNow, looking at the null hypothesis: \n\nThe component $\\vert \\hat{y} - \\eta_0 \\vert^2 = ( \\hat{t} - t_0 )^2 \\sum z^2$ is a measure of discrepancy between POSTULATED model $\\eta_0 = z_1 t_0$ and ESTIMATED model $\\hat{y} = z_1 \\hat{t}$\n\nMaking \"standard assumptions\" (earlier), expected value of sum of squares, assuming model is true, is $( t - t_0 )^2 \\sum z_1^2 + \\sigma^2$\n\nFor the residual component it is $2 \\sigma^2$ (or, in general, $\\nu_2 \\sigma^2$, where $\\nu_2$ is number of degrees of freedom of residuals)\n\nThus a measure of discrepancy from the null hypothesis $t = t_0$ is $F = \\frac{ \\vert \\hat{y} - \\eta_0 \\vert^2 / 1 }{ \\vert y - \\hat{y} \\vert^2 / 2 }$\n\nif the null hypothesis were true, then the top and bottom would both estimate the same $\\sigma^2$\n\nSo if $F$ is different from 1, that indicates departure from null hypothesis\n\nThe MORE $F$ differs from 1, the more doubtful the null hypothesis becomes\n\n## Least squares: 2 regressors\n\nPrevious model, $y = \\beta + \\epsilon$, said $y$ was represented with a mean $t$ plus an error.\n\nInstead, suppose that there are systematic deviations from the mean, associated with an external variable (e.g. humidity in the lab).\n\nNow equation is for straight line: $ y = \\beta_0 + \\beta_1 x + \\epsilon$\n\nor, $y = z_1 t_1 + z_2 t_2 + \\epsilon$\n\nSo now the revised least-squares model is: $\\eta = z_1 t_1 + z_2 t_2$\n\n$\\eta = E(y)$ - i.e. 
$\\eta$ is in the plane defined by linear combinations of vectors $z_1, z_2$\n\nbecause $z_1^{\\prime} z_2 = \\sum z_1 z_2 \\neq 0$, these two vectors are NOT at right angles\n\nThe least-squares values $\\hat{t_1}, \\hat{t_2}$ produce a vector $\\hat{\\hat{y}} = z_1 \\hat{t_1} + z_2 \\hat{t_2}$\n\nThese least-squares values make the squared length $\\sum ( y - \\hat{\\hat{y}} )^2 = \\vert y - \\hat{\\hat{y}} \\vert^2$ of the residual vector as small as possible\n\nThe normal equations express fact that residual vector must be perpendicular to both $z_1$ and $z_2$:\n\n$$\nz_1^{\\prime} ( y - \\hat{\\hat{y}} ) = 0 \\\\\nz_2^{\\prime} ( y - \\hat{\\hat{y}} ) = 0\n$$\n\nalso written as:\n\n$$\n\\begin{align}\n\\sum z_1 ( y - \\hat{t_1} z_1 - \\hat{t_2} z_2 ) &=& 0 \\\\\n\\sum z_2 ( y - \\hat{t_1} z_1 - \\hat{t_2} z_2 ) &=& 0\n\\end{align}\n$$\n\nalso written (in matrix form) as:\n\n$$\n\\mathbf{Z^{\\prime}} ( \\mathbf{y - Z \\hat{t} } ) = 0\n$$\n\n\n\nNow suppose the null hypothesis was investigated for $t_1 = t_{10} = 0.5$ and $t_2 = t_{20} = 1.0$\n\nThen the mean observation vector $\\eta_0$ is represented as $\\eta_0 = t_{10} z_1 + t_{20} z_2$\n\n$$\ny - \\eta_0 = \\left( \\hat{\\hat{y}} - \\eta_0 \\right) + \\left( y - \\hat{\\hat{y}} \\right)\n$$\n\nand so\n\n$$F_0 = \\frac{ \\vert \\hat{\\hat{y}} - \\eta_0 \\vert / 2 }{ \\vert y - \\hat{\\hat{y}} \\vert^2 / 1 } = 2.23\n$$\n\n## Orthogonalizing second regressor\n\nIn the above example, $z_1$ and $z_2$ are not orthogonal\n\nOne can find the vectors $z_1$ and $z_{2 \\cdot 1}$ that are orthogonal\n\nTo do this, use least squares property that residual vector is orthogonal to space in which the predictor variables lie\n\nRegard $z_2$ as \"response\" vector and $z_1$ as predictor variable\n\nYou then obtain $\\hat{z_2} = 0.2 z_1$ (how?)\n\nso the residual vector is $z_{2 \\cdot 1} = z_2 - \\hat{z_2} = z_2 - 0.2 z_1$\n\nnow the model can be rewritten as $\\eta = \\left( t_1 + 0.2 t_2 \\right) z_1 + t_2 \\left( z_2 - 0.2 z_1 \\right) = t z_1 + t_2 z_{2 \\cdot 1}$\n\nThis gives three least-squares equations:\n\n1. $\\hat{y} = 2 z_1$\n2. $\\hat{y} = 1.5 z_1 + 2.5 z_2$\n3. 
$\\hat{y} = 2.0 z_1 + 2.5 z_{2 \\cdot 1}$\n\nThe analysis of variance becomes:\n\n---\n\nSource: Response function with $z_1$ only\n\nDoF: 1\n\nSum of Squares (SS): $\\vert \\hat{y} - \\eta_0 |vert^2 = \\left( \\hat{t} - t_0 \\right)^2 \\sum z_1^2 = 12.0$\n\nSource: Extra due to $z_2$ (given $z_1$)\n\nDoF: 1\n\nSS: $\\vert \\hat{\\hat{y}} - \\hat{y} \\vert^2 = \\hat{t}_2^2 \\sum z_{2 \\cdot 1}^2 = 4.5$\n\nSource: Residual\n\nDoF: 1\n\nSS: $\\vert y - \\hat{\\hat{y}} \\vert^2 = \\sum \\left( y - \\hat{\\hat{y}} \\right)^2 = 1.5$\n\nSource: Total\n\nDoF: 3\n\nSS: $\\vert y - \\eta_0 \\vert^2 = \\sum \\left( y - \\eta_0 \\right)^2 = 18.0$\n\n\n## Generalization to p regressors\n\nWith n observations and p parameters:\n\nn relations implicit in response function can be written \n\n$$\n\\boldsymbol{\\eta} = \\mathbf{Z t}\n$$\n\nAssuming $Z$ is full rank, and letting $\\hat{\\mathbf{t}}$ be the vector of estimates given by normal equations\n\n$$\n\\left( \\mathbf{ y - \\hat{y} } \\right)^{\\prime} \\mathbf{Z} = \\left( y - Z \\hat{t} \\right)^{\\prime} Z = 0\n$$\n\nSum of squares function is $S(t) = (y - \\eta)^{\\prime} (y - \\eta) = (y - \\hat{y})^{\\prime} (y - \\hat{y}) + ( \\hat{y} - \\eta )^{\\prime} (\\hat{y} - \\eta)$\n\nBecause cross-product is zero from the normal equations\n\n$$\nS(t) = S(\\hat{t}) + (\\hat{t} - t)^{\\prime} \\mathbf{Z^{\\prime} Z} ( \\hat{t} - t )\n$$\n\nFurthermore, because $\\mathbf{Z^{\\prime} Z}$ is positive definite, $S(t)$ minimized when $t = \\hat{t}$\n\nSo the solution to the normal equations producing the least squares estimate is the one where $t = \\hat{t}$:\n\n$$\n\\hat{t} = ( \\mathbf{Z^{\\prime} Z} )^{-1} \\mathbf{Z^{\\prime} y}\n$$\n\n----\n\nSource: Response function\n\nDoF: $p$\n\nSS: $\\vert \\hat{y} - \\eta \\vert^2 = (\\hat{t} - t)^{\\prime} \\mathbf{Z^{\\prime} Z} ( \\hat{t} - t )$\n\nSource: Residual\n\nDoF: $n - p$\n\nSS: $\\vert y - \\hat{y} \\vert^2 = \\sum ( y - \\hat{y} )^2 $\n\nSource: Total\n\nDoF: $n$\n\nSS: $\\vert y - \\eta \\vert^2 = \\sum ( y - \\eta )^2 $\n\n\n## Bias in Least-Squares Estimators if Inadequate Model\n\nSay data was being fit with a model $y = Z_1 t_1 + \\epsilon$,\n\nbut the true model that should have been used is $y = Z_1 t_1 + Z_2 t_2 + \\epsilon$\n\n$t_1$ would be estimated by $\\hat{t_1} = (\\mathbf{ Z_1^{\\prime} Z_1 } )^{-1} \\mathbf{ Z_1^{\\prime} y }$\n\nbut using true model, \n\n$$\n\\begin{array}{rcl}\nE( \\hat{t_1} ) &=& ( \\mathbf{Z_1^{\\prime} Z_1} )^{-1} \\mathbf{Z_1^{\\prime}} E(\\mathbf{y}) \\\\\n&=& ( \\mathbf{ Z_1^{\\prime} Z_1 } )^{-1} \\mathbf{Z_1^{\\prime}} (\\mathbf{Z_1 t_1} + \\mathbf{Z_2 t_2} ) \\\\\n&=& \\mathbf{t_1 + A t_2}\n\\end{array}\n$$\n\nThe matrix A is the bias or alias matrix\n\n$$\nA = \\left( \\mathbf{ Z_1^{\\prime} Z_1 } \\right)^{-1} \\mathbf{ Z_1^{\\prime} Z_2 }\n$$\n\nUnless $A = 0$, $\\hat{t_1}$ will represent $t_1$ AND $t_2$, not just $t_1$\n\n$A = 0$ when $\\mathbf{Z_1^{\\prime} Z_2} = 0$, which happens if regressors in $\\mathbf{Z_1}$ are orthogonal to regressors in $\\mathbf{Z_2}$\n\n\n```python\n\n```\n", "meta": {"hexsha": "7f0daab849bd70b5374ffb66319bebc7ba54daab", "size": 16849, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/ED_2C_LeastSquaresResponseSurface.ipynb", "max_stars_repo_name": "charlesreid1/experiment-design-notes", "max_stars_repo_head_hexsha": "872e310a55a26068ec43ee1838d26f8f845df134", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-05-19T07:28:03.000Z", 
"max_stars_repo_stars_event_max_datetime": "2019-05-19T07:28:03.000Z", "max_issues_repo_path": "notebooks/ED_2C_LeastSquaresResponseSurface.ipynb", "max_issues_repo_name": "charlesreid1/experiment-design-notes", "max_issues_repo_head_hexsha": "872e310a55a26068ec43ee1838d26f8f845df134", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/ED_2C_LeastSquaresResponseSurface.ipynb", "max_forks_repo_name": "charlesreid1/experiment-design-notes", "max_forks_repo_head_hexsha": "872e310a55a26068ec43ee1838d26f8f845df134", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.6531007752, "max_line_length": 268, "alphanum_fraction": 0.4943320078, "converted": true, "num_tokens": 4205, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9136765210631688, "lm_q2_score": 0.905989826094673, "lm_q1q2_score": 0.8277816324248062}} {"text": "**Exercise set 3**\n==============\n\n\n>The goal of this exercise is to investigate some theoretical properties of the solution to\n>the (multiple) linear regression problem, and to perform a **least-squares regression**.\n>We will also see how we can evaluate our regression model by investigating **residuals**.\n\n\n**Exercise 3.1**\n\nMultiple linear regression (MLR) solves the regression equation $\\mathbf{Y} = \\mathbf{X}\\mathbf{B}$ with\n$\\mathbf{B} = (\\mathbf{X}^\\mathrm{T} \\mathbf{X})^{-1} \\mathbf{X}^\\mathrm{T} \\mathbf{Y}$\nwhere the\ndimensions of the data matrix $\\mathbf{X}$ is $[N \\times M]$.\nWill this solution work when (please explain why/why not):\n\n\n**(a)** $\\det \\left( \\mathbf{X}^\\mathrm{T} \\mathbf{X} \\right) = 0$?\n\n**(b)** $\\det \\left( \\mathbf{X}^\\mathrm{T} \\mathbf{X} \\right) > 0$?\n\n**(c)** $\\det \\left( \\mathbf{X}^\\mathrm{T} \\mathbf{X} \\right) \\neq 0$?\n\n**(d)** The variables in $\\mathbf{X}$ are correlated?\n\n**(e)** The columns in $\\mathbf{X}$ are orthogonal?\n\n**(f)** The rank of $\\mathbf{X}$ is $\\frac{\\min(N, M)}{2}$?\n\n**(g)** We have more variables than samples/objects (more columns than rows in $\\mathbf{X}$)?\n\n**(h)** When we have more samples/objects than variables (more rows than columns in $\\mathbf{X}$)?\n\n**Your answer to question 3.1:** *Double click here*\n\n\n**Exercise 3.2**\n\nAs stated in the previous problem, the MLR solution of the equation\n$\\mathbf{Y} = \\mathbf{X}\\mathbf{B}$ is\n\n\\begin{equation}\n\\mathbf{B} = (\\mathbf{X}^\\mathrm{T} \\mathbf{X})^{-1} \\mathbf{X}^\\mathrm{T} \\mathbf{Y}.\n\\end{equation}\n\nIf we let $\\hat{\\mathbf{Y}}$ be the values calculated by the MLR solution, we can write\nthis as,\n\\begin{equation}\n\\hat{\\mathbf{Y}} = \\mathbf{X}\\mathbf{B} =\n\\mathbf{X} \\left[ (\\mathbf{X}^\\mathrm{T} \\mathbf{X})^{-1} \\mathbf{X}^\\mathrm{T} \\mathbf{Y} \\right] = \n\\left[\\mathbf{X} (\\mathbf{X}^\\mathrm{T} \\mathbf{X})^{-1} \\mathbf{X}^\\mathrm{T}\\right] \\mathbf{Y} =\n\\mathbf{H} \\mathbf{Y},\n\\tag{1}\\end{equation}\n\nwhere we have defined the projection matrix, $\\mathbf{H}$, as,\n\\begin{equation}\n\\mathbf{H} = \\mathbf{X} \\left(\\mathbf{X}^\\mathrm{T} \\mathbf{X}\\right)^{-1} \\mathbf{X}^\\mathrm{T}.\n\\label{eq:projectionmatrix}\n\\tag{2}\\end{equation}\n\nThis means that we can write the residual, $\\mathbf{E}$, as,\n\\begin{equation}\n\\mathbf{E} = \\mathbf{Y} 
- \\hat{\\mathbf{Y}} = \\mathbf{Y} - \\mathbf{H} \\mathbf{Y} =\n(\\mathbf{I} -\\mathbf{H}) \\mathbf{Y},\n\\tag{3}\\end{equation}\n\nwhere $\\mathbf{I}$ is the identity matrix.\n\nIn this exercise, we will show two properties of $\\mathbf{H}$ that enables us to simplify\nthe squared error, $\\mathbf{E}^\\mathrm{T} \\mathbf{E}$, as follows,\n\n\\begin{equation}\n\\mathbf{E}^\\mathrm{T} \\mathbf{E} = \\mathbf{Y}^\\mathrm{T} (\\mathbf{I} -\\mathbf{H})^\\mathrm{T}\n(\\mathbf{I} -\\mathbf{H}) \\mathbf{Y} = \n\\mathbf{Y}^\\mathrm{T} (\\mathbf{I} -\\mathbf{H}) \\mathbf{Y}.\n\\tag{4}\\end{equation}\n\nIn this equation, the last equality follows from the following two properties of $\\mathbf{H}$:\n\n**(a)** $\\mathbf{H}$ is *symmetric*: $\\mathbf{H}^\\mathrm{T} = \\mathbf{H}$.\n\n**(b)** $\\mathbf{H}$ is *idempotent*: $\\mathbf{H}^{k} = \\mathbf{H}$ where $k > 0$ is an integer.\n\nShow these two properties for $\\mathbf{H}$. (Hint: For the idempotency, begin by showing that $\\mathbf{H}^{2} = \\mathbf{H}$.)\n\n**Your answer to question 3.2:** *Double click here*\n\n**Exercise 3.3**\n\nIn the regression problem $\\mathbf{y} = \\mathbf{X}\\mathbf{b}$\nwe find the least-squares solution assuming that $\\mathbf{X}^\\mathrm{T} \\mathbf{X}$ is\nnon-singular. If you are given the information that $\\mathbf{X}$ is symmetric\nand non-singular, is there another\nsimpler formula for estimating the regression coefficients ($\\mathbf{b}$)?\n\n\n\n**Your answer to question 3.3:** *Double click here*\n\n**Exercise 3.4**\n\nAssume that we have recorded data as shown in Fig. 1.\n\n\n**Fig. 1:** Example data.\n\nTo model this data, we suggest a third-order polynomial in $x$:\n\\begin{equation}\n\\hat{y} = b_0 + b_1 x + b_2 x^2 + b_3 x^3 .\n\\end{equation}\n\nExplain how you can formulate this on a form suitable for least-squares regression,\n$\\mathbf{y} = \\mathbf{X} \\mathbf{b}$.\nWhat do the vectors $\\mathbf{y}$ and $\\mathbf{b}$ contain? What does the matrix $\\mathbf{X}$ contain?\n\n\n\n**Your answer to question 3.4:** *Double click here*\n\n**Exercise 3.5**\n\nThe temperature (\u00b0C) is measured continuously over time at a high altitude\nin the atmosphere using a\nweather balloon. Every hour a measurement is made and sent to an on-board computer.\nThe measurements are \nshown in Fig. 2 and can be found in [the data file](Data/temperature.txt) (located at 'Data/temperature.txt').\n\n\n**Fig. 2:** Measured temperature as a function of time.\n\n**(a)** Create a Python script that performs polynomial\nfitting to the data using a first, second, third, fourth,\nand fifth order polynomial model. Hint: Make use of `numpy`, `matplotlib`\nand `pandas`.\n\n\n```python\n# Your code here\n```\n\n**(b)** Plot the fitted curves for the five models to the raw data.\n\n\n\n\n```python\n# Your code here\n```\n\n**(c)** Plot the residual curves for the five models and determine,\nfrom a visual inspection, the best polynomial order to use for modeling the\ntemperature as a function of time. \n\n\n\n\n```python\n# Your code here\n```\n\n**Your answer to question 3.5(c):** *Double click here*\n\n**(d)** Obtain the sum of squared residuals for each polynomial. Plot this as a function\nof the degree of the polynomial and determine from visual inspection\nthe best polynomial order to use for modeling the\ntemperature as a function of time. 
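One possible sketch for part **3.5(d)** is shown below; it assumes (the file layout is an assumption, it is not specified here) that `Data/temperature.txt` holds two whitespace-separated columns, time in hours and temperature in °C, and that `np.polyfit`/`np.polyval` are acceptable for the fits:

```python
import numpy as np
import matplotlib.pyplot as plt

# Assumption: two whitespace-separated columns (time, temperature), no header line.
data = np.loadtxt('Data/temperature.txt')
time, temp = data[:, 0], data[:, 1]

degrees = [1, 2, 3, 4, 5]
ssr = []
for deg in degrees:
    coeffs = np.polyfit(time, temp, deg)          # least-squares polynomial fit of given degree
    residuals = temp - np.polyval(coeffs, time)   # residuals of this model
    ssr.append(np.sum(residuals**2))              # sum of squared residuals

plt.plot(degrees, ssr, 'o-')
plt.xlabel('Polynomial degree')
plt.ylabel('Sum of squared residuals')
plt.show()
```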
Does this agree with your conclusion in point **3.5(c)**?\n\n\n\n```python\n# Your code here\n```\n\n**Your answer to question 3.5(d):** *Double click here*\n\n\n\n", "meta": {"hexsha": "d402b55165b9946c0fb9667cb3e1b2aa13376229", "size": 9089, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "exercises/03_Exercise_Set_3.ipynb", "max_stars_repo_name": "sroet/chemometrics", "max_stars_repo_head_hexsha": "c797505d07e366319ba1544e8a602be94b88fbb6", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2020-02-04T12:09:19.000Z", "max_stars_repo_stars_event_max_datetime": "2021-08-06T12:28:24.000Z", "max_issues_repo_path": "exercises/03_Exercise_Set_3.ipynb", "max_issues_repo_name": "sroet/chemometrics", "max_issues_repo_head_hexsha": "c797505d07e366319ba1544e8a602be94b88fbb6", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 72, "max_issues_repo_issues_event_min_datetime": "2020-01-06T10:24:33.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-21T10:37:46.000Z", "max_forks_repo_path": "exercises/03_Exercise_Set_3.ipynb", "max_forks_repo_name": "sroet/chemometrics", "max_forks_repo_head_hexsha": "c797505d07e366319ba1544e8a602be94b88fbb6", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2020-01-09T12:04:26.000Z", "max_forks_repo_forks_event_max_datetime": "2021-01-19T10:06:14.000Z", "avg_line_length": 30.8101694915, "max_line_length": 134, "alphanum_fraction": 0.5384530751, "converted": true, "num_tokens": 1803, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9136765304654121, "lm_q2_score": 0.9059898159413479, "lm_q1q2_score": 0.827781631666288}} {"text": "# 10 ODE integrators: Verlet\n\n\n```python\nfrom importlib import reload\n\nimport numpy as np\nimport matplotlib\nimport matplotlib.pyplot as plt\n%matplotlib inline\nmatplotlib.style.use('ggplot')\n\nimport integrators2_solution as integrators2\n```\n\n## Velocity Verlet\n\nUse expansion *forward* and *backward* (!) in time (Hamiltons (i.e. 
Newton without friction) equations are time symmetric)\n\n\\begin{align}\nr(t + \\Delta t) &\\approx r(t) + \\Delta t\\, v(t) + \\frac{1}{2m} \\Delta t^2 F(t)\\\\\nr(t) &\\approx r(t + \\Delta t) - \\Delta t\\, v(t + \\Delta t) + \\frac{1}{2m} \\Delta t^2 F(t+\\Delta t)\n\\end{align}\n\nSolve for $v$:\n\\begin{align}\nv(t+\\Delta t) &\\approx v(t) + \\frac{1}{2m} \\Delta t \\big(F(t) + F(t+\\Delta t)\\big)\n\\end{align}\n\nComplete **Velocity Verlet** integrator consists of the first and third equation.\n\nIn practice, split into three steps (calculate the velocity at the half time step):\n\\begin{align}\nv(t+\\frac{\\Delta t}{2}) &= v(t) + \\frac{\\Delta t}{2} \\frac{F(t)}{m} \\\\\nr(t + \\Delta t) &= r(t) + \\color{blue}{\\Delta t\\, v(t+\\frac{\\Delta t}{2})}= r(t) + \\color{blue}{\\Delta t\\, v(t) + \\frac{1}{2m} \\Delta t^2 F(t)} \\\\\nv(t+\\Delta t) &= \\color{blue}{v(t+\\frac{\\Delta t}{2})} + \\frac{\\Delta t}{2} \\frac{F(t+\\Delta t)}{m} = \\color{blue}{v(t) + \\frac{\\Delta t}{2} \\frac{F(t)}{m}} + \\frac{\\Delta t}{2} \\frac{F(t+\\Delta t)}{m}\n\\end{align}\n\n**velocity Verlet integrator** equations \n\\begin{align}\nv(t+\\frac{\\Delta t}{2}) &= v(t) + \\frac{\\Delta t}{2} \\frac{F(t)}{m} \\\\\nr(t + \\Delta t) &= r(t) + \\Delta t\\, v(t+\\frac{\\Delta t}{2})\\\\\nv(t+\\Delta t) &= v(t+\\frac{\\Delta t}{2}) + \\frac{\\Delta t}{2} \\frac{F(t+\\Delta t)}{m}\n\\end{align}\nor with steps\n\\begin{align}\nv_{n+1/2} &= v_n + \\frac{\\Delta t}{2} \\frac{F(r_n)}{m} \\\\\nr_{n+1} &= r_n + \\Delta t\\, v_{n+1/2}\\\\\nv_n &= v_{n+1/2} + \\frac{\\Delta t}{2} \\frac{F(r_{n+1})}{m}\n\\end{align}\n\nWhen writing production-level code, remember to re-use $F(t+\\Delta t)$ als the \"new\" starting $F(t)$ in the next iteration (and don't recompute).\n\n### Integration of planetary motion \nGravitational potential energy:\n$$\nU(r) = -\\frac{GMm}{r}\n$$\nwith $r$ the distance between the two masses $m$ and $M$.\n\n#### Central forces\n$$\nU(\\mathbf{r}) = f(r) = f(\\sqrt{\\mathbf{r}\\cdot\\mathbf{r}})\\\\\n\\mathbf{F} = -\\nabla U(\\mathbf{r}) = -\\frac{\\partial f(r)}{\\partial r} \\, \\frac{\\mathbf{r}}{r} \n$$\n\n#### Force of gravity\n\\begin{align}\n\\mathbf{F} &= -\\frac{G m M}{r^2} \\hat{\\mathbf{r}}\\\\\n\\hat{\\mathbf{r}} &= \\frac{1}{\\sqrt{x^2 + y^2}} \\left(\\begin{array}{c} x \\\\ y \\end{array}\\right)\n\\end{align}\n\n#### Integrate simple planetary orbits \nSet $M = 1$ (one solar mass) and $m = 3.003467\u00d710^{-6}$ (one Earth mass in solar masses) and try initial conditions\n\n\\begin{alignat}{1}\nx(0) &= 1,\\quad y(0) &= 0\\\\\nv_x(0) &= 0,\\quad v_y(0) &= 6.179\n\\end{alignat}\n\nNote that we use the following units:\n* length in astronomical units (1 AU = 149,597,870,700 m )\n* mass in solar masses (1 $M_\u2609 = 1.988435\u00d710^{30}$ kg)\n* time in years (1 year = 365.25 days, 1 day = 86400 seconds)\n\nIn these units, the gravitational constant is $G = 4\\pi^2$ (in SI units $G = 6.674\u00d710^{-11}\\, \\text{N}\\cdot\\text{m}^2\\cdot\\text{kg}^{-2}$).\n\n\n```python\nM_earth = 3.003467e-6\nM_sun = 1.0\n```\n\n\n```python\nG_grav = 4*np.pi**2\n\ndef F_gravity(r, m=M_earth, M=M_sun):\n rr = np.sum(r*r)\n rhat = r/np.sqrt(rr)\n return -G_grav*m*M/rr * rhat\n\ndef U_gravity(r, m=M_earth, M=M_sun):\n return -G_grav*m*M/np.sqrt(np.sum(r*r))\n```\n\nLet's now integrate the equations of motions under gravity with the **Velocity Verlet** algorithm:\n\n\n```python\ndef planet_orbit(r0=np.array([1, 0]), v0=np.array([0, 6.179]),\n mass=M_earth, dt=0.001, t_max=1):\n \"\"\"2D planetary 
motion with velocity verlet\"\"\"\n dim = len(r0)\n assert len(v0) == dim\n\n nsteps = int(t_max/dt)\n\n r = np.zeros((nsteps, dim))\n v = np.zeros_like(r)\n\n r[0, :] = r0\n v[0, :] = v0\n\n # start force evaluation for first step\n Ft = F_gravity(r[0], m=mass)\n for i in range(nsteps-1):\n vhalf = v[i] + 0.5*dt * Ft/mass\n r[i+1, :] = r[i] + dt * vhalf\n Ftdt = F_gravity(r[i+1], m=mass)\n v[i+1] = vhalf + 0.5*dt * Ftdt/mass\n # new force becomes old force\n Ft = Ftdt\n \n return r, v\n```\n\n\n```python\nr, v = planet_orbit(dt=0.1, t_max=10)\n```\n\n\n```python\nrx, ry = r.T\nax = plt.subplot(1,1,1)\nax.set_aspect(1)\nax.plot(rx, ry)\n```\n\nThese are not closed orbits (as we would expect from a $1/r$ potential). But it gets much besser when stepsize is reduced to 0.01 (just rerun the code with `dt = 0.01` and replot):\n\n\n```python\nr, v = planet_orbit(dt=0.01, t_max=10)\n```\n\n\n```python\nrx, ry = r.T\nax = plt.subplot(1,1,1)\nax.set_aspect(1)\nax.plot(rx, ry)\n```\n\n## Velocity Verlet vs RK4: Energy conservation\nAssess the stability of `rk4` and `Velocity Verlet` by checking energy conservation over longer simulation times.\n\nThe file `integrators2.py` contains almost all code that you will need.\n\n### Implement gravity force in `integrators2.py`\nAdd `F_gravity` to the `integrators2.py` module. Use the new function `unitvector()`.\n\n### Planetary orbits with `integrators2.py` \n\n\n```python\nr0 = np.array([1, 0])\nv0 = np.array([0, 6.179])\n```\n\n\n```python\nfrom importlib import reload\nreload(integrators2)\n```\n\n\n\n\n \n\n\n\nUse the new function `integrators2.integrate_newton_2d()` to integrate 2d coordinates.\n\n#### RK4\n\n\n```python\ntrk4, yrk4 = integrators2.integrate_newton_2d(x0=r0, v0=v0, t_max=30, mass=M_earth,\n h=0.01,\n force=integrators2.F_gravity, \n integrator=integrators2.rk4)\n```\n\n\n```python\nrxrk4, ryrk4 = yrk4[:, 0, 0], yrk4[:, 0, 1]\nax = plt.subplot(1,1,1)\nax.set_aspect(1)\nax.plot(rxrk4, ryrk4)\n```\n\n\n```python\nintegrators2.analyze_energies(trk4, yrk4, integrators2.U_gravity, m=M_earth)\n```\n\n\n```python\nprint(\"Energy conservation RK4 for {} steps: {}\".format(\n len(trk4),\n integrators2.energy_conservation(trk4, yrk4, integrators2.U_gravity, m=M_earth)))\n```\n\n Energy conservation RK4 for 3000 steps: 3.5366039779498594e-06\n\n\n#### Euler\n\n\n```python\nte, ye = integrators2.integrate_newton_2d(x0=r0, v0=v0, t_max=30, mass=M_earth,\n h=0.01,\n force=F_gravity, \n integrator=integrators2.euler)\nrex, rey = ye[:, 0].T\n```\n\n\n```python\nax = plt.subplot(1,1,1)\nax.plot(rx, ry, label=\"RK4\")\nax.plot(rex, rey, label=\"Euler\")\nax.legend(loc=\"best\")\nax.set_aspect(1)\n```\n\n\n```python\nintegrators2.analyze_energies(te, ye, integrators2.U_gravity, m=M_earth)\n```\n\n\n```python\nprint(\"Energy conservation Euler for {} steps: {}\".format(\n len(te),\n integrators2.energy_conservation(te, ye, integrators2.U_gravity, m=M_earth)))\n```\n\n Energy conservation Euler for 3000 steps: 0.6763156457080265\n\n\n*Euler* is just awful... 
but we knew that already.\n\n#### Velocity Verlet\n\n\n```python\ntv, yv = integrators2.integrate_newton_2d(x0=r0, v0=v0, t_max=30, mass=M_earth,\n h=0.01,\n force=F_gravity, \n integrator=integrators2.velocity_verlet)\n```\n\n\n```python\nrxv, ryv = yv[:, 0].T\nax = plt.subplot(1,1,1)\nax.set_aspect(1)\nax.plot(rxv, ryv, label=\"velocity Verlet\")\nax.plot(rxrk4, ryrk4, label=\"RK4\")\nax.legend(loc=\"best\")\n```\n\n\n```python\nintegrators2.analyze_energies(tv, yv, integrators2.U_gravity, m=M_earth)\n```\n\n\n```python\nprint(\"Energy conservation Velocity Verlet for {} steps: {}\".format(\n len(tv),\n integrators2.energy_conservation(tv, yv, integrators2.U_gravity, m=M_earth)))\n```\n\n Energy conservation Velocity Verlet for 3000 steps: 6.384226992694312e-05\n\n\n*Velocity Verlet* only has moderate accuracy, especially when compared to *RK4*.\n\nHowever, let's look at energy conservation over longer times:\n\n#### Longer time scale stability\nRun RK4 and Velocity Verlet for longer.\n\n\n```python\ntv2, yv2 = integrators2.integrate_newton_2d(x0=r0, v0=v0, t_max=1000, mass=M_earth,\n h=0.01,\n force=F_gravity, \n integrator=integrators2.velocity_verlet)\n```\n\n\n```python\nprint(\"Energy conservation Velocity Verlet for {} steps: {}\".format(\n len(tv2),\n integrators2.energy_conservation(tv2, yv2, integrators2.U_gravity, m=M_earth)))\n```\n\n Energy conservation Velocity Verlet for 100000 steps: 6.390030863482895e-05\n\n\n\n```python\nt4, y4 = integrators2.integrate_newton_2d(x0=r0, v0=v0, t_max=1000, mass=M_earth,\n h=0.01,\n force=F_gravity, \n integrator=integrators2.rk4)\n```\n\n\n```python\nprint(\"Energy conservation RK4 for {} steps: {}\".format(\n len(t4),\n integrators2.energy_conservation(t4, y4, integrators2.U_gravity, m=M_earth)))\n```\n\n Energy conservation RK4 for 100000 steps: 0.00011860288209883538\n\n\nVelocity Verlet shows **good long-term stability** but relative low precision. On the other hand, RK4 has high precision but the accuracy decreases over time.\n\n* Use a *Verlet* integrator when energy conservation is important and long term stability is required (e.g. molecular dynamics simulations). It is generally recommended to use an integrator that conserves some of the inherent symmetries and structures of the governing physical equations (e.g. 
for Hamilton's equations of motion, time reversal symmetry and the symplectic and area-preserving structure).\n* Use *RK4* for high short-term accuracy (but may be difficult to know what \"short term\" should mean) or when solving general differential equations.\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "bca810497ae7b95c6e894e3566b16eb3e6011ba3", "size": 223787, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "10_ODEs/10-ODE-integrators-verlet.ipynb", "max_stars_repo_name": "Py4Phy/PHY432-resources", "max_stars_repo_head_hexsha": "c26d95eaf5c28e25da682a61190e12ad6758a938", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "10_ODEs/10-ODE-integrators-verlet.ipynb", "max_issues_repo_name": "Py4Phy/PHY432-resources", "max_issues_repo_head_hexsha": "c26d95eaf5c28e25da682a61190e12ad6758a938", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2022-03-03T21:47:56.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-03T21:47:56.000Z", "max_forks_repo_path": "10_ODEs/10-ODE-integrators-verlet.ipynb", "max_forks_repo_name": "Py4Phy/PHY432-resources", "max_forks_repo_head_hexsha": "c26d95eaf5c28e25da682a61190e12ad6758a938", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 287.2747111682, "max_line_length": 41620, "alphanum_fraction": 0.9244817617, "converted": true, "num_tokens": 3169, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9136765234137296, "lm_q2_score": 0.9059898229217591, "lm_q1q2_score": 0.8277816316553734}} {"text": "# Schwarzschild Solution to the Einstein's equation\n\nWe will rederive the equations from *General Theory of Relativity*, chapter 18, by Dirac. 
They describe a spherically symmetric solution to the Einstein's equation in vacuum.\n\nWe will not try to solve the resulting equations.\n\nThis version of the notebook uses functions that explicitly calculate the Christoffel symbols, hence the work is coordinate-system-dependent.\n\nImport the necessary modules.\n\n\n```\nfrom sympy.diffgeom import *\nTP = TensorProduct\n```\n\nDefine a 4D manifold with a patch and a coordinate chart on which we will work.\n\n\n```\nm = Manifold('Schwarzschild', 4)\np = Patch('origin', m)\ncs = CoordSystem('spherical', p, ['t', 'r', 'theta', 'phi'])\n```\n\n\n```\nm, p, cs\n```\n\nPrepare the variables containing the scalar fields and the 1-form fields.\n\n\n```\nt, r, theta, phi = cs.coord_functions()\nt, r, theta, phi\n```\n\n\n```\ndt, dr, dtheta, dphi = cs.base_oneforms()\ndt, dr, dtheta, dphi\n```\n\nThe most general spherically-symmetric metric has the following form.\n\n\n```\nmetric = exp(2*f(r))*TP(dt, dt) - exp(2*g(r))*TP(dr, dr) - r**2*TP(dtheta, dtheta) - r**2*sin(theta)**2*TP(dphi, dphi)\nmetric\n```\n\nThe matrix $M$ representing the two-form as a bilinear map $V,U\\to V^tMU$ over the column vectors $V$ and $U$ in the canonical basis of the chosen coordinate system is:\n\n\n```\ntwoform_to_matrix(metric)\n```\n\nNow we will calculate the components in the same basis of the Ricci tensor.\n\n\n```\nricci = metric_to_Ricci_components(metric)\n```\n\n\n```\nricci = [[simplify(ricci[i][j])\n for j in range(4)] for i in range(4)]\n\n```\n\nThe diagonal components give the equations we are interested in.\n\n\n```\nricci[0][0]\n```\n\n\n```\nricci[1][1]\n```\n\n\n```\nricci[2][2]\n```\n\n\n```\nricci[3][3]\n```\n\nThe off-diagonal components are zero.\n\n\n```\nall(ricci[i][j]==0 for i in range(4) for j in range(4) if i!=j)\n```\n\n\n\n\n True\n\n\n\nFor completeness we can also check out the Christoffel symbol of 2nd kind. 
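For reference (a note added here), the Christoffel symbols of the second kind are built from the metric components as

$$
\Gamma^{\lambda}{}_{\mu\nu} = \frac{1}{2} g^{\lambda\sigma}\left(\partial_{\mu} g_{\sigma\nu} + \partial_{\nu} g_{\sigma\mu} - \partial_{\sigma} g_{\mu\nu}\right),
$$

and this is the quantity that `metric_to_Christoffel_2nd` evaluates component by component for the chart defined above.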
We will print only the non-zero components, and only one of the symmetric components (symmetric in the last two indices).\n\n\n```\nch_2nd = metric_to_Christoffel_2nd(metric)\nfilt = [((i,j,k), simplify(ch_2nd[i][j][k]))\n for i in range(4) for j in range(4) for k in range(j,4)\n if ch_2nd[i][j][k]!=0]\n```\n\n\n```\nfilt[0:3]\n```\n\n\n```\nfilt[3:6]\n```\n\n\n```\nfilt[6:9]\n```\n\nWe can also confirm that the Christoffel symbol is symmetric.\n\n\n```\nall([ch_2nd[k][i][j] == ch_2nd[k][j][i] for k in range(4) for i in range(4) for j in range(4)])\n```\n\n\n\n\n True\n\n\n", "meta": {"hexsha": "252c4c6f37d5b2b67b451c1099b939889ca34c0e", "size": 69809, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "examples/intermediate/schwarzschild.ipynb", "max_stars_repo_name": "utkarshdeorah/sympy", "max_stars_repo_head_hexsha": "dcdf59bbc6b13ddbc329431adf72fcee294b6389", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 8323, "max_stars_repo_stars_event_min_datetime": "2015-01-02T15:51:43.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T13:13:19.000Z", "max_issues_repo_path": "examples/intermediate/schwarzschild.ipynb", "max_issues_repo_name": "utkarshdeorah/sympy", "max_issues_repo_head_hexsha": "dcdf59bbc6b13ddbc329431adf72fcee294b6389", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 15102, "max_issues_repo_issues_event_min_datetime": "2015-01-01T01:33:17.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T22:53:13.000Z", "max_forks_repo_path": "examples/intermediate/schwarzschild.ipynb", "max_forks_repo_name": "utkarshdeorah/sympy", "max_forks_repo_head_hexsha": "dcdf59bbc6b13ddbc329431adf72fcee294b6389", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 4490, "max_forks_repo_forks_event_min_datetime": "2015-01-01T17:48:07.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-31T17:24:05.000Z", "avg_line_length": 126.6950998185, "max_line_length": 7063, "alphanum_fraction": 0.7982638342, "converted": true, "num_tokens": 750, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9433475683211323, "lm_q2_score": 0.8774767970940975, "lm_q1q2_score": 0.8277656027969326}} {"text": "# Naive Bayes classifier\n\nThe first steps are the same as previously, we start up by importing libraries.\n\n\n```python\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nimport random\n```\n\nDefining number of points, covariance matrix and means in respective classes.\n\n\n```python\nn = {'A': 40, 'B': 30}\ncov = np.array([[4, 0], [0, 4]])\nmeans = {'A': np.array([-3, -1]), 'B': np.array([2, 2])}\n```\n\nGenerating the points from multivariate normal distribution and inserting them into dataframe.\n\n\n```python\ndata = pd.DataFrame(index=range(sum(n.values())), columns=['x', 'y', 'label'])\ndata.loc[:n['A']-1, ['x', 'y']] = np.random.multivariate_normal(means['A'], cov, n['A'])\ndata.loc[:n['A']-1, ['label']] = 'A'\ndata.loc[n['A']:, ['x', 'y']] = np.random.multivariate_normal(means['B'], cov, n['B'])\ndata.loc[n['A']:, ['label']] = 'B'\n```\n\nFunction that plots the generated points.\n\n\n```python\ndef plot_points(data):\n fig = plt.figure(figsize=(8, 6))\n sns.set_style('white')\n sns.set_palette('muted')\n sns.scatterplot(data=data, x='x', y='y', hue='label', legend=False, edgecolor='black').set(xlim=(-9, 9), ylim=(-9, 9))\n```\n\n\n```python\nplot_points(data)\n```\n\n## Gaussian Naive Bayes model from *Scikit-learn*\n\nWe need to split our data into features and target variable. Then we create model with embedded `GaussianNB()` method and fit it according to the variables.\n\n\n```python\nfrom sklearn.naive_bayes import GaussianNB\n\nX = data[['x', 'y']]\ny = data['label']\nclf = GaussianNB()\nclf.fit(X, y);\n```\n\nWith `predict_proba()` method we can compute posterior probability for any point. So to plot decision boundary for two classes we will create a rectangular meshgrid out of two arrays and compute the probability in every node of the grid.\n\n\n```python\nx_grid, y_grid = np.meshgrid(np.arange(-9, 9.1, 0.1), np.arange(-9, 9.1, 0.1))\nprob = clf.predict_proba(np.c_[x_grid.ravel(), y_grid.ravel()])\nprob = prob[:, 1].reshape(x_grid.shape)\n```\n\nNow we will define the function that plots the points with decision boundary and areas which belong to corresponding class.\n\n\n```python\ndef plot_classified_points(data, xx, yy, prob):\n plot_points(data)\n plt.contour(xx, yy, prob, [0.5], zorder=0)\n plt.contourf(xx, yy, prob, [0, 0.5, 1], cmap='Pastel2', zorder=-1)\n```\n\n\n```python\nplot_classified_points(data, x_grid, y_grid, prob)\n```\n\n## Own implementation of Naive Bayes classifier\n\nBefore we start implementing the algorithm we need to update the means and covariance matrices of the generated points.\n\n\n```python\nmeans = {label: np.array(np.mean(data.loc[data['label'] == label])) for label in means.keys()}\ncov = {label: np.cov(data.loc[data['label'] == label][['x', 'y']].values.T.astype(float)) for label in means.keys()}\n```\n\nTo implement our own classifier we need to make an assumption that all the features are conditionally independent to each other. 
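In other words, the class-conditional likelihood factorizes as $P(\boldsymbol{X}|C) = P(x|C)\,P(y|C)$. As a small added sanity check (not part of the original analysis), this factorization holds exactly for a Gaussian with diagonal covariance:

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

# Hypothetical values, chosen only for illustration
mean = np.array([1.0, -2.0])
cov = np.diag([4.0, 9.0])        # diagonal covariance -> independent features
point = np.array([0.5, -1.0])

joint = multivariate_normal.pdf(point, mean=mean, cov=cov)
product = (norm.pdf(point[0], loc=mean[0], scale=np.sqrt(cov[0, 0]))
           * norm.pdf(point[1], loc=mean[1], scale=np.sqrt(cov[1, 1])))

print(np.isclose(joint, product))  # True: the joint density equals the product of the marginals
```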
With this assumption the posterior probability can be expressed by the following equation:\n\n\\begin{equation}\nP(C|\\boldsymbol{X}) = \\frac{\\pi_{C_{i}}P(\\boldsymbol{X}|C_{i})}{P(\\boldsymbol{X})} = \\frac{\\pi_{C_{i}}P(x|C_{i})P(y|C_{i})}{\\pi_{C_{1}}P(x|C_{1})P(y|C_{1}) + \\pi_{C_{2}}P(x|C_{2})P(y|C_{2})}\n\\end{equation}\n\nwhere $\\pi_{C_{i}}$ is the prior probability of the given class.\n\n\n```python\nfrom scipy.stats import multivariate_normal\n\ndef posterior_prob(prior_prob_1, prior_prob_2, mean_1, mean_2, cov_1, cov_2, x, y):\n likelihood_x_1 = multivariate_normal.pdf(x, mean_1[0], cov_1[0, 0])\n likelihood_y_1 = multivariate_normal.pdf(y, mean_1[1], cov_1[1, 1])\n likelihood_x_2 = multivariate_normal.pdf(x, mean_2[0], cov_2[0, 0])\n likelihood_y_2 = multivariate_normal.pdf(y, mean_2[1], cov_2[1, 1])\n \n nominator = prior_prob_2 * likelihood_x_2 * likelihood_y_2\n denominator = prior_prob_1 * likelihood_x_1 * likelihood_y_1 + prior_prob_2 * likelihood_x_2 * likelihood_y_2\n \n return nominator / denominator\n```\n\nWe estimate class prior probabilities by dividing number of class observations by total number of observations.\n\n\n```python\nprior_prob_A = n['A'] / sum(n.values())\nprior_prob_B = n['B'] / sum(n.values())\n\nprob = posterior_prob(prior_prob_A, prior_prob_B, means['A'], means['B'], cov['A'], cov['B'], x_grid.ravel(), y_grid.ravel())\nprob = prob.reshape(x_grid.shape)\n```\n\nFinally we can plot the decision boundary determined by our classifier.\n\n\n```python\nplot_classified_points(data, x_grid, y_grid, prob)\n```\n\nAs we can see the results are identical both for embedded model and our own implementation.\n", "meta": {"hexsha": "79f89f8e2e87b15c0738cf803d926deab0546a53", "size": 99006, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Exercise 1/Naive Bayes classifier.ipynb", "max_stars_repo_name": "mickuz/lsed", "max_stars_repo_head_hexsha": "c7aa1dc0544e971dcc405425cf131d180b6721de", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Exercise 1/Naive Bayes classifier.ipynb", "max_issues_repo_name": "mickuz/lsed", "max_issues_repo_head_hexsha": "c7aa1dc0544e971dcc405425cf131d180b6721de", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Exercise 1/Naive Bayes classifier.ipynb", "max_forks_repo_name": "mickuz/lsed", "max_forks_repo_head_hexsha": "c7aa1dc0544e971dcc405425cf131d180b6721de", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 300.9300911854, "max_line_length": 33312, "alphanum_fraction": 0.9307011696, "converted": true, "num_tokens": 1300, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9546474220263197, "lm_q2_score": 0.8670357512127872, "lm_q1q2_score": 0.8277134446999408}} {"text": "## Introduction\n-----\nYou (an electrical engineer) wish to determine the resistance of an electrical component by using Ohm's law. You remember from your high school circuit classes that $$V = RI$$ where $V$ is the voltage in volts, $R$ is resistance in ohms, and $I$ is electrical current in amperes. 
Using a multimeter, you collect the following data:\n\n| Current (A) | Voltage (V) |\n|-------------|-------------|\n| 0.2 | 1.23 |\n| 0.3 | 1.38 |\n| 0.4 | 2.06 |\n| 0.5 | 2.47 |\n| 0.6 | 3.17 |\n\nYour goal is to \n1. Fit a line through the origin (i.e., determine the parameter $R$ for $y = Rx$) to this data by using the method of least squares. You may assume that all measurements are of equal importance. \n2. Consider what the best estimate of the resistance is, in ohms, for this component.\n\n## Getting Started\n----\n\nFirst we will import the neccesary Python modules and load the current and voltage measurements into numpy arrays:\n\n\n```python\nimport numpy as np\nfrom numpy.linalg import inv\nimport matplotlib.pyplot as plt\n\n# Store the voltage and current data as column vectors.\nI = np.mat([0.2, 0.3, 0.4, 0.5, 0.6]).T\nV = np.mat([1.23, 1.38, 2.06, 2.47, 3.17]).T\n```\n\nNow we can plot the measurements - can you see the linear relationship between current and voltage?\n\n\n```python\nplt.scatter(np.asarray(I), np.asarray(V))\n\nplt.xlabel('Current (A)')\nplt.ylabel('Voltage (V)')\nplt.grid(True)\nplt.show()\n```\n\n\n```python\n\n```\n\n## Estimating the Slope Parameter\n----\nLet's try to estimate the slope parameter $R$ (i.e., the resistance) using the least squares formulation from Module 1, Lesson 1 - \"The Squared Error Criterion and the Method of Least Squares\":\n\n\\begin{align}\n\\hat{R} = \\left(\\mathbf{H}^T\\mathbf{H}\\right)^{-1}\\mathbf{H}^T\\mathbf{y}\n\\end{align}\n\nIf we know that we're looking for the slope parameter $R$, how do we define the matrix $\\mathbf{H}$ and vector $\\mathbf{y}$?\n\n\n```python\n# Define the H matrix, what does it contain?\n# H = ...\nH = np.mat([1,1,1,1,1]).T\n\n# Now estimate the resistance parameter.\n# R = ... \nR_oi = np.mat(V/I)\nR = (1/((H.T)*H))*(H.T)*R_oi\nprint('The slope parameter (i.e., resistance) for the best-fit line is:')\nprint(R)\n\n```\n\n The slope parameter (i.e., resistance) for the best-fit line is:\n [[5.22466667]]\n\n\n## Plotting the Results\n----\nNow let's plot our result. How do we relate our linear parameter fit to the resistance value in ohms?\n\n\n```python\nI_line = I\nV_line = np.multiply(R,I_line)\n\nplt.scatter(np.asarray(I), np.asarray(V))\nplt.plot(I_line, V_line)\nplt.xlabel('current (A)')\nplt.ylabel('voltage (V)')\nplt.grid(True)\nplt.show()\n```\n\nIf you have implemented the estimation steps correctly, the slope parameter $\\hat{R}$ should be close to the actual resistance value of $R = 5~\\Omega$. 
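As an added cross-check (not part of the original notebook), the same least-squares estimate can be computed in closed form by using the current measurements themselves as the single column of $\mathbf{H}$, in which case $\hat{R} = (\mathbf{H}^T\mathbf{H})^{-1}\mathbf{H}^T\mathbf{y}$ reduces to $\sum_i I_i V_i / \sum_i I_i^2$:

```python
import numpy as np

# Standalone arrays so this snippet does not depend on the matrices defined above
I_arr = np.array([0.2, 0.3, 0.4, 0.5, 0.6])       # current (A)
V_arr = np.array([1.23, 1.38, 2.06, 2.47, 3.17])  # voltage (V)

# Closed-form least-squares slope for V = R*I, with H taken as the column of currents
R_hat = np.sum(I_arr * V_arr) / np.sum(I_arr**2)
print(R_hat)  # close to, but not exactly, 5 ohms
```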
However, the estimated value will not match the true resistance value exactly, since we have only a limited number of noisy measurements.\n\n\n```python\n\n```\n", "meta": {"hexsha": "e6164c86c5feef6d4a3b833ff0e174a8d0b29796", "size": 29986, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "C2M1L1.ipynb", "max_stars_repo_name": "andreza-bona/SelfDrivingCars", "max_stars_repo_head_hexsha": "3ef82a24088a16bbd5f43615defbab9082f8bfda", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "C2M1L1.ipynb", "max_issues_repo_name": "andreza-bona/SelfDrivingCars", "max_issues_repo_head_hexsha": "3ef82a24088a16bbd5f43615defbab9082f8bfda", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "C2M1L1.ipynb", "max_forks_repo_name": "andreza-bona/SelfDrivingCars", "max_forks_repo_head_hexsha": "3ef82a24088a16bbd5f43615defbab9082f8bfda", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 146.2731707317, "max_line_length": 14160, "alphanum_fraction": 0.8911825519, "converted": true, "num_tokens": 834, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9546474207360067, "lm_q2_score": 0.8670357460591569, "lm_q1q2_score": 0.8277134386612934}} {"text": "

### Simulación matemática 2018 - Lázaro Alonso (`alonsosilva@iteso.mx, lazarus.alon@gmail.com`)
    \n\n###\u00a0Por favor, den click al siguiente link. Ser\u00e1 una forma f\u00e1cil de que me hagan llegar sus dudas, adem\u00e1s de que todos podremos estar al tanto de los problemas tanto de tarea como de clase. \n\nhttps://join.slack.com/t/sm-grupo/shared_invite/enQtMzcxMzcxMjY1NzgyLWE5ZDlhYjg4OGJhMGE2ZmY2ZGUzZWIxYzQzOTUxZWU2ZGM5YjUyYWMyZGUzNzZjMDE5ZDIxYTA4YTI2ZWQ1NTU\n\n### M\u00e1ximos y m\u00ednimos\n\n##### 1. Encuentre los valores m\u00e1ximo y m\u00ednimo locales de la funci\u00f3n \n$$g(x) = x + 2 \\sin x$$\n\nElementos que debe de contener su respuesta: \n - Gr\u00e1fica de la funcion $g(x)$\n - Gr\u00e1fica de la primera y segunda derivada, ($g(x)'$, $g(x)''$)\n - Indicar en la gr\u00e1fica los m\u00e1ximos y m\u00ednimos (_utilizar plt.scatter_)\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport sympy as sym\nsym.init_printing(use_latex='mathjax')\nsym.var(\"x\")\nsym.var(\"x\", real = True)\ng = x + 2*sym.sin(x)\ndg = sym.diff(g,x,1)\nd2g = sym.diff(g,x,2)\n```\n\n\n```python\n\nx1c = sym.solve(dg, x)\nx2c = sym.solve(d2g, x)\nprint(\"Valores de g\u00b4(x)=0 --> \")\nprint(x1c)\nprint(\"Valores de g\u00b4\u00b4(x)=0 --> \")\nprint(x2c)\n```\n\n Valores de g\u00b4(x)=0 --> \n [2*pi/3, 4*pi/3]\n Valores de g\u00b4\u00b4(x)=0 --> \n [0, pi]\n\n\n\n```python\ng_num = sym.lambdify([x], g, 'numpy')\ndg_num = sym.lambdify([x], dg, 'numpy')\nd2g_num = sym.lambdify([x], d2g, 'numpy')\nx_vec = np.linspace(0, 2*np.pi, 100)\n\nplt.figure(figsize=(10,4))\nplt.plot(x_vec, g_num(x_vec), color = \"cyan\", label = g)\nplt.plot(x_vec, dg_num(x_vec), color = \"green\", label = dg)\nplt.plot(x_vec, d2g_num(x_vec), color = \"red\", label = d2g)\nplt.scatter(2*np.pi/3,(2*np.pi/3) + 2*np.sin(2*np.pi/3), color=\"cyan\")\nplt.scatter(4*np.pi/3,(4*np.pi/3) + 2*np.sin(4*np.pi/3), color=\"cyan\")\nplt.grid()\nplt.legend()\nplt.xlabel('$x$', fontsize = 18)\nplt.ylabel('$y$', fontsize = 18)\nplt.show()\n```\n\n##### 2. Discuta la curva $f(x) = x^4 - 4x^3$. Puntos de inflexi\u00f3n, m\u00e1ximos y m\u00ednimos. Su respuesta debe de incluir los mismos puntos que el caso anterior. \n\n\n```python\nsym.var(\"z\")\nsym.var(\"z\", real = True)\nf=z**4-4*z**3\ndf = sym.diff(f,z,1)\nd2f = sym.diff(f,z,2)\nz1c = sym.solve(df, z)\nz2c = sym.solve(d2f, z)\nprint(\"Valores de f\u00b4(z)=0 --> \")\nprint(z1c)\nprint(\"Valores de f\u00b4\u00b4(z)=0 --> \")\nprint(z2c)\n```\n\n Valores de f\u00b4(z)=0 --> \n [0, 3]\n Valores de f\u00b4\u00b4(z)=0 --> \n [0, 2]\n\n\n\n```python\nf_num = sym.lambdify([z], f, 'numpy')\ndf_num = sym.lambdify([z], df, 'numpy')\nd2f_num = sym.lambdify([z], d2f, 'numpy')\nx_vec = np.linspace(-2, 4, 100)\n\nplt.figure(figsize=(10,4))\nplt.plot(x_vec, f_num(x_vec), color = \"cyan\", label = f)\nplt.plot(x_vec, df_num(x_vec), color = \"green\", label = df)\nplt.plot(x_vec, d2f_num(x_vec), color = \"red\", label = d2f)\nplt.scatter(0,0, color=\"black\")\nplt.scatter(3,3**4-4*3**3, color=\"orange\")\nplt.scatter(2,2**4-4*2**3, color=\"blue\")\nplt.grid()\nplt.legend()\nplt.xlabel('$x$', fontsize = 18)\nplt.ylabel('$y$', fontsize = 18)\nplt.show()\n```\n\n#####\u00a03. Se va a fabricar una lata que ha de contaner 1L de aceite. Encuentre las dimensiones que debe de tener la lata de manera que minimicen el costo del metal para fabricarla. 
\n\n- La soluci\u00f3n debe de incluir los siguientes elementos \n - Ecuaci\u00f3n del sistema\n - N\u00fameros cr\u00edticos\n - Dibujar la lata\n\n\n```python\nsym.var(\"w\")\nsym.var(\"w\", real = True)\n\n#1000cm3=1lt=h*pi*r**2\nh = 1000/(np.pi*(w**2))\narea = 2*np.pi*w**2 +h*2*np.pi*w \nderarea = sym.diff(area,w,1)\nprint(\"Derivada de area: \")\nprint(derarea)\nw1c = sym.solve(derarea, w)\nprint(\"Valores de area\u00b4(w)=0 --> \")\nprint(w1c)\n```\n\n 10.838764212436917\n Derivada de area: \n 12.5663706143592*w - 2000.0/w**2\n Valores de area\u00b4(w)=0 --> \n [5.41926070139289]\n\n\n\n```python\n\narea_num = sym.lambdify([w], area, 'numpy')\nderarea_num = sym.lambdify([w], derarea, 'numpy')\nx_vec = np.linspace(2, 8, 100)\npunto= (2*np.pi*5.4192**2) + (1000/(np.pi*(5.4192**2)))*2*np.pi*5.4192 \nplt.figure(figsize=(10,4))\nplt.plot(x_vec, area_num(x_vec), color = \"red\", label = area)\nplt.grid()\nplt.scatter(5.4192,punto)\nplt.legend()\nplt.xlabel('$Radio$', fontsize = 18)\nplt.ylabel('$Area Total$', fontsize = 18)\nplt.show()\n```\n\nPara saber como minimizar el costo al hacer una lata, necesitamos encontrar una formula la cual nos diga cuanto metal usaremos, en este caso, necesitaremos el area de las tapas y el cilindro, por lo que podemos decir que:\n\n$Area$ $total = 2\u03c0(r^2) + 2000/r$\n\nEntonces, ahora que conocemos la funcion del area, necesitamos saber el punto donde el area es minima, manteniendo el volumen de 1 litro, con el teorema de Fermat, nos da que para optimizar y reducir los costos, el radio tendra que ser de 5.4192 cm y la altura de 10.8387 cm.\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "904c76c5d5244b5b9db0248cd6d734cbb971a69b", "size": 96350, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "TareaClase4_Douglas.ipynb", "max_stars_repo_name": "douglasparism/Hello-World", "max_stars_repo_head_hexsha": "d1083a5d532174065df5d367c5b496ac33bca97c", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "TareaClase4_Douglas.ipynb", "max_issues_repo_name": "douglasparism/Hello-World", "max_issues_repo_head_hexsha": "d1083a5d532174065df5d367c5b496ac33bca97c", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "TareaClase4_Douglas.ipynb", "max_forks_repo_name": "douglasparism/Hello-World", "max_forks_repo_head_hexsha": "d1083a5d532174065df5d367c5b496ac33bca97c", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 283.3823529412, "max_line_length": 32492, "alphanum_fraction": 0.9247638817, "converted": true, "num_tokens": 1774, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.936285002192296, "lm_q2_score": 0.8840392924390585, "lm_q1q2_score": 0.8277127308593798}} {"text": "# Pendulum\n## Analytical small angle approximation\nWe start with the simple pendulum; assuming the small amplitude approximation, and the initial condition that at t=0 the pendulum is at $\\theta_0=\\pi/3$ and $\\dot{\\theta}=0$. 
Then \n\\begin{align}\n\\theta&=\\theta_0\\cos{\\omega t}\\\\\n\\omega&=\\sqrt{g/l}\\\\\n\\end{align}\n\n# A file we want to include-- pendulumParameters.py\n\n\n```python\n#File pendulumParameters.py\n\nstringDiameter=0.031*0.0254 #meters\nstringLength=0.627 #meters, from pivot to top of mass\nstringDensity=0.000273/0.738 #kg/m\nmMass=0.024245-1.260*stringDensity #kg just the mass (mass+string=24.245 g)\ndMass=0.02539 #m diameter of mass, measured with caliper\nstringDensity=0.00133/1.31 #kg/m, mass of entire length of string-1.33 g = mass of 1.31 m of string\nstringMass=stringDensity*stringLength\n\n#calculate length of pendulum\nl=stringLength+dMass/2\n\n#moments of Inertia\nIMass=mMass*l*l #moment of inertia of mass about pivot\nIString=1.0/3*stringMass*stringLength**2 #1/3 mL**2 is Moment of intertia of rod about one end\nI=IMass+IString\n\n#center of mass\nm=mMass+stringMass\nlcm=(mMass*l+stringMass*stringLength/2)/m\n\nprint('Total Mass ',m)\nprint('lcm ',lcm,' l ',l)\nprint('I, IMass, IString:',I, IMass, IString)\n```\n\n Total Mass 0.02441547495810836\n lcm 0.6311902802180227 l 0.639695\n I, IMass, IString: 0.009813975740162914 0.009730557367544595 8.34183726183206e-05\n\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pendulumParameters as pend\n\ntheta_0=np.pi/3\n#l=0.635 #length of pendulum, meters\nl=pend.l\ng=9.8 # acceleration of gravity on Earth's surface, meters/s**2\nomega=np.sqrt(g/l)\nT=2*np.pi/omega # period, seconds\n\nnsteps0=1000\nt = np.linspace(0.0, 25.0*T, nsteps0)\ny0=theta_0*np.cos(omega*t)\nfigno=1\nplt.figure(figno)\nfigno+=1\n\nplt.plot(t, y0, 'b')\nplt.title('Pendulum angle versus time, small angle model')\nplt.xlabel('Time[s]')\nplt.ylabel('Angle [radians]')\nplt.show()\n\nprint('l=',l)\n```\n\n## Without the small angle approximation: simple finite difference solution\n\nThis is called \"Euler integration\" in NR, chapter 17. It is good to read chapter 17 at this point. 
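As a quick reminder of the idea before specializing it to the pendulum: for a general first-order system $\dot{\mathbf{y}} = \mathbf{f}(\mathbf{y}, t)$, a single Euler step of size $\Delta$ is

\begin{equation}
\mathbf{y}(t+\Delta) \approx \mathbf{y}(t) + \Delta\, \mathbf{f}(\mathbf{y}(t), t).
\end{equation}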
\nThe original equation for the pendulum is:\n\\begin{equation}\n\\ddot{\\theta}=-\\frac{g}{l}\\sin{\\theta} \n\\end{equation}\n\nWe write this second order equation as coupled first order equations:\n\\begin{equation}\n\\frac{d}{dt}\\left(\n\\begin{array}{c}\n\\omega\\\\\n\\theta\\\\\n\\end{array}\n\\right)=\n\\left(\n\\begin{array}{c}\n-\\frac{g}{l}\\sin{\\theta}\\\\\n\\omega\\\\\n\\end{array}\\right)\n\\end{equation}\n\nExpanding the derivatives in terms of $\\Delta$ we have\n\\begin{equation}\n\\left(\n\\begin{array}{c}\n\\frac{\\omega(t+\\Delta)-\\omega(t)}{\\Delta}\\\\\n\\frac{\\theta(t+\\Delta)-\\theta(t)}{\\Delta}\\\\\n\\end{array}\n\\right)=\n\\left(\n\\begin{array}{c}\n-\\frac{g}{l}\\sin{\\theta}\\\\\n\\omega\\\\\n\\end{array}\\right)\n\\end{equation}\n\nThis ends up giving us a recursion relationship:\n\\begin{equation}\n\\left(\n\\begin{array}{c}\n\\omega(t+\\Delta)\\\\\n\\theta(t+\\Delta)\\\\\n\\end{array}\n\\right)=\n\\left(\n\\begin{array}{c}\n-\\Delta\\frac{g}{l}\\sin{\\theta(t)}+\\omega(t)\\\\\n\\Delta\\omega(t)+\\theta(t)\\\\\n\\end{array}\\right)\n\\end{equation}\n\nWe write a little Python loop to calculate $\\omega$ and $\\theta$ using this equation:\n\n\n```python\nfactor=100\nnsteps=nsteps0*factor\nt2=np.linspace(0,25*T,nsteps)\nx=np.zeros(2*len(t2)).reshape(len(t2),2)\nDelta=t2[1]\nprint('Number of steps =%d and size of steps=%f between 0 and %f s'%(len(t2),Delta,25*T))\n\n# we will use the 0-row for theta, and the 1-row for omega\nx[0,1]=0 #initial condition is pendulum at rest\nx[0,0]=theta_0 #initial condition \nfor i in np.arange(0,len(t2)-1):\n x[i+1,1]=-Delta*g/l*np.sin(x[i,0])+x[i,1]\n x[i+1,0]=Delta*x[i,1]+x[i,0]\ny=x[::factor,0]\nr=y/y0\nplt.figure(figno)\nfigno+=1\nplt.plot(t,y,'k')\nplt.plot(t, y0, 'b')\nplt.title('Euler integrated(black) versus analytic small angle (blue) pendulum')\nplt.xlabel('Time[s]')\nplt.ylabel('Angle[radian]')\n\nplt.figure(figno)\nfigno+=1\nplt.plot(t,r,'k')\nplt.title('Ratio of Euler integration and small angle solutions to pendulum')\nplt.xlabel('Time[s]')\nplt.ylabel('Ratio of amplitudes')\nplt.show()\n\n\n```\n\n\n```python\nfactor=1000\nnsteps=nsteps0*factor\nt2=np.linspace(0,25*T,nsteps)\nx=np.zeros(2*len(t2)).reshape(len(t2),2)\nDelta=t2[1]\nprint('Number of steps =%d and size of steps=%f between 0 and %f s'%(len(t2),Delta,25*T))\n\n# we will use the 0-row for theta, and the 1-row for omega\nx[0,1]=0 #initial condition is pendulum at rest\nx[0,0]=theta_0 #initial condition \nfor i in np.arange(0,len(t2)-1):\n x[i+1,1]=-Delta*g/l*np.sin(x[i,0])+x[i,1]\n x[i+1,0]=Delta*x[i,1]+x[i,0]\ny=x[::factor,0]\nr=y/y0\nplt.figure(figno)\nfigno+=1\nplt.plot(t,y,'k')\nplt.plot(t, y0, 'b')\nplt.title('Euler integrated(black) versus analytic small angle (blue) pendulum')\nplt.xlabel('Time[s]')\nplt.ylabel('Angle[radian]')\nplt.figure(figno)\nfigno+=1\nplt.plot(t,r,'k')\nplt.title('Ratio of Euler integration and small angle solutions to pendulum')\nplt.xlabel('Time[s]')\nplt.ylabel('Ratio of amplitudes')\nplt.show()\n\n\n```\n\n\n```python\nfactor=10000\nnsteps=nsteps0*factor\nt2=np.linspace(0,25*T,nsteps)\nx=np.zeros(2*len(t2)).reshape(len(t2),2)\nDelta=t2[1]\nprint('Number of steps =%d and size of steps=%f between 0 and %f s'%(len(t2),Delta,25*T))\n\n# we will use the 0-row for theta, and the 1-row for omega\nx[0,1]=0 #initial condition is pendulum at rest\nx[0,0]=theta_0 #initial condition \nfor i in np.arange(0,len(t2)-1):\n x[i+1,1]=-Delta*g/l*np.sin(x[i,0])+x[i,1]\n 
x[i+1,0]=Delta*x[i,1]+x[i,0]\ny=x[::factor,0]\nr=y/y0\nplt.figure(figno)\nfigno+=1\nplt.plot(t,y,'k')\nplt.plot(t, y0, 'b')\nplt.title('Euler integrated(black) versus analytic small angle (blue) pendulum')\nplt.xlabel('Time[s]')\nplt.ylabel('Angle[radian]')\nplt.figure(figno)\nfigno+=1\nplt.plot(t,r,'k')\nplt.title('Ratio of Euler integration and small angle solutions to pendulum')\nplt.xlabel('Time[s]')\nplt.ylabel('Ratio of amplitudes')\nplt.show()\n\n\n```\n\n\n\nOur next step will be more sophisticated- using algorithms that have been worked out more carefully. But first- some thoughts about an important principle in coding\n\n\n\n# Explicit Verification of Code\nAfter figuring out the model-- always!-- think of ways to test the numerical solution. Testing and debugging code is almost always more than half the work. Therefore- never give up a chance to do things differently/redundantly- check at the end to make sure that the algorithm is doing what you intended it to do. Reading it over and over is natural but usually, for most people, not productive. Looking at the output line by line-- and there are python debuggers that allow you to do that (or cut and paste the code one line at a time into the ipython window) work. But I always try to explicitly think of tests that we can do inside the project to double check that the code is right.\n\nFor this code, what can we do? \n1. Do the second numerical derivative and compare it to the equation (ie. substitute back in)\n2. Do time reversal, and see if we come back to the starting point.\n3. Add an explicit calculation of the total energy, and see that it is constant\n\nWhy do we do all three rather than just one?\n\n\n\nWe start with doing the second order numerical derivative:\n\\begin{equation}\n\\ddot{\\theta}=\\frac{g}{l}\\sin{\\theta}=\\frac{1}{h}\\left(\\frac{\\theta(t+h)-\\theta(t)}{h}-\\frac{\\theta(t)-\\theta(t-h)}{h}\\right)=\\frac{\\theta(t+h)+\\theta(t-h)-2\\theta(t)}{h^2}\n\\end{equation}\n\n\n\n```python\nthetaOfT=x[1:-1,0]\nthetaTPlusH=x[2:,0]\nthetaTMinusH=x[:-2,0]\nh=Delta\nthetaDotDot=(1/h**2)*(thetaTPlusH+thetaTMinusH-2*thetaOfT)\nplt.plot(t2[1:-1:factor],thetaDotDot[::factor],'k')\nplt.plot(t2[1:-1:factor],-g/l*np.sin(thetaOfT[1:-1:factor]),'r')\nplt.title(\"Numerical Derivative in black; analytic derivative in red\")\nplt.xlabel(\"Time[s]\")\nplt.ylabel(r'$\\ddot\\theta$[rad/s^2]')\nplt.show()\n\ndiff=thetaDotDot[::factor]+g/l*np.sin(thetaOfT[1:-1:factor])\nplt.plot(t2[1:-1:factor],diff,'b')\nplt.title(\"Difference between desired and actual second derivative\")\nplt.xlabel(\"Time[s]\")\nplt.ylabel(r'Difference in $\\ddot\\theta$[rad/s^2]')\nplt.show()\n```\n\n\nThis would indicate that our precision is about 0.0004/10 or 4 e-5. It needs to be a bit better than this because theta is also an output of the calculation- that is we are checking the derivative of $\\theta$, which is $\\omega$! \n\nTo time reverse we start with theta0, phi0 as the final value, and make the derivative be - the old derivative. 
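Written out explicitly (an added note), the time-reversed recursion that the following code implements is

\begin{align}
\omega_{n+1} &= \omega_n + \Delta \frac{g}{l}\sin\theta_n\\
\theta_{n+1} &= \theta_n - \Delta\, \omega_n
\end{align}

with the iteration started from the final state of the forward run.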
\n\n\n```python\nx[0,1]=x[-1,1] #initial condition is pendulum at rest\nx[0,0]=x[-1,0] #initial condition \nfor i in np.arange(0,len(t2)-1):\n x[i+1,1]=Delta*g/l*np.sin(x[i,0])+x[i,1]\n x[i+1,0]=-Delta*x[i,1]+x[i,0]\ny=x[::factor,0]\nr=y/y0\nplt.figure(figno)\nfigno+=1\nplt.plot(t,y,'k')\nplt.title('Euler integrated(black) versus analytic small angle (blue) pendulum')\nplt.xlabel('Time[s]')\nplt.ylabel('Angle[radian]')\nplt.figure(figno)\nfigno+=1\nplt.plot(t,r,'k')\nplt.title('Ratio of Euler integration and small angle solutions to pendulum')\nplt.xlabel('Time[s]')\nplt.ylabel('Ratio of amplitudes')\nplt.show()\n\n\nprint('Final theta=%f omega=%f'%(x[-1,0],x[-1,1]))\nprint('Original theta0=%f omega0=%f'%(theta_0,0))\nprint('Fractional Difference=',(x[-1,0]-theta_0)/(50*np.pi))\n\n```\n\nNotice that this also says we have something like 2e-3. \n\n\nThe total energy is $E=1/2 mv^2+mgh=1/2ml^2\\omega^2-mgl\\cos\\theta$.\n\n\n\n```python\n\nm=pend.m\nEnergy=(0.5*m*l**2)*x[::factor,1]**2-(m*g*l)*np.cos(x[::factor,0])\nplt.figure(figno)\nfigno+=1\nplt.plot(t,Energy,'r')\nplt.show()\nprint('Energy is', Energy)\n```\n\n\n```python\n\n\n```\n\n\n\nBut also note that we already have a physics result-- the large angle pendulum has a longer period than the \"small angle\" approximation by about 1 cycle in 23- or about 4-5% in this case. That makes sense because the acceleration is smaller. Of course, this will depend on the amplitude, and as the amplitude drops the approximation will get closer to reality. \n\n\n```python\n\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "e47f62786cee700da3e0961739b6f0ea2b9d9cdb", "size": 490077, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Notebook Folder-20190925/.ipynb_checkpoints/02 Pendulum Calculations With Eulers Method-checkpoint.ipynb", "max_stars_repo_name": "hanzhihua72/phys-420", "max_stars_repo_head_hexsha": "748d29b55d57680212b15bb70879a24b79cb16a9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Notebook Folder-20190925/.ipynb_checkpoints/02 Pendulum Calculations With Eulers Method-checkpoint.ipynb", "max_issues_repo_name": "hanzhihua72/phys-420", "max_issues_repo_head_hexsha": "748d29b55d57680212b15bb70879a24b79cb16a9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Notebook Folder-20190925/.ipynb_checkpoints/02 Pendulum Calculations With Eulers Method-checkpoint.ipynb", "max_forks_repo_name": "hanzhihua72/phys-420", "max_forks_repo_head_hexsha": "748d29b55d57680212b15bb70879a24b79cb16a9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 610.3075965131, "max_line_length": 69748, "alphanum_fraction": 0.9437231292, "converted": true, "num_tokens": 3198, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.914900950352329, "lm_q2_score": 0.9046505254608136, "lm_q1q2_score": 0.8276656254808321}} {"text": "# Taylor Series for Approximations\n\nTaylor series are commonly used in physics to approximate functions making them easier to handle specially when solving equations. 
In this notebook we give a visual example on how it works and the biases that it introduces.\n\n## Theoretical Formula\n\nConsider a function $f$ that is $n$ times differentiable in a point $a$. Then by Taylor's theorem, for any point $x$ in the domain of f, we have the Taylor expansion about the point $a$ is defined as:\n\\begin{equation}\nf(x) = f(a) + \\sum_{k=1}^n \\frac{f^{k}(a)}{k!}(x-a)^k + o\\left((x-a)^n\\right) \\quad,\n\\end{equation}\nwhere $f^{(k)}$ is the derivative of order $k$ of $f$. Usually, we consider $a=0$ which gives:\n\\begin{equation}\nf(x) = f(0) + \\sum_{k=1}^n \\frac{f^{k}(0)}{k!}(x)^k + o\\left((x)^n\\right) \\quad.\n\\end{equation}\n\nFor example, the exponential, $e$ is infinitely differentiable with $e^{(k)}=e$ and $e^0=1$. This gives us the following Taylor expansion:\n\\begin{equation}\ne(x) = 1 + \\sum_{k=1}^\\infty \\frac{x^k}{k!} \\quad.\n\\end{equation}\n\n## Visualising Taylor Expansion Approximation and its Bias\n\nLet us see visually how the Taylor expansion approximatees a given function. We start by defining our function below, for example we will consider the exponential function, $e$ again up to order 3.\n\n\n```python\n#### FOLDED CELL\n%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom IPython.display import Markdown as md\nfrom sympy import Symbol, series, lambdify, latex\nfrom sympy.functions import *\nfrom ipywidgets import interactive_output\nimport ipywidgets as widgets\nfrom sympy.parsing.sympy_parser import parse_expr\nimport numpy as np\n\nx = Symbol('x')\n```\n\n\n```python\norder = 3\nfunc = exp(x)\n```\n\n\n```python\n#### FOLDED CELL\ntaylor_exp = series(func,x,n=order+1)\napprox = lambdify(x, sum(taylor_exp.args[:-1]), \"numpy\")\nfunc_np = lambdify(x, func, \"numpy\")\nlatex_func = '$'+latex(func)+'$'\nlatex_taylor = '\\\\begin{equation} '+latex(taylor_exp)+' \\end{equation}'\n```\n\nThe Taylor expansion of {{ latex_func }} is :\n{{latex_taylor}}\n\nNow let's plot the function and its expansion while considering a point, noted $p$, to study the biais that we introduce when we approximate the function by its expansion:\n\n\n```python\n#### FOLDED CELL\norder = widgets.IntSlider(min=0,max=20,step=1,value=3,description='Order')\nx_min = -4\nx_max = 4\nx1 = widgets.FloatSlider(min=x_min,max=x_max,value=3,step=0.2,description='Point Absciss')\nfunc = widgets.Text('exp(x)',description='Function')\ntext_offset = np.array([-0.15,2.])\n\n\nui = widgets.HBox([x1, order, func])\n\ndef f(order=widgets.IntSlider(min=1,max=10,step=1,value=3)\n ,x1=1.5\n ,func='exp(x)'):\n func_sp = parse_expr(func)\n taylor_exp = series(func_sp,x,n=order+1)\n approx = lambdify(x, sum(taylor_exp.args[:-1]), \"numpy\")\n func_np = lambdify(x, func_sp, \"numpy\")\n n_points = 1000\n x_array = np.linspace(x_min,x_max,n_points)\n approx_array = np.array([approx(z) for z in x_array])\n func_array = np.array([func_np(z) for z in x_array])\n func_x1 = func_np(x1)\n approx_x1 = approx(x1)\n plt.figure(42,figsize=(10,10))\n plt.plot(x_array,approx_array,color='blue',label='Taylor Expansion')\n plt.plot(x_array,func_array,color='green',label=func)\n plt.plot(0,approx(0),color='black',marker='o')\n plt.annotate(r'(0,0)',[0,approx(0)],xytext=text_offset)\n plt.plot([x1,x1]\n ,[-np.max(np.abs([np.min(func_array),np.max(func_array)])),min(approx_x1, func_x1)]\n ,'--',color='black',marker='x')\n plt.plot([x1,x1],[approx_x1, func_x1],'r--',marker='x')\n plt.annotate(r'$p_{approx}$',[x1,approx(x1)],xytext=[x1,approx(x1)]-text_offset)\n 
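# mark the true function value at the same x1, for comparison with the approximation\n 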
plt.annotate(r'$p$',[x1,func_np(x1)],xytext=[x1,func_np(x1)]-text_offset)\n plt.xlim([x_min,x_max])\n plt.ylim(-np.max(np.abs([np.min(func_array),np.max(func_array)]))\n ,np.max(np.abs([np.min(func_array),np.max(func_array)])))\n plt.legend()\n plt.show()\n print('Approximation bias : {}'.format(func_x1-approx_x1))\n return None\ninteractive_plot = widgets.interactive_output(f, {'order': order, 'x1': x1, 'func': func})\ninteractive_plot.layout.height = '650px'\ndisplay(interactive_plot, ui)\n```\n\n\n Output(layout=Layout(height='650px'))\n\n\n\n HBox(children=(FloatSlider(value=3.0, description='Point Absciss', max=4.0, min=-4.0, step=0.2), IntSlider(val\u2026\n\n\nNotice that the further $p$ gets away from the point of the expansion (in that case $0$), the higher the approximation bias gets. Samely, the lower the order of approximation is, the higher the approximation bias gets.\n", "meta": {"hexsha": "d1e7f77a0398c1454062aa878fc5723994dd3e99", "size": 7425, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "taylor_series.ipynb", "max_stars_repo_name": "fadinammour/taylor_series", "max_stars_repo_head_hexsha": "4deb11d51dcf23432c035486d997cfebd3ea7418", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "taylor_series.ipynb", "max_issues_repo_name": "fadinammour/taylor_series", "max_issues_repo_head_hexsha": "4deb11d51dcf23432c035486d997cfebd3ea7418", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "taylor_series.ipynb", "max_forks_repo_name": "fadinammour/taylor_series", "max_forks_repo_head_hexsha": "4deb11d51dcf23432c035486d997cfebd3ea7418", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.75, "max_line_length": 232, "alphanum_fraction": 0.5667340067, "converted": true, "num_tokens": 1353, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9149009480320035, "lm_q2_score": 0.904650527388829, "lm_q1q2_score": 0.8276656251456916}} {"text": "Determine a polynomial $f(x)$ of degree 3 from the following information:\n\n- $f(x)$ has a stationary point at $P(0|4)$\n- $f(x)$ has an inflection point at $Q(2|2)$\n\nPlot the graph of the resulting function for $-2.1 \\le x \\le 6.1$.\n\n## Solution\n\nThe first step consists of some initialisations\n\n\n```python\n# Initialisations\nfrom sympy import *\ninit_printing()\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n%config InlineBackend.figure_format='retina' # for OSX only\n\nimport numpy as np\n\nfrom IPython.display import display, Math\n\nfrom fun_expr import Function_from_Expression as FE\n```\n\nThen, the function $f(x)$ is defined as\n\n$$\n f(x) = a\\,x^3 + b\\,x^2 + c\\,x + d\n$$\n\nwith unknown coefficients $a\\cdots d$.\n\n\n```python\n# define f\n\nx = Symbol('x')\n\na,b,c,d = symbols('a,b,c,d')\n\nf = FE(x, a*x**3 + b*x**2 + c*x + d)\nf_1 = f.diff(x)\nf_2 = f.diff(x,2)\n\ndisplay(Math(\"f(x)=\"+latex(f(x))))\ndisplay(Math(\"f'(x)=\"+latex(f_1(x))))\ndisplay(Math(\"f''(x)=\"+latex(f_2(x))))\n```\n\n\n$$f(x)=a x^{3} + b x^{2} + c x + d$$\n\n\n\n$$f'(x)=3 a x^{2} + 2 b x + c$$\n\n\n\n$$f''(x)=6 a x + 2 b$$\n\n\nThe unknown coefficients are determined by the conditions\n\n\\begin{align*}\n f(x_s) &= y_s \\\\\n f(x_i) &= y_i \\\\\n f'(x_s) &= 0 \\\\\n f''(x_i) &= 0 \\\\\n\\end{align*}\n\nHere, $(x_s|y_s)$ is the stationary point and $(x_i|y_i)$ the inflection point.\n\n\n```python\n# known information\nx_s, y_s = 0,4\nx_i, y_i = 2,2\np_s = (x_s,y_s) # stationary point\np_i = (x_i,y_i) # inflection point\n\n# equations\neqns = [Eq(f(x_s),y_s), \n Eq(f_1(x_s),0),\n Eq(f(x_i),y_i),\n Eq(f_2(x_i),0)]\n\nfor eq in eqns:\n display(eq)\n```\n\nThe resulting system of equations is solved\n\n\n```python\n# solve equations\nsol = solve(eqns)\nsol\n```\n\n... 
and the solution substituted into $f$, $f'$ and $f''$.\n\n\n```python\n# substitute solution\nf = f.subs(sol)\nf_1 = f_1.subs(sol)\nf_2 = f_2.subs(sol)\ndisplay(Math('f(x)='+latex(f(x))))\ndisplay(Math(\"f'(x)=\"+latex(f_1(x))))\ndisplay(Math(\"f''(x)=\"+latex(f_2(x))))\n```\n\n\n$$f(x)=\\frac{x^{3}}{8} - \\frac{3 x^{2}}{4} + 4$$\n\n\n\n$$f'(x)=\\frac{3 x^{2}}{8} - \\frac{3 x}{2}$$\n\n\n\n$$f''(x)=\\frac{3 x}{4} - \\frac{3}{2}$$\n\n\nThe resulting function $f(x)$ is plotted over $-2.1 \\le x \\le 6.1$ \n\n\n```python\n# define new plot\nfig, ax = plt.subplots()\n\n# x-values\nlx = np.linspace(-2.1,6.1)\n\n# plot\nax.plot(lx,f.lambdified(lx),label=r'$y={f}$'.format(f=latex(f(x))))\nax.scatter(*zip(*[p_s,p_i]))\n\n# refine plot\nax.axhline(0,c='k')\nax.axvline(0,c='k')\nax.grid(True)\nax.set_xlabel('x')\nax.set_ylabel('y')\nax.legend(loc='best')\n\n# show plot\nplt.show()\n```\n", "meta": {"hexsha": "4c42f447818b401f043a151ae1bfb2ecf10dfda0", "size": 51320, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "docs/04-determine_polynomial_of_degree_3.ipynb", "max_stars_repo_name": "w-meiners/fun-expr", "max_stars_repo_head_hexsha": "a44f0366f08c8c2d2eb2702176698bfe3f6febed", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2017-12-20T16:16:40.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-28T11:06:38.000Z", "max_issues_repo_path": "docs/04-determine_polynomial_of_degree_3.ipynb", "max_issues_repo_name": "w-meiners/fun-expr", "max_issues_repo_head_hexsha": "a44f0366f08c8c2d2eb2702176698bfe3f6febed", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/04-determine_polynomial_of_degree_3.ipynb", "max_forks_repo_name": "w-meiners/fun-expr", "max_forks_repo_head_hexsha": "a44f0366f08c8c2d2eb2702176698bfe3f6febed", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-01-27T09:50:59.000Z", "max_forks_repo_forks_event_max_datetime": "2020-01-27T09:50:59.000Z", "avg_line_length": 137.5871313673, "max_line_length": 39052, "alphanum_fraction": 0.875233827, "converted": true, "num_tokens": 928, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9525741268224333, "lm_q2_score": 0.8688267728417087, "lm_q1q2_score": 0.8276219044996432}} {"text": "# \u89e3\u65b9\u7a0b\n\n\u548c\u8ba1\u7b97\u7b97\u5f0f\u4e0d\u540c,\u89e3\u65b9\u7a0b\u662f\u77e5\u9053\u7ed3\u679c\u6c42\u53d8\u91cf,\u4f5c\u4e3a\u7b26\u53f7\u8fd0\u7b97\u5de5\u5177,sympy\u5728\u89e3\u65b9\u7a0b\u65b9\u9762\u975e\u5e38\u76f4\u89c2\n\n\n```python\nfrom sympy import init_printing\ninit_printing(use_unicode=True)\n```\n\n\n```python\nfrom sympy import symbols\nx, y, z = symbols('x y z')\n```\n\n## \u76f8\u7b49\n\npython\u4e2d`=`\u662f\u8d4b\u503c,`==`\u662f\u5224\u65ad\u76f8\u7b49,\u800c\u5728SymPy\u4e2d\u4f7f\u7528\u51fd\u6570`Eq(exp1,exp2)`\u6765\u8868\u793a\u4e24\u4e2a\u7b97\u5f0f\u76f8\u7b49\n\n\n```python\nfrom sympy import Eq\n```\n\n\n```python\nEq(x,y)\n```\n\n## \u6c42\u89e3\u65b9\u7a0b\n\n\n\u6c42\u89e3\u65b9\u7a0b\u4f7f\u7528`solveset(Eq(expr, result), var)`\u6765\u5b9e\u73b0.\u6c42\u89e3\u65b9\u7a0b\u672c\u8d28\u4e0a\u662f\u6c42\u65b9\u7a0b\u7684\u89e3\u96c6,\u56e0\u6b64\u5176\u8fd4\u56de\u503c\u662f\u4e00\u4e2a\u96c6\u5408,\u96c6\u5408\u8ba1\u7b97\u4f1a\u5728\u540e\u9762\u4ecb\u7ecd\n\n\n```python\nfrom sympy import solveset\n```\n\n\n```python\nsolveset(Eq(x**2, 1), x)\n```\n\n\u56e0\u4e3a\u5c31\u4e00\u4e2a\u53c2\u6570\u6240\u4ee5x\u53ef\u4ee5\u7701\u7565\n\n\n```python\nsolveset(Eq(x**2, 1))\n```\n\n\u5982\u679c\u7b2c\u4e00\u4e2a\u53c2\u6570\u4e0d\u662f\u7b49\u5f0f,\u90a3\u4e48\u9ed8\u8ba4\u7684`solveset`\u5c06\u4f1a\u628a\u5b83\u4f5c\u4e3a\u7b49\u4e8e0\u5904\u7406\n`solveset(expr, var)`\n\n\n```python\nsolveset(x**2-1, x)\n```\n\n\u5176\u5b9e`solveset(equation, variable=None, domain=S.Complexes)->Set`\u624d\u662fsolveset\u7684\u5b8c\u6574\u63a5\u53e3,\u800cdomain\u5219\u8868\u793a\u57df(\u53d6\u503c\u8303\u56f4\u96c6\u5408),SymPy\u652f\u6301\u7684\u57df\u5305\u62ec:\n\n\n+ `S.Naturals`\u8868\u793a\u81ea\u7136\u6570(\u6216\u8ba1\u6570\u6570),\u5373\u4ece1\u5f00\u59cb\u7684\u6b63\u6574\u6570($\u2115$)\n+ `S.Naturals0`\u975e\u8d1f\u6574\u6570($\u21150$)\n+ `S.Integers`\u6574\u6570($Z$)\n+ `S.Reals`\u5b9e\u6570($R$)\n+ `S.Complexes`\u590d\u6570($C$)\n+ `S.EmptySet`\u7a7a\u57df($\\emptyset$)\n\n\n```python\nfrom sympy import S,sin,cos,exp\n```\n\n\n```python\nsolveset(x - x, x, domain=S.Reals)\n```\n\n\n```python\nsolveset(sin(x) - 1, x, domain=S.Reals)\n```\n\n\u5982\u679c\u65e0\u89e3,\u90a3\u4e48\u7ed3\u679c\u5c31\u662f\u4e00\u4e2a\u7a7a\u96c6\n\n\n```python\nsolveset(exp(x), x) # No solution exists\n```\n\n\n```python\nsolveset(cos(x) - x, x) \n```\n\n## \u6c42\u65b9\u7a0b\u7ec4\n\n\u6c42\u65b9\u7a0b\u7ec4\u53ef\u4ee5\u7528`linsolve(exps...,(vars...))-> Set`\u51fd\u6570\n\n\n```python\nfrom sympy import linsolve\n```\n\n\n```python\nlinsolve([x + y + z - 1,\n x + y + 2*z - 3 ], (x, y, z))\n```\n\n\u5f53\u7136\u4e5f\u53ef\u4ee5\u4f7f\u7528\u77e9\u9635\u7684\u65b9\u5f0f,\u77e9\u9635\u5c06\u5728\u4e0b\u4e00\u90e8\u5206\u5b66\u4e60\n\n\n```python\nfrom sympy import Matrix\n```\n\n\n```python\nA=Matrix([[1,1,1],[1,1,2]])\n```\n\n\n```python\nb = Matrix([[1],[3]])\n```\n\n\n```python\nlinsolve((A,b,),(x,y,z,))\n```\n\n## \u6c42\u89e3\u5fae\u5206\u65b9\u7a0b\n\n`dsolve()`\u662f\u5fae\u5206\u65b9\u7a0b\u7684\u6c42\u89e3\u51fd\u6570\n\n\n```python\nfrom sympy import Function,dsolve\n```\n\n\n```python\nf, g = symbols('f g', cls=Function)\n```\n\n\n```python\nf(x)\n```\n\n\n```python\nf(x).diff(x)\n```\n\n\n```python\ndiffeq = Eq(f(x).diff(x, x) - 2*f(x).diff(x) + f(x),\n sin(x))\n```\n\n\n```python\ndiffeq\n```\n\n\n```python\ndsolve(diffeq, 
f(x))\n```\n\n\n```python\nf(x).diff(x)*(1 - sin(f(x)))\n```\n\n\n```python\ndsolve(f(x).diff(x)*(1 - sin(f(x))),\n f(x))\n```\n", "meta": {"hexsha": "e6775373b730246c9c4d8b55d11a3d412c102c13", "size": 35688, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "src/\u6570\u636e\u5206\u6790\u7bc7/\u5de5\u5177\u4ecb\u7ecd/SymPy/\u7b26\u53f7\u8ba1\u7b97/\u89e3\u65b9\u7a0b.ipynb", "max_stars_repo_name": "hsz1273327/TutorialForDataScience", "max_stars_repo_head_hexsha": "1d8e72c033a264297e80f43612cd44765365b09e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/\u6570\u636e\u5206\u6790\u7bc7/\u5de5\u5177\u4ecb\u7ecd/SymPy/\u7b26\u53f7\u8ba1\u7b97/\u89e3\u65b9\u7a0b.ipynb", "max_issues_repo_name": "hsz1273327/TutorialForDataScience", "max_issues_repo_head_hexsha": "1d8e72c033a264297e80f43612cd44765365b09e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2020-03-31T03:36:05.000Z", "max_issues_repo_issues_event_max_datetime": "2020-03-31T03:36:21.000Z", "max_forks_repo_path": "src/\u6570\u636e\u5206\u6790\u7bc7/\u5de5\u5177\u4ecb\u7ecd/SymPy/\u7b26\u53f7\u8ba1\u7b97/\u89e3\u65b9\u7a0b.ipynb", "max_forks_repo_name": "hsz1273327/TutorialForDataScience", "max_forks_repo_head_hexsha": "1d8e72c033a264297e80f43612cd44765365b09e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 55.6755070203, "max_line_length": 3016, "alphanum_fraction": 0.7754147052, "converted": true, "num_tokens": 950, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9449947101574299, "lm_q2_score": 0.8757869997529961, "lm_q1q2_score": 0.8276140819912277}} {"text": "# Revis\u00e3o de conceitos estat\u00edsticos I\n\nVamos explorar alguns conceitos estat\u00edsticos aplicados \u00e0 an\u00e1lise de sinais.\n\n\n```python\n# importar as bibliotecas necess\u00e1rias\nimport numpy as np # arrays\nimport matplotlib.pyplot as plt # plots\nplt.rcParams.update({'font.size': 14})\nimport IPython.display as ipd # to play signals\nimport sounddevice as sd\n```\n\n# Exemplo 1: Tr\u00eas pessoas lan\u00e7am uma moeda 10 vezes\n\nAssim, teremos um registro \"temporal\" do lan\u00e7amento da moeda. Vamos dizer que \n\n- Ao conjunto de registros temporais chamamos de ***Ensemble***\n- A cada $n$ em $n_{\\text{eventos}}$, temos um ***evento***. Cada ***evento*** (cara ou coroa) tem associado a si uma ***vari\u00e1vel aleat\u00f3ria***, definida como\n - Cara = 0\n - Coroa = 1\n\n- O universo amostral \u00e9: $\\Omega = (0, 1)$\n- Podemos calcular a probavilidade do evento 'cara' tomando $p(\\text{cara}) = n(\\text{cara})/n_{\\text{eventos}}$. Esta \u00e9 uma vis\u00e3o frequentista do fen\u00f4meno.\n\n\n```python\nn_eventos = np.arange(1,11)\np1 = np.random.randint(2, size=len(n_eventos))\np2 = np.random.randint(2, size=len(n_eventos))\np3 = np.random.randint(2, size=len(n_eventos))\n\nprint(\"Prob. de caras em p1 \u00e9 de: {:.2f}\".format(len(p1[p1==1])/len(p1)))\nprint(\"Prob. de caras em p2 \u00e9 de: {:.2f}\".format(len(p2[p2==1])/len(p2)))\nprint(\"Prob. 
de caras em p3 \u00e9 de: {:.2f}\".format(len(p3[p3==1])/len(p3)))\n\nplt.figure(figsize = (12,6))\nplt.subplot(3,1,1)\nplt.bar(n_eventos, p1, color = 'lightblue')\nplt.xticks(n_eventos)\nplt.grid(linestyle = '--', which='both')\nplt.xlabel('Eventos')\nplt.ylabel('V.A. [-]')\nplt.xlim((0, len(n_eventos)+1))\nplt.ylim((0, 1.2))\n\nplt.subplot(3,1,2)\nplt.bar(n_eventos, p2, color = 'lightcoral')\nplt.xticks(n_eventos)\nplt.grid(linestyle = '--', which='both')\nplt.xlabel('Eventos')\nplt.ylabel('V.A. [-]')\nplt.xlim((0, len(n_eventos)+1))\nplt.ylim((0, 1.2))\n\nplt.subplot(3,1,3)\nplt.bar(n_eventos, p3, color = 'lightgreen')\nplt.xticks(n_eventos)\nplt.grid(linestyle = '--', which='both')\nplt.xlabel('Eventos')\nplt.ylabel('V.A. [-]')\nplt.xlim((0, len(n_eventos)+1))\nplt.ylim((0, 1.2))\nplt.tight_layout();\n```\n\n# Exemplo 2: Ru\u00eddo aleat\u00f3rio com distribui\u00e7\u00e3o normal\n\nNeste exemplo n\u00f3s consideramos um sinal gerado a partir do processo de obter amostras de uma distribui\u00e7\u00e3o normal, dada por:\n\n\\begin{equation}\np(x) = \\mathcal{N}(\\mu_x, \\sigma_x) = \\frac{1}{\\sqrt{2\\pi}\\sigma_x}\\mathrm{e}^{-\\frac{1}{2\\sigma_x^2}(x-\\mu_x)^2}\n\\end{equation}\nem que $\\mu_x$ \u00e9 a m\u00e9dia e $\\sigma_{x}$ \u00e9 o desvio padr\u00e3o.\n\nImaginemos ent\u00e3o, que a cada instante de tempo $t$, n\u00f3s sorteamos um valor da distribui\u00e7\u00e3o normal.\n\n\n```python\nfs = 44100\ntime = np.arange(0,2, 1/fs)\n\n# sinal\nmu_x = 0\nsigma_x = 1.0\nxt = np.random.normal(loc = mu_x, scale = sigma_x, size=len(time))\n\n# Figura\nfig, axs = plt.subplots(1, 2, gridspec_kw={'width_ratios': [8, 2]}, figsize = (12, 4))\naxs[0].plot(time, xt, '-b', linewidth = 1, color = 'lightcoral')\naxs[0].grid(linestyle = '--', which='both')\naxs[0].set_xlabel('Tempo [s]')\naxs[0].set_ylabel('Amplitude [-]')\naxs[0].set_xlim((0, 0.05))\naxs[0].set_ylim((-4, 4))\n\naxs[1].hist(xt, density = True, orientation='horizontal', bins=np.linspace(-4*sigma_x, 4*sigma_x, 20), color = 'lightcoral')\naxs[1].grid(linestyle = '--', which='both')\naxs[1].set_xlabel('Densidade [-]')\naxs[1].set_ylabel(r'Valores de $x(t)$ [-]')\naxs[1].set_ylim((-4, 4))\n\n\n\n\nplt.tight_layout()\n\nipd.Audio(xt, rate=fs) # load a NumPy array\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "ca730471c44476d36220eb293ecb79c86dab3ba2", "size": 336184, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Aula 49 - Rev conceitos estatisticos I/.ipynb_checkpoints/conceitos estatisticos I-checkpoint.ipynb", "max_stars_repo_name": "RicardoGMSilveira/codes_proc_de_sinais", "max_stars_repo_head_hexsha": "e6a44d6322f95be3ac288c6f1bc4f7cfeb481ac0", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 8, "max_stars_repo_stars_event_min_datetime": "2020-10-01T20:59:33.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-27T22:46:58.000Z", "max_issues_repo_path": "Aula 49 - Rev conceitos estatisticos I/.ipynb_checkpoints/conceitos estatisticos I-checkpoint.ipynb", "max_issues_repo_name": "RicardoGMSilveira/codes_proc_de_sinais", "max_issues_repo_head_hexsha": "e6a44d6322f95be3ac288c6f1bc4f7cfeb481ac0", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Aula 49 - Rev conceitos estatisticos I/.ipynb_checkpoints/conceitos estatisticos I-checkpoint.ipynb", "max_forks_repo_name": "RicardoGMSilveira/codes_proc_de_sinais", "max_forks_repo_head_hexsha": 
"e6a44d6322f95be3ac288c6f1bc4f7cfeb481ac0", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 9, "max_forks_repo_forks_event_min_datetime": "2020-10-15T12:08:22.000Z", "max_forks_repo_forks_event_max_datetime": "2021-04-12T12:26:53.000Z", "avg_line_length": 1500.8214285714, "max_line_length": 235352, "alphanum_fraction": 0.9474543702, "converted": true, "num_tokens": 1176, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9449947117065459, "lm_q2_score": 0.8757869932689566, "lm_q1q2_score": 0.8276140772205403}} {"text": "```python\nfrom sympy import init_session\ninit_session()\n```\n\n IPython console for SymPy 1.0 (Python 2.7.12-64-bit) (ground types: gmpy)\n \n These commands were executed:\n >>> from __future__ import division\n >>> from sympy import *\n >>> x, y, z, t = symbols('x y z t')\n >>> k, m, n = symbols('k m n', integer=True)\n >>> f, g, h = symbols('f g h', cls=Function)\n >>> init_printing()\n \n Documentation can be found at http://docs.sympy.org/1.0/\n\n\n# Euler Equations\n\nThe Euler equations in primitive variable form, $q = (\\rho, u, p)^\\intercal$ appear as:\n\n$$q_t + A(q) q_x = 0$$\n\nwith the matrix $A(q)$:\n\n\n$$A(q) = \\left ( \\begin{array}{ccc} u & \\rho & 0 \\\\ \n 0 & u & 1/\\rho \\\\ \n 0 & \\gamma p & u \\end{array} \\right ) \n$$\n\nThe sound speed is related to the adiabatic index, $\\gamma$, as $c^2 = \\gamma p /\\rho$.\n\nWe can represent this matrix symbolically in SymPy and explore its eigensystem.\n\n\n```python\nfrom sympy.abc import rho\nrho, u, c = symbols('rho u c')\n\nA = Matrix([[u, rho, 0], [0, u, rho**-1], [0, c**2 * rho, u]])\nA\n```\n\nThe eigenvalues are the speeds at which information propagates with. SymPy returns them as a\ndictionary, giving the multiplicity for each eigenvalue.\n\n\n```python\nA.eigenvals()\n```\n\nThe right eigenvectors are what SymPy gives natively. 
For a given eigenvalue, $\\lambda$, these \nsatisfy:\n \n$$A r = \\lambda r$$\n\n## Right Eigenvectors\n\n\n```python\nR = A.eigenvects() # this returns a tuple for each eigenvector with multiplicity -- unpack it\nr = []\nlam = []\nfor (ev, _, rtmp) in R:\n r.append(rtmp[0])\n lam.append(ev)\n \n# we can normalize them anyway we want, so let's make the first entry 1\nfor n in range(len(r)):\n v = r[n]\n r[n] = v/v[0]\n```\n\n### 0-th right eigenvector \n\n\n```python\nr[0]\n```\n\nthis corresponds to the eigenvalue\n\n\n```python\nlam[0]\n```\n\n### 1-st right eigenvector\n\n\n```python\nr[1]\n```\n\nthis corresponds to the eigenvalue\n\n\n```python\nlam[1]\n```\n\n### 2-nd right eigenvector\n\n\n```python\nr[2]\n```\n\nthis corresponds to the eigenvalue\n\n\n```python\nlam[2]\n```\n\nHere they are as a matrix, $R$, in order from smallest to largest eigenvalue\n\n\n```python\nR = zeros(3,3)\nR[:,0] = r[1]\nR[:,1] = r[0]\nR[:,2] = r[2]\nR\n```\n\n## Left Eigenvectors\n\nThe left eigenvectors satisfy:\n\n$$l A = \\lambda l$$\n\nSymPy doesn't have a method to get left eigenvectors directly, so we take the transpose of this expression:\n\n$$(l A)^\\intercal = A^\\intercal l^\\intercal = \\lambda l^\\intercal$$\n\nTherefore, the transpose of the left eigenvectors, $l^\\intercal$, are the right eigenvectors of transpose of $A$\n\n\n```python\nB = A.transpose()\nB\n```\n\n\n```python\nL = B.eigenvects()\nl = []\nlaml = []\nfor (ev, _, ltmp) in L:\n l.append(ltmp[0].transpose())\n laml.append(ev)\n \n```\n\nTraditionally, we normalize these such that $l^{(\\mu)} \\cdot r^{(\\nu)} = \\delta_{\\mu\\nu}$\n\n\n```python\nfor n in range(len(l)):\n if lam[n] == laml[n]:\n ltmp = l[n]\n p = ltmp.dot(r[n])\n l[n] = ltmp/p\n```\n\n### 0-th left eigenvector\n\n\n```python\nl[0]\n```\n\n### 1-st left eigenvector\n\n\n```python\nl[1]\n```\n\n### 2-nd left eigenvector\n\n\n```python\nl[2]\n```\n\n\n```python\n\n```\n\n# Entropy formulation\n\nhere we write the system in terms of $q_s = (\\rho, u, s)^\\intercal$, where the system is\n\n$${q_s}_t + A_s(q_s) {q_s}_x = 0$$\n\nand \n\n$$\nA_s = \\left (\\begin{matrix}u & \\rho & 0\\\\\n \\frac{c^{2}}{\\rho} & u & \\frac{p_{s}}{\\rho}\\\\\n 0 & 0 & u\\end{matrix}\\right)\n $$\n\n\n```python\nps = symbols('p_s')\n\nAs = Matrix([[u, rho, 0], [c**2/rho, u, ps/rho], [0, 0, u]])\nAs\n```\n\n\n```python\nAs.eigenvals()\n```\n\n\n```python\nR = As.eigenvects() # this returns a tuple for each eigenvector with multiplicity -- unpack it\nr = []\nlam = []\nfor (ev, _, rtmp) in R:\n r.append(rtmp[0])\n lam.append(ev)\n \n# we can normalize them anyway we want, so let's make the first entry 1\nfor n in range(len(r)):\n v = r[n]\n r[n] = v/v[0]\n```\n\n\n```python\nr[0], lam[0]\n```\n\n\n```python\nr[1], lam[1]\n```\n\n\n```python\nr[2], lam[2]\n```\n\n### left eigenvectors\n\n\n```python\nBs = As.transpose()\nL = B.eigenvects()\nl = []\nlaml = []\nfor (ev, _, ltmp) in L:\n l.append(ltmp[0].transpose())\n laml.append(ev)\n \n```\n\nnormalization\n\n\n```python\nfor n in range(len(l)):\n if lam[n] == laml[n]:\n ltmp = l[n]\n p = ltmp.dot(r[n])\n l[n] = ltmp/p\n```\n\n\n```python\nsimplify(l[0])\n```\n\n\n```python\nl[1]\n```\n\n\n```python\nl[2]\n```\n\n# 2-d system\n\n\n```python\nrho, u, v, c = symbols('rho u v c')\n\nA = Matrix([[u, rho, 0, 0], [0, u, 0, rho**-1], [0,0, u, 0], [0, c**2 * rho, 0, u]])\nA\n```\n\n\n```python\nA.eigenvals()\n```\n\n\n```python\nR = A.eigenvects() # this returns a tuple for each eigenvector with multiplicity -- unpack it\nr = []\nlam = []\nfor (ev, 
_, rtmp) in R:\n for rv in rtmp:\n r.append(rv)\n lam.append(ev)\n \n# we can normalize them anyway we want, so let's make the first entry 1\nfor n in range(len(r)):\n v = r[n]\n if not v[0] == 0:\n r[n] = v/v[0]\n```\n\n\n```python\nr[0], lam[0]\n```\n\n\n```python\nr[1], lam[1]\n```\n\n\n```python\nr[2], lam[2]\n```\n\n\n```python\nr[3], lam[3]\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "6b9d2b4a5b7cacb7dc42e502aa804425987403a4", "size": 58934, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "compressible/euler.ipynb", "max_stars_repo_name": "python-hydro/hydro_examples", "max_stars_repo_head_hexsha": "55b7750a7644f3e2187f7fe338b6bc1d6fb9c139", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 66, "max_stars_repo_stars_event_min_datetime": "2018-09-01T10:44:07.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-26T23:50:57.000Z", "max_issues_repo_path": "compressible/euler.ipynb", "max_issues_repo_name": "srinivasvl81/hydro_examples", "max_issues_repo_head_hexsha": "d1b8a5c98ce28ed4f8bac4d2a20d91a27355a21a", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "compressible/euler.ipynb", "max_forks_repo_name": "srinivasvl81/hydro_examples", "max_forks_repo_head_hexsha": "d1b8a5c98ce28ed4f8bac4d2a20d91a27355a21a", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 39, "max_forks_repo_forks_event_min_datetime": "2018-09-06T20:02:14.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-27T17:05:24.000Z", "avg_line_length": 47.7585089141, "max_line_length": 2408, "alphanum_fraction": 0.7130179523, "converted": true, "num_tokens": 1748, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9449947155793357, "lm_q2_score": 0.8757869851639066, "lm_q1q2_score": 0.8276140729530498}} {"text": "## Finding expressions for elastic moduli\n\nI want to find more expressions to fill in the gaps in [my matrix](http://www.subsurfwiki.org/wiki/Template:Elastic_modulus):\n\n\n```python\nfrom IPython.display import Image\nImage(\"data/moduli.png\")\n```\n\nWe can use [`sympy`](http://docs.sympy.org/dev/tutorial/basic_operations.html) to help!\n\n\n```python\nimport sympy\nsympy.__version__\n```\n\n\n\n\n '1.0'\n\n\n\n\n```python\nsympy.init_printing(use_latex='mathjax')\n```\n\nWe must define our symbols. 
We'll use `alpha` and `beta` for Vp and Vs.\n\n\n```python\nalpha, beta, gamma = sympy.symbols(\"alpha, beta, gamma\")\nlamda, mu, E, K, M, rho = sympy.symbols(\"lamda, mu, E, K, M, rho\")\nnu = sympy.symbols(\"nu\")\n```\n\n## Expression for Vp (alpha) in terms of K, E\n\n\n```python\nfrom sympy import sqrt\nalpha_expr = sqrt((mu * (E - 4*mu)) / (rho * (E - 3*mu)))\nalpha_expr\n```\n\n\n\n\n$$\\sqrt{\\frac{\\mu \\left(E - 4 \\mu\\right)}{\\rho \\left(E - 3 \\mu\\right)}}$$\n\n\n\n\n```python\nprint(sympy.latex(alpha_expr))\n```\n\n \\sqrt{\\frac{\\mu \\left(E - 4 \\mu\\right)}{\\rho \\left(E - 3 \\mu\\right)}}\n\n\n\n```python\nmu_expr = (3 * K * E) / (9 * K - E)\n```\n\n\n```python\nsubs = alpha_expr.subs(mu, mu_expr)\nsubs\n```\n\n\n\n\n$$\\sqrt{3} \\sqrt{\\frac{E K \\left(- \\frac{12 E K}{- E + 9 K} + E\\right)}{\\rho \\left(- E + 9 K\\right) \\left(- \\frac{9 E K}{- E + 9 K} + E\\right)}}$$\n\n\n\n\n```python\nprint(sympy.latex(subs))\n```\n\n \\sqrt{3} \\sqrt{\\frac{E K \\left(- \\frac{12 E K}{- E + 9 K} + E\\right)}{\\rho \\left(- E + 9 K\\right) \\left(- \\frac{9 E K}{- E + 9 K} + E\\right)}}\n\n\n\n```python\nfrom sympy import simplify\nsimplify(subs)\n```\n\n\n\n\n$$\\sqrt{3} \\sqrt{\\frac{K \\left(E + 3 K\\right)}{\\rho \\left(- E + 9 K\\right)}}$$\n\n\n\n\n```python\nprint(sympy.latex(simplify(subs)))\n```\n\n \\sqrt{3} \\sqrt{\\frac{K \\left(E + 3 K\\right)}{\\rho \\left(- E + 9 K\\right)}}\n\n\n## Expression for Vs (beta) in terms of K, E\n\n\n```python\nbeta_expr = sqrt(mu/rho)\nsimplify(beta_expr.subs(mu, mu_expr))\n```\n\n\n\n\n$$\\sqrt{3} \\sqrt{- \\frac{E K}{\\rho \\left(E - 9 K\\right)}}$$\n\n\n\n\n```python\nsimpl = simplify(beta_expr.subs(mu, mu_expr))\n```\n\n\n```python\nprint(sympy.latex(simpl))\n```\n\n \\sqrt{3} \\sqrt{- \\frac{E K}{\\rho \\left(E - 9 K\\right)}}\n\n\n## Expression for Vp/Vs (gamma) in terms of K, E\n\n\n```python\ngamma_expr = sqrt((K + (4*mu/3)) / mu)\nsimpl = simplify(gamma_expr.subs(mu, mu_expr))\nsimpl\n```\n\n\n\n\n$$\\sqrt{\\frac{1}{E} \\left(E + 3 K\\right)}$$\n\n\n\n\n```python\nprint(sympy.latex(simpl))\n```\n\n \\sqrt{\\frac{1}{E} \\left(E + 3 K\\right)}\n\n\n## Expression for Vp/Vs (gamma) in terms of E, mu\n\nNot totally sure why I have to use that hacky way to get the terms to cnacel properly. 
\n\n\n```python\ngamma_emu_expr = alpha_expr / beta_expr\nsimpl = sqrt(gamma_emu_expr**2)\nsimpl\n```\n\n\n\n\n$$\\sqrt{\\frac{E - 4 \\mu}{E - 3 \\mu}}$$\n\n\n\n\n```python\nprint(sympy.latex(simpl))\n```\n\n \\sqrt{\\frac{E - 4 \\mu}{E - 3 \\mu}}\n\n\n## Expression for Vp/Vs (gamma) in terms of PR, mu\n\n\n```python\ne_expr = 2 * mu * (1 + nu)\n```\n\n\n```python\nsimplify(sqrt(gamma_emu_expr.subs(E, e_expr)**2))\n```\n\n\n\n\n$$\\sqrt{2} \\sqrt{\\frac{\\nu - 1}{2 \\nu - 1}}$$\n\n\n\n\n```python\nprint(sympy.latex(simplify(sqrt(gamma_emu_expr.subs(E, e_expr)**2))))\n```\n\n \\sqrt{2} \\sqrt{\\frac{\\nu - 1}{2 \\nu - 1}}\n\n\n## Expression for Vp in terms of E, lamda\n\n\n```python\nvp_expr = sympy.Eq(alpha, sqrt(mu*(E - 4*mu) / (rho*(E - 3*mu))))\nvp_expr\n```\n\n\n\n\n$$\\alpha = \\sqrt{\\frac{\\mu \\left(E - 4 \\mu\\right)}{\\rho \\left(E - 3 \\mu\\right)}}$$\n\n\n\n\n```python\nalpha_expr = sqrt(mu*(E - 4*mu) / (rho*(E - 3*mu)))\n```\n\n\n```python\nmu_expr = (rho * alpha**2 - lamda)/2\n```\n\n\n```python\nnew_expr = simplify(vp_expr.subs(mu, mu_expr))\n```\n\n\n```python\nnew_expr.subs(alpha, alpha_expr)\n```\n\n\n\n\n$$\\sqrt{\\frac{\\mu \\left(E - 4 \\mu\\right)}{\\rho \\left(E - 3 \\mu\\right)}} = \\sqrt{\\frac{\\left(- \\lambda + \\frac{\\mu \\left(E - 4 \\mu\\right)}{E - 3 \\mu}\\right) \\left(E + 2 \\lambda - \\frac{2 \\mu \\left(E - 4 \\mu\\right)}{E - 3 \\mu}\\right)}{\\rho \\left(2 E + 3 \\lambda - \\frac{3 \\mu \\left(E - 4 \\mu\\right)}{E - 3 \\mu}\\right)}}$$\n\n\n\nOK, I give up. Trying Wolfram Alpha...\n\n\n```python\nvp_wolfram = simplify(sqrt(-lamda/rho+sqrt(9*lamda**2+E**2+2*lamda*E)/rho+E/rho)/sqrt(2))**2\nsqrt(vp_wolfram)\n```\n\n\n\n\n$$\\frac{\\sqrt{2}}{2} \\sqrt{\\frac{1}{\\rho} \\left(E - \\lambda + \\sqrt{E^{2} + 2 E \\lambda + 9 \\lambda^{2}}\\right)}$$\n\n\n\nThat's better!\n\n\n```python\nprint(sympy.latex(sqrt(vp_wolfram)))\n```\n\n \\frac{\\sqrt{2}}{2} \\sqrt{\\frac{1}{\\rho} \\left(E - \\lambda + \\sqrt{E^{2} + 2 E \\lambda + 9 \\lambda^{2}}\\right)}\n\n\nAnother for Wolfram Alpha: Vs\n\n solve V = sqrt((a/(2*((y-2 r V^2)/(2 r V^2))*r)) - (a/r)) for V\n\n\n```python\nsimplify(sqrt(sqrt(9*lamda**2+2*lamda*E+E**2)/rho-(3*lamda)/rho+E/rho)/2)**2\n```\n\n\n\n\n$$\\frac{1}{4 \\rho} \\left(E - 3 \\lambda + \\sqrt{E^{2} + 2 E \\lambda + 9 \\lambda^{2}}\\right)$$\n\n\n\nAnother for Wolfram Alpha: Gamma, or Vp/Vs\n\n solve G = sqrt(((4/3)*a - 2*((y*G^2 - y)/3))/(a - ((y*G^2 - y)/3))) for G\n\n\n```python\nsimplify(sqrt(sqrt(9*lamda**2+2*lamda*E+E**2)/E+(3*lamda)/E+3)/sqrt(2))**2\n```\n\n\n\n\n$$\\frac{1}{2 E} \\left(3 E + 3 \\lambda + \\sqrt{E^{2} + 2 E \\lambda + 9 \\lambda^{2}}\\right)$$\n\n\n\n## Expression for Vs (beta) in terms of nu, K\n\n\n```python\nmu_expr = 3*K*(1 - 2*nu)/(2 + 2*nu)\nsimpl = simplify((sqrt(mu/rho)).subs(mu,mu_expr)**2)\nsimpl\n```\n\n\n\n\n$$- \\frac{3 K \\left(2 \\nu - 1\\right)}{2 \\rho \\left(\\nu + 1\\right)}$$\n\n\n\n\n```python\nprint(sympy.latex(simpl))\n```\n\n - \\frac{3 K \\left(2 \\nu - 1\\right)}{2 \\rho \\left(\\nu + 1\\right)}\n\n\n## Expression for Vp (alpha) in terms of nu, K\n\n\n```python\nvp_expr = sqrt(lamda * (1 - nu) / (rho * nu))\nl_expr = 3 * K * nu / (1 + nu)\nvp_expr.subs(lamda, l_expr)\n```\n\n\n\n\n$$\\sqrt{3} \\sqrt{\\frac{K \\left(- \\nu + 1\\right)}{\\rho \\left(\\nu + 1\\right)}}$$\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "a982a074def8e388ff2e03b008ce30cbdd1560fa", "size": 164165, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks_dev/Elastic_modulus_expressions.ipynb", 
"max_stars_repo_name": "mycarta/in-bruges", "max_stars_repo_head_hexsha": "5aff5111a61f86145c688c64b7b032b8058cbe56", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2020-02-27T17:13:33.000Z", "max_stars_repo_stars_event_max_datetime": "2021-03-07T23:13:48.000Z", "max_issues_repo_path": "notebooks_dev/Elastic_modulus_expressions.ipynb", "max_issues_repo_name": "mycarta/in-bruges", "max_issues_repo_head_hexsha": "5aff5111a61f86145c688c64b7b032b8058cbe56", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks_dev/Elastic_modulus_expressions.ipynb", "max_forks_repo_name": "mycarta/in-bruges", "max_forks_repo_head_hexsha": "5aff5111a61f86145c688c64b7b032b8058cbe56", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2019-03-03T23:42:16.000Z", "max_forks_repo_forks_event_max_datetime": "2019-06-01T21:01:06.000Z", "avg_line_length": 187.189281642, "max_line_length": 145012, "alphanum_fraction": 0.8911826516, "converted": true, "num_tokens": 2191, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9449947117065458, "lm_q2_score": 0.8757869803008764, "lm_q1q2_score": 0.827614064965773}} {"text": "# HW 03\n\n\n\n\n1. The lowest and highest bins are filled. We expect clipping in the data. \n\n2. Increase sampling frequency till spectra does not change anymore.\n\n3. Here, we want the resolution $\\Delta V$ to be 0.1% of the range (full scale: $FS = V_{max} - V_{min}$).\n\n\\begin{align}\n\\frac{\\Delta V}{FS} & = \\frac{1}{V_{max} - V_{min}} \\cdot \\frac{V_{max} - V_{min}}{2^{N}} \\\\\n & = \\frac{1}{2^{N}} \\lt 0.001\n\\end{align}\n\nThis implies $N = 10$ bit, ($2^{10} = 1,024$).\n\n4. The signal is:\n\n\\begin{align}\ny(t) = 1.1 + 3.9 \\sin(20,000 \\pi t)\n\\end{align}\nIts amplitude will be between 2.8 and 5 V, and the frequency is $f = 10,000$ Hz. We need to make sure the DAQ system has a high enough sampling rate to prevent aliasing and large enough input rate to prevent clipping. Then we will select the DAQ system with the best resolution (or the smallest quantization error).\n\n(a): sampling rate is too low to satisfy Nyquiest criterion\n\n(b): input range is too small, we would like to have a safety margin of 20-40 %.\n\n(c) \\& (d): sampling rate and input range are ok. \n\n\n```python\nQ_c = 20/2**(8+1)\n\nQ_d = 20/2**(16+1)\n\nprint('Q (c) = ', Q_c, ' V, Q (d) = ', Q_d, ' V')\n```\n\n Q (c) = 0.0390625 V, Q (d) = 0.000152587890625 V\n\n\nCase (d) is the best design.\n\n5. \n\\begin{align}\ny(t) = 0.03 + 0.05 \\cos (1,000 \\pi t)\n\\end{align}\n\nFirst, we select the gain: Voltage range: $A_{PT} = 0.02 - 0.08$ V. The smallest range of $\\pm 0.01$ V is the best and has adequate safety margin to guaranty that there will be not clipping.\n\nThen, we select the sampling rate: The signal is periodic and we will optimize the frequency resolution to make sure we do not have aliazing. The signal frequency is $f = 500$ Hz, to satisfy Nyquiest criterion, we would a sample rate $f_s > 2\\cdot f > 1,000$ Hz. An appropriate sampling frequency would be in the range $f_s = [1,024 - 4,000]$ Hz.\n\n6. (a) The ideal input range on the DAQ would be: $\\pm 60-80$ mV. 
We will use an amplifier with an output voltage: $A_{amp}\\pm 80$ mV for the analysis here.\n\nThe gain should be: $G = A_{amp} / A_{PT} = 80/8 = 10$\n\n(b) Inverting amplifiers are less susceptible to electromagnetic noise, so we should build an amplifier stage based on inverting amplifiers. However, inverting amplifiers are susceptible to impedance loading, so it would be safe to use a buffer on the input of the amplifier stage.\n\n7. (a) The desired bandwidth is $f = 100,000$ Hz. The amplifier gain bandwidth product is $GBP = 1 MHz$. Remembering the definition of $GBP$:\n\n\\begin{align*}\nGBP & = G_{theoretical} \\times f_c\n\\end{align*}\nSo for us the $f_c = f = 100,000$ kHz. This forces a theoretical gain per amplifier stage that should not exceed $G_{theoretical} = GBP / f_c = 10^6/10^5 = 10$. \n\nSo for a total desired gain of $G=100$, we should use at least two stages of $G_{stage} = G_{theoretical} = 10$ in series.\n\n(b) You have two options to build your circuit:\n\n1- Use two non-inverting amplifiers of gain $G=10$ in series. Non-inverting amplifiers have infinite input impedance and not susceptible to impedance loading.\n\n2- Use two inverting amplifiers of gain $G=-10$ in series. Now you have have to worry about impedance loading, so you should put a buffer first, then the two inverting amplifiers.\n\n\n8. For a Butterworth high-pass filter of order $n$ the gain is:\n\n\\begin{align*}\nG = \\frac{1}{\\sqrt{1+\\left( \\frac{f_{cutoff}}{f} \\right)^{2n} }}\n\\end{align*}\n\nHere, $f$ is the noise: $f = 60$ Hz, the cutoff frequency is $f_{cutoff} = 300$ Hz.\n\n\n```python\nimport numpy\n\nf = 60 # Hz\nf_c = 300 # Hz\n\nG_1 = 1/numpy.sqrt(1+(f_c/f)**(2*1))\nG_2 = 1/numpy.sqrt(1+(f_c/f)**(2*2))\nG_4 = 1/numpy.sqrt(1+(f_c/f)**(2*4))\n\nprint('the gains in percentage are:')\n\nprint(\"order 1: G = %2.4f \" % (G_1*100), '%')\nprint(\"order 2: G = %2.4f \" % (G_2*100), '%')\nprint(\"order 4: G = %2.4f \" % (G_4*100), '%')\n\n# convert to dB:\nG_1dB = 20*numpy.log10(G_1)\nG_2dB = 20*numpy.log10(G_2)\nG_4dB = 20*numpy.log10(G_4)\n\nprint('the gains in dB are: ')\n\nprint(\"order 1: G = %2.4f \" % (G_1dB), 'dB')\nprint(\"order 2: G = %2.4f \" % (G_2dB), 'dB')\nprint(\"order 4: G = %2.4f \" % (G_4dB), 'dB')\n\n\n```\n\n the gains in percentage are:\n order 1: G = 19.6116 %\n order 2: G = 3.9968 %\n order 4: G = 0.1600 %\n the gains in dB are: \n order 1: G = -14.1497 dB\n order 2: G = -27.9657 dB\n order 4: G = -55.9176 dB\n\n\n9. The signal could be written analytically as:\n\\begin{align}\ny(t) = 1.50 \\sin(40 \\pi t) + 0.20 \\sin(20,000 \\pi t)\n\\end{align}\nSo the carrier frequency is $f_{carrier} = 20$ Hz, and the noise frequency is $f_{noise} = 10,000$ Hz. We acquire $N=2^{12}$ points. The DAQ has $n=16$ bits.\nWe also have $f_s = 30,000$ Hz.\n\n(a) The frequency resolution is: $\\Delta f = f_s/N$.\n\nThe quantization error is: $Q = (V_{max}-V_{min})/2^{n+1}$\n\n\n```python\nf_s = 30000 # Hz\nN = 2**12 # points\nn = 16 # # bits\nQ = 20/2**(n+1) # quantization error in V\n\nprint(\"\\u0394 f = %4.4f\" %(f_s/N), 'Hz')\nprint(\"Q = %1.4f\" %(Q*1000), 'mV')\n```\n\n \u0394 f = 7.3242 Hz\n Q = 0.1526 mV\n\n\n(b) This takes 3 steps.\n\n_Step 1_ Select the type of filter and cutoff frequency, $f_c$: Here we want to remove high frequency noise, so we will use a low-pass filter. 
As a rule of thumb, it should be at least $10>f_{carrier}$ and $10 0$, and can take on any positive integer value, and its density tends to have a \"hump\" character, where it peaks around the parameter value $\\lambda$. In math, its density is given by\n\n$$p(x) = \\frac{\\lambda^x e^{-\\lambda}}{x!}.$$\n\nImplementing a sampler for $\\text{Poisson}(\\lambda)$ is a bit different. Here I choose to implement `poisson` using a [sampling algorithm](https://en.wikipedia.org/wiki/Poisson_distribution#Generating_Poisson-distributed_random_variables) created by Donald Knuth. The idea here is to make use of a fact about Poisson distributions that states the time between new events is exponentially distributed. (More explanation to be added later.)\n\nNote: This algorithm only works well when `lambda_` is not too large (a rule of thumb is under `lambda_=30`). This is due to the fact that the inner loop depends on `np.exp(-lambda_)` taking on a non-zero value, which is impractical when `lambda_` is large due to numerical underflow. For larger values of `lambda_` the algorithm should be tweaked. See [here](https://www.johndcook.com/blog/2010/06/14/generating-poisson-random-values/) for an example of how to do this.\n\nThe expected statistics for $\\text{Poisson}(\\lambda)$ are below. How well do these values match the ones in the example below?\n\n\\begin{equation}\n\\begin{aligned}\n\\text{min}&=0, \\\\\n\\text{max}&=\\infty, \\\\\n\\text{mean}&= \\lambda, \\\\\n\\text{std dev}&=\\sqrt{\\lambda}.\n\\end{aligned}\n\\end{equation}\n\n\n```python\ndef poisson(lambda_=1, size=1, seed=None):\n \"\"\"Generates Poisson(lambda) samples using Knuth algorithm\"\"\"\n array = []\n L = np.exp(-lambda_)\n for j in range(size):\n x = 0\n p = 1\n while p > L:\n x += 1\n p = p * rand(seed=seed)\n array.append(x - 1)\n seed += 1\n return np.array(array)\n\nsize = 1000\nsamples = poisson(size=size, seed=seed, lambda_=10)\nsummarize(samples, title=f'poisson(lambda=10)')\n```\n\n### Exponential Distribution\n\nThe exponential distribution is often used to model arrival times of processes following a Poisson distribution. Given that it's used to model times, an exponential random variable can take on any positive real value. It's parametrized by a rate parameter $\\lambda > 0$ that determines the (inverse) expected arrival time. Its density is given by\n\n$$p(x) = \\lambda e^{-\\lambda x}.$$\n\nAn $\\text{Exponential}(\\lambda)$ random sample can easily be generated using inverse transform sampling. The CDF of $\\text{Exponential}(\\lambda)$ is \n$$F(x) = 1 - e^{-\\lambda x},$$\nwhich means its inverse CDF is \n$$F^{-1}(u) = -\\frac{1}{\\lambda} \\log(1-u).$$\nThus, we merely have to generate a `rand()` and apply the inverse CDF to it to get an exponential random sample. This is implemented as `exponential` below, and takes in the rate parameter as `lambda_`. I merely sample a uniform sample via `rand()` and apply the above inverse CDF transformation to get an exponential sample.\n\nThe expected statistics for $\\text{Exponential}(\\lambda)$ are below. 
How well do these values match the ones in the example below?\n\n\\begin{equation}\n\\begin{aligned}\n\\text{min}&=0, \\\\\n\\text{max}&=\\infty, \\\\\n\\text{mean}&= \\frac{1}{\\lambda}, \\\\\n\\text{std dev}&=\\sqrt{\\frac{1}{\\lambda^2}}.\n\\end{aligned}\n\\end{equation}\n\n\n```python\ndef exponential(lambda_=1, size=1, seed=None):\n \"\"\"Generates Exp(lambda) samples using Inverse Transform Sampling\"\"\"\n u = rand(seed=seed, size=size)\n x = - (1 / lambda_) * np.log(1 - u)\n return x\n\nsize = 1000\nsamples = exponential(size=size, seed=seed, lambda_=1)\nsummarize(samples, title=f'exponential(lambda=1)')\n```\n\n### Gaussian / Normal Distribution\n\nThe Gaussian distribution (aka the Normal distribution) is without a doubt the most important distribution out there. It's used to model all kinds of things: measurement errors, the parameter values in a neural net, the velocities of a gas in a container, approximating other distributions via the central limit theorem, and much more. This distribution takes on the characteristic \"bell curve\", where a mean value is centered at the top of the bell, and values away from the mean become less and less likely to get sampled. The Gaussian, represented as $\\mathcal{N}(\\mu, \\sigma^2)$, takes two parameters $\\mu$ and $\\sigma$, each of which can take on any real value. Its density is given by\n\n$$ p(x) = \\frac{1}{\\sqrt{2 \\pi \\sigma^2}} e^{-\\frac{(x-\\mu)^2}{\\sigma^2}}.$$\n\nGenerating samples from a Gaussian is most frequently done using the [Box Muller Transform](https://en.wikipedia.org/wiki/Box%E2%80%93Muller_transform). The idea is to apply a transformation to 2 uniform random variables $U_1, U_2$:\n\n$$ Z_0 = \\sqrt{-2 \\log U_1} \\cos (2 \\pi U_2)$$\n$$ Z_1 = \\sqrt{-2 \\log U_1} \\sin (2 \\pi U_2).$$\n\nIt turns out that both $Z_0$ and $Z_1$ are (independent) $\\mathcal{N}(0, 1)$ random variables. It's a simple transformation to do, but may take a bit to wrap your head around as to why it works. We can then recover any $\\mathcal{N}(\\mu, \\sigma^2)$ variable by applying the transformation\n\n$$ X = \\sigma Z + \\mu.$$\n\nI have this coded up in the `normal` function below, which takes as input the 2 parameters `mu` and `sigma`. The two uniform samples `u1, u2` are generated by `rand()`. I only return the Gaussian variable generated by `z0`, but code up `z1` anyway so you can see it.\n\nThe expected statistics for $\\mathcal{N}(\\mu, \\sigma^2)$ are below. 
How well do the sample statistics match?\n\n\\begin{equation}\n\\begin{aligned}\n\\text{min}&=-\\infty, \\\\\n\\text{max}&=\\infty, \\\\\n\\text{mean}&= \\mu, \\\\\n\\text{std dev}&=\\sigma.\n\\end{aligned}\n\\end{equation}\n\n\n```python\ndef normal(mu=0, sigma=1, size=1, seed=None):\n \"\"\"Generates N(mu, sigma) samples using Box-Muller Transform\"\"\"\n u1 = rand(seed=seed, size=size)\n u2 = rand(seed=seed+1, size=size)\n z0 = np.sqrt(-2 * np.log(u1)) * np.cos(2 * np.pi * u2)\n z1 = np.sqrt(-2 * np.log(u1)) * np.sin(2 * np.pi * u2)\n return z0 * sigma + mu\n\nsize = 1000\nsamples = normal(size=size, seed=seed, mu=0, sigma=1)\nsummarize(samples, title=f'normal(mu=0, sigma=1)')\n```\n\n## References\n\n[1] https://towardsdatascience.com/how-to-generate-random-variables-from-scratch-no-library-used-4b71eb3c8dc7\n", "meta": {"hexsha": "868083b6cb00147389e9b5e28784e49941949d0e", "size": 119227, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/random.ipynb", "max_stars_repo_name": "rkingery/ml_tutorials", "max_stars_repo_head_hexsha": "9532fa5f1e31f6928c823de04d35dcb768fb4d5c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 12, "max_stars_repo_stars_event_min_datetime": "2018-06-22T00:58:53.000Z", "max_stars_repo_stars_event_max_datetime": "2021-01-15T10:26:34.000Z", "max_issues_repo_path": "notebooks/random.ipynb", "max_issues_repo_name": "rkingery/ml_tutorials", "max_issues_repo_head_hexsha": "9532fa5f1e31f6928c823de04d35dcb768fb4d5c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/random.ipynb", "max_forks_repo_name": "rkingery/ml_tutorials", "max_forks_repo_head_hexsha": "9532fa5f1e31f6928c823de04d35dcb768fb4d5c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2018-07-12T17:20:28.000Z", "max_forks_repo_forks_event_max_datetime": "2021-06-25T07:46:21.000Z", "avg_line_length": 122.409650924, "max_line_length": 8108, "alphanum_fraction": 0.8431143952, "converted": true, "num_tokens": 6830, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9219218284193597, "lm_q2_score": 0.8976952941600964, "lm_q1q2_score": 0.827604886955531}} {"text": "sergazy.nurbavliyev@gmail.com \u00a9 2021\n\n## Minimum of two random variables\n\nQuestion: Assume we have $X \\sim Uniform(0, 1)$ and $Y \\sim Uniform(0,1)$ and independent of each other. What is the expected value of the minimum of $X$ and $Y$?\n\n\n### Intuition\n\nBefore going to show the exact theoretical result, we can think heuristically,we have two uniform random variables that are independent of each other, we would expect two of these random variables to divide the interval (0,1) equally into 3 equal subintervals. Then the minimum would occur at 1/3 and the other one (which is maximum in this case) occurs at 2/3. We can even generalize our intuition for $n$ independent uniform random variables. In that case minimum would occur at $1/(n+1)$ and maximum would occur at $n/(n+1)$. Now let us verify intution theoretically. \n\n### Theoritical result\n\nLet $Z$ be minimum of X and Y. We write this as $Z=\\min(X,Y)$. 
\n\\begin{equation}\n\\mathbb{P}(Z\\leq z)= \\mathbb{P}(\\min(X,Y)\\leq z) =1-\\mathbb{P}(\\min(X,Y)\\geq z)=1-\\mathbb{P}(X> z, Y>z)\n\\end{equation}\nSince our distribution is between 0 and 1 the following is true for uniform distribution\n\\begin{equation}\n\\mathbb{P}(X\\leq z)= z \\text{ and } \\mathbb{P}(X> z)=1- z \n\\end{equation}\nAlso same goes for $Y$. Now since they are independent we have\n\\begin{equation}\n\\mathbb{P}(X> z, Y>z)=\\mathbb{P}(X> z)\\mathbb{P}(Y>z)=(1-z)^{2}\n\\end{equation}\nThen equation (1) becomes\n\\begin{equation}\n\\mathbb{P}(Z\\leq z)=1-\\mathbb{P}(X> z, Y>z)=1-(1-z)^{2}\n\\end{equation}\nWe just calculated cumulative distribution function of $z$. Usually denoted as \n\\begin{equation}\nF_{Z}(z)= \\mathbb{P}(Z\\leq z)\n\\end{equation}\nIf we take derivative of this $ F_{Z}(z)$, we will get density function of z. In this case it would be\n\\begin{equation}\nF_{Z}'(z)=f_Z(z)=2(1-z)\n\\end{equation}\nAs last part we would take integral of $zf_Z(z)$ between 0 and 1 to find an expected value of minimum of two uniform random variables.\n\\begin{equation}\n\\mathbb{E}[Z]=\\int_{0}^1 zf_Z(z)dz=\\int_{0}^1 2z(1-z)dz=2\\left(\\frac{1}{2}-\\frac{1}{3}\\right)=\\frac{1}{3}\n\\end{equation}\n\n\n```python\n# If you are allowed to use any programming language then you can simulate.\nimport numpy as np\nx = np.random.uniform(0,1,100000)\ny = np.random.uniform(0,1,100000)\nz =np.minimum(x,y)\nu =np.maximum(x,y)\n```\n\n\n```python\nnp.mean(z), np.mean(u) # as you can see z is very close to 1/3\n```\n\n\n\n\n (0.33389155844548174, 0.6670118602826249)\n\n\n\n### To see the histogram of Z and U\n\n\n```python\nimport matplotlib.pyplot as plt\ncount, bins, ignored = plt.hist(z, 100, density=True)\nplt.plot(bins, np.ones_like(bins), linewidth=2, color='r')\nplt.show()\n```\n\n\n```python\nimport matplotlib.pyplot as plt\ncount, bins, ignored = plt.hist(u, 100, density=True)\nplt.plot(bins, np.ones_like(bins), linewidth=2, color='r')\nplt.show()\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "7792f22eb5df8b0c22b46aea714021ab8940af42", "size": 20395, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Minimim of two random variables March 3 2021.ipynb", "max_stars_repo_name": "sernur/probability_stats_interveiw_questions", "max_stars_repo_head_hexsha": "3144dae00fa83c82ff4e1f7668828349270a1937", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 7, "max_stars_repo_stars_event_min_datetime": "2021-03-04T06:48:28.000Z", "max_stars_repo_stars_event_max_datetime": "2021-04-19T10:04:24.000Z", "max_issues_repo_path": "Minimim of two random variables March 3 2021.ipynb", "max_issues_repo_name": "sernur/probability_stats_interveiw_questions", "max_issues_repo_head_hexsha": "3144dae00fa83c82ff4e1f7668828349270a1937", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-03-05T22:00:43.000Z", "max_issues_repo_issues_event_max_datetime": "2021-03-05T22:00:43.000Z", "max_forks_repo_path": "Minimim of two random variables March 3 2021.ipynb", "max_forks_repo_name": "sernur/probability_stats_interveiw_questions", "max_forks_repo_head_hexsha": "3144dae00fa83c82ff4e1f7668828349270a1937", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2021-03-04T05:02:00.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-16T01:13:40.000Z", "avg_line_length": 80.9325396825, "max_line_length": 6900, "alphanum_fraction": 0.8262809512, "converted": 
true, "num_tokens": 937, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9343951643678382, "lm_q2_score": 0.8856314783461303, "lm_q1q2_score": 0.8275297707785639}} {"text": "```python\nfrom scipy import *\nimport sympy as sym\nfrom approx1D import *\nimport matplotlib.pyplot as plt\n\nNs = [2, 4, 8, 16]\nTaylor = [0.0983, 0.00263, 7.83e-07, 3.57e-10]\nSinusoidal = [0.0027, 0.00061, 0.00012, 2.17e-05]\nBernstein = [0.0021, 4.45e-05, 8.73e-09, 4.49e-15]\nLagrange = [0.0021, 4.45e-05, 8.73e-09, 2.45e-12]\n\nx = sym.Symbol('x')\npsi = [1, x]\n\nu, c = regression_with_noise(log2(Sinusoidal), psi, log2(Ns))\nprint((\"estimated model for sine: %3.2e*N**(%3.2e)\" % \\\n (2**(c[0]), c[1])))\n\n# check the numbers estimated by the model by manual inspection\nfor N in Ns:\n print((2**c[0] * N **c[1]))\n\nX = log2(Ns)\nU = sym.lambdify([x], u)\nUU = U(X)\n\nplt.plot(X, log2(Sinusoidal))\nplt.plot(X, UU)\nplt.legend([\"data\", \"model\"])\nplt.show()\n\nu, c = regression_with_noise(log(Bernstein), psi, Ns)\nprint((\"estimated model for Bernstein: %3.2e*exp(%3.2e*N)\" % (exp(c[0]), c[1])))\n\n# check the numbers estimated by the model by manual inspection\nfor N in Ns:\n print((exp(c[0]) * exp(N * c[1])))\n\nX = Ns\nU = sym.lambdify([x], u)\nUU = U(array(X))\n\nplt.plot(X, log(Bernstein))\nplt.plot(X, UU)\nplt.legend([\"data\", \"model\"])\nplt.show()\n\nCPU_Taylor = [0.0123, 0.0325, 0.108, 0.441]\nCPU_sine = [0.0113, 0.0383, 0.229, 1.107]\nCPU_Bernstein = [0.0384, 0.1100, 0.3368, 1.187]\nCPU_Lagrange = [0.0807, 0.3820, 2.5233, 26.52]\n\nplt.plot(log2(Ns), log2(CPU_Taylor))\nplt.plot(log2(Ns), log2(CPU_sine))\nplt.plot(log2(Ns), log2(CPU_Bernstein))\nplt.plot(log2(Ns), log2(CPU_Lagrange))\nplt.legend([\"Taylor\", \"sine\", \"Bernstein\", \"Lagrange\"])\nplt.show()\n\n```\n", "meta": {"hexsha": "54acb7f807b081a6047f2eaf081e92e536ea8429", "size": 2647, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Data Science and Machine Learning/Machine-Learning-In-Python-THOROUGH/EXAMPLES/FINITE_ELEMENTS/INTRO/SRC/11_CONVERGENCE_RATE_LOCAL.ipynb", "max_stars_repo_name": "okara83/Becoming-a-Data-Scientist", "max_stars_repo_head_hexsha": "f09a15f7f239b96b77a2f080c403b2f3e95c9650", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Data Science and Machine Learning/Machine-Learning-In-Python-THOROUGH/EXAMPLES/FINITE_ELEMENTS/INTRO/SRC/11_CONVERGENCE_RATE_LOCAL.ipynb", "max_issues_repo_name": "okara83/Becoming-a-Data-Scientist", "max_issues_repo_head_hexsha": "f09a15f7f239b96b77a2f080c403b2f3e95c9650", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Data Science and Machine Learning/Machine-Learning-In-Python-THOROUGH/EXAMPLES/FINITE_ELEMENTS/INTRO/SRC/11_CONVERGENCE_RATE_LOCAL.ipynb", "max_forks_repo_name": "okara83/Becoming-a-Data-Scientist", "max_forks_repo_head_hexsha": "f09a15f7f239b96b77a2f080c403b2f3e95c9650", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2022-02-09T15:41:33.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-11T07:47:40.000Z", "avg_line_length": 28.7717391304, "max_line_length": 91, "alphanum_fraction": 0.5013222516, "converted": true, "num_tokens": 656, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9343951643678382, "lm_q2_score": 0.8856314753275019, "lm_q1q2_score": 0.8275297679579722}} {"text": "```python\nimport numpy as np\nimport scipy as sp\nfrom matplotlib import pyplot as plt\nfrom diff_exp import *\n```\n\n# Linear Systems of Differential Equations\n\nLet $y = (y_1(t),..., y_n(t))$ and $M = M(t)$ matrix with dimensions $n \\times n$. The linear system of differential equations:\n\n\\begin{equation}\n\\frac{dy}{dt} = M y\n\\end{equation}\n\nwith initial point values $y(t = t_0) = y_0$ can be solved nummericaly using integrators (e.g. Runge-Kutta). But it has analytical solution using matrix exponent:\n\n\\begin{equation}\ny(t) = e^{\\int_{t_0}^t M(t) dt} y_0\n\\end{equation}\n\nThe simple script *diff_exp.py* implements both ways for obtaining solutions, and here is the comparison of the so solutions.\n\nFirst two examples cover the case where $M$ is constant matrix.\n\n\n```python\nM = np.array([[0,1], [1,-2]])\ny0 = np.array([[1],[2]])\nti = 2\ntf = 5\ndt = 0.01\n```\n\n\n```python\ny, t = sol_lin_system_ode_c(y0, ti, tf, dt, M)\n```\n\n\n```python\nplt.plot(t, y[0])\nplt.plot(t, y[1])\nplt.show()\n```\n\n\n```python\ny0 = np.array([1, 2])\nx, t = lin_system_ivp_c(y0, ti, tf, dt, M)\n```\n\n\n```python\nplt.plot(t, x[0])\nplt.plot(t, x[1])\nplt.show()\n```\n\n\n```python\nM = np.array([[0, 1, 0, -1], [1,-2, 5, 0], [0, 1, 2, 3], [-1, -2, 0, -5]])\ny0 = np.array([[1],[2],[5],[0]])\nti = 2\ntf = 5\ndt = 0.01\n```\n\n\n```python\ny, t = sol_lin_system_ode_c(y0, ti, tf, dt, M)\n```\n\n\n```python\nplt.plot(t, y[0])\nplt.plot(t, y[1])\nplt.plot(t, y[2])\nplt.plot(t, y[3])\nplt.show()\n```\n\n\n```python\ny0 = np.array([1, 2, 5, 0])\nx, t = lin_system_ivp_c(y0, ti, tf, dt, M)\n```\n\n\n```python\nplt.plot(t, x[0])\nplt.plot(t, x[1])\nplt.plot(t, x[2])\nplt.plot(t, x[3])\nplt.show()\n```\n\nLet's compare two solutions:\n\n\n```python\nplt.plot(t, x[0]-y[0])\nplt.plot(t, x[1]-y[1])\nplt.plot(t, x[2]-y[2])\nplt.plot(t, x[3]-y[3])\nplt.show()\n```\n\nAnd the relative errors: (*Warning* division with $0$ possible)\n\n\n```python\nplt.plot(t, np.abs(x[0]-y[0])/np.abs(y[0]))\nplt.plot(t, np.abs(x[1]-y[1])/np.abs(y[1]))\nplt.plot(t, np.abs(x[2]-y[2])/np.abs(y[2]))\nplt.plot(t, np.abs(x[3]-y[3])/np.abs(y[3]))\nplt.show()\n```\n\nNext example covers the case where $M$ is function of $t$.\n\n\n```python\nM = lambda t: np.array([[-t, 1], [np.exp(-t), -t**2]])\ny0 = np.array([[1], [3]])\nti = 0\ntf = 3\ndt = 0.001\n```\n\n\n```python\ny, t = sol_lin_system_ode_l(y0, ti, tf, dt, M)\n```\n\n\n```python\nplt.plot(t, y[0])\nplt.plot(t, y[1])\nplt.show()\n```\n\n\n```python\ny0 = np.array([1, 3])\nx, t = lin_system_ivp_l(y0, ti, tf, dt, M)\n```\n\n\n```python\nplt.plot(t, x[0])\nplt.plot(t, x[1])\nplt.show()\n```\n\nTo conclude, the Runge-Kutta method is way more efficient than matrix exponent method.\n\n\n```python\n\n```\n", "meta": {"hexsha": "f94c934e2baf4c58437adeb4e21bef5c086d62be", "size": 200016, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Linear systems of differential equations/differential_equations.ipynb", "max_stars_repo_name": "PhyProg/Scientific-Computing", "max_stars_repo_head_hexsha": "cdbbbe67b7621c4cf2a2995c576bcbebbc65d1a6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Linear systems of differential equations/differential_equations.ipynb", "max_issues_repo_name": "PhyProg/Scientific-Computing", 
"max_issues_repo_head_hexsha": "cdbbbe67b7621c4cf2a2995c576bcbebbc65d1a6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Linear systems of differential equations/differential_equations.ipynb", "max_forks_repo_name": "PhyProg/Scientific-Computing", "max_forks_repo_head_hexsha": "cdbbbe67b7621c4cf2a2995c576bcbebbc65d1a6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 514.1799485861, "max_line_length": 64132, "alphanum_fraction": 0.9470042397, "converted": true, "num_tokens": 980, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9343951625409307, "lm_q2_score": 0.8856314632529871, "lm_q1q2_score": 0.8275297550576372}} {"text": "# Cart-pole swing-up problem: interactive demonstration\n\nHello and welcome. This is a Jupyter Notebook, a kind of document that can alternate between static content, like text and images, and executable cells of code.\n\nThis document ilustrates the Cart-pole swing-up test case of the paper: \"Collocation Methods for Second Order Systems\", submitted to RSS 2022.\n\nIn order to run the cells of code, you can select the cell and clic on the small \"play\" button in the bar above or press shift+enter. Alternatively, you can select the option \"run -> run all cells\" in order to run all the code in order. Beware that some cells can take several minutes!\n\nAll of the code used in this example is open-source and free to use.\n\n[SymPy](https://www.sympy.org/en/index.html) is used for Symbolic formulation and manipulation of the problem.\n\n[Numpy](https://numpy.org/) is used for numerical arrays and operations.\n\n[CasADI](https://web.casadi.org/) is used for optimization.\n\n[Optibot](https://github.com/AunSiro/optibot) is the name of the package where we are compiling our code. 
We aim to produce a toolbox for Optimal Control Problems, focused on robotics, including a high level, readable and clean interface between the prior three packages.\n\n## Package imports\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n```\n\n\n```python\nfrom sympy import (symbols, simplify)\nfrom sympy.physics.mechanics import dynamicsymbols, init_vprinting\nfrom sympy.physics.mechanics import Lagrangian, ReferenceFrame, Point, Particle,inertia, RigidBody\n```\n\n\n```python\nfrom optibot.symbolic import lagrange, diff_to_symb, SimpLagrangesMethod\nfrom optibot.numpy import unpack\n```\n\n\n```python\nfrom functools import lru_cache\n```\n\n\n```python\n#SymPy vector-like latex rendering inizialization:\n\ninit_vprinting()\n```\n\n## Symbolic Problem Modelling\n\nThe first step is to model our problem taking advantage of the high level object syntax of the mechanics module in SymPy\n\n\n```python\n# Creating symbols and dynamic symbols\n\nm0, m1, l, t, g = symbols('m_0 m_1 l t g')\nq0, q1 = dynamicsymbols('q_0 q_1')\n```\n\n\n```python\n# Definition of the physics system\n\nN_in = ReferenceFrame('N')\npN = Point('N*')\npN.set_vel(N_in, 0)\n\nP0 = pN.locatenew('P0', q0 * N_in.x)\nP0.set_vel(N_in, q0.diff(t) * N_in.x)\ncart_part = Particle('CartPart', P0, m0)\ncart_part.potential_energy = m0 * g * P0.pos_from(pN).dot(N_in.y)\n\nN1 = N_in.orientnew('N1', 'Axis', [q1, N_in.z])\nP1 = P0.locatenew('P1', -l*N1.y)\nP1.set_vel(N_in, P1.pos_from(pN).dt(N_in))\n\npend_part = Particle('PendPart', P1, m1)\npend_part.potential_energy = m1 * g * P1.pos_from(pN).dot(N_in.y)\n```\n\n\n```python\n#Computing the Lagrangian\n\nLag_simp = Lagrangian(N_in, cart_part, pend_part)\nLag_simp\n```\n\n\n```python\n# Defining the control forces and external actions, and applying them to our system\n\nu0, u1 = symbols('u_0, u_1')\nFL = [(P0, u0 * N_in.x)]#, (N1, u1 * N_in.z)]\nLM_small = SimpLagrangesMethod(Lag_simp, [q0, q1], forcelist=FL, frame=N_in)\n```\n\n\n```python\n# Generating the dynamic equations\n\nLM_small.form_lagranges_equations()\nRHS_small = LM_small.rhs\nRHS_small\n```\n\n### Scheme definitions\n\nEach scheme is defined here as a function that must be equal to zero at each interval.\nNote that functions that contain \"mod\" in the name are those we define as \"second order\",\nand use separate conditions for q and v.\n\nSchemes that contain \"parab\" in the name are versions of Hermite Simpson that allow\nor $U_c$ to be a free parameter. 
It is passed to the function through the \n\"scheme_params\" argument.\n\nIf you wish to define your own schemes, do it here.\n\nBe careful to respect the function structure: either\n\n restriction(x, x_n, u, u_n, F, dt, params) = 0\nor\n \n restriction(x, x_n, u, u_n, F, dt, params, scheme_params) = 0\n\n\n```python\nfrom optibot.schemes import index_div\nfrom copy import copy\n\ndef euler_restr(x, x_n, u, u_n, F, dt, params):\n return x_n - (x + dt * F(x, u, params))\n\n\ndef trapz_restr(x, x_n, u, u_n, F, dt, params):\n f = F(x, u, params)\n f_n = F(x_n, u_n, params)\n return x_n - (x + dt / 2 * (f + f_n))\n\n\ndef trapz_mod_restr(x, x_n, u, u_n, F, dt, params):\n res = copy(x)\n first_ind, last_ind = index_div(x)\n q = x[first_ind]\n v = x[last_ind]\n f = F(x, u, params)[last_ind]\n f_n = F(x_n, u_n, params)[last_ind]\n res[last_ind] = v + dt / 2 * (f + f_n)\n res[first_ind] = q + dt * v + dt ** 2 / 6 * (f_n + 2 * f)\n return x_n - res\n\n\ndef hs_restr(x, x_n, u, u_n, F, dt, params):\n f = F(x, u, params)\n f_n = F(x_n, u_n, params)\n x_c = (x + x_n) / 2 + dt / 8 * (f - f_n)\n u_c = (u + u_n) / 2\n f_c = F(x_c, u_c, params)\n return x + dt / 6 * (f + 4 * f_c + f_n) - x_n\n\n\ndef hs_mod_restr(x, x_n, u, u_n, F, dt, params):\n x_c = copy(x)\n res = copy(x)\n first_ind, last_ind = index_div(x)\n f = F(x, u, params)[last_ind]\n f_n = F(x_n, u_n, params)[last_ind]\n q = x[first_ind]\n v = x[last_ind]\n q_n = x_n[first_ind]\n v_n = x_n[last_ind]\n u_c = (u + u_n) / 2\n q_c = q + dt / 32 * (13 * v + 3 * v_n) + dt**2 / 192 * (11 * f - 5 * f_n)\n v_c = (v + v_n) / 2 + dt / 8 * (f - f_n)\n x_c[first_ind] = q_c\n x_c[last_ind] = v_c\n f_c = F(x_c, u_c, params)[last_ind]\n res[last_ind] = v + dt / 6 * (f + 4 * f_c + f_n)\n res[first_ind] = q + dt * v + dt ** 2 / 6 * (f + 2 * f_c)\n return x_n - res\n\n\ndef hs_parab_restr(x, x_n, u, u_n, F, dt, params, scheme_params):\n f = F(x, u, params)\n f_n = F(x_n, u_n, params)\n x_c = (x + x_n) / 2 + dt / 8 * (f - f_n)\n u_c = scheme_params\n f_c = F(x_c, u_c, params)\n return x + dt / 6 * (f + 4 * f_c + f_n) - x_n\n\n\ndef hs_mod_parab_restr(x, x_n, u, u_n, F, dt, params, scheme_params):\n x_c = copy(x)\n res = copy(x)\n first_ind, last_ind = index_div(x)\n f = F(x, u, params)[last_ind]\n f_n = F(x_n, u_n, params)[last_ind]\n q = x[first_ind]\n v = x[last_ind]\n q_n = x_n[first_ind]\n v_n = x_n[last_ind]\n u_c = scheme_params\n q_c = q + dt / 32 * (13 * v + 3 * v_n) + dt**2 / 192 * (11 * f - 5 * f_n)\n v_c = (v + v_n) / 2 + dt / 8 * (f - f_n)\n x_c[first_ind] = q_c\n x_c[last_ind] = v_c\n f_c = F(x_c, u_c, params)[last_ind]\n res[last_ind] = v + dt / 6 * (f + 4 * f_c + f_n)\n res[first_ind] = q + dt * v + dt ** 2 / 6 * (f + 2 * f_c)\n return x_n - res\n```\n\n### Casadi optimization\n\nWe have generated the system equations symbolicaly. 
Now, we translate them to CasADi objects in order to perform the optimization.\n\n\n```python\n#Numerical values of the paramenters\n\nm0_n, m1_n = [1., 0.3]\nl_n = 0.5\ng_n = 9.81\nparams = [g_n, l_n, m0_n, m1_n]\n```\n\n\n```python\n#Package imports\n\nimport casadi as cas\nfrom optibot.casadi import rhs_to_casadi_function, restriction2casadi\n```\n\n\n```python\n# Translating the Sympy Expression into a CasADi function\n\nF_cas_simp = rhs_to_casadi_function(RHS_small[2:], 2)\n```\n\n\n```python\ndef gen_ini_guess(N = 25, ini_guess = 'lin'):\n '''\n Generates an initial guess for the Cartpole problem of N intervals.\n '''\n if ini_guess == 'zero':\n x_init_guess = np.zeros([N+1,4])\n elif ini_guess == 'lin':\n def_q1 = np.linspace(0,1,N+1)\n def_q2 = np.linspace(0,np.pi,N+1)\n def_v1 = np.zeros(N+1)\n def_v2 = np.zeros(N+1)\n x_init_guess = np.array([def_q1, def_q2, def_v1, def_v2]).T\n return x_init_guess\n\n```\n\n\n```python\nimport time\ndef chrono_solve(opti, solve_repetitions):\n '''\n Calls the solver a certain amount of times and returns the last solution\n obtained and the average computing time\n '''\n cput0 = time.time()\n for ii in range(solve_repetitions):\n sol = opti.solve()\n cput1 = time.time()\n cpudt = (cput1-cput0)/solve_repetitions\n return sol, cpudt\n\n```\n\n\n```python\n#@lru_cache\ndef casadi_cartpole(N = 25, scheme = 'euler', ini_guess = 'lin', solve_repetitions = 1, t_end = 2):\n opti = cas.Opti()\n p_opts = {\"expand\":True,'ipopt.print_level':0, 'print_time':0}\n s_opts = {\"max_iter\": 10000, 'tol': 1e-26}\n opti.solver(\"ipopt\",p_opts,\n s_opts)\n restr_schemes = {\n 'euler': euler_restr, # Euler scheme\n 'trapz': trapz_restr, # Trapezoidal Scheme\n 'trapz_mod' : trapz_mod_restr, # Second Order Trapezoidal Scheme\n 'hs': hs_restr, # Hermite Simpson Scheme, assuming that each Uc is the central value\n 'hs_mod': hs_mod_restr, # Second Order Hermite Simpson Scheme, assuming that each Uc is the central value\n 'hs_parab': hs_parab_restr, # Hermite Simpson Scheme, with Uc as a free problem parameter\n 'hs_mod_parab': hs_mod_parab_restr # Second Order Hermite Simpson Scheme, with Uc as a free problem parameter\n #'your scheme name here': your_scheme_function_here\n }\n \n f_restr = restr_schemes[scheme]\n \n # parab is a boolean variable that controls wether the centran points of U are free decision variables\n if scheme in ['hs_parab', 'hs_mod_parab']:\n parab = True\n else:\n parab = False\n \n # Creating problem structure\n X = opti.variable(N+1,4)\n U = opti.variable(N+1)\n if parab:\n U_c = opti.variable(N)\n T = opti.parameter()\n u_m = opti.parameter()\n Params = opti.parameter(4)\n\n # Defining the problem cost to minimize (integral of u^2)\n cost = (cas.sum1(U[:]**2)+cas.sum1(U[1:-1]**2))/N\n if parab:\n cost = (4*cas.sum1(U_c[:]**2) + cas.sum1(U[:]**2)+cas.sum1(U[1:-1]**2))/(3*N)\n opti.minimize(cost)\n\n # Initial and final conditions\n opti.subject_to(X[0,:].T == [0, 0, 0, 0])\n opti.subject_to(X[-1,:].T == [1, np.pi, 0, 0])\n \n # Translating the scheme restriction function into a CasADi function\n if parab: \n restriction = restriction2casadi(f_restr, F_cas_simp, 2, 1, 4, 1)\n else:\n restriction = restriction2casadi(f_restr, F_cas_simp, 2, 1, 4)\n\n # Appliying restrictions and action boundaries\n for ii in range(N):\n if parab:\n opti.subject_to(restriction(X[ii,:], X[ii+1,:], U[ii,:], U[ii+1],T/N, Params, U_c[ii])==0)\n opti.subject_to(opti.bounded(-u_m, U_c[ii,:] ,u_m))\n else:\n opti.subject_to(restriction(X[ii,:], X[ii+1,:], U[ii,:], 
U[ii+1,:],T/N, Params)==0)\n opti.subject_to(opti.bounded(-u_m,U[ii,:],u_m))\n opti.subject_to(opti.bounded(-u_m,U[-1, :],u_m))\n \n # Setting parameters to their numeric values\n opti.set_value(T, t_end)\n max_f = 20.0\n opti.set_value(u_m, max_f)\n\n m0_n, m1_n = [1., 0.3]\n l_n = 0.5\n g_n = 9.81\n opti.set_value(Params, [g_n, l_n, m0_n, m1_n])\n \n # Setting the initialization values\n if ini_guess in ['zero', 'lin']:\n opti.set_initial(X, gen_ini_guess(N, ini_guess))\n elif type(ini_guess) == list:\n opti.set_initial(X, ini_guess[0])\n opti.set_initial(U, ini_guess[1])\n if parab:\n opti.set_initial(U_c, ini_guess[2])\n else:\n raise TypeError('initial guess not understood')\n \n # Solve\n sol, cpudt = chrono_solve(opti, solve_repetitions)\n err_count = None\n sol_cost = sol.value(cost)\n xx_simp = sol.value(X)\n uu_simp = sol.value(U)\n if parab:\n uu_c = sol.value(U_c)\n else:\n uu_c = None\n \n # Return data\n return xx_simp, uu_simp, uu_c, cpudt, err_count, sol_cost\n```\n\nLet's try to solve the problem for 25 points and the 2nd order Hermite Simpson\n\n\n```python\nfrom optibot.schemes import interpolated_array, interpolated_array_derivative\nfrom optibot.analysis import dynamic_error\nfrom optibot.numpy import RHS2numpy\n```\n\n\n```python\nF_nump = RHS2numpy(RHS_small, 2)\n```\n\n\n```python\nscheme = 'hs_mod_parab'\nN = 25\nxx, uu, uu_c, cpudt, _, cost = casadi_cartpole(N, scheme, 'lin', 1)\n\nxx_interp, uu_interp = interpolated_array(\n X = xx,\n U = uu,\n F = F_nump,\n h = 2/N,\n t_array = np.linspace(0, 2, 2000),\n params = params,\n scheme = \"hs_parab\",\n u_scheme = 'parab',\n scheme_params = {'u_c' : uu_c}\n)\nplt.figure(figsize=[16,8])\nplt.plot(np.linspace(0,2,N+1),uu[:], 'o',label = '$u_k$ points')\nplt.plot(np.linspace(0,2,2*N+1)[1::2],uu_c, 'o',label = '$u_c$ points')\nplt.plot(np.linspace(0,2,2000),uu_interp, label = 'interpolation')\nplt.grid()\nplt.legend()\nplt.title('Cart-pole U(t) for 2nd order Hermite Simpson with N = 25')\nlabels = ['q1','q2','v1','v2']\nfor ii in range(4):\n plt.figure(figsize=[16,10])\n plt.plot(np.linspace(0,2,N+1),xx[:,ii], 'o',label = f'${labels[ii]}_k$ points')\n plt.plot(np.linspace(0,2,2000),xx_interp[:,ii], label = 'interpolation')\n plt.grid()\n plt.legend()\n plt.title(f'Cart-pole {labels[ii]}(t) for 2nd order Hermite Simpson with N = 25')\n```\n\n## Sistematic comparison of schemes for different values of N\n\nNow let's solve the problem with different methods.\n\n### Caution!\n\nExecuting the next cell may require some time!\n\n\n```python\nschemes = ['hs_parab', 'hs_mod_parab', 'trapz', 'trapz_mod'] #If you defined a custom function, name your scheme here\ninitials = ['lin']\nsolve_repetitions = 30 #Increase this number to get more reliable values of execution times\nN_arr = [20, 25, 30, 40, 50, 60]# You can increase the numbers here, but it will take more time\nresults = {}\n\nfor scheme in schemes:\n for init in initials:\n key = scheme + '_' + init\n print('Problem:', key)\n results[key] = {'N_arr':N_arr}\n for N in N_arr:\n print(f'\\tN = {N}')\n xx, uu, uu_c, cpudt, _, cost = casadi_cartpole(N, scheme, init, solve_repetitions)\n results[key][N] = {\n 'x': xx,\n 'u': uu,\n 'u_c': uu_c,\n 'cpudt': cpudt,\n 'cost': cost,\n }\n```\n\n\n```python\n#Calculating the number of collocation number\nfor scheme in results.keys():\n if 'hs' in scheme:\n n_coll = np.array(results[scheme]['N_arr'])*2-1\n results[scheme]['N_coll_arr'] = n_coll\n else:\n results[scheme]['N_coll_arr'] = results[scheme]['N_arr']\n```\n\n## Dynamic 
Error\n\nNow we can compute the dynamic errors for each case\n\n\n```python\ndef total_state_error(t_arr, dyn_err):\n errors = np.trapz(np.abs(dyn_err), t_arr, axis=0)\n return errors\n```\n\n\n```python\nschemes = ['hs_parab', 'hs_mod_parab', 'trapz', 'trapz_mod']\ninitials = ['lin']#, 'funcs']\nn_interp = 4000\nfor scheme in schemes:\n for init in initials:\n key = scheme + '_' + init\n print('Problem:', key)\n N_arr = results[key]['N_arr']\n for N in N_arr:\n print(f'\\tN = {N}')\n if 'parab' in scheme:\n u_scheme = 'parab'\n else:\n u_scheme = 'lin'\n dyn_err_q, dyn_err_v, dyn_err_2_a, dyn_err_2_b = dynamic_error(\n results[key][N]['x'],\n results[key][N]['u'],\n 2,\n params,\n F_nump,\n scheme = scheme,\n u_scheme= u_scheme,\n scheme_params={'u_c':results[key][N]['u_c']},\n n_interp = n_interp)\n t_arr = np.linspace(0,2, n_interp)\n tot_dyn_err_q = total_state_error(t_arr, dyn_err_q)\n tot_dyn_err_v = total_state_error(t_arr, dyn_err_v)\n tot_dyn_err_2_a = total_state_error(t_arr, dyn_err_2_a)\n tot_dyn_err_2_b = total_state_error(t_arr, dyn_err_2_b)\n results[key][N]['err_q_int'] = dyn_err_q\n results[key][N]['err_v_int'] = dyn_err_v\n results[key][N]['err_2_a_int'] = dyn_err_2_a\n results[key][N]['err_2_b_int'] = dyn_err_2_b\n results[key][N]['err_q'] = tot_dyn_err_q\n results[key][N]['err_v'] = tot_dyn_err_v\n results[key][N]['err_2_a'] = tot_dyn_err_2_a\n results[key][N]['err_2_b'] = tot_dyn_err_2_b\n```\n\n\n```python\nfor scheme in schemes:\n for init in initials:\n key = scheme + '_' + init\n print('Problem:', key)\n N_arr = results[key]['N_arr']\n err_q_acum = []\n err_v_acum = []\n err_2_a_acum = []\n err_2_b_acum = []\n cpudt = []\n for N in N_arr:\n err_q_acum.append(results[key][N]['err_q'])\n err_v_acum.append(results[key][N]['err_v'])\n err_2_a_acum.append(results[key][N]['err_2_a'])\n err_2_b_acum.append(results[key][N]['err_2_b'])\n cpudt.append(results[key][N]['cpudt'])\n results[key]['err_q_acum'] = np.array(err_q_acum, dtype = float)\n results[key]['err_v_acum'] = np.array(err_v_acum, dtype = float)\n results[key]['err_2_a_acum'] = np.array(err_2_a_acum, dtype = float)\n results[key]['err_2_b_acum'] = np.array(err_2_b_acum, dtype = float)\n results[key]['cpudt'] = np.array(cpudt, dtype = float)\n```\n\n\n```python\n#Plotting parameters\nplt.rcParams.update({'font.size': 12})\noct_fig_size = [15,10]\n```\n\n\n```python\nsch = [['hs_parab','hs_mod_parab'],['trapz', 'trapz_mod']]\ntit = [['Hermite Simpson','2nd order Hermite Simpson'],['Trapezoidal', '2nd order Trapezoidal']]\ncolors = [f'C{ii}' for ii in [1,0,2,3]]\nn_int = len(t_arr)\nN_hh = [25,50]\nfor hh in range(2):\n schemes = sch[hh]\n titles = tit[hh]\n N = N_hh[hh]\n interv_n = (N * t_arr)/2\n for ii in range(2):\n plt.figure(figsize=oct_fig_size)\n for kk in range(len(schemes)):\n scheme = schemes[kk]\n key = scheme + '_lin'\n cut_p = 0\n for ll in range(1,N+1):\n jj = np.searchsorted(interv_n, ll)\n plt.plot(t_arr[cut_p:jj],results[key][N]['err_q_int'][cut_p:jj,ii], '-', c = colors[2*hh+kk], label = titles[kk] if cut_p == 0 else None)\n cut_p = jj\n plt.plot(np.linspace(0,2,N+1), np.zeros(N+1), 'ok', label = 'knot & collocation points')\n if hh == 0:\n plt.plot(np.linspace(0,2,2*N+1)[1::2], np.zeros(N), 'ow', markeredgecolor='k', label = 'collocation points')\n plt.legend()\n plt.grid()\n plt.title(r'First order dynamic error $\\varepsilon^{[1]}_{q_'+f'{ii+1}}}$, {titles[0]} schemes, N = {N}')\n plt.xlabel('Time(s)')\n units = 'm/s' if ii == 0 else'rad/s'\n plt.ylabel(f'Dynamic error $({units})$')\n 
plt.tight_layout(pad = 0.0)\n sch_type = titles[0].replace(' ','_')\n \n # If you are running the notebook locally and want to save the plots,\n # uncomment the next line\n #plt.savefig(f'Cartpole_First_Order_Dynamic_Error_q_{ii+1}_{sch_type}_schemes_N_{N}.eps', format='eps')\n```\n\n\n```python\nsch = [['hs_parab','hs_mod_parab'],['trapz', 'trapz_mod']]\ntit = [['Hermite Simpson','2nd order Hermite Simpson'],['Trapezoidal', '2nd order Trapezoidal']]\ncolors = [f'C{ii}' for ii in [1,0,2,3]]\nn_int = len(t_arr)\nN_hh = [25,50]\nfor hh in range(2):\n schemes = sch[hh]\n titles = tit[hh]\n N = N_hh[hh]\n interv_n = (N * t_arr)/2\n for ii in range(2):\n plt.figure(figsize=oct_fig_size)\n for kk in range(len(schemes)):\n scheme = schemes[kk]\n key = scheme + '_lin'\n cut_p = 0\n for ll in range(1,N+1):\n jj = np.searchsorted(interv_n, ll)\n plt.plot(t_arr[cut_p:jj],results[key][N]['err_2_b_int'][cut_p:jj,ii], '-', c = colors[2*hh+kk], label = titles[kk] if cut_p == 0 else None)\n cut_p = jj\n plt.plot(np.linspace(0,2,N+1), np.zeros(N+1), 'ok', label = 'knot & collocation points')\n if hh == 0:\n plt.plot(np.linspace(0,2,2*N+1)[1::2], np.zeros(N), 'ow', markeredgecolor='k', label = 'collocation points')\n plt.legend()\n plt.grid()\n #plt.ylim([-0.00022, 0.00022])\n plt.title(r'Second order dynamic error $\\varepsilon^{[2]}_{q_'+f'{ii+1}}}$, {titles[0]} schemes, N = {N}')\n plt.xlabel('Time(s)')\n units = 'm/s^2' if ii == 0 else'rad/s^2'\n plt.ylabel(f'Dynamic error $({units})$')\n plt.tight_layout(pad = 0.0)\n sch_type = titles[0].replace(' ','_')\n # If you are running the notebook locally and want to save the plots,\n # uncomment the next line\n #plt.savefig(f'Cartpole_Second_Order_Dynamic_Error_q_{ii+1}_{sch_type}_schemes_N_{N}.eps', format='eps')\n```\n\n\n```python\nschemes_graph = ['hs_mod_parab', 'hs_parab', 'trapz', 'trapz_mod']\ntitles = ['2nd order Hermite Simpson', 'Hermite Simpson','Trapezoidal', '2nd order Trapezoidal']\ncolors = [f'C{ii}' for ii in range(9)]\ndata_array = ['err_q_acum','err_v_acum','err_2_b_acum','cpudt']\ninitial = 'lin'\n\n\ndata_key = data_array[2]\nfor qq in range(2):\n plt.figure(figsize=[10,6])\n plt.title(f'Second order dynamic error $E^{{[2]}}_{{q_{qq+1}}}$')\n for ii in [2,3,1,0]:\n scheme = schemes_graph[ii]\n key = scheme + '_' + initial\n print('Problem:', key)\n N_arr = results[key]['N_arr']\n if len(results[key][data_key].shape) == 1:\n plt.plot(N_arr,results[key][data_key], marker = 'o', c = f'C{ii}',label = titles[ii])\n else:\n plt.plot(N_arr,results[key][data_key][:,qq], marker = 'o', c = f'C{ii}',label = titles[ii])\n plt.yscale('log')\n plt.xlabel('Number of intervals')\n plt.grid()\n plt.legend()\n units = 'm/s' if qq == 0 else'rad/s'\n plt.ylabel(f'Dynamic error $({units})$')\n plt.tight_layout(pad = 0.0)\n # If you are running the notebook locally and want to save the plots,\n # uncomment the next line\n #plt.savefig(f'Cartpole_Integrated_Second_Order_Dynamic_Error_q_{qq+1}_vs_N.eps', format='eps')\n\n```\n\n\n```python\nschemes = ['hs_mod_parab','hs_parab', 'trapz', 'trapz_mod']\ntitles = ['2nd order Hermite Simpson', 'Hermite Simpson','Trapezoidal', '2nd order Trapezoidal']\nplt.figure(figsize=[10,6])\nfor ii in [2,3,1,0]:\n key = schemes[ii] + '_lin'\n plt.plot(results[key]['N_arr'], results[key][f'cpudt'], marker = 'o', c = f'C{ii}',label = titles[ii])\nplt.grid()\nplt.legend()\nplt.title('Optimization time')\nplt.xlabel('Number of intervals')\nplt.ylabel('Time (s)')\nplt.tight_layout(pad = 0.0)\n# If you are running the notebook locally 
and want to save the plots,\n# uncomment the next line\n#plt.savefig(f'Cartpole_optimization_time_vs_interval_number.eps', format='eps')\n```\n\n\n```python\n# Here we print the data shown in Table II of the paper\nfor scheme in ['hs_mod_parab', 'hs_parab', 'trapz', 'trapz_mod']:\n key = scheme + '_lin'\n for N in [25,50]:#results[key]['N_arr']:\n print('scheme:', scheme, 'N:', N,'\\n\\ttime:', results[key][N][f'cpudt'],\n '\\n\\tErr 1:', results[key][N]['err_q'], '\\n\\tErr 2:', results[key][N]['err_2_b'])\n```\n\n## Animation\n\n\n```python\nfrom matplotlib import animation, rc\nimport matplotlib.patches as patches\nfrom matplotlib.transforms import Affine2D\nfrom IPython.display import HTML\nimport matplotlib\nmatplotlib.rcParams['animation.embed_limit'] = 200\n```\n\n\n```python\ndef create_anim(X, U, params):\n [g_n, l_n, m0_n, m1_n] = params\n \n N = X.shape[0]\n fig, ax = plt.subplots()\n y_scale = 1\n min_x_cart = np.min(X[:,0])\n max_x_cart = np.max(X[:,0])\n cart_displ = max_x_cart-min_x_cart\n size_x = 2*y_scale + cart_displ\n size_y = 2*y_scale\n draw_width = 14\n draw_height = draw_width / size_x * size_y\n \n x_0 = X[:,0]\n y_0 = np.zeros_like(x_0)\n x_1 = x_0 + l_n*np.sin(X[:,1])\n y_1 = y_0 - l_n*np.cos(X[:,1])\n \n x_cm = (m0_n * x_0 + m1_n * x_1)/(m0_n + m1_n)\n y_cm = (m0_n * y_0 + m1_n * y_1)/(m0_n + m1_n)\n\n fig.set_dpi(72)\n fig.set_size_inches([draw_width,draw_height])\n ax.set_xlim(( min_x_cart-y_scale, max_x_cart+y_scale))\n ax.set_ylim(( -y_scale, y_scale))\n\n #circle1 = plt.Circle((0, 0), l_n, color='b', ls = \":\", fill=False)\n #ax.add_artist(circle1)\n ax.plot([min_x_cart - l_n, max_x_cart + l_n], [0,0], 'k', lw=1, ls = ':')\n\n line1, = ax.plot([], [], lw=2)\n line3, = ax.plot([], [], 'k', lw=1, ls = ':')\n #line_cm, = ax.plot([], [], 'g', lw=1, ls = ':')\n point0, = ax.plot([], [], marker='s', markersize=10, color=\"k\")\n point1, = ax.plot([], [], marker='o', markersize=7, color=\"red\")\n #point_cm, = ax.plot([], [], marker='o', markersize=10, color=\"green\")\n u_max = max(np.max(np.abs(U[:])),1e-15)\n arrow_w = 0.1*l_n\n arrow_l = 0.7*l_n\n u_arrow = patches.Arrow(0, 0, 0, -arrow_l, color = 'gray',width = arrow_w)\n ax.add_patch(u_arrow)\n \n print_vars = [X[:,0], X[:,1], U[:], np.linspace(0, N-1, N, dtype=int)]\n print_var_names = ['q_0', 'q_1', 'u_0', 'step']\n texts = []\n ii = 0.8\n for arr in print_vars:\n texts.append(ax.text(-0.8, ii, \"\", fontsize = 12))\n ii -= 0.2*l_n\n \n xx_interpolated, uu_interpolated = interpolated_array(\n X,\n U,\n F = F_nump,\n h = 2/(N-1),\n t_array = np.linspace(0, 2, 5*(N-1)+1),\n params = params,\n scheme = 'hs_mod_parab',\n u_scheme = 'parab',\n scheme_params = {'u_c' : results['hs_mod_parab_lin'][N-1]['u_c']}\n )\n x_0_interp = xx_interpolated[:,0]\n y_0_interp = np.zeros_like(x_0_interp)\n x_1_interp = x_0_interp + l_n*np.sin(xx_interpolated[:,1])\n y_1_interp = y_0_interp - l_n*np.cos(xx_interpolated[:,1])\n \n def init():\n line1.set_data([], [])\n line3.set_data([], [])\n #line_cm.set_data([], [])\n point1.set_data([], [])\n #circle1.center = (0, 0)\n return (line1,)\n def animate(i):\n #circle1.center = (x_0[i], y_0[i])\n point0.set_data(x_0[i], y_0[i])\n line1.set_data([x_0[i], x_1[i]], [y_0[i], y_1[i]]) \n point1.set_data(x_1[i], y_1[i])\n #point_cm.set_data(x_cm[i], y_cm[i])\n line3.set_data(x_1_interp[:5*i+1], y_1_interp[:5*i+1])\n #line_cm.set_data(x_cm[:i], y_cm[:i])\n trans = Affine2D()\n u_arrow._patch_transform = trans.scale(U[i] * arrow_l / u_max, arrow_w).translate(x_0[i],0)\n for ii in 
range(len(texts)):\n text = texts[ii]\n name = print_var_names[ii]\n arr = print_vars[ii]\n if name == 'step':\n text.set_text(\"$step$ = \" + str(arr[i]))\n else:\n text.set_text(\"$\" + name + \"$ = %.3f\" % arr[i])\n return (line1,u_arrow)\n frame_indices = np.concatenate((np.zeros(10, dtype=int), np.arange(0, N, 1), np.ones(15, dtype=int)*(N-1)))\n anim = animation.FuncAnimation(fig, animate, init_func=init,\n frames=frame_indices, interval=20, \n blit=True)\n return anim\n```\n\n\n```python\nanim = create_anim(results['hs_parab_lin'][25]['x'], results['hs_parab_lin'][25]['u'], params)\n```\n\n\n```python\nHTML(anim.to_jshtml())\n```\n\n\n```python\nf = r\"cartpole_animation.mp4\" \nwritervideo = animation.FFMpegWriter(fps=12) \n# If you are running the notebook locally and want to save the animation,\n# uncomment the next line\n#anim.save(f, writer=writervideo)\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "8061224c1e7c6ca917e0f0ad15bc00a63315d585", "size": 39007, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Cartpole-demo.ipynb", "max_stars_repo_name": "AunSiro/Second-Order-Schemes", "max_stars_repo_head_hexsha": "ef7ac9a6755e166d81b83f584f82055d38265087", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Cartpole-demo.ipynb", "max_issues_repo_name": "AunSiro/Second-Order-Schemes", "max_issues_repo_head_hexsha": "ef7ac9a6755e166d81b83f584f82055d38265087", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Cartpole-demo.ipynb", "max_forks_repo_name": "AunSiro/Second-Order-Schemes", "max_forks_repo_head_hexsha": "ef7ac9a6755e166d81b83f584f82055d38265087", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 34.7966101695, "max_line_length": 298, "alphanum_fraction": 0.5082421104, "converted": true, "num_tokens": 8390, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9161096135894201, "lm_q2_score": 0.903294212561406, "lm_q1q2_score": 0.8275165120271891}} {"text": "# Class 5\n\nNote: The notes that follow are largely those of Mark Krumholz (ANU) who led the Bootcamp\nlast in 2015. You can find the 2015 lectures [here](https://sites.google.com/a/ucsc.edu/krumholz/teaching-and-courses/python-15)\n\n\n```python\n# These are to display images in-line\nfrom IPython.display import Image\nfrom IPython.core.display import HTML\n\n#Imports\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n```\n\nToday's class is the final one in the boot camp, and it is here that we turn mainly to scientific applications of python, as opposed to general python programming. The universe of scientific applications is vast, so we will pick out a few examples of common scientific tasks that we can perform using python, numpy, and scipy.\n\n# Root Finding\n### Motivating example and statement of the problem\n\nOur first example task is finding the solutions to algebraic equations, or, equivalently, the roots of functions. We can illustrate this example using a problem from basic astronomy. We know that the light emitted by a blackbody follows a distribution in wavelength that depends on its temperature. 
Specifically, we know that the distribution of intensity versus wavelength is distributed following the Planck function,\n\n\\begin{equation}\nB_{x}=\\left[\\frac{2(k T)^{5}}{h^{4} c^{3}}\\right] \\frac{x^{5}}{e^{x}-1}\n\\end{equation}\n\nwhere $\\lambda$ is the wavelength, h is Planck's constant, k is Boltzmann's constant, and T is the temperature. Now suppose that we want to find the wavelength of maximum intensity for a given temperature T. This is a straightforward problem of maximizing a function, and we can solve it via the usual calculus method: take the derivative and find where it is equal to zero.\n\nThe algebra is a bit less messy if we make the substitutionv $x=hc / \\lambda kT$, which makes the function we want to maximize\n\n\\begin{equation}\nB_{x}=\\left[\\frac{2(k T)^{5}}{h^{4} c^{3}}\\right] \\frac{x^{5}}{e^{x}-1}\n\\end{equation}\n\nTaking the derivative with respect to x, and doing a bit of simplification, we get\n\n\\begin{equation}\n\\frac{d B_{x}}{d x}=\\left[\\frac{2(k T)^{5}}{h^{4} c^{3}}\\right] x^{4} \\frac{(5-x) e^{x}-5}{\\left(e^{x}-1\\right)^{2}}\n\\end{equation}\n\nWe want to find the value of $x$ for which this is equal to 0, which is clearly the value of $x$ where the numerator $(5 - x)e^x - 5$ is equal to zero. We are thus left with a pure math problem: solve the equation\n\n\\begin{equation}\n(5 - x)e^x - 5 = 0\n\\end{equation}\n\nThe difficulty is that this is a transcendental equation, which means that the equation cannot be solved analytically in an exact way. However, as we will see below, there clearly is a solution.\n\nThe problem of solving equations of this goes by the generic name of root-finding. In mathematical terms, the problem can be stated as follows. Suppose we have some function one one variable $f(x)$: for what value(s) of $x$ do we have $f(x) = 0$? In other words, what are the roots of this function?\n\n### Graphical evaluation\nA first step toward solving a problem like this is getting some sense of what the function looks like. To that end, we can fire up python, define the function, and plot it. We'll do this for both the Planck function and its derivative, omitting the leading constants in square brackets.\n\n\n```python\ndef bx(x):\n return( x**5/(np.exp(x)-1) )\n```\n\n\n```python\ndef dbdx(x):\n return( x**4 * ((5-x)*np.exp(x)-5) / (np.exp(x)-1)**2 )\n```\n\n\n```python\nx = np.arange(0.01,20,0.01)\n```\n\n\n```python\nplt.plot(x, bx(x), lw=3)\nplt.plot(x, dbdx(x), lw=3)\nplt.plot(x, 0*x, 'k--')\nplt.legend(['B', 'dB/dx', '0'])\n```\n\nThe blue line is the graph of the function we want, and the green line is $f(x) = 0$. Clearly there is a solution. By eye, we can estimate that it is around $x = 5$, but we can be much more accurate than that.\n\nOne important reason to graph a function before employing these more accurate methods, however, is that a graph can often alert us if a function has more than one root. If it does, we may need to think about which root we want to find, as it is often the case that only one of the roots corresponds to a physically-realistic solution that we're interested in.\n\n## Newton's method and the secant method\n\nSo how do we go about finding the numerical value of $x$ for which $f(x) = 0$? There are numerous numerical methods, but in this class we'll explore only two: Newton's method / the secant method, and Brent's method.\n\nNewton's method was invented, as one might guess from the name, by Isaac Newton, and it consists of the following steps. 
We start with an initial guess for the root $x_0$, and at the guess we evaluate both the function and its derivative: $f(x_0)$ and $f'(x_0)$.\n\nIf the function were a straight line, then we could find the root just from this, because $f'(x_0)$ is the slope and $f(x_0)$ is the y value. The root would be at\n\n\\begin{equation}\nx_1 = x_0 - \\frac{f(x_0)}{ f'(x_0)}.\n\\end{equation}\n\nIn general our function f is not a straight line, since, if it were, all of this would be unnecessary. However, on sufficiently small scales, every smooth function looks like a straight line, so this may still be a pretty good approximation, especially if our initial guess was not too far off. To improve the approximation, we can simply repeat the procedure by using the new value $x_1$ to guess a new $x$ value $x_2$ via the same formula:\n\n\\begin{equation}\nx_2 = x_1 - \\frac{f(x_1)}{f'(x_1)}.\n\\end{equation}\n\nWe can perform this operation repeatedly until $f(x_N)$ is as close to zero as we want it to be. The [Newton's method article on wikipedia](https://en.wikipedia.org/wiki/Newton%27s_method) has a much better graphical representation of this than I am likely to come up with, so here it is:\n\n\n\nNewton's method requires that we be able to calculate the derivative of the function $f'(x)$. This is sometimes the case, but now always. If we cannot analytically calculate the derivative, we can use a closely-related method called the second method. This is basically the same as Newton's method, with the difference that the derivative is approximated rather than computed exactly. Suppose that we take $x_0$ and $x_1$ as two initial guesses at the root. We can approximate the derivative by\n\n\\begin{equation}\nf^{\\prime}\\left(x_{1}\\right) \\approx \\frac{f\\left(x_{1}\\right)-f\\left(x_{0}\\right)}{x_{1}-x_{0}}\n\\end{equation}\n\nWe can plug this approximation into our Newton's method formula to get $x_2$, and then we can approximate the derivative $f'(x_2)$ using the values of $f(x)$ evaluated at $x_2$ and $x_1$ just as we approximated the derivative $f'(x_1)$ using the values of $f(x)$ evaluated at $x_1$ and $x_0$.\n\nThe scipy package provides implementations of both Newton's method and the secant method. To use these methods, we must first define the function whose roots we want to find. In the example above, we have already defined the function $f(x)$ that is of interest to us, so we can proceed to the next steps, which are to import the [scipy.optimize](https://docs.scipy.org/doc/scipy/reference/optimize.html) module, and then call the function [newton()](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.newton.html).\n\nThe syntax is as follows:\n\n\n\n```python\nimport scipy.optimize as opt\n\nopt.newton(dbdx, 5)\n```\n\n\n\n\n 4.965114231744276\n\n\n\nThe newton() method takes two arguments by default. The first is the function whose root is to be found, and the second is an initial guess for the location of the root. Given this initial guess, the function will apply Newton's method or the secant method iteratively to find the root. In this case, we didn't specify the derivative of the function, and so the secant method is used. 
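\n\nTo make the secant update formula above concrete, here is a minimal hand-rolled secant iteration (a rough sketch, not the code scipy actually uses) applied to the transcendental equation $(5 - x)e^x - 5 = 0$ derived earlier; the two starting guesses are arbitrary.\n\n\n```python\n# Minimal secant iteration on g(x) = (5 - x)*exp(x) - 5 (illustrative sketch only)\ndef g(x):\n    return (5.0 - x)*np.exp(x) - 5.0\n\nx0, x1 = 4.0, 5.0   # two arbitrary starting guesses\nfor i in range(20):\n    if abs(g(x1)) < 1e-10:\n        break\n    # secant update: the derivative is replaced by a finite-difference estimate\n    x0, x1 = x1, x1 - g(x1)*(x1 - x0)/(g(x1) - g(x0))\nx1   # converges to the same root as scipy's routine, roughly 4.9651\n```\n\n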
If we wanted to specify the derivative, we could do so and then use Newton's method:\n\n\n```python\ndef d2bdx2(x):\n return( 20*x**3/(np.exp(x)-1) - 10*x**4*np.exp(x)/(np.exp(x)-1)**2 + 2*np.exp(2*x)*x**5/(np.exp(x)-1)**3 - np.exp(x)*x**5/(np.exp(x)-1)**2 )\n```\n\n\n```python\nopt.newton(dbdx, 5, fprime=d2bdx2)\n```\n\n\n\n\n 4.965114231744276\n\n\n\nThe function we have defined as $f'(x)$ is the analytically-computed derivative of $f(x)$. The syntax is that the optional argument fprime is set equal to the function that returns $f'(x)$. Since we have now specified $f'(x)$, the newton() function in scipy.optimize uses Newton's method instead of the secant method. The advantage of Newton's method as opposed to the second method is that it usually converges to the right answer in fewer iterations, so, if the function $f(x)$ takes a while to evaluate, Newton's method can get to the answer faster.\n\nFrom the output answer, we can find the relationship between temperature and peak wavelength. Recall that we defined\n\n\\begin{equation}\nx= \\frac{h c}{ \\lambda k T}\n\\end{equation}\n\nso we can invert this to give\n\n\\begin{equation}\n\\lambda_{\\max } T=x_{\\max }(h c / k)=0.290 \\mathrm{cm} \\mathrm{K}\n\\end{equation}\n\nwhere we have plugged in $x = 4.9651$. This is known as **Wien's Law**.\n\n## Bracketed roots, the bisection method, and Brent's method\n\nWhile Newton's method and the second method worked in the example we just tried, it can also run into problems, and there is no guarantee that it will find a root. Suppose that, instead of using 5 as an initial guess, we had used 0.1. Here's the result:\n\n\n```python\nopt.newton(dbdx, 0.1)\n```\n\nSo Newton's method failed to find the solution. What went wrong? To answer that question, we need only look at the graph of the function. Newton's method essentially amounts to saying \"go downhill / uphill following the current slope\", but using 0.1 as an initial guess, \"downhill\" actually points away from the solution, not toward it. Newton's method works well if we're close enough to the right answer that the function isn't too far from a straight line, but it will fail badly if the function is not like a straight line.\n\nFortunately, we can do better, if we can bracket the root. Bracketing the root means that we can identify two values a and b such that $f(a)$ and $f(b)$ have different signs meaning that the function $f(x)$ (assuming it is continuous) must pass through 0 for some $x$ between $a$ and $b$.\n\nBracketing a root is extremely powerful, because it enables one to use an algorithm that is guaranteed to find the root. This method is [bisection](https://en.wikipedia.org/wiki/Bisection_method). The idea of bisection is extremely simple, and can be described by the following steps:\n\n\n1. Given two points $a$ and $b$ that bracket a root (i.e. $f(a)$ and $f(b)$ have opposite signs), take the halfway point between them $h = (a+b)/2$, and evaluate $f(h)$.\n\n2. If $f(h)$ is within some specified tolerance of 0, then end -- we have found the root.\n\n3. If not, then check if $f(h)$ and $f(a)$ have opposite signs. If they do, then there must be a root between $x = a$ and $x = h$, so set $b = h$ and go back to step 1.\n\n4. If $f(h)$ and $f(a)$ have the same sign, then $f(h)$ and $f(b)$ must have opposite signs, because we know that $f(a)$ and $f(b)$ have opposite signs. 
Thus there must be a root between $h$ and $b$, so set $a = h$ and go back to step 1.\n\nThis method is guaranteed to find the root, because it always keeps the root in the interval of interest, and successively halves that interval until we are close enough.\n\nThe bisection method is guaranteed to work, but can be much slower than something like Newton's method, because it only ever gets closer to the answer by a factor of 2 per step. In contrast, if a function is fairly close to a line, Newton's method will jump very close to the answer in a single step. [Brent's method](https://en.wikipedia.org/wiki/Brent's_method) gets the best of both worlds, by trying to use secant or a similar fast method, but falling back on bisection if that doesn't converge toward the answer. It is guaranteed to converge to the right answer just like bisection, but in most cases is nearly as fast as Newton's method.\n\n(Note: although Brent's method is guaranteed to converge *mathematically*, it may still fail to converge *numerically* because on a computer one is using finite-precision arithmetic, and there are situations where the 15 decimal places of accuracy that floating point numbers normally provide will not be enough to get to the answer within the requested tolerance. Attempting to find the root of $f(x) = e^{10000000x} - 1$ numerically is going to be problematic even with an algorithm that is mathematically guaranteed to work.)\n\n\nThe scipy.optimize library implements Brent's method through the routine [brentq()](http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.brentq.html), and if should generally be the go-to method of solving equations if you can bracket the root. The syntax is very much like Newton's method, except that, instead of specifying an initial guess, one instead specifies the bracketing values:\n\n\n\n```python\nopt.brentq(dbdx, 0.1, 100)\n```\n\n\n\n\n 4.965114231743836\n\n\n\nHere the first argument is the function whose root is to be found, the second argument is the left bracketing value ($a$), and the third argument is the right bracketing value ($b$). Even if these values are nowhere near the root (as is the case here), Brent's method will find it. Calling brentq with values for $a$ and $b$ such that $f(a)$ and $f(b)$ do not have opposite signs results in an error.\n\n## Multi-dimensional root-finding\n\nThe problem of finding the roots of a multi-dimensional function, where there is more than one independent or dependent variable, is a significantly harder problem. In its most general form, this is the problem of finding a vector $\\mathbf{x}$ satisfying the condition that $\\mathbf{f}(\\mathbf{x}) = 0$. The most common situation is where $\\mathbf{x}$ and $\\mathbf{f}(\\mathbf{x})$ have the same number of elements, but in general they may have different numbers of elements.\n\nThe basic reason that this is a hard problem is that, in more than one dimension, there is no guaranteed-to-work-even-if-it-is-slow method like bisection. There is no way to bracket a root, and guarantee that it lies somewhere between point $a$ and point $b$. The situation is easiest to visualize if we imagine two superimposed landscapes there the elevations of each vary with position. We want to find a point that is at sea level in both landscapes. This is hard. 
For each landscape we can draw the zero-elevation contour reasonably easily, but there is no guarantee that the two sets of zero-elevation contours ever actually cross one another, and no general way to find out if and where they do.\n\nDue to this difficulty, methods to find the roots of multi-dimensional functions are generally much more complex than those for finding the roots of scalar functions of scalar variables. Indeed, developing algorithms for solving problems of this sort is an active area of research in the field of applied mathematics.\n\nThe scipy.optimize module includes several methods for finding the roots of multidimensional functions, most of which can be accessed through the [root()](http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.root.html) function. Here's an example of using it. To start with, let us consider a two-dimensional function of two-variables, defined as follows:\n\n\n\n```python\ndef fvec(xvec):\n r = np.sqrt( xvec[0]**2 + xvec[1]**2 )\n phi = np.arccos( xvec[0]/r )\n fx = r**2 - 1.0\n fy = np.cos(phi)\n return( np.array([fx, fy]) )\n```\n\nWe can see where the solutions are going to be just by looking at the function. Its first component is $r^2 - 1$, so clearly the solution must lie on the circle $r = 1$. The second component is cos(phi), so clearly the solution must lie at phi = $\\pi$/2 or $(3/2)\\pi$, corresponding to $x = 0$. Thus there are two solutions, one at (0, 1) and one at (0, -1).\n\nTo search for these solutions using root, we proceed as follows.\n\n\n```python\nopt.root(fvec, [0.25, 0.5])\n```\n\n\n\n\n fjac: array([[-0.02837482, -0.99959735],\n [ 0.99959735, -0.02837482]])\n fun: array([ 2.69748668e-11, -3.60626471e-11])\n message: 'The solution converged.'\n nfev: 12\n qtf: array([-5.35354922e-09, -4.22230880e-09])\n r: array([-0.96382325, 0.02222859, 2.06183641])\n status: 1\n success: True\n x: array([-3.60625931e-11, 1.00000000e+00])\n\n\n\nThe first argument to root is the function whose roots are to be found, and the second argument is an initial guess at the solution. There are several keywords that control what method is used to search for the solution, and things like that. The function returns an object with a number of parts describing whether it succeeded in finding a solution, and if so where. In this example, success is True, indicating that the solver did find a solution, and x is the solution it found. In this case, it found the solution (0, 1).\n\n# Numerical Integration\n\n### Motivating example and statement of the problem\n\nThe second topic for today is numerical integration: using a numerical method to approximately evaluate integrals. To motivate this problem, we can return to the Planck function and the distribution of energy from a radiating black body, and ask an interesting question. Human eyes are sensitive to light over the wavelength range (roughly) 400 - 700 nm. What fraction of the Sun's light output actually occurs over this range of wavelengths. 
Now the Sun isn't really a blackbody, but its spectrum is close enough to being a blackbody that we can get a good rough answer by treating it as one.\n\nThe Sun's effective temperature is 5780 K, so using our definition $x = h c / \\lambda k T$, plugging in the values of [Planck's constant](http://en.wikipedia.org/wiki/Planck_constant), [Boltzmann's constant](http://en.wikipedia.org/wiki/Boltzmann_constant), and the [speed of light](http://en.wikipedia.org/wiki/Speed_of_light), we find that\n\n\\begin{equation}\nx=\\frac{24.89}{(\\lambda / 100 \\mathrm{nm})}\n\\end{equation}\n\nThus the range 400 - 700 nm corresponds to $x = 3.56 - 6.22$. The distribution of energies follows the Planck function, so we are interested in knowing how the integral of the Planck function over this interval compares to the integral from 0 to infinity. In mathematical terms, we want to be able to evaluate\n\n\\begin{equation}\n\\frac{\\int_{3.56}^{6.22} B_{x} d x}{\\int_{0}^{\\infty} B_{x} d x}\n\\end{equation}\n\nhe bottom integral can in fact be done exactly analytically (the result is $16 \\pi^{6}(k T)^{5} /\\left(63 h^{4} c^{3}\\right)$ ), but the top integral cannot be evaluated analytically, and must instead be evaluated numerically. We can represent the problem geometrically with a simple plot:\n\n\n\n```python\nx1 = np.arange(3.56, 6.22, 0.001)\n\nplt.clf()\nplt.fill_between(x, bx(x), alpha=0.5)\nplt.fill_between(x1, bx(x1), alpha=0.5, facecolor='r')\nplt.ylim([0,25])\n```\n\nThe goal is to evaluate the ratio of the purple shaded area to the total blue plus purple shaded area.\n\nThis is an example of a general problem that can be stated very simply: given a known function $f(x)$, evaluate the definite integral\n\n\\begin{equation}\n\\int_{a}^{b} f(x) d x\n\\end{equation}\n\nover some specified interval (a,b), where a could be -infinity, and b could be +infinity.\n\n### Quadrature of specified functions\n\nThere are numerous methods for evaluating integrals, and many of them are implemented in the [scipy.integrate](http://docs.scipy.org/doc/scipy/reference/integrate.html) module. All of these use some variant on the idea of quadrature, which means to approximate the function using a series of other functions whose integrals are easy to evaluate. A trivial example of this is the trapezoidal rule, where one approximates the function by a series of straight lines. In these case, the shape formed by the line segment on top, the y axis on the bottom, and the vertical sides is a trapezoid, a shape whose area is trivial to calculate. The integral may therefore be approximated as the sum of the areas of a series of trapezoids.\n\nModern quadrature methods are much more clever than this, in that they tend to approximate each segment with somewhat higher order functions than simple lines, and that they adaptively decide where to put the breaks between the various approximate segments based on an estimate of the numerical error. The scipy routine [quad()](http://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.quad.html) implements one such sophisticated algorithm. Its use is quite simple. To evaluate the numerator of our integral, we can do\n\n\n\n```python\nimport scipy.integrate as integ\n\ninteg.quad(bx, 3.56, 6.22)\n```\n\n\n\n\n (53.14174673016606, 5.899919078862899e-13)\n\n\n\nThe syntax is of quad is that the first argument is the function to be integrated, the second argument is the lower limit of integration, and the third argument is the upper limit. 
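\n\nAs a rough illustration of the quadrature idea described above (and a sanity check on the call we just made), a fixed-grid trapezoidal estimate of the same integral, using numpy's trapz and the bx function we already defined, lands close to quad's answer; quad itself is adaptive and far more accurate.\n\n\n```python\n# Crude fixed-grid trapezoidal cross-check of the same integral (illustrative only)\nxg = np.linspace(3.56, 6.22, 1001)\nnp.trapz(bx(xg), xg)   # close to the ~53.14 value returned by quad above\n```\n\nReturning to quad itself:\n\n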
The function returns a tuple of two numbers. The first is the value of the integral, and the second is an estimate of the absolute value of the error in the result.\n\nThe limits of integration can be plus or minus infinity, which are represented by Inf and -Inf in python. Thus to evaluate the denominator, we could do\n\n\n\n```python\ninteg.quad(bx, 0, np.Inf)\n```\n\n /home/bruno/pyEnvs/py3/lib/python3.6/site-packages/ipykernel_launcher.py:2: RuntimeWarning: overflow encountered in exp\n \n\n\n\n\n\n (122.08116743813392, 2.0046050699847875e-07)\n\n\n\nIt is easy to verify that this matches the analytic result very precisely, with an error that is in fact much smaller than the estimate given by the quad routine:\n\n\n```python\ninteg.quad(bx, 0, np.Inf)[0]-8*np.pi**6/63\n```\n\n /home/bruno/pyEnvs/py3/lib/python3.6/site-packages/ipykernel_launcher.py:2: RuntimeWarning: overflow encountered in exp\n \n\n\n\n\n\n 4.263256414560601e-14\n\n\n\nThus the answer to our question, \"what fraction of the Sun's light falls in the wavelength range visible to the human eye?\", is\n\n\n\n```python\ninteg.quad(bx, 3.56, 6.22)[0] / integ.quad(bx, 0, np.Inf)[0]\n```\n\n /home/bruno/pyEnvs/py3/lib/python3.6/site-packages/ipykernel_launcher.py:2: RuntimeWarning: overflow encountered in exp\n \n\n\n\n\n\n 0.43529848088237094\n\n\n\nThus about 44% of the Sun's output falls in the wavelength range that the human eye can see. At first this might seem surprising, given how tiny a part of the spectrum we can see. Of course, however, this is no accident -- evolution would hardly select for vision in parts of the spectrum where there isn't much light to see. The eye have evolved to be exquisitely well-tuned to the light output by the Sun. We would do much less well around stars with different effective temperatures.\n\n### Quadrature of sampled functions\n\nIn the example we just did, we had an analytic formula for the function we want to integrate. However, that is not always the case. Sometimes we only have access to the value of the function evaluated at particular points. For example, the function values might be data that were obtained experimentally rather than by an exact calculation, or they might be the result of a complex numerical calculation that we can't afford to perform a very large number of times. Obviously in this case we are not going to get as good an approximation to an integral as if we knew the function perfectly and could evaluate it wherever we wanted. However, we would still like to be able to numerically integrate such sampled functions as well as possible.\n\nAs an example, we can again take on the question of what fraction of the Sun's output energy lies in the energy range from 400 - 700 nm, but this time using a measured Solar spectrum rather than a pure blackbody. To start with, download the file [sorce_ssi.csv](https://sites.google.com/a/ucsc.edu/krumholz/teaching-and-courses/python14/class-5/sorce_ssi.csv?attredirects=0&d=1), which contains a measured spectrum of the Sun from January 1, 2010, obtained from the very useful database maintained by the [SOlar Radiation and Climate Experiment (SORCE)](http://lasp.colorado.edu/home/sorce/). This file contains three columns. The first is observation time (all the same in this file), the second is wavelength in nm, and the third is radiation flux in W/m2/nm.\n\nLet's start by reading in this data and plotting it. 
We will do so using the astropy.io package:\n\n\n```python\nfrom astropy.io import ascii\n\ndata = ascii.read('sorce_ssi.csv')\ndata.columns\n```\n\n\n\n\n \n\n\n\nThe columns available are as we said: time, wavelength, and irradiance, another word for frequency-dependent flux. We can plot the latter two quantities against one another as follows:\n\n\n```python\nl = data['wavelength (nm)']\nflux = data['irradiance (W/m^2/nm)']\nplt.plot(l, flux)\n```\n\nis is similar to our idealized blackbody, but not quite identical. To find out the fraction of the power between 400 and 700 nm, we want to integrate this function over that range, and divide by the integral of the full range.\n\nPython provides a method to evaluate these tabulate integrals by approximating the function over each tabulation range as a parabola. This approximation method is known as Simpson's rule, and the routine that performs it is [scipy.integrate.simps()](http://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.simps.html). We can use it as follows:\n\n\n```python\ninteg.simps(flux, x=l)\n```\n\n\n\n\n 1319.4359600971845\n\n\n\nThe first argument is the $y$ values of the function to be integrated, and the optional argument $x =$ lets us specify the $x$ values.\n\nThis gives the integral over the full range of the function. To get the integral over 400 - 700 nm, we need to extract just the parts of the l and flux arrays that correspond to this. In order to do this, we need to find out what indices in x correspond to values of 400 and 700. There are many ways to do this in numpy, but the easiest is probably the following:\n\n\n```python\nl1 = l[np.logical_and(l>400, l<700)]\nflux1 = flux[np.logical_and(l>400, l<700)]\n```\n\nLet's unpack these statements a bit. The statement l > 400 produces an array that is True wherever l is bigger than 400, and False otherwise. Similarly, l < 700 produces an array that is True wherever l is below 700, False elsewhere. Then we take logical_and(l>400, l<700), and the effect of logical_and is exactly what it sounds like: it produces True where both conditions (l>400 and l<700) are satisfied, and False elsewhere. Finally, l[logical_and(l>400, l<700)] finds the elements of l where both conditions are satisfied. Thus we've set l1 to just the elements of l that are between 400 and 700. Similarly, the second statement sets flux1 to be equal to just the elements of flux where the corresponding value of l is between 400 and 700. We can verify that this all worked ok by checking the maximum and minimum values of l1:\n\n\n```python\nnp.amax(l1)\n```\n\n\n\n\n 698.85\n\n\n\n\n```python\nnp.amin(l1)\n```\n\n\n\n\n 400.34\n\n\n\nFinally, we are ready to use simps() to evaluate the integral of flux1 versus l1, and compare that to the integral of flux versus l:\n\n\n```python\ninteg.simps(flux1, x=l1) / integ.simps(flux, x=l)\n```\n\n\n\n\n 0.40204419522708396\n\n\n\nSo we get 40% instead of 44% using the real Solar spectrum -- not a big difference, clearly.\n\n### Multi-dimensional quadrature\n\nThe quadrature method we have used thus far can also be generalized to multiple dimensional integrals, although of course the cost of the evaluation goes up as the dimensionality does. This capability is provided by the functions dblquad (for 2d functions), tplquad (for 3d functions), and nquad (for arbitrary-dimensional quadratures); see scipy.integrate. 
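\n\nFor reference, the calling convention for dblquad looks like this; the integrand below is an arbitrary toy example, chosen only to show the argument order (the integrand is written as a function of $y$ first and $x$ second, and the inner limits are supplied as functions of $x$):\n\n\n```python\n# Toy example: integral of x*y over the unit square, which equals 1/4\n# Note the integrand signature f(y, x) and the inner y-limits given as functions of x\ninteg.dblquad(lambda y, x: x*y, 0, 1, lambda x: 0.0, lambda x: 1.0)\n```\n\n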
We won't go over the usage of these functions in class, but they're basically analogous to what we've already done.\n\n\n# Ordinary Differential Equations\n\n### Motivating example and statement of the problem\n\nOur final example in this class is using python to integrate ordinary differential equations. The example we will use is a simple harmonic oscillator, for example a mass on a spring. Let us consider a mass m attached to a spring with spring constant $k$. The spring is oriented horizontally, so there is no gravitational force. When the mass is displaced by a distance $x$ from the position where the spring is force-free, the restoring force exerted on the mass is $-kx$. This in turn causes the mass to accelerate with an acceleration $a = -(k/m) x$. We are interested in figuring out how the mass will move if we initially displace it some distance $x_0$ from the equilibrium position.\n\nThe first step is to write down the equations governing this system. Let $v = dx/dt$ be the velocity of the mass at any given time. The set of equations that we have to solve is then\n\n\\begin{aligned}\n&\\frac{d x}{d t}=v\\\\\n&\\frac{d v}{d t}=-\\frac{k}{m} x\n\\end{aligned}\n\n\nEach of these equations is an ordinary differential equation (ODE), and this pair represents a coupled pair of ODEs. Our goal is to find functions $x(t)$ and $v(t)$ that satisfy these two equations, along with the initial condition that $x(0) = x_0$ and $v(0) = 0$.\n\nThis particular set of ODEs can be solved analytically, but many others can't be. In general such a system can be written as follows. A single ODE is an equation of the form\n\n\\begin{equation}\n\\frac{d x}{d t}=f(x, t)\n\\end{equation}\n\nwhere $f(x, t)$ is some arbitrary, specified function of $x$ and $t$. A set of two coupled ODEs is a pair of equations of the form\n\n\\begin{aligned}\n&\\frac{d x}{d t}=f(x, y, t)\\\\\n&\\frac{d y}{d t}=g(x, y, t)\n\\end{aligned}\n\nwhere $f(x,y,t)$ and $g(x,y,t)$ are any specified functions. ODEs of this form are very common in physics, and usually instead of $y$ we have the velocity, $v$. In general a system of N ODEs takes the form\n\n\\begin{aligned}\n&\\frac{d x_{1}}{d t}=f_{1}\\left(x_{1}, x_{2}, x_{3}, \\dots, t\\right)\\\\\n&\\frac{d x_{2}}{d t}=f_{2}\\left(x_{1}, x_{2}, x_{3}, \\dots, t\\right)\\\\\n&\\frac{d x_{3}}{d t}=f_{3}\\left(x_{1}, x_{2}, x_{3}, ., t\\right)\\\\\n&...\n\\end{aligned}\n\nwhere there are N independent variables, and N specified functions $f_1$ through $f_N$. This system of equations requires a set of $N$ initial conditions, which are most commonly of the form $x_i(0) = x_{i,0}$.\n\n## Solving ODEs in python\n\nThere are an immense number of standard methods for solving ODEs, and scipy provides an interface to a powerful and general ODE solver. As with integration, the first step is to define the function specifying the equations to be integrated. Specifically, we need to provide a function that gives the right-hand sides of the above equations, that is the derivatives of $x$ and $v$ with respect to time. We can enter this as\n\n\n\n\n```python\ndef derivs(xv, t, k, m):\n x = xv[0]\n v = xv[1]\n dxdt = v\n dvdt = -(k/m)*x\n return( [dxdt, dvdt] )\n```\n\nThe form of this function has to be as follows. The first argument is an array giving all the dependent variables to be integrated in time. In this case, our two dependent variables are $x$ and $v$, so we get an array of two elements, the first one being $x$ and the second one being $v$. The second argument of derivs is the time $t$. 
We don't actually need the time in this case, but we get it anyway because in general the derivatives might depend on time. Then finally we have any additional arguments we need. In this case the additional arguments are the spring constant $k$ and the mass $m$.\n\nThe function must then return an array giving the derivatives of the two dependent variables. Thus the function has to return $dxdt$ and $dvdt$. The function we have written calculates these from the equations we are trying to solve.\n\nOnce we've entered the function, the next thing to do is to specify the initial conditions. In this example, since there are two dependent variables, $x$ and $v$, we need to give the initial values of $x$ and $v$. As we described the problem, the initial position is displaced from $x = 0$ by some amount, which for this example we can take to be 1, and the initial velocity is zero. Thus we say\n\n\n```python\nxv0 = [1, 0]\n```\n\nThe third step is to decide at what times we'd like to know the position and velocity. This is up to us, but for this example let's say we'd like to know the position at intervals of 0.1 seconds for 20 seconds. We will create an array specifying these times:\n\n\n```python\nt = np.arange(0, 20, 0.1)\n\n```\n\nFinally, we need to choose values for the mass and spring constant. Let's choose a mass of 1 kg, and a spring constant of 2 N/m. Thus\n\n\n```python\nk = 2.\nm = 1.\n```\n\nNow we're finally ready to integrate. We use the function [odeint()](http://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.odeint.html) as follows: \n\n\n```python\noutput = integ.odeint(derivs, xv0, t, args=(k, m))\n```\n\nThe first argument is the function specifying the derivatives, the second is the initial conditions, the third is the set of times at which we want to know the position and velocity. Finally, the optional argument args specifies the additional arguments (in this case $k$ and $m$) that are to be passed to the derivs function. \n\nThe odeint routine returns an output that consists of the position $x$ and velocity $v$ at every time we asked for. The shape is\n\n\n```python\noutput.shape\n```\n\n\n\n\n    (200, 2)\n\n\n\nThus output is a 200 x 2 array. 
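The first column holds the position $x$ and the second the velocity $v$ at each requested time. As a quick sanity check (a small sketch added here, not part of the original worked example), one can confirm that the total energy $E = \\frac{1}{2}mv^2 + \\frac{1}{2}kx^2$ stays essentially constant along the numerical solution; with our initial conditions it should stay near $\\frac{1}{2}kx_0^2 = 1$ in these units.\n\n\n```python\n# Sanity check: energy conservation for the harmonic oscillator solution.\n# Assumes output, k, and m are defined as above; x_sol and v_sol are new names.\nx_sol = output[:, 0]\nv_sol = output[:, 1]\nenergy = 0.5 * m * v_sol**2 + 0.5 * k * x_sol**2\nprint(energy.min(), energy.max())   # both should be very close to 1.0\n```\n\n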
To plot the position versus time and velocity versus time, we can do\n\n\n```python\nplt.clf()\nplt.plot(t, output[:,0])\nplt.plot(t, output[:,1])\nplt.legend(['x', 'v'])\n```\n\nThus the position and velocity both undergo sinusoidal oscillations, $\\pi/2$ out of phase, as we expect for this simple harmonic oscillator.\n", "meta": {"hexsha": "ffab59b8981c3911e30eb0f845500c592a087f12", "size": 128660, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "bootcamp/Class5.ipynb", "max_stars_repo_name": "bvillasen/lamat2020", "max_stars_repo_head_hexsha": "7a8a792f47bfed7512679aa3f24c110afb62349f", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "bootcamp/Class5.ipynb", "max_issues_repo_name": "bvillasen/lamat2020", "max_issues_repo_head_hexsha": "7a8a792f47bfed7512679aa3f24c110afb62349f", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "bootcamp/Class5.ipynb", "max_forks_repo_name": "bvillasen/lamat2020", "max_forks_repo_head_hexsha": "7a8a792f47bfed7512679aa3f24c110afb62349f", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 115.7014388489, "max_line_length": 35888, "alphanum_fraction": 0.839188559, "converted": true, "num_tokens": 8635, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9263037343628703, "lm_q2_score": 0.8933094145755219, "lm_q1q2_score": 0.8274758466628154}} {"text": "# 15.5. A bit of number theory with SymPy\n\nhttps://ipython-books.github.io/155-a-bit-of-number-theory-with-sympy/\n\n## Ref\n\n* Undergraduate level: Elementary Number Theory, Gareth A. Jones, Josephine M. 
Jones, Springer, (1998)\n* Graduate level: A Classical Introduction to Modern Number Theory, Kenneth Ireland, Michael Rosen, Springer, (1982)\n* SymPy's number-theory module, available at http://docs.sympy.org/latest/modules/ntheory.html\n* The Chinese Remainder Theorem on Wikipedia, at https://en.wikipedia.org/wiki/Chinese_remainder_theorem\n* Applications of the Chinese Remainder Theorem, given at http://mathoverflow.net/questions/10014/applications-of-the-chinese-remainder-theorem\n* Number theory lectures on Awesome Math, at https://github.com/rossant/awesome-math/#number-theory\n\n\n```python\nfrom sympy import *\nimport sympy.ntheory as nt\ninit_printing()\n```\n\n\n```python\nnt.isprime(11), nt.isprime(121)\n```\n\n\n\n\n (True, False)\n\n\n\n\n```python\nnt.isprime(2017)\n```\n\n\n\n\n True\n\n\n\n\n```python\nnt.nextprime(2017)\n```\n\n\n```python\nnt.prime(1000)\n```\n\n\n```python\nnt.primepi(2017)\n```\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nx = np.arange(2, 10000)\nfig, ax = plt.subplots(1, 1, figsize=(6, 4))\nax.plot(x, list(map(nt.primepi, x)), '-k',\n label='$\\pi(x)$')\nax.plot(x, x / np.log(x), '--k',\n label='$x/\\log(x)$')\nax.legend(loc=2)\n```\n\n\n```python\nfactors = nt.factorint(2020)\n```\n\n\n```python\ntype(factors), factors.keys(), factors.values()\n```\n\n\n\n\n (dict, dict_keys([2, 5, 101]), dict_values([2, 1, 1]))\n\n\n\n\n```python\nn = 1\nfor k,v in factors.items():\n n *= k**v\nprint(n)\n```\n\n 2020\n\n\n\n```python\nnt.factorint(1998)\n```\n\n\n```python\n2 * 3**3 * 37\n```\n\n\n```python\nfrom sympy.ntheory.modular import solve_congruence\nsolve_congruence((1, 3), (2, 4), (3, 5))\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "3fe657c46b827fda5a303790ea3b4832724d758a", "size": 33343, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "chapter15_symbolic/05_number_theory.ipynb", "max_stars_repo_name": "wgong/cookbook-2nd-code", "max_stars_repo_head_hexsha": "8ca2e5b3c90fee6605f4155e6b9dfb783ce46807", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapter15_symbolic/05_number_theory.ipynb", "max_issues_repo_name": "wgong/cookbook-2nd-code", "max_issues_repo_head_hexsha": "8ca2e5b3c90fee6605f4155e6b9dfb783ce46807", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapter15_symbolic/05_number_theory.ipynb", "max_forks_repo_name": "wgong/cookbook-2nd-code", "max_forks_repo_head_hexsha": "8ca2e5b3c90fee6605f4155e6b9dfb783ce46807", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 91.3506849315, "max_line_length": 19796, "alphanum_fraction": 0.861410191, "converted": true, "num_tokens": 556, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9294404116305638, "lm_q2_score": 0.8902942159342104, "lm_q1q2_score": 0.8274754225302026}} {"text": "# Maximum likelihood Estimation (MLE)\nbased on http://python-for-signal-processing.blogspot.com/2012/10/maximum-likelihood-estimation-maximum.html\n## Simulate coin flipping\n- [Bernoulli distribution](https://en.wikipedia.org/wiki/Bernoulli_distribution) \nis the probability distribution of a random variable which takes the value 1 with probability $p$ and the value 0 with probability $q = 1 - p$\n- [scipy.stats.bernoulli](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.bernoulli.html)\n\n\n```python\nimport numpy as np\nfrom scipy.stats import bernoulli \n\nnp.random.seed(123456789)\n\np_true = 1/2 # this is the value we will try to estimate from the observed data\nfp = bernoulli(p_true)\n\ndef sample(n=10):\n \"\"\"\n simulate coin flipping\n \"\"\"\n return fp.rvs(n) # flip it n times\n\nxs = sample(100) # generate some samples\n```\n\n## Find maximum of Bernoulli distribution\nSingle experiment\n$$\\phi(x) = p ^ {x} * (1 - p) ^ { 1 - x }$$\nSeries of experiments\n$$\\mathcal{L}(p|x) = \\prod_{i=1}^{n} p^{x_{i}}*(p-1)^{1-x_{i}}$$\n### Hints\n- [sympy.diff()](http://docs.sympy.org/dev/modules/core.html#sympy.core.function.diff)\n- [sympy.expand()](http://docs.sympy.org/dev/modules/core.html#sympy.core.function.expand)\n- [sympy.expand_log()](http://docs.sympy.org/dev/modules/core.html#sympy.core.function.expand_log)\n- [sympy.solve()](http://docs.sympy.org/dev/modules/core.html#sympy.core.function.solve)\n- [sympy.symbols()](http://docs.sympy.org/dev/modules/core.html#symbols)\n- [sympy gotchas](http://docs.sympy.org/dev/tutorial/gotchas.html)\n\n\n```python\nimport sympy\nfrom sympy.abc import x\n\np = sympy.symbols('p', positive=True)\nphi = p ** x * (1 - p) ** (1 - x)\nL = np.prod([phi.subs(x, i) for i in xs]) # objective function to maximize\nlog_L = sympy.expand_log(sympy.log(L))\nsol = sympy.solve(sympy.diff(log_L, p), p)[0]\n```\n\n\n```python\nimport matplotlib.pyplot as plt\n\nx_space = np.linspace(1/100, 1, 100, endpoint=False)\n\nplt.plot(x_space,\n list(map(sympy.lambdify(p, log_L, 'numpy'), x_space)),\n sol,\n log_L.subs(p, sol),\n 'o',\n p_true,\n log_L.subs(p, p_true),\n 's',\n )\nplt.xlabel('$p$', fontsize=18)\nplt.ylabel('Likelihood', fontsize=18)\nplt.title('Estimate not equal to true value', fontsize=18)\nplt.grid(True)\nplt.show()\n```\n\n## Empirically examine the behavior of the maximum likelihood estimator \n- [evalf()](http://docs.sympy.org/dev/modules/core.html#module-sympy.core.evalf)\n\n\n```python\ndef estimator_gen(niter=10, ns=100):\n \"\"\"\n generate data to estimate distribution of maximum likelihood estimator'\n \"\"\"\n x = sympy.symbols('x', real=True)\n phi = p**x*(1-p)**(1-x)\n for i in range(niter):\n xs = sample(ns) # generate some samples from the experiment\n L = np.prod([phi.subs(x,i) for i in xs]) # objective function to maximize\n log_L = sympy.expand_log(sympy.log(L)) \n sol = sympy.solve(sympy.diff(log_L, p), p)[0]\n yield float(sol.evalf())\n \nentries = list(estimator_gen(100)) # this may take awhile, depending on how much data you want to generate\nplt.hist(entries) # histogram of maximum likelihood estimator\nplt.title('$\\mu={:3.3f},\\sigma={:3.3f}$'.format(np.mean(entries), np.std(entries)), fontsize=18)\nplt.show()\n```\n\n## Dynamic of MLE by length sample sequence\n\n\n```python\ndef estimator_dynamics(ns_space, num_tries = 20):\n for ns in ns_space:\n estimations = 
list(estimator_gen(num_tries, ns))\n yield np.mean(estimations), np.std(estimations)\n \nns_space = list(range(10, 100, 5))\nentries = list(estimator_dynamics(ns_space))\nentries_mean = list(map(lambda e: e[0], entries))\nentries_std = list(map(lambda e: e[1], entries))\n\nplt.errorbar(ns_space, entries_mean, entries_std, fmt='-o')\nplt.show()\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "4940682496d40fe493ef3f22d66b75e23e264263", "size": 46821, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "mle.ipynb", "max_stars_repo_name": "hyzhak/mle", "max_stars_repo_head_hexsha": "257d8046a950b7381052cc56d9931cf98aeb0a5c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2017-10-22T09:29:36.000Z", "max_stars_repo_stars_event_max_datetime": "2017-10-22T09:29:36.000Z", "max_issues_repo_path": "mle.ipynb", "max_issues_repo_name": "hyzhak/mle", "max_issues_repo_head_hexsha": "257d8046a950b7381052cc56d9931cf98aeb0a5c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "mle.ipynb", "max_forks_repo_name": "hyzhak/mle", "max_forks_repo_head_hexsha": "257d8046a950b7381052cc56d9931cf98aeb0a5c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2019-01-23T04:46:01.000Z", "max_forks_repo_forks_event_max_datetime": "2020-04-21T18:38:49.000Z", "avg_line_length": 199.2382978723, "max_line_length": 21420, "alphanum_fraction": 0.8948762308, "converted": true, "num_tokens": 1098, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9465966671870767, "lm_q2_score": 0.874077230244524, "lm_q1q2_score": 0.8273985930135774}} {"text": "# Lab #2: Basics + Loops, Conditionals & Functions \n\nThese exercises are meant to give you some practice with the concepts from Tutorials 2.1-2.3, as well as Tutorials 1.1/1.2 since we didn't get to the exercises last week. Be sure to use best practices, comment your code, and be mindful of your units.\n\n\n```python\n# Import any libraries here:\nimport math\nimport numpy as np\nimport matplotlib.pyplot as plt\n```\n\n***\n## Arithmetic & variables\n\n1. Use the quadratic equation, to find the roots of $5x^2 + 3x + 1$ by calculating the numerator and denominator separately. Round to 2 decimal places. (**Tip:** When parintheses in a big equation get a little hard to read, breaking it up like this is a good way to make code easier to read.)\n\n$$x = \\frac{-b \\pm \\sqrt{b^2 - 4ac}}{2a}$$\n\n\n```python\n# Your code here:\n\nimport math\na,b,c = 5,3,-1\n\nnum_plus = -b + math.sqrt(b**2 - 4 * a * c)\nnum_minus = -b - math.sqrt(b**2 - 4 * a * c)\ndenom = 2*a\n\nroot_plus = num_plus/denom\nroot_minus = num_minus/denom\n\nprint(round(root_plus,2), round(root_minus,2))\n```\n\n 0.24 -0.84\n\n\n2. Use the distance modulus, to find the distance to a star with apparent magnitude $m_v = 0.5$ and absolute magnitude $M_v = -5.85$.\n\n$$m_v - M_v = 5log(d/10 \\text{ pc})$$\n\n\n```python\n# Your code here:\n\nm, M = 0.5, -5.85\nd = 10**((m-M)/5)*10\n\nprint(d, 'pc')\n```\n\n 186.20871366628677 pc\n\n\n***\n## User input & data types\n\nPrompt the user for their age and calculate the percent of their life they've completed assuming they live to 80 years old.\n\n\n```python\n# Your code here:\n\nage = input('How old are you in years? ')\n\nprint((int(age)/80.0)*100.0, '%')\n```\n\n How old are you in years? 
21\n 26.25 %\n\n\n***\n## Indexing\n\n1. Print the strings \"pneumono,\" \"ultra,\" \"volcano,\" and \"coniosis\" by slicing the word \"pneumonoultramicroscopicsilicovolcanoconiosis\".\n\n\n```python\n# Your code here:\n\nstring = \"pneumonoultramicroscopicsilicovolcanoconiosis\"\n\nfourth,ninth,fifteenth = string[3], string[9], string[15]\nindex_20, index_3, index_12 = string[20], string[3], string[12]\n\nprint(\"Fourth letter:\", fourth)\nprint(\"Ninth letter:\", ninth)\nprint(\"Fifteenth letter:\", fifteenth)\n\nprint(\"Third index:\", index_3)\nprint(\"Twelfth index:\", index_12)\nprint(\"Twentieth index:\", index_20)\n```\n\n Fourth letter: u\n Ninth letter: l\n Fifteenth letter: c\n Third index: u\n Twelfth index: a\n Twentieth index: o\n\n\n2. Print the 4th, 9th, and 15th letter and the letter at the 20th, 3rd, and 12th indices of the word from part one by indexing.\n\n\n```python\n# Your code here:\n\nstring = \"pneumonoultramicroscopicsilicovolcanoconiosis\"\n\npneumono = string[:8]\nultra = string[8:13]\nmicroscopic = string[13:24]\nvolcanoconiosis = string[-15:]\n\nprint(pneumono, ultra, microscopic, volcanoconiosis)\n```\n\n pneumono ultra microscopic volcanoconiosis\n\n\n***\n## `numpy` arrays\n\nUse the Wein displacement law to calculate the temperatures of blackbodies with 20 peak wavelengths between 300nm and 700nm using arrays.\n\n\\begin{equation}\n \\lambda_{peak}T = 0.29 \\text{ cm K}\n\\end{equation}\n\n\n```python\n# Your code here:\n\nlambdas = np.linspace(300,700,20)\nlambdas_cm = lambdas*1E-7\n\ntemps = 0.29 / lambdas_cm\n\nprint(temps)\n```\n\n [9666.66666667 9032.78688525 8476.92307692 7985.50724638 7547.94520548\n 7155.84415584 6802.4691358 6482.35294118 6191.01123596 5924.7311828\n 5680.41237113 5455.44554455 5247.61904762 5055.04587156 4876.10619469\n 4709.4017094 4553.71900826 4408. 4271.31782946 4142.85714286]\n\n\n***\n## Conditional statements\n\nWrite a program that has the user input a wavelength or frequency and returns (a) the photon energy, and (b) the part of the electromagnetic spectrum in which a photon of that energy falls.\n\n$$E = h\\nu \\quad\\text{&}\\quad \\lambda\\nu = c$$\n\n\n```python\n# Your code here:\n\nwav_or_freq = input(\"Would you like to enter a wavelength or a frequency? Enter w for wavelength, f for frequency: \")\nval = float(input(\"Enter the value in centimeters for wavelength, in Hertz for frequency: \"))\n\nif wav_or_freq == 'w':\n val = val #cm\n energy = 4.13E-15*(3*10**10)/(val) #eV\nelif wav_or_freq == 'f':\n energy = 4.13E-15*val #eV\n val = (3*10**10)/(val) #cm\n\nif val < 1E-9:\n print(\"Gamma; energy = \", energy, \"eV\")\nelif val < 1.0E-6 and val > 1.0E-9:\n print(\"X-ray; energy = \", energy, \"eV\")\nelif val < 3.5E-5 and val > 1.0E-6:\n print(\"UV; energy = \", energy, \"eV\")\nelif val < 7.5E-5 and val > 3.5E-5:\n print(\"Visible; energy = \", energy, \"eV\")\nelif val < 0.5 and val > 7.5E-5:\n print(\"IR; energy = \", energy, \"eV\")\nelse:\n print(\"Radio; energy = \", energy, \"eV\")\n\n```\n\n Would you like to enter a wavelength or a frequency? Enter w for wavelength, f for frequency: w\n Enter the value in centimeters for wavelength, in Hertz for frequency: 1\n Radio; energy = 0.0001239 eV\n\n\n***\n## Loops and plotting\n\n1. 
Add together every third number in the list [155,2,54,34,5,16,7,38,26,10] and print the result.\n\n\n```python\n# Your code here:\n\nlst = [155,2,54,34,5,16,7,38,26,10]\n\nsummation = 0\nfor n,x in enumerate(lst):\n    if n % 3 == 0:\n        summation += x\n    \nprint(summation)\n```\n\n    206\n\n\n2. Plot the sine and cosine functions on separate sides of the y-axis.\n\n\n```python\n# Your code here:\n\nx = np.linspace(-2*np.pi,2*np.pi)\nsine = np.sin(x)\ncosine = np.cos(x)\n\nplt.plot(x,sine)\nplt.plot(x,cosine)\n```\n\n3. The rules for Pin-Pon are as follows:\n\n    - Start counting numbers, starting from 1. \n    - Whenever the next number's a multiple of 3 (3, 6, 9,\u2026), replace the actual number with, \"Pin.\" \n    - Whenever the next number's a multiple of 5, replace it with the word, \"Pon.\"\n    - Whenever the number's a multiple of both 3 and 5, say, \"Pin Pon.\"\n    - Keep going until someone messes up.\n\nWrite a program that prints the results of the game for 1 through 100.\n\n\n```python\n# Your code here:\n\nfor n in range(1,101):\n    if (n % 3 == 0) and (n % 5 == 0):\n        print(\"Pin Pon\")\n    elif n % 5 == 0:\n        print(\"Pon\")\n    elif n % 3 == 0:\n        print(\"Pin\")\n    else:\n        print(n)\n```\n\n***\n## Putting it all together\n\nThese are meant to use all of your new skills at once. That being said, you may have noticed that there's no \"Functions\" section above --- once you've completed each of the tasks below, please turn them into functions, comment them well, and add them to a Python script which will serve as a library for the future. \n\n1. (From Lab 1): While DMS (degrees, minutes, seconds) and HMS (hours, minutes, seconds) formats for celestial coordinates have their place (e.g. in every database ever), having them in decimal degrees is often more convenient for calculations. Write two scripts: one that allows the user to enter in a DMS coordinate and prints it in decimal degrees, and another for HMS coordinates.\n\n\n```python\n# Your code here:\n\n# DMS\ndms = input(\"Enter D:M:S coordinates, separated by colons: \")\ndms = dms.split(\":\")\ndegs, mins, secs = dms\ndecdeg = float(degs) + float(mins)/60.0 + float(secs)/3600.0\n\nprint(\"Decimal degrees:\", decdeg)\n\n# HMS\nhms = input(\"Enter H:M:S coordinates, separated by colons: \")\nhms = hms.split(\":\")\nhrs, mins, secs = hms\ndecdeg = (float(hrs) + float(mins)/60.0 + float(secs)/3600.0) * 15\n\nprint(\"Decimal degrees:\", decdeg)\n```\n\n2. In my research on X-ray binaries, I utilize the Novikov-Thorne thin accretion disk approximation for black holes, which relates the temperature of the accretion disk to the mass of the accretor. Typical disk temperatures range between 0.5 and 1.0 keV. Using this relationship, calculate the mass of black holes (a) within this range, (b) in the \"interesting\" regime of 1.0 to 5.0 keV, and (c) in the \"crazy\" range of >5.0 keV. Plot your results, making sure the three regimes are clearly labeled in your plot. (P.S. 
-- X-ray astronomers often talk about temperatures in terms of energy rather than Kelvin, so keep that in mind with this equation.)\n\n$$T_{eff} \\text{ [keV]} \\sim \\Bigg(\\frac{10M_\\odot}{M}\\Bigg)^{1/4} \\text{ keV}$$\n\n\n```python\n# Your code here:\n\ntemps_normal = np.linspace(0.5,1.0,100)\ntemps_interesting = np.linspace(1.0,5.0,100)\ntemps_crazy = np.linspace(5.0,6.0,100)\n\n'''\nInput: temp -- keV\nOutput: mass -- solar masses\n'''\ndef mass(temp):\n return (10.0/temp**4)**(-1.0)\n\nmass_normal = mass(temps_normal)\nmass_interesting = mass(temps_interesting)\nmass_crazy = mass(temps_crazy)\n\nplt.plot(temps_normal,mass_normal,label=\"Normal\")\nplt.plot(temps_interesting,mass_interesting,label=\"Interesting\")\nplt.plot(temps_crazy,mass_crazy,label=\"Crazy\")\nplt.xlabel(\"Disk temperature [keV]\")\nplt.ylabel(r\"Mass of black hole $M_\\odot$\")\nplt.title(\"Novikov-Thorne Thin Accretion Disk Relationship\")\nplt.legend()\nplt.show()\nplt.close()\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "18fd83b848cf55b3a5aa61bf1cb942bd72bfae4e", "size": 64872, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "files/astr211_lab2-exercises-KEY.ipynb", "max_stars_repo_name": "mvtea/mvtea.github.io", "max_stars_repo_head_hexsha": "91cb2558e570bba35e2f0718c4c658cd7317bd5d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "files/astr211_lab2-exercises-KEY.ipynb", "max_issues_repo_name": "mvtea/mvtea.github.io", "max_issues_repo_head_hexsha": "91cb2558e570bba35e2f0718c4c658cd7317bd5d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "files/astr211_lab2-exercises-KEY.ipynb", "max_forks_repo_name": "mvtea/mvtea.github.io", "max_forks_repo_head_hexsha": "91cb2558e570bba35e2f0718c4c658cd7317bd5d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 120.1333333333, "max_line_length": 29660, "alphanum_fraction": 0.8687106918, "converted": true, "num_tokens": 2721, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9465966671870767, "lm_q2_score": 0.8740772269642949, "lm_q1q2_score": 0.8273985899085236}} {"text": "# PCA\n\n\n\n```python\nimport pandas\n# For lots of great things.\nimport numpy as np\n# To make our plots.\nimport matplotlib.pyplot as plt\n%matplotlib inline\n# Because sympy and LaTeX make\n# everything look wonderful!\nfrom sympy import *\ninit_printing(use_latex=True)\nfrom IPython.display import display\n# We will use this to check our implementation...\nfrom sklearn.decomposition import PCA\nimport keras\nfrom sklearn.preprocessing import StandardScaler\n```\n\n\n```python\n## load in data and split into x train and y train\n## data = np.array(pandas.read_csv(\"./comp_new_trainingdata.csv\", header=0))\ndata = np.array(pandas.read_csv(\"./training_noavg.csv\", header=0))\n## bring in loc 0 -1 test data for PCA\ndata1 = np.array(pandas.read_csv(\"./test1.csv\", header=0))\ndata2 = np.array(pandas.read_csv(\"./test2.csv\", header=0))\ndata3 = np.array(pandas.read_csv(\"./test3.csv\", header=0))\n## Have to drop all teh rows that have nan values because they will not help with net\n## clean out rows with nan values\ndata = data[~np.isnan(data).any(axis=1)]\ndata1 = data1[~np.isnan(data1).any(axis=1)]\ndata2 = data2[~np.isnan(data2).any(axis=1)]\ndata3 = data3[~np.isnan(data3).any(axis=1)]\n\nprint(data[:8])\nprint(data.shape)\n\ndata = np.vstack((data,data1,data2,data3))\nprint(data[:8])\ndata.shape\n```\n\n\n```python\n# vectors AND class labels...\nX = data[:,0:8] # 0 thru 30\nY = data[:,8] # 30\n\nscaler = StandardScaler()\n# standardize X .. will mean center data\nX = scaler.fit_transform(X)\n\n# Pretty-print with display()!\ndisplay(X.shape)\ndisplay(Y.shape)\ndisplay(Matrix(np.unique(Y)).T)\ndisplay(X[0:8])\n```\n\n\n```python\nU,S,V = np.linalg.svd(X,full_matrices=True)\n\n# Percent variance accounted for\nplt.plot(100.0*S/np.sum(S))\nplt.ylabel('% Var')\nplt.xlabel('Singular Value')\nplt.show()\n```\n\n\n```python\n# Variance accounted for in the first two principal components\n100.0*(S[0]+S[1])/np.sum(S)\n```\n\n\n```python\n# Scale the singular vectors, resulting in a rotated form of our mean-centered data\nD = np.zeros([X.shape[0],X.shape[1]])\nnp.fill_diagonal(D,S)\nXrotated = np.dot(U,D)\n# Extract just the first two principal components!\nPCs = Xrotated[:,0:2]\nPCs.shape\n```\n\n\n```python\n# The x and y values come from the two\n# Principal Components and the colors for\n# each point are selected based on the\n# corresponding class for each point...\nplt.scatter(PCs[:,0],PCs[:,1],\ncolor=[['red','green','blue','orange'][i] for i in Y.astype(int)])\nplt.xlabel(\"PC1\")\nplt.ylabel(\"PC2\")\nplt.show()\n```\n\nThe data suggest that we have some clear descision boundries. The orange represents the testing dat from all three locations, we can see that it fits the class groupings. A simple MLP with linear activation functions will not work for our data, we should use a RELU or sigmoid instead along with a large hidden layer. 
It remains to be seen how well the network will be able to generalize.\n", "meta": {"hexsha": "123434b29481ce681ec0f01dd3095167d06a8959", "size": 70120, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "PCA.ipynb", "max_stars_repo_name": "holypolarpanda7/S19-team2-project", "max_stars_repo_head_hexsha": "09b51f07849e3288dfa4ba91cf5d8d13909e35e2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "PCA.ipynb", "max_issues_repo_name": "holypolarpanda7/S19-team2-project", "max_issues_repo_head_hexsha": "09b51f07849e3288dfa4ba91cf5d8d13909e35e2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "PCA.ipynb", "max_forks_repo_name": "holypolarpanda7/S19-team2-project", "max_forks_repo_head_hexsha": "09b51f07849e3288dfa4ba91cf5d8d13909e35e2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 215.0920245399, "max_line_length": 39956, "alphanum_fraction": 0.9045350827, "converted": true, "num_tokens": 765, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9173026618464796, "lm_q2_score": 0.9019206857566127, "lm_q1q2_score": 0.827334245818943}} {"text": "# Basics of Hafnians and Loop Hafnians\n*Author: Nicol\u00e1s Quesada*\n\nIn the [background section](../hafnian.html) of the The Walrus documentation, some basic ideas related to (loop) hafnians were introduced. This tutorial is a computational exploration of the same ideas.\n\n\n```python\nfrom thewalrus.reference import hafnian as haf_ref\nfrom thewalrus import hafnian\nimport numpy as np\nimport matplotlib.pyplot as plt\n%config InlineBackend.figure_formats=['svg']\n```\n\n## A simple loopless graph and the hafnian\n\n\nLet's consider the following graph\n\n\n\nwith adjacency matrix\n\n\n```python\nA = np.array([[0,0,0,1,0,0],\n [0,0,0,1,1,0],\n [0,0,0,1,1,1],\n [1,1,1,0,0,0],\n [0,1,1,0,0,0],\n [0,0,1,0,0,0]])\n```\n\nIt is easy to verify by inspection that the graph in Fig. 1 has only one perfect matching given by the edges (1,4)(2,5)(3,6).\nWe can verify this by calculating the hafnian of the adjacency matrix $A$\n\n\n```python\nhaf_ref(A) # Using the reference implementation\n```\n\n\n\n\n 1\n\n\n\n\n```python\nhafnian(A) # Using the default recursive method\n```\n\n\n\n\n 1\n\n\n\nLet's see what happens if we rescale the adjacency matrix by a scalar $a$. We'll use the [SymPy](https://sympy.org) library for symbolic manipulation:\n\n\n```python\nfrom sympy import symbols\n```\n\n\n```python\na = symbols(\"a\")\nhaf_ref(a*A)\n```\n\n\n\n\n a**3\n\n\n\nThe example above shows that one can use the reference implementation not only with numpy arrays but also with symbolic sympy expressions.\n\n## A graph with loops and the loop hafnian\n\n\nNow let's consider a graph with loops:\n\n\n\n\nThe adjacency matrix is now\n\n\n```python\nAt = np.array([[1,0,0,1,0,0],\n [0,0,0,1,1,0],\n [0,0,0,1,1,1],\n [1,1,1,0,0,0],\n [0,1,1,0,1,0],\n [0,0,1,0,0,0]])\n```\n\nNote that now the adjacency matrix has non zero elements in the diagonal.\nIt is also strightforward to see that the graph in Fig. 
2 has two perfect matchings, namely, (1,4)(2,5)(3,6) and (1,1)(5,5)(2,4)(3,6)\n\n\n```python\nhaf_ref(At, loop=True) # Using the reference implementation\n```\n\n\n\n\n 2\n\n\n\n\n```python\nhafnian(At, loop=True) # Using the default recursive method\n```\n\n\n\n\n 2.0000000000000107\n\n\n\nWe can also use the loop hafnian to count the number of matching (perfect or otherwise)\nby taking the adjacency matrix of the loop less graph, putting ones on its diagonal and calculating the loop hafnian of the resulting matrix. For the graph in Fig. 1 we find\n\n\n```python\nhaf_ref(A+np.diag([1,1,1,1,1,1]), loop=True)\n```\n\n\n\n\n 15\n\n\n", "meta": {"hexsha": "c7e62b46a4a3e3290c6e589f09d8fecd8a824017", "size": 6125, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "docs/gallery/basics.ipynb", "max_stars_repo_name": "BastianZim/thewalrus", "max_stars_repo_head_hexsha": "28cba83b457fc068861c542f0e86d8c9d198b60b", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 60, "max_stars_repo_stars_event_min_datetime": "2019-08-13T18:28:00.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-07T17:37:10.000Z", "max_issues_repo_path": "docs/gallery/basics.ipynb", "max_issues_repo_name": "BastianZim/thewalrus", "max_issues_repo_head_hexsha": "28cba83b457fc068861c542f0e86d8c9d198b60b", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 280, "max_issues_repo_issues_event_min_datetime": "2019-08-19T00:28:31.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-28T19:25:12.000Z", "max_forks_repo_path": "docs/gallery/basics.ipynb", "max_forks_repo_name": "BastianZim/thewalrus", "max_forks_repo_head_hexsha": "28cba83b457fc068861c542f0e86d8c9d198b60b", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 36, "max_forks_repo_forks_event_min_datetime": "2019-09-18T18:23:28.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-20T07:01:29.000Z", "avg_line_length": 21.9534050179, "max_line_length": 208, "alphanum_fraction": 0.5105306122, "converted": true, "num_tokens": 768, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9173026618464795, "lm_q2_score": 0.9019206837793827, "lm_q1q2_score": 0.8273342440052246}} {"text": "```python\nimport numpy as np\nimport sympy as sp\nimport matplotlib.pyplot as plt\nimport Euler as p1\nimport nonlinear_ODE as p2\n```\n\n# Exercise C.2 Solving a nonlinear ODE\nSolves the ODE problem: $$u^\\prime = u^q,\\ u(0) = 1,\\ t\\epsilon [0,T]$$\n\n$q=1,\\ \\Delta t=0.1:$\n\n\n```python\np2.graph(1, 0.1, 1)\n```\n\n$q=1,\\ \\Delta t=0.01:$\n\n\n```python\np2.graph(1, 0.01, 1)\n```\n\n$q=2,\\ \\Delta t=0.1:$\n\n\n```python\np2.graph(2, 0.1, 1)\n```\n\n$q=12,\\ \\Delta t=0.01:$\n\n\n```python\np2.graph(2, 0.01, 1)\n```\n\n$q=3,\\ \\Delta t=0.1:$\n\n\n```python\np2.graph(3, 0.1, 1)\n```\n\n\n```python\np2.graph(3, 0.01, 1)\n```\n\nThe Euler Method is most accurate for small $\\Delta t$, which corresponds to a larger number of calculations, and small $q$.\n", "meta": {"hexsha": "717365e48d7d220bdb896e7c06833245ed26f687", "size": 143039, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "temporary_for_nonlinear_ODE.ipynb", "max_stars_repo_name": "chapman-phys227-2016s/cw-6-classwork-team", "max_stars_repo_head_hexsha": "ab705f3d4c2577eafa3b521cf51bf903c48b1ff6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "temporary_for_nonlinear_ODE.ipynb", "max_issues_repo_name": "chapman-phys227-2016s/cw-6-classwork-team", "max_issues_repo_head_hexsha": "ab705f3d4c2577eafa3b521cf51bf903c48b1ff6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "temporary_for_nonlinear_ODE.ipynb", "max_forks_repo_name": "chapman-phys227-2016s/cw-6-classwork-team", "max_forks_repo_head_hexsha": "ab705f3d4c2577eafa3b521cf51bf903c48b1ff6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 641.4304932735, "max_line_length": 24416, "alphanum_fraction": 0.9409881221, "converted": true, "num_tokens": 287, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9173026573249612, "lm_q2_score": 0.9019206778476933, "lm_q1q2_score": 0.8273342344860193}} {"text": "# Generating the cluster number density\n\nWe will compute the cluster number density, $n(s, p)$, from sets of percolating two-dimensional clusters. As the cluster number density is not known analytically in all but the simplest systems, i.e., the infinite dimensional and the one-dimensional system, we will estimate it numerically. 
This is done by\n\\begin{align}\n n(s, p; L) \\approx \\frac{N_s}{ML^d},\n\\end{align}\nwhere $s$ is the size of the cluster, $p$ the probability for a site to be set in the system, $L$ the length of a side in the system (assuming equal lengths for all sides), $d$ the dimension of the problem ($L^d$ is the volume), $N_s$ the number of clusters of size $s$ and $M$ the number of simulations performed.\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport scipy.ndimage\nimport skimage\nimport tqdm\nimport sklearn.linear_model\nfrom uncertainties import ufloat\n\nsns.set(color_codes=True)\n```\n\n\n```python\ndef is_percolating(prop, num_rows, num_cols):\n min_row, min_col, max_row, max_col = prop.bbox\n\n return max_row - min_row == num_rows or max_col - min_col == num_cols\n```\n\n\n```python\ndef compute_cluster_area(system, p, remove_percolating_cluster=True):\n mat = system < p\n # Label and count the number of connected clusters\n labels, num_features = scipy.ndimage.measurements.label(mat)\n s_list = skimage.measure.regionprops(labels)\n\n new_s_list = []\n if remove_percolating_cluster:\n for s in s_list:\n if is_percolating(s, *system.shape):\n continue\n new_s_list.append(s)\n else:\n new_s_list = s_list\n \n area = list(map(lambda prop: prop.area, new_s_list))\n\n return area\n```\n\nExample code to plot a specific prop in a system.\n\n```python\nbox = s[1].bbox\nplt.matshow(labels[box[0]:box[2], box[1]:box[3]], cmap=\"hsv\")\nplt.show()\n```\n\n\n```python\ndef compute_cluster_number_density(L, M, p, a=1.2):\n area = []\n for i in range(M):\n z = np.random.rand(L, L)\n area.extend(compute_cluster_area(z, p))\n\n n, s = np.histogram(area, L ** 2)\n\n # Logarithmic binning\n logamax = np.ceil(np.log(max(s)) / np.log(a))\n bins = a ** np.arange(0, logamax, 1)\n\n nl, _ = np.histogram(area, bins)\n ds = np.diff(bins)\n sl = (bins[1:] + bins[:-1]) * 0.5\n nsl = nl / (M * L ** 2 * ds)\n\n # Remove bins where there are no observations\n mask = np.abs(nsl) > 1e-14\n\n return sl[mask], nsl[mask]\n```\n\n\n```python\np_c = 0.59275\n```\n\nBelow we plot the cluster number density as $p \\to p_c = 0.59275$ from below.\n\n\n```python\nfig = plt.figure(figsize=(14, 10))\n\nfor p in tqdm.tqdm_notebook(np.linspace(0.3, p_c, 6)):\n plt.loglog(\n *compute_cluster_number_density(200, 200, p, a=1.2),\n label=rf\"$p = {p:.5f}$\"\n )\n\nplt.legend(loc=\"best\")\nplt.title(r\"Cluster number density for $p \\to p_c$ from below\")\nplt.xlabel(r\"$s$\")\nplt.ylabel(r\"$n(s, p)$\")\nplt.show()\n```\n\nIn this figure we can see how the cluster number density follows a seemingly straight line for a while, before dropping of sharply when $p$ is a little off from the critical percolation probability. This shows how the cluster number density at the critical percolation probability follows a power law as the axes are logarithmic.\n\nHere we repeat the experiment, but for $p \\to p_c$ from above.\n\n\n```python\nfig = plt.figure(figsize=(14, 10))\n\nfor p in tqdm.tqdm_notebook(np.linspace(p_c, 0.7, 6)):\n plt.loglog(\n *compute_cluster_number_density(200, 200, p, a=1.2),\n label=rf\"$p = {p:.5f}$\"\n )\n\nplt.legend()\nplt.title(r\"Cluster number density for $p \\to p_c$ from above\")\nplt.xlabel(r\"$s$\")\nplt.ylabel(r\"$n(s, p)$\")\nplt.show()\n```\n\nWhen $p \\to p_c$ from above, we see that all cluster number densities behave in a power law fashion. 
Why is that?\n\n## The cluster number density at the critical percolation probability\n\nFor $p = p_c = 0.59275$, we explore how the cluster number density changes for a system of $L = 2^k$ for $k = \\{4, \\dots, 9\\}$.\nWe wish to show how the cluster number density deviates from the power law behavior, when $p = p_c$. We expect that\n\\begin{align}\n n(s, p_c; L) = F\\left(\\frac{s}{s_{\\xi}}\\right) s^{-\\tau} \\propto s^{-\\tau},\n\\end{align}\nwhere $F$ is some function that is constant when $p = p_c$.\nBy taking the logarithm on both sides, we get an expression for $\\tau$.\n\\begin{gather}\n \\log\\left[n(s, p_c; L)\\right]\n =\n -\\tau \\log\\left[ s \\right]\n + \\log\\left[ F\\left(\\frac{s}{s_{\\xi}}\\right) \\right].\n\\end{gather}\nWe use linear regression to get an estimate for $-\\tau$. We also compute $-\\tau$ manually by computing the decrease between two values of $s$, for all differences. We find our best estimate by computing the mean of $-\\tau$.\n\n\n```python\nL_arr = 2 ** np.arange(4, 10)\ntrue_tau = 187 / 91\n```\n\n\n```python\nfig = plt.figure(figsize=(14, 10))\na = 1.2\ntau_list = []\n\nfor L in tqdm.tqdm_notebook(L_arr):\n sl, nsl = compute_cluster_number_density(L, 500, p=p_c, a=a)\n\n log_sl = np.log(sl) / np.log(a)\n log_nsl = np.log(nsl) / np.log(a)\n \n plt.loglog(sl, nsl, label=rf\"$L = {L}$\")\n clf = sklearn.linear_model.LinearRegression(\n fit_intercept=True\n ).fit(log_sl[:, None], log_nsl[:, None])\n\n tau = clf.coef_[0, 0]\n C = clf.intercept_[0]\n\n print(f\"For L = {L}: -tau = {tau:.4f}\\tC = {C:.4f}\")\n\n tau_arr = np.zeros(int(len(log_nsl) * (len(log_nsl) - 1) / 2))\n index = 0\n for i in range(len(log_nsl)):\n for j in range(i + 1, len(log_nsl)):\n tau = (log_nsl[j] - log_nsl[i]) / (log_sl[j] - log_sl[i])\n tau_arr[index] = tau\n index += 1\n\n tau_list.append(tau_arr.copy())\n\nplt.title(r\"Cluster number density as a function of $s$ in order to estimate $\\tau$\")\nplt.legend()\nplt.xlabel(r\"$s$\")\nplt.ylabel(r\"$n(s, p)$\")\nplt.show()\n```\n\nBelow we display the mean value for $-\\tau$ with its standard deviation for each size $L$.\n\n\n```python\nfor L, tau in zip(L_arr, tau_list):\n print(f\"For L = {L}: tau = {np.mean(tau):.2f} +/- {np.std(tau):.2f}\")\n```\n\n For L = 16: tau = -1.83 +/- 0.57\n For L = 32: tau = -1.93 +/- 0.68\n For L = 64: tau = -1.89 +/- 0.58\n For L = 128: tau = -1.86 +/- 0.33\n For L = 256: tau = -1.88 +/- 0.28\n For L = 512: tau = -1.90 +/- 0.27\n\n\nWe see that this is quite far off from the true value $-\\tau = 187 / 91 \\approx 2.05$. Below we output the error in $-\\tau$.\n\n\n```python\nunc_tau = [ufloat(np.mean(tau), np.std(tau)) for tau in tau_list]\n\nfor L, tau in zip(L_arr, unc_tau):\n print(f\"For L = {L}: error tau = {true_tau + tau}\")\n```\n\n For L = 16: error tau = 0.2+/-0.6\n For L = 32: error tau = 0.1+/-0.7\n For L = 64: error tau = 0.2+/-0.6\n For L = 128: error tau = 0.20+/-0.33\n For L = 256: error tau = 0.18+/-0.28\n For L = 512: error tau = 0.15+/-0.27\n\n\nWe can see that our estimate for $-\\tau$ is quite far off. 
This indicates that using linear regression on the slope of the logarithmically binned cluster number density against the cluster size is not very precise.\n\n## Measuring the characteristic cluster size $s_\\xi$\n\nThe scaling of the characteristic cluster size $s_{\\xi}$ is given by\n\\begin{align}\n s_{\\xi} \\propto |p - p_c|^{-1 / \\sigma}.\n\\end{align}\nWe _define_ the characteristic cluster size by the point where\n\\begin{gather}\n \\frac{n(s, p)}{n(s, p_c)} = F\\left(\\frac{s}{s_{\\xi}}\\right) = \\frac{1}{2},\n \\\\\n \\implies\n n(s, p) = \\frac{1}{2} n(s, p_c).\n\\end{gather}\nThe equality is hard to match exactly, so numerically we define it to be\n\\begin{align}\n n(s, p) \\leq \\frac{1}{2} n(s, p_c).\n\\end{align}\nThe value of $s$ that first satisfies this equation is then our estimate of the characteristic cluster size $s_{\\xi}$. We look at the characteristic cluster size for $p \\to p_c$ from below.\n\n\n```python\nL = 512\nM = 500\np_list = [0.45, 0.5, 0.54, 0.57, 0.58] # Numbers from book\n```\n\n\n```python\nfig = plt.figure(figsize=(14, 10))\na = 1.3\ns_xi_list = []\nsl_list = []\nnsl_list = []\n\nsl_c, nsl_c = compute_cluster_number_density(L, M, p=p_c, a=a)\nplt.loglog(sl_c, nsl_c, label=rf\"$p = {p_c}$\")\n\nfor p in tqdm.tqdm_notebook(p_list):\n sl, nsl = compute_cluster_number_density(L, M, p=p, a=a)\n sl_list.append(sl)\n nsl_list.append(nsl)\n\n plt.loglog(sl, nsl, label=rf\"$p = {p}$\")\n\n mask = nsl <= 0.5 * nsl_c[:len(nsl)]\n\n # If no points are below the cluster number density at p = p_c,\n # skip that index.\n if not np.any(mask):\n continue\n\n index = np.argmax(mask)\n\n s_xi = sl[index]\n s_xi_list.append(s_xi)\n\n plt.scatter(s_xi, nsl[index], label=(r\"$s_\\xi = {0:.2}$\").format(s_xi))\n\n\nplt.legend()\nplt.xlabel(r\"$s$\")\nplt.ylabel(r\"$n(s, p)$\")\nplt.title(r\"Estimating the characteristic cluster size\")\nplt.show()\n```\n\nIn this plot we have marked the characteristic cluster size $s_{\\xi}$ by dots. We now plot these as a function of the percolation probability $p$.\n\n\n```python\nfig = plt.figure(figsize=(14, 10))\n\nplt.plot(p_list, s_xi_list, \"-o\")\nplt.xlim(0.4, 0.7)\nplt.title(r\"The characteristic cluster size $s_{\\xi}$ as function of percolation probability $p$\")\nplt.xlabel(r\"$p$\")\nplt.ylabel(r\"$s_{\\xi}$\")\nplt.show()\n```\n\nIn this figure we see how the characteristic cluster size $s_{\\xi}$ diverges as $p \\to p_c$ from below, that is, the cluster number density follows the power law type behaviour for longer before dropping off towards zero. 
For the scaling\n\\begin{align}\n s_{\\xi} = C|p - p_c|^{-1 / \\sigma},\n\\end{align}\nwhere $C$ is a constant, we can find an expression for the exponent $\\sigma$ by taking the logarithm on both sides and rearranging the terms.\n\\begin{gather}\n \\log(s_{\\xi}) = \\log(C) - \\frac{1}{\\sigma}\\log\\left(|p - p_c|\\right).\n\\end{gather}\nWe use linear regression to find an expression for the coefficient $-\\sigma^{-1}$.\n\n\n```python\ns_xi_arr = np.log(np.array(s_xi_list))\np_min_pc = np.log(np.abs(p_c - np.array(p_list)))\n```\n\n\n```python\nclf = sklearn.linear_model.LinearRegression().fit(\n p_min_pc[:, None], s_xi_arr[:, None]\n)\n\ncoef = clf.coef_[0, 0]\nsigma = - 1 / coef\n\nprint(f\"sigma = {sigma}\")\n```\n\n sigma = 0.47692140021677076\n\n\nThe true value for $\\sigma$ in $d = 2$ dimensions is\n\\begin{align}\n \\sigma = \\frac{36}{91} \\approx 0.395.\n\\end{align}\nThe absolute difference between this value and the one found by us is computed below.\n\n\n```python\nsigma_true = 36 / 91\n\nprint(f\"Diff in sigma: {np.abs(sigma_true - sigma)}\")\n```\n\n Diff in sigma: 0.08131700461237518\n\n\n\n```python\nfig = plt.figure(figsize=(14, 10))\n\nfor i, p in enumerate(p_list):\n y_vals = nsl_list[i] * sl_list[i] ** true_tau\n x_vals = sl_list[i] * np.abs(p - p_c) ** (1 / sigma)\n\n plt.loglog(x_vals, y_vals, label=fr\"$p={p}$\")\n\nplt.legend(loc=\"best\")\nplt.title(r\"Data collapse for $n(s, p)$\")\nplt.xlabel(r\"$s |p - p_c|^{1/\\sigma}$\")\nplt.ylabel(r\"$n(s, p) s^{\\tau}$\")\nplt.show()\n```\n\nIn this figure we can see the data collapse of the cluster number density.\n\n## Mass scaling of percolation cluster\n\nWe now compute the mass of the percolating cluster $M(L)$ at $p = p_c$.\nWhen $p = p_c$ the characteristic length, $\\xi \\to \\infty$.\nThis means that the mass scales as\n\\begin{align}\n M(L) \\propto L^{D}.\n\\end{align}\nTaking the logarithm on both sides, we can find an estimate for $D$ using linear regression, viz.\n\\begin{align}\n \\log\\left(M(L)\\right)\n = \\log(C) + D\\log(L),\n\\end{align}\nwhere $C$ is a constant.\nWe compute the mass of the percolating cluster by counting the number of set sites contained in the cluster that spans the entire system length $L$.\n\n\n```python\ndef compute_mass_of_percolating_cluster(system):\n L = system.shape[0]\n\n # Label and count the number of connected clusters\n labels, num_features = scipy.ndimage.measurements.label(system)\n s_list = skimage.measure.regionprops(labels)\n \n for s in s_list:\n if is_percolating(s, *system.shape):\n # We look at the first percolating cluster that we find.\n # There might be more than one, but we ignore these.\n return s.area\n\n return 0\n```\n\n\n```python\nL_arr = 2 ** np.arange(4, 11)\nmass_arr = np.zeros(len(L_arr))\nM = 100\n```\n\nWe repeat the experiment $M$ times to make sure that we actually get a percolating cluster for all system sizes.\n\n\n```python\nfor i, L in tqdm.tqdm_notebook(enumerate(L_arr)):\n for _ in range(M):\n system = np.random.rand(L, L) <= p_c\n mass_arr[i] += compute_mass_of_percolating_cluster(system)\n\nmass_arr /= M\n```\n\n Widget Javascript not detected. 
It may not be installed or enabled properly.\n\n\n\n\n \n\n\nFitting the data to find $D$ and the intercept $C$.\n\n\n```python\nclf = sklearn.linear_model.LinearRegression().fit(\n np.log(L_arr)[:, None],\n np.log(mass_arr)[:, None]\n)\n\nD = clf.coef_[0, 0]\nprint(f\"D = {D}\")\n```\n\n D = 1.895808803079029\n\n\nThe true value of $D$ is given by\n\\begin{align}\n D = \\frac{91}{48} \\approx 1.896.\n\\end{align}\nThus, the absolute difference between the true value and our estimated value is given by.\n\n\n```python\nD_true = 91 / 48\n\nprint(f\"Diff: {np.abs(D_true - D)}\")\n```\n\n Diff: 2.453025430426692e-05\n\n\n\n```python\nfig = plt.figure(figsize=(14, 10))\n\nplt.loglog(L_arr, mass_arr, \"-o\", label=r\"Data\")\nplt.loglog(\n L_arr,\n np.exp(clf.intercept_[0]) * L_arr ** D,\n \"--\",\n label=r\"$M(L) = C L^{D}$\"\n)\n\nplt.title(r\"Mass scaling of the percolating cluster as a function of side length $L$\")\nplt.xlabel(r\"$L$\")\nplt.ylabel(r\"$M(L)$\")\nplt.legend()\nplt.show()\n```\n\nIn this figure we can see how the log-log plot of the mass and the system size follows a power law type behaviour. We have also plotted the theoretical line on top using our estimate for $D$.\n", "meta": {"hexsha": "6f7ba350bfb088f12a148a85ecee9be8d089edc4", "size": 574428, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "project-3/cluster-number-density.ipynb", "max_stars_repo_name": "Schoyen/FYS4460", "max_stars_repo_head_hexsha": "0c6ba1deefbfd5e9d1657910243afc2297c695a3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-08-29T16:29:18.000Z", "max_stars_repo_stars_event_max_datetime": "2019-08-29T16:29:18.000Z", "max_issues_repo_path": "project-3/cluster-number-density.ipynb", "max_issues_repo_name": "Schoyen/FYS4460", "max_issues_repo_head_hexsha": "0c6ba1deefbfd5e9d1657910243afc2297c695a3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "project-3/cluster-number-density.ipynb", "max_forks_repo_name": "Schoyen/FYS4460", "max_forks_repo_head_hexsha": "0c6ba1deefbfd5e9d1657910243afc2297c695a3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-05-27T14:01:36.000Z", "max_forks_repo_forks_event_max_datetime": "2020-05-27T14:01:36.000Z", "avg_line_length": 601.4952879581, "max_line_length": 113098, "alphanum_fraction": 0.9332240072, "converted": true, "num_tokens": 4172, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.90192067652954, "lm_q2_score": 0.917302657890151, "lm_q1q2_score": 0.8273342337866302}} {"text": "# 1. K-Armed Bandit Problem\nBandit problems are reinforcement learning (RL) problems in which there is only a single state in which multiple actions can potentially be taken. In the k-armed bandit problem, you are repeatedly faced with a choice among k different options/actions. After selecting an action, you receive a reward chosen from a stationary probability distribution that is dependent on the selected action.\n#### Objective: Maximize the expected total reward over some time period\n\nAlthough the rewards for actions are chosen from a probability distribution, each action has a mean reward value. We start out with estimates of the rewards for each action, but with more selections/experience, the estimates converge to the mean. 
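To see that convergence concretely, here is a tiny standalone sketch (illustrative only, separate from the experiment implemented below) of a running sample average approaching the true mean of a noisy reward source:\n\n\n```python\n# Illustration: the running average of noisy rewards drifts toward the true mean.\nimport numpy as np\n\ntrue_mean = 1.5\nsamples = np.random.normal(true_mean, 1.0, size=10000)\nrunning_avg = np.cumsum(samples) / np.arange(1, samples.size + 1)\nprint(running_avg[9], running_avg[99], running_avg[9999])   # approaches 1.5\n```\n\n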
If we have a 'way' to quantify the value of taking each action (the expected reward), then to achieve the objective, we would simply always take the action with the highest value. Mathematically speaking,\n\n$Q_t(a) = E[R_t | A_t = a]$\n\nThis says that the value of an arbitrary action **a** is the expected reward given that **a** is selected\n\n* If we keep estimates of the action values, then at each step, there is at least one action whose estimate is the greatest. These actions are called **greedy actions** and if selected, we are said to be **exploting** our current knowledge of the values of actions\n* If instaed we select a non-greedy action, then we are said to be **exploring** because it enables us to improve our estimates of the non-greedy action values\n\n## Sample-Average Action-Value Estimation\nA simple method of estimating the value of an action is by averaging the rewards previosly received from taking that\naction. i.e.\n\n$$Q_t(a) = \\frac{\\text{sum of rewards when a is taken prior to t}}{\\text{number of times a has been taken prior to t}}$$\n\nThe next step is then to use the estimates to select actions. The simplest rule is to select the action (or one of the actions) with the highest estimated values. This is called the **greedy action selection** method and is denoted:\n\n$A_t = argmax_a Q_t(a)$\n\nwhere $argmax_a$ denotes the value of a at which the expression is maximized\n* If multiple actions maximize the expression, then it is important that the tie is broken **arbitrarily**\n\nYou may have guessed that being greedy all the time is probably not the best way to go - there may be an unexplored action of higher value than our current greedy choice. An alternative is to be greedy most of the time, but every once in a while, with probability $\\epsilon$, select a random action. Methods with this action selection rule are called **$\\epsilon$-greedy methods**. This means that with probability $\\epsilon$ we select a random action and with probability $1-\\epsilon$ we select a greedy action.\n\n## Problem Description\nWe have a k-armed bandit problem with k = 10. The actual action values, $q_*(a)$ are selected according to a normal distribution with mean 0 and variance 1. When a learning method selects action $A_t$ at time t, the actual reward $R_t$ is selected from a normal distribution with mean $q_*(A_t)$ and variance 1. We'll measure the behavior as it improves over 1000 steps. This makes up one run. We'll execute 2000 independent runs to obtain the learning algorithm's average behavior.\n\n### Environment\n* Python 3.5\n* numpy\n* matplotlib\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n%matplotlib inline\n```\n\n\n```python\ndef sample_average(actions):\n \"\"\"\n Returns the action-value for each action using the sample-average method\n :param actions: actions[0] is a tuple for each action of the form (sum of rewards received, no. 
of times taken)\n \"\"\"\n results = [0.0 if actions[i][1] == 0 else actions[i][0] / float(actions[i][1]) for i in range(len(actions))]\n return results\n```\n\n\n```python\ndef get_reward(true_values, a):\n \"\"\"\n Returns the reward for selecting action a.\n Reward is selected around true_values[a] with unit variance (as in problem description)\n :param true_values: list of expected reward for each action\n :param a: index of action to return reward for\n \"\"\"\n reward = np.random.normal(true_values[a], size=1)[0]\n return reward\n```\n\n\n```python\ndef k_armed_bandit(k, epsilon, iterations):\n \"\"\"\n Performs a single run of the k-armed bandit experiment\n :param k: the number of arms\n :param epsilon: value of epsilon for epoch-greedy action selection\n :param iterations: number of steps in a single run\n \"\"\"\n # Randomly assign true values of reward for each action with mean 0 and variance 1\n true_values = np.random.normal(size=k)\n \n # actions[i] is the ith action\n # actions[i][0] is the sum of received rewards for taking action i\n # actions[i][1] is the number of times action i has been taken\n actions = [[0.0, 0] for _ in range(k)]\n \n # Store the rewards received for this experiment\n rewards = []\n \n # Track how often the optimal action was selected\n optimal = []\n optimal_action = true_values.argmax()\n \n for _ in range(iterations):\n prob = np.random.rand(1)\n \n if prob > epsilon:\n # Greedy (exploit current knowledge)\n action_values = np.array(sample_average(actions))\n \n # Break ties arbitrarily (reference: http://stackoverflow.com/questions/42071597/numpy-argmax-random-tie-breaking)\n a = np.random.choice(np.flatnonzero(action_values == action_values.max()))\n else:\n # Explore (take random action)\n a = np.random.randint(0, k)\n \n reward = get_reward(true_values, a) \n \n # Update statistics for executed action\n actions[a][0] += reward\n actions[a][1] += 1\n \n rewards.append(reward)\n optimal.append(1 if a == optimal_action else 0)\n \n return rewards, optimal\n```\n\n\n```python\ndef experiment(k, epsilon, iters, epochs):\n \"\"\"\n Runs the k-armed bandit experiment\n :param k: the number of arms\n :param epsilon: the value of epsilon for epoch-greedy action selection\n :param iters: the number of steps in a single run\n :param epochs: the number of runs to execute\n \"\"\"\n rewards = []\n optimal = []\n \n for i in range(epochs):\n r, o = k_armed_bandit(k, epsilon, iters)\n rewards.append(r)\n optimal.append(o)\n \n print('Experiment with \\u03b5 = {} completed.'.format(epsilon))\n \n # Compute the mean reward for each iteration\n r_means = np.mean(rewards, axis=0)\n o_means = np.mean(optimal, axis=0)\n \n return r_means, o_means\n```\n\n\n```python\nk = 10\niters = 1000\nruns = 2000\n\n# We experiment with values 0.01, 0.1 and 0 (always greedy)\nr_exp1, o_exp1 = experiment(k, 0, iters, runs)\nr_exp2, o_exp2 = experiment(k, 0.01, iters, runs)\nr_exp3, o_exp3 = experiment(k, 0.1, iters, runs)\n```\n\n Experiment with \u03b5 = 0 completed.\n Experiment with \u03b5 = 0.01 completed.\n Experiment with \u03b5 = 0.1 completed.\n\n\n\n```python\nx = range(iters)\nplt.plot(x, r_exp1, c='green', label='\\u03b5 = 0')\nplt.plot(x, r_exp2, c='red', label='\\u03b5 = 0.01')\nplt.plot(x, r_exp3, c='black', label='\\u03b5 = 0.1')\nplt.xlabel('Steps')\nplt.ylabel('Average reward')\nplt.legend()\nplt.show()\n```\n\n\n```python\nplt.plot(x, o_exp1, c='green', label='\\u03b5 = 0')\nplt.plot(x, o_exp2, c='red', label='\\u03b5 = 0.01')\nplt.plot(x, o_exp3, c='black', 
label='\\u03b5 = 0.1')\nplt.xlabel('Steps')\nplt.ylabel('% Optimal action')\nplt.legend()\nplt.show()\n```\n\nFrom these results we can see that the greedy method ($\\epsilon$ = 0) performs the worst. This is because it simply selects the same action each time (the first action that happens to give a positive reward - remember, all estimates are initialized to 0.0, so initially every action is equally likely to be selected). The other experiments involve exploration of varying degrees and so can be seen improving with time.\n\n### Incremental Implementation\nIn the code listing above, we computed the values of actions by summing rewards and then dividing by the number of times a particular action was taken, i.e.\n$Q_n = \\frac{R_1 + R_2 + ... + R_{n-1}}{n - 1}$\n\nAn obvious implementation would be to maintain a record of all the rewards and then perform this computation when needed. This can be memory intensive and is unnecessary. We can show that the computation can be performed incrementally as:\n$Q_{n+1} = Q_n + \\frac{1}{n}[R_n - Q_n]$\n\n#### Proof\n\\begin{align}\nQ_{n+1} & = \\frac{1}{n}\\sum_{i = 1}^n R_i \\\\\n& = \\frac{1}{n}\\left(R_n + \\sum_{i=1}^{n-1} R_i\\right) \\\\\n& = \\frac{1}{n}\\left(R_n + (n-1)\\frac{1}{n-1} \\sum_{i = 1}^{n-1} R_i\\right) \\\\\n& = \\frac{1}{n}\\left(R_n + (n-1)Q_n\\right) \\\\\n& = \\frac{1}{n}\\left(R_n + nQ_n - Q_n\\right) \\\\\n& = Q_n + \\frac{1}{n} \\left(R_n - Q_n\\right)\n\\end{align}\n\n\n```python\n# We get rid of the sample_average() method and modify k_armed_bandit as follows:\n# Lines marked ** indicate changes to the code\ndef k_armed_bandit(k, epsilon, iterations):\n \"\"\"\n Performs a single run of the k-armed bandit experiment\n :param k: the number of arms\n :param epsilon: value of epsilon for epsilon-greedy action selection\n :param iterations: number of steps in a single run\n \"\"\"\n # Randomly assign true values of reward for each action with mean 0 and variance 1\n true_values = np.random.normal(size=k)\n \n # Estimates of action values **\n Q = np.zeros(k)\n \n # N[i] is the no. of times action i has been taken **\n N = np.zeros(k)\n \n # Store the rewards received for this experiment\n rewards = []\n \n # Track how often the optimal action was selected\n optimal = []\n optimal_action = true_values.argmax()\n \n for _ in range(iterations):\n prob = np.random.rand(1)\n \n if prob > epsilon:\n # Greedy (exploit current knowledge) **\n a = np.random.choice(np.flatnonzero(Q == Q.max()))\n else:\n # Explore (take random action)\n a = np.random.randint(0, k)\n \n reward = get_reward(true_values, a) \n \n # Update statistics for executed action: increment the count, then update the estimate incrementally **\n N[a] += 1\n Q[a] += (1.0 / N[a]) * (reward - Q[a])\n \n rewards.append(reward)\n optimal.append(1 if a == optimal_action else 0)\n \n return rewards, optimal\n```\n\n## References\n1. Richard S. Sutton, Andrew G. Barto (1998). Reinforcement Learning: An Introduction. 
MIT Press.\n", "meta": {"hexsha": "855ebae18b48c2d96fdcc548d4184db989564139", "size": 67562, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "bandits/simple_bandit.ipynb", "max_stars_repo_name": "frankibem/reinforcement-learning", "max_stars_repo_head_hexsha": "a8b7878b356f209151e9c877b19b2f4545c6c0dc", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 8, "max_stars_repo_stars_event_min_datetime": "2017-09-18T17:44:00.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-19T05:55:51.000Z", "max_issues_repo_path": "bandits/simple_bandit.ipynb", "max_issues_repo_name": "frankibem/reinforcement-learning", "max_issues_repo_head_hexsha": "a8b7878b356f209151e9c877b19b2f4545c6c0dc", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "bandits/simple_bandit.ipynb", "max_forks_repo_name": "frankibem/reinforcement-learning", "max_forks_repo_head_hexsha": "a8b7878b356f209151e9c877b19b2f4545c6c0dc", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 161.245823389, "max_line_length": 30940, "alphanum_fraction": 0.8668334271, "converted": true, "num_tokens": 2653, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9173026618464795, "lm_q2_score": 0.90192067257508, "lm_q1q2_score": 0.8273342337274879}} {"text": "## Problem\nYou are faced repeatedly with a choice among n different options, or actions. After each choice you receive a\nnumerical reward chosen from a stationary probability distribution that depends on the action you selected. Your objective is to maximize the expected total reward over some time period, for example, over 1000 action selections,\nor time steps.\n\n## Exploring and exploiting\nLet expected or mean of reward of each action be called its *value*. At any time step if we choose best possible action as highest of our observed values of all actions. We call such action *greedy* and this process is called *exploiting*. If we choose one of the nongreedy action, this process will be called *exploration* because this may improve observed values of nongreedy actions which may be optimal.\n\n## Action value method\nwe denote the true (actual) value of action a as $q\u2217(a)$, and the estimated value on the tth time step as $Q_t(a)$. If by the $t^{th}$ time step action $a$ has been chosen $K_a$ times prior to $t$, yielding rewards $R_1, R_2,...R_{K_a}$, then its value is estimated to be: \n$$Q_t(a) = \\frac{R_1 + R_2 + .... + R_{K_a}}{K_a}$$\n\nWe choose explore with probability $\\epsilon$ and choose greedy action with probability $(1-\\epsilon)$ i.e Action with max $Q_t(a)$. This way of near greedy selection is called $\\epsilon-$greedy method\n\n## Softmax action selection\n\nIn Softmax action selection we choose *Exploration* action based on ranking given by *softmax action selection rules*. One method of ranking is *Gibbs, or Boltzmann, distribution*. It\nchooses action $a$ on the $t^{th}$ time step with probability, $$\\frac{e^{Q_t(a)/\\tau}}{\\sum_{i=1}^{n}{e^{Q_t(i)/\\tau}}}$$\n\n\n#### Optimized way to update mean\nLet mean at time t is $m_t$. Let new observation at time $t+1$ is $x_{t+1}$.\n$$\n\\begin{align}\nm_t &= \\frac{x_1 + x_2 + ....... + x_t}{t} \\tag{1} \\\\\nm_{t+1} &= \\frac{x_1 + x_2 + ....... 
+ x_t + x_{t+1}}{t+1} \\\\\n&= \\frac{m_t*t + x_{t+1}}{t+1} && \\text{by (1)} \\\\\n&= \\frac{m_t*t + m_t -m_t + x_{t+1}}{t+1} \\\\\n&= \\frac{m_t(t + 1) + (x_{t+1}-m_t)}{t+1} \\\\\nm_{t+1} &= m_t + \\frac{x_{t+1}-m_t}{t+1}\n\\end{align}\n$$\n\n\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sb\n%matplotlib inline\n```\n\n\n```python\n#np.random.seed(seed=2017)\n\n# Environment code\nclass Bandit:\n def __init__(self,n):\n self.n = n\n self.q_star_arr = np.random.normal(size=self.n)\n \n # get reward R\n def reward(self,a): \n return np.random.normal(loc=self.q_star_arr[a])\n \n def getOptimal(self):\n return self.q_star_arr.argmax()\n```\n\n\n```python\nclass ActionValue:\n def __init__(self,n,epsilon,bandit,max_t = 1000):\n self.n = n\n self.bandit = bandit\n self.max_t = max_t\n self.Qta = np.zeros(n)\n self.Ka = np.zeros(n,dtype=int)\n self.optimalHistory = np.zeros(max_t)\n self.epsilon = epsilon \n self.rewardHistory = np.zeros(max_t)\n self.algoRun()\n \n def algoRun(self):\n for t in range(self.max_t):\n greedyAction = self.Qta.argmax()\n \n if np.random.rand() < self.epsilon:#exploring\n curAction = np.random.randint(self.n -1) \n #not to use greedyAction so\n if curAction>=greedyAction:\n curAction += 1\n else:#exploiting\n curAction = greedyAction\n \n self.optimalHistory[t] = 1 if curAction == self.bandit.getOptimal() else 0\n curActionReward = self.bandit.reward(curAction)\n self.rewardHistory[t] = curActionReward\n self.Ka[curAction] += 1\n #update Qt(a)\n self.Qta[curAction] += (curActionReward - self.Qta[curAction])/self.Ka[curAction] \n \n# n =10\n# bandit = Bandit(10)\n# print(bandit.q_star_arr)\n# max_t = 1000\n# a1 = ActionValue(n,0.1,bandit,max_t=max_t)\n# a1.algoRun()\n# print(a1.Ka)\n# a2 = ActionValue(n,0.01,bandit,max_t=max_t)\n# a2.algoRun()\n# print(a2.Ka)\n# a3 = ActionValue(n,0.0,bandit,max_t=max_t)\n# a3.algoRun()\n# print(a3.Ka)\n```\n\n\n```python\nclass SoftmaxAction:\n def __init__(self,n,tau,bandit,max_t = 1000):\n self.n = n\n self.tau = tau\n self.bandit = bandit\n self.max_t = max_t\n self.Qta = np.zeros(n)\n self.Ka = np.zeros(n,dtype=int)\n self.optimalHistory = np.zeros(max_t)\n self.rewardHistory = np.zeros(max_t)\n self.algoRun()\n \n def algoRun(self):\n for t in range(self.max_t):\n probArray = np.exp(self.Qta/ self.tau) \n probArray = probArray / sum(probArray) \n actionList = list(range(len(self.Qta)))\n curAction = np.random.choice(actionList,p=probArray)\n self.optimalHistory[t] = 1 if curAction == self.bandit.getOptimal() else 0\n curActionReward = self.bandit.reward(curAction)\n self.rewardHistory[t] = curActionReward\n self.Ka[curAction] += 1\n #update Qt(a)\n self.Qta[curAction] += (curActionReward - self.Qta[curAction])/self.Ka[curAction] \n\n```\n\n\n```python\nclass SoftmaxEpsilonGreedyAction:\n def __init__(self,n,epsilon,tau,bandit,max_t = 1000):\n self.n = n\n self.tau = tau\n self.bandit = bandit\n self.max_t = max_t\n self.Qta = np.zeros(n)\n self.Ka = np.zeros(n,dtype=int)\n self.optimalHistory = np.zeros(max_t)\n self.epsilon = epsilon \n self.rewardHistory = np.zeros(max_t)\n self.algoRun()\n \n def algoRun(self):\n for t in range(self.max_t):\n greedyAction = self.Qta.argmax()\n \n if np.random.rand() < self.epsilon:#exploring\n probArray = np.exp(self.Qta/ self.tau) \n actionList = list(range(len(self.Qta)))\n #not to use greedyAction so\n actionList.remove(greedyAction)\n probArray = np.delete(probArray, greedyAction)\n \n probArray = probArray / sum(probArray)\n curAction = 
np.random.choice(actionList,p=probArray) \n else:#exploiting\n curAction = greedyAction\n \n self.optimalHistory[t] = 1 if curAction == self.bandit.getOptimal() else 0\n curActionReward = self.bandit.reward(curAction)\n self.rewardHistory[t] = curActionReward\n self.Ka[curAction] += 1\n #update Qt(a)\n self.Qta[curAction] += (curActionReward - self.Qta[curAction])/self.Ka[curAction] \n \n```\n\n\n```python\nclass TestBed:\n def __init__(self, n, epsilon,tau,algoType,max_t = 1000):\n self.n = n\n self.maxExp = 2000\n self.max_t = max_t \n self.epsilon = epsilon \n self.tau = tau\n self.averageRewardHistory = np.zeros(self.max_t)\n self.averageOptimalHistory = np.zeros(self.max_t)\n \n for i in range(self.maxExp): \n bandit = Bandit(n)\n if algoType==\"ActionValue\":\n exp = ActionValue(self.n,self.epsilon,bandit,max_t=self.max_t)\n elif algoType==\"SoftmaxAction\":\n exp = SoftmaxAction(self.n,self.tau,bandit,max_t=self.max_t)\n elif algoType==\"SoftmaxEpsilonGreedyAction\":\n exp = SoftmaxEpsilonGreedyAction(self.n,self.epsilon,self.tau,bandit,max_t=self.max_t)\n \n self.averageRewardHistory += exp.rewardHistory \n self.averageOptimalHistory += exp.optimalHistory \n \n self.averageRewardHistory /= self.maxExp\n self.averageOptimalHistory *= 100.0/self.maxExp\n```\n\n\n```python\nn =10\nmax_t =1000\n#a1 = TestBed(n,0.1,None,\"ActionValue\",max_t)\na2 = TestBed(n,0.01,None,\"ActionValue\",max_t)\na3 = TestBed(n,0.0,None,\"ActionValue\",max_t)\na4 = TestBed(n,None,1.0,\"SoftmaxAction\",max_t)\na5 = TestBed(n,None,10.0,\"SoftmaxAction\",max_t)\n#a6 = TestBed(n,0.01,1.0,\"SoftmaxEpsilonGreedyAction\",max_t)\n#a7 = TestBed(n,0.0,10.0,\"SoftmaxEpsilonGreedyAction\",max_t)\n#a8 = TestBed(n,0.0,10.0,\"SoftmaxEpsilonGreedyAction\",max_t)\n\n\n```\n\n\n```python\nt = range(max_t)\n#plt.plot(t,a1.averageRewardHistory,label='epsilon=0.1')\nplt.plot(t,a2.averageRewardHistory,label='ActionValue: epsilon=0.01')\nplt.plot(t,a3.averageRewardHistory,label='ActionValue: epsilon=0.0')\nplt.plot(t,a4.averageRewardHistory,label='SA: tau=1.0')\nplt.plot(t,a5.averageRewardHistory,label='SA: tau=10.0')\n# plt.plot(t,a4.averageRewardHistory,label='SEGA: epsilon=0.01 tau=1.0')\n# plt.plot(t,a5.averageRewardHistory,label='SEGA: epsilon=0.0 tau=10.0')\nplt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)\nplt.show()\n```\n\n\n```python\nplt.plot(t,a2.averageOptimalHistory,label='ActionValue: epsilon=0.01')\nplt.plot(t,a3.averageOptimalHistory,label='ActionValue: epsilon=0.0')\nplt.plot(t,a4.averageOptimalHistory,label='SA: tau=1.0')\nplt.plot(t,a5.averageOptimalHistory,label='SA: tau=10.0')\n# plt.plot(t,a4.averageOptimalHistory,label='SEGA: epsilon=0.01 tau=1.0')\n# plt.plot(t,a5.averageOptimalHistory,label='SEGA: epsilon=0.0 tau=10.0')\nplt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)\nplt.show()\n```\n", "meta": {"hexsha": "ffaf7b8f3105c3149ed8ac8e2e9ee59a01f116d4", "size": 110999, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/Reinforcement_learning/.ipynb_checkpoints/RL_Part_1_ArmedBandit_Softmax-checkpoint.ipynb", "max_stars_repo_name": "rakesh-malviya/MLCodeGems", "max_stars_repo_head_hexsha": "b9b2b4c2572f788724a7609499b3adee3a620aa4", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-02-19T14:42:57.000Z", "max_stars_repo_stars_event_max_datetime": "2020-02-19T14:42:57.000Z", "max_issues_repo_path": 
"notebooks/Reinforcement_learning/.ipynb_checkpoints/RL_Part_1_ArmedBandit_Softmax-checkpoint.ipynb", "max_issues_repo_name": "rakesh-malviya/MLCodeGems", "max_issues_repo_head_hexsha": "b9b2b4c2572f788724a7609499b3adee3a620aa4", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/Reinforcement_learning/.ipynb_checkpoints/RL_Part_1_ArmedBandit_Softmax-checkpoint.ipynb", "max_forks_repo_name": "rakesh-malviya/MLCodeGems", "max_forks_repo_head_hexsha": "b9b2b4c2572f788724a7609499b3adee3a620aa4", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2017-11-09T11:09:31.000Z", "max_forks_repo_forks_event_max_datetime": "2020-12-17T06:38:28.000Z", "avg_line_length": 320.8063583815, "max_line_length": 58284, "alphanum_fraction": 0.9038009351, "converted": true, "num_tokens": 2568, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9324533126145178, "lm_q2_score": 0.8872045966995027, "lm_q1q2_score": 0.8272768651592786}} {"text": "# Introduction to Linear Algebra \n\n## Matrices\n\n### Conventions of Representation \n\nIn the last chapter, we introduced a convention of representing vectors as an ordered, vertical arrangement of numbers within square brackets.\n\n$$\\vec{a}=\\begin{bmatrix} 3\\\\4\\end{bmatrix}; \\quad \n\\vec{b}=\\begin{bmatrix} 2\\\\-1\\end{bmatrix}; \\quad \n\\vec{c}=\\begin{bmatrix} -3\\\\-3\\end{bmatrix}; \\quad \n\\vec{d}=\\begin{bmatrix} -2\\\\3\\end{bmatrix}; \\quad\n\\vec{e}=\\begin{bmatrix} 2\\\\3\\\\5\\end{bmatrix}\n$$\n\nThis is part of a broader convention of orderly arranging numbers in a rectangular, table-like structure called a __matrix__. A variable containing a vector is represented with a small letter and an arrow above it. A variable containing a matrix is conventionally represented with a capital letter. For example, M is a matrix containing 12 numbers arranged in a particular way.\n\n$$M=\\begin{bmatrix} 1 & 3 & -2 & 4\\\\2 & 3 & -1 & -1\\\\ -1 & 1 & 3 & 4\\end{bmatrix}$$\n\nAs the table-like arrangement of numbers suggests, a matrix can be organised into rows and columns. This allows us to assess its size. The matrix $M$ has 3 rows and 4 columns, thus it is described as a $3\\times4$ matrix. The convention is to always start with the number of rows. \n\n$$M_{3\\times4}=\\underset{\\;\\\\\\mathbf{3 \\, rows \\, \\times \\, 4 \\, \\text{columns}}}{\\begin{bmatrix} 1 & 3 & -2 & 4\\\\2 & 3 & -1 & -1\\\\ -1 & 1 & 3 & 4\\end{bmatrix}}$$\n\nWith this in mind, we can think of vectors as single column matrices. \n\n### Why a Matrix?\n\nThere is not a single precise meaning associated with matrices. They are first and foremost a mathematical structure uniquely arranging numbers. We can use numbers to count but also to measure. We can use vectors to describe a direction and magnitude of a force but also features of some object we are trying to encode computationally. Consequently, we can use this rectangular arrangement of numbers for many things. We can, for example, concisely represent a system of linear equations, or conveniently represent any linear transformation of space. These specific uses, however, do not exhaust what this structure may ultimately facilitate. 
It is rather that we have found a way of using an ordered table of numbers to accomplish something more efficiently or to describe something in a way which ends up further illuminating it from a new perspective. When this happens, as it does in linear algebra, we tend to associate a specific meaning to mathematical objects such as matrices and forget that meaning is a matter of the perspective taken.\n\n### Matrices in Linear Algebra\n\n#### Linear Transformation\n\nIn linear algebra, matrices extend the idea of manipulating vectors in space towards manipulating the space itself (including all the possible vectors in it). To explain this, we will use the idea of a __linear transformation__. We can think of a linear transformation as a function (with some properties), which can transform an input defined in terms of vectors into an output defined in terms of vectors.\n\n\n\n In linear algebra, space itself is defined in terms of vectors. We can, for example, make a linear combination of any two linearly independent 2-D vectors to reach any point in 2-D space. Hence a linear transformation can transform a whole space into a different one.\n\nWhat are the properties of a linear transformation? The most obvious one is the linearity. It is an aspect that mathematics defines formally, in terms of operations, without much regard to its spatial aspect. This leads to a multitude of possible geometric interpretations. Let's describe one:
\nImagine a two-dimensional plane. The plane itself has no shape (you can think of it as an empty canvas), so to illustrate its principal two directions, we need some kind of a helping device. One such device is the coordinate axes. Another such device is a grid, subdividing the space with two sets of lines, parallel to the coordinate axes. The axes lie at the centre of this grid. A linear transformation of space implies that the coordinate centre remains fixed at $(0,0)$, and the grid lines of the transformed space remain parallel to each other and equally spaced. \n\n\n\n#### Using Basis Vectors to Describe a Linear Transformation\n\nIf the linearity is guaranteed, an easier and much more convenient way to describe what a linear transformation accomplishes is to track how it transforms its basis vectors. In the case of the 2-dimensional space, we start with the basis vectors $\\boldsymbol{\\hat{\\imath}}$ and $\\boldsymbol{\\hat{\\jmath}}$. A linear transformation transforms the space such that $\\boldsymbol{\\hat{\\imath}}$ and $\\boldsymbol{\\hat{\\jmath}}$ (usually) end up at a new location in space in terms of the starting condition. We can call these transformed vectors $\\boldsymbol{\\hat{\\imath}}^{\\;new}$ and $\\boldsymbol{\\hat{\\jmath}}^{\\; new}$.\n\n\n\nNumerically expressing this linear transformation is easier than one might think. We simply need to capture the coordinates of the transformed basis vectors $\\boldsymbol{\\hat{\\imath}}^{\\;new}$ and $\\boldsymbol{\\hat{\\jmath}}^{\\; new}$ in terms of the original space and put them in a matrix! Since we have two vectors, our matrix will have 2 columns. The first vector corresponds to the first column (we count columns from left to right). Because our vectors are two-dimensional, our matrix will have 2 rows. The first spatial dimension starts at the top (we encode dimensions from the top towards the bottom).\n\n\n\nTo capture the transformation described in the image above we define a matrix $M$ as follows:\n\n$$M=\\begin{bmatrix} -1&-2\\\\3&-1\\end{bmatrix}$$\n\n### Some Common Linear Transformations\n\nSome linear transformations are used very often, especially when applied in computer graphics. A good example is using matrices to rotate a space in a certain direction. \n\n#### Rotate 90 Degrees Clockwise\n\n\n\n$$M_{rotate 90+}=\\begin{bmatrix} 0&1\\\\-1&0\\end{bmatrix}$$\n\n#### Rotate 90 Degrees Counter-clockwise\n\n\n\n$$M_{rotate 90-}=\\begin{bmatrix} 0&-1\\\\1&0\\end{bmatrix}$$\n\n#### Shear Transformation\n\nWith this transformation, all the vector components in the direction of $\\boldsymbol{\\hat{\\imath}}$ remain unchanged, but the ones in the $\\boldsymbol{\\hat{\\jmath}}$ direction get pushed towards the $\\boldsymbol{\\hat{\\imath}}$. \n\n\n\n$$M_{shear}=\\begin{bmatrix} 1&1\\\\0&1\\end{bmatrix}$$\n\n#### Identity Matrix\n\nA transformation matrix that does not transform the space at all is a matrix whose columns contain the basis vectors. This idea might not sound that useful, yet such a matrix is extremely important in mathematics. Its name is an __identity matrix__, and it is usually symbolised with the capital $I$. An identity matrix serves a purpose similar to the one that the number one serves in arithmetic. Multiplying any number by one also does not change the number. 
\n\n\n\n$$I=\\begin{bmatrix} 1&0\\\\0&1\\end{bmatrix}$$\n\nIn the case of three dimensions, an identity matrix would contain 3 vectors of 3 dimensions:\n\n$$I_{3}=\\begin{bmatrix} 1&0&0 \\\\ 0&1&0 \\\\ 0&0&1 \\end{bmatrix}$$\n\nNotice an important feature: in both cases, the diagonal elements are ones, and all the rest are zeros.\n\n### Elementary Matrix Operations\n\n#### How a Linear Transformation Affects an Arbitrary Vector\n\nLet us return to the transformation matrix $M$ we created earlier, described as:\n$$M=\\begin{bmatrix} -1&-2\\\\3&-1\\end{bmatrix}$$\n\nThe matrix $M$ is sufficient to compute where any vector of the original space will land in the transformed space. The process is illustrated in the image below. (Fig. 1) We start in the space defined by the basis vectors $\\boldsymbol{\\hat{\\imath}}$ and $\\boldsymbol{\\hat{\\jmath}}$. The vector $\\vec{p}$ is an arbitrary vector in this space whose coordinates are $-0.5$ and $-1.5$. We can describe $\\vec{p}$ as a linear combination of the basis vectors: $\\vec{p}=-0.5\\boldsymbol{\\hat{\\imath}}-1.5\\boldsymbol{\\hat{\\jmath}}$.\n\nNow we transform the space. To define the transformation we change the direction and orientation of the basis vectors. We end up with new vectors $\\boldsymbol{\\hat{\\imath}}^{\\;new}$ and $\\boldsymbol{\\hat{\\jmath}}^{\\; new}$. To encode this transformation in a matrix $M$, we capture the vectors' new coordinates: $\\boldsymbol{\\hat{\\imath}}^{\\;new}$ has coordinates $-1$ and $3$; $\\boldsymbol{\\hat{\\jmath}}^{\\;new}$ has coordinates $-2$ and $-1$. (Fig. 2)\n\n\n\nTo compute where the vector $\\vec{p}$ lands in the transformed space, we multiply $\\vec{p}$'s first coordinate $-0.5$ with the first column of the transformation matrix $M$, and $\\vec{p}$'s second coordinate $-1.5$ with the second column. This gives us the vector $\\vec{p}^{\\;new}$ whose coordinates in terms of the old space are $3.5$ and $0$. (Fig. 3) Once we remove the old space and its basis vectors from the picture, we are left with new basis vectors $\\boldsymbol{\\hat{\\imath}}^{\\;new}$, $\\boldsymbol{\\hat{\\jmath}}^{\\;new}$ and the vector $\\vec{p}^{\\;new}$. (Fig. 4) If we represent the vector $\\vec{p}^{\\;new}$ as a linear combination of $\\boldsymbol{\\hat{\\imath}}^{\\;new}$ and $\\boldsymbol{\\hat{\\jmath}}^{\\;new}$ we notice something very interesting. The linear combination is parametrised by the same scalars $-0.5$ and $-1.5$, just like in the space we started with! (Fig. 1) Vector $\\vec{p}$'s coordinates are still $-0.5$ and $-1.5$, only in terms of the new basis vectors (in terms of the old basis vectors, its coordinates are $3.5$ and $0$). The consequence of linearity of the transformation is that as we transform the space, the basis vectors of the space change, but the scalars parametrising every vector in this space remain constant.\n\n\n\n#### Multiplying a Vector by a Matrix\n\nThe calculation we performed to find the landing coordinates of $\\vec{p}^{\\;new}$ was, in fact, the same as __multiplying__ a vector $\\vec{p}$ by a matrix $M$. To correctly perform the calculation, the transformation matrix needs to be on the left of the vector that it multiplies. 
What we did can be described algebraically as following:\n\n\\begin{align}\n\\vec{p}^{\\;new}&=M \\times \\vec{p} \\\\\\\\\n\\vec{p}^{\\;new}&=\\begin{bmatrix} -1&-2\\\\3&-1 \\end{bmatrix}\\times \\begin{bmatrix} -0.5\\\\-1.5\\end{bmatrix} \\\\\\\\\n\\vec{p}^{\\;new}&=-0.5 \\begin{bmatrix} -1\\\\3 \\end{bmatrix} -1.5\\begin{bmatrix} -2\\\\-1 \\end{bmatrix} \\\\\\\\\n\\vec{p}^{\\;new}&=\\begin{bmatrix} 0.5\\\\-1.5 \\end{bmatrix}+\\begin{bmatrix} 3\\\\1.5 \\end{bmatrix} \\\\\\\\\n\\vec{p}^{\\;new}&=\\begin{bmatrix} 3.5\\\\0 \\end{bmatrix}\n\\end{align}\n\n#### Matrix Multiplication\n\n##### Composing Linear Transformations\n\nA single matrix can be used to describe a transformation of space. A __product__ of two matrices can be used to compose two linear transformations into one! Let's say that we wanted to first rotate a vector by 90 degrees counter-clockwise and then perform a shear operation. We could start with a vector $\\vec{q}$ and multiply it by the matrix $M_{rotate90+}$. This operation would yield a vector $\\vec{q_{rotated}}$. \n\n\n$$\\vec{q_{rotated}}=M_{rotate90-} \\times \\vec{q}$$\n\nNow we can multiply the vector $q_{rotated}$ by the matrix $M_{shear}$ and get the final vector $q_{final}$. \n\n$$\\vec{q_{final}}=M_{shear} \\times \\vec{q_{rotated}}$$\n\nVector multiplication allows us to compute a new matrix $M_{rotate-shear}$ whose effect on the vector $\\vec{q}$ would be of applying both the rotation and shear transformation at the same time! The order of operations matter, as we will get a different result if we first applied shear and then rotated than if we first rotated and then applied shear. The operation that comes last should be on the left of the operation that precedes it. \n\n\\begin{align}\nM_{rotate-shear}&=M_{shear}\\times M_{rotate90-} \\\\\\\\\nM_{rotate-shear}&=\\begin{bmatrix} 1&1\\\\0&1\\end{bmatrix} \\times \\begin{bmatrix} 0&-1\\\\1&0\\end{bmatrix} \\\\\\\\\nM_{rotate-shear}&=\\begin{bmatrix} 1&-1\\\\1&0\\end{bmatrix}\n\\end{align}\n\n\n\n##### Matrix Multiplication Rules\n\nIn the previous step, we skipped the process of actually computing the matrix product and directly showed the result. This is because matrix multiplication can be quite numerically cumbersome, and its computation can obscure the idea of what it stands for in linear algebra\u2014composing linear transformations. Here we introduce the multiplication rules. Let's start with an example of a single matrix, $A$:\n\n$$\nA=\\begin{bmatrix}\na_{11}&a_{12}&a_{13}&a_{14} \\\\\na_{21}&a_{22}&a_{23}&a_{14} \\\\\na_{31}&a_{32}&a_{33}&a_{14} \\\\\n\\end{bmatrix} \\\\\n$$\n\n$A$ is a $3\\times4$ matrix containing 12 elements distributed in 3 rows and 4 columns. Each element $a_{ij}$ is indexed by two numbers. The first one $i$ defines the element's row and the second one $j$ the element's column. Thus the name $a_{23}$ indicates an element in the second row and the third column.\n\nLet's introduce another matrix B:\n\n$$\nB=\\begin{bmatrix}\nb_{11}&b_{12} \\\\\nb_{21}&b_{22} \\\\\nb_{31}&a_{32} \\\\\nb_{41}&a_{42} \\\\\n\\end{bmatrix} \\\\\n$$\n\n$B$ is a $4\\times2$ matrix, containing 8 elements distributed in 4 rows and 2 columns. The element indexing rules are the same as in the case of $A$.\n\nTo compute a matrix product, matrices need to be compatible in size. Moreover, the compatibility then defines the size of the product:\n\n__To compute the matrix product $A\\times B$, the number of columns of matrix A needs to match the number of rows of matrix B. 
If this criterion is met, the product matrix will have the same number of rows as the matrix A and the same number of columns as the matrix B.__\n\nSince matrix $A$ is a $\\color{red}3\\times \\boldsymbol{4}$ matrix and B is a $\\boldsymbol{4} \\times \\color{blue}2$ matrix, the two matrices are compatible to compute the product $A \\times B$, and the product matrix will be a $\\color{red}3 \\times \\color{blue}2$ matrix.\n\n\\begin{align}\nC_{\\color{red}3\\times \\color{blue}2} &= A_{\\color{red}3\\times\\boldsymbol{4}} \\times B_{\\boldsymbol{4}\\times\\color{blue}2} \\\\\\\\\nC &= \\begin{bmatrix}\nc_{11} & c_{12} \\\\\nc_{21} & c_{22} \\\\\nc_{31} & c_{32}\n\\end{bmatrix}\n\\end{align}\n\nWhat is left is to compute the actual elements $c_{ij}$, and for that, we will need the dot product operation. The index $_{\\color{red}2 \\color{blue}1}$ tells us that to compute the element $c_{\\color{red}2 \\color{blue}1}$ we need to dot multiply the __second__ row of the matrix $A$ by the __first__ column of the matrix $B$. To compute the element $c_{\\color{red}3 \\color{blue}2}$ we need to dot multiply the __third__ row of the matrix $A$ by the __second__ column of the matrix $B$. In other words, to compute $c_{\\color{red}2 \\color{blue}1}$ we need to multiply:\n\n\n\nNotice that any column or a row of a matrix is one-dimensional, so we can think of it as a vector and represent it accordingly:\n\n\\begin{align}\nc_{21} &= \\begin{bmatrix} a_{21} \\\\ a_{22} \\\\ a_{23} \\\\ a_{24}\\end{bmatrix} \\cdot\n\\begin{bmatrix} b_{11} \\\\ b_{21} \\\\ b_{31} \\\\ b_{41}\\end{bmatrix} \\\\[2ex]\nc_{21} &=a_{21}b_{11}+a_{22}b_{21}+a_{23}b_{31}+a_{24}b_{41}\n\\end{align}\n\nto compute $c_{\\color{red}3 \\color{blue}2}$ we need to multiply:\n\n\n\nAnd in terms of a dot product of vectors:\n\n\\begin{align}\nc_{32} &= \\begin{bmatrix} a_{31} \\\\ a_{32} \\\\ a_{33} \\\\ a_{34}\\end{bmatrix} \\cdot\n\\begin{bmatrix} b_{12} \\\\ b_{22} \\\\ b_{32} \\\\ b_{42}\\end{bmatrix} \\\\[2ex]\nc_{32} &=a_{31}b_{12}+a_{32}b_{22}+a_{33}b_{32}+a_{34}b_{42}\n\\end{align}\n\nThe animation below shows how to compute the rest of the elements:\n\n\n\nHere it should be clear why the matrix multiplication requires that $A$ and $B$ are compatible in size. If matrix $A$ had 3 columns instead of 4, and the matrix $B$ remained the same, we would need to dot multiply two vectors of a different length. That is simply not possible, as the dot product requires two vectors with the same number of elements. \n\nTo compute all the elements of matrix C, we need to compute 6 dot products of two 4-dimensional vectors. This is why the matrix multiplication is better left to computers.\n\n$$\nC=\n\\begin{bmatrix} a_{11}b_{11}+a_{12}b_{21}+a_{13}b_{31}+a_{14}b_{41} & a_{11}b_{12}+a_{12}b_{22}+a_{13}b_{32}+a_{14}b_{42} \\\\\na_{21}b_{11}+a_{22}b_{21}+a_{23}b_{31}+a_{24}b_{41} & a_{21}b_{12}+a_{22}b_{22}+a_{23}b_{32}+a_{24}b_{42} \\\\\na_{31}b_{11}+a_{32}b_{21}+a_{33}b_{31}+a_{34}b_{41} & a_{31}b_{12}+a_{32}b_{22}+a_{33}b_{32}+a_{34}b_{42} \\\\\n\\end{bmatrix}\n$$\n\n#### Matrix Addition and Subtraction\n\nMatrix addition and subtraction do not have a straightforward intuitive meaning. They can be performed, but we need to think of them purely in algebraic terms. Similarly to vectors, to be able to add or subtract two matrices, they need to be of the same size. 
If the matrix $A$ is a $3\\times4$ matrix, then the matrix $B$ needs to be a $3\\times4$ matrix as well if we wish to compute $A+B$ and $A-B$. If the matrices are of the same size, both addition and subtraction are trivial. We simply need to add or subtract individual elements of both matrices that have matching indices. \n\n$$A_{2\\times3}=\\begin{bmatrix}\na_{11} & a_{12} & a_{13}\\\\\na_{21} & a_{22} & a_{23}\\\\\n\\end{bmatrix}; \\quad\nB_{2\\times3} = \\begin{bmatrix}\nb_{11} & b_{12} & b_{13}\\\\\nb_{21} & b_{22} & b_{23}\\\\\n\\end{bmatrix} \\\\[2em]\nA+B = \\begin{bmatrix}\na_{11}+b_{11} & a_{12}+b_{12} & a_{13}+b_{13}\\\\\na_{21}+b_{21} & a_{22}+b_{22} & a_{23}+a_{23}\\\\\n\\end{bmatrix} \\\\[2em]\nA-B = \\begin{bmatrix}\na_{11}-b_{11} & a_{12}-b_{12} & a_{13}-b_{13}\\\\\na_{21}-b_{21} & a_{22}-b_{22} & a_{23}-a_{23}\\\\\n\\end{bmatrix}\n$$\n\n#### Multiplying a Matrix by a Scalar\n\nAnother common operation is to multiply a matrix by a scalar. This is also very straightforward. Every element of the new matrix will be a multiple of the old one and the given scalar. \n\n$$A_{2\\times3}=\\begin{bmatrix}\na_{11} & a_{12} & a_{13}\\\\\na_{21} & a_{22} & a_{23}\\\\\n\\end{bmatrix}; \\\\[2em]\npA = \\begin{bmatrix}\np\\,a_{11} & p\\,a_{12} & p\\,a_{13}\\\\\np\\,a_{21} & p\\,a_{22} & p\\,a_{23}\\\\\n\\end{bmatrix}\n$$\n\n### Elementary Matrix Operations in Python\n\nTo represent matrices in Python programming language, we will use the linear algebra library called NumPy. To import it we write the following line of code:\n\n\n```python\nimport numpy as np\n```\n\nLet us define matrices $A$, $B$ and $C$ as follows:\n\n$$\nA=\\begin{bmatrix}\n4 & 2 & -1 & 5 \\\\\n2 & 1 & 3 & -3 \\\\\n-2 & -3 & 1 & 4 \\\\\n\\end{bmatrix}; \\quad\nB=\\begin{bmatrix}\n-1 & 1 \\\\\n3 & -3 \\\\\n4 & -2 \\\\\n5 & 5 \\\\\n\\end{bmatrix}; \\quad\nC=\\begin{bmatrix}\n1 & 3 \\\\\n3 & -1 \\\\\n\\end{bmatrix}; \\quad\nD=\\begin{bmatrix}\n2 & 1 \\\\\n-1 & 2 \\\\\n\\end{bmatrix}\n$$\n\nNow we can use NumPy's data structure called array to represent matrices:\n\n\n```python\nA = np.array([[4, 2, -1, 5],\n [2, 1, 3, -3],\n [-2, -3, 1, 4]])\n```\n\n\n```python\nB = np.array([[-1, 1],\n [ 3, -3],\n [ 4, -2],\n [ 5, 5]])\n```\n\n\n```python\nC = np.array([[-1, 3],\n [ 3, -1]])\n```\n\n\n```python\nD = np.array([[ 2, 1],\n [-1, 2]])\n```\n\nTo make sure we encoded them properly, we can use the command print:\n\n\n```python\nprint (A)\n```\n\n [[ 4 2 -1 5]\n [ 2 1 3 -3]\n [-2 -3 1 4]]\n\n\n\n```python\nprint (B)\n```\n\n [[-1 1]\n [ 3 -3]\n [ 4 -2]\n [ 5 5]]\n\n\n\n```python\nprint (C)\n```\n\n [[-1 3]\n [ 3 -1]]\n\n\n\n```python\nprint (D)\n```\n\n [[ 2 1]\n [-1 2]]\n\n\n#### Matrix Addition and Subtraction in Python\n\nWe can only add two matrices if they are of the same size. Thus attempting to compute $A+B$ will result in an error:\n\n\n```python\nA+B\n```\n\nWhat we can add are the matrices $C$ and $D$:\n\n\n```python\nprint (C+D)\n```\n\n [[1 4]\n [2 1]]\n\n\nor we can add any matrix to itself\n\n\n```python\nprint (A+A)\n```\n\n [[ 8 4 -2 10]\n [ 4 2 6 -6]\n [-4 -6 2 8]]\n\n\n#### Multiplying a Matrix by a Scalar in Python\n\nMultiplying a matrix by a scalar in python is quite straightforward. To multiply the matrix $A$ by 5, we write:\n\n\n```python\nprint (5*A)\n```\n\n [[ 20 10 -5 25]\n [ 10 5 15 -15]\n [-10 -15 5 20]]\n\n\n\n```python\nprint (-5*A)\n```\n\n [[-20 -10 5 -25]\n [-10 -5 -15 15]\n [ 10 15 -5 -20]]\n\n\n#### Matrix Multiplication in Python\n\nWe can only multiply two matrices if their sizes are matching. 
We can for example compute $A\\times B$. To apply matrix multiplication we will use the NumPy command `np.dot`:\n\n\n```python\nprint (np.dot(A,B))\n```\n\n [[ 23 25]\n [ -2 -22]\n [ 17 25]]\n\n\nSince matrix product is not commutative, trying to compute the product $B\\times A$ will not work:\n\n\n```python\nprint (np.dot(B,A))\n```\n\nHowever, we can find the product $B\\times C$ or $B\\times D$:\n\n\n```python\nprint (np.dot(B,C))\n```\n\n [[ 4 -4]\n [-12 12]\n [-10 14]\n [ 10 10]]\n\n\n\n```python\nprint (np.dot(B,D))\n```\n\n [[-3 1]\n [ 9 -3]\n [10 0]\n [ 5 15]]\n\n\nSince the matrices $C$ and $D$ have the same size, computing both $C\\times D$ and $D\\times C$ is possible:\n\n\n```python\nprint (np.dot(C,D))\n```\n\n [[-5 5]\n [ 7 1]]\n\n\n\n```python\nprint (np.dot(D,C))\n```\n\n [[ 1 5]\n [ 7 -5]]\n\n", "meta": {"hexsha": "40cb13d30cd64300a8f4543b5671f87f83541521", "size": 38034, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "10. Introduction to Matrices.ipynb", "max_stars_repo_name": "Mistrymm7/machineintelligence", "max_stars_repo_head_hexsha": "7629d61d46dafa8e5f3013082b1403813d165375", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 82, "max_stars_repo_stars_event_min_datetime": "2019-09-23T11:25:41.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-29T22:56:10.000Z", "max_issues_repo_path": "10. Introduction to Matrices.ipynb", "max_issues_repo_name": "Iason-Giraud/machineintelligence", "max_issues_repo_head_hexsha": "b34a070208c7ac7d7b8a1e1ad02813b39274921c", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "10. Introduction to Matrices.ipynb", "max_forks_repo_name": "Iason-Giraud/machineintelligence", "max_forks_repo_head_hexsha": "b34a070208c7ac7d7b8a1e1ad02813b39274921c", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 31, "max_forks_repo_forks_event_min_datetime": "2019-09-30T16:08:46.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-19T10:29:07.000Z", "avg_line_length": 30.5004009623, "max_line_length": 1286, "alphanum_fraction": 0.5585528737, "converted": true, "num_tokens": 6385, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9324533088603709, "lm_q2_score": 0.8872045862611166, "lm_q1q2_score": 0.8272768520952746}} {"text": "### Lets talk about the following topics\n1. Derivative\n2. Derivative and partial derivative\n3. Derivative and directional derivative\n4. Derivative and gradient\n5. Gradient descent algorithm\n\n### Derivative\n\n\nDefinition of derivative:\n\n$$\nf'(x_0) = \\lim_{\\Delta x \\to 0}\\frac{\\Delta y}{\\Delta x} = \\\n\\lim_{\\Delta x \\to 0}\\frac{f(x_0 + \\Delta x) - f(x_0)}{\\Delta x}\n$$\n\n$f'(x_0)$ indicates the changing rate/trend of $f(x)$ along the positive x-axis direction, more intuitively to say, to some point of x:\n 1. If $f'(x) > 0$, then the value of $f(x)$ will increase along the positive direction of x-axis;\n 2. 
If $f'(x) < 0$, then the value of $f(x)$ will decrease along the positive direction of x-axis;\n \n### Derivative and partial derivative\nDefinition of partial derivative:\n\n$$\n\\frac{\\partial}{\\partial x_i}f(x_0, x_1, \\dots, x_n) \\\n= \\lim_{\\Delta x \\to 0}\\frac{\\Delta y}{\\Delta x} \\\n= \\lim_{\\Delta x \\to 0}\\frac{f(x_0, x_1, \\dots, x_i + \\Delta x, \\dots, x_n) - f(x_0, x_1, \\dots, x_i, \\dots, x_n)}{\\Delta x}\n$$\n\nSo as you can see, derivative and partial derivative are essentially the same, just:\n 1. Derivative indicates the changing rate/trend of a univariate function $f(x)$ along the positive x-axis direction;\n 2. Partial derivative $\\frac{\\partial}{\\partial x_i} (i = 0, 1, \\dots, n)$ indicates the changing rate/trend of a multivariable function $f(x_0, x_1, \\dots, x_n)$ along the positive $x_i (i = 0, 1, \\dots, n)$ axis direction;\n \n### Derivative and directional derivative\nDefinition of directional derivative:\n\n$$\n\\begin{align}\n \\frac{\\partial}{\\partial v}f(x_0, x_1, \\dots, x_n) \\\n &= \\lim_{v \\to 0}\\frac{\\Delta y}{v} \\\n = \\lim_{v \\to 0}\\frac{f(x_0 + \\Delta x_0, x_1 + \\Delta x_1, \\dots, x_i + \\Delta x_i, \\dots, x_n + \\Delta x_n) - f(x_0, x_1, \\dots, x_i, \\dots, x_n)}{v} \\\\\n \\newline\n v &= \\sqrt{(\\Delta x_0)^2 + (\\Delta x_1)^2 + \\dots + (\\Delta x_i)^2 + \\dots + (\\Delta x_n)^2}\n\\end{align}\n$$\n\nIn the previous derivative and partial derivative definitions, we were always talking about the changing rate/trend of $f(x)$ along the axes. If we want to calculate the changing rate/trend of $f(x)$ along any arbitrary direction (360 degrees), then the directional derivative comes into the picture.\n\nThat is: **the directional derivative indicates the changing rate/trend of $f(x)$ along one arbitrary direction.**\n\n### Derivative and gradient\nDefinition of gradient:\n\n$$\ngrad\\,f(x_0, x_1, \\dots, x_n) = \\\n\\Big(\n \\frac{\\partial f}{\\partial x_0}, \\\n \\dots, \\\n \\frac{\\partial f}{\\partial x_i}, \\\n \\dots, \\\n \\frac{\\partial f}{\\partial x_n}\n\\Big)\n$$\n\nThe notion of gradient is created only for answering one question: **in which direction of the function's independent-variable space does it have the maximum changing rate?**\n\nThat is: the function's gradient at one point is such a vector:\n 1. It has the same direction as the directional derivative which has the maximum value at this point (at any single point there is a directional derivative for every direction, over the full 360 degrees);\n 2. And its norm is the value of the directional derivative described in point 1 above;\n \nHere pay attention to the following three points:\n 1. Gradient is a vector, so it has both direction and magnitude;\n 2. It has the same direction as the directional derivative which has the maximum value at that point;\n 3. 
The value of the gradient is the value of above point 2 described directional derivative;\n \n### Gradient descent algorithm\nSince one function has the maximum changing rate along the gradient's direction at one point of the independent variables space, then if we move along the **reversed** direction of the gradient direction, we can reduce the value of the cost function most efficiently.\n\nBut how to move along the **reversed** direction of the gradient direction?\n \nSince gradient and partial derivative are all vectors, then according to the rules of vector operating, we can just decrease correspondingly in each axis, that is:\n\n$\n\\text{repeat until convergence }\\Bigg\\{\n$\n\n$$\nx_0 = x_0 - \\alpha\\frac{\\partial f}{\\partial x_0} \\\\\nx_1 = x_1 - \\alpha\\frac{\\partial f}{\\partial x_1} \\\\\n\\vdots \\\\\nx_n = x_n - \\alpha\\frac{\\partial f}{\\partial x_n}\n$$\n\n$\n\\Bigg\\}\n$\n\n### Credit to [[\u673a\u5668\u5b66\u4e60] ML\u91cd\u8981\u6982\u5ff5\uff1a\u68af\u5ea6\uff08Gradient\uff09\u4e0e\u68af\u5ea6\u4e0b\u964d\u6cd5\uff08Gradient Descent\uff09](https://blog.csdn.net/walilk/article/details/50978864)\n", "meta": {"hexsha": "033d39d56d34ed2cec8377938f5dd505a02b9d76", "size": 6159, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "ml_basics/rdm002_gradient_and_gradient_descent/gradient_and_gradient_descent.ipynb", "max_stars_repo_name": "lnshi/ml-exercises", "max_stars_repo_head_hexsha": "7482ca0f5bb599e6be0a02b50ca1db3c882e715f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2018-10-06T10:32:10.000Z", "max_stars_repo_stars_event_max_datetime": "2019-07-04T14:55:55.000Z", "max_issues_repo_path": "ml_basics/rdm002_gradient_and_gradient_descent/gradient_and_gradient_descent.ipynb", "max_issues_repo_name": "lnshi/ml-exercises", "max_issues_repo_head_hexsha": "7482ca0f5bb599e6be0a02b50ca1db3c882e715f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ml_basics/rdm002_gradient_and_gradient_descent/gradient_and_gradient_descent.ipynb", "max_forks_repo_name": "lnshi/ml-exercises", "max_forks_repo_head_hexsha": "7482ca0f5bb599e6be0a02b50ca1db3c882e715f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-07-03T09:24:41.000Z", "max_forks_repo_forks_event_max_datetime": "2019-07-03T09:24:41.000Z", "avg_line_length": 41.3355704698, "max_line_length": 296, "alphanum_fraction": 0.588407209, "converted": true, "num_tokens": 1240, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9324533051062238, "lm_q2_score": 0.8872045825331214, "lm_q1q2_score": 0.8272768452883966}} {"text": "# Simple Linear Regression with NumPy\n\nIn school, students are taught to draw lines like the following.\n\n$$ y = 2 x + 1$$\n\nThey're taught to pick two values for $x$ and calculate the corresponding values for $y$ using the equation.\nThen they draw a set of axes, plot the points, and then draw a line extending through the two dots on their axes.\n\n\n```python\n# Import matplotlib.\nimport matplotlib.pyplot as plt\n\n# Draw some axes.\nplt.plot([-1, 10], [0, 0], 'k-')\nplt.plot([0, 0], [-1, 10], 'k-')\n\n# Plot the red, blue and green lines.\nplt.plot([1, 1], [-1, 3], 'b:')\nplt.plot([-1, 1], [3, 3], 'r:')\n\n# Plot the two points (1,3) and (2,5).\nplt.plot([1, 2], [3, 5], 'ko')\n# Join them with an (extending) green lines.\nplt.plot([-1, 10], [-1, 21], 'g-')\n\n# Set some reasonable plot limits.\nplt.xlim([-1, 10])\nplt.ylim([-1, 10])\n\n# Show the plot.\nplt.show()\n```\n\n\n
    \n\n\nSimple linear regression is about the opposite problem - what if you have some points and are looking for the equation?\nIt's easy when the points are perfectly on a line already, but usually real-world data has some noise.\nThe data might still look roughly linear, but aren't exactly so.\n\n***\n\n## Example (contrived and simulated)\n\n\n\n#### Scenario\nSuppose you are trying to weigh your suitcase to avoid an airline's extra charges.\nYou don't have a weighing scales, but you do have a spring and some gym-style weights of masses 7KG, 14KG and 21KG.\nYou attach the spring to the wall hook, and mark where the bottom of it hangs.\nYou then hang the 7KG weight on the end and mark where the bottom of the spring is.\nYou repeat this with the 14KG weight and the 21KG weight.\nFinally, you place your case hanging on the spring, and the spring hangs down halfway between the 7KG mark and the 14KG mark.\nIs your case over the 10KG limit set by the airline?\n\n#### Hypothesis\nWhen you look at the marks on the wall, it seems that the 0KG, 7KG, 14KG and 21KG marks are evenly spaced.\nYou wonder if that means your case weighs 10.5KG.\nThat is, you wonder if there is a *linear* relationship between the distance the spring's hook is from its resting position, and the mass on the end of it.\n\n#### Experiment\nYou decide to experiment.\nYou buy some new weights - a 1KG, a 2KG, a 3Kg, all the way up to 20KG.\nYou place them each in turn on the spring and measure the distance the spring moves from the resting position.\nYou tabulate the data and plot them.\n\n#### Analysis\nHere we'll import the Python libraries we need for or investigations below.\n\n\n```python\n# Make matplotlib show interactive plots in the notebook.\n%matplotlib inline\n```\n\n\n```python\n# numpy efficiently deals with numerical multi-dimensional arrays.\nimport numpy as np\n\n# matplotlib is a plotting library, and pyplot is its easy-to-use module.\nimport matplotlib.pyplot as plt\n\n# This just sets the default plot size to be bigger.\nplt.rcParams['figure.figsize'] = (8, 6)\n```\n\nIgnore the next couple of lines where I fake up some data. I'll use the fact that I faked the data to explain some results later. Just pretend that w is an array containing the weight values and d are the corresponding distance measurements.\n\n\n```python\nw = np.arange(0.0, 21.0, 1.0)\nd = 5.0 * w + 10.0 + np.random.normal(0.0, 5.0, w.size)\n```\n\n\n```python\n# Let's have a look at w.\nw\n```\n\n\n\n\n array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12.,\n 13., 14., 15., 16., 17., 18., 19., 20.])\n\n\n\n\n```python\n# Let's have a look at d.\nd\n```\n\n\n\n\n array([ 7.82186059, 14.0940333 , 18.91831809, 29.19790716,\n 28.35552966, 35.13268004, 37.92904728, 49.27309391,\n 51.95939251, 59.00125941, 54.78074124, 74.59207512,\n 82.88496355, 76.9700379 , 77.64418561, 91.332951 ,\n 82.39451551, 88.36237078, 100.88501927, 103.13242505,\n 109.47821768])\n\n\n\nLet's have a look at the data from our experiment.\n\n\n```python\n# Create the plot.\n\nplt.plot(w, d, 'k.')\n\n# Set some properties for the plot.\nplt.xlabel('Weight (KG)')\nplt.ylabel('Distance (CM)')\n\n# Show the plot.\nplt.show()\n```\n\n#### Model\nIt looks like the data might indeed be linear.\nThe points don't exactly fit on a straight line, but they are not far off it.\nWe might put that down to some other factors, such as the air density, or errors, such as in our tape measure.\nThen we can go ahead and see what would be the best line to fit the data. 
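\n\nBefore fitting anything, a quick sanity check on the linearity hypothesis (a small sketch reusing the `w` and `d` arrays defined above) is to look at the correlation between the two variables - a value close to 1.0 supports treating the relationship as linear.\n\n```python\n# Pearson correlation between weight and distance.\n# A value near 1.0 indicates a strong linear relationship.\nprint(np.corrcoef(w, d)[0][1])\n```\n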
\n\n#### Straight lines\nAll straight lines can be expressed in the form $y = mx + c$.\nThe number $m$ is the slope of the line.\nThe slope is how much $y$ increases by when $x$ is increased by 1.0.\nThe number $c$ is the y-intercept of the line.\nIt's the value of $y$ when $x$ is 0.\n\n#### Fitting the model\nTo fit a straight line to the data, we just must pick values for $m$ and $c$.\nThese are called the parameters of the model, and we want to pick the best values possible for the parameters.\nThat is, the best parameter values *given* the data observed.\nBelow we show various lines plotted over the data, with different values for $m$ and $c$.\n\n\n```python\n# Plot w versus d with black dots.\nplt.plot(w, d, 'k.', label=\"Data\")\n\n# Overlay some lines on the plot.\nx = np.arange(0.0, 21.0, 1.0)\nplt.plot(x, 5.0 * x + 10.0, 'r-', label=r\"$5x + 10$\")\nplt.plot(x, 6.0 * x + 5.0, 'g-', label=r\"$6x + 5$\")\nplt.plot(x, 5.0 * x + 15.0, 'b-', label=r\"$5x + 15$\")\n\n# Add a legend.\nplt.legend()\n\n# Add axis labels.\nplt.xlabel('Weight (KG)')\nplt.ylabel('Distance (CM)')\n\n# Show the plot.\nplt.show()\n```\n\n#### Calculating the cost\nYou can see that each of these lines roughly fits the data.\nWhich one is best, and is there another line that is better than all three?\nIs there a \"best\" line?\n\nIt depends how you define the word best.\nLuckily, everyone seems to have settled on what the best means.\nThe best line is the one that minimises the following calculated value.\n\n$$ \\sum_i (y_i - mx_i - c)^2 $$\n\nHere $(x_i, y_i)$ is the $i^{th}$ point in the data set and $\\sum_i$ means to sum over all points. \nThe values of $m$ and $c$ are to be determined.\nWe usually denote the above as $Cost(m, c)$.\n\nWhere does the above calculation come from?\nIt's easy to explain the part in the brackets $(y_i - mx_i - c)$.\nThe corresponding value to $x_i$ in the dataset is $y_i$.\nThese are the measured values.\nThe value $m x_i + c$ is what the model says $y_i$ should have been.\nThe difference between the value that was observed ($y_i$) and the value that the model gives ($m x_i + c$), is $y_i - mx_i - c$.\n\nWhy square that value?\nWell note that the value could be positive or negative, and you sum over all of these values.\nIf we allow the values to be positive or negative, then the positive could cancel the negatives.\nSo, the natural thing to do is to take the absolute value $\\mid y_i - m x_i - c \\mid$.\nWell it turns out that absolute values are a pain to deal with, and instead it was decided to just square the quantity instead, as the square of a number is always positive.\nThere are pros and cons to using the square instead of the absolute value, but the square is used.\nThis is usually called *least squares* fitting.\n\n\n```python\n# Calculate the cost of the lines above for the data above.\ncost = lambda m,c: np.sum([(d[i] - m * w[i] - c)**2 for i in range(w.size)])\n\nprint(\"Cost with m = %5.2f and c = %5.2f: %8.2f\" % (5.0, 10.0, cost(5.0, 10.0)))\nprint(\"Cost with m = %5.2f and c = %5.2f: %8.2f\" % (6.0, 5.0, cost(6.0, 5.0)))\nprint(\"Cost with m = %5.2f and c = %5.2f: %8.2f\" % (5.0, 15.0, cost(5.0, 15.0)))\n```\n\n Cost with m = 5.00 and c = 10.00: 510.73\n Cost with m = 6.00 and c = 5.00: 1739.48\n Cost with m = 5.00 and c = 15.00: 894.32\n\n\n#### Minimising the cost\nWe want to calculate values of $m$ and $c$ that give the lowest value for the cost value above.\nFor our given data set we can plot the cost value/function.\nRecall that the cost is:\n\n$$ Cost(m, c) = 
\\sum_i (y_i - mx_i - c)^2 $$\n\nThis is a function of two variables, $m$ and $c$, so a plot of it is three dimensional.\nSee the **Advanced** section below for the plot.\n\nIn the case of fitting a two-dimensional line to a few data points, we can easily calculate exactly the best values of $m$ and $c$.\nSome of the details are discussed in the **Advanced** section, as they involve calculus, but the resulting code is straight-forward.\nWe first calculate the mean (average) values of our $x$ values and that of our $y$ values.\nThen we subtract the mean of $x$ from each of the $x$ values, and the mean of $y$ from each of the $y$ values.\nThen we take the *dot product* of the new $x$ values and the new $y$ values and divide it by the dot product of the new $x$ values with themselves.\nThat gives us $m$, and we use $m$ to calculate $c$.\n\nRemember that in our dataset $x$ is called $w$ (for weight) and $y$ is called $d$ (for distance).\nWe calculate $m$ and $c$ below.\n\n\n```python\n# Calculate the best values for m and c.\n\n# First calculate the means (a.k.a. averages) of w and d.\nw_avg = np.mean(w)\nd_avg = np.mean(d)\n\n# Subtract means from w and d.\nw_zero = w - w_avg\nd_zero = d - d_avg\n\n# The best m is found by the following calculation.\nm = np.sum(w_zero * d_zero) / np.sum(w_zero * w_zero)\n# Use m from above to calculate the best c.\nc = d_avg - m * w_avg\n\nprint(\"m is %8.6f and c is %6.6f.\" % (m, c))\n```\n\n m is 4.951198 and c is 11.161382.\n\n\nNote that numpy has a function that will perform this calculation for us, called polyfit.\nIt can be used to fit lines in many dimensions.\n\n\n```python\nnp.polyfit(w, d, 1)\n```\n\n\n\n\n array([ 4.95119814, 11.16138172])\n\n\n\n#### Best fit line\nSo, the best values for $m$ and $c$ given our data and using least squares fitting are about $4.95$ for $m$ and about $11.13$ for $c$.\nWe plot this line on top of the data below.\n\n\n```python\n# Plot the best fit line.\nplt.plot(w, d, 'k.', label='Original data')\nplt.plot(w, m * w + c, 'b-', label='Best fit line')\n\n# Add axis labels and a legend.\nplt.xlabel('Weight (KG)')\nplt.ylabel('Distance (CM)')\nplt.legend()\n\n# Show the plot.\nplt.show()\n```\n\nNote that the $Cost$ of the best $m$ and best $c$ is not zero in this case.\n\n\n```python\nprint(\"Cost with m = %5.2f and c = %5.2f: %8.2f\" % (m, c, cost(m, c)))\n```\n\n Cost with m = 4.95 and c = 11.16: 499.37\n\n\n### Summary\nIn this notebook we:\n1. Investigated the data.\n2. Picked a model.\n3. Picked a cost function.\n4. 
Estimated the model parameter values that minimised our cost function.\n\n### Advanced\nIn the following sections we cover some of the more advanced concepts involved in fitting the line.\n\n#### Simulating data\nEarlier in the notebook we glossed over something important: we didn't actually do the weighing and measuring - we faked the data.\nA better term for this is *simulation*, which is an important tool in research, especially when testing methods such as simple linear regression.\n\nWe ran the following two commands to do this:\n\n```python\nw = np.arange(0.0, 21.0, 1.0)\nd = 5.0 * w + 10.0 + np.random.normal(0.0, 5.0, w.size)\n```\n\nThe first command creates a numpy array containing all values between 1.0 and 21.0 (including 1.0 but not including 21.0) in steps of 1.0.\n\n\n```python\n np.arange(0.0, 21.0, 1.0)\n```\n\n\n\n\n array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12.,\n 13., 14., 15., 16., 17., 18., 19., 20.])\n\n\n\nThe second command is more complex.\nFirst it takes the values in the `w` array, multiplies each by 5.0 and then adds 10.0.\n\n\n```python\n5.0 * w + 10.0\n```\n\n\n\n\n array([ 10., 15., 20., 25., 30., 35., 40., 45., 50., 55., 60.,\n 65., 70., 75., 80., 85., 90., 95., 100., 105., 110.])\n\n\n\nIt then adds an array of the same length containing random values.\nThe values are taken from what is called the normal distribution with mean 0.0 and standard deviation 5.0.\n\n\n```python\nnp.random.normal(0.0, 5.0, w.size)\n```\n\n\n\n\n array([ 5.21128808, 2.13198653, -0.74232395, 7.45955589,\n 0.98834441, 11.1086927 , 9.80473266, -1.24492606,\n 0.90826174, 1.09293531, -3.85817768, 6.31678901,\n -3.89289611, -0.8390512 , 5.41083856, 14.17220137,\n 1.12206625, 0.84775242, 1.1699498 , -5.79200309,\n -11.09390936])\n\n\n\nThe normal distribution follows a bell shaped curve.\nThe curve is centred on the mean (0.0 in this case) and its general width is determined by the standard deviation (5.0 in this case).\n\n\n```python\n# Plot the normal distrution.\nnormpdf = lambda mu, s, x: (1.0 / (2.0 * np.pi * s**2)) * np.exp(-((x - mu)**2)/(2 * s**2))\n\nx = np.linspace(-20.0, 20.0, 100)\ny = normpdf(0.0, 5.0, x)\nplt.plot(x, y)\n\nplt.show()\n```\n\nThe idea here is to add a little bit of randomness to the measurements of the distance.\nThe random values are entered around 0.0, with a greater than 99% chance they're within the range -15.0 to 15.0.\nThe normal distribution is used because of the [Central Limit Theorem](https://en.wikipedia.org/wiki/Central_limit_theorem) which basically states that when a bunch of random effects happen together the outcome looks roughly like the normal distribution. (Don't quote me on that!)\n\n#### Plotting the cost function\nWe can plot the cost function for a given set of data points.\nRecall that the cost function involves two variables: $m$ and $c$, and that it looks like this:\n\n$$ Cost(m,c) = \\sum_i (y_i - mx_i - c)^2 $$\n\nTo plot a function of two variables we need a 3D plot.\nIt can be difficult to get the viewing angle right in 3D plots, but below you can just about make out that there is a low point on the graph around the $(m, c) = (\\approx 5.0, \\approx 10.0)$ point. 
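\n\nAs a rough numerical cross-check (a small sketch reusing the `cost` function defined earlier), we can evaluate the cost on a coarse grid of candidate values and report the pair with the smallest cost - it should land close to the best fit values found above.\n\n```python\n# Evaluate the cost on a coarse grid of (m, c) candidates.\nms = np.linspace(4.5, 5.5, 21)\ncs = np.linspace(0.0, 20.0, 21)\ncosts = np.array([[cost(mm, cc) for cc in cs] for mm in ms])\n\n# Locate the smallest cost on the grid.\ni, j = np.unravel_index(np.argmin(costs), costs.shape)\nprint('Lowest cost on the grid at m = %5.2f and c = %5.2f' % (ms[i], cs[j]))\n```\n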
\n\n\n```python\n# This code is a little bit involved - don't worry about it.\n# Just look at the plot below.\n\nfrom mpl_toolkits.mplot3d import Axes3D\n\n# Ask pyplot for a 3D set of axes.\nax = plt.figure().gca(projection='3d')\n\n# Make data.\nmvals = np.linspace(4.5, 5.5, 100)\ncvals = np.linspace(0.0, 20.0, 100)\n\n# Fill the grid.\nmvals, cvals = np.meshgrid(mvals, cvals)\n\n# Flatten the meshes for convenience.\nmflat = np.ravel(mvals)\ncflat = np.ravel(cvals)\n\n# Calculate the cost of each point on the grid.\nC = [np.sum([(d[i] - m * w[i] - c)**2 for i in range(w.size)]) for m, c in zip(mflat, cflat)]\nC = np.array(C).reshape(mvals.shape)\n\n# Plot the surface.\nsurf = ax.plot_surface(mvals, cvals, C)\n\n# Set the axis labels.\nax.set_xlabel('$m$', fontsize=16)\nax.set_ylabel('$c$', fontsize=16)\nax.set_zlabel('$Cost$', fontsize=16)\n\n# Show the plot.\nplt.show()\n```\n\n#### Coefficient of determination\nEarlier we used a cost function to determine the best line to fit the data.\nUsually the data do not perfectly fit on the best fit line, and so the cost is greater than 0.\nA quantity closely related to the cost is the *coefficient of determination*, also known as the *R-squared* value.\nThe purpose of the R-squared value is to measure how much of the variance in $y$ is determined by $x$.\n\nFor instance, in our example the main thing that affects the distance the spring is hanging down is the weight on the end.\nIt's not the only thing that affects it though.\nThe room temperature and density of the air at the time of measurement probably affect it a little.\nThe age of the spring, and how many times it has been stretched previously, probably also have a small effect.\nThere are probably lots of unknown factors affecting the measurement.\n\nThe R-squared value estimates how much of the change in the $y$ value is due to the change in the $x$ value compared to all of the other factors affecting the $y$ value.\nIt is calculated as follows:\n\n$$ R^2 = 1 - \\frac{\\sum_i (y_i - m x_i - c)^2}{\\sum_i (y_i - \\bar{y})^2} $$\n\nNote that sometimes the [*Pearson correlation coefficient*](https://en.wikipedia.org/wiki/Pearson_correlation_coefficient) is used instead of the R-squared value.\nYou can just square the Pearson coefficient to get the R-squared value.\n\n\n```python\n# Calculate the R-squared value for our data set.\nrsq = 1.0 - (np.sum((d - m * w - c)**2)/np.sum((d - d_avg)**2))\n\nprint(\"The R-squared value is %6.4f\" % rsq)\n```\n\n    The R-squared value is 0.9742\n\n\n\n```python\n# The same value using numpy.\nnp.corrcoef(w, d)[0][1]**2\n```\n\n\n\n\n    0.9742264265116457\n\n\n\n#### The minimisation calculations\nEarlier we used the following calculation to calculate $m$ and $c$ for the line of best fit.\nThe code was:\n\n```python\nw_zero = w - np.mean(w)\nd_zero = d - np.mean(d)\n\nm = np.sum(w_zero * d_zero) / np.sum(w_zero * w_zero)\nc = np.mean(d) - m * np.mean(w)\n```\n\nIn mathematical notation we write this as:\n\n$$ m = \\frac{\\sum_i (x_i - \\bar{x}) (y_i - \\bar{y})}{\\sum_i (x_i - \\bar{x})^2} \\qquad \\textrm{and} \\qquad c = \\bar{y} - m \\bar{x} $$\n\nwhere $\\bar{x}$ is the mean of $x$ and $\\bar{y}$ that of $y$.\n\nWhere did these equations come from?\nThey were derived using calculus.\nWe'll give a brief overview of it here, but feel free to gloss over this section if it's not for you.\nIf you can understand the first part, where we calculate the partial derivatives, then great!\n\nThe calculations look complex, but if you know basic differentiation, including
the chain rule, you can easily derive them.\nFirst, we differentiate the cost function with respect to $m$ while treating $c$ as a constant, called a partial derivative.\nWe write this as $\\frac{\\partial Cost}{\\partial m}$, using $\\partial$ as opposed to $d$ to signify that we are treating the other variable as a constant.\nWe then do the same with respect to $c$ while treating $m$ as a constant.\nWe set both equal to zero, and then solve them as two simultaneous equations in two variables.\n\n###### Calculate the partial derivatives\n$$\n\\begin{align}\nCost(m, c) &= \\sum_i (y_i - mx_i - c)^2 \\\\[1cm]\n\\frac{\\partial Cost}{\\partial m} &= \\sum 2(y_i - m x_i -c)(-x_i) \\\\\n       &= -2 \\sum x_i (y_i - m x_i -c) \\\\[0.5cm]\n\\frac{\\partial Cost}{\\partial c} & = \\sum 2(y_i - m x_i -c)(-1) \\\\\n  & = -2 \\sum (y_i - m x_i -c) \\\\\n\\end{align}\n$$\n\n###### Set to zero\n$$\n\\begin{align}\n& \\frac{\\partial Cost}{\\partial m} = 0 \\\\[0.2cm]\n& \\Rightarrow -2 \\sum x_i (y_i - m x_i -c) = 0 \\\\\n& \\Rightarrow \\sum (x_i y_i - m x_i x_i - x_i c) = 0 \\\\\n& \\Rightarrow \\sum x_i y_i - \\sum_i m x_i x_i - \\sum x_i c = 0 \\\\\n& \\Rightarrow m \\sum x_i x_i  = \\sum x_i y_i - c \\sum x_i \\\\[0.2cm]\n& \\Rightarrow m = \\frac{\\sum x_i y_i - c \\sum x_i}{\\sum x_i x_i} \\\\[0.5cm]\n& \\frac{\\partial Cost}{\\partial c} = 0 \\\\[0.2cm]\n& \\Rightarrow -2 \\sum (y_i - m x_i - c) = 0 \\\\\n& \\Rightarrow \\sum y_i - \\sum_i m x_i - \\sum c = 0 \\\\\n& \\Rightarrow \\sum y_i - m \\sum_i x_i = c \\sum 1 \\\\\n& \\Rightarrow c = \\frac{\\sum y_i - m \\sum x_i}{\\sum 1} \\\\\n& \\Rightarrow c = \\frac{\\sum y_i}{\\sum 1} - m \\frac{\\sum x_i}{\\sum 1} \\\\[0.2cm]\n& \\Rightarrow c = \\bar{y} - m \\bar{x} \\\\\n\\end{align}\n$$\n\n###### Solve the simultaneous equations\nHere we let $n$ be the length of $x$, which is also the length of $y$.\n\n$$\n\\begin{align}\n& m = \\frac{\\sum_i x_i y_i - c \\sum_i x_i}{\\sum_i x_i x_i} \\\\[0.2cm]\n& \\Rightarrow m = \\frac{\\sum x_i y_i - (\\bar{y} - m \\bar{x}) \\sum x_i}{\\sum x_i x_i} \\\\\n& \\Rightarrow m \\sum x_i x_i = \\sum x_i y_i - \\bar{y} \\sum x_i + m \\bar{x} \\sum x_i \\\\\n& \\Rightarrow m \\sum x_i x_i - m \\bar{x} \\sum x_i = \\sum x_i y_i - \\bar{y} \\sum x_i \\\\[0.3cm]\n& \\Rightarrow m = \\frac{\\sum x_i y_i - \\bar{y} \\sum x_i}{\\sum x_i x_i - \\bar{x} \\sum x_i} \\\\[0.2cm]\n& \\Rightarrow m = \\frac{\\sum (x_i y_i) - n \\bar{y} \\bar{x}}{\\sum (x_i x_i) - n \\bar{x} \\bar{x}} \\\\\n& \\Rightarrow m = \\frac{\\sum (x_i y_i) - n \\bar{y} \\bar{x} - n \\bar{y} \\bar{x} + n \\bar{y} \\bar{x}}{\\sum (x_i x_i) - n \\bar{x} \\bar{x} - n \\bar{x} \\bar{x} + n \\bar{x} \\bar{x}} \\\\\n& \\Rightarrow m = \\frac{\\sum (x_i y_i) - \\sum y_i \\bar{x} - \\sum \\bar{y} x_i + n \\bar{y} \\bar{x}}{\\sum (x_i x_i) - \\sum x_i \\bar{x} - \\sum \\bar{x} x_i + n \\bar{x} \\bar{x}} \\\\\n& \\Rightarrow m = \\frac{\\sum_i (x_i - \\bar{x}) (y_i - \\bar{y})}{\\sum_i (x_i - \\bar{x})^2} \\\\\n\\end{align}\n$$\n\n#### End\n", "meta": {"hexsha": "b5b7ca7d05b37fe5096aa1e600ac5504b3edf981", "size": 256724, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Lectures/simple-linear-regression.ipynb", "max_stars_repo_name": "SomanathanSubramaniyan/fundamentals-of-data-analysis", "max_stars_repo_head_hexsha": "1b2d49a84c9e40f2db307b4478772e69b082a449", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": 
"Lectures/simple-linear-regression.ipynb", "max_issues_repo_name": "SomanathanSubramaniyan/fundamentals-of-data-analysis", "max_issues_repo_head_hexsha": "1b2d49a84c9e40f2db307b4478772e69b082a449", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Lectures/simple-linear-regression.ipynb", "max_forks_repo_name": "SomanathanSubramaniyan/fundamentals-of-data-analysis", "max_forks_repo_head_hexsha": "1b2d49a84c9e40f2db307b4478772e69b082a449", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 265.7598343685, "max_line_length": 112552, "alphanum_fraction": 0.9146398467, "converted": true, "num_tokens": 6269, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8962513786759491, "lm_q2_score": 0.9230391690674339, "lm_q1q2_score": 0.82727512784859}} {"text": "# Advanced Expression Manipulation\n\n\n```\nimport sys\nimport os\nsys.path.insert(1, os.path.join(os.path.pardir, \"ipython_doctester\"))\nfrom sympy import *\nfrom ipython_doctester import test\nx, y, z = symbols('x y z')\n```\n\nFor each exercise, fill in the function according to its docstring. Execute the cell to see if you did it right. \n\n## Creating expressions from classes\n\nCreate the following objects without using any mathematical operators like `+`, `-`, `*`, `/`, or `**` by explicitly using the classes `Add`, `Mul`, and `Pow`. You may use `x` instead of `Symbol('x')` and `4` instead of `Integer(4)`.\n\n$$x^2 + 4xyz$$\n$$x^{(x^y)}$$\n$$x - \\frac{y}{z}$$\n\n\n\n```\n@test\ndef explicit_classes1():\n \"\"\"\n Returns the expression x**2 + 4*x*y*z, built using SymPy classes explicitly.\n\n >>> explicit_classes1()\n x**2 + 4*x*y*z\n \"\"\"\n\n```\n\n\n```\n@test\ndef explicit_classes2():\n \"\"\"\n Returns the expression x**(x**y), built using SymPy classes explicitly.\n\n >>> explicit_classes2()\n x**(x**y)\n \"\"\"\n\n```\n\n\n```\n@test\ndef explicit_classes3():\n \"\"\"\n Returns the expression x - y/z, built using SymPy classes explicitly.\n\n >>> explicit_classes3()\n x - y/z\n \"\"\"\n\n```\n\n## Nested args\n\n\n```\nexpr = x**2 - y*(2**(x + 3) + z)\n```\n\nUse nested `.args` calls to get the 3 in expr.\n\n\n```\n@test\ndef nested_args():\n \"\"\"\n Get the 3 in the above expression.\n\n >>> nested_args()\n 3\n \"\"\"\n\n```\n\n## Traversal \n\nWrite a post-order traversal function that prints each node.\n\n\n```\n@test\ndef post(expr):\n \"\"\"\n Post-order traversal\n\n >>> expr = x**2 - y*(2**(x + 3) + z)\n >>> post(expr)\n -1\n y\n 2\n 3\n x\n x + 3\n 2**(x + 3)\n z\n 2**(x + 3) + z\n -y*(2**(x + 3) + z)\n x\n 2\n x**2\n x**2 - y*(2**(x + 3) + z)\n \"\"\"\n\n```\n\n\n```\nfor i in postorder_traversal(expr):\n print i\n```\n", "meta": {"hexsha": "3b252b71f9fcad47ffb684bc8e74ebf103494962", "size": 4744, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "tutorial_exercises/Advanced Expression Manipulation.ipynb", "max_stars_repo_name": "certik/scipy-2013-tutorial", "max_stars_repo_head_hexsha": "26a1cab3a16402afdc20088cedf47acd9bc58483", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 23, "max_stars_repo_stars_event_min_datetime": "2015-02-28T08:53:05.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-05T05:37:59.000Z", "max_issues_repo_path": "sympy/Advanced Expression 
Manipulation.ipynb", "max_issues_repo_name": "certik/scipy-in-13", "max_issues_repo_head_hexsha": "418c139ab6e1b0c9acd53e7e1a02b8b930005096", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-04-17T15:05:46.000Z", "max_issues_repo_issues_event_max_datetime": "2021-04-17T15:05:46.000Z", "max_forks_repo_path": "sympy/Advanced Expression Manipulation.ipynb", "max_forks_repo_name": "certik/scipy-in-13", "max_forks_repo_head_hexsha": "418c139ab6e1b0c9acd53e7e1a02b8b930005096", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 14, "max_forks_repo_forks_event_min_datetime": "2015-03-11T00:25:21.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-25T14:52:40.000Z", "avg_line_length": 22.2723004695, "max_line_length": 245, "alphanum_fraction": 0.410623946, "converted": true, "num_tokens": 575, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8962513814471134, "lm_q2_score": 0.9230391627161538, "lm_q1q2_score": 0.8272751247141397}} {"text": "\n\n## ROW REDUCTION USING PYTHON \n\n\n```\n## GETTING OUR TOOLS READY\n\n#importing python's package called sympy as sym\nimport sympy as sym\n```\n\n\n```\n## Creating a matrix using sympy and matrix is named A\n## sym.Matrix() is the syntax\n\n#1.First you write syntax as A=sym.Matrix()\n#2.And inside brackets you write 2-D matrix as A=sym.Matrix([[],[],[],[]])\n#3.And in every large brackets put values of each row as I have written below.\n\n\n\n\n## Creating the matrix from given question\nA=sym.Matrix([[0,-3,-6,4,9],\n [-1,-2,-1,3,1],\n [-2,-3,0,3,-1],\n [1,4,5,-9,-7]])\n```\n\n\n```\n## finding the row reduced matrix\n\n## Here the syntax A.rref(pivots=False) contains \n#first element A as our matrix and rref is attribute for finding \n#row reduced form and yeah don't worry about pivots=False\n#I did it to make the answer seem more clear.You could try using pivots=True\n\n\nA.rref(pivots=False)\n```\n\n\n\n\n Matrix([\n [1, 0, -3, 0, 5],\n [0, 1, 2, 0, -3],\n [0, 0, 0, 1, 0],\n [0, 0, 0, 0, 0]])\n\n\n\n\n```\n\n```\n", "meta": {"hexsha": "a7015ab43d4be42c268679dbe5ddde64b3c8d443", "size": 3602, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Row_reduce_.ipynb", "max_stars_repo_name": "anjanpa/LINEAR-ALGEBRA", "max_stars_repo_head_hexsha": "9add763b829edc5d05f50e2c1dd5e7cd1738a16d", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Row_reduce_.ipynb", "max_issues_repo_name": "anjanpa/LINEAR-ALGEBRA", "max_issues_repo_head_hexsha": "9add763b829edc5d05f50e2c1dd5e7cd1738a16d", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Row_reduce_.ipynb", "max_forks_repo_name": "anjanpa/LINEAR-ALGEBRA", "max_forks_repo_head_hexsha": "9add763b829edc5d05f50e2c1dd5e7cd1738a16d", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 26.4852941176, "max_line_length": 232, "alphanum_fraction": 0.4403109384, "converted": true, "num_tokens": 371, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9643214460461698, "lm_q2_score": 0.857768108626046, "lm_q1q2_score": 0.8271641828825568}} {"text": "# Boolean functions and expressions\n\nThe `sympy` package has a [logic](http://docs.sympy.org/latest/modules/logic.html) module. With it we can carry out some simplifications\n\n\n```python\nfrom sympy import *\n```\n\nWe define the symbols we are going to use. Suprema and infima are entered with the logical or and and. We can also use `And` and `Or` as functions. For negation we use `Not` or `~`. We also have `Xor`, `Nand`, `Implies` (which can be used in prefix form with `>>`) and `Equivalent`.\n\n\n```python\nx, y, z = symbols(\"x,y,z\")\n```\n\n\n```python\np = (x | y) & ~ z\n```\n\n\n```python\npprint(p)\n```\n\n    \u00acz \u2227 (x \u2228 y)\n\n\nThe conjunctive and disjunctive normal forms can be computed as follows\n\n\n```python\nto_cnf(p)\n```\n\n\n\n\n    And(Not(z), Or(x, y))\n\n\n\n\n```python\nto_dnf(p)\n```\n\n\n\n\n    Or(And(Not(z), x), And(Not(z), y))\n\n\n\nWe can also simplify expressions\n\n\n```python\nsimplify(x | ~x)\n```\n\n\n\n\n    True\n\n\n\nOr assign truth values to the variables\n\n\n```python\np.xreplace({x:True})\n```\n\n\n\n\n    Not(z)\n\n\n\nThis lets us build our own truth tables\n\n\n```python\np.free_symbols\n```\n\n\n\n\n    {x, y, z}\n\n\n\n\n```python\np = Or(x,And(x,y))\n```\n\n\n```python\nfrom IPython.display import HTML,display\n\n```\n\n\n```python\nfrom itertools import product as cartes\n\n# Build an HTML truth table for p, colouring each cell by its truth value.\ncolores = ['LightCoral', 'Aquamarine']\ntabla = '<table>'\ntabla = tabla + '<tr><th>$' + latex(x) + '$</th><th>$' + latex(y) + '$</th><th>$' + latex(p) + '$</th></tr>'\nfor t in cartes({True, False}, repeat=2):\n    v = dict(zip((x, y), t))\n    tabla = tabla + '<tr>'\n    tabla = tabla + '<td bgcolor=' + colores[v[x]] + '>' + str(v[x]) + '</td>'\n    tabla = tabla + '<td bgcolor=' + colores[v[y]] + '>' + str(v[y]) + '</td>'\n    tabla = tabla + '<td bgcolor=' + colores[bool(p.xreplace(v))] + '>' + str(p.xreplace(v)) + '</td>'\n    tabla = tabla + '</tr>'\ntabla = tabla + '</table>'\ndisplay(HTML(tabla))\n```\n\n\n
    $x$   $y$   $x \\vee \\left(x \\wedge y\\right)$\n    False   False   False\n    False   True   False\n    True   False   True\n    True   True   True
\n\n\nOne way to check that two expressions are equivalent is the following\n\n\n```python\nEquivalent(simplify(p), simplify(x))\n```\n\n\n\n\n    True\n\n\n\nLet us now see how to find a simplified version of a Boolean function given by its minterms. Apparently `SOPform` already does some simplification using the Quine-McCluskey algorithm\n\n\n```python\np=SOPform([x,y,z],[[0,0,1],[0,1,0],[0,1,1],[1,1,0],[1,0,0],[1,0,1]])\n```\n\n\n```python\np\n```\n\n\n\n\n    Or(And(Not(x), y), And(Not(x), z), And(Not(y), z), And(Not(z), x), And(Not(z), y))\n\n\n\nUsing `sympy` we can print a Boolean expression in a friendlier form\n\n\n```python\npprint(p)\n```\n\n    (x \u2227 \u00acz) \u2228 (y \u2227 \u00acx) \u2228 (y \u2227 \u00acz) \u2228 (z \u2227 \u00acx) \u2228 (z \u2227 \u00acy)\n\n\nThe commands `simplify` or `simplify_logic` can simplify it even further\n\n\n```python\npprint(simplify(p))\n```\n\n    (x \u2227 \u00acy) \u2228 (x \u2227 \u00acz) \u2228 (y \u2227 \u00acx) \u2228 (z \u2227 \u00acy)\n\n\n\n```python\npprint(simplify_logic(p))\n```\n\n    (x \u2227 \u00acy) \u2228 (x \u2227 \u00acz) \u2228 (y \u2227 \u00acx) \u2228 (z \u2227 \u00acy)\n\n\nIn fact, `p` can be written in an even more compact form. To do so we will use the espresso algorithm, which is implemented in the `pyeda` package\n\n\n```python\nfrom pyeda.inter import *\n```\n\nThis package does not accept the variables defined with `symbols`, so we declare Boolean variables with `exprvar`\n\n\n```python\nx,y,z = map(exprvar,\"xyz\")\n```\n\n\n```python\np=SOPform([x,y,z],[[0,0,1],[0,1,0],[0,1,1],[1,1,0],[1,0,0],[1,0,1]])\n```\n\nAnother problem is that the output of `SOPform` is not a `pyeda` expression. We can fix this by converting it to a string and parsing it back in `pyeda`\n\n\n```python\np=expr(str(p))\n```\n\nNow we can use the *espresso* simplifier implemented in `pyeda`\n\n\n```python\npm, =espresso_exprs(p)\n```\n\n\n```python\npm\n```\n\n\n\n\n    Or(And(x, ~z), And(~x, y), And(~y, z))\n\n\n\nAnd we can check that it is shorter than the output that `sympy` gave.
To write it in a more \"readable\" form we again use `sympy`'s `pprint`, but for that we need to convert our `pyeda` expression into a `sympy` one\n\n\n```python\npprint(sympify(pm))\n```\n\n    (x \u2227 \u00acz) \u2228 (y \u2227 \u00acx) \u2228 (z \u2227 \u00acy)\n\n\nWe could have defined `p` directly from a truth table\n\n\n```python\np=truthtable([x,y,z], \"01111110\")\n```\n\n\n```python\npm, = espresso_tts(p)\n```\n\n\n```python\npprint(sympify(pm))\n```\n\n    (x \u2227 \u00acz) \u2228 (y \u2227 \u00acx) \u2228 (z \u2227 \u00acy)\n\n\nThe truth table of an expression is obtained as follows\n\n\n```python\nexpr2truthtable(pm)\n```\n\n\n\n\n    z y x\n    0 0 0 : 0\n    0 0 1 : 1\n    0 1 0 : 1\n    0 1 1 : 1\n    1 0 0 : 1\n    1 0 1 : 1\n    1 1 0 : 1\n    1 1 1 : 0\n\n\n\nLet us look at an analogous example with more variables, and along the way show how to define vectors of variables \n\n\n```python\nX = ttvars('x', 4)\nf = truthtable(X, \"0111111111111110\")\n```\n\n\n```python\nfm, = espresso_tts(f)\n```\n\n\n```python\nfm\n```\n\n\n\n\n    Or(And(~x[0], x[1]), And(~x[1], x[2]), And(x[0], ~x[3]), And(~x[2], x[3]))\n\n\n\n\n```python\nexpr2truthtable(fm)\n```\n\n\n\n\n    x[3] x[2] x[1] x[0]\n    0 0 0 0 : 0\n    0 0 0 1 : 1\n    0 0 1 0 : 1\n    0 0 1 1 : 1\n    0 1 0 0 : 1\n    0 1 0 1 : 1\n    0 1 1 0 : 1\n    0 1 1 1 : 1\n    1 0 0 0 : 1\n    1 0 0 1 : 1\n    1 0 1 0 : 1\n    1 0 1 1 : 1\n    1 1 0 0 : 1\n    1 1 0 1 : 1\n    1 1 1 0 : 1\n    1 1 1 1 : 0\n\n\n\n### A simplification example\n\nLet us see that the exclusive or, with the definition $x\\oplus y=(x\\wedge \\neg y)\\vee (\\neg x\\wedge y)$, is associative\n\n\n```python\nx, y, z = map(exprvar,\"xyz\")\n```\n\n\n```python\nf = lambda x,y : Or(And(x,~ y),And(~x,y))\n```\n\n\n```python\nf(x,y)\n```\n\n\n\n\n    Or(And(x, ~y), And(~x, y))\n\n\n\n\n```python\nexpr2truthtable(f(x,y))\n```\n\n\n\n\n    y x\n    0 0 : 0\n    0 1 : 1\n    1 0 : 1\n    1 1 : 0\n\n\n\n\n```python\nf(x,y).equivalent(Xor(x,y))\n```\n\n\n\n\n    True\n\n\n\nLet us check that indeed $x\\oplus(y\\oplus z)=(x\\oplus y)\\oplus z$\n\n\n```python\npprint(simplify_logic(f(x,f(y,z))))\n```\n\n    (x \u2227 y \u2227 z) \u2228 (x \u2227 \u00acy \u2227 \u00acz) \u2228 (y \u2227 \u00acx \u2227 \u00acz) \u2228 (z \u2227 \u00acx \u2227 \u00acy)\n\n\n\n```python\npprint(simplify_logic(f(f(x,y),z)))\n```\n\n    (x \u2227 y \u2227 z) \u2228 (x \u2227 \u00acy \u2227 \u00acz) \u2228 (y \u2227 \u00acx \u2227 \u00acz) \u2228 (z \u2227 \u00acx \u2227 \u00acy)\n\n\n\n```python\na= f(f(x,y),z)\nb= f(x,f(y,z))\n```\n\n\n```python\na.equivalent(b)\n```\n\n\n\n\n    True\n\n\n\nWe can write a function that turns a minterm into a pyeda expression\n\n\n```python\ndef minterm2expr(l,v):\n    n = len(l)\n    vv=v.copy()\n    for i in range(n):\n        if not(l[i]):\n            vv[i]=Not(vv[i])\n    return And(*vv)\n```\n\n\n```python\nx,y,z,t = map(exprvar,\"xyzt\")\n```\n\n\n```python\nminterm2expr([0,1,0,1],[x,y,z,t])\n```\n\n\n\n\n    And(~x, y, ~z, t)\n\n\n\n\n```python\ndef minterms2expr(l,v):\n    return Or(*[minterm2expr(a,v) for a in l])\n```\n\n\n```python\nhh2=minterms2expr([[0,0,0,0],[0,0,1,0],[0,1,0,0],[0,1,1,0],[0,1,1,1],[1,0,0,0],[1,0,1,0],[1,1,0,0]],[x,y,z,t])\n```\n\n\n```python\nhh2\n```\n\n\n\n\n    Or(And(x, y, ~z, ~t), And(~x, ~y, z, ~t), And(~x, y, ~z, ~t), And(~x, y, z, ~t), And(~x, y, z, t), And(x, ~y, ~z, ~t), And(x, ~y, z, ~t), And(~x, ~y, ~z, ~t))\n\n\n\n\n```python\npprint(sympify(hh2))\n```\n\n    (t \u2227 y \u2227 z \u2227 \u00acx) \u2228 (x \u2227 y \u2227 \u00act \u2227 \u00acz) \u2228 (x \u2227 z \u2227 \u00act \u2227 \u00acy) \u2228 (x \u2227 
\u00act \u2227 \u00acy \u2227 \u00acz) \n \u2228 (y \u2227 z \u2227 \u00act \u2227 \u00acx) \u2228 (y \u2227 \u00act \u2227 \u00acx \u2227 \u00acz) \u2228 (z \u2227 \u00act \u2227 \u00acx \u2227 \u00acy) \u2228 (\u00act \u2227 \u00acx \u2227 \u00acy \n \u2227 \u00acz)\n\n\nY ahora la podemos simplificar\n\n\n```python\nsh2, = espresso_exprs(hh2)\n```\n\n\n```python\nsh2\n```\n\n\n\n\n Or(And(~z, ~t), And(~y, ~t), And(~x, y, z))\n\n\n", "meta": {"hexsha": "8ae466315b9480a2a5e34656b610e3ee044fdab9", "size": 21262, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Funciones Booleanas/Boole.ipynb", "max_stars_repo_name": "lmd-ugr/LMD", "max_stars_repo_head_hexsha": "677033858074fc31e0a65885ea424f8ae05591b8", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 11, "max_stars_repo_stars_event_min_datetime": "2016-02-24T09:29:47.000Z", "max_stars_repo_stars_event_max_datetime": "2021-03-23T15:29:52.000Z", "max_issues_repo_path": "Funciones Booleanas/Boole.ipynb", "max_issues_repo_name": "pedritomelenas/LMD", "max_issues_repo_head_hexsha": "cf49271dcb05b1c39339a10a933d30c67b9445f3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Funciones Booleanas/Boole.ipynb", "max_forks_repo_name": "pedritomelenas/LMD", "max_forks_repo_head_hexsha": "cf49271dcb05b1c39339a10a933d30c67b9445f3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2016-05-09T14:46:11.000Z", "max_forks_repo_forks_event_max_datetime": "2016-05-09T14:46:11.000Z", "avg_line_length": 19.5243342516, "max_line_length": 611, "alphanum_fraction": 0.4651961245, "converted": true, "num_tokens": 3013, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9184802484881361, "lm_q2_score": 0.9005297894548548, "lm_q1q2_score": 0.827118824789464}} {"text": "# Modeling and Simulation in Python\n\nSymPy code for Chapter 16\n\nCopyright 2017 Allen Downey\n\nLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)\n\n\n### Mixing liquids\n\nWe can figure out the final temperature of a mixture by setting the total heat flow to zero and then solving for $T$.\n\n\n```python\nfrom sympy import *\n\ninit_printing() \n```\n\n\n```python\nC1, C2, T1, T2, T = symbols('C1 C2 T1 T2 T')\n\neq = Eq(C1 * (T - T1) + C2 * (T - T2), 0)\neq\n```\n\n\n```python\nsolve(eq, T)\n```\n\n### Analysis\n\nWe can use SymPy to solve the cooling differential equation.\n\n\n```python\nT_init, T_env, r, t = symbols('T_init T_env r t')\nT = Function('T')\n\neqn = Eq(diff(T(t), t), -r * (T(t) - T_env))\neqn\n```\n\nHere's the general solution:\n\n\n```python\nsolution_eq = dsolve(eqn)\nsolution_eq\n```\n\n\n```python\ngeneral = solution_eq.rhs\ngeneral\n```\n\nWe can use the initial condition to solve for $C_1$. 
First we evaluate the general solution at $t=0$\n\n\n```python\nat0 = general.subs(t, 0)\nat0\n```\n\nNow we set $T(0) = T_{init}$ and solve for $C_1$\n\n\n```python\nsolutions = solve(Eq(at0, T_init), C1)\nvalue_of_C1 = solutions[0]\nvalue_of_C1\n```\n\nThen we plug the result into the general solution to get the particular solution:\n\n\n```python\nparticular = general.subs(C1, value_of_C1)\nparticular\n```\n\nWe use a similar process to estimate $r$ based on the observation $T(t_{end}) = T_{end}$\n\n\n```python\nt_end, T_end = symbols('t_end T_end')\n```\n\nHere's the particular solution evaluated at $t_{end}$\n\n\n```python\nat_end = particular.subs(t, t_end)\nat_end\n```\n\nNow we set $T(t_{end}) = T_{end}$ and solve for $r$\n\n\n```python\nsolutions = solve(Eq(at_end, T_end), r)\nvalue_of_r = solutions[0]\nvalue_of_r\n```\n\nWe can use `evalf` to plug in numbers for the symbols. The result is a SymPy float, which we have to convert to a Python float.\n\n\n```python\nsubs = dict(t_end=30, T_end=70, T_init=90, T_env=22)\nr_coffee2 = value_of_r.evalf(subs=subs)\ntype(r_coffee2)\n```\n\n\n\n\n sympy.core.numbers.Float\n\n\n\n\n```python\nr_coffee2 = float(r_coffee2)\nr_coffee2\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "b60b692d47ee26780b9afdebdc2ae837e91065f8", "size": 23553, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "code/chap16sympy.ipynb", "max_stars_repo_name": "SSModelGit/ModSimPy", "max_stars_repo_head_hexsha": "4d1e3d8c3b878ea876e25e6a74509535f685f338", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "code/chap16sympy.ipynb", "max_issues_repo_name": "SSModelGit/ModSimPy", "max_issues_repo_head_hexsha": "4d1e3d8c3b878ea876e25e6a74509535f685f338", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "code/chap16sympy.ipynb", "max_forks_repo_name": "SSModelGit/ModSimPy", "max_forks_repo_head_hexsha": "4d1e3d8c3b878ea876e25e6a74509535f685f338", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 50.4346895075, "max_line_length": 2097, "alphanum_fraction": 0.7441514881, "converted": true, "num_tokens": 649, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9184802417938535, "lm_q2_score": 0.900529781446146, "lm_q1q2_score": 0.8271188114052223}} {"text": "### 1. measure linear separability\n\nhttps://www.python-course.eu/linear_discriminant_analysis.php\n\nGiven mxn dimensional input $X_{mn}$, and expecting 1xn dimensional output $y_{1n}$, the Fishers Linear Discriminant Analysis (LDA) searches for the **linear** projection parameterized by $w_{m1}$, noted as $y_{1n} = w^T_{m1} * X_{mn}$, where the **separability** of the classes is maximized. 
The **separability** of the classes means that the prediction for a sample of a class should be closer to its ground truth class than to the other classes.\n\n********\nConsider the optimization as a **least squares regression problem** from $X_{mn}$ to output $y_{1n}$, the regression loss is:\n\n$\\begin{align}\n(1) Loss_w &= \\sum_{c\\in C} SE_c\\\\\n & = \\sum_{c\\in C} \\sum_{j\\in N_c} [y({x_j}) - y({u_c})]^2 \\\\\n & = \\sum_{c\\in C} \\sum_{j\\in N_c} (w^T * x_j - w^T * u_c)(w^T * x_j - w^T * u_c)^T\\\\\n & = \\sum_{c\\in C} \\sum_{j\\in N_c} w^T*(x_j - u_c)(x_j - u_c)^T * w\\\\\n & = \\sum_{c\\in C}w^T * [\\sum_{j\\in N_c} (x_j - u_c)(x_j - u_c)^T] * w\\\\\n & = w^T * S_W * w \\\\\n\\end{align}$\n\nwhere $S_W$ is the within class scatter matrix, denoted as:\n\n$\\begin{align}\nS_W = \\sum_{c \\in C}\\sum_{j \\in N_{c}} (x_j - u_c)(x_j - u_c)^T\\\\\n\\end{align}$\n\nGiven that we have calculated the scatter matrix, the computation of the covariance matrix is straightforward. We just have to scale the scatter matrix values by $N-1$ to compute the covariance matrix, which means:\n\n\n$\\begin{align}\nCov(X) &= \\frac{\\sum_{i\\in N}(X_i - u)(X_i - u)^T}{N-1}\\\\\n &= \\frac{S_X}{N-1}\\\\\nS_X &= (N - 1) * Cov(X)\\\\\n\\end{align}$\n\n\n\n$Loss_w$ represents how much the predictions deviate from the ground truth across all samples, noted as **Within Group Loss**. This is important information, but not enough, as **separability** should be a notion reflecting the **contrast** between the confidence of an instance belonging to a class, and the confidence of belonging to other classes. $Loss_w$ measures how close the predictions and ground truth labels are, but it does not tell how far the predictions are away from the wrong labels (away from the faults). There should be a loss term measuring the scatter between different classes in the transformed space. 
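\n\nAs a quick sanity check of these two facts, here is a small sketch with toy, made-up data (separate from the dataset generated later in this notebook): it computes the within class scatter matrix $S_W$ directly from its definition, and verifies that a single class scatter matrix equals $(N-1)$ times the numpy covariance matrix.\n\n\n```python\nimport numpy as np\n\nnp.random.seed(0)\n# Two toy classes, 2-dimensional samples stored as columns.\nXa = np.random.randn(2, 50) + np.array([[0.0], [0.0]])\nXb = np.random.randn(2, 50) + np.array([[4.0], [4.0]])\n\ndef scatter(X):\n    # Scatter matrix: sum_j (x_j - u)(x_j - u)^T, with samples as columns.\n    u = X.mean(axis=1, keepdims=True)\n    return np.dot(X - u, (X - u).T)\n\n# The within class scatter S_W is just the sum of the per-class scatter matrices.\nS_W = scatter(Xa) + scatter(Xb)\nprint(S_W)\n\n# Scatter versus covariance: S_X = (N - 1) * Cov(X).\nprint(np.allclose(scatter(Xa), (Xa.shape[1] - 1) * np.cov(Xa)))\n```\n\n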
Again, using the square error, the scatter between two classes a, b can be expressed as:\n\n$\\begin{align}\nSE_{a,b \\in C} & = N_{a} * N_{b} * [(y(u_{a}) - y(u_{b})]^2 \\\\\n& = N_a * N_b * (w^T * u_a - w^T * u_b)(w^T * u_ia - w^T * u_b)^T\\\\\n& = N_a * N_b * w^T*[(u_a - u_b)(u_a - u_b)^T] * w\\\\\n\\end{align}$\n\nWhen summing up all pairs, the overal becomes:\n\n$\\begin{align}\n(2) Loss_b &= \\sum^{a \\neq b} SE_{a,b}\\\\\n &= w^T*\\sum_{}^{a \\neq b} N_a * N_b * [(u_a - u_b)(u_a - u_b)^T] * w\\\\\n &= w^T*\\sum_{a,b}^{a \\neq b} N_a *N_b * [(u_a - u_b)(u_a - u + u - u_b)^T] * w \\\\\n &= w^T*\\sum_{a,b}^{a \\neq b} N_a *N_b * [(u_a - u_b)(u_a - u)^T * w + w^T*\\sum_{a,b}^{a \\neq b} N_a *N_b * [(u_a - u_b)(u - u_b)^T] * w \\\\\n &= w^T*\\sum_{a,b}^{a \\neq b} N_a *N_b * [(u_a - u_b)(u_a - u)^T * w + w^T*\\sum_{b,a}^{b \\neq a} N_b *N_a * [(u_b - u_a)(u_b - u)^T] * w \\\\\n &= 2 * w^T*\\sum_{a,b}^{a \\neq b} N_a *N_b * [(u_a - u_b)(u_a - u)^T * w \\\\\n &= 2 * w^T*\\sum_{a}N_a*\\sum_{b \\neq a} N_b * [(u_a - u_b)(u_a - u)^T * w \\\\\n &= 2 * w^T*\\sum_{a}N_a*[\\sum_{b \\neq a} N_b * u_a - \\sum_{b \\neq a} N_b * u_b)]*(u_a - u)^T * w\\\\\n &= 2 * w^T*\\sum_{a}N_a*[(N - N_a) * u_a - (N*u - N_a*u_a)]*(u_a - u)^T * w\\\\\n &= 2 * w^T*\\sum_{a}N_a*[(N * u_a - N*u]*(u_a - u)^T * w\\\\\n &= 2 * w^T*\\sum_{a}N * N_a*[(u_a - u]*(u_a - u)^T * w \\\\\n &= 2 * N * w^T*\\sum_{c}N_c*[(u_c - u]*(u_c - u)^T * w \\\\\n &= 2 * N * w^T* S_B * w \\\\ \n\\end{align}$\n\nwhere $S_B$ is the between class scatter matrix, denoted as:\n\n$\\begin{align}\nS_B = \\sum_{c \\in C} N_c (u_c - u)(u_c - u)^T\\\\\n\\end{align}$\n\nInterestingly, $SB$ was initially defined as weighted sum of pairwised outerproduct of the class mean vectors in the transformed space, in the end, it's equilevant to calculate the weighted sum of the outer product of each class mean and the global mean in the transformed space.\n\nMoreover, when summing up $S_W, S_B$, we get $S_T$ which captures the overal scatter of the samples:\n\n$\\begin{align}\n(3) S_T &= \\sum_{x \\in X} (x - u)(x - u)^T \\\\\n&= \\sum_{c \\in C}\\sum_{ j \\in N_c} [(x_j - u_c) + (u_c - u)][(x_j - u_c) + (u_c - u)]^T \\\\\n&= \\sum_{c \\in C}\\sum_{ j \\in N_c} (x_j - u_c)(x_j - u_c)^T + \\sum_{c \\in C}\\sum_{ j \\in N_c} (u_c - u)(u_c - u)^T + \\sum_{c \\in C}\\sum_{ j \\in N_c} (x_j - u_c)(u_c - u)^T + \\sum_{c \\in C}\\sum_{ j \\in N_c} (u_c - u)(x_j - u_c)^T \\\\\n&= \\sum_{c \\in C}\\sum_{ j \\in N_c} (x_j - u_c)(x_j - u_c)^T + \\sum_{c \\in C} N_c(u_c - u)(u_c - u)^T +\\sum_{c \\in C}(\\sum_{ j \\in N_c} x_j - N_c * u_c)(u_c - u)^T + \\sum_{c \\in C}(u_c - u) (\\sum_{ j \\in N_c}x_j - N_c * u_c)^T \\\\\n&= \\sum_{c \\in C}\\sum_{ j \\in N_c} (x_j - u_c)(x_j - u_c)^T + \\sum_{c \\in C} N_c(u_c - u)(u_c - u)^T + \\sum_{c}(0)(u_x - u)^T + \\sum_{c}(u_x - u)(0) \\\\ \n&= \\sum_{c \\in C}\\sum_{ j \\in N_c} (x_j - u_c)(x_j - u_c)^T + \\sum_{c \\in C} N_c(u_c - u)(u_c - u)^T + 0 + 0 \\\\\n&= S_W + S_B +0 +0 \\\\\n&= S_W + S_B\n\\end{align}$\n\nAs scatter matrix captures the variance/covariance, it represents a notion of energy. We can think that $S_T$ captures the overal energy in the distribution, which can be split into two parts: the $S_W$ which captures the 'harmful' energy which enlarges the distances between samples in same class, and $S_B$ captures the 'useful' energy between classes which enlarges the distances between samples of different classes.\n\n### 2. 
optimize linear separability\n\nTo increase the linear separability, we are looking for small $Loss_w$ and large $Loss_b$. So we can form the loss function as:\n\n$\\begin{align}\n(4) J_w & = \\frac{Loss_b}{Loss_w}\\\\ \n& = \\frac{w^T * S_B * w}{w^T * S_W * w}\\\\ \n\\end{align}$\n\nDo the derivative and make it zero:\n\n$\\begin{align}\n(5) J^{'}_w & = \\frac{D(J_w)}{D_w} = 0\\\\\n & => (w^T * S_W * w)* 2 * S_B * w - (w^T * S_B * w) * 2 * S_W * w = 0\\\\\n & => \\frac{(w^T * S_W * w)* S_B * w}{(w^T * S_W * w)} - \\frac{(w^T * S_B * w) * S_W * w}{(w^T * S_W * w)}= 0\\\\\n & => S_B * w - \\frac{(w^T * S_B * w)}{(w^T * S_W * w)} * S_W * w= 0\\\\\n & => S_B * w - J_w * S_W * w= 0\\\\\n & => (S_B - J_w * S_W) * w= 0\\\\\n & => S^{-1}_W*(S_B - J_w * S_W) * w= 0\\\\\n & => S^{-1}_W*S_B *w - J_w * w = 0\\\\\n & => S^{-1}_W*S_B *w = \\lambda * w\\\\\n\\end{align}$\n\nNow we see that the optimal w is an eigen-vector of $S^{-1}_W*S_B$, corresponding to the largest eigen-value. Note that here w represents a normalized vector where $||w||_2 = 1$. When perform multi-class LDA, we would extract the first $N_c-1$ eigen-vectors to form the overal transformation. As these eigen-vectors are orthogonal to each other, they form the axis bases of the transformed space. This combination makes $\\sum_{i \\in N_c-1}J_{wi}$ largest in the transformation solution space composed by all ${w:||w||_2 = 1}$.\n\n\nThere is another way using Lagrangian form of the problem:\n\nThe problem of maximizing $J_w$ is equilevant to maximizing $Loss_b = w^T*S_B*w$ when keeping $Loss_w = w^T*S_W*w = K$, where K is constant.\n\nThen the lagrangian form is:\n\n$\\begin{align}\n(6) L & = w^T * S_B * w - \\lambda * (w^T * S_W * w - K)\\\\ \n\\end{align}$\n\nThen make the derivative to $w$ equals to $0_{m1}$ (vector):\n\n$\\begin{align}\n(7) \\frac {\\delta L}{\\delta w} & = 2 * S_B * w - \\lambda * 2 * S_W * w = 0_{m1}\\\\ \n& => S_B * w - \\lambda * S_W * w = 0_{m1}\\\\ \n& = S_B * w = \\lambda * S_W * w\\\\ \n& = S_W^{-1}*S_B * w = \\lambda *w\\\\ \n\\end{align}$\n\nAgain this is eigen-values and eigen-vectors problem.\n\n### 3. 
Now implement\n\n#### 3.1 generate dataset\n\n\n```python\n#### Load dataset\n%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib import style\nstyle.use('fivethirtyeight')\nnp.random.seed(seed=42)\n```\n\n\n```python\n# Create data\nnum_samples = 100\ngap = 4\nA = np.random.randn(num_samples) + gap, np.random.randn(num_samples)\nB = np.random.randn(num_samples), np.random.randn(num_samples) + gap\nC = np.random.randn(num_samples) + gap, np.random.randn(num_samples) + gap\nA = np.array(A)\nB = np.array(B)\nC = np.array(C)\nABC = np.hstack([A, B, C])\ny = np.array([0] * num_samples + [1] * num_samples+ [2] * num_samples)\n\n## calculate the mean\nmean_A = A.mean(axis = 1, keepdims = True)\nmean_B = B.mean(axis = 1, keepdims = True)\nmean_C = C.mean(axis = 1, keepdims = True)\nmean_ABC = ABC.mean(axis = 1, keepdims = True)\n\n```\n\n\n```python\n## visualize\nfig = plt.figure(figsize=(10,5))\nax0 = fig.add_subplot(111)\nax0.scatter(A[0],A[1],marker='s',c='r',edgecolor='black')\nax0.scatter(B[0],B[1],marker='^',c='g',edgecolor='black')\nax0.scatter(C[0],C[1],marker='o',c='b',edgecolor='black')\nax0.scatter(mean_A[0],mean_A[1],marker='o', s = 100, c='y',edgecolor='black')\nax0.scatter(mean_B[0],mean_B[1],marker='o', s = 100, c='y',edgecolor='black')\nax0.scatter(mean_C[0],mean_C[1],marker='o', s = 100, c='y',edgecolor='black')\nax0.scatter(mean_ABC[0],mean_ABC[1],marker='o', s = 200,c='y',edgecolor='red')\nplt.show()\n\n```\n\n\n```python\n\n## calculate the scatters\nscatter_A = np.dot(A-mean_A, np.transpose(A-mean_A))\nscatter_B = np.dot(B-mean_B, np.transpose(B-mean_B))\nscatter_C = np.dot(C-mean_C, np.transpose(C-mean_C))\nscatter_ABC = np.dot(ABC-mean_ABC, np.transpose(ABC-mean_ABC))\n\n## see the equilevant of scatter matrix and covariance matrix\nprint('@scatter matrix:\\n',scatter_A)\nprint('\\n@covariance matrix to scatter matrix:\\n', np.cov(A) *99)\n\n```\n\n @scatter matrix:\n [[ 81.65221947 -11.69726509]\n [-11.69726509 90.03896521]]\n \n @covariance matrix to scatter matrix:\n [[ 81.65221947 -11.69726509]\n [-11.69726509 90.03896521]]\n\n\n\n```python\n## compute Sw, Sb\nSw = scatter_A + scatter_B + scatter_C\nSb = scatter_ABC - Sw\n\n## computer eigen-values and eigen-vectors\neigval, eigvec = np.linalg.eig(np.dot(np.linalg.inv(Sw),Sb))\n\n## get first 2 projections\neigen_pairs = zip(eigval, eigvec)\neigen_pairs = sorted(eigen_pairs,key=lambda k: k[0],reverse=True)\nw = eigvec[:2]\n```\n\n\n```python\n## transform\nProjected = ABC.T.dot(w).T\n\n## plot transformed feature and means\nfig = plt.figure(figsize=(12, 8))\nax0 = fig.add_subplot(111)\n\nmeans = []\nfor l,c,m in zip(np.unique(y),['r','g','b'],['s','x','o']):\n means.append(np.mean(Projected[:,y==l],axis=1))\n ax0.scatter(Projected[0][y==l],\n Projected[1][y==l],\n c=c, marker=m, label=l, edgecolors='black')\n \n## make grid\nmesh_x, mesh_y = np.meshgrid(np.linspace(min(Projected[0]),max(Projected[0])),\n np.linspace(min(Projected[1]),max(Projected[1]))) \nmesh = []\nfor i in range(len(mesh_x)):\n for j in range(len(mesh_x[0])):\n mesh.append((mesh_x[i][j],mesh_y[i][j]))\n \n## make decision on grid points\nfrom sklearn.neighbors import KNeighborsClassifier\nNN = KNeighborsClassifier(n_neighbors=1)\nNN.fit(means,['r','g','b']) \npredictions = NN.predict(np.array(mesh))\n\n## plot grid\nax0.scatter(np.array(mesh)[:,0],np.array(mesh)[:,1],color=predictions,alpha=0.4)\n\n## plot means\nmeans = np.array(means)\nax0.scatter(means[:,0],means[:,1],marker='o',c='yellow', edgecolors='red', 
s=200)\n\nax0.legend(loc='upper right')\nplt.show()\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "d404fe325692a0ec7dce6b1be19bab372d74bb17", "size": 358351, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "LDA From Scratch.ipynb", "max_stars_repo_name": "BaiqiangGit/ML-DL-Numpy", "max_stars_repo_head_hexsha": "b5a868722f761d07e6cdc1ba99a735743af9569a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2019-05-29T10:05:33.000Z", "max_stars_repo_stars_event_max_datetime": "2019-08-02T09:14:34.000Z", "max_issues_repo_path": "LDA From Scratch.ipynb", "max_issues_repo_name": "BaiqiangGit/Handcraft-Machine-Learning", "max_issues_repo_head_hexsha": "b5a868722f761d07e6cdc1ba99a735743af9569a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "LDA From Scratch.ipynb", "max_forks_repo_name": "BaiqiangGit/Handcraft-Machine-Learning", "max_forks_repo_head_hexsha": "b5a868722f761d07e6cdc1ba99a735743af9569a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 984.4807692308, "max_line_length": 320644, "alphanum_fraction": 0.9479588448, "converted": true, "num_tokens": 4106, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9390248174286373, "lm_q2_score": 0.8807970764133561, "lm_q1q2_score": 0.8270903138707293}} {"text": "\n\n\n```python\nimport matplotlib.pyplot as plt\nimport math\nimport numpy as np\n%matplotlib inline\nfrom ipywidgets import interact\nfrom sympy import nsolve\n```\n\nTOTIENT FUNCTION\n\n\n\n```python\ndef gcd(a, b):\n if b==0:\n return a\n return gcd(b, a % b)\n```\n\n\n```python\nplt.gca().spines['top'].set_visible(False)\nplt.gca().spines['right'].set_visible(False)\nplt.xlabel(\"N\")\nplt.ylabel(\"phi(N)\")\nplt.scatter([*range(1,2500)], [sum(gcd(n, i) == 1 for i in range(1,n)) for n in range(1, 2500)], s = 1, c='green');\nplt.show()\n```\n\nGENERAL TO WEISTRASS FORM\n\n\n\n```python\n@interact(a = (-10,10,0.1), b=(-10,10,0.1), c=(-10,10,0.1), d=(-10,10,0.1), e=(-10,10,0.1))\ndef ell_curve(a, b, c, d, e):\n mx2, mx1 = np.ogrid[-10:10:0.1,-15:15:0.1]\n def evaluate_general(x,y):\n return np.power(y,2) + a*x*y + b*y - np.power(x, 3) - c * np.power(x,2) - d*x - e\n def transform_coord(x,y):\n return x - ((a**2 + 4*c) / 12), y - (a / 2)*x + ((a**3 + 4*a*c - 12 * b) / 24)\n def evaluate_normal(x,y):\n x, y = transform_coord(x,y)\n return np.power(y,2) - np.power(x,3) - d*x - e\n\n \n plt.contour(mx1.ravel(), mx2.ravel(), evaluate_general(mx1, mx2), [0], colors=\"blue\")\n plt.contour(mx1.ravel(), mx2.ravel(), evaluate_normal(mx1, mx2), [0], colors=\"red\")\n plt.show()\n```\n\n\n interactive(children=(FloatSlider(value=0.0, description='a', max=10.0, min=-10.0), FloatSlider(value=0.0, des\u2026\n\n\nFINITE CURVE\n\n\n```python\ndef display_finite_curve(a, b, N):\n def is_point(x, y, a, b, N):\n return (y**2) % N == (x**3+ a*x + b) % N\n points = [(x,y) for x in range(N) for y in range(N) if is_point(x,y,a,b,N)]\n plt.text(-5,-5,s = \"p = {}\\n a = {}\\n b= {}\".format(N,a,b),c = \"black\",bbox={'facecolor': 'green', 'alpha': 0.5})\n plt.scatter(list(zip(*points))[0], list(zip(*points))[1], s=10)\n```\n\n\n```python\ndisplay_finite_curve(1, -1, 39)\n```\n", "meta": {"hexsha": "42e314c6b79a013b5e2452d76d764878412f599f", "size": 
80515, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "plot_for_eliptic_curves.ipynb", "max_stars_repo_name": "desabuh/elliptic_curves_cryptography_plots", "max_stars_repo_head_hexsha": "6d7a4d01cb38042a78bfbb3959d6856837e3b8be", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "plot_for_eliptic_curves.ipynb", "max_issues_repo_name": "desabuh/elliptic_curves_cryptography_plots", "max_issues_repo_head_hexsha": "6d7a4d01cb38042a78bfbb3959d6856837e3b8be", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "plot_for_eliptic_curves.ipynb", "max_forks_repo_name": "desabuh/elliptic_curves_cryptography_plots", "max_forks_repo_head_hexsha": "6d7a4d01cb38042a78bfbb3959d6856837e3b8be", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 91.598407281, "max_line_length": 31062, "alphanum_fraction": 0.7866732907, "converted": true, "num_tokens": 720, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.957912273285902, "lm_q2_score": 0.8633916082162402, "lm_q1q2_score": 0.8270534181623895}} {"text": "## Question 1\n\n\n```python\nimport matplotlib.pyplot as plt\nimport math\nimport statistics as st\nimport numpy as np\nimport scipy.linalg as la\nfrom datetime import datetime\nimport sympy as sym\n```\n\n\n```python\ndef myawgn(PSD,B,Fs,l):\n sigma = math.sqrt(2*PSD*B)\n #sigma is the standard deviation\n print(\"Power = \",2*PSD*B)\n \n N = 1000 \n x = sigma*(np.random.randn(N,l))\n awgn_sequence = x[0]\n \n y = abs(np.fft.fft(x))\n y_conjugate = np.conjugate(y)\n y_squared = np.multiply(y,y_conjugate)\n y_mean = np.mean(y_squared,axis = 1)\n y_mean = y_mean/(Fs*l);\n calculated_psd = np.mean(y_mean);\n \n print(\"Calculated PSD = \",calculated_psd)\n \n #plotting\n plt.figure()\n plt.plot(y_mean, color = 'green', label='Mean using sample functions')\n plt.axhline(y=calculated_psd, color='r', linestyle='-', label = 'Calculated PSD')\n plt.axhline(y=PSD, color='blue', linestyle='dotted', label = 'Given PSD')\n plt.legend()\n plt.show\n \n return awgn_sequence\n```\n\n\n```python\nPSD = float(input('PSD: '))\nB = float(input('Bandwidth: '))\nFs = float(input('Sampling frequency: '))\nl = int(input('Length of the sequence: '))\nawgn = myawgn(PSD,B,Fs,l)\nprint(\"AWGN sequence: \",awgn)\n```\n\n## Question 2\n\n\n```python\n# library function to check PSD\ndef posSymDef(mat):\n return np.all(np.linalg.eigvals(mat) > 0)\n\ndef mygauss(mu,cov,s):\n \n eigval,m1 = la.eig(cov); \n m1 = np.array(m1).astype(float);\n m1_inv = np.linalg.inv(m1);\n m2 = np.matmul(m1_inv,cov);\n m2 = np.matmul(m2,m1);\n \n for i in range(len(m2)):\n m2[i][i] = np.sqrt(m2[i][i]);\n\n M = np.matmul(m1,m2);\n M = np.matmul(M,m1_inv);\n ans = np.matmul(np.random.randn(s, len(m2)),M);\n \n for i in range(s):\n ans[i] = ans[i] + mu;\n print('\\nSamples:');\n print(ans);\n \n print('\\nCalculated Mean Vector:');\n mu = [];\n ans = np.transpose(ans);\n for i in range(len(m2)): \n mu.append(np.mean(ans[i]));\n print(mu);\n \n print('\\nCalculated Covariance Matrix:');\n print(np.cov(ans));\n print('\\nCalculated matrix SPD?: '+str(posSymDef(cov)));\n```\n\n\n```python\ncov = np.array([[2, 
-1, 0],\n [-1, 2, -1],\n [0, -1, 2]]);\nmu = np.array([1,3,-5]);\n\nprint('Entered Mean Vector:');\nprint(mu);\nprint('\\nEntered Covariance Matrix:');\nprint(cov);\n\nmygauss(mu,cov,10000);\n```\n\n Entered Mean Vector:\n [ 1 3 -5]\n \n Entered Covariance Matrix:\n [[ 2 -1 0]\n [-1 2 -1]\n [ 0 -1 2]]\n \n Samples:\n [[ 1.07126392 1.57325674 -1.99812429]\n [ 1.8310989 2.95882931 -5.24555917]\n [ 0.24695964 1.16748971 -4.66019658]\n ...\n [ 1.80900553 6.01039805 -6.02386182]\n [ 3.8905174 0.82114425 -6.22760604]\n [ 1.70671262 5.53382482 -9.03190338]]\n \n Calculated Mean Vector:\n [1.0353198096946992, 2.953741212848046, -4.984730914314471]\n \n Calculated Covariance Matrix:\n [[ 2.0194349 -1.02033005 0.01733812]\n [-1.02033005 2.0414831 -1.02638963]\n [ 0.01733812 -1.02638963 2.01643282]]\n \n Calculated matrix SPD?: True\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "4fa45d199a01cfdb41826fd98a357a9b2732041a", "size": 59381, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Lab 3/E1.ipynb", "max_stars_repo_name": "shantanutyagi67/CT303_Labs", "max_stars_repo_head_hexsha": "f1303cd9e8665dccfd1a60b07e3ac2713dff8a47", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-10-12T12:03:33.000Z", "max_stars_repo_stars_event_max_datetime": "2020-10-12T12:03:33.000Z", "max_issues_repo_path": "Lab 3/E1.ipynb", "max_issues_repo_name": "shantanutyagi67/CT303_Labs", "max_issues_repo_head_hexsha": "f1303cd9e8665dccfd1a60b07e3ac2713dff8a47", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Lab 3/E1.ipynb", "max_forks_repo_name": "shantanutyagi67/CT303_Labs", "max_forks_repo_head_hexsha": "f1303cd9e8665dccfd1a60b07e3ac2713dff8a47", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 123.1970954357, "max_line_length": 34504, "alphanum_fraction": 0.8136609353, "converted": true, "num_tokens": 1035, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9273632916317102, "lm_q2_score": 0.8918110511888303, "lm_q1q2_score": 0.8270328319440092}} {"text": "# Interval estimates\n\n#### Estimation framework, reminder\n\nThe framework we consider is the following. We have $N$ data points modelled as a vector $y \\in \\mathbb{R}^N$. We have a model for the data, that is the data is assumed to have a distribution $ p(y|\\theta)$ for a certain parameter $\\theta$ that we wish to estimate. The function $\\theta \\rightarrow p(y|\\theta)$ is called the likelihood. \n\nFor instance, we have $N$ data points, independent statistically, each data point is assumed to follow a Gaussian distribution of mean $\\mu$ and variance $\\sigma_k^2$, denoted by $Y_k \\sim G(\\mu, \\sigma_k^2)$. Let us suppose that the $\\sigma_k^2$ are known, so that the only unknown parameter is $\\mu$ (it plays the role of $\\theta$ in the definition of the likelihood). 
The likelihood of the data point $Y_k$ is then \n$$p(y_k|\\mu) = \\frac{1}{\\sqrt{2\\pi} \\sigma_k} \\mathrm{e}^{-\\frac{1}{2\\sigma_k^2} (y_k - \\mu)^2}$$\nSince all the $Y_k$ are independent, the likelihood of the data $Y$ is the product of the likelihoods of the $Y_k$,\n$$p(y|\\mu) = \\prod\\limits_{k=1}^N p(y_k|\\mu) =\\prod\\limits_{k=1}^N \\frac{1}{\\sqrt{2\\pi} \\sigma_k} \\mathrm{e}^{-\\frac{1}{2\\sigma_k^2} (y_k - \\mu)^2} = \\frac{1}{\\sqrt{2\\pi}^N \\prod\\limits_{k=1}^N \\sigma_k} \\mathrm{e}^{-\\frac{1}{2} \\sum\\limits_{k=1}^N \\frac{(y_k - \\mu)^2}{\\sigma_k^2} } $$\n\nFurthermore, we might assume that the parameter itself has a prior distrbution. That is, the probability of $\\theta$ before seeing the data is $p(\\theta)$. In the example above, we could assume that $p(\\mu)$ is a Gaussian distribution of mean $0$ and variance $\\sigma_\\mu$ \n\n#### Point estimates, reminder\n\nIn the previous lesson we have studied the point estimates. In the point estimates view, we have an estimator $\\hat{\\theta}:y \\rightarrow \\hat{\\theta}(y)$ that takes as argument the data $y$ and outputs a value wanted to be close to $\\theta$. The error bar is then given as the variance or square root mean squared error ($\\sqrt{\\mathrm{MSE}}$) of $\\hat{\\theta}$.\n\nSome point estimates ignore the prior distributions, while some take it into account. The most common estimators that do not involve the prior are the maximum likelihood and least square estimates. When the Likelihood of the data is Gaussian and the covariance is known, they are equivalent. In the example above, the maximum likelihood estimate is \n$$\\hat{\\mu}_{ML} = \\arg \\max_\\mu p(y|\\mu) = \\frac{\\sum\\limits_{k=1}^N \\frac{y_k}{\\sigma_k^2} }{\\sum\\limits_{k=1}^N \\frac{1}{\\sigma_k^2}} $$\n\nIf we assume a prior on $\\mu$, $p(\\mu)$, the common estimators are the mean, median and a posteriori, that are\n$$ \\hat{\\theta}_{\\mathrm{mean}} = \\int_{-\\infty}^\\infty \\mu p(\\mu|y) \\mathrm{d} \\mu =\\int_{-\\infty}^\\infty \\mu \\frac{p(y|\\mu) p(\\mu) }{p(y)} \\mathrm{d} \\mu $$\n$$ \\hat{\\theta}_{\\mathrm{median}} = \\mathrm{median}(p(\\mu|y)) $$\n$$ \\hat{\\theta}_{\\mathrm{mode}} = \\mathrm{mode}(p(\\mu|y)) $$\nwhere the mode is the argument that maximizes the a function, $\\mathrm{mode}(p(\\mu|y)) = \\arg \\max_\\mu p(\\mu|y)$.\n\nIn the example above, $$\\hat{\\theta}_{\\mathrm{mean}} = \\hat{\\theta}_{\\mathrm{median}} = \\hat{\\theta}_{\\mathrm{mode}} = \\frac{\\sum\\limits_{k=1}^N \\frac{y_k}{\\sigma_k^2} }{\\frac{1}{\\sigma_\\mu^2} +\\sum\\limits_{k=1}^N \\frac{1}{\\sigma_k^2}} $$.\n\nIf the model is correct, the posterior mean and median have respectively minimal mean squared error and mean absolute error. \n\n#### Interval estimates\n\nIn this spreadsheet, we change the viewpoint of the estimation. Instead of aiming at finding an estimator that is optimal in a certain sense, we consider the question: how likely is it that the true value of the parameters lie in a certain interval ? \n\n''Likely'' is a loose term that needs clarifications. There are two main ways of constructing interval estimates: the confidence intervals and the credible intervals, which have different properties.\n\n\n\n## Confidence interval\n\nA confidence interval is constructed in the following way. 
Given a likelihood $p(y|\\theta)$ and data $y$, a confidence interval is constructed by choosing a probability $\\alpha$, and two functions of the data $l_\\alpha(y)$ and $u_\\alpha(y)$ such that \n$$ \\mathrm{Pr}\\left\\{ \\theta \\in [l_\\alpha(y), u_\\alpha(y) ] \\; | \\;\u00a0\\theta \\right\\} = \\alpha $$\n\nWe first consider an example where we construct a confidence interval for the weighted mean of independent Gaussian variables. \n$$\\hat{\\mu}_{ML} = \\arg \\max_\\mu p(y|\\mu) = \\frac{\\sum\\limits_{k=1}^N \\frac{y_k}{\\sigma_k^2} }{\\sum\\limits_{k=1}^N \\frac{1}{\\sigma_k^2}} $$\n\n$\\hat{\\mu}_{ML}$ has a Gaussian distribution of variance $\\sigma_{\\hat{\\mu}}^2 = \\frac{1}{\\sum\\limits_{k=1}^N \\frac{1}{\\sigma_k^2}}$. \n\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.optimize import minimize\nimport scipy.special as sp\n```\n\n\n```python\n# Number of simulations\nNsim = 100000\nN = 10 # Number of data points \nmean_error = 1 #Mean value of the error bars\nmu = 1\n#alpha = 4\nerrors_sigma = mean_error*np.random.chisquare(4,size=N)/4 #Generage values of the error bars\nk = 3\nconditions = np.ones(Nsim, dtype=bool)\n\nfor i in range(Nsim):\n \n y = mu + np.random.randn(N)*errors_sigma\n \n mu_estim = np.sum(y/errors_sigma**2) / np.sum(1/errors_sigma**2)\n sigma_estim = 1/ np.sqrt(np.sum(1/errors_sigma**2))\n \n u = mu_estim + k*sigma_estim\n l = mu_estim - k*sigma_estim\n condition = (mu <= u) * (mu >= l)\n \n conditions[i] = condition \n \nprint('The true value of the parameter is in the interval in', np.sum(conditions) /Nsim*100, '% of the trials')\n\n\n```\n\n The true value of the parameter is in the interval in 99.724 % of the trials\n\n\n### Question 1\n\nCompute analytically the probability that the true parameter is in the confidence interval if $l = \\hat{\\mu} - k \\sigma_\\hat{\\mu}$ and $u = \\hat{\\mu} + k \\sigma_\\hat{\\mu}$ for $k = 1, 2, 3$. \n\nDoes the value of the confidence interval depend on the number of data points ? \n\nCompute the centered confidence interval that gives an inclusion probability $ \\alpha = 50\\%$ \n\nGiven that the distribution of data point is Gaussian with mean $\\mu$ and error $\\sigma$, we know that, $\\alpha$ can be calculated using the following definition,\n\n$$\\alpha = \\int_{\\mu-k\\sigma}^{\\mu + k\\sigma} \\frac{1}{\\sqrt{2\\pi} \\sigma} e^{-\\frac{(y-\\mu)^2}{2\\sigma^2}} dy$$\n\nMaking a change of variables, $\\left(\\frac{y-\\mu}{\\sigma}\\right)^2 = u^2$, we can write above equation as,\n\n\\begin{equation*}\n \\begin{split}\n \\alpha &= \\frac{1}{\\sqrt{2\\pi}\\sigma} \\int_{-k}^{k} e^{-\\frac{1}{2}u^2} \\sigma du \\\\\n &= \\sqrt{\\frac{2}{\\pi}} \\int_{0}^{k} e^{-\\frac{1}{2}u^2} du\n \\end{split}\n\\end{equation*}\n\nIn the last line we used the fact that the integral is the even function around the limit. To solve this integration we can make another change of variable $\\frac{u^2}{2} = x$,\n\n\\begin{equation*}\n \\begin{split}\n \\alpha &= \\sqrt{\\frac{2}{\\pi}} \\int_{0}^{k^2/2} e^{-x} \\frac{1}{\\sqrt{2x}} dx \\\\\n &= \\frac{1}{\\sqrt{\\pi}} \\int_0^{k^2/2} x^{-1/2} e^{-x} dx \\\\\n &= \\frac{1}{\\sqrt{\\pi}} \\left[-\\sqrt{\\pi} (1 - erf(\\sqrt{x}) \\right]^{k^2/2}_0\n \\end{split}\n\\end{equation*}\n\nIn the last line we used the integral tables to find the value of given integration. 
Solving last equation one would get that,\n\n$$\\alpha = erf\\left(\\frac{k}{\\sqrt{2}}\\right)$$\n\nWe can check this formula using scipy as follows,\n\n\n```python\nk1 = np.array([1,2,3])\nalpha = sp.erf(k1/np.sqrt(2))\n\nprint('For k=1, the probability that the true value would be in interval is ', alpha[0])\nprint('For k=2, the probability that the true value would be in interval is ', alpha[1])\nprint('For k=3, the probability that the true value would be in interval is ', alpha[2])\n```\n\n For k=1, the probability that the true value would be in interval is 0.6826894921370859\n For k=2, the probability that the true value would be in interval is 0.9544997361036416\n For k=3, the probability that the true value would be in interval is 0.9973002039367398\n\n\nThis probability would not depend on the number of data points. (In the last calculation, we didn't use number of data points anywhere, we just used the PDF of the data).\n\nNow, we want to calculate the centered confidence interval that gives the inclusion probability $\\alpha=0.5$. That means we want to compute $k$ for given alpha which can be done using the inverse error function.\n\n$$k = \\sqrt{2} \\cdot erf^{-1}(\\alpha)$$\n\nWe can calculate this using the scipy.\n\n\n```python\nalpha1 = 0.5\nkk = np.sqrt(2)*sp.erfinv(alpha1)\nprint('The confidence interval that gives the inclusion probability of 50% would be at about ' \n + str(np.around(kk,2)) + \n '-sigma from the center')\n```\n\n The confidence interval that gives the inclusion probability of 50% would be at about 0.67-sigma from the center\n\n\n### Question 2\n\nWe now consider another example. Suppose we observe\n$$Y = \\theta + \\epsilon$$ where $\\epsilon$ follows an exponential distribution\n$f(\\epsilon) = \\frac{1}{\\lambda} \\exp(-\\frac{\\epsilon}{\\lambda})$ and $\\theta$ is the parameter to estimate.\n\nGiven the data $y$, construct confidence intervals for 68.27, 95.45 and 99.73 $\\%$ for $\\theta$ of the form $[y - x_\\alpha ,y]$. In other words, find $x_\\alpha$ such that $\\theta \\in [y - x_\\alpha ,y]$ with a probability $\\alpha$.\n\nCheck your calculations with a simulation as above. \n\n\n\nThe likelihood function for the given exponential distribution would be,\n\n$$p(y|\\theta) = \\frac{1}{\\lambda}\\exp{\\left(-\\sum_k \\frac{y_k}{\\lambda}\\right)}$$\n\nWe can calculate the Maximum Likelihood estimate of $\\hat{\\lambda}$ as follows,\n\n\\begin{equation}\n \\begin{split}\n \\log p(y|\\theta) &= - \\log \\lambda - \\sum_k \\frac{y_k}{\\lambda} \\\\\n \\Rightarrow \\frac{d \\log p(y|\\theta)}{d\\lambda} &= -\\frac{1}{\\lambda} + \\sum_k \\frac{y_k}{\\lambda^2} \\\\\n \\Rightarrow 0 &= -1 + \\sum_k \\frac{y_k}{\\lambda} \\\\\n \\Rightarrow \\hat{\\lambda} &= \\sum_k y_k\n \\end{split}\n\\end{equation}\n\nNow, we want to find confidence interval for this distribution. We can do so as we did in the previous case. Let, $\\alpha$ be probability with which true value of $\\lambda$ lies in the given interval $(0,k)$. Then,\n\n\\begin{equation}\n \\begin{split}\n \\alpha &= \\int_0^k \\frac{1}{\\lambda} e^{-x/\\lambda} dx \\\\\n &= \\frac{1}{\\lambda} \\left( \\frac{e^{-x/\\lambda}}{-1/\\lambda} \\right)_0^k \\\\\n &= 1 - e^{-k/\\lambda}\n \\end{split}\n\\end{equation}\n\nHere, $\\lambda$ would be the ML estimate of the parameter. 
Using the formula above, we can find the interval $(0,k)$ that contains the true value with probability $\\alpha$:\n\n$$k = -\\lambda \\ln{(1-\\alpha)}$$\n\n\n```python\n# Number of simulations\nNsim = 100000\nN = 10 # Number of data points \n\nmu = 2\nalpha = 0.95\n\nconditions1 = np.ones(Nsim, dtype=bool)\n\nfor i in range(Nsim):\n    \n    y = np.random.exponential(mu,N)\n    \n    mu_estim = np.sum(y)\n    \n    l = 0\n    u = mu_estim*np.log(1-alpha)\n    condition = (mu <= u) * (mu >= l)\n    \n    conditions1[i] = condition \n    \nprint('The true value of the parameter is in the interval in', np.sum(conditions1) /Nsim*100, '% of the trials')\n```\n\n    The true value of the parameter is in the interval in 0.0 % of the trials\n\n\n### Question 3\n\nWe now consider another example. Suppose we observe\n$$Y = \\theta + \\epsilon$$ where $\\epsilon$ follows a gamma distribution of parameters $\\alpha, \\beta$.\n\nGiven the data $y$, construct confidence intervals for 68.27, 95.45 and 99.73 $\\%$ for $\\theta$ of the form $[y - x_\\alpha ,y]$. In other words, find $x_\\alpha$ such that $\\theta \\in [y - x_\\alpha ,y]$ with a probability $\\alpha$.\n\nCheck your calculations with a simulation as above. \n\n\n```python\n\n```\n", "meta": {"hexsha": "4c63b71f0ad476355c929c9315296e76914998a0", "size": 15813, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Lec10_Confidence_Intervals.ipynb", "max_stars_repo_name": "Jayshil/Astro-data_science", "max_stars_repo_head_hexsha": "8f83643197cf05981e09490352caeec3f0cde4ae", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Lec10_Confidence_Intervals.ipynb", "max_issues_repo_name": "Jayshil/Astro-data_science", "max_issues_repo_head_hexsha": "8f83643197cf05981e09490352caeec3f0cde4ae", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Lec10_Confidence_Intervals.ipynb", "max_forks_repo_name": "Jayshil/Astro-data_science", "max_forks_repo_head_hexsha": "8f83643197cf05981e09490352caeec3f0cde4ae", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 44.6694915254, "max_line_length": 435, "alphanum_fraction": 0.5670018339, "converted": true, "num_tokens": 3532, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9273632976542184, "lm_q2_score": 0.8918110432813419, "lm_q1q2_score": 0.8270328299818341}} {"text": "```python\nfrom sympy import *\n# Use pretty printing for SymPy `expr`essions\ninit_printing()\n\ndef noop(*args, **kwargs):\n pass\nif __name__ != '__main__':\n display = noop\n```\n\n# Linear ODEs: example with complex eigenvalues\n\nIn this lecture we solve the linear ODE $\\dot x=Ax$, where $A=\\begin{bmatrix}6&-5\\\\13&-10\\end{bmatrix}$.\nI will solve this ODE manually verifying each step using [Python] library [SymPy].\n\n[Python]: https://www.python.org \"Python programming language\"\n[SymPy]: https://www.sympy.org \"Symboic math in Python\"\n\n## Eigenvalues and eigenvectors\n\nThe eigenvalues of this matrix are $-2\\pm i$, and the corresponding eigenvectors are $\\begin{pmatrix}8\\pm i\\\\13\\end{pmatrix}$.\n\n\n```python\nA = Matrix(2, 2, [6, -5, 13, -10])\nA.eigenvects()\n```\n\n## Normal form\nTherefore, $A = PCP^{-1}$, where $P=\\begin{bmatrix}8&-1\\\\13&0\\end{bmatrix}$, $C=\\begin{bmatrix}-2&-1\\\\1&-2\\end{bmatrix}$. In general, if $v=\\begin{bmatrix}v_1\\\\v_2\\end{bmatrix}$ is an eigenvector of $A$ with eigenvalue $\u03bb$, then $C=\\begin{bmatrix}\\Re\\lambda&-\\Im\\lambda\\\\\\Im\\lambda&\\Re\\lambda\\end{bmatrix}$, $P=\\begin{bmatrix}\\Re v_1&-\\Im v_1\\\\\\Re v_2&-\\Im v_2\\end{bmatrix}$.\n\n\n```python\n(\u03bb, m, (v,)) = A.eigenvects()[1]\nv *= 13 # get rid of the denominator; it's still an eigenvector\nassert (A * v).expand() == (\u03bb * v).expand()\nP = re(v).row_join(-im(v))\nC = Matrix(2, 2, [re(\u03bb), -im(\u03bb), im(\u03bb), re(\u03bb)])\nassert P * C * P ** -1 == A\n\u03bb, v, P, C\n```\n\n## Formula for the solution\n\n### Solution of the normalized equation\nRecall that the solution of $\\dot y=\\begin{pmatrix}a&-b\\\\b&a\\end{pmatrix}y$, $y(0)=c$, is given by $y(t)=e^{at}\\begin{pmatrix}\\cos(bt) & -\\sin(bt)\\\\\\sin(bt) & \\cos(bt)\\end{pmatrix}c$, hence solutions of $\\dot y=Cy$ are given by $y(t)=e^{-2t}\\begin{pmatrix}\\cos t&-\\sin t\\\\\\sin t&\\cos t\\end{pmatrix}y(0)$.\n\n\n```python\nvar('c0 c1') # Coordinates of $x(0)$\nvar('t') # Time\nc = Matrix([c0, c1])\nM = exp(re(\u03bb) * t) * Matrix(2, 2, [cos(im(\u03bb) * t), -sin(im(\u03bb) * t), sin(im(\u03bb) * t), cos(im(\u03bb) * t)])\ny = M * c\nassert y.diff(t) == (C * y).expand()\nM, y\n```\n\n### Solution of the original equation\nSolutions of $\\dot x=Ax$, $x(0)=c$, are given by \n$$\nx(t)=PM(t)P^{-1}c=e^{-2t}\\begin{bmatrix}8\\sin t+\\cos t&-5\\sin t\\\\13\\sin t&\\cos t-8\\sin t\\end{bmatrix}x(0).\n$$\n\n\n```python\nx = P * M * P ** -1 * c\nassert x.diff(t).expand() == (A * x).expand()\nassert x.subs(t, 0) == c\ndisplay(P * M * P ** -1, x)\n```\n", "meta": {"hexsha": "fdc2bc26dc3c8c8a410b3e606e616aecca3c7a4c", "size": 42793, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Linear_ODE_complex_evs.ipynb", "max_stars_repo_name": "urkud/mat332-notebooks", "max_stars_repo_head_hexsha": "c334255e246bdde22b6c905b38b714b92f74c8d6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Linear_ODE_complex_evs.ipynb", "max_issues_repo_name": "urkud/mat332-notebooks", "max_issues_repo_head_hexsha": "c334255e246bdde22b6c905b38b714b92f74c8d6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": 
"Linear_ODE_complex_evs.ipynb", "max_forks_repo_name": "urkud/mat332-notebooks", "max_forks_repo_head_hexsha": "c334255e246bdde22b6c905b38b714b92f74c8d6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-12-29T22:25:25.000Z", "max_forks_repo_forks_event_max_datetime": "2020-12-29T22:25:25.000Z", "avg_line_length": 186.0565217391, "max_line_length": 8996, "alphanum_fraction": 0.8600939406, "converted": true, "num_tokens": 923, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9518632247867715, "lm_q2_score": 0.8688267881258485, "lm_q1q2_score": 0.8270042683266032}} {"text": "# 07 Numbers and Errors: Sine Series Problem \n\n* *Computational Physics*: Ch 2.5\n\n\n## Set up\n\n\nPull changes in PHY494-resources\n```\ncd ~/PHY494-resources\ngit pull\n```\nCopy the the problem notebook with the whole directory as your work dir:\n```\ncd\ncp -r ~/PHY494-resources/07_numbers ~/PHY494\n```\nWork on the `07-problem-sine-series.ipynb` notebook:\n```\n~/PHY494\njupyter notebook\n```\n\n## Problem: Summing Series: sin(x)\n\nEvaluate the $\\sin$ function from its series representation\n$$\n\\sin x = x - \\frac{x^3}{3!} + \\frac{x^5}{5!} - \\frac{x^7}{7!} + \\dots\n$$\n\nA naive algorithm is to sum the series up to the $N$th term:\n$$\n\\sin x \\approx \\sum_{n=1}^N \\frac{(-1)^{n-1} x^{2n-1}}{(2n - 1)!}\n$$\n\nProblems:\n\n- How to decide when to stop summing?\n- Division of large terms (overflows!)\n- Powers and factorials are very expensive to compute.\n\nBetter approach: Build up series terms $a_n$ using previous term $a_{n-1}$ through a recursion:\n\n\\begin{align}\na_n &= a_{n-1} \\times q_n\\\\\na_n &= \\frac{(-1)^{n-1} x^{2n-1}}{(2n - 1)!} = \\frac{(-1)^{n-2} x^{2n-3}}{(2n - 3)!} \\frac{-x^2}{(2n - 1)(2n - 2)}\\\\\na_n & = a_{n-1} \\frac{-x^2}{(2n - 1)(2n - 2)}\n\\end{align}\n\nAccuracy of this approach? Not clear in absolute terms but we can make the assumption that the error is approximately the last term in the sum, $a_N$. Hence we can strive to make the relative error smaller than the machine precision\n$$\n\\left| \\frac{a_N}{\\sum_{n=1}^N a_n} \\right| < \\epsilon_m\n$$\n\n\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n```\n\n\n```python\ndef sin_series(x, eps=1e-16):\n \"\"\"Calculate sin(x) to precision eps.\n \n Arguments\n ---------\n x : float\n argument of sin(x)\n eps : float, optional\n desired precision\n \n Returns\n -------\n func_value, N\n \n where func_value = sin(x) and N is the number\n of terms included in the approximation.\n \"\"\"\n # ....\n # add your code here\n return sumN, n-1\n```\n\nTest the implementation against the \"exact\" numpy function `np.sin()`.\n\nReport\n1. `x`\n2. maximum `n`\n3. `sin_series(x)`\n4. 
relative error `abs(sin_series(x) - sin(x))/abs(sin(x))`\n\nPlot against `x` the quantities above and also `sin(x)`.\n* `x` - `sin_series(x)`\n* `x` - `sin(x)`\n* `x` - max `n`\n* `x` - relative error (semilogy plot!)\n\nFor a range of numbers look at `Xsmall` and `Xlarge`:\n\n\n```python\nXsmall = np.pi*np.arange(-2, 2.05, 0.05)\nXlarge = np.pi*np.arange(-50, 50.05, 0.1)\n```\n\nImplementation of the test:\n\n\n```python\ndef test_sin(x):\n y, nmax = sin_series(x)\n y0 = np.sin(x)\n delta = y - y0\n if y0 != 0:\n relative_error = delta/y0\n else:\n relative_error = 0\n # print(x, y, y0, delta, relative_error)\n #return x, y, y0, delta, relative_error\n return x, nmax, y, relative_error \n```\n\n\n```python\ndef test_plot_sine(X, filename=\"sine_error.pdf\"):\n results = np.array([test_sin(x) for x in X])\n \n fig = plt.figure(figsize=(8, 10))\n \n ax1 = fig.add_subplot(3,1,1)\n ax1.plot(results[:, 0], results[:, 2], 'k-', lw=1, label=\"series\")\n ax1.plot(results[:, 0], np.sin(results[:, 0]), 'g--', lw=2, label=\"sin x\")\n ax1.legend(loc=\"best\")\n\n ax2 = fig.add_subplot(3,1,2)\n ax2.plot(results[:, 0], results[:, 1], label=\"max n\")\n ax2.legend(loc=\"best\")\n\n ax3 = fig.add_subplot(3,1,3)\n ax3.semilogy(results[:, 0], results[:, 3], label=\"rel.error\")\n ax3.legend(loc=\"best\")\n \n fig.suptitle(\"sine series approximation\")\n \n fig.savefig(filename)\n print(\"saved to file {0}\".format(filename))\n```\n\nNow test the two ranges of numbers and write to different files:\n\n\n```python\ntest_plot_sine(Xsmall, filename=\"sine_error_Xsmall.pdf\")\n```\n\n\n```python\n# ...\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "54c1f32466dfa4395eeae453833764af8dfd6785", "size": 6847, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "07_numbers/07-problem-sine-series.ipynb", "max_stars_repo_name": "nachrisman/PHY494", "max_stars_repo_head_hexsha": "bac0dd5a7fe6f59f9e2ccaee56ebafcb7d97e2e7", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "07_numbers/07-problem-sine-series.ipynb", "max_issues_repo_name": "nachrisman/PHY494", "max_issues_repo_head_hexsha": "bac0dd5a7fe6f59f9e2ccaee56ebafcb7d97e2e7", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "07_numbers/07-problem-sine-series.ipynb", "max_forks_repo_name": "nachrisman/PHY494", "max_forks_repo_head_hexsha": "bac0dd5a7fe6f59f9e2ccaee56ebafcb7d97e2e7", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 25.2656826568, "max_line_length": 241, "alphanum_fraction": 0.4892653717, "converted": true, "num_tokens": 1200, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8887587993853654, "lm_q2_score": 0.9304582598330264, "lm_q1q2_score": 0.826952965887397}} {"text": "# Mathematical Theory and Intuition behind Our Improved Model \n\nThis markdown explains the mathematical theory behind our model. Note that the actual implementation might be different. For simplicity, we are also not including all the datasets we used (otherwise the equations might be too long).\n\nLet $i$ denote the $i$th date since the beginning of our dataset and $n$ denote the cutoff date for training-testing . 
Let the range from $n+1$ to $m$ be the range for testing (forecast). \n\nFor a given date $i$, let $\\textbf{asp}(i)$ be the actual index for S\\&P500; $\\textbf{fbsp}(i)$ be the index for S\\&P500 based on a training set of the first $i-1$ days' market data of S\\&P500; $\\textbf{div}(i)$ be the dividend yield rate of S\\&P500; $\\textbf{eps}(i)$ be the earnings per share of S\\&P500; $\\textbf{tby}(i)$ be the 10-year treasury bond yield; $\\textbf{ffr}(i)$ be the fed fund rate; and $\\textbf{fta}(i)$ be the fed total assets.\n\n## Linear Model\n\n\\begin{equation}\n \\textbf{fb}_1(i,a,b,c,d,e):=\\textbf{fbsp}(i)+a*\\textbf{tby}(i)+b*\\textbf{ffr}(i)+c*\\textbf{fta}(i)+d*\\textbf{div}(i)+e*\\textbf{eps}(i)\n\\end{equation}\n\nWe then choose the constants $(a,b,c,d,e)$ so that\n\n\\begin{equation}\n \\textbf{E}_n(a,b,c,d,e):= \\frac{1}{n-1000}\\sum_{i=1000}^{n} \n (\\textbf{fb}_1(i,a,b,c,d,e)-\\textbf{asp}(i))^2\n\\end{equation}\n\nattains its minimum. Namely, finding\n\n\\begin{equation}\n (a_n,b_n,c_n,d_n,e_n):=\\text{argmin} \\textbf{E}_n(a,b,c,d,e)\n\\end{equation}\n\nNote that it doesn't make sense to start with $i=1$ since fbprophet itself needs the first $i-1$ days of data for training.\n\n## Nonlinear Model\n\nAll notations are the same as above.\n\nHere is a different model (a nonlinear revision of fbprophet). In this model, we will use the dividend yield rate of S\\&P 500.\n\nFirst, what is the dividend yield rate? Dividend is the money that publicly listed companies pay you (usually four times a year) for holding their stocks (until the so-called ex-dividend dates). It's like the interest paid to your savings account by the bank. Some companies pay while some don't, especially the growth tech stocks. From my experience, the impact of bond rates and fed fund rates on the stock market changes when they rise above or fall below the dividend yield rate. Stock prices fall when those rates rise above the dividend yield rate of SP500 (investors are selling their stocks to buy bonds or save more money in their bank accounts!).\n\nBased on this idea, it might be useful to consider the differences of those rates and the dividend yield rate of SP500. \n\nNormally an increase in the federal fund rate will result in an increase in bank loan interest rates, which will in turn result in a decrease in the net income of S\\&P500-listed companies, since they have to pay higher interest when borrowing money from banks. Based on this thought, I believe it is reasonable to make a correction to $c*\\textbf{eps}(i)$ by replacing the term by $c*\\textbf{eps}(i)(1+d*\\textbf{ffr}(i))$. If my intuition is correct, the generated constant $d$ from the optimization should be a negative number. \n\n\n$$\\textbf{fb}_2(i,a,b,c,d,e):= \\textbf{fbsp}(i)*\\big[1+a*(\\textbf{div}(i)-\\textbf{tby}(i))\\big]\\big[1+b*(\\textbf{div}(i)-\\textbf{ffr}(i))\\big]\\\\\n+c*\\textbf{eps}(i)(1+d*\\textbf{ffr}(i)+e*\\textbf{fta}(i))$$\n\nand consider\n\n$$E_n(a,b,c,d,e) := \\frac{1}{n-1000}\\sum_{i=1000}^n(\\textbf{fb}_2(i,a,b,c,d,e)-\\textbf{asp}(i))^2$$\n\nNow find (by approximation, SGD, etc.) 
$(a_n,b_n,c_n,d_n,e_n):=\\text{argmin} E_n(a,b,c,d,e)$ \n\nUsing $(a_n,b_n,c_n,d_n,e_n)$ as constants, our output will be $\\textbf{fb}_2(i,a_n,b_n,c_n,d_n,e_n)$.\n\n\n### The actual implementation of the nonlinear model\n\nFor the actual implementation of the nonlinear model, we threw away the higher order terms (the products of three or more factors) since they are comparatively small.\n\n### The mathematical theory behind our nonlinear model: \n\n#### Taylor's theorem for multivariate functions\n\nLet $f :\\mathbb{R}^n \\to \\mathbb R$ be a $k$-times-differentiable function at the point $a \\in \\mathbb{R}^n$. Then there exist functions $h_\\alpha:\\mathbb{R}^n \\to \\mathbb R$ such that:\n\n\\begin{align}\n& f(\\boldsymbol{x}) = \\sum_{|\\alpha|\\leq k} \\frac{D^\\alpha f(\\boldsymbol{a})}{\\alpha!} (\\boldsymbol{x}-\\boldsymbol{a})^\\alpha + \\sum_{|\\alpha|=k} h_\\alpha(\\boldsymbol{x})(\\boldsymbol{x}-\\boldsymbol{a})^\\alpha, \\\\\n& \\mbox{and}\\quad \\lim_{\\boldsymbol{x}\\to \\boldsymbol{a}}h_\\alpha(\\boldsymbol{x})=0.\n\\end{align}\n\nIn our model, we think of $\\textbf{asp}$ as a function of $\\textbf{fbsp}, \\textbf{tby}, \\textbf{div}, \\textbf{ffr}, \\textbf{fta}$, etc. Say \n\n$$\\textbf{asp}:=F(\\textbf{fbsp}, \\textbf{tby}, \\textbf{div}, \\textbf{ffr}, \\textbf{fta},\\cdots)$$\n\nWith the assumption that $F$ is regular, we can apply Taylor's theorem for multivariate functions from above to $F$. Ideally, we would have to consider all possible products in our implementation. But in our implementation, we chose to strike a balance between our intuitive nonlinear model and Taylor's theorem.\n\n## A faster computation scheme\nOne drawback of the models we proposed above is that we have to call fbprophet thousands of times when we implement them.\n\nHere is a method that reduces the number of times we call fbprophet:\n\nSay the training process runs from i=1,000 to i=11,000. Based on the current scheme, we would have to call fbprophet 10,000 times.\n\nInstead of this, we break 10,000 down as 100*100 as follows:\n\nFor i=1,000 to 1,100, we only use the first 999 dates for training;\n\nFor i=1,100 to 1,200, we only use the first 1,099 dates for training;\n\n.............................................\n\nFor i=10,900 to 11,000, we only use the first 10,899 dates for training.\n\nIn this way, it seems that we only need to call fbprophet 100 times. 
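\n\nTo make this batching concrete, here is a rough sketch (an illustration only, not the exact code we ran; it assumes the usual `Prophet().fit`/`predict` interface of fbprophet and a dataframe `df` with `ds` and `y` columns):\n\n```python\nfrom fbprophet import Prophet\n\ndef batched_fbsp(df, start=1000, stop=11000, batch=100):\n    '''Refit fbprophet once per batch of dates instead of once per date.'''\n    fbsp = {}\n    for cut in range(start, stop, batch):\n        m = Prophet()\n        m.fit(df.iloc[:cut][['ds', 'y']])           # train on the first `cut` dates only\n        future = df.iloc[cut:cut + batch][['ds']]   # forecast the next `batch` dates\n        forecast = m.predict(future)\n        fbsp.update(dict(zip(future['ds'], forecast['yhat'])))\n    return fbsp\n```\n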
And this doesn't seem harm the accuracy too much.\n\n\n```python\nimport pandas as pd\nimport numpy as np\n\n## For plotting\nimport matplotlib.pyplot as plt\nfrom matplotlib import style\nimport datetime as dt\nimport seaborn as sns\nsns.set_style(\"whitegrid\")\n```\n\n\n```python\npath = '../Data/dff1.csv'\n```\n\n\n```python\ndf= pd.read_csv(path, parse_dates=['ds'])\n# df = df.rename(columns = {\"Date\":\"ds\",\"Close\":\"y\"}) \ndf = df[['ds', 'y','fbsp', 'diff','tby', 'ffr', 'fta', 'eps', 'div', 'une', 'wti', 'ppi',\n 'rfs']]\n# df\n```\n\n\n```python\ndf['fbsp_tby'] = df['fbsp'] * df['tby']\ndf['fbsp_ffr'] = df['fbsp'] * df['ffr']\ndf['fbsp_div'] = df['fbsp'] * df['div']\ndf['eps_tby'] = df['eps'] * df['tby']\ndf['eps_ffr'] = df['eps'] * df['ffr']\ndf['eps_div'] = df['eps'] * df['div']\n```\n\n\n```python\n# cutoff between test and train data\ncutoff = len(df) - 252\ndf_train = df[:cutoff].copy()\ndf_test = df[cutoff:].copy()\nprint(cutoff)\n```\n\n 2300\n\n\n\n```python\ndf_train.columns\n```\n\n\n\n\n Index(['ds', 'y', 'fbsp', 'diff', 'tby', 'ffr', 'fta', 'eps', 'div', 'une',\n 'wti', 'ppi', 'rfs', 'fbsp_tby', 'fbsp_ffr', 'fbsp_div', 'eps_tby',\n 'eps_ffr', 'eps_div'],\n dtype='object')\n\n\n\n\n```python\n#possible_features = ['tby', 'ffr', 'fta', 'eps', 'div', 'une', 'wti',\n# 'ppi', 'rfs', 'fbsp_tby', 'fbsp_ffr', 'fbsp_div', 'eps_tby',\n# 'eps_ffr', 'eps_div']\n\npossible_features = ['tby', 'ffr', 'fta', 'div', 'une', \n 'ppi', 'rfs', 'fbsp_tby', 'fbsp_ffr', 'fbsp_div']\n\nfrom itertools import chain, combinations\n\ndef powerset(iterable):\n #\"powerset([1,2,3]) --> () (1,) (2,) (3,) (1,2) (1,3) (2,3) (1,2,3)\"\n s = list(iterable)\n return chain.from_iterable(combinations(s, r) for r in range(len(s)+1))\n\n#print(list(powerset(possible_features)))\n```\n\n\n```python\nlen(possible_features)\n```\n\n\n\n\n 10\n\n\n\n\n```python\nfrom statsmodels.regression.linear_model import OLS\n\nreg_new = OLS((df_train['diff']).copy(),df_train[possible_features].copy()).fit()\nprint(reg_new.params)\n\n#from the output, we can see it's consistent with sklearn output\n```\n\n tby 142.277390\n ffr 85.508342\n fta 0.000068\n div 366.196032\n une -139.915289\n ppi -0.651339\n rfs 0.003782\n fbsp_tby -0.060019\n fbsp_ffr 0.017499\n fbsp_div -0.462135\n dtype: float64\n\n\n\n```python\nnew_coef = reg_new.params\nnew_possible_feats = new_coef[abs(new_coef)>0].index\n\npower_feats = list(powerset(new_possible_feats))\npower_feats.remove(())\n\npower_feats = [ list(feats) for feats in power_feats]\nlen(power_feats)\n\n```\n\n\n\n\n 1023\n\n\n\n\n```python\nAIC_scores = []\nparameters = []\n\nfor feats in power_feats:\n tmp_reg = OLS((df_train['diff']).copy(),df_train[feats].copy()).fit()\n AIC_scores.append(tmp_reg.aic)\n parameters.append(tmp_reg.params)\n\n \nMin_AIC_index = AIC_scores.index(min(AIC_scores))\nMin_AIC_feats = power_feats[Min_AIC_index] \nMin_AIC_params = parameters[Min_AIC_index]\nprint(Min_AIC_feats)\nprint(Min_AIC_params) \n```\n\n ['tby', 'ffr', 'fta', 'div', 'une', 'rfs', 'fbsp_tby', 'fbsp_div']\n tby 151.672502\n ffr 133.514484\n fta 0.000063\n div 359.410771\n une -144.152409\n rfs 0.003479\n fbsp_tby -0.064792\n fbsp_div -0.447041\n dtype: float64\n\n\n\n```python\nlen(Min_AIC_feats)\n```\n\n\n\n\n 8\n\n\n\n\n```python\n###After selecting the best features, we report the testing error, and make the plot \nAIC_df_test = df_test[Min_AIC_feats]\nAIC_pred_test = AIC_df_test.dot(Min_AIC_params)+df_test.fbsp\n\nAIC_df_train = df_train[Min_AIC_feats]\nAIC_pred_train = 
AIC_df_train.dot(Min_AIC_params)+ df_train.fbsp\n\n\n```\n\n\n```python\nfrom sklearn.metrics import mean_squared_error as MSE\n\nmse_train = MSE(df_train.y, AIC_pred_train) \nmse_test = MSE(df_test.y, AIC_pred_test)\n\n\n#compare with fbprophet()\n\nfb_mse_train = MSE(df_train.y, df_train.fbsp) \nfb_mse_test = MSE(df_test.y, df_test.fbsp)\n\n\nprint(mse_train,fb_mse_train)\n\nprint(mse_test,fb_mse_test)\n```\n\n 3021.3722993387496 22303.56360854362\n 39273.94024676501 15247.912341091065\n\n\n\n```python\ndf_train.ds\n```\n\n\n\n\n 0 2009-12-15\n 1 2009-12-16\n 2 2009-12-17\n 3 2009-12-18\n 4 2009-12-21\n ... \n 2295 2019-02-20\n 2296 2019-02-21\n 2297 2019-02-22\n 2298 2019-02-25\n 2299 2019-02-26\n Name: ds, Length: 2300, dtype: datetime64[ns]\n\n\n\n\n```python\nplt.figure(figsize=(18,10))\n\n# plot the training data\nplt.plot(df_train.ds,df_train.y,'b',\n label = \"Training Data\")\n\nplt.plot(df_train.ds, AIC_pred_train,'r-',\n label = \"Improved Fitted Values by Best_AIC\")\n\n# # plot the fit\nplt.plot(df_train.ds, df_train.fbsp,'g-',\n label = \"FB Fitted Values\")\n\n# # plot the forecast\nplt.plot(df_test.ds, df_test.fbsp,'g--',\n label = \"FB Forecast\")\nplt.plot(df_test.ds, AIC_pred_test,'r--',\n label = \"Improved Forecast by Best_AIC\")\nplt.plot(df_test.ds,df_test.y,'b--',\n label = \"Test Data\")\n\nplt.legend(fontsize=14)\n\nplt.xlabel(\"Date\", fontsize=16)\nplt.ylabel(\"SP&500 Close Price\", fontsize=16)\n\nplt.show()\n```\n\n\n```python\nplt.figure(figsize=(18,10))\nplt.plot(df_test.y,label=\"Training Data\")\nplt.plot(df_test.fbsp,label=\"FB Forecast\")\nplt.plot(AIC_pred_test,label=\"Improved Forecast by Best_AIC\")\nplt.legend(fontsize = 14)\nplt.show()\n```\n\n\n```python\n\n```\n\n\n```python\ncolumn = ['tby', 'ffr', 'fta', 'eps', 'div', 'une',\n 'wti', 'ppi', 'rfs', 'fbsp_tby', 'fbsp_ffr', 'fbsp_div', 'eps_tby', 'eps_ffr', 'eps_div']\n```\n\n\n```python\nfrom sklearn import preprocessing\ndf1_train = df_train[['diff', 'tby', 'ffr', 'fta', 'eps', 'div', 'une', 'wti', 'ppi', 'rfs', 'fbsp_tby', 'fbsp_ffr', 'fbsp_div', 'eps_tby', 'eps_ffr', 'eps_div']]\n\nX = preprocessing.scale(df1_train)\nfrom statsmodels.regression.linear_model import OLS\n\nreg_new = OLS((X[:,0]).copy(),X[:,1:].copy()).fit()\nprint(reg_new.params)\n```\n\n [ 1.50405129 1.03228322 0.27409454 1.17073571 0.31243092 -0.75747342\n 0.46988206 -0.39944639 2.10369448 -0.69112943 -2.1804296 -2.38576385\n -1.14196633 1.41832903 -0.34501927]\n\n\n\n```python\n# Before Covid\n# pd.Series(reg_new.params, index=['tby', 'ffr', 'fta', 'eps', 'div', 'une',\n# 'wti', 'ppi', 'rfs', 'fbsp_tby', 'fbsp_ffr', 'fbsp_div', 'eps_tby', 'eps_ffr', 'eps_div'] )\n```\n\n\n```python\n# before covid\ncoef1 = [ 1.50405129, 1.03228322, 0.27409454, 1.17073571, 0.31243092,\n -0.75747342, 0.46988206, -0.39944639, 2.10369448, -0.69112943,\n -2.1804296 , -2.38576385, -1.14196633, 1.41832903, -0.34501927]\n# include covid\ncoef2 = [ 0.65150054, 1.70457239, -0.1573802 , -0.18007979, -0.15221931,\n -0.62326075, 0.45065894, -0.38972706, 2.87210843, -1.17604495,\n -4.92858316, -2.15459111, 0.11418468, 2.74829778, 0.55520382]\n```\n\n\n```python\n# Include Covid\n# pd.Series( np.append( ['coefficients (before covid)'], np.round(coef1,3)), index= np.append(['features'], column) ) \n \n```\n\n\n```python\nindex1 = ['10 Year U.S Treasury Bond Yield Rates (tby)', 'Federal Funds Rates (ffr)',\n 'Federal Total Assets (fta)', 'Earning-Per-Share of S&P 500 (eps)', 'Dividend Yield of S&P 500 (div)',\n 'Unemployment Rates (une) ', 'West Texas 
Intermediate oil index (wit)', 'Producer Price Index (ppi)',\n 'Retail and Food Services Sales (rfs)', \n 'fbsp_tby', 'fbsp_ffr', 'fbsp_div', 'eps_tby', 'eps_ffr', 'eps_div'\n ]\n```\n\n\n```python\nlen(index1)\n```\n\n\n\n\n 15\n\n\n\n\n```python\npd.Series(coef2, index =index1)\n```\n\n\n\n\n 10 Year U.S Treasury Bond Yield Rates (tby) 0.651501\n Federal Funds Rates (ffr) 1.704572\n Federal Total Assets (fta) -0.157380\n Earning-Per-Share of S&P 500 (eps) -0.180080\n Dividend Yield of S&P 500 (div) -0.152219\n Unemployment Rates (une) -0.623261\n West Texas Intermediate oil index (wit) 0.450659\n Producer Price Index (ppi) -0.389727\n Retail and Food Services Sales (rfs) 2.872108\n fbsp_tby -1.176045\n fbsp_ffr -4.928583\n fbsp_div -2.154591\n eps_tby 0.114185\n eps_ffr 2.748298\n eps_div 0.555204\n dtype: float64\n\n\n\n\n```python\ndf3 = pd.DataFrame(coef1, index = index1, columns = ['coefficients (before covid)'])\ndf3['coefficients (include covid)'] =pd.Series(coef2, index =index1)\ndf3\n```\n\n\n\n\n
    \n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
    coefficients (before covid)coefficients (include covid)
    10 Year U.S Treasury Bond Yield Rates (tby)1.5040510.651501
    Federal Funds Rates (ffr)1.0322831.704572
    Federal Total Assets (fta)0.274095-0.157380
    Earning-Per-Share of S&P 500 (eps)1.170736-0.180080
    Dividend Yield of S&P 500 (div)0.312431-0.152219
    Unemployment Rates (une)-0.757473-0.623261
    West Texas Intermediate oil index (wit)0.4698820.450659
    Producer Price Index (ppi)-0.399446-0.389727
    Retail and Food Services Sales (rfs)2.1036942.872108
    fbsp_tby-0.691129-1.176045
    fbsp_ffr-2.180430-4.928583
    fbsp_div-2.385764-2.154591
    eps_tby-1.1419660.114185
    eps_ffr1.4183292.748298
    eps_div-0.3450190.555204
    \n
    \n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "987477334232712787120ecd2d8dc6605cb34266", "size": 137089, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Code/Scenario2 including the covid period without wti, eps.ipynb", "max_stars_repo_name": "thinkhow/Market-Prediction-with-Macroeconomics-features", "max_stars_repo_head_hexsha": "feac711017739ea6ffe46a7fcac6b4b0c265e0b5", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Code/Scenario2 including the covid period without wti, eps.ipynb", "max_issues_repo_name": "thinkhow/Market-Prediction-with-Macroeconomics-features", "max_issues_repo_head_hexsha": "feac711017739ea6ffe46a7fcac6b4b0c265e0b5", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Code/Scenario2 including the covid period without wti, eps.ipynb", "max_forks_repo_name": "thinkhow/Market-Prediction-with-Macroeconomics-features", "max_forks_repo_head_hexsha": "feac711017739ea6ffe46a7fcac6b4b0c265e0b5", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 163.7861409797, "max_line_length": 108460, "alphanum_fraction": 0.8654086032, "converted": true, "num_tokens": 5414, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9304582612793113, "lm_q2_score": 0.8887587905460026, "lm_q1q2_score": 0.8269529589481371}} {"text": "In this notebook, you will implement the forward longitudinal vehicle model. The model accepts throttle inputs and steps through the longitudinal dynamic equations. Once implemented, you will be given a set of inputs that drives over a small road slope to test your model.\n\nThe input to the model is a throttle percentage $x_\\theta \\in [0,1]$ which provides torque to the engine and subsequently accelerates the vehicle for forward motion. \n\nThe dynamic equations consist of many stages to convert throttle inputs to wheel speed (engine -> torque converter -> transmission -> wheel). These stages are bundled together in a single inertia term $J_e$ which is used in the following combined engine dynamic equations.\n\n\\begin{align}\n J_e \\dot{\\omega}_e &= T_e - (GR)(r_{eff} F_{load}) \\\\ m\\ddot{x} &= F_x - F_{load}\n\\end{align}\n\nWhere $T_e$ is the engine torque, $GR$ is the gear ratio, $r_{eff}$ is the effective radius, $m$ is the vehicle mass, $x$ is the vehicle position, $F_x$ is the tire force, and $F_{load}$ is the total load force. \n\nThe engine torque is computed from the throttle input and the engine angular velocity $\\omega_e$ using a simplified quadratic model. \n\n\\begin{align}\n T_e = x_{\\theta}(a_0 + a_1 \\omega_e + a_2 \\omega_e^2)\n\\end{align}\n\nThe load forces consist of aerodynamic drag $F_{aero}$, rolling friction $R_x$, and gravitational force $F_g$ from an incline at angle $\\alpha$. 
The aerodynamic drag is a quadratic model and the friction is a linear model.\n\n\\begin{align}\n F_{load} &= F_{aero} + R_x + F_g \\\\\n F_{aero} &= \\frac{1}{2} C_a \\rho A \\dot{x}^2 = c_a \\dot{x}^2\\\\\n R_x &= N(\\hat{c}_{r,0} + \\hat{c}_{r,1}|\\dot{x}| + \\hat{c}_{r,2}\\dot{x}^2) \\approx c_{r,1} \\dot{x}\\\\\n F_g &= mg\\sin{\\alpha}\n\\end{align}\n\nNote that the absolute value is ignored for friction since the model is used for only forward motion ($\\dot{x} \\ge 0$). \n \nThe tire force is computed using the engine speed and wheel slip equations.\n\n\\begin{align}\n \\omega_w &= (GR)\\omega_e \\\\\n s &= \\frac{\\omega_w r_e - \\dot{x}}{\\dot{x}}\\\\\n F_x &= \\left\\{\\begin{array}{lr}\n cs, & |s| < 1\\\\\n F_{max}, & \\text{otherwise}\n \\end{array}\\right\\} \n\\end{align}\n\nWhere $\\omega_w$ is the wheel angular velocity and $s$ is the slip ratio. \n\nWe setup the longitudinal model inside a Python class below. The vehicle begins with an initial velocity of 5 m/s and engine speed of 100 rad/s. All the relevant parameters are defined and like the bicycle model, a sampling time of 10ms is used for numerical integration.\n\n\n```python\nfrom math import sin, atan2\nimport sys\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\n\nclass Vehicle():\n def __init__(self):\n \n # ==================================\n # Parameters\n # ==================================\n \n #Throttle to engine torque\n self.a_0 = 400\n self.a_1 = 0.1\n self.a_2 = -0.0002\n \n # Gear ratio, effective radius, mass + inertia\n self.GR = 0.35\n self.r_e = 0.3\n self.J_e = 10\n self.m = 2000\n self.g = 9.81\n \n # Aerodynamic and friction coefficients\n self.c_a = 1.36\n self.c_r1 = 0.01\n \n # Tire force \n self.c = 10000\n self.F_max = 10000\n \n # State variables\n self.x = 0\n self.v = 5\n self.a = 0\n self.w_e = 100\n self.w_e_dot = 0\n \n self.sample_time = 0.01\n \n def reset(self):\n # reset state variables\n self.x = 0\n self.v = 5\n self.a = 0\n self.w_e = 100\n self.w_e_dot = 0\n```\n\nImplement the combined engine dynamic equations along with the force equations in the cell below. The function $\\textit{step}$ takes the throttle $x_\\theta$ and incline angle $\\alpha$ as inputs and performs numerical integration over one timestep to update the state variables. Hint: Integrate to find the current position, velocity, and engine speed first, then propagate those values into the set of equations.\n\n\n```python\nclass Vehicle(Vehicle):\n def step(self, throttle, alpha):\n # ==================================\n # Implement vehicle model here\n # ==================================\n x_dot = self.v\n v_dot = self.a\n w_e_dot = self.w_e_dot\n self.x += x_dot * self.sample_time\n self.v += v_dot * self.sample_time\n self.w_e += w_e_dot * self.sample_time\n omega_w = self.GR * self.w_e # Wheel speed from engine speed\n s = omega_w * self.r_e / self.v - 1 # Slip ratio\n F_x = self.c * s if abs(s) < 1 else self.F_max\n F_g = self.m * self.g * sin(alpha)\n R_x = self.c_r1 * self.v\n F_aero = self.c_a * self.v**2\n F_load = F_aero + R_x + F_g\n T_e = throttle * (self.a_0 + self.a_1 * self.w_e + self.a_2 * self.w_e**2)\n self.a = (F_x - F_load) / self.m\n self.w_e_dot = (T_e - self.GR * self.r_e * F_load) / self.J_e\n```\n\nUsing the model, you can send constant throttle inputs to the vehicle in the cell below. You will observe that the velocity converges to a fixed value based on the throttle input due to the aerodynamic drag and tire force limit. 
A similar velocity profile can be seen by setting a negative incline angle $\\alpha$. In this case, gravity accelerates the vehicle to a terminal velocity where it is balanced by the drag force.\n\n\n```python\nsample_time = 0.01\ntime_end = 100\nmodel = Vehicle()\n\nt_data = np.arange(0,time_end,sample_time)\nv_data = np.zeros_like(t_data)\n\n# throttle percentage between 0 and 1\nthrottle = 0.2\n\n# incline angle (in radians)\nalpha = 0\n\nfor i in range(t_data.shape[0]):\n v_data[i] = model.v\n model.step(throttle, alpha)\n \nplt.plot(t_data, v_data)\nplt.show()\n```\n\nWe will now drive the vehicle over a slope as shown in the diagram below.\n\n\n\nTo climb the slope, a trapezoidal throttle input is provided for the next 20 seconds as shown in the figure below. \n\n\n\nThe vehicle begins at 20% throttle and gradually increases to 50% throttle. This is maintained for 10 seconds as the vehicle climbs the steeper slope. Afterwards, the vehicle reduces the throttle to 0.\n\nIn the cell below, implement the ramp angle profile $\\alpha (x)$ and throttle profile $x_\\theta (t)$ and step them through the vehicle dynamics. The vehicle position $x(t)$ is saved in the array $\\textit{x_data}$. This will be used to grade your solution.\n\n\n\n```python\ntime_end = 20\nt_data = np.arange(0,time_end,sample_time)\nx_data = np.zeros_like(t_data)\nv_data = np.zeros_like(t_data)\nw_e_data = np.zeros_like(t_data)\n\n\n# reset the states\nmodel.reset()\n\n# ==================================\n# Learner solution begins here\n# ==================================\ndef angle(i, alpha, x):\n if x < 60:\n alpha[i] = np.arctan(3/60)\n elif x < 150:\n alpha[i] = np.arctan(9/90)\n else:\n alpha[i] = 0\n\nthrottle = np.zeros_like(t_data)\nalpha = np.zeros_like(t_data)\n\n#throttle depends on time and alpha depends on distance travelled (model.x)\nfor i in range(t_data.shape[0]):\n if t_data[i] < 5:\n throttle[i] = 0.2 + ((0.5 - 0.2)/5)*t_data[i]\n angle(i, alpha, model.x)\n elif t_data[i] < 15:\n throttle[i] = 0.5\n angle(i, alpha, model.x)\n else:\n throttle[i] = ((0 - 0.5)/(20 - 15))*(t_data[i] - 20)\n angle(i, alpha, model.x)\n \n #call the step function and update x_data array\n model.step(throttle[i], alpha[i])\n x_data[i] = model.x\n v_data[i] = model.v\n w_e_data[i] = model.w_e\n\n# ==================================\n# Learner solution ends here\n# ==================================\n\n# Plot x vs t for visualization\nplt.plot(t_data, x_data)\nplt.show()\n```\n\nIf you have implemented the vehicle model and inputs correctly, you should see that the vehicle crosses the ramp at ~15s where the throttle input begins to decrease.\n\nThe cell below will save the time and vehicle inputs as text file named $\\textit{xdata.txt}$. To locate the file, change the end of your web directory to $\\textit{/notebooks/Course_1_Module_4/xdata.txt}$\n\nOnce you are there, you can download the file and submit to the Coursera grader to complete this assessment.\n\n\n```python\ndata = np.vstack([t_data, x_data]).T\nnp.savetxt('xdata.txt', data, delimiter=', ')\n```\n\nCongratulations! You have now completed the assessment! Feel free to test the vehicle model with different inputs in the cell below, and see what trajectories they form. In the next module, you will see the longitudinal model being used for speed control. 
See you there!\n\n\n```python\nsample_time = 0.01\ntime_end = 30\nmodel.reset()\n\nt_data = np.arange(0,time_end,sample_time)\nx_data = np.zeros_like(t_data)\n\n# ==================================\n# Test various inputs here\n# ==================================\nfor i in range(t_data.shape[0]):\n\n model.step(0,0)\n \nplt.axis('equal')\nplt.plot(x_data, t_data)\nplt.show()\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "68bce745bffd24dd53794e715db67d6ae8d773c0", "size": 41584, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Introduction to Self Driving Cars/week_4/Programming_Exercises/Longitudinal_Vehicle_Model.ipynb", "max_stars_repo_name": "veerkalburgi/Self_Driving_Cars_Specialization", "max_stars_repo_head_hexsha": "42b2bf28104c86b77c49f141b4eead6fc9efe6e1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-08-14T18:27:12.000Z", "max_stars_repo_stars_event_max_datetime": "2020-08-14T18:27:12.000Z", "max_issues_repo_path": "Introduction to Self Driving Cars/week_4/Programming_Exercises/Longitudinal_Vehicle_Model.ipynb", "max_issues_repo_name": "veerkalburgi/Self_Driving_Cars_Specialization", "max_issues_repo_head_hexsha": "42b2bf28104c86b77c49f141b4eead6fc9efe6e1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Introduction to Self Driving Cars/week_4/Programming_Exercises/Longitudinal_Vehicle_Model.ipynb", "max_forks_repo_name": "veerkalburgi/Self_Driving_Cars_Specialization", "max_forks_repo_head_hexsha": "42b2bf28104c86b77c49f141b4eead6fc9efe6e1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 110.3023872679, "max_line_length": 11976, "alphanum_fraction": 0.8261590997, "converted": true, "num_tokens": 2389, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9241418137109956, "lm_q2_score": 0.8947894724152067, "lm_q1q2_score": 0.8269123659272939}} {"text": "# Robust Kalman filtering for vehicle tracking\n\nWe will try to pinpoint the location of a moving vehicle with high accuracy from noisy sensor data. We'll do this by modeling the vehicle state as a discrete-time linear dynamical system. Standard **Kalman filtering** can be used to approach this problem when the sensor noise is assumed to be Gaussian. 
We'll use **robust Kalman filtering** to get a more accurate estimate of the vehicle state for a non-Gaussian case with outliers.\n\n# Problem statement\n \nA discrete-time linear dynamical system consists of a sequence of state vectors $x_t \\in \\mathbf{R}^n$, indexed by time $t \\in \\lbrace 0, \\ldots, N-1 \\rbrace$ and dynamics equations\n\n\\begin{align}\nx_{t+1} &= Ax_t + Bw_t\\\\\ny_t &=Cx_t + v_t,\n\\end{align}\n\nwhere $w_t \\in \\mathbf{R}^m$ is an input to the dynamical system (say, a drive force on the vehicle), $y_t \\in \\mathbf{R}^r$ is a state measurement, $v_t \\in \\mathbf{R}^r$ is noise, $A$ is the drift matrix, $B$ is the input matrix, and $C$ is the observation matrix.\n\nGiven $A$, $B$, $C$, and $y_t$ for $t = 0, \\ldots, N-1$, the goal is to estimate $x_t$ for $t = 0, \\ldots, N-1$.\n\n# Kalman filtering\n\nA Kalman filter estimates $x_t$ by solving the optimization problem\n\n\\begin{array}{ll}\n\\mbox{minimize} & \\sum_{t=0}^{N-1} \\left( \n\\|w_t\\|_2^2 + \\tau \\|v_t\\|_2^2\\right)\\\\\n\\mbox{subject to} & x_{t+1} = Ax_t + Bw_t,\\quad t=0,\\ldots, N-1\\\\\n& y_t = Cx_t+v_t,\\quad t = 0, \\ldots, N-1,\n\\end{array}\n\nwhere $\\tau$ is a tuning parameter. This problem is actually a least squares problem, and can be solved via linear algebra, without the need for more general convex optimization. Note that since we have no observation $y_{N}$, $x_N$ is only constrained via $x_{N} = Ax_{N-1} + Bw_{N-1}$, which is trivially resolved when $w_{N-1} = 0$ and $x_{N} = Ax_{N-1}$. We maintain this vestigial constraint only because it offers a concise problem statement.\n\nThis model performs well when $w_t$ and $v_t$ are Gaussian. However, the quadratic objective can be influenced by large outliers, which degrades the accuracy of the recovery. To improve estimation in the presence of outliers, we can use **robust Kalman filtering**.\n\n# Robust Kalman filtering\n\nTo handle outliers in $v_t$, robust Kalman filtering replaces the quadratic cost with a Huber cost, which results in the convex optimization problem\n\n\\begin{array}{ll}\n\\mbox{minimize} & \\sum_{t=0}^{N-1} \\left( \\|w_t\\|^2_2 + \\tau \\phi_\\rho(v_t) \\right)\\\\\n\\mbox{subject to} & x_{t+1} = Ax_t + Bw_t,\\quad t=0,\\ldots, N-1\\\\\n& y_t = Cx_t+v_t,\\quad t=0,\\ldots, N-1,\n\\end{array}\n\nwhere $\\phi_\\rho$ is the Huber function\n$$\n\\phi_\\rho(a)= \\left\\{ \\begin{array}{ll} \\|a\\|_2^2 & \\|a\\|_2\\leq \\rho\\\\\n2\\rho \\|a\\|_2-\\rho^2 & \\|a\\|_2>\\rho.\n\\end{array}\\right.\n$$\n\nThe Huber penalty function penalizes estimation error linearly outside of a ball of radius $\\rho$, whereas in standard Kalman filtering, all errors are penalized quadratically. 
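\n\nAs a standalone numerical illustration (a sketch, not part of the original analysis), we can compare the two penalties for a few residual sizes:\n\n```python\nimport numpy as np\n\ndef huber_penalty(a, rho=2):\n    '''phi_rho(a) for a scalar residual: quadratic inside a ball of radius rho, linear outside.'''\n    a = np.abs(a)\n    return np.where(a <= rho, a**2, 2 * rho * a - rho**2)\n\nfor r in [0.5, 2.0, 5.0, 20.0]:\n    print(r, huber_penalty(r), r**2)   # Huber penalty vs. plain quadratic penalty\n```\n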
Thus, large errors are penalized less harshly, making this model more robust to outliers.\n\n# Vehicle tracking example\n\nWe'll apply standard and robust Kalman filtering to a vehicle tracking problem with state $x_t \\in \\mathbf{R}^4$, where\n$(x_{t,0}, x_{t,1})$ is the position of the vehicle in two dimensions, and $(x_{t,2}, x_{t,3})$ is the vehicle velocity.\nThe vehicle has unknown drive force $w_t$, and we observe noisy measurements of the vehicle's position, $y_t \\in \\mathbf{R}^2$.\n\nThe matrices for the dynamics are\n\n$$\nA = \\begin{bmatrix}\n1 & 0 & \\left(1-\\frac{\\gamma}{2}\\Delta t\\right) \\Delta t & 0 \\\\\n0 & 1 & 0 & \\left(1-\\frac{\\gamma}{2} \\Delta t\\right) \\Delta t\\\\\n0 & 0 & 1-\\gamma \\Delta t & 0 \\\\\n0 & 0 & 0 & 1-\\gamma \\Delta t\n\\end{bmatrix},\n$$\n\n$$\nB = \\begin{bmatrix}\n\\frac{1}{2}\\Delta t^2 & 0 \\\\\n0 & \\frac{1}{2}\\Delta t^2 \\\\\n\\Delta t & 0 \\\\\n0 & \\Delta t \\\\\n\\end{bmatrix},\n$$\n\n$$\nC = \\begin{bmatrix}\n1 & 0 & 0 & 0 \\\\\n0 & 1 & 0 & 0\n\\end{bmatrix},\n$$\nwhere $\\gamma$ is a velocity damping parameter.\n\n# 1D Model\nThe recurrence is derived from the following relations in a single dimension. For this subsection, let $x_t, v_t, w_t$ be the vehicle position, velocity, and input drive force. The resulting acceleration of the vehicle is $w_t - \\gamma v_t$, with $- \\gamma v_t$ is a damping term depending on velocity with parameter $\\gamma$. \n\nThe discretized dynamics are obtained from numerically integrating:\n$$\n\\begin{align}\nx_{t+1} &= x_t + \\left(1-\\frac{\\gamma \\Delta t}{2}\\right)v_t \\Delta t + \\frac{1}{2}w_{t} \\Delta t^2\\\\\nv_{t+1} &= \\left(1-\\gamma\\right)v_t + w_t \\Delta t.\n\\end{align}\n$$\n\nExtending these relations to two dimensions gives us the dynamics matrices $A$ and $B$.\n\n## Helper Functions\n\n\n```\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport numpy as np\n\ndef plot_state(t,actual, estimated=None):\n '''\n plot position, speed, and acceleration in the x and y coordinates for\n the actual data, and optionally for the estimated data\n '''\n trajectories = [actual]\n if estimated is not None:\n trajectories.append(estimated)\n \n fig, ax = plt.subplots(3, 2, sharex='col', sharey='row', figsize=(8,8))\n for x, w in trajectories: \n ax[0,0].plot(t,x[0,:-1])\n ax[0,1].plot(t,x[1,:-1])\n ax[1,0].plot(t,x[2,:-1])\n ax[1,1].plot(t,x[3,:-1])\n ax[2,0].plot(t,w[0,:])\n ax[2,1].plot(t,w[1,:])\n \n ax[0,0].set_ylabel('x position')\n ax[1,0].set_ylabel('x velocity')\n ax[2,0].set_ylabel('x input')\n \n ax[0,1].set_ylabel('y position')\n ax[1,1].set_ylabel('y velocity')\n ax[2,1].set_ylabel('y input')\n \n ax[0,1].yaxis.tick_right()\n ax[1,1].yaxis.tick_right()\n ax[2,1].yaxis.tick_right()\n \n ax[0,1].yaxis.set_label_position(\"right\")\n ax[1,1].yaxis.set_label_position(\"right\")\n ax[2,1].yaxis.set_label_position(\"right\")\n \n ax[2,0].set_xlabel('time')\n ax[2,1].set_xlabel('time')\n\ndef plot_positions(traj, labels, axis=None,filename=None):\n '''\n show point clouds for true, observed, and recovered positions\n '''\n matplotlib.rcParams.update({'font.size': 14})\n n = len(traj)\n\n fig, ax = plt.subplots(1, n, sharex=True, sharey=True,figsize=(12, 5))\n if n == 1:\n ax = [ax]\n \n for i,x in enumerate(traj):\n ax[i].plot(x[0,:], x[1,:], 'ro', alpha=.1)\n ax[i].set_title(labels[i])\n if axis:\n ax[i].axis(axis)\n \n if filename:\n fig.savefig(filename, bbox_inches='tight')\n```\n\n## Problem Data\n\nWe generate the data for the vehicle tracking problem. 
We'll have $N=1000$, $w_t$ a standard Gaussian, and $v_t$ a standard Guassian, except $20\\%$ of the points will be outliers with $\\sigma = 20$.\n\nBelow, we set the problem parameters and define the matrices $A$, $B$, and $C$.\n\n\n```\nn = 1000 # number of timesteps\nT = 50 # time will vary from 0 to T with step delt\nts, delt = np.linspace(0,T,n,endpoint=True, retstep=True)\ngamma = .05 # damping, 0 is no damping\n\nA = np.zeros((4,4))\nB = np.zeros((4,2))\nC = np.zeros((2,4))\n\nA[0,0] = 1\nA[1,1] = 1\nA[0,2] = (1-gamma*delt/2)*delt\nA[1,3] = (1-gamma*delt/2)*delt\nA[2,2] = 1 - gamma*delt\nA[3,3] = 1 - gamma*delt\n\nB[0,0] = delt**2/2\nB[1,1] = delt**2/2\nB[2,0] = delt\nB[3,1] = delt\n\nC[0,0] = 1\nC[1,1] = 1\n```\n\n# Simulation\n\nWe seed $x_0 = 0$ (starting at the origin with zero velocity) and simulate the system forward in time. The results are the true vehicle positions `x_true` (which we will use to judge our recovery) and the observed positions `y`.\n\nWe plot the position, velocity, and system input $w$ in both dimensions as a function of time.\nWe also plot the sets of true and observed vehicle positions.\n\n\n```\nsigma = 20\np = .20\nnp.random.seed(6)\n\nx = np.zeros((4,n+1))\nx[:,0] = [0,0,0,0]\ny = np.zeros((2,n))\n\n# generate random input and noise vectors\nw = np.random.randn(2,n)\nv = np.random.randn(2,n)\n\n# add outliers to v\nnp.random.seed(0)\ninds = np.random.rand(n) <= p\nv[:,inds] = sigma*np.random.randn(2,n)[:,inds]\n\n# simulate the system forward in time\nfor t in range(n):\n y[:,t] = C.dot(x[:,t]) + v[:,t]\n x[:,t+1] = A.dot(x[:,t]) + B.dot(w[:,t])\n \nx_true = x.copy()\nw_true = w.copy()\n\nplot_state(ts,(x_true,w_true))\nplot_positions([x_true,y], ['True', 'Observed'],[-4,14,-5,20],'rkf1.pdf')\n```\n\n# Kalman filtering recovery\n\nThe code below solves the standard Kalman filtering problem using CVXPY. We plot and compare the true and recovered vehicle states. Note that the recovery is distorted by outliers in the measurements.\n\n\n```\n%%time\nfrom cvxpy import *\n\nx = Variable(shape=(4, n+1))\nw = Variable(shape=(2, n))\nv = Variable(shape=(2, n))\n\ntau = .08\n \nobj = sum_squares(w) + tau*sum_squares(v)\nobj = Minimize(obj)\n\nconstr = []\nfor t in range(n):\n constr += [ x[:,t+1] == A*x[:,t] + B*w[:,t] ,\n y[:,t] == C*x[:,t] + v[:,t] ]\n\nProblem(obj, constr).solve(verbose=True)\n\nx = np.array(x.value)\nw = np.array(w.value)\n\nplot_state(ts,(x_true,w_true),(x,w))\nplot_positions([x_true,y], ['True', 'Noisy'], [-4,14,-5,20])\nplot_positions([x_true,x], ['True', 'KF recovery'], [-4,14,-5,20], 'rkf2.pdf')\n```\n\n# Robust Kalman filtering recovery\n\nHere we implement robust Kalman filtering with CVXPY. 
We get a better recovery than the standard Kalman filtering, which can be seen in the plots below.\n\n\n```\n%%time\nfrom cvxpy import *\nx = Variable(shape=(4, n+1))\nw = Variable(shape=(2, n))\nv = Variable(shape=(2, n))\n\ntau = 2\nrho = 2\n \nobj = sum_squares(w)\nobj += sum(tau*huber(norm(v[:,t]),rho) for t in range(n))\nobj = Minimize(obj)\n\nconstr = []\nfor t in range(n):\n constr += [ x[:,t+1] == A*x[:,t] + B*w[:,t] ,\n y[:,t] == C*x[:,t] + v[:,t] ]\n\nProblem(obj, constr).solve(verbose=True)\n\nx = np.array(x.value)\nw = np.array(w.value)\n\nplot_state(ts,(x_true,w_true),(x,w))\nplot_positions([x_true,y], ['True', 'Noisy'], [-4,14,-5,20])\nplot_positions([x_true,x], ['True', 'Robust KF recovery'], [-4,14,-5,20],'rkf3.pdf')\n```\n", "meta": {"hexsha": "d37fa37cc05499c4f3d5e9e94899471bb1dafc7e", "size": 527393, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "examples/notebooks/WWW/robust_kalman.ipynb", "max_stars_repo_name": "Hennich/cvxpy", "max_stars_repo_head_hexsha": "4dfd6d69ace76abf57d8b1d63db0556dee96e24f", "max_stars_repo_licenses": ["ECL-2.0", "Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "examples/notebooks/WWW/robust_kalman.ipynb", "max_issues_repo_name": "Hennich/cvxpy", "max_issues_repo_head_hexsha": "4dfd6d69ace76abf57d8b1d63db0556dee96e24f", "max_issues_repo_licenses": ["ECL-2.0", "Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "examples/notebooks/WWW/robust_kalman.ipynb", "max_forks_repo_name": "Hennich/cvxpy", "max_forks_repo_head_hexsha": "4dfd6d69ace76abf57d8b1d63db0556dee96e24f", "max_forks_repo_licenses": ["ECL-2.0", "Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 920.4066317627, "max_line_length": 77333, "alphanum_fraction": 0.9342008711, "converted": true, "num_tokens": 3209, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9241418158002491, "lm_q2_score": 0.8947894527758053, "lm_q1q2_score": 0.826912349647144}} {"text": "```python\n# Solving an expression in python\n\nmy_value = 2 * (3 + 2)**2 / 5 - 4\n\nprint(my_value) \n```\n\n 6.0\n\n\n\n```python\n# making use of parenthesis for clarity in python\n\nmy_value = 2 * ((3 + 2)**2 / 5) - 4\n\nprint(my_value) # prints 6.0\n```\n\n 6.0\n\n\n\n```python\n# Functions are expressions that define relationships between two or more variables.\n# More specifically, a function takes input variables (also called domain variables or independent variables),\n# plugs them into an expression, and then results in an output variable (also called dependent variable).\n\ndef f(x):\n return 2 * x + 1\n\n\nx_values = [0, 1, 2, 3]\n\nfor x in x_values:\n y = f(x)\n print(y)\n \n```\n\n 1\n 3\n 5\n 7\n\n\n\n```python\n# Charting a linear function in python using Sympy (Sympy comes auto bundled in Anaconda)\nfrom matplotlib import *\nfrom sympy import *\n\nx = symbols('x')\nf = 2*x + 1\nplot(f)\n```\n\n\n```python\nx = symbols('x')\nf = x**2 + 1\nplot(f)\n```\n\n\n```python\nfrom sympy.plotting import plot3d\n\nx, y = symbols('x y')\nf = 2*x + 3*y\nplot3d(f)\n```\n\n\n```python\n# A summation is expressed as a sigma and adds elements together.\n# Summation in python\n\nsummation = sum(2*i for i in range(1,6))\nprint(summation)\n```\n\n 30\n\n\n\n```python\n# Exponents multiplies a number by itself a specified number of times\n# .The base is the variable or value we are exponentiating,\n# and the exponent is the number of times we multiply the base value.\n# using sympy to print exponent expression\n\nx = symbols('x')\nexpr = x**2 / x**5\nprint(expr) # x**(-3)\n```\n\n x**(-3)\n\n\n\n```python\n# A logarithm is a math function that finds a power for a specific number and base.\n# Using the log function in python\n\nfrom math import log\n\n# 2 raised to what power gives me 8?\nx = log(8, 2)\n\nprint(x) \n```\n\n 3.0\n\n\n\n```python\n# There is a special number that shows up quite a bit in math called Euler\u2019s number e.\n# It is a special number much like Pi \u03c0, and is approximately 2.71828.\n# e is used a lot because it mathematically simplifies a lot of problems.\n\n# Calculating continuous interest in python, A = P*e**rt\n\nfrom math import exp\n\np = 100 # principal, starting amount\nr = .20 # interest rate, by year\nt = 2.0 # time, number of years\n\na = p * exp(r*t)\n\nprint(a)\n```\n\n 149.18246976412703\n\n\n\n```python\n# When we use e as our base for a logarithm, we call it a natural logarithm.\n\nfrom math import log\n\n# e raised to what power gives us 10?\nx = log(10)\n\nprint(x) # prints 2.302585092994046\n```\n\n 2.302585092994046\n\n\n\n```python\n# A derivative tells the slope of a function, and it is useful to measure the rate of change at any point in a function.\n# derivative calculator in python\n\ndef derivative_x(f, x, step_size):\n m = (f(x + step_size) - f(x)) / ((x + step_size) - x)\n return m\n\n\ndef my_function(x):\n return x**2\n\nslope_at_2 = derivative_x(my_function, 2, .00001)\n\nprint(slope_at_2)\n```\n\n 4.000010000000827\n\n\n\n```python\n# Calculating a derivative in Sympy\n\n# Declare 'x' to SymPy\nx = symbols('x')\n\n# Now just use Python syntax to declare function\nf = x**2\n\n# Calculate the derivative of the function\ndx_f = diff(f)\nprint(dx_f) # prints 2*x\n```\n\n 2*x\n\n\n\n```python\n# Calculating partial derivative with Sympy\n\n# Declare x and y to SymPy\nx,y = symbols('x y')\n\n# Now just use Python 
syntax to declare function\nf = 2*x**3 + 3*y**3\n\n# Calculate the partial derivatives for x and y\ndx_f = diff(f, x)\ndy_f = diff(f, y)\n\nprint(dx_f) # prints 6*x**2\nprint(dy_f) # prints 9*y**2\n\n# plot the function\nplot3d(f)\n```\n\n\n```python\n# integral, which finds the area under the curve for a given range.\n# Integral approximation in python\n\ndef approximate_integral(a, b, n, f):\n delta_x = (b - a) / n\n total_sum = 0\n\n for i in range(1, n + 1):\n midpoint = 0.5 * (2 * a + delta_x * (2 * i - 1))\n total_sum += f(midpoint)\n\n return total_sum * delta_x\n\ndef my_function(x):\n return x**2 + 1\n\narea = approximate_integral(a=0, b=1, n=5, f=my_function)\n\nprint(area)\n```\n\n 1.33\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "111f5b4fbf81c793831764f11a1746a655eb1019", "size": 193870, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Maths/Basic Maths and Calculus Review/Basic Maths and Calculus Review.ipynb", "max_stars_repo_name": "rishi9504/Data-Science", "max_stars_repo_head_hexsha": "10344bf641c601bf16451ddd9eaa28ab4c0fc75b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Maths/Basic Maths and Calculus Review/Basic Maths and Calculus Review.ipynb", "max_issues_repo_name": "rishi9504/Data-Science", "max_issues_repo_head_hexsha": "10344bf641c601bf16451ddd9eaa28ab4c0fc75b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Maths/Basic Maths and Calculus Review/Basic Maths and Calculus Review.ipynb", "max_forks_repo_name": "rishi9504/Data-Science", "max_forks_repo_head_hexsha": "10344bf641c601bf16451ddd9eaa28ab4c0fc75b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 403.0561330561, "max_line_length": 76176, "alphanum_fraction": 0.9421674318, "converted": true, "num_tokens": 1222, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9353465188527685, "lm_q2_score": 0.8840392725805823, "lm_q1q2_score": 0.8268830561373814}} {"text": "```python\nimport sys, os\nsys.path.insert(0, os.path.join(os.pardir, 'src'))\nfrom fe_approx1D_numint import approximate, mesh_uniform, u_glob\nfrom sympy import sqrt, exp, sin, Symbol, lambdify, simplify\nimport numpy as np\nfrom math import log\n\nx = Symbol('x')\nA = 1\nw = 1\n\ncases = {'sqrt': {'f': sqrt(x), 'Omega': [0,1]},\n 'exp': {'f': A*exp(-w*x), 'Omega': [0, 3.0/w]},\n 'sin': {'f': A*sin(w*x), 'Omega': [0, 2*np.pi/w]}}\n\nresults = {}\nd_values = [1, 2, 3, 4]\n\nfor case in cases:\n f = cases[case]['f']\n f_func = lambdify([x], f, modules='numpy')\n Omega = cases[case]['Omega']\n results[case] = {}\n for d in d_values:\n results[case][d] = {'E': [], 'h': [], 'r': []}\n for N_e in [4, 8, 16, 32, 64, 128]:\n try:\n c = approximate(\n f, symbolic=False,\n numint='GaussLegendre%d' % (d+1),\n d=d, N_e=N_e, Omega=Omega,\n filename='tmp_%s_d%d_e%d' % (case, d, N_e))\n except np.linalg.linalg.LinAlgError as e:\n print((str(e)))\n continue\n vertices, cells, dof_map = mesh_uniform(\n N_e, d, Omega, symbolic=False)\n xc, u, _ = u_glob(c, vertices, cells, dof_map, 51)\n e = f_func(xc) - u\n # Trapezoidal integration of the L2 error over the\n # xc/u patches\n e2 = e**2\n L2_error = 0\n for i in range(len(xc)-1):\n L2_error += 0.5*(e2[i+1] + e2[i])*(xc[i+1] - xc[i])\n L2_error = np.sqrt(L2_error)\n h = (Omega[1] - Omega[0])/float(N_e)\n results[case][d]['E'].append(L2_error)\n results[case][d]['h'].append(h)\n # Compute rates\n h = results[case][d]['h']\n E = results[case][d]['E']\n for i in range(len(h)-1):\n r = log(E[i+1]/E[i])/log(h[i+1]/h[i])\n results[case][d]['r'].append(round(r, 2))\n\nprint(results)\nfor case in results:\n for d in sorted(results[case]):\n print(('case=%s d=%d, r: %s' % \\\n (case, d, results[case][d]['r'])))\n\n```\n", "meta": {"hexsha": "a6d68780c6ca5ae36807c4dac8c0478fbe4a5d38", "size": 3225, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Data Science and Machine Learning/Machine-Learning-In-Python-THOROUGH/EXAMPLES/FINITE_ELEMENTS/INTRO/EXERCICES/19_PD_APPROX_ERROR.ipynb", "max_stars_repo_name": "okara83/Becoming-a-Data-Scientist", "max_stars_repo_head_hexsha": "f09a15f7f239b96b77a2f080c403b2f3e95c9650", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Data Science and Machine Learning/Machine-Learning-In-Python-THOROUGH/EXAMPLES/FINITE_ELEMENTS/INTRO/EXERCICES/19_PD_APPROX_ERROR.ipynb", "max_issues_repo_name": "okara83/Becoming-a-Data-Scientist", "max_issues_repo_head_hexsha": "f09a15f7f239b96b77a2f080c403b2f3e95c9650", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Data Science and Machine Learning/Machine-Learning-In-Python-THOROUGH/EXAMPLES/FINITE_ELEMENTS/INTRO/EXERCICES/19_PD_APPROX_ERROR.ipynb", "max_forks_repo_name": "okara83/Becoming-a-Data-Scientist", "max_forks_repo_head_hexsha": "f09a15f7f239b96b77a2f080c403b2f3e95c9650", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2022-02-09T15:41:33.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-11T07:47:40.000Z", "avg_line_length": 33.9473684211, "max_line_length": 76, "alphanum_fraction": 0.4282170543, "converted": true, 
"num_tokens": 665, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9353465134460243, "lm_q2_score": 0.8840392756357327, "lm_q1q2_score": 0.8268830542152315}} {"text": "# One-dimensional Lagrange Interpolation\n\nThe problem of interpolation or finding the value of a function at an arbitrary point $X$ inside a given domain, provided we have discrete known values of the function inside the same domain is at the heart of the finite element method. In this notebooke we use Lagrange interpolation where the approximation $\\hat f(x)$ to the function $f(x)$ is built like:\n\n\\begin{equation}\n\\hat f(x)={L^I}(x)f^I\n\\end{equation}\n\nIn the expression above $L^I$ represents the $I$ Lagrange Polynomial of order $n-1$ and $f^1, f^2,,...,f^n$ are the $n$ known values of the function. Here we are using the summation convention over the repeated superscripts.\n\nThe $I$ Lagrange polynomial is given by the recursive expression:\n\n\\begin{equation}\n{L^I}(x)=\\prod_{J=1, J \\ne I}^{n}{\\frac{{\\left( {x - {x^J}} \\right)}}{{\\left( {{x^I} - {x^J}} \\right)}}} \n\\end{equation}\n\nin the domain $x\\in[-1.0,1.0]$.\n\nWe wish to interpolate the function $ f(x)=x^3+4x^2-10 $ assuming we know its value at points $x=-1.0$, $x=1.0$ and $x=0.0$.\n\n\n```python\nfrom __future__ import division\nimport numpy as np\nfrom scipy import interpolate\nimport sympy as sym\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\n```\n\n\n```python\n%matplotlib notebook\n\n```\n\nFirst we use a function to generate the Lagrage polynomial of order $order$ at point $i$\n\n\n```python\ndef basis_lagrange(x_data, var, cont):\n \"\"\"Find the basis for the Lagrange interpolant\"\"\"\n prod = sym.prod((var - x_data[i])/(x_data[cont] - x_data[i])\n for i in range(len(x_data)) if i != cont)\n return sym.simplify(prod)\n```\n\nwe now define the function $ f(x)=x^3+4x^2-10 $:\n\n\n```python\nfun = lambda x: x**3 + 4*x**2 - 10\n```\n\n\n```python\nx = sym.symbols('x')\nx_data = np.array([-1, 1, 0])\nf_data = fun(x_data)\n```\n\nAnd obtain the Lagrange polynomials using:\n\n\n\n```python\nbasis = []\nfor cont in range(len(x_data)):\n basis.append(basis_lagrange(x_data, x, cont))\n sym.pprint(basis[cont])\n\n```\n\n x\u22c5(x - 1)\n \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n 2 \n x\u22c5(x + 1)\n \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n 2 \n 2 \n - x + 1\n\n\nwhich are shown in the following plots/\n\n\n```python\nnpts = 101\nx_eval = np.linspace(-1, 1, npts)\nbasis_num = sym.lambdify((x), basis, \"numpy\") # Create a lambda function for the polynomials\n```\n\n\n```python\nplt.figure(figsize=(6, 4))\nfor k in range(3): \n y_eval = basis_num(x_eval)[k]\n plt.plot(x_eval, y_eval)\n```\n\n\n \n\n\n\n\n\n\n\n```python\ny_interp = sym.simplify(sum(f_data[k]*basis[k] for k in range(3)))\ny_interp\n```\n\n\n\n\n 4*x**2 + x - 10\n\n\n\nNow we plot the complete approximating polynomial, the actual function and the points where the function was known.\n\n\n```python\ny_interp = sum(f_data[k]*basis_num(x_eval)[k] for k in range(3))\ny_original = fun(x_eval)\n\nplt.figure(figsize=(6, 4))\nplt.plot(x_eval, y_original)\nplt.plot(x_eval, y_interp)\nplt.plot([-1, 1, 0], f_data, 'ko')\nplt.show()\n```\n\n\n \n\n\n\n\n\n\n## Interpolation in 2 dimensions\n\nWe can extend the concept of Lagrange interpolation to 2 or more dimensions.\nIn the case of bilinear interpolation (2\u00d72, 4 vertices) in $[-1, 1]^2$,\nthe base functions are given by (**prove 
it**):\n\n\\begin{align}\nN_0 = \\frac{1}{4}(1 - x)(1 - y)\\\\\nN_1 = \\frac{1}{4}(1 + x)(1 - y)\\\\\nN_2 = \\frac{1}{4}(1 + x)(1 + y)\\\\\nN_3 = \\frac{1}{4}(1 - x)(1 + y)\n\\end{align}\n\nLet's see an example using piecewise bilinear interpolation.\n\n\n```python\ndef rect_grid(Lx, Ly, nx, ny):\n u\"\"\"Create a rectilinear grid for a rectangle\n \n The rectangle has dimensiones Lx by Ly. nx are \n the number of nodes in x, and ny are the number of nodes\n in y\n \"\"\"\n y, x = np.mgrid[-Ly/2:Ly/2:ny*1j, -Lx/2:Lx/2:nx*1j]\n els = np.zeros(((nx - 1)*(ny - 1), 4), dtype=int)\n for row in range(ny - 1):\n for col in range(nx - 1):\n cont = row*(nx - 1) + col\n els[cont, :] = [cont + row, cont + row + 1,\n cont + row + nx + 1, cont + row + nx]\n return x.flatten(), y.flatten(), els\n```\n\n\n```python\ndef interp_bilinear(coords, f_vals, grid=(10, 10)):\n \"\"\"Piecewise bilinear interpolation for rectangular domains\"\"\"\n x_min, y_min = np.min(coords, axis=0)\n x_max, y_max = np.max(coords, axis=0)\n x, y = np.mgrid[-1:1:grid[0]*1j,-1:1:grid[1]*1j]\n N0 = (1 - x) * (1 - y)\n N1 = (1 + x) * (1 - y)\n N2 = (1 + x) * (1 + y)\n N3 = (1 - x) * (1 + y)\n interp_fun = N0 * f_vals[0] + N1 * f_vals[1] + N2 * f_vals[2] + N3 * f_vals[3]\n interp_fun = 0.25*interp_fun\n x, y = np.mgrid[x_min:x_max:grid[0]*1j, y_min:y_max:grid[1]*1j]\n return x, y, interp_fun\n```\n\n\n```python\ndef fun(x, y):\n \"\"\"Monkey saddle function\"\"\"\n return y**3 + 3*y*x**2\n```\n\n\n```python\nx_coords, y_coords, els = rect_grid(2, 2, 4, 4)\nnels = els.shape[0]\nz_coords = fun(x_coords, y_coords)\nz_min = np.min(z_coords)\nz_max = np.max(z_coords)\n```\n\n\n```python\nfig = plt.figure(figsize=(6, 6))\nax = fig.add_subplot(111, projection='3d')\nx, y = np.mgrid[-1:1:51j,-1:1:51j]\nz = fun(x, y)\nsurf = ax.plot_surface(x, y, z, rstride=1, cstride=1, linewidth=0, alpha=0.6,\n cmap=\"viridis\")\nplt.colorbar(surf, shrink=0.5, aspect=10)\nax.plot(x_coords, y_coords, z_coords, 'ok')\nfor k in range(nels):\n x_vals = x_coords[els[k, :]]\n y_vals = y_coords[els[k, :]]\n coords = np.column_stack([x_vals, y_vals])\n f_vals = fun(x_vals, y_vals)\n x, y, z = interp_bilinear(coords, f_vals, grid=[4, 4])\n inter = ax.plot_wireframe(x, y, z, color=\"black\", cstride=3, rstride=3)\nplt.xlabel(r\"$x$\", fontsize=18)\nplt.ylabel(r\"$y$\", fontsize=18)\nax.legend([inter], [u\"Interpolation\"])\nplt.show();\n```\n\n\n \n\n\n\n\n\n\n\n```python\n\n```\n\nThe next cell change the format of the Notebook.\n\n\n```python\nfrom IPython.core.display import HTML\ndef css_styling():\n styles = open('../styles/custom_barba.css', 'r').read()\n return HTML(styles)\ncss_styling()\n```\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n", "meta": {"hexsha": "e76a6ea68a1538f224fc89d8a86ac0cf51a80d24", "size": 449991, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/lagrange_interpolation.ipynb", "max_stars_repo_name": "jomorlier/FEM-Notes", "max_stars_repo_head_hexsha": "3b81053aee79dc59965c3622bc0d0eb6cfc7e8ae", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-04-15T01:53:14.000Z", "max_stars_repo_stars_event_max_datetime": "2020-04-15T01:53:14.000Z", "max_issues_repo_path": "notebooks/lagrange_interpolation.ipynb", "max_issues_repo_name": "jacojvr/FEM-Notes", "max_issues_repo_head_hexsha": "3b81053aee79dc59965c3622bc0d0eb6cfc7e8ae", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, 
"max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/lagrange_interpolation.ipynb", "max_forks_repo_name": "jacojvr/FEM-Notes", "max_forks_repo_head_hexsha": "3b81053aee79dc59965c3622bc0d0eb6cfc7e8ae", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-04-15T01:53:15.000Z", "max_forks_repo_forks_event_max_datetime": "2020-04-15T01:53:15.000Z", "avg_line_length": 157.3395104895, "max_line_length": 257427, "alphanum_fraction": 0.8381989862, "converted": true, "num_tokens": 2462, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9496693702514737, "lm_q2_score": 0.8705972768020108, "lm_q1q2_score": 0.8267795676032136}} {"text": "# Loss Functions\n\n\n```python\n%matplotlib inline\nfrom matplotlib import pyplot as plt\nimport mxnet as mx\nfrom mxnet import nd, autograd\nimport numpy as np\nimport math\n\nx = nd.arange(-5, 5, 0.01)\nx.attach_grad()\n```\n\n## $l_2$ loss\n\n$$\n\\begin{align}\nl(y,y') = \\frac{1}{2}(y-y')^2\n\\end{align}\n$$\n\n\n```python\nwith autograd.record():\n l = 0.5 * x**2\nl.backward()\np = nd.exp(-l)\np = 100 * p / p.sum()\nplt.figure(figsize=(10, 5))\nplt.plot(x.asnumpy(), l.asnumpy())\nplt.plot(x.asnumpy(), x.grad.asnumpy())\nplt.plot(x.asnumpy(), 10*p.asnumpy())\nplt.show()\n```\n\n## $l_1$ loss\n\n$$\n\\begin{align}\nl(y,y') = |y-y'|\n\\end{align}\n$$\n\n\n```python\nwith autograd.record():\n l = nd.abs(x)\nl.backward()\np = nd.exp(-l)\np = 100 * p / p.sum()\nplt.figure(figsize=(10, 5))\nplt.plot(x.asnumpy(), l.asnumpy())\nplt.plot(x.asnumpy(), x.grad.asnumpy())\nplt.plot(x.asnumpy(), 10*p.asnumpy())\nplt.show()\n```\n\n## Huber's Robust loss\n\n$$\n\\begin{align}\nl(y,y') = \\begin{cases}\n|y-y'| - \\frac{1}{2} & \\text{ if } |y-y'| > 1 \\\\\n\\frac{1}{2} (y-y')^2 & \\text{ otherwise}\n\\end{cases}\n\\end{align}\n$$\n\n\n```python\nwith autograd.record():\n tmp = nd.abs(x) > 1\n l = tmp * (nd.abs(x) - 0.5) + (1-tmp) * (0.5 * x**2)\nl.backward()\np = nd.exp(-l)\np = 100 * p / p.sum()\nplt.figure(figsize=(10, 5))\nplt.plot(x.asnumpy(), l.asnumpy())\nplt.plot(x.asnumpy(), x.grad.asnumpy())\nplt.plot(x.asnumpy(), 10*p.asnumpy())\nplt.show()\n```\n\nMany more loss functions, e.g.\n* $\\epsilon$-insensitive loss $\\mathrm{max}(0, |y-y'| - \\epsilon)$\n* Huber's robust loss with width adjustment\n* Log-normal loss, i.e. 
$(\\log y - \\log y')^2$\n\n\n```python\n\n```\n", "meta": {"hexsha": "cd1e662a38f2d46e36d9ea94f9ed198a17be5daa", "size": 91992, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "slides/2_5/loss.ipynb", "max_stars_repo_name": "cristicmf/berkeley-stat-157", "max_stars_repo_head_hexsha": "327f77db7ecdc02001f8b7be8c1fcaf0607694c0", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 2709, "max_stars_repo_stars_event_min_datetime": "2018-12-29T18:15:20.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T13:24:29.000Z", "max_issues_repo_path": "slides/2_5/loss.ipynb", "max_issues_repo_name": "lantainlya/berkeley-stat-157", "max_issues_repo_head_hexsha": "327f77db7ecdc02001f8b7be8c1fcaf0607694c0", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 7, "max_issues_repo_issues_event_min_datetime": "2018-12-27T04:56:20.000Z", "max_issues_repo_issues_event_max_datetime": "2021-02-18T04:43:11.000Z", "max_forks_repo_path": "slides/2_5/loss.ipynb", "max_forks_repo_name": "lantainlya/berkeley-stat-157", "max_forks_repo_head_hexsha": "327f77db7ecdc02001f8b7be8c1fcaf0607694c0", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 1250, "max_forks_repo_forks_event_min_datetime": "2019-01-07T05:51:39.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-31T13:24:18.000Z", "avg_line_length": 357.9455252918, "max_line_length": 29756, "alphanum_fraction": 0.9391686234, "converted": true, "num_tokens": 571, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9496693716759489, "lm_q2_score": 0.8705972616934406, "lm_q1q2_score": 0.8267795544952115}} {"text": "Before you turn this problem in, make sure everything runs as expected. First, **restart the kernel** (in the menubar, select Kernel$\\rightarrow$Restart) and then **run all cells** (in the menubar, select Cell$\\rightarrow$Run All).\n\nMake sure you fill in any place that says `YOUR CODE HERE` or \"YOUR ANSWER HERE\", as well as your name and collaborators below:\n\n\n```python\nNAME = \"Prabal CHowdhury\"\nCOLLABORATORS = \"\"\n```\n\n---\n\n# Differentiation: Forward, Backward, And Central\n---\n\n## Task 1: Differentiation\n\nWe have already learnt about *forward differentiation*, *backward diferentiation* and *central differentiation*. In this part of the assignment we will write methods to calculate this values and check how they perform.\n\nThe equations are as follows,\n\n\\begin{align}\n\\text{forward differentiation}, f^\\prime(x) \\simeq \\frac{f(x+h)-f(x)}{h} \\tag{4.6} \\\\\n\\text{backward differentiation}, f^\\prime(x) \\simeq \\frac{f(x)-f(x-h)}{h} \\tag{4.7} \\\\\n\\text{central differentiation}, f^\\prime(x) \\simeq \\frac{f(x+h)-f(x-h)}{2h} \\tag{4.8}\n\\end{align}\n\n## Importing libraries\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom numpy.polynomial import Polynomial\n```\n\nHere, `forward_diff(f, h, x)`, `backward_diff(f, h, x)`, and `central_diff(f, h, x)` calculates the *forward differentiation*, *backward differentiation* and *central differentiation* respectively. finally the `error(f, f_prime, h, x)` method calculates the different values for various $h$ and returns the errors.\n\nLater we will run some code to test out performance. 
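\n\nAs a quick aside before filling in the implementations, here is a small, self-contained sanity check (a sketch, not part of the assignment) that applies equations (4.6)-(4.8) directly to $f(x)=\\sin(x)$ at $x=1$, where the exact derivative is $\\cos(1)$. Note the order of the terms in the backward formula: $f(x-h)$ is subtracted from $f(x)$.\n\n\n```python\n# Standalone sanity check (not part of the assignment): apply the three\n# difference quotients from equations (4.6)-(4.8) to f(x) = sin(x) at x = 1,\n# where the exact derivative is cos(1).\nimport numpy as np\n\nf_check = np.sin\nx0 = 1.0\nexact = np.cos(x0)\n\nfor h_i in [0.1, 0.01, 0.001]:\n    fwd = (f_check(x0 + h_i) - f_check(x0)) / h_i            # forward, eq. (4.6)\n    bwd = (f_check(x0) - f_check(x0 - h_i)) / h_i            # backward, eq. (4.7)\n    ctr = (f_check(x0 + h_i) - f_check(x0 - h_i)) / (2*h_i)  # central, eq. (4.8)\n    print(h_i, abs(fwd - exact), abs(bwd - exact), abs(ctr - exact))\n```\n\nIf the formulas are implemented consistently, the forward and backward errors should shrink roughly linearly in $h$, while the central error shrinks roughly quadratically.\n\n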
The first one is done for you.\n\n\n```python\ndef forward_diff(f, h, x):\n return (f(x+h) - f(x)) / h\n```\n\n\n```python\ndef backward_diff(f, h, x):\n # --------------------------------------------\n # YOUR CODE HERE\n return (f(x-h) - f(x)) / h\n # --------------------------------------------\n```\n\n\n```python\ndef central_diff(f, h, x):\n # --------------------------------------------\n # YOUR CODE HERE\n return (f(x+h) - f(x-h)) / (2*h)\n # --------------------------------------------\n```\n\n\n```python\ndef error(f, f_prime, h, x):\n Y_correct = f_prime(x)\n f_error = np.array([])\n b_error = np.array([])\n c_error = np.array([])\n \n for h_i in h:\n # for different values of h (h_i)\n # calculate the error at the point x for (i) forward method \n # (ii) backward method\n # (ii) central method\n # the first one is done for you\n f_error_h_i = forward_diff(f, h_i, x) - Y_correct\n f_error = np.append(f_error, f_error_h_i)\n \n b_error_h_i = backward_diff(f, h_i, x) - Y_correct\n b_error = np.append(b_error, b_error_h_i)\n \n c_error_h_i = central_diff(f, h_i, x) - Y_correct\n c_error = np.append(c_error, c_error_h_i)\n \n return f_error, b_error, c_error\n```\n\n## Plot1\nPolynomial and Actual Derivative Function\n\n\n```python\nfig, ax = plt.subplots()\nax.axhline(y=0, color='k')\n\np = Polynomial([2.0, 1.0, -6.0, -2.0, 2.5, 1.0])\ndata = p.linspace(domain=[-2.4, 1.5])\nax.plot(data[0], data[1], label='Function')\n\np_prime = p.deriv(1)\ndata2 = p_prime.linspace(domain=[-2.4, 1.5])\nax.plot(data2[0], data2[1], label='Derivative')\n\nax.legend()\n```\n\n\n```python\nh = 1\nfig, bx = plt.subplots()\nbx.axhline(y=0, color='k')\n\nx = np.linspace(-2.0, 1.3, 50, endpoint=True)\ny = forward_diff(p, h, x)\nbx.plot(x, y, label='Forward; h=1')\ny = backward_diff(p, h, x)\nbx.plot(x, y, label='Backward; h=1')\ny = central_diff(p, h, x)\nbx.plot(x, y, label='Central; h=1')\n\ndata2 = p_prime.linspace(domain=[-2.0, 1.3])\nbx.plot(data2[0], data2[1], label='actual')\n\nbx.legend()\n\n```\n\n\n```python\nh = 0.1\nfig, bx = plt.subplots()\nbx.axhline(y=0, color='k')\n\nx = np.linspace(-2.2, 1.3, 50, endpoint=True)\ny = forward_diff(p, h, x)\nbx.plot(x, y, label='Forward; h=0.1')\ny = backward_diff(p, h, x)\nbx.plot(x, y, label='Backward; h=0.1')\ny = central_diff(p, h, x)\nbx.plot(x, y, label='Central; h=0.1')\n\ndata2 = p_prime.linspace(domain=[-2.2, 1.3])\nbx.plot(data2[0], data2[1], label='actual')\n\nbx.legend()\n\n```\n\n\n```python\nh = 0.01\nfig, bx = plt.subplots()\nbx.axhline(y=0, color='k')\n\nx = np.linspace(-2.2, 1.3, 50, endpoint=True)\ny = forward_diff(p, h, x)\nbx.plot(x, y, label='Forward; h=0.01')\ny = backward_diff(p, h, x)\nbx.plot(x, y, label='Backward; h=0.01')\ny = central_diff(p, h, x)\nbx.plot(x, y, label='Central; h=0.01')\n\ndata2 = p_prime.linspace(domain=[-2.2, 1.3])\nbx.plot(data2[0], data2[1], label='actual')\n\nbx.legend()\n\n```\n\n\n```python\nfig, bx = plt.subplots()\nbx.axhline(y=0, color='k')\n\nh = np.array([1., 0.55, 0.3, .17, 0.1, 0.055, 0.03, 0.017, 0.01])\nerr = error(p, p_prime, h, 2.0)\n\nbx.plot(h, err[0], label='Forward')\nbx.plot(h, err[1], label='Backward')\nbx.plot(h, err[2], label='Central')\nbx.legend()\n\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "2cfc6b18a9465c463bf20d6d07db11d4c4509b8a", "size": 136578, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Forward_Backward_and_Central_Differentiation.ipynb", "max_stars_repo_name": "PrabalChowdhury/CSE330-NUMERICAL-METHODS", "max_stars_repo_head_hexsha": 
"aabfea01f4ceaecfbb50d771ee990777d6e1122c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Forward_Backward_and_Central_Differentiation.ipynb", "max_issues_repo_name": "PrabalChowdhury/CSE330-NUMERICAL-METHODS", "max_issues_repo_head_hexsha": "aabfea01f4ceaecfbb50d771ee990777d6e1122c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Forward_Backward_and_Central_Differentiation.ipynb", "max_forks_repo_name": "PrabalChowdhury/CSE330-NUMERICAL-METHODS", "max_forks_repo_head_hexsha": "aabfea01f4ceaecfbb50d771ee990777d6e1122c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 213.7370892019, "max_line_length": 31666, "alphanum_fraction": 0.8942435824, "converted": true, "num_tokens": 1522, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9073122213606241, "lm_q2_score": 0.9111797142327147, "lm_q1q2_score": 0.826724490579223}} {"text": "```python\nimport pyprob\nfrom pyprob import Model\nfrom pyprob.distributions import Normal\n\nimport torch\nimport numpy as np\nimport math\nimport matplotlib.pyplot as plt\nfig = plt.figure();\n```\n\n\n
    \n\n\n# Defining the model\nFirst, we define the model as a probabilistic program inheriting from `pyprob.Model`. Models inherit from `torch.nn.Module` and can be potentially trained with gradient-based optimization (not covered in this example).\n\nThe `forward` function can have any number and type of arguments as needed.\n\n\n```python\n%matplotlib inline\nclass GaussianUnknownMean(Model):\n def __init__(self):\n super().__init__(name='Gaussian with unknown mean') # give the model a name\n self.prior_mean = 1\n self.prior_std = math.sqrt(5)\n self.likelihood_std = math.sqrt(2)\n\n def forward(self): # Needed to specify how the generative model is run forward\n # sample the (latent) mean variable to be inferred:\n mu = pyprob.sample(Normal(self.prior_mean, self.prior_std)) # NOTE: sample -> denotes latent variables\n\n # define the likelihood\n likelihood = Normal(mu, self.likelihood_std)\n\n # Lets add two observed variables\n # -> the 'name' argument is used later to assignment values:\n pyprob.observe(likelihood, name='obs0') # NOTE: observe -> denotes observable variables\n pyprob.observe(likelihood, name='obs1')\n\n # return the latent quantity of interest\n return mu\n \nmodel = GaussianUnknownMean()\n```\n\n# Finding the correct posterior analytically\nSince all distributions are gaussians in this model, we can analytically compute the posterior and we can compare the true posterior to the inferenced one.\n\nAssuming that the prior and likelihood are $p(x) = \\mathcal{N}(\\mu_0, \\sigma_0)$ and $p(y|x) = \\mathcal{N}(x, \\sigma)$ respectively and, $y_1, y_2, \\ldots y_n$ are the observed values, the posterior would be $p(x|y) = \\mathcal{N}(\\mu_p, \\sigma_p)$ where,\n$$\n\\begin{align}\n\\sigma_{p}^{2} & = \\frac{1}{\\frac{n}{\\sigma^2} + \\frac{1}{\\sigma_{0}^{2}}} \\\\\n\\mu_p & = \\sigma_{p}^{2} \\left( \\frac{\\mu_0}{\\sigma_{0}^{2}} + \\frac{n\\overline{y}}{\\sigma^2} \\right)\n\\end{align}\n$$\nThe following class implements computing this posterior distribution. 
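\n\nBefore that class, here is a quick standalone check of these formulas with plain NumPy (just a sketch, independent of the code that follows; the observed values 4 and 5 are the ones used later in this notebook):\n\n\n```python\n# Standalone check of the conjugate-posterior formulas above (plain NumPy sketch).\n# Prior: N(1, sqrt(5)); likelihood std: sqrt(2); observations: [4, 5].\nimport numpy as np\n\nprior_mean, prior_var = 1.0, 5.0\nlik_var = 2.0\nobs = np.array([4.0, 5.0])\nn = len(obs)\n\npost_var = 1.0 / (n / lik_var + 1.0 / prior_var)\npost_mean = post_var * (prior_mean / prior_var + n * obs.mean() / lik_var)\n\nprint(post_mean, np.sqrt(post_var))  # roughly 3.92 and 0.91\n```\n\nThe sampled posterior later in the notebook should approximately recover these values.\n\n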
We also implement some helper functions and variables for plotting the correct posterior and prior.\n\n\n```python\ndef plot_function(min_val, max_val, func, *args, **kwargs):\n '''\n Apply a function repetitively to tuples within a vector\n '''\n x = np.linspace(min_val,max_val,int((max_val-min_val)*50))\n plt.plot(x, np.vectorize(func)(x), *args, **kwargs)\n\ndef get_dist_pdf(dist):\n return lambda x: math.exp(dist.log_prob(x))\n \nclass CorrectDistributions:\n def __init__(self, model):\n self.prior_mean = model.prior_mean\n self.prior_std = model.prior_std\n self.likelihood_std = model.likelihood_std\n self.prior_dist = Normal(self.prior_mean, self.prior_std)\n \n @property\n def observed_list(self):\n return self.__observed_list\n\n @observed_list.setter\n def observed_list(self, new_observed_list):\n self.__observed_list = new_observed_list\n self.construct_correct_posterior()\n \n def construct_correct_posterior(self):\n n = len(self.observed_list)\n # As per the analytical method we construct the posterior given by the formula\n posterior_var = 1/(n/self.likelihood_std**2 + 1/self.prior_std**2)\n posterior_mu = posterior_var * (self.prior_mean/self.prior_std**2 + n*np.mean(self.observed_list)/self.likelihood_std**2)\n self.posterior_dist = Normal(posterior_mu, math.sqrt(posterior_var))\n\n def prior_pdf(self, model, x):\n p = Normal(model.prior_mean,model.prior_stdd)\n return math.exp(p.log_prob(x))\n\n def plot_posterior(self, min_val, max_val):\n if not hasattr(self, 'posterior_dist'):\n raise AttributeError('observed values are not set yet, and posterior is not defined.')\n plot_function(min_val, max_val, get_dist_pdf(self.posterior_dist), label='correct posterior', color='orange')\n\n\n def plot_prior(self, min_val, max_val):\n plot_function(min_val, max_val, get_dist_pdf(self.prior_dist), label='prior', color='green')\n\ncorrect_dists = CorrectDistributions(model)\n```\n\n# Prior distribution\nWe inspect the prior distribution to see if it behaves in the way we intended. 
First we construct an `Empirical` distribution with forward samples from the model.\n\nNote: Extra arguments passed to `prior_distribution` will be forwarded to model's `forward` function.\n\n\n```python\nprior = model.prior_distribution(num_traces=1000)\n```\n\n Time spent | Time remain.| Progress | Trace | Traces/sec\n 0d:00:00:00 | 0d:00:00:00 | #################### | 1000/1000 | 1,621.42 \n\n\nWe can plot a histogram of these samples that are held by the `Empirical` distribution.\n\n\n```python\n# Compare our assumed prior against the actual prior from the model - sanity check\nprior.plot_histogram(show=False, alpha=0.75, label='empirical prior')\ncorrect_dists.plot_prior(min(prior.values_numpy()),max(prior.values_numpy()))\nplt.legend();\n```\n\n# Posterior inference with importance sampling\nFor a given set of observations, we can get samples from the posterior distribution.\n\n\n```python\ncorrect_dists.observed_list = [4, 5] # Observations\n# sample from posterior (5000 samples)\nposterior = model.posterior_distribution(\n num_traces=5000, # the number of samples estimating the posterior\n inference_engine=pyprob.InferenceEngine.IMPORTANCE_SAMPLING, # specify which inference engine to use\n observe={'obs0': correct_dists.observed_list[0],\n 'obs1': correct_dists.observed_list[1]} # assign values to the observed values\n )\n```\n\n Time spent | Time remain.| Progress | Trace | Traces/sec\n 0d:00:00:02 | 0d:00:00:00 | #################### | 5000/5000 | 1,807.42 \n\n\nRegular importance sampling uses proposals from the prior distribution. We can see this by plotting the histogram of the posterior distribution without using the importance weights. As expected, this is the same as the prior distribution.\n\n\n```python\nposterior_unweighted = posterior.unweighted()\nposterior_unweighted.plot_histogram(show=False, alpha=0.75, label='empirical proposal')\ncorrect_dists.plot_prior(min(posterior_unweighted.values_numpy()),\n max(posterior_unweighted.values_numpy()))\ncorrect_dists.plot_posterior(min(posterior_unweighted.values_numpy()),\n max(posterior_unweighted.values_numpy()))\nplt.legend();\n```\n\nWhen we do use the weights, we end up with the correct posterior distribution. The following shows the sampled posterior with the correct posterior (orange curve).\n\n\n```python\nposterior.plot_histogram(show=False, alpha=0.75, bins=50, label='inferred posterior')\ncorrect_dists.plot_posterior(min(posterior.values_numpy()),\n max(posterior_unweighted.values_numpy()))\nplt.legend();\n```\n\nIn practice, it is advised to use methods of the `Empirical` posterior distribution instead of dealing with the weights directly, which ensures that the weights are used in the correct way.\n\nFor instance, we can get samples from the posterior, compute its mean and standard deviation, and evaluate expectations of a function under the distribution:\n\n\n```python\nprint(posterior.sample())\n```\n\n tensor(4.1385)\n\n\n\n```python\nprint(posterior.mean)\n```\n\n tensor(3.9196)\n\n\n\n```python\nprint(posterior.stddev)\n```\n\n tensor(0.9020)\n\n\n\n```python\nprint(posterior.expectation(lambda x: torch.sin(x)))\n```\n\n tensor(-0.4711)\n\n\n# Inference compilation\nInference compilation is a technique where a deep neural network is used for parameterizing the proposal distribution in importance sampling (https://arxiv.org/abs/1610.09900). 
This neural network, which we call inference network, is automatically generated and trained with data sampled from the model.\n\nWe can learn an inference network for our model.\n\n\n```python\nmodel.learn_inference_network(num_traces=20000,\n observe_embeddings={'obs0' : {'dim' : 32},\n 'obs1': {'dim' : 32}},\n inference_network=pyprob.InferenceNetwork.LSTM)\n```\n\n Creating new inference network...\n Observable obs0: observe embedding not specified, using the default FEEDFORWARD.\n Observable obs0: embedding depth not specified, using the default 2.\n Observable obs1: observe embedding not specified, using the default FEEDFORWARD.\n Observable obs1: embedding depth not specified, using the default 2.\n Observe embedding dimension: 64\n Train. time | Epoch| Trace | Init. loss| Min. loss | Curr. loss| T.since min | Traces/sec\n New layers, address: 16__forward__mu__Normal__1, distribution: Normal\n Total addresses: 1, distribution types: 1, parameters: 1,643,583\n 0d:00:00:31 | 1 | 20,032 | +2.21e+00 | +1.15e+00 | \u001b[32m+1.27e+00\u001b[0m | 0d:00:00:10 | 638.0 \n\n\nWe now construct the posterior distribution using samples from inference compilation, using the trained inference network.\n\nA much smaller number of samples are enough (200 vs. 5000) because the inference network provides good proposals based on the given observations. We can see that the proposal distribution given by the inference network is doing a job much better than the prior, by plotting the posterior samples without the importance weights, for a selection of observations.\n\n\n```python\n# sample from posterior (200 samples)\nposterior = model.posterior_distribution(\n num_traces=200, # the number of samples estimating the posterior\n inference_engine=pyprob.InferenceEngine.IMPORTANCE_SAMPLING_WITH_INFERENCE_NETWORK, # specify which inference engine to use\n observe={'obs0': correct_dists.observed_list[0],\n 'obs1': correct_dists.observed_list[1]} # assign values to the observed values\n)\n```\n\n Time spent | Time remain.| Progress | Trace | Traces/sec\n 0d:00:00:00 | 0d:00:00:00 | #################### | 200/200 | 316.34 \n\n\n\n```python\nposterior_unweighted = posterior.unweighted()\nposterior_unweighted.plot_histogram(\n show=False, bins=50, alpha=0.75, label='empirical proposal')\n\ncorrect_dists.plot_posterior(min(posterior.values_numpy()),\n max(posterior.values_numpy()))\nplt.legend();\n```\n\nWe can see that the proposal distribution given by the inference network is already a good estimate to the true posterior which makes the inferred posterior a much better estimate than the prior, even using much less number of samples.\n\n\n```python\nposterior.plot_histogram(show=False, bins=50, alpha=0.75, label='inferred posterior')\ncorrect_dists.plot_posterior(min(posterior.values_numpy()),\n max(posterior.values_numpy()))\nplt.legend();\n```\n\nInference compilation performs amortized inference which means, the same trained network provides proposal distributions for any observed values.\n\nWe can try performing inference using the same trained network with different observed values.\n\n\n```python\ncorrect_dists.observed_list = [8, 10] # New observations\n\nposterior = model.posterior_distribution(\n num_traces=200,\n inference_engine=pyprob.InferenceEngine.IMPORTANCE_SAMPLING_WITH_INFERENCE_NETWORK, # specify which inference engine to use\n observe={'obs0': correct_dists.observed_list[0],\n 'obs1': correct_dists.observed_list[1]}\n)\n```\n\n Time spent | Time remain.| Progress | Trace | Traces/sec\n 
0d:00:00:00 | 0d:00:00:00 | #################### | 200/200 | 362.70 \n\n\n\n```python\nposterior_unweighted = posterior.unweighted()\nposterior_unweighted.plot_histogram(show=False, bins=50, alpha=0.75, label='empirical proposal')\n\ncorrect_dists.plot_posterior(min(posterior.values_numpy()),\n max(posterior.values_numpy()))\nplt.legend();\n```\n\n\n```python\nposterior.plot_histogram(show=False, bins=50, alpha=0.75, label='inferred posterior')\ncorrect_dists.plot_posterior(min(posterior.values_numpy()),\n max(posterior.values_numpy()))\nplt.legend();\n\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "6487fd6ebac99520782bd977d89ca3ff676915e4", "size": 217706, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "examples/gaussian_unknown_mean.ipynb", "max_stars_repo_name": "SwapneelM/pyprob", "max_stars_repo_head_hexsha": "4d93441ea838c3491a49050ae05d218a34708e6d", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "examples/gaussian_unknown_mean.ipynb", "max_issues_repo_name": "SwapneelM/pyprob", "max_issues_repo_head_hexsha": "4d93441ea838c3491a49050ae05d218a34708e6d", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "examples/gaussian_unknown_mean.ipynb", "max_forks_repo_name": "SwapneelM/pyprob", "max_forks_repo_head_hexsha": "4d93441ea838c3491a49050ae05d218a34708e6d", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 343.9273301738, "max_line_length": 33040, "alphanum_fraction": 0.9320184101, "converted": true, "num_tokens": 2910, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.907312213841788, "lm_q2_score": 0.9111797045849582, "lm_q1q2_score": 0.8267244749746848}} {"text": "### Minimum graph coloring/ Minimum chromatic number\n\nProblem definition: https://www8.cs.umu.se/kurser/TDBAfl/VT06/algorithms/COMPEND/COMPED11/NODE13.HTM#SECTION00021500000000000000\n\nAssignment based ILP-formulation (e.g. see here: https://arxiv.org/abs/1706.10191):\nLet a graph $G=(V,E)$ with vertex set $V$ and edge set $E$ be given.\nWe model the decision via variables $x_{vi}\\in\\{0,1\\}$, that take value $1$ if vertex $v\\in V$ is assigned color $i$ and $0$ otherwise.\n\nFurther, let $H$ be an upper bound for the chromatic number, i.e. $H\\le|V|$, but usually smaller as it could come from, e.g., the greedy coloring algorithm (https://en.wikipedia.org/wiki/Greedy_coloring). 
$w_{i}$ takes value $1$ if color $i=1,\\ldots H$ is used in the assignment and $0$ otherwise.\n\n\\begin{align}\n &\\min \\sum_{1\\le i \\le H}w_{i} \\text{ (minimize the total number of colors used) }\\\\\n &\\text{s.t.} \\\\\n &\\sum_{i=1}^{H} x_{vi} = 1\\;\\forall v\\in V \\text{ (make sure every vertex gets exactly one color) } \\\\\n &x_{ui}+x_{vi}\\le w_{i}\\;\\forall(u,v)\\in E, i=1,\\ldots,H\\text{ (make sure no two neighboring vertices get the same color) } \\\\\n &x_{vi},w_{i}\\in\\{0,1\\}\\;\\forall v\\in V, i=1,\\ldots, H \\text{ (assigning a color or not is a binary decision) }\n\\end{align}\n\n\n\n```python\nimport gurobipy as gp\nimport networkx as nx\n```\n\n\n```python\nfrom graphilp.partitioning.min_vertex_coloring import *\nfrom graphilp.partitioning.heuristics import vertex_coloring_greedy\n```\n\n\n```python\nfrom graphilp.imports import networkx as imp_nx\n```\n\n#### Create test graphs\n\n\n```python\n#create cycle graphs as test cases. we know odd cycles have chromatic number 3 and even cycles have chromatic number 2\nG_odd_init = nx.cycle_graph(n=5)\nG_even_init = nx.cycle_graph(n=4)\n```\n\n\n```python\n#create ILPGraph objects\nG_odd = imp_nx.read(G_odd_init)\nG_even = imp_nx.read(G_even_init)\n```\n\n\n```python\n#create test models\nm_odd = create_model(G_odd)\nm_even = create_model(G_even)\n```\n\n\n```python\n#run optimization\nm_odd.optimize()\n```\n\n\n```python\nm_even.optimize()\n```\n\n#### Inspect solutions\n\n\n```python\ncolor_assignment_even, node_to_col_even = extract_solution(G_even, m_even)\n```\n\n\n```python\n#visualize solution\nnx.draw_circular(G_even.G, node_color=list(node_to_col_even.values()))\n```\n\n\n```python\ncolor_assignment_odd, node_to_col_odd = extract_solution(G_odd, m_odd)\n```\n\n\n```python\nnx.draw_circular(G_odd.G, node_color=list(node_to_col_odd.values()))\n```\n\n\n```python\ncol_to_node, node_to_col = vertex_coloring_greedy.get_heuristic(G_even)\n```\n\n\n```python\nnx.draw_circular(G_even.G, node_color=list(node_to_col.values()))\n```\n", "meta": {"hexsha": "8f23802d373c9c87d967e2be346add4d6a5f3db0", "size": 5462, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "graphilp/examples/MinVertexColouring.ipynb", "max_stars_repo_name": "VF-DE-CDS/GraphILP-API", "max_stars_repo_head_hexsha": "841b80256f06b5dfc9f3bd4e514f1e24fb82b6ce", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 14, "max_stars_repo_stars_event_min_datetime": "2021-05-03T10:58:55.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-25T22:22:29.000Z", "max_issues_repo_path": "graphilp/examples/MinVertexColouring.ipynb", "max_issues_repo_name": "VF-DE-CDS/GraphILP-API", "max_issues_repo_head_hexsha": "841b80256f06b5dfc9f3bd4e514f1e24fb82b6ce", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "graphilp/examples/MinVertexColouring.ipynb", "max_forks_repo_name": "VF-DE-CDS/GraphILP-API", "max_forks_repo_head_hexsha": "841b80256f06b5dfc9f3bd4e514f1e24fb82b6ce", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 25.287037037, "max_line_length": 308, "alphanum_fraction": 0.570303918, "converted": true, "num_tokens": 787, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.963779946215714, "lm_q2_score": 0.8577681013541613, "lm_q1q2_score": 0.8266996945886687}} {"text": "# Part 1 - Scalars and Vectors\n\nFor the questions below it is not sufficient to simply provide answer to the questions, but you must solve the problems and show your work using python (the NumPy library will help a lot!) Translate the vectors and matrices into their appropriate python representations and use numpy or functions that you write yourself to demonstrate the result or property. \n\n## 1.1 Create a two-dimensional vector and plot it on a graph\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n```\n\n\n```python\nvector = np.array([1,6])\nplt.arrow(0, 0, *vector, head_width=.1, head_length=0.1)\nplt.xlim(0,max(vector)+1) \nplt.ylim(0,max(vector)+1);\n```\n\n## 1.2 Create a three-dimensional vecor and plot it on a graph\n\n\n```python\nfrom mpl_toolkits.mplot3d import Axes3D\n\nfig = plt.figure()\nax = fig.add_subplot(111, projection='3d')\n\nvector = np.array([3, 2, 3])\n\nax.quiver(*(0,0,0), *vector)\nax.set_xlim([0, max(vector) + 1])\nax.set_ylim([0, max(vector) + 1])\nax.set_zlim([0, max(vector) + 1])\nax.set_xlabel('X')\nax.set_ylabel('Y')\nax.set_zlabel('Z')\n\nplt.show();\n```\n\n## 1.3 Scale the vectors you created in 1.1 by $5$, $\\pi$, and $-e$ and plot all four vectors (original + 3 scaled vectors) on a graph. What do you notice about these vectors? \n\n\n```python\nvector = np.array([1,6])\nquiver = np.array([vector, vector*5, vector*np.pi, vector*-np.exp(1)])\n\nfor v in quiver:\n plt.arrow(0, 0, *v, head_width=1, head_length=1)\n \nplt.xlim(quiver.min()-1,quiver.max()+1) \nplt.ylim(quiver.min()-1,quiver.max()+1);\n```\n\n## 1.4 Graph vectors $\\vec{a}$ and $\\vec{b}$ and plot them on a graph\n\n\\begin{align}\n\\vec{a} = \\begin{bmatrix} 5 \\\\ 7 \\end{bmatrix}\n\\qquad\n\\vec{b} = \\begin{bmatrix} 3 \\\\4 \\end{bmatrix}\n\\end{align}\n\n\n```python\na = np.array([5, 7])\nb = np.array([3, 4])\nquiver = np.array([a, b])\n\nfor v in quiver:\n plt.arrow(0, 0, *v, head_width=0.5, head_length=0.5)\n \nplt.xlim(0,quiver.max()+1) \nplt.ylim(0,quiver.max()+1);\n```\n\n## 1.5 find $\\vec{a} - \\vec{b}$ and plot the result on the same graph as $\\vec{a}$ and $\\vec{b}$. Is there a relationship between vectors $\\vec{a} \\thinspace, \\vec{b} \\thinspace \\text{and} \\thinspace \\vec{a-b}$\n\n\n```python\na = np.array([5, 7])\nb = np.array([3, 4])\nquiver = np.array([a, b, a-b])\n\nfor v in quiver:\n plt.arrow(0, 0, *v, head_width=0.5, head_length=0.5)\n \nplt.xlim(0,quiver.max()+1) \nplt.ylim(0,quiver.max()+1);\n```\n\n$\\vec{a-b}$ is the length and direction connecting the tips of $\\vec{a}$ and $\\vec{b}$\n\n## 1.6 Find $c \\cdot d$\n\n\\begin{align}\n\\vec{c} = \\begin{bmatrix}7 & 22 & 4 & 16\\end{bmatrix}\n\\qquad\n\\vec{d} = \\begin{bmatrix}12 & 6 & 2 & 9\\end{bmatrix}\n\\end{align}\n\n\n\n```python\nnp.dot(np.array([ 7, 22, 4, 16]),\n np.array([12, 6, 2, 9]))\n```\n\n\n\n\n 368\n\n\n\n## 1.7 Find $e \\times f$\n\n\\begin{align}\n\\vec{e} = \\begin{bmatrix} 5 \\\\ 7 \\\\ 2 \\end{bmatrix}\n\\qquad\n\\vec{f} = \\begin{bmatrix} 3 \\\\4 \\\\ 6 \\end{bmatrix}\n\\end{align}\n\n\n```python\nnp.cross(np.array([5, 7, 2]),\n np.array([3, 4, 6]))\n```\n\n\n\n\n array([ 34, -24, -1])\n\n\n\n## 1.8 Find $||g||$ and then find $||h||$. 
Which is longer?\n\n\\begin{align}\n\\vec{e} = \\begin{bmatrix} 1 \\\\ 1 \\\\ 1 \\\\ 8 \\end{bmatrix}\n\\qquad\n\\vec{f} = \\begin{bmatrix} 3 \\\\3 \\\\ 3 \\\\ 3 \\end{bmatrix}\n\\end{align}\n\n\n```python\n(np.linalg.norm(np.array([1, 1, 1, 8])), \n np.linalg.norm(np.array([3, 3, 3, 3])))\n```\n\n\n\n\n (8.18535277187245, 6.0)\n\n\n\n## 1.9 Show that the following vectors are orthogonal (perpendicular to each other):\n\n\\begin{align}\n\\vec{g} = \\begin{bmatrix} 1 \\\\ 0 \\\\ -1 \\end{bmatrix}\n\\qquad\n\\vec{h} = \\begin{bmatrix} 1 \\\\ \\sqrt{2} \\\\ 1 \\end{bmatrix}\n\\end{align}\n\n\n```python\nnp.dot(np.array([1, 0, -1]),\n np.array([1, 2**(1/2), 1]))\n```\n\n\n\n\n 0.0\n\n\n\n# Part 2 - Matrices\n\n## 2.1 What are the dimensions of the following matrices? Which of the following can be multiplied together? See if you can find all of the different legal combinations.\n\\begin{align}\nA = \\begin{bmatrix}\n1 & 2 \\\\\n3 & 4 \\\\\n5 & 6\n\\end{bmatrix}\n\\qquad\nB = \\begin{bmatrix}\n2 & 4 & 6 \\\\\n\\end{bmatrix}\n\\qquad\nC = \\begin{bmatrix}\n9 & 6 & 3 \\\\\n4 & 7 & 11\n\\end{bmatrix}\n\\qquad\nD = \\begin{bmatrix}\n1 & 0 & 0 \\\\\n0 & 1 & 0 \\\\\n0 & 0 & 1\n\\end{bmatrix}\n\\qquad\nE = \\begin{bmatrix}\n1 & 3 \\\\\n5 & 7\n\\end{bmatrix}\n\\end{align}\n\n\n```python\nA = np.array([[1, 2],\n [3, 4],\n [5, 6]])\n\nB = np.array([[2, 4, 6]])\n\nC = np.array([[9, 6, 3],\n [4, 7, 11]])\n\nD = np.array([[1, 0, 0],\n [0, 1, 0],\n [0, 0, 1]])\n\nE = np.array([[1, 3],\n [5, 7]])\n\nquiver = [A, B, C, D, E]\n\n# You can multipy any two matrices where the number of columns of the first matrix \n# is equal to the number of rows of the second matrix.\n\nfor M1 in quiver:\n for M2 in quiver:\n if M1 is M2:\n continue\n if M1.shape[1] == M2.shape[0]:\n print('Can multiply:', M1, M2, sep='\\n')\n print('\\n')\n if M1.shape[0] == M2.shape[1]:\n print('Can multiply:', M2, M1, sep='\\n')\n print('\\n')\n```\n\n Can multiply:\n [[2 4 6]]\n [[1 2]\n [3 4]\n [5 6]]\n \n \n Can multiply:\n [[1 2]\n [3 4]\n [5 6]]\n [[ 9 6 3]\n [ 4 7 11]]\n \n \n Can multiply:\n [[ 9 6 3]\n [ 4 7 11]]\n [[1 2]\n [3 4]\n [5 6]]\n \n \n Can multiply:\n [[1 0 0]\n [0 1 0]\n [0 0 1]]\n [[1 2]\n [3 4]\n [5 6]]\n \n \n Can multiply:\n [[1 2]\n [3 4]\n [5 6]]\n [[1 3]\n [5 7]]\n \n \n Can multiply:\n [[2 4 6]]\n [[1 2]\n [3 4]\n [5 6]]\n \n \n Can multiply:\n [[2 4 6]]\n [[1 0 0]\n [0 1 0]\n [0 0 1]]\n \n \n Can multiply:\n [[ 9 6 3]\n [ 4 7 11]]\n [[1 2]\n [3 4]\n [5 6]]\n \n \n Can multiply:\n [[1 2]\n [3 4]\n [5 6]]\n [[ 9 6 3]\n [ 4 7 11]]\n \n \n Can multiply:\n [[ 9 6 3]\n [ 4 7 11]]\n [[1 0 0]\n [0 1 0]\n [0 0 1]]\n \n \n Can multiply:\n [[1 3]\n [5 7]]\n [[ 9 6 3]\n [ 4 7 11]]\n \n \n Can multiply:\n [[1 0 0]\n [0 1 0]\n [0 0 1]]\n [[1 2]\n [3 4]\n [5 6]]\n \n \n Can multiply:\n [[2 4 6]]\n [[1 0 0]\n [0 1 0]\n [0 0 1]]\n \n \n Can multiply:\n [[ 9 6 3]\n [ 4 7 11]]\n [[1 0 0]\n [0 1 0]\n [0 0 1]]\n \n \n Can multiply:\n [[1 2]\n [3 4]\n [5 6]]\n [[1 3]\n [5 7]]\n \n \n Can multiply:\n [[1 3]\n [5 7]]\n [[ 9 6 3]\n [ 4 7 11]]\n \n \n\n\n## 2.2 Find the following products: CD, AE, and BA. What are the dimensions of the resulting matrices? 
How does that relate to the dimensions of their factor matrices?\n\n\n```python\n# the result of multiplying two matrices will have the shape of \n# the first matrix's rows and the second matrix's columns\n\nCD = np.matmul(C, D)\nprint(CD)\n\nAE = np.matmul(A, E)\nprint(AE)\n\nBA = np.matmul(B, A)\nprint(BA)\n```\n\n [[ 9 6 3]\n [ 4 7 11]]\n [[11 17]\n [23 37]\n [35 57]]\n [[44 56]]\n\n\n## 2.3 Find $F^{T}$. How are the numbers along the main diagonal (top left to bottom right) of the original matrix and its transpose related? What are the dimensions of $F$? What are the dimensions of $F^{T}$?\n\n\\begin{align}\nF = \n\\begin{bmatrix}\n20 & 19 & 18 & 17 \\\\\n16 & 15 & 14 & 13 \\\\\n12 & 11 & 10 & 9 \\\\\n8 & 7 & 6 & 5 \\\\\n4 & 3 & 2 & 1\n\\end{bmatrix}\n\\end{align}\n\n\n```python\nF = np.array([[20, 19, 18, 17],\n [16, 15, 14, 13],\n [12, 11, 10, 9],\n [ 8, 7, 6, 5],\n [ 4, 3, 2, 1]])\nprint(F)\n\nprint(np.transpose(F))\n```\n\n [[20 19 18 17]\n [16 15 14 13]\n [12 11 10 9]\n [ 8 7 6 5]\n [ 4 3 2 1]]\n [[20 16 12 8 4]\n [19 15 11 7 3]\n [18 14 10 6 2]\n [17 13 9 5 1]]\n\n\n# Part 3 - Square Matrices\n\n## 3.1 Find $IG$ (be sure to show your work) \ud83d\ude03\n\n\\begin{align}\nG= \n\\begin{bmatrix}\n12 & 11 \\\\\n7 & 10 \n\\end{bmatrix}\n\\end{align}\n\nThe product of any square matrix with its corresponding identity matrix is the same martrix. \n\n## 3.2 Find $|H|$ and then find $|J|$.\n\n\\begin{align}\nH= \n\\begin{bmatrix}\n12 & 11 \\\\\n7 & 10 \n\\end{bmatrix}\n\\qquad\nJ= \n\\begin{bmatrix}\n0 & 1 & 2 \\\\\n7 & 10 & 4 \\\\\n3 & 2 & 0\n\\end{bmatrix}\n\\end{align}\n\n\n\n```python\nH = np.array([[12, 11],\n [ 7, 10]])\nnp.linalg.det(H)\n```\n\n\n\n\n 43.0\n\n\n\n\n```python\nJ = np.array([[ 0, 1, 2],\n [ 7,10, 4],\n [ 3, 2, 0]])\nnp.linalg.det(J)\n```\n\n\n\n\n -19.999999999999996\n\n\n\n## 3.3 Find $H^{-1}$ and then find $J^{-1}$\n\n\n```python\nnp.linalg.inv(H)\n```\n\n\n\n\n array([[ 0.23255814, -0.25581395],\n [-0.1627907 , 0.27906977]])\n\n\n\n\n```python\nnp.linalg.inv(J)\n```\n\n\n\n\n array([[ 0.4 , -0.2 , 0.8 ],\n [-0.6 , 0.3 , -0.7 ],\n [ 0.8 , -0.15, 0.35]])\n\n\n\n## 3.4 Find $HH^{-1}$ and then find $J^{-1}J$. Is $HH^{-1} == J^{-1}J$? Why or Why not?\n\n\n```python\nnp.matmul(H, np.linalg.inv(H))\n```\n\n\n\n\n array([[ 1.00000000e+00, 0.00000000e+00],\n [-2.22044605e-16, 1.00000000e+00]])\n\n\n\n\n```python\nnp.matmul(np.linalg.inv(J), J)\n```\n\n\n\n\n array([[ 1.00000000e+00, -4.44089210e-16, 0.00000000e+00],\n [ 6.66133815e-16, 1.00000000e+00, 0.00000000e+00],\n [-1.11022302e-16, 0.00000000e+00, 1.00000000e+00]])\n\n\n\nNo, they are not equal. They don't have the same dimensions.\n\n# Stretch Goals: \n\nA reminder that these challenges are optional. If you finish your work quickly we welcome you to work on them. If there are other activities that you feel like will help your understanding of the above topics more, feel free to work on that. Topics from the Stretch Goals sections will never end up on Sprint Challenges. You don't have to do these in order, you don't have to do all of them. \n\n- Write a function that can calculate the dot product of any two vectors of equal length that are passed to it.\n- Write a function that can calculate the norm of any vector\n- Prove to yourself again that the vectors in 1.9 are orthogonal by graphing them. 
\n- Research how to plot a 3d graph with animations so that you can make the graph rotate (this will be easier in a local notebook than in google colab)\n- Create and plot a matrix on a 2d graph.\n- Create and plot a matrix on a 3d graph.\n- Plot two vectors that are not collinear on a 2d graph. Calculate the determinant of the 2x2 matrix that these vectors form. How does this determinant relate to the graphical interpretation of the vectors?\n\n\n", "meta": {"hexsha": "0bc7053001268ae1b464a090b3a191edda12bde6", "size": 127900, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "05-Linear-Algebra/01_Linear_Algebra_Assignment.ipynb", "max_stars_repo_name": "shalevy1/data-science-journal", "max_stars_repo_head_hexsha": "2a6beaf5bf328e257b638a695983457a9f3cd7ce", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 71, "max_stars_repo_stars_event_min_datetime": "2019-03-05T04:44:48.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-24T09:47:48.000Z", "max_issues_repo_path": "05-Linear-Algebra/01_Linear_Algebra_Assignment.ipynb", "max_issues_repo_name": "pesobreiro/data-science-journal", "max_issues_repo_head_hexsha": "82a72b4ed5ce380988fac17b0acd97254c2b5c86", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "05-Linear-Algebra/01_Linear_Algebra_Assignment.ipynb", "max_forks_repo_name": "pesobreiro/data-science-journal", "max_forks_repo_head_hexsha": "82a72b4ed5ce380988fac17b0acd97254c2b5c86", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 37, "max_forks_repo_forks_event_min_datetime": "2019-03-07T05:08:03.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-05T11:32:51.000Z", "avg_line_length": 125.2693437806, "max_line_length": 54476, "alphanum_fraction": 0.8752775606, "converted": true, "num_tokens": 3855, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.905989822921759, "lm_q2_score": 0.9124361646438639, "lm_q1q2_score": 0.8266578792331032}} {"text": "# Lecture 4 - SciPy\n\n\nWhat we have seen so far\n- How to setup a python environment and jupyter notebooks\n- Basic python language features\n- Introduction to NumPy\n- Plotting using matplotlib\n\nScipy is a collection of packages that provide useful mathematical functions commonly used for scientific computing.\n\nList of subpackages\n- cluster : Clustering algorithms\n- constants : Physical and mathematical constants\n- fftpack : Fast Fourier Transform routines\n- integrate : Integration and ordinary differential equation solvers\n- interpolate : Interpolation and smoothing splines\n- io : Input and Output\n- linalg : Linear algebra\n- ndimage : N-dimensional image processing\n- odr : Orthogonal distance regression\n- optimize : Optimization and root-finding routines\n- signal : Signal processing\n- sparse : Sparse matrices and associated routines\n- spatial : Spatial data structures and algorithms\n- special : Special functions\n- stats : Statistical distributions and functions\n\nWe cannot cover all of them in detail but we will go through some of the packages and their capabilities today\n\n- interpolate\n- optimize\n- stats\n- integrate\n\nWe will also briefly look at some other useful packages\n- networkx\n- sympy\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n```\n\n## Interpolation : `scipy.interpolate`\n\n\n```python\nimport scipy.interpolate as interp\n```\n\n\n```python\nx = np.linspace(-1,2,5);\ny = x**2\nplt.plot(x,y,'ro')\n```\n\n\n```python\nf = interp.interp1d(x,y,kind=\"linear\")\n```\n\n\n```python\ntype(f)\n```\n\n\n```python\nx_fine = np.linspace(-1,2,100)\nplt.plot(x_fine,f(x_fine))\nplt.plot(x,y,'ro')\n```\n\n\n```python\nplt.plot(x_fine,interp.interp1d(x,y,kind=\"zero\")(x_fine))\nplt.plot(x_fine,interp.interp1d(x,y,kind=\"linear\")(x_fine))\nplt.plot(x_fine,interp.interp1d(x,y,kind=\"cubic\")(x_fine))\nplt.plot(x,y,'ro')\n```\n\n\n```python\ninterp.interp1d?\n```\n\n\n```python\ninterp.interp2d?\n```\n\n## Optimization : `scipy.optimize`\n\nContains functions to find minima, roots and fit parameters \n\n\n```python\nfrom scipy import optimize\n```\n\n\n```python\ndef f(x):\n return x**2 + np.sin(2*x)\n```\n\n\n```python\nx = np.linspace(-5,5,100)\nplt.plot(x,f(x));\n```\n\n\n```python\nresults = optimize.minimize(f, -4)\nresults\n```\n\n\n```python\nx_opt = results.x\n```\n\n\n```python\nplt.plot(x,f(x));\nplt.plot(x_opt,f(x_opt),'ro');\n```\n\n\n```python\noptimize.minimize?\n```\n\n\n```python\ndef f(x):\n return x[0]*x[0] + x[1]*x[1] + 5*(np.sin(2*x[0]) + np.sin(2*x[1]) )\n```\n\n\n```python\nx=np.linspace(-5,5,100)\ny=np.linspace(-5,5,100)\nX,Y = np.meshgrid(x,y)\n```\n\n\n```python\nplt.imshow(f((X,Y)))\n```\n\n\n```python\noptimize.minimize(f,x0=[2,2])\n```\n\nYou can use the function `basinhopping` to find the global minima\n\n\n```python\noptimize.basinhopping(f,[1,4])\n```\n\n\n```python\noptimize.basinhopping?\n```\n\n## Curve Fitting\n\n\n```python\nx = np.linspace(-2,2,30)\ny = x+np.sin(5.2*x)+0.3*np.random.randn(30)\nplt.plot(x,y,'ro')\n```\n\n\n```python\ndef f(x,a,b,c):\n return a*x + b*np.sin(c*x)\n```\n\n\n```python\n((a,b,c),cov) = optimize.curve_fit(f,x,y,(0,0,4))\na,b,c\n```\n\n\n```python\nx_fine = np.linspace(-2,2,200)\nplt.plot(x_fine,f(x_fine,a,b,c))\nplt.plot(x,y,'ro')\n```\n\n### Root Finding\n\n\n```python\ndef f(x):\n return (x+2)*(x-1)*(x-5)\n```\n\n\n```python\noptimize.root(f,0)\n```\n\n## 
Statistics : `scipy.stats`\n\n\n```python\nfrom scipy import stats\n```\n\nFind the maximum likelihood estimate for parameters\n\n\n```python\nsamples = 3*np.random.randn(1000)+2\nplt.hist(samples);\n```\n\n\n```python\nstats.norm.fit(samples)\n```\n\n\n```python\nnp.mean(samples),np.median(samples)\n```\n\n\n```python\nstats.scoreatpercentile(samples,20)\n```\n\n\n```python\na = np.random.randn(30)\nb = np.random.randn(30) + 0.1\n```\n\n\n```python\nstats.ttest_ind(a,b)\n```\n\nYou can also perform kernel density estimation\n\n\n```python\nx = np.concatenate(( 2*np.random.randn(1000)+5, 0.6*np.random.randn(1000)-1) )\n```\n\n\n```python\nplt.hist(x);\n```\n\n\n```python\npdf = stats.kde.gaussian_kde(x)\n```\n\n\n```python\ncounts,bins,_ = plt.hist(x)\nx_fine=np.linspace(-2,10,100)\nplt.plot(x_fine,np.sum(counts)*pdf(x_fine))\n```\n\n\n```python\nbins\n```\n\n## Numerical Integration : `scipy.integrate`\n\n\n```python\nimport scipy.integrate as integ\n```\n\nYou can compute integral using the `quad` funtion\n\n\n```python\ndef f(x):\n return x**2 + 5*x + np.sin(x)\n```\n\n\n```python\ninteg.quad(f,-1,1)\n```\n\n\n```python\ninteg.quad?\n```\n\nYou can also solve ODEs of the form\n$$ \\frac{dy}{dt} = f(y,t) $$\n\n\n```python\ndef f(y,t):\n return (y[1], -y[1]-9*y[0])\n```\n\n\n```python\nt = np.linspace(0,10,100)\nY = integ.odeint(f,[1,1],t)\n```\n\n\n```python\nplt.plot(t,Y[:,1])\n```\n\n# Other useful packages\n\n## `networkx`\nUseful Package to handle graphs.\n\nInstall by running `conda install networkx`\n\n\n```python\nimport networkx as nx\n```\n\n\n```python\nG = nx.Graph()\nG.add_nodes_from([1,2,3,4])\nG.add_edge(1,2)\nG.add_edge(2,3)\nG.add_edge(3,1)\nG.add_edge(3,4)\n```\n\n\n```python\nnx.draw(G)\n```\n\n\n```python\nG = nx.complete_graph(10)\nnx.draw(G)\n```\n\n## `sympy`\n\nPackage for performing symbolic computation and manipulation.\n\nInstall it in your environment by running `conda install sympy`\n\n\n```python\nfrom sympy import *\n```\n\n\n```python\nx,y = symbols(\"x y\")\n```\n\n\n```python\nexpr = x+y**2\n```\n\n\n```python\nx*expr\n```\n\n\n```python\nexpand(x*expr)\n```\n\n\n```python\nfactor(x**2 -2*x*y + y**2)\n```\n\n\n```python\nlatex(expr)\n```\n\n\n```python\ninit_printing()\n```\n\n\n```python\nsimplify( (x-y)**2 + (x+y)**2)\n```\n\n\n```python\nx**2/(y**3+y)\n```\n\n\n```python\n(x**2/(y**3+y)).subs(y,1/(1+x))\n```\n\n\n```python\n(x**2/(y**3+y)).evalf(subs={'x':2, 'y':4})\n```\n\n\n```python\nIntegral(exp(-x**2 - y**2), (x, -oo, oo), (y, -oo, oo))\n```\n\n\n```python\nI = Integral(exp(-x**2 - y**2), (x, -oo, oo), (y, -oo, oo))\n```\n\n\n```python\nI.doit()\n```\n\n\n```python\n(sin(x)/(1+cos(x)))\n```\n\n\n```python\n(sin(x)/(1+cos(x))).series(x,0,10)\n```\n\n\n```python\n\n```\n\n## Exercises\nThe following exercises requires the combined usage of the packages we learnt today. \n\n1. Generate 10 random polynomials of order 5\n - Numerically and analytically integrate them from 0 to 1 and compare the answers.\n - Compute one minima for each polynomial and show that the analytically computed derivative is 0 at the minima\n - Randomly sample the polynomials in the range from 0 to 1, and see if you can recover the original coefficents by trying to fit a 5th order polynomial to the samples.\n2. Read and learn about [Erdos-Renyi Random Graphs](https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93R%C3%A9nyi_model). 
See if you can numerically verify some of the properties mentioned in the wiki, such as for what parameter values is the graph most likely connected.\n\n\n```python\n\n```\n", "meta": {"hexsha": "448406e856ed57f32bf93b6e819e75b9b4684ca3", "size": 16586, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "nb/2019_winter/Lecture_4.ipynb", "max_stars_repo_name": "icme/cme193", "max_stars_repo_head_hexsha": "3ed008f6e0951b80faf1d77c9542ae0dd925691d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 15, "max_stars_repo_stars_event_min_datetime": "2016-02-17T06:03:51.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-30T03:47:27.000Z", "max_issues_repo_path": "nb/2019_winter/Lecture_4.ipynb", "max_issues_repo_name": "icme/cme193", "max_issues_repo_head_hexsha": "3ed008f6e0951b80faf1d77c9542ae0dd925691d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "nb/2019_winter/Lecture_4.ipynb", "max_forks_repo_name": "icme/cme193", "max_forks_repo_head_hexsha": "3ed008f6e0951b80faf1d77c9542ae0dd925691d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 33, "max_forks_repo_forks_event_min_datetime": "2016-01-19T18:23:46.000Z", "max_forks_repo_forks_event_max_datetime": "2020-12-23T03:08:23.000Z", "avg_line_length": 19.5589622642, "max_line_length": 275, "alphanum_fraction": 0.5053056795, "converted": true, "num_tokens": 1995, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.905989822921759, "lm_q2_score": 0.912436161072216, "lm_q1q2_score": 0.8266578759972266}} {"text": "# Intro to neural net training with autograd\n\nIn this notebook, we'll practice\n\n* using the **autograd** Python package to compute gradients\n* using gradient descent to train a basic linear regression (a NN with 0 hidden layers)\n* using gradient descent to train a basic neural network for regression (NN with 1+ hidden layers)\n\n\n### Requirements:\n\nStandard `comp135_env`, PLUS the `autograd` package: https://github.com/HIPS/autograd\n\nTo install autograd, first activate your `comp135_env`, and then do:\n```\npip install autograd\n```\n\n### Outline\n\n* Part 1: Autograd for scalar input -> scalar output functions\n* Part 2: Autograd for vector input -> scalar output functions\n* Part 3: Using autograd inside a simple gradient descent procedure\n* Part 4: Using autograd to solve linear regression\n\n\n```python\nimport pickle\nimport copy\nimport time\n```\n\n\n```python\n## Import plotting tools\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n```\n\n\n```python\n## Import numpy\nimport numpy as np\nimport pandas as pd\n```\n\n\n```python\n## Import autograd\nimport autograd.numpy as ag_np\nimport autograd\n```\n\n# PART 1: Using autograd's 'grad' function on univariate functions\n\nSuppose we have a mathematical function of interest $f(x)$. For now, we'll work with functions that have a scalar input and scalar output. 
\n\nThen we can of course ask: what is the derivative (aka *gradient*) of this function:\n\n$$\ng(x) \\triangleq \\frac{\\partial}{\\partial x} f(x)\n$$\n\nInstead of computing this gradient by hand via calculus/algebra, we can use autograd to do it for us.\n\nFirst, we need to implement the math function $f(x)$ as a **Python function** `f`.\n\n\nThe Python function `f` needs to satisfy the following requirements:\n* INPUT 'x': scalar float\n* OUTPUT 'f(x)': scalar float\n* All internal operations are composed of calls to functions from `ag_np`, the `autograd` version of numpy\n\n**Important:**\n* You might be used to importing numpy as `import numpy as np`, and then using this shorthand for `np.cos(0.0)` or `np.square(5.0)` etc.\n* For autograd to work, you need to instead use **autograd's** provided numpy wrapper interface: `from autograd.numpy as ag_np`\n* The `ag_np` module has the same API as `numpy`, so you can call `ag_np.cos(0.0)`, `ag_np.square(5.0)`, etc.\n\nNow, if `f` meeds the above requirements, we can create a Python function `g` to compute derivatives of $f(x)$ by calling `autograd.grad`:\n\n```\ng = autograd.grad(f)\n```\n\nThe symbol `g` is now a **Python function** that takes the same input as `f`, but produces the derivative at a given input.\n\n\n\n\n```python\ndef f(x):\n return ag_np.square(x)\n\ng = autograd.grad(f)\n```\n\n\n```python\nf(4.0)\n```\n\n\n\n\n 16.0\n\n\n\n\n```python\n# 'g' is just a function. You can call it as usual, by providing a possible scalar float input\n\ng(0.0)\n```\n\n\n\n\n 0.0\n\n\n\n\n```python\n[g(-1.0), g(1.0)]\n```\n\n\n\n\n [-2.0, 2.0]\n\n\n\n### Plot to demonstrate the gradient function side-by-side with original function\n\n\n```python\nx_grid_G = np.linspace(-10, 10, 100)\n\nfig_h, subplot_grid = plt.subplots(nrows=1, ncols=2, sharex=True, sharey=True, squeeze=False)\nsubplot_grid[0,0].plot(x_grid_G, [f(x_g) for x_g in x_grid_G], 'k.-')\nsubplot_grid[0,0].set_title('f(x) = x^2')\n\nsubplot_grid[0,1].plot(x_grid_G, [g(x_g) for x_g in x_grid_G], 'b.-')\nsubplot_grid[0,1].set_title('gradient of f(x)')\n\n```\n\n## Exercise 1a:\n\nConsider the decaying periodic function below. Can you compute its derivative using autograd and plot the result?\n\n$$\nf(x) = e^{-x/10} * cos(x)\n$$\n\n\n```python\ndef f(x):\n return 0.0 # TODO compute the function above, using 'ag_np'\n \ng = f # TODO define g as gradient of f, using autograd's `grad` \n\n# TODO plot the result\nx_grid_G = np.linspace(-10, 10, 500)\nfig_h, subplot_grid = plt.subplots(nrows=1, ncols=2, sharex=True, sharey=True, squeeze=False)\nsubplot_grid[0,0].plot(x_grid_G, [f(x_g) for x_g in x_grid_G], 'k.-');\nsubplot_grid[0,0].set_title('f(x) = x^2');\n\nsubplot_grid[0,1].plot(x_grid_G, [g(x_g) for x_g in x_grid_G], 'b.-');\nsubplot_grid[0,1].set_title('gradient of f(x)');\n```\n\n# PART 2: Using autograd's 'grad' function on functions with multivariate input\n\n\nNow, imagine the input $x$ could be a vector of size D. 
\n\nOur mathematical function $f(x)$ will map each input vector to a scalar.\n\nWe want the gradient function\n\n\\begin{align}\ng(x) &\\triangleq \\nabla_x f(x)\n\\\\\n&= [\n \\frac{\\partial}{\\partial x_1} f(x)\n \\quad \\frac{\\partial}{\\partial x_2} f(x)\n \\quad \\ldots \\quad \\frac{\\partial}{\\partial x_D} f(x) ]\n\\end{align}\n\nInstead of computing this gradient by hand via calculus/algebra, we can use autograd to do it for us.\n\nFirst, we implement math function $f(x)$ as a **Python function** `f`.\n\nThe Python function `f` needs to satisfy the following requirements:\n* INPUT 'x': numpy array of float\n* OUTPUT 'f(x)': scalar float\n* All internal operations are composed of calls to functions from `ag_np`, the `autograd` version of numpy\n\n\n\n```python\ndef f(x_D):\n return ag_np.sum(ag_np.square(x_D))\n\ng = autograd.grad(f)\n```\n\n\n```python\nx_D = np.zeros(4)\nprint(x_D)\nprint(f(x_D))\nprint(g(x_D))\n```\n\n [0. 0. 0. 0.]\n 0.0\n [0. 0. 0. 0.]\n\n\n\n```python\nx_D = np.asarray([1., 2., 3., 4.])\nprint(x_D)\nprint(f(x_D))\nprint(g(x_D))\n```\n\n [1. 2. 3. 4.]\n 30.0\n [2. 4. 6. 8.]\n\n\n# Part 3: Using autograd gradients within gradient descent to solve multivariate optimization problems \n\n### Helper function: basic gradient descent\n\nHere's a very simple function that will perform many gradient descent steps to optimize a given function.\n\n\n\n```python\ndef run_many_iters_of_gradient_descent(f, g, init_x_D=None, n_iters=100, step_size=0.001):\n\n # Copy the initial parameter vector\n x_D = copy.deepcopy(init_x_D)\n\n # Create data structs to track the per-iteration history of different quantities\n history = dict(\n iter=[],\n f=[],\n x_D=[],\n g_D=[])\n\n for iter_id in range(n_iters):\n if iter_id > 0:\n x_D = x_D - step_size * g(x_D)\n\n history['iter'].append(iter_id)\n history['f'].append(f(x_D))\n history['x_D'].append(x_D)\n history['g_D'].append(g(x_D))\n return x_D, history\n```\n\n### Worked Example 3a: Minimize f(x) = sum(square(x))\n\nIt's easy to figure out that the vector with smallest L2 norm (smallest sum of squares) is the all-zero vector.\n\nHere's a quick example of showing that using gradient functions provided by autograd can help us solve the optimization problem:\n\n$$\n\\min_x \\sum_{d=1}^D x_d^2\n$$\n\n\n```python\ndef f(x_D):\n return ag_np.sum(ag_np.square(x_D))\n\ng = autograd.grad(f)\n\n# Initialize at x_D = [-3, 4, -5, 6]\ninit_x_D = np.asarray([-3.0, 4.0, -5.0, 6.0])\n```\n\n\n```python\nopt_x_D, history = run_many_iters_of_gradient_descent(f, g, init_x_D, n_iters=1000, step_size=0.01)\n```\n\n\n```python\n# Make plots of how x parameter values evolve over iterations, and function values evolve over iterations\n# Expected result: f goes to zero. 
all x values goto zero.\n\nfig_h, subplot_grid = plt.subplots(\n nrows=1, ncols=2, sharex=True, sharey=False, figsize=(15,3), squeeze=False)\nsubplot_grid[0,0].plot(history['iter'], history['x_D'])\nsubplot_grid[0,0].set_xlabel('iters')\nsubplot_grid[0,0].set_ylabel('x_d')\n\nsubplot_grid[0,1].plot(history['iter'], history['f'])\nsubplot_grid[0,1].set_xlabel('iters')\nsubplot_grid[0,1].set_ylabel('f(x)');\n```\n\n### Try it Example 3b: Minimize the 'trid' function\n\nGiven a 2-dimensional vector $x = [x_1, x_2]$, the trid function is:\n\n$$\nf(x) = (x_1-1)^2 + (x_2-1)^2 - x_1 x_2\n$$\n\nBackground and Picture: \n\nCan you use autograd + gradient descent to find the optimal value $x^*$ that minimizes $f(x)$?\n\nYou can initialize your gradient descent at [+1.0, -1.0]\n\n\n```python\ndef f(x_D):\n return 0.0 # TODO\n\ng = f # TODO\n```\n\n\n```python\n# TODO call run_many_iters_of_gradient_descent() with appropriate args\n```\n\n\n```python\n# TRID example\n# Make plots of how x parameter values evolve over iterations, and function values evolve over iterations\n# Expected result: ????\n\nfig_h, subplot_grid = plt.subplots(\n nrows=1, ncols=2, sharex=True, sharey=False, figsize=(15,3), squeeze=False)\nsubplot_grid[0,0].plot(history['iter'], history['x_D'])\nsubplot_grid[0,0].set_xlabel('iters')\nsubplot_grid[0,0].set_ylabel('x_d')\n\nsubplot_grid[0,1].plot(history['iter'], history['f'])\nsubplot_grid[0,1].set_xlabel('iters')\nsubplot_grid[0,1].set_ylabel('f(x)');\n```\n\n# Part 4: Solving linear regression with gradient descent + autograd\n\nWe observe $N$ examples $(x_i, y_i)$ consisting of D-dimensional 'input' vectors $x_i$ and scalar outputs $y_i$.\n\nConsider the multivariate linear regression model:\n\n\\begin{align}\ny_i &\\sim \\mathcal{N}(w^T x_i, \\sigma^2), \\forall i \\in 1, 2, \\ldots N\n\\end{align}\nwhere we assume $\\sigma = 0.1$.\n\nOne way to train weights would be to just compute the maximum likelihood solution:\n\n\\begin{align}\n\\min_w - \\log p(y | w, x)\n\\end{align}\n\n\n## Toy Data for linear regression task\n\nWe'll generate data that comes from an idealized linear regression model.\n\nEach example has D=2 dimensions for x.\n\nThe first dimension is weighted by +4.2.\nThe second dimension is weighted by -4.2\n\n\n\n```python\nN = 100\nD = 2\nsigma = 0.1\n\ntrue_w_D = np.asarray([4.2, -4.2])\ntrue_bias = 0.1\n\ntrain_prng = np.random.RandomState(0)\nx_ND = train_prng.uniform(low=-5, high=5, size=(N,D))\ny_N = np.dot(x_ND, true_w_D) + true_bias + sigma * train_prng.randn(N)\n```\n\n## Toy Data Visualization: Pairplots for all possible (x_d, y) combinations\n\nYou can clearly see the slopes of the lines:\n* x1 vs y plot: slope is around +4\n* x2 vs y plot: slope is around -4\n\n\n```python\nsns.pairplot(\n data=pd.DataFrame(np.hstack([x_ND, y_N[:,np.newaxis]]), columns=['x1', 'x2', 'y']));\n```\n\n\n```python\n# Define the optimization problem as an AUTOGRAD-able function wrt the weights w_D\ndef calc_neg_likelihood_linreg(w_D):\n return 0.5 / ag_np.square(sigma) * ag_np.sum(ag_np.square(ag_np.dot(x_ND, w_D) - y_N))\n```\n\n\n```python\n## Test the function at an easy initial point\ninit_w_D = np.zeros(2)\ncalc_neg_likelihood_linreg(init_w_D)\n```\n\n\n\n\n 1521585.0576643152\n\n\n\n\n```python\n## Test the gradient at that easy point \ncalc_grad_wrt_w = autograd.grad(calc_neg_likelihood_linreg)\ncalc_grad_wrt_w(init_w_D)\n```\n\n\n\n\n array([-357441.84423006, 367223.20042115])\n\n\n\n\n```python\n# Because the gradient's magnitude is very large, use very small step 
size\nopt_w_D, history = run_many_iters_of_gradient_descent(\n calc_neg_likelihood_linreg, autograd.grad(calc_neg_likelihood_linreg), init_w_D,\n n_iters=300, step_size=0.000001,\n )\n```\n\n\n```python\n# LinReg worked example\n# Make plots of how w_D parameter values evolve over iterations, and function values evolve over iterations\n# Expected result: x\n\nfig_h, subplot_grid = plt.subplots(\n nrows=1, ncols=2, sharex=True, sharey=False, figsize=(15,3), squeeze=False)\nsubplot_grid[0,0].plot(history['iter'], history['x_D'])\nsubplot_grid[0,0].set_xlabel('iters')\nsubplot_grid[0,0].set_ylabel('w_d')\n\nsubplot_grid[0,1].plot(history['iter'], history['f'])\nsubplot_grid[0,1].set_xlabel('iters')\nsubplot_grid[0,1].set_ylabel('-1 * log p(y | w, x)');\n```\n\n## Try it Example 4b: Solve the linear regression problem using a weights-and-bias representation\n\nThe above example only uses weights on the dimensions of $x_i$, and thus can only learn linear models that pass through the origin.\n\nCan you instead optimize a model that includes a **bias** term $b>0$?\n\n\\begin{align}\ny_i &\\sim \\mathcal{N}(w^T x_i + b, \\sigma^2), \\forall i \\in 1, 2, \\ldots N\n\\end{align}\nwhere we assume $\\sigma = 0.1$.\n\nOne non-Bayesian way to train weights would be to just compute the maximum likelihood solution:\n\n\\begin{align}\n\\min_{w,b} - \\log p(y | w, b, x)\n\\end{align}\n\n\nAn easy way to do this is to imagine that each observation vector $x_i$ is expanded into a $\\tilde{x}_i$ that contains a column of all ones. Then, we can write the corresponding expanded weights as $\\tilde{w} = [w_1 w_2 b]$.\n\n\n\\begin{align}\n\\min_{\\tilde{w}} - \\log p(y | \\tilde{w},\\tilde{x})\n\\end{align}\n\n\n\n```python\n# Now, each expanded xtilde vector has size E = D+1 = 3\n\nxtilde_NE = np.hstack([x_ND, np.ones((N,1))])\n```\n\n\n```python\n# TODO: Define f to minimize that takes a COMBINED weights-and-bias vector wtilde_E of size E=3\n```\n\n\n```python\n# TODO: Compute gradient of f\n```\n\n\n```python\n# TODO run gradient descent and plot the results\n```\n\n# Part 5 setup: Autograd for functions of data structures of arrays\n\n#### Useful Fact: autograd can take derivatives with respect to DATA STRUCTURES of parameters\n\nThis can help us when it is natural to define models in terms of several parts (e.g. NN layers).\n\nWe don't need to turn our many model parameters into one giant weights-and-biases vector. 
We can express our thoughts more naturally.\n\n### Demo 1: gradient of a LIST of parameters\n\n\n```python\ndef f(w_list_of_arr):\n return ag_np.sum(ag_np.square(w_list_of_arr[0])) + ag_np.sum(ag_np.square(w_list_of_arr[1]))\n\ng = autograd.grad(f)\n```\n\n\n```python\nw_list_of_arr = [np.zeros(3), np.arange(5, dtype=np.float64)]\n\nprint(\"Type of the gradient is: \")\nprint(type(g(w_list_of_arr)))\n\nprint(\"Result of the gradient is: \")\ng(w_list_of_arr)\n```\n\n Type of the gradient is: \n \n Result of the gradient is: \n\n\n\n\n\n [array([0., 0., 0.]), array([0., 2., 4., 6., 8.])]\n\n\n\n### Demo 2: gradient of DICT of parameters\n\n\n\n```python\ndef f(dict_of_arr):\n return ag_np.sum(ag_np.square(dict_of_arr['weights'])) + ag_np.sum(ag_np.square(dict_of_arr['bias']))\ng = autograd.grad(f)\n```\n\n\n```python\ndict_of_arr = dict(weights=np.arange(5, dtype=np.float64), bias=4.2)\n\nprint(\"Type of the gradient is: \")\nprint(type(g(dict_of_arr)))\n\nprint(\"Result of the gradient is: \")\ng(dict_of_arr)\n```\n\n Type of the gradient is: \n \n Result of the gradient is: \n\n\n\n\n\n {'weights': array([0., 2., 4., 6., 8.]), 'bias': array(8.4)}\n\n\n\n# Part 5: Neural Networks and Autograd\n\n### Let's use a convenient data structure for NN model parameters\n\nUse a list of dicts of arrays.\n\nEach entry in the list is a dict that represents the parameters of one \"layer\".\n\nEach layer-specific dict has two named attributes: a vector of weights 'w' and a vector of biases 'b'\n\n#### Here's a function to create NN params as a 'list-of-dicts' that match a provided set of dimensions\n\n\n```python\ndef make_nn_params_as_list_of_dicts(\n n_hiddens_per_layer_list=[5],\n n_dims_input=1,\n n_dims_output=1,\n weight_fill_func=np.zeros,\n bias_fill_func=np.zeros):\n nn_param_list = []\n n_hiddens_per_layer_list = [n_dims_input] + n_hiddens_per_layer_list + [n_dims_output]\n\n # Given full network size list is [a, b, c, d, e]\n # For loop should loop over (a,b) , (b,c) , (c,d) , (d,e)\n for n_in, n_out in zip(n_hiddens_per_layer_list[:-1], n_hiddens_per_layer_list[1:]):\n nn_param_list.append(\n dict(\n w=weight_fill_func((n_in, n_out)),\n b=bias_fill_func((n_out,)),\n ))\n return nn_param_list\n```\n\n#### Here's a function to pretty-print any given set of NN parameters to stdout, so we can inspect\n\n\n```python\ndef pretty_print_nn_param_list(nn_param_list_of_dict):\n \"\"\" Create pretty display of the parameters at each layer\n \"\"\"\n for ll, layer_dict in enumerate(nn_param_list_of_dict):\n print(\"Layer %d\" % ll)\n print(\" w | size %9s | %s\" % (layer_dict['w'].shape, layer_dict['w'].flatten()))\n print(\" b | size %9s | %s\" % (layer_dict['b'].shape, layer_dict['b'].flatten()))\n```\n\n## Example: NN with 0 hidden layers (equivalent to linear regression)\n\nFor univariate regression: 1D -> 1D\n\nWill fill all parameters with zeros by default\n\n\n```python\nnn_params = make_nn_params_as_list_of_dicts(n_hiddens_per_layer_list=[], n_dims_input=1, n_dims_output=1)\npretty_print_nn_param_list(nn_params)\n```\n\n Layer 0\n w | size (1, 1) | [0.]\n b | size (1,) | [0.]\n\n\n## Example: NN with 0 hidden layers (equivalent to linear regression)\n\nFor multivariate regression when |x_i| = 2: 2D -> 1D\n\nWill fill all parameters with zeros by default\n\n\n```python\nnn_params = make_nn_params_as_list_of_dicts(n_hiddens_per_layer_list=[], n_dims_input=2, n_dims_output=1)\npretty_print_nn_param_list(nn_params)\n```\n\n Layer 0\n w | size (2, 1) | [0. 
0.]\n b | size (1,) | [0.]\n\n\n## Example: NN with 1 hidden layer of 3 hidden units\n\n\n```python\nnn_params = make_nn_params_as_list_of_dicts(n_hiddens_per_layer_list=[3], n_dims_input=2, n_dims_output=1)\npretty_print_nn_param_list(nn_params)\n```\n\n Layer 0\n w | size (2, 3) | [0. 0. 0. 0. 0. 0.]\n b | size (3,) | [0. 0. 0.]\n Layer 1\n w | size (3, 1) | [0. 0. 0.]\n b | size (1,) | [0.]\n\n\n## Example: NN with 1 hidden layer of 3 hidden units\n\nUse 'ones' as the fill function for weights\n\n\n```python\nnn_params = make_nn_params_as_list_of_dicts(\n n_hiddens_per_layer_list=[3], n_dims_input=2, n_dims_output=1,\n weight_fill_func=np.ones)\npretty_print_nn_param_list(nn_params)\n```\n\n Layer 0\n w | size (2, 3) | [1. 1. 1. 1. 1. 1.]\n b | size (3,) | [0. 0. 0.]\n Layer 1\n w | size (3, 1) | [1. 1. 1.]\n b | size (1,) | [0.]\n\n\n## Example: NN with 1 hidden layer of 3 hidden units\n\nUse random draws from standard normal as the fill function for weights\n\n\n```python\nnn_params = make_nn_params_as_list_of_dicts(\n n_hiddens_per_layer_list=[3], n_dims_input=2, n_dims_output=1,\n weight_fill_func=lambda size_tuple: np.random.randn(*size_tuple))\npretty_print_nn_param_list(nn_params)\n```\n\n Layer 0\n w | size (2, 3) | [ 1.24823477 -0.70553662 -0.13712655 0.23659527 -1.72792202 -1.66701658]\n b | size (3,) | [0. 0. 0.]\n Layer 1\n w | size (3, 1) | [ 0.23254128 -1.57423719 -0.26868047]\n b | size (1,) | [0.]\n\n\n## Example: NN with 7 hidden layers of diff sizes\n\nJust shows how generic this framework is!\n\n\n```python\nnn_params = make_nn_params_as_list_of_dicts(\n n_hiddens_per_layer_list=[3, 4, 5, 6, 5, 4, 3], n_dims_input=2, n_dims_output=1)\npretty_print_nn_param_list(nn_params)\n```\n\n Layer 0\n w | size (2, 3) | [0. 0. 0. 0. 0. 0.]\n b | size (3,) | [0. 0. 0.]\n Layer 1\n w | size (3, 4) | [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n b | size (4,) | [0. 0. 0. 0.]\n Layer 2\n w | size (4, 5) | [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n b | size (5,) | [0. 0. 0. 0. 0.]\n Layer 3\n w | size (5, 6) | [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0.]\n b | size (6,) | [0. 0. 0. 0. 0. 0.]\n Layer 4\n w | size (6, 5) | [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0.]\n b | size (5,) | [0. 0. 0. 0. 0.]\n Layer 5\n w | size (5, 4) | [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n b | size (4,) | [0. 0. 0. 0.]\n Layer 6\n w | size (4, 3) | [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n b | size (3,) | [0. 0. 0.]\n Layer 7\n w | size (3, 1) | [0. 0. 
0.]\n b | size (1,) | [0.]\n\n\n## Setup: Function that performs **prediction**\n\n\n```python\ndef predict_y_given_x_with_NN(x=None, nn_param_list=None, activation_func=ag_np.tanh):\n \"\"\" Predict y value given x value via feed-forward neural net\n \n Args\n ----\n x : array_like, n_examples x n_input_dims\n \n Returns\n -------\n y : array_like, n_examples\n \"\"\"\n for layer_id, layer_dict in enumerate(nn_param_list):\n if layer_id == 0:\n if x.ndim > 1:\n in_arr = x\n else:\n if x.size == nn_param_list[0]['w'].shape[0]:\n in_arr = x[ag_np.newaxis,:]\n else:\n in_arr = x[:,ag_np.newaxis] \n else:\n in_arr = activation_func(out_arr)\n out_arr = ag_np.dot(in_arr, layer_dict['w']) + layer_dict['b']\n return ag_np.squeeze(out_arr)\n```\n\n### Example: Make predictions with 0-layer NN whose parameters are filled with the 'true' params for our toy dataset\n\n\n```python\ntrue_nn_params = make_nn_params_as_list_of_dicts(n_hiddens_per_layer_list=[], n_dims_input=2, n_dims_output=1)\ntrue_nn_params[0]['w'][:] = true_w_D[:,np.newaxis]\ntrue_nn_params[0]['b'][:] = true_bias\n```\n\n\n```python\nyhat_N = predict_y_given_x_with_NN(x_ND, true_nn_params)\nassert yhat_N.size == N\n\nplt.plot(yhat_N, y_N, 'k.')\nplt.xlabel('true y')\nplt.ylabel('predicted y|x')\n```\n\n### Example: Make predictions with 0-layer NN whose parameters are filled with all zeros\n\n\n```python\nzero_nn_params = make_nn_params_as_list_of_dicts(n_hiddens_per_layer_list=[], n_dims_input=2, n_dims_output=1)\nyhat_N = predict_y_given_x_with_NN(x_ND, zero_nn_params)\nassert yhat_N.size == N\n\nplt.plot(yhat_N, y_N, 'k.')\nplt.xlabel('true y')\nplt.ylabel('predicted y|x')\n```\n\n## Setup: Gradient descent implementation that can use list-of-dict parameters (not just arrays)\n\n\n```python\ndef run_many_iters_of_gradient_descent_with_list_of_dict(f, g, init_x_list_of_dict=None, n_iters=100, step_size=0.001):\n\n # Copy the initial parameter vector\n x_list_of_dict = copy.deepcopy(init_x_list_of_dict)\n\n # Create data structs to track the per-iteration history of different quantities\n history = dict(\n iter=[],\n f=[],\n x=[],\n g=[])\n start_time = time.time()\n for iter_id in range(n_iters):\n if iter_id > 0:\n # Gradient is a list of layer-specific dicts\n grad_list_of_dict = g(x_list_of_dict)\n for layer_id, x_layer_dict in enumerate(x_list_of_dict):\n for key in x_layer_dict.keys():\n x_layer_dict[key] = x_layer_dict[key] - step_size * grad_list_of_dict[layer_id][key]\n \n fval = f(x_list_of_dict)\n history['iter'].append(iter_id)\n history['f'].append(fval)\n history['x'].append(copy.deepcopy(x_list_of_dict))\n history['g'].append(g(x_list_of_dict))\n\n if iter_id < 3 or (iter_id+1) % 50 == 0:\n print(\"completed iter %5d/%d after %7.1f sec | loss %.6e\" % (\n iter_id+1, n_iters, time.time()-start_time, fval))\n return x_list_of_dict, history\n```\n\n# Worked Exercise 5a: Train 0-layer NN via gradient descent on LINEAR toy data\n\n\n```python\ndef nn_regression_loss_function(nn_params):\n yhat_N = predict_y_given_x_with_NN(x_ND, nn_params)\n return 0.5 / ag_np.square(sigma) * ag_np.sum(np.square(y_N - yhat_N))\n```\n\n\n```python\nfromtrue_opt_nn_params, fromtrue_history = run_many_iters_of_gradient_descent_with_list_of_dict(\n nn_regression_loss_function,\n autograd.grad(nn_regression_loss_function),\n true_nn_params,\n n_iters=100,\n step_size=0.000001)\n```\n\n completed iter 1/100 after 0.0 sec | loss 4.343353e+01\n completed iter 2/100 after 0.1 sec | loss 4.330311e+01\n completed iter 3/100 after 0.1 sec | loss 
4.319213e+01\n completed iter 50/100 after 2.7 sec | loss 4.242465e+01\n completed iter 100/100 after 5.2 sec | loss 4.234312e+01\n\n\n\n```python\npretty_print_nn_param_list(fromtrue_opt_nn_params)\n```\n\n Layer 0\n w | size (2, 1) | [ 4.19568065 -4.19965201]\n b | size (1,) | [0.09469614]\n\n\n\n```python\nplt.plot(fromtrue_history['iter'], fromtrue_history['f'], 'k.-')\n```\n\n\n```python\nfromzero_opt_nn_params, fromzero_history = run_many_iters_of_gradient_descent_with_list_of_dict(\n nn_regression_loss_function,\n autograd.grad(nn_regression_loss_function),\n zero_nn_params,\n n_iters=100,\n step_size=0.000001)\n```\n\n completed iter 1/100 after 0.0 sec | loss 1.521585e+06\n completed iter 2/100 after 0.1 sec | loss 1.270163e+06\n completed iter 3/100 after 0.2 sec | loss 1.060293e+06\n completed iter 50/100 after 2.8 sec | loss 2.686671e+02\n completed iter 100/100 after 5.4 sec | loss 4.457975e+01\n\n\n\n```python\npretty_print_nn_param_list(fromzero_opt_nn_params)\n```\n\n Layer 0\n w | size (2, 1) | [ 4.19465049 -4.19892163]\n b | size (1,) | [0.11288017]\n\n\n\n```python\nplt.plot(fromzero_history['iter'], fromzero_history['f'], 'k.-')\n```\n\n\n```python\n\n```\n\n# Create more complex non-linear toy dataset\n\nTrue method *regression from QUADRATIC features*:\n\n$$\ny \\sim \\text{Normal}( w_1 x_1 + w_2 x_2 + w_3 x_1^2 + w_4 x_2^2 + b, \\sigma^2)\n$$\n\n\n```python\nN = 300\nD = 2\nsigma = 0.1\n\nwsq_D = np.asarray([-2.0, 2.0])\nw_D = np.asarray([4.2, -4.2])\n\ntrain_prng = np.random.RandomState(0)\nx_ND = train_prng.uniform(low=-5, high=5, size=(N,D))\ny_N = (\n np.dot(np.square(x_ND), wsq_D)\n + np.dot(x_ND, w_D)\n + sigma * train_prng.randn(N))\n```\n\n\n```python\nsns.pairplot(\n data=pd.DataFrame(np.hstack([x_ND, y_N[:,np.newaxis]]), columns=['x1', 'x2', 'y']));\n```\n\n\n```python\ndef nonlinear_toy_nn_regression_loss_function(nn_params):\n yhat_N = predict_y_given_x_with_NN(x_ND, nn_params)\n return 0.5 / ag_np.square(sigma) * ag_np.sum(np.square(y_N - yhat_N))\n```\n\n\n```python\n# Initialize 1-layer, 10 hidden unit network with small random noise on weights\n\nH10_init_nn_params = make_nn_params_as_list_of_dicts(\n n_hiddens_per_layer_list=[10], n_dims_input=2, n_dims_output=1,\n weight_fill_func=lambda sz_tuple: 0.1 * np.random.randn(*sz_tuple))\n```\n\n\n```python\nH10_opt_nn_params, H10_history = run_many_iters_of_gradient_descent_with_list_of_dict(\n nonlinear_toy_nn_regression_loss_function,\n autograd.grad(nonlinear_toy_nn_regression_loss_function),\n H10_init_nn_params,\n n_iters=300,\n step_size=0.000001)\n```\n\n completed iter 1/300 after 0.1 sec | loss 1.195763e+07\n completed iter 2/300 after 0.3 sec | loss 1.165899e+07\n completed iter 3/300 after 0.5 sec | loss 1.113150e+07\n completed iter 50/300 after 9.5 sec | loss 3.516704e+06\n completed iter 100/300 after 18.8 sec | loss 1.907578e+06\n completed iter 150/300 after 29.2 sec | loss 1.964786e+06\n completed iter 200/300 after 39.2 sec | loss 1.749155e+06\n completed iter 250/300 after 51.1 sec | loss 1.575341e+06\n completed iter 300/300 after 61.6 sec | loss 1.659934e+06\n\n\n#### Plot objective function vs iters\n\n\n```python\nplt.plot(H10_history['iter'], H10_history['f'], 'k.-')\nplt.title('10 hidden units');\n```\n\n#### Plot predicted y vs. 
true y for each example as a scatterplot\n\n\n```python\nyhat_N = predict_y_given_x_with_NN(x_ND, H10_opt_nn_params)\n\nplt.plot(yhat_N, y_N, 'k.');\nplt.xlabel('predicted y|x');\nplt.ylabel('true y');\n```\n\n\n```python\n_, subplot_grid = plt.subplots(nrows=1, ncols=2, sharex=True, sharey=False, figsize=(10,3), squeeze=False)\nsubplot_grid[0,0].plot(x_ND[:,0], y_N, 'k.');\nsubplot_grid[0,0].plot(x_ND[:,0], yhat_N, 'b.')\nsubplot_grid[0,0].set_xlabel('x_0');\n\nsubplot_grid[0,1].plot(x_ND[:,1], y_N, 'k.');\nsubplot_grid[0,1].plot(x_ND[:,1], yhat_N, 'b.')\nsubplot_grid[0,1].set_xlabel('x_1');\n```\n\n## More units! Try 1 layer with H=30 hidden units\n\n\n```python\n# Initialize 1-layer, 30 hidden unit network with small random noise on weights\nH30_init_nn_params = make_nn_params_as_list_of_dicts(\n n_hiddens_per_layer_list=[30], n_dims_input=2, n_dims_output=1,\n weight_fill_func=lambda sz_tuple: 0.1 * np.random.randn(*sz_tuple))\n```\n\n\n```python\nH30_opt_nn_params, H30_history = run_many_iters_of_gradient_descent_with_list_of_dict(\n nonlinear_toy_nn_regression_loss_function,\n autograd.grad(nonlinear_toy_nn_regression_loss_function),\n H30_init_nn_params,\n n_iters=50,\n step_size=0.000001)\n```\n\n completed iter 1/50 after 0.1 sec | loss 1.184375e+07\n completed iter 2/50 after 0.2 sec | loss 1.087085e+07\n completed iter 3/50 after 0.4 sec | loss 9.449709e+06\n completed iter 50/50 after 6.7 sec | loss 2.035457e+06\n\n\n#### Plot objective function vs iterations\n\n\n```python\nplt.plot(H30_history['iter'], H30_history['f'], 'k.-');\nplt.title('30 hidden units');\n```\n\n#### Plot predicted y value vs true y value for each example\n\n\n```python\nyhat_N = predict_y_given_x_with_NN(x_ND, H30_opt_nn_params)\n\nplt.plot(yhat_N, y_N, 'k.');\nplt.xlabel('predicted y|x');\nplt.ylabel('true y');\n```\n\n\n```python\n_, subplot_grid = plt.subplots(nrows=1, ncols=2, sharex=True, sharey=False, figsize=(10,3), squeeze=False)\nsubplot_grid[0,0].plot(x_ND[:,0], y_N, 'k.');\nsubplot_grid[0,0].plot(x_ND[:,0], yhat_N, 'b.')\nsubplot_grid[0,0].set_xlabel('x_0');\n\nsubplot_grid[0,1].plot(x_ND[:,1], y_N, 'k.');\nsubplot_grid[0,1].plot(x_ND[:,1], yhat_N, 'b.')\nsubplot_grid[0,1].set_xlabel('x_1');\n```\n\n## Even more units! 
Try 1 layer with H=100 hidden units\n\n\n```python\n# Initialize 1-layer, 100 hidden unit network with small random noise on weights\nH100_init_nn_params = make_nn_params_as_list_of_dicts(\n    n_hiddens_per_layer_list=[100], n_dims_input=2, n_dims_output=1,\n    weight_fill_func=lambda sz_tuple: 0.05 * np.random.randn(*sz_tuple))\n```\n\n\n```python\nH100_opt_nn_params, H100_history = run_many_iters_of_gradient_descent_with_list_of_dict(\n    nonlinear_toy_nn_regression_loss_function,\n    autograd.grad(nonlinear_toy_nn_regression_loss_function),\n    H100_init_nn_params,\n    n_iters=30,\n    step_size=0.0000005)\n```\n\n    completed iter 1/30 after 0.1 sec | loss 1.194856e+07\n    completed iter 2/30 after 0.2 sec | loss 1.140156e+07\n    completed iter 3/30 after 0.3 sec | loss 1.064044e+07\n\n\n\n```python\nyhat_N = predict_y_given_x_with_NN(x_ND, H100_opt_nn_params)\n\nplt.plot(yhat_N, y_N, 'k.');\nplt.xlabel('predicted y|x');\nplt.ylabel('true y');\n```\n\n\n```python\n_, subplot_grid = plt.subplots(nrows=1, ncols=2, sharex=True, sharey=False, figsize=(10,3), squeeze=False)\nsubplot_grid[0,0].plot(x_ND[:,0], y_N, 'k.');\nsubplot_grid[0,0].plot(x_ND[:,0], yhat_N, 'b.')\nsubplot_grid[0,0].set_xlabel('x_0');\n\nsubplot_grid[0,1].plot(x_ND[:,1], y_N, 'k.');\nsubplot_grid[0,1].plot(x_ND[:,1], yhat_N, 'b.')\nsubplot_grid[0,1].set_xlabel('x_1');\n```\n\n# Try it yourself!\n\n* Can you train a prediction network on the non-linear toy data so it has ZERO training error? Is this even possible?\n\n* Can you make the network train faster? What happens if you play with the step_size?\n\n* What if you made the network **deeper** (more layers)?\n\n* What other dataset would you want to try out this regression on?\n\n\n```python\n\n```\n
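For the question above about making the network **deeper**, here is a minimal sketch of one possible starting point. It only reuses the helper functions already defined in this notebook (`make_nn_params_as_list_of_dicts`, `nonlinear_toy_nn_regression_loss_function`, `run_many_iters_of_gradient_descent_with_list_of_dict`, and `predict_y_given_x_with_NN`); the two-layer architecture, iteration count, and step size below are arbitrary choices you would want to tune, not a recommended recipe.

```python
# Sketch only: a 2-hidden-layer network on the non-linear toy data.
# All helpers come from earlier cells; the hyperparameters are guesses.
deep_init_nn_params = make_nn_params_as_list_of_dicts(
    n_hiddens_per_layer_list=[20, 20], n_dims_input=2, n_dims_output=1,
    weight_fill_func=lambda sz_tuple: 0.1 * np.random.randn(*sz_tuple))

deep_opt_nn_params, deep_history = run_many_iters_of_gradient_descent_with_list_of_dict(
    nonlinear_toy_nn_regression_loss_function,
    autograd.grad(nonlinear_toy_nn_regression_loss_function),
    deep_init_nn_params,
    n_iters=100,
    step_size=0.0000005)

# Compare predictions against the truth, as in the cells above.
yhat_N = predict_y_given_x_with_NN(x_ND, deep_opt_nn_params)
plt.plot(yhat_N, y_N, 'k.')
plt.xlabel('predicted y|x')
plt.ylabel('true y')
```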
YES", "lm_q1_score": 0.8976953003183443, "lm_q2_score": 0.9207896818685505, "lm_q1q2_score": 0.8265885699950212}} {"text": "First off we need to check if all necessary libraries are installed\n\n\n```python\n#verify that scipy is installed\n!pip install scipy\n!pip install sympy\n#verify that numpy is installed\n!pip install numpy\n#verify that matplotlib is installed\n!pip install matplotlib\n```\n\nSuggestions for lab exercises.\n\n# Variables and assignment\n\n## Exercise 1\n\nRemember that $n! = n \\times (n - 1) \\times \\dots \\times 2 \\times 1$. Compute $15!$, assigning the result to a sensible variable name.\n\n#### Solution\n\n\n```python\nfifteen_factorial = 15*14*13*12*11*10*9*8*7*6*5*4*3*2*1\nprint(fifteen_factorial)\n```\n\n 1307674368000\n\n\n## Exercise 2\n\nUsing the `math` module, check your result for $15$ factorial. You should explore the help for the `math` library and its functions, using eg tab-completion, the spyder inspector, or online sources.\n\n#### Solution\n\n\n```python\nimport math\nprint(math.factorial(15))\nprint(\"Result correct?\", math.factorial(15) == fifteen_factorial)\n```\n\n 1307674368000\n Result correct? True\n\n\n## Exercise 3\n\n[Stirling's approximation](http://mathworld.wolfram.com/StirlingsApproximation.html) gives that, for large enough $n$, \n\n\\begin{equation}\n n! \\simeq \\sqrt{2 \\pi} n^{n + 1/2} e^{-n}.\n\\end{equation}\n\nUsing functions and constants from the `math` library, compare the results of $n!$ and Stirling's approximation for $n = 5, 10, 15, 20$. In what sense does the approximation improve?\n\n#### Solution\n\n\n```python\nprint(math.factorial(5), math.sqrt(2*math.pi)*5**(5+0.5)*math.exp(-5))\nprint(math.factorial(10), math.sqrt(2*math.pi)*10**(10+0.5)*math.exp(-10))\nprint(math.factorial(15), math.sqrt(2*math.pi)*15**(15+0.5)*math.exp(-15))\nprint(math.factorial(20), math.sqrt(2*math.pi)*20**(20+0.5)*math.exp(-20))\nprint(\"Absolute differences:\")\nprint(math.factorial(5) - math.sqrt(2*math.pi)*5**(5+0.5)*math.exp(-5))\nprint(math.factorial(10) - math.sqrt(2*math.pi)*10**(10+0.5)*math.exp(-10))\nprint(math.factorial(15) - math.sqrt(2*math.pi)*15**(15+0.5)*math.exp(-15))\nprint(math.factorial(20) - math.sqrt(2*math.pi)*20**(20+0.5)*math.exp(-20))\nprint(\"Relative differences:\")\nprint((math.factorial(5) - math.sqrt(2*math.pi)*5**(5+0.5)*math.exp(-5)) / math.factorial(5))\nprint((math.factorial(10) - math.sqrt(2*math.pi)*10**(10+0.5)*math.exp(-10)) / math.factorial(10))\nprint((math.factorial(15) - math.sqrt(2*math.pi)*15**(15+0.5)*math.exp(-15)) / math.factorial(15))\nprint((math.factorial(20) - math.sqrt(2*math.pi)*20**(20+0.5)*math.exp(-20)) / math.factorial(20))\n```\n\n 120 118.01916795759007\n 3628800 3598695.6187410355\n 1307674368000 1300430722199.4658\n 2432902008176640000 2.422786846761133e+18\n Absolute differences:\n 1.9808320424099293\n 30104.38125896454\n 7243645800.53418\n 1.0115161415506944e+16\n Relative differences:\n 0.016506933686749412\n 0.00829596044393864\n 0.00553933454519939\n 0.004157652622880542\n\n\nWe see that the relative error decreases, whilst the absolute error grows (significantly).\n\n# Basic functions\n\n## Exercise 1\n\nWrite a function to calculate the volume of a cuboid with edge lengths $a, b, c$. Test your code on sample values such as\n\n1. $a=1, b=1, c=1$ (result should be $1$);\n2. $a=1, b=2, c=3.5$ (result should be $7.0$);\n3. $a=0, b=1, c=1$ (result should be $0$);\n4. 
$a=2, b=-1, c=1$ (what do you think the result should be?).\n\n#### Solution\n\n\n```python\ndef cuboid_volume(a, b, c):\n \"\"\"\n Compute the volume of a cuboid with edge lengths a, b, c.\n Volume is abc. Only makes sense if all are non-negative.\n \n Parameters\n ----------\n \n a : float\n Edge length 1\n b : float\n Edge length 2\n c : float\n Edge length 3\n \n Returns\n -------\n \n volume : float\n The volume a*b*c\n \"\"\"\n \n if (a < 0.0) or (b < 0.0) or (c < 0.0):\n print(\"Negative edge length makes no sense!\")\n return 0\n \n return a*b*c\n```\n\n\n```python\nprint(cuboid_volume(1,1,1))\nprint(cuboid_volume(1,2,3.5))\nprint(cuboid_volume(0,1,1))\nprint(cuboid_volume(2,-1,1))\n```\n\n 1\n 7.0\n 0\n Negative edge length makes no sense!\n 0\n\n\nIn later cases, after having covered exceptions, I would suggest raising a `NotImplementedError` for negative edge lengths.\n\n## Exercise 2\n\nWrite a function to compute the time (in seconds) taken for an object to fall from a height $H$ (in metres) to the ground, using the formula\n\\begin{equation}\n h(t) = \\frac{1}{2} g t^2.\n\\end{equation}\nUse the value of the acceleration due to gravity $g$ from `scipy.constants.g`. Test your code on sample values such as\n\n1. $H = 1$m (result should be $\\approx 0.452$s);\n2. $H = 10$m (result should be $\\approx 1.428$s);\n3. $H = 0$m (result should be $0$s);\n4. $H = -1$m (what do you think the result should be?).\n\n#### Solution\n\n\n```python\ndef fall_time(H):\n \"\"\"\n Give the time in seconds for an object to fall to the ground\n from H metres.\n \n Parameters\n ----------\n \n H : float\n Starting height (metres)\n \n Returns\n -------\n \n T : float\n Fall time (seconds)\n \"\"\"\n \n from math import sqrt\n from scipy.constants import g\n \n if (H < 0):\n print(\"Negative height makes no sense!\")\n return 0\n \n return sqrt(2.0*H/g)\n# return sqrt(2.0*H/scipy.constants(g))\n```\n\n\n```python\nprint(fall_time(1))\nprint(fall_time(10))\nprint(fall_time(0))\nprint(fall_time(-1))\n```\n\n 0.45160075575178754\n 1.4280869812290344\n 0.0\n Negative height makes no sense!\n 0\n\n\n## Exercise 3\n\nWrite a function that computes the area of a triangle with edge lengths $a, b, c$. You may use the formula\n\\begin{equation}\n A = \\sqrt{s (s - a) (s - b) (s - c)}, \\qquad s = \\frac{a + b + c}{2}.\n\\end{equation}\n\nConstruct your own test cases to cover a range of possibilities.\n\n\n```python\ndef triangle_area(a, b, c):\n \"\"\"\n Compute the area of a triangle with edge lengths a, b, c.\n Area is sqrt(s (s-a) (s-b) (s-c)). \n s is (a+b+c)/2.\n Only makes sense if all are non-negative.\n \n Parameters\n ----------\n \n a : float\n Edge length 1\n b : float\n Edge length 2\n c : float\n Edge length 3\n \n Returns\n -------\n \n area : float\n The triangle area.\n \"\"\"\n \n from math import sqrt\n \n if (a < 0.0) or (b < 0.0) or (c < 0.0):\n print(\"Negative edge length makes no sense!\")\n return 0\n \n s = 0.5 * (a + b + c)\n return sqrt(s * (s-a) * (s-b) * (s-c))\n```\n\n\n```python\nprint(triangle_area(1,1,1)) # Equilateral; answer sqrt(3)/4 ~ 0.433\nprint(triangle_area(3,4,5)) # Right triangle; answer 6\nprint(triangle_area(1,1,0)) # Not a triangle; answer 0\nprint(triangle_area(-1,1,1)) # Not a triangle; exception or 0.\n```\n\n 0.4330127018922193\n 6.0\n 0.0\n Negative edge length makes no sense!\n 0\n\n\n# Floating point numbers\n\n## Exercise 1\n\nComputers cannot, in principle, represent real numbers perfectly. This can lead to problems of accuracy. 
For example, if\n\n\begin{equation}\n    x = 1, \qquad y = 1 + 10^{-14} \sqrt{3}\n\end{equation}\n\nthen it *should* be true that\n\n\begin{equation}\n    10^{14} (y - x) = \sqrt{3}.\n\end{equation}\n\nCheck how accurately this equation holds in Python and see what this implies about the accuracy of subtracting two numbers that are close together.\n\n#### Solution\n\n\n```python\nfrom math import sqrt\n\nx = 1.0\ny = 1.0 + 1e-14 * sqrt(3.0)\nprint(\"The calculation gives {}\".format(1e14*(y-x)))\nprint(\"The result should be {}\".format(sqrt(3.0)))\n```\n\n    The calculation gives 1.7319479184152442\n    The result should be 1.7320508075688772\n\n\nWe see that the first three digits are correct. This isn't too surprising: we expect 16 digits of accuracy for a floating point number, but $x$ and $y$ are identical for the first 14 digits.\n\n## Exercise 2\n\nThe standard quadratic formula gives the solutions to\n\n\begin{equation}\n    a x^2 + b x + c = 0\n\end{equation}\n\nas\n\n\begin{equation}\n    x = \frac{-b \pm \sqrt{b^2 - 4 a c}}{2 a}.\n\end{equation}\n\nShow that, if $a = 10^{-n} = c$ and $b = 10^n$ then\n\n\begin{equation}\n    x = \frac{10^{2 n}}{2} \left( -1 \pm \sqrt{1 - 4 \times 10^{-4n}} \right).\n\end{equation}\n\nUsing the expansion (from Taylor's theorem)\n\n\begin{equation}\n    \sqrt{1 - 4 \times 10^{-4 n}} \simeq 1 - 2 \times 10^{-4 n} + \dots, \qquad n \gg 1,\n\end{equation}\n\nshow that\n\n\begin{equation}\n    x \simeq -10^{2 n} + 10^{-2 n} \quad \text{and} \quad -10^{-2n}, \qquad n \gg 1.\n\end{equation}\n\n#### Solution\n\nThis is pen-and-paper work; each step should be re-arranging.\n\n## Exercise 3\n\nBy multiplying and dividing by $-b \mp \sqrt{b^2 - 4 a c}$, check that we can also write the solutions to the quadratic equation as\n\n\begin{equation}\n    x = \frac{2 c}{-b \mp \sqrt{b^2 - 4 a c}}.\n\end{equation}\n\n#### Solution\n\nUsing the difference of two squares we get\n\n\begin{equation}\n    x = \frac{b^2 - \left( b^2 - 4 a c \right)}{2a \left( -b \mp \sqrt{b^2 - 4 a c} \right)}\n\end{equation}\n\nwhich re-arranges to give the required solution.\n\n## Exercise 4\n\nUsing Python, calculate both solutions to the quadratic equation\n\n\begin{equation}\n    10^{-n} x^2 + 10^n x + 10^{-n} = 0\n\end{equation}\n\nfor $n = 3$ and $n = 4$ using both formulas. What do you see? 
How has floating point accuracy caused problems here?\n\n#### Solution\n\n\n```python\na = 1e-3\nb = 1e3\nc = a\nformula1_n3_plus = (-b + sqrt(b**2 - 4.0*a*c))/(2.0*a)\nformula1_n3_minus = (-b - sqrt(b**2 - 4.0*a*c))/(2.0*a)\nformula2_n3_plus = (2.0*c)/(-b + sqrt(b**2 - 4.0*a*c))\nformula2_n3_minus = (2.0*c)/(-b - sqrt(b**2 - 4.0*a*c))\nprint(\"For n=3, first formula, solutions are {} and {}.\".format(formula1_n3_plus, \n formula1_n3_minus))\nprint(\"For n=3, second formula, solutions are {} and {}.\".format(formula2_n3_plus, \n formula2_n3_minus))\n\na = 1e-4\nb = 1e4\nc = a\nformula1_n4_plus = (-b + sqrt(b**2 - 4.0*a*c))/(2.0*a)\nformula1_n4_minus = (-b - sqrt(b**2 - 4.0*a*c))/(2.0*a)\nformula2_n4_plus = (2.0*c)/(-b + sqrt(b**2 - 4.0*a*c))\nformula2_n4_minus = (2.0*c)/(-b - sqrt(b**2 - 4.0*a*c))\nprint(\"For n=4, first formula, solutions are {} and {}.\".format(formula1_n4_plus, \n formula1_n4_minus))\nprint(\"For n=4, second formula, solutions are {} and {}.\".format(formula2_n4_plus, \n formula2_n4_minus))\n```\n\n For n=3, first formula, solutions are -9.999894245993346e-07 and -999999.999999.\n For n=3, second formula, solutions are -1000010.5755125057 and -1.000000000001e-06.\n For n=4, first formula, solutions are -9.094947017729282e-09 and -100000000.0.\n For n=4, second formula, solutions are -109951162.7776 and -1e-08.\n\n\nThere is a difference in the fifth significant figure in both solutions in the first case, which gets to the third (arguably the second) significant figure in the second case. Comparing to the limiting solutions above, we see that the *larger* root is definitely more accurately captured with the first formula than the second (as the result should be bigger than $10^{-2n}$).\n\nIn the second case we have divided by a very small number to get the big number, which loses accuracy.\n\n## Exercise 5\n\nThe standard definition of the derivative of a function is\n\n\\begin{equation}\n \\left. \\frac{\\text{d} f}{\\text{d} x} \\right|_{x=X} = \\lim_{\\delta \\to 0} \\frac{f(X + \\delta) - f(X)}{\\delta}.\n\\end{equation}\n\nWe can *approximate* this by computing the result for a *finite* value of $\\delta$:\n\n\\begin{equation}\n g(x, \\delta) = \\frac{f(x + \\delta) - f(x)}{\\delta}.\n\\end{equation}\n\nWrite a function that takes as inputs a function of one variable, $f(x)$, a location $X$, and a step length $\\delta$, and returns the approximation to the derivative given by $g$.\n\n#### Solution\n\n\n```python\ndef g(f, X, delta):\n \"\"\"\n Approximate the derivative of a given function at a point.\n \n Parameters\n ----------\n \n f : function\n Function to be differentiated\n X : real\n Point at which the derivative is evaluated\n delta : real\n Step length\n \n Returns\n -------\n \n g : real\n Approximation to the derivative\n \"\"\"\n \n return (f(X+delta) - f(X)) / delta\n```\n\n## Exercise 6\n\nThe function $f_1(x) = e^x$ has derivative with the exact value $1$ at $x=0$. Compute the approximate derivative using your function above, for $\\delta = 10^{-2 n}$ with $n = 1, \\dots, 7$. You should see the results initially improve, then get worse. 
Why is this?\n\n#### Solution\n\n\n```python\nfrom math import exp\nfor n in range(1, 8):\n print(\"For n={}, the approx derivative is {}.\".format(n, g(exp, 0.0, 10**(-2.0*n))))\n```\n\n For n=1, the approx derivative is 1.005016708416795.\n For n=2, the approx derivative is 1.000050001667141.\n For n=3, the approx derivative is 1.0000004999621837.\n For n=4, the approx derivative is 0.999999993922529.\n For n=5, the approx derivative is 1.000000082740371.\n For n=6, the approx derivative is 1.000088900582341.\n For n=7, the approx derivative is 0.9992007221626409.\n\n\nWe have a combination of floating point inaccuracies: in the numerator we have two terms that are nearly equal, leading to a very small number. We then divide two very small numbers. This is inherently inaccurate.\n\nThis does not mean that you can't calculate derivatives to high accuracy, but alternative approaches are definitely recommended.\n\n# Prime numbers\n\n## Exercise 1\n\nWrite a function that tests if a number is prime. Test it by writing out all prime numbers less than 50.\n\n#### Solution\n\nThis is a \"simple\" solution, but not efficient.\n\n\n```python\ndef isprime(n):\n \"\"\"\n Checks to see if an integer is prime.\n \n Parameters\n ----------\n \n n : integer\n Number to check\n \n Returns\n -------\n \n isprime : Boolean\n If n is prime\n \"\"\"\n \n # No number less than 2 can be prime\n if n < 2:\n return False\n \n # We only need to check for divisors up to sqrt(n)\n for m in range(2, int(n**0.5)+1):\n if n%m == 0:\n return False\n \n # If we've got this far, there are no divisors.\n return True\n```\n\n\n```python\nfor n in range(50):\n if isprime(n):\n print(\"Function says that {} is prime.\".format(n))\n```\n\n Function says that 2 is prime.\n Function says that 3 is prime.\n Function says that 5 is prime.\n Function says that 7 is prime.\n Function says that 11 is prime.\n Function says that 13 is prime.\n Function says that 17 is prime.\n Function says that 19 is prime.\n Function says that 23 is prime.\n Function says that 29 is prime.\n Function says that 31 is prime.\n Function says that 37 is prime.\n Function says that 41 is prime.\n Function says that 43 is prime.\n Function says that 47 is prime.\n\n\n## Exercise 2\n\n500 years ago some believed that the number $2^n - 1$ was prime for *all* primes $n$. Use your function to find the first prime $n$ for which this is not true.\n\n#### Solution\n\nWe could do this many ways. This \"elegant\" solution says:\n\n* Start from the smallest possible $n$ (2).\n* Check if $n$ is prime. If not, add one to $n$.\n* If $n$ is prime, check if $2^n-1$ is prime. If it is, add one to $n$.\n* If both those logical checks fail, we have found the $n$ we want.\n\n\n```python\nn = 2\nwhile (not isprime(n)) or (isprime(2**n-1)):\n n += 1\nprint(\"The first n such that 2^n-1 is not prime is {}.\".format(n))\n```\n\n The first n such that 2^n-1 is not prime is 11.\n\n\n## Exercise 3\n\nThe *Mersenne* primes are those that have the form $2^n-1$, where $n$ is prime. 
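For example, $n = 3$ gives $2^3 - 1 = 7$, which is prime, so $7$ is a Mersenne prime; as found above, $n = 11$ is the first prime for which $2^n - 1 = 2047 = 23 \times 89$ fails to be prime.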
Use your previous solutions to generate all the $n < 40$ that give Mersenne primes.\n\n#### Solution\n\n\n```python\nfor n in range(2, 41):\n if isprime(n) and isprime(2**n-1):\n print(\"n={} is such that 2^n-1 is prime.\".format(n))\n```\n\n n=2 is such that 2^n-1 is prime.\n n=3 is such that 2^n-1 is prime.\n n=5 is such that 2^n-1 is prime.\n n=7 is such that 2^n-1 is prime.\n n=13 is such that 2^n-1 is prime.\n n=17 is such that 2^n-1 is prime.\n n=19 is such that 2^n-1 is prime.\n n=31 is such that 2^n-1 is prime.\n\n\n## Exercise 4\n\nWrite a function to compute all prime factors of an integer $n$, including their multiplicities. Test it by printing the prime factors (without multiplicities) of $n = 17, \\dots, 20$ and the multiplicities (without factors) of $n = 48$.\n\n##### Note \n\nOne effective solution is to return a *dictionary*, where the keys are the factors and the values are the multiplicities.\n\n#### Solution\n\nThis solution uses the trick of immediately dividing $n$ by any divisor: this means we never have to check the divisor for being prime.\n\n\n```python\ndef prime_factors(n):\n \"\"\"\n Generate all the prime factors of n.\n \n Parameters\n ----------\n \n n : integer\n Number to be checked\n \n Returns\n -------\n \n factors : dict\n Prime factors (keys) and multiplicities (values)\n \"\"\"\n \n factors = {}\n \n m = 2\n while m <= n:\n if n%m == 0:\n factors[m] = 1\n n //= m\n while n%m == 0:\n factors[m] += 1\n n //= m\n m += 1\n \n return factors\n```\n\n\n```python\nfor n in range(17, 21):\n print(\"Prime factors of {} are {}.\".format(n, prime_factors(n).keys()))\nprint(\"Multiplicities of prime factors of 48 are {}.\".format(prime_factors(48).values()))\n```\n\n Prime factors of 17 are dict_keys([17]).\n Prime factors of 18 are dict_keys([2, 3]).\n Prime factors of 19 are dict_keys([19]).\n Prime factors of 20 are dict_keys([2, 5]).\n Multiplicities of prime factors of 48 are dict_values([4, 1]).\n\n\n## Exercise 5\n\nWrite a function to generate all the integer divisors, including 1, but not including $n$ itself, of an integer $n$. Test it on $n = 16, \\dots, 20$.\n\n##### Note\n\nYou could use the prime factorization from the previous exercise, or you could do it directly.\n\n#### Solution\n\nHere we will do it directly.\n\n\n```python\ndef divisors(n):\n \"\"\"\n Generate all integer divisors of n.\n \n Parameters\n ----------\n \n n : integer\n Number to be checked\n \n Returns\n -------\n \n divs : list\n All integer divisors, including 1.\n \"\"\"\n \n divs = [1]\n m = 2\n while m <= n/2:\n if n%m == 0:\n divs.append(m)\n m += 1\n \n return divs\n```\n\n\n```python\nfor n in range(16, 21):\n print(\"The divisors of {} are {}.\".format(n, divisors(n)))\n```\n\n The divisors of 16 are [1, 2, 4, 8].\n The divisors of 17 are [1].\n The divisors of 18 are [1, 2, 3, 6, 9].\n The divisors of 19 are [1].\n The divisors of 20 are [1, 2, 4, 5, 10].\n\n\n## Exercise 6\n\nA *perfect* number $n$ is one where the divisors sum to $n$. For example, 6 has divisors 1, 2, and 3, which sum to 6. 
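The next perfect number after 6 is $28 = 1 + 2 + 4 + 7 + 14$.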
Use your previous solution to find all perfect numbers $n < 10,000$ (there are only four!).\n\n#### Solution\n\nWe can do this much more efficiently than the code below using packages such as `numpy`, but this is a \"bare python\" solution.\n\n\n```python\ndef isperfect(n):\n \"\"\"\n Check if a number is perfect.\n \n Parameters\n ----------\n \n n : integer\n Number to check\n \n Returns\n -------\n \n isperfect : Boolean\n Whether it is perfect or not.\n \"\"\"\n \n divs = divisors(n)\n sum_divs = 0\n for d in divs:\n sum_divs += d\n \n return n == sum_divs\n```\n\n\n```python\nfor n in range(2,10000):\n if (isperfect(n)):\n factors = prime_factors(n)\n print(\"{} is perfect.\\n\"\n \"Divisors are {}.\\n\"\n \"Prime factors {} (multiplicities {}).\".format(\n n, divisors(n), factors.keys(), factors.values()))\n```\n\n 6 is perfect.\n Divisors are [1, 2, 3].\n Prime factors dict_keys([2, 3]) (multiplicities dict_values([1, 1])).\n 28 is perfect.\n Divisors are [1, 2, 4, 7, 14].\n Prime factors dict_keys([2, 7]) (multiplicities dict_values([2, 1])).\n 496 is perfect.\n Divisors are [1, 2, 4, 8, 16, 31, 62, 124, 248].\n Prime factors dict_keys([2, 31]) (multiplicities dict_values([4, 1])).\n 8128 is perfect.\n Divisors are [1, 2, 4, 8, 16, 32, 64, 127, 254, 508, 1016, 2032, 4064].\n Prime factors dict_keys([2, 127]) (multiplicities dict_values([6, 1])).\n\n\n## Exercise 7\n\nUsing your previous functions, check that all perfect numbers $n < 10,000$ can be written as $2^{k-1} \\times (2^k - 1)$, where $2^k-1$ is a Mersenne prime.\n\n#### Solution\n\nIn fact we did this above already:\n\n* $6 = 2^{2-1} \\times (2^2 - 1)$. 2 is the first number on our Mersenne list.\n* $28 = 2^{3-1} \\times (2^3 - 1)$. 3 is the second number on our Mersenne list.\n* $496 = 2^{5-1} \\times (2^5 - 1)$. 5 is the third number on our Mersenne list.\n* $8128 = 2^{7-1} \\times (2^7 - 1)$. 7 is the fourth number on our Mersenne list.\n\n## Exercise 8 (bonus)\n\nInvestigate the `timeit` function in python or IPython. Use this to measure how long your function takes to check that, if $k$ on the Mersenne list then $n = 2^{k-1} \\times (2^k - 1)$ is a perfect number, using your functions. Stop increasing $k$ when the time takes too long!\n\n##### Note\n\nYou could waste considerable time on this, and on optimizing the functions above to work efficiently. It is *not* worth it, other than to show how rapidly the computation time can grow!\n\n#### Solution\n\n\n```python\n%timeit isperfect(2**(3-1)*(2**3-1))\n```\n\n 2.49 \u00b5s \u00b1 215 ns per loop (mean \u00b1 std. dev. of 7 runs, 100000 loops each)\n\n\n\n```python\n%timeit isperfect(2**(5-1)*(2**5-1))\n```\n\n 35.3 \u00b5s \u00b1 4 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 10000 loops each)\n\n\n\n```python\n%timeit isperfect(2**(7-1)*(2**7-1))\n```\n\n 658 \u00b5s \u00b1 64.9 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1000 loops each)\n\n\n\n```python\n%timeit isperfect(2**(13-1)*(2**13-1))\n```\n\n 2.72 s \u00b1 290 ms per loop (mean \u00b1 std. dev. of 7 runs, 1 loop each)\n\n\nIt's worth thinking about the operation counts of the various functions implemented here. 
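As one illustration (a sketch that is not used anywhere in the solutions above), a divisor-sum check that only loops up to $\sqrt{n}$, adding each divisor $d$ together with its partner $n/d$, needs roughly $\sqrt{n}$ iterations instead of the roughly $n/2$ used by `divisors` above:

```python
def isperfect_fast(n):
    """Check if n is perfect by summing divisor pairs up to sqrt(n)."""
    if n < 2:
        return False
    total = 1            # 1 divides everything; n itself is excluded
    d = 2
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:   # avoid double-counting exact square roots
                total += n // d
        d += 1
    return total == n
```

Timing this with `%timeit` on the same inputs should show the gap widening quickly as $n$ grows.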
The implementations are inefficient, but even in the best case you see how the number of operations (and hence computing time required) rapidly increases.\n\n# Logistic map\n\nPartly taken from Newman's book, p 120.\n\nThe logistic map builds a sequence of numbers $\\{ x_n \\}$ using the relation\n\n\\begin{equation}\n x_{n+1} = r x_n \\left( 1 - x_n \\right),\n\\end{equation}\n\nwhere $0 \\le x_0 \\le 1$.\n\n## Exercise 1\n\nWrite a program that calculates the first $N$ members of the sequence, given as input $x_0$ and $r$ (and, of course, $N$).\n\n#### Solution\n\n\n```python\ndef logistic(x0, r, N = 1000):\n sequence = [x0]\n xn = x0\n for n in range(N):\n xnew = r*xn*(1.0-xn)\n sequence.append(xnew)\n xn = xnew\n return sequence\n```\n\n## Exercise 2\n\nFix $x_0=0.5$. Calculate the first 2,000 members of the sequence for $r=1.5$ and $r=3.5$ Plot the last 100 members of the sequence in both cases.\n\nWhat does this suggest about the long-term behaviour of the sequence?\n\n#### Solution\n\n\n```python\nimport numpy\nfrom matplotlib import pyplot\n%matplotlib inline\n\nx0 = 0.5\nN = 2000\nsequence1 = logistic(x0, 1.5, N)\nsequence2 = logistic(x0, 3.5, N)\npyplot.plot(sequence1[-100:], 'b-', label = r'$r=1.5$')\npyplot.plot(sequence2[-100:], 'k-', label = r'$r=3.5$')\npyplot.xlabel(r'$n$')\npyplot.ylabel(r'$x$')\npyplot.show()\n```\n\nThis suggests that, for $r=1.5$, the sequence has settled down to a fixed point. In the $r=3.5$ case it seems to be moving between four points repeatedly.\n\n## Exercise 3\n\nFix $x_0 = 0.5$. For each value of $r$ between $1$ and $4$, in steps of $0.01$, calculate the first 2,000 members of the sequence. Plot the last 1,000 members of the sequence on a plot where the $x$-axis is the value of $r$ and the $y$-axis is the values in the sequence. Do not plot lines - just plot markers (e.g., use the `'k.'` plotting style).\n\n#### Solution\n\n\n```python\nimport numpy\nfrom matplotlib import pyplot\n%matplotlib inline\n\n# This is the \"best\" way of doing it, but numpy hasn't been introduced yet\n# r_values = numpy.arange(1.0, 4.0, 0.01) \nr_values = []\nfor i in range(302):\n r_values.append(1.0 + 0.01 * i)\nx0 = 0.5\nN = 2000\nfor r in r_values:\n sequence = logistic(x0, r, N)\n pyplot.plot(r*numpy.ones_like(sequence[1000:]), sequence[1000:], 'k.')\npyplot.xlabel(r'$r$')\npyplot.ylabel(r'$x$')\npyplot.show()\n```\n\n## Exercise 4\n\nFor iterative maps such as the logistic map, one of three things can occur:\n\n1. The sequence settles down to a *fixed point*.\n2. The sequence rotates through a finite number of values. This is called a *limit cycle*.\n3. The sequence generates an infinite number of values. This is called *deterministic chaos*.\n\nUsing just your plot, or new plots from this data, work out approximate values of $r$ for which there is a transition from fixed points to limit cycles, from limit cycles of a given number of values to more values, and the transition to chaos.\n\n#### Solution\n\nThe first transition is at $r \\approx 3$, the next at $r \\approx 3.45$, the next at $r \\approx 3.55$. The transition to chaos appears to happen before $r=4$, but it's not obvious exactly where.\n\n# Mandelbrot\n\nThe Mandelbrot set is also generated from a sequence, $\\{ z_n \\}$, using the relation\n\n\\begin{equation}\n z_{n+1} = z_n^2 + c, \\qquad z_0 = 0.\n\\end{equation}\n\nThe members of the sequence, and the constant $c$, are all complex. 
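For example, with $c = -1$ the sequence is $0, -1, 0, -1, \dots$, which stays bounded, whereas with $c = 1$ it is $0, 1, 2, 5, 26, \dots$, which rapidly blows up.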
The point in the complex plane at $c$ is in the Mandelbrot set only if the $|z_n| < 2$ for all members of the sequence. In reality, checking the first 100 iterations is sufficient.\n\nNote: the python notation for a complex number $x + \\text{i} y$ is `x + yj`: that is, `j` is used to indicate $\\sqrt{-1}$. If you know the values of `x` and `y` then `x + yj` constructs a complex number; if they are stored in variables you can use `complex(x, y)`.\n\n## Exercise 1\n\nWrite a function that checks if the point $c$ is in the Mandelbrot set.\n\n#### Solution\n\n\n```python\ndef in_Mandelbrot(c, n_iterations = 100):\n z0 = 0.0 + 0j\n in_set = True\n n = 0\n zn = z0\n while in_set and (n < n_iterations):\n n += 1\n znew = zn**2 + c\n in_set = abs(znew) < 2.0\n zn = znew\n return in_set\n```\n\n## Exercise 2\n\nCheck the points $c=0$ and $c=\\pm 2 \\pm 2 \\text{i}$ and ensure they do what you expect. (What *should* you expect?)\n\n#### Solution\n\n\n```python\nc_values = [0.0, 2+2j, 2-2j, -2+2j, -2-2j]\nfor c in c_values:\n print(\"Is {} in the Mandelbrot set? {}.\".format(c, in_Mandelbrot(c)))\n```\n\n Is 0.0 in the Mandelbrot set? True.\n Is (2+2j) in the Mandelbrot set? False.\n Is (2-2j) in the Mandelbrot set? False.\n Is (-2+2j) in the Mandelbrot set? False.\n Is (-2-2j) in the Mandelbrot set? False.\n\n\n## Exercise 3\n\nWrite a function that, given $N$\n\n1. generates an $N \\times N$ grid spanning $c = x + \\text{i} y$, for $-2 \\le x \\le 2$ and $-2 \\le y \\le 2$;\n2. returns an $N\\times N$ array containing one if the associated grid point is in the Mandelbrot set, and zero otherwise.\n\n#### Solution\n\n\n```python\nimport numpy\n\ndef grid_Mandelbrot(N):\n x = numpy.linspace(-2.0, 2.0, N)\n X, Y = numpy.meshgrid(x, x)\n C = X + 1j*Y\n grid = numpy.zeros((N, N), int)\n for nx in range(N):\n for ny in range(N):\n grid[nx, ny] = int(in_Mandelbrot(C[nx, ny]))\n return grid\n```\n\n## Exercise 4\n\nUsing the function `imshow` from `matplotlib`, plot the resulting array for a $100 \\times 100$ array to make sure you see the expected shape.\n\n#### Solution\n\n\n```python\nfrom matplotlib import pyplot\n%matplotlib inline\n\npyplot.imshow(grid_Mandelbrot(100))\n```\n\n## Exercise 5\n\nModify your functions so that, instead of returning whether a point is inside the set or not, it returns the logarithm of the number of iterations it takes. Plot the result using `imshow` again.\n\n#### Solution\n\n\n```python\nfrom math import log\n\ndef log_Mandelbrot(c, n_iterations = 100):\n z0 = 0.0 + 0j\n in_set = True\n n = 0\n zn = z0\n while in_set and (n < n_iterations):\n n += 1\n znew = zn**2 + c\n in_set = abs(znew) < 2.0\n zn = znew\n return log(n)\n\ndef log_grid_Mandelbrot(N):\n x = numpy.linspace(-2.0, 2.0, N)\n X, Y = numpy.meshgrid(x, x)\n C = X + 1j*Y\n grid = numpy.zeros((N, N), int)\n for nx in range(N):\n for ny in range(N):\n grid[nx, ny] = log_Mandelbrot(C[nx, ny])\n return grid\n```\n\n\n```python\nfrom matplotlib import pyplot\n%matplotlib inline\n\npyplot.imshow(log_grid_Mandelbrot(100))\n```\n\n## Exercise 6\n\nTry some higher resolution plots, and try plotting only a section to see the structure. **Note** this is not a good way to get high accuracy close up images!\n\n#### Solution\n\nThis is a simple example:\n\n\n```python\npyplot.imshow(log_grid_Mandelbrot(1000)[600:800,400:600])\n```\n\n# Equivalence classes\n\nAn *equivalence class* is a relation that groups objects in a set into related subsets. 
For example, if we think of the integers modulo $7$, then $1$ is in the same equivalence class as $8$ (and $15$, and $22$, and so on), and $3$ is in the same equivalence class as $10$. We use the tilde $3 \\sim 10$ to denote two objects within the same equivalence class.\n\nHere, we are going to define the positive integers programmatically from equivalent sequences.\n\n## Exercise 1\n\nDefine a python class `Eqint`. This should be\n\n1. Initialized by a sequence;\n2. Store the sequence;\n3. Define its representation (via the `__repr__` function) to be the integer length of the sequence;\n4. Redefine equality (via the `__eq__` function) so that two `eqint`s are equal if their sequences have same length.\n\n#### Solution\n\n\n```python\nclass Eqint(object):\n \n def __init__(self, sequence):\n self.sequence = sequence\n \n def __repr__(self):\n return str(len(self.sequence))\n \n def __eq__(self, other):\n return len(self.sequence)==len(other.sequence)\n```\n\n## Exercise 2\n\nDefine a `zero` object from the empty list, and three `one` objects, from a single object list, tuple, and string. For example\n\n```python\none_list = Eqint([1])\none_tuple = Eqint((1,))\none_string = Eqint('1')\n```\n\nCheck that none of the `one` objects equal the zero object, but all equal the other `one` objects. Print each object to check that the representation gives the integer length.\n\n#### Solution\n\n\n```python\nzero = Eqint([])\none_list = Eqint([1])\none_tuple = Eqint((1,))\none_string = Eqint('1')\n\nprint(\"Is zero equivalent to one? {}, {}, {}\".format(zero == one_list, \n zero == one_tuple,\n zero == one_string))\nprint(\"Is one equivalent to one? {}, {}, {}.\".format(one_list == one_tuple,\n one_list == one_string,\n one_tuple == one_string))\nprint(zero)\nprint(one_list)\nprint(one_tuple)\nprint(one_string)\n```\n\n Is zero equivalent to one? False, False, False\n Is one equivalent to one? True, True, True.\n 0\n 1\n 1\n 1\n\n\n## Exercise 3\n\nRedefine the class by including an `__add__` method that combines the two sequences. That is, if `a` and `b` are `Eqint`s then `a+b` should return an `Eqint` defined from combining `a` and `b`s sequences.\n\n##### Note\n\nAdding two different *types* of sequences (eg, a list to a tuple) does not work, so it is better to either iterate over the sequences, or to convert to a uniform type before adding.\n\n#### Solution\n\n\n```python\nclass Eqint(object):\n \n def __init__(self, sequence):\n self.sequence = sequence\n \n def __repr__(self):\n return str(len(self.sequence))\n \n def __eq__(self, other):\n return len(self.sequence)==len(other.sequence)\n \n def __add__(a, b):\n return Eqint(tuple(a.sequence) + tuple(b.sequence))\n```\n\n## Exercise 4\n\nCheck your addition function by adding together all your previous `Eqint` objects (which will need re-defining, as the class has been redefined). Print the resulting object to check you get `3`, and also print its internal sequence.\n\n#### Solution\n\n\n```python\nzero = Eqint([])\none_list = Eqint([1])\none_tuple = Eqint((1,))\none_string = Eqint('1')\n\nsum_eqint = zero + one_list + one_tuple + one_string\nprint(\"The sum is {}.\".format(sum_eqint))\nprint(\"The internal sequence is {}.\".format(sum_eqint.sequence))\n```\n\n The sum is 3.\n The internal sequence is (1, 1, '1').\n\n\n## Exercise 5\n\nWe will sketch a construction of the positive integers from *nothing*.\n\n1. Define an empty list `positive_integers`.\n2. Define an `Eqint` called `zero` from the empty list. 
Append it to `positive_integers`.\n3. Define an `Eqint` called `next_integer` from the `Eqint` defined by *a copy of* `positive_integers` (ie, use `Eqint(list(positive_integers))`. Append it to `positive_integers`.\n4. Repeat step 3 as often as needed.\n\nUse this procedure to define the `Eqint` equivalent to $10$. Print it, and its internal sequence, to check.\n\n#### Solution\n\n\n```python\npositive_integers = []\nzero = Eqint([])\npositive_integers.append(zero)\n\nN = 10\nfor n in range(1,N+1):\n positive_integers.append(Eqint(list(positive_integers)))\n \nprint(\"The 'final' Eqint is {}\".format(positive_integers[-1]))\nprint(\"Its sequence is {}\".format(positive_integers[-1].sequence))\nprint(\"That is, it contains all Eqints with length less than 10.\")\n```\n\n The 'final' Eqint is 10\n Its sequence is [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n That is, it contains all Eqints with length less than 10.\n\n\n# Rational numbers\n\nInstead of working with floating point numbers, which are not \"exact\", we could work with the rational numbers $\\mathbb{Q}$. A rational number $q \\in \\mathbb{Q}$ is defined by the *numerator* $n$ and *denominator* $d$ as $q = \\frac{n}{d}$, where $n$ and $d$ are *coprime* (ie, have no common divisor other than $1$).\n\n## Exercise 1\n\nFind a python function that finds the greatest common divisor (`gcd`) of two numbers. Use this to write a function `normal_form` that takes a numerator and divisor and returns the coprime $n$ and $d$. Test this function on $q = \\frac{3}{2}$, $q = \\frac{15}{3}$, and $q = \\frac{20}{42}$.\n\n#### Solution\n\n\n```python\ndef normal_form(numerator, denominator):\n from fractions import gcd\n \n factor = gcd(numerator, denominator)\n return numerator//factor, denominator//factor\n```\n\n\n```python\nprint(normal_form(3, 2))\nprint(normal_form(15, 3))\nprint(normal_form(20, 42))\n```\n\n (3, 2)\n (5, 1)\n (10, 21)\n\n\n /srv/conda/lib/python3.7/site-packages/ipykernel_launcher.py:4: DeprecationWarning: fractions.gcd() is deprecated. Use math.gcd() instead.\n after removing the cwd from sys.path.\n\n\n## Exercise 2\n\nDefine a class `Rational` that uses the `normal_form` function to store the rational number in the appropriate form. Define a `__repr__` function that prints a string that *looks like* $\\frac{n}{d}$ (**hint**: use `len(str(number))` to find the number of digits of an integer). Test it on the cases above.\n\n#### Solution\n\n\n```python\nclass Rational(object):\n \"\"\"\n A rational number.\n \"\"\"\n \n def __init__(self, numerator, denominator):\n \n n, d = normal_form(numerator, denominator)\n \n self.numerator = n\n self.denominator = d\n \n return None\n \n def __repr__(self):\n \n max_length = max(len(str(self.numerator)), len(str(self.denominator)))\n \n if self.denominator == 1:\n frac = str(self.numerator)\n else:\n numerator = str(self.numerator)+'\\n'\n bar = max_length*'-'+'\\n'\n denominator = str(self.denominator)\n frac = numerator+bar+denominator\n \n return frac\n```\n\n\n```python\nq1 = Rational(3, 2)\nprint(q1)\nq2 = Rational(15, 3)\nprint(q2)\nq3 = Rational(20, 42)\nprint(q3)\n```\n\n 3\n -\n 2\n 5\n 10\n --\n 21\n\n\n /srv/conda/lib/python3.7/site-packages/ipykernel_launcher.py:4: DeprecationWarning: fractions.gcd() is deprecated. Use math.gcd() instead.\n after removing the cwd from sys.path.\n\n\n## Exercise 3\n\nOverload the `__add__` function so that you can add two rational numbers. 
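 (Recall that $\\frac{n_1}{d_1} + \\frac{n_2}{d_2} = \\frac{n_1 d_2 + n_2 d_1}{d_1 d_2}$; passing the numerator and denominator through `normal_form` in the constructor reduces the result to lowest terms.) 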
Test it on $\\frac{1}{2} + \\frac{1}{3} + \\frac{1}{6} = 1$.\n\n#### Solution\n\n\n```python\nclass Rational(object):\n \"\"\"\n A rational number.\n \"\"\"\n \n def __init__(self, numerator, denominator):\n \n n, d = normal_form(numerator, denominator)\n \n self.numerator = n\n self.denominator = d\n \n return None\n \n def __add__(a, b):\n \n numerator = a.numerator * b.denominator + b.numerator * a.denominator\n denominator = a.denominator * b.denominator\n return Rational(numerator, denominator)\n \n def __repr__(self):\n \n max_length = max(len(str(self.numerator)), len(str(self.denominator)))\n \n if self.denominator == 1:\n frac = str(self.numerator)\n else:\n numerator = str(self.numerator)+'\\n'\n bar = max_length*'-'+'\\n'\n denominator = str(self.denominator)\n frac = numerator+bar+denominator\n \n return frac\n```\n\n\n```python\nprint(Rational(1,2) + Rational(1,3) + Rational(1,6))\n```\n\n 1\n\n\n /srv/conda/lib/python3.7/site-packages/ipykernel_launcher.py:4: DeprecationWarning: fractions.gcd() is deprecated. Use math.gcd() instead.\n after removing the cwd from sys.path.\n\n\n## Exercise 4\n\nOverload the `__mul__` function so that you can multiply two rational numbers. Test it on $\\frac{1}{3} \\times \\frac{15}{2} \\times \\frac{2}{5} = 1$.\n\n#### Solution\n\n\n```python\nclass Rational(object):\n \"\"\"\n A rational number.\n \"\"\"\n \n def __init__(self, numerator, denominator):\n \n n, d = normal_form(numerator, denominator)\n \n self.numerator = n\n self.denominator = d\n \n return None\n \n def __add__(a, b):\n \n numerator = a.numerator * b.denominator + b.numerator * a.denominator\n denominator = a.denominator * b.denominator\n return Rational(numerator, denominator)\n \n def __mul__(a, b):\n \n numerator = a.numerator * b.numerator\n denominator = a.denominator * b.denominator\n return Rational(numerator, denominator)\n \n def __repr__(self):\n \n max_length = max(len(str(self.numerator)), len(str(self.denominator)))\n \n if self.denominator == 1:\n frac = str(self.numerator)\n else:\n numerator = str(self.numerator)+'\\n'\n bar = max_length*'-'+'\\n'\n denominator = str(self.denominator)\n frac = numerator+bar+denominator\n \n return frac\n```\n\n\n```python\nprint(Rational(1,3)*Rational(15,2)*Rational(2,5))\n```\n\n 1\n\n\n /srv/conda/lib/python3.7/site-packages/ipykernel_launcher.py:4: DeprecationWarning: fractions.gcd() is deprecated. Use math.gcd() instead.\n after removing the cwd from sys.path.\n\n\n## Exercise 5\n\nOverload the [`__rmul__`](https://docs.python.org/2/reference/datamodel.html?highlight=rmul#object.__rmul__) function so that you can multiply a rational by an *integer*. Check that $\\frac{1}{2} \\times 2 = 1$ and $\\frac{1}{2} + (-1) \\times \\frac{1}{2} = 0$. Also overload the `__sub__` function (using previous functions!) 
so that you can subtract rational numbers and check that $\\frac{1}{2} - \\frac{1}{2} = 0$.\n\n#### Solution\n\n\n```python\nclass Rational(object):\n \"\"\"\n A rational number.\n \"\"\"\n \n def __init__(self, numerator, denominator):\n \n n, d = normal_form(numerator, denominator)\n \n self.numerator = n\n self.denominator = d\n \n return None\n \n def __add__(a, b):\n \n numerator = a.numerator * b.denominator + b.numerator * a.denominator\n denominator = a.denominator * b.denominator\n return Rational(numerator, denominator)\n \n def __mul__(a, b):\n \n numerator = a.numerator * b.numerator\n denominator = a.denominator * b.denominator\n return Rational(numerator, denominator)\n \n def __rmul__(self, other):\n \n numerator = self.numerator * other\n return Rational(numerator, self.denominator)\n \n def __sub__(a, b):\n \n return a + (-1)*b\n \n def __repr__(self):\n \n max_length = max(len(str(self.numerator)), len(str(self.denominator)))\n \n if self.denominator == 1:\n frac = str(self.numerator)\n else:\n numerator = str(self.numerator)+'\\n'\n bar = max_length*'-'+'\\n'\n denominator = str(self.denominator)\n frac = numerator+bar+denominator\n \n return frac\n```\n\n\n```python\nhalf = Rational(1,2)\nprint(2*half)\nprint(half+(-1)*half)\nprint(half-half)\n```\n\n 1\n 0\n 0\n\n\n /srv/conda/lib/python3.7/site-packages/ipykernel_launcher.py:4: DeprecationWarning: fractions.gcd() is deprecated. Use math.gcd() instead.\n after removing the cwd from sys.path.\n\n\n## Exercise 6\n\nOverload the `__float__` function so that `float(q)` returns the floating point approximation to the rational number `q`. Test this on $\\frac{1}{2}, \\frac{1}{3}$, and $\\frac{1}{11}$.\n\n#### Solution\n\n\n```python\nclass Rational(object):\n \"\"\"\n A rational number.\n \"\"\"\n \n def __init__(self, numerator, denominator):\n \n n, d = normal_form(numerator, denominator)\n \n self.numerator = n\n self.denominator = d\n \n return None\n \n def __add__(a, b):\n \n numerator = a.numerator * b.denominator + b.numerator * a.denominator\n denominator = a.denominator * b.denominator\n return Rational(numerator, denominator)\n \n def __mul__(a, b):\n \n numerator = a.numerator * b.numerator\n denominator = a.denominator * b.denominator\n return Rational(numerator, denominator)\n \n def __rmul__(self, other):\n \n numerator = self.numerator * other\n return Rational(numerator, self.denominator)\n \n def __sub__(a, b):\n \n return a + (-1)*b\n \n def __float__(a):\n \n return float(a.numerator) / float(a.denominator)\n \n def __repr__(self):\n \n max_length = max(len(str(self.numerator)), len(str(self.denominator)))\n \n if self.denominator == 1:\n frac = str(self.numerator)\n else:\n numerator = str(self.numerator)+'\\n'\n bar = max_length*'-'+'\\n'\n denominator = str(self.denominator)\n frac = numerator+bar+denominator\n \n return frac\n```\n\n\n```python\nprint(float(Rational(1,2)))\nprint(float(Rational(1,3)))\nprint(float(Rational(1,11)))\n```\n\n 0.5\n 0.3333333333333333\n 0.09090909090909091\n\n\n /srv/conda/lib/python3.7/site-packages/ipykernel_launcher.py:4: DeprecationWarning: fractions.gcd() is deprecated. Use math.gcd() instead.\n after removing the cwd from sys.path.\n\n\n## Exercise 7\n\nOverload the `__lt__` function to compare two rational numbers. Create a list of rational numbers where the denominator is $n = 2, \\dots, 11$ and the numerator is the floored integer $n/2$, ie `n//2`. 
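 (A comparison that avoids floating point: for positive denominators, $\\frac{n_1}{d_1} < \\frac{n_2}{d_2}$ exactly when $n_1 d_2 < n_2 d_1$.) 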
Use the `sorted` function on that list (which relies on the `__lt__` function).\n\n#### Solution\n\n\n```python\nclass Rational(object):\n \"\"\"\n A rational number.\n \"\"\"\n \n def __init__(self, numerator, denominator):\n \n n, d = normal_form(numerator, denominator)\n \n self.numerator = n\n self.denominator = d\n \n return None\n \n def __add__(a, b):\n \n numerator = a.numerator * b.denominator + b.numerator * a.denominator\n denominator = a.denominator * b.denominator\n return Rational(numerator, denominator)\n \n def __mul__(a, b):\n \n numerator = a.numerator * b.numerator\n denominator = a.denominator * b.denominator\n return Rational(numerator, denominator)\n \n def __rmul__(self, other):\n \n numerator = self.numerator * other\n return Rational(numerator, self.denominator)\n \n def __sub__(a, b):\n \n return a + (-1)*b\n \n def __float__(a):\n \n return float(a.numerator) / float(a.denominator)\n \n def __lt__(a, b):\n \n return a.numerator * b.denominator < a.denominator * b.numerator\n \n def __repr__(self):\n \n max_length = max(len(str(self.numerator)), len(str(self.denominator)))\n \n if self.denominator == 1:\n frac = str(self.numerator)\n else:\n numerator = '\\n'+str(self.numerator)+'\\n'\n bar = max_length*'-'+'\\n'\n denominator = str(self.denominator)\n frac = numerator+bar+denominator\n \n return frac\n```\n\n\n```python\nq_list = [Rational(n//2, n) for n in range(2, 12)]\nprint(sorted(q_list))\n```\n\n [\n 1\n -\n 3, \n 2\n -\n 5, \n 3\n -\n 7, \n 4\n -\n 9, \n 5\n --\n 11, \n 1\n -\n 2, \n 1\n -\n 2, \n 1\n -\n 2, \n 1\n -\n 2, \n 1\n -\n 2]\n\n\n /srv/conda/lib/python3.7/site-packages/ipykernel_launcher.py:4: DeprecationWarning: fractions.gcd() is deprecated. Use math.gcd() instead.\n after removing the cwd from sys.path.\n\n\n## Exercise 8\n\nThe [Wallis formula for $\\pi$](http://mathworld.wolfram.com/WallisFormula.html) is\n\n\\begin{equation}\n \\pi = 2 \\prod_{n=1}^{\\infty} \\frac{ (2 n)^2 }{(2 n - 1) (2 n + 1)}.\n\\end{equation}\n\nWe can define a partial product $\\pi_N$ as\n\n\\begin{equation}\n \\pi_N = 2 \\prod_{n=1}^{N} \\frac{ (2 n)^2 }{(2 n - 1) (2 n + 1)},\n\\end{equation}\n\neach of which are rational numbers.\n\nConstruct a list of the first 20 rational number approximations to $\\pi$ and print them out. Print the sorted list to show that the approximations are always increasing. 
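 (As a quick sanity check, the first two partial products are $\\pi_1 = 2 \\cdot \\frac{4}{3} = \\frac{8}{3}$ and $\\pi_2 = \\frac{8}{3} \\cdot \\frac{16}{15} = \\frac{128}{45}$.) 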
Then convert them to floating point numbers, construct a `numpy` array, and subtract this array from $\\pi$ to see how accurate they are.\n\n#### Solution\n\n\n```python\ndef wallis_rational(N):\n \"\"\"\n The partial product approximation to pi using the first N terms of Wallis' formula.\n \n Parameters\n ----------\n \n N : int\n Number of terms in product\n \n Returns\n -------\n \n partial : Rational\n A rational number approximation to pi\n \"\"\"\n \n partial = Rational(2,1)\n for n in range(1, N+1):\n partial = partial * Rational((2*n)**2, (2*n-1)*(2*n+1))\n return partial\n```\n\n\n```python\npi_list = [wallis_rational(n) for n in range(1, 21)]\nprint(pi_list)\nprint(sorted(pi_list))\n```\n\n [\n 8\n -\n 3, \n 128\n ---\n 45, \n 512\n ---\n 175, \n 32768\n -----\n 11025, \n 131072\n ------\n 43659, \n 2097152\n -------\n 693693, \n 8388608\n -------\n 2760615, \n 2147483648\n ----------\n 703956825, \n 8589934592\n ----------\n 2807136475, \n 137438953472\n ------------\n 44801898141, \n 549755813888\n ------------\n 178837328943, \n 35184372088832\n --------------\n 11425718238025, \n 140737488355328\n ---------------\n 45635265151875, \n 2251799813685248\n ----------------\n 729232910488125, \n 9007199254740992\n ----------------\n 2913690606794775, \n 9223372036854775808\n -------------------\n 2980705490751054825, \n 36893488147419103232\n --------------------\n 11912508103174630875, \n 590295810358705651712\n ---------------------\n 190453061649520333125, \n 2361183241434822606848\n ----------------------\n 761284675790187924375, \n 151115727451828646838272\n ------------------------\n 48691767863540419643025]\n [\n 8\n -\n 3, \n 128\n ---\n 45, \n 512\n ---\n 175, \n 32768\n -----\n 11025, \n 131072\n ------\n 43659, \n 2097152\n -------\n 693693, \n 8388608\n -------\n 2760615, \n 2147483648\n ----------\n 703956825, \n 8589934592\n ----------\n 2807136475, \n 137438953472\n ------------\n 44801898141, \n 549755813888\n ------------\n 178837328943, \n 35184372088832\n --------------\n 11425718238025, \n 140737488355328\n ---------------\n 45635265151875, \n 2251799813685248\n ----------------\n 729232910488125, \n 9007199254740992\n ----------------\n 2913690606794775, \n 9223372036854775808\n -------------------\n 2980705490751054825, \n 36893488147419103232\n --------------------\n 11912508103174630875, \n 590295810358705651712\n ---------------------\n 190453061649520333125, \n 2361183241434822606848\n ----------------------\n 761284675790187924375, \n 151115727451828646838272\n ------------------------\n 48691767863540419643025]\n\n\n /srv/conda/lib/python3.7/site-packages/ipykernel_launcher.py:4: DeprecationWarning: fractions.gcd() is deprecated. Use math.gcd() instead.\n after removing the cwd from sys.path.\n\n\n\n```python\nimport numpy\nprint(numpy.pi-numpy.array(list(map(float, pi_list))))\n```\n\n [0.47492599 0.29714821 0.21587837 0.16943846 0.1394167 0.11842246\n 0.10291902 0.09100266 0.08155811 0.07388885 0.06753749 0.06219131\n 0.05762923 0.05369058 0.05025576 0.04723393 0.04455483 0.0421633\n 0.04001539 0.03807569]\n\n\n# The shortest published Mathematical paper\n\nA [candidate for the shortest mathematical paper ever](http://www.ams.org/journals/bull/1966-72-06/S0002-9904-1966-11654-3/S0002-9904-1966-11654-3.pdf) shows the following result:\n\n\\begin{equation}\n 27^5 + 84^5 + 110^5 + 133^5 = 144^5.\n\\end{equation}\n\nThis is interesting as\n\n> This is a counterexample to a conjecture by Euler ... 
that at least $n$ $n$th powers are required to sum to an $n$th power, $n > 2$.\n\n## Exercise 1\n\nUsing python, check the equation above is true.\n\n#### Solution\n\n\n```python\nlhs = 27**5 + 84**5 + 110**5 + 133**5\nrhs = 144**5\n\nprint(\"Does the LHS {} equal the RHS {}? {}\".format(lhs, rhs, lhs==rhs))\n```\n\n Does the LHS 61917364224 equal the RHS 61917364224? True\n\n\n## Exercise 2\n\nThe more interesting statement in the paper is that\n\n\\begin{equation}\n 27^5 + 84^5 + 110^5 + 133^5 = 144^5.\n\\end{equation}\n\n> [is] the smallest instance in which four fifth powers sum to a fifth power.\n\nInterpreting \"the smallest instance\" to mean the solution where the right hand side term (the largest integer) is the smallest, we want to use python to check this statement.\n\nYou may find the `combinations` function from the `itertools` package useful.\n\n\n```python\nimport numpy\nimport itertools\n```\n\nThe `combinations` function returns all the combinations (ignoring order) of `r` elements from a given list. For example, take a list of length 6, `[1, 2, 3, 4, 5, 6]` and compute all the combinations of length 4:\n\n\n```python\ninput_list = numpy.arange(1, 7)\ncombinations = list(itertools.combinations(input_list, 4))\nprint(combinations)\n```\n\n [(1, 2, 3, 4), (1, 2, 3, 5), (1, 2, 3, 6), (1, 2, 4, 5), (1, 2, 4, 6), (1, 2, 5, 6), (1, 3, 4, 5), (1, 3, 4, 6), (1, 3, 5, 6), (1, 4, 5, 6), (2, 3, 4, 5), (2, 3, 4, 6), (2, 3, 5, 6), (2, 4, 5, 6), (3, 4, 5, 6)]\n\n\nWe can already see that the number of terms to consider is large.\n\nNote that we have used the `list` function to explicitly get a list of the combinations. The `combinations` function returns a *generator*, which can be used in a loop as if it were a list, without storing all elements of the list.\n\nHow fast does the number of combinations grow? The standard formula says that for a list of length $n$ there are\n\n\\begin{equation}\n \\begin{pmatrix} n \\\\ k \\end{pmatrix} = \\frac{n!}{k! (n-k)!}\n\\end{equation}\n\ncombinations of length $k$. For $k=4$ as needed here we will have $n (n-1) (n-2) (n-3) / 24$ combinations. For $n=144$ we therefore have\n\n\n```python\nn_combinations = 144*143*142*141/24\nprint(\"Number of combinations of 4 objects from 144 is {}\".format(n_combinations))\n```\n\n Number of combinations of 4 objects from 144 is 17178876.0\n\n\n### Exercise 2a\n\nShow, by getting python to compute the number of combinations $N = \\begin{pmatrix} n \\\\ 4 \\end{pmatrix}$ that $N$ grows roughly as $n^4$. To do this, plot the number of combinations and $n^4$ on a log-log scale. Restrict to $n \\le 50$.\n\n#### Solution\n\n\n```python\nfrom matplotlib import pyplot\n%matplotlib inline\n```\n\n\n```python\nn = numpy.arange(5, 51)\nN = numpy.zeros_like(n)\nfor i, n_c in enumerate(n):\n combinations = list(itertools.combinations(numpy.arange(1,n_c+1), 4))\n N[i] = len(combinations)\n```\n\n\n```python\npyplot.figure(figsize=(12,6))\npyplot.loglog(n, N, linestyle='None', marker='x', color='k', label='Combinations')\npyplot.loglog(n, n**4, color='b', label=r'$n^4$')\npyplot.xlabel(r'$n$')\npyplot.ylabel(r'$N$')\npyplot.legend(loc='upper left')\npyplot.show()\n```\n\nWith 17 million combinations to work with, we'll need to be a little careful how we compute.\n\nOne thing we could try is to loop through each possible \"smallest instance\" (the term on the right hand side) in increasing order. 
We then check all possible combinations of left hand sides.\n\nThis is computationally *very expensive* as we repeat a lot of calculations. We repeatedly recalculate combinations (a bad idea). We repeatedly recalculate the powers of the same number.\n\nInstead, let us try creating the list of all combinations of powers once.\n\n### Exercise 2b\n\n1. Construct a `numpy` array containing all integers in $1, \\dots, 144$ to the fifth power. \n2. Construct a list of all combinations of four elements from this array.\n3. Construct a list of sums of all these combinations.\n4. Loop over one list and check if the entry appears in the other list (ie, use the `in` keyword).\n\n#### Solution\n\n\n```python\nimport numpy as np\nimport itertools\n```\n\n\n```python\n\nnmax=145\nrange_to_power = np.arange(1, nmax)**5.0\nlhs_combinations = list(itertools.combinations(range_to_power, 4))\n```\n\nThen calculate the sums:\n\n\n```python\nlhs_sums = []\nfor lhs_terms in lhs_combinations:\n lhs_sums.append(np.sum(np.array(lhs_terms)))\n```\n\nFinally, loop through the sums and check to see if it matches any possible term on the RHS:\n\n\n```python\nfor i, lhs in enumerate(lhs_sums):\n if lhs in range_to_power:\n rhs_primitive = int(lhs**(0.2))\n lhs_primitive = (numpy.array(lhs_combinations[i])**(0.2)).astype(int)\n print(\"The LHS terms are {}.\".format(lhs_primitive))\n print(\"The RHS term is {}.\".format(rhs_primitive))\n```\n\n# Lorenz attractor\n\nThe Lorenz system is a set of ordinary differential equations which can be written\n\n\\begin{equation}\n \\frac{\\text{d} \\vec{v}}{\\text{d} \\vec{t}} = \\vec{f}(\\vec{v})\n\\end{equation}\n\nwhere the variables in the state vector $\\vec{v}$ are\n\n\\begin{equation}\n \\vec{v} = \\begin{pmatrix} x(t) \\\\ y(t) \\\\ z(t) \\end{pmatrix}\n\\end{equation}\n\nand the function defining the ODE is\n\n\\begin{equation}\n \\vec{f} = \\begin{pmatrix} \\sigma \\left( y(t) - x(t) \\right) \\\\ x(t) \\left( \\rho - z(t) \\right) - y(t) \\\\ x(t) y(t) - \\beta z(t) \\end{pmatrix}.\n\\end{equation}\n\nThe parameters $\\sigma, \\rho, \\beta$ are all real numbers.\n\n## Exercise 1\n\nWrite a function `dvdt(v, t, params)` that returns $\\vec{f}$ given $\\vec{v}, t$ and the parameters $\\sigma, \\rho, \\beta$.\n\n#### Solution\n\n\n```python\ndef dvdt(v, t, sigma, rho, beta):\n \"\"\"\n Define the Lorenz system.\n \n Parameters\n ----------\n \n v : list\n State vector\n t : float\n Time\n sigma : float\n Parameter\n rho : float\n Parameter\n beta : float\n Parameter\n \n Returns\n -------\n \n dvdt : list\n RHS defining the Lorenz system\n \"\"\"\n \n x, y, z = v\n \n return [sigma*(y-x), x*(rho-z)-y, x*y-beta*z]\n```\n\n## Exercise 2\n\nFix $\\sigma=10, \\beta=8/3$. Set initial data to be $\\vec{v}(0) = \\vec{1}$. 
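 (For these values of $\\sigma$ and $\\beta$ the non-trivial fixed points lose stability near $\\rho = \\sigma(\\sigma+\\beta+3)/(\\sigma-\\beta-1) \\approx 24.74$, so expect qualitatively different behaviour for the largest value of $\\rho$ used below.) 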
Using `scipy`, specifically the `odeint` function of `scipy.integrate`, solve the Lorenz system up to $t=100$ for $\\rho=13, 14, 15$ and $28$.\n\nPlot your results in 3d, plotting $x, y, z$.\n\n#### Solution\n\n\n```python\nimport numpy\nfrom scipy.integrate import odeint\n```\n\n\n```python\nv0 = [1.0, 1.0, 1.0]\nsigma = 10.0\nbeta = 8.0/3.0\nt_values = numpy.linspace(0.0, 100.0, 5000)\nrho_values = [13.0, 14.0, 15.0, 28.0]\nv_values = []\nfor rho in rho_values:\n params = (sigma, rho, beta)\n v = odeint(dvdt, v0, t_values, args=params)\n v_values.append(v)\n```\n\n\n```python\n%matplotlib inline\nfrom matplotlib import pyplot\nfrom mpl_toolkits.mplot3d.axes3d import Axes3D\n```\n\n\n```python\nfig = pyplot.figure(figsize=(12,6))\nfor i, v in enumerate(v_values):\n ax = fig.add_subplot(2,2,i+1,projection='3d')\n ax.plot(v[:,0], v[:,1], v[:,2])\n ax.set_xlabel(r'$x$')\n ax.set_ylabel(r'$y$')\n ax.set_zlabel(r'$z$')\n ax.set_title(r\"$\\rho={}$\".format(rho_values[i]))\npyplot.show()\n```\n\n## Exercise 3\n\nFix $\\rho = 28$. Solve the Lorenz system twice, up to $t=40$, using the two different initial conditions $\\vec{v}(0) = \\vec{1}$ and $\\vec{v}(0) = \\vec{1} + \\vec{10^{-5}}$.\n\nShow four plots. Each plot should show the two solutions on the same axes, plotting $x, y$ and $z$. Each plot should show $10$ units of time, ie the first shows $t \\in [0, 10]$, the second shows $t \\in [10, 20]$, and so on.\n\n#### Solution\n\n\n```python\nt_values = numpy.linspace(0.0, 40.0, 4000)\nrho = 28.0\nparams = (sigma, rho, beta)\nv_values = []\nv0_values = [[1.0,1.0,1.0],\n [1.0+1e-5,1.0+1e-5,1.0+1e-5]]\nfor v0 in v0_values:\n v = odeint(dvdt, v0, t_values, args=params)\n v_values.append(v)\n```\n\n\n```python\nfig = pyplot.figure(figsize=(12,6))\nline_colours = 'by'\nfor tstart in range(4):\n ax = fig.add_subplot(2,2,tstart+1,projection='3d')\n for i, v in enumerate(v_values):\n ax.plot(v[tstart*1000:(tstart+1)*1000,0], \n v[tstart*1000:(tstart+1)*1000,1], \n v[tstart*1000:(tstart+1)*1000,2], \n color=line_colours[i])\n ax.set_xlabel(r'$x$')\n ax.set_ylabel(r'$y$')\n ax.set_zlabel(r'$z$')\n ax.set_title(r\"$t \\in [{},{}]$\".format(tstart*10, (tstart+1)*10))\npyplot.show()\n```\n\nThis shows the *sensitive dependence on initial conditions* that is characteristic of chaotic behaviour.\n\n# Systematic ODE solving with sympy\n\nWe are interested in the solution of\n\n\\begin{equation}\n \\frac{\\text{d} y}{\\text{d} t} = e^{-t} - y^n, \\qquad y(0) = 1,\n\\end{equation}\n\nwhere $n > 1$ is an integer. The \"minor\" change from the above examples mean that `sympy` can only give the solution as a power series.\n\n## Exercise 1\n\nCompute the general solution as a power series for $n = 2$.\n\n#### Solution\n\n\n```python\nimport sympy\nsympy.init_printing()\n```\n\n\n```python\ny, t = sympy.symbols('y, t')\n```\n\n\n```python\nsympy.dsolve(sympy.diff(y(t), t) + y(t)**2 - sympy.exp(-t), y(t))\n```\n\n## Exercise 2\n\nInvestigate the help for the `dsolve` function to straightforwardly impose the initial condition $y(0) = 1$ using the `ics` argument. 
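 A sketch of the call for $n = 2$, assuming a reasonably recent `sympy` version (the solution below follows the same pattern):\n\n\n```python\n# impose y(0) = 1 through the ics dictionary (sketch)\nsympy.dsolve(sympy.diff(y(t), t) + y(t)**2 - sympy.exp(-t), y(t), ics={y(0): 1})\n```\n\n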
Using this, compute the specific solutions that satisfy the ODE for $n = 2, \\dots, 10$.\n\n#### Solution\n\n\n```python\nfor n in range(2, 11):\n ode_solution = sympy.dsolve(sympy.diff(y(t), t) + y(t)**n - sympy.exp(-t), y(t), \n ics = {y(0) : 1})\n print(ode_solution)\n```\n\n## Exercise 3\n\nUsing the `removeO` command, plot each of these solutions for $t \\in [0, 1]$.\n\n\n```python\n%matplotlib inline\n\nfor n in range(2, 11):\n ode_solution = sympy.dsolve(sympy.diff(y(t), t) + y(t)**n - sympy.exp(-t), y(t), \n ics = {y(0) : 1})\n sympy.plot(ode_solution.rhs.removeO(), (t, 0, 1));\n```\n\n# Twin primes\n\nA *twin prime* is a pair $(p_1, p_2)$ such that both $p_1$ and $p_2$ are prime and $p_2 = p_1 + 2$.\n\n## Exercise 1\n\nWrite a generator that returns twin primes. You can use the generators above, and may want to look at the [itertools](https://docs.python.org/3/library/itertools.html) module together with [its recipes](https://docs.python.org/3/library/itertools.html#itertools-recipes), particularly the `pairwise` recipe.\n\n#### Solution\n\nNote: we need to first pull in the generators introduced in that notebook\n\n\n```python\ndef all_primes(N):\n \"\"\"\n Return all primes less than or equal to N.\n \n Parameters\n ----------\n \n N : int\n Maximum number\n \n Returns\n -------\n \n prime : generator\n Prime numbers\n \"\"\"\n \n primes = []\n for n in range(2, N+1):\n is_n_prime = True\n for p in primes:\n if n%p == 0:\n is_n_prime = False\n break\n if is_n_prime:\n primes.append(n)\n yield n\n```\n\nNow we can generate pairs using the pairwise recipe:\n\n\n```python\nfrom itertools import tee\n\ndef pair_primes(N):\n \"Generate consecutive prime pairs, using the itertools recipe\"\n a, b = tee(all_primes(N))\n next(b, None)\n return zip(a, b)\n```\n\nWe could examine the results of the two primes directly. But an efficient solution is to use python's [filter function](https://docs.python.org/3/library/functions.html#filter). To do this, first define a function checking if the pair are *twin* primes:\n\n\n```python\ndef check_twin(pair):\n \"\"\"\n Take in a pair of integers, check if they differ by 2.\n \"\"\"\n p1, p2 = pair\n return p2-p1 == 2\n```\n\nThen use the `filter` function to define another generator:\n\n\n```python\ndef twin_primes(N):\n \"\"\"\n Return all twin primes\n \"\"\"\n return filter(check_twin, pair_primes(N))\n```\n\nNow check by finding the twin primes with $N<20$:\n\n\n```python\nfor tp in twin_primes(20):\n print(tp)\n```\n\n (3, 5)\n (5, 7)\n (11, 13)\n (17, 19)\n\n\n## Exercise 2\n\nFind how many twin primes there are with $p_2 < 1000$.\n\n#### Solution\n\nAgain there are many solutions, but the itertools recipes has the `quantify` pattern. Looking ahead to exercise 3 we'll define:\n\n\n```python\ndef pi_N(N):\n \"\"\"\n Use the quantify pattern from itertools to count the number of twin primes.\n \"\"\"\n return sum(map(check_twin, pair_primes(N)))\n```\n\n\n```python\npi_N(1000)\n```\n\n## Exercise 3\n\nLet $\\pi_N$ be the number of twin primes such that $p_2 < N$. Plot how $\\pi_N / N$ varies with $N$ for $N=2^k$ and $k = 4, 5, \\dots 16$. 
(You should use a logarithmic scale where appropriate!)\n\n#### Solution\n\nWe've now done all the hard work and can use the solutions above.\n\n\n```python\nimport numpy\nfrom matplotlib import pyplot\n%matplotlib inline\n```\n\n\n```python\nN = numpy.array([2**k for k in range(4, 17)])\ntwin_prime_fraction = numpy.array(list(map(pi_N, N))) / N\n```\n\n\n```python\npyplot.semilogx(N, twin_prime_fraction)\npyplot.xlabel(r\"$N$\")\npyplot.ylabel(r\"$\\pi_N / N$\")\npyplot.show()\n```\n\nFor those that have checked Wikipedia, you'll see [Brun's theorem](https://en.wikipedia.org/wiki/Twin_prime#Brun.27s_theorem) which suggests a specific scaling, that $\\pi_N$ is bounded by $C N / \\log(N)^2$. Checking this numerically on this data:\n\n\n```python\npyplot.semilogx(N, twin_prime_fraction * numpy.log(N)**2)\npyplot.xlabel(r\"$N$\")\npyplot.ylabel(r\"$\\pi_N \\times \\log(N)^2 / N$\")\npyplot.show()\n```\n\n# A basis for the polynomials\n\nIn the section on classes we defined a `Monomial` class to represent a polynomial with leading coefficient $1$. As the $N+1$ monomials $1, x, x^2, \\dots, x^N$ form a basis for the vector space of polynomials of order $N$, $\\mathbb{P}^N$, we can use the `Monomial` class to return this basis.\n\n## Exercise 1\n\nDefine a generator that will iterate through this basis of $\\mathbb{P}^N$ and test it on $\\mathbb{P}^3$.\n\n#### Solution\n\nAgain we first take the definition of the crucial class from the notes.\n\n\n```python\nclass Polynomial(object):\n \"\"\"Representing a polynomial.\"\"\"\n explanation = \"I am a polynomial\"\n \n def __init__(self, roots, leading_term):\n self.roots = roots\n self.leading_term = leading_term\n self.order = len(roots)\n \n def __repr__(self):\n string = str(self.leading_term)\n for root in self.roots:\n if root == 0:\n string = string + \"x\"\n elif root > 0:\n string = string + \"(x - {})\".format(root)\n else:\n string = string + \"(x + {})\".format(-root)\n return string\n \n def __mul__(self, other):\n roots = self.roots + other.roots\n leading_term = self.leading_term * other.leading_term\n return Polynomial(roots, leading_term)\n \n def explain_to(self, caller):\n print(\"Hello, {}. {}.\".format(caller,self.explanation))\n print(\"My roots are {}.\".format(self.roots))\n return None\n```\n\n\n```python\nclass Monomial(Polynomial):\n \"\"\"Representing a monomial, which is a polynomial with leading term 1.\"\"\"\n explanation = \"I am a monomial\"\n \n def __init__(self, roots):\n Polynomial.__init__(self, roots, 1)\n \n def __repr__(self):\n string = \"\"\n for root in self.roots:\n if root == 0:\n string = string + \"x\"\n elif root > 0:\n string = string + \"(x - {})\".format(root)\n else:\n string = string + \"(x + {})\".format(-root)\n return string\n```\n\nNow we can define the first basis:\n\n\n```python\ndef basis_pN(N):\n \"\"\"\n A generator for the simplest basis of P^N.\n \"\"\"\n \n for n in range(N+1):\n yield Monomial(n*[0])\n```\n\nThen test it on $\\mathbb{P}^N$:\n\n\n```python\nfor poly in basis_pN(3):\n print(poly)\n```\n\n \n x\n xx\n xxx\n\n\nThis looks horrible, but is correct. To really make this look good, we need to improve the output. 
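 In particular, each basis element here is built from $n$ zero roots, so for example `Monomial([0, 0, 0])` prints as `xxx`; repeated zero roots should instead be collapsed into a power of $x$, and the degree-0 element should print as `1`. 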
If we use\n\n\n```python\nclass Monomial(Polynomial):\n \"\"\"Representing a monomial, which is a polynomial with leading term 1.\"\"\"\n explanation = \"I am a monomial\"\n \n def __init__(self, roots):\n Polynomial.__init__(self, roots, 1)\n \n def __repr__(self):\n if len(self.roots):\n string = \"\"\n n_zero_roots = len(self.roots) - numpy.count_nonzero(self.roots)\n if n_zero_roots == 1:\n string = \"x\"\n elif n_zero_roots > 1:\n string = \"x^{}\".format(n_zero_roots)\n else: # Monomial degree 0.\n string = \"1\"\n for root in self.roots:\n if root > 0:\n string = string + \"(x - {})\".format(root)\n elif root < 0:\n string = string + \"(x + {})\".format(-root)\n return string\n```\n\nthen we can deal with the uglier cases, and re-running the test we get\n\n\n```python\nfor poly in basis_pN(3):\n print(poly)\n```\n\n 1\n x\n x^2\n x^3\n\n\nAn even better solution would be to use the `numpy.unique` function as in [this stackoverflow answer](http://stackoverflow.com/questions/10741346/numpy-most-efficient-frequency-counts-for-unique-values-in-an-array) (the second one!) to get the frequency of all the roots.\n\n## Exercise 2\n\nAn alternative basis is given by the monomials\n\n\\begin{align}\n p_0(x) &= 1, \\\\ p_1(x) &= 1-x, \\\\ p_2(x) &= (1-x)(2-x), \\\\ \\dots & \\quad \\dots, \\\\ p_N(x) &= \\prod_{n=1}^N (n-x).\n\\end{align}\n\nDefine a generator that will iterate through this basis of $\\mathbb{P}^N$ and test it on $\\mathbb{P}^4$.\n\n#### Solution\n\n\n```python\ndef basis_pN_variant(N):\n \"\"\"\n A generator for the 'sum' basis of P^N.\n \"\"\"\n \n for n in range(N+1):\n yield Monomial(range(n+1))\n```\n\n\n```python\nfor poly in basis_pN_variant(4):\n print(poly)\n```\n\n x\n x(x - 1)\n x(x - 1)(x - 2)\n x(x - 1)(x - 2)(x - 3)\n x(x - 1)(x - 2)(x - 3)(x - 4)\n\n\nI am too lazy to work back through the definitions and flip all the signs; it should be clear how to do this!\n\n## Exercise 3\n\nUse these generators to write another generator that produces a basis of $\\mathbb{P^3} \\times \\mathbb{P^4}$.\n\n#### Solution\n\nHopefully by now you'll be aware of how useful `itertools` is!\n\n\n```python\nfrom itertools import product\n```\n\n\n```python\ndef basis_product():\n \"\"\"\n Basis of the product space\n \"\"\"\n yield from product(basis_pN(3), basis_pN_variant(4))\n```\n\n\n```python\nfor p1, p2 in basis_product():\n print(\"Basis element is ({}) X ({}).\".format(p1, p2))\n```\n\n Basis element is (1) X (x).\n Basis element is (1) X (x(x - 1)).\n Basis element is (1) X (x(x - 1)(x - 2)).\n Basis element is (1) X (x(x - 1)(x - 2)(x - 3)).\n Basis element is (1) X (x(x - 1)(x - 2)(x - 3)(x - 4)).\n Basis element is (x) X (x).\n Basis element is (x) X (x(x - 1)).\n Basis element is (x) X (x(x - 1)(x - 2)).\n Basis element is (x) X (x(x - 1)(x - 2)(x - 3)).\n Basis element is (x) X (x(x - 1)(x - 2)(x - 3)(x - 4)).\n Basis element is (x^2) X (x).\n Basis element is (x^2) X (x(x - 1)).\n Basis element is (x^2) X (x(x - 1)(x - 2)).\n Basis element is (x^2) X (x(x - 1)(x - 2)(x - 3)).\n Basis element is (x^2) X (x(x - 1)(x - 2)(x - 3)(x - 4)).\n Basis element is (x^3) X (x).\n Basis element is (x^3) X (x(x - 1)).\n Basis element is (x^3) X (x(x - 1)(x - 2)).\n Basis element is (x^3) X (x(x - 1)(x - 2)(x - 3)).\n Basis element is (x^3) X (x(x - 1)(x - 2)(x - 3)(x - 4)).\n\n\nI've cheated here as I haven't introduced the `yield from` syntax (which returns an iterator from a generator). 
We could write this out instead as\n\n\n```python\ndef basis_product_long_form():\n \"\"\"\n Basis of the product space (without using yield_from)\n \"\"\"\n prod = product(basis_pN(3), basis_pN_variant(4))\n yield next(prod)\n```\n\n\n```python\nfor p1, p2 in basis_product():\n print(\"Basis element is ({}) X ({}).\".format(p1, p2))\n```\n\n Basis element is (1) X (x).\n Basis element is (1) X (x(x - 1)).\n Basis element is (1) X (x(x - 1)(x - 2)).\n Basis element is (1) X (x(x - 1)(x - 2)(x - 3)).\n Basis element is (1) X (x(x - 1)(x - 2)(x - 3)(x - 4)).\n Basis element is (x) X (x).\n Basis element is (x) X (x(x - 1)).\n Basis element is (x) X (x(x - 1)(x - 2)).\n Basis element is (x) X (x(x - 1)(x - 2)(x - 3)).\n Basis element is (x) X (x(x - 1)(x - 2)(x - 3)(x - 4)).\n Basis element is (x^2) X (x).\n Basis element is (x^2) X (x(x - 1)).\n Basis element is (x^2) X (x(x - 1)(x - 2)).\n Basis element is (x^2) X (x(x - 1)(x - 2)(x - 3)).\n Basis element is (x^2) X (x(x - 1)(x - 2)(x - 3)(x - 4)).\n Basis element is (x^3) X (x).\n Basis element is (x^3) X (x(x - 1)).\n Basis element is (x^3) X (x(x - 1)(x - 2)).\n Basis element is (x^3) X (x(x - 1)(x - 2)(x - 3)).\n Basis element is (x^3) X (x(x - 1)(x - 2)(x - 3)(x - 4)).\n\n\n# Anscombe's quartet\n\nFour separate datasets are given:\n\n| x | y | x | y | x | y | x | y |\n|------|-------|------|------|------|-------|------|-------|\n| 10.0 | 8.04 | 10.0 | 9.14 | 10.0 | 7.46 | 8.0 | 6.58 |\n| 8.0 | 6.95 | 8.0 | 8.14 | 8.0 | 6.77 | 8.0 | 5.76 |\n| 13.0 | 7.58 | 13.0 | 8.74 | 13.0 | 12.74 | 8.0 | 7.71 |\n| 9.0 | 8.81 | 9.0 | 8.77 | 9.0 | 7.11 | 8.0 | 8.84 |\n| 11.0 | 8.33 | 11.0 | 9.26 | 11.0 | 7.81 | 8.0 | 8.47 |\n| 14.0 | 9.96 | 14.0 | 8.10 | 14.0 | 8.84 | 8.0 | 7.04 |\n| 6.0 | 7.24 | 6.0 | 6.13 | 6.0 | 6.08 | 8.0 | 5.25 |\n| 4.0 | 4.26 | 4.0 | 3.10 | 4.0 | 5.39 | 19.0 | 12.50 |\n| 12.0 | 10.84 | 12.0 | 9.13 | 12.0 | 8.15 | 8.0 | 5.56 |\n| 7.0 | 4.82 | 7.0 | 7.26 | 7.0 | 6.42 | 8.0 | 7.91 |\n| 5.0 | 5.68 | 5.0 | 4.74 | 5.0 | 5.73 | 8.0 | 6.89 |\n\n## Exercise 1\n\nUsing standard `numpy` operations, show that each dataset has the same mean and standard deviation, to two decimal places.\n\n#### Solution\n\n\n```python\nimport numpy\n```\n\n\n```python\nset1_x = numpy.array([10.0, 8.0, 13.0, 9.0, 11.0, 14.0, 6.0, 4.0, 12.0, 7.0, 5.0])\nset1_y = numpy.array([8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68])\nset2_x = numpy.array([10.0, 8.0, 13.0, 9.0, 11.0, 14.0, 6.0, 4.0, 12.0, 7.0, 5.0])\nset2_y = numpy.array([9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74])\nset3_x = numpy.array([10.0, 8.0, 13.0, 9.0, 11.0, 14.0, 6.0, 4.0, 12.0, 7.0, 5.0])\nset3_y = numpy.array([7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73])\nset4_x = numpy.array([8.0, 8.0, 8.0, 8.0, 8.0, 8.0, 8.0, 19.0, 8.0, 8.0, 8.0])\nset4_y = numpy.array([6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89])\n\ndata_x = set1_x, set2_x, set3_x, set4_x\ndata_y = set1_y, set2_y, set3_y, set4_y\n```\n\n\n```python\nprint(\"Results for x:\")\nfor x in data_x:\n print(\"Mean: {:.2f}. Variance {:.2f}. Standard deviation {:.2f}.\".format(numpy.mean(x),\n numpy.var(x),\n numpy.std(x)))\nprint(\"Results for y:\")\nfor data in data_y:\n print(\"Mean: {:.2f}. Variance {:.2f}. Standard deviation {:.2f}.\".format(numpy.mean(data),\n numpy.var(data),\n numpy.std(data)))\n```\n\n Results for x:\n Mean: 9.00. Variance 10.00. Standard deviation 3.16.\n Mean: 9.00. Variance 10.00. Standard deviation 3.16.\n Mean: 9.00. Variance 10.00. 
Standard deviation 3.16.\n Mean: 9.00. Variance 10.00. Standard deviation 3.16.\n Results for y:\n Mean: 7.50. Variance 3.75. Standard deviation 1.94.\n Mean: 7.50. Variance 3.75. Standard deviation 1.94.\n Mean: 7.50. Variance 3.75. Standard deviation 1.94.\n Mean: 7.50. Variance 3.75. Standard deviation 1.94.\n\n\n## Exercise 2\n\nUsing the standard `scipy` function, compute the linear regression of each data set and show that the slope and correlation coefficient match to two decimal places.\n\n\n```python\nfrom scipy import stats\n\nfor x, y in zip(data_x, data_y):\n slope, intercept, r_value, p_value, std_err = stats.linregress(x, y)\n print(\"Slope: {:.2f}. Correlation: {:.2f}.\".format(slope, r_value))\n```\n\n Slope: 0.50. Correlation: 0.82.\n Slope: 0.50. Correlation: 0.82.\n Slope: 0.50. Correlation: 0.82.\n Slope: 0.50. Correlation: 0.82.\n\n\n## Exercise 3\n\nPlot each dataset. Add the best fit line. Then look at the description of [Anscombe's quartet](https://en.wikipedia.org/wiki/Anscombe%27s_quartet), and consider in what order the operations in this exercise *should* have been done.\n\n\n```python\n%matplotlib inline\nfrom matplotlib import pyplot\n\nfit_x = numpy.linspace(2.0, 20.0)\nfig = pyplot.figure(figsize=(12,6))\nfor i in range(4):\n slope, intercept, r_value, p_value, std_err = stats.linregress(data_x[i], data_y[i])\n ax = fig.add_subplot(2,2,i+1)\n ax.scatter(data_x[i], data_y[i])\n ax.plot(fit_x, intercept + slope*fit_x)\n ax.set_xlim(2.0, 20.0)\n ax.set_xlabel(r'$x$')\n ax.set_ylabel(r'$y$')\npyplot.show()\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "48bc3f7ce7cf05b6b661c1380f5912704d13c16f", "size": 620417, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "ExercisesSolutions.ipynb", "max_stars_repo_name": "LaGuer/maths-with-python", "max_stars_repo_head_hexsha": "7bbba1e1e95d3c930a41405ebbb0dcd6735e5009", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "ExercisesSolutions.ipynb", "max_issues_repo_name": "LaGuer/maths-with-python", "max_issues_repo_head_hexsha": "7bbba1e1e95d3c930a41405ebbb0dcd6735e5009", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-05-03T20:04:19.000Z", "max_issues_repo_issues_event_max_datetime": "2019-05-03T20:04:19.000Z", "max_forks_repo_path": "ExercisesSolutions.ipynb", "max_forks_repo_name": "LaGuer/maths-with-python", "max_forks_repo_head_hexsha": "7bbba1e1e95d3c930a41405ebbb0dcd6735e5009", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 132.2568748668, "max_line_length": 188008, "alphanum_fraction": 0.8677486271, "converted": true, "num_tokens": 22768, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9284087965937711, "lm_q2_score": 0.8902942363098472, "lm_q1q2_score": 0.8265570005467956}} {"text": "## Modelo del rendimiento de una cuenta de ahorro\n> ___Inter\u00e9s compuesto continuo___.\nYa es sabido que para una tasa de inter\u00e9s nominal constante, si la frecuencia de capitalizaci\u00f3n aumenta, el monto compuesto resultante tambi\u00e9n aumenta. 
 When the frequency with which the interest is compounded grows without bound, the interest is said to be generated continuously, and interest computed in this way is called continuously compounded interest. When working with this type of interest, the compound amount does not tend to become infinitely large, as is sometimes thought; rather, it tends to approach a limiting value.\n\nReferences: \n- https://es.slideshare.net/tmateo14/inters-compuesto-continuo\n- https://es.wikipedia.org/wiki/Capitalizaci%C3%B3n_continua\n\n___\nThe phenomenon above can also be viewed as follows: at every instant of time $t$ a return proportional to the current amount $C(t)$ is obtained. In this case, the constant of proportionality is the continuously compounded interest rate $r$. A model representing this situation is the following first-order differential equation \n\n$$\\frac{d C(t)}{dt}=r\\; C(t),$$\n\nsubject to the initial condition (initial amount or capital) $C(0)=C_0$.\n\nThis is a linear first-order differential equation, for which the *analytical solution* can be computed.\n\n\n```python\nfrom sympy import *\n\n# To print in TeX format\nfrom sympy import init_printing; init_printing(use_latex='mathjax')\n\nt, r = symbols(\"t r\")\nC, f = map(Function, 'Cf')\n```\n\n\n```python\neqn = Eq(Derivative(C(t),t) - r*C(t), 0)\ndisplay(eqn)\ndsolve(eqn, C(t))\n```\n\n\n$$- r C{\\left (t \\right )} + \\frac{d}{d t} C{\\left (t \\right )} = 0$$\n\n\n\n\n\n$$C{\\left (t \\right )} = C_{1} e^{r t}$$\n\n\n\nwith $C_1=C_0$.\n\n___\nHow can we compute the *numerical solution*?\n\n\n```python\n# Libraries for numerical computation\nimport numpy as np\n# Libraries for numerical integration\nfrom scipy.integrate import odeint\n# Libraries for plotting\nimport matplotlib.pyplot as plt\n# Show the plots inline\n%matplotlib inline\n```\n\n\n```python\n# Continuous compounding model\ndef cap_continua(CC, tt, rr):\n    return rr * CC\n```\n\n\n```python\n# Initial capital\nC0 = 5000\n# Continuous interest rate\nrr = 0.0004\ntt = np.linspace(0, 3*360)\nCC = odeint(cap_continua, C0, tt, args = (rr,))\nplt.plot(tt, CC)\nplt.xlabel('$t$', fontsize = 18)\nplt.ylabel('$C(t)$', fontsize = 18)\nplt.show()\n```\n\nNote that the above is a continuous approximation of the discrete compound-interest model as the compounding frequency tends to infinity.\n\n\n```python\n\n```\n", "meta": {"hexsha": "ed25dc4861983bf093f080266da7056183ae8b45", "size": 19835, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "SimulacionO2017/Modulo 1/Clase7_ModeloAhorro.ipynb", "max_stars_repo_name": "jcmartinez67/SimulacionMatematica_O2017", "max_stars_repo_head_hexsha": "085394f939c37717acbbe19a90cad661b279fbee", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "SimulacionO2017/Modulo 1/Clase7_ModeloAhorro.ipynb", "max_issues_repo_name": "jcmartinez67/SimulacionMatematica_O2017", "max_issues_repo_head_hexsha": "085394f939c37717acbbe19a90cad661b279fbee", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "SimulacionO2017/Modulo 1/Clase7_ModeloAhorro.ipynb", 
"max_forks_repo_name": "jcmartinez67/SimulacionMatematica_O2017", "max_forks_repo_head_hexsha": "085394f939c37717acbbe19a90cad661b279fbee", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 94.4523809524, "max_line_length": 14510, "alphanum_fraction": 0.8447693471, "converted": true, "num_tokens": 765, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9284088064979618, "lm_q2_score": 0.8902942152065091, "lm_q1q2_score": 0.8265569897719146}} {"text": "# Linear-Feedback Shift Register (LFSR)\n\nIn an LFSR, the output from a standard shift register is fed back into its\ninput causing an endless cycle. The feedback bit is the result of a linear\ncombination of the shift register content and the feedback coefficients.\n\n\n\nThe foolowing are the equation ruling the LFSR block scheme:\n$$\n\\begin{align}\n b[t] = &\\; s_0[t] \\\\\n s_j[t] = &\\; s_{j+1}[t-1] \\\\\n s_{m-1}[t] = &\\; \\oplus_{j=0}^{m-1} p_{m-j} \\otimes s_{j}[t-1] \\\\\n\\end{align}\n$$\n\n**Example**:\n- LFSR length: 3\n- feeedback polynomial: $x^3 + x + 1$\n- initial state: 0b111\n- LFSR cycle:\n| state | output | feedback |\n|:---------:|:------:|:--------:|\n| 0b111 (7) | 1 | 0 |\n| 0b011 (3) | 1 | 1 |\n| 0b101 (5) | 1 | 0 |\n| 0b010 (2) | 0 | 0 |\n| 0b001 (1) | 1 | 1 |\n| 0b100 (4) | 0 | 1 |\n| 0b110 (6) | 0 | 1 |\n| 0b111 (7) | 1 | 0 |\n\n\n```python\nimport math\nfrom operator import xor\nfrom functools import reduce\nfrom itertools import compress\nfrom itertools import islice\n```\n\n## LFSR generator\n\nAs first step, we implement an LFSR as a generator.\n\n**Inputs**:\n - **Feedback Polynomial**: `list` of `int` representing the degrees of the non-zeros coefficients. Example: [12, 6, 4, 1, 0] represents the polynomial $x^{12}+x^6+x^4+x+1$.\n - **shift-register initial state** (optional, default all bits to 1): `int` or `list` of bits representing the LFSR initial state. Example: 0xA65 for 0b101001100101.\n \n**Yield**:\n- **Output bit**: `bool` representing the LFSR output bit\n\n**Template**:\n```python\ndef lfsr_generator(poly, state=None):\n ''' generator docstring '''\n \n # check inputs validity\n # define variables storing the internal state\n \n while True:\n # LFSR iteration:\n # - update state\n # - compute output\n yield output\n```\n\n### implentation with lists\n\nIn this first implementation both the shift register `state` and the feedback polynomial `poly` are `list` of `bool`. This is the most straightforward choice as it directly maps the LFSR block scheme.\n\n| variable/
    operation | definition | implementation |\n|:-:|:-:|:--|\n| feeedback polynomial | $$x^3 + x + 1$$ | `poly = [True, True, False, True]` |\n| initial state | 0b111 | `state = [True, True, True]` |\n| state update | $$s_j[t] = s_{j+1}[t-1]$$ | `state = state[1:] + [feedback]` |\n| output bit | $$s_0[t]$$ | `output = state[0]` |\n| feedback bit | $$\\oplus_{j=0}^{m-1} p_{m-j} \\otimes s_{j}[t]$$ | `feedback = reduce(xor, compress(state[::-1], poly[1:]))` |\n\nFor debugging purposes, a `verbose` flag is added as argument to enable the print of the internal state at each iteration.\n\n\n```python\ndef lfsr_generator(poly, state=None, verbose=False):\n '''\n Generator implementing a Linear Feedback Shift Register (LFSR)\n \n Parameters\n ----------\n poly: list of int,\n feedback polynomial expressed as list of integers corresponding to\n the degrees of the non-zero coefficients.\n state: int, optional (default=None),\n shift register initial state. If None, all bits of state are set to 1.\n verbose: bool, optional (default=False),\n If True, the internal state is printed at each iteration\n \n Return\n ------\n bool, LFSR output bit\n '''\n \n # ==== check inputs ====\n # ==> verbose\n if verbose:\n # define a function to print the lfsr internal state\n def print_lfsr(state, output, feedback):\n state_bits = ''.join(str(int(s)) for s in state)[::-1]\n state_int = int(state_bits, 2)\n print(f'{state_bits} ({state_int}) {int(output)} {int(feedback)}')\n \n # ==> poly\n length = max(poly) # LFSR length as the max degree of the polynomial\n poly = [i in poly for i in range(length+1)] # turn poly into a list of bool\n \n # ==> state\n if state is None: # default value for state is all ones (True)\n state = [True for _ in range(length)]\n else: # convert integer into list of bool\n state = [bool(int(s)) for s in (f'{state:0{length}b}')[::-1]]\n \n # ==== initial state ====\n # compute output bit and feedback bit\n output = state[0]\n feedback = reduce(xor, compress(state[::-1], poly[1:]))\n if verbose: # print initial state\n print(' state b fb')\n print_lfsr(state, output, feedback)\n \n # ==== infinite loop ====\n while True: \n state = state[1:] + [feedback] # update state \n output = state[0] # output bit\n feedback = reduce(xor, compress(state[::-1], poly[1:])) # feedback bit \n if verbose: # print current state\n print_lfsr(state, output, feedback)\n \n yield output # return current output\n```\n\nHere we check the generator functioning, by replication the example showed above where the feedback polynomial is $x^3+x+1$ and the initial state is `0x7`=`0b111`\n\n\n```python\npoly = [3, 1, 0] # feedback polynomial x^3 + x + 1\nstate = 0x7 # initial state\nniter = 7 # number of iterations\n\n# lfsr definition\nlfsr = lfsr_generator(poly, state, verbose=True) \n\n# just iter over the LFSR generator\nfor b in islice(lfsr, niter):\n # do nothing, just let the generator print its internal state\n pass\n```\n\n state b fb\n 111 (7) 1 0\n 011 (3) 1 1\n 101 (5) 1 0\n 010 (2) 0 0\n 001 (1) 1 1\n 100 (4) 0 1\n 110 (6) 0 1\n 111 (7) 1 0\n\n\n### implentation with integers\n\nIn this implementation both the shift register `state` and the feedback polynomial `poly` are `int`. With this choice, bit-wise logical operation, as well as bit-shift, are easy to perform, while XOR of multiple bits or reversing the bit order are less straightforward.\n\nFor efficiency reasons, it is choosen to store the bits of the state in reverse order. 
At each iteration, the computation of the feedback bit requires a bit-wise AND between the current state and the polynomial feedback. Since these quantities are defined with inverted index, it is convinient to keep stored the bits of either state or poly in reverse order. To be compliant with common implementation in other programming languages (like C/C++) we choose to store the bits of the state in reverse order.\n\n| variable/
operation | definition | implementation |\n|:-:|:-:|:--|\n| feedback polynomial | $x^3 + x + 1$ | `poly = 0b1011`, `length = 3` |\n| initial state | 0b111 | `state = 0b111` |\n| state update | $s_j[t] = s_{j+1}[t-1]$ | `statemask = (1 << length) - 1` <br> `state = ((state << 1) \\| feedback) & statemask` |\n| output bit | $s_0[t]$ | `outmask = 1 << (length-1)` <br>
    `output = bool(state & outmask)` |\n| feedback bit |   $\\oplus_{j=0}^{m-1} p_{m-j} \\otimes s_{j}[t] $   | `feedback = parity(state & poly)` |\n\nLet us define a function `reverse` to reverse the bit order of an integer\n\n\n```python\ndef reverse(x, nbit=None):\n ''' reverse the bit order of the input `x` (int) represented with `nbit` \n (int) bits (e.g., nbit: 5, x: 13=0b01101 -> output: 22=0b10110) '''\n if nbit is None:\n nbit = math.ceil(math.log2(x))\n return int(f'{x:0{nbit}b}'[::-1][:nbit], 2)\n```\n\n\n```python\nx, nbit = 0b1101011000101010, 16\ny = reverse(x, nbit)\nprint(f'{x:0{nbit}b} -> {y:0{nbit}b}')\n```\n\n 1101011000101010 -> 0101010001101011\n\n\nLet us also define a function `parity` that computes the parity bit for a given integer.\n\n\n```python\ndef parity(x):\n ''' compute the parity bit of an integer `x` (int) '''\n return bool(sum(int(b) for b in f'{x:b}') % 2)\n```\n\n\n```python\nx = 0b1101011010101010\ny = parity(x)\n\nprint(f'{x:0{nbit}b} -> parity bit: {y:b}')\n\n```\n\n 1101011010101010 -> parity bit: 1\n\n\nNow we can define the generator that implements the LFSR.\n\n\n```python\ndef lfsr_generator(poly, state=None, verbose=False):\n '''\n Generator implementing a Linear Feedback Shift Register (LFSR)\n \n Parameters\n ----------\n poly: list of int,\n feedback polynomial expressed as list of integers corresponding to\n the degrees of the non-zero coefficients.\n state: int, optional (default=None),\n shift register initial state. If None, all bits of state are set to 1.\n verbose: bool, optional (default=False),\n If True, the internal state is printed at each iteration\n \n Return\n ------\n bool, LFSR output bit\n '''\n \n # ==== check inputs ====\n # ==> verbose\n if verbose:\n def print_lfsr(state, output, feedback, length): \n print(f'{state:0{length}b} ({state})',\n f'{int(output)} {int(feedback)}')\n \n # ==> poly\n length = max(poly) # LFSR length as the max degree of the polynomial \n poly = sum([2**p for p in poly]) >> 1 # ignoring p0 (always 1)\n \n # ==> state\n outmask = 1 << (length-1) # mask to select the output bit from state\n statemask = (1 << length) - 1 # mask for state\n if state is None: # default value for state is all ones\n state = statemask\n state = reverse(state, length) # reverse the bit order of state\n \n # ==== initial state ====\n # compute output bit and feedback bit\n output = bool(state & outmask) # get output from current state\n feedback = parity(state & poly) # compute feedback bit as the parity bit\n if verbose: # print initial state\n print(' state b fb')\n print_lfsr(reverse(state, length), output, feedback, length)\n \n # ==== infinite loop ====\n while True:\n state = ((state << 1) | feedback) & statemask # update state\n output = bool(state & outmask) # get output\n feedback = parity(state & poly) # update feedback bit \n if verbose: # print current state\n _state = reverse(state, length)\n print_lfsr(_state, output, feedback, length)\n \n yield output\n```\n\nAs we did above, we check the implementation by replicating the reference example.\n\n\n```python\npoly = [3, 1, 0]\nstate = 0x7\nniter = 7\n\nlfsr = lfsr_generator(poly, state, verbose=True)\n\nfor b in islice(lfsr, niter):\n pass\n```\n\n state b fb\n 111 (7) 1 0\n 011 (3) 1 1\n 101 (5) 1 0\n 010 (2) 0 0\n 001 (1) 1 1\n 100 (4) 0 1\n 110 (6) 0 1\n 111 (7) 1 0\n\n\nNow, we can use the LFSR to generate a \"long\" (not really) sequence of bits.\n\n\n```python\nnbits = 1000\npoly = [5, 2, 0]\nlfsr = lfsr_generator(poly)\n\n# iter over LFSR and store its 
output in a list\nbits = [b for b in islice(lfsr, nbits)]\n\n# print the list of bool as string of 0s and 1s\nprint(''.join([str(int(b)) for b in bits])) \n```\n\n 1111001101001000010101110110001111100110100100001010111011000111110011010010000101011101100011111001101001000010101110110001111100110100100001010111011000111110011010010000101011101100011111001101001000010101110110001111100110100100001010111011000111110011010010000101011101100011111001101001000010101110110001111100110100100001010111011000111110011010010000101011101100011111001101001000010101110110001111100110100100001010111011000111110011010010000101011101100011111001101001000010101110110001111100110100100001010111011000111110011010010000101011101100011111001101001000010101110110001111100110100100001010111011000111110011010010000101011101100011111001101001000010101110110001111100110100100001010111011000111110011010010000101011101100011111001101001000010101110110001111100110100100001010111011000111110011010010000101011101100011111001101001000010101110110001111100110100100001010111011000111110011010010000101011101100011111001101001000010101110110001111100110100100001010111011000111110011\n\n\nWe can notice that the 31-bits sequence `1111001101001000010101110110001` is periodically repeated. This is compliant with an LFSR with length 5 and a primitive polynomial with degree 5 in the feedback.\n\n## LFSR Iterator\n\nImplementing the LFSR as generator does not allow the user to access to the internal state and variables. Indeed, we needed to code a internal print that was activate by the `verbose` flag for debug.\n\nTo solve this issue, we can implement the LFSR as an **iterator**, that is a class with the `__iter__` and `__next__` methods that allow to iter over it. Such solution allow the user to employ the LFSR with the same scheme used for the generator bu with the capability to access to the internal state and variables.\n\n**Inputs**:\n - **Feedback Polynomial**: `list` of `int` representing the degrees of the non-zeros coefficients. Example: [12, 6, 4, 1, 0] represents the polynomial $x^{12}+x^6+x^4+x+1$.\n - **shift-register initial state** (optional, default all bits to 1): `int` or `list` of bits representing the LFSR initial state. Example: 0xA65 for 0b101001100101.\n \n**Attributes**:\n- `poly`: (`list` of `int`) list of the polynomial coefficients;\n- `length`: (`int`) polynomial degree and length of the shift register;\n- `state`: (`int`) LFSR state;\n- `output`: (`bool`) output bit;\n- `feedback`: (`bool`) feedback bit.\n \n**Methods**:\n- `__init__`: class constructor;\n- `__iter__`: necessary to be an iterable;\n- `__next__`: update LFSR state and returns output bit;\n- `cycle`: returns a list of bool representing the full LFSR cycle ;\n- `run_steps`: execute N LFSR steps and returns the corresponding output as list of bool (N is a input parameter, default N=1);\n- `__str__`: return a string describing the LFSR class instance.\n\n\n**Template**:\n```python\nclass LFSR(object):\n ''' class docstring '''\n\n def __init__(self, poly, state=None):\n ''' constructor docstring '''\n ...\n self.poly = ... \n self.length = ... \n self.state = ... \n self.output = ... \n self.feedback = ...\n\n def __iter__(self): \n return self\n\n def __next__(self): \n ''' next docstring ''' \n ... \n return self.output\n\n def run_steps(self, N=1): \n ''' run_steps docstring ''' \n ... \n return list_of_bool\n\n def cycle(self, state=None): \n ''' cycle docstring ''' \n ... 
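\n        # hint (sketch): with a primitive feedback polynomial the full cycle\n        # has 2**m - 1 steps (m = register length), i.e. it visits every\n        # non-zero state exactly once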
\n        return list_of_bool\n```\n\nWe first define two functions to convert integers to and from what we will call their *sparse* representation, which consists of the list of indexes corresponding to the 1s in the integer's binary representation. For example, the integer 138 (`0b10001010`) is turned into the list `[1, 3, 7]`.\n\nThese functions are employed to easily pass from the integer representation of the LFSR feedback polynomial (used during computations) to its representation as the list of degrees of the non-zero coefficients (used as inputs and outputs).\n\nWe call it sparse because it is reminiscent of how sparse matrices are stored, i.e., as a list of triplets comprising the row and column indexes and the corresponding value.\n\n\n```python\ndef int_to_sparse(integer):\n    ''' transform an integer into the list of indexes corresponding to 1\n    in the binary representation (sparse representation)'''\n    sparse = [i for i, b in enumerate(f'{integer:b}'[::-1]) if bool(int(b))]\n    return sparse\n\ndef sparse_to_int(sparse):\n    ''' transform a list of indexes (sparse representation) into an integer whose\n    binary representation has 1s at the positions corresponding to the indexes and\n    0 elsewhere '''\n    integer = sum([2**index for index in sparse])\n    return integer\n```\n\n\n```python\ninteger = 0b10001010\n\nsparse = int_to_sparse(integer)\nprint(f'int to sparse: {integer} ({bin(integer)}) -> {sparse}')\n\ninteger = sparse_to_int(sparse)\nprint(f'sparse to int: {sparse} -> {integer} ({bin(integer)})')\n```\n\n    int to sparse: 138 (0b10001010) -> [1, 3, 7]\n    sparse to int: [1, 3, 7] -> 138 (0b10001010)\n\n\nHere is the LFSR implemented as an iterator. We choose to implement both the state and the feedback polynomial as integers.\n\nNote that:\n- in Python, there is no concept of **private** or **public** attributes and methods. Everything is public, and it is up to the user to make good use of the class. However, there is a convention (also supported by IDEs) that attributes starting with an underscore `_` are to be considered private:\n```python\nself.state   # this is public\nself._state  # this is considered private (actually, it is public)\n```\n\n\n- Python follows the [Uniform Access Principle](https://en.wikipedia.org/wiki/Uniform_access_principle), which states that *all services offered by a module should be available through a uniform notation, which does not betray whether they are implemented through storage or through computation*. \n    - This means that `lfsr.state` and `lfsr.state = 0` are much preferred over `lfsr.get_state()` and `lfsr.set_state(0)`. \n    - If you need to wrap the accesses to the attributes inside methods (to deny setting, to check the value before setting, or to return the value in a different format with respect to how it is stored internally), you can use the [decorator](https://wiki.python.org/moin/PythonDecorators) `@property`, which allows you to hide a method such as `lfsr.set_state(0)` behind the expression `lfsr.state = 0`.\n\nThese features help in writing very readable code. \n\n**Example**: let us assume that, internally, we want the LFSR state stored as an integer with bits in reverse order to allow an efficient computation of the LFSR iterations, but, externally, i.e., when the state is read, we want to return the value with bits in the correct order. 
We can define a private attribute `_state` for computations and a public attribute `state` that hides (i.e., invokes) a **getter method** returning the private attribute with the bit order inverted.\n```python\n    @property\n    def state(self):\n        return reverse(self._state, len(self))\n```\nThe same principle can be employed to let the attribute `state` invoke a **setter method** that performs the inverse operation:\n```python\n    @state.setter\n    def state(self, state):\n        self._state = reverse(state & self._statemask, len(self))\n```\nNote that the decorator now is `.setter`.\n\nMoreover, a setter can be used to make an attribute readable but not writable. For example, we may want the LFSR length to be a read-only attribute.\n```python\n    @property\n    def length(self):\n        return self._length\n    @length.setter\n    def length(self, val):\n        raise AttributeError('Denied')\n```\n\nThanks to these features, we choose:\n- `state` to be both readable and writable, but the setter and getter are defined in such a way that the state is internally stored with bits in reverse order.\n- `poly` to be read-only and internally stored as an integer.\n- `length` and `output` to be read-only as well.\n- `feedback` to be both readable and writable. The user can insert the state one bit at a time, and we will see that some stream ciphers exploit this possibility.\n\n\n\n\n```python\nclass LFSR(object):\n    '''\n    Class implementing a Linear Feedback Shift Register (LFSR)\n    \n    Attributes\n    ----------\n    poly: list of int,\n        feedback polynomial expressed as list of integers corresponding to\n        the degrees of the non-zero coefficients.\n    state: int,\n        state of the shift register.\n    output: bool,\n        last shift register output.\n    length: int,\n        length of the shift register as well as maximum degree of the feedback\n        polynomial.\n    \n    Methods\n    -------\n    run_steps(self, N=1)\n        Execute multiple LFSR steps\n    cycle(self, state=None)\n        Execute a full cycle.\n    '''\n    \n    def __init__(self, poly, state=None):\n        '''\n        Parameters\n        ----------\n        poly: list of int,\n            feedback polynomial expressed as list of integers corresponding to\n            the degrees of the non-zero coefficients.\n        state: int, optional (default=None)\n            shift register initial state.\n            If None, state is set to all ones.\n        '''\n        length = max(poly)\n        self._length = length\n        self._poly = sparse_to_int(poly) >> 1  # p0 is omitted (always 1)\n        \n        self._statemask = (1 << length) - 1\n        if state is None:\n            state = self._statemask\n        self.state = state\n        \n        self._outmask = 1 << (length-1)\n        self._output = bool(self._state & self._outmask)\n        \n        self._feedback = parity(self._state & self._poly) \n\n    # ==== state ====\n    @property\n    def state(self):\n        # state is re-reversed before being read\n        return reverse(self._state, len(self))\n    @state.setter\n    def state(self, state):\n        if not isinstance(state, int):\n            raise TypeError('input type is not supported')\n        # ensure seed is in the range [1, 2**m-1]\n        if (state < 1) or (state > len(self)):\n            state = 1 + state % (2**len(self)-2)\n        self._state = reverse(state & self._statemask, len(self))\n\n    # ==== length ====\n    @property\n    def length(self):\n        return self._length\n    @length.setter\n    def length(self, val):\n        raise AttributeError('Denied')\n    \n    # ==== poly ====\n    @property\n    def poly(self):\n        return int_to_sparse((self._poly << 1) | 1)[::-1]\n    @poly.setter\n    def poly(self, poly):\n        raise AttributeError('Denied')\n\n    # ==== output ====\n    @property\n    def output(self):\n        return self._output\n    @output.setter\n    def output(self, val):\n        raise AttributeError('Denied')\n\n    # ==== feedback 
====\n @property\n def feedback(self):\n return self._feedback\n @feedback.setter\n def feedback(self, feedback):\n self._feedback = bool(feedback)\n \n \n def __str__(self):\n poly = ' + '.join([\n (f'x^{d}' if d > 1 else ('x' if d==1 else '1')) \n for d in self.poly\n ])\n string = ', '.join([\n f'poly: \"{poly}\"',\n f'state: 0x{self.state:0{(self.length+1)//4}x}',\n f'output: {None if self.output is None else int(self.output)}'\n ])\n return string\n \n def __repr__(self):\n return f'LSFR({str(self)})'\n \n def __len__(self):\n return self.length\n \n def __iter__(self):\n return self\n \n def __next__(self):\n '''Execute a LFSR step and returns the output bit (bool)'''\n self._state = ((self._state << 1) | self._feedback) & self._statemask\n self._output = bool(self._state & self._outmask)\n self._feedback = parity(self._state & self._poly) \n return self._output\n \n def run_steps(self, n=1):\n '''\n Execute multiple LFSR steps.\n \n Parameters\n ----------\n n: int, optional (default=1)\n number of steps to execute.\n \n Output\n ------\n list of bool (len=n),\n LFSR output bit stream.\n '''\n return [next(self) for _ in range(n)]\n \n def cycle(self, state=None):\n '''\n Execute a full LFSR cycle (LFSR.len steps).\n \n Parameters\n ----------\n state: int or list of int or bools, optional (default=None)\n shift register state. If None, state is kept as is.\n \n Output\n ------\n list of bool (len=2**myLFSR.len - 1),\n LFSR output bit stream.\n '''\n if state is not None:\n self.state = state\n return self.run_steps(n=int(2**len(self)) - 1)\n\n```\n\nAgain, we check the implementation with the reference example. This time, we did not need to use a verbose flag to print the internal state, since all internal variable are accessible by the user.\n\n\n```python\npoly = [3, 1, 0]\nstate = 0x7\nniter = 7\n\ndef print_lfsr(lfsr):\n print(f'{lfsr.state:0{len(lfsr)}b} ({lfsr.state:d}) ',\n f'{int(lfsr.output):d} {int(lfsr.feedback):d}')\n\n# create and instance of the LFSR\nlfsr = LFSR(poly, state)\n\nprint('\\n state b fb')\nprint_lfsr(lfsr) # print initial state\nfor b in islice(lfsr, niter):\n print_lfsr(lfsr)\n```\n\n \n state b fb\n 010 (2) 0 0\n 001 (1) 1 1\n 100 (4) 0 1\n 110 (6) 0 1\n 111 (7) 1 0\n 011 (3) 1 1\n 101 (5) 1 0\n 010 (2) 0 0\n\n\nHere is a check for the implementation of `__str__` and `__repr__` methods.\nRemind that the goal of :\n- `__str__` is to provide a readable string\n- `__repr__` is to be unambiguous\n\n\n```python\nprint(lfsr) # print calls `__str__`\ndisplay(lfsr) # display calls `__repr__`\n```\n\n poly: \"x^3 + x + 1\", state: 0x2, output: 0\n\n\n\n LSFR(poly: \"x^3 + x + 1\", state: 0x2, output: 0)\n\n\nHere is a check for the implementation of the methods `cycle` and `run_steps`. Since the former calls the latter, we just check the former.\n\n\n```python\ncycle = lfsr.cycle()\nprint(cycle)\nprint(''.join(f'{int(b)}' for b in cycle))\n```\n\n [True, False, False, True, True, True, False]\n 1001110\n\n\n## Berlekamp-Massey Algorithm\n\nThe Berlekamp\u2013Massey algorithm is an algorithm that finds the shortest LFSR for a given binary sequence.\nThe input consists in the binary sequence and the output is the feedback polynomial characterizing the shortest LFSR able to generate that sequence.\n\n**Pseudocode**:\n> **Input** $b = [b_0, b_1, \\dots, b_N]$
    \n> $P(x) \\leftarrow 1, \\quad m \\leftarrow 0 $
    \n> $Q(x) \\leftarrow 1, \\quad r \\leftarrow 1 $
    \n> **for** $\\tau = 0, 1, \\dots, N-1 $
    \n> $\\qquad d \\leftarrow \\oplus_{j=0}^{m} p_j \\otimes b[\\tau-j] $
    \n> $\\qquad $ **if** $d = 1 $ **then**
    \n> $\\qquad \\qquad $ **if** $2m \\le \\tau $ **then**
    \n> $\\qquad \\qquad \\qquad R(x) \\leftarrow P(x) $
    \n> $\\qquad \\qquad \\qquad P(x) \\leftarrow P(x) + Q(x)x^r $
    \n> $\\qquad \\qquad \\qquad Q(x) \\leftarrow R(x) $
    \n> $\\qquad \\qquad \\qquad m \\leftarrow \\tau + 1 - m $
    \n> $\\qquad \\qquad \\qquad r \\leftarrow 0 $
    \n> $\\qquad \\qquad $ **else**
    \n> $\\qquad \\qquad \\qquad P(x) \\leftarrow P(x) + Q(x)x^r $
    \n> $\\qquad \\qquad $ **endif**
    \n> $\\qquad $ **endif**
    \n> $\\qquad r \\leftarrow r + 1 $
    \n> **endfor**
    \n> **Output** $P(x) $
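\n\nRead literally, the pseudocode above translates almost line by line into Python once $P(x)$ and $Q(x)$ are stored as integers. As a purely illustrative aid (this is not the notebook's implementation, which is developed below), a minimal sketch could look as follows; it reuses the `parity` and `int_to_sparse` helpers defined earlier, and the name `berlekamp_massey_sketch` is only used here.\n\n```python\ndef berlekamp_massey_sketch(bits):\n    ''' illustrative, minimal transcription of the pseudocode above '''\n    P, m = 0b1, 0  # current feedback polynomial P(x) and current LFSR length m\n    Q, r = 0b1, 1  # previous polynomial Q(x) and steps since the last length change\n    for t in range(len(bits)):\n        # discrepancy bit: parity of P ANDed with the last m+1 bits (reversed)\n        window = sum(int(b) << i for i, b in enumerate(bits[t - m:t + 1][::-1]))\n        d = parity(P & window)\n        if d:\n            if 2*m <= t:\n                P, Q = P ^ (Q << r), P\n                m = t + 1 - m\n                r = 0\n            else:\n                P = P ^ (Q << r)\n        r += 1\n    return int_to_sparse(P)[::-1]\n```\n\nOn the example sequence used below, `berlekamp_massey_sketch([1, 0, 1, 0, 0, 1, 1, 1])` returns `[3, 1, 0]`, i.e., $P(x) = 1 + x + x^3$.\n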
    \n\n**Template**:\n```python\ndef berlekamp_massey(b):\n ''' function docstring '''\n # algorithm implementation\n return poly\n```\n\n**Example**:\n\nInput: $ b = [1, 0, 1, 0, 0, 1, 1, 1] $\n\n| $\\tau$ | $b_\\tau$ | $d$ | $P(x)$ | $m$ | $Q(x)$ | $r$ |\n|:------:|:--------:|:---:|:---------------:|:---:|:-----------:|:---:|\n| $-$ | $-$ | $-$ | $ 1 $ | $0$ | $ 1 $ | $1$ |\n| $0$ | $1$ | $1$ | $ 1 + x $ | $1$ | $ 1 $ | $1$ |\n| $1$ | $0$ | $1$ | $ 1 $ | $1$ | $ 1 $ | $2$ |\n| $2$ | $1$ | $1$ | $ 1 + x^2 $ | $2$ | $ 1 $ | $1$ |\n| $3$ | $0$ | $0$ | $ 1 + x^2 $ | $2$ | $ 1 $ | $2$ |\n| $4$ | $0$ | $1$ | $ 1 $ | $3$ | $ 1 + x^2 $ | $1$ |\n| $5$ | $1$ | $1$ | $ 1 + x + x^3 $ | $3$ | $ 1 + x^2 $ | $2$ |\n| $6$ | $1$ | $0$ | $ 1 + x + x^3 $ | $3$ | $ 1 + x^2 $ | $3$ |\n| $7$ | $1$ | $0$ | $ 1 + x + x^3 $ | $3$ | $ 1 + x^2 $ | $4$ |\n\nOutput: $ P(x) = 1 + x + x^3 $\n\nWe first define a functions to transform a bit sequence (list of bool, list of 1/0, or a string composed by only '1'/'0') into an integer.\n\n\n```python\ndef bits_to_int(bits):\n ''' transform a bit sequence (str of 1/0 or list of bool) into an int '''\n integer = sum(1 << i for i, bit in enumerate(bits) if bool(int(bit)))\n return integer\n```\n\n\n```python\nbits = '101011010111101'\ninteger = bits_to_int(bits)\nbin(integer)\n```\n\n\n\n\n '0b101111010110101'\n\n\n\nWe choose to implement the Berlekamp-Massy algorithm by storing polynoimals $P(x)$ and $Q(x)$ as integers. Instead, the bit sequence $b$ is kept as list/string and at each iteration is transformed to an integer.\n\nNote that, from an implementation point of view, the main steps are:\n- the computation of the discrepancy bit \n$$d \\leftarrow \\oplus_{j=0}^{m} p_j \\otimes b[\\tau-j]$$\nThis operation is very similar to the combination of state and polynominal coefficients in the computation of the LFSR feedback bit.\n```python \nd = parity(P & bits_to_int(bits[t-m:t+1][::-1]))\n```\n\n\n- the update of the polynomial $P(x)$\n$$P(x) \\leftarrow P(x) + Q(x)x^r$$\n```python \nP = P ^ (Q << r)\n```\n - Assuming $Q(x) = \\sum_{i=0}^l q_i x^i$, then $Q(x)x^r = \\sum_{i=0}^l q_i x^{r+i}$. Therefore, if $Q(x)$ is represented as `[q0, q1, ..., ql]`, $Q(x)x^r$ is `[0, ..., 0, q0, q1, ..., ql]` where the number of 0s is $r$. With the integer representation, this is translated in a left shift of `Q` by `r` positions.\n - The $+$ operation in $GF(2^m)$ corresponds to the element-wise $+$ operation in $GF(2)$, that is the `xor`. With the integer representation the bit-wise xor is implemented with `^`.\n\nAs for the LFSR implemented as a generator, we make use of a `verbose` flag to print the internal variables for debugging purposes.\n\n\n```python\ndef berlekamp_massey(bits, verbose=False):\n '''\n Find the shortest LFSR for a given binary sequence.\n \n Parameters\n ----------\n bits: list/string of 0/1,\n stream of bits.\n \n Return\n ------\n list of int,\n linear feedback polynomial expressed as list of the exponents related\n to the non-zero coefficients\n '''\n \n # variables initialization\n P, m = 0x1, 0\n Q, r = 0x1, 1\n \n if verbose:\n from math import log2\n lenP = 1 + int(log2(len(b)+1))\n print(f' t b d {\"poly\":>{lenP}} m {\"Q\":>{lenP}} r')\n print(f' - - - {P:{lenP}b} 0 {Q:{lenP}b} 1')\n \n for t in range(len(bits)):\n \n # compute discrepancy\n d = parity(P & bits_to_int(bits[t-m:t+1][::-1]))\n \n if d:\n if 2*m <= t: # A \n P, Q = P^(Q< $\\tau (x_0 a + x_1 b)$\n\nI would like to find the best out of Matt's simple approximation. 
I used **numerical methods** to optimize the generalized model. The error would be computed at `np.linspace(1, 5, 1000)`.\n\n\n```python\nfrom scipy.optimize import minimize\n```\n\n\n```python\ndef perimeter_jcop(x, a, b=1):\n return tau * (x[0] * a + x[1] * b)\n\ndef error(x):\n space = np.linspace(1, 5, 1000)\n jcop = perimeter_jcop(x, space)\n bessel = perimeter_bessel(space)\n return (np.abs(1 - jcop / bessel) * 100).sum()\n```\n\n\n```python\nx_guess = (0, 0)\nbound = (-1.5, 1.5)\nsol = minimize(error, x_guess, bounds=(bound, bound))\n[find_fractional_approx(x) for x in sol.x]\n```\n\n\n\n\n [(10, 17), (22, 59)]\n\n\n\nInstead of using the optimal solution $\\tau (0.58786245 a + 0.373148 b)$, I would also like to have a neat solution, so I computed the fraction approximation for the solution algorithmically. Hence, resulted in \n\n\\begin{equation}\n\\tau \\left( \\frac{10}{17}a + \\frac{22}{59}b \\right)\n\\end{equation}\n\n# Let's see the error\n\n\n```python\nspace = np.linspace(1, 5, 100)\n\ngm = perimeter_gm(space)\nam = perimeter_am(space)\nrms = perimeter_rms(space)\nparker = perimeter_parker(space)\nbessel = perimeter_bessel(space)\njcop = perimeter_jcop((10/17, 22/59), space)\n\nplt.figure(figsize=(10, 5))\nplt.plot(space, np.abs(1 - gm / bessel) * 100, \"r-\", label=\"gmean\")\nplt.plot(space, np.abs(1 - am / bessel) * 100, \"g-\", label=\"amean\")\nplt.plot(space, np.abs(1 - rms / bessel) * 100, \"b-\", label=\"rms\")\nplt.plot(space, np.abs(1 - parker / bessel) * 100, \"k-\", label=\"parker\")\nplt.plot(space, np.abs(1 - jcop / bessel) * 100, \"m-\", label=\"jcop\")\nplt.legend()\n```\n\n# It's not as good at small and larger $a/b$ values\n\n\n```python\nspace = np.linspace(1, 75, 200)\n\ngm = perimeter_gm(space)\nam = perimeter_am(space)\nrms = perimeter_rms(space)\nparker = perimeter_parker(space)\nbessel = perimeter_bessel(space)\njcop = perimeter_jcop((10/17, 22/59), space)\n\nplt.figure(figsize=(10, 5))\nplt.plot(space, np.abs(1 - gm / bessel) * 100, \"r-\", label=\"gmean\")\nplt.plot(space, np.abs(1 - am / bessel) * 100, \"g-\", label=\"amean\")\nplt.plot(space, np.abs(1 - rms / bessel) * 100, \"b-\", label=\"rms\")\nplt.plot(space, np.abs(1 - parker / bessel) * 100, \"k-\", label=\"parker\")\nplt.plot(space, np.abs(1 - jcop / bessel) * 100, \"m-\", label=\"jcop\")\nplt.legend()\n```\n\n# Improvement with RMS\n\nAs we can see that out of all the mean computation (harmonic, arithmetic, geometric, and rms), rms performs the best. 
So let's include that in our model.\n\n\n\\begin{equation}\n\\tau \\left( x_0 a + x_1 \\sqrt{ \\left( \\frac{a^2+b^2}{2} \\right) } + x_2 b \\right)\n\\end{equation}\n\n\n```python\ndef perimeter_jcop(x, a, b=1):\n return tau * (x[0] * a + x[1] * np.sqrt((a**2 + b**2)/2) + x[2] * b)\n\ndef error(x):\n space = np.linspace(1, 5, 1000)\n jcop = perimeter_jcop(x, space)\n bessel = perimeter_bessel(space)\n return (np.abs(1 - jcop / bessel) * 100).sum()\n```\n\n\n```python\nx_guess = (0, 0, 0)\nbound = (-1.5, 1.5)\nsol = minimize(error, x_guess, bounds=(bound, bound, bound))\n[find_fractional_approx(x) for x in sol.x]\n```\n\n\n\n\n [(7, 45), (27, 41), (7, 37)]\n\n\n\nI would again used the fractinal approximation just for aesthetic\n\n\n```python\nspace = np.linspace(1, 5, 200)\n\ngm = perimeter_gm(space)\nam = perimeter_am(space)\nrms = perimeter_rms(space)\nparker = perimeter_parker(space)\nbessel = perimeter_bessel(space)\njcop = perimeter_jcop((7/45, 27/41, 7/37), space)\n\nplt.figure(figsize=(10, 5))\nplt.plot(space, np.abs(1 - gm / bessel) * 100, \"r-\", label=\"gmean\")\nplt.plot(space, np.abs(1 - am / bessel) * 100, \"g-\", label=\"amean\")\nplt.plot(space, np.abs(1 - rms / bessel) * 100, \"b-\", label=\"rms\")\nplt.plot(space, np.abs(1 - parker / bessel) * 100, \"k-\", label=\"parker\")\nplt.plot(space, np.abs(1 - jcop / bessel) * 100, \"m-\", label=\"jcop\")\nplt.legend()\n```\n\n# It also worked quite well on large $a/b$\n\n\n```python\nspace = np.linspace(1, 75, 200)\n\ngm = perimeter_gm(space)\nam = perimeter_am(space)\nrms = perimeter_rms(space)\nparker = perimeter_parker(space)\nbessel = perimeter_bessel(space)\njcop = perimeter_jcop((7/45, 27/41, 7/37), space)\n\nplt.figure(figsize=(10, 5))\nplt.plot(space, np.abs(1 - gm / bessel) * 100, \"r-\", label=\"gmean\")\nplt.plot(space, np.abs(1 - am / bessel) * 100, \"g-\", label=\"amean\")\nplt.plot(space, np.abs(1 - rms / bessel) * 100, \"b-\", label=\"rms\")\nplt.plot(space, np.abs(1 - parker / bessel) * 100, \"k-\", label=\"parker\")\nplt.plot(space, np.abs(1 - jcop / bessel) * 100, \"m-\", label=\"jcop\")\nplt.legend()\n```\n\n# So this is my submission for Ellipse's perimeter approximation\n\n\\begin{equation}\nC \\approx \\tau \\left( \\frac{7}{45} a + \\frac{27}{41} \\sqrt{\\frac{a^2+b^2}{2}} + \\frac{7}{37} b \\right)\n\\end{equation}\n", "meta": {"hexsha": "86d3b174271a9a9929c266dfc0ccb1eb018e1492", "size": 141203, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "writeup.ipynb", "max_stars_repo_name": "WiraDKP/ellipse_perimeter_estimation", "max_stars_repo_head_hexsha": "11e87ced417ce4e996bac43289971b103e3bc47f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "writeup.ipynb", "max_issues_repo_name": "WiraDKP/ellipse_perimeter_estimation", "max_issues_repo_head_hexsha": "11e87ced417ce4e996bac43289971b103e3bc47f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "writeup.ipynb", "max_forks_repo_name": "WiraDKP/ellipse_perimeter_estimation", "max_forks_repo_head_hexsha": "11e87ced417ce4e996bac43289971b103e3bc47f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 300.4319148936, "max_line_length": 29368, 
"alphanum_fraction": 0.9251644795, "converted": true, "num_tokens": 2041, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9032942145139149, "lm_q2_score": 0.9149009613738741, "lm_q1q2_score": 0.8264247452622392}} {"text": "
# Computational Seismology\n## Numerical derivatives based on the Fourier Transform
    \n\nSeismo-Live: http://seismo-live.org\n\n##### Authors:\n* Fabian Linder ([@fablindner](https://github.com/fablindner))\n* Heiner Igel ([@heinerigel](https://github.com/heinerigel))\n* David Vargas ([@dvargas](https://github.com/davofis))\n---\n\n## Basic Equations\nThe derivative of function $f(x)$ with respect to the spatial coordinate $x$ is calculated using the differentiation theorem of the Fourier transform:\n\n\\begin{equation}\n\\frac{d}{dx} f(x) = \\frac{1}{\\sqrt{2\\pi}} \\int_{-\\infty}^{\\infty} ik F(k) e^{ikx} dk\n\\end{equation}\n\nIn general, this formulation can be extended to compute the n\u2212th derivative of $f(x)$ by considering that $F^{(n)}(k) = D(k)^{n}F(k) = (ik)^{n}F(k)$. Next, the inverse Fourier transform is taken to return to physical space. \n\n\\begin{equation}\nf^{(n)}(x) = \\mathscr{F}^{-1}[(ik)^{n}F(k)] = \\frac{1}{\\sqrt{2\\pi}} \\int_{-\\infty}^{\\infty} (ik)^{n} F(k) e^{ikx} dk\n\\end{equation}\n\n\n\n```python\n# Import all necessary libraries, this is a configuration step for the exercise.\n# Please run it before the simulation code!\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# Show the plots in the Notebook.\nplt.switch_backend(\"nbagg\")\n```\n\n#### Exercise 1\n\nDefine a python function call \"fourier_derivative(f, dx)\" that compute the first derivative of a function $f$ using the Fourier transform properties. \n\n\n```python\n#################################################################\n# IMPLEMENT THE FOURIER DERIVATIVE METHOD HERE!\n#################################################################\n```\n\n#### Exercise 2\n\nCalculate the numerical derivative based on the Fourier transform to show that the derivative is exact. Define an arbitrary function (e.g. a Gaussian) and initialize its analytical derivative on the same spatial grid. Calculate the numerical derivative and the difference to the analytical solution. Vary the wavenumber content of the analytical function. Does it make a difference? Why is the numerical result not entirely exact?\n\n\n```python\n# Basic parameters\n# ---------------------------------------------------------------\nnx = 128\nx, dx = np.linspace(2*np.pi/nx, 2*np.pi, nx, retstep=True) \nsigma = 0.5\nxo = np.pi\n\n#################################################################\n# IMPLEMENT YOUR SOLUTION HERE!\n#################################################################\n\n```\n\n#### Exercise 3\n\nNow that the numerical derivative is available, we can visually inspect our results. Make a plot of both, the analytical and numerical derivatives together with the difference error. 
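\n\nIf you want to sanity-check your own implementation, one possible reference sketch is shown below (the names `fourier_derivative_sketch`, `g`, `dg_ana` and `dg_num` are illustrative only, not part of the exercise template). It applies the differentiation theorem from the Basic Equations section with NumPy's FFT and compares the result against the analytical derivative of a Gaussian on the same grid as above.\n\n```python\nimport numpy as np\n\ndef fourier_derivative_sketch(f, dx):\n    # transform to wavenumber domain, multiply by ik, transform back\n    k = 2 * np.pi * np.fft.fftfreq(f.size, d=dx)\n    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))\n\nnx = 128\nx, dx = np.linspace(2*np.pi/nx, 2*np.pi, nx, retstep=True)\nsigma, xo = 0.5, np.pi\ng = np.exp(-(x - xo)**2 / (2*sigma**2))   # Gaussian test function\ndg_ana = -(x - xo) / sigma**2 * g         # analytical derivative\ndg_num = fourier_derivative_sketch(g, dx)\nprint('max abs difference:', np.max(np.abs(dg_num - dg_ana)))\n```\n\nThe difference is small but not exactly zero: the Gaussian is only approximately periodic on the grid, and floating-point round-off accumulates in the forward and inverse FFTs.\n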
\n\n\n```python\n#################################################################\n# PLOT YOUR SOLUTION HERE!\n#################################################################\n```\n", "meta": {"hexsha": "97f8b509bb37d5a49ab29ec324f2485fb2c0676e", "size": 5602, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "pseudospectral/ps_derivative.ipynb", "max_stars_repo_name": "cheshirepezz/PDE", "max_stars_repo_head_hexsha": "75e829c4f52a570d2551574b97396f32cc9fb893", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2020-12-11T14:43:22.000Z", "max_stars_repo_stars_event_max_datetime": "2020-12-12T09:15:32.000Z", "max_issues_repo_path": "pseudospectral/ps_derivative.ipynb", "max_issues_repo_name": "cheshirepezz/PDE", "max_issues_repo_head_hexsha": "75e829c4f52a570d2551574b97396f32cc9fb893", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "pseudospectral/ps_derivative.ipynb", "max_forks_repo_name": "cheshirepezz/PDE", "max_forks_repo_head_hexsha": "75e829c4f52a570d2551574b97396f32cc9fb893", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-03-15T22:15:42.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-15T22:15:42.000Z", "avg_line_length": 32.3815028902, "max_line_length": 436, "alphanum_fraction": 0.5210639057, "converted": true, "num_tokens": 843, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9149009642742805, "lm_q2_score": 0.9032942106088968, "lm_q1q2_score": 0.8264247443094547}} {"text": "# Ecuaciones diferenciales\n\n## Preparaci\u00f3n\n\nProseguimos a activar el ambiente de trabajo y cargar los paquetes a utilizar.\n\n\n\n```julia\nimport Pkg; Pkg.activate(\".\"); Pkg.instantiate()\n\n```\n\n\n```julia\nusing DifferentialEquations\n\n```\n\n\n```julia\nusing Plots\n\n```\n\nLa siguiente configuraci\u00f3n por defecto se coloca para la versi\u00f3n espec\u00edfica de Julia, Plots y backend GR utilizada en construir este documento. Puede ser \u00f3ptimo cambiarlo a otra para uso personal:\n\n\n\n```julia\nbegin\n\tsize = 50\n\tPlots.default(size = (2600,2000),titlefontsize = size, tickfontsize = size, legendfontsize = size, guidefontsize = size, legendtitlefontsize = size, lw = 5)\nend\n\n```\n\n## Modelos de ecuaciones diferenciales ordinarias (ODEs) \n### Preliminares\nGracias a la gran eficiencia de m\u00e9todos iterativos y de \u00e1lgebra lineal, la forma preferencial de pensar en ecuaciones diferenciales es en su forma de *sistema de ecuaciones de primer \u00f3rden*. Esto siempre es posible de obtener mediante [reducci\u00f3n de \u00f3rden](https://en.wikipedia.org/wiki/Ordinary_differential_equation##Reduction_of_order).\n\nUna vez realizado, la forma que obtenemos, para ecuaciones aut\u00f3nomas (no dependientes de manera expl\u00edcita del tiempo), es:\n\n$$\nx'(t) = F(x(t))\n$$\n\ndonde $x: \\mathbb{R} \\rightarrow \\mathbb{R}^n$ es una trayectoria en el espacio para la cual deseamos resolver y obedece la din\u00e1mica del campo vectorial:\n\n$$F: \\mathbb{R}^n \\rightarrow \\mathbb{R}^n$$\n\nEsto define un sistema de $n$ ecuaciones a resolver. 
A continuaci\u00f3n veremos algunos ejemplos.\n\n### Ejemplo introductorio: La estructura general\n\nSe ejemplifica el procedimiento general de resoluci\u00f3n de ecuaciones diferenciales utilizando el paquete `DifferentialEquations` utilizando el ejemplo:\n\n$$\\frac{du}{dx} = f(u, p, t) = \\alpha u$$\n\ndonde $u$ es el observable din\u00e1mico a encontrar, y la forma general $f(u, p, t)$ describe la evoluci\u00f3n temporal de $u$ y puede depender de dicho observable: $u$, el tiempo param\u00e9trico que describe su din\u00e1mica: $t$, y par\u00e1metros potencialmente exteriores que caracterizan el contexto espec\u00edfico del fen\u00f3meno (ejemplo: constantes f\u00edsicas (masa, aceleraci\u00f3n debido a gravedad, permitividad, etc.), indicadores socio-econ\u00f3micos (tasas constantes, natalidad, migraci\u00f3n, etc.)).\n\nLos pasos se presentan a continuaci\u00f3n \n\n#### 1. Definici\u00f3n de la funci\u00f3n \n\n\n\n\n```julia\nf(u,p,t) = 1.01*u\n\n```\n\nComo descrito anteriormente, $f$ depende de $u, p, t$, pero solamente es obligatoria la dependencia de $u$ (salvo el caso trivial donde $f$ es constante y en dado caso no necesitamos resoluci\u00f3n num\u00e9rica), mientras que, en este caso, no tenemos par\u00e1metros movibles $p$ ni tampoco el tiempo $t$ expl\u00edcito, aunque por supuesto $u$ se asume ser funci\u00f3n del tiempo ($u = u(t)$).\n\n\n#### 2. Definici\u00f3n de las condiciones iniciales y de frontera\n\n\n\n```julia\nu\u2080 = 1/2; tspan_u = (0.0,1.0)\n\n```\n\nAqu\u00ed $u_0$ es el valor inicial tomado por la variable din\u00e1mica $u$. El tiempo inicial se define en el intervalo `tspan_u`, que es una tupla de un valor inicial y un valor final ($t_0$, $t_f$). \n\nEs este intervalo el cual ser\u00e1 particionado en una cadena discreta de puntos $\\{t_0, t_1, t_2, \\ldots, t_{n-1}, t_f\\}$ y obtener los valores de $u$ en dichos tiempos: $\\{u(t_0), u(t_1), u(t_2), \\ldots, u(t_{n-1}), u(t_f)\\}$.\n\n#### 3. Planteamiento del problema\n\n\n\n```julia\nprob_u = ODEProblem(f,u\u2080,tspan_u);\n\n```\n\nEste es un ejemplo de un problema planteado como una ecuaci\u00f3n diferencial ordinaria de primer orden. En su lugar, pudimos haber definido una de orden superior, una estoc\u00e1stica, integrodiferencial, etc.\n\nEjemplos de estos ser\u00e1n mostrados luego.\n\n#### 4. Resoluci\u00f3n y exploraci\u00f3n\n\n\n\n```julia\nsol_u = solve(prob_u, Tsit5(), reltol=1e-8, abstol=1e-8);\n\n```\n\nLa funci\u00f3n `solve` es una de las funciones m\u00e1s complejas (en cuanto a completitud y flexibilidad) del paquete, siendo la misma redefinida mediante multiple-dispatch para poder reaccionar sin aparente dificultad a cualquier tipo de problema planteado y permitiendo argumentos que personalicen el proceso de soluci\u00f3n para cada una de ellas.\n\nComo ejemplo, aqu\u00ed tenemos que el problema `prob_u` se resuelve mediante un **solver** llamado `Tsit5`. La lista de solvers incluidos en el paquete puede encontrarse en su documentaci\u00f3n [aqu\u00ed](https://diffeq.sciml.ai/stable/tutorials/ode_example/##Choosing-a-Solver-Algorithm), y de \u00e9stos se habla en mayor profundidad luego. \n\nAdem\u00e1s, se ilustra que podemos elegir un error relativo y error absoluto que queremos aceptar para que el solver lo garantice. 
Leer m\u00e1s sobre los tipos de errores de aproximaci\u00f3n [aqu\u00ed](https://en.wikipedia.org/wiki/Approximation_error).\n\nPosterior a resolver la ecuaci\u00f3n diferencial, podemos explorar su comportamiento de manera gr\u00e1fica:\n\n\n\n```julia\nbegin\n\tplot(sol_u,linewidth=5, title=\"Soluci\u00f3n aproximada al problema\",\n xaxis=\"Time (t)\", yaxis=\"u(t) (in \u03bcm)\",label=\"Aproximaci\u00f3n\")\n\tplot!(sol_u.t, t->0.5*exp(1.01t),lw=3,ls=:dash,label=\"Soluci\u00f3n exacta\")\nend\n\n```\n\n### P\u00e9ndulo ca\u00f3tico\nEl p\u00e9ndulo doble es un sistema muy famosamente estudiado por exhibir un comportamiento **ca\u00f3tico** (cuya definici\u00f3n matem\u00e1tica es rigurosa). Sus ecuaciones del movimiento son las siguientes:\n\n$$\\frac{d}{dt}\n\\begin{pmatrix}\n\\alpha \\\\ l_\\alpha \\\\ \\beta \\\\ l_\\beta\n\\end{pmatrix}=\n\\begin{pmatrix}\n2\\frac{l_\\alpha - (1+\\cos\\beta)l_\\beta}{3-\\cos 2\\beta} \\\\\n-2\\sin\\alpha - \\sin(\\alpha + \\beta) \\\\\n2\\frac{-(1+\\cos\\beta)l_\\alpha + (3+2\\cos\\beta)l_\\beta}{3-\\cos2\\beta}\\\\\n-\\sin(\\alpha+\\beta) - 2\\sin(\\beta)\\frac{(l_\\alpha-l_\\beta)l_\\beta}{3-\\cos2\\beta} + 2\\sin(2\\beta)\\frac{l_\\alpha^2-2(1+\\cos\\beta)l_\\alpha l_\\beta + (3+2\\cos\\beta)l_\\beta^2}{(3-\\cos2\\beta)^2}\n\\end{pmatrix}$$\n\nResolveremos este sistema utilizando los m\u00e9todos del paquetes `DifferentialEquations.jl`. \n\nEl movimiento generado por esta ecuaciones se ve similar a:\n\n\n\n\n```julia\nhtml\"\n

    \n\n

    \n\"\n\n```\n\n\n```julia\n## El prefijo const es opcional y no significa VALOR constante, si no tipo constante.\nbegin\n\tconst m\u2081, m\u2082, L\u2081, L\u2082, g = 1, 2, 1, 2, 9.81 \n\tinitial = [0, \u03c0/3, 0, 3pi/5]\n\ttspan = (0., 50.)\nend;\n\n```\n\nSe define una funci\u00f3n auxiliar para transformar de coordenadas polares a cartesianas:\n\n\n\n```julia\nfunction polar2cart(sol; dt=0.02, l1=L\u2081, l2=L\u2082, vars=(2,4))\n u = sol.t[1]:dt:sol.t[end]\n p1 = l1*map(x->x[vars[1]], sol.(u))\n p2 = l2*map(y->y[vars[2]], sol.(u))\n x1 = l1*sin.(p1)\n y1 = l1*-cos.(p1)\n (u, (x1 + l2*sin.(p2),\n y1 - l2*cos.(p2)))\nend\n\n```\n\n\n```julia\nfunction double_pendulum(xdot,x,p,t)\n xdot[1] = x[2]\n xdot[2] = - ((g*(2*m\u2081+m\u2082)*sin(x[1]) + m\u2082*(g*sin(x[1]-2*x[3]) + \n\t\t\t\t2*(L\u2082*x[4]^2+L\u2081*x[2]^2*cos(x[1]-x[3]))*sin(x[1]-x[3])))/\n\t\t (2*L\u2081*(m\u2081+m\u2082-m\u2082*cos(x[1]-x[3])^2)))\n xdot[3] = x[4]\n xdot[4] = (((m\u2081+m\u2082)*(L\u2081*x[2]^2+g*cos(x[1])) + \n\t\t\t L\u2082*m\u2082*x[4]^2*cos(x[1]-x[3]))*sin(x[1]-x[3]))/\n\t\t\t (L\u2082*(m\u2081+m\u2082-m\u2082*cos(x[1]-x[3])^2))\nend\n\n```\n\n\n```julia\ndouble_pendulum_problem = ODEProblem(double_pendulum, initial, tspan);\n\n```\n\n\n```julia\nsol = solve(double_pendulum_problem, Vern7(), abs_tol=1e-10, dt=0.05)\n\n```\n\n\n```julia\nbegin\n\tts, ps = polar2cart(sol, l1=L\u2081, l2=L\u2082, dt=0.01)\n\tplot(ps...)\nend\n\n```\n\n### 3. Sistema de H\u00e9non-Heiles \nEs un sistema din\u00e1mico que modela el movimiento de una estrella orbitando alrededor de su centro gal\u00e1ctico mientras yace restringido en un plano. \u00c9ste es un ejemplo de un sistema Hamiltoniano, y tiene la forma:\n\n$$\n\\begin{align}\n\\frac{d^2x}{dt^2}&=-\\frac{\\partial V}{\\partial x}\\\\\n\\frac{d^2y}{dt^2}&=-\\frac{\\partial V}{\\partial y}\n\\end{align}\n$$\n\ndonde\n\n$$V(x,y)={\\frac {1}{2}}(x^{2}+y^{2})+\\lambda \\left(x^{2}y-{\\frac {y^{3}}{3}}\\right).$$\n\nes conocido como el **potencial de H\u00e9non\u2013Heiles**. De \u00e9ste puede derivarse su Hamiltoniano:\n\n$$H={\\frac {1}{2}}(p_{x}^{2}+p_{y}^{2})+{\\frac {1}{2}}(x^{2}+y^{2})+\\lambda \\left(x^{2}y-{\\frac {y^{3}}{3}}\\right).$$\n\nEsta cantidad representa un invariante del sistema din\u00e1mico: Una cantidad conservada. En este caso, es la energ\u00eda total del sistema.\n\n\n\n\n```julia\nbegin\n\t## Par\u00e1metros\n\tinitial\u2082 = [0.,0.1,0.5,0]\n\ttspan\u2082 = (0,100.)\n\t## V: Potencial, T: Energ\u00eda cin\u00e9tica total, E: Energ\u00eda total\n\tV(x,y) = 1//2 * (x^2 + y^2 + 2x^2*y - 2//3 * y^3)\n\tE(x,y,dx,dy) = V(x,y) + 1//2 * (dx^2 + dy^2);\nend;\n\n```\n\nDefinimos el modelo en una funci\u00f3n:\n\n\n\n```julia\nfunction H\u00e9non_Heiles(du,u,p,t)\n x = u[1]\n y = u[2]\n dx = u[3]\n dy = u[4]\n du[1] = dx\n du[2] = dy\n du[3] = -x - 2x*y\n du[4] = y^2 - y -x^2\nend\n\n```\n\nResolvemos el problema:\n\n\n\n```julia\nbegin\n\tprob\u2082 = ODEProblem(H\u00e9non_Heiles, initial\u2082, tspan\u2082)\n\tsol\u2082 = solve(prob\u2082, Vern9(), abs_tol=1e-16, rel_tol=1e-16);\nend\n\n```\n\n\n```julia\nplot(sol\u2082, vars=(1,2), title = \"\u00d3rbita del sistema de H\u00e9non-Heiles\", xaxis = \"x\", yaxis = \"y\", leg=false)\n\n```\n\nParece estar correctamente resuelto pero... 
examinando la evoluci\u00f3n de la energ\u00eda total/Hamiltoniano podemos encontrar lo siguiente:\n\n\n\n```julia\nbegin\n\tenergy = map(x->E(x...), sol\u2082.u)\n\tplot(sol\u2082.t, energy .- energy[1], title = \"Cambio de la energ\u00eda en el tiempo\", xaxis = \"N\u00famero de iteraciones\", yaxis = \"Cambio en energ\u00eda\")\nend\n\n```\n\n\u00a1La energ\u00eda total cambia! Eso quiere decir que la ley de conservaci\u00f3n que esperamos que se cumpla f\u00edsicamente no parece cumplirse en nuestra simulaci\u00f3n. Esto delata un error en la resoluci\u00f3n de la ecuaci\u00f3n, espec\u00edficamente por el m\u00e9todo utilizado.\n\nPara evitar eso, podemos utilizar algo conocido como un **integrador simpl\u00e9tico** que considere la estructura de sistema Hamiltoniano que tiene nuestras ecuaciones.\n\n\u00c9ste lo implementamos a continuaci\u00f3n:\n\n\n\n```julia\nfunction HH_acceleration!(dv,v,u,p,t)\n x,y = u\n dx,dy = dv\n dv[1] = -x - 2x*y\n dv[2] = y^2 - y -x^2\nend;\n\n```\n\nDebemos ahora definir la condici\u00f3n inicial por separado, pues nuestro sistema, al ser Hamiltoniano, tiene una segmentaci\u00f3n natural en esos pares de variables.\n\n\n\n```julia\nbegin\n\tinitial_positions = [0.0,0.1]\n\tinitial_velocities = [0.5,0.0]\nend;\n\n```\n\nResolvemos el problema ahora como una ecuaci\u00f3n de segundo orden pero con estructura simpl\u00e9ctica detectada:\n\n\n\n```julia\nbegin\n\tprob\u2083 = SecondOrderODEProblem(HH_acceleration!, \n\t\t\t\t\t\t\t\t initial_velocities,\n\t\t\t\t\t\t\t\t initial_positions,\n\t\t\t\t\t\t\t\t tspan\u2082)\n\tsol\u2083 = solve(prob\u2083, KahanLi8(), dt=1/10)\nend;\n\n```\n\nAhora podemos observar c\u00f3mo la energ\u00eda se mantiene muy cercana a cero, oscilando solamente por errores de precisi\u00f3n num\u00e9rica pero sin existir una tendencia de crecimiento an\u00f3mala.\n\n\n\n```julia\nbegin\n\tenergy\u2082 = map(x->E(x[3], x[4], x[1], x[2]), sol\u2083.u)\n\tplot(sol\u2083.t, energy\u2082 .- energy\u2082[1], title = \"Cambio de la energ\u00eda en el tiempo\", xaxis = \"N\u00famero de iteraciones\", yaxis = \"Cambio en energ\u00eda\")\nend\n\n```\n\n## Ecuaciones diferenciales Estoc\u00e1sticas\n\n\n\n## Bibliograf\u00eda\n* [Documentaci\u00f3n de SciML](https://sciml.ai/) \n* [Tutoriales de SciML](https://github.com/SciML/SciMLTutorials.jl)\n* H\u00e9non, Michel (1983), \\\"Numerical exploration of Hamiltonian Systems\\\", in Iooss, G. (ed.), Chaotic Behaviour of Deterministic Systems, Elsevier Science Ltd, pp. 53\u2013170, ISBN 044486542X\n* Aguirre, Jacobo; Vallejo, Juan C.; Sanju\u00e1n, Miguel A. F. (2001-11-27). \\\"Wada basins and chaotic invariant sets in the H\u00e9non-Heiles system\\\". Physical Review E. American Physical Society (APS). 64 (6): 066208. doi:10.1103/physreve.64.066208. ISSN 1063-651X.\n* Steven Johnson. 18.335J Introduction to Numerical Methods . Spring 2019. Massachusetts Institute of Technology: MIT OpenCourseWare, https://ocw.mit.edu. 
License: Creative Commons BY-NC-SA.\n\n\n", "meta": {"hexsha": "449f81c57dd5db087202392ce8873a9af449990f", "size": 24196, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "_build/html/_sources/Ecuaciones_diferenciales.ipynb", "max_stars_repo_name": "galexbh/Intro-Julia-2021", "max_stars_repo_head_hexsha": "ec97166de13c5ea793d754750ee6191af26760cb", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2021-01-11T23:22:13.000Z", "max_stars_repo_stars_event_max_datetime": "2021-03-21T19:43:09.000Z", "max_issues_repo_path": "_build/html/_sources/Ecuaciones_diferenciales.ipynb", "max_issues_repo_name": "galexbh/Intro-Julia-2021", "max_issues_repo_head_hexsha": "ec97166de13c5ea793d754750ee6191af26760cb", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2021-01-13T18:16:13.000Z", "max_issues_repo_issues_event_max_datetime": "2021-01-15T22:18:25.000Z", "max_forks_repo_path": "_build/html/_sources/Ecuaciones_diferenciales.ipynb", "max_forks_repo_name": "galexbh/Intro-Julia-2021", "max_forks_repo_head_hexsha": "ec97166de13c5ea793d754750ee6191af26760cb", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 9, "max_forks_repo_forks_event_min_datetime": "2021-01-12T17:54:45.000Z", "max_forks_repo_forks_event_max_datetime": "2021-04-13T06:21:14.000Z", "avg_line_length": 39.7307060755, "max_line_length": 493, "alphanum_fraction": 0.4245329807, "converted": true, "num_tokens": 3787, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9032942014971871, "lm_q2_score": 0.9149009544128984, "lm_q1q2_score": 0.8264247270654135}} {"text": "# Fitting a Mixture Model with Gibbs Sampling\n\n\n```python\n%matplotlib inline\nimport pandas as pd\nimport numpy as np\nimport random\nimport matplotlib.pyplot as plt\n\nfrom scipy import stats\nfrom collections import namedtuple, Counter\n```\n\nSuppose we receive some data that looks like the following:\n\n\n```python\ndata = pd.Series.from_csv(\"clusters.csv\")\n_=data.hist(bins=20)\n```\n\n\n```python\ndata.size\n```\n\n\n\n\n 1000\n\n\n\nIt appears that these data exist in three separate clusters. We want to develop a method for finding these _latent_ clusters. One way to start developing a method is to attempt to describe the process that may have generated these data.\n\nFor simplicity and sanity, let's assume that each data point is generated independently of the other. Moreover, we will assume that within each cluster, the data points are identically distributed. In this case, we will assume each cluster is normally distributed and that each cluster has the same variance, $\\sigma^2$.\n\nGiven these assumptions, our data could have been generated by the following process. For each data point, randomly select 1 of 3 clusters from the distribution $\\text{Discrete}(\\pi_1, \\pi_2, \\pi_3)$. Each cluster $k$ corresponds to a parameter $\\theta_k$ for that cluster, sample a data point from $\\mathcal{N}(\\theta_k, \\sigma^2)$.\n\nEquivalently, we could consider these data to be generated from a probability distribution with this probability density function:\n\n$$\np(x_i \\,|\\, \\pi, \\theta_1, \\theta_2, \\theta_3, \\sigma)=\n \\sum_{k=1}^3 \\pi_k\\cdot\n \\frac{1}{\\sigma\\sqrt{2\\pi}}\n \\text{exp}\\left\\{\n \\frac{-(x_i-\\theta_k)^2}{2\\sigma^2}\n \\right\\}\n$$\n\nwhere $\\pi$ is a 3-dimensional vector giving the _mixing proportions_. 
In other words, $\\pi_k$ describes the proportion of points that occur in cluster $k$.\n\n\nThat is, _the probability distribution describing $x$ is a linear combination of normal distributions_.\n\nWe want to use this _generative_ model to formulate an algorithm for determining the particular parameters that generated the dataset above. The $\\pi$ vector is unknown to us, as is each cluster mean $\\theta_k$. \n\nWe would also like to know $z_i\\in\\{1, 2, 3\\}$, the latent cluster for each point. It turns out that introducing $z_i$ into our model will help us solve for the other values.\n\nThe joint distribution of our observed data (`data`) along with the assignment variables is given by:\n\n\\begin{align}\np(\\mathbf{x}, \\mathbf{z} \\,|\\, \\pi, \\theta_1, \\theta_2, \\theta_3, \\sigma)&=\n p(\\mathbf{z} \\,|\\, \\pi)\n p(\\mathbf{x} \\,|\\, \\mathbf{z}, \\theta_1, \\theta_2, \\theta_3, \\sigma)\\\\\n &= \\prod_{i=1}^N p(z_i \\,|\\, \\pi)\n \\prod_{i=1}^N p(x_i \\,|\\, z_i, \\theta_1, \\theta_2, \\theta_3, \\sigma) \\\\\n &= \\prod_{i=1}^N \\pi_{z_i}\n \\prod_{i=1}^N \n \\frac{1}{\\sigma\\sqrt{2\\pi}}\n \\text{exp}\\left\\{\n \\frac{-(x_i-\\theta_{z_i})^2}{2\\sigma^2}\n \\right\\}\\\\\n &= \\prod_{i=1}^N \n \\left(\n \\pi_{z_i}\n \\frac{1}{\\sigma\\sqrt{2\\pi}}\n \\text{exp}\\left\\{\n \\frac{-(x_i-\\theta_{z_i})^2}{2\\sigma^2}\n \\right\\}\n \\right)\\\\\n &=\n \\prod_i^n\n \\prod_k^K\n \\left(\n \\pi_k \n \\frac{1}{\\sigma\\sqrt{2\\pi}}\n \\text{exp}\\left\\{\n \\frac{-(x_i-\\theta_k)^2}{2\\sigma^2}\n \\right\\}\n \\right)^{\\delta(z_i, k)}\n\\end{align}\n\n### Keeping Everything Straight\n\nBefore moving on, we need to devise a way to keep all our data and parameters straight. Following ideas suggested by [Keith Bonawitz](http://people.csail.mit.edu/bonawitz/Composable%20Probabilistic%20Inference%20with%20Blaise%20-%20Keith%20Bonawitz%20PhD%20Thesis.pdf), let's define a \"state\" object to store all of this data. \n\nIt won't yet be clear why we are defining some components of `state`, however we will use each part eventually! As an attempt at clarity, I am using a trailing underscore in the names of members that are fixed. 
We will update the other parameters as we try to fit the model.\n\n\n```python\nSuffStat = namedtuple('SuffStat', 'theta N')\n\ndef update_suffstats(state):\n for cluster_id, N in Counter(state['assignment']).iteritems():\n points_in_cluster = [x \n for x, cid in zip(state['data_'], state['assignment'])\n if cid == cluster_id\n ]\n mean = np.array(points_in_cluster).mean()\n \n state['suffstats'][cluster_id] = SuffStat(mean, N)\n\ndef initial_state():\n num_clusters = 3\n alpha = 1.0\n cluster_ids = range(num_clusters)\n\n state = {\n 'cluster_ids_': cluster_ids,\n 'data_': data,\n 'num_clusters_': num_clusters,\n 'cluster_variance_': .01,\n 'alpha_': alpha,\n 'hyperparameters_': {\n \"mean\": 0,\n \"variance\": 1,\n },\n 'suffstats': [None, None, None],\n 'assignment': [random.choice(cluster_ids) for _ in data],\n 'pi': [alpha / num_clusters for _ in cluster_ids],\n 'cluster_means': [-1, 0, 1]\n }\n update_suffstats(state)\n return state\n\nstate = initial_state()\n```\n\n\n```python\nfor k, v in state.items():\n print(k)\n```\n\n num_clusters_\n suffstats\n data_\n cluster_means\n cluster_variance_\n cluster_ids_\n assignment\n pi\n alpha_\n hyperparameters_\n\n\n### Gibbs Sampling\n\nThe [theory of Gibbs sampling](https://en.wikipedia.org/wiki/Gibbs_sampling) tells us that given some data $\\bf y$ and a probability distribution $p$ parameterized by $\\gamma_1, \\ldots, \\gamma_d$, we can successively draw samples from the distribution by sampling from\n\n$$\\gamma_j^{(t)}\\sim p(\\gamma_j \\,|\\, \\gamma_{\\neg j}^{(t-1)})$$\n \nwhere $\\gamma_{\\neg j}^{(t-1)}$ is all current values of $\\gamma_i$ except for $\\gamma_j$. If we sample long enough, these $\\gamma_j$ values will be random samples from $p$. \n\nIn deriving a Gibbs sampler, it is often helpful to observe that \n\n$$\n p(\\gamma_j \\,|\\, \\gamma_{\\neg j})\n = \\frac{\n p(\\gamma_1,\\ldots,\\gamma_d)\n }{\n p(\\gamma_{\\neg j})\n } \\propto p(\\gamma_1,\\ldots,\\gamma_d).\n$$\n\nThe conditional distribution is proportional to the joint distribution. We will get a lot of mileage from this simple observation by dropping constant terms from the joint distribution (relative to the parameters we are conditioned on).\n\nThe $\\gamma$ values in our model are each of the $\\theta_k$ values, the $z_i$ values, and the $\\pi_k$ values. Thus, we need to derive the conditional distributions for each of these.\n\nMany derivation of Gibbs samplers that I have seen rely on a lot of handwaving and casual appeals to conjugacy. I have tried to add more mathematical details here. I would gladly accept feedback on how to more clearly present the derivations! I have also tried to make the derivations more concrete by immediately providing code to do the computations in this specific case. 
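\n\nBefore specializing this recipe to the mixture model, it can help to see it on a tiny standalone example. For a bivariate standard normal with correlation `rho`, both full conditionals are one-dimensional normals, so a Gibbs sweep just alternates two draws (the names `rho`, `gamma1`, `gamma2` and `draws` below are illustrative only):\n\n```python\nimport numpy as np\n\nrho = 0.8\ngamma1, gamma2 = 0.0, 0.0\ndraws = []\nfor t in range(5000):\n    # p(gamma1 | gamma2) is normal with mean rho*gamma2 and variance 1 - rho**2\n    gamma1 = np.random.normal(rho * gamma2, np.sqrt(1 - rho**2))\n    # p(gamma2 | gamma1) is normal with mean rho*gamma1 and variance 1 - rho**2\n    gamma2 = np.random.normal(rho * gamma1, np.sqrt(1 - rho**2))\n    draws.append((gamma1, gamma2))\n\n# after a short burn-in, the pairs behave like samples from the joint distribution\nprint(np.corrcoef(np.array(draws[500:]).T))\n```\n\nThe same pattern, sampling each unknown from its conditional given everything else and then repeating, is exactly what we derive next for the assignments, the mixture weights, and the cluster means.\n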
\n\n#### Conditional Distribution of Assignment\n\nFor berevity, we will use\n\n$$\np(z_i=k \\,|\\, \\cdot)=\np(z_i=k \\,|\\, \n z_{\\neg i}, \\pi,\n \\theta_1, \\theta_2, \\theta_3, \\sigma, \\bf x\n ).\n $$\n \nBecause cluster assignements are conditionally independent given the cluster weights and paramters,\n\n\\begin{align}\n p(z_i=k \\,|\\, \\cdot) \n &\\propto\n \\prod_i^n\n \\prod_k^K\n \\left(\n \\pi_k \n \\frac{1}{\\sigma\\sqrt{2\\pi}}\n \\text{exp}\\left\\{\n \\frac{-(x_i-\\theta_k)^2}{2\\sigma^2}\n \\right\\}\n \\right)^{\\delta(z_i, k)} \\\\\n &\\propto\n \\pi_k \\cdot\n \\frac{1}{\\sigma\\sqrt{2\\pi}} \n \\text{exp}\\left\\{\n \\frac{-(x_i-\\theta_k)^2}{2\\sigma^2}\n \\right\\}\n\\end{align}\n\nThis equation intuitively makes sense: point $i$ is more likely to be in cluster $k$ if $k$ is itself probable ($\\pi_k\\gg 0$) and $x_i$ is close to the mean of the cluster $\\theta_k$.\n\nFor each data point $i$, we can compute $p(z_i=k \\,|\\, \\cdot)$ for each of cluster $k$. These values are the unnormalized parameters to a discrete distribution from which we can sample assignments.\n\nBelow, we define functions for doing this sampling. `sample_assignment` will generate a sample from the posterior assignment distribution for the specified data point. `update_assignment` will sample from the posterior assignment for each data point and update the `state` object.\n\n\n```python\ndef log_assignment_score(data_id, cluster_id, state):\n \"\"\"log p(z_i=k \\,|\\, \\cdot) \n \n We compute these scores in log space for numerical stability.\n \"\"\"\n x = state['data_'][data_id]\n theta = state['cluster_means'][cluster_id]\n var = state['cluster_variance_']\n log_pi = np.log(state['pi'][cluster_id])\n return log_pi + stats.norm.logpdf(x, theta, var)\n\n\ndef assigment_probs(data_id, state):\n \"\"\"p(z_i=cid \\,|\\, \\cdot) for cid in cluster_ids\n \"\"\"\n scores = [log_assignment_score(data_id, cid, state) for cid in state['cluster_ids_']]\n scores = np.exp(np.array(scores))\n return scores / scores.sum()\n\n\ndef sample_assignment(data_id, state):\n \"\"\"Sample cluster assignment for data_id given current state\n \n cf Step 1 of Algorithm 2.1 in Sudderth 2006\n \"\"\"\n p = assigment_probs(data_id, state)\n return np.random.choice(state['cluster_ids_'], p=p)\n\n\ndef update_assignment(state):\n \"\"\"Update cluster assignment for each data point given current state \n \n cf Step 1 of Algorithm 2.1 in Sudderth 2006\n \"\"\"\n for data_id, x in enumerate(state['data_']):\n state['assignment'][data_id] = sample_assignment(data_id, state)\n update_suffstats(state)\n```\n\n#### Conditional Distribution of Mixture Weights\n\nWe can similarly derive the conditional distributions of mixture weights by an application of Bayes theorem. 
Instead of updating each component of $\\pi$ separately, we update them together (this is called blocked Gibbs).\n\n\\begin{align}\np(\\pi \\,|\\, \\cdot)&=\np(\\pi \\,|\\, \n \\bf{z}, \n \\theta_1, \\theta_2, \\theta_3,\n \\sigma, \\mathbf{x}, \\alpha\n )\\\\\n&\\propto\np(\\pi \\,|\\, \n \\mathbf{x}, \n \\theta_1, \\theta_2, \\theta_3,\n \\sigma, \\alpha\n )\np(\\bf{z}\\ \\,|\\, \n \\mathbf{x}, \n \\theta_1, \\theta_2, \\theta_3,\n \\sigma, \\pi, \\alpha\n )\\\\\n&=\np(\\pi \\,|\\, \n \\alpha\n )\np(\\bf{z}\\ \\,|\\, \n \\mathbf{x}, \n \\theta_1, \\theta_2, \\theta_3,\n \\sigma, \\pi, \\alpha\n )\\\\\n&=\n\\prod_{i=1}^K \\pi_k^{\\alpha/K - 1}\n\\prod_{i=1}^K \\pi_k^{\\sum_{i=1}^N \\delta(z_i, k)} \\\\\n&=\\prod_{k=1}^3 \\pi_k^{\\alpha/K+\\sum_{i=1}^N \\delta(z_i, k)-1}\\\\\n&\\propto \\text{Dir}\\left(\n \\sum_{i=1}^N \\delta(z_i, 1)+\\alpha/K, \n \\sum_{i=1}^N \\delta(z_i, 2)+\\alpha/K,\n \\sum_{i=1}^N \\delta(z_i, 3)+\\alpha/K\n \\right)\n\\end{align}\n\nHere are Python functions to sample from the mixture weights given the current `state` and to update the mixture weights in the `state` object.\n\n\n```python\ndef sample_mixture_weights(state):\n \"\"\"Sample new mixture weights from current state according to \n a Dirichlet distribution \n \n cf Step 2 of Algorithm 2.1 in Sudderth 2006\n \"\"\"\n ss = state['suffstats']\n alpha = [ss[cid].N + state['alpha_'] / state['num_clusters_'] \n for cid in state['cluster_ids_']]\n return stats.dirichlet(alpha).rvs(size=1).flatten()\n\ndef update_mixture_weights(state):\n \"\"\"Update state with new mixture weights from current state\n sampled according to a Dirichlet distribution \n \n cf Step 2 of Algorithm 2.1 in Sudderth 2006\n \"\"\"\n state['pi'] = sample_mixture_weights(state)\n```\n\n#### Conditional Distribution of Cluster Means\n\nFinally, we need to compute the conditional distribution for the cluster means.\n\nWe assume the unknown cluster means are distributed according to a normal distribution with hyperparameter mean $\\lambda_1$ and variance $\\lambda_2^2$. The final step in this derivation comes from the normal-normal conjugacy. 
For more information see [section 2.3 of this](http://www.cs.ubc.ca/~murphyk/Papers/bayesGauss.pdf) and [section 6.2 this](https://web.archive.org/web/20160304125731/http://fisher.osu.edu/~schroeder.9/AMIS900/ech6.pdf).)\n\n\\begin{align}\np(\\theta_k \\,|\\, \\cdot)&=\np(\\theta_k \\,|\\, \n \\bf{z}, \\pi,\n \\theta_{\\neg k},\n \\sigma, \\bf x, \\lambda_1, \\lambda_2\n ) \\\\\n&\\propto p(\\left\\{x_i \\,|\\, z_i=k\\right\\} \\,|\\, \\bf{z}, \\pi,\n \\theta_1, \\theta_2, \\theta_3,\n \\sigma, \\lambda_1, \\lambda_2) \\cdot\\\\\n &\\phantom{==}p(\\theta_k \\,|\\, \\bf{z}, \\pi,\n \\theta_{\\neg k},\n \\sigma, \\lambda_1, \\lambda_2)\\\\\n&\\propto p(\\left\\{x_i \\,|\\, z_i=k\\right\\} \\,|\\, \\mathbf{z},\n \\theta_k, \\sigma)\n p(\\theta_k \\,|\\, \\lambda_1, \\lambda_2)\\\\\n&= \\mathcal{N}(\\theta_k \\,|\\, \\mu_n, \\sigma_n)\\\\\n\\end{align}\n\n\n$$ \\sigma_n^2 = \\frac{1}{\n \\frac{1}{\\lambda_2^2} + \\frac{N_k}{\\sigma^2}\n } $$\n \nand \n\n$$\\mu_n = \\sigma_n^2 \n \\left(\n \\frac{\\lambda_1}{\\lambda_2^2} + \n \\frac{n\\bar{x_k}}{\\sigma^2}\n \\right)\n$$\n\nHere is the code for sampling those means and for updating our state accordingly.\n\n\n```python\ndef sample_cluster_mean(cluster_id, state):\n cluster_var = state['cluster_variance_']\n hp_mean = state['hyperparameters_']['mean']\n hp_var = state['hyperparameters_']['variance']\n ss = state['suffstats'][cluster_id]\n \n numerator = hp_mean / hp_var + ss.theta * ss.N / cluster_var\n denominator = (1.0 / hp_var + ss.N / cluster_var)\n posterior_mu = numerator / denominator\n posterior_var = 1.0 / denominator\n \n return stats.norm(posterior_mu, np.sqrt(posterior_var)).rvs()\n\n\ndef update_cluster_means(state):\n state['cluster_means'] = [sample_cluster_mean(cid, state)\n for cid in state['cluster_ids_']]\n```\n\nDoing each of these three updates in sequence makes a complete _Gibbs step_ for our mixture model. Here is a function to do that:\n\n\n```python\ndef gibbs_step(state):\n update_assignment(state)\n update_mixture_weights(state)\n update_cluster_means(state)\n```\n\nInitially, we assigned each data point to a random cluster. We can see this by plotting a histogram of each cluster.\n\n\n```python\ndef plot_clusters(state):\n gby = pd.DataFrame({\n 'data': state['data_'], \n 'assignment': state['assignment']}\n ).groupby(by='assignment')['data']\n hist_data = [gby.get_group(cid).tolist() \n for cid in gby.groups.keys()]\n plt.hist(hist_data, \n bins=20,\n histtype='stepfilled', alpha=.5 )\n \nplot_clusters(state)\n```\n\nEach time we run `gibbs_step`, our `state` is updated with newly sampled assignments. Look what happens to our histogram after 5 steps:\n\n\n```python\nfor _ in range(5):\n gibbs_step(state)\nplot_clusters(state)\n```\n\nSuddenly, we are seeing clusters that appear very similar to what we would intuitively expect: three Gaussian clusters.\n\nAnother way to see the progress made by the Gibbs sampler is to plot the change in the model's log-likelihood after each step. 
The log likelihood is given by:\n\n$$\n\\log p(\\mathbf{x} \\,|\\, \\pi, \\theta_1, \\theta_2, \\theta_3)\n\\propto \\sum_x \\log \\left(\n    \\sum_{k=1}^3 \\pi_k \\exp \n    \\left\\{ \n    -(x-\\theta_k)^2 / (2\\sigma^2)\n    \\right\\}\n\\right)\n$$\n\nWe can define this as a function of our `state` object:\n\n\n```python\ndef log_likelihood(state):\n    \"\"\"Data log-likelihood\n    \n    Equation 2.153 in Sudderth\n    \"\"\"\n    \n    ll = 0 \n    for x in state['data_']:\n        pi = state['pi']\n        mean = state['cluster_means']\n        sd = np.sqrt(state['cluster_variance_'])\n        ll += np.log(np.dot(pi, stats.norm(mean, sd).pdf(x)))\n    return ll\n```\n\n\n```python\nstate = initial_state()\nll = [log_likelihood(state)]\nfor _ in range(20):\n    gibbs_step(state)\n    ll.append(log_likelihood(state))\npd.Series(ll).plot()\n```\n\nSee that the log likelihood improves with iterations of the Gibbs sampler. This is what we should expect: the Gibbs sampler finds state configurations that make the data we have seem \"likely\". However, the likelihood isn't strictly monotonic: it jitters up and down. Though it behaves similarly, the Gibbs sampler isn't optimizing the likelihood function. In its steady state, it is sampling from the posterior distribution. The `state` after each step of the Gibbs sampler is a sample from the posterior.\n\n\n```python\npd.Series(ll).plot(ylim=[-150, -100])\n```\n\n[In another post](/collapsed-gibbs/), I show how we can \"collapse\" the Gibbs sampler and sample the assignment parameters without sampling the $\\pi$ and $\\theta$ values. This collapsed sampler can also be extended to the model with a Dirichlet process prior that allows the number of clusters to be a parameter fit by the model.\n\n## Notation Helper\n\n* $N_k$, `state['suffstats'][k].N`: Number of points in cluster $k$.\n\n* $\\theta_k$, `state['suffstats'][k].theta`: Mean of cluster $k$.\n* $\\lambda_1$, `state['hyperparameters_']['mean']`: Mean of prior distribution over cluster means.\n* $\\lambda_2^2$, `state['hyperparameters_']['variance']`: Variance of prior distribution over cluster means.\n* $\\sigma^2$, `state['cluster_variance_']`: Known, fixed variance of clusters. 
\n\nThe superscript $(t)$ on $\\theta_k$, $pi_k$, and $z_i$ indicates the value of that variable at step $t$ of the Gibbs sampler.\n", "meta": {"hexsha": "c7a6bda22c51039fc6e1eace14a2f4f3b0786f43", "size": 71843, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "pages/2015-09-02-fitting-a-mixture-model.ipynb", "max_stars_repo_name": "tdhopper/notes-on-dirichlet-processes", "max_stars_repo_head_hexsha": "6efb736ca7f65cb4a51f99494d6fcf6709395cd7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 438, "max_stars_repo_stars_event_min_datetime": "2015-08-06T13:32:35.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-05T03:20:44.000Z", "max_issues_repo_path": "pages/2015-09-02-fitting-a-mixture-model.ipynb", "max_issues_repo_name": "tdhopper/notes-on-dirichlet-processes", "max_issues_repo_head_hexsha": "6efb736ca7f65cb4a51f99494d6fcf6709395cd7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2015-10-13T17:10:18.000Z", "max_issues_repo_issues_event_max_datetime": "2018-07-18T14:37:21.000Z", "max_forks_repo_path": "pages/2015-09-02-fitting-a-mixture-model.ipynb", "max_forks_repo_name": "tdhopper/notes-on-dirichlet-processes", "max_forks_repo_head_hexsha": "6efb736ca7f65cb4a51f99494d6fcf6709395cd7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 134, "max_forks_repo_forks_event_min_datetime": "2015-08-26T03:59:12.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-10T02:45:44.000Z", "avg_line_length": 96.6931359354, "max_line_length": 10094, "alphanum_fraction": 0.8004537672, "converted": true, "num_tokens": 4841, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9032942041005328, "lm_q2_score": 0.9149009474519222, "lm_q1q2_score": 0.8264247231594075}} {"text": "# Conditional Probability - Regions & Wealth\n\n> This document is written in *R*.\n>\n> ***GitHub***: https://github.com/czs108\n\n## Background\n\n> The following table gives the *mean wealth* and *percentage population* in various *regions* of a fictitious country.\n>\n> | Region | Ashire | Bshire | Cshire | West Toptown | East Toptown | South Toptown | North Toptown |\n> | :--------: | :----: | :----: | :----: | :----------: | :----------: | :-----------: | :-----------: |\n> | **Wealth** | 80 | 110 | 110 | 70 | 120 | 90 | 110 |\n> | **Pop** | 15% | 20% | 20% | 10% | 10% | 10% | 15% |\n\n\n```R\nwealth.all <- c(80, 110, 110, 70, 120, 90, 110)\nprobs.all <- c(0.15, 0.20, 0.20, 0.10, 0.10, 0.10, 0.15)\n```\n\n## Question A\n\n> What are the *mean* and *variance* of the *wealth* for the *whole* country?\n\n\n```R\nmean.all <- sum(wealth.all * probs.all)\nvar.all <- sum(((wealth.all - mean.all) ^ 2) * probs.all)\n\nprint(sprintf(\"Mean = %.1f\", mean.all), quote=FALSE)\nprint(sprintf(\"Variance = %.1f\", var.all), quote=FALSE)\n```\n\n [1] Mean = 100.5\n [1] Variance = 254.8\n\n\n## Question B\n\n> What are the *mean* and *variance* of the *wealth* for those who live in *Toptown*?\n\nMake the probabilities add up to **1**.\n\n\n```R\nprobs.tp <- probs.all[4:7]\nprobs.tp <- probs.tp / sum(probs.tp)\n\nsum(probs.tp)\n```\n\n\n1\n\n\n\n```R\nwealth.tp <- wealth.all[4:7]\n\nmean.tp <- sum(wealth.tp * probs.tp)\nvar.tp <- sum(((wealth.tp - mean.tp) ^ 2) * probs.tp)\n\nprint(sprintf(\"Mean = %.1f\", mean.tp), quote=FALSE)\nprint(sprintf(\"Variance = %.1f\", var.tp), quote=FALSE)\n```\n\n [1] Mean = 98.9\n [1] Variance = 343.2\n\n\n## Question C\n\n> What is the probability of living in an area with $\\text{mean 
wealth} > 100$ if you live in *Toptown*?\n\n\\begin{equation}\n\\begin{split}\nP &= \\frac{P(live\\ in\\ Toptown\\, \\cap\\, wealth > 100)}{P(live\\ in\\ Toptown)} \\\\\n &= \\frac{10\\% + 15\\%}{10\\% + 10\\% + 10\\% + 15\\%} \\\\\n &= \\frac{25\\%}{45\\%}\n\\end{split}\n\\end{equation}\n\n## Question D\n\n> What is the probability that you live in *Toptown* given that you live in an area with $\\text{mean wealth} > 100$?\n\n\\begin{equation}\n\\begin{split}\nP &= \\frac{P(live\\ in\\ Toptown\\, \\cap\\, wealth > 100)}{P(wealth > 100)} \\\\\n &= \\frac{10\\% + 15\\%}{20\\% + 20\\% + 10\\% + 15\\%} \\\\\n &= \\frac{25\\%}{65\\%}\n\\end{split}\n\\end{equation}\n", "meta": {"hexsha": "85d4f2b62f2336005b847f0d6c76b9a2bd40d6e1", "size": 4814, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "exercises/Conditional Probability - Regions & Wealth.ipynb", "max_stars_repo_name": "czs108/Probability-Theory-Exercises", "max_stars_repo_head_hexsha": "60c6546db1e7f075b311d1e59b0afc3a13d93229", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "exercises/Conditional Probability - Regions & Wealth.ipynb", "max_issues_repo_name": "czs108/Probability-Theory-Exercises", "max_issues_repo_head_hexsha": "60c6546db1e7f075b311d1e59b0afc3a13d93229", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "exercises/Conditional Probability - Regions & Wealth.ipynb", "max_forks_repo_name": "czs108/Probability-Theory-Exercises", "max_forks_repo_head_hexsha": "60c6546db1e7f075b311d1e59b0afc3a13d93229", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-03-21T05:04:07.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-21T05:04:07.000Z", "avg_line_length": 23.8316831683, "max_line_length": 128, "alphanum_fraction": 0.4410054009, "converted": true, "num_tokens": 872, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9314624993576758, "lm_q2_score": 0.8872046011730965, "lm_q1q2_score": 0.8263978152503224}} {"text": "# COURSE: Master math by coding in Python\n## SECTION: Graphing\n\n#### https://www.udemy.com/course/math-with-python/?couponCode=MXC-DISC4ALL\n#### INSTRUCTOR: sincxpress.com\n\nNote about this code: Each video in this section of the course corresponds to a section of code below. Please note that this code roughly matches the code shown in the live recording, but is not exactly the same -- the variable names, order of lines, and parameters may be slightly different. 
\n\n\n```python\n# import required packages at the top of the script!\nimport sympy as sym\nimport numpy as np\nfrom IPython.display import display, Math\nimport matplotlib.pyplot as plt\n```\n\n# VIDEO: Plotting coordinates on a plane\n\n\n```python\nx = 3\ny = 5\n\n# basic plotting a red dot\nplt.plot(x,y,'ro')\n\n# set axis limits\nplt.axis('square') # order matters\nplt.axis([-6,6,-6,6])\nplt.grid()\n\nplt.show()\n```\n\n\n```python\n# a set of coordinates\n\nx = [-4,2,5,6,2,-5]\ny = [5,2,10,-5,4,0]\n\nfor i in range(0,len(x)):\n plt.plot(x[i],y[i],'o',label='point %s'%i)\n \n\nplt.legend()\nplt.axis('square')\nplt.grid()\nplt.show()\n```\n\n\n```python\n# getting information from axes\n\nplt.plot(4,3,'rs')\n\n# get an object for the current axis\naxis = plt.gca()\nylim = axis.get_ylim()\nprint(ylim)\n\n# now change only the upper y-axis limit\naxis.set_ylim([ ylim[0],6 ])\n\nplt.xlabel('X axis')\nplt.ylabel('F(x)')\n\nplt.show()\n```\n\n### Exercise\n\n\n```python\n# define a function and then subs\nimport sympy as sym\n\nx = sym.symbols('x')\ny = x**2 - 3*x\n\nxrange = range(-10,11)\n\n\nfor i in range(0,len(xrange)):\n plt.plot(xrange[i],y.subs({x:xrange[i]}),'o')\n \nplt.xlabel('x')\nplt.ylabel('$f(x) = %s$' %sym.latex(y))\nplt.show()\n```\n\n# VIDEO: Graphing lines\n\n\n```python\n# drawing lines\n\np1 = [-3,-1]\np2 = [4,4]\n\n# nice try, but wrong code :(\nplt.plot(p1,p2)\nplt.plot([p1[0],p2[0]],[p1[1],p2[1]],color=[.7,.3,.8],linewidth=5)\n\nplt.axis('square')\nplt.axis([-6,6,-6,6])\nplt.show()\n```\n\n\n```python\nx = 3\ny = 5\n\n# basic plotting a red dot\nplt.plot(x,y,'ro')\nplt.plot([0,x],[0,y],'r')\n\n# set axis limits\nplt.axis('square') # order matters\nplt.axis([-6,6,-6,6])\nplt.grid()\n\n# now add lines\nplt.plot([-6,6],[0,0],'k')\nplt.plot([0,0],[-6,6],'k')\n\nplt.show()\n```\n\n### Exercises\n\n\n```python\nx = range(-20,20)\n\nfor i in range(0,len(x)):\n plt.plot([0,x[i]],[0,abs(x[i])**(1/2)])\n \nplt.xlabel('x')\nplt.ylabel('y')\nplt.show()\n```\n\n\n```python\n# draw a square\nplt.plot([0,2],[2,2],'r')\nplt.plot([0,2],[0,0],'k')\nplt.plot([0,0],[0,2],'g')\nplt.plot([2,2],[0,2],'m')\n\nplt.axis('square')\nplt.axis([-3,5,-3,5])\nplt.show()\n\n```\n\n# VIDEO: Linear equations in slope-intercept form\n\n\n```python\n# y = mx + b\n\nx = [-5,5]\nm = 2\nb = 1\n\n# next line doesn't work; solution comes later!\n#y = m*x+b\n\n# for now, this way\ny = [0,0]\nfor i in range(0,len(x)):\n y[i] = m*x[i] + b\n\n\nplt.plot(x,y,label='y=%sx+%s' %(m,b))\nplt.axis('square')\nplt.xlim(x)\nplt.ylim(x)\nplt.grid()\naxis = plt.gca()\nplt.plot(axis.get_xlim(),[0,0],'k--')\nplt.plot([0,0],axis.get_ylim(),'k--')\nplt.legend()\nplt.title('The plot.')\n\nplt.show()\n\n```\n\n\n```python\nimport numpy as np\n\n# converting x into a numpy array\ny = m*np.array(x) + b\n\n\nplt.plot(x,y,label='y=%sx+%s' %(m,b))\nplt.axis('square')\nplt.xlim(x)\nplt.ylim(x)\nplt.grid()\naxis = plt.gca()\nplt.plot(axis.get_xlim(),[0,0],'k--')\nplt.plot([0,0],axis.get_ylim(),'k--')\nplt.legend()\nplt.title('The plot.')\n\nplt.show()\n```\n\n\n```python\nprint(type(x))\nprint(type(np.array(x)))\n```\n\n### Exercise\n\n\n```python\n# plot these two lines\nimport numpy as np\n\nx = [-5,5]\nm = [.7,-5/4]\nb = [-2,3/4]\n\nfor i in range(0,len(x)):\n y = m[i]*np.array(x) + b[i]\n plt.plot(x,y,label='y=%sx+%s' %(m[i],b[i]))\n \nplt.axis('square')\nplt.xlim(x)\nplt.ylim(x)\nplt.grid()\nplt.xlabel('x')\nplt.ylabel('y')\naxis = 
plt.gca()\nplt.plot(axis.get_xlim(),[0,0],'k--')\nplt.plot([0,0],axis.get_ylim(),'k--')\nplt.legend(prop={'size':15})\nplt.title('The plot.')\n\nplt.show()\n\n```\n\n# VIDEO: Graphing rational functions\n\n\n```python\nimport numpy as np\n\nx = range(-3,4)\ny = np.zeros(len(x))\n\nfor i in range(0,len(x)):\n y[i] = 2 - x[i]**2\n\nplt.plot(x,y,'s-')\nplt.xlabel('x'), plt.ylabel('y')\nplt.show()\n```\n\n\n```python\n# what if you want more spacing?\n\nx = np.linspace(-3,4,14)\ny = 2 + np.sqrt(abs(x))\n \nplt.plot(x,y,'s-')\nplt.show()\n```\n\n### Exercise\n\n\n```python\nimport numpy as np\n\ne = range(-1,4)\nx = np.linspace(-4,4,300)\n\nfor i in e:\n y = x**i\n plt.plot(x,y,label='$y=x^{%s}$'%i,linewidth=4)\n \nplt.legend()\nplt.ylim([-20,20])\nplt.xlim([x[0],x[-1]])\nplt.xlabel('x')\nplt.ylabel('y')\nplt.show()\n```\n\n# VIDEO: Plotting functions with sympy\n\n\n```python\n# create symbolic variables\nfrom sympy.abc import x\n\n# define function\ny = x**2\n\n# plotting function in sympy\np = sym.plotting.plot(y) #(x,y)\n\n# trying to adjust the y-axis limits\np.ylim = [0,50] # ...but it doesn't work :(\n```\n\n\n```python\n# to set features of the plot, turn the plotting off, then make adjustments, then show the plot\n\n# create a plot object\np = sym.plotting.plot(y,show=False)\n\n# change the y-axis of the entire plot\np.xlim = (0,50)\n\n# change a feature of only the first plot object (the line, in this case there is only one)\np[0].line_color = 'm'\np.title = 'This is a nice-looking plot!'\n\n# now show the line\np.show()\n```\n\n\n```python\n# This code shows how to use expressions with parameters\n# and also how to plot multiple lines in the same plot\n\nx,a = sym.symbols('x,a')\n\n# a convenient way to import the plot module\nimport sympy.plotting.plot as symplot\n\n# the basic expression with parameters\nexpr = a/x\n\n# generate the first plot\np = symplot(expr.subs(a,1),(x,-5,5),show=False)\np[0].label = 'y = %s'%expr.subs(a,1) # create a label for the legend\n\n# extend to show the second plot as well\np.extend( symplot(expr.subs(a,3),show=False) )\np[1].label = 'y = %s'%expr.subs(a,3)\n\n# some plotting adjustments\np.ylim = [-5,5]\np[0].line_color = 'r'\np.legend = True # activate the legend\n\n# and show the plot\np.show()\n\n```\n\n### Exercise\n\n\n```python\n# create variables\nx,a = sym.symbols('x,a')\n\n# define function\ny = a/(x**2-a)\n\n# reset and initialize the plot function\np = None\np = sym.plotting.plot(y.subs(a,1),(x,-5,5),show=False )\np[0].label = '$%s$'%sym.latex(y.subs(a,1))\n\n# loop over values of a\nfor i in range(2,5):\n p.extend( sym.plotting.plot(y.subs(a,i),(x,-5,5),show=False ) )\n p[i-1].line_color = list(np.random.rand(3))\n p[i-1].label = '$%s$'%sym.latex(y.subs(a,i))\n\n# a bit of touching up and show the plot\np.ylim = [-10,10]\np.legend = True\np.show()\n```\n\n# VIDEO: Making pictures from matrices\n\n\n```python\n# create a matrix\nA = [ [1,2],[1,4] ]\n\n# show it (yikes! 
many functions!)\ndisplay(Math(sym.latex(sym.sympify(np.array(A)))))\n\n# now image it\nplt.imshow(A)\n\nplt.xticks([0,1])\nplt.yticks([.85,1])\n\nplt.show()\n```\n\n\n```python\nA = np.zeros((10,14))\n\nprint( np.shape(A) )\n\nfor i in range(0,np.shape(A)[0]):\n for j in range(0,np.shape(A)[1]):\n \n # populate the matrix\n A[i,j] = 3*i-4*j\n\nprint(A)\nplt.imshow(A)\nplt.plot([0,3],[8,2],'r',linewidth=4)\nplt.set_cmap('Purples')\n\nfor i in range(0,np.shape(A)[0]):\n for j in range(0,np.shape(A)[1]):\n plt.text(j,i,int(A[i,j]),horizontalalignment='center',verticalalignment='center')\n\n\nplt.show()\n```\n\n### Exercise\n\n\n```python\n# make a checkerboard\n\nC = np.zeros((10,10))\n\nfor i in range(0,10):\n for j in range(0,10):\n C[i,j] = (-1)**(i+j)\n \nplt.imshow(C)\nplt.set_cmap('gray')\nplt.tick_params(labelleft=False,labelbottom=False)\nplt.show()\n```\n\n# VIDEO: Drawing patches with polygons\n\n\n```python\nfrom matplotlib.patches import Polygon\n\nx = np.linspace(0,1,100)\ny = np.array([ [1,1],[2,3],[3,1] ])\np = Polygon(y,facecolor='m',alpha=.3)\n\n# extend with two polygons\ny1 = np.array([ [2,2],[2.5,4],[3.5,1] ])\np1 = Polygon(y1,alpha=.2,edgecolor='k')\n\nfig, ax = plt.subplots()\nax.add_patch(p1)\nax.add_patch(p)\nax.set_ylim([0,4])\nax.set_xlim([0,4])\nplt.show()\n```\n\n\n```python\nx = np.linspace(-2,2,101)\nf = -x**2\n\ny = np.vstack((x,f)).T\np = Polygon(y,facecolor='g',alpha=.2,edgecolor='k')\n\np1 = Polygon(np.array([ [-.5,-4],[-.5,-2.5],[.5,-2.5],[.5,-4] ]),facecolor='k')\n\nfig, ax = plt.subplots()\nax.add_patch(p)\nax.add_patch(p1)\n\nplt.plot(x,f,'k')\nplt.plot(x[[0,-1]],[-4,-4],'k')\nplt.show()\n```\n\n# VIDEO: Exporting graphics as pictures\n\n\n```python\nC = np.zeros((10,10))\n\nfor i in range(0,10):\n for j in range(0,10):\n C[i,j] = (-1)**(i+j)\n \nplt.imshow(C)\nplt.set_cmap('gray')\nplt.tick_params(axis='both',labelleft=False,labelbottom=False)\n\n# save the figure!\nplt.savefig('NiceFigure.png')\nplt.show() # make sure this line comes after, not before, the savefig function call\n\n```\n\n# VIDEO: Graphing bug hunt!\n\n\n```python\nplt.plot(3,2,'ro')\n\n# set axis limits\nplt.axis('square')\nplt.axis([-6,6,-6,6])\nplt.show()\n```\n\n\n```python\n# plot a line\nplt.plot([0,3],[0,5])\nplt.show()\n```\n\n\n```python\nimport numpy as np\n\nx = range(-3,4)\ny = np.zeros(len(x))\n\nfor i in range(0,len(x)):\n y[i] = 2 - x[i]**2\n\nplt.plot(x,y,'s-')\nplt.show()\n```\n\n\n```python\n# plot two lines\nplt.plot([-2,3],[4,0],'b',label='line 1')\nplt.plot([0,3],[-3,3],'r',label='line 2')\n\nplt.legend()\nplt.show()\n```\n\n\n```python\nrandmat = np.random.randn(5,9)\n\n# draw a line from lower-left corner to upper-right corner\nplt.plot([8,0],[0,4],color=(.4,.1,.9),linewidth=5)\n\nplt.imshow(randmat)\nplt.set_cmap('Purples')\nplt.show()\n```\n\n\n```python\n# plot two lines\nplt.plot([-2,3],[4,0],'b',label='line1')\nplt.plot([0,3],[-3,3],'r',label='line2')\n\nplt.legend(['line 1','line 2'])\nplt.show()\n```\n\n\n```python\nx = np.linspace(1,4,20)\ny = x**2/(x-2)\n\nplt.plot(x,y)\n\n# adjust the x-axis limits according to the first and last points in x\nplt.xlim(x[[0,-1]])\n\nplt.show()\n```\n\n\n```python\nx = sym.symbols('x')\ny = x**2 - 3*x\n\nxrange = range(-10,11)\n\nfor i in range(0,len(xrange)):\n plt.plot(xrange[i],y.subs(x,xrange[i]),'o')\n \nplt.xlabel('x')\nplt.ylabel('$f(x) = %s$' %sym.latex(y))\nplt.show()\n```\n\n\n```python\nx = [-5,5]\nm = 2\nb = 1\n\ny = m*np.array(x)+b\n\nplt.plot(x,y)\nplt.show()\n```\n\n\n```python\nx = range(-20,21)\n\nfor i in 
range(0,len(x)):\n plt.plot([0,x[i]],[0,abs(x[i])**(1/2)],color=(i/len(x),i/len(x),i/len(x)))\n\nplt.axis('off')\nplt.show()\n```\n\n\n```python\n# draw a checkerboard with purple numbers on top\n\nm = 8\nn = 4\n\n# initialize matrix\nC = np.zeros((m,n))\n\n# populate the matrix\nfor i in range(0,m):\n for j in range(0,n):\n C[i,j] = (-1)**(i+j)\n \n\n# display some numbers\nfor i in range(0,m):\n for j in range(0,n):\n plt.text(j,i,i+j,\\\n horizontalalignment='center',verticalalignment='center',\\\n fontdict=dict(color='m'))\n\n\nplt.imshow(C)\nplt.set_cmap('gray')\nplt.show()\n```\n", "meta": {"hexsha": "22a27192671fb2875fd3d99d855b083132b45306", "size": 20293, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "MXC_pymath_graphing.ipynb", "max_stars_repo_name": "stefan-cross/mathematics-with-python", "max_stars_repo_head_hexsha": "40a130b0d7da98c4bd54406c61e99daeca57680f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "MXC_pymath_graphing.ipynb", "max_issues_repo_name": "stefan-cross/mathematics-with-python", "max_issues_repo_head_hexsha": "40a130b0d7da98c4bd54406c61e99daeca57680f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "MXC_pymath_graphing.ipynb", "max_forks_repo_name": "stefan-cross/mathematics-with-python", "max_forks_repo_head_hexsha": "40a130b0d7da98c4bd54406c61e99daeca57680f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 22.2510964912, "max_line_length": 299, "alphanum_fraction": 0.4607007342, "converted": true, "num_tokens": 3516, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9399133498259924, "lm_q2_score": 0.8791467738423874, "lm_q1q2_score": 0.8263217891909125}} {"text": "# Marginal Likelihood in Python\n\n\n\n```python\nimport sympy\n\nsympy.init_printing()\n\nx = sympy.Symbol('x', positive=True)\ns = sympy.Symbol('s', positive=True)\n\ncauchy = 2 / (sympy.pi * (1 + x ** 2))\nnormal = 2 / sympy.sqrt(2*sympy.pi) * sympy.exp(-x**2 / 2)\n\nhs = sympy.integrate(normal * cauchy.subs({x: s / x}) / x, (x, 0, sympy.oo))\nhs\n```\n\n\n```python\nbeta = sympy.Symbol('beta', positive=True)\ngamma = sympy.Symbol('gamma', positive=True)\ntheta = sympy.Symbol('theta')\nz1 = sympy.integrate(sympy.cos(theta)**(2*((gamma/(beta - 2)) - 3/2) + 3),\n (theta, -sympy.pi, sympy.pi))\nz1\n```\n\n\n```python\nbeta = sympy.Symbol('beta', positive=True)\ngamma = sympy.Symbol('gamma', constant = True)\ntheta = sympy.Symbol('theta')\nz1 = sympy.integrate(sympy.cos(theta)**(2*(gamma/(beta - 2))),\n (theta, -sympy.pi/2, sympy.pi/2))\nz1\n```\n\n\n```python\nfrom sympy.stats import Beta, Binomial, density, E, variance\nfrom sympy import Symbol, simplify, pprint, expand_func\n\nalpha = Symbol(\"alpha\", positive=True)\nbeta = Symbol(\"beta\", positive=True)\nz = Symbol(\"z\")\ntheta = Beta(\"theta\", alpha, beta)\nD = density(theta)(z)\npprint(D, use_unicode=False)\nexpand_func(simplify(E(theta, meijerg=True)))\n```\n\n alpha - 1 beta - 1\n z *(-z + 1) \n ---------------------------\n beta(alpha, beta) \n\n\n\n\n\n alpha/(alpha + beta)\n\n\n\n\n```python\n\n```\n\n\n```python\nnormal = 1 / sympy.sqrt(2*sympy.pi) * sympy.exp(-x**2 / 2)\n```\n\n\n```python\nx = sympy.Symbol('x')\nnormal = 1 / sympy.sqrt(2*sympy.pi) * sympy.exp(-x**2 / 2)\nsympy.integrate(normal, (x, -sympy.oo, sympy.oo))\n```\n\n\n```python\nn = sympy.Symbol('n', integer=True, positive=True)\nk = sympy.Symbol('k', integer=True, positive=True)\na = b = 1.\np = .25\npr = sympy.binomial(n, k)*(p**k)*(1-p)**(n-k)\npr\n```\n\n\n```python\nsympy.summation(pr*k, (k, 0, 20), (n, 20, 20))\n```\n\n\n```python\na = .5\nb = 1.\np = x**(a-1)*(1-x)**(b-1)/sympy.beta(a, b)\nn = 10\nk = sympy.Symbol('k', positive=True)\npr = sympy.binomial(n, k)*(p**k)*(1-p)**(n-k)\nsympy.integrate(pr.subs({x: k}), (x, 0, n))\n```\n\n\n```python\na = 1.\nb = 3.\nn = 20\nk = sympy.Symbol('k', integer=True, positive=True)\np = sympy.binomial(n, k)*sympy.beta(k+a, n-k+b)/sympy.beta(a, b)\nsympy.summation(p*k, (k, 0, 20))\n```\n\n\n```python\nx = sympy.Symbol('x', positive=True)\na = 1.#sympy.Symbol('alpha', positive=True)\nb = 1.#sympy.Symbol('beta', positive=True)\nbeta = (x**(a-1))*((1-x)**(b-1))/sympy.beta(a, b)\nsympy.integrate(beta*x, (x, 0, 1))\n```\n\n\n```python\nsympy.beta(1, 1)\n```\n", "meta": {"hexsha": "d8418f4416c4dfa5fcad2134d035320f4e6ee864", "size": 11536, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Miscellaneous/sympy_test.ipynb", "max_stars_repo_name": "junpenglao/Planet_Sakaar_Data_Science", "max_stars_repo_head_hexsha": "73d9605b91b774a56d18c193538691521f679f16", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 51, "max_stars_repo_stars_event_min_datetime": "2018-04-08T19:53:15.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-24T21:08:25.000Z", "max_issues_repo_path": "Miscellaneous/sympy_test.ipynb", "max_issues_repo_name": "junpenglao/Planet_Sakaar_Data_Science", "max_issues_repo_head_hexsha": "73d9605b91b774a56d18c193538691521f679f16", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2018-05-29T20:50:37.000Z", 
"max_issues_repo_issues_event_max_datetime": "2020-09-12T07:14:08.000Z", "max_forks_repo_path": "Miscellaneous/sympy_test.ipynb", "max_forks_repo_name": "junpenglao/Planet_Sakaar_Data_Science", "max_forks_repo_head_hexsha": "73d9605b91b774a56d18c193538691521f679f16", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 13, "max_forks_repo_forks_event_min_datetime": "2018-07-21T09:53:10.000Z", "max_forks_repo_forks_event_max_datetime": "2021-06-07T19:06:26.000Z", "avg_line_length": 42.7259259259, "max_line_length": 2766, "alphanum_fraction": 0.6738037448, "converted": true, "num_tokens": 916, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9399133531922387, "lm_q2_score": 0.8791467675095294, "lm_q1q2_score": 0.8263217861979992}} {"text": "# Monte Carlo \n\n\n```julia\nusing Plots\n```\n\n## Tetrahedron volume\n\n\n$$\n\\begin{align}\nV &= \\int_0^1{\\int_0^{1-x}{\\int_0^{1-x-y}{1 \\quad dz}dy}dx} \\\\\n&= \\int_0^1{\\int_0^{1-x}{1-x-y \\quad dy}dx} \\\\\n&= \\int_0^1{ \\left( y(1-x)-\\frac{y^2}{2} \\right)_{y=0}^{y=1-x} \\quad dx} \\\\\n&= \\int_0^1{ \\frac{(1-x)^2}{2} \\quad dx} \\\\\n&= \\int_0^1{ \\frac{1-2x+x^2}{2} \\quad dx} \\\\\n&= \\left( \\frac{x}{2} - \\frac{x^2}{2} + \\frac{x^3}{6} \\right)_{x=0}^{x=1} \\\\\n&= \\frac{1}{6}\n\\end{align}\n$$\n\n## Standard Monte Carlo integration\n\n\n```julia\nfunction montecarlo(iterations)\n integral = 0\n for i=1:iterations\n x, y, z = rand(3)\n if (x+y+z<1)\n integral += 1\n end\n end\n volume = integral/iterations\nend\n\nmontecarlo(100)\n```\n\n\n\n\n 0.16\n\n\n\n# Test\n\n\n```julia\nfunction montecarlotest(iterations)\n err = log10(abs(montecarlo(iterations)-1/6))\nend\n```\n\n\n\n\n montecarlotest (generic function with 1 method)\n\n\n\n\n```julia\np = plot()\nfor i=1:10\n x = 1:7\n y = montecarlotest.(10 .^x)\n p = scatter!(p, x, y, label=:false)\nend\n\ndisplay(p)\n```\n\n\n \n\n \n\n\n## Adjourn\n\n\n```julia\nusing Dates\nprintln(\"mahdiar\")\nDates.format(now(), \"Y/U/d HH:MM\") \n```\n\n mahdiar\n\n\n\n\n\n \"2021/April/17 16:17\"\n\n\n", "meta": {"hexsha": "868034198ea122f50dd67f2415fad1ff63a99bb4", "size": 32731, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "HW12/2.ipynb", "max_stars_repo_name": "mahdiarsadeghi/NumericalAnalysis", "max_stars_repo_head_hexsha": "95a0914c06963b0510971388f006a6b2fc0c4ef9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "HW12/2.ipynb", "max_issues_repo_name": "mahdiarsadeghi/NumericalAnalysis", "max_issues_repo_head_hexsha": "95a0914c06963b0510971388f006a6b2fc0c4ef9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "HW12/2.ipynb", "max_forks_repo_name": "mahdiarsadeghi/NumericalAnalysis", "max_forks_repo_head_hexsha": "95a0914c06963b0510971388f006a6b2fc0c4ef9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 91.941011236, "max_line_length": 4473, "alphanum_fraction": 0.5917631603, "converted": true, "num_tokens": 493, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9416541659378681, "lm_q2_score": 0.8774767762675405, "lm_q1q2_score": 0.8262796618860602}} {"text": "# Multivariate Gaussians, K-means and Gaussian Mixture Models\n\n\n```python\n%matplotlib inline\nimport matplotlib\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport scipy.linalg\nimport sklearn.datasets\nimport pylab\nfrom matplotlib.patches import Ellipse\n\n```\n\n# Generate data\nWe'll generate data from a mixture of Gaussians model by generating data from three separate Gaussians and concatenating the data. We'll do everything in 2 dimensions so that we can visualise things easily.\n\nIn this tutorial, we'll sometimes avoid using a built in function, and instead do things the \"long way\" to get some extra exposure to some basic maths and statistics.\n\nUsing the `np.random.randn` command we can generate data from a standard multivariate normal, and using `matplotlib` we can visualise this.\n\n\n```python\n# Generate data\nnum_data = 10000\ndimensions = 2\ndata = np.random.randn(num_data,dimensions)\n\n# Plot the data\nplt.figure(figsize=(6, 6))\nplt.scatter(data[:,0], data[:,1])\nplt.axis('equal')\nplt.show()\n```\n\nJust as a quick sanity check, we can make sure that the mean and covariance of the data we just generated is what we expect:\n\n\n```python\nprint np.mean(data[:,0])\nprint np.mean(data[:,1])\nprint np.cov(data.T)\n```\n\n -0.0123484576753\n -0.0148022033307\n [[ 0.99082741 0.00706382]\n [ 0.00706382 0.99911906]]\n\n\nGreat! These numbers are roughly what we would expect, given that the population mean is `0` and the population covariance is the `2x2` identity matrix. You can try increasing `num_data` and verify that the empirical mean and covariance get closer to the population values.\n\nNow, let's use the function `np.random.randn` to generate data from non-standard Gaussians.\n\nLet's suppose that $Z \\sim \\mathcal{N}\\left(0, I_n\\right)$ is a standard Gaussian, and we'll define a new variable $X$ as:\n\n$$ X = \\Sigma^{1/2} Z + \\mu $$\n\nwhere $\\Sigma$ is a _symmetric_ $n\\times n$ matrix and $\\mu$ is a vector of size $n$.\n\nBy closure of Gaussians under linear operations, $X$ is still Gaussian and its distribution is therefore determined by its mean and covariance, which we can simply calculate.\n\n$$ \n\\begin{align}\n\\mathbb{E}[X] &= \\Sigma^{1/2}\\mathbb{E}[Z] + \\mu \\\\\n &= \\mu \\\\\n\\mathbb{C}[X] &= \\mathbb{E}[XX^\\intercal] - \\mathbb{E}[X]\\mathbb{E}[X]^{\\intercal} \\\\\n &= \\Sigma^{1/2}\\mathbb{E}[ZZ^\\intercal]\\Sigma^{1/2} + \\mu \\mu^\\intercal - \\mu \\mu^\\intercal\\\\\n &= \\Sigma\n\\end{align}\n$$\n\nTherefore, $X \\sim \\mathcal{N}\\left(\\mu, \\Sigma\\right)$. 
Using this fact we can write a function to generate samples from arbitrary Gaussians.\n\nNote that because we store our data in a matrix where each _row_ corresponds to a single observation, we multiply by the square root of the covariance matrix on the right rather than the left in the function below.\n\n\n```python\ndef sample_gaussian(mean, covariance, n_data, dimensions=2):\n assert mean.shape == (dimensions,)\n assert covariance.shape == (dimensions, dimensions)\n sqrt_cov = scipy.linalg.sqrtm(covariance)\n data = np.random.randn(n_data, dimensions)\n return np.dot(data, sqrt_cov) + mean\n```\n\n\n```python\nmean1 = np.array([3,1])\ncov1 = np.array([[3,1],[1,2]])\ndata1 = sample_gaussian(mean1, cov1, 1000)\n\nplt.figure(figsize=(6, 6))\nplt.scatter(data1[:,0], data1[:,1])\nplt.axis('equal')\nplt.show()\n```\n\nAgain as a sanity check we can calculate the empirical mean and covariances of the data we just generated to see that they match what we expect.\n\n\n```python\nprint np.mean(data1[:,0])\nprint np.mean(data1[:,1])\nprint np.cov(data1.T)\n```\n\n 3.07562386296\n 1.02746883564\n [[ 2.92964863 0.94634928]\n [ 0.94634928 1.9995487 ]]\n\n\nLet's generate a few different clusters and plot them together.\n\n\n```python\nmean1 = np.array([3,1])\ncov1 = np.array([[3,1],[1,2]])\ndata1 = sample_gaussian(mean1, cov1, 1000)\n\nmean2 = np.array([7,-2])\ncov2 = np.array([[1,1],[1,5]])\ndata2 = sample_gaussian(mean2, cov2, 1000)\n\nmean3 = np.array([-1,-4])\ncov3 = np.array([[3,-1],[-1,3]])\ndata3 = sample_gaussian(mean3, cov3, 1000)\n\n\n\nplt.figure(figsize=(6, 6))\nplt.scatter(data1[:,0], data1[:,1])\nplt.scatter(data2[:,0], data2[:,1])\nplt.scatter(data3[:,0], data3[:,1])\nplt.axis('equal')\nplt.show()\n```\n\n# K-means \nNow that we have some data generated, let's do some clustering. We're going to implement one of the most basic clustering algorithms: K-means.\n\nThe basic idea here is that we are going to start with an initial seed of $K$ points in the data-space. These are the centres of $K$ distinct clusters. We then iterate, first assigning each datum to the cluster whose centre is nearest to it, and then updating the $K$ points by setting each to be the average of all of the data assigned to it.\n\nMore formally, we start with $K$ points $m_1, \\ldots, m_K$ and some data $x_1,\\ldots, x_N$. Then, we iteratively apply each of the following steps:\n\n$1)$ Find, for each $k$, $S_k= \\lbrace x_n : d(x_n,m_k) \\leq d(x_n, m_i) \\: \\forall i \\rbrace$\n\n$2)$ Update the cluster centres: $m_k = \\frac{1}{|S_k|} \\sum_{x_n \\in S_k} x_n$\n\nWe apply these steps until convergence occurs. 
We could define this, for instance, as occurring when cluster assignments do not change.\n\nWe'll write two functions, one for each of the steps above.\n\n\n```python\ndef update_cluster_assignments(data, m):\n # returns a dictionary mapping index i to numpy array of data that are nearest to m[i]\n clusters = {}\n for i in range(len(m)):\n clusters[i] = []\n for x in data:\n # find index i of nearest m[i] and append it to cluster\n clusters[np.sum((m-x)**2, 1).argmin()].append(x)\n for i in range(len(m)):\n clusters[i] = np.array(clusters[i])\n return clusters\n \ndef update_cluster_means(clusters):\n # returns the updated cluster means\n n_clusters = len(clusters)\n dimension = len(clusters[0][0])\n m = np.zeros([n_clusters, dimension])\n for i in clusters:\n # calculate the mean of points assigned to cluster i\n m[i] = np.mean(clusters[i], 0)\n return m\n \n```\n\nLet's try out the functions that we've written on the data we generated earlier. We'll concatenate all our data together into one big matrix, try some initial seeds for the centres and find the corresponding clusters. We can visualise this by slightly modifying the code to plot from before (we'll put it in a function as we'll use it again later).\n\n\n```python\ndata = np.concatenate([data1, data2, data3])\nm = np.array([[0,0],[10,10],[2,0.5],[-5,-5]])\nclusters = update_cluster_assignments(data, m)\n\n\ndef plotter(clusters):\n # Plot data, colored by cluster\n plt.figure(figsize=(6, 6))\n for i in range(len(clusters)):\n plt.scatter(clusters[i][:,0], clusters[i][:,1])\n plt.axis('equal')\n plt.show()\n \nplotter(clusters)\n```\n\nLet's update the mean once and the update the clusters.\n\n\n```python\nm = update_cluster_means(clusters)\nclusters = update_cluster_assignments(data, m)\nplotter(clusters)\n```\n\nThe clusters changed! Great. $k$-means works by repeatedly performing the two update functions we just called, so let's write a wrapper function to do this, terminating when the cluster centers have stopped changing. \n\n\n```python\ndef kmeans(data, k, eps=None):\n # If initial cluster centres are not provided, randomly select k data points to be the centres\n m = data[np.random.choice(range(len(data)), k, replace=False)]\n m_old = m\n clusters = update_cluster_assignments(data, m)\n m = update_cluster_means(clusters)\n \n # Stop once the cluster centres stop changing\n while (m_old != m).any():\n m_old = m\n clusters = update_cluster_assignments(data, m)\n m = update_cluster_means(clusters)\n \n return clusters\n```\n\n\n```python\nplotter(kmeans(data, 3))\nplotter(kmeans(data, 5))\nplotter(kmeans(data, 7))\n```\n\n## Trying this out on some real data\nWe'll run our freshly written $k$-means algorithm on the Iris dataset\n\n\n```python\niris = sklearn.datasets.load_iris()\niris_data = iris.data\niris_clusters = kmeans(iris_data,3)\n```\n\nSince the iris dataset is 4-dimensional, we can't visualise all of it at once. We can compare each of the 6 pairs of dimensions separately:\n\n\n```python\nnumber_of_subplots=6\nv=0\nplt.figure(figsize=(12, 22))\nplt.axis(\"equal\")\nfor j in range(3):\n for k in range(j+1, 4):\n v += 1\n\n ax1 = pylab.subplot(number_of_subplots,3,v)\n for i in range(len(iris_clusters)):\n ax1.scatter(iris_clusters[i][:,j], iris_clusters[i][:,k])\n\nplt.show()\n```\n\n# Principal Component Analysis (PCA)\n\nPlotting each pair of dimensions separately is not very satisfying, as it's hard to piece together what each plot shows separately into a single explanation. 
We can use Principal Component Analysis (PCA) to instead find the most informative 2D subspace to project our data onto before visualising. \n\nPCA is a useful pre-processing algorithm that projects data to a lower dimensional subspace while maintaining the maximum variance within the data. We'll apply this here, projecting the 4-dimensional Iris dataset to a 2-dimensional space so that we can see our clusters better.\n\nBefore applying PCA, we have to standardise our data, since different measurement units in different dimensions can mess things up.\n\n\n```python\niris_data_std = iris_data.copy()\nfor i in range(len(iris_data[0])):\n iris_data_std[:,i] = (iris_data_std[:,i] - iris_data[:,i].mean()) / iris_data[:,i].std()\n```\n\nNext, we calculate the directions of maximal variance within the data by finding the eigenvectors of the covariance matrix. We'll then stack the two eigenvectors corresponding to the largest eigenvalues, giving us a matrix that will project our 4D data onto the 2D subspace with largest variance.\n\nThe next plot shows the data in the projected space coloured by the true class label.\n\n\n```python\n# Calculate eigenvalues\neig_vals, eig_vecs = np.linalg.eig(np.cov(iris_data_std.T))\n\n# Project data onto lower dimensional subspace\nproj_matrix = eig_vecs[:,0:2]\nproj_data = iris_data_std.dot(proj_matrix)\n\n# Plot all data coloured by class label\nplt.figure(figsize=(6, 6))\nfor col in range(3):\n plt.scatter(proj_data[:,0][iris.target==col], proj_data[:,1][iris.target==col])\nplt.axis('equal')\nplt.show()\n```\n\nNow let's take the clusters we found using $k$-means, project each of them to the 2D subspace, and plot.\n\n\n```python\n# Plot data, colored by cluster\nplt.figure(figsize=(6, 6))\nfor i in range(len(iris_clusters)):\n cluster_data = iris_clusters[i].copy()\n # we have to apply the same standardisation transformation as we did to the original data before doing PCA\n for j in range(len(cluster_data[0])):\n cluster_data[:,j] = (cluster_data[:,j] - iris_data[:,j].mean()) / iris_data[:,j].std()\n proj_cluster = cluster_data.dot(proj_matrix)\n plt.scatter(proj_cluster[:, 0], proj_cluster[:, 1])\nplt.axis('equal')\nplt.show()\n```\n\nAs you can see, we do a fairly good job at separating the data into the correct classes!\n\n# Gaussian Mixture Model\n\nLet's now work towards building up to a probabilistic generalisation of $k$-means known as a _Gaussian Mixture Model_. $k$-means is great for its simplicity, but it suffers from (at least) the following two problems:\n\n1) Each datum is given a \"hard\" assignment to a cluster. If for example a datum straddles the boundary of two clusers, we might in fact want to express an uncertain belief of which cluster the datum belongs to.\n\n2) The only thing that is important in determining which cluster a datum belongs to is which centre it is nearest to. The shape of the cluster is not taken into account at all, leading to somewhat strage clusterings such as the one below.\n\n\n```python\nmean1 = np.array([0,0])\ncov1 = np.array([[1,0],[0,1]])\ndata1 = sample_gaussian(mean1, cov1, 100)\n\nmean2 = np.array([0,10])\ncov2 = np.array([[1,0],[0,1]])\ndata2 = sample_gaussian(mean2, cov2, 100)\n\nmean3 = np.array([10,5])\ncov3 = np.array([[1,0],[0,50]])\ndata3 = sample_gaussian(mean3, cov3, 100)\n\nclusters = kmeans(np.concatenate([data1, data2, data3]), 3)\nplotter(clusters)\n```\n\nIn the case above, we'd really want to have three clusters - the two circles on the left, and the long sausage on the right. 
However, because the sausage is very long, the points near the ends will be closer to the circles than the sausage centre. The Gaussian Mixture Model will come to our rescue.\n\n## The generative model\n\nIn a Gaussian Mixture Model (henceforth GMM) we assume that there are $K$ distinct classes, each of which follow different Gaussian distributions with parameters $\\mu_k$ and $\\Sigma_k$. We assume a prior discrete distribution $Dis(\\pi)$ with parameter $\\pi$ over the class $s$ for each datum.\n\n$$ \n\\begin{align*}\ns_i & \\sim Dis(\\pi) \\\\\nX_i & \\sim \\mathcal{N}\\left( \\mu_{s_i}, \\Sigma_{s_i}\\right)\n\\end{align*}\n$$\n\nHere's some code that can generate data from this model.\n\n\n```python\ndef sample_GMM(mu, sigma, pi, num_data):\n # mu, sigma and pi are lists of length k\n assert len(mu) == len(sigma)\n assert len(mu) == len(pi)\n \n K = len(mu)\n D = len(mu[0])\n X = np.zeros([num_data, D])\n \n # random sample from multinomial distribution to see how many samples we should generate for each class\n class_nums = np.random.multinomial(num_data, pi)\n for k in range(K):\n # Fill in X in order of classes.... we do this so that we only have to call sample_gaussian K times\n lower = sum(class_nums[0:k])\n upper = sum(class_nums[0:k+1])\n X[lower:upper,:] = \\\n sample_gaussian(covariance=sigma[k], mean=mu[k], n_data=class_nums[k])\n \n # Shuffle rows of X so that classes are not in order\n np.random.shuffle(X)\n return X\n \n```\n\n\n```python\nmu = [np.array([-20,-20]), np.array([10,10]), np.array([-10,10])]\nsigma = [np.array([[10,0],[0,10]]), np.array([[10,0],[0,1]]), np.array([[1,0],[0,10]])]\npi = [0.7, 0.2, 0.1]\n\nX = sample_GMM(mu, sigma, pi, 1000)\n\nplt.scatter(X[:,0], X[:,1])\nplt.show()\n```\n\nGiven any observation $X_i$ and values for the parameters $\\pi, \\mu_k, \\Sigma_k$ for all $k$, we can calculate the posterior probability that $X_i$ belongs to class $k$ using Bayes' rule:\n\n$$ \n\\begin{align*}\nP(s_i = k | X_i) &= \\frac{P(X_i | s_i = k) P(s_i = k)}{P(X_i)} \\\\\n &\\propto \\mathcal{N}\\left(X_i | \\mu_k, \\Sigma_k \\right) \\pi_k\n\\end{align*}\n$$\n\n\nHow should we choose the values for the parameters $\\pi, \\mu_k$ and $\\Sigma_k$? We'll optimise the margingal likelihood (the probability of the data given the parameters). To make the maths a bit simpler, we'll actually consider the marginal log-likelihood. This has the advantage of products into sums (making things more numerically stable when we would otherwise have the product of lots of small probabilities), and since $log$ is a monotonically increasing function, maximising the marginal log-likelihood will give the same parameters as maximising the marginal likelihood.\n\n$$ \n\\begin{align*}\n\\log P(X |\\pi, \\mu, \\Sigma) &= \\sum_{i=1}^{N} \\log P(X_i |\\pi, \\mu, \\Sigma) \\\\ \n &= \\sum_{i=1}^{N} \\log \\left( \\sum_k P(X_i |\\mu_k, \\Sigma_k) P(s_i = k) \\right) \\\\\n &= \\sum_{i=1}^{N} \\log \\left( \\sum_k \\mathcal{N}(X_i |\\mu_k, \\Sigma_k) \\pi_k \\right)\n\\end{align*}\n$$\n\nUnfortunately, this would be difficult to directly optimise. Instead, we can use the **EM algorithm** to optimise a lower bound on this. \n\n# The EM algorithm\n\nWe'll breifly explain the EM algorithm here. For models with latent varibles such as the above GMM, it can be difficult to optimise the marginal log-likelihood with respect to the parameters. However, if we knew the latent variables for each observation, this would be easy. 
Similarly, if we knew the parameters, then finding the latent variables for each observations would be easy. EM works by iteratively performing these two updates, optimising a lower bound on the log-likelihood called the _Free Energy_. The Free Energy is a function of the parameters $\\theta$ of the model and an approximating distribution $Q(Y)$ for the posteriors over the latents:\n\n$$\n\\begin{align*}\n\\log P(X | \\theta) &= \\log \\int P(X, Y | \\theta) dY \\\\\n &= \\log \\int \\frac{P(X, Y| \\theta)}{ Q(Y)} Q(Y) dY \\\\\n &\\geq \\int \\log \\left(\\frac{P(X, Y| \\theta)}{ Q(Y)}\\right) Q(Y) dY =: F(Q,\\theta)\n\\end{align*}\n$$\n\nIn an __Expectation__ step, we maximise $F$ with respect to $Q$, keeping $\\theta$ fixed. Noting that\n\n$$\n\\begin{align*}\nF(Q,\\theta) &= \\int \\log \\left(\\frac{P(X, Y| \\theta)}{ Q(Y)}\\right) Q(Y) dY \\\\\n &= \\int \\log \\left( \\frac{P(Y|X, \\theta) P(X|\\theta)}{Q(Y)} \\right) Q(Y) dY \\\\\n &= \\log P(X|\\theta) + \\int \\log \\left( \\frac{P(Y|X, \\theta)}{Q(Y)} \\right) Q(Y) dY \\\\\n &= \\log P(X|\\theta) - KL\\left[Q(Y) || P(Y|X, \\theta) \\right]\\\\\n\\end{align*}\n$$\n\nSince KL divergences are always greater than zero, maximising $F$ with respect to $Q$ reduces to minimising the KL on the right hand side. This term equals zero if and only if $Q(Y)$ is set to be the posteriors of the latents given the current parameters.\n\nIn a __Maximisation__ step, we maximise $F$ with respect to $\\theta$, keeping $Q$ fixed. Noting that \n\n\n$$\n\\begin{align*}\nF(Q,\\theta) &= \\int \\log \\left(\\frac{P(X, Y| \\theta)}{ Q(Y)}\\right) Q(Y) dY \\\\\n &= \\int \\log \\left( P(X, Y| \\theta) \\right) Q(Y) dY - \\int \\log \\left(Q(Y)\\right) Q(Y) dY \\\\\n &= \\mathbb{E}_{Y\\sim Q} \\left[\\log P(X, Y| \\theta)\\right] + H[Q]\n\\end{align*}\n$$\n\n\nwhere $H[Q]$ is the _entropy_ of $Q$, we see maximising $F$ with respect to $\\theta$ reduces to maximising $\\mathbb{E}_{Y\\sim Q} \\left[\\log P(X, Y| \\theta)\\right]$\n\n## EM for GMM\n\nIt's relatively straightforward to derive the E and M updates for the Gaussian Mixture Model.\n\n__E-step__:\n\nFor the $i$th datum $X_i$, the posterior probability it belongs to class $k$ is \n\n\n$$ \n\\begin{align*}\nP(s_i = k | X_i) &= \\frac{P(X_i | s_i = k) P(s_i = k)}{P(X_i)} \\\\\n &\\propto \\mathcal{N}\\left(X_i | \\mu_k, \\Sigma_k \\right) \\pi_k\n\\end{align*}\n$$\n\nWe'll write $r_{ik} = P(s_i = k | X_i)$ for shorthand (r stands for _responsibility)\n\n__M-step__:\n\nGiven the posteriors over the latents, we have that.... 
\n$$\n\\begin{align*}\n\\mathbb{E}_{Y\\sim Q} \\left[\\log P(X, Y| \\theta)\\right] &= \\sum_{i,k} r_{ik} \\log\\left( \\pi_k \\mathcal{N}(X_i| \\mu_k, \\Sigma_k) \\right) \\\\\n&= \\sum_{i,k} r_{ik} \\left[ \\log(\\pi_k) - \\frac{1}{2}\\log\\left( \\det\\left( 2\\pi\\Sigma_k \\right) \\right) - \\frac{1}{2} (X_i - \\mu_k) \\Sigma_k^{-1} (X_i - \\mu_k)^{\\intercal} \\right]\\\\\n\\end{align*}\n$$\n\nMaximising this with respect to $\\pi_k$ (subject to $\\sum_k \\pi_k = 1$), $\\mu_k$ and $\\Sigma_k$ gives the updates:\n\n$$\\pi_k = \\frac{\\sum_i r_{ik}}{\\sum_{ik} r_{ik}}$$\n\n$$ \\mu_k = \\frac{\\sum_i r_{ik} X_i}{\\sum_i r_{ik}} $$\n\n$$\\Sigma_k = \\frac{\\sum_i r_{ik} \\left(X_i - \\mu_k \\right)\\left(X_i - \\mu_k \\right)^\\intercal}{\\sum_i r_{ik}} $$\n\n\nThe following code implements these updates, which we can wrap together into a function that iterates them until all of the parameters have stopped changing.\n\n\n```python\n\n```\n\n\n```python\ndef E_step(X, pi, mu, Sigma):\n K = len(pi)\n N = len(X)\n r = np.zeros([N,K])\n \n # for each component, calculate probability density for all X...\n for k in range(K):\n r[:,k] = scipy.stats.multivariate_normal.pdf(cov=Sigma[k], mean=mu[k], x=X)\n # ... then normalise each row\n r = r / r.sum(axis=1)[:,None]\n return r\n \ndef M_step(X, r):\n N = len(r)\n K = len(r[0])\n \n pi = list(r.sum(axis=0) / r.sum())\n mu = list((r.T).dot(X)/r.sum(axis=0)[:,None])\n Sigma = [None]*K\n \n for k in range(K):\n mu_k = mu[k]\n Sigma[k] = ((r[:,k][:,None]*(X - mu[k])).T).dot(X-mu[k])/ r[:,k].sum()\n return pi, mu, Sigma\n\n\ndef fit_GMM(X, K):\n # K = number of components\n # Initialise means to be random K data points\n mu = list(X[np.random.choice(len(X),K,replace=False)])\n sigma = [5*np.identity(len(X[0]))] * K\n pi = [1./K] * K\n \n\n r = E_step(X, pi, mu, sigma)\n \n mu_old = mu\n sigma_old = sigma\n pi_old = pi\n r_old = r \n \n pi, mu, sigma = M_step(X, r) \n\n while ((np.array(mu_old) - np.array(mu))**2).sum() + \\\n ((np.array(sigma_old) - np.array(sigma))**2).sum() + \\\n ((np.array(pi_old) - np.array(pi))**2).sum() + \\\n ((np.array(r_old) - np.array(r))**2).sum()> 1e-25:\n mu_old = mu\n sigma_old = sigma\n pi_old = pi\n r_old = r \n \n r = E_step(X, pi, mu, sigma)\n pi, mu, sigma = M_step(X, r)\n \n\n return pi, mu, sigma\n \n```\n\nWe'll also write a plotting function that will help us visualise the clusters that we find.\n\n\n```python\ndef plotter_GMM(X, mu, sigma):\n fig = plt.figure(figsize=(6, 6))\n ax = fig.add_subplot(111, aspect='equal')\n ax.scatter(X[:,0], X[:,1])\n for k in range(len(mu)):\n eigvals, eigvecs = np.linalg.eig(sigma[k])\n eigvals = np.sqrt(eigvals)\n ell = Ellipse(xy=mu[k],\n width=eigvals[0]*4, height=eigvals[1]*4,\n angle=np.rad2deg(np.arccos(eigvecs[0, 0])),\n color='black')\n ell.set_facecolor('none')\n ax.add_artist(ell)\n\n plt.axis('equal')\n plt.show()\n```\n\n\n```python\nnp.random.seed(0)\npi, mu, sigma = fit_GMM(X, 3)\nprint pi\n\nplotter_GMM(X, mu, sigma)\n```\n\nNote that the clusters that our algorithm finds are highly dependent on the initial choice of parameters. In the above case, it would appear that two of the initial cluster seeds were chosen to be from one of the clusters, and since EM is vulnerable to getting stuck in local minima, it is not able to move either of the centres away. \n\nA sensible idea would be to try multiple choices for initial values. 
At the end of each iteration, we could calculate the marginal log-likelihood corresponding to the choice of parameters we found, and choose the final parameters that give the largest marginal log-likelihood.\n\nWhen we have a good choice of initial parameters though, the algorithm appears to perform well:\n\n\n```python\nnp.random.seed(7)\npi, mu, sigma = fit_GMM(X, 3)\nprint pi\n\nplotter_GMM(X, mu, sigma)\n```\n\nAlso, we see that the earlier problem with $k$-means is resolved using our more advanced GMM:\n\n\n```python\nmean1 = np.array([0,0])\ncov1 = np.array([[1,0],[0,1]])\ndata1 = sample_gaussian(mean1, cov1, 100)\n\nmean2 = np.array([0,10])\ncov2 = np.array([[1,0],[0,1]])\ndata2 = sample_gaussian(mean2, cov2, 100)\n\nmean3 = np.array([10,5])\ncov3 = np.array([[1,0],[0,50]])\ndata3 = sample_gaussian(mean3, cov3, 100)\n\ndata = np.concatenate([data1, data2, data3])\npi, mu, sigma = fit_GMM(data, 3)\nplotter_GMM(data, mu, sigma)\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "cedd6b1fb8178851a001e9bfd41d7b593103f2eb", "size": 477070, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "k-means, GMMs and Gaussians.ipynb", "max_stars_repo_name": "paruby/ml-basics", "max_stars_repo_head_hexsha": "26456a9386eedb8ab9026205a771e54053baf1e5", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 36, "max_stars_repo_stars_event_min_datetime": "2018-06-27T05:44:09.000Z", "max_stars_repo_stars_event_max_datetime": "2021-08-30T06:39:07.000Z", "max_issues_repo_path": "k-means, GMMs and Gaussians.ipynb", "max_issues_repo_name": "paruby/ml-basics", "max_issues_repo_head_hexsha": "26456a9386eedb8ab9026205a771e54053baf1e5", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "k-means, GMMs and Gaussians.ipynb", "max_forks_repo_name": "paruby/ml-basics", "max_forks_repo_head_hexsha": "26456a9386eedb8ab9026205a771e54053baf1e5", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2019-12-02T05:08:02.000Z", "max_forks_repo_forks_event_max_datetime": "2020-04-27T12:50:49.000Z", "avg_line_length": 413.4055459272, "max_line_length": 46498, "alphanum_fraction": 0.9247992957, "converted": true, "num_tokens": 6409, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9219218370002787, "lm_q2_score": 0.8962513745192024, "lm_q1q2_score": 0.826273713610768}} {"text": "# Radioactive decay\n\n- toc: false\n- branch: master\n- badges: false\n- comments: false\n- categories: [mathematics, numerical recipes]\n- hide: true\n\n----\nQuestions:\n- How can I describe radioactive decay using a first-order ODE? \n- What are initial conditions and why are they important?\n----\n\n----\nObjectives:\n- Map between physical notation for a particular problem and the more general notation for all differential equations\n- Solve a linear, first order, separable ODE using integration\n- Understand the physical importance of initial conditions \n----\n\n\n\n### Radioactive decay can be modelled a linear, first-order ODE\n\nAs our first example of an ODE we will model radioactive decay using a differential equation.\n\nWe know that the decay rate is proportional to the number of atoms present. 
Mathematically, this relationship can be expressed as:\n\n\\begin{equation}\n\\frac{d N}{d t} = -\\lambda N\n\\end{equation}\n\nNote that we could choose to use different variables, for example:\n\n\\begin{equation}\n\\frac{d y}{d x} = cy\n\\end{equation}\n\nHowever, we try to use variables connected to the context of the problem, for example $N$ for the number of atoms.\n\n\nFor example, if we know that 10% of atoms will decay per second we could write:\n\n\\begin{equation}\n\\frac{d N}{d t} = -0.1 N\n\\end{equation}\n\nwhere $N$ is the number of atoms and $t$ is time measured in seconds.\n\nThis equation is linear and first-order.\n\n| physical notation | generic notation |\n|-----|-----|\n|number of atoms $N$ | dependent variable $y$|\n| time $t$ | independent variable $x$|\n| decay rate $\\frac{dN}{dt}$ | differential $\\frac{dy}{dx}$|\n| constant of proportionality $\\lambda=0.1$ | parameter $c$ |\n\n### The equation for radioactive decay is separable and has an analytic solution\n\nThe radioactive decay equation is separable. For example,\n\n\\begin{equation}\n\\frac{d N}{d t} = -\\lambda N\n\\end{equation}\n\ncan be separated as\n\n\\begin{equation}\n\\frac{dN}{N} = -\\lambda dt.\n\\end{equation}\n\nWe can then integrate each side:\n\n\\begin{equation}\n\\ln N = -\\lambda t + const.\n\\end{equation}\n\nand solve for N:\n\n\\begin{equation}\nN = e^{-\\lambda t}e^{\\textrm{const.}} \n\\end{equation}\n\n\n> Note: Remember that $\\int \\frac{1}{x} dx = \\ln x + \\textrm{const.}$\n\n### To model a physical system an initial value has to be provided\n\nAt the beginning (when $t=0$):\n\n\\begin{equation}\nN = e^{-\\lambda t}e^{\\textrm{const.}} = e^{0}e^{\\textrm{const.}} = e^{\\textrm{const.}}\n\\end{equation}\n\nSo we can identify $e^{\\textrm{const.}}$ as the amount of radioactive material that was present in the beginning. We denote this starting amount as $N_0$.\n\nSubstituting this back into the solution above, the final solution can be more meaningfully written as:\n\n\\begin{equation}\nN = N_0 e^{-\\lambda t}\n\\end{equation}\n\nWe now have not just one solution, but a whole class of solutions that are dependent on the initial amount of radioactive material $N_0$. \n\nRemember that not all mathematical solutions make physical sense. To model a physical system, this initial value (also known as initial condition) has to be provided alongside the constant of proportionality $\\lambda$.\n\n\n\n### ODEs can have initial values or boundary values\n\nODEs have either initial values or boundary values. For example, using Newton's second law we could calculate the distance $x$ an object travels under the influence of gravity over time $t$:\n\n\\begin{equation}\n\\frac{\\mathrm{d}^2x}{\\mathrm{d}t^2} = -g\n\\end{equation}\n\nAn initial value problem would be where we know the starting position and velocity. A boundary value problem would be where we specify the position of the ball at times $t=t_0$ and $t=t_1$. 
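\n\nFor example, the two cases can be written out explicitly (the symbols $x_0$, $v_0$, $x_1$, $t_0$ and $t_1$ here are just illustrative values):\n\n\\begin{equation}\n\\textrm{initial values: } \\quad x(0)=x_0, \\quad \\frac{\\mathrm{d}x}{\\mathrm{d}t}(0)=v_0\n\\end{equation}\n\n\\begin{equation}\n\\textrm{boundary values: } \\quad x(t_0)=x_0, \\quad x(t_1)=x_1\n\\end{equation}\n\n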
For second-order ODEs (such as acceleration under gravity) we need to provide two initial/boundary conditions, for third-order ODEs we would need to provide three, and so on.\n\n\n-----\nKeypoints:\n- Radioactive decay can be modelled a linear, first-order ODE\n- The equation for radioactive decay is separable and has an analytic solution\n- To model a physical system an initial value has to be provided\n- The number of initial conditions depends on the order of the differential equation\n-----\n\n---\n\nDo [the quick-test](https://nu-cem.github.io/CompPhys/2021/08/02/Radioactive-Decay-Qs.html).\n\nBack to [Modelling with Ordinary Differential Equations](https://nu-cem.github.io/CompPhys/2021/08/02/ODEs.html).\n\n---\n", "meta": {"hexsha": "b752a404a3887f590720a6ec1acf49fb178f2598", "size": 8406, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "_notebooks/2021-08-02-Radioactive-Decay.ipynb", "max_stars_repo_name": "charlo66609/CompPhys", "max_stars_repo_head_hexsha": "a28ad04f44e314adf238d4ea9dc8d73af4bce2ce", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "_notebooks/2021-08-02-Radioactive-Decay.ipynb", "max_issues_repo_name": "charlo66609/CompPhys", "max_issues_repo_head_hexsha": "a28ad04f44e314adf238d4ea9dc8d73af4bce2ce", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 17, "max_issues_repo_issues_event_min_datetime": "2021-10-06T08:11:43.000Z", "max_issues_repo_issues_event_max_datetime": "2021-11-24T13:46:08.000Z", "max_forks_repo_path": "_notebooks/2021-08-02-Radioactive-Decay.ipynb", "max_forks_repo_name": "charlo66609/CompPhys", "max_forks_repo_head_hexsha": "a28ad04f44e314adf238d4ea9dc8d73af4bce2ce", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-11-03T10:11:28.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-03T10:11:28.000Z", "avg_line_length": 28.7876712329, "max_line_length": 291, "alphanum_fraction": 0.571853438, "converted": true, "num_tokens": 1244, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9219218391455084, "lm_q2_score": 0.896251366205709, "lm_q1q2_score": 0.8262737078690418}} {"text": "```python\nimport sympy as sm\nfrom scipy.misc import derivative\n```\n\n\n```python\nx, y, z = sm.symbols('x y z')\n```\n\n\n```python\n#Function - 3x**2\nsm.diff(3*x**2)\n```\n\n\n\n\n$\\displaystyle 6 x$\n\n\n\n\n```python\nsm.diff(2*x)\n```\n\n\n\n\n$\\displaystyle 2$\n\n\n\n\n```python\ndef myFunc(x):\n return 2*x\n```\n\n\n```python\nderivative(myFunc,4)\n```\n\n\n\n\n 2.0\n\n\n\n\n```python\n#Function 3x^2+5x\nsm.diff(3*x**2+5*x)\n```\n\n\n\n\n$\\displaystyle 6 x + 5$\n\n\n\n\n```python\n#Product Rule\nsm.diff((x**2 + 1) * sm.cos(x))\n```\n\n\n\n\n$\\displaystyle 2 x \\cos{\\left(x \\right)} - \\left(x^{2} + 1\\right) \\sin{\\left(x \\right)}$\n\n\n\n\n```python\n#Chain Rule\nsm.diff((x**2 - 3*x + 5) ** 3)\n```\n\n\n\n\n$\\displaystyle \\left(6 x - 9\\right) \\left(x^{2} - 3 x + 5\\right)^{2}$\n\n\n\n\n```python\n#Partial Derivative\nf = x**2 * y * z**5\n```\n\n\n```python\nsm.diff(f,x)\n```\n\n\n\n\n$\\displaystyle 2 x y z^{5}$\n\n\n\n\n```python\nsm.diff(f,y)\n```\n\n\n\n\n$\\displaystyle x^{2} z^{5}$\n\n\n\n\n```python\nsm.diff(f,z)\n```\n\n\n\n\n$\\displaystyle 5 x^{2} y z^{4}$\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "45adf8f26f8d89789afdf766faeca9657fde2924", "size": 4823, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Multivariate Calculus.ipynb", "max_stars_repo_name": "perceptrons-ai/Computational-Mathematics", "max_stars_repo_head_hexsha": "ffb88b4de735735696eba652eee24c24758f33cd", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-02-23T05:37:30.000Z", "max_stars_repo_stars_event_max_datetime": "2021-02-23T05:37:30.000Z", "max_issues_repo_path": "Multivariate Calculus.ipynb", "max_issues_repo_name": "perceptrons-ai/Computational-Mathematics", "max_issues_repo_head_hexsha": "ffb88b4de735735696eba652eee24c24758f33cd", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Multivariate Calculus.ipynb", "max_forks_repo_name": "perceptrons-ai/Computational-Mathematics", "max_forks_repo_head_hexsha": "ffb88b4de735735696eba652eee24c24758f33cd", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 17.225, "max_line_length": 106, "alphanum_fraction": 0.4277420693, "converted": true, "num_tokens": 374, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9697854120593483, "lm_q2_score": 0.8519528094861981, "lm_q1q2_score": 0.8262114064026921}} {"text": "# Introduction to Graph Matching\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n```\n\nThe graph matching problem (GMP), is meant to find an alignment of nodes between two graphs that minimizes the number of edge disagreements between those two graphs. Therefore, the GMP can be formally written as an optimization problem: \n\n\\begin{equation}\n\\begin{aligned}\n\\min & {\\;-trace(APB^T P^T)}\\\\\n\\text{s.t. 
} & {\\;P \\: \\epsilon \\: \\mathcal{P}} \\\\\n\\end{aligned}\n\\end{equation}\n\nWhere $\\mathcal{P}$ is the set of possible permutation matrices.\n\nThe Quadratic Assignment problem is a combinatorial opimization problem, modeling following the real-life problem: \n\n\"Consider the problem of allocating a set of facilities to a set of locations, with the\ncost being a function of the distance and flow between the facilities, plus costs associated\nwith a facility being placed at a certain location. The objective is to assign each facility\nto a location such that the total cost is minimized.\" [1]\n\nWhen written as an optimization problem, the QAP is represented as:\n\n\\begin{equation}\n\\begin{aligned}\n\\min & {\\; trace(APB^T P^T)}\\\\\n\\text{s.t. } & {\\;P \\: \\epsilon \\: \\mathcal{P}} \\\\\n\\end{aligned}\n\\end{equation}\n\nSince the GMP objective function is the negation of the QAP objective function, any algorithm that solves one can solve the other. \n\n\nThis class is an implementation of the Fast Approximate Quadratic Assignment Problem (FAQ), an algorithm designed to efficiently and accurately solve the QAP, as well as GMP. \n\n[1] Optimierung, Diskrete & Er, Rainer & Ela, A & Burkard, Rainer & Dragoti-Cela, Eranda & Pardalos, Panos & Pitsoulis, Leonidas. (1998). The Quadratic Assignment Problem. Handbook of Combinatorial Optimization. 10.1007/978-1-4613-0303-9_27. \n\n\n```python\nfrom graspy.match import GraphMatch as GMP\nfrom graspy.simulations import er_np\n```\n\nFor the sake of this tutorial, we will use FAQ to solve the GMP for two graphs where we know a solution exists. \nBelow, we sample a binary graph (undirected and no self-loops) $G_1 \\sim ER_{NP}(50, 0.3)$.\nThen, we randomly shuffle the nodes of $G_1$ to initiate $G_2$.\nThe number of edge disagreements as a result of the node shuffle is printed below.\n\n\n```python\nn = 50\np = 0.3\n\nnp.random.seed(1)\nG1 = er_np(n=n, p=p)\nnode_shuffle_input = np.random.permutation(n)\nG2 = G1[np.ix_(node_shuffle_input, node_shuffle_input)]\nprint(\"Number of edge disagreements: \", np.sum(abs(G1-G2)))\n```\n\n## Visualize the graphs using heat mapping\n\n\n```python\nfrom graspy.plot import heatmap\nheatmap(G1, cbar=False, title = 'G1 [ER-NP(50, 0.3) Simulation]')\nheatmap(G2, cbar=False, title = 'G2 [G1 Randomly Shuffled]')\n```\n\nBelow, we create a model to solve GMP. The model is then fitted for the two graphs $G_1$ and $G_2$. One of the option for the algorithm is the starting position of $P$. In this case, the class default of barycenter intialization is used, or the flat doubly stochastic matrix. The number of edge disagreements is printed below. 
With zero edge disagreements, we see that FAQ is successful in unshuffling the graph.\n\n\n```python\ngmp = GMP()\ngmp = gmp.fit(G1,G2)\nG2 = G2[np.ix_(gmp.perm_inds_, gmp.perm_inds_)]\nprint(\"Number of edge disagreements: \", np.sum(abs(G1-G2)))\n```\n\n\n```python\nheatmap(G1, cbar=False, title = 'G1[ER-NP(50, 0.3) Simulation]')\nheatmap(G2, cbar=False, title = 'G2[ER-NP(50, 0.3) Randomly Shuffled] unshuffled')\n```\n", "meta": {"hexsha": "fa0aed45832dadc3771c7338cfefd896513e665c", "size": 5456, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "docs/tutorials/matching/faq.ipynb", "max_stars_repo_name": "spencer-loggia/graspologic", "max_stars_repo_head_hexsha": "cf7ae59289faa8f5538e335e2859cc2a843f2839", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "docs/tutorials/matching/faq.ipynb", "max_issues_repo_name": "spencer-loggia/graspologic", "max_issues_repo_head_hexsha": "cf7ae59289faa8f5538e335e2859cc2a843f2839", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/tutorials/matching/faq.ipynb", "max_forks_repo_name": "spencer-loggia/graspologic", "max_forks_repo_head_hexsha": "cf7ae59289faa8f5538e335e2859cc2a843f2839", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 32.0941176471, "max_line_length": 418, "alphanum_fraction": 0.5997067449, "converted": true, "num_tokens": 953, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9362850128595114, "lm_q2_score": 0.8824278710924295, "lm_q1q2_score": 0.8262039906333666}} {"text": "\n
\n\n```python\n# Importing some python libraries.\nimport numpy as np\nfrom numpy.random import randn,rand\nimport matplotlib.pyplot as pl\n\nfrom matplotlib.pyplot import plot\n\nimport seaborn as sns\n%matplotlib inline\n\n# Fixing figure sizes\nfrom pylab import rcParams\nrcParams['figure.figsize'] = 10,5\n\n\n\nsns.set_palette('Reds_r')\n```\n\n# Reaction Network Homework\n\nIn this homework, we will study a very simple set of reactions by modelling it in three different ways. First, we shall employ an ODE model called the **Reaction Rate Equation**. Then, we will solve the **Chemical Langevin Equation** and, finally, we will simulate the exact model by \"solving\" the **Chemical Master Equation**. \n\nThe reaction network of choice shall be a simple birth-death process, described by the relations: \n\n$$\n\\begin{align}\n\\emptyset \\stackrel{a}{\\to} X,\\\\\nX \\stackrel{\\mu X}{\\to} \\emptyset.\n\\end{align}\n$$\n\n$X$ here is the population number.\n\nThroughout, we shall use $a=10$ and $\\mu=1.0$. \n\n## Reaction Rate Equation\n\nThe reaction rate equation corresponding to the system is \n\n$$\n\\begin{align}\n\\dot{x}=a-\\mu\\cdot x,\\\\\nx(0)=x_0.\n\\end{align}\n$$\n\nAs this is a linear equation, we can solve it exactly, with solution\n\n$$\nx(t) = a/\\mu+(x_0-a/\\mu) e^{-\\mu t}\n$$\n\n\n\n\n```python\n# Solution of the RRE\ndef x(t,x0=3,a=10.0,mu=1.0):\n return (x0-a/mu)*np.exp(-t*mu)+a/mu\n\n```\n\nWe note that there is a stationary solution, $x(t)=a/\\mu$. From the exponential in the solution, we can see that this is an attracting fixed point.\n\n\n```python\nt = np.linspace(0,3)\nx0list = np.array([0.5,1,15])\n\nsns.set_palette(\"Reds\",n_colors=3)\n\nfor x0 in x0list: \n pl.plot(t,x(t,x0),linewidth=4)\n\npl.title('Population numbers for different initial conditions.', fontsize=20)\npl.xlabel('Time',fontsize=20)\n```\n\n## Chemical Langevin Equation\n\nNext, we will model the system by using the CLE. For our particular birth/death process, this will be \n\n$$\ndX_t=(a-\\mu\\cdot X_t)dt+(\\sqrt{a}-\\sqrt{\\mu\\cdot X_t})dW.\n$$\n\nTo solve this, we shall use the Euler-Maruyama scheme from the previous homework. We fix a positive $\\Delta t$. Then, the scheme shall be: \n\n$$\nX_{n+1}=X_n+(a-\\mu\\cdot X_n)\\Delta t+(\\sqrt{a}-\\sqrt{\\mu\\cdot X_n})\\cdot \\sqrt{\\Delta t}\\cdot z,\\ z\\sim N(0,1).\n$$\n\n\n```python\ndef EM(xinit,T,Dt=0.1,a=1,mu=2):\n '''\n Returns the solution of the CLE with parameters a, mu\n \n Arguments\n =========\n xinit : real, initial condition.\n Dt : real, stepsize of the Euler-Maruyama.\n T : real, final time to reach.\n a : real, parameter of the RHS. \n mu : real, parameter of the RHS.\n \n '''\n \n n = int(T/Dt) # number of steps to reach T\n X = np.zeros(n)\n z = randn(n)\n \n X[0] = xinit # Initial condition\n \n # EM method \n for i in xrange(1,n):\n X[i] = X[i-1] + Dt* (a-mu*X[i-1])+(np.sqrt(a)-np.sqrt(mu*X[i-1]))*np.sqrt(Dt)*z[i]\n \n return X\n \n```\n\nSimilarly to the previous case, here is a run with multiple initial conditions. 
\n\n\n```python\nT = 10 # final time to reach\nDt = 0.01 # time-step for EM\n\n# Set the palette to reds with ten colors\nsns.set_palette('Reds',10)\n\ndef plotPaths(T,Dt):\n n = int(T/Dt)\n t = np.linspace(0,T,n)\n\n xinitlist = np.linspace(10,15,10)\n\n for x0 in xinitlist : \n path = EM(xinit=x0,T=T,Dt=Dt,a=10.0,mu=1.0)\n pl.plot(t, path,linewidth=5)\n\n pl.xlabel('time', fontsize=20)\n pl.title('Paths for initial conditions between 10 and 15.', fontsize=20)\n \n return path\n \npath = plotPaths(T,Dt)\n\nprint 'Paths decay towards', path[np.size(path)-1]\nprint 'The stationary point is', 10.0 # a/mu = 10/1.0\n```\n\nWe notice that the asymptotic behavior of the CLE is the same as that of the RRE. The only notable difference is the initial random kicks in the paths, all because of the stochasticity. \n\n\n## Chemical Master Equation\n\nFinally, we shall simulate the system exactly by using the Stochastic Simulation Algorithm (SSA). \n\n\n```python\ndef SSA(xinit, nsteps, a=10.0, mu=1.0):\n '''\n Using SSA to exactly simulate the death/birth process starting\n from xinit and for nsteps. \n \n a and mu are parameters of the propensities.\n \n Returns\n =======\n path : array-like, the path generated. \n tpath: stochastic time steps\n '''\n \n \n path = np.zeros(nsteps)\n tpath= np.zeros(nsteps)\n \n path[0] = xinit # initial population\n \n u = rand(2,nsteps) # pre-pick all the uniform variates we need\n \n for i in xrange(1,nsteps):\n \n # The propensities will be normalized\n tot_prop = path[i-1]*mu+a\n prob = path[i-1]*mu/tot_prop # probability of death \n \n if(u[0,i]<prob): # a death occurs\n path[i] = path[i-1] - 1\n else: # a birth occurs\n path[i] = path[i-1] + 1\n \n # Exponential waiting time until the next reaction\n tpath[i] = tpath[i-1] - np.log(u[1,i])/tot_prop\n \n return path, tpath\n```\n
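\nAs a quick sanity check, one possible way to exercise the `SSA` function above is to draw a single path and plot it as a step function; the initial population and the number of steps below are illustrative choices only.\n\n\n```python\n# Illustrative driver for the SSA function above (it returns path, tpath).\nxinit = 10 # initial population\nnsteps = 500 # number of reaction events to simulate\n\npath, tpath = SSA(xinit, nsteps, a=10.0, mu=1.0)\n\n# A jump process is most naturally drawn as a step plot.\npl.step(tpath, path, where='post', linewidth=2)\npl.xlabel('time', fontsize=20)\npl.title('One SSA path of the birth-death process', fontsize=20)\n```\n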
Let's look at the $x-y$ relationship\n\n\n```python\n# make a scatter of x,y\nplt.scatter(x[:300],y[:300]) #just the first 300 points\n\nxtest = .2\nplt.plot((-1,xtest),(np.cos(xtest),np.cos(xtest)), c='r')\nplt.plot((xtest,xtest),(-1.5,np.cos(xtest)), c='r')\nxtest = xtest+.1\nplt.plot((-1,xtest),(np.cos(xtest),np.cos(xtest)), c='r')\nplt.plot((xtest,xtest),(-1.5,np.cos(xtest)), c='r')\n\nxtest = 2*np.pi-xtest\nplt.plot((-1,xtest),(np.cos(xtest),np.cos(xtest)), c='g')\nplt.plot((xtest,xtest),(-1.5,np.cos(xtest)), c='g')\nxtest = xtest+.1\nplt.plot((-1,xtest),(np.cos(xtest),np.cos(xtest)), c='g')\nplt.plot((xtest,xtest),(-1.5,np.cos(xtest)), c='g')\n\n\nxtest = np.pi/2\nplt.plot((-1,xtest),(np.cos(xtest),np.cos(xtest)), c='r')\nplt.plot((xtest,xtest),(-1.5,np.cos(xtest)), c='r')\nxtest = xtest+.1\nplt.plot((-1,xtest),(np.cos(xtest),np.cos(xtest)), c='r')\nplt.plot((xtest,xtest),(-1.5,np.cos(xtest)), c='r')\n\nxtest = 2*np.pi-xtest\nplt.plot((-1,xtest),(np.cos(xtest),np.cos(xtest)), c='g')\nplt.plot((xtest,xtest),(-1.5,np.cos(xtest)), c='g')\nxtest = xtest+.1\nplt.plot((-1,xtest),(np.cos(xtest),np.cos(xtest)), c='g')\nplt.plot((xtest,xtest),(-1.5,np.cos(xtest)), c='g')\n\n\nplt.ylim(-1.5,1.5)\nplt.xlim(-1,7)\n```\n\nThe two sets of vertical lines are both separated by $0.1$. The probability $P(a < x < b)$ must equal the probability of $P( cos(b) < y < cos(a) )$. In this example there are two different values of $x$ that give the same $y$ (see green and red lines), so we need to take that into account. For now, let's just focus on the first part of the curve with $x<\\pi$.\n\nSo we can write (this is the important equation):\n\n\\begin{equation}\n\\int_a^b p_x(x) dx = \\int_{y_b}^{y_a} p_y(y) dy \n\\end{equation}\nwhere $y_a = \\cos(a)$ and $y_b = \\cos(b)$.\n\nand we can re-write the integral on the right by using a change of variables (pure calculus)\n\n\\begin{equation}\n\\int_a^b p_x(x) dx = \\int_{y_b}^{y_a} p_y(y) dy = \\int_a^b p_y(y(x)) \\left| \\frac{dy}{dx}\\right| dx \n\\end{equation}\n\nnotice that the limits of integration and integration variable are the same for the left and right sides of the equation, so the integrands must be the same too. Therefore:\n\n\\begin{equation}\np_x(x) = p_y(y) \\left| \\frac{dy}{dx}\\right| \n\\end{equation}\nand equivalently\n\\begin{equation}\np_y(y) = p_x(x) \\,/ \\,\\left| \\, {dy}/{dx}\\, \\right | \n\\end{equation}\n\nThe factor $\\left|\\frac{dy}{dx} \\right|$ is called a Jacobian. When it is large it is stretching the probability in $x$ over a large range of $y$, so it makes sense that it is in the denominator.\n\n\n```python\nplt.plot((0.,1), (0,.3))\nplt.plot((0.,1), (0,0), lw=2)\nplt.plot((1.,1), (0,.3))\nplt.ylim(-.1,.4)\nplt.xlim(-.1,1.6)\nplt.text(0.5,0.2, '1', color='b')\nplt.text(0.2,0.03, 'x', color='black')\nplt.text(0.5,-0.05, 'y=cos(x)', color='g')\nplt.text(1.02,0.1, '$\\sin(x)=\\sqrt{1-y^2}$', color='r')\n```\n\nIn our case:\n\\begin{equation}\n\\left|\\frac{dy}{dx} \\right| = \\sin(x)\n\\end{equation}\n\nLooking at the right-triangle above you can see $\\sin(x)=\\sqrt{1-y^2}$ and finally there will be an extra factor of 2 for $p_y(y)$ to take into account $x>\\pi$. So we arrive at\n\\begin{equation}\np_y(y) = 2 \\times \\frac{1}{2 \\pi} \\frac{1}{\\sin(x)} = \\frac{1}{\\pi} \\frac{1}{\\sin(\\arccos(y))} = \\frac{1}{\\pi} \\frac{1}{\\sqrt{1-y^2}}\n\\end{equation}\n\n Notice that when $y=\\pm 1$ the pdf is diverging. 
This is called a [caustic](http://www.phikwadraat.nl/huygens_cusp_of_tea/) and you see them in your coffee and rainbows!\n\n| | |\n|---|---|\n| | | \n\n\n**Let's check our prediction**\n\n\n```python\ncounts, y_bins, patches = plt.hist(y, bins=50, density=True, alpha=0.3)\npdf_y = (1./np.pi)/np.sqrt(1.-y_bins**2)\nplt.plot(y_bins, pdf_y, c='r', lw=2)\nplt.ylim(0,5)\nplt.xlabel('y')\nplt.ylabel('$p_y(y)$')\n```\n\nPerfect!\n\n## A trick using the cumulative distribution function (cdf) to generate random numbers\n\nLet's consider a different variable transformation now -- it is a special one that we can use to our advantage. \n\\begin{equation}\ny(x) = \\textrm{cdf}(x) = \\int_{-\\infty}^x p_x(x') dx'\n\\end{equation}\n\nHere's a plot of a distribution and cdf for a Gaussian.\n\n(NOte: the axes are different for the pdf and the cdf http://matplotlib.org/examples/api/two_scales.html\n\n\n```python\nfrom scipy.stats import norm\n```\n\n\n```python\nx_for_plot = np.linspace(-3,3, 30)\nfig, ax1 = plt.subplots()\n\nax1.plot(x_for_plot, norm.pdf(x_for_plot), c='b')\nax1.set_ylabel('p(x)', color='b')\nfor tl in ax1.get_yticklabels():\n tl.set_color('b')\n \nax2 = ax1.twinx()\nax2.plot(x_for_plot, norm.cdf(x_for_plot), c='r')\nax2.set_ylabel('cdf(x)', color='r')\nfor tl in ax2.get_yticklabels():\n tl.set_color('r')\n```\n\nOk, so let's use our result about how distributions transform under a change of variables to predict the distribution of $y=cdf(x)$. We need to calculate \n\n\\begin{equation}\n\\frac{dy}{dx} = \\frac{d}{dx} \\int_{-\\infty}^x p_x(x') dx'\n\\end{equation}\n\nJust like particles and anti-particles, when derivatives meet anti-derivatives they annihilate. So $\\frac{dy}{dx} = p_x(x)$, which shouldn't be a surprise.. the slope of the cdf is the pdf.\n\nSo putting these together we find the distribution for $y$ is:\n\n\\begin{equation}\np_y(y) = p_x(x) \\, / \\, \\frac{dy}{dx} = p_x(x) /p_x(x) = 1\n\\end{equation}\n\nSo it's just a uniform distribution from $[0,1]$, which is perfect for random numbers.\n\nWe can turn this around and generate a uniformly random number between $[0,1]$, take the inverse of the cdf and we should have the distribution we want for $x$.\n\nLet's try it for a Gaussian. The inverse of the cdf for a Gaussian is called [ppf](http://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.stats.norm.html)\n\n\n\n```python\nnorm.ppf.__doc__\n```\n\n\n\n\n '\\n Percent point function (inverse of `cdf`) at q of the given RV.\\n\\n Parameters\\n ----------\\n q : array_like\\n lower tail probability\\n arg1, arg2, arg3,... : array_like\\n The shape parameter(s) for the distribution (see docstring of the\\n instance object for more information)\\n loc : array_like, optional\\n location parameter (default=0)\\n scale : array_like, optional\\n scale parameter (default=1)\\n\\n Returns\\n -------\\n x : array_like\\n quantile corresponding to the lower tail probability q.\\n\\n '\n\n\n\n\n```python\n#check it out\nnorm.cdf(0), norm.ppf(0.5)\n```\n\n\n\n\n (0.5, 0.0)\n\n\n\nOk, let's use CDF trick to generate Normally-distributed (aka Gaussian-distributed) random numbers\n\n\n```python\nrand_cdf = np.random.uniform(0,1,10000)\nrand_norm = norm.ppf(rand_cdf)\n```\n\n\n```python\n_ = plt.hist(rand_norm, bins=30, density=True, alpha=0.3)\nplt.xlabel('x')\n```\n\n**Pros**: The great thing about this technique is it is very efficient. 
You only generate one random number per random $x$.\n\n**Cons**: the downside is you need to know how to compute the inverse cdf for $p_x(x)$ and that can be difficult. It works for a distribution like a Gaussian, but for some random distribution this might be even more computationally expensive than the accept/reject approach. This approach also doesn't really work if your distribution is for more than one variable.\n\n## Going full circle\n\nOk, let's try it for our distribution of $y=\\cos(x)$ above. We found \n\n\\begin{equation}\np_y(y) = \\frac{1}{\\pi} \\frac{1}{\\sqrt{1-y^2}}\n\\end{equation}\n\nSo the CDF is (see Wolfram alpha for [integral](http://www.wolframalpha.com/input/?i=integrate%5B1%2Fsqrt%5B1-x%5E2%5D%2FPi%5D) )\n\\begin{equation}\ncdf(y') = \\int_{-1}^{y'} \\frac{1}{\\pi} \\frac{1}{\\sqrt{1-y^2}} = \\frac{1}{\\pi}\\arcsin(y') + C\n\\end{equation}\nand we know that for $y=-1$ the CDF must be 0, so the constant is $1/2$ and by looking at the plot or remembering some trig you know that it's also $cdf(y') = (1/\\pi) \\arccos(y')$.\n\nSo to apply the trick, we need to generate uniformly random variables $z$ between 0 and 1, and then take the inverse of the cdf to get $y$. Ok, so what would that be:\n\\begin{equation}\ny = \\textrm{cdf}^{-1}(z) = \\cos(\\pi z)\n\\end{equation}\n\n**Of course!** that's how we started in the first place, we started with a uniform $x$ in $[0,2\\pi]$ and then defined $y=\\cos(x)$. So we just worked backwards to get where we started. The only difference here is that we only evaluate the first half: $\\cos(x < \\pi)$\n\n\n\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "4a600069d32f0f139540d45f3d63429c2f337ce9", "size": 92162, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "book/distributions/change-of-variables.ipynb", "max_stars_repo_name": "willettk/stats-ds-book", "max_stars_repo_head_hexsha": "06bc751a7e82f73f9d7419f32fe5882ec5742f2f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 41, "max_stars_repo_stars_event_min_datetime": "2020-08-18T12:14:43.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T16:37:17.000Z", "max_issues_repo_path": "book/distributions/change-of-variables.ipynb", "max_issues_repo_name": "willettk/stats-ds-book", "max_issues_repo_head_hexsha": "06bc751a7e82f73f9d7419f32fe5882ec5742f2f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 7, "max_issues_repo_issues_event_min_datetime": "2020-08-19T04:22:24.000Z", "max_issues_repo_issues_event_max_datetime": "2020-12-22T15:18:24.000Z", "max_forks_repo_path": "book/distributions/change-of-variables.ipynb", "max_forks_repo_name": "willettk/stats-ds-book", "max_forks_repo_head_hexsha": "06bc751a7e82f73f9d7419f32fe5882ec5742f2f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 13, "max_forks_repo_forks_event_min_datetime": "2020-08-19T02:57:47.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-03T15:24:07.000Z", "avg_line_length": 155.1548821549, "max_line_length": 20440, "alphanum_fraction": 0.8828150431, "converted": true, "num_tokens": 3136, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9263037323284109, "lm_q2_score": 0.8918110382493035, "lm_q1q2_score": 0.826087893262005}} {"text": "# Week 2\n## Introduction to Object-Oriented Programming\n\nThese weeks exercises starts you working with classes. 
If you want a gentler introduction, exercises for Chapter 7 in Langtangen is recommended.\n\n## Exercise 1 \u2014 Quadratic functions\n\nIn this exercise, we will build on the example given in the lectures of implementing 2nd degree polynomials as objects of a custom defined class. A general 2nd degree polynomial, aka, quadratic function, can be written as:\n\n$$ f(x) = a_2x^2 + a_1x + a_0,$$\n\nwhere the coefficients, $a_2$, $a_1$, and $a_0$ uniquely defines the polynomial.\n\n### Exercise 1a) Defining the `Quadratic` class\n\nCreate a class, `Quadratic`, that represents a general 2nd degree polynomial. Define the following methods:\n* A constructor (`__init__`)\n* A call method (`__call__`)\nThe constructor should take in the three coefficients in order: `a_2`, `a_1`, and `a_0`, and the call method should take the free variable `x`.\n\nYou class should be able to handle the following test script:\n***\n```Python \nf = Quadratic(1, -2, 1)\nx = np.linspace(-5, 5, 101)\nplt.plot(x, f(x))\nplt.show()\n```\n***\nImplement your solution here:\n\n\n```python\n\n```\n\nUse this to test your implementation:\n\n\n```python\ndef test_Quadratic():\n f = Quadratic(1, -2, 1)\n assert abs(f(-1) - 4) < 1e-8\n assert abs(f(0) - 1) < 1e-8\n assert abs(f(1) - 0) < 1e-8\n \n# test_Quadratic() # uncomment this line when testing\n```\n\n### Exercise 1b) Pretty printing\n\nExtend your `Quadratic` class with a string special method (`__str__`) so that you can print a Polynomial object and get the polynomial written out on a readable form. Test by creating a polynomial object and printing it out.\n\n#### Exercise 1c) Adding together polynomials\n\nAdding together two general quadratic functions:\n\n$$f(x) = a_2x^2 + a_1 x + a_0, \\qquad g(x) = b_2x^2 + b_1 x + b_0,$$\ngives a new quadratic function:\n\n$$(f + g)(x) = (a_2 + b_2)x^2 + (a_1 + b_1)x + (a_0 + b_0)$$\n\nImplement this functionality using the addition special method (`__add__`). This method should return a new Quadratic-object, without changing the two that are added together. Your new class should be able to handle the following test script:\n***\n```Python \nf = Quadratic(1, -2, 1)\ng = Quadratic(-1, 6, -3)\n\nh = f + g\nprint(h)\n\nx = np.linspace(-5, 5, 101)\nplt.plot(x, h(x))\nplt.show()\n```\n***\n(Because $a_2 + b_2 = 0$, the resulting plot should be a straight line.)\n\nImplement your solution here:\n\n\n```python\n\n```\n\nUse this to test your implementation:\n\n\n```python\ndef test_Quadratic_add():\n f = Quadratic(1, -2, 1)\n g = Quadratic(-1, 6, -3)\n h = f + g\n a2, a1, a0 = h.coeffs\n assert a2 == 0\n assert a1 == 4\n assert a0 == -2\n \n# test_Quadratic_add() # uncomment this line when testing\n```\n\n### Exercise 1d) Finding the roots\n\nThe roots of a general quadratic function,\n$$f(x) = ax^2+bx+c = 0,$$\nare given by the quadratic formula\n$${\\displaystyle x={\\frac {-b\\pm {\\sqrt {b^{2}-4ac}}}{2a}}.}$$\n\nExtend your `Quadratic` function with a method `.roots()` that finds and returns the real roots of the function (ignore the imaginary ones). 
Return the result as a tuple with 0, 1, or 2 elements.\n\nTest your method on the three polynomials:\n* $2x^2 -2x + 2$\n* $x^2 - 2x + 1$\n* $x^2 -3x + 2$\n\nImplement your solution here:\n\n\n```python\n\n```\n\nUse this to test your implementation:\n\n\n```python\ndef test_Quadratic_root():\n f1 = Quadratic(2, -2, 2)\n f2 = Quadratic(1, -2, 1)\n f3 = Quadratic(1, -3, 2)\n \n assert f1.roots() == ()\n assert abs(f2.roots()[0] - 1) < 1e-8\n assert abs(f3.roots()[0] - 1) < 1e-8 and abs(f3.roots()[1] - 2) < 1e-8\n \n# test_Quadratic_root() # uncomment this line when testing\n```\n\n### Exercise 1e) Finding the intersection of two quadratic functions\n\nExtend your class with a method that finds and returns the intersection points (if any) between two `Quadratic`-objects. It should work as follows:\n\n***\n```Python\nf = Quadratic(1, -2, 1)\ng = Quadratic(2, 3, -2)\n\nprint(f.intersect(g))\n```\n***\n**Hint:** The intersections are all points solving $f(x) = g(x)$, which can be written as $(f-g)(x) = 0$.\n\nTest your solution by plotting the two functions and their intersections:\n\n$$f(x) = x^2 -2x + 1, \\qquad g(x) = 2x^2 + 3x - 2.$$\n\n## Exercise 2 \u2014 A class for general polynomials\n\nWe now turn to looking at general polynomials of degree $n$. These can be written as \n\n$$f(x) = a_{n}x^{n}+a_{n-1}x^{n-1}+\\dotsb +a_{2}x^{2}+a_{1}x+a_{0},$$\nor more compactly as\n$${\\displaystyle \\sum _{k=0}^{n}a_{k}x^{k}}.$$\n\nWe want to make a class that represents such a polynomial, and can take any number of coefficients in. The constructor of such a class could for example take in a list of the coefficients: `[a0, a1, ..., aN]`. However, this list will always have to be of length $N$, and say we want to specify the polynomial $x^{1000} + 1$, it is highly inefficient to pass in such a long list, as most coefficients are actually 0.\n\nA better approach is to use a dictionary, where we use the index as the key and the coefficient as the value. Doing this, we can then specify only the non-zero coefficients, and simply skip those that are 0. So defining $x^{1000} + 1$ would simply be: `Polynomial({0: 1, 1000: 1})`.\n\n\n### Exercise 2a) Defining the Polynomial class\n\nDefine the `Polynomial` class with the following methods\n* A constructor (`__init__`) that takes in the coefficients of the polynomial as a dictionary\n* A call method (`__call__`) that computes f(x) for a given x\n* A string method (`__str__`) for informative printing of the polynomial\n\nYour class should be able to handle the following test script\n\n***\n```Python\ncoeffs = {0: 1, 5:-1, 10:1}\nf = Polynomial(coeffs)\n\nprint(f)\n\nx = linspace(-1, 1, 101)\nplt.plot(x, f(x))\nplt.show()\n```\n****\n\nImplement your solution here:\n\n\n```python\n\n```\n\n### Exercise 2b): Adding general polynomials together\n\nWe now want to be able to add together two general polynomial objects, which should produce a new general polynomial object. Mathematically, this is just an extension of the 2nd degree polynomial case which we saw in exercise (1). 
If we have\n\n$$f(x) = {\\displaystyle \\sum _{k=0}^{m}a_{k}x^{k}}, \\qquad g(x)={\\displaystyle \\sum _{k=0}^{n}b_{k}x^{k}},$$\nthe sum will be defined by\n$$(f + g)(x) = {\\displaystyle \\sum _{k=0}^{\\max(m, n)}(a_{k} + b_{k})x^{k}}.$$\n\nThus, if we add together two polynomials of degree $m$ and $n$, then the sum will have degree $\\max(m, n)$, i.e., the largest of the two.\n\nExtend your class to add this functionality using the addition special method (`__add__`).\n\nThe class should handle the following test case:\n***\n```Python\n\nf = Polynomial({0:1, 5:-7, 10:1})\ng = Polynomial({5:7, 10:1, 15:-3})\n\nprint(f+g)\n```\n***\nWhich should produce the output: $-3x^{15} + 2x^{10} + 1$\n\nImplement your solution here:\n\n\n```python\n\n```\n\n**Hint:** You will need to create a new coefficient dictionary for the new polynomial and add in the coefficients from the two polynomials. This can be slightly tricky getting the keys right. Here `collections.defaultdict` can be useful, but it isn't necessary.\n\n### Exercise 2c) Defining a `AddableDictionary` class\n\nThe previous exercise would have been a lot simpler, if we could simply add two dictionary objects together as follows:\n```Python\na = {0: 2, 1: 3, 2: 4}\nb = {0: -1, 1:3, 2: 3, 3: 2}\nc = a + b\n```\nHowever, if you try to do this, you get an exception:\n> `TypeError: unsupported operand type(s) for +: 'dict' and 'dict'`\n\nThis means that there is no addition special method defined for dictionaries. However, we can extend the normal dictionary class to include this by adding a special method as follows\n```Python\nclass AddableDict(dict):\n def __add__(self, other):\n ...\n```\n\nAdd the necessary code, so that our new `AddableDict` class can add two dictionaries together as follows:\n```Python\na = AddableDict({0: 2, 1: 3, 2: 4})\nb = AddableDict({0: -1, 1:3, 2: 3, 3: 2})\nprint(a + b)\n```\nAnd give the ouput: `{0: 1, 1: 6, 2: 7, 3: 2}`.\n\nImplement your solution here:\n\n\n```python\n\n```\n\nUse this to test your implementation:\n\n\n```python\ndef test_AddableDict():\n a = AddableDict({0: 2, 1: 3, 2: 4})\n b = AddableDict({0: -1, 1:3, 2: 3, 3: 2})\n c = a + b\n assert c[0] == 1\n assert c[1] == 6\n assert c[2] == 7\n assert c[3] == 2\n \n# test_AddableDict() # uncomment this line when testing\n```\n\nHaving made the `AddableDict class`, go back and change the Polynomial constructor, so that even if the user sends in the coefficients as a normal dictionary, it is converted to an `AddableDict` inside the Polynomial. Having done this, rewrite `Polynomial.__add__`, which should be trivial.\n\n### Exercise 2d) Derivative of a polynomial\n\nIt is also the case that the derivative of a polynomial is a polynomial, if we have\n\n$$f(x) = {\\displaystyle \\sum _{k=0}^{m}a_{k}x^{k}},$$\nthen we get\n$$f'(x) = {\\displaystyle \\sum _{k=1}^{m} (a_{k}\\cdot k)x^{k-1}},$$\nwhich can be written as\n$$f'(x) = {\\displaystyle \\sum _{k=0}^{m-1} b_k x^{k}},$$\nwhere $b_{k} = (k+1)a_{k+1}$.\n\nImplement a method, `derivative`, that returns the function $f'(x)$ as a new Polynomial object. 
Test your function by finding the derivative of\n\n$$f(x) = x^{10} - 3x^6 + 2x^2 + 1.$$\n\nImplement your solution here:\n\n\n```python\n\n```\n\nUse this to test your implementation:\n\n\n```python\ndef test_derivative():\n f = Polynomial({10:1, 6:-3, 2:2, 0:1})\n f_deriv = f.derivative()\n assert f_deriv.coeffs == {9:10, 5:-18, 1:4}\n \n# test_derivative() # uncomment this line when testing\n```\n\n### Exercise 2e) Multiplying polynomials\n\nIt is also the case that the *product* of two polynomials form a new polynomial. If we again define\n\n$$f(x) = {\\displaystyle \\sum _{k=0}^{m}a_{k}x^{k}}, \\qquad g(x)={\\displaystyle \\sum _{k=0}^{n}b_{k}x^{k}},$$\nthen the product is given by\n$$(f \\cdot g)(x) = \\left(\\sum _{i = 0}^m a_{i}x^{i}\\right) \\cdot \\left(\\sum _{j=0}^{n} b_{j}x^{j}\\right) = \\sum_{i=0}^m\\sum_{j=0}^n a_i b_j x^{i + j}$$\n\nImplement this functionality using the multiplication special method (`__mul__`). To acomplish this, you will need two nested for-loops over the coefficient dictionaries.\n\nTest your implementation with the code block\n***\n```Python\nf = Polynomial({2: 4, 1: 1})\ng = Polynomial({3: 3, 0: 1})\nprint(f*g)\n```\n***\nWhich should give the output:\n$$(4x^2 + x)(3x^3 + 1) = 12x^5 + 3x^4 + 4x^2 + x$$\n\nImplement your solution here:\n\n\n```python\n\n```\n\nUse this to test your implementation:\n\n\n```python\ndef test_Polynomial_mul():\n f = Polynomial({2: 4, 1: 1})\n g = Polynomial({3: 3, 0: 1})\n h = f*g\n assert h.coeffs == {5:12, 4:3, 2:4, 1:1}\n \n# test_Polynomial_mul() # uncomment this line when testing\n```\n\n## Exercise 3 - Quantum Harmonic Oscillator in One Dimension\n\nThe quantum harmonic oscillator wave function, $\\psi_n (x)$ is a solution to the time-independent Scr\u00f6dinger equation\n$$\\hat{H} \\psi_n (x) = E_n \\psi_n (x)$$\nwhere $\\hat{H}$ is the quantum harmonic oscillator hamiltonian and $E_n$ is the energy at the $n$-th level ($n$ must be a positive integer, $n = 0, 1, 2,$...). \n\nIn this excersise you will implement a `class HOWF` which represents a quantum harmonic oscillator wave function in one dimension. \n\n### Exercise 3a) Hermite polynomials \n\nTo implement the harmonic oscillator wave function, a Hermite Polynomial is needed. The Hermite Polynomials are given by the recursive formula\n\n\\begin{align*}\n H_n(x) &= 2 x H_{n - 1}(x) - 2(n - 1) H_{n - 2}(x) ,\n\\end{align*}\nfor $n = 2, 3, 4, ...$, and inital conditions are \n\\begin{align}\n H_0(x) &= 1 \\\\\n H_1(x) &= 2 x .\n\\end{align}\n\nMake a `class HOWF`. Start by implementing the constructor and the recursive private method `_compute_Hermite(self, n)`. \n\nThe method `_compute_Hermite(self, n)` should return the hermite polynomial as an object of the class `Polynomial`, which you implemented in the previous exercise. The method should be implemented using recursion. \n*Hint: You will easily get error messages if you try to multiply numbers with your polynomial class. Remember that you can consider a constant as a polynomial of the zero-th degree!\n\nThe constructor must take the level $n$ as a paramterer. The constructor should make a call the method `_compute_Hermite(self, n)` to obtain and store the hermite polynomial of the $n$-th order. 
Name the variable of the hermite polynomial `H`.\n\n\nImplement your solution here:\n\n\n```python\n\n```\n\n\n```python\ndef test_Hermite():\n H0 = lambda x: 1\n H1 = lambda x: 2*x\n H2 = lambda x: 4*x*x - 2\n H3 = lambda x: 8*x**3 - 12*x\n H4 = lambda x: 16*x**4 - 48*x**2 + 12\n H5 = lambda x: 32*x**5 - 160*x**3 + 120*x\n H_table = [H0, H1, H2, H3, H4, H5]\n \n tol = 1e-12\n for n in range(0, 6):\n H = HOWF(n).H\n for x in [0, 0.5, 1.0/3, 1, 3/2, 2]:\n expected = H_table[n](x)\n computed = H(x)\n msg = \"The implemented Hermite Polynomial yields unexpected result for n = %d\\\n \\n\\tH(%.2f) = %.13g != %.13g\" %(n, x, expected, computed)\n assert abs(expected - computed) < tol, msg\n \n# test_Hermite() # uncomment this line when testing\n```\n\n### Exercise 3b) The quantum harmonic wave function\n\nIn this exercise you will be asked to implement two functions. The expressions would include the mass of the particle $m$, plancks constant $\\hbar$ and the angular frequency $\\omega$. When computing, it is an advantage to simplify the expressions as much as possible. Therefore we simplifiy the calculations by using dimensionless variables, defined by \n$\\chi = \\sqrt{\\frac{m \\omega}{\\hbar}} \\cdot x$. This will make all the calculation dimensionless. Consequently, the dimensionless energy (in this case scale) is then given by $\\epsilon_n = \\frac{2 E_n}{\\hbar \\omega}$.\n\nThe quantum harmonic wave function of the $n$-th energy level is then given by\n\n\\begin{align*}\n \\psi_n (\\chi) = \\pi^{-\\frac{1}{4}} \\frac{1}{\\sqrt{2^n n!}} H_n(\\chi)\\exp\\left(\\frac{-\\chi^2}{2}\\right)\n\\end{align*}\n\nwhere $H_n$ is the Hermite polynimial of the $n$-th order. Let `HOWF` a callable class by implementing the formula above in the special method `__call__(self, chi)`.\n\nThe energy of the $n$-th level is then given by\n\\begin{align*}\n \\epsilon_n = 2n + 1 .\n\\end{align*}\n\nImplement a method in `HOWF` which calculates the energy of the wave function. Define this method as a property.\n\nImplement your solution here:\n\n\n```python\nfrom numpy import exp, pi\nfrom math import factorial, sqrt\n```\n\n### Exercise 3c) Visualising the wave functions\n\nYou will now reproduce a quite iconic figure of the quantum harmonic oscillator wave function. Plot the wave functions in relative height to their energy, as in plot $\\psi_n(\\chi) + \\epsilon_n$ for $n = 0, 1, 2, ..., N$ in the same plot. \n\nSet $N$ to be at least 15. A sanity check for your implementation of the wave functions is that the number of nodes of $\\psi_n$ should be $n$. (That means that $\\psi_n$ should intersect $\\epsilon_n$ $n$ times on the y-axis, since the wave function has been lifted by $\\epsilon_n$ on the $y$-axis). An appropriate range for $\\chi$ could be $\\chi \\in [-8, 8]$. \n\n\n\nImplement your solution here:\n\n\n```python\n\n```\n\n## Exercise 4 \u2014 Fibonnaci with Memoization\n\nThe Fibonnaci sequence is given by the numbers\n$$1, 1, 2, 3, 5, 8, 13, 21, 34, \\ldots$$\nit is defined by the recursive formula\n$$F_{i} = F_{i-1} + F_{i-2}.$$\n\nFor this defintion to make any sense, the sequence has to start somewhere, so we define that \n\n$$F_0 = 0, \\qquad F_1 = 1.$$\n\n\n\n### Exercise 4a) A Fibonacci function\n\nWrite a *recursive* function `fibonacci(n)`, that returns $F_n$. A *recursive* function is a function that calls itself. To do this, you won't need any explicit loops. 
You will however, need to include the *base-case* of $F_0=0$ $F_1=1$, otherwise the recursion would just continue forever.\n\nVerify your function by writing out $F_1, \\ldots, F_{10}$ and comparing to the sequence above.\n\nImplement your solution here:\n\n\n```python\n\n```\n\nUse this to test your implementation:\n\n\n```python\ndef test_fibonacci():\n computed = []\n for i in range(11):\n computed.append(fibonacci(i))\n assert computed == [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]\n\n# test_fibonacci() # uncomment this line when testing\n```\n\n### Exercise 4b) Improving our algorithm\n\nWe have now implemented a function that finds the $n$'th Fibonnaci number, which is great. However, it is very inefficient. Imagine for example we want to compute $F_{100}$. The first call to the function, creates two new calls: `F(99)` and `F(98)`. Both of these call the function twice again, so we have four function calls there. Each of those four lead to two new ones and so on all the way down to the base case of $n=1$. This means we make something on the order of $2^{100}$ function calls! This means the complexity of calulating $F(n)$ grows exponentially as $n$\u00a0grows, which is horrible. Try drawing a tree to represent the function calls for $n=100$ and you quickly see the absurdity of the situation.\n\nTo improve our Fibonnaci algorithm, we will use a dynamic programming technique called *memoization*. While the name is a bit weird, the idea is fairly simple, we simply make our program remember the answers it has already computed, so that we won't have to compute them again later.\n\nClasses are perfect for making memoized functions, as we can use an internal dictionary to remember old solutions.\n\nFill in the skeleton code below so that the class computes Fibonacci numbers:\n\n\n```python\nclass Fibonacci:\n def __init__(self):\n self.memory = {0: 1, 1: 1}\n \n def __call__(self, n):\n \"\"\"\n if n is in memory\n - return it\n if n is not in memory\n - calcuate it recursively\n - put it into memory\n - return it\n \"\"\"\n pass\n```\n\nIf you program your class correctly, you should see a huge speed up. This is because we now avoid repeating the same calculations uneccesarily. For example, if you try computing $F(100)$ as follows:\n\n\n```python\nfib = Fibonacci()\nprint(fib(100))\n```\n\nThe memoized function will only need to compute the numbers $F(2), \\ldots F(100)$ once, meaning the number of function calls needed is now *linear* instead of exponential! \n\nImplement your solution here:\n\n\n```python\n\n```\n\nSame test to make sure the code still works:\n\n\n```python\n# test_fibonacci() # uncomment this line when testing\n```\n\n### Exercise 4c) Maximum Recursion Depth\n\nTry finding an even larger number, like $F_{100000}$. Does it work? It will probably fail due to a maximum recursion depth. Try to understand what this means and why it happens. Can you think of a work-around for this problem?\n\nHint: You need to build up the memoization dictionary from the ground up, and then it will be able to handle larger inputs. This can be done with a for-loop for example.\n\nImplement your solution here:\n\n\n```python\n\n```\n\nSame test to make sure the code still works:\n\n\n```python\ndef test_fibonacci():\n computed = []\n for i in range(11):\n computed.append(fibonacci(i))\n assert computed == [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]\n\n# test_fibonacci() # uncomment this line when testing\n```\n\n### Exercise 4d) A memoized factorial class\n\nThe factorial is also defined recursively\n$$N! 
= N \\cdot (N-1)!,$$\nwith a base case of $F(0) = 1$.\n\nRepeat the process of the Fibonacci sequence and create a memoized class, `Factorial`, that computes $N!$.\n\nImplement your solution here:\n\n\n```python\n\n```\n\nUse this to test your implementation:\n\n\n```python\ndef test_Factorial():\n f = Factorial()\n computed = []\n for i in range(6):\n computed.append(f(i))\n \n assert computed == [1, 1, 2, 6, 24, 120]\n\n# test_Factorial() # uncomment this line when testing\n```\n", "meta": {"hexsha": "a80edc26b4bf6d268d77647dc53677c8bac4405b", "size": 30413, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "book/docs/exercises/week2/E2_exercises_on_oop.ipynb", "max_stars_repo_name": "finsberg/IN1910_H21", "max_stars_repo_head_hexsha": "4bd8f49c0f6839884bfaf8a0b1e3a717c3041092", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "book/docs/exercises/week2/E2_exercises_on_oop.ipynb", "max_issues_repo_name": "finsberg/IN1910_H21", "max_issues_repo_head_hexsha": "4bd8f49c0f6839884bfaf8a0b1e3a717c3041092", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "book/docs/exercises/week2/E2_exercises_on_oop.ipynb", "max_forks_repo_name": "finsberg/IN1910_H21", "max_forks_repo_head_hexsha": "4bd8f49c0f6839884bfaf8a0b1e3a717c3041092", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-08-30T12:38:40.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-05T14:14:59.000Z", "avg_line_length": 31.6802083333, "max_line_length": 721, "alphanum_fraction": 0.5564725611, "converted": true, "num_tokens": 5881, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9173026550642018, "lm_q2_score": 0.9005297807787537, "lm_q1q2_score": 0.8260583588727344}} {"text": "```python\nimport sympy as sym\nfrom sympy.polys import subresultants_qq_zz\n\nsym.init_printing()\n```\n\nThe Bezout matrix is a special square matrix associated with two polynomials, introduced by Sylvester (1853) and Cayley (1857) and named after \u00c9tienne B\u00e9zout. B\u00e9zoutian may also refer to the determinant of this matrix, which is equal to the resultant of the two polynomials.\n\nThe entries of Bezout matrix are bilinear functions of coefficients of the given polynomials. The Bezout formulation has gone over different generalizations. The most common one is the Cayley.. Cayley's matrix is given by,\n\n$$ \\left|\n\\begin{array}{cc} \np(x) & q(x)\\\\\np(a)& q(a)\n\\end{array}\n\\right| = \\Delta(x, a)$$\n\nwhere $\\Delta(x, a)$ is the determinant.\n\nWe have the polynomial:\n\n$$ \\delta(x, a) = \\frac{\\Delta(x,a)}{x-a}$$\n\nThe matrix is then constructed from the coefficients of polynomial $\\alpha$. Each coefficient is viewed as a polynomial of $x_1,..., x_n$.\n\nThe Bezout matrix is highly related to the Sylvester matrix and the greatest common divisor of polynomials. 
Unlike in Sylvester's formulation, where the resultant of $p$ and $q$ is the determinant of an $(m + n) \\times (m + n)$ matrix, in the Cayley formulation, the resultant is obtained\nas the determinant of a $n \\times n$ matrix.\n\nExample: Generic example\n------------------------\n\n\n```python\nb_3, b_2, b_1, b_0 = sym.symbols(\"b_3, b_2, b_1, b_0\")\nx = sym.symbols('x')\n```\n\n\n```python\nb = sym.IndexedBase(\"b\")\n```\n\n\n```python\np = b_2 * x ** 2 + b_1 * x + b_0\nq = sym.diff(p, x)\n```\n\n\n```python\nsubresultants_qq_zz.bezout(p, q, x)\n```\n\nExample: Existence of common roots\n------------------------------------------\n\nNote that if the system has a common root we are expecting the resultant/determinant to equal to zero.\n\n**A commot root exists.**\n\n\n```python\n# example one\np = x ** 3 +1\nq = x + 1\n```\n\n\n```python\nsubresultants_qq_zz.bezout(p, q, x)\n```\n\n\n```python\nsubresultants_qq_zz.bezout(p, q, x).det()\n```\n\n\n```python\n# example two\np = x ** 2 - 5 * x + 6\nq = x ** 2 - 3 * x + 2\n```\n\n\n```python\nsubresultants_qq_zz.bezout(p, q, x)\n```\n\n\n```python\nsubresultants_qq_zz.bezout(p, q, x).det()\n```\n\n**A common root does not exist.**\n\n\n```python\nz = x ** 2 - 7 * x + 12\nh = x ** 2 - x\n```\n\n\n```python\nsubresultants_qq_zz.bezout(z, h, x).det()\n```\n\nDixon's Resultant\n-----------------\n\nDixon (1908) showed how to extend this formulation to $m = 3$ polynomials in $n = 2$ variables.\n\nIn a similar manner but this time,\n\n$$ \\left|\n\\begin{array}{cc} \np(x, y) & q(x, y) & h(x, y) \\cr\np(a, y) & q(a, y) & h(b, y) \\cr\np(a, b) & q(a, b) & h(a, b) \\cr\n\\end{array}\n\\right| = \\Delta(x, y, \\alpha, \\beta)$$\n\nwhere $\\Delta(x, y, \\alpha, \\beta)$ is the determinant.\n\nThus, we have the polynomial:\n\n$$ \\delta(x,y, \\alpha, \\beta) = \\frac{\\Delta(x, y, \\alpha, \\beta)}{(x-\\alpha)(y - \\beta)}$$\n\n\n```python\nfrom sympy.polys.multivariate_resultants import DixonResultant\n```\n\nExample: Generic example of Dixon $(n=2, m=3)$\n---------------------------------------------------\n\n\n```python\na_1, a_2, b_1, b_2, u_1, u_2, u_3 = sym.symbols('a_1, a_2, b_1, b_2, u_1, u_2, u_3')\n```\n\n\n```python\ny = sym.symbols('y')\n```\n\n\n```python\np = a_1 * x ** 2 * y ** 2 + a_2 * x ** 2\nq = b_1 * x ** 2 * y ** 2 + b_2 * y ** 2\nh = u_1 * x + u_2 * y + u_3\n```\n\n\n```python\ndixon = DixonResultant(variables=[x, y], polynomials=[p, q, h])\n```\n\n\n```python\npoly = dixon.get_dixon_polynomial()\n```\n\n\n```python\npoly\n```\n\n\n```python\nmatrix = dixon.get_dixon_matrix(poly)\n```\n\n\n```python\nmatrix\n```\n\n\n```python\nmatrix.det().factor()\n```\n\nDixon's General Case\n--------------------\n\n[Yang et al.](https://rd.springer.com/chapter/10.1007/3-540-63104-6_11) generalized the Dixon resultant method of three polynomials with two variables to the system of $n+1$ polynomials with $n$ variables.\n\nExample: Numerical example\n--------------------\n\n\n```python\np = x + y\nq = x ** 2 + y ** 3\nh = x ** 2 + y\n```\n\n\n```python\ndixon = DixonResultant([p, q, h], (x, y))\n```\n\n\n```python\npoly = dixon.get_dixon_polynomial()\npoly.simplify()\n```\n\n\n```python\nmatrix = dixon.get_dixon_matrix(polynomial=poly)\nmatrix\n```\n\n\n```python\nmatrix.det()\n```\n\nExample: Generic example\n---------\n\n\n```python\na, b, c = sym.symbols('a, b, c')\n```\n\n\n```python\np_1 = a * x ** 2 + b * x * y + (b + c - a) * x + a * y + 3 * (c - 1)\np_2 = 2 * a ** 2 * x ** 2 + 2 * a * b * x * y + a * b * y + b ** 3\np_3 = 4 * (a - b) * x 
+ c * (a + b) * y + 4 * a * b\n```\n\n\n```python\npolynomials = [p_1, p_2, p_3]\n```\n\n\n```python\ndixon = DixonResultant(polynomials, [x, y])\n```\n\n\n```python\npoly = dixon.get_dixon_polynomial()\n```\n\n\n```python\nsize = len(poly.monoms())\nsize\n```\n\n\n```python\nmatrix = dixon.get_dixon_matrix(poly)\nmatrix\n```\n\nExample: \n--------------------------------------------------------------------------------------------------\n**From [Dixon resultant\u2019s solution of systems of geodetic polynomial equations](https://rd.springer.com/content/pdf/10.1007%2Fs00190-007-0199-0.pdf)**\n\n\n\n```python\nz = sym.symbols('z')\n```\n\n\n```python\nf = x ** 2 + y ** 2 - 1 + z * 0\ng = x ** 2 + z ** 2 - 1 + y * 0\nh = y ** 2 + z ** 2 - 1\n```\n\n\n```python\ndixon = DixonResultant([f, g, h], [y, z])\n```\n\n\n```python\npoly = dixon.get_dixon_polynomial()\n```\n\n\n```python\nmatrix = dixon.get_dixon_matrix(poly)\nmatrix\n```\n\n\n```python\nmatrix.det()\n```\n", "meta": {"hexsha": "158feba0ef191febc900bb157cc02842e4aa8932", "size": 74443, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "examples/notebooks/Bezout_Dixon_resultant.ipynb", "max_stars_repo_name": "utkarshdeorah/sympy", "max_stars_repo_head_hexsha": "dcdf59bbc6b13ddbc329431adf72fcee294b6389", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 8323, "max_stars_repo_stars_event_min_datetime": "2015-01-02T15:51:43.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T13:13:19.000Z", "max_issues_repo_path": "examples/notebooks/Bezout_Dixon_resultant.ipynb", "max_issues_repo_name": "utkarshdeorah/sympy", "max_issues_repo_head_hexsha": "dcdf59bbc6b13ddbc329431adf72fcee294b6389", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 15102, "max_issues_repo_issues_event_min_datetime": "2015-01-01T01:33:17.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T22:53:13.000Z", "max_forks_repo_path": "examples/notebooks/Bezout_Dixon_resultant.ipynb", "max_forks_repo_name": "utkarshdeorah/sympy", "max_forks_repo_head_hexsha": "dcdf59bbc6b13ddbc329431adf72fcee294b6389", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 4490, "max_forks_repo_forks_event_min_datetime": "2015-01-01T17:48:07.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-31T17:24:05.000Z", "avg_line_length": 76.3517948718, "max_line_length": 10492, "alphanum_fraction": 0.7618849321, "converted": true, "num_tokens": 1787, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9173026528034426, "lm_q2_score": 0.9005297801113612, "lm_q1q2_score": 0.8260583562246525}} {"text": "# Marginal Probability - Districts & Ages\n\n> This document is written in *R*.\n>\n> ***GitHub***: https://github.com/czs108\n\n## Background\n\n> The following table gives the *percentage of pupils* in a certain school from different *districts* and at different *ages*.\n>\n> | *District \\ Age* | 14 | 15 | 16 | 17 |\n> | :--------------: | :--: | :--: | :--: | :--: |\n> | **North** | 5% | 10% | 5% | 10% |\n> | **Center** | 10% | 15% | 5% | 5% |\n> | **South** | 15% | 10% | 5% | 5% |\n\n## Question A\n\n> Give values for the following probabilities.\n\n### Question 1\n\n\\begin{equation}\n\\begin{split}\nP(age = 15)\n &= \\sum_{i = North,Center,South} P(age = 15\\, \\cap\\, district = i) \\\\\n &= 10\\% + 15\\% + 10\\% \\\\\n &= 35\\%\n\\end{split}\n\\end{equation}\n\n### Question 2\n\n\\begin{equation}\n\\begin{split}\nP(district = Center)\n &= \\sum_{i = 14}^{17} P(age = i\\, \\cap\\, district = Center) \\\\\n &= 10\\% + 15\\% + 5\\% + 5\\% \\\\\n &= 35\\%\n\\end{split}\n\\end{equation}\n\n### Question 3\n\n\\begin{equation}\n\\begin{split}\nP(age = 15\\, \\mid\\, district = Center)\n &= \\frac{P(age = 15\\, \\cap\\, district = Center)}{P(district = Center)} \\\\\n &= \\frac{15\\%}{35\\%}\n\\end{split}\n\\end{equation}\n\n### Question 4\n\n\\begin{equation}\n\\begin{split}\nP(age = 15\\, \\cap\\, district = Center)\n &= P(district = Center) \\cdot P(age = 15\\, \\mid\\, district = Center) \\\\\n &= 35\\% \\times \\frac{15\\%}{35\\%} \\\\\n &= 15\\%\n\\end{split}\n\\end{equation}\n\n### Question 5\n\n\\begin{equation}\n\\begin{split}\nP(age = 15\\, \\cup\\, district = Center)\n &= P(age = 15) + P(district = Center) - P(age = 15\\, \\cap\\, district = Center) \\\\\n &= 35\\% + 35\\% - 15\\% \\\\\n &= 55\\%\n\\end{split}\n\\end{equation}\n\n## Question B\n\n> What are the *mean* and *variance* of *age*?\n\n\\begin{equation}\nMean = \\sum_{i = 14}^{17} i \\cdot P(age = i)\n\\end{equation}\n\n\\begin{equation}\nVariance = \\sum_{i = 14}^{17} (i - \\overline{i})^2 \\cdot P(age = i)\n\\end{equation}\n\n\n```R\nages <- c(14:17)\nprobs <- c(0.30, 0.35, 0.15, 0.20)\nmean <- sum(ages * probs)\nvar <- sum(((ages - mean) ^ 2) * probs)\n\nprint(sprintf(\"Mean = %.1f\", mean), quote=FALSE)\nprint(sprintf(\"Variance = %.1f\", var), quote=FALSE)\n```\n\n [1] Mean = 15.2\n [1] Variance = 1.2\n\n\n## Question C\n\n> Use the `barplot` to visualize the table.\n\nStore the data in a matrix.\n\n\n```R\nprcnt <- c(5, 10, 5, 10, 10, 15, 5, 5, 15, 10, 5, 5)\nprcnt <- matrix(prcnt, nrow=3, byrow=TRUE)\n\nprcnt\n```\n\n
         [,1] [,2] [,3] [,4]
    [1,]    5   10    5   10
    [2,]   10   15    5    5
    [3,]   15   10    5    5
\n\nAdd row names and column names.\n\n\n```R\ncolnames(prcnt) <- c(14:17)\nrownames(prcnt) <- c(\"North\", \"Center\", \"South\")\n\nprcnt\n```\n\n
           14 15 16 17
    North   5 10  5 10
    Center 10 15  5  5
    South  15 10  5  5
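\n\nThe row and column sums of this matrix give the marginal percentages for the districts and the ages, which is a quick way to cross-check the values used in Question A and Question B.\n\n\n```R\n# Marginal percentages: sum over ages (rows) and over districts (columns).\nrowSums(prcnt) # North = 30, Center = 35, South = 35\ncolSums(prcnt) # 14 = 30, 15 = 35, 16 = 15, 17 = 20\n```\n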
    \n\n\n\n\n```R\nbarplot(prcnt, xlab=\"Age\", ylab=\"Percentage\", legend.text=TRUE)\n```\n", "meta": {"hexsha": "98c24fdc2e31f2a555cab396c90cbec8062dbf89", "size": 19888, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "exercises/Marginal Probability - Districts & Ages.ipynb", "max_stars_repo_name": "czs108/Probability-Theory-Exercises", "max_stars_repo_head_hexsha": "60c6546db1e7f075b311d1e59b0afc3a13d93229", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "exercises/Marginal Probability - Districts & Ages.ipynb", "max_issues_repo_name": "czs108/Probability-Theory-Exercises", "max_issues_repo_head_hexsha": "60c6546db1e7f075b311d1e59b0afc3a13d93229", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "exercises/Marginal Probability - Districts & Ages.ipynb", "max_forks_repo_name": "czs108/Probability-Theory-Exercises", "max_forks_repo_head_hexsha": "60c6546db1e7f075b311d1e59b0afc3a13d93229", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-03-21T05:04:07.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-21T05:04:07.000Z", "avg_line_length": 61.3827160494, "max_line_length": 12116, "alphanum_fraction": 0.7378318584, "converted": true, "num_tokens": 1299, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9294403979493139, "lm_q2_score": 0.8887588023318196, "lm_q1q2_score": 0.826048334920242}} {"text": "# Elliptic curve cryptography\n\nWhat is an Elliptic curve (EC)? An elliptic curve is a plane algebraic curve over a [finite field](https://en.wikipedia.org/wiki/Finite_field) which is defined by an equation of the form:\n\n\\begin{equation}\ny^2 = x^3+ax+b \\quad \\textrm{where} \\quad 4a^3+27b^2 \u2260 0\n\\label{eq:ecurve}\n\\tag{1}\n\\end{equation}\n\nThe $4a^3+27b^2 \u2260 0$ restrained is required to avoid singular points.\n\n\nA finite field is a set where operations of multiplication, addition, subtraction and division are defined according to basic rules. 
Examples of finite fields are [integers mod p](https://en.wikipedia.org/wiki/Modular_arithmetic#Integers_modulo_n) when p is a prime number.\n\n\n```python\n# Import the necessary libraries\n# to remove code in browser, press f12 and in console type: document.querySelectorAll(\"div.input\").forEach(function(a){a.remove()})\nimport matplotlib.pyplot as plt\nimport matplotlib.animation as animation\nimport seaborn as sns\nimport numpy as np\nimport pandas as pd\n\nfrom typing import Callable, Tuple\nfrom scipy import optimize\n\ndef ecurve_power2(a: float, b: float, x: float) -> float:\n # y\u00b2=x\u00b3+ax+b and 4a\u00b3+27b\u00b2\u22600\n # secp256k1 curve is y\u00b2 = x\u00b3+7\n return x**3 + x*a + b\n\ndef ecurve(domain: pd.array, ecurve_power2_func: Callable[[float], float]) -> pd.DataFrame:\n # y = sqrt(x\u00b3+ax+b)\n # Only return domain where y>0\n y2 = ecurve_power2_func(domain)\n x_ = domain[y2>0]\n y2 = y2[y2>0]\n y = np.sqrt(y2)\n dataset = pd.DataFrame({'x': x_, 'y': y, 'y_neg': (-1)*y})\n return dataset\n\ndef domain(x1: float, x2: float, step: float = 0.1) -> np.ndarray:\n return np.arange(x1, x2, step).astype(np.float64)\n\ndef straight_line(m: float, c: float, x: float) -> float:\n # y = xm + c\n return m*x + c\n\ndef calc_straight_line_params(point1: Tuple[float, float], point2: Tuple[float, float]) -> Tuple[float, float]:\n # Calculate the gradient(m) and y intercept(c) in: y = xm + c\n x1, y1 = point1\n x2, y2 = point2\n m = (y2 - y1)/(x2 - x1)\n c = -1*x2*m + y2\n return m, c\n\ndef plot_elliptic_curve(axs: plt.axes, domain: pd.array, ecurve_power2_partial: Callable[[float], float], title=\"\") -> None:\n # data must have x and y coloms\n data = ecurve(domain, ecurve_power2_partial)\n # to display as a continues function, the grid needs to go past the cut of values for the ec, hence the -1's\n X, Y = np.mgrid[min(data.x)-1:max(data.x):100j, min(data.y_neg)-1:max(data.y):100j]\n axs.contour(X, Y, Y**2 - ecurve_power2_partial(X), levels=[0]) # pos graph\n axs.contour(X, Y*-1, Y**2 - ecurve_power2_partial(X), levels=[0]) # pos graph\n axs.set_title(title)\n axs.set_xlim(min(data.x)-1, max(data.x)+1)\n axs.set_ylim(min(data.y_neg)-1, max(data.y)+1)\n\ndef plot_straight_line(axs: plt.axes, domain: pd.array, straight_line_partial_func: Callable[[float], float], title=\"\") -> None:\n axs.plot(domain, straight_line_partial_func(domain))\n if title != \"\":\n axs.set_title(title)\n\ndef roots(f: Callable[[float], float], g: Callable[[float], float], domain: pd.array) -> np.array:\n d = lambda x: f(x) - g(x)\n roots_index = np.argwhere(np.diff(np.sign(d(domain)))).flatten()\n return domain[roots_index].to_numpy()\n\ndef calc_intersection(domain: pd.array, ecurve_power2_partial_func: Callable[[float], float], straight_line_partial_func: Callable[[float], float]) -> Tuple[float, float]:\n data = ecurve(domain, ecurve_power2_partial_func)\n \n ecurve_pos_partial = lambda x: np.sqrt(ecurve_power2_partial_func(x))\n ecurve_neg_partial = lambda x: np.sqrt(ecurve_power2_partial_func(x))*-1\n \n roots_pos = roots(ecurve_pos_partial, straight_line_partial_func, data.x)\n roots_neg = roots(ecurve_neg_partial, straight_line_partial_func, data.x)\n intersections = pd.DataFrame({'x': roots_pos, 'y': ecurve_pos_partial(roots_pos)})\n intersections2 = pd.DataFrame({'x': roots_neg, 'y': ecurve_neg_partial(roots_neg)})\n return intersections.append(intersections2).reset_index()\n```\n\nExample of elliptic curves with different a and b values:\n\n\n```python\n# Setup domain and Elliptic Curve 
function\ndom = domain(-5,5) # Domain\nsecp256k1_pow2 = lambda x: ecurve_power2(0, 7, x) # EllipticCurve function y^2 with secp256k1 parameters\n```\n\n\n```python\n# calc_point_on_ec = ecurve(dom, secp256k1_pow2) # EllipticCurve function sqrt(y^2)\nfig_example, (ax1_example, ax2_example, ax3_example, ax4_example) = plt.subplots(1,4, sharex='col', sharey='row', gridspec_kw={'hspace': 0.2, 'wspace': 0.1},figsize=(20,5))\nplot_elliptic_curve(ax1_example, dom, lambda x: ecurve_power2(1, -1, x), 'y² = x³+x-1')\nplot_elliptic_curve(ax2_example, dom, lambda x: ecurve_power2(1, 1, x), 'y² = x³+x+1')\nplot_elliptic_curve(ax3_example, dom, lambda x: ecurve_power2(-3, 3, x), 'y² = x³-3x+3')\nplot_elliptic_curve(ax4_example, dom, lambda x: ecurve_power2(-4, 0, x), 'y² = x³-4x')\n```\n\nThe elliptic curve used by most cryptocurrencies is called the secp256k1 and takes the form\n\\begin{equation}\ny^2 = x^3+7\n\\label{eq:secp256k1}\n\\tag{2}\n\\end{equation}\n\n\n```python\nfig_secp256k1, (ax_secp256k1) = plt.subplots(1,1, sharex='col', sharey='row', gridspec_kw={'hspace': 0.2, 'wspace': 0.1},figsize=(4,5))\nplot_elliptic_curve(ax_secp256k1, dom, secp256k1_pow2, 'y² = x³+7')\n```\n\n## Finite field\nThe elliptic curve operation for point addition is different from normal addition. With normal addition you would expect that point1 (x1, y1) + point2 (x2, y2) would equal (x1+x2, y1+y2). This is not so with elliptic curves, where the add operation is defined differently: when you add two points on an elliptic curve together, you get a third point on the curve.\n\nThe process can be described as follows: when you have 2 points on an elliptic curve, you draw a line between the points and determine where it intersects the curve. This intersection point is then reflected across the x-axis (i.e. multiply the y-coordinate by -1: (x, y*-1)).\n\nAn example of addition would be:\n\n\n```python\nfig_intersec, (ax1_intersec, ax2_intersec, ax3_intersec, ax4_intersec) = plt.subplots(1,4, sharex='col', sharey='row', gridspec_kw={'hspace': 0.2, 'wspace': 0.1},figsize=(20,5))\nplot_elliptic_curve(ax1_intersec, dom, secp256k1_pow2, 'To add point P and Q')\nplot_elliptic_curve(ax2_intersec, dom, secp256k1_pow2, 'Draw a line between the points')\nplot_elliptic_curve(ax3_intersec, dom, secp256k1_pow2, 'Reflect that point across the x-axis')\nplot_elliptic_curve(ax4_intersec, dom, secp256k1_pow2, ' P+Q=R')\n\n# Arbitrary points on elliptic curve\npoints = ecurve(pd.array([-1, 4]), secp256k1_pow2)\npoint1 = (points.x[0], points.y[0])\npoint2 = (points.x[1], points.y_neg[1])\n\nm, c = calc_straight_line_params(point1=point1, point2=point2) # Calculate straight line parameters given the points\n\nstraight_line_partial = lambda x: straight_line(m, c, x) # Straight line function with parameters\n\n# Calculate intersections between the Straight line function and the EllipticCurve function\nintersections = calc_intersection(domain=dom, ecurve_power2_partial_func=secp256k1_pow2, straight_line_partial_func=straight_line_partial)\n\n# First plot\nax1_intersec.plot(intersections.x[0], intersections.y[0], \"o\", label=\"P\", c='b')\nax1_intersec.plot(intersections.x[2], intersections.y[2], \"o\", label=\"Q\", c='k')\nax1_intersec.legend()\nax1_intersec.set_xlabel(\"Fig 1\")\nax1_intersec.axhline(linewidth=1, color='k')\n\n# Second plot\nplot_straight_line(axs=ax2_intersec, domain=dom, straight_line_partial_func=straight_line_partial, title=\"\")\nax2_intersec.plot(intersections.x[0], 
intersections.y[0], \"o\", label=\"P\", c='b')\nax2_intersec.plot(intersections.x[2], intersections.y[2], \"o\", label=\"Q\", c='k')\nax2_intersec.plot(intersections.x[1], intersections.y[1], \"o\", label=\"Intersection\", c='g')\nax2_intersec.legend()\nax2_intersec.set_xlabel(\"Fig 2\")\nax2_intersec.axhline(linewidth=1, color='k')\n\n# Third plot\nplot_straight_line(axs=ax3_intersec, domain=dom, straight_line_partial_func=straight_line_partial, title=\"\")\nax3_intersec.plot(intersections.x[0], intersections.y[0], \"o\", label=\"P\", c='b')\nax3_intersec.plot(intersections.x[2], intersections.y[2], \"o\", label=\"Q\", c='k')\nax3_intersec.plot(intersections.x[1], intersections.y[1], \"o\", label=\"Intersection\", c='g')\nax3_intersec.plot(intersections.x[1], intersections.y[1]*-1, \"o\", label=\"R\", c='r')\nax3_intersec.legend()\nax3_intersec.set_xlabel(\"Fig 3\")\nax3_intersec.axhline(linewidth=1, color='k')\n\nax3_intersec.vlines(intersections.x[1], ymin=intersections.y[1], ymax=intersections.y[1]*-1, colors='r', linestyles='dashed')\n\n# Fourth plot\nax4_intersec.plot(intersections.x[0], intersections.y[0], \"o\", label=\"P\", c='b')\nax4_intersec.plot(intersections.x[2], intersections.y[2], \"o\", label=\"Q\", c='k')\nax4_intersec.plot(intersections.x[1], intersections.y[1]*-1, \"o\", label=\"R\", c='r')\nax4_intersec.legend()\nax4_intersec.set_xlabel(\"Fig 4\")\nax4_intersec.axhline(linewidth=1, color='k')\n\nprint(\"\")\n```\n\nSteps to find $P+Q$\n- Fig1: If you have point $P$ (-1, 2.5) and $Q$ (4.0, -8.4) on the elliptic curve\n- Fig2: Draw a line between the points, find the intersect point at (1.7, -3.5)\n- Fig3: Reflect the intersect point across the x-axis to found the new point, $R$ (1.7, 3.5)\n- Fig4: $P+Q=R$\n\nWith elliptic curve cryptography, you do not just add two arbitrary points together, but rather you start with a base point on the curve and add that point to it self. If we start with a base point $P$ than we have to find a line that goes through $P$ and $P$. Unfortunately there are infinite such lines. With elliptic curve cryptography the tangent line is used in this special case. 
The same process is followed now to calculate $P+P=2P$:\n\n\n```python\nfig_ecurve, (ax1_ecurve, ax2_ecurve, ax3_ecurve, ax4_ecurve) = plt.subplots(1,4, sharex='col', sharey='row', gridspec_kw={'hspace': 0.2, 'wspace': 0.1},figsize=(20,5))\nplot_elliptic_curve(ax1_ecurve, dom, secp256k1_pow2, 'Initial point P')\nplot_elliptic_curve(ax2_ecurve, dom, secp256k1_pow2, 'Find tangent line that goes through P and P')\nplot_elliptic_curve(ax3_ecurve, dom, secp256k1_pow2, 'Reflect the intersection point across the x-axis')\nplot_elliptic_curve(ax4_ecurve, dom, secp256k1_pow2, 'P+P=2P')\n\n# Choose a arbitrary point P on the elliptic curve\np_points = ecurve(pd.array([-1.3, -1.31]), secp256k1_pow2)\np_point1 = (p_points.x[0], p_points.y[0])\np_point2 = (p_points.x[1], p_points.y[1])\n\nm, c = calc_straight_line_params(point1=p_point1, point2=p_point2) # Calculate straight line paramaters giving the points\n\nstraight_line_partial = lambda x: straight_line(m, c, x) # Straight line function with paramaters\n\n# Calculate intersections between the Straight line function and the EllipticCurve function\nintersections_ecurve = calc_intersection(domain=dom, ecurve_power2_partial_func=secp256k1_pow2, straight_line_partial_func=straight_line_partial)\n\n# First plot\nax1_ecurve.plot(p_points.x[0], p_points.y[0], \"o\", label=\"P\", c='b')\nax1_ecurve.legend()\nax1_ecurve.set_xlabel(\"Fig 1\")\nax1_ecurve.axhline(linewidth=1, color='k')\n\n# Second plot\nplot_straight_line(axs=ax2_ecurve, domain=dom, straight_line_partial_func=straight_line_partial, title=\"\")\nax2_ecurve.plot(p_points.x[0], p_points.y[0], \"o\", label=\"P\", c='b')\nax2_ecurve.plot(intersections_ecurve.x[0], intersections_ecurve.y[0], \"o\", label=\"Intersection\", c='g')\nax2_ecurve.legend()\nax2_ecurve.set_xlabel(\"Fig 2\")\nax2_ecurve.axhline(linewidth=1, color='k')\n\n# Third plot\nplot_straight_line(axs=ax3_ecurve, domain=dom, straight_line_partial_func=straight_line_partial, title=\"\")\n# ax3_ecurve.plot(intersections_ecurve.x[0], intersections_ecurve.y[0], \"o\", label=\"P\", c='b')\nax3_ecurve.plot(p_points.x[0], p_points.y[0], \"o\", label=\"P\", c='b')\nax3_ecurve.plot(intersections_ecurve.x[0], intersections_ecurve.y[0], \"o\", label=\"Intersection\", c='g')\nax3_ecurve.plot(intersections_ecurve.x[0], intersections_ecurve.y[0]*-1, \"o\", label=\"P+P=2P\", c='r')\nax3_ecurve.legend()\nax3_ecurve.set_xlabel(\"Fig 3\")\nax3_ecurve.axhline(linewidth=1, color='k')\n\nax3_ecurve.vlines(intersections_ecurve.x[0], ymin=intersections_ecurve.y[0], ymax=intersections_ecurve.y[0]*-1, colors='r', linestyles='dashed')\n\n# Fourth plot\nax4_ecurve.plot(p_points.x[0], p_points.y[0], \"o\", label=\"P\", c='b')\n# ax4_ecurve.plot(intersections_ecurve.x[0], intersections_ecurve.y[0], \"o\", label=\"P\", c='b')\nax4_ecurve.plot(intersections_ecurve.x[0], intersections_ecurve.y[0]*-1, \"o\", label=\"P+P=2P\", c='r')\n\nax4_ecurve.legend()\nax4_ecurve.set_xlabel(\"Fig 4\")\nax4_ecurve.axhline(linewidth=1, color='k')\n\nprint(\"\")\n```\n\nNow that we have $2P$, we can add $P$ again to get $3P$, see the example below which follows the same process as before. 
Draw a line between $P$ and $2P$, find the intersect and reflect this intersect value across the x-axis to find $3P$.\n\n\n```python\nfig_ecurve3P, (ax1_ecurve3P, ax2_ecurve3P, ax3_ecurve3P, ax4_ecurve3P) = plt.subplots(1,4, sharex='col', sharey='row', gridspec_kw={'hspace': 0.2, 'wspace': 0.1},figsize=(20,5))\nplot_elliptic_curve(ax1_ecurve3P, dom, secp256k1_pow2, 'P + 2P')\nplot_elliptic_curve(ax2_ecurve3P, dom, secp256k1_pow2, 'Draw a line that goes through P and 2P')\nplot_elliptic_curve(ax3_ecurve3P, dom, secp256k1_pow2, 'Reflect the intersection point across the x-axis')\nplot_elliptic_curve(ax4_ecurve3P, dom, secp256k1_pow2, '2P+P=3P')\n\n# Use P and 2P from previous run\np_point1 = (p_points.x[0], p_points.y[0])\np_point2 = (intersections_ecurve.x[0], intersections_ecurve.y[0]*-1)\n\nm, c = calc_straight_line_params(point1=p_point1, point2=p_point2) # Calculate straight line paramaters giving the points\n\nstraight_line_partial = lambda x: straight_line(m, c, x) # Straight line function with paramaters\n\n# Calculate intersections between the Straight line function and the EllipticCurve function\nintersections = calc_intersection(domain=dom, ecurve_power2_partial_func=secp256k1_pow2, straight_line_partial_func=straight_line_partial)\n\n# First plot\nax1_ecurve3P.plot(p_points.x[0], p_points.y[0], \"o\", label=\"P\", c='b')\nax1_ecurve3P.plot(intersections_ecurve.x[0], intersections_ecurve.y[0]*-1, \"o\", label=\"2P\", c='r')\nax1_ecurve3P.legend()\nax1_ecurve3P.set_xlabel(\"Fig 1\")\nax1_ecurve3P.axhline(linewidth=1, color='k')\n\n# Second plot\nplot_straight_line(axs=ax2_ecurve3P, domain=dom, straight_line_partial_func=straight_line_partial, title=\"\")\nax2_ecurve3P.plot(p_points.x[0], p_points.y[0], \"o\", label=\"P\", c='b')\nax2_ecurve3P.plot(intersections_ecurve.x[0], intersections_ecurve.y[0]*-1, \"o\", label=\"2P\", c='r')\nax2_ecurve3P.plot(intersections.x[1], intersections.y[1], \"o\", label=\"Intersection\", c='g')\n\nax2_ecurve3P.legend()\nax2_ecurve3P.set_xlabel(\"Fig 2\")\nax2_ecurve3P.axhline(linewidth=1, color='k')\n\n# Third plot\nplot_straight_line(axs=ax3_ecurve3P, domain=dom, straight_line_partial_func=straight_line_partial, title=\"\")\nax3_ecurve3P.plot(p_points.x[0], p_points.y[0], \"o\", label=\"P\", c='b')\nax3_ecurve3P.plot(intersections_ecurve.x[0], intersections_ecurve.y[0]*-1, \"o\", label=\"2P\", c='r')\nax3_ecurve3P.plot(intersections.x[1], intersections.y[1], \"o\", label=\"Intersection\", c='g')\nax3_ecurve3P.plot(intersections.x[1], intersections.y[1]*-1, \"o\", label=\"2P+P=3P\", c='m')\nax3_ecurve3P.legend()\nax3_ecurve3P.set_xlabel(\"Fig 3\")\nax3_ecurve3P.axhline(linewidth=1, color='k')\n\nax3_ecurve3P.vlines(intersections.x[1], ymin=intersections.y[1], ymax=intersections.y[1]*-1, colors='r', linestyles='dashed')\n\n# Fourth plot\nax4_ecurve3P.plot(p_points.x[0], p_points.y[0], \"o\", label=\"P\", c='b')\nax4_ecurve3P.plot(intersections_ecurve.x[0], intersections_ecurve.y[0]*-1, \"o\", label=\"2P\", c='r')\nax4_ecurve3P.plot(intersections.x[1], intersections.y[1]*-1, \"o\", label=\"3P\", c='m')\nax4_ecurve3P.legend()\nax4_ecurve3P.set_xlabel(\"Fig 4\")\nax4_ecurve3P.axhline(linewidth=1, color='k')\n\nprint(\"\")\n```\n\nThe same process now can be used to calculate $4P, 5P ... nP$.\n\nThe base point used in secp256k1 curve has the following ($x, y$) coordinates:
    \n$x:$ 55066263022277343669578718895168534326250603453777594175500187360389116729240
    \n$y:$ 32670510020758816978083085130507043184471273380659243275938904335757337482424\n\nIn the examples above a base point was chosen so that all the calculated points are visible in a small graph.\n\n## Addition properties\n\nIn this finite field, addition also has the property\n\n\\begin{equation}\nnP+rP = (n+r)P\n\\label{eq:addition}\n\\tag{3}\n\\end{equation}\n\nAn example is $4P+6P = (4+6)P = 10P$. The most efficient way to calculate $10P$, for example, requires only 4 point additions:
    \n\n$$\n\\begin{align}\nP+P &= 2P \\\\\n2P+2P &= 4P \\\\\n4P+4P &= 8P \\\\\n8P+2P &= 10P \\\\\n\\end{align}\n$$\n\n## Diffie\u2013Hellman key exchange\n\nIn this section we will exploring the Diffie\u2013Hellman key exchange (DH). This will serve as a basis to understand Elliptic curve cryptography. DH is one of the earliest practical examples of public key exchange implemented within the field of cryptography. \n\nEncryption is the process of converting information or data into a code to allow only the intended precipitant to decode and read the message. Often times this encryption/decryption is done with a shared secret key. In the following example we will show how two parties can get a shared key (in the past, they physically shared the key on a piece of paper). \n\nLet's start with the example of Nick and Connie, they want to sent messages to each other without being eavesdropped on. They will share a arbitary number: $g$, this is sent over the internet and could have intercepted, but that does not matter. Nick and Connie also create their own secret key (a big number) and do not share it with anybody:\n$$\n\\begin{align}\nNick&: n \\\\\nConnie&: c\n\\end{align}\n$$\n\nThen they will raise the arbitary number $g$ to the power of there secret key:\n\n$$\n\\begin{align}\nNick: g^n &= H_n \\\\\nConnie: g^c &= H_c\n\\tag{4}\n\\end{align}\n$$\n\nOnces they have the $H$ term, they exchange that to each other. $g, H_n, H_c$ are publicly sent and anybody can view these values. Once they have that $H$ term, they raise it to their secret key.\n\n$$\n\\begin{align}\nNick: H_c^n &= S \\\\\nConnie: H_n^c &= S\n\\tag{5}\n\\end{align}\n$$\n\nBy doing this, they end up with the same number: $S$, this is the shared key, neither of them had to send it to each other explicitly. Now, for example to encrypt a message using a Caesar cipher (a simple cipher) you can shift all the letters in your message by $S$ number of letters, and shift it back by $S$ number of letters to decrypt (decipher) it. You now have method to encrypt your communication with each other.\n\n\n\nTo prove equation 5, you can subsitute equation 4 into 5:\n\n$$\nNick: H_c^n = (g^c)^n = g^{cn} = S\n$$\n\n$$\nConnie: H_n^c = (g^n)^c = g^{nc} = S\n$$\n\nUnfortunately we are not done yet, remeber that publicly sent values are $g, H_n, H_c$ are publicly sent. To calculate for example Nick's private $n$ will be trivial since the equation is\n\n$$\nNick: g^n = H_n\n$$\n\nCalculating $n$ is as easy as solving this log problem $2^n=16$\n\nWhat about this discrete log problem: $2^n mod 17 = 16$. This becomes difficult because of the [modulus](https://en.wikipedia.org/wiki/Modular_arithmetic), you do not know how many times over 17 we have gone. Another example of modulus is a clock. If I told you that the start time is 12 o'clock and the end time is 1 o'clock and I ask you how many hours has passed you would not know because you do not know how many times the clock went round. It could be 1 hour, 13 hours or 25 hours and so on. It is because of this fact that you have to start guessing, the discrete log problem is the basis for the DH key exchange. 
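\nTo make the 'guessing' concrete, here is a minimal brute-force sketch (illustration only, not one of this notebook's helper functions) that recovers $n$ from $2^n mod 17 = 16$ by trying every exponent in turn. With a modulus that is hundreds of bits long, the same search becomes computationally infeasible.\n\n\n```python\n# Hypothetical brute-force discrete log search (illustration only, not from this notebook):\n# find n such that g**n mod p == target by trying every exponent in turn.\ng, p, target = 2, 17, 16\nfor n in range(1, p):\n    if pow(g, n, p) == target:\n        print('found n =', n)\n        break\n```\n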
The calculations to create the shared key is simple, but it very difficult to solve for the private key.\n\nNow you can just use the modulus operator in equations 4 and 5\n\n$$\n\\begin{align}\nNick: g^n \\, mod(p) &= H_n \\\\\nConnie: g^c \\, mod(p) &= H_c\n\\tag{6}\n\\end{align}\n$$\n\nYou will end up with a shared key again, but this time is very difficult, almost impossible to figure out what the private keys are if the private keys are very big.\n\n$$\n\\begin{align}\nNick: H_c^n \\, mod(p) &= S \\\\\nConnie: H_n^c \\, mod(p) &= S\n\\tag{7}\n\\end{align}\n$$\n\nA more practical example: Nick and Connie both decide publicly on a generator, $G=3$, and a prime modulus, $P=17$. Then Connie decides on a random private key, $c=15$ and Nick does the same $n=13$.\n\n$$\n\\begin{align}\nNick: G^n \\, mod(p) &= H_n \\\\\n3^{13} mod 17 &= 12\\\\\nConnie: G^c \\, mod(p) &= H_c \\\\\n3^{15} mod 17 &= 6\n\\tag{6}\n\\end{align}\n$$\n\nNick send $H_n=12$ publicly to Connie, and Connie sends $H_c=6$ to Nick publicly. Now the heart of the trick, Nick takes Connies publicly sent value and raises it to the power of his private number, and vice versa to obtain the same shared secret of 10.\n\n$$\n\\begin{align}\nNick: H_c^n \\, mod(p) &= S \\\\\n 6^{13} mod 17 &= 10 \\\\\nConnie: H_n^c \\, mod(p) &= S \\\\\n12^{15}mod 17 &= 10\n\\tag{7}\n\\end{align}\n$$\n\nFor the DH, the reason that we choose a prime number for the modulus, is that this guarantees the group is cyclic. It also has a other property, because it is a modulus of a prime ($p$), a generator exists ($g$). A generator is smaller than the prime ($\nWe can use the same form of equation 4 in the DH and apply it to EC:\n\n$$\n\\begin{align}\nnG &= H_n\n\\tag{8}\n\\end{align}\n$$\n\nwhere $G$ is the starting point and $H_n$ is the end point. $n$ is amount of times that $g$ is added to each self. Even if you know what $G$ and $H_n$ is, it very difficult to figure out what $n$ is. \n\nKnwowing this, we can just use the DH procedure with these elliptic curve equations and it end's up working the same.\n\n$$\n\\begin{align}\nNick: nG &= H_n \\\\\nConnie: cG &= H_c\n\\tag{9}\n\\end{align}\n$$\n\nwhere $G, H_n, H_c$ are publicly sent. The shared public key can also be calculated in the same way as DH:\n\n$$\n\\begin{align}\nNick: nH_c &= S \\\\\nConnie: cH_n &= S\n\\tag{10}\n\\end{align}\n$$\n\nWith the DF, the modulus allows to take the possible answers to the exponent problem and reduce the possible set of numbers. Using this $2^n mod 17 = 16$ example again, because of the mod 17, you limit the possible answers to 16. Now in EC, you also can take the modulus of the curve from being a function with infinite values to a finite set of values.\n\n\\begin{align}\ny^2 &= x^3+ax+b\\\\ \ny^2 mod p &= (x^3+ax+b)\\, mod p\n\\label{eq:ecurve}\n\\tag{11}\n\\end{align}\n\nwhere $p$ is a prime number, it is a prime number to ensure that addition and multiplication operations can always be undone.
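\nBefore plugging in the real secp256k1 numbers, here is a quick sanity check of the small-number Nick and Connie example above, a sketch using Python's built-in three-argument `pow` for modular exponentiation (not part of the notebook's helper code):\n\n\n```python\n# Check the Nick/Connie example: G=3, p=17, private keys n=13 and c=15.\nG, p = 3, 17\nn, c = 13, 15\nH_n = pow(G, n, p)  # 3**13 mod 17 = 12, sent publicly by Nick\nH_c = pow(G, c, p)  # 3**15 mod 17 = 6, sent publicly by Connie\nassert pow(H_c, n, p) == pow(H_n, c, p) == 10  # both sides arrive at the shared secret S = 10\nprint(H_n, H_c, pow(H_c, n, p))\n```\n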
    \nIn secp256k1, $p$ is the largest prime that is smaller than $2^{256}$, this would be $2^{256}\u20132^{32}\u2013977 = $\n\n\n```python\np = 2**256 - 2**32 - 977\np\n```\n\n\n\n\n 115792089237316195423570985008687907853269984665640564039457584007908834671663\n\n\n\nThis means that x and y coordinates of the elliptic curve can be any number up to this prime.\n\n\n```python\n# http://en.wikibooks.org/wiki/Algorithm_Implementation/Mathematics/Extended_Euclidean_algorithm\ndef egcd(a, b):\n x,y, u,v = 0,1, 1,0\n while a != 0:\n q, r = b//a, b%a\n m, n = x-u*q, y-v*q\n b,a, x,y, u,v = a,r, u,v, m,n\n return b, x, y\n\n# calculate modular inverse\ndef modinv(a, m):\n g, x, y = egcd(a, m)\n if g != 1:\n return None # modular inverse does not exist\n else:\n return x % m\n\n# ecurve(dom, secp256k1_pow2).y % 7777\n# secp256k1_pow2(dom) % 5\n\n```\n\n\n```python\n# https://andrea.corbellini.name/2015/05/23/elliptic-curve-cryptography-finite-fields-and-discrete-logarithms/\ndef extended_euclidean_algorithm(a, b):\n \"\"\"\n Returns a three-tuple (gcd, x, y) such that\n a * x + b * y == gcd, where gcd is the greatest\n common divisor of a and b.\n\n This function implements the extended Euclidean\n algorithm and runs in O(log b) in the worst case.\n \"\"\"\n s, old_s = 0, 1\n t, old_t = 1, 0\n r, old_r = b, a\n\n while r != 0:\n quotient = old_r // r\n old_r, r = r, old_r - quotient * r\n old_s, s = s, old_s - quotient * s\n old_t, t = t, old_t - quotient * t\n\n return old_r, old_s, old_t\n\n\ndef inverse_of(n, p):\n \"\"\"\n Returns the multiplicative inverse of\n n modulo p.\n\n This function returns an integer m such that\n (n * m) % p == 1.\n \"\"\"\n gcd, x, y = extended_euclidean_algorithm(n, p)\n assert (n * x + p * y) % p == gcd\n\n if gcd != 1:\n # Either n is 0, or p is not a prime number.\n raise ValueError(\n '{} has no multiplicative inverse '\n 'modulo {}'.format(n, p))\n else:\n return x % p\n```\n\n\n```python\n# https://andrea.corbellini.name/2015/05/23/elliptic-curve-cryptography-finite-fields-and-discrete-logarithms/\n# https://www.youtube.com/watch?v=NnyZZw8d1wI\np = 19\necurve_pow2_mod = lambda x: ecurve_power2(-7, 10, x) % p\n\ndom = domain(0,p, 1) # Domain\ndom = pd.array(dom)\ndom = dom[dom>0]\n\ny2_mod = pd.DataFrame({'x': dom, 'y_1': ecurve_pow2_mod(dom), 'y_2': ecurve_pow2_mod(dom)})\n\n# plt.plot(y2_mod.x, y, 'ro')\n# plt.show()\ny2_mod\n```\n\n\n\n\n
          x   y_1   y_2
    0    1.0   4.0   4.0
    1    2.0   4.0   4.0
    2    3.0  16.0  16.0
    3    4.0   8.0   8.0
    4    5.0   5.0   5.0
    5    6.0  13.0  13.0
    6    7.0   0.0   0.0
    7    8.0  10.0  10.0
    8    9.0  11.0  11.0
    9   10.0   9.0   9.0
    10  11.0  10.0  10.0
    11  12.0   1.0   1.0
    12  13.0   7.0   7.0
    13  14.0  15.0  15.0
    14  15.0  12.0  12.0
    15  16.0   4.0   4.0
    16  17.0  16.0  16.0
    17  18.0  16.0  16.0
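\nThe table above lists $y^2 mod p$ for every $x$; only the rows where that value is a quadratic residue mod 19 correspond to actual points on the curve. A small brute-force sketch (a hypothetical helper, not one of the notebook's functions, assuming the same parameters $a=-7$, $b=10$ and $p=19$ used above) recovers the finite set of curve points:\n\n\n```python\n# Brute-force the points of y^2 = x^3 - 7x + 10 over GF(19) by testing every\n# candidate y for every x (illustration only, not part of the notebook's helpers).\np = 19\npoints = [(x, y) for x in range(p) for y in range(p)\n          if (y * y) % p == (x**3 - 7 * x + 10) % p]\nprint(points)\nprint(len(points), 'finite points, plus the point at infinity')\n```\n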
    \n\n\n\n\n```python\necurve_pow2_mod(5)\n```\n\n\n\n\n 5\n\n\n\n\n```python\ninverse_of(16, 19)\n```\n\n\n\n\n 6\n\n\n\n\n```python\ninverse_of(5, 19)\n```\n\n\n\n\n 4\n\n\n\n\n```python\nx = 6\ng = [5,2,3]\ngp = pd.array(g)\ndf = pd.DataFrame({'x': gp, 'diff': gp-4})\ndf = df.sort_values(by ='diff' )\nf = lambda x: x + 1\nf(a for a in df.x)\n```\n\n\n```python\n# https://medium.com/asecuritysite-when-bob-met-alice/nothing-up-my-sleeve-creating-a-more-trust-world-with-the-elliptic-curve-pedersen-commitment-7b363d136579\n2 ^= 1\n```\n\n\n```python\na = 1\nb = 2\na ^= b\na\n```\n\n\n\n\n 3\n\n\n\n\n```python\nb\n```\n\n\n\n\n 2\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "485bb24a4ba64466b8d8c791285643f9a561bf74", "size": 281948, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "e_curve.ipynb", "max_stars_repo_name": "grenaad/elliptic_curve", "max_stars_repo_head_hexsha": "4dd3f0338dca1eeb23df531be4fcfa0c4268151d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "e_curve.ipynb", "max_issues_repo_name": "grenaad/elliptic_curve", "max_issues_repo_head_hexsha": "4dd3f0338dca1eeb23df531be4fcfa0c4268151d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "e_curve.ipynb", "max_forks_repo_name": "grenaad/elliptic_curve", "max_forks_repo_head_hexsha": "4dd3f0338dca1eeb23df531be4fcfa0c4268151d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 237.9308016878, "max_line_length": 60256, "alphanum_fraction": 0.9003184984, "converted": true, "num_tokens": 9149, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9449947101574299, "lm_q2_score": 0.8740772318846386, "lm_q1q2_score": 0.8259983604000326}} {"text": "# Coordinates, Projections, and Grids\n\n## Synopisis\n\n- Review of coordinate systems and projections of the sphere onto the 2d placne\n- Discussion about lengths and areas in finite volume grids used by ESMs\n\n\n```python\n%run -i figure-scripts/init.py\n```\n\n## Coordinate systems\n\nA coordinate system allows us to (uniquely) specify points in space. You should be familiar with the Cartesian 2d coordinate system where, by convention, (x,y) are the normal distances, measured in the same units, from two perpendicular lines through the origin.\n\nWe live in a 3d world (referring to spatial dimensions) and so require 3 numerical values to label a point in space. In Cartesian coordinates they might be (x,y,z) referenced to the center of the Earth, with z being height above the equatorial plane (positive in the direction of the North pole), y being the distance from a plane through the poles and a reference meridian, and x the distance from the plane perpendicular to both other planes. The equations of motion used by many models are often derived starting in this Cartesian coordinate system. 
However, these Cartesian coordinates are inconvenient to use in practice because we live on the surface of a sphere and \"up\", as defined by gravity, is sometimes increasing z (at the North Pole), and sometimes changing x or y (at the Equator).\n\n### Spherical coordinates\n\nIn ESMs we typically use spherical coordinates, $\\lambda$, $\\phi$ and $r$, where $\\lambda$ is \"longitude\", a rotation angle eastward around the poles starting at a reference meridian; $\\phi$ is \"latitude\", an elevation angle from the Equatorial plane (positive in Northern hemisphere), and $r$ is the radial distance from the center of the Earth. $\\lambda,\\phi,r$ are related to Cartesian $x,y,z$ by some simple relations:\n\n$$\n\\begin{pmatrix}\nx \\\\ y \\\\ z\n\\end{pmatrix} =\n\\begin{pmatrix}\nr \\cos \\phi \\cos \\lambda \\\\ r \\cos \\phi \\sin \\lambda \\\\ r \\sin \\phi\n\\end{pmatrix}\n$$\n\nNote that $r, x, y, z$ are all in the same units (eg. kilometets or meters) and $\\lambda,\\phi$ are angles usually given in degrees or radians.\n\n__Coordinate singularities:__ At the North and South poles of the coordinate system, $\\phi = \\pm \\pi/2 = \\pm 90^\\circ$, all values of longitude refer to the same point. There is no \"east\" when you are positioned at the pole. This has many consequences, but one of the more fundamental is that spherical coordinates are not a good coordinate to use to design a discretization of the spherical domain.\n\n__Periodic coordinates:__ While a tuple of longitude, latitude and radius unambiguously define a point in space, given a point in space you there are multiple valid longitudes that refer to the same point. Longitude is cyclic ($\\pm360^\\circ$ is equivalent to $0^\\circ$). This can cause problems in practice, particularly for plotting spherical data for which effort is sometimes needed to handle the periodicity.\n\n### Geographic coordinates\n\nWe live on the surface of the Earth and to precisely refer to points near the Earth's surface requires a properly defined geographic coordinate system. A common choice of coordinates is latitude, longitude and altitude, where altitude is height above a particular surface. Unfortunately the Earth is not spherical and that reference surface is better approximated as an ellipsoidal.\n\nIn order to be unambiguous about the definition of coordinates, map-makers choose a reference ellipse with a agreed upon scale and orientation. They then choose the most appropriate mapping of the spherical coordinate system onto that ellipsoid, called a _geodetic datum_. A widely used global datum includes the [World Geodetic System](https://en.wikipedia.org/wiki/World_Geodetic_System) (WGS 84), the default datum used for the Global Positioning System. When you are given a latitude-longitude pair of values, strictly speaking without the geodetic datum, the is some ambiguity about the actual physical point being referred to. For ESMs, the datum is rarely provided and this is because ESMs almost universally approximate the Earth as a sphere and use spherical coordinates for referencing locations. This means some approximation is required when comparing real-world positions and model positions.\n\nThe latitude and longitude using these horizontal datums are the spherical coordinates of the point on an ellipse. 
It you draw a straight line from the point on the ellipsoidal to the center, if passes through all spheres co-centered with the same latitude and longitude.\n\nDifferent datum have different reference points and scales, and so longitude and latitude can differ between geodetic datum.\n\n## Projections\n\nTo view data covering the surface of a sphere, or the Earth, we have to project that 3d surface into 2d. Imagine peeling the rind off an orange in one piece and then trying to flatten it onto a table top; the curvature in the peel requires you to distort the rind or make cuts, in order to flatten it fully. This is the function of the map projections and distortion is inevitable. Some projection preserve properties such as relative angles between lines, or relative area, but there is no projection of the surface of the sphere that can avoid distortion of some form.\n\nA projection maps the longitude and latitude of spherical coordinates into a new coordinate system. Very confusingly, sometimes the projection coordinates will be called longitude and latitude too! The projection coordinates are meaningless unless you know what the projection is so you often find a reference to the projection in the meta-data of coordinates; it means the longitude and latitude are not spherical coordinate but projection coordinates.\n\n\n```python\n%run -i figure-scripts/some-projections.py\n```\n\nFigure 1: The colored circles in these plots are circles in the tangent plane to the sphere and projected onto the surface. The various projections can distort the circles. The circles are separated zonally or meridionally by 60$^\\circ$. In 1a, a perspective image of the sphere, the circles appear non-circular because of the viewing angle. The blue circle appears circular because we are viewing it from directly overhead. The projection in 1b is the easy to use Plate-Carr\u00e9e projection, a \"lat-lon\" plot, in which circles are stretched zonally with distance from the equator. 1d shows the Mercator projection in which circles remain circles but are expanded in size away from th equator. 1c shows the Robinson projection which compromises between the two. The purple dashed lines is a straight line in latitude-longitude coordinates, and the yellow dashed line is a straight line in the Mercator coordinates. The cyan dashed line is a great arc, and is straight in the perspective view because we are viewing it from directly overhead.\n\nThe two most useful projections are the equirectangular and Mercator projections.\n\n### Equirectangular projection\n\nThis is the simplest projection, sometimes thought of a non-projection which is incorrect. In general it takes the form\n$$\n\\begin{align}\nx & = R \\left( \\lambda - \\lambda_0 \\right) \\cos \\phi_0 \\\\\ny & = R \\left( \\phi - \\phi_0 \\right)\n\\end{align}\n$$\n\nThe origin of the plot, $(x,y)=(0,0)$ corresponds to $(\\lambda,\\phi)=(\\lambda_0,\\phi_0)$. The $\\cos \\phi_0$ term is a constant and the most common choice of $\\phi_0=0$ gives the plate carr\u00e9e projection, which means \"flat square\" in French. In this case, the projection is simply\n$$\n\\begin{align}\nx & = R \\left( \\lambda - \\lambda_0 \\right) \\\\\ny & = R \\phi\n\\end{align}\n$$\n\nDistances in the y-direction are proportional to the meridional direction on the sphere, but the x-direction are stretched zonally, more so further from the equator. 
This is apparent in the orange and green circles in figure 1b, where the heights or the loops are the same as circles on the equator but the width is markedly increased.\n\nIn the cartopy package, this projection is called \"Plate-Carr\u00e9e\" which is French for flat square. Other names for this projection are equidistant cylindrical projection and geographic projection. See https://en.wikipedia.org/wiki/Equirectangular_projection.\n\n### Mercator projection\n\nThe Mercator projection has the same stretching in the x-direction as the equirectangular projection but, in order to preserve shape, it also stretches the y direction so that infinitesimal elements are stretched isotropically (the y-stretching is equal to the x-stretching).\n\n$$\n\\begin{align}\nx & = R \\left( \\lambda - \\lambda_0 \\right) \\\\\ny & = R \\tanh^{-1} \\left( \\sin \\phi \\right)\n\\end{align}\n$$\n\nAt the polar singularities, the x-stretching is infinite so y becomes infinite and the Mercator projection can never show the poles. See https://en.wikipedia.org/wiki/Mercator_projection.\n\n\n### Lines\n\nA length of a line between two points is a function of the path taken. On the surface of sphere, the shortest path between two given points is a great arc. A great arc does not appear straight in many projections. Unfortunately, many grid calculations use a great arc for the length of a line between nodes on a model grid, which can be inconsistent with the constraints or assumptions about the grid.\n\nThe dashed curves in figure 1 are \"straight\" lines between two points in various projections. The cyan dashed curve is a great arc. The purple dashed curve is a straight line in the Plate-Carree projection (latitude-longitude space) and the yellow dashed curve is a straight line in the Mercator projection. All are curved in most other projections. To describe a straight line in some projection then _the projection must be known_, irrespective of the coordinate system defining the end points. That is, we can define the end points of the line in latitude-longitude coordinates but say a line is straight in the Mercator projection, and by so doing unambiguously define that line.\n\nIn the Mercator projection, the length of a line is $\\frac{R}{\\cos \\alpha} \\Delta \\phi$ where $\\tan \\alpha = \\frac{\\Delta y}{\\Delta x}$\n\n## ESM grids\n\nMany ESMs use quadrilateral grids to discretize the surface of the sphere. The following discussion also applies to fully unstructured grids built from polygons but here we use quadrilateral grids for simplicity. There are also grids that have cuts and joins but here we'll stick to space-filling grids that are logically rectangular, meaning they can be stored in rectangular arrays in computer memory and referenced with a pair of indices ($i,j$ by convention).\n\nA quadrilateral grid is a rectangular mesh of adjacent quadrilateral cells that share edges and vertexes. Although the mesh and the cell are logically rectangular they might be physically curvilinear. From the grid we require positions of nodes, distances along edges, and areas of cells.\n\nIf we choose a coordinate system with which to record the locations of mesh nodes, say spherical latitude-longitude with appropriate definitions, then we can unambiguously define those node locations. We could describe the exact same grid using a different coordinate system, say 3D Cartesian coordinates. 
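\nAs a concrete illustration, here is a minimal sketch (assuming a perfectly spherical Earth; not taken from the notebook's figure scripts) of re-expressing a node given in spherical longitude-latitude as 3D Cartesian coordinates, using the relations from the spherical coordinates section:\n\n\n```python\nimport numpy as np\n\n# Convert a node from spherical longitude/latitude (degrees) to Cartesian x, y, z,\n# assuming a spherical Earth of radius R in metres (illustration only).\ndef lonlat_to_cartesian(lon_deg, lat_deg, R=6.371e6):\n    lam, phi = np.radians(lon_deg), np.radians(lat_deg)\n    return R * np.cos(phi) * np.cos(lam), R * np.cos(phi) * np.sin(lam), R * np.sin(phi)\n\nprint(lonlat_to_cartesian(0.0, 0.0))   # on the Equator at the reference meridian: (R, 0, 0)\nprint(lonlat_to_cartesian(0.0, 90.0))  # at the North pole: (0, 0, R), up to rounding\n```\n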
The physical positions of the nodes of the grids are part of what define the grid, but the choice of coordinates with which we describe those positions does not change the grid.\n\nThe edges of each cell are a curve between two adjacent nodes but the particular path of the curve has to be defined. Different paths will have different lengths. Similarly, the particular paths of the cell edges will determine the cell area. Thus the path of the cell edges is a fundamental component of a model grid needed for calculating the lengths and areas on a grid.\n\n### Simple spherical coordinate grid\n\nBefore we discuss the best choice for defining a curve between points, let's briefly define a simple spherical-coordinate grid.\nThe mesh is formed of lines of constant longitude and lines of constant latitude.\n\nLet $i \\in 0,1,\\ldots, n_i$ and $j \\in 0,1,\\ldots, n_j$, then node $i,j$ is at longitude $\\lambda_i$ and latitude $\\phi_j$ where $\\lambda_i=\\lambda_0 + i \\Delta \\lambda$, $\\phi_j=\\phi_0 + j \\Delta \\phi$.\n\nHere, $\\Delta \\lambda$ and $\\Delta \\phi$ are grid spacings. In practice, these can be smooth functions of $i$ and $j$ respectively but here we treat them as constant.\n\nAn example simple spherical grid is shown below. The red dots are the nodes of the mesh with positions $\\lambda_i,\\phi_j$. The dashed lines are the cell edges that for a regular net. Notice that in the Plate-Carr\u00e9e projection the grid is regular because the grid-spacing in constant in longitude-latitude coordinates.\n\nThe lengths and areas of the grid are measured on the surface of sphere. We defined the edges to be either lines of constant longitude or latitude. Using spherical geometry, the length of a meridionally-oriented (constant longitude) cell edge is $R \\Delta \\phi$. For a zonally-oriented edge at constant latitude $\\phi_j$, the length is $R \\Delta \\lambda \\cos \\phi_j$. The area of a cell labelled $i+\\frac{1}{2},j+\\frac{1}{2}$ bounded by four edges is $R^2 \\Delta \\lambda \\left( \\sin \\phi_{j+1} - \\sin \\phi_j \\right)$.\n\nThe metric factors for this grid are the same as for a Plate-Carr\u00e9e projection because we are defining the paths of the cell edges to be straight in the Plate-Carr\u00e9e projection. The use of the Plate-Carr\u00e9e coordinates for position, namely longitude and latitude, is a happy coincidence which means everything, positions and metrics, are defined by one projection.\n\n\n```python\n%run -i figure-scripts/simple-spherical-grid.py\n```\n\nFigure 2: A simple spherical grid view in different projections. The red dots are the nodes of the mesh. The dashed lines are the grid of cell edges. Here the grid spacing is constant in latitude-longitude coordinates which is why the grid looks regular in the Plate-Carr\u00e9e projection.\n\nIn the above example of a simple spherical grid we used spherical geometry combined with the specification of the paths of cell edges to calculate everything. If we were given just the positions of nodes a spherical grid, we could re-calculate the grid lengths and areas given the knowledge that the cell edges are straight in the Plate-Carr\u00e9e projection.\n\n### Not to use great arcs\n\nIf is quite common for the projection for the cell edges to be omitted from a grid file. Sometimes, you will encounter some software that was written to handle arbitrary grids that does not know about the projections. In this instance, developers often make a choice for how to calculate a length between two points. 
There are two immediately simple choices we can make: i) the shortest curve, or ii) choose a projection in which to draw a straight line. There are other ways to choose and define the curve forming edges but they are not as useful for out purposes.\n\nAs discussed above, the shortest line is a great arc but it turns out that choosing great arcs does not result in orthogonal meshes without a particular distributions of nodes. For example, if you consider the example spherical grid defined above, using a great arc to approximate the distance along a latitude circle will underestimate the distance. If you were to then calculate the actual paths implied by the great arcs, the grid is not orthogonal at the nodes.\n\nIf no projection is given, then straight lines in the Plate-Carr\u00e9e projection make a reasonable approximation that at least works for Plate-Carr\u00e9e and Mercator. Many grids a composed of patches of grids that used different projections but have to match at the joins and it seems the Plate-Carr\u00e9e is often the common denominator.\n\n## Contents of grid files\n\nAs mentioned earlier, because most ESM treat the Earth as spherical, they rarely bother to choose a datum for the geographic coordinate system. Further, the longitudes and latitudes _may not be spherical coordinates_ but instead may be the result of a projection.\n\nThe habit of not specifying the projection for cell edges is somewhat alleviated by the general pattern of providing the numerical values for lengths and areas as part of the grid specification. For example, see https://www.researchgate.net/publication/228641121_Gridspec_A_standard_for_the_description_of_grids_used_in_Earth_System_models. For many calculations, including integrals and gradients of scalars, having the numerical values of lengths and areas is sufficient. For vector operations, the angle of grid-lines is needed.\n\nThe [CF convention](http://cfconventions.org/cf-conventions/cf-conventions.html) requires the grid to be provided in all files. That convention supports specification of mappings (projections) but does not distinguish the projection for paths from the projections for coordinates. by default, longitude and latitude are the true geographic coordinates. And the required grid information is limited to the node positions. 
Further the nodes correspond to the data locations which is a finite-difference perspective of a mesh, and quite different from the finite volume perspective that most ESMs assume.\n\n## Summary\n", "meta": {"hexsha": "b1b98e7a79c3c1b5ad07abd8a3efb31656afc31b", "size": 478138, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "9-Coordinates-Projections-and-Grids.ipynb", "max_stars_repo_name": "adcroft/Analyzing-ESM-data-with-python", "max_stars_repo_head_hexsha": "fc94d8f880d1f389c8f2d7a62b29cb687f3d772d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "9-Coordinates-Projections-and-Grids.ipynb", "max_issues_repo_name": "adcroft/Analyzing-ESM-data-with-python", "max_issues_repo_head_hexsha": "fc94d8f880d1f389c8f2d7a62b29cb687f3d772d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "9-Coordinates-Projections-and-Grids.ipynb", "max_forks_repo_name": "adcroft/Analyzing-ESM-data-with-python", "max_forks_repo_head_hexsha": "fc94d8f880d1f389c8f2d7a62b29cb687f3d772d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 1824.9541984733, "max_line_length": 280756, "alphanum_fraction": 0.9571755435, "converted": true, "num_tokens": 3652, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.956634206181515, "lm_q2_score": 0.863391617003942, "lm_q1q2_score": 0.8259499541563406}} {"text": "# Modelling Change - Project\n\n## Topic: Gradient Descent\n\n> `numerical` `optimisation`\n\nGradient descent is a general method for finding minima of functions taught is widely used in many fields.\nWrite out the equations for a simple gradient descent method and code an implementation in Python.\nLocate the minima of an example function who's min you can find using the analytically (that is, as we did\nin lectures and tutorials). Investigate how the convergence is affected by:\n\n1. step size or other parameters in the algorith\n2. the initial starting point\n\n### Author: \n\n**Name:** Jonathon Belotti\n\n**Student ID:** 13791607\n\n## Overview\n\nThis report details the mathematics and computer code implementation of the _Gradient Descent_ optimisation method, and investigates the behaviour of the optimisation process when subject to variance in algorithm paramaters ('step size', iteration number) and variance in the initial starting point. \n\n### Report Structure\n\n1. [**Introduction to the Pure-Python3 Implementation.**](#1.-Introduction-to-the-Pure-Python3-Implementation)\n2. [**Introduction to the Gradient Descent method.**](#2.-Introduction-to-the-Gradient-Descent-Method)\n3. [**Convergence.**](#3.-Convergence)\n5. [**References**](#References)\n6. [**Appendix**](#Appendix)\n\n### 1. Introduction to the Pure-Python3 Implementation\n\nTo assist in the exploration and communication of the Gradient Descent optimisation technique, a 'from scratch' [pure-Python](https://stackoverflow.com/a/52461357/4885590) implementation of the gradient descent algorithm has been written and is used throughout. It is a 'standalone module', and imports no third-party code. 
The implementation targets usefulness in learning, not performance, but is fast enough for practical example.\n\nTo reduce implementation complexity and length, not all differentiable functions are supported by the implementation. Supported functions include:\n\n- [Polynomial functions](https://en.wikipedia.org/wiki/Polynomial)\n\n\nThe implementation is wholly contained in one Python3 module, `gradient_descent`, and it is available at: https://github.com/thundergolfer/modelling_change\n\n\n```python\nfrom typing import Mapping\n\nimport gradient_descent # Assumes gradient_descent.py is on PYTHONPATH. For module implementation see appendix.\n```\n\n**Variables** become components in expressions and are used in the differentiation functions. They don't do much else.\n\n```python\nx = Variable(\"x\")\n```\n\n**Expressions** can be evaluated at a point in function space, eg $(x=1, y=2, z=4)$ and can be differentiated with respect to a reference variable. \n\n`Expression` is a base class for `ConstantExpression`, `PolynomialExpression`, and `Multiply`.\n\n```python\nclass Expression:\n def diff(self, ref_var: Optional[Variable] = None) -> Optional[\"Expression\"]:\n raise NotImplementedError\n\n def evaluate(self, point: Point) -> float:\n raise NotImplementedError\n```\n\n**MultiVariableFunctions** are created by specifying their inputs, `variables`, and composing `Expression` objects by \naddition (subtraction is handled by negative coefficients on expressions).\n\nWe can get the gradient of a function and evaluated it at a point in function space, just like `Expression` objects.\n\n```python\nclass MultiVariableFunction:\n \"\"\"\n MultiVariableFunction support the composition of expressions by addition into a\n function of multiple real-valued variables.\n\n Partial differentiation with respect to a single variable is supported, as is\n evaluation at a Point, and gradient finding.\n \"\"\"\n\n def __init__(self, variables: Set[Variable], expressions: List[Expression]): ...\n\n def gradient(self) -> GradientVector: ...\n\n def diff(self, ref_var: Variable) -> \"MultiVariableFunction\": ...\n\n def evaluate(self, point: Point) -> float: ...\n```\n\n### 2. Introduction to the Gradient Descent Method\n\nGradient Descent is an iterative optimization process that can be used effectively to find local mimina of differentiable functions, particulary when those functions are convex. When the output of a differentiable function under some set of inputs can be framed as a _cost_, the minimization of this _cost function_ becomes an optimization problem to which the Gradient Descent process can be applied.\n\nThe \"Deep Neural Networks\" revolution that swept through the 2010s has its foundation in the simple single-layer neural networks first published in the 1980s, and those simple networks were optimized through gradient descent. Thus, a first lesson in understanding today's hottest technological field, Deep Neural Networks, involves going right back to the start and understanding the basic Gradient Descent optimization process.\n\n#### 2.1 A Function's Gradient\n\nIn order to minimise a function's value, we need to ascertain which way we should nudge its inputs to decrease the output value, and we have to be sure than a series of decreases will eventually lead to a minimum (local or global). For differentiable functions, the first-derivative of a function can be the way.\n\nFor a function of a single variable, $f(x)$, the rate of change at some value $x=a$ is given by the first-derivative $f'(x)$. 
In the case of $f(x) = x^2 + x$, we know that:\n\n$$f'(x) = 2x + 1$$\n\nand thus at $f'(1) = 2(1) + 1 = 3$ the function is increasing in output value 'to the right' and decreasing 'to the left'. At $f'(-1) = 2(-1) + 1 = -1$ the function is decreasing in output value 'to the right' and increasing 'to the left;. \n\nIn either case, we know from the first-derivative which direction to nudge $x$, until we reach $f'(1/2) = 2*(1/2) + 1 = 0$ and we've reached the critical point.\n\n\nBut for a multi-variable function there are multiple ways in which to influence the output value and thus multiple dimensions along which a we could change inputs. How can we extend our understanding of the direction of function decrease beyond 'left and right' and into 3-dimensions and more? We use partial derivatives.\n\n\nIf $z = f(x, y)$, then we have a multi-variable function with partial derivatives:\n \n$$f_x(x_0, y_0) = \\lim_{h \\to 0}\\frac{f(x_0+h, y_0) - f(x_0, y_0)}{h}$$\n\n$$f_y(x_0, y_0) = \\lim_{h \\to 0}\\frac{f(x_0, y_0+h) - f(x_0, y_0)}{h}$$\n\nwith each capturing the rate-of-change with respect to a single variable in our multi-variable function.\n\nIn the $x,y$ plane, we can imagine being at some point $(x_0,y_0)$ and nudging away from that point in the plane by the vector $\\mathbf{u} = \\langle a, b \\rangle $. \n\nNow not restricted to moving 'left and right' in the x-axis or 'up and down' the y-axis, we have a **Directional Derivative** of $f$ at $(x_0,y_0)$.\n\n$$D_uf(x_0, y_0) = \\lim_{h \\to 0}\\frac{f(x_0 + ha, y_0+hb) - f(x_0, y_0)}{h}$$\n\nMore intuitively, we can consider that nudge as being of length $h$ at some angle $\\theta$ (capturing direction). Thus our $a$ and $b$ are $\\cos{\\theta}$ and $\\sin{\\theta}$ respectively, and $\\mathbf{u} = \\langle a, b \\rangle $ is a vector of length 1.\n\nIn fact, any differentiable function of $x$ and $y$ has a directional derivative in a direction of $\\mathbf{u}$ and this relationship can be expressed as:\n\n$$D_uf(x, y) = f_x(x, y)a + f_y(x, y)b$$\n\n**To prove this.**\n\nDefine a function $g$ of the single variable $h$ as\n\n$$\n\\begin{align}\ng(h) & = f(x_0 + ha, y_0+hb) \\\\\n\\end{align}\n$$\n\n\nBy definition of the derivative:\n\n$$\n\\begin{align}\ng'(0) & = \\lim_{h \\to 0}\\frac{g(h) - g(0)}{h} \\\\\n& = \\lim_{h \\to 0}\\frac{f(x_0+ha, y_0+hb) - f(x_0, y_0)}{h}\\\\\n& = D_uf(x_0, y_0)\n\\end{align}\n$$\n\nWriting $x = x_0 + ha$ and $y = y_0 + hb$ we get $g(h) = f(x, y)$ and \n\n$$\n\\begin{align}\ng'(h) & = \\frac{\\partial f}{\\partial x}\\frac{\\partial x}{\\partial h} + \\frac{\\partial f}{\\partial y}\\frac{\\partial y}{\\partial h} \\\\\n& = f_x(x, y)a + f_y(x, y)b \\\\\n& = D_uf(x_0, y_0)\n\\end{align}\n$$\n\nSubstituting in $h=0$ then $x$ and $y$ become $x = x_0$, $y = y_0$, so:\n\n$$\n\\begin{align}\ng'(0) & = f_x(x, y)a + f_y(x, y)b \\\\\n & = f_x(x_0, y_0)a + f_y(x_0, y_0)b \\\\\n& = D_uf(x_0, y_0)\n\\end{align}\n$$\n\nThus:\n\n$$D_uf(x, y) = f_x(x, y)a + f_y(x, y)b$$\n\n\nThis relationship between partial derivatives and a directional nudges in each input dimension generalises beyond 2 dimensions, and can be compactly represented by _vectorising_ the combination of the partial derivatives and an input.\n\n$$\n\\begin{align}\nD_uf(x,y) & = f_x(x, y)a + f_y(x, y)b \\\\\n & = \\langle f_x(x,y), f_y(x, y) \\rangle \\cdot \\langle a, b \\rangle \\\\\n & = \\langle f_x(x,y), f_y(x, y) \\rangle \\cdot \\mathbf{u} \\\\\n\\end{align}\n$$\n\n**In the `gradient_descent` library, we can calculate the gradient vector from a 
`MultiVariableFunction`. For the function:**\n\n$$f(x,y) = x^2 + y^2 - 2x - 6y + 14$$\n\n\n```python\nx = gradient_descent.Variable(\"x\")\ny = gradient_descent.Variable(\"y\")\nf = gradient_descent.MultiVariableFunction(\n variables={x, y},\n expressions=[\n gradient_descent.PolynomialExpression(variable=x, coefficient=1, exponent=2),\n gradient_descent.PolynomialExpression(variable=y, coefficient=1, exponent=2),\n gradient_descent.PolynomialExpression(variable=x, coefficient=-2, exponent=1),\n gradient_descent.PolynomialExpression(variable=y, coefficient=-6, exponent=1),\n gradient_descent.ConstantExpression(real=14.0),\n ],\n)\n\nf.gradient()\n```\n\n#### 2.2 Steepest Descent\n\nNow able to determine the gradient vector of a function, capturing the rate of change along each dimension of a function, the question becomes in which 'direction' to go to 'descend' or decrease the function's value?.\n\n$$\\nabla_{u} f(x_0,y_0) = \\mathbf{u} \\cdot f(x_0, y_0)$$ \n\nFirstly, if the gradient is zero\n\n$$\\nabla_{u} f(x_0,y_0) = 0$$ \n\nthen the directional gradient is zero in every direction. This would be the end of our descent. But for a non-zero gradient, then the gradient itself is the direction that maximises the dot product.\n\n$$\\max \\nabla f(x_0, y_0) \\cdot \\mathbf{u} = \\frac{\\nabla f(x_0, y_0)}{\\lvert \\nabla f(x_0, y_0) \\rvert}$$\n\n\n\nThis is actually great, because in order to descend fastest from some point $f(x_0, y_0)$ we don't need to calculate which direction is best, the direction of the gradient is the best direction.\n\n#### 2.3 Gradient Descent - Iterating Towards the Bottom\n\nNow with a method to calculate the direction of maximum descent from a point $\\mathbf{a}$ in a function's input space, we are very close to creating the _Gradient Descent_ optimisation process.\n\nGiven a differentiable multi-variable function $f(\\mathbf{x})$, with $\\mathbf{x}$ being a vector of inputs $\\langle x, y, z, ... \\rangle$, then we know:\n\n**At some point $\\mathbf{a} \\in \\mathbf{x}$, $f(\\mathbf{x})$ decreases _fastest_ in the direction of the negative gradient:** $-\\nabla \\mathbf{f(a)}$\n\nIn the Python library `gradient_descent`, we can calculate this:\n\n\n```python\nf_grad = f.gradient()\n\nprint(f\"Gradient: {f_grad}\")\n\na: gradient_descent.Point = {\n x: -1,\n y: 1,\n}\n\nf_grad_a: Mapping[gradient_descent.Variable, float] = {\n var: grad_elem.evaluate(a)\n for var, grad_elem\n in f_grad.items()\n}\n \nprint(\"Gradient of f(x, y) @ point 'a'\")\nprint(f_grad_a)\n```\n\n#### 2.4 Analytical vs. Iterative\n\nNow, understanding the process, we are ready to run Gradient Descent in Python. 
The optimisation problem we'll solve is minimising:\n\n$$cost= f(x,y) = x^2 + y^2 - 2x - 6y + 14$$\n\nWe can solve this analytically, which will be useful in validating the Python implementation:\n\n$$\n\\begin{align}\nf_x(x, y) & = 2x + 0 - 2 - 0 + 0 \\\\\nf_x(x, y) & = 2x - 2\\\\\n\\\\\nf_y(x, y) & = 0 + 2y - 0 - 6 + 0 \\\\\nf_y(x, y) & = 2y - 6\n\\end{align}\n$$\n\nSolving...\n\n$$\n\\begin{align}\nf_x(x, y) & = 2x - 2 = 0 \\\\\n2x - 2 & = 0 \\\\\n2x & = 2 \\\\\nx & = 1 \\\\\n\\\\\nf_y(x, y) & = 2y - 6 = 0 \\\\\n2y - 6 & = 0 \\\\\n2y & = 6 \\\\\ny & = 3 \\\\\n\\end{align}\n$$\n\nSo $f(x, y)$ has a critical point at $(1, 3)$ and we can show graphically that this critical point is a minimum:\n\n\n```python\nimport sympy\nimport numpy as np\nimport matplotlib.pyplot as plt\n```\n\n\n```python\nfig = plt.figure()\nax = fig.gca(projection='3d')\n\n# Create boundaries of f(x,y)\nx = np.linspace(-1,15)\ny = np.linspace(-1,15)\n\n# Create 2D domain of f(x,y)\nxf, yf = np.meshgrid(x,y)\n\n# Discrete version of f(x,y) over 2D domain\nfxy = (xf**2) + (yf**2) - (2*xf) - (6*yf) + 14\n\n# Plot our function to be optimised f(x,y)\nax.plot_surface(xf, yf, fxy, alpha=0.1)\nax.view_init(15, 20)\n\n# Plot extrema\nx_extrema = np.array([1])\ny_extrema = np.array([3])\nz_extrema = (x_extrema**2) + (y_extrema**2) - (2*x_extrema) - (6*y_extrema) + 14\n\nax.scatter(x_extrema, y_extrema, z_extrema, color='r', label='maxima')\n\nax.grid(False)\nax.legend()\n# plt.locator_params(nbins=6) # Amount of numbers per axis\n# Label our axes\nax.set_xlabel(\"x\")\nax.set_ylabel(\"y\")\nax.set_zlabel(\"z\")\n```\n\n**Now let's solve the same problem iteratively using the Python implementation of the Gradient Descent algorithm:**\n\nThe core of the iterative gradient descent process is nice and simple:\n\n$$ \\mathbf{a}_{n+1} = \\mathbf {a}_{n}-\\gamma \\nabla f(\\mathbf {a} _{n})$$\n\n\n$$f(\\mathbf{a_0}) \\ge f(\\mathbf{a_1}) \\ge f(\\mathbf{a_1}) \\ge f(\\mathbf{a_2}) \\ge f(\\mathbf{a_3}) ...$$\n\nThis process has 4 clear inputs:\n\n1. The function\n2. The initial starting point, $\\mathbf{a_0}$\n3. The step size, $\\gamma$ (gamma)\n4. The number of iterations\n\n\n```python\nminimum_val, minimum_point = gradient_descent.gradient_descent(\n gamma=0.1,\n max_iterations=5000,\n f=f,\n )\nprint(\"\\nResu\")\nprint(f\"Min Value: {minimum_val}\")\nprint(f\"Min Location: {minimum_point}\")\n```\n\nSuccess! The answers are not exact because of [floating-point arithmetic error](https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html), but $(1, 3)$ is correct.\n\nWe can re-run the function and no matter which values are randomly assigned to the initial starting point, the process converges to the correct result.\n\n### 3. Convergence Behaviour\n\nHaving demonstrated gradient descent convergence of a simple convex function, let's investigate convergence behaviour\non different functions when the parameters of the convergence process are manipulated.\n\n#### 3.1 Changing Max Iterations\n\nClearly we have have a relationship between the function's gradient, the step size, and the number of iterations that can cause nonconvergence when max iterations is limited. \n\nWe want the step size to be small enough such that the monotonic series is stable in its convergence to a minima,\nbut 'small enough' assumes a sufficient series length. 
By severely limiting iterations, we can prevent convergence even on the simple function we've observed earlier.\n\n\n```python\nminimum_val, minimum_point = gradient_descent.gradient_descent(\n gamma=0.01,\n max_iterations=5,\n f=f,\n )\nprint(\"\\nResults:\")\nprint(f\"Min Value: {minimum_val}\")\nprint(f\"Min Location: {minimum_point}\")\n```\n\n#### 3.2 Changing Step Size\n\nThe _Rosenbrock function_ function below was specifically designed to test optimization functions like gradient descent. The global minima of the function is $(1, 1)$ but is difficult to reach this minima because it is in a _very_\nshallow valley.\n\n$$f(x,y)=(1-x)^2+100(y-x^2)^2$$\n\nalternate form\n\n$$100 x^4 - 200 x^2 y + x^2 - 2 x + 100 y^2 + 1$$\n\nWe can use `matplotlib` to visualise the shallowness of the area in which the $(1, 1)$ global minima resides.\nDue to the sensitivity of our contour bands, the function's value area to be unchanging in a large parabolic region\ncontaining the minima. In fact, the area contains a _very slight_ decline.\n\n\n```python\ndelta = 0.001\nx = np.arange(-3.0, 3.0, delta)\ny = np.arange(-2.0, 2.0, delta)\n\nX, Y = np.meshgrid(x, y)\n\nZ1 = np.exp(-X**2 - Y**2)\nZ2 = np.exp(-(X - 1)**2 - (Y - 1)**2)\nZ = (Z1 - Z2) * 2\n\nx = np.linspace(-2,2)\ny = np.linspace(-2,2)\n\n# Create our grid of points\nxv, yv = np.meshgrid(x,y)\nax = plt.subplot(1,2,1) \n\n# Make a contour plot that is filled with color.\nax.contourf(xv,yv, (1 - xv)**2+ 100*(yv - (xv**2))**2)\nax.set_title('Contours of f(x,y)')\n```\n\nAttempting gradient descent on this function, we do not the global minimum even after a large number of iterations,\n_but we get close_.\n\n\n```python\nx = gradient_descent.Variable(\"x\")\ny = gradient_descent.Variable(\"y\")\nf = gradient_descent.MultiVariableFunction( # f(x, y) =\n variables={x, y},\n expressions=[\n gradient_descent.PolynomialExpression(variable=x, coefficient=100, exponent=4), # 100x^4 -\n gradient_descent.Multiply( # 200x^2y +\n a=gradient_descent.PolynomialExpression(variable=x, coefficient=-200, exponent=2),\n b=gradient_descent.PolynomialExpression(variable=y, coefficient=1, exponent=1),\n ), \n gradient_descent.PolynomialExpression(variable=x, coefficient=1, exponent=2), # x^2 -\n gradient_descent.PolynomialExpression(variable=x, coefficient=-2, exponent=1), # 2x +\n gradient_descent.PolynomialExpression(variable=y, coefficient=100, exponent=2), # 100y^2 +\n gradient_descent.ConstantExpression(real=1.0), # 1\n ],\n)\n\nf.gradient()\n\n# The grad descent process on this function is quite sensitive to initial point. \n# If started too far away from minima, the gradients 'explode' and convergence does not occur.\ninitial_points = [\n {x: 1, y: 1},\n {x: 1.1, y: 1.1},\n]\n\nMAX_ITERATIONS = 5000 # How big does this need to be to achieve convergence???\n\nfor i_p in initial_points:\n minimum_val, minimum_point = gradient_descent.gradient_descent(\n gamma=0.00001,\n max_iterations=MAX_ITERATIONS,\n initial_point=i_p,\n f=f,\n )\n print(\"\\nResults:\")\n print(f\"--- Min Value: {minimum_val}\")\n print(f\"--- Min Location: {minimum_point}\\n\\n\")\n```\n\n#### 3.3 Changing initial starting points\n\nFor functions with _multiple_ local minima, it matters which initial starting point we use, as gradient descent has\nno mathematical mechanism of exploring multiple local minima.\n\nThe function below has two minima, at $(1, 1)$ and $(-1, -1)$. 
Gradient descent can find either, depending on initial point of descent.\n\n$$f(x, y) = x^4 + y^4 - 4xy + 1$$\n\n\n```python\nx = gradient_descent.Variable(\"x\")\ny = gradient_descent.Variable(\"y\")\nf = gradient_descent.MultiVariableFunction( # f(x, y) =\n variables={x, y},\n expressions=[\n gradient_descent.PolynomialExpression(variable=x, coefficient=1, exponent=4), # x^4 +\n gradient_descent.PolynomialExpression(variable=y, coefficient=1, exponent=4), # y^4 -\n gradient_descent.Multiply( # 4xy +\n a=gradient_descent.PolynomialExpression(variable=x, coefficient=-4, exponent=1),\n b=gradient_descent.PolynomialExpression(variable=y, coefficient=1, exponent=1),\n ), \n gradient_descent.ConstantExpression(real=1.0), # 1\n ],\n)\n\n# The grad descent process on this function is quite sensitive to initial point. \n# If started too far away from minima, the gradients 'explode' and convergence does not occur.\ninitial_points = [\n {x: 1, y: 1},\n {x: -1, y: -1},\n {x: -1.5, y: -1.5},\n {x: 1.1, y: 1.2},\n]\n\nfor i_p in initial_points:\n minimum_val, minimum_point = gradient_descent.gradient_descent(\n gamma=0.1,\n max_iterations=50000,\n initial_point=i_p,\n f=f,\n )\n print(\"\\nResults:\")\n print(f\"--- Min Value: {minimum_val}\")\n print(f\"--- Min Location: {minimum_point}\")\n```\n\n### Conclusion\n\nIn this report we've demonstrated the core components of the gradient descent optimization process, in both Python and equations. Convergence has been demonstrated on simple polynomials and more complicated polynomials that can demonstrate how careful parameter tuning of the gradient descent process can be necessary to ensure convergence, even on convex functions.\n\n## References\n\n* RUMELHART, David E.; HINTON, Geoffrey E.; WILLIAMS, Ronald J. (1986). \"Learning representations by back-propagating errors\". Nature. 323 (6088): 533\u2013536. doi:10.1038/323533a0. S2CID 205001834 -http://www.cs.utoronto.ca/~hinton/absps/naturebp.pdf\n* STEWART, J. (2019). \"Calculus: concepts and contexts\". Boston, MA, USA, Cengage.\n* SANDERSON, G. (2018). \"Why the gradient is the direction of steepest ascent\". https://www.youtube.com/watch?v=TEB2z7ZlRAw\n\n## Appendix\n\nThe full `gradient_descent` implementation is available online at [github.com/thundergolfer/modelling_change/](https://github.com/thundergolfer/modelling_change/), but it has also been copied in below:\n\n\n```python\n\"\"\"\nPure-Python3 implementation of Gradient Descent (https://en.wikipedia.org/wiki/Gradient_descent).\nWritten for educational/learning purposes and not performance.\n\nCompleted as part of the UTS course '35512 - Modelling Change' (https://handbook.uts.edu.au/subjects/35512.html).\n\"\"\"\nimport random\nfrom typing import List, Mapping, Optional, Dict, Set, Tuple\n\n\n# Used to make chars like 'x' resemble typical mathematical symbols.\ndef _italic_str(text: str) -> str:\n return f\"\\x1B[3m{text}\\x1B[23m\"\n\n\ndef _superscript_exp(n: str) -> str:\n return \"\".join([\"\u2070\u00b9\u00b2\u00b3\u2074\u2075\u2076\u2077\u2078\u2079\"[ord(c) - ord('0')] for c in str(n)])\n\n\nclass Variable:\n \"\"\"\n A object representing a mathematical variable, for use in building expressions.\n\n Usage: `x = Variable(\"x\")`\n \"\"\"\n def __init__(self, var: str):\n if len(var) != 1 or (not var.isalpha()):\n raise ValueError(\"Variable must be single alphabetical character. eg. 
'x'\")\n self.var = var\n\n def __repr__(self):\n return _italic_str(self.var)\n\n def __eq__(self, other):\n \"\"\"Overrides the default implementation\"\"\"\n if isinstance(other, Variable):\n return self.var == other.var\n return False\n\n def __key(self):\n return self.var\n\n def __hash__(self):\n return hash(self.__key())\n\n\n# An element of some set called a space. Here, that 'space' will be the domain of a multi-variable function.\nPoint = Dict[Variable, float]\n\n\nclass Expression:\n def diff(self, ref_var: Optional[Variable] = None) -> Optional[\"Expression\"]:\n raise NotImplementedError\n\n def evaluate(self, point: Point) -> float:\n raise NotImplementedError\n\n\nclass ConstantExpression(Expression):\n \"\"\"\n ConstantExpression is a single real-valued number.\n It cannot be parameterised and its first-derivative is always 0 (None).\n \"\"\"\n\n def __init__(self, real: float):\n super().__init__()\n self.real = real\n\n def diff(self, ref_var: Optional[Variable] = None) -> Optional[Expression]:\n return None\n\n def evaluate(self, point: Point) -> float:\n return self.real\n\n def __repr__(self):\n return str(self.real)\n\n\nclass PolynomialExpression(Expression):\n \"\"\"\n An expression object that support evaluation and differentiation of single-variable polynomials.\n \"\"\"\n def __init__(\n self,\n variable: Variable,\n coefficient: float,\n exponent: int\n ):\n super().__init__()\n self.var = variable\n self.coefficient = coefficient\n self.exp = exponent\n\n def diff(self, ref_var: Optional[Variable] = None) -> Optional[Expression]:\n if ref_var and ref_var != self.var:\n return None\n if self.exp == 1:\n return ConstantExpression(real=self.coefficient)\n return PolynomialExpression(\n variable=self.var,\n coefficient=self.coefficient * self.exp,\n exponent=self.exp - 1,\n )\n\n def evaluate(self, point: Point) -> float:\n return (\n self.coefficient *\n point[self.var] ** self.exp\n )\n\n def __repr__(self):\n return f\"{self.coefficient}{self.var}{_superscript_exp(str(self.exp))}\"\n\n\nclass Multiply(Expression):\n def __init__(self, a: PolynomialExpression, b: PolynomialExpression):\n self.a = a\n self.b = b\n\n def diff(self, ref_var: Optional[Variable] = None) -> Optional[\"Expression\"]:\n if not ref_var:\n raise RuntimeError(\"Must pass ref_var when differentiating Multiply expression\")\n if self.a.var == ref_var:\n diff_a = self.a.diff(ref_var=ref_var)\n if not diff_a:\n return None\n else:\n return Multiply(a=diff_a, b=self.b)\n elif self.b.var == ref_var:\n diff_b = self.b.diff(ref_var=ref_var)\n if not diff_b:\n return None\n else:\n return Multiply(a=self.a, b=diff_b)\n else:\n return None # diff with respect to some non-involved variable is 0\n\n def evaluate(self, point: Point) -> float:\n return self.a.evaluate(point) * self.b.evaluate(point)\n\n def __repr__(self):\n return f\"({self.a})({self.b})\"\n\n\nGradientVector = Dict[Variable, \"MultiVariableFunction\"]\n\n\nclass MultiVariableFunction:\n \"\"\"\n MultiVariableFunction support the composition of expressions by addition into a\n function of multiple real-valued variables.\n\n Partial differentiation with respect to a single variable is supported, as is\n evaluation at a Point, and gradient finding.\n \"\"\"\n\n def __init__(self, variables: Set[Variable], expressions: List[Expression]):\n self.vars = variables\n self.expressions = expressions\n\n def gradient(self) -> GradientVector:\n grad_v: GradientVector = {}\n for v in self.vars:\n grad_v[v] = self.diff(ref_var=v)\n return grad_v\n\n 
def diff(self, ref_var: Variable) -> \"MultiVariableFunction\":\n first_partial_derivatives: List[Expression] = []\n for expression in self.expressions:\n first_partial_diff = expression.diff(ref_var=ref_var)\n if first_partial_diff:\n first_partial_derivatives.append(first_partial_diff)\n return MultiVariableFunction(\n variables=self.vars,\n expressions=first_partial_derivatives,\n )\n\n def evaluate(self, point: Point) -> float:\n return sum(\n expression.evaluate(point)\n for expression\n in self.expressions\n )\n\n def __repr__(self):\n return \" + \".join([str(e) for e in self.expressions])\n\n\ndef gradient_descent(\n gamma: float,\n max_iterations: int,\n f: MultiVariableFunction,\n initial_point: Optional[Point] = None,\n) -> Tuple[float, Point]:\n \"\"\"\n Implements Gradient Descent (https://en.wikipedia.org/wiki/Gradient_descent) in pure-Python3.6+ with\n no external dependencies.\n\n :param gamma: 'step size', or 'learning rate'\n :param max_iterations: Maximum number of steps in descent process.\n :param f: A differentiable function off multiple real-valued variables.\n :param initial_point: Optionally, a place to start the descent process\n :return: A tuple of first a local minimum and second the point at which minimum is found.\n \"\"\"\n if gamma <= 0:\n raise ValueError(\"gamma value must be a positive real number, \u03b3\u2208\u211d+\")\n\n iterations_per_logline = 100\n a: Point = {}\n f_grad = f.gradient()\n\n if not initial_point:\n for v in f.vars:\n a[v] = random.randrange(4)\n else:\n a = initial_point\n for i in range(max_iterations):\n # Calculate function's gradient @ point `a`\n grad_a: Mapping[Variable, float] = {\n var: grad_elem.evaluate(a)\n for var, grad_elem\n in f_grad.items()\n }\n # update estimate of minimum point\n a_next = {\n var: current - (gamma * grad_a[var])\n for var, current\n in a.items()\n }\n a_prev = a\n a = a_next\n\n if a_prev == a:\n print(\"Iteration as not changed value. Stopping early.\")\n break\n if i % iterations_per_logline == 0:\n print(f\"Iteration {i}. 
Current min estimate: {a}\")\n return f.evaluate(a), a\n\n\ndef main() -> None:\n x = Variable(\"x\")\n y = Variable(\"y\")\n\n # Test variable comparisons\n ##########################\n assert Variable(\"x\") == Variable(\"x\")\n assert Variable(\"x\") != Variable(\"y\")\n assert Variable(\"y\") != Variable(\"x\")\n assert Variable(\"y\") != Variable(\"z\")\n\n # Test gradient evaluations of Expressions\n ##########################################\n # ConstantExpressions\n assert ConstantExpression(real=0.0).diff() is None\n assert ConstantExpression(real=4.5).diff() is None\n # PolynomialExpression\n poly1 = PolynomialExpression(\n variable=Variable(\"x\"),\n coefficient=2,\n exponent=4,\n )\n poly1_grad1 = poly1.diff()\n assert poly1_grad1.var == Variable(\"x\")\n assert poly1_grad1.coefficient == 8\n assert poly1_grad1.exp == 3\n poly1_grad2 = poly1.diff(ref_var=Variable(\"y\"))\n assert poly1_grad2 is None\n\n # Test function evaluation\n ##########################\n x = Variable(\"x\")\n y = Variable(\"y\")\n # f = 3x + y^2\n f1 = MultiVariableFunction(\n variables={x, y},\n expressions=[\n PolynomialExpression(variable=x, coefficient=3, exponent=1),\n PolynomialExpression(variable=y, coefficient=1, exponent=2),\n ],\n )\n assert f1.evaluate(point={x: 1.0, y: 1.0}) == 4\n assert f1.evaluate(point={x: 1.0, y: 2.0}) == 7\n # Test function gradient\n g = f1.gradient()\n assert str(g[x]) == \"3\"\n\n # Test Multiply\n ##########################\n a = PolynomialExpression(variable=x, coefficient=3, exponent=1)\n b = PolynomialExpression(variable=y, coefficient=1, exponent=2)\n a_times_b = Multiply(a=a, b=b)\n result = a_times_b.evaluate(point={x: 2.0, y: 4.0})\n assert result == (6 * 16)\n result = a_times_b.evaluate(point={x: 3.0, y: 5.0})\n assert result == 225\n # Test diff on multiplication expression\n a_times_b_diff = a_times_b.diff(ref_var=x)\n assert a_times_b_diff.evaluate(point={x: 1.0, y: 5.0}) == 75\n\n\nif __name__ == \"__main__\":\n main()\n```\n", "meta": {"hexsha": "527d28cd1e64ed507cd112f6e3cdcb276d8c844b", "size": 41302, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "optimization/gradient_descent.ipynb", "max_stars_repo_name": "thundergolfer/uni", "max_stars_repo_head_hexsha": "e604d1edd8e5085f0ae1c0211015db38c07fc926", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-01-06T04:50:09.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-06T04:50:09.000Z", "max_issues_repo_path": "optimization/gradient_descent.ipynb", "max_issues_repo_name": "thundergolfer/uni", "max_issues_repo_head_hexsha": "e604d1edd8e5085f0ae1c0211015db38c07fc926", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2022-01-23T06:09:21.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-23T06:14:17.000Z", "max_forks_repo_path": "optimization/gradient_descent.ipynb", "max_forks_repo_name": "thundergolfer/uni", "max_forks_repo_head_hexsha": "e604d1edd8e5085f0ae1c0211015db38c07fc926", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.2072155412, "max_line_length": 442, "alphanum_fraction": 0.5493196455, "converted": true, "num_tokens": 7936, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.894789457685656, "lm_q2_score": 0.9230391632454272, "lm_q1q2_score": 0.8259257123029975}} {"text": "# Factoring Polynomials with SymPy\n\nHere is an example that uses [SymPy](http://sympy.org/en/index.html) to factor polynomials.\n\n\n```python\nfrom ipywidgets import interact\n```\n\n\n```python\nfrom sympy import Symbol, Eq, factor\n```\n\n\n```python\nx = Symbol('x')\n```\n\n\n```python\ndef factorit(n):\n return Eq(x**n-1, factor(x**n-1))\n```\n\n\n```python\nfactorit(12)\n```\n\n\n\n\n Eq(x**12 - 1, (x - 1)*(x + 1)*(x**2 + 1)*(x**2 - x + 1)*(x**2 + x + 1)*(x**4 - x**2 + 1))\n\n\n\n\n```python\ninteract(factorit, n=(2,40));\n```\n\n\n interactive(children=(IntSlider(value=21, description='n', max=40, min=2), Output()), _dom_classes=('widget-in\u2026\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "aa4f911c833b559630de400d1d8203398fb5b090", "size": 4824, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "docs/source/examples/Factoring.ipynb", "max_stars_repo_name": "nthiery/ipywidgets", "max_stars_repo_head_hexsha": "463cb0444d1a2f31c3d5cacfe6b38d067f231406", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-06-27T08:22:43.000Z", "max_stars_repo_stars_event_max_datetime": "2019-06-27T08:22:43.000Z", "max_issues_repo_path": "docs/source/examples/Factoring.ipynb", "max_issues_repo_name": "miklobit/ipywidgets", "max_issues_repo_head_hexsha": "092c085aaf3b1165a5c8f24a9ce663ae523e7969", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 200, "max_issues_repo_issues_event_min_datetime": "2019-02-07T18:19:36.000Z", "max_issues_repo_issues_event_max_datetime": "2021-07-29T08:37:12.000Z", "max_forks_repo_path": "docs/source/examples/Factoring.ipynb", "max_forks_repo_name": "miklobit/ipywidgets", "max_forks_repo_head_hexsha": "092c085aaf3b1165a5c8f24a9ce663ae523e7969", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-04-08T11:30:13.000Z", "max_forks_repo_forks_event_max_datetime": "2021-04-08T11:30:13.000Z", "avg_line_length": 23.3043478261, "max_line_length": 165, "alphanum_fraction": 0.521973466, "converted": true, "num_tokens": 217, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9525741254760639, "lm_q2_score": 0.8670357615200474, "lm_q1q2_score": 0.8259158322864322}} {"text": "# Initialize Enviroment\n\n\n```python\nfrom IPython.display import display, Math\nfrom sympy import *\ninit_printing()\n```\n\n# Plotting\n## Single variable function\n### Basic plotting\n\n\n\n```python\nfrom sympy import symbols\nfrom sympy.plotting import plot\nx = symbols('x')\n\nexpr = x*x\n\nplot(expr);\n```\n\n\n
    \n\n\n## Suppresss Str representation\n\nIf you don't like the ' string, add a ';' at the end of the plot command.\n\n\n```python\nplot(expr);\n```\n\n### Plot with range\n\nTo specify the plotting range, pass a tuple of the form ```(variable, lower_limit, upper_limit)```\n\n\n```python\nfrom sympy import symbols\nfrom sympy.plotting import plot\nx = symbols('x')\n\nexpr = x**2\nexpr_range = (x,-2,3)\n\nplot(expr, expr_range);\n```\n\n### Title and Label\n\nPass in keyword arguments ```Title```,```xlabel```,```ylabel```. Latex is supported by these arguments.\n\n\n```python\nfrom sympy import symbols\nfrom sympy.plotting import plot\nx = symbols('x')\n\nexpr = x**2\nexpr_range = (x,-2,3)\n\ntitle = '$y = {}$'.format(latex(expr))\n\nplot(expr, expr_range, title = title, xlabel = 'x', ylabel = 'y');\n```\n\n### Line color\n\nPass keyword argument ```line_color```.\n\n\n```python\nfrom sympy import symbols\nfrom sympy.plotting import plot\nx = symbols('x')\n\nexpr = x**2\n\nplot(expr, expr_range, line_color = 'r');\n```\n\n### Multiple plot with same range\n\nUse the syntax of ```plot(expr_1, expr_2, expr_3, range)```\n\n\n```python\nexpr_1 = x\nexpr_2 = x**2\nexpr_3 = x**3\n\nplot(expr_1, expr_2, expr_3, (x, -1, 1));\n```\n\n### Multiple plots with different range\n\nUse the syntax of \n```\nplot(\n (expr_1,range_1),\n (expr_2,range_2),\n ...\n)\n```\n\n\n```python\nexpr_1 = x**2\nrange_1 = (x,-2,2)\n\nexpr_2 = x\nrange_2 = (x,-1,1)\n\nplot(\n (expr_1,range_1),\n (expr_2,range_2)\n);\n```\n\n### Multiple plots with differnt colors\n\nPlotting multiple plots with different colors is not straight forward. \n\n1. Pass ```show = False``` to suppress displaying the plot, save the returned object to a variable.\n2. Get the indivisual data series by indexing the plotting object and set line_color seperately.\n3. Call ```show()``` method of the plotting object to display the image.\n\n\n```python\nexpr_1 = x**2\nrange_1 = (x,-2,2)\n\nexpr_2 = x\nrange_2 = (x,-1,1)\n\np = plot(\n (expr_1,range_1),\n (expr_2,range_2),\n show = False\n);\n\np[0].line_color = 'r'\np[1].line_color = 'b'\n\np.show()\n```\n\nTo add a legend to the plot, take additional two steps.\n\n1. Pass ```Legend = True``` when construct the plotting object.\n2. 
Set label for each data series seperately.\n\n\n```python\nexpr_1 = x**2\nrange_1 = (x,-2,2)\n\nexpr_2 = x\nrange_2 = (x,-1,1)\n\np = plot(\n (expr_1,range_1),\n (expr_2,range_2),\n show = False,\n legend = True\n);\n\np[0].line_color = 'r'\np[1].line_color = 'b'\n\np[0].label = 'Line 1'\np[1].label = 'Line 2'\n\np.show()\n```\n\n## 3D surface plot\n### Single plot\n\nUse ```plot3d(expr, x_range , y_range)``` to draw a 3D surface plot.\n\n\n```python\nfrom sympy import symbols\nfrom sympy.plotting import plot3d\n\nx, y = symbols('x y')\n\nexpr = x*y\nx_range = (x, -5, 5)\ny_range = (y, -5, 5)\n\nplot3d(expr, x_range, y_range);\n```\n\n### Multiple plot\n\nUse the following syntax to draw multiple 3D surface plot.\n\n```\nplot3d(\n \uff08expr_1, x_range_1 , y_range_1\uff09,\n \uff08expr_2, x_range_2 , y_range_2\uff09\n)\n```\n\n\n```python\nplot3d(\n (x**2 + y**2, (x, -5, 5), (y, -5, 5)),\n (x*y, (x, -3, 3), (y, -3, 3))\n);\n```\n\n## Single variable parametric function\n\nUse ```plot_parametric(expr_x, expr_y, range_u)``` to draw a single variable parametric function.\n\n\n```python\nfrom sympy import symbols, cos, sin\nfrom sympy.plotting import plot_parametric\n\nu = symbols('u')\nexpr_x = cos(u)\nexpr_y = sin(u)\n\np = plot_parametric(expr_x, expr_y, (u, -5, 5));\n```\n\nThe result is not a perfect circle. Sympy offers the ```aspect_ratio``` argment to adjust the ratio, but it doesn't work in sympy 1.0 yet. The community is working on the problem and it may work in the next release.\n\n## 3D parametric line\n\nUse ```plot3d_parametric_line(expr_x, expr_y, expr_z, range_u)``` to draw a 3D parametric function.\n\n\n```python\nfrom sympy import symbols, cos, sin\nfrom sympy.plotting import plot3d_parametric_line\nu = symbols('u')\nexpr_x = cos(u)\nexpr_y = sin(u)\nexpr_z = u\n\nplot3d_parametric_line(expr_x, expr_y, expr_z, (u, -5, 5));\n```\n\n## 3D parametric surface\nUse ```plot3d_parametric_surface(expr_x, expr_y, expr_z, u_range, v_range)``` to draw a 3d parametric surface.\n\n\n```python\nfrom sympy import symbols, cos, sin\nfrom sympy.plotting import plot3d_parametric_surface\nu, v = symbols('u v')\n\nexpr_x = cos(u + v)\nexpr_y = sin(u-v)\nexpr_z = u-v\nu_range = (u, -5, 5)\nv_range = (v, -5, 5)\n\nplot3d_parametric_surface(expr_x, expr_y, expr_z, u_range, v_range);\n```\n\n## Implicit function\n### Single variable implicit function\nUse ```plot_implicit(equition)``` to draw implicit function plotting. 
\n\n\n```python\np1 = plot_implicit(Eq(x**2 + y**2-5))\n```\n\n### Multiple variable implicit function\n\n\n```python\np2 = plot_implicit(\n Eq(x**2 + y**2, 3),\n (x, -3, 3), \n (y, -3, 3)\n)\n```\n\n## Implicit inequilities\n\nPass an inequility to ```plot_implicit```\n\n\n```python\nplot_implicit(y > x**2);\n```\n\nTo combine several conditions to define the region, and ```And,Or``` logic conjunctions.\n\n\n```python\nplot_implicit(And(y > x, y > -x));\n```\n\n\n```python\nplot_implicit(Or(y > x, y > -x));\n```\n\nSometimes Sympy doesn't choose the variable for horizontal axis as you expect.\n\n\n```python\nplot_implicit(Eq(y - 1));\n```\n\nIn this case, use ```x_var``` to choose the variable for x_axis.\n\n\n```python\nplot_implicit(Eq(y - 1),x_var=x);\n```\n\n# Reference\n[Sympy Documentation](http://docs.sympy.org/latest/index.html)\n\n# Related Articles\n* [Sympy Notes I]({filename}0026_sympy_intro_1_en.ipynb)\n* [Sympy Notes II]({filename}0027_sympy_intro_2_en.ipynb)\n* [Sympy Notes III]({filename}0028_sympy_intro_3_en.ipynb)\n* [Sympy Notes IV]({filename}0029_sympy_intro_4_en.ipynb)\n", "meta": {"hexsha": "9f531f3a905df5f28dcd8b0fe3c659c14f2395b3", "size": 572756, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "0011_sympy/0029_sympy_intro_4_en.ipynb", "max_stars_repo_name": "junjiecai/jupyter_demos", "max_stars_repo_head_hexsha": "8aa8a0320545c0ea09e05e94aea82bc8aa537750", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2019-09-16T10:44:39.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-04T18:55:52.000Z", "max_issues_repo_path": "0011_sympy/0029_sympy_intro_4_en.ipynb", "max_issues_repo_name": "junjiecai/jupyter_demos", "max_issues_repo_head_hexsha": "8aa8a0320545c0ea09e05e94aea82bc8aa537750", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "0011_sympy/0029_sympy_intro_4_en.ipynb", "max_forks_repo_name": "junjiecai/jupyter_demos", "max_forks_repo_head_hexsha": "8aa8a0320545c0ea09e05e94aea82bc8aa537750", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2020-10-24T16:19:29.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-04T18:55:57.000Z", "avg_line_length": 628.7113062569, "max_line_length": 108100, "alphanum_fraction": 0.9529223614, "converted": true, "num_tokens": 1905, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8991213826762113, "lm_q2_score": 0.9184802529509909, "lm_q1q2_score": 0.8258252349940912}} {"text": "# Advanced Expression Manipulation\n\n\n```python\nfrom sympy import *\nx, y, z = symbols('x y z')\n```\n\nFor each exercise, fill in the function according to its docstring. \n\n## Creating expressions from classes\n\nCreate the following objects without using any mathematical operators like `+`, `-`, `*`, `/`, or `**` by explicitly using the classes `Add`, `Mul`, and `Pow`. 
You may use `x` instead of `Symbol('x')` and `4` instead of `Integer(4)`.\n\n$$x^2 + 4xyz$$\n$$x^{(x^y)}$$\n$$x - \\frac{y}{z}$$\n\n\n\n```python\ndef explicit_classes1():\n \"\"\"\n Returns the expression x**2 + 4*x*y*z, built using SymPy classes explicitly.\n\n >>> explicit_classes1()\n x**2 + 4*x*y*z\n \"\"\"\n\n```\n\n\n```python\nexplicit_classes1()\n```\n\n\n```python\ndef explicit_classes2():\n \"\"\"\n Returns the expression x**(x**y), built using SymPy classes explicitly.\n\n >>> explicit_classes2()\n x**(x**y)\n \"\"\"\n\n```\n\n\n```python\nexplicit_classes2()\n```\n\n\n```python\ndef explicit_classes3():\n \"\"\"\n Returns the expression x - y/z, built using SymPy classes explicitly.\n\n >>> explicit_classes3()\n x - y/z\n \"\"\"\n\n```\n\n\n```python\nexplicit_classes3()\n```\n\n## Nested args\n\n\n```python\nexpr = x**2 - y*(2**(x + 3) + z)\n```\n\nUse nested `.args` calls to get the 3 in expr.\n\n\n```python\ndef nested_args():\n \"\"\"\n Get the 3 in the above expression.\n\n >>> nested_args()\n 3\n \"\"\"\n\n```\n\n\n```python\nnested_args()\n```\n\n## Traversal \n\nWrite a post-order traversal function that prints each node.\n\n\n```python\ndef post(expr):\n \"\"\"\n Post-order traversal\n\n >>> expr = x**2 - y*(2**(x + 3) + z)\n >>> post(expr)\n -1\n y\n 2\n 3\n x\n x + 3\n 2**(x + 3)\n z\n 2**(x + 3) + z\n -y*(2**(x + 3) + z)\n x\n 2\n x**2\n x**2 - y*(2**(x + 3) + z)\n \"\"\"\n\n```\n\n\n```python\npost(expr)\n```\n\n\n```python\nfor i in postorder_traversal(expr):\n print(i)\n```\n", "meta": {"hexsha": "7559d6e82e90191767670894b7cce2226b3e7ab5", "size": 5189, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "tutorial_exercises/Advanced-Expression Manipulation.ipynb", "max_stars_repo_name": "gvvynplaine/scipy-2016-tutorial", "max_stars_repo_head_hexsha": "aa417427a1de2dcab2a9640b631b809d525d7929", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 53, "max_stars_repo_stars_event_min_datetime": "2016-06-21T21:11:02.000Z", "max_stars_repo_stars_event_max_datetime": "2021-02-04T07:51:03.000Z", "max_issues_repo_path": "tutorial_exercises/Advanced-Expression Manipulation.ipynb", "max_issues_repo_name": "gvvynplaine/scipy-2016-tutorial", "max_issues_repo_head_hexsha": "aa417427a1de2dcab2a9640b631b809d525d7929", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 11, "max_issues_repo_issues_event_min_datetime": "2016-07-02T20:24:06.000Z", "max_issues_repo_issues_event_max_datetime": "2016-07-11T11:31:44.000Z", "max_forks_repo_path": "tutorial_exercises/Advanced-Expression Manipulation.ipynb", "max_forks_repo_name": "gvvynplaine/scipy-2016-tutorial", "max_forks_repo_head_hexsha": "aa417427a1de2dcab2a9640b631b809d525d7929", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 36, "max_forks_repo_forks_event_min_datetime": "2016-06-25T09:04:24.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-09T06:46:01.000Z", "avg_line_length": 18.8690909091, "max_line_length": 243, "alphanum_fraction": 0.4553863943, "converted": true, "num_tokens": 568, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.899121366457407, "lm_q2_score": 0.9184802423517103, "lm_q1q2_score": 0.8258252105674002}} {"text": "## Solution to exercise 1 \n\n### Dynamic Programming with John Stachurski\n\nThe question was: Does there always exists a $x \\in [0, \\infty)$ that solves the equation\n$$\n x\n = c (1-\\beta) + \\beta\n \\sum_{k=1}^K \\max \\left\\{\n w_k ,\\, x\n \\right\\}\n \\, p_k\n$$\nIs it unique? 
Suggest a strategy for computing it.\n\nWe are assuming here that $\\beta \\in (0, 1)$.\n\nThere were hints, as follows:\n\n* Use the metric space $(\\mathbb R, \\rho)$ where $\\rho(x, y) = |x-y|$\n\n* If $x_1, \\ldots, x_K$ are any $K$ numbers, then\n\n$$ \\left| \\sum_{k=1}^K x_k \\right| \\leq \\sum_{k=1}^K |x_k| $$\n\n* For any $a, x, y$ in $\\mathbb R$, \n \n$$ \n \\left| \n \\max \\left\\{ a,\\, x \\right\\} - \\max \\left\\{ a,\\, y \\right\\} \n \\right|\n \\leq | x - y |\n$$\n\n\nYou can convince yourself of the second inequality by sketching and checking different cases...\n\n### Solution\n\nWe're going to use the contraction mapping theorem. Let \n\n$$ \n f(x)\n = c (1-\\beta) + \\beta\n \\sum_{k=1}^K \\max \\left\\{\n w_k ,\\, x\n \\right\\}\n \\, p_k\n$$\n\nWe're looking for a fixed point of $f$ on $\\mathbb R_+ = [0, \\infty)$.\n\nUsing the hints above, we see that, for any $x, y$ in $\\mathbb R_+$, we have\n\n\\begin{align}\n |f(x) - f(y)|\n & = \\left| \n \\beta \\sum_{k=1}^K \\max \\left\\{\n w_k ,\\, x\n \\right\\} \\, p_k\n -\n \\beta \\sum_{k=1}^K \\max \\left\\{\n w_k ,\\, y\n \\right\\} \\, p_k \n \\right|\n \\\\\n & = \\beta\\, \\left|\n \\sum_{k=1}^K [\\max \\left\\{\n w_k ,\\, x\n \\right\\} - \\max \\left\\{\n w_k ,\\, y\n \\right\\} ]\\, p_k \n \\right|\n \\\\\n & \\leq \\beta\\,\\sum_{k=1}^K\n \\left|\n \\max \\left\\{\n w_k ,\\, x\n \\right\\} - \\max \\left\\{\n w_k ,\\, y\n \\right\\} \n \\right| p_k \n \\\\\n & \\leq \\beta\\,\\sum_{k=1}^K\n \\left|\n x - y\n \\right| p_k \n \\\\\n\\end{align}\n\nSince $\\sum_k p_k = 1$, this yields\n\n$$ |f(x) - f(y)| \\leq \\beta |x - y| $$\n\nHence $f$ is a contraction map on $\\mathbb R_+$, and therefore has a unique fixed point $x^*$ such that $f^n(x) \\to x^*$ as $n \\to \\infty$ from any $x \\in \\mathbb R_+$.\n\nLet's plot $f$ when \n\n* $K = 2$\n* $w_1 = 1$ and $w_2 = 2$\n* $p_1 = 0.3$ and $p_3 = 0.7$\n* $c=1$ and $\\beta = 0.9$\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nc, w1, w2, p1, p2, \u03b2 = 1, 1, 2, 0.3, 0.7, 0.9\n\ndef f(x):\n return c * (1 - \u03b2) + \u03b2 * (max(x, w1)*p1 + max(x, w2)*p2)\n```\n\n\n```python\nxvec = np.linspace(0, 4, 100)\nyvec = [f(x) for x in xvec]\n\nfig, ax = plt.subplots()\nax.plot(xvec, yvec, label='$f$')\nax.plot(xvec, xvec, 'k-', label='$45$')\nax.legend()\nplt.show()\n```\n\nNow let's compute that fixed point by iteration:\n\n\n```python\nx = 1.0\nx_vals = []\nfor i in range(50):\n x_vals.append(x)\n x = f(x)\n \nfig, ax = plt.subplots()\nax.plot(x_vals)\nax.set(xlabel=\"$n$\", ylabel=\"$f^n(x)$\")\nax.grid()\nplt.show()\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "6aa54030bfec83f6504ac261b20bc297b7b4482c", "size": 31819, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "ex1_solution.ipynb", "max_stars_repo_name": "jstac/keio_dynamic_programming", "max_stars_repo_head_hexsha": "824ad3662c2c59482a523d92bae1c9b191a8ac85", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2017-10-09T08:13:55.000Z", "max_stars_repo_stars_event_max_datetime": "2018-08-24T03:37:25.000Z", "max_issues_repo_path": "ex1_solution.ipynb", "max_issues_repo_name": "jstac/keio_dynamic_programming", "max_issues_repo_head_hexsha": "824ad3662c2c59482a523d92bae1c9b191a8ac85", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ex1_solution.ipynb", "max_forks_repo_name": 
"jstac/keio_dynamic_programming", "max_forks_repo_head_hexsha": "824ad3662c2c59482a523d92bae1c9b191a8ac85", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 7, "max_forks_repo_forks_event_min_datetime": "2017-10-09T08:39:40.000Z", "max_forks_repo_forks_event_max_datetime": "2019-12-27T17:23:08.000Z", "avg_line_length": 121.9118773946, "max_line_length": 17790, "alphanum_fraction": 0.8452182658, "converted": true, "num_tokens": 1133, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9324533069832973, "lm_q2_score": 0.8856314692902446, "lm_q1q2_score": 0.8258099923081651}} {"text": "### floor, ceil\n\n$\\lfloor \\sqrt{\\lfloor x\\rfloor}\\rfloor = \\lfloor \\sqrt{ x}\\rfloor$\n\n$\\lceil \\sqrt{\\lceil x\\rceil}\\rceil = \\lceil \\sqrt{x}\\rceil$\n\ncontinuous, monotonically increasing $f$ with property $f(x)\\in Z \\Rightarrow x \\in Z$. Then below holds\n$$\n\\begin{align}\n \\lfloor f(\\lfloor x\\rfloor)\\rfloor = \\lfloor f(x)\\rfloor,\\ \\lceil f(\\lceil x\\rceil)\\rceil = \\lceil f(x)\\rceil\n\\end{align}\n$$\n\n$\\lceil \\sqrt{\\lfloor x\\rfloor}\\rceil = \\lceil \\sqrt{ x}\\rceil$, only when $m^2 < x < m^2+1$ not hold.\n\n$$\n\\begin{align}\n \\sum_{k=0}^{m-1} \\lfloor \\frac{n+k}{m} \\rfloor =n= \\sum_{k=0}^{m-1} \\lceil \\frac{n-k}{m} \\rceil \n\\end{align}\n$$\n\n$$\n\\begin{align}\n \\sum_{k=0}^{n-1} \\lfloor \\sqrt{k} \\rfloor = na - \\frac{1}{3} a^3 - \\frac{1}{2}a^2 - \\frac{1}{6}a,\\ a = \\lfloor \\sqrt{n} \\rfloor\n\\end{align}\n$$\n\n$$\n\\begin{align}\n \\sum_{k=0}^{m-1} \\lfloor \\frac{nk+x}{m} \\rfloor = d \\lfloor \\frac{x}{d} \\rfloor + \\frac{(m-1)(n-1)}{2} + \\frac{d-1}{2} = \\sum_{k=0}^{n-1} \\lfloor \\frac{mk+x}{n} \\rfloor,\\ d = \\text{gcd}(m,n)\n\\end{align}\n$$\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "7060a88026a2c197de9ddf80c89f4d4ea62c76fd", "size": 2031, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notes/floor_ceil.ipynb", "max_stars_repo_name": "sogapalag/problems", "max_stars_repo_head_hexsha": "0ea7d65448e1177f8b3f81124a82d187980d659c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-04-04T14:56:12.000Z", "max_stars_repo_stars_event_max_datetime": "2020-04-04T14:56:12.000Z", "max_issues_repo_path": "notes/floor_ceil.ipynb", "max_issues_repo_name": "sogapalag/problems", "max_issues_repo_head_hexsha": "0ea7d65448e1177f8b3f81124a82d187980d659c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notes/floor_ceil.ipynb", "max_forks_repo_name": "sogapalag/problems", "max_forks_repo_head_hexsha": "0ea7d65448e1177f8b3f81124a82d187980d659c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 28.6056338028, "max_line_length": 219, "alphanum_fraction": 0.4726735598, "converted": true, "num_tokens": 469, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9324533013520765, "lm_q2_score": 0.8856314617436728, "lm_q1q2_score": 0.825809980284153}} {"text": "# Gleichungssystem\nL\u00f6sen Sie das folgende Gleichungssystem einmal ausschlie\u00dflich mit `sympy` Routinen und einmal mit `numpy`:\n\n$$\n5x+4y+3z = 2 \\\\\n2x+3y+z = 0 \\\\\n3x+3y+z = 2 \\\\\n$$\n\nGeben Sie die Ergebnisse f\u00fcr $x, y, z$ sinnvoll strukturiert am Bildschirm aus.\n\n\n```python\nfrom sympy import Eq, solve\nfrom sympy.abc import x, y, z\n\nsystem = [\n Eq(5*x + 4*y + 3*z, 2),\n Eq(2*x + 3*y + z, 0),\n Eq(3*x + 3*y + z, 2)\n]\nsolution = solve(system, [x, y, z])\nfor k, v in solution.items():\n print(f'{k} = {float(v)}')\n```\n\n x = 2.0\n y = -0.8\n z = -1.6\n\n\n\n```python\nimport numpy as np\nsystem = np.array(\n [\n [5, 4, 3],\n [2, 3, 1],\n [3, 3, 1]\n ]\n)\nsolution = np.linalg.solve(system, np.array([2, 0, 2]))\nfor k, v in zip('xyz', solution):\n print(f'{k} = {round(v, 1)}')\n```\n\n x = 2.0\n y = -0.8\n z = -1.6\n\n", "meta": {"hexsha": "8fa4680e4e79e5ba998937b6203abc05a2dfe8eb", "size": 2220, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Daniel_Malzl_Aufgabe2.ipynb", "max_stars_repo_name": "dmalzl/mathcode", "max_stars_repo_head_hexsha": "6a22ad0b2f193e0b7fa3926a65a6a0791f2e2366", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Daniel_Malzl_Aufgabe2.ipynb", "max_issues_repo_name": "dmalzl/mathcode", "max_issues_repo_head_hexsha": "6a22ad0b2f193e0b7fa3926a65a6a0791f2e2366", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Daniel_Malzl_Aufgabe2.ipynb", "max_forks_repo_name": "dmalzl/mathcode", "max_forks_repo_head_hexsha": "6a22ad0b2f193e0b7fa3926a65a6a0791f2e2366", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 21.5533980583, "max_line_length": 115, "alphanum_fraction": 0.4567567568, "converted": true, "num_tokens": 353, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9648551535992067, "lm_q2_score": 0.8558511414521923, "lm_q1q2_score": 0.8257723845439114}} {"text": "```python\nfrom sympy import *\n```\n\n\n```python\ne1, p1, e2, p2, m = symbols(\"E_1 p_1 E_2 p_2 m\",real=True, positive=True)\n```\n\nWe work in 1-dimensional coordinates with $p = p_z$. We generally use $m = m_p = m_n$, in other words we neglect the mass difference between protons and neutrons. 
We consider a proton projectile and a nuclear target with $A$ nucleons.\n\nWe first compute the cms energy $\\sqrt{s_{nn}}$ in the nucleon-nucleon system from the fixed target system.\n\nIn this system, we have these 4-vectors:\n* projectile $(E, p)$\n* target $(m, 0)$\n\n\n```python\ns2 = (e1 + m)**2 - (p1 + 0)**2; s2\n```\n\n\n\n\n$\\displaystyle - p_{1}^{2} + \\left(E_{1} + m\\right)^{2}$\n\n\n\n\n```python\ns2 = s2.subs(e1, sqrt(p1 ** 2 + m ** 2)); s2\n```\n\n\n\n\n$\\displaystyle - p_{1}^{2} + \\left(m + \\sqrt{m^{2} + p_{1}^{2}}\\right)^{2}$\n\n\n\n\n```python\ns2.series(m, 0, 2)\n```\n\n\n\n\n$\\displaystyle 2 m p_{1} + O\\left(m^{2}\\right)$\n\n\n\n\n```python\ns = sqrt((e1 + m)**2 - (p1 + 0)**2); s\n```\n\n\n\n\n$\\displaystyle \\sqrt{- p_{1}^{2} + \\left(E_{1} + m\\right)^{2}}$\n\n\n\n\n```python\ns = s.subs(e1, sqrt(p1 ** 2 + m ** 2)); s\n```\n\n\n\n\n$\\displaystyle \\sqrt{- p_{1}^{2} + \\left(m + \\sqrt{m^{2} + p_{1}^{2}}\\right)^{2}}$\n\n\n\n\n```python\ns.series(m, 0, 2)\n```\n\n\n\n\n$\\displaystyle \\sqrt{2} \\sqrt{m} \\sqrt{p_{1}} + \\frac{\\sqrt{2} m^{\\frac{3}{2}}}{2 \\sqrt{p_{1}}} + O\\left(m^{2}\\right)$\n\n\n\n\n```python\ndef sqrt_snn(energy):\n mass = (938.27 + 939.57) * 0.5e-9 # nucleon mass in PeV\n return sqrt(2 * mass * energy) * 1e3 # sqrt(sNN) in TeV\n```\n\n\n```python\n0.5 * (sqrt_snn(320000+90000) - sqrt_snn(320000-90000))\n```\n\n\n\n\n$\\displaystyle 110.127117815583$\n\n\n\n\n```python\ndef beam_energy(sqrt_snn):\n mass = (938.27 + 939.57) * 0.5e-3 # nucleon mass in GeV\n return (sqrt_snn * 1e3) ** 2 / (2 * mass) / 1e6 # beam in PeV\n```\n\n\n```python\nbeam_energy(14)\n```\n\n\n\n\n 104.37523963702976\n\n\n", "meta": {"hexsha": "c45cca845436aadfd651c1105d52e93419c9eb25", "size": 5543, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "From fixed target to sqrt_s_nn and back.ipynb", "max_stars_repo_name": "HDembinski/essays", "max_stars_repo_head_hexsha": "a3070c10c6ca2a9c4b24eb89cba5ec5518e085dd", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 17, "max_stars_repo_stars_event_min_datetime": "2020-04-07T07:29:46.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-04T18:09:31.000Z", "max_issues_repo_path": "From fixed target to sqrt_s_nn and back.ipynb", "max_issues_repo_name": "HDembinski/essays", "max_issues_repo_head_hexsha": "a3070c10c6ca2a9c4b24eb89cba5ec5518e085dd", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 8, "max_issues_repo_issues_event_min_datetime": "2020-04-08T11:31:52.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-25T02:01:34.000Z", "max_forks_repo_path": "From fixed target to sqrt_s_nn and back.ipynb", "max_forks_repo_name": "HDembinski/essays", "max_forks_repo_head_hexsha": "a3070c10c6ca2a9c4b24eb89cba5ec5518e085dd", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-05-16T14:35:06.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-01T05:53:21.000Z", "avg_line_length": 21.1564885496, "max_line_length": 243, "alphanum_fraction": 0.461122136, "converted": true, "num_tokens": 722, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9465966671870766, "lm_q2_score": 0.8723473697001441, "lm_q1q2_score": 0.825761112787569}} {"text": "Initialize printing:\n\n\n```\n%pylab inline\nfrom sympy.interactive.printing import init_printing\ninit_printing()\n```\n\n \n Welcome to pylab, a matplotlib-based Python environment [backend: module://IPython.zmq.pylab.backend_inline].\n For more information, type 'help(pylab)'.\n\n\n# Poisson Equation\n\nRadial Poisson equation is\n$$\n\\phi''(r)+{2\\over r} \\phi'(r) = -4\\pi\\rho(r).\n$$\nAlternatively, this can also be written as:\n$$\n{1\\over r}(r \\phi(r))'' = -4\\pi\\rho(r).\n$$\n### Example I\n\nPositive Gaussian charge density:\n\n\n```\nfrom sympy import var, exp, pi, sqrt, integrate, refine, Q, oo, Symbol, DiracDelta\nvar(\"r alpha Z\")\nalpha = Symbol(\"alpha\", positive=True)\nrho = Z * (alpha / sqrt(pi))**3 * exp(-alpha**2 * r**2)\nrho\n```\n\nThe total charge is $Z$:\n\n\n```\nintegrate(4*pi*rho*r**2, (r, 0, oo))\n```\n\nSolve for $\\phi(r)$ from the Poisson equation:\n\n\n```\nphi = integrate(-4*pi*rho*r, r, r)/r\nphi\n```\n\nTell SymPy that $\\alpha$ is positive:\n\n\n```\nphi = refine(phi, Q.positive(alpha))\nphi\n```\n\nPlot the charge:\n\n\n```\nfrom sympy import plot\n```\n\n\n```\nplot(rho.subs({Z: 1, alpha: 0.1}), (r, 0, 100));\n```\n\nPlot the potential\n\n\n```\nplot(phi.subs({Z: 1, alpha: 0.1}), 1/r, (r, 0, 100), ylim=[0, 0.12]);\n```\n\n\n```\n\n```\n", "meta": {"hexsha": "643187d55934abb07a78ee72efc7bc9071c58aca", "size": 37592, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "tutorial_exercises/Poisson equation.ipynb", "max_stars_repo_name": "certik/scipy-2013-tutorial", "max_stars_repo_head_hexsha": "26a1cab3a16402afdc20088cedf47acd9bc58483", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 23, "max_stars_repo_stars_event_min_datetime": "2015-02-28T08:53:05.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-05T05:37:59.000Z", "max_issues_repo_path": "sympy/Poisson equation.ipynb", "max_issues_repo_name": "certik/scipy-in-13", "max_issues_repo_head_hexsha": "418c139ab6e1b0c9acd53e7e1a02b8b930005096", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-04-17T15:05:46.000Z", "max_issues_repo_issues_event_max_datetime": "2021-04-17T15:05:46.000Z", "max_forks_repo_path": "sympy/Poisson equation.ipynb", "max_forks_repo_name": "certik/scipy-in-13", "max_forks_repo_head_hexsha": "418c139ab6e1b0c9acd53e7e1a02b8b930005096", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 14, "max_forks_repo_forks_event_min_datetime": "2015-03-11T00:25:21.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-25T14:52:40.000Z", "avg_line_length": 146.2723735409, "max_line_length": 15716, "alphanum_fraction": 0.8747605874, "converted": true, "num_tokens": 410, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9465966702001757, "lm_q2_score": 0.8723473630627235, "lm_q1q2_score": 0.8257611091330779}} {"text": "---\nauthor: Nathan Carter (ncarter@bentley.edu)\n---\n\nThis answer assumes you have imported SymPy as follows.\n\n\n```python\nfrom sympy import * # load all math functions\ninit_printing( use_latex='mathjax' ) # use pretty math output\n```\n\nIn SymPy, we tend to work with formulas (that is, mathematical expressions)\nrather than functions (like $f(x)$). 
So if we wish to compute the\nderivative of $f(x)=10x^2-16x+1$, we will focus on just the $10x^2-16x+1$ portion.\n\n\n```python\nvar( 'x' )\nformula = 10*x**2 - 16*x + 1\nformula\n```\n\n\n\n\n$\\displaystyle 10 x^{2} - 16 x + 1$\n\n\n\nWe can compute its derivative by using the `diff` function.\n\n\n```python\ndiff( formula )\n```\n\n\n\n\n$\\displaystyle 20 x - 16$\n\n\n\nIf it had been a multi-variable function, we would need to specify the\nvariable with respect to which we wanted to compute a derivative.\n\n\n```python\nvar( 'y' ) # introduce a new variable\nformula2 = x**2 - y**2 # consider the formula x^2 + y^2\ndiff( formula2, y ) # differentiate with respect to y\n```\n\n\n\n\n$\\displaystyle - 2 y$\n\n\n\nWe can compute second or third derivatives by repeating the variable\nwith respect to which we're differentiating. To do partial derivatives,\nuse multiple variables.\n\n\n```python\ndiff( formula, x, x ) # second derivative with respect to x\n```\n\n\n\n\n$\\displaystyle 20$\n\n\n\n\n```python\ndiff( formula2, x, y ) # mixed partial derivative\n```\n\n\n\n\n$\\displaystyle 0$\n\n\n", "meta": {"hexsha": "7c676fffbfe982763f698abf44542a7abd841c57", "size": 4099, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "database/tasks/How to compute the derivative of a function/Python, using SymPy.ipynb", "max_stars_repo_name": "nathancarter/how2data", "max_stars_repo_head_hexsha": "7d4f2838661f7ce98deb1b8081470cec5671b03a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "database/tasks/How to compute the derivative of a function/Python, using SymPy.ipynb", "max_issues_repo_name": "nathancarter/how2data", "max_issues_repo_head_hexsha": "7d4f2838661f7ce98deb1b8081470cec5671b03a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "database/tasks/How to compute the derivative of a function/Python, using SymPy.ipynb", "max_forks_repo_name": "nathancarter/how2data", "max_forks_repo_head_hexsha": "7d4f2838661f7ce98deb1b8081470cec5671b03a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-07-18T19:01:29.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-29T06:47:11.000Z", "avg_line_length": 21.6878306878, "max_line_length": 99, "alphanum_fraction": 0.5101244206, "converted": true, "num_tokens": 393, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9504109756113862, "lm_q2_score": 0.8688267711434708, "lm_q1q2_score": 0.8257424991997567}} {"text": "# Gaussian Mixture Models\n\n\nA Gaussian mixture model (GMM) is a density model where we combine a finite number of K Gaussian distributions $\\mathcal{N}(x|\\mu_k,\\Sigma_k)$ so that the probability density of a random variable $x$ is expressed as:\n\n$$ p(x|\\theta) = \\sum_{k=1}^K \\pi_k \\mathcal{N}(x|\\mu_k,\\Sigma_k) \\\\\n0 \\le \\pi_k \\le 1, \\sum_{k=1}^{K}{\\pi_k} = 1$$\nwhere $\\theta:=\\{\\mu_k,\\Sigma_k,\\pi_k:k=1,2,\\dots,K\\}$ is the collection of all parameters of the model.\n\nMixture models allow relatively complex marginal distributions to be expressed interms of more tractable joint distributions over an expanded space by the inclusion of latent (or unobserved) variables. 
In addition to providing a framework for building complex probability distributions, mixture models can be used to cluster data.\n\n\n## Theory\nIn density estimation, we estimate data compactly using a density from a parametric family of distributions. Typically, a Gaussian, this choice might make for a poor approximation, in which case a more expressive model to consider would be mixture models.\n\nGaussian Mixture models have the advantage to\n- enable multi-modal data representations with multiple clusters.\n- compute the parameters, $\\theta$ computed/learned from data via a maximum likelihood approach using the Expectation-Maximization algorithm.\n\nGiven a dataset for random variable $x$, we introduce a $K$ dimensional binary random variable $z$ having a 1-of-$K$ representation in which a particular element $z_k = 1$ with $z_i = 0 \\text{ } \\forall i\\ne k$ i.e., $\\sum{z_k} = 1$.\nThe marginal distribution $p(z,x)$ is defined as $ p(z,x) = p(x|z)\\cdot p(z)$.\n\nIf $p(z_k = 1) = \\pi_k$ and $p(x|z_k=1) = \\mathcal{N}(x|\\mu_k,\\Sigma_k)$ then, $$\\begin{align} p(z) &= \\prod_{k=1}^K \\pi_k^{z_k} \\\\\np(x|z) &= \\prod_{k=1}^K \\Bigg(\\mathcal{N}(x|\\mu_k,\\Sigma_k)\\Bigg)^{z_k} \\end{align}$$\n\nThe marginal distribution of $x$ is therefore,\n\n$$ p(x) = \\sum_z p(x,z) = \\sum_z p(z) p(x|z) = \\sum_{k=1}^K{\\pi_k \\mathcal{N}(x|\\mu_k,\\Sigma_k) }$$\n\n### Maximum Likelihood Approach\nIn the maximum likelihood (MLE) approach, we wish to model a dataset of observations $\\{x_1,x_2,\\dots,x_N\\}$ using a mixture of Gaussians. In the MLE approach, we seek to compute the parameters $\\pi$,$\\mu$,$\\Sigma$ of each of the $K$ Gaussians per the graphical model below.\n\n\n\nThe log-likelihood for the entire dataset is expressed as:\n\n$$ \\ln{p(X|\\pi,\\mu,\\Sigma)} = \\sum_{n=1}^N {\\ln {\\Bigg(\\sum_{k=1}^K {\\pi_k \\mathcal{N}(x|\\mu_k,\\Sigma_k)}\\Bigg)}} $$\n\nTo determine the optimal MLE parameters, we seek to compute $\\pi$,$\\mu$,$\\Sigma$ which maximize the likelihood of the data. 
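Evaluating this log-likelihood for a given parameter guess is straightforward; below is a minimal sketch (the data `X`, the component weights, means and covariances are all hypothetical placeholders, and `scipy.stats.multivariate_normal` is used for the Gaussian densities):\n\n\n```python\nimport numpy as np\nfrom scipy.stats import multivariate_normal\n\ndef gmm_log_likelihood(X, pis, mus, sigmas):\n    # X: (N, D) data; pis, mus, sigmas: the K mixture weights, means, covariances\n    densities = np.column_stack([\n        pi_k * multivariate_normal.pdf(X, mean=mu_k, cov=sigma_k)\n        for pi_k, mu_k, sigma_k in zip(pis, mus, sigmas)\n    ])  # shape (N, K): pi_k * N(x_n | mu_k, Sigma_k)\n    return np.sum(np.log(densities.sum(axis=1)))\n\n# Hypothetical two-component example in two dimensions\nX = np.random.randn(100, 2)\nprint(gmm_log_likelihood(X, pis=[0.5, 0.5],\n                         mus=[np.zeros(2), np.ones(2)],\n                         sigmas=[np.eye(2), np.eye(2)]))\n```\n\n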
Note however that this term is intractable due to the presence of the summation inside the logarithm, so that the logarithm no longer directly acts on the Gaussians.\n\n### Expectation Maximization for GMM\nAn elegant and powerful method for finding MLE solutions for models with latent variables is the EM algorithm.\n\nAt maximum likelihood, the derivative of the log-likelihood w.r.t each of the parameters is zero.\n\n#### MLE of $\\mu$\n$$ \\begin{align} \n0 &= \\frac{d}{d \\mu_k} \\ln{p(X|\\pi,\\mu,\\Sigma)} \\\\\n&= -\\sum_{n=1}^N {\\frac{\\pi_k \\mathcal{N}(x_n|\\mu_k,\\Sigma_k)}{\\sum_{k=1}^K {\\pi_k \\mathcal{N}(x|\\mu_k,\\Sigma_k)}}\\Sigma_k (x_n - \\mu_k)}\\\\\n&= -\\sum_{n=1}^N{\\gamma(z_{nk})\\Sigma_k (x_n - \\mu_k)}\n\\end{align}$$\n\nwhere $\\gamma(z_{nk}) = \\frac{\\pi_k \\mathcal{N}(x_n|\\mu_k,\\Sigma_k)}{\\sum_{k=1}^K {\\pi_k \\mathcal{N}(x|\\mu_k,\\Sigma_k)}}$ defined as the `responsibility that component` $k$ `takes for explaining the observation` $x$, i.e., $\\gamma(z_{nk}) = p(z_k=1|x_n)$ is the posterior probability of the latent variable.\n\nThe MLE estimate of $\\mu_k$ is therefore \n\n$$\\mu_k = \\frac{\\sum_{n=1}^N{\\gamma(z_{nk}) x_n}}{\\sum_{n=1}^N{\\gamma(z_{nk})}} = \\frac{1}{N_k}\\sum_{n=1}^N{\\gamma(z_{nk}) x_n}$$\nwhere $N_k = \\sum_{n=1}^N{\\gamma(z_{nk})}$ is the effective number of points assigned to the cluster $k$.\n\n#### MLE of $\\Sigma$\n$$ \\begin{align} \n0 &= \\frac{d}{d \\Sigma_k} \\ln{p(X|\\pi,\\mu,\\Sigma)} \\\\\n&= \\frac{d}{d \\Sigma_k} |\\Sigma_k| -\\sum_{n=1}^N {\\frac{\\pi_k \\mathcal{N}(x_n|\\mu_k,\\Sigma_k)}{\\sum_{k=1}^K {\\pi_k \\mathcal{N}(x|\\mu_k,\\Sigma_k)}} (x_n - \\mu_k)(x_n - \\mu_k)^T}\\\\\n&= \\Sigma_k -\\sum_{n=1}^N{\\gamma(z_{nk}) (x_n - \\mu_k)(x_n - \\mu_k)^T}\n\\end{align}$$\n\nNote that the $ \\frac{d}{dA} \\ln{\\left(\\det{A}\\right)^{-1}} = A$. More details on this identity can be found [here](https://statisticaloddsandends.wordpress.com/2018/05/24/derivative-of-log-det-x/).\n\nThe MLE estimate of $\\Sigma_k$ is therefore \n\n$$\\Sigma_k = \\frac{1}{N_k}\\sum_{n=1}^N{\\gamma(z_{nk}) (x_n - \\mu_k)(x_n - \\mu_k)^T}$$\n\n#### MLE of $\\pi$\nTo compute the MLE of $\\pi$, the constraint $\\sum_k \\pi_k = 1$ has to be accounted for. This can be done with a lagrange multipler $\\lambda$ and maximizing the quantity $\\ln{p(X|\\pi,\\mu,\\Sigma)} + \\lambda \\sum_k{\\pi_k} - 1$.\n\n$$ \\begin{align} \n0 &= \\frac{d}{d \\pi_k} \\Bigg(\\ln{p(X|\\pi,\\mu,\\Sigma)} + \\lambda \\sum_k{\\pi_k} - 1 \\Bigg)\\\\\n&= \\sum_{n=1}^N {\\frac{\\mathcal{N}(x_n|\\mu_k,\\Sigma_k)}{\\sum_{k=1}^K {\\pi_k \\mathcal{N}(x|\\mu_k,\\Sigma_k)}} + \\lambda} \\end{align} $$\n\nMultiplying both sides by $\\pi_k$ and summing over k,\n\n$$ \\begin{align} 0 &= \\sum_{n=1}^N {\\sum_{k=1}^K{\\frac{\\pi_k \\mathcal{N}(x_n|\\mu_k,\\Sigma_k)}{\\sum_{k=1}^K {\\pi_k \\mathcal{N}(x|\\mu_k,\\Sigma_k)}}}} + \\sum_{k=1}^K {\\pi_k \\lambda} \\\\\n&= \\sum_{n=1}^N {N_k} + \\lambda \\\\\n\\Rightarrow \\lambda &= -\\sum_{n=1}^N {N_k} = -N\n\\end{align}$$\n\nSubstituting for $\\lambda$ above, the MLE of $\\pi_k$ is derived as:\n\n$$\\pi_k = \\frac{N_k}{N}$$\n\nThe above solutions do not constitute a closed-form solution for the parameters of the mixture model because of the dependence of the responsibilities $\\gamma(z_{nk})$ on the same parameters. 
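As a compact illustration of how these update equations translate to code, here is a hedged, vectorised NumPy sketch of a single E step and M step (a simplified stand-in assuming data `X` of shape `(N, D)`; it is deliberately not the `E_Step`/`M_step` implementation used in the example below):\n\n\n```python\nimport numpy as np\nfrom scipy.stats import multivariate_normal\n\ndef e_step(X, pis, mus, sigmas):\n    # gamma[n, k] proportional to pi_k * N(x_n | mu_k, Sigma_k), normalised over k\n    gamma = np.column_stack([\n        pi_k * multivariate_normal.pdf(X, mean=mu_k, cov=sigma_k)\n        for pi_k, mu_k, sigma_k in zip(pis, mus, sigmas)\n    ])\n    return gamma / gamma.sum(axis=1, keepdims=True)\n\ndef m_step(X, gamma):\n    N, K = gamma.shape\n    Nk = gamma.sum(axis=0)              # effective number of points per component\n    pis = Nk / N                        # pi_k = N_k / N\n    mus = (gamma.T @ X) / Nk[:, None]   # mu_k = (1/N_k) sum_n gamma_nk x_n\n    sigmas = []\n    for k in range(K):\n        d = X - mus[k]                  # centred data, shape (N, D)\n        sigmas.append((gamma[:, k, None] * d).T @ d / Nk[k])\n    return pis, mus, sigmas\n```\n\n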
An iterative scheme of these steps constitutes the EM algorithm can be applied to solve for the parameters.\n\nEach iteration of the EM algorithm involves two updates - `the E step and the M step`.\n- In the `E step`, we use the current values of the parameters to evaluate the responsibilities.\n- In the `M step`, we maximize the parameters with respect to the responsibilities computed in the `E step`.\n\n\n## Example\nThe palmer penguins dataset released by [4] and obtained from [5] is used as an example. Two features - Flipper Length & Culmen Length are used as the features to cluster the dataset into the 3 categories of penguins - Adelie, Chinstrap and Gentoo. The dataset is plotted below.\n\n\n```python\nimport pandas as pd\nimport requests\nimport io\nimport numpy as np\nfrom scipy.stats import multivariate_normal as gaussian\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\nfrom matplotlib.patches import Ellipse\nimport matplotlib.transforms as transforms\nimport matplotlib.colors as mcolors\n\ndef getCSV(url):\n download = requests.get(url).content\n df = pd.read_csv(io.StringIO(download.decode('utf-8')))\n return df\n\nfile = \"https://raw.githubusercontent.com/mcnakhaee/palmerpenguins/master/palmerpenguins/data/penguins-raw.csv\"\ndf = getCSV(file)\n```\n\n\n```python\ntxt_labels = np.unique(df['Species'])\nlbl = txt_labels[0]\nfig,ax = plt.subplots(1,2,figsize=(10,5))\ndf_data = [None]*len(txt_labels)\nimg = mpimg.imread('../../img/lter_penguins.png')\nax[0].imshow(img)\nax[0].axis('off')\ncolor = ['tomato','mediumorchid','seagreen','aqua','black','magenta']\nfor i,lbl in enumerate(txt_labels):\n df_data[i] = df[df['Species'] == lbl]\n# print(df_data[i].columns)\n ax[1].scatter(df_data[i]['Flipper Length (mm)'],df_data[i]['Culmen Length (mm)'],color=color[i])\n# ax[1].axis('off')\nax[1].set_xlabel('Flipper Length');\nax[1].set_ylabel('Culmen Length');\n```\n\n\n```python\n## Number of classes\nK = 3#len(txt_labels)\n\nflp_len = np.mean(df['Flipper Length (mm)'])\nclm_len = np.mean(df['Culmen Length (mm)'])\n\ndf = df[df['Flipper Length (mm)'].notna()]\ndf = df[df['Culmen Length (mm)'].notna()]\ndata = np.matrix(np.c_[df['Flipper Length (mm)'],df['Culmen Length (mm)']].T)\n# print(data)\n\nx_mean = np.array([flp_len,clm_len])\nd = data - np.reshape(x_mean,(2,1))\ncov = np.matmul(d,d.T)/float(data.shape[1])\n## Init\npi_init = [1/float(K) for i in range(K)]#[float(df_i.shape[0])/float(df.shape[0]) for df_i in df_data]\nmu_init = [np.ravel(data[:,k]) for k in range(K)]\n# sigma_init = [np.eye(x_mean.shape[0]) for k in range(K)]\nsigma_init = [cov for k in range(K)]\n```\n\n\n```python\ndef E_Step(X,pi,mu,sigma):\n gamma = np.zeros((X.shape[1],K))\n for n in range(X.shape[1]):\n tot = 0.0\n for k in range(K):\n gamma[n,k] = pi[k]*gaussian.pdf(X[:,n].T,mean=mu[k],cov=sigma[k])\n tot = tot + gamma[n][k]\n gamma[n,:] = gamma[n,:]/tot\n return gamma\n\ndef M_step(X,gamma):\n Nk = [0 for i in range(K)]\n pi = [0 for i in range(K)]\n for k in range(K):\n Nk[k] = np.sum(gamma[:,k])\n pi[k] = Nk[k]/float(X.shape[1])\n mu_mat = np.matmul(X,gamma)/Nk\n mu = []\n for k in range(K):\n mu.append(np.array([mu_mat[0,k],mu_mat[1,k]]))\n sigma = [np.zeros((2,2)) for i in range(K)]\n for k in range(K):\n for n in range(X.shape[1]):\n del_x = X[:,n] - np.reshape(mu[k],(X.shape[0],1))\n cov = np.matmul(del_x,del_x.T)\n sigma[k] = sigma[k] + gamma[n,k]/Nk[k]*cov\n \n return mu,sigma,pi\n\ndef getLogLikelihood(X,pi,mu,sigma):\n logLikelihood = 0\n for n in range(X.shape[1]):\n prob = 0\n 
for k in range(K):\n if np.linalg.det(sigma[k]) < 1E-6:\n print(sigma[k])\n continue\n prob = prob + pi[k]*gaussian.pdf(X[:,n].T,mean=mu[k].T,cov=sigma[k])\n logLikelihood = logLikelihood + np.log(prob)\n return logLikelihood\n\ndef EM(X,pi0,mu0,sigma0,iter_max = 250,tol=1E-6):\n pi_curr = pi0\n mu_curr = mu0\n sigma_curr = sigma0\n iter = 0\n post_prob = []\n pi = pi0.copy()\n mu = mu0.copy()\n sigma = sigma0.copy()\n while iter < iter_max:\n max_del = np.finfo('f').min\n iter = iter + 1\n gamma = E_Step(X,pi,mu,sigma)\n mu1,sigma1,pi1 = M_step(X,gamma)\n# print(pi)\n# print(mu)\n# print(sigma)\n prob = 0\n for k in range(len(mu)):\n del_mu = np.max(np.abs(mu[k]-mu1[k]))\n del_pi = np.max(np.abs(pi[k]-pi1[k]))\n del_sig = np.max(np.max(np.abs(sigma[k]-sigma1[k])))\n max_del = max(max_del,max(del_mu,max(del_pi,del_sig)))\n pi = pi1.copy()\n mu = mu1.copy()\n sigma = sigma1.copy()\n pi_curr = pi\n mu_curr = mu\n sigma_curr = sigma\n post_prob.append(getLogLikelihood(X,pi,mu,sigma))\n# print(iter,max_del)\n if (max_del <= tol):\n print(\"Converged after Iteration: \" + str(iter))\n break\n \n return mu,sigma,pi,post_prob\n\ndef confidence_ellipse(ax, mu, cov, n_std=3.0, facecolor='none', **kwargs):\n \"\"\"\n Create a plot of the covariance confidence ellipse of `x` and `y`\n\n Parameters\n ----------\n cov : Covariance matrix\n Input data.\n\n ax : matplotlib.axes.Axes\n The axes object to draw the ellipse into.\n\n n_std : float\n The number of standard deviations to determine the ellipse's radiuses.\n\n Returns\n -------\n matplotlib.patches.Ellipse\n\n Other parameters\n ----------------\n kwargs : `~matplotlib.patches.Patch` properties\n \"\"\"\n# if cov != cov.T:\n# raise ValueError(\"Not a valid covariance matrix\")\n\n# cov = np.cov(x, y)\n pearson = cov[0, 1]/np.sqrt(cov[0, 0] * cov[1, 1])\n # Using a special case to obtain the eigenvalues of this\n # two-dimensionl dataset.\n ell_radius_x = np.sqrt(1 + pearson)\n ell_radius_y = np.sqrt(1 - pearson)\n ellipse = Ellipse((0, 0),\n width=ell_radius_x * 2,\n height=ell_radius_y * 2,\n facecolor=facecolor,\n **kwargs)\n\n # Calculating the stdandard deviation of x from\n # the squareroot of the variance and multiplying\n # with the given number of standard deviations.\n scale_x = np.sqrt(cov[0, 0]) * n_std\n mean_x = mu[0]\n\n # calculating the stdandard deviation of y ...\n scale_y = np.sqrt(cov[1, 1]) * n_std\n mean_y = mu[1]\n\n transf = transforms.Affine2D() \\\n .rotate_deg(45) \\\n .scale(scale_x, scale_y) \\\n .translate(mean_x, mean_y)\n\n ellipse.set_transform(transf + ax.transData)\n return ax.add_patch(ellipse)\n```\n\n\n```python\nmu,sigma,pi,post_prob = EM(data,pi_init,mu_init,sigma_init)\n```\n\n Converged after Iteration: 110\n\n\n\n```python\nfig,ax = plt.subplots(1,3,figsize=(20,5))\ngamma = E_Step(data,pi,mu,sigma)\nfor n in range(data.shape[1]):\n rgb = np.array([0,0,0])\n for k in range(K):\n rgb = rgb+gamma[n,k]*np.array(mcolors.to_rgb(color[k]))\n ax[1].scatter(data[0,n],data[1,n],color=rgb)\nax[1].set_title('Classification as a function of responsibilities')\nfor k in range(K):\n if k < 3:\n ax[0].scatter(df_data[k]['Flipper Length (mm)'],df_data[k]['Culmen Length (mm)'],color=color[k])\n ax[0].plot(mu[k][0],mu[k][1],'kx')\n for i in range(3):\n confidence_ellipse(ax[0],mu[k],sigma[k],i+1,edgecolor=color[k],linestyle='dashed')\n \nax[0].set_title('Confidence bounds of the clusters from EM')\nax[0].set_xlabel('Flipper Length (mm)');\nax[0].set_ylabel('Culmen Length 
(mm)');\n\nax[2].plot(range(len(post_prob)),post_prob);\nax[2].set_title('Learning curve');\nax[2].set_ylabel('Log Likehood');\nax[2].set_xlabel('Iteration Number');\n```\n\n## References\n[1]: Bishop, Christopher M. 2006. Pattern Recognition and Machine Learning. Springer.\n\n[2]: M. P. Deisenroth, A. A. Faisal, and C. S. Ong, 2021. https://mml-book.com \n\n[3]: Refer to https://statisticaloddsandends.wordpress.com/2018/05/24/derivative-of-log-det-x/ for an explanation of identity for derivative of log of a matrix determinant \n\n[4]: Horst AM, Hill AP, Gorman KB (2020). palmerpenguins: Palmer Archipelago (Antarctica) penguin data. R package version 0.1.0. https://allisonhorst.github.io/palmerpenguins/.\n\n[5]: CSV data downloaded from https://github.com/mcnakhaee/palmerpenguins\n\n[6]: Code for plotting confidence ellipses from https://matplotlib.org/3.1.0/gallery/statistics/confidence_ellipse.html\n", "meta": {"hexsha": "96a386aed26b2c5c6b406299acf2a767d374f015", "size": 198652, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "files/DensityEstimation/GaussianMixtureModels.ipynb", "max_stars_repo_name": "chandrusuresh/MyNotes", "max_stars_repo_head_hexsha": "4e0f86195d6d9eb3168bfb04ca42120e9df17f0b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "files/DensityEstimation/GaussianMixtureModels.ipynb", "max_issues_repo_name": "chandrusuresh/MyNotes", "max_issues_repo_head_hexsha": "4e0f86195d6d9eb3168bfb04ca42120e9df17f0b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "files/DensityEstimation/GaussianMixtureModels.ipynb", "max_forks_repo_name": "chandrusuresh/MyNotes", "max_forks_repo_head_hexsha": "4e0f86195d6d9eb3168bfb04ca42120e9df17f0b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 386.4824902724, "max_line_length": 98936, "alphanum_fraction": 0.923066468, "converted": true, "num_tokens": 4426, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.927363293639213, "lm_q2_score": 0.8902942203004186, "lm_q1q2_score": 0.8256261804457513}} {"text": "# Worksheet 3\n\n\n```\n%matplotlib inline\n```\n\n## Question 1\n\nApply Simpson's rule to compute\n\n\\begin{equation}\n \\int_0^{\\pi/2} \\cos (x) \\, dx\n\\end{equation}\n\nusing 3 points (so $h = \\pi/4$) and 5 points (so $h = \\pi/8$).\n\n### Answer Question 1\n\nThe exact solution is, of course, 1.\n\nSimpson\u2019s rule (composite version) is\n\n\\begin{equation}\n I = \\frac{h}{3} \\left[ f(a) + f(b) + 2 \\sum_{j = 1}^{N/2 - 1} f(x_{2 j}) + 4 \\sum_{j = 1}^{N/2} f(x_{2 j-1}) \\right]\n\\end{equation}\n\nwhere we are using $N + 1$ points with $N$ even, with $x_0 = a, x_N = b$, equally spaced with grid spacing $h = (b \u2212 a)/N$.\n\nWith 3 points we have $N = 2$ and $h = (\\pi/2)/2 = \\pi/4$, and so we have nodes and samples given by\n\n\\begin{equation}\n \\begin{array}{c|c|c}\n i & x_i & f(x_i) \\\\ \\hline\n 0 & 0 & 1 \\\\\n 1 & \\pi/4 & 1 / \\sqrt{2} \\\\\n 2 & \\pi/2 & 0\n \\end{array}\n\\end{equation}\n\nUsing Simpson's rule we then get\n\n\\begin{align}\n I &= \\frac{h}{3} \\left[ f_0 + f_2 + 4 f_1 \\right] \\\\\n & = \\frac{\\pi}{12} \\left[ 1 + 2 \\sqrt{2} \\right] \\\\\n & \\approx 1.0023. \n\\end{align}\n\nWith 5 points we have $N = 4$ and $h = (\\pi/2)/4 = \\pi/8$, and so we have nodes and samples given by\n\n\\begin{equation}\n \\begin{array}{c|c|c}\n i & x_i & f(x_i) \\\\ \\hline\n 0 & 0 & 1 \\\\\n 1 & \\pi/8 & \\cos ( \\pi / 8 ) \\approx 0.9239 \\\\\n 2 & \\pi/4 & 1 / \\sqrt{2} \\\\\n 3 & 3\\pi/8 & \\cos ( 3 \\pi / 8 ) \\approx 0.9239 \\\\\n 4 & \\pi/2 & 0\n \\end{array}\n\\end{equation}\n\nUsing Simpson's rule we then get\n\n\\begin{align}\n I &= \\frac{h}{3} \\left[ f_0 + f_4 + 4 (f_1 + f_3) + 2 f_2 \\right] \\\\\n & = \\frac{\\pi}{24} \\left[ 1 + 4 \\left( \\cos ( \\pi / 8 ) + \\cos ( 3 \\pi / 8 ) \\right) + \\sqrt{2} \\right] \\\\\n & \\approx 1.00013. \n\\end{align}\n\n## Question 2\n\nApply Richardson extrapolation to the result above; does the answer improve?\n\n### Answer Question 2\n\nSimpson's rule has order of accuracy 4. We note that we have just computed the result using 3 ($N = 2$) and 5 ($N = 4$) points. Richardson extrapolation gives the result\n\n\\begin{align}\n R_4 &= \\frac{2^4 I_{N=4} - I_{N=2}}{2^4 - 1} \\\\\n &\\approx 0.999992.\n\\end{align}\n\nWe note that the error has gone from $2.3 \\times 10^{\u22123}$ for $I_{N=2}$ to $1.3 \\times 10^{\u22124}$ for $I_{N=4}$ and now to $8.4 \\times 10^{\u22126}$ for the Richardson extrapolation $R_4$, a good improvement.\n\n## Question 3\n\nState the rate of convergence of the trapezoidal rule and Simpson\u2019s rule, and sketch (or explain in words) the proof.\n\n### Answer Question 3\n\nFor the trapezoidal rule the error converges as $h^2$. For Simpson\u2019s rule the error converges as $h^4$.\n\nIn both cases the proof takes a similar path. Consider the quadrature over a single subinterval. Taylor series expand the quadrature rule about a suitable point $x_j$ (left edge for trapezoidal rule, centre for Simpson\u2019s rule) to get an expression for the quadrature of the interval in terms of $h$ and the function $f$ and its derivatives as evaluated at $x_j$.\n\nNext write down the anti-derivative $F(t)$ of $f$ for the interval as a function of the width of the interval $t$. This, when evaluated at $t = h$, is the exact solution for the quadrature of the subinterval. 
Taylor series expand $F$ about $t = 0$ to get an expression for the exact result in terms of $h$ and the function $f$ and its derivatives as evaluated at $x_j$.\n\nBy comparing the two expressions we have a bound on the error in terms of $h$ and derivatives of $f$. By summing over all intervals (note that at this stage we lose a power of $h$ as we have $N$ subintervals with $N \\propto h^{−1}$) we can bound the global error in terms of $h$ and the maximum value of a derivative of $f$.\n\n\n## Question 4\n\nExplain in words adaptive and Gaussian quadrature, in particular the aims of each and the times when one or the other is more useful.\n\n### Answer Question 4\n\nAdaptive quadrature uses any standard quadrature method and some error estimator, such as Richardson extrapolation, to place additional nodes wherever required to ensure that the error is less than some desired tolerance. Each subinterval is tested to ensure that its (appropriately weighted) contribution to the total error is sufficiently small. If it is not, the subinterval is further subdivided by introducing more nodes in a fashion appropriate for the quadrature method used. This is a straightforward way of getting high accuracy for low computational cost using standard quadrature algorithms.\n\nGaussian quadrature aims to get the best result for a *generic* function by allowing both the choice of nodes and weights to vary. The location of the nodes and the value of the weights is given by ensuring that the quadrature is exact for as many polynomials as possible; i.e., if we have $N$ nodes (and hence $N$ weights) we should be able to exactly integrate $x^s$ for $0 \\le s \\le 2 N − 1$. By introducing a weighting function we can also deal with integrands that are (mildly) singular at the boundaries of the domain, or unbounded domains. Provided the function can be evaluated anywhere this is an effective way of getting high accuracy with few function evaluations for most functions.\n\n## Question 5\n\nShow how the speed of convergence of a nonlinear root finding method depends on the derivatives of the map $g(x)$ near the fixed point $s$.\n\n### Answer Question 5\n\nWe assume we are constructing an iterative sequence $x_n$ where $x_{n+1} = g(x_n)$, and that the error at step $n$ is $e_n = x_n − s$. If we assume that the iterate $x_n$ is sufficiently close to the root $s$ then we can write\n\n\\begin{align}\n e_{n+1} &= x_{n+1} − s \\\\\n & = g(x_n) − g(s) \n\\end{align}\nwhich, on Taylor expanding $g$ about the fixed point $s$, gives\n\\begin{align}\n e_{n+1} & = g'(s) (x_n − s) + \\frac{g''(s)}{2!} (x_n − s)^2 + {\\mathcal O} \\left( (x_n − s)^3 \\right)\n\\end{align}\nand, writing $e_n = x_n − s$,\n\\begin{align}\n e_{n+1} & = g'(s) e_n + \\frac{g''(s)}{2!} e_n^2 + {\\mathcal O} \\left( e_n^3 \\right).\n\\end{align}\n\nHence if $g'(s) \\neq 0$ the error is reduced at each step by a constant factor proportional to the derivative, which is linear convergence. If the derivative does vanish at the fixed point, the error at each iteration is proportional to the square of the previous error, which leads to much faster (quadratic) convergence.\n\n## Question 6\n\nUse Newton's method to find the root in $[0, 1]$ of\n\n\\begin{equation}\n f(x) = \\sin (x) − e^x + 0.9 + x.\n\\end{equation}\n\nStart from $x_0 = 1/2$ and retain 3 significant figures. 
Take 3 steps.\n\n### Answer Question 6\n\nFor Newton's method we have\n\n\\begin{equation}\n x_{n+1} = x_n - \\frac{f(x_n)}{f'(x_n)}.\n\\end{equation}\n\nSo first we compute the derivative,\n\n\\begin{equation}\nf(x) = \\cos (x) \u2212 e^x + 1.\n\\end{equation}\n\nIt follows that the iterative scheme is given by\n\n\\begin{equation}\n x_{n+1} = x_n \u2212 \\frac{ \\sin (x_n) \u2212 e^{x_n} + 0.9 + x_n}{\\cos (x_n) \u2212 e^{x_n} + 1}.\n\\end{equation}\n\nWe start from $x_0 = 1/2$ and compute with full precision but only retain 3 significant figures for the values of the $x_n$:\n\n\\begin{align}\n x_1 & = x_0 \u2212 \\frac{ \\sin (x_0) \u2212 e^{x_0} + 0.9 + x_0}{\\cos (x_0) \u2212 e^{x_0} + 1} \\\\\n & \\approx -0.508;\n\\end{align}\n\nretaining 3 s.f. we set $x_1 = \u22120.508$, and find\n\n\\begin{align}\n x_2 & = x_1 \u2212 \\frac{ \\sin (x_1) \u2212 e^{x_1} + 0.9 + x_1}{\\cos (x_1) \u2212 e^{x_1} + 1} \\\\\n & \\approx 0.0393;\n\\end{align}\n\nretaining 3 s.f. we set $x_2 = 0.0393$, and find\n\n\\begin{align}\n x_3 & = x_2 \u2212 \\frac{ \\sin (x_2) \u2212 e^{x_2} + 0.9 + x_2}{\\cos (x_2) \u2212 e^{x_2} + 1} \\\\\n & \\approx 0.103.\n\\end{align}\n\nAfter 5 steps you would see, to 3 s.f., that it has converged to 0.106, so after 3 steps it does quite\nwell; a better approximation to the solution is $0.106022965\\dots$ .\n\n\n```\ndef Newton(f, df, x0, tolerance = 1e-10, MaxSteps = 100):\n \"\"\"Implementing Newton's method to solve f(x) = 0, where df is the derivative of f, starting from the guess x_0.\"\"\"\n \n import numpy as np\n \n x = np.zeros(MaxSteps)\n x[0] = x0\n \n # Set up the map g\n g = lambda x: x - f(x) / df(x)\n \n for i in range(1, MaxSteps):\n x[i] = g(x[i-1])\n if (np.absolute(f(x[i])) < tolerance):\n break\n return x[:i+1]\n\ndef fn_q6(x):\n \"\"\"Simple function defined in question, f(x) = sin(x) - e^x + 0.9 + x.\"\"\"\n \n import numpy as np\n \n return np.sin(x) - np.exp(x) + 0.9 + x\n\ndef d_fn_q6(x):\n \"\"\"Derivative of simple function defined in question, f(x) = sin(x) - e^x + 0.9 + x.\"\"\"\n \n import numpy as np\n \n return np.cos(x) - np.exp(x) + 1.0\n\n\nx = Newton(fn_q6, d_fn_q6, 0.5, tolerance = 1e-15)\nprint \"The root is approximately {} where f is after {} steps.\\n\".format(x[-1], fn_q6(x[-1]), len(x))\nprint \"The first three steps are {}\\n\".format(x[1:4])\nprint \"The fifth step is \", x[5]\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfig = plt.figure(figsize = (12, 8), dpi = 50)\nplt.semilogy(range(len(x)-1), np.absolute(x[:-1] - x[-1]), 'kx')\nplt.xlabel('Iteration', size = 16)\nplt.ylabel('$|x_i - x_{final}|$', size = 16)\n\nplt.show()\n```\n\n## Coding Question 1\n\nWrite a single function that, depending on an input argument, computes the integral of an input function $f(x)$ between the input arguments $a, b$, using either\n\n1. Simpson's rule, 3 points\n2. Trapezoidal rule, 3 points\n3. 
Gaussian Quadrature, 3 nodes.\n\nTest your code on\n\n\\begin{align}\n \\int_0^1 \\sin^2 ( \\pi x ) & = \\frac{1}{2}, \\\\\n \\int_0^1 e^{-x} \\sinh ( x ) d x & \\approx 0.283833821 \\\\\n \\int_0^1 \\frac{1}{\\sqrt{x}} d x & = 2.\n\\end{align}\n\nNote that there are good reasons for some of the methods to fail on the final test!\n\n### Answer Coding Question 1\n\n\n```\ndef integrate(f, a, b, method = 'Simpson'):\n \"\"\"Integrate a given function f over [a, b] using 3 points/nodes using one of three methods (Simpson, Trapezoidal, Gauss).\"\"\"\n import numpy as np\n \n if method == 'Simpson':\n h = (b - a) / 2.0\n I = h / 3.0 * (f(a) + f(b) + 4.0 * f( (a + b) / 2.0 ))\n elif method == 'Trapezoidal':\n h = (b - a) / 2.0\n I = h / 2.0 * (f(a) + f(b) + 2.0 * f( (a + b) / 2.0 ))\n elif method == 'Gauss':\n nodes = np.array([-np.sqrt(3.0/5.0), 0.0, np.sqrt(3.0/5.0)])\n weights = np.array([5.0/9.0, 8.0/9.0, 5.0/9.0])\n # Remap [-1, 1] to the given interval\n nodes = (nodes + 1.0) * (b - a) / 2.0 + a\n I = 0\n for i in range(len(nodes)):\n I += weights[i] * f(nodes[i])\n # Reweight\n I *= (b - a) / 2.0\n else:\n raise Exception(\"method parameter unknown: must be one of ['Simpson', 'Trapezoidal', 'Gauss']\")\n \n return I\n\ndef f1(x):\n \"\"\"First integrand\"\"\"\n import numpy as np\n \n return np.sin(np.pi * x)**2\n\ndef f2(x):\n \"\"\"Second integrand\"\"\"\n import numpy as np\n \n return np.exp(-x) * np.sinh(x)\n\ndef f3(x):\n \"\"\"Third integrand\"\"\"\n import numpy as np\n \n return 1.0 / np.sqrt(x)\n\n# Now look at the results\nimport numpy as np\n\nexact_solutions = [0.5, 0.283833821, 2.0]\n\nintegrand = 0\nfor i in [f1, f2, f3]:\n integrand +=1\n for m in ['Simpson', 'Trapezoidal', 'Gauss']:\n print \"For integrand number {} using method {} the result is {} (exact solution is {})\\n\".format(integrand, m, integrate(i, 0, 1, m), exact_solutions[integrand-1])\n\n```\n\n For integrand number 1 using method Simpson the result is 0.666666666667 (exact solution is 0.5 )\n \n For integrand number 1 using method Trapezoidal the result is 0.5 (exact solution is 0.5 )\n \n For integrand number 1 using method Gauss the result is 0.511227100236 (exact solution is 0.5 )\n \n For integrand number 2 using method Simpson the result is 0.282762246006 (exact solution is 0.283833821 )\n \n For integrand number 2 using method Trapezoidal the result is 0.266113229303 (exact solution is 0.283833821 )\n \n For integrand number 2 using method Gauss the result is 0.283839841028 (exact solution is 0.283833821 )\n \n For integrand number 3 using method Simpson the result is inf (exact solution is 2.0 )\n \n For integrand number 3 using method Trapezoidal the result is inf (exact solution is 2.0 )\n \n For integrand number 3 using method Gauss the result is 1.75086317797 (exact solution is 2.0 )\n \n\n\n -c:42: RuntimeWarning: divide by zero encountered in double_scalars\n\n\nThe results for the final integral will fail for those methods that evaluate the integral at $x = 0$.\n\n## Coding Question 2\n\nImplement the secant method to find the root of\n\n\\begin{equation}\n f(x) = \\tan (x) - e^{-x}, \\quad x \\in [0, 1].\n\\end{equation}\n\n### Answer Coding Question 2\n\n\n```\ndef Secant(f, x0, x1, tolerance = 1e-10, MaxSteps = 100):\n \"\"\"Implement the secant method to find the root of the equation f(x) = 0, starting from the initial guesses x^{(0,1)} = x0, x1\"\"\"\n \n import numpy as np\n \n x = np.zeros(MaxSteps)\n x[0] = x0\n x[1] = x1\n \n # There is no map!\n for i in range(2, MaxSteps):\n x[i] = 
x[i-1] - f(x[i-1]) * (x[i-1] - x[i-2]) / (f(x[i-1]) - f(x[i-2]))\n if (np.absolute(f(x[i])) < tolerance):\n break\n return x[:i+1]\n\n# Now define the function whose root is to be found\ndef fn_q2(x):\n \"\"\"Simple function defined in question, f(x) = tan(x) - exp(-x).\"\"\"\n \n import numpy as np\n \n return np.tan(x) - np.exp(-x)\n\n\nx = Secant(fn_q2, 0.0, 1.0)\nprint \"The root is approximately {} where f is {} after {} steps.\\n\".format(x[-1], fn_q2(x[-1]), len(x))\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfig = plt.figure(figsize = (12, 8), dpi = 50)\nplt.semilogy(range(len(x)-1), np.absolute(x[:-1] - x[-1]), 'kx')\nplt.xlabel('Iteration', size = 16)\nplt.ylabel('$|x_i - x_{final}|$', size = 16)\n\nplt.show()\n```\n\n\n```\nfrom IPython.core.display import HTML\ndef css_styling():\n styles = open(\"../../IPythonNotebookStyles/custom.css\", \"r\").read()\n return HTML(styles)\ncss_styling()\n```\n\n\n\n\n\n\n\n\n\n> (The cell above executes the style for this notebook. It closely follows the style used in the [12 Steps to Navier Stokes](http://lorenabarba.com/blog/cfd-python-12-steps-to-navier-stokes/) course.)\n", "meta": {"hexsha": "ff216e46b6df1341cfcdcca376de1c83eafe262c", "size": 60329, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Worksheets/Worksheet3_Notebook.ipynb", "max_stars_repo_name": "alistairwalsh/NumericalMethods", "max_stars_repo_head_hexsha": "fa10f9dfc4512ea3a8b54287be82f9511858bd22", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-12-01T09:15:04.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-01T09:15:04.000Z", "max_issues_repo_path": "Worksheets/Worksheet3_Notebook.ipynb", "max_issues_repo_name": "indranilsinharoy/NumericalMethods", "max_issues_repo_head_hexsha": "989e0205565131057c9807ed9d55b6c1a5a38d42", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Worksheets/Worksheet3_Notebook.ipynb", "max_forks_repo_name": "indranilsinharoy/NumericalMethods", "max_forks_repo_head_hexsha": "989e0205565131057c9807ed9d55b6c1a5a38d42", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-04-13T02:58:54.000Z", "max_forks_repo_forks_event_max_datetime": "2021-04-13T02:58:54.000Z", "avg_line_length": 81.3059299191, "max_line_length": 17511, "alphanum_fraction": 0.7457110179, "converted": true, "num_tokens": 4989, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8596637469145053, "lm_q2_score": 0.9603611650116028, "lm_q1q2_score": 0.825587677505054}} {"text": "# Numerical Evaluation of Integrals\n\n\n```python\n%matplotlib inline\n\nimport matplotlib.pyplot as plt\nimport numpy as np\n```\n\nIntegration problems are common in statistics whenever we are dealing with continuous distributions. 
For example the expectation of a function is an integration problem\n\n$$\nE[f(x)] = \\int{f(x) \\, p(x) \\, dx}\n$$\n\nIn Bayesian statistics, we need to solve the integration problem for the marginal likelihood or evidence\n\n$$\np(X \\mid \\alpha) = \\int{p(X \\mid \\theta) \\, p(\\theta \\mid \\alpha) d\\theta}\n$$\n\nwhere $\\alpha$ is a hyperparameter and $p(X \\mid \\alpha)$ appears in the denominator of Bayes theorem\n\n$$\np(\\theta | X) = \\frac{p(X \\mid \\theta) \\, p(\\theta \\mid \\alpha)}{p(X \\mid \\alpha)}\n$$\n\nIn general, there is no closed form solution to these integrals, and we have to approximate them numerically. The first step is to check if there is some **reparameterization** that will simplify the problem. Then, the general approaches to solving integration problems are\n\n1. Numerical quadrature\n2. Importance sampling, adaptive importance sampling and variance reduction techniques (Monte Carlo swindles)\n3. Markov Chain Monte Carlo\n4. Asymptotic approximations (Laplace method and its modern version in variational inference)\n\nThis lecture will review the concepts for quadrature and Monte Carlo integration.\n\nQuadrature\n----\n\nYou may recall from Calculus that integrals can be numerically evaluated using quadrature methods such as Trapezoid and Simpson's's rules. This is easy to do in Python, but has the drawback of the complexity growing as $O(n^d)$ where $d$ is the dimensionality of the data, and hence infeasible once $d$ grows beyond a modest number.\n\n### Integrating functions\n\n\n```python\nfrom scipy.integrate import quad\n```\n\n\n```python\ndef f(x):\n return x * np.cos(71*x) + np.sin(13*x)\n```\n\n\n```python\nx = np.linspace(0, 1, 100)\nplt.plot(x, f(x))\npass\n```\n\n#### Exact solution\n\n\n```python\nfrom sympy import sin, cos, symbols, integrate\n\nx = symbols('x')\nintegrate(x * cos(71*x) + sin(13*x), (x, 0,1)).evalf(6)\n```\n\n\n\n\n 0.0202549\n\n\n\n#### Using quadrature\n\n\n```python\ny, err = quad(f, 0, 1.0)\ny\n```\n\n\n\n\n 0.02025493910239419\n\n\n\n#### Multiple integration\n\nFollowing the `scipy.integrate` [documentation](http://docs.scipy.org/doc/scipy/reference/tutorial/integrate.html), we integrate\n\n$$\nI=\\int_{y=0}^{1/2}\\int_{x=0}^{1-2y} x y \\, dx\\, dy\n$$\n\n\n```python\nx, y = symbols('x y')\nintegrate(x*y, (x, 0, 1-2*y), (y, 0, 0.5))\n```\n\n\n\n\n 0.0104166666666667\n\n\n\n\n```python\nfrom scipy.integrate import nquad\n\ndef f(x, y):\n return x*y\n\ndef bounds_y():\n return [0, 0.5]\n\ndef bounds_x(y):\n return [0, 1-2*y]\n\ny, err = nquad(f, [bounds_x, bounds_y])\ny\n```\n\n\n\n\n 0.010416666666666668\n\n\n\n## Monte Carlo integration\n\nThe basic idea of Monte Carlo integration is very simple and only requires elementary statistics. Suppose we want to find the value of \n$$\nI = \\int_a^b f(x) dx\n$$\nin some region with volume $V$. Monte Carlo integration estimates this integral by estimating the fraction of random points that fall below $f(x)$ multiplied by $V$. 
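\n\nTo make that geometric picture concrete, here is a minimal hit-or-miss sketch. The integrand $x^2$ on $[0, 1]$ and the unit bounding box are arbitrary illustrative choices (none of these variables are used later in this notebook):\n\n```python\nimport numpy as np\n\nnp.random.seed(0)\nn = 100000\n# scatter points uniformly in the box [0, 1] x [0, 1], which has volume V = 1\nxs = np.random.uniform(0, 1, n)\nys = np.random.uniform(0, 1, n)\n# the fraction of points falling below the curve y = x**2, times V, estimates the integral\nnp.mean(ys < xs**2) # exact value is 1/3\n```\n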
\n\n\nIn a statistical context, we use Monte Carlo integration to estimate the expectation\n$$\nE[g(X)] = \\int_X g(x) p(x) dx\n$$\n\nwith\n\n$$\n\\bar{g_n} = \\frac{1}{n} \\sum_{i=1}^n g(x_i)\n$$\nwhere $x_i \\sim p$ is a draw from the density $p$.\n\nWe can estimate the Monte Carlo variance of the approximation as\n$$\nv_n = \\frac{1}{n^2} \\sum_{i=1}^n (g(x_i) - \\bar{g_n})^2\n$$\n\nAlso, from the Central Limit Theorem,\n\n$$\n\\frac{\\bar{g_n} - E[g(X)]}{\\sqrt{v_n}} \\sim \\mathcal{N}(0, 1)\n$$\n\nThe error of Monte Carlo integration converges as $\\mathcal{O}(n^{-1/2})$, independent of the dimensionality. Hence Monte Carlo integration generally beats numerical integration for moderate- and high-dimensional integration, since the cost of numerical integration (quadrature) grows as $\\mathcal{O}(n^{d})$ with the dimension $d$. Even for low dimensional problems, Monte Carlo integration may have an advantage when the volume to be integrated is concentrated in a very small region and we can use information from the distribution to draw samples more often in the region of importance.\n\nAn elementary, readable description of Monte Carlo integration and variance reduction techniques can be found [here](https://www.cs.dartmouth.edu/~wjarosz/publications/dissertation/appendixA.pdf).\n\n### Intuition behind Monte Carlo integration\n\nWe want to find some integral \n\n$$I = \\int{f(x)} \\, dx$$\n\nConsider the expectation of a function $g(x)$ with respect to some distribution $p(x)$. By definition, we have\n\n$$\nE[g(x)] = \\int{g(x) \\, p(x) \\, dx}\n$$\n\nIf we choose $g(x) = f(x)/p(x)$, then we have\n\n$$\n\\begin{align}\nE[g(x)] &= \\int{\\frac{f(x)}{p(x)} \\, p(x) \\, dx} \\\\\n&= \\int{f(x) dx} \\\\\n&= I\n\\end{align}\n$$\n\nBy the law of large numbers, the average converges on the expectation, so we have\n\n$$\nI \\approx \\bar{g_n} = \\frac{1}{n} \\sum_{i=1}^n g(x_i)\n$$\n\nIf $f(x)$ is a proper integral (i.e. bounded), and $p(x)$ is the uniform distribution, then $g(x) = f(x)$ and this is known as ordinary Monte Carlo. If the integral of $f(x)$ is improper, then we need to use another distribution with the same support as $f(x)$.\n\n\n```python\nfrom scipy import stats\n```\n\n\n```python\nx = np.linspace(-3,3,100)\ndist = stats.norm(0,1)\na = -2\nb = 0\nplt.plot(x, dist.pdf(x))\nplt.fill_between(np.linspace(a,b,100), dist.pdf(np.linspace(a,b,100)), alpha=0.5)\nplt.text(b+0.1, 0.1, 'p=%.4f' % (dist.cdf(b) - dist.cdf(a)), fontsize=14)\npass\n```\n\n#### Using quadrature\n\n\n```python\ny, err = quad(dist.pdf, a, b)\ny\n```\n\n\n\n\n 0.47724986805182085\n\n\n\n#### Simple Monte Carlo integration\n\nIf we can sample directly from the target distribution $N(0,1)$\n\n\n```python\nn = 10000\nx = dist.rvs(n)\nnp.sum((a < x) & (x < b))/n\n```\n\n\n\n\n 0.4816\n\n\n\nSuppose we cannot sample directly from the target distribution $N(0,1)$, but can evaluate it at any point. \n\nRecall that $g(x) = \\frac{f(x)}{p(x)}$. Since $p(x)$ is $U(a, b)$, $p(x) = \\frac{1}{b-a}$. So we want to calculate\n\n$$\n\\frac{1}{n} \\sum_{i=1}^n (b-a) f(x_i)\n$$\n\n\n```python\nn = 10000\nx = np.random.uniform(a, b, n)\nnp.mean((b-a)*dist.pdf(x))\n```\n\n\n\n\n 0.4783397843683427\n\n\n\n### Intuition for error rate\n\nWe will just work this out for a proper integral $f(x)$ defined in the unit cube and bounded by $|f(x)| \\le 1$. Draw a random uniform vector $x$ in the unit cube. 
Then\n\n$$\n\\begin{align}\nE[f(x_i)] &= \\int{f(x) p(x) dx} = I \\\\\n\\text{Var}[f(x_i)] &= \\int{(f(x) - I)^2 p(x) \\, dx} \\\\\n&= \\int{f(x)^2 \\, p(x) \\, dx} - 2I \\int{f(x) \\, p(x) \\, dx} + I^2 \\int{p(x) \\, dx} \\\\\n&= \\int{f(x)^2 \\, p(x) \\, dx} - I^2 \\\\\n& \\le \\int{f(x)^2 \\, p(x) \\, dx} \\\\\n& \\le \\int{p(x) \\, dx} = 1\n\\end{align}\n$$\n\nNow consider summing over many such IID draws $S_n = f(x_1) + f(x_2) + \\cdots + f(x_n)$. We have\n\n$$\n\\begin{align}\nE[S_n] &= nI \\\\\n\\text{Var}[S_n] & \\le n\n\\end{align}\n$$\n\nand as expected, we see that $I \\approx S_n/n$. From Chebyshev's inequality,\n\n$$\n\\begin{align}\nP \\left( \\left| \\frac{S_n}{n} - I \\right| \\ge \\epsilon \\right) &= \nP \\left( \\left| S_n - nI \\right| \\ge n \\epsilon \\right) \\le \\frac{\\text{Var}[S_n]}{n^2 \\epsilon^2} \\le\n\\frac{1}{n \\epsilon^2} = \\delta\n\\end{align}\n$$\n\nSuppose we want 1% accuracy and 99% confidence - i.e. set $\\epsilon = \\delta = 0.01$. The above inequality tells us that we can achieve this with just $n = 1/(\\delta \\epsilon^2) = 1,000,000$ samples, regardless of the data dimensionality.\n\n### Example\n\nWe want to estimate the following integral $\\int_0^1 e^x dx$. \n\n\n```python\nx = np.linspace(0, 1, 100)\nplt.plot(x, np.exp(x))\nplt.xlim([0,1])\nplt.ylim([0, np.exp(1)])\npass\n```\n\n#### Analytic solution\n\n\n```python\nfrom sympy import symbols, integrate, exp\n\nx = symbols('x')\nexpr = integrate(exp(x), (x,0,1))\nexpr.evalf()\n```\n\n\n\n\n 1.71828182845905\n\n\n\n#### Using quadrature\n\n\n```python\nfrom scipy import integrate\n\ny, err = integrate.quad(exp, 0, 1)\ny\n```\n\n\n\n\n 1.7182818284590453\n\n\n\n#### Monte Carlo integration\n\n\n```python\nfor n in 10**np.array([1,2,3,4,5,6,7,8]):\n x = np.random.uniform(0, 1, n)\n sol = np.mean(np.exp(x))\n print('%10d %.6f' % (n, sol))\n```\n\n 10 2.016472\n 100 1.717020\n 1000 1.709350\n 10000 1.719758\n 100000 1.716437\n 1000000 1.717601\n 10000000 1.718240\n 100000000 1.718152\n\n\n### Monitoring variance in Monte Carlo integration\n\nWe are often interested in knowing how many iterations it takes for Monte Carlo integration to \"converge\". To do this, we would like some estimate of the variance, and it is useful to inspect plots of the running estimate against the number of samples. One simple way to get confidence intervals for the plot of Monte Carlo estimate against number of iterations is simply to do many such simulations.\n\nFor the example, we will try to estimate the function (again)\n\n$$\nf(x) = x \\cos 71 x + \\sin 13x, \\ \\ 0 \\le x \\le 1\n$$\n\n\n```python\ndef f(x):\n return x * np.cos(71*x) + np.sin(13*x)\n```\n\n\n```python\nx = np.linspace(0, 1, 100)\nplt.plot(x, f(x))\npass\n```\n\n#### Single MC integration estimate\n\n\n```python\nn = 100\nx = f(np.random.random(n))\ny = 1.0/n * np.sum(x)\ny\n```\n\n\n\n\n -0.15505102485636882\n\n\n\n#### Using multiple independent sequences to monitor convergence\n\nWe vary the sample size from 1 to 100 and calculate the value of $y = \\sum{x}/n$ for 1000 replicates. We then plot the 2.5th and 97.5th percentile of the 1000 values of $y$ to see how the variation in $y$ changes with sample size. The blue lines indicate the 2.5th and 97.5th percentiles, and the red line a sample path.
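\n\nAs a cheaper alternative to full replication, the CLT variance estimate $v_n$ introduced earlier gives an approximate error band from a single sequence. A rough sketch is below; it reuses the integrand of `f` but keeps its own variable names, and the normal 95% multiplier 1.96 together with a single overall sample standard deviation are simplifying assumptions:\n\n```python\nimport numpy as np\n\nnp.random.seed(1)\nn = 100\nu = np.random.random(n)\nvals = u * np.cos(71*u) + np.sin(13*u) # same integrand as f(x) above\nrunning_mean = np.cumsum(vals) / np.arange(1, n+1)\nrunning_sem = np.std(vals) / np.sqrt(np.arange(1, n+1)) # approximate standard error of the running mean\nlower = running_mean - 1.96 * running_sem\nupper = running_mean + 1.96 * running_sem\n```\n\nThe replicate-based version just described is computed below.\n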
The blue lines indicate the 2.5th and 97.5th percentiles, and the red line a sample path.\n\n\n```python\nn = 100\nreps = 1000\n\nx = f(np.random.random((n, reps)))\ny = 1/np.arange(1, n+1)[:, None] * np.cumsum(x, axis=0)\nupper, lower = np.percentile(y, [2.5, 97.5], axis=1)\n```\n\n\n```python\nplt.plot(np.arange(1, n+1), y, c='grey', alpha=0.02)\nplt.plot(np.arange(1, n+1), y[:, 0], c='red', linewidth=1);\nplt.plot(np.arange(1, n+1), upper, 'b', np.arange(1, n+1), lower, 'b')\npass\n```\n\n#### Using bootstrap to monitor convergence\n\nIf it is too expensive to do 1000 replicates, we can use a bootstrap instead.\n\n\n```python\nxb = np.random.choice(x[:,0], (n, reps), replace=True)\nyb = 1/np.arange(1, n+1)[:, None] * np.cumsum(xb, axis=0)\nupper, lower = np.percentile(yb, [2.5, 97.5], axis=1)\n```\n\n\n```python\nplt.plot(np.arange(1, n+1)[:, None], yb, c='grey', alpha=0.02)\nplt.plot(np.arange(1, n+1), yb[:, 0], c='red', linewidth=1)\nplt.plot(np.arange(1, n+1), upper, 'b', np.arange(1, n+1), lower, 'b')\npass\n```\n\n## Variance Reduction\n\nWith independent samples, the variance of the Monte Carlo estimate is \n\n\n$$\n\\begin{align}\n\\text{Var}[\\bar{g_n}] &= \\text{Var} \\left[ \\frac{1}{N}\\sum_{i=1}^{N} \\frac{f(x_i)}{p(x_i)} \\right] \\\\\n&= \\frac{1}{N^2} \\sum_{i=1}^{N} \\text{Var} \\left[ \\frac{f(x_i)}{p(x_i)} \\right] \\\\\n&= \\frac{1}{N^2} \\sum_{i=1}^{N} \\text{Var}[Y_i] \\\\\n&= \\frac{1}{N} \\text{Var}[Y_i]\n\\end{align}\n$$\n\nwhere $Y_i = f(x_i)/p(x_i)$. In general, we want to make $\\text{Var}[\\bar{g_n}]$ as small as possible for the same number of samples. There are several variance reduction techniques (also colorfully known as Monte Carlo swindles) that have been described - we illustrate the change of variables and importance sampling techniques here.\n\n### Change of variables\n\nThe Cauchy distribution is given by \n$$\nf(x) = \\frac{1}{\\pi (1 + x^2)}, \\ \\ -\\infty \\lt x \\lt \\infty \n$$\n\nSuppose we want to integrate the tail probability $P(X > 3)$ using Monte Carlo. One way to do this is to draw many samples form a Cauchy distribution, and count how many of them are greater than 3, but this is extremely inefficient.\n\n#### Only 10% of samples will be used\n\n\n```python\nimport scipy.stats as stats\n\nh_true = 1 - stats.cauchy().cdf(3)\nh_true\n```\n\n\n\n\n 0.10241638234956674\n\n\n\n\n```python\nn = 100\n\nx = stats.cauchy().rvs(n)\nh_mc = 1.0/n * np.sum(x > 3)\nh_mc, np.abs(h_mc - h_true)/h_true\n```\n\n\n\n\n (0.1, 0.02359370926927643)\n\n\n\n#### A change of variables lets us use 100% of draws\n\nWe are trying to estimate the quantity\n\n$$\n\\int_3^\\infty \\frac{1}{\\pi (1 + x^2)} dx\n$$\n\nUsing the substitution $y = 3/x$ (and a little algebra), we get\n\n$$\n\\int_0^1 \\frac{3}{\\pi(9 + y^2)} dy\n$$\n\nHence, a much more efficient MC estimator is \n\n$$\n\\frac{1}{n} \\sum_{i=1}^n \\frac{3}{\\pi(9 + y_i^2)}\n$$\n\nwhere $y_i \\sim \\mathcal{U}(0, 1)$.\n\n\n```python\ny = stats.uniform().rvs(n)\nh_cv = 1.0/n * np.sum(3.0/(np.pi * (9 + y**2)))\nh_cv, np.abs(h_cv - h_true)/h_true\n```\n\n\n\n\n (0.10252486615772155, 0.0010592427272478476)\n\n\n\n### Importance sampling\n\nSuppose we want to evaluate\n\n$$\nI = \\int{h(x)\\,p(x) \\, dx}\n$$\n\nwhere $h(x)$ is some function and $p(x)$ is the PDF of $y$. 
If it is hard to sample directly from $p$, we can introduce a new density function $q(x)$ that is easy to sample from, and write\n\n$$\nI = \\int{h(x)\\, p(x)\\, dx} = \\int{h(x)\\, \\frac{p(x)}{q(x)} \\, q(x) \\, dx}\n$$\n\nIn other words, we sample from $h(y)$ where $y \\sim q$ and weight it by the likelihood ratio $\\frac{p(y)}{q(y)}$, estimating the integral as\n\n$$\n\\frac{1}{n}\\sum_{i=1}^n \\frac{p(y_i)}{q(y_i)} h(y_i)\n$$\n\nSometimes, even if we can sample from $p$ directly, it is more efficient to use another distribution.\n\n#### Example\n\nSuppose we want to estimate the tail probability of $\\mathcal{N}(0, 1)$ for $P(X > 5)$. Regular MC integration using samples from $\\mathcal{N}(0, 1)$ is hopeless since nearly all samples will be rejected. However, we can use the exponential density truncated at 5 as the importance function and use importance sampling. Note that $h$ here is simply the identify function.\n\n\n```python\nx = np.linspace(4, 10, 100)\nplt.plot(x, stats.expon(5).pdf(x))\nplt.plot(x, stats.norm().pdf(x))\npass\n```\n\n#### Expected answer\n\nWe expect about 3 draws out of 10,000,000 from $\\mathcal{N}(0, 1)$ to have a value greater than 5. Hence simply sampling from $\\mathcal{N}(0, 1)$ is hopelessly inefficient for Monte Carlo integration.\n\n\n```python\n%precision 10\n```\n\n\n\n\n '%.10f'\n\n\n\n\n```python\nv_true = 1 - stats.norm().cdf(5)\nv_true\n```\n\n\n\n\n 0.0000002867\n\n\n\n#### Using direct Monte Carlo integration\n\n\n```python\nn = 10000\ny = stats.norm().rvs(n)\nv_mc = 1.0/n * np.sum(y > 5)\n# estimate and relative error\nv_mc, np.abs(v_mc - v_true)/v_true \n```\n\n\n\n\n (0.0000000000, 1.0000000000)\n\n\n\n#### Using importance sampling\n\n\n```python\nn = 10000\ny = stats.expon(loc=5).rvs(n)\nv_is = 1.0/n * np.sum(stats.norm().pdf(y)/stats.expon(loc=5).pdf(y))\n# estimate and relative error\nv_is, np.abs(v_is- v_true)/v_true\n```\n\n\n\n\n (0.0000002850, 0.0056329867)\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "760ece1e5ea97e3ac0f4739b2cd6595481cbef6b", "size": 189626, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/S10C_Monte_Carlo_Integration.ipynb", "max_stars_repo_name": "taotangtt/sta-663-2018", "max_stars_repo_head_hexsha": "67dac909477f81d83ebe61e0753de2328af1be9c", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 72, "max_stars_repo_stars_event_min_datetime": "2018-01-20T20:50:22.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-27T23:24:21.000Z", "max_issues_repo_path": "notebooks/S10C_Monte_Carlo_Integration.ipynb", "max_issues_repo_name": "taotangtt/sta-663-2018", "max_issues_repo_head_hexsha": "67dac909477f81d83ebe61e0753de2328af1be9c", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2020-02-03T13:43:46.000Z", "max_issues_repo_issues_event_max_datetime": "2020-02-03T13:43:46.000Z", "max_forks_repo_path": "notebooks/S10C_Monte_Carlo_Integration.ipynb", "max_forks_repo_name": "taotangtt/sta-663-2018", "max_forks_repo_head_hexsha": "67dac909477f81d83ebe61e0753de2328af1be9c", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 64, "max_forks_repo_forks_event_min_datetime": "2018-01-12T17:13:14.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-14T20:22:46.000Z", "avg_line_length": 166.3385964912, "max_line_length": 37352, "alphanum_fraction": 0.8900467236, "converted": true, "num_tokens": 4715, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9196425355825848, "lm_q2_score": 0.8976952941600964, "lm_q1q2_score": 0.8255587765019454}} {"text": "## Learning Objectives\n\nBy the end of this session you should be able to...\n\n1. Take the derivative of a function over one variable\n1. Take the partial derivative of a function over all of its variables \n1. Find the minimum of the function to obtain the best line that represents relationships between two variables in a dataset\n\n## Why are derivatives important?\n\nDerivatives are the foundation for Linear Regression (a topic we'll cover later in the course) that allows us to obtain the best line that represents relationships between two variables in a dataset.\n\n## Introduction to Derivatives\n\nThe process of fidning a derivative is called **Differentiation**, which is a technique used to calculate the slope of a graph at different points.\n\n### Activity - Derivative Tutorial:\n\n1. Go through this [Derivative tutorial from Math Is Fun](https://www.mathsisfun.com/calculus/derivatives-introduction.html) (15 min)\n1. When you're done, talk with a partner about topics you still have questions on. See if you can answer each other's questions. (5 min)\n1. We'll then go over questions on the tutorial as a class (10 min)\n\n### Review Diagram\n\nReview the below diagram as a class, and compare with what you just learned in the above Derivative Tutorial. Note that a Gradient Function is just another name for the Derivative of a function:\n\n\n\n\n## Derivative Formula\n\n- Choose small $\\Delta x$\n\n- $f^\\prime(x) = \\frac{d}{dx}f(x) = \\frac{\\Delta y}{\\Delta x} = \\frac{f(x + \\Delta x) - f(x)}{\\Delta x}$\n\nRemember that $\\Delta x$ approaches 0. So if plugging in a value in the above formula, choose a _very_ small number, or simplify the equation further such that all $\\Delta x = 0$, like we saw in the tutorial\n\n## Activity: Write a Python function that calculates the gradient of $x^2$ at $x = 3$ and $x = -2$ using the above definition\n\n\n```python\ndef f(x):\n return x**2\n\n\neps = 1e-6\nx = 3\nprint((f(x + eps) - f(x)) / eps)\nx = -2\nprint((f(x + eps) - f(x)) / eps)\n```\n\n 6.000001000927568\n -3.999998999582033\n\n\nNote that these values match $2x$, our derivative of $x^2$:\n\n$2*3 = 6$\n\n$2 * -2 = -4$\n\n## Derivative Table\n\nAs a shortcut, use the second page of this PDF to find the derivative for common formulas. Utilize this as a resource going forward!\n\n- https://www.qc.edu.hk/math/Resource/AL/Derivative%20Table.pdf\n\n## Extend Gradient into Two-Dimensional Space\n\nNow we know how to calculate a derivative of one variable. But what if we have two?\n\nTo do this, we need to utilize **Partial Derivatives**. 
Calculating a partial derivative is essentially calculating two derivatives for a function: one for each variable, where they other variable is set to a constant.\n\n### Activity - Partial Derivative Video\n\nLets watch this video about Partial Derivative Intro from Khan Academy: https://youtu.be/AXqhWeUEtQU\n\n**Note:** Here are some derivative shortcuts that will help in the video:\n\n$\\frac{d}{dx}x^2 = 2x$\n\n$\\frac{d}{x}sin(x) = cos(x)$\n\n$\\frac{d}{dx}x = 1$\n\n### Activity - Now You Try!\nConsider the function $f(x, y) = \\frac{x^2}{y}$\n\n- Calculate the first order partial derivatives ($\\frac{\\partial f}{\\partial x}$ and $\\frac{\\partial f}{\\partial y}$) and evaluate them at the point $P(2, 1)$.\n\n## We can use the Symbolic Python package (library) to compute the derivatives and partial derivatives\n\n\n```python\nfrom sympy import symbols, diff\n# initialize x and y to be symbols to use in a function\nx, y = symbols('x y', real=True)\nf = (x**2)/y\n# Find the partial derivatives of x and y\nfx = diff(f, x, evaluate=True) # partial derivative of f(x,y) with respect to x\nfy = diff(f, y, evaluate=True) # partial derivative of f(x,y) with respect to y\nprint(fx)\nprint(fy)\n# print(f.evalf(subs={x: 2, y: 1}))\nprint(fx.evalf(subs={x: 2, y: 1}))\nprint(fy.evalf(subs={x: 2, y: 1}))\n```\n\n 2*x/y\n -x**2/y**2\n 4.00000000000000\n -4.00000000000000\n\n\n## Optional Reading: Tensorflow is a powerful package from Google that calculates the derivatives and partial derivatives numerically \n\n\n```python\nimport tensorflow as tf \n\nx = tf.Variable(2.0)\ny = tf.Variable(1.0)\n\nwith tf.GradientTape(persistent=True) as t:\n z = tf.divide(tf.multiply(x, x), y)\n\n# Use the tape to compute the derivative of z with respect to the\n# intermediate value x and y.\ndz_dx = t.gradient(z, x)\ndz_dy = t.gradient(z, y)\n\n\nprint(dz_dx)\nprint(dz_dy)\n\n# All at once:\ngradients = t.gradient(z, [x, y])\nprint(gradients)\n\n\ndel t\n```\n\n## Optional Reading: When x and y are declared as constant, we should add `t.watch(x)` and `t.watch(y)`\n\n\n```python\nimport tensorflow as tf \n\nx = tf.constant(2.0)\ny = tf.constant(1.0)\n\nwith tf.GradientTape(persistent=True) as t:\n t.watch(x)\n t.watch(y)\n z = tf.divide(tf.multiply(x, x), y)\n\n# Use the tape to compute the derivative of z with respect to the\n# intermediate value y.\ndz_dx = t.gradient(z, x)\ndz_dy = t.gradient(z, y)\n```\n\n# Calculate Partial Derivative from Definition\n\n\n```python\ndef f(x, y):\n return x**2/y\n\n\neps = 1e-6\nx = 2\ny = 1\nprint((f(x + eps, y) - f(x, y)) / eps)\nprint((f(x, y + eps) - f(x, y)) / eps)\n```\n\n 4.0000010006480125\n -3.9999959997594203\n\n\nLooks about right! This works rather well, but it is just an approximation. Also, you need to call `f()` at least once per parameter (not twice, since we could compute `f(x, y)` just once). This makes this approach difficult to control for large systems (for example neural networks).\n\n## Why Do we need Partial Gradients?\n\nIn many applications, more specifically DS applications, we want to find the Minimum of a cost function\n\n- **Cost Function:** a function used in machine learning to help correct / change behaviour to minimize mistakes. Or in other words, a measure of how wrong the model is in terms of its ability to estimate the relationship between x and y. [Source](https://towardsdatascience.com/machine-learning-fundamentals-via-linear-regression-41a5d11f5220)\n\n\nWhy do we want to find the minimum for a cost function? 
Given that a cost function mearues how wrong a model is, we want to _minimize_ that error!\n\nIn Machine Learning, we frequently use models to run our data through, and cost functions help us figure out how badly our models are performing. We want to find parameters (also known as **weights**) to minimize our cost function, therefore minimizing error!\n\nWe find find these optimal weights by using a **Gradient Descent**, which is an algorithm that tries to find the minimum of a function (exactly what we needed!). The gradient descent tells the model which direction it should take in order to minimize errors, and it does this by selecting more and more optimal weights until we've minimized the function! We'll learn more about models when we talk about Linear Regression in a future lesson, but for now, let's review the Gradient Descent process with the below images, given weights $w_0$ and $w_1$:\n\n\n\nLook at that bottom right image. Looks like we're using partial derivatives to find out optimal weights. And we know exactly how to do that!\n\n## Finding minimum of a function\n\nAssume we want to minimize the function $J$ which has two weights $w_0$ and $w_1$\n\nWe have two options to find the minimum of $J(w_0, w_1)$:\n\n1. Take partial derivatives of $J(w_0, w_1)$ with relation to $w_0$ and $w_1$:\n\n$\\frac{\\partial J(w_0, w_1)}{\\partial w_0}$\n\n$\\frac{\\partial J(w_0, w_1)}{\\partial w_1}$\n\nAnd find the appropriate weights such that the partial derivatives equal 0:\n\n$\\frac{\\partial J(w_0, w_1)}{\\partial w_0} = 0$\n\n$\\frac{\\partial J(w_0, w_1)}{\\partial w_1} = 0$\n\nIn this approach we should solve system of linear or non-linear equation\n\n2. Use the Gradient Descent algorithm:\n\nFirst we need to define two things:\n\n- A step-size alpha ($\\alpha$) -- also called the *learning rate* -- as a small number (like $1.e-5$)\n- An arbitrary random initial value for $w_0$ and $w_1$: $w_0 = np.random.randn()$ and $w_1 = np.random.randn()$\n\nFinally, we need to search for the most optimal $w_0$ and $w_1$ by using a loop to update the weights until we find the most optimal weights. We'll need to establish a threshold to compare weights to know when to stop the loop. For example, if the weight update -- the change in the weight parameter -- from one iteration is within 0.0001 of the weight from the previous iteration, we can stop the loop (0.0001 is our threshold here)\n\nLet's review some pseudocode for how to implement this algorithm:\n\n```\n# initialization\ninitialize the following:\n a starting weight value -- an initial guess, could be random\n the learning rate (alpha), a small number (we'll choose 1.e-5)\n the threshold -- set this to 1.e-4\n the current weight update -- initialize to 1\n\n# weight update loop\nwhile the weight update is greater than the threshold:\n store the current values of the weights into a previous value variable \n set the weight values to new values based on the algorithm, by adding the weight updates\n```\n\nHow do we `set the weight values to new values based on the algorithm`? by using the below equations:\n \n$w_0 = w_0 - \\alpha \\frac{\\partial J(w_0, w_1)}{\\partial w_0}$\n \n$w_1 = w_1 - \\alpha \\frac{\\partial J(w_0, w_1)}{\\partial w_1}$\n\n\nFinish the \"starter code\" block below, creating real code from the pseudocode!\n\n\n**Stretch Challenge:** We may also want to limit the number of loops we do, in addition to checking the threshold. 
Determine how we may go about doing that\n\n\n## Resources\n\n- [Derivative tutorial from Math Is Fun](https://www.mathsisfun.com/calculus/derivatives-introduction.html) \n- [Derivative Table](https://www.qc.edu.hk/math/Resource/AL/Derivative%20Table.pdf)\n- [Khan Academy - Partial Derivatives video](https://www.youtube.com/watch?v=AXqhWeUEtQU&feature=youtu.be)\n- [Towards Data Science - Machine Learning Fundamentals: cost functions and gradient Descent](https://towardsdatascience.com/machine-learning-fundamentals-via-linear-regression-41a5d11f5220)\n\n## Gradient descent in one dimension\n\n\n```python\n# pseudo-code\ndef minimize(f):\n # Initialize\n \n # run the weight update loop until it terminates\n \n # return the current weights\n```\n\n\n```python\nimport matplotlib.pyplot as plt\nimport numpy as np\nnp.random.seed(42)\neps = 1.e-10\n\n# define an interesting function to minimize\nx = np.linspace(-10,10, 100)\nfx =lambda x: (1/50)*x**4 - 2*x**2 + x + 1 \ndf_dx = lambda x: (4/50)*x**3 - 4*x + 1\nplt.plot(x,fx(x),label = 'function f(x)')\nplt.plot(x,df_dx(x),'r--',label = 'derivative df/dx')\nplt.legend()\nplt.grid()\n```\n\n\n```python\ndef derivative(fx, x):\n delta_x = 1e-10\n df_dx = (fx(x + delta_x) - fx(x)) / delta_x\n return df_dx\n```\n\n\n```python\nderivative(fx, 7.300008)\n```\n\n\n\n\n 2.921467512351228\n\n\n\n### gradient descent function\n\n\n```python\ndef minimize(fx,x_init):\n # Initialize\n alpha = 1.e-6\n thresh = 1.e-4\n weight_update = 1.\n x_values = []\n max_iter = 1000\n n_iter = 0\n x = x_init\n eps = 1.e-10\n \n # run the weight update loop until it terminates\n while np.abs(weight_update) > thresh: #and n_iter < max_iter:\n n_iter+=1\n df_dx = (fx(x+eps) - fx(x))/eps\n weight_update = -alpha*df_dx\n x = x + weight_update\n x_values.append(x) \n \n # return the final value of the weight -- which should correspond to the minimum of the function\n return x, x_init, x_values, n_iter\n```\n\n### Choose an initial value for x, then run gradient descent\n\n\n```python\nimport numpy as np\n\nnp.random.seed(42)\n\nx_init = np.random.uniform(-10,10,1) # choose a random starting point\nprint(x_init)\nx_star, x_init, x_values, n_iter = minimize(f,x_init)\nprint(f'Started at {x_init}, found minimum {x_star} after {n_iter} iterations')\n```\n\n### Check that the derivative of the function is indeed zero at the minimum found by gradient descent\n\n\n```python\n# derivative, from calculus \nprint(f'derivative from calculus: {df_dx(x_star)}')\n\n# derivative, from definition\nprint(f'derivative from calculus: {(fx(x_star + eps) - fx(x_star) )/eps}')\n```\n\n derivative from calculus: [10.01321418]\n derivative from calculus: [10.01321692]\n\n\n\n```python\nx_values = np.array(x_values)\nplt.plot(x,fx(x),label = 'function f(x)')\nplt.plot(np.array(x_values),fx(x_values),'.',markersize = 1, label='gradient descent updates')\nplt.plot(x_init,fx(x_init),'r.',markersize = 10, label = '$x_{init}$')\nplt.plot(x_star,fx(x_star),'k*',markersize = 10, label = '$x_{star}$')\n\nplt.xlabel('x')\nplt.ylabel('f(x)')\nplt.grid()\nplt.legend(loc='best');\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "63dc9989319b85ed7fafde0b8a268b43579692ef", "size": 107920, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Class Work/Calculus/partial_derivative_jcat.ipynb", "max_stars_repo_name": "Pondorasti/QL-1.1", "max_stars_repo_head_hexsha": "78d8ce9668cf688f3f877af98f7fc5b14a286c15", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, 
"max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Class Work/Calculus/partial_derivative_jcat.ipynb", "max_issues_repo_name": "Pondorasti/QL-1.1", "max_issues_repo_head_hexsha": "78d8ce9668cf688f3f877af98f7fc5b14a286c15", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Class Work/Calculus/partial_derivative_jcat.ipynb", "max_forks_repo_name": "Pondorasti/QL-1.1", "max_forks_repo_head_hexsha": "78d8ce9668cf688f3f877af98f7fc5b14a286c15", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 166.0307692308, "max_line_length": 34536, "alphanum_fraction": 0.7784377317, "converted": true, "num_tokens": 3398, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9196425333801889, "lm_q2_score": 0.8976952880018481, "lm_q1q2_score": 0.8255587688614778}} {"text": "# Symbolic Computing with SymPy\n\nUse of Computer Algebra Systems (CAS) or symbolic computing relates to symbolic mathematical operations to be contrasted with the floating point mathematics of numerical computation. Symbolic computing should be viewed as complementary to numerical computing, and both can be extremely helpful in the course of your research. We will focus here on symbolic computing in SymPy, a Python based CAS.\n\nThere are many other CAS packages avaialble (see for example: https://en.wikipedia.org/wiki/List_of_computer_algebra_systems). We will work with SymPy because it is free, open-source, built in Python, and accessible from anywhere. SageMath is a related, but more complete solution with similar advantages but more overhead.\n\nThis lesson is developed from the official SymPy tutorial, available with more detail at http://docs.sympy.org/latest/tutorial/index.html.\n\nA working version of SymPy can be accessed from anywhere by going to http://live.sympy.org\n\nTo get started with SymPy, we need to import the library. If the library is not avaialble, you need to run `pip install sympy --user` in your terminal.\n\n\n```python\n%pylab inline\nfrom sympy import *\n```\n\nOne advantage of a CAS such as SymPy is that irrational numbers are treated exactly, rather than approximately.\n\n\n```python\nsqrt(8)\n```\n\nCompare this to what we would have gotten using the definition of `sqrt()` in the math package.\n\n\n```python\nimport math\nmath.sqrt(8)\n```\n\nAnother advantage of SymPy is that it can produce pretty output, including LaTeX. Let us now turn on the LaTeX printer for SymPy.\n\n\n```python\ninit_printing(latex)\n```\n\nJust like elsewhere in Python, one needs to be careful about how division is treated due to the assumed data types of the input. Try evaluating the follwing cells.\n\n\n```python\n1/3\n```\n\n\n```python\n1./3\n```\n\n\n```python\nS(1)/3\n```\n\nThough these expressions look similar, they give different results. The result of the first cell even depends on your version of Python. The third cell coverts the 1 from an `int` to a SymPy object, and returns a SymPy `Rational`. 
You can always check your data type with the `type()` command.\n\n\n```python\ntype(1)\n```\n\n\n```python\ntype(1.)\n```\n\n\n```python\ntype(S(1))\n```\n\n\n```python\ntype(S(1)/3)\n```\n\nThe main advantage of any CAS, however is its ability to deal with variables, which are called symobls in SymPy. We first have to define symbols that will then be treated as SymPy objects.\n\n\n```python\nx, y, z, t, nu, sigma = symbols('x y z t nu sigma')\n```\n\n\n```python\nexpr = x**2+2*y\nexpr\n```\n\n\n```python\nexpr + 1\n```\n\n\n```python\nexpr - y\n```\n\n\n```python\nexpr * x\n```\n\n\n```python\nexpr**2\n```\n\n\n```python\nexpand(expr**2)\n```\n\n\n```python\nx**nu+sigma\n```\n\nYou can see from these examples that SymPy does some straightforward simplifications automatically, but exressions are neither factored nor expanded unless explicit commands are given. There are also several built-in characters which will produce nice LaTeX output.\n\n### Exercise 1\n\nWrite $(w+1/3)^3 + (w^2-2w+12)^2 - (w/4)^4$ in expanded polynomial form.\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n# Quick Examples of SymPy Capabilities\n\n## Derivatives\n\n\n```python\ndiff(exp(x**2), x)\n```\n\n\n```python\ndiff(x**7, x)\n```\n\n\n```python\ndiff(x**7, x, x)\n```\n\n\n```python\ndiff(x**7, x, 3)\n```\n\n\n```python\ndiff(x**2 * y**3, x, y)\n```\n\n\n```python\ndiff(sin(x)**2, x)\n```\n\n\n```python\ndiff(exp(x)*cos(x), x)\n```\n\nMultiple derivatives can be taken by repeating the variable, or by indicating the order of the derivative with an integer.\n\n### Exercise 2\n\nFind the first and second derivatives of $\\frac{1}{\\log(w^2+1/2)}$ with respect to $w$.\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n## Integrals\n\n\n```python\nintegrate(2*x, x)\n```\n\n\n```python\nintegrate(sin(x)*cos(x), x)\n```\n\n\n```python\nintegrate(exp(x)*cos(x)**2, x)\n```\n\n\n```python\nintegrate(y*x*z, x, y)\n```\n\n\n```python\nintegrate(cos(x), (x, 1, 2))\n```\n\n\n```python\nintegrate(exp(-x**2), (x, -oo, oo))\n```\n\nNote that infinity is written as two small 'o' characters.\n\n### Exercise 3\n\nCalculate the following definte integral: $\\int_0^{e^4} \\log(w) \\, \\mathrm{d}w$.\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n## Limits\n\n\n```python\nlimit( (x**2 - 1)/(x - 1), x, 1)\n```\n\n\n```python\nlimit( (5*x**4 + 3*x**2)/(2*x**4 - 20*x**3 + 4), x, oo)\n```\n\n\n```python\nlimit( sin(x)/x, x, 0)\n```\n\n\n```python\nlimit(1/x, x, 0, '+')\n```\n\n\n```python\nlimit(1/x, x, 0, '-')\n```\n\n### Exercise 4\n\nCalculate the limit: $$\\lim_{w \\rightarrow 1} \\left(\\frac{w}{3w-3} - \\frac{1}{3\\log w} \\right) $$\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n## Solving Algebraic Equations\n\nThe function `solve()` assumes that the expression is equal to zero unless the SymPy `Eq()` is used.\n\n\n```python\nsolve(x**2 - 9, x)\n```\n\n\n```python\nsolve( x**2 + y*x + z, x)\n```\n\n\n```python\nsolve(10**x - y**3, x)\n```\n\n\n```python\nsolve(tan(x) - y**2, x)\n```\n\n\n```python\nsolve(x**2 + 1, x)\n```\n\n\n```python\nsolve(x**3 + 1, x)\n```\n\n### Exercise 5\n\nSolve the following expression for $w$: $\\sqrt{5}w + 3w^2 = 2/7$\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n## Solving Differential Equations\n\nThe function `dsolve()` can be used to solve ordinary differential equations after defining `Function` symbols.\n\n\n```python\nf, g, h = symbols('f g h', cls=Function)\n```\n\n\n```python\ndsolve(diff(f(x), x) - f(x), f(x))\n```\n\n\n```python\ndsolve(diff(f(x), x, x) 
+ f(x), f(x))\n```\n\n\n```python\ndsolve(diff(f(x), x, x) + 3 * diff(f(x), x ) + 6 * f(x), f(x))\n```\n\nThere is also a `pdsolve()` for partial differential equations.\n\n\n```python\npdsolve(2*diff(f(x,y),x)/f(x,y) + 3*diff(f(x,y), y)/f(x,y) + 1, f(x,y))\n```\n\n### Exercise 6\n\nSolve $f'(x) = f^2(x)$ where $f(1/3) = 5/2$.\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n# Basic Manipulation\n\n## Substitution and Numerical Evaluation\n\nAll instances of a variable in an expression can be replaced with the subs() command.\n\n\n```python\nexpr = 4*x**2 + 3*x + 1\nexpr\n```\n\n\n```python\nexpr.subs(x, 2)\n```\n\n\n```python\nexpr.subs(x, y+z)\n```\n\n\n```python\nexpr = x**2 * y**2 + 3 * x * y**2 + 4*x - 12*z\nexpr\n```\n\n\n```python\nexpr.subs([(x,3), (y,-1), (z,2)])\n```\n\nThe `evalf()` command is used to evaluate an expression as a floating point number to an arbitrary number of digits.\n\n\n```python\nexpr = sqrt(8)\nexpr\n```\n\n\n```python\nexpr.evalf()\n```\n\n\n```python\nexpr = sqrt(x)\nexpr\n```\n\n\n```python\nexpr.evalf(subs={x: 2})\n```\n\n\n```python\npi.evalf(100)\n```\n\n### Exercise 7\n\nFind the numerical value of the expression $x^{1/3} + \\sqrt{5} x^2 - \\log x$ for $x=1,2,\\ldots,10$\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n## Simplification\n\nThe function `simplify()` attempts to give the simplest form of an expression.\n\n\n```python\nsimplify(sin(x)**2 + cos(x)**2)\n```\n\n\n```python\nsimplify((x**3 + x**2 - x - 1)/(x**2 + 2*x + 1))\n```\n\n\n```python\nsimplify(gamma(x)/gamma(x - 2))\n```\n\nExpressions are typically not expanded or factored by `simplify()`, but can be forced to do so by `expand()` and `factor()`.\n\n\n```python\nexpand((x + 1)**4)\n```\n\n\n```python\nfactor(x**2 + 2*x + 1)\n```\n\n`collect()` can be used to group common powers of a given term in an expression.\n\n\n```python\ncollect(4*x**2 + 3*x + 6*z*x**2 + y*x + 1, x)\n```\n\n# Calculus\n\nThere are tools for performing calculus operations as well as creating unevaluated derivatives, integrals, and series, which can be evaluated with the `doit()` command.\n\n\n```python\ndiff(log(x), x)\n```\n\n\n```python\nexpr = sin(x)*y**2\nexpr\n```\n\n\n```python\nexpr.diff(x, x, y)\n```\n\n\n```python\nderiv = Derivative(exp(x**2*y),x,y)\nderiv\n```\n\n\n```python\nderiv.doit()\n```\n\n\n```python\nintegrate(exp(x)*x**2, x)\n```\n\n\n```python\ninteg = Integral(exp(-x**2 - y**2), (x, -oo, oo), (y, -oo, oo))\ninteg\n```\n\n\n```python\ninteg.doit()\n```\n\n\n```python\nlimit(tan(x)/x, x, 0)\n```\n\n\n```python\nexpr = Limit(sqrt(sin(x))/sqrt(x),x,0)\nexpr\n```\n\n\n```python\nexpr.doit()\n```\n\n## Series Expansions\n\nSymPy can perform Taylor series expansions of a given function about a specified point to a desired order (which defaults to 6).\n\n\n```python\nseries(exp(x), x, 0, 8)\n```\n\n\n```python\nexpr = atan(x)\nexpr.series(x, 0)\n```\n\n\n```python\nseries(f(x), x, 1)\n```\n\n\n```python\nseries(exp(1/x), x, oo)\n```\n\n\n```python\nseries(1/sin(x), x, 0)\n```\n\n### Exercise 8\n\nFind the series expansion of $\\mathrm{sinc}(x)$ about $x=0$\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n## Linear Algebra\n\nSymPy can also handle symbolic computing of vectors and matrices.\n\n\n```python\nA = Matrix([[sqrt(2),0],[pi,E]])\nA\n```\n\n\n```python\nB = Matrix([[x,y],[z,w]])\nB\n```\n\n\n```python\nB.inv()\n```\n\n\n```python\nB**(-1)\n```\n\n\n```python\nB**2\n```\n\n\n```python\nB.T\n```\n\n\n```python\n3*B\n```\n\n\n```python\nV = 
Matrix([2*x,5*z])\nV\n```\n\n\n```python\nB*V\n```\n\n\n```python\n(V.T)*B\n```\n\n\n```python\nB.det()\n```\n\n\n```python\nB.rref()\n```\n\n# Printing\n\nWe have so far seen that SymPy can produce nice LaTeX output for easy reading. It can also produce output which is easily transferrable for other uses.\n\n\n```python\nexpr = Piecewise((x + 1, x > 0), (x, True))\nexpr\n```\n\n\n```python\nprint(latex(expr))\n```\n\n\n```python\nprint(ccode(expr))\n```\n\n\n```python\nprint(fcode(expr))\n```\n\n\n```python\nprint(python(expr))\n```\n\n\n```python\nexpr = Eq(x**2 + 2*log(3*y), z)\nexpr\n```\n\n\n```python\nprint(latex(expr))\n```\n\n\n```python\nprint(ccode(expr))\n```\n\n\n```python\nprint(fcode(expr))\n```\n\n\n```python\nexpr = Integral(exp(-x**2-y**2), (x, 0, oo), (y, 0, oo))\nexpr\n```\n\n\n```python\nprint(latex(expr))\n```\n\n\n```python\nprint(python(expr))\n```\n\n# Detailed Example: The Quantum Harmonic Oscillator\n\nLet us now see how we can use what we have learned about SymPy to treat a realistic physical problem: the 1-d quantum harmonic oscillator (following Griffiths Quantum Mechanics). In particular, we wish to find the wavefunctions and energy levels of a particle of mass $m$ in a potential $V(x) = \\frac{1}{2} m \\omega^2 x^2$. We need to solve the time-independent Schrodinger equation.\n$$-\\frac{\\hbar^2}{2m}\\frac{d^2\\phi}{dx^2} + \\frac{1}{2} m \\omega^2 x^2 \\phi = E \\phi$$\nLet's put this in a slightly more convenient form\n$$ \\frac{d^2\\phi}{d\\xi^2} = (\\xi^2 - K)\\phi $$\nwhere we have defined $\\xi \\equiv \\sqrt{\\frac{m\\omega}{\\hbar}}x$ and $K \\equiv \\frac{2E}{\\hbar\\omega} $. We will now define the relevant quantities for SymPy manipulation.\n\n\n```python\nphi, f = symbols('phi f', cls=Function)\nxi, K, c = symbols('xi K c')\n```\n\nNotice that for large $\\xi$, we can neglect the $K$ on the right hand side of the equation. In this regime, we can easily find approximate solutions to the approximate version of Schrodinger's equation.\n\n\n```python\nschrod_approx = Eq(diff(phi(xi),xi,xi), xi**2*phi(xi))\nschrod_approx\n```\n\n\n```python\nA, B = symbols('A B')\nsol_approx = A*exp(xi**2/2) + B*exp(-xi**2/2)\nsol_approx\n```\n\nLet's check that this is indeed an approximate solution.\n\n\n```python\ndiff(sol_approx,xi,xi)\n```\n\nWe see that in the regime $\\xi\\rightarrow\\infty$, this is indeed an approximate solution to Schrodinger's equation. We can also see that the term proportional to $A$ blows up as $\\xi\\rightarrow\\infty$ and is thus unphysical. We therefore take the term proportional to $B$ to be our approximate solution in this regime. We can the simplify the subsequent steps by separating out this exponential piece of the solution. 
Let us define\n$$\\phi(\\xi) \\equiv h(\\xi) e^{-\\xi^2/2}$$\n\n\n```python\nh = symbols('h', cls=Function)\nphi_sub = h(xi)*exp(-xi**2/2)\nphi_sub\n```\n\n\n```python\nphi_prime = diff(phi_sub, xi)\nphi_prime\n```\n\n\n```python\nphi_prime_prime = diff(phi_prime, xi)\nphi_prime_prime\n```\n\n\n```python\nschrod_h = simplify(phi_prime_prime - (xi**2-K)*phi_sub)\nschrod_h\n```\n\nWe can now solve the simpler differential equation without the exponential part.\n\n\n```python\nschrod_h_simp = simplify(schrod_h * exp(xi**2/2))\nschrod_h_simp\n```\n\nThis type of differential equation can typically be solved as a power series.\n\n\n```python\nj, n = symbols('j n', integer=True)\nschrod_series = schrod_h_simp.subs(h(xi), Sum(f(j)*xi**j,(j,0,oo)))\nschrod_series\n```\n\n\n```python\nschrod_series.doit()\n```\n\nNow we can read off the coefficient of $\\xi^j$ which gives a recursive expression for $f(j)$.\n\n\n```python\nschrod_recursive = Eq((j+1)*(j+2)*f(j+2) - 2*j*f(j) + (K - 1) * f(j),0)\nschrod_recursive\n```\n\n\n```python\nEq(f(j+2),solve(schrod_recursive, f(j+2))[0])\n```\n\nThis defines a recursive relation for the coefficients of the power series solution which depends on the two arbitrary constants $f(0)$ and $f(1)$. In order for our soution to be normalizable, we must require that the series truncates, and so $f(n) = 0$ for some finite $n$ and also that $f(0) = 0$ for $n$ odd and $f(1) = 0$ for $n$ even. We therefore find that $K = 2n+1$ and so\n$$ E_n = \\left( n+\\frac{1}{2}\\right) \\hbar \\omega \\, $$\nWith this choice of $K$, we have\n$$ f(j+2) = \\frac{-2(n-j)}{(j+1)(j+2)}f(j) $$\n\n\n```python\ndef h_sol(n):\n g = symbols('g', cls=Function)\n F = 0\n if n % 2 == 0:\n F = c\n g = F\n for j in range (2, n+1, 2):\n F *= S(-2)*(n-(j-2))/((j-1)*j)\n g += F*xi**j\n else:\n F = c\n g = F*xi\n for j in range (3, n+1, 2):\n F *= S(-2)*(n-(j-2))/((j-1)*j)\n g += F*xi**j\n return g\n```\n\n\n```python\nh_sol(5)\n```\n\nFinally, we arrive at the wavefunctions for the stationary states of the quantum harmonic oscillator.\n\n\n```python\nphi_sol = h_sol(5)*exp(-xi**2/2)\nphi_sol\n```\n\n\n```python\nnorm_c = solve(integrate(phi_sol**2, (xi, -oo, oo))-1,c)[1]\nnorm_c\n```\n\n\n```python\nnorm_phi_sol = simplify(phi_sol.subs(c, norm_c))\nnorm_phi_sol\n```\n\n\n```python\nfor i in range(0,6):\n phi_sol = h_sol(i)*exp(-xi**2/2)\n norm_c = solve(integrate(phi_sol**2, (xi, -oo, oo))-1,c)[1]\n norm_phi_sol = simplify(phi_sol.subs(c, norm_c))\n p1 = plot(norm_phi_sol, norm_phi_sol**2, (xi, -5, 5))\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "276df01648ad7e891da4da79897819d48aa651f0", "size": 37969, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Symbolic_Computing_SymPy.ipynb", "max_stars_repo_name": "kuunal-mahtani/CTA200kmahtani", "max_stars_repo_head_hexsha": "7e7aea448e947028f7dc5d1ba8ecaab47b2b986d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Symbolic_Computing_SymPy.ipynb", "max_issues_repo_name": "kuunal-mahtani/CTA200kmahtani", "max_issues_repo_head_hexsha": "7e7aea448e947028f7dc5d1ba8ecaab47b2b986d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Symbolic_Computing_SymPy.ipynb", "max_forks_repo_name": "kuunal-mahtani/CTA200kmahtani", "max_forks_repo_head_hexsha": 
"7e7aea448e947028f7dc5d1ba8ecaab47b2b986d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 20.6915531335, "max_line_length": 1382, "alphanum_fraction": 0.5227422371, "converted": true, "num_tokens": 4279, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9196425311777929, "lm_q2_score": 0.8976952886860978, "lm_q1q2_score": 0.8255587675136625}} {"text": "```python\n%matplotlib inline\nfrom IPython.display import display,Math\nfrom sympy import *\ninit_session()\n```\n\n IPython console for SymPy 1.8 (Python 3.7.10-64-bit) (ground types: gmpy)\n \n These commands were executed:\n >>> from __future__ import division\n >>> from sympy import *\n >>> x, y, z, t = symbols('x y z t')\n >>> k, m, n = symbols('k m n', integer=True)\n >>> f, g, h = symbols('f g h', cls=Function)\n >>> init_printing()\n \n Documentation can be found at https://docs.sympy.org/1.8/\n \n\n\n\n```python\n%%time\n# \u30e6\u30fc\u30af\u30ea\u30c3\u30c9\u306e\u4e92\u9664\u6cd5\na, b = 20210608, 80601202\nr = a%b\ncount = 1\nwhile r != 0:\n a,b = b,r\n r = a%b\n count += 1\nprint(b,count)\n```\n\n 22 16\n CPU times: user 57 \u00b5s, sys: 0 ns, total: 57 \u00b5s\n Wall time: 60.3 \u00b5s\n\n\n\n```python\n%%time\n# \u30e6\u30fc\u30af\u30ea\u30c3\u30c9\u306e\u4e92\u9664\u6cd5\u3000\u4f59\u308a\u306e\u6539\u826f\na, b = 20210608, 80601202\nr = a%b\ncount = 1\nwhile r != 0:\n a,b = b,r\n r = a%b\n count += 1\n if b < 2*r:\n r = b-r\nprint(b,count)\n```\n\n 22 11\n CPU times: user 239 \u00b5s, sys: 56 \u00b5s, total: 295 \u00b5s\n Wall time: 208 \u00b5s\n\n\n\n```python\n# \u62e1\u5f35\u30e6\u30fc\u30af\u30ea\u30c3\u30c9\u306e\u4e92\u9664\u6cd5\ndef myexgcd(a,b):\n x, y, u, v = 1, 0, 0, 1\n while b != 0:\n q = a // b\n x -= q * u\n y -= q * v\n x, u = u, x\n y, v = v, y\n a, b = b, a % b\n return x, y\n```\n\n\n```python\n# \u52d5\u4f5c\u78ba\u8a8d\na,b = 123,321\nx,y = myexgcd(a,b)\nprint(\"{:d}*{:d}{:+d}*{:d}={:d}\".format(x,a,y,b,x*a+y*b))\n```\n\n 47*123-18*321=3\n\n\n\n```python\nfrom ipywidgets import interact\nimport time\ndef myexgcd(a,b):\n x, y, u, v = 1, 0, 0, 1\n while b != 0:\n q = a // b\n x -= q * u\n y -= q * v\n x, u = u, x\n y, v = v, y\n a, b = b, a % b\n return x, y\n@interact\ndef _(a=\"314159265\",b=\"35\"):\n digits = len(b)\n a,b = int(a),int(b)\n x,y = myexgcd(a,b)\n return display(Math(\"{:d}*{:d}{:+d}*{:d}={:d}\".format(x,a,y,b,x*a+y*b)))\n```\n\n\n\n\n interactive(children=(Text(value='314159265', description='a'), Text(value='35', description='b'), Output()), \u2026\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "7666bd0ab10e350b2e14214f83fb7527863e26ab", "size": 5047, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "21jk1-0616.ipynb", "max_stars_repo_name": "ritsumei-aoi/21jk1", "max_stars_repo_head_hexsha": "2d49628ef8721a507193a58aa1af4b31a60dfd8b", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "21jk1-0616.ipynb", "max_issues_repo_name": "ritsumei-aoi/21jk1", "max_issues_repo_head_hexsha": "2d49628ef8721a507193a58aa1af4b31a60dfd8b", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "21jk1-0616.ipynb", "max_forks_repo_name": 
"ritsumei-aoi/21jk1", "max_forks_repo_head_hexsha": "2d49628ef8721a507193a58aa1af4b31a60dfd8b", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 21.3855932203, "max_line_length": 120, "alphanum_fraction": 0.4475926293, "converted": true, "num_tokens": 812, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9196425333801889, "lm_q2_score": 0.897695283896349, "lm_q1q2_score": 0.8255587650858862}} {"text": "# Mathematical Theory and Intuition behind Our Improved Model \n\nThis markdown explains the mathematical theory behind our model. Note that the actual implementation might be different. For simplicity, we are also not including all the datasets we used (otherwise the equations might be too long).\n\nLet $i$ denote the $i$th date since the beginning of our dataset and $n$ denote the cutoff date for training-testing . Let the range from $n+1$ to $m$ be the range for testing (forecast). \n\nFor a given date $i$, let $\\textbf{asp}(i)$ be the actual index for S\\&P500; $\\textbf{fbsp}(i)$ be the index for S\\&P500 bases on a training set of the first $i-1$ many days' market data of S\\&P500; $\\textbf{div}(i)$ be the dividend yield rates of S\\&P500; $\\textbf{eps}(i)$ be the dividen yield rates of S\\&P500; $\\textbf{tby}(i)$ be the 10-year treasury bond yield; $\\textbf{ffr}(i)$ be the fed fund rates; $\\textbf{fta}(i)$ be the fed total assets.\n\n## Linear Model\n\n\\begin{equation}\n \\textbf{fb}_1(i,a,b,c,d,e):=\\textbf{fbsp}(i)+a*\\textbf{tby}(i)+b*\\textbf{ffr}(i)+c*\\textbf{fta}(i)+d*\\textbf{div}(i)+e*\\textbf{eps}(i)\n\\end{equation}\n\n\\begin{equation}\n \\textbf{E}_n(a,b,c,d,e):= \\frac{1}{n-1000}\\sum_{i=1000}^{n} \n (\\textbf{fb}_1(i,a,b,c,d,e)-\\textbf{asp}(i))^2\n\\end{equation}\n\nattains its minimum. Namely, finding\n\n\\begin{equation}\n (a_n,b_n,c_n,d_n,e_n):=\\text{argmin} \\textbf{E}_n(a,b,c,d,e)\n\\end{equation}\n\nNote that it doesn't make sense to start with $i=1$ since the fbprophet itself need to use the first $i-1$ of the data for training\n\n## Nonlinear Model\n\nAll notations are the same as above.\n\nHere is a different model (nonlinear revision of fbprophet). In this model, we will the dividend yield rate of S\\&P 500.\n\nFirst, what is the dividend yield rate? Dividend is the money that publicly listed companies pay you (usually four times a year) for holding their stocks (until the so-called ex-dividend dates). It's like the interest paid to your saving accounts from the banks. Some companies pay while some don't, especially for the growth tech stocks. From my experience, the impact of bond rates and fed fund rates on the stock market will change when they rise above or fall below the dividend yield rate}. Stock prices fall when those rates rise above the dividend yield rate of SP500 (investor are selling their stocks to buy bonds or save more money in their bank account!).\n\nBased on this idea, it might be useful to consider the differences of those rates and the dividend yield rate of SP500. \n\nNormally an increase in the federal fund rate will result in an increase in bank loan interest rate, which will in turn result in an decrease in the net income of S\\&P500-listed companies since they have to pay a higher interest when borrowing money from banks. 
Based on this thought, I believe it is reasonable to make a correction to $c*\\textbf{eps}(i)$ by replacing the term by $c*\\textbf{eps}(i)(1+d*\\textbf{ffr}(i))$. If my intuition is correct, the generated constant $d$ from the optimization should be a negative number. \n\n\n$$\\textbf{fb}_2(i,a,b,c,d,e):= \\textbf{fbsp}(i)*\\big[1+a*(\\textbf{div}(i)-\\textbf{tby}(i))\\big]\\big[1+b*(\\textbf{div}(i)-\\textbf{ffr}(i))\\big]\\\\\n+c*\\textbf{eps}(i)(1+d*\\textbf{ffr}(i)+e*\\textbf{fta}(i))$$\n\nand consider\n\n$$E_n(a,b,c,d,e) := \\frac{1}{n-1000}\\sum_{i=1000}^n(\\textbf{fb}_2(i,a,b,c,d,e)-\\textbf{asp}(i))^2$$\n\nNow find (by approximation, SGD, etc.) $(a_n,b_n,c_n,d_n,e_n):=\\text{argmin} E_n(a,b,c,d,e)$ \n\nUsing $(a_n,b_n,c_n,d_n,e_n)$ as constants, our output will be $\\textbf{fb}_2(i,a_n,b_n,c_n,d_n,e_n)$\n\n\n### The actual implementation of the nonlinear model\n\nFor the actual implementation of the nonlinear model, we threw away the higher order terms (the products of three things) since they are relatively smaller quantities\n\n### The mathematical theory behind our nonlinear model: \n\n#### The Taylor's theorem for multivariate functions\n\nLet $f :\\mathbb{R}^n \u2192 \\mathbb R$ be a $k$-times-differentiable function at the point $a \u2208 \\mathbb{R}^n$. Then there exists $h_a:\\mathbb{R}^n \u2192 \\mathbb R$ such that:\n\n\\begin{align}\n& f(\\boldsymbol{x}) = \\sum_{|\\alpha|\\leq k} \\frac{D^\\alpha f(\\boldsymbol{a})}{\\alpha!} (\\boldsymbol{x}-\\boldsymbol{a})^\\alpha + \\sum_{|\\alpha|=k} h_\\alpha(\\boldsymbol{x})(\\boldsymbol{x}-\\boldsymbol{a})^\\alpha, \\\\\n& \\mbox{and}\\quad \\lim_{\\boldsymbol{x}\\to \\boldsymbol{a}}h_\\alpha(\\boldsymbol{x})=0.\n\\end{align}\n\nThis approximation is also optimal in some sense.\n\nIn our model, we think of $\\textbf{asp}$ as a function of $\\textbf{fbsp}, \\textbf{tby}, \\textbf{div}, \\textbf{ffr}, \\textbf{fta}$, etc. Say \n\n$$\\textbf{asp}:=F(\\textbf{fbsp}, \\textbf{tby}, \\textbf{div}, \\textbf{ffr}, \\textbf{fta},\\cdots)$$\n\nWith the assumption that $F$ here is regular, we can apply the Taylor's theorem for multivariate functions from above to $F$. Ideally, we have to consider all possible products in our implementation. But in our implementation, we chose to make a balance between our intuitive nonlinear model and Taylor's theorem.\n\n## A faster computation scheme\nOne drawback of the models we proposed about is that we have to call fbprophet thousands of times when we implement it.\n\nHere is a method that reduces the number of times calling fbprophet:\n\nSay in the training process we are considering from i=1,000 to 11,000. Namely based on the current scheme, we have to call fbprophet for 10,000 times.\n\nInstead of this, we make a break-down 10,000=100*100 as follows:\n\nFor i=1,000 to 1,100, we only use the first 999 dates for training;\n\nFor i=1,100 to 1,200, we only use the first 1,099 dates for training;\n\n.............................................\n\nFor i=10,900 to 11,000, we only use the first 10,899 dates for training;\n\nIn this way, it seems that we only need to call fbprophet for 100 times. 
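As a rough illustration of this break-down (a sketch only, not the notebook's actual pipeline; it assumes `df` is the Prophet-style training DataFrame with `ds` and `y` columns and that the older `fbprophet` package is used):

```python
from fbprophet import Prophet

chunk = 100
fbsp_chunked = []
for start in range(1000, 11000, chunk):
    m = Prophet()
    m.fit(df.iloc[:start][['ds', 'y']])            # train only on dates before this chunk
    future = df.iloc[start:start + chunk][['ds']]  # the next 100 dates
    fbsp_chunked.extend(m.predict(future)['yhat'].tolist())
# fbsp_chunked now holds forecasts for i = 1,000 ... 10,999 using only 100 Prophet fits
```
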
And this doesn't seem harm the accuracy too much.\n\n\n```python\nimport pandas as pd\nimport numpy as np\n\n## For plotting\nimport matplotlib.pyplot as plt\nfrom matplotlib import style\nimport datetime as dt\nimport seaborn as sns\nsns.set_style(\"whitegrid\")\n```\n\n\n```python\npath = '../Data/dff2.csv'\n```\n\n\n```python\ndf= pd.read_csv(path, parse_dates=['ds'])\n# df = df.rename(columns = {\"Date\":\"ds\",\"Close\":\"y\"}) \ndf = df[['ds', 'y','fbsp', 'diff','tby', 'ffr', 'fta', 'eps', 'div', 'une', 'wti', 'ppi',\n 'rfs']]\n# df\n```\n\n\n```python\ndf['fbsp_tby'] = df['fbsp'] * df['tby']\ndf['fbsp_ffr'] = df['fbsp'] * df['ffr']\ndf['fbsp_div'] = df['fbsp'] * df['div']\ndf['eps_tby'] = df['eps'] * df['tby']\ndf['eps_ffr'] = df['eps'] * df['ffr']\ndf['eps_div'] = df['eps'] * df['div']\n```\n\n\n```python\n# cutoff between test and train data\ncutoff = len(df) - 252\ndf_train = df[:cutoff].copy()\ndf_test = df[cutoff:].copy()\nprint(cutoff)\n```\n\n 2300\n\n\n\n```python\ndf_train.columns\n```\n\n\n\n\n Index(['ds', 'y', 'fbsp', 'diff', 'tby', 'ffr', 'fta', 'eps', 'div', 'une',\n 'wti', 'ppi', 'rfs', 'fbsp_tby', 'fbsp_ffr', 'fbsp_div', 'eps_tby',\n 'eps_ffr', 'eps_div'],\n dtype='object')\n\n\n\n\n```python\npossible_features = ['tby', 'ffr', 'fta', 'eps', 'div', 'une', 'wti',\n 'ppi', 'rfs', 'fbsp_tby', 'fbsp_ffr', 'fbsp_div', 'eps_tby',\n 'eps_ffr', 'eps_div']\n\nfrom itertools import chain, combinations\n\ndef powerset(iterable):\n #\"powerset([1,2,3]) --> () (1,) (2,) (3,) (1,2) (1,3) (2,3) (1,2,3)\"\n s = list(iterable)\n return chain.from_iterable(combinations(s, r) for r in range(len(s)+1))\n\n#print(list(powerset(possible_features)))\n```\n\n\n```python\nlen(possible_features)\n```\n\n\n\n\n 15\n\n\n\n\n```python\nfrom statsmodels.regression.linear_model import OLS\n\nreg_new = OLS((df_train['diff']).copy(),df_train[possible_features].copy()).fit()\nprint(reg_new.params)\n\n#from the output, we can see it's consistent with sklearn output\n```\n\n tby 99.694834\n ffr 298.736913\n fta -0.000051\n eps -9.542566\n div -575.728680\n une -57.231481\n wti 2.970708\n ppi -8.724101\n rfs 0.008933\n fbsp_tby -0.099985\n fbsp_ffr -0.269213\n fbsp_div -0.276994\n eps_tby 0.947387\n eps_ffr 2.839324\n eps_div 6.401360\n dtype: float64\n\n\n\n```python\nnew_coef = reg_new.params\nnew_possible_feats = new_coef[abs(new_coef)>0].index\n\npower_feats = list(powerset(new_possible_feats))\npower_feats.remove(())\n\npower_feats = [ list(feats) for feats in power_feats]\nlen(power_feats)\n\n```\n\n\n\n\n 32767\n\n\n\n\n```python\nAIC_scores = []\nparameters = []\n\nfor feats in power_feats:\n tmp_reg = OLS((df_train['diff']).copy(),df_train[feats].copy()).fit()\n AIC_scores.append(tmp_reg.aic)\n parameters.append(tmp_reg.params)\n\n \nMin_AIC_index = AIC_scores.index(min(AIC_scores))\nMin_AIC_feats = power_feats[Min_AIC_index] \nMin_AIC_params = parameters[Min_AIC_index]\nprint(Min_AIC_feats)\nprint(Min_AIC_params) \n```\n\n ['tby', 'ffr', 'fta', 'eps', 'div', 'une', 'wti', 'ppi', 'rfs', 'fbsp_tby', 'fbsp_ffr', 'fbsp_div', 'eps_tby', 'eps_ffr', 'eps_div']\n tby 99.694834\n ffr 298.736913\n fta -0.000051\n eps -9.542566\n div -575.728680\n une -57.231481\n wti 2.970708\n ppi -8.724101\n rfs 0.008933\n fbsp_tby -0.099985\n fbsp_ffr -0.269213\n fbsp_div -0.276994\n eps_tby 0.947387\n eps_ffr 2.839324\n eps_div 6.401360\n dtype: float64\n\n\n\n```python\nlen(Min_AIC_feats)\n```\n\n\n\n\n 15\n\n\n\n\n```python\n###After selecting the best features, we report the testing error, and make the plot 
\nAIC_df_test = df_test[Min_AIC_feats]\nAIC_pred_test = AIC_df_test.dot(Min_AIC_params)+df_test.fbsp\n\nAIC_df_train = df_train[Min_AIC_feats]\nAIC_pred_train = AIC_df_train.dot(Min_AIC_params)+ df_train.fbsp\n\n\n```\n\n\n```python\nfrom sklearn.metrics import mean_squared_error as MSE\n\nmse_train = MSE(df_train.y, AIC_pred_train) \nmse_test = MSE(df_test.y, AIC_pred_test)\n\n\n#compare with fbprophet()\n\nfb_mse_train = MSE(df_train.y, df_train.fbsp) \nfb_mse_test = MSE(df_test.y, df_test.fbsp)\n\n\nprint(mse_train,fb_mse_train)\n\nprint(mse_test,fb_mse_test)\n```\n\n 2946.8595927656197 18713.80955029442\n 325321.3852693186 98769.80247438994\n\n\n\n```python\ndf_train.ds\n```\n\n\n\n\n 0 2010-11-16\n 1 2010-11-17\n 2 2010-11-18\n 3 2010-11-19\n 4 2010-11-22\n ... \n 2295 2020-01-22\n 2296 2020-01-23\n 2297 2020-01-24\n 2298 2020-01-27\n 2299 2020-01-28\n Name: ds, Length: 2300, dtype: datetime64[ns]\n\n\n\n\n```python\nplt.figure(figsize=(18,10))\n\n# plot the training data\nplt.plot(df_train.ds,df_train.y,'b',\n label = \"Training Data\")\n\nplt.plot(df_train.ds, AIC_pred_train,'r-',\n label = \"Improved Fitted Values by Best_AIC\")\n\n# # plot the fit\nplt.plot(df_train.ds, df_train.fbsp,'g-',\n label = \"FB Fitted Values\")\n\n# # plot the forecast\nplt.plot(df_test.ds, df_test.fbsp,'g--',\n label = \"FB Forecast\")\nplt.plot(df_test.ds, AIC_pred_test,'r--',\n label = \"Improved Forecast by Best_AIC\")\nplt.plot(df_test.ds,df_test.y,'b--',\n label = \"Test Data\")\n\nplt.legend(fontsize=14)\n\nplt.xlabel(\"Date\", fontsize=16)\nplt.ylabel(\"SP&500 Close Price\", fontsize=16)\n\nplt.show()\n```\n\n\n```python\nplt.figure(figsize=(18,10))\nplt.plot(df_test.y,label=\"Training Data\")\nplt.plot(df_test.fbsp,label=\"FB Forecast\")\nplt.plot(AIC_pred_test,label=\"Improved Forecast by Best_AIC\")\nplt.legend(fontsize = 14)\nplt.show()\n```\n\n\n```python\n\n```\n\n\n```python\ncolumn = ['tby', 'ffr', 'fta', 'eps', 'div', 'une',\n 'wti', 'ppi', 'rfs', 'fbsp_tby', 'fbsp_ffr', 'fbsp_div', 'eps_tby', 'eps_ffr', 'eps_div']\n```\n\n\n```python\nfrom sklearn import preprocessing\ndf1_train = df_train[['diff', 'tby', 'ffr', 'fta', 'eps', 'div', 'une', 'wti', 'ppi', 'rfs', 'fbsp_tby', 'fbsp_ffr', 'fbsp_div', 'eps_tby', 'eps_ffr', 'eps_div']]\n\nX = preprocessing.scale(df1_train)\nfrom statsmodels.regression.linear_model import OLS\n\nreg_new = OLS((X[:,0]).copy(),X[:,1:].copy()).fit()\nprint(reg_new.params)\n```\n\n [ 0.65150054 1.70457239 -0.1573802 -0.18007979 -0.15221931 -0.62326075\n 0.45065894 -0.38972706 2.87210843 -1.17604495 -4.92858316 -2.15459111\n 0.11418468 2.74829778 0.55520382]\n\n\n\n```python\n# Before Covid\n# pd.Series(reg_new.params, index=['tby', 'ffr', 'fta', 'eps', 'div', 'une',\n# 'wti', 'ppi', 'rfs', 'fbsp_tby', 'fbsp_ffr', 'fbsp_div', 'eps_tby', 'eps_ffr', 'eps_div'] )\n```\n\n\n```python\n# before covid\ncoef1 = [ 1.50405129, 1.03228322, 0.27409454, 1.17073571, 0.31243092,\n -0.75747342, 0.46988206, -0.39944639, 2.10369448, -0.69112943,\n -2.1804296 , -2.38576385, -1.14196633, 1.41832903, -0.34501927]\n# include covid\ncoef2 = [ 0.65150054, 1.70457239, -0.1573802 , -0.18007979, -0.15221931,\n -0.62326075, 0.45065894, -0.38972706, 2.87210843, -1.17604495,\n -4.92858316, -2.15459111, 0.11418468, 2.74829778, 0.55520382]\n```\n\n\n```python\n# Include Covid\n# pd.Series( np.append( ['coefficients (before covid)'], np.round(coef1,3)), index= np.append(['features'], column) ) \n \n```\n\n\n```python\nindex1 = ['10 Year U.S Treasury Bond Yield Rates (tby)', 'Federal Funds Rates 
(ffr)',\n 'Federal Total Assets (fta)', 'Earning-Per-Share of S&P 500 (eps)', 'Dividend Yield of S&P 500 (div)',\n 'Unemployment Rates (une) ', 'West Texas Intermediate oil index (wit)', 'Producer Price Index (ppi)',\n 'Retail and Food Services Sales (rfs)', \n 'fbsp_tby', 'fbsp_ffr', 'fbsp_div', 'eps_tby', 'eps_ffr', 'eps_div'\n ]\n```\n\n\n```python\nlen(index1)\n```\n\n\n\n\n 15\n\n\n\n\n```python\npd.Series(coef2, index =index1)\n```\n\n\n\n\n 10 Year U.S Treasury Bond Yield Rates (tby) 0.651501\n Federal Funds Rates (ffr) 1.704572\n Federal Total Assets (fta) -0.157380\n Earning-Per-Share of S&P 500 (eps) -0.180080\n Dividend Yield of S&P 500 (div) -0.152219\n Unemployment Rates (une) -0.623261\n West Texas Intermediate oil index (wit) 0.450659\n Producer Price Index (ppi) -0.389727\n Retail and Food Services Sales (rfs) 2.872108\n fbsp_tby -1.176045\n fbsp_ffr -4.928583\n fbsp_div -2.154591\n eps_tby 0.114185\n eps_ffr 2.748298\n eps_div 0.555204\n dtype: float64\n\n\n\n\n```python\ndf3 = pd.DataFrame(coef1, index = index1, columns = ['coefficients (before covid)'])\ndf3['coefficients (include covid)'] =pd.Series(coef2, index =index1)\ndf3\n```\n\n\n\n\n
                                                 coefficients (before covid)  coefficients (include covid)
    10 Year U.S Treasury Bond Yield Rates (tby)                     1.504051                      0.651501
    Federal Funds Rates (ffr)                                       1.032283                      1.704572
    Federal Total Assets (fta)                                      0.274095                     -0.157380
    Earning-Per-Share of S&P 500 (eps)                              1.170736                     -0.180080
    Dividend Yield of S&P 500 (div)                                 0.312431                     -0.152219
    Unemployment Rates (une)                                       -0.757473                     -0.623261
    West Texas Intermediate oil index (wit)                         0.469882                      0.450659
    Producer Price Index (ppi)                                     -0.399446                     -0.389727
    Retail and Food Services Sales (rfs)                            2.103694                      2.872108
    fbsp_tby                                                       -0.691129                     -1.176045
    fbsp_ffr                                                       -2.180430                     -4.928583
    fbsp_div                                                       -2.385764                     -2.154591
    eps_tby                                                        -1.141966                      0.114185
    eps_ffr                                                         1.418329                      2.748298
    eps_div                                                        -0.345019                      0.555204
    \n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "af518fd6a0c282dd2c409d09a09e2fd7b30d36cc", "size": 229768, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Code/Scenario2 including covid period.ipynb", "max_stars_repo_name": "thinkhow/Market-Prediction-with-Macroeconomics-features", "max_stars_repo_head_hexsha": "feac711017739ea6ffe46a7fcac6b4b0c265e0b5", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Code/Scenario2 including covid period.ipynb", "max_issues_repo_name": "thinkhow/Market-Prediction-with-Macroeconomics-features", "max_issues_repo_head_hexsha": "feac711017739ea6ffe46a7fcac6b4b0c265e0b5", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-05-24T00:26:34.000Z", "max_issues_repo_issues_event_max_datetime": "2021-05-24T00:26:34.000Z", "max_forks_repo_path": "Code/Scenario2 including covid period.ipynb", "max_forks_repo_name": "shelbycox/Stock-Erdos", "max_forks_repo_head_hexsha": "b0b3bb3e14ea8ad9453b22637293bfa9a781c749", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 267.4831199069, "max_line_length": 115672, "alphanum_fraction": 0.9042425403, "converted": true, "num_tokens": 5525, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9241418241572634, "lm_q2_score": 0.8933094010836642, "lm_q1q2_score": 0.8255445794542899}} {"text": "```python\nimport matplotlib.pyplot as plt\nimport numpy as np\n```\n\n# Motivation\n\nDuring my master studies working through assignments in probability theory I spend a lot of time struggling with a task, which seemed to be very easy at the first glance. Here it is:\n\n
\nGiven two independent uniformly distributed random variables $X$ and $Y$ (i.e. $X ⊥ Y$ and $X, Y \\sim U([0,1])$), determine the probability density function $p(z)$ of $Z=X+Y$.\n</div>
\n\nAfter a quick web search I found the theoretical tool required to accomplish the task - [Convolution of probability distributions](https://en.wikipedia.org/wiki/Convolution_of_probability_distributions). It states:\n\n<div class=\"alert alert-block alert-info\">
    \nThe general formula for the distribution of the sum $Z=X+Y$ of two independent integer-valued (and hence discrete) random variables is\n \n\\begin{equation}\nP(Z=z) = \\sum_{k=-\\infty}^\\infty P(X=k)P(Y=z-k)\n\\end{equation}\n\nThe counterpart for independent continuously distributed random variables with density functions $f, g$ is\n\n\\begin{equation}\nh(z)=(f*g)(z)=\\int_{-\\infty}^\\infty f(z-t)g(t) dt = \\int_{-\\infty}^\\infty f(t)g(z-t) dt\n\\end{equation}\n
    \n\nMy first thought was \"oh, I saw those integrals back in control theory lectures years ago\" - but years went by and I had no idea anymore how the integral \n\n\\begin{equation}\n(f*g)(z) = \\int_{-\\infty}^\\infty f(t)g(z-t) dt\n\\end{equation}\n\nis calculated - there are some fancy visualitions in wikipedia like this one, but for me it was hard to decipher the meaning of the different variables within the equation.\n\nThe goal of this blogpost is to go step by step over the basics - in the end you should feel comfortable with convolutions plus you could calculate them in Python, too.\n\nEnjoy!\n\n# Basics\n\n## Altering fuction graphs\n\nYou probably had this chapter in school, teachers do that usually with quadratic functions. If you know what happens for function $f$ if you alter $f(x)$ to $f(-x+3)$ feel free to skip this section.\n\nThe motivation of all this is to understand the term $g(z-x)$ which is changed from $g(x)$. Note that, I replaced the $t$ with $x$ in order not to confuse the reader with $t$ as time.\n\nLet's define two simple functions to work through this tutorial. The first will be a simple quadratic function \n\n\\begin{equation}\nf_1(x)=x^2\n\\end{equation}\n\nAs I want to solve my task, too - we will have a uniform distribution $U$ as our second function\n\n\\begin{equation}\nf_2(x)=\\begin{cases}\n \\frac{1}{b - a} & \\mathrm{for}\\ a \\le x \\le b, \\\\[8pt]\n 0 & \\mathrm{for}\\ xb\n\\end{cases}\n\\end{equation}\n\nLet's implement them in Python:\n\n\n```python\ndef f1(x):\n return x**2\n```\n\n\n```python\ndef f2(x, a, b):\n const = 1/(b-a)\n if a <= x and x <= b:\n return const\n else:\n return 0\n```\n\n\n```python\nx_vec = np.linspace(-10, 10, 10000)\nf1_vec = f1(x_vec)\nf2_vec = np.asarray([f2(x, 0, 1) for x in np.nditer(x_vec)])\n\nplt.figure(figsize=(15,5))\nplt.plot(x_vec, f1_vec, label=\"$f_1(x)=x^2$\")\nplt.plot(x_vec, f2_vec, label=\"$f_2(x)=U([0, 1])(x)$\")\nplt.xlim([-4, 4])\nplt.ylim([-0.1, 2])\nplt.grid()\nplt.legend()\nplt.xlabel(\"x\")\nplt.ylabel(\"function value\")\nplt.title(\"functions graphs $f_1, f_2$ visualized\")\nplt.savefig(\"001_f1_and_f2.png\", dpi=300)\n```\n\nNow what happens when we add a constant $z$ to the function argument (i.e. $f(x+3)$)? 
Let's see what happens for different values of $z$.\n\n\n```python\nk_array = [-2, 1]\n```\n\n\n```python\nx_vec = np.linspace(-10, 10, 10000)\n\nplt.figure(figsize=(15,5))\nplt.plot(x_vec, f1(x_vec), label=\"$f_1(x)$\")\nfor k in k_array:\n label_text = \"$f_1(x%+d)$ with z=%d\"%(k,k)\n plt.plot(x_vec, f1(x_vec+k), label=label_text, alpha=0.8, ls=\"--\")\nplt.xlim([-4, 4])\nplt.ylim([-0.1, 2])\nplt.grid()\nplt.legend()\nplt.xlabel(\"x\")\nplt.ylabel(\"function value\")\nplt.title(\"Altering $f_1$ by adding a constant $z$ to the function argument $f(x+z)$\")\nplt.savefig(\"002_f1_shift_z.png\", dpi=300)\n```\n\n\n```python\nx_vec = np.linspace(-10, 10, 10000)\nf2_vec = np.asarray([f2(x, 0, 1) for x in np.nditer(x_vec)])\n\nplt.figure(figsize=(15,5))\nplt.plot(x_vec, f2_vec, label=\"$f_2(x)$\")\nfor k in k_array:\n label_text = \"$f_2(x%+d)$ with z=%d\"%(k,k)\n f2_vec = np.asarray([f2(x, 0-k, 1-k) for x in np.nditer(x_vec)])\n plt.plot(x_vec, f2_vec, label=label_text, alpha=0.8, ls=\"--\")\nplt.xlim([-4, 4])\nplt.ylim([-0.1, 2])\nplt.grid()\nplt.legend()\nplt.xlabel(\"x\")\nplt.ylabel(\"function value\")\nplt.title(\"Altering $f_2$ by adding a constant $z$ to the function argument $f(x+z)$\")\nplt.savefig(\"003_f2_shift_z.png\", dpi=300)\n```\n\nAdding a positive number $z$, we shift the graph to the left, with a negative $z$ to the right.\n\nWhat happens if we multiply $-1$ to the function argument $x$?\n\n\n```python\nx_vec = np.linspace(-10, 10, 10000)\n\nplt.figure(figsize=(15,5))\nplt.plot(x_vec, f1(x_vec), label=\"$f_1(x)$\")\n\nlabel_text = \"$f_1(-x)$\"\nplt.plot(x_vec, f1(-x_vec), label=label_text, alpha=0.8, ls=\"--\")\nplt.xlim([-4, 4])\nplt.ylim([-0.1, 2])\nplt.axvline(0, color=\"k\", linewidth=0.7)\nplt.grid()\nplt.legend()\nplt.xlabel(\"x\")\nplt.ylabel(\"function value\")\nplt.title(\"Altering $f_1$ by inverting the sign of the function argument $f(-x)$\")\nplt.savefig(\"004_f1_invert_x.png\", dpi=300)\n```\n\n\n```python\nx_vec = np.linspace(-10, 10, 10000)\nf2_vec = np.asarray([f2(x, 0, 1) for x in np.nditer(x_vec)])\n\nplt.figure(figsize=(15,5))\nplt.plot(x_vec, f2_vec, label=\"$f_2(x)$\")\n\nlabel_text = \"$f_2(-x)$\"\nf2_vec_minus = np.asarray([f2(x, -1, 0) for x in np.nditer(x_vec)])\nplt.plot(x_vec, f2_vec_minus, label=label_text, alpha=0.8, ls=\"--\")\nplt.xlim([-4, 4])\nplt.ylim([-0.1, 2])\nplt.grid()\nplt.legend()\nplt.axvline(0, color=\"k\", linewidth=0.7)\nplt.xlabel(\"x\")\nplt.ylabel(\"function value\")\nplt.title(\"Altering $f_2$ by inverting the sign of the function argument $f(-x)$\")\nplt.savefig(\"005_f2_invert_x.png\", dpi=300)\n```\n\nApperently, this change \"mirrors\" the function graph on the y-axis.\n\nNow we can understand the tranformation $g(t)$ to $g(z-t)$. 
It is adding a constant $z$ and multiplying $-1$ to $t$.\n\n\n```python\nx_vec = np.linspace(-10, 10, 10000)\nf1_vec = f1(-x_vec+2)\nf2_vec = np.asarray([f2(-x, 0-2, 1-2) for x in np.nditer(x_vec)])\n\nplt.figure(figsize=(15,5))\n\nf1_vec = f1(x_vec)\nf2_vec = np.asarray([f2(x, 0, 1) for x in np.nditer(x_vec)])\nplt.plot(x_vec, f1_vec, label=\"$f_1(x)$\")\nplt.plot(x_vec, f2_vec, label=\"$f_2(x)$\")\n\n\"\"\"\nf1_vec = f1(x_vec+2)\nf2_vec = np.asarray([f2(x, 0-2, 1-2) for x in np.nditer(x_vec)])\nplt.plot(x_vec, f1_vec, label=\"$f_1(2+x)$\", color=\"blue\", ls=\"-.\")\nplt.plot(x_vec, f2_vec, label=\"$f_2(2+x)$\", color=\"orange\", ls=\"-.\")\n\"\"\"\nf1_vec = f1(-x_vec+2)\nf2_vec = np.asarray([f2(-x, 0-2, 1-2) for x in np.nditer(x_vec)])\nplt.plot(x_vec, f1_vec, label=\"$f_1(2-x)$\", color=\"blue\", ls=\"--\")\nplt.plot(x_vec, f2_vec, label=\"$f_2(2-x)$\", color=\"orange\", ls=\"--\")\nplt.xlim([-4, 4])\nplt.ylim([-0.1, 2])\nplt.grid()\nplt.legend()\nplt.xlabel(\"x\")\nplt.ylabel(\"function value\")\nplt.title(\"functions graphs $f_1, f_2$ altered to $f(z-x)$\")\nplt.savefig(\"006_f1_f2_altered.png\", dpi=300)\n```\n\n## Multiplying two functions\n\nLet's continue to work through the convolution calculation formula - our next task is to undestand the product of two functions (or in my case density distributions):\n\n\\begin{equation}\nf(x)g(z-x)\n\\end{equation}\n\nWhat happens if we multiply them? \n\nLet's define a proxy function $a$ as a product of $f_2(x)$ and $f_2(0.5-x)$ and visualize it\n\n\\begin{equation}\na(x) = f_2(x)f_2(0.5-x)\n\\end{equation}\n\n\n```python\nx_vec = np.linspace(-10, 10, 10000)\nf2_vec = np.asarray([f2(x, 0, 1) for x in np.nditer(x_vec)])\n\nplt.figure(figsize=(15,5))\nplt.plot(x_vec, f2_vec, label=\"$f_2(x)$\", linewidth=0.9)\n\nlabel_text = \"$f_2(0.5-x)$\"\nf2_vec2 = np.asarray([f2(-x, -1.5, -0.5) for x in np.nditer(x_vec)])\nplt.plot(x_vec, f2_vec2, label=label_text, linewidth=0.9)\n\nh_t = np.multiply(f2_vec, f2_vec2)\nplt.plot(x_vec, h_t, label=\"$a(x)$\", alpha=0.6, ls=\"--\", color=\"k\")\nplt.xlim([-1, 4])\nplt.ylim([-0.1, 1.5])\nplt.grid()\nplt.legend()\nplt.axvline(0, color=\"k\", linewidth=0.7)\nplt.xlabel(\"x\")\nplt.ylabel(\"function value\")\nplt.title(\"Product $a(t)$ of two functions $f_2(x)f_2(0.5-x)$\")\nplt.savefig(\"007_a_x_f1_f2_multiplied.png\", dpi=300)\n```\n\nThe black dotted graph is the product we we integrate on in the next step.\n\n## Integral\n\nAn integral is the area between the function graph and the x-axis. 
This is the last part of our convolution equation\n\n\\begin{equation}\n\\int_{-\\infty}^\\infty a(x) dx = \\int_{-\\infty}^\\infty f(x)g(z-x) dx\n\\end{equation}\n\n\n```python\nf2_vec = np.asarray([f2(x, 0, 1) for x in np.nditer(x_vec)])\nf2_vec2 = np.asarray([f2(-x, -1.5, -0.5) for x in np.nditer(x_vec)])\nh_t = np.multiply(f2_vec, f2_vec2)\n\nplt.figure(figsize=(15,5))\nplt.plot(x_vec, h_t, label=\"$a(x)$\", ls=\"-\")\n\nplt.fill_between(x_vec, 0, h_t, hatch=\"/\", facecolor='yellow', label=\"$\\int_{-\\infty}^\\infty a(x) dx = 0.5$\")\nplt.xlim([-1, 4])\nplt.ylim([-0.1, 1.5])\nplt.grid()\nplt.legend()\nplt.axvline(0, color=\"k\", linewidth=0.7)\nplt.xlabel(\"x\")\nplt.ylabel(\"function value\")\nplt.title(\"Integral $\\int_{-\\infty}^\\infty a(x) dx$\")\nplt.savefig(\"008_a_x_integrated.png\", dpi=300)\n```\n\n# Building blocks in one picture\n\n## Principle\n\nNow we are ready to understand the convolution equation\n\n\\begin{equation}\nh(z) = (f*g)(z) = \\int_{-\\infty}^\\infty f(x)g(z-x) dx\n\\end{equation}\n\nIn order to calculate $h(z)$ for a particular $z$ we have to do following steps:\n\n1. Shift function $g(x)$ to the left $z$, i.e. $g(x+z)$\n2. Mirror the result relative to the y-axis, i.e. $g(-x+z)=g(z-x)$\n3. Calculate the product of $f(x)g(z-x)=a(x)$\n4. Calculate the infinite integral over $a(x) = i_z$\n5. The result of the convolution at particulat position $z$ is $i_z$, i.e. we calculated $h(z)=\\int_{-\\infty}^\\infty f(x)g(z-x) dx$.\n\n## Back to homework\n\nNow I believe we can solve our random variable sum assignment $Z=X+Y$:\n\n\\begin{equation}\np_Z(Z=z)=\\int_{-\\infty}^\\infty p_X(X=(z-x))p_Y(Y=x) dx\n\\end{equation}\n\nLet's apply our algorithm developed above and see what will be the result.\n\n\n```python\nfrom scipy.integrate import trapz\n```\n\n\n```python\ndef p_Z(z):\n x_vec = np.linspace(-5, 5, 1000)\n px_vec = np.asarray([f2(x, 0, 1) for x in np.nditer(x_vec)])\n py_vec_altered = np.asarray([f2(-x, 0-z, 1-z) for x in np.nditer(x_vec)])\n a_x = np.multiply(px_vec, py_vec_altered)\n return trapz(a_x, x_vec)\n```\n\n\n```python\nz_vec = np.linspace(-1, 3, 200)\npz_vec= np.asarray([p_Z(x) for x in np.nditer(z_vec)])\n```\n\n\n```python\nplt.figure(figsize=(15,5))\nplt.plot(z_vec, pz_vec, label=\"$p_Z(Z=z)$\", alpha=0.6, ls=\"--\", color=\"k\")\nplt.xlim([-1, 3])\nplt.ylim([-0.1, 1.5])\nplt.grid()\nplt.legend()\n#plt.axvline(0, color=\"k\", linewidth=0.7)\nplt.xlabel(\"z\")\nplt.ylabel(\"function value\")\nplt.title(\"Probability density of $Z=Y+X$ with $X \u22a5 Y, X,Y \\sim U([0,1 ])$\")\nplt.savefig(\"009_px_py_convolution.png\", dpi=300)\n```\n\nNice! We can see that the sum of two equally distributed random variables will lead to a triangle formed probability density! 
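For reference, the convolution integral can also be evaluated by hand for this case and yields exactly this triangle:

\begin{equation}
p_Z(z)=\begin{cases}
 z & \mathrm{for}\ 0 \le z \le 1, \\[8pt]
 2-z & \mathrm{for}\ 1 < z \le 2, \\[8pt]
 0 & \mathrm{otherwise}
\end{cases}
\end{equation}

so the numerically computed density agrees with the closed-form result.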
It also sounds natural because the expected value of $X$ and $Y$ are both $0.5$ - so the expected value of $Z$ (the focal point of the density) will be at $z=1$!\n\n\n```python\n\n```\n", "meta": {"hexsha": "db442e236572e54278e32217f053e0e15345172d", "size": 288425, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "convolution.ipynb", "max_stars_repo_name": "kopytjuk/didactic-convolution", "max_stars_repo_head_hexsha": "6d03777ce7c7d9f0efbedf3d1481bf30fcbf5d4d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "convolution.ipynb", "max_issues_repo_name": "kopytjuk/didactic-convolution", "max_issues_repo_head_hexsha": "6d03777ce7c7d9f0efbedf3d1481bf30fcbf5d4d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "convolution.ipynb", "max_forks_repo_name": "kopytjuk/didactic-convolution", "max_forks_repo_head_hexsha": "6d03777ce7c7d9f0efbedf3d1481bf30fcbf5d4d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 451.3693270736, "max_line_length": 49316, "alphanum_fraction": 0.9380670885, "converted": true, "num_tokens": 3705, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9241418116217418, "lm_q2_score": 0.8933094025038598, "lm_q1q2_score": 0.8255445695686527}} {"text": "# ML-Fundamentals - Lineare Regression - Exercise: Multivariate Linear Regression\n\n## Table of Contents\n* [Introduction](#Introduction)\n* [Requirements](#Requirements) \n * [Knowledge](#Knowledge) \n * [Modules](#Python-Modules)\n* [Exercises - Multivariate Linear Regression](#Exercises---Multivariate-Linear-Regression)\n * [Create Features](#Create-Features)\n * [Linear Hypothesis](#Linear-Hypothesis)\n * [Generate Target Values](#Generate-Target-Values)\n * [Plot The Data](#Plot-The-Data)\n * [Cost Function](#Cost-Function)\n * [Gradient Descent](#Gradient-Descent)\n * [Training and Evaluation](#Training-and-Evaluation)\n * [Feature Scaling](#Feature-Scaling)\n* [Summary and Outlook](#Summary-and-Outlook)\n* [Literature](#Literature) \n* [Licenses](#Licenses)\n\n## Introduction\n\nIn this exercise you will implement the _multivariate linear regression_, a model with two or more predictors and one response variable (opposed to one predictor using univariate linear regression). The whole exercise consists of the following steps:\n\n1. Generate values for two predictors/features $(x_1, x_2)$\n2. Implement a linear function as hypothesis (model) \n3. Generate values for the response (Y / target values)\n4. Plot the $((x_1, x_2), y)$ values in a 3D plot.\n5. Write a function to quantify your model (cost function)\n6. Implement the gradient descent algorithm to train your model (optimizer) \n7. Visualize your training process and results\n8. 
Apply feature scaling (pen & paper)\n\n## Requirements\n### Knowledge\n\nYou should have a basic knowledge of:\n- Univariate linear regression\n- Multivariate linear regression\n- Squared error\n- Gradient descent\n- numpy\n- matplotlib\n\nSuitable sources for acquiring this knowledge are:\n- [Multivariate Linear Regression Notebook](http://christianherta.de/lehre/dataScience/machineLearning/basics/multivariate_linear_regression.php) by Christian Herta and his [lecture slides](http://christianherta.de/lehre/dataScience/machineLearning/multivariateLinearRegression.pdf) (German)\n- Chapter 2 of the open classroom [Machine Learning](http://openclassroom.stanford.edu/MainFolder/CoursePage.php?course=MachineLearning) by Andrew Ng\n- Chapter 5.1 of [Deep Learning](http://www.deeplearningbook.org/contents/ml.html) by Ian Goodfellow \n- Some parts of chapter 1 and 3 of [Pattern Recognition and Machine Learning](https://www.microsoft.com/en-us/research/people/cmbishop/#!prml-book) by Christopher M. Bishop\n- [numpy quickstart](https://docs.scipy.org/doc/numpy-1.15.1/user/quickstart.html)\n- [Matplotlib tutorials](https://matplotlib.org/tutorials/index.html)\n\n### Python Modules\n\nBy [deep.TEACHING](https://www.deep-teaching.org/) convention, all python modules needed to run the notebook are loaded centrally at the beginning. \n\n\n\n```python\n# External Modules\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d.axes3d import Axes3D\nfrom matplotlib import cm\n\n%matplotlib notebook\n```\n\n## Exercise - Multivariate Linear Regression\n\nWe will only use two features in this notebook, so we are still able to plot them together with the target in a 3D plot. But your implementation should also be capable of handling more (except the plots). \n\n### Create Features\n\nFirst we will create some features. The features should be in a 2D numpy array, the rows separating the different feature vectors (training examples), the columns containing the features. 
Each feature should be **uniformly** distributed in a specifiable range.\n\n**Task:**\n\nImplement the function to generate a feature matrix (numpy array).\n\n\n```python\ndef create_feature_matrix(sample_size, n_features, x_min, x_max):\n '''creates random feature vectors based on a lienar function in a given interval\n \n Args:\n sample_size: number feature vectors\n n_features: number of features for each vector\n x_min: lower bound value ranges\n x_max: upper bound value ranges\n \n Returns:\n x: 2D array containing feature vecotrs with shape (sample_size, n_features)\n '''\n raise NotImplementedError(\"You should implement this!\")\n```\n\n\n```python\n# Solution\n\ndef create_feature_matrix(sample_size, n_features, x_min, x_max):\n '''creates random feature vectors based on a lienar function in a given interval\n \n Args:\n sample_size: number feature vectors\n n_features: number of features for each vector\n x_min: lower bound value ranges\n x_max: upper bound value ranges\n \n Returns:\n x: 2D array containing feature vecotrs with shape (sample_size, n_features)\n '''\n \n return np.random.uniform(x_min, x_max, (sample_size, n_features))\n```\n\n\n```python\nsample_size = 100\nn_features = 2\nx_min = [1.5, -0.5]\nx_max = [11., 5.0]\n\nX = create_feature_matrix(sample_size, n_features, x_min, x_max)\nX\n```\n\n\n\n\n array([[ 2.55343578, 4.09882663],\n [ 9.14676162, 1.75409426],\n [ 6.87061128, 3.088036 ],\n [ 3.31754536, -0.10581374],\n [10.04676143, 4.91327452],\n [ 9.53248102, -0.21669548],\n [ 4.74762432, 2.11592889],\n [ 3.04610114, 0.66861665],\n [ 8.99810878, 4.10601558],\n [ 2.03046397, 1.35255697],\n [ 4.6004498 , 1.01205624],\n [ 5.2053511 , 4.85018639],\n [ 8.92142801, 3.40039974],\n [ 8.41672928, 3.66919872],\n [ 5.01825985, 2.51420676],\n [ 9.08429314, 1.04931984],\n [ 2.15416736, 0.5268242 ],\n [ 6.2284124 , 4.54399381],\n [ 8.10072993, 1.84264001],\n [ 3.71944741, 0.94380792],\n [ 4.35883408, 1.26337124],\n [ 4.43690719, 0.99997779],\n [ 2.51908645, 0.34135765],\n [ 4.64194197, 1.37604907],\n [ 5.56066631, 4.24084101],\n [ 3.31660389, 3.4357965 ],\n [ 7.46692545, -0.13872683],\n [ 8.67304278, 1.53969574],\n [ 2.25378335, 2.1360613 ],\n [ 2.6540387 , 1.03259774],\n [ 5.02662661, 2.87918316],\n [ 4.12726842, 4.23028426],\n [ 5.68731577, 2.70034396],\n [10.74272894, -0.16190299],\n [ 5.57938986, 3.30598652],\n [ 9.26774676, 3.15054548],\n [ 4.23662404, 4.93963246],\n [ 3.84702751, 0.70003255],\n [ 7.23072073, 2.14386805],\n [ 1.57051795, 3.70768626],\n [ 5.40511937, 3.34361318],\n [ 1.86679871, 4.50691358],\n [ 2.64773238, 3.48214973],\n [ 5.29444443, 1.06644367],\n [ 8.15360051, 1.91813851],\n [ 1.58028104, 3.52942058],\n [ 5.57410943, 0.36621873],\n [ 3.51710302, 4.05378562],\n [ 1.9668387 , 3.08308516],\n [ 5.59368553, 0.48926087],\n [ 4.72382896, 1.27217418],\n [ 4.75903772, 4.02492114],\n [10.32314113, 3.45291417],\n [ 8.3607832 , 1.41331422],\n [ 5.02277249, -0.17925784],\n [ 4.96284139, 3.16654885],\n [ 5.25042165, 4.12198925],\n [ 3.87229428, 2.14576293],\n [ 9.53374907, -0.05896827],\n [ 8.15028007, 3.9327573 ],\n [ 9.56958557, 2.32766841],\n [ 7.64470758, 1.88364085],\n [ 9.19808575, 2.63258445],\n [ 8.54669469, 2.58779554],\n [ 2.0067687 , 3.09877182],\n [ 3.62733662, 0.25622579],\n [10.05874243, 0.19438902],\n [ 8.14099486, 4.01139473],\n [ 7.21657038, 2.4667673 ],\n [ 8.1279165 , 0.24171846],\n [ 1.85488792, 2.11330979],\n [ 6.03449596, 3.593834 ],\n [ 3.28212156, -0.2277349 ],\n [ 7.08842704, 2.76401252],\n [ 8.22070264, 3.87316012],\n [ 7.86751765, 
2.55121306],\n [ 3.23420664, 4.64625445],\n [ 2.26114604, 3.59172898],\n [ 9.69223623, 1.5591381 ],\n [ 8.4105376 , 4.45931284],\n [ 2.22748906, 2.79878555],\n [ 7.32528596, 2.07677038],\n [10.36156224, 3.67408725],\n [ 6.55661923, -0.46412987],\n [ 3.9209253 , -0.47884172],\n [ 8.8067252 , 4.58892015],\n [ 4.13219722, -0.30789565],\n [10.96105526, 2.83739255],\n [ 6.83442425, 2.91034924],\n [ 9.49990468, -0.17566033],\n [ 6.95625609, 1.48659351],\n [ 4.37699785, 1.08344818],\n [ 2.00884202, 0.23834419],\n [ 3.33201506, 2.12857654],\n [ 1.50306421, 0.71705611],\n [ 9.46405326, 1.78328114],\n [ 9.61235747, 1.24012355],\n [ 1.96521893, 0.76458039],\n [ 6.35091656, 4.27638952],\n [ 6.44543344, 2.78178998]])\n\n\n\n\n```python\nassert len(X[:,0]) == sample_size\nassert len(X[0,:]) == n_features\nfor i in range(n_features):\n assert np.max(X[:,i]) <= x_max[i]\n assert np.min(X[:,i]) >= x_min[i]\n```\n\n### Linear Hypothesis\n\n\nA short recap, a hypothesis $h_\\theta({\\bf x})$ is a certain function that we believe is similar to a target function that we like to model. A hypothesis $h_\\theta({\\bf x})$ is a function of ${\\bf x}$ with fixed parameters $\\Theta$. \n\nHere we have $n$ features ${\\bf x} = \\{x_1, \\ldots, x_n \\}$ and $n+1$ parameters $\\Theta = \\{\\theta_0, \\theta_1 \\ldots, \\theta_n \\}$:\n\n$$\nh_\\theta({\\bf x}) = \\theta_{0} + \\theta_{1} x_1 + \\ldots \\theta_n x_n \n$$\n\nadding an extra element to $\\vec x$ for convenience, this could also be rewritten as:\n\n$$\nh_\\theta({\\bf x}) = \\theta_{0} x_0 + \\theta_{1} x_1 + \\ldots \\theta_n x_n \n$$\n\nwith $x_0 = 1$ for all feature vectors (training examples).\n\nOr treating ${\\bf x}$ and $\\Theta$ as vectors:\n\n$$\nh(\\vec x) = \\vec x'^T \\vec \\theta\n$$\n\nwith:\n\n$$\n\\vec x = \\begin{pmatrix} \nx_1 & x_2 & \\ldots & x_n \\\\\n\\end{pmatrix}^T\n\\text{ and }\n\\vec x' = \\begin{pmatrix} \n1 & x_1 & x_2 & \\ldots & x_n \\\\\n\\end{pmatrix}^T\n$$\n\nand\n\n$$\n\\vec \\theta = \\begin{pmatrix} \n\\theta_0 & \\theta_1 & \\ldots & \\theta_n \\\\\n\\end{pmatrix}^T\n$$\n\nOr for the whole data set at once: The rows in $X$ separate the different feature vectors, the columns contain the features. \n\n$$\n\\vec h_\\Theta(X) = X' \\cdot \\vec \\theta\n$$\n\nthe vector $\\vec h(X) = \\left( h(\\vec x^{(1)}),h(\\vec x^{(2)}), \\dots, h(\\vec x^{(m)}) )\\right)^T$ contains all predictions for the data batch $X$.\n\nwith:\n\n$$\n\\begin{align}\nX &= \\begin{pmatrix} \nx_1^{(1)} & \\ldots & x_n^{(1)} \\\\\nx_1^{(2)} & \\ldots & x_n^{(2)} \\\\\n\\vdots &\\vdots &\\vdots \\\\\nx_1^{(m)} & \\ldots & x_n^{(m)} \\\\\n\\end{pmatrix}\n&=\n\\begin{pmatrix} \n\\vec x^{(1)T} \\\\\n\\vec x^{(2)T} \\\\\n\\vdots \\\\\n\\vec x^{(m)T} \\\\\n\\end{pmatrix}\n\\end{align}\n$$\nrespectively\n$$\n\\begin{align}\nX' = \\begin{pmatrix} \n1 & x_1^{(1)} & \\ldots & x_n^{(1)} \\\\\n1 & x_1^{(2)} & \\ldots & x_n^{(2)} \\\\\n\\vdots &\\vdots &\\vdots &\\vdots \\\\\n1 & x_1^{(m)} & \\ldots & x_n^{(m)} \\\\\n\\end{pmatrix}\n&=\n\\begin{pmatrix} \n\\vec x'^{(1)T} \\\\\n\\vec x'^{(2)T} \\\\\n\\vdots \\\\\n\\vec x'^{(m)T} \\\\\n\\end{pmatrix}\n\\end{align}\n$$\n\n**Task:**\n\nImplement hypothesis $\\vec h_\\Theta(X)$ in the method `linear_hypothesis` and return it as a function. Implement it the computationally efficient (**pythonic**) way by not using any loops and handling all data at once (use $X$ respectively $X'$).\n\n**Hint:**\n\nOf course you are free to implement as many helper functions as you like, e.g. 
for transforming $X$ to $X'$, though you do not have to. Up to you.\n\n\n```python\ndef linear_hypothesis(thetas):\n ''' Combines given list argument in a linear equation and returns it as a function\n \n Args:\n thetas: list of coefficients\n \n Returns:\n lambda that models a linear function based on thetas and x\n '''\n raise NotImplementedError(\"You should implement this!\")\n```\n\n\n```python\n# Solution\n\ndef linear_hypothesis(thetas):\n ''' Combines given list argument in a linear equation and returns it as a function\n \n Args:\n thetas: list of coefficients\n \n Returns:\n lambda that models a linear function based on thetas and x\n '''\n return lambda x: np.concatenate((np.ones((len(x[:,0]), 1)), x), axis=1).dot(thetas)\n```\n\n\n```python\nassert len(linear_hypothesis([.1,.2,.3])(X)) == sample_size\n```\n\n### Generate Target Values\n\n**Task:**\n\nUse your implemented `linear_hypothesis` inside the next function to generate some target values $Y$. Additionally add some Gaussian noise.\n\n\n```python\ndef generate_targets(X, theta, sigma):\n ''' Combines given arguments in a linear equation with X, \n adds some Gaussian noise and returns the result\n \n Args:\n X: 2D numpy feature matrix\n theta: list of coefficients\n sigma: standard deviation of the gaussian noise\n \n Returns:\n target values for X\n '''\n raise NotImplementedError(\"You should implement this!\")\n```\n\n\n```python\n# Solution\n\ndef generate_targets(X, theta, sigma):\n ''' Combines given arguments in a linear equation with X, \n adds some Gaussian noise and returns the result\n \n Args:\n X: 2D numpy feature matrix\n theta: list of coefficients\n sigma: standard deviation of the gaussian noise\n \n Returns:\n target values for X\n '''\n noise = np.random.randn(len(X)) * sigma\n h = linear_hypothesis(theta)\n y = h(X) + noise\n return y\n```\n\n\n```python\ntheta = (2., 3., -4.)\nsigma = 3.\ny = generate_targets(X, theta, sigma)\n```\n\n\n```python\nassert len(y) == sample_size\n```\n\n### Plot The Data\n\n**Task:**\n\nPlot the data $\\mathcal D = \\{((x^{(1)}_1,x^{(1)}_2)^T,y^{(1)}), \\ldots, ((x^{(n)}_1,x^{(n)}_2)^T,y^{(n)})\\}$ in a 3D scatter plot. The plot should look like the following:\n\n\n\n**Sidenote:**\n\nThe command `%matplotlib notebook` (instead of `%matplotlib inline`) creates an interactive (e.g. rotatable) plot.\n\n\n```python\n%matplotlib notebook\n\ndef plot_data_scatter(features, targets):\n \"\"\" Plots the features and the targets in a 3D scatter plot\n \n Args:\n features: 2D numpy-array features\n targets: ltargets\n \"\"\"\n raise NotImplementedError(\"You should implement this!\")\n```\n\n\n```python\n# Solution\n\n%matplotlib notebook\n\ndef plot_data_scatter(features, targets):\n \"\"\" Plots the features and the targets in a 3D scatter plot\n \n Args:\n features: 2D numpy-array features\n targets: ltargets\n \"\"\"\n fig = plt.figure()\n ax = fig.add_subplot(111, projection='3d')\n ax.scatter(features[:,0], features[:,1], targets, c='r')\n ax.set_xlabel('$x_1$')\n ax.set_ylabel('$x_2$')\n ax.set_zlabel('$y$')\n```\n\n\n```python\nplot_data_scatter(X, y)\n```\n\n\n \n\n\n\n\n\n\n### Cost Function\nA cost function $J$ depends on the given training data $D$ and hypothesis $h_\\theta(\\vec x)$. In the context of the linear regression, the cost function measures how \"wrong\" a model is regarding its ability to estimate the relationship between $\\vec x$ and $y$ for specific $\\Theta$ values. 
Later we will treat this as an optimization problem and try to minimize the cost function $J_{\\mathcal D}(\\Theta)$ to find optimal $\\theta$ values for our hypothesis $h_\\theta(\\vec x)$. The cost function we use in this exercise is the [Mean-Squared-Error](https://en.wikipedia.org/wiki/Mean_squared_error) cost function:\n\n\\begin{equation}\n J_{\\mathcal D}(\\Theta)=\\frac{1}{2m}\\sum_{i=1}^{m}{(h_\\Theta(\\vec x^{(i)})-y^{(i)})^2}\n\\end{equation}\n\nImplement the cost function $J_D(\\Theta)$ in the method `mse_cost_function`. The method should return a function that takes the values of $\\Theta$ as an argument.\n\nAs a sidenote, the terms \"loss function\" or \"error function\" are often used interchangeably in the field of Machine Learning.\n\n\n```python\ndef mse_cost_function(x, y):\n ''' Implements MSE cost function as a function J(theta) on given traning data \n \n Args:\n x: vector of x values \n y: vector of ground truth values y \n \n Returns:\n lambda J(theta) that models the cost function\n '''\n raise NotImplementedError(\"You should implement this!\")\n```\n\n\n```python\n# Solution\n\ndef mse_cost_function(x, y):\n ''' Implements MSE cost function as a function J(theta) on given traning data \n \n Args:\n x: vector of x values \n y: vector of ground truth values y \n \n Returns:\n lambda J(theta) that models the cost function\n '''\n assert(len(x) == len(y))\n m = len(x)\n return lambda theta: 1./(2. * float(m)) * np.sum((linear_hypothesis(theta)(x) - y )**2)\n```\n\nReview the cell in which you generate the target values and note the theta values, which were used for it (If you haven't edited the default values, it should be `[2, 3, -4]`)\n\n**Optional:**\n\nTry a few different values for theta to pass to the cost function - Which thetas result in a low error and which produce a great error?\n\n\n```python\nJ = mse_cost_function(X, y)\nprint(J(theta))\n```\n\n 4.675479225972847\n\n\n### Gradient Descent\n\nA short recap, the gradient descent algorithm is a first-order iterative optimization for finding a minimum of a function. From the current position in a (cost) function, the algorithm steps proportional to the negative of the gradient and repeats this until it reaches a local or global minimum and determines. Stepping proportional means that it does not go entirely in the direction of the negative gradient, but scaled by a fixed value $\\alpha$ also called the learning rate. Implementing the following formalized update rule is the core of the optimization process:\n\n\\begin{equation}\n \\theta_{j}^{new} \\leftarrow \\theta_{j}^{old} - \\alpha * \\frac{\\partial}{\\partial\\theta_{j}} J(\\vec \\theta^{old})\n\\end{equation}\n\n**Task:**\n\nImplement the function to update all theta values.\n\n\n```python\ndef update_theta(x, y, theta, learning_rate):\n ''' Updates learnable parameters theta \n \n The update is done by calculating the partial derivities of \n the cost function including the linear hypothesis. 
The \n    gradients scaled by a scalar are subtracted from the given \n    theta values.\n    \n    Args:\n        x: 2D numpy array of x values\n        y: array of y values corresponding to x\n        theta: current theta values\n        learning_rate: value to scale the negative gradient \n        \n    Returns:\n        theta: Updated theta vector\n    '''\n    raise NotImplementedError(\"You should implement this!\")\n```\n\n\n```python\n# Solution\n\ndef update_theta(x, y, theta, learning_rate):\n    ''' Updates learnable parameters theta \n    \n    The update is done by calculating the partial derivatives of \n    the cost function including the linear hypothesis. The \n    gradients scaled by a scalar are subtracted from the given \n    theta values.\n    \n    Args:\n        x: 2D numpy array of x values\n        y: array of y values corresponding to x\n        theta: current theta values\n        learning_rate: value to scale the negative gradient \n        \n    Returns:\n        theta: Updated theta vector\n    '''\n    m = len(x[:,0])\n    # update rule: subtract the gradient, scaled by the learning rate, from theta\n    x_ = np.concatenate((np.ones([m,1]), x), axis=1)\n    theta = theta - learning_rate / float(m) * (x_.T.dot(linear_hypothesis(theta)(x) - y))\n    return theta\n```\n\nUsing the `update_theta` method, you can now implement the gradient descent algorithm. Iterate over the update rule to find the values for $\\vec \\theta$ that minimize our cost function $J_D(\\vec \\theta)$. This process is often called training of a machine learning model. \n\n**Task:**\n- Implement the function for the gradient descent.\n- Create a history of all theta and cost values and return them.\n\n\n```python\ndef gradient_descent(learning_rate, theta, iterations, x, y):\n    ''' Minimize theta values of a linear model based on MSE cost function\n    \n    Args:\n        learning_rate: scalar, scales the negative gradient \n        theta: initial theta values\n        x: vector, x values from the data set\n        y: vector, y values from the data set\n        iterations: scalar, number of theta updates\n        \n    Returns:\n        history_cost: cost after each iteration\n        history_theta: Updated theta values after each iteration\n    '''\n    raise NotImplementedError(\"You should implement this!\")\n```\n\n\n```python\n# Solution\n\ndef gradient_descent(learning_rate, theta, iterations, x, y):\n    ''' Minimize theta values of a linear model based on MSE cost function\n    \n    Args:\n        learning_rate: scalar, scales the negative gradient \n        theta: initial theta values\n        x: vector, x values from the data set\n        y: vector, y values from the data set\n        iterations: scalar, number of theta updates\n        \n    Returns:\n        history_cost: cost after each iteration\n        history_theta: Updated theta values after each iteration\n    '''\n    history_cost = np.zeros(iterations)\n    history_theta = np.zeros([iterations, len(theta)])\n    cost = mse_cost_function(x, y)\n    for i in range(iterations):\n        history_theta[i] = theta\n        history_cost[i] = cost(theta)\n        theta = update_theta(x, y, theta, learning_rate)\n    return history_cost, history_theta\n```\n\n### Training and Evaluation\n\n**Task:**\n\nChoose an appropriate learning rate, number of iterations and initial theta values and start the training.\n\n\n```python\n# Your implementation:\n\nalpha = 42.42 # assign an appropriate value\nnb_iterations = 1337 # assign an appropriate value\nstart_values_theta = [42., 42., 42.] # assign appropriate values\nhistory_cost, history_theta = gradient_descent(alpha, start_values_theta, nb_iterations, X, y)\n```\n\n    :15: RuntimeWarning: overflow encountered in square\n      return lambda theta: 1./(2. 
* float(m)) * np.sum((linear_hypothesis(theta)(x) - y )**2)\n\n\n\n```python\n# Solution\n\nalpha = 0.01\nnb_iterations = 100\nstart_values_theta = [1.,-1.,.5]\nhistory_cost, history_theta = gradient_descent(alpha, start_values_theta, nb_iterations, X, y)\n```\n\nNow that the training has finished we can visualize our results.\n\n**Task:**\n\nPlot the costs over the iterations. If you have used `fig = plt.figure()` and `ax = fig.add_subplot(111)` in the last plot, use it again here, else the plot will be added to the last plot instead of a new one.\n\nYour plot should look similar to this one:\n\n\n\n\n```python\ndef plot_progress(costs):\n \"\"\" Plots the costs over the iterations\n \n Args:\n costs: history of costs\n \"\"\"\n raise NotImplementedError(\"You should implement this!\")\n```\n\n\n```python\n# Solution\n\ndef plot_progress(costs):\n \"\"\" Plots the costs over the iterations\n \n Args:\n costs: history of costs\n \"\"\"\n fig = plt.figure()\n ax = fig.add_subplot(111)\n ax.plot(np.array(range(len(costs))), costs)\n ax.set_xlabel('Iterationen')\n ax.set_ylabel('Kosten')\n ax.set_title('Fortschritt')\n```\n\n\n```python\nplot_progress(history_cost)\nprint(\"costs before the training:\\t \", history_cost[0])\nprint(\"costs after the training:\\t \", history_cost[-1])\n```\n\n\n \n\n\n\n\n\n\n costs before the training:\t 190.45165080692664\n costs after the training:\t 4.947880491325002\n\n\n**Task:**\n\nFinally plot the decision hyperplane (just a plain plane here though) together with the data in a 3D plot.\n\nYour plot should look similar to this one:\n\n\n\n\n```python\ndef evaluation_plt(x, y, final_theta):\n ''' Plots the data x, y together with the final model\n \n Args:\n cost_hist: vector, history of all cost values from a opitmization\n theta_0: scalar, model parameter for boundary\n theta_1: scalar, model parameter for boundary\n x: vector, x values from the data set\n y: vector, y values from the data set\n '''\n raise NotImplementedError(\"You should implement this!\")\n```\n\n\n```python\n# Solution\n\ndef evaluation_plt(x, y, final_theta):\n g = 100\n x_1 = np.linspace(np.min(X[:,0]-1), np.max(X[:,0]+1), g)\n x_2 = np.linspace(np.min(X[:,1]-1), np.max(X[:,1]+1), g)\n X1, X2 = np.meshgrid(x_1, x_2)\n \n Y = final_theta[0] + final_theta[1] * X1 + final_theta[2] * X2\n print(Y.shape)\n\n fig = plt.figure()\n ax = fig.add_subplot(111, projection='3d')\n\n ax.plot_surface(X1, X2, Y, cmap=cm.jet, rstride=5, cstride=5, antialiased=True, shade=True, alpha=0.5, linewidth=0. )\n ax.scatter(x[:,0], x[:,1], y, c='r')\n ax.set_xlabel('$x_1$')\n ax.set_ylabel('$x_2$')\n ax.set_zlabel('$y$')\n```\n\n\n```python\n\nevaluation_plt(X, y, history_theta[-1])\nprint(\"thetas before the training:\\t\", history_theta[0])\nprint(\"thetas after the training:\\t\", history_theta[-1])\n```\n\n (100, 100)\n\n\n\n \n\n\n\n\n\n\n thetas before the training:\t [ 1. -1. 0.5]\n thetas after the training:\t [ 1.24635143 3.02581154 -3.83837415]\n\n\n### Feature Scaling\n\nNow suppose the following features $X$:\n\n\n```python\nX = np.array([[0.0001, 2000],\n [0.0002, 1800],\n [0.0003, 1600]], dtype=np.float32)\n\nsample_size = len(X[:,0])\nprint(X)\n```\n\n [[1.0e-04 2.0e+03]\n [2.0e-04 1.8e+03]\n [3.0e-04 1.6e+03]]\n\n\n**Task:**\n\nThis task can be done via **pen & paper** or by inserting some code below. Either way, you should be able to solve both tasks below on paper only using a calculator.\n\n1. Apply feature scaling onto `X` using the *mean* and the *standard deviation*. 
What values do the scaled features have?\n  * *Optional:*\n\n    You can even execute the cell above and start running your notebook again from top (all **except** executing the cell to generate your features, which would overwrite these new features).\n\n    When you start training you should notice that your costs do not decrease, maybe even increase, if you have not adjusted your learning rate (training might also throw an overflow warning).\n\n**Task:**\n\n2. After the training with scaled features, your new $\\vec \\theta'$ values will be something like: $\\vec \\theta'=\\left(-7197, 326, -326\\right)^T$ (you can try training with them, but you do not have to). \n\n    Suppose $\\vec \\theta'=\\left(-7197, 326, -326\\right)^T$. What are the corresponding $\\theta_j$ values for the unscaled data?\n\n    * (If you did train your model with the scaled features, the resulting $\\theta_j$ should really be $\\vec \\theta'=\\left(-7197, 326, -326\\right)^T$.)\n\n\n```python\n# Solution for 1) + 2)\n\n### Generate new targets for new features X\n\ntheta = (2., 3., -4.)\nsigma = 3.\ny = generate_targets(X, theta, sigma)\n```\n\n\n```python\nmu = X.mean(axis=0)\nstd = X.std(axis=0)\nprint(mu, std)\n```\n\n    [2.0000001e-04 1.8000000e+03] [8.1649661e-05 1.6329932e+02]\n\n\n\n```python\nX_scaled = (X - mu) / std\nX_scaled\n```\n\n\n\n\n    array([[-1.2247449e+00,  1.2247449e+00],\n           [-1.7822383e-07,  0.0000000e+00],\n           [ 1.2247449e+00, -1.2247449e+00]], dtype=float32)\n\n\n\n\n```python\n### Calculate mean and standard deviation and scale X\n\nmu1 = X[:,0].mean()\nstd1 = np.sqrt(np.var(X[:,0]))\nmu2 = X[:,1].mean()\nstd2 = np.sqrt(np.var(X[:,1]))\n\nprint(\"meanX1:\\t\", mu1, \"\\tstdX1:\\t\", std1)\nprint(\"meanX2:\\t\", mu2, \"\\tstdX2:\\t\", std2)\n\nX[:,0] = (X[:,0] - mu1) / std1\nX[:,1] = (X[:,1] - mu2) / std2\nprint(\"X scaled:\\n\", X)\n\n```\n\n    meanX1:\t 0.00020000001 \tstdX1:\t 8.164966e-05\n    meanX2:\t 1800.0 \tstdX2:\t 163.29932\n    X scaled:\n     [[-1.2247449e+00  1.2247449e+00]\n     [-1.7822383e-07  0.0000000e+00]\n     [ 1.2247449e+00 -1.2247449e+00]]\n\n\n\n```python\n### Train with scaled X\n\nalpha = 0.5\nnb_iterations = 100\n\ntheta = (2., 3., -4.)\nsigma = 3.\ny = generate_targets(X, theta, sigma)\n\nstart_values_theta = [1.,-1.,.5]\nhistory_cost, history_theta = gradient_descent(alpha, start_values_theta, nb_iterations, X_scaled, y)\n\nprint(\"thetas before the training:\\t\", history_theta[0])\nprint(\"thetas after the training:\\t\", history_theta[-1])\nprint(\"----------------------------\")\nprint(\"costs before the training:\\t\", history_cost[0])\nprint(\"costs after the training:\\t\", history_cost[-1])\n```\n\n    thetas before the training:\t [ 1.  -1.   0.5]\n    thetas after the training:\t [ 1.07576387  2.82165621 -3.32163366]\n    ----------------------------\n    costs before the training:\t 43.92675099402078\n    costs after the training:\t 14.713938810789191\n\n\n\n```python\n### Compute theta for unscaled data\n\ntheta_hat = np.array([-7197., 326., -326.])\nprint(\"theta_hat:\\t\", theta_hat)\n\ntheta_hat_rescaled = np.array([0.,0.,0.])\ntheta_hat_rescaled[0] = theta_hat[0] - theta_hat[1]*mu1/std1 - theta_hat[2]*mu2/std2\ntheta_hat_rescaled[1] = theta_hat[1] / std1\ntheta_hat_rescaled[2] = theta_hat[2] / std2\nprint(\"theta_rescaled:\\t\", theta_hat_rescaled)\n\nstart_values_theta = theta_hat_rescaled\n```\n\n    theta_hat:\t [-7197.   326. 
-326.]\n    theta_rescaled:\t [-4.40213221e+03  3.99266812e+06 -1.99633414e+00]\n\n\n\n```python\n### Rescale X to original\n\nX = np.array([[0.0001, 2000],\n              [0.0002, 1800],\n              [0.0003, 1600]], dtype=np.float32)\n```\n\n\n```python\n### Test costs for original X and new rescaled theta\n\nalpha = 0.000\nnb_iterations = 10\nhistory_cost, history_theta = gradient_descent(alpha, start_values_theta, nb_iterations, X, y)\n\nprint(\"costs with unscaled X and new theta:\\t \", history_cost[-1])\n```\n\n    costs with unscaled X and new theta:\t  26114727.727051158\n\n\n**Solution 3:**\n\nWith polynomial regression, features like `1000` would easily skyrocket even more. And worse: `0.00001` would become too small very fast and lose precision. In the worst case, values would overflow or become zero.\n\n## Summary and Outlook\n\nDuring this exercise, the linear regression was extended to multidimensional feature space and feature scaling was practiced. You should be able to answer the following questions:\n- How does the implementation of the multivariate regression differ from the univariate one?\n- Why do we apply feature scaling?\n- Why does feature scaling help?\n\n## Licenses\n\n### Notebook License (CC-BY-SA 4.0)\n\n*The following license applies to the complete notebook, including code cells. It does however not apply to any referenced external media (e.g., images).*\n\nExercise: Multivariate Linear Regression
    \nby Christian Herta, Klaus Strohmenger
    \nis licensed under a [Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/).
    \nBased on a work at https://gitlab.com/deep.TEACHING.\n\n\n### Code License (MIT)\n\n*The following license only applies to code cells of the notebook.*\n\nCopyright 2018 Christian Herta, Klaus Strohmenger\n\nPermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n", "meta": {"hexsha": "0280976f4ec9e9260f474ffff9decb7aedbc6d74", "size": 427141, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "_notebooks/Multivariate_lineare_regression.ipynb", "max_stars_repo_name": "yachty66/openblog", "max_stars_repo_head_hexsha": "9935b588044bcb9c84853aabcc042bba089916f1", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "_notebooks/Multivariate_lineare_regression.ipynb", "max_issues_repo_name": "yachty66/openblog", "max_issues_repo_head_hexsha": "9935b588044bcb9c84853aabcc042bba089916f1", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "_notebooks/Multivariate_lineare_regression.ipynb", "max_forks_repo_name": "yachty66/openblog", "max_forks_repo_head_hexsha": "9935b588044bcb9c84853aabcc042bba089916f1", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 95.8144908031, "max_line_length": 105257, "alphanum_fraction": 0.7630665284, "converted": true, "num_tokens": 9055, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9059898254600902, "lm_q2_score": 0.9111797069968974, "lm_q1q2_score": 0.8255195437048952}} {"text": "# SVM\n\n## What is SVM\n\n\n\n[Definition](https://en.wikipedia.org/wiki/Support-vector_machine#Definition): More formally, a support-vector machine constructs a hyperplane or set of hyperplanes in a high- or infinite-dimensional space, which can be used for classification, regression, or other tasks like outliers detection. Intuitively, a good separation is achieved by the hyperplane that has the largest distance to the nearest training-data point of any class (so-called functional margin), since in general the larger the margin, the lower the generalization error of the classifier.\n\nWith above graph, direct representation of what SVM is tring to do:\n\n$$\n\\begin{align*}\n &arg \\underset{boundary}{max} margin(boundary), \\\\\n &s.t. 
\\text{the gap between the final boundary and all the correctly classified apples/bananas >= margin} \\\\\n &\\text{(when given a boundary, the margin function calculates the minimal gap between that boundary and all the apples/bananas)}\n\\end{align*}\n$$\n\n## Functional margin vs geometric margin, and SVM interpretation\n\nLets say we had the final hyperplane: $w^Tx + b = 0$, then we can use $|w^Tx_i + b|$ to represent how far the data point $x_i$ is from the hyperplane, and also to the binary classification problems: we can define that when $(w^Tx_i + b) > 0$ then $x_i \\in class 1$, and when $(w^Tx_i + b) < 0$ then $x_i \\in class -1$;\n\nWith $y \\in \\{-1, 1\\}$, to those correctly classified data points we will always have $y(w^Tx + b) > 0$, and the bigger the value is the more confident we are that the data point is correctly classified;\n\n$\\hat{\\gamma_i} = y_i(w^Tx_i + b)$ is so called **functional margin** between the data point $(x_i, y_i)$ and the hyperplane $w^Tx + b = 0$;\n\nBut there is one problem with this functional margin, when we proportionally scale up/down the $w, b$ to $Mw, Mb$, the hyperplane never got changed, but the original functional margin $\\hat{\\gamma_i}$ will be changed to $M\\hat{\\gamma_i}$;\n\nTo standardize the margin, there is **geometric margin**:\n\n$$\n\\gamma_i = y_i \\frac{w^Tx_i + b}{||w||}\n$$\n\nSo another representation of what SVM is trying to achieve:\n\n$$\n\\begin{align*} \\tag{1}\n &arg \\, \\underset{w, b}{max} \\, \\gamma \\\\\n &s.t. \\, y_i \\frac{w^Tx_i + b}{||w||} \\geq \\gamma, \\forall i\n\\end{align*}\n$$\n\nAs the connection between the functional margin $\\hat{\\gamma_i}$ and the geometric margin $\\gamma_i$:\n\n$$\\gamma_i = \\frac{\\hat{\\gamma_i}}{||w||}$$\n\nthe above representation **(1)** also can be interpreted to:\n\n$$\n\\begin{align*} \\tag{2}\n &arg \\, \\underset{w, b}{max} \\, \\frac{\\hat{\\gamma}}{||w||} \\\\\n &s.t. \\, y_i(w^Tx_i + b) \\geq \\hat{\\gamma}, \\forall i\n\\end{align*}\n$$\n\nAs we told above that we can always proportionally scale up/down the $w, b$ without affecting the actual hyperplane we are looking for, so for simplifying the problem: we can tweak the $w, b$ to make the functional margin be 1, then further more we can convert the representation to:\n\n$$\n\\begin{align*} \\tag{3}\n &arg \\, \\underset{w, b}{max} \\, \\frac{1}{||w||} \\\\\n &s.t. \\, y_i(w^Tx_i + b) \\geq 1, \\forall i\n\\end{align*}\n$$\n\nIn ML we are always accustomed to solve the minimization problem, not the maximization problem, so lets further more convert the above **(3)**:\n\n$$\n\\begin{align*} \\tag{4}\n &arg \\, \\underset{w, b}{min} \\, \\frac{1}{2}||w||^2 \\\\\n &s.t. 
\\, y_i(w^Tx_i + b) \\geq 1, \\forall i\n\\end{align*}\n$$\n\nThis is the final interpretation of what SVM is trying to achieve.\n\n## Lagrange multiplier, Lagrange duality and KKT conditions\n\n### Lagrange equation\n\nWith above form **(4)**, lets construct the Lagrange equation:\n\n$$\n\\begin{equation} \\tag{5}\n L(w, b, \\alpha)=\\frac{1}{2}\\|w\\|^2 - \\sum_{i=1}^n \\alpha_{i}\\big(y_i(w^T \\cdot x_i + b) - 1\\big)\n\\end{equation}\n$$\n\nThe methodology behind the way constructing the Lagrange equation please refer to this image [lagrange duality](https://github.com/lnshi/ml-exercises/blob/master/ml_basics/rdm011_support_vector_machines/lagrange_duality.png), it is a long image, i don't want to put here, especially pay attention to the red box highlighted part.\n\n### Primal problem:\n\n$$\n\\begin{equation} \\tag{5.1}\n \\min _{w, b} \\, \\big[ \\max _{\\alpha : \\alpha_{i} \\geq 0} L(w, b, \\alpha) \\big]\n\\end{equation}\n$$\n\n### Dual problem:\n\n$$\n\\begin{equation} \\tag{5.2}\n \\max _{\\alpha : \\alpha_{i} \\geq 0} \\, \\big[ \\min _{w, b} L(w, b, \\alpha) \\big]\n\\end{equation}\n$$\n\nThe reason why we need to introduce in the dual problem, and solve the dual problem instead of the primal problem, and a full example of solving a real problem can be found in this image [lagrange duality example](https://github.com/lnshi/ml-exercises/blob/master/ml_basics/rdm011_support_vector_machines/lagrange_duality_example.png), again it is a long image, i don't want to put here, especially pay attention to the red box highlighted part.\n\nIn short words: in the primal problem, after we treat $w, b$ as a fixed value, then $L(w, b, \\alpha)$ is a **LINEAR FUNCTION** with independent variables $x_i$, we **CANNOT** get its maxima/minima by taking advantages of those derivative/partial derivative knowledges, the only left way is brute force compare, definitely that is NOT what we want. But in the dual problem, we can.\n\n### How to solve the dual problem\n\n$$\n\\begin{align*}\n &\\frac{\\partial L}{\\partial w}=0 \\Longrightarrow w=\\sum_{i=1}^n \\alpha_i y_i x_i \\tag{5.3} \\\\\n &\\frac{\\partial L}{\\partial b}=0 \\Longrightarrow 0=\\sum_{i=1}^n \\alpha_i y_i \\tag{5.4}\n\\end{align*}\n$$\n\nCombine (5.3) with (5) we can get:\n\n$$\n\\begin{align*} \\tag{5.5}\n L(w, b, \\alpha) &= \\frac{1}{2}\\|w\\|^{2}-\\sum_{i=1}^n \\alpha_{i}\\big(y_{i}(w^{T} \\cdot x_{i}+b)-1\\big) \\\\\n &= \\frac{1}{2} w^T w - w^T \\sum_{i=1}^n \\alpha_i y_i x_i - b\\sum_{i=1}^n \\alpha_i y_i + \\sum_{i=1}^n \\alpha_i \\\\\n &= \\frac{1}{2} \\sum_{i=1}^n \\sum_{j=1}^n \\alpha_i \\alpha_j y_i y_j x_i^T x_j - \\sum_{i=1}^n \\sum_{j=1}^n \\alpha_i \\alpha_j y_i y_j x_i^T x_j + \\sum_{i=1}^n \\alpha_i \\\\\n &= \\sum_{i=1}^n \\alpha_i - \\frac{1}{2} \\sum_{i=1}^n \\sum_{j=1}^n \\alpha_i \\alpha_j y_i y_j x_i^T x_j\n\\end{align*}\n$$\n\nFinally the dual problem is converted to:\n\n$$\n\\begin{align*} \\tag{5.6}\n &\\max _{\\alpha} \\Bigg[ \\sum_{i=1}^n \\alpha_i - \\frac{1}{2} \\sum_{i=1}^n \\sum_{j=1}^n \\alpha_i \\alpha_j y_i y_j x_i^T x_j \\Bigg] \\\\\n &s.t. \\sum_{i=1}^n \\alpha_{i} y_{i} = 0 \\\\\n &\\quad \\,\\,\\, \\alpha_i \\geq 0, \\forall i\n\\end{align*}\n$$\n\nOf course, again you can covert it to a minimization problem:\n\n$$\n\\begin{align*} \\tag{5.7}\n &\\min _{\\alpha} \\Bigg[\\frac{1}{2} \\sum_{i=1}^n \\sum_{j=1}^n \\alpha_i \\alpha_j y_i y_j x_i^T x_j - \\sum_{i=1}^n \\alpha_i \\Bigg] \\\\\n &s.t. 
\\sum_{i=1}^n \\alpha_{i} y_{i} = 0 \\\\\n &\\quad \\,\\,\\, \\alpha_i \\geq 0, \\forall i\n\\end{align*}\n$$\n\n### KKT conditions\n\nTake the below optimization problem as example:\n\n$$\n\\begin{align*}\n &min f(x) \\\\\n &s.t. g_{j}(x) = 0 \\, (j=1,2, \\cdots, p) \\\\\n &\\quad \\,\\, h_{k}(x) \\leq 0 \\, (k=1,2, \\cdots, q)\n\\end{align*}\n$$\n\nConstruct Lagrange equation:\n\n$$\n\\begin{align*}\n L(x, \\alpha, \\beta) = f(x) + \\sum_{j=1}^p \\alpha_j g_j(x) + \\sum_{k=1}^q \\beta_k h_k(x)\n\\end{align*}\n$$\n\nKKT conditions are the necessary conditions to make $x^*$ be the optimal solution of the original optimization problem after we did all of the transformations:\n1. Lagrange equation construction;\n2. Primal problem construction;\n3. Dual problem construction;\n\n$$\n\\begin{cases}\n \\frac{\\partial f}{\\partial x_i} + \\sum\\limits_{j=1}^p \\alpha_j \\frac{\\partial g_j}{\\partial x_i} + \\sum\\limits_{k=1}^q \\beta_k \\frac{\\partial h_k}{\\partial x_i} = 0 &\\quad (i = 1, 2, \\cdots, m) \\\\\n g_{j}(x) = 0 &\\quad (j=1,2, \\cdots, p) \\\\\n \\beta_k h_{k}(x) = 0 &\\quad (k=1,2, \\cdots, q) \\\\\n \\beta_k \\geq 0\n\\end{cases}\n$$\n\n## Coordinate ascent algorithm and SMO\n\n### Coordinate ascent algorithm\n\nVery simple algorithm, there is an example here [coordinate ascent example](https://github.com/lnshi/ml-exercises/blob/master/ml_basics/rdm011_support_vector_machines/coordinate_ascent_example.png) to demonstrate how it works.\n\n### SMO\n\nFor readabilities, i copy above our final dual problem (5.7) here:\n\n$$\n\\begin{align*} \\tag{5.7 copy}\n &\\min _{\\alpha} \\Bigg[\\frac{1}{2} \\sum_{i=1}^n \\sum_{j=1}^n \\alpha_i \\alpha_j y_i y_j x_i^T x_j - \\sum_{i=1}^n \\alpha_i \\Bigg] \\\\\n &s.t. \\sum_{i=1}^n \\alpha_{i} y_{i} = 0 \\\\\n &\\quad \\,\\,\\, \\alpha_i \\geq 0, \\forall i\n\\end{align*}\n$$\n\nIn coordinate ascent algorithm, after has done the random initialization(CANNOT initialize with 0), in each iteration we choose only one variable to update.\n\nBut with above problem, there is constraint $\\sum\\limits_{i=1}^n \\alpha_{i} y_{i} = 0$, so we CANNOT just update one variable at a time, but at least update another one correspondingly to satisfy the constraint, so instead of one we choose two variables to tune in one iteration in SMO.\n\nDetailed steps to show how to update the randomly chosen(actually there is some strategies to decide which better to chose) two variables $(\\alpha_i, \\alpha_j)$ in each iteration can be referred here: [SMO example](https://github.com/lnshi/ml-exercises/blob/master/ml_basics/rdm011_support_vector_machines/smo.png), full article from here: [\u673a\u5668\u5b66\u4e60\u7b97\u6cd5\u5b9e\u8df5-SVM\u4e2d\u7684SMO\u7b97\u6cd5](https://zhuanlan.zhihu.com/p/29212107).\n\n## Slack variables and penalty factors\n\nFor readabilities I just copy above formula (4) - the original problem here:\n\n$$\n\\begin{align*} \\tag{4 copy}\n &arg \\, \\underset{w, b}{min} \\, \\frac{1}{2}||w||^2 \\\\\n &s.t. \\, y_i(w^Tx_i + b) \\geq 1, \\forall i\n\\end{align*}\n$$\n\nIntroduce in slack variables $\\zeta_i$ and penalty factor $C$:\n\n$$\n\\begin{align*} \\tag{4.1}\n &arg \\, \\underset{w, b}{min} \\, \\frac{1}{2}||w||^2 + C \\sum_{i=1}^n \\zeta_i \\\\\n &s.t. 
\\, y_i(w^Tx_i + b) \\geq 1 - \\zeta_i, \\forall i \\\\\n &\\quad \\,\\,\\, \\zeta_i \\geq 0, \\forall i\n\\end{align*}\n$$\n\nFor each sample point we have different slack variable(target solving the outliers, for the others the slack variables should be all 0), similarly for the penalty factor we also can have different values for each of the sample point, NOT necessarily to be all same.\n\nDeal with unbalanced dataset(e.g.: the `+ class` has quite a lot of samples, but the `- class` only has quite a few):\n\n$$\n\\begin{align*} \\tag{4.2}\n &arg \\, \\underset{w, b}{min} \\, \\frac{1}{2}||w||^2 + C_+ \\sum_{i=1}^p \\zeta_i + C_- \\sum_{i=p+1}^n \\zeta_i \\\\\n &s.t. \\, y_i(w^Tx_i + b) \\geq 1 - \\zeta_i, \\forall i \\\\\n &\\quad \\,\\,\\, \\zeta_i \\geq 0, \\forall i\n\\end{align*}\n$$\n\n# Experience sklearn built-in SVM\n\n\n```python\nimport os\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\n# Sets the backend of matplotlib to the 'inline' backend.\n#\n# With this backend, the output of plotting commands is displayed inline within frontends like the Jupyter notebook,\n# directly below the code cell that produced it.\n# The resulting plots will then also be stored in the notebook document.\n#\n# More details: https://stackoverflow.com/questions/43027980/purpose-of-matplotlib-inline\n%matplotlib inline\n\nfrom scipy.io import loadmat\n\nraw_data = loadmat(os.getcwd() + '/linear_discriminable_samples.mat')\n```\n\n\n```python\ndata = pd.DataFrame(raw_data['X'], columns=['X1', 'X2'])\ndata['y'] = raw_data['y']\n\npositive = data[data['y'].isin([1])] \nnegative = data[data['y'].isin([0])]\n\nfig, ax = plt.subplots(figsize=(12,8)) \nax.scatter(positive['X1'], positive['X2'], s=50, marker='x', label='Positive') \nax.scatter(negative['X1'], negative['X2'], s=50, marker='o', label='Negative') \nax.legend() \n```\n\n## SVM with linear kernel ('C=1' vs 'C=100')\n\n\n```python\nfrom sklearn import svm\n\nsvc1 = svm.LinearSVC(C=1, loss='hinge', max_iter=1000)\nsvc1\n```\n\n\n\n\n LinearSVC(C=1, class_weight=None, dual=True, fit_intercept=True,\n intercept_scaling=1, loss='hinge', max_iter=1000, multi_class='ovr',\n penalty='l2', random_state=None, tol=0.0001, verbose=0)\n\n\n\n\n```python\nsvc1.fit(data[['X1', 'X2']], data['y']) \nsvc1.score(data[['X1', 'X2']], data['y'])\n```\n\n /usr/local/anaconda3/lib/python3.7/site-packages/sklearn/svm/base.py:931: ConvergenceWarning: Liblinear failed to converge, increase the number of iterations.\n \"the number of iterations.\", ConvergenceWarning)\n\n\n\n\n\n 0.9803921568627451\n\n\n\n\n```python\nsvc2 = svm.LinearSVC(C=100, loss='hinge', max_iter=1000) \nsvc2.fit(data[['X1', 'X2']], data['y']) \nsvc2.score(data[['X1', 'X2']], data['y']) \n```\n\n /usr/local/anaconda3/lib/python3.7/site-packages/sklearn/svm/base.py:931: ConvergenceWarning: Liblinear failed to converge, increase the number of iterations.\n \"the number of iterations.\", ConvergenceWarning)\n\n\n\n\n\n 1.0\n\n\n\n\n```python\ndata['SVM 1 Confidence'] = svc1.decision_function(data[['X1', 'X2']])\n\nfig, ax = plt.subplots(figsize=(12,8)) \nax.scatter(data['X1'], data['X2'], s=50, c=data['SVM 1 Confidence'], cmap='seismic') \nax.set_title('SVM (C=1) Decision Confidence')\n```\n\n\n```python\ndata['SVM 2 Confidence'] = svc2.decision_function(data[['X1', 'X2']])\n\nfig, ax = plt.subplots(figsize=(12,8)) \nax.scatter(data['X1'], data['X2'], s=50, c=data['SVM 2 Confidence'], cmap='seismic') \nax.set_title('SVM (C=100) Decision Confidence')\n```\n\n## SVM with 
gaussian kernel (built-in by sklearn already)\n\n\n```python\nraw_data = loadmat(os.getcwd() + '/nonlinear_discriminable_samples.mat')\n\ndata = pd.DataFrame(raw_data['X'], columns=['X1', 'X2']) \ndata['y'] = raw_data['y']\n\npositive = data[data['y'].isin([1])] \nnegative = data[data['y'].isin([0])]\n\nfig, ax = plt.subplots(figsize=(12,8)) \nax.scatter(positive['X1'], positive['X2'], s=30, marker='x', label='Positive') \nax.scatter(negative['X1'], negative['X2'], s=30, marker='o', label='Negative') \nax.legend()\n```\n\n### How to use a custom kernel with some dynamic values\n\n\n```python\ndef build_gaussian_kernel(sigma):\n def gaussian_kernel(x1, x2):\n return np.exp(-(np.sum((x1 - x2) ** 2) / (2 * (sigma ** 2))))\n return gaussian_kernel\n# Then: svc = svm.SVC(C=100, kernel=build_gaussian_kernel(sigma=1), gamma=10, probability=True)\n```\n\n\n```python\nsvc = svm.SVC(C=100, kernel='rbf', gamma=10, probability=True)\nsvc.fit(data[['X1', 'X2']], data['y']) \ndata['Probability'] = svc.predict_proba(data[['X1', 'X2']])[:,0]\n\nfig, ax = plt.subplots(figsize=(12,8))\nax.scatter(data['X1'], data['X2'], s=30, c=data['Probability'], cmap='Reds')\n```\n\n### Model selection: 'GridSearchCV' and 'PredefinedSplit'\n\n\n```python\nfrom sklearn import base\nfrom sklearn import model_selection\nfrom sklearn import ensemble\n\nraw_data = loadmat(os.getcwd() + '/samples_contain_both_training_and_validation_set.mat')\n\nX = np.concatenate(\n (raw_data['X'], raw_data['Xval']), axis=0\n)\ny = np.concatenate(\n (raw_data['y'].ravel(), raw_data['yval'].ravel())\n)\n\ntest_fold = np.concatenate(\n (\n np.full(raw_data['X'].shape[0], 0),\n np.full(raw_data['Xval'].shape[0], -1)\n )\n)\nps = model_selection.PredefinedSplit(test_fold)\n\nparam_candidates = [\n {\n 'C': [0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30, 100],\n 'gamma': [0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30, 100],\n 'kernel': ['rbf']\n }\n]\n\ngs = model_selection.GridSearchCV(\n estimator=svm.SVC(),\n param_grid=param_candidates,\n cv=ps,\n refit=True,\n error_score=0,\n n_jobs=-1\n)\n\ngs.fit(X, y)\ngs.best_estimator_, gs.best_score_\n```\n\n\n\n\n (SVC(C=10, cache_size=200, class_weight=None, coef0=0.0,\n decision_function_shape='ovr', degree=3, gamma=10, kernel='rbf',\n max_iter=-1, probability=False, random_state=None, shrinking=True,\n tol=0.001, verbose=False), 0.943127962085308)\n\n\n\n# References\n\n- [\u652f\u6301\u5411\u91cf\u673a(SVM)\u662f\u4ec0\u4e48\u610f\u601d\uff1f](https://www.zhihu.com/question/21094489/answer/117246987)\n\n- [\u652f\u6301\u5411\u91cf\u673a\u4e2d\u7684\u51fd\u6570\u8ddd\u79bb\u548c\u51e0\u4f55\u8ddd\u79bb\u600e\u4e48\u7406\u89e3\uff1f](https://www.zhihu.com/question/20466147/answer/28469993)\n\n- [SVM\u63a8\u5bfc\u8fc7\u7a0b](https://zhuanlan.zhihu.com/p/34811858)\n\n- [\u5b66\u4e60SVM\uff08\u4e09\uff09\u7406\u89e3SVM\u4e2d\u7684\u5bf9\u5076\u95ee\u9898](https://blog.csdn.net/chaipp0607/article/details/73849539)\n\n- [\u96f6\u57fa\u7840\u5b66SVM\u2014Support Vector Machine(\u4e00)](https://zhuanlan.zhihu.com/p/24638007)\n\n- [\u96f6\u57fa\u7840\u5b66SVM-Support Vector Machine(\u4e8c)](https://zhuanlan.zhihu.com/p/29865057)\n\n- [\u7b80\u6613\u89e3\u8bf4\u62c9\u683c\u6717\u65e5\u5bf9\u5076\uff08Lagrange duality\uff09](http://www.cnblogs.com/90zeng/p/Lagrange_duality.html)\n\n- [\u6d45\u8c08\u6700\u4f18\u5316\u95ee\u9898\u7684KKT\u6761\u4ef6](https://zhuanlan.zhihu.com/p/26514613)\n\n- [\u673a\u5668\u5b66\u4e60\u7b97\u6cd5\u5b9e\u8df5-SVM\u4e2d\u7684SMO\u7b97\u6cd5](https://zhuanlan.zhihu.com/p/29212107)\n\n- 
[SVM\u5b66\u4e60\uff08\u4e94\uff09\uff1a\u677e\u5f1b\u53d8\u91cf\u4e0e\u60e9\u7f5a\u56e0\u5b50](https://blog.csdn.net/qll125596718/article/details/6910921)\n", "meta": {"hexsha": "556f848bd09c364369fbab98029f27c4121150cc", "size": 281936, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "ml_basics/rdm011_support_vector_machines/support_vector_machines.ipynb", "max_stars_repo_name": "lnshi/ml-exercises", "max_stars_repo_head_hexsha": "7482ca0f5bb599e6be0a02b50ca1db3c882e715f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2018-10-06T10:32:10.000Z", "max_stars_repo_stars_event_max_datetime": "2019-07-04T14:55:55.000Z", "max_issues_repo_path": "ml_basics/rdm011_support_vector_machines/support_vector_machines.ipynb", "max_issues_repo_name": "lnshi/ml-exercises", "max_issues_repo_head_hexsha": "7482ca0f5bb599e6be0a02b50ca1db3c882e715f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ml_basics/rdm011_support_vector_machines/support_vector_machines.ipynb", "max_forks_repo_name": "lnshi/ml-exercises", "max_forks_repo_head_hexsha": "7482ca0f5bb599e6be0a02b50ca1db3c882e715f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-07-03T09:24:41.000Z", "max_forks_repo_forks_event_max_datetime": "2019-07-03T09:24:41.000Z", "avg_line_length": 373.4251655629, "max_line_length": 131536, "alphanum_fraction": 0.9284660348, "converted": true, "num_tokens": 5442, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9059898127684335, "lm_q2_score": 0.9111797136297299, "lm_q1q2_score": 0.8255195381497938}} {"text": "```python\nimport sympy as sp\nimport numpy as np\n```\n\n\n```python\ndef factorial(n):\n if n < 2:\n return 1\n else:\n return n*factorial(n-1)\n```\n\n\n```python\ndef n_degree_taylor(formula, x, x_0, n):\n '''\n Given the analytic expression, x and x_0, calculate\n its nth degree Taylor polynomial\n Note: Variable is defaulted to x\n '''\n var_x = sp.symbols('x')\n acc = 0\n # Be careful, n+1 because it's up to n\n for i in range(n+1):\n acc += sp.diff(formula, var_x, i).subs(var_x, x_0)/factorial(i)*(x-x_0)**i\n return float(acc)\n```\n\n\n```python\n# Generic Test for n_degree_taylor\nexpected = -2\nactual = n_degree_taylor('-sin(x)', 2, 0, 2)\ntol = 10**-7\nassert abs(expected-actual) < tol\n```\n\n\n```python\ndef empirical_taylor(derivatives, x, x_0):\n '''\n Given empirical derivatives f(x_0), f'(x_0), f''(x_0)...\n calculate it's taylor series with this empirical data\n ASSUME FROM 0th derivative (which is the original function)\n '''\n result = 0\n for i in range(len(derivatives)):\n result += derivatives[i]*(x-x_0)**i/factorial(i)\n return result\n```\n\n\n```python\n# Generic Test Case for empirical_taylor\nexpected = -4/3\nactual = empirical_taylor([8, -8, 4, -2, 2], 2, 0)\ntol = 10**-7\nassert abs(expected-actual) < tol\n```\n\n\n```python\n# Workspace\nn_degree_taylor('-sin(x)', 5, 0, 2)\n```\n\n\n\n\n -5.0\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "e50132dcb2884d28095c8045de416ac773fcfaf7", "size": 3120, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "taylor.ipynb", "max_stars_repo_name": "Racso-3141/uiuc-cs357-fa21-scripts", "max_stars_repo_head_hexsha": "e44f0a1ea4eb657cb77253f1db464d52961bbe5e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 10, 
"max_stars_repo_stars_event_min_datetime": "2021-11-02T05:56:10.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-03T19:25:19.000Z", "max_issues_repo_path": "taylor.ipynb", "max_issues_repo_name": "Racso-3141/uiuc-cs357-fa21-scripts", "max_issues_repo_head_hexsha": "e44f0a1ea4eb657cb77253f1db464d52961bbe5e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "taylor.ipynb", "max_forks_repo_name": "Racso-3141/uiuc-cs357-fa21-scripts", "max_forks_repo_head_hexsha": "e44f0a1ea4eb657cb77253f1db464d52961bbe5e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2021-10-30T15:18:01.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-10T11:26:43.000Z", "avg_line_length": 22.1276595745, "max_line_length": 91, "alphanum_fraction": 0.4935897436, "converted": true, "num_tokens": 447, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9304582516374121, "lm_q2_score": 0.8872046041554922, "lm_q1q2_score": 0.8255068448271816}} {"text": "```python\nimport sympy\nfrom sympy import I, conjugate, pi, oo\nfrom sympy.functions import exp\nfrom sympy.matrices import Identity, MatrixSymbol\nimport numpy as np\n```\n\n# \\#3\n\n## Fourier matrix with $\\omega$\n\n\n```python\n\u03c9 = sympy.Symbol(\"omega\")\n\ndef omegaPow(idx):\n return \u03c9**(idx) \n\ndef fourierGenerator(N):\n return sympy.Matrix(N, N, lambda m, n: omegaPow(m * n))\n\nassert fourierGenerator(2) == sympy.Matrix([[1, 1], [1, \u03c9]])\n\nfourierGenerator(4)\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 1 & 1 & 1\\\\1 & \\omega & \\omega^{2} & \\omega^{3}\\\\1 & \\omega^{2} & \\omega^{4} & \\omega^{6}\\\\1 & \\omega^{3} & \\omega^{6} & \\omega^{9}\\end{matrix}\\right]$\n\n\n\n\n```python\nfourierGenerator(4)\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 1 & 1 & 1\\\\1 & \\omega & \\omega^{2} & \\omega^{3}\\\\1 & \\omega^{2} & \\omega^{4} & \\omega^{6}\\\\1 & \\omega^{3} & \\omega^{6} & \\omega^{9}\\end{matrix}\\right]$\n\n\n\n## Complex conjugate F\n\n\n```python\nconjugate(fourierGenerator(4))\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 1 & 1 & 1\\\\1 & \\overline{\\omega} & \\overline{\\omega}^{2} & \\overline{\\omega}^{3}\\\\1 & \\overline{\\omega}^{2} & \\overline{\\omega}^{4} & \\overline{\\omega}^{6}\\\\1 & \\overline{\\omega}^{3} & \\overline{\\omega}^{6} & \\overline{\\omega}^{9}\\end{matrix}\\right]$\n\n\n\n## Substitute $\\omega=e^\\frac{i2{\\pi}mn}{N}$\n\n\n```python\ndef omegaSubstirution(idx, N):\n return sympy.exp((I * 2 * pi * idx) / N)\n\n\nassert omegaSubstirution(1, 4) == I\nassert omegaSubstirution(8, 8) == 1\nassert omegaSubstirution(3, 6) == -1\n```\n\n## Generate Fourier matrix with values\n\n\n```python\ndef fourierGeneratorWithExp(N):\n return sympy.Matrix(N, N, lambda m, n: omegaSubstirution(m * n, N))\n\n\nassert fourierGeneratorWithExp(2) == sympy.Matrix([[1, 1], [1, -1]])\n\nF4 = fourierGeneratorWithExp(4)\n\nF4\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 1 & 1 & 1\\\\1 & i & -1 & - i\\\\1 & -1 & 1 & -1\\\\1 & - i & -1 & i\\end{matrix}\\right]$\n\n\n\n## Matrix conjugate\n\n\n```python\nF4Conj = conjugate(F4)\n\nassert Identity(4).as_explicit() == (1 / 4) * F4 * F4Conj\n\nF4Conj\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 1 & 1 & 1\\\\1 & - i & -1 & i\\\\1 & -1 & 1 & -1\\\\1 & i & -1 & - i\\end{matrix}\\right]$\n\n\n\n## Conjugate 
generator with $\\omega$\n\n\n```python\ndef fourierConjGenerator(N):\n return sympy.Matrix(N, N, lambda m, n: omegaPow(0 if m == 0 else (N - m) * n))\n\n\nfourierConjGenerator(4)\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 1 & 1 & 1\\\\1 & \\omega^{3} & \\omega^{6} & \\omega^{9}\\\\1 & \\omega^{2} & \\omega^{4} & \\omega^{6}\\\\1 & \\omega & \\omega^{2} & \\omega^{3}\\end{matrix}\\right]$\n\n\n\n## Conjugate generator with values\n\n\n```python\ndef fourierConjGeneratorWithExp(N):\n return sympy.Matrix(\n N, N, lambda m, n: omegaSubstirution(0 if m == 0 else (N - m) * n, N))\n\n\nF4ConjWithExp = fourierConjGeneratorWithExp(4)\n\nassert F4Conj == F4ConjWithExp\n\nF4ConjWithExp\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 1 & 1 & 1\\\\1 & - i & -1 & i\\\\1 & -1 & 1 & -1\\\\1 & i & -1 & - i\\end{matrix}\\right]$\n\n\n\n## Permutation Generator\n\n\n```python\ndef generatePermutationMatrix(N):\n return np.vstack((np.hstack((np.array([1]), np.zeros(\n N - 1, dtype=int))), np.zeros((N - 1, N), dtype=int))) + np.fliplr(\n np.diagflat(np.ones(N - 1, dtype=int), -1))\n\n\nassert np.all(\n generatePermutationMatrix(4) == np.array([[1, 0, 0, 0], [0, 0, 0, 1],\n [0, 0, 1, 0], [0, 1, 0, 0]]))\n\ngeneratePermutationMatrix(4)\n```\n\n\n\n\n array([[1, 0, 0, 0],\n [0, 0, 0, 1],\n [0, 0, 1, 0],\n [0, 1, 0, 0]])\n\n\n\n## $$F=P{\\cdot}\\overline{F}$$\n\n\n```python\nP4 = generatePermutationMatrix(4)\n\nassert F4 == P4 * F4Conj\n\nP4 * F4Conj\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 1 & 1 & 1\\\\1 & i & -1 & - i\\\\1 & -1 & 1 & -1\\\\1 & - i & -1 & i\\end{matrix}\\right]$\n\n\n\n## $$P^2 = I$$\n\n\n```python\nassert np.all(np.linalg.matrix_power(P4, 2) == np.identity(4, dtype=int))\n```\n\n## $$\\frac{1}{N}{\\cdot}F^2 = P$$\n\n\n```python\nassert np.all(((1 / 4) * np.linalg.matrix_power(F4, 2)).astype(int) == P4)\n```\n\n# \\#4\n\n\n```python\nID = sympy.Matrix(\n np.hstack((np.vstack((np.identity(2, dtype=int), np.identity(2,\n dtype=int))),\n np.vstack((np.identity(2, dtype=int) * np.diag(F4)[0:2],\n -np.identity(2, dtype=int) * np.diag(F4)[0:2])))))\n\nID\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 0 & 1 & 0\\\\0 & 1 & 0 & i\\\\1 & 0 & -1 & 0\\\\0 & 1 & 0 & - i\\end{matrix}\\right]$\n\n\n\n\n```python\nID.inv()\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}\\frac{1}{2} & 0 & \\frac{1}{2} & 0\\\\0 & \\frac{1}{2} & 0 & \\frac{1}{2}\\\\\\frac{1}{2} & 0 & - \\frac{1}{2} & 0\\\\0 & - \\frac{i}{2} & 0 & \\frac{i}{2}\\end{matrix}\\right]$\n\n\n\n\n```python\nF2 = fourierGeneratorWithExp(2)\nZeros2 = np.zeros((2, 2), dtype=int)\n\nF22 = sympy.Matrix(\n np.array(np.vstack((np.hstack((F2, Zeros2)), np.hstack((Zeros2, F2))))))\n\nF22\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 1 & 0 & 0\\\\1 & -1 & 0 & 0\\\\0 & 0 & 1 & 1\\\\0 & 0 & 1 & -1\\end{matrix}\\right]$\n\n\n\n\n```python\nF22.inv()\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}\\frac{1}{2} & \\frac{1}{2} & 0 & 0\\\\\\frac{1}{2} & - \\frac{1}{2} & 0 & 0\\\\0 & 0 & \\frac{1}{2} & \\frac{1}{2}\\\\0 & 0 & \\frac{1}{2} & - \\frac{1}{2}\\end{matrix}\\right]$\n\n\n\n\n```python\nEvenOddP = sympy.Matrix([[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0],\n [0, 0, 0, 1]])\nEvenOddP\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 0 & 0 & 0\\\\0 & 0 & 1 & 0\\\\0 & 1 & 0 & 0\\\\0 & 0 & 0 & 1\\end{matrix}\\right]$\n\n\n\n\n```python\nassert ID * F22 * EvenOddP == F4\n\nID * F22 * EvenOddP\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 1 & 1 & 1\\\\1 & i & -1 & - i\\\\1 & -1 & 1 & -1\\\\1 & 
- i & -1 & i\\end{matrix}\\right]$\n\n\n\n\n```python\nF4\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 1 & 1 & 1\\\\1 & i & -1 & - i\\\\1 & -1 & 1 & -1\\\\1 & - i & -1 & i\\end{matrix}\\right]$\n\n\n\n# \\#5\n\n$$\n F^{T}_{N} = \n\\left(\n \\begin{bmatrix}\n I_{N/2} & D_{N/2} \\\\[0.3em]\n I_{N/2} & -D_{N/2} \\\\[0.3em]\n \\end{bmatrix}\n \\cdot\n \\begin{bmatrix}\n F_{N/2} & \\\\[0.3em]\n & F_{N/2} \\\\[0.3em]\n \\end{bmatrix}\n \\cdot\n \\begin{bmatrix}\n even_{N/2} & even_{N/2} \\\\[0.3em]\n odd_{N/2} & odd_{N/2} \\\\[0.3em]\n \\end{bmatrix}\n \\right)^T\n =\n \\begin{bmatrix}\n even_{N/2} & odd_{N/2} \\\\[0.3em]\n even_{N/2} & odd_{N/2} \\\\[0.3em]\n \\end{bmatrix}\n \\cdot\n \\begin{bmatrix}\n F_{N/2} & \\\\[0.3em]\n & F_{N/2} \\\\[0.3em]\n \\end{bmatrix}\n \\cdot\n \\begin{bmatrix}\n I_{N/2} & I_{N/2} \\\\[0.3em]\n D_{N/2} & -D_{N/2} \\\\[0.3em]\n \\end{bmatrix}\n $$\n\n\n```python\nEvenOddP * F22 * sympy.Matrix.transpose(ID)\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 1 & 1 & 1\\\\1 & i & -1 & - i\\\\1 & -1 & 1 & -1\\\\1 & - i & -1 & i\\end{matrix}\\right]$\n\n\n\n\n```python\nEvenOddP * F22 * sympy.Matrix.transpose(ID) == F4\n```\n\n\n\n\n True\n\n\n\n# \\#6\n\n### Based on Permutation Matrix for the Conjugate Matrix - even / odd Permutation\n\n\n```python\nN = 6\n\nP = generatePermutationMatrix(N)\n\nEvenOddP = sympy.Matrix(\n np.vstack((\n P[0],\n P[np.arange(N-2, 1, -2)],\n P[np.arange(N-1, 0, -2)]\n ))\n)\n\nEvenOddP\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 0 & 0 & 0 & 0 & 0\\\\0 & 0 & 1 & 0 & 0 & 0\\\\0 & 0 & 0 & 0 & 1 & 0\\\\0 & 1 & 0 & 0 & 0 & 0\\\\0 & 0 & 0 & 1 & 0 & 0\\\\0 & 0 & 0 & 0 & 0 & 1\\end{matrix}\\right]$\n\n\n\n\n```python\ndef generatePermutationFourierMatrix(N):\n P = generatePermutationMatrix(N)\n return sympy.Matrix(\n np.vstack((\n P[0],\n P[np.arange(N-2, 1, -2)],\n P[np.arange(N-1, 0, -2)]\n ))\n )\n\nassert np.all(generatePermutationFourierMatrix(4) == sympy.Matrix([\n [1, 0, 0, 0],\n [0, 0, 1, 0],\n [0, 1, 0, 0],\n [0, 0, 0, 1]\n]))\n```\n\n### Shape of the Permutation Matrix\n$$\n\\begin{bmatrix}\n even_{N/2} & even_{N/2} \\\\[0.3em]\n odd_{N/2} & odd_{N/2} \\\\[0.3em]\n\\end{bmatrix}\n$$\n\n### N = 8 Fourier Permutation Matrix\n\n\n```python\ngeneratePermutationFourierMatrix(8)\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\\\0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\\\0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\\\0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\\\0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\\\0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\\\0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\\\0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\\end{matrix}\\right]$\n\n\n\n\n```python\ngeneratePermutationFourierMatrix(6)\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 0 & 0 & 0 & 0 & 0\\\\0 & 0 & 1 & 0 & 0 & 0\\\\0 & 0 & 0 & 0 & 1 & 0\\\\0 & 1 & 0 & 0 & 0 & 0\\\\0 & 0 & 0 & 1 & 0 & 0\\\\0 & 0 & 0 & 0 & 0 & 1\\end{matrix}\\right]$\n\n\n\n## Generate Fourier Matrices\n\n### Generate Fourier Blocks for basic Matrix:\n\n#### Half Eye: $I_{N/2}$\n#### Half Fourier Diagonal: $D_{N/2}$\n#### Half Fourier: $F_{N/2}$\n\n\n```python\ndef generateFourierBlocks(N, fourierGenerator):\n half = int(N/2)\n quarterEye = sympy.Matrix.eye(half, half)\n quarterZeroes = sympy.Matrix.zeros(half, half)\n halfDiagFourier = sympy.Matrix.diag(\n np.diagonal(fourierGenerator(N))\n )[:half, :half]\n halfFourier = fourierGenerator(half)\n return (quarterEye, quarterZeroes, halfDiagFourier, halfFourier)\n```\n\n\n```python\nBlocks4 = generateFourierBlocks(4, fourierGenerator)\n\nassert Blocks4 == 
(\n sympy.Matrix([\n [1, 0],\n [0, 1]\n ]),\n sympy.Matrix([\n [0, 0],\n [0, 0]\n ]),\n sympy.Matrix([\n [1, 0],\n [0, \u03c9]\n ]),\n sympy.Matrix([\n [1, 1],\n [1, \u03c9]\n ])\n)\n```\n\n\n```python\nBlocks4Complex = generateFourierBlocks(4, fourierGeneratorWithExp)\n\nassert Blocks4Complex == (\n sympy.Matrix([\n [1, 0],\n [0, 1]\n ]),\n sympy.Matrix([\n [0, 0],\n [0, 0]\n ]),\n sympy.Matrix([\n [1, 0],\n [0, I]\n ]),\n sympy.Matrix([\n [1, 1],\n [1, -1]\n ])\n)\n```\n\n\n```python\ndef composeFourierMatricesFromBlocks(N, quarterEye, quarterZeroes, halfDiagFourier, halfFourier):\n return (\n sympy.Matrix.vstack(\n sympy.Matrix.hstack(\n quarterEye,\n halfDiagFourier\n ),\n sympy.Matrix.hstack(\n quarterEye,\n -halfDiagFourier\n )\n ),\n sympy.Matrix.vstack(\n sympy.Matrix.hstack(\n halfFourier,\n quarterZeroes\n ),\n sympy.Matrix.hstack(\n quarterZeroes,\n halfFourier\n )\n ),\n generatePermutationFourierMatrix(N)\n )\n```\n\n\n```python\ndef createFourierMatrices(N, fourierGenerator):\n (quarterEye, quarterZeroes, halfDiagFourier, halfFourier) = generateFourierBlocks(N, fourierGenerator)\n return composeFourierMatricesFromBlocks(N, quarterEye, quarterZeroes, halfDiagFourier, halfFourier)\n```\n\n## Generate Fourier Matrices \n\n### IdentityAndDiagonal: $\n\\begin{bmatrix}\n I_{N/2} & D_{N/2} \\\\[0.3em]\n I_{N/2} & -D_{N/2} \\\\[0.3em]\n \\end{bmatrix}\n$\n \n### FourierHalfNSize: $\n\\begin{bmatrix}\n F_{N/2} & \\\\[0.3em]\n & F_{N/2} \\\\[0.3em]\n \\end{bmatrix}\n$\n\n### EvenOddPermutation: $\n\\begin{bmatrix}\n even_{N/2} & even_{N/2} \\\\[0.3em]\n odd_{N/2} & odd_{N/2} \\\\[0.3em]\n\\end{bmatrix}\n$\n\n### Full picture\n$$\nF_{N}=\\begin{bmatrix}\n I_{N/2} & D_{N/2} \\\\[0.3em]\n I_{N/2} & -D_{N/2} \\\\[0.3em]\n \\end{bmatrix}\n \\cdot\n \\begin{bmatrix}\n F_{N/2} & \\\\[0.3em]\n & F_{N/2} \\\\[0.3em]\n \\end{bmatrix}\n \\cdot\n \\begin{bmatrix}\n even_{N/2} & even_{N/2} \\\\[0.3em]\n odd_{N/2} & odd_{N/2} \\\\[0.3em]\n \\end{bmatrix}\n $$\n\n\n```python\ndef generateFourierMatricesWithOmega(N):\n return createFourierMatrices(N, fourierGenerator)\n```\n\n\n```python\nIdentityAndDiagonal, FHalfNSize, EvenOddPermutation = generateFourierMatricesWithOmega(4)\n```\n\n\n```python\nassert sympy.Matrix([\n [1, 0, 1, 0],\n [0, 1, 0, \u03c9],\n [1, 0, -1, 0],\n [0, 1, 0, -\u03c9],\n]) == IdentityAndDiagonal\n\nIdentityAndDiagonal\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 0 & 1 & 0\\\\0 & 1 & 0 & \\omega\\\\1 & 0 & -1 & 0\\\\0 & 1 & 0 & - \\omega\\end{matrix}\\right]$\n\n\n\n\n```python\nassert sympy.Matrix([\n [1, 1, 0, 0],\n [1, \u03c9, 0, 0],\n [0, 0, 1, 1],\n [0, 0, 1, \u03c9]\n]) == FHalfNSize\n\nFHalfNSize\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 1 & 0 & 0\\\\1 & \\omega & 0 & 0\\\\0 & 0 & 1 & 1\\\\0 & 0 & 1 & \\omega\\end{matrix}\\right]$\n\n\n\n\n```python\nFHalfNSize = sympy\n```\n\n\n```python\nassert sympy.Matrix([\n [1, 0, 0, 0],\n [0, 0, 1, 0],\n [0, 1, 0, 0],\n [0, 0, 0, 1]\n]) == EvenOddPermutation\n\nEvenOddPermutation\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 0 & 0 & 0\\\\0 & 0 & 1 & 0\\\\0 & 1 & 0 & 0\\\\0 & 0 & 0 & 1\\end{matrix}\\right]$\n\n\n\n\n```python\ndef generateFourierMatricesWithExp(N):\n return createFourierMatrices(N, fourierGeneratorWithExp)\n```\n\n\n```python\nIdentityAndDiagonal, FourierHalfNSize, EvenOddPermutation = generateFourierMatricesWithExp(4)\n```\n\n\n```python\nassert sympy.Matrix([\n [1, 0, 1, 0],\n [0, 1, 0, I],\n [1, 0, -1, 0],\n [0, 1, 0, -I]\n]) == 
IdentityAndDiagonal\n\nIdentityAndDiagonal\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 0 & 1 & 0\\\\0 & 1 & 0 & i\\\\1 & 0 & -1 & 0\\\\0 & 1 & 0 & - i\\end{matrix}\\right]$\n\n\n\n\n```python\nassert sympy.Matrix([\n [1, 1, 0, 0],\n [1, -1, 0, 0],\n [0, 0, 1, 1],\n [0, 0, 1, -1]\n]) == FourierHalfNSize\n\nFourierHalfNSize\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 1 & 0 & 0\\\\1 & -1 & 0 & 0\\\\0 & 0 & 1 & 1\\\\0 & 0 & 1 & -1\\end{matrix}\\right]$\n\n\n\n\n```python\nassert sympy.Matrix([\n [1, 0, 0, 0],\n [0, 0, 1, 0],\n [0, 1, 0, 0],\n [0, 0, 0, 1]\n]) == EvenOddPermutation\n\nEvenOddPermutation\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 0 & 0 & 0\\\\0 & 0 & 1 & 0\\\\0 & 1 & 0 & 0\\\\0 & 0 & 0 & 1\\end{matrix}\\right]$\n\n\n\n## Solution for \\#6\n\n\n```python\nIdentityAndDiagonal = sympy.Matrix([\n [1, 0, 0, 1, 0, 0],\n [0, 1, 0, 0, \u03c9, 0],\n [0, 0, 1, 0, 0, \u03c9**2],\n [1, 0, 0, -1, 0, 0],\n [0, 1, 0, 0, -\u03c9, 0],\n [0, 0, 1, 0, 0, -\u03c9**2],\n])\nIdentityAndDiagonal\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 0 & 0 & 1 & 0 & 0\\\\0 & 1 & 0 & 0 & \\omega & 0\\\\0 & 0 & 1 & 0 & 0 & \\omega^{2}\\\\1 & 0 & 0 & -1 & 0 & 0\\\\0 & 1 & 0 & 0 & - \\omega & 0\\\\0 & 0 & 1 & 0 & 0 & - \\omega^{2}\\end{matrix}\\right]$\n\n\n\n\n```python\nFourierHalfNSize = sympy.Matrix([\n [1, 1, 1, 0, 0, 0],\n [1, \u03c9**2, \u03c9**4, 0, 0, 0],\n [1, \u03c9**4, \u03c9**2, 0, 0, 0],\n [0, 0, 0, 1, 1, 1],\n [0, 0, 0, 1, \u03c9**2, \u03c9**4],\n [0, 0, 0, 1, \u03c9**4, \u03c9**2],\n])\n\nFourierHalfNSize\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 1 & 1 & 0 & 0 & 0\\\\1 & \\omega^{2} & \\omega^{4} & 0 & 0 & 0\\\\1 & \\omega^{4} & \\omega^{2} & 0 & 0 & 0\\\\0 & 0 & 0 & 1 & 1 & 1\\\\0 & 0 & 0 & 1 & \\omega^{2} & \\omega^{4}\\\\0 & 0 & 0 & 1 & \\omega^{4} & \\omega^{2}\\end{matrix}\\right]$\n\n\n\n\n```python\nEvenOddPermutation\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 0 & 0 & 0 & 0 & 0\\\\0 & 0 & 1 & 0 & 0 & 0\\\\0 & 0 & 0 & 0 & 1 & 0\\\\0 & 1 & 0 & 0 & 0 & 0\\\\0 & 0 & 0 & 1 & 0 & 0\\\\0 & 0 & 0 & 0 & 0 & 1\\end{matrix}\\right]$\n\n\n\n\n```python\nFourierBasedOnMatrices = IdentityAndDiagonal * FourierHalfNSize * EvenOddPermutation\nFourierBasedOnMatrices\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 1 & 1 & 1 & 1 & 1\\\\1 & \\omega & \\omega^{2} & \\omega^{3} & \\omega^{4} & \\omega^{5}\\\\1 & \\omega^{2} & \\omega^{4} & \\omega^{6} & \\omega^{2} & \\omega^{4}\\\\1 & -1 & 1 & -1 & 1 & -1\\\\1 & - \\omega & \\omega^{2} & - \\omega^{3} & \\omega^{4} & - \\omega^{5}\\\\1 & - \\omega^{2} & \\omega^{4} & - \\omega^{6} & \\omega^{2} & - \\omega^{4}\\end{matrix}\\right]$\n\n\n\n\n```python\n# assert FourierBasedOnMatrices == fourierGeneratorWithExp(N)\n\nfourierGenerator(N)\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 1 & 1 & 1 & 1 & 1\\\\1 & \\omega & \\omega^{2} & \\omega^{3} & \\omega^{4} & \\omega^{5}\\\\1 & \\omega^{2} & \\omega^{4} & \\omega^{6} & \\omega^{8} & \\omega^{10}\\\\1 & \\omega^{3} & \\omega^{6} & \\omega^{9} & \\omega^{12} & \\omega^{15}\\\\1 & \\omega^{4} & \\omega^{8} & \\omega^{12} & \\omega^{16} & \\omega^{20}\\\\1 & \\omega^{5} & \\omega^{10} & \\omega^{15} & \\omega^{20} & \\omega^{25}\\end{matrix}\\right]$\n\n\n\n\n```python\nN = 4\nsympy.Matrix(\n sympy.fft(sympy.Matrix.eye(2, 2), 1)\n)\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}2\\\\1.0 - 1.0 i\\\\0\\\\1.0 + 1.0 i\\end{matrix}\\right]$\n\n\n\n\n```python\nN = 16\nhalf = int(N/2)\n\nsympy.fft(sympy.Matrix.eye(2), 
1)\n\n```\n\n\n\n\n [2, 1.0 - 1.0*I, 0, 1.0 + 1.0*I]\n\n\n\n\n```python\nsympy.Matrix.eye(half)\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\\\0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\\\0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\\\0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\\\0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\\\0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\\\0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\\\0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\\end{matrix}\\right]$\n\n\n", "meta": {"hexsha": "0b369a1e7c5d0d10b2b78e2f98da03912a312891", "size": 40907, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "4.FourierSeriesAndIntegrals/3.DiscretFourierTransformAndTheFFT/ProblemSet4.3.ipynb", "max_stars_repo_name": "nickovchinnikov/Computational-Science-and-Engineering", "max_stars_repo_head_hexsha": "45620e432c97fce68a24e2ade9210d30b341d2e4", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 8, "max_stars_repo_stars_event_min_datetime": "2021-01-14T08:00:23.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-31T14:00:11.000Z", "max_issues_repo_path": "4.FourierSeriesAndIntegrals/3.DiscretFourierTransformAndTheFFT/ProblemSet4.3.ipynb", "max_issues_repo_name": "nickovchinnikov/Computational-Science-and-Engineering", "max_issues_repo_head_hexsha": "45620e432c97fce68a24e2ade9210d30b341d2e4", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "4.FourierSeriesAndIntegrals/3.DiscretFourierTransformAndTheFFT/ProblemSet4.3.ipynb", "max_forks_repo_name": "nickovchinnikov/Computational-Science-and-Engineering", "max_forks_repo_head_hexsha": "45620e432c97fce68a24e2ade9210d30b341d2e4", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-01-25T15:21:40.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-25T15:21:40.000Z", "avg_line_length": 25.4872274143, "max_line_length": 471, "alphanum_fraction": 0.4005182487, "converted": true, "num_tokens": 6913, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9304582497090321, "lm_q2_score": 0.8872045862611168, "lm_q1q2_score": 0.8255068264663447}} {"text": "## RIHAD VARIAWA, Data Scientist - Who has fun LEARNING, EXPLORING & GROWING\n## Functions\nSo far in this course we've explored equations that perform algebraic operations to produce one or more results. A *function* is a way of encapsulating an operation that takes an input and produces exactly one ouput.\n\nFor example, consider the following function definition:\n\n\\begin{equation}f(x) = x^{2} + 2\\end{equation}\n\nThis defines a function named ***f*** that accepts one input (***x***) and returns a single value that is the result calculated by the expression *x2 + 2*.\n\nHaving defined the function, we can use it for any input value. For example:\n\n\\begin{equation}f(3) = 11\\end{equation}\n\nYou've already seen a few examples of Python functions, which are defined using the **def** keyword. However, the strict definition of an algebraic function is that it must return a single value. Here's an example of defining and using a Python function that meets this criteria:\n\n\n```python\n# define a function to return x^2 + 2\ndef f(x):\n return x**2 + 2\n\n# call the function\nf(3)\n```\n\nYou can use functions in equations, just like any other term. 
For example, consider the following equation:\n\n\\begin{equation}y = f(x) - 1\\end{equation}\n\nTo calculate a value for ***y***, we take the ***f*** of ***x*** and subtract 1. So assuming that ***f*** is defined as previously, given an ***x*** value of 4, this equation returns a ***y*** value of **17** (*f*(4) returns 42 + 2, so 16 + 2 = 18; and then the equation subtracts 1 to give us 17). Here it is in Python:\n\n\n```python\nx = 4\ny = f(x) - 1\nprint(y)\n```\n\nOf course, the value returned by a function depends on the input; and you can graph this with the iput (let's call it ***x***) on one axis and the output (***f(x)***) on the other.\n\n\n```python\n%matplotlib inline\n\nimport numpy as np\nfrom matplotlib import pyplot as plt\n\n# Create an array of x values from -100 to 100\nx = np.array(range(-100, 101))\n\n# Set up the graph\nplt.xlabel('x')\nplt.ylabel('f(x)')\nplt.grid()\n\n# Plot x against f(x)\nplt.plot(x,f(x), color='purple')\n\nplt.show()\n```\n\nAs you can see (if you hadn't already figured it out), our function is a *quadratic function* - it returns a squared value that results in a parabolic graph when the output for multiple input values are plotted.\n\n## Bounds of a Function\nSome functions will work for any input and may return any output. For example, consider the function ***u*** defined here:\n\n\\begin{equation}u(x) = x + 1\\end{equation}\n\nThis function simply adds 1 to whatever input is passed to it, so it will produce a defined output for any value of ***x*** that is a *real* number; in other words, any \"regular\" number - but not an *imaginary* number like √-1, or ∞ (infinity). You can specify the set of real numbers using the symbol ${\\rm I\\!R}$ (note the double stroke). The values that can be used for ***x*** can be expressed as a *set*, which we indicate by enclosing all of the members of the set in \"{...}\" braces; so to indicate the set of all possible values for x such that x is a member of the set of all real numbers, we can use the following expression:\n\n\\begin{equation}\\{x \\in \\rm I\\!R\\}\\end{equation}\n\n\n### Domain of a Function\nWe call the set of numbers for which a function can return value it's *domain*, and in this case, the domain of ***u*** is the set of all real numbers; which is actually the default assumption for most functions.\n\nNow consider the following function ***g***:\n\n\\begin{equation}g(x) = (\\frac{12}{2x})^{2}\\end{equation}\n\nIf we use this function with an ***x*** value of **2**, we would get the output **9**; because (12 ÷ (2•2))2 is 9. Similarly, if we use the value **-3** for ***x***, the output will be **4**. However, what happens when we apply this function to an ***x*** value of **0**? Anything divided by 0 is undefined, so the function ***g*** doesn't work for an ***x*** value of 0.\n\nSo we need a way to denote the domain of the function ***g*** by indicating the input values for which a defined output can be returned. Specifically, we need to restrict ***x*** to a specific list of values - specifically any real number that is not 0. 
To indicate this, we can use the following notation:\n\n\\begin{equation}\\{x \\in \\rm I\\!R\\;\\;|\\;\\; x \\ne 0 \\}\\end{equation}\n\nThis is interpreted as *Any value for x where x is in the set of real numbers such that x is not equal to 0*, and we can incorporate this into the function's definition like this:\n\n\\begin{equation}g(x) = (\\frac{12}{2x})^{2}, \\{x \\in \\rm I\\!R\\;\\;|\\;\\; x \\ne 0 \\}\\end{equation}\n\nOr more simply:\n\n\\begin{equation}g(x) = (\\frac{12}{2x})^{2},\\;\\; x \\ne 0\\end{equation}\n\nWhen you plot the output of a function, you can indicate the gaps caused by input values that are not in the function's domain by plotting an empty circle to show that the function is not defined at this point:\n\n\n```python\n%matplotlib inline\n\n# Define function g\ndef g(x):\n if x != 0:\n return (12/2*x)**2\n\n# Plot output from function g\nimport numpy as np\nfrom matplotlib import pyplot as plt\n\n# Create an array of x values from -100 to 100\nx = range(-100, 101)\n\n# Get the corresponding y values from the function\ny = [g(a) for a in x]\n\n# Set up the graph\nplt.xlabel('x')\nplt.ylabel('g(x)')\nplt.grid()\n\n# Plot x against g(x)\nplt.plot(x,y, color='purple')\n\n# plot an empty circle to show the undefined point\nplt.plot(0,g(0.0000001), color='purple', marker='o', markerfacecolor='w', markersize=8)\n\nplt.show()\n```\n\nNote that the function works for every value other than 0; so the function is defined for x = 0.000000001, and for x = -0.000000001; it only fails to return a defined value for exactly 0.\n\nOK, let's take another example. Consider this function:\n\n\\begin{equation}h(x) = 2\\sqrt{x}\\end{equation}\n\nApplying this function to a non-negative ***x*** value returns a meaningful output; but for any value where ***x*** is negative, the output is undefined.\n\nWe can indicate the domain of this function in its definition like this:\n\n\\begin{equation}h(x) = 2\\sqrt{x}, \\{x \\in \\rm I\\!R\\;\\;|\\;\\; x \\ge 0 \\}\\end{equation}\n\nThis is interpreted as *Any value for x where x is in the set of real numbers such that x is greater than or equal to 0*.\n\nOr, you might see this in a simpler format:\n\n\\begin{equation}h(x) = 2\\sqrt{x},\\;\\; x \\ge 0\\end{equation}\n\nNote that the symbol ≥ is used to indicate that the value must be *greater than **or equal to*** 0; and this means that **0** is included in the set of valid values. To indicate that the value must be *greater than 0, **not including 0***, use the > symbol. 
You can also use the equivalent symbols for *less than or equal to* (≤) and *less than* (<).\n\nWhen plotting a function line that marks the end of a continuous range, the end of the line is shown as a circle, which is filled if the function includes the value at that point, and unfilled if it does not.\n\nHere's the Python to plot function ***h***:\n\n\n```python\n%matplotlib inline\n\ndef h(x):\n if x >= 0:\n import numpy as np\n return 2 * np.sqrt(x)\n\n# Plot output from function h\nimport numpy as np\nfrom matplotlib import pyplot as plt\n\n# Create an array of x values from -100 to 100\nx = range(-100, 101)\n\n# Get the corresponding y values from the function\ny = [h(a) for a in x]\n\n# Set up the graph\nplt.xlabel('x')\nplt.ylabel('h(x)')\nplt.grid()\n\n# Plot x against h(x)\nplt.plot(x,y, color='purple')\n\n# plot a filled circle at the end to indicate a closed interval\nplt.plot(0, h(0), color='purple', marker='o', markerfacecolor='purple', markersize=8)\n\nplt.show()\n```\n\nSometimes, a function may be defined for a specific *interval*; for example, for all values between 0 and 5:\n\n\\begin{equation}j(x) = x + 2,\\;\\; x \\ge 0 \\text{ and } x \\le 5\\end{equation}\n\nIn this case, the function is defined for ***x*** values between 0 and 5 *inclusive*; in other words, **0** and **5** are included in the set of defined values. This is known as a *closed* interval and can be indicated like this:\n\n\\begin{equation}\\{x \\in \\rm I\\!R\\;\\;|\\;\\; 0 \\le x \\le 5 \\}\\end{equation}\n\nIt could also be indicated like this:\n\n\\begin{equation}\\{x \\in \\rm I\\!R\\;\\;|\\;\\; [0,5] \\}\\end{equation}\n\nIf the condition in the function was **x > 0 and x < 5**, then the interval would be described as *open* and 0 and 5 would *not* be included in the set of defined values. This would be indicated using one of the following expressions:\n\n\\begin{equation}\\{x \\in \\rm I\\!R\\;\\;|\\;\\; 0 \\lt x \\lt 5 \\}\\end{equation}\n\\begin{equation}\\{x \\in \\rm I\\!R\\;\\;|\\;\\; (0,5) \\}\\end{equation}\n\nHere's function ***j*** in Python:\n\n\n```python\n%matplotlib inline\n\ndef j(x):\n if x >= 0 and x <= 5:\n return x + 2\n\n \n# Plot output from function j\nimport numpy as np\nfrom matplotlib import pyplot as plt\n\n# Create an array of x values from -100 to 100\nx = range(-100, 101)\ny = [j(a) for a in x]\n\n# Set up the graph\nplt.xlabel('x')\nplt.ylabel('j(x)')\nplt.grid()\n\n# Plot x against k(x)\nplt.plot(x, y, color='purple')\n\n# plot a filled circle at the ends to indicate an open interval\nplt.plot(0, j(0), color='purple', marker='o', markerfacecolor='purple', markersize=8)\nplt.plot(5, j(5), color='purple', marker='o', markerfacecolor='purple', markersize=8)\n\nplt.show()\n```\n\nNow, suppose we have a function like this:\n\n\\begin{equation}\nk(x) = \\begin{cases}\n 0, & \\text{if } x = 0, \\\\\n 1, & \\text{if } x = 100\n\\end{cases}\n\\end{equation}\n\nIn this case, the function has highly restricted domain; it only returns a defined output for 0 and 100. No output for any other ***x*** value is defined. In this case, the set of the domain is:\n\n\\begin{equation}\\{0,100\\}\\end{equation}\n\nNote that this does not include all real numbers, it only includes 0 and 100.\n\nWhen we use Python to plot this function, note that it only makes sense to plot a scatter plot showing the individual values returned, there is no line in between because the function is not continuous between the values within the domain. 
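\n\nOne way to make such a finite domain explicit in code is to keep the valid inputs in a set and return a value only for members of that set. The short sketch below is purely illustrative (the `k_restricted` name and the dictionary are not part of the original definition of ***k***); the notebook's own implementation, which follows, does the same job with explicit `if`/`elif` tests before plotting the two points.\n\n\n```python\n# Hypothetical helper: the domain is the finite set {0, 100}\nvalid_inputs = {0, 100}\noutputs = {0: 0, 100: 1}\n\ndef k_restricted(x):\n    # Return the mapped value for members of the domain, otherwise None\n    return outputs[x] if x in valid_inputs else None\n\nprint(k_restricted(0), k_restricted(100), k_restricted(50))   # 0 1 None\n```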
\n\n\n```python\n%matplotlib inline\n\ndef k(x):\n if x == 0:\n return 0\n elif x == 100:\n return 1\n\n \n# Plot output from function k\nfrom matplotlib import pyplot as plt\n\n# Create an array of x values from -100 to 100\nx = range(-100, 101)\n# Get the k(x) values for every value in x\ny = [k(a) for a in x]\n\n# Set up the graph\nplt.xlabel('x')\nplt.ylabel('k(x)')\nplt.grid()\n\n# Plot x against k(x)\nplt.scatter(x, y, color='purple')\n\nplt.show()\n```\n\n### Range of a Function\nJust as the domain of a function defines the set of values for which the function is defined, the *range* of a function defines the set of possible outputs from the function.\n\nFor example, consider the following function:\n\n\\begin{equation}p(x) = x^{2} + 1\\end{equation}\n\nThe domain of this function is all real numbers. However, this is a quadratic function, so the output values will form a parabola; and since the function has no negative coefficient or constant, it will be an upward opening parabola with a vertex that has a y value of 1.\n\nSo what does that tell us? Well, the minimum value that will be returned by this function is 1, so it's range is:\n\n\\begin{equation}\\{p(x) \\in \\rm I\\!R\\;\\;|\\;\\; p(x) \\ge 1 \\}\\end{equation}\n\nLet's create and plot the function for a range of ***x*** values in Python:\n\n\n```python\n%matplotlib inline\n\n# define a function to return x^2 + 1\ndef p(x):\n return x**2 + 1\n\n\n# Plot the function\nimport numpy as np\nfrom matplotlib import pyplot as plt\n\n# Create an array of x values from -100 to 100\nx = np.array(range(-100, 101))\n\n# Set up the graph\nplt.xlabel('x')\nplt.ylabel('p(x)')\nplt.grid()\n\n# Plot x against f(x)\nplt.plot(x,p(x), color='purple')\n\nplt.show()\n```\n\nNote that the ***p(x)*** values in the plot drop exponentially for ***x*** values that are negative, and then rise exponentially for positive ***x*** values; but the minimum value returned by the function (for an *x* value of 0) is **1**.\n", "meta": {"hexsha": "72a34ffef9e07792be3e6a651eebaa0135cc0f91", "size": 16804, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "AI Professional/2 - Essential Mathematics For Artificial Intelligence/DAT256x/Module01/01-08-Functions.ipynb", "max_stars_repo_name": "2series/DataScience-Courses", "max_stars_repo_head_hexsha": "5ee71305721a61dfc207d8d7de67a9355530535d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-02-23T07:40:39.000Z", "max_stars_repo_stars_event_max_datetime": "2020-02-23T07:40:39.000Z", "max_issues_repo_path": "AI Professional/2 - Essential Mathematics For Artificial Intelligence/DAT256x/Module01/01-08-Functions.ipynb", "max_issues_repo_name": "2series/DataScience-Courses", "max_issues_repo_head_hexsha": "5ee71305721a61dfc207d8d7de67a9355530535d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "AI Professional/2 - Essential Mathematics For Artificial Intelligence/DAT256x/Module01/01-08-Functions.ipynb", "max_forks_repo_name": "2series/DataScience-Courses", "max_forks_repo_head_hexsha": "5ee71305721a61dfc207d8d7de67a9355530535d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2019-12-05T11:04:58.000Z", "max_forks_repo_forks_event_max_datetime": "2020-02-26T10:42:08.000Z", "avg_line_length": 37.0949227373, "max_line_length": 661, "alphanum_fraction": 
0.5684955963, "converted": true, "num_tokens": 3373, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9046505299595162, "lm_q2_score": 0.9124361700013356, "lm_q1q2_score": 0.8254358647459394}} {"text": "# Solving orbital equations with different algorithms\n\nThis notebook was adapted from `Orbit_games.ipynb`.\n\n\n\nWe consider energy plots and orbital solutions in polar coordinates for the general potential energy\n\n$\\begin{align}\n U(r) = k r^n\n\\end{align}$\n\nfor different ODE solution algorithms. The `solve_ivp` function can itself be specified to use different solution methods (with the `method` keyword). Here we will set it by default to use 'RK23', which is a variant on the Runge-Kutta second-order algorithm. Second-order in this context means that the accuracy of a calculation will improve by a factor of $10^2 = 100$ if $\\Delta t$ is reduced by a factor of ten. \n\nWe will compare it with the crudest algorithm, Euler's method, which is first order, and a second-order algorithm called Leapfrog, which is designed to be precisely time-reversal invariant. This property guarantees conservation of energy, which is not true of the other algorithms we will consider.\n\nTo solve the differential equations for orbits, we have defined the $\\mathbf{y}$ \nand $d\\mathbf{y}/dt$ vectors as\n\n$\\begin{align}\n \\mathbf{y} = \\left(\\begin{array}{c} r(t) \\\\ \\dot r(t) \\\\ \\phi(t) \\end{array} \\right) \n \\qquad\n \\frac{d\\mathbf{y}}{dt} \n = \\left(\\begin{array}{c} \\dot r(t) \\\\ \\ddot r(t) \\\\ \\dot\\phi(t) \\end{array} \\right) \n = \\left(\\begin{array}{c} \\dot r(t) \\\\ \n -\\frac{1}{\\mu}\\frac{dU_{\\rm eff}(r)}{dr} \\\\ \n \\frac{l}{\\mu r^2} \\end{array} \\right) \n\\end{align}$\n\nwhere we have substituted the differential equations for $\\ddot r$ and $\\dot\\phi$.\n\nThen Euler's method can be written as a simple prescription to obtain $\\mathbf{y}_{i+1}$ \nfrom $\\mathbf{y}_i$, where the subscripts label the elements of the `t_pts` array: \n$\\mathbf{y}_{i+1} = \\mathbf{y}_i + \\left(d\\mathbf{y}/dt\\right)_i \\Delta t$, or, by components:\n\n$\\begin{align}\n r_{i+1} &= r_i + \\frac{d\\mathbf{y}_i[0]}{dt} \\Delta t \\\\\n \\dot r_{i+1} &= \\dot r_{i} + \\frac{d\\mathbf{y}_i[1]}{dt} \\Delta t \\\\\n \\phi_{i+1} &= \\phi_i + \\frac{d\\mathbf{y}_i[2]}{dt} \\Delta t\n\\end{align}$\n\n**Look at the** `solve_ode_Euler` **method below and verify the algorithm is correctly implemented.** \n\nThe leapfrog method does better by evaluating $\\dot r$ at a halfway time step before and after the $r$ evaluation, \nwhich is both more accurate and incorporates time reversal: \n\n$\\begin{align}\n \\dot r_{i+1/2} &= \\dot r_{i} + \\frac{d\\mathbf{y}_i[1]}{dt} \\Delta t/2 \\\\\n r_{i+1} &= r_i + \\dot r_{i+1/2} \\Delta t \\\\\n \\dot r_{i+1} &= \\dot r_{i+1/2} + \\frac{d\\mathbf{y}_{i+1}[1]}{dt} \\Delta t/2 \\\\\n \\phi_{i+1} &= \\phi_i + \\frac{d\\mathbf{y}_i[2]}{dt} \\Delta t\n\\end{align}$\n\n**Look at the** `solve_ode_Leapfrog` **method below and verify the algorithm is correctly implemented.** \n\nA third method is the second-order Runge-Kutta algorithm, which we invoke from `solve_ivp` as `RK23`. \nIt does not use a fixed time-step as in our \"homemade\" implementations, so there is not a direct \ncomparison, but we can still check if it conserves energy.\n\n**Run the notebook. You are to turn in and comment on the \"Change in energy with time\" plot at the end. \nWhere do you see energy conserved or not conserved? 
Show that Euler is first order and leapfrog is second \norder by changing $\\Delta t$; describe what you did and what you found.**\n\n**Try another potential to see if you get the same general conclusions.**\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.integrate import solve_ivp\n```\n\n\n```python\n# Change the common font size\nfont_size = 14\nplt.rcParams.update({'font.size': font_size})\n```\n\n\n```python\nclass Orbit:\n \"\"\"\n Potentials and associated differential equations for central force motion\n with the potential U(r) = k r^n. Several algorithms for integration of \n ordinary differential equations are now available. \n \"\"\"\n \n def __init__(self, ang_mom, n, k=1, mu=1):\n self.ang_mom = ang_mom\n self.n = n\n self.k = k\n self.mu = mu\n \n def U(self, r):\n \"\"\"Potential energy of the form U = kr^n.\"\"\"\n return self.k * r**self.n\n \n def Ucf(self, r):\n \"\"\"Centrifugal potential energy\"\"\"\n return self.ang_mom**2 / (2. * self.mu * r**2)\n \n def Ueff(self, r):\n \"\"\"Effective potential energy\"\"\"\n return self.U(r) + self.Ucf(r)\n \n def U_deriv(self, r):\n \"\"\"dU/dr\"\"\"\n return self.n * self.k * r**(self.n - 1)\n \n def Ucf_deriv(self, r):\n \"\"\"dU_cf/dr\"\"\"\n return -2. * self.ang_mom**2 / (2. * self.mu * r**3)\n \n def Ueff_deriv(self, r):\n \"\"\"dU_eff/dr\"\"\"\n return self.U_deriv(r) + self.Ucf_deriv(r)\n \n def dy_dt(self, t, y):\n \"\"\"\n This function returns the right-hand side of the diffeq: \n [dr/dt d^2r/dt^2 dphi/dt]\n \n Parameters\n ----------\n t : float\n time \n y : float\n 3-component vector with y[0] = r(t), y[1] = dr/dt, y[2] = phi\n \n \"\"\"\n return [ y[1], \n -1./self.mu * self.Ueff_deriv(y[0]), \n self.ang_mom / (self.mu * y[0]**2) ]\n \n \n def solve_ode(self, t_pts, r_0, r_dot_0, phi_0,\n method='RK23',\n abserr=1.0e-8, relerr=1.0e-8):\n \"\"\"\n Solve the ODE given initial conditions.\n Use solve_ivp with the option of specifying the method.\n Specify smaller abserr and relerr to get more precision.\n \"\"\"\n y = [r_0, r_dot_0, phi_0] \n solution = solve_ivp(self.dy_dt, (t_pts[0], t_pts[-1]), \n y, t_eval=t_pts, method=method, \n atol=abserr, rtol=relerr)\n r, r_dot, phi = solution.y\n return r, r_dot, phi\n \n def solve_ode_Euler(self, t_pts, r_0, r_dot_0, phi_0):\n \"\"\"\n Solve the ODE given initial conditions with the Euler method.\n The accuracy is determined by the spacing of times in t_pts.\n \"\"\"\n \n delta_t = t_pts[1] - t_pts[0]\n \n # initialize the arrays for r, rdot, phi with zeros\n num_t_pts = len(t_pts) # length of the array t_pts\n r = np.zeros(num_t_pts)\n r_dot = np.zeros(num_t_pts)\n phi = np.zeros(num_t_pts)\n \n # initial conditions\n r[0] = r_0\n r_dot[0] = r_dot_0\n phi[0] = phi_0\n \n # step through the differential equation\n for i in np.arange(num_t_pts - 1):\n t = t_pts[i]\n y = [r[i], r_dot[i], phi[i]]\n r[i+1] = r[i] + self.dy_dt(t,y)[0] * delta_t\n r_dot[i+1] = r_dot[i] + self.dy_dt(t,y)[1] * delta_t \n phi[i+1] = phi[i] + self.dy_dt(t,y)[2] * delta_t\n return r, r_dot, phi \n \n \n def solve_ode_Leapfrog(self, t_pts, r_0, r_dot_0, phi_0):\n \"\"\"\n Solve the ODE given initial conditions with the Leapfrog method.\n \"\"\"\n delta_t = t_pts[1] - t_pts[0]\n \n # initialize the arrays for r, rdot, r_dot_half, phi with zeros\n num_t_pts = len(t_pts)\n r = np.zeros(num_t_pts)\n r_dot = np.zeros(num_t_pts)\n r_dot_half = np.zeros(num_t_pts)\n phi = np.zeros(num_t_pts)\n \n # initial conditions\n r[0] = r_0\n r_dot[0] = r_dot_0\n phi[0] = phi_0\n \n # 
step through the differential equation\n for i in np.arange(num_t_pts - 1):\n t = t_pts[i]\n y = [r[i], r_dot[i], phi[i]]\n r_dot_half[i] = r_dot[i] + self.dy_dt(t, y)[1] * delta_t/2.\n r[i+1] = r[i] + r_dot_half[i] * delta_t\n \n y = [r[i+1], r_dot[i], phi[i]]\n r_dot[i+1] = r_dot_half[i] + self.dy_dt(t, y)[1] * delta_t/2.\n \n phi[i+1] = phi[i] + self.dy_dt(t,y)[2] * delta_t\n return r, r_dot, phi \n \n \n def energy(self, t_pts, r, r_dot):\n \"\"\"Evaluate the energy as a function of time\"\"\"\n return (self.mu/2.) * r_dot**2 + self.Ueff(r)\n```\n\n\n```python\ndef start_stop_indices(t_pts, plot_start, plot_stop):\n start_index = (np.fabs(t_pts-plot_start)).argmin() # index in t_pts array \n stop_index = (np.fabs(t_pts-plot_stop)).argmin() # index in t_pts array \n return start_index, stop_index\n```\n\n# Pick a potential\n\n\n```python\nn = 2 \nk = 1. \nang_mom = 2. \no1 = Orbit(ang_mom, n=n, k=k, mu=1)\n\nfig_2 = plt.figure(figsize=(7,5))\nax_2 = fig_2.add_subplot(1,1,1)\n\nr_pts = np.linspace(0.001, 3., 200)\nU_pts = o1.U(r_pts)\nUcf_pts = o1.Ucf(r_pts)\nUeff_pts = o1.Ueff(r_pts)\n\nax_2.plot(r_pts, U_pts, linestyle='dashed', color='blue', label='U(r)')\nax_2.plot(r_pts, Ucf_pts, linestyle='dotted', color='green', label='Ucf(r)')\nax_2.plot(r_pts, Ueff_pts, linestyle='solid', color='red', label='Ueff(r)')\n\nax_2.set_xlim(0., 3.)\nax_2.set_ylim(-1., 10.)\nax_2.set_xlabel('r')\nax_2.set_ylabel('U(r)')\nax_2.set_title(f'$n = {n},\\ \\ k = {k},\\ \\ l = {ang_mom}$')\nax_2.legend(loc='upper center')\n\nax_2.axhline(0., color='black', alpha=0.3)\n\n\nfig_2.tight_layout()\n\nfig_2.savefig('Gravitation_orbit_1.png')\n\n```\n\n## Plot orbit and check energy conservation\n\n\n```python\n# Plotting time \nt_start = 0.\nt_end = 10.\ndelta_t = 0.001\n\nt_pts = np.arange(t_start, t_end+delta_t, delta_t) \n\n# Initial conditions\nr_0 = 1. # 1.\nr_dot_0 = 0.\nphi_0 = 0.0\nr_pts, r_dot_pts, phi_pts = o1.solve_ode(t_pts, r_0, r_dot_0, phi_0)\nr_pts_Euler, r_dot_pts_Euler, phi_pts_Euler \\\n = o1.solve_ode_Euler(t_pts, r_0, r_dot_0, phi_0)\nr_pts_LF, r_dot_pts_LF, phi_pts_LF \\\n = o1.solve_ode_Leapfrog(t_pts, r_0, r_dot_0, phi_0)\n\nc = o1.ang_mom**2 / (np.abs(o1.k) * o1.mu)\nepsilon = c / r_0 - 1.\nenergy_0 = o1.mu/2. 
* r_dot_0**2 + o1.Ueff(r_0)\nprint(f'energy = {energy_0:.2f}')\nprint(f'eccentricity = {epsilon:.2f}')\n```\n\n energy = 3.00\n eccentricity = 3.00\n\n\n\n```python\nfig_4 = plt.figure(figsize=(8,8))\n\noverall_title = 'Orbit: ' + \\\n rf' $n = {o1.n},$' + \\\n rf' $k = {o1.k:.1f},$' + \\\n rf' $l = {o1.ang_mom:.1f},$' + \\\n rf' $r_0 = {r_0:.1f},$' + \\\n rf' $\\dot r_0 = {r_dot_0:.1f},$' + \\\n rf' $\\phi_0 = {phi_0:.1f}$' + \\\n '\\n' # \\n means a new line (adds some space here)\nfig_4.suptitle(overall_title, va='baseline')\n\nax_4a = fig_4.add_subplot(2,2,1)\nax_4a.plot(t_pts, r_pts, color='black', label='RK23')\nax_4a.plot(t_pts, r_pts_Euler, color='blue', label='Euler')\nax_4a.plot(t_pts, r_pts_LF, color='red', label='Leapfrog')\nax_4a.set_xlabel(r'$t$')\nax_4a.set_ylabel(r'$r$')\nax_4a.set_title('Time dependence of radius')\nax_4a.legend()\n\nax_4b = fig_4.add_subplot(2,2,2)\nax_4b.plot(t_pts, phi_pts/(2.*np.pi), color='black', label='RK23')\nax_4b.plot(t_pts, phi_pts_Euler/(2.*np.pi), color='blue', label='Euler')\nax_4b.plot(t_pts, phi_pts_LF/(2.*np.pi), color='red', label='Leapfrog')\nax_4b.set_xlabel(r'$t$')\nax_4b.set_ylabel(r'$\\phi/2\\pi$')\nax_4b.set_title(r'Time dependence of $\\phi$')\nax_4b.legend()\n\nax_4c = fig_4.add_subplot(2,2,3)\nax_4c.plot(r_pts*np.cos(phi_pts), r_pts*np.sin(phi_pts), \n color='black', label='RK23')\nax_4c.plot(r_pts_Euler*np.cos(phi_pts_Euler), \n r_pts_Euler*np.sin(phi_pts_Euler), \n color='blue', label='Euler')\nax_4c.plot(r_pts_LF*np.cos(phi_pts_LF), \n r_pts_LF*np.sin(phi_pts_LF), \n color='red', label='Leapfrog')\nax_4c.set_xlabel(r'$x$')\nax_4c.set_ylabel(r'$y$')\nax_4c.set_aspect('equal')\nax_4c.set_title('Cartesian plot')\nax_4c.legend()\n\nax_4d = fig_4.add_subplot(2,2,4, polar=True)\nax_4d.plot(phi_pts, r_pts, color='black', label='RK23')\nax_4d.plot(phi_pts_Euler, r_pts_Euler, color='blue', label='Euler')\nax_4d.plot(phi_pts_LF, r_pts_LF, color='red', label='Leapfrog')\nax_4d.set_title('Polar plot', pad=20.)\nax_4d.legend()\n\n\nfig_4.tight_layout()\nfig_4.savefig('Leapfrog_orbit_1.png', dpi=200, bbox_inches='tight')\n\n\n```\n\n\n```python\nE_tot_pts = o1.energy(t_pts, r_pts, r_dot_pts)\nE_tot_0 = E_tot_pts[0]\nE_tot_rel_pts = np.abs((E_tot_pts - E_tot_0)/E_tot_0)\n\nE_tot_pts_Euler = o1.energy(t_pts, r_pts_Euler, r_dot_pts_Euler)\nE_tot_0_Euler = E_tot_pts_Euler[0]\nE_tot_rel_pts_Euler = np.abs((E_tot_pts_Euler - E_tot_0_Euler)/E_tot_0_Euler)\n\nE_tot_pts_LF = o1.energy(t_pts, r_pts_LF, r_dot_pts_LF)\nE_tot_0_LF = E_tot_pts_LF[0]\nE_tot_rel_pts_LF = np.abs((E_tot_pts_LF - E_tot_0_LF)/E_tot_0_LF)\n\n```\n\n\n```python\nfig_5 = plt.figure(figsize=(6,6))\n\noverall_title = 'Orbit: ' + \\\n rf' $n = {o1.n},$' + \\\n rf' $k = {o1.k:.1f},$' + \\\n rf' $l = {o1.ang_mom:.1f},$' + \\\n rf' $r_0 = {r_0:.1f},$' + \\\n rf' $\\dot r_0 = {r_dot_0:.1f},$' + \\\n rf' $\\phi_0 = {phi_0:.1f}$' + \\\n '\\n' # \\n means a new line (adds some space here)\nfig_5.suptitle(overall_title, va='baseline')\n\nax_5a = fig_5.add_subplot(1,1,1)\n#ax_5a.semilogy(t_pts, np.abs(E_tot_pts), color='black', label=r'$E(t)$')\nax_5a.semilogy(t_pts, E_tot_rel_pts, \n color='green', label=r'$\\Delta E(t)$ RK23')\nax_5a.semilogy(t_pts, E_tot_rel_pts_Euler, \n color='blue', label=r'$\\Delta E(t)$ Euler')\nax_5a.semilogy(t_pts, E_tot_rel_pts_LF, \n color='red', label=r'$\\Delta E(t)$ Leapfrog')\nax_5a.set_ylim(1.e-10, 1.e-2) # (1.e-12, 5)\nax_5a.set_xlabel(r'$t$')\nax_5a.set_ylabel(r'Energy')\nax_5a.set_title('Change in energy with 
time')\nax_5a.legend()\n\nfig_5.tight_layout()\nfig_5.savefig('Leapfrog_energy_test_1.png', dpi=200, bbox_inches='tight')\n\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "0950c6959ee3ec348e2aa8fd394ac2ae9133f299", "size": 202193, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "2020_week_7/.ipynb_checkpoints/Orbital_eqs_with_different_algorithms-checkpoint.ipynb", "max_stars_repo_name": "CLima86/Physics_5300_CDL", "max_stars_repo_head_hexsha": "d9e8ee0861d408a85b4be3adfc97e98afb4a1149", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "2020_week_7/.ipynb_checkpoints/Orbital_eqs_with_different_algorithms-checkpoint.ipynb", "max_issues_repo_name": "CLima86/Physics_5300_CDL", "max_issues_repo_head_hexsha": "d9e8ee0861d408a85b4be3adfc97e98afb4a1149", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "2020_week_7/.ipynb_checkpoints/Orbital_eqs_with_different_algorithms-checkpoint.ipynb", "max_forks_repo_name": "CLima86/Physics_5300_CDL", "max_forks_repo_head_hexsha": "d9e8ee0861d408a85b4be3adfc97e98afb4a1149", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 373.7393715342, "max_line_length": 107668, "alphanum_fraction": 0.9190525884, "converted": true, "num_tokens": 4467, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9046505299595163, "lm_q2_score": 0.912436167620237, "lm_q1q2_score": 0.8254358625918775}} {"text": "```python\nimport numpy as np\nfrom sympy import *\ninit_printing(use_unicode=True) # \u8f38\u51fa\u6578\u5b78\u683c\u5f0f\nimport matplotlib.pylab as plt\n%matplotlib\n```\n\n Using matplotlib backend: Qt5Agg\n\n\n\n```python\nM = MatrixSymbol('M', 3, 3)\n```\n\n\n```python\nMatrix(M)\n```\n\n\n\n\n$$\\left[\\begin{array}{ccc}M_{0, 0} & M_{0, 1} & M_{0, 2}\\\\M_{1, 0} & M_{1, 1} & M_{1, 2}\\\\M_{2, 0} & M_{2, 1} & M_{2, 2}\\end{array}\\right]$$\n\n\n\n\n```python\nx1, x2 = symbols('x1, x2')\nw111, w112, w121, w122 = symbols('w111, w121, w112, w122')\nw211, w212, w221, w222 = symbols('w211, w221, w212, w222')\nb11, b12, b21, b22 = symbols('b11, b12, b21, b22')\nX = Matrix([x1, x2]).T\nW1 = Matrix([[w111, w121], [w112, w122]])\nW2 = Matrix([[w211, w221], [w212, w222]])\nB1 = Matrix([b11, b12]).T\nB2 = Matrix([b21, b22]).T\n```\n\n\n```python\nX * W1 + B1\n```\n\n\n\n\n$$\\left[\\begin{matrix}b_{11} + w_{111} x_{1} + w_{121} x_{2} & b_{12} + w_{112} x_{1} + w_{122} x_{2}\\end{matrix}\\right]$$\n\n\n\n\n```python\n(((X * W1) + B1) * W2) + B2\n```\n\n\n\n\n$$\\left[\\begin{matrix}b_{21} + w_{211} \\left(b_{11} + w_{111} x_{1} + w_{121} x_{2}\\right) + w_{221} \\left(b_{12} + w_{112} x_{1} + w_{122} x_{2}\\right) & b_{22} + w_{212} \\left(b_{11} + w_{111} x_{1} + w_{121} x_{2}\\right) + w_{222} \\left(b_{12} + w_{112} x_{1} + w_{122} x_{2}\\right)\\end{matrix}\\right]$$\n\n\n", "meta": {"hexsha": "139fdeeacc2a3cb8cbaafdb3799b6e93fceb06be", "size": 3504, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "DL/Neural Network.ipynb", "max_stars_repo_name": "sppool/works", "max_stars_repo_head_hexsha": "2ab9c10d7326e0e836f3214f5067b6f070deb2e0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, 
"max_stars_repo_stars_event_min_datetime": "2020-05-27T16:41:42.000Z", "max_stars_repo_stars_event_max_datetime": "2020-05-27T16:41:42.000Z", "max_issues_repo_path": "DL/Neural Network.ipynb", "max_issues_repo_name": "sppool/works", "max_issues_repo_head_hexsha": "2ab9c10d7326e0e836f3214f5067b6f070deb2e0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "DL/Neural Network.ipynb", "max_forks_repo_name": "sppool/works", "max_forks_repo_head_hexsha": "2ab9c10d7326e0e836f3214f5067b6f070deb2e0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-04-11T08:13:29.000Z", "max_forks_repo_forks_event_max_datetime": "2020-05-11T14:15:53.000Z", "avg_line_length": 23.8367346939, "max_line_length": 327, "alphanum_fraction": 0.464326484, "converted": true, "num_tokens": 584, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9425067228145365, "lm_q2_score": 0.8757869965109765, "lm_q1q2_score": 0.8254351319651464}} {"text": "\n\n---\n## 01. Data Analysis. Statistics Basics\n\n\nEduard Larra\u00f1aga (ealarranaga@unal.edu.co)\n\n---\n\n### About this notebook\n\nIn this worksheet, we introduce some basic aspects and definitions of statistics for working with astrophysical data. \n\n---\n\n### Statistics with `numpy`\n \nStatistics are designed to summarize, reduce or describe data. A statistic is a function of the data alone!\n\nConsider a dataset $\\{ x_1, x_2, x_3, ...\\}$\n\nSome important quantities defined to describe the dataset are **average** or **mean**, **median**, **maximum value**, **average of the squares**, etc. \n\nNow, we will explore some of these concepts using the `numpy` package and a set of data taken from the book *Computational Physics* by Mark Newman that can be downloaded from\n\nhttp://www-personal.umich.edu/~mejn/computational-physics/\n\nThe file `sunspots-since1749.txt` contains the observed number of sunspots on the Sun for each month since January 1749. In the file, the first column corresponds to the month and the second the sunspots number. 
\n\n\n```python\nimport matplotlib.pyplot as plt\nimport numpy as np\n```\n\n\n```python\nmonth, sunspots = np.loadtxt('sunspots-since1749.txt', unpack=True)\n```\n\nThe number of samples in the dataset is\n\n\n```python\nprint(month.size, ' months')\n```\n\n 3143 months\n\n\n\n```python\nyears, months = divmod(month.size, 12)\nprint(years, ' years and ', months, ' months')\n```\n\n 261 years and 11 months\n\n\nA scatter plot showing the behavior of the dataset,\n\n\n```python\nfig, ax = plt.subplots(figsize=(7,5))\n\nax.scatter(month, sunspots, marker='.')\nax.set_xlabel(r'months since january 1749')\nax.set_ylabel(r'number of sunspots')\nplt.show()\n```\n\nand a plot showing the same dataset is\n\n\n```python\nfig, ax = plt.subplots(figsize=(7,5))\n\nax.plot(month, sunspots)\nax.set_xlabel(r'months since january 1749')\nax.set_ylabel(r'number of sunspots')\nplt.show()\n```\n\n#### Maximum and Minimum\n\n\n```python\nnp.ndarray.max(sunspots)\n```\n\n\n\n\n 253.8\n\n\n\n\n```python\nnp.ndarray.min(sunspots)\n```\n\n\n\n\n 0.0\n\n\n\n---\n#### Mean and Weighted Average\nThe function `numpy.mean()` returns the arithmetic average of the array elements.\n\nhttps://numpy.org/doc/stable/reference/generated/numpy.mean.html?highlight=mean#numpy.mean\n\n\n\n\n\n\\begin{equation}\n \\text{mean} = \\frac{\\sum x_i}{N}\n\\end{equation}\n\n\nThe monthly mean number of spots in the time period of the data set is\n\n\n\n```python\nmean_sunspots = np.mean(sunspots)\nmean_sunspots\n```\n\n\n\n\n 51.924498886414256\n\n\n\nThe function `numpy.average()` returns the weighted average of the array elements.\n\nhttps://numpy.org/doc/stable/reference/generated/numpy.average.html?highlight=average#numpy.average\n\nIncluding a weight function with values $w_i$, this function uses the formula\n\n\\begin{equation}\n \\text{average} = \\frac{\\sum x_i w_i}{\\sum w_i}\n\\end{equation}\n\n\n\n```python\nw = sunspots/np.ndarray.max(sunspots)\nw\n```\n\n\n\n\n array([0.2285264 , 0.24665091, 0.27580772, ..., 0.09929078, 0.09259259,\n 0.08510638])\n\n\n\n\n```python\nnp.average(sunspots, weights=w)\n```\n\n\n\n\n 89.74573872218345\n\n\n\nIf we do not include the weights or if all weights are equal, the average is equivalent to the mean\n\n\n```python\nnp.average(sunspots)\n```\n\n\n\n\n 51.924498886414256\n\n\n\n#### Median and Mode\n\nThe function `numpy.median()` returns the median of the array elements along a given axis. \n\nGiven a vector V of length N, the median of V is the middle value of a sorted copy of V, when N is odd, and the average of the two middle values of the sorted copy of V when N is even.\n\nhttps://numpy.org/doc/stable/reference/generated/numpy.median.html\n\n\n```python\nnp.median(sunspots)\n```\n\n\n\n\n 41.5\n\n\n\nThe mode corresponds to the value ocurring most frequently in the dataset and it can be seen as the location of the peak of the histogram of the data.\n\nAlthough the `numpy` package does not have a mode function, we can use the function `scipy.mode()`to calculate the modal value. 
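\n\nThe cell below uses SciPy for this. If you prefer to stay within NumPy, the same modal value can also be recovered from `np.unique` (a minimal sketch, not part of the original notebook):\n\n\n```python\n# Count how often each distinct value occurs, then pick the most frequent one\nvalues, counts = np.unique(sunspots, return_counts=True)\nprint(values[np.argmax(counts)], counts.max())\n```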
\n\n\n\n```python\nfrom scipy import stats \n\nstats.mode(sunspots)\n```\n\n\n\n\n ModeResult(mode=array([0.]), count=array([67]))\n\n\n\nThis result indicates that the mode is the number $0.$ and that it appears 67 times in the sunspots dataset.\n\nA histogram may help to show the behavior of the data,\n\n\n```python\nfig, ax = plt.subplots(figsize=(7,5))\n\nax.hist(sunspots, bins=25)\nax.set_xlabel(r'number of sunspots')\nax.set_ylabel(r'')\nplt.show()\n```\n\n\n```python\nfig, ax = plt.subplots(figsize=(7,5))\n\nax.hist(sunspots, bins=400)\nax.set_xlabel(r'number of sunspots')\nax.set_ylabel(r'')\nplt.show()\n```\n\n## Astrophysical Signals\n\nIn astrophysics, it is usual to detect signals immersed in noise. \nFor example, in radioastronomy, the emission of sources such as radio-galaxies, pulsars, supernova remnants, etc. are received by radio-telescopes in Earth. they detect emission at frequencies in the order of megahertz, which is similar to many radio stations and therefore, the signal is immersed in a lot of noise!\n\nThe flux density of these signals is measured in *Janskys* (Jy), which is equivalent to \n\n\\begin{equation}\n1 \\text{ Jansky} = 10^{-26} \\frac{\\text{Watts}}{\\text{m}^2\\text{ Hz}}\n\\end{equation}\n\nHence, flux is a measure of the spectral power received by a telescope detector of unit projected area. \n\nUsually, astrophysical sources have flux densities much smaller than the noise around them.\n\n| Source | Flux Density (Jy) |\n|:--------|-------------------:|\n|Crab Pulsar at 1.4GHz|$\\sim 0.01$|\n|Milky Way at 10 GHz|$\\sim 2 \\times 10^3$|\n|Sun at 10 GHz|$\\sim 4 \\times 10^6$|\n|Mobile Phone| $\\sim 1.1 \\times 10^8$|\n\n\n\nConsider a synthetic signal with the Gaussian form, \n\n\n```python\nx = np.linspace(0, 100, 200)\ny = 0.5*np.exp(-(x-40)**2./25.)\n\nfig, ax = plt.subplots(figsize=(7,5))\n\nax.plot(x,y)\nax.set_xlabel(r'$x$')\nax.set_ylabel(r'Flux Density (Jy)')\nplt.show()\n\n```\n\nNow, we will add some random noise by defining two random arrays\n\n\n```python\nnoise1 = np.random.rand(200)\nnoise2 = np.random.rand(200)\n\nfig, ax = plt.subplots(figsize=(7,5))\nax.plot(x,noise1, color='darkblue')\nax.plot(x,noise2, color='cornflowerblue')\n\nax.set_xlabel(r'$x$')\nax.set_ylabel(r'noise')\nplt.show()\n```\n\nWe add these random noise arrays to the Gaussian profile to obtain\n\n\n```python\nrawsignal = y + (noise1 - noise2)\n\nfig, ax = plt.subplots(figsize=(7,5))\nax.plot(x,rawsignal, label='signal + noise', color='cornflowerblue')\nax.plot(x,y, label='Original Gaussian signal', color='crimson')\nax.set_xlabel(r'$x$')\nax.set_ylabel(r'signal + noise')\nplt.legend()\nplt.show()\n```\n\nIt is clear that the Gaussian profile is completely hidden into the noise.\n\n### Application of the Statistical Concepts. Extracting a Signal from the Noise.\n\nNow we will use some statistic concepts such as mean and median to isolate the signal from the noise. 
First, let us create 9 of such signal + noise synthetic profiles.\n\n\n```python\nn = 9 # Number of profiles\nrawprofiles = np.zeros([n,200])\nfor i in range(n):\n rawprofiles[i] = y + ( np.random.rand(200) - np.random.rand(200) )\n\nfig, ax = plt.subplots(3,3, figsize=(10,7))\n\nax[0,0].plot(x,rawprofiles[0])\nax[0,1].plot(x,rawprofiles[1])\nax[0,2].plot(x,rawprofiles[2])\nax[1,0].plot(x,rawprofiles[3])\nax[1,1].plot(x,rawprofiles[4])\nax[1,2].plot(x,rawprofiles[5])\nax[2,0].plot(x,rawprofiles[6])\nax[2,1].plot(x,rawprofiles[7])\nax[2,2].plot(x,rawprofiles[8])\n\nax[2,1].set_xlabel(r'$x$')\nax[1,0].set_ylabel(r'$signal$')\nplt.show()\n```\n\nThe question here is \u00bfHow can we recover the original signal from these profiles?\n\n**Stacking** the profiles (signal+noise) will provide a method to recover the signal.\n\n#### Stacking using the Mean \n\nSince the profiles have a random noise, when adding regions with only noise, the mean of the random numbers cancel out. On the other hand, when we add regions in which there is some signal data, the mean adds together, increasing the so-called **signal to noise** ratio.\n\n\n```python\nrecovered_signal = np.mean(rawprofiles, axis=0)\n\nfig, ax = plt.subplots(figsize=(7,5))\nax.plot(x,recovered_signal, label='Recovered signal', color='cornflowerblue')\nax.plot(x,y, label='Original Gaussian signal', color='crimson')\nax.set_xlabel(r'$x$')\nax.set_ylabel(r'signal')\nplt.legend()\nplt.show()\n```\n\nIt is clear that the recovered signal is not perfect, although the Gaussian profile seems to be present.\n\nTaking not 9 but 100 synthetic profiles, the stacking method gives a much better result for the recovered signal.\n\n\n```python\nn = 100\nrawprofiles = np.zeros([n,200])\nfor i in range(n):\n rawprofiles[i] = y + ( np.random.rand(200) - np.random.rand(200) )\n\nrecovered_signal = np.mean(rawprofiles, axis=0)\n\nfig, ax = plt.subplots(figsize=(7,5))\nax.plot(x,recovered_signal, label='Recovered signal', color='cornflowerblue')\nax.plot(x,y, label='Original Gaussian signal', color='crimson')\nax.set_xlabel(r'$x$')\nax.set_ylabel(r'signal')\nplt.legend()\nplt.show()\n```\n\nNow, the Gaussian original signal is evident!\n\n#### Stacking using the Median \n\nAnother form to calcualte the stack is using the median instead of the mean. When a distribution is symmetric the mean and the median are equivalent. However, if the distribution is asymmetric, or when there are significan outliers, the median can be a much better indicator of the central value.\n\nHence, although our random noise is expected to be a symmetric distribution around the zero value, there may exist outliers that affect the result of the mean. Therefore we will make a median stacking of 9 signal+noise profiles to compare the results.\n\n\n```python\nn = 9\nrawprofiles = np.zeros([n,200])\nfor i in range(n):\n rawprofiles[i] = y + ( np.random.rand(200) - np.random.rand(200) )\n\nrecovered_signal = np.median(rawprofiles, axis=0)\n\nfig, ax = plt.subplots(figsize=(7,5))\nax.plot(x,recovered_signal, label='Recovered signal (Median Stacking)', color='cornflowerblue')\nax.plot(x,y, label='Original Gaussian signal', color='crimson')\nax.set_xlabel(r'$x$')\nax.set_ylabel(r'signal')\nplt.legend()\nplt.show()\n```\n\nNote that the result is almost the same as that obtained with the mean stacking (the reason may be that the noise distribution is symmetric w.r.t. zero). 
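\n\nAs a rough rule of thumb, stacking $N$ profiles with independent noise improves the signal-to-noise ratio by about a factor of $\\sqrt{N}$, because the signal adds coherently while the noise only adds in quadrature (this holds exactly for the mean stack; the median stack behaves similarly for symmetric noise):\n\n\\begin{equation}\n\\mathrm{SNR}_\\mathrm{stack} \\approx \\sqrt{N} \\; \\mathrm{SNR}_\\mathrm{single}\n\\end{equation}\n\nThis is why moving from 9 to 100 profiles below gives a visibly cleaner recovery of the Gaussian signal.\n\n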
Now we will consider 100 profiles,\n\n\n```python\nn = 100\nrawprofiles = np.zeros([n,200])\nfor i in range(n):\n rawprofiles[i] = y + ( np.random.rand(200) - np.random.rand(200) )\n\nrecovered_signal = np.median(rawprofiles, axis=0)\n\nfig, ax = plt.subplots(figsize=(7,5))\nax.plot(x,recovered_signal, label='Recovered signal (Median Stacking)', color='cornflowerblue')\nax.plot(x,y, label='Original Gaussian signal', color='crimson')\nax.set_xlabel(r'$x$')\nax.set_ylabel(r'signal')\nplt.legend()\nplt.show()\n```\n\nOnce again, the result is not much different from that obtained using the mean stacking.\n", "meta": {"hexsha": "88e206df0760022f26af24d680f87e3313306807", "size": 622163, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "18. Statistics Basics/01.DataAnalysis-StatisticsBasics.ipynb", "max_stars_repo_name": "ashcat2005/AstrofisicaComputacional2022", "max_stars_repo_head_hexsha": "67463ec4041eb08c0f326792fed0dcf9e970e9b7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2022-03-08T06:18:56.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-10T04:55:53.000Z", "max_issues_repo_path": "18. Statistics Basics/01.DataAnalysis-StatisticsBasics.ipynb", "max_issues_repo_name": "ashcat2005/AstrofisicaComputacional2022", "max_issues_repo_head_hexsha": "67463ec4041eb08c0f326792fed0dcf9e970e9b7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "18. Statistics Basics/01.DataAnalysis-StatisticsBasics.ipynb", "max_forks_repo_name": "ashcat2005/AstrofisicaComputacional2022", "max_forks_repo_head_hexsha": "67463ec4041eb08c0f326792fed0dcf9e970e9b7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2022-03-09T17:47:43.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-21T02:29:36.000Z", "avg_line_length": 451.8249818446, "max_line_length": 121804, "alphanum_fraction": 0.9458453814, "converted": true, "num_tokens": 2845, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9353465134460244, "lm_q2_score": 0.8824278649085117, "lm_q1q2_score": 0.8253758268097958}} {"text": "# the German tank problem\n\nthe Germans have a population of $n$ tanks labeled with serial numbers $1,2,...,n$ on the tanks. The number of tanks $n$ is unknown and of interest to the Allied forces. The Allied forces randomly capture $k$ tanks from the Germans with replacement and observe their serial numbers $\\{x_1, x_2, ..., x_k\\}$. 
The goal is to, from observing the serial numbers on this random sample of the tanks, estimate $n$.\n\nthe *estimator* of $n$ maps an outcome of the experiment to an estimate of $n$, $\\hat{n}$.\n\n\n```julia\nusing StatsBase\nusing PyPlot\nusing Statistics\nusing Printf\n\nPyPlot.matplotlib.style.use(\"seaborn-pastel\")\n\nrcParams = PyPlot.PyDict(PyPlot.matplotlib.\"rcParams\")\nrcParams[\"font.size\"] = 16;\n```\n\n \u250c Info: Recompiling stale cache file /home/mick/.julia/compiled/v1.2/StatsBase/EZjIG.ji for StatsBase [2913bbd2-ae8a-5f71-8c99-4fb6c76f3a91]\n \u2514 @ Base loading.jl:1240\n\n\n## data structure for a tank\n\nfor elegance\n\n\n```julia\nstruct Tank\n serial_no::Int\nend\ntank = Tank(3)\n```\n\n\n\n\n Tank(3)\n\n\n\n## visualizing the captured tanks and their serial numbers\n\n\n```julia\nfunction viz_tanks(tanks::Array{Tank}, savename=nothing)\n nb_tanks = length(tanks)\n \n img = PyPlot.matplotlib.image.imread(\"tank.png\")\n\n fig, ax = subplots(1, nb_tanks)\n for (t, tank) in enumerate(tanks)\n ax[t].imshow(img)\n ax[t].set_title(tank.serial_no)\n ax[t].axis(\"off\")\n end\n tight_layout()\n \n if ! isnothing(savename)\n savefig(savename * \".png\", format=\"png\", dpi=300)\n # Linux command line tool to trim white space\n run(`convert $savename.png -trim $savename.png`)\n end\nend\n\nn = 7\ntanks = [Tank(s) for s in 1:n]\nviz_tanks(tanks)\n```\n\n## simulating tank capture\n\nwrite a function `capture_tanks` to simulate the random sampling of `nb_tanks_captured` tanks from all `nb_tanks` tanks the Germans have (without replacement). return a random sample of tanks.\n\n\n```julia\nfunction capture_tanks(num_captured::Int, num_tanks::Int)\n return sample([Tank(i) for i in 1:num_tanks], num_captured, replace=false)\nend\n\nviz_tanks(capture_tanks(3, 100))\n```\n\n## defining different estimators\n\nan estimator maps an outcome $\\{x_1, x_2, ..., x_k\\}$ to an estimate for $n$, $\\hat{n}$.\n\n### estimator (1): maximum serial number\n\nthis is the maximum likelihood estimator.\n\n\\begin{equation}\n\\hat{n} = \\max_i x_i\n\\end{equation}\n\n\n```julia\nfunction max_serial_no(captured_tanks::Array{Tank})\n return maximum([t.serial_no for t in captured_tanks])\nend\n\nmax_serial_no(tanks)\n```\n\n\n\n\n 7\n\n\n\n### estimator (2): maximum serial number plus initial gap\n\n\\begin{equation}\n\\hat{n} = \\max_i x_i + \\bigl(\\min_i x_i -1\\bigr)\n\\end{equation}\n\n\n```julia\nfunction max_plus_initial_gap(captured_tanks::Array{Tank})\n gap_adjustment = minimum([t.serial_no for t in captured_tanks]) - 1\n return max_serial_no(captured_tanks) + gap_adjustment\nend\n\nmax_plus_initial_gap(tanks)\n```\n\n\n\n\n 7\n\n\n\n### estimator (3): maximum serial number plus gap if samples are evenly spaced\n\n\\begin{equation}\n\\hat{n} = \\max_i x_i + \\bigl( \\max_i x_i / k -1 \\bigr)\n\\end{equation}\n\n\n```julia\nfunction max_plus_even_gap(captured_tanks::Array{Tank})\n gap_adjustment = maximum([t.serial_no for t in captured_tanks]) / length(captured_tanks) - 1\n return max_serial_no(captured_tanks) + Int(gap_adjustment)\nend\n\nmax_plus_even_gap(tanks)\n```\n\n\n\n\n 7\n\n\n\n## assessing the bias and variance of different estimators\n\nsay the Germans have `nb_tanks` tanks, and we randomly capture `nb_tanks_captured`. 
what is the distribution of the estimators (over different outcomes of this random experiment), and how does the distribution compare to the true `nb_tanks`?\n\n\n```julia\n\n```\n\n\n```julia\n\n```\n\n\n```julia\n\n```\n\nnotes:\n\n## what happens as we capture more and more tanks, i.e. increase $k$?\n\nassess estimator (3).\n\n\n```julia\n\n```\n\n## one-sided confidence interval\n\nhow confident are we that the Germans don't have *more* tanks?\n\nsignificance level: $\\alpha$\n\ntest statistic = estimator (3) = $\\hat{n} = \\max_i x_i + \\bigl( \\max_i x_i / k -1 \\bigr)$\n\n**null hypothesis**: the number of tanks is $n=n_0$
    \n**alternative hypothesis**: the number of tanks is less than $n_0$\n\nwe reject the null hypothesis (say, \"the data does not support the null hypothesis\") that the number of tanks is $n=n_0$ if the p-value is less than $\\alpha$. the p-value is the probability that, if the null hypothesis is true, we get a test statistic equal to or smaller than we observed.\n\nwe want to find the highest $n_0$ such that we have statistical power to reject the null hypothesis in favor of the alternative hypothesis. this is the upper bound on the confidence interval!\n\nthen the idea is that, if the null hypothesis is that the number of tanks is absurdly large compared to the largest serial number we saw in our sample, it would be very unlikely that we would see such small serial numbers compared to the number of tanks, so we'd reject the null hypothesis. \n\n\n```julia\n\n```\n\nsay $\\alpha=0.05$ and we seek a 95% one-sided confidence interval\n\n\n```julia\n\n```\n\n\n```julia\n\n```\n", "meta": {"hexsha": "209f69338c4a77ac8a0951b7e7c5ce7cc1d797f4", "size": 32906, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "In-Class Notes/German Tank Problem/.ipynb_checkpoints/German tank problem_sparse-checkpoint.ipynb", "max_stars_repo_name": "cartemic/CHE-599-intro-to-data-science", "max_stars_repo_head_hexsha": "a2afe72b51a3b9e844de94d59961bedc3534a405", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "In-Class Notes/German Tank Problem/.ipynb_checkpoints/German tank problem_sparse-checkpoint.ipynb", "max_issues_repo_name": "cartemic/CHE-599-intro-to-data-science", "max_issues_repo_head_hexsha": "a2afe72b51a3b9e844de94d59961bedc3534a405", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "In-Class Notes/German Tank Problem/.ipynb_checkpoints/German tank problem_sparse-checkpoint.ipynb", "max_forks_repo_name": "cartemic/CHE-599-intro-to-data-science", "max_forks_repo_head_hexsha": "a2afe72b51a3b9e844de94d59961bedc3534a405", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2019-10-02T16:11:36.000Z", "max_forks_repo_forks_event_max_datetime": "2019-10-15T20:10:40.000Z", "avg_line_length": 81.6526054591, "max_line_length": 14542, "alphanum_fraction": 0.8367167082, "converted": true, "num_tokens": 1412, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.935346504434783, "lm_q2_score": 0.8824278587245935, "lm_q1q2_score": 0.825375813073919}} {"text": "## Moving average filter\n\nThe moving average(MA) filter is a Low Pass FIR used for smoothing signals. This filter sum the data of L consecutive elements of the input vector and divide by L, therefore the result is a single output point. \nAs the parameter L increases, the smoothness of the output is better, whereas the sharp transitions in the data are made increasingly blunt. This implies that this filter has an excellent time-domain response but a poor frequency response.\n\n\n### Implementation\n\n\\begin{align}\ny[n]=\\frac{1}{L}\\sum_{k=0}^{L-1} x[n-k]\n\\end{align}\n\nWhere,\n\ny: output vector
    \nx: input vector
    \nL: data point\n\n\n
    \nFigure 1: Discrete-time 4-point Moving Average FIR filter\n
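\n\nBefore turning to the frequency response, here is a minimal time-domain sketch of the difference equation above (assuming NumPy is available; the ramp signal is only illustrative):\n\n\n```python\nimport numpy as np\n\nL = 4                            # number of points in the moving average\nx = np.arange(10, dtype=float)   # illustrative input signal\n\n# y[n] = (1/L) * sum_{k=0}^{L-1} x[n-k]: convolving with a length-L window of 1/L\n# performs exactly this sum (samples before n = 0 are treated as zero)\ny = np.convolve(x, np.ones(L) / L, mode='full')[:len(x)]\nprint(y)\n```\n\nThe same output can be obtained with `scipy.signal.lfilter` and the `b`, `a` coefficient vectors defined in the sections below.\n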
    \n\n\n## Modules\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.signal import freqz, dimpulse, dstep\nfrom math import sin, cos, sqrt, pi,pow\n```\n\n## Parameters\n\n\n```python\nfsampling = 100\nL = 4\n```\n\n## Coefficients\n\n\n```python\n#coefficients\nb = np.ones(L) #numerator coeffs of filter transfer function\n#[1. 1. 1. 1.]\n#b = (np.ones(L))/L #numerator coeffs of filter transfer function\n\na = np.array([L] + [0]*(L-1)) #denominator coeffs of filter transfer function\n#[4 0 0 0]\n#a = np.ones(1) #denominator coeffs of filter transfer function\n```\n\n## Frequency response\n\n\n\n```python\n#frequency response\nw, h = freqz(b, a, worN=4096)\n#w, h = freqz(b, a)\n\nw *= fsampling / (2 * pi) \n\n# Plot the amplitude response\nplt.figure(dpi=100)\nplt.subplot(2, 1, 1)\nplt.suptitle('Bode Plot')\nplt.plot(w, 20 * np.log10(abs(h))) \nplt.ylabel('Magnitude [dB]')\nplt.xlim(0, fsampling / 2)\nplt.ylim(-60, 10)\nplt.axhline(-6.01, linewidth=0.8, color='black', linestyle=':')\n\n# Plot the phase response\nplt.subplot(2, 1, 2)\nplt.plot(w, 180 * np.angle(h) / pi) \nplt.xlabel('Frequency [Hz]')\nplt.ylabel('Phase [\u00b0]')\nplt.xlim(0, fsampling / 2)\nplt.ylim(-180, 90)\nplt.yticks([-180, -135, -90, -45, 0, 45, 90])\nplt.show()\nprint(\"Figure 2: Magnitude and phase response of L=4-point Moving Average filter\")\n```\n\n## Impulse response\n\n\n```python\nt, y = dimpulse((b, a, 1/fsampling), n=2*L)\nplt.figure(dpi=100)\nplt.suptitle('Impulse Response')\n_, _, baseline = plt.stem(t, y[0], basefmt='k:')\nplt.setp(baseline, 'linewidth', 1)\nbaseline.set_xdata([0,1])\nbaseline.set_transform(plt.gca().get_yaxis_transform())\nplt.xlabel('Time [seconds]')\nplt.ylabel('Output')\nplt.xlim(-1/fsampling, 2*L/fsampling)\nplt.yticks([0, 0.5/L, 1.0/L])\nplt.show()\nprint(\"Figure 3: Plot the impulse response of discrete-time system.\")\n```\n\n### Testing our equation\n\n\\begin{align}\n|H(e^{j\\omega})|=\\sqrt{\\frac{1}{16}((1+cos(\\omega)+cos(2\\omega)+cos(3\\omega))^{2}+(sin(\\omega)+sin(2\\omega)+sin(3\\omega))^{2})}\n\\end{align}\n\n\n\n\n```python\nN=4096 #Elements for w vector\nw=(np.linspace(0, pi, N, endpoint=True)).reshape(N, )\n\nH=np.zeros((N,1))\nfor i in range(N-2):\n H[i]=sqrt(pow(1/L,2)*(pow(1+cos(w[i])+cos(2*w[i])+cos(3*w[i]),2)+ pow(sin(w[i])+sin(2*w[i])+sin(3*w[i]),2)))\n\n```\n\n#### Comparison of results\n\n\n\n```python\nplt.figure(dpi=100)\nplt.plot(w*fsampling / (2 * pi),20*np.log10(H+0.000000001))\nplt.plot(w*fsampling / (2 * pi), 20*np.log10(abs(h)),linestyle='dashed') \n\nplt.ylabel('Magnitude [dB]')\nplt.xlabel('Frequency [Hz]')\n\nplt.xlim(0, fsampling / 2)\nplt.ylim(-60, 10)\nplt.axhline(-6.01, linewidth=0.8, color='black', linestyle=':')\nplt.show()\nprint(\"Figure 4: Comparison of freqz vs our equation\")\n```\n\n## Resources\n\nSciPy Documentation: scipy.signal.freqz
    \n\n\n\n```python\n\n```\n", "meta": {"hexsha": "3f736266b1dba2a7ff02b725899f08bb7bc2bcb1", "size": 113330, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "MA filter/Moving average filter.ipynb", "max_stars_repo_name": "frhaedo/dsp", "max_stars_repo_head_hexsha": "a6941f915b602e9daf4b53c69c63be28e3e9df1d", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "MA filter/Moving average filter.ipynb", "max_issues_repo_name": "frhaedo/dsp", "max_issues_repo_head_hexsha": "a6941f915b602e9daf4b53c69c63be28e3e9df1d", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-04-03T19:15:05.000Z", "max_issues_repo_issues_event_max_datetime": "2021-04-03T19:15:05.000Z", "max_forks_repo_path": "MA filter/Moving average filter.ipynb", "max_forks_repo_name": "frhaedo/dsp", "max_forks_repo_head_hexsha": "a6941f915b602e9daf4b53c69c63be28e3e9df1d", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 336.2908011869, "max_line_length": 33584, "alphanum_fraction": 0.9365481338, "converted": true, "num_tokens": 1129, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9032942119105695, "lm_q2_score": 0.9136765281148512, "lm_q1q2_score": 0.8253187194046898}} {"text": "# Taylor integration example: $\\dot x=x^2$\n\nHere, we will integrate the initial-value problem (IVP) defined by\n\n$$\n\\begin{align}\n\\dot x &= x^2 \\\\\nx(0)&=x_0\n\\end{align}\n$$\n\nGiven a real number $A>0$, the restriction of the function $f(x)=x^2$ over the interval $I_A=(-A,A)$ satisfies the Lipschitz condition $|f(x_1)-f(x_2)|=|x_1^2-x_2^2|=|x_1+x_2|\\cdot|x_1-x_2| \\leq 2A|x_1-x_2|$ for every $x_1,x_2 \\in I_A$. Therefore, the Picard-Lindel\u00f6f theorem for ordinary differential equations guarantees there exists a $\\delta>0$ such that a solution for this IVP exists and is unique for $t \\in [0,\\delta]$, for any $x_0 \\in I_A$. Note that in this case, it is necessary to restrict the function to a bounded interval in order to obtain a Lipschitz condition, which in turn is necessary to fulfill the conditions of the Picard-Lindel\u00f6f theorem.\n\nNow, for any initial condition $x_0$, $t_0\\in\\mathbb{R}$ the analytical solution for this problem is:\n\n$$\nx(t)=\\frac{x_0}{1-x_0\\cdot (t-t_0)}\n$$\n\nIn particular, the analytical solution exhibits a divergence at $t^*=t_0+1/x_0$; i.e., the solution is guaranteed to exist only for $t \\in [t_0,t_0+1/x_0)$ if $x_0>0$. Otherwise, if $x_0<0$, then the analytical solution \"moves away\" from the singularity, since in this case we have $t^*=t_0+1/x_00$?\n\nWe will try three methods to integrate this problem:\n\n+ Adaptive time-step, 4th-order, Runge-Kutta (`ODE.jl`)\n+ Taylor method (`TaylorIntegration.jl`)\n+ Adaptive time-step, Runge-Kutta-Fehlberg 7/8 method (`ODE.jl`)\n\nAs initial conditions to integrate this IVP, we choose $x_0=3$, $t_0=0$, since then the singularity will be at $t^*=1/3$, and this number is not exactly representable in a binary floating-point format. 
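\n\nFor reference, the closed-form solution quoted above follows from a one-line separation of variables:\n\n$$\n\\frac{dx}{x^2} = dt \\quad\\Rightarrow\\quad \\frac{1}{x_0}-\\frac{1}{x} = t-t_0 \\quad\\Rightarrow\\quad x(t)=\\frac{x_0}{1-x_0\\cdot (t-t_0)},\n$$\n\nso the denominator vanishes, and the solution diverges, exactly at $t^*=t_0+1/x_0$.\n\n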
Thus, any constant time-step numerical integrator should break down when integrating up to $t_\\mathrm{max}=1/3$, or beyond.\n\nWe start off by including the relevant packages:\n\n\n```julia\nusing TaylorIntegration, ODE, Plots, LaTeXStrings\n```\n\nThe ODE:\n\n\n```julia\ndiffeq(t, x) = x.^2\n```\n\n\n\n\n diffeq (generic function with 1 method)\n\n\n\n## 1. Adaptive time-step, 4th order, Runge Kutta method\n\nWe select $x_0=3$, $t_0=0$. Then, the singularity is at $t=1/3$.\n\n\n```julia\n@time tRK, xRK = ode45(diffeq, 3.0, [0.0, 0.34]); #warmup lap\n@time tRK, xRK = ode45(diffeq, 3.0, [0.0, 0.34]);\n```\n\n Warning: dt < minstep. Stopping.\n 1.397936 seconds (1.51 M allocations: 64.582 MB, 1.80% gc time)\n Warning: dt < minstep. Stopping.\n 0.000970 seconds (11.54 k allocations: 465.828 KB)\n\n\nPlot $x$ vs $t$ (log-log):\n\n\n```julia\npyplot()\nplot(log10(tRK[2:end]), log10(xRK[2:end]))\ntitle!(\"x vs t (log-log)\")\nxlabel!(L\"\\log_{10}(t)\")\nylabel!(L\"\\log_{10}(x(t))\")\n```\n\n\n\n\n\n\n\n\n sys:1: MatplotlibDeprecationWarning: The set_axis_bgcolor function was deprecated in version 2.0. Use set_facecolor instead.\n\n\nWhat is the final state of the system?\n\n\n```julia\ntRK[end], xRK[end]\n```\n\n\n\n\n (0.33333423758781194,7.125446356124256e17)\n\n\n\nDoes the integrator get past the singularity?\n\n\n```julia\ntRK[end]>1/3\n```\n\n\n\n\ntrue\n\n\n\nThe answer is yes! So the last value of the solution is meaningless:\n\n\n```julia\nxRK[end] #this value is meaningless\n```\n\n\n\n\n7.125446356124256e17\n\n\n\nHow many steps did the RK integrator perform?\n\n\n```julia\nlength(xRK)-1\n```\n\n\n\n\n166\n\n\n\nHow does the numerical solution compare to the analytical solution? The analytical solution is:\n\n\n```julia\nexactsol(t, x0) = x0./(1.0-x0.*t) #analytical solution\n```\n\n\n\n\n exactsol (generic function with 1 method)\n\n\n\nThe relative difference between the numerical and analytical solution, $\\delta x$, is:\n\n\n```julia\n\u03b4xRK = (xRK-exactsol(tRK, 3.0))./exactsol(tRK, 3.0) #numerical error, relative to analytical solution\n;\n```\n\nThe $\\delta x$ vs $t$ plot (semilog):\n\n\n```julia\nplot(tRK[2:end], log10(abs(\u03b4xRK[2:end])))\ntitle!(\"Relative error (semi-log)\")\nxlabel!(L\"\\log_{10}(t)\")\nylabel!(L\"\\log_{10}(\\delta x(t))\")\n```\n\n\n\n\n\n\n\n\nThis plot means that the error of the numerical solution grows systematically; and at the end of the integration, the error in the numerical solution is\n\n\n```julia\n(xRK[end-1]-exactsol(tRK[end-1], 3.0))\n```\n\n\n\n\n5.433622064235371e17\n\n\n\n## 2. Taylor method\n\nAgain, we select $x_0=3$, $t_0=0$. The order of the Taylor integration is $28$, and we set the absolute tolerance equal to $10^{-20}$; this value is used during each time-step in order to compute an adaptive step size. 
We set the maximum number of integration steps equal to the number of steps that the previous integrator did.\n\n\n```julia\n@time tT, xT = taylorinteg(diffeq, 3.0, 0.0, 0.34, 28, 1e-20, maxsteps=length(xRK)-1); #warmup lap\n@time tT, xT = taylorinteg(diffeq, 3.0, 0.0, 0.34, 28, 1e-20, maxsteps=length(xRK)-1);\n```\n\n \u001b[1m\u001b[31mWARNING: Maximum number of integration steps reached; exiting.\u001b[0m\n\n\n 0.215803 seconds (160.86 k allocations: 8.295 MB)\n 0.001468 seconds (19.65 k allocations: 2.171 MB)\n\n\n\n```julia\ntT[end], xT[end]\n```\n\n\n\n\n (0.3333333329479479,2.5948055925168757e9)\n\n\n\nHow many steps did the Taylor integrator perform?\n\n\n```julia\nlength(xT)-1\n```\n\n\n\n\n166\n\n\n\nBelow, we show the $x$ vs $t$ plot (log-log):\n\n\n```julia\nplot(log10(tT[2:end]), log10(xT[2:end]))\ntitle!(\"x vs t (log-log)\")\nxlabel!(L\"\\log_{10}(t)\")\nylabel!(L\"\\log_{10}(x(t))\")\n```\n\n\n\n\n\n\n\n\nDoes the integrator get past the singularity?\n\n\n```julia\ntT[end] > 1/3\n```\n\n\n\n\nfalse\n\n\n\nThe answer is no! Even if we increase the value of the `maxsteps` keyword in `taylorinteg`, it doesn't get past the singularity!\n\nNow, the relative difference between the numerical and analytical solution, $\\delta x$, is:\n\n\n```julia\n\u03b4xT = (xT.-exactsol(tT, 3.0))./exactsol(tT, 3.0);\n```\n\nThe $\\delta x$ vs $t$ plot (logscale):\n\n\n```julia\nplot(tT[6:end], log10(abs(\u03b4xT[6:end])))\ntitle!(\"Relative error (semi-log)\")\nxlabel!(L\"t\")\nylabel!(L\"\\log_{10}(\\delta x(t))\")\n```\n\n\n\n\n\n\n\n\nWe observe that, while the execution time is ~10 times longer wrt 4th-order RK, the numerical solution obtained by the Taylor integrator stays within $10^{-12}$ of the analytical solution, for a same number of steps.\n\nNow, that happens if we use a higher order Runge Kutta method to integrate this problem?\n\n## 3. Runge-Kutta-Fehlberg 7/8 method\n\nHere we use the Runge-Kutta-Fehlberg 7/8 method, included in `ODE.jl`, to integrate the same problem as before.\n\n\n```julia\n@time t78, x78 = ode78(diffeq, 3.0, [0.0, 0.34]); #warmup lap\n@time t78, x78 = ode78(diffeq, 3.0, [0.0, 0.34]);\n```\n\n Warning: dt < minstep. Stopping.\n 0.379769 seconds (345.22 k allocations: 15.063 MB, 2.40% gc time)\n Warning: dt < minstep. Stopping.\n 0.001199 seconds (12.76 k allocations: 494.219 KB)\n\n\nPlot $x$ vs $t$ (log-log):\n\n\n```julia\nplot(log10(t78[2:end]), log10(x78[2:end]))\ntitle!(\"x vs t (log-log)\")\nxlabel!(L\"\\log_{10}(t)\")\nylabel!(L\"\\log_{10}(x(t))\")\n```\n\n\n\n\n\n\n\n\nWhat is the final state of the system?\n\n\n```julia\nt78[end], x78[end]\n```\n\n\n\n\n (0.3333336190078445,1.6711546943686948e18)\n\n\n\nDoes the integrator get past the singularity?\n\n\n```julia\nt78[end]>1/3\n```\n\n\n\n\ntrue\n\n\n\nThe answer is yes! 
So the last value of the solution is meaningless:\n\n\n```julia\nx78[end] #this value is meaningless\n```\n\n\n\n\n1.6711546943686948e18\n\n\n\nHow many steps did the RK integrator perform?\n\n\n```julia\nlength(x78)-1\n```\n\n\n\n\n91\n\n\n\nThe relative difference between the numerical and analytical solution, $\\delta x$, is:\n\n\n```julia\n\u03b4x78 = (x78-exactsol(t78, 3.0))./exactsol(t78, 3.0) #error relative to analytical solution\n;\n```\n\nThe $\\delta x$ vs $t$ plot (semilog):\n\n\n```julia\nplot(t78[2:end], log10(abs(\u03b4x78[2:end])))\ntitle!(\"Relative error (semi-log)\")\nxlabel!(L\"t\")\nylabel!(L\"\\log_{10}(\\delta x(t))\")\n```\n\n\n\n\n\n\n\n\nThis time, the RKF 7/8 integrator is \"only\" twice as fast as the Taylor integrator, but the error continues to be greater than the error from the latter by several orders of magnitude.\n\n## 4. Adaptive 4th-order RK, stringer tolerance\n\nAs a last example, we will integrate once again our problem using a 4th-order adaptive RK integrator, but imposing a stringer tolerance:\n\n\n```julia\n@time tRK_, xRK_ = ode45(diffeq, 3.0, [0.0, 0.34], abstol=1e-8, reltol=1e-8 ); #warmup lap\n@time tRK_, xRK_ = ode45(diffeq, 3.0, [0.0, 0.34], abstol=1e-8, reltol=1e-8 );\n;\n```\n\n Warning: dt < minstep. Stopping.\n 0.127591 seconds (383.26 k allocations: 16.016 MB, 6.19% gc time)\n Warning: dt < minstep. Stopping.\n 0.029600 seconds (335.10 k allocations: 13.837 MB, 31.90% gc time)\n\n\nNow, the integrator takes 10 times longer to complete the integration than the Taylor method.\n\nDoes it get past the singularity?\n\n\n```julia\ntRK_[end] > 1/3\n```\n\n\n\n\ntrue\n\n\n\nYes! So, once again, the last value reported by the integrator is completely meaningless. But, has it attained a higher precision than the Taylor method? Well, let's calculate once again the numerical error relative to the analytical solution:\n\n\n```julia\n\u03b4xRK_ = (xRK_-exactsol(tRK_, 3.0))./exactsol(tRK_, 3.0);\n```\n\nAnd now, let's plot this relative error vs time:\n\n\n```julia\nplot(tRK_[2:end], log10(abs(\u03b4xRK_[2:end])))\ntitle!(\"Relative error (semi-log)\")\nxlabel!(L\"t\")\nylabel!(L\"\\log_{10}(\\delta x(t))\")\nylims!(-20,20)\n```\n\n\n\n\n\n\n\n\nThe numerical error has actually gotten worse! 
`TaylorIntegration.jl` is indeed a really competitive package to integrate ODEs.\n\n\n```julia\n\n```\n", "meta": {"hexsha": "89dd4067c46a31cbead4a4c6be600c6084df7458", "size": 156946, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "examples/x-dot-equals-x-squared.ipynb", "max_stars_repo_name": "SebastianM-C/TaylorIntegration.jl", "max_stars_repo_head_hexsha": "f3575ee1caba43e21312062d960613ec2ccba325", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 72, "max_stars_repo_stars_event_min_datetime": "2016-09-22T22:32:12.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-23T13:35:18.000Z", "max_issues_repo_path": "examples/x-dot-equals-x-squared.ipynb", "max_issues_repo_name": "SebastianM-C/TaylorIntegration.jl", "max_issues_repo_head_hexsha": "f3575ee1caba43e21312062d960613ec2ccba325", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 132, "max_issues_repo_issues_event_min_datetime": "2016-09-21T05:43:08.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-15T02:55:17.000Z", "max_forks_repo_path": "examples/x-dot-equals-x-squared.ipynb", "max_forks_repo_name": "SebastianM-C/TaylorIntegration.jl", "max_forks_repo_head_hexsha": "f3575ee1caba43e21312062d960613ec2ccba325", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 20, "max_forks_repo_forks_event_min_datetime": "2016-09-24T04:37:11.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-25T13:48:07.000Z", "avg_line_length": 152.8198636806, "max_line_length": 22303, "alphanum_fraction": 0.9020809705, "converted": true, "num_tokens": 3042, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9032942093072239, "lm_q2_score": 0.9136765281148513, "lm_q1q2_score": 0.8253187170260742}} {"text": "# Nonlinear Equations and bisection\n\nA large number of nonlinear equations obviously *have* solutions, but the solutions cannot always be found in closed form. For example,\n\n\\begin{equation}\n f(x) = e^x + x - 2 = 0.\n\\end{equation}\n\n\n```\n%matplotlib inline\n```\n\n\n```\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom matplotlib import rcParams\nrcParams['font.family'] = 'serif'\nrcParams['font.size'] = 16\nrcParams['figure.figsize'] = (12,6)\n```\n\n\n```\nx = np.linspace(0.0, 1.0)\ndef fn(x):\n return np.exp(x) + x - 2.0\nf = fn(x)\n\nplt.figure()\nplt.plot(x, f, label = r'$f(x) = e^x + x - 2$')\nplt.xlabel('$x$')\nplt.ylabel('$f(x)$')\nplt.legend(loc = 2)\nplt.show()\n```\n\nNumerical methods can give accurate, approximate solutions.\n\nWrite the problem as\n \n\\begin{equation}\n f(x) = 0.\n\\end{equation}\n \nGoal: given $f$, find $x$. \n\nAssume: $f$ real, continuous. Sometimes restrict to $f$ differentiable.\n\nMay consider equivalent problem\n\n\\begin{equation}\n g(x) = x.\n\\end{equation}\n \nSame problem, different (geometric) interpretation.\n\nAssume that evaluating $f$ or $g$ is \\emph{expensive}. Want to minimize the number of evaluations. \n\nGiven $f: x \\in \\mathbb{R} \\to \\mathbb{R}$, continuous, find $x$ such that\n\n\\begin{equation}\n f(x) = 0.\n\\end{equation}\n\nSimple and robust method: **bisection**. \n\n1. Assume we have found (somehow!) two points, $x^{(L)}$ and $x^{(R)}$ that bracket root. \n2. Check halfway point $x^{(M)}$. Either $f(x^{(M)})=0$ (problem solved), or $x^{(M)}$ and one of $x^{(L,R)}$ bracket the root. \n3. 
Repeat to converge.\n\n\n```\nxL = 0.1\nxR = 0.9\nxM = 0.5 * (xL + xR)\n\nplt.figure()\nplt.plot(x, f)\nplt.plot([xL], [fn(xL)], 'bo'); plt.plot([xL, xL], [0, fn(xL)], 'b--')\nplt.plot([xR], [fn(xR)], 'bo'); plt.plot([xR, xR], [0, fn(xR)], 'b--')\nplt.plot([xM], [fn(xM)], 'go'); plt.plot([xM, xM], [0, fn(xM)], 'g--')\nplt.axhline(color = 'black', linestyle = '--')\nplt.xlabel('$x$')\nplt.ylabel('$f(x)$')\nplt.show()\n```\n", "meta": {"hexsha": "27c2aad257a6ab955bf97344abadc1c1cfda403e", "size": 48937, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Lectures/05 - Nonlinear equations and bisection.ipynb", "max_stars_repo_name": "josh-gree/NumericalMethods", "max_stars_repo_head_hexsha": "03cb91114b3f5eb1b56916920ad180d371fe5283", "max_stars_repo_licenses": ["CC-BY-3.0"], "max_stars_count": 76, "max_stars_repo_stars_event_min_datetime": "2015-02-12T19:51:52.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-26T15:34:11.000Z", "max_issues_repo_path": "Lectures/05 - Nonlinear equations and bisection.ipynb", "max_issues_repo_name": "josh-gree/NumericalMethods", "max_issues_repo_head_hexsha": "03cb91114b3f5eb1b56916920ad180d371fe5283", "max_issues_repo_licenses": ["CC-BY-3.0"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2017-05-24T19:49:52.000Z", "max_issues_repo_issues_event_max_datetime": "2018-01-23T21:40:42.000Z", "max_forks_repo_path": "Lectures/05 - Nonlinear equations and bisection.ipynb", "max_forks_repo_name": "josh-gree/NumericalMethods", "max_forks_repo_head_hexsha": "03cb91114b3f5eb1b56916920ad180d371fe5283", "max_forks_repo_licenses": ["CC-BY-3.0"], "max_forks_count": 41, "max_forks_repo_forks_event_min_datetime": "2015-01-05T13:30:47.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-15T09:59:39.000Z", "avg_line_length": 239.887254902, "max_line_length": 22999, "alphanum_fraction": 0.899728222, "converted": true, "num_tokens": 677, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9032942014971872, "lm_q2_score": 0.9136765181249676, "lm_q1q2_score": 0.8253187008664229}} {"text": "# Mathematical Theory and Intuition behind our improved model \n\nThis markdown explains the mathematical theory behind our model. Note that the actual implementation might be different. For simplicity, we are also not including all the datasets we used (otherwise the equations might be too long).\n\nLet $i$ denote the $i$th date since the beginning of our dataset and $n$ denote the cutoff date for training-testing . Let the range from $n+1$ to $m$ be the range for testing (forecast). \n\nFor a given date $i$, let $\\textbf{asp}(i)$ be the actual index for S\\&P500; $\\textbf{fbsp}(i)$ be the index for S\\&P500 bases on a training set of the first $i-1$ many days' market data of S\\&P500; $\\textbf{div}(i)$ be the dividend yield rates of S\\&P500; $\\textbf{eps}(i)$ be the dividen yield rates of S\\&P500; $\\textbf{tby}(i)$ be the 10-year treasury bond yield; $\\textbf{ffr}(i)$ be the fed fund rates; $\\textbf{fta}(i)$ be the fed total assets.\n\n## Linear Model\n\n\\begin{equation}\n \\textbf{fb}_1(i,a,b,c,d,e):=\\textbf{fbsp}(i)+a*\\textbf{tby}(i)+b*\\textbf{ffr}(i)+c*\\textbf{fta}(i)+d*\\textbf{div}(i)+e*\\textbf{eps}(i)\n\\end{equation}\n\n\\begin{equation}\n \\textbf{E}_n(a,b,c,d,e):= \\frac{1}{n-1000}\\sum_{i=1000}^{n} \n (\\textbf{fb}_1(i,a,b,c,d,e)-\\textbf{asp}(i))^2\n\\end{equation}\n\nattains its minimum. 
Namely, finding\n\n\\begin{equation}\n (a_n,b_n,c_n,d_n,e_n):=\\text{argmin} \\textbf{E}_n(a,b,c,d,e)\n\\end{equation}\n\nNote that it doesn't make sense to start with $i=1$ since the fbprophet itself need to use the first $i-1$ of the data for training\n\n## Nonlinear Model\n\nAll notations are the same as above.\n\nHere is a slightly different model (nonlinear revision of fbprophet). In this model, we will the dividend yield rate of S\\&P 500.\n\nFirst, what is the dividend yield rate? Dividend is the money that publicly listed companies pay you (usually four times a year) for holding their stocks (until the so-called ex-dividend dates). It's like the interest paid to your saving accounts from the banks. Some companies pay while some don't, especially for the growth tech stocks. From my experience, the impact of bond rates and fed fund rates on the stock market will change when they rise above or fall below the dividend yield rate}. Stock prices fall when those rates rise above the dividend yield rate of SP500 (investor are selling their stocks to buy bonds or save more money in their bank account!).\n\nBased on this idea, it might be useful to consider the differences of those rates and the dividend yield rate of SP500. \n\nNormally an increase in the federal fund rate will result in an increase in bank loan interest rate, which will in turn result in an decrease in the net income of S\\&P500-listed companies since they have to pay a higher interest when borrowing money from banks. Based on this thought, I believe it is reasonable to make a correction to $c*\\textbf{eps}(i)$ by replacing the term by $c*\\textbf{eps}(i)(1+d*\\textbf{ffr}(i))$. If my intuition is correct, the generated constant $d$ from the optimization should be a negative number. \n\n\n$$\\textbf{fb}_2(i,a,b,c,d,e):= \\textbf{fbsp}(i)*\\big[1+a*(\\textbf{div}(i)-\\textbf{tby}(i))\\big]\\big[1+b*(\\textbf{div}(i)-\\textbf{ffr}(i))\\big]+c*\\textbf{eps}(i)(1+d*\\textbf{ffr}(i)+e*\\textbf{fta}(i))$$\n\nand consider\n\n$$E_n(a,b,c,d,e) := \\frac{1}{n-1000}\\sum_{i=1000}^n(\\text{fb}_2(i,a,b,c,d,e)-\\textbf{asp}(i))^2$$\n\nNow find (by approximation, SGD, etc.) $(a_n,b_n,c_n,d_n,e_n):=\\text{argmin} E_n(a,b,c,d,e)$ \n\nUsing $(a_n,b_n,c_n,d_n,e_n)$ as constants, our output will be $\\textbf{fb}_2(i,a_n,b_n,c_n,d_n,e_n)$\n\n\n\n### The actual implementation of the nonlinear model\n\nFor the actual implementation of the nonlinear model, we threw away the higher order terms (the products of three things) since they are relatively smaller quantities\n\n\n## A faster computation scheme\nHere is a method we could think of that may reduce the number of times calling fbprophet:\n\nSay in the training process we are considering from i=1,000 to 11,000. Namely based on the current scheme, we have to call fbprophet for 10,000 times.\n\nInstead of this, we make a break-down 10,000=100*100 as follows:\n\nFor i=1,000 to 1,100, we only use the first 999 dates for training;\n\nFor i=1,100 to 1,200, we only use the first 1,099 dates for training;\n\n.............................................\n\nFor i=10,900 to 11,000, we only use the first 10,899 dates for training;\n\nIn this way, it seems that we only need to call fbprophet for 100 times. 
And this doesn't seem harm the accuracy too much.\n\n\n```python\nimport pandas as pd\nimport numpy as np\n\n## For plotting\nimport matplotlib.pyplot as plt\nfrom matplotlib import style\nimport datetime as dt\nimport seaborn as sns\nsns.set_style(\"whitegrid\")\n```\n\n\n```python\npath = '../Data/dff1.csv'\n```\n\n\n```python\ndf= pd.read_csv(path, parse_dates=['ds'])\n# df = df.rename(columns = {\"Date\":\"ds\",\"Close\":\"y\"}) \ndf = df[['ds', 'y','fbsp', 'diff','tby', 'ffr', 'fta', 'eps', 'div', 'une', 'wti', 'ppi',\n 'rfs']]\n# df\n```\n\n\n```python\ndf['fbsp_tby'] = df['fbsp'] * df['tby']\ndf['fbsp_ffr'] = df['fbsp'] * df['ffr']\ndf['fbsp_div'] = df['fbsp'] * df['div']\ndf['eps_tby'] = df['eps'] * df['tby']\ndf['eps_ffr'] = df['eps'] * df['ffr']\ndf['eps_div'] = df['eps'] * df['div']\n```\n\n\n```python\n# cutoff between test and train data\ncutoff = len(df) - 252\ndf_train = df[:cutoff].copy()\ndf_test = df[cutoff:].copy()\nprint(cutoff)\n```\n\n 2300\n\n\n\n```python\ndf_train.columns\n```\n\n\n\n\n Index(['ds', 'y', 'fbsp', 'diff', 'tby', 'ffr', 'fta', 'eps', 'div', 'une',\n 'wti', 'ppi', 'rfs', 'fbsp_tby', 'fbsp_ffr', 'fbsp_div', 'eps_tby',\n 'eps_ffr', 'eps_div'],\n dtype='object')\n\n\n\n\n```python\npossible_features = ['tby', 'ffr', 'fta', 'eps', 'div', 'une', 'wti',\n 'ppi', 'rfs', 'fbsp_tby', 'fbsp_ffr', 'fbsp_div', 'eps_tby',\n 'eps_ffr', 'eps_div']\n\nfrom itertools import chain, combinations\n\ndef powerset(iterable):\n #\"powerset([1,2,3]) --> () (1,) (2,) (3,) (1,2) (1,3) (2,3) (1,2,3)\"\n s = list(iterable)\n return chain.from_iterable(combinations(s, r) for r in range(len(s)+1))\n\n#print(list(powerset(possible_features)))\n```\n\n\n```python\nlen(possible_features)\n```\n\n\n\n\n 15\n\n\n\n\n```python\nfrom statsmodels.regression.linear_model import OLS\n\nreg_new = OLS((df_train['diff']).copy(),df_train[possible_features].copy()).fit()\nprint(reg_new.params)\n\n#from the output, we can see it's consistent with sklearn output\n```\n\n tby 305.980303\n ffr 183.608289\n fta 0.000065\n eps -1.775823\n div -142.530009\n une -80.692625\n wti 3.378638\n ppi -7.923858\n rfs 0.006087\n fbsp_tby -0.043153\n fbsp_ffr -0.146870\n fbsp_div -0.391168\n eps_tby -2.122760\n eps_ffr 2.390346\n eps_div 3.950103\n dtype: float64\n\n\n\n```python\nnew_coef = reg_new.params\nnew_possible_feats = new_coef[abs(new_coef)>0].index\n\npower_feats = list(powerset(new_possible_feats))\npower_feats.remove(())\n\npower_feats = [ list(feats) for feats in power_feats]\nlen(power_feats)\n\n```\n\n\n\n\n 32767\n\n\n\n\n```python\nAIC_scores = []\nparameters = []\n\nfor feats in power_feats:\n tmp_reg = OLS((df_train['diff']).copy(),df_train[feats].copy()).fit()\n AIC_scores.append(tmp_reg.aic)\n parameters.append(tmp_reg.params)\n\n \nMin_AIC_index = AIC_scores.index(min(AIC_scores))\nMin_AIC_feats = power_feats[Min_AIC_index] \nMin_AIC_params = parameters[Min_AIC_index]\nprint(Min_AIC_feats)\nprint(Min_AIC_params) \n```\n\n ['tby', 'ffr', 'fta', 'div', 'une', 'wti', 'ppi', 'rfs', 'fbsp_tby', 'fbsp_ffr', 'fbsp_div', 'eps_tby', 'eps_ffr', 'eps_div']\n tby 313.791358\n ffr 187.937377\n fta 0.000057\n div -78.607290\n une -87.125044\n wti 3.455364\n ppi -8.104355\n rfs 0.005916\n fbsp_tby -0.046476\n fbsp_ffr -0.132070\n fbsp_div -0.388510\n eps_tby -2.157124\n eps_ffr 2.033978\n eps_div 3.252783\n dtype: float64\n\n\n\n```python\nlen(Min_AIC_feats)\n```\n\n\n\n\n 14\n\n\n\n\n```python\n###After selecting the best features, we report the testing error, and make the plot \nAIC_df_test = 
df_test[Min_AIC_feats]\nAIC_pred_test = AIC_df_test.dot(Min_AIC_params)+df_test.fbsp\n\nAIC_df_train = df_train[Min_AIC_feats]\nAIC_pred_train = AIC_df_train.dot(Min_AIC_params)+ df_train.fbsp\n\n\n```\n\n\n```python\nfrom sklearn.metrics import mean_squared_error as MSE\n\nmse_train = MSE(df_train.y, AIC_pred_train) \nmse_test = MSE(df_test.y, AIC_pred_test)\n\n\n#compare with fbprophet()\n\nfb_mse_train = MSE(df_train.y, df_train.fbsp) \nfb_mse_test = MSE(df_test.y, df_test.fbsp)\n\n\nprint(mse_train,fb_mse_train)\n\nprint(mse_test,fb_mse_test)\n```\n\n 2543.6188296247346 22303.56360854362\n 11928.655059590139 15247.912341091065\n\n\n\n```python\ndf_train.ds\n```\n\n\n\n\n 0 2009-12-15\n 1 2009-12-16\n 2 2009-12-17\n 3 2009-12-18\n 4 2009-12-21\n ... \n 2295 2019-02-20\n 2296 2019-02-21\n 2297 2019-02-22\n 2298 2019-02-25\n 2299 2019-02-26\n Name: ds, Length: 2300, dtype: datetime64[ns]\n\n\n\n\n```python\nplt.figure(figsize=(18,10))\n\n# plot the training data\nplt.plot(df_train.ds,df_train.y,'b',\n label = \"Training Data\")\n\nplt.plot(df_train.ds, AIC_pred_train,'r-',\n label = \"Improved Fitted Values by Best_AIC\")\n\n# # plot the fit\nplt.plot(df_train.ds, df_train.fbsp,'g-',\n label = \"FB Fitted Values\")\n\n# # plot the forecast\nplt.plot(df_test.ds, df_test.fbsp,'g--',\n label = \"FB Forecast\")\nplt.plot(df_test.ds, AIC_pred_test,'r--',\n label = \"Improved Forecast by Best_AIC\")\nplt.plot(df_test.ds,df_test.y,'b--',\n label = \"Test Data\")\n\nplt.legend(fontsize=14)\n\nplt.xlabel(\"Date\", fontsize=16)\nplt.ylabel(\"SP&500 Close Price\", fontsize=16)\n\nplt.show()\n```\n\n\n```python\nplt.figure(figsize=(18,10))\nplt.plot(df_test.y,label=\"Training Data\")\nplt.plot(df_test.fbsp,label=\"FB Forecast\")\nplt.plot(AIC_pred_test,label=\"Improved Forecast by Best_AIC\")\nplt.legend(fontsize = 14)\nplt.show()\n```\n\n\n```python\n\n```\n\n\n```python\ncolumn = ['tby', 'ffr', 'fta', 'eps', 'div', 'une',\n 'wti', 'ppi', 'rfs', 'fbsp_tby', 'fbsp_ffr', 'fbsp_div', 'eps_tby', 'eps_ffr', 'eps_div']\n```\n\n\n```python\nfrom sklearn import preprocessing\ndf1_train = df_train[['diff', 'tby', 'ffr', 'fta', 'eps', 'div', 'une', 'wti', 'ppi', 'rfs', 'fbsp_tby', 'fbsp_ffr', 'fbsp_div', 'eps_tby', 'eps_ffr', 'eps_div']]\n\nX = preprocessing.scale(df1_train)\nfrom statsmodels.regression.linear_model import OLS\n\nreg_new = OLS((X[:,0]).copy(),X[:,1:].copy()).fit()\nprint(reg_new.params)\n```\n\n [ 1.50405129 1.03228322 0.27409454 1.17073571 0.31243092 -0.75747342\n 0.46988206 -0.39944639 2.10369448 -0.69112943 -2.1804296 -2.38576385\n -1.14196633 1.41832903 -0.34501927]\n\n\n\n```python\n# Before Covid\n# pd.Series(reg_new.params, index=['tby', 'ffr', 'fta', 'eps', 'div', 'une',\n# 'wti', 'ppi', 'rfs', 'fbsp_tby', 'fbsp_ffr', 'fbsp_div', 'eps_tby', 'eps_ffr', 'eps_div'] )\n```\n\n\n```python\n# before covid\ncoef1 = [ 1.50405129, 1.03228322, 0.27409454, 1.17073571, 0.31243092,\n -0.75747342, 0.46988206, -0.39944639, 2.10369448, -0.69112943,\n -2.1804296 , -2.38576385, -1.14196633, 1.41832903, -0.34501927]\n# include covid\ncoef2 = [ 0.65150054, 1.70457239, -0.1573802 , -0.18007979, -0.15221931,\n -0.62326075, 0.45065894, -0.38972706, 2.87210843, -1.17604495,\n -4.92858316, -2.15459111, 0.11418468, 2.74829778, 0.55520382]\n```\n\n\n```python\n# Include Covid\n# pd.Series( np.append( ['coefficients (before covid)'], np.round(coef1,3)), index= np.append(['features'], column) ) \n \n```\n\n\n```python\nindex1 = ['10 Year U.S Treasury Bond Yield Rates (tby)', 'Federal Funds Rates (ffr)',\n 
'Federal Total Assets (fta)', 'Earning-Per-Share of S&P 500 (eps)', 'Dividend Yield of S&P 500 (div)',\n 'Unemployment Rates (une) ', 'West Texas Intermediate oil index (wit)', 'Producer Price Index (ppi)',\n 'Retail and Food Services Sales (rfs)', \n 'fbsp_tby', 'fbsp_ffr', 'fbsp_div', 'eps_tby', 'eps_ffr', 'eps_div'\n ]\n```\n\n\n```python\nlen(index1)\n```\n\n\n\n\n 15\n\n\n\n\n```python\npd.Series(coef2, index =index1)\n```\n\n\n\n\n 10 Year U.S Treasury Bond Yield Rates (tby) 0.651501\n Federal Funds Rates (ffr) 1.704572\n Federal Total Assets (fta) -0.157380\n Earning-Per-Share of S&P 500 (eps) -0.180080\n Dividend Yield of S&P 500 (div) -0.152219\n Unemployment Rates (une) -0.623261\n West Texas Intermediate oil index (wit) 0.450659\n Producer Price Index (ppi) -0.389727\n Retail and Food Services Sales (rfs) 2.872108\n fbsp_tby -1.176045\n fbsp_ffr -4.928583\n fbsp_div -2.154591\n eps_tby 0.114185\n eps_ffr 2.748298\n eps_div 0.555204\n dtype: float64\n\n\n\n\n```python\ndf3 = pd.DataFrame(coef1, index = index1, columns = ['coefficients (before covid)'])\ndf3['coefficients (include covid)'] =pd.Series(coef2, index =index1)\ndf3\n```\n\n\n\n\n
    \n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
    coefficients (before covid)coefficients (include covid)
    10 Year U.S Treasury Bond Yield Rates (tby)1.5040510.651501
    Federal Funds Rates (ffr)1.0322831.704572
    Federal Total Assets (fta)0.274095-0.157380
    Earning-Per-Share of S&P 500 (eps)1.170736-0.180080
    Dividend Yield of S&P 500 (div)0.312431-0.152219
    Unemployment Rates (une)-0.757473-0.623261
    West Texas Intermediate oil index (wit)0.4698820.450659
    Producer Price Index (ppi)-0.399446-0.389727
    Retail and Food Services Sales (rfs)2.1036942.872108
    fbsp_tby-0.691129-1.176045
    fbsp_ffr-2.180430-4.928583
    fbsp_div-2.385764-2.154591
    eps_tby-1.1419660.114185
    eps_ffr1.4183292.748298
    eps_div-0.3450190.555204
    \n
    \n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "6704f542ab5aa067a220f92eeb4abb5ab615366b", "size": 248335, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Code/.ipynb_checkpoints/Scenario2 excluding the covid period-checkpoint.ipynb", "max_stars_repo_name": "thinkhow/Market-Prediction-with-Macroeconomics-features", "max_stars_repo_head_hexsha": "feac711017739ea6ffe46a7fcac6b4b0c265e0b5", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Code/.ipynb_checkpoints/Scenario2 excluding the covid period-checkpoint.ipynb", "max_issues_repo_name": "thinkhow/Market-Prediction-with-Macroeconomics-features", "max_issues_repo_head_hexsha": "feac711017739ea6ffe46a7fcac6b4b0c265e0b5", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-05-24T00:26:34.000Z", "max_issues_repo_issues_event_max_datetime": "2021-05-24T00:26:34.000Z", "max_forks_repo_path": "Code/.ipynb_checkpoints/Scenario2 excluding the covid period-checkpoint.ipynb", "max_forks_repo_name": "shelbycox/Stock-Erdos", "max_forks_repo_head_hexsha": "b0b3bb3e14ea8ad9453b22637293bfa9a781c749", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 296.3424821002, "max_line_length": 111020, "alphanum_fraction": 0.9114744196, "converted": true, "num_tokens": 5102, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9441768651485395, "lm_q2_score": 0.8740772368049823, "lm_q1q2_score": 0.8252835053442258}} {"text": "This notebook is part of https://github.com/AudioSceneDescriptionFormat/splines, see also http://splines.readthedocs.io/.\n\n# Polynomial Parametric Curves\n\n\n```python\n%matplotlib inline\n```\n\n\n```python\nimport sympy as sp\nsp.init_printing(order='grevlex')\n```\n\n\n```python\nt = sp.symbols('t')\n```\n\nThe coefficients are written as bold letters, because they are elements of a vector space (e.g. 
$\\mathbb{R}^3$).\n\n\n```python\ncoefficients = sp.Matrix(sp.symbols('abm:4')[::-1])\ncoefficients\n```\n\nMonomial basis functions:\n\n\n```python\nb_monomial = sp.Matrix([t**3, t**2, t, 1]).T\nb_monomial\n```\n\n\n```python\nb_monomial.dot(coefficients)\n```\n\nThis is a cubic polynomial in its canonical form (monomial basis).\n\nMonomial basis functions:\n\n\n```python\nsp.plot(*b_monomial, (t, 0, 1));\n```\n\nIt doesn't look like much, but every conceivable cubic polynomial can be formulated as exactly one linear combination of those basis functions.\n\nExample:\n\n\n```python\nexample_polynomial = (2 * t - 1)**3 + (t + 1)**2 - 6 * t + 1\nexample_polynomial\n```\n\n\n```python\nsp.plot(example_polynomial, (t, 0, 1));\n```\n\nCan be re-written with monomial basis functions:\n\n\n```python\nexample_polynomial.expand()\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "3e43a382f91193995888395d76a10b69e54e8481", "size": 3550, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "doc/polynomials.ipynb", "max_stars_repo_name": "mgeier/splines", "max_stars_repo_head_hexsha": "f54b09479d98bf13f00a183fd9d664b5783e3864", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/polynomials.ipynb", "max_issues_repo_name": "mgeier/splines", "max_issues_repo_head_hexsha": "f54b09479d98bf13f00a183fd9d664b5783e3864", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/polynomials.ipynb", "max_forks_repo_name": "mgeier/splines", "max_forks_repo_head_hexsha": "f54b09479d98bf13f00a183fd9d664b5783e3864", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 18.9839572193, "max_line_length": 152, "alphanum_fraction": 0.5228169014, "converted": true, "num_tokens": 337, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9579122684798184, "lm_q2_score": 0.8615382076534743, "lm_q1q2_score": 0.8252780188753764}} {"text": "# Linear Algebra\n\n### Vector Space\n\nA ***vector space*** $V$ defined on the field $K$ is ...\n\nIt satisfies the 10 following properties:\n1.\n\nA ***vector*** is an element of a vector space.\n\n### Linear Transformation\n\nMapping between 2 vector spaces.\n\n### Null and range space\n\nAssume two vector spaces $V$ and $W$, and a linear mapping $\\pmb{A}: V \\rightarrow W$.\n\n* The null (aka kernel) space of a matrix $\\pmb{A}$ is defined as the subspace of $V$ such that:\n\n\\begin{equation}\n \\mathcal{N}(\\pmb{A}) = \\{\\pmb{x} \\in V : \\pmb{Ax} = \\pmb{0}\\} \\subseteq V\n\\end{equation}\n\n* The range (aka column or image) space of a matrix $\\pmb{A}$ is defined as the subspace of $W$ such that:\n\n\\begin{equation}\n \\mathcal{R}(\\pmb{A}) = \\{\\pmb{y} \\in W : \\pmb{Ax} = \\pmb{y}, \\; \\forall \\pmb{x} \\in V \\} \\subseteq W\n\\end{equation}\n\nBy the ***Rank\u2013nullity theorem***:\n\\begin{equation}\n dim(\\mathcal{N}(\\pmb{A})) + dim(\\mathcal{R}(\\pmb{A})) = dim(V)\n\\end{equation}\n\n### Rank of a matrix\n\n### Transpose\n\n### Determinant\n\nDeterminant of a square matrix.\n\n\\begin{equation}\n det : \\mathbb{K}^{N \\times N} \\rightarrow \\mathbb{K}: det(\\pmb{A}) = |\\pmb{A}| \n\\end{equation}\n\nProperties:\n* $det(\\pmb{A}) = det(\\pmb{A}^T)$\n* $det(\\pmb{A}^{-1}) = det(\\pmb{A})^{-1}$\n* If $\\pmb{A}$ and $\\pmb{B}$ are square matrices of same size then $det(\\pmb{A}\\pmb{B}) = det(\\pmb{A}) det(\\pmb{B}) = det(\\pmb{B}) det(\\pmb{A}) = det(\\pmb{B}\\pmb{A})$\n\n### Invertible matrix\n\nA square matrix $\\pmb{A} \\in \\mathbb{R}^{N \\times N}$ is invertible, and if $\\pmb{A}\\pmb{A}^{-1} = \\pmb{A}^{-1}\\pmb{A} = \\pmb{I}$.\n\n* $\\pmb{A}$ is invertible.\n* $\\pmb{A}$ is full-rank, i.e. \n* The number 0 is not an eigenvalue of $\\pmb{A}$.\n\nTime complexity: $O(N^3)$\n\nNotes:\n* If you want to know if a matrix $\\pmb{A}$ (which is at least PSD) is PD, you can check if it is invertible. If that's the case, then 0 can not be an eigenvalue of $\\pmb{A}$ and thus the matrix is PD.\n\n### Orthogonal Matrices\n\n$\\pmb{A}$ is an orthogonal (square) matrix if $\\pmb{A}^{-1} = \\pmb{A}^T$, and thus $\\pmb{A}^T\\pmb{A} = \\pmb{A}\\pmb{A}^T = \\pmb{I}$. This implies that the columns (resp. rows) of $\\pmb{A}$ are othonormal to each other. 
\n\n### Symmetric Matrices\n\n### Positive (Semi) Definite\n\n* PSD $\\rightarrow$ symmetric.\n\n* All the evals of a PD matrix are positive, and 0 is not one of these.\n* All the evals of a PSD matrix are non-negative.\n\n### Norm\n\n### Trace\n\n\\begin{equation}\n tr(\\pmb{A}) = \\sum_i a_{ii}\n\\end{equation}\n\nProperties:\n* The trace is a linear operator: $tr(c_1 \\pmb{A}+ c_2 \\pmb{B}) = c_1 tr(\\pmb{A}) + c_2 tr(\\pmb{B})$\n* The transpose has the same trace: $tr(\\pmb{A}) = tr(\\pmb{A}^T)$\n* Invariance under cyclic permutation: $tr(\\pmb{A}\\pmb{B}\\pmb{C}) = tr(\\pmb{B}\\pmb{C}\\pmb{A}) = tr(\\pmb{C}\\pmb{A}\\pmb{B})$\n* Product: $tr(\\pmb{A}\\pmb{B}) \\neq tr(\\pmb{A})tr(\\pmb{B})$\n* Frobenius Norm: $||\\pmb{A}||_F = \\sqrt{tr(\\pmb{A}^T\\pmb{A})} = \\sqrt{tr(\\pmb{A}\\pmb{A}^T)}$\n\n### Matrix Calculus\n\nMore information can be found on the [matrix cookbook](https://www.math.uwaterloo.ca/~hwolkowi/matrixcookbook.pdf).\n\nFirst order:\n\\begin{align}\n \\nabla_{\\pmb{X}} tr(\\pmb{AX}) = \\nabla_{\\pmb{X}} tr(\\pmb{XA}) &= \\pmb{A}^T \\\\\n \\nabla_{\\pmb{X}} tr(\\pmb{AX}^T) = \\nabla_{\\pmb{X}} tr(\\pmb{X}^T\\pmb{A}) &= \\pmb{A}\n\\end{align}\n\nSecond order:\n\\begin{align}\n \\nabla_{\\pmb{X}} tr(\\pmb{A}\\pmb{X}\\pmb{B}\\pmb{X}) = \\nabla_{\\pmb{X}} tr(\\pmb{X}\\pmb{B}\\pmb{X}\\pmb{A}) &= (\\pmb{BXA})^T + (\\pmb{AXB})^T \\\\\n \\nabla_{\\pmb{X}} tr(\\pmb{A}\\pmb{X}^T\\pmb{B}\\pmb{X}) = \\nabla_{\\pmb{X}} tr(\\pmb{X}^T\\pmb{B}\\pmb{X}\\pmb{A}) = \\nabla_{\\pmb{X}} tr(\\pmb{B}\\pmb{X}\\pmb{A}\\pmb{X}^T) &= \\pmb{BXA} + (\\pmb{A}\\pmb{X}^T\\pmb{B})^T \\\\\n\\end{align}\n\n### Pseudo inverse\n\nThis is the generalization of the inverse, in the sense that it can be applied to rectangular matrices. Assume $\\pmb{A} \\in \\mathbb{R}^{N \\times N}$.\n\nRight pseudo-inverse:\n\\begin{equation}\n \\pmb{A}^\\dagger = (\\pmb{A}^T\\pmb{A})^{-1}\\pmb{A}^T\n\\end{equation}\n\nLeft pseudo-inverse:\n\\begin{equation}\n ^\\dagger\\pmb{A} = \\pmb{A}^T(\\pmb{A}\\pmb{A}^T)^{-1}\n\\end{equation}\n\n#### Linear Regression (LR)\n\nAssume the input is given by $\\pmb{X} \\in \\mathbb{R}^{N \\times D_x}$, the output by $\\pmb{Y} \\in \\mathbb{R}^{N \\times D_y}$, and the weight matrix by $\\pmb{W} \\in \\mathbb{R}^{D_x \\times D_y}$.\n\nThe MSE loss is defined as:\n\n\\begin{equation}\n \\mathcal{L} = ||\\pmb{Y} - \\pmb{XW}||^2_F\n\\end{equation}\n\nTaking the gradient of this loss and setting it to zero gives us the minimum:\n\n\\begin{align}\n \\nabla_{\\pmb{W}} \\mathcal{L} &= \\nabla_{\\pmb{W}} ||\\pmb{Y} - \\pmb{XW}||^2_F \\\\\n &= \\nabla_{\\pmb{W}} tr((\\pmb{Y} - \\pmb{XW})^T(\\pmb{Y} - \\pmb{XW})) \\\\\n &= \\nabla_{\\pmb{W}} [tr(\\pmb{Y}^T\\pmb{Y}) - tr((\\pmb{XW})^T\\pmb{Y}) - tr(\\pmb{Y}^T\\pmb{XW}) + tr(\\pmb{W}^T\\pmb{X}^T\\pmb{X}\\pmb{W})] \\\\\n &= \\nabla_{\\pmb{W}} [tr(\\pmb{Y}^T\\pmb{Y}) - tr(\\pmb{Y}^T\\pmb{XW}) - tr(\\pmb{Y}^T\\pmb{XW}) + tr(\\pmb{W}^T\\pmb{X}^T\\pmb{X}\\pmb{W})] \\\\\n &= -2 \\nabla_{\\pmb{W}} tr(\\pmb{Y}^T\\pmb{XW}) + \\nabla_{\\pmb{W}} tr(\\pmb{W}^T\\pmb{X}^T\\pmb{X}\\pmb{W}) \\\\\n &= -2 (\\pmb{Y}^T\\pmb{X})^T + \\pmb{X}^T\\pmb{X} \\pmb{W} + (\\pmb{X}^T\\pmb{X})^T \\pmb{W} \\\\\n &= -2 \\pmb{X}^T\\pmb{Y} + 2 \\pmb{X}^T\\pmb{X} \\pmb{W} \\\\\n &= 0 \\\\\n \\Leftrightarrow & \\quad (\\pmb{X}^T\\pmb{X}) \\pmb{W} = \\pmb{X}^T\\pmb{Y}\n\\end{align}\n\nIf the covariance $(\\pmb{X}^T\\pmb{X})$ is invertible i.e. 
is PD, then the best set of weights are given by:\n\n\\begin{equation}\n \\pmb{W}^* = (\\pmb{X}^T\\pmb{X})^{-1} \\pmb{X}^T\\pmb{Y} = \\pmb{X}^\\dagger \\pmb{Y} = \\pmb{\\Sigma_{XX}}^{-1}\\pmb{\\Sigma_{XY}}\n\\end{equation}\n\n#### Linear Weighted Regression (LWR)\n\n### Covariance\n\nThe covariance $\\pmb{C}_{XY}$ is a positive semi-definite (PSD) matrix which captures linear correlation between 2 random variables $X$ and $Y$. The PSD implies that it is symmetric by definition.\n\n* If the cov is invertible, then 0 can not be an eigenvalue of $\\pmb{C}$. This means that it is positive definite (PD).\n\n### Eigenvalue and Eigenvectors\n\n### Diagonalizable and Eigendecomposition\n\n### Idempotence\n\nAn ***idempotent*** matrix $\\pmb{P}$ is a square 'matrix which, when multiplied by itself, yields itself'.\n\n\\begin{equation}\n \\pmb{P} = \\pmb{PP} = \\pmb{P}^2\n\\end{equation}\n\nProperties:\n* An idempotent matrix (except the identity) is singular (i.e. not full rank).\n* $\\pmb{I} - \\pmb{P}$ is also idempotent.\n* An idempotent matrix is always diagonalizable and its eigenvalues are either 0 or 1.\n* The trace of an idempotent matrix equals the rank of the matrix and thus is always an integer.\n\nIn linear regression, the optimal solution of $\\mathcal{L} = ||\\pmb{Y} - \\pmb{XW}||^2_F$ with respect to $\\pmb{W}$ is $\\pmb{W}^* = (\\pmb{X}^T\\pmb{X})^{-1} \\pmb{X}^T\\pmb{Y} = \\pmb{X}^\\dagger \\pmb{Y}$.\n\nThe residual error is then given by:\n\n\\begin{equation}\n E = (\\pmb{Y} - \\pmb{XW}^*) = (\\pmb{Y} - \\pmb{X}(\\pmb{X}^T\\pmb{X})^{-1} \\pmb{X}^T\\pmb{Y}) = [\\pmb{I} - \\pmb{X}(\\pmb{X}^T\\pmb{X})^{-1} \\pmb{X}^T] \\pmb{Y} = [\\pmb{I} - \\pmb{X}\\pmb{X}^\\dagger] \\pmb{Y} = \\pmb{Q}\\pmb{Y}\n\\end{equation}\n\nThe matrices $\\pmb{P} = \\pmb{X}\\pmb{X}^\\dagger$ and $\\pmb{Q} = [\\pmb{I} - \\pmb{X}\\pmb{X}^\\dagger] = [\\pmb{I} - \\pmb{P}]$ are symmetric and idempotent matrices.\n\n* An idempotent linear operator $\\pmb{P}$ is a projection operator on the range space $\\mathcal{R}(\\pmb{P})$ along its null space $\\mathcal{N}(\\pmb{P})$.\n* $\\pmb{P}$ is an orthogonal projection operator $\\Leftrightarrow$ it is idempotent and symmetric.\nEx: $\\pmb{P} = \\left[\\begin{array}{cc} \\pmb{I} & \\pmb{0} \\\\ \\pmb{0} & \\pmb{0} \\end{array} \\right]$\n\n### Orthogonal Complement and Projection Matrix\n\n\"A ***projection*** is a linear transformation $\\pmb{P}$ from a vector space $V$ to itself such that $\\pmb{P}^2 = \\pmb{P}$ (i.e. $\\pmb{P}$ is idempotent). That is, whenever $\\pmb{P}$ is applied twice to any value, it gives the same result as if it were applied once (idempotent). It leaves its image unchanged.\n\nLet $V$ be a finite dimensional vector space and $\\pmb{P}$ be a projection on $V$. Suppose the subspaces $\\mathcal{R}$ and $\\mathcal{N}$ are the range and null space (aka kernel) of $\\pmb{P}$ respectively. Then $\\pmb{P}$ has the following properties:\n1. $\\pmb{P}$ is idempotent by def (i.e. $\\pmb{P}^2 = \\pmb{P}$)\n2. $\\pmb{P}$ is the identity operator $\\pmb{I}$ on $\\mathcal{R}$ (i.e. $\\forall \\pmb{x} \\in \\mathcal{R}: \\pmb{Px} = \\pmb{x}$)\n3. We have a direct sum $V = \\mathcal{R} \\oplus \\mathcal{N}$. Every vector $\\pmb{x} \\in V$ may be decomposed uniquely as $\\pmb{x} = \\pmb{r} + \\pmb{n}$ with $\\pmb{r} = \\pmb{Px} \\in \\mathcal{R}$ and $\\pmb{n} = \\pmb{x} \u2212 \\pmb{Px} = (\\pmb{I} - \\pmb{P}) \\pmb{x} \\in \\mathcal{N}$.\n\nThe range and kernel of a projection are complementary, as are $\\pmb{P}$ and $\\pmb{Q} = \\pmb{I} \u2212 \\pmb{P}$. 
The operator $\\pmb{Q}$ is also a projection, and the range and null spaces of $\\pmb{P}$ become the null and range spaces of $\\pmb{Q}$ and vice versa. We say $\\pmb{P}$ is a projection along $\\mathcal{N}$ onto $\\mathcal{R}$, and $\\pmb{Q}$ is a projection along $\\mathcal{R}$ onto $\\mathcal{N}$.\"\n\n##### Orthogonal Projection\n\nWhen the vector space $V$ has an inner product and is complete (i.e. it is a Hilbert space), the concept of orthogonality can be used. An ***orthogonal projection*** is a projection for which the range $\\mathcal{R}$ and the null space $\\mathcal{N}$ are orthogonal subspaces. That is, $\\forall \\pmb{r} \\in \\mathcal{R}, \\forall \\pmb{n} \\in \\mathcal{N}: \\pmb{r} . \\pmb{n} = 0$.\n\nThe ***orthogonal complement*** of a subspace $W$ of a vector space $V$ equipped with a bilinear form $B$ is the set $W^\\perp$ of all vectors in $V$ that are orthogonal to every vector in $W$.\n\n### SVD\n\nAny rectangular matrices $\\pmb{A} \\in \\mathbb{R}^{N \\times M}$ can be decomposed into a product of 3 matrices (which can be seen as the building blocks of $\\pmb{A}$):\n\n\\begin{equation}\n \\pmb{A} = \\pmb{U_A}\\pmb{\\Sigma_A}\\pmb{V_A}^T\n\\end{equation}\nwhere:\n* $\\pmb{U_A} \\in \\mathbb{R}^{N \\times N}$ is an orthogonal matrix (i.e. $\\pmb{U_A}^T = \\pmb{U_A}^{-1}$ thus $\\pmb{U_A}\\pmb{U_A}^T = \\pmb{U_A}^T\\pmb{U_A} = \\pmb{I}$). The columns of $\\pmb{U_A}$ contains the eigenvectors of the PSD (thus symmetric) $\\pmb{A}\\pmb{A}^T$, and are also known as the left-singular vectors of $\\pmb{A}$. Its first columns associated with non-zero singular values span the range of $\\pmb{A}$. Note that because $\\pmb{U_A}$ is an orthogonal matrix, it is invertible and thus has full rank, i.e. its columns form a basis and span $\\mathbb{R}^N$.\n* $\\pmb{\\Sigma_A} \\in \\mathbb{R}^{N \\times M}$ is a rectangular matrix. The upper left submatrix is a diagonal matrix of size $r \\times r$ with $r = \\min(N,M)$ while the rest is filled with zeros. The diagonal elements are the singular values ordered by descending order. The singular values (SVs) $\\sigma_i$ are equals to the square roots of the eigenvalues $\\lambda_i$ of $\\pmb{A}\\pmb{A}^T$ and $\\pmb{A}^T\\pmb{A}$. The rank of $\\pmb{A}$ is given by the number of SVs different from 0.\n* $\\pmb{V_A} \\in \\mathbb{R}^{M \\times M}$ is an orthogonal matrix (i.e. $\\pmb{V_A}^T = \\pmb{V_A}^{-1}$ thus $\\pmb{V_A}\\pmb{V_A}^T = \\pmb{V_A}^T\\pmb{V_A} = \\pmb{I}$). The columns of $\\pmb{V_A}$ contains the eigenvectors of the PSD (thus symmetric) $\\pmb{A}^T\\pmb{A}$, and are also known as the right-singular vectors of $\\pmb{A}$. Its last columns associated with vanishing singular values $\\sigma_i = 0$ span the null space of $\\pmb{A}$. Note that because $\\pmb{V_A}$ is an orthogonal matrix, it is invertible and thus has full rank, i.e. 
its columns form a basis and span $\\mathbb{R}^M$.\n\nFew notes:\n* SVD can be seen as a generalization of eigendecomposition.\n* SVD can be compressed such that $\\pmb{A} = \\pmb{\\tilde{U}_A}\\pmb{\\tilde{\\Sigma}_A}\\pmb{\\tilde{V_A}^T}$ with $\\pmb{\\tilde{U}_A} \\in \\mathbb{R}^{N \\times r}$, $\\pmb{\\tilde{\\Sigma}_A} \\in \\mathbb{R}^{r \\times r}$, $\\pmb{V_A} \\in \\mathbb{R}^{M \\times r}$, and $r=\\min(N,M)$.\n* \n\nSome properties:\n* **Existence**:\n* **Uniqueness**:\n* **Transpose**: $\\pmb{A}^T = (\\pmb{U_A}\\pmb{\\Sigma_A}\\pmb{V_A}^T)^T = \\pmb{V_A}\\pmb{\\Sigma_A}^T\\pmb{U_A}^T $\n* If the matrix A is symmetric, then it has real eigenvalues.\n* **Pseudo-inverse**: The pseudo-inverse of $\\pmb{A}$ is given by $\\pmb{A^\\dagger} = \\pmb{V_A}\\pmb{\\Sigma_A}^{-1}\\pmb{U_A}^T$ with $\\pmb{\\Sigma_A}^{-1}$ containing the inverse of singular values on its diagonal.\n* **Null and range space** of $\\pmb{A}$: $\\mathcal{N}(\\pmb{A}) \\equiv \\mathcal{R}^\\perp(\\pmb{A}^T)$ and $\\mathcal{R}(\\pmb{A}) \\equiv \\mathcal{N}^\\perp(\\pmb{A}^T)$. This can be seen by applying SVD on $\\pmb{A}$ and $\\pmb{A}^T$. The last columns of $\\pmb{V_A}$ associated with vanishing SVs spans the null-space of $\\pmb{A}$, and the first columns of $\\pmb{V_A}$ associated with non-zero SVs spans the range space of $\\pmb{A}^T$. Because of the orthogonality property of $\\pmb{V_A}$, we have the $r$ first columns are orthogonals to the $M-r$ last columns. $dim(\\mathcal{R}(\\pmb{A})) + dim(\\mathcal{N}(\\pmb{A})) = M$\n\n### Block Matrices\n\n### PCA\n\nThe (MSE) loss minimized by PCA is:\n\n\\begin{equation}\n \\mathcal{L} = ||\\pmb{X} - \\pmb{XWW}^T||^2_F\n\\end{equation}\n\nwhere is the $\\pmb{\\tilde{X}} = \\pmb{XW}$ is the projected data on the lower dimensional space, and $\\pmb{\\hat{X}} = \\pmb{\\tilde{X}W}^T$ is the data projected back to the original space.\n\nTaking the gradient of this loss with respect to $\\pmb{W}$ and setting it to $0$ gives us the minimum:\n\n\\begin{align}\n \\nabla_{\\pmb{W}} \\mathcal{L} &= \\nabla_{\\pmb{W}} tr((\\pmb{X} - \\pmb{XWW}^T)^T(\\pmb{X} - \\pmb{XWW}^T)) \\\\\n &= \\nabla_{\\pmb{W}} tr((\\pmb{X}^T - \\pmb{WW}^T \\pmb{X}^T) (\\pmb{X} - \\pmb{XWW}^T)) \\\\\n &= \\nabla_{\\pmb{W}} [tr(\\pmb{X}^T\\pmb{X}) - tr(\\pmb{WW}^T \\pmb{X}^T\\pmb{X}) - tr(\\pmb{X}^T\\pmb{X}\\pmb{WW}^T) + tr(\\pmb{WW}^T\\pmb{X}^T\\pmb{XWW}^T)] \\\\\n &= \\nabla_{\\pmb{W}} [- 2 tr(\\pmb{W}^T \\pmb{\\Sigma_{XX}} \\pmb{W}) + tr(\\pmb{WW}^T \\pmb{\\Sigma_{XX}} \\pmb{WW}^T)] \\\\\n &= - 2 (\\pmb{\\Sigma_{XX}} \\pmb{W} + \\pmb{\\Sigma_{XX}}^T \\pmb{W}) + (\\pmb{W}^T \\pmb{\\Sigma_{XX}} \\pmb{WW}^T)^T + (\\pmb{\\Sigma_{XX}} \\pmb{WW}^T)\\pmb{W} + (\\pmb{WW}^T \\pmb{\\Sigma_{XX}})^T (\\pmb{W}^T)^T + (\\pmb{WW}^T \\pmb{\\Sigma_{XX}} \\pmb{W}) \\\\\n &= - 4 \\pmb{\\Sigma_{XX}} \\pmb{W} + \\pmb{WW}^T \\pmb{\\Sigma_{XX}}^T \\pmb{W} + \\pmb{\\Sigma_{XX}} \\pmb{WW}^T\\pmb{W} + \\pmb{\\Sigma_{XX}}^T\\pmb{WW}^T \\pmb{W} + \\pmb{WW}^T \\pmb{\\Sigma_{XX}} \\pmb{W} \\\\\n &= - 4 \\pmb{\\Sigma_{XX}} \\pmb{W} + 2 \\pmb{WW}^T \\pmb{\\Sigma_{XX}} \\pmb{W} + 2 \\pmb{\\Sigma_{XX}} \\pmb{WW}^T\\pmb{W} \\\\\n &= 0 \\\\\n \\Leftrightarrow & \\quad \\pmb{WW}^T \\pmb{\\Sigma_{XX}} \\pmb{W} + \\pmb{\\Sigma_{XX}} \\pmb{WW}^T\\pmb{W} = 2 \\pmb{\\Sigma_{XX}} \\pmb{W} \\\\\n \\Leftrightarrow & \\quad \\pmb{WW}^T \\pmb{\\Sigma_{XX}} + \\pmb{\\Sigma_{XX}} \\pmb{WW}^T = 2 \\pmb{\\Sigma_{XX}} \\\\\n \\Leftrightarrow & \\quad \\pmb{WW}^T + \\pmb{\\Sigma_{XX}} \\pmb{WW}^T \\pmb{\\Sigma_{XX}}^{-1} = 2 \\pmb{I} \\\\\n 
\\Leftrightarrow & \\quad \\pmb{\\Sigma_{XX}} = (2 \\pmb{I} - \\pmb{WW}^T) \\pmb{\\Sigma_{XX}} (\\pmb{WW}^T)^{-1} \\\\\n \\Leftrightarrow & \\quad \\pmb{Q\\Lambda Q}^T = (2 \\pmb{I} - \\pmb{WW}^T) \\pmb{Q\\Lambda Q}^T (\\pmb{WW}^T)^{-1} \\\\\n \\Leftrightarrow & \\quad \\left\\{ \\begin{array}{l} (2 \\pmb{I} - \\pmb{WW}^T) \\pmb{Q} = \\pmb{Q} \\\\ \\pmb{Q}^T (\\pmb{WW}^T)^{-1} = \\pmb{Q}^T \\: \\Leftrightarrow \\: \\pmb{QQ}^T = \\pmb{WW}^T \\end{array} \\right. \\\\\n\\end{align}\n\nAlgo using the covariance:\n1. subtract the mean $\\pmb{X}$\n2. compute the covariance matrix $\\pmb{\\Sigma_{XX}} = \\pmb{X}^T\\pmb{X}$\n3. compute the eigendecomposition of $\\pmb{\\Sigma_{XX}}$, i.e. compute $\\pmb{Q}$ and $\\pmb{\\Lambda}$ such that $\\pmb{\\Sigma_{XX}} = \\pmb{Q} \\pmb{\\Lambda} \\pmb{Q}^T$.\n4. return the sorted evals and the corresponding evecs\n\nAlgo using SVD:\n1. substract the mean from $\\pmb{X}$\n2. compute SVD of $\\pmb{X}$, i.e. $\\pmb{X} = \\pmb{U_X \\Sigma_X V_X}^T$. The eigenvectors are given by $\\pmb{Q} = \\pmb{V_X}$ and the evals by $\\pmb{\\Lambda} = \\pmb{\\Sigma_X}^2$.\n\n#### Recursive PCA/SVD\n\n#### Hierarchical PCA/SVD\n\n### Linear FNN with 1 hidden layer\n\nAssume a linear Feedforward Neural Network (FNN) with 1 input, 1 hidden, and 1 output layer. Assume the input data is given by $\\pmb{X} \\in \\mathbb{R}^{N \\times D_x}$, the output data by $\\pmb{Y} \\in \\mathbb{R}^{N \\times D_y}$, the hidden data by $\\pmb{H} \\in \\mathbb{R}^{N \\times D_h}$, the input-hidden weight matrix by $\\pmb{W_1} \\in \\mathbb{R}^{D_x \\times D_h}$ and the hidden-output weight matrix $\\pmb{W_2} \\in \\mathbb{R}^{D_h \\times D_y}$. These variables are related by the following relationships:\n\n\\begin{equation}\n \\pmb{H} = \\pmb{X} \\pmb{W_1} \\qquad \\mbox{and} \\qquad \\pmb{Y} = \\pmb{H} \\pmb{W_2}\n\\end{equation}\n\nThe MSE loss is thus defined as:\n\n\\begin{equation}\n \\mathcal{L} = ||\\pmb{Y} - \\pmb{XW_1W_2}||^2_F\n\\end{equation}\n\nTaking the gradient of this loss with respect to $\\pmb{W_1}$ and $\\pmb{W_2}$, and setting these to zero gives us the minimum:\n\n\\begin{align}\n \\nabla_{\\pmb{W_1}} \\mathcal{L} &= \\nabla_{\\pmb{W_1}} ||\\pmb{Y} - \\pmb{XW_1W_2}||^2_F \\\\\n &= \\nabla_{\\pmb{W_1}} tr((\\pmb{Y} - \\pmb{XW_1W_2})^T(\\pmb{Y} - \\pmb{XW_1W_2})) \\\\\n &= \\nabla_{\\pmb{W_1}} [tr(\\pmb{Y}^T\\pmb{Y}) - tr((\\pmb{XW_1W_2})^T\\pmb{Y}) - tr(\\pmb{Y}^T\\pmb{XW_1W_2}) + tr(\\pmb{W}^T\\pmb{X}^T\\pmb{X}\\pmb{W})] \\\\\n &= \\nabla_{\\pmb{W_1}} [tr(\\pmb{Y}^T\\pmb{Y}) - tr(\\pmb{Y}^T\\pmb{XW_1W_2}) - tr(\\pmb{Y}^T\\pmb{XW_1W_2}) + tr(\\pmb{(W_1W_2)}^T\\pmb{X}^T\\pmb{X}\\pmb{(W_1W_2)})] \\\\\n &= \\nabla_{\\pmb{W_1}} [-2 tr(\\pmb{Y}^T\\pmb{XW_1W_2}) + tr(\\pmb{W_2}^T\\pmb{W_1}^T\\pmb{X}^T\\pmb{X}\\pmb{W_1}\\pmb{W_2})] \\\\\n &= \\nabla_{\\pmb{W_1}} [-2 tr(\\pmb{\\Sigma_{YX}}\\pmb{W_1W_2}) + tr(\\pmb{W_2}^T\\pmb{W_1}^T\\pmb{\\Sigma_{XX}}\\pmb{W_1}\\pmb{W_2})] \\\\\n &= -2 (\\pmb{W_2}\\pmb{\\Sigma_{YX}})^T + \\pmb{\\Sigma_{XX}}\\pmb{W_1W_2W_2}^T + \\pmb{\\Sigma_{XX}}^T\\pmb{W_1}(\\pmb{W_2W_2}^T)^T \\\\\n &= -2 \\pmb{\\Sigma_{XY}} \\pmb{W_2}^T + 2 \\pmb{\\Sigma_{XX}}\\pmb{W_1W_2W_2}^T \\\\\n &= 0 \\\\\n \\Leftrightarrow & \\quad \\pmb{\\Sigma_{XX}}\\pmb{W_1W_2W_2}^T = \\pmb{\\Sigma_{XY}} \\pmb{W_2}^T \\\\\n \\Leftrightarrow & \\quad \\pmb{W_1} = \\pmb{\\Sigma_{XX}}^{-1} \\pmb{\\Sigma_{XY}} \\pmb{W_2}^T (\\pmb{W_2W_2}^T)^{-1}\n\\end{align}\n\nand \n\n\\begin{align}\n \\nabla_{\\pmb{W_2}} \\mathcal{L} &= \\nabla_{\\pmb{W_2}} [-2 tr(\\pmb{\\Sigma_{YX}}\\pmb{W_1W_2}) + 
tr(\\pmb{W_2}^T\\pmb{W_1}^T\\pmb{\\Sigma_{XX}}\\pmb{W_1}\\pmb{W_2})] \\\\\n &= -2 (\\pmb{\\Sigma_{YX}}\\pmb{W_1})^T + \\pmb{W_1}^T\\pmb{\\Sigma_{XX}}\\pmb{W_1}\\pmb{W_2} + (\\pmb{W_1}^T\\pmb{\\Sigma_{XX}}\\pmb{W_1})^T\\pmb{W_2} \\\\\n &= -2 \\pmb{W_1}^T\\pmb{\\Sigma_{XY}} + 2 \\pmb{W_1}^T\\pmb{\\Sigma_{XX}}\\pmb{W_1}\\pmb{W_2} \\\\\n &= 0 \\\\\n \\Leftrightarrow & \\quad \\pmb{W_1}^T\\pmb{\\Sigma_{XX}}\\pmb{W_1}\\pmb{W_2} = \\pmb{W_1}^T \\pmb{\\Sigma_{XY}} \\\\\n \\Leftrightarrow & \\quad \\pmb{W_2} = (\\pmb{W_1}^T\\pmb{\\Sigma_{XX}}\\pmb{W_1})^{-1} \\pmb{W_1}^T \\pmb{\\Sigma_{XY}} \\\\\n\\end{align}\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "d703912d61944732c1a5ace19b63c65f26dc5fd8", "size": 24780, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "tutorials/math/Linear-Algebra.ipynb", "max_stars_repo_name": "Pandinosaurus/pyrobolearn", "max_stars_repo_head_hexsha": "9cd7c060723fda7d2779fa255ac998c2c82b8436", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-01-21T21:08:30.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-29T16:45:49.000Z", "max_issues_repo_path": "tutorials/math/Linear-Algebra.ipynb", "max_issues_repo_name": "Pandinosaurus/pyrobolearn", "max_issues_repo_head_hexsha": "9cd7c060723fda7d2779fa255ac998c2c82b8436", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tutorials/math/Linear-Algebra.ipynb", "max_forks_repo_name": "Pandinosaurus/pyrobolearn", "max_forks_repo_head_hexsha": "9cd7c060723fda7d2779fa255ac998c2c82b8436", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-09-29T21:25:39.000Z", "max_forks_repo_forks_event_max_datetime": "2020-09-29T21:25:39.000Z", "avg_line_length": 50.7786885246, "max_line_length": 644, "alphanum_fraction": 0.5189669088, "converted": true, "num_tokens": 8182, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9207896802383028, "lm_q2_score": 0.896251377983158, "lm_q1q2_score": 0.8252590197462503}} {"text": "\n \n
    \n Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Kyle T. Mandli
    \n\n\n```python\nfrom __future__ import print_function\nfrom __future__ import absolute_import\n\n%matplotlib inline\nimport numpy\nimport matplotlib.pyplot as plt\n```\n\n# Numerical Differentiation\n\n**GOAL:** Given a set of $N+1$ points $(x_i, y_i)$ compute the derivative of a given order to a specified accuracy.\n\n**Approaches:** \n * Find the interpolating polynomial $P_N(x)$ and differentiate that.\n * Use Taylor-series expansions and the method of undetermined coefficients to derive finite-difference weights and their error estimates\n \n**Issues:** Order vs accuracy...how to choose\n\n# Example 1: how to approximate the derivative $f'(x)$ given a discrete sampling of a function $f(x)$\n\nHere we will consider how to estimate $f'(x_k)$ given a $N$ point sampling of $f(x)=\\sin(\\pi x) + 1/2 \\sin(2\\pi x)$ sampled uniformly over the interval $x\\in [ 0,1]$\n\n\n```python\nN = 11\nx = numpy.linspace(0,1,N)\nxfine = numpy.linspace(0,1,101)\nf = lambda x: numpy.sin(numpy.pi*x) + 0.5*numpy.sin(4*numpy.pi*x)\n\nfig = plt.figure(figsize=(8,6))\naxes = fig.add_subplot(1,1,1)\naxes.plot(xfine, f(xfine),'b',label='$f(x)$')\naxes.plot(x, f(x), 'ro', markersize=12, label='$f(x_k)$')\naxes.grid()\naxes.set_xlabel('x')\np = numpy.polyfit(x,f(x),N-1)\naxes.plot(xfine,numpy.polyval(p,xfine),'g--',label='$P_{{{N}}}$'.format(N=N-1))\naxes.legend(fontsize=15)\nplt.show()\n```\n\n### Example 2: how to approximate derivative $f'(x)$ given a discrete sampling of a function $f(x)$\n\nHere we will consider how to estimate $f'(x_k)$ given a $N$ point sampling of Runge's function sampled uniformly over the interval $x\\in [ -1,1]$\n\n\n```python\nN = 11\nx = numpy.linspace(-1,1,N)\nxfine = numpy.linspace(-1,1,101)\nf = lambda x: 1./(1. + 25*x**2)\n\nfig = plt.figure(figsize=(8,6))\naxes = fig.add_subplot(1,1,1)\naxes.plot(xfine, f(xfine),'b',label='$f(x)$')\naxes.plot(x, f(x), 'ro', markersize=12, label='$f(x_k)$')\naxes.grid()\naxes.set_xlabel('x')\np = numpy.polyfit(x,f(x),N-1)\naxes.plot(xfine,numpy.polyval(p,xfine),'g--',label='$P_{{{N}}}$'.format(N=N-1))\naxes.legend(fontsize=15)\nplt.show()\n```\n\n### The interpolating polynomial: review\n\nFrom our previous lecture, we showed that we can approximate a function $f(x)$ over some interval in terms of a unique interpolating polynomial through $N+1$ points and a remainder term\n\n$$\n f(x) = P_N(x) + R_N(x)\n$$\n\nWhere the Lagrange remainder term is\n\n$$R_N(x) = (x - x_0)(x - x_1)\\cdots (x - x_{N})(x - x_{N+1}) \\frac{f^{(N+1)}(c)}{(N+1)!}$$\n\nWhile there are multiple ways to represent the interpolating polynomial, both $P_N(x)$ and $R_N(x)$ are polynomials in $x$ and therefore differentiable. Thus we should be able to calculate the first derivative and its error as\n\n$$\n f'(x) = P'_N(x) + R'_N(x)\n$$\n\nand likewise for higher order derivatives up to degree $N$.\n\n### Derivatives of the Lagrange Polynomials \n\nThe Lagrange basis, is a particularly nice basis for calculating numerical differentiation formulas because of their basic interpolating property that\n\n$$\n P_N(x) = \\sum_{i=0}^N f(x_i)\\ell_i(x)\n$$\n\nwhere $f(x_i)$ is just the value of our function $f$ at node $x_i$ and all of the $x$ dependence is contained in the Lagrange Polynomials $\\ell_i(x)$ (which only depend on the node coordinates $x_i$, $i=0,\\ldots,N$). 
Thus, the interpolating polynomial at any $x$ is simply a linear combination of the values at the nodes $f(x_i)$\n\nLikewise its first derivative\n$$\nP'_N(x) = \\sum_{i=0}^N f(x_i)\\ell'_i(x)\n$$\nis also just a linear combination of the values $f(x_i)$\n\n## Examples\n\nGiven the potentially, highly oscillatory nature of the interpolating polynomial, in practice we only use a small number of data points around a given point $x_k$ to derive a differentiation formula for the derivative $f'(x_k)$. In the context of differential equations we also often have $f(x)$ so that $f(x_k) = y_k$ and we can approximate the derivative of a known function $f(x)$.\n\n\n```python\nN = 9\nf = lambda x: 1./(1. + 25*x**2)\n#f = lambda x: numpy.cos(2.*numpy.pi*x)\n```\n\n\n```python\nx = numpy.linspace(-1,1,N)\nxfine = numpy.linspace(-1,1,101)\n\nfig = plt.figure(figsize=(8,6))\naxes = fig.add_subplot(1,1,1)\naxes.plot(xfine, f(xfine),'b',label='$f(x)$')\naxes.plot(x, f(x), 'ro', markersize=12, label='$f(x_k)$')\nx3 = x[5:8]\nx3fine = numpy.linspace(x3[0],x3[-1],20)\np = numpy.polyfit(x3,f(x3),2)\naxes.plot(x3,f(x3),'m',label = 'Piecewise $P_1(x)$')\naxes.plot(x3fine,numpy.polyval(p,x3fine),'k',label = 'Piecewise $P_2(x)$')\naxes.grid()\naxes.set_xlabel('x')\np = numpy.polyfit(x,f(x),N-1)\naxes.plot(xfine,numpy.polyval(p,xfine),'g--',label='$P_{{{N}}}$'.format(N=N-1))\naxes.legend(fontsize=14,loc='best')\nplt.show()\n```\n\n### Example: 1st order polynomial through 2 points $x=x_0, x_1$:\n\n\n$$\n P_1(x)=f_0\\ell_0(x) + f_1\\ell_1(x)\n$$\n\nOr written out in full\n\n$$\nP_1(x) = f_0\\frac{x-x_1}{x_0-x_1} + f_1\\frac{x-x_0}{x_1-x_0} \n$$\n\n\nThus the first derivative of this polynomial for all $x\\in[x_0,x_1]$ is\n\n$$\nP'_1(x) = \\frac{f_0}{x_0-x_1} + \\frac{f_1}{x_1-x_0} = \\frac{f_1 - f_0}{x_1 - x_0} = \\frac{f_1 - f_0}{\\Delta x}\n$$\n\nWhere $\\Delta x$ is the width of the interval. This formula is simply the slope of the chord connecting the points $(x_0, f_0)$ and $(x_1,f_1)$. Note also, that the estimate of the first-derivative is constant for all $x\\in[x_0,x_1]$.\n\n#### \"Forward\" and \"Backward\" first derivatives\n\nEven though the first derivative by this method is the same at both $x_0$ and $x_1$, we sometime make a distinction between the \"forward Derivative\"\n\n$$f'(x_n) \\approx D_1^+ = \\frac{f(x_{n+1}) - f(x_n)}{\\Delta x}$$\n\nand the \"backward\" finite-difference as\n\n$$f'(x_n) \\approx D_1^- = \\frac{f(x_n) - f(x_{n-1})}{\\Delta x}$$\n\n\n\nNote these approximations should be familiar to use as the limit as $\\Delta x \\rightarrow 0$ these are no longer approximations but equivalent definitions of the derivative at $x_n$.\n\n### Example: 2nd order polynomial through 3 points $x=x_0, x_1, x_2$:\n\n\n$$\n P_2(x)=f_0\\ell_0(x) + f_1\\ell_1(x) + f_2\\ell_2(x)\n$$\n\nOr written out in full\n\n$$\nP_2(x) = f_0\\frac{(x-x_1)(x-x_2)}{(x_0-x_1)(x_0-x_2)} + f_1\\frac{(x-x_0)(x-x_2)}{(x_1-x_0)(x_1-x_2)} + f_2\\frac{(x-x_0)(x-x_1)}{(x_2-x_0)(x_2-x_1)}\n$$\n\n\nThus the first derivative of this polynomial for all $x\\in[x_0,x_2]$ is\n\n$$\nP'_2(x) = f_0\\frac{(x-x_1)+(x-x_2)}{(x_0-x_1)(x_0-x_2)} + f_1\\frac{(x-x_0)+(x-x_2)}{(x_1-x_0)(x_1-x_2)} + f_2\\frac{(x-x_0)+(x-x_1)}{(x_2-x_0)(x_2-x_1)}\n$$\n\n\n\n**Exercise**: show that the second-derivative $P''_2(x)$ is a constant (find it!) 
but is also just a linear combination of the function values at the nodes.\n\n### Special case of equally spaced nodes $x = [-h, 0, h]$ where $h=\\Delta x$ is the grid spacing\n\n\nGeneral Case:\n$$\nP'_2(x) = f_0\\frac{(x-x_1)+(x-x_2)}{(x_0-x_1)(x_0-x_2)} + f_1\\frac{(x-x_0)+(x-x_2)}{(x_1-x_0)(x_1-x_2)} + f_2\\frac{(x-x_0)+(x-x_1)}{(x_2-x_0)(x_2-x_1)}\n$$\n\nBecomes:\n$$\nP'_2(x) = f_0\\frac{2x-h}{2h^2} + f_1\\frac{-2x}{h^2} + f_2\\frac{2x+h}{2h^2}\n$$\n\nwhich if we evaluate at the three nodes $-h,0,h$ yields\n\n$$\nP'_2(-h) = \\frac{-3f_0 + 4f_1 -1f_2}{2h}, \\quad\\quad P'_2(0) = \\frac{-f_0 + f_2}{2h}, \\quad\\quad P'_2(h) = \\frac{f_0 -4f_1 + 3f_2}{2h} \n$$\n\nAgain, just linear combinations of the values at the nodes $f(x_i)$\n\n#### Quick Checks\n\nIn general, all finite difference formulas can be written as linear combinations of the values of $f(x)$ at the nodes. The formula's can be hard to remember, but they are easy to check.\n\n* The sum of the coefficients must add to zero. Why?\n* The sign of the coefficients can be checked by inserting $f(x_i) = x_i$\n\n##### Example\n\nGiven \n$$\nP'_2(-h) =\\frac{-3f_0 + 4f_1 -1f_2}{2h}\n$$\n\nWhat is $P'_2(-h)$ if\n\n* $$f_0=f_1=f_2$$\n* $$f_0 = 0, ~f_1 = 1, ~f_2 = 2$$ \n\n### Error Analysis\n\nIn addition to calculating finite difference formulas, we can also estimate the error\n\nFrom Lagrange's Theorem, the remainder term looks like\n\n$$R_N(x) = (x - x_0)(x - x_1)\\cdots (x - x_{N})(x - x_{N+1}) \\frac{f^{(N+1)}(c)}{(N+1)!}$$\n\nThus the derivative of the remainder term $R_N(x)$ is\n\n$$R_N'(x) = \\left(\\sum^{N}_{i=0} \\left( \\prod^{N}_{j=0,~j\\neq i} (x - x_j) \\right )\\right ) \\frac{f^{(N+1)}(c)}{(N+1)!}$$\n\nThe remainder term contains a sum of $N$'th order polynomials and can be awkward to evaluate, however, if we restrict ourselves to the error at any given node $x_k$, the remainder simplifies to \n\n$$R_N'(x_k) = \\left( \\prod^{N}_{j=0,~j\\neq k} (x_k - x_j) \\right) \\frac{f^{(N+1)}(c)}{(N+1)!}$$\n\nIf we let $\\Delta x = \\max_i |x_k - x_i|$ we then know that the remainder term will be $\\mathcal{O}(\\Delta x^N)$ as $\\Delta x \\rightarrow 0$ thus showing that this approach converges and we can find arbitrarily high order approximations (ignoring floating point error).\n\n### Examples\n\n#### First order differences $N=1$\n\nFor our first order finite differences, the error term is simply\n\n$$R_1'(x_0) = -\\Delta x \\frac{f''(c)}{2}$$\n$$R_1'(x_1) = \\Delta x \\frac{f''(c)}{2}$$\n\nBoth of which are $O(\\Delta x f'')$\n\n#### Second order differences $N=2$\n\n\nFor general second order polynomial interpolation, the derivative of the remainder term is\n\n$$\\begin{aligned}\n R_2'(x) &= \\left(\\sum^{2}_{i=0} \\left( \\prod^{2}_{j=0,~j\\neq i} (x - x_j) \\right )\\right ) \\frac{f'''(c)}{3!} \\\\\n &= \\left ( (x - x_{i+1}) (x - x_{i-1}) + (x-x_i) (x-x_{i-1}) + (x-x_i)(x-x_{i+1}) \\right ) \\frac{f'''(c)}{3!}\n\\end{aligned}$$\n\nAgain evaluating this expression at the center point $x = x_i$ and assuming evenly space points we have\n\n$$R_2'(x_i) = -\\Delta x^2 \\frac{f'''(c)}{3!}$$\n\nshowing that our error is $\\mathcal{O}(\\Delta x^2)$.\n\n### Caution\n\nHigh order does not necessarily imply high-accuracy! \n\nAs always, the question remains as to whether the underlying function is well approximated by a high-order polynomial.\n\n\n### Convergence \n\nNevertheless, we can always check to see if the error reduces as expected as $\\Delta x\\rightarrow 0$. 
Here we estimate the 1st and 2nd order first-derivative for evenly spaced points\n\n\n```python\ndef D1_p(func, x_min, x_max, N):\n \"\"\" calculate consistent 1st order Forward difference of a function func(x) defined on the interval [x_min,xmax]\n and sampled at N evenly spaced points\"\"\"\n\n x = numpy.linspace(x_min, x_max, N)\n f = func(x)\n dx = x[1] - x[0]\n f_prime = numpy.zeros(N)\n f_prime[0:-1] = (f[1:] - f[0:-1])/dx\n # and patch up the end point with a backwards difference\n f_prime[-1] = f_prime[-2]\n\n return f_prime\n\ndef D1_2(func, x_min, x_max, N):\n \"\"\" calculate consistent 2nd order first derivative of a function func(x) defined on the interval [x_min,xmax]\n and sampled at N evenly spaced points\"\"\"\n\n x = numpy.linspace(x_min, x_max, N)\n f = func(x)\n dx = x[1] - x[0]\n f_prime = numpy.zeros(N)\n f_prime[0] = f[:3].dot(numpy.array([-3, 4, -1]))/(2*dx)\n f_prime[1:-1] = (f[2:N] - f[0:-2])/(2*dx)\n f_prime[-1] = f[-3:].dot(numpy.array([1, -4, 3]))/(2*dx)\n \n return f_prime\n```\n\n#### Note: \n\nThis first derivative operator can also be written as a Matrix $D$ such that $f'(\\mathbf{x}) = Df(\\mathbf{x})$ where $\\mathbf{x}$ is a vector of $x$ coordinates. (exercise left for the homework)\n\n\n```python\nN = 11\nxmin = 0.\nxmax = 1.\nfunc = lambda x: numpy.sin(numpy.pi*x) + 0.5*numpy.sin(4*numpy.pi*x)\nfunc_prime = lambda x: numpy.pi*numpy.cos(numpy.pi*x) + 2.*numpy.pi * numpy.cos(4*numpy.pi*x)\nD1f = D1_p(func, xmin, xmax, N)\nD2f = D1_2(func, xmin, xmax, N)\n```\n\n\n```python\nxa = numpy.linspace(xmin, xmax, 100)\nxi = numpy.linspace(xmin, xmax, N)\nfig = plt.figure(figsize=(16, 6))\naxes = fig.add_subplot(1, 2, 1)\naxes.plot(xa, func(xa), 'b', label=\"$f(x)$\")\naxes.plot(xa, func_prime(xa), 'k--', label=\"$f'(x)$\")\naxes.plot(xi, func(xi), 'ro')\naxes.plot(xi, D1f, 'ko',label='$D^+_1(f)$')\naxes.legend(loc='best')\naxes.set_title(\"$f'(x)$\")\naxes.set_xlabel(\"x\")\naxes.set_ylabel(\"$f'(x)$ and $\\hat{f}'(x)$\")\naxes.grid()\n\naxes = fig.add_subplot(1, 2, 2)\naxes.plot(xa, func(xa), 'b', label=\"$f(x)$\")\naxes.plot(xa, func_prime(xa), 'k--', label=\"$f'(x)$\")\naxes.plot(xi, func(xi), 'ro')\naxes.plot(xi, D2f, 'go',label='$D_1^2(f)$')\naxes.legend(loc='best')\naxes.set_title(\"$f'(x)$\")\naxes.set_xlabel(\"x\")\naxes.set_ylabel(\"$f'(x)$ and $\\hat{f}'(x)$\")\naxes.grid()\nplt.show()\n```\n\n\n```python\nN = 11\nxmin = -1\nxmax = 1.\nfunc = lambda x: 1./(1 + 25.*x**2)\nfunc_prime = lambda x: -50. * x / (1. 
+ 25.*x**2)**2\nD1f = D1_p(func, xmin, xmax, N)\nD2f = D1_2(func, xmin, xmax, N)\n```\n\n\n```python\nxa = numpy.linspace(xmin, xmax, 100)\nxi = numpy.linspace(xmin, xmax, N)\nfig = plt.figure(figsize=(16, 6))\naxes = fig.add_subplot(1, 2, 1)\naxes.plot(xa, func(xa), 'b', label=\"$f(x)$\")\naxes.plot(xa, func_prime(xa), 'k--', label=\"$f'(x)$\")\naxes.plot(xi, func(xi), 'ro')\naxes.plot(xi, D1f, 'ko',label='$D^+_1(f)$')\naxes.legend(loc='best')\naxes.set_title(\"$f'(x)$\")\naxes.set_xlabel(\"x\")\naxes.set_ylabel(\"$f'(x)$ and $\\hat{f}'(x)$\")\naxes.grid()\n\naxes = fig.add_subplot(1, 2, 2)\naxes.plot(xa, func(xa), 'b', label=\"$f(x)$\")\naxes.plot(xa, func_prime(xa), 'k--', label=\"$f'(x)$\")\naxes.plot(xi, func(xi), 'ro')\naxes.plot(xi, D2f, 'go',label='$D_1^2(f)$')\naxes.legend(loc='best')\naxes.set_title(\"$f'(x)$\")\naxes.set_xlabel(\"x\")\naxes.set_ylabel(\"$f'(x)$ and $\\hat{f}'(x)$\")\naxes.grid()\nplt.show()\n```\n\n#### Computing Order of Convergence\n\nSay we had the error $E(\\Delta x)$ and we wanted to make a statement about the rate of convergence (note we can replace $E$ here with the $R$ from above). Then we can do the following:\n$$\\begin{aligned}\n E(\\Delta x) &= C \\Delta x^n \\\\\n \\log E(\\Delta x) &= \\log C + n \\log \\Delta x\n\\end{aligned}$$\n\nThe slope of the line is $n$ when modeling the error like this! We can also match the first point by solving for $C$:\n\n$$\n C = e^{\\log E(\\Delta x) - n \\log \\Delta x}\n$$\n\n\n```python\n# Compute the error as a function of delta_x\nN_range = numpy.logspace(1, 4, 10, dtype=int)\ndelta_x = numpy.empty(N_range.shape)\nerror = numpy.empty((N_range.shape[0], 4))\nfor (i, N) in enumerate(N_range):\n x_hat = numpy.linspace(xmin, xmax, N)\n delta_x[i] = x_hat[1] - x_hat[0]\n\n # Compute forward difference\n D1f = D1_p(func, xmin, xmax, N)\n \n # Compute 2nd order difference\n D2f = D1_2(func, xmin, xmax, N)\n\n \n # Calculate the infinity norm or maximum error\n error[i, 0] = numpy.linalg.norm(numpy.abs(func_prime(x_hat) - D1f), ord=numpy.inf)\n error[i, 1] = numpy.linalg.norm(numpy.abs(func_prime(x_hat) - D2f), ord=numpy.inf)\n \nerror = numpy.array(error)\ndelta_x = numpy.array(delta_x)\n \norder_C = lambda delta_x, error, order: numpy.exp(numpy.log(error) - order * numpy.log(delta_x))\n \nfig = plt.figure(figsize=(8,6))\naxes = fig.add_subplot(1,1,1)\naxes.loglog(delta_x, error[:,0], 'ro', label='$D_1^+$')\naxes.loglog(delta_x, error[:,1], 'bo', label='$D_1^2$')\naxes.loglog(delta_x, order_C(delta_x[0], error[0, 0], 1.0) * delta_x**1.0, 'r--', label=\"1st Order\")\naxes.loglog(delta_x, order_C(delta_x[0], error[0, 1], 2.0) * delta_x**2.0, 'b--', label=\"2nd Order\")\naxes.legend(loc=4)\naxes.set_title(\"Convergence of Finite Differences\", fontsize=18)\naxes.set_xlabel(\"$\\Delta x$\", fontsize=16)\naxes.set_ylabel(\"$|f'(x) - \\hat{f}'(x)|$\", fontsize=16)\naxes.legend(loc='best', fontsize=14)\naxes.grid()\n\nplt.show()\n```\n\n# Another approach: The method of undetermined Coefficients\n\nAn alternative method for finding finite-difference formulas is by using Taylor series expansions about the point we want to approximate. The Taylor series about $x_n$ is\n\n$$f(x) = f(x_n) + (x - x_n) f'(x_n) + \\frac{(x - x_n)^2}{2!} f''(x_n) + \\frac{(x - x_n)^3}{3!} f'''(x_n) + \\mathcal{O}((x - x_n)^4)$$\n\nSay we want to derive the second order accurate, first derivative approximation that we just did, this requires the values $(x_{n+1}, f(x_{n+1})$ and $(x_{n-1}, f(x_{n-1})$. 
We can express these values via our Taylor series approximation above as\n\n\\begin{aligned}\n f(x_{n+1}) &= f(x_n) + (x_{n+1} - x_n) f'(x_n) + \\frac{(x_{n+1} - x_n)^2}{2!} f''(x_n) + \\frac{(x_{n+1} - x_n)^3}{3!} f'''(x_n) + \\mathcal{O}((x_{n+1} - x_n)^4) \\\\\n\\end{aligned}\n\nor\n\\begin{aligned}\n&= f(x_n) + \\Delta x f'(x_n) + \\frac{\\Delta x^2}{2!} f''(x_n) + \\frac{\\Delta x^3}{3!} f'''(x_n) + \\mathcal{O}(\\Delta x^4)\n\\end{aligned}\n\nand\n\n\\begin{align}\nf(x_{n-1}) &= f(x_n) + (x_{n-1} - x_n) f'(x_n) + \\frac{(x_{n-1} - x_n)^2}{2!} f''(x_n) + \\frac{(x_{n-1} - x_n)^3}{3!} f'''(x_n) + \\mathcal{O}((x_{n-1} - x_n)^4) \n\\end{align}\n\n\\begin{align} \n&= f(x_n) - \\Delta x f'(x_n) + \\frac{\\Delta x^2}{2!} f''(x_n) - \\frac{\\Delta x^3}{3!} f'''(x_n) + \\mathcal{O}(\\Delta x^4)\n\\end{align}\n\nOr all together (for regularly spaced points),\n\\begin{align} \nf(x_{n+1}) &= f(x_n) + \\Delta x f'(x_n) + \\frac{\\Delta x^2}{2!} f''(x_n) + \\frac{\\Delta x^3}{3!} f'''(x_n) + \\mathcal{O}(\\Delta x^4)\\\\\nf(x_n) &= f(x_n) \\\\\nf(x_{n-1})&= f(x_n) - \\Delta x f'(x_n) + \\frac{\\Delta x^2}{2!} f''(x_n) - \\frac{\\Delta x^3}{3!} f'''(x_n) + \\mathcal{O}(\\Delta x^4)\n\\end{align}\n\nNow to find out how to combine these into an expression for the derivative we assume our approximation looks like\n\n$$\n f'(x_n) + R(x_n) = A f(x_{n+1}) + B f(x_n) + C f(x_{n-1})\n$$\n\nwhere $R(x_n)$ is our error. \n\nPlugging in the Taylor series approximations we find\n\n$$\\begin{aligned}\n f'(x_n) + R(x_n) &= A \\left ( f(x_n) + \\Delta x f'(x_n) + \\frac{\\Delta x^2}{2!} f''(x_n) + \\frac{\\Delta x^3}{3!} f'''(x_n) + \\mathcal{O}(\\Delta x^4)\\right ) \\\\\n & + B ~~~~f(x_n) \\\\ \n & + C \\left ( f(x_n) - \\Delta x f'(x_n) + \\frac{\\Delta x^2}{2!} f''(x_n) - \\frac{\\Delta x^3}{3!} f'''(x_n) + \\mathcal{O}(\\Delta x^4) \\right )\n\\end{aligned}$$\n\nOr\n$$\nf'(x_n) + R(x_n)= (A + B + C) f(x_n) + (A\\Delta x +0B - C\\Delta x)f'(x_n) + (A\\frac{\\Delta x^2}{2!} + C\\frac{\\Delta x^2}{2!})f''(x_n) + O(\\Delta x^3)\n$$\n\nSince we want $R(x_n) = \\mathcal{O}(\\Delta x^2)$ we want all terms lower than this to cancel except for those multiplying $f'(x_n)$ as those should sum to 1 to give us our approximation. Collecting the terms with common evaluations of the derivatives on $f(x_n)$ we get a series of expressions for the coefficients $A$, $B$, and $C$ based on the fact we want an approximation to $f'(x_n)$. The $n=0$ terms collected are $A + B + C$ and are set to 0 as we want the $f(x_n)$ term to also cancel.\n\n$$\\begin{aligned}\n f(x_n):& &A + B + C &= 0 \\\\\n f'(x_n): & &A \\Delta x - C \\Delta x &= 1 \\\\\n f''(x_n): & &A \\frac{\\Delta x^2}{2} + C \\frac{\\Delta x^2}{2} &= 0\n\\end{aligned} $$\n\nOr as a linear algebra problem\n\n$$\\begin{bmatrix}\n1 & 1 & 1 \\\\\n\\Delta x & 0 &-\\Delta x \\\\\n\\frac{\\Delta x^2}{2} & 0 & \\frac{\\Delta x^2}{2} \\\\\n\\end{bmatrix}\n\\begin{bmatrix} A \\\\ B\\\\ C\\\\\\end{bmatrix} =\n\\begin{bmatrix} 0 \\\\ 1\\\\ 0\\\\\\end{bmatrix} \n$$\n\nThis last equation $\\Rightarrow A = -C$, using this in the second equation gives $A = \\frac{1}{2 \\Delta x}$ and $C = -\\frac{1}{2 \\Delta x}$. The first equation then leads to $B = 0$. 
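The result is easy to check numerically. The following is a small sketch (assuming `numpy` is imported as in the earlier cells; the value of `delta_x` is arbitrary) that solves the same $3 \times 3$ system and recovers $A = 1/(2\Delta x)$, $B = 0$, $C = -1/(2\Delta x)$:

```python
# Numerical check of the undetermined-coefficients solve (delta_x chosen arbitrarily)
delta_x = 0.1
M = numpy.array([[1.0, 1.0, 1.0],
                 [delta_x, 0.0, -delta_x],
                 [delta_x**2 / 2.0, 0.0, delta_x**2 / 2.0]])
rhs = numpy.array([0.0, 1.0, 0.0])
A_coef, B_coef, C_coef = numpy.linalg.solve(M, rhs)
print(A_coef, B_coef, C_coef)      # approximately 5.0, 0.0, -5.0
print(1.0 / (2.0 * delta_x))       # A = 1/(2*delta_x) = 5.0
```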
\n\nPutting this altogether then gives us our previous expression including an estimate for the error:\n\n$$\\begin{aligned}\n f'(x_n) + R(x_n) &= \\quad \\frac{1}{2 \\Delta x} \\left ( f(x_n) + \\Delta x f'(x_n) + \\frac{\\Delta x^2}{2!} f''(x_n) + \\frac{\\Delta x^3}{3!} f'''(x_n) + \\mathcal{O}(\\Delta x^4)\\right ) \\\\\n & \\quad + 0 \\cdot f(x_n) \\\\ \n & \\quad - \\frac{1}{2 \\Delta x} \\left ( f(x_n) - \\Delta x f'(x_n) + \\frac{\\Delta x^2}{2!} f''(x_n) - \\frac{\\Delta x^3}{3!} f'''(x_n) + \\mathcal{O}(\\Delta x^4) \\right ) \\\\\n &= f'(x_n) + \\frac{1}{2 \\Delta x} \\left ( \\frac{2 \\Delta x^3}{3!} f'''(x_n) + \\mathcal{O}(\\Delta x^4)\\right )\n\\end{aligned}$$\nso that we find\n$$\n R(x_n) = \\frac{\\Delta x^2}{3!} f'''(x_n) + \\mathcal{O}(\\Delta x^3) = \\mathcal{O}(\\Delta x^2)\n$$\n\n#### Another way...\n\nThere is one more way to derive the second order accurate, first order finite-difference formula. Consider the two first order forward and backward finite-differences averaged together:\n\n$$\\frac{D_1^+(f(x_n)) + D_1^-(f(x_n))}{2} = \\frac{f(x_{n+1}) - f(x_n) + f(x_n) - f(x_{n-1})}{2 \\Delta x} = \\frac{f(x_{n+1}) - f(x_{n-1})}{2 \\Delta x}$$\n\n### Example 4: Higher Order Derivatives\n\nUsing our Taylor series approach lets derive the second order accurate second derivative formula. Again we will use the same points and the Taylor series centered at $x = x_n$ so we end up with the same expression as before:\n\n$$\\begin{aligned}\n f''(x_n) + R(x_n) &= \\quad A \\left ( f(x_n) + \\Delta x f'(x_n) + \\frac{\\Delta x^2}{2!} f''(x_n) + \\frac{\\Delta x^3}{3!} f'''(x_n) + \\frac{\\Delta x^4}{4!} f^{(4)}(x_n) + \\mathcal{O}(\\Delta x^5)\\right ) \\\\\n &+ \\quad B \\cdot f(x_n) \\\\\n &+ \\quad C \\left ( f(x_n) - \\Delta x f'(x_n) + \\frac{\\Delta x^2}{2!} f''(x_n) - \\frac{\\Delta x^3}{3!} f'''(x_n) + \\frac{\\Delta x^4}{4!} f^{(4)}(x_n) + \\mathcal{O}(\\Delta x^5) \\right )\n\\end{aligned}$$\n\nexcept this time we want to leave $f''(x_n)$ on the right hand side. 
\n\nTry out the same trick as before and see if you can setup the equations that need to be solved.\n\nDoing the same trick as before we have the following expressions:\n\n$$\\begin{aligned}\n f(x_n): & & A + B + C &= 0\\\\\n f'(x_n): & & A \\Delta x - C \\Delta x &= 0\\\\\n f''(x_n): & & A \\frac{\\Delta x^2}{2} + C \\frac{\\Delta x^2}{2} &= 1\n\\end{aligned}$$\n\nOr again\n\n$$\\begin{bmatrix}\n1 & 1 & 1 \\\\\n\\Delta x & 0 &-\\Delta x \\\\\n\\frac{\\Delta x^2}{2} & 0 & \\frac{\\Delta x^2}{2} \\\\\n\\end{bmatrix}\n\\begin{bmatrix} A \\\\ B\\\\ C\\\\\\end{bmatrix} =\n\\begin{bmatrix} 0 \\\\ 0\\\\ 1\\\\\\end{bmatrix} \n$$\n\nNote, the Matrix remains, the same, only the right hand side has changed\n\nThe second equation implies $A = C$ which combined with the third implies\n\n$$A = C = \\frac{1}{\\Delta x^2}$$\n\nFinally the first equation gives\n\n$$B = -\\frac{2}{\\Delta x^2}$$\n\nleading to the final expression\n\n$$\\begin{aligned}\n f''(x_n) + R(x_n) &= \\quad \\frac{1}{\\Delta x^2} \\left ( f(x_n) + \\Delta x f'(x_n) + \\frac{\\Delta x^2}{2!} f''(x_n) + \\frac{\\Delta x^3}{3!} f'''(x_n) + \\frac{\\Delta x^4}{4!} f^{(4)}(x_n) + \\mathcal{O}(\\Delta x^5)\\right ) \\\\\n &+ \\quad -\\frac{2}{\\Delta x^2} \\cdot f(x_n) \\\\\n &+ \\quad \\frac{1}{\\Delta x^2} \\left ( f(x_n) - \\Delta x f'(x_n) + \\frac{\\Delta x^2}{2!} f''(x_n) - \\frac{\\Delta x^3}{3!} f'''(x_n) + \\frac{\\Delta x^4}{4!} f^{(4)}(x_n) + \\mathcal{O}(\\Delta x^5) \\right ) \\\\\n &= f''(x_n) + \\frac{1}{\\Delta x^2} \\left(\\frac{2 \\Delta x^4}{4!} f^{(4)}(x_n) + \\mathcal{O}(\\Delta x^5) \\right )\n\\end{aligned}\n$$\nso that\n\n$$\n R(x_n) = \\frac{\\Delta x^2}{12} f^{(4)}(x_n) + \\mathcal{O}(\\Delta x^3)\n$$\n\n\n```python\ndef D2(func, x_min, x_max, N):\n \"\"\" calculate consistent 2nd order second derivative of a function func(x) defined on the interval [x_min,xmax]\n and sampled at N evenly spaced points\"\"\"\n\n x = numpy.linspace(x_min, x_max, N)\n f = func(x)\n dx = x[1] - x[0]\n D2f = numpy.zeros(x.shape) \n D2f[1:-1] = (f[:-2] - 2*f[1:-1] + f[2:])/(dx**2)\n # patch up end points to be 1 sided 2nd derivatives\n D2f[0] = D2f[1]\n D2f[-1] = D2f[-2]\n\n \n return D2f\n```\n\n\n```python\nf = lambda x: numpy.sin(x)\nf_dubl_prime = lambda x: -numpy.sin(x)\n\n# Use uniform discretization\nx = numpy.linspace(-2 * numpy.pi, 2 * numpy.pi, 1000)\nN = 80\nx_hat = numpy.linspace(-2 * numpy.pi, 2 * numpy.pi, N)\ndelta_x = x_hat[1] - x_hat[0]\n\n# Compute derivative\nD2f = D2(f, x_hat[0], x_hat[-1], N)\n```\n\n\n```python\nfig = plt.figure(figsize=(8,6))\naxes = fig.add_subplot(1, 1, 1)\n\naxes.plot(x,f(x),'b',label='$f(x)$')\naxes.plot(x, f_dubl_prime(x), 'k--', label=\"$f'(x)$\")\naxes.plot(x_hat, D2f, 'ro', label='$D_2(f)$')\naxes.set_xlim((x[0], x[-1]))\naxes.set_ylim((-1.1, 1.1))\naxes.legend(loc='best',fontsize=14)\naxes.grid()\naxes.set_title('Discrete Second derivative',fontsize=18)\naxes.set_xlabel('$x$', fontsize=16)\n\nplt.show()\n```\n\n### The general case\n\nIn the general case we can use any $N+1$ points to calculated consistent finite difference coefficients for approximating any derivative of order $k \\leq N$. 
Relaxing the requirement of equal grid spacing (or the expectation that the location where the derivative is evaluated, $\\bar{x}$, is one of the grid points), the Taylor series expansions become\n\n\n$$\\begin{aligned}\n    f^{(k)}(\\bar{x}) + R(\\bar{x}) &= \\quad c_0 \\left ( f(\\bar{x}) + \\Delta x_0 f'(\\bar{x}) + \\frac{\\Delta x_0^2}{2!} f''(\\bar{x}) + \\frac{\\Delta x_0^3}{3!} f'''(\\bar{x}) + \\frac{\\Delta x_0^4}{4!} f^{(4)}(\\bar{x}) + \\mathcal{O}(\\Delta x_0^5)\\right ) \\\\\n    &+ \\quad c_1 \\left ( f(\\bar{x}) + \\Delta x_1 f'(\\bar{x}) + \\frac{\\Delta x_1^2}{2!} f''(\\bar{x}) + \\frac{\\Delta x_1^3}{3!} f'''(\\bar{x}) + \\frac{\\Delta x_1^4}{4!} f^{(4)}(\\bar{x}) + \\mathcal{O}(\\Delta x_1^5)\\right )\\\\\n    &+ \\quad c_2 \\left ( f(\\bar{x}) + \\Delta x_2 f'(\\bar{x}) + \\frac{\\Delta x_2^2}{2!} f''(\\bar{x}) + \\frac{\\Delta x_2^3}{3!} f'''(\\bar{x}) + \\frac{\\Delta x_2^4}{4!} f^{(4)}(\\bar{x}) + \\mathcal{O}(\\Delta x_2^5)\\right ) \\\\\n    &+ \\quad \\vdots\\\\\n    &+ \\quad c_N \\left ( f(\\bar{x}) + \\Delta x_N f'(\\bar{x}) + \\frac{\\Delta x_N^2}{2!} f''(\\bar{x}) + \\frac{\\Delta x_N^3}{3!} f'''(\\bar{x}) + \\frac{\\Delta x_N^4}{4!} f^{(4)}(\\bar{x}) + \\mathcal{O}(\\Delta x_N^5)\\right ) \\\\\n\\end{aligned}$$\nwhere $\\Delta x_i = x_i - \\bar{x}$ is the displacement of each grid point from the point $\\bar{x}$.\n\nEquating terms of equal order reduces the problem to another Vandermonde matrix problem\n$$\\begin{bmatrix}\n1 & 1 & 1 & \\cdots & 1 \\\\\n\\Delta x_0 & \\Delta x_1 & \\Delta x_2 & \\cdots & \\Delta x_N\\\\\n\\frac{\\Delta x_0^2}{2!} & \\frac{\\Delta x_1^2}{2!} & \\frac{\\Delta x_2^2}{2!} &\\cdots & \\frac{\\Delta x_N^2}{2!}\\\\\n & & \\vdots & \\cdots & \\\\\n\\frac{\\Delta x_0^N}{N!} & \\frac{\\Delta x_1^N}{N!} & \\frac{\\Delta x_2^N}{N!} & \\cdots & \\frac{\\Delta x_N^N}{N!}\\\\\n\\end{bmatrix}\n\\begin{bmatrix} c_0 \\\\ c_1\\\\ c_2 \\\\ \\vdots \\\\ c_N\\\\\\end{bmatrix} =\n\\mathbf{b}_k \n$$\n\nwhere $\\mathbf{b}_k$ is a vector of zeros with just a one in the $k$th position for the $k$th derivative.\n\nBy exactly accounting for the first $N+1$ terms of the Taylor series (with $N+1$ equations), we can obtain a consistent approximation to any derivative of order $0 \\leq k \\leq N$. The routine below solves this Vandermonde system for the coefficients:\n\n\n```python\ndef fdcoeffV(k, xbar, x):\n    \"\"\" Compute coefficients of a finite difference approximation to the\n    derivative of order k at xbar, based on grid values at the points in x.\n\n    Requires len(x) > k.\n    Usually the elements x(i) are monotonically increasing\n    and x(1) <= xbar <= x(n), but neither condition is required.\n    The x values need not be equally spaced but must be distinct. 
\n    \n    Modified from http://www.amath.washington.edu/~rjl/fdmbook/ (2007)\n    \"\"\"\n    \n    from scipy.special import factorial\n\n    n = x.shape[0]\n    assert k < n, \" The order of the derivative must be less than the stencil width\"\n\n    # Generate the Vandermonde matrix from the Taylor series\n    A = numpy.ones((n,n))\n    xrow = (x - xbar)  # displacements x-xbar\n    for i in range(1,n):\n        A[i,:] = (xrow**(i))/factorial(i)\n    \n    b = numpy.zeros(n)   # b is right hand side,\n    b[k] = 1             # so k'th derivative term remains\n\n    c = numpy.linalg.solve(A,b)  # solve n by n system for coefficients\n    \n    return c\n\n```\n\n\n```python\nN = 11\nx = numpy.linspace(-2*numpy.pi, 2.*numpy.pi, N)\nk = 2\nscale = (x[1]-x[0])**k\n\nprint(fdcoeffV(k,x[0],x[:3])*scale)\nfor j in range(k,N-1):\n    print(fdcoeffV(k, x[j], x[j-1:j+2])*scale)\nprint(fdcoeffV(k,x[-1],x[-3:])*scale)\n```\n\n### Example: A variably spaced mesh\n\n\n```python\nN = 21\ny = numpy.linspace(-.95, .95,N)\nx = numpy.arctanh(y)\n```\n\n\n```python\nfig = plt.figure(figsize=(8,6))\naxes = fig.add_subplot(1, 1, 1)\naxes.plot(x,numpy.zeros(x.shape),'bo-')\naxes.plot(x,y,'ro-')\naxes.grid()\naxes.set_xlabel('$x$')\naxes.set_ylabel('$y$')\nplt.show()\n```\n\n\n```python\nk=1\nfd = fdcoeffV(k,x[0],x[:3])\nprint('{}, sum={}'.format(fd,fd.sum()))\nfor j in range(1,N-1):\n    fd = fdcoeffV(k, x[j], x[j-1:j+2])\n    print('{}, sum={}'.format(fd,fd.sum()))\nfd = fdcoeffV(k,x[-1],x[-3:])\nprint('{}, sum={}'.format(fd,fd.sum()))\n\n```\n\n### Application to Numerical PDEs\n\nGiven an efficient way to generate finite difference coefficients, these coefficients can be stored in a (usually sparse) matrix $D_k$ such that, given any discrete vector $\\mathbf{f} = f(\\mathbf{x})$, we can calculate the approximate $k$th derivative as simply the matrix-vector product\n\n$$\n    \\mathbf{f}' = D_k\\mathbf{f}\n$$\n\nThis technique will become extremely useful when solving basic finite difference approximations to differential equations (as we will explore in future lectures and homeworks). 
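The sketch below illustrates this idea by assembling a dense first-derivative operator row by row with the `fdcoeffV` routine defined above (a practical implementation would store $D_k$ in a sparse format; the grid size and the $\sin$ test function here are only illustrative choices):

```python
# Sketch: build a first-derivative matrix D such that f' ~ D.dot(f)
npts = 21
xs = numpy.linspace(0.0, 1.0, npts)
k = 1
D = numpy.zeros((npts, npts))
D[0, :3] = fdcoeffV(k, xs[0], xs[:3])                # one-sided stencil at the left edge
for j in range(1, npts - 1):
    D[j, j-1:j+2] = fdcoeffV(k, xs[j], xs[j-1:j+2])  # centered 3-point stencils
D[-1, -3:] = fdcoeffV(k, xs[-1], xs[-3:])            # one-sided stencil at the right edge

f = numpy.sin(2.0 * numpy.pi * xs)
f_prime = D.dot(f)                                   # derivative as a matrix-vector product
print(numpy.max(numpy.abs(f_prime - 2.0 * numpy.pi * numpy.cos(2.0 * numpy.pi * xs))))
```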
\n\n### The Bigger idea\n\nMore generally, using finite differences we can transform a continuous differential operator on a function space\n\n$$\n v = \\frac{d}{dx} u(x)\n$$\nwhich maps a function to a function, to a discrete linear algebraic problem \n\n$$\n \\mathbf{v} = D\\mathbf{u}\n$$\nwhere $\\mathbf{v}, \\mathbf{u}$ are discrete approximations to the continous functions $v,u$ and $D$ is a discrete differential operator (Matrix) which maps a vector to a vector.\n", "meta": {"hexsha": "052055b5d5ba10f777f1f929f6c7b2fb51ec2e78", "size": 48848, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "07_differentiation.ipynb", "max_stars_repo_name": "tzussman/intro-numerical-methods", "max_stars_repo_head_hexsha": "3b1735a088dac6ee15e56436ea118997e69d2af1", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "07_differentiation.ipynb", "max_issues_repo_name": "tzussman/intro-numerical-methods", "max_issues_repo_head_hexsha": "3b1735a088dac6ee15e56436ea118997e69d2af1", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "07_differentiation.ipynb", "max_forks_repo_name": "tzussman/intro-numerical-methods", "max_forks_repo_head_hexsha": "3b1735a088dac6ee15e56436ea118997e69d2af1", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 31.5148387097, "max_line_length": 506, "alphanum_fraction": 0.512057812, "converted": true, "num_tokens": 10855, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8962513814471134, "lm_q2_score": 0.9207896748041439, "lm_q1q2_score": 0.8252590180654522}} {"text": "```python\nfrom sympy import *\nimport matplotlib.pyplot as plt\nfrom sympy.plotting import plot\nfrom sympy.matrices import *\ninit_printing()\n%matplotlib inline\n```\n\n# Problem 1 #\n\nWe have the following system\n\n\\begin{align}\n q_D &= b_D - 2p + r \\\\\n q_S &= b_S + 2p - r \\\\\n r &= 2b_r + 4p - 2q\n\\end{align}\nConvert the system of equations into a linear function of the form $A\\mathbf{x} = \\mathbf{b}$\n\n\\begin{align}\n q + 2p - r &= b_D \\\\\n 1q - 2p + r &= b_S \\\\\n 1q - 2p + \\frac{1}{2}r &= b_r\n\\end{align}\n\nand in matrix terms we have\n\n\\begin{align}\n \\begin{bmatrix}\n 1 & 2 & -1 \\\\\n 1 & -2 & 1 \\\\\n 1 & -2 & \\frac{1}{2}\n \\end{bmatrix}\n \\begin{bmatrix}\n q \\\\\n p \\\\\n r\n \\end{bmatrix} &= \n \\begin{bmatrix}\n b_D \\\\\n b_S \\\\\n b_r\n \\end{bmatrix}\n\\end{align}\n\n### Test Solution Existence and Uniqueness with Determinant\nWe want to test the determinant $\\det A = 0$. If it is, then we may either have no solution to the system, or an infinite number of solutions. If $\\det A \\neq 0$ then we know for every vector $\\mathbf{b}$ there is a unique solution to the system.\n\n\n```python\nA = Matrix([\n [1, 2, -1],\n [1, -2, 1],\n [1, -2, Rational(1,2)]\n ])\n\n# We calculate the determinant using A.det() function\ndetA = A.det()\nA, Eq(Symbol('\\det A'),detA)\n```\n\n### Find A^{-1} and Solve the System\n\nIf $A$ is invertible then $\\mathbf{x} = A^{-1}\\mathbf{b}$. So calculate the inverse and post multiply $\\mathbf{b}$ to find the solution. 
Assume that $\\mathbf{b} = [10,5,1]$.\n\n\n```python\n#Instantiate b vector\nb = Matrix([\n [10],\n [8],\n [2]\n ])\n\n#Calculate the inverse of the A matrix.\nAinv = A.inv()\nAinv, b\n```\n\n\n```python\n#Use the inverse of A to find the solution x\nx = Ainv*b\nx\n```\n\n# Problem 2\n\nConsider the system:\n\n\\begin{align}\n q_D &= \\frac{b_D}{p^2} \\\\\n q_S &= A_S + \\frac{1}{b_S}p^2\n\\end{align}\n\n\n```python\nbD = Symbol('b_D', real=True, positive=True)\nbS = Symbol('b_S', real=True, positive=True)\nAS = Symbol('A_S', real=True, positive=True)\nAD = Symbol('A_D', real=True, positive=True)\np = Symbol('p', real=True, positive=True)\n\ndemand = bD*exp(-p)\nsupply = bS*exp(p) \n\ndemand, supply\n```\n\n\n```python\n#Find the equilibrium price\np_eq = solve(demand-supply,p)[0]\np_eq\n```\n\n\n```python\ngrad = Matrix([p_eq.diff(bD), p_eq.diff(bS)])\ngrad\n```\n\n\n```python\nsl = [(bD,5),(bS,1)]\ngradval = grad.subs(sl)\ngradval\n```\n\n\n```python\ndx = Matrix([Rational(1,9), -Rational(1,7)])\ndp = gradval.dot(dx)\ndp\n```\n\n\n```python\np_eq.subs(sl) + dp\n```\n\n\n```python\nplot(5*exp(-p),1*exp(p),(p,0,2))\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "d94f2ae3b0a06ad628b4b7e47e80c13edc7f51c2", "size": 36144, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "assets/pdfs/math_bootcamp/2016/Final_Exam.ipynb", "max_stars_repo_name": "joepatten/joepatten.github.io", "max_stars_repo_head_hexsha": "4b9acc8720f3a33337368fee719902b54a6f2f68", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "assets/pdfs/math_bootcamp/2016/Final_Exam.ipynb", "max_issues_repo_name": "joepatten/joepatten.github.io", "max_issues_repo_head_hexsha": "4b9acc8720f3a33337368fee719902b54a6f2f68", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 5, "max_issues_repo_issues_event_min_datetime": "2020-08-09T16:28:31.000Z", "max_issues_repo_issues_event_max_datetime": "2020-08-10T14:48:57.000Z", "max_forks_repo_path": "assets/pdfs/math_bootcamp/2016/Final_Exam.ipynb", "max_forks_repo_name": "joepatten/joepatten.github.io", "max_forks_repo_head_hexsha": "4b9acc8720f3a33337368fee719902b54a6f2f68", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 80.8590604027, "max_line_length": 12926, "alphanum_fraction": 0.8050852147, "converted": true, "num_tokens": 900, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9334308073258009, "lm_q2_score": 0.8840392848011834, "lm_q1q2_score": 0.8251895033196922}} {"text": "# Solving regression problems with python\n### First we are gonna to create a synthetic data, suposse we have a historic record of house prices according to their size.\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n\nX = np.linspace(0,20,100) \n# Just create a ficticial relationship and ignore that we know it xD\nY = (-2*X**3 + X**4.5 + 100000*np.random.rand(X.shape[0]))/1000\n\nplt.plot(X,Y,'.')\nplt.xlabel('Size (feet)^2')\nplt.ylabel('Price')\n\nplt.show()\n```\n\n\n \n\n\n# K-Nearest Neighbors\n\n## This is a nonparametric model that predict from the data (There is no an actual training stage in this model, for each new sample the prediction is performed using the training data). 
Therefore if we want to save this model, we need to save all the data.\n\n### How works?\nAs input we receive an Xref matrix that is the training data (size of the houses). The vector Yref is the vector that contains all the labels (prices) of the training data, i.e., for a size X what is the price Y. We also receive a Xtest matrix that are the sample to which we want to predict their response value (price of the house). \n\nFor each new sample we need to compute the distance of this sample to the samples in the Xref (training data).\n## \\begin{equation} d(x_{new}, x) = \\lvert x_{new} - x \\lvert \u00a0\\end{equation}\n\nLet V be the set of the K-nearest neighbors of the sample x. The prediction is computed averaging the labels of the nearest neighbors:\n\n## \\begin{equation} prediction(x_{new}) = \\frac{1}{K} \\sum_{y^{(i)} \\, | \\, x^{(i)} \\in V } y^{(i)}\u00a0\\end{equation}\n\n\n\n```python\ndef KnnRegression(Xref,Xtest,Yref,k=5): \n M = Xtest.shape[0]\n N = Xref.shape[0]\n \n prediction = np.zeros(shape=(M))\n distance = np.zeros(shape=(N))\n \n for i in range(M):\n for j in range(N):\n distance[j] = np.abs(Xtest[i]-Xref[j])\n \n indices = np.argsort(distance)[:k] \n prediction[i] = np.mean(Yref[indices])\n \n return prediction\n```\n\n# Let's create new data in the same range, and plot the prediction of the model in orange.\n\n\n```python\nnewData = np.linspace(0,15,50)\n\nprediction = KnnRegression(X,newData,Y,k=1)\n\nplt.plot(X,Y,'.')\nplt.plot(newData,prediction)\nplt.xlabel('Size (feet)^2')\nplt.ylabel('Price')\n```\n\n# How to select the best k for our data?\n\n## It's validation time!\n\n## Scikit-Learn offers a function that split our data in the number of folds that we want\n\n\n```python\nfrom sklearn.model_selection import KFold\n```\n\n\n```python\nnfolds = 2\nkf = KFold(n_splits=nfolds)\n\nfor train_index, test_index in kf.split(X):\n print(\"TRAIN:\", train_index, \"TEST:\", test_index)\n X_train, X_test = X[train_index], X[test_index]\n y_train, y_test = Y[train_index], Y[test_index]\n```\n\n TRAIN: [50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73\n 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97\n 98 99] TEST: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23\n 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47\n 48 49]\n TRAIN: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23\n 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47\n 48 49] TEST: [50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73\n 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97\n 98 99]\n\n\n# In the literature, researchers usually uses 10 folds to test their data\n\n## First, we need to divide our data in 20% for test and 80% for validation\n\n\n```python\nM = X.shape[0]\n\nindices = np.random.permutation(M)\n\nXval = X[indices[:int(0.8*M)]]\nYval = Y[indices[:int(0.8*M)]]\n\nXtest = X[indices[int(0.8*M):]]\nYtest = Y[indices[int(0.8*M):]]\n```\n\n\n```python\nprint(Xval.shape)\nprint(Xtest.shape)\n```\n\n (80,)\n (20,)\n\n\n\n```python\nfrom sklearn.metrics import mean_absolute_error\nfrom sklearn.metrics import mean_squared_error\nfrom sklearn.metrics import r2_score\n\n\nnfolds = 10\nkf = KFold(n_splits=nfolds)\n\n\nK_values = np.arange(1,100,5)\n\nResults_R_score = np.zeros((K_values.shape[0],10))\nResults_MAE = np.zeros((K_values.shape[0],10))\nResults_MSE = np.zeros((K_values.shape[0],10))\n\nfor index, k in enumerate(K_values):\n 
fold = 0\n for train_index, test_index in kf.split(Xval):\n \n X_train, X_test = Xval[train_index], Xval[test_index]\n y_train, y_test = Yval[train_index], Yval[test_index]\n\n prediction = KnnRegression(X_train,X_test,y_train,k)\n\n Results_R_score[index,fold] = r2_score(y_test, prediction)\n Results_MAE[index,fold] = mean_absolute_error(y_test, prediction)\n Results_MSE[index,fold] = mean_squared_error(y_test, prediction)\n \n fold = fold + 1\n```\n\n\n```python\nnp.mean(Results_MAE, axis = 0)\n```\n\n\n\n\n array([ 65.88308948, 71.87376378, 88.00741415, 115.71559878,\n 139.99914835, 156.08538099, 84.65725362, 107.32497102,\n 102.21782728, 63.42642517])\n\n\n\n\n```python\nBestsolution = np.argmin(np.mean(Results_MAE, axis = 0))\nBestsolution\n```\n\n\n\n\n 9\n\n\n\n\n```python\nK_values[Bestsolution]\n```\n\n\n\n\n 46\n\n\n\n\n```python\nprediction = KnnRegression(Xval,Xtest,Yval,k=6)\n\nplt.plot(Xtest,Ytest,'.')\nplt.plot(Xtest,prediction,'x')\nplt.xlabel('Size (feet)^2')\nplt.ylabel('Price')\n```\n\n\n```python\nnp.mean(Results_MAE,axis = 0)\n```\n\n\n\n\n array([ 65.88308948, 71.87376378, 88.00741415, 115.71559878,\n 139.99914835, 156.08538099, 84.65725362, 107.32497102,\n 102.21782728, 63.42642517])\n\n\n\n\n```python\n\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "477105c6a5e2089890e26e24e400c88ebe980a29", "size": 40686, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Regression-Python-Medellin/Regression problems with Python.ipynb", "max_stars_repo_name": "williamegomez/Python-Talks", "max_stars_repo_head_hexsha": "138949fc6c75b516d0bf74b5e5fbe75d92f129d3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2018-11-08T17:03:01.000Z", "max_stars_repo_stars_event_max_datetime": "2018-11-28T05:56:10.000Z", "max_issues_repo_path": "Regression-Python-Medellin/Regression problems with Python.ipynb", "max_issues_repo_name": "williamegomez/Python-Talks", "max_issues_repo_head_hexsha": "138949fc6c75b516d0bf74b5e5fbe75d92f129d3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Regression-Python-Medellin/Regression problems with Python.ipynb", "max_forks_repo_name": "williamegomez/Python-Talks", "max_forks_repo_head_hexsha": "138949fc6c75b516d0bf74b5e5fbe75d92f129d3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 95.9575471698, "max_line_length": 19408, "alphanum_fraction": 0.8494076587, "converted": true, "num_tokens": 1887, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.90192067652954, "lm_q2_score": 0.9149009549929797, "lm_q1q2_score": 0.8251680882847905}} {"text": "# 2. 
Algebrai m\u0171veletek\n\n\n```python\nimport math\nimport numpy as np\nimport sympy as sp\nsp.init_printing()\n```\n\n\n```python\nF, m, a, b, c, x = sp.symbols(\"F m a b c x\")\n```\n\n\n```python\nF=m*a\n```\n\n\n```python\nF.subs([(a,7)])\n```\n\n\n```python\nF.subs([(a,7),(m,1.1)])\n```\n\n\n```python\n((a+b)**3).expand()\n```\n\n\n```python\n((a+b)**7 - (b+2*a)**3).expand()\n```\n\n\n```python\n(a**2+b**2+2*a*b).factor()\n```\n\n\n```python\nsp.factor(a**2+b**2+2*a*b)\n```\n\n\n```python\nsp.factor(b**3 + 3*a*b**2 + 3*a**2*b + a**3)\n```\n\n\n```python\na/b+c/b+7/b\n```\n\n\n```python\nsp.ratsimp(a/b+c/b+7/b)\n```\n\n\n```python\n(a/b+c/b+7/b).ratsimp()\n```\n\n\n```python\n(sp.sin(x)**2 + sp.cos(x)**2).simplify()\n```\n\n\n```python\n(sp.cos(2*x)).expand()\n```\n\n\n```python\nsp.expand_trig(sp.cos(2*x))\n```\n\n\n```python\nimport scipy.constants\n```\n\n\n```python\nscipy.constants.golden\n```\n\n\n```python\nmath.sqrt(-1+0j)\n```\n\n\n```python\nnp.sqrt(-1+0j)\n```\n\n\n\n\n 1j\n\n\n\n\n```python\nsp.limit(sp.sin(x)/x,x,0)\n```\n\n\nTaylor-sor megad\u00e1sa. Els\u0151 param\u00e9ter a f\u00fcggv\u00e9ny, m\u00e1sodik a v\u00e1ltoz\u00f3, harmadik az \u00e9rt\u00e9k ami k\u00f6r\u00fcl akarjuk a sort kifejteni, negyedik pedig a foksz\u00e1m:\n\n$$f\\left(x\\right) \\approx \\sum\\limits_{i=0}^{N} \\dfrac{\\left(x - x_0\\right)^i}{i!} \\left.\\dfrac{\\mathrm{d}^i f}{\\mathrm{d} x^i}\\right|_{x = x_0}$$\n\n\n```python\nsp.series(sp.sin(x),x,0,20)\n```\n\n\n```python\nlista = np.array([2,3,64,89,1,4,9,0,1])\n```\n\n\n```python\nlista.sort()\nlista\n```\n\n\n\n\n array([ 0, 1, 1, 2, 3, 4, 9, 64, 89])\n\n\n", "meta": {"hexsha": "4434e0454377491104f53895a6ad94ca57dfdbb6", "size": 31259, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "01/01_02_algebrai_muveletek.ipynb", "max_stars_repo_name": "TamasPoloskei/BME-VEMA", "max_stars_repo_head_hexsha": "542725bf78e9ad0962018c1cf9ff40c860f8e1f0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "01/01_02_algebrai_muveletek.ipynb", "max_issues_repo_name": "TamasPoloskei/BME-VEMA", "max_issues_repo_head_hexsha": "542725bf78e9ad0962018c1cf9ff40c860f8e1f0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2018-11-20T14:17:52.000Z", "max_issues_repo_issues_event_max_datetime": "2018-11-20T14:17:52.000Z", "max_forks_repo_path": "01/01_02_algebrai_muveletek.ipynb", "max_forks_repo_name": "TamasPoloskei/BME-VEMA", "max_forks_repo_head_hexsha": "542725bf78e9ad0962018c1cf9ff40c860f8e1f0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 54.8403508772, "max_line_length": 6098, "alphanum_fraction": 0.7556223808, "converted": true, "num_tokens": 593, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9149009457116781, "lm_q2_score": 0.9019206705978501, "lm_q1q2_score": 0.825168074486884}} {"text": "# Local search\n\nIn unconstrained optimization, we wish to solve problems of the form\n\\begin{align}\n\\text{minimize} & & E(w) \n\\end{align}\n\n* The local search algorithms have the form :\n\\begin{align}\nw_0 & = \\text{some initial value} \\\\\n\\text{for}\\;\\; & \\tau = 1, 2,\\dots \\\\\n& w_\\tau = w_{\\tau-1} + g_\\tau\n\\end{align}\n\nHere, $g_\\tau$ is a search direction. 
The loop is executed until a convergence condition is satisfied or \nthe maximum number of iterations is reached. The algorithm iteratively search for solutions that achieve a lower objective value by moving in the search direction.\n\n# Gradient Descent \n* Gradient descent is a popular local search method with the search direction chosen as the negative gradient direction:\n\\begin{align}\ng_\\tau & = - \\eta \\nabla E(w_{\\tau-1})\n\\end{align}\n\n* When the gradient vanishes, i.e., $E(w) = 0$, the algorithm does not make any progress. Such points are also called fixed points. \n\n* The iterates, under certain conditions, converge to the minimum $w^* = \\arg\\min_{w} E(w)$. A natural question here finding the conditions for guaranteed convergence to a fixed point and the rate -- how fast convergence happens as a function of iterations\n\n* The parameter $\\eta$ is called the *learning rate*, to be chosen depending on the problem. If the learning rate is not properly chosen, the algorithm can (and will) diverge.\n\n* There is a well developed theory on how to choose $\\eta$ adaptively to speed up covergence.\n\n* Even for minimizing quadratic objective functions, or equivalently for solving linear systems, gradient descent can have a quite poor converge properties: it takes a lot of iterations to find the minimum. However, it is applicable as a practical method in many problems as it requires only the calculation of the gradient.\n\n* For maximization problems\n\\begin{align}\n\\text{maximize} & & E(w) \n\\end{align}\nwe just move in the direction of the gradient so the search direction is $g_\\tau = \\eta \\nabla E(w_{\\tau-1})$\n\n\n\n```python\n%matplotlib inline\nimport numpy as np\nimport matplotlib as mpl\nimport matplotlib.pylab as plt\n\nfrom ipywidgets import interact, interactive, fixed\nimport ipywidgets as widgets\nfrom IPython.display import clear_output, display, HTML\nfrom matplotlib import rc\n\nmpl.rc('font',**{'size': 20, 'family':'sans-serif','sans-serif':['Helvetica']})\nmpl.rc('text', usetex=True)\n\nimport time\nimport numpy as np\n\ny = np.array([7.04, 7.95, 7.58, 7.81, 8.33, 7.96, 8.24, 8.26, 7.84, 6.82, 5.68])\nx = np.array(np.linspace(-1,1,11))\nN = len(x)\n\n# Design matrix\n#A = np.vstack((np.ones(N), x, x**2, x**3)).T\ndegree = 9\nA = np.hstack([np.power(x.reshape(N,1),i) for i in range(degree+1)])\n\n# Learning rate\neta = 0.001\n \n# initial parameters\nw = np.array(np.random.randn(degree+1))\nW = []\nErr = []\nfor epoch in range(50000): \n # Error\n err = y-A.dot(w)\n \n # Total error\n E = np.sum(err**2)/N\n \n # Gradient\n dE = -2.*A.T.dot(err)/N\n \n if epoch%100 == 0: \n #print(epoch,':',E)\n # print(w) \n W.append(w)\n Err.append(E)\n\n # Perfom one descent step\n w = w - eta*dE\n```\n\nThe following cell demonstrates interactively the progress of plain gradient descent \nand how its solution differs from the optimum found by solving the corresponding least squares problem.\n\n\n\n\n```python\n\nfig = plt.figure(figsize=(5,5))\n\nleft = -1.5\nright = 1.5\nxx = np.linspace(left,right,50)\nAA = np.hstack((np.power(xx.reshape(len(xx),1),i) for i in range(degree+1)))\n\n# Find best\nA_orth, R = np.linalg.qr(A)\nw_orth, res, rank, s = np.linalg.lstsq(A_orth, y)\nw_star = np.linalg.solve(R, w_orth)\nyy = AA.dot(w_star)\n\n#ax.set_xlim((2,15))\n\n#dots = plt.Line2D(x,y, linestyle='', markerfacecolor='b',marker='o', alpha=0.5, markersize=5)\n#ax.add_line(dots)\nplt.plot(x,y, linestyle='', markerfacecolor='b',marker='o', alpha=0.5, markersize=5)\nplt.plot(xx, 
yy, linestyle=':', color='k', alpha=0.3)\nln = plt.Line2D(xdata=[], ydata=[], linestyle='-',linewidth=2)\n\n\nax = fig.gca()\nax.add_line(ln)\nplt.close(fig)\n\nax.set_xlim((left,right))\nax.set_ylim((5,9))\n\ndef plot_gd(iteration=0):\n w = W[iteration]\n f = AA.dot(w)\n #print(w)\n ln.set_ydata(f)\n ln.set_xdata(xx)\n \n ax.set_title('$E = '+str(Err[iteration])+'$')\n \n display(fig)\n \nres = interact(plot_gd, iteration=(0,len(W)-1))\n\n```\n\n\n
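If the interactive widget above does not render, a static summary can be produced from the variables already defined (`A`, `y`, `N`, `Err`, and `w_star`). This is only a sketch of one way to visualize the run:

```python
# Compare the recorded gradient descent error with the least-squares optimum
err_star = y - A.dot(w_star)
E_star = np.sum(err_star**2) / N

plt.figure(figsize=(5, 4))
plt.semilogy(Err, label='gradient descent')
plt.axhline(E_star, color='k', linestyle=':', label='least-squares optimum')
plt.xlabel('iteration / 100')       # Err was recorded every 100 epochs
plt.ylabel('mean squared error')
plt.legend(loc='best')
plt.show()
```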


    \n\n\n\n\n```python\n%matplotlib inline\n\nimport scipy as sc\nimport numpy as np\nimport pandas as pd\nimport matplotlib as mpl\nimport matplotlib.pylab as plt\n\ndf_arac = pd.read_csv(u'data/arac.csv',sep=';')\n#df_arac[['Year','Car']]\n\nBaseYear = 1995\nx = np.matrix(df_arac.Year[0:]).T-BaseYear\ny = np.matrix(df_arac.Car[0:]).T/1000000.\n\nplt.plot(x+BaseYear, y, 'o-')\nplt.xlabel('Yil')\nplt.ylabel('Araba (Millions)')\n\nplt.show()\n```\n\n\n```python\nfrom itertools import product\n\ndef Error_Surface(y, A, left=0, right=1, bottom=0, top=1, step=0.1): \n W0 = np.arange(left,right, step)\n W1 = np.arange(bottom,top, step)\n\n ErrSurf = np.zeros((len(W1),len(W0)))\n\n for i,j in product(range(len(W1)), range(len(W0))):\n e = y - A*np.matrix([W0[j], W1[i]]).T\n ErrSurf[i,j] = e.T*e/2\n \n return ErrSurf\n```\n\n\n```python\nBaseYear = 1995\nx = np.matrix(df_arac.Year[0:]).T-BaseYear\ny = np.matrix(df_arac.Car[0:]).T/1000000.\n\n# Setup the vandermonde matrix\nN = len(x)\nA = np.hstack((np.ones((N,1)), x))\n\n\n\nleft = -5\nright = 15\nbottom = -4\ntop = 6\nstep = 0.05\nErrSurf = Error_Surface(y, A, left=left, right=right, top=top, bottom=bottom)\n\nplt.figure(figsize=(10,10))\n#plt.imshow(ErrSurf, interpolation='nearest', \n# vmin=0, vmax=10000,origin='lower',\n# extent=(left,right,bottom,top), cmap='jet')\n\nplt.contour(ErrSurf, \n vmin=0, vmax=10000,origin='lower', levels=np.linspace(100,5000,10),\n extent=(left,right,bottom,top), cmap='jet')\n\nplt.xlabel('$w_0$')\nplt.ylabel('$w_1$')\nplt.title('Error Surface')\n#plt.colorbar(orientation='horizontal')\nplt.show()\n```\n\n### Animation of Gradient descent\n\n\n```python\n%matplotlib inline\nimport matplotlib.pylab as plt\n\nimport time\nfrom IPython import display\nimport numpy as np\n\n# Setup the Design matrix\nN = len(x)\nA = np.hstack((np.ones((N,1)), x))\n\n# Starting point\nw = np.matrix('[15; -6]')\n\n# Number of iterations\nEPOCH = 200\n\n# Learning rate: The following is the largest possible fixed rate for this problem\n#eta = 0.0001696\neta = 0.0001696\n\nfig = plt.figure()\nax = fig.gca()\n\nplt.plot(x+BaseYear, y, 'o-')\n\nplt.xlabel('x')\nplt.ylabel('y')\n\nf = A.dot(w)\nln = plt.Line2D(xdata=x+BaseYear, ydata=f, linestyle='-',linewidth=2,color='red')\nax.add_line(ln)\n\nfor epoch in range(EPOCH):\n f = A.dot(w)\n err = y-f\n \n ln.set_xdata(x)\n ln.set_ydata(f)\n \n E = np.sum(err.T*err)/2\n dE = -A.T.dot(err)\n \n# if epoch%1 == 0: \n# print(epoch,':',E)\n # print(w) \n \n w = w - eta*dE\n\n ax.set_title(E)\n display.clear_output(wait=True)\n display.display(plt.gcf())\n time.sleep(0.1)\n```\n\n\n```python\n# An implementation of Gradient Descent for solving linear a system\n\n# Setup the Design matrix\nN = len(x)\nA = np.hstack((np.ones((N,1)), x))\n\n# Starting point\nw = np.matrix('[15; -6]')\n\n# Number of iterations\nEPOCH = 5000\n\n# Learning rate: The following is the largest possible fixed rate for this problem\n#eta = 0.0001696\neta = 0.0001696\n\nError = np.zeros((EPOCH))\nW = np.zeros((2,EPOCH))\n\nfor tau in range(EPOCH):\n # Calculate the error\n e = y - A*w \n \n # Store the intermediate results\n W[0,tau] = w[0]\n W[1,tau] = w[1]\n Error[tau] = (e.T*e)/2\n \n # Compute the gradient descent step\n g = -A.T*e\n w = w - eta*g\n #print(w.T)\n \nw_star = w \nplt.figure(figsize=(8,8))\nplt.imshow(ErrSurf, interpolation='nearest', \n vmin=0, vmax=1000,origin='lower',\n extent=(left,right,bottom,top))\nplt.xlabel('w0')\nplt.ylabel('w1')\n\nln = plt.Line2D(W[0,:300:1], W[1,:300:1], 
marker='o',markerfacecolor='w')\nplt.gca().add_line(ln)\n\nln = plt.Line2D(w_star[0], w_star[1], marker='x',markerfacecolor='w')\nplt.gca().add_line(ln)\nplt.show()\n\nplt.figure(figsize=(8,3))\nplt.semilogy(Error)\nplt.xlabel('Iteration tau')\nplt.ylabel('Error')\nplt.show()\n\n```\n\n* The illustration shows the convergence of GD with learning rate near the limit where the convergence is oscillatory.\n\n* $\\eta$, Learning rate is a parameter of the algorithm\n\n* $w$, the variable are the parameters of the Model \n\n* $y$: Targets\n\n* $x$: Inputs, \n\n# Accelerating Gradient descent\n\n## Momentum methods, a.k.a., heavy ball\n\n\\begin{align}\np(\\tau) & = \\nabla E(w(\\tau-1)) + \\beta p(\\tau-1) \\\\\nw(\\tau) & = w(\\tau-1) - \\alpha p(\\tau) \n\\end{align}\n\nWhen $\\beta=0$, we recover gradient descent.\n\n\n\n```python\n%matplotlib inline\nimport matplotlib.pylab as plt\n\nfrom notes_utilities import pnorm_ball_line\n\n\nimport time\nfrom IPython import display\nimport numpy as np\n\n\n#y = np.array([7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73])\ny = np.array([8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68])\n#y = np.array([9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74])\nx = np.array([10., 8., 13., 9., 11., 14., 6., 4., 12., 7., 5.])\nN = len(x)\n\n# Design matrix\nA = np.vstack((np.ones(N), x)).T\n\nw_best, E, rank, s = np.linalg.lstsq(A, y)\nerr = y-A.dot(w_best)\nE_min = np.sum(err**2)/N\n\n\ndef inspect_momentum(alpha = 0.005, beta = 0.97):\n ln = pnorm_ball_line(mu=w_best, A=np.linalg.cholesky(np.linalg.inv(A.T.dot(A))),linewidth=1)\n ln2 = pnorm_ball_line(mu=w_best, A=4*np.linalg.cholesky(np.linalg.inv(A.T.dot(A))),linewidth=1)\n\n # initial parameters\n w0 = np.array([2., 1.])\n w = w0.copy()\n p = np.zeros(2)\n\n EPOCHS = 100\n W = np.zeros((2,EPOCHS))\n for epoch in range(EPOCHS):\n # Error\n err = y-A.dot(w)\n W[:,epoch] = w \n # Mean square error\n E = np.sum(err**2)/N\n\n # Gradient\n dE = -2.*A.T.dot(err)/N\n p = dE + beta*p\n\n# if epoch%10 == 1: \n# print(epoch,':',E)\n # print(w) \n\n # Perfom one descent step\n w = w - alpha*p\n\n\n# print(E_min)\n\n plt.plot(W[0,:],W[1,:],'.-b')\n plt.plot(w_best[0],w_best[1],'ro')\n plt.plot(w0[0],w0[1],'ko')\n plt.xlim((1.8,4.3))\n plt.ylim((0,1.2))\n plt.title('$\\\\alpha = $'+str(alpha)+' $\\\\beta = $'+str(beta))\n plt.gca().add_line(ln)\n plt.gca().add_line(ln2)\n plt.show()\n \ninspect_momentum(alpha=0.0014088, beta=0.95)\n\n\n```\n\n\n```python\n%matplotlib inline\nfrom __future__ import print_function\nfrom ipywidgets import interact, interactive, fixed\nimport ipywidgets as widgets\nimport matplotlib.pylab as plt\nfrom IPython.display import clear_output, display, HTML\n\ninteract(inspect_momentum, alpha=(0, 0.02, 0.001), beta=(0, 0.99, 0.001))\n```\n\n\n
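A non-interactive way to see the effect of the momentum term is to record the error at every iteration for two values of $\beta$ at the same learning rate. The sketch below reuses `A`, `y`, and `N` from the cell above; the particular values $\alpha = 0.005$ and $\beta = 0.9$ are only illustrative:

```python
# Compare plain gradient descent (beta = 0) with the heavy-ball update (beta = 0.9)
def run_heavy_ball(alpha, beta, epochs=100):
    w = np.array([2., 1.])
    p = np.zeros(2)
    errors = []
    for _ in range(epochs):
        err = y - A.dot(w)
        errors.append(np.sum(err**2) / N)
        dE = -2. * A.T.dot(err) / N
        p = dE + beta * p
        w = w - alpha * p
    return errors

plt.figure(figsize=(6, 4))
plt.semilogy(run_heavy_ball(0.005, 0.0), label=r'$\beta = 0$ (plain GD)')
plt.semilogy(run_heavy_ball(0.005, 0.9), label=r'$\beta = 0.9$ (momentum)')
plt.xlabel('iteration')
plt.ylabel('mean squared error')
plt.legend(loc='best')
plt.show()
```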


    \n\n\n\n\n\n\n \n\n\n\n# Advanced Material\n\nhttps://distill.pub/2017/momentum/\n\nhttp://blog.mrtz.org/2013/09/07/the-zen-of-gradient-descent.html\n\nA great talk by Ben Recht:\nhttps://simons.berkeley.edu/talks/ben-recht-2013-09-04\n\nBackpropagation\nhttp://www.offconvex.org/2016/12/20/backprop/\n\n## Analysis of convergence of Gradient descent for a quadratic function\n\nRecall that the error function we minimize is\n$$\nE(w) = \\frac{1}{2} (y-Aw)^T(y-Aw) = \\frac{1}{2}(y^\\top y - 2 y^\\top A w + w^\\top A^\\top A w)\n$$\n\nThe gradient at the point $w$ will be denoted as $\\nabla E(w) = g(w)$ where\n$$g(w) = -A^\\top (y - Aw) = A^\\top A w - A^\\top y$$\n\nMoreover, the gradient at the minimum will vanish:\n$$\ng(w_\\star) = 0\n$$\nIndeed, we can solve \n$$0 = A^\\top (Aw_\\star - y)$$\nas\n$$w_\\star = (A^\\top A)^{-1}A^\\top y $$\nbut this is not our point.\n\nFor a constant learning rate $\\eta$, gradient descent executes the following iteration\n$$\nw_t = w_{t-1} - \\eta g(w_{t-1}) = w_{t-1} - \\eta A^\\top (Aw_{t-1} - y)\n$$\n\n$$\nw_t = (I - \\eta A^\\top A) w_{t-1} + \\eta A^\\top y\n$$\n\nThis is a fixed point equation of form\n$$\nw_t = T(w_{t-1}) \n$$\nwhere $T$ is an affine transformation. \n\nWe will assume that $T$ is a contraction, i.e. for any two different\nparameters $w$ and $w'$ in the domain we have\n$$\n\\| T(w) - T(w') \\| \\leq L_\\eta \\|w-w' \\|\n$$\n\nwhere $L_\\eta < 1$, then the distance shrinks. Hence the mapping converges to a fixed point (this is a consequence of a deeper result in analysis called the Brouwer fixed-point theorem (https://en.0wikipedia.org/wiki/Brouwer_fixed-point_theorem))\n\nWe will consider in particular the distance between the optimum and the current point $w(t)$\n\n$$\n\\| T(w_t) - T(w_\\star) \\| \\leq L_\\eta \\|w_t - w_\\star \\|\n$$\nBut we have \n$T(w_\\star) = w_\\star$ and $w_t = T(w_{t-1})$ so\n$\\|w_t - w_\\star \\| = \\|T(w_{t-1}) - T(w_\\star) \\|$. \n\n\\begin{align}\n\\| T(w_t) - T(w_\\star) \\| & \\leq L_\\eta \\|T(w_{t-1}) - T(w_\\star) \\| \\\\\n& \\leq L^2_\\eta \\|T(w_{t-2}) - T(w_\\star) \\| \\\\\n\\vdots \\\\\n& \\leq L^{t+1}_\\eta \\| w_{0} - w_\\star \\| \n\\end{align}\n\n\n$$\nT(w) = (I - \\eta A^\\top A) w + \\eta A^\\top y\n$$\n\n$$\nT(w_\\star) = (I - \\eta A^\\top A) w_\\star + \\eta A^\\top y\n$$\n\n$$\n\\| T(w) - T(w') \\| = \\| (I - \\eta A^\\top A) (w-w') \\| \\leq \\| I - \\eta A^\\top A \\| \\| w-w' \\|\n$$\n\nWhen the norm of the matrix $\\| I - \\eta A^\\top A \\| < 1$ we have convergence. 
Here we take the operator norm, i.e., the magnitude of the largest eigenvalue.\n\nBelow, we plot the absolute value of the maximum eigenvalues of $I - \\eta A^\\top A$ as a function of $\\eta$.\n\n\n```python\n\nleft = 0.0000\nright = 0.015\n\nN = 1000\nETA = np.linspace(left,right,N)\n\ndef compute_largest_eig(ETA, A):\n \n LAM = np.zeros(N)\n D = A.shape[1]\n n = A.shape[0]\n \n for i,eta in enumerate(ETA):\n #print(eta)\n lam,v = np.linalg.eig(np.eye(D) - 2*eta*A.T.dot(A)/n)\n LAM[i] = np.max(np.abs(lam))\n \n return LAM\n\n# This number is L_\\eta\nLAM = compute_largest_eig(ETA, A)\n\nplt.plot(ETA, LAM)\n#plt.plot(ETA, np.ones((N,1)))\n#plt.gca().set_ylim([0.98, 1.02])\nplt.ylim([0.997,1.01])\nplt.xlabel('eta')\nplt.ylabel('absolute value of the largest eigenvalue')\nplt.show()\n\n```\n\n\n```python\nplt.semilogy(ETA,LAM)\nplt.ylim([0.997,1])\nplt.show()\n```\n\nIf $E$ is twice differentiable, contractivity means that $E$ is convex.\n\nFor $t>0$\n\\begin{align}\n\\|T(x + t \\Delta x) - T(x) \\| & \\leq \\rho \\|t \\Delta x\\| \\\\\n\\frac{1}{t} \\|T(x + t \\Delta x) - T(x) \\| &\\leq \\rho \\|\\Delta x\\| \n\\end{align}\n\nIf we can show that $\\rho< 1$, then $T$ is a contraction.\n\nBy definitions\n$$\nT(x) = x - \\alpha \\nabla E(x)\n$$\n\n$$\nT(x + t \\Delta x) = x + t \\Delta x - \\alpha \\nabla E(x + t \\Delta x)\n$$\n\n\\begin{align}\n\\frac{1}{t} \\|T(x + t \\Delta x) - T(x) \\| & = \\frac{1}{t} \\|x + t \\Delta x - \\alpha \\nabla E(x + t \\Delta x) - x + \\alpha \\nabla E(x) \\| \\\\\n& = \\| \\Delta x - \\frac{\\alpha}{t} (\\nabla E(x + t \\Delta x) - \\nabla E(x) ) \\| \\\\\n\\end{align}\n\nAs this relation holds for all $t$, we take the limit when $t\\rightarrow 0^+$\n\n\\begin{align}\n\\| \\Delta x - \\alpha \\nabla^2 E(x) \\Delta x \\| & = \\| (I - \\alpha \\nabla^2 E(x)) \\Delta x \\| \\\\\n& \\leq \\| I - \\alpha \\nabla^2 E(x) \\| \\| \\Delta x \\| \n\\end{align}\n\nIf we can choose $\\alpha$ for all $\\xi$ in the domain such that \n$$\n\\| I - \\alpha \\nabla^2 E(\\xi) \\| \\leq \\rho < 1\n$$\nis satisfied, we have a sufficient condition for a contraction.\n\nLemma:\n\nAssume that for $0 \\leq \\rho < 1$, $\\alpha> 0$ and $U(\\xi)$ is a symmetric matrix valued function for all $\\xi \\in \\mathcal{D}$ and we have \n$$\n\\| I - \\alpha U(\\xi) \\| \\leq \\rho \n$$\nthen $U = U(\\xi)$ is positive semidefinite with $$\\frac{1 - \\rho}{\\alpha} I \\preceq U $$ for every $\\xi$.\n\nProof:\n\n$$\n\\|I - \\alpha U \\| = \\sup_{x\\neq 0} \\frac{x^\\top(I - \\alpha U )x }{x^\\top x} \\leq \\rho\n$$\n\n$$\nx^\\top(I - \\alpha U )x \\leq \\rho x^\\top x\n$$\n\n$$\n(1- \\rho) x^\\top x \\leq \\alpha x^\\top U x\n$$\n\nThis implies that for all $x$ we have\n$$\n0 \\leq x^\\top (U - \\frac{1 - \\rho}{\\alpha} I) x\n$$\nIn other words, the matrix $U - \\frac{1 - \\rho}{\\alpha} I$ is positive semidefinite, or:\n\n$$\n\\frac{1 - \\rho}{\\alpha} I \\preceq U\n$$\nWe now see that $\\rho<1$ we have the guarantee that $U$ is positive semidefinite.\n\n$$\nT(x) = M x + b\n$$\n\n$$\n\\|T(x) - T(x_\\star) \\| = \\|Mx + b - M x_\\star + b \\| = \\| M(x-x_\\star) \\|\n$$\n\nBy Schwarz inequality\n\n$$\n\\|T(x) - T(x_\\star) \\| \\leq \\|M\\| \\|x-x_\\star\\|\n$$\nIf $\\|M\\| < 1$, we have a contraction. Assume the existence of a fixed point $x_\\star$ such that $x_\\star = T(x_\\star)$. 
(Does a fixed point always exist for a contraction?)\n\n\n```python\n# Try to fit with GD to the original data\nBaseYear2 = 0\nx2 = np.matrix(df_arac.Year[31:]).T-BaseYear2\n\n# Setup the vandermonde matrix\nN = len(x2)\nA = np.hstack((np.ones((N,1)), x2))\n\nleft = -8\nright = -7.55\nN = 100\nETA = np.logspace(left,right,N)\n\nLAM = compute_largest_eig(ETA, A)\n\nplt.plot(ETA, LAM)\nplt.plot(ETA, np.ones((N,1)))\nplt.gca().set_ylim([0.98, 1.02])\nplt.xlabel('eta')\nplt.ylabel('absolute value of the largest eigenvalue')\nplt.show()\n\n\n```\n\nAnalysis of Momentum\n\n\\begin{align}\np(\\tau) & = \\nabla E(w(\\tau-1)) + \\beta p(\\tau-1) \\\\\nw(\\tau) & = w(\\tau-1) - \\alpha p(\\tau) \\\\\nw(\\tau-1) & = w(\\tau-2) - \\alpha p(\\tau-1) \\\\\n\\end{align}\n\n\\begin{align}\n\\left(\\begin{array}{c}\nw(\\tau) \\\\\nw(\\tau-1)\n\\end{array}\n\\right)\n& = &\\left(\\begin{array}{cc}\n\\cdot & \\cdot \\\\\n\\cdot & \\cdot\n\\end{array}\n\\right)\n\\left(\\begin{array}{c}\nw(\\tau-1) \\\\\nw(\\tau-2)\n\\end{array}\n\\right)\n\\end{align}\n\n\n\\begin{align}\n\\frac{1}{\\alpha}(w(\\tau-1) - w(\\tau)) & = p(\\tau) = \\nabla E(w(\\tau-1)) + \\beta \\frac{1}{\\alpha}(w(\\tau-2) - w(\\tau-1)) = \\\\\n\\frac{1}{\\alpha}(w(\\tau-2) - w(\\tau-1)) & = p(\\tau-1) \\\\\n\\end{align}\n\n\\begin{align}\n\\frac{1}{\\alpha}(w(\\tau-1) - w(\\tau)) & = \\nabla E(w(\\tau-1)) + \\beta \\frac{1}{\\alpha}(w(\\tau-2) - w(\\tau-1)) \\\\\n w(\\tau) & = -\\alpha \\nabla E(w(\\tau-1)) - \\beta w(\\tau-2) + (\\beta+1) w(\\tau-1)\n\\end{align}\n\n\n\n* Note that GD is sensetive to scaling of data\n* For example, if we would not have shifted the $x$ axis our original data, GD might not have worked. The maximum eigenvalue is very close to $1$ for all $\\eta$ upto numerical precision\n", "meta": {"hexsha": "948cf228896017d252b8b9fb6f14f79016e2737b", "size": 213522, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "GradientDescent.ipynb", "max_stars_repo_name": "bkoyuncu/notes", "max_stars_repo_head_hexsha": "0e660f46b7d17fdfddc2cad1bb60dcf847f5d1e4", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "GradientDescent.ipynb", "max_issues_repo_name": "bkoyuncu/notes", "max_issues_repo_head_hexsha": "0e660f46b7d17fdfddc2cad1bb60dcf847f5d1e4", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "GradientDescent.ipynb", "max_forks_repo_name": "bkoyuncu/notes", "max_forks_repo_head_hexsha": "0e660f46b7d17fdfddc2cad1bb60dcf847f5d1e4", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-09-11T11:46:36.000Z", "max_forks_repo_forks_event_max_datetime": "2019-09-11T11:46:36.000Z", "avg_line_length": 188.6236749117, "max_line_length": 56672, "alphanum_fraction": 0.850746059, "converted": true, "num_tokens": 6486, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9284088045171238, "lm_q2_score": 0.8887587964389112, "lm_q1q2_score": 0.8251314917059274}} {"text": "### Evaluate a polynomial string\n\n\n```python\ndef symbolize(s):\n \"\"\"\n Converts a a string (equation) to a SymPy symbol object\n \"\"\"\n from sympy import sympify\n s1=s.replace('.','*')\n s2=s1.replace('^','**')\n s3=sympify(s2)\n \n return(s3)\n```\n\n\n```python\ndef eval_multinomial(s,vals=None,symbolic_eval=False):\n \"\"\"\n Evaluates polynomial at vals.\n vals can be simple list, dictionary, or tuple of values.\n vals can also contain symbols instead of real values provided those symbols have been declared before using SymPy\n \"\"\"\n from sympy import Symbol\n sym_s=symbolize(s)\n sym_set=sym_s.atoms(Symbol)\n sym_lst=[]\n for s in sym_set:\n sym_lst.append(str(s))\n sym_lst.sort()\n if symbolic_eval==False and len(sym_set)!=len(vals):\n print(\"Length of the input values did not match number of variables and symbolic evaluation is not selected\")\n return None\n else:\n if type(vals)==list:\n sub=list(zip(sym_lst,vals))\n elif type(vals)==dict:\n l=list(vals.keys())\n l.sort()\n lst=[]\n for i in l:\n lst.append(vals[i])\n sub=list(zip(sym_lst,lst))\n elif type(vals)==tuple:\n sub=list(zip(sym_lst,list(vals)))\n result=sym_s.subs(sub)\n \n return result\n```\n\n### Helper function for flipping binary values of a _ndarray_\n\n\n```python\ndef flip(y,p):\n import numpy as np\n lst=[]\n for i in range(len(y)):\n f=np.random.choice([1,0],p=[p,1-p])\n lst.append(f)\n lst=np.array(lst)\n return np.array(np.logical_xor(y,lst),dtype=int)\n```\n\n### Classification sample generation based on a symbolic expression\n\n\n```python\ndef gen_classification_symbolic(m=None,n_samples=100,n_features=2,flip_y=0.0):\n \"\"\"\n Generates classification sample based on a symbolic expression.\n Calculates the output of the symbolic expression at randomly generated (Gaussian distribution) points and\n assigns binary classification based on sign.\n m: The symbolic expression. Needs x1, x2, etc as variables and regular python arithmatic symbols to be used.\n n_samples: Number of samples to be generated\n n_features: Number of variables. This is automatically inferred from the symbolic expression. So this is ignored \n in case a symbolic expression is supplied. However if no symbolic expression is supplied then a \n default simple polynomial can be invoked to generate classification samples with n_features.\n flip_y: Probability of flipping the classification labels randomly. A higher value introduces more noise and make\n the classification problem harder.\n Returns a numpy ndarray with dimension (n_samples,n_features+1). 
Last column is the response vector.\n \"\"\"\n \n import numpy as np\n from sympy import Symbol,sympify\n \n if m==None:\n m=''\n for i in range(1,n_features+1):\n c='x'+str(i)\n c+=np.random.choice(['+','-'],p=[0.5,0.5])\n m+=c\n m=m[:-1]\n sym_m=sympify(m)\n n_features=len(sym_m.atoms(Symbol))\n evals=[]\n lst_features=[]\n for i in range(n_features):\n lst_features.append(np.random.normal(scale=5,size=n_samples))\n lst_features=np.array(lst_features)\n lst_features=lst_features.T\n for i in range(n_samples):\n evals.append(eval_multinomial(m,vals=list(lst_features[i])))\n \n evals=np.array(evals)\n evals_binary=evals>0\n evals_binary=evals_binary.flatten()\n evals_binary=np.array(evals_binary,dtype=int)\n evals_binary=flip(evals_binary,p=flip_y)\n evals_binary=evals_binary.reshape(n_samples,1)\n \n lst_features=lst_features.reshape(n_samples,n_features)\n x=np.hstack((lst_features,evals_binary))\n \n return (x)\n```\n\n\n```python\nx=gen_classification_symbolic(m='2*x1+3*x2+5*x3',n_samples=10,flip_y=0.0)\n```\n\n\n```python\nimport pandas as pd\ndf=pd.DataFrame(x)\n```\n\n\n```python\ndf\n```\n\n\n\n\n
           0           1          2    3
    0  -0.442614   -0.523797   2.300378  1.0
    1  -0.463687    3.200522   4.309449  1.0
    2  -2.580825   -2.606877   5.845246  1.0
    3   0.888534    3.727563  -3.174944  0.0
    4  -1.120233   -2.737827  -8.308171  0.0
    5  -4.832498   -6.591497   5.565251  0.0
    6  -2.326688    4.751803   3.309559  1.0
    7  -1.213919  -10.209268   2.447471  0.0
    8  -1.320891    0.770222  -8.735817  0.0
    9   2.150437    6.061752  -6.733177  0.0
    \n
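Since the generator returns a single array with the binary response in the last column, a quick way to sanity-check a sample is to split that column off and fit any off-the-shelf classifier. A minimal sketch, assuming scikit-learn is installed and reusing `gen_classification_symbolic` defined above:

```python
# Split the generated array into features / labels and fit a baseline classifier.
# Assumes gen_classification_symbolic (defined above) is in scope and that
# scikit-learn is installed; the accuracy value will vary from run to run.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

data = gen_classification_symbolic(m='2*x1+3*x2+5*x3', n_samples=500, flip_y=0.05)
X, y = data[:, :-1].astype(float), data[:, -1].astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```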
    \n\n\n\n\n```python\nx=gen_classification_symbolic(m='12*x1/(x2+5*x3)',n_samples=10,flip_y=0.2)\ndf=pd.DataFrame(x)\ndf\n```\n\n\n\n\n
            0           1          2    3
    0    5.566707    7.645691   0.559811  1.0
    1  -10.528796    2.241197  -4.216359  0.0
    2    5.000866    0.615997   2.021055  1.0
    3    2.919458    3.593280   0.498438  1.0
    4    4.671159   -0.679506  -5.125242  1.0
    5   -4.484902  -12.881067   2.340985  1.0
    6    1.302499   -5.054502  -1.396578  0.0
    7   -2.321861   -2.548314   2.881654  0.0
    8   -7.021660    4.530175   2.701544  0.0
    9   -9.778964   -0.132642   4.021956  0.0
    \n\n\n\n#### Classification samples with linear separator but no noise\n\n\n```python\nx=gen_classification_symbolic(m='x1-2*x2',n_samples=50,flip_y=0.0)\ndf=pd.DataFrame(x)\nplt.scatter(x=df[0],y=df[1],c=df[2])\nplt.show()\n```\n\n#### Classification samples with linear separator but significant noise (flipped bits)\n\n\n```python\nx=gen_classification_symbolic(m='x1-2*x2',n_samples=50,flip_y=0.15)\ndf=pd.DataFrame(x)\nplt.scatter(x=df[0],y=df[1],c=df[2])\nplt.show()\n```\n\n\n```python\nimport seaborn as sns\n```\n\n#### Classification samples with non-linear separator\n\n\n```python\nx=gen_classification_symbolic(m='x1**2-x2**2',n_samples=500,flip_y=0.01)\ndf=pd.DataFrame(x)\nplt.scatter(x=df[0],y=df[1],c=df[2])\nplt.show()\n```\n\n\n```python\nx=gen_classification_symbolic(m='x1**2-x2**2',n_samples=500,flip_y=0.01)\ndf=pd.DataFrame(x)\nplt.scatter(x=df[0],y=df[1],c=df[2])\nplt.show()\n```\n\n### Regression sample generation based on a symbolic expression\n\n\n```python\ndef gen_regression_symbolic(m=None,n_samples=100,n_features=2,noise=0.0,noise_dist='normal'):\n \"\"\"\n Generates regression sample based on a symbolic expression. Calculates the output of the symbolic expression \n at randomly generated (drawn from a Gaussian distribution) points\n m: The symbolic expression. Needs x1, x2, etc as variables and regular python arithmatic symbols to be used.\n n_samples: Number of samples to be generated\n n_features: Number of variables. This is automatically inferred from the symbolic expression. So this is ignored \n in case a symbolic expression is supplied. However if no symbolic expression is supplied then a \n default simple polynomial can be invoked to generate regression samples with n_features.\n noise: Magnitude of Gaussian noise to be introduced (added to the output).\n noise_dist: Type of the probability distribution of the noise signal. \n Currently supports: Normal, Uniform, t, Beta, Gamma, Poission, Laplace\n\n Returns a numpy ndarray with dimension (n_samples,n_features+1). 
Last column is the response vector.\n \"\"\"\n \n import numpy as np\n from sympy import Symbol,sympify\n \n if m==None:\n m=''\n for i in range(1,n_features+1):\n c='x'+str(i)\n c+=np.random.choice(['+','-'],p=[0.5,0.5])\n m+=c\n m=m[:-1]\n \n sym_m=sympify(m)\n n_features=len(sym_m.atoms(Symbol))\n evals=[]\n lst_features=[]\n \n for i in range(n_features):\n lst_features.append(np.random.normal(scale=5,size=n_samples))\n lst_features=np.array(lst_features)\n lst_features=lst_features.T\n lst_features=lst_features.reshape(n_samples,n_features)\n \n for i in range(n_samples):\n evals.append(eval_multinomial(m,vals=list(lst_features[i])))\n \n evals=np.array(evals)\n evals=evals.reshape(n_samples,1)\n \n if noise_dist=='normal':\n noise_sample=noise*np.random.normal(loc=0,scale=1.0,size=n_samples)\n elif noise_dist=='uniform':\n noise_sample=noise*np.random.uniform(low=0,high=1.0,size=n_samples)\n elif noise_dist=='beta':\n noise_sample=noise*np.random.beta(a=0.5,b=1.0,size=n_samples)\n elif noise_dist=='Gamma':\n noise_sample=noise*np.random.gamma(shape=1.0,scale=1.0,size=n_samples)\n elif noise_dist=='laplace':\n noise_sample=noise*np.random.laplace(loc=0.0,scale=1.0,size=n_samples)\n \n noise_sample=noise_sample.reshape(n_samples,1)\n evals=evals+noise_sample\n \n x=np.hstack((lst_features,evals))\n \n return (x)\n```\n\n#### Generate samples with a rational function as input \n### $$\\frac{10x_1}{(3x_2+4x_3)}$$\n\n\n```python\nx=gen_regression_symbolic(m='10*x1/(3*x2+4*x3)',n_samples=10,noise=0.1)\ndf=pd.DataFrame(x)\ndf\n```\n\n\n\n\n
           0          1          2                    3
    0  -7.08499   -4.35839    8.10732    -3.67783156896633
    1  -10.6684   0.779056   -6.64651     4.45796155973331
    2  -6.69542     -5.307    1.17935     5.95108211351032
    3  -3.83581    6.46401   -5.99678     8.28104884665168
    4  -1.50995   -1.46056   -4.10213    0.493224249022704
    5   2.22995   -1.12163    4.41009     1.51213441402200
    6   4.26261   0.982687     7.0676     1.31581579041464
    7  0.787195    2.20071   -4.68889   -0.724650906335961
    8  0.720728   -2.32469  -0.468637   -0.917106940761226
    9  -3.04485   -3.26701   -3.19083     1.40239700700260
    \n
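Because the response column is just the symbolic expression evaluated at the sampled points plus noise, a generated sample can be checked directly against `eval_multinomial`. A minimal sketch, assuming the array `x` from the cell above and `eval_multinomial` are still in scope; the residuals should be on the order of the `noise` level (0.1 here):

```python
# Compare the stored response against the exact value of m = 10*x1/(3*x2 + 4*x3)
# evaluated at the sampled features; the difference is exactly the injected noise.
import numpy as np

features, response = x[:, :-1], x[:, -1]
exact = np.array([float(eval_multinomial('10*x1/(3*x2+4*x3)', vals=list(row)))
                  for row in features])
residual = response.astype(float) - exact
print("max |response - exact| =", np.abs(residual).max())
```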
    \n\n\n\n#### Generate samples with no symbolic input and with 10 features\n\n\n```python\nx=gen_regression_symbolic(n_features=10,n_samples=10,noise=0.1)\ndf=pd.DataFrame(x)\ndf\n```\n\n\n\n\n
           0          1           2          3          4          5           6           7           8          9                  10
    0  -3.55302   -3.60695   -10.0729    7.85252   -3.76276    7.78251   -0.638193    -4.75034   -0.989603   0.181414   -10.2675547899900
    1  -6.34778   -2.74753   -4.75187    4.70489    2.70014    5.29475   -0.190095     4.85289    -4.50207   -1.22839   -1.93507315813350
    2   2.15057   -0.43572   -5.47805    6.96074    3.10096   -1.50898     1.09913     4.73097   -0.216513   -3.25629    4.91632094325713
    3   1.11181    5.62156  0.0660036    6.66565    1.88656   -4.57167    -0.84389    -2.44545     1.66649    7.36169    18.0607243993836
    4  -3.27857   -4.13545    4.36299   -3.73405    -1.3958   -6.39237     1.99202   -0.216661    -10.3905    3.43896   -23.7181534803553
    5  -6.78726   -7.76895    -2.2653    -3.4326   -16.8937   -1.79111     4.79866    -4.43692     2.01342   -3.46766   -49.7886776301048
    6   1.66157   -4.34424    6.69391   -3.42321   -8.63976   0.862722     2.70897    -13.5047    -7.14555   -3.16143   -33.7432854958296
    7   3.46949    7.11633   -1.08523    6.41709   -1.23161    7.07956      6.6377     6.06688    -4.99262    3.85005    19.9862211512126
    8   1.13155    4.02597   -3.05607     4.9271   -3.60454    6.06252    0.936673     4.07096    -2.04052    5.11093    15.8002295267657
    9   -1.6051    4.97768   -7.69743   -0.65865    4.07053    2.98333    -10.9341      4.9402      2.9771    0.68357    21.5880195549236
    \n
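When no symbolic expression is supplied, the target defaults to a random signed sum $x_1 \pm x_2 \pm \dots \pm x_n$, so an ordinary least-squares fit on a larger sample should recover coefficients close to $\pm 1$. A minimal sketch, reusing `gen_regression_symbolic` from above (the signs depend on the random draw):

```python
# With the default target (a +/-1-weighted sum of the features) a plain
# least-squares fit should recover coefficients near +1 or -1.
# Assumes gen_regression_symbolic (defined above) is in scope.
import numpy as np

sample = gen_regression_symbolic(n_features=5, n_samples=200, noise=0.5)
X = sample[:, :-1].astype(float)
y = sample[:, -1].astype(float)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("recovered coefficients:", np.round(coef, 2))
```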
    \n\n\n\n\n```python\nimport matplotlib.pyplot as plt\n```\n\n#### Generate samples with less noise and plot: $0.2x^2+1.2x+6+f_{noise}(x\\mid{N=0.1})$\n\n\n```python\nx=gen_regression_symbolic(m='0.2*x**2+1.2*x+6',n_samples=100,noise=0.1)\ndf=pd.DataFrame(x)\nplt.scatter(df[0],df[1],edgecolor='k',alpha=0.7,c='red',s=150)\nplt.show()\n```\n\n#### Generate samples with more noise and plo: $0.2x^2+1.2x+6+f_{noise}(x\\mid{N=10})$\n\n\n```python\nx=gen_regression_symbolic(m='0.2*x**2+1.2*x+6',n_samples=100,noise=10)\ndf=pd.DataFrame(x)\nplt.scatter(df[0],df[1],edgecolor='k',alpha=0.7,c='red',s=150)\nplt.show()\n```\n\n#### Generate samples with larger coefficent for the quadratic term and plot: $1.3x^2+1.2x+6+f_{noise}(x\\mid{N=10})$\n\n\n```python\n\n```\n\n#### Generate sample with transcedental or rational functions: $x^2.e^{-0.5x}.sin(x+10)$\n\n\n```python\nx=gen_regression_symbolic(m='x**2*exp(-0.5*x)*sin(x+10)',n_samples=50,noise=1)\ndf=pd.DataFrame(x)\nplt.figure(figsize=(10,4))\nplt.scatter(df[0],df[1],edgecolor='k',alpha=0.7,c='red',s=150)\nplt.grid(True)\nplt.show()\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "537d5684efbfdc72d8a5b2ac47fc18684890e2bc", "size": 271536, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Symbolic regression classification generator.ipynb", "max_stars_repo_name": "eric-erki/Random_Function_Generator", "max_stars_repo_head_hexsha": "2c6cbc3e88d67c7bed55fae338b9341cda96a555", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2019-01-11T20:22:22.000Z", "max_stars_repo_stars_event_max_datetime": "2021-01-05T19:32:15.000Z", "max_issues_repo_path": "Symbolic regression classification generator.ipynb", "max_issues_repo_name": "eric-erki/Random_Function_Generator", "max_issues_repo_head_hexsha": "2c6cbc3e88d67c7bed55fae338b9341cda96a555", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Symbolic regression classification generator.ipynb", "max_forks_repo_name": "eric-erki/Random_Function_Generator", "max_forks_repo_head_hexsha": "2c6cbc3e88d67c7bed55fae338b9341cda96a555", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2019-01-11T20:22:24.000Z", "max_forks_repo_forks_event_max_datetime": "2021-01-05T19:32:18.000Z", "avg_line_length": 231.0944680851, "max_line_length": 66756, "alphanum_fraction": 0.8965993459, "converted": true, "num_tokens": 6276, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9284088005554476, "lm_q2_score": 0.8887587964389112, "lm_q1q2_score": 0.8251314881849527}} {"text": "# Problem set 3 (90 pts)\n\n## Important note: the template for your solution filename is Name_Surname_PS3.ipynb\n\n## Problem 1 (25 pts)\n\n- (5 pts) Prove that $\\mathrm{vec}(AXB) = (B^\\top \\otimes A)\\, \\mathrm{vec}(X)$ if $\\mathrm{vec}(X)$ is a columnwise reshape of a matrix into a long vector. What does it change if the reshape is rowwise? \n\n**Note:** To make a columnwise reshape in Python one should use ```np.reshape(X, order='f')```, where the string ```'f'``` stands for the Fortran ordering. \n\n- (2 pts) What is the complexity of a naive computation of $(A \\otimes B) x$? Show how it can be reduced.\n\n- (3 pts) Let matrices $A$ and $B$ have eigendecompositions $A = S_A\\Lambda_A S_A^{-1}$ and $B = S_B\\Lambda_B S^{-1}_B$. 
Find eigenvectors and eigenvalues of the matrix $A\\otimes I + I \\otimes B$, where the dimension of $I$ coincides with the dimensions of $A$ and $B$.\n\n\n- (10 pts) Let $A = \\mathrm{diag}\\left(\\frac{1}{1000},\\frac{2}{1000},\\dots \\frac{999}{1000}, 1, 1000 \\right)$. Estimate analytically the number of iterations required to solve a linear system with $A$ with relative accuracy $10^{-4}$ using\n - Richardson iteration with the optimal choice of parameter (use $2$-norm)\n - Chebyshev iteration (use $2$-norm)\n - Conjugate gradient method (use $A$-norm).\n \n- (5 pts) Provide numerical confirmation of your theoretical estimates\n\n\n```python\n# Your solution is here\n```\n\n## Problem 2 (40 pts)\n\n### Spectral graph partitioning and inverse iteration\n\n\nGiven a connected graph $G$ and its corresponding graph Laplacian matrix $L = D - A$ with eigenvalues $0=\\lambda_1, \\lambda_2, ..., \\lambda_n$, where $D$ is its degree matrix and $A$ is its adjacency matrix, the *Fiedler vector* is an eigenvector corresponding to the second smallest eigenvalue $\\lambda_2$ of $L$. The Fiedler vector can be used for graph partitioning: positive values correspond to one part of the graph and negative values to the other.\n\n### Inverse power method (15 pts)\n\nTo find the Fiedler vector we will use the inverse iteration with adaptive shifts (Rayleigh quotient iteration). \n\n* (5 pts) Write down the orthoprojection matrix onto the space orthogonal to the eigenvector of $L$ corresponding to the eigenvalue $0$, and prove (analytically) that it is indeed an orthoprojection.\n \n* (5 pts) Implement the spectral partitioning as the function ```partition```:\n\n\n```python\n# INPUT:\n# A - adjacency matrix (scipy.sparse.csr_matrix)\n# num_iter_fix - number of iterations with fixed shift (int)\n# shift - (float number)\n# num_iter_adapt - number of iterations with adaptive shift (int) -- Rayleigh quotient iteration steps\n# x0 - initial guess (1D numpy.ndarray)\n# eps - relative tolerance (float)\n# OUTPUT:\n# x - normalized Fiedler vector (1D numpy.ndarray)\n# eigs - eigenvalue estimations at each step (1D numpy.ndarray)\ndef partition(A, shift, num_iter_fix, num_iter_adapt, x0, eps):\n x = x0\n eigs = np.array([0])\n return x, eigs\n```\n\nThe algorithm must halt before `num_iter_fix + num_iter_adapt` iterations if the following condition is satisfied $$ \\boxed{\\|\\lambda_k - \\lambda_{k-1}\\|_2 / \\|\\lambda_k\\|_2 \\leq \\varepsilon} \\text{ at some step } k.$$\n\nDo not forget to use the orthogonal projection from above in the iterative process to get the correct eigenvector.\nIt is also a good idea to use ```shift=0``` before the adaptive strategy is used. This, however, is not possible since the matrix $L$ is singular, and sparse decompositions in ```scipy``` do not work in this case. Therefore, we first use a very small shift instead.\n\n* (3 pts) Generate a random `lollipop_graph` using the `networkx` library and find its partition. 
[Draw](https://networkx.github.io/documentation/networkx-1.9/examples/drawing/labels_and_colors.html) this graph with vertices colored according to the partition.\n\n* (2 pts) Start the method with a random initial guess ```x0```, set ```num_iter_fix=0``` and comment on why the method can converge to a wrong eigenvalue.\n\n### Spectral graph properties (15 pts)\n\n* (5 pts) Prove that the multiplicity of the eigenvalue $0$ in the spectrum of the graph Laplacian equals the number of its connected components.\n* (10 pts) The second-smallest eigenvalue of $L(G)$, $\\lambda_2(L(G))$, is often called the algebraic connectivity of the\ngraph $G$. A basic intuition behind the use of this term is that a graph with a higher algebraic\nconnectivity typically has more edges, and can therefore be thought of as being \u201cmore connected\u201d. \nTo check this statement, create a few graphs with an equal number of vertices using `networkx`; one of them should be $C_{30}$ (the simple cycle graph), and one of them should be $K_{30}$ (the complete graph). (You can also change the number of vertices if it makes sense for your experiments, but do not make it trivially small.)\n * Find the algebraic connectivity for each graph using inverse iteration.\n * Plot the dependence of $\\lambda_2(G_i)$ on $|E_i|$.\n * Draw a partition for a chosen graph from the generated set.\n * Comment on the results.\n\n### Image bipartition (10 pts)\n\nLet us deal here with a graph constructed from a binarized image.\nConsider the rule that graph vertices are only pixels with value $1$, and each vertex can have no more than $8$ connected vertices (pixel neighbours), i.e. the graph degree is limited by 8.\n* (3 pts) Find an image with minimal size $(256, 256)$ and binarize it such that the graph built on black pixels has exactly $1$ connected component.\n* (5 pts) Write a function that constructs a sparse adjacency matrix from the binarized image, taking into account the rule from above.\n* (2 pts) Find the partition of the resulting graph and draw the image in accordance with the partition.\n\n\n```python\n# Your solution is here\n```\n\n## Problem 3 (25 pts)\n\n**Disclaimer**: this problem is released for the first time, so it may contain typos. 
\n\n## Mathematical model (Navier-Stokes equations)\n\nThe governing equations for two-dimensional incompressible\nflows can be written in a dimensionless form as:\n\n\\begin{equation}\\tag{1}\n\\dfrac{\\partial \\omega}{\\partial t} = \\dfrac{1}{Re} \\big(\\dfrac{\\partial^2 \\omega}{\\partial x^2} + \\dfrac{\\partial^2 \\omega}{\\partial y^2}\\big) - \\big(\\dfrac{\\partial \\psi}{\\partial y} \\dfrac{\\partial \\omega}{\\partial x} - \\dfrac{\\partial \\psi}{\\partial x} \\dfrac{\\partial \\omega}{\\partial y}\\big),\n\\end{equation}\n\nalong with the kinematic relationship between vorticity $\\omega(x,y,t)$ and stream function $\\psi(x,y,t)$ according to the Poisson equation, which is given as:\n\n\\begin{equation}\\tag{2}\n\\dfrac{\\partial^2 \\psi}{\\partial x^2} + \\dfrac{\\partial^2 \\psi}{\\partial y^2} = -\\omega.\n\\end{equation}\n\nWe consider equations (1) and (2) in the computational domain $\\Omega = [0, 2\\pi] \\times [0, 2\\pi]$ and impose the following periodic boundary conditions:\n\n$$\\omega(x,0,t) =\\omega(x, 2\\pi, t), \\quad \\omega(0,y,t) =\\omega(2\\pi, y, t), \\quad t \\geq 0,$$\nand the same for $\\psi(x,y,t)$.\n\nNote: the Reynolds number, referred to as $Re$, is a fundamental physical constant that in particular determines whether the fluid flow is laminar or turbulent.\n\n## The animation below represents a particular solution of the Navier-Stokes equations (1) and (2) and you will get it in the end of this problem\n\n\n# Fourier-Galerkin pseudospectral method\n\nFourier series expansion based methods are often used for solving problems with periodic boundary conditions. One of the most accurate methods for solving the Navier\u2013Stokes equations in periodic domains is **the pseudospectral method**, which exploits the Fast Fourier Transform (FFT) algorithm. \n\nOutline: the main idea of spectral methods is to write the solution of a differential equation as a sum of certain \"basis functions\" (e.g. Fourier series, Chebyshev polynomials etc) and then to choose the coefficients in the sum in order to satisfy the differential equation as well as possible.\n\nComprehensive survey of such methods can be found in [this book](https://depts.washington.edu/ph506/Boyd.pdf).\n\n### Discrete Fourier Transform\n\nWe discretize the domain $[0,L_x]\\times[0, L_y]$ by introducing a computation **grid** consisting of $N_x \\times N_y$ equally spaced points.\n\nThe discrete grid coordinates for $i = 0, 1, \\ldots, N_x$ and $j = 0, 1, \\ldots, N_y$ are given by:\n\n$$x_i = \\frac{i L_x}{N_x}, \\quad y_j = \\frac{j L_y}{N_y}.$$\n\nNote, that since the domain is periodic $x_0 = x_{N_x}$ and $y_0 = y_{N_y}$.\n\n Then, any discrete function $u_{i,j} = u(x_i,y_j)$ can be transformed to the Fourier space using the Discrete Fourier Transform (DFT):\n\n$$ \\tilde{u}_{m,n} = \\sum_{i = 0}^{N_x - 1}\\sum_{j = 0}^{N_y - 1} u_{i, j}e^{-\n\\mathbf{i}(\\frac{2\\pi m}{L_x}x_i + \\frac{2\\pi n}{L_y}y_j)},$$\n\nand its inverse transform is:\n\n$$ u_{i,j} = \\frac{1}{N_x N_y} \\sum_{m = -\\frac{N_x}{2}}^{\\frac{N_x}{2} - 1}\\sum_{n = -\\frac{N_y}{2}}^{\\frac{N_y}{2} - 1} \\tilde{u}_{m, n}e^{\\mathbf{i}(\\frac{2\\pi m}{L_x}x_i + \\frac{2\\pi n}{L_y}y_j)},$$\n\nwhere $i$ and $j$ represent indices for the physical space (i.e. coordinates in the introduced grid), $m$ and $n$ are indices in the Fourier space (i.e. frequencies). 
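These are exactly the conventions used by `np.fft.fft2` / `np.fft.ifft2`: no normalization on the forward transform and the $1/(N_x N_y)$ factor on the inverse. As a small check, a single Fourier mode should produce exactly one nonzero coefficient, of magnitude $N_x N_y$:

```python
# Verify the DFT convention: for u = exp(i*(m*x + n*y)) on an N x N periodic grid,
# fft2 returns a single nonzero coefficient of magnitude N*N at frequencies (n, m).
import numpy as np

N, L = 16, 2*np.pi
ls = np.linspace(0, L, N, endpoint=False)
xx, yy = np.meshgrid(ls, ls, indexing='xy')

m, n = 3, -2                      # chosen frequencies
u = np.exp(1j*(m*xx + n*yy))
u_tilde = np.fft.fft2(u)

print(abs(u_tilde[n % N, m % N]))      # -> 256.0 == N*N
print(round(abs(u_tilde).sum(), 6))    # -> 256.0, i.e. all other coefficients vanish
```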
\n\n\nWe also introduce wavenumbers:\n\n$$k_x = \\frac{2\\pi m}{L_x}, \\quad k_y = \\frac{2 \\pi n}{L_y}.$$\n\n\n**Bonus question:** how DFT coefficients $\\tilde{u}_{m,n}$ relate to coefficients in the truncated Fourier series of $u(x,y)$?\n\n### Differentiation\nIn Fourier space we can easily perform differentiation with respect to $x$ and $y$. For example, the\nfirst and the second order derivatives of any function $u$ in discrete\ndomain becomes:\n\n$$ \\left(\\dfrac{\\partial u}{\\partial x}\\right)_{i,j} = \\frac{1}{N_x N_y}\\sum_{m = -\\frac{N_x}{2}}^{\\frac{N_x}{2} - 1}\\sum_{n = \\frac{N_y}{2}}^{\\frac{N_y}{2} - 1} \\tilde{u}_{m, n} (\\mathbf{i}k_x) e^{\\mathbf{i}(k_x x_i + k_y y_j)}, $$\n\n$$ \\left(\\dfrac{\\partial^2 u}{\\partial x^2}\\right)_{i,j} = \\frac{1}{N_x N_y}\\sum_{m = -\\frac{N_x}{2}}^{\\frac{N_x}{2} - 1}\\sum_{n = -\\frac{N_y}{2}}^{\\frac{N_y}{2} - 1} \\tilde{u}_{m, n} (-k_x^2) e^{\\mathbf{i}(k_x x_i + k_y y_j)}, $$\n\nand similarly for the derivatives w.r.t. $y$ \n\nAssume $L_x = L_y = L = 2\\pi$, $N_x = N_y = N$ for simplicity. Then, differentiation $\\frac{\\partial}{\\partial x}$ in the Fourier space can be implemented as follows:\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n```\n\n\n```python\ndef dudx(u_tilde, N):\n k1d = np.fft.fftfreq(N) * N\n return u_tilde * (1j * k1d)\n```\n\n Note, we use ```np.fft.fftfreq(N)``` to determine the order of frequencies for certain ```numpy``` implementation (see the documentation of ```numpy.fft``` module for details).\n\nConsider the following example:\n\n\n```python\nL = 2*np.pi # size of computational domain\nd = 7\nN = 2**d\n```\n\n\n```python\n# discretize the domain $[0, 2\\pi] \\times [0, 2\\pi]$ with uniform grid\n\nls = np.linspace(0, L, N, endpoint=False)\nxx, yy = np.meshgrid(ls, ls, indexing='xy')\n\n# define simple periodic function\nu = np.sin(xx) * np.sin(yy) \n\n# first, compute du/dx analytically\nu_x = np.cos(xx) * np.sin(yy) \n\n# next, compute du/dx in Fourier space\nu_tilde = np.fft.fft2(u)\nu_tilde_x = dudx(u_tilde, N)\nu_x_fourier = np.fft.ifft2(u_tilde_x)\n\n# check the result\nerr = np.linalg.norm(u_x - u_x_fourier)\nprint(\"error = \", err)\n```\n\n error = 5.437108258986234e-13\n\n\n- (5 pts) Similarly with the implementation of ```dudx(u_tilde, N)``` given above, your first task is to implement other derivatives arising in the Navier-Stokes equtions (1), (2). Loops are prohibited!\n\n\n```python\ndef dudy(u_tilde, N):\n pass\n\ndef d2udx2(u_tilde, N):\n pass\n \ndef d2udy2(u_tilde, N):\n pass\n```\n\n### Navier-Stokes equations in the Fourier space\n\nAfter transforming Eq. (1) and Eq. (2) to the Fourier space, the governing equations become:\n\n\\begin{equation}\\tag{3}\n\\frac{\\partial \\tilde{\\omega}_{m,n}}{\\partial t} = \\frac{1}{Re}[(-k_x^2 - k_y^2)\\tilde{\\omega}_{m,n}] - \\tilde{N},\n\\end{equation}\n\n\\begin{equation}\\tag{4}\n(-k_x^2 - k_y^2)\\tilde{\\psi}_{m,n} = -\\tilde{\\omega}_{m,n},\n\\end{equation}\n\nwhere $\\tilde{N}$ represents the non-linear term which is computed using 2D convolutions as follows:\n\n$$\\tilde{N} = (\\mathbf{i}k_y \\tilde{\\psi}_{m,n}) \\circ (\\mathbf{i}k_x \\tilde{\\omega}_{m,n}) - (\\mathbf{i}k_x \\tilde{\\psi}_{m,n}) \\circ (\\mathbf{i}k_y \\tilde{\\omega}_{m,n}),$$\n\ni.e. 
multiplications in physical space become convolutions in the Fourier space.\n\nTo clarify where these convolutions come from, consider two discrete functions $u$ and $v$ represented by their DFT (1D for simplicity):\n\n$$ u_{i} = \\frac{1}{N_x} \\sum_{m = -\\frac{N_x}{2}}^{\\frac{N_x}{2} - 1} \\tilde{u}_{m}e^{\\mathbf{i}\\frac{2\\pi m}{L_x}x_i},$$\n\n$$ v_{i} = \\frac{1}{N_x} \\sum_{n = -\\frac{N_x}{2}}^{\\frac{N_x}{2} - 1}\\tilde{v}_{n}e^{\\mathbf{i}\\frac{2\\pi n}{L_x}x_i}.$$\n\nThen, the direct multiplication results in:\n$$ u_{i} v_{i} = \\frac{1}{N_x} \\sum_{k = -N_x}^{N_x - 2} \\frac{1}{N_x}\\tilde{w}_{k}e^{\\mathbf{i}\\frac{2\\pi k}{L_x}x_i},$$\nwhere the coefficients $\\tilde{\\omega}_k$ are computed as follows (check it!):\n\n$$\\tilde{w}_{k} = \\sum_{m + n = k}\\tilde{u}_m\\tilde{v}_n.$$\n\n\nBelow we provide a possible implementation of 2D convolution using ```scipy.signal``` module. Note, that *full* convolution introduces higher frequinces that should be truncated in a proper way.\n\n\n```python\nfrom scipy import signal\n\ndef conv2d_scipy(u_tilde, v_tilde, N):\n # np.fft.fftshift is used to align implementation and formulas\n full_conv = signal.convolve(np.fft.fftshift(u_tilde),\\\n np.fft.fftshift(v_tilde), mode='full')\n trunc_conv = full_conv[N//2:-N//2+1, N//2:-N//2+1]\n return np.fft.ifftshift(trunc_conv)/(N*N)\n\n```\n\n(10 pts) Your second task is to implement the same 2D convolution but using the *Convolution Theorem* in this time.\n\n\n \n Hint: From the lecture course you should know that applying *Convolution Theorem* is straightforward when computing **circular** (or periodic) convolutions. However, for this task you should use an appropriate zero-padding by a factor of two (with further truncation).\n\n\n```python\ndef conv2d(u_tilde, v_tilde, N):\n pass\n```\n\n\n```python\n# check yourself\n\nu_tilde = np.random.rand(N, N)\nv_tilde = np.random.rand(N, N)\n\nerr = np.linalg.norm(conv2d(u_tilde, v_tilde, N) - conv2d_scipy(u_tilde, v_tilde, N))\nprint(\"error =\", err) # should be close to machine precision\n```\n\n**Poisson solver**\n\nFinally, we need to solve the Poisson equation Eq. (2) which can be easily computed in the Fourier space according to the Eq. (4).\n\n\n(5 pts) Implement inverse of the laplacian operator according to the template provided below. Note: the laplacian operator with periodic boundary conditions is singular (since the constant function is in nullspace). So, in order to avoid division by zero:\n1. Assume the problem is always consistent (i.e. $\\tilde{\\omega}_{0,0} = 0$), \n2. Assume $\\tilde{\\psi}_{0,0} = 0$ (i.e. return normal solution). Loops are prohibited!\n\n\n```python\ndef laplace_inverse(omega_tilde, N):\n psi_tilde = None\n return psi_tilde\n```\n\n\n```python\n# check yourself\n\n# consider simple solution\nsol_analytic = np.sin(xx)*np.sin(yy)\n\n# compute corresponding right hand side analytically\nrhs = -2*np.sin(xx)*np.sin(yy)\n\n# solve Poisson problem in Fourier space\nrhs_tilde = np.fft.fft2(rhs)\nsol_tilde = laplace_inverse(rhs_tilde, N)\nsol = np.fft.ifft2(sol_tilde)\n\n# check error is small\nerr = np.linalg.norm(sol - sol_analytic)\nprint(\"error =\", err)\n```\n\n error = 1.8561658787461062e-14\n\n\n**Time integration**\n\nEqs. 
(3) and (4) can be considered as semi-discrete ordinary differential equations (ODEs) obtained after (spectral) spatial discretization of the partial differential equations (1) and (2):\n\n\\begin{equation}\\tag{5}\n\\frac{d \\tilde{\\omega}}{dt} = \\mathcal{L}(\\tilde{\\omega}, \\tilde{\\psi}),\n\\end{equation}\n\nwhere $\\mathcal{L}( \\tilde{\\omega} , \\tilde{\\psi})$ is the discrete operator of spatial derivatives including non-linear convective terms, linear diffusive terms, and $\\tilde{\\psi}$ which is obtained from the Poisson equation (4).\n\n(5 pts) Implement $\\mathcal{L}$ according to the template provided below\n\n\n```python\ndef L_op(omega_tilde, psi_tilde, N, Re=1):\n pass\n```\n\nWe integrate in time using fourth-order Runge\u2013Kutta scheme that can be written in the following form:\n\n$$\\tilde{\\omega}^{(1)} = \\tilde{\\omega}^{n} + \\frac{\\Delta t}{2}\\mathcal{L}(\\tilde{\\omega}^{n}, \\tilde{\\psi}^{n})$$\n\n$$\\tilde{\\omega}^{(2)} = \\tilde{\\omega}^{n} + \\frac{\\Delta t}{2}\\mathcal{L}(\\tilde{\\omega}^{(1)}, \\tilde{\\psi}^{(1)})$$\n\n$$\\tilde{\\omega}^{(3)} = \\tilde{\\omega}^{n} + \\Delta t\\mathcal{L}(\\tilde{\\omega}^{(2)}, \\tilde{\\psi}^{(2)})$$\n\n$$\\tilde{\\omega}^{n+1} = \\frac{1}{3}(-\\tilde{\\omega}^{n} + \\tilde{\\omega}^{(1)} + 2\\tilde{\\omega}^{(2)} + \\tilde{\\omega}^{(3)}) + \\frac{\\Delta t}{6}\\mathcal{L}(\\tilde{\\omega}^{3}, \\tilde{\\psi}^{3})$$\n\n\n\n\n```python\ndef integrate_runge_kutta(omega0_tilde, N, n_steps, tau, Re):\n omega_prev = omega0_tilde\n psi_prev = laplace_inverse(-omega_prev, N)\n for step in range(n_steps):\n if(step%100 == 0):\n print(step)\n omega_1 = omega_prev + (tau/2)*L_op(omega_prev, psi_prev, N, Re)\n psi_1 = -laplace_inverse(omega_1, N)\n\n omega_2 = omega_prev + (tau/2)*L_op(omega_1, psi_1, N, Re)\n psi_2 = -laplace_inverse(omega_2, N)\n\n omega_3 = omega_prev + tau*L_op(omega_2, psi_2, N, Re)\n psi_3 = -laplace_inverse(omega_3, N)\n\n omega_next = (1./3)*(-omega_prev + omega_1 + 2*omega_2 + omega_3) + (tau/6)*L_op(omega_3, psi_3, N, Re)\n psi_next = -laplace_inverse(omega_next, N)\n\n omega_prev = omega_next\n psi_prev = psi_next\n return omega_prev\n```\n\n### Validation with analytical solution\n\nWe first consider the Taylor-Green vortex (known analytical solution of the Navier-Stokes equations) to validate our solver:\n\n\n```python\n# Taylor-Green vortex -- analytical solution for validation purposes\n\ndef taylor_green_vortex(xx, yy, t, N, Re):\n k = 3\n omega = 2*k*np.cos(k*xx)*np.cos(k*yy)*np.exp(-2*k**2*t*(1/Re))\n return omega\n```\n\n\n```python\n\nRe = 1000\ntau = 1e-2 # timestep\nn_steps = 100\nT = tau * n_steps # finial time\n\nomega0 = taylor_green_vortex(xx, yy, 0, N, Re) # initial vorticity\nomega0_tilde = np.fft.fft2(omega0) # convert to the Fourier space\nomegaT_tilde = integrate_runge_kutta(omega0_tilde, N, n_steps, tau, Re) # integrate in time in the Fourier space\nomegaT = np.real(np.fft.ifft2(omegaT_tilde)) # return back to physical space\n```\n\n 0\n\n\n\n```python\n# check the error is small\n\nomegaT_analytical = taylor_green_vortex(xx, yy, T, N, Re) \nerr = np.linalg.norm(omegaT_analytical - omegaT)\nprint(\"error =\", err)\n```\n\n error = 2.3043898350926834e-12\n\n\n### Shear layer problem\n\nFinaly, we consider another (more interesting) initial vorticity that gives the dynamic from the GIF in the beginning of this problem.\n\n\n```python\n# intial condition that evolves like a vortex\n\ndef shear_layer0(xx, yy, N):\n delta = 0.05\n sigma = 15/np.pi\n a = delta*np.cos(yy[:, 
:N//2]) - sigma*(np.cosh(sigma*(xx[:, :N//2] - np.pi/2)))**(-2)\n b = delta*np.cos(yy[:, N//2:]) + sigma*(np.cosh(sigma*(3*np.pi/2 - xx[:, N//2:])))**(-2)\n return np.concatenate((a, b), axis=1)\n```\n\n\n```python\nRe = 10000\ntau = 1e-3 # timestep\nn_steps = 10000\nT = tau * n_steps # finial time\n\nomega0 = shear_layer0(xx, yy, N) # initial vorticity\nomega0_tilde = np.fft.fft2(omega0) # convert to the Fourier space\nomegaT_tilde = integrate_runge_kutta(omega0_tilde, N, n_steps, tau, Re) # integrate in time in the Fourier space\nomegaT = np.real(np.fft.ifft2(omegaT_tilde)) # return back to physical space\n```\n\n 0\n 100\n 200\n 300\n 400\n 500\n 600\n 700\n 800\n 900\n 1000\n 1100\n 1200\n 1300\n 1400\n 1500\n 1600\n 1700\n 1800\n 1900\n 2000\n 2100\n 2200\n 2300\n 2400\n 2500\n 2600\n 2700\n 2800\n 2900\n 3000\n 3100\n 3200\n 3300\n 3400\n 3500\n 3600\n 3700\n 3800\n 3900\n 4000\n 4100\n 4200\n 4300\n 4400\n 4500\n 4600\n 4700\n 4800\n 4900\n 5000\n 5100\n 5200\n 5300\n 5400\n 5500\n 5600\n 5700\n 5800\n 5900\n 6000\n 6100\n 6200\n 6300\n 6400\n 6500\n 6600\n 6700\n 6800\n 6900\n 7000\n 7100\n 7200\n 7300\n 7400\n 7500\n 7600\n 7700\n 7800\n 7900\n 8000\n 8100\n 8200\n 8300\n 8400\n 8500\n 8600\n 8700\n 8800\n 8900\n 9000\n 9100\n 9200\n 9300\n 9400\n 9500\n 9600\n 9700\n 9800\n 9900\n\n\n\n```python\n# plot the solution at the final timestamp\n\nplt.imshow(np.real(np.fft.ifft2(omega_final)), cmap='jet')\n```\n", "meta": {"hexsha": "fa506967fdaa80b0e0c484cf3f095420c22a0964", "size": 111204, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "hw/hw3/hw3.ipynb", "max_stars_repo_name": "mikhailpautov/nla2020", "max_stars_repo_head_hexsha": "74ba7da5c7a17293452a381150346edca37ce5f3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 70, "max_stars_repo_stars_event_min_datetime": "2020-10-26T09:55:01.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-12T09:11:41.000Z", "max_issues_repo_path": "hw/hw3/hw3.ipynb", "max_issues_repo_name": "mikhailpautov/nla2020", "max_issues_repo_head_hexsha": "74ba7da5c7a17293452a381150346edca37ce5f3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-10-12T06:53:42.000Z", "max_issues_repo_issues_event_max_datetime": "2021-10-12T06:53:43.000Z", "max_forks_repo_path": "hw/hw3/hw3.ipynb", "max_forks_repo_name": "mikhailpautov/nla2020", "max_forks_repo_head_hexsha": "74ba7da5c7a17293452a381150346edca37ce5f3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 48, "max_forks_repo_forks_event_min_datetime": "2020-10-26T19:20:14.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-03T21:45:49.000Z", "avg_line_length": 112.6686930091, "max_line_length": 78844, "alphanum_fraction": 0.8499334556, "converted": true, "num_tokens": 6476, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.8887587993853655, "lm_q2_score": 0.9284087960985615, "lm_q1q2_score": 0.8251314869593701}} {"text": "```python\nimport numpy as np\nimport pandas as pd\nimport emcee\nfrom scipy import stats\nfrom scipy.optimize import curve_fit\n\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nfrom dataloader import *\n\nsns.set(style='ticks', context='talk')\nplt.style.use(\"paper.mplstyle\")\n```\n\n## Population Definition\n\n### Bivariate Normal\n\n\n```python\nsize = 1000000\nrho_xy = 0.25\nsigma_x = 1\nsigma_y = 1\ncov_xy = rho_xy * (sigma_x * sigma_y)\ncov = np.array([[sigma_x**2, cov_xy], \n [ cov_xy, sigma_y**2]])\nm = rho_xy * sigma_y / sigma_x\nb = 0\n\npopulation_dist = stats.multivariate_normal(cov=cov)\npopulation_sample = population_dist.rvs(size)\nx, y = population_sample[:, 0], population_sample[:, 1]\nr_xy = stats.pearsonr(x, y)[0]\nm_best = r_xy * np.std(y) / np.std(x)\nb_best = np.mean(y) - m_best*np.mean(x)\n\nprint(\"Population Correlation:\", r_xy)\n```\n\n Population Correlation: 0.24955648671784122\n\n\n\n```python\ndef f(x, m, b):\n return m*x + b\n\nfig = plt.figure(figsize=(10, 10))\nplt.hist2d(x, y, bins=100, range=[[-4, 4], [-4, 4]], cmap=\"gray_r\");\n\nxrange = np.linspace(-6, 6, 100)\nplt.plot(xrange, f(xrange, m_best, b_best), 'r--', label=\"Truth\")\n\n\npopt, _ = curve_fit(f, x, y)\nplt.plot(xrange, f(xrange, *popt), label=\"OLS\", zorder=1)\n\nfig.suptitle(\"Population Sample r=0.25\")\nplt.xlabel(\"x-value\")\nplt.ylabel(\"y-value\")\nplt.legend()\nplt.tight_layout(rect=[0, 0.03, 1, 0.95])\nplt.savefig(\"figures/bivariate.png\")\n```\n\n## Measurement Sample\n\nEach measurement sample will be measured with error of constant variance. \n\n* The first sample will have equal variance on x and y error. \n* The second sample will have x error be 5 times greater than y error. \n* The third will have x error be 10 times greater than y error.\n\n\n```python\ndata_size = 100\ndata_idx = np.random.randint(0, size, size=data_size)\n\nsigma_x = np.ones(data_size) * 0.2\nsigma_y = np.ones(data_size) * 0.2\nxerr = stats.norm(loc=0, scale=sigma_x).rvs(data_size)\nyerr = stats.norm(loc=0, scale=sigma_y).rvs(data_size)\n```\n\n### Measurement Sample 1\n\nBoth x and y has single value errors with error ratio of 1.\n\n\n```python\nxdata = x[data_idx] + xerr*1\nydata = y[data_idx] + yerr*1\n\ndf1 = pd.DataFrame(\n {\n \"x\": xdata,\n \"xerr\": sigma_x,\n \"y\": ydata,\n \"yerr\": sigma_y,\n }\n).sort_values('x')\n\nplt.errorbar(df1.x, df1.y, yerr=df1.yerr, xerr=df1.xerr, c='k', fmt='ko', \n lw=0.5, ms=5)\ndf1.head()\n```\n\n### Measurement Sample 2\n\nBoth x and y has single value errors with error ratio x of y being 5\n\n\n```python\nxdata = x[data_idx] + xerr*5\nydata = y[data_idx] + yerr\n\ndf2 = pd.DataFrame(\n {\n \"x\": xdata,\n \"xerr\": sigma_x*5,\n \"y\": ydata,\n \"yerr\": sigma_y,\n }\n).sort_values('x')\n\nplt.errorbar(df2.x, df2.y, yerr=df2.yerr, xerr=df2.xerr, fmt='ko', \n lw=0.5, ms=5)\ndf2.head()\n```\n\n### Measurement Sample 3\n\nBoth x and y has errors of variance uniformly assigned. 
The error ratio of x to y is 10.\n\n\n```python\nxdata = x[data_idx] + xerr*10\nydata = y[data_idx] + yerr\n\ndf3 = pd.DataFrame(\n {\n \"x\": xdata,\n \"xerr\": sigma_x*10,\n \"y\": ydata,\n \"yerr\": sigma_y,\n }\n).sort_values('x')\n\nplt.errorbar(df3.x, df3.y, yerr=df3.yerr, xerr=df3.xerr, fmt='ko', \n lw=0.5, ms=5)\ndf3.head()\n```\n\n## Model\n\nLet $Y^*$ and $X^*$ be the random variables with the population distribution and $Y$ and $X$ be the samples measured with errors $\\varepsilon_y \\sim \\text{Normal}(0, \\sigma_y)$ and $\\varepsilon_x \\sim \\text{Normal}(0, \\sigma_x)$ respectively. We assume that $Y^*$ and $X^*$ are linearly related with the residual called the intrinsic scatter $\\epsilon_\\text{int} \\sim \\text{Normal}(0, \\sigma_\\text{int})$, where $\\sigma_\\text{int}$ is unknown and to be fitted for. \n\n**Theory**\n\n$\nY^* = \\theta_1 X^* + \\theta_0 + \\epsilon\n$\n\n**Measurement**\n\n$\nY = Y^* + \\varepsilon_y\\\\\nX = X^* + \\varepsilon_x\\\\\n$\n\n**Likelihood Function**\n\nThe probability to observe a given value from $Y$ conditioned on a value from $X$ and the model described above is\n\n$\n\\begin{align}\nP(Y=y \\mid X=x, \\epsilon_\\text{int}=\\epsilon; \\theta) = P(Y=y; \\theta)\n\\end{align}\n$\n\n$\n\\begin{align}\nY = \\theta_1 X^* + \\theta_0 + \\epsilon + \\varepsilon_y\n\\end{align}\n$\n\n### Difference from LINMIX\n\nLINMIX by [Kelly 2007](http://adsabs.harvard.edu/abs/2007ApJ...665.1489K) (hereon called Kelly07) differs from the model above in that LINMIX makes the additional assumption:\n\n1. $X^*$ is assumed to be distributed as a Gaussian mixture model with $k$ components, where each component has the Gaussian mean parameter $\\mu_i$ and variance parameter $\\tau_i^2$,\n \n $\n \\begin{align}\n X \\sim \\sum_{i=1}^k P_i \\cdot (\\tau_i Z_i + \\mu_i)\n \\end{align}\n $\n \n where $Z_i$ for all $i$ are IID standard normal. 
$P_i$ for all $i$ are independently distributed as the $\\text{Bernouli}(p_i)$.\n \n This assumption is a free parameter in MCMC and can be interpreted as an attempt to fit the X distribution as a Gaussian mixture model.\n\n## Slope Posterior via LINMIX\n\n\n```python\nfrom linmix import linmix\n\nlm1 = linmix.LinMix(age_df['age'].groupby('snid').mean(), hr_df['hr'], xsig=0, ysig=hr_df['hr_err'],\n K=3, seed=912\n)\nlm1.run_mcmc(maxiter=5000)\n\nlm1 = linmix.LinMix(age_df['age'].groupby('snid').mean(), hr_df['hr'], xsig=age_df['age'].groupby('snid').std(), ysig=hr_df['hr_err'],\n K=3, seed=912\n)\nlm1.run_mcmc(maxiter=5000)\n```\n", "meta": {"hexsha": "bc2ccb7a355bb28b5d78870c9cb6fd5bf337db7a", "size": 138378, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "mcmc_gaussian_mixture.ipynb", "max_stars_repo_name": "ketozhang/statistical-methods-on-sne-ia-luminosity-evolution", "max_stars_repo_head_hexsha": "868c34eef7612375bec9c535c108240b57aedf40", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "mcmc_gaussian_mixture.ipynb", "max_issues_repo_name": "ketozhang/statistical-methods-on-sne-ia-luminosity-evolution", "max_issues_repo_head_hexsha": "868c34eef7612375bec9c535c108240b57aedf40", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "mcmc_gaussian_mixture.ipynb", "max_forks_repo_name": "ketozhang/statistical-methods-on-sne-ia-luminosity-evolution", "max_forks_repo_head_hexsha": "868c34eef7612375bec9c535c108240b57aedf40", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 197.6828571429, "max_line_length": 53344, "alphanum_fraction": 0.8924829091, "converted": true, "num_tokens": 1734, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9284087985746092, "lm_q2_score": 0.8887587905460026, "lm_q1q2_score": 0.825131480953437}} {"text": "```python\nimport sympy as sp\nimport numpy as np\nimport pandas as pd\n```\n\n\n```python\nx = sp.symbols('x')\nf = sp.exp(2 * x)\n```\n\n\n```python\ndef calculate_first_derivative(nodes, values, forward=True):\n step = nodes[1] - nodes[0]\n diffs = []\n if forward:\n for i in range(len(nodes) - 1):\n diffs.append((values[i + 1] - values[i]) / step)\n diffs.append(None)\n else:\n diffs.append(None)\n for i in range(1, len(nodes)):\n diffs.append((values[i] - values[i - 1]) / step)\n \n return diffs\n```\n\n\n```python\ndef calculate_first_derivative_high_precision(nodes, values):\n step = nodes[1] - nodes[0]\n diffs = [None]\n for i in range(1, len(nodes) - 1):\n diffs.append((values[i + 1] - values[i - 1]) / (2 * step))\n diffs.append(None)\n return diffs\n```\n\n\n```python\ndef calculate_second_derivative(nodes, values):\n step = nodes[1] - nodes[0]\n diffs = [None]\n for i in range(1, len(nodes) - 1):\n diffs.append((values[i - 1] - 2 * values[i] + values[i + 1]) / step ** 2)\n diffs.append(None)\n return diffs\n```\n\n\n```python\ndef create_table_of_derivative_estimation(nodes, values, dvalues, d_est, d_est_hp, ddvalues, dd_est):\n derror = [abs(v - e) if e != None else None for v, e in zip(dvalues, d_est)]\n dhperror = [abs(v - e) if e != None else None for v, e in zip(dvalues, d_est_hp)]\n dderror = [abs(v - e) if e != None else None for v, e in zip(ddvalues, dd_est)]\n \n rows = list(zip(\n nodes, values,\n dvalues,\n d_est, derror,\n d_est_hp, dhperror,\n ddvalues,\n dd_est, dderror\n ))\n header = ['x', 'f(x)',\n 'f\\'(x)',\n 'f\\'(x) O(n) est.', 'f\\'(x) O(n) error',\n 'f\\'(x) O(n^2) est.', 'f\\'(x) O(n^2) error',\n 'f\\'\\'(x)',\n 'f\\'\\'(x) O(n^2) est.', 'f\\'(x) O(n^2) error'\n ]\n return pd.DataFrame(rows, columns=header)\n```\n\n# 1\n\n\n```python\na = 0\nb = 4\nnodes_count = 11\ndf = sp.diff(f)\nddf = sp.diff(df)\n\nnodes = np.linspace(a, b, nodes_count, endpoint=True)\nvalues = [f.evalf(subs={x: i}) for i in nodes]\ndvalues = [df.evalf(subs={x: i}) for i in nodes]\nddvalues = [ddf.evalf(subs={x: i}) for i in nodes]\n\nd_est = calculate_first_derivative(nodes, values)\nd_est_hp = calculate_first_derivative_high_precision(nodes, values)\ndd_est = calculate_second_derivative(nodes, values)\n```\n\n\n```python\ncreate_table_of_derivative_estimation(nodes, values, dvalues, d_est, d_est_hp, ddvalues, dd_est)\n```\n\n\n\n\n
          x              f(x)             f'(x)   f'(x) O(n) est.  f'(x) O(n) error  f'(x) O(n^2) est.  f'(x) O(n^2) error            f''(x)  f''(x) O(n^2) est.  f''(x) O(n^2) error
    0   0.0  1.00000000000000  2.00000000000000  3.06385232123117  1.06385232123117               None                None  4.00000000000000                None                 None
    1   0.4  2.22554092849247  4.45108185698494  6.81872873975662  2.36764688277168   4.94129053049389   0.490208673508958  8.90216371396987    9.38719104631362    0.485027332343746
    2   0.8  4.95303242439511  9.90606484879023  15.1753598906162  5.26929504182599   10.9970443151864    1.09097946639619  19.8121296975805    20.8915778771490     1.07944817956855
    3   1.2  11.0231763806416  22.0463527612832  33.7733845411694  11.7270317798862   24.4743722158928    2.42801945460959  44.0927055225664    46.4950616263828     2.40235610381643
    4   1.6  24.5325301971094  49.0650603942187  75.1640495900872  26.0989891958685   54.4687170656283    5.40365667140959  98.1301207884374    103.476662622295     5.34654183385715
    5   2.0  54.5981500331442  109.196300066288  167.280668713977  58.0843686476883   121.222359152032    12.0260590857435  218.392600132577    230.291547809724     11.8989476771470
    6   2.4  121.510417518735  243.020835037470  372.289974768544  129.269139731074   269.785321741261    26.7644867037907  486.041670074940    512.523265136419     26.4815950614791
    7   2.8  270.426407426153  540.852814852306  828.546576114824  287.693761262518   600.418275441684    59.5654605893785  1081.70562970461    1140.64150336570     58.9358736610868
    8   3.2  601.845037872082  1203.69007574416  1843.96431630584  640.274240561675   1336.25544621033    132.565370466167  2407.38015148833    2538.54435047754     131.164198989210
    9   3.6  1339.43076439442  2678.86152878884  4103.81805661828  1424.95652782944   2973.89118646206    295.029657673221  5357.72305757767    5649.63435078109     291.911293203418
    10  4.0  2980.95798704173  5961.91597408346              None              None               None                None  11923.8319481669                None                 None
    \n
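The error columns above behave exactly as the truncation errors of the three stencils predict. For step $h$,

\begin{align}
\frac{f(x+h)-f(x)}{h} &= f'(x) + \frac{h}{2}\,f''(\xi_1), \\
\frac{f(x+h)-f(x-h)}{2h} &= f'(x) + \frac{h^2}{6}\,f'''(\xi_2), \\
\frac{f(x+h)-2f(x)+f(x-h)}{h^2} &= f''(x) + \frac{h^2}{12}\,f^{(4)}(\xi_3),
\end{align}

so with $f(x) = e^{2x}$ and $h = 0.4$ the forward estimate of $f'$ carries an $O(h)$ error, while the two centered estimates are $O(h^2)$, which matches the relative sizes of the error columns in the table.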
    \n\n\n\n# 2\n\n\n```python\nstep = 1\ntarget_node = 2\ntarget_value = ddf.evalf(subs={x: target_node})\nerror = None\n\nwhile True:\n nodes = [target_node - step, target_node, target_node + step]\n values = [f.evalf(subs={x: i}) for i in nodes]\n estimated_value = calculate_second_derivative(nodes, values)[1]\n new_error = round(abs(estimated_value - target_value), 5)\n \n if error == None or new_error <= error:\n error = new_error\n step /= 2\n else:\n break\n```\n\n\n```python\nstep\n```\n\n\n\n\n 3.0517578125e-05\n\n\n\n# 3\n\n\n```python\ndef newton_interpolation_equally_spaced_nodes(nodes, values, order):\n def finite_differences(nodes, values, max_order):\n if len(nodes) != len(values):\n raise ValueError(\"nodes and values lists must have the same length\")\n diffs = [values]\n for order in range(max_order):\n current_diffs = []\n for i in range(1, len(nodes) - order):\n current_diffs.append(diffs[-1][i] - diffs[-1][i-1])\n diffs.append(current_diffs)\n return diffs\n \n step = nodes[1] - nodes[0]\n fds = finite_differences(nodes, values, order)\n\n t = (x - b) / step\n\n # Compute N_k for k up `order` to estimate error for polynom of order `order - 1`\n ns = [1]\n for i in range(order + 1):\n ns.append(ns[-1] * (t + i) / (i + 1))\n\n # Calculate all the polynoms with order from 0 to `order`\n polynoms = []\n for i in range(order + 1):\n prev = polynoms[-1] if len(polynoms) > 0 else 0\n polynoms.append(prev + ns[i] * fds[i][-1])\n\n return polynoms[-1]\n```\n\n\n```python\na, b = 0, 1\nnodes_count = 11 # to include node `b` and have a nice 0.1 step\nnodes = np.linspace(a, b, nodes_count, endpoint=True)\nvalues = [f.evalf(subs={x: i}) for i in nodes]\nnew_node_id = 1\n\npolynom = newton_interpolation_equally_spaced_nodes(nodes, values, 5)\n```\n\n\n```python\ndf_numeric = calculate_first_derivative(nodes, values)[new_node_id]\ndf_poly = sp.diff(polynom).evalf(subs={x: nodes[new_node_id]})\ndf_actual = sp.diff(f).evalf(subs={x: nodes[new_node_id]})\n```\n\n\n```python\ndf_actual, df_poly, df_numeric\n```\n\n\n\n\n (2.44280551632034, 2.64068217834383, 2.70421939481100)\n\n\n\n\n```python\npoly_error, numeric_error = df_poly - df_actual, df_numeric - df_actual\npoly_error, numeric_error\n```\n\n\n\n\n (0.197876662023491, 0.261413878490665)\n\n\n", "meta": {"hexsha": "ec31488c6edc3bc933270dc9eefcc0e09a2f0497", "size": 18100, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "year-2/computational-workshop/task-5.ipynb", "max_stars_repo_name": "Sergobot/university", "max_stars_repo_head_hexsha": "7cd8c07fc660f1e19127c6488991ddd59d99643c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2019-09-05T08:43:47.000Z", "max_stars_repo_stars_event_max_datetime": "2019-09-05T08:43:52.000Z", "max_issues_repo_path": "year-2/computational-workshop/task-5.ipynb", "max_issues_repo_name": "Sergobot/university", "max_issues_repo_head_hexsha": "7cd8c07fc660f1e19127c6488991ddd59d99643c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "year-2/computational-workshop/task-5.ipynb", "max_forks_repo_name": "Sergobot/university", "max_forks_repo_head_hexsha": "7cd8c07fc660f1e19127c6488991ddd59d99643c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-10-04T07:40:20.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-18T07:12:08.000Z", 
"avg_line_length": 32.5539568345, "max_line_length": 110, "alphanum_fraction": 0.4633701657, "converted": true, "num_tokens": 3295, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9252299591537478, "lm_q2_score": 0.8918110432813418, "lm_q1q2_score": 0.825130295148057}} {"text": "```python\nimport sympy as sy\nfrom sympy import diff\nfrom sympy.abc import x\nfrom sympy import Function\n#f = Function('f')\n```\n\n\n```python\nf = 1/(x+1)\nf\n\n```\n\n\n\n\n$\\displaystyle \\frac{1}{x + 1}$\n\n\n\n\n```python\ndiff(f,x).subs(x,-5)\n\n```\n\n\n\n\n$\\displaystyle - \\frac{1}{16}$\n\n\n\n\n```python\ndiff(f,x).subs(x,-3)\n\n```\n\n\n\n\n$\\displaystyle - \\frac{1}{4}$\n\n\n\n\n```python\ndiff(f,x).subs(x,0)\n```\n\n\n\n\n$\\displaystyle -1$\n\n\n\n\n```python\ndiff(f,x).subs(x,2)\n\n\n```\n\n\n\n\n$\\displaystyle - \\frac{1}{9}$\n\n\n\n\n```python\nfrom sympy import sin, cos\nimport sympy as sy\nfrom sympy import simplify\nfrom sympy import sympify\nx = sy.symbols('x)')\n```\n\n\n```python\nexpr = sin(x)**2 +cos(x)**2\nexpr\n```\n\n\n\n\n$\\displaystyle \\sin^{2}{\\left(x) \\right)} + \\cos^{2}{\\left(x) \\right)}$\n\n\n\n\n```python\nsimplify(expr)\n```\n\n\n\n\n$\\displaystyle 1$\n\n\n\n\n```python\nsympify('r + cos(x)**2')\n```\n\n\n\n\n$\\displaystyle r + \\cos^{2}{\\left(x \\right)}$\n\n\n\n\n```python\nfrom sympy import Matrix\nwwb05 = Matrix([[0,2,4,6,8,10,12], \n [13,16,18,19,18,14,11]]) \nprint(\"x = top row(left to right) f(x) = bottom row\")\nwwb05\n\n\n```\n\n x = top row(left to right) f(x) = bottom row\n\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}0 & 2 & 4 & 6 & 8 & 10 & 12\\\\13 & 16 & 18 & 19 & 18 & 14 & 11\\end{matrix}\\right]$\n\n\n\n\n```python\nx = wwb05[0:7]\nprint(\"x-values are stored in variable x as Matrix slice:\",x)\n```\n\n x-values are stored in variable x as Matrix slice: [0, 2, 4, 6, 8, 10, 12]\n\n\n\n```python\ndef func(x):\n return wwb05[7:15]\nprint(\"f(x) values are stored in variable x as Matrix slice:\",func(x))\n```\n\n f(x) values are stored in variable x as Matrix slice: [13, 16, 18, 19, 18, 14, 11]\n\n\n\n```python\nrot = Matrix([[0,2,4,6,8,10,12], \n [13,16,18,19,18,14,11]]) \nfrom sympy.matrices import Matrix\nfrom sympy.abc import x, y\nM = Matrix([[x, 0,2,4,6,8,10,12], [y, 13,16,18,19,18,14,11]])\nM\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}x & 0 & 2 & 4 & 6 & 8 & 10 & 12\\\\y & 13 & 16 & 18 & 19 & 18 & 14 & 11\\end{matrix}\\right]$\n\n\n\n\n```python\nM.diff(x)\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\\\0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\end{matrix}\\right]$\n\n\n\n\n```python\nf = sy.Function('f')\ndef f(x):\n return 1/x\ndiffqot = f(x.subs(x,3)) - f(x.subs(x,0.4))\ndiffqot\n```\n\n\n\n\n$\\displaystyle -2.16666666666667$\n\n\n\n\n```python\nt,h = sy.symbols('t,h')\nA,B,C = sy.symbols(\"A,B,C\",constant=True, real=True) \n# line above this is obsolete code (see cell 185)\n# I am not sure on problem 6 of WWB05 about the constants \"A,B,C\"\n# Are these integers, real numbers, rational?\n\ndef f(t):\n return -5 - 11*t\n```\n\n\n```python\ndiff(f(t))\n```\n\n\n\n\n$\\displaystyle -11$\n\n\n\n\n```python\nf = sy.Function('f')\ndef f(x):\n return -2 -7*sy.sqrt(x)\n```\n\n\n```python\nf(x)\n```\n\n\n\n\n$\\displaystyle - 7 \\sqrt{x} - 2$\n\n\n\n\n```python\n\n```\n\n\n```python\nexpr0 = (f(x+h) - f(x))/h\nexpr0\n```\n\n\n\n\n$\\displaystyle \\frac{7 \\sqrt{x} - 7 \\sqrt{h + x}}{h}$\n\n\n\n\n```python\n\nexpr0_rewritten = A / ( sy.sqrt(B*x + C*h) + sy.sqrt(x) 
)\nexpr0_rewritten\n```\n\n\n\n\n$\\displaystyle \\frac{A}{\\sqrt{x} + \\sqrt{B x + C h}}$\n\n\n\n\n```python\nfrom sympy import simplify\nsy.simplify(expr0)\n```\n\n\n\n\n$\\displaystyle \\frac{7 \\left(\\sqrt{x} - \\sqrt{h + x}\\right)}{h}$\n\n\n\n\n```python\nexp5 = sy.simplify(expr0)\nexp5\n```\n\n\n\n\n$\\displaystyle \\frac{7 \\left(\\sqrt{x} - \\sqrt{h + x}\\right)}{h}$\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "5eb19d8cd8ee61aebf2fde9d307e41100a2b5632", "size": 11897, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Personal_Projects/SympyPractice.ipynb", "max_stars_repo_name": "NSC9/Sample_of_Work", "max_stars_repo_head_hexsha": "8f8160fbf0aa4fd514d4a5046668a194997aade6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Personal_Projects/SympyPractice.ipynb", "max_issues_repo_name": "NSC9/Sample_of_Work", "max_issues_repo_head_hexsha": "8f8160fbf0aa4fd514d4a5046668a194997aade6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Personal_Projects/SympyPractice.ipynb", "max_forks_repo_name": "NSC9/Sample_of_Work", "max_forks_repo_head_hexsha": "8f8160fbf0aa4fd514d4a5046668a194997aade6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 19.5996705107, "max_line_length": 140, "alphanum_fraction": 0.4359082122, "converted": true, "num_tokens": 1243, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9252299550303292, "lm_q2_score": 0.8918110454379297, "lm_q1q2_score": 0.8251302934660866}} {"text": "# Linear First Order System\n\nThis notebook demonstrates simulation of a linear first-order system in Pyomo using two distinct approaches. 
The first uses the `Simulator` class from Pyomo which can employ the \n\n## First-Order Differential Equation with Initial Condition\n\nThe following cell implements a solution to a first-order linear model in the form\n\n\\begin{align}\n\\tau\\frac{dy}{dt} + y & = K u(t) \\\\\n\\end{align}\n\nwhere $\\tau$ and $K$ are model parameters, and $u(t)$ is an external process input.\n\n\n```python\n% matplotlib inline\nfrom pyomo.environ import *\nfrom pyomo.dae import *\nimport matplotlib.pyplot as plt\n\ntf = 10\ntau = 1\nK = 5\n\n# define u(t)\nu = lambda t: 1\n\n# create a model object\nmodel = ConcreteModel()\n\n# define the independent variable\nmodel.t = ContinuousSet(bounds=(0, tf))\n\n# define the dependent variables\nmodel.y = Var(model.t)\nmodel.dydt = DerivativeVar(model.y)\n\n# fix the initial value of y\nmodel.y[0].fix(0)\n\n# define the differential equation as a constraint\nmodel.ode = Constraint(model.t, rule=lambda model, t: tau*model.dydt[t] + model.y[t] == K*u(t))\n\n# transform dae model to discrete optimization problem\n#TransformationFactory('dae.finite_difference').apply_to(model, nfe=50, method='BACKWARD')\n\n# solve the model\n#SolverFactory('ipopt').solve(model).write()\n\ntsim, profiles = Simulator(model, package='scipy').simulate(numpoints=100)\n\nplt.plot(tsim, profiles)\n\n# access elements of a ContinuousSet object\nt = [t for t in model.t]\n\n# access elements of a Var object\ny = [model.y[t]() for t in model.y]\n\nplt.plot(t,y)\nplt.xlabel('time / sec')\nplt.ylabel('response')\nplt.title('Response of a linear first-order ODE')\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "e38b8cbcfa90eab05d6254ae008e1d1cb57437f6", "size": 16402, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/simulation/Linear_First_Order_System.ipynb", "max_stars_repo_name": "gschivley/ND-Pyomo-Cookbook", "max_stars_repo_head_hexsha": "7bddccd14eb3044e9d3e7c46999e1c9c29541e16", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notebooks/simulation/Linear_First_Order_System.ipynb", "max_issues_repo_name": "gschivley/ND-Pyomo-Cookbook", "max_issues_repo_head_hexsha": "7bddccd14eb3044e9d3e7c46999e1c9c29541e16", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/simulation/Linear_First_Order_System.ipynb", "max_forks_repo_name": "gschivley/ND-Pyomo-Cookbook", "max_forks_repo_head_hexsha": "7bddccd14eb3044e9d3e7c46999e1c9c29541e16", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2018-10-14T16:40:40.000Z", "max_forks_repo_forks_event_max_datetime": "2021-07-20T12:02:29.000Z", "avg_line_length": 122.4029850746, "max_line_length": 13016, "alphanum_fraction": 0.8769052555, "converted": true, "num_tokens": 433, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9496693716759489, "lm_q2_score": 0.8688267864276108, "lm_q1q2_score": 0.825098188361943}} {"text": "```python\nimport sympy as sp\nimport numpy as np\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n%matplotlib inline\n```\n\n\n```python\nx = sp.symbols('x')\nf = sp.sin(x) / sp.sqrt(1 - x)\nstart, end = 0, 1\n```\n\n\n```python\nf\n```\n\n\n\n\n$\\displaystyle \\frac{\\sin{\\left(x \\right)}}{\\sqrt{1 - x}}$\n\n\n\nWe will be integrating the following function from 0 to 1\n\n# Let's plot it first!\n\n\n```python\nx_plot = np.linspace(start, end, 300, endpoint=False)\ny_plot = sp.lambdify(x, f, 'numpy')(x_plot)\n\nsns.set_style('whitegrid')\nplt.figure(figsize=(12, 6))\nsns.lineplot(x_plot, y_plot);\n```\n\n# Exact value\n\n\n```python\ntrue_value = 1.18698444\n# Thanks, Wolfram Alpha!\n```\n\n# Midpoint Riemann sum\n\n\n```python\nnodes_count = 3\nnodes = np.linspace(start, end, nodes_count, endpoint=False)\nstep = (nodes[1] - nodes[0])\nnodes += step / 2\n\nvalues = sp.lambdify(x, f, 'numpy')(nodes)\nmid_riemann_value = step * values.sum()\n```\n\n\n```python\nmid_riemann_value\n```\n\n\n\n\n 0.8909319389164732\n\n\n\n# Using weights\n\n\n```python\np = 1 / sp.sqrt(1 - x)\nnodes = [sp.Rational(1, 6), 0.5, sp.Rational(5, 6)]\nphi = f / p\nw = (x - nodes[0]) * (x - nodes[1]) * (x - nodes[2])\ndw = w.diff()\n```\n\n\n```python\ncoeffs = [\n 11 / 20,\n -1 / 10,\n 31 / 20\n]\ncoeffs\n```\n\n\n\n\n [0.55, -0.1, 1.55]\n\n\n\n\n```python\nweights_value = sum([coeffs[i] * phi.evalf(subs={x: nodes[i]}) for i in range(len(nodes))])\nweights_value\n```\n\n\n\n\n$\\displaystyle 1.19057444157482$\n\n\n\n# Gauss time!\n\n\n\n```python\nroots = [-1 / sp.sqrt(3), 1 / sp.sqrt(3)]\ncoeffs = [1, 1]\nnodes = [(start + end + (end - start) * r) / 2 for r in roots]\n```\n\n\n```python\ngauss_value = sum([coeffs[i] * f.evalf(subs={x: nodes[i]}) for i in range(len(nodes))]) * (end - start) / 2\n```\n\n\n```python\ngauss_value\n```\n\n\n\n\n$\\displaystyle 0.889706408692229$\n\n\n\n# Gauss-like formulas\n\n\n```python\np = 1 / sp.sqrt(1 - x)\nnodes_count = 2\nmus = [\n float(\n sp.integrate(\n p * x ** k,\n (x, 0, 1)\n )\n )\n for k in range(2 * nodes_count)\n]\nfor i in range(2 * nodes_count):\n print(f'mu_{i} = {mus[i]}')\n```\n\n mu_0 = 2.0\n mu_1 = 1.3333333333333333\n mu_2 = 1.0666666666666667\n mu_3 = 0.9142857142857143\n\n\n\n```python\n# Huge thanks to Wolfram Alpha (again)!\npoly_coeffs = [-8 / 7, 8 / 35]\npolynom = x**2 + x * poly_coeffs[0] + poly_coeffs[1]\n```\n\n\n```python\nnodes = sp.solve(polynom)\n```\n\n\n```python\nphi = f / p\n```\n\n\n```python\ncoeffs = [\n (mus[1] - mus[0] * nodes[1]) / (nodes[0] - nodes[1]),\n (mus[1] - mus[0] * nodes[0]) / (nodes[1] - nodes[0])\n]\n```\n\n\n```python\ngauss_like_value = sum([coeffs[i] * phi.evalf(subs={x: nodes[i]}) for i in range(nodes_count)])\n```\n\n\n```python\ngauss_like_value\n```\n\n\n\n\n$\\displaystyle 1.18673193986058$\n\n\n", "meta": {"hexsha": "8182ffb7ffa32740ec9f0af96f266eecab3170a7", "size": 23205, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "year-2/computational-workshop/task-6.ipynb", "max_stars_repo_name": "Sergobot/university", "max_stars_repo_head_hexsha": "7cd8c07fc660f1e19127c6488991ddd59d99643c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2019-09-05T08:43:47.000Z", "max_stars_repo_stars_event_max_datetime": "2019-09-05T08:43:52.000Z", "max_issues_repo_path": "year-2/computational-workshop/task-6.ipynb", 
"max_issues_repo_name": "Sergobot/university", "max_issues_repo_head_hexsha": "7cd8c07fc660f1e19127c6488991ddd59d99643c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "year-2/computational-workshop/task-6.ipynb", "max_forks_repo_name": "Sergobot/university", "max_forks_repo_head_hexsha": "7cd8c07fc660f1e19127c6488991ddd59d99643c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-10-04T07:40:20.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-18T07:12:08.000Z", "avg_line_length": 57.2962962963, "max_line_length": 15236, "alphanum_fraction": 0.7945701357, "converted": true, "num_tokens": 967, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9458012701768145, "lm_q2_score": 0.8723473697001441, "lm_q1q2_score": 0.8250672502977995}} {"text": "# Numerical root-finding\nGiven equation:\n\\begin{align}\nf(x) = 2\\sin\\sqrt{x} - x + 1\n\\end{align}\n\n## Graphical method\n\n\n```python\nfrom math import sqrt\nfrom math import sin\nfrom math import cos\nfrom math import fabs\nfrom matplotlib import pyplot as plt\n%matplotlib inline\n\n\ndef f(x_array): \n if isinstance(x_array, list):\n return [2*sin(sqrt(x)) - x + 1 for x in x_array]\n return 2*sin(sqrt(x_array)) - x_array + 1\n\ndef df(x_array): \n if isinstance(x_array, list):\n return [cos(sqrt(x))/sqrt(x) - 1 for x in x_array]\n return cos(sqrt(x_array))/sqrt(x_array) - 1\n\n\nlower = 0\nupper = 7\nlength = 1000\nx = [lower + x*(upper-lower)/length for x in range(length)]\nplt.plot(x, f(x))\nplt.grid(color='r', linestyle='-', linewidth=0.5)\nplt.show()\n```\n\nIt can be clearly seen that the root is between 2.5 and 3.5. Let's extend analysis by zooming.\n\n\n```python\nlower = 2.5\nupper = 3.5\nlength = 1000\nx = [lower + x*(upper-lower)/length for x in range(length)]\nplt.plot(x, f(x))\nplt.grid(color='r', linestyle='-', linewidth=0.5)\nplt.show()\n```\n\nSo, with 0.1 precision we found that the root is between 2.9 and 3. 
This is our raw graphical estimation of the root.\n\n## Implementation and usage of Bisection method\n\n\n```python\ndef bisection(function, lower, upper, precision, x_r_old = 0):\n x_r = 0.5*(upper + lower)\n if fabs((x_r - x_r_old)/x_r) < precision:\n return x_r_old, [fabs((x_r - x_r_old)/x_r)]\n if function(lower) * function(x_r) > 0:\n lower = x_r\n elif function(lower) * function(x_r) < 0:\n upper = x_r\n else:\n return x_r, [0]\n rec = bisection(function, lower, upper, precision, x_r)\n return rec[0], [fabs((x_r - x_r_old)/x_r)*100] + rec[1]\n\n\nbisect_result = bisection(f, 0, 5, 1e-10)\nprint(\"According to Bisection approach the root of the given equation is: \" + str(bisect_result[0]))\n```\n\n According to Bisection approach the root of the given equation is: 2.976215688395314\n\n\n## Implementation and usage of False-Position method\n\n\n```python\ndef false_pos(function, lower, upper, precision, x_r_old = 0):\n x_r = upper - (function(upper)*(lower - upper))/(function(lower) - function(upper))\n if fabs((x_r - x_r_old)/x_r) < precision:\n return x_r_old, [fabs((x_r - x_r_old)/x_r)]\n if function(lower) * function(x_r) > 0:\n lower = x_r\n elif function(lower) * function(x_r) < 0:\n upper = x_r\n else:\n return x_r, [0]\n rec = false_pos(function, lower, upper, precision, x_r)\n return rec[0], [fabs((x_r - x_r_old)/x_r)*100] + rec[1]\n\n\nfalse_pos_result = false_pos(f, 0, 5, 1e-10)\nprint(\"According to False-Position approach the root of the given equation is: \" + str(false_pos_result[0]))\n```\n\n According to False-Position approach the root of the given equation is: 2.9762156879372084\n\n\n## Implementation and usage of Fixed-Point Iteration method\n\n\n```python\ndef fixed_pnt(function, precision, x_r_old):\n x_r = function(x_r_old) + x_r_old\n if fabs((x_r - x_r_old)/x_r) < precision:\n return x_r, [fabs((x_r - x_r_old)/x_r)]\n rec = fixed_pnt(function, precision, x_r)\n return rec[0], [fabs((x_r - x_r_old)/x_r)*100] + rec[1]\n\n\nfixed_pnt_result = fixed_pnt(f, 1e-10, 1)\nprint(\"According to Fixed-Point approach the root of the given equation is: \" + str(fixed_pnt_result[0]))\n```\n\n According to Fixed-Point approach the root of the given equation is: 2.9762156880341646\n\n\n## Implementation and usage of Newton-Raphson method\n\n\n```python\ndef new_rap(function, dfunction, precision, x_r_old):\n x_r = x_r_old - function(x_r_old)/dfunction(x_r_old)\n if fabs((x_r - x_r_old)/x_r) < precision:\n return x_r, [fabs((x_r - x_r_old)/x_r)]\n rec = new_rap(function, dfunction, precision, x_r)\n return rec[0], [fabs((x_r - x_r_old)/x_r)*100] + rec[1]\n\n\nnew_rap_result = new_rap(f, df, 1e-10, 1)\nprint(\"According to Newton-Raphson approach the root of the given equation is: \" + str(new_rap_result[0]))\n```\n\n According to Newton-Raphson approach the root of the given equation is: 2.976215688041108\n\n\n## Error analysis\n\n\n```python\nplt.semilogy(bisect_result[1], label = 'Bisection')\nplt.semilogy(false_pos_result[1], label = 'False-Position')\nplt.semilogy(fixed_pnt_result[1], label = 'Fixed-Point')\nplt.semilogy(new_rap_result[1][:-1], label = 'Newton-Raphson')\nplt.legend()\nplt.grid()\nplt.ylabel('True relative error')\nplt.xlabel('# of iterations')\nplt.show()\n```\n\nFrom above plot it is seen that relative error functions of Fixed-Point, False-Position and Bisection methods are linear after 2nd iteration. This is due to proportional relation between iterative estimations. 
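In other words, for these linearly convergent methods the error shrinks roughly as $|e_{k+1}| \\approx C\\,|e_k|$ with some constant $0 < C < 1$, whereas Newton-Raphson converges quadratically, $|e_{k+1}| \\approx C\\,|e_k|^2$, which is why its error curve drops off much more steeply on the semilog plot. 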
In general, practical convergence rate confirms theoretical expectations of the algorithms.\n\n## Maximum precision test\n\n\n```python\nplt.semilogy(bisection(f, 2.9, 3.0, 1e-30)[1][:-1], label = 'Bisection')\nplt.semilogy(false_pos(f, 2.9, 3.0, 1e-30)[1][:-1], label = 'False-Position')\nplt.semilogy(fixed_pnt(f, 1e-30, 2.9)[1][:-1], label = 'Fixed-Point')\nplt.semilogy(new_rap(f, df, 1e-30, 2.9)[1][:-1], label = 'Newton-Raphson')\nplt.ylim(1e-16, 1e+1)\nplt.legend()\nplt.grid()\nplt.ylabel('True relative error')\nplt.xlabel('# of iterations')\nplt.show()\n```\n\nAccording to graph above, Bisection and Fixed-Point methods reach about 10^-14 error. False-Position and Newton-Raphson methods reach 10^-11 and 10^-7 precisions respectively. The test was done by setting precision parameters as 10^-30.\n\n## Execution performance (dependence on parameters)\n\nAccording to error function plots it is clearly seen that Newton-Raphson method is the fastest way of root estimation, however it have lowest precision in comparison with other 3 approaches. Newton-Raphson method have only one tunable parameter which is initial guess. Let's try different initial guesses to test its effects.\n\n\n```python\nplt.semilogy(new_rap(f, df, 1e-30, 1)[1][:-1], label = '1')\nplt.semilogy(new_rap(f, df, 1e-30, 1.5)[1][:-1], label = '1.5')\nplt.semilogy(new_rap(f, df, 1e-30, 2)[1][:-1], label = '2')\nplt.semilogy(new_rap(f, df, 1e-30, 2.5)[1][:-1], label = '2.5')\nplt.semilogy(new_rap(f, df, 1e-30, 3)[1][:-1], label = '3')\nplt.semilogy(new_rap(f, df, 1e-30, 3.5)[1][:-1], label = '3.5')\nplt.semilogy(new_rap(f, df, 1e-30, 4)[1][:-1], label = '4')\nplt.semilogy(new_rap(f, df, 1e-30, 4.5)[1][:-1], label = '4.5')\nplt.semilogy(new_rap(f, df, 1e-30, 60)[1][:-1], label = '60')\nplt.ylim(1e-16, 1e+1)\nplt.legend()\nplt.grid()\nplt.show()\n```\n\nFrom above graph it is clearly seen that initial guess parameter determines convergence rate and even precision value. The fastest convergence was performed using 3.0 as initial guess (which is very close to actual root) with only 2 iterations. Initial values 1, 1.5 and 60 converged after 4 iterations. Executions with other arguments converged in 3 iterations. \n\nInteresting observation is that initial guesses of 1.5 and 60 converged in 4 iterations both (may be because of quadratic convergence), so farness of initial guess from actual value does not really determine the rate of convergence. For comparison reasons, let's test our linearly converging algorithm Fixed-Point iteration method.\n\n\n```python\nplt.semilogy(fixed_pnt(f, 1e-30, 2.5)[1][:-1], label = '2.5')\nplt.semilogy(fixed_pnt(f, 1e-30, 60)[1][:-1], label = '60')\nplt.ylim(1e-16, 1e+1)\nplt.legend()\nplt.grid()\nplt.show()\n```\n\nSame situation here, initial guesses of 60 and 2.5 converging at 14th iteration. So the reason of convergence rate independence of initial guess is not a quadratic error minimization.\n\n## Conclusion\n\nThe given assignment's goal is to give a practical experience of various root-finding algorithm implementations and usage of them. Bracketing and Fixed-Point iterative methods are linearly converging algorithms which are dependent on various parameters defining convergence rates and precision values. Newton-Raphson method is a fastest method among these 4 approaches (quadratic error function), however it pays with precision limit. From observations, for high precision its better to use Fixed-Point algorithm which have similar to Bisection accuracy, but faster by more than twice. 
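\n\nFor an independent check of the result, the same root can be computed with SciPy's bracketing solver `brentq`; it should agree with the estimates obtained above:\n\n\n```python\nfrom scipy.optimize import brentq\n\n# Brent's method applied to f on the bracket [0, 5] used in the examples above\nbrentq(f, 0, 5)\n```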
\n\n\n```python\n\n```\n", "meta": {"hexsha": "84f73cf11841a99c2732540c9e0ae6e0b9cb6c25", "size": 156329, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "root.ipynb", "max_stars_repo_name": "BatyaGG/numerical_methods", "max_stars_repo_head_hexsha": "40036c07ed4db2fb03fe0d188feeb440aa260ce2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2018-06-23T12:19:55.000Z", "max_stars_repo_stars_event_max_datetime": "2018-06-23T12:19:55.000Z", "max_issues_repo_path": "root.ipynb", "max_issues_repo_name": "BatyaGG/numerical_methods", "max_issues_repo_head_hexsha": "40036c07ed4db2fb03fe0d188feeb440aa260ce2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "root.ipynb", "max_forks_repo_name": "BatyaGG/numerical_methods", "max_forks_repo_head_hexsha": "40036c07ed4db2fb03fe0d188feeb440aa260ce2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 337.6436285097, "max_line_length": 42012, "alphanum_fraction": 0.9307678038, "converted": true, "num_tokens": 2416, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9314624993576758, "lm_q2_score": 0.885631484383387, "lm_q1q2_score": 0.8249325159535981}} {"text": "# 2/ Definitions\n\n\n```python\n# helper code needed for running in colab\nif 'google.colab' in str(get_ipython()):\n print('Downloading plot_helpers.py to util/ (only neded for colab')\n !mkdir util; wget https://raw.githubusercontent.com/minireference/noBSLAnotebooks/master/util/plot_helpers.py -P util\n```\n\n\n```python\n# setup SymPy\nfrom sympy import *\nx, y, z, t = symbols('x y z t')\ninit_printing()\n\n# setup plotting\n%matplotlib inline\nimport matplotlib.pyplot as mpl\nfrom util.plot_helpers import plot_vec, plot_vecs, autoscale_arrows\n```\n\n## SymPy Matrix objects\n\n\n```python\nv = Matrix([1,2,3])\nv\n```\n\n\n```python\n# define symbolically\nv_1, v_2, v_3 = symbols('v_1 v_2 v_3')\nv = Matrix([v_1,v_2,v_3])\n```\n\n\n```python\nv\n```\n\n\n```python\nv.T\n```\n\n\n```python\nA = Matrix(\n [ [1,7],\n [2,8], \n [3,9] ])\nA\n```\n\n\n```python\n# define symbolically\na_11, a_12, a_21, a_22, a_31, a_32 = symbols('a_11 a_12 a_21 a_22 a_31 a_32')\nA = Matrix([\n [a_11, a_12],\n [a_21, a_22], \n [a_31, a_32]])\n```\n\n\n```python\nA\n```\n\n## Vector operations\n\n\n```python\nu_1, u_2, u_3 = symbols('u_1 u_2 u_3')\nu = Matrix([u_1,u_2,u_3])\nv_1, v_2, v_3 = symbols('v_1 v_2 v_3')\nv = Matrix([v_1,v_2,v_3])\nalpha = symbols('alpha')\n\nu\n```\n\n\n```python\nalpha*u\n```\n\n\n```python\nu+v\n```\n\n\n```python\nu.norm()\n```\n\n\n```python\nuhat = u/u.norm()\nuhat\n```\n\n\n```python\nu = Matrix([1.5,1])\nw = 2*u\nuhat = u/u.norm()\n\nfig = mpl.figure()\nplot_vecs(u, w, uhat)\nautoscale_arrows()\n```\n\n### Dot product\n\n\n```python\nu = Matrix([u_1,u_2,u_3])\nv = Matrix([v_1,v_2,v_3])\n\nu.dot(v)\n```\n\n\n```python\nfig = mpl.figure()\nu = Matrix([1,1])\nv = Matrix([3,0])\nplot_vecs(u,v)\nautoscale_arrows()\n\nu_dot_v = u.dot(v)\nu_dot_v\n```\n\n\n```python\nphi = acos( u.dot(v)/(u.norm()*v.norm()) )\nprint('angle between u and v is', phi)\nu.norm()*v.norm()*cos(phi)\n```\n\n### Cross product\n\n\n```python\nu = Matrix([u_1,u_2,u_3])\nv = Matrix([v_1,v_2,v_3])\n\nu.cross(v)\n```\n\n\n```python\nu = Matrix([1,0,0])\nv = 
Matrix([1,1,0])\nw = u.cross(v) # a vector perpendicular to both u and v\n\nmpl.figure()\nplot_vecs(u, v, u.cross(v))\n\n```\n\n\n```python\nprint('length of cross product', w.norm())\n\nphi = acos( u.dot(v)/(u.norm()*v.norm()) )\n\nw.norm() == u.norm()*v.norm()*sin(phi)\n\n```\n\n length of cross product 1\n\n\n\n\n\n True\n\n\n\n## Projection operation\n\n\n```python\ndef proj(vec, d):\n \"\"\"Computes the projection of vector `vec` onto vector `d`.\"\"\"\n return d.dot(vec)/d.norm() * d/d.norm()\n```\n\n\n```python\nfig = mpl.figure()\nu = Matrix([1,1])\nv = Matrix([3,0])\n\npu_on_v = proj(u,v)\n\nplot_vecs(u, v, pu_on_v)\n\n\n# autoscale_arrows()\nax = mpl.gca()\nax.set_xlim([-1,3])\nax.set_ylim([-1,3])\n\n\n```\n\n# Matrix operations\n\n\n```python\na_11, a_12, a_21, a_22, a_31, a_32 = symbols('a_11 a_12 a_21 a_22 a_31 a_32')\nA = Matrix([\n [a_11, a_12],\n [a_21, a_22], \n [a_31, a_32]])\nb_11, b_12, b_21, b_22, b_31, b_32 = symbols('b_11 b_12 b_21 b_22 b_31 b_32')\nB = Matrix([\n [b_11, b_12],\n [b_21, b_22], \n [b_31, b_32]])\nalpha = symbols('alpha')\n```\n\n\n```python\nA\n```\n\n\n```python\nA + B\n```\n\n\n```python\nalpha*A\n```\n\n\n```python\nv_1, v_2 = symbols('v_1 v_2')\nv = Matrix([v_1,v_2])\n\nA*v\n```\n\n\n```python\nA[:,0]*v[0] + A[:,1]*v[1]\n```\n\n\n```python\nA = Matrix([\n [a_11, a_12],\n [a_21, a_22], \n [a_31, a_32]])\nB = Matrix([\n [b_11, b_12],\n [b_21, b_22]])\n\nA*B\n```\n\n\n```python\nA.T\n```\n\n\n```python\nprint('the shape of v is', v.shape)\nv\n```\n\n\n```python\nprint('the shape of v.T is', v.T.shape)\nv.T\n```\n\n\n```python\nu = Matrix([u_1,u_2,u_3])\nv = Matrix([v_1,v_2,v_3])\n\nu.T*v\n```\n\n\n```python\nu * v.T\n```\n\n\n```python\nA = Matrix([\n [3, 3],\n [2, S(3)/2]\n])\nA\n```\n\n\n```python\nA.inv()\n```\n\n\n```python\nA * A.inv()\n```\n\n\n```python\nA.inv() * A\n```\n\n\n```python\nB = Matrix([\n [b_11, b_12],\n [b_21, b_22]])\nB\n```\n\n\n```python\nB.trace()\n```\n\n\n```python\nB.det()\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "e8df21af6cd8155f84e0364ce55f4be8af68bbfb", "size": 145103, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "chapter02_definitions.ipynb", "max_stars_repo_name": "minireference/noBSLAnotebooks", "max_stars_repo_head_hexsha": "3d6acb134266a5e304cb2d51c5ac4dc3eb3949b4", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 116, "max_stars_repo_stars_event_min_datetime": "2016-04-20T13:56:02.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-30T08:55:08.000Z", "max_issues_repo_path": "chapter02_definitions.ipynb", "max_issues_repo_name": "minireference/noBSLAnotebooks", "max_issues_repo_head_hexsha": "3d6acb134266a5e304cb2d51c5ac4dc3eb3949b4", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2021-07-01T17:00:38.000Z", "max_issues_repo_issues_event_max_datetime": "2021-07-01T19:34:09.000Z", "max_forks_repo_path": "chapter02_definitions.ipynb", "max_forks_repo_name": "minireference/noBSLAnotebooks", "max_forks_repo_head_hexsha": "3d6acb134266a5e304cb2d51c5ac4dc3eb3949b4", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 29, "max_forks_repo_forks_event_min_datetime": "2017-02-04T05:22:23.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-28T00:06:50.000Z", "avg_line_length": 116.0824, "max_line_length": 35284, "alphanum_fraction": 0.8685485483, "converted": true, "num_tokens": 1472, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9219218370002787, "lm_q2_score": 0.8947894668039496, "lm_q1q2_score": 0.8249259489643971}} {"text": "### Example 2: Nonlinear convection in 2D\n\nFollowing the initial convection tutorial with a single state variable $u$, we will now look at non-linear convection (step 6 in the original). This brings one new crucial challenge: computing a pair of coupled equations and thus updating two time-dependent variables $u$ and $v$.\n\nThe full set of coupled equations is now\n\n\\begin{aligned}\n\\frac{\\partial u}{\\partial t} + u \\frac{\\partial u}{\\partial x} + v \\frac{\\partial u}{\\partial y} = 0 \\\\\n\\\\\n\\frac{\\partial v}{\\partial t} + u \\frac{\\partial v}{\\partial x} + v \\frac{\\partial v}{\\partial y} = 0\\\\\n\\end{aligned}\n\nand rearranging the discretized version gives us an expression for the update of both variables\n\n\\begin{aligned}\nu_{i,j}^{n+1} &= u_{i,j}^n - u_{i,j}^n \\frac{\\Delta t}{\\Delta x} (u_{i,j}^n-u_{i-1,j}^n) - v_{i,j}^n \\frac{\\Delta t}{\\Delta y} (u_{i,j}^n-u_{i,j-1}^n) \\\\\n\\\\\nv_{i,j}^{n+1} &= v_{i,j}^n - u_{i,j}^n \\frac{\\Delta t}{\\Delta x} (v_{i,j}^n-v_{i-1,j}^n) - v_{i,j}^n \\frac{\\Delta t}{\\Delta y} (v_{i,j}^n-v_{i,j-1}^n)\n\\end{aligned}\n\nSo, for starters we will re-create the original example run in pure NumPy array notation, before demonstrating \nthe Devito version. Let's start again with some utilities and parameters:\n\n\n```python\nfrom examples.cfd import plot_field, init_hat\nimport numpy as np\nimport sympy\n%matplotlib inline\n\n# Some variable declarations\nnx = 101\nny = 101\nnt = 80\nc = 1.\ndx = 2. / (nx - 1)\ndy = 2. / (ny - 1)\nsigma = .2\ndt = sigma * dx\n```\n\nLet's re-create the initial setup with a 2D \"hat function\", but this time for two state variables.\n\n\n```python\n#NBVAL_IGNORE_OUTPUT\n\n# Allocate fields and assign initial conditions\nu = np.empty((nx, ny))\nv = np.empty((nx, ny))\n\ninit_hat(field=u, dx=dx, dy=dy, value=2.)\ninit_hat(field=v, dx=dx, dy=dy, value=2.)\n\nplot_field(u)\n```\n\nNow we can create the two stencil expression for our two coupled equations according to the discretized equation above. We again use some simple Dirichlet boundary conditions to keep the values on all sides constant.\n\n\n```python\n#NBVAL_IGNORE_OUTPUT\nfor n in range(nt + 1): ##loop across number of time steps\n un = u.copy()\n vn = v.copy()\n u[1:, 1:] = (un[1:, 1:] - \n (un[1:, 1:] * c * dt / dy * (un[1:, 1:] - un[1:, :-1])) -\n vn[1:, 1:] * c * dt / dx * (un[1:, 1:] - un[:-1, 1:]))\n v[1:, 1:] = (vn[1:, 1:] -\n (un[1:, 1:] * c * dt / dy * (vn[1:, 1:] - vn[1:, :-1])) -\n vn[1:, 1:] * c * dt / dx * (vn[1:, 1:] - vn[:-1, 1:]))\n \n u[0, :] = 1\n u[-1, :] = 1\n u[:, 0] = 1\n u[:, -1] = 1\n \n v[0, :] = 1\n v[-1, :] = 1\n v[:, 0] = 1\n v[:, -1] = 1\n \nplot_field(u)\n```\n\nExcellent, we again get a wave that resembles the one from the oiginal examples.\n\nNow we can set up our coupled problem in Devito. 
Let's start by creating two initial state variables $u$ and $v$, as before, and initialising them with our \"hat function.\n\n\n```python\n#NBVAL_IGNORE_OUTPUT\nfrom devito import Grid, TimeFunction\n\n# First we need two time-dependent data fields, both initialized with the hat function\ngrid = Grid(shape=(nx, ny), extent=(2., 2.))\nu = TimeFunction(name='u', grid=grid)\ninit_hat(field=u.data[0], dx=dx, dy=dy, value=2.)\n\nv = TimeFunction(name='v', grid=grid)\ninit_hat(field=v.data[0], dx=dx, dy=dy, value=2.)\n\nplot_field(u.data[0])\n```\n\nUsing the two `TimeFunction` objects we can again derive our discretized equation, rearrange for the forward stencil point in time and define our variable update expression - only we have to do everything twice now! We again use forward differences for time via `u.dt` and backward differences in space via `u.dxl` and `u.dyl` to match the original tutorial.\n\n\n```python\nfrom devito import Eq, solve\n\neq_u = Eq(u.dt + u*u.dxl + v*u.dyl)\neq_v = Eq(v.dt + u*v.dxl + v*v.dyl)\n\n# We can use the same SymPy trick to generate two\n# stencil expressions, one for each field update.\nstencil_u = solve(eq_u, u.forward)\nstencil_v = solve(eq_v, v.forward)\nupdate_u = Eq(u.forward, stencil_u, subdomain=grid.interior)\nupdate_v = Eq(v.forward, stencil_v, subdomain=grid.interior)\n\nprint(\"U update:\\n%s\\n\" % update_u)\nprint(\"V update:\\n%s\\n\" % update_v)\n```\n\n U update:\n Eq(u(t + dt, x, y), dt*(-u(t, x, y)*Derivative(u(t, x, y), x) - v(t, x, y)*Derivative(u(t, x, y), y) + u(t, x, y)/dt))\n \n V update:\n Eq(v(t + dt, x, y), dt*(-u(t, x, y)*Derivative(v(t, x, y), x) - v(t, x, y)*Derivative(v(t, x, y), y) + v(t, x, y)/dt))\n \n\n\nWe then set Dirichlet boundary conditions at all sides of the domain to $1$.\n\n\n```python\nx, y = grid.dimensions\nt = grid.stepping_dim\nbc_u = [Eq(u[t+1, 0, y], 1.)] # left\nbc_u += [Eq(u[t+1, nx-1, y], 1.)] # right\nbc_u += [Eq(u[t+1, x, ny-1], 1.)] # top\nbc_u += [Eq(u[t+1, x, 0], 1.)] # bottom\nbc_v = [Eq(v[t+1, 0, y], 1.)] # left\nbc_v += [Eq(v[t+1, nx-1, y], 1.)] # right\nbc_v += [Eq(v[t+1, x, ny-1], 1.)] # top\nbc_v += [Eq(v[t+1, x, 0], 1.)] # bottom\n```\n\nAnd finally we can put it all together to build an operator and solve our coupled problem.\n\n\n```python\n#NBVAL_IGNORE_OUTPUT\nfrom devito import Operator\n\n# Reset our data field and ICs\ninit_hat(field=u.data[0], dx=dx, dy=dy, value=2.)\ninit_hat(field=v.data[0], dx=dx, dy=dy, value=2.)\n\nop = Operator([update_u, update_v] + bc_u + bc_v)\nop(time=nt, dt=dt)\n\nplot_field(u.data[0])\n```\n\nExcellent, we have now a scalar implementation of a convection problem, but this can be written as a single vectorial equation:\n\n$\\frac{d U}{dt} + \\nabla(U)U = 0$\n\nLet's now use devito vectorial utilities and implement the vectorial equation\n\n\n```python\nfrom devito import VectorTimeFunction, grad\n\nU = VectorTimeFunction(name='U', grid=grid)\ninit_hat(field=U[0].data[0], dx=dx, dy=dy, value=2.)\ninit_hat(field=U[1].data[0], dx=dx, dy=dy, value=2.)\n\nplot_field(U[1].data[0])\n\neq_u = Eq(U.dt + grad(U)*U)\n```\n\nWe now have a vectorial equation. 
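The components of `U` are staggered by half a grid spacing (note the $x + \\frac{h_x}{2}$ and $y + \\frac{h_y}{2}$ arguments in the expression printed below). 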
Unlike in the previous case, we do not need to play with left/right derivatives\nas the automated staggering of the vectorial function takes care of this.\n\n\n```python\neq_u\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}\\operatorname{U_{x}}{\\left(t,x + \\frac{h_{x}}{2},y \\right)} \\frac{\\partial}{\\partial x} \\operatorname{U_{x}}{\\left(t,x + \\frac{h_{x}}{2},y \\right)} + \\operatorname{U_{y}}{\\left(t,x,y + \\frac{h_{y}}{2} \\right)} \\frac{\\partial}{\\partial y} \\operatorname{U_{x}}{\\left(t,x + \\frac{h_{x}}{2},y \\right)} + \\frac{\\partial}{\\partial t} \\operatorname{U_{x}}{\\left(t,x + \\frac{h_{x}}{2},y \\right)}\\\\\\operatorname{U_{x}}{\\left(t,x + \\frac{h_{x}}{2},y \\right)} \\frac{\\partial}{\\partial x} \\operatorname{U_{y}}{\\left(t,x,y + \\frac{h_{y}}{2} \\right)} + \\operatorname{U_{y}}{\\left(t,x,y + \\frac{h_{y}}{2} \\right)} \\frac{\\partial}{\\partial y} \\operatorname{U_{y}}{\\left(t,x,y + \\frac{h_{y}}{2} \\right)} + \\frac{\\partial}{\\partial t} \\operatorname{U_{y}}{\\left(t,x,y + \\frac{h_{y}}{2} \\right)}\\end{matrix}\\right] = 0$\n\n\n\nThen we set the nboundary conditions\n\n\n```python\nx, y = grid.dimensions\nt = grid.stepping_dim\nbc_u = [Eq(U[0][t+1, 0, y], 1.)] # left\nbc_u += [Eq(U[0][t+1, nx-1, y], 1.)] # right\nbc_u += [Eq(U[0][t+1, x, ny-1], 1.)] # top\nbc_u += [Eq(U[0][t+1, x, 0], 1.)] # bottom\nbc_v = [Eq(U[1][t+1, 0, y], 1.)] # left\nbc_v += [Eq(U[1][t+1, nx-1, y], 1.)] # right\nbc_v += [Eq(U[1][t+1, x, ny-1], 1.)] # top\nbc_v += [Eq(U[1][t+1, x, 0], 1.)] # bottom\n```\n\n\n```python\n# We can use the same SymPy trick to generate two\n# stencil expressions, one for each field update.\nstencil_U = solve(eq_u, U.forward)\nupdate_U = Eq(U.forward, stencil_U, subdomain=grid.interior)\n```\n\nAnd we have the updated (stencil) as a vectorial equation once again\n\n\n```python\nupdate_U\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}\\operatorname{U_{x}}{\\left(t + dt,x + \\frac{h_{x}}{2},y \\right)}\\\\\\operatorname{U_{y}}{\\left(t + dt,x,y + \\frac{h_{y}}{2} \\right)}\\end{matrix}\\right] = \\left[\\begin{matrix}dt \\left(- \\operatorname{U_{x}}{\\left(t,x + \\frac{h_{x}}{2},y \\right)} \\frac{\\partial}{\\partial x} \\operatorname{U_{x}}{\\left(t,x + \\frac{h_{x}}{2},y \\right)} - \\operatorname{U_{y}}{\\left(t,x,y + \\frac{h_{y}}{2} \\right)} \\frac{\\partial}{\\partial y} \\operatorname{U_{x}}{\\left(t,x + \\frac{h_{x}}{2},y \\right)} + \\frac{\\operatorname{U_{x}}{\\left(t,x + \\frac{h_{x}}{2},y \\right)}}{dt}\\right)\\\\dt \\left(- \\operatorname{U_{x}}{\\left(t,x + \\frac{h_{x}}{2},y \\right)} \\frac{\\partial}{\\partial x} \\operatorname{U_{y}}{\\left(t,x,y + \\frac{h_{y}}{2} \\right)} - \\operatorname{U_{y}}{\\left(t,x,y + \\frac{h_{y}}{2} \\right)} \\frac{\\partial}{\\partial y} \\operatorname{U_{y}}{\\left(t,x,y + \\frac{h_{y}}{2} \\right)} + \\frac{\\operatorname{U_{y}}{\\left(t,x,y + \\frac{h_{y}}{2} \\right)}}{dt}\\right)\\end{matrix}\\right]$\n\n\n\nWe finally run the operator\n\n\n```python\n#NBVAL_IGNORE_OUTPUT\nop = Operator([update_U] + bc_u + bc_v)\nop(time=nt, dt=dt)\n\n# The result is indeed the expected one.\nplot_field(U[0].data[0])\n```\n\n\n```python\nfrom devito import norm\nassert np.isclose(norm(u), norm(U[0]), rtol=1e-5, atol=0)\n```\n", "meta": {"hexsha": "b11af214f80f33f1482207596e9d5c51e0a629a4", "size": 558510, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "original examples/cfd/02_convection_nonlinear.ipynb", "max_stars_repo_name": "ofmla/Devito-playbox", 
"max_stars_repo_head_hexsha": "f3547c7c1bfd82cc32b51179c178f685ecf12e84", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-02-08T20:09:00.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-08T20:09:00.000Z", "max_issues_repo_path": "original examples/cfd/02_convection_nonlinear.ipynb", "max_issues_repo_name": "ofmla/Devito-playbox", "max_issues_repo_head_hexsha": "f3547c7c1bfd82cc32b51179c178f685ecf12e84", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "original examples/cfd/02_convection_nonlinear.ipynb", "max_forks_repo_name": "ofmla/Devito-playbox", "max_forks_repo_head_hexsha": "f3547c7c1bfd82cc32b51179c178f685ecf12e84", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 1041.9962686567, "max_line_length": 96888, "alphanum_fraction": 0.9569676461, "converted": true, "num_tokens": 3218, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9219218370002787, "lm_q2_score": 0.8947894618940992, "lm_q1q2_score": 0.8249259444378989}} {"text": "\n\n**Estimation Using Maximum Likelihood (MLE)** is a method of estimating the parameters of an assumed probability distribution, given some observed data. This is achieved by maximizing a likelihood function so that, under the assumed statistical model, the observed data is most probable. The point in the parameter space that maximizes the likelihood function is called the maximum likelihood estimate. The logic of maximum likelihood is both intuitive and flexible, and as such the method has become a dominant means of statistical inference. 
\nhttps://en.wikipedia.org/wiki/Maximum_likelihood_estimation\n\n\n```python\nimport numpy as np\nimport scipy.stats\nimport sympy\nimport math\n```\n\n\n```python\n# sample count\nN = 1000\nx = sympy.symbols('x', positive= True)\n```\n\n\n```python\ndef parameterEstimate(p_dist, df, param_true,v):\n print(\"True parameter value:\", param_true)\n # Experiment and samples\n samples = p_dist.rvs(N)\n #likelihood function \n L = np.product([df.subs(x,sample) for sample in samples])\n print(\"Likelihood Function:\", L)\n # solve for Likelihood \n #Note: log of the likelihood fincton makes the maximization problem tractable.\n\n Log_L = sympy.expand_log(sympy.log(L))\n param_est, = sympy.solve(sympy.diff(Log_L, v),v)\n\n print(\"Estimated parameter value:\", float(param_est))\n```\n\n\n```python\nprint(\"Bernoulli Distribution\")\np_true = 0.7\np_dist = scipy.stats.bernoulli(p_true)\n# variables using sympy\np = sympy.symbols('p', positive= True)\n#probability function\ndf = p**x * (1-p)**(1-x)\nparameterEstimate(p_dist, df, p_true,p)\n```\n\n Bernoulli Distribution\n True parameter value: 0.7\n Likelihood Function: p**741*(1 - p)**259\n Estimated parameter value: 0.741\n\n\n\n```python\nprint(\"Exponential Distribution\")\np_dist = scipy.stats.expon()\np_true = p_dist.mean()\n# variables using sympy\nt = sympy.symbols('t', positive= True)\n\n#probability function\ndf = (1/t) * math.e ** (-x/t)\n\nparameterEstimate(p_dist, df, p_true,t)\n\n```\n\n Exponential Distribution\n True parameter value: 1.0\n Likelihood Function: 2.71828182845905**(-1022.71172198597/t)/t**1000\n Estimated parameter value: 1.02271172198597\n\n\n\n```python\nprint(\"Normal Distribution\")\nmu_true = 2\nsigma_true=0\np_dist = scipy.stats.norm(mu_true,sigma_true)\n# variables using sympy\nmu, sigma = sympy.symbols('mu, sigma', positive= True)\n\n#probability function\ndf = ((2 * math.pi* sigma*sigma) ** (-0.5)) * (math.e ** (-0.5 * (x-mu)* (x-mu)/(sigma * sigma)))\nparameterEstimate(p_dist, df, mu_true,mu)\n\n```\n\n Normal Distribution\n True parameter value: 2\n Likelihood Function: 8.12953716728196e-400*2.71828182845905**(1000*(2.0 - mu)*(0.5*mu - 1.0)/sigma**2)*sigma**(-1000.0)\n Estimated parameter value: 2.0\n\n\n\n```python\nprint(\"Poisson Distribution\")\nk_true= 2\np_dist = scipy.stats.poisson(k_true)\nk = sympy.symbols('k', positive= True)\ndf = k**x*sympy.exp(-k)/sympy.factorial(x)\nparameterEstimate(p_dist, df, k_true,k)\n\n```\n\n Poisson Distribution\n True parameter value: 2\n Likelihood Function: k**1997*exp(-1000*k)/142220199860881056663191396650367732478138060843772563671397683466569187541722759507915845631107969632545273741424688952787500633027625617630113037289315761090010517971364415451864150528618267221053183525752106609291883404681346679086957134357390537918919070304662927052260585891725831988996973817758666931146942020746209214900436818491150673157476389615461350353676686715592963879195841207050750154875803762161535927975936000000000000000000000000000000000000000000000\n Estimated parameter value: 1.997\n\n", "meta": {"hexsha": "bb013d7d288d0e6d9d54a28bd6e42f1d4ebb4634", "size": 7544, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Statistics/MaximumLikelihoodEstimation.ipynb", "max_stars_repo_name": "bkgsur/FinanceModelingComputationWithPython", "max_stars_repo_head_hexsha": "e526d53fc06776b91202c87005ace3e4f946e80c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, 
"max_issues_repo_path": "Statistics/MaximumLikelihoodEstimation.ipynb", "max_issues_repo_name": "bkgsur/FinanceModelingComputationWithPython", "max_issues_repo_head_hexsha": "e526d53fc06776b91202c87005ace3e4f946e80c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Statistics/MaximumLikelihoodEstimation.ipynb", "max_forks_repo_name": "bkgsur/FinanceModelingComputationWithPython", "max_forks_repo_head_hexsha": "e526d53fc06776b91202c87005ace3e4f946e80c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.3805309735, "max_line_length": 557, "alphanum_fraction": 0.5379109226, "converted": true, "num_tokens": 1064, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9473810421953309, "lm_q2_score": 0.870597273444551, "lm_q1q2_score": 0.8247873522483122}} {"text": "## Chemical kinetics\nIn chemistry one is often interested in how fast a chemical process proceeds. Chemical reactions (when viewed as single events on a molecular scale) are probabilitic. However, most reactive systems of interest involve very large numbers of molecules (a few grams of a simple substance containts on the order of $10^{23}$ molecules. The sheer number allows us to describe this inherently stochastic process deterministically.\n\n### Law of mass action\nIn order to describe chemical reactions as as system of ODEs in terms of concentrations ($c_i$) and time ($t$), one can use the [law of mass action](https://en.wikipedia.org/wiki/Law_of_mass_action):\n\n$$\n\\frac{dc_i}{dt} = \\sum_j S_{ij} r_j\n$$\nwhere $r_j$ is given by:\n$$\nr_j = k_j\\prod_l c_l^{R_{jl}}\n$$\n\nand $S$ is a matrix with the overall net stoichiometric coefficients (positive for net production, negative for net consumption), and $R$ is a matrix with the multiplicities of each reactant for each equation.\n\n### Example: Nitrosylbromide\nWe will now look at the following (bi-directional) chemical reaction:\n\n$$\n\\mathrm{2\\,NO + Br_2 \\leftrightarrow 2\\,NOBr}\n$$\n\nwhich describes the equilibrium between nitrogen monoxide (NO) and bromine (Br$_2$) and nitrosyl bromide (NOBr). It can be represented as a set of two uni-directional reactions (**f**orward and **b**ackward):\n\n$$\n\\mathrm{2\\,NO + Br_2 \\overset{k_f}{\\rightarrow} 2\\,NOBr} \\\\ \n\\mathrm{2\\,NOBr \\overset{k_b}{\\rightarrow} 2\\,NO + Br_2}\n$$\n\nThe law of mass action tells us that the rate of the first process (forward) is proportional to the concentration Br$_2$ and the square of the concentration of NO. The rate of the second reaction (the backward process) is in analogy proportional to the square of the concentration of NOBr. 
Using the proportionality constants $k_f$ and $k_b$ we can formulate our system of nonlinear ordinary differential equations as follows:\n\n$$\n\\frac{dc_1}{dt} = 2(k_b c_3^2 - k_f c_2 c_1^2) \\\\\n\\frac{dc_2}{dt} = k_b c_3^2 - k_f c_2 c_1^2 \\\\\n\\frac{dc_3}{dt} = 2(k_f c_2 c_1^2 - k_b c_3^2)\n$$\n\nwhere we have denoted the concentration of NO, Br$_2$, NOBr with $c_1,\\ c_2,\\ c_3$ respectively.\n\nThis ODE system corresponds to the following two matrices:\n\n$$\nS = \\begin{bmatrix}\n-2 & 2 \\\\\n-1 & 1 \\\\\n2 & -2\n\\end{bmatrix}\n$$\n\n$$\nR = \\begin{bmatrix}\n2 & 1 & 0 \\\\\n0 & 0 & 2 \n\\end{bmatrix}\n$$\n\n\n### Solving the initial value problem numerically\nWe will now integrate this system of ordinary differential equations numerically as an initial value problem (IVP) using the ``odeint`` solver provided by ``scipy``:\n\n\n```python\nimport numpy as np\nfrom scipy.integrate import odeint\n```\n\nBy looking at the [documentation](https://docs.scipy.org/doc/scipy-0.19.0/reference/generated/scipy.integrate.odeint.html) of odeint we see that we need to provide a function which computes a vector of derivatives ($\\dot{\\mathbf{y}} = [\\frac{dy_1}{dt}, \\frac{dy_2}{dt}, \\frac{dy_3}{dt}]$). The expected signature of this function is:\n\n f(y: array[float64], t: float64, *args: arbitrary constants) -> dydt: array[float64]\n \nin our case we can write it as:\n\n\n```python\ndef rhs(y, t, kf, kb):\n rf = kf * y[0]**2 * y[1]\n rb = kb * y[2]**2\n return [2*(rb - rf), rb - rf, 2*(rf - rb)]\n```\n\n\n```python\n%load_ext scipy2017codegen.exercise\n```\n\nReplace **???** by the proper arguments for ``odeint``, you can write ``odeint?`` to read its documentaiton.\n\n\n```python\n%exercise exercise_odeint.py\n```\n\n\n```python\nimport matplotlib.pyplot as plt\n%matplotlib inline\n```\n\n\n```python\nplt.plot(tout, yout)\n_ = plt.legend(['NO', 'Br$_2$', 'NOBr'])\n```\n\nWriting the ``rhs`` function by hand for larger reaction systems quickly becomes tedious. Ideally we would like to construct it from a symbolic representation (having a symbolic representation of the problem opens up many possibilities as we will soon see). But at the same time, we need the ``rhs`` function to be fast. Which means that we want to produce a fast function from our symbolic representation. Generating a function from our symbolic representation is achieved through *code generation*. \n\nIn summary we will need to:\n\n1. Construct a symbolic representation from some domain specific representation using SymPy.\n2. Have SymPy generate a function with an appropriate signature (or multiple thereof), which we pass on to the solver.\n\nWe will achieve (1) by using SymPy symbols (and functions if needed). For (2) we will use a function in SymPy called ``lambdify``\u2015it takes a symbolic expressions and returns a function. In a later notebook, we will look at (1), for now we will just use ``rhs`` which we've already written:\n\n\n```python\nimport sympy as sym\nsym.init_printing()\n```\n\n\n```python\ny, k = sym.symbols('y:3'), sym.symbols('kf kb')\nydot = rhs(y, None, *k)\nydot\n```\n\n## Exercise\nNow assume that we had constructed ``ydot`` above by applying the more general law of mass action, instead of hard-coding the rate expressions in ``rhs``. 
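A sketch of how such a symbolic ``ydot`` could be assembled directly from the matrices $S$ and $R$ given above is shown here (this is only an illustration of the law of mass action in SymPy, not the solution to the exercise, and the name ``ydot_sym`` is arbitrary):\n\n\n```python\nimport sympy as sym\n\ny = sym.symbols('y:3')         # concentrations of NO, Br2, NOBr\nkf, kb = sym.symbols('kf kb')  # forward and backward rate constants\n\nS = sym.Matrix([[-2,  2],\n                [-1,  1],\n                [ 2, -2]])     # net stoichiometric coefficients\nR = sym.Matrix([[2, 1, 0],\n                [0, 0, 2]])    # reactant multiplicities\n\n# law of mass action: r_j = k_j * prod_l c_l**R[j, l]\nrates = [k * sym.Mul(*[y[l]**R[j, l] for l in range(3)])\n         for j, k in enumerate([kf, kb])]\n\n# dc_i/dt = sum_j S[i, j] * r_j\nydot_sym = [sum(S[i, j]*rates[j] for j in range(2)) for i in range(3)]\nydot_sym\n```\n\n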
Then we could have created a function corresponding to ``rhs`` using ``lambdify``:\n\n\n```python\n%exercise exercise_lambdify.py\n```\n\n\n```python\nplt.plot(tout, odeint(f, y0, tout, k_vals))\n_ = plt.legend(['NO', 'Br$_2$', 'NOBr'])\n```\n\nIn this example the gains of using a symbolic representation are arguably limited. However, it is quite common that the numerical solver will need another function which calculates the [Jacobian](https://en.wikipedia.org/wiki/Jacobian_matrix_and_determinant) of $\\dot{\\mathbf{y}}$ (given as Dfun in the case of ``odeint``). Writing that by hand is both tedious and error prone. But SymPy solves both of those issues:\n\n\n```python\nsym.Matrix(ydot).jacobian(y)\n```\n\nIn the next notebook we will look at an example where providing this as a function is beneficial for performance.\n", "meta": {"hexsha": "204640516307ebe1d12834f4300a307f07821254", "size": 8770, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/25-chemical-kinetics-intro.ipynb", "max_stars_repo_name": "gvvynplaine/scipy-2017-codegen-tutorial", "max_stars_repo_head_hexsha": "4bd0cdb1bdbdc796bb90c08114a00e390b3d3026", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 56, "max_stars_repo_stars_event_min_datetime": "2017-05-31T21:01:22.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-14T04:26:01.000Z", "max_issues_repo_path": "notebooks/25-chemical-kinetics-intro.ipynb", "max_issues_repo_name": "gvvynplaine/scipy-2017-codegen-tutorial", "max_issues_repo_head_hexsha": "4bd0cdb1bdbdc796bb90c08114a00e390b3d3026", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 26, "max_issues_repo_issues_event_min_datetime": "2017-06-06T19:05:04.000Z", "max_issues_repo_issues_event_max_datetime": "2018-08-29T23:15:19.000Z", "max_forks_repo_path": "notebooks/25-chemical-kinetics-intro.ipynb", "max_forks_repo_name": "gvvynplaine/scipy-2017-codegen-tutorial", "max_forks_repo_head_hexsha": "4bd0cdb1bdbdc796bb90c08114a00e390b3d3026", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 29, "max_forks_repo_forks_event_min_datetime": "2017-06-06T14:45:12.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-25T14:11:06.000Z", "avg_line_length": 33.861003861, "max_line_length": 510, "alphanum_fraction": 0.6002280502, "converted": true, "num_tokens": 1559, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8991213718636754, "lm_q2_score": 0.9173026528034426, "lm_q1q2_score": 0.8247664196028199}} {"text": "# Linear Regression of a Nonlinear Model \n## CH EN 2450 - Numerical Methods\n**Prof. Tony Saad (www.tsaad.net)
    Department of Chemical Engineering
    University of Utah**\n\n## Synopsis\nIn this lesson, you will exercise your knowledge on how to perform linear least-squares regression on a nonlinear exponential model.\n\n## Problem Statement\n\nChemical reacting systems consist of different substances reacting with each other. These reactants have different reaction rates. A reaction rate is the speed at which reactants are converted into products. A popular model for the reaction rate is known as the Arrhenius reaction rate model given by\n\\begin{equation}\nk(T) = A e^{-\\frac{E_\\text{a}}{RT}}\n\\label{eq:arrhenius}\n\\end{equation}\nwhere $k$ is known as the reaction rate constant, $T$ is the absolute temperature (in Kelvin), $A$ is the pre-exponential factor and is a measure of the frequency of molecular collisions, $E_\\text{a}$ is the activation energy or the energy required to get the reaction started, and $R$ is the universal gas constant.\n\nWe can typically measure $k$ for a given temperature and chemical reaction. Our goal is to find $A$ and $E_\\text{a}$ given those data.\n\n|T (K)| k (1/s) |\n|:-------:|:-------:|\n|313|0.00043|\n|319|0.00103|\n|323|0.00180|\n|328|0.00355|\n|333|0.00717|\n\nFirst, let's get some boiler plate out of the way\n\n\n```python\n%matplotlib inline\n%config InlineBackend.figure_format = 'svg'\nimport matplotlib.pyplot as plt\nimport numpy as np\n```\n\nNow input the data from the table into numpy arrays\n\n\n```python\nR=8.314\nT=np.array([313, 319, 323, 328, 333])\nk=np.array([0.00043, 0.00103, 0.00180, 0.00355, 0.00717])\n```\n\nAs usual, you should generate a plot to get an idea of how the data looks like\n\n\n```python\nplt.plot(T,k,'o')\nplt.xlabel('Temperature (K)')\nplt.ylabel('k (1/s)')\nplt.title('Arrhenius reaction rate')\nplt.grid()\n```\n\n\n \n\n \n\n\nThe data do indeed look like they follow an exponential trend. We will use linear regression now to fit this model. We will first take the logarithm of the Arrhenius equation\n\\begin{equation}\n\\ln k = \\ln A + - \\frac{1}{RT} E_\\text{a}\n\\end{equation}\nwhere we will choose our new y-values as $y_\\text{new} = \\ln (k)$ and the new x-values as $x_\\text{new} = - \\frac{1}{RT}$. We will also call $a_0 = \\ln{A}$ and $a_1=E_\\text{a}$ so that our new model fit is\n\\begin{equation}\ny_\\text{new} = a_0 + a_1 x_\\text{new}\n\\end{equation}\n\nTo program this, we will first create arrays for $y_\\text{new}$ and $x_\\text{new}$\n\n\n```python\nynew = np.log(k)\nxnew = -1.0/R/T\n```\n\nLet's plot the transformed data to see how they look like\n\n\n```python\nplt.plot(xnew,ynew,'o')\nplt.xlabel('-1/RT')\nplt.ylabel('log(k)')\nplt.title('Arrhenius reaction rate')\nplt.grid()\n```\n\n\n \n\n \n\n\nAs expected, the transformed data do follow a linear trend. Now we can apply the usual linear least-squares regression to these transformed data.\n\nLet's use the normal equations. In this case, we have\n\n\n```python\nN = len(ynew)\nA = np.array([np.ones(N), xnew]).T\nATA = A.T @ A\ncoefs = np.linalg.solve(ATA, A.T @ ynew)\nprint(coefs)\n```\n\n [3.89245804e+01 1.21481509e+05]\n\n\nThe variable `coefs` contains the coefficients of the straight line fit. 
We can untangle that using `poly1d`\n\n\n```python\np = np.poly1d(np.flip(coefs,0)) # must reverse array for poly1d to work\n```\n\nFinally, we can plot the fit over the transformed data\n\n\n```python\nplt.plot(xnew, p(xnew), label='best fit')\nplt.plot(xnew, ynew, 'o', label='original data')\nplt.xlabel('-1/RT')\nplt.ylabel('log(k)')\nplt.title('Arrhenius reaction rate')\nplt.grid()\n```\n\n\n \n\n \n\n\nwe can also calculate the $R^2$ value. Recall that the $R^2$ value can be computed as:\n\\begin{equation}R^2 = 1 - \\frac{\\sum{(y_i - f_i)^2}}{\\sum{(y_i - \\bar{y})^2}}\\end{equation}\nwhere\n\\begin{equation}\n\\bar{y} = \\frac{1}{N}\\sum{y_i}\n\\end{equation}\n\n\n```python\nybar = np.average(ynew)\nfnew = p(xnew)\nrsq = 1 - np.sum( (ynew - fnew)**2 )/np.sum( (ynew - ybar)**2 )\nprint(rsq)\n```\n\n 0.9998570770329075\n\n\nThis is an unbelievable fit! But it is correct since the data was actually setup to produce such a high R2 value.\n\nNow we need to compute the original coefficients of the model, $A$ and $E_\\text{a}$. This can be done by inverting the original transfomration\n\\begin{equation}\nA = e^{a_0}\n\\end{equation}\nand\n\\begin{equation}\nE_\\text{a} = a_1\n\\end{equation}\n\n\n```python\na0 = coefs[0]\na1 = coefs[1]\nA = np.exp(a0)\nEa = a1\n```\n\n\n```python\n# construct a function for the regressed line\ny_fit = lambda x: A * np.exp(-Ea/R/x)\n\n# To produce a nice smooth plot for the fit, generate points between Tmin and Tmax and use those as the x data\nTpoints = np.linspace(min(T), max(T))\nplt.plot(Tpoints, y_fit(Tpoints), label='Best fit' )\n\n# now plot the original input data\nplt.plot(T, k, 'o', label='Original data')\n\nplt.xlabel('Temperature (K)')\nplt.ylabel('Reaction rate, k (1/s)')\nplt.title('Arrhenius reaction rate')\nplt.legend()\nplt.grid()\n```\n\n\n \n\n \n\n\n\n```python\nfrom IPython.core.display import HTML\ndef css_styling():\n styles = open(\"../../styles/custom.css\", \"r\").read()\n return HTML(styles)\ncss_styling()\n```\n\n\n\n\nCSS style adapted from https://github.com/barbagroup/CFDPython. 
Copyright (c) Barba group\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "fa73bea1a96b02e0e8a5df859559574596389959", "size": 213440, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "topics/regression/kinetics example.ipynb", "max_stars_repo_name": "jomorodi/NumericalMethods", "max_stars_repo_head_hexsha": "e040693001941079b2e0acc12e0c3ee5c917671c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2019-03-27T05:22:34.000Z", "max_stars_repo_stars_event_max_datetime": "2021-01-27T10:49:13.000Z", "max_issues_repo_path": "topics/regression/kinetics example.ipynb", "max_issues_repo_name": "jomorodi/NumericalMethods", "max_issues_repo_head_hexsha": "e040693001941079b2e0acc12e0c3ee5c917671c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "topics/regression/kinetics example.ipynb", "max_forks_repo_name": "jomorodi/NumericalMethods", "max_forks_repo_head_hexsha": "e040693001941079b2e0acc12e0c3ee5c917671c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 7, "max_forks_repo_forks_event_min_datetime": "2019-12-29T23:31:56.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-28T19:04:10.000Z", "avg_line_length": 43.329273244, "max_line_length": 326, "alphanum_fraction": 0.4834941904, "converted": true, "num_tokens": 2254, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9399133548753619, "lm_q2_score": 0.877476785879798, "lm_q1q2_score": 0.8247521496415305}} {"text": "[ADS entry](http://adsabs.harvard.edu/abs/1985Natur.317...44C)\n\n\n```python\nimport sympy\nsympy.init_printing()\n```\n\nEquation 1\n\n\n```python\nrho = sympy.Symbol('rho') # Density\nu = sympy.Symbol('u') # Velocity\nr = sympy.Symbol('r', positive=True) # Radius\nq = sympy.Symbol('q') # Mass injection rate\neqn_1 = sympy.Eq(sympy.Derivative(rho*u*r**2,r)/r**2,q)\neqn_1\n```\n\nEquation 2\n\n\n```python\np = sympy.Symbol('p') # Pressure\neqn_2 = sympy.Eq(rho*u*sympy.Derivative(u,r),-sympy.Derivative(p,r)-q*u)\neqn_2\n```\n\nEquation 3\n\n\n```python\nQ = sympy.Symbol('Q', positive=True) # Energy injection rate\ngamma = sympy.Symbol('gamma', positive=True) # Adiabatic index\neqn_3 = sympy.Eq(sympy.Derivative(rho*u*r**2*(u**2/2+gamma*p/rho/(gamma+1)),r)/r**2,Q)\neqn_3\n```\n\nMass conservation can be integrated\n\n\n```python\n_ = q*r**2\n_ = sympy.integrate(_,r)\nint_mass_cons = sympy.Eq(rho*u,_/r**2)\nint_mass_cons\n```\n\nEnergy conservation can be integrated\n\n\n```python\n_ = Q*r**2\n_ = sympy.integrate(_,r)\n_ = _/r**2\n_ = _/ int_mass_cons.rhs\nint_energy_cons = sympy.Eq(gamma*p/(gamma-1)/rho+u**2/2,_)\nint_energy_cons\n```\n\nSolution without injection\n\n\n```python\nm = sympy.Symbol('m', positive=True) # Mach number\nc = sympy.Symbol('c') # Speed of sound\ndot_M = sympy.Symbol(r'\\dot{M}', positive=True)\nB = sympy.Symbol('B',positive=True)\nS = sympy.Symbol('S', positive=True)\nxi = sympy.Symbol('xi', positive=True)\n_ = [sympy.Eq(rho*u*r**2,dot_M),\n sympy.Eq(u**2/2+gamma*p/rho/(gamma-1),B),\n sympy.Eq(p/rho**gamma,S)]\n_ = [itm.subs(u,m*c) for itm in _]\n_ = [itm.subs(p,rho*c**2/gamma) for itm in _]\n_ = [itm.subs(sympy.solve(_[0],rho,dict=True)[0]) for itm in _]\n_ = [itm.subs(sympy.solve(_[1],c,dict=True)[1]) for itm in _]\n_ = _[2]\n_ = sympy.expand_power_base(_,force=True)\n_ = _.simplify()\n_ = 
_.subs(m**2,xi**2/(gamma-1)).simplify().subs(xi,m*sympy.sqrt(gamma-1)).simplify()\n_ = _.subs(gamma,xi+1).simplify().subs(xi,gamma-1).simplify()\noutside_solution = _\noutside_solution\n```\n\n\n```python\nz = sympy.Symbol('z') # Speed of sound squared\nn = sympy.Symbol('n', positive=True) # 1/m**2\nxi = sympy.Symbol('xi', positive=True) # Auxiliary variable\n_ = [int_mass_cons, int_energy_cons, eqn_2]\n_ = [itm.subs(p,rho*z/gamma) for itm in _]\n_ = [itm.subs(u,m*sympy.sqrt(z)) for itm in _]\n_ = [itm.subs(sympy.solve(_[1],z,dict=True)[0]) for itm in _]\n_ = [itm.subs(sympy.solve(_[0],rho,dict=True)[0]) for itm in _]\n_ = _[2]\n_ = _.subs(m,1/sympy.sqrt(n(r)))\n_ = _.doit()\n_ = sympy.solve(_,n(r).diff(r))[0]\n_ = _.subs(n(r),n)\n_ = 1/_/r\n_ = sympy.integrate(_,n)\n_ = sympy.logcombine(_,force=True)\n_ = sympy.exp(_)\n_ = _.factor(n)\n_ = _.subs(n,1/m**2)\ninside_solution = _\ninside_solution\n```\n\nBoth cases ($rR$) have singularities at $m = 1$. Hence, the flow in each region is either supersonic or subsonic, and therefore the sonic point must be at $r = R$.\n\nequation 4\n\n\n```python\nR = sympy.Symbol('R') # Radius of the wind emitting region\n_ = inside_solution/inside_solution.subs(m,1)\neqn_4 = sympy.Eq(r/R,sympy.powsimp(_))\neqn_4\n```\n\nEquation 5\n\n\n```python\n_ = sympy.solve(outside_solution,r)[0]\n_ = _/(_.subs(m,1))\n_ = sympy.expand_power_base(_,force=True)\n_ = _.simplify()\n_ = _.subs(gamma,xi+1).simplify().subs(xi,gamma-1)\neqn_5 = sympy.Eq(r/R,_)\neqn_5\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "c24518c2ce376df1bdd306af12afec66cfc1d6e0", "size": 33020, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "chevalier_clegg_1985.ipynb", "max_stars_repo_name": "scientific-paper-re-derivation/chevalier_clegg_1985", "max_stars_repo_head_hexsha": "24130b85d41352d0e41afeaa4c7e29b330d9b3d7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chevalier_clegg_1985.ipynb", "max_issues_repo_name": "scientific-paper-re-derivation/chevalier_clegg_1985", "max_issues_repo_head_hexsha": "24130b85d41352d0e41afeaa4c7e29b330d9b3d7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chevalier_clegg_1985.ipynb", "max_forks_repo_name": "scientific-paper-re-derivation/chevalier_clegg_1985", "max_forks_repo_head_hexsha": "24130b85d41352d0e41afeaa4c7e29b330d9b3d7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 66.3052208835, "max_line_length": 3414, "alphanum_fraction": 0.7300121139, "converted": true, "num_tokens": 1202, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9399133515091157, "lm_q2_score": 0.8774767842777551, "lm_q1q2_score": 0.8247521451819462}} {"text": "# [Day 13](https://adventofcode.com/2020/day/13): Shuttle Search\n\n\n```python\nwith open(\"../data/13.txt\", \"r\") as f:\n t0, times = (l.strip() for l in f.readlines())\n \nt0 = int(t0)\ntimes = times.split(\",\")\ntimes = [(o, int(p)) for o, p in enumerate(times) if p != \"x\"]\noffsets, periods = zip(*times)\n```\n\n## Part 1\n\n\n```python\nwait, period = min(zip((-t0 % p for p in periods), periods))\nassert 2092 == wait * period\n```\n\n## Part 2\n\nThe earliest timestamp ($t$) satisfies the system of congruences\n$$\n\\eqalign{\nt+T_0&\\equiv0\\pmod{P_0},\\cr\nt+T_1&\\equiv0\\pmod{P_1},\\cr\n&\\dots\\cr}\n$$\nwhere $T_n$, $P_n$ are the offsets and periods. The [Chinese Remainder Theorem](https://en.wikipedia.org/wiki/Chinese_remainder_theorem) yields a solution.\n\n\n```python\nfrom sympy import gcd\nfrom sympy.ntheory.modular import crt\nfrom itertools import combinations\n\n# Verify that we can apply the Chinese Remainder Theorem\nfor m, n in combinations(periods, 2):\n assert gcd(m, n) == 1\n\ntimestamp, _ = crt(periods, (-o for o in offsets))\nassert 702970661767766 == int(timestamp)\n```\n", "meta": {"hexsha": "6f70e4cfbfc0f90f2e5e477f5e63c1c5d639e555", "size": 2372, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "solutions/13.ipynb", "max_stars_repo_name": "egnha/AoC-2020", "max_stars_repo_head_hexsha": "bf52430e59ff4a6346c65dea41fcb663b5c76036", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "solutions/13.ipynb", "max_issues_repo_name": "egnha/AoC-2020", "max_issues_repo_head_hexsha": "bf52430e59ff4a6346c65dea41fcb663b5c76036", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "solutions/13.ipynb", "max_forks_repo_name": "egnha/AoC-2020", "max_forks_repo_head_hexsha": "bf52430e59ff4a6346c65dea41fcb663b5c76036", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 22.8076923077, "max_line_length": 161, "alphanum_fraction": 0.5227655987, "converted": true, "num_tokens": 341, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9399133531922388, "lm_q2_score": 0.8774767762675405, "lm_q1q2_score": 0.8247521391299399}} {"text": "# METOD algorithm - Sum of Gaussians\n\n# 1. Import libraries\n\nThe following libraries are required to run the METOD Algorithm.\n\n\n```python\nimport numpy as np\nfrom numpy import linalg as LA\nimport pandas as pd\n\nimport metod_alg as mt\nfrom metod_alg import objective_functions as mt_obj\n```\n\n# 2. 
Define function and gradient\n\nWeighted Sum of Gaussians objectve function:\n\n\\begin{equation}\n\\label{eq:funct1}\nf(x_n^{(k)})= -\\sum_{p=1}^{P} c_p\\exp \\Bigg\\{ {-\\frac{1}{2 \\sigma^2}(x_n^{(k)}-x_{0p})^T A_p^T \\Sigma_p A_p(x_n^{(k)}-x_{0p})}\\Bigg\\}\\, .\n\\end{equation}\nwhere $x_n^{(k)}$ is the $n$-th point after $k$ iterations of anti-gradient descent, $P$ is the number of Gaussian densities; $A_p$ is a random rotation matrix of size $d\\times d$; $\\Sigma_p$ is a diagonal positive definite matrix of size $d\\times d$ with smallest and largest eigenvalues $\\lambda_{min}$ and $\\lambda_{max}$ respectively; $x_{0p} \\in \\mathfrak{X}=[0,1]^d$ (centers of the Gaussian densities); $c_p$ is a fixed constant and $p=1,...,P$.\n\nNote that anti-gradient descent iterations are terminated at the smallest $k=K_n$ such that $\\nabla f(x_n^{(k)}) < \\delta$, where $\\delta$ is some small positive constant. \n\n\n\n\n```python\nf = mt_obj.sog_function\ng = mt_obj.sog_gradient\n```\n\n# 3. Defining parameters\n\nThe following parameters are required in order to derive $A_p$ ($p=1,...,P$) ; $\\Sigma_p$ ($p=1,...,P$); $x_{0p}$ ($p=1,...,P$) and $c_p$ ($p=1,...,P$).\n\n\n\u2022d: dimension\n\n\u2022P: number of minima\n\n\u2022lambda_1: smallest eigenvalue of $\\Sigma_p$ ($p=1,...,P$)\n\n\u2022lambda_2: largest eigenvalue of $\\Sigma_p$ ($p=1,...,P$)\n\n\u2022sigma_sq: value for $\\sigma^2$\n\nIn order to replicate results, we will control the pseudo-random number generator seed, so that the same random objective function and random starting points $x_n^{(0)}$ $(n=1,...,1000)$ will be generated each time the code is run. The random seed number will be set to 90.\n\n\n```python\nd = 20\nP = 10\nlambda_1 = 1\nlambda_2 = 10\nsigma_sq = 0.8\nseed = 90\n```\n\n\n```python\nnp.random.seed(seed)\nstore_x0, matrix_combined, store_c = mt_obj.function_parameters_sog(P, d, lambda_1, lambda_2)\n```\n\nWhere,\n\n\u2022store_x0: $x_{0p}$ ($p=1,...,P$)\n\n\u2022matrix_combined: $A_p^T \\Sigma_p A_p$ ($p=1,...,P$)\n\n\u2022store_c: $c_p$ ($p=1,...,P$)\n\n\n```python\nargs = P, sigma_sq, store_x0, matrix_combined, store_c\n```\n\n# 4. Run METOD Algorithm\n\n\n```python\n(discovered_minimizers,\n number_minimizers,\n func_vals_of_minimizers,\n excessive_no_descents,\n starting_points,\n grad_evals) = mt.metod(f, g, args, d)\n```\n\n# 5. Results of the METOD Algorithm\n\nTotal number of minimizers found:\n\n\n```python\nnumber_minimizers\n```\n\nPositions of minimizers:\n\n\n```python\ndiscovered_minimizers\n```\n\nFunction values of minimizers:\n\n\n```python\nfunc_vals_of_minimizers\n```\n\nTotal number of excessive descents:\n\n\n```python\nexcessive_no_descents\n```\n\nNumber of gradient evaluations for each starting point\n\n\n```python\ngrad_evals\n```\n\n# 6. Save results to csv file (optional)\n\nThe below csv files will be saved to the same folder which contains the METOD Algorithm - Sum of Gaussians notebook.\n\nRows in discovered_minimizers_d_%s_p_%s_sog.csv represent discovered minimizers. The total number of rows will be the same as the value of number_minimizers.\n\n\n```python\nnp.savetxt('discovered_minimizers_d_%s_p_%s_sog.csv' % (d, P), discovered_minimizers, delimiter=\",\")\n```\n\nEach row in func_vals_discovered_minimizers_d_%s_p_%s_sog.csv represents the function value of each discovered minimizer. 
The total number of rows will be the same as the value for number_minimizers.\n\n\n```python\nnp.savetxt('func_vals_discovered_minimizers_d_%s_p_%s_sog.csv' % (d, P), func_vals_of_minimizers, delimiter=\",\")\n```\n\nsummary_table_d_%s_p_%s_sog.csv will contain the total number of minimizers discovered and the total number of extra descents.\n\n\n```python\nsummary_table = pd.DataFrame({\n\"Total number of unique minimizers\": [number_minimizers],\n\"Extra descents\": [excessive_no_descents]})\nsummary_table.to_csv('summary_table_d_%s_p_%s_sog.csv' % (d, P))\n```\n\n# 7. Test results (optional)\n\nThis test can only be used for the Sum of Gaussians function.\n\nTo check each discovered minimizer is unique, we do the following:\n\nFor each minimizer $x_l^{(K_l)}$ ($l=1,...,L$)\n\n\\begin{equation}\np_l = {\\rm argmin}_{1\\le p \\le P} \\|x_l^{(K_l)} - x_{0p}\\|\n\\end{equation}\n\nFor each $p_l$ found, it is ensured that $\\|x_l^{(K_l)} - x_{0p_l}\\| \\text{ is small}$.\n\nIf all $p_l$ is different for each l=$(1,...,L)$ and $\\|x_l^{(K_l)} - x_{0p_l}\\|$ is small for each $p_l$, then all discovered minimizers are unique. \n\n\n```python\ndef calc_minimizer(point, p, store_x0):\n \"\"\"Returns the index p_1 and also the distance between the minimizer discovered by METOD and x_{0p_1}\"\"\" \n dist = np.zeros((p))\n for i in range(p):\n dist[i] = LA.norm(point - store_x0[i])\n return np.argmin(dist), np.min(dist)\n```\n\n\n```python\n\"\"\"Store values from calc_minimizer function\"\"\" \nnorms_with_minimizer = np.zeros((number_minimizers))\npos_list = np.zeros((number_minimizers))\nfor j in range(number_minimizers):\n pos, min_dist = calc_minimizer(discovered_minimizers[j], P, store_x0)\n pos_list[j] = pos\n norms_with_minimizer[j] = min_dist\n```\n\n${\\max}_{1\\le l \\le L} \\|x_l^{(K_l)}-x_{0p_l}\\|$ should be small\n\n\n```python\nnp.max(norms_with_minimizer)\n```\n\nEnsure that the number of unique minimizers is $L$\n\n\n```python\nnp.unique(pos_list).shape[0] == number_minimizers\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "df46035ef396fc17c3d213b23f6d4b95370dc8fd", "size": 10563, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Examples/Notebooks/METOD Algorithm - Sum of Gaussians.ipynb", "max_stars_repo_name": "Megscammell/METOD-Algorithm", "max_stars_repo_head_hexsha": "7518145ec100599bddc880f5f52d28f9a3959108", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Examples/Notebooks/METOD Algorithm - Sum of Gaussians.ipynb", "max_issues_repo_name": "Megscammell/METOD-Algorithm", "max_issues_repo_head_hexsha": "7518145ec100599bddc880f5f52d28f9a3959108", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-11-17T09:03:17.000Z", "max_issues_repo_issues_event_max_datetime": "2021-11-17T09:03:17.000Z", "max_forks_repo_path": "Examples/Notebooks/METOD Algorithm - Sum of Gaussians.ipynb", "max_forks_repo_name": "Megscammell/METOD-Algorithm", "max_forks_repo_head_hexsha": "7518145ec100599bddc880f5f52d28f9a3959108", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 24.3387096774, "max_line_length": 469, "alphanum_fraction": 0.5478557228, "converted": true, "num_tokens": 1661, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9399133447766224, "lm_q2_score": 0.8774767778695834, "lm_q1q2_score": 0.8247521332512134}} {"text": "# Saving Memory\n\n\nRunning operations can cause new memory to be allocated to host results. \n\n1. First, we do not want to run around allocating memory unnecessarily all the time. In machine learning, we might have hundreds of megabytes of parameters and update all of them multiple times per second. Typically, we will want to perform these updates **in place**.\n2. we might point at the same parameters from multiple variables. If we do not update in place, other references will still point to the old memory location, making it possible for parts of our code to inadvertently reference stale parameters.\n\n\n```python\nimport torch\ny = torch.tensor([3.5])\nx = 1.2\nprint(id(y))\ny = 1.2 + y\nprint(id(y))\n```\n\n 2232877748616\n 2232877751896\n\n\n\n```python\n# Fortunately, performing in-place operations is easy. We can assign the result of an operation \n# to a previously allocated array with slice notation\nZ = torch.zeros_like(y)\nprint('id(Z):', id(Z))\nZ[:] = x + y\nprint('id(Z):', id(Z))\n```\n\n id(Z): 2232877750936\n id(Z): 2232877750936\n\n\n\n```python\nprint(id(y))\ny[:] = 1.2 + y\nprint(id(y))\n```\n\n 2232877751896\n 2232877751896\n\n\n\n```python\nprint(id(y))\ny += x\nprint(id(y))\n```\n\n 2232877751896\n 2232877751896\n\n\n# Linear Algebra\n\n## Scalar, Vector, Matrix and Tensor\n1. Formally, we call values consisting of just one numerical quantity **scalars** --> $x \\in \\mathbb{R}$\n\\begin{align}\nx = 2.5\n\\end{align}\n2. You can think of a **vector** as simply a list of scalar values. We call these values the elements (entries or components) of the vector. We work with vectors via one-dimensional tensors. In general tensors can have arbitrary lengths, subject to the memory limits of your machine. Extensive literature considers column vectors to be the default orientation of vectors, so does this book. --> $X \\in \\mathbb{R}^n$ represents a vector x consists of n real-valued scalars.\n\\begin{equation}\n X = \\begin{bmatrix}\nx_1 \\\\\nx_2 \\\\\n\\vdots \\\\\nx_n\n\\end{bmatrix}\n\\end{equation}\n3. Just as vectors generalize scalars from order zero to order one, **matrices** generalize vectors from order one to order two. $A \\in \\mathbb{R}^{m \\times n}$ to express that the matrix A consists ofmrows and n columns of real-valued scalars\n\\begin{equation}\nA = \\begin{bmatrix}\na_{11} & a_{12} & \\ldots & a_{1n} \\\\\na_{21} & a_{22} & \\ldots & a_{2n} \\\\\n\\vdots & \\vdots & \\ddots &\\vdots \\\\\na_{m1} & a_{m2} & \\ldots & a_{mn}\n\\end{bmatrix}\n\\end{equation}\n- when a matrix has the same number of rows and columns, its shape becomes a square; thus, it is called a **square matrix**.\n- Sometimes, we want to flip the axes. When we exchange a matrix\u02bcs rows and columns, the result is called the **transpose** of the matrix, which is denoted by $A^T$\n- a **symmetric matrix** A is equal to its transpose: $A = A^T$\n\nMatrices are useful data structures: they allow us to organize data that have **different modalities of variation**. For example, rows in our matrix might correspond to different houses (data examples), while columns might correspond to different attributes. 
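For instance (a small illustrative sketch, with made-up numbers rather than anything from a real dataset), such a table could be stored as a two-dimensional tensor in which each row is one house and each column one attribute:\n\n\n```python\nimport torch\n\n# Hypothetical data: 3 houses (rows) x 2 attributes (columns): area and number of rooms\nhouses = torch.tensor([[120.0, 3.0], [85.5, 2.0], [200.0, 5.0]])\nprint(houses.shape)    # torch.Size([3, 2])\nprint(houses.T.shape)  # the transpose swaps the two axes: torch.Size([2, 3])\n```\n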
\n\nThus, although the default orientation of a single vector is a column vector, in a matrix that represents a tabular dataset, **it is more conventional to treat each data example as a row vector in the matrix.** And, as we will see in later chapters, this convention will enable common deep learning practices\n\n4. Just as vectors generalize scalars, and matrices generalize vectors, we can build data structures with even more axes. **Tensors** (\u201ctensors\u201d in this subsection refer to algebraic objects) give us a generic way of describing **n-dimensional arrays** with an arbitrary number of axes.\n - Vectors, for example, are **first-order** tensors, and matrices are **second-order** tensors.\n\n## Basic Properties of Tensor Arithmetic\n\n### Hadamard product $A*B$\n\n\nSpecifically, elementwise multiplication of two matrices is called their Hadamard product (math notation $\\odot$). For example, $A, B \\in \\mathbb{R}^{m \\times n}$:\n\\begin{equation}\nB = \\begin{bmatrix}\nb_{11} & b_{12} & \\ldots & b_{1n} \\\\\nb_{21} & b_{22} & \\ldots & b_{2n} \\\\\n\\vdots & \\vdots & \\ddots &\\vdots \\\\\nb_{m1} & b_{m2} & \\ldots & b_{mn}\n\\end{bmatrix}\n\\end{equation}\n\n\n\\begin{equation}\nA \\odot B = A * B= \\begin{bmatrix}\na_{11}b_{11} & a_{12}b_{12} & \\ldots & a_{1n}b_{1n} \\\\\na_{21}b_{21} & a_{22}b_{22} & \\ldots & a_{2n}b_{2n} \\\\\n\\vdots & \\vdots & \\ddots &\\vdots \\\\\na_{m1}b_{m1} & a_{m2}b_{m2} & \\ldots & a_{mn}b_{mn}\n\\end{bmatrix}\n\\end{equation}\n\n### Dot Products $torch.dot(x, y)$\n\nGiven two vectors $x, y \\in \\mathbb{R}^d$, their **dot product** $x^Ty$ or ($$) is a sum over the products of the elements at the same position:\n\\begin{equation}\nx^Ty = \\sum_{i = 1}^d x_iy_i\n\\end{equation}\n\n\n```python\nimport torch\ny = torch.ones(4, dtype=torch.float32)\ny\n```\n\n\n\n\n tensor([1., 1., 1., 1.])\n\n\n\n\n```python\nx = torch.tensor([0., 1., 2., 3.])\nx\n```\n\n\n\n\n tensor([0., 1., 2., 3.])\n\n\n\n\n```python\ntorch.dot(x,y)\n```\n\n\n\n\n tensor(6.)\n\n\n\n\n```python\nx*y\n```\n\n\n\n\n tensor([0., 1., 2., 3.])\n\n\n\n\n```python\ntorch.sum(x * y)\n```\n\n\n\n\n tensor(6.)\n\n\n\n**Usage: weighted average** Dot products are useful in a wide range of contexts. For example, given some set of values, denoted by a vector $x \\in \\mathbb{R}^d$ and a set of weights denoted by $w \\in \\mathbb{R}^d$, the weighted sum of the values in $x$ according to the weights $w$ could be expressed as the dot product $x^Tw$. When the weights are non-negative and sum to one (i.e., $\\sum_{i=1}^{d}w_i = 1$), the dot product expresses a **weighted average**. After normalizing two vectors to have the unit length, the dot products express the cosine of the angle between them.\n\n### Matrix-Vector Products $torch.mv(A, x)$\n\nRecall the matrix $A \\in \\mathbb{R}^{m \\times n}$ and the vector $x \\in \\mathbb{R}^n$. The matrix-vector product $Ax$ is simply a column vector of length $m$, whose $i^{th}$ element is the dot product $a_i^Tx$\n\n### Matrix-Matrix Multiplication $torch.mm(A, B)$\n\nSay that we have two matrices $A \\in \\mathbb{R}^{m x k}$ and $B \\in \\mathbb{R}^{k x n}$, $C = AB \\in mathbb{R}^{m x n}$\n\n### Norms\n\nInformally, the norm of a vector tells us how big a vector is. The notion of **size** under consideration here concerns not dimensionality but rather the magnitude of the components. 
\n\n\nIn linear algebra, a vector norm is a function $f$ that maps a vector ($x, Y\\in \\mathbb{R}^n$) to a scalar $f: \\mathbb{R}^n \\rightarrow \\mathbb{R}$, satisfying a handful of properties:\n1. its norm also scales by the absolute value $\\alpha$ of the same constant factor: $f(\\alpha x) = |\\alpha| f(x)$. \n2. Triangle inequality: $f(x + y) \\leq f(x) + f(y)$\n3. the norm must be non-negative: $f(x) \\geq 0$\n\n\nYou might notice that norms sound a lot like measures of distance:\n\n1. $L_2$ norm (Euclidean distance) of $x$ is the square root of the sum of the squares of the vector elements: \n\\begin{equation}\nL_2(x) = \\parallel x \\parallel_2 = \\sqrt{\\sum_{i=1}^{n}x_i^2}\n\\end{equation} \nIn deep learning, we work more often with the squared L2 norm.\n\n\n```python\n# In code, we can calculate the L2 norm of a vector as follows.\nu = torch.tensor([3.0, -4.0])\ntorch.norm(u)\n```\n\n\n\n\n tensor(5.)\n\n\n\n2. $L_1$ norm is expressed as the sum of the absolute values of the vector elements: \n\\begin{equation}\nL_1(x) = \\parallel x \\parallel_1 = \\sum_{i=1}^{n}|x_i|\n\\end{equation}\n\n\n```python\n# To calculate the L1 norm, we compose the absolute value function with a sum over the elements.\ntorch.abs(u).sum()\n```\n\n\n\n\n tensor(7.)\n\n\n\nAs compared with the L2 norm, it is less influenced by outliers. Both the $L_2$ norm and the $L_1$ norm are special cases of the more general $L_p$ norm: \n\\begin{equation}L_p(x) = \\parallel x \\parallel_p = (\\sum_{i=1}^{n}|x_i|^p)^{1/p}\\end{equation}\n\nAnalogous to $L_2$ norms of vectors, the Frobenius norm of a matrix $X \\in \\mathbb{R}^{m x n}$ is the square root of the sum of the squares of the matrix elements:\n\\begin{equation}\n\\parallel X \\parallel_F = \\sqrt{\\sum_{i=1}^m\\sum_{j=1}^nx_{ij}^2}\n\\end{equation}\n\n\n```python\n# Invoking the following function will calculate the Frobenius norm of a matrix.\ntorch.norm(torch.ones((4, 9)))\n```\n\n\n\n\n tensor(6.)\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "b8b5bc88a84977e9bd10952c0c7f737b21ecd97c", "size": 15506, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "deep_learning/Geometry and Linear Algebra.ipynb", "max_stars_repo_name": "bingrao/notebook", "max_stars_repo_head_hexsha": "4bd74a09ffe86164e4bd318b25480c9ca0c6a462", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "deep_learning/Geometry and Linear Algebra.ipynb", "max_issues_repo_name": "bingrao/notebook", "max_issues_repo_head_hexsha": "4bd74a09ffe86164e4bd318b25480c9ca0c6a462", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "deep_learning/Geometry and Linear Algebra.ipynb", "max_forks_repo_name": "bingrao/notebook", "max_forks_repo_head_hexsha": "4bd74a09ffe86164e4bd318b25480c9ca0c6a462", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 28.3992673993, "max_line_length": 588, "alphanum_fraction": 0.5503675996, "converted": true, "num_tokens": 2537, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.8791467801752451, "lm_q2_score": 0.9381240116814386, "lm_q1q2_score": 0.8247487042748207}} {"text": "```python\nimport matplotlib.pyplot as plt\nimport numpy as np\n```\n\n(a) At the start of section 4.1 it was shown that the likelihood for a set of n independent Binomial trials is given by:\n\n\\begin{align}\nL(\\theta) & = \\prod_{i=1}^{n} \\theta^{x_i} (1-\\theta)^{1-x_i} \\\\\n& = \\theta^{\\sum_{i=1}^{n} x_i} (1 - \\theta)^{n - \\sum_{i=1}^{n} x_i}\n\\end{align}\n\nor in terms of the log-likelihood:\n\\begin{align}\nlogL(\\theta) & = log \\theta \\sum_{i=1}^{n} x_i + \\log (1-\\theta) \\Bigg( {n- \\sum_{i=1}^{n} x_i} \\Bigg)\n\\end{align}\n\n(b) The likelihood of $\\theta$ is given by\n\n\n```python\nk = np.array([i for i in range(7)])\nn_k_2 = np.array([42860, 89213, 47819, 0, 0, 0, 0])\nn_k_6 = np.array([1096, 6233, 15700, 22221, 17332, 7908, 1579])\nx = n_k_2 + n_k_6\nprint(repr(k))\nprint(repr(x))\n```\n\n\n```python\ndef log_likelihood(k: np.ndarray, x: np.ndarray, theta: np.ndarray) -> np.ndarray:\n n = np.sum(k * x)\n sum_x = np.sum(x)\n log_like = np.log(theta) * sum_x + np.log(1 - theta) * (n - sum_x)\n #like = np.exp(log_like - np.max(log_like)) # tiny values\n return log_like\n```\n\n\n```python\ntheta = np.linspace(0.01, 0.99, num=100)\nlog_like = log_likelihood(k, x, theta)\nplt.plot(theta, log_like)\nplt.xlabel(r'$\\theta$')\nplt.ylabel('Log likelihood')\nplt.title(r'Log likelihood of $\\theta$');\n```\n\nThe MLE of $\\theta$ is given by the solution to the score equation which is:\n\n$$\n\\hat{\\theta} = \\frac{1}{n} \\sum_{i=1}^{n} x_i\n$$\n\n\n```python\ndef mle(k: np.ndarray, x: np.ndarray) -> float:\n return np.sum(x) / np.sum(k * x)\n```\n\n\n```python\ntheta_hat = mle(k, x)\nprint(f'MLE of theta = {np.round(theta_hat, 2)}')\n```\n\nThe standard error of $\\theta$ is given by:\n\n\\begin{align}\nse(\\hat{\\theta}) & = \\sqrt{var \\Bigg(\\frac{1}{n} \\sum_{i=1}^{n} x_i \\Bigg)} \\\\\n& = \\sqrt{\\frac{\\hat{\\theta} (1 - \\hat{\\theta})}{n}}\n\\end{align}\n\n\n```python\nse_theta_hat = np.sqrt((theta_hat * (1 - theta_hat)) / np.sum(k * x))\nnp.round(se_theta_hat, 4)\n```\n\n(c) The binomial model is not a good fit to the data as the MLE is larger than would be expected. There is not equal sampling probability amongst the familes. 
Extra-binomial variance is observed since the sampling is done from a mixed population of families of size 2 and families of size 6.\n", "meta": {"hexsha": "a6fe5d0215b7dafbb92e4f22f9d84f67161d96ed", "size": 4212, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "python/chapter-4/exercises/EX4-1.ipynb", "max_stars_repo_name": "covuworie/in-all-likelihood", "max_stars_repo_head_hexsha": "6638bec8bb4dde7271adb5941d1c66e7fbe12526", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "python/chapter-4/exercises/EX4-1.ipynb", "max_issues_repo_name": "covuworie/in-all-likelihood", "max_issues_repo_head_hexsha": "6638bec8bb4dde7271adb5941d1c66e7fbe12526", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2020-03-24T17:53:04.000Z", "max_issues_repo_issues_event_max_datetime": "2021-08-23T20:16:17.000Z", "max_forks_repo_path": "python/chapter-4/exercises/EX4-1.ipynb", "max_forks_repo_name": "covuworie/in-all-likelihood", "max_forks_repo_head_hexsha": "6638bec8bb4dde7271adb5941d1c66e7fbe12526", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-11-21T10:24:59.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-21T10:24:59.000Z", "avg_line_length": 26.0, "max_line_length": 297, "alphanum_fraction": 0.5185185185, "converted": true, "num_tokens": 778, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9381240090865197, "lm_q2_score": 0.8791467643431001, "lm_q1q2_score": 0.8247486871409909}} {"text": "# Task One : Fuandamentals of Qunatum Computation\n\n## Basic Arithmatic Operations & Complex Numbers\n\n\n```python\n\n```\n\n\n```python\n\n```\n\nComplex numbers are always of the form\n\n\\begin{align}\n\\alpha = a + bi\n\\end{align}\n\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n## Complex conjugate\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n## Norms/Absolute Values\n\n\n\n\\begin{align} ||z|| &= \\sqrt{zz^*} = \\sqrt{|z|^2},\\\\ ||w|| &= \\sqrt{ww^*} = \\sqrt{|w|^2}, \\end{align} \n\n\n```python\n\n```\n\n\n```python\n\n```\n\n## Row Vectors, Column Vectors, and Bra-Ket Notation\n\n\\begin{align} \\text{Column Vector:} \\ \\begin{pmatrix}\na_1 \\\\ a_2 \\\\ \\vdots \\\\ a_n\n\\end{pmatrix} \n\\quad \\quad \\text{Row Vector:} \\ \\begin{pmatrix}\na_1, & a_2, & \\cdots, & a_n\n\\end{pmatrix} \\end{align}\n\n\n\n\n```python\n\n```\n\n\n```python\n\n```\n\nRow vectors in quantum mechanics are also called **bra-vectors**, and are denoted as follows:\n\n\\begin{align} \\langle A| = \\begin{pmatrix}\na_1, & a_2, \\cdots, & a_n\n\\end{pmatrix} \\end{align}\n\nColumn vectors are also called **ket-vectors** in quantum mechanics denoted as follows:\n\n\\begin{align} |B\\rangle = \\begin{pmatrix}\nb_1 \\\\ b_2 \\\\ \\vdots \\\\ b_n\n\\end{pmatrix} \\end{align}\n\nIn general, if we have a column vector, i.e. 
a ket-vector:\n\n\\begin{align} |A\\rangle = \\begin{pmatrix}\na_1 \\\\ a_2 \\\\ \\vdots \\\\ a_n\n\\end{pmatrix} \\end{align}\n\nthe corresponding bra-vector:\n\n\\begin{align} \\langle A| = \\begin{pmatrix}\na_1^*, & a_2^*, & \\cdots, & a_n^*\n\\end{pmatrix} \\end{align}\n\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n## Inner Product\n\n\\begin{align} \\langle A| = \\begin{pmatrix}\na_1, & a_2, & \\cdots, & a_n\n\\end{pmatrix}, \\quad \\quad\n|B\\rangle = \\begin{pmatrix}\nb_1 \\\\ b_2 \\\\ \\vdots \\\\ b_n\n\\end{pmatrix} \\end{align}\n\nTaking the inner product of $\\langle A|$ and $|B\\rangle$ gives the following:\n\n\\begin{align} \\langle A| B \\rangle &= \\begin{pmatrix} \na_1, & a_2, & \\cdots, & a_n\n\\end{pmatrix}\n\\begin{pmatrix}\nb_1 \\\\ b_2 \\\\ \\vdots \\\\ b_n\n\\end{pmatrix}\\\\\n&= \na_1b_1 + a_2b_2 + \\cdots + a_nb_n\\\\\n&= \\sum_{i=1}^n a_ib_i\n\\end{align}\n\n\n```python\n# Define the 4x1 matrix version of a column vector (instead of using the np.array() version):\n```\n\n\n```python\n# Define B as a 1x4 matrix\n\n```\n\n\n```python\n# Compute \n\n```\n\n## Matrices\n\n\n\\begin{align}\nM = \\begin{pmatrix}\n2-i & -3 \\\\\n-5i & 2\n\\end{pmatrix}\n\\end{align}\n\n\n```python\n\n```\n\n\n```python\n\n```\n\nHermitian conjugates are given by taking the conjugate transpose of the matrix\n\n\n```python\n\n```\n\n## Tensor Products of Matrices\n\n\\begin{align}\n\\begin{pmatrix}\na & b \\\\\nc & d\n\\end{pmatrix} \\otimes \n\\begin{pmatrix}\nx & y \\\\\nz & w\n\\end{pmatrix} = \n\\begin{pmatrix}\na \\begin{pmatrix}\nx & y \\\\\nz & w\n\\end{pmatrix} & b \\begin{pmatrix}\nx & y \\\\\nz & w\n\\end{pmatrix} \\\\\nc \\begin{pmatrix}\nx & y \\\\\nz & w\n\\end{pmatrix} & d \\begin{pmatrix}\nx & y \\\\\nz & w\n\\end{pmatrix}\n\\end{pmatrix} = \n\\begin{pmatrix}\nax & ay & bx & by \\\\\naz & aw & bz & bw \\\\\ncx & cy & dx & dy \\\\\ncz & cw & dz & dw\n\\end{pmatrix}\n\\end{align}\n\n\n```python\n\n```\n\n# Task Two: Qubits, Bloch Sphere and Basis States\n\n\n```python\n\n```\n\n\n\n\\begin{align}\n|\\psi \\rangle = \\begin{pmatrix}\n\\alpha \\\\ \\beta\n\\end{pmatrix}, \\quad \\text{where } \\sqrt{\\langle \\psi | \\psi \\rangle} = 1. 
\n\\end{align}\n\nThink of Qubit as an Electron:\n\n\\begin{align}\n\\text{spin-up}: \\ |0\\rangle &= \\begin{pmatrix} 1\\\\0 \\end{pmatrix} \\\\\n\\text{spin-down}: \\ |1\\rangle & = \\begin{pmatrix} 0\\\\1 \\end{pmatrix}\n\\end{align}\n\n\n```python\n\n```\n\nAnother representation is via Bloch Sphere:\n\n\n```python\n\n```\n\n## Spin + / - :\n\n\\begin{align}\n\\text{spin +}: \\ |+\\rangle &= \\begin{pmatrix} 1/\\sqrt{2} \\\\ 1/\\sqrt{2} \\end{pmatrix} = \\frac{1}{\\sqrt{2}} \\left(|0\\rangle + |1\\rangle\\right) \\\\\n\\text{spin -}: \\ |-\\rangle & = \\begin{pmatrix} 1/\\sqrt{2} \\\\ -1/\\sqrt{2} \\end{pmatrix} = \\frac{1}{\\sqrt{2}} \\left(|0\\rangle - |1\\rangle\\right)\n\\end{align}\n\n\n```python\n\n```\n\n## Basis States\n\n\n\\begin{align}\n |0\\rangle &= \\begin{pmatrix} 1\\\\0 \\end{pmatrix} \\\\\n |1\\rangle & = \\begin{pmatrix} 0\\\\1 \\end{pmatrix}\n\\end{align}\nPreapring other states from Basis States:\n\\begin{align}\n|00 \\rangle &= |0\\rangle \\otimes |0\\rangle = \\begin{pmatrix} 1\\\\0 \\end{pmatrix} \\otimes \\begin{pmatrix} 1\\\\0 \\end{pmatrix} = \\begin{pmatrix} 1\\\\0\\\\0\\\\0 \\end{pmatrix} \\\\\n|01 \\rangle &= |0\\rangle \\otimes |1\\rangle = \\begin{pmatrix} 1\\\\0 \\end{pmatrix} \\otimes \\begin{pmatrix} 0\\\\1 \\end{pmatrix} = \\begin{pmatrix} 0\\\\1\\\\0\\\\0 \\end{pmatrix} \\\\\n|10 \\rangle &= |1\\rangle \\otimes |0\\rangle = \\begin{pmatrix} 0\\\\1 \\end{pmatrix} \\otimes \\begin{pmatrix} 1\\\\0 \\end{pmatrix} = \\begin{pmatrix} 0\\\\0\\\\1\\\\0 \\end{pmatrix} \\\\\n|11 \\rangle &= |1\\rangle \\otimes |1\\rangle = \\begin{pmatrix} 0\\\\1 \\end{pmatrix} \\otimes \\begin{pmatrix} 0\\\\1 \\end{pmatrix} = \\begin{pmatrix} 0\\\\0\\\\0\\\\1 \\end{pmatrix}\n\\end{align}\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n# Task Three: Qunatum Gates and Circuits\n\n\n```python\nfrom qiskit import *\nfrom qiskit.visualization import plot_bloch_multivector\n```\n\n## Pauli Matrices\n\n\\begin{align}\nI = \\begin{pmatrix} 1&0 \\\\ 0&1 \\end{pmatrix}, \\quad\nX = \\begin{pmatrix} 0&1 \\\\ 1&0 \\end{pmatrix}, \\quad\nY = \\begin{pmatrix} 0&i \\\\ -i&0 \\end{pmatrix}, \\quad\nZ = \\begin{pmatrix} 1&0 \\\\ 0&-1 \\end{pmatrix} \\quad\n\\end{align}\n\n## X-gate\n\nThe X-gate is represented by the Pauli-X matrix:\n\n$$ X = \\begin{bmatrix} 0 & 1 \\\\ 1 & 0 \\end{bmatrix} = |0\\rangle\\langle1| + |1\\rangle\\langle0| $$\n\nEffect a gate has on a qubit: \n\n$$ X|0\\rangle = \\begin{bmatrix} 0 & 1 \\\\ 1 & 0 \\end{bmatrix}\\begin{bmatrix} 1 \\\\ 0 \\end{bmatrix} = \\begin{bmatrix} 0 \\\\ 1 \\end{bmatrix} = |1\\rangle$$\n\n\n\n```python\n# Let's do an X-gate on a |0> qubit\n\n```\n\n\n```python\n# Let's see the result\n\n```\n\n## Z & Y-Gate\n\n\n\n$$ Y = \\begin{bmatrix} 0 & -i \\\\ i & 0 \\end{bmatrix} \\quad\\quad\\quad\\quad Z = \\begin{bmatrix} 1 & 0 \\\\ 0 & -1 \\end{bmatrix} $$\n\n$$ Y = -i|0\\rangle\\langle1| + i|1\\rangle\\langle0| \\quad\\quad Z = |0\\rangle\\langle0| - |1\\rangle\\langle1| $$\n\n\n\n\n\n```python\n# Do Y-gate on qubit 0\n\n# Do Z-gate on qubit 0\n\n\n```\n\n## Hadamard Gate\n\n\n\n$$ H = \\tfrac{1}{\\sqrt{2}}\\begin{bmatrix} 1 & 1 \\\\ 1 & -1 \\end{bmatrix} $$\n\nWe can see that this performs the transformations below:\n\n$$ H|0\\rangle = |+\\rangle $$\n\n$$ H|1\\rangle = |-\\rangle $$\n\n\n```python\n#create circuit with three qubit\n\n# Apply H-gate to each qubit:\n\n# See the circuit:\n\n```\n\n## Identity Gate\n\n\n\n$$\nI = \\begin{bmatrix} 1 & 0 \\\\ 0 & 1\\end{bmatrix}\n$$\n\n\n\n$$ I = XX $$\n\n\n\n\n```python\n\n```\n\n ** Other Gates: 
S-gate , T-gate, U-gate\n\n# Task Four : Multiple Qubits, Entanglement\n\n## Multiple Qubits\n\nThe state of two qubits :\n\n$$ |psi\\rangle = a_{00}|00\\rangle + a_{01}|01\\rangle + a_{10}|10\\rangle + a_{11}|11\\rangle = \\begin{bmatrix} a_{00} \\\\ a_{01} \\\\ a_{10} \\\\ a_{11} \\end{bmatrix} $$\n\n\n```python\nfrom qiskit import *\n```\n\n\n```python\n\n# Apply H-gate to each qubit:\n\n# See the circuit:\n\n```\n\nEach qubit is in the state $|+\\rangle$, so we should see the vector:\n\n$$ \n|{+++}\\rangle = \\frac{1}{\\sqrt{8}}\\begin{bmatrix} 1 \\\\ 1 \\\\ 1 \\\\ 1 \\\\\n 1 \\\\ 1 \\\\ 1 \\\\ 1 \\\\\n \\end{bmatrix}\n$$\n\n\n```python\n# Let's see the result\n\n```\n\n\n```python\n\n```\n\n\n\n$$\nX|q_1\\rangle \\otimes H|q_0\\rangle = (X\\otimes H)|q_1 q_0\\rangle\n$$\n\nThe operation looks like this:\n\n$$\nX\\otimes H = \\begin{bmatrix} 0 & 1 \\\\ 1 & 0 \\end{bmatrix} \\otimes \\tfrac{1}{\\sqrt{2}}\\begin{bmatrix} 1 & 1 \\\\ 1 & -1 \\end{bmatrix} = \\frac{1}{\\sqrt{2}}\n\\begin{bmatrix} 0 & 0 & 1 & 1 \\\\\n 0 & 0 & 1 & -1 \\\\\n 1 & 1 & 0 & 0 \\\\\n 1 & -1 & 0 & 0 \\\\\n\\end{bmatrix}\n$$\n\nWhich we can then apply to our 4D statevector $|q_1 q_0\\rangle$. You will often see the clearer notation:\n\n$$\nX\\otimes H = \n\\begin{bmatrix} 0 & H \\\\\n H & 0\\\\\n\\end{bmatrix}\n$$\n\n## C-Not Gate\n\n\n```python\n#create circuit with two qubit\n\n# Apply CNOT\n\n# See the circuit:\n\n```\n\nClassical truth table of C-Not gate:\n\n| Input (t,c) | Output (t,c) |\n|:-----------:|:------------:|\n| 00 | 00 |\n| 01 | 11 |\n| 10 | 10 |\n| 11 | 01 |\n\n\n\n## Entanglement\n\n\n```python\n#create two qubit circuit\n\n# Apply H-gate to the first:\n\n```\n\n\n```python\n# Let's see the result:\n\n```\n\nQuantum System Sate is:\n\n$$\n|0{+}\\rangle = \\tfrac{1}{\\sqrt{2}}(|00\\rangle + |01\\rangle)\n$$\n\n\n\n```python\n\n# Apply H-gate to the first:\n\n# Apply a CNOT:\n\n\n```\n\n\n```python\n# Let's see the result:\n\n```\n\nWe see we have this final state (Bell State):\n\n$$\n\\text{CNOT}|0{+}\\rangle = \\tfrac{1}{\\sqrt{2}}(|00\\rangle + |11\\rangle)\n$$ \n\n\n\nOther Bell States:\n\n# Task Five: Bernstein-Vazirani Algorithm\n\n\nA black-box function $f$, which takes as input a string of bits ($x$), and returns either $0$ or $1$, that is:\n\n\n\n\n\n\n\n$$f(\\{x_0,x_1,x_2,...\\}) \\rightarrow 0 \\textrm{ or } 1 \\textrm{ where } x_n \\textrm{ is }0 \\textrm{ or } 1 $$ \n\nThe function is guaranteed to return the bitwise product of the input with some string, $s$. \n\n\nIn other words, given an input $x$, $f(x) = s \\cdot x \\, \\text{(mod 2)} =\\ x_0 * s_0+x_1*s_1+x_2*s_2+...\\ $ mod 2\n\nThe quantum Bernstein-Vazirani Oracle:\n \n1. Initialise the inputs qubits to the $|0\\rangle^{\\otimes n}$ state, and output qubit to $|{-}\\rangle$.\n2. Apply Hadamard gates to the input register\n3. Query the oracle\n4. Apply Hadamard gates to the input register\n5. Measure\n\n## Example Two Qubits:\n\n
1. The register of two qubits is initialized to zero:\n\n$$\lvert \psi_0 \rangle = \lvert 0 0 \rangle$$\n\n2. Apply a Hadamard gate to both qubits:\n\n$$\lvert \psi_1 \rangle = \frac{1}{2} \left( \lvert 0 0 \rangle + \lvert 0 1 \rangle + \lvert 1 0 \rangle + \lvert 1 1 \rangle \right) $$\n\n3. For the string $s=11$, the quantum oracle performs the operation:\n\n$$\n|x \rangle \xrightarrow{f_s} (-1)^{x\cdot 11} |x \rangle. \n$$\n\n$$\lvert \psi_2 \rangle = \frac{1}{2} \left( (-1)^{00\cdot 11}|00\rangle + (-1)^{01\cdot 11}|01\rangle + (-1)^{10\cdot 11}|10\rangle + (-1)^{11\cdot 11}|11\rangle \right)$$\n\n$$\lvert \psi_2 \rangle = \frac{1}{2} \left( \lvert 0 0 \rangle - \lvert 0 1 \rangle - \lvert 1 0 \rangle + \lvert 1 1 \rangle \right)$$\n\n4. Apply a Hadamard gate to both qubits:\n\n$$\lvert \psi_3 \rangle = \lvert 1 1 \rangle$$\n\n5. Measure to find the secret string $s=11$\n
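\nOne way this two-qubit example might be assembled (a sketch, assuming the phase-oracle form used above, in which the oracle for $s=11$ reduces to a $Z$ gate on each qubit, and assuming an older Qiskit release where `Aer` and `execute` can be imported from the top-level package):\n\n\n```python\nfrom qiskit import QuantumCircuit, Aer, execute\n\nbv = QuantumCircuit(2, 2)      # step 1: both qubits start in |0>\nbv.h([0, 1])                   # step 2: Hadamard on both qubits\nbv.z([0, 1])                   # step 3: phase oracle for s = 11, |x> -> (-1)^(x.11) |x>\nbv.h([0, 1])                   # step 4: Hadamard on both qubits again\nbv.measure([0, 1], [0, 1])     # step 5: measure\n\nbackend = Aer.get_backend('qasm_simulator')\ncounts = execute(bv, backend, shots=1024).result().get_counts()\nprint(counts)                  # every shot should return the secret string '11'\n```\n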
    \n\n\n\n```python\nfrom qiskit import *\n%matplotlib inline\nfrom qiskit.tools.visualization import plot_histogram\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "5889828fe83cc44893f6713690abdc12853abeec", "size": 23044, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "sample/Guided Project.ipynb", "max_stars_repo_name": "rishuatgithub/qrepo", "max_stars_repo_head_hexsha": "6c764d3d6c23d7dd97ff028ec73c6f74ba997440", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "sample/Guided Project.ipynb", "max_issues_repo_name": "rishuatgithub/qrepo", "max_issues_repo_head_hexsha": "6c764d3d6c23d7dd97ff028ec73c6f74ba997440", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "sample/Guided Project.ipynb", "max_forks_repo_name": "rishuatgithub/qrepo", "max_forks_repo_head_hexsha": "6c764d3d6c23d7dd97ff028ec73c6f74ba997440", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 22.0305927342, "max_line_length": 200, "alphanum_fraction": 0.4579934039, "converted": true, "num_tokens": 3887, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9572777975782055, "lm_q2_score": 0.8615382094310355, "lm_q1q2_score": 0.8247313996536125}} {"text": "# Sympy\n\nAnaconda Python comes with a package called [Sympy](https://www.sympy.org/en/index.html) which is a shortened term for *Symbolic Python*. Most simply, this package allows the user to analytically solve problems that he might otherwise have to solve numerically in Python. Generally, in the past, most analytical work was done in what is known as a Computer Algebra System (CAS) such as *Mathematica*. While *Mathematica* still remains at the peak of most symbolic representations, tools like Sympy can be a powerful asset in the Physicist's toolbelt.\n\nWe'll begin below with a few examples to get an understanding of how Sympy works. Considering I have never used Sympy before, I will be closely following the [Dynamics and Controls in Jupyter Notebooks Tutorial](https://dynamics-and-control.readthedocs.io/en/latest/index.html)\n\n## Imports\n\n\n\n```python\n# Python\nfrom IPython.display import display\n\n# 3rd Party Imports\nimport numpy as np\nimport sympy as sy\n```\n\n## Notebook Setup\n\nIt appears that Sympy, be default, does not try to \"pretty print\" math output. But, Jupyter notebooks can display LaTeX and MathJax equations so we will turn on this sympy feature now.\n\n\n```python\n# Turn on Pretty Print\nsy.init_printing()\n```\n\n## Symbol / Variable Creation\n\nPython behaves the same way with Sympy as it does with other aspects of the language. For example, if you attempted to run the line\n\n```python\nprint(x)\n```\n\nwithout first defining `x`, Python would throw a name error. Therefore, we would have to assign a value to `x` then call the `print` function like the example below.\n\n```python\nx = 5\nprint(x)\n```\n\nWith the above syntax, we are telling Python to first assign the value of 5 to `x` then recall what `x` represents. 
Sympy works the same way; however, we will use Sympy to tell the Python interpretter that `x` represents a Symbol. Although, you do not have to know this to know how Sympy works, it may be useful to know that Symbol is simply a class in Sympy and we are simply instantiating an object of that class.\n\nThe next cell shows how we might instantiate a Symbol object `x` and print its value.\n\n\n```python\nx = sy.Symbol('x')\ny = sy.Symbol('z')\ndisplay(x)\ndisplay(y)\n```\n\nNow, there are a few things to note from the cell above. On the first line, we initialized a Sympy Symbol called $x$ and stored it into the variable `x`. On the second line, we initialized a Sympy Symbol but $z$ but stored it into a variable called `y`. Note how Python does not care what the variable name is. We could have called the Symbol $z$ `joe` but Python would still interpret that variable `joe` as $z$.\n\nOne last thing to notice. Above, I call the IPython function `display` rather than the built in Python function `print`. This is simply bacause pretty printing is not a Python capability. Rather, this is an IPython capability. However, if (like below) you only have to print one thing per cell, IPython will automatically `display` the value for you.\n\n\n```python\nx**2\n```\n\n## Higher Level Examples\n\nWe'll next study Sympy's options by looking at the simply mathematical construct: the polynomial. Let's first create and display a polynomial to work with.\n\n\n```python\n# Create the Polynomial\npol = (3*x**2 - x + 1)**2\npol\n```\n\nWe can then perform higher level operations on the polynomial by operating directly on the `pol` object.\n\n\n```python\n# Display the Expanded Polynomial\npol.expand()\n```\n\n\n```python\n# Get the First Derivative\npol.diff()\n```\n\n\n```python\n# Get the Second Derivative\npol.diff(x, 2) # Arg 1: wrt; Arg 2: Second deriv\n```\n\n\n```python\n# Get the Indefinite Integral\npol.integrate()\n```\n\n\n```python\n# Get the Definite Integral from -1 to 1\ndisplay(pol.integrate((x, -1, 1)))\ndisplay(sy.N(pol.integrate((x, -1, 1)))) # As a Decimal Number\n```\n\nWe can even use Sympy to get the Taylor Series expansion of expressions.\n\n\n```python\ndisplay(sy.series(sy.exp(x), x, 0, 5)) # Expansion of e^x at x=0\ndisplay(sy.series(sy.sin(x), x, 0, 5)) # Expansion of sin(x) at x=0\ndisplay(sy.series(sy.cos(x), x, 0, 5)) # Expansion of cos(x) at x=0\n```\n\n### Solving a System of Equations\n\nIf you know what you are doing, solving a system of equations using a computer is just as easy to do it numerically as it is symbolically. However, it is sometimes nice to solve a system of equations with any arbitrary variables. For example, below we have a system of equations with four unknowns, but we are interested in solving the equations for $x$ and $y$.\n\nOne advantage (at least in Python) of solving the system numerically with Numpy is that Numpy is much faster than Sympy. For small systems of equations, this is not of great importance, but if you had many variables then this could become a problem very quickly. See the [Dynamics and Controls - Linear Systems Example](https://dynamics-and-control.readthedocs.io/en/latest/1_Dynamics/1_Modelling/Equation%20solving%20tools.html#Special-case:-linear-systems) for details on the speed of Sympy vs. 
the speed of Numpy.\n\n\n```python\n# Assign Symbols\nx, y, a, b = sy.symbols('x, y, a, b')\n\n# Solve the system of equations\nsy.solve(\n [\n a*x - 3*y + 4, # = 0\n 2*x + b*y - 1 # = 0\n ],\n [x, y]\n)\n```\n\nThis same concept can be extended to solving for a differential equation. Below, we express $x$ as some unknown function of $t$, setup the differential equation and solve it.\n\n\n```python\n# Create the Variables\nt = sy.Symbol('t', postive=True)\nw = sy.Symbol('omega', positive=True)\n\n# Create the Position function\nx = sy.Function('x', real=True)\n\n# Create and Print the Differential equation\nde = x(t).diff(t, 2) + w**2 * x(t) # = 0\nde\n```\n\n\n```python\n# Get the Solution\nsy.dsolve(de)\n```\n\n### The Laplace Transform\n\nTo avoid re-inventing the wheel, I will point you to the [Dynamics and Controls - Laplace Transform Introduction](https://dynamics-and-control.readthedocs.io/en/latest/1_Dynamics/3_Linear_systems/Laplace%20transforms.html#Laplace-transforms-in-SymPy) to begin this section. Instead, I will recreate the first few lines of a [Table of Laplace Transforms](http://tutorial.math.lamar.edu/Classes/DE/Laplace_Table.aspx).\n\nNow, I do not want to type out `sy.laplace_transform` and `sy.inverse_laplace_transform` everytime, so I will just import them below with a shorter, simpler name.\n\n\n```python\n# Import the Laplace Transforms\nfrom sympy import laplace_transform as L, inverse_laplace_transform as invL\n\n# Define the Symbols needed\ns, t = sy.symbols('s t')\na = sy.Symbol('a', real=True, positive=True)\n```\n\n\n```python\n# Do the Laplace Transform of 1\ndisplay(L(1, t, s))\n\n# Display the same without conditions\ndisplay(L(1, t, s, noconds=True))\n```\n\n\n```python\n# Do the Laplace Transform of exp(a t)\nL(sy.exp(a*t), t, s, noconds=True)\n```\n\n\n```python\n# Do the Laplace Transform of t^n\nn = sy.Symbol('n', integer=True, positive=True)\nL(t**n, t, s, noconds=True)\n```\n\n\n```python\n# Do the Laplace Transform of t^p, p > -1\np = sy.Symbol('p', real=True)\nL(t**p, t, s)\n```\n\n\n```python\n# Do the Laplace Transform of cos(a t)\nL(sy.cos(a*t), t, s, noconds=True)\n```\n\n\n```python\n# Show the integral of sinc(a x) = sin(a x)/(a x) directly\nf = sy.sin(t)/(t)\ndisplay(f)\n\n# Show the Definite integral of f from 0 to inf\ndisplay(f.integrate((t, 0, sy.oo)))\n\n# Get the Laplace Transform\nF = L(f, t, s, noconds=True)\ndisplay(F)\n\n# Do the Integral with the Laplace Transform\ndisplay(sy.lambdify((s), F, 'numpy')(0))\n```\n\n## Assignment\n\nYour assignment is to take a problem from another class for which you had to use the Laplace Transform, describe it in the text cell below, then get the Laplace Transform of the equation using Sympy in the cell below that.\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "02eac2cc0fef96efa102368cf47f960069947a34", "size": 75376, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "08-Sympy/Sympy.ipynb", "max_stars_repo_name": "wwaldron/NumericalPythonGuide", "max_stars_repo_head_hexsha": "8e0c2947251b9639cbc66d6462dd495c180e3faa", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-09-22T02:29:11.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-22T02:29:11.000Z", "max_issues_repo_path": "08-Sympy/Sympy.ipynb", "max_issues_repo_name": "wwaldron/NumericalPythonGuide", "max_issues_repo_head_hexsha": "8e0c2947251b9639cbc66d6462dd495c180e3faa", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, 
"max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "08-Sympy/Sympy.ipynb", "max_forks_repo_name": "wwaldron/NumericalPythonGuide", "max_forks_repo_head_hexsha": "8e0c2947251b9639cbc66d6462dd495c180e3faa", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 91.5868772783, "max_line_length": 4192, "alphanum_fraction": 0.8337667162, "converted": true, "num_tokens": 2058, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9416541659378681, "lm_q2_score": 0.8757869932689565, "lm_q1q2_score": 0.8246884706859126}} {"text": "# 2021-09-13 Variable coefficients\n\n## Last time\n* Conditioning and stability of polynomial interpolation\n* Runge Phenomenon\n* Chebyshev differentiation\n* Chebyshev-based solver for the Poisson problem\n\n## Today\n* Variable coefficients\n* Conservative/divergence form vs non-divergence forms\n* Verification with discontinuities\n\n\n```julia\nusing Plots\nusing LinearAlgebra\n\nfunction vander(x, k=nothing)\n if k === nothing\n k = length(x)\n end\n V = ones(length(x), k)\n for j = 2:k\n V[:, j] = V[:, j-1] .* x\n end\n V\nend\n\nfunction fdstencil(source, target, k)\n \"kth derivative stencil from source to target\"\n x = source .- target\n V = vander(x)\n rhs = zero(x)'\n rhs[k+1] = factorial(k)\n rhs / V\nend\n\n\nfunction poisson(x, spoints, forcing; left=(0, zero), right=(0, zero))\n n = length(x)\n L = zeros(n, n)\n rhs = forcing.(x)\n for i in 2:n-1\n jleft = min(max(1, i-spoints\u00f72), n-spoints+1)\n js = jleft : jleft + spoints - 1\n L[i, js] = -fdstencil(x[js], x[i], 2)\n end\n L[1,1:spoints] = fdstencil(x[1:spoints], x[1], left[1])\n L[n,n-spoints+1:n] = fdstencil(x[n-spoints+1:n], x[n], right[1])\n rhs[1] = left[2](x[1])\n rhs[n] = right[2](x[n])\n L, rhs\nend\n\nCosRange(a, b, n) = (a + b)/2 .+ (b - a)/2 * cos.(LinRange(-pi, 0, n))\n\nfunction vander_chebyshev(x, n=nothing)\n if isnothing(n)\n n = length(x) # Square by default\n end\n m = length(x)\n T = ones(m, n)\n if n > 1\n T[:, 2] = x\n end\n for k in 3:n\n T[:, k] = 2 * x .* T[:,k-1] - T[:, k-2]\n end\n T\nend\n\nfunction chebdiff(x, n=nothing)\n T = vander_chebyshev(x, n)\n m, n = size(T)\n dT = zero(T)\n dT[:,2:3] = [one.(x) 4*x]\n for j in 3:n-1\n dT[:,j+1] = j * (2 * T[:,j] + dT[:,j-1] / (j-2))\n end\n ddT = zero(T)\n ddT[:,3] .= 4\n for j in 3:n-1\n ddT[:,j+1] = j * (2 * dT[:,j] + ddT[:,j-1] / (j-2))\n end\n T, dT, ddT\nend\n```\n\n\n\n\n chebdiff (generic function with 2 methods)\n\n\n\n# Lagrange interpolating polynomials on `CosRange`\n\n\"nodal\" --- \"modal\"\n\n\n```julia\nn = 5\nx = CosRange(-1, 1, n)\nxx = LinRange(-1, 1, 100)\nTx = vander_chebyshev(x)\nTxx, dTxx, ddTxx = chebdiff(xx, n)\nplot(xx, Txx / Tx)\nscatter!(x, [one.(x) zero.(x)])\n```\n\n\n\n\n \n\n \n\n\n\n\n```julia\nplot(xx, Txx)\n```\n\n\n\n\n \n\n \n\n\n\n# Solving a BVP on Chebyshev nodes\n\n\n```julia\nfunction poisson_cheb(n, rhsfunc, leftbc=(0, zero), rightbc=(0, zero))\n x = CosRange(-1, 1, n)\n T, dT, ddT = chebdiff(x)\n dT /= T\n ddT /= T\n T /= T\n L = -ddT\n rhs = rhsfunc.(x)\n for (index, deriv, func) in\n [(1, leftbc...), (n, rightbc...)]\n L[index,:] = (T, dT)[deriv+1][index,:]\n rhs[index] = func(x[index])\n end\n x, L, rhs\nend\n```\n\n\n\n\n poisson_cheb (generic function with 3 methods)\n\n\n\n\n```julia\nmanufactured(x) = tanh(2x)\nd_manufactured(x) = 
2*cosh(2x)^-2\nmdd_manufactured(x) = 8 * tanh(2x) / cosh(2x)^2\nx, A, rhs = poisson_cheb(12, mdd_manufactured,\n (0, manufactured), (1, d_manufactured))\nplot(x, A \\ rhs, marker=:auto)\nplot!(manufactured, legend=:bottomright)\n```\n\n\n\n\n \n\n \n\n\n\n# \"spectral\" (exponential) convergence\n\n\n```julia\nfunction poisson_error(n)\n x, A, rhs = poisson_cheb(n, mdd_manufactured, (0, manufactured), (1, d_manufactured))\n u = A \\ rhs\n norm(u - manufactured.(x), Inf)\nend\n\nns = 3:20\nplot(ns, abs.(poisson_error.(ns)), marker=:auto, yscale=:log10)\nps = [1 2 3]\nplot!([n -> n^-p for p in ps], label=map(p -> \"\\$n^{-$p}\\$\", ps))\n```\n\n\n\n\n \n\n \n\n\n\n# Variable coefficients\n\n* Heat conduction: steel, brick, wood, foam\n* Electrical conductivity: copper, rubber, air\n* Elasticity: steel, rubber, concrete, rock\n* Linearization of nonlinear materials\n * ketchup, glacier ice, rocks (mantle/lithosphere)\n\n\n```julia\nkappa_step(x) = .1 + .9 * (x > 0)\nkappa_smooth(x) = .55 + .45 * sin(pi*x/2)\nplot([kappa_step, kappa_smooth], xlims=(-1, 1), ylims=(0, 1), label=\"\u03ba\")\n```\n\n\n\n\n \n\n \n\n\n\n\\begin{align}\n-(\\kappa u_x)_x &= 0 & u(-1) &= 0 & \\kappa u_x(1) &= 1\n\\end{align}\n\n* What physical scenario could this represent?\n* Qualitatively, what would a solution look like?\n\n# A naive finite difference solver\n\n| Conservative (divergence) form | Non-divergence form |\n| ------------------------------ | ------------------- |\n| $-(\\kappa u_x)_x = 0$ | $-\\kappa u_{xx} - \\kappa_x u_x = 0$ |\n\n\n```julia\nfunction poisson_nondivergence(x, spoints, kappa, forcing; leftbc=(0, zero), rightbc=(0, zero))\n n = length(x)\n L = zeros(n, n)\n rhs = forcing.(x)\n kappax = kappa.(x)\n for i in 2:n-1\n jleft = min(max(1, i-spoints\u00f72), n-spoints+1)\n js = jleft : jleft + spoints - 1\n kappa_x = fdstencil(x[js], x[i], 1) * kappax[js]\n L[i, js] = -fdstencil(x[js], x[i], 2) .* kappax[i] - fdstencil(x[js], x[i], 1) * kappa_x\n end\n L[1,1:spoints] = fdstencil(x[1:spoints], x[1], leftbc[1])\n if leftbc[1] == 1\n L[1, :] *= kappax[1]\n end\n L[n,n-spoints+1:n] = fdstencil(x[n-spoints+1:n], x[n], rightbc[1])\n if rightbc[1] == 1\n L[n, :] *= kappax[n]\n end\n rhs[1] = leftbc[2](x[1])\n rhs[n] = rightbc[2](x[n])\n L, rhs\nend\n```\n\n\n\n\n poisson_nondivergence (generic function with 1 method)\n\n\n\n# Try it\n\n\n```julia\nx = LinRange(-1, 1, 30)\nL, rhs = poisson_nondivergence(x, 3, kappa_smooth, zero, rightbc=(1, one))\nu = L \\ rhs\nplot(x, u)\nplot!(x -> 5*kappa_smooth(x))\n```\n\n\n\n\n \n\n \n\n\n\n# Manufactured solutions for variable coefficients\n\n\n```julia\nmanufactured(x) = tanh(2x)\nd_manufactured(x) = 2*cosh(2x)^-2\nd_kappa_smooth(x) = .45*pi/2 * cos(pi*x/2)\nflux_manufactured_kappa_smooth(x) = kappa_smooth(x) * d_manufactured(x)\nfunction forcing_manufactured_kappa_smooth(x)\n 8 * tanh(2x) / cosh(2x)^2 * kappa_smooth(x) -\n d_kappa_smooth(x) * d_manufactured(x)\nend\nx = LinRange(-1, 1, 20)\nL, rhs = poisson_nondivergence(x, 3, kappa_smooth,\n forcing_manufactured_kappa_smooth,\n leftbc=(0, manufactured), rightbc=(1, flux_manufactured_kappa_smooth))\nu = L \\ rhs\nplot(x, u, marker=:auto, legend=:bottomright)\n\nplot!([manufactured flux_manufactured_kappa_smooth forcing_manufactured_kappa_smooth kappa_smooth])\n```\n\n\n\n\n \n\n \n\n\n\n# Convergence\n\n\n```julia\nfunction poisson_error(n, spoints=3)\n x = LinRange(-1, 1, n)\n L, rhs = poisson_nondivergence(x, spoints, kappa_smooth,\n forcing_manufactured_kappa_smooth,\n leftbc=(0, manufactured), rightbc=(1, 
flux_manufactured_kappa_smooth))\n u = L \\ rhs\n norm(u - manufactured.(x), Inf)\nend\nns = 2 .^ (3:10)\nplot(ns, poisson_error.(ns, 5), marker=:auto, xscale=:log10, yscale=:log10)\nplot!([n -> n^-2, n -> n^-4], label=[\"\\$1/n^2\\$\" \"\\$1/n^4\\$\"])\n```\n\n\n\n\n \n\n \n\n\n\n# \u2705 Verified!\n\n# Let's try with the discontinuous coefficients\n\n\n```julia\nx = LinRange(-1, 1, 20)\nL, rhs = poisson_nondivergence(x, 3, kappa_step,\n zero,\n leftbc=(0, zero), rightbc=(0, one))\nu = L \\ rhs\nplot(x, u, marker=:auto, legend=:bottomright)\n```\n\n\n\n\n \n\n \n\n\n\n# Discretizing in conservative form\n\n| Conservative (divergence) form | Non-divergence form |\n| ------------------------------ | ------------------- |\n| $-(\\kappa u_x)_x = 0$ | $-\\kappa u_{xx} - \\kappa_x u_x = 0$ |\n\n\n```julia\nfunction poisson_conservative(n, kappa, forcing; leftbc=(0, zero), rightbc=(0, zero))\n x = LinRange(-1, 1, n)\n xstag = (x[1:end-1] + x[2:end]) / 2\n L = zeros(n, n)\n rhs = forcing.(x)\n kappa_stag = kappa.(xstag)\n for i in 2:n-1\n flux_L = kappa_stag[i-1] * fdstencil(x[i-1:i], xstag[i-1], 1)\n flux_R = kappa_stag[i] * fdstencil(x[i:i+1], xstag[i], 1)\n js = i-1:i+1\n weights = -fdstencil(xstag[i-1:i], x[i], 1)\n L[i, i-1:i+1] = weights[1] * [flux_L..., 0] + weights[2] * [0, flux_R...]\n end\n if leftbc[1] == 0\n L[1, 1] = 1\n rhs[1] = leftbc[2](x[1])\n rhs[2:end] -= L[2:end, 1] * rhs[1]\n L[2:end, 1] .= 0\n end\n if rightbc[1] == 0\n L[end,end] = 1\n rhs[end] = rightbc[2](x[end])\n rhs[1:end-1] -= L[1:end-1,end] * rhs[end]\n L[1:end-1,end] .= 0\n end\n x, L, rhs\nend\n```\n\n\n\n\n poisson_conservative (generic function with 1 method)\n\n\n\n# Compare conservative vs non-divergence forms\n\n\n```julia\nforcing = zero # one\nx, L, rhs = poisson_conservative(20, kappa_step,\n forcing, leftbc=(0, zero), rightbc=(0, one))\nu = L \\ rhs\nplot(x, u, marker=:auto, legend=:bottomright)\n```\n\n\n\n\n \n\n \n\n\n\n\n```julia\nx = LinRange(-1, 1, 200)\nL, rhs = poisson_nondivergence(x, 3, kappa_step,\n forcing, leftbc=(0, zero), rightbc=(0, one))\nu = L \\ rhs\nplot(x, u, marker=:auto, legend=:bottomright)\n```\n\n\n\n\n \n\n \n\n\n\n# Continuity of flux\n\n\n```julia\nforcing = zero\nx, L, rhs = poisson_conservative(20, kappa_step,\n forcing, leftbc=(0, zero), rightbc=(0, one))\nu = L \\ rhs\nplot(x, u, marker=:auto, legend=:bottomright)\n```\n\n\n\n\n \n\n \n\n\n\n\n```julia\nxstag = (x[1:end-1] + x[2:end]) ./ 2\ndu = (u[1:end-1] - u[2:end]) ./ diff(x)\nplot(xstag, [du .* kappa_step.(xstag)], marker=:auto, ylims=[-1, 1])\n```\n\n\n\n\n \n\n \n\n\n\n# Manufactured solutions with discontinuous coefficients\n\n* We need to be able to evaluate derivatives of the flux $-\\kappa u_x$.\n* A physically-realizable solution would have continuous flux, but we we'd have to be making a physical solution to have that in verification.\n* Idea: replace the discontinuous function with a continuous one with a rapid transition.\n\n\n```julia\nkappa_tanh(x, epsilon=.1) = .55 + .45 * tanh(x / epsilon)\nd_kappa_tanh(x, epsilon=.1) = .45/epsilon * cosh(x/epsilon)^-2\nplot([kappa_tanh])\n```\n\n\n\n\n \n\n \n\n\n\n# Solving with the smoothed step $\\kappa$\n\n\n```julia\nkappa_tanh(x, epsilon=.01) = .55 + .45 * tanh(x / epsilon)\nd_kappa_tanh(x, epsilon=.01) = .45/epsilon * cosh(x/epsilon)^-2\nflux_manufactured_kappa_tanh(x) = kappa_tanh(x) * d_manufactured(x)\nfunction forcing_manufactured_kappa_tanh(x)\n 8 * tanh(2x) / cosh(2x)^2 * kappa_tanh(x) -\n d_kappa_tanh(x) * d_manufactured(x)\nend\nx, L, rhs = 
poisson_conservative(300, kappa_tanh,\n forcing_manufactured_kappa_tanh,\n leftbc=(0, manufactured), rightbc=(0, manufactured))\nu = L \\ rhs\nplot(x, u, marker=:auto, legend=:bottomright, title=\"Error $(norm(u - manufactured.(x), Inf))\")\nplot!([manufactured flux_manufactured_kappa_tanh forcing_manufactured_kappa_tanh kappa_tanh])\n```\n\n\n\n\n \n\n \n\n\n\n# Convergence\n\n\n```julia\nfunction poisson_error(n, spoints=3)\n x, L, rhs = poisson_conservative(n, kappa_tanh,\n forcing_manufactured_kappa_tanh,\n leftbc=(0, manufactured), rightbc=(0, manufactured))\n u = L \\ rhs\n norm(u - manufactured.(x), Inf)\nend\nns = 2 .^ (3:10)\nplot(ns, poisson_error.(ns, 3), marker=:auto, xscale=:log10, yscale=:log10)\nplot!(n -> n^-2, label=\"\\$1/n^2\\$\")\n```\n\n\n\n\n \n\n \n\n\n", "meta": {"hexsha": "827b674f1eac93814e1b09eeeb58f9349c16a320", "size": 640263, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "slides/2021-09-13-coefficients.ipynb", "max_stars_repo_name": "cu-numpde/numpde", "max_stars_repo_head_hexsha": "e5e1a465a622eba56900004f9a503412407cdccf", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-01-01T20:54:51.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-01T20:54:51.000Z", "max_issues_repo_path": "slides/2021-09-13-coefficients.ipynb", "max_issues_repo_name": "amta3208/fall21", "max_issues_repo_head_hexsha": "e5e1a465a622eba56900004f9a503412407cdccf", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "slides/2021-09-13-coefficients.ipynb", "max_forks_repo_name": "amta3208/fall21", "max_forks_repo_head_hexsha": "e5e1a465a622eba56900004f9a503412407cdccf", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-01-01T20:54:46.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-01T20:54:46.000Z", "avg_line_length": 175.7515783695, "max_line_length": 22934, "alphanum_fraction": 0.6527192732, "converted": true, "num_tokens": 3797, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9263037302939516, "lm_q2_score": 0.8902942275774318, "lm_q1q2_score": 0.8246828640641474}} {"text": "Integration on manifolds exercise in Sympy!\n===========================================\nHere I compute the scale factor $J(t)$ for the integration over the faces of a \"cube mapped\" sphere. 
That is\n$$ F = (u,v,1)^t/\\sqrt{u^2 + v^2 + 1}$$\n\nExplanation:\n------------\nFollowing http://www.owlnet.rice.edu/~fjones/chap11.pdf\n\nLet $M \\in R^n$ be a manifold defined by map $F: A \\rightarrow M$, where $A \\in R^m, m(t_f*Tpercentage): \n print(Tpercentage*100, '% completed')\n Tpercentage +=0.1\n i=i+1\n \nend = time.time()\nprint('The elapsed time was:', end - start)\n```\n\n 10.0 % completed\n 20.0 % completed\n 30.000000000000004 % completed\n 40.0 % completed\n 50.0 % completed\n 60.0 % completed\n 70.0 % completed\n 80.0 % completed\n 89.99999999999999 % completed\n 99.99999999999999 % completed\n The elapsed time was: 878.3249571323395\n\n\n\n```python\nn = len(t)\nEnergy = np.zeros(n)\nAngMom = np.zeros(n)\n\nfor i in range(n):\n speed2 = Q[2,i]**2 + Q[3,i]**2\n r = np.sqrt(Q[0,i]**2 + Q[1,i]**2)\n Energy[i] = speed2/2 - G*M/r\n AngMom[i] = Q[0,i]*Q[3,i] - Q[1,i]*Q[2,i]\n\n \nfig, ax = plt.subplots(1,2, figsize=(14,5))\n\nax[0].plot(Q[0,:], Q[1,:], color='cornflowerblue', label=f'$h=$ {h:.2e}')\nax[0].set_title('RK4 Method')\nax[0].set_xlabel(r'$x$')\nax[0].set_ylabel(r'$y$')\nax[0].legend()\n\nax[1].plot(t, Energy, color='mediumslateblue', label=f'Energy')\nax[1].set_title('RK4 Method')\nax[1].set_xlabel(r'$t$')\nax[1].set_ylabel(r'$E, \\ell$')\nax[1].legend()\n\nplt.show()\n```\n\n\n```python\nnp.abs(Energy[n-1] - Energy[0])\n```\n\n\n\n\n 3.2153850852978394e-06\n\n\n\nNote that the conservation of energy is improved but the computation time increases considerably.\n\nThe RK4 algorithm evaluates $100000$ grid points in $5.71$ s., with change in the energy of $1.12\\times 10^{-05}$.\n\nThe adaptive RK algorithm, using the error tolerance $\\epsilon = 1\\times 10^{-11}$ and a fudge factor $S=0.999$, evaluated $326089$ points with a variation of the energy between the initial and final values of $3.22\\times 10^{-6}$. However, the computation time increased to $935.43$ s. or approximately $15.6$ minutes. \n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "67f6ec5c19d1a178d45d885f1499facc0dc68be4", "size": 217989, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "10._ODE1/presentation/ODE-Application01.ipynb", "max_stars_repo_name": "ashcat2005/ComputationalAstrophysics", "max_stars_repo_head_hexsha": "edda507d0d0a433dfd674a2451d750cf6ad3f1b7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-09-23T02:49:10.000Z", "max_stars_repo_stars_event_max_datetime": "2021-08-21T06:04:39.000Z", "max_issues_repo_path": "10._ODE1/presentation/ODE-Application01.ipynb", "max_issues_repo_name": "ashcat2005/ComputationalAstrophysics", "max_issues_repo_head_hexsha": "edda507d0d0a433dfd674a2451d750cf6ad3f1b7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "10._ODE1/presentation/ODE-Application01.ipynb", "max_forks_repo_name": "ashcat2005/ComputationalAstrophysics", "max_forks_repo_head_hexsha": "edda507d0d0a433dfd674a2451d750cf6ad3f1b7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2020-12-05T14:06:28.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-25T04:51:58.000Z", "avg_line_length": 287.964332893, "max_line_length": 52364, "alphanum_fraction": 0.9189592135, "converted": true, "num_tokens": 4261, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9294404096760998, "lm_q2_score": 0.8872046011730965, "lm_q1q2_score": 0.8246038079808435}} {"text": "\n\n# First Order Initial Value Problem\n \n\nThe more general form of a first order Ordinary Differential Equation is: \n\\begin{equation}\ny^{'}=f(t,y).\n\\end{equation}\nThis can be solved analytically by integrating both sides but this is not straight forward for most problems.\nNumerical methods can be used to approximate the solution at discrete points.\n\n\n## Euler method\n\nThe simplest one step numerical method is the Euler Method named after the most prolific of mathematicians [Leonhard Euler](https://en.wikipedia.org/wiki/Leonhard_Euler) (15 April 1707 \u2013 18 September 1783) .\n\nThe general Euler formula for the first order differential equation\n\\begin{equation}\ny^{'} = f(t,y), \n\\end{equation}\n\napproximates the derivative at time point $t_i$,\n\n\\begin{equation}\ny^{'}(t_i) \\approx \\frac{w_{i+1}-w_i}{t_{i+1}-t_{i}},\n\\end{equation}\n\nwhere $w_i$ is the approximate solution of $y$ at time $t_i$.\n\nThis substitution changes the differential equation into a __difference__ equation of the form \n\n\\begin{equation}\n\\frac{w_{i+1}-w_i}{t_{i+1}-t_{i}}=f(t_i,w_i). \n\\end{equation}\n\nAssuming uniform stepsize $t_{i+1}-t_{i}$ is replaced by $h$, re-arranging the equation gives\n\\begin{equation} \nw_{i+1}=w_i+hf(t_i,w_i).\n\\end{equation}\n This can be read as the future $w_{i+1}$ can be approximated by the present $w_i$ and the addition of the input to the system $f(t,y)$ times the time step.\n\n\n\n```python\n## Library\nimport numpy as np\nimport math \nimport pandas as pd\n%matplotlib inline\nimport matplotlib.pyplot as plt # side-stepping mpl backend\nimport matplotlib.gridspec as gridspec # subplots\nimport warnings\n\nwarnings.filterwarnings(\"ignore\")\n\n```\n\n## Population growth\n\nThe general form of the population growth differential equation is: \n\\begin{equation} \ny^{'}=\\epsilon y \n\\end{equation}\nwhere $\\epsilon$ is the growth rate. The initial population at time $a$ is \n\\begin{equation} \ny(a)=A,\n \\end{equation}\n\\begin{equation}\n a\\leq t \\leq b. \n\\end{equation}\nIntegrating gives the general analytic (exact) solution: \n\\begin{equation}\n y=Ae^{\\epsilon x}. \n\\end{equation}\nWe will use this equation to illustrate the application of the Euler method.\n \n## Discrete Interval\nThe continuous time $a\\leq t \\leq b $ is discretised into $N$ intervals by a constant stepsize\n\\begin{equation} \nh=\\frac{b-a}{N}.\n\\end{equation}\nHere the interval is $0\\leq t \\leq 2$ is discretised into $20$ intervals with stepsize\n\\begin{equation}\n h=\\frac{2-0}{20}=0.1,\n\\end{equation}\nthis gives the 21 discrete points:\n\\begin{equation}\n t_0=0, \\ t_1=0.1, \\ ... t_{20}=2. 
\n\\end{equation}\nThis is generalised to \n\\begin{equation}\nt_i=0+i0.1, \\ \\ \\ i=0,1,...,20.\n\\end{equation}\nThe plot below shows the discrete time steps.\n\n\n```python\n### Setting up time\nt_end=2.0\nt_start=0\nN=20\nh=(t_end-t_start)/(N)\ntime=np.arange(t_start,t_end+0.01,h)\nfig = plt.figure(figsize=(10,4))\nplt.plot(time,0*time,'o:',color='red')\nplt.xlim((0,2))\nplt.title('Illustration of discrete time points for h=%s'%(h))\nplt.plot();\n```\n\n## Initial Condition\nTo get a specify solution to a first order initial value problem, an __initial condition__ is required.\n\nFor our population problem the intial condition is:\n\\begin{equation}\ny(0)=10.\n\\end{equation}\nThis gives the analytic solution\n\\begin{equation}\ny=10e^{\\epsilon t}.\n\\end{equation}\n### Growth rate \nLet the growth rate\n\\begin{equation}\n\\epsilon=0.5\n\\end{equation}\ngiving the analytic solution.\n\\begin{equation}\ny=10e^{0.5 t}.\n\\end{equation}\nThe plot below shows the exact solution on the discrete time steps.\n\n\n```python\n## Analytic Solution y\ny=10*np.exp(0.5*time)\n\nfig = plt.figure(figsize=(10,4))\nplt.plot(time,y,'o:',color='black')\nplt.xlim((0,2))\nplt.xlabel('time')\nplt.ylabel('y')\nplt.title('Analytic (Exact) solution')\nplt.plot();\n```\n\n## Numerical approximation of Population growth\nThe differential equation is transformed using the Euler method into a difference equation of the form\n\\begin{equation} w_{i+1}=w_{i}+h \\epsilon w_i. \\end{equation}\nThis approximates a series of of values $w_0, \\ w_1, \\ ..., w_{N}$.\nFor the specific example of the population equation the difference equation is\n \\begin{equation} w_{i+1}=w_{i}+h 0.5 w_i. \\end{equation}\nwhere $w_0=10$. From this initial condition the series is approximated.\nThe plot below shows the exact solution $y$ in black circles and Euler approximation $w$ in blue squares. \n\n\n```python\nw=np.zeros(N+1)\nw[0]=10\nfor i in range (0,N):\n w[i+1]=w[i]+h*(0.5)*w[i]\n\nfig = plt.figure(figsize=(10,4))\nplt.plot(time,y,'o:',color='black',label='exact')\nplt.plot(time,w,'s:',color='blue',label='Euler')\nplt.xlim((0,2))\nplt.xlabel('time')\nplt.legend(loc='best')\nplt.title('Analytic and Euler solution')\nplt.plot();\n```\n\n## Numerical Error\nWith a numerical solution there are two types of error: \n* local truncation error at one time step; \n* global error which is the propagation of local error. \n\n### Derivation of Euler Local truncation error\nThe left hand side of a initial value problem $\\frac{dy}{dt}$ is approximated by __Taylors theorem__ expand about a point $t_0$ giving:\n\\begin{equation}\ny(t_1) = y(t_0)+(t_1-t_0)y^{'}(t_0) + \\frac{(t_1-t_0)^2}{2!}y^{''}(\\xi), \\ \\ \\ \\ \\ \\ \\xi \\in [t_0,t_1]. \n\\end{equation}\nRearranging and letting $h=t_1-t_0$ the equation becomes\n\\begin{equation}\ny^{'}(t_0)=\\frac{y(t_1)-y(t_0)}{h}-\\frac{h}{2}y^{''}(\\xi). 
\n\\end{equation}\nFrom this the local truncation error is\n\\begin{equation}\n\\tau \\leq \\frac{h}{2}M \\sim O(h),\n\\end{equation}\nwhere $y^{''}(t) \\leq M $.\n#### Derivation of Euler Local truncation error for the Population Growth\nIn most cases $y$ is unknown but in our example problem there is an exact solution which can be used to estimate the local truncation\n\\begin{equation}\ny'(t)=5e^{0.5 t},\n\\end{equation}\n\\begin{equation}\ny''(t)=2.5e^{0.5 t}.\n\\end{equation}\nFrom this a maximum upper limit can be calculated for $y^{''} $ on the interval $[t_0,t_1]=[0,0.1]$\n\\begin{equation}\ny''(0.1)=2.5e^{0.1\\times 0.5}=2.63=M,\n\\end{equation}\n\\begin{equation}\n\\tau=\\frac{h}{2}2.63=0.1315. \n\\end{equation}\nThe plot below shows the exact local truncation error $|y-w|$ (red triangle) and the upper limit of the Truncation error (black v) for the first two time points $t_0$ and $t_1$.\n\n\n```python\nfig = plt.figure(figsize=(10,4))\nplt.plot(time[0:2],np.abs(w[0:2]-y[0:2]),'^:'\n ,color='red',label='Error |y-w|')\nplt.plot(time[0:2],0.1*2.63/2*np.ones(2),'v:'\n ,color='black',label='Upper Local Truncation')\nplt.xlim((0,.15))\nplt.xlabel('time')\nplt.legend(loc='best')\nplt.title('Local Truncation Error')\nplt.plot();\n```\n\n## Global Error\nThe error does not stay constant accross the time this is illustrated in the figure below for the population growth equation. The actual error (red triangles) increases over time while the local truncation error (black v) remains constant.\n\n\n```python\nfig = plt.figure(figsize=(10,4))\nplt.plot(time,np.abs(w-y),'^:'\n ,color='red',label='Error |y-w|')\nplt.plot(time,0.1*2.63/2*np.ones(N+1),'v:'\n ,color='black',label='Upper Local Truncation')\nplt.xlim((0,2))\nplt.xlabel('time')\nplt.legend(loc='best')\nplt.title('Why Local Truncation does not extend to global')\nplt.plot();\n```\n\n## Theorems\nThe theorem below proves an upper limit of the global truncation error.\n### Euler Global Error\n__Theorem Global Error__\n\nSuppose $f$ is continuous and satisfies a Lipschitz Condition with constant\nL on $D=\\{(t,y)|a\\leq t \\leq b, -\\infty < y < \\infty \\}$ and that a constant M\nexists with the property that \n\\begin{equation}\n|y^{''}(t)|\\leq M. \n\\end{equation}\nLet $y(t)$ denote the unique solution of the Initial Value Problem\n\n\\begin{equation}\ny^{'}=f(t,y) \\ \\ \\ a\\leq t \\leq b, \\ \\ \\ y(a)=\\alpha, \n\\end{equation}\nand $w_0,w_1,...,w_N$ be the approx generated by the Euler method for some\npositive integer N. Then for $i=0,1,...,N$\n\\begin{equation}\n|y(t_i)-w_i| \\leq \\frac{Mh}{2L}|e^{L(t_i-a)}-1|. \n\\end{equation}\n\n### Theorems about Ordinary Differential Equations\n__Definition__\n\nA function $f(t,y)$ is said to satisfy a __Lipschitz Condition__ in the variable $y$ on \nthe set $D \\subset R^2$ if a constant $L>0$ exist with the property that\n\\begin{equation}\n|f(t,y_1)-f(t,y_2)| < L|y_1-y_2| \n\\end{equation}\nwhenever $(t,y_1),(t,y_2) \\in D$. The constant L is call the Lipschitz Condition\nof $f$.\n\n__Theorem__\nSuppose $f(t,y)$ is defined on a convex set $D \\subset R^2$. 
If a constant\n$L>0$ exists with\n\\begin{equation}\n\\left|\\frac{\\partial f(t,y)}{\\partial y}\\right|\\leq L,\n\\end{equation}\nthen $f$ satisfies a Lipschitz Condition an $D$ in the variable $y$ with\nLipschitz constant L.\n\n\n### Global truncation error for the population equation\nFor the population equation specific values $L$ and $M$ can be calculated.\n\nIn this case $f(t,y)=\\epsilon y$ is continuous and satisfies a Lipschitz Condition with constant\n\\begin{equation} \\left|\\frac{\\partial f(t,y)}{\\partial y}\\right|\\leq L, \\end{equation}\n\\begin{equation} \\left|\\frac{\\partial \\epsilon y}{\\partial y}\\right|\\leq \\epsilon=0.5=L, \\end{equation}\n\non $D=\\{(t,y)|0\\leq t \\leq 2, 10 < y < 30 \\}$ and that a constant $M$\nexists with the property that \n\\begin{equation} |y^{''}(t)|\\leq M. \\end{equation}\n\\begin{equation} |y^{''}(t)|=2.5e^{0.5\\times 2} \\leq 2.5 e=6.8. \\end{equation}\n\n__Specific Theorem Global Error__\n\nLet $y(t)$ denote the unique solution of the Initial Value Problem\n\\begin{equation} y^{'}=0.5 y, \\ \\ \\ 0\\leq t \\leq 2, \\ \\ \\ y(0)=10, \\end{equation}\nand $w_0,w_1,...,w_N$ be the approx generated by the Euler method for some\npositive integer N. Then for $i=0,1,...,N$\n\\begin{equation} |y(t_i)-w_i| \\leq \\frac{6.8 h}{2\\times 0.5}|e^{0.5(t_i-0)}-1| \\end{equation}\n\nThe figure below shows the exact error $y-w$ in red triangles and the upper global error in black x's.\n\n\n```python\nfig = plt.figure(figsize=(10,4))\nplt.plot(time,np.abs(w-y),'^:'\n ,color='red',label='Error |y-w|')\nplt.plot(time,0.1*6.8*(np.exp(0.5*time)-1),'x:'\n ,color='black',label='Upper Global Truncation')\nplt.xlim((0,2))\nplt.xlabel('time')\nplt.legend(loc='best')\nplt.title('Global Truncation Error')\nplt.plot();\n```\n\n### Table\nThe table below shows the iteration $i$, the discrete time point t[i], the Euler approximation w[i] of the solution $y$, the exact error $|y-w|$ and the upper limit of the global error for the linear population equation.\n\n\n```python\n\nd = {'time t_i': time[0:10], 'Euler (w_i) ':w[0:10],'Exact (y)':y[0:10],'Exact Error (|y_i-w_i|)':np.round(np.abs(w[0:10]-y[0:10]),10),r'Global Error ':np.round(0.1*6.8*(np.exp(0.5*time[0:10])-1),20)}\ndf = pd.DataFrame(data=d)\ndf\n```\n\n\n\n\n
| | time t_i | Euler (w_i) | Exact (y) | Exact Error (\|y_i-w_i\|) | Global Error |
|---|---|---|---|---|---|
| 0 | 0.0 | 10.000000 | 10.000000 | 0.000000 | 0.000000 |
| 1 | 0.1 | 10.500000 | 10.512711 | 0.012711 | 0.034864 |
| 2 | 0.2 | 11.025000 | 11.051709 | 0.026709 | 0.071516 |
| 3 | 0.3 | 11.576250 | 11.618342 | 0.042092 | 0.110047 |
| 4 | 0.4 | 12.155062 | 12.214028 | 0.058965 | 0.150554 |
| 5 | 0.5 | 12.762816 | 12.840254 | 0.077439 | 0.193137 |
| 6 | 0.6 | 13.400956 | 13.498588 | 0.097632 | 0.237904 |
| 7 | 0.7 | 14.071004 | 14.190675 | 0.119671 | 0.284966 |
| 8 | 0.8 | 14.774554 | 14.918247 | 0.143693 | 0.334441 |
| 9 | 0.9 | 15.513282 | 15.683122 | 0.169840 | 0.386452 |
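The exact error stays below the global error bound at every step, as the theorem guarantees. Since that bound is proportional to the step size $h$, halving $h$ should roughly halve the worst-case error. The sketch below is an aside, not part of the original worked example; the helper name `euler_max_error` is made up here, and it assumes NumPy is available, as elsewhere in this notebook:

```python
# Aside: check first-order convergence of Euler's method for y' = 0.5*y, y(0) = 10.
# The maximum error over [0, 2] should roughly halve each time N is doubled (h is halved).
import numpy as np

def euler_max_error(N, t_end=2.0, y0=10.0, eps=0.5):
    h = t_end / N
    t = np.linspace(0, t_end, N + 1)
    w = np.zeros(N + 1)
    w[0] = y0
    for i in range(N):
        w[i + 1] = w[i] + h * eps * w[i]              # Euler update w_{i+1} = w_i + h*eps*w_i
    return np.max(np.abs(w - y0 * np.exp(eps * t)))   # compare with the exact solution y = 10*e^{0.5 t}

for N in [20, 40, 80, 160]:
    print(N, euler_max_error(N))
```

Each doubling of $N$ roughly halves the maximum error, consistent with the $O(h)$ behaviour of the global error bound.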
    \n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "1621e89276aac483fc8ea8b708898c24f1c26b60", "size": 144717, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Chapter 01 - Euler Methods/101_Euler_method_with_Theorems_Growth_function.ipynb", "max_stars_repo_name": "john-s-butler-dit/Numerical-Analysis-Python", "max_stars_repo_head_hexsha": "edd89141efc6f46de303b7ccc6e78df68b528a91", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 69, "max_stars_repo_stars_event_min_datetime": "2019-09-05T21:39:12.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-26T14:00:25.000Z", "max_issues_repo_path": "Chapter 01 - Euler Methods/101_Euler_method_with_Theorems_Growth_function.ipynb", "max_issues_repo_name": "Zak2020/Numerical-Analysis-Python", "max_issues_repo_head_hexsha": "edd89141efc6f46de303b7ccc6e78df68b528a91", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Chapter 01 - Euler Methods/101_Euler_method_with_Theorems_Growth_function.ipynb", "max_forks_repo_name": "Zak2020/Numerical-Analysis-Python", "max_forks_repo_head_hexsha": "edd89141efc6f46de303b7ccc6e78df68b528a91", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 13, "max_forks_repo_forks_event_min_datetime": "2021-06-17T15:34:04.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-14T14:53:43.000Z", "avg_line_length": 195.8281461434, "max_line_length": 27842, "alphanum_fraction": 0.8706233545, "converted": true, "num_tokens": 4341, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8933094032139577, "lm_q2_score": 0.9230391664210672, "lm_q1q2_score": 0.8245595668987125}} {"text": "\n\n\n\n# Critical Points and Optimization\nWe've explored various techniques that we can use to calculate the derivative of a function at a specific *x* value; in other words, we can determine the *slope* of the line created by the function at any point on the line.\n\nThis ability to calculate the slope means that we can use derivatives to determine some interesting properties of the function.\n\n## Function Direction at a Point\nConsider the following function, which represents the trajectory of a ball that has been kicked on a football field:\n\n\\begin{equation}k(x) = -10x^{2} + 100x + 3 \\end{equation}\n\nRun the Python code below to graph this function and see the trajectory of the ball over a period of 10 seconds.\n\n\n```python\n%matplotlib inline\n\n# Create function k\ndef k(x):\n return -10*(x**2) + (100*x) + 3\n\nfrom matplotlib import pyplot as plt\n\n# Create an array of x values to plot\nx = list(range(0, 11))\n\n# Use the function to get the y values\ny = [k(i) for i in x]\n\n# Set up the graph\nplt.xlabel('x (time in seconds)')\nplt.ylabel('k(x) (height in feet)')\nplt.xticks(range(0,15, 1))\nplt.yticks(range(-200, 500, 20))\nplt.grid()\n\n# Plot the function\nplt.plot(x,y, color='green')\n\nplt.show()\n```\n\nBy looking at the graph of this function, you can see that it describes a parabola in which the ball rose in height before falling back to the ground. On the graph, it's fairly easy to see when the ball was rising and when it was falling.\n\nOf course, we can also use derivative to determine the slope of the function at any point. 
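As a brief aside (not part of the original walkthrough), the slope at a single point can also be estimated numerically, which gives a handy sanity check on the differentiation rules applied next. A minimal sketch using a small central difference on `k`:

```python
# Aside: estimate the slope of k(x) = -10x**2 + 100x + 3 at x = 2 numerically
def k(x):
    return -10*(x**2) + (100*x) + 3

h = 1e-6
print((k(2 + h) - k(2 - h)) / (2 * h))   # approximately 60
```

The estimate (about 60) matches the value that the derivative function derived below gives at $x=2$.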
We can apply some of the rules we've discussed previously to determine the derivative function:\n\n- We can add together the derivatives of the individual terms (***-10x2***, ***100x***, and ***3***) to find the derivative of the entire function.\n- The *power* rule tells us that the derivative of ***-10x2*** is ***-10 • 2x***, which is ***-20x***.\n- The *power* rule also tells us that the derivative of ***100x*** is ***100***.\n- The derivative of any constant, such as ***3*** is ***0***.\n\nSo:\n\n\\begin{equation}k'(x) = -20x + 100 + 0 \\end{equation}\n\nWhich of course simplifies to:\n\n\\begin{equation}k'(x) = -20x + 100 \\end{equation}\n\nNow we can use this derivative function to find the slope for any value of ***x***.\n\nRun the cell below to see a graph of the function and its derivative function:\n\n\n```python\n%matplotlib inline\n\n# Create function k\ndef k(x):\n return -10*(x**2) + (100*x) + 3\n\ndef kd(x):\n return -20*x + 100\n\nfrom matplotlib import pyplot as plt\n\n# Create an array of x values to plot\nx = list(range(0, 11))\n\n# Use the function to get the y values\ny = [k(i) for i in x]\n\n# Use the derivative function to get the derivative values\nyd = [kd(i) for i in x]\n\n# Set up the graph\nplt.xlabel('x (time in seconds)')\nplt.ylabel('k(x) (height in feet)')\nplt.xticks(range(0,15, 1))\nplt.yticks(range(-200, 500, 20))\nplt.grid()\n\n# Plot the function\nplt.plot(x,y, color='green')\n\n# Plot the derivative\nplt.plot(x,yd, color='purple')\n\nplt.show()\n```\n\nLook closely at the purple line representing the derivative function, and note that it is a constant decreasing value - in other words, the slope of the function is reducing linearly as x increases. Even though the function value itself is increasing for the first half of the parabola (while the ball is rising), the slope is becoming less steep (the ball is not rising at such a high rate), until finally the ball reaches its apogee and the slope becomes negative (the ball begins falling).\n\nNote also that the point where the derivative line crosses 0 on the y-axis is also the point where the function value stops increasing and starts decreasing. When the slope has a positive value, the function is increasing; and when the slope has a negative value, the function is decreasing.\n\nThe fact that the derivative line crosses 0 at the highest point of the function makes sense if you think about it logically. If you were to draw the tangent line representing the slope at each point, it would be rotating clockwise throughout the graph, initially pointing up and to the right as the ball rises, and turning until it is pointing down and right as the ball falls. 
At the highest point, the tangent line would be perfectly horizontal, representing a slope of 0.\n\nRun the following code to visualize this:\n\n\n```python\n%matplotlib inline\n\n# Create function k\ndef k(x):\n return -10*(x**2) + (100*x) + 3\n\ndef kd(x):\n return -20*x + 100\n\nfrom matplotlib import pyplot as plt\n\n# Create an array of x values to plot\nx = list(range(0, 11))\n\n# Use the function to get the y values\ny = [k(i) for i in x]\n\n# Use the derivative function to get the derivative values\nyd = [kd(i) for i in x]\n\n# Set up the graph\nplt.xlabel('x (time in seconds)')\nplt.ylabel('k(x) (height in feet)')\nplt.xticks(range(0,15, 1))\nplt.yticks(range(-200, 500, 20))\nplt.grid()\n\n# Plot the function\nplt.plot(x,y, color='green')\n\n# Plot the derivative\nplt.plot(x,yd, color='purple')\n\n# Plot tangent slopes for x = 2, 5, and 8\nx1 = 2\nx2 = 5\nx3 = 8\nplt.plot([x1-1,x1+1],[k(x1)-(kd(x1)),k(x1)+(kd(x1))], color='red')\nplt.plot([x2-1,x2+1],[k(x2)-(kd(x2)),k(x2)+(kd(x2))], color='red')\nplt.plot([x3-1,x3+1],[k(x3)-(kd(x3)),k(x3)+(kd(x3))], color='red')\n\nplt.show()\n```\n\nNow consider the following function, which represents the number of flowers growing in a flower bed before and after the spraying of a fertilizer:\n\n\\begin{equation}w(x) = x^{2} + 2x + 7 \\end{equation}\n\n\n```python\n%matplotlib inline\n\n# Create function w\ndef w(x):\n return (x**2) + (2*x) + 7\n\ndef wd(x):\n return 2*x + 2\n\nfrom matplotlib import pyplot as plt\n\n# Create an array of x values to plot\nx = list(range(-10, 11))\n\n# Use the function to get the y values\ny = [w(i) for i in x]\n\n# Use the derivative function to get the derivative values\nyd = [wd(i) for i in x]\n\n# Set up the graph\nplt.xlabel('x (time in days)')\nplt.ylabel('w(x) (flowers)')\nplt.xticks(range(-10,15, 1))\nplt.yticks(range(-200, 500, 20))\nplt.grid()\n\n# Plot the function\nplt.plot(x,y, color='green')\n\n# Plot the derivative\nplt.plot(x,yd, color='purple')\n\nplt.show()\n```\n\nNote that the green line represents the function, showing the number of flowers for 10 days before and after the fertilizer treatment. Before treatment, the number of flowers was in decline, and after treatment the flower bed started to recover.\n\nThe derivative function is shown in purple, and once again shows a linear change in slope. This time, the slope is increasing at a constant rate; and once again, the derivative function line crosses 0 at the lowest point in the function line (in other words, the slope changed from negative to positive when the flowers started to recover).\n\n## Critical Points\nFrom what we've seen so far, it seems that there is a relationship between a function reaching an extreme value (a maximum or a minimum), and a derivative value of 0. This makes intuitive sense; the derivative represents the slope of the line, so when a function changes from a negative slope to a positive slope, or vice-versa, the derivative must pass through 0.\n\nHowever, you need to be careful not to assume that just because the derivative is 0 at a given point, that this point represents the minimum or maximum of the function. 
For example, consider the following function:\n\n\\begin{equation}v(x) = x^{3} - 2x + 100 \\end{equation}\n\nRun the following Python code to visualize this function and its corresponding derivative function:\n\n\n```python\n%matplotlib inline\n\n# Create function v\ndef v(x):\n return (x**3) - (2*x) + 100\n\ndef vd(x):\n return 3*(x**2) - 2\n\nfrom matplotlib import pyplot as plt\n\n# Create an array of x values to plot\nx = list(range(-10, 11))\n\n# Use the function to get the y values\ny = [v(i) for i in x]\n\n# Use the derivative function to get the derivative values\nyd = [vd(i) for i in x]\n\n# Set up the graph\nplt.xlabel('x')\nplt.ylabel('v(x)')\nplt.xticks(range(-10,15, 1))\nplt.yticks(range(-1000, 2000, 100))\nplt.grid()\n\n# Plot the function\nplt.plot(x,y, color='green')\n\n# Plot the derivative\nplt.plot(x,yd, color='purple')\n\nplt.show()\n```\n\nNote that in this case, the purple derivative function line passes through 0 as the green function line transitions from a *concave downwards* slope (a slope that is decreasing) to a *concave upwards* slope (a slope that is increasing). The slope flattens out to 0, forming a \"saddle\" before the it starts increasing.\n\nWhat we can learn from this is that interesting things seem to happen to the function when the derivative is 0. We call points where the derivative crosses 0 *critical points*, because they indicate that the function is changing direction. When a function changes direction from positive to negative, it forms a peak (or a *local maximum*), when the function changes direction from negative to positive it forms a trough (or *local minimum*), and when it maintains the same overall direction but changes the concavity of the slope it creates an *inflexion point*.\n\n## Finding Minima and Maxima\nA common use of calculus is to find minimum and maximum points in a function. For example, we might want to find out how many seconds it took for the kicked football to reach its maximum height, or how long it took for our fertilizer to be effective in reversing the decline of flower growth.\n\nWe've seen that when a function changes direction to create a maximum peak or a minimum trough, the derivative of the function is 0, so a step towards finding these extreme points might be to simply find all of the points in the function where the derivative is 0. For example, here's our function for the kicked football:\n\n\\begin{equation}k(x) = -10x^{2} + 100x + 3 \\end{equation}\n\nFrom this, we've calculated the function for the derivative as:\n\n\\begin{equation}k'(x) = -20x + 100 \\end{equation}\n\nWe can then solve the derivative equation for an f'(x) value of 0:\n\n\\begin{equation}-20x + 100 = 0 \\end{equation}\n\nWe can remove the constant by subtracting 100 to both sides:\n\n\\begin{equation}-20x = -100 \\end{equation}\n\nMultiplying both sides by -1 gets rid of the negative values (this isn't strictly necessary, but makes the equation a little less confusing)\n\n\\begin{equation}20x = 100 \\end{equation}\n\nSo:\n\n\\begin{equation}x = 5 \\end{equation}\n\nSo we know that the derivative will be 0 when *x* is 5, but is this a minimum, a maximum, or neither? It could just be an inflexion point, or the entire function could be a constant value with a slope of 0) Without looking at the graph, it's difficult to tell.\n\n## Second Order Derivatives\nThe solution to our problem is to find the derivative of the derivative! Until now, we've found the derivative of a function, and indicated it as ***f'(x)***. 
Technically, this is known as the *prime* derivative; and it describes the slope of the function. Since the derivative function is itself a function, we can find its derivative, which we call the *second order* (or sometimes just *second*) derivative. This is indicated like this: ***f''(x)***.\n\nSo, here's our function for the kicked football:\n\n\\begin{equation}k(x) = -10x^{2} + 100x + 3 \\end{equation}\n\nHere's the function for the prime derivative:\n\n\\begin{equation}k'(x) = -20x + 100 \\end{equation}\n\nAnd using a combination of the power rule and the constant rule, here's the function for the second derivative:\n\n\\begin{equation}k''(x) = -20 \\end{equation}\n\nNow, without even drawing the graph, we can see that the second derivative has a constant value; so we know that the slope of the prime derivative is linear; and because it's a negative value, we know that it is decreasing. So when the prime derivative crosses 0, it we know that the slope of the function is decreasing linearly; so the point at *x=0* must be a maximum point.\n\nRun the following code to plot the function, the prime derivative, and the second derivative for the kicked ball:\n\n\n```python\n%matplotlib inline\n\n# Create function k\ndef k(x):\n return -10*(x**2) + (100*x) + 3\n\ndef kd(x):\n return -20*x + 100\n\ndef k2d(x):\n return -20\n\nfrom matplotlib import pyplot as plt\n\n# Create an array of x values to plot\nx = list(range(0, 11))\n\n# Use the function to get the y values\ny = [k(i) for i in x]\n\n# Use the derivative function to get the k'(x) values\nyd = [kd(i) for i in x]\n\n# Use the 2-derivative function to get the k''(x)\ny2d = [k2d(i) for i in x]\n\n# Set up the graph\nplt.xlabel('x (time in seconds)')\nplt.ylabel('k(x) (height in feet)')\nplt.xticks(range(0,15, 1))\nplt.yticks(range(-200, 500, 20))\nplt.grid()\n\n# Plot the function\nplt.plot(x,y, color='green')\n\n# Plot k'(x)\nplt.plot(x,yd, color='purple')\n\n# Plot k''(x)\nplt.plot(x,y2d, color='magenta')\n\nplt.show()\n```\n\nLet's take the same approach for the flower bed problem. Here's the function:\n\n\\begin{equation}w(x) = x^{2} + 2x + 7 \\end{equation}\n\nUsing the power rule and constant rule, gives us the prime derivative function:\n\n\\begin{equation}w'(x) = 2x + 2 \\end{equation}\n\nApplying the power rule and constant rule to the prime derivative function gives us the second derivative function:\n\n\\begin{equation}w''(x) = 2 \\end{equation}\n\nNote that this time, the second derivative is a positive constant, so the prime derivative (which is the slope of the function) is increasing linearly. The point where the prime derivative crosses 0 must therefore be a minimum. 
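As an extra cross-check (an aside; it assumes the SymPy library is available in this environment), the same derivatives can be computed symbolically:

```python
# Aside: symbolic cross-check of w'(x) and w''(x) using sympy (assumed installed)
import sympy as sp

x = sp.Symbol('x')
w = x**2 + 2*x + 7

print(sp.diff(w, x))               # 2*x + 2  -> the prime derivative
print(sp.diff(w, x, 2))            # 2        -> the second derivative
print(sp.solve(sp.diff(w, x), x))  # [-1]     -> where the prime derivative crosses 0
```

Both derivatives match the hand calculation above.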
Let's run the code below to check:\n\n\n```python\n%matplotlib inline\n\n# Create function w\ndef w(x):\n return (x**2) + (2*x) + 7\n\ndef wd(x):\n return 2*x + 2\n\ndef w2d(x):\n return 2\n\nfrom matplotlib import pyplot as plt\n\n# Create an array of x values to plot\nx = list(range(-10, 11))\n\n# Use the function to get the y values\ny = [w(i) for i in x]\n\n# Use the derivative function to get the w'(x) values\nyd = [wd(i) for i in x]\n\n# Use the 2-derivative function to get the w''(x) values\ny2d = [w2d(i) for i in x]\n\n# Set up the graph\nplt.xlabel('x (time in days)')\nplt.ylabel('w(x) (flowers)')\nplt.xticks(range(-10,15, 1))\nplt.yticks(range(-200, 500, 20))\nplt.grid()\n\n# Plot the function\nplt.plot(x,y, color='green')\n\n# Plot w'(x)\nplt.plot(x,yd, color='purple')\n\n# Plot w''(x)\nplt.plot(x,y2d, color='magenta')\n\nplt.show()\n```\n\n## Critical Points that are *Not* Maxima or Minima\nOf course, it's possible for a function to form a \"saddle\" where the prime derivative is zero at a point that is not a minimum or maximum. Here's an example of a function like this:\n \n\\begin{equation}v(x) = x^{3} - 6x^{2} + 12x + 2 \\end{equation}\n\nAnd here's its prime derivative:\n \n\\begin{equation}v'(x) = 3x^{2} - 12x + 12 \\end{equation}\n \nLet's find a critical point where v'(x) = 0\n \n\\begin{equation}3x^{2} - 12x + 12 = 0 \\end{equation}\n\nFactor the x-terms\n \n\\begin{equation}3x(x - 4) = 12 \\end{equation}\n\nDivide both sides by 3:\n\n\\begin{equation}x(x - 4) = 4 \\end{equation}\n\nFactor the x terms back again\n\n\\begin{equation}x^{2} - 4x = 4 \\end{equation}\n\nComplete the square, step 1\n\n\\begin{equation}x^{2} - 4x + 4 = 0 \\end{equation}\n\nComplete the square, step 2\n\n\\begin{equation}(x - 2)^{2} = 0 \\end{equation}\n\nFind the square root:\n\n\\begin{equation}x - 2 = \\pm\\sqrt{0}\\end{equation}\n\n\\begin{equation}x - 2 = +\\sqrt{0} = 0, -\\sqrt{0} = 0\\end{equation}\n\nv'(2) = 0 (only touches 0 once)\n\nIs it a maximum or minimum? Let's find the second derivative:\n\n\\begin{equation}v''(x) = 6x - 12\\end{equation}\n\nSo\n\n\\begin{equation}v''(2) = 0\\end{equation}\n\nSo it's neither negative or positive, so it's not a maximum or minimum.\n\n\n```python\n%matplotlib inline\n\n# Create function v\ndef v(x):\n return (x**3) - (6*(x**2)) + (12*x) + 2\n\ndef vd(x):\n return (3*(x**2)) - (12*x) + 12\n\ndef v2d(x):\n return (3*(2*x)) - 12\n\nfrom matplotlib import pyplot as plt\n\n# Create an array of x values to plot\nx = list(range(-5, 11))\n\n# Use the function to get the y values\ny = [v(i) for i in x]\n\n# Use the derivative function to get the derivative values\nyd = [vd(i) for i in x]\n\n# Use the derivative function to get the derivative values\ny2d = [v2d(i) for i in x]\n\n# Set up the graph\nplt.xlabel('x')\nplt.ylabel('v(x)')\nplt.xticks(range(-10,15, 1))\nplt.yticks(range(-2000, 2000, 50))\nplt.grid()\n\n# Plot the function\nplt.plot(x,y, color='green')\n\n# Plot the derivative\nplt.plot(x,yd, color='purple')\n\n# Plot the derivative\nplt.plot(x,y2d, color='magenta')\n\nplt.show()\n\nprint (\"v(2) = \" + str(v(2)))\n\nprint (\"v'(2) = \" + str(vd(2)))\n\nprint (\"v''(2) = \" + str(v2d(2)))\n\n```\n\n## Optimization\nThe ability to use derivatives to find minima and maxima of a function makes it a useful tool for scenarios where you need to optimize a function for a specific variable.\n\n### Defining Functions to be Optimized\nFor example, suppose you have decided to build an online video service that is based on a subscription model. 
You plan to charge a monthly subscription fee, and you want to make the most revenue possible. The problem is that customers are price-sensitive, so if you set the monthly fee too high, you'll deter some customers from signing up. Conversely, if you set the fee too low, you may get more customers, but at the cost of reduced revenue.\n\nWhat you need is some kind of function that will tell you how many subscriptions you might expect to get based on a given fee. So you've done some research, and found a formula to indicate that the expected subscription volume (in thousands) can be calculated as 5-times the monthly fee subtracted from 100; or expressed as a function:\n\n\\begin{equation}s(x) = -5x + 100\\end{equation}\n\nWhat you actually want to optimize is monthly revenue, which is simply the number of subscribers multiplied by the fee:\n\n\\begin{equation}r(x) = s(x) \\cdot x\\end{equation}\n\nWe can combine ***s(x)*** into ***r(x)*** like this:\n\n\\begin{equation}r(x) = -5x^{2} + 100x\\end{equation}\n\n### Finding the Prime Derivative\nThe function ***r(x)*** will return the expected monthly revenue (in thousands) for any proposed fee (*x*). What we need to do now is to find the fee that yields the maximum revenue. Fortunately, we can use a derivative to do that.\n\nFirst, we need to determine the prime derivative of ***r(x)***, and we can do that easily using the power rule:\n\n\\begin{equation}r'(x) = 2 \\cdot -5x + 100\\end{equation}\n\nWhich is:\n\n\\begin{equation}r'(x) = -10x + 100\\end{equation}\n\n### Find Critical Points\nNow we need to find any critical points where the derivative is 0, as this could indicate a maximum:\n\n\\begin{equation}-10x + 100 = 0\\end{equation}\n\nLet's isolate the *x* term:\n\n\\begin{equation}-10x = -100\\end{equation}\n\nBoth sides are negative, so we can mulitply both by -1 to make them positive without affecting the equation:\n\n\\begin{equation}10x = 100\\end{equation}\n\nNow we can divide both sides by 10 to isolate *x*:\n\n\\begin{equation}x = \\frac{100}{10}\\end{equation}\n\nSo:\n\n\\begin{equation}x = 10\\end{equation}\n\n#### Check for a Maximum\nWe now know that with an *x* value of of **10**, the derivative is 0; or put another way, when the fee is 10, the slope indicating the change in subscription volume is flat. This could potentially be a point where the change in subscription volume has peaked (in other words, a maximum); but it could also be a minimum or just an inflexion point where the rate of change transitions from negative to positive.\n\nTo be sure, we can check the second order derivative. We can calculate this by applying the power rule to the prime derivative:\n\n\\begin{equation}r''(x) = -10\\end{equation}\n\nNote that the second derivative is a constant with a negative value. 
It will be the same for any point, including our critical point at *x=10*:\n\n\\begin{equation}r''(10) = -10\\end{equation}\n\nA negative value for the second derivative tells us that the derivative slope is moving in a negative direction at the point where it is 0, so the function value must be at a maximum.\n\nIn other words, the optimal monthly fee for our online video service is 10 - this will generate the maximum monthly revenue.\n\nRun the code below to show the function ***r(x)*** as a graph, and verify that the maximum point is at x = 10.\n\n\n```python\n%matplotlib inline\n\n# Create function s\ndef s(x):\n return (-5*x) + 100\n\n# Create function r\ndef r(x):\n return s(x) * x\n\nfrom matplotlib import pyplot as plt\n\n# Create an array of x values to plot\nx = list(range(0, 21))\n\n# Use the function to get the y values\ny = [r(i) for i in x]\n\n# Set up the graph\nplt.xlabel('x (monthly fee)')\nplt.ylabel('r(x) (revenue in $,000)')\nplt.xticks(range(0,22, 1))\nplt.yticks(range(0, 600, 50))\nplt.grid()\n\n# Plot the function\nplt.plot(x,y, color='green')\n\nplt.show()\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "0b89f630cc227cd231549ef08c8b5b453bee9a5f", "size": 260557, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "02_04_Critical_Points_and_Optimization.ipynb", "max_stars_repo_name": "verryp/learning-phyton", "max_stars_repo_head_hexsha": "103470ae49652755f146baaa353d3c72e6588a4a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "02_04_Critical_Points_and_Optimization.ipynb", "max_issues_repo_name": "verryp/learning-phyton", "max_issues_repo_head_hexsha": "103470ae49652755f146baaa353d3c72e6588a4a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "02_04_Critical_Points_and_Optimization.ipynb", "max_forks_repo_name": "verryp/learning-phyton", "max_forks_repo_head_hexsha": "103470ae49652755f146baaa353d3c72e6588a4a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 259.0029821074, "max_line_length": 29960, "alphanum_fraction": 0.8911831192, "converted": true, "num_tokens": 5723, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9230391706552538, "lm_q2_score": 0.893309398243273, "lm_q1q2_score": 0.8245595660930145}} {"text": "# https://en.wikipedia.org/wiki/Finite_difference\n\nThree forms are commonly considered: forward, backward, and central differences.[1][2][3]\n\nA forward difference is an expression of the form\n\n$$ \\displaystyle \\Delta _{h}[f](x)=f(x+\\Delta x)-f(x).$$\nDepending on the application, the spacing h may be variable or constant. 
When omitted, $\\Delta x=h$ is taken to be 1: \u0394[\u2009f\u2009](x) = \u03941[\u2009f\u2009](x).\n\nA backward difference uses the function values at x and x \u2212 \\Delta, instead of the values at x + \\Delta and x:\n\n$$ \\displaystyle \\nabla _{h}[f](x)=f(x)-f(x-\\Delta x).$$\n\nFinally, the central difference is given by\n\n$$\\displaystyle \\delta _{h}[f](x) = f\\left(x+{\\tfrac {1}{2}}\\Delta x\\right)-f\\left(x-{\\tfrac {1}{2}}\\Delta x \\right) $$\n\nThe derivative of a function f at a point x is defined by the limit.\n\n$$ f'(x)=\\lim_{h\\to 0} {\\frac {f(x+h)-f(x)}{h}} $$\n\n\n```python\n# red dashes, blue squares and green triangles\n#Example: [a,b], n\n# https://matplotlib.org/users/pyplot_tutorial.html\nimport numpy as np\nimport matplotlib.pyplot as plt\na=0\nb=1\nn=3\ndeltax=(b-a)/n\ndeltax\n# evenly sampled time at delta x intervals\nx = np.arange(a, b+deltax, deltax)\n#x = np.linspace(a, b, n+1)\nx\nx = np.linspace(-3, 3, 50)\ny2 = x**2+1\n\n\nplt.figure()\n#set x limits\nplt.xlim((0, 2))\nplt.ylim((0, 3))\n\n# set new sticks\nnew_sticks = np.linspace(0, 2, 5)\nplt.xticks(new_sticks)\n# set tick labels\nplt.yticks(np.arange(0, 5, step=1))\n\n# set line styles\n\nl2, = plt.plot(x, y2, color='red', linewidth=1.0, linestyle='--', label='f(x)= x^2+1')\n\nplt.legend(loc='upper left')\n\nplt.show()\n```\n\nplot a secant line pass the points (0,1) and (1,2)\n\n\n\n```python\nimport matplotlib.pyplot as plt\nimport numpy as np\n\ndef main():\n # x = np.linspace(-2,2,100)\n a=-2\n b=3\n divx=0.01\n x = np.arange(a, b, divx)\n x1=0\n p1 = int((x1-a)/divx) #starts from zero\n deltax=1\n count_deltax=int(deltax/divx)\n p2 = p1+ count_deltax #starts from zero\n\n y1 = main_func(x)\n y2 = calculate_secant(x, y1, p1, p2)\n plot(x, y1, y2)\n plt.show()\n\ndef main_func(x):\n return x**2+1\n\ndef calculate_secant(x, y, p1, p2):\n points = [p1, p2]\n m, b = np.polyfit(x[points], y[points], 1)\n return m * x + b\n\ndef plot(x, y1, y2):\n plt.plot(x, y1)\n plt.plot(x, y2)\n #set x limits\n plt.xlim((-2, 2))\n #set x limits\n plt.ylim((0, 4))\n\nmain()\n```\n\nforward difference\n\n\n```python\nx=0\ndeltax=1\nmain_func(x+deltax)-main_func(x)\n```\n\n\n\n\n 1\n\n\n\n\n```python\nplot a tangent secant line pass the points (0,1)\n```\n\n\n```python\nimport matplotlib.pyplot as plt\nimport numpy as np\n\ndef main():\n # x = np.linspace(-2,2,100)\n a=-2\n b=3\n divx=0.01\n x = np.arange(a, b, divx)\n x1=1\n p1 = int((x1-a)/divx) #starts from zero\n deltax=0.02\n count_deltax=int(deltax/divx)\n p2 = p1+ count_deltax #starts from zero\n\n y1 = main_func(x)\n y2 = calculate_secant(x, y1, p1, p2)\n plot(x, y1, y2)\n plt.show()\n\ndef main_func(x):\n return x**2+1\n\ndef calculate_secant(x, y, p1, p2):\n points = [p1, p2]\n m, b = np.polyfit(x[points], y[points], 1)\n return m * x + b\n\ndef plot(x, y1, y2):\n plt.plot(x, y1)\n plt.plot(x, y2)\n #set x limits\n plt.xlim((-2, 2))\n #set x limits\n plt.ylim((0, 4))\n\nmain()\n```\n\n\n```python\nimport matplotlib.pyplot as plt\nimport numpy as np\n\ndef main():\n # x = np.linspace(-2,2,100)\n a=-2\n b=3\n divx=0.01\n x = np.arange(a, b, divx)\n x1=1\n p1 = int((x1-a)/divx) #starts from zero\n deltax=0.01\n count_deltax=int(deltax/divx)\n p2 = p1+ count_deltax #starts from zero\n\n y1 = main_func(x)\n y2 = calculate_secant(x, y1, p1, p2)\n plot(x, y1, y2)\n plt.show()\n\ndef main_func(x):\n return x**2+1\n\ndef calculate_secant(x, y, p1, p2):\n points = [p1, p2]\n m, b = np.polyfit(x[points], y[points], 1)\n print(m)\n return m * x + b\n\ndef plot(x, 
y1, y2):\n plt.plot(x, y1)\n plt.plot(x, y2)\n \n #set x limits\n plt.xlim((-2, 2))\n #set x limits\n plt.ylim((0, 4))\n \n\nmain()\n\n```\n\n\n```python\nx=1\ndeltax=0.00000000001\n(main_func(x+deltax)-main_func(x))/deltax\n```\n\n\n\n\n 2.000000165480742\n\n\n\n### $$ f'(x)=\\lim_{h\\to 0} {\\frac {f(x+h)-f(x)}{h}} $$\n$$ f'(x)={\\frac {f(1+2)-f(1)}{2}} = 4$$\n\nThe derivative of a function f at a point x is defined by the limit.\n\n$$ f'(x)=\\lim_{h\\to 0} {\\frac {f(x+h)-f(x)}{h}} $$\n\nhttp://www.math.unl.edu/~s-bbockel1/833-notes/node23.html\nforward difference approximation:\n$$ f'(x)={\\frac {f(x+h)-f(x)}{h}}+O(h) $$\n\n$$ f'(1)=?$$\n\n\n```python\nfrom sympy import diff, Symbol, sin, tan, limit\nx = Symbol('x')\ndiff(main_func(x), x)\nlimit(main_func(x), x, 1)\n```\n\n\n\n\n 2\n\n\n\n\n```python\nimport matplotlib.pyplot as plt\nimport numpy as np\n\ndef main():\n # x = np.linspace(-2,2,100)\n a=-2\n b=3\n divx=0.01\n x = np.arange(a, b, divx)\n x1=1\n p1 = int((x1-a)/divx) #starts from zero\n deltax=0.01\n count_deltax=int(deltax/divx)\n p2 = p1+ count_deltax #starts from zero\n\n y1 = main_func(x)\n y2 = calculate_secant(x, y1, p1, p2)\n plot(x, y1, y2)\n plt.show()\n\ndef main_func(x):\n return x**2+1\n\ndef calculate_secant(x, y, p1, p2):\n points = [p1, p2]\n m, b = np.polyfit(x[points], y[points], 1)\n print(m)\n return m * x + b\n\ndef plot(x, y1, y2):\n plt.plot(x, y1)\n plt.plot(x, y2)\n \n #set x limits\n plt.xlim((-2, 2))\n #set x limits\n plt.ylim((0, 4))\n \n\nmain()\n```\n\n\nA backward difference uses the function values at x and x \u2212 \\Delta, instead of the values at x + \\Delta and x:\n\n$$ f'(x)=\\lim_{h\\to 0} {\\tfrac{f(x)-f(x-\\Delta x)}{\\Delta x}}$$\n\n\n\n\n```python\nx=1\ndeltax=0.0001\n(main_func(x)-main_func(x-deltax))/deltax\n```\n\n\n\n\n 1.9999000000003875\n\n\n\n\nFinally, the central difference is given by\n\n$$f'(x)=\\lim_{h\\to 0} {\\tfrac {f\\left(x+{\\tfrac {1}{2}}\\Delta x\\right)-f\\left(x-{\\tfrac {1}{2}}\\Delta x \\right)}{\\Delta x}} $$\n\n\n```python\nx=1\ndeltax=0.0001\n(main_func(x+deltax*(1/2))-main_func(x-deltax*(1/2)))/deltax\n```\n\n\n\n\n 1.9999999999997797\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "cf13ca4a52764c291cef3170c48940cc3daf0309", "size": 91921, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "student_hw_20181029.ipynb", "max_stars_repo_name": "tccnchsu/Numerical_Analysis", "max_stars_repo_head_hexsha": "6bc60672c0c417f1194cb507d1b65ac1691fbb8d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-03-16T15:26:47.000Z", "max_stars_repo_stars_event_max_datetime": "2019-03-16T15:26:47.000Z", "max_issues_repo_path": "student_hw_20181029.ipynb", "max_issues_repo_name": "tccnchsu/Numerical_Analysis", "max_issues_repo_head_hexsha": "6bc60672c0c417f1194cb507d1b65ac1691fbb8d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "student_hw_20181029.ipynb", "max_forks_repo_name": "tccnchsu/Numerical_Analysis", "max_forks_repo_head_hexsha": "6bc60672c0c417f1194cb507d1b65ac1691fbb8d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2018-10-29T03:52:02.000Z", "max_forks_repo_forks_event_max_datetime": "2018-12-10T02:43:51.000Z", "avg_line_length": 166.8257713249, "max_line_length": 18924, "alphanum_fraction": 0.8945181188, "converted": true, "num_tokens": 2241, 
"lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9230391621868804, "lm_q2_score": 0.8933094032139577, "lm_q1q2_score": 0.8245595631162737}} {"text": "dS/dt=-bSI, dI/dt=bSI (uso b para beta)\n\n\n```python\nfrom sympy import *\nfrom sympy.abc import S,I,t,b\n```\n\n\n```python\n#puntos criticos\nP=-b*S*I\nQ=b*S*I\n#establecer P(S,I)=0 y Q(S,I)=0\nPeqn=Eq(P,0)\nQeqn=Eq(Q,0)\nprint(solve((Peqn,Qeqn),S,I))\n#matriz Jacobiana\nJ11=diff(P,S)\nJ12=diff(P,I)\nJ21=diff(Q,S)\nJ22=diff(Q,I)\nJ=Matrix([[J11,J12],[J21,J22]])\npprint(J)\n```\n\n [(0, I), (S, 0)]\n \u23a1-I\u22c5b -S\u22c5b\u23a4\n \u23a2 \u23a5\n \u23a3I\u22c5b S\u22c5b \u23a6\n\n\n\n```python\n#J en el punto critico\nJc1=J.subs([(S,0),(I,I)])\npprint(Jc1.eigenvals())\npprint(Jc1.eigenvects())\nJc2=J.subs([(S,S),(I,0)])\npprint(Jc2.eigenvals())\npprint(Jc2.eigenvects())\n```\n\n {0: 1, -I\u22c5b: 1}\n \u23a1\u239b \u23a1\u23a10\u23a4\u23a4\u239e \u239b \u23a1\u23a1-1\u23a4\u23a4\u239e\u23a4\n \u23a2\u239c0, 1, \u23a2\u23a2 \u23a5\u23a5\u239f, \u239c-I\u22c5b, 1, \u23a2\u23a2 \u23a5\u23a5\u239f\u23a5\n \u23a3\u239d \u23a3\u23a31\u23a6\u23a6\u23a0 \u239d \u23a3\u23a31 \u23a6\u23a6\u23a0\u23a6\n {0: 1, S\u22c5b: 1}\n \u23a1\u239b \u23a1\u23a11\u23a4\u23a4\u239e \u239b \u23a1\u23a1-1\u23a4\u23a4\u239e\u23a4\n \u23a2\u239c0, 1, \u23a2\u23a2 \u23a5\u23a5\u239f, \u239cS\u22c5b, 1, \u23a2\u23a2 \u23a5\u23a5\u239f\u23a5\n \u23a3\u239d \u23a3\u23a30\u23a6\u23a6\u23a0 \u239d \u23a3\u23a31 \u23a6\u23a6\u23a0\u23a6\n\n\nLos puntos criticos son no hiperbolicos.\n\n\n```python\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy.integrate import odeint\nimport pylab as pl\nimport matplotlib\n```\n\n\n```python\nb=1\ndef dx_dt(x,t):\n return [ -b*x[0]*x[1] , b*x[1]*x[0] ]\n#trayectorias en tiempo hacia adelante\nts=np.linspace(0,10,500)\nic=np.linspace(20000,100000,3)\nfor r in ic:\n for s in ic:\n x0=[r,s]\n xs=odeint(dx_dt,x0,ts)\n plt.plot(xs[:,0],xs[:,1],\"-\", color=\"orangered\", lw=1.5)\n#trayectorias en tiempo hacia atras\nts=np.linspace(0,-10,500)\nic=np.linspace(20000,100000,3)\nfor r in ic:\n for s in ic:\n x0=[r,s]\n xs=odeint(dx_dt,x0,ts)\n plt.plot(xs[:,0],xs[:,1],\"-\", color=\"orangered\", lw=1.5)\n#etiquetas de ejes y estilo de letra\nplt.xlabel('S',fontsize=20)\nplt.ylabel('I',fontsize=20)\nplt.tick_params(labelsize=12)\nplt.ticklabel_format(style=\"sci\", scilimits=(0,0))\nplt.xlim(0,100000)\nplt.ylim(0,100000)\n#campo vectorial\nX,Y=np.mgrid[0:100000:15j,0:100000:15j]\nu=-b*X*Y\nv=b*Y*X\npl.quiver(X,Y,u,v,color='dimgray')\nplt.savefig(\"SIinf.pdf\",bbox_inches='tight')\nplt.show()\n```\n\nAnalisis de Bifurcaciones\n\nEl punto critico del sistema no depende del parametro beta por lo que no cambia al variar beta.\n", "meta": {"hexsha": "a099718e70389d1ba800e59d3b011c3b1eac317a", "size": 138254, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "ModeloSI(infecciosas).ipynb", "max_stars_repo_name": "deleonja/dynamical-sys", "max_stars_repo_head_hexsha": "024acc61a4e36d46b1502ce0391707e4afbc58e2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "ModeloSI(infecciosas).ipynb", "max_issues_repo_name": "deleonja/dynamical-sys", "max_issues_repo_head_hexsha": "024acc61a4e36d46b1502ce0391707e4afbc58e2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, 
"max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ModeloSI(infecciosas).ipynb", "max_forks_repo_name": "deleonja/dynamical-sys", "max_forks_repo_head_hexsha": "024acc61a4e36d46b1502ce0391707e4afbc58e2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 723.8429319372, "max_line_length": 83496, "alphanum_fraction": 0.7805560779, "converted": true, "num_tokens": 1081, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9433475683211323, "lm_q2_score": 0.8740772417253255, "lm_q1q2_score": 0.8245586405064284}} {"text": "\n\n# Multivariate Gaussian with full covariance\n\nIn this reading you will learn how you can use TensorFlow to specify any multivariate Gaussian distribution.\n\n\n```python\nimport tensorflow as tf\nimport tensorflow_probability as tfp\ntfd = tfp.distributions\n\nprint(\"TF version:\", tf.__version__)\nprint(\"TFP version:\", tfp.__version__)\n```\n\n TF version: 2.3.0\n TFP version: 0.11.0\n\n\nSo far, you've seen how to define multivariate Gaussian distributions using `tfd.MultivariateNormalDiag`. This class allows you to specify a multivariate Gaussian with a diagonal covariance matrix $\\Sigma$. \n\nIn cases where the variance is the same for each component, i.e. $\\Sigma = \\sigma^2 I$, this is known as a _spherical_ or _isotropic_ Gaussian. This name comes from the spherical (or circular) contours of its probability density function, as you can see from the plot below for the two-dimensional case. \n\n\n```python\n# Plot the approximate density contours of a 2d spherical Gaussian\n\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nspherical_2d_gaussian = tfd.MultivariateNormalDiag(loc=[0., 0.])\n\nN = 100000\nx = spherical_2d_gaussian.sample(N)\nx1 = x[:, 0]\nx2 = x[:, 1]\nsns.jointplot(x1, x2, kind='kde', space=0, );\n```\n\nAs you know, a diagonal covariance matrix results in the components of the random vector being independent. \n\n## Full covariance with `MultivariateNormalFullTriL`\n\nYou can define a full covariance Gaussian distribution in TensorFlow using the Distribution `tfd.MultivariateNormalTriL`.\n\nMathematically, the parameters of a multivariate Gaussian are a mean $\\mu$ and a covariance matrix $\\Sigma$, and so the `tfd.MultivariateNormalTriL` constructor requires two arguments:\n\n- `loc`, a Tensor of floats corresponding to $\\mu$,\n- `scale_tril`, a a lower-triangular matrix $L$ such that $LL^T = \\Sigma$.\n\nFor a $d$-dimensional random variable, the lower-triangular matrix $L$ looks like this:\n\n\\begin{equation}\n L = \\begin{bmatrix}\n l_{1, 1} & 0 & 0 & \\cdots & 0 \\\\\n l_{2, 1} & l_{2, 2} & 0 & \\cdots & 0 \\\\\n l_{3, 1} & l_{3, 2} & l_{3, 3} & \\cdots & 0 \\\\\n \\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\\\n l_{d, 1} & l_{d, 2} & l_{d, 3} & \\cdots & l_{d, d}\n \\end{bmatrix},\n\\end{equation}\n\nwhere the diagonal entries are positive: $l_{i, i} > 0$ for $i=1,\\ldots,d$.\n\nHere is an example of creating a two-dimensional Gaussian with non-diagonal covariance:\n\n\n```python\n# Set the mean and covariance parameters\n\nmu = [0., 0.] # mean\nscale_tril = [[1., 0.],\n [0.6, 0.8]]\n\nsigma = tf.matmul(tf.constant(scale_tril), tf.transpose(tf.constant(scale_tril))) # covariance matrix\nprint(sigma)\n```\n\n tf.Tensor(\n [[1. 0.6]\n [0.6 1. 
]], shape=(2, 2), dtype=float32)\n\n\n\n```python\n# Create the 2D Gaussian with full covariance\n\nnonspherical_2d_gaussian = tfd.MultivariateNormalTriL(loc=mu, scale_tril=scale_tril)\nnonspherical_2d_gaussian\n```\n\n\n\n\n \n\n\n\n\n```python\n# Check the Distribution mean\n\nnonspherical_2d_gaussian.mean()\n```\n\n\n\n\n \n\n\n\n\n```python\n# Check the Distribution covariance\n\nnonspherical_2d_gaussian.covariance()\n```\n\n\n\n\n \n\n\n\n\n```python\n# Plot its approximate density contours\n\nx = nonspherical_2d_gaussian.sample(N)\nx1 = x[:, 0]\nx2 = x[:, 1]\nsns.jointplot(x1, x2, kind='kde', space=0, color='r');\n```\n\nAs you can see, the approximate density contours are now elliptical rather than circular. This is because the components of the Gaussian are correlated.\n\nAlso note that the marginal distributions (shown on the sides of the plot) are both univariate Gaussian distributions.\n\n## The Cholesky decomposition\n\nIn the above example, we defined the lower triangular matrix $L$ and used that to build the multivariate Gaussian distribution. The covariance matrix is easily computed from $L$ as $\\Sigma = LL^T$.\n\nThe reason that we define the multivariate Gaussian distribution in this way - as opposed to directly passing in the covariance matrix - is that not every matrix is a valid covariance matrix. The covariance matrix must have the following properties:\n\n1. It is symmetric\n2. It is positive (semi-)definite\n\n_NB: A symmetric matrix $M \\in \\mathbb{R}^{d\\times d}$ is positive semi-definite if it satisfies $b^TMb \\ge 0$ for all nonzero $b\\in\\mathbb{R}^d$. If, in addition, we have $b^TMb = 0 \\Rightarrow b=0$ then $M$ is positive definite._\n\nThe Cholesky decomposition is a useful way of writing a covariance matrix. The decomposition is described by this result:\n\n> For every real-valued symmetric positive-definite matrix $M$, there is a unique lower-diagonal matrix $L$ that has positive diagonal entries for which \n>\n> \\begin{equation}\n LL^T = M\n \\end{equation}\n> This is called the _Cholesky decomposition_ of $M$.\n\nThis result shows us why Gaussian distributions with full covariance are completely represented by the `MultivariateNormalTriL` Distribution.\n\n### `tf.linalg.cholesky`\n\nIn case you have a valid covariance matrix $\\Sigma$ and would like to compute the lower triangular matrix $L$ above to instantiate a `MultivariateNormalTriL` object, this can be done with the `tf.linalg.cholesky` function. \n\n\n```python\n# Define a symmetric positive-definite matrix\n\nsigma = [[10., 5.], [5., 10.]]\n```\n\n\n```python\n# Compute the lower triangular matrix L from the Cholesky decomposition\n\nscale_tril = tf.linalg.cholesky(sigma)\nscale_tril\n```\n\n\n\n\n \n\n\n\n\n```python\n# Check that LL^T = Sigma\n\ntf.linalg.matmul(scale_tril, tf.transpose(scale_tril))\n```\n\n\n\n\n \n\n\n\nIf the argument to the `tf.linalg.cholesky` is not positive definite, then it will fail:\n\n\n```python\n# Try to compute the Cholesky decomposition for a matrix with negative eigenvalues\n\nbad_sigma = [[10., 11.], [11., 10.]]\n\ntry:\n scale_tril = tf.linalg.cholesky(bad_sigma)\nexcept Exception as e:\n print(e)\n```\n\n Cholesky decomposition was not successful. The input might not be valid. 
[Op:Cholesky]\n\n\n### What about positive semi-definite matrices?\n\nIn cases where the matrix is only positive semi-definite, the Cholesky decomposition exists (if the diagonal entries of $L$ can be zero) but it is not unique.\n\nFor covariance matrices, this corresponds to the degenerate case where the probability density function collapses to a subspace of the event space. This is demonstrated in the following example:\n\n\n```python\n# Create a multivariate Gaussian with a positive semi-definite covariance matrix\n\npsd_mvn = tfd.MultivariateNormalTriL(loc=[0., 0.], scale_tril=[[1., 0.], [0.4, 0.]])\npsd_mvn\n```\n\n\n\n\n \n\n\n\n\n```python\n# Plot samples from this distribution\n\nx = psd_mvn.sample(N)\nx1 = x[:, 0]\nx2 = x[:, 1]\nplt.xlim(-5, 5)\nplt.ylim(-5, 5)\nplt.title(\"Scatter plot of samples\")\nplt.scatter(x1, x2, alpha=0.5);\n```\n\nIf the input to the function `tf.linalg.cholesky` is positive semi-definite but not positive definite, it will also fail:\n\n\n```python\n# Try to compute the Cholesky decomposition for a positive semi-definite matrix\n\nanother_bad_sigma = [[10., 0.], [0., 0.]]\n\ntry:\n scale_tril = tf.linalg.cholesky(another_bad_sigma)\nexcept Exception as e:\n print(e)\n```\n\n Cholesky decomposition was not successful. The input might not be valid. [Op:Cholesky]\n\n\nIn summary: if the covariance matrix $\\Sigma$ for your multivariate Gaussian distribution is positive-definite, then an algorithm that computes the Cholesky decomposition of $\\Sigma$ returns a lower-triangular matrix $L$ such that $LL^T = \\Sigma$. This $L$ can then be passed as the `scale_tril` of `MultivariateNormalTriL`.\n\n## Putting it all together\n\nYou are now ready to put everything that you have learned in this reading together.\n\nTo create a multivariate Gaussian distribution with full covariance you need to:\n\n1. Specify parameters $\\mu$ and either $\\Sigma$ (a symmetric positive definite matrix) or $L$ (a lower triangular matrix with positive diagonal elements), such that $\\Sigma = LL^T$.\n\n2. If only $\\Sigma$ is specified, compute `scale_tril = tf.linalg.cholesky(sigma)`.\n\n3. Create the distribution: `multivariate_normal = tfd.MultivariateNormalTriL(loc=mu, scale_tril=scale_tril)`.\n\n\n```python\n# Create a multivariate Gaussian distribution\n\nmu = [1., 2., 3.]\nsigma = [[0.5, 0.1, 0.1],\n [0.1, 1., 0.6],\n [0.1, 0.6, 2.]]\n\nscale_tril = tf.linalg.cholesky(sigma)\n\nmultivariate_normal = tfd.MultivariateNormalTriL(loc=mu, scale_tril=scale_tril)\n```\n\n\n```python\n# Check the covariance matrix\n\nmultivariate_normal.covariance()\n```\n\n\n\n\n \n\n\n\n\n```python\n# Check the mean\n\nmultivariate_normal.mean()\n```\n\n\n\n\n \n\n\n\n## Deprecated: `MultivariateNormalFullCovariance`\n\nThere was previously a class called `tfd.MultivariateNormalFullCovariance` which takes the full covariance matrix in its constructor, but this is being deprecated. Two reasons for this are:\n\n* covariance matrices are symmetric, so specifying one directly involves passing redundant information, which involves writing unnecessary code. \n* it is easier to enforce positive-definiteness through constraints on the elements of a decomposition than through a covariance matrix itself. 
The decomposition's only constraint is that its diagonal elements are positive, a condition that is easy to parameterize for.\n\n### Further reading and resources\n* https://www.tensorflow.org/probability/api_docs/python/tfp/distributions/MultivariateNormalTriL\n* https://www.tensorflow.org/api_docs/python/tf/linalg/cholesky\n", "meta": {"hexsha": "b00a747b86300da27c1c226701e45c48f8f19938", "size": 124309, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "01_The TensorFlow_Probability_library/02_Multivariate_Gaussian_with_full_covariance.ipynb", "max_stars_repo_name": "mohd-faizy/07T_Probabilistic-Deep-Learning-with-TensorFlow-", "max_stars_repo_head_hexsha": "3cf6719c55c1744f820c2a437ce986cc0e71ba61", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 28, "max_stars_repo_stars_event_min_datetime": "2020-12-21T16:28:38.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-25T16:12:43.000Z", "max_issues_repo_path": "01_The TensorFlow_Probability_library/02_Multivariate_Gaussian_with_full_covariance.ipynb", "max_issues_repo_name": "ShreJais/Probabilistic-Deep-Learning-with-TensorFlow", "max_issues_repo_head_hexsha": "5bdf9b21480860f966e5d22c8a84aa37190dff91", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "01_The TensorFlow_Probability_library/02_Multivariate_Gaussian_with_full_covariance.ipynb", "max_forks_repo_name": "ShreJais/Probabilistic-Deep-Learning-with-TensorFlow", "max_forks_repo_head_hexsha": "5bdf9b21480860f966e5d22c8a84aa37190dff91", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 20, "max_forks_repo_forks_event_min_datetime": "2021-01-08T10:55:46.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-31T23:00:45.000Z", "avg_line_length": 152.152998776, "max_line_length": 54006, "alphanum_fraction": 0.8698967895, "converted": true, "num_tokens": 2861, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9489172644875642, "lm_q2_score": 0.8688267762381843, "lm_q1q2_score": 0.8244447278214868}} {"text": "# Integral simb\u00f3lica usando SymPy\r\n\r\n\r\n\n\n\n```python\nimport numpy as np\r\nimport sympy as sy\r\nfrom sympy.utilities.lambdify import lambdify\r\nfrom scipy.integrate import quad\r\nfrom scipy.misc import derivative\n```\n\n\n```python\nx = sy.Symbol('x')\r\nf = sy.exp(-x)*sy.sin(3.0*x)\r\nres_symbolic = sy.integrate(f)\r\nres = sy.integrate(f, (x, 0, 2*sy.pi))\r\n\r\nprint(res.evalf())\n```\n\n 0.299439767180488\n\n\n\n```python\nf\n```\n\n\n\n\n$\\displaystyle e^{- x} \\sin{\\left(3.0 x \\right)}$\n\n\n\n\n```python\nres_symbolic\n```\n\n\n\n\n$\\displaystyle - 0.1 e^{- x} \\sin{\\left(3.0 x \\right)} - 0.3 e^{- x} \\cos{\\left(3.0 x \\right)}$\n\n\n\n\n```python\n%timeit sy.integrate(f, (x, 0, 2*sy.pi))\n```\n\n 207 ms \u00b1 9.18 ms per loop (mean \u00b1 std. dev. of 7 runs, 1 loop each)\n\n\n\n```python\n# Lambdfy integrals\r\nres_symbolic = sy.integrate(f)\r\ninteg = lambda x0, x1: res_symbolic.evalf(subs={x: x1}) - res_symbolic.evalf(subs={x: x0})\r\n%timeit integ(0, 2*np.pi)\n```\n\n 1.53 ms \u00b1 88.6 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1000 loops each)\n\n\n\n```python\ninteg = lambda x0, x1: float(sy.integrate(f, (x, x0, x1)))\r\n%timeit integ(0, 2*np.pi)\n```\n\n 171 ms \u00b1 12.6 ms per loop (mean \u00b1 std. dev. 
of 7 runs, 10 loops each)\n\n\n## Multi-varible\n\n\n```python\nx, y = sy.symbols('x, y')\r\nf = x**2\r\nh = y**2\r\ng = f + h\r\ng1 = g.subs(x,1)\r\n\n```\n\n\n\n\n$\\displaystyle 5$\n\n\n\n\n```python\n# Integrate g(x,y)*dx\r\nsy.integrate(g,x)\n```\n\n\n\n\n$\\displaystyle \\frac{x^{3}}{3} + x y^{2}$\n\n\n\n\n```python\n# Integrate g(x,y)*dy\r\nsy.integrate(g,y)\n```\n\n\n\n\n$\\displaystyle x^{2} y + \\frac{y^{3}}{3}$\n\n\n\n\n```python\n# Double integral g(x,y)*dx*dy\r\nsy.integrate(sy.integrate(g,x),y)\n```\n\n\n\n\n$\\displaystyle \\frac{x^{3} y}{3} + \\frac{x y^{3}}{3}$\n\n\n\n\n```python\n# Double integral g(x,y)*dx*dy, xfrom 0 to 1, y from zero to 1\r\nsy.integrate(sy.integrate(g,(x,0,1)),(y,0,1))\n```\n\n\n\n\n$\\displaystyle \\frac{2}{3}$\n\n\n\n\n```python\n# Evaluating the results\r\nsy.integrate(sy.integrate(g,(x,0,1)),(y,0,1)).evalf()\n```\n\n\n\n\n$\\displaystyle 0.666666666666667$\n\n\n\n\n```python\n# Show the symbolic\r\nsy.Integral(sy.Integral(g,(x,0,1)),(y,0,1))\n```\n\n\n\n\n$\\displaystyle \\int\\limits_{0}^{1}\\int\\limits_{0}^{1} \\left(x^{2} + y^{2}\\right)\\, dx\\, dy$\n\n\n\n# Using Scipy to Numerical integrate defined functions\n\n\n```python\ndef f(x):\r\n return np.exp(-x)*np.sin(3.0*x)\r\n\r\ni, err = quad(f, 0, 2*np.pi)\r\nprint(i)\n```\n\n 0.29943976718048754\n\n\n\n```python\n%timeit i, err = quad(f, 0, 2*np.pi)\n```\n\n 9.72 \u00b5s \u00b1 1.04 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100000 loops each)\n\n\n\n```python\ndef foo(x, y):\r\n return(x**2 + x*y**3)\r\n\r\nfrom scipy.misc import derivative\r\nderivative(foo, 1, dx = 1e-6, args = (2, ))\n```\n\n\n\n\n 9.999999999621423\n\n\n\n# Derivada simb\u00f3lica\n\n\n```python\nx = sy.Symbol('x')\r\nf = 3*x**2 + 1\r\nf\n```\n\n\n\n\n$\\displaystyle 3 x^{2} + 1$\n\n\n\n\n```python\n# Lambdfy derivatives\r\nddx = lambdify(x, sy.diff(f, x)) # creates a function that you can call\r\n%timeit ddx(2)\n```\n\n 91.8 ns \u00b1 10.2 ns per loop (mean \u00b1 std. dev. of 7 runs, 10000000 loops each)\n\n\n\n```python\ndx = sy.diff(f, x)\r\nddx = lambda x0: dx.subs(x, x0)\r\n%timeit ddx(2)\n```\n\n 14.2 \u00b5s \u00b1 345 ns per loop (mean \u00b1 std. dev. of 7 runs, 100000 loops each)\n\n\n\n```python\n# Derivada de segunda ordem\r\nsy.diff(f, (x, 2))\n```\n\n\n\n\n$\\displaystyle 6$\n\n\n\n# Derivada num\u00e9rica usando Scipy\n\n\n```python\ndef f(x):\r\n return 3*x**2 + 1\r\n\r\n%timeit derivative(f, 2)\n```\n\n 13.3 \u00b5s \u00b1 401 ns per loop (mean \u00b1 std. dev. 
of 7 runs, 100000 loops each)\n\n", "meta": {"hexsha": "d1457471f33e9560edf18424c66baf2c4be21e2a", "size": 10516, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "VA/work1/numerical_and_symbolic_integration.ipynb", "max_stars_repo_name": "lucas-schroeder/Master_program_UFSC", "max_stars_repo_head_hexsha": "4a1cecfa1ebcd57968449d0650abd11d782df71c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "VA/work1/numerical_and_symbolic_integration.ipynb", "max_issues_repo_name": "lucas-schroeder/Master_program_UFSC", "max_issues_repo_head_hexsha": "4a1cecfa1ebcd57968449d0650abd11d782df71c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "VA/work1/numerical_and_symbolic_integration.ipynb", "max_forks_repo_name": "lucas-schroeder/Master_program_UFSC", "max_forks_repo_head_hexsha": "4a1cecfa1ebcd57968449d0650abd11d782df71c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 19.4740740741, "max_line_length": 111, "alphanum_fraction": 0.4547356409, "converted": true, "num_tokens": 1285, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.907312213841788, "lm_q2_score": 0.9086178969328286, "lm_q1q2_score": 0.8244001156023942}} {"text": "# Numerical investigation of an oscillator\nWe consider a particle of mass $m = 1$ moving on the horizontal $x$ axis in a potential $U=\\omega_{0}^2 x^{2}/2$,\nsubject to a friction force $F_{friction}=-2\\gamma \\dot{x}$ and a driving force $F_{driving}=\\alpha\\cos(t+\\phi)$. The equation of motion is therefore of the general form\n\n\\begin{equation} \\ddot{x}+2\\gamma \\dot{x}+ \\omega_0^2x-\\alpha\\cos(t+\\phi)=0 \\qquad(1)\\end{equation}\n\nThis is a harmonic oscillator with damping (friction) and driving. We want to study the solutions of this equation in terms of the free parameters of the problem. These are the initial position $x_{0}$, the initial velocity $\\dot{x}_{0}$, the frequency $\\omega_{0}$, and the parameters $\\gamma$, $\\alpha$, and $\\phi$. Once we have the\nsolution we want to produce plots of the position, the velocity, the potential and kinetic energy,and the phase portrait (velocity in terms of position).\n### - First step:\nBy denoting the velocity $v$ (i.e. $v = \\dot{x}$), rewrite (1) as a system of first order ODEs in $x$ and $v$.\n\n\n\n$\\frac{dx}{dt} = v$ \n\n$\\frac{dv}{dt}$ = -2$\\gamma v$ - $w^{2}_{0}x$ + $\\alpha\\cos(t+\\phi)$\n\n### - Second step: \nLet $\\{t_k \\}$ be a partition of $[0,60]$ such that $0=t_0odeint
    , write a Python function named Oscillator_func which takes $x_{0}$, $\\dot{x}_{0}$, $\\omega_{0}$, $\\gamma$, $\\alpha$ and $\\phi$ as inputs and return the arrays $x$ and $v$ (numerical solution of the system of ODEs obtained above) such that for a given index $k$ we have $x[k]= x(t_k)$ and $v[k] = v(t_k)$\n\n\n\n\n```python\nfrom scipy.integrate import odeint\nfrom pylab import*\ndef F (s,t):\n [x,v] =s\n dxdt = v\n dvdt = -2*gamma*v -(w_0)**2*x + alpha*(np.cos(t+phi))\n return [dxdt,dvdt]\ndef Oscillator_func(x_0,x_dot_0,gamma,w_0,alpha,phi):\n s0 = [x_0,x_dot_0]\n N = (int((b-a)/H))+1\n t=np.linspace(0,60,N)\n sol = odeint(F,s0,t)\n x = sol[:,0]\n v = sol[:,1]\n return x,v,t\n\n```\n\n### - Third step: \nYou can then play with the numerical values ($x_{0}$, $\\dot{x}_{0}$, $\\omega_{0}$, $\\gamma$, $\\alpha$, $\\phi$) to investigate different situations. From now on, we assume that\n\n \\begin{equation} x_{0}=5,\\qquad \\dot{x}_{0}=0,\\qquad \\omega_{0}=0.8,\\qquad \\phi=10.\\end{equation} \n \n \nwrite a code which produces the following three graphs:\n\n - First graph:\nposition $x(t)$ and velocity $\\dot{x}(t)$, both as a function of time.\n\n - Second graph: \n potential energy $U(t)$ and kinetic energy $K(t)$, both as a function of time.\n \n - Third graph: \n a parametric plot representing the position $x(t)$ in terms of the position $\\dot{x}(t)$.\n \n #### a- for $\\alpha=0, \\gamma=0$ (Harmonic oscillator)\n \nThis is the usual harmonic oscillator, with the position and the velocity oscillating for ever, as you\ncan see on this plot:\n \n \n \n The kinetic energy $K$ and the potential energy $U$ both oscillate for ever, and the total energy $E = K + U$ is equal to a constant at any time. This is conservation of energy.\n \n \n \n \n The phase portrait is closed, and the trajectories go for ever around this ellipse representing pairs\nof points $x(t)$ and $\\dot{x}(t)$. We see that $x(t)$ and $\\dot{x}(t)$ never reach $0$.\n\n\n\n\n\n```python\nx_0 =5\nx_dot_0 =0\ngamma = 0\nw_0 = 0.8\nalpha =0\nphi =10\na=0\nb=60\nH=0.001\nx,v,t=Oscillator_func(x_0,x_dot_0,gamma,w_0,alpha,phi)\nK = (1/2)*((w_0)**2)*(x**2)\nU = (1/2)*(v**2)\nplt.figure(figsize=(8,14))\nplt.subplot(3,1,1)\nplt.plot(t,x)\nplt.plot(t,v)\nplt.legend(['x(t)','v(t)'])\nplt.subplot(3,1,2)\nplt.plot(t,K)\nplt.plot(t,U)\nplt.legend(['Kinetic Energy','Potential Energy'])\nplt.subplot(3,1,3)\nplt.plot(x,v)\nplt.show()\n```\n\n#### b- for $\\alpha=0, \\gamma=0.1$ (Damped oscillator)\n\n\nIn this case we have $\\gamma^2 < \\omega_{0}^2$, which corresponds to the under-damped regime discussed in class. The solution has an oscillatoray behavior with an exponential damping, as you can see on this\nplot:\n\n\nEs expected, the mechanical energy is not conserved because of friction (this is why the motion\neventually stops). You can see on this plot that energy goes to zero:\n\n\n\nIn the phase portrait the trajectory starts at $x_{0}=5$ and $\\dot{x}_{0}=0$, which is our initial condition, and\nis attracted to the point $x = 0$ and $\\dot{x}=0$. 
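For reference, when $\alpha=0$ and $\gamma^2 < \omega_0^2$ (the under-damped regime), equation (1) has the closed-form solution

$$
x(t) = e^{-\gamma t}\left(C_1\cos(\omega_d t) + C_2\sin(\omega_d t)\right), \qquad \omega_d = \sqrt{\omega_0^2 - \gamma^2},
$$

so with $\gamma = 0.1$ and $\omega_0 = 0.8$ the oscillation frequency is $\omega_d = \sqrt{0.63} \approx 0.79$ and the amplitude decays on the time scale $1/\gamma = 10$; the trajectory therefore spirals into the origin of the phase plane.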
This is consistent with what we see above on the plots\nof $x(t)$ and $\\dot{x}(t)$.\n\n\n\n\n```python\nx_0 =5\nx_dot_0 =0\nw_0 = 0.8\nalpha =0\nphi =10\ngamma = 0.1\nx,v,t=Oscillator_func(x_0,x_dot_0,gamma,w_0,alpha,phi)\nplt.figure(figsize=(10,16))\nplt.subplot(3,1,1)\nplt.plot (t,x)\nplt.plot(t,v)\nplt.legend(['x(t)','v(t)'])\nplt.subplot(3,1,2)\nplt.plot(t,K)\nplt.plot(t,U)\nplt.legend(['Kinetic Energy','Potential Energy'])\nplt.subplot(3,1,3)\nplt.plot(x,v)\nplt.show()\n```\n\n#### c- for $\\alpha=1, \\gamma=0.1$ (Damped oscillator with driving) \n\nIn this case the motion is initially damped, as in the previous case, but after some time the driving force takes over and becomes dominant, so the motion is again oscillatory, where the oscillations are dictated by the driving. This can be seen on the plot.\n\n\nThe energy is initially decreasing, as in the damped case, but then the driving starts to inject energy in the system, so energy increases once again and starts to oscillate between kinetic and potential.\n\n\nIn the phase portrait the trajectory starts at $x_{0}=5$ and $\\dot{x}_{0}=0$, which is our initial condition. It is initially attracted to the center, because of the damping as in the previous case, but then we see that the trajectory does not converge towards $(0, 0)$, but goes back to a stable circle.\n\n\n\n```python\nx_0 =5\nx_dot_0 =0\ngamma = 0.1\nw_0 = 0.8\nalpha =1\nphi =10\nx,v,t=Oscillator_func(x_0,x_dot_0,gamma,w_0,alpha,phi)\nK = (1/2)*(w_0)**2*x**2\nU = (1/2)*v**2\nplt.figure(figsize=(10,16))\nplt.subplot(3,1,1)\nplt.plot (t,x)\nplt.plot(t,v)\nplt.legend(['x(t)','v(t)'])\nplt.subplot(3,1,2)\nplt.plot(t,K)\nplt.plot(t,U)\nplt.legend(['Kinetic Energy','Potential Energy'])\nplt.subplot(3,1,3)\nplt.plot(x,v)\nplt.show()\n```\n\n\n```python\n{\n \"cells\": [],\n \"metadata\": {},\n \"nbformat\": 4,\n \"nbformat_minor\": 2\n}\n\n\n```\n\n\n\n\n {'cells': [], 'metadata': {}, 'nbformat': 4, 'nbformat_minor': 2}\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "f7debb9dcc38d79dffb01f2246f9543f1a93f68b", "size": 470571, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "audrey_demafo_CM.ipynb", "max_stars_repo_name": "AudreyDemafo/AudreyDemafo.github.io", "max_stars_repo_head_hexsha": "ccc2ed8898fc87d073c2ebfc4b136ccda21f4103", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "audrey_demafo_CM.ipynb", "max_issues_repo_name": "AudreyDemafo/AudreyDemafo.github.io", "max_issues_repo_head_hexsha": "ccc2ed8898fc87d073c2ebfc4b136ccda21f4103", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "audrey_demafo_CM.ipynb", "max_forks_repo_name": "AudreyDemafo/AudreyDemafo.github.io", "max_forks_repo_head_hexsha": "ccc2ed8898fc87d073c2ebfc4b136ccda21f4103", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 1404.6895522388, "max_line_length": 186884, "alphanum_fraction": 0.9595746444, "converted": true, "num_tokens": 1989, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9086178919837706, "lm_q2_score": 0.9073122119620789, "lm_q1q2_score": 0.8244001094041162}} {"text": "```python\nimport sympy as sm\nimport numpy as np\nimport matplotlib.pyplot as pl\n```\n\n\n```python\ndef ryOptimization(\n f, \n c1, \n c2, \n xyBounds):\n\n x,y= sm.symbols('x,y')\n\n # sympy function --> numpy function\n z= sm.lambdify([x,y],f([x,y])) \n\n xx= np.linspace(-10,10,1001)\n yy= np.linspace(-10,10,1001)\n zz= z(xx,yy)\n\n xm, ym= np.meshgrid(xx,yy)\n zm= z(xm,ym)\n zz.shape, zm.shape\n\n y1= sm.solve(\n c1([x,y]),\n y)[0]\n y2= sm.solve(\n c2([x,y]),\n y)[0]\n y1= sm.lambdify(x, y1)\n y2= sm.lambdify(x, y2)\n yy1= y1(xx)\n yy2= y2(xx)\n [(xmin,xmax),(ymin,ymax)]= xyBounds\n \n y_ymin= ymin *np.ones_like(xx)\n y_ymax= ymax *np.ones_like(xx)\n x_xmin= xmin *np.ones_like(yy)\n x_xmax= xmax *np.ones_like(yy)\n\n y_0= np.zeros_like(xx)\n x_0= np.zeros_like(yy)\n\n ax= pl.axes(xlim=(-10,10),ylim=(-10,10))#, projection='3d')\n\n ax.contour(xm,ym,zm, 100, cmap='rainbow')\n\n ax.plot(xx,yy1,'r',\n xx,yy2,'g',\n linestyle='--'\n ) \n\n ax.plot(xx, y_0,'w-',\n x_0,yy, 'w-',\n linewidth= 3,\n alpha= 0.3 \n )\n\n ax.plot(xx, y_ymin,\n xx, y_ymax,\n x_xmin, yy, \n x_xmax, yy, \n linewidth= 1,\n linestyle= '--',\n color= 'gray'\n )\n\n for yyy in [yy1,yy2]:\n \n ax.fill_between(xx, y_ymax, yyy, \n where= (yyy>=y_ymax),\n alpha= 0.5, \n color='white', \n interpolate=True\n )\n ax.fill_between(xx, y_ymin, yyy, \n where= (yyy<=y_ymin),\n alpha= 0.5, \n color='white', \n interpolate=True\n )\n\n\n import scipy.optimize as sopt\n\n x0= [0, 0]\n\n opt= sopt.minimize(\n f,\n x0,\n #method= 'SLSQP',\n bounds= xyBounds,\n constraints= Constraints\n )\n\n opt\n\n xopt= opt.x[0]\n yopt= opt.x[1]\n fopt= opt.fun\n\n ax.scatter(\n x= xopt,\n y= yopt, \n color= 'magenta',\n marker= 's' \n )\n\n ax.text(\n x= xopt, \n y= yopt,\n s= f'{xopt:.3f},{yopt:.3f}'\n )\n\n\n fLtx= sm.latex(f([x,y]))\n c1Ltx= sm.latex(c1([x,y]))\n c2Ltx= sm.latex(c2([x,y]))\n\n titleStr= f'''\n Optimize: f(x,y)= ${fLtx}$\n Subject to: Constraints and Bounds\n opt= [x= {xopt:.3f}, y= {yopt:.3f}; f= {fopt:.3f}]\n '''\n\n ax.set_title(titleStr)\n ax.set_xlabel('x')\n ax.set_ylabel('y')\n\n infoStr= f'''\n $Objective: f(x,y)= {fLtx}$\n $c_1(x,y)= {c1Ltx} \u2265 0$ \n $c_2(x,y)= {c2Ltx} \u2265 0$ \n $x \\in [{xmin},{xmax}]$ \n $y \\in [{ymin},{ymax}]$\n opt= [x= {xopt:.3f}, y= {yopt:.3f}; f= {fopt:.3f}]\n '''\n ax.text(\n x= -10, \n y= -10,\n s= infoStr\n )\n\n pl.show()\n```\n\n\n```python\ndef f(s):\n x, y= s\n z= x + y\n \n return z\ndef c1(s):\n x,y= s\n z= x**2 + y**2 -1\n return z\n\ndef c2(s):\n x,y= s\n z= -(x**2 + y**2 -2)\n return z\n\nConstraints= [\n {'fun':c1, 'type':'ineq'},\n {'fun':c2, 'type':'ineq'},\n]\n\nxyBounds= [(-5, 5), # xmin, xmax\n (-5, 5) # ymin, ymax\n ]\n```\n\n\n```python\nryOptimization(f,c1,c2, xyBounds)\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "0d212d18084f36f87a4f602c98d2a8b1f6a03884", "size": 240399, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "ch04_HomeWork_1.ipynb", "max_stars_repo_name": "loucadgarbon/pattern-recognition", "max_stars_repo_head_hexsha": "2fd45db9597c01f8b76aba1cad9575c2b3a764ec", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "ch04_HomeWork_1.ipynb", "max_issues_repo_name": "loucadgarbon/pattern-recognition", "max_issues_repo_head_hexsha": "2fd45db9597c01f8b76aba1cad9575c2b3a764ec", 
"max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ch04_HomeWork_1.ipynb", "max_forks_repo_name": "loucadgarbon/pattern-recognition", "max_forks_repo_head_hexsha": "2fd45db9597c01f8b76aba1cad9575c2b3a764ec", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 928.1814671815, "max_line_length": 233826, "alphanum_fraction": 0.9523126136, "converted": true, "num_tokens": 1173, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9449947132556619, "lm_q2_score": 0.8723473697001441, "lm_q1q2_score": 0.8243636524891186}} {"text": "```python\n%pylab inline\nfrom sympy import symbols, Matrix\nfrom sympy.parsing.sympy_parser import parse_expr\n```\n\n Populating the interactive namespace from numpy and matplotlib\n\n\n\n```python\nexpr = parse_expr(\"x*y\")\nexpr\n```\n\n\n\n\n x*y\n\n\n\n\n```python\ntype(expr)\n```\n\n\n\n\n sympy.core.mul.Mul\n\n\n\n\n```python\ntype(parse_expr(\"Matrix([[1,2],[3,4]]) * t\"))\n```\n\n\n\n\n sympy.matrices.dense.MutableDenseMatrix\n\n\n\n\n```python\ntype(parse_expr(\"cos(x)\"))\n```\n\n\n\n\n cos\n\n\n\n\n```python\ntype(parse_expr(\"Eq(10,a)\"))\n```\n\n\n\n\n sympy.core.relational.Equality\n\n\n\n\n```python\ntype(parse_expr(\"10>a\"))\n```\n\n\n\n\n sympy.core.relational.StrictGreaterThan\n\n\n\n\n```python\ntype(parse_expr(\"10>=a\"))\n```\n\n\n\n\n sympy.core.relational.GreaterThan\n\n\n\n\n```python\ntype(parse_expr(\"10a', '10a\n {'is_constant': False, 'is_equality': False, 'is_inequality': True, 'is_matrix': False, 'dimension': [0, 0]}\n \n 10 2 : return np.zeros(len(x)) \n #else : return None;\n return (fprime(x+stepSize,n-1)-fprime(x-stepSize,n-1))/(2.*stepSize) #... or simply compute numerically\n```\n\n`GPTools` allows for predicting derivatives very easily! Just use `n=1` in the `predict` method to compute the first derivative.\n\n\n```python\nderivOrder = 1 # order of the derivative\ny_prime_star, std_prime_star = gp.predict(X_star, n=derivOrder, return_std=True)\n```\n\n\n```python\nplt.plot(x_star, y_prime_star, label=\"prediction\")\nplt.fill_between(x_star, y_prime_star+std_prime_star, y_prime_star-std_prime_star, color='lightgrey')\nplt.plot(x, fprime(x,derivOrder), c='k', ls='--', label=\"true\")\n\nplt.title(\"The derivative\")\nplt.xlabel('$x$');\nplt.ylabel(f\"$f(x)$'s derivative of order {derivOrder}\")\nplt.legend();\n```\n\n# Nuclear Matter Application\n\nOur specific use case is similar to the example above: we fit a GP to data, in this case from a physics simulation.\nBut there is one additional source of uncertainty from the theory error. This will also involve us creating our own custom GP kernel!\n\nStart again by getting some data.\n\n\n```python\ndf = pd.read_csv('../data/all_matter_data.csv')\n# Convert differences to total prediction at each MBPT order\nmbpt_orders = ['Kin', 'MBPT_HF', 'MBPT_2', 'MBPT_3', 'MBPT_4']\ndf[mbpt_orders] = df[mbpt_orders].apply(np.cumsum, axis=1)\n# 'total' is now unnecessary. 
Remove it.\ndf.pop('total');\n```\n\n\n```python\norders = np.array([0, 2, 3, 4])\n# body = 'NN-only'\nbody = 'NN+3N'\nLambda = 450\nfits = {450: [1, 7], 500: [4, 10]}\ntrain1 = slice(None, None, 5)\nvalid1 = slice(2, None, 5)\n# valid1 = np.array([i % 5 != 0 for i in range(len())])\n[fit_n2lo, fit_n3lo] = fits[Lambda]\n\nexcluded = np.array([1])\n\nsavefigs = False\n\nmask_fit = np.isin(df['fit'], fits[Lambda]) | np.isnan(df['fit'])\n\nmask1 = \\\n (df['Body'] == body) & \\\n mask_fit & \\\n (df['Lambda'] == Lambda)\n\n\n# df_fit = df[mask_fit]\ndf_n = df[mask1 & (df['x'] == 0)]\ndf_s = df[mask1 & (df['x'] == 0.5)]\n\nkf_n = df_n[df_n['OrderEFT'] == 'LO']['kf'].values\nkf_s = df_s[df_s['OrderEFT'] == 'LO']['kf'].values\ndensity = df_n[df_n['OrderEFT'] == 'LO']['n'].values\nkf_d = kf_n.copy()\n\n# valid1 = np.arange(len(kf_n)) % 5 != 0\n\nKf_n = kf_n[:, None]\nKf_s = kf_s[:, None]\nKf_d = kf_d[:, None]\n\ny_n = np.array([df_n[df_n['OrderEFT'] == order]['MBPT_4'].values for order in df_n['OrderEFT'].unique()]).T\ny_s = np.array([df_s[df_s['OrderEFT'] == order]['MBPT_4'].values for order in df_s['OrderEFT'].unique()]).T\ny_d = y_n - y_s\n```\n\nVisualize it.\n\n\n```python\nfor i, n in enumerate(orders):\n plt.plot(kf_s, y_s[:, i], label=fr'$y_{n}$')\nplt.legend()\nplt.xlabel(r'$k_f$ [fm$^{-1}$]')\nplt.ylabel(r'$E/A$');\n```\n\nThe kernel for our GP convergence model is not a simple RBF, it must be multiplied by factors related to the converges of the EFT:\n\n\\begin{align}\n \\kappa(x, x';\\bar c, \\ell) = y_{\\text{ref}}(x)y_{\\text{ref}}(x') \\frac{[Q(x)Q(x')]^{k+1}}{1-Q(x)Q(x')} \\bar c^2 r(x,x';\\ell)\n\\end{align}\n\nwhere $\\bar c^2 r(x,x';\\ell)$ is the RBF kernel used above.\nGPTools can handle products of kernels, so we just need to create a kernel object that represents the prefactor above.\nI'm still not completely sure how to make a truly compatible kernel object, but the code below seems to do the trick.\n\n\n```python\nclass CustomKernel(gptools.Kernel):\n \"\"\"A Custom GPTools kernel that wraps an arbitrary function f with a compatible signature\n \n Parameters\n ----------\n f : callable\n A positive semidefinite kernel function that takes f(Xi, Xj, ni, nj) where ni and nj are\n integers for the number of derivatives to take with respect to Xi or Xj. It should return\n an array of Xi.shape[0]\n *args\n Args passed to the Kernel class\n **kwargs\n Kwargs passed to the Kernel class\n \"\"\"\n \n def __init__(self, f, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.f = f\n\n def __call__(self, Xi, Xj, ni, nj, hyper_deriv=None, symmetric=False):\n return self.f(Xi, Xj, int(np.unique(ni)[0]), int(np.unique(nj)[0]))\n```\n\nWe need to take arbitrary numbers of derivatives. 
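This is because differentiating a Gaussian process corresponds to differentiating its kernel in each argument; for first derivatives, for example,

\begin{align}
\operatorname{Cov}\left[f'(x), f'(x')\right] = \frac{\partial^2}{\partial x\,\partial x'}\,\kappa(x, x'),
\end{align}

and higher orders follow by repeated differentiation. For the convergence prefactor above, these mixed partial derivatives quickly become tedious to write out by hand.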
Humans can make mistakes, let `SymPy` handle it.\n\n\n```python\nfrom sympy import symbols, diff, sqrt, lambdify\n\ndef kernel_scale_sympy(lowest_order=4, highest_order=None):\n \"\"\"Creates a sympy object that is the convergence part of the GP kernel\n \n Parameters\n ----------\n lowest_order\n highest_order\n \"\"\"\n k_f1, k_f2, y_ref, Lambda_b, = symbols('k_f1 k_f2 y_ref Lambda_b')\n hbar_c = 200\n Q1 = hbar_c * k_f1 / Lambda_b\n Q2 = hbar_c * k_f2 / Lambda_b\n num = (Q1 * Q2)**(lowest_order)\n if highest_order is not None:\n num = num - (Q1 * Q2)**highest_order\n kernel_scale = y_ref**2 * num / (1 - Q1*Q2)\n return k_f1, k_f2, Lambda_b, y_ref, kernel_scale\n\ndef eval_kernel_scale(Xi, Xj=None, ni=None, nj=None, breakdown=600, ref=16, lowest_order=4, highest_order=None):\n \"\"\"Creates a matrix for the convergence part of the GP kernel.\n Compatible with the CustomKernel class signature.\n \n Parameters\n -----------\n Xi\n Xj\n ni\n nj\n breakdown\n ref\n lowest_order\n highest_order\n \"\"\"\n if ni is None:\n ni = 0\n if nj is None:\n nj = 0\n k_f1, k_f2, Lambda_b, y_ref, kernel_scale = kernel_scale_sympy(\n lowest_order=lowest_order, highest_order=highest_order\n )\n expr = diff(kernel_scale, k_f1, ni, k_f2, nj)\n f = lambdify((k_f1, k_f2, Lambda_b, y_ref), expr, \"numpy\")\n if Xj is None:\n Xj = Xi\n K = f(Xi, Xj, breakdown, ref)\n K = K.astype('float')\n return np.squeeze(K)\n```\n\nFinally, we can create the kernel objects for our GP interpolation and derivatives. Assume some values for the hyperparameters, these would come from our convergence analysis of the observable coefficients.\n\n\n```python\nfrom functools import partial\n\nk_max = 4\neval_kernel_lower = partial(eval_kernel_scale, lowest_order=0, highest_order=k_max)\neval_kernel_upper = partial(eval_kernel_scale, lowest_order=k_max+1)\n\nmatter_rbf_kernel = gptools.SquaredExponentialKernel(\n initial_params=[1,1], fixed_params=[True, True])\n\nkernel_lower = CustomKernel(eval_kernel_lower) * matter_rbf_kernel\nkernel_upper = CustomKernel(eval_kernel_upper) * matter_rbf_kernel\n```\n\n\n```python\nkf_s_all = np.linspace(kf_s[0], kf_s[-1], 50)\nKf_s_all = kf_s_all[:, None]\n```\n\nBegin by interpolating the data points we plotted above. This does not include truncation error yet. Also compute its derivative as before.\n\n\n```python\ngp_lower = gptools.GaussianProcess(kernel_lower)\ngp_lower.add_data(Kf_s[:], y_s[:,-1], err_y=1e-5)\n\ny_s_star, std_s_star = gp_lower.predict(Kf_s_all, return_std=True)\ny_s_star_prime, std_s_star_prime = gp_lower.predict(Kf_s_all, n=1, return_std=True)\n```\n\nBut now we can create the additional source of uncertainty due to EFT truncation.\nCreate both the error term for the interpolant and for its derivative.\n\n\n```python\nzero_d = np.zeros(Kf_s_all.shape, dtype=int)\none_d = np.ones(Kf_s_all.shape, dtype=int)\nstd_upper_star = np.sqrt(kernel_upper(Kf_s_all, Kf_s_all, ni=zero_d, nj=zero_d))\nstd_upper_star_prime = np.sqrt(kernel_upper(Kf_s_all, Kf_s_all, ni=one_d, nj=one_d))\n\nstd_total_star = std_s_star + std_upper_star\nstd_total_star_prime = std_s_star_prime + std_upper_star_prime \n```\n\nLet's see how we did. 
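As a quick sanity check of what the plots below show, compare the typical size of the two uncertainty contributions (both arrays were computed in the cells above; the comparison itself is only an illustrative extra step):

```python
# Compare interpolation uncertainty with the EFT truncation uncertainty
print("max interpolant std:", np.max(std_s_star))
print("max truncation  std:", np.max(std_upper_star))
```

If the second number dominates, the visible bands in the right-hand panels below come almost entirely from the truncation model.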
Error bands represent 2 standard deviations.\nThe left plot shows the interpolant alone, with error bands that are so small they can't be seen.\nThe right plot is the same as the left, except truncation bands are added.\n\n\n```python\nfig, axes = plt.subplots(1, 2, figsize=(3.5, 2.7), sharey=True, sharex=True)\nax1, ax2 = axes\nax1.plot(kf_s, y_s[:, -1], ls='', marker='.', c='k');\nax1.plot(kf_s_all, y_s_star, lw=0.8)\nax1.fill_between(kf_s_all, y_s_star+2*std_s_star, y_s_star-2*std_s_star, color=color_95)\nax1.fill_between(kf_s_all, y_s_star+std_s_star, y_s_star-std_s_star, color=color_68)\nax1.set_title('Interpolant')\nax1.set_xlabel(r'$k_f$ [fm$^{-1}$]')\n\nax2.plot(kf_s, y_s[:, -1], ls='', marker='.', c='k');\nax2.plot(kf_s_all, y_s_star, lw=0.8)\nax2.fill_between(kf_s_all, y_s_star+2*std_total_star, y_s_star-2*std_total_star, color=color_95)\nax2.fill_between(kf_s_all, y_s_star+std_total_star, y_s_star-std_total_star, color=color_68)\nax2.set_title('+ Truncation')\nax2.set_xlabel(r'$k_f$ [fm$^{-1}$]')\nfig.tight_layout(w_pad=0.25)\n\nfig.savefig('saturation_with_error_bands')\n```\n\nPlot the derivatives with respect to $k_{f}$ similarly\n\n\n```python\nfig, axes = plt.subplots(1, 2, figsize=(3.5, 2.7), sharey=True, sharex=True)\nax1, ax2 = axes\n\nax1.plot(kf_s_all, y_s_star_prime, lw=0.8)\nax1.fill_between(kf_s_all, y_s_star_prime+2*std_s_star_prime, y_s_star_prime-2*std_s_star_prime, color=color_95)\nax1.fill_between(kf_s_all, y_s_star_prime+std_s_star_prime, y_s_star_prime-std_s_star_prime, color=color_68)\nax1.axhline(0, 0, 1, c='lightgrey', zorder=0)\nax1.set_title('Interpolant')\nax1.set_xlabel(r'$k_f$ [fm$^{-1}$]')\n\nax2.plot(kf_s_all, y_s_star_prime, lw=0.8)\nax2.fill_between(kf_s_all, y_s_star_prime+2*std_total_star_prime, y_s_star_prime-2*std_total_star_prime, color=color_95)\nax2.fill_between(kf_s_all, y_s_star_prime+std_total_star_prime, y_s_star_prime-std_total_star_prime, color=color_68)\nax2.axhline(0, 0, 1, c='lightgrey', zorder=0)\nax2.set_title('+ Truncation')\nax2.set_xlabel(r'$k_f$ [fm$^{-1}$]')\n\nfig.suptitle('Derivative of $E/A$ wrt $k_{f}$', y=1.05)\nfig.tight_layout(w_pad=0.25)\nfig.savefig('pressure_with_error_bands')\n```\n\nNice! This is, up to some easily handled scaling factors, the pressure! 
With truncation errors!\n\n\n```python\n\n```\n", "meta": {"hexsha": "f7eee88ad33bb6b12083eec8a9f74453fabfd353", "size": 248499, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "analysis/derivatives-tutorial.ipynb", "max_stars_repo_name": "buqeye/nuclear-matter-convergence", "max_stars_repo_head_hexsha": "6500e686c3b0579a1ac7c7570d84ffe8e09ad085", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-09-15T19:08:49.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-16T15:37:27.000Z", "max_issues_repo_path": "analysis/derivatives-tutorial.ipynb", "max_issues_repo_name": "buqeye/nuclear-matter-convergence", "max_issues_repo_head_hexsha": "6500e686c3b0579a1ac7c7570d84ffe8e09ad085", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "analysis/derivatives-tutorial.ipynb", "max_forks_repo_name": "buqeye/nuclear-matter-convergence", "max_forks_repo_head_hexsha": "6500e686c3b0579a1ac7c7570d84ffe8e09ad085", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2020-08-31T18:54:10.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-03T14:35:18.000Z", "avg_line_length": 335.3562753036, "max_line_length": 49444, "alphanum_fraction": 0.9270298874, "converted": true, "num_tokens": 4205, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9449947055100817, "lm_q2_score": 0.8723473630627234, "lm_q1q2_score": 0.8243636394599546}} {"text": "# Clustering\n\n\nWikipedia: Cluster analysis or clustering is the task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar (in some sense or another) to each other than to those in other groups (clusters). Clustering is one of the main task of exploratory data mining, and a common technique for statistical data analysis, used in many fields, including machine learning, pattern recognition, image analysis, information retrieval, and bioinformatics.\n\nSources: http://scikit-learn.org/stable/modules/clustering.html\n\n## K-means clustering\n\nSource: C. M. Bishop *Pattern Recognition and Machine Learning*, Springer, 2006\n\nSuppose we have a data set $X = \\{x_1 , \\cdots , x_N\\}$ that consists of $N$ observations of a random $D$-dimensional Euclidean variable $x$. Our goal is to partition the data set into some number, $K$, of clusters, where we shall suppose for the moment that the value of $K$ is given. Intuitively, we might think of a cluster as comprising a group of data points whose inter-point distances are small compared to the distances to points outside of the cluster. We can formalize this notion by first introducing a set of $D$-dimensional vectors $\\mu_k$, where $k = 1, \\ldots, K$, in which $\\mu_k$ is a **prototype** associated with the $k^{th}$ cluster. As we shall see shortly, we can think of the $\\mu_k$ as representing the centres of the clusters. Our goal is then to find an assignment of data points to clusters, as well as a set of vectors $\\{\\mu_k\\}$, such that the sum of the squares of the distances of each data point to its closest prototype vector $\\mu_k$, is at a minimum.\n\nIt is convenient at this point to define some notation to describe the assignment of data points to clusters. 
For each data point $x_i$ , we introduce a corresponding set of binary indicator variables $r_{ik}\u00a0\\in \\{0, 1\\}$, where $k = 1, \\ldots, K$, that describes which of the $K$ clusters the data point $x_i$ is assigned to, so that if data point $x_i$ is assigned to cluster $k$ then $r_{ik} = 1$, and $r_{ij} = 0$ for $j \\neq k$. This is known as the 1-of-$K$ coding scheme. We can then define an objective function, denoted **inertia**, as\n\n$$\nJ(r, \\mu) = \\sum_i^N \\sum_k^K r_{ik} \\|x_i - \\mu_k\\|_2^2\n$$\n\nwhich represents the sum of the squares of the Euclidean distances of each data point to its assigned vector $\\mu_k$. Our goal is to find values for the $\\{r_{ik}\\}$ and the $\\{\\mu_k\\}$ so as to minimize the function $J$. We can do this through an iterative procedure in which each iteration involves two successive steps corresponding to successive optimizations with respect to the $r_{ik}$ and the $\\mu_k$ . First we choose some initial values for the $\\mu_k$. Then in the first phase we minimize $J$ with respect to the $r_{ik}$, keeping the $\\mu_k$ fixed. In the second phase we minimize $J$ with respect to the $\\mu_k$, keeping $r_{ik}$ fixed. This two-stage optimization process is then repeated until convergence. We shall see that these two stages of updating $r_{ik}$ and $\\mu_k$ correspond respectively to the expectation (E) and maximization (M) steps of the expectation-maximisation (EM) algorithm, and to emphasize this we shall use the terms E step and M step in the context of the $K$-means algorithm.\n\nConsider first the determination of the $r_{ik}$ . Because $J$ in is a linear function of $r_{ik}$ , this optimization can be performed easily to give a closed form solution. The terms involving different $i$ are independent and so we can optimize for each $i$ separately by choosing $r_{ik}$ to be 1 for whichever value of $k$ gives the minimum value of $||x_i - \\mu_k||^2$ . In other words, we simply assign the $i$th data point to the closest cluster centre. More formally, this can be expressed as\n\n\\begin{equation}\n r_{ik}=\\begin{cases}\n 1, & \\text{if } k = \\arg\\min_j ||x_i - \\mu_j||^2.\\\\\n 0, & \\text{otherwise}.\n \\end{cases}\n\\end{equation}\n\nNow consider the optimization of the $\\mu_k$ with the $r_{ik}$ held fixed. The objective function $J$ is a quadratic function of $\\mu_k$, and it can be minimized by setting its derivative with respect to $\\mu_k$ to zero giving\n\n$$\n2 \\sum_i r_{ik}(x_i - \\mu_k) = 0\n$$\n\nwhich we can easily solve for $\\mu_k$ to give\n\n$$\n\\mu_k = \\frac{\\sum_i r_{ik}x_i}{\\sum_i r_{ik}}.\n$$\n\nThe denominator in this expression is equal to the number of points assigned to cluster $k$, and so this result has a simple interpretation, namely set $\\mu_k$ equal to the mean of all of the data points $x_i$ assigned to cluster $k$. For this reason, the procedure is known as the $K$-means algorithm.\n\nThe two phases of re-assigning data points to clusters and re-computing the cluster means are repeated in turn until there is no further change in the assignments (or until some maximum number of iterations is exceeded). Because each phase reduces the value of the objective function $J$, convergence of the algorithm is assured. 
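This monotone decrease of the inertia is easy to observe numerically. The sketch below simply caps scikit-learn's `KMeans` at an increasing number of iterations from one fixed initialization (the feature choice and random seed are arbitrary illustrative values) and prints a non-increasing sequence of $J$ values:

```python
from sklearn import cluster, datasets

X = datasets.load_iris().data[:, :2]  # same two iris features used below
for max_iter in [1, 2, 3, 5, 10]:
    # One fixed initialization so that only the number of iterations changes
    km = cluster.KMeans(n_clusters=3, n_init=1, max_iter=max_iter,
                        random_state=0).fit(X)
    print("max_iter=%d, J=%.2f" % (max_iter, km.inertia_))
```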
However, it may converge to a local rather than global minimum of $J$.\n\n\n```python\nfrom sklearn import cluster, datasets\nimport matplotlib.pyplot as plt\nimport seaborn as sns # nice color\n%matplotlib inline\n\niris = datasets.load_iris()\nX = iris.data[:, :2] # use only 'sepal length and sepal width'\ny_iris = iris.target\n\nkm2 = cluster.KMeans(n_clusters=2).fit(X)\nkm3 = cluster.KMeans(n_clusters=3).fit(X)\nkm4 = cluster.KMeans(n_clusters=4).fit(X)\n\nplt.figure(figsize=(9, 3)) \nplt.subplot(131)\nplt.scatter(X[:, 0], X[:, 1], c=km2.labels_)\nplt.title(\"K=2, J=%.2f\" % km2.inertia_)\n\nplt.subplot(132)\nplt.scatter(X[:, 0], X[:, 1], c=km3.labels_)\nplt.title(\"K=3, J=%.2f\" % km3.inertia_)\n\nplt.subplot(133)\nplt.scatter(X[:, 0], X[:, 1], c=km4.labels_)#.astype(np.float))\nplt.title(\"K=4, J=%.2f\" % km4.inertia_)\n```\n\n### Exercises\n\n#### 1. Analyse clusters\n\n- Analyse the plot above visually. What would a good value of $K$ be?\n\n- If you instead consider the inertia, the value of $J$, what would a good value of $K$ be?\n\n- Explain why there is such difference.\n\n- For $K=2$ why did $K$-means clustering not find the two \"natural\" clusters? See the assumptions of $K$-means: \nhttp://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_assumptions.html#example-cluster-plot-kmeans-assumptions-py\n\n#### 2. Re-implement the $K$-means clustering algorithm (homework)\n\nWrite a function `kmeans(X, K)` that return an integer vector of the samples' labels.\n\n## Gaussian mixture models\n\nThe Gaussian mixture model (GMM) is a simple linear superposition of Gaussian components over the data, aimed at providing a rich class of density models. We turn to a formulation of Gaussian mixtures in terms of discrete latent variables: the $K$ hidden classes to be discovered.\n\nDifferences compared to $K$-means:\n\n- Whereas the $K$-means algorithm performs a hard assignment of data points to clusters, in which each data point is associated uniquely with one cluster, the GMM algorithm makes a soft assignment based on posterior probabilities.\n\n- Whereas the classic $K$-means is only based on Euclidean distances, classic GMM use a Mahalanobis distances that can deal with non-spherical distributions. It should be noted that Mahalanobis could be plugged within an improved version of $K$-Means clustering. The Mahalanobis distance is unitless and scale-invariant, and takes into account the correlations of the data set.\n\nThe Gaussian mixture distribution can be written as a linear superposition of $K$ Gaussians in the form:\n\n$$\np(x) = \\sum_{k=1}^K \\mathcal{N}(x \\,|\\, \\mu_k, \\Sigma_k)p(k),\n$$\n\nwhere:\n\n- The $p(k)$ are the mixing coefficients also know as the class probability of class $k$, and they sum to one: $\\sum_{k=1}^K p(k) = 1$.\n\n- $\\mathcal{N}(x \\,|\\, \\mu_k, \\Sigma_k) = p(x \\,|\\, k)$ is the conditional distribution of $x$ given a particular class $k$. 
It is the multivariate Gaussian distribution defined over a $P$-dimensional vector $x$ of continuous variables.\n\nThe goal is to maximize the log-likelihood of the GMM:\n\n$$\n\\ln \\prod_{i=1}^N p(x_i)= \\ln \\prod_{i=1}^N \\left\\{ \\sum_{k=1}^K \\mathcal{N}(x_i \\,|\\, \\mu_k, \\Sigma_k)p(k) \\right\\} = \\sum_{i=1}^N \\ln\\left\\{ \\sum_{k=1}^K \\mathcal{N}(x_i \\,|\\, \\mu_k, \\Sigma_k) p(k) \\right\\}.\n$$\n\nTo compute the classes parameters: $p(k), \\mu_k, \\Sigma_k$ we sum over all samples, by weighting each sample $i$ by its responsibility or contribution to class $k$: $p(k \\,|\\, x_i)$ such that for each point its contribution to all classes sum to one $\\sum_k p(k \\,|\\, x_i) = 1$. This contribution is the conditional probability\nof class $k$ given $x$: $p(k \\,|\\, x)$ (sometimes called the posterior). It can be computed using Bayes' rule:\n\n\\begin{align}\np(k \\,|\\, x) &= \\frac{p(x \\,|\\, k)p(k)}{p(x)}\\\\\n &= \\frac{\\mathcal{N}(x \\,|\\, \\mu_k, \\Sigma_k)p(k)}{\\sum_{k=1}^K \\mathcal{N}(x \\,|\\, \\mu_k, \\Sigma_k)p(k)}\n\\end{align}\n\nSince the class parameters, $p(k)$, $\\mu_k$ and $\\Sigma_k$, depend on the responsibilities $p(k \\,|\\, x)$ and the responsibilities depend on class parameters, we need a two-step iterative algorithm: the expectation-maximization (EM) algorithm. We discuss this algorithm next.\n\n###\u00a0The expectation-maximization (EM) algorithm for Gaussian mixtures\n\nGiven a Gaussian mixture model, the goal is to maximize the likelihood function with respect to the parameters (comprised of the means and covariances of the components and the mixing coefficients).\n\nInitialize the means $\\mu_k$, covariances $\\Sigma_k$ and mixing coefficients $p(k)$\n\n1. **E step**. For each sample $i$, evaluate the responsibilities for each class $k$ using the current parameter values\n\n$$\np(k \\,|\\, x_i) = \\frac{\\mathcal{N}(x_i \\,|\\, \\mu_k, \\Sigma_k)p(k)}{\\sum_{k=1}^K \\mathcal{N}(x_i \\,|\\, \\mu_k, \\Sigma_k)p(k)}\n$$\n\n2. **M step**. For each class, re-estimate the parameters using the current responsibilities\n\n\\begin{align}\n\\mu_k^{\\text{new}} &= \\frac{1}{N_k} \\sum_{i=1}^N p(k \\,|\\, x_i) x_i\\\\\n\\Sigma_k^{\\text{new}} &= \\frac{1}{N_k} \\sum_{i=1}^N p(k \\,|\\, x_i) (x_i - \\mu_k^{\\text{new}}) (x_i - \\mu_k^{\\text{new}})^T\\\\\np^{\\text{new}}(k) &= \\frac{N_k}{N}\n\\end{align}\n\n3. Evaluate the log-likelihood\n\n$$\n \\sum_{i=1}^N \\ln \\left\\{ \\sum_{k=1}^K \\mathcal{N}(x|\\mu_k, \\Sigma_k) p(k) \\right\\},\n$$\n\nand check for convergence of either the parameters or the log-likelihood. 
If the convergence criterion is not satisfied return to step 1.\n\n\n```python\nimport numpy as np\nfrom sklearn import datasets\nimport matplotlib.pyplot as plt\nimport seaborn as sns # nice color\nimport sklearn\nfrom sklearn.mixture import GaussianMixture\n\nimport pystatsml.plot_utils\n\ncolors = sns.color_palette()\n\niris = datasets.load_iris()\nX = iris.data[:, :2] # 'sepal length (cm)''sepal width (cm)'\ny_iris = iris.target\n\ngmm2 = GaussianMixture(n_components=2, covariance_type='full').fit(X)\ngmm3 = GaussianMixture(n_components=3, covariance_type='full').fit(X)\ngmm4 = GaussianMixture(n_components=4, covariance_type='full').fit(X)\n\nplt.figure(figsize=(9, 3)) \nplt.subplot(131)\nplt.scatter(X[:, 0], X[:, 1], c=[colors[lab] for lab in gmm2.predict(X)])#, color=colors)\nfor i in range(gmm2.covariances_.shape[0]):\n pystatsml.plot_utils.plot_cov_ellipse(cov=gmm2.covariances_[i, :], pos=gmm2.means_[i, :],\n facecolor='none', linewidth=2, edgecolor=colors[i])\n plt.scatter(gmm2.means_[i, 0], gmm2.means_[i, 1], edgecolor=colors[i],\n marker=\"o\", s=100, facecolor=\"w\", linewidth=2)\nplt.title(\"K=2\")\n\nplt.subplot(132)\nplt.scatter(X[:, 0], X[:, 1], c=[colors[lab] for lab in gmm3.predict(X)])\nfor i in range(gmm3.covariances_.shape[0]):\n pystatsml.plot_utils.plot_cov_ellipse(cov=gmm3.covariances_[i, :], pos=gmm3.means_[i, :],\n facecolor='none', linewidth=2, edgecolor=colors[i])\n plt.scatter(gmm3.means_[i, 0], gmm3.means_[i, 1], edgecolor=colors[i],\n marker=\"o\", s=100, facecolor=\"w\", linewidth=2)\nplt.title(\"K=3\")\n\nplt.subplot(133)\nplt.scatter(X[:, 0], X[:, 1], c=[colors[lab] for lab in gmm4.predict(X)]) # .astype(np.float))\nfor i in range(gmm4.covariances_.shape[0]):\n pystatsml.plot_utils.plot_cov_ellipse(cov=gmm4.covariances_[i, :], pos=gmm4.means_[i, :],\n facecolor='none', linewidth=2, edgecolor=colors[i])\n plt.scatter(gmm4.means_[i, 0], gmm4.means_[i, 1], edgecolor=colors[i],\n marker=\"o\", s=100, facecolor=\"w\", linewidth=2)\n_ = plt.title(\"K=4\")\n```\n\n## Model selection\n\n\n### Bayesian information criterion\n\nIn statistics, the Bayesian information criterion (BIC) is a criterion for model selection among a finite set of models; the model with the lowest BIC is preferred. It is based, in part, on the likelihood function and it is closely related to the Akaike information criterion (AIC).\n\n\n```python\nX = iris.data\ny_iris = iris.target\n\nbic = list()\n#print(X)\n\nks = np.arange(1, 10)\n\nfor k in ks:\n gmm = GaussianMixture(n_components=k, covariance_type='full')\n gmm.fit(X)\n bic.append(gmm.bic(X))\n\nk_chosen = ks[np.argmin(bic)]\n\nplt.plot(ks, bic)\nplt.xlabel(\"k\")\nplt.ylabel(\"BIC\")\n\nprint(\"Choose k=\", k_chosen)\n```\n\n## Hierarchical clustering\n\nHierarchical clustering is an approach to clustering that build hierarchies of clusters in two main approaches:\n\n- **Agglomerative**: A *bottom-up* strategy, where each observation starts in their own cluster, and pairs of clusters are merged upwards in the hierarchy.\n\n- **Divisive**: A *top-down* strategy, where all observations start out in the same cluster, and then the clusters are split recursively downwards in the hierarchy.\n\nIn order to decide which clusters to merge or to split, a measure of dissimilarity between clusters is introduced. More specific, this comprise a *distance* measure and a *linkage* criterion. 
The distance measure is just what it sounds like, and the linkage criterion is essentially a function of the distances between points, for instance the minimum distance between points in two clusters, the maximum distance between points in two clusters, the average distance between points in two clusters, etc. One particular linkage criterion, the Ward criterion, will be discussed next.\n\n### Ward clustering\n\nWard clustering belongs to the family of agglomerative hierarchical clustering algorithms. This means that they are based on a \"bottoms up\" approach: each sample starts in its own cluster, and pairs of clusters are merged as one moves up the hierarchy.\n\nIn Ward clustering, the criterion for choosing the pair of clusters to merge at each step is the minimum variance criterion. Ward's minimum variance criterion minimizes the total within-cluster variance by each merge. To implement this method, at each step: find the pair of clusters that leads to minimum increase in total within-cluster variance after merging. This increase is a weighted squared distance between cluster centers.\n\nThe main advantage of agglomerative hierarchical clustering over $K$-means clustering is that you can benefit from known neighborhood information, for example, neighboring pixels in an image.\n\n\n```python\nfrom sklearn import cluster, datasets\nimport matplotlib.pyplot as plt\nimport seaborn as sns # nice color\n\niris = datasets.load_iris()\nX = iris.data[:, :2] # 'sepal length (cm)''sepal width (cm)'\ny_iris = iris.target\n\nward2 = cluster.AgglomerativeClustering(n_clusters=2, linkage='ward').fit(X)\nward3 = cluster.AgglomerativeClustering(n_clusters=3, linkage='ward').fit(X)\nward4 = cluster.AgglomerativeClustering(n_clusters=4, linkage='ward').fit(X)\n\nplt.figure(figsize=(9, 3)) \nplt.subplot(131)\nplt.scatter(X[:, 0], X[:, 1], c=ward2.labels_)\nplt.title(\"K=2\")\n\nplt.subplot(132)\nplt.scatter(X[:, 0], X[:, 1], c=ward3.labels_)\nplt.title(\"K=3\")\n\nplt.subplot(133)\nplt.scatter(X[:, 0], X[:, 1], c=ward4.labels_) # .astype(np.float))\nplt.title(\"K=4\")\n```\n\n## Exercises\n\nPerform clustering of the iris dataset based on all variables using Gaussian mixture models.\nUse PCA to visualize clusters.\n", "meta": {"hexsha": "0ea4d1212b121e67d12488fd403f9e4b94b8ff9e", "size": 20912, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "machine_learning/clustering.ipynb", "max_stars_repo_name": "gautard/pystatsml", "max_stars_repo_head_hexsha": "c86109bd1d95252d7b03bf32dbf267278618dae5", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 56, "max_stars_repo_stars_event_min_datetime": "2017-10-11T17:59:45.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-03T11:00:48.000Z", "max_issues_repo_path": "machine_learning/clustering.ipynb", "max_issues_repo_name": "gautard/pystatsml", "max_issues_repo_head_hexsha": "c86109bd1d95252d7b03bf32dbf267278618dae5", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2018-03-08T16:34:50.000Z", "max_issues_repo_issues_event_max_datetime": "2019-11-13T15:09:58.000Z", "max_forks_repo_path": "machine_learning/clustering.ipynb", "max_forks_repo_name": "gautard/pystatsml", "max_forks_repo_head_hexsha": "c86109bd1d95252d7b03bf32dbf267278618dae5", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 51, "max_forks_repo_forks_event_min_datetime": "2016-09-22T20:28:17.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-15T09:36:06.000Z", "avg_line_length": 
52.8080808081, "max_line_length": 1037, "alphanum_fraction": 0.6242827085, "converted": true, "num_tokens": 4373, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9324533107374444, "lm_q2_score": 0.8840392909114835, "lm_q1q2_score": 0.8243253636323955}} {"text": "\n\n$\\Large{\\text{Kernel machines}}$ \n\nLet us first generate a synthetic data set to illustrate the behavior of different kernels.\n\n\n```python\nfrom sklearn.datasets import make_moons\nimport numpy as np\nX,y = make_moons(n_samples=200, shuffle=True, noise=0.1, random_state=42)\nprint ('first five target values before changing 0 to -1')\n# print (sum(y))\nprint (y[0:5])\ny = np.where(y >=1,y,-1)\nprint('moon data shape:', np.shape(X))\n\n#check the shape of moon target labels\nprint('moon target shape:', np.shape(y))\n#We can print first 5 samples of moon data and check \nprint('Features of first five samples of moon data:')\nprint(X[0:5,])\nprint ('first five target values after changing 0 to -1')\nprint (y[0:5])\n```\n\n first five target values before changing 0 to -1\n [0 0 0 1 1]\n moon data shape: (200, 2)\n moon target shape: (200,)\n Features of first five samples of moon data:\n [[-1.04942573 0.08444263]\n [ 0.92281755 0.45748851]\n [ 0.65678659 0.69959669]\n [ 1.1889402 -0.38652807]\n [ 0.28926455 -0.13774489]]\n first five target values after changing 0 to -1\n [-1 -1 -1 1 1]\n\n\n\n```python\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\n# scatter plot, dots colored by class value\n\ndf = pd.DataFrame(dict(x=X[:,0], y=X[:,1], label=y))\ncolors = {-1:'red', 1:'blue'}\nfig, ax = plt.subplots()\ngrouped = df.groupby('label')\nfor key, group in grouped:\n group.plot(ax=ax, kind='scatter', x='x', y='y', label=key, color=colors[key])\nplt.show()\n```\n\n$\\large{\\text{Kernel trick}}$\n\nFortunately, finding a suitable choice of $\\Phi$ transformation can be by-passed if we can find a suitable kernel function $K$ such that the dot product $\\Phi(\\mathbf{u})^\\top \\Phi(\\mathbf{z})$ can be represented as $K(\\mathbf{u},\\mathbf{z})$. \n\nIndeed, this is possible using a result known as $\\textbf{Mercer's theorem}$.\n\n$\\large{\\text{Mercer's theorem}}$\n\n$K$ satisfies:\n\n$\n\\begin{align}\n\\int K(\\mathbf{u}, \\mathbf{z}) g(\\mathbf{u}) g(\\mathbf{z}) d\\mathbf{u} d\\mathbf{z} \\geq 0 \n\\end{align}\n$\n\nfor any function $g$ satisfying $\\int g^2(\\mathbf{u}) d\\mathbf{u} < \\infty$, if and only if $K$ corresponds to a unique transformation $\\Phi$ such that $K( \\mathbf{u}, \\mathbf{z}) = \\Phi(\\mathbf{u})^\\top \\Phi(\\mathbf{z})$. \n\nSeveral possible choices of $K$ exist. 
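A practical consequence of Mercer's condition is that the Gram matrix $G_{ij} = K(\mathbf{x}_i, \mathbf{x}_j)$ built on any finite set of points must be positive semi-definite, which is easy to check numerically. In the sketch below, the helper function, the random points and the degree-2 polynomial kernel are only illustrative choices:

```python
import numpy as np

def gram_matrix(kernel, X):
    # Gram matrix G[i, j] = kernel(X[i], X[j]) for a candidate kernel
    n = len(X)
    G = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            G[i, j] = kernel(X[i], X[j])
    return G

rng = np.random.RandomState(0)
pts = rng.randn(20, 2)                   # 20 random 2-D points
poly2 = lambda u, z: (u @ z + 1.0)**2    # degree-2 polynomial kernel

# Smallest eigenvalue should be >= 0 (up to round-off) for a valid kernel
print(np.linalg.eigvalsh(gram_matrix(poly2, pts)).min())
```

Any kernel used with an SVM should pass this check on arbitrary point sets.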
Some examples are:\n\n\n\n* Polynomial kernel: $K(\\mathbf{u}, \\mathbf{z}) = (\\mathbf{u}^\\top \\mathbf{z} + 1)^p$ \n* Gaussian kernel (or) radial basis function (rbf) kernel: $K(\\mathbf{u}, \\mathbf{z}) = e^{-\\frac{\\|\\mathbf{u}-\\mathbf{z}\\|^2}{2\\sigma^2}}$\n\n\n```python\nnp.random.seed(1000)\nnum_samples = len(X)\n#Create an index array \nindexarr = np.arange(num_samples) #index array\nnp.random.shuffle(indexarr) #shuffle the indices \n#print('shuffled indices of samples:')\n#print(indexarr)\n```\n\n\n```python\n#Use the samples corresponding to first 80% of indexarr for training \nnum_train = int(0.8*num_samples)\n#Use the remaining 20% samples for testing \nnum_test = num_samples-num_train\nprint('num_train: ',num_train, 'num_test: ', num_test)\n```\n\n num_train: 160 num_test: 40\n\n\n\n```python\n#Use the first 80% of indexarr to create the train data features and train labels \ntrain_X = X[indexarr[0:num_train]]\ntrain_y = y[indexarr[0:num_train]]\nprint('shape of train data features:')\nprint(train_X.shape)\nprint('shape of train data labels')\nprint(train_y.shape)\n\n# print (sum(x <= 0 for x in train_y))\n```\n\n shape of train data features:\n (160, 2)\n shape of train data labels\n (160,)\n\n\n\n```python\n#Use remaining 20% of indexarr to create the test data and test labels \ntest_X = X[indexarr[num_train:num_samples]]\ntest_y = y[indexarr[num_train:num_samples]]\nprint('shape of test data features:')\nprint(test_X.shape)\nprint('shape of test data labels')\nprint(test_y.shape)\n```\n\n shape of test data features:\n (40, 2)\n shape of test data labels\n (40,)\n\n\n$\\Large{\\text{Finding best hyperparameter } \\gamma \\ \\text{using cross validation}}$\n\nQuestion: What is cross validation?\n\nAnswer: Cross validation (CV) is a technique employed to best tune the hyperparameters of a machine learning algorithm. The k-fold cross validation splits the data into k folds. The samples in $k-1$ folds are used in training phase and the remaining fold is treated as test set. The scores are computed at every iteration of CV and then the hyperparameter associated with highest average score is chosen for the testing purpose.\n\n#Visualizing the working procedure of k-fold cross validation\n\n\nImage credit: $\\texttt{https://scikit-learn.org/stable/_images/grid_search_cross_validation.png} $\n\n\n```python\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.svm import SVC\n\ngammas = [0.0001,0.001, 0.01, 0.1, 1, 20, 40, 60, 80, 100]\nkernels = ['sigmoid','linear', 'poly', 'rbf']\nbest_gamma = {}\ncv_k = 5 #5-fold cross validation\nfor kernel_ in kernels:\n print (kernel_,'kernel')\n avg_score = np.zeros(len(gammas))\n for gamma in gammas:\n clf = SVC(kernel=kernel_, gamma=gamma, random_state=1)\n scores = cross_val_score(clf, train_X, train_y, cv=cv_k) \n avg_score[gammas.index(gamma)] = np.mean(scores)\n # print ('average score for kernel',kernel_, 'at gamma = ', gamma,'is',avg_score[gammas.index(gamma)])\n print (avg_score)\n max_score_index = np.argmax(avg_score)\n # print (max_score_index)\n best_gamma[kernel_] = gammas[int(max_score_index)]\n\nprint ('best hyperparameters = ', best_gamma)\n```\n\n sigmoid kernel\n [0.5125 0.5125 0.8 0.85625 0.65625 0.575 0.5625 0.55625 0.55625\n 0.55625]\n linear kernel\n [0.8625 0.8625 0.8625 0.8625 0.8625 0.8625 0.8625 0.8625 0.8625 0.8625]\n poly kernel\n [0.5125 0.5125 0.5125 0.6125 0.925 0.925 0.925 0.925 0.925 0.9125]\n rbf kernel\n [0.5125 0.5125 0.825 0.85625 0.98125 0.99375 0.99375 0.99375 1.\n 1. 
]\n best hyperparameters = {'sigmoid': 0.1, 'linear': 0.0001, 'poly': 1, 'rbf': 80}\n\n\nNow let us try to use four different kernels with above learned hyperparameters on moon data set. \n\n\n```python\n# for creating a mesh to plot in\nh=0.02 #mesh step size\nnum_samples = len(X)\nx1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1\nx2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1\nxx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, h),\n np.arange(x2_min, x2_max, h))\n\nfrom sklearn.svm import SVC\ntrain_accuracy = {}\ntest_accuracy = {}\nfor kernel_ in kernels:\n clf = SVC(kernel=kernel_,gamma=best_gamma[kernel_],max_iter = 10000)\n clf_model = clf.fit(train_X,train_y.ravel())\n predicted_labels_train = clf_model.predict(train_X)\n # print(predicted_labels)\n train_error = np.sum(0.5*np.abs(predicted_labels_train-train_y.ravel()))/len(train_y)*100.0\n train_accuracy[kernel_] = 100.0-train_error \n\n predicted_labels_test = clf_model.predict(test_X)\n # print(predicted_labels)\n test_error = np.sum(0.5*np.abs(predicted_labels_test-test_y.ravel()))/len(test_y)*100.0\n test_accuracy[kernel_] = 100.0-test_error \n \n ## visualizing decision boundary\n Z = clf_model.predict(np.c_[xx1.ravel(), xx2.ravel()])\n # Put the result into a color plot\n Z = Z.reshape(xx1.shape)\n\n fig, (ax1, ax2) = plt.subplots(1,2, figsize=(16,6))\n ax1.contourf(xx1, xx2, Z, cmap=plt.cm.coolwarm, alpha=0.8)\n\n # Plot also the training points\n ax1.scatter(train_X[:, 0], train_X[:, 1], c=train_y, cmap=plt.cm.coolwarm)\n # plt.scatter(X[:int(num_samples/2),0],X[:int(num_samples/2),1], marker='o', color='red')\n # plt.scatter(X[int(num_samples/2):,0],X[int(num_samples/2):,1], marker='s', color='green')\n xlabel = 'x1' + str('\\n')+'kernel='+str(kernel_)\n ax1.set_xlabel(xlabel)\n ax1.set_ylabel('x2')\n ax1.set_xlim(xx1.min(), xx1.max())\n ax1.set_ylim(xx2.min(), xx2.max())\n ax1.set_xticks(())\n ax1.set_yticks(())\n ax1.set_title('decision boundary with training points')\n\n #plot the test points along with decision boundaries\n ax2.contourf(xx1, xx2, Z, cmap=plt.cm.coolwarm, alpha=0.8)\n\n # Plot also the test points\n ax2.scatter(test_X[:, 0], test_X[:, 1], c=test_y, cmap=plt.cm.coolwarm)\n # plt.scatter(X[:int(num_samples/2),0],X[:int(num_samples/2),1], marker='o', color='red')\n # plt.scatter(X[int(num_samples/2):,0],X[int(num_samples/2):,1], marker='s', color='green')\n xlabel = 'x1' + str('\\n')+'kernel='+str(kernel_)\n ax2.set_xlabel(xlabel)\n ax2.set_ylabel('x2')\n ax2.set_xlim(xx1.min(), xx1.max())\n ax2.set_ylim(xx2.min(), xx2.max())\n ax2.set_xticks(())\n ax2.set_yticks(())\n ax2.set_title('decision boundary with test points')\n\n\n plt.show()\n\nprint('train accuracy:', train_accuracy)\nprint('test accuracy:', test_accuracy)\n```\n\n#Let us now check the decision boundaries for RBF kernel with different $\\gamma$ values \n\n\n```python\n# for creating a mesh to plot in\nh=0.02 #mesh step size\nx1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1\nx2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1\nxx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, h),\n np.arange(x2_min, x2_max, h))\n\nfrom sklearn.svm import SVC\ngamma = [0.0001,0.001, 0.01, 0.1, 1, 20, 40, 60, 80, 100]\ntrain_accuracy = {}\ntest_accuracy = {}\nfor gamma_ in gamma:\n clf = SVC(kernel='rbf', gamma=gamma_, max_iter = 10000)\n clf_model = clf.fit(train_X,train_y.ravel())\n predicted_labels = clf_model.predict(train_X)\n # print(predicted_labels)\n train_error = np.sum(0.5*np.abs(predicted_labels-train_y.ravel()))/len(train_y)*100.0\n 
train_accuracy[gamma_] = 100.0-train_error \n predicted_test_labels = clf_model.predict(test_X)\n test_error = np.sum(0.5*np.abs(predicted_test_labels-test_y.ravel()))/len(test_y)*100.0\n test_accuracy[gamma_] = 100.0-test_error\n ## visualizing decision boundary\n Z = clf_model.predict(np.c_[xx1.ravel(), xx2.ravel()])\n # Put the result into a color plot\n Z = Z.reshape(xx1.shape)\n\n fig, (ax1, ax2) = plt.subplots(1,2, figsize=(16,6))\n ax1.contourf(xx1, xx2, Z, cmap=plt.cm.coolwarm, alpha=0.8)\n\n # Plot also the training points\n ax1.scatter(train_X[:, 0], train_X[:, 1], c=train_y, cmap=plt.cm.coolwarm)\n # plt.scatter(X[:int(num_samples/2),0],X[:int(num_samples/2),1], marker='o', color='red')\n # plt.scatter(X[int(num_samples/2):,0],X[int(num_samples/2):,1], marker='s', color='green')\n xlabel = 'x1 '+str('\\n')+'kernel='+str(kernel_)+' gamma='+str(gamma_)\n ax1.set_xlabel(xlabel)\n ax1.set_ylabel('x2')\n ax1.set_xlim(xx1.min(), xx1.max())\n ax1.set_ylim(xx2.min(), xx2.max())\n ax1.set_xticks(())\n ax1.set_yticks(())\n ax1.set_title('decision boundary with training points')\n\n #plot the test points along with decision boundaries\n ax2.contourf(xx1, xx2, Z, cmap=plt.cm.coolwarm, alpha=0.8)\n\n # Plot also the training points\n ax2.scatter(test_X[:, 0], test_X[:, 1], c=test_y, cmap=plt.cm.coolwarm)\n # plt.scatter(X[:int(num_samples/2),0],X[:int(num_samples/2),1], marker='o', color='red')\n # plt.scatter(X[int(num_samples/2):,0],X[int(num_samples/2):,1], marker='s', color='green')\n xlabel = 'x1 '+str('\\n')+'kernel='+str(kernel_)+' gamma='+str(gamma_)\n ax2.set_xlabel(xlabel)\n ax2.set_ylabel('x2')\n ax2.set_xlim(xx1.min(), xx1.max())\n ax2.set_ylim(xx2.min(), xx2.max())\n ax2.set_xticks(())\n ax2.set_yticks(())\n ax2.set_title('decision boundary with test points')\n\n\n plt.show()\n \n \nprint('train accuracy:', train_accuracy)\nprint ('test accuracy:',test_accuracy)\n```\n", "meta": {"hexsha": "510223055e34e43f0b0519a5cc33a7a8fc9c018e", "size": 707498, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Kernel Machines/AIML_CEP_Kernel_machines_TA_session_06Nov2021.ipynb", "max_stars_repo_name": "arvind-maurya/AIML_CEP_2021", "max_stars_repo_head_hexsha": "eb09ccb812b1b3fb5761d45423a08289923c8da9", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Kernel Machines/AIML_CEP_Kernel_machines_TA_session_06Nov2021.ipynb", "max_issues_repo_name": "arvind-maurya/AIML_CEP_2021", "max_issues_repo_head_hexsha": "eb09ccb812b1b3fb5761d45423a08289923c8da9", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Kernel Machines/AIML_CEP_Kernel_machines_TA_session_06Nov2021.ipynb", "max_forks_repo_name": "arvind-maurya/AIML_CEP_2021", "max_forks_repo_head_hexsha": "eb09ccb812b1b3fb5761d45423a08289923c8da9", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 7, "max_forks_repo_forks_event_min_datetime": "2021-10-17T10:16:01.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-29T13:43:25.000Z", "avg_line_length": 1019.4495677233, "max_line_length": 50066, "alphanum_fraction": 0.95145852, "converted": true, "num_tokens": 3654, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9111797003640646, "lm_q2_score": 0.904650530602188, "lm_q1q2_score": 0.8242991994082938}} {"text": "# \u30bd\u30eb\u30d0\u3092\u4f7f\u3046\n\n**MatrixEquations.jl**\u3092\u4f7f\u3063\u3066\u6700\u9069\u30ec\u30ae\u30e5\u30ec\u30fc\u30bf\u3092\u8a2d\u8a08\u3059\u308b\uff0e \n\u516c\u5f0f\u30da\u30fc\u30b8 : [https://docs.juliahub.com/MatrixEquations/1uOBF/1.1.4/](https://docs.juliahub.com/MatrixEquations/1uOBF/1.1.4/)\n\n\n```julia\nusing Plots\nusing LinearAlgebra\nusing MatrixEquations\n```\n\n\u30b7\u30b9\u30c6\u30e0\u3092\u5b9a\u7fa9 \n\n$$\n \\begin{align}\n \\dot{x} &= Ax+Bu\\\\\n y &= Cx\n \\end{align}\\\\\n A\\in\\mathbb{R}^{2\\times2},~B\\in\\mathbb{R}^{2\\times2},~C\\in\\mathbb{R}^{2\\times2}\n$$\n\n\n```julia\nx\u2080 = [1, 0.5]\nA = [\n 1.1 2\n -0.3 -1\n]\nB = [\n 1 2\n 0.847 3\n]\nC = [\n 1. 0.\n 0. 1.\n];\n```\n\n\u5b89\u5b9a\u6027\u3092\u8abf\u3079\u3066\u307f\u308b\n\n\n```julia\neigvals(A)\n```\n\n\n 2-element Vector{Float64}:\n -0.6588723439378912\n 0.7588723439378913\n\n\n\u8ca0\u306e\u56fa\u6709\u5024\u3092\u6301\u3064\u306e\u3067\u4e0d\u5b89\u5b9a\uff0e \n\u521d\u671f\u5024\u5fdc\u7b54\u3092\u8abf\u3079\u308b\uff0e \n\n\n```julia\nt = 0:0.01:5\nx = Vector{typeof(x\u2080)}(undef, length(t))\nfor i in 1:length(t)\n x[i] = exp(A .* t[i]) * x\u2080\nend\n\nx = permutedims(hcat(x...))\nplot(t, x[:, 1], label=\"x1\")\nplot!(t, x[:, 2], label=\"x2\")\nplot!(xlabel=\"t\")\nsavefig(\"lqr.png\")\n```\n\n \n\u767a\u6563\u3059\u308b\u3053\u3068\u304c\u308f\u304b\u308b\uff0e \n\n## \u30ea\u30ab\u30c3\u30c1\u65b9\u7a0b\u5f0f\u3092\u89e3\u304f\n`MatrixEquations`\u306e`arec`\u3092\u4f7f\u3046\uff0e \n\u30c9\u30ad\u30e5\u30e1\u30f3\u30c8 : [https://docs.juliahub.com/MatrixEquations/1uOBF/1.1.4/autodocs/#MatrixEquations.arec](https://docs.juliahub.com/MatrixEquations/1uOBF/1.1.4/autodocs/#MatrixEquations.arec) \n\n`arec`\u306f\u6b21\u306e\u9023\u7d9a\u4ee3\u6570\u30ea\u30ab\u30c3\u30c1\u65b9\u7a0b\u5f0f\u3092\u89e3\u304f\uff0e \n\n\n\n\n```julia\nQ = diagm([10.0, 10.0])\nR = diagm([1., 1.])\nS = zero(B)\nP, E, F = arec(A, B, R, Q, S);\n```\n\n\u623b\u308a\u5024\u306f\u6b21\u306e\u901a\u308a\uff0e \n* `P` : \u30ea\u30ab\u30c3\u30c1\u65b9\u7a0b\u5f0f\u306e\u89e3 \n* `E` : \u6700\u9069\u30ec\u30ae\u30e5\u30ec\u30fc\u30bf\u306e\u9589\u30eb\u30fc\u30d7\u306e\u56fa\u6709\u5024 \n* `F` : \u6700\u9069\u30d5\u30a3\u30fc\u30c9\u30d0\u30c3\u30af\u884c\u5217 \n\n\n```julia\nP # \u30ea\u30ab\u30c3\u30c1\u65b9\u7a0b\u5f0f\u306e\u89e3\n```\n\n\n 2\u00d72 Matrix{Float64}:\n 3.34568 -1.07266\n -1.07266 1.30235\n\n\n\n```julia\nE # \u9589\u30eb\u30fc\u30d7\u306e\u56fa\u6709\u5024\n```\n\n\n 2-element Vector{Float64}:\n -11.862446758953242\n -2.732479989130699\n\n\n\n```julia\nF # \u6700\u9069\u30d5\u30a3\u30fc\u30c9\u30d0\u30c3\u30af\u884c\u5217\n```\n\n\n 2\u00d72 Matrix{Float64}:\n 2.43714 0.0304348\n 3.47339 1.76174\n\n\n\u6700\u9069\u30d5\u30a3\u30fc\u30c9\u30d0\u30c3\u30af\u884c\u5217\u3092\u5165\u529b\u306b\u4f7f\u3063\u305f\u3068\u304d\u306e\u30b7\u30b9\u30c6\u30e0\u306e\u6642\u9593\u767a\u5c55\u3092\u8abf\u3079\u308b\uff0e \n\n\n```julia\nA\u0304 = A .- (B * F) # \u65b0\u3057\u3044A\u884c\u5217\n\nx2 = Vector{typeof(x\u2080)}(undef, length(t))\nu2 = Vector{typeof(x\u2080)}(undef, length(t))\nfor i in 1:length(t)\n x2[i] = exp(A\u0304 .* t[i]) * x\u2080\n u2[i] = -F * x2[i]\nend\n\nx2 = permutedims(hcat(x2...))\nfigx = plot(t, x2[:, 1], label=\"x1\")\nplot!(figx, t, x2[:, 2], label=\"x2\")\nplot!(figx, xlabel=\"t\")\nsavefig(figx, \"lqr2.png\")\n\nu2 = 
permutedims(hcat(u2...))\nfigu = plot(t, u2[:, 1], label=\"u1\")\nplot!(figu, t, u2[:, 2], label=\"u2\")\nplot!(figu, xlabel=\"t\")\nsavefig(figu, \"lqr2u.png\")\n```\n\n \n \n \n\u5b89\u5b9a\u5316\u3055\u308c\u3066\u3044\u308b\u3053\u3068\u304c\u308f\u304b\u308b\uff0e \n", "meta": {"hexsha": "3810abe5f68090d482f9ab32a33d878bb8679dbf", "size": 5797, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "lqr/julia_src/using_solver.ipynb", "max_stars_repo_name": "YoshimitsuMatsutaIe/abc_2022", "max_stars_repo_head_hexsha": "9c6fb487c7ec22fdc57cc1eb0abec4c9786ad995", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "lqr/julia_src/using_solver.ipynb", "max_issues_repo_name": "YoshimitsuMatsutaIe/abc_2022", "max_issues_repo_head_hexsha": "9c6fb487c7ec22fdc57cc1eb0abec4c9786ad995", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "lqr/julia_src/using_solver.ipynb", "max_forks_repo_name": "YoshimitsuMatsutaIe/abc_2022", "max_forks_repo_head_hexsha": "9c6fb487c7ec22fdc57cc1eb0abec4c9786ad995", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 20.7777777778, "max_line_length": 239, "alphanum_fraction": 0.449197861, "converted": true, "num_tokens": 1192, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9273632976542184, "lm_q2_score": 0.8887588008585925, "lm_q1q2_score": 0.8242022923834331}} {"text": "```python\nfrom sympy import *\n\n\n# Implementation of QuaternionBase::toRotationMatrix(void).\n# The quaternion q is given as a list [qw, qx, qy, qz].\ndef QuaternionToRotationMatrix(q):\n tx = 2 * q[1]\n ty = 2 * q[2]\n tz = 2 * q[3]\n twx = tx * q[0]\n twy = ty * q[0]\n twz = tz * q[0]\n txx = tx * q[1]\n txy = ty * q[1]\n txz = tz * q[1]\n tyy = ty * q[2]\n tyz = tz * q[2]\n tzz = tz * q[3]\n return Matrix([[1 - (tyy + tzz), txy - twz, txz + twy],\n [txy + twz, 1 - (txx + tzz), tyz - twx],\n [txz - twy, tyz + twx, 1 - (txx + tyy)]])\n\n\n# Implementation of SO3Group expAndTheta().\n# Only implementing the first case (of very small rotation) since we take the Jacobian at zero.\ndef SO3exp(omega):\n theta = omega.norm()\n theta_sq = theta**2\n \n half_theta = theta / 2\n \n theta_po4 = theta_sq * theta_sq\n imag_factor = Rational(1, 2) - Rational(1, 48) * theta_sq + Rational(1, 3840) * theta_po4;\n real_factor = 1 - Rational(1, 2) * theta_sq + Rational(1, 384) * theta_po4;\n \n # return SO3Group(Eigen::Quaternion(\n # real_factor, imag_factor * omega.x(), imag_factor * omega.y(),\n # imag_factor * omega.z()));\n qw = real_factor\n qx = imag_factor * omega[0]\n qy = imag_factor * omega[1]\n qz = imag_factor * omega[2]\n \n return QuaternionToRotationMatrix([qw, qx, qy, qz])\n\n\n# Implementation of SE3Group exp().\n# Only implementing the first case (of small rotation) since we take the Jacobian at zero.\ndef SE3exp(tangent):\n omega = Matrix(tangent[3:6])\n V = SO3exp(omega)\n rotation = V\n translation = V * Matrix(tangent[0:3])\n return rotation.row_join(translation)\n\n\n# Main\ninit_printing(use_unicode=True)\n\n# Define the tangent vector with symbolic elements T_0 to T_5.\n# (For a matrix, use: Matrix(3, 1, lambda i,j:var('S_%d%d' % (i,j))) )\nT = Matrix(6, 1, 
lambda i,j:var('T_%d' % (i)))\n\n# Compute transformation matrix from tangent vector.\nT_matrix = SE3exp(T)\n\n# Define the vector current_T * src:\nS = Matrix(3, 1, lambda i,j:var('S_%d' % (i)))\n\n# Matrix-vector multiplication with homogeneous vector:\nresult = T_matrix * S.col_join(Matrix([1]))\n\n# Compute Jacobian:\n# (Note: The transpose is needed for stacking the matrix columns (instead of rows) into a vector.)\njac = result.transpose().reshape(result.rows * result.cols, 1).jacobian(T)\n\n# Take Jacobian at zero:\njac_subs = jac.subs([(T[0], 0), (T[1], 0), (T[2], 0), (T[3], 0), (T[4], 0), (T[5], 0)])\n\n# Simplify and output:\njac_subs_simple = simplify(jac_subs)\npprint(jac_subs_simple)\n```\n\n \u23a11 0 0 0 S\u2082 -S\u2081\u23a4\n \u23a2 \u23a5\n \u23a20 1 0 -S\u2082 0 S\u2080 \u23a5\n \u23a2 \u23a5\n \u23a30 0 1 S\u2081 -S\u2080 0 \u23a6\n\n\n\n```python\n# Treat the function of which we want to determine the derivative as a list of nested functions.\n# This makes it easier to compute the derivative of each part, simplify it, and concatenate the results\n# using the chain rule.\n\n### Define the function of which the Jacobian shall be taken ###\n\n# Matrix-vector multiplication with homogeneous vector:\ndef MatrixVectorMultiplyHomogeneous(matrix, vector):\n return matrix * vector.col_join(Matrix([1]))\n\n# Define the vector current_T * src:\nS = Matrix(3, 1, lambda i,j:var('S_%d' % (i)))\n\n# The list of nested functions. They will be evaluated from right to left\n# (this is to match the way they would be written in math: f(g(x)).)\nfunctions = [lambda matrix : MatrixVectorMultiplyHomogeneous(matrix, S), SE3exp]\n\n\n### Define the variables wrt. to take the Jacobian, and the position for evaluation ###\n\n# Chain rule:\n# d(f(g(x))) / dx = (df/dy)(g(x)) * dg/dx\n\n# Define the parameter with respect to take the Jacobian, y in the formula above:\nparameters = Matrix(6, 1, lambda i,j:var('T_%d' % (i)))\n\n# Set the position at which to take the Jacobian, g(x) in the formula above:\nparameter_values = zeros(6, 1)\n\n\n### Automatic Jacobian calculation, no need to modify anything beyond this point ###\n\n# Jacobian from previous step, dg/dx in the formula above:\nprevious_jacobian = 1\n\n# TODO: Test whether this works with non-matrix functions.\ndef ComputeValueAndJacobian(function, parameters, parameter_values):\n # Evaluate the function.\n values = function(parameter_values)\n # Compute the Jacobian.\n symbolic_values = function(parameters)\n symbolic_values_vector = symbolic_values.transpose().reshape(symbolic_values.rows * symbolic_values.cols, 1)\n parameters_vector = parameters.transpose().reshape(parameters.rows * parameters.cols, 1)\n jacobian = symbolic_values_vector.jacobian(parameters_vector)\n # Set in the evaluation point.\n for row in range(0, parameters.rows):\n for col in range(0, parameters.cols):\n jacobian = jacobian.subs(parameters[row, col], parameter_values[row, col])\n # Simplify the jacobian.\n jacobian = simplify(jacobian)\n return (values, jacobian)\n\n\n# Print info about initial state.\nprint('Taking the Jacobian of these functions (sorted from inner to outer):')\nfor i in range(len(functions) - 1, -1, -1):\n print(str(functions[i]))\nprint('with respect to:')\npprint(parameters)\nprint('at position:')\npprint(parameter_values)\nprint('')\n\n# Loop over all functions:\nfor i in range(len(functions) - 1, -1, -1):\n # Compute value and Jacobian of this function.\n (values, jacobian) = ComputeValueAndJacobian(functions[i], parameters, 
parameter_values)\n \n # Update parameter_values\n parameter_values = values\n # Update parameters (create a new symbolic vector of the same size as parameter_values)\n parameters = Matrix(values.rows, values.cols, lambda i,j:var('T_%d%d' % (i,j)))\n # Concatenate this Jacobian with the previous one according to the chain rule:\n previous_jacobian = jacobian * previous_jacobian\n \n # Print intermediate result\n print('Intermediate step ' + str(len(functions) - i) + ', for ' + str(functions[i]))\n print('Position after function evaluation (function value):')\n pprint(parameter_values)\n print('Jacobian of this function wrt. its input only:')\n pprint(jacobian)\n print('Cumulative Jacobian wrt. the innermost parameter:')\n pprint(previous_jacobian)\n print('')\n\n# Print final result\nprint('Final result:')\npprint(previous_jacobian)\n```\n\n Taking the Jacobian of these functions (sorted from inner to outer):\n \n at 0x7fdab7639950>\n with respect to:\n \u23a1T\u2080\u23a4\n \u23a2 \u23a5\n \u23a2T\u2081\u23a5\n \u23a2 \u23a5\n \u23a2T\u2082\u23a5\n \u23a2 \u23a5\n \u23a2T\u2083\u23a5\n \u23a2 \u23a5\n \u23a2T\u2084\u23a5\n \u23a2 \u23a5\n \u23a3T\u2085\u23a6\n at position:\n \u23a10\u23a4\n \u23a2 \u23a5\n \u23a20\u23a5\n \u23a2 \u23a5\n \u23a20\u23a5\n \u23a2 \u23a5\n \u23a20\u23a5\n \u23a2 \u23a5\n \u23a20\u23a5\n \u23a2 \u23a5\n \u23a30\u23a6\n \n Intermediate step 1, for \n Position after function evaluation (function value):\n \u23a11 0 0 0\u23a4\n \u23a2 \u23a5\n \u23a20 1 0 0\u23a5\n \u23a2 \u23a5\n \u23a30 0 1 0\u23a6\n Jacobian of this function wrt. its input only:\n \u23a10 0 0 0 0 0 \u23a4\n \u23a2 \u23a5\n \u23a20 0 0 0 0 1 \u23a5\n \u23a2 \u23a5\n \u23a20 0 0 0 -1 0 \u23a5\n \u23a2 \u23a5\n \u23a20 0 0 0 0 -1\u23a5\n \u23a2 \u23a5\n \u23a20 0 0 0 0 0 \u23a5\n \u23a2 \u23a5\n \u23a20 0 0 1 0 0 \u23a5\n \u23a2 \u23a5\n \u23a20 0 0 0 1 0 \u23a5\n \u23a2 \u23a5\n \u23a20 0 0 -1 0 0 \u23a5\n \u23a2 \u23a5\n \u23a20 0 0 0 0 0 \u23a5\n \u23a2 \u23a5\n \u23a21 0 0 0 0 0 \u23a5\n \u23a2 \u23a5\n \u23a20 1 0 0 0 0 \u23a5\n \u23a2 \u23a5\n \u23a30 0 1 0 0 0 \u23a6\n Cumulative Jacobian wrt. the innermost parameter:\n \u23a10 0 0 0 0 0 \u23a4\n \u23a2 \u23a5\n \u23a20 0 0 0 0 1 \u23a5\n \u23a2 \u23a5\n \u23a20 0 0 0 -1 0 \u23a5\n \u23a2 \u23a5\n \u23a20 0 0 0 0 -1\u23a5\n \u23a2 \u23a5\n \u23a20 0 0 0 0 0 \u23a5\n \u23a2 \u23a5\n \u23a20 0 0 1 0 0 \u23a5\n \u23a2 \u23a5\n \u23a20 0 0 0 1 0 \u23a5\n \u23a2 \u23a5\n \u23a20 0 0 -1 0 0 \u23a5\n \u23a2 \u23a5\n \u23a20 0 0 0 0 0 \u23a5\n \u23a2 \u23a5\n \u23a21 0 0 0 0 0 \u23a5\n \u23a2 \u23a5\n \u23a20 1 0 0 0 0 \u23a5\n \u23a2 \u23a5\n \u23a30 0 1 0 0 0 \u23a6\n \n Intermediate step 2, for at 0x7fdab7639950>\n Position after function evaluation (function value):\n \u23a1S\u2080\u23a4\n \u23a2 \u23a5\n \u23a2S\u2081\u23a5\n \u23a2 \u23a5\n \u23a3S\u2082\u23a6\n Jacobian of this function wrt. its input only:\n \u23a1S\u2080 0 0 S\u2081 0 0 S\u2082 0 0 1 0 0\u23a4\n \u23a2 \u23a5\n \u23a20 S\u2080 0 0 S\u2081 0 0 S\u2082 0 0 1 0\u23a5\n \u23a2 \u23a5\n \u23a30 0 S\u2080 0 0 S\u2081 0 0 S\u2082 0 0 1\u23a6\n Cumulative Jacobian wrt. 
the innermost parameter:\n \u23a11 0 0 0 S\u2082 -S\u2081\u23a4\n \u23a2 \u23a5\n \u23a20 1 0 -S\u2082 0 S\u2080 \u23a5\n \u23a2 \u23a5\n \u23a30 0 1 S\u2081 -S\u2080 0 \u23a6\n \n Final result:\n \u23a11 0 0 0 S\u2082 -S\u2081\u23a4\n \u23a2 \u23a5\n \u23a20 1 0 -S\u2082 0 S\u2080 \u23a5\n \u23a2 \u23a5\n \u23a30 0 1 S\u2081 -S\u2080 0 \u23a6\n\n", "meta": {"hexsha": "94e0e30e079111a68312c851b0cf8fd827055e42", "size": 12103, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "libvis/scripts/LMOptimizer SE3Optimization Test Jacobian derivation.ipynb", "max_stars_repo_name": "zimengjiang/badslam", "max_stars_repo_head_hexsha": "785a2a5a11ce57b09d47ea7ca6a42196a4f12409", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 541, "max_stars_repo_stars_event_min_datetime": "2019-06-16T22:12:49.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T05:53:56.000Z", "max_issues_repo_path": "libvis/scripts/LMOptimizer SE3Optimization Test Jacobian derivation.ipynb", "max_issues_repo_name": "zimengjiang/badslam", "max_issues_repo_head_hexsha": "785a2a5a11ce57b09d47ea7ca6a42196a4f12409", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 82, "max_issues_repo_issues_event_min_datetime": "2019-06-18T06:45:38.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-23T00:34:34.000Z", "max_forks_repo_path": "libvis/scripts/LMOptimizer SE3Optimization Test Jacobian derivation.ipynb", "max_forks_repo_name": "zimengjiang/badslam", "max_forks_repo_head_hexsha": "785a2a5a11ce57b09d47ea7ca6a42196a4f12409", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 104, "max_forks_repo_forks_event_min_datetime": "2019-06-17T06:42:20.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-16T20:51:22.000Z", "avg_line_length": 35.0811594203, "max_line_length": 119, "alphanum_fraction": 0.4702966207, "converted": true, "num_tokens": 3384, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9273632956467157, "lm_q2_score": 0.8887587890727754, "lm_q1q2_score": 0.8242022796695132}} {"text": "**Exercise set 4**\n==============\n\n\n>The goal of this exercise is to perform **least-squares regression** and to see how we can estimate errors in the parameters we find.\n\n**Exercise 4.1**\n\nIn this exercise we will use least-squares regression to investigate a physical phenomenon: the decay of beer froth with time. The file [erdinger.txt](Data/erdinger.txt) (located at 'Data/erdinger.txt') contains [measured heights](https://doi.org/10.1088/0143-0807/23/1/304) for beer froth as a function of time, along with the errors in the measured heights.\n\n**(a)** Use least-squares regression to create a linear model that predicts the beer froth height as a function of time. Plot your linear model together with the raw data.\n\n\n```python\n# Your code here\n```\n\n**Your answer to question 4.1(a):** *Double click here*\n\n**(b)** Obtain the [coefficient of determination](https://en.wikipedia.org/wiki/Coefficient_of_determination), $R^2$, for your model.\n\n\n```python\n# Your code here\n```\n\n**Your answer to question 4.1(b):** *Double click here*\n\n**(c)** It is reasonable to assume that the change in the volume of froth is proportional\nto the volume present at a given time. 
One can show that this leads\nto exponential decay,\n\n\\begin{equation}\nh(t) = h(0) \\exp \\left(-\\frac{t}{\\tau} \\right),\n\\label{eq:hard}\n\\tag{1}\\end{equation}\n\nwhere $h(t)$ is the height of the froth as a function of time $t$, and $\\tau$ is a parameter.\nIn the following, consider $h(0)$ as an unknown parameter to be determined. Show\nhow you can transform the equation above to a linear equation of the form,\n\n\\begin{equation}\ny = a + b x,\n\\tag{2}\\end{equation}\n\nand give the relation(s) between the variables $h, h(0), t, \\tau$ and \n$a, b, x, y$.\n\n**Your answer to question 4.1(c):** *Double click here*\n\n**(d)** Using the transformation found above, create a new linear model, and estimate $h(0)$ and $\\tau$.\nFurther, calculate the coefficient of determination for this case, and compare the two\nlinear models you have found so far.\n\n\n```python\n# Your code here\n```\n\n**Your answer to question 4.1(d):** *Double click here*\n\n**(e)** From the analytical model Eq. (1), $h(0)$ is a known constant, equal to the height of\nthe froth at time zero. Thus, we can reformulate our model and fit it to just obtain\none parameter, $b$. Essentially, we are defining $y^\\prime = y - a$ and using the model,\n\n\\begin{equation}\ny^\\prime = y - a = b x,\n\\tag{3}\\end{equation}\n\nthat is, we have a linear model *without* the constant term.\nShow that the least-squares solution for $b$ when fitting $y^\\prime = bx$ is given by,\n\n\\begin{equation}\nb = \\frac{\n\\sum_{i=1}^n y_i^\\prime x_i\n}{\\sum_{i=1}^n x_i^2\n},\n\\label{eq:bexpr}\n\\tag{4}\\end{equation}\n\nwhere $n$ is the number of measurements and $x_i$ and $y_i^\\prime$ are the\nmeasured values.\n\n**Your answer to question 4.1(e):** *Double click here*\n\n**(f)** Do the fitting a final time, but use Eq. (4)\nto obtain the parameter $b$.\nCalculate the coefficient of determination and compare the three linear models you have found.\n\n\n```python\n# Your code here\n```\n\n**Your answer to question 4.1(f):** *Double click here*\n\n**Exercise 4.2**\n\nIn this exercise, we will consider a linear model where we have one variable:\n\n\\begin{equation}\ny = a + bx,\n\\end{equation}\n\nand we have determined $a$ and $b$ using the least-squares equations. We further have\n$n$ data points $(x_1, y_1), (x_2, y_2), \\ldots, (x_n, y_n)$ where the $x_i$'s do not have\nany uncertainty, while the uncertainty in the $y_i$'s are all equal to $\\sigma_y$.\n\n\n> Our goal here is to find expressions for estimating\n> the errors in the parameters $a$ and $b$,\n> given the error in our measurements of $y$.\n\n**Background information: Propagation of errors**\n\nTo be able to estimate the errors in $a$ and $b$, we will use [propagation of errors](https://en.wikipedia.org/wiki/Propagation_of_uncertainty).\nFor simplicity, consider a function, $f$, of two variables $u$ and $v$: $f = f(u, v)$.\nBy doing a Taylor expansion about the average values, $\\bar{u}$ and $\\bar{v}$, we can\nshow that the uncertainty (or \"error\") in the function $f$, $\\sigma_f$, due to the uncertainties in $u$\nand $v$ ($\\sigma_u$ and $\\sigma_v$, respectively) is given by:\n\n\\begin{equation}\n\\sigma_f^2 = \\left(\\frac{\\partial f}{\\partial u} \\right)^2 \\sigma_u^2 +\n\\left(\\frac{\\partial f}{\\partial v} \\right)^2 \\sigma_v^2 +\n2 \\frac{\\partial f}{\\partial u} \\frac{\\partial f}{\\partial v} \\sigma_{uv} + \\text{higher-order terms},\n\\end{equation}\n\nwhere $\\sigma_{uv}$ is the *covariance* between $u$ and $v$. 
Typically, the errors are \"small\"\nand this motivates us to neglect the higher-order terms. Further, we will assume that the\nvariables $u$ and $v$ are *not* correlated: $\\sigma_{uv} = 0$. We then arrive at the\n(perhaps well-known) approximate propagation-of-errors-expression for the uncertainty in $f$:\n\n\\begin{equation}\n\\sigma_f^2 \\approx \\left(\\frac{\\partial f}{\\partial u} \\right)^2 \\sigma_u^2 +\n\\left(\\frac{\\partial f}{\\partial v} \\right)^2 \\sigma_v^2 .\n\\end{equation}\n\nThis can be generalized to $k$ variables, say $f=f(z_1, z_2, \\ldots, z_k)$. The approximate\nexpression for the uncertainty in $f$, $\\sigma_f$, due to the uncertainties\nin the $z_i$'s, $\\sigma_{z_{i}}$, is then:\n\n\\begin{equation}\n\\sigma_f^2 \\approx \\sum_{i=1}^{k} \\left(\\frac{\\partial f}{\\partial z_{i}} \\right)^2 \\sigma_{z_{i}}^2 .\n\\label{eq:errorp}\n\\tag{5}\\end{equation}\n\nWe will use this expression to estimate the uncertainties in $a$ and $b$.\n\n**Deriving expressions for the uncertainties in $a$ and $b$**\n\n**(a)** Show that the error in the $b$ parameter, $\\sigma_b$,\nis given by the following expression:\n\n\\begin{equation}\n\\sigma_b^2 = \\frac{\\sigma_y^2}{\\sum_{i=1}^n \\left(x_i - \\bar{x}\\right)^2},\n\\end{equation}\n\nwhere $\\bar{x} = \\frac{1}{n} \\sum_{i=1}^{n} x_i$ is the average of $x$.\n\n***Hint:*** Use the least-squares expression for $b$:\n\n\\begin{equation}\nb = \\frac{\n\\sum_{i=1}^n (x_i - \\bar{x}) (y_i - \\bar{y})\n}{\n\\sum_{i=1}^n (x_i - \\bar{x})^2\n},\n\\end{equation}\n\ntogether with the propagation-of-errors expression (Eq. (5)), and consider $b$ as a\nfunction of the $y_i$'s: $b = f(y_1, y_2, \\ldots, y_n)$. You might find it helpful to determine\n$\\frac{\\partial b}{\\partial y_j}$\nas an intermediate step in your derivation. \n\n**Your answer to question 4.2(a):** *Double click here*\n\n**(b)** Show that the error in the $a$ parameter, $\\sigma_a$, is given by the following expression:\n\n\\begin{equation}\n\\sigma_a^2 = \\frac{\\sigma_y^2}{n} \\times\n\\frac{\n\\sum_{i=1}^{n} x_i^2\n}{\n\\sum_{i=1}^{n} (x_i - \\bar{x})^2 \n} .\n\\end{equation}\n\n***Hint:*** Use the least-squares expression for $a$:\n\n\\begin{equation}\na = \\bar{y} - b \\bar{x},\n\\end{equation}\n\ntogether with the propagation-of-errors expression (Eq. (5)), and consider $a$ as a\nfunction of the $y_i$'s *and* $b$: $a = f(y_1, y_2, \\ldots, y_n,b)$. 
You might find it\nhelpful to determine\n$\\frac{\\partial a}{\\partial y_j}$ and $\\frac{\\partial a}{\\partial b}$ as intermediate steps\nin your derivation.\n\n**Your answer to question 4.2(b):** *Double click here*\n\n\n\n\n", "meta": {"hexsha": "1ced6fbc11e5278aed2932fa42d1e8ccf255be15", "size": 10769, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "exercises/04_Exercise_Set_4.ipynb", "max_stars_repo_name": "sroet/chemometrics", "max_stars_repo_head_hexsha": "c797505d07e366319ba1544e8a602be94b88fbb6", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2020-02-04T12:09:19.000Z", "max_stars_repo_stars_event_max_datetime": "2021-08-06T12:28:24.000Z", "max_issues_repo_path": "exercises/04_Exercise_Set_4.ipynb", "max_issues_repo_name": "sroet/chemometrics", "max_issues_repo_head_hexsha": "c797505d07e366319ba1544e8a602be94b88fbb6", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 72, "max_issues_repo_issues_event_min_datetime": "2020-01-06T10:24:33.000Z", "max_issues_repo_issues_event_max_datetime": "2022-01-21T10:37:46.000Z", "max_forks_repo_path": "exercises/04_Exercise_Set_4.ipynb", "max_forks_repo_name": "sroet/chemometrics", "max_forks_repo_head_hexsha": "c797505d07e366319ba1544e8a602be94b88fbb6", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2020-01-09T12:04:26.000Z", "max_forks_repo_forks_event_max_datetime": "2021-01-19T10:06:14.000Z", "avg_line_length": 32.5347432024, "max_line_length": 368, "alphanum_fraction": 0.5534404309, "converted": true, "num_tokens": 2162, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9273632876167045, "lm_q2_score": 0.8887587883361618, "lm_q1q2_score": 0.8242022718496619}} {"text": "```python\n# N-dim arrays\n# four classes are available: dense/sparse + mutable/immutable\nfrom sympy import *\n#from sympy import Array\ninit_printing(use_unicode=True)\n```\n\n\n```python\nA = Array([[1,2],[3,4]])\n```\n\n\n```python\nA\n```\n\n\n```python\nB = Array(range(12), (3,4))\n```\n\n\n```python\nB\n```\n\n\n```python\nC = Array(range(12), (2,2,3))\n```\n\n\n```python\nC\n```\n\n\n```python\n# rank of the Array\nA.rank()\n```\n\n\n```python\nB.rank()\n```\n\n\n```python\nC.rank()\n```\n\n\n```python\nA.tomatrix()\n```\n\n\n```python\nA.tolist()\n```\n\n\n```python\nB.tomatrix()\n```\n\n\n```python\nC.tolist()\n```\n\n\n```python\n# no default functionality for converting higher-order tensors into matrices\nC.tomatrix()\n```\n\n\n```python\n# tensor products: higher-order tensors from lower-order tensors\n\n# unit vectors\n##############\n\ni = Array([1,0,0])\nj = Array([0,1,0])\nk = Array([0,0,1])\n```\n\n\n```python\nii = tensorproduct(i,i)\n```\n\n\n```python\nii\n```\n\n\n```python\nij = tensorproduct(i,j)\n```\n\n\n```python\nij\n\n```\n\n\n```python\n# similarly\n# ii, ij, ik\n# ji, jj, jk\n# ki, kj, kk\n\n```\n\n\n```python\ni.rank()\n```\n\n\n```python\nii.rank()\n```\n\n\n```python\nI2 = Array([[1,0,0], [0,1,0], [0,0,1]])\n```\n\n\n```python\nI2\n```\n\n\n```python\n# fourth-order tensor\n# I4_1_ijkl = delta_ij delta_kl\nI4_1 = tensorproduct(I2, I2)\n```\n\n\n```python\nI4_1\n```\n\n\n```python\nMatrix(I4_1)\n```\n\n\n```python\nMatrix(I4_1).reshape(9,9)\n```\n\n\n```python\n# I4_2_ijkl = delta_ik delta_jl\nI4_2 = eye(9)\n```\n\n\n```python\nI4_2\n```\n\n\n```python\n# tensor contraction\nA = Array([[1,2,3],[4,5,6],[7,8,9]])\n```\n\n\n```python\nA\n```\n\n\n```python\n# the matrix 
trace is equivalent to contaction of second-order tensor along axes 1 and 2\n# since the starting index in Python is zero.\ntrA = tensorcontraction(A, (0,1))\n```\n\n\n```python\ntrA\n```\n\n\n```python\n# matrix product is the contaction of fourth order tensor along axes 2 and 3\n# of the fourth-order tensor formed as the tensor product of the matrices as rank-2 tensor\nD = Array([[1,2],[3,4]])\n```\n\n\n```python\nDtD = tensorproduct(D,D)\n```\n\n\n```python\nDtD\n```\n\n\n```python\nDtD.rank()\n```\n\n\n```python\ntensorcontraction(DtD,(1,2))\n```\n\n\n```python\nD.tomatrix()*D.tomatrix()\n```\n\n\n```python\n#\n```\n\n\n```python\n#\n```\n\n\n```python\n# indexed objects\n# A[i,j]\n# A is the IndexedBase\n# i and j are indices\n\nA = IndexedBase('A')\ni,j = symbols('i j', cls=Idx)\n```\n\n\n```python\nA[i,j]\n```\n\n\n```python\nA[i,j].shape\n```\n\n\n```python\ni = Idx('i', 3)\nj = Idx('j', 3)\n```\n\n\n```python\nA[i,j].shape\n```\n\n\n```python\nA[i,j].ranges\n```\n\n\n```python\ni.lower\n```\n\n\n```python\ni.upper\n```\n\n\n```python\n# index with unbounded upper limit\nk = Idx('k', oo)\n```\n\n\n```python\nk.lower\n```\n\n\n```python\nk.upper\n```\n\n\n```python\n#\n```\n\n\n```python\n#\n```\n\n\n```python\n# Matrix expressions\nfrom sympy import MatrixSymbol, Matrix\n```\n\n\n```python\nF = MatrixSymbol('F',3,3)\n```\n\n\n```python\nF.shape\n```\n\n\n```python\nF\n```\n\n\n```python\nMatrix(F)\n```\n\n\n```python\nA = MatrixSymbol('A',3,3)\nB = MatrixSymbol('B',3,3)\nC = MatrixSymbol('C',3,3)\n```\n\n\n```python\nMatrix(C)\n```\n\n\n```python\n# MatrixAddition\nMatrix(MatAdd(A,B,C))\n```\n\n\n```python\nMatMul(A,B,C)\n```\n\n\n```python\nMatrix(MatMul(A,B,C))\n```\n\n\n```python\nMatrix(hadamard_product(A,B))\n```\n\n\n```python\n# matrix inverse\nAinv = Inverse(A)\n```\n\n\n```python\nAinv\n```\n\n\n```python\nIA = Trace(A)\n```\n\n\n```python\nIA\n```\n\n\n```python\n# tensor calculs\n\ni = Idx('i', 3)\nj = Idx('j', 3)\nk = Idx('k', 3)\nl = Idx('l', 3)\n\n```\n\n\n```python\nA=MatrixSymbol('A',3,3)\n```\n\n\n```python\nA\n```\n\n\n```python\nprint(diff(F[i,j],F[i,j]))\n```\n\n 1\n\n\n\n```python\nprint(diff(F[k,l],F[i,j]))\n```\n\n KroneckerDelta(i, k)*KroneckerDelta(j, l)\n\n\n\n```python\nfrom sympy.concrete.delta import _simplify_delta\n\n_simplify_delta(diff(F[k,l],F[i,j]))\n```\n\n\n```python\nC = F.T*F\n```\n\n\n```python\nprint(diff(C[i,j],F[k,l]))\n_simplify_delta(diff(C[i,j],F[k,l]))\n```\n\n\n```python\nsimplify(diff(C[i,j],F[k,l]))\n```\n\n\n```python\nDeterminant(A)\n```\n\n\n```python\nLeviCivita(i,j,k)\n```\n\n\n```python\nFinv=Inverse(F)\n```\n\n\n```python\ndiff(Determinant(F),F[k,l])\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "8f77cb5c1602cc75c30813f22755c9641b320b84", "size": 127160, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "6-calculus-tensors.ipynb", "max_stars_repo_name": "chennachaos/SA2CTechChatSymPy", "max_stars_repo_head_hexsha": "9f1dbb48655ff5f8bdd6b4ced48b58aed0ba5bf4", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "6-calculus-tensors.ipynb", "max_issues_repo_name": "chennachaos/SA2CTechChatSymPy", "max_issues_repo_head_hexsha": "9f1dbb48655ff5f8bdd6b4ced48b58aed0ba5bf4", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": 
"6-calculus-tensors.ipynb", "max_forks_repo_name": "chennachaos/SA2CTechChatSymPy", "max_forks_repo_head_hexsha": "9f1dbb48655ff5f8bdd6b4ced48b58aed0ba5bf4", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 72.9128440367, "max_line_length": 21812, "alphanum_fraction": 0.778491664, "converted": true, "num_tokens": 1352, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.903294214513915, "lm_q2_score": 0.912436153333645, "lm_q1q2_score": 0.824198298419613}} {"text": "# AST 341 homework 2\n\n\n```python\nfrom sympy import init_session\ninit_session()\n```\n\n IPython console for SymPy 1.6.2 (Python 3.8.5-64-bit) (ground types: python)\n \n These commands were executed:\n >>> from __future__ import division\n >>> from sympy import *\n >>> x, y, z, t = symbols('x y z t')\n >>> k, m, n = symbols('k m n', integer=True)\n >>> f, g, h = symbols('f g h', cls=Function)\n >>> init_printing()\n \n Documentation can be found at https://docs.sympy.org/1.6.2/\n \n\n\n## 1.\n\na. We want to compute the pressure for a gas that obeys the Maxwell-Boltzmann distribution:\n\n\\begin{equation}\n4\\pi p^2 n(p) dp = \\frac{n_I}{(2\\pi m_I kT)^{3/2}} e^{-p^2/(2m_I kT)} 4 \\pi p^2 dp\n\\end{equation}\n\nOur pressure integral is:\n\\begin{equation}\nP = \\frac{1}{3} \\int n(p) p v d^3 p = \\frac{1}{3} \\int_0^{\\infty} 4\\pi p^2 n(p) p v dp\n\\end{equation}\nand for a non-relativistic gas, we can take $v = p/m_I$\n\nDefine the symbols we will use—make sure we tell SymPy which are positive and real, since we have a square root in our functions\n\n\n```python\nnI, m, k, T = symbols(\"n_I m_I k T\", positive=True, real=True)\np = symbols(\"p\")\n```\n\nDefine the distribution function, $n(p)$\n\n\n```python\nn = nI/(2*pi*m*k*T)**Rational(3,2) * exp(-p**2/(2*m*k*T))\nn\n```\n\nNow do the integral. Note we indicate $\\infty$ as `oo`\n\n\n```python\nP = Rational(1,3) * integrate(4*pi*p**2 * n * p * (p / m), (p, 0, oo))\nP\n```\n\nNotice that we get the ideal gas law!\n\\begin{equation}\nP = n_I k T\n\\end{equation}\n\nb. 
Now we want to compute the energy\n\n\\begin{equation}\ne = \\frac{1}{\\rho} \\int_0^\\infty 4\\pi p^2 \\mathcal{E}(p) n(p) dp\n\\end{equation}\nwhere the kinetic energy for a single particle is just\n\\begin{equation}\n\\mathcal{E}(p) = \\frac{p^2}{2 m_I}\n\\end{equation}\nfor a non-relativistic gas\n\n\n```python\nrho = symbols(\"rho\", positive=True)\n```\n\n\n```python\ne = rho**-1 * integrate(4*pi*p**2 * p**2 / (2*m) * n, (p, 0, oo))\ne\n```\n\nSo\n\\begin{equation}\n\\rho e = \\frac{3}{2} n_I k T\n\\end{equation}\n\nThis shows us that\n\\begin{equation}\n\\rho e = \\frac{3}{2} P\n\\end{equation}\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "83b6e56f4cf05807c929e8af44ed4b5ad59415c7", "size": 12926, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "ideal_gas/ideal_gas.ipynb", "max_stars_repo_name": "zingale/ast341_examples", "max_stars_repo_head_hexsha": "0a15b9bf0b268b00021c59504eb7a2006b7e5ada", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2020-09-09T15:48:41.000Z", "max_stars_repo_stars_event_max_datetime": "2021-08-09T16:08:51.000Z", "max_issues_repo_path": "ideal_gas/ideal_gas.ipynb", "max_issues_repo_name": "zingale/ast341_examples", "max_issues_repo_head_hexsha": "0a15b9bf0b268b00021c59504eb7a2006b7e5ada", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ideal_gas/ideal_gas.ipynb", "max_forks_repo_name": "zingale/ast341_examples", "max_forks_repo_head_hexsha": "0a15b9bf0b268b00021c59504eb7a2006b7e5ada", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 51.0909090909, "max_line_length": 4396, "alphanum_fraction": 0.7416060653, "converted": true, "num_tokens": 764, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.952574122783325, "lm_q2_score": 0.8652240825770432, "lm_q1q2_score": 0.8241900714718341}} {"text": "# Lesson 3 (Circular) Convolution\n\n## 1D Convolution\n\nA circulant matrix $C$ is a square matrix that's completely determined by the first column $c$. Other columns are just shifts of the first columns.\n\nFor example, the circulant matrix generated by $c=\\begin{bmatrix}1\\\\2\\\\0\\end{bmatrix}$ is $\\qquad C=circ(c)=\\begin{bmatrix}1&0&2\\\\2&1&0\\\\0&2&1\\end{bmatrix}$.\n\n\n```python\nimport numpy as np\nfrom scipy.linalg import circulant\n```\n\n\n```python\ncirculant([1,2,0])\n```\n\n\n\n\n array([[1, 0, 2],\n [2, 1, 0],\n [0, 2, 1]])\n\n\n\nLet $c,x\\in\\mathcal{C}^n$. By definition, the convolution between $c$ and $x$ is \n$$c*x(i)=\\sum_{j=0}^{n-1}c(i-j)x(j)=\\begin{bmatrix}c_i&c_{i-1}&\\cdots&c_{i-(n-1)}\\end{bmatrix}\\begin{bmatrix}x_0\\\\x_1\\\\\\vdots\\\\x_{n-1}\\end{bmatrix},$$\nwhich means that\n$$c*x=\\begin{bmatrix}c_0&c_{0-1}&\\cdots&c_{0-(n-1)}\\\\\nc_1&c_{1-1}&\\cdots&c_{1-(n-1)}\\\\\n\\vdots&\\vdots&\\ddots&\\vdots\\\\\nc_{n-1}&c_{n-1-1}&\\cdots&c_{n-1-(n-1)}\\end{bmatrix}\\begin{bmatrix}x_0\\\\x_1\\\\\\vdots\\\\x_{n-1}\\end{bmatrix}=circ(c)x.$$\n\nLet $F_n$ be the $n\\times n$ unitary discrete Fourier transform, as defined in Lesson 2. One fast way to compute convolution is to use FFT. 
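\n\n(A quick numerical aside, given here only as a minimal sketch: it assumes NumPy's unnormalized FFT convention, in which the matrix F built below satisfies F @ x == np.fft.fft(x). Under that convention the DFT matrix diagonalizes any circulant matrix, which is exactly what makes the FFT route to convolution work.)\n\n\n```python\nimport numpy as np\nfrom scipy.linalg import circulant\n\n# sketch: the DFT matrix diagonalizes a circulant matrix, i.e. F C = diag(F c) F\nc = [1, 2, 0]\nF = np.fft.fft(np.eye(len(c)))    # DFT matrix in the unnormalized convention\nC = circulant(c)                  # circulant matrix generated by the column c\nprint(np.allclose(F @ C, np.diag(np.fft.fft(c)) @ F))   # expected: True\n```\n\n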
It can be proven that\n\\begin{equation}F_n(c)\\odot F_n(x)=F_n(c*x),\\end{equation}\nwhere $\\odot$ is the coordinate-wise vector product, that is $[x\\odot y](i)=x(i)y(i)$.\n\nLet $diag(x)$ be the diagonal matrix whose diagonal is $x$, then the above equation can also be written as\n$$diag(F_nc)F_nx=F_ncirc(c)x\\Longleftrightarrow circ(c)=F_n^*diag(F_nc)F_n\\Longleftrightarrow F_n circ(c)F_n^*=diag(F_nc).$$\n\nThe code below verifies that both ways of computing convolution give the same output.\n\n\n```python\nx = np.random.randn(3);print(x)\n```\n\n [-0.57732409 0.33621261 2.05435539]\n\n\n\n```python\ncirculant([1,2,0])@x\n```\n\n\n\n\n array([ 3.5313867 , -0.81843557, 2.72678061])\n\n\n\n\n```python\nnp.fft.ifft(np.fft.fft([1,2,0])*np.fft.fft(x))\n```\n\n\n\n\n array([ 3.5313867 +0.j, -0.81843557+0.j, 2.72678061+0.j])\n\n\n\n## to be continued\n\n\n```python\n\n```\n", "meta": {"hexsha": "ccb992b75ac977194443e6f4bf8e7aca15ec7b83", "size": 4460, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "nb/L3Convolution.ipynb", "max_stars_repo_name": "xuemeic/Image-Processing", "max_stars_repo_head_hexsha": "3038ce2fb89af2bd28d9ff034e1884a3a2830d6c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "nb/L3Convolution.ipynb", "max_issues_repo_name": "xuemeic/Image-Processing", "max_issues_repo_head_hexsha": "3038ce2fb89af2bd28d9ff034e1884a3a2830d6c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "nb/L3Convolution.ipynb", "max_forks_repo_name": "xuemeic/Image-Processing", "max_forks_repo_head_hexsha": "3038ce2fb89af2bd28d9ff034e1884a3a2830d6c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 23.5978835979, "max_line_length": 176, "alphanum_fraction": 0.516367713, "converted": true, "num_tokens": 775, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9566341962906709, "lm_q2_score": 0.8615382165412808, "lm_q1q2_score": 0.8241769193546662}} {"text": "```python\nimport sympy as sp\nimport pandas as pd\n%matplotlib inline\n```\n\n\n```python\nx, y = sp.symbols('x y')\nf = sp.tan(y - 0.8 * x) + 0.8 * x * y - 0.3\ng = x ** 2 + y ** 2 - 1.7\n```\n\n\n```python\nplot = sp.plot_implicit(sp.Eq(f, 0), show=False, line_color='blue')\nplot.extend(sp.plot_implicit(sp.Eq(g, 0), show=False, line_color='green'))\nplot.show()\n```\n\n\n```python\ndef newton_system(x, y, f, g, x0, y0):\n dfdx = f.diff(x)\n dfdy = f.diff(y)\n dgdx = g.diff(x)\n dgdy = g.diff(y)\n \n d = dfdx * dgdy - dgdx * dfdy\n dx = f * dgdy - g * dfdy\n dy = dfdx * g - dgdx * f\n\n xs, ys, norms, fs, gs = [], [], [], [], []\n xk, yk = x0, y0\n while True: \n new_x = xk - (dx / d).evalf(subs={x: xk, y: yk})\n new_y = yk - (dy / d).evalf(subs={x: xk, y: yk})\n xs.append(new_x)\n ys.append(new_y)\n\n norm = sp.sqrt((new_x - xk) ** 2 + (new_y - yk) ** 2)\n norms.append(norm)\n fs.append(f.evalf(subs={x: new_x, y: new_y}))\n gs.append(g.evalf(subs={x: new_x, y: new_y}))\n\n xk, yk = new_x, new_y\n\n if norm < 1e-5 and len(xs) > 1:\n break\n return pd.DataFrame(list(zip(xs, ys, norms, fs, gs)), columns=['x', 'y', 'norm', 'f', 'g'])\n```\n\n\n```python\nnewton_system(x, y, f, g, 1, 1)\n```\n\n\n\n\n
            x                  y                  norm                     f                     g
    0    1.22761708889715   0.622382911102848    0.440912922225209    -0.0648343503263468     0.194404204985173
    1    1.14149633521898   0.636074171385597   0.0872022638595296   -0.00358082004103558   0.00760423482222702
    2    1.13755666993897   0.637166805074043  0.00408837509233626   -8.94822418305518e-6   1.67148108958352e-5
    3    1.13754769473498   0.637169712315441  9.43431710133745e-6  -5.15817799583199e-11  8.90062924372405e-11



```python
newton_system(x, y, f, g, -1, -1)
```




            x                   y                  norm                      f                      g
    0   -0.822059767940987  -1.02794023205901     0.180120467334300    -0.0121769613176794     0.0324433827527268
    1   -0.824116762561558  -1.01051444859597    0.0175467705339115   -0.000187332632752991   0.000307889156169836
    2   -0.824171430058251  -1.01031752219007  0.000204373541676120   -3.26758676647519e-8    4.17685447475136e-8
    3   -0.824171440931074  -1.01031749264951   3.14779734395557e-8   -7.59514473637850e-16   8.46188933633038e-16
    \n\n\n", "meta": {"hexsha": "8ccd1e672f9a80863176a5ffa125e9eac0b37a78", "size": 31049, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "year-2/computational-workshop/task-7.ipynb", "max_stars_repo_name": "Sergobot/university", "max_stars_repo_head_hexsha": "7cd8c07fc660f1e19127c6488991ddd59d99643c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2019-09-05T08:43:47.000Z", "max_stars_repo_stars_event_max_datetime": "2019-09-05T08:43:52.000Z", "max_issues_repo_path": "year-2/computational-workshop/task-7.ipynb", "max_issues_repo_name": "Sergobot/university", "max_issues_repo_head_hexsha": "7cd8c07fc660f1e19127c6488991ddd59d99643c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "year-2/computational-workshop/task-7.ipynb", "max_forks_repo_name": "Sergobot/university", "max_forks_repo_head_hexsha": "7cd8c07fc660f1e19127c6488991ddd59d99643c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-10-04T07:40:20.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-18T07:12:08.000Z", "avg_line_length": 105.6088435374, "max_line_length": 22204, "alphanum_fraction": 0.8141002931, "converted": true, "num_tokens": 1537, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9566342012360933, "lm_q2_score": 0.8615382076534742, "lm_q1q2_score": 0.8241769151129568}} {"text": "# Exercise 2: Markov Chains and Markov Decision Processes (MDP) \n\nThis exercise deals with the formal handling of Markov chains and Markov decision processes. \n\n## 1) Markov Chain: State Transition\nThe graph shows the last beer problem. \nThe nodes show the states.\nThe arrows define the possible transitions to other states and the numbers besides the arrows define the propability of the corresponding transition.\nIf you are for example in the state \"Inital Beer\", with 30% propability you go to have a pizza, with 60% propability you meet friends and with 10% propability you end up sleeping.\n\nDefine the state transition probability matrix $\\mathcal{P}_{xx'}$ of the graph shown in the figure below!\n\n\n\nWith $p_k = \\begin{bmatrix}\n\\text{Pr}_k \\lbrace \\text{Inital Beer} \\rbrace \\\\\n\\text{Pr}_k \\lbrace \\text{Meet Friends} \\rbrace \\\\\n\\text{Pr}_k \\lbrace \\text{Pizza} \\rbrace \\\\\n\\text{Pr}_k \\lbrace \\text{Another Beer} \\rbrace \\\\\n\\text{Pr}_k \\lbrace \\text{\"Last Beer\"}\\rbrace \\\\\n\\text{Pr}_k \\lbrace \\text{Sleep} \\rbrace \\\\\n\\end{bmatrix}^\\text{T}$\n\nYOUR ANSWER HERE\n\n## 2 ) Markov Chain: Stationary State\nUsing $p = p \\mathcal{P}$, calculate the stationary state probability.\n\nPlease note that the sum of the state propabilities equals one for any specific point in time.\n\nYOUR ANSWER HERE\n\n## 3) Markov Reward Process: Evaluating States\n\nIn the following rewards for every state are defined.\n\nGiven the reward distribution $r_\\mathcal{X}$, calculate the state-values $v_\\mathcal{X}$. 
\n\nThe states are defined by:\n$\\mathcal{X} = \\left\\lbrace \\begin{matrix}\n\\text{Inital Beer}\\\\\n\\text{Meet Friends}\\\\\n\\text{Pizza}\\\\\n\\text{Another Beer}\\\\\n\\text{\"Last Beer\"}\\\\\n\\text{Sleep}\\\\\n\\end{matrix}\n\\right\\rbrace$\n\nThe rewards are defined by:\n$r_\\mathcal{X} = \\begin{bmatrix}\n+1\\\\\n+1\\\\\n+2\\\\\n+1\\\\\n-3\\\\\n0\\\\\n\\end{bmatrix}$\n\nThe state-value is defined by the state-value Bellman equation: $v_\\mathcal{X} = r_\\mathcal{X} + \\gamma \\mathcal{P}_{xx'} v_\\mathcal{X}$. Assume that $\\gamma = 0.9$ and write a Python program to calculate $v_\\mathcal{X}$. Which state is most promising? Why?\n\nWhich state is most promising when $\\gamma = 0.1$?\n\nYOUR ANSWER HERE\n\n\n```python\nimport numpy as np\n\n# define given parameters\ngamma = 0.1 # discount factor\n\n# YOUR CODE HERE\nraise NotImplementedError()\n\nprint(v_X)\n\n```\n\n## 4) Markov Decision Process: State Transition\n\nThe graph shows an MDP.\nThe nodes are the states. \nIn every state you can choose between two actions (Lazy or Productive). \nTaken actions impact the state transition probability to the next state.\nIf you for example have a \"Hangover\" and decide to be \"Productive\", there is a 30% chance for you to \"Visit Lecture\" and a 70% chance to stay in the \"Hangover\" state.\n\nDefine the lazy state transition probabilitiy $\\mathcal{P}_{xx'}^{u=\\text{Lazy}}$ and the productive state transition probability $\\mathcal{P}_{xx'}^{u=\\text{Productive}}$ of the graph shown in the figure below.\n\n\n\nWith $p_k = \\begin{bmatrix}\n\\text{Pr}_k \\lbrace \\text{Hangover} \\rbrace \\\\\n\\text{Pr}_k \\lbrace \\text{Sleep} \\rbrace \\\\\n\\text{Pr}_k \\lbrace \\text{More Sleep} \\rbrace \\\\\n\\text{Pr}_k \\lbrace \\text{Visit Lecture} \\rbrace \\\\\n\\text{Pr}_k \\lbrace \\text{Study}\\rbrace \\\\\n\\text{Pr}_k \\lbrace \\text{Pass the Exam} \\rbrace \\\\\n\\end{bmatrix}^\\text{T}$\n\n## 4) Solution\n\n\n\\begin{align}\n\\mathcal{P}_{xx'}^{u=\\text{Lazy}}&=\\begin{bmatrix}\n0 & 1 & 0 & 0 & 0 & 0\\\\\n0 & 0 & 1 & 0 & 0 & 0\\\\\n0 & 0 & 1 & 0 & 0 & 0\\\\\n0 & 0 & 0 & 0 & 0.8 & 0.2\\\\ \n0 & 0 & 1 & 0 & 0 & 0\\\\\n0 & 0 & 0 & 0 & 0 & 1\\\\\n\\end{bmatrix}\\\\\n\\mathcal{P}_{xx'}^{u=\\text{Productive}}&=\\begin{bmatrix}\n0.7 & 0 & 0 & 0.3 & 0 & 0\\\\\n0 & 0 & 0.4 & 0.6 & 0 & 0\\\\\n0 & 0 & 0.5 & 0 & 0.5 & 0\\\\\n0 & 0 & 0 & 0 & 1 & 0\\\\ \n0 & 0 & 0 & 0 & 0.1 & 0.9\\\\\n0 & 0 & 0 & 0 & 0 & 1\\\\\n\\end{bmatrix}\n\\end{align}\n\n## 5) Markov Decision Process: Trivial Policy Evaluation\n\nThe rewards for this problem are defined by:\n$r_\\mathcal{X} = r_\\mathcal{X}^{u=\\text{Productive}} = r_\\mathcal{X}^{u=\\text{Lazy}} = \\begin{bmatrix}\n-1\\\\\n-1\\\\\n-1\\\\\n-1\\\\\n-1\\\\\n0\\\\\n\\end{bmatrix}$.\n\nHow can we interprete these rewards?\nEvaluate both the lazy policy and the productive policy using $\\gamma = 0.9$.\n\nBonus question: Can we evaluate the state-value of $\\lbrace x=\\text{More Sleep}, u=\\text{Lazy}\\rbrace$ for an infinite time horizon without the use of the Bellman equation?\n\nYOUR ANSWER HERE\n\n\n```python\nimport numpy as np\n\n# YOUR CODE HERE\nraise NotImplementedError()\n```\n\n\n```python\n# Bonus question: Can we evaluate the state-value of {\ud835\udc65=More Sleep,\ud835\udc62=Lazy} for an infinite time horizon without the use of the Bellman equation?\n\n\n# YOUR CODE HERE\nraise NotImplementedError()\n```\n\n## 6) Action-Value Function Evalution\n\nNow, the policy is defined by:\n\\begin{align}\n\\pi(u_k=\\text{Productive} | 
x_k)&=\\alpha,\\\\\n\\pi(u_k=\\text{Lazy} | x_k)&=1-\\alpha, \\forall x_k \\in \\mathcal{X}\n\\end{align}\n\nCalculate action-values for the problem as described using the 'fifty-fifty' policy ($\\alpha = 0.5$) according to the Bellman Expectation Equation: $q_\\pi(x_k, u_k) = \\mathcal{R}^u_x + \\gamma \\sum_{x_{k+1} \\in \\mathcal{X}} p^u_{xx'} v_\\pi(x_{k+1})$ $\\forall x_k, u_k \\in \\mathcal{X}, \\mathcal{U}$.\n\n## 6) Solution\n\n\n\n```python\nimport numpy as np\n\ngamma = 0.9\nalpha = 0.5\nno_states = 6\nno_actions = 2\nr_X = np.array([-1, -1, -1, -1, -1, 0]).reshape(-1, 1)\nq_XU = np.zeros([no_states, no_actions])\n# YOUR CODE HERE\nraise NotImplementedError()\n```\n\n## 7) Markov Decision Problem: Stochastic Policy Evalution\n\nPlot the state-value of the states \"Lecture\" and \"Study\" for different $\\alpha$. What do we see? Why?\n\n## 7) Solution\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nplt.style.use('seaborn-talk')\n\nn = 6 # dimension of state space\nno_of_samples = 1000\n\nalphas = np.linspace(0, 1, no_of_samples)\nv_n_alpha = np.zeros([n, no_of_samples])\n\n# YOUR CODE HERE\nraise NotImplementedError()\n```\n\n\n```python\nplt.figure(figsize=[10, 6])\nstates = [\"Hangover\", \"Sleep\", \"More Sleep\", \"Visit Lecture\", \"Study\", \"Pass Exam\"]\nalphas = alphas.flatten()\nfor state, vnalp in zip(states, v_n_alpha):\n ls = '--' if state in ['Visit Lecture', 'Study'] else '-'\n plt.plot(alphas, vnalp, ls=ls, label=r\"$x=${}\".format(state))\n \nplt.legend()\nplt.xlabel(r\"$\\alpha$\")\nplt.ylabel(r\"$v_\\pi(x)$\")\nplt.xlim([0, 1])\nplt.ylim([-10, 0])\n```\n\nYOUR ANSWER HERE\n", "meta": {"hexsha": "2008c07407d76bfb64b67241186a18ae775a48e9", "size": 16765, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "exercises/templates/ex02/Ex2.ipynb", "max_stars_repo_name": "adilsheraz/reinforcement_learning_course_materials", "max_stars_repo_head_hexsha": "e086ae7dcee2a0c1dbb329c2b25cf583c339c75a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 557, "max_stars_repo_stars_event_min_datetime": "2020-07-20T08:38:15.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T19:30:35.000Z", "max_issues_repo_path": "exercises/templates/ex02/Ex2.ipynb", "max_issues_repo_name": "speedhunter001/reinforcement_learning_course_materials", "max_issues_repo_head_hexsha": "09a211da5707ba61cd653ab9f2a899b08357d6a3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 7, "max_issues_repo_issues_event_min_datetime": "2020-07-22T07:27:55.000Z", "max_issues_repo_issues_event_max_datetime": "2021-05-12T14:37:08.000Z", "max_forks_repo_path": "exercises/templates/ex02/Ex2.ipynb", "max_forks_repo_name": "speedhunter001/reinforcement_learning_course_materials", "max_forks_repo_head_hexsha": "09a211da5707ba61cd653ab9f2a899b08357d6a3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 115, "max_forks_repo_forks_event_min_datetime": "2020-09-08T17:12:25.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-31T18:13:08.000Z", "avg_line_length": 27.1717990276, "max_line_length": 315, "alphanum_fraction": 0.5413062929, "converted": true, "num_tokens": 2059, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9241418241572635, "lm_q2_score": 0.8918110339361276, "lm_q1q2_score": 0.8241598757053082}} {"text": "\n\n\n```python\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom scipy.stats import norm\n```\n\n#Distribucion normal te\u00f3rica\n\n\\begin{align}\nP(X) = \\frac{1}{\\sigma \\sqrt{2\\pi}^{\\left[ -\\frac{1}{2}\\left(\\frac{X - \ud835\udf07}{\\sigma}\\right)^2\\right]}}\n\\end{align}\n\n\n```python\ndef gaussian(x, mu, sigma):\n return 1/(sigma * np.sqrt(2*np.pi))*np.exp(-0.5 * pow((x-mu)/sigma, 2))\n```\n\n\n```python\n#Lista de datos de entrada para la funcion gaussiana\nx = np.arange(-4, 4, 0.1)\ny = gaussian(x, 0.0, 1.0)\nplt.plot(x, y)\n\n```\n\n#Usando scipy\n\n\n```python\ndist = norm(0, 1) #promedio, desviacion estandar\nx = np.arange(-4, 4, 0.1)\ny = [dist.pdf(value) for value in x]\nplt.plot(x,y)\n```\n\n\n```python\ndist = norm(0, 1)\nx = np.arange(-4, 4, 0.1)\ny = [dist.cdf(value) for value in x]\nplt.plot(x, y)\n```\n\n#Distribuci\u00f3n normal (gaussiana) a partir de los datos\n\nArchivo [excel](https://seattlecentral.edu/qelp/sets/057/057.html) \n\n\n```python\ndf = pd.read_excel('s057.xls')\narr = df['Normally Distributed Housefly Wing Lengths'].values[4:]\n#frecuencia de los datos del array\nvalues, dist = np.unique(arr, return_counts=True)\nplt.bar(values, dist)\n```\n\n\n```python\n#Estimacion parametrica de una distribucion\nmu = arr.mean()\nsigma = arr.std()\nx = np.arange(30, 60, 0.1)\ndist = norm(mu, sigma)\ny = [dist.pdf(value) for value in x]\nplt.plot(x,y)\nvalues, dist = np.unique(arr, return_counts = True)\nplt.bar(values, dist/len(arr))\n```\n", "meta": {"hexsha": "7f55d615783c5321159df8e91d69573ebe149030", "size": 67966, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "probabilidad_continua/distribucion_continua.ipynb", "max_stars_repo_name": "yeyomuri/probabilidad", "max_stars_repo_head_hexsha": "e36eacb1a40c6721624603f05f75adc1a7037967", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "probabilidad_continua/distribucion_continua.ipynb", "max_issues_repo_name": "yeyomuri/probabilidad", "max_issues_repo_head_hexsha": "e36eacb1a40c6721624603f05f75adc1a7037967", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "probabilidad_continua/distribucion_continua.ipynb", "max_forks_repo_name": "yeyomuri/probabilidad", "max_forks_repo_head_hexsha": "e36eacb1a40c6721624603f05f75adc1a7037967", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 221.3876221498, "max_line_length": 16034, "alphanum_fraction": 0.9075420063, "converted": true, "num_tokens": 524, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9465966762263736, "lm_q2_score": 0.8705972801594707, "lm_q1q2_score": 0.8241044917306759}} {"text": "# Non-uniform structured grid\n\nIn fluid mechanics and heat transfer, the solution often contains regions of large gradients and regions of small gradients. In cases where both regions are well localized, computational efficiency favors clustering computational nodes in regions of large gradients and mapping regions of small gradients with coarse meshes. 
This notebook described a common method to define a non-uniform grid for a symmetrical problem, with the finest resolution at the two ends of the domain, e.g. channel flow.\n\n## Grid transformation\n\nConsider a domain $[-H/2,+H/2]$ discretized with $N$ points. The location of computational nodes on a uniform grid is defined as:\n\n$$\n\\tilde{y}_j = -h + j\\Delta \\text{ with }\\Delta = \\frac{2h}{N-1}\\text{ and }j\\in[0,N-1] \n$$\nwhere $h=H/2$.\n\nThe common approach to create a non-uniform grid is to operate a transform function over the uniform grid. For a channel flow, one such function is:\n\n$$\ny_j = h\\frac{\\tanh\\gamma \\tilde{y}_j}{\\tanh\\gamma h}\n$$\n\nThe coefficient $\\gamma$ controls the stretching of the grid, in this case the minimum mesh size, which is at the wall.\n\n### Python set-up and useful functions\n\n\n```python\n%matplotlib inline \n# plots graphs within the notebook\n\nfrom IPython.display import display,Image, Latex\nfrom sympy.interactive import printing\nprinting.init_printing(use_latex='mathjax')\nfrom IPython.display import clear_output\n\nimport time\n\nimport numericaltools as numtools\n\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport math\nimport scipy.constants as sc\nimport h5py\n\n\nimport sympy as sym\n\n\n\n\nclass PDF(object):\n def __init__(self, pdf, size=(200,200)):\n self.pdf = pdf\n self.size = size\n\n def _repr_html_(self):\n return ''.format(self.pdf, self.size)\n\n def _repr_latex_(self):\n return r'\\includegraphics[width=1.0\\textwidth]{{{0}}}'.format(self.pdf)\n\nclass ListTable(list):\n \"\"\" Overridden list class which takes a 2-dimensional list of \n the form [[1,2,3],[4,5,6]], and renders an HTML Table in \n IPython Notebook. \"\"\"\n \n def _repr_html_(self):\n html = [\"\"]\n for row in self:\n html.append(\"\")\n \n for col in row:\n html.append(\"\".format(col))\n \n html.append(\"\")\n html.append(\"
    <td>{0}</td>
    \")\n return ''.join(html)\n \nfont = {'family' : 'serif',\n #'color' : 'black',\n 'weight' : 'normal',\n 'size' : 16,\n }\nfontlabel = {'family' : 'serif',\n #'color' : 'black',\n 'weight' : 'normal',\n 'size' : 16,\n }\n\n\nfrom matplotlib.ticker import FormatStrFormatter\nplt.rc('font', **font)\n\n```\n\n## Example\n\nCreate a grid for $H=2$, $N=33$, and $\\Delta_{min}=0.001$\n\n\n\n\n\n\n```python\nH = 2\nN = 33\nDeltamin = 0.0001\ngamma_guess = 2\ny,gamma = numtools.stretched_mesh(H,N,Deltamin,gamma_guess)\nprint(\"Stretching coefficient gamma: %1.4e\" %gamma)\nplt.plot(y,'o')\nplt.show()\nplt.plot(0.5*(y[1:]+y[:-1]),y[1:]-y[:-1],'o')\nplt.show()\n\n\n```\n\n## Grid generation at fixed $\\gamma$\n\n$H=2$, $N=257$, $\\gamma = 2.6$\n\n\n```python\nH = 2.\nh = H/2\nN =257\ngamma = 2.6\ny_uni = np.linspace(-h,h,N)\ny = h*np.tanh(gamma*y_uni)/np.tanh(gamma*h)\nplt.plot(y,'o')\nplt.show()\nplt.plot(0.5*(y[1:]+y[:-1]),y[1:]-y[:-1],'o')\nplt.show()\nprint(\"Deltamin = %1.4e\" %(y[1]-y[0]))\n```\n\n## Generate a bunch of grids with different $N$ but constant $\\Delta_{min}$\n\n\n```python\nH = 2\ndeltamin = 5e-4\ngamma_guess = 2.\nN_array = np.array([33,65,129,257,513,1025],dtype=int)\ngamma_array = np.zeros(len(N_array))\nj = 0\nfor N in N_array:\n y,gamma_array[j] = numtools.stretched_mesh(H,N,Deltamin,gamma_guess)\n gamma_guess = gamma_array[j]\n print(\"for N=%4i, gamma=%1.4f\" %(N,gamma_array[j]))\n \n j += 1\n \nprint(gamma_array)\n \n```\n\n for N= 33, gamma=4.8624\n for N= 65, gamma=4.3730\n for N= 129, gamma=3.9348\n for N= 257, gamma=3.5145\n for N= 513, gamma=3.0970\n for N=1025, gamma=2.6734\n [4.86238469 4.37303735 3.93484222 3.51452064 3.09698058 2.67343703]\n\n\n## Generate a bunch of grids at constant $N$ and varying $\\Delta_{min}$\n\n\n```python\nDeltamin_array = np.array([1e-2,5e-3,1e-3,5e-4,1e-4])\nH = 2 \nN = 129\ngamma_guess = 2\nfor Deltamin in Deltamin_array:\n y,gamma = numtools.stretched_mesh(H,N,Deltamin,gamma_guess)\n print(\"For Dmin=%1.1e, gamma=%1.4f\" %(Deltamin,gamma))\n```\n\n For Dmin=1.0e-02, gamma=0.8639\n For Dmin=5.0e-03, gamma=1.4658\n For Dmin=1.0e-03, gamma=2.5568\n For Dmin=5.0e-04, gamma=2.9842\n For Dmin=1.0e-04, gamma=3.9348\n\n\n\n```python\n2/129\n```\n\n\n\n\n$$0.015503875968992248$$\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "ecee1a92807242a85a17522c2fdd3c61756b3ab1", "size": 47772, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "A01-Grid-Generation/grid-generation.ipynb", "max_stars_repo_name": "fuqianyin/numericalmethods", "max_stars_repo_head_hexsha": "e8ac8e4c378fa1c6ed9cfd67a89bcc12567f658d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-10-18T19:40:22.000Z", "max_stars_repo_stars_event_max_datetime": "2020-10-18T19:40:22.000Z", "max_issues_repo_path": "A01-Grid-Generation/grid-generation.ipynb", "max_issues_repo_name": "fuqianyin/numericalmethods", "max_issues_repo_head_hexsha": "e8ac8e4c378fa1c6ed9cfd67a89bcc12567f658d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "A01-Grid-Generation/grid-generation.ipynb", "max_forks_repo_name": "fuqianyin/numericalmethods", "max_forks_repo_head_hexsha": "e8ac8e4c378fa1c6ed9cfd67a89bcc12567f658d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 
129.1135135135, "max_line_length": 12740, "alphanum_fraction": 0.8773968015, "converted": true, "num_tokens": 1477, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9304582632076909, "lm_q2_score": 0.885631476836816, "lm_q1q2_score": 0.8240431257796461}} {"text": "```python\n%matplotlib notebook\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n```\n\n\n```python\nnp.random.seed(1)\n```\n\n# Problem setup:\n\nWe're going to have a small number of (X, Y) data points.\nWe will see how the model misfit changes as we increase the number of parameters.\nThe type of model we'll use is a sum of sines and cosines of increasing frequency. The model parameters are the amplitudes of these sinusoids. \nThat is, given the m data points $(x_1, y_1) ,\\ldots, (x_m, y_m)$ , we'll fit\n\n\\begin{align}\ny_i = a_0 + a_1 \\cos(x_i) + a_2 \\cos(2x_i) + \\ldots a_n \\cos(nx_i) + b_1 \\sin(x_i) + \\ldots + b_n \\sin(nx_i)\n\\end{align}\n\nBut before we work with this, we'll first show what happens with Numpy's `Polynomial.fit` function, which is often used to show the danger of high complexity model fitting.\n\n# Make the truth data: all zeros, but there's one noise point at 1\n\nFor illustration purposes, the data generation process will just be all zeros. But we'll add noise to one single point to move it from 0 to 1.\n\n\nAn omniscient model would ignore the noise point and predict all zeros... but it will try to fit through that noise point at y=1.\n\n\n\n```python\nX_MAX = 2 * np.pi\ndef make_data(m):\n # Make Xs scattered on interval [0, 2pi]\n X = X_MAX * np.random.rand(m,)\n # Y's are all zero (1e-10 for plotting purposes)\n Y = 1e-9 + np.zeros((m,))\n # ...except for one noise point\n Y[m//2] = 1\n return X, Y\n\n```\n\n# Traditional model-fitting explanation: Use fewer parameters than data (n << m)\n\n\n\n```python\nfrom numpy.polynomial import Polynomial\nm = 9\nX, Y = make_data(m)\n\nn = 4\n\np = Polynomial.fit(X, Y, n)\nxx, yy = p.linspace(domain=[np.min(X), np.max(X)])\nplt.figure()\nplt.plot(xx, yy, label=f\"poly fit {n = }\")\nplt.plot(X, Y, 'rx', label='sample points')\nplt.legend()\n```\n\n\n \n\n\n\n\n\n\n\n\n\n \n\n\n\nHere a polynomial won't be look great, but it's relatively stable in the region around the data.\n\nHowever, the fit becomes unstable as the order increases up to the number of data points, and the errors go crazy goes crazy.\n\n\n```python\nn = 9\np = Polynomial.fit(X, Y, n)\nxx, yy = p.linspace(domain=[np.min(X), np.max(X)])\nplt.figure()\nplt.plot(xx, yy, label=f\"poly fit {n = }\")\nplt.plot(X, Y, 'rx', label='sample points')\nplt.ylim(-10, 10)\nplt.legend()\n```\n\n /Users/scott/opt/anaconda3/envs/mapping/lib/python3.8/site-packages/numpy/polynomial/polynomial.py:1350: RankWarning: The fit may be poorly conditioned\n return pu._fit(polyvander, x, y, deg, rcond, full, w)\n\n\n\n \n\n\n\n\n\n\n\n\n\n \n\n\n\nThe points are all interpolated here, but the fit inbetween is terrible. 
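As a quick sanity check (an aside, using the `p`, `X`, and `Y` defined above), we can confirm numerically that the high-order fit does reproduce the sample points themselves, even while it swings wildly between them:

```python
# Residuals of the high-order polynomial fit at the sample points.
# These should be near zero, in contrast to the large oscillations
# the curve makes in between the points.
residuals = np.abs(p(X) - Y)
print(residuals.max())
```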
This is generally shown to be the reason for keeping model complexity low compared to the size of the data.\n\n# Back to the sine/cosine model:\n\nNow to show what happens as we increase the number of model parameters above and beyond the number of data points, we'll use the Fourier model:\n\n\\begin{align}\ny_i = a_0 + a_1 \\cos(x_i) + a_2 \\cos(2x_i) + \\ldots a_n \\cos(nx_i) + b_1 \\sin(x_i) + \\ldots + b_n \\sin(nx_i)\n\\end{align}\n\n(Note that we're using this sine/cosine model to match [1], and because the `Polynomial.fit` fails for when the `deg`, polynomial order, exceeds `m`, the number of points).\n\n\nBelow are the functions for the making $\\mathbf{A}$ matrix for fitting the system $\\mathbf{y = Ax}$, along with `fit()` and `predict()` functions for the model:\n\n\n```python\ndef form_A(X, order):\n # N is the highest order sin/cos, so num terms is n = 2order + 1\n # X has m data points\n m = X.shape[0]\n A = np.zeros((m, 2*order + 1))\n A[:, 0] = 1\n for k in range(1, order + 1):\n A[:, k] = np.cos(X * k)\n A[:, k + order] = np.sin(X * k)\n return A\n\ndef predict(coeffs, xs):\n \"\"\"Take in the coefficients of the sin/cosine terms from the fitting,\n along with the x locations, and output the model predicted ys\n \n The `coeffs` vector are the a_i and b_i cos/sin coefficients\"\"\"\n order = (len(coeffs) - 1) // 2\n m = len(xs)\n idxs = np.arange(1, order+1).reshape((1, order))\n a0 = coeffs[0]\n cos_terms = np.sum(coeffs[1:(order+1)] * np.cos(xs.reshape((-1, 1)) * idxs), axis=1)\n sin_terms = np.sum(coeffs[order+1:] * np.sin(xs.reshape((-1, 1)) * idxs), axis=1)\n ys = a0 + cos_terms + sin_terms\n return ys\n\n\ndef calculate_avg_misfit(y_hat, y_truth = 0):\n # The real data should be all 0s (except that one noisy point), so a 0 line would be perfect\n return np.sum(np.abs(y_hat - y_truth)) / len(y_hat)\n\ndef fit(X, Y, order, x_test=None, x_max=X_MAX, print_misfit=True):\n \"\"\"Fit the sin/cos model, return the [a_i, b_i] coefficients\"\"\"\n A = form_A(X, order)\n \n m = len(X)\n n = 2*order + 1\n assert A.shape == (m, n)\n # If overdetermined system (m > n, more data points (rows) than features (columns)), \n # this would be a least squares fitting of the model.\n # coeffs = np.linalg.lstsq(A, Y, rcond=None)[0]\n # For fat A (m < n), we'll use the pseudoinverse to fit\n coeffs = np.linalg.pinv(A) @ Y\n if x_test is None:\n x_test = np.linspace(0, X_MAX, 500)\n y_hat = predict(coeffs, x_test)\n \n misfit = calculate_avg_misfit(y_hat)\n if print_misfit:\n print(f\"Average misfit: {misfit}\")\n return x_test, y_hat, misfit\n\n```\n\n## Case: m > n, more data points than model parameters\n\nThe model does it's best to fit all points, but there are fewer parameters than data points, so we're doing a least-squares regression\n\n\n```python\n# m = Number of data points\n# n = Number of parameters to fit\n# m = 9\n# X, Y = make_data(m)\n\norder = 3\n# n = 2*order + 1\n\nx_test, y_hat, misfit = fit(X, Y, order)\n\nplt.figure()\nplt.plot(x_test, y_hat, 'b.-', label='predicted')\nplt.plot(X, Y, 'rx', label='sample points')\nplt.title(\"Predicted y vs x (linear scale)\")\nplt.legend()\n```\n\n Average misfit: 1.9714393403195682\n\n\n\n \n\n\n\n\n\n\n\n\n\n \n\n\n\nFor a more \"energy-like\" view, we'll plot in a log scale. 
Any peaks above 0 can be considered \"noise energy\", as the truth would be all 0s.\n\n\n```python\nplt.figure()\nplt.semilogy(x_test, y_hat, 'b.-', label='predicted')\nplt.semilogy(X, Y, 'rx', label='sample points')\nplt.title(\"Predicted y vs x (log-scale)\")\nplt.legend()\n```\n\n\n \n\n\n\n\n\n\n\n\n\n \n\n\n\nThe model has `order` number of large humps, where large, badly-fitting spikes occur (where the humps might wrap around the `2 pi` point, since the sines/cosines are periodic there).\n\n# Overfitting: m == n\nThe number of sin and cosine terms will now be equal to the number of data points to fit. \n\nThis is the **worst-case scenario** here, where we can interpolate the data (match the training points exactly), but in between those, the model goes wildly off the rails (just like with the polynomail fit).\n\n\n```python\norder = m // 2\nn = 2*order + 1\nassert n == m\n\nx_test, y_hat, misfit = fit(X, Y, order)\n\nplt.figure()\nplt.semilogy(x_test, y_hat, 'b.-')\nplt.semilogy(X, Y, 'rx')\nplt.show()\n```\n\n Average misfit: 20.355497441416627\n\n\n\n \n\n\n\n\n\n\n# Overfitting? Try n > m\nLet's use twice as many parameters as data points\n\n\n```python\norder = m\n# n = 2*order + 1\n\nx_test, y_hat, misfit = fit(X, Y, order)\n\nplt.figure()\nplt.semilogy(x_test, y_hat, 'b.-')\nplt.semilogy(X, Y, 'rx')\nplt.show()\n```\n\n Average misfit: 0.15200455330631488\n\n\n\n \n\n\n\n\n\n\nNow there are still badly fitting spikes, but there peaks start to all go down...\nThe \"noise energy\" is getting spread across higher frequencies, and the average misfit to the truth goes down.\n\n\n# Harmless overfitting? n = 10m\n\nWay more parameters: now use 10 times the number of parameters as data points\n\n\n```python\norder = 10*m\n\nx_test, y_hat, misfit = fit(X, Y, order)\n\nplt.figure()\nplt.semilogy(x_test, y_hat, 'b.-')\nplt.semilogy(X, Y, 'rx')\nplt.show()\n\nplt.figure()\nplt.plot(x_test, y_hat, 'b.-', label='predicted')\nplt.plot(X, Y, 'rx', label='sample points')\nplt.title(\"Predicted y vs x (linear scale)\")\nplt.legend()\n```\n\n Average misfit: 0.018012970909079603\n\n\n\n \n\n\n\n\n\n\n\n \n\n\n\n\n\n\n\n\n\n \n\n\n\n# Show the misfit as a function of number of sin/cos terms\n\n\n```python\norders = list(range(0, m)) + list(np.logspace(np.log10(m), np.log10(8*m), 10).astype(int))\nns = 2*np.array(orders) + 1\n# print(ns)\n\nmisfits = []\nfor order in orders:\n x_test, y_hat, misfit = fit(X, Y, order, print_misfit=False)\n misfits.append(misfit)\n\n```\n\n\n```python\ninterp_thresh = m\nplt.figure();\nplt.semilogy(ns, misfits, '.-')\nplt.semilogy(np.ones(5) * interp_thresh, np.linspace(0, np.max(misfits), 5), 'k-', label='interpolation threshold')\nplt.ylabel(\"Average error\")\nplt.xlabel(\"Number of sin/cosine terms used to fit\")\nplt.legend()\n```\n\n\n \n\n\n\n\n\n\n\n\n\n \n\n\n\n### References\n[1] Muthukumar, Vidya, et al. 
\"Harmless interpolation of noisy data in regression.\" IEEE Journal on Selected Areas in Information Theory (2020).\n\nThe final figure is replicating one case from Figure 2: https://arxiv.org/pdf/1903.09139.pdf\n\n# Random extra test: Fitting with a wide neural network\n\nTODO: check similar \"average error\" graph as above with a NN of increasing width\n\n\n```python\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\n```\n\n\n```python\n\nnwide = 1000\nclass Net(nn.Module):\n\n def __init__(self):\n super(Net, self).__init__()\n # an affine operation: y = Wx + b\n self.fc1 = nn.Linear(1, nwide)\n self.fc2 = nn.Linear(nwide, nwide)\n self.fc3 = nn.Linear(nwide, 1)\n\n\n def forward(self, x):\n# x = F.relu(self.fc1(x))\n# x = F.relu(self.fc2(x))\n \n# x = self.fc1(x)\n# x = self.fc2(x)\n# x = torch.tanh(self.fc1(x))\n# x = torch.tanh(self.fc2(x))\n x = torch.relu(self.fc1(x))\n x = torch.relu(self.fc2(x))\n x = self.fc3(x)\n return x\n\nnet = Net()\nprint(net)\n\n```\n\n Net(\n (fc1): Linear(in_features=1, out_features=1000, bias=True)\n (fc2): Linear(in_features=1000, out_features=1000, bias=True)\n (fc3): Linear(in_features=1000, out_features=1, bias=True)\n )\n\n\n\n```python\nimport torch.optim as optim\n\ninp = torch.Tensor(X.reshape(-1, 1))\ntarget = torch.Tensor(Y).view(-1, 1)\n\ncriterion = nn.MSELoss()\n\n# create your optimizer\noptimizer = optim.SGD(net.parameters(), lr=0.01)\n\nnum_epochs = 5000\nfor epoch in range(num_epochs):\n # in your training loop:\n optimizer.zero_grad() # zero the gradient buffers\n output = net(inp)\n loss = criterion(output, target)\n loss.backward()\n optimizer.step() # Does the update\n if epoch % 500 == 0:\n print(f\"{epoch = }, {loss.item() = }\")\n\n```\n\n epoch = 0, loss.item() = 0.15103742480278015\n epoch = 500, loss.item() = 0.05079585686326027\n epoch = 1000, loss.item() = 0.028149651363492012\n epoch = 1500, loss.item() = 0.008730195462703705\n epoch = 2000, loss.item() = 0.003026040503755212\n epoch = 2500, loss.item() = 0.000692488276399672\n epoch = 3000, loss.item() = 0.00022584182443097234\n epoch = 3500, loss.item() = 9.963271440938115e-05\n epoch = 4000, loss.item() = 4.930593786411919e-05\n epoch = 4500, loss.item() = 2.507205499568954e-05\n\n\n\n```python\nplt.figure()\nx_test = np.linspace(0, X_MAX, 500).reshape(-1, 1)\ny_test = net(torch.Tensor(x_test)).detach().numpy()\n\n# xx, yy = inp.detach().numpy(), output.detach().numpy()\nplt.plot(x_test, y_test, 'b.-')\n\nplt.semilogy(x_test, y_test, 'b.-')\nplt.semilogy(X, Y, 'rx')\n\nplt.figure()\nplt.plot(x_test, y_test, 'b.-')\nplt.plot(X, Y, 'rx')\n```\n\n\n \n\n\n\n\n\n\n\n \n\n\n\n\n\n\n\n\n\n []\n\n\n", "meta": {"hexsha": "4dfcea2f84facfd16ac2b27fabc1c5733f87d263", "size": 1024662, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/Overfitting with fourier.ipynb", "max_stars_repo_name": "scottstanie/scottstanie.github.io", "max_stars_repo_head_hexsha": "69863e119e996cf939b420a973d5153e0baf3b7a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notebooks/Overfitting with fourier.ipynb", "max_issues_repo_name": "scottstanie/scottstanie.github.io", "max_issues_repo_head_hexsha": "69863e119e996cf939b420a973d5153e0baf3b7a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, 
"max_forks_repo_path": "notebooks/Overfitting with fourier.ipynb", "max_forks_repo_name": "scottstanie/scottstanie.github.io", "max_forks_repo_head_hexsha": "69863e119e996cf939b420a973d5153e0baf3b7a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 92.6876526459, "max_line_length": 82877, "alphanum_fraction": 0.7262873025, "converted": true, "num_tokens": 3609, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8856314828740729, "lm_q2_score": 0.9304582564583618, "lm_q1q2_score": 0.8240431254196433}} {"text": "# Four ways to look at matrix multiplication\n\nIf $A$ is $m\\times n$ and $B$ is $n\\times p$, then $C=AB$ is an $m\\times p$ matrix. We may regard as a definition the formula\n\\begin{equation}\nC_{ij} = \\sum_{k=1}^n A_{ik}B_{kj}.\n\\end{equation}\n\n\n```julia\nusing LinearAlgebra\n```\n\n\n```julia\nm,n,p = 4,5,3;\nA = rand(-4:4,m,n);\nB = rand(-4:4,n,p);\ndisplay(A), display(B);\n```\n\n\n 4\u00d75 Array{Float64,2}:\n 1.0 0.0 4.0 1.0 0.0\n 3.0 3.0 4.0 3.0 2.0\n 1.0 1.0 1.0 3.0 4.0\n 4.0 3.0 4.0 1.0 0.0\n\n\n\n 5\u00d73 Array{Float64,2}:\n 1.0 -1.0 -1.0\n 0.0 -0.0 1.0\n 0.0 -1.0 2.0\n 1.0 1.0 1.0\n 1.0 2.0 2.0\n\n\n\n```julia\nC = A*B\n```\n\n\n\n\n 4\u00d73 Array{Float64,2}:\n 2.0 -4.0 8.0\n 8.0 0.0 15.0\n 8.0 9.0 13.0\n 5.0 -7.0 8.0\n\n\n\nHere is a literal interpretation of the summation definition for the matrix product. Notice how in Julia, there are \"implicit for\" loops (aka generators) that can be enclosed in parentheses to generate a list for a command, or in brackets to generate a matrix or vector. \n\n\n```julia\nC_0 = [ sum(A[i,k]*B[k,j] for k=1:n) for i=1:m, j=1:p ]\n```\n\n\n\n\n 4\u00d73 Array{Float64,2}:\n 2.0 -4.0 8.0\n 8.0 0.0 15.0\n 8.0 9.0 13.0\n 5.0 -7.0 8.0\n\n\n\n## Inner products\n\nIf the matrices are real, then we can interpret each sum as the inner product between vectors that are of length $n$. In Julia we can use `dot` for the inner product, or the LaTeX symbol $\\cdot$, which is entered as `\\cdot` followed by the Tab key.\n\n\n```julia\nC_1 = [ A[i,:]\u22c5B[:,j] for i=1:m, j=1:p ]\n```\n\n\n\n\n 4\u00d73 Array{Float64,2}:\n 2.0 -4.0 8.0\n 8.0 0.0 15.0\n 8.0 9.0 13.0\n 5.0 -7.0 8.0\n\n\n\nNote that `A[i,:]` and `B[:,j]` extract one row and one column, respectively. In Julia, each result will be of type `Vector`, and the shape distinction is not preserved, as every vector is simply one-dimensional. (This is unlike MATLAB, where even vectors are regarded as having two dimensions, one with size 1.) 
\n\n\n```julia\nA[2,:]\n```\n\n\n\n\n 5-element Array{Float64,1}:\n 3.0\n 3.0\n 4.0\n 3.0\n 2.0\n\n\n\n\n```julia\nsize(ans)\n```\n\n\n\n\n (5,)\n\n\n\n## Linear combinations of columns\n\nIf we express $B$ columnwise, then the matrix product $AB$ can also be expressed columnwise, as\n\\begin{equation}\nAB = \\begin{bmatrix} A b_1 & A b_2 & \\cdots & A b_p \\end{bmatrix}.\n\\end{equation}\n\n\n```julia\nb1 = B[:,1]\n```\n\n\n\n\n 5-element Array{Float64,1}:\n 1.0\n 0.0\n 0.0\n 1.0\n 1.0\n\n\n\n\n```julia\ndisplay(C[:,1]), display(A*b1);\n```\n\n\n 4-element Array{Float64,1}:\n 2.0\n 8.0\n 8.0\n 5.0\n\n\n\n 4-element Array{Float64,1}:\n 2.0\n 8.0\n 8.0\n 5.0\n\n\nFurthermore, $A$ times a compatible vector is a linear combination of the columns of $A$:\n$$\nAv = v_1 a_1 + \\cdots + v_n a_n.\n$$\n\n\n```julia\ndisplay(A*b1), display( sum(b1[k]*A[:,k] for k=1:n) );\n```\n\n\n 4-element Array{Float64,1}:\n 2.0\n 8.0\n 8.0\n 5.0\n\n\n\n 4-element Array{Float64,1}:\n 2.0\n 8.0\n 8.0\n 5.0\n\n\nPutting this all together, the full interpretation of $C=AB$ is\n\n\n```julia\nC_2 = hcat( ( sum(B[:,j][k]*A[:,k] for k=1:n) for j=1:p )... )\n```\n\n\n\n\n 4\u00d73 Array{Float64,2}:\n 2.0 -4.0 8.0\n 8.0 0.0 15.0\n 8.0 9.0 13.0\n 5.0 -7.0 8.0\n\n\n\nOf course, there is no reason to go through all that in practice. \n\n## Linear combinations of rows\n\nThis is the dual of the previous version:\n\n$$\nAB = \\begin{bmatrix} a_1^T B \\\\ \\vdots \\\\ a_m^T B \\end{bmatrix}\n$$\n\nWe put the transposes in because we want to have all named vectors be column vectors. Thus, each row of $A$ has to have a transpose on it. Note also that transposing a Julia vector will create a \"row vector\": \n\n\n```julia\na3T = A[3,:]'\n```\n\n\n\n\n 1\u00d75 Adjoint{Float64,Array{Float64,1}}:\n 1.0 1.0 1.0 3.0 4.0\n\n\n\nThus, in the third row of the product:\n\n\n```julia\ndisplay(C[3,:]), display(a3T*B);\n```\n\n\n 3-element Array{Float64,1}:\n 8.0\n 9.0\n 13.0\n\n\n\n 1\u00d73 Adjoint{Float64,Array{Float64,1}}:\n 8.0 9.0 13.0\n\n\nThese are the same vector, although the second version thinks of it as having row shape. Furthermore, each such vector-matrix product is a linear combination of the rows of $B$,\n\n\n```julia\ndisplay(a3T*B), display( sum(a3T[k]*B[k,:] for k=1:n) );\n```\n\n\n 1\u00d73 Adjoint{Float64,Array{Float64,1}}:\n 8.0 9.0 13.0\n\n\n\n 3-element Array{Float64,1}:\n 8.0\n 9.0\n 13.0\n\n\nFinally, doing this for all the rows of $A$, we have another identity for the product $AB$.\n\n\n```julia\nC_3 = vcat( ( sum(A[i,:][k]*B[k,:]' for k=1:n) for i=1:m )... )\n```\n\n\n\n\n 4\u00d73 Array{Float64,2}:\n 2.0 -4.0 8.0\n 8.0 0.0 15.0\n 8.0 9.0 13.0\n 5.0 -7.0 8.0\n\n\n\n## Outer products\n\nThis form might be the most surprising, and it leads to some interesting perspectives. The outer product between two vectors is the matrix formed by all possible products of pairs of elements from them. \n\n\n```julia\nouter(u,v) = [ u[i]v[j] for i=1:length(u), j=1:length(v) ];\n```\n\n\n```julia\nouter(A[:,2],B[4,:])\n```\n\n\n\n\n 4\u00d73 Array{Float64,2}:\n 0.0 0.0 0.0\n 3.0 3.0 3.0\n 1.0 1.0 1.0\n 3.0 3.0 3.0\n\n\n\nThe outer product is consistent with the definition of the matrix product $uv^T$. 
Note this is a *column* times a *row*, which is the reverse of the inner product, nor do the vectors even need to have the same length.\n\n\n```julia\nA[:,2]*B[4,:]'\n```\n\n\n\n\n 4\u00d73 Array{Float64,2}:\n 0.0 0.0 0.0\n 3.0 3.0 3.0\n 1.0 1.0 1.0\n 3.0 3.0 3.0\n\n\n\nBecause each column of $uv^T$ is a multiple of $u$ (or equivalently, each row is a multiple of $v^T$), any nonzero outer product has rank 1. \n\nAn interesting identity is that a general matrix product can be written as a sum of outer products:\n$$\nAB = \\sum_{k=1}^n a_k b_k^T,\n$$\nwhere we are writing $A$ by its columns and $B$ by its rows. \n\n\n```julia\nC_4 = sum( A[:,k]*B[k,:]' for k=1:n )\n```\n\n\n\n\n 4\u00d73 Array{Float64,2}:\n 2.0 -4.0 8.0\n 8.0 0.0 15.0\n 8.0 9.0 13.0\n 5.0 -7.0 8.0\n\n\n", "meta": {"hexsha": "2dd83ddc5f087e9c01c3f05b87bc3c23fe5d0aac", "size": 13975, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "julia/Matrix-multiplication.ipynb", "max_stars_repo_name": "tobydriscoll/udmath612", "max_stars_repo_head_hexsha": "3d5d86d2e153ee93e0de7b0f720ce2af354303e6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2018-09-25T14:12:40.000Z", "max_stars_repo_stars_event_max_datetime": "2019-11-01T06:11:06.000Z", "max_issues_repo_path": "julia/Matrix-multiplication.ipynb", "max_issues_repo_name": "tobydriscoll/udmath612", "max_issues_repo_head_hexsha": "3d5d86d2e153ee93e0de7b0f720ce2af354303e6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "julia/Matrix-multiplication.ipynb", "max_forks_repo_name": "tobydriscoll/udmath612", "max_forks_repo_head_hexsha": "3d5d86d2e153ee93e0de7b0f720ce2af354303e6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2019-11-01T06:11:06.000Z", "max_forks_repo_forks_event_max_datetime": "2021-01-03T03:47:34.000Z", "avg_line_length": 21.0784313725, "max_line_length": 319, "alphanum_fraction": 0.4597495528, "converted": true, "num_tokens": 2472, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9304582593509314, "lm_q2_score": 0.8856314783461302, "lm_q1q2_score": 0.8240431237683324}} {"text": "# Simulating The Spread of Heat Through a Material\n> Written by David Mascharka and Ryan Soklaski\n\n## Understanding the Heat Propagation\nIn this problem, we will learn about a simple algorithm for numerically simulating the spread of heat through a material. We will want to use vectorization to write an efficient algorithm.\n\nImagine that we have a rectangular piece of steel. For now, let's treat this piece of steel as a 5x5 grid - we are only able to measure the average temperature of each of these 25 grid regions. Let's assume that steel starts off at a uniform 0-degrees. Thus, our temperature readout for each of its grid positions is:\n\n```\n 0 0 0 0 0 \n 0 0 0 0 0 \n 0 0 0 0 0 \n 0 0 0 0 0 \n 0 0 0 0 0 \n```\n\nNow, we will clamp hot contacts, which are always at a constant 100-degrees, along the outer edges of the steel. Upon clamping these contacts, our temperature readout at time-0 becomes:\n\n```\n 100 100 100 100 100\n 100 0 0 0 100\n 100 0 0 0 100\n 100 0 0 0 100\n 100 100 100 100 100\n```\n\nWe will adopt the same indexing scheme as a 2D NumPy array. That is, element (i,j) of this grid is row-i, column-j in the grid. The top-left corner is located at (0, 0), and has a temperature of 100. 
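For concreteness, here is a minimal sketch of how such a 5x5 readout could be set up in NumPy. This is only an illustration of the indexing convention described above; the exercises below construct their own grids, and the variable name `readout` is used just for this example.

```python
import numpy as np

# 5x5 sheet of steel, initially 0 degrees everywhere
readout = np.zeros((5, 5))

# clamp the 100-degree contacts along all four outer edges
readout[0, :] = 100    # top row
readout[-1, :] = 100   # bottom row
readout[:, 0] = 100    # left column
readout[:, -1] = 100   # right column

# element (i, j) is row-i, column-j; (0, 0) is the top-left corner
print(readout)
```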
\n\nMoving forward, we want to describe, numerically, how the heat from the contacts will spread through the material as time carries on. The heat equation is a partial differential equation that describes the flow of heat through space and time. In the following equation, the function $u(x, y, t)$ describes how much heat resides at the location $(x, y)$ at time $t$:\n\n\\begin{equation}\n\\frac{\\partial u}{\\partial t} - \\alpha \\left(\\frac{\\partial^{2} u}{\\partial x^{2}} + \\frac{\\partial^{2} u}{\\partial y^{2}} \\right)= 0\n\\end{equation}\n\nDo not worry if you have no clue what a partial differential equation is! You do not need to know anything about the heat equation, we are simply providing some background here. \n\nWhat this equation ultimately says is that heat will spread such that a point will take on the average amount of heat among its neighboring points. Numerically, we can write this out as:\n\n\\begin{equation}\nu^{(t)}_{ij} = \\frac{u^{(t-1)}_{i+1,j} + u^{(t-1)}_{i-1,j} + u^{(t-1)}_{i,j+1} + u^{(t-1)}_{i,j-1}}{4}\n\\end{equation}\n\nThat is, $u^{(t)}_{ij}$ is the heat at grid-location $(i, j)$ at time-step $t$. It's value is given by the average of the heat of all four of its neighboring grid positions from time-step $t-1$. See that the right side of the equation averages the heat from above, below, left-of, and right-of $(i, j)$, at time-step $t-1$. This means of evolving the heat through our gridded material is known as the *finite difference method*.\n\nLet's using the finite difference method to figure out what the distribution of heat looks like throughout our steel square at time-step 1. Keep in mind that we need not update any of the outer-edges of the steel - those positions are held at a fixed heat. We'll start at the upper-left corner and get\n\n\\begin{equation}\nu^{t=1}_{1,1} = \\frac{0 + 100 + 0 + 100}{4} = 50\\\\\nu^{t=1}_{1,2} = \\frac{0 + 100 + 0 + 0}{4} = 25\\\\\n\\end{equation}\n\nand so on, yielding the heat distribution at timestep-1 of:\n```\n 100 100 100 100 100\n 100 50 25 50 100\n 100 25 0 25 100\n 100 50 25 50 100\n 100 100 100 100 100\n```\n\nRepeating this process again will produce the heat distribution at timestep-2, and so on. After many iterations, we see the entire region becomes 100 degrees. This is because the heat from the edges flows inward until everything is the same temperature. This stabilized distribution of heat is known as the *steady state*. If we change the boundary conditions, i.e. change what we clamp to the edges of our steel, we will observe different steady states.\n\n\n## Problem 1: A Simple Implementation of Finite Differences\nWrite a Python function that takes in a 2-dimensional numpy-array containing heat-values, and uses the finite difference method to produce the heat distribution for that material at the next time-step. Do this using simple for-loops to iterate over the values of the array.\n\nAssume that the boundary-values of the array are fixed, so you need not update them. However, do *not* assume that the boundary values are all the same as one another, as they were in the preceding example.\n\nAlso, be careful not to change the content of the array that your function is given. You need to use the values in that array, unchanged, to compute the new heat distribution. 
Consider making use of `np.copy` to create a copy of the input array (so that your new array will have the appropriate boundary values).\n\n\n```python\n# make sure to execute this cell so that your function is defined\n# you must re-run this cell any time you make a change to this function\nimport numpy as np\ndef evolve_heat_slow(u):\n \"\"\" Given a 2D array of heat-values (at fixed boundary), produces\n the new heat distribution after one iteration of the finite \n difference method.\n \n Parameters\n ----------\n u : numpy.ndarray shape=(M, N)\n An MxN array of heat values at time-step t-1.\n (M and N are both at least 2)\n \n Returns\n -------\n numpy.ndarray, shape=(M, N)\n An MxN array of heat values at time-step t.\n \"\"\" \n new = np.copy(u)\n size = np.size(u)\n rows = np.size(u,0)\n columns = np.size(u,1)\n for r in range(1, rows - 1):\n for c in range(1, columns - 1):\n top = u[r-1, c]\n left = u[r, c-1]\n right = u[r, c+1]\n bottom = u[r+1, c]\n new[r,c] = (top+left+right+bottom)/4\n return new\n \n # student code goes here\n```\n\n\n```python\nfrom bwsi_grader.python.heat_dispersion import grader\ngrader(evolve_heat_slow, grade_ver=1)\n```\n\n \n ============================== ALL TESTS PASSED! ===============================\n Your submission code: bw4c328516b4c813fe5f8394f4f6b83799336f22f27044cb26efb23cf8\n ================================================================================\n \n\n\nArmed with this function, we will find the steady state of a more finely-gridded sheet of steel, with a less trivial set of boundary heat-values.\n\nWe will create an 80x96 grid with the following boundary conditions:\n\n- Along the top row, we linearly increase the heat from 0 to 300 degrees from left to right\n- Along the bottom row, we fade from 0 to 80 degrees at the middle and back to 0 on the right\n- Along the left side, we linearly increase from 0 degrees at the bottom to 90 at the top (note that the very corner point is 0 from the 0 -> 300 continuum above)\n- Along the right side, we linearly increase the heat from 0 to 300 degrees from bottom to top\n\n\n```python\n# creating the 80x96-grid sheet with the non-trivial boundary conditions\n# simply execute this cell; you need not change anything.\n\nimport numpy as np\n# discretize our rectangle into an 80x96 grid\nrows = 80 \ncolumns = 96\nu = np.zeros((rows, columns))\n\n# set up the boundary conditions\nu[0] = np.linspace(0, 300, columns) # top row runs 0 -> 300\nu[1:,0] = np.linspace(90, 0, rows-1) # left side goes 0 -> 90 bottom to top\nu[-1,:columns//2] = np.linspace(0, 80, columns//2) # 0 (left) to 80 (middle) along the bottom\nu[-1,columns//2:] = np.linspace(80, 0, columns//2) # 80 (middle) to 0 (left) along the bottom\nu[:,-1] = np.linspace(300,0,rows) # 0 -> 300 bottom to top along the right\n```\n\nLet's plot the initial condition for this steel sheet. You should see a \"hot spot\" in the top-right corner, and varying amounts of heat elsewhere on the boundary. Check that this corresponds to the boundary conditions that we imposed.\n\n\n```python\n# execute this cell\n\n# matplotlib is a Python library used for visualizing data\nimport matplotlib.pyplot as plt\nfrom matplotlib.animation import FuncAnimation\n%matplotlib notebook\n\nfig, ax = plt.subplots()\nax.imshow(u, cmap='hot');\n```\n\n\n \n\n\n\n\n\n\nNow, we will make an animation of the heat spreading through this material. 
However, our current implementation is too slow - let's time the amount of time required to evolve the heat in the material for 5000 iterations. This should take 20 sec - 1 minute.\n\n\n```python\nimport time\nslow = u.copy()\nstart = time.time()\nfor _ in range(5000): # perform 5000 iterations to reach a steady state\n slow = evolve_heat_slow(slow)\nt = round(time.time() - start, 1)\nprint(\"`evolve_heat_slow` took {} seconds to complete 5000 iterations\".format(t))\n```\n\n `evolve_heat_slow` took 45.3 seconds to complete 5000 iterations\n\n\n# Problem 2: A Vectorized Version of Finite Differences\nUse NumPy array arithmetic to vectorize the finite-difference method that you implemented in problem #1. Your code should not utilize any for-loops.\n\n\n```python\nimport numpy as np\ndef evolve_heat_fast(u):\n \"\"\" Given a 2D array of heat-values (at fixed boundary), produces\n the new heat distribution after one iteration of the finite \n difference method.\n \n Parameters\n ----------\n u : numpy.ndarray shape=(M, N)\n An MxN array of heat values at time-step t-1.\n (M and N are both at least 2)\n \n Returns\n -------\n numpy.ndarray, shape=(M, N)\n An MxN array of heat values at time-step t.\n \"\"\" \n old_temps = np.copy(u)\n new_temps = np.copy(u)\n new_temps[1:-1,1:-1] = (1/4) * (old_temps[2:,1:-1] + old_temps[:-2,1:-1] + old_temps[1:-1,:-2] + old_temps[1:-1,2:])\n return new_temps\n```\n\n\n```python\nfrom bwsi_grader.python.heat_dispersion import grader\ngrader(evolve_heat_fast, grade_ver=2)\n```\n\n \n ============================== ALL TESTS PASSED! ===============================\n Your submission code: bw95ca3a7cfec977141125d15af9059617240297927b952f8dfae97d2a\n ================================================================================\n \n\n\nNow let's use our vectorized code to perform 5000 iterations to evolve the heat in our system. This should be nearly 100-times faster than our original implementation.\n\n\n```python\n# execute this cell\n\nimport time\nfast = u.copy()\nstart = time.time()\nall_frames = []\nfor _ in range(5000):\n all_frames.append(fast.copy())\n fast = evolve_heat_fast(fast)\nt = round(time.time() - start, 1)\nprint(\"`evolve_heat_fast` took {} seconds to complete 5000 iterations\".format(t))\n```\n\nPlotting the distribution of heat after 5000 time-steps of evolution. \n\n\n```python\n# execute this cell\n\nfig, ax = plt.subplots()\nax.imshow(fast, cmap='hot');\n```\n\nEven better, let's plot an animation of the heat spreading through the steel sheet! The animation will loop back to the beginning after playing through.\n\n\n```python\n# execute this cell\n\nfig = plt.figure()\nt = u.copy()\nim = plt.imshow(t, animated=True, cmap='hot')\n\ndef updatefig(*args):\n im.set_array(all_frames[args[0]])\n return im,\n\nani = FuncAnimation(fig, updatefig, range(5000), interval=1, blit=True, repeat=True,\n repeat_delay=1000)\nplt.show()\n```\n\nTry creating your own boundary conditions for the temperature distribution on this steel sheet. Reuse the code from above to animate how the heat spreads through it. 
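For example, a minimal sketch of one alternative set of boundary conditions, reusing the `rows`, `columns`, and `evolve_heat_fast` names defined above (the particular edge values are arbitrary and chosen only for illustration), might look like this:

```python
# An illustrative set of custom boundary conditions: a hot left edge, a cold
# right edge, a sinusoidal "warm bump" along the top, and a lukewarm bottom.
u_custom = np.zeros((rows, columns))
u_custom[:, 0] = 250                                               # left edge
u_custom[:, -1] = 10                                               # right edge
u_custom[0] = 150 + 100 * np.sin(np.linspace(0, np.pi, columns))   # top row
u_custom[-1] = 50                                                  # bottom row

custom_frames = []
for _ in range(5000):
    custom_frames.append(u_custom.copy())
    u_custom = evolve_heat_fast(u_custom)
# `custom_frames` can then be fed to the same FuncAnimation code used above.
```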
Also congratulate yourself for numerically solving a fixed-boundary partial differential equation :)\n", "meta": {"hexsha": "ead437ad04be5a8a37083ed5b2da89ea3afc985e", "size": 119697, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "PythonHW-1.2.4/heat_dispersion/HW_heat_dispersion.ipynb", "max_stars_repo_name": "abhatia25/Beaverworks-Racecar", "max_stars_repo_head_hexsha": "7c579e27de3688d58f280ff260897f77efa5fb4a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "PythonHW-1.2.4/heat_dispersion/HW_heat_dispersion.ipynb", "max_issues_repo_name": "abhatia25/Beaverworks-Racecar", "max_issues_repo_head_hexsha": "7c579e27de3688d58f280ff260897f77efa5fb4a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-03-06T21:24:28.000Z", "max_issues_repo_issues_event_max_datetime": "2021-03-06T21:24:28.000Z", "max_forks_repo_path": "PythonHW-1.2.4/heat_dispersion/HW_heat_dispersion.ipynb", "max_forks_repo_name": "abhatia25/Beaverworks-Racecar", "max_forks_repo_head_hexsha": "7c579e27de3688d58f280ff260897f77efa5fb4a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-03-06T21:04:55.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-06T21:04:55.000Z", "avg_line_length": 99.4161129568, "max_line_length": 67083, "alphanum_fraction": 0.7657084137, "converted": true, "num_tokens": 3050, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.930458253565792, "lm_q2_score": 0.8856314632529872, "lm_q1q2_score": 0.8240431046012915}} {"text": "# Cvxpylayers tutorial\n\n\n```python\nimport cvxpy as cp\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport torch\nfrom cvxpylayers.torch import CvxpyLayer\ntorch.set_default_dtype(torch.double)\n\nnp.set_printoptions(precision=3, suppress=True)\n```\n\n## Parametrized convex optimization problem\n\n$$\n\\begin{array}{ll} \\mbox{minimize} & f_0(x;\\theta)\\\\\n\\mbox{subject to} & f_i(x;\\theta) \\leq 0, \\quad i=1, \\ldots, m\\\\\n& A(\\theta)x=b(\\theta),\n\\end{array}\n$$\nwith variable $x \\in \\mathbf{R}^n$ and parameters $\\theta\\in\\Theta\\subseteq\\mathbf{R}^p$\n\n* objective and inequality constraints $f_0, \\ldots, f_m$ are *convex* in $x$ for each $\\theta$, *i.e.*, their graphs curve upward\n* equality constraints are linear\n* for a given value of $\\theta$, find a value for $x$ that minimizes objective, while satisfying constraints\n* we can efficiently solve these globally with near total reliability\n\n## Solution map\n* Solution $x^\\star$ is an implicit function of $\\theta$\n* When unique, define solution map as function\n$x^\\star = \\mathcal S(\\theta)$\n* Need to call numerical solver to evaluate\n* This function is often differentiable\n* In a series of papers we showed how to analytically differentiate this function, using the implicit function theorem\n* Benefits of analytical differentiation: works with nonsmooth objective/constraints, low memory usage, don't compound errors\n\n## CVXPY\n* High level domain-specific language (DSL) for convex optimization\n* Define variables, parameters, objective and constraints\n* Synthesize into problem object, then call solve method\n* We've added derivatives to CVXPY (forward and backward)\n\n## CVXPYlayers\n\n* Convert CVXPY problems into callable, differentiable Pytorch and Tensorflow modules in one 
line\n\n## Applications\n* learning convex optimization models (structured prediction): https://stanford.edu/~boyd/papers/learning_copt_models.html\n* learning decision-making policies (reinforcement learning): https://stanford.edu/~boyd/papers/learning_cocps.html\n* machine learning hyper-parameter tuning and feature engineering: https://stanford.edu/~boyd/papers/lsat.html\n* repairing infeasible or unbounded optimization problems: https://stanford.edu/~boyd/papers/auto_repair_cvx.html\n* as protection layers in neural networks: http://physbam.stanford.edu/~fedkiw/papers/stanford2019-10.pdf\n* custom neural network layers (sparsemax, csoftmax, csparsemax, LML): https://locuslab.github.io/2019-10-28-cvxpylayers/\n* and many more... \n\n## Average example\nFind the average of a vector:\n\\begin{equation}\n\\begin{array}{ll}\n\\mbox{minimize} & \\sum_{i=1}^n (y_i - x)^2\n\\end{array}\n\\end{equation}\nVariable $x$, parameters $y\\in\\mathbf{R}^n$\n\nThe solution map is clearly:\n$$x=\\sum_{i=1}^n y_i / n$$\n\n\n```python\nn = 7\n\n# Define variables & parameters\nx = cp.Variable()\ny = cp.Parameter(n)\n\n# Define objective and constraints\nobjective = cp.sum_squares(y - x)\nconstraints = []\n\n# Synthesize problem\nprob = cp.Problem(cp.Minimize(objective), constraints)\n\n# Set parameter values\ny.value = np.random.randn(n)\n\n# Solve problem in one line\nprob.solve(requires_grad=True)\nprint(\"solution:\", \"%.3f\" % x.value)\nprint(\"analytical solution:\", \"%.3f\" % np.mean(y.value))\n```\n\n solution: -0.380\n analytical solution: -0.380\n\n\nThe gradient is simply:\n$$\\nabla_y x = (1/n)\\mathbf{1}$$\n\n\n```python\n# Set gradient wrt x\nx.gradient = np.array([1.])\n\n# Differentiate in one line\nprob.backward()\nprint(\"gradient:\", y.gradient)\nprint(\"analytical gradient:\", np.ones(y.size) / n)\n```\n\n gradient: [0.143 0.143 0.143 0.143 0.143 0.143 0.143]\n analytical gradient: [0.143 0.143 0.143 0.143 0.143 0.143 0.143]\n\n\n## Median example\nFinding the median of a vector:\n\\begin{equation}\n\\begin{array}{ll}\n\\mbox{minimize} & \\sum_{i=1}^n |y_i - x|,\n\\end{array}\n\\end{equation}\nVariable $x$, parameters $y\\in\\mathbf{R}^n$\n\nSolution:\n$$x=\\mathbf{median}(y)$$\n\nGradient (no duplicates):\n$$(\\nabla_y x)_i = \\begin{cases}\n1 & y_i = \\mathbf{median}(y) \\\\\n0 & \\text{otherwise}.\n\\end{cases}$$\n\n\n```python\nn = 7\n\n# Define variables & parameters\nx = cp.Variable()\ny = cp.Parameter(n)\n\n# Define objective and constraints\nobjective = cp.norm1(y - x)\nconstraints = []\n\n# Synthesize problem\nprob = cp.Problem(cp.Minimize(objective), constraints)\n\n# Set parameter values\ny.value = np.random.randn(n)\n\n# Solve problem in one line\nprob.solve(requires_grad=True)\nprint(\"solution:\", \"%.3f\" % x.value)\nprint(\"analytical solution:\", \"%.3f\" % np.median(y.value))\n```\n\n solution: -0.350\n analytical solution: -0.350\n\n\n\n```python\n# Set gradient wrt x\nx.gradient = np.array([1.])\n\n# Differentiate in one line\nprob.backward()\nprint(\"gradient:\", y.gradient)\ng = np.zeros(y.size)\ng[y.value == np.median(y.value)] = 1.\nprint(\"analytical gradient:\", g)\n```\n\n gradient: [-0. 0. -0. 1. 0. -0. 0.]\n analytical gradient: [0. 0. 0. 1. 0. 0. 
0.]\n\n\n## Elastic-net regression example \nWe are given training data $(x_i, y_i)_{i=1}^{N}$,\nwhere $x_i\\in\\mathbf{R}$ are inputs and $y_i\\in\\mathbf{R}$ are outputs.\nSuppose we fit a model for this regression problem by solving the elastic-net problem\n\\begin{equation}\n\\begin{array}{ll}\n\\mbox{minimize} & \\frac{1}{N}\\sum_{i=1}^N (ax_i + b - y_i)^2 + \\lambda |a| + \\alpha a^2,\n\\end{array}\n\\label{eq:trainlinear}\n\\end{equation}\nwhere $\\lambda,\\alpha>0$ are hyper-parameters.\n\nWe hope that the test loss $\\mathcal{L}^{\\mathrm{test}}(a,b) =\n\\frac{1}{M}\\sum_{i=1}^M (a\\tilde x_i + b - \\tilde y_i)^2$ is small, where\n$(\\tilde x_i, \\tilde y_i)_{i=1}^{M}$ is our test set.\n\nFirst, we set up our problem, where $\\{x_i, y_i\\}_{i=1}^N$, $\\lambda$, and $\\alpha$ are our parameters.\n\n\n```python\nfrom sklearn.datasets import make_blobs\nfrom sklearn.model_selection import train_test_split\n\ntorch.manual_seed(0)\nnp.random.seed(0)\nn = 2\nN = 60\nX, y = make_blobs(N, n, centers=np.array([[2, 2], [-2, -2]]), cluster_std=3)\nXtrain, Xtest, ytrain, ytest = train_test_split(X, y, test_size=.5)\n\nXtrain, Xtest, ytrain, ytest = map(\n torch.from_numpy, [Xtrain, Xtest, ytrain, ytest])\nXtrain.requires_grad_(True)\nm = Xtrain.shape[0]\n\na = cp.Variable((n, 1))\nb = cp.Variable((1, 1))\nX = cp.Parameter((m, n))\nY = ytrain.numpy()[:, np.newaxis]\n\nlog_likelihood = (1. / m) * cp.sum(\n cp.multiply(Y, X @ a + b) - cp.logistic(X @ a + b)\n)\nregularization = - 0.1 * cp.norm(a, 1) - 0.1 * cp.sum_squares(a)\nprob = cp.Problem(cp.Maximize(log_likelihood + regularization))\nfit_logreg = CvxpyLayer(prob, [X], [a, b])\n\ntorch.manual_seed(0)\nnp.random.seed(0)\nn = 1\nN = 60\nX = np.random.randn(N, n)\ntheta = np.random.randn(n)\ny = X @ theta + .5 * np.random.randn(N)\nXtrain, Xtest, ytrain, ytest = train_test_split(X, y, test_size=.5)\nXtrain, Xtest, ytrain, ytest = map(\n torch.from_numpy, [Xtrain, Xtest, ytrain, ytest])\nXtrain.requires_grad_(True)\n\nm = Xtrain.shape[0]\n\n# set up variables and parameters\na = cp.Variable(n)\nb = cp.Variable()\nX = cp.Parameter((m, n))\nY = cp.Parameter(m)\nlam = cp.Parameter(nonneg=True)\nalpha = cp.Parameter(nonneg=True)\n\n# set up objective\nloss = (1/m)*cp.sum(cp.square(X @ a + b - Y))\nreg = lam * cp.norm1(a) + alpha * cp.sum_squares(a)\nobjective = loss + reg\n\n# set up constraints\nconstraints = []\n\nprob = cp.Problem(cp.Minimize(objective), constraints)\n```\n\n\n```python\n# convert into pytorch layer in one line\nfit_lr = CvxpyLayer(prob, [X, Y, lam, alpha], [a, b])\n# this object is now callable with pytorch tensors\nfit_lr(Xtrain, ytrain, torch.zeros(1), torch.zeros(1))\n```\n\n\n\n\n (tensor([[-0.6603]], grad_fn=<_CvxpyLayerFnFnBackward>),\n tensor([0.1356], grad_fn=<_CvxpyLayerFnFnBackward>))\n\n\n\n\n```python\n# sweep over values of alpha, holding lambda=0, evaluating the gradient along the way\nalphas = np.logspace(-3, 2, 200)\ntest_losses = []\ngrads = []\nfor alpha_vals in alphas:\n alpha_tch = torch.tensor([alpha_vals], requires_grad=True)\n alpha_tch.grad = None\n a_tch, b_tch = fit_lr(Xtrain, ytrain, torch.zeros(1), alpha_tch)\n test_loss = (Xtest @ a_tch.flatten() + b_tch - ytest).pow(2).mean()\n test_loss.backward()\n test_losses.append(test_loss.item())\n grads.append(alpha_tch.grad.item())\n```\n\n\n```python\nplt.semilogx()\nplt.plot(alphas, test_losses, label='test loss')\nplt.plot(alphas, grads, label='analytical gradient')\nplt.plot(alphas[:-1], np.diff(test_losses) / np.diff(alphas), label='numerical 
gradient', linestyle='--')\nplt.legend()\nplt.xlabel(\"$\\\\alpha$\")\nplt.show()\n```\n\n\n```python\n# sweep over values of lambda, holding alpha=0, evaluating the gradient along the way\nlams = np.logspace(-3, 2, 200)\ntest_losses = []\ngrads = []\nfor lam_vals in lams:\n lam_tch = torch.tensor([lam_vals], requires_grad=True)\n lam_tch.grad = None\n a_tch, b_tch = fit_lr(Xtrain, ytrain, lam_tch, torch.zeros(1))\n test_loss = (Xtest @ a_tch.flatten() + b_tch - ytest).pow(2).mean()\n test_loss.backward()\n test_losses.append(test_loss.item())\n grads.append(lam_tch.grad.item())\n```\n\n\n```python\nplt.semilogx()\nplt.plot(lams, test_losses, label='test loss')\nplt.plot(lams, grads, label='analytical gradient')\nplt.plot(lams[:-1], np.diff(test_losses) / np.diff(lams), label='numerical gradient', linestyle='--')\nplt.legend()\nplt.xlabel(\"$\\\\lambda$\")\nplt.show()\n```\n\n\n```python\n# compute the gradient of the test loss wrt all the training data points, and plot\nplt.figure(figsize=(10, 6))\na_tch, b_tch = fit_lr(Xtrain, ytrain, torch.tensor([.05]), torch.tensor([.05]), solver_args={\"eps\": 1e-8})\ntest_loss = (Xtest @ a_tch.flatten() + b_tch - ytest).pow(2).mean()\ntest_loss.backward()\na_tch_test, b_tch_test = fit_lr(Xtest, ytest, torch.tensor([0.]), torch.tensor([0.]), solver_args={\"eps\": 1e-8})\nplt.scatter(Xtrain.detach().numpy(), ytrain.numpy(), s=20)\nplt.plot([-5, 5], [-3*a_tch.item() + b_tch.item(),3*a_tch.item() + b_tch.item()], label='train')\nplt.plot([-5, 5], [-3*a_tch_test.item() + b_tch_test.item(), 3*a_tch_test.item() + b_tch_test.item()], label='test')\nXtrain_np = Xtrain.detach().numpy()\nXtrain_grad_np = Xtrain.grad.detach().numpy()\nytrain_np = ytrain.numpy()\nfor i in range(Xtrain_np.shape[0]):\n plt.arrow(Xtrain_np[i], ytrain_np[i],\n -.1 * Xtrain_grad_np[i][0], 0.)\nplt.legend()\nplt.show()\n```\n\n\n```python\n# move the training data points in the direction of their gradients, and see the train line get closer to the test line\nplt.figure(figsize=(10, 6))\nXtrain_new = torch.from_numpy(Xtrain_np - .15 * Xtrain_grad_np)\na_tch, b_tch = fit_lr(Xtrain_new, ytrain, torch.tensor([.05]), torch.tensor([.05]), solver_args={\"eps\": 1e-8})\nplt.scatter(Xtrain_new.detach().numpy(), ytrain.numpy(), s=20)\nplt.plot([-5, 5], [-3*a_tch.item() + b_tch.item(),3*a_tch.item() + b_tch.item()], label='train')\nplt.plot([-5, 5], [-3*a_tch_test.item() + b_tch_test.item(), 3*a_tch_test.item() + b_tch_test.item()], label='test')\nplt.legend()\nplt.show()\n```\n", "meta": {"hexsha": "0f995e32ad4e9a8b026f1bd34955a5fe88597a6c", "size": 107008, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "examples/torch/tutorial.ipynb", "max_stars_repo_name": "RanganThaya/cvxpylayers", "max_stars_repo_head_hexsha": "483e9220ff34a8eea31d80f83a5cdc930925925d", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1287, "max_stars_repo_stars_event_min_datetime": "2019-10-25T21:19:48.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-30T16:35:11.000Z", "max_issues_repo_path": "examples/torch/tutorial.ipynb", "max_issues_repo_name": "RanganThaya/cvxpylayers", "max_issues_repo_head_hexsha": "483e9220ff34a8eea31d80f83a5cdc930925925d", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 100, "max_issues_repo_issues_event_min_datetime": "2019-10-28T15:38:19.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-18T14:23:16.000Z", "max_forks_repo_path": "examples/torch/tutorial.ipynb", "max_forks_repo_name": "RanganThaya/cvxpylayers", 
"max_forks_repo_head_hexsha": "483e9220ff34a8eea31d80f83a5cdc930925925d", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 115, "max_forks_repo_forks_event_min_datetime": "2019-10-28T16:57:31.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-22T18:20:48.000Z", "avg_line_length": 185.1349480969, "max_line_length": 27532, "alphanum_fraction": 0.8979889354, "converted": true, "num_tokens": 3300, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9390248242542283, "lm_q2_score": 0.8774767874818408, "lm_q1q2_score": 0.8239724861523003}} {"text": "# Automatic differentiation\n\nAutomatic differentiation (AD) is a set of techniques to calculate **exact** derivatives, numerically, in an automatic way. It is neither symbolic differentiation, nor something like finite differences.\n\nThere are two main methods: forward-mode AD and reverse-mode AD. \nEach has its strengths and weaknesses. Forward mode is significantly easier to implement.\n\n# Forward-mode AD\n\nLet's start by thinking about univariate functions $f: \\mathbb{R} \\to \\mathbb{R}$. We would like to calculate the derivative $f'(a)$ at some point $a \\in \\mathbb{R}$.\n\nWe know various rules about how to calculate such derivatives. For example, if we have already managed to calculate $f'(a)$ and $g'(a)$, we can calculate $(f+g)'(a)$ and $(f.g)'(a)$ as\n\n\n\\begin{align}\n(f+g)'(a) &= f'(a) + g'(a)\\\\\n(f.g)'(a) &= f'(a) \\, g(a) + f(a) \\, g'(a)\n\\end{align}\n\nWe also have the chain rule, which plays a crucial role:\n\n$$(f \\circ g)'(a) = f'(g(a)) \\, g'(a)$$\n\nWe see that in general we will need, for each function $f$, both the value $f(a)$ and the derivative $f'(a)$, and this is the *only* information that we require in order to calculate the first derivative of any combination of functions.\n\n## Jets\n\nFormally, we can think of a first-order Taylor polynomial of $f$, called the \n[**jet** of $f$](https://en.wikipedia.org/wiki/Jet_(mathematics) at $a$, denoted $J_a(f)$:\n\n$$(J_a(f))(x) := f(a) + x f'(a)$$\n\n[This can be thought of as representing the **set of all functions** with the same data $f(a)$ and $f'(a)$.]\n\nFormally, it is common to think of this as a \"dual number\", $f + \\epsilon f'$, that we can manipulate, following the rule that $\\epsilon^2 = 0$. (Cf. complex numbers, which have the same structure, but with $\\epsilon^2 = -1$.) E.g.\n\n$$(f + \\epsilon f') \\times (g + \\epsilon g') = f \\, g + \\epsilon (f' g + f g')$$\n\nshows how to define the multiplication of two jets.\n\n## Computer representation\n\nAs usual, we can represent a polynomial just by its degree and its coefficients, so we can define a Julia object as follows. 
We will leave the evaluation point $(a)$ as being implicit, although we could, of course, include it if desired.\n\n\n```julia\nimmutable Jet{T} <: Real\n val::T # value\n der::T # derivative # type \\prime to get \u2032\nend\n```\n\n\n```julia\nimport Base: +, *, -, convert, promote_rule\n```\n\n\n```julia\n+(f::Jet, g::Jet) = Jet(f.val + g.val, f.der + g.der)\n-(f::Jet, g::Jet) = Jet(f.val - g.val, f.der - g.der)\n\n*(f::Jet, g::Jet) = Jet(f.val*g.val, f.der*g.val + f.val*g.der)\n```\n\n\n\n\n * (generic function with 150 methods)\n\n\n\nWe can now define `Jet`s and manipulate them:\n\n\n```julia\nf = Jet(3, 4) # any function f such that f(a) = 3 and f'(a) = 4, or the set of all such functions\ng = Jet(5, 6) # any function g such that g(a) = 5 and g'(a) = 6\n\nf + g # calculate the value and derivative of (f + g) for any f and g in these sets\n```\n\n\n\n\n Jet{Int64}(8,10)\n\n\n\n\n```julia\nf * g\n```\n\n\n\n\n Jet{Int64}(15,38)\n\n\n\n\n```julia\nf * (g + g)\n```\n\n\n\n\n Jet{Int64}(30,76)\n\n\n\n## Performance\n\nIt seems like we must have introduced quite a lot of computational overhead by creating a relatively complex data structure, and associated methods, to manipulate pairs of numbers. Let's see how the performance is:\n\n\n```julia\nadd(a1, a2, b1, b2) = (a1+b1, a2+b2)\n```\n\n\n\n\n add (generic function with 1 method)\n\n\n\n\n```julia\nadd(1, 2, 3, 4)\n@time add(1, 2, 3, 4)\n```\n\n 0.000001 seconds (4 allocations: 176 bytes)\n\n\n\n\n\n (4,6)\n\n\n\n\n```julia\na = Jet(1, 2)\nb = Jet(3, 4)\n\nadd2(j1, j2) = j1 + j2\nadd2(a, b)\n@time add2(a, b)\n```\n\n 0.000002 seconds (5 allocations: 192 bytes)\n\n\n WARNING: Method definition add2(Any, Any) in module Main at In[157]:4 overwritten at In[158]:4.\n\n\n\n\n\n Jet{Int64}(4,6)\n\n\n\n\n```julia\n\n```\n\n\n```julia\n@code_native add(1, 2, 3, 4)\n```\n\n \t.section\t__TEXT,__text,regular,pure_instructions\n Filename: In[151]\n \tpushq\t%rbp\n \tmovq\t%rsp, %rbp\n Source line: 1\n \taddq\t%rcx, %rsi\n \taddq\t%r8, %rdx\n \tmovq\t%rsi, (%rdi)\n \tmovq\t%rdx, 8(%rdi)\n \tmovq\t%rdi, %rax\n \tpopq\t%rbp\n \tretq\n \tnopw\t%cs:(%rax,%rax)\n\n\n\n```julia\n@code_native add2(a, b)\n```\n\n \t.section\t__TEXT,__text,regular,pure_instructions\n Filename: In[158]\n \tpushq\t%rbp\n \tmovq\t%rsp, %rbp\n Source line: 4\n \tmovq\t(%rdx), %rax\n \tmovq\t8(%rdx), %rcx\n \taddq\t(%rsi), %rax\n \taddq\t8(%rsi), %rcx\n \tmovq\t%rax, (%rdi)\n \tmovq\t%rcx, 8(%rdi)\n \tmovq\t%rdi, %rax\n \tpopq\t%rbp\n \tretq\n \tnop\n\n\nWe see that there is only a slight overhead to do with moving the data around. The data structure itself has disappeared, and we basically have a standard Julia tuple.\n\n## Functions on jets: chain rule\n\nWe can also define functions of these objects using the chain rule. For example, if `f` is a jet representing the function $f$, then we would like `exp(f)` to be a jet representing the function $\\exp \\circ f$, i.e. with value $\\exp(f(a))$ and derivative $(\\exp \\circ f)'(a) = \\exp(f(a)) \\, f'(a)$:\n\n\n```julia\nimport Base: exp\n```\n\n\n```julia\nexp(f::Jet) = Jet(exp(f.val), exp(f.val) * f.der)\n```\n\n\n\n\n exp (generic function with 12 methods)\n\n\n\n\n```julia\nf\n```\n\n\n\n\n Jet{Int64}(3,4)\n\n\n\n\n```julia\nexp(f)\n```\n\n\n\n\n Jet{Float64}(20.085536923187668,80.34214769275067)\n\n\n\n## Conversion and promotion\n\nHowever, we can't do e.g. 
the following:\n\n\n```julia\n# 3 * f\n```\n\n[Warning: In Julia 0.5, you may need to restart the kernel after doing this for the following to work correctly.]\n\nIn order to get this to work, we need to hook into Julia's type promotion and conversion machinery.\nFirst, we specify how to promote a number and a `Jet`:\n\n\n```julia\npromote_rule{T<:Real,S}(::Type{Jet{S}}, ::Type{T}) = Jet{S}\n```\n\n\n\n\n promote_rule (generic function with 102 methods)\n\n\n\nSecond, we specify how to `convert` a (constant) number to a `Jet`. By e.g. $g = f+3$, we mean the function such that $g(x) = f(x) + 3$ for all $x$, i.e. $g = f + 3.\\mathbb{1}$, where $\\mathbb{1}$ is the constant function $\\mathbb{1}: x \\mapsto 1$.\n\nThus we think of a constant $c$ as the constant function $c \\, \\mathbb{1}$, with $c(a) = c$ and $c'(a) = 0$, which we encode as the following conversion:\n\n\n```julia\nconvert{T<:Union{AbstractFloat, Integer, Rational},S}(::Type{Jet{S}}, x::T) = Jet{S}(x, 0)\n```\n\n\n\n\n convert (generic function with 600 methods)\n\n\n\n\n```julia\nconvert(Jet{Float64}, 3.1)\n```\n\n\n\n\n Jet{Float64}(3.1,0.0)\n\n\n\n\n```julia\npromote(Jet(1,2), 3.0)\n```\n\n\n\n\n (Jet{Int64}(1,2),Jet{Int64}(3,0))\n\n\n\n\n```julia\npromote(Jet(1,2), 3.1)\n```\n\n\n```julia\nconvert(Jet{Float64}, 3.0)\n```\n\n\n\n\n Jet{Float64}(3.0,0.0)\n\n\n\nJulia's machinery now enables us to do what we wanted:\n\n\n```julia\nJet(1.1, 2.3) + 3\n```\n\n\n\n\n Jet{Float64}(4.1,2.3)\n\n\n\n## Calculating derivatives of arbitrary functions\n\nHow can we use this to calculate the derivative of an arbitrary function? For example, we wish to differentiate the function\n\n\n```julia\nh(x) = x^2 - 2\n```\n\n\n\n\n h (generic function with 1 method)\n\n\n\nat $a = 3$.\n\nWe think of this as a function of $x$, which itself we think of as the identity function $\\iota: x \\mapsto x$, so that\n\n$$h = \\iota^2 - 2.\\mathbb{1}$$\n\nWe represent the identity function as follows:\n\n\n```julia\na = 3\nx = Jet(a, 1) \n```\n\n\n\n\n Jet{Int64}(3,1)\n\n\n\nsince $\\iota'(a) = 1$ for any $a$.\n\nNow we simply evaluate the function `h` at `x`:\n\n\n```julia\nh(x)\n```\n\n\n\n\n Jet{Int64}(7,6)\n\n\n\nThe first component of the resulting `Jet` is the value $h(a)$, and the second component is the derivative, $h'(a)$. \n\nWe can codify this into a function as follows:\n\n\n```julia\nderivative(f, x) = f(Jet(x, one(x))).der\n```\n\n WARNING: Method definition derivative(Any, Any) in module Main at In[32]:1 overwritten at In[49]:1.\n\n\n\n\n\n derivative (generic function with 1 method)\n\n\n\n\n```julia\nderivative(x -> 3x^5 + 2, 2)\n```\n\n\n\n\n 240\n\n\n\nThis is capable of differentiating any function that involves functions whose derivatives we have specified by defining corresponding rules on `Jet` objects. For example,\n\n\n```julia\ny = [1.,2]\nk(x) = (y'* [x 2; 3 4] * y)[]\n```\n\n WARNING: Method definition k(Any) in module Main at In[29]:2 overwritten at In[34]:2.\n\n\n\n\n\n k (generic function with 1 method)\n\n\n\n\n```julia\nk(3)\n```\n\n\n\n\n 29.0\n\n\n\n\n```julia\nderivative(x->k(x), 10)\n```\n\n\n\n\n 1\n\n\n\nThis works since Julia is constructing the following object:\n\n\n```julia\n[Jet(3.0, 1.0) 2; 3 4]\n```\n\n\n\n\n 2\u00d72 Array{Jet{Float64},2}:\n Jet{Float64}(3.0,1.0) Jet{Float64}(2.0,0.0)\n Jet{Float64}(3.0,0.0) Jet{Float64}(4.0,0.0)\n\n\n\n# Higher dimensions\n\nHow can we extend this to higher dimensions? 
For example, we wish to differentiate the following function $f: \\mathbb{R}^2 \\to \\mathbb{R}$:\n\n\n```julia\nf1(x, y) = x^2 + x*y\n```\n\n\n\n\n f1 (generic function with 1 method)\n\n\n\nAs we learn in calculus, the partial derivative $\\partial f/\\partial x$ is the function obtained by fixing $y$, thinking of the resulting function as a function only of $x$, and then differentiating.\n\nSuppose that we wish to differentiate $f$ at $(a, b)$:\n\n\n```julia\na, b = 3.0, 4.0\n\nf1_x(x) = f1(x, b) # single-variable function \n```\n\n WARNING: Method definition f1_x(Any) in module Main at In[47]:3 overwritten at In[50]:3.\n\n\n\n\n\n f1_x (generic function with 1 method)\n\n\n\nSince we now have a single-variable function, we can differentiate it:\n\n\n```julia\nderivative(f1_x, a)\n```\n\n\n\n\n 10.0\n\n\n\nUnder the hood this is doing\n\n\n```julia\nf1(Jet(a, one(a)), b)\n```\n\n\n\n\n Jet{Float64}(21.0,10.0)\n\n\n\nSimilarly, we can differentiate with respect to $y$ by doing\n\n\n```julia\nf1(a, Jet(b, one(b)))\n```\n\n\n\n\n Jet{Float64}(21.0,3.0)\n\n\n\nNote that we must do **two separate calculations** to get the two partial derivatives. To calculate a gradient of a function $f:\\mathbb{R}^n \\to \\mathbb{R}$ thus requires $n$ separate calculations.\n\nForward-mode AD is implemented in a clean and efficient way in the `ForwardDiff.jl` package.\n\n# Syntax trees\n\n## Forward-mode\n\nTo understand what forward-mode AD is doing, and its name, it is useful to think of an expression as a **syntax tree**; cf. [this notebook](Syntax trees in Julia.ipynb).\n\nIf we label the nodes in the tree as $v_i$, then forward differentiation fixes a variable, e.g. $y$, and calculates $\\partial v_i / \\partial y$ for each $i$. If e.g. $v_1 = v_2 + v_3$, then we have\n\n$$\\frac{\\partial v_1}{\\partial y} = \\frac{\\partial v_2}{\\partial y} + \\frac{\\partial v_3}{\\partial y}.$$\n\nDenoting $v_1' := \\frac{\\partial v_1}{\\partial y}$, we have $v_1' = v_2' + v_3'$, so we need to calculate the derivatives and nodes lower down in the graph first, and propagate the information up. We start at $v_x' = 0$, since $\\frac{\\partial x}{\\partial y} = 0$, and $v_y' = 1$.\n\n\n## Reverse mode\n\nAn alternative method to calculate derivatives is to fix not the variable with which to differentiate, but *what it is* that we differentiate, i.e. to calculate the **adjoint**, $\\bar{v_i} := \\frac{\\partial f}{\\partial v_i}$, for each $i$. \n\nIf $f = v_1 + v_2$, with $v_1 = v_3 + v_4$ and $v_2 = v_3 + v_5$, then\n\n$$\\frac{\\partial f}{\\partial v_3} = \\frac{\\partial f}{\\partial v_1} \\frac{\\partial v_1}{\\partial v_3} + \\frac{\\partial f}{\\partial v_2} \\frac{\\partial v_2}{\\partial v_3},$$\n\ni.e.\n\n$$\\bar{v_3} = \\alpha_{13} \\, \\bar{v_1} + \\alpha_{2,3} \\, \\bar{v_2},$$\n\nwhere $\\alpha_{ij}$ are the coefficients specifying the relationship between the different terms. Thus, the adjoint information propagates **down** the graph, in **reverse** order, hence the name \"reverse-mode\".\n\nFor this reason, reverse mode is much harder to implement. However, it has the advantage that all derivatives $\\partial f / \\partial x_i$ are calculated in a *single pass* of the tree.\n\nJulia has en efficient implementation of reverse-mode AD in https://github.com/JuliaDiff/ReverseDiff.jl\n\n## Example of reverse mode\n\nReverse mode is difficult to implement in a general way, but easy to do by hand. e.g. 
consider the function\n\n$$f(x,y,z) = x \\, y - \\sin(z)$$\n\nWe decompose this into its tree with labelled nodes, corresponding to the following sequence of elementary operations:\n\n\n```julia\nff(x, y, z) = x*y - 2*sin(x*z)\n\nx, y, z = 1, 2, 3\n\nv\u2081 = x\nv\u2082 = y\nv\u2083 = z\nv\u2084 = v\u2081 * v\u2082\nv\u2085 = v\u2081 * v\u2083\nv\u2086 = sin(v\u2085)\nv\u2087 = v\u2084 - 2v\u2086 # f\n```\n\n WARNING: Method definition ff(Any, Any, Any) in module Main at In[137]:1 overwritten at In[139]:1.\n\n\n\n\n\n 1.7177599838802655\n\n\n\n\n```julia\nff(x, y, z)\n```\n\n\n\n\n 1.7177599838802655\n\n\n\nWe have decomposed the **forward pass** into elementary operations. We now proceed to calculate the adjoints. The difficulty is to *find which variables depend on the current variable under question*.\n\n\n```julia\nv\u0304\u2087 = 1\nv\u0304\u2086 = -2 # \u2202f/\u2202v\u2086 = \u2202v\u2087/\u2202v\u2086\nv\u0304\u2085 = v\u0304\u2086 * cos(v\u2085) # \u2202v\u2087/\u2202v\u2086 * \u2202v\u2086/\u2202v\u2085\nv\u0304\u2084 = 1 \nv\u0304\u2083 = v\u0304\u2085 * v\u2081 # \u2202f/\u2202v\u2083 = \u2202f/\u2202v\u2085 . \u2202v\u2085/\u2202v\u2083. # This gives \u2202f/\u2202z\nv\u0304\u2082 = v\u0304\u2084 * v\u2081\nv\u0304\u2081 = v\u0304\u2085*v\u2083 + v\u0304\u2084*v\u2082\n```\n\n\n\n\n 7.939954979602673\n\n\n\nThus, in a single pass we have calculated the gradient $\\nabla f(1, 2, 3)$:\n\n\n```julia\n(v\u0304\u2081, v\u0304\u2082, v\u0304\u2083)\n```\n\n\n\n\n (7.939954979602673,1,1.9799849932008908)\n\n\n\nLet's check that it's correct:\n\n\n```julia\nForwardDiff.gradient(x->ff(x...), [x,y,z])\n```\n\n\n\n\n 3-element Array{Float64,1}:\n 7.93995\n 1.0 \n 1.97998\n\n\n\n# Example: optimization\n\nAs an example of the use of AD, consider the following function that we wish to optimize:\n\n\n```julia\nx = rand(3)\ny = rand(3)\n\ndistance(W) = W*x - y\n```\n\n\n\n\n distance (generic function with 1 method)\n\n\n\n\n```julia\nusing ForwardDiff\n```\n\n\n```julia\nForwardDiff.jacobian(distance, rand(3,3))\n```\n\n\n\n\n 3\u00d79 Array{Float64,2}:\n 0.889986 0.0 0.0 0.855784 \u2026 0.659763 0.0 0.0 \n 0.0 0.889986 0.0 0.0 0.0 0.659763 0.0 \n 0.0 0.0 0.889986 0.0 0.0 0.0 0.659763\n\n\n\n\n```julia\nobjective(W) = (a = distance(W); dot(a, a))\n```\n\n WARNING: Method definition objective(Any) in module Main at In[60]:4 overwritten at In[66]:1.\n\n\n\n\n\n objective (generic function with 1 method)\n\n\n\n\n```julia\nW0 = rand(3, 3)\ngrad = ForwardDiff.gradient(objective, W0)\n```\n\n\n\n\n 3\u00d73 Array{Float64,2}:\n 2.14718 2.06467 1.59175 \n 0.023659 0.0227498 0.0175388\n 2.13258 2.05063 1.58092 \n\n\n\n\n```julia\n2*(W0*x-y)*x' == grad # LHS is the analytical derivative\n```\n\n\n\n\n true\n\n\n\n# Example: Interval arithmetic\n\nHow can we find roots of a function?\n\n\n```julia\nf2(x) = x^2 - 2\n```\n\n WARNING: Method definition f2(Any) in module Main at In[100]:1 overwritten at In[108]:1.\n\n\n\n\n\n f2 (generic function with 1 method)\n\n\n\n## Exclusion of domains\n\nAn idea is to *exclude* regions of $\\mathbb{R}$ by showing that they *cannot* contain a zero, by calculating the image (range) of the function over a given domain.\n\nThis is, in general, a difficult problem, but **interval arithmetic** provides a partial solution, by calculating an **enclosure** of the range, i.e. 
and interval that is guaranteed to contain the range.\n\n\n```julia\nusing ValidatedNumerics\n```\n\n\n```julia\nX = 3..4\n```\n\n\n\n\n [3, 4]\n\n\n\n\n```julia\ntypeof(X)\n```\n\n\n\n\n ValidatedNumerics.Interval{Float64}\n\n\n\nThis is a representation of the set $X = [3, 4] := \\{x\\in \\mathbb{R}: 3 \\le x \\le 4\\}$.\n\nWe can evaluate a Julia function on an `Interval` object `X`. The result is a new `Interval`, which is **guaranteed to contain the true image** $\\mathrm{range}(f; X) := \\{f(x): x \\in X \\}$. This is achieved by defining arithmetic operations on intervals in the correct way, e.g.\n\n$$X + Y = [x_1, x_2] + [y_1, y_2] = [x_1+y_1, x_2+y_2].$$\n\n\n```julia\nf2(X)\n```\n\n\n\n\n [7, 14]\n\n\n\nSince this result does not contain $0$, we have *proved* that $f$ has no zero in the domain $[3,4]$. We can even use semi-infinite intervals:\n\n\n```julia\nX1 = 3..\u221e # type \\infty\n```\n\n\n\n\n [3, \u221e]\n\n\n\n\n```julia\nf2(X1)\n```\n\n\n\n\n [7, \u221e]\n\n\n\n\n```julia\nX2 = -\u221e.. -3 # space is required\n```\n\n\n\n\n [-\u221e, -3]\n\n\n\n\n```julia\nf2(X2)\n```\n\n\n\n\n [7, \u221e]\n\n\n\nWe have thus exclued two semi-infinite regions, and have proved that any root *must* lie in $[-3,3]$, by two simple calculations. However,\n\n\n```julia\nf2(-3..3)\n```\n\n\n\n\n [-2, 7]\n\n\n\nWe cannot conclude anything from this, since the result is, in general, an over-estimate of the true range, which thus may or may not contain zero. We can proceed by bisecting the interval. E.g. after two bisections, we find\n\n\n```julia\nf2(-3.. -1.5)\n```\n\n\n\n\n [0.25, 7]\n\n\n\nso we have excluded another piece.\n\n## Proving existence of roots\n\nTo prove that there *does* exist a root, we need a different approach. It is a standard method to evaluate the function at two end-points of an interval:\n\n\n```julia\nf2(1), f2(2)\n```\n\n\n\n\n (-1,2)\n\n\n\nSince there is a sign change, there exists at least one root $x^*$ in the interval $[1,2]$, i.e. a point such that $f(x^*) = 0$.\n\nTo prove that it is unique, one method is to prove that $f_2$ is *monotone* in that interval, i.e. that the derivative has a unique sign. To do so, we need to evaluate the derivative *at every point in the interval*, which seems impossible.\n\nAgain, however, interval arithmetic easily gives an *enclosure* of this image. To show this, we need to evaluate the derivative using interval arithmetic.\n\nThanks to Julia's parametric types, we get **composability for free**: we can just substitute in an interval to `ForwardDiff` or `Jet`, and it works:\n\n\n```julia\nForwardDiff.derivative(f2, 1..2)\n```\n\n\n\n\n [2, 4]\n\n\n\nAgain, the reason for this is that Julia creates the object\n\n\n```julia\nJet(x, one(x))\n```\n\n\n\n\n Jet{ValidatedNumerics.Interval{Float64}}([1, 2],[1, 1])\n\n\n\nSince an enclosure of the derivative is the interval $[2, 4]$ (and, in fact, in this case this is the true image, but there is no way to know this other than with an analytical calculation), we have **proved** that the image of the derivative function $f'$ over the interval $X = [1,2]$ does *not* contain zero, and hence that the image is monotone.\n\nTo actually find the root within this interval, we can use the [Newton interval method](Interval Newton.ipynb). 
In general, we should not expect to be able to use intervals in standard numerical methods designed for floats; rather, we will need to modify the numerical method to take *advantage* of intervals.\n\nThe Newton interval method can find, in a guaranteed way, *all* roots of a function in a given interval (or tell you if when it is unable to to so, for example if there are double roots). Although people think that finding roots of a general function is difficult, this is basically a solved problem using these methods.\n", "meta": {"hexsha": "7539f3bd1fcc44805d52d376151e902a8380d0e4", "size": 43988, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "lectures/other/Automatic differentiation and applications.ipynb", "max_stars_repo_name": "nvngpt31/18S096", "max_stars_repo_head_hexsha": "a634b220bb65ab2a6e10f42b18ffbd9d2de98b2c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 107, "max_stars_repo_stars_event_min_datetime": "2019-02-20T01:38:18.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-21T03:01:03.000Z", "max_issues_repo_path": "lectures/other/Automatic differentiation and applications.ipynb", "max_issues_repo_name": "nvngpt31/18S096", "max_issues_repo_head_hexsha": "a634b220bb65ab2a6e10f42b18ffbd9d2de98b2c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "lectures/other/Automatic differentiation and applications.ipynb", "max_forks_repo_name": "nvngpt31/18S096", "max_forks_repo_head_hexsha": "a634b220bb65ab2a6e10f42b18ffbd9d2de98b2c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 25, "max_forks_repo_forks_event_min_datetime": "2019-04-16T20:43:02.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-24T22:18:06.000Z", "avg_line_length": 21.6263520157, "max_line_length": 355, "alphanum_fraction": 0.5047285623, "converted": true, "num_tokens": 5984, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9207896693699844, "lm_q2_score": 0.8947894738180211, "lm_q1q2_score": 0.8239129037526379}} {"text": "\n\n# Lab 4: Control flow-Loops\n\nIn this notebook, we propose and solve some exercises about loops.\n\n**As a good programming practice, our test cases should ensure that all branches of the code are executed at least once.**\n\n## List of exercises\n\n\n\n1. Write a program that prints out the numbers from 0 to 10 (only even numbers). Use both: while and for loops.\n\n\n* Input: no input\n* Expected output:\n\n```\n0\n2\n4\n6\n8\n10\n```\n\n\n```\ni = 0\nwhile i <= 10:\n print(i)\n i = i + 2\n \nfor i in range(0,12,2):\n print(i)\n\n\n```\n\n2. Write a program that prints out the numbers from 0 to 10 (only odd numbers). Use both: while and for loops.\n\n\n* Input: no input\n* Expected output:\n\n```\n1\n3\n5\n7\n9\n```\n\n\n```\ni = 1\nwhile i < 11:\n print(i)\n i = i + 2\n \n\nfor i in range(1,11,2):\n print(i)\n```\n\n3. Write a program that prints out the numbers from 10 to 0 (only odd numbers). Use both: while and for loops.\n\n\n* Input: no input\n* Expected output:\n\n```\n9\n7\n5\n3\n1\n```\n\n\n```\ni = 9\nwhile i >= 0:\n print(i)\n i = i - 2\n \n\nfor i in range(9,0,-2):\n print(i)\n```\n\n4. Write a program to calculate the factorial of an integer number. 
Use both: while and for loops.\n * factorial (n) = n * factorial (n-1) \n\n* Input: an integer number, 5\n* Expected output:\n\n```\n120\n```\n\n\n```\nnumber = 5\nfact = 1\n\nif number < 0:\n fact = -1\nelif number == 0 or number == 1:\n fact = 1\nelse:\n for i in range(1,number+1):\n fact *= i\n\nprint(fact)\n\n\n```\n\n5. Write a program to calculate the exponentiation of a number with base b, and exponent n, both: while and for loops.\n * $base^{exponent} = a * a* a...*a$\n\n* Input: base 2, exponent 3\n* Expected output:\n\n```\n8\n```\n\n\n```\nbase = 2\nexponent = 3\nmipow = 1\n\nif exponent > 0:\n for i in range(0,exponent):\n mipow *= base\n\nprint(mipow)\n```\n\n6. Given an integer number, n, make the sum of all the numbers from 0 to n.\n\n\n* Input: 5\n* Expected output:\n\n```\n15\n```\n\n\n```\nn = 5\nadd_n = 0\nfor i in range(0,n+1,1):\n add_n += i\n\nprint(add_n)\n```\n\n7. Given an integer number, n, print a cross of radius n as follows.\n\n\n* Input: 9\n* Expected output:\n\n```\n * . . . . . . . *\n . * . . . . . * .\n . . * . . . * . .\n . . . * . * . . .\n . . . . * . . . .\n . . . * . * . . .\n . . * . . . * . .\n . * . . . . . * .\n * . . . . . . . *\n```\n\n\n\n\n```\nn = 9\nfor i in range(0, n, 1):\n for j in range (0, n, 1):\n if (i==j or n-i-1 == j):\n print(\" *\", end=\"\")\n else:\n print(\" .\", end=\"\")\n print(\"\") \n```\n\n8. Given an integer number, n, print a wedge of stars as follows.\n\n\n* Input: 5\n* Expected output:\n\n```\n*****\n****\n***\n**\n*\n```\n\n\n\n\n\n\n```\nn_stars = 5\nfor i in range (n_stars, 0, -1):\n for j in range (0, i, 1):\n print(\"*\", end=\"\")\n print(\"\")\n\n#Pythonic\nfor i in range (n_stars, 0, -1):\n print(\"*\"*i)\n```\n\n9. Given an integer number, n, print a wedge of stars as follows.\n\n\n* Input: 5\n* Expected output:\n\n```\n*\n**\n***\n****\n*****\n```\n\n\n```\nn_stars = 5\nfor i in range (0, n_stars+1, 1):\n print(\"*\"*i)\n```\n\n10. Make a program to display the multiplication table (1-10).\n\n\n* Input: no input\n* Expected output:\n\n```\n1\t 2\t 3\t 4\t 5\t 6\t 7\t 8\t 9\t10\t\n2\t 4\t 6\t 8\t 10\t12\t14\t16\t18\t20\t\n3\t 6\t 9\t 12\t15\t18\t21\t24\t27\t30\t\n4\t 8\t 12\t16\t20\t24\t28\t32\t36\t40\t\n5\t 10\t15\t20\t25\t30\t35\t40\t45\t50\t\n6\t 12\t18\t24\t30\t36\t42\t48\t54\t60\t\n7\t 14\t21\t28\t35\t42\t49\t56\t63\t70\t\n8\t 16\t24\t32\t40\t48\t56\t64\t72\t80\t\n9\t 18\t27\t36\t45\t54\t63\t72\t81\t90\t\n10\t 20\t30\t40\t50\t60\t70\t80\t90\t100\n```\n\n\n```\nfor i in range(1,11,1):\n for j in range (1, 11, 1):\n print(i*j, end = \"\")\n print(\"\\t\", end = \"\")\n print(\"\")\n\n```\n\n 1\t2\t3\t4\t5\t6\t7\t8\t9\t10\t\n 2\t4\t6\t8\t10\t12\t14\t16\t18\t20\t\n 3\t6\t9\t12\t15\t18\t21\t24\t27\t30\t\n 4\t8\t12\t16\t20\t24\t28\t32\t36\t40\t\n 5\t10\t15\t20\t25\t30\t35\t40\t45\t50\t\n 6\t12\t18\t24\t30\t36\t42\t48\t54\t60\t\n 7\t14\t21\t28\t35\t42\t49\t56\t63\t70\t\n 8\t16\t24\t32\t40\t48\t56\t64\t72\t80\t\n 9\t18\t27\t36\t45\t54\t63\t72\t81\t90\t\n 10\t20\t30\t40\t50\t60\t70\t80\t90\t100\t\n\n\n11. Write a program to detect if a number is a prime number.\n\n\n* Input: 5, 8\n* Expected output:\n\n\n\n```\nThe number 5 is prime.\nThe number 8 is not prime.\n```\n\n\n\n\n```\nn = 1\nn_divisors = 1\ndivisor = 2\nwhile divisor < n and n_divisors <= 2:\n if n % divisor == 0:\n n_divisors = n_divisors + 1\n divisor = divisor + 1\n\nif n_divisors > 2:\n print(\"The number {} is not prime.\".format(n))\nelse:\n print(\"The number {} is prime.\".format(n)) \n```\n\n12. 
Write a program to sum all odd numbers between n and 100 (included).\n\n\n* Input: 11\n* Expected output:\n\n```\nThe sum between 11-100 is 4995.\n```\n\n\n\n```\ntop = 100\nn = 11\nadd_top = 0\nfor value in range(11, top+1):\n add_top = add_top + value\n\nprint(\"The sum between {}-{} is {}\".format(n,top,add_top))\n```\n\n The sum between 11-100 is 4995\n\n\n13. Write a program to show the square of the first 10 numbers.\n\n\n* Input: no input\n* Expected output:\n\n```\nThe square of 0 is 0\nThe square of 1 is 1\nThe square of 2 is 4\nThe square of 3 is 9\nThe square of 4 is 16\nThe square of 5 is 25\nThe square of 6 is 36\nThe square of 7 is 49\nThe square of 8 is 64\nThe square of 9 is 81\nThe square of 10 is 100\n```\n\n\n```\nfor i in range(0, 11):\n print(\"The square of {} is {}\".format(i,i**2))\n```\n\n The square of 0 is 0\n The square of 1 is 1\n The square of 2 is 4\n The square of 3 is 9\n The square of 4 is 16\n The square of 5 is 25\n The square of 6 is 36\n The square of 7 is 49\n The square of 8 is 64\n The square of 9 is 81\n The square of 10 is 100\n\n\n14. Write a program to show the Fibonnacci sequence of a given number n.\n\n\t\tfibonacci(n) = \n\t\t\t\t\t\tfibonacci (0) = 0\n\t\t\t\t\t\tfibonacci (1) = 1\n\t\t\t\t\t\tfibonacci (n) = fibonacci(n-1) + fibonacci (n-2)\n\n\n* Input: a positive number n\n* Expected output:\n\n```\n0\n1\n1\n2\n3\n```\n\n\n\n\n```\nnumbersToGenerate=5\nfn2 = 0\nfn1 = 1\nprint(fn2)\nprint(fn1)\nfor i in range(2, numbersToGenerate):\n fcurrent = fn1 + fn2\n temp = fn1\n fn1 = fcurrent\n fn2 = temp\n print(fn1)\n```\n\n15. Write a program to check if an integer number is a palindrome.\n\n* Input: 121\n* Expected output:\n\n```\nThe number 121 is palindrome: True\n```\n\n\n\n\n```\nnumber = 121\npalindrome = number\nreverse = 0\nwhile (palindrome != 0):\n remainder = palindrome % 10\n reverse = reverse * 10 + remainder\n palindrome = palindrome // 10\n \n\nprint(\"The number \"+str(number)+\" is palindrome: \"+str((number==reverse)))\n```\n\n The number 121 is palindrome: True\n\n\n16. Write a program to check if an integer number of three digit is an Armstrong number.\n * An Armstrong number of three digit is a number whose sum of cubes of its digit is equal to its number.\"\n\n* Input: $153 = 1^3+5^3+3^3$ or $1+125+27=153$\n* Expected output:\n\n```\nThe number 153 is an Armstrong number.\n```\n\n\n\n\n```\nnumber = 153\ninitial_number = number\nresult = 0\nif number>=100 and number <= 999:\n while number != 0:\n remainder = number % 10\n result = result + remainder**3\n number = number // 10\n if result == initial_number:\n print(\"The number {} is an Armstrong number.\".format(result))\n else:\n print(\"The number {} is NOT an Armstrong number.\".format(result))\nelse:\n print(\"Number is not valid.\")\n```\n\n17. 
Write a program to show the first N prime numbers.\n\n* Input: N = 10\n* Expected output:\n\n```\nThe number 1 is prime.\nThe number 2 is prime.\nThe number 3 is prime.\nThe number 5 is prime.\nThe number 7 is prime.\nThe number 11 is prime.\nThe number 13 is prime.\nThe number 17 is prime.\nThe number 19 is prime.\nThe number 23 is prime.\n```\n\n\n```\ncounter = 0\nMAX_PRIMES = 10\nn = 1\n\nwhile counter < MAX_PRIMES:\n n_divisors = 1\n divisor = 1\n #Loop if n is prime\n while divisor < n and n_divisors <= 2:\n if n % divisor == 0:\n n_divisors = n_divisors + 1\n divisor = divisor + 1\n #End loop if n is prime\n #Step second loop\n if n_divisors <= 2:\n print(\"The number {} is prime.\".format(n)) \n counter = counter + +1\n \n n = n + 1\n```\n\n18. Write a program to calculate the combinatorial number\n\n$\\binom{m}{n} = \\frac{m!}{n! (m-n)!}$\n\n* Input: m = 10, n = 5\n* Expected output:\n\n```\n252\n```\n\n\n```\nm = 10\nn = 5\nfactorialmn = 1\nfactorialn = 1\nfactorialm = 1\n\nif (m < n):\n print(\"M must be >= n\");\nelse:\n #Factorial m\n if m == 0 or m == 1:\n factorialm = 1\n else:\n i = m\n while i > 0:\n factorialm = i * factorialm\n i = i - 1\n #Factorial n\n if n == 0 or n == 1:\n factorialn = 1\n else:\n i = n\n while i>0 :\n factorialn = i * factorialn\n i = i - 1\n #Factorial m-n\n if (m - n == 0 or m - n == 1):\n factorialmn = 1\n else:\n i = m - n\n while i>0 :\n factorialmn = i * factorialmn\n i = i - 1 \n print(\"The result is: \",(factorialm / (factorialn * factorialmn)))\n```\n\n19. Write a program to print a Christmas tree of 15 base stars.\n\n* Input: base_stars = 15\n* Expected output:\n\n\n\n```\n * \n *** \n ***** \n ******* \n ********* \n *********** \n ************* \n *** \n *** \n *** \n```\n\n\n\n\n```\nbase_stars = 15\nhalf_blank_spaces = 0;\nfor i in range(1,base_stars, 2):\n half_blank_spaces = (base_stars-i) // 2\n for j in range(0, half_blank_spaces):\n print(\" \",end=\"\")\n for k in range(0, i):\n print(\"*\",end=\"\")\n for j in range(0, half_blank_spaces):\n print(\" \",end=\"\")\n print(\"\")\n\nhalf_blank_spaces = (base_stars//2)-1;\nfor i in range(3):\n for j in range(0,half_blank_spaces):\n print(\" \",end=\"\")\n for k in range(0, 3):\n print(\"*\",end=\"\")\n for j in range(0, half_blank_spaces):\n print(\" \",end=\"\")\n print(\"\")\n```\n\n * \n *** \n ***** \n ******* \n ********* \n *********** \n ************* \n *** \n *** \n *** \n\n\n20. Write a program to print an * in the upper diagonal of an implicit matrix of size n.\n\n* Input: n = 3\n* Expected output:\n\n\n\n```\n* * * \n. * * \n. . * \n```\n\n\n\n\n```\nn = 3\nfor i in range(1,n+1):\n for j in range (1,n+1):\n if (i <= j):\n print(\"*\", end = \"\")\n else:\n print(\".\", end = \"\")\n print(\"\")\n```\n\n21. Write a program to print an * in the lower diagonal of an implicit matrix of size n.\n\n* Input: n = 3\n* Expected output:\n\n\n\n```\n* . . \n* * . \n* * * \n```\n\n\n\n```\nn = 3\nfor i in range(1,n+1):\n for j in range (1,n+1):\n if (i >= j):\n print(\"*\", end = \"\")\n else:\n print(\".\", end = \"\")\n print(\"\")\n```\n\n22. Write a program to print a diamond of radius n.\n\n* Input: n = 7\n* Expected output:\n\n\n\n```\n . . . * . . .\n . . * * * . .\n . * * * * * .\n * * * * * * *\n . * * * * * .\n . . * * * . .\n . . . * . . 
.\n```\n\n\n\n\n\n```\nn = 7\nmid = n//2\nlmin=0\nrmax=0\nif (n % 2 != 0):\n for i in range(0,n):\n if i <= mid:\n lmin=mid-i\n rmax=mid+i\n else:\n lmin=mid-(n-i)+1\n rmax=mid+(n-i)-1\n for j in range (0,n):\n if j>=lmin and j<=rmax:\n print(\" *\", end = \"\")\n else:\n print(\" .\", end = \"\")\n print(\"\")\n```\n\n23. Write a program that displays a table of size (n x n) in which each cell will have * if i is a divisor of j or \"\" otherwise.\n\n* Input: n = 10\n* Expected output:\n\n\n\n\n```\n* * * * * * * * * * 1\n* * * * * * 2\n* * * * 3\n* * * * 4\n* * * 5\n* * * * 6\n* * 7\n* * * * 8\n* * * 9\n* * * * 10\n\n```\n\n\n\n\n```\nN = 10\nfor i in range (1, N+1):\n for j in range (1, N+1):\n if (i%j == 0 or j%i == 0):\n print(\" *\", end = \"\")\n else:\n print(\" \", end = \"\")\n print(\" \",i)\n print(\"\")\n```\n\n24. Write a program to display a menu with 3 options (to say hello in English, Spanish and French) and to finish, the user shall introduce the keyword \"quit\". If any other option is introduced, the program shall display that the input value is not a valid option.\n\n* Input: test the options\n* Expected output:\n\n\n\n```\n----------MENU OPTIONS----------\n1-Say Hello!\n2-Say \u00a1Hola!\n3-Say Salut!\n> introduce an option or quit to exit...\n```\n\n\n\n\n```\noption = \"1\"\nwhile option != \"quit\":\n print(\"----------MENU OPTIONS----------\")\n print(\"1-Say Hello!\")\n print(\"2-Say \u00a1Hola!\")\n print(\"3-Say Salut!\")\n option = input(\"> introduce an option or quit to exit...\")\n if option == \"1\":\n print(\"Hello!\")\n elif option == \"2\":\n print(\"\u00a1Hola!\")\n elif option == \"3\":\n print(\"Salut!\")\n elif option == \"quit\":\n print(\"...finishing...\")\n else:\n print(\"Not a valid option: \", option)\n\n\n```\n\n25. The number finder: write a program that asks the user to find out a target number. If the input value is less than the target number, the program shall say \"the target value is less than X\", otherwise, the program shall say \"the target value is greater than X. The program shall finish once the user finds out the target number.\n\n* Input: target = 10\n* Expected output:\n\n\n\n```\nIntroduce a number: 3\nThe target value is greater than 3\nIntroduce a number: 5\nThe target value is greater than 5\nIntroduce a number: 12\nThe target value is less than 12\nIntroduce a number: 10\n```\n\n\n\n\n```\ntarget = 10\nfound = False\nwhile (not found):\n value = int(input(\"Introduce a number: \"))\n if target < value:\n print(\"The target value is less than \", value)\n elif target > value:\n print(\"The target value is greater than \", value)\n else:\n found = True\n\nif found:\n print(\"You have found out the target value!\")\n```\n\n26. 
Modify the program in 20 to give only 5 attempts.\n\n* Input: target = 10\n* Expected output:\n\n\n\n```\nIntroduce a number: 6\nThe target value is greater than 6\nIntroduce a number: 7\nThe target value is greater than 7\nIntroduce a number: 12\nThe target value is less than 12\nIntroduce a number: 9\nThe target value is greater than 9\nIntroduce a number: 11\nThe target value is less than 11\nYou have consumed the max number of attempts.\n```\n\n\n\n\n```\ntarget = 10\nfound = False\nattempts = 0\nMAX_ATTEMPTS = 5\nwhile (not found and attempts < MAX_ATTEMPTS):\n value = int(input(\"Introduce a number: \"))\n if target < value:\n print(\"The target value is less than \", value)\n elif target > value:\n print(\"The target value is greater than \", value)\n else:\n found = True\n attempts = attempts + 1\n\nif found:\n print(\"You have found out the target value!\")\nelse:\n print(\"You have consumed the max number of attempts.\")\n```\n\n27. Write a program to print the following pattern:\n\n\n\n```\n1\n22\n333\n4444\n55555\n```\n\n\n\n\n```\nfor i in range (1,6):\n print(str(i)*i)\n```\n\n28. Write a program to find greatest common divisor (GCD) of two numbers.\n\n* Input: x = 54, y = 24\n* Expected output:\n* Tip: try to improve the algorithm following the strategies here: https://en.wikipedia.org/wiki/Greatest_common_divisor\n\n\n\n```\nThe GCD of 54 and 24 is: 6\n```\n\n\n\n\n```\n#Python version in the math library\ndef gcd(a, b):\n \"\"\"Calculate the Greatest Common Divisor of a and b.\n\n Unless b==0, the result will have the same sign as b (so that when\n b is divided by it, the result comes out positive).\n \"\"\"\n while b:\n a, b = b, a%b\n return a\n```\n\n\n```\nx = 54\ny = 24\ngcd = 1\n#find x divisor\ncurrent_divisor = 1\nif x != 0 and y != 0:\n if x > y:\n top = y\n else:\n top = x\n while (current_divisor < top):\n if (x % current_divisor == 0) and (y % current_divisor == 0):\n gcd = current_divisor\n current_divisor = current_divisor + 1\n\n print(\"The GCD of {} and {} is: {}\".format(x,y,gcd))\nelse:\n print(\"The input must be different to 0.\")\n \n \n```\n\n29. Write a program that counts the number of primer numbers between 1-MAX.\n\n* Input: MAX = 100\n* Expected output:\n\n\n```\nThe number of primer numbers between 1 and 100 is 26.\n```\n\n\n\n\n```\nMAX = 100\nn_primes = 0\nn = 1\n\nwhile n < MAX:\n n_divisors = 1\n divisor = 1\n #Loop if n is prime\n while divisor < n and n_divisors <= 2:\n if n % divisor == 0:\n n_divisors = n_divisors + 1\n divisor = divisor + 1\n #End loop if n is prime\n #Step second loop\n if n_divisors <= 2:\n n_primes = n_primes + 1\n \n n = n + 1\n\nprint(\"The number of primer numbers between 1 and {} is {}.\".format(MAX, n_primes)) \n```\n\n The number of primer numbers between 1 and 100 is 26.\n\n\n30. Write a program that sums all primer numbers between 1-MAX.\n\n* Input: MAX = 100\n* Expected output:\n\n\n```\nThe sum of all primer numbers between 1 and 100 is 1061.\n```\n\n\n```\nMAX = 100\nsum_primes = 0\nn = 1\n\nwhile n < MAX:\n n_divisors = 1\n divisor = 1\n #Loop if n is prime\n while divisor < n and n_divisors <= 2:\n if n % divisor == 0:\n n_divisors = n_divisors + 1\n divisor = divisor + 1\n #End loop if n is prime\n #Step second loop\n if n_divisors <= 2:\n sum_primes = sum_primes + n\n \n n = n + 1\n\nprint(\"The sum of all primer numbers between 1 and {} is {}.\".format(MAX, sum_primes)) \n```\n\n The sum of all primer numbers between 1 and 100 is 1061.\n\n\n31. 
Write a program to approximate the value of $e$ using the Taylor series for $e^x$.\n\n$\\begin{align}\n \\sum_{n=0}^\\infty \\frac{x^n}{n!} &= \\frac{x^0}{0!} + \\frac{x^1}{1!} + \\frac{x^2}{2!} + \\frac{x^3}{3!} + \\frac{x^4}{4!} + \\frac{x^5}{5!}+ \\cdots \n \\end{align}$\n\n * Input: x = 1, MAX (N) = 100\n * Expected output: \n\n\n```\nThe value of e is 2.7182818284590455\n```\n\n\n\n\n```\nsum_e = 0\nMAX = 100\nx = 1\nfor n in range (1, MAX+1):\n factorial_n = 1\n for i in range(1,n):\n factorial_n = i * factorial_n\n sum_e = sum_e + (1/factorial_n)\n\nprint(\"The value of e is {}\".format(sum_e))\n```\n\n32. Write a program to detect whether a number is a perfect number.\n\n* A number is perfect if the addition of all positive divisors is equal to the number.\n\n* Input: n = 6\n* Expected output:\n\n\n```\nThe number 6 is perfect.\n```\n\n\n\n\n```\nn = 6\nperfect = 0\nfor i in range (1,n):\n if n%i == 0:\n perfect = perfect + i\n\nif perfect == n:\n print(\"The number {} is perfect.\".format(n))\nelse:\n print(\"The number {} is NOT perfect.\".format(n))\n```\n", "meta": {"hexsha": "c467d4681754576e2140e865d31b4b6d09f9c534", "size": 44210, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "intro-programming/Lab_4_Control_Flow_Loops.ipynb", "max_stars_repo_name": "chemaar/python-programming-course", "max_stars_repo_head_hexsha": "142be4008919874f61a8ed036dc0dca0025475f1", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2020-03-26T08:18:14.000Z", "max_stars_repo_stars_event_max_datetime": "2020-07-07T13:50:53.000Z", "max_issues_repo_path": "intro-programming/Lab_4_Control_Flow_Loops.ipynb", "max_issues_repo_name": "chemaar/python-programming-course", "max_issues_repo_head_hexsha": "142be4008919874f61a8ed036dc0dca0025475f1", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-05-11T18:31:53.000Z", "max_issues_repo_issues_event_max_datetime": "2021-05-11T18:31:53.000Z", "max_forks_repo_path": "intro-programming/Lab_4_Control_Flow_Loops.ipynb", "max_forks_repo_name": "chemaar/python-programming-course", "max_forks_repo_head_hexsha": "142be4008919874f61a8ed036dc0dca0025475f1", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 26.2062833432, "max_line_length": 348, "alphanum_fraction": 0.3860212622, "converted": true, "num_tokens": 5976, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.894789454880027, "lm_q2_score": 0.9207896845856297, "lm_q1q2_score": 0.8239128999295277}} {"text": "```python\nimport sympy as sp\n```\n\n\n```python\nfrom sympy.physics.vector import init_vprinting\ninit_vprinting(use_latex='mathjax', pretty_print=False)\n```\n\n\n```python\nfrom IPython.display import Image\nImage('fig/2rp_new.png', width=300)\n```\n\n\n```python\nfrom sympy.physics.mechanics import dynamicsymbols\n```\n\n\n```python\ntheta1, theta2, l1, l2, theta, alpha, a, d = dynamicsymbols('theta1 theta2 l1 l2 theta alpha a d')\ntheta1, theta2, l1, l2, theta, alpha, a, d \n```\n\n\n\n\n$\\displaystyle \\left( \\theta_{1}, \\ \\theta_{2}, \\ l_{1}, \\ l_{2}, \\ \\theta, \\ \\alpha, \\ a, \\ d\\right)$\n\n\n\n\n```python\nrot = sp.Matrix([[sp.cos(theta), -sp.sin(theta)*sp.cos(alpha), sp.sin(theta)*sp.sin(alpha)],\n [sp.sin(theta), sp.cos(theta)*sp.cos(alpha), -sp.cos(theta)*sp.sin(alpha)],\n [0, sp.sin(alpha), sp.cos(alpha)]])\n\ntrans = sp.Matrix([a*sp.cos(theta), a*sp.sin(theta),d])\n\nlast_row = sp.Matrix([[0, 0, 0, 1]])\n\nm = sp.Matrix.vstack(sp.Matrix.hstack(rot, trans), last_row)\nm\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}\\cos{\\left(\\theta \\right)} & - \\sin{\\left(\\theta \\right)} \\cos{\\left(\\alpha \\right)} & \\sin{\\left(\\alpha \\right)} \\sin{\\left(\\theta \\right)} & a \\cos{\\left(\\theta \\right)}\\\\\\sin{\\left(\\theta \\right)} & \\cos{\\left(\\alpha \\right)} \\cos{\\left(\\theta \\right)} & - \\sin{\\left(\\alpha \\right)} \\cos{\\left(\\theta \\right)} & a \\sin{\\left(\\theta \\right)}\\\\0 & \\sin{\\left(\\alpha \\right)} & \\cos{\\left(\\alpha \\right)} & d\\\\0 & 0 & 0 & 1\\end{matrix}\\right]$\n\n\n\n\n```python\n# Transformation: frame '0' to '1'\n\nm01 = m.subs({alpha:0, a:l1, theta: theta1, d:0})\nm01\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}\\cos{\\left(\\theta_{1} \\right)} & - \\sin{\\left(\\theta_{1} \\right)} & 0 & l_{1} \\cos{\\left(\\theta_{1} \\right)}\\\\\\sin{\\left(\\theta_{1} \\right)} & \\cos{\\left(\\theta_{1} \\right)} & 0 & l_{1} \\sin{\\left(\\theta_{1} \\right)}\\\\0 & 0 & 1 & 0\\\\0 & 0 & 0 & 1\\end{matrix}\\right]$\n\n\n\n\n```python\n# Transformation: frame '1' to '2'\n\nm12 = m.subs({alpha:0, a:l2, theta: theta2, d:0})\nm12\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}\\cos{\\left(\\theta_{2} \\right)} & - \\sin{\\left(\\theta_{2} \\right)} & 0 & l_{2} \\cos{\\left(\\theta_{2} \\right)}\\\\\\sin{\\left(\\theta_{2} \\right)} & \\cos{\\left(\\theta_{2} \\right)} & 0 & l_{2} \\sin{\\left(\\theta_{2} \\right)}\\\\0 & 0 & 1 & 0\\\\0 & 0 & 0 & 1\\end{matrix}\\right]$\n\n\n\n\n```python\n# Homogenous transformation: frame '0' to '2'\n\nm02 = (m01*m12)\nm02\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}- \\sin{\\left(\\theta_{1} \\right)} \\sin{\\left(\\theta_{2} \\right)} + \\cos{\\left(\\theta_{1} \\right)} \\cos{\\left(\\theta_{2} \\right)} & - \\sin{\\left(\\theta_{1} \\right)} \\cos{\\left(\\theta_{2} \\right)} - \\sin{\\left(\\theta_{2} \\right)} \\cos{\\left(\\theta_{1} \\right)} & 0 & l_{1} \\cos{\\left(\\theta_{1} \\right)} - l_{2} \\sin{\\left(\\theta_{1} \\right)} \\sin{\\left(\\theta_{2} \\right)} + l_{2} \\cos{\\left(\\theta_{1} \\right)} \\cos{\\left(\\theta_{2} \\right)}\\\\\\sin{\\left(\\theta_{1} \\right)} \\cos{\\left(\\theta_{2} \\right)} + \\sin{\\left(\\theta_{2} \\right)} \\cos{\\left(\\theta_{1} \\right)} & - \\sin{\\left(\\theta_{1} \\right)} \\sin{\\left(\\theta_{2} \\right)} + \\cos{\\left(\\theta_{1} \\right)} \\cos{\\left(\\theta_{2} \\right)} & 0 & 
l_{1} \\sin{\\left(\\theta_{1} \\right)} + l_{2} \\sin{\\left(\\theta_{1} \\right)} \\cos{\\left(\\theta_{2} \\right)} + l_{2} \\sin{\\left(\\theta_{2} \\right)} \\cos{\\left(\\theta_{1} \\right)}\\\\0 & 0 & 1 & 0\\\\0 & 0 & 0 & 1\\end{matrix}\\right]$\n\n\n\n\n```python\n# simplify the transformation as\n\nmbee = sp.Matrix([[m02[0,0].simplify(), m02[0,1].simplify(), sp.trigsimp(m02[0,3].simplify())],\n [m02[1,0].simplify(), m02[1,1].simplify(), sp.trigsimp(m02[1,3].simplify())],\n [m02[2,0].simplify(), m02[2,1].simplify(), m02[2,2].simplify() ]])\n\nmbee\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}\\cos{\\left(\\theta_{1} + \\theta_{2} \\right)} & - \\sin{\\left(\\theta_{1} + \\theta_{2} \\right)} & l_{1} \\cos{\\left(\\theta_{1} \\right)} + l_{2} \\cos{\\left(\\theta_{1} + \\theta_{2} \\right)}\\\\\\sin{\\left(\\theta_{1} + \\theta_{2} \\right)} & \\cos{\\left(\\theta_{1} + \\theta_{2} \\right)} & l_{1} \\sin{\\left(\\theta_{1} \\right)} + l_{2} \\sin{\\left(\\theta_{1} + \\theta_{2} \\right)}\\\\0 & 0 & 1\\end{matrix}\\right]$\n\n\n\n\n```python\n# Forward Kinematic Equations\n\n# position in x-direction\n\npx = mbee[0,2]\npx\n```\n\n\n\n\n$\\displaystyle l_{1} \\cos{\\left(\\theta_{1} \\right)} + l_{2} \\cos{\\left(\\theta_{1} + \\theta_{2} \\right)}$\n\n\n\n\n```python\n# position in y-direction\n\npy = mbee[1,2]\npy\n```\n\n\n\n\n$\\displaystyle l_{1} \\sin{\\left(\\theta_{1} \\right)} + l_{2} \\sin{\\left(\\theta_{1} + \\theta_{2} \\right)}$\n\n\n\n\n```python\n# Evaluation of tip position\n\nfx = sp.lambdify([l1, l2, theta1, theta2], px, 'numpy')\nfy = sp.lambdify([l1, l2, theta1, theta2], py, 'numpy')\n```\n\n\n```python\n# plotting tip position in X & Y Plane\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport numpy as np\nd2r = np.deg2rad\n```\n\n\n```python\ntheta1s = np.linspace(d2r(0), d2r(360)) #desired range of motion for joint 1\ntheta2s = np.linspace(d2r(0), d2r(360)) #desired range of motion for joint 2\n\nzx = np.array(fx(15.0, 15.0, theta1s, theta2s))\nzy = np.array(fy(15.0, 15.0, theta1s, theta2s))\n\nfig, ax1 = plt.subplots()\nax1.plot(np.rad2deg(theta1s), zx, label = r'$p_x$')\nax1.plot(np.rad2deg(theta1s), zy, label = r'$p_y$')\nax1.set_xlabel(r'($\\theta_1$, $\\theta_2$) [deg]')\nax1.set_ylabel(r' tip position [mm]')\nplt.legend()\nplt.grid()\n```\n\n\n```python\n# Manipulator Workspace\n\nfrom plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot\ninit_notebook_mode(connected=True) # for offline mode in Jupyter Notebook use\n\nimport plotly.offline as py\nimport plotly.graph_objs as go\nfrom numpy import cos, sin\n```\n\n\n\n\n\n\n\n```python\ntheta11 = np.linspace(d2r(0), d2r(90))\ntheta22 = np.linspace(d2r(0), d2r(360))\ntheta1, theta2 = np.meshgrid(theta11, theta22)\nl_range = [5]\n\npx1 = {}\npy1 = {}\npz1 = {}\nfor i in l_range:\n l1 = i\n l2 = i-4\n \n pxa = l1*cos(theta1) + l2*cos(theta1 + theta2)\n pya = l1*sin(theta1) + l2*sin(theta1 + theta2)\n \n px1[f'x{i}'] = pxa\n py1[f'x{i}'] = pya\n```\n\n\n```python\npxx = px1['x5']\npyy = px1['x5']\npzz = pyy*0 #dummy zero points for z-axis, as it doesn't exist\n```\n\n\n```python\ntrace1 = go.Surface(z=pzz, x= pxx, y=pyy,\n colorscale='Reds',\n showscale=False,\n opacity=0.7,\n )\ndata = [trace1]\n```\n\n\n```python\nlayout = go.Layout(scene=dict(xaxis = dict(title='X (mm)'),\n yaxis = dict(title='Y (mm)'),\n zaxis = dict(title='Z (mm)'),\n ),\n )\n```\n\n\n```python\nfig = go.Figure(data=data, layout=layout)\npy.iplot(fig)\n```\n", "meta": {"hexsha": 
"710e216acd42930a89af4d87c2944152379a3acc", "size": 476300, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "2rp_manipulator.ipynb", "max_stars_repo_name": "Eddy-Morgan/kinematics--2RP--DH--convention", "max_stars_repo_head_hexsha": "5519c2ba210fb90f9e82a706f407643684de0a3b", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "2rp_manipulator.ipynb", "max_issues_repo_name": "Eddy-Morgan/kinematics--2RP--DH--convention", "max_issues_repo_head_hexsha": "5519c2ba210fb90f9e82a706f407643684de0a3b", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "2rp_manipulator.ipynb", "max_forks_repo_name": "Eddy-Morgan/kinematics--2RP--DH--convention", "max_forks_repo_head_hexsha": "5519c2ba210fb90f9e82a706f407643684de0a3b", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 51.4918918919, "max_line_length": 111893, "alphanum_fraction": 0.6760844006, "converted": true, "num_tokens": 2392, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9626731083722525, "lm_q2_score": 0.8558511524823263, "lm_q1q2_score": 0.8239048892641357}} {"text": "### Uniform Distribution\n#### Continous\nConsider,\n$$\\mathbb{U}[N_{bin},r'_{min},r'_{max},r_{min},r_{max}] = \\frac{r'_{max}-r'_{min}}{N_{bin}(r_{max}-r_{min})} $$\nSince in Matlab rand, the uniform distribution is defined in the range of $[0,1]$,\n$$r\\sim \\mathbb{U}[0,1]$$\nTo extend the range $r\\in [0,1]$ to $r'\\in [a,b]$,\n$$r' = r(b-a)+a$$\n\n#### Discrete\n\\begin{equation}\np(x)=\\left\\{\n \\begin{array}{@{}ll@{}}\n 0.25, & \\text{if}\\ x\\in\\{1,2,3,4\\} \\\\\n 0, & \\text{otherwise}\n \\end{array}\\right.\n\\end{equation}\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n```\n\n\n```python\ns = np.random.uniform(1,5,10000)\ncount, bins, ignored = plt.hist(s, 100, density=True, align='left',rwidth=0.5)\nplt.plot(bins, 0.25*np.ones_like(bins), linewidth=2, color='r')\n```\n\n\n```python\ns = np.random.randint(1,5,100000)\nplt.hist(s, density=True, align='mid', rwidth=0.5)\n```\n\n### Degenerate Distribution\n\\begin{equation}\np(x)=\\left\\{\n \\begin{array}{@{}ll@{}}\n 1, & \\text{if}\\ x=1 \\\\\n 0, & \\text{if}\\ x\\in\\{2,3,4\\}\n \\end{array}\\right.\n\\end{equation}\n\n\n```python\ns = np.random.uniform(0,1,1000)\ns = np.concatenate((s,np.zeros(5000)))\ncount, bins, ignored = plt.hist(s, 1, density=True, align='right')\nplt.plot(range(0,6), np.ones(6), linewidth=2, color='r')\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "4584656e06d6490e095185bad4687ed38ae31ab4", "size": 21581, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Machine Learning A Probabilistic Perspective/2Probability/.ipynb_checkpoints/2.1discreteProbDistFig-checkpoint.ipynb", "max_stars_repo_name": "zcemycl/ProbabilisticPerspectiveMachineLearning", "max_stars_repo_head_hexsha": "8291bc6cb935c5b5f9a88f7b436e6e42716c21ae", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2019-11-20T10:20:29.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-09T11:15:23.000Z", "max_issues_repo_path": "Machine Learning A Probabilistic 
Perspective/2Probability/.ipynb_checkpoints/2.1discreteProbDistFig-checkpoint.ipynb", "max_issues_repo_name": "zcemycl/ProbabilisticPerspectiveMachineLearning", "max_issues_repo_head_hexsha": "8291bc6cb935c5b5f9a88f7b436e6e42716c21ae", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Machine Learning A Probabilistic Perspective/2Probability/.ipynb_checkpoints/2.1discreteProbDistFig-checkpoint.ipynb", "max_forks_repo_name": "zcemycl/ProbabilisticPerspectiveMachineLearning", "max_forks_repo_head_hexsha": "8291bc6cb935c5b5f9a88f7b436e6e42716c21ae", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2020-05-27T03:56:38.000Z", "max_forks_repo_forks_event_max_datetime": "2021-05-02T13:15:42.000Z", "avg_line_length": 115.4064171123, "max_line_length": 6756, "alphanum_fraction": 0.8709512998, "converted": true, "num_tokens": 497, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9353465170505205, "lm_q2_score": 0.880797071719777, "lm_q1q2_score": 0.8238504732613909}} {"text": "# Unsupervised Learning \n\n\n```python\n# Setup\nfrom utils.lecture10 import *\n%matplotlib inline\n```\n\n### Supervised vs Unsupervised Learning\n\nThe difference between *supervised learning* and *unsupervised learning* is that in the first case we have a variable $y$ which we want to predict, given a set of variables $\\{ X_1, . . . , X_p \\}$.\n\nIn *unsupervised learning* are not interested in prediction, because we do not have an associated response variable $y$. Rather, the goal is to discover interesting properties about the measurements on $\\{ X_1, . . . , X_p \\}$. \n\nQuestions that we are usually interested in are\n- Clustering\n- Dimensionality reduction\n\nIn general, unsupervised learning can be viewed as an extention of exploratory data analysis.\n\n### Dimensionality Reduction\n\nWorking in high-dimensional spaces can be undesirable for many reasons; raw data are often sparse as a consequence of the curse of dimensionality, and analyzing the data is usually computationally intractable (hard to control or deal with). \n\nDimensionality reduction can also be useful to plot high-dimensional data. \n\n### Clustering\n\nClustering refers to a very broad set of techniques for finding subgroups, or clusters, in a data set. When we cluster the observations of a data set, we seek to partition them into distinct groups so that the observations within each group are quite similar to each other, while observations in different groups are quite different from each other.\n\nIn this section we focus on the following algorithms:\n\n1. **K-means clustering**\n2. **Hierarchical clustering**\n3. **Gaussian Mixture Models**\n\n## Principal Component Analysis\n\nSuppose that we wish to visualize $n$ observations with measurements on a set of $p$ features, $\\{X_1, . . . , X_p\\}$, as part of an exploratory data analysis.\n\nWe could do this by examining two-dimensional scatterplots of the data, each of which contains the n observations\u2019 measurements on two of the features. However, there are $p(p\u22121)/2$ such scatterplots; for example,\nwith $p = 10$ there are $45$ plots! \n\nPCA provides a tool to do just this. It finds a low-dimensional represen- tation of a data set that contains as much as possible of the variation. 
\n\n\n\n### First Principal Component\n\nThe **first principal component** of a set of features $\\{X_1, . . . , X_p\\}$ is the normalized linear combination of the features $Z_1$\n\n$$\nZ_1 = \\phi_{11} X_1 + \\phi_{21} X_2 + ... + \\phi_{p1} X_p\n$$\n\nthat has the largest variance. \n\nBy normalized, we mean that $\\sum_{i=1}^p \\phi^2_{i1} = 1$.\n\n### PCA Computation\n\nIn other words, the first principal component loading vector solves the optimization problem\n\n$$\n\\underset{\\phi_{11}, \\ldots, \\phi_{p 1}}{\\max} \\ \\Bigg \\lbrace \\frac{1}{n} \\sum _ {i=1}^{n}\\left(\\sum _ {j=1}^{p} \\phi _ {j1} x _ {ij} \\right)^{2} \\Bigg \\rbrace \\quad \\text { subject to } \\quad \\sum _ {j=1}^{p} \\phi _ {j1}^{2}=1\n$$\n\nThe objective that we are maximizing is just the sample variance of the $n$ values of $z_{i1}$.\n\nAfter the first principal component $Z_1$ of the features has been determined, we can find the second principal component $Z_2$. The **second principal component** is the linear combination of $\\{X_1, . . . , X_p\\}$ that has maximal variance out of all linear combinations that are *uncorrelated* with $Z_1$.\n\n### Example\n\nWe illustrate the use of PCA on the `USArrests` data set. \n\nFor each of the 50 states in the United States, the data set contains the number of arrests per $100,000$ residents for each of three crimes: `Assault`, `Murder`, and `Rape.` We also record the percent of the population in each state living in urban areas, `UrbanPop`.\n\n\n```python\n# Load crime data\ndf = pd.read_csv('data/USArrests.csv', index_col=0)\ndf.head()\n```\n\n\n\n\n
                Murder  Assault  UrbanPop  Rape
    State                                      
    Alabama       13.2      236        58  21.2
    Alaska        10.0      263        48  44.5
    Arizona        8.1      294        80  31.0
    Arkansas       8.8      190        50  19.5
    California     9.0      276        91  40.6
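The four variables are measured in very different units, which is exactly why the next section rescales them. As a quick illustrative check (an extra cell that simply reuses the `df` loaded above), we can compare the mean and standard deviation of each column; `Assault` is far more variable than the other three.

```python
# The columns live on very different scales; this motivates standardizing
# each variable to zero mean and unit variance in the next section
df.describe().loc[['mean', 'std']]
```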
    \n
    \n\n\n\n### Data Scaling \n\nTo make all the features comparable, we first need to scale them. In this case, we use the `sklearn.preprocessing.scale()` function to normalize each variable to have zero mean and unit variance. \n\n\n```python\n# Scale data\nX_scaled = pd.DataFrame(scale(df), index=df.index, columns=df.columns).values\n```\n\nWe will see later what are the practical implications of (not) scaling.\n\n### Fitting\n\nLet's fit PCA with 2 components.\n\n\n```python\n# Fit PCA with 2 components\npca2 = PCA(n_components=2).fit(X_scaled)\n```\n\n\n```python\n# Get weights\nweights = pca2.components_.T\ndf_weights = pd.DataFrame(weights, index=df.columns, columns=['PC1', 'PC2'])\ndf_weights\n```\n\n\n\n\n
                   PC1       PC2
    Murder    0.535899  0.418181
    Assault   0.583184  0.187986
    UrbanPop  0.278191 -0.872806
    Rape      0.543432 -0.167319
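Each column of `weights` is a loading vector $\phi$. As a quick sanity check (an extra illustrative cell reusing the `weights` array computed above), we can verify the constraints behind the maximization problem: every loading vector should have unit norm, and the two vectors should be orthogonal to each other.

```python
import numpy as np

# Loading vectors returned by PCA satisfy the normalization constraint
# (unit norm) and are mutually orthogonal
print(np.linalg.norm(weights[:, 0]), np.linalg.norm(weights[:, 1]))  # both should be ~1
print(weights[:, 0] @ weights[:, 1])                                 # should be ~0
```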
    \n
    \n\n\n\n### Projecting the data\n\nWhat does the trasformed data looks like?\n\n\n```python\n# Transform X to get the principal components\nX_dim2 = pca2.transform(X_scaled)\ndf_dim2 = pd.DataFrame(X_dim2, columns=['PC1', 'PC2'], index=df.index)\ndf_dim2.head()\n```\n\n\n\n\n
                     PC1       PC2
    State                         
    Alabama     0.985566  1.133392
    Alaska      1.950138  1.073213
    Arizona     1.763164 -0.745957
    Arkansas   -0.141420  1.119797
    California  2.523980 -1.542934
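The scores returned by `transform` are nothing more than the scaled data multiplied by the loading vectors, $Z = X \phi$. The short sketch below (an extra cell reusing `X_scaled`, `weights` and `X_dim2` from above; `Z_manual` is just a throwaway name) makes that link explicit and should print `True` up to floating-point tolerance.

```python
import numpy as np

# The principal component scores are the (centered and scaled) data
# projected onto the loading vectors: Z = X @ weights
Z_manual = X_scaled @ weights
print(np.allclose(Z_manual, X_dim2))
```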
    \n
    \n\n\n\n### Visualization\n\nThe advantage og PCA is that it allows us to see the variation in lower dimesions.\n\n\n```python\nmake_figure_10_1a(df_dim2, df_weights)\n```\n\n### PCA and Spectral Analysis\n\nIn case you haven't noticed, calculating principal components, is equivalent to calculating the eigenvectors of the design matrix $X'X$, i.e. the variance-covariance matrix of $X$. Indeed what we performed above is a decomposition of the variance of $X$ into orthogonal components.\n\nThe constrained maximization problem above can be re-written in matrix notation as\n\n$$\n\\max \\ \\phi' X'X \\phi \\quad \\text{ s. t. } \\quad \\phi'\\phi = 1\n$$\n\nWhich has the following dual representation\n\n$$\n\\mathcal L (\\phi, \\lambda) = \\phi' X'X \\phi - \\lambda (\\phi'\\phi - 1)\n$$\n\nIf we take the first order conditions\n\n$$\n\\begin{align}\n& \\frac{\\partial \\mathcal L}{\\partial \\lambda} = \\phi'\\phi - 1 \\\\\n& \\frac{\\partial \\mathcal L}{\\partial \\phi} = 2 X'X \\phi - 2 \\lambda \\phi\n\\end{align}\n$$\n\nSetting the derivatives to zero at the optimum, we get\n\n$$\n\\begin{align}\n& \\phi'\\phi = 1 \\\\\n& X'X \\phi = \\lambda \\phi\n\\end{align}\n$$\n\nThus, $\\phi$ is an **eigenvector** of the covariance matrix $X'X$, and the maximizing vector will be the one associated with the largest **eigenvalue** $\\lambda$. \n\n### Eigenvalues and eigenvectors \n\nWe can now double-check it using `numpy` linear algebra package.\n\n\n```python\neigenval, eigenvec = np.linalg.eig(X_scaled.T @ X_scaled)\ndata = np.concatenate((eigenvec,eigenval.reshape(1,-1)))\nidx = list(df.columns) + ['Eigenvalue']\ndf_eigen = pd.DataFrame(data, index=idx, columns=['PC1', 'PC2','PC3','PC4'])\n\ndf_eigen\n```\n\n\n\n\n
                       PC1        PC2       PC3        PC4
    Murder        0.535899   0.418181  0.649228  -0.341233
    Assault       0.583184   0.187986 -0.743407  -0.268148
    UrbanPop      0.278191  -0.872806  0.133878  -0.378016
    Rape          0.543432  -0.167319  0.089024   0.817778
    Eigenvalue  124.012079  49.488258  8.671504  17.828159
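Note that `np.linalg.eig` does not sort the eigenvalues. Once ordered from largest to smallest, they should line up (up to a scale factor) with the explained-variance figures reported by `sklearn` in the Proportion of Variance Explained section further down: dividing by $n-1$ gives the variance of each component, and dividing by their sum gives the proportion of variance explained. The cell below is an extra illustrative sketch reusing `eigenval` and `X_scaled` from above; `order` is just a throwaway name.

```python
import numpy as np

# Sort the eigenvalues in decreasing order
order = np.argsort(eigenval)[::-1]

# Variance of each principal component (should match explained_variance_)
print(eigenval[order] / (X_scaled.shape[0] - 1))

# Proportion of variance explained (should match explained_variance_ratio_)
print(eigenval[order] / eigenval.sum())
```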
    \n
    \n\n\n\nThe spectral decomposition of the variance of $X$ generates a set of orthogonal vectors (eigenvectors) with different magnitudes (eigenvalues). The eigenvalues tell us the amount of variance of the data in that direction.\n\nIf we combine the eigenvectors together, we form a projection matrix $P$ that we can use to transform the original variables: $\\tilde X = P X$\n\n\n```python\nX_transformed = X_scaled @ eigenvec\ndf_transformed = pd.DataFrame(X_transformed, index=df.index, columns=['PC1', 'PC2','PC3','PC4'])\n\ndf_transformed.head()\n```\n\n\n\n\n
                     PC1       PC2       PC3       PC4
    State                                             
    Alabama     0.985566  1.133392  0.156267 -0.444269
    Alaska      1.950138  1.073213 -0.438583  2.040003
    Arizona     1.763164 -0.745957 -0.834653  0.054781
    Arkansas   -0.141420  1.119797 -0.182811  0.114574
    California  2.523980 -1.542934 -0.341996  0.598557
    \n
    \n\n\n\nThis is exactly the dataset that we obtained before.\n\n### Scaling the Variables\n\nThe results obtained when we perform PCA will also depend on whether the variables have been individually scaled. In fact, the variance of a variable depends on its magnitude.\n\n\n```python\n# Variables variance\ndf.var(axis=0)\n```\n\n\n\n\n Murder 18.970465\n Assault 6945.165714\n UrbanPop 209.518776\n Rape 87.729159\n dtype: float64\n\n\n\nConsequently, if we perform PCA on the unscaled variables, then the first principal component loading vector will have a very large loading for `Assault`, since that variable has by far the highest variance.\n\n\n```python\n# Fit PCA with unscaled varaibles\nX = df.values\npca2_u = PCA(n_components=2).fit(X)\n```\n\n\n```python\n# Get weights\nweights_u = pca2_u.components_.T\ndf_weights_u = pd.DataFrame(weights_u, index=df.columns, columns=['PC1', 'PC2'])\ndf_weights_u\n```\n\n\n\n\n
                   PC1       PC2\n    Murder    0.041704  0.044822\n    Assault   0.995221  0.058760\n    UrbanPop  0.046336 -0.976857\n    Rape      0.075156 -0.200718\n
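\n\nFor reference, here is a minimal sketch of the standardization step (an assumption on our part: the earlier `X_scaled` was presumably built in a similar way, rescaling each variable to unit variance), which is why the scaled loadings shown earlier are much more balanced:\n\n\n```python\n# Sketch: standardize each column to zero mean and unit variance\nfrom sklearn.preprocessing import scale\nX_scaled_check = scale(df.values)\nprint(X_scaled_check.var(axis=0))  # every variance is now 1\n```\n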
    \n\n\n\n\n```python\n# Transform X to get the principal components\nX_dim2_u = pca2_u.transform(X)\ndf_dim2_u = pd.DataFrame(X_dim2_u, columns=['PC1', 'PC2'], index=df.index)\n```\n\n### Plotting\n\nWe can compare the lower dimensional representations with and without scaling.\n\n\n```python\nmake_figure_10_1b(df_dim2, df_dim2_u, df_weights, df_weights_u)\n```\n\nAs predicted, the first principal component loading vector places almost all of its weight on `Assault`, while the second principal component loading vector places almost all of its weight on `UrpanPop`. Comparing this to the left-hand plot, we see that scaling does indeed have a substantial effect on the results obtained. However, this result is simply a consequence of the scales on which the variables were measured. \n\n### The Proportion of Variance Explained\n\nWe can now ask a natural question: how much of the information in a given data set is lost by projecting the observations onto the first few principal components? That is, how much of the variance in the data is not contained in the first few principal components? More generally, we are interested in knowing the **proportion of variance explained (PVE)** by each principal component. \n\n\n```python\n# Four components\npca4 = PCA(n_components=4).fit(X_scaled)\n```\n\n\n```python\n# Variance of the four principal components\npca4.explained_variance_\n```\n\n\n\n\n array([2.53085875, 1.00996444, 0.36383998, 0.17696948])\n\n\n\n### Interpretation\n\nWe can compute it in percentage of the total variance.\n\n\n```python\n# As a percentage of the total variance\npca4.explained_variance_ratio_\n```\n\n\n\n\n array([0.62006039, 0.24744129, 0.0891408 , 0.04335752])\n\n\n\nIn the `Arrest` dataset, the first principal component explains $62.0\\%$ of the variance in the data, and the next principal component explains $24.7\\%$ of the variance. Together, the first two principal components explain almost $87\\%$ of the variance in the data, and the last two principal components explain only $13\\%$ of the variance.\n\n### Plotting\n\nWe can plot in a graph the percentage of the variance explained, relative to the number of components.\n\n\n```python\nmake_figure_10_2(pca4)\n```\n\n### How Many Principal Components?\n\nIn general, a $n \\times p$ data matrix $X$ has $\\min\\{n \u2212 1, p\\}$ distinct principal components. However, we usually are not interested in all of them; rather, we would like to use just the first few principal components in order to visualize or interpret the data. \n\nWe typically decide on the number of principal components required to visualize the data by examining a *scree plot*.\n\nHowever, there is no well-accepted objective way to decide how many principal com- ponents are enough.\n\n## K-Means Clustering\n\nThe idea behind K-means clustering is that a good clustering is one for which the within-cluster variation is as small as possible. Hence we want to solve the problem\n\n$$\n\\underset{C_{1}, \\ldots, C_{K}}{\\operatorname{minimize}} \\Bigg\\lbrace \\sum_{k=1}^{K} W\\left(C_{k}\\right) \\Bigg\\rbrace\n$$\n\nwhere $C_k$ is a cluster and $ W(C_k)$ is a measure of the amount by which the observations within a cluster differ from each other.\n\nThere are many possible ways to define this concept, but by far the most common choice involves **squared Euclidean distance**. 
That is, we define\n\n$$\nW\\left(C_{k}\\right)=\\frac{1}{\\left|C_{k}\\right|} \\sum_{i, i^{\\prime} \\in C_{k}} \\sum_{j=1}^{p}\\left(x_{i j}-x_{i^{\\prime} j}\\right)^2\n$$\n\nwhere $|C_k|$ denotes the number of observations in the $k^{th}$ cluster. \n\n### Algorithm\n\n1. Randomly assign a number, from $1$ to $K$, to each of the observations. These serve as initial cluster assignments for the observations.\n\n2. Iterate until the cluster assignments stop changing:\n\n a) For each of the $K$ clusters, compute the cluster centroid. The kth cluster centroid is the vector of the $p$ feature means for the observations in the $k^{th}$ cluster.\n \n b) Assign each observation to the cluster whose centroid is closest (where closest is defined using Euclidean distance).\n\n### Generate the data\n\nWe first generate a 2-dimensional dataset.\n\n\n```python\n# Simulate data\nnp.random.seed(123)\nX = np.random.randn(50,2)\nX[0:25, 0] = X[0:25, 0] + 3\nX[0:25, 1] = X[0:25, 1] - 4\n```\n\n\n```python\nmake_new_figure_1(X)\n```\n\n### Step 1: random assignement\n\nNow let's randomly assign the data to two clusters, at random.\n\n\n```python\n# Init clusters\nK = 2\nclusters0 = np.random.randint(K,size=(np.size(X,0)))\n```\n\n\n```python\nmake_new_figure_2(X, clusters0)\n```\n\n### Step 2: estimate distributions\n\nWhat are the new centroids?\n\n\n```python\n# Compute new centroids\ndef compute_new_centroids(X, clusters):\n K = len(np.unique(clusters))\n centroids = np.zeros((K,np.size(X,1)))\n for k in range(K):\n if sum(clusters==k)>0:\n centroids[k,:] = np.mean(X[clusters==k,:], axis=0)\n else:\n centroids[k,:] = np.mean(X, axis=0)\n return centroids\n```\n\n\n```python\n# Print\ncentroids0 = compute_new_centroids(X, clusters0)\nprint(centroids0)\n```\n\n [[ 1.54179703 -1.65922379]\n [ 1.67917325 -2.36272948]]\n\n\n### Plotting the centroids\n\nLet's add the centroids to the graph.\n\n\n```python\n# Plot\nplot_assignment(X, centroids0, clusters0, 0, 0)\n```\n\n### Step 3: assign data to clusters\n\nNow we can assign the data to the clusters, according to the closest centroid.\n\n\n```python\n# Assign X to clusters\ndef assign_to_cluster(X, centroids):\n K = np.size(centroids,0)\n dist = np.zeros((np.size(X,0),K))\n for k in range(K):\n dist[:,k] = np.mean((X - centroids[k,:])**2, axis=1)\n clusters = np.argmin(dist, axis=1)\n \n # Compute inertia\n inertia = 0\n for k in range(K):\n if sum(clusters==k)>0:\n inertia += np.sum((X[clusters==k,:] - centroids[k,:])**2)\n return clusters, inertia\n```\n\n### Plotting assigned data\n\n\n```python\n# Get cluster assignment\n[clusters1,d] = assign_to_cluster(X, centroids0)\n```\n\n\n```python\n# Plot\nplot_assignment(X, centroids0, clusters1, d, 1)\n```\n\n### Full Algorithm\n\nWe now have all the components to proceed iteratively.\n\n\n```python\ndef kmeans_manual(X, K):\n\n # Init\n i = 0\n d0 = 1e4\n d1 = 1e5\n clusters = np.random.randint(K,size=(np.size(X,0)))\n\n # Iterate until convergence\n while np.abs(d0-d1) > 1e-10:\n d1 = d0\n centroids = compute_new_centroids(X, clusters)\n [clusters, d0] = assign_to_cluster(X, centroids)\n plot_assignment(X, centroids, clusters, d0, i)\n i+=1\n```\n\n### Plotting k-means clustering\n\n\n```python\n# Test\nkmeans_manual(X, K)\n```\n\nHere the observations can be easily plotted because they are two-dimensional.\nIf there were more than two variables then we could instead perform PCA\nand plot the first two principal components score vectors.\n\n### More clusters\n\nIn the previous example, we knew that there 
really were two clusters because\nwe generated the data. However, for real data, in general we do not know\nthe true number of clusters. We could instead have performed K-means\nclustering on this example with `K = 3`. If we do this, K-means clustering will split up the two \"real\" clusters, since it has no information about them:\n\n\n```python\n# K=3\nkmeans_manual(X, 3)\n```\n\n### Sklearn package\n\nThe automated function in `sklearn` to persorm $K$-means clustering is `KMeans`. \n\n\n```python\n# SKlearn algorithm\nkm1 = KMeans(n_clusters=3, n_init=1, random_state=1)\nkm1.fit(X)\n```\n\n\n\n\n KMeans(n_clusters=3, n_init=1, random_state=1)\n\n\n\n### Plotting\n\nWe can plot the asssignment generated by the `KMeans` function.\n\n\n```python\n# Plot\nplot_assignment(X, km1.cluster_centers_, km1.labels_, km1.inertia_, km1.n_iter_)\n```\n\nAs we can see, the results are different in the two algorithms? Why? $K$-means is susceptible to the initial values. One way to solve this problem is to run the algorithm multiple times and report only the best results\n\n### Initial Assignment\n\nTo run the `Kmeans()` function in python with multiple initial cluster assignments, we use the `n_init` argument (default: 10). If a value of `n_init` greater than one is used, then K-means clustering will be performed using multiple random assignments, and the `Kmeans()` function will report only the best results.\n\n\n```python\n# 30 runs\nkm_30run = KMeans(n_clusters=3, n_init=30, random_state=1).fit(X)\nplot_assignment(X, km_30run.cluster_centers_, km_30run.labels_, km_30run.inertia_, km_30run.n_iter_)\n```\n\n### Best Practices\n\nIt is generally recommended to always run K-means clustering with a large value of `n_init`, such as 20 or 50 to avoid getting stuck in an undesirable local optimum.\n\nWhen performing K-means clustering, in addition to using multiple initial cluster assignments, it is also important to set a random seed using the `random_state` parameter. This way, the initial cluster assignments can be replicated, and the K-means output will be fully reproducible.\n\n## Hierarchical Clustering\n\nOne potential disadvantage of K-means clustering is that it requires us to pre-specify the number of clusters $K$. \n\nHierarchical clustering is an alternative approach which does not require that we commit to a particular choice of $K$.\n\n### The Dendogram\n\nHierarchical clustering has an added advantage over K-means clustering in that it results in an attractive tree-based representation of the observations, called a **dendrogram**.\n\n\n```python\nd = dendrogram(\n linkage(X, \"complete\"),\n leaf_rotation=90., # rotates the x axis labels\n leaf_font_size=8., # font size for the x axis labels\n )\n```\n\n### Interpretation\n\nEach leaf of the *dendrogram* represents one observation. \n\nAs we move up the tree, some leaves begin to fuse into branches. These correspond to observations that are similar to each other. As we move higher up the tree, branches themselves fuse, either with leaves or other branches. The earlier (lower in the tree) fusions occur, the more similar the groups of observations are to each other.\n\nWe can use de *dendogram* to understand how similar two observations are: we can look for the point in the tree where branches containing those two obse rvations are first fused. The height of this fusion, as measured on the vertical axis, indicates how different the two observations are. 
Thus, observations that fuse at the very bottom of the tree are quite similar to each other, whereas observations that fuse close to the top of the tree will tend to be quite different.\n\nThe term **hierarchical** refers to the fact that clusters obtained by cutting the dendrogram at a given height are necessarily nested within the clusters obtained by cutting the dendrogram at any greater height.\n\n### The Hierarchical Clustering Algorithm\n\n1. Begin with $n$ observations and a measure (such as Euclidean distance) of all the $n(n \u2212 1)/2$ pairwise dissimilarities. Treat each 2 observation as its own cluster.\n\n2. For $i=n,n\u22121,...,2$\n\n a) Examine all pairwise inter-cluster dissimilarities among the $i$ clusters and identify the **pair of clusters that are least dissimilar** (that is, most similar). Fuse these two clusters. The dissimilarity between these two clusters indicates the height in the dendrogram at which the fusion should be placed.\n \n b) Compute the new pairwise inter-cluster dissimilarities among the $i\u22121$ remaining clusters.\n\n### The Linkage Function\n\nWe have a concept of the dissimilarity between pairs of observations, but how do we define the dissimilarity between two clusters if one or both of the clusters contains multiple observations?\n\nThe concept of dissimilarity between a pair of observations needs to be extended to a pair of groups of observations. This extension is achieved by developing the notion of **linkage**, which defines the dissimilarity between two groups of observations.\n\n### Linkages\n\nThe four most common types of linkage are:\n\n1. **Complete**: Maximal intercluster dissimilarity. Compute all pairwise dissimilarities between the observations in cluster A and the observations in cluster B, and record the largest of these dissimilarities.\n2. **Single**: Minimal intercluster dissimilarity. Compute all pairwise dissimilarities between the observations in cluster A and the observations in cluster B, and record the smallest of these dissimilarities. \n3. **Average**: Mean intercluster dissimilarity. Compute all pairwise dissimilarities between the observations in cluster A and the observations in cluster B, and record the average of these dissimilarities.\n4. **Centroid**: Dissimilarity between the centroid for cluster A (a mean vector of length p) and the centroid for cluster B. Centroid linkage can result in undesirable inversions.\n\nAverage, complete, and single linkage are most popular among statisticians. Average and complete linkage are generally preferred over single linkage, as they tend to yield more balanced dendrograms. Centroid linkage is often used in genomics, but suffers from a major drawback in that an inversion can occur, whereby two clusters are fused at a height below either of the individual clusters in the dendrogram. 
This can lead to difficulties in visualization as well as in interpretation of the dendrogram.\n\n\n```python\n# Init\nlinkages = [hierarchy.complete(X), hierarchy.average(X), hierarchy.single(X)]\ntitles = ['Complete Linkage', 'Average Linkage', 'Single Linkage']\n```\n\n### Plotting\n\n\n```python\nmake_new_figure_4(linkages, titles)\n```\n\nFor this data, both *complete* and *average* linkage generally separates the observations into their correct groups.\n\n## Gaussian Mixture Models\n\nClustering methods such as hierarchical clustering and K-means are based on heuristics and rely primarily on finding clusters whose members are close to one another, as measured directly with the data (no probability model involved).\n\n*Gaussian Mixture Models* assume that the data was generated by multiple multivariate gaussian distributions. The objective of the algorithm is to recover these latent distributions.\n\nThe advantages with respect to K-means are\n\n- a structural interpretaion of the parameters \n- automatically generates class probabilities\n- can generate clusters of observations that are not necessarily close to each other\n\n### Algorithm\n\n1. Randomly assign a number, from $1$ to $K$, to each of the observations. These serve as initial cluster assignments for the observations.\n\n2. Iterate until the cluster assignments stop changing:\n\n a) For each of the $K$ clusters, compute its mean and variance. The main difference with K-means is that we also compute the variance matrix. \n \n b) Assign each observation to its most likely cluster.\n\n### Dataset\n\nLet's use the same data we have used for k-means, for a direct comparison.\n\n\n```python\nmake_new_figure_1(X)\n```\n\n### Step 1: random assignement\n\nLet's also use the same random assignment of the K-means algorithm.\n\n\n```python\nmake_new_figure_2(X, clusters0)\n```\n\n### Step 2: compute distirbutions\n\nWhat are the new distributions?\n\n\n```python\n# Compute new centroids\ndef compute_distributions(X, clusters):\n K = len(np.unique(clusters))\n distr = []\n for k in range(K):\n if sum(clusters==k)>0:\n distr += [multivariate_normal(np.mean(X[clusters==k,:], axis=0), np.cov(X[clusters==k,:].T))]\n else:\n distr += [multivariate_normal(np.mean(X, axis=0), np.cov(X.T))]\n return distr\n```\n\n\n```python\n# Print\ndistr0 = compute_distributions(X, clusters0)\nprint(\"Mean of the first distribution: \\n\", distr0[0].mean)\nprint(\"\\nVariance of the first distribution: \\n\", distr0[0].cov)\n```\n\n Mean of the first distribution: \n [ 1.54179703 -1.65922379]\n \n Variance of the first distribution: \n [[ 3.7160256 -2.27290036]\n [-2.27290036 4.67223237]]\n\n\n### Plotting the distributions\n\nLet's add the distributions to the graph.\n\n\n```python\n# Plot\nplot_assignment_gmm(X, clusters0, distr0, i=0, logL=0.0)\n```\n\n### Likelihood\n\nThe main difference with respect with K-means is that we can now compute the probability that each observation belongs to each cluster. This is the probability that each observation was generated by one of the two bi-variate normal distributions. 
These probabilities are called **likelihoods**.\n\n\n```python\n# Print first 5 likelihoods\npdfs0 = np.stack([d.pdf(X) for d in distr0], axis=1)\npdfs0[:5]\n```\n\n\n\n\n array([[0.03700522, 0.05086876],\n [0.00932081, 0.02117353],\n [0.04092453, 0.04480732],\n [0.00717854, 0.00835799],\n [0.01169199, 0.01847373]])\n\n\n\n### Step 3: assign data to clusters\n\nNow we can assign the data to the clusters, via maximum likelihood.\n\n\n```python\n# Assign X to clusters\ndef assign_to_cluster_gmm(X, distr):\n pdfs = np.stack([d.pdf(X) for d in distr], axis=1)\n clusters = np.argmax(pdfs, axis=1)\n log_likelihood = 0\n for k, pdf in enumerate(pdfs):\n log_likelihood += np.log(pdf[clusters[k]])\n return clusters, log_likelihood\n```\n\n\n```python\n# Get cluster assignment\nclusters1, logL1 = assign_to_cluster_gmm(X, distr0)\n```\n\n### Plotting assigned data\n\n\n```python\n# Compute new distributions\ndistr1 = compute_distributions(X, clusters1)\n```\n\n\n```python\n# Plot\nplot_assignment_gmm(X, clusters1, distr1, 1, logL1);\n```\n\n### Expectation - Maximization\n\nThe two steps we have just seen, are part of a broader family of algorithms to maximize likelihoods called **expectation**-**maximization** algorithms.\n\nIn the expectation step, we computed the expectation of the parameters, given the current cluster assignment. \n\nIn the maximization step, we assigned observations to the cluster that maximized the likelihood of the single observation.\n\nThe alternative, and more computationally intensive procedure, would have been to specify a global likelihood function and find the mean and variance paramenters of the two normal distributions that maximized those likelihoods.\n\n### Full Algorithm\n\nWe can now deploy the full algorithm.\n\n\n```python\ndef gmm_manual(X, K):\n\n # Init\n i = 0\n logL0 = 1e4\n logL1 = 1e5\n clusters = np.random.randint(K,size=(np.size(X,0)))\n\n # Iterate until convergence\n while np.abs(logL0-logL1) > 1e-10:\n logL1 = logL0\n distr = compute_distributions(X, clusters)\n clusters, logL0 = assign_to_cluster_gmm(X, distr)\n plot_assignment_gmm(X, clusters, distr, i, logL0)\n i+=1\n```\n\n### Plotting k-means clustering\n\n\n```python\n# Test\ngmm_manual(X, K)\n```\n\nIn this case, GMM does a very poor job identifying the original clusters.\n\n### Overlapping Clusters\n\nLet's now try with a different dataset, where the data is drawn from two overlapping bi-variate gaussian distributions, forming a cross.\n\n\n```python\n# Simulate data\nX = np.random.randn(50,2)\nX[0:25, :] = np.random.multivariate_normal([0,0], [[50,0],[0,1]], size=25)\nX[25:, :] = np.random.multivariate_normal([0,0], [[1,0],[0,50]], size=25)\n```\n\n\n```python\nmake_new_figure_1(X)\n```\n\n### GMM with overlapping distributions\n\n\n```python\n# GMM\ngmm_manual(X, K)\n```\n\nAs we can see, GMM is able to correctly recover the original clusters.\n\n### K-means with overlapping distributions\n\n\n```python\n# K-means\nkmeans_manual(X, K)\n```\n\nK-means generates completely different clusters.\n", "meta": {"hexsha": "f154cf3f4dcbbfc85c005f80cb38f8f5d3c6b321", "size": 717531, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/10_unsupervised.ipynb", "max_stars_repo_name": "matteocourthoud/Machine-Learning-for-Economic-Analysis", "max_stars_repo_head_hexsha": "e19a1643372aafd13d8b7f7c1f5e6e863b552ec1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2021-12-06T20:36:03.000Z", 
"max_stars_repo_stars_event_max_datetime": "2022-02-21T13:22:22.000Z", "max_issues_repo_path": "notebooks/10_unsupervised.ipynb", "max_issues_repo_name": "matteocourthoud/Machine-Learning-for-Economic-Analysis", "max_issues_repo_head_hexsha": "e19a1643372aafd13d8b7f7c1f5e6e863b552ec1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/10_unsupervised.ipynb", "max_forks_repo_name": "matteocourthoud/Machine-Learning-for-Economic-Analysis", "max_forks_repo_head_hexsha": "e19a1643372aafd13d8b7f7c1f5e6e863b552ec1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-11-23T10:10:58.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-16T09:44:33.000Z", "avg_line_length": 241.0248572388, "max_line_length": 170360, "alphanum_fraction": 0.9178822936, "converted": true, "num_tokens": 9114, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9353465170505205, "lm_q2_score": 0.880797068590724, "lm_q1q2_score": 0.8238504703346422}} {"text": "```python\nimport sympy as sp\nfrom sympy.matrices import Matrix\n#import numpy as np\n\n# pretty print numpy matrices\n#np.set_printoptions(formatter={'float': '{: 0.5f}'.format})\n```\n\n\n```python\nth1, th2, th3, th4, th5, th6 = sp.symbols('th1:7')\n\n# the DH paramters for KUKA robot as per written document\n# alpha, a, d, theta (adjustmet)\n# the angles we will use to control the robot will be added to the theta paramters\nDH = [[0 , 0 , 0.75 , th1 ],\n [-sp.pi/2, 0.35 , 0 , th2 - sp.pi/2],\n [0 , 1.25 , 0 , th3 ],\n [-sp.pi/2, -0.054, 1.5 , th4 ],\n [sp.pi/2 , 0 , 0 , th5 ],\n [-sp.pi/2, 0 , 0 , th6 ],\n [0 , 0 , 0.303, 0 ]]\n```\n\n\n```python\n# a help function that builds a transformation matrix given the 4 DH paramters for that joint\ndef LinkTransform(params):\n # params is a list of length 4: alpha, a, d, theta in this order\n alpha = params[0]\n a = params[1]\n d = params[2]\n theta = params[3]\n # returns a Matrix of transformation for the given DH paramters\n return Matrix([[sp.cos(theta) , -sp.sin(theta) , 0 , a],\n [sp.sin(theta)*sp.cos(alpha), sp.cos(theta)*sp.cos(alpha), -sp.sin(alpha), -sp.sin(alpha)*d],\n [sp.sin(theta)*sp.sin(alpha), sp.cos(theta)*sp.sin(alpha), sp.cos(alpha) , sp.cos(alpha)*d],\n [0 , 0 , 0 , 1]])\n```\n\n\n```python\nT34 = LinkTransform(DH[3])\nT45 = LinkTransform(DH[4])\nT56 = LinkTransform(DH[5])\nT36 = sp.simplify(T34 * T45 * T56)\nprint(T36[0:3,0:3])\n```\n\n Matrix([\n [-sin(th4)*sin(th6) + cos(th4)*cos(th5)*cos(th6), -sin(th4)*cos(th6) - sin(th6)*cos(th4)*cos(th5), -sin(th5)*cos(th4)],\n [ sin(th5)*cos(th6), -sin(th5)*sin(th6), cos(th5)],\n [-sin(th4)*cos(th5)*cos(th6) - sin(th6)*cos(th4), sin(th4)*sin(th6)*cos(th5) - cos(th4)*cos(th6), sin(th4)*sin(th5)]])\n\n\n\n```python\n# builds a transformation matrix for the orientation adjustment of the gripper\n# so that we are consistent with the URDF representaiton of the gripper\ndef GripperAdjust(r_z = sp.pi, r_y = -sp.pi/2):\n R_z = Matrix([[sp.cos(r_z), -sp.sin(r_z), 0, 0],\n [sp.sin(r_z), sp.cos(r_z) , 0, 0],\n [0 , 0 , 1, 0],\n [0 , 0 , 0, 1]])\n R_y = Matrix([[sp.cos(r_y) , 0 , sp.sin(r_y), 0],\n [0 , 1 , 0 , 0],\n [-sp.sin(r_y), 0 , sp.cos(r_y), 0],\n [0 , 0 , 0 , 1]])\n return sp.simplify(R_z * R_y)\n```\n\n\n```python\nT01 = LinkTransform(DH[0])\nprint('T01 = '+str(sp.simplify(T01))+'\\n')\nT12 = LinkTransform(DH[1])\nprint('T12 = 
'+str(sp.simplify(T12))+'\\n')\nT23 = LinkTransform(DH[2])\nprint('T23 = '+str(sp.simplify(T23))+'\\n')\nT34 = LinkTransform(DH[3])\nprint('T34 = '+str(sp.simplify(T34))+'\\n')\nT45 = LinkTransform(DH[4])\nprint('T45 = '+str(sp.simplify(T45))+'\\n')\nT56 = LinkTransform(DH[5])\nprint('T56 = '+str(sp.simplify(T56))+'\\n')\nT6G = LinkTransform(DH[6])\nprint('T6G = '+str(sp.simplify(T6G))+'\\n')\nprint('TGA = '+str(GripperAdjust()))\n```\n\n T01 = Matrix([\n [cos(th1), -sin(th1), 0, 0],\n [sin(th1), cos(th1), 0, 0],\n [ 0, 0, 1, 0.75],\n [ 0, 0, 0, 1]])\n \n T12 = Matrix([\n [sin(th2), cos(th2), 0, 0.35],\n [ 0, 0, 1, 0],\n [cos(th2), -sin(th2), 0, 0],\n [ 0, 0, 0, 1]])\n \n T23 = Matrix([\n [cos(th3), -sin(th3), 0, 1.25],\n [sin(th3), cos(th3), 0, 0],\n [ 0, 0, 1, 0],\n [ 0, 0, 0, 1]])\n \n T34 = Matrix([\n [ cos(th4), -sin(th4), 0, -0.054],\n [ 0, 0, 1, 1.5],\n [-sin(th4), -cos(th4), 0, 0],\n [ 0, 0, 0, 1]])\n \n T45 = Matrix([\n [cos(th5), -sin(th5), 0, 0],\n [ 0, 0, -1, 0],\n [sin(th5), cos(th5), 0, 0],\n [ 0, 0, 0, 1]])\n \n T56 = Matrix([\n [ cos(th6), -sin(th6), 0, 0],\n [ 0, 0, 1, 0],\n [-sin(th6), -cos(th6), 0, 0],\n [ 0, 0, 0, 1]])\n \n T6G = Matrix([\n [1, 0, 0, 0],\n [0, 1, 0, 0],\n [0, 0, 1, 0.303],\n [0, 0, 0, 1]])\n \n TGA = Matrix([\n [0, 0, 1, 0],\n [0, -1, 0, 0],\n [1, 0, 0, 0],\n [0, 0, 0, 1]])\n\n\n\n```python\nT0G = sp.simplify(T01*T12*T23*T34*T45*T56*T6G)\nprint('T0G unajusted = '+str(T0G))\n```\n\n T0G unajusted = Matrix([\n [((sin(th1)*sin(th4) + sin(th2 + th3)*cos(th1)*cos(th4))*cos(th5) + sin(th5)*cos(th1)*cos(th2 + th3))*cos(th6) - (-sin(th1)*cos(th4) + sin(th4)*sin(th2 + th3)*cos(th1))*sin(th6), -((sin(th1)*sin(th4) + sin(th2 + th3)*cos(th1)*cos(th4))*cos(th5) + sin(th5)*cos(th1)*cos(th2 + th3))*sin(th6) + (sin(th1)*cos(th4) - sin(th4)*sin(th2 + th3)*cos(th1))*cos(th6), -(sin(th1)*sin(th4) + sin(th2 + th3)*cos(th1)*cos(th4))*sin(th5) + cos(th1)*cos(th5)*cos(th2 + th3), -0.303*sin(th1)*sin(th4)*sin(th5) + 1.25*sin(th2)*cos(th1) - 0.303*sin(th5)*sin(th2 + th3)*cos(th1)*cos(th4) - 0.054*sin(th2 + th3)*cos(th1) + 0.303*cos(th1)*cos(th5)*cos(th2 + th3) + 1.5*cos(th1)*cos(th2 + th3) + 0.35*cos(th1)],\n [ ((sin(th1)*sin(th2 + th3)*cos(th4) - sin(th4)*cos(th1))*cos(th5) + sin(th1)*sin(th5)*cos(th2 + th3))*cos(th6) - (sin(th1)*sin(th4)*sin(th2 + th3) + cos(th1)*cos(th4))*sin(th6), -((sin(th1)*sin(th2 + th3)*cos(th4) - sin(th4)*cos(th1))*cos(th5) + sin(th1)*sin(th5)*cos(th2 + th3))*sin(th6) - (sin(th1)*sin(th4)*sin(th2 + th3) + cos(th1)*cos(th4))*cos(th6), -(sin(th1)*sin(th2 + th3)*cos(th4) - sin(th4)*cos(th1))*sin(th5) + sin(th1)*cos(th5)*cos(th2 + th3), 1.25*sin(th1)*sin(th2) - 0.303*sin(th1)*sin(th5)*sin(th2 + th3)*cos(th4) - 0.054*sin(th1)*sin(th2 + th3) + 0.303*sin(th1)*cos(th5)*cos(th2 + th3) + 1.5*sin(th1)*cos(th2 + th3) + 0.35*sin(th1) + 0.303*sin(th4)*sin(th5)*cos(th1)],\n [ -(sin(th5)*sin(th2 + th3) - cos(th4)*cos(th5)*cos(th2 + th3))*cos(th6) - sin(th4)*sin(th6)*cos(th2 + th3), (sin(th5)*sin(th2 + th3) - cos(th4)*cos(th5)*cos(th2 + th3))*sin(th6) - sin(th4)*cos(th6)*cos(th2 + th3), -sin(th5)*cos(th4)*cos(th2 + th3) - sin(th2 + th3)*cos(th5), -0.303*sin(th5)*cos(th4)*cos(th2 + th3) - 0.303*sin(th2 + th3)*cos(th5) - 1.5*sin(th2 + th3) + 1.25*cos(th2) - 0.054*cos(th2 + th3) + 0.75],\n [ 0, 0, 0, 1]])\n\n\n\n```python\nT0GA = sp.simplify(T0G*GripperAdjust())\nprint('T0G ajusted = '+str(T0GA))\n```\n\n T0G ajusted = Matrix([\n [-(sin(th1)*sin(th4) + sin(th2 + th3)*cos(th1)*cos(th4))*sin(th5) + cos(th1)*cos(th5)*cos(th2 + th3), ((sin(th1)*sin(th4) + sin(th2 + 
th3)*cos(th1)*cos(th4))*cos(th5) + sin(th5)*cos(th1)*cos(th2 + th3))*sin(th6) - (sin(th1)*cos(th4) - sin(th4)*sin(th2 + th3)*cos(th1))*cos(th6), ((sin(th1)*sin(th4) + sin(th2 + th3)*cos(th1)*cos(th4))*cos(th5) + sin(th5)*cos(th1)*cos(th2 + th3))*cos(th6) + (sin(th1)*cos(th4) - sin(th4)*sin(th2 + th3)*cos(th1))*sin(th6), -0.303*sin(th1)*sin(th4)*sin(th5) + 1.25*sin(th2)*cos(th1) - 0.303*sin(th5)*sin(th2 + th3)*cos(th1)*cos(th4) - 0.054*sin(th2 + th3)*cos(th1) + 0.303*cos(th1)*cos(th5)*cos(th2 + th3) + 1.5*cos(th1)*cos(th2 + th3) + 0.35*cos(th1)],\n [-(sin(th1)*sin(th2 + th3)*cos(th4) - sin(th4)*cos(th1))*sin(th5) + sin(th1)*cos(th5)*cos(th2 + th3), ((sin(th1)*sin(th2 + th3)*cos(th4) - sin(th4)*cos(th1))*cos(th5) + sin(th1)*sin(th5)*cos(th2 + th3))*sin(th6) + (sin(th1)*sin(th4)*sin(th2 + th3) + cos(th1)*cos(th4))*cos(th6), ((sin(th1)*sin(th2 + th3)*cos(th4) - sin(th4)*cos(th1))*cos(th5) + sin(th1)*sin(th5)*cos(th2 + th3))*cos(th6) - (sin(th1)*sin(th4)*sin(th2 + th3) + cos(th1)*cos(th4))*sin(th6), 1.25*sin(th1)*sin(th2) - 0.303*sin(th1)*sin(th5)*sin(th2 + th3)*cos(th4) - 0.054*sin(th1)*sin(th2 + th3) + 0.303*sin(th1)*cos(th5)*cos(th2 + th3) + 1.5*sin(th1)*cos(th2 + th3) + 0.35*sin(th1) + 0.303*sin(th4)*sin(th5)*cos(th1)],\n [ -sin(th5)*cos(th4)*cos(th2 + th3) - sin(th2 + th3)*cos(th5), -(sin(th5)*sin(th2 + th3) - cos(th4)*cos(th5)*cos(th2 + th3))*sin(th6) + sin(th4)*cos(th6)*cos(th2 + th3), -(sin(th5)*sin(th2 + th3) - cos(th4)*cos(th5)*cos(th2 + th3))*cos(th6) - sin(th4)*sin(th6)*cos(th2 + th3), -0.303*sin(th5)*cos(th4)*cos(th2 + th3) - 0.303*sin(th2 + th3)*cos(th5) - 1.5*sin(th2 + th3) + 1.25*cos(th2) - 0.054*cos(th2 + th3) + 0.75],\n [ 0, 0, 0, 1]])\n\n\n\n```python\n# builds the transformation matrices based on the DH and the theta paramters\n# it resurns the final transformation matrix\ndef ChainLinkTransform(DH, show = False):\n for i in range(len(DH)):\n T = LinkTransform(DH[i])\n if i == 0:\n result = T\n else:\n result = sp.simplify(result * T)\n\n if show:\n print(\"T%d_%d = \" % (i, i+1))\n print(T)\n print(\"T0_%d = \" % (i+1))\n print(result)\n \n return sp.simplify(result * GripperAdust())\n```\n\n\n```python\n# extracts the position and orientation from the quaternion matrix\ndef OrientationFromQuaternion(Q):\n pos = Q[:,3].T\n orient = [sp.atan2(Q[2,1], Q[2,2]),\n sp.atan2(-Q[2,0], sp.sqrt(Q[0,0]**2 + Q[1,0]**2)),\n sp.atan2(Q[1,0], Q[0,0])]\n \n return pos[0:3], orient\n```\n\n\n```python\ndef CalculateEffector(sym, thetas):\n res = sym.evalf(subs={th1: thetas[0], th2: thetas[1], th3: thetas[2], \n th4: thetas[3], th5: thetas[4], th6: thetas[5]})\n pos, orient = OrientationFromQuaternion(res)\n print(\"pos = \"+str(pos))\n print(\"orient = \"+str(orient)) \n```\n\n\n```python\nsymtransf = ChainLinkTransform(DH)\n```\n\n\n```python\nth = [0, 0, 0, 0, 0, 0]\nCalculateEffector(symtransf, th)\n```\n\nROS Reported:
    \n`Trans = [2.153, 0.000, 1.947]\nRot = [0.000, 0.000, 0.000]`\n\n\n```python\nth = [0.99, 0, 0, 0, 0, 0, 0]\nCalculateEffector(symtransf, th)\n```\n\nROS Reported:
    \nTrans = [1.173, 1.805, 1.947]
    \nRot = [0.000, 0.000, 0.994]\n\n\n```python\nth = [0.99, 0, 0, 0, 0, 0.72, 0]\nCalculateEffector(symtransf, th)\n```\n\nROS Reported:
    \nTrans = [1.173, 1.805, 1.947]
    \nRot = [0.720, 0.0, 0.994]\n\n\n```python\nth = [0.99, 0, 0, 0, 0.49, 0.72, 0]\nCalculateEffector(symtransf, th)\n```\n\nROS Reported:
    \nTrans = [1.154, 1.775, 1.804]
    \nRot = [0.720, 0.489, 0.994]\n", "meta": {"hexsha": "e67855855b9c8a3f40a1cb726a94e0f9bacff164", "size": 16773, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Term 1/Project 2 - Robotic Arm - Pick and Place/submission/forward_kinematics-sympy.ipynb", "max_stars_repo_name": "sonelu/rsend", "max_stars_repo_head_hexsha": "dc48761f5b582091d120be5dd67d5aec266718b1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Term 1/Project 2 - Robotic Arm - Pick and Place/submission/forward_kinematics-sympy.ipynb", "max_issues_repo_name": "sonelu/rsend", "max_issues_repo_head_hexsha": "dc48761f5b582091d120be5dd67d5aec266718b1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Term 1/Project 2 - Robotic Arm - Pick and Place/submission/forward_kinematics-sympy.ipynb", "max_forks_repo_name": "sonelu/rsend", "max_forks_repo_head_hexsha": "dc48761f5b582091d120be5dd67d5aec266718b1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 40.2230215827, "max_line_length": 698, "alphanum_fraction": 0.4124485781, "converted": true, "num_tokens": 4190, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9481545348152283, "lm_q2_score": 0.8688267643505193, "lm_q1q2_score": 0.8237820365877866}} {"text": "# Basic stats using `Scipy`\nIn this example we will go over how to draw samples from various built in probability distributions and define your own custom distributions.\n\n## Packages being used\n+ `scipy`: has all the stats stuff\n+ `numpy`: has all the array stuff\n\n## Relevant documentation\n+ `scipy.stats`: http://docs.scipy.org/doc/scipy/reference/tutorial/stats.html, http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.rv_continuous.html#scipy.stats.rv_continuous, http://docs.scipy.org/doc/scipy/reference/stats.html#module-scipy.stats\n\n\n```python\nimport numpy as np\nimport scipy.stats as st\n# some special functions we will make use of later on\nfrom scipy.special import erfc\nfrom matplotlib import pyplot as plt\nfrom astropy.visualization import hist\nimport mpl_style\n%matplotlib inline\nplt.style.use(mpl_style.style1)\n```\n\nThere are many probability distributions that are already available in `scipy`: http://docs.scipy.org/doc/scipy/reference/stats.html#module-scipy.stats. These classes allow for the evaluations of PDFs, CDFs, PPFs, moments, random draws, and fitting. 
As an example lets take a look at the normal distribution.\n\n\n```python\nnorm = st.norm(loc=0, scale=1)\nx = np.linspace(-5, 5, 1000)\n\nplt.figure(1, figsize=(8, 10))\nplt.subplot2grid((2, 2), (0, 0))\nplt.plot(x, norm.pdf(x))\nplt.xlabel('x')\nplt.ylabel('PDF(x)')\nplt.xlim(-5, 5)\n\nplt.subplot2grid((2, 2), (0, 1))\nplt.plot(x, norm.cdf(x))\nplt.xlabel('x')\nplt.ylabel('CDF(x)')\nplt.xlim(-5, 5)\n\nplt.subplot2grid((2, 2), (1, 0))\nsample_norm = norm.rvs(size=100000)\nhist(sample_norm, bins='knuth', histtype='step', lw=1.5, density=True)\nplt.xlabel('x')\nplt.ylabel('Random Sample')\nplt.tight_layout()\n```\n\nYou can calculate moments and fit data:\n\n\n```python\nfor i in range(4):\n print('moment {0}: {1}'.format(i+1, norm.moment(i+1)))\n\nprint('best fit: {0}'.format(st.norm.fit(sample_norm)))\n```\n\n moment 1: 0.0\n moment 2: 1.0\n moment 3: 0.0\n moment 4: 3.0\n best fit: (0.0023590267327017016, 1.0016772649523067)\n\n\n# Custom probability distributions\nSometimes you need to use obscure PDFs that are not already in `scipy` or `astropy`. When this is the case you can make your own subclass of `st.rv_continuous` and overwrite the `_pdf` or `_cdf` methods. This new sub class will act exactly like the built in distributions.\n\nThe methods you can override in the subclass are:\n\n+ \\_rvs: create a random sample drawn from the distribution\n+ \\_pdf: calculate the PDF at any point\n+ \\_cdf: calculate the CDF at any point\n+ \\_sf: survival function, a.k.a. 1-CDF(x)\n+ \\_ppf: percent point function, a.k.a. inverse CDF\n+ \\_isf: inverse survival function\n+ \\_stats: function that calculates the first 4 moments\n+ \\_munp: function that calculates the nth moment\n+ \\_entropy: differential entropy\n+ \\_argcheck: function to check the input arguments are valid (e.g. var>0)\n\nYou should override any method you have analytic functions for, otherwise (typically slow) numerical integration, differentiation, and function inversion are used to transform the ones that are specified.\n\n## The exponentially modified Gaussian distribution\nAs and example lets create a class for the EMG distribution (https://en.wikipedia.org/wiki/Exponentially_modified_Gaussian_distribution). This is the distributions resulting from the sum of a Gaussian random variable and an exponential random variable. 
The PDF and CDF are:\n\n\\begin{align}\nf(x;\\mu,\\sigma, \\lambda) & = \\frac{\\lambda}{2} \\exp{\\left( \\frac{\\lambda}{2} \\left[ 2\\mu+\\lambda\\sigma^{2}-2x \\right] \\right)} \\operatorname{erfc}{\\left( \\frac{\\mu + \\lambda\\sigma^{2}-x}{\\sigma\\sqrt{2}} \\right)} \\\\\nF(x; \\mu, \\sigma, \\lambda) & = \\Phi(u, 0, v) - \\Phi(u, v^2, v) \\exp{\\left( -u + \\frac{v^2}{2} \\right)} \\\\\n\\Phi(x, a, b) & = \\frac{1}{2} \\left[ 1 + \\operatorname{erf}{\\left( \\frac{x - a}{b\\sqrt{2}} \\right)} \\right] \\\\\nu & = \\lambda(x - \\mu) \\\\\nv & = \\lambda\\sigma\n\\end{align}\n\n\n```python\n# create a generating class\nclass EMG_gen1(st.rv_continuous):\n def _pdf(self, x, mu, sig, lam):\n u = 0.5 * lam * (2 * mu + lam * sig**2 - 2 * x)\n v = (mu + lam * sig**2 - x)/(sig * np.sqrt(2))\n return 0.5 * lam * np.exp(u) * erfc(v)\n def _cdf(self, x, mu, sig, lam):\n u = lam * (x - mu)\n v = lam * sig\n phi1 = st.norm.cdf(u, loc=0, scale=v)\n phi2 = st.norm.cdf(u, loc=v**2, scale=v)\n return phi1 - phi2 * np.exp(-u + 0.5 * v**2)\n def _stats(self, mu, sig, lam):\n # reutrn the mean, variance, skewness, and kurtosis\n mean = mu + 1 / lam\n var = sig**2 + 1 / lam**2\n sl = sig * lam\n u = 1 + 1 / sl**2\n skew = (2 / sl**3) * u**(-3 / 2)\n v = 3 * (1 + 2 / sl**2 + 3 / sl**4) / u**2\n kurt = v - 3\n return mean, var, skew, kurt\n def _argcheck(self, mu, sig, lam):\n return np.isfinite(mu) and (sig > 0) and (lam > 0)\n\nclass EMG_gen2(EMG_gen1):\n def _ppf(self, q, mu, sig, lam):\n # use linear interpolation to solve this faster (not exact, but much faster than the built in method)\n # pick range large enough to fit the full cdf\n var = sig**2 + 1 / lam**2\n x = np.arange(mu - 50 * np.sqrt(var), mu + 50 * np.sqrt(var), 0.01)\n y = self.cdf(x, mu, sig, lam)\n return np.interp(q, y, x)\n\nclass EMG_gen3(EMG_gen1):\n def _rvs(self, mu, sig, lam):\n # redefine the random sampler to sample based on a normal and exp dist\n return st.norm.rvs(loc=mu, scale=sig, size=self._size) + st.expon.rvs(loc=0, scale=1/lam, size=self._size)\n\n# use generator to make the new class\nEMG1 = EMG_gen1(name='EMG1')\nEMG2 = EMG_gen2(name='EMG2')\nEMG3 = EMG_gen3(name='EMG3')\n```\n\nLets look at how long it takes to create readom samples for each of these version of the EMG:\n\n\n```python\n%time EMG1.rvs(0, 1, 0.5, size=1000)\nprint('=========')\n%time EMG2.rvs(0, 1, 0.5, size=1000)\nprint('=========')\n%time EMG3.rvs(0, 1, 0.5, size=1000)\nprint('=========')\n```\n\n CPU times: user 3.88 s, sys: 7.93 ms, total: 3.88 s\n Wall time: 3.87 s\n =========\n CPU times: user 3.38 ms, sys: 0 ns, total: 3.38 ms\n Wall time: 3.34 ms\n =========\n CPU times: user 973 \u00b5s, sys: 1.62 ms, total: 2.6 ms\n Wall time: 20 ms\n =========\n\n\n /mnt/lustre/shared_python_environment/DataLanguages/lib/python3.8/site-packages/scipy/stats/_distn_infrastructure.py:1083: VisibleDeprecationWarning: The signature of > does not contain a \"size\" keyword. 
Such signatures are deprecated.\n warnings.warn(\n\n\nAs you can see, the numerical inversion of the CDF is very slow, the approximation to the inversion is much faster, and defining `_rvs` in terms of the `normal` and `exp` distributions is the fastest.\n\nLets take a look at the results for `EMG3`:\n\n\n```python\ndist = EMG3(0, 1, 0.5)\nx = np.linspace(-5, 20, 1000)\n\nplt.figure(2, figsize=(8, 10))\nplt.subplot2grid((2, 2), (0, 0))\nplt.plot(x, dist.pdf(x))\nplt.xlabel('x')\nplt.ylabel('PDF(x)')\n\nplt.subplot2grid((2, 2), (0, 1))\nplt.plot(x, dist.cdf(x))\nplt.xlabel('x')\nplt.ylabel('CDF(x)')\n\nplt.subplot2grid((2, 2), (1, 0))\nsample_emg = dist.rvs(size=10000)\nhist(sample_emg, bins='knuth', histtype='step', lw=1.5, density=True)\nplt.xlabel('x')\nplt.ylabel('Random Sample')\nplt.tight_layout()\n```\n\nAs with the built in functions we can calculate moments and do fits to data. **Note** Since we are not using the built in `loc` and `scale` params they are fixed to 0 and 1 in the fit below.\n\n\n```python\nfor i in range(4):\n print('moment {0}: {1}'.format(i+1, dist.moment(i+1)))\n\nprint('best fit: {0}'.format(EMG3.fit(sample_emg, floc=0, fscale=1)))\n```\n\n moment 1: 2.0\n moment 2: 9.0\n moment 3: 54.0\n moment 4: 435.0\n best fit: (-0.02693863999005528, 1.0210387739316098, 0.49836860851157133, 0, 1)\n\n\nFor reference here is how `scipy` defines this distribution (found under the name `exponnorm`):\n\n\n```python\nimport scipy.stats._continuous_distns as cd\nnp.source(cd.exponnorm_gen)\n```\n\n In file: /mnt/lustre/shared_python_environment/DataLanguages/lib/python3.8/site-packages/scipy/stats/_continuous_distns.py\n \n class exponnorm_gen(rv_continuous):\n r\"\"\"An exponentially modified Normal continuous random variable.\n \n Also known as the exponentially modified Gaussian distribution [1]_.\n \n %(before_notes)s\n \n Notes\n -----\n The probability density function for `exponnorm` is:\n \n .. math::\n \n f(x, K) = \\frac{1}{2K} \\exp\\left(\\frac{1}{2 K^2} - x / K \\right)\n \\text{erfc}\\left(-\\frac{x - 1/K}{\\sqrt{2}}\\right)\n \n where :math:`x` is a real number and :math:`K > 0`.\n \n It can be thought of as the sum of a standard normal random variable\n and an independent exponentially distributed random variable with rate\n ``1/K``.\n \n %(after_notes)s\n \n An alternative parameterization of this distribution (for example, in\n the Wikpedia article [1]_) involves three parameters, :math:`\\mu`,\n :math:`\\lambda` and :math:`\\sigma`.\n \n In the present parameterization this corresponds to having ``loc`` and\n ``scale`` equal to :math:`\\mu` and :math:`\\sigma`, respectively, and\n shape parameter :math:`K = 1/(\\sigma\\lambda)`.\n \n .. versionadded:: 0.16.0\n \n References\n ----------\n .. 
[1] Exponentially modified Gaussian distribution, Wikipedia,\n https://en.wikipedia.org/wiki/Exponentially_modified_Gaussian_distribution\n \n %(example)s\n \n \"\"\"\n def _rvs(self, K, size=None, random_state=None):\n expval = random_state.standard_exponential(size) * K\n gval = random_state.standard_normal(size)\n return expval + gval\n \n def _pdf(self, x, K):\n return np.exp(self._logpdf(x, K))\n \n def _logpdf(self, x, K):\n invK = 1.0 / K\n exparg = invK * (0.5 * invK - x)\n return exparg + _norm_logcdf(x - invK) - np.log(K)\n \n def _cdf(self, x, K):\n invK = 1.0 / K\n expval = invK * (0.5 * invK - x)\n logprod = expval + _norm_logcdf(x - invK)\n return _norm_cdf(x) - np.exp(logprod)\n \n def _sf(self, x, K):\n invK = 1.0 / K\n expval = invK * (0.5 * invK - x)\n logprod = expval + _norm_logcdf(x - invK)\n return _norm_cdf(-x) + np.exp(logprod)\n \n def _stats(self, K):\n K2 = K * K\n opK2 = 1.0 + K2\n skw = 2 * K**3 * opK2**(-1.5)\n krt = 6.0 * K2 * K2 * opK2**(-2)\n return K, opK2, skw, krt\n \n\n\n\n```python\n%time st.exponnorm.rvs(0.5, size=1000)\nprint('=========')\n```\n\n CPU times: user 868 \u00b5s, sys: 0 ns, total: 868 \u00b5s\n Wall time: 576 \u00b5s\n =========\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "f4fb2380722396cd15c6a55ec813789420d6faf0", "size": 103655, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Stats_with_Scipy.ipynb", "max_stars_repo_name": "CKrawczyk/jupyter_data_languages", "max_stars_repo_head_hexsha": "48bfd4121a4c0014f2e8d56edec4ad55a7b8015f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 10, "max_stars_repo_stars_event_min_datetime": "2016-10-15T22:32:18.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-30T10:44:37.000Z", "max_issues_repo_path": "Stats_with_Scipy.ipynb", "max_issues_repo_name": "CKrawczyk/jupyter_data_languages", "max_issues_repo_head_hexsha": "48bfd4121a4c0014f2e8d56edec4ad55a7b8015f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Stats_with_Scipy.ipynb", "max_forks_repo_name": "CKrawczyk/jupyter_data_languages", "max_forks_repo_head_hexsha": "48bfd4121a4c0014f2e8d56edec4ad55a7b8015f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 7, "max_forks_repo_forks_event_min_datetime": "2016-10-17T13:44:56.000Z", "max_forks_repo_forks_event_max_datetime": "2019-01-24T18:59:13.000Z", "avg_line_length": 210.6808943089, "max_line_length": 47224, "alphanum_fraction": 0.9019150065, "converted": true, "num_tokens": 3356, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9252299591537478, "lm_q2_score": 0.8902942290328345, "lm_q1q2_score": 0.8237268931628668}} {"text": "# Homework 14: Nonlinear Equations\n\n\n### Problem 1\nUse fsolve to find the roots of the polynomial $f(x) = 2x^2 + 3x - 10$.\n\n\n```python\nimport numpy as np\nfrom scipy.optimize import fsolve\n```\n\n\n```python\n\n```\n\n### Problem 2\n\nUse fsolve to find the solution of the following two equations:\n\\begin{align}\nf(x,y) &= 2x^{2/3}+y^{2/3}-9^{1/3} \\\\\ng(x,y) &= \\frac{x^2}{4} + \\sqrt{y} - 1.\n\\end{align}\nUse an initial guess of $x_0=1$, $y_0$ = 1.\n\n\n```python\n\n```\n\n### Problem 3\n\n\n```python\n# import or install wget\ntry:\n import wget\nexcept:\n try:\n from pip import main as pipmain\n except:\n from pip._internal import main as pipmain\n pipmain(['install','wget'])\n import wget\n\n# retrieve thermoData.yaml\nurl = 'https://apmonitor.com/che263/uploads/Main/thermoData.yaml'\nfilename = wget.download(url)\nprint('')\nprint('Retrieved thermoData.yaml')\n```\n\n 100% [..............................................................................] 14985 / 14985\n Retrieved thermoData.yaml\n\n\nCompute the adiabatic flame temperature for a stoichiometric methane-air flame. The code is given below. There is a thermo class that is modified from your last homework. Also, you'll need thermoData.yaml again. Then there is a function to define. Fill in the blanks as indicated. You should also read all of the code given below and make sure you understand it.\n\n**Equation Summary:** \n* Your function (started for you below) is: ```f_flame(Ta) = 0```. \n* That is, $f_{flame}(T_a) = 0 = H_r(T_r) - H_p(T_a) = 0$. \n * $T_a$ is the unknown.\n * $T_r = 300\\,K$\n * $H_r(T_r) = y_{CH4}h_{CH4}(T_r) + y_{O2}h_{O2}(T_r) + y_{N2}h_{N2}(T_r)$.\n * $H_p(T_a) = y_{CO2}h_{CO2}(T_a) + y_{H2O}h_{H2O}(T_a) + y_{N2}h_{N2}(T_a)$.\n * $y_i = m_i/m_t$. \n * $m_i = n_iM_i$.\n * $n_i$ and $M_i$ are given.\n * $m_t = \\sum_im_i$.\n * **Do these separately for reactants and products** That is: $m_t = m_{O2}+m_{N2}+m_{CH4}$ for the reactants. (Also $m_t$ is the same for products since mass is conserved.)\n * $h_i$ is computed using the thermo class. So, if ```t_CO2``` is my thermo class object for $CO_2$, then ```h_CO2=t_CO2.h_mass(T)```. \n \n\n**Description:**\n* We have a chemical reaction:\n * $CH_4 + 2O_2 + 7.52N_2 \\rightarrow CO_2 + 2H_2O$ + 7.52$N_2$.\n* You can think of the burning as potential energy stored in the reactant bonds being released as kinetic energy in the products so the product temperature is higher.\n* Adiabatic means there is no enthalpy loss. You can think of enthalpy as energy. This means the products have the same enthalpy as the reactants. And this is just a statement that energy is conserved, like mass is.\n* The idea is to take a known reactant temperature, find the reactant enthalpy (which is an easy explicit equation you can calculate directly), then set the product enthalpy equal to the reactant enthalpy and find the corresponding product temperature (which is a harder nonlinear solve).\n * $T_r\\rightarrow h_r = h_p \\rightarrow T_p$.\n* The reactants start at room temperature, $T=300\\,K$, so we can compute their enthalpy.\n * We know the moles of reactants: $n_{ch4}=1$, $n_{O2}=2$, $n_{N2}=7.52$. \n * So, we can compute the corresponding masses using the molecular weights.\n * Then we sum the masses of each species to get the total mass, and compute the mass fractions.\n * Then we can compute the enthalpy as $h=\\sum_iy_ih_i$. 
That is, the total enthalpy is the sum of the enthalpy per unit mass of each species times the mass fraction of each species.\n * For reactants we have $h_r = y_{CH4}h_{CH4}+y_{O2}h_{O2}+y_{N2}h_{N2}$, where $h_i$ are evaluated using the class function h_mass(T), and T=300 for reactants.\n* Now, $h_p=h_r$. For products, we have $h_p = y_{CO2}h_{CO2}+y_{H2O}h_{H2O}+y_{N2}h_{N2}$, where we evaluate the class function h_mass(Tp), where Tp is the product temperature we are trying to compute.\n * Solving for $T_p$ amounts to solving $f(T_p)=0$, where $$f(T_p) = h_p - y_{CO2}h_{CO2}(T_p)+y_{H2O}h_{H2O}(T_p)+y_{N2}h_{N2}(T_p)$$.\n\n\n\n\n\n\n\n```python\nimport numpy as np\nfrom scipy.optimize import fsolve\nimport yaml\n\nclass thermo:\n def __init__(self, species, MW) :\n \"\"\"\n species: input string name of species in thermoData.yaml\n M: input (species molecular weight, kg/kmol)\n \"\"\"\n self.Rgas = 8314.46 # J/kmol*K\n self.M = MW\n with open(\"thermoData.yaml\") as yfile : \n yfile = yaml.load(yfile)\n self.a_lo = yfile[species][\"a_lo\"]\n self.a_hi = yfile[species][\"a_hi\"]\n self.T_lo = 300.\n self.T_mid = 1000.\n self.T_hi = 3000.\n \n #--------------------------------------------------------\n def h_mole(self,T) :\n \"\"\"\n return enthalpy in units of J/kmol\n T: input (K)\n \"\"\"\n if T<=self.T_mid and T>=self.T_lo :\n a = self.a_lo\n elif T>self.T_mid and T<=self.T_hi :\n a = self.a_hi\n else :\n print (\"ERROR: temperature is out of range\")\n hrt = a[0] + a[1]/2.0*T + a[2]/3.0*T*T + a[3]/4.0*T**3.0 + a[4]/5.0*T**4.0 + a[5]/T\n return hrt * self.Rgas * T\n \n #--------------------------------------------------------\n def h_mass(self,T) :\n \"\"\"\n return enthalpy in units of J/kg\n T: input (K)\n \"\"\"\n return self.h_mole(T)/self.M\n```\n\n\n```python\ndef f_flame(Ta) :\n \"\"\" \n We are solving for hp = sum_i y_i*h_i. In f=0 form this is f = hp - sum_i y_i*h_i\n We know the reactant temperature, so we can compute enthalpy (h). Then we know hp = hr (adiabatic).\n Vary T until sum_i y_i*h_i = hp.\n Steps: \n 1. Given moles --> mass --> mass fractions.\n 2. Make thermo classes for each species.\n 3. Compute hr = sum_i y_i*h_i.\n ... Do this for the reactants, then products. \n \"\"\"\n no2 = 2. # kmol\n nch4 = 1. \n nn2 = 7.52 \n nco2 = 1.\n nh2o = 2.\n Mo2 = 32. # kg/kmol\n Mch4 = 16. \n Mn2 = 28. \n Mco2 = 44.\n Mh2o = 18.\n mo2 = no2*Mo2 # mass \n mch4 = nch4*Mch4 # mass \n mn2 = nn2*Mn2 # mass \n mh2o = nh2o*Mh2o\n mco2 = nco2*Mco2\n t_o2 = thermo(\"O2\",Mo2) # thermo object; use as: t_o2.h_mass(T) to get h_O2, etc.\n t_ch4 = thermo(\"CH4\",Mch4)\n t_n2 = thermo(\"N2\",Mn2)\n t_co2 = thermo(\"CO2\",Mco2)\n t_h2o = thermo(\"H2O\",Mh2o)\n\n #-------- Reactants\n # TO DO: compute total mass, then mass fractions\n # TO DO: Set reactant temperature, then compute reactant enthalpy\n \n #---------- Products\n # TO DO: Set the product enthalpy = reactant enthalpy\n # TO DO: Set the product mass fractions\n # TO DO: Compute the enthalpy of the products corresponding to the current Tp\n # Then return the function: f(Tp) = hp - hp_based_on_current_Tp\n```\n\n\n```python\n# TO DO: Set a guess temperature, then solve for the product temperature\n```\n\n### Problem 4\n\n**Example: Solve a system of 6 equations in 6 unknowns**\n\nThis is solving a parallel pipe network where we have three pipes that are connected at the beginning and the end. The pipes can be of different lengths and diameter and pipe roughness. 
Given the total flow rate, and the pipe properties, find the flow rate through each of three parallel pipes.\n* **Unknowns: three flow rates: $Q_1$, $Q_2$, $Q_3$**.\n* We need ***three equations***. \n * We'll label the pipes 1, 2, and 3. \n * **Eq. 1:** $Q_{tot} = Q_1+Q_2+Q_3$.\n * That is, the total flow rate is just the sum through each pipe.\n * Because the pipes are connected, the pressure drop across each pipe is the same: \n * **Eq. 2:** $\\Delta P_1=\\Delta P_2,$\n * **Eq. 3:** $\\Delta P_1=\\Delta P_3$\n* Now we need to relate the pressure drop equations to the unknowns. The pressure is related to the flow rate by:\n * $\\Delta P=\\frac{fL\\rho v^2}{2D}$, and we use $Q=Av=\\frac{\\pi}{4}D^2v\\rightarrow v=\\frac{4Q}{\\pi D^2}$, where $Q$ is volumetric flow rate. Then, substitute for v to get: $$\\Delta P=\\frac{fL\\rho}{2D}\\left(\\frac{4Q}{\\pi D^2}\\right)^2$$\n* Here, $f$ is the friction factor in the pipe. We treat it as an unknown so we have **three more unknowns: $f_1$, $f_2$, $f_3$**. The Colbrook equation relates $f$ to $Q$ for given pipe properties. So, we have **three more equations**.\n\n* Here are the **six equations** in terms of the **six unknowns: $Q_1$, $Q_2$, $Q_3$, $f_1$, $f_2$, $f_3$**.\n 1. $Q_1+Q_2+Q_3-Q_{tot} = 0$.\n 2. $\\frac{f_1L_1\\rho}{2D_1}\\left(\\frac{4Q_1}{\\pi D_1^2}\\right)^2 - \\frac{f_2L_2\\rho}{2D_2}\\left(\\frac{4Q_2}{\\pi D_2^2}\\right)^2 = 0$\n 3. $\\frac{f_1L_1\\rho}{2D_1}\\left(\\frac{4Q_1}{\\pi D_1^2}\\right)^2 - \\frac{f_3L_3\\rho}{2D_3}\\left(\\frac{4Q_3}{\\pi D_3^2}\\right)^2 = 0$\n 4. Colbrook equation relating $f_1$ to $Q_1$: \n $$\\frac{1}{\\sqrt{f_1}}+2\\log_{10}\\left(\\frac{\\epsilon_1}{3.7D_1} + \\frac{2.51\\mu\\pi D_1}{\\rho 4Q_1\\sqrt{f_1}}\\right).$$\n 5. Colbrook equation relating $f_2$ to $Q_2$.\n 6. Colbrook equation relating $f_3$ to $Q_3$.\n \n \n\n* All units are SI.\n\n\n```python\ndef F_pipes(x) :\n Q1 = x[0] # rename the vars so we can read our equations below.\n Q2 = x[1]\n Q3 = x[2]\n f1 = x[3]\n f2 = x[4]\n f3 = x[5]\n \n Qt = 0.01333 # Given total volumetric flow rate\n e1 = 0.00024 # pipe roughness (m) (epsilon in the equation)\n e2 = 0.00012 \n e3 = 0.0002\n L1 = 100 # pipe length (m)\n L2 = 150\n L3 = 80\n D1 = 0.05 # pipe diameter (m)\n D2 = 0.045\n D3 = 0.04\n mu = 1.002E-3 # viscosity (kg/m*s)\n rho = 998. 
# density (kg/m3)\n\n F = np.zeros(6) # initialize the function array\n\n # TO DO: Define the functions here\n \n return F \n \n#--------------------------------------\n# TO DO: make a guess array for the unknowns: Q1, Q2, Q3, f1, f2, f3\n# (use Q3 = Qtot-Q1-Q2 in your guess, for consistency)\n# TO DO: Solve the problem and print the results.\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "bb7179fd05fcbff338da682a036e5c9900f9c5be", "size": 14375, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "python/HW14.ipynb", "max_stars_repo_name": "uw-cheme375/uw-cheme375.github.io", "max_stars_repo_head_hexsha": "5b20393705c4640a9e6af89708730eb08cb15ded", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "python/HW14.ipynb", "max_issues_repo_name": "uw-cheme375/uw-cheme375.github.io", "max_issues_repo_head_hexsha": "5b20393705c4640a9e6af89708730eb08cb15ded", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "python/HW14.ipynb", "max_forks_repo_name": "uw-cheme375/uw-cheme375.github.io", "max_forks_repo_head_hexsha": "5b20393705c4640a9e6af89708730eb08cb15ded", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.4917582418, "max_line_length": 371, "alphanum_fraction": 0.5003130435, "converted": true, "num_tokens": 3360, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8902942203004186, "lm_q2_score": 0.9252299612154571, "lm_q1q2_score": 0.823726886918902}} {"text": "# Programovanie\n\nLetn\u00e1 \u0161kola FKS 2018\n\nMa\u0165o Ga\u017eo, Fero Dr\u00e1\u010dek\n(& vykradnut\u00e9 materi\u00e1ly od Mateja Badina, Feriho Hermana, Kuba, Pe\u0165a, Jarn\u00fdch \u0161k\u00f4l FX a kade-tade po internete)\n\nV tomto kurze si uk\u00e1\u017eeme z\u00e1klady programovania a nau\u010d\u00edme sa programova\u0165 matematiku a fyziku.\nTak\u00e9to vedomosti s\u00fa skvel\u00e9 a budete v\u010faka nim: \n* vedie\u0165 efekt\u00edvnej\u0161ie robi\u0165 dom\u00e1ce \u00falohy\n* kvalitnej\u0161ie rie\u0161i\u0165 semin\u00e1rov\u00e9 a olympi\u00e1dov\u00e9 pr\u00edklady\n* lep\u0161ie rozumie\u0165 svetu (IT je dnes na trhu najr\u00fdchlej\u0161ie rozv\u00edjaj\u00facim sa odvetv\u00edm)\n\nPo\u010d\u00edta\u010d je blb\u00fd a treba mu v\u0161etko poveda\u0165 a vysvetli\u0165. Komunikova\u0165 sa s n\u00edm d\u00e1 na viacer\u00fdch \u00farovniach, my budeme pou\u017e\u00edva\u0165 Python. Python (n\u00e1zov odvoden\u00fd z Monty Python's Flying Circus) je v\u0161eobecn\u00fd programovac\u00ed jazyk, ktor\u00fdm sa daj\u00fa vytv\u00e1ra\u0165 webov\u00e9 str\u00e1nky ako aj robi\u0165 seri\u00f3zne vedeck\u00e9 v\u00fdpo\u010dty. To znamen\u00e1, \u017ee nau\u010di\u0165 sa ho nie je na \u0161kodu a mo\u017eno v\u00e1s raz bude \u017eivi\u0165.\n\nRozhranie, v ktorom p\u00ed\u0161eme k\u00f3d, sa vol\u00e1 Jupyter Notebook. Je to prostredie navrhnut\u00e9 tak, aby sa dalo programova\u0165 doslova v prehliada\u010di a aby sa k\u00f3d dal k\u00faskova\u0165. Pre zbehnutie k\u00faskov programu sta\u010d\u00ed stla\u010di\u0165 Shift+Enter. 
\n\n\n\n# D\u00e1tov\u00e9 typy a oper\u00e1tory\n\n### \u010c\u00edsla\n\n pod\u013ea o\u010dak\u00e1van\u00ed, vracia trojku\n\n\n```python\n3\n```\n\n\n\n\n 3\n\n\n\n\n```python\n2+3 # scitanie\n```\n\n\n\n\n 5\n\n\n\n\n```python\n6-2 # odcitanie\n```\n\n\n\n\n 4\n\n\n\n\n```python\n10*2 # nasobenie\n```\n\n\n\n\n 20\n\n\n\n\n```python\n35/5 # delenie\n```\n\n\n\n\n 7.0\n\n\n\n\n```python\n5//3 # celociselne delenie TODO je toto treba?\n```\n\n\n\n\n 1\n\n\n\n\n```python\n7%3 # modulo\n```\n\n\n\n\n 1\n\n\n\n\n```python\n2**3 # umocnovanie\n```\n\n\n\n\n 8\n\n\n\n\n```python\n4 * (2 + 3) # poradie dodrzane\n```\n\n\n\n\n 20\n\n\n\n### Logick\u00e9 v\u00fdrazy\n\n\n```python\n1 == 1 # logicka rovnost\n```\n\n\n\n\n True\n\n\n\n\n```python\n2 != 3 # logicka nerovnost\n```\n\n\n\n\n True\n\n\n\n\n```python\n1 < 10\n```\n\n\n\n\n True\n\n\n\n\n```python\n1 > 10\n```\n\n\n\n\n False\n\n\n\n\n```python\n2 <= 2\n```\n\n\n\n\n True\n\n\n\n# Premenn\u00e9\n\nToto je premenn\u00e1.\n\nPo stla\u010den\u00ed Shift+Enter program v okienku zbehne a premenn\u00e1 sa ulo\u017e\u00ed do pam\u00e4te (RAMky, v\u0161etko sa deje na RAMke).\n\n\n```python\na = 2\n```\n\nTeraz s \u0148ou mo\u017eno pracova\u0165 ako s be\u017en\u00fdm \u010d\u00edslom.\n\n\n```python\n2 * a\n```\n\n\n\n\n 4\n\n\n\n\n```python\na + a\n```\n\n\n\n\n 4\n\n\n\n\n```python\na + a*a\n```\n\n\n\n\n 6\n\n\n\nMo\u017eno ju aj umocni\u0165.\n\n\n```python\na**3\n```\n\n\n\n\n 8\n\n\n\nPridajme druh\u00fa premenn\u00fa.\n\n\n```python\nb = 5\n```\n\nNasledovn\u00e9 v\u00fdpo\u010dty dopadn\u00fa pod\u013ea o\u010dak\u00e1van\u00ed.\n\n\n```python\na + b\n```\n\n\n\n\n 7\n\n\n\n\n```python\na * b\n```\n\n\n\n\n 10\n\n\n\n\n```python\nb**a\n```\n\n\n\n\n 25\n\n\n\nRe\u00e1lne \u010d\u00edsla m\u00f4\u017eeme zobrazova\u0165 aj vo vedeckej forme: $2.3\\times 10^{-3}$.\n\n\n```python\nd = 2.3e-3\n```\n\n# Funkcie\n\nSpravme si jednoduch\u00fa funkciu, ktor\u00e1 za n\u00e1s s\u010d\u00edta dve \u010d\u00edsla, aby sme sa s t\u00fdm u\u017e nemuseli tr\u00e1pi\u0165 my:\n\n\n```python\ndef scitaj(a, b):\n return a + b\n```\n\n\n```python\nscitaj(10, 12) # vrati sucet\n```\n\n\n\n\n 22\n\n\n\nFunkcia funguje na cel\u00fdch aj re\u00e1lnych \u010d\u00edslach.\n\nNa\u0161a s\u010d\u00edtacia funkcia m\u00e1 __\u0161tyri podstatn\u00e9 veci__:\n1. `def`: toto slovo definuje funkciu.\n2. dvojbodka na konci prv\u00e9ho riadku, odtia\u013e za\u010d\u00edna defin\u00edcia.\n3. Odsadenie k\u00f3du vn\u00fatri funkcie o \u0161tyri medzery.\n4. Samotn\u00fd k\u00f3d. V \u0148om sa m\u00f4\u017ee dia\u0165 \u010doko\u013evek, Python ho postupne prech\u00e1dza.\n5. `return`: k\u013e\u00fa\u010dov\u00e1 vec. Za toto slovo sa p\u00ed\u0161e, \u010do je output funkcie.\n\n### \u00daloha 1\nNap\u00ed\u0161te funkciu `priemer`, ktor\u00e1 zoberie dve \u010d\u00edsla (v\u00fd\u0161ky dvoch chlapcov) a vypo\u010d\u00edta ich priemern\u00fa v\u00fd\u0161ku.\n\nAk m\u00e1\u0161 \u00falohu hotov\u00fa, prihl\u00e1s sa ved\u00facemu.\n\n\n```python\n# Tvoje riesenie:\ndef priemer(prvy, druhy):\n return ((prvy+druhy)/2)\n\npriemer(90,20)\n\n```\n\n\n\n\n 55.0\n\n\n\n# Po\u010fme na fyziku\n\nV tomto momente m\u00f4\u017eeme za\u010da\u0165 pou\u017e\u00edva\u0165 Python ako sofistikovanej\u0161iu kalkula\u010dku a po\u010d\u00edta\u0165 \u0148ou z\u00e1kladn\u00e9 fyzik\u00e1lne probl\u00e9my. 
\nJednoduch\u00fd pr\u00edklad s ktor\u00fdm za\u010dneme: dostanete zadan\u00fdch nieko\u013eko fyzikl\u00e1nych kon\u0161t\u00e1nt ako **premenn\u00e9**.\nPredstavte si, \u017ee m\u00e1te za \u00falohou vypo\u010d\u00edta\u0165 nejak\u00fa fyzik\u00e1lnu veli\u010dinu pre nieko\u013eko zadan\u00fdch hodn\u00f4t. Ve\u013emi pohodn\u00e9 je nap\u00edsa\u0165 si funkciu, do ktorej v\u017edy zad\u00e1me po\u010diato\u010dn\u00e9 hodnoty.\n\nZadan\u00e9 kon\u0161tanty\n\n\n```python\nkb=1.38064852e-23 # Boltzmanova kon\u0161tanta\nG=6.67408e-11 # Gravita\u010dn\u00e1 kon\u0161tanta\n```\n\n## \u00daloha 2\nNap\u00ed\u0161te funkciu kotr\u00e1 spo\u010d\u00edta gravita\u010dn\u00fa silu medzi dvomi telesami pre zadan\u00fa vzdialenos\u0165 $r$ a hmotnosti $m_1$ a $m_2$.\n\nPripom\u00edname, vzorec pre v\u00fdpo\u010det gravita\u010dnej sily je\n$F=G \\frac{m_1 m_2}{r^2}$\n\n\n```python\n# Tvoje riesenie:\n\ndef Sila(m_1, m_2, r):\n F=G* m_1*m_2/r**2\n return (F)\n\nSila(10,10,100)\n```\n\n\n\n\n 6.67408e-13\n\n\n\n## \u00daloha 3\nNap\u00ed\u0161te funkciu kotr\u00e1 spo\u010d\u00edta tlak v n\u00e1dobe s objemom $V$, teplotou $T$ v ktorej je $N$ \u010dast\u00edc.\n\nPripom\u00edname, vzorec pre v\u00fdpo\u010det tlaku je\n$p=\\frac{N kb T}{V}$\n\n\n```python\n# tvoje riesenie:\ndef tlak(N,T,V):\n p=N*kb*T/V\n return p\n\ntlak(6e23, 270,1)\n```\n\n\n\n\n 2236.6506024\n\n\n\n## \u00daloha 4\nNap\u00ed\u0161te funkciu ktor\u00e1 vr\u00e1ti v\u00fdsledn\u00fa r\u00fdchlos\u0165 dvoch guli\u010diek po dokonale pru\u017enej zr\u00e1\u017eke. Funkciu m\u00e1 ma\u0165 ako vstupn\u00e9 argumenty hmotnosti $m_1$, $m_2$ a r\u00fdchlosti $u_1$ a $u_2$ guli\u010diek pred zr\u00e1\u017ekou. V\u00fdstupom bud\u00fa nov\u00e9 r\u00fdchlosti $v_1$ a $v_2$. \n\nHint: Vyu\u017eit\u00edm z\u00e1konu zachovania energie pr\u00eddeme ku nasleduj\u00facim v\u00fdrazom pre nov\u00e9 r\u00fdchlosti.\n\n$v_1=\\frac{u_1 (m_1-m_2)+2 m_2u_2}{m_1+m_2}$\n\n$v_2=\\frac{u_2 (m_2-m_1)+2 m_1u_1}{m_1+m_2}$\n\n\n```python\n# tvoje riesenie:\ndef zrazka(m_1,m_2,u_1,u_2):\n return ((u_1 *(m_1-m_2)+2*m_2*u_2)/(m_1+m_2),(u_2 *(m_2-m_1)+2* m_1*u_1)/(m_1+m_2))\n\nzrazka(1,1,10,-10)\n```\n\n\n\n\n (-10.0, 10.0)\n\n\n\n# Zoznamy\n\nZatia\u013e sme sa zozn\u00e1mili s \u010d\u00edslami (cel\u00e9, re\u00e1lne), stringami a trochu aj logick\u00fdmi hodnotami.\nZo v\u0161etk\u00fdch t\u00fdchto prvkov vieme vytv\u00e1ra\u0165 mno\u017einy, v informatickom jazyku `zoznamy`.\n\nNa \u00favod sa teda pozrieme, ako s vytv\u00e1ra zoznam (po anglicky `list`). Tak\u00fato vec v\u0161eobecne naz\u00fdvame d\u00e1tov\u00e1 \u0161trukt\u00fara.\n\n\n```python\nli = [] # prazdny list\n```\n\n\n```python\nve = [4, 2, 3] # list s cislami\n```\n\n\n```python\nve\n```\n\n\n\n\n [4, 2, 3]\n\n\n\n\n```python\nve[0] # indexovat zaciname nulou!\n```\n\n\n\n\n 4\n\n\n\n\n```python\nve[1]\n```\n\n\n\n\n 2\n\n\n\n\n```python\nve[-1] # vybratie posledneho prvku\n```\n\n\n\n\n 3\n\n\n\n\n```python\nw = [5, 10, 15]\n```\n\n\u010co sa sa stane, ak zoznamy s\u010d\u00edtame? Spoja sa.\n\n\n```python\nve + w\n```\n\n\n\n\n [4, 2, 3, 5, 10, 15]\n\n\n\nM\u00f4\u017eeme ich n\u00e1sobi\u0165?\n\n\n```python\nve * ve\n```\n\nSmola, nem\u00f4\u017eeme. Ale v\u0161imnime si, ak\u00e1 u\u017eito\u010dn\u00e1 je chybov\u00e1 hl\u00e1\u0161ka. Jasne n\u00e1m hovor\u00ed, \u017ee nemo\u017eno n\u00e1sobi\u0165 `list`y.\n\nSo zoznamami m\u00f4\u017eeme robi\u0165 r\u00f4zne in\u00e9 u\u017eito\u010dn\u00e9 veci. 
Napr\u00edklad ich s\u010d\u00edta\u0165.\n\n\n```python\nsum(ve)\n```\n\n\n\n\n 9\n\n\n\nAlebo zisti\u0165 d\u013a\u017eku:\n\n\n```python\nlen(ve)\n```\n\n\n\n\n 3\n\n\n\nAlebo ich utriedi\u0165:\n\nAlebo na koniec prida\u0165 nov\u00fd prvok:\n\n\n```python\nve.append(10)\nve\n```\n\n\n\n\n [4, 2, 3, 10]\n\n\n\nAlebo odobra\u0165:\n\n### Interval\n\n\nZoznam mo\u017eno zadefinova\u0165 cez rozsah:\n\n\n```python\nrange(10)\ntype(range(10))\n```\n\n\n\n\n range\n\n\n\n\n```python\nlist(range(10))\n```\n\n\n\n\n [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n\n\n\n\n```python\nlist(range(3, 9))\n```\n\n\n\n\n [3, 4, 5, 6, 7, 8]\n\n\n\n## \u00daloha 5\nSpo\u010d\u00edtajte:\n* s\u00fa\u010det v\u0161etk\u00fdch \u010d\u00edsel od 1 do 1000.\n\nVytvorte zoznam `letnaskola`, ktor\u00fd bude obsahova\u0165 va\u0161ich 5 ob\u013e\u00faben\u00fdch cel\u00fdch \u010d\u00edsel. \n* Pridajte na koniec zoznamu \u010d\u00edslo 100\n* Prep\u00ed\u0161te prv\u00e9 \u010d\u00edslo v zozname tak, aby sa rovnalo posledn\u00e9mu v zozname.\n* Vypo\u010d\u00edtajte s\u00fa\u010det prv\u00e9ho \u010d\u00edsla, posledn\u00e9ho \u010d\u00edsla a d\u013a\u017eky zoznamu.\n\n\n```python\n# Tvoje riesenie:\nzoznam = list(range(1,1001))\nprint(sum(zoznam))\n\nletnaskola = [1,1995,12,6,42]\nprint(letnaskola)\nletnaskola.append(100)\nprint(letnaskola)\nletnaskola[0] = letnaskola[len(letnaskola)-1]\nprint(letnaskola)\n\nprint(letnaskola[0]+letnaskola[len(letnaskola)-1], len(letnaskola))\n```\n\n 500500\n [1, 1995, 12, 6, 42]\n [1, 1995, 12, 6, 42, 100]\n [100, 1995, 12, 6, 42, 100]\n 200 6\n\n\n# For cyklus\n\nIndexy zoznamu m\u00f4\u017eeme postupne prech\u00e1dza\u0165. For cyklus je tzv. `iter\u00e1tor`, ktor\u00fd iteruje cez zoznam.\n\n\n```python\nfor i in [3,2,5,6]:\n print(i)\n```\n\n 2\n 5\n 7\n 8\n 10\n 11\n 14\n 18\n 20\n 25\n\n\n\n```python\nfor i in [3,2,5,6]:\n print(i**2)\n```\n\n 9\n 4\n 25\n 36\n\n\nAko \u00faspe\u0161ne vytvori\u0165 For cyklus? Podobne, ako pri funkci\u00e1ch:\n\n* `for`: toto slovo je na za\u010diatku.\n* `i`: iterovana velicina\n* `in`: pred zoznamom, cez ktor\u00fd prech\u00e1dzame (iterujeme).\n* dvojbodka na konci prv\u00e9ho riadku.\n* kod, ktory sa cykli sa odsadzuje o \u0161tyri medzery.\n\nZa pomoci for cyklu m\u00f4\u017eeme takisto s\u010d\u00edta\u0165 \u010d\u00edsla. Napr. \u010d\u00edsla od 0 do 100:\n\n\n```python\nsuma = 0\n\nfor i in range(101): # uvedomme si, preco tam je 101 a nie 100\n suma = suma + i # skratene sum += i\n \nprint(suma)\n```\n\n 5050\n\n\n## H\u013eadanie hodnoty zlat\u00e9ho rezu $\\varphi$\n\nJednoduch\u00e9 cvi\u010denie na obozn\u00e1menie sa s tzv. 
selfkonzistentn\u00fdm probl\u00e9mom a for cyklom.\nZlat\u00fd rez je mo\u017en\u00e9 n\u00e1js\u0165 ako rie\u0161ienie rovnice\n$x=1+1/x$\nJej rie\u0161enie vieme hlada\u0165 postupn\u00fdm iterovan\u00edm\n\n\n```python\nx = 1;\n\nfor i in range (0,20):\n x = 1+1/x\n print (x)\n```\n\n 2.0\n 1.5\n 1.6666666666666665\n 1.6\n 1.625\n 1.6153846153846154\n 1.619047619047619\n 1.6176470588235294\n 1.6181818181818182\n 1.6179775280898876\n 1.6180555555555556\n 1.6180257510729614\n 1.6180371352785146\n 1.6180327868852458\n 1.618034447821682\n 1.618033813400125\n 1.6180340557275543\n 1.6180339631667064\n 1.6180339985218035\n 1.618033985017358\n\n\n## \u00daloha 6\nSpo\u010d\u00edtajte s\u00fa\u010det druh\u00fdch mocn\u00edn v\u0161etk\u00fdch nep\u00e1rnych \u010d\u00edsel od 1 do 100 s vyu\u017eit\u00edm for cyklu.\n\n\n```python\n# Tvoje riesenie:\nsuma = 0\nfor i in range(50):\n suma = suma + (2*i+1)**2\nprint(suma)\n```\n\n 166650\n\n\n## \u00daloha 7\n### Dvojhlav\u00fd tank (FKS 30.2.2.A2)\nNevieme odkia\u013e, no m\u00e1me bombastick\u00fd tank, ktor\u00fd m\u00e1 dve hlavne namieren\u00e9 opa\u010dn\u00fdm smerom \u2013 samozrejme tak, \u017ee nemieria proti sebe ;-). V tanku je\n$N = 42$ n\u00e1bojov s hmotnos\u0165ou $m= 20$ kg. Tank s n\u00e1bojmi v\u00e1\u017ei dokopy\n$M= 43$ t. Potom tank za\u010dne strie\u013ea\u0165 striedavo z hlavn\u00ed n\u00e1boje r\u00fdchlos\u0165ou v= 1 000 m s frekvenciou strie\u013eania $f= 0.2$ Hz. Ke\u010f\u017ee tank je nezabrzden\u00fd a dobre naolejovan\u00fd, za\u010dne sa pohybova\u0165. \nAko \u010faleko od p\u00f4vodn\u00e9ho miesta vystrel\u00ed posledn\u00fd n\u00e1boj? Akej ve\u013ekej chyby by sme sa dopustili,\nak by sme zanedbali zmenu celkovej hmotnosti tanku po\u010das strie\u013eania?\n\nHint: http://old.fks.sk/archiv/2014_15/30vzorakyLeto2.pdf , strana 10\n\n\n```python\nx=0\nm=20\nMtank=[43000]\nvtank=[0]\nv=-1000\nf=0.2\nfor i in range(43):\n x=x+vtank[-1]*1/f\n vtank.append(vtank[-1]-m*v/(Mtank[-1]-m))\n Mtank.append(Mtank[-1]-m)\n v=-v\nprint(x)\n\n```\n\n 48.83697212654822\n\n\n# Podmienky\n\nPochop\u00edme ich na pr\u00edklade. Zme\u0148te `a` a zistite, \u010do to sprav\u00ed.\n\n\n```python\na = 5\n\nif a == 3:\n print(\"cislo a je rovne trom.\")\nelif a == 5:\n print(\"cislo a je rovne piatim\")\nelse:\n print(\"cislo a nie je rovne trom ani piatim.\")\n```\n\n cislo a je rovne piatim\n\n\nZa pomoci podmienky teraz m\u00f4\u017eeme z for cyklu vyp\u00edsa\u0165 napr. len p\u00e1rne \u010d\u00edsla. 
P\u00e1rne \u010d\u00edslo identifikujeme ako tak\u00e9, ktor\u00e9 po delen\u00ed dvomi d\u00e1va zvy\u0161ok nula.\n\nPre zvy\u0161ok po delen\u00ed sa pou\u017e\u00edva percento:\n\n\n```python\nfor i in range(10):\n if i % 2 == 0:\n print(i)\n```\n\n 0\n 2\n 4\n 6\n 8\n\n\nCyklus mozeme zastavit, ak sa porusi nejaka podmienka\n\n\n```python\nfor i in range(20):\n print(i)\n if i>10:\n print('Koniec.')\n break\n```\n\n 0\n 1\n 2\n 3\n 4\n 5\n 6\n 7\n 8\n 9\n 10\n 11\n Koniec.\n\n\n## H\u013eadanie hodnoty Ludolfovho \u010d\u00edsla $\\pi$\nPomocou Monte Carlo met\u00f3dy integrovania sa nau\u010d\u00edme ako napr\u00edklad vypo\u010d\u00edta\u0165 $\\pi$.\nNasleduj\u00face pr\u00edkazy vygeneruj\u00fa zoznam n\u00e1hodn\u00fdch \u010d\u00edsel od nula po jeden\n\n\n\n```python\nimport random as rnd\nimport numpy as np\n\nNOP = 50000\n\nCoordXList = [];\nCoordYList = [];\n\nfor j in range (NOP):\n CoordXList.append(rnd.random())\n CoordYList.append(rnd.random())\n```\n\nTieto dva zoznamy pou\u017eijeme ako $x$-ov\u00e9 a $y$-ov\u00e9 s\u00faradnice bodov v rovnine. Ked\u017ee n\u00e1hodn\u00e9 rozdelenie bodov je rovnomern\u00e9, tak pomer bodov, ktor\u00e9 sa nach\u00e1dzaj\u00fa vn\u00fatri \u0161tvr\u0165kru\u017enice s polomerom jedna ku v\u0161etk\u00fdm bodom mus\u00ed by\u0165 rovnak\u00fd ako pomer plochy \u0161tvr\u0165kruhu a \u0161tvroca. \nTeda $$\\frac{\\frac{1}{4}\\pi 1^2}{1^2}\\stackrel{!}{=}\\frac{N_{in}}{NOP}.$$\nNasleduj\u00face dve bunky vygeneruj\u00fa obr\u00e1zok rozlo\u017eenia bodov a stvr\u0165kru\u017enicu\n\n\n```python\nCircPhi = np.arange(0,np.pi/2,0.01)\n```\n\n\n```python\nimport matplotlib.pyplot as plt\nf1=plt.figure(figsize=(7,7))\n\nplt.plot(\n CoordXList,\n CoordYList,\n color = \"red\",\n linestyle= \"none\",\n marker = \",\"\n)\nplt.plot(np.cos(CircPhi),np.sin(CircPhi))\n#plt.axis([0, 1, 0, 1])\n#plt.axes().set_aspect('equal', 'datalim')\nplt.show(f1)\n```\n\n## \u00daloha 8\nTeraz je va\u0161ou \u00falohou spo\u010d\u00edta\u0165 $\\pi$. Hint: bod je vn\u00fatri \u0161tvr\u0165kru\u017enice pokia\u013e plat\u00ed $x^2+y^2<1.$\n\n\n```python\n#vase riesenie\nNumIn = 0\n\nfor j in range (NOP):\n \n #if (CoordXList[j] - 0.5)*(CoordXList[j] - 0.5) + (CoordYList[j] - 0.5)*(CoordYList[j] - 0.5) < 0.25:\n if CoordXList[j]*CoordXList[j] + CoordYList[j]*CoordYList[j] <= 1:\n \n NumIn = NumIn + 1; \n```\n\n\n```python\nNumIn/NOP*4\n```\n\n\n\n\n 3.14768\n\n\n\n# Numerick\u00e9 s\u010d\u00edtavanie\n\nVo fyzike je \u010dasto u\u017eito\u010dn\u00e9 rozdeli\u0165 si probl\u00e9m na mal\u00e9 \u010dasti.\n\n\n## \u00daloha 9\nTeraz je va\u0161ou \u00falohou vymyslie\u0165 ako spo\u010d\u00edta\u0165 gravita\u010dn\u00e9 pole od jednorozmernej use\u010dky o v\u00fd\u0161ke h nad stredom \u00fase\u010dky.\n \u00dase\u010dka m\u00e1 hmotnos\u0165 $M$ a d\u013a\u017eku $L$. \n\n\u00dase\u010dku si rozdel\u00edte na $N$ mal\u00fdch dielov. 
Hmotnos\u0165 jedn\u00e9ho tak\u00e9ho dieliku je potom \n$$dm=\\frac{M}{N}$$\n\nVzdialenos\u0165 tak\u00e9hoto bodu od stedu \u00fase\u010dky $x$.\nPotom gravita\u010dn\u00e1 pole od toho mal\u00e9ho k\u00fasku v \u017eiadanom bode je:\n$$\\vec{\\Delta g}=-G \\frac{\\Delta m}{r^3}\\vec{r}.$$\nRozdroben\u00e9 na $y$-ov\u00fa a $x$-ov\u00fa zlo\u017eku:\n$$\\Delta g_y=-G \\frac{\\Delta m}{(x^2+h^2)}\\cos(\\phi)=-G \\frac{\\Delta m}{(x^2+h^2)}\\frac{h}{\\sqrt{x^2=h^2}},$$\nrespekt\u00edve \n$$\\Delta g_x=-G \\frac{\\Delta m}{(x^2+h^2)}\\sin(\\phi)=-G \\frac{\\Delta m}{(x^2+h^2)}\\frac{x}{\\sqrt{x^2+h^2}},$$\nVa\u0161ou \u00falohou je rozdeli\u0165 tak\u00fato \u00fase\u010dku a s\u010d\u00edta\u0165 pr\u00edspevky od v\u0161etk\u00fdch mal\u00fdch k\u00faskov. Premyslite si \u017ee ked\u017ee sme v strede \u00fase\u010dky, $x$-ov\u00e9 pr\u00edspevky sa navz\u00e1jom vynuluj\u00fa.\nAk v\u00e1m to pr\u00edde pr\u00edli\u0161 jednoduch\u00e9, tak mo\u017ete naprogramova\u0165 program ktor\u00fdch spo\u010d\u00edta gravita\u010dn\u00e9 pole nad lubovo\u013en\u00fdm bodom\n\n\n\n\n```python\nN=1000\nM=1000\nL=2\nh=1\n\n#vase riesenie\ng=0\nfor i in range(-int(N/2),int(N/2)):\n g=g+G*M/N*(i/N)/((i/N)**2+h**2)**(3/2)\ng \n \n\n\n\n\n```\n\n\n\n\n -2.3877914507634306e-11\n\n\n\n# Obiehanie Zeme okolo Slnka\n\nFyziku (d\u00fafam!) v\u0161etci pozn\u00e1me.\n\n* gravita\u010dn\u00e1 sila:\n$$ \\mathbf F(\\mathbf r) = -\\frac{G m M}{r^3} \\mathbf r $$\n\n### Eulerov algoritmus (zl\u00fd)\n$$\\begin{align}\na(t) &= F(t)/m \\\\\nv(t+dt) &= v(t) + a(t) dt \\\\\nx(t+dt) &= x(t) + v(t) dt \\\\\n\\end{align}$$\n\n### Verletov algoritmus (dobr\u00fd)\n$$ x(t+dt) = 2 x(t) - x(t-dt) + a(t) dt^2 $$\n\n\n```python\nfrom numpy.linalg import norm\n\nG = 6.67e-11\nMs = 2e30\nMz = 6e24\ndt = 86400.0\nN = int(365*86400.0/dt)\n#print(N)\n\nR0 = 1.5e11\nr_list = np.zeros((N, 2))\nr_list[0] = [R0, 0.0] # mozno miesat listy s ndarray\n\nv0 = 29.7e3\nv_list = np.zeros((N, 2))\nv_list[0] = [0.0, v0]\n\n# sila medzi planetami\ndef force(A, r):\n return -A / norm(r)**3 * r\n\n# Verletova integracia\ndef verlet_step(r_n, r_nm1, a, dt): # r_nm1 -- r n minus 1\n return 2*r_n - r_nm1 + a*dt**2\n\n# prvy krok je specialny\na = force(G*Ms, r_list[0])\nr_list[1] = r_list[0] + v_list[0]*dt + a*dt**2/2\n\n\n# riesenie pohybovych rovnic\nfor i in range(2, N):\n a = force(G*Ms, r_list[i-1])\n r_list[i] = verlet_step(r_list[i-1], r_list[i-2], a, dt)\n \n \nplt.plot(r_list[:, 0], r_list[:, 1])\nplt.xlim([-2e11, 2e11])\nplt.ylim([-2e11, 2e11])\nplt.xlabel(\"$x$\", fontsize=20)\nplt.ylabel(\"$y$\", fontsize=20)\nplt.gca().set_aspect('equal', adjustable='box')\n#plt.axis(\"equal\")\nplt.show()\n```\n\n## Pridajme Mesiac\n\n\n```python\nMm = 7.3e22\nR0m = R0 + 384e6\nv0m = v0 + 1e3\nrm_list = np.zeros((N, 2))\nrm_list[0] = [R0m, 0.0]\nvm_list = np.zeros((N, 2))\nvm_list[0] = [0.0, v0m]\n\n# prvy Verletov krok\nam = force(G*Ms, rm_list[0]) + force(G*Mz, rm_list[0] - r_list[0])\nrm_list[1] = rm_list[0] + vm_list[0]*dt + am*dt**2/2\n\n# riesenie pohybovych rovnic\nfor i in range(2, N):\n a = force(G*Ms, r_list[i-1]) - force(G*Mm, rm_list[i-1]-r_list[i-1])\n am = force(G*Ms, rm_list[i-1]) + force(G*Mz, rm_list[i-1]-r_list[i-1])\n r_list[i] = verlet_step(r_list[i-1], r_list[i-2], a, dt)\n rm_list[i] = verlet_step(rm_list[i-1], rm_list[i-2], am, dt)\n \nplt.plot(r_list[:, 0], r_list[:, 1])\nplt.plot(rm_list[:, 0], rm_list[:, 1])\nplt.xlabel(\"$x$\", fontsize=20)\nplt.ylabel(\"$y$\", fontsize=20)\nplt.gca().set_aspect('equal', 
adjustable='box')\nplt.xlim([-2e11, 2e11])\nplt.ylim([-2e11, 2e11])\nplt.show() # mesiac moc nevidno, ale vieme, ze tam je\n```\n\n## \u00daloha pre V\u00e1s: Treba prida\u0165 Mars :)\nPridajte Mars!\n\n## Matematick\u00e9 kyvadlo s odporom \nNasimulujte matematick\u00e9 kyvadlo s odporom $\\gamma$,\n$$ \\ddot \\theta = -\\frac g l \\sin\\theta -\\gamma \\theta^2,$$\nza pomoci met\u00f3dy `odeint`.\n\nAlebo p\u00e1d telesa v odporovom prostred\u00ed:\n$$ a = -g - kv^2.$$\n\n\n```python\nfrom scipy.integrate import odeint\n\ndef F(y, t, g, k):\n return [y[1], g -k*y[1]**2]\n\nN = 101\nk = 1.0\ng = 10.0\nt = np.linspace(0, 1, N)\ny0 = [0.0, 0.0]\ny = odeint(F, y0, t, args=(g, k))\n\nplt.plot(t, y[:, 1])\nplt.xlabel(\"$t$\", fontsize=20)\nplt.ylabel(\"$v(t)$\", fontsize=20)\nplt.show()\n```\n\n## Harmonick\u00fd oscil\u00e1tor pomocou met\u00f3dy Leapfrog (modifik\u00e1cia Verletovho algoritmu)\n\n\n```python\nN = 10000\nt = linspace(0,100,N)\ndt = t[1] - t[0]\n\n# Funkcie\ndef integrate(F,x0,v0,gamma):\n x = zeros(N)\n v = zeros(N)\n E = zeros(N) \n \n # Po\u010diato\u010dn\u00e9 podmienky\n x[0] = x0\n v[0] = v0\n \n # Integrovanie rovn\u00edc pomocou met\u00f3dy Leapfrog (wiki)\n fac1 = 1.0 - 0.5*gamma*dt\n fac2 = 1.0/(1.0 + 0.5*gamma*dt)\n \n for i in range(N-1):\n v[i + 1] = fac1*fac2*v[i] - fac2*dt*x[i] + fac2*dt*F[i]\n x[i + 1] = x[i] + dt*v[i + 1]\n E[i] += 0.5*(x[i]**2 + ((v[i] + v[i+1])/2.0)**2)\n \n E[-1] = 0.5*(x[-1]**2 + v[-1]**2)\n \n # Vr\u00e1time rie\u0161enie\n return x,v,E\n```\n\n\n```python\n# Pozrime sa na tri r\u00f4zne po\u010diato\u010dn\u00e9 podmienky\nF = zeros(N)\nx1,v1,E1 = integrate(F,0.0,1.0,0.0) # x0 = 0.0, v0 = 1.0, gamma = 0.0\nx2,v2,E2 = integrate(F,0.0,1.0,0.05) # x0 = 0.0, v0 = 1.0, gamma = 0.01\nx3,v3,E3 = integrate(F,0.0,1.0,0.4) # x0 = 0.0, v0 = 1.0, gamma = 0.5\n\n# Nakreslime si grafy\n\nplt.rcParams[\"axes.grid\"] = True\nplt.rcParams['font.size'] = 14\nplt.rcParams['axes.labelsize'] = 18\nplt.figure()\nplt.subplot(211)\nplt.plot(t,x1)\nplt.plot(t,x2)\nplt.plot(t,x3)\nplt.ylabel(\"x(t)\")\n\nplt.subplot(212)\nplt.plot(t,E1,label=r\"$\\gamma = 0.0$\")\nplt.plot(t,E2,label=r\"$\\gamma = 0.01$\")\nplt.plot(t,E3,label=r\"$\\gamma = 0.5$\")\nplt.ylim(0,0.55)\nplt.ylabel(\"E(t)\")\n\nplt.xlabel(\"\u010cas\")\nplt.legend(loc=\"center right\")\n\nplt.tight_layout()\n```\n\nA \u010do ak bude oscil\u00e1tor aj tlmenn\u00fd?\n\n\n```python\ndef force(f0,t,w,T):\n return f0*cos(w*t)*exp(-t**2/T**2) \n\nF1 = zeros(N)\nF2 = zeros(N)\nF3 = zeros(N)\nfor i in range(N-1):\n F1[i] = force(1.0,t[i] - 20.0,1.0,10.0)\n F2[i] = force(1.0,t[i] - 20.0,0.9,10.0)\n F3[i] = force(1.0,t[i] - 20.0,0.8,10.0)\n```\n\n\n```python\nx1,v1,E1 = integrate(F1,0.0,0.0,0.0)\nx2,v2,E2 = integrate(F1,0.0,0.0,0.01)\nx3,v3,E3 = integrate(F1,0.0,0.0,0.1)\n\nplt.figure()\nplt.subplot(211)\nplt.plot(t,x1)\nplt.plot(t,x2)\nplt.plot(t,x3)\nplt.ylabel(\"x(t)\")\n\nplt.subplot(212)\nplt.plot(t,E1,label=r\"$\\gamma = 0$\")\nplt.plot(t,E2,label=r\"$\\gamma = 0.01$\")\nplt.plot(t,E3,label=r\"$\\gamma = 0.1$\")\npt.ylabel(\"E(t)\")\n\nplt.xlabel(\"Time\")\nplt.rcParams['legend.fontsize'] = 14.0\nplt.legend(loc=\"upper left\")\n\nplt.show()\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "284cf508cce5828af010d7a7dbe2dcd99eb0c76a", "size": 125526, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Programko_vzor.ipynb", "max_stars_repo_name": "matoga/LetnaSkolaFKS_notebooks", "max_stars_repo_head_hexsha": "26faa2d30ee942e18246fe466d9bf42f16cc1433", "max_stars_repo_licenses": ["MIT"], 
"max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Programko_vzor.ipynb", "max_issues_repo_name": "matoga/LetnaSkolaFKS_notebooks", "max_issues_repo_head_hexsha": "26faa2d30ee942e18246fe466d9bf42f16cc1433", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Programko_vzor.ipynb", "max_forks_repo_name": "matoga/LetnaSkolaFKS_notebooks", "max_forks_repo_head_hexsha": "26faa2d30ee942e18246fe466d9bf42f16cc1433", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 58.8771106942, "max_line_length": 73968, "alphanum_fraction": 0.7750027883, "converted": true, "num_tokens": 8350, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8991213718636752, "lm_q2_score": 0.916109614162018, "lm_q1q2_score": 0.8236937330628558}} {"text": "# The $\\chi^2$ Distribution\n\n## $\\chi^2$ Test Statistic\n\nIf we make $n$ ranom samples (observations) from Gaussian (Normal) distributions with known means, $\\mu_i$, and known variances, $\\sigma_i^2$, it is seen that the total squared deviation,\n\n$$\n\\chi^2 = \\sum_{i=1}^{n} \\left(\\frac{x_i - \\mu_i}{\\sigma_i}\\right)^2\\,,\n$$\n\nfollows a $\\chi^2$ distribution with $n$ degrees of freedom.\n\n## Probability Distribution Function\n\nThe $\\chi^2$ probability distribution function for $k$ degrees of freedom (the number of parameters that are allowed to vary) is given by\n\n$$\nf\\left(\\chi^2\\,;k\\right) = \\frac{\\displaystyle 1}{\\displaystyle 2^{k/2} \\,\\Gamma\\left(k\\,/2\\right)}\\, \\chi^{k-2}\\,e^{-\\chi^2/2}\\,,\n$$\n\nwhere if there are no constrained variables the number of degrees of freedom, $k$, is equal to the number of observations, $k=n$. The p.d.f. 
is often abbreviated in notation from $f\\left(\\chi^2\\,;k\\right)$ to $\\chi^2_k$.\n\nA reminder that for integer values of $k$, the Gamma function is $\\Gamma\\left(k\\right) = \\left(k-1\\right)!$, and that $\\Gamma\\left(x+1\\right) = x\\Gamma\\left(x\\right)$, and $\\Gamma\\left(1/2\\right) = \\sqrt{\\pi}$.\n\n## Mean\n\nLetting $\\chi^2=z$, and noting that the form of the Gamma function is\n\n$$\n\\Gamma\\left(z\\right) = \\int\\limits_{0}^{\\infty} x^{z-1}\\,e^{-x}\\,dx,\n$$\n\nit is seen that the mean of the $\\chi^2$ distribution $f\\left(\\chi^2 ; k\\right)$ is\n\n$$\n\\begin{align}\n\\mu &= \\textrm{E}\\left[z\\right] = \\displaystyle\\int\\limits_{0}^{\\infty} z\\, \\frac{\\displaystyle 1}{\\displaystyle 2^{k/2} \\,\\Gamma\\left(k\\,/2\\right)}\\, z^{k/2-1}\\,e^{-z\\,/2}\\,dz \\\\\n &= \\displaystyle \\frac{\\displaystyle 1}{\\displaystyle \\Gamma\\left(k\\,/2\\right)} \\int\\limits_{0}^{\\infty} \\left(\\frac{z}{2}\\right)^{k/2}\\,e^{-z\\,/2}\\,dz = \\displaystyle \\frac{\\displaystyle 1}{\\displaystyle \\Gamma\\left(k\\,/2\\right)} \\int\\limits_{0}^{\\infty} x^{k/2}\\,e^{-x}\\,2 \\,dx \\\\\n &= \\displaystyle \\frac{\\displaystyle 2 \\,\\Gamma\\left(k\\,/2 + 1\\right)}{\\displaystyle \\Gamma\\left(k\\,/2\\right)} \\\\\n &= \\displaystyle 2 \\frac{k}{2} \\frac{\\displaystyle \\Gamma\\left(k\\,/2\\right)}{\\displaystyle \\Gamma\\left(k\\,/2\\right)} \\\\\n &= k.\n\\end{align}\n$$\n\n## Variance\n\nLikewise, the variance is\n\n$$\n\\begin{align}\n\\textrm{Var}\\left[z\\right] &= \\textrm{E}\\left[\\left(z-\\textrm{E}\\left[z\\right]\\right)^2\\right] = \\displaystyle\\int\\limits_{0}^{\\infty} \\left(z - k\\right)^2\\, \\frac{\\displaystyle 1}{\\displaystyle 2^{k/2} \\,\\Gamma\\left(k\\,/2\\right)}\\, z^{k/2-1}\\,e^{-z\\,/2}\\,dz \\\\\n &= \\displaystyle\\int\\limits_{0}^{\\infty} z^2\\, f\\left(z \\,; k\\right)\\,dz - 2k\\int\\limits_{0}^{\\infty} z\\,\\,f\\left(z \\,; k\\right)\\,dz + k^2\\int\\limits_{0}^{\\infty} f\\left(z \\,; k\\right)\\,dz \\\\\n &= \\displaystyle\\int\\limits_{0}^{\\infty} z^2 \\frac{\\displaystyle 1}{\\displaystyle 2^{k/2} \\,\\Gamma\\left(k\\,/2\\right)}\\, z^{k/2-1}\\,e^{-z\\,/2}\\,dz - 2k^2 + k^2\\\\\n &= \\displaystyle\\int\\limits_{0}^{\\infty} \\frac{\\displaystyle 1}{\\displaystyle 2^{k/2} \\,\\Gamma\\left(k\\,/2\\right)}\\, z^{k/2+1}\\,e^{-z\\,/2}\\,dz - k^2\\\\\n &= \\frac{\\displaystyle 2}{\\displaystyle \\Gamma\\left(k\\,/2\\right)} \\displaystyle\\int\\limits_{0}^{\\infty} \\left(\\frac{z}{2}\\right)^{k/2+1}\\,e^{-z\\,/2}\\,dz - k^2 = \\frac{\\displaystyle 2}{\\displaystyle \\Gamma\\left(k\\,/2\\right)} \\displaystyle\\int\\limits_{0}^{\\infty} x^{k/2+1}\\,e^{-x}\\,2\\,dx - k^2 \\\\\n &= \\displaystyle \\frac{\\displaystyle 4 \\,\\Gamma\\left(k\\,/2 + 2\\right)}{\\displaystyle \\Gamma\\left(k\\,/2\\right)} - k^2 \\\\\n &= \\displaystyle 4 \\left(\\frac{k}{2} + 1\\right) \\frac{\\displaystyle \\Gamma\\left(k\\,/2 + 1\\right)}{\\displaystyle \\Gamma\\left(k\\,/2\\right)} - k^2 \\\\\n &= \\displaystyle 4 \\left(\\frac{k}{2} + 1\\right) \\frac{k}{2} - k^2 \\\\\n &= k^2 + 2k - k^2 \\\\\n &= 2k,\n\\end{align}\n$$\n\nsuch that the standard deviation is\n\n$$\n\\sigma = \\sqrt{2k}\\,.\n$$\n\nGiven this information we now plot the $\\chi^2$ p.d.f. 
with various numbers of degrees of freedom to visualize how the distribution's behaviour\n\n\n```python\nimport numpy as np\nimport scipy.stats as stats\n\nimport matplotlib.pyplot as plt\n```\n\n\n```python\n# Plot the chi^2 distribution\nx = np.linspace(0., 10., num=1000)\n\n[plt.plot(x, stats.chi2.pdf(x, df=ndf), label=r'$k = ${}'.format(ndf))\n for ndf in range(1, 7)]\n \nplt.ylim(-0.01, 0.5)\n \nplt.xlabel(r'$x=\\chi^2$')\nplt.ylabel(r'$f\\left(x;k\\right)$')\nplt.title(r'$\\chi^2$ distribution for various degrees of freedom')\n\nplt.legend(loc='best')\n\nplt.show();\n```\n\n## Cumulative Distribution Function\n\nThe cumulative distribution function (CDF) for the $\\chi^2$ distribution is (letting $z=\\chi^2$)\n\n$$\n\\begin{split}\nF_{\\chi^2}\\left(x\\,; k\\right) &= \\int\\limits_{0}^{x} f_{\\chi^2}\\left(z\\,; k\\right) \\,dz \\\\\n &= \\int\\limits_{0}^{x} \\frac{\\displaystyle 1}{\\displaystyle 2^{k/2} \\,\\Gamma\\left(k\\,/2\\right)}\\, z^{k/2-1}\\,e^{-z/2} \\,dz \\\\\n &= \\int\\limits_{0}^{x} \\frac{\\displaystyle 1}{\\displaystyle 2 \\,\\Gamma\\left(k\\,/2\\right)}\\, \\left(\\frac{z}{2}\\right)^{k/2-1}\\,e^{-z/2} \\,dz = \\frac{1}{\\displaystyle 2 \\,\\Gamma\\left(k\\,/2\\right)}\\int\\limits_{0}^{x/2} t^{k/2-1}\\,e^{-t} \\,2\\,dt \\\\\n &= \\frac{1}{\\displaystyle \\Gamma\\left(k\\,/2\\right)}\\int\\limits_{0}^{x/2} t^{k/2-1}\\,e^{-t} \\,dt\n\\end{split}\n$$\n\nNoting the form of the [lower incomplete gamma function](https://en.wikipedia.org/wiki/Incomplete_gamma_function) is\n\n$$\n\\gamma\\left(s,x\\right) = \\int\\limits_{0}^{x} t^{s-1}\\,e^{-t} \\,dt\\,,\n$$\n\nand the form of the [regularized Gamma function](https://en.wikipedia.org/wiki/Incomplete_gamma_function#Regularized_Gamma_functions_and_Poisson_random_variables) is\n\n$$\nP\\left(s,x\\right) = \\frac{\\gamma\\left(s,x\\right)}{\\Gamma\\left(s\\right)}\\,,\n$$\n\nit is seen that\n\n$$\n\\begin{split}\nF_{\\chi^2}\\left(x\\,; k\\right) &= \\frac{1}{\\displaystyle \\Gamma\\left(k\\,/2\\right)}\\int\\limits_{0}^{x/2} t^{k/2-1}\\,e^{-t} \\,dt \\\\\n &= \\frac{\\displaystyle \\gamma\\left(\\frac{k}{2},\\frac{x}{2}\\right)}{\\displaystyle \\Gamma\\left(\\frac{k}{2}\\right)} \\\\\n &= P\\left(\\frac{k}{2},\\frac{x}{2}\\right)\\,.\n\\end{split}\n$$\n\nThus, it is seen that the compliment to the CDF (the complementary cumulative distribution function (CCDF)),\n\n$$\n\\bar{F}_{\\chi^2}\\left(x\\,; k\\right) = 1-F_{\\chi^2}\\left(x\\,; k\\right),\n$$\n\nrepresents a one-sided (one-tailed) $p$-value for observing a $\\chi^2$ given a model — that is, the probability to observe a $\\chi^2$ value greater than or equal to that which was observed.\n\n\n```python\ndef chi2_ccdf(x, df):\n \"\"\"The complementary cumulative distribution function\n \n Args:\n x: the value of chi^2\n df: the number of degrees of freedom\n \n Returns:\n 1 - the cumulative distribution function\n \"\"\"\n return 1. 
- stats.chi2.cdf(x=x, df=df)\n```\n\n\n```python\nx = np.linspace(0., 10., num=1000)\nfig, axes = plt.subplots(nrows=1, ncols=2, figsize=(14, 4.5))\n\nfor ndf in range(1,7):\n axes[0].plot(x, stats.chi2.cdf(x, df=ndf),\n label=r'$k = ${}'.format(ndf))\n axes[1].plot(x, chi2_ccdf(x, df=ndf),\n label=r'$k = ${}'.format(ndf))\n \naxes[0].set_xlabel(r'$x=\\chi^2$')\naxes[0].set_ylabel(r'$F\\left(x;k\\right)$')\naxes[0].set_title(r'$\\chi^2$ CDF for various degrees of freedom')\n\naxes[0].legend(loc='best')\n\naxes[1].set_xlabel(r'$x=\\chi^2$')\naxes[1].set_ylabel(r'$\\bar{F}\\left(x;k\\right) = p$-value')\naxes[1].set_title(r'$\\chi^2$ CCDF ($p$-value) for various degrees of freedom')\n\naxes[1].legend(loc='best')\n\nplt.show();\n```\n\n## Binned $\\chi^2$ per Degree of Freedom\n\nTODO\n\n## References\n\n- \\[1\\] G. Cowan, _Statistical Data Analysis_, Oxford University Press, 1998\n- \\[2\\] G. Cowan, \"Goodness of fit and Wilk's theorem\", Notes, 2013\n", "meta": {"hexsha": "1974199db7cfb73e57c64e834c3c80765ead8bac", "size": 148428, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Notebooks/Introductory/Chi-Squared-Distribution.ipynb", "max_stars_repo_name": "fizisist/Statistics-Notes", "max_stars_repo_head_hexsha": "9399bca77abc36ee342f8af2fadddffd79390bed", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Notebooks/Introductory/Chi-Squared-Distribution.ipynb", "max_issues_repo_name": "fizisist/Statistics-Notes", "max_issues_repo_head_hexsha": "9399bca77abc36ee342f8af2fadddffd79390bed", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Notebooks/Introductory/Chi-Squared-Distribution.ipynb", "max_forks_repo_name": "fizisist/Statistics-Notes", "max_forks_repo_head_hexsha": "9399bca77abc36ee342f8af2fadddffd79390bed", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 337.3363636364, "max_line_length": 95200, "alphanum_fraction": 0.9256609265, "converted": true, "num_tokens": 2794, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9284088025362857, "lm_q2_score": 0.8872045907347107, "lm_q1q2_score": 0.8236885516887081}} {"text": "# Heat Equation\n\nSolve the differential equation\n\n\\begin{align}\n \\partial_t u(t,x) - \\partial_x^2 u(t,x) &= 0, && t \\in (0,T), \\, x \\in (x_\\mathrm{min}, x_\\mathrm{max}),\n \\\\\n u(0,x) &= u_0(x), && x \\in [x_\\mathrm{min}, x_\\mathrm{max}]\n\\end{align}\n\nwith appropriate boundary conditions.\n\n## Constant Dirichlet Boundary Conditions\n\nSolve the heat equation with constant Dirichlet boundary conditions\n\n\\begin{align}\n u(t,x_\\mathrm{min}) &= u_0(x_\\mathrm{min}), && t \\in (0,T),\n \\\\\n u(t,x_\\mathrm{max}) &= u_0(x_\\mathrm{max}), && t \\in (0,T).\n\\end{align}\n\n### Using `SummationByPartsOperators.jl`\n\n\n```julia\nusing SummationByPartsOperators, OrdinaryDiffEq\nusing Plots, LaTeXStrings\n\nxmin, xmax = -\u03c0, \u03c0\nN = 512\nacc_order = 4\ntspan = (0., 10.)\n# source of coefficients\nsource = MattssonSv\u00e4rdShoeybi2008()\node_alg = Tsit5()\n\nu\u2080(x) = -(x - 0.5)^2 + 1/12\n\nD = derivative_operator(source, 2, acc_order, xmin, xmax, N)\nx = D.grid\nu0 = u\u2080.(x)\n\nfunction rhs!(du, u, p, t)\n mul!(du, D, u)\n @inbounds du[1] -= (u[1] - u\u2080(xmin)) / D.coefficients.left_weights[1]\n @inbounds du[end] -= (u[end] - u\u2080(xmax)) / D.coefficients.right_weights[1]\nend\n\node = ODEProblem(rhs!, u0, tspan)\nsol = solve(ode, ode_alg, save_everystep=false, saveat=0:10)\n@time sol = solve(ode, ode_alg, save_everystep=false, saveat=0:10)\n\n# try to plot the solution at different time points using\nplot(x, [sol(i) for i in 0:1:10])\n```\n\n### Using `DiffEqOperators.jl`\n\n\n```julia\nusing DiffEqOperators, OrdinaryDiffEq\nusing Plots, LaTeXStrings\n\nxmin, xmax = -\u03c0, \u03c0\nN = 512\nacc_order = 4\ntspan = (0., 10.)\node_alg = Tsit5()\n\nu\u2080(x) = -(x - 0.5)^2 + 1/12\n\nx = range(xmin, stop=xmax, length=N)\nu0 = u\u2080.(x)\nL = DiffEqOperators.DerivativeOperator{Float64}(2, acc_order, 2\u03c0/511, 512, :Dirichlet, :Dirichlet; BC=(u0[1],u0[end]))\n\node = ODEProblem(L, u0, tspan)\nsol = solve(ode, ode_alg, save_everystep=false, saveat=0:10)\n@time sol = solve(ode, ode_alg, save_everystep=false, saveat=0:10)\n\n# try to plot the solution at different time points using\nplot(x, [sol(i) for i in 0:1:10])\n```\n\n\n```julia\n\n```\n", "meta": {"hexsha": "29700c561302a20250a6dd503d34415ba0a78fc6", "size": 3772, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/Heat_equation.ipynb", "max_stars_repo_name": "UnofficialJuliaMirrorSnapshots/SummationByPartsOperators.jl-9f78cca6-572e-554e-b819-917d2f1cf240", "max_stars_repo_head_hexsha": "9d665594279c4131d3132c8d3fc9db2ea17b912d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-07-02T10:17:34.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-02T10:17:34.000Z", "max_issues_repo_path": "notebooks/Heat_equation.ipynb", "max_issues_repo_name": "UnofficialJuliaMirror/SummationByPartsOperators.jl-9f78cca6-572e-554e-b819-917d2f1cf240", "max_issues_repo_head_hexsha": "99379add278e0463145289703273681b2c291da7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/Heat_equation.ipynb", "max_forks_repo_name": "UnofficialJuliaMirror/SummationByPartsOperators.jl-9f78cca6-572e-554e-b819-917d2f1cf240", "max_forks_repo_head_hexsha": 
"99379add278e0463145289703273681b2c291da7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 26.3776223776, "max_line_length": 127, "alphanum_fraction": 0.5182926829, "converted": true, "num_tokens": 756, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9284087946129327, "lm_q2_score": 0.8872045952083047, "lm_q1q2_score": 0.8236885488123971}} {"text": "# Session 10 - Unsupervised Learning \n\n## Contents\n\n- [Principal Components Analysis](#Principal-Components-Analysis)\n- [Clustering Methods](#Clustering-Methods)\n\n\n```python\n# Import\nimport pandas as pd\nimport numpy as np\nimport seaborn as sns\nimport time\n\nfrom sklearn.preprocessing import scale\nfrom sklearn.decomposition import PCA\nfrom sklearn.cluster import KMeans\nfrom scipy.cluster import hierarchy\nfrom scipy.cluster.hierarchy import linkage, dendrogram, cut_tree\n```\n\n\n```python\n# Import matplotlib for graphs\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import axes3d\nfrom IPython.display import clear_output\n\n# Set global parameters\n%matplotlib inline\nplt.style.use('seaborn-white')\nplt.rcParams['lines.linewidth'] = 3\nplt.rcParams['figure.figsize'] = (10,6)\nplt.rcParams['figure.titlesize'] = 20\nplt.rcParams['axes.titlesize'] = 18\nplt.rcParams['axes.labelsize'] = 14\nplt.rcParams['legend.fontsize'] = 14\n```\n\nThe difference between *supervised learning* and *unsupervised learning* is that in the first case we have a variable $y$ which we want to predict, given a set of variables $\\{ X_1, . . . , X_p \\}$.\n\nIn *unsupervised learning* are not interested in prediction, because we do not have an associated response variable $y$. Rather, the goal is to discover interesting things about the measurements on $\\{ X_1, . . . , X_p \\}$. Question that we can answer are:\n\n- Is there an informative way to visualize the data? \n- Can we discover subgroups among the variables or among the observations?\n\nWe are going to answe these questions with two main methods, respectively:\n\n- Principal Components Analysis\n- Clustering\n\n## Principal Component Analysis\n\nSuppose that we wish to visualize $n$ observations with measurements on a set of $p$ features, $\\{X_1, . . . , X_p\\}$, as part of an exploratory data analysis.\n\nWe could do this by examining two-dimensional scatterplots of the data, each of which contains the n observations\u2019 measurements on two of the features. However, there are $p(p\u22121)/2$ such scatterplots; for example,\nwith $p = 10$ there are $45$ plots! \n\nPCA provides a tool to do just this. It finds a low-dimensional represen- tation of a data set that contains as much as possible of the variation. \n\nThe **first principal component** of a set of features $\\{X_1, . . . , X_p\\}$ is the normalized linear combination of the features\n\n$$\nZ_1 = \\phi_{11} X_1 + \\phi_{21} X_2 + ... + \\phi_{p1} X_p\n$$\n\nthat has the largest variance. 
By normalized, we mean that $\\sum_{i=1}^p \\phi^2_{i1} = 1$.\n\nIn other words, the first principal component loading vector solves the optimization problem\n\n$$\n\\underset{\\phi_{11}, \\ldots, \\phi_{p 1}}{\\operatorname{max}} \\ \\left\\{\\frac{1}{n} \\sum_{i=1}^{n}\\left(\\sum_{j=1}^{p} \\phi_{j 1} x_{i j}\\right)^{2}\\right\\} \\quad \\text { subject to } \\quad \\sum_{j=1}^{p} \\phi_{j 1}^{2}=1\n$$\n\nThe objective that we are maximizing is just the sample variance of the $n$ values of $z_{i1}$.\n\nAfter the first principal component $Z_1$ of the features has been determined, we can find the second principal component $Z_2$. The **second principal component** is the linear combination of $\\{X_1, . . . , X_p\\}$ that has maximal variance out of all linear combinations that are *uncorrelated* with $Z_1$.\n\nWe illustrate the use of PCA on the `USArrests` data set. For each of the 50 states in the United States, the data set contains the number of arrests per $100,000$ residents for each of three crimes: `Assault`, `Murder`, and `Rape.` We also record the percent of the population in each state living in urban areas, `UrbanPop`.\n\n\n```python\n# Load crime data\ndf = pd.read_csv('data/USArrests.csv', index_col=0)\ndf.head()\n```\n\n\n\n\n
| State      | Murder | Assault | UrbanPop | Rape |
|------------|--------|---------|----------|------|
| Alabama    | 13.2   | 236     | 58       | 21.2 |
| Alaska     | 10.0   | 263     | 48       | 44.5 |
| Arizona    | 8.1    | 294     | 80       | 31.0 |
| Arkansas   | 8.8    | 190     | 50       | 19.5 |
| California | 9.0    | 276     | 91       | 40.6 |
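The four columns are on very different scales, which matters a great deal for PCA. A minimal sketch (assuming the `df` loaded above) to look at the spread of each raw variable before we standardize:

```python
# Standard deviation of each raw column; Assault is far more spread
# out than the others, which is why we standardize before fitting PCA.
print(df.std())
```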
    \n\n\n\n\n```python\n# Scale data\nX_scaled = pd.DataFrame(scale(df), index=df.index, columns=df.columns).values\n```\n\n\n```python\n# Fit PCA with 2 components\npca2 = PCA(n_components=2).fit(X_scaled)\n```\n\n\n```python\n# Get weights\nweights = pca2.components_.T\ndf_weights = pd.DataFrame(weights, index=df.columns, columns=['PC1', 'PC2'])\ndf_weights\n```\n\n\n\n\n
|          | PC1      | PC2       |
|----------|----------|-----------|
| Murder   | 0.535899 | 0.418181  |
| Assault  | 0.583184 | 0.187986  |
| UrbanPop | 0.278191 | -0.872806 |
| Rape     | 0.543432 | -0.167319 |
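The scores returned by `transform` below are nothing more than the centred data projected onto these loading vectors. A minimal sketch, using the `pca2` and `X_scaled` objects defined above, to confirm the equivalence:

```python
# Scores = (centred) data times the loading vectors; X_scaled is
# already centred, so the two computations agree up to rounding error.
manual_scores = X_scaled @ pca2.components_.T
print(np.allclose(manual_scores, pca2.transform(X_scaled)))  # True
```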
    \n\n\n\n\n```python\n# Transform X to get the principal components\nX_dim2 = pca2.transform(X_scaled)\ndf_dim2 = pd.DataFrame(X_dim2, columns=['PC1', 'PC2'], index=df.index)\ndf_dim2.head()\n```\n\n\n\n\n
| State      | PC1       | PC2       |
|------------|-----------|-----------|
| Alabama    | 0.985566  | 1.133392  |
| Alaska     | 1.950138  | 1.073213  |
| Arizona    | 1.763164  | -0.745957 |
| Arkansas   | -0.141420 | 1.119797  |
| California | 2.523980  | -1.542934 |
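By construction the second score vector is uncorrelated with the first. A quick check, assuming the `df_dim2` frame built above:

```python
# The off-diagonal entries should be numerically zero: principal
# component scores are uncorrelated by construction.
print(np.corrcoef(df_dim2['PC1'], df_dim2['PC2']))
```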
    \n\n\n\n\n```python\nfig, ax1 = plt.subplots(figsize=(10,10))\nax1.set_title('Figure 10.1');\n\n# Plot Principal Components 1 and 2\nfor i in df_dim2.index:\n ax1.annotate(i, (df_dim2.PC1.loc[i], -df_dim2.PC2.loc[i]), ha='center', fontsize=14)\n \n# Plot reference lines\nm = np.max(np.abs(df_dim2.values))*1.2\nax1.hlines(0,-m,m, linestyles='dotted', colors='grey')\nax1.vlines(0,-m,m, linestyles='dotted', colors='grey')\nax1.set_xlabel('First Principal Component')\nax1.set_ylabel('Second Principal Component')\nax1.set_xlim(-m,m); ax1.set_ylim(-m,m)\n\n# Plot Principal Component loading vectors, using a second y-axis.\nax1b = ax1.twinx().twiny() \nax1b.set_ylim(-1,1); ax1b.set_xlim(-1,1)\nfor i in df_weights[['PC1', 'PC2']].index:\n ax1b.annotate(i, (df_weights.PC1.loc[i]*1.05, -df_weights.PC2.loc[i]*1.05), color='orange', fontsize=16)\n ax1b.arrow(0,0,df_weights.PC1[i], -df_weights.PC2[i], color='orange', lw=2)\n```\n\n### PCA and spectral analysis\n\nIn case you haven't noticed, calculating principal components, is equivalent to calculating the eigenvectors of the design matrix $X'X$, i.e. the variance-covariance matrix of $X$. Indeed what we performed above is a decomposition of the variance of $X$ into orthogonal components.\n\nThe constrained maximization problem above can be re-written in matrix notation as\n\n$$\n\\max \\ \\phi' X'X \\phi \\quad \\text{ s. t. } \\quad \\phi'\\phi = 1\n$$\n\nWhich has the following dual representation\n\n$$\n\\mathcal L (\\phi, \\lambda) = \\phi' X'X \\phi - \\lambda (\\phi'\\phi - 1)\n$$\n\nWhich gives first order conditions\n\n$$\n\\begin{align}\n& \\frac{\\partial \\mathcal L}{\\partial \\lambda} = \\phi'\\phi - 1 \\\\\n& \\frac{\\partial \\mathcal L}{\\partial \\phi} = 2 X'X \\phi - 2 \\lambda \\phi\n\\end{align}\n$$\n\nSetting the derivatives to zero at the optimum, we get\n\n$$\n\\begin{align}\n& \\phi'\\phi = 1 \\\\\n& X'X \\phi = \\lambda \\phi\n\\end{align}\n$$\n\nThus, $\\phi$ is an **eigenvector** of the covariance matrix $X'X$, and the maximizing vector will be the one associated with the largest **eigenvalue** $\\lambda$. \n\nWe can now double-check it using `numpy` linear algebra package.\n\n\n```python\neigenval, eigenvec = np.linalg.eig(X_scaled.T @ X_scaled)\ndata = np.concatenate((eigenvec,eigenval.reshape(1,-1)))\nidx = list(df.columns) + ['Eigenvalue']\ndf_eigen = pd.DataFrame(data, index=idx, columns=['PC1', 'PC2','PC3','PC4'])\n\ndf_eigen\n```\n\n\n\n\n
|            | PC1        | PC2       | PC3       | PC4       |
|------------|------------|-----------|-----------|-----------|
| Murder     | 0.535899   | 0.418181  | 0.649228  | -0.341233 |
| Assault    | 0.583184   | 0.187986  | -0.743407 | -0.268148 |
| UrbanPop   | 0.278191   | -0.872806 | 0.133878  | -0.378016 |
| Rape       | 0.543432   | -0.167319 | 0.089024  | 0.817778  |
| Eigenvalue | 124.012079 | 49.488258 | 8.671504  | 17.828159 |
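Two details are worth noting. First, `np.linalg.eig` does not sort its output, which is why the third and fourth eigenvalues above appear out of order. Second, the eigenvalues of $X'X$ are $(n-1)$ times the variances of the principal component scores. A short sketch, assuming the `eigenval` array and `X_scaled` from above:

```python
# Sorting the eigenvalues of X'X and dividing by (n - 1) gives the
# sample variances of the principal components: roughly 2.53, 1.01,
# 0.36 and 0.18 for this data set.
n = X_scaled.shape[0]
print(np.sort(eigenval)[::-1] / (n - 1))
```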
    \n\n\n\nThe spectral decomposition of the variance of $X$ generates a set of orthogonal vectors (eigenvectors) with different magnitudes (eigenvalues). The eigenvalues tell us the amount of variance of the data in that direction.\n\nIf we combine the eigenvectors together, we form a projection matrix $P$ that we can use to transform the original variables: $\\tilde X = P X$\n\n\n```python\nX_transformed = X_scaled @ eigenvec\ndf_transformed = pd.DataFrame(X_transformed, index=df.index, columns=['PC1', 'PC2','PC3','PC4'])\n\ndf_transformed.head()\n```\n\n\n\n\n
| State      | PC1       | PC2       | PC3       | PC4       |
|------------|-----------|-----------|-----------|-----------|
| Alabama    | 0.985566  | 1.133392  | 0.156267  | -0.444269 |
| Alaska     | 1.950138  | 1.073213  | -0.438583 | 2.040003  |
| Arizona    | 1.763164  | -0.745957 | -0.834653 | 0.054781  |
| Arkansas   | -0.141420 | 1.119797  | -0.182811 | 0.114574  |
| California | 2.523980  | -1.542934 | -0.341996 | 0.598557  |
    \n\n\n\nThis is exactly the dataset that we obtained before.\n\n### More on PCA\n\n#### Scaling the Variables\n\nThe results obtained when we perform PCA will also depend on whether the variables have been individually scaled. In fact, the variance of a variable depends on its magnitude.\n\n\n```python\n# Variables variance\ndf.var(axis=0)\n```\n\n\n\n\n Murder 18.970465\n Assault 6945.165714\n UrbanPop 209.518776\n Rape 87.729159\n dtype: float64\n\n\n\nConsequently, if we perform PCA on the unscaled variables, then the first principal component loading vector will have a very large loading for `Assault`, since that variable has by far the highest variance.\n\n\n```python\n# Fit PCA with unscaled varaibles\nX = df.values\npca2_u = PCA(n_components=2).fit(X)\n```\n\n\n```python\n# Get weights\nweights_u = pca2_u.components_.T\ndf_weights_u = pd.DataFrame(weights_u, index=df.columns, columns=['PC1', 'PC2'])\ndf_weights_u\n```\n\n\n\n\n
|          | PC1      | PC2       |
|----------|----------|-----------|
| Murder   | 0.041704 | 0.044822  |
| Assault  | 0.995221 | 0.058760  |
| UrbanPop | 0.046336 | -0.976857 |
| Rape     | 0.075156 | -0.200718 |
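As expected, the first loading vector is essentially the `Assault` direction. One way to see how lopsided the unscaled decomposition is (a sketch, using the `pca2_u` object fitted above) is to look at the fraction of total variance each component captures:

```python
# Without scaling, the first component alone absorbs the vast majority
# of the total variance, driven almost entirely by Assault.
print(pca2_u.explained_variance_ratio_)
```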
    \n\n\n\n\n```python\n# Transform X to get the principal components\nX_dim2_u = pca2_u.transform(X)\ndf_dim2_u = pd.DataFrame(X_dim2_u, columns=['PC1', 'PC2'], index=df.index)\ndf_dim2_u.head()\n```\n\n\n\n\n
| State      | PC1        | PC2        |
|------------|------------|------------|
| Alabama    | 64.802164  | 11.448007  |
| Alaska     | 92.827450  | 17.982943  |
| Arizona    | 124.068216 | -8.830403  |
| Arkansas   | 18.340035  | 16.703911  |
| California | 107.422953 | -22.520070 |
    \n\n\n\n\n```python\nfig, (ax1,ax2) = plt.subplots(1,2,figsize=(18,9))\n\n# Scaled PCA\nfor i in df_dim2.index:\n ax1.annotate(i, (df_dim2.PC1.loc[i], -df_dim2.PC2.loc[i]), ha='center', fontsize=14)\nax1b = ax1.twinx().twiny() \nax1b.set_ylim(-1,1); ax1b.set_xlim(-1,1)\nfor i in df_weights[['PC1', 'PC2']].index:\n ax1b.annotate(i, (df_weights.PC1.loc[i]*1.05, -df_weights.PC2.loc[i]*1.05), color='orange', fontsize=16)\n ax1b.arrow(0,0,df_weights.PC1[i], -df_weights.PC2[i], color='orange', lw=2)\nax1.set_title('Scaled')\n \n# Unscaled PCA\nfor i in df_dim2_u.index:\n ax2.annotate(i, (df_dim2_u.PC1.loc[i], -df_dim2_u.PC2.loc[i]), ha='center', fontsize=14)\nax2b = ax2.twinx().twiny() \nax2b.set_ylim(-1,1); ax2b.set_xlim(-1,1)\nfor i in df_weights_u[['PC1', 'PC2']].index:\n ax2b.annotate(i, (df_weights_u.PC1.loc[i]*1.05, -df_weights_u.PC2.loc[i]*1.05), color='orange', fontsize=16)\n ax2b.arrow(0,0,df_weights_u.PC1[i], -df_weights_u.PC2[i], color='orange', lw=2)\nax2.set_title('Unscaled')\n\n# Plot reference lines\nfor ax,df in zip((ax1,ax2), (df_dim2,df_dim2_u)):\n m = np.max(np.abs(df.values))*1.2\n ax.hlines(0,-m,m, linestyles='dotted', colors='grey')\n ax.vlines(0,-m,m, linestyles='dotted', colors='grey')\n ax.set_xlabel('First Principal Component')\n ax.set_ylabel('Second Principal Component')\n ax.set_xlim(-m,m); ax.set_ylim(-m,m)\n```\n\nAs predicted, the first principal component loading vector places almost all of its weight on `Assault`, while the second principal component loading vector places almost all of its weight on `UrpanPop`. Comparing this to the left-hand plot, we see that scaling does indeed have a substantial effect on the results obtained. However, this result is simply a consequence of the scales on which the variables were measured. \n\n#### The Proportion of Variance Explained\n\nWe can now ask a natural question: how much of the information in a given data set is lost by projecting the observations onto the first few principal components? That is, how much of the variance in the data is not contained in the first few principal components? More generally, we are interested in knowing the **proportion of variance explained (PVE)** by each principal component. \n\n\n```python\n# Four components\npca4 = PCA(n_components=4).fit(X_scaled)\n```\n\n\n```python\n# Variance of the four principal components\npca4.explained_variance_\n```\n\n\n\n\n array([2.53085875, 1.00996444, 0.36383998, 0.17696948])\n\n\n\n\n```python\n# As a percentage of the total variance\npca4.explained_variance_ratio_\n```\n\n\n\n\n array([0.62006039, 0.24744129, 0.0891408 , 0.04335752])\n\n\n\nIn the `Arrest` dataset, the first principal component explains $62.0\\%$ of the variance in the data, and the next principal component explains $24.7\\%$ of the variance. Together, the first two principal components explain almost $87\\%$ of the variance in the data, and the last two principal components explain only $13\\%$ of the variance.\n\n\n```python\nfig, (ax1, ax2) = plt.subplots(1,2, figsize=(12,5))\nfig.suptitle('Figure 10.2');\n\n# Relative \nax1.plot([1,2,3,4], pca4.explained_variance_ratio_)\nax1.set_ylabel('Prop. 
Variance Explained')\nax1.set_xlabel('Principal Component');\n\n# Cumulative\nax2.plot([1,2,3,4], np.cumsum(pca4.explained_variance_ratio_))\nax2.set_ylabel('Cumulative Variance Explained');\nax2.set_xlabel('Principal Component');\n```\n\n#### Deciding How Many Principal Components to Use\n\nIn general, a $n \\times p$ data matrix $X$ has $\\min\\{n \u2212 1, p\\}$ distinct principal components. However, we usually are not interested in all of them; rather, we would like to use just the first few principal components in order to visualize or interpret the data. \n\nWe typically decide on the number of principal components required to visualize the data by examining a *scree plot*.\n\nHowever, there is no well-accepted objective way to decide how many principal com- ponents are enough.\n\n## Clustering\n\nClustering refers to a very broad set of techniques for finding subgroups, or clusters, in a data set. When we cluster the observations of a data set, we seek to partition them into distinct groups so that the observations within each group are quite similar to each other, while observations in different groups are quite different from each other.\n\nIn this section we focus on perhaps the two best-known clustering approaches: \n\n1. **K-means clustering**: we seek to partition the observations into a pre-specified clustering number of clusters\n2. **Hierarchical clustering**: we do not know in advance how many clusters we want; in fact, we end up with a tree-like visual representation of the observations, called a dendrogram, that allows us to view at once the clusterings obtained for each possible number of clusters, from 1 to n.\n\n### K-Means Clustering\n\nThe idea behind K-means clustering is that a good clustering is one for which the within-cluster variation is as small as possible. Hence we want to solve the problem\n\n$$\n\\underset{C_{1}, \\ldots, C_{K}}{\\operatorname{minimize}}\\left\\{\\sum_{k=1}^{K} W\\left(C_{k}\\right)\\right\\}\n$$\n\nwhere $C_k$ is a cluster and $ W(C_k)$ is a measure of the amount by which the observations within a cluster differ from each other.\n\nThere are many possible ways to define this concept, but by far the most common choice involves **squared Euclidean distance**. That is, we define\n\n$$\nW\\left(C_{k}\\right)=\\frac{1}{\\left|C_{k}\\right|} \\sum_{i, i^{\\prime} \\in C_{k}} \\sum_{j=1}^{p}\\left(x_{i j}-x_{i^{\\prime} j}\\right)^2\n$$\n\nwhere $|C_k|$ denotes the number of observations in the $k^{th}$ cluster. \n\n### Algorithm\n\n1. Randomly assign a number, from $1$ to $K$, to each of the observations. These serve as initial cluster assignments for the observations.\n\n2. Iterate until the cluster assignments stop changing:\n\n a) For each of the $K$ clusters, compute the cluster centroid. 
The kth cluster centroid is the vector of the $p$ feature means for the observations in the $k^{th}$ cluster.\n \n b) Assign each observation to the cluster whose centroid is closest (where closest is defined using Euclidean distance).\n\n\n```python\nnp.random.seed(123)\n\n# Simulate data\nX = np.random.randn(50,2)\nX[0:25, 0] = X[0:25, 0] + 3\nX[0:25, 1] = X[0:25, 1] - 4\n```\n\n\n```python\nfig, ax = plt.subplots(figsize=(6, 5))\nfig.suptitle(\"Baseline\")\n\n# Plot\nax.scatter(X[:,0], X[:,1], s=50, alpha=0.5, c='k') \nax.set_xlabel('X0'); ax.set_ylabel('X1');\n```\n\n\n```python\nnp.random.seed(1)\n\n# Init clusters\nK = 2\nclusters0 = np.random.randint(K,size=(np.size(X,0)))\n```\n\n\n```python\nfig, ax = plt.subplots(figsize=(6, 5))\nfig.suptitle(\"Random assignment\")\n\n# Plot\nax.scatter(X[clusters0==0,0], X[clusters0==0,1], s=50, alpha=0.5) \nax.scatter(X[clusters0==1,0], X[clusters0==1,1], s=50, alpha=0.5)\nax.set_xlabel('X0'); ax.set_ylabel('X1');\n```\n\n\n```python\n# Compute new centroids\ndef compute_new_centroids(X, clusters):\n K = len(np.unique(clusters))\n centroids = np.zeros((K,np.size(X,1)))\n for k in range(K):\n if sum(clusters==k)>0:\n centroids[k,:] = np.mean(X[clusters==k,:], axis=0)\n else:\n centroids[k,:] = np.mean(X, axis=0)\n return centroids\n```\n\n\n```python\n# Print\ncentroids0 = compute_new_centroids(X, clusters0)\nprint(centroids0)\n```\n\n [[ 1.35725989 -2.15281035]\n [ 1.84654757 -1.99437838]]\n\n\n\n```python\n# Plot assignment\ndef plot_assignment(X, centroids, clusters, d, i):\n clear_output(wait=True)\n fig, ax = plt.subplots(figsize=(6, 5))\n fig.suptitle(\"Iteration %.0f: inertia=%.1f\" % (i,d))\n\n # Plot\n ax.clear()\n colors = plt.rcParams['axes.prop_cycle'].by_key()['color'];\n K = np.size(centroids,0)\n for k in range(K):\n ax.scatter(X[clusters==k,0], X[clusters==k,1], s=50, c=colors[k], alpha=0.5) \n ax.scatter(centroids[k,0], centroids[k,1], marker = '*', s=300, color=colors[k])\n ax.set_xlabel('X0'); ax.set_ylabel('X1');\n \n # Show\n plt.show();\n```\n\n\n```python\n# Plot\nplot_assignment(X, centroids0, clusters0, 0, 0)\n```\n\n\n```python\n# Assign X to clusters\ndef assign_to_cluster(X, centroids):\n K = np.size(centroids,0)\n dist = np.zeros((np.size(X,0),K))\n for k in range(K):\n dist[:,k] = np.mean((X - centroids[k,:])**2, axis=1)\n clusters = np.argmin(dist, axis=1)\n \n # Compute inertia\n inertia = 0\n for k in range(K):\n if sum(clusters==k)>0:\n inertia += np.sum((X[clusters==k,:] - centroids[k,:])**2)\n return clusters, inertia\n```\n\n\n```python\n# Get cluster assignment\n[clusters1,d] = assign_to_cluster(X, centroids0)\n```\n\n\n```python\n# Plot\nplot_assignment(X, centroids0, clusters1, d, 1)\n```\n\nWe now have all the components to proceed iteratively.\n\n\n```python\ndef kmeans_manual(X, K):\n\n # Init\n i = 0\n d0 = 1e4\n d1 = 1e5\n clusters = np.random.randint(K,size=(np.size(X,0)))\n\n # Iterate until convergence\n while np.abs(d0-d1) > 1e-10:\n d1 = d0\n centroids = compute_new_centroids(X, clusters)\n [clusters, d0] = assign_to_cluster(X, centroids)\n plot_assignment(X, centroids, clusters, d0, i)\n i+=1\n```\n\n\n```python\n# Test\nkmeans_manual(X, K)\n```\n\nHere the observations can be easily plotted because they are two-dimensional.\nIf there were more than two variables then we could instead perform PCA\nand plot the first two principal components score vectors.\n\nIn this example, we knew that there really were two clusters because\nwe generated the data. 
However, for real data, in general we do not know\nthe true number of clusters. We could instead have performed K-means\nclustering on this example with `K = 3`. If we do this, K-means clustering will split up the two \"real\" clusters, since it has no information about them:\n\n\n```python\n# K=3\nkmeans_manual(X, 3)\n```\n\nThe automated function in `sklearn` to persorm $K$-means clustering is `KMeans`. \n\n\n```python\n# SKlearn algorithm\nkm1 = KMeans(n_clusters=3, n_init=1, random_state=1)\nkm1.fit(X)\n```\n\n\n\n\n KMeans(n_clusters=3, n_init=1, random_state=1)\n\n\n\n\n```python\n# Plot\nplot_assignment(X, km1.cluster_centers_, km1.labels_, km1.inertia_, km1.n_iter_)\n```\n\nAs we can see, the results are different in the two algorithms? Why? $K$-means is susceptible to the initial values. One way to solve this problem is to run the algorithm multiple times and report only the best results\n\nTo run the `Kmeans()` function in python with multiple initial cluster assignments, we use the `n_init` argument (default: 10). If a value of `n_init` greater than one is used, then K-means clustering will be performed using multiple random assignments, and the `Kmeans()` function will report only the best results.\n\n\n```python\n# 30 runs\nkm_30run = KMeans(n_clusters=3, n_init=30, random_state=1).fit(X)\nplot_assignment(X, km_30run.cluster_centers_, km_30run.labels_, km_30run.inertia_, km_30run.n_iter_)\n```\n\nIt is generally recommended to always run K-means clustering with a large value of `n_init`, such as 20 or 50 to avoid getting stuck in an undesirable local optimum.\n\nWhen performing K-means clustering, in addition to using multiple initial cluster assignments, it is also important to set a random seed using the `random_state` parameter. This way, the initial cluster assignments can be replicated, and the K-means output will be fully reproducible.\n\n### Hierarchical Clustering\n\nOne potential disadvantage of K-means clustering is that it requires us to pre-specify the number of clusters $K$. Hierarchical clustering is an alternative approach which does not require that we commit to a particular choice of $K$\n\n\n#### The Dendogram\n\nHierarchical clustering has an added advantage over K-means clustering in that it results in an attractive tree-based representation of the observations, called a **dendrogram**.\n\n\n```python\nplt.figure(figsize=(25, 10))\nplt.title('Hierarchical Clustering Dendrogram')\n\n# calculate full dendrogram\nplt.xlabel('sample index')\nplt.ylabel('distance')\ndendrogram(\n linkage(X, \"complete\"),\n leaf_rotation=90., # rotates the x axis labels\n leaf_font_size=8., # font size for the x axis labels\n)\nplt.show()\n```\n\nEach leaf of the *dendrogram* represents one observation. \n\nAs we move up the tree, some leaves begin to fuse into branches. These correspond to observations that are similar to each other. As we move higher up the tree, branches themselves fuse, either with leaves or other branches. The earlier (lower in the tree) fusions occur, the more similar the groups of observations are to each other.\n\nWe can use de *dendogram* to understand how similar two observations are: we can look for the point in the tree where branches containing those two obse rvations are first fused. The height of this fusion, as measured on the vertical axis, indicates how different the two observations are. 
Thus, observations that fuse at the very bottom of the tree are quite similar to each other, whereas observations that fuse close to the top of the tree will tend to be quite different.\n\nThe term **hierarchical** refers to the fact that clusters obtained by cutting the dendrogram at a given height are necessarily nested within the clusters obtained by cutting the dendrogram at any greater height.\n\n#### The Hierarchical Clustering Algorithm\n\n1. Begin with $n$ observations and a measure (such as Euclidean distance) of all the $n(n \u2212 1)/2$ pairwise dissimilarities. Treat each observation as its own cluster.\n\n2. For $i=n,n\u22121,...,2$\n\n a) Examine all pairwise inter-cluster dissimilarities among the $i$ clusters and identify the **pair of clusters that are least dissimilar** (that is, most similar). Fuse these two clusters. The dissimilarity between these two clusters indicates the height in the dendrogram at which the fusion should be placed.\n \n b) Compute the new pairwise inter-cluster dissimilarities among the $i\u22121$ remaining clusters.\n\n#### The Linkage Function\n\nWe have a concept of the dissimilarity between pairs of observations, but how do we define the dissimilarity between two clusters if one or both of the clusters contains multiple observations?\n\nThe concept of dissimilarity between a pair of observations needs to be extended to a pair of groups of observations. This extension is achieved by developing the notion of **linkage**, which defines the dissimilarity between two groups of observations.\n\nThe four most common types of linkage are:\n\n1. **Complete**: Maximal intercluster dissimilarity. Compute all pairwise dissimilarities between the observations in cluster A and the observations in cluster B, and record the largest of these dissimilarities.\n2. **Single**: Minimal intercluster dissimilarity. Compute all pairwise dissimilarities between the observations in cluster A and the observations in cluster B, and record the smallest of these dissimilarities. \n3. **Average**: Mean intercluster dissimilarity. Compute all pairwise dissimilarities between the observations in cluster A and the observations in cluster B, and record the average of these dissimilarities.\n4. **Centroid**: Dissimilarity between the centroid for cluster A (a mean vector of length $p$) and the centroid for cluster B. Centroid linkage can result in undesirable inversions.\n\nAverage, complete, and single linkage are most popular among statisticians. Average and complete linkage are generally preferred over single linkage, as they tend to yield more balanced dendrograms. Centroid linkage is often used in genomics, but suffers from a major drawback in that an inversion can occur, whereby two clusters are fused at a height below either of the individual clusters in the dendrogram. 
This can lead to difficulties in visualization as well as in interpretation of the dendrogram.\n\n\n```python\n# Init\nlinkages = [hierarchy.complete(X), hierarchy.average(X), hierarchy.single(X)]\ntitles = ['Complete Linkage', 'Average Linkage', 'Single Linkage']\n```\n\n\n```python\nfig, (ax1,ax2,ax3) = plt.subplots(1,3, figsize=(18,10))\n\n# Plot\nfor linkage, t, ax in zip(linkages, titles, (ax1,ax2,ax3)):\n dendrogram(linkage, ax=ax)\n ax.set_title(t)\n```\n\nFor this data, complete and average linkage generally separates the observations into their correct groups.\n", "meta": {"hexsha": "c9d85aa3b6c0835835356452dbe0efaafcd32ac2", "size": 593316, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "10_unsupervised.ipynb", "max_stars_repo_name": "albarran/Machine-Learning-for-Economic-Analysis-2020", "max_stars_repo_head_hexsha": "eaca29efb8347e51178bdb7fcf90f934d7fee144", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-05-14T17:23:16.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-14T17:23:16.000Z", "max_issues_repo_path": "10_unsupervised.ipynb", "max_issues_repo_name": "albarran/Machine-Learning-for-Economic-Analysis-2020", "max_issues_repo_head_hexsha": "eaca29efb8347e51178bdb7fcf90f934d7fee144", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "10_unsupervised.ipynb", "max_forks_repo_name": "albarran/Machine-Learning-for-Economic-Analysis-2020", "max_forks_repo_head_hexsha": "eaca29efb8347e51178bdb7fcf90f934d7fee144", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 339.0377142857, "max_line_length": 178032, "alphanum_fraction": 0.9232769721, "converted": true, "num_tokens": 9193, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9334308165850443, "lm_q2_score": 0.8824278618165526, "lm_q1q2_score": 0.8236853596328194}} {"text": "```python\nfrom sympy import init_session\ninit_session()\n```\n\n IPython console for SymPy 1.5.1 (Python 3.6.10-64-bit) (ground types: gmpy)\n \n These commands were executed:\n >>> from __future__ import division\n >>> from sympy import *\n >>> x, y, z, t = symbols('x y z t')\n >>> k, m, n = symbols('k m n', integer=True)\n >>> f, g, h = symbols('f g h', cls=Function)\n >>> init_printing()\n \n Documentation can be found at https://docs.sympy.org/1.5.1/\n \n\n\n\n```python\n#init_printing?\n```\n\n### Parameters and two Gaussians\n\n\n```python\na, b, c, a1, a2 = symbols('a b c a1 a2', positive=True, real=True)\n```\n\n\n```python\ng1=x*exp(-a1*x**2)\ng2=x*exp(-a2*x**2)\ng1, g2\n```\n\n### Normalization constant\n\n\n```python\nN=integrate(g1*g1, (x, -oo, oo))\nN\n```\n\n\n```python\n1/sqrt(N)\n```\n\n\n```python\nprinting.sstrrepr(1/sqrt(N))\n```\n\n\n\n\n '2*2**(1/4)*a1**(3/4)/pi**(1/4)'\n\n\n\n### Overlap integral S\n\n\n```python\nS=integrate(g1*g2, (x, -oo, oo))\nS\n```\n\n\n```python\nS.simplify()\n```\n\n\n```python\nprinting.sstrrepr(S.simplify())\n```\n\n\n\n\n 'sqrt(pi)/(2*(a1 + a2)**(3/2))'\n\n\n\n### Kinetic energy $T = -\\frac{\\hbar^2}{2m} \\frac{d^2}{dx^2} = \\frac{1}{2m}\\left(\\frac{\\hbar}{i}\\frac{d}{dx} \\right)^2$\n\n\n```python\nd1=diff(g1,x)\nd2=diff(g2,x)\nd1, d2\n```\n\n\n```python\nT = 1/2 * integrate(d1*d2, (x, -oo, oo))\n#T=T.simplify()\n#T=T.factor()\nT.factor()\n```\n\n\n```python\nprinting.sstrrepr(T.factor())\n```\n\n\n\n\n '1.5*sqrt(pi)*a1*a2/(a1 + a2)**(5/2)'\n\n\n\n### Potential $V(x) = (ax^2 - b)e^{-cx^2}$\n\n\n```python\nv=(a*x**2-b)*exp(-c*x**2)\nv\n```\n\n\n```python\nV = integrate(g1*v*g2, (x, -oo, oo))\nV\n```\n\n\n```python\nV.factor()\n```\n\n\n```python\nprinting.sstrrepr(V.factor())\n```\n\n\n\n\n 'sqrt(pi)*(3*a - 2*a1*b - 2*a2*b - 2*b*c)/(4*(a1 + a2 + c)**(5/2))'\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "3e650140d17fc8dd6b288156c730455860125bd0", "size": 45011, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/GTO_integrals/GTO_1D_P.ipynb", "max_stars_repo_name": "tsommerfeld/L2-methods_for_resonances", "max_stars_repo_head_hexsha": "acba48bfede415afd99c89ff2859346e1eb4f96c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notebooks/GTO_integrals/GTO_1D_P.ipynb", "max_issues_repo_name": "tsommerfeld/L2-methods_for_resonances", "max_issues_repo_head_hexsha": "acba48bfede415afd99c89ff2859346e1eb4f96c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/GTO_integrals/GTO_1D_P.ipynb", "max_forks_repo_name": "tsommerfeld/L2-methods_for_resonances", "max_forks_repo_head_hexsha": "acba48bfede415afd99c89ff2859346e1eb4f96c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 94.1652719665, "max_line_length": 7560, "alphanum_fraction": 0.8338850503, "converted": true, "num_tokens": 682, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9441768572945969, "lm_q2_score": 0.8723473730188543, "lm_q1q2_score": 0.8236502011261393}} {"text": "# Matrix Factorization implementation \n\nIv\u00e1n Vall\u00e9s P\u00e9rez - 2018\n\nIn a [recommender system](https://en.wikipedia.org/wiki/Recommender_system) we have two kind of entities: users and items. We want to predict for an arbitrary user which items the user prefers. Sometimes users give explicit ratings, for example 4 or 5 star reviews. Let's start with this case and give an accurate mathematical formulation of the problem.\n\nWe will call $r_{ij}$ the rating that user $i$ gives to item $j$ and we want to build a model to predict these ratings. Let's call the predictions of the model $\\hat{r}_{ij}$. It's important to realize that since each user will probably give ratings to just a handful of items and in most cases there are thousands or even millions of items we don't have ratings for the vast majority of user-item possible interactions. Let's call $I$ the set of known interactions and let's suppose that we want to minimize the squared loss, that is, we want to minimize:\n\n\\begin{equation}\nL = \\sum_{(i,j) \\in I} (r_{ij} - \\hat{r}_{ij})^2\n\\end{equation}\n\nA matrix-factorization based recommender will solve the above problem by supposing that:\n\n- We represent user $i$ with an unknown user bias $u_i^b$ and an unknown vector of length $K$ that we will call $u_i^e$, which is usually called the user embedding.\n- We represent item $j$ with an unknown item bias $v_j^b$ and an unknown vector of length $K$ that we will call $v_j^e$, which is usually called the item embedding.\n- The predicted rating of item $j$ by user $i$ is the biases plus the dot product of the two embeddings: \n\n\\begin{equation}\n\\hat{r}_{ij} = u_i^b + v_j^b + u_i^e \\cdot v_j^e = u_i^b + v_j^b + \\sum_{k=1}^K u_{ik}^ev_{jk}^e\n\\end{equation}\n\nThe above vectors are the parameters of our problem and $K$ is a hyperparameter. If we have $N$ users and $M$ items this means that we have $(K + 1)(N + M)$ parameters. Substituting inside the loss function we have:\n\n\\begin{equation}\nL = \\sum_{(i,j) \\in I} (r_{ij} - u_i^b - v_j^b - \\sum_{k=1}^K u_{ik}^ev_{jk}^e)^2\n\\end{equation}\n\nTo improve the generalization capabilities of the model regularization is added and finally we have:\n\n\\begin{equation}\nL = \\sum_{(i,j) \\in I} (r_{ij} - u_i^b - v_j^b - \\sum_{k=1}^Ku_{ik}^ev_{jk}^e)^2 + \\lambda_u\\sum_{i,k} (u_{ik}^e)^2 + \\lambda_v\\sum_{j, k} (v_{jk}^e)^2\n\\end{equation}\n\n$\\lambda_u$ and $\\lambda_v$ are two additional hyperparameters.\n\n\n```python\n%cd ..\n```\n\n\n```python\nfrom src.matrix_factorization import MatrixFactorization\nfrom src.deep_factorization import DeepFactorization\n```\n\n C:\\Users\\Ivan Valles Perez\\AppData\\Local\\Continuum\\Anaconda3\\lib\\site-packages\\h5py\\__init__.py:34: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. 
In future, it will be treated as `np.float64 == np.dtype(float).type`.\n from ._conv import register_converters as _register_converters\n\n\n\n```python\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom IPython.display import clear_output\n\n# Load the data and calculate the users set and items set cardinalities\ndf = pd.read_csv(\"./data/interactions.csv\", names=[\"U\", \"I\", \"Q\"])\nU_cardinality = df.U.nunique()\nI_cardinality = df.I.nunique()\n```\n\n\n```python\n# Generates and shuffles the data set to remove possible tendencies\nnp.random.seed(655321)\nmat = df.values\nnp.random.shuffle(mat)\n```\n\n\n```python\n# Divide the data set into train-dev-test\ntrain_mat = mat[:85000]\ndev_mat = mat[85000:90000]\ntest_mat = mat[90000:]\n```\n\n\n```python\n# Separate features from target\nx_train=(train_mat[:,:2]-1)\ny_train=(train_mat[:,2:])\nx_dev=(dev_mat[:,:2]-1)\ny_dev=(dev_mat[:,2:])\nx_test=(test_mat[:,:2]-1)\ny_test=(test_mat[:,2:])\n```\n\n## Matrix Factorization\n\n\n```python\n# Initialize the model and the performance accumulators for reporting purposes\n# The metaparameters have been tuned to improve the algorithm performance\nMF = MatrixFactorization(n_users=U_cardinality, n_items=I_cardinality, emb_size=5, lr=0.002, _lambda=0.1)\nlosses_train = [MF.evaluate(x_train, y_train, batch_size=1000)] # Add the initial loss (w. random weights)\nlosses_dev = [MF.evaluate(x_dev, y_dev, batch_size=1000)] # Add the initial loss (w. random weights)\n```\n\n\n```python\nfor i in range(50): # Run for 50 epochs\n \n MF.fit(x_train, y_train, batch_size=128) # Compute an epoch using SGD\n losses_train.append(MF.evaluate(x_train, y_train, batch_size=128)) # Compute the train performance\n losses_dev.append(MF.evaluate(x_dev, y_dev, batch_size=128)) # Compute the dev. performance\n \n # Plot train and dev errors over time\n clear_output(True)\n plt.figure(figsize=[10, 3])\n plt.plot(losses_train)\n plt.plot(losses_dev)\n axes = plt.gca()\n plt.legend([\"Train MSE\", \"Dev MSE\"])\n plt.title(\"Training errors over time (Mean Squared Error)\")\n plt.ylabel(\"Log-MSE\")\n plt.xlabel(\"Epochs\")\n plt.ylim([0.1, axes.get_ylim()[1]])\n plt.yscale('log')\n plt.grid(True, which=\"both\",ls=\"-\")\n plt.show()\n print(\"[EPOCH {0}] Train error = {1:.4f} | Dev error = {2:.4f}\".format(i+1, losses_train[-1], losses_dev[-1]))\n\nprint(\"Test MSE achieved = {0:.4f}\".format(MF.evaluate(x_test, y_test, batch_size=128)))\n\n```\n\n## Extra: solution using deep learning\n\n\n```python\nDF = DeepFactorization(n_users=U_cardinality, n_items=I_cardinality, emb_size=5, lr=0.002, _lambda=0.1)\nlosses_train = [DF.evaluate(x_train, y_train, batch_size=1000)] # Add the initial loss (w. random weights)\nlosses_dev = [DF.evaluate(x_dev, y_dev, batch_size=1000)] # Add the initial loss (w. random weights)\n```\n\n\n```python\nfor i in range(50): # Run for 50 epochs\n \n DF.fit(x_train, y_train, batch_size=128) # Compute an epoch using SGD\n losses_train.append(DF.evaluate(x_train, y_train, batch_size=128)) # Compute the train performance\n losses_dev.append(DF.evaluate(x_dev, y_dev, batch_size=128)) # Compute the dev. 
performance\n \n # Plot train and dev errors over time\n clear_output(True)\n plt.figure(figsize=[10, 3])\n plt.plot(losses_train)\n plt.plot(losses_dev)\n axes = plt.gca()\n plt.legend([\"Train MSE\", \"Dev MSE\"])\n plt.title(\"Training errors over time (Mean Squared Error)\")\n plt.ylabel(\"Log-MSE\")\n plt.xlabel(\"Epochs\")\n plt.ylim([0.1, axes.get_ylim()[1]])\n plt.yscale('log')\n plt.grid(True, which=\"both\",ls=\"-\")\n plt.show()\n print(\"[EPOCH {0}] Train error = {1:.4f} | Dev error = {2:.4f}\".format(i+1, losses_train[-1], losses_dev[-1]))\n\nprint(\"Test MSE achieved = {0:.4f}\".format(DF.evaluate(x_test, y_test, batch_size=128)))\n\n```\n", "meta": {"hexsha": "1f21fa0f9717cb774963c66dec8de57c1e16fd20", "size": 49123, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/example.ipynb", "max_stars_repo_name": "ivallesp/deep_matrix_factorization", "max_stars_repo_head_hexsha": "8aab17d7abac81fff4fd574c0a41beb46af04499", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2020-05-19T17:19:41.000Z", "max_stars_repo_stars_event_max_datetime": "2020-12-25T08:35:32.000Z", "max_issues_repo_path": "notebooks/example.ipynb", "max_issues_repo_name": "ivallesp/deep_matrix_factorization", "max_issues_repo_head_hexsha": "8aab17d7abac81fff4fd574c0a41beb46af04499", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/example.ipynb", "max_forks_repo_name": "ivallesp/deep_matrix_factorization", "max_forks_repo_head_hexsha": "8aab17d7abac81fff4fd574c0a41beb46af04499", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 157.9517684887, "max_line_length": 19752, "alphanum_fraction": 0.8716894326, "converted": true, "num_tokens": 1888, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9559813538993888, "lm_q2_score": 0.8615382147637196, "lm_q1q2_score": 0.8236144689858831}} {"text": "Euler Problem 41\n================\n\nWe shall say that an n-digit number is pandigital if it makes use of all the digits 1 to n exactly once. For example, 2143 is a 4-digit pandigital and is also prime.\n\nWhat is the largest n-digit pandigital prime that exists?\n\n\n```python\nfrom itertools import permutations\nfrom sympy import isprime\n\nfor v in permutations(range(7, 0, -1)):\n p = int(''.join(map(str, v)))\n if isprime(p):\n print(p)\n break\n```\n\n 7652413\n\n\n**Explanation:** Every pandigital number $N$ with 8 or 9 digits is divisible by 9, since the sum of the digits of $N$ is $1 + 2 + 3 + \\cdots + 8 = 36$ or $1 + 2 + 3 + \\cdots + 9 = 45$, respectively. 
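Both digit sums are multiples of $9$ (explicitly, $36 = 9 \cdot 4$ and $45 = 9 \cdot 5$), so by the rule that a number is divisible by $9$ exactly when its digit sum is, such an $N$ can never be prime. 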
Therefore, pandigital primes have at most 7 digits.\n\nWe use `itertools.permutations` to iterate through all permutations of the digits 1-7 in reverse order until we find a permutation that forms a prime number.\n", "meta": {"hexsha": "13dcc6937807e3d3def082cc840d00e451f35e35", "size": 1819, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Euler 041 - Pandigital prime.ipynb", "max_stars_repo_name": "Radcliffe/project-euler", "max_stars_repo_head_hexsha": "5eb0c56e2bd523f3dc5329adb2fbbaf657e7fa38", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2016-05-11T18:55:35.000Z", "max_stars_repo_stars_event_max_datetime": "2019-12-27T21:38:43.000Z", "max_issues_repo_path": "Euler 041 - Pandigital prime.ipynb", "max_issues_repo_name": "Radcliffe/project-euler", "max_issues_repo_head_hexsha": "5eb0c56e2bd523f3dc5329adb2fbbaf657e7fa38", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Euler 041 - Pandigital prime.ipynb", "max_forks_repo_name": "Radcliffe/project-euler", "max_forks_repo_head_hexsha": "5eb0c56e2bd523f3dc5329adb2fbbaf657e7fa38", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 25.6197183099, "max_line_length": 261, "alphanum_fraction": 0.5508521165, "converted": true, "num_tokens": 247, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9579122732859021, "lm_q2_score": 0.8596637505099168, "lm_q1q2_score": 0.823482457512439}} {"text": "```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport emcee\nimport scipy.spatial.distance\nfrom scipy.stats import multivariate_normal\n```\n\n# Linear regression: uncertainty in uncertainty\n\nLet's say we have a data in x and y axes as below:\n\n\n```python\nx = np.array([0.0596779, 0.34317802, 0.39211752, 0.42310646, 0.43857224, 0.4809319, 0.68482974,\n 0.71946897, 0.72904971, 0.9807642])\ny = np.array([-1.39284066, 0.02093466, -0.48427289, -0.36730135, -0.33372661, 0.49791066,\n 0.89920648, 0.63361326, 0.47788066, 1.07935026])\nplt.plot(x, y, '.')\nplt.show()\n```\n\nWith the data given above, we would like to fit it with a linear regression **y = a + bx**. That is, we would like to determine the coefficients and its error in different cases:\n\n1. Assuming the model is correct and the error is known\n2. Assuming the model is correct and the error is unknown\n3. 
The model has an assumed inadequacy and the error is unknown\n\n## Known correct model & known error\n\nIn this first case, we will determine the uncertainty of the coefficients if we know the standard deviation of the data.\nThe way we invert it is using Bayesian inference with the help from `emcee` sampler.\n\nThe probability density function of the coefficients can be written as\n\n$$\\begin{equation}\nP(a,b | \\mathcal{D}) \\propto P(\\mathcal{D} | a, b) P(a, b)\n\\end{equation}$$\n\nAssuming flat prior of \\\\(a, b\\\\), we can write the probability of the coefficients as\n\n$$\\begin{align}\nP(a,b | \\mathcal{D}) &\\propto P(\\mathcal{D} | a, b) \\\\\n& \\propto \\exp\\left[-\\sum_i \\frac{(a+bx_i-y_i)^2}{2\\sigma^2}\\right]\n\\end{align}$$\n\nFrom the expression above, we can draw samples of \\\\(a,b\\\\) using the `emcee` sampler.\n\n\n```python\ndef lnprob1(param, x, y):\n param = np.array(param)\n a = param[0]\n b = param[1]\n sigma = 0.28297849805199204\n return -np.sum((a + b * x - y)**2) / (2 * sigma**2)\n```\n\n\n```python\nndim, nwalkers = 2, 100\nivar = 1. / np.random.rand(ndim)\np0 = [np.random.rand(ndim) for i in range(nwalkers)]\n\nsampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob1, args=[x, y])\npos, prob, state = sampler.run_mcmc(p0, 10000)\n```\n\n\n```python\na_post1 = sampler.flatchain[10000:,0]\nb_post1 = sampler.flatchain[10000:,1]\nplt.figure(figsize=(12,4))\nplt.subplot(1,2,1)\nplt.hist(a_post1, 50, normed=True)\nplt.title(\"a\")\nplt.subplot(1,2,2)\nplt.hist(b_post1, 50, normed=True)\nplt.title(\"b\")\nplt.show()\n\nprint(\"a: %f +- %f (true: %f)\" % (np.mean(a_post1), np.std(a_post1), -1.0856306033005612))\nprint(\"b: %f +- %f (true: %f)\" % (np.mean(b_post1), np.std(b_post1), 1.9946908931671716))\n```\n\n## Known correct model and unknown error\n\nIn this case, we don't know quite sure what the error was, so we put a prior uncertainty of the \\\\(\\sigma\\\\) and get the posterior belief on that as well as the other coefficients.\n\nThe probability density function of the coefficients can be written as\n\n$$\\begin{equation}\nP(a,b,\\sigma | \\mathcal{D}) \\propto P(\\mathcal{D} | a, b, \\sigma) P(a, b) P(\\sigma)\n\\end{equation}$$\n\nAs before, the prior of \\\\(a, b\\\\) is assumed flat. However, we assumed the \\\\(\\sigma\\\\) to have the flat log prior. 
Therefore, we can write the probability of the coefficients as\n\n$$\\begin{align}\nP(a,b,\\sigma | \\mathcal{D}) &\\propto P(\\mathcal{D} | a, b, \\sigma) \\frac{1}{\\sigma} \\\\\n& \\propto \\exp\\left[-\\sum_i \\frac{(a+bx_i-y_i)^2}{2\\sigma^2}\\right] \\frac{1}{\\sigma^{N+1}}\n\\end{align}$$\nwhere \\\\(N\\\\) is the number of data samples.\n\nFrom the expression above, we can draw samples of \\\\(a,b, \\sigma\\\\) using the `emcee` sampler.\n\n\n```python\ndef lnprob2(param, x, y):\n a, b, sigma = param\n if sigma < 0: return -np.inf\n \n ymodel = a + b * x\n N = len(x)\n return - np.sum((ymodel - y)**2) / (2*sigma**2) - (N+1) * np.log(sigma)\n```\n\n\n```python\nndim, nwalkers = 3, 100\np0 = np.random.random((nwalkers, ndim))\n\nsampler2 = emcee.EnsembleSampler(nwalkers, ndim, lnprob2, args=[x, y])\npos, prob, state = sampler2.run_mcmc(p0, 10000)\n```\n\n\n```python\na_post2 = sampler2.flatchain[10000:,0]\nb_post2 = sampler2.flatchain[10000:,1]\ns_post2 = sampler2.flatchain[10000:,2]\nplt.figure(figsize=(12,4))\nplt.subplot(1,3,1)\nplt.hist(a_post2, 50, normed=True)\nplt.title(\"a\")\nplt.subplot(1,3,2)\nplt.hist(b_post2, 50, normed=True)\nplt.title(\"b\")\nplt.subplot(1,3,3)\nplt.hist(s_post2, 50, normed=True)\nplt.title(\"sigma\")\nplt.show()\n\nprint(\"a: %f +- %f (true: %f)\" % (np.mean(a_post2), np.std(a_post2), -1.0856306033005612))\nprint(\"b: %f +- %f (true: %f)\" % (np.mean(b_post2), np.std(b_post2), 1.9946908931671716))\nprint(\"sigma: %f +- %f (true: %f)\" % (np.mean(s_post2), np.std(s_post2), 0.28297849805199204))\n```\n\n## Unknown correct model and unknown error \n\nThis is similar to the previous case, except that we are not sure that the linear model is the correct one. One way to encode our uncertainty is to express the observation as\n\n$$\\begin{equation}\n\\hat{y}(x) = a + bx + \\varepsilon + \\eta(x)\n\\end{equation}$$\n\nwhere \\\\(\\varepsilon \\sim \\mathcal{N}(0, \\sigma^2)\\\\) is the Gaussian noise and \\\\(\\eta(x)\\\\) is the model inadequacy. Let's assume the model inadequacy is a Gaussian process with mean zero and squared exponential kernel:\n\n$$\\begin{align}\n\\eta(x) & \\sim \\mathcal{GP}\\left[0, c(\\cdot, \\cdot)\\right] \\\\\nc(x_1, x_2) & = m^2 \\exp\\left[-\\frac{(x_1 - x_2)^2}{2 d^2}\\right]\n\\end{align}$$\n\nTo encode the Gaussian noise and the Gaussian process in one expression, we can write it as\n\n$$\\begin{align}\n(\\eta(x) + \\varepsilon) & \\sim \\mathcal{GP}\\left[0, c_2(\\cdot, \\cdot)\\right] \\\\\nc_2(x_1, x_2) & = m^2 \\exp\\left[-\\frac{(x_1 - x_2)^2}{2 d^2}\\right] + \\delta(x_1-x_2) \\sigma^2\n\\end{align}$$\n\nwhere \\\\(\\delta\\\\) term is one when the argument is zero and zero otherwise.\n\n\nThe posterior distribution is given by\n\n$$\\begin{equation}\nP(a,b,\\sigma,m,d | \\mathcal{D}) \\propto P(\\mathcal{D} | a, b, \\sigma, m, d) P(a, b) P(\\sigma) P(m) P(d)\n\\end{equation}$$\n\nAs before, the prior of \\\\(a, b\\\\) is assumed flat and \\\\(\\sigma\\\\) to have the flat log prior. Here we also assume \\\\(m,d\\\\) to have the flat log prior. 
Thus, we can write the probability of the coefficients as\n\n$$\\begin{align}\nP(a,b,\\sigma,m,d | \\mathcal{D}) &\\propto P(\\mathcal{D} | a, b, \\sigma, m, d) \\frac{1}{\\sigma m d} \\\\\n& \\propto \\mathcal{GP}\\left[\\hat{y}; a + bx, c_2(\\cdot, \\cdot)\\right] \\frac{1}{\\sigma m d}\n\\end{align}$$\n\nFrom the expression above, we can draw samples of \\\\(a,b, \\sigma, m, d\\\\) using the `emcee` sampler.\n\n\n```python\ndef lnprob3(param, x, y, dist):\n a, b, sigma, m, d = param\n if sigma < 0 or m < 0 or d < 0: return -np.inf\n if d > 1: return -np.inf\n \n # calculate the covariance matrix\n cov = m*m * np.exp(-dist**2/(2*d*d)) + np.eye(len(x)) * (sigma*sigma)\n # cov = np.eye(len(x)) * (sigma * sigma)\n \n ymodel = a + b * x\n obs = ymodel - y\n return multivariate_normal.logpdf(obs, mean=np.zeros(obs.shape[0]), cov=cov) - np.log(sigma * m * d)\n```\n\n\n```python\ndist = scipy.spatial.distance.squareform(scipy.spatial.distance.pdist(np.expand_dims(x, 1)))\n```\n\n\n```python\nndim, nwalkers = 5, 100\np0 = np.random.random((nwalkers, ndim))\n\nsampler3 = emcee.EnsembleSampler(nwalkers, ndim, lnprob3, args=[x, y, dist])\npos, prob, state = sampler3.run_mcmc(p0, 100000)\n```\n\n\n```python\nnburn = 10000\na_post3 = sampler3.flatchain[nburn:,0]\nb_post3 = sampler3.flatchain[nburn:,1]\ns_post3 = sampler3.flatchain[nburn:,2]\nm_post3 = sampler3.flatchain[nburn:,3]\nd_post3 = sampler3.flatchain[nburn:,4]\nplt.figure(figsize=(20,4))\nplt.subplot(1,5,1)\nplt.hist(a_post3, 50, normed=True)\nplt.title(\"a\")\nplt.subplot(1,5,2)\nplt.hist(b_post3, 50, normed=True)\nplt.title(\"b\")\nplt.subplot(1,5,3)\nplt.hist(s_post3, 50, normed=True)\nplt.title(\"sigma\")\nplt.subplot(1,5,4)\nplt.hist(m_post3, 50, normed=True)\nplt.title(\"m\")\nplt.subplot(1,5,5)\nplt.hist(d_post3, 50, normed=True)\nplt.title(\"d\")\nplt.show()\n\nprint(\"a: %f +- %f (true: %f)\" % (np.mean(a_post3), np.std(a_post3), -1.0856306033005612))\nprint(\"b: %f +- %f (true: %f)\" % (np.mean(b_post3), np.std(b_post3), 1.9946908931671716))\nprint(\"sigma: %f +- %f (true: %f)\" % (np.mean(s_post3), np.std(s_post3), 0.28297849805199204))\nprint(\"m: %f +- %f\" % (np.mean(m_post3), np.std(m_post3)))\nprint(\"d: %f +- %f\" % (np.mean(d_post3), np.std(d_post3)))\n```\n\n## Conclusions\n\nWe have compared the retrieved coefficients with different degrees of uncertainty: (1) known model and known error, (2) known model and unknown error, and (3) unknown model and unknown error. 
Here are the comparisons of the retrieved coefficients for those three cases.\n\n\n```python\nplt.figure(figsize=(10, 12))\na_xlim = (-2.5, 1)\nb_xlim = (0, 5)\n\nplt.subplot(3,2,1)\nplt.hist(a_post1, 50, normed=True, range=a_xlim)\nplt.title(\"a\")\nplt.ylabel(\"Known model and known error\")\nplt.subplot(3,2,2)\nplt.hist(b_post1, 50, normed=True, range=b_xlim)\nplt.title(\"b\")\n\nplt.subplot(3,2,3)\nplt.hist(a_post2, 50, normed=True, range=a_xlim)\nplt.ylabel(\"Known model and unknown error\")\nplt.subplot(3,2,4)\nplt.hist(b_post2, 50, normed=True, range=b_xlim)\n\nplt.subplot(3,2,5)\nplt.hist(a_post3, 50, normed=True, range=a_xlim)\nplt.ylabel(\"Unknown model and unknown error\")\nplt.subplot(3,2,6)\nplt.hist(b_post3, 50, normed=True, range=b_xlim)\n\nplt.show()\n```\n\n### Data generator\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nnp.random.seed(123)\nsize = (10,)\na = np.random.randn()\nb = np.random.randn() * 2\nsigma = np.abs(np.random.randn() * 1)\nx = np.sort(np.random.random(size))\ny = a + b * x + np.random.randn(*size) * sigma\n\nplt.errorbar(x, y, yerr=sigma, fmt='.')\nplt.show()\n\nprint(x)\nprint(y)\nprint(a, b, sigma)\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "09e11a1c4ab52e52233520ef7bb6c0e08aee2ead", "size": 101380, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "assets/notebooks/.ipynb_checkpoints/Uncertainty in uncertainty-checkpoint.ipynb", "max_stars_repo_name": "mfkasim1/mfkasim1.github.io", "max_stars_repo_head_hexsha": "2755ad0fa2a27e6e790fbe67d3635454ef790e49", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-07-22T07:28:39.000Z", "max_stars_repo_stars_event_max_datetime": "2020-07-22T07:28:39.000Z", "max_issues_repo_path": "assets/notebooks/.ipynb_checkpoints/Uncertainty in uncertainty-checkpoint.ipynb", "max_issues_repo_name": "mfkasim1/mfkasim1.github.io", "max_issues_repo_head_hexsha": "2755ad0fa2a27e6e790fbe67d3635454ef790e49", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "assets/notebooks/.ipynb_checkpoints/Uncertainty in uncertainty-checkpoint.ipynb", "max_forks_repo_name": "mfkasim1/mfkasim1.github.io", "max_forks_repo_head_hexsha": "2755ad0fa2a27e6e790fbe67d3635454ef790e49", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-07-22T07:28:40.000Z", "max_forks_repo_forks_event_max_datetime": "2020-07-22T07:28:40.000Z", "avg_line_length": 174.1924398625, "max_line_length": 34104, "alphanum_fraction": 0.8852633656, "converted": true, "num_tokens": 3282, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9173026663679976, "lm_q2_score": 0.897695295528596, "lm_q1q2_score": 0.8234582881743887}} {"text": "# Linear Algebra: Eigenvectors and Eigenvalues\n\n## Eigenvalues and Eigenvectors\n\nAssume, we have two interest bearing accounts. 
The first gives an interest rate of 5%, the second a 3% interest, with annual compound.\n\nAssume that after $t$ years the amounts in the two accounts are represented by a 2-vector:\n\n$x^{(t)} = \\begin{bmatrix}\n amount in Account 1 \\\\[0.3em]\n amount in Account 2\n\\end{bmatrix}$\n\nThe growth of the amounts in one year can be described in a matrix:\n\n$x^{(t+1)} = \\begin{bmatrix}\n a_{11} & a_{12} \\\\[0.3em]\n a_{21} & a_{22}\n\\end{bmatrix} x^{(t)}$\n\nGiven the specification of the interest rate above, this simple case gives us:\n\n$x^{(t+1)} = \\begin{bmatrix}\n 1.05 & 0 \\\\[0.3em]\n 0 & 1.03\n\\end{bmatrix} x^{(t)}$\n\nLet $A$ denote the matrix: $\\begin{bmatrix}\n 1.05 & 0 \\\\[0.3em]\n 0 & 1.03\n\\end{bmatrix}$\n\n$A$ is a diagonal.\n\nIf we initially put \\$ 100 on each account, we can compute the result of the accummulated interest after a year as:\n\n\n```python\nimport numpy as np\nx = np.array([[100],\n [100]])\nA = np.array([[1.05, 0],\n [0, 1.03]])\nA.dot(x)\n```\n\n\n\n\n array([[105.],\n [103.]])\n\n\n\nAfter two years the accounts would be:\n\n\n```python\nA.dot(A.dot(x))\n```\n\n\n\n\n array([[110.25],\n [106.09]])\n\n\n\nIf we might want to know how $x^{(100)}$ compares to $x^{(0)}$, we could iterate over:\n\n$\\begin{align}\nx^{(100)} & = A x^{(99)} \\\\\n & = A(Ax^{(98)}) \\\\\n & = A(A(Ax^{(97)})) \\\\\n & \\vdots \\\\\n & = \\underbrace{A \\cdot A \\dots A}_\\text{100 times} \\ x^{(0)} \n\\end{align}$\n\nWe can also write the product as $A^{100}$.\n\nNote that $A$ is a diagonal, thus the entries of $A^{100}$ are $1.05^{100}$ and $1.03^{100}$:\n\n$A^{100} = \\begin{bmatrix}\n 131.50125784630401 & 0 \\\\[0.3em]\n 0 & 19.218631980856298\n\\end{bmatrix}$\n\nWhat we can see is that account 1 dominates account 2, account 2 becoming less and less relevant over time.\n\nNow consider the definition below:\n\n**Basic definition:**\n\nGiven $A \\in \\mathbb{R}^{n\\times n}$, $\\lambda$ is the **eigenvalue** of $A$, if there is a non-zero vector $x$, the corresponding **eigenvector**, if the following is true:\n\n$Ax = \\lambda x, x \\neq 0$\n\nFormally, given a square matrix $A \\in \\mathbb{R}^{n\\times n}$, we say that $\\lambda \\in \\mathbb{C}$ is an **eigenvalue** of $A$ and $x \\in \\mathbb{C}^n$ is the corresponding **eigenvector**.\n\nIntuitively, this definition means that multiplying $A$ by the vector $x$ results in a new vector that points in the same direction as $x$, but is scaled by a factor $\\lambda$.\n\nAlso note that for any **eigenvector** $x \\in \\mathbb{C}^n$, and scalar $t \\in \\mathbb{C}, A(cx) = cAx = c\\lambda x = \\lambda(cx)$, so $cx$ is also an **eigenvector**. For this reason when we talk about **\u201cthe\u201d eigenvector** associated with $\\lambda$, we usually assume that the **eigenvector** is normalized to have length $1$ (this still creates some ambiguity, since $x$ and $\u2212x$ will both be **eigenvectors**, but we will have to live with this).\n\nFor any $\\lambda$ an eigenvalue of $A$ there is a vector space, the eigenspace, that corresponds to $\\lambda$:\n\n$\\{ x : A x = \\lambda x \\}$\n\nAny non-zero vector in this space is an eigenvector. One convenient requirement is that the eigenvector has norm $1$.\n\nComing back to the account example, the eigenvalues of the matrix $\\begin{bmatrix}\n 1.05 & 0 \\\\[0.3em]\n 0 & 1.03\n\\end{bmatrix}$ are $1.05$ and $1.03$.\n\nThe eigenvector for the eigenvalue $1.05$ is $\\begin{bmatrix} 1 \\\\ 0 \\end{bmatrix}$. 
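\n\nAs a quick numerical check of the account example, we can ask `numpy` for the eigendecomposition directly (an illustrative snippet added here, reusing the matrix `A` defined above):\n\n\n```python\nevals, evecs = np.linalg.eig(A)\nprint(evals) # [1.05 1.03] for this diagonal matrix\nprint(evecs) # columns are the corresponding unit-norm eigenvectors\n```\n\n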
The eigenvector for the eigenvalue $1.03$ is $\\begin{bmatrix} 0 \\\\ 1 \\end{bmatrix}$.\n\nWe can rewrite the equation above to state that $(\\lambda, x)$ is an eigenvalue-eigenvector pair of $A$ if:\n\n$(\\lambda I \u2212 A)x = 0, x \\neq 0$\n\nBut $(\\lambda I \u2212 A)x = 0$ has a non-zero solution to $x$ if and only if $(\\lambda I \u2212 A)$ has a non-empty nullspace, which is only the case if $(\\lambda I \u2212 A)$ is singular:\n\n$|(\\lambda I \u2212 A)| = 0$\n\nWe can now use the previous definition of the determinant to expand this expression\ninto a (very large) polynomial in $\\lambda$, where $\\lambda$ will have maximum degree $n$.\n\nWe then find the $n$ (possibly complex) roots of this polynomial to find the $n$ eigenvalues $\\lambda_1, \\dots{}, \\lambda_n$. To find the eigenvector corresponding to the eigenvalue $\\lambda_i$, we simply solve the linear equation $(\\lambda_iI \u2212 A)x = 0$. It should be noted that this is not the method which is actually used in practice to numerically compute the eigenvalues and eigenvectors (remember that the complete expansion of the determinant has $n!$ terms); it is rather a mathematical argument.\n\nThe following are properties of eigenvalues and eigenvectors (in all cases assume $A \\in \\mathbb{R}^{n\\times n}$ has eigenvalues $\\lambda_i, \\dots{}, \\lambda_n$ and associated eigenvectors $x_1, \\dots{}, x_n$):\n\n- The trace of a $A$ is equal to the sum of its eigenvalues,
    \n$\\mathrm{tr}A = \\sum_{i=1}^n \\lambda_i$\n- The determinant of $A$ is equal to the product of its eigenvalues,
    \n$|A| = \\prod_{i=1}^n \\lambda_i$\n- The rank of $A$ is equal to the number of non-zero eigenvalues of $A$\n- If $A$ is non-singular then $1/\\lambda_i$ is an eigenvalue of $A^{\u22121}$ with associated eigenvector $x_i$, i.e., $A^{\u22121}x_i = (1/\\lambda_i)x_i$. (To prove this, take the eigenvector equation, $Ax_i = \\lambda_i x_i$ and left-multiply each side by $A^{\u22121}$.)\n- The eigenvalues of a diagonal matrix $D = \\mathrm{diag}(d_1, \\dots{}, d_n)$ are just the diagonal entries $d_1, \\dots{}, d_n$.\n\nWe can write all the eigenvector equations simultaneously as:\n\n$A X = X \\Lambda$\n\nwhere the columns of $X \\in \\mathbb{R}^{n\\times n}$ are the eigenvectors of $A$ and $\\Lambda$ is a diagonal matrix whose entries are the eigenvalues of $A$:\n\n$X \\in \\mathbb{R}^{n\\times n} = \\begin{bmatrix}\n \\big| & \\big| & & \\big| \\\\[0.3em]\n x_1 & x_2 & \\cdots & x_n \\\\[0.3em]\n \\big| & \\big| & & \\big| \n\\end{bmatrix} , \\Lambda = \\mathrm{diag}(\\lambda_1, \\dots{}, \\lambda_n)$\n\nIf the eigenvectors of $A$ are linearly independent, then the matrix $X$ will be invertible, so $A = X \\Lambda X^{\u22121}$. A matrix that can be written in this form is called **diagonalizable**.\n\nWe can compute the eigenvalues and eigenvectors in numpy in the following way:\n\n\n```python\nA = np.array([[ 2., 1. ],\n [ 1., 2. ]])\nA\n```\n\n\n\n\n array([[ 2., 1.],\n [ 1., 2.]])\n\n\n\n\n```python\nfrom numpy import linalg as lg\nl = lg.eigvals(A)\nprint(l)\n```\n\n [ 3. 1.]\n\n\nWe could now compute the eigenvector for each eigenvalue. See for an example the [HMC Mathematics Online Tutorial on Eigenvalues and Eigenvestors](https://www.math.hmc.edu/calculus/tutorials/eigenstuff/).\n\n## Eigenvalues and Eigenvectors of Symmetric Matrices\n\nTwo remarkable properties come about when we look at the eigenvalues and eigenvectors\nof a symmetric matrix $A \\in \\mathbb{S}^n$.\n\n1. it can be shown that all the eigenvalues of $A$ are real\n2. the eigenvectors of $A$ are orthonormal, i.e., the matrix $X$ defined above is an orthogonal matrix (for this reason, we denote the matrix of eigenvectors as $U$ in this case).\n\nWe can therefore represent $A$ as $A = U \\Lambda U^T$, remembering from above that the inverse of an orthogonal matrix is just its transpose.\n\nUsing this, we can show that the definiteness of a matrix depends entirely on the sign of\nits eigenvalues. Suppose $A \\in \\mathbb{S}^n = U \\Lambda U^T$. Then:\n\n$x^T A x = x^T U \\Lambda U^T x = y^T \\Lambda y = \\sum_{i=1}^n \\lambda_i y^2_i$\n\nwhere $y = U^T x$ (and since $U$ is full rank, any vector $y \\in \\mathbb{R}^n$ can be represented in this form).\n\nBecause $y^2_i$ is always positive, the sign of this expression depends entirely on the $\\lambda_i$'s. If all $\\lambda_i > 0$, then the matrix is positive definite; if all $\\lambda_i \\leq 0$, it is positive semidefinite. Likewise, if all $\\lambda_i < 0$ or $\\lambda_i \\leq 0$, then $A$ is negative definite or negative semidefinite respectively. Finally, if $A$ has both positive and negative eigenvalues, it is indefinite.\n\nAn application where eigenvalues and eigenvectors come up frequently is in maximizing\nsome function of a matrix. In particular, for a matrix $A \\in \\mathbb{S}^n$, consider the following maximization problem,\n\n$\\mathrm{max}_{x\\in \\mathbb{R}^n} x^T A x \\mbox{ subject to } \\|x\\|^2_2 = 1$\n\ni.e., we want to find the vector (of norm $1$) which maximizes the quadratic form. 
Assuming\nthe eigenvalues are ordered as $\\lambda_1 \\geq \\lambda_2 \\geq \\dots{} \\geq \\lambda_n$, the optimal $x$ for this optimization problem is $x_1$, the eigenvector corresponding to $\\lambda_1$. In this case the maximal value of the quadratic form is $\\lambda_1$. Similarly, the optimal solution to the minimization problem,\n\n$\\mathrm{min}_{x\\in \\mathbb{R}^n} x^T A x \\mbox{ subject to } \\|x\\|^2_2 = 1$\n\nis $x_n$, the eigenvector corresponding to $\\lambda_n$, and the minimal value is $\\lambda_n$. This can be proved by appealing to the eigenvector-eigenvalue form of $A$ and the properties of orthogonal matrices. However, in the next section we will see a way of showing it directly using matrix calculus.\n", "meta": {"hexsha": "904f900c928387eea5e352923b89cd56717b5d98", "size": 16215, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Mathematics/Linear Algebra/Linear Algebra Curated/Linear Algebra - Eigenvalues and Eigenvectors.ipynb", "max_stars_repo_name": "okara83/Becoming-a-Data-Scientist", "max_stars_repo_head_hexsha": "f09a15f7f239b96b77a2f080c403b2f3e95c9650", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Mathematics/Linear Algebra/Linear Algebra Curated/Linear Algebra - Eigenvalues and Eigenvectors.ipynb", "max_issues_repo_name": "okara83/Becoming-a-Data-Scientist", "max_issues_repo_head_hexsha": "f09a15f7f239b96b77a2f080c403b2f3e95c9650", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Mathematics/Linear Algebra/Linear Algebra Curated/Linear Algebra - Eigenvalues and Eigenvectors.ipynb", "max_forks_repo_name": "okara83/Becoming-a-Data-Scientist", "max_forks_repo_head_hexsha": "f09a15f7f239b96b77a2f080c403b2f3e95c9650", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2022-02-09T15:41:33.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-11T07:47:40.000Z", "avg_line_length": 28.1510416667, "max_line_length": 516, "alphanum_fraction": 0.5480111008, "converted": true, "num_tokens": 2826, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9173026505426831, "lm_q2_score": 0.8976952852648487, "lm_q1q2_score": 0.8234582645531157}} {"text": "# Robust Data-Driven Portfolio Diversification\n### Francisco A. Ibanez\n\n1. RPCA on the sample\n2. Singular Value Hard Thresholding (SVHT)\n3. Truncated SVD\n4. Maximize portfolio effective bets - regualization, s.t.: \n - Positivity constraint\n - Leverage 1x\n\nThe combination of (1), (2), and (3) should limit the possible permutations of the J vector when doing the spectral risk parity.\n\n## Methodology\nThe goal of the overall methodology is to arrive to a portfolio weights vector which provides a well-balanced portfolio exposure to each one of the spectral risk factors present in an given investable universe.\n\nWe start with the data set $X_{T \\times N}$ which containst the historical excess returns for each one of the assets that span the investable universe of the portfolio. Before performing the eigendecomposition on $X$, we need to clean the set from noisy trading observations and outliers. 
We apply Robust Principal Components (RPCA) on $X$ to achieve this, which seeks to decompose $X$ into a structured low-rank matrix $R$ and a sparse matrix $C$ containing outliers and corrupt data:\n\n\\begin{aligned}\nX=R_0+C_0\n\\end{aligned}\n\nThe principal components of $R$ are robust to outliers and corrupt data in $C$. Mathematically, the goal is to find $R$ and $C$ that satisfy the following:\n\n\\begin{aligned}\n\\min_{R,C} ||R||_{*} + \\lambda ||C||_{1} \\\\\n\\text{subject to} \\\\ R + C = X\n\\end{aligned} \n\n\n```python\nimport pandas as pd\nimport numpy as np\nfrom rpca import RobustPCA\nimport matplotlib.pyplot as plt\nfrom scipy.linalg import svd\nfrom optht import optht\n\nraw = pd.read_pickle('etf_er.pkl')\nsample = raw.dropna() # Working with even panel for now\n\n# Outlier detection & cleaning\nX = (sample - sample.mean()).div(sample.std()).values # Normalization\nt, n = X.shape\nlmb = 4 / np.sqrt(max(t, n)) # Hyper-parameter\nrob = RobustPCA(lmb=lmb, max_iter=int(1E6))\nR, C = rob.fit(X) # Robust, Corrupted\n\n# Low-rank representation (compression) through hard thresholding Truncated-SVD\nU, S, Vh = svd(R, full_matrices=False, compute_uv=True, lapack_driver='gesdd')\nS = np.diag(S)\nk = optht(X, sv=np.diag(S), sigma=None)\n\nV = Vh.T\nVhat = V.copy()\nVhat[:, k:] = 0\nShat = S.copy()\nShat[k:, k:] = 0\n\ncum_energy = np.cumsum(np.diag(S)) / np.sum(np.diag(S))\nprint(f'SVHT: {k}, {round(cum_energy[k] * 100, 2)}% of energy explained')\n```\n\n SVHT: 8, 58.43% of energy explained\n\n\n\n```python\nD = np.diag(sample.std().values)\nt, n = X.shape\nw = np.array([1 / n] * n).reshape(-1, 1)\neigen_wts = V.T @ D @ w\np = np.divide(np.diag(eigen_wts.flatten()) @ S.T @ S @ eigen_wts, w.T @ D @ R.T @ R @ D @ w)\np = p.flatten()\neta_p = np.exp(-np.sum(np.multiply(p, np.log(p))))\neta_p\n```\n\n\n\n\n 1.4535327279694732\n\n\n\n\n```python\ndef effective_bets(weights, singular_values_matrix, eigen_vector_matrix, volatilities, k=None):\n w = weights.reshape(-1, 1)\n eigen_wts = eigen_vector_matrix.T @ np.diag(volatilities) @ w\n p = (np.diag(eigen_wts.flatten()) @ singular_values_matrix.T @ singular_values_matrix @ eigen_wts).flatten()\n if k != None:\n p = p[:k]\n p_norm = np.divide(p, p.sum())\n eta = np.exp(-np.sum(np.multiply(p_norm, np.log(p_norm))))\n return eta\n\neffective_bets(np.array([1 / n] * n), S, V, sample.std().values)\n\n```\n\n\n\n\n 1.4535327279694745\n\n\n\n\n```python\ndef objfunc(weights, singular_values_matrix, eigen_vector_matrix, volatilities, k=None):\n return -effective_bets(weights, singular_values_matrix, eigen_vector_matrix, volatilities, k)\n\n# Testing if minimizing p.T @ p yields the same results as maximizing the effective numbers of bets \ndef objfunc2(weights, singular_values_matrix, eigen_vector_matrix, volatilities, k=None):\n w = weights.reshape(-1, 1)\n eigen_wts = eigen_vector_matrix.T @ np.diag(volatilities) @ w\n p = np.diag(eigen_wts.flatten()) @ singular_values_matrix.T @ singular_values_matrix @ eigen_wts\n if k != None:\n p = p[:k]\n n = p.shape[0]\n p_norm = np.divide(p, p.sum())\n c = np.divide(np.ones((n, 1)), n)\n return ((p_norm - c).T @ (p_norm - c)).item()\n```\n\n\n```python\n# POSITIVE ONLY\nfrom scipy.optimize import minimize\n\ncons = (\n {'type': 'ineq', 'fun': lambda x: x},\n {'type': 'ineq', 'fun': lambda x: np.sum(x) - 1}\n)\n\nopti = minimize(\n fun=objfunc,\n x0=np.array([1 / n] * n),\n args=(S, V, sample.std().values),\n constraints=cons,\n method='SLSQP',\n tol=1E-12,\n options={'maxiter': 1E9}\n)\n\nw_star = 
opti.x\nw_star /= w_star.sum() \npd.Series(w_star, index=sample.columns).plot.bar()\nprint(-opti.fun)\n```\n\n\n```python\n# UNCONSTRAINED\nfrom scipy.optimize import minimize\n\ncons = (\n {'type': 'ineq', 'fun': lambda x: x},\n {'type': 'ineq', 'fun': lambda x: np.sum(x) - 1}\n)\n\nopti = minimize(\n fun=objfunc,\n x0=np.array([1 / n] * n),\n args=(S, V, sample.std().values),\n# constraints=cons,\n method='SLSQP',\n tol=1E-12,\n options={'maxiter': 1E9}\n)\n\nw_star = opti.x\nw_star /= w_star.sum() \npd.Series(w_star, index=sample.columns).plot.bar()\nprint(-opti.fun)\n```\n\n\n```python\neigen_wts = V.T @ np.diag(sample.std().values) @ w_star.reshape(-1, 1)\np = (np.diag(eigen_wts.flatten()) @ S.T @ S @ eigen_wts).flatten()\np = np.divide(p, p.sum())\npd.Series(p).plot.bar()\n```\n\n\n```python\n# RC.T @ RC... different?\ncons = (\n {'type': 'ineq', 'fun': lambda x: x},\n {'type': 'ineq', 'fun': lambda x: np.sum(x) - 1}\n)\n\nopti = minimize(\n fun=objfunc2,\n x0=np.array([1 / n] * n),\n args=(S, V, sample.std().values),\n constraints=cons,\n method='SLSQP',\n tol=1E-12,\n options={'maxiter': 1E9}\n)\n\nw_star = opti.x\nw_star /= w_star.sum()\n#pd.Series(w_star, index=sample.columns).plot.bar()\nprint(effective_bets(w_star, S, V, sample.std().values))\nS, V, sample.std().values\n\neigen_wts = V.T @ np.diag(sample.std().values) @ w_star.reshape(-1, 1)\np = (np.diag(eigen_wts.flatten()) @ S.T @ S @ eigen_wts).flatten()\np = np.divide(p, p.sum())\npd.Series(p).plot.bar()\n```\n\n\n```python\nnp.array([1, 2, 3]).shape\n```\n\n\n\n\n (3,)\n\n\n\n\n```python\n\n```\n\n\n```python\nD = np.diag(sample.std().values)\nn = sample.shape[0]\nSigma = 1 / (n - 1) * D @ X.T @ X @ D\nSigma_b = 1 / (n - 1) * D @ (R + C).T @ (R + C) @ D\n\npd.DataFrame(R.T @ C)\npd.DataFrame(R.T @ C) + pd.DataFrame(C.T @ R)\npd.DataFrame(R.T @ R) + pd.DataFrame(C.T @ C)\n```\n\n\\begin{aligned}\nX &= R + C \\\\\nR &= USV^{T}\n\\end{aligned}\n\nusing the Singular Value Hard Thresholding (SVHT) obtained above we can approximate $R$:\n\\begin{aligned}\nR &\\approx \\tilde{U}\\tilde{S}\\tilde{V}^{T}\n\\end{aligned}\n\nCheck the algebra so everything add up and the first matrix $X$ can be recovered from this point.\n\n\\begin{aligned}\n\\Sigma &= \\frac{1}{(n - 1)}DX^{T}XD \\\\\n\\Sigma &= \\frac{1}{(n - 1)}D(R + C)^{T}(R + C))D\n\\end{aligned}\n\nthen, portfolio risk will be given by:\n\\begin{aligned}\nw^{T}\\Sigma w &= \\frac{1}{(n - 1)}w^{T}D(R + C)^{T}(R + C))D w \\\\\nw^{T}\\Sigma w &= \\frac{1}{(n - 1)}w^{T}D(R^{T}R + R^{T}C + C^{T}R + C^{T}C ) D w \\\\\n\\end{aligned}\n\n\n\n\\begin{aligned}\nw^{T}\\Sigma w &= \\frac{1}{(n - 1)} \\lbrack w^{T}D(R^{T}R)Dw + w^{T} D(R^{T}C + C^{T}R + C^{T}C ) D w \\rbrack\n\\end{aligned}\n\n\nTaking the Singular Value Decomposition of R\n\n\\begin{aligned}\nR &= USV^{T} \\\\\n\\end{aligned}\n\nwe can express R in terms of its singular values and eigenvectors:\n\n\\begin{aligned}\nw^{T}\\Sigma w &= (n - 1)^{-1} \\lbrack w^{T}D(VSU^{T}USV^{T})Dw + w^{T} D(R^{T}C + C^{T}R + C^{T}C) D w \\rbrack \\\\\nw^{T}\\Sigma w &= (n - 1)^{-1} \\lbrack w^{T}D(V S^{2} V^{T})Dw + w^{T} D(R^{T}C + C^{T}R + C^{T}C) D w \\rbrack\n\\end{aligned}\n\nwhere $S^{2}$ contains the eigenvalues of $R$ in its diagonal entries\n\n\\begin{aligned}\nw^{T}\\Sigma w &= (n - 1)^{-1} \\lbrack \\underbrace{w^{T}DV S^{2} V^{T}Dw}_\\text{Robust Component} \n+ \\underbrace{w^{T} D(R^{T}C + C^{T}R + C^{T}C) D w}_\\text{Noisy Component} \\rbrack\n\\end{aligned}\n\nTO DO: There has to be a way of reducing the noisy component, or at 
least interpret/explain it.\n\nThe portfolio risk contribution is then given by \n\n\\begin{aligned}\ndiag(w)\\Sigma w &= (n - 1)^{-1} \\lbrack \\underbrace{\\theta}_\\text{Robust Component} \n+ \\underbrace{\\gamma}_\\text{Noisy Component} \\rbrack \\\\\n\n\\theta &= diag(V^{T}Dw)S^{2} V^{T}Dw\n\n\\end{aligned}\n\n\\begin{align}\n\\eta (w) & \\equiv \\exp \\left( -\\sum^{N}_{n=1} p_{n} \\ln{(p_{n})} \\right)\n\\end{align}\n\nNow we look for:\n\\begin{align}\n\\arg \\max_{w} \\eta(w)\n\\end{align}\n\n", "meta": {"hexsha": "78043cf83883c1f7b5ef0baa28b812b0c49cbcd2", "size": 50612, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/spectral_diversification.ipynb", "max_stars_repo_name": "fcoibanez/eigenportfolio", "max_stars_repo_head_hexsha": "6e0f6c0239448a191aecf9137d545abf12cb344e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notebooks/spectral_diversification.ipynb", "max_issues_repo_name": "fcoibanez/eigenportfolio", "max_issues_repo_head_hexsha": "6e0f6c0239448a191aecf9137d545abf12cb344e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/spectral_diversification.ipynb", "max_forks_repo_name": "fcoibanez/eigenportfolio", "max_forks_repo_head_hexsha": "6e0f6c0239448a191aecf9137d545abf12cb344e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 94.7790262172, "max_line_length": 9902, "alphanum_fraction": 0.8330830633, "converted": true, "num_tokens": 2771, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9314625031628428, "lm_q2_score": 0.8840392802184581, "lm_q1q2_score": 0.8234494408465629}} {"text": "# \u7b26\u53f7\u8ba1\u7b97\n\n\u7b26\u53f7\u8ba1\u7b97\u53c8\u79f0\u8ba1\u7b97\u673a\u4ee3\u6570,\u901a\u4fd7\u5730\u8bf4\u5c31\u662f\u7528\u8ba1\u7b97\u673a\u63a8\u5bfc\u6570\u5b66\u516c\u5f0f,\u5982\u5bf9\u8868\u8fbe\u5f0f\u8fdb\u884c\u56e0\u5f0f\u5206\u89e3,\u5316\u7b80,\u5fae\u5206,\u79ef\u5206,\u89e3\u4ee3\u6570\u65b9\u7a0b,\u6c42\u89e3\u5e38\u5fae\u5206\u65b9\u7a0b\u7b49.\n\n\u7b26\u53f7\u8ba1\u7b97\u4e3b\u8981\u662f\u64cd\u4f5c\u6570\u5b66\u5bf9\u8c61\u4e0e\u8868\u8fbe\u5f0f.\u8fd9\u4e9b\u6570\u5b66\u5bf9\u8c61\u4e0e\u8868\u8fbe\u5f0f\u53ef\u4ee5\u76f4\u63a5\u8868\u73b0\u81ea\u5df1,\u5b83\u4eec\u4e0d\u662f\u4f30\u8ba1/\u8fd1\u4f3c\u503c.\u8868\u8fbe\u5f0f/\u5bf9\u8c61\u662f\u672a\u7ecf\u4f30\u8ba1\u7684\u53d8\u91cf\u53ea\u662f\u4ee5\u7b26\u53f7\u5f62\u5f0f\u5448\u73b0.\n\n## \u4f7f\u7528SymPy\u8fdb\u884c\u7b26\u53f7\u8ba1\u7b97\n\n[SymPy](https://www.sympy.org/zh/index.html)\u662fpython\u73af\u5883\u4e0b\u7684\u7b26\u53f7\u8ba1\u7b97\u5e93,\u4ed6\u53ef\u4ee5\u7528\u4e8e:\n\n+ \u7b80\u5316\u6570\u5b66\u8868\u8fbe\u5f0f\n+ \u8ba1\u7b97\u5fae\u5206,\u79ef\u5206\u4e0e\u6781\u9650\n+ \u6c42\u65b9\u7a0b\u7684\u89e3\n+ \u77e9\u9635\u8fd0\u7b97\u4ee5\u53ca\u5404\u79cd\u6570\u5b66\u51fd\u6570.\n\n\u6240\u6709\u8fd9\u4e9b\u529f\u80fd\u90fd\u901a\u8fc7\u6570\u5b66\u7b26\u53f7\u5b8c\u6210.\n\u4e0b\u9762\u662f\u4f7f\u7528SymPy\u505a\u7b26\u53f7\u8ba1\u7b97\u4e0e\u4e00\u822c\u8ba1\u7b97\u7684\u5bf9\u6bd4:\n\n> \u4e00\u822c\u7684\u8ba1\u7b97\n\n\n```python\nimport math\nmath.sqrt(3)\n```\n\n\n\n\n 1.7320508075688772\n\n\n\n\n```python\nmath.sqrt(27)\n```\n\n\n\n\n 5.196152422706632\n\n\n\n> \u4f7f\u7528SymPy\u8fdb\u884c\u7b26\u53f7\u8ba1\u7b97\n\n\n```python\nimport sympy\nsympy.sqrt(3)\n```\n\n\n\n\n$\\displaystyle \\sqrt{3}$\n\n\n\n\n```python\nsympy.sqrt(27)\n```\n\n\n\n\n$\\displaystyle 3 \\sqrt{3}$\n\n\n\nSymPy\u7a0b\u5e8f\u5e93\u7531\u82e5\u5e72\u6838\u5fc3\u80fd\u529b\u4e0e\u5927\u91cf\u7684\u53ef\u9009\u6a21\u5757\u6784\u6210.SymPy\u7684\u4e3b\u8981\u529f\u80fd:\n\n+ \u5305\u62ec\u57fa\u672c\u7b97\u672f\u4e0e\u516c\u5f0f\u7b80\u5316,\u4ee5\u53ca\u6a21\u5f0f\u5339\u914d\u51fd\u6570,\u5982\u4e09\u89d2\u51fd\u6570/\u53cc\u66f2\u51fd\u6570/\u6307\u6570\u51fd\u6570\u4e0e\u5bf9\u6570\u51fd\u6570\u7b49(\u6838\u5fc3\u80fd\u529b)\n\n+ \u652f\u6301\u591a\u9879\u5f0f\u8fd0\u7b97,\u4f8b\u5982\u57fa\u672c\u7b97\u672f/\u56e0\u5f0f\u5206\u89e3\u4ee5\u53ca\u5404\u79cd\u5176\u4ed6\u8fd0\u7b97(\u6838\u5fc3\u80fd\u529b)\n\n+ \u5fae\u79ef\u5206\u529f\u80fd,\u5305\u62ec\u6781\u9650/\u5fae\u5206\u4e0e\u79ef\u5206\u7b49(\u6838\u5fc3\u80fd\u529b)\n\n+ \u5404\u79cd\u7c7b\u578b\u65b9\u7a0b\u5f0f\u7684\u6c42\u89e3,\u4f8b\u5982\u591a\u9879\u5f0f\u6c42\u89e3/\u65b9\u7a0b\u7ec4\u6c42\u89e3/\u5fae\u5206\u65b9\u7a0b\u6c42\u89e3(\u6838\u5fc3\u80fd\u529b)\n\n+ \u79bb\u6563\u6570\u5b66(\u6838\u5fc3\u80fd\u529b)\n\n+ \u77e9\u9635\u8868\u793a\u4e0e\u8fd0\u7b97\u529f\u80fd(\u6838\u5fc3\u80fd\u529b)\n\n+ \u51e0\u4f55\u51fd\u6570(\u6838\u5fc3\u80fd\u529b)\n\n+ \u501f\u52a9pyglet\u5916\u90e8\u6a21\u5757\u753b\u56fe\n\n+ \u7269\u7406\u5b66\u652f\u6301\n\n+ \u7edf\u8ba1\u5b66\u8fd0\u7b97\uff0c\u5305\u62ec\u6982\u7387\u4e0e\u5206\u5e03\u51fd\u6570\n\n+ \u5404\u79cd\u6253\u5370\u529f\u80fd\n\n+ LaTeX\u4ee3\u7801\u751f\u6210\u529f\u80fd\n\n## \u4f7f\u7528SymPy\u7684\u5de5\u4f5c\u6d41\n\n\u4f7f\u7528SymPy\u505a\u7b26\u53f7\u8ba1\u7b97\u4e0d\u540c\u4e8e\u4e00\u822c\u8ba1\u7b97,\u5b83\u7684\u6d41\u7a0b\u662f:\n\n+ 
\u5728\u6784\u5efa\u7b97\u5f0f\u524d\u7533\u660e\u7b26\u53f7,\u7136\u540e\u5229\u7528\u58f0\u660e\u7684\u7b26\u53f7\u6784\u5efa\u7b97\u5f0f\n+ \u5229\u7528\u7b97\u5f0f\u8fdb\u884c\u63a8\u5bfc,\u8ba1\u7b97\u7b49\u7b26\u53f7\u8fd0\u7b97\u64cd\u4f5c\n+ \u8f93\u51fa\u7ed3\u679c\n\n\u4e0b\u9762\u662f\u4e00\u4e2a\u7b80\u5355\u7684\u4f8b\u5b50,\u5c31\u5f53\u4f5cSymPy\u7684helloworld\u5427\n\n\n```python\nimport sympy as sp\nx, y = sp.symbols('x y') #\u58f0\u660e\u7b26\u53f7x,y\nexpr = x + 2*y # \u6784\u9020\u7b97\u5f0f\nexpr\n```\n\n\n\n\n$\\displaystyle x + 2 y$\n\n\n\n\n```python\nexpr + 1 # \u5728\u7b97\u5f0f\u4e4b\u4e0a\u6784\u5efa\u65b0\u7b97\u5f0f\n```\n\n\n\n\n$\\displaystyle x + 2 y + 1$\n\n\n\n\n```python\nexpr + x # \u65b0\u6784\u5efa\u7684\u7b97\u5f0f\u53ef\u4ee5\u660e\u663e\u7684\u5316\u7b80\u5c31\u4f1a\u81ea\u52a8\u5316\u7b80\n```\n\n\n\n\n$\\displaystyle 2 x + 2 y$\n\n\n\n\n```python\nx*(expr) # \u65b0\u7b97\u5f0f\u4e0d\u80fd\u660e\u663e\u7684\u5316\u7b80,\u6bd4\u5982\u8fd9\u4e2a\u4f8b\u5b50,\u5c31\u4e0d\u4f1a\u81ea\u52a8\u5316\u7b80\n```\n\n\n\n\n$\\displaystyle x \\left(x + 2 y\\right)$\n\n\n\n\n```python\nexpand_expr = sp.expand(x*(expr)) # \u624b\u52a8\u5316\u7b80\u65b0\u7b97\u5f0f\nexpand_expr\n```\n\n\n\n\n$\\displaystyle x^{2} + 2 x y$\n\n\n\n\n```python\nsp.factor(expand_expr) # \u5c06\u5316\u7b80\u7684\u5f0f\u5b50\u505a\u56e0\u5f0f\u5206\u89e3\n```\n\n\n\n\n$\\displaystyle x \\left(x + 2 y\\right)$\n\n\n\n\n```python\nsp.latex(expand_expr) # \u8f93\u51fa\u7b26\u53f7\u7684latex\u4ee3\u7801\n```\n\n\n\n\n 'x^{2} + 2 x y'\n\n\n", "meta": {"hexsha": "59fcffc75a6578d044be0c3fdf39804c9051a5c2", "size": 6358, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "src/\u6570\u636e\u5206\u6790\u7bc7/\u5de5\u5177\u4ecb\u7ecd/SymPy/README.ipynb", "max_stars_repo_name": "hsz1273327/TutorialForDataScience", "max_stars_repo_head_hexsha": "1d8e72c033a264297e80f43612cd44765365b09e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-04-27T12:40:25.000Z", "max_stars_repo_stars_event_max_datetime": "2020-04-27T12:40:25.000Z", "max_issues_repo_path": "README.ipynb", "max_issues_repo_name": "TutorialForPython/python-symbolic-computation", "max_issues_repo_head_hexsha": "724e3a294e87eefe25dfe37c96c2606d24cfc767", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 3, "max_issues_repo_issues_event_min_datetime": "2020-03-31T03:36:05.000Z", "max_issues_repo_issues_event_max_datetime": "2020-03-31T03:36:21.000Z", "max_forks_repo_path": "src/\u6570\u636e\u5206\u6790\u7bc7/\u5de5\u5177\u4ecb\u7ecd/SymPy/README.ipynb", "max_forks_repo_name": "hsz1273327/TutorialForDataScience", "max_forks_repo_head_hexsha": "1d8e72c033a264297e80f43612cd44765365b09e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 17.7103064067, "max_line_length": 78, "alphanum_fraction": 0.4545454545, "converted": true, "num_tokens": 1072, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9263037292767219, "lm_q2_score": 0.8887588023318195, "lm_q1q2_score": 0.8232605930274773}} {"text": "# Using optimization routines from `scipy` and `statsmodels`\n\n\n```python\n%matplotlib inline\n```\n\n\n```python\nimport scipy.linalg as la\nimport numpy as np\nimport scipy.optimize as opt\nimport matplotlib.pyplot as plt\nimport pandas as pd\n```\n\n\n```python\nnp.set_printoptions(precision=3, suppress=True)\n```\n\nUsing `scipy.optimize`\n----\n\nOne of the most convenient libraries to use is `scipy.optimize`, since it is already part of the Anaconda installation and it has a fairly intuitive interface.\n\n\n```python\nfrom scipy import optimize as opt\n```\n\n#### Minimizing a univariate function $f: \\mathbb{R} \\rightarrow \\mathbb{R}$\n\n\n```python\ndef f(x):\n return x**4 + 3*(x-2)**3 - 15*(x)**2 + 1\n```\n\n\n```python\nx = np.linspace(-8, 5, 100)\nplt.plot(x, f(x));\n```\n\nThe [`minimize_scalar`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize_scalar.html#scipy.optimize.minimize_scalar) function will find the minimum, and can also be told to search within given bounds. By default, it uses the Brent algorithm, which combines a bracketing strategy with a parabolic approximation.\n\n\n```python\nopt.minimize_scalar(f, method='Brent')\n```\n\n\n```python\nopt.minimize_scalar(f, method='bounded', bounds=[0, 6])\n```\n\n### Local and global minima\n\n\n```python\ndef f(x, offset):\n return -np.sinc(x-offset)\n```\n\n\n```python\nx = np.linspace(-20, 20, 100)\nplt.plot(x, f(x, 5));\n```\n\n\n```python\n# note how additional function arguments are passed in\nsol = opt.minimize_scalar(f, args=(5,))\nsol\n```\n\n\n```python\nplt.plot(x, f(x, 5))\nplt.axvline(sol.x, c='red')\npass\n```\n\n#### We can try multiple random starts to find the global minimum\n\n\n```python\nlower = np.random.uniform(-20, 20, 100)\nupper = lower + 1\nsols = [opt.minimize_scalar(f, args=(5,), bracket=(l, u)) for (l, u) in zip(lower, upper)]\n```\n\n\n```python\nidx = np.argmin([sol.fun for sol in sols])\nsol = sols[idx]\n```\n\n\n```python\nplt.plot(x, f(x, 5))\nplt.axvline(sol.x, c='red');\n```\n\n#### Using a stochastic algorithm\n\nSee documentation for the [`basinhopping`](http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.optimize.basinhopping.html) algorithm, which also works with multivariate scalar optimization. Note that this is heuristic and not guaranteed to find a global minimum.\n\n\n```python\nfrom scipy.optimize import basinhopping\n\nx0 = 0\nsol = basinhopping(f, x0, stepsize=1, minimizer_kwargs={'args': (5,)})\nsol\n```\n\n\n```python\nplt.plot(x, f(x, 5))\nplt.axvline(sol.x, c='red');\n```\n\n### Constrained optimization with `scipy.optimize`\n\nMany real-world optimization problems have constraints - for example, a set of parameters may have to sum to 1.0 (equality constraint), or some parameters may have to be non-negative (inequality constraint). Sometimes, the constraints can be incorporated into the function to be minimized, for example, the non-negativity constraint $p \\gt 0$ can be removed by substituting $p = e^q$ and optimizing for $q$. Using such workarounds, it may be possible to convert a constrained optimization problem into an unconstrained one, and use the methods discussed above to solve the problem.\n\nAlternatively, we can use optimization methods that allow the specification of constraints directly in the problem statement as shown in this section. 
Internally, constraint violation penalties, barriers and Lagrange multipliers are some of the methods used used to handle these constraints. We use the example provided in the Scipy [tutorial](http://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html) to illustrate how to set constraints.\n\nWe will optimize:\n\n$$\nf(x) = -(2xy + 2x - x^2 -2y^2)\n$$\nsubject to the constraint\n$$\nx^3 - y = 0 \\\\\ny - (x-1)^4 - 2 \\ge 0\n$$\nand the bounds\n$$\n0.5 \\le x \\le 1.5 \\\\\n1.5 \\le y \\le 2.5\n$$\n\n\n```python\ndef f(x):\n return -(2*x[0]*x[1] + 2*x[0] - x[0]**2 - 2*x[1]**2)\n```\n\n\n```python\nx = np.linspace(0, 3, 100)\ny = np.linspace(0, 3, 100)\nX, Y = np.meshgrid(x, y)\nZ = f(np.vstack([X.ravel(), Y.ravel()])).reshape((100,100))\nplt.contour(X, Y, Z, np.arange(-1.99,10, 1), cmap='jet');\nplt.plot(x, x**3, 'k:', linewidth=1)\nplt.plot(x, (x-1)**4+2, 'k:', linewidth=1)\nplt.fill([0.5,0.5,1.5,1.5], [2.5,1.5,1.5,2.5], alpha=0.3)\nplt.axis([0,3,0,3])\n```\n\nTo set constraints, we pass in a dictionary with keys `type`, `fun` and `jac`. Note that the inequality constraint assumes a $C_j x \\ge 0$ form. As usual, the `jac` is optional and will be numerically estimated if not provided.\n\n\n```python\ncons = ({'type': 'eq',\n 'fun' : lambda x: np.array([x[0]**3 - x[1]]),\n 'jac' : lambda x: np.array([3.0*(x[0]**2.0), -1.0])},\n {'type': 'ineq',\n 'fun' : lambda x: np.array([x[1] - (x[0]-1)**4 - 2])})\n\nbnds = ((0.5, 1.5), (1.5, 2.5))\n```\n\n\n```python\nx0 = [0, 2.5]\n```\n\nUnconstrained optimization\n\n\n```python\nux = opt.minimize(f, x0, constraints=None)\nux\n```\n\nConstrained optimization\n\n\n```python\ncx = opt.minimize(f, x0, bounds=bnds, constraints=cons)\ncx\n```\n\n\n```python\nx = np.linspace(0, 3, 100)\ny = np.linspace(0, 3, 100)\nX, Y = np.meshgrid(x, y)\nZ = f(np.vstack([X.ravel(), Y.ravel()])).reshape((100,100))\nplt.contour(X, Y, Z, np.arange(-1.99,10, 1), cmap='jet');\nplt.plot(x, x**3, 'k:', linewidth=1)\nplt.plot(x, (x-1)**4+2, 'k:', linewidth=1)\nplt.text(ux['x'][0], ux['x'][1], 'x', va='center', ha='center', size=20, color='blue')\nplt.text(cx['x'][0], cx['x'][1], 'x', va='center', ha='center', size=20, color='red')\nplt.fill([0.5,0.5,1.5,1.5], [2.5,1.5,1.5,2.5], alpha=0.3)\nplt.axis([0,3,0,3]);\n```\n\n## Some applications of optimization\n\n### Finding paraemeters for ODE models\n\nThis is a specialized application of `curve_fit`, in which the curve to be fitted is defined implicitly by an ordinary differential equation \n$$\n\\frac{dx}{dt} = -kx\n$$\nand we want to use observed data to estimate the parameters $k$ and the initial value $x_0$. 
Of course this can be explicitly solved but the same approach can be used to find multiple parameters for $n$-dimensional systems of ODEs.\n\n[A more elaborate example for fitting a system of ODEs to model the zombie apocalypse](http://adventuresinpython.blogspot.com/2012/08/fitting-differential-equation-system-to.html)\n\n\n```python\nfrom scipy.integrate import odeint\n\ndef f(x, t, k):\n \"\"\"Simple exponential decay.\"\"\"\n return -k*x\n\ndef x(t, k, x0):\n \"\"\"\n Solution to the ODE x'(t) = f(t,x,k) with initial condition x(0) = x0\n \"\"\"\n x = odeint(f, x0, t, args=(k,))\n return x.ravel()\n```\n\n\n```python\n# True parameter values\nx0_ = 10\nk_ = 0.1*np.pi\n\n# Some random data genererated from closed form solution plus Gaussian noise\nts = np.sort(np.random.uniform(0, 10, 200))\nxs = x0_*np.exp(-k_*ts) + np.random.normal(0,0.1,200)\n\npopt, cov = opt.curve_fit(x, ts, xs)\nk_opt, x0_opt = popt\n\nprint(\"k = %g\" % k_opt)\nprint(\"x0 = %g\" % x0_opt)\n```\n\n\n```python\nimport matplotlib.pyplot as plt\nt = np.linspace(0, 10, 100)\nplt.plot(ts, xs, 'r.', t, x(t, k_opt, x0_opt), '-');\n```\n\n### Another example of fitting a system of ODEs using the `lmfit` package\n\nYou may have to install the [`lmfit`](http://cars9.uchicago.edu/software/python/lmfit/index.html) package using `pip` and restart your kernel. The `lmfit` algorithm is another wrapper around `scipy.optimize.leastsq` but allows for richer model specification and more diagnostics.\n\n\n```python\n! pip install lmfit\n```\n\n\n```python\nfrom lmfit import minimize, Parameters, Parameter, report_fit\nimport warnings\n```\n\n\n```python\ndef f(xs, t, ps):\n \"\"\"Lotka-Volterra predator-prey model.\"\"\"\n try:\n a = ps['a'].value\n b = ps['b'].value\n c = ps['c'].value\n d = ps['d'].value\n except:\n a, b, c, d = ps\n \n x, y = xs\n return [a*x - b*x*y, c*x*y - d*y]\n\ndef g(t, x0, ps):\n \"\"\"\n Solution to the ODE x'(t) = f(t,x,k) with initial condition x(0) = x0\n \"\"\"\n x = odeint(f, x0, t, args=(ps,))\n return x\n\ndef residual(ps, ts, data):\n x0 = ps['x0'].value, ps['y0'].value\n model = g(ts, x0, ps)\n return (model - data).ravel()\n\nt = np.linspace(0, 10, 100)\nx0 = np.array([1,1])\n\na, b, c, d = 3,1,1,1\ntrue_params = np.array((a, b, c, d))\n\nnp.random.seed(123)\ndata = g(t, x0, true_params)\ndata += np.random.normal(size=data.shape)\n\n# set parameters incluing bounds\nparams = Parameters()\nparams.add('x0', value= float(data[0, 0]), min=0, max=10) \nparams.add('y0', value=float(data[0, 1]), min=0, max=10) \nparams.add('a', value=2.0, min=0, max=10)\nparams.add('b', value=2.0, min=0, max=10)\nparams.add('c', value=2.0, min=0, max=10)\nparams.add('d', value=2.0, min=0, max=10)\n\n# fit model and find predicted values\nresult = minimize(residual, params, args=(t, data), method='leastsq')\nfinal = data + result.residual.reshape(data.shape)\n\n# plot data and fitted curves\nplt.plot(t, data, 'o')\nplt.plot(t, final, '-', linewidth=2);\n\n# display fitted statistics\nreport_fit(result)\n```\n\n#### Optimization of graph node placement\n\nTo show the many different applications of optimization, here is an example using optimization to change the layout of nodes of a graph. We use a physical analogy - nodes are connected by springs, and the springs resist deformation from their natural length $l_{ij}$. Some nodes are pinned to their initial locations while others are free to move. 
Because the initial configuration of nodes does not have springs at their natural length, there is tension resulting in a high potential energy $U$, given by the physics formula shown below. Optimization finds the configuration of lowest potential energy given that some nodes are fixed (set up as boundary constraints on the positions of the nodes).\n\n$$\nU = \\frac{1}{2}\\sum_{i,j=1}^n ka_{ij}\\left(||p_i - p_j||-l_{ij}\\right)^2\n$$\n\nNote that the ordination algorithm Multi-Dimensional Scaling (MDS) works on a very similar idea - take a high dimensional data set in $\\mathbb{R}^n$, and project down to a lower dimension ($\\mathbb{R}^k$) such that the sum of distances $d_n(x_i, x_j) - d_k(x_i, x_j)$, where $d_n$ and $d_k$ are some measure of distance between two points $x_i$ and $x_j$ in $n$ and $d$ dimension respectively, is minimized. MDS is often used in exploratory analysis of high-dimensional data to get some intuitive understanding of its \"structure\".\n\n\n```python\nfrom scipy.spatial.distance import pdist, squareform\n```\n\n- P0 is the initial location of nodes\n- P is the minimal energy location of nodes given constraints\n- A is a connectivity matrix - there is a spring between $i$ and $j$ if $A_{ij} = 1$\n- $L_{ij}$ is the resting length of the spring connecting $i$ and $j$\n- In addition, there are a number of `fixed` nodes whose positions are pinned.\n\n\n```python\nn = 20\nk = 1 # spring stiffness\nP0 = np.random.uniform(0, 5, (n,2)) \nA = np.ones((n, n))\nA[np.tril_indices_from(A)] = 0\nL = A.copy()\n```\n\n\n```python\nL.astype('int')\n```\n\n\n```python\ndef energy(P):\n P = P.reshape((-1, 2))\n D = squareform(pdist(P))\n return 0.5*(k * A * (D - L)**2).sum()\n```\n\n\n```python\nD0 = squareform(pdist(P0))\nE0 = 0.5* k * A * (D0 - L)**2\n```\n\n\n```python\nD0[:5, :5]\n```\n\n\n```python\nE0[:5, :5]\n```\n\n\n```python\nenergy(P0.ravel())\n```\n\n\n```python\n# fix the position of the first few nodes just to show constraints\nfixed = 4\nbounds = (np.repeat(P0[:fixed,:].ravel(), 2).reshape((-1,2)).tolist() + \n [[None, None]] * (2*(n-fixed)))\nbounds[:fixed*2+4]\n```\n\n\n```python\nsol = opt.minimize(energy, P0.ravel(), bounds=bounds)\n```\n\n#### Visualization\n\nOriginal placement is BLUE\nOptimized arrangement is RED.\n\n\n```python\nplt.scatter(P0[:, 0], P0[:, 1], s=25)\nP = sol.x.reshape((-1,2))\nplt.scatter(P[:, 0], P[:, 1], edgecolors='red', facecolors='none', s=30, linewidth=2);\n```\n\nOptimization of standard statistical models\n---\n\nWhen we solve standard statistical problems, an optimization procedure similar to the ones discussed here is performed. For example, consider multivariate logistic regression - typically, a Newton-like algorithm known as iteratively reweighted least squares (IRLS) is used to find the maximum likelihood estimate for the generalized linear model family. However, using one of the multivariate scalar minimization methods shown above will also work, for example, the BFGS minimization algorithm. \n\nThe take home message is that there is nothing magic going on when Python or R fits a statistical model using a formula - all that is happening is that the objective function is set to be the negative of the log likelihood, and the minimum found using some first or second order optimization algorithm.\n\n\n```python\nimport statsmodels.api as sm\n```\n\n### Logistic regression as optimization\n\nSuppose we have a binary outcome measure $Y \\in {0,1}$ that is conditinal on some input variable (vector) $x \\in (-\\infty, +\\infty)$. 
Let the conditioanl probability be $p(x) = P(Y=y | X=x)$. Given some data, one simple probability model is $p(x) = \\beta_0 + x\\cdot\\beta$ - i.e. linear regression. This doesn't really work for the obvious reason that $p(x)$ must be between 0 and 1 as $x$ ranges across the real line. One simple way to fix this is to use the transformation $g(x) = \\frac{p(x)}{1 - p(x)} = \\beta_0 + x.\\beta$. Solving for $p$, we get\n$$\np(x) = \\frac{1}{1 + e^{-(\\beta_0 + x\\cdot\\beta)}}\n$$\nAs you all know very well, this is logistic regression.\n\nSuppose we have $n$ data points $(x_i, y_i)$ where $x_i$ is a vector of features and $y_i$ is an observed class (0 or 1). For each event, we either have \"success\" ($y = 1$) or \"failure\" ($Y = 0$), so the likelihood looks like the product of Bernoulli random variables. According to the logistic model, the probability of success is $p(x_i)$ if $y_i = 1$ and $1-p(x_i)$ if $y_i = 0$. So the likelihood is\n$$\nL(\\beta_0, \\beta) = \\prod_{i=1}^n p(x_i)^y(1-p(x_i))^{1-y}\n$$\nand the log-likelihood is \n\\begin{align}\nl(\\beta_0, \\beta) &= \\sum_{i=1}^{n} y_i \\log{p(x_i)} + (1-y_i)\\log{1-p(x_i)} \\\\\n&= \\sum_{i=1}^{n} \\log{1-p(x_i)} + \\sum_{i=1}^{n} y_i \\log{\\frac{p(x_i)}{1-p(x_i)}} \\\\\n&= \\sum_{i=1}^{n} -\\log 1 + e^{\\beta_0 + x_i\\cdot\\beta} + \\sum_{i=1}^{n} y_i(\\beta_0 + x_i\\cdot\\beta)\n\\end{align}\n\nUsing the standard 'trick', if we augment the matrix $X$ with a column of 1s, we can write $\\beta_0 + x_i\\cdot\\beta$ as just $X\\beta$.\n\n\n```python\ndf_ = pd.read_csv(\"binary.csv\")\ndf_.columns = df_.columns.str.lower()\ndf_.head()\n```\n\n\n```python\n# We will ignore the rank categorical value\n\ncols_to_keep = ['admit', 'gre', 'gpa']\ndf = df_[cols_to_keep]\ndf.insert(1, 'dummy', 1)\ndf.head()\n```\n\n### Solving as a GLM with IRLS\n\nThis is very similar to what you would do in R, only using Python's `statsmodels` package. The GLM solver uses a special variant of Newton's method known as iteratively reweighted least squares (IRLS), which will be further desribed in the lecture on multivarite and constrained optimizaiton.\n\n\n```python\nmodel = sm.GLM.from_formula('admit ~ gre + gpa', \n data=df, family=sm.families.Binomial())\nfit = model.fit()\nfit.summary()\n```\n\n### Or use R\n\n\n```python\n%load_ext rpy2.ipython\n```\n\n\n```r\n%%R -i df\nm <- glm(admit ~ gre + gpa, data=df, family=\"binomial\")\nsummary(m)\n```\n\n### Home-brew logistic regression using a generic minimization function\n\nThis is to show that there is no magic going on - you can write the function to minimize directly from the log-likelihood equation and run a minimizer. It will be more accurate if you also provide the derivative (+/- the Hessian for second order methods), but using just the function and numerical approximations to the derivative will also work. As usual, this is for illustration so you understand what is going on - when there is a library function available, you should probably use that instead.\n\n\n```python\ndef f(beta, y, x):\n \"\"\"Minus log likelihood function for logistic regression.\"\"\"\n return -((-np.log(1 + np.exp(np.dot(x, beta)))).sum() + (y*(np.dot(x, beta))).sum())\n```\n\n\n```python\nbeta0 = np.zeros(3)\nopt.minimize(f, beta0, args=(df['admit'], df.loc[:, 'dummy':]), method='BFGS', options={'gtol':1e-2})\n```\n\n### Optimization with `sklearn`\n\nThere are also many optimization routines in the `scikit-learn` package, as you already know from the previous lectures. 
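For instance, scikit-learn's `LogisticRegression` fits the same admissions model as above by internally minimizing a (by default L2-regularized) negative log-likelihood. The snippet below is a minimal sketch of my own, not part of the original notebook; with the regularization made very weak, its estimates should be close to the GLM / BFGS results.

```python
# Sketch: same model via scikit-learn (assumes the df DataFrame built above).
from sklearn.linear_model import LogisticRegression

clf = LogisticRegression(C=1e6, max_iter=1000)    # large C = negligible regularization
clf.fit(df[['gre', 'gpa']], df['admit'])          # sklearn adds its own intercept term
print(clf.intercept_, clf.coef_)                  # compare with the GLM / BFGS estimates
```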
Many machine learning problems essentially boil down to the minimization of some appropriate loss function.\n\n### Resources\n\n- [Scipy Optimize reference](http://docs.scipy.org/doc/scipy/reference/optimize.html)\n- [Scipy Optimize tutorial](http://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html)\n- [LMFit - a modeling interface for nonlinear least squares problems](http://cars9.uchicago.edu/software/python/lmfit/index.html)\n- [CVXpy- a modeling interface for convex optimization problems](https://github.com/cvxgrp/cvxpy)\n- [Quasi-Newton methods](http://en.wikipedia.org/wiki/Quasi-Newton_method)\n- [Convex optimization book by Boyd & Vandenberghe](http://stanford.edu/~boyd/cvxbook/)\n- [Nocedal and Wright textbook](http://www.springer.com/us/book/9780387303031)\n", "meta": {"hexsha": "0f5dd6c347b3b7f6453a41382058a4087ecd7e76", "size": 26804, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/T07D_Optimization_Examples.ipynb", "max_stars_repo_name": "Yijia17/sta-663-2021", "max_stars_repo_head_hexsha": "e6484e3116c041b8c8eaae487eff5f351ff499c9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 18, "max_stars_repo_stars_event_min_datetime": "2021-01-19T16:35:54.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-01T02:12:30.000Z", "max_issues_repo_path": "notebooks/T07D_Optimization_Examples.ipynb", "max_issues_repo_name": "Yijia17/sta-663-2021", "max_issues_repo_head_hexsha": "e6484e3116c041b8c8eaae487eff5f351ff499c9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/T07D_Optimization_Examples.ipynb", "max_forks_repo_name": "Yijia17/sta-663-2021", "max_forks_repo_head_hexsha": "e6484e3116c041b8c8eaae487eff5f351ff499c9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 24, "max_forks_repo_forks_event_min_datetime": "2021-01-19T16:26:13.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-15T05:10:14.000Z", "avg_line_length": 30.4590909091, "max_line_length": 707, "alphanum_fraction": 0.5594314281, "converted": true, "num_tokens": 5015, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8918110511888303, "lm_q2_score": 0.9230391674796139, "lm_q1q2_score": 0.8231765302384573}} {"text": "# The Laplace Approximation\nThe laplace approximation is a widely used framework that finds a Gaussian approximation to a probability density definted over a set of continuous variables. It is especially useful when applying Bayesian principles to logistic regression where computing integral of posterior distributions becomes intractable.\n\n\n\n## Basic Idea\nConsider a continuous random variable $z \\in \\mathcal{R}^D$ with probability distribution given by $p(z) = \\frac{1}{Z}f(z)$ where $Z = \\int{f(z) dz}$ is the normalizing constant and need not be known.\n\nIn the Laplace approximation, the goal is to `find a Gaussian distribution q(z) centered on a mode of the p(z)`. The mode can be computed by determining the value of $z=z_0$ where $\\frac{dp(z)}{dz} = 0$.\n\nNote that if $p(z)$ is `multi-modal`, the laplace approximation is only precise in the neighborhood of one of its many modes.\n\nLet $q(z) \\sim \\mathcal{N}(z_0,A^{-1})$ where $A$ is the precision matrix. 
Note: Precision matrix is the inverse of covariance matrix and is often employed for computational reasons.\n\n$$ \\begin{align} q_z &= \\frac{\\sqrt{|A|}}{(2\\pi)^{D/2}} \\exp \\{-\\frac{1}{2}(z-z_0)^T A (z-z_0)\\} \\\\ \\Rightarrow \\ln{q_z} &= \\frac{1}{2} \\left(\\ln{|A|} - D \\ln{2\\pi}\\right) - \\frac{1}{2}(z-z_0)^T A(z-z_0) \\\\\n&= \\ln{f_{z0}} - \\frac{1}{2}A(z-z_0)^2\\end{align}$$\n\nNote that this is a Taylor series expansion for $p_z$ at a mode where $\\frac{d \\ln p(z)}{dz} = 0$ and $\\frac{d^2 \\ln p(z)}{dz^2} = -A < 0 \\Rightarrow A > 0$.\n\nIn summary, the laplace approximation involves evaluating the mode $z_0$ and the Hessian $A$ at $z_0$. So if f(z) has an intractable but analytical form, the mode can be found by some form of numerical optimization algorithm. Note that the normalization constant $Z$ does not need to be known to apply this method.\n\n## Example\nThis is an example to demonstrate the Laplace approximation and adapted from Figure 4.14 in [1].\n\nSuppose $p(z) \\propto \\sigma(20z+4) \\exp{\\left(\\frac{-z^2}{2}\\right)}$ where $\\sigma(\\cdot)$ is the sigmoid function. This form is very common in classification problems and serves as a good practical example.\n\nTo compute the mode $z_0$ & Hessian $-A$,\n\n$$ \\begin{align} \\frac{d}{dz}\\ln p_z &\\propto \\frac{d}{dz}\\ln \\sigma(\\cdot) + \\frac{d}{dz}\\ln \\exp{\\left(\\frac{-z^2}{2}\\right)} \\\\\n&= 20 (1-\\sigma(\\cdot)) - z \\\\\n&= 0 \\text{ iff } z_0 = 20(1-\\sigma(20 z_0 + 4))\\end{align}$$\n\nThe above expression to determine $z_0$ is nonlinear and can be solved by Newton's method.\nLet $y(z_0) = z_0 - 20(1-\\sigma(20 z_0 + 4))$. To find $z_0$ such that $y=0$, we start with an initial guess $z_{0,0}$ and iterate the following equation till convergence.\n$z_{0,k+1} = z_{0,k} - \\left(y'(z_{0,k})\\right)^{-1} y(z_{0,k})$. 
The convergence criteria can be either set to a fixed maximum number of iterations or till $|z_{0,k+1} - z_{0,k}| \\le \\epsilon$ for some small $\\epsilon$.\n\nThe Hessian is expressed as:\n\n$$ \\begin{align} \\frac{d^2}{dz^2}\\ln p_z &\\propto \\frac{d}{dz}\\frac{d}{dz}\\ln p_z \\\\\n&= -400\\sigma(\\cdot)(1-\\sigma(\\cdot)) - 1 \\\\\n\\Rightarrow A &= -\\Bigg(\\frac{d^2}{dz^2}\\ln p_z\\Bigg)\\Bigg\\vert_{z=z_0} = 400\\sigma(20 z_0 + 4)(1-\\sigma(20 z_0 + 4)) + 1\\end{align}$$\n\n\n```python\nimport numpy as np\nfrom scipy.integrate import trapz\nfrom scipy.stats import norm\nimport matplotlib.pyplot as plt\nimport matplotlib\n# matplotlib.rcParams['text.usetex'] = True\n# matplotlib.rcParams['text.latex.unicode'] = True\n%matplotlib inline\n\n\ndef sigmoid(x):\n den = 1.0+np.exp(-x)\n return 1.0/den\n\ndef p_z(z):\n p = np.exp(-np.power(z,2)/2)*sigmoid(20*z+4)\n sum_p = trapz(p,z) ## normalize for plotting\n return p,p/sum_p\n\ndef findMode(z_init,max_iter = 25,tol = 1E-6):\n iter = 0\n z_next = np.finfo('d').max\n z_cur = z_init\n while (iter < max_iter and np.abs(z_next-z_cur) > tol):\n if iter > 0:\n z_cur = z_next\n y = z_cur - 20*(1-sigmoid(20*z_cur+4))\n der_y = 1 + 400*sigmoid(20*z_cur+4)*(1-sigmoid(20*z_cur+4))\n z_next = z_cur - y/der_y\n iter = iter+1\n# print(\"Iter-\"+str(iter)+\":\"+str(z_next))\n return z_next\n\ndef getHessian(z):\n sig_x = sigmoid(20*z+4)\n return 400*sig_x*(1-sig_x) + 1\n```\n\n\n```python\nz = np.linspace(-10,10,10000)\npz,pzn = p_z(z)\n\n## Mode & Precision matrix\nz0 = findMode(0)\nA = getHessian(z0)\nz0_idx = np.where(np.abs(z-z0) == np.min(np.abs(z-z0)))[0]\np_z0 = pzn[z0_idx]\n\ndp = np.gradient(pzn,z[1]-z[0])\nd2p = np.gradient(dp,z[1]-z[0])\n\n## Get approx Gaussian distribution\nq_z = norm.pdf(z, z0, 1/np.sqrt(A))\nfig,ax = plt.subplots(1,1,figsize=(4,3))\nax.cla()\nax.plot(z,pzn,color=\"orange\")\nax.fill_between(z,pzn, 0,\n facecolor=\"orange\", # The fill color\n color='orange', # The outline color\n alpha=0.2) # Transparency of the fill\n#ax.axvline(x=z0)#,ylim=0,ymax=0.7)\nax.vlines(z0, ymin=0, ymax=p_z0,linestyles='dotted')\nax.plot(z,q_z,'r')\nax.set_xlim([-2,4]);\nax.set_ylim([0,0.8]);\nax.set_yticks([0,0.2,0.4,0.6,0.8]);\nax.legend(['p_z','N('+str(np.round(z0,4))+','+str(np.round(1/np.sqrt(A),3))+')'])\nax.set_title('p(z) with its Laplace Approximation');\n```\n\n## References\n[1]: Bishop, Christopher M. 2006. Pattern Recognition and Machine Learning. 
Springer.\n", "meta": {"hexsha": "7a4761ab90031698b883534679922f4bce61749d", "size": 27716, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "files/DensityEstimation/LaplaceApproximation.ipynb", "max_stars_repo_name": "chandrusuresh/MyNotes", "max_stars_repo_head_hexsha": "4e0f86195d6d9eb3168bfb04ca42120e9df17f0b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "files/DensityEstimation/LaplaceApproximation.ipynb", "max_issues_repo_name": "chandrusuresh/MyNotes", "max_issues_repo_head_hexsha": "4e0f86195d6d9eb3168bfb04ca42120e9df17f0b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "files/DensityEstimation/LaplaceApproximation.ipynb", "max_forks_repo_name": "chandrusuresh/MyNotes", "max_forks_repo_head_hexsha": "4e0f86195d6d9eb3168bfb04ca42120e9df17f0b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 128.3148148148, "max_line_length": 19632, "alphanum_fraction": 0.843303507, "converted": true, "num_tokens": 1755, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9230391664210672, "lm_q2_score": 0.8918110519076928, "lm_q1q2_score": 0.8231765299579719}} {"text": "Consider a markovian process with state spaces {0,1} with the following state-transition matrix:\n$\\pi =$ \\begin{bmatrix}\nPr(s_{t+1} = 0|s_{t}= 0) & Pr(s_{t+1} = 1|s_{t}= 0) \\\\\nPr(s_{t+1} = 0|s_{t}= 1) & Pr(s_{t+1} = 1|s_{t}= 1) \n\\end{bmatrix}.\n\nAssume that $Pr(s_{t+1} = 0|s_{t}= 0) = Pr(s_{t+1} = 1|s_{t}= 1) = 2/3$ and $Pr(s_{t+1} = 0|s_{t}= 1) = \nPr(s_{t+1} = 1|s_{t} = 0) = 1/3$. Consider that the initial distribution for the states as: $\\Pi_{0} = [5/6 1/6]'$ \nand answer the following itens:\n\na) Right a Python function that iterates the states until the stationary state distribution for a given initial condition. \nLike: $\\Pi_{t+1} = \\pi'\\Pi_{t}$\n\nAnswer:\n\nImporting modules\n\n\n```python\nimport numpy as np\nfrom scipy import linalg\n```\n\nDefining the state-transition matrix and the initial condition:\n\n\n```python\nT = np.array([[2/3, 1/3], [1/3 , 2/3]])\npi_0 = np.array([[5/6, 1/6]])\n```\n\nCreating the function:\n\n\n```python\n#Returns the stationary distribution\ndef markov_int(T, pi_0):\n pi_t = np.copy(pi_0)\n norm, tol = 1, 1e-6\n while norm > tol:\n pi_t1 = np.dot(pi_t, T)\n norm = np.max(abs(pi_t1 - pi_t))\n pi_t = np.copy(pi_t1)\n return pi_t\n```\n\n\n```python\npi_t = markov_int(T, pi_0)\nprint(f'The stationary distribution is {pi_t}')\n```\n\n The stationary distribution is [[0.50000021 0.49999979]]\n\n\nb) Create a function in Python that returns the stationary distribution using the \nleft normalized eigenvector of $\\pi$ associated to the eignvalue $1$. 
\n\nI will deactivate the warnings because sometimes the function to find the eignvalues/vectors divides by zero\n\n\n```python\nimport warnings\nwarnings.filterwarnings('ignore')\n```\n\n\n```python\n# returns the normalized eignvectors and eignvalues\n# the first line calculates the eignvector(its normalized already) and eignvalues\n# the second line copies the eignvectors\n# the third line normalizes it(just because the exercise asks to do it)\ndef markov_matr(T):\n w, v = linalg.eig(T, left = True, right = False)\n pi = v[:, :]\n pi /= pi.sum(axis = 0) \n return pi, w\n```\n\n\n```python\nv, w = markov_matr(T)\nprint(v)\nprint(abs(w))\nprint(v[:,0].reshape(2,1))\nprint('The first column of the eignvectors matrix is the eignvector associated to the first eignvalue and so on.')\nprint('The normalized eignvectors matrix is:')\nprint(f'{v}')\nprint(f'The eignvalues associated are:')\nprint(f'{abs(w)}')\n```\n\n [[ 0.5 -inf]\n [ 0.5 inf]]\n [1. 0.33333333]\n [[0.5]\n [0.5]]\n The first column of the eignvectors matrix is the eignvector associated to the first eignvalue and so on.\n The normalized eignvectors matrix is:\n [[ 0.5 -inf]\n [ 0.5 inf]]\n The eignvalues associated are:\n [1. 0.33333333]\n\n\nConsider a vector of possible incomes $y = [y_{1} \\space y_{2}]$ and a vector of three possible values for the assets\n$a = [a_{1} \\space a_{2} \\space a_{3}]$. The joint stationary distribution of endowments of assets and income is given by:\n$\\phi_{\\infty} = [Pr(y_{1}, a_{1}) \\space Pr(y_{1}, a_{2}) \\space Pr(y_{1}, a_{3}) \\space Pr(y_{2}, a_{1})\n \\space Pr(y_{2}, a_{2}) \\space Pr(y_{2}, a_{3})]$.\n\nSuppose that the policy function is given by $a'(a,y) = a$. Suppose that $Pr(y_{1}, a_{i}) = 0.1$ for all\n$i \\in {1,2,3}$ and $Pr(y_{2}, a_{1}) = Pr(y_{2}, a_{2}) = 0.2$ and that $Pr(y_{1}, a_{3}) = 0.3$.\nCompute the aggregate demand for assets, that is:\n\n\\begin{equation}\nE[a] = \\int a'(a,y)d\\phi_{\\infty}\n\\end{equation} \n\nDefining the vectors:\n\n\n```python\nphi = np.array([[.1, .1, .1], [.2, .2, .3]])\n```\n\nNote that the initial income(y) and asset(a) distribution were not defined.We need to do it.\n\nDefining state dimensions:\n\n\n```python\nn_y = 2\nn_a = 3\n```\n\nDefining state space:\n\n\n```python\ny_grid = np.linspace(0,1, n_y)\na_grid = np.linspace(0,1, n_a)\n```\n\nCreating a policy function as defined by the exercise:\n\n\n```python\ndef f_poli(a_grid):\n a_line = np.zeros((n_y, n_a))\n a_line[0,:] = a_grid\n a_line[1,:] = a_grid\n \n return a_line\na_line = f_poli(a_grid)\na_line\n```\n\n\n\n\n array([[0. , 0.5, 1. ],\n [0. , 0.5, 1. 
]])\n\n\n\nNow we need loop through all income and assets(a line) states defined by the policy functio, \nbut first we will define the initial expected asset distribution as zero and the phi given.\n\n\n```python\nEa = 0 #Expected asset demand distribution\nphi = np.array([.1, .1, .1, .2, .2, .3]) #Creating phi\n#Now looping\nfor i_y in range(n_y):\n for i_a in range(n_a):\n idx = i_y * n_a + i_a\n print(idx)\n Ea += a_line[i_y, i_a]*phi[idx]\nprint(Ea)\n```\n\n 0.55\n\n", "meta": {"hexsha": "501133eff10ea2a1cd491401eadd1b461dd2d744", "size": 11903, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Markov Chain and Asset Distribution.ipynb", "max_stars_repo_name": "valcareggi/Macroeconomics", "max_stars_repo_head_hexsha": "e6b1165aeacf2369b70f2d710a198962c3390864", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Markov Chain and Asset Distribution.ipynb", "max_issues_repo_name": "valcareggi/Macroeconomics", "max_issues_repo_head_hexsha": "e6b1165aeacf2369b70f2d710a198962c3390864", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Markov Chain and Asset Distribution.ipynb", "max_forks_repo_name": "valcareggi/Macroeconomics", "max_forks_repo_head_hexsha": "e6b1165aeacf2369b70f2d710a198962c3390864", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 22.715648855, "max_line_length": 134, "alphanum_fraction": 0.4667730824, "converted": true, "num_tokens": 1531, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9230391621868804, "lm_q2_score": 0.8918110404058913, "lm_q1q2_score": 0.823176515565264}} {"text": "# Graphs and Linear Algebra\n\nRecall that the adjacency matrix of an undirected graph has $A_{i,j} = 1$ if and only if nodes $i$ and $j$ are adjacent. Also, recall that a graph is **regular** with degree $k$ if every node has $k$ neighbors. We also say that the graph is $k$-regular. Finally, for shorthand, we say that the eigenvalues of a graph are the eigenvalues of its adjacency matrix.\n\n**a)** Find three graphs with more than five nodes that are 2-regular, 3-regular, and 4-regular. Represent these in `networkx`, draw them, and find their adjacency matrices. These will be running examples for this problem.\n\n**b)** Find the eigenvalues of the three examples, along with the multiplicities of the eigenvalues.\n\n**c)** Show that if $G$ is $k$-regular, then $k$ is an eigenvalue of $G$. \n\n**d)** Show that $G$ is $k$-regular and connected, then the eigenvalue $k$ of $G$ has multiplicity one. \n\n**e)** Show that $G$ is $k$-regular then $|\\lambda|\\leq k$ for any eigenvalue $\\lambda$ of $G$. \n\n**f)** Let $J$ be the matrix of all ones and $A$ be the adjacency matrix of a $k$-regular graph. Show that $AJ = JA=kJ$. \n\n**g)** Show by construction that there exists regular graph with least eigenvalue equal to $-2$. \n\n**h)** Show that the following graph, called the Petersen Graph, is $3$-regular by finding its eigenvalues. \n\n\n\n**i)** Show that if both $n \\geq k+1$ and $nk$ is even, then there exists a $k$-regular graph of size $n$. 
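As a quick empirical aside before the worked answers below (this sketch is my own and is not part of the original solution), `networkx` can generate such graphs directly: `nx.random_regular_graph(k, n)` requires precisely that $nk$ be even and $k < n$, which is the condition in part (i), and the largest adjacency eigenvalue of each generated graph equals $k$, anticipating parts (c) and (e).

```python
# Sketch: generate k-regular graphs and inspect their adjacency spectra.
# Imports are repeated here only so that this cell is self-contained.
import networkx as nx
import numpy as np

for k, n in [(2, 8), (3, 8), (4, 8)]:
    G_reg = nx.random_regular_graph(k, n, seed=1)
    A_reg = nx.to_numpy_array(G_reg)           # adjacency matrix as a numpy array
    eigs = np.sort(np.linalg.eigvalsh(A_reg))  # real eigenvalues (A_reg is symmetric)
    print(k, '-regular eigenvalues:', np.round(eigs, 3))
```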
\n\n\n```python\nimport networkx as nx\nimport math\nimport scipy\nimport scipy.integrate as spi\nimport numpy as np\nimport sympy as sm\nsm.init_printing(use_latex='mathjax')\nimport matplotlib.pyplot as plt\nimport itertools\nimport random\n%matplotlib inline\n```\n\n**a)** Find three graphs with more than five nodes that are 2-regular, 3-regular, and 4-regular. Represent these in networkx, draw them, and find their adjacency matrices. These will be running examples for this problem.\n\n\n```python\n#Ans.a) Representing three graphs with more than five nodes \n#that are 2-regular, 3-regular, and 4-regular and drawing them.\n\n#Graph 1 parameters\nG = nx.Graph()\nG.add_nodes_from([1,2,3,4,5,6,7,8])\nG.add_edges_from([(1,2),(2,3),(3,4),(4,5),(5,6),(6,7),(7,8),(8,1)])\n\n#Graph 2 parameters\nH = nx.Graph()\nH.add_nodes_from([1,2,3,4,5,6,7,8])\nH.add_edges_from([(1,2),(2,3),(3,4),(4,5),(5,6),(6,7),\n (7,8),(8,1),(1,3),(2,4),(5,7),(6,8)])\n\n#Graph 3 parameters\nI = nx.Graph()\nI.add_nodes_from([1,2,3,4,5,6,7,8])\nI.add_edges_from([(1,2),(1,3),(1,7),(1,8),(2,3),(2,4),(2,8),(3,4),\n (3,5),(4,5),(4,6),(5,6),(5,7),(6,7),(6,8),(7,8)])\n\n#Plotting parameters\nbasic_graph,ax = plt.subplots(1,3, figsize= (15,5))\n\nnx.draw(G,\n ax=ax[0], \n pos=nx.kamada_kawai_layout(G),\n with_labels=True, \n node_color='#444444',\n font_color=\"white\",\n )\nax[0].set_title('2-regular graph with 8 nodes');\n\nnx.draw(H,\n ax=ax[1], \n pos=nx.kamada_kawai_layout(H),\n with_labels=True, \n node_color='#444444',\n font_color=\"white\",\n )\nax[1].set_title('3-regular graph with 8 nodes');\n\nnx.draw(I,\n ax=ax[2], \n pos=nx.kamada_kawai_layout(I),\n with_labels=True, \n node_color='#444444',\n font_color=\"white\",\n )\nax[2].set_title('4-regular graph with 8 nodes');\n```\n\n**Ans.a) To find the adjacency matrix of the graphs drawn:**\n\nAdjacency matrix for the 2-regular graph with 8 nodes:\n$$\nA_2 = \\begin{pmatrix}\n0 & 1 & 0 & 0 & 0 & 0 & 0 & 1\\\\\n1 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\\\\n0 & 1 & 0 & 1 & 0 & 0 & 0 & 0\\\\\n0 & 0 & 1 & 0 & 1 & 0 & 0 & 0\\\\\n0 & 0 & 0 & 1 & 0 & 1 & 0 & 0\\\\\n0 & 0 & 0 & 0 & 1 & 0 & 1 & 0\\\\\n0 & 0 & 0 & 0 & 0 & 1 & 0 & 1\\\\\n1 & 0 & 0 & 0 & 0 & 0 & 1 & 0\n\\end{pmatrix}\n$$\n\nAdjacency matrix for the 3-regular graph with 8 nodes:\n$$\nA_3 = \\begin{pmatrix}\n0 & 1 & 1 & 0 & 0 & 0 & 0 & 1\\\\\n1 & 0 & 1 & 1 & 0 & 0 & 0 & 0\\\\\n1 & 1 & 0 & 1 & 0 & 0 & 0 & 0\\\\\n0 & 1 & 1 & 0 & 1 & 0 & 0 & 0\\\\\n0 & 0 & 0 & 1 & 0 & 1 & 1 & 0\\\\\n0 & 0 & 0 & 0 & 1 & 0 & 1 & 1\\\\\n0 & 0 & 0 & 0 & 1 & 1 & 0 & 1\\\\\n1 & 0 & 0 & 0 & 0 & 1 & 1 & 0\n\\end{pmatrix}\n$$\n\nAdjacency matrix for the 4-regular graph with 8 nodes:\n$$\nA_4 = \\begin{pmatrix}\n0 & 1 & 1 & 0 & 0 & 0 & 1 & 1\\\\\n1 & 0 & 1 & 1 & 0 & 0 & 0 & 1\\\\\n1 & 1 & 0 & 1 & 1 & 0 & 0 & 0\\\\\n0 & 1 & 1 & 0 & 1 & 1 & 0 & 0\\\\\n0 & 0 & 1 & 1 & 0 & 1 & 1 & 0\\\\\n0 & 0 & 0 & 1 & 1 & 0 & 1 & 1\\\\\n1 & 0 & 0 & 0 & 1 & 1 & 0 & 1\\\\\n1 & 1 & 0 & 0 & 0 & 1 & 1 & 0\n\\end{pmatrix}\n$$\n\n\n\n**b)** Find the eigenvalues of the three examples, along with the multiplicities of the eigenvalues\n\n\n```python\n#Ans.b) Eigenvalues of the 3 example graphs:\n#Representing the adjacent matrix \nA_2 = np.matrix([[0 , 1 , 0 , 0 , 0 , 0 , 0 , 1],\n [1 , 0 , 1 , 0 , 0 , 0 , 0 , 0],\n [0 , 1 , 0 , 1 , 0 , 0 , 0 , 0],\n [0 , 0 , 1 , 0 , 1 , 0 , 0 , 0],\n [0 , 0 , 0 , 1 , 0 , 1 , 0 , 0],\n [0 , 0 , 0 , 0 , 1 , 0 , 1 , 0],\n [0 , 0 , 0 , 0 , 0 , 1 , 0 , 1],\n [1 , 0 , 0 , 0 , 0 , 0 , 1 , 0]])\n\nA_3 = np.matrix([[0 , 1 , 1 , 0 , 0 , 0 , 0 , 1],\n [1 , 0 , 1 
, 1 , 0 , 0 , 0 , 0],\n [1 , 1 , 0 , 1 , 0 , 0 , 0 , 0],\n [0 , 1 , 1 , 0 , 1 , 0 , 0 , 0],\n [0 , 0 , 0 , 1 , 0 , 1 , 1 , 0],\n [0 , 0 , 0 , 0 , 1 , 0 , 1 , 1],\n [0 , 0 , 0 , 0 , 1 , 1 , 0 , 1],\n [1 , 0 , 0 , 0 , 0 , 1 , 1 , 0]])\n\nA_4 = np.matrix([[0 , 1 , 1 , 0 , 0 , 0 , 1 , 1],\n [1 , 0 , 1 , 1 , 0 , 0 , 0 , 1],\n [1 , 1 , 0 , 1 , 1 , 0 , 0 , 0],\n [0 , 1 , 1 , 0 , 1 , 1 , 0 , 0],\n [0 , 0 , 1 , 1 , 0 , 1 , 1 , 0],\n [0 , 0 , 0 , 1 , 1 , 0 , 1 , 1],\n [1 , 0 , 0 , 0 , 1 , 1 , 0 , 1],\n [1 , 1 , 0 , 0 , 0 , 1 , 1 , 0]])\n\ndef Sym(A):\n \"\"\"\n To check if matrix A is symmetric,\n as to fulfill the property of Adjacency matrix\n \"\"\"\n B = A.T #Defining a matrix which is transpose of matrix A\n if B.all()==A.all(): #Condition for checking symmetry\n print(\"Matrix is symmetric\")\n\n#To check if the three example matrix are symmetric\nSym(A_2)\nSym(A_3)\nSym(A_4)\n\n#Eigenvalues calculation and rounding off\nA2_eig = np.round(np.linalg.eigvals(A_2))\nA3_eig = np.round(np.linalg.eigvals(A_3))\nA4_eig = np.round(np.linalg.eigvals(A_4))\n\ndef Mul(A_eig):\n \"\"\"\n To get the multiplicity of each eigenvalues in A_eig list\n \"\"\"\n uni = np.unique(A_eig) #To get the unique eigenvalues\n lst = dict.fromkeys(uni, []) #Extract keys from the dictionary of unique eigenvalues\n for i in range(len(uni)):\n c = 0 \n for j in range(len(A_eig)):\n if (A_eig[j] == uni[i]):\n c = c + 1\n lst[uni[i]] = c\n return lst\n\n#Printing the results\nprint('\\nEigenvalues of 2-regular graph:', A2_eig)\nprint('Multiplicity of eigenvalues of 2-regular graph:',Mul(A2_eig))\nprint('\\nEigenvalues of matrix of 3-regular:', A3_eig)\nprint('Multiplicity of eigenvalues of 3-regular graph:',Mul(A3_eig))\nprint('\\nEigenvalues of matrix of 4-regular:', A4_eig)\nprint('Multiplicity of eigenvalues of 4-regular graph:',Mul(A4_eig))\n```\n\n Matrix is symmetric\n Matrix is symmetric\n Matrix is symmetric\n \n Eigenvalues of 2-regular graph: [-2. -1. 0. 2. -1. -0. 1. 1.]\n Multiplicity of eigenvalues of 2-regular graph: {-2.0: 1, -1.0: 2, 0.0: 2, 1.0: 2, 2.0: 1}\n \n Eigenvalues of matrix of 3-regular: [-2. 3. 2. 1. -1. -1. -1. -1.]\n Multiplicity of eigenvalues of 3-regular graph: {-2.0: 1, -1.0: 4, 1.0: 1, 2.0: 1, 3.0: 1}\n \n Eigenvalues of matrix of 4-regular: [ 4. -0. -1. 1. 1. -2. -2. -1.]\n Multiplicity of eigenvalues of 4-regular graph: {-2.0: 2, -1.0: 2, -0.0: 1, 1.0: 2, 4.0: 1}\n\n\n**c)** Show that if $G$ is $k$-regular, then $k$ is an eigenvalue of $G$.\n\n**Ans.c)**\nLet's take an eigenvalue $\\lambda$ of graph G whose adjacency matrix is $A_G$ and the corresponding eigenvector be $x = (x_1, x_2, ...,x_n)$. \n\nNow, let's assume $x_k$ is the largest co-ordinate of the vector $x$. 
\n\nAnd, $\\Delta(G)$ be the maximum degree of nodes in the graph.\n\nUsing linear algebra's basic equation, we have,\n$$\\lambda x = Ax$$\nThen, $$=)|\\lambda.x_k| = \\left|(a_{k1},a_{k2},...,a_{kn})(x_1,x_2...,x_n)\\right|$$\\\n$$=)|\\lambda.x_k|=|\\lambda||x_k| = \\left|\\sum_{i=1}^{n} a_{ki}x_i \\right|$$\n\nAs we know that, $x_k$ is the largest co-ordinate in vector $x$,\n$$|\\lambda.x_k| = \\left|\\sum_{i=1}^{n} a_{ki}x_i\\right| \\leq deg(G) \\left|x_k\\right| $$\n\nNow, the following conditions, \n$$deg(G) = \\Delta(G) \\; \\&\\; x_i = x_k \\forall i,$$ needs to be true, for the given equality to hold.\n\nWe know that, for any k-regular graph, \n$$deg(G) = k = \\Delta(G)-----(1)$$\n\nAlso, for graph G to be regular, (1,1,1,..1) must be an eigenvector of the adjacency matrix $A_G$ which shows that, \n$$x_i = x_k \\;\\forall\\; i-----(2)$$\n\nFrom (1) and (2) we can say that the above conditions for equality holds. \nThus, if $G$ is $k$-regular, then $k$ is an eigenvalue of $G$. \n\n\n```python\n#Ans.c) From Q(a) graphs:\nprint('Eigenvalues of 2-regular matrix:', A2_eig)\nprint('Eigenvalues of 3-regular matrix:', A3_eig)\nprint('Eigenvalues of 4-regular matrix:', A4_eig)\nprint('\\nFrom the above list, its clear that k is present in the eigenvalue list')\nprint('Thus, if G is k-regular, then k is an eigenvalue of G')\n```\n\n Eigenvalues of 2-regular matrix: [-2. -1. 0. 2. -1. -0. 1. 1.]\n Eigenvalues of 3-regular matrix: [-2. 3. 2. 1. -1. -1. -1. -1.]\n Eigenvalues of 4-regular matrix: [ 4. -0. -1. 1. 1. -2. -2. -1.]\n \n From the above list, its clear that k is present in the eigenvalue list\n Thus, if G is k-regular, then k is an eigenvalue of G\n\n\n**d)** Show that $G$ is $k$-regular and connected, then the eigenvalue $k$ of $G$ has multiplicity one. \n\n**Ans.d)**\nFrom (c), we know that for $k$-regular graph with eigenvalue $k$ and eigenvector $x = (x_1,x_2,..,x_n)$ the equality $x_1 = x_2 = ... = x_n$ holds. \n\nHere, $k$ is eigenvalue of the graph $G$.\n\nNow, the space of the eigenvectors of eigenvalue $k$ is of dimension 1 and it proves that $k$ has a multiplicity of 1. \n\nAlso, for a graph to be connected, the maximum value of spectral radius must be the eigenvalue of the graph and it satisfies the condition of multiplicity to be 1. Here, for a connected k-regular graph the maximum value of spectral radius is k and thus its multiplicity will be 1.\n\n\n```python\n#Ans.d) From Q(a) graphs:\nprint('Eigenvalues of 2-regular matrix:', A2_eig)\nprint('Multiplicty of eigenvalue 2:', Mul(A2_eig)[2])\nprint('\\nEigenvalues of 3-regular matrix:', A3_eig)\nprint('Multiplicty of eigenvalue 3:', Mul(A3_eig)[3])\nprint('\\nEigenvalues of 4-regular matrix:', A4_eig)\nprint('Multiplicty of eigenvalue 4:', Mul(A4_eig)[4])\nprint('\\nIf G is k-regular and connected, then the eigenvalue k of G has multiplicity 1')\n```\n\n Eigenvalues of 2-regular matrix: [-2. -1. 0. 2. -1. -0. 1. 1.]\n Multiplicty of eigenvalue 2: 1\n \n Eigenvalues of 3-regular matrix: [-2. 3. 2. 1. -1. -1. -1. -1.]\n Multiplicty of eigenvalue 3: 1\n \n Eigenvalues of 4-regular matrix: [ 4. -0. -1. 1. 1. -2. -2. -1.]\n Multiplicty of eigenvalue 4: 1\n \n If G is k-regular and connected, then the eigenvalue k of G has multiplicity 1\n\n\n**e)** Show that $G$ is $k$-regular then $|\\lambda|\\leq k$ for any eigenvalue $\\lambda$ of $G$. \n\n**Ans.e)**\nLet's take an eigenvalue $\\lambda$ of graph G whose adjacency matrix is $A_G$ and the corresponding eigenvector be $x = (x_1, x_2, ...,x_n)$. 
\n\nNow, let's assume $x_k$ is the largest co-ordinate of the vector $x$. \n\nAnd, $\\Delta(G)$ be the maximum degree of nodes in the graph.\n\nTo prove the above statement, we have to show that $\\lambda \u2264 \u0394(G)$.\n\nUsing linear algebra's basic equation, we have,\n$$\\lambda x = Ax$$\nThen, $$=)|\\lambda.x_k| = \\left|(a_{k1},a_{k2},...,a_{kn})(x_1,x_2...,x_n)\\right|$$\\\n$$=)|\\lambda.x_k|=|\\lambda||x_k| = \\left|\\sum_{i=1}^{n} a_{ki}x_i \\right|$$\n\nAs we know that, $x_k$ is the largest co-ordinate in vector $x$,\n$$=)|\\lambda.x_k| = \\left|\\sum_{i=1}^{n} a_{ki}x_i \\right| \\leq \\left| \\sum_{i=1}^{n} a_{ki}x_k\\right| $$\\\n$$=)|\\lambda.x_k| \\leq \\left| \\sum_{i=1}^{n} a_{ki}x_k\\right| \\leq \\left| x_k \\sum_{i=1}^{n} a_{ki}\\right| -----(1)$$\n\nAs the graph is $k$-regular, the summation of elements in each row of adjacency matrix will be the $\\Delta(G)=k -----(2)$ \n\nUsing $(1),(2)$ and substitutuing $|x_k| = 1$ in (1), we can prove that, if $G$ is $k$-regular then $|\\lambda| \\leq k $ for any eigenvalue $\\lambda$ of $G$.\n\n\n```python\n#Ans.e) From Q(a) graphs:\nprint('Absolute value of Eigenvalues of 2-regular matrix:', np.abs(A2_eig))\nprint('Maximum value of Eigenvalues of 2-regular matrix:', np.max(np.abs(A2_eig)))\nprint('\\nAbsolute value of Eigenvalues of 3-regular matrix:', np.abs(A3_eig))\nprint('Maximum value of Eigenvalues of 2-regular matrix:', np.max(np.abs(A3_eig)))\nprint('\\nAbsolute value of Eigenvalues of 4-regular matrix:', np.abs(A4_eig))\nprint('Maximum value of Eigenvalues of 2-regular matrix:', np.max(np.abs(A4_eig)))\n\nprint('\\nIt shows if G is k-regular then |\u03bb|\u2264k for any eigenvalue \u03bb of G')\n```\n\n Absolute value of Eigenvalues of 2-regular matrix: [2. 1. 0. 2. 1. 0. 1. 1.]\n Maximum value of Eigenvalues of 2-regular matrix: 2.0\n \n Absolute value of Eigenvalues of 3-regular matrix: [2. 3. 2. 1. 1. 1. 1. 1.]\n Maximum value of Eigenvalues of 2-regular matrix: 3.0\n \n Absolute value of Eigenvalues of 4-regular matrix: [4. 0. 1. 1. 1. 2. 2. 1.]\n Maximum value of Eigenvalues of 2-regular matrix: 4.0\n \n It shows if G is k-regular then |\u03bb|\u2264k for any eigenvalue \u03bb of G\n\n\n**f)** Let $J$ be the matrix of all ones and $A$ be the adjacency matrix of a $k$-regular graph. Show that $AJ = JA=kJ$. \n\n**Ans.f)**\nWe have the following facts- \n1. $G$ is the $k$-regular graph. \n2. Adjacency Matrix $A$ is symmetric. \n3. $J$ is a matrix of all ones and is of the same size of $A$ and also symmetric. \n\nAs both matrix $A$ and $J$ are symmetric, then their product by interchanging positions will be same. Thus, $$AJ = JA----(1)$$ \n\nNow, in $kJ$ resultant matrix, each element will be equal to $k$. From the definition of $k$, it corresponds to the number of neighbors of each nodes. \n\nUsing (1) and comparing it with $kJ$, we have, for every row $J_i$ in matrix $J$, and multipying with column $A_i$ the output will be count of the number of 1's in column $A_i$ of the matix $A$. So, it will be equal to $k$. We can show it as follows- \n\n$$=)J_i.A_i = (1,1,...,1).(a_{i1},a_{i2},...a_{in})^T $$\\\n$$=)J_i.A_i = \\sum_{j = 1}^{n}(a_{ij}) $$\\\n$$=)J_i.A_i = k $$\n\nThis shows that each element of the resultant matrix $AJ$ and $JA$ is equal to $k$. 
Hence, it proves that, $$AJ = JA = kJ$$\n\n\n```python\n#f)\nJ = np.ones((4,4)) #To construct a matrix of all ones\nA = np.matrix([[0 , 1 , 0 , 1], #To construct a 2-regular adjacency matrix of node 4 \n [1 , 0 , 1 , 0],\n [0 , 1 , 0 , 1],\n [1 , 0 , 1 , 0]])\nk = np.max(np.round(np.linalg.eigvals(A))) #To get the maximum eigenvalue\na = A*J\nb = J*A\nc = k*J\ndisplay(a)\ndisplay(b)\ndisplay(c)\nif (a.all()==b.all()==c.all()): #Given condition to prove\n print('Given equation holds')\nelse:\n print('Given equation does not hold')\n```\n\n\n matrix([[2., 2., 2., 2.],\n [2., 2., 2., 2.],\n [2., 2., 2., 2.],\n [2., 2., 2., 2.]])\n\n\n\n matrix([[2., 2., 2., 2.],\n [2., 2., 2., 2.],\n [2., 2., 2., 2.],\n [2., 2., 2., 2.]])\n\n\n\n array([[2., 2., 2., 2.],\n [2., 2., 2., 2.],\n [2., 2., 2., 2.],\n [2., 2., 2., 2.]])\n\n\n Given equation holds\n\n\n**g)** Show by construction that there exists regular graph with least eigenvalue equal to \u22122 .\n\n\n```python\n#Ans.g) #Graph parameters to draw a 2-regular graph of node 5\nG = nx.Graph()\nG.add_nodes_from([1,2,3,4,5])\nG.add_edges_from([(1,2),(2,3),(3,4),(4,5),(5,1)])\n\n#Plotting parameters\nbasic_graph,ax = plt.subplots(1,1, figsize= (5,5))\nnx.draw(G,\n ax=ax, \n pos=nx.kamada_kawai_layout(G),\n with_labels=True, \n node_color='#444444',\n font_color=\"white\",\n )\nax.set_title('2-regular graph with 5 nodes');\n```\n\n\n```python\n#Ans.g) \n#Constructing the adjacency matrix of the above graph\nA_g = np.matrix([[0 , 1 , 0 , 0 , 1],\n [1 , 0 , 1 , 0 , 0],\n [0 , 1 , 0 , 1 , 0],\n [0 , 0 , 1 , 0 , 1],\n [1 , 0 , 0 , 1 , 0]])\n\nSym(A_g) #To check if the matrix is symmetric\n\n#To calculate the eigenvalue and find the minimum \nAg_eig = np.round(np.linalg.eigvals(A_g))\nprint('Eigenvalues:',Ag_eig)\nAg_eig_min = np.min(Ag_eig)\nprint('Minimum eigenvalue:', Ag_eig_min)\nprint('It shows that the minimum eigenvalue of the regular graph is -2')\n```\n\n Matrix is symmetric\n Eigenvalues: [-2. 1. 2. -2. 1.]\n Minimum eigenvalue: -2.0\n It shows that the minimum eigenvalue of the regular graph is -2\n\n\n**h)** Show that the following graph, called the Petersen Graph, is $3$-regular by finding its eigenvalues. \n\n\n\n\n\n```python\n#Ans.h) Petersen Graph\n#Graph parameters to draw Petersen Graph\nG = nx.Graph()\nG.add_nodes_from([1,2,3,4,5,6,7,8,9,10])\nG.add_edges_from([(1,2),(1,5),(1,6),(2,3),(2,7),(3,4),(3,8),(4,5),\n (4,9),(5,10),(6,8),(6,9),(7,9),(7,10),(8,10)])\n\n#Plotting parameters\nbasic_graph,ax = plt.subplots(1,1, figsize= (5,5))\nseq = [[10,6,7,8,9],[1,2,3,4,5]]\nnx.draw(G,\n ax=ax, \n pos=nx.shell_layout(G,seq),\n with_labels=True, \n node_color='#444444',\n font_color=\"white\",\n )\nax.set_title('Petersen Graph');\n```\n\n\n```python\n#Ans.h) \n#Representing the adjacency matrix for Petersen Graph\nA_h = np.matrix([[0,1,0,0,1,1,0,0,0,0],\n [1,0,1,0,0,0,1,0,0,0],\n [0,1,0,1,0,0,0,1,0,0],\n [0,0,1,0,1,0,0,0,1,0],\n [1,0,0,1,0,0,0,0,0,1],\n [1,0,0,0,0,0,0,1,1,0],\n [0,1,0,0,0,0,0,0,1,1],\n [0,0,1,0,0,1,0,0,0,1],\n [0,0,0,1,0,1,1,0,0,0],\n [0,0,0,0,1,0,1,1,0,0]])\n\n#To find the eigenvalues and the maximum of them\nAh_eig = np.round(np.linalg.eigvals(A_h))\nprint('Eigenvalues:',Ah_eig)\nAh_eig_max = np.max(Ah_eig)\nprint('Maximum eigenvalue:', Ah_eig_max)\nprint('As maximum eigenvalue is 3, thus Petersen Graph is 3-regular graph')\n```\n\n Eigenvalues: [-2. 1. 3. -2. -2. 1. -2. 1. 1. 
1.]\n Maximum eigenvalue: 3.0\n As maximum eigenvalue is 3, thus Petersen Graph is 3-regular graph\n\n\n**i)**\nShow that if both n\u2265k+1 and nk is even, then there exists a k -regular graph of size n .\n\n**Ans.i)**\nAs $nk$ is even, and if $n$ depends on $k$, we have following cases:\n1. $n$ = 0 or $k$ = 0\n2. if $k$ is even then $n$ is even\n3. if $k$ is odd then $n$ is even or odd\n\nConsidering the above options, we have the following cases-\n\n**i)**\nIf $n = 0$ \n\nThen a graph doesn't exit. Also, it means that $k$ will be negative.This is not possible. \n\nIf $k = 0$ \n\nThen $n \\geq 1$, and the graph will have nodes without any neighbors. This type of graphs are called $0$-regular graphs. Thus, a $k$-regular graph of size $n$ exists. \n\n**ii)**\nIf $k$ is even and $k \\geq 2$\n\nLet's assume that $G_{n}(k)$ is a connected $k$-regular graph with a vertex set of $\\{1,2,3,...,n\\}$, for some $n \\geq k+1 $.\n\nNow, by using the property that if minimum degree of a vertex in a graph $X$ is $\\delta \\geq 2$, then there exists a path in $G$ containing $\\delta$ edges. \n\nLet's consider, path $P$ to be a path in $G_{n}(k)$ containing $k$ edges. As $k$ is even, there will be $\\frac{k}{2}$ vertex disjoint edges, $e_i = (u_i,v_i), 1\\leq i \\leq \\frac{k}{2}$ in the path $P$.\n\nThus, the set, $\\bigcup_{1\\leq i \\leq \\frac{k}{2}} \\left\\{ u_i,v_i \\right\\}\\;$ has k distict verices. \n\nBy removing the edges, $e_i$, $1\\leq i\\leq \\frac{k}{2}$ and adding $k$ new edges, the new set becomes-\n\n$$\\bigcup_{i=1}^{\\frac{k}{2}}\\left\\{(n+1,u_i)\\right\\} \\bigcup \\left\\{(n+1,v_i)\\right\\} $$\n\nThe resulting graph is a connected $k$-regular graph with vertices $\\{1,2,...,n+1\\}$ and is denoted as $G_{n+1}(k)$ graph.\n\n**iii)**\nIf $k$ is odd, \n\nAnd if $k = 1$, then, $n \\geq 2$ and has to be even. We can denote it as-\n\n$n = 2z,$ where $z$ is natural number.\n\nAs number of nodes are always even, they can be distributed in two disjoint sets and each node can be connected one element of set A and to exactly one element of set B. \n\nFor $k \\geq 3$\n\nLet's assume that $G_{n}(k)$ is a connected $k$-regular graph with a vertex set of $\\{1,2,3,...,n\\}$, for some even $n \\geq k+1 $. \nThe resulting graph is a connected $k$-regular graph with vertices $\\{1,2,...,n+1\\}$ and is denoted as $G_{n+1}(k)$ graph.\n\nNow, by using the property that if minimum degree of a vertex in a graph $X$ is $\\delta \\geq 2$, then there exists a path in $G$ containing $\\delta$ edges. \n\nLet's consider, path $Q_{n}(k) = (q_1,q_2,...,q_{k+1})$ to be a path in $G_{n}(k)$ containing $k$ edges as $\\{(q_j,q_{j+1})\\}_{1\\leq j \\leq k}$.\n\nBy removing $k-1$ edges, $\\{(q_j,q_{j+1})\\}_{1\\leq j \\leq k-1}$ and adding the following edges-\n\n1. For $1\\leq j \\leq k-1, j $ odd, add the edges $\\{(n+1, q_j),(n+1, q_{j+1})\\}$\n\n2. For $1\\leq j \\leq k-1, j $ even, add the edges $\\{(n+2, q_j),(n+2, q_{j+1})\\}$\n\n3. And add the edge $(n+1, n+2)$\n\nSince k is odd, the total number of edges added to $1$ is $k-1$ and thus, there are $k-1$ edges with $n+1$ as the end vertex after $1$. 
Similarly, after step $2\\;$ there are $k-1$ edges with $n+2$ as the end vertex and the resulting graph, is a connected $k$-regular graph with vertex set $\\{1,2,...,n+1,n+2\\}$ and denoted as $G_{n+2}(k)$.\n\nThus, from the above cases we can state that if $nk$ is even and $n\u2265k+1$, then there exists a $k$-regular graph of size $n$.\n\n**Reference:** https://arxiv.org/abs/1801.08345, https://www.math.kit.edu/iag6/lehre/graphtheo2013w/media/how_to_write_a_proof.pdf\n\n", "meta": {"hexsha": "4fc0aeffc19e8598270e279d8aa02b532dd60c38", "size": 126812, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Graphs and Linear Algebra/Graphs and Linear Algebra.ipynb", "max_stars_repo_name": "joy6543/Mathematics-and-Analytical-Methods", "max_stars_repo_head_hexsha": "eea45c7ffd98c308254142903ef661c792fa3503", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Graphs and Linear Algebra/Graphs and Linear Algebra.ipynb", "max_issues_repo_name": "joy6543/Mathematics-and-Analytical-Methods", "max_issues_repo_head_hexsha": "eea45c7ffd98c308254142903ef661c792fa3503", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Graphs and Linear Algebra/Graphs and Linear Algebra.ipynb", "max_forks_repo_name": "joy6543/Mathematics-and-Analytical-Methods", "max_forks_repo_head_hexsha": "eea45c7ffd98c308254142903ef661c792fa3503", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 126812.0, "max_line_length": 126812, "alphanum_fraction": 0.878828502, "converted": true, "num_tokens": 8173, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9230391621868804, "lm_q2_score": 0.8918110353738529, "lm_q1q2_score": 0.8231765109204955}} {"text": "# Logistic Regression\n\n\n```python\nimport numpy as np \nfrom matplotlib import pyplot as plt \nfrom scipy.optimize import minimize\nfrom sklearn.preprocessing import PolynomialFeatures\nfrom scipy import optimize\n\n```\n\nIn this part, we will solve a`classification problem`. 
\nClassification ploblems are just like the regression problems except that the values we want to predict is only a small number of discrete values.\n\n### Examples:\n- spam / not spam \n- Tumor:Malignant/Benign\t\n\n---------------\n## Part One\n---------------\n\n#### Firstly, we will define the Basic equations of Logistic Regression model.\n\n### 1- Hypothesis\n\n### $ h_\\theta(x) = g(\\theta^T x)$ \n        where   0 <$ h_\\theta(x) <1 \\\\ $ \n\n### $g(\\theta^T x)$= $\\frac{1}{1-e^(\\theta^T x)}$\n\nAnd this's what called `sigmoid fuction`, or logistic function.\n\nThe concept of hypothesis also can be described by conditional probability: \n\n### $ h_\\theta(x) = P(y=1 | x;\\theta)$      \n\n### $So, when$   $ h_\\theta(x) = P(y=1 | x;\\theta) =0.7, then$   $ h_\\theta(x) = P(y=0 | x;\\theta) = 0.3$\n\n\n```python\ndef sigmoid(z):\n return(1 / (1 + np.exp(-z)))\n\n```\n\n\n```python\nx = np.array([np.arange(-10., 10., 0.2)])\nsig = sigmoid(x)\n\nfig = plt.figure(figsize=(15,8))\nfig.suptitle('Sigmoid Function ', fontsize=14, fontweight='bold')\n\nax = fig.add_subplot(111)\nfig.subplots_adjust(top=0.85)\nax.set_title('axes title')\n\nax.set_xlabel('h(x)')\nax.set_ylabel('y')\nax.plot(x, sig, 'o')\n\nax.text(5, 1.5, 'y=1 if h(x)>0.5', style='italic',bbox={'facecolor':'blue', 'alpha':0.6, 'pad':20})\nax.text(-5, 0.5, 'y=0 if h(x)<0.5', style='italic',bbox={'facecolor':'blue', 'alpha':0.6, 'pad':20})\n\nplt.axhline(0, color='black')\nplt.axvline(0, color='black')\n\nax.axis([-10, 10, -0.5, 2])\nax.grid(True)\nplt.show()\n\n```\n\n### 2- Cost Function \nThe logistic model's cost function is different than linear regression one, because if we apply the same equation, the cost function will be non-convex.\n\nso, the logistic regression new cost function is: \n\n### $\\begin{equation}\n cost(h_\\theta(x),y) =\n \\begin{cases}\n \\text{ -log($h_\\theta(x)$) ... if y=1}\\\\\n \\text{-log($1-h_\\theta(x)$) ... if y=0}\\\\\n \\end{cases} \n\\end{equation}$\n\n##### but the simple version written as : \n\n### $ cost(h_\\theta(x),y) = -y log(h_\\theta(x)) - (1-y) log(1- h_\\theta(x)) $\n\n##### So, now we can write the overall structure of the equation:\n\n### $ J(\\theta) = \\frac{-1}{m} \\sum_{i=1}^{m} cost(h_\\theta(x),y) $\n\n\n\n```python\ndef costFunction(theta,x,y):\n m= y.size\n h= sigmoid(x.dot(theta))\n cost_= (-1/m)*(y.dot(np.log(h)) + (1-y).dot(np.log(1-h)))\n \n \n return cost_\n```\n\n### 3-Gradient Decsent / Minimization Function\n\n\n```python\ndef gradientDescent(theta, x, y):\n m=y.size\n z = np.dot(x,theta)\n h= sigmoid(z)\n error = h.T - y\n grad = (1 / m) * np.dot(error, x)\n return grad\ndef optimization (theta,x,y):\n return minimize(costFunction, theta, args=(x,y), method=None, jac=gradientDescent, options={'maxiter':400})\n\n```\n\n### Now, lets start solving the exercise ... \n\nSuppose that you are the administrator of an university department and you want to determine each applicant\u2019s chance of admission based on their results on two exams. 
\n\nSo, we aim to build a classification model that estimates an applicant\u2019s probability of admission based the scores from those two exams.\n\n##### a- Uploading and Visualization \n\n\n```python\ndata = np.loadtxt('Datasets/ex2data1.txt', delimiter=',')\nX = np.array(data[:,0:2])\nX = np.insert(X,0,1,axis=1)\ny= data[:,2].T\ntheta =np.zeros([X.shape[1],1])\n```\n\n\n```python\ndef plotData(data, label_x, label_y, label_pos, label_neg, axes=None):\n # Get indexes for class 0 and class 1\n neg = data[:,2] == 0\n pos = data[:,2] == 1\n \n # If no specific axes object has been passed, get the current axes.\n if axes == None:\n axes = plt.gca()\n axes.scatter(data[pos][:,0], data[pos][:,1], marker='+', c='k', s=60, linewidth=2, label=label_pos)\n axes.scatter(data[neg][:,0], data[neg][:,1], c='y', s=60, label=label_neg)\n axes.set_xlabel(label_x)\n axes.set_ylabel(label_y)\n axes.legend(frameon= True, fancybox = True);\n\n```\n\n\n```python\nplotData(data, 'Exam 1 score', 'Exam 2 score', 'Admitted', 'Not admitted')\n```\n\n#### b- Fit the parameters theta to estimate the decision boundary line equation\nNow, lets us firstly compare between the cost of using gradient descent and the minimize function, then plot the decision boundary. \n\n\n\n```python\ngrad_theta = gradientDescent(theta,X,y)\nopt_theta= optimization(theta, X,y).x\n\nprint ('The fitted parameters', grad_theta[0,:], 'have a cost', costFunction(theta,X,y))\n\nprint ('The fitted parameters', opt_theta, 'have a cost', costFunction(opt_theta,X,y))\n\n```\n\n The fitted parameters [ -0.1 -12.00921659 -11.26284221] have a cost [0.69314718]\n The fitted parameters [-25.16133284 0.2062317 0.2014716 ] have a cost 0.20349770158944375\n\n\n /opt/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:4: RuntimeWarning: divide by zero encountered in log\n after removing the cwd from sys.path.\n\n\n\n```python\nplotData(data, 'Exam 1 score', 'Exam 2 score', 'Admitted', 'Not admitted')\nx1_min, x1_max = X[:,1].min(), X[:,1].max(),\nx2_min, x2_max = X[:,2].min(), X[:,2].max(),\nxx1, xx2 = np.meshgrid(np.linspace(x1_min, x1_max), np.linspace(x2_min, x2_max))\nh = sigmoid(np.c_[np.ones((xx1.ravel().shape[0],1)), xx1.ravel(), xx2.ravel()].dot(opt_theta))\nh = h.reshape(xx1.shape)\nplt.contour(xx1, xx2, h, [0.5])\n```\n\n#### c- Prediction:\n\nAs we know from the sigmoid function, y=1 when $h_\\theta(x)$ > 0.5, and vise versa. 
\n\nso from this rule, we can predict where the new inputs are belong, based on the new fitted parameters $\\theta$\n\n\n```python\ndef prediction(theta, x):\n threshold=0.5\n result= sigmoid(np.dot(x,theta.T)) >= threshold\n return result.astype('int')\n```\n\n#### d- Case Study\n\n\n```python\n\"\"\"\"\"\nWe'll assume that we have two students, the first student's score at the first exam= 30, and the second exam = 50\nwhile the second student's score at the first exam= 40, and the second exam = 100\n\"\"\"\"\"\ncase1= np.array([1,30,50])\ncase2= np.array([1,40,100])\n\ndef result(result):\n if result == 0:\n result='not admitted'\n else:\n result= 'admitted'\n return result\n\nprint( 'case 1 -with score1 = 30 and score2=50- is ', result(prediction(opt_theta,case1)))\nprint( 'case 2 -with score1 = 40 and score2=100- is ', result(prediction(opt_theta,case2)))\n```\n\n case 1 -with score1 = 30 and score2=50- is not admitted\n case 2 -with score1 = 40 and score2=100- is admitted\n\n\n\n```python\nplotData(data, 'Exam 1 score', 'Exam 2 score', 'Admitted', 'Not admitted')\nplt.scatter(case1[1], case1[2], s=100, c='g', marker='x', label='case1')\nplt.scatter(case2[1], case2[2], s=100, c='r', marker='v', label='case2')\nplt.legend(frameon= True, fancybox = True);\n\nx1_min, x1_max = X[:,1].min(), X[:,1].max()\nx2_min, x2_max = X[:,2].min(), X[:,2].max()\nxx1, xx2 = np.meshgrid(np.linspace(x1_min, x1_max), np.linspace(x2_min, x2_max))\nh = sigmoid(np.c_[np.ones((xx1.ravel().shape[0],1)), xx1.ravel(), xx2.ravel()].dot(opt_theta))\nh = h.reshape(xx1.shape)\nplt.contour(xx1, xx2, h, [0.5])\n```\n\n---------------\n## Part Two\n---------------\n\n### Overfitting problem\n\nIf we have too many features, the learned hypothesis may fit the training set very well, with almost zero cost. but fail to generalize to new examples. \notherwise, if we have a lot of features, and very little training data, overfitting can occurs. \n\n- ###### Adressing overfitting: \n\n1- Reduce number of features\n\n2- Regularization \n\n\n### Regularization Method\n\nThe prupose of regularization, is to reduce the affect -or the wieght- of theta parameters, by adding thetas with reverse signs, and multiplying them by a specific factor.\n\n\n### $ J(\\theta) = \\frac{-1}{m}[\\sum_{i=1}^{m}-y log(h_\\theta(x)) - (1-y) log(1- h_\\theta(x))] + \\frac{\\gamma }{2m} [\\sum_{i=1}^{n} \\theta_j^2]$\n\n\n\n```python\ndef Reg_costFunction(theta,x,y, regPar):\n m= y.size\n h= sigmoid(x.dot(theta))\n cost_= (-1/m)*(y.dot(np.log(h)) + (1-y).dot(np.log(1-h)))+(regPar/(2*m))*np.sum(np.square(theta[1:]))\n \n return cost_\n```\n\n#### And we should apply the same rule on gradient descent and/or minimization functions\n\n\n```python\ndef Reg_gradientDescent(theta, x, y, regPar ):\n m=y.size\n z = np.dot(x,theta)\n h= sigmoid(z)\n error = h.T - y\n grad = (1 / m) * np.dot(error, x)\n reg_term= regPar/(m)*np.sum(np.square(theta[1:]))\n reg_grad= grad + reg_term\n\n return reg_grad\n\ndef optimization (theta,x,y, regPar):\n result = optimize.minimize(Reg_costFunction, theta, args=(x, y, regPar), method='BFGS', options={\"maxiter\":500, \"disp\":False} )\n return result\n \n```\n\n### *The Exercise:*\nwe will implement regularized logistic regression; to predict whether microchips from a fabrication plant passes quality assurance (QA).\n\nSuppose you are the product manager of the factory and you have the\ntest results for some microchips on two different tests. 
From these two tests,\nyou would like to determine whether the microchips should be accepted or\nrejected. To help you make the decision, you have a dataset of test results\non past microchips, from which you can build a logistic regression model.\n\n\n```python\ndata2 = np.loadtxt('Datasets/ex2data2.txt', delimiter=',')\nX2 = np.array(data2[:,0:2])\nones= np.ones([data2.shape[0]])\nX2 = np.insert(X2,0,1,axis=1)\ny2 = data2[:,2].T\ntheta2 = np.zeros([X2.shape[1],1])\n```\n\n\n```python\nplotData(data2, 'Exam 1 score', 'Exam 2 score', 'Admitted', 'Not admitted')\n```\n\n#### Because the distribution of the data implies that it needs a polynomial(non-linear) decision boundary, such as:\n### $ g(\\theta_0 + \\theta_1 + \\theta_1^2+\\theta_1 \\theta_2+ \\theta_2^3 ...)$\n#### we need to apply the following function: \n\n\n\n```python\ndef map_feature( feature1, feature2 ):\n\n degrees = 6\n out = np.ones( (feature1.shape[0], 1) )\n\n for i in range(1,degrees+1):\n for j in range(0,i+1):\n term1 = feature1 ** (i-j)\n term2 = feature2 ** (j)\n term = (term1 * term2).reshape( term1.shape[0], 1 ) \n out = np.hstack(( out, term ))\n return out\n```\n\n\n```python\ntheta2 = np.zeros(XX.shape[1])\ncost= Reg_costFunction(theta2,XX, y2, 1)\nprint ('The cost value =', cost)\n```\n\n The cost value = 0.6931471805599453\n\n\n## Non-linear Decision Boundary\n\n\n```python\nXX= map_feature(X2[:,1], X2[:,2])\ntheta2 = np.zeros([XX.shape[1],1])\n\nnum=[0,1,50,100]\nthetas= np.zeros([theta2.shape[0],4])\n\nfor i in range(len(num)):\n op_theta= optimization(theta2.T, XX, y2 ,num[i]).x\n thetas[:,i]= op_theta\n \ndef Reg_Boundary(theta):\n xvals = np.linspace(-1,1.5,50)\n yvals = np.linspace(-1,1.5,50)\n zvals = np.zeros((len(xvals),len(yvals)))\n \n for i in range(len(xvals)):\n for j in range(len(yvals)):\n myfeaturesij = map_feature(np.array([xvals[i]]),np.array([yvals[j]]))\n zvals[i][j] = np.dot(theta,myfeaturesij.T)\n zvals = zvals.transpose()\n \n u, v = np.meshgrid( xvals, yvals )\n plt.contour( xvals, yvals, zvals, [0])\n```\n\n\n```python\nplt.figure(figsize=(12,10))\nplt.subplot(221)\nplotData(data2, 'Microchip Test 1', 'Microchip Test 2', 'y = 1', 'y = 0')\nplt.title('Lambda = %d'%num[0])\nReg_Boundary(thetas[:,0])\n\nplt.figure(figsize=(12,10))\nplt.subplot(222)\nplotData(data2, 'Microchip Test 1', 'Microchip Test 2', 'y = 1', 'y = 0')\nplt.title('Lambda = %d'%num[1])\nReg_Boundary(thetas[:,1])\n\nplt.figure(figsize=(12,10))\nplt.subplot(223)\nplotData(data2, 'Microchip Test 1', 'Microchip Test 2', 'y = 1', 'y = 0')\nplt.title('Lambda = %d'%num[2])\nReg_Boundary(thetas[:,2])\n\nplt.figure(figsize=(12,10))\nplt.subplot(224)\nplotData(data2, 'Microchip Test 1', 'Microchip Test 2', 'y = 1', 'y = 0')\nplt.title('Lambda = %d'%num[3])\nReg_Boundary(thetas[:,3])\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "096fc2b3ba0b03281a33ce7b240c6ce17f93828a", "size": 221117, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Assignment2: AcceptancePredection.ipynb", "max_stars_repo_name": "zuhaalfaraj/Machine-Learning-Andrew-Ng-assignments-using-python", "max_stars_repo_head_hexsha": "b767bfb8a2ce6dce14c9a758c3d346b0d26e89f2", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": 
"2021-02-27T21:58:37.000Z", "max_stars_repo_stars_event_max_datetime": "2021-02-27T21:58:37.000Z", "max_issues_repo_path": "Assignment2: AcceptancePredection.ipynb", "max_issues_repo_name": "zuhaalfaraj/Machine-Learning-Andrew-Ng-assignments-using-python", "max_issues_repo_head_hexsha": "b767bfb8a2ce6dce14c9a758c3d346b0d26e89f2", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Assignment2: AcceptancePredection.ipynb", "max_forks_repo_name": "zuhaalfaraj/Machine-Learning-Andrew-Ng-assignments-using-python", "max_forks_repo_head_hexsha": "b767bfb8a2ce6dce14c9a758c3d346b0d26e89f2", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 261.0590318772, "max_line_length": 29528, "alphanum_fraction": 0.9196669636, "converted": true, "num_tokens": 3707, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9399133498259924, "lm_q2_score": 0.8757869981319863, "lm_q1q2_score": 0.8231638911482855}} {"text": "# XGBoost & LightGBM\n\n## XGBoost\n\nmathematically, we can write our ensemble tree model in the form\n\n$$\\hat{y}_{i} = \\sum_{k=1}^{K}f_{k}(x_{i}), f_{k}\\in\\mathcal{F}$$\n\nwhere $K$ is the number of trees, $f$ is a function in the function space $\\mathcal{F}$, and $\\mathcal{F}$ is the set of all possible CARTs.\n\nthe objective function to be optimized is given by\n\n$$\\mbox{obj}(\\theta) = \\sum_{i=1}^{n}l(y_{i}, \\hat{y}_{i}) + \\sum_{k=1}^{K}\\Omega(f_{k})$$\n\nwhere $l$ is the loss term, $\\Omega$ is the regularization term.\n\n### Additive Learning\n\nwhat are the parameters of trees? you can find that what we need to learn are those functions $f_{i}$, each containing the structure of the tree and the leaf scores.\n\nlearning tree strcture is much harder than traditional optimization problem where you can simply take the gradient, instead, we use an additive strategy: fix what we learned, and add one new tree at a time. 
we write the prediction value at step $t$ as $\\hat{y}_{i}^{(t)}$, then\n\n$$\n\\begin{equation}\n\\begin{split}\n\\mbox{obj}^{(t)} =& \\sum_{i=1}^{n}l(y_{i}, \\hat{y}_{i}^{(t)}) + \\sum_{k=1}^{t}\\Omega(f_{k}) \\\\\n=& \\sum_{i=1}^{n}l(y_{i}, \\hat{y}_{i}^{(t - 1)} + f_{t}(x_{i})) + \\Omega(f_{t}) + \\mbox{constant}\n\\end{split}\n\\end{equation}\n$$\n\n$$\\hat{y}_{i}^{(0)} = 0$$\n\nin general case($l$ arbitrary), we take the taylor expansion of the loss function up to the second order\n\n$$\\mbox{obj}^{(t)} \\approx \\sum_{i=1}^{n}[l(y_{i}, \\hat{y}_{i}^{(t - 1)}) + g_{i}f_{t}(x_{i}) + \\frac{1}{2}h_{i}f_{t}(x_{i})^{2}] + \\Omega(f_{t}) + \\mbox{constant}$$\n\nwhere $g_{i}$ and $h_{i}$ are defined as\n\n$$g_{i} = \\frac{\\partial l(y_{i}, \\hat{y}_{i}^{(t - 1)})}{\\partial \\hat{y}_{i}^{(t-1)}}, h_{i} = \\frac{\\partial^{2} l(y_{i}, \\hat{y}_{i}^{(t - 1)})}{{\\partial \\hat{y}_{i}^{(t-1)}}^2}$$\n\nafter removing all the constants, the specific objective at step $t$ becomes\n\n$$\\sum_{i=1}^{n}[g_{i}f_{t}(x_{i}) + \\frac{1}{2}h_{i}f_{t}(x_{i})^{2}] + \\Omega(f_{t})$$\n\nthis becomes our optimization goal for the new tree.\n\n### Model Complexity\n\nwe need to define the complexity of the tree $\\Omega(f)$, in order to do so, let us first refine the definition of the tree $f(x)$ as\n\n$$f(x) = w_{q(x)}, w \\in \\mathbb{R}^{T}, q: \\mathbb{R}^{d} \\to \\{1,2,...,T\\}$$\n\nwhere $w$ is the vector of scores on leaves, $q$ is a function assigning each data point to the corresponding leaf, and $T$ is the number of leaves. in XGBoost, we define the complexity as\n\n$$\\Omega(f) = \\gamma{T} + \\frac{1}{2}\\lambda\\sum_{j=1}^{T}w_{j}^{2}$$\n\nthis works well in practice.\n\n### The Structure Score\n\nnow we can write the objective value with the $t$-th tree as:\n\n$$\n\\begin{equation}\n\\begin{split}\n\\mbox{obj}^{(t)} = & \\sum_{i=1}^{n}[g_{i}f_{t}(x_{i}) + \\frac{1}{2}h_{i}f_{t}(x_{i})^{2}] + \\gamma{T} + \\frac{1}{2}\\lambda\\sum_{j=1}^{T}w_{j}^{2} \\\\\n=& \\sum_{j=1}^{T}[(\\sum_{i\\in{I_{j}}}g_{i})w_{j} + \\frac{1}{2}(\\sum_{i\\in{I_j}}h_{i} + \\lambda)w_{j}^2] + \\gamma{T}\n\\end{split}\n\\end{equation}\n$$\n\nwhere $I_{j} = \\{i|q_{i}=j\\}$ is the set of indices of data-points assign to the $j$-th leaf.\n\nwe could further compress the expression by defining $G_{j} = \\sum_{i\\in{I_{j}}}g_{i}, H_{j} = \\sum_{i\\in{I_j}}h_{i}$:\n\n$$\\mbox{obj}^{(t)} = \\sum_{j=1}^{T}[G_{j}w_{j} + \\frac{1}{2}(H_{j} + \\lambda)w_{j}^2] + \\gamma{T}$$\n\nin this equation, $w_{j}$ are independent with respect to each other, the form $G_{j}w_{j} + \\frac{1}{2}(H_{j} + \\lambda)w_{j}^2$ is quadratic and the best $w_{j}$ for a given $q(x)$ and the best objective reduction we can get is:\n\n$$w_{j}^{\\ast} = -\\frac{G_{j}}{H_{j} + \\lambda}$$\n\n$$\\mbox{obj}^{\\ast} = -\\frac{1}{2}\\sum_{i=1}^{T}\\frac{G_{j}^2}{H_{j} + \\lambda} + \\gamma{T}$$\n\nthe last equation measures how good a tree structure $q(x)$ is.\n\n### Learn the Tree Structure\n\nnow we have a way to measure how good a tree is, ideally we would enumerate all possible trees and pick the best one, but not practical.\n\ninstead we will try to optimize **one level** of the tree at a time, specifically we try to split a leaf into two leaves, and the score gain is:\n\n$$Gain = \\frac{1}{2}\\left [\\frac{G_L^2}{H_L + \\lambda} + \\frac{G_R^2}{H_R + \\lambda} - \\frac{(G_L + G_R)^2}{H_L + H_R + \\lambda}\\right ] - \\gamma$$\n\nif the first part of $Gain$ is smaller than $\\gamma$, we would do better not add that branch, this is exactly pruning!\n\nfor 
real valued data, we places all instances in sorted order(by the split feature), then a left to right scan is sufficient to calculate the structure score of all possible split solutions, and we can find the best split efficiently. \n\nin practice, since it is intractable to enumerate all possible tree structures, we add one split at a time, this approach works well at most of the time.\n\n### XGBoost Practice\n\n\n```python\n\"\"\"quadratic dataset\"\"\"\nimport numpy as np\nfrom sklearn.model_selection import train_test_split\n\nnp.random.seed(42)\nX = np.random.rand(100, 1) - 0.5\ny = 3*X[:, 0]**2 + 0.05 * np.random.randn(100)\nX_train, X_val, y_train, y_val = train_test_split(X, y, random_state=42)\n```\n\n\n```python\n\"\"\"basic xgboost\"\"\"\nimport xgboost\nfrom sklearn.metrics import mean_squared_error\n\nxgb_reg = xgboost.XGBRegressor()\nxgb_reg.fit(X_train, y_train)\ny_pred = xgb_reg.predict(X_val)\nmean_squared_error(y_pred, y_val)\n```\n\n\n\n\n 0.0030701301701716146\n\n\n\n\n```python\n\"\"\"xgboost automatically taking care of early stopping\"\"\"\nxgb_reg.fit(X_train, y_train, eval_set=[(X_val, y_val)], early_stopping_rounds=2)\ny_pred = xgb_reg.predict(X_val)\nmean_squared_error(y_pred, y_val)\n```\n\n [0]\tvalidation_0-rmse:0.19678\n [1]\tvalidation_0-rmse:0.14325\n [2]\tvalidation_0-rmse:0.10835\n [3]\tvalidation_0-rmse:0.08482\n [4]\tvalidation_0-rmse:0.07044\n [5]\tvalidation_0-rmse:0.06255\n [6]\tvalidation_0-rmse:0.05927\n [7]\tvalidation_0-rmse:0.05698\n [8]\tvalidation_0-rmse:0.05519\n [9]\tvalidation_0-rmse:0.05513\n [10]\tvalidation_0-rmse:0.05473\n [11]\tvalidation_0-rmse:0.05463\n [12]\tvalidation_0-rmse:0.05427\n [13]\tvalidation_0-rmse:0.05376\n [14]\tvalidation_0-rmse:0.05377\n [15]\tvalidation_0-rmse:0.05363\n [16]\tvalidation_0-rmse:0.05358\n [17]\tvalidation_0-rmse:0.05387\n\n\n\n\n\n 0.0028706534131390338\n\n\n\n## LightGBM\n\nXGBoost uses level-wise tree growth:\n\n\n\nwhile LightGBM uses leaf-wise tree growth:\n\n\n\nwe can formalize leaf-wise tree growth as:\n\n$$(p_m, f_{m}, v_{m}) = \\underset{(p,f,v)}{argmin}\\ L(T_{m-1}(X).split(p, f, v), Y)$$\n\n$$T_{m}(X) = T_{m-1}(X).split(p_m, f_{m}, v_{m})$$\n\nfinding the best split is costly, in XGBoost, we enumerate all features and all thresholds to find the best split.\n\nLightGBM optimize that by using the histogram algorithm:\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "820fb7a95a88d21e86a0f5a3dde2518c8c2b25ce", "size": 11543, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "_build/jupyter_execute/08_xgboost & ligthgbm.ipynb", "max_stars_repo_name": "newfacade/machine-learning-notes", "max_stars_repo_head_hexsha": "1e59fe7f9b21e16151654dee888ceccc726274d3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "_build/jupyter_execute/08_xgboost & ligthgbm.ipynb", "max_issues_repo_name": "newfacade/machine-learning-notes", "max_issues_repo_head_hexsha": "1e59fe7f9b21e16151654dee888ceccc726274d3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "_build/jupyter_execute/08_xgboost & ligthgbm.ipynb", "max_forks_repo_name": "newfacade/machine-learning-notes", "max_forks_repo_head_hexsha": "1e59fe7f9b21e16151654dee888ceccc726274d3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, 
"max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 29.0755667506, "max_line_length": 287, "alphanum_fraction": 0.5150307546, "converted": true, "num_tokens": 2403, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9399133548753619, "lm_q2_score": 0.8757869900269366, "lm_q1q2_score": 0.8231638879524131}} {"text": "```python\nfrom sklearn import metrics\nimport numpy as np\nimport sympy\nfrom sympy.matrices import Matrix\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom sklearn.datasets import make_classification\n```\n\n\n```python\ny_true = np.round(np.random.sample(10))\ny_pred = np.round(np.random.sample(10))\nequals = 0\nfor true, pred in zip(y_true, y_pred):\n if true == pred: equals +=1\nequals\n```\n\n\n\n\n 5\n\n\n\n\n```python\n# Alternative way of 0-1 sample generation\nn, p = 1, .5 # number of trials, probability of each trial\ns = np.random.binomial(n, p, size=10)\ns\n```\n\n\n\n\n array([0, 0, 0, 1, 0, 0, 0, 0, 0, 1])\n\n\n\nTry ours from the quiz\n\n\n\n```python\ny_true = np.array([1, 1, 0, 1, 1, 1])\ny_pred = np.array([1, 1, 1, 1, 0, 0])\n\n```\n\n\n```python\ndf = pd.DataFrame(data={\"y_true\": y_true, \"y_pred\": y_pred})\ndf.transpose()\n```\n\n\n\n\n
    \n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
            0  1  2  3  4  5
    y_true  1  1  0  1  1  1
    y_pred  1  1  1  1  0  0
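Before handing these values to scikit-learn, we can tally the four confusion-matrix cells directly with boolean masks. This is only a quick cross-check sketch (the lowercase names below are our own); it should agree with `metrics.confusion_matrix` in the next cells.


```python
# Hand-tally the confusion matrix cells with boolean masks (cross-check sketch)
tn = int(np.sum((y_true == 0) & (y_pred == 0)))  # true negatives
fp = int(np.sum((y_true == 0) & (y_pred == 1)))  # false positives
fn = int(np.sum((y_true == 1) & (y_pred == 0)))  # false negatives
tp = int(np.sum((y_true == 1) & (y_pred == 1)))  # true positives
tn, fp, fn, tp
```




    (0, 1, 2, 3)
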
    \n
    \n\n\n\n\n```python\nallCount = len(y_true)\noneCount = int(np.sum(y_true))\nzeroCount = allCount - oneCount\nallCount, oneCount, zeroCount\n```\n\n\n\n\n (6, 5, 1)\n\n\n\n\n```python\n\n```\n\n\n```python\ncm = metrics.confusion_matrix(y_true, y_pred)\ncm\n```\n\n\n\n\n array([[0, 1],\n [2, 3]])\n\n\n\n\n```python\ndisplay = metrics.ConfusionMatrixDisplay(cm, [\"0's or negative\", \"1's or positive\"])\ndisplay.plot()\nplt.show()\n```\n\n\n\n\n```python\nmetrics.accuracy_score(y_true, y_pred)\n```\n\n\n\n\n 0.5\n\n\n\n## By hand, in Python terminology (non-statistical)\nBasic cells of confusion matrix. \n\n\n```python\n(TN, FP), (FN, TP) = cm\nprint(\"TN =\", TN)\nprint(\"FP =\", FP)\nprint(\"FN =\", FN)\nprint(\"TP =\", TP)\n```\n\n TN = 0\n FP = 1\n FN = 2\n TP = 3\n\n\nMarginals\n\n\n```python\n\n```\n\n\n```python\ndiagonal = TP + TN\nprint(\"diagonal =\", diagonal)\nall = TP + TN + FP + FN\nprint(\"all =\", all)\naccByFormula = diagonal / all\nprint(\"accByFormula =\", accByFormula)\n```\n\n diagonal = 3\n all = 6\n accByFormula = 0.5\n\n\n\n\n\n```python\nmetrics.precision_score(y_true, y_pred)\n```\n\n\n\n\n 0.75\n\n\n\n\n\n\n```python\nmetrics.recall_score(y_true, y_pred)\n```\n\n\n\n\n 0.6\n\n\n\n\n```python\n%%script false\nplt.plot(*metrics.precision_recall_curve(y_true, y_pred))\nplt.show()\n```\n\n\n\n\n```python\nmetrics.f1_score(y_true, y_pred)\n```\n\n\n\n\n 0.6666666666666665\n\n\n\n\n```python\nprint(metrics.classification_report(y_true, y_pred))\n```\n\n precision recall f1-score support\n \n 0 0.00 0.00 0.00 1\n 1 0.75 0.60 0.67 5\n \n accuracy 0.50 6\n macro avg 0.38 0.30 0.33 6\n weighted avg 0.62 0.50 0.56 6\n \n\n\n\n```python\n\n```\n\n------------------\n\n\n```python\ny_true = np.array([1, 1, 0, 1, 1, 1])\ny_pred = np.array([1, 1, 1, 1, 0, 0])\n```\n\n\n```python\nprint(metrics.classification_report(y_true, y_pred))\n```\n\n precision recall f1-score support\n \n 0 0.00 0.00 0.00 1\n 1 0.75 0.60 0.67 5\n \n accuracy 0.50 6\n macro avg 0.38 0.30 0.33 6\n weighted avg 0.62 0.50 0.56 6\n \n\n\n--------------------------\n\n\n```python\ny_true = np.array([1, 1, 0, 1, 1, 1])\ny_pred = np.array([1, 0, 0, 1, 1, 1])\n```\n\n\n```python\nprint(metrics.classification_report(y_true, y_pred))\n```\n\n precision recall f1-score support\n \n 0 0.50 1.00 0.67 1\n 1 1.00 0.80 0.89 5\n \n accuracy 0.83 6\n macro avg 0.75 0.90 0.78 6\n weighted avg 0.92 0.83 0.85 6\n \n\n\n\n```python\n\n```\n\n#ROC\n\n\n```python\nimport matplotlib.pyplot as plt \nfrom sklearn import datasets, metrics, model_selection, svm\nX, y = datasets.make_classification(random_state=0)\nX_train, X_test, y_train, y_test = model_selection.train_test_split(X, y, random_state=0)\nclf = svm.SVC(random_state=0)\nclf.fit(X_train, y_train)\n\nmetrics.plot_roc_curve(clf, X_test, y_test) \nplt.show() \n```\n\n\n```python\nimport numpy as np\nfrom sklearn import metrics\n#y = np.array([1, 1, 2, 2])\ny = np.array([0, 0, 1, 1])\nscores = np.array([0.1, 0.4, 0.35, 0.8])\nfpr, tpr, thresholds = metrics.roc_curve(y, scores, pos_label=1)\nfpr\n\n```\n\n\n\n\n array([0. , 0. , 0.5, 0.5, 1. ])\n\n\n\n\n```python\ntpr\n```\n\n\n\n\n array([0. , 0.5, 0.5, 1. , 1. 
])\n\n\n\n\n```python\nthresholds\n```\n\n\n\n\n array([1.8 , 0.8 , 0.4 , 0.35, 0.1 ])\n\n\n\n\n```python\nplt.plot(fpr, tpr)\n```\n\n\n```python\nfig = plt.figure()\nax = fig.add_subplot(projection=\"3d\")\nax.plot(fpr, tpr, thresholds)\n```\n\n\n```python\nfrom plotly import graph_objects as go\n```\n\n\n```python\nline = go.Scatter3d(x=fpr, y=tpr, z=thresholds, mode=\"lines\")\nfig = go.Figure(data=line)\nfig.show()\n```\n\n\n\n \n\n
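The ROC curve above can be summarised by the area under it (AUC). Assuming the `y`, `scores`, `fpr` and `tpr` arrays from the earlier cells are still in scope, here is a short sketch computing it two equivalent ways with `sklearn.metrics`:


```python
# Area under the ROC curve, computed two equivalent ways
print(metrics.auc(fpr, tpr))             # trapezoidal area under the (fpr, tpr) points
print(metrics.roc_auc_score(y, scores))  # directly from labels and scores
```

    0.75
    0.75
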
    \n \n \n \n
    \n \n
    \n \n\n\n", "meta": {"hexsha": "8d7cd029873dd6ddcee911c5d1abca95f0834ed6", "size": 155397, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Chapter 2/python/classification metrics/2-class metrics.ipynb", "max_stars_repo_name": "borisgarbuzov/schulich_data_science_1", "max_stars_repo_head_hexsha": "fd05ec2bbbe35408f90ebfcf10bb4ca588e7871c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Chapter 2/python/classification metrics/2-class metrics.ipynb", "max_issues_repo_name": "borisgarbuzov/schulich_data_science_1", "max_issues_repo_head_hexsha": "fd05ec2bbbe35408f90ebfcf10bb4ca588e7871c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Chapter 2/python/classification metrics/2-class metrics.ipynb", "max_forks_repo_name": "borisgarbuzov/schulich_data_science_1", "max_forks_repo_head_hexsha": "fd05ec2bbbe35408f90ebfcf10bb4ca588e7871c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2020-10-25T05:26:50.000Z", "max_forks_repo_forks_event_max_datetime": "2021-07-07T08:25:58.000Z", "avg_line_length": 155397.0, "max_line_length": 155397, "alphanum_fraction": 0.8959503723, "converted": true, "num_tokens": 1937, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9099069962657177, "lm_q2_score": 0.9046505447409666, "lm_q1q2_score": 0.8231478598353982}} {"text": "# CHEM 1000 - Spring 2022\nProf. Geoffrey Hutchison, University of Pittsburgh\n\n## Graded Homework 6\n\nFor this homework, we'll focus on:\n- integrals in 2D polar and 3D spherical space\n- probability (including integrating continuous distributions)\n---\n\nAs a reminder, you do not need to use Python to solve the problems. If you want, you can use other methods, just put your answers in the appropriate places.\n\nTo turn in, either download as Notebook (.ipynb) or Print to PDF and upload to Gradescope.\n\nMake sure you fill in any place that says YOUR CODE HERE or \"YOUR ANSWER HERE\", as well as your name and collaborators (i.e., anyone you discussed this with) below:\n\n\n```python\nNAME = \"\"\nCOLLABORATORS = \"\"\n```\n\n### Cartesian to Spherical Integrals\n\nConsider the Cartesian integral:\n$$\n\\iiint z^{2} d x d y d z\n$$\n\nEvaluation is fairly simple over a rectangular region, but if we wish to evaluate across a spherical volume, the limits of integration become messy.\n\nInstead, we can transform $z^2$, resulting in the integral:\n\n$$\n\\int_{0}^{a} \\int_{0}^{\\pi} \\int_{0}^{2 \\pi}(r \\cos \\theta)^{2} r^{2} \\sin \\theta d \\varphi d \\theta d r\n$$\n\nIntegrate across a sphere of size \"a\"\n\n\n```python\nfrom sympy import init_session\ninit_session()\n```\n\n\n```python\na, r, theta, phi = symbols(\"N r theta phi\")\n# technically, we're integrating psi**2 but let's not worry about that now\n# z => (r*cos(theta)) in spherical coordinates\nf = (r*cos(theta)**2)\n\nintegrate(# something)\n```\n\n### Normalizing\n\nWe often wish to normalize functions. 
Calculate a normalization constant $N^2$ for the following integral (e.g., evaluate it, set it equal to one, etc.)\n\n$$\n\\int_{0}^{\\infty} \\int_{0}^{\\pi} \\int_{0}^{2 \\pi} N^2 \\mathrm{e}^{-2 r} \\cos ^{2} \\theta r^{2} \\sin \\theta d \\varphi d \\theta d r\n$$\n\n(This is related to a $2p_z$ hydrogen atomic orbital.)\n\n\n```python\n# if you have an error, make sure you run the cell above this\nN, r, theta, phi = symbols(\"N r theta phi\")\n# technically, we're integrating psi**2 but let's not worry about that now\nf = N**2 * exp(-2*r) * cos(theta)**2\n\nintegrate(# something)\n```\n\n### Gaussian Distribution and Probability\n\nYou may have heard much about the Gaussian \"normal\" distribution (\"Bell curve\").\n\nIf we flip a coin enough, the binomial distribution becomes essentially continuous. Moreover, the \"law of large numbers\" (i.e., the [central limit theorem](https://en.wikipedia.org/wiki/Central_limit_theorem)) indicates that by taking the enough data points, the sum - and average will tend towards a Gaussian distribution.\n\nIn short, even if our underlying data comes from some different distribution, the average will become normal (e.g. consider the average velocities in an ideal gas - a mole is a big number).\n\nWe will spend some time on the Gaussian distribution.\n- mean $x_0$\n- standard deviation $\\sigma$, variance $\\sigma^2$\n\nA normalized Gaussian distribution is then:\n\n$$\np(x) =\\frac{1}{\\sqrt{2 \\pi \\sigma^{2}}} \\exp \\left[-\\frac{\\left(x-x_{0}\\right)^{2}}{2 \\sigma^{2}}\\right]\n$$\n\nIf the mean $x_0 = 0$, what fraction of data will fall:\n- within $\\pm 0.5\\sigma$\n- within one standard deviation\n- within $\\pm 1.5 \\sigma$\n- within $\\pm 2.0 \\sigma$\n\n(In other words, change the limits of integration on the probability function.)\n\n\n```python\nsigma = symbols('sigma')\n\np = (1/sqrt(2*pi*sigma**2))*exp((-x**2)/(2*sigma**2))\nsimplify(integrate(p, (x, -0.5*sigma, +0.5*sigma)))\n```\n\n### If we want to include \"error bars\" with 95% confidence, what intervals do we use?\n\nYOUR ANSWER HERE\n\nWhat about 99% confidence intervals?\n", "meta": {"hexsha": "1ce0df6b99e963f4fc402df420d611e96b0bf2a1", "size": 8669, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "homework/ps6/ps6.ipynb", "max_stars_repo_name": "ghutchis/chem1000", "max_stars_repo_head_hexsha": "07a7eac20cc04ee9a1bdb98339fbd5653a02a38d", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 12, "max_stars_repo_stars_event_min_datetime": "2020-06-23T18:44:37.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-14T10:13:05.000Z", "max_issues_repo_path": "homework/ps6/ps6.ipynb", "max_issues_repo_name": "ghutchis/chem1000", "max_issues_repo_head_hexsha": "07a7eac20cc04ee9a1bdb98339fbd5653a02a38d", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "homework/ps6/ps6.ipynb", "max_forks_repo_name": "ghutchis/chem1000", "max_forks_repo_head_hexsha": "07a7eac20cc04ee9a1bdb98339fbd5653a02a38d", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2021-07-29T10:45:23.000Z", "max_forks_repo_forks_event_max_datetime": "2021-10-16T09:51:00.000Z", "avg_line_length": 27.5206349206, "max_line_length": 334, "alphanum_fraction": 0.5602722344, "converted": true, "num_tokens": 977, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9046505376715775, "lm_q2_score": 0.9099069980980297, "lm_q1q2_score": 0.8231478550605136}} {"text": "# Legendre Functions\n\nSeries expansions are used in a variety of circumstances:\n- When we need a tractable approximation to some ugly equation\n- To transform between equivalent ways of looking at a problem (e.g. time domain vs frequency domain)\n- When they are (part of) a solution to a particular class of differential equation\n\nFor approximations, there is an important divide between getting the best fit *near a point* (e.g. Taylor series) and getting the best fit *over an interval*. This notebook deals with one example of the latter; there is a separate notebook for Taylor expansions and others for Fourier, Bessel, etc.\n\n## Fitting over an interval\n\nWhat is the best (tractable) series approximating my function across some range of values? What matters is an overall best fit (e.g. least-squares deviation) across the range, and we can't tolerate wild divergences as with the Taylor series.\n\nThere are various series which are useful in different contexts, but a common property is that the terms are *orthogonal* over some interval $[a,b]$. If $f(t)$ is a real-valued function their *inner product* is defined as\n\n$$ \\langle f(m t),f(n t) \\rangle \\colon =\\int _a^b f(m t) f(n t) \\, dt $$\n\nFor orthogonal functions, this is non-zero if $m=n$ and zero if $m \\ne n$, i.e. \n\n$$\\langle f(m t),f(n t) \\rangle = a \\delta_{mn}$$\n\nwhere $\\delta$ is the Kronecker delta. If $a = 1$ the functions are said to be orthonormal.\n\n## The Legendre differential equation\n\nThis is of the form\n\n$$ (1 - x^2)y'' -2x y' + l(l+1)y = 0 $$\n\nwhere $l$ is a constant. The most useful solutions are the Legendre polynomials, where $y = P_l(x)$.\n\n## Legendre Polynomials\n\nThese are \"just\" polynomials, so maybe conceptually simpler than, for example, Bessel functions. Their special feature is that the coefficients are chosen so that they are mutually orthogonal over the range $[-1,1]$. \n\nThey are given by the formula\n\n$$ P_n(x) = \\frac{1}{2^n n!} \\frac{d^n}{dx^n} (x^2 -1)^n $$\n\nThey tend to crop up in the sort of problems which naturally use spherical coordinates and/or spherical harmonics, such as fluctuations in the CMB, \"sunquakes\" in our local star or (at the other end of the scale range) electron orbitals in the hydrogen atom.\n\n## Associated Legendre Functions\n\nthe function $P_n^m(x)$ is of degree $n$ and order $m$. It is related to the $n$th order polynomial $P_n(x)$ by\n\n$$ P_n^m(x) = (-1)^m (1-x^2)^{m/2}\\ \\frac{d^m P_n(x)}{dx^m} $$\n\nOrder zero functions are just the corresponding Legendre polynomials: $P_n^0(x) \\equiv P_n(x)$.\n\n## Software\n\nStart with a few basics, then we can get mathematical.\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nplt.rcParams.update({'font.size': 16})\n```\n\nHow to work with Legendre functions in Python? A quick Google search turns up quite a few possibilities, though it may not be immediately obvious how these relate to one another:\n- `scipy.special.legendre()`\n- `scipy.special.lpmn()` and `.lpmv()`\n- `numpy.polynomial.legendre`\n- `sympy.functions.special.polymomials.legendre()`\n- `sympy.functions.special.polymomials.assoc_legendre()`\n- `sympy.polys.orthopolys.legendre_poly()`\n- `mpmath.legendre()`\n\n### scipy.special\n\nThis one is relatively simple. 
Calling `legendre(n)` returns the nth-order polynomial as a function which can then itself be called with one or more x-values.\n\n\n```python\nimport scipy.special as sp\nP_3_sp = sp.legendre(3)\ndisplay(P_3_sp)\n```\n\n\n poly1d([ 2.5, 0. , -1.5, 0. ])\n\n\n\n```python\nx10 = np.linspace(-1, 1, 10)\ndisplay(P_3_sp(x10))\n```\n\n\n array([-1. , -0.00960219, 0.40466392, 0.40740741, 0.16323731,\n -0.16323731, -0.40740741, -0.40466392, 0.00960219, 1. ])\n\n\nFor the associated Legendre functions there are a couple of related SciPy functions which take different approaches to vector input and output. `scipy.special.lpmn(m, n, x)` will only take a single scalar $x$, but returns an $(m+1, n+1)$ array of results for all orders $0 \\dots m$ and degrees $0 \\dots n$. In contrast, `scipy.special.lpmv(m, n, x)` accepts arrays of $x$ and returns results for just the specified $m$ and $n$.\n\n\n```python\nxs = np.linspace(0, 1, 100)\nm = 1\nn = 2\nP_lm, _ = sp.lpmn(m, n, xs[5])\ndisplay(P_lm.shape)\n\nP_lmv = sp.lpmv(m, n, xs)\ndisplay(P_lmv.shape)\n```\n\n### numpy.polynomial\n\nThis is less simple and needs more exploration. Start like this, then read whatever documentation you can find.\n\n\n```python\nfrom numpy.polynomial import Legendre as P\nP_3_npl = P([3])\ndisplay(P_3_npl)\n```\n\n\n Legendre([3.], domain=[-1, 1], window=[-1, 1])\n\n\n### sympy.functions.special.polymomials\n\nThis is symbolic math, which will give you differentiation, integration, etc, as well as nice $LaTeX$ output. Not so convenient for plotting.\n\n\n```python\nfrom sympy import legendre, assoc_legendre, init_printing\ninit_printing()\nfrom sympy.abc import x\n\ndisplay(legendre(3, x))\ndisplay(assoc_legendre(3, 2, x))\n```\n\n### sympy.polys.orthopolys\n\nSort of like sympy.functions.special.polymomials, but with some different options.\n\n\n```python\nfrom sympy import legendre_poly\ndisplay(legendre_poly(3))\ndisplay(legendre_poly(3, polys=True))\n```\n\n### mpmath\n\nThis is aimed at arbitrary-precision floating point arithmetic. It doesn't seem to do symbolic math like SymPy or (more surprisingly?) handle array input like SciPy.\n\nIf you don't have the `mpmath` package installed, don't worry: this is the only cell that tries to use it.\n\n\n```python\nimport mpmath as mp\nfor x1 in np.arange(0, 1, 0.2):\n display(mp.legendre(3, x1))\n```\n\n\n mpf('0.0')\n\n\n\n mpf('-0.28000000000000003')\n\n\n\n mpf('-0.44')\n\n\n\n mpf('-0.35999999999999988')\n\n\n\n mpf('0.08000000000000014')\n\n\n### Provisional conclusions\n\nIt seems like `sympy.functions.special.polymomials` offers the simplest way to do symbolic math, and `scipy.special` the easiest way to do numerical calculations. Other packages no doubt have more sophisticated capabilities but I'm not the right person to judge.\n\nThe first few __Legendre polymomials__ look like this. 
Note that they are alternately odd/even functions.\n\n\n```python\nfrom IPython.display import Math\nfrom sympy import latex\nfrom sympy.abc import x\n\nfor i in range(6):\n l_i = latex(legendre(i, x))\n display(Math('P_{} = {}'.format(i, l_i)))\n```\n\n\n$\\displaystyle P_0 = 1$\n\n\n\n$\\displaystyle P_1 = x$\n\n\n\n$\\displaystyle P_2 = \\frac{3 x^{2}}{2} - \\frac{1}{2}$\n\n\n\n$\\displaystyle P_3 = \\frac{5 x^{3}}{2} - \\frac{3 x}{2}$\n\n\n\n$\\displaystyle P_4 = \\frac{35 x^{4}}{8} - \\frac{15 x^{2}}{4} + \\frac{3}{8}$\n\n\n\n$\\displaystyle P_5 = \\frac{63 x^{5}}{8} - \\frac{35 x^{3}}{4} + \\frac{15 x}{8}$\n\n\nThe first few __associated Legendre functions__:\n\n\n```python\nfor i in range(4):\n for j in range(i):\n l_ij = latex(assoc_legendre(i, j, x))\n display(Math('P_{}^{} = {}'.format(i, j, l_ij)))\n```\n\n\n$\\displaystyle P_1^0 = x$\n\n\n\n$\\displaystyle P_2^0 = \\frac{3 x^{2}}{2} - \\frac{1}{2}$\n\n\n\n$\\displaystyle P_2^1 = - 3 x \\sqrt{- x^{2} + 1}$\n\n\n\n$\\displaystyle P_3^0 = \\frac{5 x^{3}}{2} - \\frac{3 x}{2}$\n\n\n\n$\\displaystyle P_3^1 = - \\sqrt{- x^{2} + 1} \\left(\\frac{15 x^{2}}{2} - \\frac{3}{2}\\right)$\n\n\n\n$\\displaystyle P_3^2 = 15 x \\left(- x^{2} + 1\\right)$\n\n\n__Plotting__ the first few Legendre polymomials over the range where they are orthogonal:\n\n\n```python\nimport scipy.special as sp\n\nxlims = (-1, 1)\nx = np.linspace(xlims[0], xlims[1], 1000)\n\nplt.figure(figsize=(9, 9))\nfor v in range(0, 6):\n plt.plot(x, sp.legendre(v)(x))\n\nplt.xlim(xlims)\nplt.ylim((-1.1, 1.1))\nplt.legend(('$\\mathcal{P}_0(x)$', '$\\mathcal{P}_1(x)$', '$\\mathcal{P}_2(x)$',\n '$\\mathcal{P}_3(x)$', '$\\mathcal{P}_4(x)$', '$\\mathcal{P}_5(x)$'),\n loc = 0)\nplt.xlabel('$x$')\nplt.ylabel('$\\mathcal{P}_n(x)$')\nplt.title('Plots of the first six Legendre Polynomials') \nplt.grid(True)\n```\n\n## Spherical coordinates\n\nAn interesting use of the associated Legendre functions has $x = \\cos(\\theta)$. The resulting functions are a component in the spherical harmonics $Y_l^m(\\theta, \\phi)$, described in another Jupyter notebook in this folder.\n\nWe can make polar plots showing the magnitude of $P_l^m(\\cos \\theta)$ in the direction $\\theta$. Here $\\theta$ is the angle down from the $+z$ axis. There is no $\\phi$ dependency in $P_l^m(\\cos \\theta)$ so think of these plots as being radially symmetric around the $z$-axis (i.e. rotate them about the vertical axis).\n\nTODO - color-code the plots by the sign of $P_l^m(\\cos \\theta)$. 
This would make the nodes clearer to see.\n\n\n```python\nthetas = np.linspace(0, np.pi, 200)\ntheta_x = np.sin(thetas)\ntheta_y = np.cos(thetas)\n\nfig = plt.figure(figsize = (15,15))\n\nfor n in range(3):\n for m in range(n+1):\n P_lm = sp.lpmv(m, n, np.cos(thetas))\n\n x_coords = theta_x*np.abs(P_lm)\n y_coords = theta_y*np.abs(P_lm)\n ax = fig.add_subplot(3, 3, m+1+3*n)\n ax.plot(x_coords, y_coords, 'b-', label='$P_{}^{}$'.format(n,m))\n # reflect the plot across the z-axis\n ax.plot(-x_coords, y_coords, 'b-')\n ax.axis('equal')\n# ax.set_title('$P_{}^{}$'.format(n,m))\n ax.legend()\n```\n\n\n\n## References\n\n- Boas, \"Mathematical methods in the physical sciences\", 3rd ed, chapter 12\n- MathWorld, http://mathworld.wolfram.com/LegendrePolynomial.html and http://mathworld.wolfram.com/AssociatedLegendrePolynomial.html\n- Wikipedia, https://en.wikipedia.org/wiki/Legendre_polynomials\n- Binney & Tremaine, \"Galactic Dynamics\", 2nd ed, appendix C.5\n- Griffiths & Schroeter, \"Introduction to Quantum Mechanics\", 3rd ed, section 4.1.2\n- Mathews & Walker, \"Mathematical Methods of Physics\", 2nd ed, section 7.1\n\n\n```python\n\n```\n", "meta": {"hexsha": "b2a0483cda80048550aab502e34cbe09118f0ec5", "size": 218068, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "math/Legendre.ipynb", "max_stars_repo_name": "colinleach/astro-Jupyter", "max_stars_repo_head_hexsha": "8d7618068f0460ff0c514075ce84d2bda31870b6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-06-06T15:35:35.000Z", "max_stars_repo_stars_event_max_datetime": "2020-06-06T15:35:35.000Z", "max_issues_repo_path": "math/Legendre.ipynb", "max_issues_repo_name": "colinleach/astro-Jupyter", "max_issues_repo_head_hexsha": "8d7618068f0460ff0c514075ce84d2bda31870b6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2019-06-08T11:44:15.000Z", "max_issues_repo_issues_event_max_datetime": "2019-06-10T17:42:32.000Z", "max_forks_repo_path": "math/Legendre.ipynb", "max_forks_repo_name": "colinleach/astro-Jupyter", "max_forks_repo_head_hexsha": "8d7618068f0460ff0c514075ce84d2bda31870b6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-04-14T15:28:43.000Z", "max_forks_repo_forks_event_max_datetime": "2021-04-14T15:28:43.000Z", "avg_line_length": 282.4715025907, "max_line_length": 98044, "alphanum_fraction": 0.9235421978, "converted": true, "num_tokens": 2915, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9032941988938413, "lm_q2_score": 0.9111797154386841, "lm_q1q2_score": 0.8230633511055045}} {"text": "# Machine Learning Implementation\n\n## Imports\n\n\n```python\nimport json\n\nimport numpy as np\nimport pandas as pd\nimport plotly.offline as py\nfrom plotly import graph_objects as go\n```\n\n## Logistic regression\n\n### The maths\n\nThe logistic model aims to predict the discrete y variable a.k.a the target variable (e.g. whether something will happen) based on a collection of features. 
It does this by transforming a linear combination of the features into a curve and fitting this curve to the data.\n\nThe curve used in logistic regression is the sigmoid function\n\n$$\n\\sigma(x) = \\frac{1}{1+e^{-x}}\n$$\n\nDefine y as\n\n$$\n\\begin{align}\n\\hat{y} &= h_{\\boldsymbol{\\beta}}(\\mathbf{x})\\\\\n\\hat{y}&= \\sigma\\left(\\beta_0x_0+\\cdots+\\beta_nx_n\\right)\\quad &n\\in \\mathbb{N},x_0=1 \\\\\n\\hat{y}&=\\sigma\\left(\\sum^{n}_{i=0}\\beta_ix_i\\right) \\\\\n\\hat{y}&=\\sigma\\left(\\mathbf{\\boldsymbol{\\beta}^Tx}\\right)\\quad&\\boldsymbol{\\beta},\\mathbf{x}\\in\\mathbb{R}^{n\\times1}\\\\\n\\hat{y}&=\\sigma\\left(\\boldsymbol{\\beta}^T\\mathbf{x}\\right)\n\\end{align}\n$$\n\nnotice\n\n$$\n\\hat{y} = \\frac{1}{1+e^{-\\boldsymbol{\\beta}^T\\mathbf{x}}}\n$$\n\nso\n\n$$\n\\begin{align}\n\\hat{y} + \\hat{y}e^{-\\boldsymbol{\\beta}^T\\mathbf{x}} &= 1\\\\\n\\hat{y}e^{-\\boldsymbol{\\beta}^T\\mathbf{x}} &= 1 - \\hat{y}\\\\\n\\frac{\\hat{y}}{1 - \\hat{y}} &= e^{\\boldsymbol{\\beta}^T\\mathbf{x}}\\\\\n\\ln\\left(\\frac{\\hat{y}}{1 - \\hat{y}}\\right)&=\\boldsymbol{\\beta}^T\\mathbf{x}\n\\end{align}\n$$\n\nThis above is the logit form of logistic regression. We model the logit as a linear combination of the x variables\n\nWe define the cost function as follows for each y and corresponding x\n\n$$\n\\begin{align}\nJ(\\mathbf{x})\n&= \\begin{cases}\n-\\log\\left(h_{\\boldsymbol{\\beta}}(\\mathbf{x})\\right) &\\text{if y=1}\\\\\n-\\log\\left(1-h_{\\boldsymbol{\\beta}}(\\mathbf{x})\\right) &\\text{if y=0}\\\\\n\\end{cases}\n\\end{align}\n$$\n\n$$\n\\begin{align}\nJ(\\mathbf{x})\n&= -\\frac{1}{m}\\sum_{j=1}^my^j\\log\\left(h_{\\boldsymbol{\\beta}}(\\mathbf{x}^j)\\right)\n+(1-y^j)\\log\\left(1-h_{\\boldsymbol{\\beta}}(\\mathbf{x}^j)\\right)\\\\\n&= -\\frac{1}{m}\\sum_{j=1}^my^j\\log\\left(\\frac{1}{1+e^{-\\boldsymbol{\\beta}^T\\mathbf{x}}}\\right)\n+(1-y^j)\\log\\left(1-\\frac{1}{1+e^{-\\boldsymbol{\\beta}^T\\mathbf{x}}}\\right)\\\\\n&= -\\frac{1}{m}\\sum_{j=1}^my^j\\log\\left(\\frac{1}{1+e^{-\\sum^{n}_{i=0}\\beta_ix_i}}\\right)\n+(1-y^j)\\log\\left(1-\\frac{1}{1+e^{-\\sum^{n}_{i=0}\\beta_ix_i}}\\right)\n\\end{align}\n$$\n\nnote\n\n$$\n\\begin{align}\nh_{\\boldsymbol{\\beta}}(\\mathbf{x}^j)&=\\frac{1}{1+e^{-\\boldsymbol{\\beta}^T\\mathbf{x}^j}}\\\\\n&=\\frac{1}{1+e^{-\\sum^{n}_{i=0}\\beta_ix_i}}\n\\end{align}\n$$\n\nso\n\n$$\n\\begin{align}\n\\frac{\\partial h}{\\partial \\beta_k} &= -\\left(1+e^{-\\sum^{n}_{i=0}\\beta_ix_i}\\right)^{-2}e^{-\\sum^{n}_{i=0}\\beta_ix_i} (-x_k^j)\\\\\n&=\\frac{1}{1+e^{-\\sum^{n}_{i=0}\\beta_ix_i}}\n\\frac{-e^{-\\sum^{n}_{i=0}\\beta_ix_i} (-x_k^j)}{1+e^{-\\sum^{n}_{i=0}\\beta_ix_i}}\\\\\n&=\\frac{1}{1+e^{-\\sum^{n}_{i=0}\\beta_ix_i}}\n\\frac{(1-1-e^{-\\sum^{n}_{i=0}\\beta_ix_i})(-x_k^j)}{1+e^{-\\sum^{n}_{i=0}\\beta_ix_i}}\\\\\n&=\\frac{1}{1+e^{-\\sum^{n}_{i=0}\\beta_ix_i}}\n\\left(\n\\frac{1}{1+e^{-\\sum^{n}_{i=0}\\beta_ix_i}}-\n\\frac{1+e^{-\\sum^{n}_{i=0}\\beta_ix_i}}{1+e^{-\\sum^{n}_{i=0}\\beta_ix_i}}\n\\right)(-x_k^j)\\\\\n&=\\frac{1}{1+e^{-\\sum^{n}_{i=0}\\beta_ix_i}}\n\\left(\n\\frac{1+e^{-\\sum^{n}_{i=0}\\beta_ix_i}}{1+e^{-\\sum^{n}_{i=0}\\beta_ix_i}}-\n\\frac{1}{1+e^{-\\sum^{n}_{i=0}\\beta_ix_i}}\n\\right)(x_k^j)\\\\\n&=\\frac{1}{1+e^{-\\sum^{n}_{i=0}\\beta_ix_i}}\n\\left(\n1-\n\\frac{1}{1+e^{-\\sum^{n}_{i=0}\\beta_ix_i}}\n\\right)(x_k^j)\\\\\n&=h_{\\boldsymbol{\\beta}}(\\mathbf{x}^j)(1-h_{\\boldsymbol{\\beta}}(\\mathbf{x}^j))x_k^j\n\\end{align}\n$$\n\nWe need to differentiate the cost function i.e. 
find the gradient\n\n$$\n\\begin{align}\n\\frac{\\partial J}{\\partial\\beta_k}\\left(\\boldsymbol{\\beta}\\right) \n&=\\frac{\\partial}{\\partial\\beta_k}\\left(\n-\\frac{1}{m}\\sum_{j=1}^my^j\\log\\left(h_{\\boldsymbol{\\beta}}(\\mathbf{x}^j)\\right)\n+(1-y^j)\\log\\left(1-h_{\\boldsymbol{\\beta}}(\\mathbf{x}^j)\\right)\n\\right)\\\\\n&=-\\frac{1}{m}\\sum_{j=1}^m\\frac{y^j}{h_{\\boldsymbol{\\beta}}(\\mathbf{x}^j)}\\frac{\\partial h}{\\partial \\beta_k}\n+\\frac{-(1-y^j)}{1-h_{\\boldsymbol{\\beta}}(\\mathbf{x}^j)}\\frac{\\partial h}{\\partial \\beta_k}\\\\\n&=-\\frac{1}{m}\\sum_{j=1}^m\\frac{y^j}{h_{\\boldsymbol{\\beta}}(\\mathbf{x}^j)}\nh_{\\boldsymbol{\\beta}}(\\mathbf{x}^j)(1-h_{\\boldsymbol{\\beta}}(\\mathbf{x}^j))x_k^j\n+\\frac{-(1-y^j)}{1-h_{\\boldsymbol{\\beta}}(\\mathbf{x}^j)}\nh_{\\boldsymbol{\\beta}}(\\mathbf{x}^j)(1-h_{\\boldsymbol{\\beta}}(\\mathbf{x}^j))x_k^j\\\\\n&=-\\frac{1}{m}\\sum_{j=1}^my^j(1-h_{\\boldsymbol{\\beta}}(\\mathbf{x}^j))x_k^j\n-(1-y^j)\nh_{\\boldsymbol{\\beta}}(\\mathbf{x}^j)x_k^j\\\\\n&=\\frac{1}{m}\\sum_{j=1}^m\n\\left(h_{\\boldsymbol{\\beta}}(\\mathbf{x}^j)-y^j\\right)x_k^j\n\\end{align}\n$$\n\nhence\n\n$$\n\\nabla_{\\boldsymbol{\\beta}} J\n=\n\\begin{bmatrix}\n \\frac{\\partial J}{\\partial\\beta_1} \\\\\n \\vdots \\\\\n \\frac{\\partial J}{\\partial\\beta_n}\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n \\frac{1}{m}\\sum_{j=1}^m\n \\left(h_{\\boldsymbol{\\beta}}(\\mathbf{x}^j)-y^j\\right)x_1^j\\\\\n \\vdots \\\\\n \\frac{1}{m}\\sum_{j=1}^m\n \\left(h_{\\boldsymbol{\\beta}}(\\mathbf{x}^j)-y^j\\right)x_n^j\n\\end{bmatrix}\n$$\n\nDefine the design matrix and column representation of y. Here each row of X and y are training examples hence there are m rows\n\n$$\\mathbf{X}\\in\\mathbb{R}^{m\\times n},\n\\quad \\mathbf{y}\\in\\mathbb{R}^{m\\times 1}\n$$\n\n$$\n\\mathbf{X}=\\begin{bmatrix}\n \\dots & (\\mathbf{x}^1)^T & \\dots\\\\\n \\dots & (\\mathbf{x}^2)^T & \\dots\\\\\n \\dots & \\vdots & \\dots\\\\\n \\dots & (\\mathbf{x}^m)^T & \\dots\n\\end{bmatrix}\\quad\n\\mathbf{y}=\\begin{bmatrix}\n y_1\\\\y_2\\\\\\vdots\\\\y_m\n\\end{bmatrix}\n$$\n\n$$\n\\begin{align}\n\\nabla_{\\boldsymbol{\\beta}} J\n=\n\\begin{bmatrix}\n \\frac{1}{m}\\sum_{j=1}^m\n \\left(h_{\\boldsymbol{\\beta}}(\\mathbf{x}^j)-y^j\\right)x_1^j\\\\\n \\vdots \\\\\n \\frac{1}{m}\\sum_{j=1}^m\n \\left(h_{\\boldsymbol{\\beta}}(\\mathbf{x}^j)-y^j\\right)x_n^j\n\\end{bmatrix}\n=\n\\frac{1}{m}\n\\begin{bmatrix}\n \\sum^{n}_{i=0}h_{\\boldsymbol{\\beta}}(\\mathbf{x}^j)^jx^j_1\\\\\n \\vdots \\\\\n \\sum^{n}_{i=0}h_{\\boldsymbol{\\beta}}(\\mathbf{x}^j)x^j_n\n\\end{bmatrix}\n-\n\\frac{1}{m}\n\\begin{bmatrix}\n \\sum^{m}_{j=1}y^jx^j_1\\\\\n \\vdots \\\\\n \\sum^{m}_{j=1}y^jx^j_n\\\\\n\\end{bmatrix}\n\\end{align}\n$$\n\n$$\nh_{\\boldsymbol{\\beta}}(\\mathbf{x}^j) = \\sigma({\\mathbf{x}^j}^T\\boldsymbol{\\beta})\n$$\n\nso\n\n$$\n\\begin{align}\n\\nabla_{\\boldsymbol{\\beta}} J\n&=\\frac{1}{m}\\left(\n\\mathbf{X}^T\\sigma(\\mathbf{X}\\mathbf{\\boldsymbol{\\beta}})-\\mathbf{X}^T\\mathbf{y}\n\\right)\\\\\n&=\\frac{1}{m}\\mathbf{X}^T\\left(\n\\sigma(\\mathbf{X}\\mathbf{\\boldsymbol{\\beta}})-\\mathbf{y}\n\\right)\\\\\n&=\\frac{1}{m}\\mathbf{X}^T\\left(\n\\mathbf{\\hat{y}}-\\mathbf{y}\n\\right)\n\\end{align}\n$$\n\nwhere\n\n$$\n\\mathbf{\\hat{y}} = \\sigma(\\mathbf{X}\\mathbf{\\boldsymbol{\\beta}})\n$$\n\nWe could have derived the same thing using matrix calculus\n\n### Example sigmoid\n\nThe curve used in logistic regression is the sigmoid function\n\n$$\n\\sigma(x) = \\frac{1}{1+e^{-x}}\n$$\n\n\n```python\nsigmoid_fig = 
go.FigureWidget()\ndemo_x = np.arange(-10,10,0.1)\ndemo_y = 1 / (1 + np.exp(-demo_x))\nsigmoid_fig.add_scatter(\n x=demo_x,\n y=demo_y)\nsigmoid_fig.layout.title = 'Sigmoid Function'\nsigmoid_fig\n```\n\n\n FigureWidget({\n 'data': [{'type': 'scatter',\n 'uid': 'fc81bfbe-f9f8-419c-9923-4d127962d5e2',\n \u2026\n\n\n### Make fake data\n\n\n```python\nm = 100\nx0 = np.ones(shape=(m, 1))\nx1 = np.linspace(0, 10, m).reshape(-1, 1)\nX = np.column_stack((x0, x1))\n\n# let y = 0.5 * x + 1 + epsilon\nepsilon = np.random.normal(scale=2, size=(m, 1))\ny = x1 + epsilon\ny = (y > 5).astype(int)\n```\n\n\n```python\nfig = go.FigureWidget()\nfig = fig.add_scatter(\n x=X[:,1],\n y=y[:,0],\n mode='markers',\n name='linear data + noise')\nfig\n```\n\n\n FigureWidget({\n 'data': [{'mode': 'markers',\n 'name': 'linear data + noise',\n 'ty\u2026\n\n\n### Logistic regression class\n\n\n```python\nimport json\n\nimport numpy as np\n\n\nclass LogisticRegression():\n\n def __init__(self, learning_rate=0.05):\n \"\"\" \n Logistic regression model\n\n Parameters:\n ----------\n learning_rate: float, optional, default 0.05\n The learning rate parameter controlling the gradient descent\n step size\n \"\"\"\n self.learning_rate = learning_rate\n print('Creating logistic model instance')\n\n def __repr__(self):\n return (\n f'')\n\n def fit(self, X, y, n_iter=1000):\n \"\"\" \n Fit the logistic regression model\n\n Updates the weights with n_iter iterations of batch gradient\n descent updates\n\n Parameters:\n ----------\n X: numpy.ndarray\n Training data, shape (m samples, (n - 1) features + 1)\n Note the first column of X is expected to be ones (to allow \n for the bias to be included in beta)\n y: numpy.ndarray\n Target values - class label {0, 1}, shape (m samples, 1)\n n_iter: int, optional, default 1000\n Number of batch gradient descent steps\n \"\"\"\n m, n = X.shape\n print(f'fitting with m={m} samples with n={n-1} features\\n')\n self.beta = np.zeros(shape=(n, 1))\n self.costs = []\n self.betas = [self.beta]\n for iteration in range(n_iter):\n y_pred = self.predict_proba(X)\n cost = (-1 / m) * (\n (y.T @ np.log(y_pred)) +\n ((np.ones(shape=y.shape) - y).T @ np.log(\n np.ones(shape=y_pred.shape) - y_pred))\n )\n self.costs.append(cost[0][0])\n gradient = (1 / m) * X.T @ (y_pred - y)\n self.beta = self.beta - (\n self.learning_rate * gradient)\n self.betas.append(self.beta)\n\n def predict_proba(self, X):\n \"\"\" \n Predicted probability values for class 1\n\n Note this is calculated as the sigmoid of the linear combination\n of the feature values and the weights.\n\n Parameters:\n ----------\n X: numpy.ndarray\n Training data, shape (m samples, (n - 1) features + 1)\n Note the first column of X is expected to be ones (to allow \n for the bias to be included in beta)\n\n Returns:\n -------\n numpy.ndarray:\n Predicted probability of samples being in class 1\n \"\"\" \n y_pred = self.sigmoid(X @ self.beta)\n return y_pred\n\n def predict(self, X, descision_prob=0.5):\n \"\"\" \n Predict the class values from sample X feature values\n\n Parameters:\n ----------\n X: numpy.ndarray\n Training data, shape (m samples, (n - 1) features + 1)\n Note the first column of X is expected to be ones (to allow \n for the bias to be included in beta)\n\n Returns:\n -------\n numpy.ndarray:\n Prediceted class values, shape (m samples, 1)\n \"\"\"\n y_pred = self.sigmoid(X @ self.beta)\n return (y_pred > descision_prob) * 1\n\n def sigmoid(self, x):\n \"\"\" \n Sigmoid function\n\n f(x) = 1 / (1 + e^(-x))\n\n Parameters:\n 
----------\n x: numpy.ndarray\n\n Returns:\n -------\n numpy.ndarray:\n sigmoid of x, values in (0, 1)\n \"\"\" \n return 1 / (1 + np.exp(-x))\n\n```\n\n\n```python\nlogistic_regression = LogisticRegression()\nlogistic_regression.fit(X, y)\n```\n\n Creating logistic model instance\n fitting with m=100 samples with n=1 features\n \n\n\n\n```python\nexample_X = np.array([[1,1],[1,4],[1,7]])\nlogistic_regression.predict(example_X)\n```\n\n\n\n\n array([[0],\n [0],\n [1]])\n\n\n\n### Plot the best fit\n\n\n```python\nfig = fig.add_scatter(\n x=X[:,1], \n y=logistic_regression.predict_proba(X)[:,0],\n mode='markers',\n name='logistic best fit')\nfig\n```\n\n\n FigureWidget({\n 'data': [{'mode': 'markers',\n 'name': 'linear data + noise',\n 'ty\u2026\n\n\n### Plot the cost function\n\n\n```python\n# Haven't got round to this yet - see linear regression for an example error \n# surface decent.\n```\n\n## Logisitc regression - Titanic example\n\n### Load data\n\n\n```python\nX_train = pd.read_feather('../data/titanic/processed/X_train.feather')\nX_test = pd.read_feather('../data/titanic/processed/X_test.feather')\ny_train = pd.read_feather('../data/titanic/processed/y_train.feather')\ny_test = pd.read_feather('../data/titanic/processed/y_test.feather')\n```\n\n### Train model\n\n\n```python\ntitanic_logistic_model = LogisticRegression()\n```\n\n Creating logistic model instance\n\n\n\n```python\ntitanic_logistic_model.fit(X=X_train.values, y=y_train.values, n_iter=4000)\n```\n\n fitting with m=712 samples with n=29 features\n \n\n\n### Plot the cost\n\n\n```python\ntitanic_cost_fig = go.FigureWidget()\n\ntitanic_cost_fig.add_scatter(\n x=list(range(len(titanic_logistic_model.costs))),\n y=titanic_logistic_model.costs,\n)\n\ntitanic_cost_fig.layout.title = 'Cost Vs gradient descent iterations'\ntitanic_cost_fig.layout.xaxis.title = 'Iterations'\ntitanic_cost_fig.layout.yaxis.title = 'Cost'\ntitanic_cost_fig\n```\n\n\n FigureWidget({\n 'data': [{'type': 'scatter',\n 'uid': 'a3a5dfba-fddd-428e-87a9-4a7f4461bffc',\n \u2026\n\n\n### Error analysis\n\n\n```python\ny_pred = titanic_logistic_model.predict(X_test.values)\ntest_accuracy = (y_pred == y_test.values).sum() / len(y_pred)\n\nprint(f'Test accuracy is with my implementation {test_accuracy:.2%}')\n```\n\n Test accuracy is with my implementation 79.89%\n\n\n## End\n", "meta": {"hexsha": "9a744cf238de12ba8277625f6c33ad4e05fe912d", "size": 26691, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/logistic_regression.ipynb", "max_stars_repo_name": "simonwardjones/machine_learning", "max_stars_repo_head_hexsha": "1e92865bfe152acaf0df2df8f11a5f51833389a9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 14, "max_stars_repo_stars_event_min_datetime": "2020-05-13T13:21:14.000Z", "max_stars_repo_stars_event_max_datetime": "2021-03-29T08:11:22.000Z", "max_issues_repo_path": "notebooks/logistic_regression.ipynb", "max_issues_repo_name": "simonwardjones/machine_learning", "max_issues_repo_head_hexsha": "1e92865bfe152acaf0df2df8f11a5f51833389a9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/logistic_regression.ipynb", "max_forks_repo_name": "simonwardjones/machine_learning", "max_forks_repo_head_hexsha": "1e92865bfe152acaf0df2df8f11a5f51833389a9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2020-06-20T12:50:23.000Z", 
"max_forks_repo_forks_event_max_datetime": "2020-08-19T13:29:01.000Z", "avg_line_length": 24.7597402597, "max_line_length": 277, "alphanum_fraction": 0.467236147, "converted": true, "num_tokens": 4605, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9637799462157138, "lm_q2_score": 0.8539127455162773, "lm_q1q2_score": 0.8229839799465902}} {"text": "```python\nfrom sympy import init_session\ninit_session()\n```\n\n IPython console for SymPy 1.0 (Python 3.6.3-64-bit) (ground types: python)\n \n These commands were executed:\n >>> from __future__ import division\n >>> from sympy import *\n >>> x, y, z, t = symbols('x y z t')\n >>> k, m, n = symbols('k m n', integer=True)\n >>> f, g, h = symbols('f g h', cls=Function)\n >>> init_printing()\n \n Documentation can be found at http://docs.sympy.org/1.0/\n\n\n\n```python\nr, u, v, w, E, e, p = symbols(\"rho u v w E e p\")\ndedr, dedp = symbols(r\"\\left.\\frac{\\partial{}e}{\\partial\\rho}\\right|_p \\left.\\frac{\\partial{}e}{\\partial{}p}\\right|_\\rho\")\nBx, By, Bz = symbols(\"B_x B_y B_z\")\n```\n\n\n```python\nA = Matrix(\n [[1, 0, 0, 0, 0, 0, 0, 0],\n [u, r, 0, 0, 0, 0, 0, 0],\n [v, 0, r, 0, 0, 0, 0, 0],\n [w, 0, 0, r, 0, 0, 0, 0],\n [e + r*dedr + (u**2 + v**2 + w**2)/2, r*u, r*v, r*w, r*dedp, Bx, By, Bz],\n [0, 0, 0, 0, 0, 1, 0, 0],\n [0, 0, 0, 0, 0, 0, 1, 0],\n [0, 0, 0, 0, 0, 0, 0, 1]])\n```\n\n\n```python\nA\n```\n\n\n```python\nA.inv()\n```\n\n\n```python\nFr, Fu, Fv, Fw, FE, FBx, FBy, FBz = symbols(r\"F_{\\rho} F_{\\rho{}u} F_{\\rho{}v} F_{\\rho{}w} F_{\\rho{}E} F_{B_x} F_{B_y} F_{B_z}\")\n```\n\n\n```python\nF = Matrix([[Fr], [Fu], [Fv], [Fw], [FE], [FBx], [FBy], [FBz]])\n```\n\n\n```python\nA.inv() * F\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "ce6a676c69cb4924ace57354aee77097053c9ade", "size": 66001, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Source/mhd/notes/mhd_jacobian.ipynb", "max_stars_repo_name": "MargotF/Castro", "max_stars_repo_head_hexsha": "5cdb549af422ef44c9b1822d0fefe043b3533c57", "max_stars_repo_licenses": ["BSD-3-Clause-LBNL"], "max_stars_count": 178, "max_stars_repo_stars_event_min_datetime": "2017-05-03T18:07:03.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T22:34:53.000Z", "max_issues_repo_path": "Source/mhd/notes/mhd_jacobian.ipynb", "max_issues_repo_name": "MargotF/Castro", "max_issues_repo_head_hexsha": "5cdb549af422ef44c9b1822d0fefe043b3533c57", "max_issues_repo_licenses": ["BSD-3-Clause-LBNL"], "max_issues_count": 1334, "max_issues_repo_issues_event_min_datetime": "2017-05-04T14:23:24.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-28T00:12:06.000Z", "max_forks_repo_path": "Source/mhd/notes/mhd_jacobian.ipynb", "max_forks_repo_name": "MargotF/Castro", "max_forks_repo_head_hexsha": "5cdb549af422ef44c9b1822d0fefe043b3533c57", "max_forks_repo_licenses": ["BSD-3-Clause-LBNL"], "max_forks_count": 86, "max_forks_repo_forks_event_min_datetime": "2017-06-12T15:27:51.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-09T22:21:44.000Z", "avg_line_length": 122.4508348794, "max_line_length": 10410, "alphanum_fraction": 0.4795381888, "converted": true, "num_tokens": 606, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.951142225532629, "lm_q2_score": 0.8652240808393984, "lm_q1q2_score": 0.8229511578340086}} {"text": "# Neutron scattering and Monte Carlo methods\n\nPlease indicate your name below, since you will need to submit this notebook completed latest the day after the datalab.\n\nDon't forget to save your progress during the datalab to avoid any loss due to crashes.\n\n\n```python\nname=''\n```\n\nIn this datalab we are going to get acquainted with the basics of Monte Carlo particle transport methods, and we will learn how to sample random events and random values from various distributions. These are going to be our bricks which later we will put together into an actual Monte Carlo simulation.\n\nSince neutron reactions, espescially scattering, provide an excellent ground to familiarize ourselves with Monte Carlo particle transport methods, we will also use this lab to review some of the features of elastic neutron scattering.\n\n**Prerequisites**: Before the lab you should have reviewed the lecture on neutron scattering and the short introduction on Monte Carlo methods and pseudorandom numbers.\n\nThe new python knowledge from the lab is going to be \n- histograms with `plt.hist`\n- random number generators from `numpy.random`\n\nLet's get started and have some fun!\n\n## Experiment 1: Relation of angle and energy in elastic scattering\n\nWe have discussed the elastic potential scattering in the CM frame, and showed that for the LAB energy \n\n$$E_l'=\\frac{1}{2}E_l[(1+\\alpha)+(1-\\alpha)\\cos\\theta_C]$$\n\nwhere\n\n$$\\alpha=\\big(\\frac{A-1}{A+1}\\big)^2$$\n\nand $A=M/m$\n\nLet's investigate how the ratio of the incoming and the outgoing neutron energy depends on the scattering angle.\n\nPlot the above formula for several nuclides (eg. A=1, A=12, A=23, etc) and for angles between $0^\\circ-360^\\circ$. Do not repeat the plotting command, use a loop instead to iterate through all mass numbers. After the plot write a sentence on your conclusion!\n\n**Note #1**: Remember, `np.cos` can perform the operation on a numpy array or list.\n\n**Note #2**: Trigonometric functions in numpy take values in radians.\n\n**Note #3**: $\\pi$ can be accessed as `np.pi`.\n\n**Note #4**: If you wish to use specific colors for the curves, you can define your own list of colors, and call a color according to the indices in the plot (eg. `colors=['blue','green',...]`\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\ntheta=#np.linspace(0,360,361)*np.pi/180 #Remove the first comment!\nAs=[1,12,23] #Feel free to add more\nplt.figure()\n# Your loop and plotting comes here\nplt.xlabel('angle (deg)')\nplt.ylabel(r\"$\\frac{E_{n,after}}{E_{n,before}}$\")\nplt.show()\n```\n\nChange this cell to your conclusion!\n\n## Experiment 2: Isotropic directions\n\nWhen sampling *isotropic* directions, one is often tempted to think that the colatitude or polar angle $\\theta$ is uniformly distributed over $[0,\\pi]$ and the azimuth $\\phi$ is uniformly distrubted over $[0,2\\pi]$. However this is not the case. It is $\\cos\\theta$ which is uniformly distributed over $[-1,1]$. Further reading: http://corysimon.github.io/articles/uniformdistn-on-sphere/ (note the angles are named opposite). Remember the conversion between Cartesian coordinates and polar coordinates:\n\n$$x=r\\sin\\theta\\cos\\phi$$\n$$y=r\\sin\\theta\\sin\\phi$$\n$$z=r\\cos\\theta$$\n\nRead and run the two code cells below. The code creates 1000 unit length ($r=1$) vectors' coordinates, and visualizes them. 
The two code blocks contain the same code besides the way how `theta` is being created. The first code block samples `theta` uniformly between $[0,\\pi]$ (incorrect), and the second samples the cosine of `theta` uniformly between $[-1,1]$. Observe that in the first **incorrect** case the poles are oversampled.\n\n**Note #1**. We are using `np.random.uniform` to generate uniformly generated random numbers. The first input of this function is the lower boundary of the distribution, the second is the higher boundary, and the third is the number of random numbers to be sampled. Note that `np. random` has several built-in functions to sample random numbers from various distributions, you can review them with `?np.random`.\n\n\n```python\nfrom mpl_toolkits.mplot3d import Axes3D\n\nN=1000\nr=np.ones(N)\ntheta=np.random.uniform(0,np.pi,N) ### INCORRECT\nmu=np.cos(theta)\nphi=np.random.uniform(0,2*np.pi,N)\n\nx=r*np.sin(theta)*np.cos(phi)\ny=r*np.sin(theta)*np.sin(phi)\nz=r*np.cos(theta)\n\nplt.figure()\nplt.scatter(phi,theta)\nplt.xlabel(r'$\\phi$')\nplt.ylabel(r'$\\theta$')\nplt.title('Incorrect solution')\nplt.show()\n\nfig = plt.figure(figsize=plt.figaspect(1.0)*1.5) #Adjusts the aspect ratio and enlarges the figure (text does not enlarge)\nax = fig.gca(projection='3d')\nax.scatter(x,y,z)\nplt.title('Incorrect solution')\nax.set_xlabel('x')\nax.set_ylabel('y')\nax.set_zlabel('z')\nax.azim = 113\nax.elev = 28\nplt.show()\n```\n\n\n```python\nN=1000\nr=np.ones(N)\nmu=np.random.uniform(-1,1,N) ### CORRECT\ntheta=np.arccos(mu)\nphi=np.random.uniform(0,2*np.pi,N)\n\nx=r*np.sin(theta)*np.cos(phi)\ny=r*np.sin(theta)*np.sin(phi)\nz=r*np.cos(theta)\n\nplt.figure()\nplt.scatter(phi,theta)\nplt.xlabel(r'$\\phi$')\nplt.ylabel(r'$\\theta$')\nplt.title('Correct solution')\nplt.show()\n\nfig = plt.figure(figsize=plt.figaspect(1.0)*1.5) #Adjusts the aspect ratio and enlarges the figure (text does not enlarge)\nax = fig.gca(projection='3d')\nax.scatter(x,y,z)\nplt.title('Correct solution')\nax.set_xlabel('x')\nax.set_ylabel('y')\nax.set_zlabel('z')\nax.azim = 113\nax.elev = 28\nplt.show()\n```\n\n## Experiment 3: Distribution of outgoing energy\n\nWe just showed that isotropic scattering means that the CM cosine of the angle is uniformly distributed. So let us combine exercise 1 and 2, and investigate the distribution of the outgoing neutron energy for isotropic scattering. \n\nGenerate 1 million uniformly distributed angle cosines in CM (`muC`), and calculate the final energy distribution of 1 MeV neutrons after scattering isotropically. Then use `plt.hist` to visualize the distribution of the energy. What is your expectation? Conclude what you have found. \n\n\n```python\nA=10 #You can change this to study other target nuclides\nEi=1 #MeV\nNsample=1e6 #Number of angles to sample\n\nalpha=# Finish the line\nmuC=np.random.uniform()#Complete the line\nEf=#Final energy from muC, Ei, alpha. Note: muC is already the cosine!\n\n#Here we create a histogram with 100 bins\nNbin=100\nplt.figure()\nplt.hist(Ef,Nbin)\nplt.axvline(Ei,color='r') #adds vertical line at Ei\nplt.axvline(alpha*Ei,color='r')\nplt.show()\n\n```\n\nChange this cell to your conclusion!\n\n## Experiment 4: Scattering angle in LAB\n\nWe looked into how the energy change depends on the CM scattering angle. We saw what isotropy in CM means, and we also saw how the outgoing energy is distributed for isotropic CM angles. There is one last thing, which can sometimes be confusing: we intuitively prefer the LAB system! 
So how does the cosine of the scattering angle look in the LAB? That's what we will try to find out now!\n\nSample 1 million angle cosines in the CM (`muC`), and then convert the angle to the LAB (`thetaL`). Use the formula below, and finally calculate the cosine of the LAB angle (`muL`). The formula to convert from CM to LAB:\n\n$$\\theta_L=\\tan^{-1}\\Big(\\frac{\\sin \\theta_C}{\\frac{1}{A}+\\mu_C}\\Big)$$\n\nRead and execute the code block below to evaluate this for several mass numbers and to calculate the mean (with `np.mean`) of the LAB cosines. Compare the empirical mean with the value from the lecture ($\\bar\\mu_L=\\frac{2}{3A}$)\n\nWhat is your conclusion: is the scattering isotropic in the LAB? Write down your conclusion!\n\n\n```python\nAs=[1,12,50,238]\n\nmuC=np.random.uniform(-1,1,1000000)\nthetaC=np.arccos(muC)\n\nfor A in As:\n \n thetaL=np.arctan2(np.sin(thetaC),((1/A)+muC))\n \n muL=np.cos(thetaL)\n \n plt.figure()\n plt.hist(muL,100)\n plt.xlabel(r'$\\mu_L$')\n plt.ylabel('number of occurrences')\n plt.title(str(A))\n plt.show()\n print(str(A),str(2/(3*A)),str(np.mean(muL)))\n```\n\nChange this cell to your conclusion!\n\n## Experiment 5: Neutron slowing down in elastic scattering\n\nLet's slow down a neutron! From the previous exercises we could conclude that in our \"billiard ball\" scattering the energy and the angle is in a direct relationship. Therefore, we can sample one and figure out the other. But for the moment let's just neglect the angle completely and care only about the energy loss (for which the distribution we have played with in Experiment 3). We will try to figure out how many scattering events it takes to slow down a neutron from 2 MeV to 1eV (we consider that there are no temperature effects). \n\nInvestigate some target nuclide. Which nuclides are effective in slowing neutrons? Note down some values. \n\n\n```python\ndef neutronLetargizer(Ei,Ef,A):\n \"\"\"Function calculate how many scattering events are needed to slow a neutron\n from an initial energy (Ei) to a final energy (Ef)\n Parameters\n ----------\n Ei : float\n Initial energy of the neutron\n Ef : float\n Final energy to be reached\n A : float\n Mass number of the scatterer\n \n Returns\n -------\n N : int\n Number of scattering events\n \"\"\"\n alpha=#finish the line\n N=0\n E=Ei\n while E>=Ef:\n E=#sample a random outgoing energy based on alpha and E.\n N=N+1\n return N \n```\n\nLet's run this function for various nuclides, both light and heavy!\n\n\n```python\nA=1\nEi=2.0 #MeV\nEf=1e-6 #MeV\nprint(neutronLetargizer(Ei,Ef,A))\n```\n\nThat's a pretty cool function we have now. We can use it to look at the statistics of the number of scattering needed! Let's run this function for 10k neutrons and calculate the mean and visualize the distribution of the number of scattering events needed. (Notice that running this many neutrons for heavy nuclide might take some time)\n\n\n```python\nA=12\nE0=2.0 #MeV\nEf=1e-6 #MeV\n\n#######\nNs=[]\n\nfor _ in range(10000): # _ means here that we just don't care about the iteration index\n Ns.append(neutronLetargizer(E0,Ef,A))\n\nprint('Mean \\t Standard deviation')\nprint()#complete the line by calculating the mean of the list of number of scattering events with np.mean and\n #the standard deviation with np.std\n\nplt.figure()\n# use plt.hist to investigate the distribution of the number of scattering events.\nplt.show()\n```\n\nChange this cell to your conclusion! And fill out the table\n\n|A | $\\bar N$ | std |\n|---|----------|-----|\n|1 | ? | ? 
|\n|12 | ? | ? |\n|238| ? | ? |\n\n## Experiment 6: Sampling from distributions\n\nIn the video recording and in the Appendix of the lecture notes we have reviewed how to sample numbers from distributions, in the following we are going to implement these methods for neutron transport related tasks.\n\n### Discrete distribution: which event happens?\n\nThe probability of reaction $i$ happening at energy $E$ is \n\n\\begin{equation}\n\\frac{\\Sigma_i(E)}{\\Sigma_t(E)}\n\\end{equation}\n\nLet us consider that in our material only two reactions might happen: scattering or capture, thus a simple condition can be used to decide which happens.\n\nComplete the `reactionType` function to return a random event type. Assume that at the energy the neutron is travelling with $\\Sigma_s=0.64 \\: \\text{cm}^{-1}$ and $\\Sigma_c=0.39 \\: \\text{cm}^{-1}$. Call the function with these values.\n\n\n```python\ndef reactionType(SigS,SigC):\n \"\"\"Function to sample a random event type\n \n Parameters\n ----------\n SigS : float\n Macroscopic scattering cross section\n SigC : float\n Macroscopic capture cross section\n \"\"\"\n SigT=#complete the line\n x=#sample random number between 0 and 1\n if x < SigS/SigT:\n return 'scatter'\n # else return 'capture'\n #\n\nss=0.64\nsc=0.39\nprint()#complete the line with the function call\n```\n\nNumpy actually has a built in function `np.random.choice()`, which does the same for us. As an input it takes a list of choices to sample from, and optionally one can also pass a list of probabilities.\n\n\n```python\nnp.random.choice(['scatter','capture'],p=[ss/(ss+sc),sc/(ss+sc)])\n```\n\n### Continous distribution I: path to next collision\n\nLet's consider that we have some probability density function $p(x)$, and the related cumulative distribution function is $F(x)=\\int_{-\\infty}^xp(t)dt$. This function is going to take values between 0 and 1. So if we can sample random numbers uniformly between 0 and 1, we could just convert them by evaluating the inverse of the cumulative distribution function to obtain a random value $x$ sampled from the distribution:\n\n$x=F^{-1}(r)$\n\nThis of course is only useful, when it is possible to easily integrate the probability density function. \n\nLet's see how can we use this to sample random distances travelled by a neutron between collision events. We learnt that \n\n$\\exp(-\\Sigma_t x)$ is the probability that a neutron moves a distance dx without any interaction.\n\nand \n\n$\\Sigma_t \\exp(-\\Sigma_t x)dx$ is the probability that the neutron has its interaction at dx.\n\nSo\n\n$p(x)=\\Sigma_t \\exp(-\\Sigma_t x)$\n\nThus\n\n$F(x)=1-\\exp(\\Sigma_tx)$\n\nIf we take the inverse, to sample a random path\n\n$x=-\\frac{\\ln(1-r)}{\\Sigma_t}$\n\nbut if r is uniform over $[0,1]$, than $1-r$ is also uniform over $[0,1]$, so this simplifies to\n\n$x=-\\frac{\\ln r}{\\Sigma_t}$\n\nComplete the `distanceToCollision` function below.\n\n**Note #1** computational speed is everything in MC calculations. 
Although in this course we don't try to avoid every unnecessary operation, this example is just to highlight that sometimes operations can be avoided with some reasoning.\n\n**Note #2** the natural logarithm can be computed with `np.log`.\n\n**Note #3** `numpy.random` has a built-in function to sample the exponential distribution, nevertheless here we will convert the uniformly distributed random numbers between $[0,1]$ to exponentially distributed random numbers.\n\n\n```python\ndef distanceToCollision(SigT,N=1):\n \"\"\"Function to sample the distance between collisions\n \n Parameters\n ----------\n SigT : float\n Total Macroscopic cross section in 1/cm\n N : int\n Number of events to be sampled (default=1)\n \n Returns\n -------\n x : float or array-like\n Random distance between collisions\n \"\"\"\n r = np.random.uniform(0,1,N)\n x = # Complete the line\n return x\n```\n\nWe can now try this function. Let's consider that 1 MeV neutrons enter a material which has a total cross section of $\\Sigma_t=0.18 \\: \\text{cm}^{-1}$ at this energy. Or well, let's consider that 10k neutrons enter the material, and let's see how the distribution of the random distances looks like, and what is the mean.\n\n\n```python\nSigT=0.18\nN=10000\nds=#call distanceToCollision() here\n\nplt.figure()\nplt.hist(ds,100)\nplt.show()\n\nprint('Empirical Mean (cm) \\t Theoretical mean (cm)')\nprint() #print the empirical mean free path. and the mean free path expected from theory\n```\n\n### Continous distribution II: Watt distribution\n\n\nWhen the probability density function is less well behaving, and we cannot obtain the cumulative distribution function easily, we can use for example the rejection method. In this case, we draw a random number (r1), convert it to be between $a$ and $b$ (the bounds of the random value), then we draw an other random number (r2) to create a $y$ value based on the maximum of the probaility density function (M). If the $(x,y)$ pair is under the curve (ie. $y\n## Task 1\nThe first task is to write a Python function called `sqrt2` that returns the square root of 2 to 100 decimal spaces without making use of any imported modules. \n\nThere are several known methods [[1]](#refs). Here, we will look at two methods. Note that the methods described below only calculate the square root of whole numbers, not real numbers. \n\n### Method 1: Newton's Method\nThis method is thought to date back to the Babylonians circa 1000 BC [[2]](#refs). It determines the square root of a number with increasing accuracy through each iteration. This is achieved in the following way [[3]](#refs):\n\n1. **Make a reasonable guess for the square root**\n\n A reasonable way would be to identify the nearest whole integer that, when squared, comes closest to the value under investigation. 
The following piece of `code` demonstrates:\n\n\n```python\n#Find largest integer n whose square is less than the number under investigation, in this case, 10\n\n# Value of number for which square root is sought\nnum = 20\n\n# Variable to contain answer\nnearestValue = 0\n\n# Loop through all options until exact integer found, or i*i > num \nfor i in range(num):\n if i*i < num:\n continue\n if i*i == num:\n nearestValue = i\n break\n if i*i > num:\n nearestValue = i-1\n break\n \n# Prevent instances where nearestValue = 0\nif nearestValue == 0:\n nearestValue = 1\n```\n\n\n```python\n# Print largest integer \nprint(\"The nearest value which when squared is less than the number under investigation is: \", nearestValue)\n```\n\n The nearest value which when squared is less than the number under investigation is: 4\n\n\n2. **Increase the precision of the approximate guess**\n\n Add the above guessed root value to the original number under investigation divided by the guessed root value, and divide this final figure by 2: \n \n$$x^{next} = \\frac{(x^{current} + \\frac{n}{x^{current}})}{2}$$\n \n \n\n\n```python\n# Increase the precision of the answer from the previous step\nnextNum = (nearestValue + (num/nearestValue))/2\n\nnextNum\n```\n\n\n\n\n 4.5\n\n\n\n3. **Repeat until the desired precision is achieved**\n\n The `nextNum` value from step 2 gets reused in the calculation as the next best approximate guess until the desired precision of answer is achieved\n\n\n```python\n# Use the previous answer as the next value in the equation\nnearestValue = nextNum\n\n# Repeat the equation\nnextNum = (nearestValue + (num/nearestValue))/2\n\nnextNum\n```\n\n\n\n\n 4.472222222222222\n\n\n\nThese steps can be placed into a single Python function, here called `sqrtNewton` [[3]](#refs). In this example, the user can adapt the precision as needed.\n\n\n```python\n# Define variables\nprecision = 10 ** (-10) # Specify number of decimal spaces\nnum = 2 # Specify the number of which square root is sought\n\n# Create function\ndef sqrtNewton(num):\n # Find nearest value as starting guesstimate\n nearestValue = 0\n \n for i in range(num):\n if i*i < num:\n continue\n if i*i == num:\n nearestValue = i\n break\n if i*i > num:\n nearestValue = i-1\n break\n \n # Prevent instances where nearestValue = 0\n if nearestValue == 0:\n nearestValue = 1\n \n # Loop through the equation until value with desired precision is identified\n while abs(num - (nearestValue * nearestValue)) > precision:\n nearestValue = (nearestValue + (num / nearestValue)) / 2\n \n # Return the value to the specified precision\n return nearestValue\n\n#Print result\nprint(\"The square root of\", num, \"to the precision of\", precision, \"decimal spaces is\", sqrtNewton(num))\n\n```\n\n The square root of 2 to the precision of 1e-10 decimal spaces is 1.4142135623746899\n\n\nThe **problem** here is that the display does not show beyond the 16th decimal space. The Python `decimal` module allows specifying the number of decimal spaces to display [[4]](#refs); however, the instructions for this task clearly stated that no imported modules are to be used. This is where the second method comes in.\n\n## Method 2: Long Division Algorithm\nThis method makes use of long division calculations by hand to determine each successive number in a square root answer [[5]](#refs). Because it allows determining each number in turn, it can be used to create a string of undetermined length without the limitation of decimal spaces display that is inherent in method 1. 
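\n\nBefore walking through the steps, a quick check makes the method 1 limitation concrete. The value returned by `sqrtNewton` is a Python `float`, which only carries roughly 16 significant digits, so however small `precision` is made, the digits printed beyond that point are artefacts of the binary representation rather than true decimals of $\\sqrt{2}$. The cell below is a small sketch of this, reusing the `sqrtNewton` function defined above and only built-ins.\n\n\n```python\n# Print the method 1 result to 30 decimal places; digits beyond roughly the 16th\n# significant figure no longer match the true value of the square root of 2,\n# which begins 1.41421356237309504880168872420969...\nnewton_root = sqrtNewton(2)\nprint(f'{newton_root:.30f}')\n```\n\n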
There are several steps involved:\n\n1. **Seperate the digits into pairs**\n\nThis step is not needed, given that the issue here is the number of digits *after* the decimal. For whole numbers, the computer will be able to determine the square root without needing to break down into pairs.\n\n2. **Find the largest integer**\n\nAs before, this can be done with a `for` loop.\n\n\n```python\n#Find largest integer n whose square is less than the number under investigation\n\n# Value of number for which square root is sought\nnum = 1234\n\n# Variable to contain answer\nnearestValue = 0\n\n# Loop through all options until exact integer found, or i*i > num \nfor i in range(num):\n if i*i < num:\n continue\n if i*i == num:\n nearestValue = i\n break\n if i*i > num:\n nearestValue = i-1\n break\n \n# Prevent instances where nearestValue = 0\nif nearestValue == 0:\n nearestValue = 1\n \n# Print nearest integer\nprint(\"The nearest value which when squared is less than the number under investigation is: \", nearestValue)\n```\n\n The nearest value which when squared is less than the number under investigation is: 35\n\n\nThis value now becomes the next number in the answer, and is added to the answer string\n\n\n```python\n#Create final answer string variable\nans = ''\n\n# Add number to string\nans += str(nearestValue)\nprint(ans)\n```\n\n 35\n\n\n3. **Subtract square of largest integer from current number**\n\nThe square of the value from the previous step is subtracted from the current number. The answer of this difference will now be added to the front of the next digit pair to create a new value to investigate.\n\n\n```python\n#Calculate square of nearestValue\nsqrNearestValue = nearestValue * nearestValue\n# print(sqrNearestValue)\n\n#Subtract sqrNearestValue from num\nnextNumber = num - sqrNearestValue\nprint(nextNumber)\n```\n\n 9\n\n\n4. **Move to the next pair of values in the number under investigation**\n\nPlace the next two values next to the answer calculated in the step above. Where no values exist, use zeros instead. To achieve this, one can merely multiple the answer above by 100.\n\n\n```python\n#Multiply nextNumber by 100\nnextNumber = nextNumber * 100\nprint('nextNumber: ', nextNumber)\n```\n\n nextNumber: 900\n\n\nMultiply the current answer by 2. This will be used in the next step\n\n\n```python\n#Multiply current ans by 2\ncurrentAnswer = int(ans) * 2\nprint('currentAnswer: ', currentAnswer)\n```\n\n currentAnswer: 70\n\n\n5. 
**Find the right value match**\n\nThe next step is to use the value from the previous step, and identify what single integer, when placed at the end of that value and the whole value is multiplied by the same integer, is less than or equal to the nextNumber.\n\n\n```python\n#Identify the integer that completes the following equation and is less than the nextNumber: currentAnswery * y <= nextNumber\nfor i in range(1,11): #up to 11 to include the possibility of 9 being the integer\n testNumber = int(str(currentAnswer) + str(i)) * i\n if testNumber < nextNumber:\n continue\n if testNumber == nextNumber:\n nextAnswer = i\n testNumber = int(str(currentAnswer) + str(i)) * i\n # print('integer i is: ', i)\n break\n if testNumber > nextNumber:\n nextAnswer = i - 1\n testNumber = int(str(currentAnswer) + str(i-1)) * (i-1)\n #print('integer i is: ', i-1)\n break\nprint('testNumber: ', testNumber)\nprint('nextAnswer: ',nextAnswer)\n```\n\n testNumber: 701\n nextAnswer: 1\n\n\nThis `nextAnswer` value then becomes the next value in the root answer.\n\n\n```python\n#Add nextAnswer to answer\nans += str(nextAnswer)\nprint('ans: ',ans)\n```\n\n ans: 351\n\n\n6. **Identify the next number**\n\nOnce again, subtract the total of the value identified in step 5 from the value on the left\n\n\n```python\n#Get the nextNumber\nlastNumber = nextNumber\nnextNumber = lastNumber - testNumber\nprint('lastNumber:', lastNumber)\nprint('testNumber:', testNumber)\nprint('nextNumber:', nextNumber)\n```\n\n lastNumber: 900\n testNumber: 701\n nextNumber: 199\n\n\n7. **Repeat steps 4 to 6 for the total number of decimal spaces required**\n\nRepeat the steps until you have total number of decimal spaces required in the answer. This can be achieved by placing the above into a `while` loop as below.\n\n\n```python\n#FROM HERE TO END LOOPS UNTIL 100 DECIMAL SPACES#\nwhile len(ans) < (100 + nearestValue):\n#Multiply nextNumber by 100\n nextNumber = nextNumber * 100\n #print('nextNumber: ', nextNumber)\n\n #Multiply current ans by 2\n currentAnswer = int(ans) * 2\n #print('currentAnswer: ', currentAnswer)\n\n #Identify the integer that completes the following equation and is less than the nextNumber: currentAnswery * y <= nextNumber\n for i in range(1,11): #up to 11 to include the possibility of 9 being the integer\n testNumber = int(str(currentAnswer) + str(i)) * i\n if testNumber < nextNumber:\n continue\n if testNumber == nextNumber:\n nextAnswer = i\n testNumber = int(str(currentAnswer) + str(i)) * i\n #print('integer i is: ', i)\n break\n if testNumber > nextNumber:\n nextAnswer = i - 1\n testNumber = int(str(currentAnswer) + str(i-1)) * (i-1)\n #print('integer i is: ', i-1)\n break\n #print('testNumber: ', testNumber)\n #print('nextAnswer: ',nextAnswer)\n\n #Add nextAnswer to answer\n ans += str(nextAnswer)\n #print('ans: ',ans)\n\n #Get the nextNumber\n lastNumber = nextNumber\n nextNumber = lastNumber - testNumber\n #print('nextNumber: ', nextNumber)\n #print('line break - next iteration starts after this')\n #print()\n \nprint(\"ans: \", ans)\n```\n\n ans: 351283361405005916058703116253563067645404854787765405690202683926394175654576700713118864847740795362187533615970865896869519707927067\n\n\nThe issue, however, is that the answer does not contain a decimal point. Inserting the decimal point can be done by determining the length of the string created by the `nearestValue`, and inserting a decimal point into the string after that value. For example, the square root of 16 is 4. 
The length of 4 as a string is 1, therefore the decimal will be placed after the first digit in the string. Likewise, the square root of 120 is 10.95. The `nearestValue` for calculating the square root of 120 is 10 ($10^2 = 100$). The length of 10 as a string is 2. Therefore, the decimal point will be placed after the second digit in the string. And so on.\n\n\n```python\n#Place decimal point at appropriate place\nans = ans[0:len(str(nearestValue))] + '.' + ans[len(str(nearestValue)):]\nprint(\"ans: \", ans)\n```\n\n ans: 35.1283361405005916058703116253563067645404854787765405690202683926394175654576700713118864847740795362187533615970865896869519707927067\n\n\nThe final `function` for the long division algorithm which incorporates all of the steps is below. The commented out `print` statements are for debugging purposes.\n\n\n```python\ndef sqrtLongDiv(num, decSpaces):\n largestInteger = 0\n ans = ''\n nextAnswer = 0\n\n #Find largest integer n whose square is less than num\n for i in range(num+1):\n if i*i < num:\n continue\n if i*i == num:\n largestInteger = i\n break\n if i*i > num:\n largestInteger = i-1\n break\n #print(largestInteger)\n \n # Prevent instances where largestInteger = 0\n if largestInteger == 0:\n largestInteger = 1\n\n #Add integer to answer\n ans += str(largestInteger)\n #print(ans)\n\n #Calculate square of largestInteger\n sqrLargestInteger = largestInteger*largestInteger\n #print(sqrLargestInteger)\n\n #Subtract sqrLargestInteger from num\n nextNumber = num - sqrLargestInteger\n #print(nextNumber)\n\n #FROM HERE TO END LOOPS UNTIL 100 DECIMAL SPACES#\n while len(ans) < (decSpaces + len(str(largestInteger))):\n #Multiply nextNumber by 100\n nextNumber = nextNumber * 100\n #print('nextNumber: ', nextNumber)\n\n #Multiply current ans by 2\n currentAnswer = int(ans) * 2\n #print('currentAnswer: ', currentAnswer)\n\n #Identify the integer that completes the following equation and is less than the nextNumber: currentAnswery * y <= nextNumber\n for i in range(1,11): #up to 11 to include the possibility of 9 being the integer\n testNumber = int(str(currentAnswer) + str(i)) * i\n if testNumber < nextNumber:\n continue\n if testNumber == nextNumber:\n nextAnswer = i\n testNumber = int(str(currentAnswer) + str(i)) * i\n #print('integer i is: ', i)\n break\n if testNumber > nextNumber:\n nextAnswer = i - 1\n testNumber = int(str(currentAnswer) + str(i-1)) * (i-1)\n #print('integer i is: ', i-1)\n break\n #print('testNumber: ', testNumber)\n #print('nextAnswer: ',nextAnswer)\n\n #Add nextAnswer to answer\n ans += str(nextAnswer)\n #print('ans: ',ans)\n\n #Get the nextNumber\n lastNumber = nextNumber\n nextNumber = lastNumber - testNumber\n #print('nextNumber: ', nextNumber)\n #print('line break - next iteration starts after this')\n\n #Place decimal point at appropriate place\n ans = ans[0:len(str(largestInteger))] + '.' + ans[len(str(largestInteger)):]\n\n return ans\n\n#Run function with designated number\nnum = 123\ndecSpaces = 20\nprint(\"The long division square root of\", num,\"to\", decSpaces,\"decimal spaces is\", sqrtLongDiv(num, decSpaces))\n```\n\n The long division square root of 123 to 20 decimal spaces is 11.09053650640941716205\n\n\n## Final output as required\n\nThe brief requested a single function `sqrt2` to 100 decimal spaces. 
Below is that function and output.\n\n\n```python\ndef sqrt2():\n largestInteger = 0\n ans = ''\n nextAnswer = 0\n num = 2\n\n #Find largest integer n whose square is less than num\n for i in range(num+1):\n if i*i < num:\n continue\n if i*i == num:\n largestInteger = i\n break\n if i*i > num:\n largestInteger = i-1\n break\n\n #print(largestInteger)\n\n #Add integer to answer\n ans += str(largestInteger)\n #print(ans)\n\n #Calculate square of largestInteger\n sqrLargestInteger = largestInteger*largestInteger\n #print(sqrLargestInteger)\n\n #Subtract sqrLargestInteger from num\n nextNumber = num - sqrLargestInteger\n #print(nextNumber)\n\n #FROM HERE TO END LOOPS UNTIL 100 DECIMAL SPACES#\n while len(ans) < (100 + len(str(largestInteger))):\n #Multiply nextNumber by 100\n nextNumber = nextNumber * 100\n #print('nextNumber: ', nextNumber)\n\n #Multiply current ans by 2\n currentAnswer = int(ans) * 2\n #print('currentAnswer: ', currentAnswer)\n\n #Identify the integer that completes the following equation and is less than the nextNumber: currentAnswery * y <= nextNumber\n for i in range(1,11): #up to 11 to include the possibility of 9 being the integer\n testNumber = int(str(currentAnswer) + str(i)) * i\n if testNumber < nextNumber:\n continue\n if testNumber == nextNumber:\n nextAnswer = i\n testNumber = int(str(currentAnswer) + str(i)) * i\n #print('integer i is: ', i)\n break\n if testNumber > nextNumber:\n nextAnswer = i - 1\n testNumber = int(str(currentAnswer) + str(i-1)) * (i-1)\n #print('integer i is: ', i-1)\n break\n #print('testNumber: ', testNumber)\n #print('nextAnswer: ',nextAnswer)\n\n #Add nextAnswer to answer\n ans += str(nextAnswer)\n #print('ans: ',ans)\n\n #Get the nextNumber\n lastNumber = nextNumber\n nextNumber = lastNumber - testNumber\n #print('nextNumber: ', nextNumber)\n #print('line break - next iteration starts after this')\n\n #Place decimal point at appropriate place\n ans = ans[0:len(str(largestInteger))] + '.' + ans[len(str(largestInteger)):]\n\n return ans\n\n# Print out answer\nprint(\"The square root of 2 to 100 decimal spaces is:\")\nprint(sqrt2())\n```\n\n The square root of 2 to 100 decimal spaces is:\n 1.4142135623730950488016887242096980785696718753769480731766797379907324784621070388503875343276415727\n\n\n## Compare answer\n\nThe algorithm veracity is tested against a string of the square root of 2 calculated to 1 million digits [[6]](#refs). Note how there is no multi-line wrapping of the imported string - this will give you an error, as it reads in a `\\n` character.\n\n\n```python\n#Get string from website extract\nnasaImport = '1.41421356237309504880168872420969807856967187537694807317667973799073247846210703885038753432764157273501384623091229702492483605585073721264412149709993583141322266592750559275579995050115278206057147010955997160597027453459686201472851'\n\n#Create string of 100 decimal spaces (up to 102 values to include first digit and decimal point)\nsqrt2Nasa = nasaImport[0:102]\n\n#print(len(sqrt2()))\n#print(len(sqrt2Nasa))\n\n#Compare Nasa string with algorithm\nprint(\"The function sqrt2 is equal to the first 100 decimal spaces of the square root of 2 as calculated by Nasa:\", sqrt2Nasa == sqrt2())\n\n```\n\n The function sqrt2 is equal to the first 100 decimal spaces of the square root of 2 as calculated by Nasa: True\n\n\n***\n\n\n## Task 2\nThe second task is to calculate the Chi-square test statistic of a table in Wikipedia [[7]](#refs) using `scipy.stats`, and confirm that the value is 24.6. 
In addition, the task asks to calculate the associated *p* value.\n\n### Chi-square test\nThe Chi square test is a statistical hypothesis test used to test for a relationship between categorical variables. It assumes that the variables being investigated are categorical and independent [[8]](#refs). It determines whether there is a relationship by comparing the *expected* values to the *observed* values for each category [[9]](#refs). As per any statistical test, it assumes a null hypothesis of no relationship; if the Chi square statistic is significantly different from a critical value (determined by the degrees of freedom), then the null hypothesis is rejected. \n\n### Chi-square calculation\nThe Chi-square statistic is the sum of the observed value minus the expected value squared, divided by the expected value, for each category. This is presented as follows [[10]](#refs):\n\n\\begin{equation}\n\\chi^2=\\Sigma\\frac{(O-E)^2}{E} \\\\\n\\text{where O is the actual value and E is the expected value.}\n\\end{equation}\n\nThe expected value is determined by multiplying the row total by the column total, and dividing the answer by the grand total, as follows:\n\n$$expected\\ value\\ = \\frac{row\\ total\\ x\\ column\\ total}{grand\\ total}$$\n\n\n### Table of observed values\nThe table from Wikipedia is presented below:\n\n\n```python\n#Import necessary modules\nimport pandas as pd\n\n#Generate table - adapted from: https://towardsdatascience.com/gentle-introduction-to-chi-square-test-for-independence-7182a7414a95\ncells = pd.DataFrame([[90, 60, 104, 95], #Row 1, White collar\n [30, 50, 51, 20], #Row 2, Blue collar\n [30, 40, 45, 35] #Row 3, No collar\n ],\nindex = [\"White Collar\", \"Blue Collar\", \"No Collar\"],\ncolumns = [\"A\", \"B\", \"C\", \"D\"])\n\n#Add in columns totals to a new copy of cells, as chi2_contingency needs internal cell values only\ntable = cells.copy()\ntable.loc['Total',:] = table.sum(axis=0)\n\n#Add in row totals\ntable.loc[:,'Total'] = table.sum(axis=1)\n\n#Tidy up display by converting all floating points to integers\ntable = table.astype(int)\n\n#Display table\ntable\n```\n\n\n\n\n
                     A    B    C    D  Total\n    White Collar    90   60  104   95    349\n    Blue Collar     30   50   51   20    151\n    No Collar       30   40   45   35    150\n    Total          150  150  200  150    650
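\n\nBefore handing the table to `scipy`, the two formulas in the *Chi-square calculation* section above can be checked directly. The cell below is a small sketch of that check using plain array arithmetic; it assumes the `cells` DataFrame defined above and should reproduce the 24.6 statistic quoted from Wikipedia.\n\n\n```python\n#Sketch: check the chi-square statistic by hand using the `cells` DataFrame from above\nobserved = cells.values\n\n#Row totals, column totals and the grand total of the observed counts\nrow_totals = observed.sum(axis=1, keepdims=True)\ncol_totals = observed.sum(axis=0, keepdims=True)\ngrand_total = observed.sum()\n\n#Expected value = row total x column total / grand total\nexpected = row_totals * col_totals / grand_total\n\n#Chi-square statistic = sum over all cells of (observed - expected)**2 / expected\nchi_square = ((observed - expected)**2 / expected).sum()\nprint(round(chi_square, 1))\n```\n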
    \n\n\n\n### Determination of Chi square statistic and associated values\nThe method to use is the `stats.scipy.chi2_contingency`, as the table is larger than a 2x2, and this statistical test is the hypothesis test of independence of the observed frequencies in the contingency table. \n\nIt takes an array-like structure, and the method returns the Chi square statistic, the *p* value, Degrees of freedom, and the expected frequencies, in that order, based on marginal values calculated by the method from the observed frequencies.\n\n\n```python\n#Import necessary modules\nfrom scipy.stats import chi2_contingency\nimport numpy as np\n\n#Turn table into a numpy array\nobs = np.array(cells)\n#print(obs)\n\n#Run chi2 contingency method\nchiStat, pVal, dof, expF = chi2_contingency(obs)\n\n#Print output\nprint(\"The Chi square statistic is:\", round(chiStat,1))\nprint(\"The p value is:\", round(pVal,4))\nprint(\"The Degrees of Freedom are:\", dof)\nprint(\"The expected frequencies are:\\n\", (pd.DataFrame(expF,index = [\"White Collar\", \"Blue Collar\", \"No Collar\"],\ncolumns = [\"A\", \"B\", \"C\", \"D\"])).round(2))\n\n```\n\n The Chi square statistic is: 24.6\n The p value is: 0.0004\n The Degrees of Freedom are: 6\n The expected frequencies are:\n A B C D\n White Collar 80.54 80.54 107.38 80.54\n Blue Collar 34.85 34.85 46.46 34.85\n No Collar 34.62 34.62 46.15 34.62\n\n\n## Discussion\nThe Chi square statistic of 24.6 calculated by the `scipy.stats.ch2_contingency` method aligns with that in the Wikipedia article. When one compares the *expected* values in the above output to the *observed* values in the original table, it is evident that in some categories, there are large differences between these two values. This is reflected in the significant *p* value of 0.0004, reported as p < 0.05. \n\nWe therefore reject the null hypothesis that there is no difference between a person's neighbourhood of residence based on their occupational class, and state that there is a statistically significant difference between the distribution of persons across each neighbourhood based on their occupational class. \n\n***\n\n## Task 3\nThe third task relates to one measure of dispersion around the mean, namely standard deviation. The task is to examine the difference between `STDDEV.S` and `STDDEV.P` in Excel, and then use `numpy` to explain why `STDDEV.S` is a better estimate for the standard deviation of a population when performed on a sample.\n\n### What is a standard deviation?\nThe standard deviation (SD) gives a measure of dispersement or spread of one's data around the mean [[11]](#refs). A small SD indicates that the data is gathered close to the mean (small spread), while a large SD indicates that data is spread out quite widely around the mean [[12]](#refs). The below image compares two populations with the same mean of 0, but with a SD of 1 and 3 respectively. 
Notice how the plot with SD of 3 is more dispersed around the mean of 0.\n\n\n```python\n# Import necessary modules\nimport numpy as np\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n\n# Plot of 1,000 data points from a normal distribution with mean of 0 and SD of 1 - adapted from: https://stackoverflow.com/a/51758496\nvalue1 = np.around(np.random.normal(loc=0,scale=1,size=1000), 2)\nsns.distplot(value1, label=\"Mean 0, SD 1\")\n\n# Plot of 1,000 data points from a normal distribution with a mean of 0 and SD of 3\nvalue2 = np.around(np.random.normal(loc=0, scale = 3, size = 1000),2)\nsns.distplot(value2, label=\"Mean 0, SD 3\")\n\n# Show legend\nplt.legend()\n\n```\n\nThere are some interesting conclusions that can be gleaned from a SD when the data is distributed normally around the mean as in the above example. Roughly 68% of all data points will lie within \u00b11 SD of the mean, ~95% within \u00b12 SD of the mean, and ~99% within \u00b13 SD of the mean [[13]](#refs). One can evaluate that as follows:\n\n\n```python\n'''Proportion of values within 1SD of value1 (from -1 to +1 around mean of 0)'''\n# Sort array\nvalue1.sort()\n\n# Extract all values between -1 and +1\nprop_sd1_value1 = []\nfor value in value1:\n if value < 1.0 and value > -1.0:\n prop_sd1_value1.append(value)\n \n# Determine proportion of values\nprop = len(prop_sd1_value1)/1000 * 100\n\n# Print result\nprint(f\"The proportion of values between \u00b11 SD in a normal distribution with a mean of 0 and SD of 1 is {prop:.2f}%\")\n```\n\n The proportion of values between \u00b11 SD in a normal distribution with a mean of 0 and SD of 1 is 69.10%\n\n\n\n```python\n'''Proportion of values within 2SD of value1 (from -2 to +2 around mean of 0)'''\n# Extract all values between -1 and +1\nprop_sd2_value1 = []\nfor value in value1:\n if value < 2.0 and value > -2.0:\n prop_sd2_value1.append(value)\n \n# Determine proportion of values\nprop = len(prop_sd2_value1)/1000 * 100\n\n# Print result\nprint(f\"The proportion of values between \u00b12 SD in a normal distribution with a mean of 0 and SD of 1 is {prop:.2f}%\")\n```\n\n The proportion of values between \u00b12 SD in a normal distribution with a mean of 0 and SD of 1 is 95.20%\n\n\n\n```python\n'''Proportion of values within 3SD of value1 (from -3 to +3 around mean of 0)'''\n# Extract all values between -1 and +1\nprop_sd3_value1 = []\nfor value in value1:\n if value < 3.0 and value > -3.0:\n prop_sd3_value1.append(value)\n \n# Determine proportion of values\nprop = len(prop_sd3_value1)/1000 * 100\n\n# Print result\nprint(f\"The proportion of values between \u00b13 SD in a normal distribution with a mean of 0 and SD of 1 is {prop:.2f}%\")\n```\n\n The proportion of values between \u00b13 SD in a normal distribution with a mean of 0 and SD of 1 is 99.70%\n\n\nFinally, the SD is used in calculating the confidence interval (CI). A CI represents a range in which some value that we are interested in determining can be found [[14]](#refs). Oftentimes, a 95% CI interval is presented. As has been shown above, 2SD around a mean usually contain ~95% of all values in the sample or population (1.96SD is exactly 95%). 
It is this characteristic of the SD that is useful for determining a range in which a value can be found.\n\n### Calculating standard deviation\nThe equation for calculating the SD of a **population** is: \n$$\sigma = \sqrt{\frac{\Sigma(x_i - \mu)^2}{N}}$$ where $\sigma$ represents the population SD, $x_i$ is each value in the population, $\mu$ is the population mean, and $N$ is the size of the population. \n\nThe SD represents the spread of values around the mean, or put another way, how far a value is from the mean. This 'distance' can be represented by the difference between the value and the mean, represented as $x - \mu$, where $x$ is the value, and $\mu$ is the mean of all the data points. The mean is simply calculated by adding all the values in the data set together, and dividing by the number of values present. In other words, $\frac{\Sigma{x_i}}{N}$, where $x_i$ is each value in the data set, and $N$ is the total number of observations in the data set.\n\nIf we merely added the difference for each $x - \mu$, we would get a result of zero, because a value can be smaller or larger than the mean. For example, in an array of `[3,4,5,6,7]`, the mean is `5`. The difference between each value and the mean ($x - \mu$) in turn is `[-2, -1, 0, 1, 2]`, which when added together equals zero. To prevent this, the value for each difference is squared: $\Sigma(x - \mu)^2$. This gives a positive array for each data point: `[4, 1, 0, 1, 4]`. The sum of this is now 10. This value is divided by the number of observations in the population, $N$, before the square root is applied to arrive at the population SD, $\sigma$.\n\nMore frequently, a sample of a population is examined. The equation for calculating the SD of a **sample** is: \n\n$$s = \sqrt{\frac{\Sigma(x_i - \overline x)^2}{n - 1}}$$ \n\nwhere $s$ represents the sample SD, $x_i$ is each value in the sample, $\overline x$ is the sample mean, and $n$ is the sample size. \n\nThe main difference between these two equations is in the denominator: population SD makes use of the *whole* population size, while sample SD makes use of the sample size minus one. It is this difference that is conveyed in the different Excel functions: `STDDEV.P` is used to calculate the SD of a *whole* population (e.g. all children in a class of interest), while `STDDEV.S` is used to calculate the SD of a *sample* of the population (e.g. a handful of children from the class of interest) [[15]](#refs). Because it is often difficult to measure a complete population (either because the population is too big to measure in its entirety, or the true size of the population is difficult to define), a sample is used. In the next section, we will use `numpy` to look at why using the sample equation (denominator $n-1$) is a better estimator of the SD than the population equation (denominator $N$).\n\n### Sample Standard Deviation Calculation example\nLet's create a dataset of 1,000 random whole-number ages between 0 and 85, drawn uniformly at random, to represent the ages of a population. Before sampling from it, the short sketch below writes the two formulas above out directly in `numpy`. 
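\n\nThis is only a check of the arithmetic, using the `[3,4,5,6,7]` array from the worked example above; note that `np.std` uses the population denominator $N$ by default and switches to $n-1$ when `ddof=1` is passed.\n\n\n```python\n#Sketch: the two formulas written out explicitly and checked against np.std\nvalues = np.array([3, 4, 5, 6, 7])\nmean = values.mean()\nsquared_diffs = ((values - mean)**2).sum() # this is the 10 from the worked example\n\npopulation_sd = (squared_diffs / len(values))**0.5 # denominator N\nsample_sd = (squared_diffs / (len(values) - 1))**0.5 # denominator n - 1\n\nprint(population_sd, np.std(values)) # np.std uses N by default\nprint(sample_sd, np.std(values, ddof=1)) # ddof=1 gives the n - 1 version\n```\n\n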
Let's assume that this data set represents the WHOLE population of interest.\n\n\n```python\n#Create array\npop = np.random.randint(0, 86, 1000)\npop.sort()\n#print(pop)\nprint(f\"Values range from a minimum of {pop[0]} to {pop[-1]}, with a mean of {sum(pop)/len(pop):.1f}, and a population standard deviation of {np.std(pop):.1f}.\")\n```\n\n Values range from a minimum of 0 to 85, with a mean of 41.4, and a population standard deviation of 24.9.\n\n\nLet's now assume that we take a sample from this population, because we cannot access the whole population to measure age. Ideally, we want to get as close to the actual SD of the population as possible using the sample. We randomly sample 10 observations from the population. We repeat this sampling 50 times, and determine the average *population* SD of all these samples.\n\n\n```python\n#Create list to store each result of population SD\navg_pop_sd = []\n\n#Create sample and run 50 times\nfor i in range(50):\n sample = np.random.choice(pop, 10, replace=False)\n avg_pop_sd.append(round(np.std(sample),1))\n \n#print(avg_sample_sd)\nprint(f\"The average SD using a population calculation for a random sample of size 10 repeated 50 times from the same population is {sum(avg_pop_sd)/len(avg_pop_sd):.1f}.\")\nprint(f\"This differs from the true population SD by {abs(np.std(pop) - (sum(avg_pop_sd)/len(avg_pop_sd))):.1f}.\")\n```\n\n The average SD using a population calculation for a random sample of size 10 repeated 50 times from the same population is 23.7.\n This differs from the true population SD by 1.2.\n\n\nIn the next step, we repeat this process, but apply the *sample* SD calculation of $n-1$.\n\n\n```python\n#Create list to store each result of population SD\navg_sample_sd = []\n\n#Create sample and run 50 times\nfor i in range(50):\n sample = np.random.choice(pop, 10, replace=False)\n avg_sample_sd.append(round(np.std(sample, ddof=1),1))\n \n#print(avg_sample_sd)\nprint(f\"The average SD using a sample calculation for a random sample of size 10 repeated 50 times from the same population is {sum(avg_sample_sd)/len(avg_sample_sd):.1f}.\")\nprint(f\"This differs from the true population SD by {abs(np.std(pop) - (sum(avg_sample_sd)/len(avg_sample_sd))):.1f}.\")\n```\n\n The average SD using a sample calculation for a random sample of size 10 repeated 50 times from the same population is 25.2.\n This differs from the true population SD by 0.3.\n\n\n### Discussion\nThe above indicates that the use of `STDDEV.S` offers a better approximation of the population SD than `STDDEV.P` when studying a sample of a population. When the whole population is known and measured, `STDDEV.P` should be used, as this calculates the true SD. \n\n***\n\n## Task 4\nThe final task requires using `scikit-learn` to apply k-means clustering to Fisher's Iris dataset. \n\n### K-means clustering\nK-means clustering is a method of unsupervised machine learning that groups together similar points in a dataset by identifying clusters of data objects in a dataset [[16]](#refs). This is achieved by first telling the algorithm how many clusters (k) the data should be seperated into. The algorithm then selects k number of random points in the dataset as the centre of the starting clusters. These centres are referred to as the centroid. Each datapoint is then assigned to the nearest centroid. The algorithm then calculates the mean of all the datapoints, and repositions the centroid to this calculated value. 
After the new centroid is calculated, all the datapoints are again reassigned to the new nearest centroid and hence, new cluster. The process repeats until no datapoints change to new clusters, i.e. the centroid does not change [[17]](#refs).\n\nBecuase the initial starting centroids are chosen at random, k-means will provide different results each time [[16]](#refs). For this reason, the algorithm is usually run several times, with cluster assignments leading to the lowest sum of the squared error (SSE: the sum of the squared Euclidean distance of each point to its closest centroid) been selected as the final result [[17]](#refs).\n\n### Determining starting cluster size *k*\nBecause k-means is a method of unsupervised machine learning, we don't usually know how many clusters, or in this case Iris species, to expect (even though we know it's three). There are two ways of determining an appropriate cluster size [[17]](#refs). Here we'll look at one, the **elbow method**.\n\n#### The elbow method\nThe elbow method runs through repeated k-means calculations with increasing number of k, recording the SSE for each iteration. The k is plotted against the SEE for each iteration. Because the 'best' cluster distribution is usually that with the smallest SSE, the the number k with the smallest SSE would be preferred. However, there is a trade-off between k and SSE - the smaller the cluster, the less 'meaningful' the cluster becomes. So a 'sweet spot' is usually sought, right where the SSE curve starts to bend like an elbow. Hence, the elbow method. \n\nIn the next section, we'll look at the iris dataset and perform a k-means clustering.\n\n### Import the Iris dataset\n`scikit-learn` contains a copy of the Iris dataset. This can be imported from the `scikit-learn` package.\n\n\n```python\n#Import necessary modules\nimport numpy as np\nimport sklearn.cluster as skcl\nimport matplotlib.pyplot as plt\n\n#Import datasets (https://scikit-learn.org/stable/tutorial/basic/tutorial.html#loading-example-dataset)\nfrom sklearn import datasets\n\n#Load data\ndataset = datasets.load_iris()\nirisMeasurements = dataset.data\nirisMeasurements\n```\n\n\n\n\n array([[5.1, 3.5, 1.4, 0.2],\n [4.9, 3. , 1.4, 0.2],\n [4.7, 3.2, 1.3, 0.2],\n [4.6, 3.1, 1.5, 0.2],\n [5. , 3.6, 1.4, 0.2],\n [5.4, 3.9, 1.7, 0.4],\n [4.6, 3.4, 1.4, 0.3],\n [5. , 3.4, 1.5, 0.2],\n [4.4, 2.9, 1.4, 0.2],\n [4.9, 3.1, 1.5, 0.1],\n [5.4, 3.7, 1.5, 0.2],\n [4.8, 3.4, 1.6, 0.2],\n [4.8, 3. , 1.4, 0.1],\n [4.3, 3. , 1.1, 0.1],\n [5.8, 4. , 1.2, 0.2],\n [5.7, 4.4, 1.5, 0.4],\n [5.4, 3.9, 1.3, 0.4],\n [5.1, 3.5, 1.4, 0.3],\n [5.7, 3.8, 1.7, 0.3],\n [5.1, 3.8, 1.5, 0.3],\n [5.4, 3.4, 1.7, 0.2],\n [5.1, 3.7, 1.5, 0.4],\n [4.6, 3.6, 1. , 0.2],\n [5.1, 3.3, 1.7, 0.5],\n [4.8, 3.4, 1.9, 0.2],\n [5. , 3. , 1.6, 0.2],\n [5. , 3.4, 1.6, 0.4],\n [5.2, 3.5, 1.5, 0.2],\n [5.2, 3.4, 1.4, 0.2],\n [4.7, 3.2, 1.6, 0.2],\n [4.8, 3.1, 1.6, 0.2],\n [5.4, 3.4, 1.5, 0.4],\n [5.2, 4.1, 1.5, 0.1],\n [5.5, 4.2, 1.4, 0.2],\n [4.9, 3.1, 1.5, 0.2],\n [5. , 3.2, 1.2, 0.2],\n [5.5, 3.5, 1.3, 0.2],\n [4.9, 3.6, 1.4, 0.1],\n [4.4, 3. , 1.3, 0.2],\n [5.1, 3.4, 1.5, 0.2],\n [5. , 3.5, 1.3, 0.3],\n [4.5, 2.3, 1.3, 0.3],\n [4.4, 3.2, 1.3, 0.2],\n [5. , 3.5, 1.6, 0.6],\n [5.1, 3.8, 1.9, 0.4],\n [4.8, 3. , 1.4, 0.3],\n [5.1, 3.8, 1.6, 0.2],\n [4.6, 3.2, 1.4, 0.2],\n [5.3, 3.7, 1.5, 0.2],\n [5. , 3.3, 1.4, 0.2],\n [7. , 3.2, 4.7, 1.4],\n [6.4, 3.2, 4.5, 1.5],\n [6.9, 3.1, 4.9, 1.5],\n [5.5, 2.3, 4. 
, 1.3],\n [6.5, 2.8, 4.6, 1.5],\n [5.7, 2.8, 4.5, 1.3],\n [6.3, 3.3, 4.7, 1.6],\n [4.9, 2.4, 3.3, 1. ],\n [6.6, 2.9, 4.6, 1.3],\n [5.2, 2.7, 3.9, 1.4],\n [5. , 2. , 3.5, 1. ],\n [5.9, 3. , 4.2, 1.5],\n [6. , 2.2, 4. , 1. ],\n [6.1, 2.9, 4.7, 1.4],\n [5.6, 2.9, 3.6, 1.3],\n [6.7, 3.1, 4.4, 1.4],\n [5.6, 3. , 4.5, 1.5],\n [5.8, 2.7, 4.1, 1. ],\n [6.2, 2.2, 4.5, 1.5],\n [5.6, 2.5, 3.9, 1.1],\n [5.9, 3.2, 4.8, 1.8],\n [6.1, 2.8, 4. , 1.3],\n [6.3, 2.5, 4.9, 1.5],\n [6.1, 2.8, 4.7, 1.2],\n [6.4, 2.9, 4.3, 1.3],\n [6.6, 3. , 4.4, 1.4],\n [6.8, 2.8, 4.8, 1.4],\n [6.7, 3. , 5. , 1.7],\n [6. , 2.9, 4.5, 1.5],\n [5.7, 2.6, 3.5, 1. ],\n [5.5, 2.4, 3.8, 1.1],\n [5.5, 2.4, 3.7, 1. ],\n [5.8, 2.7, 3.9, 1.2],\n [6. , 2.7, 5.1, 1.6],\n [5.4, 3. , 4.5, 1.5],\n [6. , 3.4, 4.5, 1.6],\n [6.7, 3.1, 4.7, 1.5],\n [6.3, 2.3, 4.4, 1.3],\n [5.6, 3. , 4.1, 1.3],\n [5.5, 2.5, 4. , 1.3],\n [5.5, 2.6, 4.4, 1.2],\n [6.1, 3. , 4.6, 1.4],\n [5.8, 2.6, 4. , 1.2],\n [5. , 2.3, 3.3, 1. ],\n [5.6, 2.7, 4.2, 1.3],\n [5.7, 3. , 4.2, 1.2],\n [5.7, 2.9, 4.2, 1.3],\n [6.2, 2.9, 4.3, 1.3],\n [5.1, 2.5, 3. , 1.1],\n [5.7, 2.8, 4.1, 1.3],\n [6.3, 3.3, 6. , 2.5],\n [5.8, 2.7, 5.1, 1.9],\n [7.1, 3. , 5.9, 2.1],\n [6.3, 2.9, 5.6, 1.8],\n [6.5, 3. , 5.8, 2.2],\n [7.6, 3. , 6.6, 2.1],\n [4.9, 2.5, 4.5, 1.7],\n [7.3, 2.9, 6.3, 1.8],\n [6.7, 2.5, 5.8, 1.8],\n [7.2, 3.6, 6.1, 2.5],\n [6.5, 3.2, 5.1, 2. ],\n [6.4, 2.7, 5.3, 1.9],\n [6.8, 3. , 5.5, 2.1],\n [5.7, 2.5, 5. , 2. ],\n [5.8, 2.8, 5.1, 2.4],\n [6.4, 3.2, 5.3, 2.3],\n [6.5, 3. , 5.5, 1.8],\n [7.7, 3.8, 6.7, 2.2],\n [7.7, 2.6, 6.9, 2.3],\n [6. , 2.2, 5. , 1.5],\n [6.9, 3.2, 5.7, 2.3],\n [5.6, 2.8, 4.9, 2. ],\n [7.7, 2.8, 6.7, 2. ],\n [6.3, 2.7, 4.9, 1.8],\n [6.7, 3.3, 5.7, 2.1],\n [7.2, 3.2, 6. , 1.8],\n [6.2, 2.8, 4.8, 1.8],\n [6.1, 3. , 4.9, 1.8],\n [6.4, 2.8, 5.6, 2.1],\n [7.2, 3. , 5.8, 1.6],\n [7.4, 2.8, 6.1, 1.9],\n [7.9, 3.8, 6.4, 2. ],\n [6.4, 2.8, 5.6, 2.2],\n [6.3, 2.8, 5.1, 1.5],\n [6.1, 2.6, 5.6, 1.4],\n [7.7, 3. , 6.1, 2.3],\n [6.3, 3.4, 5.6, 2.4],\n [6.4, 3.1, 5.5, 1.8],\n [6. , 3. , 4.8, 1.8],\n [6.9, 3.1, 5.4, 2.1],\n [6.7, 3.1, 5.6, 2.4],\n [6.9, 3.1, 5.1, 2.3],\n [5.8, 2.7, 5.1, 1.9],\n [6.8, 3.2, 5.9, 2.3],\n [6.7, 3.3, 5.7, 2.5],\n [6.7, 3. , 5.2, 2.3],\n [6.3, 2.5, 5. , 1.9],\n [6.5, 3. , 5.2, 2. ],\n [6.2, 3.4, 5.4, 2.3],\n [5.9, 3. , 5.1, 1.8]])\n\n\n\nThere are 150 entries in the dataset. Each column represents, in turn: Sepal Length (sl), Sepal Width (sw), Petal Length (pl), Petal Width (pw) [[18]](#refs). In addition, the dataset has classified each observation according to the type of Iris: Iris-Setosa, Iris-Versicolour, Iris-Virginica. These are recorded as 0, 1 or 2 respectively.\n\n\n```python\n#Display classification of each datapoint\ndataset['target']\n```\n\n\n\n\n array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,\n 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,\n 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2])\n\n\n\n### Determine an appropriate number for k\nWe are assuming that we don't know there are 3 iris species in our data. 
So let's apply the elbow method and see how many clusters are 'advised'.\n\n\n```python\n#Applying the elbow method [adapted from https://predictivehacks.com/k-means-elbow-method-code-for-python/]\n#Create blank list to store SSE\nsse = []\n\n#Run through 7 clusters determining SSE for each\nfor k in range (1,8):\n kmeans = skcl.KMeans(n_clusters = k)\n kmeans.fit(irisMeasurements)\n sse.append(kmeans.inertia_)\n \n#Display the results of SSE against k\nplt.plot(range(1,8), sse)\nplt.xlabel('Number of clusters, k')\nplt.ylabel('SSE')\nplt.title('The Elbow Method showing the optimal k')\nplt.show()\n```\n\nAs mentioned previously, we are looking for that sweet spot trade-off between cluster size and smallest SSE. From the above diagram, it would appear that there isn't much gain to be had above 3 clusters. So we will define 3 clusters for our kmeans clustering.\n\n### Perform kmeans clustering on dataset\n\n\n```python\n#Instantiate algorithm for dataset, using a deterministic initialization of centroids\nkmIris = skcl.KMeans(n_clusters=3, random_state=0)\n\n#Fit data\nkmIris.fit(irisMeasurements)\n\n#Check labels\nkmIris.labels_\n```\n\n\n\n\n array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\n 1, 1, 1, 1, 1, 1, 2, 2, 0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,\n 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,\n 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 0, 2, 0, 0, 0, 0, 2, 0, 0, 0,\n 0, 0, 0, 2, 2, 0, 0, 0, 0, 2, 0, 2, 0, 2, 0, 0, 2, 2, 0, 0, 0, 0,\n 0, 2, 0, 0, 0, 0, 2, 0, 0, 0, 2, 0, 0, 0, 2, 0, 0, 2], dtype=int32)\n\n\n\nWe can see that there is some break in the continuity of clustering of the datapoints; numbers should run consecutively, i.e. 1,1,1,...2,2,2,...0,0,0. There should be no mixing of values. We can determine the size of each cluster identified through kmeans to see the accuracy of the clustering, as we know that the target dataset is clustered contiguously in groups of 50 for each Iris species.\n\n\n```python\n#Count number of items in each cluster [taken from https://stackoverflow.com/a/5829377]\nfrom collections import Counter\n\n#Divide cluster result into clusters of 50 contiguous values\n#Convert to list to allow slicing\ncalculatedLabels = list(kmIris.labels_)\n\n#Create clusters of 50\ncluster0 = calculatedLabels[0:50]\ncluster1 = calculatedLabels[50:100]\ncluster2 = calculatedLabels[100:]\n\n#Count values in each cluster and assign clusters to variables\ndict0 = Counter(cluster0)\ndict1 = Counter(cluster1)\ndict2 = Counter(cluster2)\n\n#Determine accuracy in each cluster\nprint(f\"In the first cluster of 50, there are {dict0[1]} datapoints grouped to Iris Setosa (accuracy of {dict0[1]/50 * 100}%)\")\nprint(f\"In the second cluster of 50, there are {dict1[2]} datapoints grouped to Iris Versicolor (accuracy of {dict1[2]/50 * 100}%)\")\nprint(f\"In the third cluster of 50, there are {dict2[0]} datapoints grouped to Iris Virginica (accuracy of {dict2[0]/50 * 100}%)\")\n\n```\n\n In the first cluster of 50, there are 50 datapoints grouped to Iris Setosa (accuracy of 100.0%)\n In the second cluster of 50, there are 48 datapoints grouped to Iris Versicolor (accuracy of 96.0%)\n In the third cluster of 50, there are 36 datapoints grouped to Iris Virginica (accuracy of 72.0%)\n\n\nThe kmeans clustering was only able to identify all objects from the Iris Setosa species correctly with 100% accuracy. 
It was only able to identify Iris Versicolor with 96% accuracy, and Iris Virginica with 72% accuracy. We can represent this visually in the following plots.\n\n\n```python\n#Tidy up calculated array so that same colors are used between both plots\ncleanArray = []\ndirty = kmIris.labels_\nfor i in dirty:\n if i == 1:\n cleanArray.append(0)\n if i == 2:\n cleanArray.append(1)\n if i == 0:\n cleanArray.append(2)\n```\n\n\n```python\n#Visualise actual vs calculated clusters [adapted from https://medium.com/@belen.sanchez27/predicting-iris-flower-species-with-k-means-clustering-in-python-f6e46806aaee]\n\n#Plot side by side\nfig, axes = plt.subplots(1, 2, figsize=(16,8))\n\n#First plot of actual clusters\naxes[0].scatter(irisMeasurements[:, 0], irisMeasurements[:, 1], c=dataset['target'])\naxes[0].set_xlabel('Sepal length')\naxes[0].set_ylabel('Sepal width')\naxes[0].set_title('Actual clustering')\n\n\n#Second plot of calculated clusters\naxes[1].scatter(irisMeasurements[:, 0], irisMeasurements[:, 1], c=cleanArray)\naxes[1].set_xlabel('Sepal length')\naxes[1].set_ylabel('Sepal width')\naxes[1].set_title('Calculated clustering')\n```\n\n### Discussion\nKmeans clustering is a form of unsupervised machine learning. While it is useful in identifying common groupings in datasets of various shapes and sizes, it is limited by the need to identify starting number of clusters, and the fact that outliers can be excluded from the correct grouping, as seen above [[20]](#refs). \n\n***\n\n### References\n[1] Wikipedia. Methods of computing square roots \\[Internet\\]. Wikimedia Foundation; 2020 \\[Last updated 2020 October 30\\]. Available from https://en.wikipedia.org/wiki/Methods_of_computing_square_roots\n\n[2] Johnson, SG. Square Roots via Newton's Method \\[published lecture notes\\] MIT; 2015 February 4 \\[Cited 2020 October 14\\]. Available at: https://math.mit.edu/~stevenj/18.335/newton-sqrt.pdf\n\n[3] Regmi, S. Calculating the Square Root of a Number using the \u200aNewton-Raphson Method \\[A How To Guide\\]. \\[Internet\\] Hackernoon; 2020 \\[Last updated 2020 January 18; cited 2020 October 14\\] Available at https://hackernoon.com/calculating-the-square-root-of-a-number-using-the-newton-raphson-method-a-how-to-guide-yr4e32zo\n\n[4] Python Software Foundation. decimal \u2014 Decimal fixed point and floating point arithmetic \\[Internet\\]. Python Software Foundation; 2020. \\[Last updated 2020 October 31; cited 2020 October 10\\]. Available at https://docs.python.org/3.8/library/decimal.html\n\n[5] Arobelidze, A. How to find the square root of a number and calculate it by hand \\[Internet\\]. USA: FreeCodeCamp; 2020 \\[Last updated 2020 February 6; cited 2020 October 16\\] Available at https://www.freecodecamp.org/news/find-square-root-of-number-calculate-by-hand/\n\n[6] Nemirof, R and Bonnell, J. The Square Root of Two to 1 Million Digits \\[Internet\\]. Available from https://apod.nasa.gov/htmltest/gifcity/sqrt2.1mil\n\n[7] Wikipedia. Chi-squared test \\[Internet\\]. Wikimedia Foundation; 2020 \\[Last updated 2020 October 11; cited 2020 November 14\\]. Available from https://en.wikipedia.org/wiki/Chi-squared_test\n\n[8] Laerd Statistics. Chi-Square Test for Association using SPSS Statistics \\[Internet\\]. UK: Lund Lund Research Ltd. \\[Last updated 2018; cited 2020 November 14\\] Available from https://statistics.laerd.com/spss-tutorials/chi-square-test-for-association-using-spss-statistics.php\n\n[9] Glen, S. Chi-Square Statistic: How to Calculate It / Distribution \\[Internet\\]. 
StatisticsHowTo.com; 2020 \\[Last updated 2020; cited 2020 November 14\\] Available from https://www.statisticshowto.com/probability-and-statistics/chi-square/\n\n[10] Okada, S. Gentle Introduction to Chi-Square Test for Independence: Beginners guide to Chi-square using Jupyter Notebook \\[Internet\\]. Medium - towards data science; 2020. \\[Last updated 2020 January; cited 2020 November 14\\] Available from https://towardsdatascience.com/gentle-introduction-to-chi-square-test-for-independence-7182a7414a95#92ef\n\n[11] Glen, S. Standard Deviation: Simple Definition, Step by Step Video \\[Internet\\]. StatisticsHowTo.com; 2020 \\[Last updated 2020; cited 2020 November 24\\] Available from https://www.statisticshowto.com/probability-and-statistics/standard-deviation/#SDD\n\n[12] Wikipedia. Standard deviation \\[Internet\\]. Wikimedia Foundation; 2020 \\[Last updated 10 December 2020; cited 2020 November 24\\] Available from https://en.wikipedia.org/wiki/Standard_deviation\n\n[13] Hayes, A. Empirical Rule. \\[Internet\\] Investopedia; nd. \\[Last updated 2020 July 27; cited 2020 November 24\\] Available from https://www.investopedia.com/terms/e/empirical-rule.asp\n\n[14] Maths is Fun. Confidence Intervals \\[Internet\\]. MathsisFun.com; 2017 \\[Last updated 2017; cited 2020 November 24\\] Available from https://www.mathsisfun.com/data/confidence-interval.html\n\n[15] Microsoft. STDEV.S function \\[Internet\\]. USA: Microsoft; 2020 \\[Last updated 2020; cited 2020 November 24\\] Available from https://support.microsoft.com/en-us/office/stdev-s-function-7d69cf97-0c1f-4acf-be27-f3e83904cc23\n\n[16] Munnelly, G. K-Means \\[Internet\\]. Trinity College Dublin; 2017 \\[Last updated 2017; cited 2020 December 10\\] Available from https://www.scss.tcd.ie/~munnellg/projects/kmeans.html\n\n[17] Arvai, K. K-Means Clustering in Python: A Practical Guide \\[Internet\\]. Real Python; 2020 \\[Last updated 2020 July 20; cited 2020 December 10\\] Available from https://realpython.com/k-means-clustering-python/\n\n[18] scikit-learn developers. The Iris Dataset \\[Internet\\]. scikit-learn; 2020 \\[Last updated 2020; cited 2020 December 10\\] Available from https://scikit-learn.org/stable/auto_examples/datasets/plot_iris_dataset.html\n\n[19] Garbade, MJ. Understanding K-means Clustering in Machine Learning \\[Internet\\]. Medium - towards data science; 2020 \\[Last updated 2018 September 12; cited 2020 December 10\\] Available from https://towardsdatascience.com/understanding-k-means-clustering-in-machine-learning-6a6e67336aa1\n\n[20] Google. k-Means Advantages and Disadvantages \\[Internet\\]. 
Google Developers; 2020 \\[Last updated 2020 February 10; cited 2020 December 11\\] Available from https://developers.google.com/machine-learning/clustering/algorithm/advantages-disadvantages\n", "meta": {"hexsha": "00d3f32ba2e44bae6c21d401304f071690c46ce5", "size": 166065, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "ML&StatsAssignment.ipynb", "max_stars_repo_name": "thomas-roux/MLandStatisticsAssignment", "max_stars_repo_head_hexsha": "f8ae0d2d6ae981e7652284898a3eaa915425b355", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "ML&StatsAssignment.ipynb", "max_issues_repo_name": "thomas-roux/MLandStatisticsAssignment", "max_issues_repo_head_hexsha": "f8ae0d2d6ae981e7652284898a3eaa915425b355", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ML&StatsAssignment.ipynb", "max_forks_repo_name": "thomas-roux/MLandStatisticsAssignment", "max_forks_repo_head_hexsha": "f8ae0d2d6ae981e7652284898a3eaa915425b355", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 87.8650793651, "max_line_length": 61200, "alphanum_fraction": 0.7847951103, "converted": true, "num_tokens": 15907, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9019206712569267, "lm_q2_score": 0.9124361533336451, "lm_q1q2_score": 0.8229450278937693}} {"text": "```python\nimport numpy as np\nfrom sympy import *\ninit_printing(use_latex='mathjax')\n```\n\n\n```python\nx1, x2, t = symbols('x1 x2 t')\n\nF = Matrix([\n x1 ** 2 * x2 ** 2 + x1 * x2,\n 1 - t ** 2,\n 1 + t ** 2\n])\nF.jacobian([x1,x2, t])\n```\n\n\n\n\n$$\\left[\\begin{matrix}2 x_{1} x_{2}^{2} + x_{2} & 2 x_{1}^{2} x_{2} + x_{1} & 0\\\\0 & 0 & - 2 t\\\\0 & 0 & 2 t\\end{matrix}\\right]$$\n\n\n\n\n```python\nx1, x2, x3, t = symbols('x1 x2 x3 t')\n\nF = Matrix([\n x1 ** 3 * cos(x2) * exp(x3)\n])\nT = Matrix([\n 2 * t,\n 1 - t ** 2,\n exp(t)\n])\njf = F.jacobian([x1,x2, x3])\njt = T.jacobian([t])\ndisplay(jf)\ndisplay(jt)\n```\n\n\n$$\\left[\\begin{matrix}3 x_{1}^{2} e^{x_{3}} \\cos{\\left (x_{2} \\right )} & - x_{1}^{3} e^{x_{3}} \\sin{\\left (x_{2} \\right )} & x_{1}^{3} e^{x_{3}} \\cos{\\left (x_{2} \\right )}\\end{matrix}\\right]$$\n\n\n\n$$\\left[\\begin{matrix}2\\\\- 2 t\\\\e^{t}\\end{matrix}\\right]$$\n\n\n\n```python\nx1, x2, u1, u2, t = symbols('x1 x2 u1, u2, t')\n\nF = Matrix([\n x1 ** 2 - x2 ** 2\n])\nU = Matrix([\n 2 * u1 + 3 * u2,\n 2 * u1 - 3 * u2\n])\nT = Matrix([\n cos(t / 2),\n sin(2 * t)\n])\njf = F.jacobian([x1,x2, x3])\nju = U.jacobian([u1, u2])\njt = T.jacobian([t])\ndisplay(jf)\ndisplay(ju)\ndisplay(jt)\n```\n\n\n$$\\left[\\begin{matrix}2 x_{1} & - 2 x_{2} & 0\\end{matrix}\\right]$$\n\n\n\n$$\\left[\\begin{matrix}2 & 3\\\\2 & -3\\end{matrix}\\right]$$\n\n\n\n$$\\left[\\begin{matrix}- \\frac{1}{2} \\sin{\\left (\\frac{t}{2} \\right )}\\\\2 \\cos{\\left (2 t \\right )}\\end{matrix}\\right]$$\n\n\n\n```python\nx1, x2, u1, u2, t = symbols('x1 x2 u1, u2, t')\n\nF = Matrix([\n cos(x1)*sin(x2)\n])\nU = Matrix([\n 2 * u1 ** 2 + 3 * u2 ** 2 - u2,\n 2 * u1 - 5 * u2 ** 3\n])\nT = Matrix([\n exp(t / 2),\n exp(-2 * t)\n])\njf = F.jacobian([x1,x2, x3])\nju = U.jacobian([u1, u2])\njt = 
T.jacobian([t])\ndisplay(jf)\ndisplay(ju)\ndisplay(jt)\n```\n\n\n$$\\left[\\begin{matrix}- \\sin{\\left (x_{1} \\right )} \\sin{\\left (x_{2} \\right )} & \\cos{\\left (x_{1} \\right )} \\cos{\\left (x_{2} \\right )} & 0\\end{matrix}\\right]$$\n\n\n\n$$\\left[\\begin{matrix}4 u_{1} & 6 u_{2} - 1\\\\2 & - 15 u_{2}^{2}\\end{matrix}\\right]$$\n\n\n\n$$\\left[\\begin{matrix}\\frac{e^{\\frac{t}{2}}}{2}\\\\- 2 e^{- 2 t}\\end{matrix}\\right]$$\n\n\n\n```python\nx1, x2, x3, u1, u2, t = symbols('x1 x2 x3 u1 u2 t')\n\nF = Matrix([\n sin(x1) * cos(x2) * exp(x3)\n])\nU = Matrix([\n sin(u1) + cos(u2),\n cos(u1) - sin(u2),\n exp(u1 + u2)\n])\nT = Matrix([\n 1 + t / 2,\n 1 - t / 2\n])\njf = F.jacobian([x1,x2, x3])\nju = U.jacobian([u1, u2])\njt = T.jacobian([t])\ndisplay(jf)\ndisplay(ju)\ndisplay(jt)\n```\n\n\n$$\\left[\\begin{matrix}e^{x_{3}} \\cos{\\left (x_{1} \\right )} \\cos{\\left (x_{2} \\right )} & - e^{x_{3}} \\sin{\\left (x_{1} \\right )} \\sin{\\left (x_{2} \\right )} & e^{x_{3}} \\sin{\\left (x_{1} \\right )} \\cos{\\left (x_{2} \\right )}\\end{matrix}\\right]$$\n\n\n\n$$\\left[\\begin{matrix}\\cos{\\left (u_{1} \\right )} & - \\sin{\\left (u_{2} \\right )}\\\\- \\sin{\\left (u_{1} \\right )} & - \\cos{\\left (u_{2} \\right )}\\\\e^{u_{1} + u_{2}} & e^{u_{1} + u_{2}}\\end{matrix}\\right]$$\n\n\n\n$$\\left[\\begin{matrix}\\frac{1}{2}\\\\- \\frac{1}{2}\\end{matrix}\\right]$$\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "9db75be7f04ce1bcf44dece345ba6204da12c53d", "size": 8516, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Certification 2/Week3.1 - Multivariate Chain rule.ipynb", "max_stars_repo_name": "The-Brains/MathForMachineLearning", "max_stars_repo_head_hexsha": "5cbd9006f166059efaa2f312b741e64ce584aa1f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2018-04-16T02:53:59.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-16T06:51:57.000Z", "max_issues_repo_path": "Certification 2/Week3.1 - Multivariate Chain rule.ipynb", "max_issues_repo_name": "The-Brains/MathForMachineLearning", "max_issues_repo_head_hexsha": "5cbd9006f166059efaa2f312b741e64ce584aa1f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Certification 2/Week3.1 - Multivariate Chain rule.ipynb", "max_forks_repo_name": "The-Brains/MathForMachineLearning", "max_forks_repo_head_hexsha": "5cbd9006f166059efaa2f312b741e64ce584aa1f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2019-05-20T02:06:55.000Z", "max_forks_repo_forks_event_max_datetime": "2020-05-18T06:21:41.000Z", "avg_line_length": 24.1931818182, "max_line_length": 277, "alphanum_fraction": 0.3563879756, "converted": true, "num_tokens": 1371, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9572778036723354, "lm_q2_score": 0.8596637469145054, "lm_q1q2_score": 0.8229370235430481}} {"text": "# Neural Network Fundamentals\n\n## Gradient Descent Introduction:\nhttps://www.youtube.com/watch?v=IxBYhjS295w\n\n\n```python\nfrom IPython.display import YouTubeVideo\nYouTubeVideo(\"IxBYhjS295w\")\n```\n\n\n\n\n\n\n\n\n\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom sklearn.linear_model import LinearRegression\n\nnp.random.seed(1)\n\n%matplotlib inline\nnp.random.seed(1)\n```\n\n\n```python\nN = 100\nx = np.random.rand(N,1)*5\n# Let the following command be the true function\ny = 2.3 + 5.1*x\n# Get some noisy observations\ny_obs = y + 2*np.random.randn(N,1)\n```\n\n\n```python\nplt.scatter(x,y_obs,label='Observations')\nplt.plot(x,y,c='r',label='True function')\nplt.legend()\nplt.show()\n```\n\n## Gradient Descent\n\nWe are trying to minimise $\\sum \\xi_i^2$.\n\n\\begin{align}\n\\mathcal{L} & = \\frac{1}{N}\\sum_{i=1}^N (y_i-f(x_i,w,b))^2 \\\\\n\\frac{\\delta\\mathcal{L}}{\\delta w} & = -\\frac{1}{N}\\sum_{i=1}^N 2(y_i-f(x_i,w,b))\\frac{\\delta f(x_i,w,b)}{\\delta w} \\\\ \n& = -\\frac{1}{N}\\sum_{i=1}^N 2\\xi_i\\frac{\\delta f(x_i,w,b)}{\\delta w}\n\\end{align}\nwhere $\\xi_i$ is the error term $y_i-f(x_i,w,b)$ and \n$$\n\\frac{\\delta f(x_i,w,b)}{\\delta w} = x_i\n$$\n\nSimilar expression can be found for $\\frac{\\delta\\mathcal{L}}{\\delta b}$ (exercise).\n\nFinally the weights can be updated as $w_{new} = w_{current} - \\gamma \\frac{\\delta\\mathcal{L}}{\\delta w}$ where $\\gamma$ is a learning rate between 0 and 1.\n\n\n```python\n# Helper functions\ndef f(w,b):\n return w*x+b\n\ndef loss_function(e):\n L = np.sum(np.square(e))/N\n return L\n\ndef dL_dw(e,w,b):\n return -2*np.sum(e*df_dw(w,b))/N\n\ndef df_dw(w,b):\n return x\n\ndef dL_db(e,w,b):\n return -2*np.sum(e*df_db(w,b))/N\n\ndef df_db(w,b):\n return np.ones(x.shape)\n```\n\n\n```python\n# The Actual Gradient Descent\ndef gradient_descent(iter=100,gamma=0.1):\n # get starting conditions\n w = 10*np.random.randn()\n b = 10*np.random.randn()\n \n params = []\n loss = np.zeros((iter,1))\n for i in range(iter):\n# from IPython.core.debugger import Tracer; Tracer()()\n params.append([w,b])\n e = y_obs - f(w,b) # Really important that you use y_obs and not y (you do not have access to true y)\n loss[i] = loss_function(e)\n\n #update parameters\n w_new = w - gamma*dL_dw(e,w,b)\n b_new = b - gamma*dL_db(e,w,b)\n w = w_new\n b = b_new\n \n return params, loss\n \nparams, loss = gradient_descent()\n```\n\n\n```python\niter=100\ngamma = 0.1\nw = 10*np.random.randn()\nb = 10*np.random.randn()\n\nparams = []\nloss = np.zeros((iter,1))\nfor i in range(iter):\n# from IPython.core.debugger import Tracer; Tracer()()\n params.append([w,b])\n e = y_obs - f(w,b) # Really important that you use y_obs and not y (you do not have access to true y)\n loss[i] = loss_function(e)\n\n #update parameters\n w_new = w - gamma*dL_dw(e,w,b)\n b_new = b - gamma*dL_db(e,w,b)\n w = w_new\n b = b_new\n```\n\n\n```python\ndL_dw(e,w,b)\n```\n\n\n\n\n 0.007829640537794828\n\n\n\n\n```python\nplt.plot(loss)\n```\n\n\n```python\nparams = np.array(params)\nplt.plot(params[:,0],params[:,1])\nplt.title('Gradient descent')\nplt.xlabel('w')\nplt.ylabel('b')\nplt.show()\n```\n\n\n```python\nparams[-1]\n```\n\n\n\n\n array([4.98991104, 2.72258102])\n\n\n\n## Multivariate case\n\nWe are trying to minimise $\\sum \\xi_i^2$. 
This time $ f = Xw$ where $w$ is Dx1 and $X$ is NxD.\n\n\\begin{align}\n\\mathcal{L} & = \\frac{1}{N} (y-Xw)^T(y-Xw) \\\\\n\\frac{\\delta\\mathcal{L}}{\\delta w} & = -\\frac{1}{N} 2\\left(\\frac{\\delta f(X,w)}{\\delta w}\\right)^T(y-Xw) \\\\ \n& = -\\frac{2}{N} \\left(\\frac{\\delta f(X,w)}{\\delta w}\\right)^T\\xi\n\\end{align}\nwhere $\\xi_i$ is the error term $y_i-f(X,w)$ and \n$$\n\\frac{\\delta f(X,w)}{\\delta w} = X\n$$\n\nFinally the weights can be updated as $w_{new} = w_{current} - \\gamma \\frac{\\delta\\mathcal{L}}{\\delta w}$ where $\\gamma$ is a learning rate between 0 and 1.\n\n\n```python\nN = 1000\nD = 5\nX = 5*np.random.randn(N,D)\nw = np.random.randn(D,1)\ny = X.dot(w)\ny_obs = y + np.random.randn(N,1)\n```\n\n\n```python\nw\n```\n\n\n\n\n array([[ 0.93774813],\n [-2.62540124],\n [ 0.74616483],\n [ 0.67411002],\n [ 1.0142675 ]])\n\n\n\n\n```python\nX.shape\n```\n\n\n\n\n (1000, 5)\n\n\n\n\n```python\nw.shape\n```\n\n\n\n\n (5, 1)\n\n\n\n\n```python\n(X*w.T).shape\n```\n\n\n\n\n (1000, 5)\n\n\n\n\n```python\n# Helper functions\ndef f(w):\n return X.dot(w)\n\ndef loss_function(e):\n L = e.T.dot(e)/N\n return L\n\ndef dL_dw(e,w):\n return -2*X.T.dot(e)/N \n```\n\n\n```python\ndef gradient_descent(iter=100,gamma=1e-3):\n # get starting conditions\n w = np.random.randn(D,1)\n params = []\n loss = np.zeros((iter,1))\n for i in range(iter):\n params.append(w)\n e = y_obs - f(w) # Really important that you use y_obs and not y (you do not have access to true y)\n loss[i] = loss_function(e)\n\n #update parameters\n w = w - gamma*dL_dw(e,w)\n \n return params, loss\n \nparams, loss = gradient_descent()\n```\n\n\n```python\nplt.plot(loss)\n```\n\n\n```python\nparams[-1]\n```\n\n\n\n\n array([[ 0.94792987],\n [-2.60989696],\n [ 0.72929842],\n [ 0.65272494],\n [ 1.01038855]])\n\n\n\n\n```python\nmodel = LinearRegression(fit_intercept=False)\nmodel.fit(X,y)\nmodel.coef_.T\n```\n\n\n\n\n array([[ 0.93774813],\n [-2.62540124],\n [ 0.74616483],\n [ 0.67411002],\n [ 1.0142675 ]])\n\n\n\n\n```python\n# compare parameters side by side\nnp.hstack([params[-1],model.coef_.T])\n```\n\n\n\n\n array([[ 0.94792987, 0.93774813],\n [-2.60989696, -2.62540124],\n [ 0.72929842, 0.74616483],\n [ 0.65272494, 0.67411002],\n [ 1.01038855, 1.0142675 ]])\n\n\n\n## Stochastic Gradient Descent\n\n\n```python\ndef dL_dw(X,e,w):\n return -2*X.T.dot(e)/len(X)\n\ndef gradient_descent(gamma=1e-3, n_epochs=100, batch_size=20, decay=0.9):\n epoch_run = int(len(X)/batch_size)\n \n # get starting conditions\n w = np.random.randn(D,1)\n params = []\n loss = np.zeros((n_epochs,1))\n for i in range(n_epochs):\n params.append(w)\n \n for j in range(epoch_run):\n idx = np.random.choice(len(X),batch_size,replace=False)\n e = y_obs[idx] - X[idx].dot(w) # Really important that you use y_obs and not y (you do not have access to true y)\n #update parameters\n w = w - gamma*dL_dw(X[idx],e,w)\n loss[i] = e.T.dot(e)/len(e) \n gamma = gamma*decay #decay the learning parameter\n \n return params, loss\n \nparams, loss = gradient_descent()\n```\n\n\n```python\nplt.plot(loss)\n```\n\n\n```python\nnp.hstack([params[-1],model.coef_.T])\n```\n\n\n\n\n array([[ 0.94494132, 0.93774813],\n [-2.6276984 , -2.62540124],\n [ 0.74654537, 0.74616483],\n [ 0.66766209, 0.67411002],\n [ 1.00760747, 1.0142675 ]])\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "579d49b063b104ea1866841d861d00303db60ae5", "size": 98058, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "jupyter/Keras_TensorFlow_Course/Lesson 02 - GradientDescent.ipynb", 
"max_stars_repo_name": "multivacplatform/multivac-dl", "max_stars_repo_head_hexsha": "54cb33960ba14f32ed9ac185a4c151a6b72a97ca", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2018-11-24T10:47:49.000Z", "max_stars_repo_stars_event_max_datetime": "2018-11-24T10:47:49.000Z", "max_issues_repo_path": "jupyter/Keras_TensorFlow_Course/Lesson 02 - GradientDescent.ipynb", "max_issues_repo_name": "multivacplatform/multivac-dl", "max_issues_repo_head_hexsha": "54cb33960ba14f32ed9ac185a4c151a6b72a97ca", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "jupyter/Keras_TensorFlow_Course/Lesson 02 - GradientDescent.ipynb", "max_forks_repo_name": "multivacplatform/multivac-dl", "max_forks_repo_head_hexsha": "54cb33960ba14f32ed9ac185a4c151a6b72a97ca", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 138.3046544429, "max_line_length": 27052, "alphanum_fraction": 0.8862102021, "converted": true, "num_tokens": 2212, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9658995733060719, "lm_q2_score": 0.8519528019683106, "lm_q1q2_score": 0.8229008478981036}} {"text": "# \u2605 Numerical Differentiaion and Integration \u2605\n\n\n```python\n# Import modules\nimport math\nimport sympy as sym\nimport numpy as np\nimport scipy \nimport matplotlib.pyplot as plt\nimport plotly\nimport plotly.plotly as ply\nimport plotly.figure_factory as ply_ff\nfrom IPython.display import Math\nfrom IPython.display import display\n\n# Startup plotly\nplotly.offline.init_notebook_mode(connected=True)\n\n''' Fix MathJax issue '''\n# The polling here is to ensure that plotly.js has already been loaded before\n# setting display alignment in order to avoid a race condition.\nfrom IPython.core.display import display, HTML\ndisplay(HTML(\n ''\n))\n```\n\n\n\n\n\n\n\n\n\n# 5.1 Numerical Differentiation\n\n## Two-point forward-difference formula\n\n## $f'(x) = \\frac{f(x+h) - f(x)}{h} - \\frac{h}{2}f''(c)$\n\nwhere $c$ is between $x$ and $x+h$\n\n### Example\n\nUse the two-point forward-difference formula with $h = 0.1$ to approximate the derivative of $f(x) = 1/x$ at $x = 2$\n\n\n```python\n# Parameters\nx = 2\nh = 0.1\n\n# Symbolic computation\nsym_x = sym.Symbol('x')\nsym_deri_x1 = sym.diff(1 / sym_x, sym_x)\nsym_deri_x1_num = sym_deri_x1.subs(sym_x, x).evalf()\n\n# Approximation\nf = lambda x : 1 / x\nderi_x1 = (f(x + h) - f(x)) / h\n\n# Comparison\nprint('approximate = %f, real value = %f, backward error = %f' %(deri_x1, sym_deri_x1_num, abs(deri_x1 - sym_deri_x1_num)) )\n```\n\n approximate = -0.238095, real value = -0.250000, backward error = 0.011905\n\n\n## Three-point centered-difference formula\n\n## $f'(x) = \\frac{f(x+h) - f(x-h)}{2h} - \\frac{h^2}{6}f'''(c)$\n\nwhere $x-h < c < x+h$\n\n### Example\n\nUse the three-point centered-difference formula with $h = 0.1$ to approximate the derivative of $f(x) = 1 / x$ at $x = 2$\n\n\n\n```python\n# Parameters\nx = 2\nh = 0.1\nf = lambda x : 1 / x\n\n# Symbolic computation\nsym_x = sym.Symbol('x')\nsym_deri_x1 = sym.diff(1 / sym_x, sym_x)\nsym_deri_x1_num = sym_deri_x1.subs(sym_x, x).evalf()\n\n# Approximation\nderi_x1 = (f(x + h) - f(x - h)) / (2 * h)\n\n# Comparison\nprint('approximate = %f, real value = %f, backward error = %f' %(deri_x1, 
sym_deri_x1_num, abs(deri_x1 - sym_deri_x1_num)) )\n```\n\n approximate = -0.250627, real value = -0.250000, backward error = 0.000627\n\n\n## Three-point centered-difference formula for second derivative\n\n## $f''(x) = \\frac{f(x - h) - 2f(x) + f(x + h)}{h^2} - \\frac{h^2}{12}f^{(iv)}(c)$\n\nfor some $c$ between $x - h$ and $x + h$\n\n## Rounding error\n\n### Example\n\nApproximate the derivative of $f(x) = e^x$ at $x = 0$\n\n\n```python\n# Parameters\nf = lambda x : math.exp(x)\nreal_value = 1\nh_msg = \"$10^{-%d}$\"\ntwp_deri_x1 = lambda x, h : ( f(x + h) - f(x) ) / h\nthp_deri_x1 = lambda x, h : ( f(x + h) - f(x - h) ) / (2 * h)\n\ndata = [\n [\"h\", \n \"$f'(x) \\\\approx \\\\frac{e^{x+h} - e^x}{h}$\", \n \"error\", \n \"$f'(x) \\\\approx \\\\frac{e^{x+h} - e^{x-h}}{2h}$\", \n \"error\"],\n]\n\nfor i in range(1,10):\n h = pow(10, -i)\n twp_deri_x1_value = twp_deri_x1(0, h) \n thp_deri_x1_value = thp_deri_x1(0, h)\n row = [\"\", \"\", \"\", \"\", \"\"]\n row[0] = h_msg %i\n row[1] = '%.14f' %twp_deri_x1_value\n row[2] = '%.14f' %abs(twp_deri_x1_value - real_value)\n row[3] = '%.14f' %thp_deri_x1_value\n row[4] = '%.14f' %abs(thp_deri_x1_value - real_value)\n data.append(row)\n\ntable = ply_ff.create_table(data)\nplotly.offline.iplot(table, show_link=False)\n```\n\n\n
    \n\n\n## Extrapolation for order n formula\n\n## $ Q \\approx \\frac{2^nF(h/2) - F(h)}{2^n - 1} $\n\n\n```python\nsym.init_printing(use_latex=True)\n\nx = sym.Symbol('x')\ndx = sym.diff(sym.exp(sym.sin(x)), x)\nMath('Derivative : %s' %sym.latex(dx) )\n```\n\n\n\n\n$$Derivative : e^{\\sin{\\left (x \\right )}} \\cos{\\left (x \\right )}$$\n\n\n\n# 5.2 Newton-Cotes Formulas For Numerical Integration\n\n## Trapezoid Rule\n\n## $\\int_{x_0}^{x_1} f(x) dx = \\frac{h}{2}(y_0 + y_1) - \\frac{h^3}{12}f''(c)$\n\nwhere $h = x_1 - x_0$ and $c$ is between $x_0$ and $x_1$\n\n## Simpson's Rule\n\n## $\\int_{x_0}^{x_2} f(x) dx = \\frac{h}{3}(y_0 + 4y_1 + y_2) - \\frac{h^5}{90}f^{(iv)}(c)$\n\nwhere $h = x_2 - x_1 = x_1 - x_0$ and $c$ is between $x_0$ and $x_2$\n\n### Example\n\nApply the Trapezoid Rule and Simpson's Rule to approximate $\\int_{1}^{2} \\ln(x) dx$ and find an upper bound for the error in your approximations\n\n\n```python\n# Apply Trapezoid Rule\ntrapz = scipy.integrate.trapz([np.log(1), np.log(2)], [1, 2])\n\n# Evaluate the error term of Trapezoid Rule\nsym_x = sym.Symbol('x')\nexpr = sym.diff(sym.log(sym_x), sym_x, 2)\ntrapz_err = abs(expr.subs(sym_x, 1).evalf() / 12)\n\n# Print out results\nprint('Trapezoid rule : %f and upper bound error : %f' %(trapz, trapz_err) )\n```\n\n Trapezoid rule : 0.346574 and upper bound error : 0.083333\n\n\n\n```python\n# Apply Simpson's Rule\narea = scipy.integrate.simps([np.log(1), np.log(1.5), np.log(2)], [1, 1.5, 2])\n\n# Evaluate the error term\nsym_x = sym.Symbol('x')\nexpr = sym.diff(sym.log(sym_x), sym_x, 4)\nsimps_err = abs( pow(0.5, 5) / 90 * expr.subs(sym_x, 1).evalf() )\n\n# Print out results\nprint('Simpson\\'s rule : %f and upper bound error : %f' %(area, simps_err) )\n```\n\n Simpson's rule : 0.385835 and upper bound error : 0.002083\n\n\n## Composite Trapezoid Rule\n\n## $\\int_{a}^{b} f(x) dx = \\frac{h}{2} \\left ( y_0 + y_m + 2\\sum_{i=1}^{m-1}y_i \\right ) - \\frac{(b-a)h^2}{12}f''(c)$\n\nwhere $h = (b - a) / m $ and $c$ is between $a$ and $b$\n\n## Composite Simpson's Rule\n\n## $ \\int_{a}^{b}f(x)dx = \\frac{h}{3}\\left [ y_0 + y_{2m} + 4\\sum_{i=1}^{m}y_{2i-1} + 2\\sum_{i=1}^{m - 1}y_{2i} \\right ] - \\frac{(b-a)h^4}{180}f^{(iv)}(c) $\n\nwhere $c$ is between $a$ and $b$\n\n### Example\n\nCarry out four-panel approximations of $\\int_{1}^{2} \\ln{x} dx$ using the composite Trapezoid Rule and composite Simpson's Rule\n\n\n```python\n# Apply composite Trapezoid Rule\nx = np.linspace(1, 2, 5)\ny = np.log(x)\ntrapz = scipy.integrate.trapz(y, x)\n\n# Error term\nsym_x = sym.Symbol('x')\nexpr = sym.diff(sym.log(sym_x), sym_x, 2)\ntrapz_err = abs( (2 - 1) * pow(0.25, 2) / 12 * expr.subs(sym_x, 1).evalf() )\n\nprint('Trapezoid Rule : %f, error = %f' %(trapz, trapz_err) )\n```\n\n Trapezoid Rule : 0.383700, error = 0.005208\n\n\n\n```python\n# Apply composite Trapezoid Rule\nx = np.linspace(1, 2, 9)\ny = np.log(x)\narea = scipy.integrate.simps(y, x)\n\n# Error term\nsym_x = sym.Symbol('x')\nexpr = sym.diff(sym.log(sym_x), sym_x, 4)\nsimps_err = abs( (2 - 1) * pow(0.125, 4) / 180 * expr.subs(sym_x, 1).evalf() )\n\nprint('Simpson\\'s Rule : %f, error = %f' %(area, simps_err) )\n```\n\n Simpson's Rule : 0.386292, error = 0.000008\n\n\n## Midpoint Rule\n\n## $ \\int_{x_0}^{x_1} f(x)dx = hf(\\omega) + \\frac{h^3}{24}f''(c) $\n\nwhere $ h = (x_1 - x_0) $, $\\omega$ is the midpoint $ x_0 + h / 2 $, and $c$ is between $x_0$ and $x_1$\n\n## Composite Midpoint Rule\n\n## $ \\int_{a}^{b} f(x) dx = h \\sum_{i=1}^{m}f(\\omega_{i}) + \\frac{(b - 
a)h^2}{24} f''(c) $\n\nwhere $h = (b - a) / m$ and $c$ is between $a$ and $b$. The $\\omega_{i}$ are the midpoints of the $m$ equal subintervals of $[a,b]$\n\n### Example\nApproximate $\\int_{0}^{1} \\frac{\\sin x}{x} dx$ by using the composite Midpoint Rule with $m = 10$ panels\n\n\n```python\n# Parameters\nm = 10\nh = (1 - 0) / m\nf = lambda x : np.sin(x) / x\nmids = np.arange(0 + h/2, 1, h)\n\n# Apply composite midpoint rule\narea = h * np.sum(f(mids))\n\n# Error term\nsym_x = sym.Symbol('x')\nexpr = sym.diff(sym.sin(sym_x) / sym_x, sym_x, 2)\nmid_err = abs( (1 - 0) * pow(h, 2) / 24 * expr.subs(sym_x, 1).evalf() )\n\n# Print out\nprint('Composite Midpoint Rule : %.8f, error = %.8f' %(area, mid_err) )\n```\n\n Composite Midpoint Rule : 0.94620858, error = 0.00009964\n\n\n# 5.3 Romberg Integration\n\n\n```python\ndef romberg(f, a, b, step):\n R = np.zeros(step * step).reshape(step, step)\n R[0][0] = (b - a) * (f(a) + f(b)) / 2\n for j in range(1, step):\n h = (b - a) / pow(2, j)\n summ = 0\n for i in range(1, pow(2, j - 1) + 1):\n summ += h * f(a + (2 * i - 1) * h)\n R[j][0] = 0.5 * R[j - 1][0] + summ\n \n for k in range(1, j + 1):\n R[j][k] = ( pow(4, k) * R[j][k - 1] - R[j - 1][k - 1] ) / ( pow(4, k) - 1 )\n \n return R[step - 1][step - 1]\n```\n\n### Example\n\nApply Romberg Integration to approximate $\\int_{1}^{2} \\ln{x}dx$\n\n\n```python\nf = lambda x : np.log(x)\nresult = romberg(f, 1, 2, 4)\nprint('Romberg Integration : %f' %(result) )\n```\n\n Romberg Integration : 0.386294\n\n\n\n```python\nf = lambda x : np.log(x)\nresult = scipy.integrate.romberg(f, 1, 2, show=True)\nprint('Romberg Integration : %f' %(result) )\n```\n\n Romberg integration of .vfunc at 0x7fa72aad8510> from [1, 2]\n \n Steps StepSize Results\n 1 1.000000 0.346574 \n 2 0.500000 0.376019 0.385835 \n 4 0.250000 0.383700 0.386260 0.386288 \n 8 0.125000 0.385644 0.386292 0.386294 0.386294 \n 16 0.062500 0.386132 0.386294 0.386294 0.386294 0.386294 \n 32 0.031250 0.386254 0.386294 0.386294 0.386294 0.386294 0.386294 \n \n The final result is 0.38629436112 after 33 function evaluations.\n Romberg Integration : 0.386294\n\n\n# 5.4 Adaptive Quadrature\n\n\n```python\n''' Use Trapezoid Rule '''\n\ndef adaptive_quadrature(f, a, b, tol):\n return adaptive_quadrature_recursively(f, a, b, tol, a, b, 0)\n \ndef adaptive_quadrature_recursively(f, a, b, tol, orig_a, orig_b, deep):\n c = (a + b) / 2\n S = lambda x, y : (y - x) * (f(x) + f(y)) / 2\n if abs( S(a, b) - S(a, c) - S(c, b) ) < 3 * tol * (b - a) / (orig_b - orig_a) or deep > 20 :\n return S(a, c) + S(c, b)\n else:\n return adaptive_quadrature_recursively(f, a, c, tol / 2, orig_a, orig_b, deep + 1) + adaptive_quadrature_recursively(f, c, b, tol / 2, orig_a, orig_b, deep + 1)\n```\n\n\n```python\n''' Use Simpon's Rule '''\n\ndef adaptive_quadrature(f, a, b, tol):\n return adaptive_quadrature_recursively(f, a, b, tol, a, b, 0)\n \ndef adaptive_quadrature_recursively(f, a, b, tol, orig_a, orig_b, deep):\n c = (a + b) / 2\n S = lambda x, y : (y - x) * ( f(x) + 4 * f((x + y) / 2) + f(y) ) / 6\n if abs( S(a, b) - S(a, c) - S(c, b) ) < 15 * tol or deep > 20 :\n return S(a, c) + S(c, b)\n else:\n return adaptive_quadrature_recursively(f, a, c, tol / 2, orig_a, orig_b, deep + 1) + adaptive_quadrature_recursively(f, c, b, tol / 2, orig_a, orig_b, deep + 1)\n```\n\n### Example\n\nUse Adaptive Quadrature to approximate the integral $ \\int_{-1}^{1} (1 + \\sin{e^{3x}}) dx $\n\n\n```python\nf = lambda x : 1 + np.sin(np.exp(3 * x))\nval = adaptive_quadrature(f, -1, 1, 
tol=1e-12)\nprint(val)\n```\n\n 2.50080911034\n\n\n# 5.5 Gaussian Quadrature\n\n\n```python\npoly = scipy.special.legendre(2)\n# Find roots of polynomials\ncomp = scipy.linalg.companion(poly)\nroots = scipy.linalg.eig(comp)[0]\n```\n\n### Example\n\nApproximate $\\int_{-1}^{1} e^{-\\frac{x^2}{2}}dx$ using Gaussian Quadrature\n\n\n```python\nf = lambda x : np.exp(-np.power(x, 2) / 2)\nquad = scipy.integrate.quadrature(f, -1, 1)\nprint(quad[0])\n```\n\n 1.71124878401\n\n\n\n```python\n# Parametes\na = -1\nb = 1\ndeg = 3\nf = lambda x : np.exp( -np.power(x, 2) / 2 )\n\nx, w = scipy.special.p_roots(deg) # Or use numpy.polynomial.legendre.leggauss\nquad = np.sum(w * f(x))\n \nprint(quad)\n```\n\n 1.7120202452\n\n\n### Example\n\nApproximate the integral $\\int_{1}^{2} \\ln{x} dx$ using Gaussian Quadrature\n\n\n```python\n# Parametes\na = 1\nb = 2\ndeg = 4\nf = lambda t : np.log( ((b - a) * t + b + a) / 2) * (b - a) / 2\n\nx, w = scipy.special.p_roots(deg)\nnp.sum(w * f(x))\n```\n\n\n\n\n 0.38629449693871409\n\n\n", "meta": {"hexsha": "981052de1c808c532e9ccbb54ac4dfee28580c73", "size": 60495, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "5_Numerical_Differentiation_And_Integration.ipynb", "max_stars_repo_name": "Jim00000/Numerical-Analysis", "max_stars_repo_head_hexsha": "b520a9b0e494bea72c3e54e0cae5bbb7c75748c1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 8, "max_stars_repo_stars_event_min_datetime": "2017-07-08T21:26:07.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-22T11:09:40.000Z", "max_issues_repo_path": "5_Numerical_Differentiation_And_Integration.ipynb", "max_issues_repo_name": "Jim00000/Numerical-Analysis", "max_issues_repo_head_hexsha": "b520a9b0e494bea72c3e54e0cae5bbb7c75748c1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "5_Numerical_Differentiation_And_Integration.ipynb", "max_forks_repo_name": "Jim00000/Numerical-Analysis", "max_forks_repo_head_hexsha": "b520a9b0e494bea72c3e54e0cae5bbb7c75748c1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2019-01-06T09:55:29.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-28T15:45:32.000Z", "avg_line_length": 37.691588785, "max_line_length": 10945, "alphanum_fraction": 0.4069261922, "converted": true, "num_tokens": 4416, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9324533144915912, "lm_q2_score": 0.8824278710924296, "lm_q1q2_score": 0.8228227931998946}} {"text": "# Evaluating numerical errors and accuracy\n\n- toc: false\n- branch: master\n- badges: true\n- comments: false\n- categories: [mathematics, numerical recipes]\n- hide: true\n\n-----\nQuestions:\n\n- Which numerical errors are unavoidable in a Python programme?\n- How do I choose the optimum step size $h$ when using the finite difference method?\n- What do the terms first-order accurate and second-order accurate mean?\n- How can I measure the speed of my code?\n\nObjectives:\n\n- Understand that there are unavoidable rounding errors when working with floats\n- Write code for testing if two floats are equivalent (to within machine accuracy)\n- Calculate the optimum step size $h$ for finite difference methods\n- Measure the length of time a Notebook cell takes to run using the `%time` magic.\n-----\n\n### Computers have inherent limitations that lead to rounding errors\n\n- We have seen how computer programming can be used to model physical systems. However computers have inherent limitations - they cannot store real numbers with an infinite number of decimal places.\n- In many cases this is not a problem, but it is something to be aware of. For example, take the following piece of code:\n\n\n```python\ndef add_numbers(a,b):\n return a+b\n\ndef test_add_numbers():\n assert add_numbers(0.1,0.2) == 0.3\n```\n\n- `add_numbers` is a function for adding two Python objects `a` and `b`.\n- `test_add_numbers` is a function for testing is the `add_numbers` function is working as expected (we will see more on testing later in the course). This function will raise an error if $0.1 + 0.2$ does not equal 0.3.\n\n\n```python\nadd_numbers(1,2)\n```\n\n\n\n\n 3\n\n\n\nThe `add_numbers` function works as expected if we pass it two integers. However when we run the test function we raise an assertion error:\n\n\n\n```python\ntest_add_numbers()\n```\n\nThis rounding error is given because $0.1+0.2$ does not equal 0.3 exactly:\n\n\n```python\n0.1+0.2\n```\n\n\n\n\n 0.30000000000000004\n\n\n\nThis is because floating point numbers (floats) are represented on the computer to a certain precision. In Python the standard level of precision is 16 significant digits.\n\n> Note: The largest value you can give a floating point variable is about $10^{308}$. The smallest is -$10^{308}$. If you exceed these values (which is unlikely) then the computer will return an Overflow error. In contrast, PYthon can represent integers to any precision - limited only by the memory of the machine.\n\n### Do not test for the equality of two floats\n\nAs we have seen in the previous example, we should not test for the equality of two floats. Instead we should ask if they are equal up to a given precision:\n\n\n\n```python\ndef add_numbers(a,b):\n return a+b\n\nepsilon = 1e-12\n\ndef test_add_numbers():\n assert abs(add_numbers(0.1,0.2) - 0.3) < epsilon\n```\n\n\n```python\ntest_add_numbers()\n```\n\n### Finite difference methods have two sources of error\n\n- There are two sources of errors for finite difference methods: one is from the approximation that the step size $h$ is small but not zero. The second is from the rounding errors introduced at the start of this tutorial.\n- One way of improving the finite-$h$ approximation is to decrease the step size in space (use a higher number of points on our real space grid). 
However when the step size is decreased the programme will run more slowly.\n- We also need to think about the rounding errors associated with finite differences. Counter-intuitively, these errors can increase as we decrease the step size $h$. \n\nTo demonstrate this, consider the Taylor expansion of $f(x)$ about $x$:\n\n\\begin{equation}\nf(x+h) = f(x) + hf'(x) +\\frac{1}{2}h^2f''(x) + \\ldots\n\\end{equation}\n\nRe-arrange the expression to get the expression for the forward difference method:\n\n\\begin{equation}\nf'(x) = \\frac{f(x+h)}{h} - \\frac{1}{2}hf''(x)+\\ldots\n\\end{equation}\n\nA computer can typically store a number $f(x)$ to an accuracy of 16 significant figures, or $Cf(x)$ where $C=10^{-16}$. In the worst case, this makes the error $\\epsilon$ on our derivative:\n\n\\begin{equation}\n\\epsilon = \\frac{2C|f(x)|}{h} + \\frac{1}{2}h|f''(x)|.\n\\end{equation}\n\nWe want to find the value of $h$ which minimises this error so we differentiate with respect to $h$ and set the result equal to zero.\n\n\\begin{equation}\n-\\frac{2C|f(x)|}{h^2} + h|f''(x)|\n\\end{equation}\n\n\\begin{equation}\nh = \\sqrt{4C\\lvert\\frac{f(x)}{f''(x)}\\rvert}\n\\end{equation}\n\nIf $f(x)$ and $f''(x)$ are order 1, then $h$ should be order $\\sqrt{C}$, or $10^{-8}$.\n\nSimilar reasoning applied to the central difference formula suggests that the optimum step size for this method is $10^{-5}$.\n\n### The relaxation method for PDEs is limited by the accuracy of the finite difference method.\n\n- For solving PDEs we use the relaxation method combined with a finite difference method.\n- Even if we use a very small target accuracy for convergence of the relaxation method, our accuracy will still be limited by the finite differences. Higher-order finite difference methods (such as the 5-point or 7-point methods) can be used here to improve the overrall accuracy of the calculation.\n\n### Euler's method is a first-order method accurate to order $h$.\n\n- As we have seen in previous tutorials, numerical methods (such as Euler's method or the Runge-Kutta method) give approximate solutions.\n- Euler's method neglected the term in $h^2$ and higher:\n\\begin{equation}\nx(t+h) = x(t)+hf(x,t)+\\mathcal{O}(h^2)\n\\end{equation}\n- This tells us the error introduced on a single step of the method is proportional to $h^2$ - this makes Euler's method a first-order method, accurate to order $h$.\n- However the cumulative error over several steps is proportional to $h$ \n- So to make our error half as large we need to double the number of steps (halve the step size) and double the length of the calculation.\n\n### The second-order Runge-Kutta method is accurate to order $h^2$.\n\n- The error term for one step of the Runge-Kutta method is ${O}(h^3)$ - this makes the Runge-Kutta method accurate to order $h^2$ which is why this is called the second-order Runge Kutta method (RK2).\n- With the RK2 can use a fewer number of steps whilst getting the same accuracy as Euler's method.\n- There are higher order Runge-Kutta methods which increase the accuracy further.\n\n### Use the %time magic to measure the length of time a Jupyter Notebook cell takes to run \n\n\n\n```python\ndef sum_integers(max_integer):\n count = 0\n for i in range(max_integer):\n count += max_integer + 1\n \n return count\n \n```\n\n\n```python\n%time sum = sum_integers(1000000)\n```\n\n CPU times: user 100 ms, sys: 3.79 ms, total: 104 ms\n Wall time: 110 ms\n\n\n---\nKeypoints:\n\n- Computers have inherent limitations that lead to rounding errors\n- Do not 
test for the equality of two floats\n- Finite difference methods have two sources of error\n- The relaxation method for PDEs is limited by the accuracy of the finite difference method.\n- Euler's method is a first-order method accurate to order $h$.\n- The second-order Runge-Kutta method is accurate to order $h^2$.\n\n\n\n---\n\nDo [the quick-test](https://nu-cem.github.io/CompPhys/2021/08/02/Evaluating-Accuracy-Qs.html).\n\nBack to [Modelling with Partial Differential Equations](https://nu-cem.github.io/CompPhys/2021/08/02/PDEs.html).\n\n---\n", "meta": {"hexsha": "ef1f3e42994cf26b4d41e3d14babd170fd46d783", "size": 12996, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "_notebooks/2021-08-02-Evaluating-Accuracy.ipynb", "max_stars_repo_name": "NU-CEM/CompPhys", "max_stars_repo_head_hexsha": "7a7e8ab672797d6db0dbbd673fdc66c6e7bad971", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "_notebooks/2021-08-02-Evaluating-Accuracy.ipynb", "max_issues_repo_name": "NU-CEM/CompPhys", "max_issues_repo_head_hexsha": "7a7e8ab672797d6db0dbbd673fdc66c6e7bad971", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 17, "max_issues_repo_issues_event_min_datetime": "2021-10-06T08:11:43.000Z", "max_issues_repo_issues_event_max_datetime": "2021-11-24T13:46:08.000Z", "max_forks_repo_path": "_notebooks/2021-08-02-Evaluating-Accuracy.ipynb", "max_forks_repo_name": "NU-CEM/CompPhys", "max_forks_repo_head_hexsha": "7a7e8ab672797d6db0dbbd673fdc66c6e7bad971", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-11-03T10:11:28.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-03T10:11:28.000Z", "avg_line_length": 35.7032967033, "max_line_length": 703, "alphanum_fraction": 0.5998768852, "converted": true, "num_tokens": 1871, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9324533126145178, "lm_q2_score": 0.882427872638409, "lm_q1q2_score": 0.8228227929850663}} {"text": "```python\n%matplotlib inline\nfrom IPython.display import display,Math\nfrom sympy import *\ninit_session()\n```\n\n\n```python\n%%time\n# \u6700\u5927\u516c\u7d04\u6570 gcd(a, b) \u3092\u6c42\u3081\u308b\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0 \u305d\u306e1\na, b = 20210608, 80601202\ni = 1\nwhile i <= b:\n if a%i == 0:\n if b%i == 0:\n M = i\n i = i+1\nprint(M)\n```\n\n\n```python\n%%time\n# \u6700\u5927\u516c\u7d04\u6570 gcd(a, b) \u3092\u6c42\u3081\u308b\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0 \u305d\u306e2\na, b = 20210608, 80601202\nr = a%b\nwhile r != 0:\n a,b = b,r\n r = a%b\nprint(b)\n```\n\n\n```python\nfrom ipywidgets import interact\nimport time\ndef mygcd2(a,b):\n r = a%b\n count = 1\n while r != 0:\n count += 1\n a,b = b,r\n r = a%b\n return b,count\n@interact\ndef _(a=\"314159265\",b=\"35\"):\n digits = len(b)\n a,b = int(a),int(b)\n d,count = mygcd2(a,b)\n return display(Math(\"gcd({0:d},{1:d})={2:d}, (digits,count)=({3:d},{4:d})\".format(a,b,d,digits,count)))\n```\n\n\n```python\n\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "fa419b6f1d8fd27d9c3c5424859f38a7211b55de", "size": 2812, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "21jk1-0609.ipynb", "max_stars_repo_name": "ritsumei-aoi/21jk1", "max_stars_repo_head_hexsha": "2d49628ef8721a507193a58aa1af4b31a60dfd8b", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "21jk1-0609.ipynb", "max_issues_repo_name": "ritsumei-aoi/21jk1", "max_issues_repo_head_hexsha": "2d49628ef8721a507193a58aa1af4b31a60dfd8b", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "21jk1-0609.ipynb", "max_forks_repo_name": "ritsumei-aoi/21jk1", "max_forks_repo_head_hexsha": "2d49628ef8721a507193a58aa1af4b31a60dfd8b", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 19.3931034483, "max_line_length": 115, "alphanum_fraction": 0.4715504979, "converted": true, "num_tokens": 351, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9324533144915913, "lm_q2_score": 0.8824278633625322, "lm_q1q2_score": 0.8228227859921262}} {"text": "# Statistics Fundamentals\nStatistics is primarily about analyzing data samples, and that starts with udnerstanding the distribution of data in a sample.\n\n## Analyzing Data Distribution\nA great deal of statistical analysis is based on the way that data values are distributed within the dataset. In this section, we'll explore some statistics that you can use to tell you about the values in a dataset.\n\n### Measures of Central Tendency\nThe term *measures of central tendency* sounds a bit grand, but really it's just a fancy way of saying that we're interested in knowing where the middle value in our data is. For example, suppose decide to conduct a study into the comparative salaries of people who graduated from the same school. 
You might record the results like this:\n\n| Name | Salary |\n|----------|-------------|\n| Dan | 50,000 |\n| Joann | 54,000 |\n| Pedro | 50,000 |\n| Rosie | 189,000 |\n| Ethan | 55,000 |\n| Vicky | 40,000 |\n| Frederic | 59,000 |\n\nNow, some of the former-students may earn a lot, and others may earn less; but what's the salary in the middle of the range of all salaries?\n\n#### Mean\nA common way to define the central value is to use the *mean*, often called the *average*. This is calculated as the sum of the values in the dataset, divided by the number of observations in the dataset. When the dataset consists of the full population, the mean is represented by the Greek symbol ***μ*** (*mu*), and the formula is written like this:\n\n\\begin{equation}\\mu = \\frac{\\displaystyle\\sum_{i=1}^{N}X_{i}}{N}\\end{equation}\n\nMore commonly, when working with a sample, the mean is represented by ***x̄*** (*x-bar*), and the formula is written like this (note the lower case letters used to indicate values from a sample):\n\n\\begin{equation}\\bar{x} = \\frac{\\displaystyle\\sum_{i=1}^{n}x_{i}}{n}\\end{equation}\n\nIn the case of our list of heights, this can be calculated as:\n\n\\begin{equation}\\bar{x} = \\frac{50000+54000+50000+189000+55000+40000+59000}{7}\\end{equation}\n\nWhich is **71,000**.\n\n>In technical terminology, ***x̄*** is a *statistic* (an estimate based on a sample of data) and ***μ*** is a *parameter* (a true value based on the entire population). A lot of the time, the parameters for the full population will be impossible (or at the very least, impractical) to measure; so we use statistics obtained from a representative sample to approximate them. In this case, we can use the sample mean of salary for our selection of surveyed students to try to estimate the actual average salary of all students who graduate from our school.\n\nIn Python, when working with data in a *pandas.dataframe*, you can use the ***mean*** function, like this:\n\n\n```python\nimport pandas as pd\n\n\ndf = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],\n 'Salary':[50000,54000,50000,189000,55000,40000,59000]})\n\nprint (df['Salary'].mean())\n```\n\n 71000.0\n\n\nSo, is **71,000** really the central value? Or put another way, would it be reasonable for a graduate of this school to expect to earn $71,000? After all, that's the average salary of a graduate from this school.\n\nIf you look closely at the salaries, you can see that out of the seven former students, six earn less than the mean salary. The data is *skewed* by the fact that Rosie has clearly managed to find a much higher-paid job than her classmates.\n\n#### Median\nOK, let's see if we can find another definition for the central value that more closely reflects the expected earning potential of students attending our school. Another measure of central tendancy we can use is the *median*. To calculate the median, we need to sort the values into ascending order and then find the middle-most value. When there are an odd number of observations, you can find the position of the median value using this formula (where *n* is the number of observations):\n\n\\begin{equation}\\frac{n+1}{2}\\end{equation}\n\nRemember that this formula returns the *position* of the median value in the sorted list; not the value itself.\n\nIf the number of observations is even, then things are a little (but not much) more complicated. 
In this case you calculate the median as the average of the two middle-most values, which are found like this:\n\n\\begin{equation}\\frac{n}{2} \\;\\;\\;\\;and \\;\\;\\;\\; \\frac{n}{2} + 1\\end{equation}\n\nSo, for our graduate salaries; first lets sort the dataset:\n\n| Salary |\n|-------------|\n| 40,000 |\n| 50,000 |\n| 50,000 |\n| 54,000 |\n| 55,000 |\n| 59,000 |\n| 189,000 |\n\nThere's an odd number of observation (7), so the median value is at position (7 + 1) ÷ 2; in other words, position 4:\n\n| Salary |\n|-------------|\n| 40,000 |\n| 50,000 |\n| 50,000 |\n|***>54,000*** |\n| 55,000 |\n| 59,000 |\n| 189,000 |\n\nSo the median salary is **54,000**.\n\nThe *pandas.dataframe* class in Python has a ***median*** function to find the median:\n\n\n```python\nimport pandas as pd\n\n\ndf = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],\n 'Salary':[50000,54000,50000,189000,55000,40000,59000]})\n\nprint (df['Salary'].median())\n```\n\n 54000.0\n\n\n#### Mode\nAnother related statistic is the *mode*, which indicates the most frequently occurring value. If you think about it, this is potentially a good indicator of how much a student might expect to earn when they graduate from the school; out of all the salaries that are being earned by former students, the mode is earned by more than any other.\n\nLooking at our list of salaries, there are two instances of former students earning **50,000**, but only one instance each for all other salaries:\n\n| Salary |\n|-------------|\n| 40,000 |\n|***>50,000***|\n|***>50,000***|\n| 54,000 |\n| 55,000 |\n| 59,000 |\n| 189,000 |\n\nThe mode is therefore **50,000**.\n\nAs you might expect, the *pandas.dataframe* class has a ***mode*** function to return the mode:\n\n\n```python\nimport pandas as pd\n\n\ndf = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],\n 'Salary':[50000,54000,50000,189000,55000,40000,59000]})\n\nprint (df['Salary'].mode())\n```\n\n 0 50000\n dtype: int64\n\n\n##### Multimodal Data\nIt's not uncommon for a set of data to have more than one value as the mode. For example, suppose Ethan receives a raise that takes his salary to **59,000**:\n\n| Salary |\n|-------------|\n| 40,000 |\n|***>50,000***|\n|***>50,000***|\n| 54,000 |\n|***>59,000***|\n|***>59,000***|\n| 189,000 |\n\nNow there are two values with the highest frequency. This dataset is *bimodal*. More generally, when there is more than one mode value, the data is considered *multimodal*.\n\nThe *pandas.dataframe.**mode*** function returns all of the modes:\n\n\n```python\nimport pandas as pd\n\n\ndf = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],\n 'Salary':[50000,54000,50000,189000,59000,40000,59000]})\n\nprint (df['Salary'].mode())\n```\n\n 0 50000\n 1 59000\n dtype: int64\n\n\n### Distribution and Density\nNow we know something about finding the center, we can start to explore how the data is distributed around it. What we're interested in here is understanding the general \"shape\" of the data distribution so that we can begin to get a feel for what a 'typical' value might be expected to be.\n\nWe can start by finding the extremes - the minimum and maximum. 
In the case of our salary data, the lowest paid graduate from our school is Vicky, with a salary of **40,000**; and the highest-paid graduate is Rosie, with **189,000**.\n\nThe *pandas.dataframe* class has ***min*** and ***max*** functions to return these values.\n\nRun the following code to compare the minimum and maximum salaries to the central measures we calculated previously:\n\n\n```python\nimport pandas as pd\n\ndf = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],\n 'Salary':[50000,54000,50000,189000,55000,40000,59000]})\n\nprint ('Min: ' + str(df['Salary'].min()))\nprint ('Mode: ' + str(df['Salary'].mode()[0]))\nprint ('Median: ' + str(df['Salary'].median()))\nprint ('Mean: ' + str(df['Salary'].mean()))\nprint ('Max: ' + str(df['Salary'].max()))\n```\n\n Min: 40000\n Mode: 50000\n Median: 54000.0\n Mean: 71000.0\n Max: 189000\n\n\nWe can examine these values, and get a sense for how the data is distributed - for example, we can see that the *mean* is closer to the max than the *median*, and that both are closer to the *min* than to the *max*.\n\nHowever, it's generally easier to get a sense of the distribution by visualizing the data. Let's start by creating a histogram of the salaries, highlighting the *mean* and *median* salaries (the *min*, *max* are fairly self-evident, and the *mode* is wherever the highest bar is):\n\n\n```python\n%matplotlib inline\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\ndf = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],\n 'Salary':[50000,54000,50000,189000,55000,40000,59000]})\n\nsalary = df['Salary']\nsalary.plot.hist(title='Salary Distribution', color='lightblue', bins=25) \nplt.axvline(salary.mean(), color='magenta', linestyle='dashed', linewidth=2)\nplt.axvline(salary.median(), color='green', linestyle='dashed', linewidth=2)\nplt.show()\n```\n\nThe ***mean*** and ***median*** are shown as dashed lines. Note the following:\n- *Salary* is a continuous data value - graduates could potentially earn any value along the scale, even down to a fraction of cent.\n- The number of bins in the histogram determines the size of each salary band for which we're counting frequencies. Fewer bins means merging more individual salaries together to be counted as a group.\n- The majority of the data is on the left side of the histogram, reflecting the fact that most graduates earn between 40,000 and 55,000\n- The mean is a higher value than the median and mode.\n- There are gaps in the histogram for salary bands that nobody earns.\n\nThe histogram shows the relative frequency of each salary band, based on the number of bins. It also gives us a sense of the *density* of the data for each point on the salary scale. 
With enough data points, and small enough bins, we could view this density as a line that shows the shape of the data distribution.\n\nRun the following cell to show the density of the salary data as a line on top of the histogram:\n\n\n```python\n%matplotlib inline\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport scipy.stats as stats\n\ndf = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],\n 'Salary':[50000,54000,50000,189000,55000,40000,59000]})\n\nsalary = df['Salary']\ndensity = stats.gaussian_kde(salary)\nn, x, _ = plt.hist(salary, histtype='step', normed=True, bins=25) \nplt.plot(x, density(x)*5)\nplt.axvline(salary.mean(), color='magenta', linestyle='dashed', linewidth=2)\nplt.axvline(salary.median(), color='green', linestyle='dashed', linewidth=2)\nplt.show()\n\n```\n\nNote that the density line takes the form of an asymmetric curve that has a \"peak\" on the left and a long tail on the right. We describe this sort of data distribution as being *skewed*; that is, the data is not distributed symmetrically but \"bunched together\" on one side. In this case, the data is bunched together on the left, creating a long tail on the right; and is described as being *right-skewed* because some infrequently occurring high values are pulling the *mean* to the right.\n\nLet's take a look at another set of data. We know how much money our graduates make, but how many hours per week do they need to work to earn their salaries? Here's the data:\n\n| Name | Hours |\n|----------|-------|\n| Dan | 41 |\n| Joann | 40 |\n| Pedro | 36 |\n| Rosie | 30 |\n| Ethan | 35 |\n| Vicky | 39 |\n| Frederic | 40 |\n\nRun the following code to show the distribution of the hours worked:\n\n\n```python\n%matplotlib inline\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport scipy.stats as stats\n\ndf = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],\n 'Hours':[41,40,36,30,35,39,40]})\n\nhours = df['Hours']\ndensity = stats.gaussian_kde(hours)\nn, x, _ = plt.hist(hours, histtype='step', normed=True, bins=25) \nplt.plot(x, density(x)*7)\nplt.axvline(hours.mean(), color='magenta', linestyle='dashed', linewidth=2)\nplt.axvline(hours.median(), color='green', linestyle='dashed', linewidth=2)\nplt.show()\n```\n\nOnce again, the distribution is skewed, but this time it's **left-skewed**. Note that the curve is asymmetric with the ***mean*** to the left of the ***median*** and the *mode*; and the average weekly working hours skewed to the lower end.\n\nOnce again, Rosie seems to be getting the better of the deal. She earns more than her former classmates for working fewer hours. 
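\n\nAs a quick, illustrative sketch (the derived 'SalaryPerWeeklyHour' column is just a convenient name for this comparison, not a standard statistic), we can combine the salary and weekly hours figures above and rank the graduates by salary earned per weekly hour worked:\n\n\n```python\nimport pandas as pd\n\ndf = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],\n                   'Salary':[50000,54000,50000,189000,55000,40000,59000],\n                   'Hours':[41,40,36,30,35,39,40]})\n\n# Annual salary divided by weekly hours worked - a crude pay-rate indicator\ndf['SalaryPerWeeklyHour'] = df['Salary'] / df['Hours']\nprint(df.sort_values('SalaryPerWeeklyHour', ascending=False))\n```\n\nRosie's ratio (roughly 6,300) is several times higher than everyone else's (roughly 1,000 to 1,600), which matches the skewed salary and hours distributions we've just seen.\n\n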
Maybe a look at the test scores the students achieved on their final grade at school might help explain her success:\n\n| Name | Grade |\n|----------|-------|\n| Dan | 50 |\n| Joann | 50 |\n| Pedro | 46 |\n| Rosie | 95 |\n| Ethan | 50 |\n| Vicky | 5 |\n| Frederic | 57 |\n\nLet's take a look at the distribution of these grades:\n\n\n```python\n%matplotlib inline\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport scipy.stats as stats\n\ndf = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],\n 'Grade':[50,50,46,95,50,5,57]})\n\ngrade = df['Grade']\ndensity = stats.gaussian_kde(grade)\nn, x, _ = plt.hist(grade, histtype='step', normed=True, bins=25) \nplt.plot(x, density(x)*7.5)\nplt.axvline(grade.mean(), color='magenta', linestyle='dashed', linewidth=2)\nplt.axvline(grade.median(), color='green', linestyle='dashed', linewidth=2)\nplt.show()\n```\n\nThis time, the distribution is symmetric, forming a \"bell-shaped\" curve. The ***mean***, ***median***, and mode are at the same location, and the data tails off evenly on both sides from a central peak.\n\nStatisticians call this a *normal* distribution (or sometimes a *Gaussian* distribution), and it occurs quite commonly in many scenarios due to something called the *Central Limit Theorem*, which reflects the way continuous probability works - more about that later.\n\n#### Skewness and Kurtosis\nYou can measure *skewness* (in which direction the data is skewed and to what degree) and kurtosis (how \"peaked\" the data is) to get an idea of the shape of the data distribution. In Python, you can use the ***skew*** and ***kurt*** functions to find this:\n\n\n```python\n%matplotlib inline\nimport pandas as pd\nimport numpy as np\nfrom matplotlib import pyplot as plt\nimport scipy.stats as stats\n\ndf = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],\n 'Salary':[50000,54000,50000,189000,55000,40000,59000],\n 'Hours':[41,40,36,30,35,39,40],\n 'Grade':[50,50,46,95,50,5,57]})\n\nnumcols = ['Salary', 'Hours', 'Grade']\nfor col in numcols:\n print(df[col].name + ' skewness: ' + str(df[col].skew()))\n print(df[col].name + ' kurtosis: ' + str(df[col].kurt()))\n density = stats.gaussian_kde(df[col])\n n, x, _ = plt.hist(df[col], histtype='step', normed=True, bins=25) \n plt.plot(x, density(x)*6)\n plt.show()\n print('\\n')\n```\n\nNow let's look at the distribution of a real dataset - let's see how the heights of the father's measured in Galton's study of parent and child heights are distributed:\n\n\n```python\n%matplotlib inline\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport scipy.stats as stats\n\nimport statsmodels.api as sm\n\ndf = sm.datasets.get_rdataset('GaltonFamilies', package='HistData').data\n\n\nfathers = df['father']\ndensity = stats.gaussian_kde(fathers)\nn, x, _ = plt.hist(fathers, histtype='step', normed=True, bins=50) \nplt.plot(x, density(x)*2.5)\nplt.axvline(fathers.mean(), color='magenta', linestyle='dashed', linewidth=2)\nplt.axvline(fathers.median(), color='green', linestyle='dashed', linewidth=2)\nplt.show()\n\n```\n\nAs you can see, the father's height measurements are approximately normally distributed - in other words, they form a more or less *normal* distribution that is symmetric around the mean.\n\n### Measures of Variance\nWe can see from the distribution plots of our data that the values in our dataset can vary quite widely. 
We can use various measures to quantify this variance.\n\n#### Range\nA simple way to quantify the variance in a dataset is to identify the difference between the lowest and highest values. This is called the *range*, and is calculated by subtracting the minimum value from the maximum value.\n\nThe following Python code creates a single Pandas dataframe for our school graduate data, and calculates the *range* for each of the numeric features:\n\n\n```python\nimport pandas as pd\n\ndf = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],\n 'Salary':[50000,54000,50000,189000,55000,40000,59000],\n 'Hours':[41,40,36,30,35,39,40],\n 'Grade':[50,50,46,95,50,5,57]})\n\nnumcols = ['Salary', 'Hours', 'Grade']\nfor col in numcols:\n print(df[col].name + ' range: ' + str(df[col].max() - df[col].min()))\n```\n\n#### Percentiles and Quartiles\nThe range is easy to calculate, but it's not a particularly useful statistic. For example, a range of 149,000 between the lowest and highest salary does not tell us which value within that range a graduate is most likely to earn - it doesn't tell us anything about how the salaries are distributed around the mean within that range. The range tells us very little about the comparative position of an individual value within the distribution - for example, Frederic scored 57 in his final grade at school, which is a pretty good score (it's more than all but one of his classmates); but this isn't immediately apparent from a score of 57 and range of 90.\n\n##### Percentiles\nA percentile tells us where a given value is ranked in the overall distribution. For example, 25% of the data in a distribution has a value lower than the 25th percentile; 75% of the data has a value lower than the 75th percentile, and so on. Note that half of the data has a value lower than the 50th percentile - so the 50th percentile is also the median!\n\nLet's examine Frederic's grade using this approach. We know he scored 57, but how does he rank compared to his fellow students?\n\nWell, there are seven students in total, and five of them scored less than Frederic; so we can calculate the percentile for Frederic's grade like this:\n\n\\begin{equation}\\frac{5}{7} \\times 100 \\approx 71.4\\end{equation} \n\nSo Frederic's score puts him at the 71.4th percentile in his class.\n\nIn Python, you can use the ***percentileofscore*** function in the *scipy.stats* package to calculate the percentile for a given value in a set of values:\n\n\n```python\nimport pandas as pd\nfrom scipy import stats\n\ndf = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],\n 'Salary':[50000,54000,50000,189000,55000,40000,59000],\n 'Hours':[41,40,36,30,35,39,40],\n 'Grade':[50,50,46,95,50,5,57]})\n\nprint(stats.percentileofscore(df['Grade'], 57, 'strict'))\n```\n\nWe've used the strict definition of percentile; but sometimes it's calculated as being the percentage of values that are less than *or equal to* the value you're comparing. 
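\n\nEither convention is easy to check by hand with a simple count (a minimal sketch using the same grade data):\n\n\n```python\nimport pandas as pd\n\ndf = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],\n 'Grade':[50,50,46,95,50,5,57]})\n\n# 'strict': the proportion of grades strictly below Frederic's 57\nprint((df['Grade'] < 57).sum() / len(df) * 100)\n\n# the alternative also counts grades equal to 57\nprint((df['Grade'] <= 57).sum() / len(df) * 100)\n```\n\n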
Using that less-than-or-equal definition, the calculation for Frederic's percentile would include his own score:\n\n\\begin{equation}\\frac{6}{7} \\times 100 \\approx 85.7\\end{equation} \n\nYou can calculate it this way in Python by using the ***weak*** mode of the ***percentileofscore*** function:\n\n\n```python\nimport pandas as pd\nfrom scipy import stats\n\ndf = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],\n 'Salary':[50000,54000,50000,189000,55000,40000,59000],\n 'Hours':[41,40,36,30,35,39,40],\n 'Grade':[50,50,46,95,50,5,57]})\n\nprint(stats.percentileofscore(df['Grade'], 57, 'weak'))\n```\n\nWe've considered the percentile of Frederic's grade, and used it to rank him compared to his fellow students. So what about Dan, Joann, and Ethan? How do they compare to the rest of the class? They scored the same grade (50), so in a sense they share a percentile.\n\nTo deal with this *grouped* scenario, we can average the percentage rankings for the matching scores. We treat half of the scores matching the one we're ranking as if they are below it, and half as if they are above it. In this case, there were three matching scores of 50, and for each of these we calculate the percentile as if 1 was below and 1 was above. So the calculation for a percentile for Joann based on scores being less than or equal to 50 is:\n\n\\begin{equation}(\\frac{4}{7}) \\times 100 \\approx 57.14\\end{equation} \n\nThe value of **4** consists of the two scores that are below Joann's score of 50, Joann's own score, and half of the scores that are the same as Joann's (of which there are two, so we count one).\n\nIn Python, the ***percentileofscore*** function supports a ***rank*** mode that calculates grouped percentiles like this:\n\n\n```python\nimport pandas as pd\nfrom scipy import stats\n\ndf = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],\n 'Salary':[50000,54000,50000,189000,55000,40000,59000],\n 'Hours':[41,40,36,30,35,39,40],\n 'Grade':[50,50,46,95,50,5,57]})\n\nprint(stats.percentileofscore(df['Grade'], 50, 'rank'))\n```\n\n##### Quartiles\nRather than using individual percentiles to compare data, we can consider the overall spread of the data by dividing those percentiles into four *quartiles*. The first quartile contains the values from the minimum to the 25th percentile, the second from the 25th percentile to the 50th percentile (which is the median), the third from the 50th percentile to the 75th percentile, and the fourth from the 75th percentile to the maximum.\n\nIn Python, you can use the ***quantile*** function of the *pandas.dataframe* class to find the threshold values at the 25th, 50th, and 75th percentiles (*quantile* is a generic term for a ranked position, such as a percentile or quartile).\n\nRun the following code to find the quartile thresholds for the weekly hours worked by our former students:\n\n\n```python\n# Quartiles\nimport pandas as pd\n\ndf = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],\n 'Salary':[50000,54000,50000,189000,55000,40000,59000],\n 'Hours':[41,40,36,17,35,39,40],\n 'Grade':[50,50,46,95,50,5,57]})\nprint(df['Hours'].quantile([0.25, 0.5, 0.75]))\n```\n\nIt's usually easier to understand how data is distributed across the quartiles by visualizing it. 
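\n\nOne number worth extracting from those thresholds is the *interquartile range* (IQR) - the difference between the 0.75 and 0.25 thresholds. Unlike the full range, it isn't thrown off by a single extreme value, and it's the quantity the box plot described below is built around (a short sketch using the same hours data):\n\n\n```python\nimport pandas as pd\n\ndf = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],\n 'Hours':[41,40,36,30,35,39,40]})\n\n# Interquartile range = 75th percentile threshold - 25th percentile threshold\nq1 = df['Hours'].quantile(0.25)\nq3 = df['Hours'].quantile(0.75)\nprint('IQR: ' + str(q3 - q1))\n```\n\n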
You can use a histogram, but many data scientists use a kind of visualization called a *box plot* (or a *box and whiskers* plot).\n\nLet's create a box plot for the weekly hours:\n\n\n```python\n%matplotlib inline\nimport pandas as pd\nfrom matplotlib import pyplot as plt\n\ndf = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],\n 'Salary':[50000,54000,50000,189000,55000,40000,59000],\n 'Hours':[41,40,36,30,35,39,40],\n 'Grade':[50,50,46,95,50,5,57]})\n\n# Plot a box-whisker chart\ndf['Hours'].plot(kind='box', title='Weekly Hours Distribution', figsize=(10,8))\nplt.show()\n```\n\nThe box plot consists of:\n- A rectangular *box* that shows where the data between the 25th and 75th percentile (the second and third quartiles) lie. This part of the distribution is often referred to as the *interquartile range* - it contains the middle 50% of the data values.\n- *Whiskers* that extend from the box to the bottom of the first quartile and the top of the fourth quartile to show the full range of the data.\n- A line in the box that shows the location of the median (the 50th percentile, which is also the threshold between the second and third quartiles)\n\nIn this case, you can see that the interquartile range is between 35 and 40, with the median nearer the top of that range. The range of the first quartile is from around 30 to 35, and the fourth quartile is from 40 to 41.\n\n#### Outliers\nLet's take a look at another box plot - this time showing the distribution of the salaries earned by our former classmates:\n\n\n```python\n%matplotlib inline\nimport pandas as pd\nfrom matplotlib import pyplot as plt\n\ndf = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],\n 'Salary':[50000,54000,50000,189000,55000,40000,59000],\n 'Hours':[41,40,36,30,35,39,40],\n 'Grade':[50,50,46,95,50,5,57]})\n\n# Plot a box-whisker chart\ndf['Salary'].plot(kind='box', title='Salary Distribution', figsize=(10,8))\nplt.show()\n```\n\nSo what's going on here?\n\nWell, as we've already noticed, Rosie earns significantly more than her former classmates. So much more, in fact, that her salary has been identified as an *outlier*. An outlier is a value that is so far from the center of the distribution compared to other values that it skews the distribution by affecting the mean. There are all sorts of reasons that you might have outliers in your data, including data entry errors, failures in sensors or data-generating equipment, or genuinely anomalous values.\n\nSo what should we do about it?\n\nThis really depends on the data, and what you're trying to use it for. In this case, let's assume we're trying to figure out what's a reasonable expectation of salary for a graduate of our school to earn. Ignoring for the moment that we have an extremely small dataset on which to base our judgement, it looks as if Rosie's salary could be either an error (maybe she mis-typed it in the form used to collect data) or a genuine anomaly (maybe she became a professional athlete or took some other extremely highly paid job). 
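\n\nIt's worth knowing how the box plot decided to single her value out. By default, matplotlib (which pandas uses for this chart) applies Tukey's rule: any value more than 1.5 times the IQR beyond the box is drawn as an individual point. Here's a small sketch of that check (the 1.5 multiplier is matplotlib's default, not something taken from our data):\n\n\n```python\nimport pandas as pd\n\ndf = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],\n 'Salary':[50000,54000,50000,189000,55000,40000,59000]})\n\n# Tukey's rule: flag values more than 1.5 x IQR outside the 25th-75th percentile box\nq1 = df['Salary'].quantile(0.25)\nq3 = df['Salary'].quantile(0.75)\niqr = q3 - q1\nlower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr\nprint('Fences: ' + str(lower) + ' to ' + str(upper))\nprint(df[(df['Salary'] < lower) | (df['Salary'] > upper)])\n```\n\nOnly Rosie's 189,000 falls outside those fences - whether it got there through a typing mistake or a genuinely unusual career.\n\n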
Either way, it doesn't seem to represent a salary that a typical graduate might earn.\n\nLet's see what the distribution of the data looks like without the outlier:\n\n\n```python\n%matplotlib inline\nimport pandas as pd\nfrom matplotlib import pyplot as plt\n\ndf = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],\n 'Salary':[50000,54000,50000,189000,55000,40000,59000],\n 'Hours':[41,40,36,17,35,39,40],\n 'Grade':[50,50,46,95,50,5,57]})\n\n# Plot a box-whisker chart\ndf['Salary'].plot(kind='box', title='Salary Distribution', figsize=(10,8), showfliers=False)\nplt.show()\n```\n\nNow it looks like there's a more even distribution of salaries. It's still not quite symmetrical, but there's much less overall variance. There's potentially some cause here to disregard Rosie's salary data when we compare the salaries, as it is tending to skew the analysis.\n\nSo is that OK? Can we really just ignore a data value we don't like?\n\nAgain, it depends on what you're analyzing. Let's take a look at the distribution of final grades:\n\n\n```python\n%matplotlib inline\nimport pandas as pd\nfrom matplotlib import pyplot as plt\n\ndf = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],\n 'Salary':[50000,54000,50000,189000,55000,40000,59000],\n 'Hours':[41,40,36,17,35,39,40],\n 'Grade':[50,50,46,95,50,5,57]})\n\n# Plot a box-whisker chart\ndf['Grade'].plot(kind='box', title='Grade Distribution', figsize=(10,8))\nplt.show()\n```\n\nOnce again there are outliers, this time at both ends of the distribution. However, think about what this data represents. If we assume that the grade for the final test is based on a score out of 100, it seems reasonable to expect that some students will score very low (maybe even 0) and some will score very well (maybe even 100); but most will get a score somewhere in the middle. The reason that the low and high scores here look like outliers might just be because we have so few data points. Let's see what happens if we include a few more students in our data:\n\n\n```python\n%matplotlib inline\nimport pandas as pd\nfrom matplotlib import pyplot as plt\n\ndf = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic', 'Jimmie', 'Rhonda', 'Giovanni', 'Francesca', 'Rajab', 'Naiyana', 'Kian', 'Jenny'],\n 'Grade':[50,50,46,95,50,5,57,42,26,72,78,60,40,17,85]})\n\n# Plot a box-whisker chart\ndf['Grade'].plot(kind='box', title='Grade Distribution', figsize=(10,8))\nplt.show()\n```\n\nWith more data, there are some more high and low scores; so we no longer consider the isolated cases to be outliers.\n\nThe key point to take away here is that you need to really understand the data and what you're trying to do with it, and you need to ensure that you have a reasonable sample size, before determining what to do with outlier values.\n\n#### Variance and Standard Deviation\nWe've seen how to understand the *spread* of our data distribution using the range, percentiles, and quartiles; and we've seen the effect of outliers on the distribution. Now it's time to look at how to measure the amount of variance in the data.\n\n##### Variance\nVariance is measured as the average of the squared difference from the mean. 
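\n\nThat sentence translates into code almost word for word (a rough sketch; note that pandas' built-in ***var*** function, used further down, divides by *n - 1* rather than *n*, for the reason explained below):\n\n\n```python\nimport pandas as pd\n\ngrades = pd.Series([50,50,46,95,50,5,57])\n\n# "The average of the squared difference from the mean", dividing by n (the population version)\nprint(((grades - grades.mean()) ** 2).mean())\n\n# Dividing by n - 1 instead gives the sample version that pandas' var() reports\nprint(((grades - grades.mean()) ** 2).sum() / (len(grades) - 1))\n```\n\n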
For a full population, it's indicated by a squared Greek letter *sigma* (***σ2***) and calculated like this:\n\n\\begin{equation}\\sigma^{2} = \\frac{\\displaystyle\\sum_{i=1}^{N} (X_{i} -\\mu)^{2}}{N}\\end{equation}\n\nFor a sample, it's indicated as ***s2*** calculated like this:\n\n\\begin{equation}s^{2} = \\frac{\\displaystyle\\sum_{i=1}^{n} (x_{i} -\\bar{x})^{2}}{n-1}\\end{equation}\n\nIn both cases, we sum the difference between the individual data values and the mean and square the result. Then, for a full population we just divide by the number of data items to get the average. When using a sample, we divide by the total number of items **minus 1** to correct for sample bias.\n\nLet's work this out for our student grades (assuming our data is a sample from the larger student population).\n\nFirst, we need to calculate the mean grade:\n\n\\begin{equation}\\bar{x} = \\frac{50+50+46+95+50+5+57}{7}\\approx 50.43\\end{equation}\n\nThen we can plug that into our formula for the variance:\n\n\\begin{equation}s^{2} = \\frac{(50-50.43)^{2}+(50-50.43)^{2}+(46-50.43)^{2}+(95-50.43)^{2}+(50-50.43)^{2}+(5-50.43)^{2}+(57-50.43)^{2}}{7-1}\\end{equation}\n\nSo:\n\n\\begin{equation}s^{2} = \\frac{0.185+0.185+19.625+1986.485+0.185+2063.885+43.165}{6}\\end{equation}\n\nWhich simplifies to:\n\n\\begin{equation}s^{2} = \\frac{4113.715}{6}\\end{equation}\n\nGiving the result:\n\n\\begin{equation}s^{2} \\approx 685.619\\end{equation}\n\nThe higher the variance, the more spread your data is around the mean.\n\nIn Python, you can use the ***var*** function of the *pandas.dataframe* class to calculate the variance of a column in a dataframe:\n\n\n```python\nimport pandas as pd\n\ndf = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],\n 'Salary':[50000,54000,50000,189000,55000,40000,59000],\n 'Hours':[41,40,36,17,35,39,40],\n 'Grade':[50,50,46,95,50,5,57]})\n\nprint(df['Grade'].var())\n```\n\n##### Standard Deviation\nTo calculate the variance, we squared the difference of each value from the mean. If we hadn't done this, the numerator of our fraction would always end up being zero (because the mean is at the center of our values). However, this means that the variance is not in the same unit of measurement as our data - in our case, since we're calculating the variance for grade points, it's in grade points squared; which is not very helpful.\n\nTo get the measure of variance back into the same unit of measurement, we need to find its square root:\n\n\\begin{equation}s = \\sqrt{685.619} \\approx 26.184\\end{equation}\n\nSo what does this value represent?\n\nIt's the *standard deviation* for our grades data. 
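\n\nAs a quick arithmetic check (a one-cell sketch), taking the square root of the sample variance should reproduce that 26.184 figure:\n\n\n```python\nimport pandas as pd\n\ngrades = pd.Series([50,50,46,95,50,5,57])\n\n# Square root of the sample variance - should be roughly 26.184\nprint(grades.var() ** 0.5)\n```\n\n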
More formally, it's calculated like this for a full population:\n\n\\begin{equation}\\sigma = \\sqrt{\\frac{\\displaystyle\\sum_{i=1}^{N} (X_{i} -\\mu)^{2}}{N}}\\end{equation}\n\nOr like this for a sample:\n\n\\begin{equation}s = \\sqrt{\\frac{\\displaystyle\\sum_{i=1}^{n} (x_{i} -\\bar{x})^{2}}{n-1}}\\end{equation}\n\nNote that in both cases, it's just the square root of the corresponding variance forumla!\n\nIn Python, you can calculate it using the ***std*** function:\n\n\n```python\nimport pandas as pd\n\ndf = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],\n 'Salary':[50000,54000,50000,189000,55000,40000,59000],\n 'Hours':[41,40,36,17,35,39,40],\n 'Grade':[50,50,46,95,50,5,57]})\n\nprint(df['Grade'].std())\n```\n\n#### Standard Deviation in a Normal Distribution\n\nIn statistics and data science, we spend a lot of time considering *normal* distributions; because they occur so frequently. The standard deviation has an important relationship to play in a normal distribution.\n\nRun the following cell to show a histogram of a *standard normal* distribution (which is a distribution with a mean of 0 and a standard deviation of 1):\n\n\n```python\n%matplotlib inline\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport scipy.stats as stats\n\n# Create a random standard normal distribution\ndf = pd.DataFrame(np.random.randn(100000, 1), columns=['Grade'])\n\n# Plot the distribution as a histogram with a density curve\ngrade = df['Grade']\ndensity = stats.gaussian_kde(grade)\nn, x, _ = plt.hist(grade, color='lightgrey', normed=True, bins=100) \nplt.plot(x, density(x))\n\n# Get the mean and standard deviation\ns = df['Grade'].std()\nm = df['Grade'].mean()\n\n# Annotate 1 stdev\nx1 = [m-s, m+s]\ny1 = [0.25, 0.25]\nplt.plot(x1,y1, color='magenta')\nplt.annotate('1s (68.26%)', (x1[1],y1[1]))\n\n# Annotate 2 stdevs\nx2 = [m-(s*2), m+(s*2)]\ny2 = [0.05, 0.05]\nplt.plot(x2,y2, color='green')\nplt.annotate('2s (95.45%)', (x2[1],y2[1]))\n\n# Annotate 3 stdevs\nx3 = [m-(s*3), m+(s*3)]\ny3 = [0.005, 0.005]\nplt.plot(x3,y3, color='orange')\nplt.annotate('3s (99.73%)', (x3[1],y3[1]))\n\n# Show the location of the mean\nplt.axvline(grade.mean(), color='grey', linestyle='dashed', linewidth=1)\n\nplt.show()\n```\n\nThe horizontal colored lines show the percentage of data within 1, 2, and 3 standard deviations of the mean (plus or minus).\n\nIn any normal distribution:\n- Approximately 68.26% of values fall within one standard deviation from the mean.\n- Approximately 95.45% of values fall within two standard deviations from the mean.\n- Approximately 99.73% of values fall within three standard deviations from the mean.\n\n#### Z Score\nSo in a normal (or close to normal) distribution, standard deviation provides a way to evaluate how far from a mean a given range of values falls, allowing us to compare where a particular value lies within the distribution. For example, suppose Rosie tells you she was the highest scoring student among her friends - that doesn't really help us assess how well she scored. She may have scored only a fraction of a point above the second-highest scoring student. 
Even if we know she was in the top quartile; if we don't know how the rest of the grades are distributed it's still not clear how well she performed compared to her friends.\n\nHowever, if she tells you how many standard deviations higher than the mean her score was, this will help you compare her score to that of her classmates.\n\nSo how do we know how many standard deviations above or below the mean a particular value is? We call this a *Z Score*, and it's calculated like this for a full population:\n\n\\begin{equation}Z = \\frac{x - \\mu}{\\sigma}\\end{equation}\n\nor like this for a sample:\n\n\\begin{equation}Z = \\frac{x - \\bar{x}}{s}\\end{equation}\n\nSo, let's examine Rosie's grade of 95. Now that we know the *mean* grade is 50.43 and the *standard deviation* is 26.184, we can calculate the Z Score for this grade like this:\n\n\\begin{equation}Z = \\frac{95 - 50.43}{26.184} = 1.702\\end{equation}.\n\nSo Rosie's grade is 1.702 standard deviations above the mean.\n\n### Summarizing Data Distribution in Python\nWe've seen how to obtain individual statistics in Python, but you can also use the ***describe*** function to retrieve summary statistics for all numeric columns in a dataframe. These summary statistics include many of the statistics we've examined so far (though it's worth noting that the *median* is not included):\n\n\n```python\nimport pandas as pd\n\ndf = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],\n 'Salary':[50000,54000,50000,189000,55000,40000,59000],\n 'Hours':[41,40,36,17,35,39,40],\n 'Grade':[50,50,46,95,50,5,57]})\nprint(df.describe())\n```\n", "meta": {"hexsha": "62c83e5aa5fa2cbcf831cae3db5f3d96b88e1850", "size": 166230, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Essential_Math_for_Machine_Learning_Python_Edition/Module04/04-02-Statistics Fundamentals.ipynb", "max_stars_repo_name": "chandlersong/pythonMath", "max_stars_repo_head_hexsha": "f267f14b954327bea61485fe37590fefc0d45e65", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Essential_Math_for_Machine_Learning_Python_Edition/Module04/04-02-Statistics Fundamentals.ipynb", "max_issues_repo_name": "chandlersong/pythonMath", "max_issues_repo_head_hexsha": "f267f14b954327bea61485fe37590fefc0d45e65", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Essential_Math_for_Machine_Learning_Python_Edition/Module04/04-02-Statistics Fundamentals.ipynb", "max_forks_repo_name": "chandlersong/pythonMath", "max_forks_repo_head_hexsha": "f267f14b954327bea61485fe37590fefc0d45e65", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 126.0272934041, "max_line_length": 14444, "alphanum_fraction": 0.8073933706, "converted": true, "num_tokens": 10038, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9324533051062237, "lm_q2_score": 0.8824278710924296, "lm_q1q2_score": 0.8228227849179847}} {"text": "# Homework 01\n\nCongratulations! You've managed to open this Juypter notebook on either Github or on your local machine.\n\nHelp for Jupyter Notebooks can be found in the Jupyter Lab by going to `Help > Notebook Reference`. 
You can also go to the [Notebook basics](https://jupyter-notebook.readthedocs.io/en/latest/examples/Notebook/Notebook%20Basics.html) documentation.\n\nThe basics are that you can write *markdown* cells that have math, words, and other markdown (like Rmarkdown!)... and *code* cells that, well, have code and display the output below (as in Rmarkdown). Switch *mode* to change between the two (sidebar will change colors).\n\nIf you want to export your document to .pdf (you don't have to!), you an go to `File > Export Notebook As... > PDF`. To do this, I had to install the *Inkscape* application. (I did this with the [Chocolatey](https://chocolatey.org/) package manager on Windows. You can probably do this with [Homewbrew](https://brew.sh/) on a Mac)\n\n# Instructions\n\nYour homework is to generate 3d plots of a quadratic function in $\\mathbb R^2$ and to examine the relationship between eigenvalues of the Hessian matrices, shapes of the functions, and the (possible) existence of minima and maxima.\n\nYou can find the documentation for `Plots.jl` at \n\nFor the following functions\n\n\\begin{align}\n f^a(x,y) &= -x^2 - y^2 \\\\\n f^b(x,y) &= -x^2 + xy - y^2 \\\\\n f^c(x,y) &= -x^2 + 2xy - y^2 \\\\\n f^d(x,y) &= -x^2 + 3xy - y^2 \n\\end{align}\n\n1. Write the Hessian matrix in \\LaTeX\n\n2. Compute the determinants by hand. Are the Hessians PD, PSD, NSD, or ND? What does this imply about convexity / concavity of the function? What about the existence of a minimum or maximum over the domain $\\mathbb R^2$?\n\n3. `@assert` statements are wonderful to include in your functions because they make sure that the inputs meet certain assumptions... such as that the length of two vectors is the same. Using them regularly can help you avoid errors\n\n Use an `@assert` statement to check that your determinants computed by hand are correct. See what Julia does when you put the wrong determinanmt in. See [`LinearAlgebra.det`](https://docs.julialang.org/en/v1/stdlib/LinearAlgebra/index.html#LinearAlgebra.det) docs\n\n ```julia\n @assert det(Ha) == ???\n ```\n\n4. Compute the eigenvalues of your matrix using [`LinearAlgebra.eigvals`](https://docs.julialang.org/en/v1/stdlib/LinearAlgebra/index.html#LinearAlgebra.eigvals)\n\n5. Create a function in Julia to compute $f^a, f^b, f^c, f^d$ as above. Plot them!\n\n\nTo submit your homework, commit this notebook to your personal homework repo, push it, and issue a pull request to turn it in to me.\n\n# Top of your Julia Code\n\n\n```julia\n# Load libraries first of all\nusing LinearAlgebra # load LinearAlgebra standard library\nusing Plots # loads the Plots module\n```\n\n\n```julia\n# tells Plots to use the GR() backend.\n# Note: for interactive 3d plots, you can also install \n# PyPlot or PlotlyJS and try using those. You might \n# need to use Atom or the REPL to get the interactivity\ngr()\n```\n\n\n\n\n Plots.GRBackend()\n\n\n\n\n```julia\n# define a range we can iterate over\nxrange = -3.0 : 0.1 : 3.0\n```\n\n\n\n\n -3.0:0.1:3.0\n\n\n\n# Question 1\n\n$$\nf^a(x,y) = -x^2 - y^2\n$$ \n\n## Part 1a \n\nHessian is $H^a = \\begin{bmatrix} ? & ? \\\\ ? & ? \\end{bmatrix}$\n\n## Part 1b\n\nDeterminant is $|H^a| = ?$, so the matrix is ???, and it has a global minimum / maximum at ????\n\n\n```julia\n# define the Hessian for H^a. 
\n# Note:\n# Julia in Jupyter Notebooks and Atom can handle latexy characters\n# I got fancy by typing H\\^a [tab] and getting a superscript\n# We could have also gotten greek letters with \\beta [tab]\n# or (very important) approximately equals with \\approx [tab]\n\nH\u1d43 = [-2 4 ; 2 3]\n```\n\n\n\n\n 2\u00d72 Array{Int64,2}:\n -2 4\n 2 3\n\n\n\n## Part 1c\n\n\n```julia\n@assert det(H\u1d43) == 3\n```\n\n## Part 1d\n\n\n```julia\neigvals(H\u1d43)\n```\n\n\n\n\n 2-element Array{Float64,1}:\n -3.274917217635375\n 4.274917217635375\n\n\n\n\n```julia\n# functions to plot\nfa(x,y) = -x^2 - y^2\n```\n\n\n\n\n fa (generic function with 1 method)\n\n\n\n\n```julia\nplot(xrange, xrange, fa, st = :surface, title = \"\\$a=0\\$\")\n```\n\n\n\n\n \n\n \n\n\n\n# Question 2\n\n# Question 3\n\n# Question 4\n", "meta": {"hexsha": "d06a5377066f4b492eaa467ee3461eb77d95992a", "size": 169654, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Hwk01.ipynb", "max_stars_repo_name": "ucdavis-are254/Hwk01", "max_stars_repo_head_hexsha": "f70ba5b02ff26b793e01849361623aa9abc5df9b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Hwk01.ipynb", "max_issues_repo_name": "ucdavis-are254/Hwk01", "max_issues_repo_head_hexsha": "f70ba5b02ff26b793e01849361623aa9abc5df9b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2019-09-28T01:01:39.000Z", "max_issues_repo_issues_event_max_datetime": "2019-09-30T00:49:50.000Z", "max_forks_repo_path": "Hwk01.ipynb", "max_forks_repo_name": "ucdavis-are254/Hwk01", "max_forks_repo_head_hexsha": "f70ba5b02ff26b793e01849361623aa9abc5df9b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 11, "max_forks_repo_forks_event_min_datetime": "2019-09-25T17:08:52.000Z", "max_forks_repo_forks_event_max_datetime": "2019-09-30T06:26:37.000Z", "avg_line_length": 78.1455550438, "max_line_length": 339, "alphanum_fraction": 0.7850861164, "converted": true, "num_tokens": 1266, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9005297754396141, "lm_q2_score": 0.9136765263519306, "lm_q1q2_score": 0.8227929171001508}} {"text": "## Probability and Computing - Mitzenmacher, Upfal\n\n\n```python\nimport numpy as np\nfrom sympy import poly\nfrom sympy.abc import x\n```\n\nDefine two polynomials:\n- $F$ is a product of monomials\n- $G$ is in canonical form\n\n\n```python\nF = poly((x + 1) * (x - 2) * (x + 3) * (x - 4) * (x + 5) * (x - 6))\nG = poly(x**6 - 7 * x**3 + 25)\n```\n\nWant to check if $F \\equiv G$ without converting $F$ to canonical form.\n\n\n```python\ndef polycheck(F: 'poly', G: 'poly', \u03b4: int, k: int, replacement: bool) -> bool:\n \"\"\"Randomized algorithm for verifying whether F and G are equivalent.\n \n If F \u2261 G, then the algo always computes the correct answer.\n If F \u2262 G, then the algo can compute the wrong answer by\n finding r s.t. r is the root of F(x) - G(x) = 0,\n which, by FTA, can happen at most in d / (\u03b4 * d) cases,\n meaning that the prob. of error (in one iter) is <= 1/\u03b4.\n \n If sampling is performed WITH replacement, then iterations are independent,\n therefore, the probability of error becomes <= (1/\u03b4)**k\n i.e. 
exponentially small in the number of trials.\n If sampling is performed WITHOUT replacement, we get a tighter bound <= (1/\u03b4)**k,\n since the error now consists of the event \"finding k distinct roots\",\n which is much stronger.\n \n Args:\n F, G: sympy polynomials.\n \u03b4: upper bound for the sample space.\n k: number of iterations (trials).\n replacement: whether to perform sampling with or without replacement.\n \n Returns:\n True if F,G are found to be equivalent, otherwise False.\n \"\"\"\n d = max(F.degree(), G.degree())\n space = np.arange(1, \u03b4 * d + 1) # {1, ..., \u03b4 * d}\n\n # choose values uniformly at random from the sample space\n rs = np.random.choice(space, replace=replacement, size=k)\n\n for r in rs:\n if F(r) != G(r):\n return False\n \n return True\n```\n\n\n```python\npolycheck(F, G, \u03b4=100, k=10, replacement=False)\n```\n", "meta": {"hexsha": "ae1796b3c9f7eea1f75acb0dcf57a865320eeca0", "size": 3692, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "src/polycheck.ipynb", "max_stars_repo_name": "alexandru-dinu/notebooks", "max_stars_repo_head_hexsha": "7e963f482158db8f86efa3a706ad2b85f82241e2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/polycheck.ipynb", "max_issues_repo_name": "alexandru-dinu/notebooks", "max_issues_repo_head_hexsha": "7e963f482158db8f86efa3a706ad2b85f82241e2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-01-11T19:24:10.000Z", "max_issues_repo_issues_event_max_datetime": "2021-01-11T19:24:10.000Z", "max_forks_repo_path": "src/polycheck.ipynb", "max_forks_repo_name": "alexandru-dinu/notebooks", "max_forks_repo_head_hexsha": "7e963f482158db8f86efa3a706ad2b85f82241e2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 28.4, "max_line_length": 94, "alphanum_fraction": 0.5189599133, "converted": true, "num_tokens": 549, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9273633016692236, "lm_q2_score": 0.8872045981907007, "lm_q1q2_score": 0.8227609854342451}} {"text": "Data Science Notebooks without math are lame.\n\nThankfully Jupyter Notebooks support LatEx\n\nThis cheatsheet is neither comprehensive nor complete. For now, it is only a placeholder. My plans are to grow it over time to be a most useful stuff in one place with cut and paste snapshots, but without explicitly spending much time in its development\n\nHere are some references:\n\nhttp://bebi103.caltech.edu.s3-website-us-east-1.amazonaws.com/2015/tutorials/t0c_intro_to_latex.html\n\nhttps://jupyter-notebook.readthedocs.io/en/stable/examples/Notebook/Typesetting%20Equations.html\n\nhttps://oeis.org/wiki/List_of_LaTeX_mathematical_symbols\n\nInline as follows \nJoint Probability is the probability ofntersection of two or more events. Written as $ P(A \\cap B) , P(A,B), P(A\\ and\\ B)$\n\nBlock equation as follows\n\n\\begin{align*}\n\\text{good: } P(A \\cap B) = P(B)\\,P(A \\mid B)\n\\end{align*}\n\n\\begin{align*}\n\\text{not good: } P(A \\cap B) = P(B) P(A | B)\n\\end{align*}\n\nMultiline as follows. Presence of align* messes up *sometimes*. 
Just align\n\n\\begin{align}\n\\text{good: } P(A \\cap B) = P(B)\\,P(A \\mid B) \\\\\n\\text{not good: } P(A \\cap B) = P(B) P(A | B)\n\\end{align}\n\nAnother example of multi line equations\n\\begin{align}\n\\dot{x} & = \\sigma(y-x) \\\\ \n\\dot{y} & = \\rho x - y - xz \\\\ \n\\dot{z} & = -\\beta z + xy\n\\end{align}\n\nMulti functions\n\\begin{align*}\n\\min_{x \\in R^{n}}\\ f(x)\\ subject\\ to\\ \\left\\{\\begin{array}{lll}\n c_{i}(x)\\ = \\ 0,\\ i \\ \\in \\varepsilon, \\\\\n c_{i}(x)\\ \\geq \\ 0,\\ i\\in \\iota \n \\end{array}\\right.\n\\end{align*}\n\nBolded equation\n\\begin{equation*},\n\\left( \\sum_{k=1}^n a_k b_k \\right)^2 \\leq \\left( \\sum_{k=1}^n a_k^2 \\right) \\left( \\sum_{k=1}^n b_k^2 \\right) \\end{equation*}\n\nProviding equation number on right\n\n\\begin{equation}\n\\begin{aligned}\n&x = 1 \\\\\n&y = 2 + 2x^2\n\\end{aligned}\n\\tag{fun system}\n\\end{equation}\n\n\n```python\n\n```\n", "meta": {"hexsha": "5f1797eb64395e34cf1224f152fa059e440f9ee4", "size": 3672, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Latex_Notebook_Cheatsheet.ipynb", "max_stars_repo_name": "datavector-io/datascience", "max_stars_repo_head_hexsha": "b1de0cd1c563b3c90d3f4382f0130c77e18308f9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Latex_Notebook_Cheatsheet.ipynb", "max_issues_repo_name": "datavector-io/datascience", "max_issues_repo_head_hexsha": "b1de0cd1c563b3c90d3f4382f0130c77e18308f9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Latex_Notebook_Cheatsheet.ipynb", "max_forks_repo_name": "datavector-io/datascience", "max_forks_repo_head_hexsha": "b1de0cd1c563b3c90d3f4382f0130c77e18308f9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 26.4172661871, "max_line_length": 262, "alphanum_fraction": 0.5119825708, "converted": true, "num_tokens": 638, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9241418158002491, "lm_q2_score": 0.8902942173896131, "lm_q1q2_score": 0.8227581146548988}} {"text": "```python\n##Import and define symbols\nimport sympy as sp\nimport numpy as np \n\nTi = sp.Symbol('T_i'); k = sp.Symbol('k'); To = sp.Symbol('T_o'); Ti0 = sp.Symbol('T_i_0'); To0 = sp.Symbol('T_o_0'); ho = sp.Symbol('h_o'); hi = sp.Symbol('h_i'); r = sp.Symbol('r'); ro = sp.Symbol('r_o'); ri = sp.Symbol('r_i'); Cp = sp.Symbol('C_p'); rho = sp.Symbol('rho');\nT = sp.Function('T')(r)\nU = sp.Function('U')(r)\n\nC1 = sp.Symbol('C1'); C2 = sp.Symbol('C2')\n```\n\n# Problem 2.1.1 in the problems manual\n\n\n\n## Nomenclature table\n| Nomenclature | Variable | Expression |\n|--------------------------------------|----------|-------------------------|\n| Temperature | T | |\n| Radius | r | |\n| Convective heat transfer coefficient | h | |\n| Conductive heat transfer coefficient | k | |\n| Biot number | Bi | $\\frac{hR}{k}$ |\n| Temperature fraction | $\\phi$ | $\\frac{T-T_o}{T_i-T_o}$ |\n| Quantity \"X\" of internal fluid | $X_i$ | |\n| Quantity \"X\" of external fluid | $X_o$ | |\n\n## Simplifying assumptions:\n1. Steady state; $\\frac{dT}{dt} = 0$\n2. Infinite symetric cylinder; $\\frac{dT}{dz} = \\frac{dT}{d\\theta} = 0$; $T(r)$\n3. 
No heat generation within the clinder; $q''' = 0$\n\n## Differential conservation equation solution\nThe consititutive equation for cylindrical coordinates:\n$$\\rho c \\frac{dT}{dt}= \\frac{1}{r}\\frac{d}{dr}(r\\cdot k\\frac{dT}{dr})+\\frac{1}{r}\\frac{d}{d\\theta}(k\\frac{dT}{d\\theta})+\\frac{d}{dz}(k\\frac{dT}{dz})+q'''$$\n\nWhen assumptions are applied:\n\n$$0 =\\frac{d^2T}{dr^2}+\\frac{1}{r}\\frac{dT}{dr}$$\n\nThe boundary conditions for convective heat transfer at the walls:\n\n$$\\frac{dT}{dr}(r = r_o) = \\frac{h_o}{k}[T_o - T(r = r_o)]$$\n\n$$\\frac{dT}{dr}(r = r_i) = \\frac{-h_i}{k}[T_i - T(r = r_i)]$$\n\nSubstituting the derivative of temperature $\\frac{dT}{dr} = U(r)$ into the constitutive equation:\n\n$$0 = \\frac{dU(r)}{dr} + \\frac{1}{r}\\cdot U(r)$$\n\nSeperating and integrating:\n\n$$U(r) = \\frac{dT}{dr} = \\frac{c_1}{r}$$\n\nAnd again:\n\n$$T(r) = c_1\\ln{r} + c_2$$\n\nSubstituting in the temperature equations into the boundary conditions yields a system of two equations and unkowns $c_1, c_2$:\n\n$$\\frac{c_1}{r_o} = \\frac{h_o}{k}[T_o - (c_1\\ln{r_o} + c_2)]$$\n\n$$\\frac{c_1}{r_i} = \\frac{-h_i}{k}[T_i - (c_1\\ln{r_i} + c_2)]$$\n\n\n```python\n## Solve DE\n#Define equation with U\neqn = (sp.Derivative(U,r)+1/r*U)\nprint('System differential equation with substitution for derivative of temperature:')\ndisplay(eqn)\n\n#Solve DE for derivative of temperature (U)\nDiff_U = sp.dsolve(eqn, U)\nprint('Expression for differential in temperature with respect to r:')\ndisplay(Diff_U)\n\n#Redefine Temperature\nDiff_T = Diff_U.subs(U, sp.Derivative(T,r))\nprint('Differential equation for temperature:')\ndisplay(Diff_T)\n\n#Solve for temperature\nTemp = sp.dsolve(Diff_T, T)\nprint('Solved expression for temperature with integration constants:')\ndisplay(Temp)\n```\n\n System differential equation with substitution for derivative of temperature:\n\n\n\n$\\displaystyle \\frac{d}{d r} U{\\left(r \\right)} + \\frac{U{\\left(r \\right)}}{r}$\n\n\n Expression for differential in temperature with respect to r:\n\n\n\n$\\displaystyle U{\\left(r \\right)} = \\frac{C_{1}}{r}$\n\n\n Differential equation for temperature:\n\n\n\n$\\displaystyle \\frac{d}{d r} T{\\left(r \\right)} = \\frac{C_{1}}{r}$\n\n\n Solved expression for temperature with integration constants:\n\n\n\n$\\displaystyle T{\\left(r \\right)} = C_{1} \\log{\\left(r \\right)} + C_{2}$\n\n\n\n```python\n#Define the two boundary conditions \neqn1= ho/k*(To-(Temp.rhs.subs(r, ro)))-Diff_U.rhs.subs(r, ro)\neqn2= -hi/k*(Ti-(Temp.rhs.subs(r, ri)))-Diff_U.rhs.subs(r, ri)\n\nprint('First Equation')\ndisplay(eqn1)\n\nprint('Second Equation')\ndisplay(eqn2)\n\n#Solve for c1 and c2\nC1_ = sp.solve(eqn1,C1)[0]\nC2_ = sp.solve(eqn2,C2)[0]\nC1eq = C1_.subs(C2,C2_)-C1\nC1_ = sp.simplify(sp.solve(C1eq,C1)[0])\nC2_ = sp.simplify(C2_.subs(C1,C1_))\n\n#Define biot numbers\nBi_i = sp.Symbol('Bi_i')\nBi_o = sp.Symbol('Bi_o')\n\n#substitute biot numbers into the equation\nC1_ = sp.simplify((C1_.subs(hi*ri, Bi_i*k)).subs(ho*ro, Bi_o*k))\nC2_ = sp.simplify((C2_.subs(hi*ri, Bi_i*k)).subs(ho*ro, Bi_o*k))\n\nprint('C1 solved')\ndisplay(C1_)\nprint('C2 solved')\ndisplay(C2_)\n```\n\n First Equation\n\n\n\n$\\displaystyle - \\frac{C_{1}}{r_{o}} + \\frac{h_{o} \\left(- C_{1} \\log{\\left(r_{o} \\right)} - C_{2} + T_{o}\\right)}{k}$\n\n\n Second Equation\n\n\n\n$\\displaystyle - \\frac{C_{1}}{r_{i}} - \\frac{h_{i} \\left(- C_{1} \\log{\\left(r_{i} \\right)} - C_{2} + T_{i}\\right)}{k}$\n\n\n C1 solved\n\n\n\n$\\displaystyle \\frac{Bi_{i} Bi_{o} \\left(T_{i} 
- T_{o}\\right)}{Bi_{i} Bi_{o} \\log{\\left(r_{i} \\right)} - Bi_{i} Bi_{o} \\log{\\left(r_{o} \\right)} - Bi_{i} - Bi_{o}}$\n\n\n C2 solved\n\n\n\n$\\displaystyle \\frac{Bi_{i} Bi_{o} T_{i} \\log{\\left(r_{o} \\right)} - Bi_{i} Bi_{o} T_{o} \\log{\\left(r_{i} \\right)} + Bi_{i} T_{i} + Bi_{o} T_{o}}{- Bi_{i} Bi_{o} \\log{\\left(r_{i} \\right)} + Bi_{i} Bi_{o} \\log{\\left(r_{o} \\right)} + Bi_{i} + Bi_{o}}$\n\n\nWith $Bi = \\frac{hR}{k}$\n\nDefining dimensionless parameter $\\phi (r) = \\frac{T(r)-T_o}{T_i-T_o}$ and solving for $\\phi$ \n\n$$\\phi(r) = \\frac{c_1\\ln{r}+c_2-T_o}{T_i-T_o}$$\n\n## Investigating this behavior:\n\n\n```python\n##Set some constants for r for a few cases\n#Thick wall vs thin wall\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nr_i = 1\nr_o = np.array([10, 5, 1.1])\n\n#Investigate outside biot for constant inside biot\nBi_i = 1\nBi_o = np.array([0.01, 1, 10])\nT_i = 100\nT_o = 200\n\nfor j, R_o in enumerate(r_o):\n rs = np.linspace(r_i, R_o, 100)\n phis = np.zeros((len(Bi_o), len(rs)))\n for k, Bi_out in enumerate(Bi_o):\n c1 = Bi_i*Bi_out*(T_i-T_o)/(Bi_i*Bi_out*np.log(r_i)-Bi_i*Bi_out*np.log(R_o)-Bi_i-Bi_out)\n c2 = (Bi_i*Bi_out*T_i*np.log(R_o)-Bi_i*Bi_out*T_o*np.log(r_i)+Bi_i*T_i+Bi_out*T_o)/(-Bi_i*Bi_out*np.log(r_i)+Bi_i*Bi_out*np.log(R_o)+Bi_i+Bi_out)\n #phis[k][:] = (c1*np.log(rs)+c2 - T_o)/(T_i-T_o)\n phis[k][:] = (np.log(rs/R_o))/(np.log(r_i/R_o)+1/Bi_out + 1/Bi_i)\n plt.figure(j)\n plt.plot(rs, phis[k][:],label = 'Bi_o/Bi_i ='+str(Bi_out))\n plt.legend()\n plt.xlabel('r')\n plt.ylabel('phi')\n plt.title('R = '+str(R_o))\n \n\n```\n\nIn interpereting the graphs, it us useful to remember that $\\phi = 1$ corresponds to the temperature being equal to the internal air temperature, and $\\phi = 0$ corresponds to temoerature being equal to the external air temperature.\n\n## Points to note:\n1. For a thin wall (Thickness << cylinder diameter), the internal temperature is nearly constant and determined by the convective coefficients. If convective transfer is much more prominent on the external surface than the internal surface, then the cylinder temperature is equal to the external temperature and vice versa. For comparible convective forces, the cylinder temperature is somewhere in between the two air temperatures.\n2. For thin walls, the slight temperature distribution that is exhibited is nearly linear, approcimating this case to a slab wall instead of a cylinder wall.\n3. For thick walls (Thickness ~ cylinder diameter), a distribution of temperatures is much more prominent, and the curviture of the cylinder is noted as it is non linear. This is intuitive, as for a cyliner the area of flux increases as radius increases, so temperature change should slow down as radius increases, which we do see.\n4. What we note is that the greater a Biot number is compared to the other side of the cylinder, the closer the wall temperature on that side comes to the air temperature. 
Alternatively, if the Biot numbers are of similar magnitude, the wall temperature on both sides of the cylinder walls do not approach the air temperatures but are instead in between the two.\n\n\n```python\n\n```\n", "meta": {"hexsha": "8d782b59aa8bbabef6b34a9d3c73cfbe80c3ca69", "size": 98624, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "presentations/10_16_19_Evan.ipynb", "max_stars_repo_name": "uw-cheme512/uw-cheme512.github.io", "max_stars_repo_head_hexsha": "6dad7a9554eafb6eba347462d30c62bf9c0ec4da", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "presentations/10_16_19_Evan.ipynb", "max_issues_repo_name": "uw-cheme512/uw-cheme512.github.io", "max_issues_repo_head_hexsha": "6dad7a9554eafb6eba347462d30c62bf9c0ec4da", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "presentations/10_16_19_Evan.ipynb", "max_forks_repo_name": "uw-cheme512/uw-cheme512.github.io", "max_forks_repo_head_hexsha": "6dad7a9554eafb6eba347462d30c62bf9c0ec4da", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 202.5133470226, "max_line_length": 29716, "alphanum_fraction": 0.8981789422, "converted": true, "num_tokens": 2515, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9489172630429475, "lm_q2_score": 0.8670357512127872, "lm_q1q2_score": 0.822745192001224}} {"text": "```python\nimport sympy as sp\nfrom sympy import init_printing\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ninit_printing()\n```\n\n\n```python\n# Define the system\nx = (sp.Matrix([sp.symbols('q p', real = True)])).T\nm, l , g = sp.symbols('m l g', real = True)\nH = 1/2 * x[1]**2 / (l**2 * m) + m*g*l*sp.sin(x[0])\ndH = H.diff(x)\ndx = sp.Matrix([[0, 1],[-1, 0]]) * dH\ndx\n```\n\n\n```python\n# Taylor series\ndx.taylor_term(1, x)\n```\n\n\n```python\ndq = np.arange(0, 2*np.pi, np.pi/20)\ndp = np.arange(-10, 10, 0.1)\nDQ, DP = np.meshgrid(dq, dp)\n# Make a func\ndx_f = sp.lambdify((x[0], x[1]),dx.subs([(m, 1), (g, -9.81), (l, 1)]) )\n```\n\n\n```python\ndt = sp.symbols('t')\nJ = (sp.eye(2) + dt*dx.jacobian(x))\nJ_E = J.subs(([(m, 1), (g, -9.81), (l, 1), (dt, 0.1)]) )\ne1, e2 = J_E.eigenvals().items()\neigenvals_n = sp.lambdify(x[0], np.array([e1[0], e2[0]]))\n```\n\n\n```python\ne1, e2 = eigenvals_n(dq)\nplt.plot(dq, e1)\nplt.plot(dq, e2)\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "15be6f831ac1b435112bb098a2339235a8fd1ef8", "size": 21344, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "code/Untitled.ipynb", "max_stars_repo_name": "AlCap23/crrn_2018", "max_stars_repo_head_hexsha": "17f593d35182104e1705255ac81765dc42e674f0", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "code/Untitled.ipynb", "max_issues_repo_name": "AlCap23/crrn_2018", "max_issues_repo_head_hexsha": "17f593d35182104e1705255ac81765dc42e674f0", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": 
"code/Untitled.ipynb", "max_forks_repo_name": "AlCap23/crrn_2018", "max_forks_repo_head_hexsha": "17f593d35182104e1705255ac81765dc42e674f0", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 124.0930232558, "max_line_length": 11484, "alphanum_fraction": 0.8224793853, "converted": true, "num_tokens": 398, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9591542829224748, "lm_q2_score": 0.8577681104440172, "lm_q1q2_score": 0.8227319568866975}} {"text": "The error in a cross product calculation with (3,) tuples\n\n\n```\nfrom sympy import Symbol, symarray, Matrix, matrices\nu = Matrix(symarray('u', (3, 1)))\nv = Matrix(symarray('v', (3, 1)))\n```\n\n\n```\nc = u.cross(v)\nc\n```\n\n\n\n\n [u_1_0*v_2_0 - u_2_0*v_1_0, -u_0_0*v_2_0 + u_2_0*v_0_0, u_0_0*v_1_0 - u_1_0*v_0_0]\n\n\n\nAssuming same error $\\delta$ for both vectors:\n\n\n```\nd = Symbol('d')\ne = matrices.ones((3,1)) * d\n```\n\nSame calculation as above, with error\n\n\n```\nue = u + e\nve = v + e\nce = ue.cross(ve)\n```\n\nCalculate absolute error by subtracting true result from result with error\n\n\n```\ncce = ce - c\ncce.simplify()\ncce\n```\n\n\n\n\n [d*(u_1_0 - u_2_0 - v_1_0 + v_2_0), d*(-u_0_0 + u_2_0 + v_0_0 - v_2_0), d*(u_0_0 - u_1_0 - v_0_0 + v_1_0)]\n\n\n\nFloating point calculation error given by operations on elements:\n\n\n```\nc\n```\n\n\n\n\n [u_1_0*v_2_0 - u_2_0*v_1_0, -u_0_0*v_2_0 + u_2_0*v_0_0, u_0_0*v_1_0 - u_1_0*v_0_0]\n\n\n\nEach element has two products and one subtraction; The input values are $\\le 1$. Calculation error per element then $3 \\epsilon / 2$\n", "meta": {"hexsha": "1889c3be7d53dc217742bab5f426be9efcdd51e6", "size": 3248, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "doc/source/notebooks/cross_product_error.ipynb", "max_stars_repo_name": "tobon/nibabel", "max_stars_repo_head_hexsha": "ff2b5457207bb5fd6097b08f7f11123dc660fda7", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2015-10-01T01:13:59.000Z", "max_stars_repo_stars_event_max_datetime": "2015-10-01T01:13:59.000Z", "max_issues_repo_path": "doc/source/notebooks/cross_product_error.ipynb", "max_issues_repo_name": "tobon/nibabel", "max_issues_repo_head_hexsha": "ff2b5457207bb5fd6097b08f7f11123dc660fda7", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2015-11-13T03:05:24.000Z", "max_issues_repo_issues_event_max_datetime": "2016-08-06T19:18:54.000Z", "max_forks_repo_path": "doc/source/notebooks/cross_product_error.ipynb", "max_forks_repo_name": "tobon/nibabel", "max_forks_repo_head_hexsha": "ff2b5457207bb5fd6097b08f7f11123dc660fda7", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-02-27T20:48:03.000Z", "max_forks_repo_forks_event_max_datetime": "2019-02-27T20:48:03.000Z", "avg_line_length": 21.0909090909, "max_line_length": 143, "alphanum_fraction": 0.4593596059, "converted": true, "num_tokens": 430, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9449947148047777, "lm_q2_score": 0.8705972600147106, "lm_q1q2_score": 0.8227098094374223}} {"text": "```python\nimport sympy as sp\nfrom sympy.physics.quantum import TensorProduct\nimport numpy as np\n\nprint(f\"SymPy Version: {sp.__version__}\")\n\n# \u6570\u5f0f\u3092\u30ad\u30ec\u30a4\u306b\u8868\u793a\u3059\u308b\nsp.init_printing()\n```\n\n SymPy Version: 1.8\n\n\n### \u30d9\u30af\u30c8\u30eb\u3092\u751f\u6210\u3059\u308b\n\n- \u30d9\u30af\u30c8\u30eb\u306e\u751f\u6210\u306b\u306f\u3001`sympy.Matrix`\u3092\u4f7f\u7528\u3059\u308b\u3002\n + \u751f\u6210\u3055\u308c\u305f\u30d9\u30af\u30c8\u30eb\u306f\u5217\u30d9\u30af\u30c8\u30eb\u306b\u306a\u308b\u3002\n + `\u30d9\u30af\u30c8\u30eb.T`\u3068\u3059\u308b\u3053\u3068\u3067\u3001\u884c\u30d9\u30af\u30c8\u30eb\u3092\u4f5c\u308b\u3053\u3068\u304c\u3067\u304d\u308b(\u3044\u308f\u3086\u308b\u3001\u884c\u5217\u306e\u8ee2\u7f6e\u306b\u76f8\u5f53)\n\n\n```python\nvec1 = sp.Matrix([0, 1, 2, 3])\nvec1\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}0\\\\1\\\\2\\\\3\\end{matrix}\\right]$\n\n\n\n\n```python\nvec1.T\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}0 & 1 & 2 & 3\\end{matrix}\\right]$\n\n\n\n\n```python\n# numpy\u304b\u3089\u30d9\u30af\u30c8\u30eb\u3092\u4f5c\u308b\u3053\u3068\u3082\u51fa\u6765\u308b\nsp.Matrix(np.arange(5))\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}0\\\\1\\\\2\\\\3\\\\4\\end{matrix}\\right]$\n\n\n\n### \u884c\u5217\u3092\u751f\u6210\u3059\u308b\n\n- \u884c\u5217\u306e\u751f\u6210\u306b\u304a\u3044\u3066\u3082\u3001`sympy.Matrix`\u3092\u4f7f\u7528\u3059\u308b\u3002\n- \u5358\u4f4d\u884c\u5217\u306f\u3001`sympy.eye()`\u3067\u751f\u6210\u3067\u304d\u308b\u3002\n- \u30bc\u30ed\u884c\u5217\u306f\u3001`sympy.zeros()`\u3067\u751f\u6210\u3067\u304d\u308b\u3002\n- \u5bfe\u89d2\u884c\u5217\u306f\u3001`sympy.diag()`\u3067\u751f\u6210\u3067\u304d\u308b\u3002\n- \u884c\u5217\u306e\u30b7\u30f3\u30dc\u30eb\u3092\u4f7f\u7528\u3059\u308b\u5834\u5408\u3001`sympy.MatrixSymbol`\u3092\u4f7f\u7528\u3059\u308b\u3002\n\n\n```python\nmat = sp.Matrix([[1, 2], [3, 4]])\nmat\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 2\\\\3 & 4\\end{matrix}\\right]$\n\n\n\n\n```python\nsp.eye(3)\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 0 & 0\\\\0 & 1 & 0\\\\0 & 0 & 1\\end{matrix}\\right]$\n\n\n\n\n```python\nsp.zeros(2, 5)\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}0 & 0 & 0 & 0 & 0\\\\0 & 0 & 0 & 0 & 0\\end{matrix}\\right]$\n\n\n\n\n```python\nsp.diag(1, 0, 2, 4)\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 0 & 0 & 0\\\\0 & 0 & 0 & 0\\\\0 & 0 & 2 & 0\\\\0 & 0 & 0 & 4\\end{matrix}\\right]$\n\n\n\n\n```python\nmat2 = sp.MatrixSymbol('X', 3, 4)\nmat2 = sp.Matrix(mat2)\nmat2\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{array}{cccc}X_{0, 0} & X_{0, 1} & X_{0, 2} & X_{0, 3}\\\\X_{1, 0} & X_{1, 1} & X_{1, 2} & X_{1, 3}\\\\X_{2, 0} & X_{2, 1} & X_{2, 2} & X_{2, 3}\\end{array}\\right]$\n\n\n\n\n```python\n# \u884c\u5217\u306e\u8981\u7d20\u306b\u30a2\u30af\u30bb\u30b9\u3082\u3067\u304d\u308b\nmat2[0, 0] = 1\nmat2\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{array}{cccc}1 & X_{0, 1} & X_{0, 2} & X_{0, 3}\\\\X_{1, 0} & X_{1, 1} & X_{1, 2} & X_{1, 3}\\\\X_{2, 0} & X_{2, 1} & X_{2, 2} & X_{2, 3}\\end{array}\\right]$\n\n\n\n### \u30d9\u30af\u30c8\u30eb\u306e\u6f14\u7b97\n\n- \u52a0\u7b97\u3001\u4e57\u7b97\u306f`+`, `-`\u6f14\u7b97\u5b50\u3067\u5b9f\u884c\u3067\u304d\u308b\u3002\n- 
\u5b9a\u6570\u3092\u639b\u3051\u308b\u3053\u3068\u3067\u3001\u30d9\u30af\u30c8\u30eb\u306e\u30b9\u30ab\u30e9\u30fc\u500d\u3082\u53ef\u80fd\n- \u5185\u7a4d\u306b\u306f\u3001`v1.dot(v2)`\u3092\u7528\u3044\u308b\u3002\n- \u8981\u7d20\u3054\u3068\u306e\u4e57\u7b97(\u30a2\u30c0\u30de\u30fc\u30eb\u7a4d)\u306b\u306f\u3001`v1.multiply_elementwise(v2)`\u3092\u7528\u3044\u308b\u3002\n- \u30d9\u30af\u30c8\u30eb\u306e\u5927\u304d\u3055\u306f\u3001`v1.norm()`\u3067\u8a08\u7b97\u3067\u304d\u308b\u3002\n\n\n```python\nvec1 = sp.Matrix(np.arange(4))\nvec2 = sp.Matrix(np.arange(4))\n```\n\n\n```python\nvec1 + vec2\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}0\\\\2\\\\4\\\\6\\end{matrix}\\right]$\n\n\n\n\n```python\n2 * vec1 - 0.5 * vec2\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}0\\\\1.5\\\\3.0\\\\4.5\\end{matrix}\\right]$\n\n\n\n\n```python\n# \u5185\u7a4d\nvec1.dot(vec2)\n```\n\n\n```python\n# \u8981\u7d20\u3054\u3068\u306e\u4e57\u7b97 (\u30a2\u30c0\u30de\u30fc\u30eb\u7a4d)\nvec1.multiply_elementwise(vec2)\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}0\\\\1\\\\4\\\\9\\end{matrix}\\right]$\n\n\n\n\n```python\n# \u30d9\u30af\u30c8\u30eb\u306e\u5927\u304d\u3055\nvec1.norm()\n```\n\n### \u884c\u5217\u306e\u6f14\u7b97\n\n- \u52a0\u7b97\u3001\u4e57\u7b97\u306f`+`, `-`\u6f14\u7b97\u5b50\u3067\u5b9f\u884c\u3067\u304d\u308b\u3002\n- `*`\u6f14\u7b97\u5b50\u306f\u3001\u884c\u5217\u7a4d\u3068\u306a\u308b\u3002\n- `mat.T`\u3067\u3001\u8ee2\u7f6e\u884c\u5217\u3092\u8a08\u7b97\u3067\u304d\u308b\u3002\n- `mat.inv()`\u3067\u3001\u9006\u884c\u5217\u3092\u8a08\u7b97\u3067\u304d\u308b\u3002\n- `mat.det()`\u3067\u3001\u884c\u5217\u5f0f\u3092\u8a08\u7b97\u3067\u304d\u308b\u3002\n\n\n```python\nmat1 = sp.Matrix(np.arange(0, 4).reshape(2, 2))\nmat1\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}0 & 1\\\\2 & 3\\end{matrix}\\right]$\n\n\n\n\n```python\nmat2 = sp.Matrix(np.arange(5, 9).reshape(2, 2))\nmat2\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}5 & 6\\\\7 & 8\\end{matrix}\\right]$\n\n\n\n\n```python\nmat1 + mat2\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}5 & 7\\\\9 & 11\\end{matrix}\\right]$\n\n\n\n\n```python\n2.0 * mat1 - 0.5 * mat2\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}-2.5 & -1.0\\\\0.5 & 2.0\\end{matrix}\\right]$\n\n\n\n\n```python\n# \u884c\u5217\u5f0f\nmat1 * mat2\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}7 & 8\\\\31 & 36\\end{matrix}\\right]$\n\n\n\n\n```python\n# \u8ee2\u7f6e\u884c\u5217\nmat1.T\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}0 & 2\\\\1 & 3\\end{matrix}\\right]$\n\n\n\n\n```python\n# \u9006\u884c\u5217\nmat1.inv()\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}- \\frac{3}{2} & \\frac{1}{2}\\\\1 & 0\\end{matrix}\\right]$\n\n\n\n\n```python\nmat1.inv() * mat1\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 0\\\\0 & 1\\end{matrix}\\right]$\n\n\n\n\n```python\n# \u884c\u5217\u5f0f\nmat1.det()\n```\n\n### \u95a2\u6570\u306e\u9069\u7528\n\n- `\u884c\u5217.applyfunc()`\u3092\u4f7f\u7528\u3059\u308b\u3053\u3068\u3067\u3001\u8981\u7d20\u3054\u3068\u306e\u8a08\u7b97\u304c\u53ef\u80fd\u306b\u306a\u308b\u3002\n + \u5f15\u6570\u306b\u306f\u3001\u95a2\u6570\u30aa\u30d6\u30b8\u30a7\u30af\u3001\u307e\u305f\u306f\u30e9\u30e0\u30c0\u5f0f\u3092\u4f7f\u7528\u3059\u308b\u3002\n\n\n```python\nx = sp.Matrix(sp.MatrixSymbol('X', 3, 3))\nx\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{array}{ccc}X_{0, 0} & X_{0, 1} & X_{0, 2}\\\\X_{1, 0} & X_{1, 1} & X_{1, 2}\\\\X_{2, 0} & X_{2, 1} & X_{2, 2}\\end{array}\\right]$\n\n\n\n\n```python\n# 
\u5358\u7d14\u306b2\u4e57\u3059\u308b\u3068\u3001\u884c\u5217\u7a4d\u3068\u306a\u308b\u3002\nx ** 2\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}X_{0, 0}^{2} + X_{0, 1} X_{1, 0} + X_{0, 2} X_{2, 0} & X_{0, 0} X_{0, 1} + X_{0, 1} X_{1, 1} + X_{0, 2} X_{2, 1} & X_{0, 0} X_{0, 2} + X_{0, 1} X_{1, 2} + X_{0, 2} X_{2, 2}\\\\X_{0, 0} X_{1, 0} + X_{1, 0} X_{1, 1} + X_{1, 2} X_{2, 0} & X_{0, 1} X_{1, 0} + X_{1, 1}^{2} + X_{1, 2} X_{2, 1} & X_{0, 2} X_{1, 0} + X_{1, 1} X_{1, 2} + X_{1, 2} X_{2, 2}\\\\X_{0, 0} X_{2, 0} + X_{1, 0} X_{2, 1} + X_{2, 0} X_{2, 2} & X_{0, 1} X_{2, 0} + X_{1, 1} X_{2, 1} + X_{2, 1} X_{2, 2} & X_{0, 2} X_{2, 0} + X_{1, 2} X_{2, 1} + X_{2, 2}^{2}\\end{matrix}\\right]$\n\n\n\n\n```python\n# applyfunc \u3092\u7528\u3044\u308b\u3053\u3068\u3067\u3001\u8981\u7d20\u3054\u3068\u306b2\u4e57\u3059\u308b\u3053\u3068\u3082\u53ef\u80fd\nx.applyfunc(lambda y: y ** 2)\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}X_{0, 0}^{2} & X_{0, 1}^{2} & X_{0, 2}^{2}\\\\X_{1, 0}^{2} & X_{1, 1}^{2} & X_{1, 2}^{2}\\\\X_{2, 0}^{2} & X_{2, 1}^{2} & X_{2, 2}^{2}\\end{matrix}\\right]$\n\n\n\n### \u30d9\u30af\u30c8\u30eb\u306e\u76f4\u4ea4\u5316\n\n- `symply.GramSchmidt()`\u3092\u4f7f\u7528\u3059\u308b\u3053\u3068\u3067\u3001\u30b0\u30e9\u30e0\u30fb\u30b7\u30e5\u30df\u30c3\u30c8\u306e\u6b63\u898f\u76f4\u4ea4\u5316\u3092\u884c\u3046\u3053\u3068\u304c\u3067\u304d\u308b\u3002\n- `symply.Matrix.orthogonalize()`\u3067\u3082\u3001\u540c\u69d8\u306e\u7d50\u679c\u304c\u5f97\u3089\u308c\u308b\u3002\n\n\n```python\nvec1 = sp.Matrix([1, 1, 0])\nvec2 = sp.Matrix([0, -1, 1])\nvec3 = sp.Matrix([1, 1, 1])\n\nvec1, vec2, vec3\n```\n\n\n\n\n$\\displaystyle \\left( \\left[\\begin{matrix}1\\\\1\\\\0\\end{matrix}\\right], \\ \\left[\\begin{matrix}0\\\\-1\\\\1\\end{matrix}\\right], \\ \\left[\\begin{matrix}1\\\\1\\\\1\\end{matrix}\\right]\\right)$\n\n\n\n\n```python\n# \u5185\u7a4d\u306e\u7d50\u679c\u304b\u3089\u3001\u5b9a\u7fa9\u3057\u305f\u30d9\u30af\u30c8\u30eb\u306f\u76f4\u4ea4\u5316\u3057\u3066\u3044\u306a\u3044\nprint(f\"vec1 \u30fb vec2 = {vec1.dot(vec2)}\")\nprint(f\"vec1 \u30fb vec3 = {vec1.dot(vec3)}\")\n```\n\n vec1 \u30fb vec2 = -1\n vec1 \u30fb vec3 = 2\n\n\n\n```python\nvec_list = [vec1, vec2, vec3]\nout1, out2, out3 = sp.GramSchmidt(vec_list)\nout1, out2, out3\n```\n\n\n\n\n$\\displaystyle \\left( \\left[\\begin{matrix}1\\\\1\\\\0\\end{matrix}\\right], \\ \\left[\\begin{matrix}\\frac{1}{2}\\\\- \\frac{1}{2}\\\\1\\end{matrix}\\right], \\ \\left[\\begin{matrix}- \\frac{1}{3}\\\\\\frac{1}{3}\\\\\\frac{1}{3}\\end{matrix}\\right]\\right)$\n\n\n\n\n```python\n# \u5185\u7a4d\u306e\u7d50\u679c\u304b\u3089\u3001\u4e92\u3044\u306b\u76f4\u4ea4\u3057\u3066\u3044\u308b\u306e\u304c\u308f\u304b\u308b\nprint(f\"out1 \u30fb out2 = {out1.dot(out2)}\")\nprint(f\"out1 \u30fb out3 = {out1.dot(out3)}\")\n```\n\n out1 \u30fb out2 = 0\n out1 \u30fb out3 = 0\n\n\n\n```python\nsp.Matrix.orthogonalize(*vec_list)\n```\n\n\n\n\n$\\displaystyle \\left[ \\left[\\begin{matrix}1\\\\1\\\\0\\end{matrix}\\right], \\ \\left[\\begin{matrix}\\frac{1}{2}\\\\- \\frac{1}{2}\\\\1\\end{matrix}\\right], \\ \\left[\\begin{matrix}- \\frac{1}{3}\\\\\\frac{1}{3}\\\\\\frac{1}{3}\\end{matrix}\\right]\\right]$\n\n\n\n### \u30d9\u30af\u30c8\u30eb\u306e\u5916\u7a4d\u30fb\u76f4\u7a4d\n\n- \u5916\u7a4d\u306f\u3001`vec1.corss(vec2)` \u3067\u8a08\u7b97\u3067\u304d\u308b\u3002\n- \u76f4\u7a4d\u306f\u3001`sympy.physics.quantum.TensorProduct(vec1, vec.T)` \u3067\u8a08\u7b97\u3067\u304d\u308b\u3002\n\n#### [\u53c2\u8003] 
\u76f4\u7a4d\u3068\u306f\uff1f\n\n$$\n\\boldsymbol{u} \\otimes \\boldsymbol{u} = \\boldsymbol{u} \\cdot \\boldsymbol{u}^{T}\n$$\n\n\n```python\nux, uy, uz = sp.symbols('ux uy uz')\nvx, vy, vz = sp.symbols('vx vy vz')\n\nu = sp.Matrix([ux, uy, uz])\nv = sp.Matrix([vx, vy, vz])\n\nu, v\n```\n\n\n\n\n$\\displaystyle \\left( \\left[\\begin{matrix}ux\\\\uy\\\\uz\\end{matrix}\\right], \\ \\left[\\begin{matrix}vx\\\\vy\\\\vz\\end{matrix}\\right]\\right)$\n\n\n\n\n```python\n# \u3053\u308c\u306f\u5185\u7a4d\nu.dot(v)\n```\n\n\n```python\n# \u5916\u7a4d\u306e\u8a08\u7b97\nu.cross(v)\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}uy vz - uz vy\\\\- ux vz + uz vx\\\\ux vy - uy vx\\end{matrix}\\right]$\n\n\n\n\n```python\n# \u76f4\u7a4d\u306e\u8a08\u7b97\nTensorProduct(u, u.T)\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}ux^{2} & ux uy & ux uz\\\\ux uy & uy^{2} & uy uz\\\\ux uz & uy uz & uz^{2}\\end{matrix}\\right]$\n\n\n\n### \u884c\u5217\u306e\u56fa\u6709\u5024\u3068\u56fa\u6709\u30d9\u30af\u30c8\u30eb\n\n- \u884c\u5217\u306e\u56fa\u6709\u5024\u306f\u3001`\u884c\u5217.eigenvals()`\u3067\u8a08\u7b97\u3067\u304d\u308b\u3002\n- \u307e\u305f\u3001\u56fa\u6709\u30d9\u30af\u30c8\u30eb\u306f\u3001`\u884c\u5217.eigenvects()`\u3067\u8a08\u7b97\u3067\u304d\u308b\u3002\n\n\n```python\nA = sp.Matrix([[5, 3], [4, 9]])\nA\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}5 & 3\\\\4 & 9\\end{matrix}\\right]$\n\n\n\n\n```python\nA.eigenvals()\n```\n\n\n```python\nA.eigenvects()\n```\n\n\n\n\n$\\displaystyle \\left[ \\left( 3, \\ 1, \\ \\left[ \\left[\\begin{matrix}- \\frac{3}{2}\\\\1\\end{matrix}\\right]\\right]\\right), \\ \\left( 11, \\ 1, \\ \\left[ \\left[\\begin{matrix}\\frac{1}{2}\\\\1\\end{matrix}\\right]\\right]\\right)\\right]$\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "9e4050617c2483d5b31d5eafc460700798189295", "size": 31814, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/03-VectorAndMatrix.ipynb", "max_stars_repo_name": "codemajin/Introduction-to-SymPy", "max_stars_repo_head_hexsha": "a403b50ba384260a8d5845be1fd1fbd0e5f6b807", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notebooks/03-VectorAndMatrix.ipynb", "max_issues_repo_name": "codemajin/Introduction-to-SymPy", "max_issues_repo_head_hexsha": "a403b50ba384260a8d5845be1fd1fbd0e5f6b807", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/03-VectorAndMatrix.ipynb", "max_forks_repo_name": "codemajin/Introduction-to-SymPy", "max_forks_repo_head_hexsha": "a403b50ba384260a8d5845be1fd1fbd0e5f6b807", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 25.4919871795, "max_line_length": 1744, "alphanum_fraction": 0.4819576287, "converted": true, "num_tokens": 3821, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9304582497090321, "lm_q2_score": 0.8840392832736084, "lm_q1q2_score": 0.8225616441887889}} {"text": "# Classical Physics Models\n### Yingbo Ma, Chris Rackauckas\n\nIf you're getting some cold feet to jump in to DiffEq land, here are some handcrafted differential equations mini problems to hold your hand along the beginning of your journey.\n\n## Radioactive Decay of Carbon-14\n\n#### First order linear ODE\n\n$$f(t,u) = \\frac{du}{dt}$$\n\nThe Radioactive decay problem is the first order linear ODE problem of an exponential with a negative coefficient, which represents the half-life of the process in question. Should the coefficient be positive, this would represent a population growth equation.\n\n\n```julia\nusing OrdinaryDiffEq, Plots\ngr()\n\n#Half-life of Carbon-14 is 5,730 years.\nC\u2081 = 5.730\n\n#Setup\nu\u2080 = 1.0\ntspan = (0.0, 1.0)\n\n#Define the problem\nradioactivedecay(u,p,t) = -C\u2081*u\n\n#Pass to solver\nprob = ODEProblem(radioactivedecay,u\u2080,tspan)\nsol = solve(prob,Tsit5())\n\n#Plot\nplot(sol,linewidth=2,title =\"Carbon-14 half-life\", xaxis = \"Time in thousands of years\", yaxis = \"Percentage left\", label = \"Numerical Solution\")\nplot!(sol.t, t->exp(-C\u2081*t),lw=3,ls=:dash,label=\"Analytical Solution\")\n```\n\n## Simple Pendulum\n\n#### Second Order Linear ODE\n\nWe will start by solving the pendulum problem. In the physics class, we often solve this problem by small angle approximation, i.e. $ sin(\\theta) \\approx \\theta$, because otherwise, we get an elliptic integral which doesn't have an analytic solution. The linearized form is\n\n$$\\ddot{\\theta} + \\frac{g}{L}{\\theta} = 0$$\n\nBut we have numerical ODE solvers! Why not solve the *real* pendulum?\n\n$$\\ddot{\\theta} + \\frac{g}{L}{\\sin(\\theta)} = 0$$\n\n\n```julia\n# Simple Pendulum Problem\nusing OrdinaryDiffEq, Plots\n\n#Constants\nconst g = 9.81\nL = 1.0\n\n#Initial Conditions\nu\u2080 = [0,\u03c0/2]\ntspan = (0.0,6.3)\n\n#Define the problem\nfunction simplependulum(du,u,p,t)\n \u03b8 = u[1]\n d\u03b8 = u[2]\n du[1] = d\u03b8\n du[2] = -(g/L)*sin(\u03b8)\nend\n\n#Pass to solvers\nprob = ODEProblem(simplependulum,u\u2080, tspan)\nsol = solve(prob,Tsit5())\n\n#Plot\nplot(sol,linewidth=2,title =\"Simple Pendulum Problem\", xaxis = \"Time\", yaxis = \"Height\", label = [\"Theta\",\"dTheta\"])\n```\n\nSo now we know that behaviour of the position versus time. However, it will be useful to us to look at the phase space of the pendulum, i.e., and representation of all possible states of the system in question (the pendulum) by looking at its velocity and position. 
Phase space analysis is ubiquitous in the analysis of dynamical systems, and thus we will provide a few facilities for it.\n\n\n```julia\np = plot(sol,vars = (1,2), xlims = (-9,9), title = \"Phase Space Plot\", xaxis = \"Velocity\", yaxis = \"Position\", leg=false)\nfunction phase_plot(prob, u0, p, tspan=2pi)\n _prob = ODEProblem(prob.f,u0,(0.0,tspan))\n sol = solve(_prob,Vern9()) # Use Vern9 solver for higher accuracy\n plot!(p,sol,vars = (1,2), xlims = nothing, ylims = nothing)\nend\nfor i in -4pi:pi/2:4\u03c0\n for j in -4pi:pi/2:4\u03c0\n phase_plot(prob, [j,i], p)\n end\nend\nplot(p,xlims = (-9,9))\n```\n\n## Simple Harmonic Oscillator\n\n### Double Pendulum\n\n\n```julia\n#Double Pendulum Problem\nusing OrdinaryDiffEq, Plots\n\n#Constants and setup\nconst m\u2081, m\u2082, L\u2081, L\u2082 = 1, 2, 1, 2\ninitial = [0, \u03c0/3, 0, 3pi/5]\ntspan = (0.,50.)\n\n#Convenience function for transforming from polar to Cartesian coordinates\nfunction polar2cart(sol;dt=0.02,l1=L\u2081,l2=L\u2082,vars=(2,4))\n u = sol.t[1]:dt:sol.t[end]\n\n p1 = l1*map(x->x[vars[1]], sol.(u))\n p2 = l2*map(y->y[vars[2]], sol.(u))\n\n x1 = l1*sin.(p1)\n y1 = l1*-cos.(p1)\n (u, (x1 + l2*sin.(p2),\n y1 - l2*cos.(p2)))\nend\n\n#Define the Problem\nfunction double_pendulum(xdot,x,p,t)\n xdot[1]=x[2]\n xdot[2]=-((g*(2*m\u2081+m\u2082)*sin(x[1])+m\u2082*(g*sin(x[1]-2*x[3])+2*(L\u2082*x[4]^2+L\u2081*x[2]^2*cos(x[1]-x[3]))*sin(x[1]-x[3])))/(2*L\u2081*(m\u2081+m\u2082-m\u2082*cos(x[1]-x[3])^2)))\n xdot[3]=x[4]\n xdot[4]=(((m\u2081+m\u2082)*(L\u2081*x[2]^2+g*cos(x[1]))+L\u2082*m\u2082*x[4]^2*cos(x[1]-x[3]))*sin(x[1]-x[3]))/(L\u2082*(m\u2081+m\u2082-m\u2082*cos(x[1]-x[3])^2))\nend\n\n#Pass to Solvers\ndouble_pendulum_problem = ODEProblem(double_pendulum, initial, tspan)\nsol = solve(double_pendulum_problem, Vern7(), abs_tol=1e-10, dt=0.05);\n```\n\n\n```julia\n#Obtain coordinates in Cartesian Geometry\nts, ps = polar2cart(sol, l1=L\u2081, l2=L\u2082, dt=0.01)\nplot(ps...)\n```\n\n### Poincar\u00e9 section\n\nThe Poincar\u00e9 section is a contour plot of a higher-dimensional phase space diagram. 
It helps to understand the dynamic interactions and is wonderfully pretty.\n\nThe following equation came from [StackOverflow question](https://mathematica.stackexchange.com/questions/40122/help-to-plot-poincar%C3%A9-section-for-double-pendulum)\n\n$$\\frac{d}{dt}\n \\begin{pmatrix}\n \\alpha \\\\ l_\\alpha \\\\ \\beta \\\\ l_\\beta\n \\end{pmatrix}=\n \\begin{pmatrix}\n 2\\frac{l_\\alpha - (1+\\cos\\beta)l_\\beta}{3-\\cos 2\\beta} \\\\\n -2\\sin\\alpha - \\sin(\\alpha + \\beta) \\\\\n 2\\frac{-(1+\\cos\\beta)l_\\alpha + (3+2\\cos\\beta)l_\\beta}{3-\\cos2\\beta}\\\\\n -\\sin(\\alpha+\\beta) - 2\\sin(\\beta)\\frac{(l_\\alpha-l_\\beta)l_\\beta}{3-\\cos2\\beta} + 2\\sin(2\\beta)\\frac{l_\\alpha^2-2(1+\\cos\\beta)l_\\alpha l_\\beta + (3+2\\cos\\beta)l_\\beta^2}{(3-\\cos2\\beta)^2}\n \\end{pmatrix}$$\n\nThe Poincar\u00e9 section here is the collection of $(\u03b2,l_\u03b2)$ when $\u03b1=0$ and $\\frac{d\u03b1}{dt}>0$.\n\n#### Hamiltonian of a double pendulum\nNow we will plot the Hamiltonian of a double pendulum\n\n\n```julia\n#Constants and setup\nusing OrdinaryDiffEq\ninitial2 = [0.01, 0.005, 0.01, 0.01]\ntspan2 = (0.,200.)\n\n#Define the problem\nfunction double_pendulum_hamiltonian(udot,u,p,t)\n \u03b1 = u[1]\n l\u03b1 = u[2]\n \u03b2 = u[3]\n l\u03b2 = u[4]\n udot .=\n [2(l\u03b1-(1+cos(\u03b2))l\u03b2)/(3-cos(2\u03b2)),\n -2sin(\u03b1) - sin(\u03b1+\u03b2),\n 2(-(1+cos(\u03b2))l\u03b1 + (3+2cos(\u03b2))l\u03b2)/(3-cos(2\u03b2)),\n -sin(\u03b1+\u03b2) - 2sin(\u03b2)*(((l\u03b1-l\u03b2)l\u03b2)/(3-cos(2\u03b2))) + 2sin(2\u03b2)*((l\u03b1^2 - 2(1+cos(\u03b2))l\u03b1*l\u03b2 + (3+2cos(\u03b2))l\u03b2^2)/(3-cos(2\u03b2))^2)]\nend\n\n# Construct a ContiunousCallback\ncondition(u,t,integrator) = u[1]\naffect!(integrator) = nothing\ncb = ContinuousCallback(condition,affect!,nothing,\n save_positions = (true,false))\n\n# Construct Problem\npoincare = ODEProblem(double_pendulum_hamiltonian, initial2, tspan2)\nsol2 = solve(poincare, Vern9(), save_everystep = false, callback=cb, abstol=1e-9)\n\nfunction poincare_map(prob, u\u2080, p; callback=cb)\n _prob = ODEProblem(prob.f,[0.01, 0.01, 0.01, u\u2080],prob.tspan)\n sol = solve(_prob, Vern9(), save_everystep = false, callback=cb, abstol=1e-9)\n scatter!(p, sol, vars=(3,4), markersize = 2)\nend\n```\n\n\n```julia\np = scatter(sol2, vars=(3,4), leg=false, markersize = 2, ylims=(-0.01,0.03))\nfor i in -0.01:0.00125:0.01\n poincare_map(poincare, i, p)\nend\nplot(p,ylims=(-0.01,0.03))\n```\n\n## H\u00e9non-Heiles System\n\nThe H\u00e9non-Heiles potential occurs when non-linear motion of a star around a galactic center with the motion restricted to a plane.\n\n$$\n\\begin{align}\n\\frac{d^2x}{dt^2}&=-\\frac{\\partial V}{\\partial x}\\\\\n\\frac{d^2y}{dt^2}&=-\\frac{\\partial V}{\\partial y}\n\\end{align}\n$$\n\nwhere\n\n$$V(x,y)={\\frac {1}{2}}(x^{2}+y^{2})+\\lambda \\left(x^{2}y-{\\frac {y^{3}}{3}}\\right).$$\n\nWe pick $\\lambda=1$ in this case, so\n\n$$V(x,y) = \\frac{1}{2}(x^2+y^2+2x^2y-\\frac{2}{3}y^3).$$\n\nThen the total energy of the system can be expressed by\n\n$$E = T+V = V(x,y)+\\frac{1}{2}(\\dot{x}^2+\\dot{y}^2).$$\n\nThe total energy should conserve as this system evolves.\n\n\n```julia\nusing OrdinaryDiffEq, Plots\n\n#Setup\ninitial = [0.,0.1,0.5,0]\ntspan = (0,100.)\n\n#Remember, V is the potential of the system and T is the Total Kinetic Energy, thus E will\n#the total energy of the system.\nV(x,y) = 1//2 * (x^2 + y^2 + 2x^2*y - 2//3 * y^3)\nE(x,y,dx,dy) = V(x,y) + 1//2 * (dx^2 + dy^2);\n\n#Define the function\nfunction 
H\u00e9non_Heiles(du,u,p,t)\n x = u[1]\n y = u[2]\n dx = u[3]\n dy = u[4]\n du[1] = dx\n du[2] = dy\n du[3] = -x - 2x*y\n du[4] = y^2 - y -x^2\nend\n\n#Pass to solvers\nprob = ODEProblem(H\u00e9non_Heiles, initial, tspan)\nsol = solve(prob, Vern9(), abs_tol=1e-16, rel_tol=1e-16);\n```\n\n\n```julia\n# Plot the orbit\nplot(sol, vars=(1,2), title = \"The orbit of the H\u00e9non-Heiles system\", xaxis = \"x\", yaxis = \"y\", leg=false)\n```\n\n\n```julia\n#Optional Sanity check - what do you think this returns and why?\n@show sol.retcode\n\n#Plot -\nplot(sol, vars=(1,3), title = \"Phase space for the H\u00e9non-Heiles system\", xaxis = \"Position\", yaxis = \"Velocity\")\nplot!(sol, vars=(2,4), leg = false)\n```\n\n\n```julia\n#We map the Total energies during the time intervals of the solution (sol.u here) to a new vector\n#pass it to the plotter a bit more conveniently\nenergy = map(x->E(x...), sol.u)\n\n#We use @show here to easily spot erratic behaviour in our system by seeing if the loss in energy was too great.\n@show \u0394E = energy[1]-energy[end]\n\n#Plot\nplot(sol.t, energy, title = \"Change in Energy over Time\", xaxis = \"Time in iterations\", yaxis = \"Change in Energy\")\n```\n\n### Symplectic Integration\n\nTo prevent energy drift, we can instead use a symplectic integrator. We can directly define and solve the `SecondOrderODEProblem`:\n\n\n```julia\nfunction HH_acceleration!(dv,v,u,p,t)\n x,y = u\n dx,dy = dv\n dv[1] = -x - 2x*y\n dv[2] = y^2 - y -x^2\nend\ninitial_positions = [0.0,0.1]\ninitial_velocities = [0.5,0.0]\nprob = SecondOrderODEProblem(HH_acceleration!,initial_velocities,initial_positions,tspan)\nsol2 = solve(prob, KahanLi8(), dt=1/10);\n```\n\nNotice that we get the same results:\n\n\n```julia\n# Plot the orbit\nplot(sol2, vars=(3,4), title = \"The orbit of the H\u00e9non-Heiles system\", xaxis = \"x\", yaxis = \"y\", leg=false)\n```\n\n\n```julia\nplot(sol2, vars=(3,1), title = \"Phase space for the H\u00e9non-Heiles system\", xaxis = \"Position\", yaxis = \"Velocity\")\nplot!(sol2, vars=(4,2), leg = false)\n```\n\nbut now the energy change is essentially zero:\n\n\n```julia\nenergy = map(x->E(x[3], x[4], x[1], x[2]), sol2.u)\n#We use @show here to easily spot erratic behaviour in our system by seeing if the loss in energy was too great.\n@show \u0394E = energy[1]-energy[end]\n\n#Plot\nplot(sol2.t, energy, title = \"Change in Energy over Time\", xaxis = \"Time in iterations\", yaxis = \"Change in Energy\")\n```\n\nIt's so close to zero it breaks GR! And let's try to use a Runge-Kutta-Nystr\u00f6m solver to solve this. Note that Runge-Kutta-Nystr\u00f6m isn't symplectic.\n\n\n```julia\nsol3 = solve(prob, DPRKN6());\nenergy = map(x->E(x[3], x[4], x[1], x[2]), sol3.u)\n@show \u0394E = energy[1]-energy[end]\ngr()\nplot(sol3.t, energy, title = \"Change in Energy over Time\", xaxis = \"Time in iterations\", yaxis = \"Change in Energy\")\n```\n\nNote that we are using the `DPRKN6` sovler at `reltol=1e-3` (the default), yet it has a smaller energy variation than `Vern9` at `abs_tol=1e-16, rel_tol=1e-16`. 
Therefore, using specialized solvers to solve its particular problem is very efficient.\n", "meta": {"hexsha": "ef912dbe755c9fb48d29a08bb31f90b3d663f996", "size": 14527, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebook/models/01-classical_physics.ipynb", "max_stars_repo_name": "UnofficialJuliaMirror/DiffEqTutorials.jl-6d1b261a-3be8-11e9-3f2f-0b112a9a8436", "max_stars_repo_head_hexsha": "c7a0eaccc71464405499c13dba5bb8e1380fce29", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notebook/models/01-classical_physics.ipynb", "max_issues_repo_name": "UnofficialJuliaMirror/DiffEqTutorials.jl-6d1b261a-3be8-11e9-3f2f-0b112a9a8436", "max_issues_repo_head_hexsha": "c7a0eaccc71464405499c13dba5bb8e1380fce29", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebook/models/01-classical_physics.ipynb", "max_forks_repo_name": "UnofficialJuliaMirror/DiffEqTutorials.jl-6d1b261a-3be8-11e9-3f2f-0b112a9a8436", "max_forks_repo_head_hexsha": "c7a0eaccc71464405499c13dba5bb8e1380fce29", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 60.2780082988, "max_line_length": 1139, "alphanum_fraction": 0.5842224823, "converted": true, "num_tokens": 3715, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9207896802383028, "lm_q2_score": 0.8933094145755219, "lm_q1q2_score": 0.8225500902008602}} {"text": "# Bracketing Methods (Bisection example)\n\nConsider the refrigeration tank example from Belegundu and Chandrupatla [1].\nWe want to minimize the cost of a cylindrical refrigeration tank that must have\na volume of 50 m$^3$.\nThe costs of the tank are\n- Circular ends cost \\$10 per m$^2$\n- Cylindrical walls cost \\$6 per m$^2$\n- Refrigerator costs \\$80 per m$^2$ over its life\n\nLet $d$ be the tank diameter, and $L$ the height.\n\n\\begin{align}\n f &= 10 \\left(\\frac{2 \\pi d^2}{4}\\right) + 6 (\\pi d L) + 80 \\left( \\frac{2\n\\pi d^2}{4} + \\pi d L \\right)\\\\\n &= 45 \\pi d^2 + 86 \\pi d L\n\\end{align}\n\nHowever, $L$ is a function of $d$ because the volume is constrained. We could\nadd a constraint to the problem\n\\begin{align}\n \\frac{\\pi d^2}{4} L = V\n\\end{align}\nbut it is easier to express $V$ as a function of $d$ and make the problem\nunconstrained.\n\\begin{align}\n L &= \\frac{4 V}{\\pi d^2}\\\\\n &= \\frac{200}{\\pi d^2}\n\\end{align}\nThus the optimization can be expressed as\n\\begin{align*}\n\\textrm{minimize} &\\quad 45 \\pi d^2 + \\frac{17200}{d}\\\\\n\\textrm{with respect to} &\\quad d \\\\\n\\textrm{subject to} &\\quad d \\ge 0\n\\end{align*}\n\nOne-dimensional optimization problems are silly of course, we can just find the\nminimum by looking at a plot. However, we use a one-dimensional example to\nillustrate line searches. A line search seeks an approximate minimum to a one-dimensional optimization problem within a N-dimensional space.\n\nWe will use bisection to find the minimum of this function. 
This is a recursive\nfunction.\n\n\n\n```python\nfrom math import fabs\n\ndef bisection(x1, x2, f1, f2, fh, sizevec):\n \"\"\"\n This function finds the root of a function using bisection.\n \n Parameters\n ----------\n x1 : float\n lower bound\n x2 : float\n upper bound\n f1 : float\n function value at lower bound\n f2 : float\n function value at upper bound\n f1 * f2 must be < 0 in order to contain a root. \n Currently this is left up to the user to check.\n fh : function handle\n should be of form f = fh(x)\n where f is the function value\n sizevec : list \n input an empty array and the interval size\n will be appended at each iteration\n \n Returns\n -------\n xroot : float\n root of function fh\n \n \"\"\"\n\n # divide interval in half\n x = 0.5*(x1 + x2)\n \n # save in iteration history \n sizevec.append(x2-x1)\n\n # if interval is small, then we have converged\n if (fabs(x2 - x1) < 1e-6):\n return x\n\n # evaluate function at the new point (midpoint of interval)\n f = fh(x)\n\n # determine which side of the interval are root is in\n if (f*f1 < 0): # left brack applies\n x2 = x\n f2 = f\n else: # right bracket applies\n x1 = x\n f1 = f\n \n # recursively call bisection with our new interval\n return bisection(x1, x2, f1, f2, fh, sizevec)\n```\n\nWe are interseted in optimization, so we don't want to find the root of our\nfunction, but rather the \"root\" of the derivative as a potential minimum point.\nLet's define our objectve function, its derivative, and solve for the minium.\n\n\n```python\n%matplotlib inline\n\nimport numpy as np\nfrom math import pi\nimport matplotlib.pyplot as plt\nplt.style.use('fivethirtyeight')\n\ndef func(d):\n return 45*pi*d**2 + 17200.0/d\n\n\ndef deriv(d):\n return 90*pi*d - 17200.0/d**2\n\n\n\n# choose starting interval\nd1 = 1.0\nd2 = 10.0\n\n# evalaute function\ng1 = deriv(d1)\ng2 = deriv(d2)\n\n# check that our bracket is ok\nassert(g1*g2 < 0)\n\n# find optimal point\nsize = []\ndopt = bisection(d1, d2, g1, g2, deriv, size)\n\n# plot function\ndvec = np.linspace(d1, d2, 200)\nplt.figure()\nplt.plot(dvec, func(dvec)/1e3)\nplt.plot(dopt, func(dopt)/1e3, 'r*', markersize=12)\nplt.xlabel('diameter (m)')\nplt.ylabel('cost (thousands of dollars)')\n\n# plot convergence history (interval size)\nplt.figure()\nplt.semilogy(size)\nplt.xlabel('iteration')\nplt.ylabel('interval size')\n```\n\nNote the linear convergence behavior.\n\n[1] Belegundu, A. D. and Chandrupatla, T. 
R., Optimization Concepts and Applications in Engineering, Cambridge University Press, Mar 2011.\n", "meta": {"hexsha": "164eb530e59640e762528d5a0a7769a57d1f9426", "size": 58346, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "LineSearch.ipynb", "max_stars_repo_name": "BYUFLOWLab/MDOnotebooks", "max_stars_repo_head_hexsha": "49344cb874a52cd67cc04ebb728195fa025d5590", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2017-03-13T23:22:32.000Z", "max_stars_repo_stars_event_max_datetime": "2017-08-10T14:15:31.000Z", "max_issues_repo_path": "LineSearch.ipynb", "max_issues_repo_name": "BYUFLOWLab/MDOnotebooks", "max_issues_repo_head_hexsha": "49344cb874a52cd67cc04ebb728195fa025d5590", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "LineSearch.ipynb", "max_forks_repo_name": "BYUFLOWLab/MDOnotebooks", "max_forks_repo_head_hexsha": "49344cb874a52cd67cc04ebb728195fa025d5590", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-03-12T11:31:01.000Z", "max_forks_repo_forks_event_max_datetime": "2019-03-12T11:31:01.000Z", "avg_line_length": 236.2186234818, "max_line_length": 28090, "alphanum_fraction": 0.9036780585, "converted": true, "num_tokens": 1228, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9207896780646393, "lm_q2_score": 0.8933094003735664, "lm_q1q2_score": 0.8225500751820922}} {"text": "# ARMA models\n\n2019, J\u00e9r\u00e9mie DECOCK\n\n## Lag operator\n\nDefinition of the *backward shift operator* $B$:\n\n$$B y_t = y_{t-1}$$\n\n$$B^i y_t = y_{t-i}$$\n\nDefinition of the *forward shift operator* $F$:\n\n$$ F y_t = B^{-1} y_t = y_{t+1}$$\n\n$$F^i y_t = B^{-i} y_t = y_{t+i}$$\n\n## Lag polynomials\n\nPolynomials of the lag operator are commonly used to simplify ARMA models notation.\nThese polynomials are called *lag polynomials*.\n\n### Example\n\n$a_{(p)} (B)$ is a *lag polynomials*:\n$$\n\\begin{align}\na_{(p)} (B) &= 1 + \\sum_{i=1}^p a_i B^i \\\\\n &= 1 + a_1 B + a_2 B^2 + \\dots + a_p B^p\n\\end{align}\n$$\n\n### Usage example in time series analysis\n\n$$\n\\begin{align}\na_{(p)} (B)y_t &= \\left( 1 + \\sum_{i=1}^p a_i B^i \\right) ~ y_t \\\\\n &= y_t + a_1 B y_t + a_2 B^2 y_t + \\dots + a_p B^p y_t \\\\\n &= y_t + a_1 y_{t-1} + a_2 y_{t-2} + \\dots + a_p y_{t-p}\n\\end{align}\n$$\n\n## Difference operator\n\nDefinition of the *difference operator*:\n$$\\nabla^i y_t = (1-B)^i ~ y_t$$\n\n### Example: first order difference operator (i.e. $i=1$)\n\n$$\n\\begin{align}\n\\nabla y_t &= (1-B) ~ y_t \\\\\n &= y_t - B ~ y_t \\\\\n &= y_t - y_{t-1}\n\\end{align}\n$$\n\n### Example: second order difference operator (i.e. 
$i=2$)\n\n$$\n\\begin{align}\n\\nabla^2 y_t &= (1-B)^2 ~ y_t \\\\\n &= (1-B)(1-B) ~ y_t \\\\\n &= \\left( 1 - 2B + B^2 \\right) ~ y_t \\\\\n &= y_t - 2y_{t-1} + y_{t-2}\n\\end{align}\n$$\n\n### Example: first order difference operator with a lag of 12\n\n$$\n\\begin{align}\n\\left( 1-B^{12} \\right) ~ y_t &= y_t - B^{12} ~ y_t \\\\\n &= y_t - y_{t-12}\n\\end{align}\n$$\n\n## Autoregressive Model (AR)\n\nUsing this model, the observation $y_t$ of a time series at time $t$ is defined as a linear combination of its $p$ past observations $y_{t-1}, \\dots, y_{t-p}$\n\n$$\n\\begin{align}\nAR(\\color{\\red}{p}): \\quad\\quad\\quad \\color{\\red}{\\underbrace{\\varphi_{(p)} (B) y_t}_{\\text{AR}}} \n&= c + \\color{\\green}{\\varepsilon_t} \\\\\n\\color{\\red}{ \\left( 1 + \\sum_{i=1}^p \\varphi_i B^i \\right) y_t }\n&= c + \\color{\\green}{\\varepsilon_t} \\\\\n\\color{\\red}{ y_t } + \\color{\\red}{\\sum_{i=1}^p \\varphi_i B^i y_t }\n&= c + \\color{\\green}{\\varepsilon_t} \\\\\n\\color{\\red}{ y_t } + \\color{\\red}{ \\sum_{i=1}^p \\varphi_i y_{t-i} }\n&= c + \\color{\\green}{\\varepsilon_t} \\\\\n\\color{\\red}{y_t} \n&= c + \\color{\\green}{\\varepsilon_t} + \\color{\\red}{\\sum_{i=1}^p \\phi_i y_{t-i}}\n\\end{align}\n$$\n\nwith $\\varphi_i = -\\phi_i$.\n\n$c$ is a constant used to ensure the series is centered to 0 ($c$ is the average of the series).\n\n$\\color{\\green}{\\varepsilon_t}$ is the noise component of $y_t$.\n\n### Example: Autoregressive order 3 process (i.e. $p=3$) noted AR(3)\n\n$$\n\\begin{align}\nAR(\\color{\\red}{3}): \\quad\\quad\\quad \\color{\\red}{\\underbrace{\\varphi_{(3)} (B) y_t}_{\\text{AR}}} \n&= c + \\color{\\green}{\\varepsilon_t} \\\\\n\\color{\\red}{ \\left( 1 + \\sum_{i=1}^3 \\varphi_i B^i \\right) y_t }\n&= c + \\color{\\green}{\\varepsilon_t} \\\\\n\\color{\\red}{ y_t } + \\color{\\red}{\\sum_{i=1}^3 \\varphi_i B^i y_t }\n&= c + \\color{\\green}{\\varepsilon_t} \\\\\n\\color{\\red}{ y_t } + \\color{\\red}{ \\sum_{i=1}^3 \\varphi_i y_{t-i} }\n&= c + \\color{\\green}{\\varepsilon_t} \\\\\n\\color{\\red}{y_t} \n&= c + \\color{\\green}{\\varepsilon_t} + \\color{\\red}{\\sum_{i=1}^3 \\phi_i y_{t-i}} \\\\\n\\color{\\red}{y_t} \n&= c + \\color{\\green}{\\varepsilon_t} + \\color{\\red}{\\phi_1 y_{t-1}} + \\color{\\red}{\\phi_2 y_{t-2}} + \\color{\\red}{\\phi_3 y_{t-3}}\n\\end{align}\n$$\n\nwith $\\varphi_i = -\\phi_i$.\n\n## Moving Average Model (MA)\n\nCombinaison lin\u00e9aire des $q$ erreurs pass\u00e9es\nUsing this model, the observation $y_t$ of a time series at time $t$ is defined as the output of a linear filter with transfer function $\\theta_{(q)}(B)$ when the input is white noise $\\varepsilon_t$.\n\n$$\n\\begin{align}\nMA(\\color{\\green}{q}): \\quad\\quad\\quad\n\\color{\\red}{y_t} &= c + \\color{\\green}{\\underbrace{\\theta_{(q)} (B) \\varepsilon_t}_{MA}} \\\\\n\\color{\\red}{y_t} &= c + \\color{\\green}{\\left( 1 + \\sum_{j=1}^q \\theta_j B^j \\right) ~ \\varepsilon_t} \\\\\n\\color{\\red}{y_t} &= c + \\color{\\green}{\\varepsilon_t} + \\color{\\green}{\\sum_{j=1}^q \\theta_j B^j \\varepsilon_t} \\\\\n\\color{\\red}{y_t} &= c + \\color{\\green}{\\varepsilon_t} + \\color{\\green}{\\sum_{j=1}^q \\theta_j ~ \\varepsilon_{t-j}}\n\\end{align}\n$$\n\n### Example: Moving average order 3 process (i.e. 
$q=3$) noted MA(3)\n\n$$\n\\begin{align}\nMA(\\color{\\green}{3}): \\quad\\quad\\quad\n\\color{\\red}{y_t} &= c + \\color{\\green}{\\underbrace{\\theta_{(3)} (B) \\varepsilon_t}_{MA}} \\\\\n\\color{\\red}{y_t} &= c + \\color{\\green}{\\left( 1 + \\sum_{j=1}^3 \\theta_j B^j \\right) ~ \\varepsilon_t} \\\\\n\\color{\\red}{y_t} &= c + \\color{\\green}{\\varepsilon_t} + \\color{\\green}{\\sum_{j=1}^3 \\theta_j B^j \\varepsilon_t} \\\\\n\\color{\\red}{y_t} &= c + \\color{\\green}{\\varepsilon_t} + \\color{\\green}{\\sum_{j=1}^3 \\theta_j ~ \\varepsilon_{t-j}} \\\\\n\\color{\\red}{y_t} &= c + \\color{\\green}{\\varepsilon_t} + \\color{\\green}{\\theta_1 ~ \\varepsilon_{t-1}} + \\color{\\green}{\\theta_2 ~ \\varepsilon_{t-2}} + \\color{\\green}{\\theta_3 ~ \\varepsilon_{t-3}}\n\\end{align}\n$$\n\n### AutoRegressive Moving Average (ARMA)\n\nAn ARMA(p,q) process of order $p$ and $q$ is a combination of an AR(p) and an MA(q) process.\n\n$$\n\\begin{align}\nARMA(\\color{\\red}{p}, \\color{\\green}{q}): \\quad\\quad\\quad\n\\color{\\red}{\\underbrace{\\varphi_{(p)} (B) y_t}_{\\text{AR}}} \n&=\nc + \\color{\\green}{\\underbrace{\\theta_{(q)} (B) \\varepsilon_t}_{MA}}\n\\\\\n\\color{\\red}{ \\left( 1 + \\sum_{i=1}^p \\varphi_i B^i \\right) y_t }\n&=\nc + \\color{\\green}{\\left( 1 + \\sum_{j=1}^q \\theta_j B^j \\right) ~ \\varepsilon_t}\n\\\\\n\\color{\\red}{ y_t } + \\color{\\red}{\\sum_{i=1}^p \\varphi_i B^i y_t }\n&=\nc + \\color{\\green}{\\varepsilon_t} + \\color{\\green}{\\sum_{j=1}^q \\theta_j B^j \\varepsilon_t}\n\\\\\n\\color{\\red}{ y_t } + \\color{\\red}{ \\sum_{i=1}^p \\varphi_i y_{t-i} }\n&=\nc + \\color{\\green}{\\varepsilon_t} + \\color{\\green}{\\sum_{j=1}^q \\theta_j ~ \\varepsilon_{t-j}}\n\\\\\n\\color{\\red}{y_t}\n&=\nc\n+ \\color{\\red}{\\underbrace{\\sum_{i=1}^p \\phi_i y_{t-i}}_{\\text{AR model}}}\n+ \\color{\\green}{\\varepsilon_t}\n+ \\color{\\green}{\\underbrace{\\sum_{j=1}^q \\theta_j \\varepsilon_{t-j}}_{\\text{MA model}}}\n\\end{align}\n$$\n\nwith $\\varphi_i = -\\phi_i$\n\n#### Example: ARMA(3,2)\n\n$$\n\\begin{align}\nARMA(\\color{\\red}{3}, \\color{\\green}{2}): \\quad\\quad\\quad\n\\color{\\red}{\\underbrace{\\varphi_{(3)} (B) y_t}_{\\text{AR}}} \n&=\nc + \\color{\\green}{\\underbrace{\\theta_{(2)} (B) \\varepsilon_t}_{MA}}\n\\\\\n\\color{\\red}{ \\left( 1 + \\sum_{i=1}^3 \\varphi_i B^i \\right) y_t }\n&=\nc + \\color{\\green}{\\left( 1 + \\sum_{j=1}^2 \\theta_j B^j \\right) ~ \\varepsilon_t}\n\\\\\n\\color{\\red}{ y_t } + \\color{\\red}{\\sum_{i=1}^3 \\varphi_i B^i y_t }\n&=\nc + \\color{\\green}{\\varepsilon_t} + \\color{\\green}{\\sum_{j=1}^2 \\theta_j B^j \\varepsilon_t}\n\\\\\n\\color{\\red}{ y_t } + \\color{\\red}{ \\sum_{i=1}^3 \\varphi_i y_{t-i} }\n&=\nc + \\color{\\green}{\\varepsilon_t} + \\color{\\green}{\\sum_{j=1}^2 \\theta_j ~ \\varepsilon_{t-j}}\n\\\\\n\\color{\\red}{y_t}\n&=\nc\n+ \\color{\\red}{\\underbrace{\\sum_{i=1}^3 \\phi_i y_{t-i}}_{\\text{AR model}}}\n+ \\color{\\green}{\\varepsilon_t}\n+ \\color{\\green}{\\underbrace{\\sum_{j=1}^2 \\theta_j \\varepsilon_{t-j}}_{\\text{MA model}}}\n\\\\\n\\color{\\red}{y_t}\n&=\nc\n+ \\underbrace{ \\color{\\red}{\\phi_1 y_{t-1}} + \\color{\\red}{\\phi_2 y_{t-2}} + \\color{\\red}{\\phi_3 y_{t-3}} }_{\\text{AR model}}\n+ \\color{\\green}{\\varepsilon_t}\n+ \\underbrace{ \\color{\\green}{\\theta_1 \\varepsilon_{t-1}} + \\color{\\green}{\\theta_2 \\varepsilon_{t-2}} }_{\\text{MA model}}\n\\end{align}\n$$\n\nwith $\\varphi_i = -\\phi_i$\n\n## AutoRegressive Integrated Moving Average (ARIMA)\n\nAn ARIMA process is an 
*integrated* ARMA process.\nIt's used on non-stationary time series.\nAn initial transformation step (the \"integrated\" part of the model) is applied to eliminate the non-stationarity.\nThis transformation is usually operated by the *difference operator* described above.\n\nAn $ARIMA(p,d,q)$ process produce series that can be modeled as a stationary $ARMA(p,q)$ process after being differenced $d$ times.\n\n$$\n\\begin{align}\nARIMA(\\color{\\red}{p}, \\color{\\orange}{d}, \\color{\\green}{q}): \\quad\\quad\\quad\n\\color{\\red}{\\underbrace{\\varphi_{(p)} (B) \\color{\\orange}{\\nabla^d} y_t}_{\\text{AR}}} \n&=\nc + \\color{\\green}{\\underbrace{\\theta_{(q)} (B) \\varepsilon_t}_{MA}}\n\\\\\n\\color{\\red}{ \\left( 1 + \\sum_{i=1}^p \\varphi_i B^i \\right)} \\color{\\orange}{\\nabla^d} \\color{\\red}{ y_t }\n&=\nc + \\color{\\green}{\\left( 1 + \\sum_{j=1}^q \\theta_j B^j \\right) ~ \\varepsilon_t}\n\\\\\n\\color{\\orange}{\\nabla^d} \\color{\\red}{ y_t } + \\color{\\red}{\\sum_{i=1}^p \\varphi_i B^i } \\color{\\orange}{\\nabla^d} \\color{\\red}{ y_t }\n&=\nc + \\color{\\green}{\\varepsilon_t} + \\color{\\green}{\\sum_{j=1}^q \\theta_j B^j \\varepsilon_t}\n\\\\\n\\color{\\orange}{\\nabla^d} \\color{\\red}{ y_t } + \\color{\\red}{ \\sum_{i=1}^p \\varphi_i } \\color{\\orange}{\\nabla^d} \\color{\\red}{ y_{t-i} }\n&=\nc + \\color{\\green}{\\varepsilon_t} + \\color{\\green}{\\sum_{j=1}^q \\theta_j ~ \\varepsilon_{t-j}}\n\\\\\n\\color{\\orange}{\\nabla^d} \\color{\\red}{y_t}\n&=\nc\n+ \\underbrace{\n \\color{\\red}{\\sum_{i=1}^p \\phi_i}\n \\color{\\orange}{\\nabla^d}\n \\color{\\red}{y_{t-i}}\n }_{\\text{ARI model}}\n+ \\color{\\green}{\\varepsilon_t}\n+ \\underbrace{\\color{\\green}{\\sum_{j=1}^q \\theta_j \\varepsilon_{t-j}}}_{\\text{MA model}}\n\\end{align}\n$$\n\nwith\n\n$$\\nabla = (1-B)$$\n$$\\nabla^2 = (1-B)^2$$\n$$\\dots$$\n$$\\nabla^d = (1-B)^d$$\n\n#### Examples for $d=1$, $d=2$ and $d=3$\n\nIf $d=0$:\n$$\n\\begin{align}\n\\nabla^0 y_t &:= (1-B)^0 y_t \\\\\n &= y_t\n\\end{align}\n$$\n\nIf $d=1$:\n$$\n\\begin{align}\n\\nabla y_t &:= (1-B) y_t \\\\\n &= y_t - B y_t \\\\\n &= y_t - y_{t-1}\n\\end{align}\n$$\n\nIf $d=2$:\n$$\n\\begin{align}\n\\nabla^2 y_t &:= (1-B)^2 y_t \\\\\n &= (1-B)(1-B) ~ y_t \\\\\n &= (1 - 2B + B^2) y_t \\\\\n &= y_t - 2B y_t + B^2 y_t \\\\\n &= y_t - 2 y_{t-1} + y_{t-2}\n\\end{align}\n$$\n\nThe above approach generalises to the i-th difference operator $\\nabla^i y_t = (1-B)^i ~ y_t$\n\n#### Example: ARIMA(3, 1, 2)\n\n$$\n\\begin{align}\nARIMA(\\color{\\red}{3}, \\color{\\orange}{1}, \\color{\\green}{2}): \\quad\\quad\\quad\n\\color{\\red}{\\underbrace{\\varphi_{(3)} (B) \\color{\\orange}{\\nabla} y_t}_{\\text{AR}}} \n&=\nc + \\color{\\green}{\\underbrace{\\theta_{(2)} (B) \\varepsilon_t}_{MA}}\n\\\\\n\\color{\\red}{ \\left( 1 + \\sum_{i=1}^3 \\varphi_i B^i \\right)} \\color{\\orange}{\\nabla} \\color{\\red}{ y_t }\n&=\nc + \\color{\\green}{\\left( 1 + \\sum_{j=1}^2 \\theta_j B^j \\right) ~ \\varepsilon_t}\n\\\\\n\\color{\\orange}{\\nabla} \\color{\\red}{ y_t } + \\color{\\red}{\\sum_{i=1}^3 \\varphi_i B^i } \\color{\\orange}{\\nabla} \\color{\\red}{ y_t }\n&=\nc + \\color{\\green}{\\varepsilon_t} + \\color{\\green}{\\sum_{j=1}^2 \\theta_j B^j \\varepsilon_t}\n\\\\\n\\color{\\orange}{\\nabla} \\color{\\red}{ y_t } + \\color{\\red}{ \\sum_{i=1}^3 \\varphi_i } \\color{\\orange}{\\nabla} \\color{\\red}{ y_{t-i} }\n&=\nc + \\color{\\green}{\\varepsilon_t} + \\color{\\green}{\\sum_{j=1}^2 \\theta_j ~ \\varepsilon_{t-j}}\n\\\\\n\\color{\\orange}{\\nabla} 
\\color{\\red}{y_t}\n&=\nc\n+ \\underbrace{\n \\color{\\red}{\\sum_{i=1}^3 \\phi_i}\n \\color{\\orange}{\\nabla}\n \\color{\\red}{y_{t-i}}\n }_{\\text{ARI model}}\n+ \\color{\\green}{\\varepsilon_t}\n+ \\underbrace{\\color{\\green}{\\sum_{j=1}^2 \\theta_j \\varepsilon_{t-j}}}_{\\text{MA model}}\n\\\\\n\\color{\\orange}{\\nabla} \\color{\\red}{y_t}\n&=\nc\n+ \\underbrace{\n \\color{\\red}{\\phi_1} \\color{\\orange}{\\nabla} \\color{\\red}{y_{t-1}}\n + \\color{\\red}{\\phi_2} \\color{\\orange}{\\nabla} \\color{\\red}{y_{t-2}}\n + \\color{\\red}{\\phi_3} \\color{\\orange}{\\nabla} \\color{\\red}{y_{t-3}}\n }_{\\text{ARI model}}\n+ \\color{\\green}{\\varepsilon_t}\n+ \\underbrace{\\color{\\green}{\\theta_1 \\varepsilon_{t-1}} + \\color{\\green}{\\theta_2 \\varepsilon_{t-2}}}_{\\text{MA model}}\n\\\\\n\\color{\\orange}{(1-B)} \\color{\\red}{y_t}\n&=\nc\n+ \\underbrace{\n \\color{\\red}{\\phi_1} \\color{\\orange}{(1-B)} \\color{\\red}{y_{t-1}}\n + \\color{\\red}{\\phi_2} \\color{\\orange}{(1-B)} \\color{\\red}{y_{t-2}}\n + \\color{\\red}{\\phi_3} \\color{\\orange}{(1-B)} \\color{\\red}{y_{t-3}}\n }_{\\text{ARI model}}\n+ \\color{\\green}{\\varepsilon_t}\n+ \\underbrace{\\color{\\green}{\\theta_1 \\varepsilon_{t-1}} + \\color{\\green}{\\theta_2 \\varepsilon_{t-2}}}_{\\text{MA model}}\n\\\\\n\\color{\\red}{y_t} - \\color{\\orange}{B y_t}\n&=\nc\n+ \\underbrace{\n \\color{\\red}{\\phi_1} (\\color{\\red}{y_{t-1}} - \\color{\\orange}{B y_{t-1}})\n + \\color{\\red}{\\phi_2} (\\color{\\red}{y_{t-2}} - \\color{\\orange}{B y_{t-2}})\n + \\color{\\red}{\\phi_3} (\\color{\\red}{y_{t-3}} - \\color{\\orange}{B y_{t-3}})\n }_{\\text{ARI model}}\n+ \\color{\\green}{\\varepsilon_t}\n+ \\underbrace{\\color{\\green}{\\theta_1 \\varepsilon_{t-1}} + \\color{\\green}{\\theta_2 \\varepsilon_{t-2}}}_{\\text{MA model}}\n\\\\\n\\color{\\red}{y_t} - \\color{\\orange}{y_{t-1}}\n&=\nc\n+ \\underbrace{\n \\color{\\red}{\\phi_1} (\\color{\\red}{y_{t-1}} - \\color{\\orange}{y_{t-2}})\n + \\color{\\red}{\\phi_2} (\\color{\\red}{y_{t-2}} - \\color{\\orange}{y_{t-3}})\n + \\color{\\red}{\\phi_3} (\\color{\\red}{y_{t-3}} - \\color{\\orange}{y_{t-4}})\n }_{\\text{ARI model}}\n+ \\color{\\green}{\\varepsilon_t}\n+ \\underbrace{\\color{\\green}{\\theta_1 \\varepsilon_{t-1}} + \\color{\\green}{\\theta_2 \\varepsilon_{t-2}}}_{\\text{MA model}}\n\\\\\n\\color{\\red}{y_t}\n&=\nc\n+ \\color{\\orange}{y_{t-1}} \n+ \\underbrace{\n \\color{\\red}{\\phi_1 y_{t-1}} - \\color{\\orange}{\\phi_1 y_{t-2}}\n + \\color{\\red}{\\phi_2 y_{t-2}} - \\color{\\orange}{\\phi_2 y_{t-3}}\n + \\color{\\red}{\\phi_3 y_{t-3}} - \\color{\\orange}{\\phi_3 y_{t-4}}\n }_{\\text{ARI model}}\n+ \\color{\\green}{\\varepsilon_t}\n+ \\underbrace{\\color{\\green}{\\theta_1 \\varepsilon_{t-1}} + \\color{\\green}{\\theta_2 \\varepsilon_{t-2}}}_{\\text{MA model}}\n\\end{align}\n$$\n\n## AutoRegressive Moving Average Including Exogenous Covariates (ARMAX)\n\n$$\n\\begin{align}\n\\color{\\red}{\\underbrace{\\varphi (B) y_t}_{AR}}\n&= c\n+ \\color{\\green}{\\underbrace{\\theta (B) \\varepsilon_t}_{MA}}\n+ \\color{\\purple}{\\underbrace{\\beta_1 x_{1,~t} + \\beta_2 x_{2,~t} + \\dots + \\beta_r x_{r,~t}}_{X}}\n\\end{align}\n$$\n\n$x_{1,~t}, x_{2,~t}, \\dots, x_{r,~t}$ is the value of the $r$ exogenous variables at time $t$, with $\\beta_1, \\beta_2, \\dots, \\beta_r$ the $r$ corresponding coefficients.\n\n$$\nARMAX(\\color{\\red}{p}, \\color{\\green}{q}): \\quad\n\\color{\\red}{y_t} = c\n+ \\color{\\red}{\\underbrace{\\sum_{i=1}^p \\varphi_i y_{t-i}}_{\\text{AR model}}}\n+ 
\\color{\\green}{\\varepsilon_t}\n+ \\color{\\green}{\\underbrace{\\sum_{j=1}^q \\theta_j \\varepsilon_{t-j}}_{\\text{MA model}}}\n+ \\color{\\purple}{\\underbrace{\\sum_{k=1}^r \\beta_k x_{k,~t}}_{\\text{Exogenous}}}\n$$\n\n## Seasonal AutoRegressive Moving Average (SARMA)\n\n#### Purely seasonal processes\n\n$$\n\\begin{align}\nARMA(\\color{\\red}{P}, \\color{\\green}{Q})_{\\color{\\red}{s}, \\color{\\green}{s'}}: \\quad\\quad\\quad\n\\color{\\red}{\\underbrace{\\Phi_{(P)} (B^s) y_t}_{\\text{SAR}}}\n&= \nc + \\color{\\green}{\\underbrace{\\Theta_{(Q)} (B^{s'}) \\varepsilon_t}_{SMA}}\n\\\\\n\\color{\\red}{ \\left( 1 + \\sum_{i=1}^P \\Phi_i B^{si} \\right) y_t }\n&=\nc + \\color{\\green}{\\left( 1 + \\sum_{j=1}^Q \\Theta_j B^{s'j} \\right) ~ \\varepsilon_t}\n\\\\\n\\color{\\red}{ y_t } + \\color{\\red}{\\sum_{i=1}^P \\Phi_i B^{si} y_t }\n&=\nc + \\color{\\green}{\\varepsilon_t} + \\color{\\green}{\\sum_{j=1}^Q \\Theta_j B^{s'j} \\varepsilon_t}\n\\\\\n\\color{\\red}{ y_t } + \\color{\\red}{ \\sum_{i=1}^P \\Phi_i ~ y_{t-si} }\n&=\nc + \\color{\\green}{\\varepsilon_t} + \\color{\\green}{\\sum_{j=1}^Q \\Theta_j ~ \\varepsilon_{t-s'j}}\n\\\\\n\\color{\\red}{y_t} &= c\n+ \\color{\\red}{\\underbrace{\\sum_{i=1}^P \\varPhi_i ~ y_{t-si}}_{\\text{SAR model}}}\n+ \\color{\\green}{\\varepsilon_t}\n+ \\color{\\green}{\\underbrace{\\sum_{j=1}^Q \\Theta_j ~ \\varepsilon_{t-s'j}}_{\\text{SMA model}}}\n\\end{align}\n$$\n\nwith $\\varPhi = -\\Phi$ and :\n\n- $s$ is the number of time steps for a single seasonal period for the AR process\n- $s'$ is the number of time steps for a single seasonal period for the MA process\n- $P$ is the seasonal AR process order\n- $Q$ is the seasonal MA process order\n\nRemark: most of the time, $s = s'$. In this case, the process is simply noted $ARMA(\\color{\\red}{P}, \\color{\\green}{Q})_{s}$\n\n#### Example: $ARMA(3,2)_{4, 5}$\n\n$$\n\\begin{align}\nARMA(\\color{\\red}{3}, \\color{\\green}{2})_{4, 5}: \\quad\\quad\\quad\n\\color{\\red}{\\underbrace{\\Phi_{(3)} (B^4) y_t}_{\\text{SAR}}} \n&=\nc + \\color{\\green}{\\underbrace{\\Theta_{(2)} (B^5) \\varepsilon_t}_{SMA}}\n\\\\\n\\color{\\red}{ \\left( 1 + \\sum_{i=1}^3 \\Phi_i B^{4i} \\right) y_t }\n&=\nc + \\color{\\green}{\\left( 1 + \\sum_{j=1}^2 \\Theta_j B^{5j} \\right) ~ \\varepsilon_t}\n\\\\\n\\color{\\red}{ y_t } + \\color{\\red}{\\sum_{i=1}^3 \\Phi_i B^{4i} y_t }\n&=\nc + \\color{\\green}{\\varepsilon_t} + \\color{\\green}{\\sum_{j=1}^2 \\Theta_j B^{5j} \\varepsilon_t}\n\\\\\n\\color{\\red}{ y_t } + \\color{\\red}{ \\sum_{i=1}^3 \\Phi_i y_{t-4i} }\n&=\nc + \\color{\\green}{\\varepsilon_t} + \\color{\\green}{\\sum_{j=1}^2 \\Theta_j ~ \\varepsilon_{t-5j}}\n\\\\\n\\color{\\red}{y_t}\n&=\nc\n+ \\color{\\red}{\\underbrace{\\sum_{i=1}^3 \\varPhi_i y_{t-4i}}_{\\text{SAR model}}}\n+ \\color{\\green}{\\varepsilon_t}\n+ \\color{\\green}{\\underbrace{\\sum_{j=1}^2 \\Theta_j \\varepsilon_{t-5j}}_{\\text{SMA model}}}\n\\\\\n\\color{\\red}{y_t}\n&=\nc\n+ \\underbrace{ \\color{\\red}{\\varPhi_1 y_{t-4}} + \\color{\\red}{\\varPhi_2 y_{t-8}} + \\color{\\red}{\\varPhi_3 y_{t-12}} }_{\\text{SAR model}}\n+ \\color{\\green}{\\varepsilon_t}\n+ \\underbrace{ \\color{\\green}{\\Theta_1 \\varepsilon_{t-5}} + \\color{\\green}{\\Theta_2 \\varepsilon_{t-10}} }_{\\text{SMA model}}\n\\end{align}\n$$\n\nwith $\\varPhi_i = -\\Phi_i$\n\n### Seasonal and non seasonal processes : additive process $ARMA(p,q)+ARMA(P,Q)_{s, s'}$\n\nThese processes are rarely used $ARMA(p,q)+ARMA(P,Q)_{s, s'}$.\n\n$$\n\\begin{align}\nARMA(p,q)+ARMA(P,Q)_{s, s'}: \\quad 
\n\\left(\n\\underbrace{\\varphi_p \\left(B\\right)}_{AR} +\n\\underbrace{\\Phi_P \\left(B^s\\right)}_{SAR} -\n1\n\\right) ~ y_t\n&=\nc +\n\\left(\n\\underbrace{\\theta_q \\left(B\\right)}_{MA} +\n\\underbrace{\\Theta_Q \\left(B^{s'}\\right)}_{SMA}\n- 1 \\right) ~ \\varepsilon_t \\\\\n\\underbrace{\\varphi_p (B) ~ y_t}_{AR} +\n\\underbrace{\\Phi_P \\left(B^s\\right) ~ y_t}_{SAR} -\ny_t\n&=\nc +\n\\underbrace{\\theta_q \\left(B\\right) ~ \\varepsilon_t}_{MA} +\n\\underbrace{\\Theta_Q \\left(B^{s'}\\right) ~ \\varepsilon_t}_{SMA} -\n\\varepsilon_t \\\\\n\\underbrace{\\left( 1 + \\sum_{i=1}^p \\varphi_i B^i \\right) ~ y_t}_{AR} +\n\\underbrace{\\left( 1 + \\sum_{k=1}^P \\Phi_k B^{sk} \\right) ~ y_t}_{SAR} -\ny_t\n&=\nc +\n\\underbrace{\\left( 1 + \\sum_{j=1}^q \\theta_j B^j \\right) ~ \\varepsilon_t}_{MA} +\n\\underbrace{\\left( 1 + \\sum_{l=1}^Q \\Theta_l B^{s'l} \\right) ~ \\varepsilon_t}_{SMA} -\n\\varepsilon_t \\\\\n\\underbrace{y_t + \\sum_{i=1}^p \\varphi_i B^i y_t}_{AR} +\n\\underbrace{y_t + \\sum_{k=1}^P \\Phi_k B^{sk} y_t}_{SAR} -\ny_t\n&=\nc +\n\\underbrace{\\varepsilon_t + \\sum_{j=1}^q \\theta_j B^j \\varepsilon_t}_{MA} +\n\\underbrace{\\varepsilon_t + \\sum_{l=1}^Q \\Theta_l B^{s'l} \\varepsilon_t}_{SMA} -\n\\varepsilon_t \\\\\ny_t +\n\\sum_{i=1}^p \\varphi_i B^i y_t +\n\\sum_{k=1}^P \\Phi_k B^{sk} y_t\n&=\nc +\n\\varepsilon_t + \\sum_{j=1}^q \\theta_j B^j \\varepsilon_t +\n\\sum_{l=1}^Q \\Theta_l B^{s'l} \\varepsilon_t \\\\\ny_t +\n\\sum_{i=1}^p \\varphi_i y_{t-i} +\n\\sum_{k=1}^P \\Phi_k y_{t-sk}\n&=\nc +\n\\varepsilon_t +\n\\sum_{j=1}^q \\theta_j \\varepsilon_{t-j} +\n\\sum_{l=1}^Q \\Theta_l \\varepsilon_{t-s'l} \\\\\ny_t\n&=\nc\n+ \\underbrace{\\sum_{i=1}^p \\phi_i y_{t-i}}_{\\text{AR model}}\n+ \\underbrace{\\sum_{k=1}^{P} \\varPhi_k y_{t-sk}}_{\\text{SAR model}}\n+ \\varepsilon_t\n+ \\underbrace{\\sum_{j=1}^q \\theta_j \\varepsilon_{t-j}}_{\\text{MA model}}\n+ \\underbrace{\\sum_{l=1}^{Q} \\Theta_l \\varepsilon_{t-s'l}}_{\\text{SMA model}}\n\\end{align}\n$$\n\nwith $\\varPhi = -\\Phi$ and $\\varphi = -\\phi$\n\nRemark: most of the time, $s = s'$. 
In this case, the process is simply noted $ARMA(\\color{\\red}{p}, \\color{\\green}{q}) + ARMA(\\color{\\red}{P}, \\color{\\green}{Q})_{s}$\n\n### Seasonal and non seasonal processes : multiplicative process $ARMA(p,q) \\times ARMA(P,Q)_{s, s'}$\n\nThese process are more often used than previously described additive processes.\n\nSeasonal series are characterized by a strong serial correlation at the seasonal lag (and possibly multiples thereof).\n\n$$\n\\begin{align}\nARMA(\\color{\\red}{p},\\color{\\green}{q}) \\times ARMA(\\color{\\red}{P},\\color{\\green}{Q})_{\\color{\\red}{s}, \\color{\\green}{s'}}: \\quad \n\\color{\\red}{\n\\underbrace{\\varphi_{(p)} (B)}_{AR} ~\n\\underbrace{\\Phi_{(P)} (B^s)}_{SAR} ~ \ny_t\n}\n&=\nc +\n\\color{\\green}{\n\\underbrace{\\theta_{(q)} (B)}_{MA} ~\n\\underbrace{\\Theta_{(Q)} (B^{s'})}_{SMA} ~\n\\varepsilon_t\n}\n\\\\\n\\color{\\red}{\n\\underbrace{\\left( 1 + \\sum_{i=1}^p \\varphi_i B^i \\right)}_{AR} ~\n\\underbrace{\\left( 1 + \\sum_{k=1}^P \\Phi_k B^{sk} \\right)}_{SAR} ~ \ny_t\n}\n&=\nc +\n\\color{\\green}{\n\\underbrace{\\left( 1 + \\sum_{j=1}^q \\theta_j B^j \\right)}_{MA} ~\n\\underbrace{\\left( 1 + \\sum_{l=1}^Q \\Theta_l B^{s'l} \\right)}_{SMA} ~\n\\varepsilon_t\n}\n\\\\\n\\color{\\red}{\n\\underbrace{\n\\left( 1 + \\varphi_1 B + \\varphi_2 B^2 + \\dots + \\varphi_p B^p \\right)\n\\left( 1 + \\Phi_1 B^{s} + \\Phi_2 B^{s2} + \\dots + \\Phi_P B^{sP} \\right)\ny_t\n}_{\\text{Order of the equivalent AR process} ~=~ p + Ps}\n}\n&=\nc +\n\\color{\\green}{\n\\underbrace{\n\\left( 1 + \\theta_1 B + \\theta_2 B^2 + \\dots + \\theta_q B^q \\right)\n\\left( 1 + \\Theta_1 B^{s'} + \\Theta_2 B^{s'2} + \\dots + \\Theta_Q B^{s'Q} \\right)\n\\varepsilon_t\n}_{\\text{Order of the equivalent MA process} ~=~ q + Qs'}\n}\n\\\\\n\\end{align}\n$$\n\nwith $s > p$ and $s' > q$.\n\n$ARMA(p,q) \\times ARMA(P,Q)_{s, s'}$ can be defined as an $ARMA(p + Ps, ~ q + Qs')$ with some parameters fixed to 0.\n\nRemark: most of the time, $s = s'$. 
In this case, the process is simply noted $ARMA(\\color{\\red}{p}, \\color{\\green}{q}) \\times ARMA(\\color{\\red}{P}, \\color{\\green}{Q})_{s}$\n\n#### Example: $ARMA(3,0) \\times ARMA(2,0)_{6, 0}$\n\n$$\n\\begin{align}\nARMA(\\color{\\red}{3},0) \\times ARMA(\\color{\\pink}{2},0)_{6,0}: \\quad\n\\color{\\red}{\n\\underbrace{\\varphi_{(3)} (B)}_{AR} ~\n}\n\\color{\\pink}{\n\\underbrace{\\Phi_{(2)} (B^6)}_{SAR} ~ \n}\ny_t\n=~&\nc + \\varepsilon_t\n\\\\\n\\color{\\red}{\n\\underbrace{\\left( 1 + \\sum_{i=1}^3 \\varphi_i B^i \\right)}_{AR} ~\n}\n\\color{\\pink}{\n\\underbrace{\\left( 1 + \\sum_{k=1}^2 \\Phi_k B^{6k} \\right)}_{SAR} ~ \n}\ny_t\n=~&\nc + \\varepsilon_t\n\\\\\n\\underbrace{\n \\color{\\red}{\n \\left( 1 + \\varphi_1 B + \\varphi_2 B^2 + \\varphi_3 B^3 \\right)\n }\n\\color{\\pink}{\n \\left( 1 + \\Phi_1 B^{6} + \\Phi_2 B^{12} \\right)\n }\ny_t\n}_{\\text{Order of the equivalent AR process} ~=~ 3 + 2 \\times 6 = 15}\n=~&\nc + \\varepsilon_t\n\\\\\n\\left(\n1 + \\Phi_1 B^{6} + \\Phi_2 B^{12}\n+ \\color{\\red}{\\varphi_1} B + \\color{\\red}{\\varphi_1} \\color{\\pink}{\\Phi_1} B^{7} + \\color{\\red}{\\varphi_1} \\color{\\pink}{\\Phi_2} B^{13}\n+ \\color{\\red}{\\varphi_2} B^2 + \\color{\\red}{\\varphi_2} \\color{\\pink}{\\Phi_1} B^{8} + \\color{\\red}{\\varphi_2} \\color{\\pink}{\\Phi_2} B^{14}\n+ \\color{\\red}{\\varphi_3} B^3 + \\color{\\red}{\\varphi_3} \\color{\\pink}{\\Phi_1} B^{9} + \\color{\\red}{\\varphi_3} \\color{\\pink}{\\Phi_2} B^{15}\n\\right)\ny_t\n=~&\nc + \\varepsilon_t\n\\\\\ny_t =\n & ~ c \\\\\n & + \\color{\\red}{\\phi_1} y_{t-1} + \\color{\\red}{\\phi_2} y_{t-2} + \\color{\\red}{\\phi_3} y_{t-3} \\\\\n & + \\color{\\pink}{\\varPhi_1} y_{t-6} + \\color{\\red}{\\phi_1} \\color{\\pink}{\\varPhi_1} y_{t-7} + \\color{\\red}{\\phi_2} \\color{\\pink}{\\varPhi_1} y_{t-8} + \\color{\\red}{\\phi_3} \\color{\\pink}{\\varPhi_1} y_{t-9} \\\\\n & + \\color{\\pink}{\\varPhi_2} y_{t-12} + \\color{\\red}{\\phi_1} \\color{\\pink}{\\varPhi_2} y_{t-13} + \\color{\\red}{\\phi_2} \\color{\\pink}{\\varPhi_2} y_{t-14} + \\color{\\red}{\\phi_3} \\color{\\pink}{\\varPhi_2} y_{t-15} \\\\\n & + \\varepsilon_t\n\\end{align}\n$$\n\n
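The expansion above can be checked numerically. Below is a minimal simulation sketch (it is not part of the original derivation: the coefficient values are arbitrary illustrative choices and `numpy` is assumed to be available). It builds the product lag polynomial $\\varphi_{(3)} (B) ~ \\Phi_{(2)} (B^6)$ with `np.convolve` and simulates the corresponding recursion $y_t = c + \\varepsilon_t - \\sum_{j=1}^{15} a_j y_{t-j}$, where $a_1, \\dots, a_{15}$ are the coefficients of the product polynomial.\n\n\n```python\nimport numpy as np\n\nrng = np.random.default_rng(0)\n\n# varphi_(3)(B) = 1 + varphi_1 B + varphi_2 B^2 + varphi_3 B^3 (arbitrary values)\nvarphi = np.array([1.0, -0.3, -0.2, -0.1])\n\n# Phi_(2)(B^6) = 1 + Phi_1 B^6 + Phi_2 B^12 (arbitrary values)\nPhi = np.zeros(13)\nPhi[[0, 6, 12]] = [1.0, -0.4, -0.2]\n\n# product lag polynomial of order 3 + 2*6 = 15, as stated above\na = np.convolve(varphi, Phi)\nassert len(a) == 16   # coefficients a_0, ..., a_15 with a_0 = 1\n\nc = 0.0\nn = 300\neps = rng.normal(size=n)\ny = np.zeros(n)\n\nfor t in range(len(a) - 1, n):\n    # a(B) y_t = c + eps_t   <=>   y_t = c + eps_t - (a_1 y_{t-1} + ... + a_15 y_{t-15})\n    past = y[t - 15:t][::-1]   # y_{t-1}, ..., y_{t-15}\n    y[t] = c + eps[t] - np.dot(a[1:], past)\n```\n\nFor estimation, the same specification corresponds, for instance, to `order=(3, 0, 0)` and `seasonal_order=(2, 0, 0, 6)` in the `SARIMAX` class of `statsmodels` (an assumption: that library is not used in this notebook).\n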
    \n\n#### Example: $ARMA(3,2) \\times ARMA(2,1)_{6, 12}$\n\n$$\n\\begin{align}\nARMA(\\color{\\red}{3},\\color{\\green}{2}) \\times ARMA(\\color{\\red}{2},\\color{\\green}{1})_{\\color{\\red}{6},\\color{\\green}{12}}: \\quad\n\\color{\\red}{\n\\underbrace{\\varphi_{(3)} (B)}_{AR} ~\n\\underbrace{\\Phi_{(2)} (B^6)}_{SAR} ~ \ny_t\n}\n=~&\nc +\n\\color{\\green}{\n\\underbrace{\\theta_{(2)} (B)}_{MA} ~\n\\underbrace{\\Theta_{(1)} (B^{12})}_{SMA} ~\n\\varepsilon_t\n}\n\\\\\n\\color{\\red}{\n\\underbrace{\\left( 1 + \\sum_{i=1}^3 \\varphi_i B^i \\right)}_{AR} ~\n\\underbrace{\\left( 1 + \\sum_{k=1}^2 \\Phi_k B^{6k} \\right)}_{SAR} ~ \ny_t\n}\n=~&\nc +\n\\color{\\green}{\n\\underbrace{\\left( 1 + \\sum_{j=1}^2 \\theta_j B^j \\right)}_{MA} ~\n\\underbrace{\\left( 1 + \\sum_{l=1}^1 \\Theta_l B^{12l} \\right)}_{SMA} ~\n\\varepsilon_t\n}\n\\\\\n\\color{\\red}{\n\\underbrace{\n\\left( 1 + \\varphi_1 B + \\varphi_2 B^2 + \\varphi_3 B^3 \\right)\n\\left( 1 + \\Phi_1 B^{6} + \\Phi_2 B^{12} \\right)\ny_t\n}_{\\text{Order of the equivalent AR process} ~=~ 3 + 2 \\times 6 = 15}\n}\n=~&\nc +\n\\color{\\green}{\n\\underbrace{\n\\left( 1 + \\theta_1 B + \\theta_2 B^2 \\right)\n\\left( 1 + \\Theta_1 B^{12} \\right)\n\\varepsilon_t\n}_{\\text{Order of the equivalent MA process} ~=~ 2 + 1 \\times 12 = 14}\n}\n%\n%\\\\\n%\\left(\n%1 + \\Phi_1 B^{6} + \\Phi_2 B^{12}\n%+ \\varphi_1 B + \\varphi_1 B \\times \\Phi_1 B^{6} + \\varphi_1 B \\times \\Phi_2 B^{12}\n%+ \\varphi_2 B^2 + \\varphi_2 B^2 \\times \\Phi_1 B^{6} + \\varphi_2 B^2 \\times \\Phi_2 B^{12}\n%+ \\varphi_3 B^3 + \\varphi_3 B^3 \\times \\Phi_1 B^{6} + \\varphi_3 B^3 \\times \\Phi_2 B^{12}\n%\\right)\n%y_t\n%=~&\n%c +\n%\\left(\n%1 + \\Theta_1 B^{12}\n%+ \\theta_1 B + \\theta_1 B \\times \\Theta_1 B^{12}\n%+ \\theta_2 B^2 + \\theta_2 B^2 \\times \\Theta_1 B^{12}\n%\\right)\n%\\varepsilon_t\n%\n\\\\\n\\color{\\red}{\n\\left(\n1 + \\Phi_1 B^{6} + \\Phi_2 B^{12}\n+ \\varphi_1 B + \\varphi_1 \\Phi_1 B^{7} + \\varphi_1 \\Phi_2 B^{13}\n+ \\varphi_2 B^2 + \\varphi_2 \\Phi_1 B^{8} + \\varphi_2 \\Phi_2 B^{14}\n+ \\varphi_3 B^3 + \\varphi_3 \\Phi_1 B^{9} + \\varphi_3 \\Phi_2 B^{15}\n\\right)\ny_t\n}\n=~&\nc +\n\\color{\\green}{\n\\left(\n1 + \\Theta_1 B^{12}\n+ \\theta_1 B + \\theta_1 \\Theta_1 B^{13}\n+ \\theta_2 B^2 + \\theta_2 \\Theta_1 B^{14}\n\\right)\n\\varepsilon_t\n}\n\\\\\n\\color{\\red}{y_t} =\n & ~ c \\\\\n & \\color{\\red}{+ \\varPhi_1 y_{t-1} + \\varPhi_2 y_{t-2} + \\varPhi_3 y_{t-3}} \\\\\n & \\color{\\red}{+ \\phi_1 y_{t-6} + \\varPhi_1 \\phi_1 y_{t-7} + \\varPhi_2 \\phi_1 y_{t-8} + \\varPhi_3 \\phi_1 y_{t-9}} \\\\\n & \\color{\\red}{+ \\phi_2 y_{t-12} + \\varPhi_1 \\phi_2 y_{t-13} + \\varPhi_2 \\phi_2 y_{t-14} + \\varPhi_3 \\phi_2 y_{t-15}} \\\\\n & \\color{\\green}{+ \\varepsilon_t} \\\\\n & \\color{\\green}{+ \\theta_1 \\varepsilon_{t-1} + \\theta_2 \\varepsilon_{t-2}} \\\\\n & \\color{\\green}{+ \\Theta_1 \\varepsilon_{t-12} + \\theta_1 \\Theta_1 \\varepsilon_{t-13} + \\theta_2 \\Theta_1 \\varepsilon_{t-14}}\n\\end{align}\n$$\n\n### Seasonal AutoRegressive Moving Average (SARIMA) : $ARIMA(P,D,Q)_{s,s'',s'}$ with seasonal differencing\n\n$$\n\\begin{align}\nARIMA(\\color{\\red}{P}, \\color{\\orange}{D}, \\color{\\green}{Q})_{\\color{\\red}{s}, \\color{\\orange}{s''}, \\color{\\green}{s'}}: \\quad\\quad\\quad\n\\color{\\red}{\\underbrace{\\Phi_{(P)} (B^s) \\color{\\orange}{\\nabla^D_{s''}} y_t}_{\\text{SARI}}}\n&= \nc + \\color{\\green}{\\underbrace{\\Theta_{(Q)} (B^{s'}) \\varepsilon_t}_{SMA}}\n\\\\\n\\color{\\red}{ \\left( 1 + \\sum_{i=1}^P 
\\Phi_i B^{si} \\right) \\color{\\orange}{\\nabla^D_{s''}} y_t }\n&=\nc + \\color{\\green}{\\left( 1 + \\sum_{j=1}^Q \\Theta_j B^{s'j} \\right) ~ \\varepsilon_t}\n\\\\\n\\color{\\orange}{\\nabla^D_{s''}} \\color{\\red}{ y_t } + \\color{\\red}{\\sum_{i=1}^P \\Phi_i B^{si} \\color{\\orange}{\\nabla^D_{s''}} y_t }\n&=\nc + \\color{\\green}{\\varepsilon_t} + \\color{\\green}{\\sum_{j=1}^Q \\Theta_j B^{s'j} \\varepsilon_t}\n\\\\\n\\color{\\orange}{\\nabla^D_{s''}} \\color{\\red}{ y_t } + \\color{\\red}{ \\sum_{i=1}^P \\Phi_i ~ \\color{\\orange}{\\nabla^D_{s''}} y_{t-si} }\n&=\nc + \\color{\\green}{\\varepsilon_t} + \\color{\\green}{\\sum_{j=1}^Q \\Theta_j ~ \\varepsilon_{t-s'j}}\n\\\\\n\\color{\\orange}{\\nabla^D_{s''}} \\color{\\red}{y_t} &= c\n+ \\color{\\red}{\\underbrace{\\sum_{i=1}^P \\varPhi_i ~ \\color{\\orange}{\\nabla^D_{s''}} y_{t-si}}_{\\text{SARI model}}}\n+ \\color{\\green}{\\varepsilon_t}\n+ \\color{\\green}{\\underbrace{\\sum_{j=1}^Q \\Theta_j ~ \\varepsilon_{t-s'j}}_{\\text{SMA model}}}\n\\end{align}\n$$\n\nwith\n\n$$\\nabla_{s''} = (1-B^{s''})$$\n$$\\nabla_{s''}^2 = (1-B^{s''})^2$$\n$$\\dots$$\n$$\\nabla_{s''}^d = (1-B^{s''})^d$$\n\nRemark: most of the time, $s = s' = s''$. In this case, the process is simply noted $ARIMA(\\color{\\red}{P}, \\color{\\orange}{D}, \\color{\\green}{Q})_{s}$\n\n#### Example: $ARIMA(3, 1, 2)_{24, 24, 24}$\n\n$$\n\\begin{align}\nARIMA(\\color{\\red}{3}, \\color{\\orange}{1}, \\color{\\green}{2})_{\\color{\\red}{24}, \\color{\\orange}{24}, \\color{\\green}{24}}: \\quad\\quad\\quad\n\\color{\\red}{\\underbrace{\\Phi_{(3)} (B^{24}) \\color{\\orange}{\\nabla_{24}} y_t}_{\\text{SARI}}}\n&= \nc + \\color{\\green}{\\underbrace{\\Theta_{(2)} (B^{24}) \\varepsilon_t}_{SMA}}\n\\\\\n\\color{\\red}{ \\left( 1 + \\sum_{i=1}^3 \\Phi_i B^{24i} \\right) \\color{\\orange}{\\nabla_{24}} y_t }\n&=\nc + \\color{\\green}{\\left( 1 + \\sum_{j=1}^2 \\Theta_j B^{24j} \\right) ~ \\varepsilon_t}\n\\\\\n\\color{\\orange}{\\nabla_{24}} \\color{\\red}{ y_t } + \\color{\\red}{\\sum_{i=1}^3 \\Phi_i B^{24i} \\color{\\orange}{\\nabla_{24}} y_t }\n&=\nc + \\color{\\green}{\\varepsilon_t} + \\color{\\green}{\\sum_{j=1}^2 \\Theta_j B^{24j} \\varepsilon_t}\n\\\\\n\\color{\\orange}{\\nabla_{24}} \\color{\\red}{ y_t } + \\color{\\red}{ \\sum_{i=1}^3 \\Phi_i ~ \\color{\\orange}{\\nabla_{24}} y_{t-24i} }\n&=\nc + \\color{\\green}{\\varepsilon_t} + \\color{\\green}{\\sum_{j=1}^2 \\Theta_j ~ \\varepsilon_{t-24j}}\n\\\\\n\\color{\\orange}{\\nabla_{24}} \\color{\\red}{y_t} &= c\n+ \\color{\\red}{\\underbrace{\\sum_{i=1}^3 \\varPhi_i ~ \\color{\\orange}{\\nabla_{24}} y_{t-24i}}_{\\text{SARI model}}}\n+ \\color{\\green}{\\varepsilon_t}\n+ \\color{\\green}{\\underbrace{\\sum_{j=1}^2 \\Theta_j ~ \\varepsilon_{t-24j}}_{\\text{SMA model}}}\n\\\\\n\\color{\\orange}{\\nabla_{24}} \\color{\\red}{y_t}\n&=\nc\n+ \\underbrace{\n \\color{\\red}{\\varPhi_1} \\color{\\orange}{\\nabla_{24}} \\color{\\red}{y_{t-24}}\n + \\color{\\red}{\\varPhi_2} \\color{\\orange}{\\nabla_{24}} \\color{\\red}{y_{t-48}}\n + \\color{\\red}{\\varPhi_3} \\color{\\orange}{\\nabla_{24}} \\color{\\red}{y_{t-72}}\n }_{\\text{SARI model}}\n+ \\color{\\green}{\\varepsilon_t}\n+ \\underbrace{\\color{\\green}{\\Theta_1 \\varepsilon_{t-24}} + \\color{\\green}{\\Theta_2 \\varepsilon_{t-48}}}_{\\text{MA model}}\n\\\\\n\\color{\\orange}{(1 - B^{24})} \\color{\\red}{y_t}\n&=\nc\n+ \\underbrace{\n \\color{\\red}{\\varPhi_1} \\color{\\orange}{(1 - B^{24})} \\color{\\red}{y_{t-24}}\n + \\color{\\red}{\\varPhi_2} \\color{\\orange}{(1 - B^{24})} 
\\color{\\red}{y_{t-48}}\n + \\color{\\red}{\\varPhi_3} \\color{\\orange}{(1 - B^{24})} \\color{\\red}{y_{t-72}}\n }_{\\text{SARI model}}\n+ \\color{\\green}{\\varepsilon_t}\n+ \\underbrace{\\color{\\green}{\\Theta_1 \\varepsilon_{t-24}} + \\color{\\green}{\\Theta_2 \\varepsilon_{t-48}}}_{\\text{MA model}}\n\\\\\n\\color{\\red}{y_t} - \\color{\\orange}{B^{24} y_t}\n&=\nc\n+ \\underbrace{\n \\color{\\red}{\\varPhi_1} (\\color{\\red}{y_{t-24}} - \\color{\\orange}{B^{24} y_{t-24}})\n + \\color{\\red}{\\varPhi_2} (\\color{\\red}{y_{t-48}} - \\color{\\orange}{B^{24} y_{t-48}})\n + \\color{\\red}{\\varPhi_3} (\\color{\\red}{y_{t-72}} - \\color{\\orange}{B^{24} y_{t-72}})\n }_{\\text{SARI model}}\n+ \\color{\\green}{\\varepsilon_t}\n+ \\underbrace{\\color{\\green}{\\Theta_1 \\varepsilon_{t-24}} + \\color{\\green}{\\Theta_2 \\varepsilon_{t-48}}}_{\\text{MA model}}\n\\\\\n\\color{\\red}{y_t} - \\color{\\orange}{y_{t-24}}\n&=\nc\n+ \\underbrace{\n \\color{\\red}{\\varPhi_1} (\\color{\\red}{y_{t-24}} - \\color{\\orange}{y_{t-48}})\n + \\color{\\red}{\\varPhi_2} (\\color{\\red}{y_{t-48}} - \\color{\\orange}{y_{t-72}})\n + \\color{\\red}{\\varPhi_3} (\\color{\\red}{y_{t-72}} - \\color{\\orange}{y_{t-96}})\n }_{\\text{SARI model}}\n+ \\color{\\green}{\\varepsilon_t}\n+ \\underbrace{\\color{\\green}{\\Theta_1 \\varepsilon_{t-24}} + \\color{\\green}{\\Theta_2 \\varepsilon_{t-48}}}_{\\text{MA model}}\n\\\\\n\\color{\\red}{y_t}\n&=\nc\n+ \\color{\\orange}{y_{t-24}} \n+ \\underbrace{\n \\color{\\red}{\\varPhi_1 y_{t-24}} - \\color{\\orange}{\\varPhi_1 y_{t-48}}\n + \\color{\\red}{\\varPhi_2 y_{t-48}} - \\color{\\orange}{\\varPhi_2 y_{t-72}}\n + \\color{\\red}{\\varPhi_3 y_{t-72}} - \\color{\\orange}{\\varPhi_3 y_{t-96}}\n }_{\\text{SARI model}}\n+ \\color{\\green}{\\varepsilon_t}\n+ \\underbrace{\\color{\\green}{\\Theta_1 \\varepsilon_{t-24}} + \\color{\\green}{\\Theta_2 \\varepsilon_{t-48}}}_{\\text{MA model}}\n\\end{align}\n$$\n\n### Seasonal AutoRegressive Moving Average (SARIMA) : $ARIMA(p,d,q) \\times ARIMA(P,D,Q)_{s,s'',s'}$ with seasonal and non-seasonal differencing\n\n$$\n\\begin{align}\nARIMA(\\color{\\red}{p}, \\color{\\orange}{d}, \\color{\\green}{q}) \\times ARIMA(\\color{\\red}{P}, \\color{\\orange}{D}, \\color{\\green}{Q})_{\\color{\\red}{s}, \\color{\\orange}{s'}, \\color{\\green}{s''}}: \\quad\\quad\\quad\n\\color{\\red}{\n\\underbrace{\\varphi_{(p)} (B)}_{AR} ~\n\\underbrace{\\Phi_{(P)} (B^s)}_{SAR} ~ \n}\n\\color{\\orange}{\\nabla^D_{s''}} \\color{\\orange}{\\nabla^d}\n\\color{\\red}{y_t}\n&=\nc +\n\\color{\\green}{\n\\underbrace{\\theta_{(q)} (B)}_{MA} ~\n\\underbrace{\\Theta_{(Q)} (B^{s'})}_{SMA} ~\n\\varepsilon_t\n}\n\\\\\n\\color{\\red}{\n\\underbrace{\\left( 1 + \\sum_{i=1}^p \\varphi_i B^i \\right)}_{AR} ~\n\\underbrace{\\left( 1 + \\sum_{k=1}^P \\Phi_k B^{sk} \\right)}_{SAR} ~ \n}\n\\color{\\orange}{\\nabla^D_{s''}} \\color{\\orange}{\\nabla^d}\n\\color{\\red}{y_t}\n&=\nc +\n\\color{\\green}{\n\\underbrace{\\left( 1 + \\sum_{j=1}^q \\theta_j B^j \\right)}_{MA} ~\n\\underbrace{\\left( 1 + \\sum_{l=1}^Q \\Theta_l B^{s'l} \\right)}_{SMA} ~\n\\varepsilon_t\n}\n\\\\\n\\underbrace{\n\\color{\\red}{\n\\left( 1 + \\varphi_1 B + \\varphi_2 B^2 + \\dots + \\varphi_p B^p \\right)\n\\left( 1 + \\Phi_1 B^{s} + \\Phi_2 B^{s2} + \\dots + \\Phi_P B^{sP} \\right)\n}\n\\color{\\orange}{\\nabla^D_{s''}} \\color{\\orange}{\\nabla^d}\n\\color{\\red}{y_t}\n}\n&=\nc +\n\\underbrace{\n\\color{\\green}{\n\\left( 1 + \\theta_1 B + \\theta_2 B^2 + \\dots + \\theta_q B^q \\right)\n\\left( 1 + \\Theta_1 B^{s'} 
+ \\Theta_2 B^{s'2} + \\dots + \\Theta_Q B^{s'Q} \\right)\n\\varepsilon_t\n}\n}\n\\\\\n\\end{align}\n$$\n\nRemark: $\\color{\\orange}{\\nabla^D_{s''}} \\color{\\orange}{\\nabla^d} y_t = \\color{\\orange}{\\nabla^d} \\color{\\orange}{\\nabla^D_{s''}} y_t$\n\n#### Example: $ARMA(3,1,2) \\times ARMA(2,1,1)_{24, 24, 24}$\n\n$$\n\\begin{align}\nARIMA(\\color{\\red}{3}, \\color{\\orange}{1}, \\color{\\green}{2}) \\times ARIMA(\\color{\\red}{2}, \\color{\\orange}{1}, \\color{\\green}{1})_{\\color{\\red}{24}, \\color{\\orange}{24}, \\color{\\green}{24}}: \\quad\\quad\\quad\n\\color{\\red}{\n\\underbrace{\\varphi_{(3)} (B)}_{AR} ~\n\\underbrace{\\Phi_{(2)} \\left( B^{24} \\right)}_{SAR} ~ \n}\n\\color{\\orange}{\\nabla_{24}} \\color{\\orange}{\\nabla}\n\\color{\\red}{y_t}\n&=\nc +\n\\color{\\green}{\n\\underbrace{\\theta_{(2)} (B)}_{MA} ~\n\\underbrace{\\Theta_{(1)} \\left( B^{24} \\right)}_{SMA} ~\n\\varepsilon_t\n}\n\\\\\n\\color{\\red}{\n\\underbrace{\\left( 1 + \\sum_{i=1}^3 \\varphi_i B^i \\right)}_{AR} ~\n\\underbrace{\\left( 1 + \\sum_{k=1}^2 \\Phi_k B^{24 k} \\right)}_{SAR} ~ \n}\n\\color{\\orange}{\\nabla_{24}} \\color{\\orange}{\\nabla}\n\\color{\\red}{y_t}\n&=\nc +\n\\color{\\green}{\n\\underbrace{\\left( 1 + \\sum_{j=1}^2 \\theta_j B^j \\right)}_{MA} ~\n\\underbrace{\\left( 1 + \\sum_{l=1}^1 \\Theta_l B^{24 l} \\right)}_{SMA} ~\n\\varepsilon_t\n}\n\\\\\n\\color{\\red}{\n\\left( 1 + \\varphi_1 B + \\varphi_2 B^2 + \\varphi_3 B^3 \\right)\n\\left( 1 + \\Phi_1 B^{24} + \\Phi_2 B^{48} \\right)\n}\n\\color{\\orange}{\\nabla_{24}} \\color{\\orange}{\\nabla}\n\\color{\\red}{y_t}\n&=\nc +\n\\color{\\green}{\n\\left( 1 + \\theta_1 B + \\theta_2 B^2 \\right)\n\\left( 1 + \\Theta_1 B^{24} \\right)\n\\varepsilon_t\n}\n\\\\\n\\color{\\red}{\n\\left(\n1 + \\Phi_1 B^{24} + \\Phi_2 B^{48}\n+ \\varphi_1 B + \\varphi_1 \\Phi_1 B^{25} + \\varphi_1 \\Phi_2 B^{49}\n+ \\varphi_2 B^2 + \\varphi_2 \\Phi_1 B^{26} + \\varphi_2 \\Phi_2 B^{50}\n+ \\varphi_3 B^3 + \\varphi_3 \\Phi_1 B^{27} + \\varphi_3 \\Phi_2 B^{51}\n\\right)\n}\n\\color{\\orange}{\\nabla_{24}} \\color{\\orange}{\\nabla}\n\\color{\\red}{y_t}\n=~&\nc +\n\\color{\\green}{\n\\left(\n1 + \\Theta_1 B^{24}\n+ \\theta_1 B + \\theta_1 \\Theta_1 B^{25}\n+ \\theta_2 B^2 + \\theta_2 \\Theta_1 B^{26}\n\\right)\n\\varepsilon_t\n}\n\\\\\n\\color{\\red}{y_t} =\n & ~ c \\\\\n & \\color{\\red}{+ \\varPhi_1 \\color{\\orange}{\\nabla_{24}} \\color{\\orange}{\\nabla} y_{t-1} + \\varPhi_2 \\color{\\orange}{\\nabla_{24}} \\color{\\orange}{\\nabla} y_{t-2} + \\varPhi_3 \\color{\\orange}{\\nabla_{24}} \\color{\\orange}{\\nabla} y_{t-3}} \\\\\n & \\color{\\red}{+ \\phi_1 \\color{\\orange}{\\nabla_{24}} \\color{\\orange}{\\nabla} y_{t-24} + \\varPhi_1 \\phi_1 \\color{\\orange}{\\nabla_{24}} \\color{\\orange}{\\nabla} y_{t-25} + \\varPhi_2 \\phi_1 \\color{\\orange}{\\nabla_{24}} \\color{\\orange}{\\nabla} y_{t-26} + \\varPhi_3 \\phi_1 \\color{\\orange}{\\nabla_{24}} \\color{\\orange}{\\nabla} y_{t-27}} \\\\\n & \\color{\\red}{+ \\phi_2 \\color{\\orange}{\\nabla_{24}} \\color{\\orange}{\\nabla} y_{t-48} + \\varPhi_1 \\phi_2 \\color{\\orange}{\\nabla_{24}} \\color{\\orange}{\\nabla} y_{t-49} + \\varPhi_2 \\phi_2 \\color{\\orange}{\\nabla_{24}} \\color{\\orange}{\\nabla} y_{t-50} + \\varPhi_3 \\phi_2 \\color{\\orange}{\\nabla_{24}} \\color{\\orange}{\\nabla} y_{t-51}} \\\\\n & \\color{\\green}{+ \\varepsilon_t} \\\\\n & \\color{\\green}{+ \\theta_1 \\varepsilon_{t-1} + \\theta_2 \\varepsilon_{t-2}} \\\\\n & \\color{\\green}{+ \\Theta_1 \\varepsilon_{t-24} + \\theta_1 \\Theta_1 
\\varepsilon_{t-25} + \\theta_2 \\Theta_1 \\varepsilon_{t-26}}\n\\\\\n\\end{align}\n$$\n\nAs we have:\n\n$$\n\\begin{align}\n\\nabla &= (1-B) \\\\\n\\nabla_{24} &= (1-B^{24}) \\\\\n\\nabla \\nabla_{24} &= (1-B)(1-B^{24}) \\\\\n &= 1-B^{24}-B+B^{25} \\\\\n\\nabla \\nabla_{24} y_t &= (1-B^{24}-B+B^{25}) y_t \\\\\n &= y_t - B^{24} y_t - B y_t + B^{25} y_t \\\\\n &= y_t - y_{t-24} - y_{t-1} + y_{t-25} \\\\\n\\end{align}\n$$\n\nThus:\n\n$$\n\\begin{align}\n\\color{\\red}{y_t} =\n & ~ c \\\\\n & \\color{\\red}{+ \\varPhi_1 \\color{\\orange}{\\nabla_{24}} \\color{\\orange}{\\nabla} y_{t-1} + \\varPhi_2 \\color{\\orange}{\\nabla_{24}} \\color{\\orange}{\\nabla} y_{t-2} + \\varPhi_3 \\color{\\orange}{\\nabla_{24}} \\color{\\orange}{\\nabla} y_{t-3}} \\\\\n & \\color{\\red}{+ \\phi_1 \\color{\\orange}{\\nabla_{24}} \\color{\\orange}{\\nabla} y_{t-24} + \\varPhi_1 \\phi_1 \\color{\\orange}{\\nabla_{24}} \\color{\\orange}{\\nabla} y_{t-25} + \\varPhi_2 \\phi_1 \\color{\\orange}{\\nabla_{24}} \\color{\\orange}{\\nabla} y_{t-26} + \\varPhi_3 \\phi_1 \\color{\\orange}{\\nabla_{24}} \\color{\\orange}{\\nabla} y_{t-27}} \\\\\n & \\color{\\red}{+ \\phi_2 \\color{\\orange}{\\nabla_{24}} \\color{\\orange}{\\nabla} y_{t-48} + \\varPhi_1 \\phi_2 \\color{\\orange}{\\nabla_{24}} \\color{\\orange}{\\nabla} y_{t-49} + \\varPhi_2 \\phi_2 \\color{\\orange}{\\nabla_{24}} \\color{\\orange}{\\nabla} y_{t-50} + \\varPhi_3 \\phi_2 \\color{\\orange}{\\nabla_{24}} \\color{\\orange}{\\nabla} y_{t-51}} \\\\\n & \\color{\\green}{+ \\varepsilon_t} \\\\\n & \\color{\\green}{+ \\theta_1 \\varepsilon_{t-1} + \\theta_2 \\varepsilon_{t-2}} \\\\\n & \\color{\\green}{+ \\Theta_1 \\varepsilon_{t-24} + \\theta_1 \\Theta_1 \\varepsilon_{t-25} + \\theta_2 \\Theta_1 \\varepsilon_{t-26}}\n\\\\\n= & ...\n\\\\\n\\end{align}\n$$\n", "meta": {"hexsha": "ff8423cee428259428f191ab99da22c825e9c845", "size": 53370, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "nb_sci_ai/ai_arma_en.ipynb", "max_stars_repo_name": "jdhp-docs/python-notebooks", "max_stars_repo_head_hexsha": "91a97ea5cf374337efa7409e4992ea3f26b99179", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2017-05-03T12:23:36.000Z", "max_stars_repo_stars_event_max_datetime": "2020-10-26T17:30:56.000Z", "max_issues_repo_path": "nb_sci_ai/ai_arma_en.ipynb", "max_issues_repo_name": "jdhp-docs/python-notebooks", "max_issues_repo_head_hexsha": "91a97ea5cf374337efa7409e4992ea3f26b99179", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "nb_sci_ai/ai_arma_en.ipynb", "max_forks_repo_name": "jdhp-docs/python-notebooks", "max_forks_repo_head_hexsha": "91a97ea5cf374337efa7409e4992ea3f26b99179", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-10-26T17:30:57.000Z", "max_forks_repo_forks_event_max_datetime": "2020-10-26T17:30:57.000Z", "avg_line_length": 41.2442040185, "max_line_length": 379, "alphanum_fraction": 0.4673224658, "converted": true, "num_tokens": 16270, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9207896693699844, "lm_q2_score": 0.8933093982432729, "lm_q1q2_score": 0.822550065453523}} {"text": "# Determine derivative of Jacobian from angular velocity to exponential rates\n\nPeter Corke 2021\n\nSymPy code to deterine the time derivative of the mapping from angular velocity to exponential coordinate rates.\n\n\n```python\nfrom sympy import *\n```\n\nA rotation matrix can be expressed in terms of exponential coordinates (also called Euler vector)\n\n$\n\\mathbf{R} = e^{[\\varphi]_\\times} \n$\nwhere $\\mathbf{R} \\in SO(3)$ and $\\varphi \\in \\mathbb{R}^3$.\n\nThe mapping from angular velocity $\\omega$ to exponential coordinate rates $\\dot{\\varphi}$ is\n\n$\n\\dot{\\varphi} = \\mathbf{A} \\omega\n$\n\nwhere $\\mathbf{A}$ is given by (2.107) of [Robot Dynamics Lecture Notes, Robotic Systems Lab, ETH Zurich, 2018](https://ethz.ch/content/dam/ethz/special-interest/mavt/robotics-n-intelligent-systems/rsl-dam/documents/RobotDynamics2018/RD_HS2018script.pdf)\n\n\n$\n\\mathbf{A} = I_{3 \\times 3} - \\frac{1}{2} [v]_\\times + [v]^2_\\times \\frac{1}{\\theta^2} \\left( 1 - \\frac{\\theta}{2} \\frac{\\sin \\theta}{1 - \\cos \\theta} \\right)\n$\nwhere $\\theta = \\| \\varphi \\|$ and $v = \\hat{\\varphi}$\n\nWe simplify the equation as\n\n$\n\\mathbf{A} = I_{3 \\times 3} - \\frac{1}{2} [v]_\\times + [v]^2_\\times \\Theta\n$\n\nwhere\n$\n\\Theta = \\frac{1}{\\theta^2} \\left( 1 - \\frac{\\theta}{2} \\frac{\\sin \\theta}{1 - \\cos \\theta} \\right)\n$\n\nWe can find the derivative using the chain rule\n\n$\n\\dot{\\mathbf{A}} = - \\frac{1}{2} [\\dot{v}]_\\times + 2 [v]_\\times [\\dot{v}]_\\times \\Theta + [v]^2_\\times \\dot{\\Theta}\n$\n\nWe start by defining some symbols\n\n\n```python\nTheta, theta, theta_dot, t = symbols('Theta theta theta_dot t', real=True)\n```\n\nWe start by finding an expression for $\\Theta$ which depends on $\\theta(t)$\n\n\n```python\ntheta_t = Function(theta)(t)\n```\n\n\n```python\nTheta = 1 / theta_t ** 2 * (1 - theta_t / 2 * sin(theta_t) / (1 - cos(theta_t)))\nTheta\n```\n\n\n\n\n$\\displaystyle \\frac{1 - \\frac{\\theta{\\left(t \\right)} \\sin{\\left(\\theta{\\left(t \\right)} \\right)}}{2 \\left(1 - \\cos{\\left(\\theta{\\left(t \\right)} \\right)}\\right)}}{\\theta^{2}{\\left(t \\right)}}$\n\n\n\nand now determine the derivative\n\n\n```python\nT_dot = Theta.diff(t)\nT_dot\n```\n\n\n\n\n$\\displaystyle - \\frac{2 \\left(1 - \\frac{\\theta{\\left(t \\right)} \\sin{\\left(\\theta{\\left(t \\right)} \\right)}}{2 \\left(1 - \\cos{\\left(\\theta{\\left(t \\right)} \\right)}\\right)}\\right) \\frac{d}{d t} \\theta{\\left(t \\right)}}{\\theta^{3}{\\left(t \\right)}} + \\frac{- \\frac{\\theta{\\left(t \\right)} \\cos{\\left(\\theta{\\left(t \\right)} \\right)} \\frac{d}{d t} \\theta{\\left(t \\right)}}{2 \\left(1 - \\cos{\\left(\\theta{\\left(t \\right)} \\right)}\\right)} - \\frac{\\sin{\\left(\\theta{\\left(t \\right)} \\right)} \\frac{d}{d t} \\theta{\\left(t \\right)}}{2 \\left(1 - \\cos{\\left(\\theta{\\left(t \\right)} \\right)}\\right)} + \\frac{\\theta{\\left(t \\right)} \\sin^{2}{\\left(\\theta{\\left(t \\right)} \\right)} \\frac{d}{d t} \\theta{\\left(t \\right)}}{2 \\left(1 - \\cos{\\left(\\theta{\\left(t \\right)} \\right)}\\right)^{2}}}{\\theta^{2}{\\left(t \\right)}}$\n\n\n\nwhich is a somewhat complex expression that depends on $\\theta(t)$ and $\\dot{\\theta}(t)$.\n\nWe will remove the time dependency and generate code\n\n\n```python\nT_dot = T_dot.subs([(theta_t.diff(t), theta_dot), (theta_t, 
theta)])\n```\n\n\n```python\npycode(T_dot)\n```\n\n\n\n\n '(-1/2*theta*theta_dot*math.cos(theta)/(1 - math.cos(theta)) + (1/2)*theta*theta_dot*math.sin(theta)**2/(1 - math.cos(theta))**2 - 1/2*theta_dot*math.sin(theta)/(1 - math.cos(theta)))/theta**2 - 2*theta_dot*(-1/2*theta*math.sin(theta)/(1 - math.cos(theta)) + 1)/theta**3'\n\n\n\nIn order to evaluate the line above we need an expression for $\\theta$ and $\\dot{\\theta}$. $\\theta$ is the norm of $\\varphi$ whose elements are functions of time\n\n\n```python\nphi_names = ('varphi_0', 'varphi_1', 'varphi_2')\nphi = [] # names of angles, eg. theta\nphi_t = [] # angles as function of time, eg. theta(t)\nphi_d = [] # derivative of above, eg. d theta(t) / dt\nphi_n = [] # symbol to represent above, eg. theta_dot\nfor i in phi_names:\n phi.append(symbols(i, real=True))\n phi_t.append(Function(phi[-1])(t))\n phi_d.append(phi_t[-1].diff(t))\n phi_n.append(i + '_dot')\n```\n\nCompute the norm\n\n\n```python\ntheta = Matrix(phi_t).norm()\ntheta\n```\n\n\n\n\n$\\displaystyle \\sqrt{\\varphi_{0}^{2}{\\left(t \\right)} + \\varphi_{1}^{2}{\\left(t \\right)} + \\varphi_{2}^{2}{\\left(t \\right)}}$\n\n\n\nand find its derivative\n\n\n```python\ntheta_dot = theta.diff(t)\ntheta_dot\n```\n\n\n\n\n$\\displaystyle \\frac{\\varphi_{0}{\\left(t \\right)} \\frac{d}{d t} \\varphi_{0}{\\left(t \\right)} + \\varphi_{1}{\\left(t \\right)} \\frac{d}{d t} \\varphi_{1}{\\left(t \\right)} + \\varphi_{2}{\\left(t \\right)} \\frac{d}{d t} \\varphi_{2}{\\left(t \\right)}}{\\sqrt{\\varphi_{0}^{2}{\\left(t \\right)} + \\varphi_{1}^{2}{\\left(t \\right)} + \\varphi_{2}^{2}{\\left(t \\right)}}}$\n\n\n\nand now remove the time dependenices\n\n\n```python\ntheta_dot = theta_dot.subs(a for a in zip(phi_d, phi_n))\ntheta_dot = theta_dot.subs(a for a in zip(phi_t, phi))\ntheta_dot\n```\n\n\n\n\n$\\displaystyle \\frac{\\varphi_{0} \\varphi_{0 dot} + \\varphi_{1} \\varphi_{1 dot} + \\varphi_{2} \\varphi_{2 dot}}{\\sqrt{\\varphi_{0}^{2} + \\varphi_{1}^{2} + \\varphi_{2}^{2}}}$\n\n\n\nwhich is simply the dot product over the norm.\n", "meta": {"hexsha": "4a43f1b4cf427658e64be73bb6f403f3ec4e74ec", "size": 10757, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "symbolic/angvelxform_dot.ipynb", "max_stars_repo_name": "dbkmgm/spatialmath-python", "max_stars_repo_head_hexsha": "8d48e5a21334f9ceac4f549f194c79afaa22a5d7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "symbolic/angvelxform_dot.ipynb", "max_issues_repo_name": "dbkmgm/spatialmath-python", "max_issues_repo_head_hexsha": "8d48e5a21334f9ceac4f549f194c79afaa22a5d7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "symbolic/angvelxform_dot.ipynb", "max_forks_repo_name": "dbkmgm/spatialmath-python", "max_forks_repo_head_hexsha": "8d48e5a21334f9ceac4f549f194c79afaa22a5d7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 29.5521978022, "max_line_length": 913, "alphanum_fraction": 0.5159431068, "converted": true, "num_tokens": 1724, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9546474220263198, "lm_q2_score": 0.8615382112085969, "lm_q1q2_score": 0.822465232307454}} {"text": "# Tutorial\n\nWe will solve the following problem using a computer to assist with the technical aspects:\n\n\n```{admonition} Problem\n\nConsider the function $f(x)= \\frac{24 x \\left(a - 4 x\\right) + 2 \\left(a - 8 x\\right) \\left(b - 4 x\\right)}{\\left(b - 4 x\\right)^{4}}$\n\n1. Given that $\\frac{df}{dx}|_{x=0}=0$, $\\frac{d^2f}{dx^2}|_{x=0}=-1$ and that $b>0$ find the values of $a$ and $b$.\n2. For the specific values of $a$ and $b$ find:\n 1. $\\lim_{x\\to 0}f(x)$;\n 2. $\\lim_{x\\to \\infty}f(x)$;\n 3. $\\int f(x) dx$;\n 4. $\\int_{5}^{20} f(x) dx$.\n\n```\n\nSympy is once again the library we will use for this.\n\nWe will start by creating a variable `expression` that has the value of the expression of $f(x)$:\n\n\n```python\nimport sympy as sym\n\nx = sym.Symbol(\"x\")\na = sym.Symbol(\"a\")\nb = sym.Symbol(\"b\")\nexpression = (24 * x * (a - 4 * x) + 2 * (a - 8 * x) * (b - 4 * x)) / ((b - 4 * x) ** 4)\nexpression\n```\n\n\n\n\n$\\displaystyle \\frac{24 x \\left(a - 4 x\\right) + \\left(2 a - 16 x\\right) \\left(b - 4 x\\right)}{\\left(b - 4 x\\right)^{4}}$\n\n\n\nnow we can will use `sympy.diff` to calculate the derivative. This tool takes two inputs:\n\n- the first is the expression we are differentiating. Essentially this is the numerator of $\\frac{df}{dx}$.\n- the first is the variable we are differentiating for. Essentially this is the denominator of $\\frac{df}{dx}$.\n\n```{attention}\nWe have imported `import sympy as sym` so we are going to write `sym.diff`:\n```\n\n\n```python\nderivative = sym.diff(expression, x)\nderivative\n```\n\n\n\n\n$\\displaystyle \\frac{16 a - 16 b - 64 x}{\\left(b - 4 x\\right)^{4}} + \\frac{16 \\left(24 x \\left(a - 4 x\\right) + \\left(2 a - 16 x\\right) \\left(b - 4 x\\right)\\right)}{\\left(b - 4 x\\right)^{5}}$\n\n\n\nLet us factorise that to make it slightly clearer:\n\n\n```python\nsym.factor(derivative)\n```\n\n\n\n\n$\\displaystyle \\frac{16 \\left(- 3 a b - 12 a x + b^{2} + 16 b x + 16 x^{2}\\right)}{\\left(- b + 4 x\\right)^{5}}$\n\n\n\nWe will now create the first equation, which is obtained by substituting $x=0$\nin to the value of the derivative and equating that to $0$:\n\n\n```python\nfirst_equation = sym.Eq(derivative.subs({x: 0}), 0)\nfirst_equation\n```\n\n\n\n\n$\\displaystyle \\frac{32 a}{b^{4}} + \\frac{16 a - 16 b}{b^{4}} = 0$\n\n\n\nWe will factor that equation:\n\n\n```python\nsym.factor(first_equation)\n```\n\n\n\n\n$\\displaystyle \\frac{16 \\left(3 a - b\\right)}{b^{4}} = 0$\n\n\n\nNow we are going to create the second equation, substituting $x=0$ in to the\nvalue of the second derivative. 
We calculate the second derivative by passing a\nthird (optional) input to `sym.diff`:\n\n\n```python\nsecond_derivative = sym.diff(expression, x, 2)\nsecond_derivative\n```\n\n\n\n\n$\\displaystyle \\frac{64 \\left(-1 - \\frac{8 \\left(- a + b + 4 x\\right)}{b - 4 x} + \\frac{10 \\left(12 x \\left(a - 4 x\\right) + \\left(a - 8 x\\right) \\left(b - 4 x\\right)\\right)}{\\left(b - 4 x\\right)^{2}}\\right)}{\\left(b - 4 x\\right)^{4}}$\n\n\n\nWe equate this expression to $-1$:\n\n\n```python\nsecond_equation = sym.Eq(second_derivative.subs({x: 0}), -1)\nsecond_equation\n```\n\n\n\n\n$\\displaystyle \\frac{64 \\left(\\frac{10 a}{b} - 1 - \\frac{8 \\left(- a + b\\right)}{b}\\right)}{b^{4}} = -1$\n\n\n\nNow to solve the first equation to obtain a value for $a$:\n\n\n```python\nsym.solveset(first_equation, a)\n```\n\n\n\n\n$\\displaystyle \\left\\{\\frac{b}{3}\\right\\}$\n\n\n\nNow to substitute that value for $a$ and solve the second equation for $b$:\n\n\n```python\nsecond_equation = second_equation.subs({a: b / 3})\nsecond_equation\n```\n\n\n\n\n$\\displaystyle - \\frac{192}{b^{4}} = -1$\n\n\n\n\n```python\nsym.solveset(second_equation, b)\n```\n\n\n\n\n$\\displaystyle \\left\\{- 2 \\sqrt{2} \\sqrt[4]{3}, 2 \\sqrt{2} \\sqrt[4]{3}, - 2 \\sqrt{2} \\sqrt[4]{3} i, 2 \\sqrt{2} \\sqrt[4]{3} i\\right\\}$\n\n\n\nRecalling the question we know that $b>0$ thus: $b = 2\\sqrt{2}\\sqrt[4]{3}$ and\n$a=\\frac{2\\sqrt{2}\\sqrt[4]{3}}{3}$.\n\nWe will substitute these values back and finish the question:\n\n\n```python\nexpression = expression.subs(\n {a: 2 * sym.sqrt(2) * sym.root(3, 4) / 3, b: 2 * sym.sqrt(2) * sym.root(3, 4)}\n)\nexpression\n```\n\n\n\n\n$\\displaystyle \\frac{24 x \\left(- 4 x + \\frac{2 \\sqrt{2} \\sqrt[4]{3}}{3}\\right) + \\left(- 16 x + \\frac{4 \\sqrt{2} \\sqrt[4]{3}}{3}\\right) \\left(- 4 x + 2 \\sqrt{2} \\sqrt[4]{3}\\right)}{\\left(- 4 x + 2 \\sqrt{2} \\sqrt[4]{3}\\right)^{4}}$\n\n\n\n```{attention}\nWe are using the `sym.root` command for the generic $n$th root.\n```\n\nWe can confirm our findings:\n\n\n```python\nsym.diff(expression, x).subs({x: 0})\n```\n\n\n\n\n$\\displaystyle 0$\n\n\n\n\n```python\nsym.diff(expression, x, 2).subs({x: 0})\n```\n\n\n\n\n$\\displaystyle -1$\n\n\n\nNow we will calculate the limits using `sym.limit`, this takes 3 inputs:\n\n- The expression we are taking the limit of.\n- The variable that is changing.\n- The value that the variable is tending towards.\n\n\n```python\nsym.limit(expression, x, 0)\n```\n\n\n\n\n$\\displaystyle \\frac{\\sqrt{3}}{36}$\n\n\n\n\n```python\nsym.limit(expression, x, sym.oo)\n```\n\n\n\n\n$\\displaystyle 0$\n\n\n\nNow we are going to calculate the **indefinite** integral using\n`sympy.integrate`. This tool takes 2 inputs as:\n\n- the first is the expression we're integrating. 
This is the $f$ in $\\int_a^b f\n dx$.\n- the second is the remaining information needed to calculate the integral: $x$.\n\n\n```python\nsym.factor(sym.integrate(expression, x))\n```\n\n\n\n\n$\\displaystyle \\frac{x \\left(6 x - \\sqrt{2} \\sqrt[4]{3}\\right)}{12 \\left(4 x^{3} - 6 \\sqrt{2} \\sqrt[4]{3} x^{2} + 6 \\sqrt{3} x - \\sqrt{2} \\cdot 3^{\\frac{3}{4}}\\right)}$\n\n\n\nIf we want to calculate a **definite** integral then instead of passing the\nsingle variable we pass a tuple which contains the variable as well as the\nbounds of integration:\n\n\n```python\nsym.factor(sym.integrate(expression, (x, 5, 20)))\n```\n\n\n\n\n$\\displaystyle - \\frac{5 \\left(- 5000 \\sqrt{2} \\sqrt[4]{3} - 1200 \\sqrt{3} + 75 \\sqrt{2} \\cdot 3^{\\frac{3}{4}} + 119997\\right)}{2 \\left(-32000 - 120 \\sqrt{3} + \\sqrt{2} \\cdot 3^{\\frac{3}{4}} + 2400 \\sqrt{2} \\sqrt[4]{3}\\right) \\left(-500 - 30 \\sqrt{3} + \\sqrt{2} \\cdot 3^{\\frac{3}{4}} + 150 \\sqrt{2} \\sqrt[4]{3}\\right)}$\n\n\n", "meta": {"hexsha": "9f3b2e447831e3043d3bca1d259b7ad465fe910f", "size": 14456, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "book/tools-for-mathematics/03-calculus/tutorial/.main.md.bcp.ipynb", "max_stars_repo_name": "daffidwilde/pfm", "max_stars_repo_head_hexsha": "dcf38faccee3c212c8394c36f4c093a2916d283e", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "book/tools-for-mathematics/03-calculus/tutorial/.main.md.bcp.ipynb", "max_issues_repo_name": "daffidwilde/pfm", "max_issues_repo_head_hexsha": "dcf38faccee3c212c8394c36f4c093a2916d283e", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "book/tools-for-mathematics/03-calculus/tutorial/.main.md.bcp.ipynb", "max_forks_repo_name": "daffidwilde/pfm", "max_forks_repo_head_hexsha": "dcf38faccee3c212c8394c36f4c093a2916d283e", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 24.924137931, "max_line_length": 354, "alphanum_fraction": 0.4750276702, "converted": true, "num_tokens": 2100, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9546474162199107, "lm_q2_score": 0.86153820232079, "lm_q1q2_score": 0.8224652188202889}} {"text": "```python\nimport warnings\nwarnings.filterwarnings('ignore')\n\nfrom sympy import sqrt, Matrix, init_printing\ninit_printing()\n\nfrom fractions import Fraction\n\nfrom typing import Iterable\n\nclass Vector:\n def __init__(self, *components, dim=None):\n if dim is None:\n if isinstance(components[0], Iterable):\n self.components = tuple(components[0])\n else:\n self.components = components\n self.dim = len(self.components)\n else:\n self.components = tuple(0 for _ in range(dim))\n self.dim = dim\n \n def __add__(self, other):\n if isinstance(other, int):\n return self\n elif isinstance(other, Vector):\n if self.dim != other.dim:\n raise ValueError(f'Cannot add vector of dimension {self.dim} to vector of dimension {other.dim}')\n return Vector(i + j for i, j in zip(self.components, other.components))\n else:\n raise TypeError(f'Cannot add vector to object of type {type(other)}')\n \n __radd__ = __add__\n \n def __sub__(self, other):\n if self.dim != other.dim:\n raise ValueError(f'Cannot subtract vector of dimension {self.dim} from vector of dimension {other.dim}')\n return Vector(i - j for i, j in zip(self.components, other.components))\n \n def __mul__(self, scalar):\n return Vector(scalar * c for c in self.components)\n \n __rmul__ = __mul__\n \n def __truediv__(self, scalar):\n return self * (1 / scalar)\n \n def dot(self, other) -> int:\n if self.dim != other.dim:\n raise ValueError(f'Cannot dot product vector of dimension {self.dim} with vector of dimension {other.dim}')\n return sum(i * j for i, j in zip(self.components, other.components))\n \n def __abs__(self):\n return sqrt(self.dot(self))\n \n def __repr__(self):\n return f\"[{', '.join(str(c) for c in self.components)}]\"\n \n @property\n def pretty(self):\n return Matrix(self.components)\n \n```\n\n\n```python\n# Given n vectors, which are a basis for a subspace W. 
Use the Gram-Schmidt process to find an orthogonal basis for the subspace W.\ndef gram_schmidt(*vectors):\n basis = []\n for v in vectors:\n if len(basis) == 0:\n basis.append(v)\n else:\n basis.append(v - sum(b * Fraction(v.dot(b), b.dot(b)) for b in basis))\n return basis\n\n```\n\n# Normalize vectors\n\n\n```python\nv1 = Vector(1, -1, 0, 1, 1)\nv2 = Vector(-2, 2, 4, 2, 2)\nv3 = Vector(3, 3, 0, -3, 3)\n\n[(v1/abs(v1)).pretty, (v2/abs(v2)).pretty, (v3/abs(v3)).pretty]\n```\n\n\n\n\n$\\displaystyle \\left[ \\left[\\begin{matrix}\\frac{1}{2}\\\\- \\frac{1}{2}\\\\0\\\\\\frac{1}{2}\\\\\\frac{1}{2}\\end{matrix}\\right], \\ \\left[\\begin{matrix}- \\frac{\\sqrt{2}}{4}\\\\\\frac{\\sqrt{2}}{4}\\\\\\frac{\\sqrt{2}}{2}\\\\\\frac{\\sqrt{2}}{4}\\\\\\frac{\\sqrt{2}}{4}\\end{matrix}\\right], \\ \\left[\\begin{matrix}\\frac{1}{2}\\\\\\frac{1}{2}\\\\0\\\\- \\frac{1}{2}\\\\\\frac{1}{2}\\end{matrix}\\right]\\right]$\n\n\n\n# Distance from vector to vector\n\n\n```python\nu = Vector(12, -1, 1, -8)\nv = Vector(11, -3, -4, -5)\n\nabs(u-v)\n```\n\n# Distance from vector to plane\n\n\n```python\ny = Vector(3, -7, 5)\nu1 = Vector(-2, -5, 1)\nu2 = Vector(-3, 2, 4)\n\nabs(y - (u1 * Fraction(y.dot(u1), u1.dot(u1)) + u2 * Fraction(y.dot(u2), u2.dot(u2))))\n```\n\n# Gram-Schmidt Orthogonalization\n\n\n```python\nu1 = Vector(1, -1, -1, 1, 1)\nu2 = Vector(4, 1, 6, -6, 4)\nu3 = Vector(7, -5, -4, 8, 1)\n\n[v.pretty for v in gram_schmidt(u1, u2, u3)]\n```\n\n\n\n\n$\\displaystyle \\left[ \\left[\\begin{matrix}1\\\\-1\\\\-1\\\\1\\\\1\\end{matrix}\\right], \\ \\left[\\begin{matrix}5\\\\0\\\\5\\\\-5\\\\5\\end{matrix}\\right], \\ \\left[\\begin{matrix}3\\\\0\\\\2\\\\2\\\\-3\\end{matrix}\\right]\\right]$\n\n\n\n# QR Decomposition\n\n\n```python\nv1 = Vector(1, -1, 0, 1, 1)\nv2 = Vector(-2, 2, 4, 2, 2)\nv3 = Vector(2, 3, -1, -2, 3)\n\nA = Matrix([[1, 2, 4],\n [-1, -2, 1],\n [0, 4, 2],\n [1, 6, 3],\n [1, 6, 8]])\n\nQ = Matrix([(v1/abs(v1)).components, (v2/abs(v2)).components, (v3/abs(v3)).components]).T # insert orthonormal vectors as list row vectors and transpose\nR = Q.T*A\nR\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}2 & 8 & 7\\\\0 & 4 \\sqrt{2} & 3 \\sqrt{2}\\\\0 & 0 & 3 \\sqrt{3}\\end{matrix}\\right]$\n\n\n\n# Orthogonal Diagonalization\n\n\n```python\nA = Matrix([[-2, -2, -3], \n [-2, -3, -2],\n [-3, -2, -2]])\n\nA.eigenvects()\n```\n\n\n\n\n$\\displaystyle \\left[ \\left( -7, \\ 1, \\ \\left[ \\left[\\begin{matrix}1\\\\1\\\\1\\end{matrix}\\right]\\right]\\right), \\ \\left( -1, \\ 1, \\ \\left[ \\left[\\begin{matrix}1\\\\-2\\\\1\\end{matrix}\\right]\\right]\\right), \\ \\left( 1, \\ 1, \\ \\left[ \\left[\\begin{matrix}-1\\\\0\\\\1\\end{matrix}\\right]\\right]\\right)\\right]$\n\n\n\n\n```python\nu1 = Vector(1, 1, 1)\nu2 = Vector(1, -2, 1)\nu3 = Vector(-1, 0, 1)\n\n[(u1/abs(u1)).pretty, (u2/abs(u2)).pretty, (u3/abs(u3)).pretty]\n```\n\n\n\n\n$\\displaystyle \\left[ \\left[\\begin{matrix}\\frac{\\sqrt{3}}{3}\\\\\\frac{\\sqrt{3}}{3}\\\\\\frac{\\sqrt{3}}{3}\\end{matrix}\\right], \\ \\left[\\begin{matrix}\\frac{\\sqrt{6}}{6}\\\\- \\frac{\\sqrt{6}}{3}\\\\\\frac{\\sqrt{6}}{6}\\end{matrix}\\right], \\ \\left[\\begin{matrix}- \\frac{\\sqrt{2}}{2}\\\\0\\\\\\frac{\\sqrt{2}}{2}\\end{matrix}\\right]\\right]$\n\n\n", "meta": {"hexsha": "2c1392b6036b2513bbe860cdc86f2db67e3f8f26", "size": 15088, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Orthogonalization Toolkit.ipynb", "max_stars_repo_name": "AdinAck/LinAlg-Toolkits", "max_stars_repo_head_hexsha": "dbb11e0d5e2fd2d924433614788c262b0286490c", 
"max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Orthogonalization Toolkit.ipynb", "max_issues_repo_name": "AdinAck/LinAlg-Toolkits", "max_issues_repo_head_hexsha": "dbb11e0d5e2fd2d924433614788c262b0286490c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Orthogonalization Toolkit.ipynb", "max_forks_repo_name": "AdinAck/LinAlg-Toolkits", "max_forks_repo_head_hexsha": "dbb11e0d5e2fd2d924433614788c262b0286490c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.4392059553, "max_line_length": 2430, "alphanum_fraction": 0.5321447508, "converted": true, "num_tokens": 1737, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9525741322079104, "lm_q2_score": 0.863391617003942, "lm_q1q2_score": 0.8224445203231145}} {"text": "## Lie groups and representations\n\nA (complex) **Lie group** is a finite dimensional smooth (complex) manifold $G$ together with a group structure on $G$, such that the multiplication $G \\times G \\rightarrow G$ and the attaching of an inverse $g \\mapsto g^1:G \\rightarrow G$ are smooth maps.\n\nA **matrix Lie** group is any subgroup $H$ of $GL(n;C)$. $H$ is a closed subset: if $A_n$ is any sequence of matrices in $H$, and $A_n$ converges to some matrix $A$, then either $A \\in H$, or $A$ is not invertible.\n\n### Counterexamples. \nSet of all $n \\times n$ invertible matrices all of whose entries are rational. This is in fact a subgroup of $GL(n;\\mathbb{C})$, but not a closed subgroup. \nSubgroup of $GL(n;\\mathbb{C})$ : Let $a$ be an irrational real number, and let\n$H = \n\\begin{Bmatrix}\n\\begin{pmatrix}\ne^{it} & 0\\\\ \n0 & e^{ita}\n\\end{pmatrix}\n\\vert t \\in \\mathbb{R}\n\\end{Bmatrix}\n$\n\nWe can find a sequence wich converges to $-I$, but it does not belong to $H$. The closure of H is a 2 dimensional torus: $H = \n\\begin{Bmatrix}\n\\begin{pmatrix}\ne^{it} & 0\\\\ \n0 & e^{is}\n\\end{pmatrix}\n\\vert t,s \\in \\mathbb{R}\n\\end{Bmatrix}\n$\n\n\n### Examples.\nGeneral linear groups: $GL(n;\\mathbb{R}) \\lt GL(n;\\mathbb{C})$\n\nSpecial linear groups: $SL(n;\\mathbb{R})$ and $SL(n;\\mathbb{C})$, matrices having determinant one.\n\northogonal and special orthogonal groups: $O(n), SO(n)$. real matrices \n> column vectors that make up $A$ are orthonormal, that is, $\\sum_{i=1}^nA_{ij}A_{ik} = \\delta_{jk}$ \n> $\\langle x,y \\rangle = \\langle Ax,Ay \\rangle$ \n> $A^{tr} A = I$ \n$det(A) = \\pm 1$\n\nunitary and special unitary groups, $U(n)$ and $SU(n)$: $A^\\dagger A = I$ \n$\\vert det(A) \\vert = 1$\n\ncomplex orthogonal groups, $O(n;\\mathbb{C})$ and $SO(n;\\mathbb{C})$\n\ngeneralized orthogonal $O(n;k)$ \n> Define a symmetric bilinear form $[...]_{n+k}$ on $\\mathbb{R}^{n+k}$: $$[x,y]_{n,k} = x_1 y_1 + ... + x_n y_n - x_{n+1} y_{n+1} - ... - x_{n+k} y_{n+k}$$ \n> $A \\in O(n;k) \\Longrightarrow [Ax,Ay]=[x,y] for all x,y \\in \\mathbb{R}^{n+k}$ \n> $A^{tr} g A = g$ where $g$ is diagonal matrix (metric tensor) with 1 in first $n$ positions and -1 in last $k$. 
\n$det(A) = \\pm 1$\n\nLorentz group: $O(3; 1)$\n\nsymplectic groups $Sp(n;\\mathbb{R})$, $Sp(n;\\mathbb{C})$, and $Sp(n)$ \n> Consider the skew-symmetric bilinear form $B$ on $\\mathbb{R}^{2n}$ defined as: $B[x,y] = \\sum_{i=1}^n x_i y_{n+i} - x_{n+i} y_i$ \n> $B[Ax,Ay] = B[x,y]$ for all $x,y$ \n> if $J(2n \\times 2n) = \n\\begin{pmatrix}\n0 & I\\\\ \n-I & 0\n\\end{pmatrix}$, then $A^{tr} J A = J$ \n> $Sp(n) = Sp(n;\\mathbb{C}) \\cap U(2n)$ \n> $det(A) = 1$\n\nThe Heisenberg group $H$: $A = \n\\begin{pmatrix}\n1 & a & b\\\\ \n0 & 1 & c\\\\ \n0 & 0 & 1\n\\end{pmatrix}\n$; $A^{-1} = \n\\begin{pmatrix}\n1 & -a & ac-b\\\\ \n0 & 1 & -c\\\\ \n0 & 0 & 1\n\\end{pmatrix}\n$; \n$H$ is subgroup of GL(n;\\mathbb{R})$\n\n\n$\\mathbb{R}^* \\simeq GL(1;\\mathbb{R})$, $\\mathbb{C}^* \\simeq GL(1;\\mathbb{C})$, $S^1 \\simeq U(1)$, \n$\\mathbb{R}^n \\simeq \\begin{pmatrix}\ne^{x_1} & & 0\\\\ \n & ... & \\\\ \n0 & & e^{x_n}\n\\end{pmatrix}$\n\n\n\n### Euclidean and Poincar\u00b4e groups.\n\nEuclidean group $E(n)$ is by definition the group of all one-to-one, onto, distance-preserving maps of $\\mathbb{R}^n$\nto itself, that is, maps $f : \\mathbb{R}^n \\rightarrow \\mathbb{R}^n$ such that $d (f (x) , f (y)) = d (x, y)$ for all\n$x, y \\in \\mathbb{R}^n$.$E(n)$ is subgroup of $GL(n+1;\\mathbb{R})$. $T = T_x R$ where $R \\in O(n)$ and $T_x$ is translation by $x$ \n$T = \\begin{pmatrix}\n & & & x_1\\\\ \n & R & & \\vdots\\\\ \n & & & x_n\\\\ \n0 & ... & 0 & 1\n\\end{pmatrix}$\n\nPoincar\u00b4e group P(n; 1). affine transformations of $\\mathbb{R}^{n+1}$ which preserve the Lorentz distance $d_L(x, y) = (x_1 \u2212 y_1)^2 + \u00b7 \u00b7 \u00b7 + (x_n \u2212 y_n)^2 \u2212 (x_{n+1} \u2212 y_{n+1})^2$. $T = T_x R$ where $R \\in O(n;1)$\n\nA matrix Lie group $G$ is said to be **compact** if it is closed ($A_n \\leftarrow A \\in G$) and bounded ($A_{ij} \\lt Constant$)\n\nA matrix Lie group $G$ is said to be **connected** if given any two matrices $A$ and $B$ in $G$, there exists a continuous path $A(t), a \\le t \\le b$, lying in $G$ with $A(a) = A$, and $A(b) = B$. \nFor matrix Lie groups topological definitions of path-connected and connected are identical.\n\nIf $G$ is a matrix Lie group, then the component of $G$ containing the identity is a subgroup of $G$.\n\nA connected matrix Lie group $G$ is said to be simply connected if every loop in $G$ can be shrunk continuously to a point in $G$. \nMore precisely, $G$ is **simply connected** if given any continuous path $A(t), 0 \u2264 t \u2264 1$, lying in $G$ with $A(0) = A(1)$, there exists a continuous function $A(s, t), 0 \\le s, t \\le 1$, taking values in $G$ with the following properties: \n>1) $A(s, 0) = A(s, 1)$ for all $s$ \n>2) $A(0, t) = A(t)$ \n>3) $A(1, t) = A(1, 0)$ for all $t$\n\n\n\n$$\\begin{matrix}\n & Components & Simply \\space connected & dim & \\\\ \nGL(n;\\mathbb{C}) & 1 & no & n^2 & \\\\ \nSL(n;\\mathbb{C}) & 1 & yes & n^2 & \\\\\nGL(n;\\mathbb{R}) & 2 & no & n^2 & \\\\ \nSL(n;\\mathbb{R}) & 1 & no & n^2-1 & \\\\ \nO(n) & 2 & & \\frac{n (n-1)}{2} & \\\\ \nSO(n) & 1 & no & \\frac{n (n-1)}{2} & \\\\ \nU(n) & 1 & no & n^2 & \\\\ \nSU(n) & 1 & yes & n^2-1 & \\\\ \nO(n;1) & 4 & & & \\\\ \nSO(n;1) & 2 & n=1 yes; n \\gt 1 no & & \\\\ \nHeisenberg & 1 & yes & & \\\\ \nE(n) & 2 & & & \\\\ \nP(n;1) & 4 & & & \\\\ \n & & & & \n\\end{matrix}$$\n\nTheorem: Every matrix Lie group is a Lie group.\n\nTheorem: Let $G$ and $H$ be Lie groups, and $\u03c6$ a group homomorphism from $G$ to $H$. 
Then if $\u03c6$ is continuous it is also smooth.\n\n## Lie Algebras and the Exponential Mapping\n\nFor any $n \\times n$ real or complex matrix $X$, the series $e^X = \\sum_{m=0}^\\infty \\frac{X^m}{m!}$ converges. The matrix exponential $e^X$ is a continuous function of $X$.\n\n$\\Vert A \\Vert = sup_{x \\neq 0} \\frac{\\Vert Ax \\Vert}{\\Vert x \\Vert}$ $\\Leftrightarrow$ $\\Vert A \\Vert$ is the smallest $\\lambda$, $\\Vert Ax \\Vert \\le \\lambda \\Vert x \\Vert$ for $x \\in $\\mathbb{C}^n$\n\n$\\sum_{m=0}^\\infty \\Vert \\frac{X^m}{m!} \\Vert \\le \\sum_{m=0}^\\infty \\frac{\\Vert X \\Vert ^m}{m!} = e^{\\Vert X \\Vert}$. (The sum converges absolutely) \n\nLet $X, Y$ be arbitrary $n \\times n$ matrices. Then\n> $e^0 = I$ \n> $(e^X)^{-1} = e^{-X}$ \n> $e^{(\\alpha + \\beta)X} = e^{\\alpha X} e^{\\beta X}$ \n> if $XY = YX$, then $e^{X+Y} = e^X e^Y = e^y e^x$ \n> $e^{C X C^{-1}} = C e^X C^{-1}$ \n> $\\Vert e^X \\Vert \\le e^{\\Vert X \\Vert}$\n\n$e^{tX}$ is a smooth curve in $\\mathbb{C}^{n \\times n}$. And $\\frac{d}{dt} e^{tX} = X e^{tX} = e^{tX} X$. \nAnd particularly $\\left . \\frac{d}{dt} \\right |_{t=0} e^{tX} = X e^{tX} = e^{tX} X$.\n\nA Jordan\u2013Chevalley decomposition of $X$ is an expression of it as a sum: \n$X = X_{ss} + X_n$, where $X_{ss}$ is semisimple, $X_n$ is nilpotent, and $X_{ss}$ and $X_n$ commute. If such a decomposition exists it is unique, and $X_{ss}$ and $X_n$ are in fact expressible as polynomials in $X$.\n\nThe function $log(z) = \\sum_{m=1}^\\infty (-1)^{m+1} \\frac{(z-1)^m}{m}$ is defined and analytic in a circle of radius one about $z=1$. \nfor all $\\vert z-1 \\vert \\lt 1$ $e^{logz} = z$ \nfor all $\\vert u \\vert \\lt log2$, $\\vert e^u-1 \\vert \\lt 1$ and $log e^u = u$.\n\nThe function $log(A) = \\sum_{m=1}^\\infty (-1)^{m+1} \\frac{(A-I)^m}{m}$ is defined and continuous on the set of all $n \\times n$ complex matrices $A$ with $\\Vert A-I \\Vert \\lt 1$, and $log A$ is real if $A$ is real. \nfor all $\\Vert A-I \\Vert \\lt 1$ $e^{logA} = A$ \nfor all $\\Vert X \\Vert \\lt log2$, $\\Vert e^A-1 \\Vert \\lt 1$ and $log e^X = X$.\n\nLie Product Formula (Trotter formula): $e^{X+Y} = \\lim_{m \\to \\infty} (e^{\\frac{X}{m}}e^{\\frac{Y}{m}})^m$\n\nTheorem: $det(e^X)=e^{tr(X)}$\n\n$A : \\mathbb{R} \\to GL(n;\\mathbb{C})$ is called a one-parameter group if \n> $A$ is continuous \n> $A(0) = I$ \n> $A(t+s) = A(t) A(s)$ for all $t,s \\in \\mathbb{R}$\n\nIf $A$ is a one-parameter group in $GL(n;\\mathbb{C})$, then there exists unique $X$, such that $A(t) = e^{tX}$\n\n\n\nLet $G$ be a matrix Lie group. Then the **Lie algebra** of $G$, denoted $\\mathfrak{g}$, is the set of all matrices $X$ such that $e^{tX}$ is in $G$ for all real numbers $t$. (In physics usually $e^{itX}$ is used).\n\n$GL(n;\\mathbb{C}) \\Longrightarrow gl(n;\\mathbb{C})$, all complex matrices. \n$GL(n;\\mathbb{R}) \\Longrightarrow gl(n;\\mathbb{g})$, all real matrices. \n$SL(n;\\mathbb{C}) \\Longrightarrow sl(n;\\mathbb{C})$, all complex matrices with trace 0. \n$SL(n;\\mathbb{R}) \\Longrightarrow sl(n;\\mathbb{R})$, all real matrices with trace 0. 
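\n(Added aside, not part of the original notes.) A minimal numerical sketch of the trace-zero condition: by the theorem $det(e^X)=e^{tr(X)}$ above, exponentiating a traceless matrix gives a matrix of determinant one, i.e. an element of the special linear group. It assumes NumPy and SciPy are available:\n\n\n```python\nimport numpy as np\nfrom scipy.linalg import expm\n\nrng = np.random.default_rng(0)\nX = rng.standard_normal((3, 3))\nX -= (np.trace(X) / 3.0) * np.eye(3)  # project onto trace zero, so X is a candidate element of sl(3;R)\n\nA = expm(X)  # the matrix exponential e^X\nprint(np.isclose(np.linalg.det(A), 1.0))  # expected: True, consistent with e^X lying in SL(3;R)\n```\n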
\n$U(n) \\Longrightarrow u(n)$, all complex matrices that $X^\\dagger = -X$ \n$su(n)$, complex matrices that $X^\\dagger = -X$ and $tr(X)=0$ \n$O(n)$ and $SO(n) \\Longrightarrow so(n)$, all real such that $X^{tr} = -X$ \n$o(n;k), so(n;k)$ all real such that $gX^{tr}g = X$ \n$sp(n;\\mathbb{R})$ all that $J X^{tr} J = X$ \nHeisenberg group: $X = \\begin{pmatrix}\n0 & \\alpha & \\beta \\\\ \n0 & 0 & \\gamma \\\\ \n0 & 0 & 0\n\\end{pmatrix}$ and $e^{tX} = \\begin{pmatrix}\n1 & t\\alpha & t\\beta + t^2 \\alpha \\gamma \\frac{1}{2} \\\\ \n0 & 1 & t\\gamma \\\\ \n0 & 0 & 1\n\\end{pmatrix}$ \n\nLet $G$ be a matrix Lie group, and $X$ an element of its Lie\nalgebra. Then $e^X$ is an element of the identity component of $G$.\n\nLet $G$ be a matrix Lie group with Lie algebra $\\mathfrak{g}$. $X,Y \\in \\mathfrak{g}$ and $A \\in G$. \n> $AXA^{-1} \\in \\mathfrak{g}$ \n> $sX \\in \\mathfrak{g}$ for all real s \n> $X+Y \\in \\mathfrak{g}$ \n> $XY - YX \\in \\mathfrak{g}$. (for physics convention $-i(XY - YX) \\in \\mathfrak{g}$)\n\n**main theorem**\nLet $G$ and $H$ be matrix Lie groups, with Lie algebras $\\mathfrak{g}$ and $\\mathfrak{h}$, respectively. Suppose that $\\phi : G \\rightarrow H$ be a Lie group homomorphism. Then there exists a unique real linear map $\\tilde{\\phi} : \\mathfrak{g} \\rightarrow \\mathfrak{h}$ such that $\\phi(e^X) = e^{\\tilde{\\phi}(X)}$ for all $X \\in \\mathfrak{g}$ .\n> $\\tilde{\\phi}(AXA^{-1}) = \\phi(A) \\tilde{\\phi}(X) \\phi(A)^{-1}$ \n> $\\tilde{\\phi}([X,Y]) = [\\tilde{\\phi}(X), \\tilde{\\phi}(Y)]$ \n> $\\tilde{\\phi}(X) = \\left . \\frac{d}{dx} \\right \\vert_{t=0} \\phi(e^{tX})$\n\nIf $G, H, K$ are matrix Lie groups and $\\phi : G \\rightarrow H$, $\\psi : G \\rightarrow H$, then $\\widetilde{\\phi \\circ \\psi} = \\tilde{\\psi} \\circ \\tilde{\\psi}$\n\nAdjoint Mapping: the linear map $Ad A : \\mathfrak{g} \\rightarrow \\mathfrak{g}$, defined as $Ad A(X) = AXA^{-1}$. $Ad : G \\rightarrow GL(\\mathfrak{g})$ is group homomorphism.\n\nLet $G$ be a matrix Lie group with Lie algebra $\\mathfrak{g}$. Then there\nexist a neighborhood $U$ of zero in $\\mathfrak{g}$ and a neighborhood $V$ of $I$ in $G$ such that the\nexponential mapping takes $U$ homeomorphically onto $V$ .\n\nIf $G$ is a connected matrix Lie group, then every element $A$ of $G$ can be written in the form $A = e^{X_1}e^{X_2} \u00b7 \u00b7 \u00b7 e^{X_n}$ for some $X1,X2, \u00b7 \u00b7 \u00b7Xn \\in \\mathfrak{g}$.\n\nA finite-dimensional real or complex **Lie algebra** is a finite-dimensional real or complex vector space $\\mathfrak{g}$, together with a map $[ ]$ from $\\mathfrak{g} \\times \\mathfrak{g} \\to \\mathfrak{g}$ , with the following properties: \n> $[]$ is bilinear. \n> $[X, Y ] = \u2212[Y,X]$ for all $X, Y \\in \\mathfrak{g}$. \n> $[X, [Y,Z]] + [Y, [Z,X]] + [Z, [X, Y ]] = 0$ for all $X, Y,Z \\in \\mathfrak{g}$ (Jacobi identity).\n\nThe space $\\mathfrak{g}l(n;R)$ of all $n \\times n$ real matrices is a real Lie algebra.\n\nLinear maps of $V$ into itself $\\mathfrak{g}l(V)$ becomes a real or complex Lie algebra with the bracket operation $[A,B] = AB \u2212 BA$.\n\nEvery finite-dimensional real Lie algebra is isomorphic to a subalgebra of $\\mathfrak{g}l(n;R)$. Every finite-dimensional complex Lie algebra is isomorphic to a (complex) subalgebra of \\mathfrak{g}l(n;C)$. 
(This is\nin contrast to the situation for Lie groups, where most but not all Lie groups are\nmatrix Lie groups.)\n\nLet $\\mathfrak{g}$ be a finite-dimensional real or complex Lie algebra, and let $X_1, \u00b7 \u00b7 \u00b7 ,X_n$ be a basis for $\\mathfrak{g}$ (as a vector space). Then for each $i, j$ $[X_i,X_j ]$ can be written uniquely in the form $[X_i,X_j ] = \\sum_{k=1}^n c_{ijk}X_k$. $c_{ijk}$ are called **structure constants**.\n\n$V$ is a finite-dimensional real vector space, then the **complexification** of $V$ , denoted $V_C$, is the space of formal linear combinations: $v_1 + iv_2$. With with $v_1, v_2 \\in V$. $V$ is the real subspace of $V_C$.\n\nLet $\\mathfrak{g}$ be a finite-dimensional real Lie algebra, and $\\mathfrak{g}_C$ its complexification (as a real vector space). Then the bracket operation on $\\mathfrak{g}$ has a unique extension to $\\mathfrak{g}_C$ which makes $\\mathfrak{g}_C$ into a complex Lie algebra. The complex Lie algebra $\\mathfrak{g}_C$ is called the complexification of the real Lie algebra $\\mathfrak{g}$. $$[X_1 + iX_2, Y_1 + iY_2] = ([X_1, Y_1] \u2212 [X_2, Y_2]) + i ([X_1, Y_2] + [X_2, Y_1])$$\n\n\n$\\mathfrak{gl}(n;\\mathbb{R})_{\\mathbb{C}} \\simeq \\mathfrak{gl}(n;\\mathbb{C})$; \n$u(n)_{\\mathbb{C}} \\simeq \\mathfrak{g}l(n;\\mathbb{C})$; \n$\\mathfrak{sl}(n;\\mathbb{R})_{\\mathbb{C}} \\simeq \\mathfrak{sl}(n;\\mathbb{C})$; \n$so(n)_{\\mathbb{C}} \\simeq so(n;\\mathbb{C})$\n\nA given complex Lie algebra may have several non-isomorphic real forms.\n\n**main theorem (inverse)**\n1. Let $G$ and $H$ be matrix Lie groups, let $\\phi_1,\\phi_2 : G \\rightarrow H$ be Lie group homomorphisms. And let $\\tilde{\\phi_1},\\tilde{\\phi_2} : \\mathfrak{g} \\rightarrow \\mathfrak{h}$ be the associated Lie algebras homomorphisms. If $G$ is connected and $\\tilde{\\phi_1} = \\tilde{\\phi_2}$ then $\\phi_1 = \\phi_2$\n\n2. Let $G$ and $H$ be matrix Lie groups with Lie algebras $\\mathfrak{g},\\mathfrak{h}$. Let $\\tilde{\\phi} : \\mathfrak{g} \\rightarrow \\mathfrak{h}$ be a Lie algebra homomorphism. If $G$ is connected and simply connected, then there exists a unique Lie group homomorphism $\\phi : G \\rightarrow H$. \n> $\\tilde{\\phi}(AXA^{-1}) = \\phi(A) \\tilde{\\phi}(X) \\phi(A)^{-1}$ \n> $\\tilde{\\phi}([X,Y]) = [\\tilde{\\phi}(X), \\tilde{\\phi}(Y)]$ \n> $\\tilde{\\phi}(X) = \\left . \\frac{d}{dx} \\right \\vert_{t=0} \\phi(e^{tX})$\n\n\n\nLet $G$ be a connected matrix Lie group. A universal covering group of G (or just **universal cover**) is a connected, simply connected Lie group $\\tilde{G}$, together with a Lie group homomorphism $\\phi : \\tilde{G} \\to G$ (called the projection map) with the following properties: \n> $\\phi$ maps $\\tilde{G}$ onto $G$ (surjective) \n> There is a neighborhood $U$ of $I$ in $\\tilde{G}$ which maps homeomorphically under $\\phi$ onto a neighborhood $V$ of $I$ in $G$.\n\nIf $G$ is any connected matrix Lie group, then a universal covering group $\\tilde{G}$ of $G$ exists and is unique up to canonical isomorphism.\n\n$G$ and $\\tilde{G}$ have the same Lie algebra: \nLet $G$ be a connected matrix Lie group, $\\tilde{G}$ its universal cover, and $\\phi$ the projection map from $\\tilde{G}$ to $G$. Suppose that $\\tilde{G}$ is a matrix Lie group with Lie algebra $\\tilde{ \\mathfrak{g}}$. 
Then the associated Lie algebra map $\\tilde{\\phi} : \\tilde{\\mathfrak{g}} \\to \\mathfrak{g}$ is an isomorphism.\n\nExample: The universal cover of $S^1$ is $\\mathbb{R}$, and the projection map is the map $x \\to e^{ix}$.\n\n\nA finite-dimensional complex **representation** of $G$ is a Lie group homomorphism $\\Pi : G \\to GL(n;\\mathbb{C})$\n\nA finite-dimensional complex representation of $\\mathfrak{g}$ is a Lie algebra homomorphism $\\pi : \\mathfrak{g} \\to gl(n;\\mathbb{C})$\n\n$\\pi(X) = \\left . \\frac{d}{dt} \\right \\vert_{t=0} \\Pi (e^{tX})$\n\nA representation with no non-trivial invariant subspaces is called **irreducible**.\n\nA finite-dimensional representation of a group or Lie algebra, acting on a space $V$, is said to be **completely reducible** if the following property is satisfied: Given an invariant subspace $W \\subset V$, and a second invariant subspace $U \\subset W \\subset V$, there exists a third invariant subspace $\\tilde{U} \\subset W$ such that $U \\cap \\tilde{U} = \\{0\\}$ and $U + \\tilde{U} = W$.\n\nLet $\\mathfrak{g}$ be a real Lie algebra, and $\\mathfrak{g}_C$ its complexification. Then every finite-dimensional complex representation $\\pi$ of $\\mathfrak{g}$ has a unique extension to a (complex-linear) representation of $\\mathfrak{g}_C$, also denoted $\\pi$. The representation of $\\mathfrak{g}_C$ satisfies: $\\pi (X + iY ) = \\pi (X) + i \\pi (Y )$ for all $X \\in \\mathfrak{g}$.\n\nLet $G$ be a matrix Lie group, let $H$ be a Hilbert space, and let $U(H)$ denote the group of unitary operators on $H$. Then a homomorphism $\\Pi : G \\to U(H)$ is called a **unitary representation** of $G$ if $\\Pi$ satisfies the following continuity condition: If $A_n, A \\in G$ and $A_n \\to A$, then $\\Pi(A_n)v \\to \\Pi(A)v$ for all $v \\in H$. A unitary representation with no non-trivial closed invariant subspaces is called irreducible.\n\nA finite-dimensional completely reducible representation of a group or Lie algebra is equivalent to a direct sum of (one or more) irreducible representations.\n\nLet $G$ be a matrix Lie group. Let $\\Pi$ be a finite-dimensional unitary representation of $G$, acting on a finite-dimensional real or complex Hilbert space $V$. Then $\\Pi$ is completely reducible.\n\nIf $G$ is a finite group, then every finite-dimensional real or complex representation of $G$ is completely reducible.\n\nIf $G$ is a compact matrix Lie group, then every finite-dimensional real or complex representation of $G$ is completely reducible.\n\n**Clebsch-Gordan** theory: Suppose $\\Pi_1,\\Pi_2$ are irreducible representations of a group $G$. If we regard $\\Pi_1 \\otimes \\Pi_2$ as a representation of $G$, it may no longer be irreducible. If it is not irreducible, one can attempt to decompose it as a direct sum of irreducible representations.\n\n**Schur's Lemma**: \n> Let $V$ and $W$ be irreducible real or complex representations of a group or Lie algebra, and let $\\phi : V \\to W$ be a morphism. Then either $\\phi = 0$ or $\\phi$ is an isomorphism. \n> Let $V$ be an irreducible complex representation of a group or Lie algebra, and let $\\phi : V \\to V$ be a morphism of $V$ with itself. Then $\\phi = \\lambda I$, for some $\\lambda \\in \\mathbb{C}$. \n> Let $V$ and $W$ be irreducible complex representations of a group or Lie algebra, and let $\\phi_1,\\phi_2 : V \\to W$ be nonzero morphisms. 
Then $\\phi_1 = \\lambda \\phi_2$, for some $\\lambda \\in \\mathbb{C}$.\n\nAn irreducible complex representation of a commutative group or Lie algebra is one-dimensional.\n\n\n## SU(2) and SO(3)\n\n$SU(2)$ : $A = \\begin{pmatrix}\n\\alpha & -\\bar{\\beta} \\\\ \n\\beta & \\bar{\\alpha}\n\\end{pmatrix}$ where $\\vert \\alpha \\vert ^2 + \\vert \\beta \\vert ^2 = 1$ , that is we have $S^3$ in $\\mathbb{R}^4$. compact, path connected and simply connected.\n\n$SO(3)$ can be parametrized with $R(\\phi,\\vec{n})$, counterclockwise rotation about the axis through $\\vec{n}$ by angle $\\phi$. If $0 \\le \\phi \\le \\pi$, $\\vec{n}$ and $\\phi$ define exactly a unique rotation: except $R(\\pi, \\pm n)$ which are identical. We have a solid ball in $\\mathbb{R}^3$ with radius $\\pi$, Identity is the center of the ball, and antipodal points are glued together. compact, path connected but not simply connected.\n\n[Dirac belt Trick](https://vimeo.com/62228139) (or hand rotation, twice)\n\nConsider the space $V$ of all $2 \\times 2$ complex matrices which are self-adjoint and have trace zero. And inner product as $\\langle A,B \\rangle = trace(AB)$. This is a three dimensional vector space. With orthonormal basis $\\{ \\sigma_x,\\sigma_y,\\sigma_z \\}$. If $U \\in SU(2)$ then $UAU^{-1} \\in V$. $\\Phi_U(A) \\equiv UAU^{-1}$ is a linear map of $V$ into itself. More $\\langle \\Phi_U(A),\\Phi_U(B) \\rangle = \\langle A,B \\rangle$. $U \\to \\Phi_U$ is homomorphism and continuous. \nThus $U \\to \\Phi_U$ is a Lie group homomorphism of $SU(2)$ into $SO(3)$ (in fact two to one map: $\\Phi_U = \\Phi_{-U}$).\n\n\n\n\n\n\n\n## Clifford group\n\n\n## Links\n\n[An Elementary Introduction to Groups and Representations (Brian C. Hall)](https://arxiv.org/abs/math-ph/0005032)\n\n\n\n## To Look\n\nSymplectic group\n\nOrbit-stabilizer theorem and Burnside's lemma\n\nhomomorphism theorems\n\n\n\n\n\n```python\nfrom sympy import *\ninit_printing(use_latex='mathjax')\nfrom sympy.physics.quantum.dagger import Dagger\na, b, c, d = symbols('a b c d')\nm = Matrix([[a,b],[c,d]])\ndisplay(m)\nDagger(m)*m, m*Dagger(m)\n\n\n```\n\n\n$$\\left[\\begin{matrix}a & b\\\\c & d\\end{matrix}\\right]$$\n\n\n\n\n\n$$\\left ( \\left[\\begin{matrix}a \\overline{a} + c \\overline{c} & b \\overline{a} + d \\overline{c}\\\\a \\overline{b} + c \\overline{d} & b \\overline{b} + d \\overline{d}\\end{matrix}\\right], \\quad \\left[\\begin{matrix}a \\overline{a} + b \\overline{b} & a \\overline{c} + b \\overline{d}\\\\c \\overline{a} + d \\overline{b} & c \\overline{c} + d \\overline{d}\\end{matrix}\\right]\\right )$$\n\n\n\n\n```python\nprint('aa')\n1+1\n\n```\n\n aa\n\n\n\n\n\n 2\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "8ee2a24dc7553144d5916e8f5e252c412c21f7e2", "size": 27446, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Math/IntroToGroupsLie.ipynb", "max_stars_repo_name": "gate42qc/seminars", "max_stars_repo_head_hexsha": "35ff77b902d9c2ede619fd6e2d9c3e80d20d78de", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2018-12-07T10:02:06.000Z", "max_stars_repo_stars_event_max_datetime": "2019-11-24T19:30:03.000Z", "max_issues_repo_path": "Math/IntroToGroupsLie.ipynb", "max_issues_repo_name": "gate42qc/seminars", "max_issues_repo_head_hexsha": "35ff77b902d9c2ede619fd6e2d9c3e80d20d78de", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, 
"max_forks_repo_path": "Math/IntroToGroupsLie.ipynb", "max_forks_repo_name": "gate42qc/seminars", "max_forks_repo_head_hexsha": "35ff77b902d9c2ede619fd6e2d9c3e80d20d78de", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-08-22T12:07:40.000Z", "max_forks_repo_forks_event_max_datetime": "2019-08-22T12:07:40.000Z", "avg_line_length": 49.9018181818, "max_line_length": 508, "alphanum_fraction": 0.5389492094, "converted": true, "num_tokens": 7405, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9465966656805269, "lm_q2_score": 0.8688267643505193, "lm_q1q2_score": 0.8224285181882025}} {"text": "# Taylor problem 1.50\n\nThis problem attacks the \"oscillating skateboard\" problem described in Example 1.2 of Taylor. A Newton's 2nd law analysis leads to the differential equation for the angle $\\phi$ in radians:\n\n$\n\\begin{align}\n \\ddot\\phi = -\\frac{g}{R}\\sin\\phi\n \\;.\n\\end{align}\n$\n\nThis is a 2nd order, *nonlinear* differential equation. We note it is the same equation describing the motion of a simple (undamped, not driven) pendulum.\n\nProblem 1.50 has us solving this equation numerically for particular initial conditions and comparing the plots to the approximate solution based on the small angle approximation for $\\sin\\phi$. We'll build up code to find this solution and plot it in steps to illustrate how a notebook evolves. We don't create the polished version at once!\n\n**Your goal for problem 1.51: Modify the relevant part of this notebook to produce the required figure, print it out, and turn it in with your homework.**\n\n\n```python\n%matplotlib inline\n```\n\n\n```python\nimport numpy as np\nfrom scipy.integrate import odeint\n\nimport matplotlib.pyplot as plt\n#plt.rcParams.update({'font.size': 18})\n\n```\n\nWe'll define the right-hand side (rhs) of the ordinary differential equations (ODE) using the standard form from the Python basics notebook:\n\n$$\\begin{align}\n \\frac{d}{dt}\\left(\\begin{array}{c}\n \\phi \\\\\n \\dot\\phi\n \\end{array}\\right)\n = \\left(\\begin{array}{c}\n \\dot\\phi \\\\\n -g \\sin(\\phi)\n \\end{array}\\right)\n\\end{align}$$\n\n\n```python\ndef ode_rhs_exact(u_vec, t, *params):\n \"\"\" \n Right-hand side (rhs) of the differential equation, with \n u_vec = [\\phi, \\dot\\phi] and params = [g, R]. Returns the list of\n d(u_vec)/dt, as prescribed by the differential equation.\n \n \"\"\"\n phi, phidot = u_vec # extract phi and phidot from the passed vector\n g, R = params # extract g and R from the passed parameters\n return [phidot, -g*np.sin(phi)/R]\n```\n\n\n```python\n# parameters\ng = 9.8 # in mks units\nR = 5 # radius in meters\n\n# absolute and relative tolerances for ode solver\nabserr = 1.0e-8\nrelerr = 1.0e-6\n\n# initial conditions for [phi, phidot]\nphi0 = np.pi/180 * 90. # convert initial phi to radians\nu0_vec = [phi0, 0.]\n\nt_max = 15. # integration time\nt_pts = np.arange(0, t_max, 0.01) # array of time points, spaced 0.01\n\n# Integrate the differential equation and read off phi, phidot (note T!)\nphi, phidot = odeint(ode_rhs_exact, u0_vec, t_pts, args=(g, R), \n atol=abserr, rtol=relerr).T\n```\n\n\n```python\nfig = plt.figure()\nax = fig.add_subplot(1,1,1)\nax.plot(t_pts, 180./np.pi * phi)\nfig.tight_layout() # make the spacing of subplots nicer\n```\n\n**Does the plot make sense for $\\phi$? E.g., does it start at the correct angle? 
Does it have the behavior you expect (e.g., periodic with constant amplitude)?**\n\nNow let's put this into a function:\n\n\n```python\ndef solve_for_phi(phi0, phidot0=0, t_min=0., t_max=1., g=9.8, R=5.):\n \"\"\"\n Solve the differential equation for the skateboard Example 1.2 in Taylor.\n The result for t, \\phi(t) and \\dot\\phi(t) are returned for a grid with\n t_min < t < t_max and a hardwired (for now) spacing of 0.01 seconds.\n The ODE solver is odeint from scipy, with specified tolerances. \n Units are mks and angles are in radians.\n \"\"\"\n\n # absolute and relative tolerances for ode solver\n abserr = 1.0e-8\n relerr = 1.0e-6\n\n # initial conditions for [phi, phidot]\n u0_vec = [phi0, phidot0]\n\n t_pts = np.arange(t_min, t_max, 0.01)\n\n # Integrate the differential equation\n phi, phidot = odeint(ode_rhs_exact, u0_vec, t_pts, args=(g, R), \n atol=abserr, rtol=relerr).T\n \n return t_pts, phi, phidot\n```\n\nCheck that it works (gives the previous result).\n\n\n```python\nphi0 = np.pi/180 * 90. # convert initial phi to radians\nt_pts, phi, phidot = solve_for_phi(phi0, t_max=15.)\n```\n\n\n```python\nfig = plt.figure()\nax = fig.add_subplot(1,1,1)\nax.plot(t_pts, 180./np.pi * phi)\nfig.tight_layout() # make the spacing of subplots nicer\n\n```\n\nOk, now we need an ode function for the small angle approximation. It's very easy now to copy and modify our other function!\n\n\n```python\ndef ode_rhs_small_angle(u_vec, t, *params):\n \"\"\" \n Right-hand side (rhs) of the differential equation, with \n u_vec = [\\phi, \\dot\\phi] and params = [g, R]. Returns the list of\n d(u_vec)/dt, as prescribed by the differential equation.\n \n \"\"\"\n phi, phidot = u_vec # We don't actually use x or y here, but could!\n g, R = params\n return [phidot, -g*phi/R]\n```\n\nAnd we can put them together into one solver function:\n\n\n```python\ndef solve_for_phi_all(phi0, phidot0=0, t_min=0., t_max=1., g=9.8, R=5.):\n \"\"\"\n Solve the differential equation for the skateboard Example 1.2 in Taylor\n using the exact equation and the small angle approximation.\n The result for t, \\phi(t) and \\dot\\phi(t) are returned for a grid with\n t_min < t < t_max and a hardwired (for now) spacing of 0.01 seconds.\n The ODE solver is odeint from scipy, with specified tolerances. 
\n Units are mks and angles are in radians.\n \"\"\"\n\n # absolute and relative tolerances for ode solver\n abserr = 1.0e-8\n relerr = 1.0e-6\n\n # initial conditions for [phi, phidot]\n u0_vec = [phi0, phidot0]\n\n t_pts = np.arange(t_min, t_max, 0.01)\n\n # Integrate the differential equations\n phi, phidot = odeint(ode_rhs_exact, u0_vec, t_pts, args=(g, R), \n atol=abserr, rtol=relerr).T\n phi_sa, phidot_sa = odeint(ode_rhs_small_angle, u0_vec, t_pts, args=(g, R), \n atol=abserr, rtol=relerr).T\n \n return t_pts, phi, phidot, phi_sa, phidot_sa\n```\n\nAlways try it out!\n\n\n```python\nphi0 = np.pi/180 * 90.\nt_pts, phi, phidot, phi_sa, phidot_sa = solve_for_phi_all(phi0, t_max=15.)\nprint(phi0)\n```\n\n 1.5707963267948966\n\n\n\n```python\nfig = plt.figure()\nax = fig.add_subplot(1,1,1)\nax.plot(t_pts, 180./np.pi * phi)\nax.plot(t_pts, 180./np.pi * phi_sa)\nfig.tight_layout() # make the spacing of subplots nicer\n\n```\n\nThis is actually the plot that is requested, so we could analyze it at this stage, but instead let's improve the plot and see how to save it.\n\n### Ok, now for some more systematic plotting\n\nHere we see examples of applying limits to the x and y axes as well as labels and a title.\n\n\n```python\nfig = plt.figure(figsize=(8,6))\nax = fig.add_subplot(1,1,1)\nax.set_xlim(0.,15.)\nax.set_ylim(-25.,25.)\nax.set_xlabel('t (sec)')\nax.set_ylabel(r'$\\phi$')\nax.set_title(r'$\\phi_0 = 20$ degrees')\nline_exact, = ax.plot(t_pts, 180./np.pi * phi, label='exact')\nline_sa, = ax.plot(t_pts, 180./np.pi * phi_sa, label='small angle')\nax.legend()\n\n# save the figure\nfig.savefig('Taylor_prob_1.50.png', bbox_inches='tight')\n```\n\n### Bonus: repeat with widgets!\n\nThis actually generalizes problems 1.50 and 1.51 so that you can examine any angle in between. Use it to check your figure for 1.51.\n\n\n```python\nfrom ipywidgets import interact, fixed\nimport ipywidgets as widgets\n\ndef rad_to_deg(theta_rad):\n \"\"\"Take as input an angle in radians and return it in degrees.\"\"\"\n return 180./np.pi * theta_rad\n\ndef deg_to_rad(theta_deg):\n \"\"\"Take as input an angle in degrees and return it in radians.\"\"\"\n return np.pi/180. 
* theta_deg\n\n```\n\n\n```python\ndef plot_exact_and_small_angle(phi0_deg=0):\n phi0_rad = deg_to_rad(phi0_deg)\n t_pts, phi_rad, phidot, phi_sa_rad, phidot_sa = \\\n solve_for_phi_all(phi0_rad, t_max=15.)\n phi_deg = rad_to_deg(phi_rad)\n phi_sa_deg = rad_to_deg(phi_sa_rad)\n \n fig = plt.figure(figsize=(8,6))\n ax = fig.add_subplot(1,1,1)\n line_exact, = ax.plot(t_pts, phi_deg, label='exact')\n line_sa, = ax.plot(t_pts, phi_sa_deg, label='small angle')\n ax.legend()\n ax.set_xlim(0.,15.)\n #ax.set_ylim(-90.,90.)\n ax.set_xlabel('t (sec)')\n ax.set_ylabel(r'$\\phi$')\n ax.set_title(fr'$\\phi_0 = {phi0_deg:.0f}$')\n plt.show()\n\n```\n\n\n```python\ninteract(plot_exact_and_small_angle, phi0_deg=(0.,90.));\n```\n\n\n interactive(children=(FloatSlider(value=0.0, description='phi0_deg', max=90.0), Output()), _dom_classes=('widg\u2026\n\n\n\n```python\n# to avoid the jiggling and do some formatting\nphi0_deg_widget = widgets.FloatSlider(min=0., max=120.0, step=0.1, value=0.,\n description=r'$\\phi_0$ (degrees)',\n readout_format='.0f',\n continuous_update=False\n )\ninteract(plot_exact_and_small_angle, phi0_deg=phi0_deg_widget);\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "8be08486495ad67acb72439b75fd005445658ee7", "size": 134087, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "2020_week_1/Taylor_problem_1.50.ipynb", "max_stars_repo_name": "CLima86/Physics_5300_CDL", "max_stars_repo_head_hexsha": "d9e8ee0861d408a85b4be3adfc97e98afb4a1149", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "2020_week_1/Taylor_problem_1.50.ipynb", "max_issues_repo_name": "CLima86/Physics_5300_CDL", "max_issues_repo_head_hexsha": "d9e8ee0861d408a85b4be3adfc97e98afb4a1149", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "2020_week_1/Taylor_problem_1.50.ipynb", "max_forks_repo_name": "CLima86/Physics_5300_CDL", "max_forks_repo_head_hexsha": "d9e8ee0861d408a85b4be3adfc97e98afb4a1149", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 258.8552123552, "max_line_length": 37508, "alphanum_fraction": 0.9168823227, "converted": true, "num_tokens": 2499, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8976953003183443, "lm_q2_score": 0.9161096095812347, "lm_q1q2_score": 0.8223872910975476}} {"text": "# Workshop 7 optional material: Simulating dynamical systems\n\n\nPhysical systems that evolve in time can be easily (but not always quickly) simulated on a computer. All you need to know is the function that takes the system from its current state to a future state:\n\n$S_2 = f(S_1, dt)$\n\nIn the above equation, $S_1$ and $S_2$ represent the current and future states, respectively. $S_2$ is the state of the system after a short amount of time, $dt$. The function $f$, which encodes the dynamics of the system, often depends on the current state and the choice of $dt$. 
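\nA common concrete choice for $f$, and the one used in the examples below, is a single forward Euler step, in which the state follows its instantaneous rate of change for one short step:\n\n$$S_2 = S_1 + \\frac{dS}{dt} \\, dt$$\n\nwith the derivative evaluated at the current state $S_1$.\n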
Choosing a smaller time step, $dt$, generally improves the accuracy of any simulation.\n\nFor example, the position of a particle, $x(t)$, in one dimension can be written as a function of its velocity $v$:\n\n$\\frac{dx}{dt} = v$\n\nYou could find $x(t)$ using differential equation solving methods, or you could just simulate the motion of the particle. To do this, we can express the dynamics as:\n\n\\begin{align}\n\\ x_2 & = f(x_1, dt) \\\\\n\\ & = x_1 + v*dt \\\\\n\\end{align}\n\n\n```python\nimport matplotlib.pyplot as plt\n\nvelocity = 1.0\ndt = 0.01\n\ndef update(x, t):\n '''This is like the function f above; you should\n think of the \"state\" of the system as its position\n at some time. We update its state according to the\n dynamics of the system, and I prefer to also update\n the value of time in this same function--but this could\n also be done in the loop below.'''\n \n # update the state and time\n x = x + velocity*dt\n t = t + dt\n \n # return the updated values\n return x, t\n```\n\n\n```python\nx = 5 # initial x position\nt = 0 # initial time\n\n# Some lists to store the positions and times\nx_values = [x] # I like putting the initial values in the list\nt_values = [t]\n\n# This is where the actual \"simulation\" occurs; the update\n# function is used in a loop with however many iterations we want\nfor i in range(100):\n \n x, t = update(x, t)\n x_values.append(x)\n t_values.append(t)\n \n# Finally, some plotting to visualize the results\nplt.plot(x_values, t_values)\nplt.ylabel('position')\nplt.xlabel('time')\nplt.show()\n```\n\nWow! It's a straight line. You probably could've guessed that or used some fancy, built-in ODE solving libraries. But the point is, if you can write down an `update( )` function for some system, then you can simulate it--whether or not there's an analytical way to find a solution. \n\nThere's a chance that your capstone project is just a variant of this problem, so I don't want to steal away too many project ideas in this notebook. Once you start simulating a physical system, it's good to consider the accuracy of your simulated \"solution\". If you don't have an exact solution to compare against, you can adjust the chosen value of $dt$ and see how it changes the simulation.\n\n## Exercise 1 \n\nChange the code above to simulate a particle under constant acceleration in one dimension. The `update` function should now also update and return the velocity of the particle. Plot the position of the particle as a function of time using at least two different choices of time step $dt$. It may be helpful to include $dt$ as an input parameter to the `update` function so you can easily adjust $dt$ in your simulations.\n\n\n```python\n\n```\n\nStochastic systems (systems with some randomness) are another great use-case for this method. [Brownian motion](https://en.wikipedia.org/wiki/Brownian_motion) describes the random motion of a particle (like a piece of dust) in a fluid. The thermal motion of the fluid molecule bumps the dust particle around in a seemingly random, but well-described, process. (In theory you could also model this by keeping track of the motion of many tiny fluid particles colliding with each other, the walls of some container, and the large dust particle.) 
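\n\n(For reference, here is one possible sketch of the constant-acceleration update asked for in Exercise 1 above. The acceleration, time step, initial conditions and number of steps are arbitrary illustrative choices, not part of the exercise statement.)\n\n```python\n# One possible sketch for Exercise 1: constant acceleration in one dimension.\n# All numerical values here are arbitrary illustrative choices.\nacceleration = -9.8\n\ndef update(x, v, t, dt):\n    '''Advance position, velocity and time by one Euler step of size dt.'''\n    x = x + v*dt\n    v = v + acceleration*dt\n    t = t + dt\n    return x, v, t\n\nx, v, t = 0.0, 5.0, 0.0   # initial position, velocity and time\nx_values, t_values = [x], [t]\n\nfor i in range(100):\n    x, v, t = update(x, v, t, 0.01)\n    x_values.append(x)\n    t_values.append(t)\n\nplt.plot(t_values, x_values)\nplt.xlabel('time')\nplt.ylabel('position')\nplt.show()\n```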
\n\nBelow, I've written down an update function for the position of a particle undergoing Brownian motion.\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndef update(x, t, dt):\n \n x = x + np.sqrt(dt)*np.random.randn()\n t = t + dt\n \n return x, t\n```\n\n\n```python\nx = 0\nt = 0\nx_values = [x]\nt_values = [t]\n\nfor i in range(100):\n \n x, t = update(x, t, 0.1)\n \n x_values.append(x)\n t_values.append(t)\n \nplt.plot(t_values, x_values)\nplt.show()\n```\n\n## Exercise 2\n\nRepeat the Brownian motion simulation many (~1000) times and plot the distribution of final positions. The mean of the distribution should be zero, but there is some variance to this distribution. How does the variance and/or standard deviation of this distribution scale with time? \n\n\n```python\n\n```\n\n## Viral spreading\n\n(Very relevant in these times of covid...)\n\nFor a more complicated example, here's a simulation of viral spreading following the mathematics of [this paper](https://www.pnas.org/content/pnas/111/46/E4911.full.pdf). You can play with the parameter $\\mu$ (mu) to see how it controls the nature of the spreading.\n\n\n```python\nimport numpy as np\n\nprob = 0.4 # Probability of infection\nL = 2000 # Size of grid\n\n# The distribution from which \"jumps\" are drawn\ndef jump(y, mu, L, C):\n return np.power((y*(np.power(L, -mu) - np.power(C, -mu)) + np.power(C, -mu)), -1/mu)\n\n# function that updates the state of the population and list of infected individuals\ndef update(population, infected, mu, L, C):\n for inf in infected: \n if np.random.random() > prob:\n start_x = inf[0] \n start_y = inf[1] \n angle = np.random.uniform(0, 2*np.pi)\n dist = jump(np.random.random(), mu, L, C)\n end_x = int(start_x + np.cos(angle)*dist) % L\n end_y = int(start_y + np.sin(angle)*dist) % L\n if population[end_x, end_y] == 0:\n population[end_x, end_y] = 1\n infected.append([end_x, end_y])\n \n return population, infected\n\n```\n\n\n```python\npopulation = np.zeros((int(L),int(L)))\npopulation[1000,1000] = 1\n\ninfected = [[1000,1000]]\n \nmu = 1.8\nC = 1.5\npending = []\n \nts = []\nrs = []\npopulations = np.zeros((30, int(L), int(L)))\n\n# The simulation -- each iteration is more computationally expensive than the last,\n# so be careful changing the number of iterations\nfor i in range(26):\n populations[i] = population\n population, infected = update(population, infected, mu, L, C)\n```\n\n\n```python\nimport random\nimport numpy as np\nimport matplotlib\nimport matplotlib.pyplot as plt\nfrom matplotlib import cm,rc\nfrom matplotlib.colors import ListedColormap, LinearSegmentedColormap\n\ndark = cm.get_cmap('Dark2_r', 256)\nnewcolors = dark(np.linspace(0, 1, 256))\nwhite = np.array([1,1,1,1])\nnewcolors[:25, :] = white\nnewcmp = ListedColormap(newcolors)\n\n\n\nfig = plt.figure(figsize=(8,8))\n\na = populations[25] # You can change this number to get an earlier/later state of the population\n\nim = plt.imshow(a, interpolation='none', aspect='auto', vmin=0, vmax=1, cmap = newcmp)\nplt.show()\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "096306665433d24b8bcda30006b1a8a632137abe", "size": 10423, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Week09/WS07/Workshop07_optional.ipynb", "max_stars_repo_name": "ds-connectors/Physics-88-SP22", "max_stars_repo_head_hexsha": "5f3a74cbd50063fd09b91c28da534176968bb3f2", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, 
"max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Week09/WS07/Workshop07_optional.ipynb", "max_issues_repo_name": "ds-connectors/Physics-88-SP22", "max_issues_repo_head_hexsha": "5f3a74cbd50063fd09b91c28da534176968bb3f2", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Week09/WS07/Workshop07_optional.ipynb", "max_forks_repo_name": "ds-connectors/Physics-88-SP22", "max_forks_repo_head_hexsha": "5f3a74cbd50063fd09b91c28da534176968bb3f2", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.7313915858, "max_line_length": 556, "alphanum_fraction": 0.5726758131, "converted": true, "num_tokens": 1784, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9161096204605946, "lm_q2_score": 0.8976952893703477, "lm_q1q2_score": 0.8223872908343329}} {"text": "# Assignment 1\n\nThe goal of this assignment is to supply you with machine learning models and algorithms. In this notebook, we will cover linear and nonlinear models, the concept of loss functions and some optimization techniques. All mathematical operations should be implemented in **NumPy** only. \n\n\n## Table of contents\n* [1. Logistic Regression](#1.-Logistic-Regression)\n * [1.1 Linear Mapping](#1.1-Linear-Mapping)\n * [1.2 Sigmoid](#1.2-Sigmoid)\n * [1.3 Negative Log Likelihood](#1.3-Negative-Log-Likelihood)\n * [1.4 Model](#1.4-Model)\n * [1.5 Simple Experiment](#1.5-Simple-Experiment)\n* [2. Decision Tree](#2.-Decision-Tree)\n * [2.1 Gini Index & Data Split](#2.1-Gini-Index-&-Data-Split)\n * [2.2 Terminal Node](#2.2-Terminal-Node)\n * [2.3 Build the Decision Tree](#2.3-Build-the-Decision-Tree)\n* [3. Experiments](#3.-Experiments)\n * [3.1 Decision Tree for Heart Disease Prediction](#3.1-Decision-Tree-for-Heart-Disease-Prediction) \n * [3.2 Logistic Regression for Heart Disease Prediction](#3.2-Logistic-Regression-for-Heart-Disease-Prediction)\n\n### Note\nSome of the concepts below have not (yet) been discussed during the lecture. These will be discussed further during the next lectures. \n\n### Before you begin\n\nTo check whether the code you've written is correct, we'll use **automark**. For this, we created for each of you an account with the username being your student number. \n\n\n```python\nimport automark as am\n\n# fill in you student number as your username\nusername = '13060775'\n\n# to check your progress, you can run this function\nam.get_progress(username)\n```\n\n ---------------------------------------------\n | Carlo Harprecht |\n | carlo.harprecht@student.uva.nl |\n ---------------------------------------------\n | linear_forward | completed |\n | linear_grad_W | completed |\n | linear_grad_b | completed |\n | nll_forward | completed |\n | nll_grad_input | completed |\n | sigmoid_forward | completed |\n | sigmoid_grad_input | completed |\n | tree_gini_index | completed |\n | tree_split_data_left | completed |\n | tree_split_data_right | completed |\n | tree_to_terminal | completed |\n ---------------------------------------------\n\n\nSo far all your tests are 'not attempted'. At the end of this notebook you'll need to have completed all test. The output of `am.get_progress(username)` should at least match the example below. 
However, we encourage you to take a shot at the 'not attempted' tests!\n\n```\n---------------------------------------------\n| Your name / student number |\n| your_email@your_domain.whatever |\n---------------------------------------------\n| linear_forward | not attempted |\n| linear_grad_W | not attempted |\n| linear_grad_b | not attempted |\n| nll_forward | not attempted |\n| nll_grad_input | not attempted |\n| sigmoid_forward | not attempted |\n| sigmoid_grad_input | not attempted |\n| tree_data_split_left | not attempted |\n| tree_data_split_right | not attempted |\n| tree_gini_index | not attempted |\n| tree_to_terminal | not attempted |\n---------------------------------------------\n```\n\n\n```python\nfrom __future__ import print_function, absolute_import, division # You don't need to know what this is. \nimport numpy as np # this imports numpy, which is used for vector- and matrix calculations\n```\n\nThis notebook makes use of **classes** and their **instances** that we have already implemented for you. It allows us to write less code and make it more readable. If you are interested in it, here are some useful links:\n* The official [documentation](https://docs.python.org/3/tutorial/classes.html) \n* Video by *sentdex*: [Object Oriented Programming Introduction](https://www.youtube.com/watch?v=ekA6hvk-8H8)\n* Antipatterns in OOP: [Stop Writing Classes](https://www.youtube.com/watch?v=o9pEzgHorH0)\n\n# 1. Logistic Regression\n\nWe start with a very simple algorithm called **Logistic Regression**. It is a generalized linear model for 2-class classification.\nIt can be generalized to the case of many classes and to non-linear cases as well. However, here we consider only the simplest case. \n\nLet us consider a data with 2 classes. Class 0 and class 1. For a given test sample, logistic regression returns a value from $[0, 1]$ which is interpreted as a probability of belonging to class 1. The set of points for which the prediction is $0.5$ is called a *decision boundary*. It is a line on a plane or a hyper-plane in a space.\n\n\n\nLogistic regression has two trainable parameters: a weight $W$ and a bias $b$. For a vector of features $X$, the prediction of logistic regression is given by\n\n$$\nf(X) = \\frac{1}{1 + \\exp(-[XW + b])} = \\sigma(h(X))\n$$\nwhere $\\sigma(z) = \\frac{1}{1 + \\exp(-z)}$ and $h(X)=XW + b$.\n\nParameters $W$ and $b$ are fitted by maximizing the log-likelihood (or minimizing the negative log-likelihood) of the model on the training data. For a training subset $\\{X_j, Y_j\\}_{j=1}^N$ the normalized negative log likelihood (NLL) is given by \n\n$$\n\\mathcal{L} = -\\frac{1}{N}\\sum_j \\log\\Big[ f(X_j)^{Y_j} \\cdot (1-f(X_j))^{1-Y_j}\\Big]\n= -\\frac{1}{N}\\sum_j \\Big[ Y_j\\log f(X_j) + (1-Y_j)\\log(1-f(X_j))\\Big]\n$$\n\nThere are different ways of fitting this model. In this assignment we consider Logistic Regression as a one-layer neural network. We use the following algorithm for the **forward** pass:\n\n1. Linear mapping: $h=XW + b$\n2. Sigmoid activation function: $f=\\sigma(h)$\n3. Calculation of NLL: $\\mathcal{L} = -\\frac{1}{N}\\sum_j \\Big[ Y_j\\log f_j + (1-Y_j)\\log(1-f_j)\\Big]$\n\nIn order to fit $W$ and $b$ we perform Gradient Descent ([GD](https://en.wikipedia.org/wiki/Gradient_descent)). 
We choose a small learning rate $\\gamma$ and after each computation of forward pass, we update the parameters \n\n$$W_{\\text{new}} = W_{\\text{old}} - \\gamma \\frac{\\partial \\mathcal{L}}{\\partial W}$$\n\n$$b_{\\text{new}} = b_{\\text{old}} - \\gamma \\frac{\\partial \\mathcal{L}}{\\partial b}$$\n\nWe use Backpropagation method ([BP](https://en.wikipedia.org/wiki/Backpropagation)) to calculate the partial derivatives of the loss function with respect to the parameters of the model.\n\n$$\n\\frac{\\partial\\mathcal{L}}{\\partial W} = \n\\frac{\\partial\\mathcal{L}}{\\partial h} \\frac{\\partial h}{\\partial W} =\n\\frac{\\partial\\mathcal{L}}{\\partial f} \\frac{\\partial f}{\\partial h} \\frac{\\partial h}{\\partial W}\n$$\n\n$$\n\\frac{\\partial\\mathcal{L}}{\\partial b} = \n\\frac{\\partial\\mathcal{L}}{\\partial h} \\frac{\\partial h}{\\partial b} =\n\\frac{\\partial\\mathcal{L}}{\\partial f} \\frac{\\partial f}{\\partial h} \\frac{\\partial h}{\\partial b}\n$$\n\n## 1.1 Linear Mapping\nFirst of all, you need to implement the forward pass of a linear mapping:\n$$\nh(X) = XW +b\n$$\n\n**Note**: here we use `n_out` as the dimensionality of the output. For logisitc regression `n_out = 1`. However, we will work with cases of `n_out > 1` in next assignments. You will **pass** the current assignment even if your implementation works only in case `n_out = 1`. If your implementation works for the cases of `n_out > 1` then you will not have to modify your method next week. All **numpy** operations are generic. It is recommended to use numpy when is it possible.\n\n\n```python\ndef linear_forward(x_input, W, b):\n \"\"\"Perform the mapping of the input\n # Arguments\n x_input: input of the linear function - np.array of size `(n_objects, n_in)`\n W: np.array of size `(n_in, n_out)`\n b: np.array of size `(n_out,)`\n # Output\n the output of the linear function \n np.array of size `(n_objects, n_out)`\n \"\"\"\n output = np.dot(x_input,W) + b\n \n return output\n```\n\nLet's check your first function. We set the matrices $X, W, b$:\n$$\nX = \\begin{bmatrix}\n1 & -1 \\\\\n-1 & 0 \\\\\n1 & 1 \\\\\n\\end{bmatrix} \\quad\nW = \\begin{bmatrix}\n4 \\\\\n2 \\\\\n\\end{bmatrix} \\quad\nb = \\begin{bmatrix}\n3 \\\\\n\\end{bmatrix}\n$$\n\nAnd then compute \n$$\nXW = \\begin{bmatrix}\n1 & -1 \\\\\n-1 & 0 \\\\\n1 & 1 \\\\\n\\end{bmatrix}\n\\begin{bmatrix}\n4 \\\\\n2 \\\\\n\\end{bmatrix} =\n\\begin{bmatrix}\n2 \\\\\n-4 \\\\\n6 \\\\\n\\end{bmatrix} \\\\\nXW + b = \n\\begin{bmatrix}\n5 \\\\\n-1 \\\\\n9 \\\\\n\\end{bmatrix} \n$$\n\n\n```python\nX_test = np.array([[1, -1],\n [-1, 0],\n [1, 1]])\n\nW_test = np.array([[4],\n [2]])\n\nb_test = np.array([3])\n\nh_test = linear_forward(X_test, W_test, b_test)\nprint(h_test)\n```\n\n [[ 5]\n [-1]\n [ 9]]\n\n\n\n```python\nam.test_student_function(username, linear_forward, ['x_input', 'W', 'b'])\n```\n\n Running local tests...\n linear_forward successfully passed local tests\n Running remote test...\n Test was successful. Congratulations!\n\n\nNow you need to implement the calculation of the partial derivative of the loss function with respect to the parameters of the model. 
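\n(A practical tip that is not required by the tests: whenever you implement an analytic gradient, you can sanity-check it against a finite-difference estimate. The sketch below is only illustrative; `loss_of_W` is a placeholder for any scalar-valued function of `W` built from the forward pass, with the other inputs held fixed.)\n\n```python\ndef numerical_grad_W(loss_of_W, W, eps=1e-6):\n    '''Finite-difference estimate of the gradient of a scalar loss with respect to W.'''\n    W = W.astype(np.float_)\n    grad = np.zeros_like(W)\n    for idx in np.ndindex(*W.shape):\n        W_plus, W_minus = W.copy(), W.copy()\n        W_plus[idx] += eps\n        W_minus[idx] -= eps\n        grad[idx] = (loss_of_W(W_plus) - loss_of_W(W_minus)) / (2 * eps)\n    return grad\n```\n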
As this expressions are used for the updates of the parameters, we refer to them as gradients.\n$$\n\\frac{\\partial \\mathcal{L}}{\\partial W} = \n\\frac{\\partial \\mathcal{L}}{\\partial h}\n\\frac{\\partial h}{\\partial W} \\\\\n\\frac{\\partial \\mathcal{L}}{\\partial b} = \n\\frac{\\partial \\mathcal{L}}{\\partial h}\n\\frac{\\partial h}{\\partial b} \\\\\n$$\n\n\n```python\ndef linear_grad_W(x_input, grad_output, W, b):\n \"\"\"Calculate the partial derivative of \n the loss with respect to W parameter of the function\n dL / dW = (dL / dh) * (dh / dW)\n # Arguments\n x_input: input of a dense layer - np.array of size `(n_objects, n_in)`\n grad_output: partial derivative of the loss functions with \n respect to the ouput of the dense layer (dL / dh)\n np.array of size `(n_objects, n_out)`\n W: np.array of size `(n_in, n_out)`\n b: np.array of size `(n_out,)`\n # Output\n the partial derivative of the loss \n with respect to W parameter of the function\n np.array of size `(n_in, n_out)`\n \"\"\"\n grad_W = np.dot(x_input.T,grad_output)\n \n return grad_W\n```\n\n\n```python\nam.test_student_function(username, linear_grad_W, ['x_input', 'grad_output', 'W', 'b'])\n```\n\n Running local tests...\n linear_grad_W successfully passed local tests\n Running remote test...\n Test was successful. Congratulations!\n\n\n\n```python\ndef linear_grad_b(x_input, grad_output, W, b):\n \"\"\"Calculate the partial derivative of \n the loss with respect to b parameter of the function\n dL / db = (dL / dh) * (dh / db)\n # Arguments\n x_input: input of a dense layer - np.array of size `(n_objects, n_in)`\n grad_output: partial derivative of the loss functions with \n respect to the ouput of the linear function (dL / dh)\n np.array of size `(n_objects, n_out)`\n W: np.array of size `(n_in, n_out)`\n b: np.array of size `(n_out,)`\n # Output\n the partial derivative of the loss \n with respect to b parameter of the linear function\n np.array of size `(n_out,)`\n \"\"\"\n grad_b = np.sum(grad_output)\n \n return grad_b\n```\n\n\n```python\nam.test_student_function(username, linear_grad_b, ['x_input', 'grad_output', 'W', 'b'])\n```\n\n Running local tests...\n linear_grad_b successfully passed local tests\n Running remote test...\n Test was successful. Congratulations!\n\n\n\n```python\nam.get_progress(username)\n```\n\n ---------------------------------------------\n | Carlo Harprecht |\n | carlo.harprecht@student.uva.nl |\n ---------------------------------------------\n | linear_forward | completed |\n | linear_grad_W | completed |\n | linear_grad_b | completed |\n | nll_forward | completed |\n | nll_grad_input | completed |\n | sigmoid_forward | completed |\n | sigmoid_grad_input | completed |\n | tree_gini_index | completed |\n | tree_split_data_left | completed |\n | tree_split_data_right | completed |\n | tree_to_terminal | not attempted |\n ---------------------------------------------\n\n\n## 1.2 Sigmoid\n$$\nf = \\sigma(h) = \\frac{1}{1 + e^{-h}} \n$$\n\nSigmoid function is applied element-wise. 
It does not change the dimensionality of the tensor and its implementation is shape-agnostic in general.\n\n\n```python\ndef sigmoid_forward(x_input):\n \"\"\"sigmoid nonlinearity\n # Arguments\n x_input: np.array of size `(n_objects, n_in)`\n # Output\n the output of relu layer\n np.array of size `(n_objects, n_in)`\n \"\"\"\n output = 1/(1+np.e**(-x_input))\n return output\n```\n\n\n```python\nam.test_student_function(username, sigmoid_forward, ['x_input'])\n```\n\n Running local tests...\n sigmoid_forward successfully passed local tests\n Running remote test...\n Test was successful. Congratulations!\n\n\nNow you need to implement the calculation of the partial derivative of the loss function with respect to the input of sigmoid. \n\n$$\n\\frac{\\partial \\mathcal{L}}{\\partial h} = \n\\frac{\\partial \\mathcal{L}}{\\partial f}\n\\frac{\\partial f}{\\partial h} \n$$\n\nTensor $\\frac{\\partial \\mathcal{L}}{\\partial f}$ comes from the loss function. Let's calculate $\\frac{\\partial f}{\\partial h}$\n\n$$\n\\frac{\\partial f}{\\partial h} = \n\\frac{\\partial \\sigma(h)}{\\partial h} =\n\\frac{\\partial}{\\partial h} \\Big(\\frac{1}{1 + e^{-h}}\\Big)\n= \\frac{e^{-h}}{(1 + e^{-h})^2}\n= \\frac{1}{1 + e^{-h}} \\frac{e^{-h}}{1 + e^{-h}}\n= f(h) (1 - f(h))\n$$\n\nTherefore, in order to calculate the gradient of the loss with respect to the input of sigmoid function you need \nto \n1. calculate $f(h) (1 - f(h))$ \n2. multiply it element-wise by $\\frac{\\partial \\mathcal{L}}{\\partial f}$\n\n\n```python\ndef sigmoid_grad_input(x_input, grad_output):\n \"\"\"sigmoid nonlinearity gradient. \n Calculate the partial derivative of the loss \n with respect to the input of the layer\n # Arguments\n x_input: np.array of size `(n_objects, n_in)`\n grad_output: np.array of size `(n_objects, n_in)` \n dL / df\n # Output\n the partial derivative of the loss \n with respect to the input of the function\n np.array of size `(n_objects, n_in)` \n dL / dh\n \"\"\"\n f_h = (1/(1+np.e**(-x_input)))*(1-(1/(1+np.e**(-x_input))))\n grad_input = f_h * grad_output\n return grad_input\n```\n\n\n```python\nam.test_student_function(username, sigmoid_grad_input, ['x_input', 'grad_output'])\n```\n\n Running local tests...\n sigmoid_grad_input successfully passed local tests\n Running remote test...\n Test was successful. Congratulations!\n\n\n## 1.3 Negative Log Likelihood\n\n$$\n\\mathcal{L} \n= -\\frac{1}{N}\\sum_j \\Big[ Y_j\\log \\dot{Y}_j + (1-Y_j)\\log(1-\\dot{Y}_j)\\Big]\n$$\n\nHere $N$ is the number of objects. $Y_j$ is the real label of an object and $\\dot{Y}_j$ is the predicted one.\n\n\n```python\ndef nll_forward(target_pred, target_true):\n \"\"\"Compute the value of NLL\n for a given prediction and the ground truth\n # Arguments\n target_pred: predictions - np.array of size `(n_objects, 1)`\n target_true: ground truth - np.array of size `(n_objects, 1)`\n # Output\n the value of NLL for a given prediction and the ground truth\n scalar\n \"\"\"\n output = -(1/len(target_pred))*np.sum(target_true*np.log(target_pred) + (1-target_true)*np.log(1-target_pred)) \n return output\n```\n\n\n```python\nam.test_student_function(username, nll_forward, ['target_pred', 'target_true'])\n```\n\n Running local tests...\n nll_forward successfully passed local tests\n Running remote test...\n Test was successful. 
Congratulations!\n\n\nNow you need to calculate the partial derivative of NLL with with respect to its input.\n\n$$\n\\frac{\\partial \\mathcal{L}}{\\partial \\dot{Y}}\n=\n\\begin{pmatrix}\n\\frac{\\partial \\mathcal{L}}{\\partial \\dot{Y}_0} \\\\\n\\frac{\\partial \\mathcal{L}}{\\partial \\dot{Y}_1} \\\\\n\\vdots \\\\\n\\frac{\\partial \\mathcal{L}}{\\partial \\dot{Y}_N}\n\\end{pmatrix}\n$$\n\nLet's do it step-by-step\n\n\\begin{equation}\n\\begin{split}\n\\frac{\\partial \\mathcal{L}}{\\partial \\dot{Y}_0} \n&= \\frac{\\partial}{\\partial \\dot{Y}_0} \\Big(-\\frac{1}{N}\\sum_j \\Big[ Y_j\\log \\dot{Y}_j + (1-Y_j)\\log(1-\\dot{Y}_j)\\Big]\\Big) \\\\\n&= -\\frac{1}{N} \\frac{\\partial}{\\partial \\dot{Y}_0} \\Big(Y_0\\log \\dot{Y}_0 + (1-Y_0)\\log(1-\\dot{Y}_0)\\Big) \\\\\n&= -\\frac{1}{N} \\Big(\\frac{Y_0}{\\dot{Y}_0} - \\frac{1-Y_0}{1-\\dot{Y}_0}\\Big)\n= \\frac{1}{N} \\frac{\\dot{Y}_0 - Y_0}{\\dot{Y}_0 (1 - \\dot{Y}_0)}\n\\end{split}\n\\end{equation}\n\nAnd for the other components it can be done in exactly the same way. So the result is the vector where each component is given by \n$$\\frac{1}{N} \\frac{\\dot{Y}_j - Y_j}{\\dot{Y}_j (1 - \\dot{Y}_j)}$$\n\nOr if we assume all multiplications and divisions to be done element-wise the output can be calculated as\n$$\n\\frac{\\partial \\mathcal{L}}{\\partial \\dot{Y}} = \\frac{1}{N} \\frac{\\dot{Y} - Y}{\\dot{Y} (1 - \\dot{Y})}\n$$\n\n\n```python\ndef nll_grad_input(target_pred, target_true):\n \"\"\"Compute the partial derivative of NLL\n with respect to its input\n # Arguments\n target_pred: predictions - np.array of size `(n_objects, 1)`\n target_true: ground truth - np.array of size `(n_objects, 1)`\n # Output\n the partial derivative \n of NLL with respect to its input\n np.array of size `(n_objects, 1)`\n \"\"\"\n grad_input = (1/len(target_pred))*((target_pred-target_true)/(target_pred*(1-target_pred))) \n return grad_input\n```\n\n\n```python\nam.test_student_function(username, nll_grad_input, ['target_pred', 'target_true'])\n```\n\n Running local tests...\n nll_grad_input successfully passed local tests\n Running remote test...\n Test was successful. Congratulations!\n\n\n\n```python\nam.get_progress(username)\n```\n\n ---------------------------------------------\n | Carlo Harprecht |\n | carlo.harprecht@student.uva.nl |\n ---------------------------------------------\n | linear_forward | completed |\n | linear_grad_W | completed |\n | linear_grad_b | completed |\n | nll_forward | completed |\n | nll_grad_input | completed |\n | sigmoid_forward | not attempted |\n | sigmoid_grad_input | completed |\n | tree_gini_index | not attempted |\n | tree_split_data_left | not attempted |\n | tree_split_data_right | not attempted |\n | tree_to_terminal | not attempted |\n ---------------------------------------------\n\n\n## 1.4 Model\n\nHere we provide a model for your. 
It consist of the function which you have implmeneted above\n\n\n```python\nclass LogsticRegressionGD(object):\n \n def __init__(self, n_in, lr=0.05):\n super().__init__()\n self.lr = lr\n self.b = np.zeros(1, )\n self.W = np.random.randn(n_in, 1)\n \n def forward(self, x):\n self.h = linear_forward(x, self.W, self.b)\n y = sigmoid_forward(self.h)\n return y\n \n def update_params(self, x, nll_grad):\n # compute gradients\n grad_h = sigmoid_grad_input(self.h, nll_grad)\n grad_W = linear_grad_W(x, grad_h, self.W, self.b)\n grad_b = linear_grad_b(x, grad_h, self.W, self.b)\n # update params\n self.W = self.W - self.lr * grad_W\n self.b = self.b - self.lr * grad_b\n```\n\n## 1.5 Simple Experiment\n\n\n```python\nimport matplotlib.pyplot as plt\n%matplotlib inline\n```\n\n\n```python\n# Generate some data\ndef generate_2_circles(N=100):\n phi = np.linspace(0.0, np.pi * 2, 100)\n X1 = 1.1 * np.array([np.sin(phi), np.cos(phi)])\n X2 = 3.0 * np.array([np.sin(phi), np.cos(phi)])\n Y = np.concatenate([np.ones(N), np.zeros(N)]).reshape((-1, 1))\n X = np.hstack([X1,X2]).T\n return X, Y\n\n\ndef generate_2_gaussians(N=100):\n phi = np.linspace(0.0, np.pi * 2, 100)\n X1 = np.random.normal(loc=[1, 2], scale=[2.5, 0.9], size=(N, 2))\n X1 = X1 @ np.array([[0.7, -0.7], [0.7, 0.7]])\n X2 = np.random.normal(loc=[-2, 0], scale=[1, 1.5], size=(N, 2))\n X2 = X2 @ np.array([[0.7, 0.7], [-0.7, 0.7]])\n Y = np.concatenate([np.ones(N), np.zeros(N)]).reshape((-1, 1))\n X = np.vstack([X1,X2])\n return X, Y\n\ndef split(X, Y, train_ratio=0.7):\n size = len(X)\n train_size = int(size * train_ratio)\n indices = np.arange(size)\n np.random.shuffle(indices)\n train_indices = indices[:train_size]\n test_indices = indices[train_size:]\n return X[train_indices], Y[train_indices], X[test_indices], Y[test_indices]\n\nf, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))\n\n\nX, Y = generate_2_circles()\nax1.scatter(X[:,0], X[:,1], c=Y.ravel(), edgecolors= 'none')\nax1.set_aspect('equal')\n\n\nX, Y = generate_2_gaussians()\nax2.scatter(X[:,0], X[:,1], c=Y.ravel(), edgecolors= 'none')\nax2.set_aspect('equal')\n\n```\n\n\n```python\nX_train, Y_train, X_test, Y_test = split(*generate_2_gaussians(), 0.7)\n```\n\n\n```python\n# let's train our model\nmodel = LogsticRegressionGD(2, 0.05)\n\nfor step in range(30):\n Y_pred = model.forward(X_train)\n \n loss_value = nll_forward(Y_pred, Y_train)\n accuracy = ((Y_pred > 0.5) == Y_train).mean()\n print('Step: {} \\t Loss: {:.3f} \\t Acc: {:.1f}%'.format(step, loss_value, accuracy * 100))\n \n loss_grad = nll_grad_input(Y_pred, Y_train)\n model.update_params(X_train, loss_grad)\n\n \nprint('\\n\\nTesting...')\nY_test_pred = model.forward(X_test)\ntest_accuracy = ((Y_test_pred > 0.5) == Y_test).mean()\nprint('Acc: {:.1f}%'.format(test_accuracy * 100))\n```\n\n Step: 0 \t Loss: 0.546 \t Acc: 73.6%\n Step: 1 \t Loss: 0.528 \t Acc: 73.6%\n Step: 2 \t Loss: 0.511 \t Acc: 75.0%\n Step: 3 \t Loss: 0.495 \t Acc: 75.7%\n Step: 4 \t Loss: 0.480 \t Acc: 75.7%\n Step: 5 \t Loss: 0.465 \t Acc: 77.1%\n Step: 6 \t Loss: 0.452 \t Acc: 77.9%\n Step: 7 \t Loss: 0.439 \t Acc: 77.9%\n Step: 8 \t Loss: 0.426 \t Acc: 80.0%\n Step: 9 \t Loss: 0.415 \t Acc: 81.4%\n Step: 10 \t Loss: 0.403 \t Acc: 81.4%\n Step: 11 \t Loss: 0.393 \t Acc: 81.4%\n Step: 12 \t Loss: 0.383 \t Acc: 82.1%\n Step: 13 \t Loss: 0.373 \t Acc: 84.3%\n Step: 14 \t Loss: 0.364 \t Acc: 84.3%\n Step: 15 \t Loss: 0.355 \t Acc: 84.3%\n Step: 16 \t Loss: 0.347 \t Acc: 84.3%\n Step: 17 \t Loss: 0.339 \t Acc: 85.7%\n Step: 18 \t Loss: 0.331 \t Acc: 
85.7%\n Step: 19 \t Loss: 0.324 \t Acc: 86.4%\n Step: 20 \t Loss: 0.317 \t Acc: 86.4%\n Step: 21 \t Loss: 0.310 \t Acc: 87.1%\n Step: 22 \t Loss: 0.303 \t Acc: 87.9%\n Step: 23 \t Loss: 0.297 \t Acc: 87.9%\n Step: 24 \t Loss: 0.291 \t Acc: 89.3%\n Step: 25 \t Loss: 0.286 \t Acc: 89.3%\n Step: 26 \t Loss: 0.280 \t Acc: 89.3%\n Step: 27 \t Loss: 0.275 \t Acc: 89.3%\n Step: 28 \t Loss: 0.270 \t Acc: 89.3%\n Step: 29 \t Loss: 0.265 \t Acc: 90.7%\n \n \n Testing...\n Acc: 88.3%\n\n\n\n```python\ndef plot_model_prediction(prediction_func, X, Y, hard=True):\n u_min = X[:, 0].min()-1\n u_max = X[:, 0].max()+1\n v_min = X[:, 1].min()-1\n v_max = X[:, 1].max()+1\n\n U, V = np.meshgrid(np.linspace(u_min, u_max, 100), np.linspace(v_min, v_max, 100))\n UV = np.stack([U.ravel(), V.ravel()]).T\n c = prediction_func(UV).ravel()\n if hard:\n c = c > 0.5\n plt.scatter(UV[:,0], UV[:,1], c=c, edgecolors= 'none', alpha=0.15)\n plt.scatter(X[:,0], X[:,1], c=Y.ravel(), edgecolors= 'black')\n plt.xlim(left=u_min, right=u_max)\n plt.ylim(bottom=v_min, top=v_max)\n plt.axes().set_aspect('equal')\n plt.show()\n \nplot_model_prediction(lambda x: model.forward(x), X_train, Y_train, False)\n\nplot_model_prediction(lambda x: model.forward(x), X_train, Y_train, True)\n```\n\n\n```python\nX_train, Y_train, X_test, Y_test = split(*generate_2_circles(), 0.7)\n```\n\n\n```python\n# let's train our model\nmodel = LogsticRegressionGD(2, 0.05)\n\nfor step in range(30):\n Y_pred = model.forward(X_train)\n \n loss_value = nll_forward(Y_pred, Y_train)\n accuracy = ((Y_pred > 0.5) == Y_train).mean()\n print('Step: {} \\t Loss: {:.3f} \\t Acc: {:.1f}%'.format(step, loss_value, accuracy * 100))\n \n loss_grad = nll_grad_input(Y_pred, Y_train)\n model.update_params(X_train, loss_grad)\n\n \nprint('\\n\\nTesting...')\nY_test_pred = model.forward(X_test)\ntest_accuracy = ((Y_test_pred > 0.5) == Y_test).mean()\nprint('Acc: {:.1f}%'.format(test_accuracy * 100))\n```\n\n Step: 0 \t Loss: 1.135 \t Acc: 53.6%\n Step: 1 \t Loss: 1.123 \t Acc: 53.6%\n Step: 2 \t Loss: 1.111 \t Acc: 53.6%\n Step: 3 \t Loss: 1.099 \t Acc: 53.6%\n Step: 4 \t Loss: 1.087 \t Acc: 53.6%\n Step: 5 \t Loss: 1.075 \t Acc: 53.6%\n Step: 6 \t Loss: 1.064 \t Acc: 53.6%\n Step: 7 \t Loss: 1.053 \t Acc: 53.6%\n Step: 8 \t Loss: 1.041 \t Acc: 53.6%\n Step: 9 \t Loss: 1.031 \t Acc: 53.6%\n Step: 10 \t Loss: 1.020 \t Acc: 53.6%\n Step: 11 \t Loss: 1.009 \t Acc: 53.6%\n Step: 12 \t Loss: 0.999 \t Acc: 53.6%\n Step: 13 \t Loss: 0.989 \t Acc: 53.6%\n Step: 14 \t Loss: 0.978 \t Acc: 53.6%\n Step: 15 \t Loss: 0.969 \t Acc: 53.6%\n Step: 16 \t Loss: 0.959 \t Acc: 53.6%\n Step: 17 \t Loss: 0.949 \t Acc: 53.6%\n Step: 18 \t Loss: 0.940 \t Acc: 53.6%\n Step: 19 \t Loss: 0.931 \t Acc: 53.6%\n Step: 20 \t Loss: 0.922 \t Acc: 53.6%\n Step: 21 \t Loss: 0.913 \t Acc: 53.6%\n Step: 22 \t Loss: 0.905 \t Acc: 53.6%\n Step: 23 \t Loss: 0.897 \t Acc: 53.6%\n Step: 24 \t Loss: 0.889 \t Acc: 53.6%\n Step: 25 \t Loss: 0.881 \t Acc: 53.6%\n Step: 26 \t Loss: 0.873 \t Acc: 53.6%\n Step: 27 \t Loss: 0.866 \t Acc: 53.6%\n Step: 28 \t Loss: 0.858 \t Acc: 53.6%\n Step: 29 \t Loss: 0.851 \t Acc: 53.6%\n \n \n Testing...\n Acc: 40.0%\n\n\n\n```python\ndef plot_model_prediction(prediction_func, X, Y, hard=True):\n u_min = X[:, 0].min()-1\n u_max = X[:, 0].max()+1\n v_min = X[:, 1].min()-1\n v_max = X[:, 1].max()+1\n\n U, V = np.meshgrid(np.linspace(u_min, u_max, 100), np.linspace(v_min, v_max, 100))\n UV = np.stack([U.ravel(), V.ravel()]).T\n c = prediction_func(UV).ravel()\n if hard:\n c = c > 0.5\n 
plt.scatter(UV[:,0], UV[:,1], c=c, edgecolors= 'none', alpha=0.15)\n plt.scatter(X[:,0], X[:,1], c=Y.ravel(), edgecolors= 'black')\n plt.xlim(left=u_min, right=u_max)\n plt.ylim(bottom=v_min, top=v_max)\n plt.axes().set_aspect('equal')\n plt.show()\n \nplot_model_prediction(lambda x: model.forward(x), X_train, Y_train, False)\n\nplot_model_prediction(lambda x: model.forward(x), X_train, Y_train, True)\n```\n\n# 2. Decision Tree\nThe next model we look at is called **Decision Tree**. This type of model is non-parametric, meaning that, in contrast to **Logistic Regression**, we do not have any parameters here that need to be trained.\n\nLet us consider a simple binary decision tree for deciding on the two classes of \"creditable\" and \"Not creditable\".\n\n\n\nEach node, except the leaves, asks a question about the client in question. A decision is made by going from the root node to a leaf node, while considering the client's situation. The situation of the client, in this case, is fully described by the features:\n1. Checking account balance\n2. Duration of requested credit\n3. Payment status of previous loan\n4. Length of current employment\n\nIn order to build a decision tree we need training data. To carry on with the previous example: we need a number of clients for which we know the properties 1.-4. and their creditability.\nThe process of building a decision tree starts with the root node and involves the following steps:\n1. Choose a splitting criterion and add it to the current node.\n2. Split the dataset at the current node into those that fulfil the criterion and those that do not.\n3. Add a child node for each data split.\n4. For each child node decide on either option 1. or 2.:\n 1. Repeat from step 1.\n 2. Make it a leaf node: The predicted class label is decided by the majority vote over the training data in the current split.\n\n## 2.1 Gini Index & Data Split\nDeciding on how to split your training data at each node is dominated by the following two criteria:\n1. Does the rule help me make a final decision?\n2. Is the rule general enough such that it applies not only to my training data, but also to new unseen examples?\n\nWhen considering our previous example, splitting the clients by their handedness would not help us decide on their creditability. Knowing if a rule will generalize is usually a hard call to make, but in practice we rely on the [Occam's razor](https://en.wikipedia.org/wiki/Occam%27s_razor) principle. Thus the fewer rules we use, the better we believe it to generalize to previously unseen examples.\n\nOne way to measure the quality of a rule is by the [**Gini Index**](https://en.wikipedia.org/wiki/Gini_coefficient).\nSince we only consider binary classification, it is calculated by:\n$$\nGini = \\sum_{n\\in\\{L,R\\}}\\frac{|S_n|}{|S|}\\left( 1 - \\sum_{c \\in C} p_{S_n}(c)^2\\right)\\\\\np_{S_n}(c) = \\frac{|\\{\\mathbf{x}_{i}\\in \\mathbf{X}|y_{i} = c, i \\in S_n\\}|}{|S_n|}, n \\in \\{L, R\\}\n$$\nwith $|C|=2$ being your set of class labels and $S_L$ and $S_R$ the two splits determined by the splitting criterion.\nWhile we only consider two-class problems for decision trees, the method can also be applied when $|C|>2$.\nThe lower the gini score, the better the split. 
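\nAs a small worked example (with made-up numbers): suppose a candidate split sends 4 training points with labels $\\{0,0,0,1\\}$ to the left and 2 points with labels $\\{1,1\\}$ to the right. Then $p_{S_L}(0)=\\frac{3}{4}$, $p_{S_L}(1)=\\frac{1}{4}$, $p_{S_R}(0)=0$ and $p_{S_R}(1)=1$, so\n\n$$\nGini = \\frac{4}{6}\\left(1 - \\frac{9}{16} - \\frac{1}{16}\\right) + \\frac{2}{6}\\left(1 - 0 - 1\\right) = \\frac{4}{6}\\cdot\\frac{3}{8} + 0 = 0.25\n$$\n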
In the extreme case, where all class labels are the same in each split respectively, the gini index takes the value of $0$.\n\n\n```python\ndef tree_gini_index(Y_left, Y_right, classes):\n \"\"\"Compute the Gini Index.\n # Arguments\n Y_left: class labels of the data left set\n np.array of size `(n_objects, 1)`\n Y_right: class labels of the data right set\n np.array of size `(n_objects, 1)`\n classes: list of all class values\n # Output\n gini: scalar `float`\n \"\"\"\n gini = 0\n all_inst = len(Y_left)+len(Y_right)\n\n gini += (len(Y_left)/all_inst)*(1-((len(Y_left[Y_left == classes[0]])/len(Y_left))**2 + \n (len(Y_left[Y_left == classes[1]])/len(Y_left))**2))\n gini += (len(Y_right)/all_inst)*(1-((len(Y_right[Y_right == classes[0]])/len(Y_right))**2 + \n (len(Y_right[Y_right == classes[1]])/len(Y_right))**2))\n \n return gini\n```\n\n\n```python\nam.test_student_function(username, tree_gini_index, ['Y_left', 'Y_right', 'classes'])\n```\n\n Running local tests...\n tree_gini_index successfully passed local tests\n Running remote test...\n Test was successful. Congratulations!\n\n\nAt each node in the tree, the data is split according to a split criterion and each split is passed onto the left/right child respectively.\nImplement the following function to return all rows in `X` and `Y` such that the left child gets all examples that are less than the split value and vice versa. \n\n\n```python\ndef tree_split_data_left(X, Y, feature_index, split_value):\n \"\"\"Split the data `X` and `Y`, at the feature indexed by `feature_index`.\n If the value is less than `split_value` then return it as part of the left group.\n \n # Arguments\n X: np.array of size `(n_objects, n_in)`\n Y: np.array of size `(n_objects, 1)`\n feature_index: index of the feature to split at \n split_value: value to split between\n # Output\n (XY_left): np.array of size `(n_objects_left, n_in + 1)`\n \"\"\"\n X_left, Y_left = None, None\n X_left = X[X[:,feature_index] < split_value]\n Y_left = Y[X[:,feature_index] < split_value]\n\n XY_left = np.concatenate([X_left, Y_left], axis=-1)\n return XY_left\n\n\ndef tree_split_data_right(X, Y, feature_index, split_value):\n \"\"\"Split the data `X` and `Y`, at the feature indexed by `feature_index`.\n If the value is greater or equal than `split_value` then return it as part of the right group.\n \n # Arguments\n X: np.array of size `(n_objects, n_in)`\n Y: np.array of size `(n_objects, 1)`\n feature_index: index of the feature to split at\n split_value: value to split between\n # Output\n (XY_left): np.array of size `(n_objects_left, n_in + 1)`\n \"\"\"\n X_right, Y_right = None, None\n \n X_right = X[X[:,feature_index] >= split_value]\n Y_right = Y[X[:,feature_index] >= split_value]\n \n XY_right = np.concatenate([X_right, Y_right], axis=-1)\n return XY_right\n```\n\n\n```python\nam.test_student_function(username, tree_split_data_left, ['X', 'Y', 'feature_index', 'split_value'])\n```\n\n Running local tests...\n tree_split_data_left successfully passed local tests\n Running remote test...\n Test was successful. Congratulations!\n\n\n\n```python\nam.test_student_function(username, tree_split_data_right, ['X', 'Y', 'feature_index', 'split_value'])\n```\n\n Running local tests...\n tree_split_data_right successfully passed local tests\n Running remote test...\n Test was successful. 
Congratulations!\n\n\n\n```python\nam.get_progress(username)\n```\n\n ---------------------------------------------\n | Carlo Harprecht |\n | carlo.harprecht@student.uva.nl |\n ---------------------------------------------\n | linear_forward | completed |\n | linear_grad_W | completed |\n | linear_grad_b | completed |\n | nll_forward | completed |\n | nll_grad_input | completed |\n | sigmoid_forward | completed |\n | sigmoid_grad_input | completed |\n | tree_gini_index | completed |\n | tree_split_data_left | completed |\n | tree_split_data_right | completed |\n | tree_to_terminal | not attempted |\n ---------------------------------------------\n\n\nNow to find the split rule with the lowest gini score, we brute-force search over all features and values to split by.\n\n\n```python\ndef tree_best_split(X, Y):\n class_values = list(set(Y.flatten().tolist()))\n r_index, r_value, r_score = float(\"inf\"), float(\"inf\"), float(\"inf\")\n r_XY_left, r_XY_right = (X,Y), (X,Y)\n for feature_index in range(X.shape[1]):\n for row in X:\n XY_left = tree_split_data_left(X, Y, feature_index, row[feature_index])\n XY_right = tree_split_data_right(X, Y, feature_index, row[feature_index])\n XY_left, XY_right = (XY_left[:,:-1], XY_left[:,-1:]), (XY_right[:,:-1], XY_right[:,-1:])\n gini = tree_gini_index(XY_left[1], XY_right[1], class_values)\n if gini < r_score:\n r_index, r_value, r_score = feature_index, row[feature_index], gini\n r_XY_left, r_XY_right = XY_left, XY_right\n return {'index':r_index, 'value':r_value, 'XY_left': r_XY_left, 'XY_right':r_XY_right}\n```\n\n## 2.2 Terminal Node\nThe leaf nodes predict the label of an unseen example, by taking a majority vote over all training class labels in that node.\n\n\n```python\ndef tree_to_terminal(Y):\n \"\"\"The most frequent class label, out of the data points belonging to the leaf node,\n is selected as the predicted class.\n \n # Arguments\n Y: np.array of size `(n_objects)`\n \n # Output\n label: most frequent label of `Y.dtype`\n \"\"\"\n (values,counts) = np.unique(Y,return_counts=True)\n ind = np.argmax(counts)\n label = values[ind]\n \n return label\n```\n\n\n```python\nam.test_student_function(username, tree_to_terminal, ['Y'])\n```\n\n Running local tests...\n tree_to_terminal successfully passed local tests\n Running remote test...\n Test was successful. Congratulations!\n\n\n\n```python\nam.get_progress(username)\n```\n\n ---------------------------------------------\n | Carlo Harprecht |\n | carlo.harprecht@student.uva.nl |\n ---------------------------------------------\n | linear_forward | completed |\n | linear_grad_W | completed |\n | linear_grad_b | completed |\n | nll_forward | completed |\n | nll_grad_input | completed |\n | sigmoid_forward | completed |\n | sigmoid_grad_input | completed |\n | tree_gini_index | completed |\n | tree_split_data_left | completed |\n | tree_split_data_right | completed |\n | tree_to_terminal | completed |\n ---------------------------------------------\n\n\n## 2.3 Build the Decision Tree\nNow we recursively build the decision tree, by greedily splitting the data at each node according to the gini index.\nTo prevent the model from overfitting, we transform a node into a terminal/leaf node, if:\n1. a maximum depth is reached.\n2. 
the node does not reach a minimum number of training samples.\n\n\n\n```python\ndef tree_recursive_split(X, Y, node, max_depth, min_size, depth):\n XY_left, XY_right = node['XY_left'], node['XY_right']\n del(node['XY_left'])\n del(node['XY_right'])\n # check for a no split\n if XY_left[0].size <= 0 or XY_right[0].size <= 0:\n node['left_child'] = node['right_child'] = tree_to_terminal(np.concatenate((XY_left[1], XY_right[1])))\n return\n # check for max depth\n if depth >= max_depth:\n node['left_child'], node['right_child'] = tree_to_terminal(XY_left[1]), tree_to_terminal(XY_right[1])\n return\n # process left child\n if XY_left[0].shape[0] <= min_size:\n node['left_child'] = tree_to_terminal(XY_left[1])\n else:\n node['left_child'] = tree_best_split(*XY_left)\n tree_recursive_split(X, Y, node['left_child'], max_depth, min_size, depth+1)\n # process right child\n if XY_right[0].shape[0] <= min_size:\n node['right_child'] = tree_to_terminal(XY_right[1])\n else:\n node['right_child'] = tree_best_split(*XY_right)\n tree_recursive_split(X, Y, node['right_child'], max_depth, min_size, depth+1)\n\n\ndef build_tree(X, Y, max_depth, min_size):\n root = tree_best_split(X, Y)\n tree_recursive_split(X, Y, root, max_depth, min_size, 1)\n return root\n```\n\nBy printing the split criteria or the predicted class at each node, we can visualise the decising making process.\nBoth the tree and a a prediction can be implemented recursively, by going from the root to a leaf node.\n\n\n```python\ndef print_tree(node, depth=0):\n if isinstance(node, dict):\n print('%s[X%d < %.3f]' % ((depth*' ', (node['index']+1), node['value'])))\n print_tree(node['left_child'], depth+1)\n print_tree(node['right_child'], depth+1)\n else:\n print('%s[%s]' % ((depth*' ', node)))\n \ndef tree_predict_single(x, node):\n if isinstance(node, dict):\n if x[node['index']] < node['value']:\n return tree_predict_single(x, node['left_child'])\n else:\n return tree_predict_single(x, node['right_child'])\n \n return node\n\ndef tree_predict_multi(X, node):\n Y = np.array([tree_predict_single(row, node) for row in X])\n return Y[:, None] # size: (n_object,) -> (n_object, 1)\n```\n\nLet's test our decision tree model on some toy data.\n\n\n```python\nX_train, Y_train, X_test, Y_test = split(*generate_2_circles(), 0.7)\n\ntree = build_tree(X_train, Y_train, 4, 1)\nY_pred = tree_predict_multi(X_test, tree)\ntest_accuracy = (Y_pred == Y_test).mean()\nprint('Test Acc: {:.1f}%'.format(test_accuracy * 100))\n```\n\nWe print the decision tree in [pre-order](https://en.wikipedia.org/wiki/Tree_traversal#Pre-order_(NLR)).\n\n\n```python\nprint_tree(tree)\n```\n\n\n```python\nplot_model_prediction(lambda x: tree_predict_multi(x, tree), X_test, Y_test)\n```\n\n# 3. Experiments\nThe [Cleveland Heart Disease](https://archive.ics.uci.edu/ml/datasets/Heart+Disease) dataset aims at predicting the presence of heart disease based on other available medical information of the patient.\n\nAlthough the whole database contains 76 attributes, we focus on the following 14:\n1. Age: age in years \n2. Sex: \n * 0 = female\n * 1 = male \n3. Chest pain type: \n * 1 = typical angina\n * 2 = atypical angina\n * 3 = non-anginal pain\n * 4 = asymptomatic\n4. Trestbps: resting blood pressure in mm Hg on admission to the hospital \n5. Chol: serum cholestoral in mg/dl \n6. Fasting blood sugar: > 120 mg/dl\n * 0 = false\n * 1 = true\n7. 
Resting electrocardiographic results: \n * 0 = normal\n * 1 = having ST-T wave abnormality (T wave inversions and/or ST elevation or depression of > 0.05 mV) \n * 2 = showing probable or definite left ventricular hypertrophy by Estes' criteria \n8. Thalach: maximum heart rate achieved \n9. Exercise induced angina:\n * 0 = no\n * 1 = yes\n10. Oldpeak: ST depression induced by exercise relative to rest \n11. Slope: the slope of the peak exercise ST segment\n * 1 = upsloping\n * 2 = flat \n * 3 = downsloping \n12. Ca: number of major vessels (0-3) colored by flourosopy \n13. Thal: \n * 3 = normal\n * 6 = fixed defect\n * 7 = reversable defect \n14. Target: diagnosis of heart disease (angiographic disease status)\n * 0 = < 50% diameter narrowing \n * 1 = > 50% diameter narrowing\n \nThe 14. attribute is the target variable that we would like to predict based on the rest.\n\nWe have prepared some helper functions to download and pre-process the data in `heart_disease_data.py`\n\n\n```python\nimport heart_disease_data\n```\n\n\n```python\nX, Y = heart_disease_data.download_and_preprocess()\nX_train, Y_train, X_test, Y_test = split(X, Y, 0.7)\n```\n\nLet's have a look at some examples\n\n\n```python\nprint(X_train[0:2])\nprint(Y_train[0:2])\n\n# TODO feel free to explore more examples and see if you can predict the presence of a heart disease\n```\n\n## 3.1 Decision Tree for Heart Disease Prediction \nLet's build a decision tree model on the training data and see how well it performs\n\n\n```python\n# TODO: you are free to make use of code that we provide in previous cells\n# TODO: play around with different hyper parameters and see how these impact your performance\n\ntree = build_tree(X_train, Y_train, 5, 4)\nY_pred = tree_predict_multi(X_test, tree)\ntest_accuracy = (Y_pred == Y_test).mean()\nprint('Test Acc: {:.1f}%'.format(test_accuracy * 100))\n```\n\nHow did changing the hyper parameters affect the test performance? Usually hyper parameters are tuned using a hold-out [validation set](https://en.wikipedia.org/wiki/Training,_validation,_and_test_sets#Validation_dataset) instead of the test set.\n\n## 3.2 Logistic Regression for Heart Disease Prediction\n\nInstead of manually going through the data to find possible correlations, let's try training a logistic regression model on the data.\n\n\n```python\n# TODO: you are free to make use of code that we provide in previous cells\n# TODO: play around with different hyper parameters and see how these impact your performance\n```\n\nHow well did your model perform? Was it actually better then guessing? Let's look at the empirical mean of the target.\n\n\n```python\nY_train.mean()\n```\n\nSo what is the problem? Let's have a look at the learned parameters of our model.\n\n\n```python\nprint(model.W, model.b)\n```\n\nIf you trained sufficiently many steps you'll probably see how some weights are much larger than others. Have a look at what range the parameters were initialized and how much change we allow per step (learning rate). Compare this to the scale of the input features. Here an important concept arises, when we want to train on real world data: \n[Feature Scaling](https://en.wikipedia.org/wiki/Feature_scaling).\n\nLet's try applying it on our data and see how it affects our performance.\n\n\n```python\n# TODO: Rescale the input features and train again\n```\n\nNotice that we did not need any rescaling for the decision tree. 
Can you think of why?\n", "meta": {"hexsha": "c96e585c03e6ed3d481ec2636e637f1ae90a64b4", "size": 627585, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "week_2/ML.ipynb", "max_stars_repo_name": "CHarprecht/UVA_AML20", "max_stars_repo_head_hexsha": "00c052f36cf61d1986a1499a75631ff1d75af6ee", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "week_2/ML.ipynb", "max_issues_repo_name": "CHarprecht/UVA_AML20", "max_issues_repo_head_hexsha": "00c052f36cf61d1986a1499a75631ff1d75af6ee", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "week_2/ML.ipynb", "max_forks_repo_name": "CHarprecht/UVA_AML20", "max_forks_repo_head_hexsha": "00c052f36cf61d1986a1499a75631ff1d75af6ee", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 332.0555555556, "max_line_length": 142792, "alphanum_fraction": 0.9168096752, "converted": true, "num_tokens": 12158, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8887588023318196, "lm_q2_score": 0.9252299643080208, "lm_q1q2_score": 0.8223062749599088}} {"text": "# Gram-Schmidt process\n\nThe Gram-Schmidt procedure takes a list of (column) vectors and forms an orthonormal basis from this set.\nAs a corollary, the procedure allows us to determine the dimension of the space spanned by the basis vectors, which is equal to or less than the space which the vectors sit.\n\n\n```python\n# !pip install sympy\n```\n\n\n```python\nimport numpy as np\nimport numpy.linalg as la\n\nverySmallNumber = 1e-14 # That's 1\u00d710\u207b\u00b9\u2074 = 0.00000000000001\n```\n\n\n```python\n# sympy converts numpy as laTex\n# from IPython.display import display, Math, Latex\nimport sympy as sp\nfrom sympy import Matrix as spm\nsp.init_printing(use_unicode=True)\n```\n\n### Gram-Schmidt procedure\nTake 4 basis vectors as a list of vectors as the columns of a matrix, A.\nGo through the vectors one at a time and set them to be orthogonal to all the vectors that came before it and normalise it.\n\n\n```python\ndef gsBasis4(A):\n \"\"\"Gram-Schmidt for 4 vectors\"\"\"\n \n # Work with a copy - vectors are mutable\n B = np.array(A, dtype=np.float_) \n \n # Column(vector) 0: Normalise(divide by its modulus or norm)\n B[:, 0] = B[:, 0] / la.norm(B[:, 0])\n \n # Column 1: - subtract any overlap with the zeroth vector\n B[:, 1] = B[:, 1] - B[:, 1] @ B[:, 0] * B[:, 0]\n \n # If there's anything left after that subtraction, then B[:, 1] is linearly independant of B[:, 0]\n # Normalise - norm(indepent)=1, norm(dependent)=0\n if la.norm(B[:, 1]) > verySmallNumber :\n B[:, 1] = B[:, 1] / la.norm(B[:, 1])\n else :\n B[:, 1] = np.zeros_like(B[:, 1])\n \n # Column 2: - subtract the overlap with the zeroth vector\n # - subtract the overlap with the first\n B[:, 2] = B[:, 2] - B[:, 2] @ B[:, 0] * B[:, 0] - B[:, 2] @ B[:, 1] * B[:, 1]\n \n # Normalise - norm(indepent)=1, norm(dependent)=0\n if la.norm(B[:, 2]) > verySmallNumber :\n B[:, 2] = B[:, 2] / la.norm(B[:, 2])\n else :\n B[:, 2] = np.zeros_like(B[:, 2])\n \n # Column 2: - subtract the overlap with the zeroth vector\n # - subtract the overlap with the first\n # - subtract the overlap with the second\n B[:, 3] = B[:, 3] - (B[:, 
3] @ B[:, 0] * B[:, 0]) - (B[:, 3] @ B[:, 1] * B[:, 1]) - (B[:, 3] @ B[:, 2] * B[:, 2])\n \n # Normalise - norm(indepent)=1, norm(dependent)=0\n if la.norm(B[:, 3]) > verySmallNumber :\n B[:, 3] = B[:, 3] / la.norm(B[:, 3])\n else :\n B[:, 3] = np.zeros_like(B[:, 3])\n \n return B\n```\n\n\n```python\ndef gsBasis(A):\n \"\"\"Gram-Schmidt for n vectors\"\"\"\n \n # Work with a copy - vectors are mutable\n B = np.array(A, dtype=np.float_)\n \n # Loop over all vectors\n for i in range(B.shape[1]):\n # Loop over all previous vectors, j, to subtract.\n for j in range(i):\n # Subtract the overlap with previous vectors\n # you'll need the current vector B[:, i] and a previous vector B[:, j]\n B[:, i] -= (B[:, i] @ B[:, j] * B[:, j])\n \n # Normalise - norm(indepent)=1, norm(dependent)=0\n if la.norm(B[:, i]) > verySmallNumber :\n B[:, i] = B[:, i] / la.norm(B[:, i])\n else :\n B[:, i] = np.zeros_like(B[:, i])\n\n return B\n\ndef dimensions(A):\n \"\"\"Gram-schmidt process to calculate the dimension spanned by a list of vectors.\n Independent vectors are normalised to one and dependent vectors to zero\n Thus the sum of all the norms will be the dimension\"\"\"\n return np.sum(la.norm(gsBasis(A), axis=0))\n```\n\n## Test your code before submission\nTo test the code you've written above, run the cell (select the cell above, then press the play button [ \u25b6| ] or press shift-enter).\nYou can then use the code below to test out your function.\nYou don't need to submit this cell; you can edit and run it as much as you like.\n\nTry out your code on tricky test cases!\n\n### 4 vector function\n\n\n```python\nV = np.array([[1,0,2,6],\n [0,1,8,2],\n [2,8,3,1],\n [1,-6,2,3]], dtype=np.float_)\nspm(gsBasis4(V).round(2))\n```\n\n\n```python\n# Once you've done Gram-Schmidt once, doing it again should give you the same result.\nU = gsBasis4(V)\nspm(U.round(2)), spm(gsBasis4(U).round(2))\nnp.testing.assert_almost_equal(gsBasis4(U), gsBasis4(V))\n```\n\n### Generic function\n\n\n```python\nspm(gsBasis(V).round(2))\n```\n\n### Non-square matrices\n\n\n```python\nA = np.array([[3,2,3],\n [2,5,-1],\n [2,4,8],\n [12,2,1]], dtype=np.float_)\nspm(gsBasis(A).round(2)), dimensions(A)\n```\n\n### Dependent vectors - a linear combination of the others\n\n\n```python\nB = np.array([[6,2,1,7,5],\n [2,8,5,-4,1],\n [1,-6,3,2,8]], dtype=np.float_)\nspm(gsBasis(B).round(2)), dimensions(B)\n```\n\n\n```python\nC = np.array([[1,0,2],\n [0,1,-3],\n [1,0,2]], dtype=np.float_)\nspm(gsBasis(C).round(2)), dimensions(C)\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "45f6f8fb4ba9289c8eed7a12617e742b3022a0e2", "size": 7822, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "_math/Imperial College London - Math for ML/GramSchmidtProcess.ipynb", "max_stars_repo_name": "aixpact/data-science", "max_stars_repo_head_hexsha": "f04a54595fbc2d797918d450b979fd4c2eabac15", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-07-22T23:12:39.000Z", "max_stars_repo_stars_event_max_datetime": "2020-07-25T02:30:48.000Z", "max_issues_repo_path": "_math/Imperial College London - Math for ML/GramSchmidtProcess.ipynb", "max_issues_repo_name": "aixpact/data-science", "max_issues_repo_head_hexsha": "f04a54595fbc2d797918d450b979fd4c2eabac15", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "_math/Imperial College London - Math for 
ML/GramSchmidtProcess.ipynb", "max_forks_repo_name": "aixpact/data-science", "max_forks_repo_head_hexsha": "f04a54595fbc2d797918d450b979fd4c2eabac15", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 29.1865671642, "max_line_length": 179, "alphanum_fraction": 0.4838915878, "converted": true, "num_tokens": 1490, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8887587905460027, "lm_q2_score": 0.925229954514902, "lm_q1q2_score": 0.8223062553515974}} {"text": "```python\n%pylab inline\nimport sympy as sym\n```\n\n    Populating the interactive namespace from numpy and matplotlib\n\n\nWe'll need to compute\n\n$$\nA_{ij} := \\int_a^b v_j' v_i' \n$$\n\nand \n\n$$\nf_i := \\int_a^b v_i f\n$$\n\nWe split the interval in $M$ elements (segments), with $M+1$ vertices (element boundaries). Define the element boundaries $q := [a, q_1, q_2, ..., q_{M-1}, b]$, and the element sizes $h_k := q_{k+1}-q_k$. \nIn $[a,b]$ we have a total of $N$ basis functions, which are piecewise polynomials of degree $d$, so that in each segment there are always at most $d+1$ non-zero basis functions, and $d+1$ support points.\n\nWe assume that each support point can be interpreted as the image of reference support points $\hat a_\alpha$, where $\alpha \in [0,d]$, through the mapping $F_k(s) := q_k+h_k s$ which maps $[0,1]$ to $T_k := [q_k, q_{k+1}]$.\n\n\nSimilarly, every global basis function $v_j$ can be seen as the composition of a *reference* basis function $\hat v_\alpha$ defined on the reference interval $[0,1]$, and the inverse of the element transformation $F_k(s) := q_k+h_k s$ which maps $[0,1]$ to $[q_k, q_{k+1}]$, that is: \n\n$$\nv_i(F_k(s)) = P_{ki\alpha} \hat v_\alpha(s)\n$$\n\nwhere $P_{ki\alpha}$ encodes the numbering of the local basis functions: for the element $T_k$ and the local index $\alpha \in [0,d]$, it selects which global basis function coincides with the $\alpha$-th of the $(d+1)$ reference basis functions of degree $d$ defined on $[0,1]$.\n\nWe will implement the numbering as a matrix $P \in R^{M,d+1}$, which returns the global index $i$, given the local element index $k$ and the local basis function index $\alpha$, i.e., \n\n\n$$\nv_{P_{k\alpha}}(F_k(s)) = \hat v_\alpha(s)\n$$\n\nNotice that, if we want to compute the derivative w.r.t. $x$ of $v_i$, computed in $F_k(s)$, as a function of the derivative of $\hat v_\alpha$ w.r.t. $s$, we need to take into account also the derivative of $F_k$, i.e., since $(v\circ F_k)' = (v' \circ F_k) F_k'$, we have\n\n\n$$\nv'_{P_{k\alpha}}(F_k(s)) = \hat v'_\alpha(s)/F'_k(s) = \hat v'_\alpha(s)/h_k\n$$\n\n\n```python\n# Let's define the domain discretisation. In 1D, this is just a set of vertices, identifying edges\na = 0 \nb = 1\nM = 4 # Number of elements\ndegree = 3\n# Make sure we don't choose degree = 0. This won't work. Piecewise constants cannot be continuous\nassert degree > 0\n\n# We now choose the number of quadrature points, in order to integrate *exactly* \n# both (v_i, v_j) and (v'_i, v'_j)\nn_quadrature_points = 2*degree+1\n\n# To get a continuous space, we construct piecewise polynomials with support points on the boundary of the\n# elements.
If the degree is greater than 1, then we pick (d-1) equispaced points in the interior of the\n# elements as additional support points.\n\n# Notice that the total number of degrees of freedom is equal to the number of vertices (M+1) plus the \n# number of *interior* basis functions (d+1-2), that is: (M+1) + (d-1)*M = M*d+1\nN = M*degree+1\n\nref_vertices = linspace(0,1,degree+1)\n\nvertices = linspace(a,b,M+1) # Vertices of our triangulation\n```\n\n\n```python\n# The reference element is [0,1]. We construct the mappings, the determinant of their Jacobians, and the \n# reference Basis functions\n\ndef mapping(q, i):\n \"\"\"\n Returns the mapping from [0,1] to T_k := [q[k], q[k+1]]\n \"\"\"\n assert i < len(q)-1\n assert i >= 0\n return lambda x: q[i]+x*(q[i+1]-q[i])\n\ndef mapping_J(q,i):\n assert i < len(q)-1\n assert i >= 0\n return (q[i+1]-q[i])\n\ndef lagrange_basis(q, i):\n assert i < len(q)\n assert i >= 0\n return lambda x: prod([(x-q[j])/(q[i]-q[j]) for j in range(len(q)) if i!=j], axis=0)\n\n# Workaround, to allow lambdify to work also on constant expressions\ndef np_lambdify(varname, func):\n lamb = sym.lambdify(varname, func, modules=['numpy'])\n if func.is_constant():\n return lambda t: full_like(t, lamb(t))\n else:\n return lambda t: lamb(np.array(t))\n\ndef lagrange_basis_derivative(q,i,order=1):\n t = sym.var('t')\n return np_lambdify(t, lagrange_basis(q,i)(t).diff(t,order))\n```\n\n\n```python\n# Let's check that, on each element, F_k(0) = q[k] and F_k(1) = q[k+1]\nassert abs(array([mapping(vertices, i)(0) for i in range(M)]) - vertices[:-1]).max() < 1e-16\nassert abs(array([mapping(vertices, i)(1) for i in range(M)]) - vertices[1:]).max() < 1e-16\n```\n\n\n```python\nx = linspace(0,1,51)\n\nV = array([lagrange_basis(ref_vertices,i)(x) for i in range(degree+1)]).T\nVp = array([lagrange_basis_derivative(ref_vertices,i)(x) for i in range(degree+1)]).T\n_ = [plot(x, V)]\nshow()\n_ = [plot(x, Vp)]\nshow()\n```\n\n\n```python\n# Now construct an interpolatory quadrature formula on [0,1]\nq, w = numpy.polynomial.legendre.leggauss(n_quadrature_points)\n\nq = (q+1)/2\nw = w/2\n```\n\n\n```python\n# And build a global numbering of the basis functions i = P[k,alpha]. Keep in mind that, to ensure continuity, \n# we identify the global index the first basis function of each element, with the global index of the \n# last basis function of the previous element\n\nP = zeros((M,degree+1), dtype=int)\n\nfor k in range(M):\n start = k*degree\n P[k] = array(range(start,start+degree+1))\n\nassert P.max() == N-1\nprint(P)\n```\n\n [[ 0 1 2 3]\n [ 3 4 5 6]\n [ 6 7 8 9]\n [ 9 10 11 12]]\n\n\n\n```python\n# Now we build, for each segment, the transformation of quadrature points and weights, so that we can \n# integrate the rhs and the matrices\n\nQ = array([mapping(vertices,k)(q) for k in range(M)])\nJxW = array([mapping_J(vertices,k)*w for k in range(M)])\n```\n\n\n```python\n# Let's test that everything works: the integral between 0 and 1 of the function f(x) = x should return 0.5. 
\n# Then sum_k q[k].dot(w_k) should be 0.5.\n\nintegral = 0\nfor k in range(M):\n integral = integral + Q[k].dot(JxW[k])\n\nassert abs(integral - .5) < 1e-16\n\n#same as\neinsum('kq,kq', Q, JxW)\n```\n\n\n\n\n 0.49999999999999994\n\n\n\n\n```python\n# Construct the matrix B[k,j,i], defined as the value of v_i(F_k(x[j]))\nB = zeros((M, len(x), N))\n\nfor k in range(M):\n B[k,:,P[k]] = V.T\n \n# To evaluate functions and to do some plotting, also gather together all F_k(x[j]) in X[k,j]\nX = array([mapping(vertices,k)(x) for k in range(M)])\n```\n\n\n```python\nX.shape, B.shape\n```\n\n\n\n\n ((4, 51), (4, 51, 13))\n\n\n\n\n```python\n# Reshaping X and B, we can use them to compute piecewise interpolation\nX = X.flatten()\nB = B.reshape((len(X),-1))\n\nX.shape, B.shape\n```\n\n\n\n\n ((204,), (204, 13))\n\n\n\n\n```python\n_ = plot(X, B[:,0:3]) \n_ = plot(vertices, 0*vertices,'ro')\n```\n\n\n```python\n# Notice that the global support points are the image through F_k of the reference support points,\n# Numbered according to the matrix P, i.e..\n\nsupport_points = zeros((N,))\n\nfor k in range(M):\n support_points[P[k]] = mapping(vertices,k)(ref_vertices)\n \n# If we chose equispaced vertices, they should be identical to linspace(a,b,N):\nabs(support_points - linspace(a,b,N)).max()\n```\n\n\n\n\n 1.1102230246251565e-16\n\n\n\n\n```python\n# Let's use the runge function as an example, and compute its *piecewise* polynomial interpolation \ndef runge(x):\n return 1/(1+50*(x-.5)**2)\n```\n\n\n```python\nplot(X, runge(X))\nplot(support_points, runge(support_points), 'ro')\nplot(X, B.dot(runge(support_points)),'r')\n```\n\n\n```python\n# Construct a arrays Bq and Bprimeq: Bq[k,j,i] is v_i(T_k(q[j])), \n# and Bprimeq[k,j,i] is v'_i(T_k(q[j]))/T'_k(q[j])\nBq = zeros((M, n_quadrature_points, N))\nBprimeq = zeros((M, n_quadrature_points, N))\n\nVq = array([lagrange_basis(ref_vertices,i)(q) for i in range(degree+1)]).T\nVprimeq = array([lagrange_basis_derivative(ref_vertices,i)(q) for i in range(degree+1)]).T\n\nfor k in range(M):\n Bq[k,:,P[k]] = Vq.T\n Bprimeq[k,:,P[k]] = Vprimeq.T/mapping_J(vertices,k)\n\nXq = Q.flatten()\nBq = Bq.reshape((len(Xq),-1))\nBprimeq = Bprimeq.reshape((len(Xq),-1))\nJxWq = JxW.flatten()\n```\n\n\n```python\n# Now compute the integrals for the mass and stiffness matrices\nmass_matrix = einsum('qi,qj,q',Bq,Bq,JxWq)\nstiffness_matrix = einsum('qi,qj,q',Bprimeq,Bprimeq,JxWq)\n```\n\n\n```python\n# Notice that the stiffness matrix is non-invertible (Pure neumann problem in H1 is not uniquely solvable)\n# In fact we are not imposing the boundary conditions on the space, and we need to enforce them numerically\nlinalg.cond(stiffness_matrix)\n```\n\n\n\n\n 7.89617984234469e+16\n\n\n\n\n```python\n# We do the same thing we did in the finite difference case: eliminate the rows corresponding to the boundary \n# points, set the diagonal to 1, and set the rhs to the desired boundary condition\nstiffness_matrix[0,:] = stiffness_matrix[-1,:] = 0\nstiffness_matrix[0,0] = stiffness_matrix[-1,-1] = 1 \n\n# Let's check now if the matrix is well conditioned\nlinalg.cond(stiffness_matrix)\n```\n\n\n\n\n 250.25346068360452\n\n\n\n\n```python\n# Now let's first compute an L2 projection of the runge function, and compare with the interpolation\nmass_rhs = einsum('qi,q,q', Bq, runge(Xq), JxWq)\nu_projection = linalg.solve(mass_matrix, mass_rhs)\n```\n\n\n```python\nplot(X, runge(X))\nu_interpolate = runge(support_points)\nplot(support_points, u_interpolate, 'ro')\nplot(X, 
B.dot(runge(support_points)),'r')\nplot(X, B.dot(u_projection),'g')\n```\n\n\n```python\n# Finally, let's solve non trivial problem: -u'' = sin(2 pi x) with zero bc.\n\ndef rhs_function(x):\n return sin(2*pi*x)\n\ndef exact(x):\n return sin(2*pi*x)/(4*pi**2)\n\n# Assemble the rhs, and make sure we set to zero the boundary conditions\nrhs = einsum('qi,q,q', Bq, rhs_function(Xq), JxWq)\nrhs[0] = rhs[-1] = 0\n```\n\n\n```python\nu = linalg.solve(stiffness_matrix, rhs)\n```\n\n\n```python\nplot(X, exact(X))\nplot(X, B.dot(u))\nplot(support_points, exact(support_points), 'o')\n```\n\n\n```python\n# Let's compute also the L2 error: \nerror = sqrt(einsum('q,q', (Bq.dot(u)-exact(Xq))**2, JxWq))\nerror\n```\n\n\n\n\n 3.5160616773129974e-05\n\n\n", "meta": {"hexsha": "bfe89d2e31087a9c37927fcd262e564231021434", "size": 141183, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "slides/Lecture 12 - LH - LAB - Introduction to PDEs - Finite Elements in 1D - Implementation.ipynb", "max_stars_repo_name": "vitturso/numerical-analysis-2021-2022", "max_stars_repo_head_hexsha": "d675a6f766a42d0a46e7cd69dbfed8645a0b2590", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "slides/Lecture 12 - LH - LAB - Introduction to PDEs - Finite Elements in 1D - Implementation.ipynb", "max_issues_repo_name": "vitturso/numerical-analysis-2021-2022", "max_issues_repo_head_hexsha": "d675a6f766a42d0a46e7cd69dbfed8645a0b2590", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "slides/Lecture 12 - LH - LAB - Introduction to PDEs - Finite Elements in 1D - Implementation.ipynb", "max_forks_repo_name": "vitturso/numerical-analysis-2021-2022", "max_forks_repo_head_hexsha": "d675a6f766a42d0a46e7cd69dbfed8645a0b2590", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 210.7208955224, "max_line_length": 32124, "alphanum_fraction": 0.9088629651, "converted": true, "num_tokens": 3075, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9284088064979618, "lm_q2_score": 0.8856314738181875, "lm_q1q2_score": 0.8222280596045743}} {"text": "# Timeseries Prediction\n\n** EVERYTHING IN THIS TUTORIAL IS VERY NEW! CONSIDER IT ON AN ALPHA PHASE!**\n\n**APIs are likely to change!**\n\n**Examples shown in this tutorial are also in the repo `TimeseriesPrediction.jl/examples`**\n\nTopics:\n\n* Nature of prediction models of **DynamicalSystems.jl**\n* Local model prediction\n* Multi-variate local model Prediction\n* Spatio-temporal Timeseries Prediction\n* Docstrings\n\n## Nature of prediction models\nSuppose you have a scalar or multi-variate timeseries and you want to predict its future behaviour.\n\nYou can either take your *neural-network/machine-learning hammer* and lots of computing power **or** you can use methods from nonlinear dynamics and chaos.\n\n**DynamicalSystems.jl** follows the second approach. This road is not only surprisingly powerful, but also much, **much** simpler.\n\n---\n\n# Local Model Prediction\n\nLocal model prediction does something very simple: it makes a prediction of a state, by finding the future of similar (*neighboring*) states! 
Then it uses the predicted state as a new state from which other predictions can be made!\n\nYeap, that simple.\n\nLet's see how well this method fares in a simple system, the Roessler system (3D & chaotic):\n\n$$\n\\begin{aligned}\n\\dot{x} &= -y-z \\\\\n\\dot{y} &= x+ay \\\\\n\\dot{z} &= b + z(x-c)\n\\end{aligned}\n$$\n\n\n```julia\nusing DynamicalSystems \n\n# This initial condition gives a good prediction:\nu0_good = [0.065081, 0.917503, 0.300242]\n\nross = Systems.roessler(u0_good)\n```\n\n\n\n\n 3-dimensional continuous dynamical system\n state: [0.065081, 0.917503, 0.300242]\n e.o.m.: DynamicalSystemsBase.Systems.roessler_eom\n in-place? false\n jacobian: DynamicalSystemsBase.Systems.roessler_jacob\n\n\n\n\nLet's get a \"measurement\" from the roessler system\n\n\n```julia\ndt = 0.1 # sampling rate\ntf = 1000.0 # final time\ntr = trajectory(ross, tf; dt = dt)\n\n# This is the measurement\ns = tr[50:end, 2] # we skip the first points, they are transient\n# This is the accompanying time vector:\ntimevec = collect(0:dt:tf)[50:end];\n```\n\nHow does this timeseries look?\n\n\n```julia\nusing PyPlot; figure(figsize = (8,4))\nplot(timevec, s, lw = 1.0);\n```\n\nPlease note: these are chaotic oscillations, the system is *not* periodic for the chosen (default) parameter values!\n\nAlright, so we have a recorded some timeseries of length:\n\n\n```julia\nlength(s)\n```\n\n\n\n\n9952\n\n\n\nAnd now we want to predict!\n\nLet's see the prediction function in action! The function to use is\n```julia\nlocalmodel_tsp(s, D::Int, \u03c4, p::Int; kwargs...)\n```\nHere `s` is the timeseries to be predicted. `D, \u03c4` are the values of the `Reconstruction` that has to be made from `s`. The last argument `p` is simply the amount of points to predict!\n\nThe `Reconstruction` idea and functions were introduced in the tutorial \"Delay Coordinates Embedding\". \n\nThis local model prediction method assumes that the system is on some kind of chaotic attractor. This is why it is crucial to reconstruct a signal before using the method!\n\nLet's use only a first part of the timeseries as a \"training set\"\n\n\n```julia\nN = length(s)\nN_train = 1000\ns_train = s[1:N_train];\n```\n\nand then use the rest of the timeseries to compare with the prediction\n\n\n```julia\ns_test = s[N_train+1:end];\n```\n\nHere we define the parameters to make a prediction:\n\n\n```julia\n\u03c4 = 17\nD = 3\np = 500\n\ns_pred = localmodel_tsp(s_train, D, \u03c4, p)\n# prediction always includes last point of `s_train`\n```\n\n\n\n\n 501-element Array{Float64,1}:\n -1.01244 \n -0.618513\n -0.25985 \n 0.107845\n 0.481012\n 0.855973\n 1.22897 \n 1.59618 \n 1.95381 \n 2.29806 \n 2.62522 \n 2.93168 \n 3.21399 \n \u22ee \n 6.92972 \n 7.25001 \n 7.49616 \n 7.66678 \n 7.76051 \n 7.77608 \n 7.71248 \n 7.56916 \n 7.34617 \n 7.04425 \n 6.66491 \n 6.21044 \n\n\n\nLet's plot!\n\n\n```julia\nfigure(figsize=(8,4))\npast = 100\nplot(timevec[N_train-past:N_train+1], s[N_train-past:N_train+1], color = \"C1\", label = \"timeseries\")\nplot(timevec[N_train:N_train+p], s[N_train:N_train+p], color = \"C3\", label = \"real future\")\nplot(timevec[N_train:N_train+p], s_pred, color = \"C0\", linestyle = \"dashed\", alpha = 0.5, label = \"prediction\")\nlegend(); xlabel(\"\\$t\\$\"); ylabel(\"\\$y\\$\")\nprintln(\"Prediction of $(p) points from $(N_train) points. 
i.c.: $(get_state(ross))\")\n```\n\nOf course the prediction depends strongly on:\n\n* Choosing proper `Reconstruction` parameters\n* The initial condition\n\nHow did I know that the value of `\u03c4=17` was good?\n\n\n```julia\nestimate_delay(s, \"first_zero\")\n```\n\n\n\n\n17\n\n\n\nThe function `localmodel_tsp` also accepts some keyword arguments which I did not discuss. These are:\n\n * `method = AverageLocalModel(2)` : Subtype of [`AbstractLocalModel`](@ref).\n * `ntype = FixedMassNeighborhood(2)` : Subtype of [`AbstractNeighborhood`](@ref).\n * `stepsize = 1` : Prediction step size.\n \nWe already know what does the `ntype` keyword does: it chooses a neighborhood type.\n\nThe `method` keyword chooses the method of the local prediction. There are two methods, the `AverageLocalModel`, which we already used by default, as well as the `LinearLocalModel`. Their docstrings are at the end of this tutorial.\n\nWithout explanations, their call signatures are:\n```julia\nAverageLocalModel(n::Int)\nLinearLocalModel(n::Int, \u03bc::Real)\nLinearLocalModel(n::Int, s_min::Real, s_max::Real)\n```\n\n\n```julia\nusing BenchmarkTools\n@btime localmodel_tsp($s_train, $D, $\u03c4, $p)\nprintln(\"Time for predicting $(p) points from $(N_train) points.\")\n```\n\n 686.934 \u03bcs (6029 allocations: 455.09 KiB)\n Time for predicting 500 points from 1000 points.\n\n\nLet's bundle all the production-prediction-plotting process into one function and play around!\n\n\n```julia\nfunction predict_roessler(N_train, p, method, u0 = rand(3); ntype = FixedMassNeighborhood(5))\n \n ds = Systems.roessler(u0)\n dt = 0.1\n tr = trajectory(ds, (N_train+p)\u00f7dt; dt = dt)\n \n s = tr[:, 2] # actually, any of the 3 variables of the Roessler work well\n \n s_train = s[1:N_train]\n s_test = s[N_train+1:end]\n\n # parameters to predict:\n \u03c4 = 17\n D = 3\n\n s_pred = localmodel_tsp(s_train, D, \u03c4, p; method = method, ntype = ntype)\n \n figure(figsize=(8,4))\n past = 100\n plot(timevec[N_train-past:N_train+1], s[N_train-past:N_train+1], color = \"C1\", label = \"timeseries\")\n plot(timevec[N_train:N_train+p], s[N_train:N_train+p], color = \"C3\", label = \"real future\")\n plot(timevec[N_train:N_train+p], s_pred, color = \"C0\", linestyle = \"dashed\", alpha = 0.5, label = \"prediction\")\n legend(); xlabel(\"\\$t\\$\"); ylabel(\"\\$y\\$\") \n mprint = Base.datatype_name(typeof(method))\n println(\"N_train = $(N_train), p = $(p), method = $(mprint), u0 = $(u0)\")\n return\nend\n\npredict_roessler(3000, 500, LinearLocalModel(2, 5.0))\n# Linear Local model is slower than Average local model, and in general not that\n# much more powerful.\n```\n\n# Multi-Variate Prediction\n\nOn purpose I was always referring to `s` as \"timeseries\". There is no reason for `s` to be scalar though, this prediction method works just as well when predicting multiple timeseries. And the call signature does not change at all!\n\nThe following example demonstrates the prediction of the Lorenz96 model\n\n$$\n\\frac{dx_i}{dt} = (x_{i+1}-x_{i-2})x_{i-1} - x_i + F \n$$\n\na system that displays high-dimensional chaos and is thus very difficult to predict!\n\n\n```julia\nusing DynamicalSystems, PyPlot\n\n#Generate timeseries set\nds = Systems.lorenz96(5; F=8.)\nic = get_state(ds)\n\u0394t = 0.05\ns = trajectory(ds, 2100; dt=\u0394t)[:,1:2]\n\n#Set Training and Test Set\nN_train = 40000\np = 200\ns_train = s[1:N_train,1:2]\ns_test = s[N_train:N_train+p,1:2]\n\n#Embedding Parameters\nD = 5; # total dimension of reconstruction is D*2 ! ! 
!\nx = s[:, 1]\n\u03c4 = estimate_delay(x, \"first_zero\")\nprintln(\"Delay time estimation: $(\u03c4)\")\n\n#Prediction\nmethod = LinearLocalModel(2, 2.5)\nmethod = AverageLocalModel(2)\nntype = FixedMassNeighborhood(5)\ns_pred = localmodel_tsp(s_train, D, \u03c4, p; method = method, ntype = ntype)\n\n\nfigure(figsize=(12,4))\nax = subplot(121)\nplot((N_train:N_train+p)*\u0394t, s_test[:,1], label=\"signal\")\nplot((N_train:N_train+p)*\u0394t, s_pred[:,1], label=\"prediction\")\nylabel(\"\\$x_1(n)\\$\")\nxlabel(\"\\$t\\$\")\nlegend()\nax = subplot(122)\nplot((N_train:N_train+p)*\u0394t, s_test[:,2], label=\"signal\")\nplot((N_train:N_train+p)*\u0394t, s_pred[:,2], label=\"prediction\")\nylabel(\"\\$x_2(n)\\$\")\nxlabel(\"\\$t\\$\")\nlegend()\ntight_layout();\nprintln(\"Prediction p=$p of Lorenz96 Model (5 Nodes) from $N_train points\")\nprintln(\"i.c.: $ic\")\n```\n\n# Spatio Temporal Timeseries prediction\n\nSpatio-temporal systems are systems that depend on both space and time, i.e. *fields* (like Partial Differential Equations). These systems can also be predicted using these local model methods!\n\nIn the following sections we will see 3 examples, but there won't be any code shown for the last example (because its humongus).\n\nSee `TimeseriesPrediction.jl/examples` repository for more examples!\n\n## Barkley Model\nThe Barkley model consists of 2 coupled fields each having 2 spatial dimensions, and is considered one of the simplest spatio-temporal systems\n\n$$\n\\begin{align}\n\\frac{\\partial u }{\\partial t} =& \\frac{1}{\\epsilon} u (1-u)\\left(u-\\frac{v+b}{a}\\right) + \\nabla^2 u\\nonumber \\\\\n\\frac{\\partial v }{\\partial t} =& u - v\n\\end{align}\n$$\n\n\n\n```julia\n# This Algorithm of evolving the Barkley model is taken from\n# http://www.scholarpedia.org/article/Barkley_model\nfunction barkley(T, Nx, Ny)\n a = 0.75; b = 0.02; \u03b5 = 0.02\n\n u = zeros(Nx, Ny); v = zeros(Nx, Ny)\n U = Vector{Array{Float64,2}}()\n V = Vector{Array{Float64,2}}()\n\n #Initial state that creates spirals\n u[40:end,34] = 0.1; u[40:end,35] = 0.5\n u[40:end,36] = 5; v[40:end,34] = 1\n u[1:10,14] = 5; u[1:10,15] = 0.5\n u[1:10,16] = 0.1; v[1:10,17] = 1\n u[27:36,20] = 5; u[27:36,19] = 0.5\n u[27:36,18] = 0.1; v[27:36,17] = 1\n\n h = 0.75; \u0394t = 0.1; \u03b4 = 0.001\n \u03a3 = zeros(Nx, Ny, 2)\n r = 1; s = 2\n \n function F(u, uth)\n if u < uth\n u/(1-(\u0394t/\u03b5)*(1-u)*(u-uth))\n else\n (u + (\u0394t/\u03b5)*u*(u-uth))/(1+(\u0394t/\u03b5)*u*(u-uth))\n end\n end\n\n for m=1:T\n for i=1:Nx, j=1:Ny\n if u[i,j] < \u03b4\n u[i,j] = \u0394t/h^2 * \u03a3[i,j,r]\n v[i,j] = (1 - \u0394t)* v[i,j]\n else\n uth = (v[i,j] + b)/a\n v[i,j] = v[i,j] + \u0394t*(u[i,j] - v[i,j])\n u[i,j] = F(u[i,j], uth) + \u0394t/h^2 *\u03a3[i,j,r]\n \u03a3[i,j,s] -= 4u[i,j]\n i > 1 && (\u03a3[i-1,j,s] += u[i,j])\n i < Nx && (\u03a3[i+1,j,s] += u[i,j])\n j > 1 && (\u03a3[i,j-1,s] += u[i,j])\n j < Ny && (\u03a3[i,j+1,s] += u[i,j])\n end\n \u03a3[i,j,r] = 0\n end\n r,s = s,r\n #V[:,:,m] .= v\n #U[:,:,m] .= u\n push!(U,copy(u))\n push!(V,copy(v))\n end\n return U,V\nend\n```\n\n\n\n\n barkley (generic function with 1 method)\n\n\n\n\n```julia\nNx = 50\nNy = 50\nTskip = 100\nTtrain = 500\np = 20\nT = Tskip + Ttrain + p\n\nU,V = barkley(T, Nx, Ny);\n\nVtrain = V[Tskip + 1:Tskip + Ttrain]\nVtest = V[Tskip + Ttrain : T]\n\nD = 2\n\u03c4 = 1\nB = 2\nk = 1\nc = 200\n\nVpred = localmodel_stts(Vtrain, D, \u03c4, p, B, k; boundary=c)\nerr = [abs.(Vtest[i]-Vpred[i]) for i=1:p+1];\nprintln(\"Maximum error: 
$(maximum(maximum.(err)))\")\n```\n\n Reconstructing\n Creating Tree\n Working on Frame 1/20\n Working on Frame 2/20\n Working on Frame 3/20\n Working on Frame 4/20\n Working on Frame 5/20\n Working on Frame 6/20\n Working on Frame 7/20\n Working on Frame 8/20\n Working on Frame 9/20\n Working on Frame 10/20\n Working on Frame 11/20\n Working on Frame 12/20\n Working on Frame 13/20\n Working on Frame 14/20\n Working on Frame 15/20\n Working on Frame 16/20\n Working on Frame 17/20\n Working on Frame 18/20\n Working on Frame 19/20\n Working on Frame 20/20\n Maximum error: 0.018341198402581194\n\n\nPlotting the real evolution, prediction, and error side by side (plotting takes a *lot* of time) with `Tskip = 200, Ttrain = 1000, p = 200` produces:\n\n\n\n## Kuramoto-Sivashinsky\n\nThis system consists of only a single field and a single spatial dimension\n\n$$\ny_t = \u2212y y_x \u2212 y_{xx} \u2212 y_{xxxx} \n$$\nwhere the subscripts denote partial derivatives.\n\nWe will not show code for this example, but only the results. The code is located at `TimeseriesPrediction.jl/examples/KSprediction.jl` and is quite large for a Jupyter notebook.\n\n### Prediction\n\nA typical prediction with parameters `D = 2, \u03c4 = 1, \u0392 = 5, k = 1` and system parameters `L = 150` (see file for more) looks like this:\n\n\n\nThe vertical axis, which is *time*, is measured within units of the maximal Lyapunov exponent of the system \u039b, which is around 0.1.\n\n### Mean Squared Error of prediction\n\nWe now present a measure of the error of the prediction, by averaging the error values across all timesteps and across all spatial values, and then normalizing properly\n\n\n\nThe above curves are also *averaged* over 10 different initial conditions and subsequent predictions!\n\nThe parameters used for the prediction \n\nYou can compare it with e.g. figure 5 of this [Physical Review Letters article](https://doi.org/10.1103/PhysRevLett.120.024102).\n\n# Docstrings\n\n\n```julia\n?localmodel_tsp\n```\n\n search: \u001b[1ml\u001b[22m\u001b[1mo\u001b[22m\u001b[1mc\u001b[22m\u001b[1ma\u001b[22m\u001b[1ml\u001b[22m\u001b[1mm\u001b[22m\u001b[1mo\u001b[22m\u001b[1md\u001b[22m\u001b[1me\u001b[22m\u001b[1ml\u001b[22m\u001b[1m_\u001b[22m\u001b[1mt\u001b[22m\u001b[1ms\u001b[22m\u001b[1mp\u001b[22m \u001b[1ml\u001b[22m\u001b[1mo\u001b[22m\u001b[1mc\u001b[22m\u001b[1ma\u001b[22m\u001b[1ml\u001b[22m\u001b[1mm\u001b[22m\u001b[1mo\u001b[22m\u001b[1md\u001b[22m\u001b[1me\u001b[22m\u001b[1ml\u001b[22m\u001b[1m_\u001b[22ms\u001b[1mt\u001b[22mt\u001b[1ms\u001b[22m Average\u001b[1mL\u001b[22m\u001b[1mo\u001b[22m\u001b[1mc\u001b[22m\u001b[1ma\u001b[22m\u001b[1ml\u001b[22m\u001b[1mM\u001b[22m\u001b[1mo\u001b[22m\u001b[1md\u001b[22m\u001b[1me\u001b[22m\u001b[1ml\u001b[22m Abstract\u001b[1mL\u001b[22m\u001b[1mo\u001b[22m\u001b[1mc\u001b[22m\u001b[1ma\u001b[22m\u001b[1ml\u001b[22m\u001b[1mM\u001b[22m\u001b[1mo\u001b[22m\u001b[1md\u001b[22m\u001b[1me\u001b[22m\u001b[1ml\u001b[22m\n \n\n\n\n\n\n```\nlocalmodel_tsp(s, D::Int, \u03c4, p::Int; method, ntype, stepsize)\nlocalmodel_tsp(s, p::Int; method, ntype, stepsize)\n```\n\nPerform a timeseries prediction for `p` points, using local weighted modeling [1]. The function always returns an object of the same type as `s`, which can be either a timeseries (vector) or an `AbstractDataset` (trajectory), and the returned data always contains the final point of `s` as starting point. 
This means that the returned data has length of `p + 1`.\n\nIf given `(s, D, \u03c4)`, then a [`Reconstruction`](@ref) is performed on `s` with dimension `D` and delay `\u03c4`. If given only `s` then no [`Reconstruction`](@ref) is done. Keep in mind that the intented behavior of the algorithm is to work with a reconstruction, and not \"raw\" data.\n\n## Keyword Arguments\n\n * `method = AverageLocalModel(2)` : Subtype of [`AbstractLocalModel`](@ref).\n * `ntype = FixedMassNeighborhood(2)` : Subtype of [`AbstractNeighborhood`](@ref).\n * `stepsize = 1` : Prediction step size.\n\n## Description\n\nGiven a query point, the function finds its neighbors using neighborhood `ntype`. Then, the neighbors `xnn` and their images `ynn` are used to make a prediction for the future of the query point, using the provided `method`. The images `ynn` are the points `xnn` shifted by `stepsize` into the future.\n\nThe algorithm is applied iteratively until a prediction of length `p` has been created, starting with the query point to be the last point of the timeseries.\n\n## References\n\n[1] : D. Engster & U. Parlitz, *Handbook of Time Series Analysis* Ch. 1, VCH-Wiley (2006)\n\n\n\n\n\n```julia\n?AbstractLocalModel\n```\n\n search: \u001b[1mA\u001b[22m\u001b[1mb\u001b[22m\u001b[1ms\u001b[22m\u001b[1mt\u001b[22m\u001b[1mr\u001b[22m\u001b[1ma\u001b[22m\u001b[1mc\u001b[22m\u001b[1mt\u001b[22m\u001b[1mL\u001b[22m\u001b[1mo\u001b[22m\u001b[1mc\u001b[22m\u001b[1ma\u001b[22m\u001b[1ml\u001b[22m\u001b[1mM\u001b[22m\u001b[1mo\u001b[22m\u001b[1md\u001b[22m\u001b[1me\u001b[22m\u001b[1ml\u001b[22m\n \n\n\n\n\n\n```\nAbstractLocalModel\n```\n\nSupertype of methods for making a prediction of a query point `q` using local models, following the methods of [1]. Concrete subtypes are `AverageLocalModel` and `LinearLocalModel`.\n\nAll models weight neighbors with the following weight function\n\n$$\n\\begin{aligned}\n\u03c9_i = \\left[ 1- \\left(\\frac{d_i}{d_{max}}\\right)^n\\right]^n\n\\end{aligned}\n$$\n\nwith $d_i = ||x_{nn,i} -q||_2$ and degree `n`, to ensure smoothness of interpolation.\n\n### Average Local Model\n\n```\nAverageLocalModel(n::Int)\n```\n\nThe prediction is simply the weighted average of the images $y_{nn, i}$ of the neighbors $x_{nn, i}$ of the query point `q`:\n\n$$\n\\begin{aligned}\ny_{pred} = \\frac{\\sum{\\omega_i^2 y_{nn,i}}}{\\sum{\\omega_i^2}}\n\\end{aligned}\n$$\n\n### Linear Local Model\n\n```\nLinearLocalModel(n::Int, \u03bc::Real)\nLinearLocalModel(n::Int, s_min::Real, s_max::Real)\n```\n\nThe prediction is a weighted linear regression over the neighbors $x_{nn, i}$ of the query and their images $y_{nn,i}$ as shown in [1].\n\nGiving either `\u03bc` or `s_min` and `s_max` determines which type of regularization is applied.\n\n * `\u03bc` : Ridge Regression\n\n $$\n \\begin{aligned}\n f(\\sigma) = \\frac{\\sigma^2}{\\mu^2 + \\sigma^2}\n \\end{aligned}\n $$\n * `s_min`, `s_max` : Soft Threshold\n\n $$\n \\begin{aligned}\n f(\\sigma) = \\begin{cases} 0, & \\sigma < s_{min}\\\\\n \\left(1 - \\left( \\frac{s_{max}-\\sigma}{s_{max}-s_{min}}\\right)^2 \\right)^2,\n &s_{min} \\leq \\sigma \\leq s_{max} \\\\\n 1, & \\sigma > s_{max}\\end{cases}\n \\end{aligned}\n $$\n\n## References\n\n[1] : D. Engster & U. Parlitz, *Handbook of Time Series Analysis* Ch. 
1, VCH-Wiley (2006)\n\n\n\n\n\n```julia\n?localmodel_stts\n```\n\n search: \u001b[1ml\u001b[22m\u001b[1mo\u001b[22m\u001b[1mc\u001b[22m\u001b[1ma\u001b[22m\u001b[1ml\u001b[22m\u001b[1mm\u001b[22m\u001b[1mo\u001b[22m\u001b[1md\u001b[22m\u001b[1me\u001b[22m\u001b[1ml\u001b[22m\u001b[1m_\u001b[22m\u001b[1ms\u001b[22m\u001b[1mt\u001b[22m\u001b[1mt\u001b[22m\u001b[1ms\u001b[22m \u001b[1ml\u001b[22m\u001b[1mo\u001b[22m\u001b[1mc\u001b[22m\u001b[1ma\u001b[22m\u001b[1ml\u001b[22m\u001b[1mm\u001b[22m\u001b[1mo\u001b[22m\u001b[1md\u001b[22m\u001b[1me\u001b[22m\u001b[1ml\u001b[22m\u001b[1m_\u001b[22mt\u001b[1ms\u001b[22mp\n \n\n\n\n\n\n```\nlocalmodel_stts(U::AbstractVector{<:AbstractArray{T, \u03a6}}, D, \u03c4, p, B, k; kwargs...)\n```\n\nPerform a spatio-temporal timeseries prediction for `p` iterations, using local weighted modeling [1]. The function always returns an object of the same type as `U`, with each entry being a predicted state. The returned data always contains the final state of `U` as starting point (total returned length is `p+1`).\n\n`(D, \u03c4, B, k)` are used to make a [`STReconstruction`](@ref) on `U`. In most cases `k=1` and `\u03c4=1,2,3` give best results.\n\n## Keyword Arguments\n\n * `boundary, weighting` : Passed directly to [`STReconstruction`](@ref).\n * `method = AverageLocalModel(2)` : Subtype of [`AbstractLocalModel`](@ref).\n * `ntype = FixedMassNeighborhood(3)` : Subtype of [`AbstractNeighborhood`](@ref).\n * `printprogress = true` : To print progress done.\n\n## Description\n\nThis method works identically to [`localmodel_tsp`](@ref), by expanding the concept from vector-states to general array-states.\n\n## Performance Notes\n\nBe careful when choosing `B` as memory usage and computation time depend strongly on it.\n\n## References\n\n[1] : U. Parlitz & C. Merkwirth, [Phys. Rev. Lett. **84**, pp 1890 (2000)](https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.84.1890)\n\n\n\n\n\n```julia\n?STReconstruction\n```\n\n search: \u001b[1mS\u001b[22m\u001b[1mT\u001b[22m\u001b[1mR\u001b[22m\u001b[1me\u001b[22m\u001b[1mc\u001b[22m\u001b[1mo\u001b[22m\u001b[1mn\u001b[22m\u001b[1ms\u001b[22m\u001b[1mt\u001b[22m\u001b[1mr\u001b[22m\u001b[1mu\u001b[22m\u001b[1mc\u001b[22m\u001b[1mt\u001b[22m\u001b[1mi\u001b[22m\u001b[1mo\u001b[22m\u001b[1mn\u001b[22m\n \n\n\n\n\n\n```\nSTReconstruction(U::AbstractVector{<:AbstractArray}, D, \u03c4, B, k, boundary, weighting)\n <: AbstractDataset\n```\n\nPerform spatio-temporal(ST) delay-coordinates reconstruction from a ST-timeseries `s`.\n\n## Description\n\nAn extension of [`Reconstruction`](@ref) to support inclusion of spatial neighbors into reconstructed vectors. `B` is the number of spatial neighbors along each direction to be included. The parameter `k` indicates the spatial sampling density as described in [1].\n\nTo better understand `B`, consider a system of 2 spatial dimensions, where the state is a `Matrix`, and choose a point of the matrix to reconstruct. Giving `B = 1` will choose the current point and 8 points around it, *not* 4! For `\u03a6` dimensions, the number of spatial points is then given by `(2B + 1)^\u03a6`. Temporal embedding via `D` and `\u03c4` is treated analogous to [`Reconstruction`](@ref). Therefore the total embedding dimension is `D*(2B + 1)^\u03a6`.\n\n## Other Parameters\n\n * `boundary = 20` : Constant boundary value used for reconstruction of states close to the border. 
Pass `false` for periodic boundary conditions.\n * `weighting = (a,b)` or `nothing` : If given numbers `(a, b)`, adds `\u03a6` additional entries to reconstructed states. These are a spatial weighting that may be useful for considering spatially inhomogenous dynamics. Each entry is a normalized spatial coordinate $-1\\leq\\tilde{x}\\leq 1$:\n\n $$\n \\begin{aligned}\n \\omega(\\tilde{x}) = a \\tilde{x} ^ b.\n \\end{aligned}\n $$\n\n## References\n\n[1] : U. Parlitz & C. Merkwirth, [Phys. Rev. Lett. **84**, pp 1890 (2000)](https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.84.1890)\n\n\n\n", "meta": {"hexsha": "231b2f236f69f911481f7ce61a61ac6ad965aedd", "size": 509864, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "introductory-tutorials/broader-topics-and-ecosystem/introduction-to-dynamicalsystems.jl/7. Timeseries Prediction.ipynb", "max_stars_repo_name": "grenkoca/JuliaTutorials", "max_stars_repo_head_hexsha": "3968e0430db77856112521522e10f7da0d7610a0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 535, "max_stars_repo_stars_event_min_datetime": "2020-07-15T14:56:11.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-25T12:50:32.000Z", "max_issues_repo_path": "introductory-tutorials/broader-topics-and-ecosystem/introduction-to-dynamicalsystems.jl/7. Timeseries Prediction.ipynb", "max_issues_repo_name": "grenkoca/JuliaTutorials", "max_issues_repo_head_hexsha": "3968e0430db77856112521522e10f7da0d7610a0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 42, "max_issues_repo_issues_event_min_datetime": "2018-02-25T22:53:47.000Z", "max_issues_repo_issues_event_max_datetime": "2020-05-14T02:15:50.000Z", "max_forks_repo_path": "introductory-tutorials/broader-topics-and-ecosystem/introduction-to-dynamicalsystems.jl/7. Timeseries Prediction.ipynb", "max_forks_repo_name": "grenkoca/JuliaTutorials", "max_forks_repo_head_hexsha": "3968e0430db77856112521522e10f7da0d7610a0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 394, "max_forks_repo_forks_event_min_datetime": "2020-07-14T23:22:24.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-28T20:12:57.000Z", "avg_line_length": 432.8217317487, "max_line_length": 142426, "alphanum_fraction": 0.9315660647, "converted": true, "num_tokens": 6609, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9219218391455084, "lm_q2_score": 0.8918110497511051, "lm_q1q2_score": 0.8221800831568253}} {"text": "# Random Number\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt \n```\n\n c:\\Users\\leona\\anaconda3\\lib\\site-packages\\numpy\\_distributor_init.py:30: UserWarning: loaded more than 1 DLL from .libs:\n c:\\Users\\leona\\anaconda3\\lib\\site-packages\\numpy\\.libs\\libopenblas.PYQHXLVVQ7VESDPUVUADXEVJOBGHJPAY.gfortran-win_amd64.dll\n c:\\Users\\leona\\anaconda3\\lib\\site-packages\\numpy\\.libs\\libopenblas.XWYDX2IKJW2NMTWSFYNGFUWKQU3LYTCZ.gfortran-win_amd64.dll\n warnings.warn(\"loaded more than 1 DLL from .libs:\"\n\n\n\n```python\nnp.random.rand(1)\n```\n\n\n\n\n array([0.30236111])\n\n\n\n\n```python\nnp.random.randint(low = 1, high = 10)\n```\n\n\n\n\n 9\n\n\n\n\n```python\nn = 1000000\nx = np.random.rand(n)\ny = np.random.rand(n)\nr = np.sqrt(x**2+y**2)\ninside = 0\noutside = 0\nfor i in range(len(r)):\n if r[i] <= 1 :\n inside += 1\ninside \n```\n\n\n\n\n 784948\n\n\n\n\n```python\nr\n```\n\n\n\n\n array([0.60574234, 0.93723258, 0.9383093 , ..., 0.60240754, 0.39812025,\n 0.96042183])\n\n\n\n\n```python\ninside*4/n\n```\n\n\n\n\n 3.139792\n\n\n\n\n```python\ninside*4/n\n```\n\n\n\n\n 3.139792\n\n\n\n\n```python\nnp.pi\n```\n\n\n\n\n 3.141592653589793\n\n\n\n# Contoh\n\n\\begin{equation}\ng(x) := 4+3x\n\\end{equation}\n\n\n```python\nx = np.linspace(0,5, 10000)\ngx = 4+3*x\nx_ = np.random.uniform(low = 0, high = 2, size = 10000)\ny = np.random.uniform(low=-0, high = 10, size = 10000)\nplt.figure(figsize =(16,16))\nplt.plot(x,gx, 'g')\nplt.xlim(-3,3)\nplt.ylim(-25,25)\nplt.grid()\nplt.scatter(x_,y, 0.5)\nplt.show()\n```\n\n\\begin{equation}\nf(x) := \\sin(4x) + \\cos(\\sin(2x^3))\n\\end{equation}\n\n\n```python\nx = np.linspace(-5,5, 10000)\nfx = np.sin(4*x) + np.cos(np.sin(2*x**3))\nx_ = np.random.uniform(low = -5, high = 5, size = 10000)\ny = np.random.uniform(low=-0.5, high = 2, size = 10000)\nplt.figure(figsize =(16,16))\nplt.plot(x,fx, 'g')\nplt.scatter(x_,y, 0.5)\nplt.show()\n```\n", "meta": {"hexsha": "654131c640f5b9db029f7bb0e2dc2809acb76cef", "size": 429240, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Materi/random-number.ipynb", "max_stars_repo_name": "leonv1602/mathematics-with-python", "max_stars_repo_head_hexsha": "8099f0029870b546027d9293847cef642e210217", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Materi/random-number.ipynb", "max_issues_repo_name": "leonv1602/mathematics-with-python", "max_issues_repo_head_hexsha": "8099f0029870b546027d9293847cef642e210217", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Materi/random-number.ipynb", "max_forks_repo_name": "leonv1602/mathematics-with-python", "max_forks_repo_head_hexsha": "8099f0029870b546027d9293847cef642e210217", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 1460.0, "max_line_length": 276798, "alphanum_fraction": 0.958932066, "converted": true, "num_tokens": 692, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.933430805473952, "lm_q2_score": 0.8807970826714614, "lm_q1q2_score": 0.8221631303371293}} {"text": "# Vectors\nVectors, and vector spaces, are fundamental to *linear algebra*, and they're used in many machine learning models. Vectors describe spatial lines and planes, enabling you to perform calculations that explore relationships in multi-dimensional space.\n\n## What is a Vector\nAt its simplest, a vector is a numeric element that has both *magnitude* and *direction*. The magnitude represents a distance (for example, \"2 miles\") and the direction indicates which way the vector is headed (for example, \"East\"). Vectors are defined by an n-dimensional coordinate that describe a point in space that can be connected by a line from an arbitrary origin.\n\nThat all seems a bit complicated, so let's start with a simple, two-dimensional example. In this case, we'll have a vector that is defined by a point in a two-dimensional plane: A two dimensional coordinate consists of an *x* and a *y* value, and in this case we'll use **2** for *x* and **1** for *y*.\n\nOur vector can be written as **v**=(2,1), but more formally we would use the following notation, in which the dimensional coordinate values for the vector are shown as a matrix:\n\\begin{equation}\\vec{v} = \\begin{bmatrix}2 \\\\ 1 \\end{bmatrix}\\end{equation}\n\nSo what exactly does that mean? Well, the coordinate is two-dimensional, and describes the movements required to get to the end point (of *head*) of the vector - in this case, we need to move 2 units in the *x* dimension, and 1 unit in the *y* dimension. Note that we don't specify a starting point for the vector - we're simply describing a destination coordinate that encapsulate the magnitide and direction of the vector. Think about it as the directions you need to follow to get to *there* from *here*, without specifying where *here* actually is!\n\nIt can help to visualize the vector, and with a two-dimensional vector, that's pretty straightforward. We just define a two-dimensional plane, choose a starting point, and plot the coordinate described by the vector relative to the starting point.\n\nRun the code in the following cell to visualize the vector **v** (which remember is described by the coordinate (2,1)).\n\n\n```python\n%matplotlib inline\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# We'll use a numpy array for our vector\nv = np.array([2,1])\n\n# and we'll use a quiver plot to visualize it.\norigin = [0], [0]\nplt.axis('equal')\nplt.grid()\nplt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))\nplt.quiver(*origin, *v, scale=10, color='r')\nplt.show()\n```\n\nNote that we can use a numpy array to define the vector in Python; so to create our (2,1) vector, we simply create a numpy array with the elements [2,1]. We've then used a quiver plot to visualize the vector, using the point 0,0 as the starting point (or *origin*). Our vector of (2,1) is shown as an arrow that starts at 0,0 and moves 2 units along the *x* axis (to the right) and 1 unit along the *y* axis (up).\n\n## Calculating Vector Magnitude and Direction\nWe tend to work with vectors by expressing their components as *cartesian coordinates*; that is, *x* and *y* (and other dimension) values that define the number of units travelled along each dimension. 
So the coordinates of our (2,1) vector indicate that we must travel 2 units along the *x* axis, and *1* unit along the *y* axis.\n\nHowever, you can also work with verctors in terms of their *polar coordinates*; that is coordinates that describe the magnitude and direction of the vector. The magnitude is the overall distance of the vector from tail to head, and the direction is the angle at which the vector is oriented.\n\n### Calculating Magnitude\nCalculating the magnitude of the vector from its cartesian coordinates requires measuring the distance between the arbitrary starting point and the vector head point. For a two-dimensional vector, we're actually just calculating the length of the hypotenuse in a right-angled triangle - so we could simply invoke Pythagorean theorum and calculate the square root of the sum of the squares of it's components, like this:\n\n\\begin{equation}\\|\\vec{v}\\| = \\sqrt{v_{1}\\;^{2} + v_{2}\\;^{2}}\\end{equation}\n\nThe notation for a vector's magnitude is to surround the vector name with vertical bars - you can use single bars (for example, |**v**|) or double bars (||**v**||). Double-bars are often used to avoid confusion with absolute values. Note that the components of the vector are indicated by subscript indices (v1, v2,...v*n*),\n\nIn this case, the vector **v** has two components with values **2** and **1**, so our magnitude calculation is:\n\n\\begin{equation}\\|\\vec{v}\\| = \\sqrt{2^{2} + 1^{2}}\\end{equation}\n\nWhich is:\n\n\\begin{equation}\\|\\vec{v}\\| = \\sqrt{4 + 1}\\end{equation}\n\nSo:\n\n\\begin{equation}\\|\\vec{v}\\| = \\sqrt{5} \\approx 2.24\\end{equation}\n\nYou can run the following Python code to get a more precise result (note that the elements of a numpy array are zero-based)\n\n\n```python\nimport math\n\nvMag = math.sqrt(v[0]**2 + v[1]**2)\nprint (vMag)\n```\n\nThis calculation works for vectors of any dimensionality - you just take the square root of the sum of the squared components:\n\n\\begin{equation}\\|\\vec{v}\\| = \\sqrt{v_{1}\\;^{2} + v_{2}\\;^{2} ... + v_{n}\\;^{2}}\\end{equation}\n\nIn Python, *numpy* provides a linear algebra library named **linalg** that makes it easier to work with vectors - you can use the **norm** function in the following code to calculate the magnitude of a vector:\n\n\n```python\nimport numpy as np\n\nvMag = np.linalg.norm(v)\nprint (vMag)\n```\n\n### Calculating Direction\nTo calculate the direction, or *amplitude*, of a vector from its cartesian coordinates, you must employ a little trigonometry. We can get the angle of the vector by calculating the *inverse tangent*; sometimes known as the *arctan* (the *tangent* calculates an angle as a ratio - the inverse tangent, or **tan-1**, expresses this in degrees).\n\nIn any right-angled triangle, the tangent is calculated as the *opposite* over the *adjacent*. 
In a two dimensional vector, this is the *y* value over the *x* value, so for our **v** vector (2,1):\n\n\\begin{equation}tan(\\theta) = \\frac{1}{2}\\end{equation}\n\nThis produces the result ***0.5***, from which we can use a calculator to calculate the inverse tangent to get the angle in degrees:\n\n\\begin{equation}\\theta = tan^{-1} (0.5) \\approx 26.57^{o}\\end{equation}\n\nNote that the direction angle is indicated as ***\u03b8***.\n\nRun the following Python code to confirm this:\n\n\n```python\nimport math\nimport numpy as np\n\nv = np.array([2,1])\nvTan = v[1] / v[0]\nprint ('tan = ' + str(vTan))\nvAtan = math.atan(vTan)\n# atan returns the angle in radians, so convert to degrees\nprint('inverse-tan = ' + str(math.degrees(vAtan)))\n```\n\nThere is an added complication however, because if the value for *x* or *y* (or both) is negative, the orientation of the vector is not standard, and a calculator can give you the wrong tan-1 value. To ensure you get the correct direction for your vector, use the following rules:\n- Both *x* and *y* are positive: Use the tan-1 value.\n- *x* is negative, *y* is positive: Add 180 to the tan-1 value.\n- Both *x* and *y* are negative: Add 180 to the tan-1 value.\n- *x* is positive, *y* is negative: Add 360 to the tan-1 value.\n\nTo understand why we need to do this, think of it this way. A vector can be pointing in any direction through a 360 degree arc. Let's break that circle into four quadrants with the x and y axis through the center. Angles can be measured from the x axis in both the positive (counter-clockwise) and negative (clockwise) directions. We'll number the quadrants in the positive (counter-clockwise) direction (which is how we measure the *positive* angle) like this:\n\n 2 | 1\n - o -\n 3 | 4\n\n\nOK, let's look at 4 example vectors:\n\n 1. Vector [2,4] has positive values for both x and y. The line for this vector travels through the point 0,0 from quadrant 3 to quadrant 1. Tan-1 of 4/2 is around 63.4 degrees, which is the positive angle from the x axis to the vector line - so this is the direction of the vector.\n 2. Vector [-2,4] has a negative x and positive y. The line for this vector travels through point 0,0 from quadrant 4 to quadrant 2. Tan-1 of 4/-2 is around -63.4 degrees, which is the *negative* angle from x to the vector line; but in the wrong direction (as if the vector was travelling from quadrant 2 towards quadrant 4). So we need the opposite direction, which we get by adding 180.\n 3. Vector [-2,-4] has negative x and y. The line for the vector travels through 0,0 from quadrant 1 to quadrant 3. Tan-1 of -4/-2 is around 63.4 degrees, which is the angle between the x axis and the line, but again in the opposite direction, from quadrant 3 to quadrant 1; we need to go a further 180 degrees to reflect the correct direction.\n 4. Vector [2,-4] has positive x and negative y. It travels through 0,0 from quadrant 2 to quadrant 4. Tan-1 of -4/2 is around -63.4 degrees, which is the *negative* angle from the x axis to the vector line. Technically it's correct, the line is travelling down and to the right at an angle of -63.4 degrees; but we want to express the *positive* (counter-clockwise) angle, so we add 360.\n\n\nIn the previous Python code, we used the *math.**atan*** function to calculate the inverse tangent from a numeric tangent. The *numpy* library includes a similar ***arctan*** function.
When working with numpy arrays, you can also use the *numpy.**arctan2*** function to return the inverse tangent of an array-based vector in *radians*, and you can use the *numpy.**degrees*** function to convert this to degrees. The ***arctan2*** function automatically makes the necessary adjustment for negative *x* and *y* values.\n\n\n```python\nimport numpy as np\n\nv = np.array([2,1])\nprint ('v: ' + str(np.degrees(np.arctan2(v[1], v[0]))))\n\ns = np.array([-3,2])\nprint ('s: ' + str(np.degrees(np.arctan2(s[1], s[0]))))\n```\n\n## Vector Addition\nSo far, we've worked with one vector at a time. What happens when you need to add two vectors.\n\nLet's take a look at an example, we already have a vector named **v**, as defined here:\n\\begin{equation}\\vec{v} = \\begin{bmatrix}2 \\\\ 1 \\end{bmatrix}\\end{equation}\nNow let's create a second vector, and called **s** like this:\n\\begin{equation}\\vec{s} = \\begin{bmatrix}-3 \\\\ 2 \\end{bmatrix}\\end{equation}\n\nRun the cell below to create **s** and plot it together with **v**:\n\n\n```python\nimport math\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nv = np.array([2,1])\ns = np.array([-3,2])\nprint (s)\n\n# Plot v and s\nvecs = np.array([v,s])\norigin = [0], [0]\nplt.axis('equal')\nplt.grid()\nplt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))\nplt.quiver(*origin, vecs[:,0], vecs[:,1], color=['r', 'b'], scale=10)\nplt.show()\n```\n\nYou can see in the plot that the two vectors have different directions and magnitudes. So what happens when we add them together?\n\nHere's the formula:\n\\begin{equation}\\vec{z} = \\vec{v}+\\vec{s}\\end{equation}\n\nIn terms of our vector matrices, this looks like this:\n\\begin{equation}\\vec{z} = \\begin{bmatrix}2 \\\\ 1 \\end{bmatrix} + \\begin{bmatrix}-3 \\\\ 2 \\end{bmatrix}\\end{equation}\n\nWhich gives the following result:\n\\begin{equation}\\vec{z} = \\begin{bmatrix}2 \\\\ 1 \\end{bmatrix} + \\begin{bmatrix}-3 \\\\ 2 \\end{bmatrix} = \\begin{bmatrix}-1 \\\\ 3 \\end{bmatrix}\\end{equation}\n\nLet's verify that Python gives the same result:\n\n\n```python\nz = v + s\nprint(z)\n```\n\nSo what does that look like on our plot?\n\n\n```python\nvecs = np.array([v,s,z])\norigin = [0], [0]\nplt.axis('equal')\nplt.grid()\nplt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))\nplt.quiver(*origin, vecs[:,0], vecs[:,1], color=['r', 'b', 'g'], scale=10)\nplt.show()\n```\n\nSo what's going on here?\nWell, we added the dimensions of **s** to the dimensions of **v** to describe a new vector **z**. Let's break that down:\n- The dimensions of **v** are (2,1), so from our starting point we move 2 units in the *x* dimension (across to the right) and 1 unit in the *y* dimension (up). In the plot, if you start at the (0,0) position, this is shown as the red arrow.\n- Then we're adding **s**, which has dimension values (-3, 2), so we move -3 units in the *x* dimension (across to the left, because it's a negative number) and then 2 units in the *y* dimension (up). 
On the plot, if you start at the head of the red arrow and make these moves, you'll end up at the head of the green arrow, which represents **z**.\n\nThe same is true if you perform the addition operation the other way around and add **v** to **s**, the steps to create **s** are described by the blue arrow, and if you use that as the starting point for **v**, you'll end up at the head of the green arrow, which represents **z**.\n\nNote on the plot that if you simply moved the tail of the blue arrow so that it started at the head of red arrow, its head would end up in the same place as the head of the green arrow; and the same would be true if you moved tail of the red arrow to the head of the blue arrow.\n", "meta": {"hexsha": "54affae0f2307df7681a0f0a9138cfd0f23c319f", "size": 16974, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Vector and Matrices by Hiren/03-01-Vectors.ipynb", "max_stars_repo_name": "awesome-archive/Basic-Mathematics-for-Machine-Learning", "max_stars_repo_head_hexsha": "b6699a9c29ec070a0b1615c46952cb0deeb73b54", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 401, "max_stars_repo_stars_event_min_datetime": "2018-08-29T04:55:26.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-29T11:03:39.000Z", "max_issues_repo_path": "Vector and Matrices by Hiren/03-01-Vectors.ipynb", "max_issues_repo_name": "aligeekk/Basic-Mathematics-for-Machine-Learning", "max_issues_repo_head_hexsha": "8662076d60e89f58a6e81e4ca1377569472760a2", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 5, "max_issues_repo_issues_event_min_datetime": "2018-11-19T23:54:27.000Z", "max_issues_repo_issues_event_max_datetime": "2018-11-20T00:15:39.000Z", "max_forks_repo_path": "Vector and Matrices by Hiren/03-01-Vectors.ipynb", "max_forks_repo_name": "aligeekk/Basic-Mathematics-for-Machine-Learning", "max_forks_repo_head_hexsha": "8662076d60e89f58a6e81e4ca1377569472760a2", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 135, "max_forks_repo_forks_event_min_datetime": "2018-08-29T05:04:00.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-30T07:04:25.000Z", "avg_line_length": 49.0578034682, "max_line_length": 561, "alphanum_fraction": 0.6276658419, "converted": true, "num_tokens": 3564, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9173026663679976, "lm_q2_score": 0.8962513682840824, "lm_q1q2_score": 0.8221337698629549}} {"text": "# First Steps\n\n- [DifferentialEquations.jl docs](https://diffeq.sciml.ai/dev/index.html)\n\n## Radioactive decay\n\nHow to define your model, state variables, initial (and/or boundary) conditions, and parameters.\n\nAs a simple example, the concentration of a decaying nuclear isotope could be described as an exponential decay:\n\n$$\n\\frac{d}{dt}C(t) = - \\lambda C(t)\n$$\n\n**State variable(s)**\n- $C(t)$: The concentration of a decaying nuclear isotope.\n\n**Parameter(s)**\n- $\\lambda$: The rate constant of decay. The half-life $t_{\\frac{1}{2}} = \\frac{ln2}{\\lambda}$\n\n### Standard operating procedures\n\n- Define a model function representing the right-hand-side (RHS) of the sysstem.\n - Out-of-place form: `f(u, p, t)` where `u` is the state variable(s), `p` is the parameter(s), and `t` is the independent variable (usually time). The output is the right hand side (RHS) of the differential equation system.\n - In-place form: `f!(du, u, p, t)`, where the output is saved to `du`. The rest is the same as the out of place form. 
The in-place form has potential performance benefits since it allocates less arrays than the out-of-place form.\n- Initial conditions (`u0`) for the state variable(s).\n- (Optional) parameter(s) `p`.\n- Define a problem (e.g. `ODEProblem`) using the modeling function (`f`), initial conditions (`u0`), simulation time span (`tspan == (tstart, tend)`), and parameter(s) `p`.\n- Solve the problem by calling `solve(prob)`.\n\n\n```julia\nusing DifferentialEquations\nusing Plots\n\n# The Exponential decay ODE model, out-of-place form\nexpdecay(u, p, t) = p * u\n\np = -1.0 # Parameter\nu0 = 1.0 # Initial condition\ntspan = (0.0, 2.0) # Simulation start and end time points\nprob = ODEProblem(expdecay, u0, tspan, p) # Define the problem\nsol = solve(prob) # Solve the problem\n```\n\n### Visualization\n\n`DifferentialEquations.jl` has a plot recipe so that `plot(sol)` directly visualizes the time series of the state variable(s).\n\n\n```julia\nusing Plots\nplot(sol) # Visualize the solution\n```\n\n### Solution handling\n\nDocs: https://diffeq.sciml.ai/stable/basics/solution/\n\n`sol(t)`: solution at time `t` with interpolations.\n\n\n```julia\nsol(1.0) # \n```\n\n`sol.t`: time points of the solution. Notice *t=1* may not in one of the time points.\n\n\n```julia\nsol.t\n```\n\n`sol.u`: The solution at time points `sol.t`\n\n\n```julia\nsol.u\n```\n\n## The SIR model\n\nA more complicated example is the [SIR model](https://www.maa.org/press/periodicals/loci/joma/the-sir-model-for-spread-of-disease-the-differential-equation-model) describing infectious disease spreading. There are more state variables and parameters.\n\n$$\n\\begin{align}\n\\frac{d}{dt}S(t) &= - \\beta S(t)I(t) \\\\\n\\frac{d}{dt}I(t) &= \\beta S(t)I(t) - \\gamma I(t) \\\\\n\\frac{d}{dt}R(t) &= \\gamma I(t)\n\\end{align}\n$$\n\n**State variable(s)**\n\n- $S(t)$ : the fraction of susceptible people\n- $I(t)$ : the fraction of infectious people\n- $R(t)$ : the fraction of recovered (or removed) people\n\n**Parameter(s)**\n\n- $\\beta$ : the rate of infection when susceptible and infectious people meet\n- $\\gamma$ : the rate of recovery of infectious people\n\n\n```julia\nusing DifferentialEquations\nusing Plots\n\n# SIR model, in-place form\nfunction sir!(du, u, p ,t)\n\ts, i, r = u\n\t\u03b2, \u03b3 = p\n\tv1 = \u03b2 * s * i\n\tv2 = \u03b3 * i\n du[1] = -v1\n du[2] = v1 - v2\n du[3] = v2\n\treturn nothing\nend\n\n# Parameters of the SIR model\np = (\u03b2 = 1.0, \u03b3 = 0.3)\nu0 = [0.99, 0.01, 0.00] # s, i, r\ntspan = (0.0, 20.0)\n\n# Define a problem\nprob = ODEProblem(sir!, u0, tspan, p)\n\n# Solve the problem\nsol = solve(prob)\n\n# Visualize the solution\nplot(sol, label=[\"S\" \"I\" \"R\"], legend=:right)\n```\n\n### Solution handling\n\n`sol[i, j]`: `i`th component at timestep `j`\n\n\n```julia\nsol[2]\n```\n\n\n```julia\nsol[1, 2]\n```\n\n`sol[i, :]`: the timeseries for the `i`th component.\n\n\n```julia\nsol[1, :]\n```\n\n`sol(t,idxs=1)`: the 1st element in time point(s) `t` with interpolation. `t` can be a scalar (single point) or an vector-like sequence. (multiple time points)\n\n\n```julia\nsol(10, idxs=2)\n```\n\n\n```julia\nsol(0.0:0.1:20.0, idxs=2)\n```\n\n## Lorenz system\n\nThe Lorenz system is a system of ordinary differential equations having chaotic solutions for certain parameter values and initial conditions. 
([Wikipedia](https://en.wikipedia.org/wiki/Lorenz_system))\n\n$$\n\\begin{align}\n \\frac{dx}{dt} &=& \\sigma(y-x) \\\\\n \\frac{dy}{dt} &=& x(\\rho - z) -y \\\\\n \\frac{dz}{dt} &=& xy - \\beta z\n\\end{align}\n$$\n\nIn this example, we will use [LabelledArrays.jl](https://github.com/SciML/LabelledArrays.jl) to get DSL-like syntax.\n\n\n```julia\nusing LabelledArrays\nusing DifferentialEquations\nusing Plots\n\nfunction lorenz!(du,u,p,t)\n du.x = p.\u03c3*(u.y-u.x)\n du.y = u.x*(p.\u03c1-u.z) - u.y\n du.z = u.x*u.y - p.\u03b2*u.z\nend\n\nu0 = LVector(x=1.0, y=0.0, z=0.0)\np = LVector(\u03c3=10.0, \u03c1=28.0, \u03b2=8/3)\ntspan = (0.0,100.0)\nprob = ODEProblem(lorenz!,u0,tspan,p)\nsol = solve(prob)\n```\n\n### Visualization\n\n`vars=(1, 2, 3)`: make a phase plot with 1st, 2nd, and the 3rd state variable. With LabelledArrays, you can use symbols instead of numbers.\n\n\n```julia\nplot(sol, vars=(:x, :y, :z))\n```\n", "meta": {"hexsha": "f53773fe7edad445a73249b47d28f8ca9e107c51", "size": 8832, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "docs/intro/01-first-steps.ipynb", "max_stars_repo_name": "ww-juliabook/learn-diffeq", "max_stars_repo_head_hexsha": "a82b9fc2d715315ec1486d94f4a8ffb98187ec9d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "docs/intro/01-first-steps.ipynb", "max_issues_repo_name": "ww-juliabook/learn-diffeq", "max_issues_repo_head_hexsha": "a82b9fc2d715315ec1486d94f4a8ffb98187ec9d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2022-03-19T17:02:19.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-20T09:13:41.000Z", "max_forks_repo_path": "docs/01-first-steps.ipynb", "max_forks_repo_name": "ntumitolab/juliabook-diffeq", "max_forks_repo_head_hexsha": "97fd4893108001d2a415291c5ccec0d2a5de54f8", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 26.1301775148, "max_line_length": 259, "alphanum_fraction": 0.5251358696, "converted": true, "num_tokens": 1628, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9173026573249612, "lm_q2_score": 0.8962513641273354, "lm_q1q2_score": 0.8221337579451262}} {"text": "# Solving equations of motion for a particle in magnetic and electric field\n\nIn the following we discuss the solutions for the equation of motion for a particle in a magnetic and electric field\n\n\\begin{equation}\n m\\mathbf{\\ddot{r} } = q\\mathbf{E} + \\frac{q}{c} \\mathbf{\\dot{r}} \\times \\mathbf{B}\n\\end{equation}\n\ngoverned by the Lorentz Force. 
$\\mathbf{B} = (0, 0, B_z)$, $\\mathbf{E} = (0, E_y, E_z)$ and the initial conditions $\\mathbf{r} = (x_0, y_0, z_0)$, $\\mathbf{v_0} = (v_{0x}, v_{0y}, v_{0z})$.\n\n## Import libraries\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n```\n\n## Define the solutions of the problem\n\nThe following solution for the movement in the x-y plane can be derived by solving above differential equation:\n\n\\begin{split}\n x(t) &= x_0 + \\frac{E_z c}{B_z}t + \\left(v_{0x} - \\frac{E_z c}{Bz}\\right) \\frac{cm}{qB_z}\\sin{\\frac{qB_z}{cm}t} \\\\\n y(t) &= y_0 + \\left(v_{0x} - \\frac{E_z c}{Bz}\\right) \\frac{cm}{qB_z} \\left(\\cos{\\frac{qB_z}{cm}t} - 1\\right) \\\\\n z(t) &= 0\n\\end{split}\n\n\n```python\ndef x_t(t, x_0, v_0x, E_y, B_z, q, c, m):\n return x_0 + ((E_y * c) / B_z) * t + (v_0x - (E_y * c) / B_z) * ((c * m) / (q * B_z)) * np.sin(((q * B_z) / (c * m)) * t)\n```\n\n\n```python\ndef y_t(t, y_0, v_0x, E_y, B_z, q, c, m):\n return y_0 + ((v_0x - (E_y * c) / B_z) * ((c * m) / (q * B_z))) * (np.cos(((q * B_z) / (c * m)) * t) - 1)\n```\n\n### Define initial conditions\n\n\n```python\nx_0 = 0\ny_0 = 0\n\nv_0x = 0\n\nE_y = 1 \nB_z = 1 \n\nc = 1# 299792458\nm = 1# np.exp(-31)\nq = 1#1.6 * np.exp(-19)\n\nt = 0\n```\n\n### Solve and plot\n\n\n```python\nt_start = 0\nt_end = 200\nt = np.linspace(0, t_end, 1000)\n\nx = x_t(t, x_0, v_0x, E_y, B_z, q, c, m)\ny = y_t(t, y_0, v_0x, E_y, B_z, q, c, m)\n```\n\n\n```python\nplt.figure(figsize=(15,10))\n\nplt.subplot(211)\n\nplt.title('Solutions of the x(t) and y(t) coordinates')\n\nplt.ylabel('$x(t)$ in m')\nplt.xlim(xmin=t_start, xmax=t_end)\nplt.plot(t, x)\n\nplt.subplot(212)\n\nplt.ylabel('$y(t)$ in m')\nplt.xlabel('$t$ in s')\nplt.xlim(xmin=t_start, xmax=t_end)\nplt.plot(t, y)\n```\n", "meta": {"hexsha": "37bfbf5c94960268321375e1369c4681dd18a709", "size": 109911, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "DynamicSystems/MotionEquations_MagneticElectricField/MotionEquations_MagneticElectricField.ipynb", "max_stars_repo_name": "Progklui/physicsProjects", "max_stars_repo_head_hexsha": "158481bd38c046a1ca6217dab65f9e80763ead10", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "DynamicSystems/MotionEquations_MagneticElectricField/MotionEquations_MagneticElectricField.ipynb", "max_issues_repo_name": "Progklui/physicsProjects", "max_issues_repo_head_hexsha": "158481bd38c046a1ca6217dab65f9e80763ead10", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "DynamicSystems/MotionEquations_MagneticElectricField/MotionEquations_MagneticElectricField.ipynb", "max_forks_repo_name": "Progklui/physicsProjects", "max_forks_repo_head_hexsha": "158481bd38c046a1ca6217dab65f9e80763ead10", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 552.3165829146, "max_line_length": 105376, "alphanum_fraction": 0.9517973633, "converted": true, "num_tokens": 835, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9626731147976794, "lm_q2_score": 0.8539127548105611, "lm_q1q2_score": 0.8220388514389498}} {"text": "
    \n\n# Python for Finance\n\n**Analyze Big Financial Data**\n\nO'Reilly (2014)\n\nYves Hilpisch\n\n\n\n**Buy the book ** |\nO'Reilly |\nAmazon\n\n**All book codes & IPYNBs** |\nhttp://oreilly.quant-platform.com\n\n**The Python Quants GmbH** | http://tpq.io\n\n**Contact us** | pff@tpq.io\n\n# Mathematical Tools\n\n\n```python\nimport seaborn as sns; sns.set()\nimport matplotlib as mpl\nmpl.rcParams['font.family'] = 'serif'\n```\n\n## Approximation\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n```\n\n\n```python\ndef f(x):\n return np.sin(x) + 0.5 * x\n```\n\n\n```python\nx = np.linspace(-2 * np.pi, 2 * np.pi, 50)\n```\n\n\n```python\nplt.plot(x, f(x), 'b')\nplt.grid(True)\nplt.xlabel('x')\nplt.ylabel('f(x)')\n# tag: sin_plot\n# title: Example function plot\n# size: 60\n```\n\n### Regression\n\n#### Monomials as Basis Functions\n\n\n```python\nreg = np.polyfit(x, f(x), deg=1)\nry = np.polyval(reg, x)\n```\n\n\n```python\nplt.plot(x, f(x), 'b', label='f(x)')\nplt.plot(x, ry, 'r.', label='regression')\nplt.legend(loc=0)\nplt.grid(True)\nplt.xlabel('x')\nplt.ylabel('f(x)')\n# tag: sin_plot_reg_1\n# title: Example function and linear regression\n# size: 60\n```\n\n\n```python\nreg = np.polyfit(x, f(x), deg=5)\nry = np.polyval(reg, x)\n```\n\n\n```python\nplt.plot(x, f(x), 'b', label='f(x)')\nplt.plot(x, ry, 'r.', label='regression')\nplt.legend(loc=0)\nplt.grid(True)\nplt.xlabel('x')\nplt.ylabel('f(x)')\n# tag: sin_plot_reg_2\n# title: Regression with monomials up to order 5\n# size: 60\n```\n\n\n```python\nreg = np.polyfit(x, f(x), 7)\nry = np.polyval(reg, x)\n```\n\n\n```python\nplt.plot(x, f(x), 'b', label='f(x)')\nplt.plot(x, ry, 'r.', label='regression')\nplt.legend(loc=0)\nplt.grid(True)\nplt.xlabel('x')\nplt.ylabel('f(x)')\n# tag: sin_plot_reg_3\n# title: Regression with monomials up to order 7\n# size: 60\n```\n\n\n```python\nnp.allclose(f(x), ry)\n```\n\n\n\n\n False\n\n\n\n\n```python\nnp.sum((f(x) - ry) ** 2) / len(x)\n```\n\n\n\n\n 0.0017769134759517628\n\n\n\n#### Individual Basis Functions\n\n\n```python\nmatrix = np.zeros((3 + 1, len(x)))\nmatrix[3, :] = x ** 3\nmatrix[2, :] = x ** 2\nmatrix[1, :] = x\nmatrix[0, :] = 1\n```\n\n\n```python\nreg = np.linalg.lstsq(matrix.T, f(x))[0]\n```\n\n\n```python\nreg\n```\n\n\n\n\n array([ 3.73659739e-16, 5.62777448e-01, 0.00000000e+00,\n -5.43553615e-03])\n\n\n\n\n```python\nry = np.dot(reg, matrix)\n```\n\n\n```python\nplt.plot(x, f(x), 'b', label='f(x)')\nplt.plot(x, ry, 'r.', label='regression')\nplt.legend(loc=0)\nplt.grid(True)\nplt.xlabel('x')\nplt.ylabel('f(x)')\n# tag: sin_plot_reg_4\n# title: Regression via least-squares function\n# size: 60\n```\n\n\n```python\nmatrix[3, :] = np.sin(x)\nreg = np.linalg.lstsq(matrix.T, f(x))[0]\nry = np.dot(reg, matrix)\n```\n\n\n```python\nplt.plot(x, f(x), 'b', label='f(x)')\nplt.plot(x, ry, 'r.', label='regression')\nplt.legend(loc=0)\nplt.grid(True)\nplt.xlabel('x')\nplt.ylabel('f(x)')\n# tag: sin_plot_reg_5\n# title: Regression using individual functions\n# size: 60\n```\n\n\n```python\nnp.allclose(f(x), ry)\n```\n\n\n\n\n True\n\n\n\n\n```python\nnp.sum((f(x) - ry) ** 2) / len(x)\n```\n\n\n\n\n 3.1327946847380535e-31\n\n\n\n\n```python\nreg\n```\n\n\n\n\n array([ 3.73659739e-16, 5.00000000e-01, 0.00000000e+00,\n 1.00000000e+00])\n\n\n\n#### Noisy Data\n\n\n```python\nxn = np.linspace(-2 * np.pi, 2 * np.pi, 50)\nxn = xn + 0.15 * np.random.standard_normal(len(xn))\nyn = f(xn) + 0.25 * np.random.standard_normal(len(xn))\n```\n\n\n```python\nreg = 
np.polyfit(xn, yn, 7)\nry = np.polyval(reg, xn)\n```\n\n\n```python\nplt.plot(xn, yn, 'b^', label='f(x)')\nplt.plot(xn, ry, 'ro', label='regression')\nplt.legend(loc=0)\nplt.grid(True)\nplt.xlabel('x')\nplt.ylabel('f(x)')\n# tag: sin_plot_reg_6\n# title: Regression with noisy data\n# size: 60\n```\n\n#### Unsorted Data\n\n\n```python\nxu = np.random.rand(50) * 4 * np.pi - 2 * np.pi\nyu = f(xu)\n```\n\n\n```python\nprint(xu[:10].round(2))\nprint(yu[:10].round(2))\n```\n\n [-1.64 5. 4.58 -1.91 -2.85 1.64 2.2 -3.27 -3.39 5.95]\n [-1.82 1.54 1.3 -1.9 -1.71 1.82 1.91 -1.51 -1.45 2.65]\n\n\n\n```python\nreg = np.polyfit(xu, yu, 5)\nry = np.polyval(reg, xu)\n```\n\n\n```python\nplt.plot(xu, yu, 'b^', label='f(x)')\nplt.plot(xu, ry, 'ro', label='regression')\nplt.legend(loc=0)\nplt.grid(True)\nplt.xlabel('x')\nplt.ylabel('f(x)')\n# tag: sin_plot_reg_7\n# title: Regression with unsorted data\n# size: 60\n```\n\n#### Multiple Dimensions\n\n\n```python\ndef fm(p):\n x, y = p\n return np.sin(x) + 0.25 * x + np.sqrt(y) + 0.05 * y ** 2\n```\n\n\n```python\nx = np.linspace(0, 10, 20)\ny = np.linspace(0, 10, 20)\nX, Y = np.meshgrid(x, y)\n # generates 2-d grids out of the 1-d arrays\nZ = fm((X, Y))\nx = X.flatten()\ny = Y.flatten()\n # yields 1-d arrays from the 2-d grids\n```\n\n\n```python\nfrom mpl_toolkits.mplot3d import Axes3D\nimport matplotlib as mpl\n\nfig = plt.figure(figsize=(9, 6))\nax = fig.gca(projection='3d')\nsurf = ax.plot_surface(X, Y, Z, rstride=2, cstride=2, cmap=mpl.cm.coolwarm,\n linewidth=0.5, antialiased=True)\nax.set_xlabel('x')\nax.set_ylabel('y')\nax.set_zlabel('f(x, y)')\nfig.colorbar(surf, shrink=0.5, aspect=5)\n# tag: sin_plot_3d_1\n# title: Function with two parameters\n# size: 60\n```\n\n\n```python\nmatrix = np.zeros((len(x), 6 + 1))\nmatrix[:, 6] = np.sqrt(y)\nmatrix[:, 5] = np.sin(x)\nmatrix[:, 4] = y ** 2\nmatrix[:, 3] = x ** 2\nmatrix[:, 2] = y\nmatrix[:, 1] = x\nmatrix[:, 0] = 1\n```\n\n\n```python\nimport statsmodels.api as sm\n```\n\n\n```python\nmodel = sm.OLS(fm((x, y)), matrix).fit()\n```\n\n\n```python\nmodel.rsquared\n```\n\n\n\n\n 1.0\n\n\n\n\n```python\na = model.params\na\n```\n\n\n\n\n array([ 2.47024623e-15, 2.50000000e-01, 2.57843868e-15,\n -2.04697370e-16, 5.00000000e-02, 1.00000000e+00,\n 1.00000000e+00])\n\n\n\n\n```python\ndef reg_func(a, p):\n x, y = p\n f6 = a[6] * np.sqrt(y)\n f5 = a[5] * np.sin(x)\n f4 = a[4] * y ** 2\n f3 = a[3] * x ** 2\n f2 = a[2] * y\n f1 = a[1] * x\n f0 = a[0] * 1\n return (f6 + f5 + f4 + f3 +\n f2 + f1 + f0)\n```\n\n\n```python\nRZ = reg_func(a, (X, Y))\n```\n\n\n```python\nfig = plt.figure(figsize=(9, 6))\nax = fig.gca(projection='3d')\nsurf1 = ax.plot_surface(X, Y, Z, rstride=2, cstride=2,\n cmap=mpl.cm.coolwarm, linewidth=0.5,\n antialiased=True)\nsurf2 = ax.plot_wireframe(X, Y, RZ, rstride=2, cstride=2,\n label='regression')\nax.set_xlabel('x')\nax.set_ylabel('y')\nax.set_zlabel('f(x, y)')\nax.legend()\nfig.colorbar(surf, shrink=0.5, aspect=5)\n# tag: sin_plot_3d_2\n# title: Higher dimension regression\n# size: 60\n```\n\n### Interpolation\n\n\n```python\nimport scipy.interpolate as spi\n```\n\n\n```python\nx = np.linspace(-2 * np.pi, 2 * np.pi, 25)\n```\n\n\n```python\ndef f(x):\n return np.sin(x) + 0.5 * x\n```\n\n\n```python\nipo = spi.splrep(x, f(x), k=1)\n```\n\n\n```python\niy = spi.splev(x, ipo)\n```\n\n\n```python\nplt.plot(x, f(x), 'b', label='f(x)')\nplt.plot(x, iy, 'r.', label='interpolation')\nplt.legend(loc=0)\nplt.grid(True)\nplt.xlabel('x')\nplt.ylabel('f(x)')\n# tag: sin_plot_ipo_1\n# title: Example plot 
with linear interpolation\n# size: 60\n```\n\n\n```python\nnp.allclose(f(x), iy)\n```\n\n\n\n\n True\n\n\n\n\n```python\nxd = np.linspace(1.0, 3.0, 50)\niyd = spi.splev(xd, ipo)\n```\n\n\n```python\nplt.plot(xd, f(xd), 'b', label='f(x)')\nplt.plot(xd, iyd, 'r.', label='interpolation')\nplt.legend(loc=0)\nplt.grid(True)\nplt.xlabel('x')\nplt.ylabel('f(x)')\n# tag: sin_plot_ipo_2\n# title: Example plot (detail) with linear interpolation\n# size: 60\n```\n\n\n```python\nipo = spi.splrep(x, f(x), k=3)\niyd = spi.splev(xd, ipo)\n```\n\n\n```python\nplt.plot(xd, f(xd), 'b', label='f(x)')\nplt.plot(xd, iyd, 'r.', label='interpolation')\nplt.legend(loc=0)\nplt.grid(True)\nplt.xlabel('x')\nplt.ylabel('f(x)')\n# tag: sin_plot_ipo_3\n# title: Example plot (detail) with cubic splines interpolation\n# size: 60\n```\n\n\n```python\nnp.allclose(f(xd), iyd)\n```\n\n\n\n\n False\n\n\n\n\n```python\nnp.sum((f(xd) - iyd) ** 2) / len(xd)\n```\n\n\n\n\n 1.1349319851436892e-08\n\n\n\n## Convex Optimization\n\n\n```python\ndef fm(p):\n x, y = p\n return (np.sin(x) + 0.05 * x ** 2\n + np.sin(y) + 0.05 * y ** 2)\n```\n\n\n```python\nx = np.linspace(-10, 10, 50)\ny = np.linspace(-10, 10, 50)\nX, Y = np.meshgrid(x, y)\nZ = fm((X, Y))\n```\n\n\n```python\nfig = plt.figure(figsize=(9, 6))\nax = fig.gca(projection='3d')\nsurf = ax.plot_surface(X, Y, Z, rstride=2, cstride=2, cmap=mpl.cm.coolwarm,\n linewidth=0.5, antialiased=True)\nax.set_xlabel('x')\nax.set_ylabel('y')\nax.set_zlabel('f(x, y)')\nfig.colorbar(surf, shrink=0.5, aspect=5)\n# tag: opt_plot_3d\n# title: Function to minimize with two parameters\n# size: 60\n```\n\n\n```python\nimport scipy.optimize as spo\n```\n\n### Global Optimization\n\n\n```python\ndef fo(p):\n x, y = p\n z = np.sin(x) + 0.05 * x ** 2 + np.sin(y) + 0.05 * y ** 2\n if output == True:\n print('%8.4f %8.4f %8.4f' % (x, y, z))\n return z\n```\n\n\n```python\noutput = True\nspo.brute(fo, ((-10, 10.1, 5), (-10, 10.1, 5)), finish=None)\n```\n\n -10.0000 -10.0000 11.0880\n -10.0000 -10.0000 11.0880\n -10.0000 -5.0000 7.7529\n -10.0000 0.0000 5.5440\n -10.0000 5.0000 5.8351\n -10.0000 10.0000 10.0000\n -5.0000 -10.0000 7.7529\n -5.0000 -5.0000 4.4178\n -5.0000 0.0000 2.2089\n -5.0000 5.0000 2.5000\n -5.0000 10.0000 6.6649\n 0.0000 -10.0000 5.5440\n 0.0000 -5.0000 2.2089\n 0.0000 0.0000 0.0000\n 0.0000 5.0000 0.2911\n 0.0000 10.0000 4.4560\n 5.0000 -10.0000 5.8351\n 5.0000 -5.0000 2.5000\n 5.0000 0.0000 0.2911\n 5.0000 5.0000 0.5822\n 5.0000 10.0000 4.7471\n 10.0000 -10.0000 10.0000\n 10.0000 -5.0000 6.6649\n 10.0000 0.0000 4.4560\n 10.0000 5.0000 4.7471\n 10.0000 10.0000 8.9120\n\n\n\n\n\n array([ 0., 0.])\n\n\n\n\n```python\noutput = False\nopt1 = spo.brute(fo, ((-10, 10.1, 0.1), (-10, 10.1, 0.1)), finish=None)\nopt1\n```\n\n\n\n\n array([-1.4, -1.4])\n\n\n\n\n```python\nfm(opt1)\n```\n\n\n\n\n -1.7748994599769203\n\n\n\n### Local Optimization\n\n\n```python\noutput = True\nopt2 = spo.fmin(fo, opt1, xtol=0.001, ftol=0.001, maxiter=15, maxfun=20)\nopt2\n```\n\n -1.4000 -1.4000 -1.7749\n -1.4700 -1.4000 -1.7743\n -1.4000 -1.4700 -1.7743\n -1.3300 -1.4700 -1.7696\n -1.4350 -1.4175 -1.7756\n -1.4350 -1.3475 -1.7722\n -1.4088 -1.4394 -1.7755\n -1.4438 -1.4569 -1.7751\n -1.4328 -1.4427 -1.7756\n -1.4591 -1.4208 -1.7752\n -1.4213 -1.4347 -1.7757\n -1.4235 -1.4096 -1.7755\n -1.4305 -1.4344 -1.7757\n -1.4168 -1.4516 -1.7753\n -1.4305 -1.4260 -1.7757\n -1.4396 -1.4257 -1.7756\n -1.4259 -1.4325 -1.7757\n -1.4259 -1.4241 -1.7757\n -1.4304 -1.4177 -1.7757\n -1.4270 -1.4288 -1.7757\n Warning: Maximum number 
of function evaluations has been exceeded.\n\n\n\n\n\n array([-1.42702972, -1.42876755])\n\n\n\n\n```python\nfm(opt2)\n```\n\n\n\n\n -1.7757246992239009\n\n\n\n\n```python\noutput = False\nspo.fmin(fo, (2.0, 2.0), maxiter=250)\n```\n\n Optimization terminated successfully.\n Current function value: 0.015826\n Iterations: 46\n Function evaluations: 86\n\n\n\n\n\n array([ 4.2710728 , 4.27106945])\n\n\n\n### Constrained Optimization\n\n\n```python\n# function to be minimized\nfrom math import sqrt\ndef Eu(p):\n s, b = p\n return -(0.5 * sqrt(s * 15 + b * 5) + 0.5 * sqrt(s * 5 + b * 12))\n\n# constraints\ncons = ({'type': 'ineq', 'fun': lambda p: 100 - p[0] * 10 - p[1] * 10})\n # budget constraint\nbnds = ((0, 1000), (0, 1000)) # uppper bounds large enough\n```\n\n\n```python\nresult = spo.minimize(Eu, [5, 5], method='SLSQP',\n bounds=bnds, constraints=cons)\n```\n\n\n```python\nresult\n```\n\n\n\n\n fun: -9.700883611487832\n jac: array([-0.48508096, -0.48489535, 0. ])\n message: 'Optimization terminated successfully.'\n nfev: 21\n nit: 5\n njev: 5\n status: 0\n success: True\n x: array([ 8.02547122, 1.97452878])\n\n\n\n\n```python\nresult['x']\n```\n\n\n\n\n array([ 8.02547122, 1.97452878])\n\n\n\n\n```python\n-result['fun']\n```\n\n\n\n\n 9.700883611487832\n\n\n\n\n```python\nnp.dot(result['x'], [10, 10])\n```\n\n\n\n\n 99.999999999999986\n\n\n\n## Integration\n\n\n```python\nimport scipy.integrate as sci\n```\n\n\n```python\ndef f(x):\n return np.sin(x) + 0.5 * x\n```\n\n\n```python\na = 0.5 # left integral limit\nb = 9.5 # right integral limit\nx = np.linspace(0, 10)\ny = f(x)\n```\n\n\n```python\nfrom matplotlib.patches import Polygon\n\nfig, ax = plt.subplots(figsize=(7, 5))\nplt.plot(x, y, 'b', linewidth=2)\nplt.ylim(ymin=0)\n\n# area under the function\n# between lower and upper limit\nIx = np.linspace(a, b)\nIy = f(Ix)\nverts = [(a, 0)] + list(zip(Ix, Iy)) + [(b, 0)]\npoly = Polygon(verts, facecolor='0.7', edgecolor='0.5')\nax.add_patch(poly)\n\n# labels\nplt.text(0.75 * (a + b), 1.5, r\"$\\int_a^b f(x)dx$\",\n horizontalalignment='center', fontsize=20)\n\nplt.figtext(0.9, 0.075, '$x$')\nplt.figtext(0.075, 0.9, '$f(x)$')\n\nax.set_xticks((a, b))\nax.set_xticklabels(('$a$', '$b$'))\nax.set_yticks([f(a), f(b)])\n# tag: sin_integral\n# title: Example function with integral area\n# size: 50\n```\n\n### Numerical Integration\n\n\n```python\nsci.fixed_quad(f, a, b)[0]\n```\n\n\n\n\n 24.366995967084602\n\n\n\n\n```python\nsci.quad(f, a, b)[0]\n```\n\n\n\n\n 24.374754718086752\n\n\n\n\n```python\nsci.romberg(f, a, b)\n```\n\n\n\n\n 24.374754718086713\n\n\n\n\n```python\nxi = np.linspace(0.5, 9.5, 25)\n```\n\n\n```python\nsci.trapz(f(xi), xi)\n```\n\n\n\n\n 24.352733271544516\n\n\n\n\n```python\nsci.simps(f(xi), xi)\n```\n\n\n\n\n 24.374964184550748\n\n\n\n### Integration by Simulation\n\n\n```python\nfor i in range(1, 20):\n np.random.seed(1000)\n x = np.random.random(i * 10) * (b - a) + a\n print(np.sum(f(x)) / len(x) * (b - a))\n```\n\n 24.8047622793\n 26.5229188983\n 26.2655475192\n 26.0277033994\n 24.9995418144\n 23.8818101416\n 23.5279122748\n 23.507857659\n 23.6723674607\n 23.6794104161\n 24.4244017079\n 24.2390053468\n 24.115396925\n 24.4241919876\n 23.9249330805\n 24.1948421203\n 24.1173483782\n 24.1006909297\n 23.7690510985\n\n\n## Symbolic Computation\n\n\n```python\nimport sympy as sy\n```\n\n### Basics\n\n\n```python\nx = sy.Symbol('x')\ny = sy.Symbol('y')\n```\n\n\n```python\ntype(x)\n```\n\n\n\n\n sympy.core.symbol.Symbol\n\n\n\n\n```python\nsy.sqrt(x)\n```\n\n\n\n\n 
sqrt(x)\n\n\n\n\n```python\n3 + sy.sqrt(x) - 4 ** 2\n```\n\n\n\n\n sqrt(x) - 13\n\n\n\n\n```python\nf = x ** 2 + 3 + 0.5 * x ** 2 + 3 / 2\n```\n\n\n```python\nsy.simplify(f)\n```\n\n\n\n\n 1.5*x**2 + 4.5\n\n\n\n\n```python\nsy.init_printing(pretty_print=False, use_unicode=False)\n```\n\n\n```python\nprint(sy.pretty(f))\n```\n\n 2 \n 1.5*x + 4.5\n\n\n\n```python\nprint(sy.pretty(sy.sqrt(x) + 0.5))\n```\n\n ___ \n \\/ x + 0.5\n\n\n\n```python\npi_str = str(sy.N(sy.pi, 400000))\npi_str[:40]\n```\n\n\n\n\n '3.14159265358979323846264338327950288419'\n\n\n\n\n```python\npi_str[-40:]\n```\n\n\n\n\n '8245672736856312185020980470362464176198'\n\n\n\n\n```python\npi_str.find('111272')\n```\n\n\n\n\n 366713\n\n\n\n### Equations\n\n\n```python\nsy.solve(x ** 2 - 1)\n```\n\n\n\n\n [-1, 1]\n\n\n\n\n```python\nsy.solve(x ** 2 - 1 - 3)\n```\n\n\n\n\n [-2, 2]\n\n\n\n\n```python\nsy.solve(x ** 3 + 0.5 * x ** 2 - 1)\n```\n\n\n\n\n [0.858094329496553, -0.679047164748276 - 0.839206763026694*I, -0.679047164748276 + 0.839206763026694*I]\n\n\n\n\n```python\nsy.solve(x ** 2 + y ** 2)\n```\n\n\n\n\n [{x: -I*y}, {x: I*y}]\n\n\n\n### Integration\n\n\n```python\na, b = sy.symbols('a b')\n```\n\n\n```python\nprint(sy.pretty(sy.Integral(sy.sin(x) + 0.5 * x, (x, a, b))))\n```\n\n b \n / \n | \n | (0.5*x + sin(x)) dx\n | \n / \n a \n\n\n\n```python\nint_func = sy.integrate(sy.sin(x) + 0.5 * x, x)\n```\n\n\n```python\nprint(sy.pretty(int_func))\n```\n\n 2 \n 0.25*x - cos(x)\n\n\n\n```python\nFb = int_func.subs(x, 9.5).evalf()\nFa = int_func.subs(x, 0.5).evalf()\n```\n\n\n```python\nFb - Fa # exact value of integral\n```\n\n\n\n\n 24.3747547180867\n\n\n\n\n```python\nint_func_limits = sy.integrate(sy.sin(x) + 0.5 * x, (x, a, b))\nprint(sy.pretty(int_func_limits))\n```\n\n 2 2 \n - 0.25*a + 0.25*b + cos(a) - cos(b)\n\n\n\n```python\nint_func_limits.subs({a : 0.5, b : 9.5}).evalf()\n```\n\n\n\n\n 24.3747547180868\n\n\n\n\n```python\nsy.integrate(sy.sin(x) + 0.5 * x, (x, 0.5, 9.5))\n```\n\n\n\n\n 24.3747547180867\n\n\n\n### Differentiation\n\n\n```python\nint_func.diff()\n```\n\n\n\n\n 0.5*x + sin(x)\n\n\n\n\n```python\nf = (sy.sin(x) + 0.05 * x ** 2\n + sy.sin(y) + 0.05 * y ** 2)\n```\n\n\n```python\ndel_x = sy.diff(f, x)\ndel_x\n```\n\n\n\n\n 0.1*x + cos(x)\n\n\n\n\n```python\ndel_y = sy.diff(f, y)\ndel_y\n```\n\n\n\n\n 0.1*y + cos(y)\n\n\n\n\n```python\nxo = sy.nsolve(del_x, -1.5)\nxo\n```\n\n\n\n\n mpf('-1.4275517787645941')\n\n\n\n\n```python\nyo = sy.nsolve(del_y, -1.5)\nyo\n```\n\n\n\n\n mpf('-1.4275517787645941')\n\n\n\n\n```python\nf.subs({x : xo, y : yo}).evalf() \n # global minimum\n```\n\n\n\n\n -1.77572565314742\n\n\n\n\n```python\nxo = sy.nsolve(del_x, 1.5)\nxo\n```\n\n\n\n\n mpf('1.7463292822528528')\n\n\n\n\n```python\nyo = sy.nsolve(del_y, 1.5)\nyo\n```\n\n\n\n\n mpf('1.7463292822528528')\n\n\n\n\n```python\nf.subs({x : xo, y : yo}).evalf()\n # local minimum\n```\n\n\n\n\n 2.27423381055640\n\n\n\n## Conclusions\n\n## Further Reading\n\n
    \n\nhttp://tpq.io | @dyjh | training@tpq.io\n\n**Quant Platform** |\nhttp://quant-platform.com\n\n**Python for Finance** |\nPython for Finance @ O'Reilly\n\n**Derivatives Analytics with Python** |\nDerivatives Analytics @ Wiley Finance\n\n**Listed Volatility and Variance Derivatives** |\nListed VV Derivatives @ Wiley Finance\n\n**Python Training** |\nPython for Finance University Certificate\n", "meta": {"hexsha": "95b5b12814153c6804d3953215c39e2da39065eb", "size": 846767, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "ipython3/09_Math_Tools.ipynb", "max_stars_repo_name": "ioancw/py4fi", "max_stars_repo_head_hexsha": "bbf7b41d375e4f7b0344bc9b1e97d7910ad1e6ec", "max_stars_repo_licenses": ["CNRI-Python"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-08-25T11:39:29.000Z", "max_stars_repo_stars_event_max_datetime": "2019-08-25T11:39:29.000Z", "max_issues_repo_path": "ipython3/09_Math_Tools.ipynb", "max_issues_repo_name": "ioancw/py4fi", "max_issues_repo_head_hexsha": "bbf7b41d375e4f7b0344bc9b1e97d7910ad1e6ec", "max_issues_repo_licenses": ["CNRI-Python"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ipython3/09_Math_Tools.ipynb", "max_forks_repo_name": "ioancw/py4fi", "max_forks_repo_head_hexsha": "bbf7b41d375e4f7b0344bc9b1e97d7910ad1e6ec", "max_forks_repo_licenses": ["CNRI-Python"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 287.8201903467, "max_line_length": 179588, "alphanum_fraction": 0.920476353, "converted": true, "num_tokens": 6908, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9073122288794595, "lm_q2_score": 0.9059898134030163, "lm_q1q2_score": 0.8220156369407764}} {"text": "```javascript\n%%javascript\nMathJax.Hub.Config({TeX: { equationNumbers: { autoNumber: \"AMS\" } }});\n```\n\n\n \n\n\n# Fourier Series\n## Trigonometric Fourier Series\nAny arbitrary periodic function can be expressed as an infinite sum of weighted sinusoids, this is called a *Fourier series*. The Fourier series can be expressed in three different forms, *exponential*, *amplitude-phase* and *trigonometric*, we will only use the latter. The trigonometric Fourier series of a periodic function $f(t)$ is given by \n\n\\begin{equation} \\label{eq:TrigFourier}\nf(t) = a_0 + \\sum_{n=1}^{\\infty} \\left[a_n \\cos\\left(2\\pi n f_0 t \\right) + b_n \\sin\\left(2\\pi n f_0 t \\right)\\right] \n\\end{equation}\n\nThe relationship between angular frequency, $\\omega$ $[rad/s]$, and the linear frequency, $f$ $[Hz]$ is shown by the following equation\n\n\\begin{equation}\n\\omega=2\\pi f\n\\end{equation} \n\nEquation \\eqref{eq:TrigFourier} can therefor also be written as\n\n$$\nf(t) = a_0 + \\sum_{n=1}^{\\infty} \\left[a_n \\cos\\left( n \\omega_0 t \\right) + b_n \\sin\\left(n \\omega_0 t \\right)\\right] \n$$\n\nThe Fourier component $a_0$ is the dc component (the time average) of function $f(t)$. The Fourier components $a_n$ and $b_n$ are the amplitudes of the cosinusoids and sinusoids, respectively. \nThe frequency $f_0$ is the **fundemental frequency** and the frequency of every trem is an integer multiple of $f_0$, the resulting waves are called **harmonics**. So the first wave of the Fourier series, $n=1$, is the fundamental wave (1st harmonic), the next wave , $n=2$, is called the 2nd harmonic and has the frequency $2f_0$. 
The frequencies of the harmonics are strictly integer multiples of the fundamental frequency: $f_0,\\,2f_0,\\,3f_0,\\, \\ldots \\,nf_0$. \n\nSources: [Fourier Series](https://link.springer.com/chapter/10.1007%2F978-90-481-9443-8_16) and [Trigonometric Fourier Series](https://onlinelibrary.wiley.com/doi/pdf/10.1002/9781118844373.app4).\n\n## Magnitude Spectrum\n\nTo draw the magnitude spectrum for a trigonometric Fourier series we need to consider both $a_n$ and $b_n$. To find the magnitudes to the harmonics of the two coefficients we take the root-sum-square (RSS), [source](https://link.springer.com/chapter/10.1007%2F978-3-642-28818-0_2).\n\n\\begin{equation}\\label{eq:magnitude}\nM_n=\\sqrt{a_n^2 + b_n^2}\n\\end{equation}\n\nThe distinction between amplitude and magnitude can be somewhat confusing, and often these two terms are used interchangeably. Lets try to make the destination as clear as possible how these will be used when we talk about Fourier series. The Fourier components $a_n$ and $b_n$ are the amplitudes of the cosinusoids and sinusoids, respectively, they can have both positive and negative values. When we take the RSS of these two we get the magnitude, which is always positive. The amplitude and magnitude can have the same value in some [cases](https://www.mathworks.com/academia/books/the-intuitive-guide-to-fourier-analysis-and-spectral-estimation-with-matlab-langton.html).\nThe magnitude spectrum for the Fourier series can now be drawn, where the y-axis magnitude and the x-axis is frequency.\n\nTo make a small example we can take the Fourier series of a square wave with the fundamental frequency $f_0=1$ Hz\n\n\\begin{equation}\n\\text{Square}(t) = \\sin{2\\pi t}+\\dfrac{\\sin{3 \\cdot2\\pi t}}{3}+\\dfrac{\\sin{5\\cdot 2\\pi t}}{5} + \\ldots = \\sum_{n=1}^{\\infty}\\dfrac{\\sin{(2n-1)2\\pi t}}{2n - 1}\n\\end{equation}\n\nThis is an odd function, meaning that it is symmetric about the origin. It contains only sinusoids, not cosinusoids, so $a_n=0$. The magnitudes become $Mn=\\sqrt{a_n^2+b_n^2}=\\sqrt{b_n^2}$. For this Fourier series, the amplitudes $b_n$ of the sinusoids will be the same as the magnitude $M_n$, since all the values of $b_n$ are positive. The magnitudes spectrum of the first 5 harmonics can be seen in the following figure\n\n\n\nIn the figure below, the magnitude spectrum and the time series of the signal in the same plot. We call the three first harmonics for $h_1$, $h_2$ and $h_3$, with the magnitudes $M_1=1$, $M_2=\\frac{1}{3}$ and $M_3=\\frac{1}{5}$, and linear frequencies, $f_1=f_0=1$, $f_2=3f_0=3$ and $f_3=5f_0=5$, respectively. \n\n\\begin{align*}\nh_1 = \\underset{\\substack{\\downarrow \\\\ M_1}}{1}\\cdot \\sin{2\\pi\\cdot \\underset{\\substack{\\downarrow \\\\ f_1}}{1} \\cdot x}, \\hspace{8pt}\nh_2 = \\underset{\\substack{\\downarrow \\\\ M_2}}{\\dfrac{1}{3}}\\sin{2\\pi \\cdot \\underset{\\substack{\\downarrow \\\\ f_2}}{3} \\cdot x} , \\hspace{8pt}\nh_3 = \\underset{\\substack{\\downarrow \\\\ M_3}}{\\dfrac{1}{5}}\\sin{2\\pi\\cdot \\underset{\\substack{\\downarrow \\\\ f_3}}{5} \\cdot x}\n\\end{align*}\n\nTaking the sum of $h_1$, $h_2$ and $h_3$ gives us the signal in the blue field, we see that the square wave starts to take form, the more harmonics we add the better the approximation gets. The magnitude spectrum is plotted in the red field. \n\n\n\n```python\n\n```\n", "meta": {"hexsha": "4ddaca134a26622b72fec7de85c701eb6ea77c86", "size": 6565, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "1. 
Fourier Series.ipynb", "max_stars_repo_name": "Bjarten/flc-based-filters", "max_stars_repo_head_hexsha": "4299cc4010f41a52c16770a1e712b2fe2a878dc5", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2019-03-08T02:38:45.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-18T12:13:26.000Z", "max_issues_repo_path": "1. Fourier Series.ipynb", "max_issues_repo_name": "Bjarten/flc-based-filters", "max_issues_repo_head_hexsha": "4299cc4010f41a52c16770a1e712b2fe2a878dc5", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "1. Fourier Series.ipynb", "max_forks_repo_name": "Bjarten/flc-based-filters", "max_forks_repo_head_hexsha": "4299cc4010f41a52c16770a1e712b2fe2a878dc5", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-06-04T16:13:06.000Z", "max_forks_repo_forks_event_max_datetime": "2020-06-04T16:13:06.000Z", "avg_line_length": 55.6355932203, "max_line_length": 684, "alphanum_fraction": 0.6371667936, "converted": true, "num_tokens": 1529, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9073122113355092, "lm_q2_score": 0.9059898127684335, "lm_q1q2_score": 0.8220156204703714}} {"text": "# Inertial Brownian motion simulation\n\nThe Inertial Langevin equation for a particle of mass $m$ and some damping $\\gamma$ writes:\n\n\\begin{equation}\nm\\ddot{x} = -\\gamma \\dot{x} + \\sqrt{2k_\\mathrm{B}T \\gamma} \\mathrm{d}B_t\n\\end{equation}\n\nIntegrating the latter equation using the Euler method, one can replace $\\dot{x}$ by:\n\n\\begin{equation}\n\\dot{x} \\simeq \\frac{x_i - x_{i-1}}{\\tau} ~,\n\\end{equation}\n\n$\\ddot{x}$ by:\n\n\\begin{equation}\n\t\\begin{aligned}\n\t\\ddot{x} &\\simeq \n\t\\frac{\n\t\t\\frac{x_i - x_{i-1}}{\\tau}\n\t\t-\n\t\t\\frac{x_{i-1} - x_{i-2}}{\\tau}\n\t}\n\t{\\tau} \\\\\n\t& = \\frac{x_i - 2x_{i - 1} + x_{i-2}}{\\tau^2} ~.\n\t\\end{aligned}\n\\end{equation}\n\nand finally, $\\mathrm{d}B_t$ by a Gaussian random number $w_i$ with a zero mean value and a $\\tau$ variance, on can write $x_i$ as:\n\n\\begin{equation}\t\n\tx_i = \\frac{2 + \\tau /\\tau_\\mathrm{B}}{1 + \\tau / \\tau_\\mathrm{B} } x_{i-1} \n\t- \\frac{1}{1 + \\tau / \\tau_\\mathrm{B}}x_{i-2}\n\t+ \\frac{\\sqrt{2k_\\mathrm{B}T\\gamma}}{m(1 + \\tau/\\tau_\\mathrm{B})} \\tau w_i ~,\n\\end{equation}\n\n\nIn the following, we use Python to simulate such a movement and check the properties of the mean squared displacement. 
Then, I propose a Cython implementation that permits a $200$x speed improvement on the simulation.\n\n\n```python\n# Import important libraries\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n```\n\n\n```python\n# Just some matplotlib tweaks\nimport matplotlib as mpl\n\nmpl.rcParams[\"xtick.direction\"] = \"in\"\nmpl.rcParams[\"ytick.direction\"] = \"in\"\nmpl.rcParams[\"lines.markeredgecolor\"] = \"k\"\nmpl.rcParams[\"lines.markeredgewidth\"] = 1.5\nmpl.rcParams[\"figure.dpi\"] = 200\nfrom matplotlib import rc\n\nrc(\"font\", family=\"serif\")\nrc(\"text\", usetex=True)\nrc(\"xtick\", labelsize=\"medium\")\nrc(\"ytick\", labelsize=\"medium\")\nrc(\"axes\", labelsize=\"large\")\n\n\ndef cm2inch(value):\n return value / 2.54\n```\n\n\n```python\nN = 1000000 # number of time steps\ntau = 0.01 # simulation time step\nm = 1e-8 # particle mass\na = 1e-6 # radius of the particle\neta = 0.001 # viscosity (here water)\ngamma = 6 * np.pi * eta * a\nkbT = 4e-21\ntauB = m / gamma\n```\n\nWith such properties we have a characteristic diffusion time $\\tau_\\mathrm{B} =0.53$ s.\n\n\n```python\ndef xi(xi1, xi2):\n \"\"\"\n Function that compute the position of a particle using the full Langevin Equation\n \"\"\"\n t = tau / tauB\n wi = np.random.normal(0, np.sqrt(tau))\n return (\n (2 + t) / (1 + t) * xi1\n - 1 / (1 + t) * xi2\n + np.sqrt(2 * kbT * gamma) / (m * (1 + t)) * np.power(tau,1) * wi\n )\n\n```\n\n\n```python\ndef trajectory(N):\n \"\"\"\n Function generating a trajectory of length N.\n \"\"\"\n x = np.zeros(N)\n for i in range(2, len(x)):\n x[i] = xi(x[i - 1], x[i - 2])\n return x\n \n\n```\n\nNow that the functions are setup one can simply generate a trajectory of length $N$ by simply calling the the function ```trajectory()```\n\n\n```python\n# Generate a trajectory of 10e6 points.\nx = trajectory(1000000)\n```\n\n\n```python\nplt.plot(np.arange(len(x))*tau, x)\nplt.title(\"Intertial Brownian trajectory\")\nplt.ylabel(\"$x$ (m)\")\nplt.xlabel(\"$t$ (s)\")\nplt.show()\n```\n\n## Cross checking \n\nWe now check that the simulated trajectory gives us the correct MSD properties to ensure the simulation si done properly. The MSD given by:\n\n\\begin{equation}\n\\mathrm{MSD}(\\Delta t) = \\left. \\langle \\left( x(t) - x(t+\\Delta t \\right)^2 \\rangle \\right|_t ~,\n\\end{equation}\n\nwith $\\Delta t$ a lag time. The MSD, can be computed using the function defined in the cell below. 
For a lag time $\\Delta t \\ll \\tau_B$ we should have:\n\n\\begin{equation}\n\\mathrm{MSD}(\\Delta t) = \\frac{k_\\mathrm{B}T}{m} \\Delta t ^2 ~,\n\\end{equation}\n\nand for $\\Delta t \\gg \\tau_B$:\n\n\\begin{equation}\n\\mathrm{MSD}(\\tau) = 2 D \\Delta t~,\n\\end{equation}\n\nwith $D = k_\\mathrm{B}T / (6 \\pi \\eta a)$.\n\n\n\n```python\nt = np.array([*np.arange(3,10,1), *np.arange(10,100,10), *np.arange(100,1000,100), *np.arange(1000,8000,1000)])\ndef msd(x,Dt):\n \"\"\"Function that return the MSD for a list of time index t for a trajectory x\"\"\"\n _msd = lambda x, t : np.mean((x[:-t] - x[t:])**2)\n return [_msd(x,i) for i in t]\nMSD = msd(x,t)\n```\n\n\n```python\nD = kbT/(6*np.pi*eta*a)\nt_plot = t*tau\nplt.loglog(t*tau,MSD, \"o\")\nplt.plot(t*tau, (2*D*t_plot), \"--\", color = \"k\", label=\"long time theory\")\nplt.plot(t*tau, kbT/m * t_plot**2, \":\", color = \"k\", label=\"short time theory\")\nplt.ylabel(\"MSD (m$^2$)\")\nplt.xlabel(\"$\\Delta t$ (s)\")\nhoriz_data = [1e-8, 1e-17]\nt_horiz = [tauB, tauB]\nplt.plot(t_horiz, horiz_data, \"k\", label=\"$\\\\tau_\\mathrm{B}$\")\nplt.legend()\nplt.show()\n```\n\nThe simulations gives expected results. However, with the computer used, 6 seconds are needed to generate this trajectory. If someone wants to look at fine effects and need to generate millions of trajectories it is too long. In order to fasten the process, in the following I use Cython to generate the trajectory using C language.\n\n## Cython acceleration\n\n\n```python\n# Loading Cython library\n%load_ext Cython\n\n```\n\nWe now write the same functions as in the first part of the appendix. However, we now indicate the type of each variable.\n\n\n```cython\n%%cython\n\nimport cython\ncimport numpy as np\nimport numpy as np\nfrom libc.math cimport sqrt\nctypedef np.float64_t dtype_t\n\ncdef int N = 1000000 # length of the simulation\n\ncdef dtype_t tau = 0.01 # simulation time step\ncdef dtype_t m = 1e-8 # particle mass\ncdef dtype_t a = 1e-6 # radius of the particle \ncdef dtype_t eta = 0.001 # viscosity (here water)\ncdef dtype_t gamma = 6 * 3.14 * eta * a\ncdef dtype_t kbT = 4e-21\ncdef dtype_t tauB = m/gamma\ncdef dtype_t[:] x = np.zeros(N)\n\n@cython.boundscheck(False)\n@cython.wraparound(False)\n@cython.nonecheck(False)\n@cython.cdivision(True) \ncdef dtype_t xi_cython( dtype_t xi1, dtype_t xi2, dtype_t wi):\n cdef dtype_t t = tau / tauB\n return (\n (2 + t) / (1 + t) * xi1\n - 1 / (1 + t) * xi2\n + sqrt(2 * kbT * gamma) / (m * (1 + t)) * tau * wi\n )\n\n@cython.boundscheck(False)\n@cython.wraparound(False)\n@cython.nonecheck(False)\ncdef dtype_t[:] _traj(dtype_t[:] x, dtype_t[:] wi):\n cdef int i\n for i in range(2, N):\n \n x[i] = xi_cython(x[i-1], x[i-2], wi[i])\n return x \n\n\ndef trajectory_cython():\n \n\n cdef dtype_t[:] wi = np.random.normal(0, np.sqrt(tau), N).astype('float64')\n \n \n return _traj(x, wi)\n\n\n```\n\n\n```python\n%timeit trajectory(1000000)\n```\n\n 6.79 s \u00b1 92.2 ms per loop (mean \u00b1 std. dev. of 7 runs, 1 loop each)\n\n\n\n```python\n%timeit trajectory_cython()\n```\n\n 30.6 ms \u00b1 495 \u00b5s per loop (mean \u00b1 std. dev. 
of 7 runs, 10 loops each)\n\n\nAgain, we check that the results given through the use of Cython gives the correct MSD\n\n\n```python\nx=np.asarray(trajectory_cython())\nD = kbT/(6*np.pi*eta*a)\nt_plot = t*tau\nplt.loglog(t*tau,MSD, \"o\")\nplt.plot(t*tau, (2*D*t_plot), \"--\", color = \"k\", label=\"long time theory\")\nplt.plot(t*tau, kbT/m * t_plot**2, \":\", color = \"k\", label=\"short time theory\")\n\nhoriz_data = [1e-8, 1e-17]\nt_horiz = [tauB, tauB]\nplt.plot(t_horiz, horiz_data, \"k\", label=\"$\\\\tau_\\mathrm{B}$\")\nplt.xlabel(\"$\\\\Delta t$ (s)\")\nplt.ylabel(\"MSD (m$^2$)\")\nplt.legend()\nplt.show()\n```\n\n### Conclusion\n\nFinally, one only needs $\\simeq 30$ ms to generate the trajectory instead of $\\simeq 7$ s which is a\n$\\simeq 250\\times$ improvement speed. The simulation si here bound to the time needed to generate the array of random numbers which is still done using numpy function. After further checking, Numpy random generation si as optimize as one could do so there is no benefit on cythonizing the random generation. For the sake of completness one could fine a Cython version to generate random numbers. Found thanks to Senderle on [Stackoverflow](https://stackoverflow.com/questions/42767816/what-is-the-most-efficient-and-portable-way-to-generate-gaussian-random-numbers). Tacking into account that, the time improvment on the actual computation of the trajectory **without** the random number generation is done with an $\\simeq 1100\\times$ improvement speed.\n\n\n```cython\n%%cython\nfrom libc.stdlib cimport rand, RAND_MAX\nfrom libc.math cimport log, sqrt\nimport numpy as np\nimport cython\n\ncdef double random_uniform():\n cdef double r = rand()\n return r / RAND_MAX\n\ncdef double random_gaussian():\n cdef double x1, x2, w\n\n w = 2.0\n while (w >= 1.0):\n x1 = 2.0 * random_uniform() - 1.0\n x2 = 2.0 * random_uniform() - 1.0\n w = x1 * x1 + x2 * x2\n\n w = ((-2.0 * log(w)) / w) ** 0.5\n return x1 * w\n\n@cython.boundscheck(False)\ncdef void assign_random_gaussian_pair(double[:] out, int assign_ix):\n cdef double x1, x2, w\n\n w = 2.0\n while (w >= 1.0):\n x1 = 2.0 * random_uniform() - 1.0\n x2 = 2.0 * random_uniform() - 1.0\n w = x1 * x1 + x2 * x2\n\n w = sqrt((-2.0 * log(w)) / w)\n out[assign_ix] = x1 * w\n out[assign_ix + 1] = x2 * w\n\n@cython.boundscheck(False)\ndef my_uniform(int n):\n cdef int i\n cdef double[:] result = np.zeros(n, dtype='f8', order='C')\n for i in range(n):\n result[i] = random_uniform()\n return result\n\n@cython.boundscheck(False)\ndef my_gaussian(int n):\n cdef int i\n cdef double[:] result = np.zeros(n, dtype='f8', order='C')\n for i in range(n):\n result[i] = random_gaussian()\n return result\n\n@cython.boundscheck(False)\ndef my_gaussian_fast(int n):\n cdef int i\n cdef double[:] result = np.zeros(n, dtype='f8', order='C')\n for i in range(n // 2): # Int division ensures trailing index if n is odd.\n assign_random_gaussian_pair(result, i * 2)\n if n % 2 == 1:\n result[n - 1] = random_gaussian()\n\n return result\n\n\n```\n\n\n```python\n%timeit my_gaussian_fast(1000000)\n```\n\n 30.9 ms \u00b1 941 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 10 loops each)\n\n\n\n```python\n%timeit np.random.normal(0,1,1000000)\n```\n\n 26.4 ms \u00b1 1.87 ms per loop (mean \u00b1 std. dev. 
of 7 runs, 10 loops each)\n\n\nOne can thus see, that even a pure C implementation can be slower than the Numpy one, thanks to a great optimization.\n\n\n```python\nfig = plt.figure(figsize = (cm2inch(16), cm2inch(10)))\ngs = fig.add_gridspec(2, 1)\nf_ax1 = fig.add_subplot(gs[0, 0])\nfor i in range(100):\n x = np.asarray(trajectory_cython())* 1e6\n plt.plot(np.arange(N)*tau / 60, x)\n\nplt.ylabel(\"$x$ ($\\mathrm{\\mu m}$)\")\nplt.xlabel(\"$t$ (min)\")\nplt.text(5,100, \"a)\")\nplt.xlim([0,160])\nf_ax1 = fig.add_subplot(gs[1, 0])\n \nx=np.asarray(trajectory_cython())\nD = kbT/(6*np.pi*eta*a)\nplt.loglog(t*tau,MSD, \"o\")\nt_plot = np.linspace(0.5e-2,5e3,1000)\nplt.plot(t_plot, (2*D*t_plot), \"--\", color = \"k\", label=\"long time theory\")\nplt.plot(t_plot, kbT/m * t_plot**2, \":\", color = \"k\", label=\"short time theory\")\n\nhoriz_data = [1e-7, 1e-18]\nt_horiz = [tauB, tauB]\nplt.plot(t_horiz, horiz_data, \"k\", label=\"$\\\\tau_\\mathrm{B}$\")\nplt.ylabel(\"MSD (m$^2$)\")\nplt.xlabel(\"$\\\\Delta t$ (s)\")\nax = plt.gca()\nlocmaj = mpl.ticker.LogLocator(base=10.0, subs=(1.0, ), numticks=100)\nax.yaxis.set_major_locator(locmaj)\nlocmin = mpl.ticker.LogLocator(base=10.0, subs=np.arange(2, 10) * .1,\n numticks=100)\nax.yaxis.set_minor_locator(locmin)\nax.yaxis.set_minor_formatter(mpl.ticker.NullFormatter())\nplt.legend(frameon=False)\nplt.text(0.7e2,1e-15, \"b)\")\nplt.xlim([0.8e-2,1e2])\nplt.ylim([1e-16,1e-10])\nplt.tight_layout(pad=0.4, w_pad=0.5, h_pad=1.0)\n\nplt.savefig(\"intertial_langevin.pdf\")\nplt.show()\n```\n", "meta": {"hexsha": "be32e71bbcec574a7ec139f5ff4e06eb52789e19", "size": 606134, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "03_tail/inertial_sim/inertial_Brownian_motion.ipynb", "max_stars_repo_name": "eXpensia/Confined-Brownian-Motion", "max_stars_repo_head_hexsha": "bd0eb6dea929727ea081dae060a7d1aa32efafd1", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "03_tail/inertial_sim/inertial_Brownian_motion.ipynb", "max_issues_repo_name": "eXpensia/Confined-Brownian-Motion", "max_issues_repo_head_hexsha": "bd0eb6dea929727ea081dae060a7d1aa32efafd1", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "03_tail/inertial_sim/inertial_Brownian_motion.ipynb", "max_forks_repo_name": "eXpensia/Confined-Brownian-Motion", "max_forks_repo_head_hexsha": "bd0eb6dea929727ea081dae060a7d1aa32efafd1", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 914.2292609351, "max_line_length": 369168, "alphanum_fraction": 0.9502997687, "converted": true, "num_tokens": 3724, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9441768588653855, "lm_q2_score": 0.8705972801594706, "lm_q1q2_score": 0.8219978053177169}} {"text": "# Multi-component phase equilibrium\n\nWe can use Raolt's Law to capture the behavior of a liquid-vapor mixture of propane and n-butane, at a mixture pressure of 2 atm. 
\n\nFor each component $i$,\n\\begin{equation}\nx_{f,i} P_{\\text{sat}, i} = x_i P \\;,\n\\end{equation}\nwhere $x_i$ is the mole fraction of component $i$ in the gas phase, $x_{f,i}$ is the mole fraction of component $i$ in the liquid phase, $P$ is the mixture pressure, and $P_{\\text{sat}, i}$ is the saturation pressure of component $i$ at the mixture temperature.\n\nFirst, given a mixture with equal mole fractions of propane and n-butane in the liquid phase (0.5 and 0.5), find the equilibrium temperature and the gas-phase mole fractions:\n\n\n```python\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n# these are mostly for making the saved figures nicer\nfrom IPython.display import set_matplotlib_formats\nset_matplotlib_formats('pdf', 'png')\nplt.rcParams['savefig.dpi'] = 150\nplt.rcParams['figure.dpi']= 150\n\nimport numpy as np\nfrom scipy.optimize import root\nfrom CoolProp.CoolProp import PropsSI\n\nfrom pint import UnitRegistry\nureg = UnitRegistry()\nQ_ = ureg.Quantity\n```\n\nSpecify the known data:\n\n\n```python\npressure = Q_(2, 'atm')\ncomponents = ['propane', 'butane']\n\nmole_fraction_liquid = np.zeros(2)\nmole_fraction_liquid[0] = 0.5\nmole_fraction_liquid[1] = 1.0 - mole_fraction_liquid[0]\n```\n\nWe then write a function for the system of equations we need to solve to find our unknowns: the equilibrium temperature, and the two mole fractions of the components in the vapor phase. So, we need three equations:\n\\begin{align}\nx_1 + x_2 &= 1 \\\\\nx_{f,1} P_{\\text{sat}, 1} &= x_1 P \\\\\nx_{f,2} P_{\\text{sat}, 2} &= x_2 P\n\\end{align}\nwhich come from conservation of mass and Raoult's Law.\n\nWe'll use the `root()` function, which finds the values of our unknowns that make our equations equal to zero (i.e., the roots). To use this, we need to make our equations all equal zero, such that the correct values of the unknowns satisfy them:\n\\begin{align}\nx_1 + x_2 - 1 &= 0 \\\\\nx_{f,1} P_{\\text{sat}, 1} - x_1 P &= 0 \\\\\nx_{f,2} P_{\\text{sat}, 2} - x_2 P &= 0\n\\end{align}\n\nNow, we can define the function, then solve for the roots:\n\n\n```python\ndef equilibrium(xvals, pressure, components, mole_fraction_liquid):\n '''Function for finding equilibrium temperature and vapor mole fractions.\n \n xvals[0]: temperature (K)\n xvals[1:]: vapor mole fractions\n '''\n temp = xvals[0]\n mole_fraction_gas = [xvals[1], xvals[2]]\n \n pressure_sat = np.zeros(2)\n pressure_sat[0] = PropsSI('P', 'T', temp, 'Q', 1.0, components[0])\n pressure_sat[1] = PropsSI('P', 'T', temp, 'Q', 1.0, components[1])\n \n return [\n (mole_fraction_liquid[0] * pressure_sat[0] - \n mole_fraction_gas[0] * pressure\n ),\n (mole_fraction_liquid[1] * pressure_sat[1] - \n mole_fraction_gas[1] * pressure\n ),\n mole_fraction_gas[0] + mole_fraction_gas[1] - 1.0\n ]\n```\n\n\n```python\nsol = root(\n equilibrium, [250, 0.5, 0.5],\n args=(pressure.to('Pa').magnitude, components, mole_fraction_liquid,)\n )\n\nprint(f'Equilibrium temperature: {sol.x[0]: .2f} K')\nprint(f'Gas mole fraction of {components[0]}: {sol.x[1]: .3f}')\nprint(f'Gas mole fraction of {components[1]}: {sol.x[2]: .3f}')\n```\n\n Equilibrium temperature: 262.48 K\n Gas mole fraction of propane: 0.833\n Gas mole fraction of butane: 0.167\n\n\nWe see that though the we have equal moles of the components in the liquid phase, the mole fraction of propane is over 80% in the vapor phase.\n\nWe can use the same approach to show the relationship between liquid and gas mole fraction for the two components:\n\n\n```python\n# since we just have two 
components and the mole \n# fractions must sum to 1.0, we can just create\n# a single array for the first component\nmole_fractions_liquid = np.linspace(0, 1, 51)\n\ntemps = np.zeros(len(mole_fractions_liquid))\nmole_fractions_gas = np.zeros((len(mole_fractions_liquid), 2))\n\nfor idx, x in enumerate(mole_fractions_liquid):\n sol = root(\n equilibrium, [250, 0.5, 0.5],\n args=(pressure.to('Pa').magnitude, components, [x, 1.0 - x],)\n )\n temps[idx] = sol.x[0]\n mole_fractions_gas[idx] = [sol.x[1], sol.x[2]]\n\nfig, ax = plt.subplots(1, 2)\n\nax[0].plot(mole_fractions_liquid, mole_fractions_gas[:,0])\nax[0].set_xlabel(f'Mole fraction of {components[0]} in liquid')\nax[0].set_ylabel(f'Mole fraction of {components[0]} in gas')\nax[0].grid(True)\n\nax[1].plot(1.0 - mole_fractions_liquid, mole_fractions_gas[:,1])\nax[1].set_xlabel(f'Mole fraction of {components[1]} in liquid')\nax[1].set_ylabel(f'Mole fraction of {components[1]} in gas')\nax[1].grid(True)\n\nplt.tight_layout()\nplt.show()\n```\n\nThe line between the liquid and gas mole fractions is the *equilibrium line*.\n\nInterestingly, we see that the gas phase is richer in propane than the liquid phase, and the leaner in butane.\n\nWe can also plot the temperature against the propane mole fractions in the liquid and gas phases:\n\n\n```python\nplt.plot(mole_fractions_liquid, temps, '--', label='bubble point line')\nplt.plot(mole_fractions_gas[:,0], temps, label='dew point line')\n\n# fill space between the lines\nplt.fill_betweenx(\n temps, mole_fractions_liquid, mole_fractions_gas[:,0],\n color='gray', alpha=0.2, label='liquid & vapor'\n )\n\nplt.xlabel('Mole fraction of propane in liquid and gas')\nplt.ylabel('Temperature (K)')\n\n# text box properties\nprops = dict(boxstyle='round', facecolor='white', alpha=0.6)\n\nplt.text(0.1, 255, 'single-phase liquid', bbox=props)\nplt.text(0.7, 275, 'single-phase vapor', bbox=props)\n\nplt.grid(True)\nplt.legend()\nplt.show()\n```\n\nThe line formed by the temperature and liquid mole fraction is the *bubble point line* and the line of temperature and vapor mole fraction is the *dew point line*.\n\nThe region between the lines shows the temperatures where the liquid and vapor phases can coexist; below the bubble point line, the mixture will always be in a liquid phase, and above the dew point line, the mixture will always in a vapor phase.\n\nThis plot can be used to examine the behavior of a two-phase mixture undergoing a heating process. For example, consider a mixture that starts at 250 K and a liquid mole fraction of propane of 0.6, as it is heated. Note that the total mole fraction of propane, 0.6, should remain constant through this process.\n\n1. Initially the mixture is entirely liquid, and the liquid mole fractions remain constant as the mixture is heated. \n2. With increasing temperature, eventually we reach the bubble point line, at a temperature of about 259 K. The first bubbles appear, with a propane mole fraction of 0.89 in the vapor phase found via the *tie line* (the horizontal line connecting to the dew point line at this temperature).\n3. As the mixture is heated more and the temperature increases, it forms a two-phase mixture. The liquid and vapor mole fractions of propane are found via the horizontal tie lines connecting to the bubble and dew point lines. For example, at about 265 K, the propane mole fractions are about 0.44 and 0.79 in the liquid and vapor phases, respectively. 
The quality of the mixture, the ratio of the moles of vapor to the total moles, can be found here using the information about propane:\n\\begin{align}\nQ &= \\frac{n_g}{n} = \\frac{n_g}{n_g + n_f} \\\\\nQ &= \\frac{z_1 - x_{f,1}}{x_1 - x_{f,1}} \\;,\n\\end{align}\nwhere $z_1$ is the total mole fraction of propane ($n_1 / n$), which remains constant through this process. The second expression follows from a mole balance on propane between the liquid ($x_{f,1}$) and vapor ($x_1$) compositions, and it matches the calculation in the code below. At this state, the quality is about 0.45.\n4. Finally, with more heating, the temperature continues to rise until the mixture reaches the dew point line, where the quality is 1 and the mole fraction of propane in the vapor phase is 0.6. This occurs at about 274 K, and the last liquid droplet will have a propane mole fraction of about 0.24.\n5. Further heating will bring the mixture fully into the vapor phase, with a propane mole fraction of 0.6.\n\nIt also shows the *temperature glide*, which is the difference between the temperatures at the dew line and bubble line for a given composition.\n\nThe calculations for these values follow:\n\n\n```python\nprint('Point 2:')\nprint(f'Temperature: {temps[30]: .1f} K')\ntotal_mole_fraction_propane = mole_fractions_liquid[30]\nprint(f'Mole fraction of propane in liquid phase: {mole_fractions_liquid[30]: .3f}')\nprint(f'Mole fraction of propane in the vapor phase: {mole_fractions_gas[30,0]: .3f}')\n```\n\n Point 2:\n Temperature: 258.9 K\n Mole fraction of propane in liquid phase: 0.600\n Mole fraction of propane in the vapor phase: 0.885\n\n\n\n```python\nprint('Point 3:')\nprint(f'Temperature: {temps[22]: .1f} K')\nprint(f'Mole fraction of propane in liquid phase: {mole_fractions_liquid[22]: .3f}')\nprint(f'Mole fraction of propane in the vapor phase: {mole_fractions_gas[22,0]: .3f}')\n\nquality_propane = (\n (total_mole_fraction_propane - mole_fractions_liquid[22]) /\n (mole_fractions_gas[22,0] - mole_fractions_liquid[22])\n )\nprint(f'Quality of propane: {quality_propane: .3f}')\n```\n\n Point 3:\n Temperature: 264.9 K\n Mole fraction of propane in liquid phase: 0.440\n Mole fraction of propane in the vapor phase: 0.794\n Quality of propane: 0.452\n\n\n\n```python\nprint('Point 4 (approximate):')\nprint(f'Temperature: {temps[12]: .1f} K')\nprint(f'Mole fraction of propane in liquid phase: {mole_fractions_liquid[12]: .3f}')\nprint(f'Mole fraction of propane in the vapor phase: {mole_fractions_gas[12,0]: .3f}')\n```\n\n Point 4 (approximate):\n Temperature: 274.7 K\n Mole fraction of propane in liquid phase: 0.240\n Mole fraction of propane in the vapor phase: 0.589\n\n", "meta": {"hexsha": "c3c17e32052d6277f9e6564ab9667f06ec3cc7b8", "size": 193670, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "content/mixtures/multicomponent-phase-equilibrium.ipynb", "max_stars_repo_name": "msb002/computational-thermo", "max_stars_repo_head_hexsha": "9302288217a36e0ce29e320688a3f574921909a5", "max_stars_repo_licenses": ["CC-BY-4.0", "BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "content/mixtures/multicomponent-phase-equilibrium.ipynb", "max_issues_repo_name": "msb002/computational-thermo", "max_issues_repo_head_hexsha": "9302288217a36e0ce29e320688a3f574921909a5", "max_issues_repo_licenses": ["CC-BY-4.0", "BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "content/mixtures/multicomponent-phase-equilibrium.ipynb", "max_forks_repo_name": 
"msb002/computational-thermo", "max_forks_repo_head_hexsha": "9302288217a36e0ce29e320688a3f574921909a5", "max_forks_repo_licenses": ["CC-BY-4.0", "BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 497.8663239075, "max_line_length": 84132, "alphanum_fraction": 0.9420044405, "converted": true, "num_tokens": 2727, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9441768651485395, "lm_q2_score": 0.8705972600147106, "lm_q1q2_score": 0.8219977917675974}} {"text": "# Modeling and Simulation in Python\n\nCase study\n\nCopyright 2017 Allen Downey\n\nLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)\n\n\n\n```python\n# Configure Jupyter so figures appear in the notebook\n%matplotlib inline\n\n# Configure Jupyter to display the assigned value after an assignment\n%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'\n\n# import functions from the modsim.py module\nfrom modsim import *\n```\n\n## Yo-yo\n\nSuppose you are holding a yo-yo with a length of string wound around its axle, and you drop it while holding the end of the string stationary. As gravity accelerates the yo-yo downward, tension in the string exerts a force upward. Since this force acts on a point offset from the center of mass, it exerts a torque that causes the yo-yo to spin.\n\n\n\nThis figure shows the forces on the yo-yo and the resulting torque. The outer shaded area shows the body of the yo-yo. The inner shaded area shows the rolled up string, the radius of which changes as the yo-yo unrolls.\n\nIn this model, we can't figure out the linear and angular acceleration independently; we have to solve a system of equations:\n\n$\\sum F = m a $\n\n$\\sum \\tau = I \\alpha$\n\nwhere the summations indicate that we are adding up forces and torques.\n\nAs in the previous examples, linear and angular velocity are related because of the way the string unrolls:\n\n$\\frac{dy}{dt} = -r \\frac{d \\theta}{dt} $\n\nIn this example, the linear and angular accelerations have opposite sign. As the yo-yo rotates counter-clockwise, $\\theta$ increases and $y$, which is the length of the rolled part of the string, decreases.\n\nTaking the derivative of both sides yields a similar relationship between linear and angular acceleration:\n\n$\\frac{d^2 y}{dt^2} = -r \\frac{d^2 \\theta}{dt^2} $\n\nWhich we can write more concisely:\n\n$ a = -r \\alpha $\n\nThis relationship is not a general law of nature; it is specific to scenarios like this where there is rolling without stretching or slipping.\n\nBecause of the way we've set up the problem, $y$ actually has two meanings: it represents the length of the rolled string and the height of the yo-yo, which decreases as the yo-yo falls. Similarly, $a$ represents acceleration in the length of the rolled string and the height of the yo-yo.\n\nWe can compute the acceleration of the yo-yo by adding up the linear forces:\n\n$\\sum F = T - mg = ma $\n\nWhere $T$ is positive because the tension force points up, and $mg$ is negative because gravity points down.\n\nBecause gravity acts on the center of mass, it creates no torque, so the only torque is due to tension:\n\n$\\sum \\tau = T r = I \\alpha $\n\nPositive (upward) tension yields positive (counter-clockwise) angular acceleration.\n\nNow we have three equations in three unknowns, $T$, $a$, and $\\alpha$, with $I$, $m$, $g$, and $r$ as known parameters. 
It is simple enough to solve these equations by hand, but we can also get SymPy to do it for us.\n\n\n\n\n```python\nfrom sympy import init_printing, symbols, Eq, solve\n\ninit_printing()\n```\n\n\n```python\nT, a, alpha, I, m, g, r = symbols('T a alpha I m g r')\n```\n\n\n```python\neq1 = Eq(a, -r * alpha)\n```\n\n\n```python\neq2 = Eq(T - m * g, m * a)\n```\n\n\n```python\neq3 = Eq(T * r, I * alpha)\n```\n\n\n```python\nsoln = solve([eq1, eq2, eq3], [T, a, alpha])\n```\n\n\n```python\nsoln[T]\n```\n\n\n```python\nsoln[a]\n```\n\n\n```python\nsoln[alpha]\n```\n\n\nThe results are\n\n$T = m g I / I^* $\n\n$a = -m g r^2 / I^* $\n\n$\\alpha = m g r / I^* $\n\nwhere $I^*$ is the augmented moment of inertia, $I + m r^2$.\n\nYou can also see [the derivation of these equations in this video](https://www.youtube.com/watch?v=chC7xVDKl4Q).\n\nTo simulate the system, we don't really need $T$; we can plug $a$ and $\\alpha$ directly into the slope function.\n\n\n```python\nradian = UNITS.radian\nm = UNITS.meter\ns = UNITS.second\nkg = UNITS.kilogram\nN = UNITS.newton\n```\n\n\n\n\nnewton\n\n\n\n**Exercise:** Simulate the descent of a yo-yo. How long does it take to reach the end of the string?\n\nI provide a `Params` object with the system parameters:\n\n* `Rmin` is the radius of the axle. `Rmax` is the radius of the axle plus rolled string.\n\n* `Rout` is the radius of the yo-yo body. `mass` is the total mass of the yo-yo, ignoring the string. \n\n* `L` is the length of the string.\n\n* `g` is the acceleration of gravity.\n\n\n```python\nparams = Params(Rmin = 8e-3 * m,\n Rmax = 16e-3 * m,\n Rout = 35e-3 * m,\n mass = 50e-3 * kg,\n L = 1 * m,\n g = 9.8 * m / s**2,\n t_end = 1 * s)\n```\n\n\n\n\n
           values\n    Rmin   0.008 meter\n    Rmax   0.016 meter\n    Rout   0.035 meter\n    mass   0.05 kilogram\n    L      1 meter\n    g      9.8 meter / second ** 2\n    t_end  1 second\n
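\n\nBefore building the simulation, it is worth plugging these numbers into the closed-form SymPy results above. The check below is a minimal sketch using plain floats (the symbols are re-created locally because `m` was rebound to the meter unit above), and it assumes the same solid-cylinder estimate of the moment of inertia, $I = m R_{out}^2 / 2$, that `make_system` adopts below.\n\n\n```python\n# Rough numerical check of a = -m g r^2 / (I + m r^2) and alpha = m g r / (I + m r^2)\n# at the start of the drop, when the string is fully wound and r = Rmax.\nimport sympy as sp\n\nm_, g_, r_, I_ = sp.symbols('m_ g_ r_ I_', positive=True)\nIstar = I_ + m_ * r_**2          # augmented moment of inertia\na_expr = -m_ * g_ * r_**2 / Istar\nalpha_expr = m_ * g_ * r_ / Istar\n\nnumbers = {m_: 50e-3, g_: 9.8, r_: 16e-3, I_: 50e-3 * (35e-3)**2 / 2}\nprint(a_expr.subs(numbers).evalf(3), alpha_expr.subs(numbers).evalf(4))\n```\n\nThis gives roughly $-2.89$ m/s$^2$ and $180.5$ rad/s$^2$, so the yo-yo should fall much more gently than a freely falling object, which is something to look for in the simulation results below.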
    \n\n\n\nHere's a `make_system` function that computes `I` and `k` based on the system parameters.\n\nI estimated `I` by modeling the yo-yo as a solid cylinder with uniform density ([see here](https://en.wikipedia.org/wiki/List_of_moments_of_inertia)).\n\nIn reality, the distribution of weight in a yo-yo is often designed to achieve desired effects. But we'll keep it simple.\n\n\n```python\ndef make_system(params):\n \"\"\"Make a system object.\n \n params: Params with Rmin, Rmax, Rout, \n mass, L, g, t_end\n \n returns: System with init, k, Rmin, Rmax, mass,\n I, g, ts\n \"\"\"\n unpack(params)\n \n init = State(theta = 0 * radian,\n omega = 0 * radian/s,\n y = L,\n v = 0 * m / s)\n \n I = mass * Rout**2 / 2\n k = (Rmax**2 - Rmin**2) / 2 / L / radian \n \n return System(init=init, k=k,\n Rmin=Rmin, Rmax=Rmax,\n mass=mass, I=I, g=g,\n t_end=t_end)\n```\n\nTesting `make_system`\n\n\n```python\nsystem = make_system(params)\n```\n\n\n\n\n
           values\n    init   theta                  0 radian\nomega    0.0 radi...\n    k      9.6e-05 meter / radian\n    Rmin   0.008 meter\n    Rmax   0.016 meter\n    mass   0.05 kilogram\n    I      3.0625000000000006e-05 kilogram * meter ** 2\n    g      9.8 meter / second ** 2\n    t_end  1 second\n
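\n\nThe constant `k` is defined so that the instantaneous radius of the rolled string, $r = \\sqrt{2 k y + R_{min}^2}$ (the relation from the book that the slope function uses below), sweeps from `Rmax` when the string is fully wound ($y = L$) down to `Rmin` when it has completely unrolled ($y = 0$). A quick check of that claim with plain floats, ignoring units:\n\n\n```python\n# Sanity check: r(y) should run from Rmax at y = L down to Rmin at y = 0\nimport numpy as np\n\nRmin_, Rmax_, L_ = 8e-3, 16e-3, 1.0\nk_ = (Rmax_**2 - Rmin_**2) / (2 * L_)        # roughly 9.6e-05, matching the value above\nr_of_y = lambda y: np.sqrt(2 * k_ * y + Rmin_**2)\nprint(k_, r_of_y(L_), r_of_y(0.0))           # roughly 9.6e-05, 0.016, 0.008\n```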
    \n\n\n\n\n```python\nsystem.init\n```\n\n\n\n\n
           values\n    theta  0 radian\n    omega  0.0 radian / second\n    y      1 meter\n    v      0.0 meter / second\n
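\n\nBefore writing the slope function, a crude estimate gives a target for the exercise question above (how long the yo-yo takes to reach the end of the string). Treating the downward acceleration as constant at its initial value of about 2.9 m/s$^2$ (from the check earlier) over the drop height $L = 1$ m:\n\n\n```python\n# Ballpark fall time assuming constant acceleration; the real time should be\n# somewhat longer because |a| weakens as r shrinks while the string unrolls.\nimport numpy as np\nprint(np.sqrt(2 * 1.0 / 2.89))   # about 0.83 s\n```\n\nThis is only a rough figure, but it suggests the time scale to expect from the solver.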
\n\n\n\nWrite a slope function for this system, using these results from the book:\n\n$ r = \\sqrt{2 k y + R_{min}^2} $ \n\n$ T = m g I / I^* $\n\n$ a = -m g r^2 / I^* $\n\n$ \\alpha = m g r / I^* $\n\nwhere $I^*$ is the augmented moment of inertia, $I + m r^2$.\n\n\n\n```python\n# Solution\n\ndef slope_func(state, t, system):\n \"\"\"Computes the derivatives of the state variables.\n \n state: State object with theta, omega, y, v\n t: time\n system: System object with Rmin, k, I, mass\n \n returns: sequence of derivatives\n \"\"\"\n theta, omega, y, v = state\n unpack(system)\n \n r = sqrt(2*k*y + Rmin**2)\n alpha = mass * g * r / (I + mass * r**2)\n a = -r * alpha\n \n return omega, alpha, v, a \n```\n\nTest your slope function with the initial params.\n\n\n```python\n# Solution\n\nslope_func(system.init, 0*s, system)\n```\n\n\n\n\n    (,\n     ,\n     ,\n     )\n\n\n\nWrite an event function that will stop the simulation when `y` is 0.\n\n\n```python\n# Solution\n\ndef event_func(state, t, system):\n \"\"\"Stops when y is 0.\n \n state: State object with theta, omega, y, v\n t: time\n system: System object with Rmin, k, I, mass\n \n returns: y\n \"\"\"\n theta, omega, y, v = state\n return y\n```\n\nTest your event function:\n\n\n```python\n# Solution\n\nevent_func(system.init, 0*s, system)\n```\n\n\n\n\n1 meter\n\n\n\nThen run the simulation.\n\n\n```python\n# Solution\n\nresults, details = run_ode_solver(system, slope_func, events=event_func, max_step=0.05*s)\ndetails\n```\n\n\n\n\n
              values\n    sol       None\n    t_events  [[0.879217870162702]]\n    nfev      134\n    njev      0\n    nlu       0\n    status    1\n    message   A termination event occurred.\n    success   True\n
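\n\nThe event time recorded above answers the original question: the yo-yo takes roughly 0.88 s to reach the end of the string. Assuming `details` exposes the fields in the table as attributes (in the style of a SciPy solver result; the exact access pattern may differ between ModSim versions), it can be read out directly:\n\n\n```python\n# Read the termination time out of the solver report shown above\nt_unroll = details.t_events[0][0]\nprint(f'Time to unroll the string: {t_unroll:.3f} seconds')\n```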
    \n\n\n\nCheck the final state. If things have gone according to plan, the final value of `y` should be close to 0.\n\n\n```python\n# Solution\n\nresults.tail()\n```\n\n\n\n\n
                  theta       omega             y         v\n    0.706147  44.018235  121.319849  3.272909e-01 -1.765726\n    0.756147  50.267960  128.608840  2.369824e-01 -1.844989\n    0.806147  56.872431  135.495848  1.429620e-01 -1.914052\n    0.856147  63.809267  141.885001  4.576279e-02 -1.971982\n    0.879218  67.114651  144.630643  9.020562e-17 -1.994692\n
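\n\nFor a programmatic version of this check, assuming `results` behaves like the pandas `DataFrame` it prints as above, the last row can be inspected directly:\n\n\n```python\n# The final value of y should be numerically indistinguishable from zero\nfinal_state = results.iloc[-1]\nprint(final_state.y, abs(final_state.y) < 1e-12)\n```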
    \n\n\n\nPlot the results.\n\n`theta` should increase and accelerate.\n\n\n```python\ndef plot_theta(results):\n plot(results.theta, color='C0', label='theta')\n decorate(xlabel='Time (s)',\n ylabel='Angle (rad)')\nplot_theta(results)\n```\n\n`y` should decrease and accelerate down.\n\n\n```python\ndef plot_y(results):\n plot(results.y, color='C1', label='y')\n\n decorate(xlabel='Time (s)',\n ylabel='Length (m)')\n \nplot_y(results)\n```\n\nPlot velocity as a function of time; is the yo-yo accelerating?\n\n\n```python\n# Solution\n\nv = results.v * m / s\nplot(v)\ndecorate(xlabel='Time (s)',\n ylabel='Velocity (m/s)')\n```\n\nUse `gradient` to estimate the derivative of `v`. How goes the acceleration of the yo-yo compare to `g`?\n\n\n```python\n# Solution\n\na = gradient(v)\nplot(a)\ndecorate(xlabel='Time (s)',\n ylabel='Acceleration (m/$s^2$)')\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "e8cece1ab5a1b1d5b5f818315b3019473caaeeb6", "size": 108901, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "code/soln/yoyo_soln.ipynb", "max_stars_repo_name": "SSModelGit/ModSimPy", "max_stars_repo_head_hexsha": "4d1e3d8c3b878ea876e25e6a74509535f685f338", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "code/soln/yoyo_soln.ipynb", "max_issues_repo_name": "SSModelGit/ModSimPy", "max_issues_repo_head_hexsha": "4d1e3d8c3b878ea876e25e6a74509535f685f338", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "code/soln/yoyo_soln.ipynb", "max_forks_repo_name": "SSModelGit/ModSimPy", "max_forks_repo_head_hexsha": "4d1e3d8c3b878ea876e25e6a74509535f685f338", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 93.2371575342, "max_line_length": 19653, "alphanum_fraction": 0.8079448306, "converted": true, "num_tokens": 4028, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9046505376715775, "lm_q2_score": 0.908617900644622, "lm_q1q2_score": 0.8219816723561774}} {"text": "# Advanced topic: Solving the two-layer grey gas model analytically with sympy\n\nThis notebook is part of [The Climate Laboratory](https://brian-rose.github.io/ClimateLaboratoryBook) by [Brian E. J. Rose](http://www.atmos.albany.edu/facstaff/brose/index.html), University at Albany.\n\n____________\n\n## 1. Introducing symbolic computation with sympy\n____________\n\nThese notes elaborate on material in the [lecture on elementary greenhouse models](https://brian-rose.github.io/ClimateLaboratoryBook/courseware/elementary-greenhouse.html), demonstrating use of a computer algebra system to make precise calculations.\n\n### Symbolic math with the sympy package\n\nThe two-layer grey gas model is simple enough that we can work out all the details algebraically. There are three temperatures to keep track of $(T_s, T_0, T_1)$, so we will have 3x3 matrix equations.\n\nWe all know how to work these things out with pencil and paper. But it can be tedious and error-prone. 
\n\nSymbolic math software lets us use the computer to automate a lot of tedious algebra.\n\nThe [sympy](http://www.sympy.org/en/index.html) package is a powerful open-source symbolic math library that is well-integrated into the scientific Python ecosystem. \n\n### Getting started with sympy\n\n\n```python\nimport sympy\n# Allow sympy to produce nice looking equations as output\nsympy.init_printing()\n# Define some symbols for mathematical quantities\n# Assume all quantities are positive (which will help simplify some expressions)\nepsilon, T_e, T_s, T_0, T_1, sigma = \\\n sympy.symbols('epsilon, T_e, T_s, T_0, T_1, sigma', positive=True)\n# So far we have just defined some symbols, e.g.\nT_s\n```\n\n\n```python\n# We have hard-coded the assumption that the temperature is positive\nsympy.ask(T_s>0)\n```\n\n\n\n\n True\n\n\n\n____________\n\n## 2. Coding up the 2-layer grey gas model in sympy\n____________\n\n### Longwave emissions\n\nLet's denote the emissions from each layer as\n\\begin{align}\nE_s &= \\sigma T_s^4 \\\\\nE_0 &= \\epsilon \\sigma T_0^4 \\\\\nE_1 &= \\epsilon \\sigma T_1^4 \n\\end{align}\n\nrecognizing that $E_0$ and $E_1$ contribute to **both** the upwelling and downwelling beams.\n\n\n```python\n# Define these operations as sympy symbols \n# And display as a column vector:\nE_s = sigma*T_s**4\nE_0 = epsilon*sigma*T_0**4\nE_1 = epsilon*sigma*T_1**4\nE = sympy.Matrix([E_s, E_0, E_1])\nE\n```\n\n### Shortwave radiation\n\nSince we have assumed the atmosphere is transparent to shortwave, the incident beam $Q$ passes unchanged from the top to the surface, where a fraction $\\alpha$ is reflected upward out to space.\n\n\n```python\n# Define some new symbols for shortwave radiation\nQ, alpha = sympy.symbols('Q, alpha', positive=True)\n# Create a dictionary to hold our numerical values\ntuned = {}\ntuned[Q] = 341.3 # global mean insolation in W/m2\ntuned[alpha] = 101.9/Q.subs(tuned) # observed planetary albedo\ntuned[sigma] = 5.67E-8 # Stefan-Boltzmann constant in W/m2/K4\ntuned\n# Numerical value for emission temperature\n#T_e.subs(tuned)\n```\n\n### Tracing the upwelling beam of longwave radiation\n\nLet $U$ be the upwelling flux of longwave radiation. \n\nThe upward flux **from the surface to layer 0** is\n\n$$ U_0 = E_s $$\n\n(just the emission from the suface).\n\n\n```python\nU_0 = E_s\nU_0\n```\n\nFollowing this beam upward, we can write the upward flux from layer 0 to layer 1 as the sum of the transmitted component that originated below layer 0 and the new emissions from layer 0:\n\n$$ U_1 = (1-\\epsilon) U_0 + E_0 $$\n\n\n```python\nU_1 = (1-epsilon)*U_0 + E_0\nU_1\n```\n\nContinuing to follow the same beam, the upwelling flux above layer 1 is\n$$ U_2 = (1-\\epsilon) U_1 + E_1 $$\n\n\n```python\nU_2 = (1-epsilon) * U_1 + E_1\n```\n\nSince there is no more atmosphere above layer 1, this upwelling flux is our Outgoing Longwave Radiation for this model:\n\n$$ OLR = U_2 $$\n\n\n```python\nU_2\n```\n\nThe three terms in the above expression represent the **contributions to the total OLR that originate from each of the three levels**. \n\nLet's code this up explicitly for future reference:\n\n\n```python\n# Define the contributions to OLR originating from each level\nOLR_s = (1-epsilon)**2 *sigma*T_s**4\nOLR_0 = epsilon*(1-epsilon)*sigma*T_0**4\nOLR_1 = epsilon*sigma*T_1**4\n\nOLR = OLR_s + OLR_0 + OLR_1\n\nprint( 'The expression for OLR is')\nOLR\n```\n\n### Downwelling beam\n\nLet $D$ be the downwelling longwave beam. 
Since there is no longwave radiation coming in from space, we begin with \n\n\n```python\nfromspace = 0\nD_2 = fromspace\n```\n\nBetween layer 1 and layer 0 the beam contains emissions from layer 1:\n\n$$ D_1 = (1-\\epsilon)D_2 + E_1 = E_1 $$\n\n\n```python\nD_1 = (1-epsilon)*D_2 + E_1\nD_1\n```\n\nFinally between layer 0 and the surface the beam contains a transmitted component and the emissions from layer 0:\n\n$$ D_0 = (1-\\epsilon) D_1 + E_0 = \\epsilon(1-\\epsilon) \\sigma T_1^4 + \\epsilon \\sigma T_0^4$$\n\n\n```python\nD_0 = (1-epsilon)*D_1 + E_0\nD_0\n```\n\nThis $D_0$ is what we call the **back radiation**, i.e. the longwave radiation from the atmosphere to the surface.\n\n____________\n\n\n## 3. Tuning the grey gas model to observations\n____________\n\nIn building our new model we have introduced exactly one parameter, the absorptivity $\\epsilon$. We need to choose a value for $\\epsilon$.\n\nWe will tune our model so that it **reproduces the observed global mean OLR** given **observed global mean temperatures**.\n\nTo get appropriate temperatures for $T_s, T_0, T_1$, revisit the global, annual mean lapse rate plot from NCEP Reanalysis data we first encountered in the [Radiation notes](https://brian-rose.github.io/ClimateLaboratoryBook/courseware/radiation.html).\n\n### Temperatures\n\nFirst, we set \n$$T_s = 288 \\text{ K} $$\n\nFrom the lapse rate plot, an average temperature for the layer between 1000 and 500 hPa is \n\n$$ T_0 = 275 \\text{ K}$$\n\nDefining an average temperature for the layer between 500 and 0 hPa is more ambiguous because of the lapse rate reversal at the tropopause. We will choose\n\n$$ T_1 = 230 \\text{ K}$$\n\nFrom the graph, this is approximately the observed global mean temperature at 275 hPa or about 10 km.\n\n\n```python\n# add to our dictionary of values:\ntuned[T_s] = 288.\ntuned[T_0] = 275.\ntuned[T_1] = 230.\ntuned\n```\n\n### OLR\n\nFrom the [observed global energy budget](https://brian-rose.github.io/ClimateLaboratoryBook/courseware/models-budgets-fun.html#2.-The-observed-global-energy-budget) we set \n\n$$ OLR = 238.5 \\text{ W m}^{-2} $$\n\n### Solving for $\\epsilon$\n\nWe wrote down the expression for OLR as a function of temperatures and absorptivity in our model above. \n\nWe just need to equate this to the observed value and solve a **quadratic equation** for $\\epsilon$.\n\nThis is where the real power of the symbolic math toolkit comes in. \n\nSubsitute in the numerical values we are interested in:\n\n\n```python\n# the .subs() method for a sympy symbol means\n# substitute values in the expression using the supplied dictionary\n# Here we use observed values of Ts, T0, T1 \nOLR2 = OLR.subs(tuned)\nOLR2\n```\n\nWe have a quadratic equation for $\\epsilon$.\n\nNow use the `sympy.solve` function to solve the quadratic:\n\n\n```python\n# The sympy.solve method takes an expression equal to zero\n# So in this case we subtract the tuned value of OLR from our expression\neps_solution = sympy.solve(OLR2 - 238.5, epsilon)\neps_solution\n```\n\nThere are two roots, but the second one is unphysical since we must have $0 < \\epsilon < 1$.\n\nJust for fun, here is a simple of example of *filtering a list* using powerful Python *list comprehension* syntax:\n\n\n```python\n# Give me only the roots that are between zero and 1!\nlist_result = [eps for eps in eps_solution if 0\n\n## 4. 
Level of emission\n____________\n\nEven in this very simple greenhouse model, there is **no single level** at which the OLR is generated.\n\nThe three terms in our formula for OLR tell us the contributions from each level.\n\n\n```python\nOLRterms = sympy.Matrix([OLR_s, OLR_0, OLR_1])\nOLRterms\n```\n\nNow evaluate these expressions for our tuned temperature and absorptivity:\n\n\n```python\nOLRtuned = OLRterms.subs(tuned)\nOLRtuned\n```\n\nSo we are getting about 67 W m$^{-2}$ from the surface, 79 W m$^{-2}$ from layer 0, and 93 W m$^{-2}$ from the top layer.\n\nIn terms of fractional contributions to the total OLR, we have (limiting the output to two decimal places):\n\n\n```python\nsympy.N(OLRtuned / 238.5, 2)\n```\n\nNotice that the largest single contribution is coming from the top layer. This is in spite of the fact that the emissions from this layer are weak, because it is so cold.\n\nComparing to observations, the actual contribution to OLR from the surface is about 22 W m$^{-2}$ (or about 9% of the total), not 67 W m$^{-2}$. So we certainly don't have all the details worked out yet!\n\nAs we will see later, to really understand what sets that observed 22 W m$^{-2}$, we will need to start thinking about the spectral dependence of the longwave absorptivity.\n\n____________\n\n\n## 5. Radiative forcing in the 2-layer grey gas model\n____________\n\nAdding some extra greenhouse absorbers will mean that a greater fraction of incident longwave radiation is absorbed in each layer.\n\nThus **$\\epsilon$ must increase** as we add greenhouse gases.\n\nSuppose we have $\\epsilon$ initially, and the absorptivity increases to $\\epsilon_2 = \\epsilon + \\delta_\\epsilon$.\n\nSuppose further that this increase happens **abruptly** so that there is no time for the temperatures to respond to this change. **We hold the temperatures fixed** in the column and ask how the radiative fluxes change.\n\n**Do you expect the OLR to increase or decrease?**\n\nLet's use our two-layer leaky greenhouse model to investigate the answer.\n\nThe components of the OLR before the perturbation are\n\n\n```python\nOLRterms\n```\n\nAfter the perturbation we have\n\n\n```python\ndelta_epsilon = sympy.symbols('delta_epsilon')\nOLRterms_pert = OLRterms.subs(epsilon, epsilon+delta_epsilon)\nOLRterms_pert\n```\n\nLet's take the difference\n\n\n```python\ndeltaOLR = OLRterms_pert - OLRterms\ndeltaOLR\n```\n\nTo make things simpler, we will neglect the terms in $\\delta_\\epsilon^2$. This is perfectly reasonably because we are dealing with **small perturbations** where $\\delta_\\epsilon << \\epsilon$.\n\nTelling `sympy` to set the quadratic terms to zero gives us\n\n\n```python\ndeltaOLR_linear = sympy.expand(deltaOLR).subs(delta_epsilon**2, 0)\ndeltaOLR_linear\n```\n\nRecall that the three terms are the contributions to the OLR from the three different levels. In this case, the **changes** in those contributions after adding more absorbers.\n\nNow let's divide through by $\\delta_\\epsilon$ to get the normalized change in OLR per unit change in absorptivity:\n\n\n```python\ndeltaOLR_per_deltaepsilon = \\\n sympy.simplify(deltaOLR_linear / delta_epsilon)\ndeltaOLR_per_deltaepsilon\n```\n\nNow look at the **sign** of each term. Recall that $0 < \\epsilon < 1$. 
**Which terms in the OLR go up and which go down?**\n\n**THIS IS VERY IMPORTANT, SO STOP AND THINK ABOUT IT.**\n\nThe contribution from the **surface** must **decrease**, while the contribution from the **top layer** must **increase**.\n\n**When we add absorbers, the average level of emission goes up!**\n\n### \"Radiative forcing\" is the change in radiative flux at TOA after adding absorbers\n\nIn this model, only the longwave flux can change, so we define the radiative forcing as\n\n$$ R = - \\delta OLR $$\n\n(with the minus sign so that $R$ is positive when the climate system is gaining extra energy).\n\nWe just worked out that whenever we add some extra absorbers, the emissions to space (on average) will originate from higher levels in the atmosphere. \n\nWhat does this mean for OLR? Will it increase or decrease?\n\nTo get the answer, we just have to sum up the three contributions we wrote above:\n\n\n```python\nR_per_deltaepsilon = -sum(deltaOLR_per_deltaepsilon)\nR_per_deltaepsilon\n```\n\nIs this a positive or negative number? The key point is this:\n\n**It depends on the temperatures, i.e. on the lapse rate.**\n\n### Greenhouse effect for an isothermal atmosphere\n\nStop and think about this question:\n\nIf the **surface and atmosphere are all at the same temperature**, does the OLR go up or down when $\\epsilon$ increases (i.e. we add more absorbers)?\n\nUnderstanding this question is key to understanding how the greenhouse effect works.\n\n#### Let's solve the isothermal case\n\nWe will just set $T_s = T_0 = T_1$ in the above expression for the radiative forcing.\n\n\n```python\nR_per_deltaepsilon.subs([(T_0, T_s), (T_1, T_s)])\n```\n\nwhich then simplifies to\n\n\n```python\nsympy.simplify(R_per_deltaepsilon.subs([(T_0, T_s), (T_1, T_s)]))\n```\n\n#### The answer is zero\n\nFor an isothermal atmosphere, there is **no change** in OLR when we add extra greenhouse absorbers. Hence, no radiative forcing and no greenhouse effect.\n\nWhy?\n\nThe level of emission still must go up. But since the temperature at the upper level is the **same** as everywhere else, the emissions are exactly the same.\n\n### The radiative forcing (change in OLR) depends on the lapse rate!\n\nFor a more realistic example of radiative forcing due to an increase in greenhouse absorbers, we can substitute in our tuned values for temperature and $\\epsilon$. \n\nWe'll express the answer in W m$^{-2}$ for a 2% increase in $\\epsilon$.\n\n\n```python\ndelta_epsilon = 0.02 * epsilon\ndelta_epsilon\n```\n\nThe three components of the OLR change are\n\n\n```python\n(deltaOLR_per_deltaepsilon * delta_epsilon).subs(tuned)\n```\n\nAnd the net radiative forcing is\n\n\n```python\n(R_per_deltaepsilon*delta_epsilon).subs(tuned)\n```\n\nSo in our example, **the OLR decreases by 2.6 W m$^{-2}$**, or equivalently, the radiative forcing is +2.6 W m$^{-2}$.\n\nWhat we have just calculated is this:\n\n*Given the observed lapse rates, a small increase in absorbers will cause a small decrease in OLR.*\n\nThe greenhouse effect thus gets stronger, and energy will begin to accumulate in the system -- which will eventually cause temperatures to increase as the system adjusts to a new equilibrium.\n\n____________\n\n\n## 6. Radiative equilibrium in the 2-layer grey gas model\n____________\n\nIn the previous section we:\n\n- made no assumptions about the processes that actually set the temperatures. \n- used the model to calculate radiative fluxes, **given observed temperatures**. 
\n- stressed the importance of knowing the lapse rates in order to know how an increase in emission level would affect the OLR, and thus determine the radiative forcing.\n\nA key question in climate dynamics is therefore this:\n\n**What sets the lapse rate?**\n\nIt turns out that lots of different physical processes contribute to setting the lapse rate. \n\nUnderstanding how these processes acts together and how they change as the climate changes is one of the key reasons for which we need more complex climate models.\n\nFor now, we will use our prototype greenhouse model to do the most basic lapse rate calculation: the **radiative equilibrium temperature**.\n\nWe assume that\n\n- the only exchange of energy between layers is longwave radiation\n- equilibrium is achieved when the **net radiative flux convergence** in each layer is zero.\n\n### Compute the radiative flux convergence\n\nFirst, the **net upwelling flux** is just the difference between flux up and flux down:\n\n\n```python\n# Upwelling and downwelling beams as matrices\nU = sympy.Matrix([U_0, U_1, U_2])\nD = sympy.Matrix([D_0, D_1, D_2])\n# Net flux, positive up\nF = U-D\nF\n```\n\n#### Net absorption is the flux convergence in each layer\n\n(difference between what's coming in the bottom and what's going out the top of each layer)\n\n\n```python\n# define a vector of absorbed radiation -- same size as emissions\nA = E.copy()\n\n# absorbed radiation at surface\nA[0] = F[0]\n# Compute the convergence\nfor n in range(2):\n A[n+1] = -(F[n+1]-F[n])\n\nA\n```\n\n#### Radiative equilibrium means net absorption is ZERO in the atmosphere\n\nThe only other heat source is the **shortwave heating** at the **surface**.\n\nIn matrix form, here is the system of equations to be solved:\n\n\n```python\nradeq = sympy.Equality(A, sympy.Matrix([(1-alpha)*Q, 0, 0]))\nradeq\n```\n\nJust as we did for the 1-layer model, it is helpful to rewrite this system using the definition of the **emission temperture** $T_e$\n\n$$ (1-\\alpha) Q = \\sigma T_e^4 $$\n\n\n```python\nradeq2 = radeq.subs([((1-alpha)*Q, sigma*T_e**4)])\nradeq2\n```\n\nIn this form we can see that we actually have a **linear system** of equations for a set of variables $T_s^4, T_0^4, T_1^4$.\n\nWe can solve this matrix problem to get these as functions of $T_e^4$.\n\n\n```python\n# Solve for radiative equilibrium \nfourthpower = sympy.solve(radeq2, [T_s**4, T_1**4, T_0**4])\nfourthpower\n```\n\nThis produces a dictionary of solutions for the fourth power of the temperatures!\n\nA little manipulation gets us the solutions for temperatures that we want:\n\n\n```python\n# need the symbolic fourth root operation\nfrom sympy.simplify.simplify import nthroot\n\nfourthpower_list = [fourthpower[key] for key in [T_s**4, T_0**4, T_1**4]]\nsolution = sympy.Matrix([nthroot(item,4) for item in fourthpower_list])\n# Display result as matrix equation!\nT = sympy.Matrix([T_s, T_0, T_1])\nsympy.Equality(T, solution)\n```\n\nIn more familiar notation, the radiative equilibrium solution is thus\n\n\\begin{align} \nT_s &= T_e \\left( \\frac{2+\\epsilon}{2-\\epsilon} \\right)^{1/4} \\\\\nT_0 &= T_e \\left( \\frac{1+\\epsilon}{2-\\epsilon} \\right)^{1/4} \\\\\nT_1 &= T_e \\left( \\frac{ 1}{2 - \\epsilon} \\right)^{1/4}\n\\end{align}\n\nPlugging in the tuned value $\\epsilon = 0.586$ gives\n\n\n```python\nTsolution = solution.subs(tuned)\n# Display result as matrix equation!\nsympy.Equality(T, Tsolution)\n```\n\nNow we just need to know the Earth's emission temperature $T_e$!\n\n(Which we already know is 
about 255 K)\n\n\n```python\n# Here's how to calculate T_e from the observed values\nsympy.solve(((1-alpha)*Q - sigma*T_e**4).subs(tuned), T_e)\n```\n\n\n```python\n# Need to unpack the list\nTe_value = sympy.solve(((1-alpha)*Q - sigma*T_e**4).subs(tuned), T_e)[0]\nTe_value\n```\n\n#### Now we finally get our solution for radiative equilibrium\n\n\n```python\n# Output 4 significant digits\nTrad = sympy.N(Tsolution.subs([(T_e, Te_value)]), 4)\nsympy.Equality(T, Trad)\n```\n\nCompare these to the values we derived from the **observed lapse rates**:\n\n\n```python\nsympy.Equality(T, T.subs(tuned))\n```\n\nThe **radiative equilibrium** solution is substantially **warmer at the surface** and **colder in the lower troposphere** than reality.\n\nThis is a very general feature of radiative equilibrium, and we will see it again very soon in this course.\n\n____________\n\n\n## 7. Summary\n____________\n\n## Key physical lessons\n\n- Putting a **layer of longwave absorbers** above the surface keeps the **surface substantially warmer**, because of the **backradiation** from the atmosphere (greenhouse effect).\n- The **grey gas** model assumes that each layer absorbs and emits a fraction $\\epsilon$ of its blackbody value, independent of wavelength.\n\n- With **incomplete absorption** ($\\epsilon < 1$), there are contributions to the OLR from every level and the surface (there is no single **level of emission**)\n- Adding more absorbers means that **contributions to the OLR** from **upper levels** go **up**, while contributions from the surface go **down**.\n- This upward shift in the weighting of different levels is what we mean when we say the **level of emission goes up**.\n\n- The **radiative forcing** caused by an increase in absorbers **depends on the lapse rate**.\n- For an **isothermal atmosphere** the radiative forcing is zero and there is **no greenhouse effect**\n- The radiative forcing is positive for our atmosphere **because tropospheric temperatures tends to decrease with height**.\n- Pure **radiative equilibrium** produces a **warm surface** and **cold lower troposphere**.\n- This is unrealistic, and suggests that crucial heat transfer mechanisms are missing from our model.\n\n### And on the Python side...\n\nDid we need `sympy` to work all this out? No, of course not. We could have solved the 3x3 matrix problems by hand. But computer algebra can be very useful and save you a lot of time and error, so it's good to invest some effort into learning how to use it. \n\nHopefully these notes provide a useful starting point.\n\n### A follow-up assignment\n\nYou are now ready to tackle [Assignment 5](../Assignments/Assignment05 -- Radiative forcing in a grey radiation atmosphere.ipynb), where you are asked to extend this grey-gas analysis to many layers. \n\nFor more than a few layers, the analytical approach we used here is no longer very useful. You will code up a numerical solution to calculate OLR given temperatures and absorptivity, and look at how the lapse rate determines radiative forcing for a given increase in absorptivity.\n\n____________\n\n## Credits\n\nThis notebook is part of [The Climate Laboratory](https://brian-rose.github.io/ClimateLaboratoryBook), an open-source textbook developed and maintained by [Brian E. J. 
Rose](http://www.atmos.albany.edu/facstaff/brose/index.html), University at Albany.\n\nIt is licensed for free and open consumption under the\n[Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/) license.\n\nDevelopment of these notes and the [climlab software](https://github.com/brian-rose/climlab) is partially supported by the National Science Foundation under award AGS-1455071 to Brian Rose. Any opinions, findings, conclusions or recommendations expressed here are mine and do not necessarily reflect the views of the National Science Foundation.\n____________\n\n\n```python\n\n```\n", "meta": {"hexsha": "cebecddce6ab0511cb576acec2f35eba6d48bd91", "size": 251366, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "content/courseware/sympy-greenhouse.ipynb", "max_stars_repo_name": "phaustin/ClimateLaboratoryBook", "max_stars_repo_head_hexsha": "bfe3920e868d64bfa45e3ed012700c4b1947f275", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-03-26T07:19:36.000Z", "max_stars_repo_stars_event_max_datetime": "2021-03-26T07:19:36.000Z", "max_issues_repo_path": "content/courseware/sympy-greenhouse.ipynb", "max_issues_repo_name": "rhwhite/ClimateLaboratoryBook", "max_issues_repo_head_hexsha": "d796f86cb092dda837844487351026ff5ebc70ce", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "content/courseware/sympy-greenhouse.ipynb", "max_forks_repo_name": "rhwhite/ClimateLaboratoryBook", "max_forks_repo_head_hexsha": "d796f86cb092dda837844487351026ff5ebc70ce", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-01-11T18:24:42.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-06T03:41:00.000Z", "avg_line_length": 109.3846823325, "max_line_length": 12368, "alphanum_fraction": 0.8451898825, "converted": true, "num_tokens": 5769, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9086178994073576, "lm_q2_score": 0.9046505254608136, "lm_q1q2_score": 0.8219816601419667}} {"text": "```python\nfrom sympy import *\ninit_printing(use_latex='mathjax')\nx, y, z = symbols('x,y,z')\nn, m = symbols('n,m', integer=True)\n```\n\n\n```python\nimport matplotlib.pyplot as plt\n```\n\n# Evaluaci\u00f3n num\u00e9rica\n\nEn esta secci\u00f3n aprenderemos como usar nuestras ecuaciones simb\u00f3licas para conducir c\u00e1lculos num\u00e9ricos\n\n## `.subs` y `.evalf`\n\nLa forma m\u00e1s simple (y m\u00e1s lenta) de evaluar una expresi\u00f3n num\u00e9ricamente es con los m\u00e9todos `.subs` y` .evalf`\n\n\n```python\nsin(x)\n```\n\n\n```python\nsin(x).subs({x: 0})\n```\n\n\n```python\nacos(x).subs({x: -1})\n```\n\n\n```python\nacos(x).subs({x: -1}).evalf()\n```\n\n\n```python\nacos(x).subs({x: -1}).evalf(n=100)\n```\n\n### Ejercicio\n\nEn una secci\u00f3n anterior calculamos la siguiente integral simb\u00f3lica\n\n$$ \\int_y^z x^n dx $$\n\n\n```python\nresult = integrate(x**n, (x, y, z))\nresult\n```\n\nUsa `.subs` y un diccionario con claves (*keys*) `n, y, z` para evaluar el resultado\n\n n == 2\n y == 0\n z == 3\n\n\n```python\n# Evalua la integral resultante en los valores anteriores\n\n\n```\n\n### Ejercicio\n\nEsta integral toma una forma especial cuando $n = -1$. 
Usa `subs` para encontrar la expresi\u00f3n cuando\n\n n == -1\n y == 5\n z == 100\n \nLuego usa `.evalf` para evaluar esta expresi\u00f3n resultante como un flotante.\n\n\n```python\n# Evalua la intergral resultante para los valores {n: -1, y: 5, z: 100}\n# Luego usa evalf para obtener un resultado numerico\n\n\n```\n\n## `lambdify`\n\nLos m\u00e9todos `.subs` y` .evalf` son geniales cuando quieres evaluar una expresi\u00f3n en un solo punto. Cuando quieres evaluar tu expresi\u00f3n en muchos puntos, se vuelven lentos r\u00e1pidamente.\n\nPara resolver este problema, *SymPy* puede reescribir sus expresiones como funciones normales de Python usando la biblioteca *math*, c\u00e1lculos vectorizados usando la biblioteca *NumPy*, c\u00f3digo *C* o *Fortran* usando impresoras de c\u00f3digos, o incluso sistemas m\u00e1s sofisticados.\n\nHablaremos sobre algunos de los temas m\u00e1s avanzados m\u00e1s adelante. Por ahora, `lambdify`...\n\n\n```python\n# function = lambdify(input, output)\n\nf = lambdify(x, x**2)\nf(3)\n```\n\n\n```python\nimport numpy as np\nf = lambdify(x, x**2) # Use numpy backend\ndata = np.array([1, 2, 3, 4, 5], float)\nf(data)\n```\n\n### Ejercicio\n\nAqu\u00ed se muestra hay una funci\u00f3n de onda radial para el \u00e1tomo de carbono para $n=3$, $l=1$\n\n\n```python\nfrom sympy.physics.hydrogen import R_nl\nn = 3\nl = 1\nr = 6 # Carbon\nexpr = R_nl(n, l, x, r)\nexpr\n```\n\nCrea una funci\u00f3n, `f`, que eval\u00faa esta expresi\u00f3n usando el motor (*backend*) *numpy*\n\n\n```python\n# Create Numpy function mapping x to expr with the numpy backend\nf = lambdify(x,expr)\n```\n\n\n```python\nf\n```\n\nPodemos graficar la funci\u00f3n de $x \\in [0, 5]$ con el siguiente c\u00f3digo *numpy*/*matplotlib*\n\n\n```python\nnx = np.linspace(0, 5, 1000)\nplt.plot(nx, f(nx))\n```\n\n### Ejercicio\n\nCrea una funci\u00f3n *numpy* que calcula la derivada de nuestra expresi\u00f3n. 
Grafica el resultado junto con el original.\n\n\n```python\n# Calcula la derivada de expr con respecto a x\n\n\n```\n\n\n```python\n# Crea una funcion fprime usando lambdify\n\n\n```\n\n\n```python\n# Grafica los resultados junto con f(nx)\n\n\n```\n", "meta": {"hexsha": "9ac0307564edd2a1c4432047400934ee714d1add", "size": 6835, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "tutorial_exercises/05-Numeric-Evaluation.ipynb", "max_stars_repo_name": "t3rodrig/sympy-tutorial-es", "max_stars_repo_head_hexsha": "5cd5497f799e889d758a26539781cdc72b1e6a74", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "tutorial_exercises/05-Numeric-Evaluation.ipynb", "max_issues_repo_name": "t3rodrig/sympy-tutorial-es", "max_issues_repo_head_hexsha": "5cd5497f799e889d758a26539781cdc72b1e6a74", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tutorial_exercises/05-Numeric-Evaluation.ipynb", "max_forks_repo_name": "t3rodrig/sympy-tutorial-es", "max_forks_repo_head_hexsha": "5cd5497f799e889d758a26539781cdc72b1e6a74", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 21.359375, "max_line_length": 283, "alphanum_fraction": 0.5221653255, "converted": true, "num_tokens": 943, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9086178919837705, "lm_q2_score": 0.9046505254608135, "lm_q1q2_score": 0.8219816534262147}} {"text": "# Determining the exponent of power-law distributions\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport sklearn.linear_model\nfrom log_binning import log_bin\n```\n\nWe are interested in finding the distribution function $f_Z(z)$ for a given distribution. The distribution we will be looking at is a large sample of uniform numbers to a given power. Below we draw our sample, $z$.\n\n\n```python\nalpha = 1\n\nz = np.sort(np.random.rand(int(1e6)) ** (-(alpha + 1)))\n```\n\nWe sort $z$ in order to get a smooth cumulative distribution function, which we'll find by\n\\begin{align}\n P(Z > z) = N \\sum_{i = 1}^{n} z_i,\n\\end{align}\nwhere $z_i$ is the element in $z$ at position $i$ and $N$ is a normalization factor. 
We normalize by dividing by the largest number in the cumulative sum, i.e., the last number.\n\n\n```python\ndef compute_cdf(z):\n z_sum = np.cumsum(z)\n z_norm = np.max(z_sum)\n\n return z_sum / z_norm\n```\n\n\n```python\ncdf_z = compute_cdf(z)\n\nprint(f\"Area under the cdf-curve: {np.trapz(cdf_z)}\")\n```\n\n Area under the cdf-curve: 7.503479896639883\n\n\nWe plot the cumulative distribution function in a log-log plot in order to see the power-law type behaviour.\n\n\n```python\nfig = plt.figure(figsize=(14, 10))\n\nplt.loglog(z, cdf_z, lw=2)\nplt.title(r\"Log-log plot of $P(Z > z)$\")\nplt.xlabel(r\"X\")\nplt.ylabel(r\"$P(Z > z)$\")\nplt.grid()\nplt.show()\n```\n\nIn this plot we see a log-log plot of the cumulative distribution function for power-law random numbers\n\\begin{align}\n z(x) = x^{-(\\alpha + 1)}.\n\\end{align}\nHaving found the cumulative distribution function, we can compute the actual underlying distribution function, $f_Z(z)$, from the cumulative distribution function by\n\\begin{align}\n f_Z(z) = \\frac{\\text{d} P(Z > z)}{\\text{d} z}.\n\\end{align}\nAs the cumulative distribution function is given as an array, we use `np.gradient` to compute the derivative with respect to $z$.\n\n\n```python\nf_z = np.gradient(cdf_z)\n\nprint(f\"Area under the pdf-curve: {np.trapz(f_z)}\")\n```\n\n Area under the pdf-curve: 0.9999999999995588\n\n\n\n```python\nfig = plt.figure(figsize=(14, 10))\n\nplt.loglog(z, f_z, lw=2)\nplt.grid()\nplt.title(r\"The probability function found form the cumulative distribution function\")\nplt.xlabel(r\"\")\nplt.show()\n```\n\nWe have again used a log-log plot, but this time for the probability density function found from the cumulative distribution function.\n\n## Logarithmic binning\n\nFor ease of use when dealing with power law type distributions, we have created a function `log_bin` in the file `log_binning.py` that computes a histogram with logarithmic binning for a given dataset. 
An exmample is demonstrated below using the sample $z$ drawn from before.\n\n\n```python\nhist, bins, widths = log_bin(z, num_bins=10)\n```\n\n\n```python\nplt.figure(figsize=(14, 10))\n\nplt.bar(bins[1:], hist, widths)\nplt.xscale(\"log\")\nplt.yscale(\"log\")\nplt.title(r\"Logarithmic binning of $z$\")\nplt.show()\n```\n", "meta": {"hexsha": "c9abf2971967fbbe0bc859161d07d829c8399816", "size": 77138, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "project-3/power-law-distributions.ipynb", "max_stars_repo_name": "Schoyen/FYS4460", "max_stars_repo_head_hexsha": "0c6ba1deefbfd5e9d1657910243afc2297c695a3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-08-29T16:29:18.000Z", "max_stars_repo_stars_event_max_datetime": "2019-08-29T16:29:18.000Z", "max_issues_repo_path": "project-3/power-law-distributions.ipynb", "max_issues_repo_name": "Schoyen/FYS4460", "max_issues_repo_head_hexsha": "0c6ba1deefbfd5e9d1657910243afc2297c695a3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "project-3/power-law-distributions.ipynb", "max_forks_repo_name": "Schoyen/FYS4460", "max_forks_repo_head_hexsha": "0c6ba1deefbfd5e9d1657910243afc2297c695a3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-05-27T14:01:36.000Z", "max_forks_repo_forks_event_max_datetime": "2020-05-27T14:01:36.000Z", "avg_line_length": 297.8301158301, "max_line_length": 33086, "alphanum_fraction": 0.9244600586, "converted": true, "num_tokens": 797, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9314625126757597, "lm_q2_score": 0.8824278710924296, "lm_q1q2_score": 0.8219484820628759}} {"text": "```python\nfrom einsteinpy.symbolic.predefined import Schwarzschild, DeSitter, AntiDeSitter, Minkowski, find\nfrom einsteinpy.symbolic import RicciTensor, RicciScalar, ChristoffelSymbols, RiemannCurvatureTensor, WeylTensor\nimport sympy\nfrom sympy import simplify\n\nsympy.init_printing() # for pretty printing\n```\n\n\n```python\nsch = Schwarzschild(c=1) # Define Schwarzschild metric in natural units\nsch.tensor()\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 - \\frac{r_{s}}{r} & 0 & 0 & 0\\\\0 & - \\frac{1}{1 - \\frac{r_{s}}{r}} & 0 & 0\\\\0 & 0 & - r^{2} & 0\\\\0 & 0 & 0 & - r^{2} \\sin^{2}{\\left(\\theta \\right)}\\end{matrix}\\right]$\n\n\n\n\n```python\nMinkowski(c=1).tensor() # Minkowski Spacetime in natural units\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}-1 & 0 & 0 & 0\\\\0 & 1.0 & 0 & 0\\\\0 & 0 & 1.0 & 0\\\\0 & 0 & 0 & 1.0\\end{matrix}\\right]$\n\n\n\n\n```python\nDeSitter().tensor() # de Sitter metric \n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}-1 & 0 & 0 & 0\\\\0 & e^{\\frac{2 x}{\\alpha}} & 0 & 0\\\\0 & 0 & e^{\\frac{2 x}{\\alpha}} & 0\\\\0 & 0 & 0 & e^{\\frac{2 x}{\\alpha}}\\end{matrix}\\right]$\n\n\n\n\n```python\nAntiDeSitter().tensor() # anti de Sitter Metric\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}-1 & 0 & 0 & 0\\\\0 & \\cos^{2}{\\left(t \\right)} & 0 & 0\\\\0 & 0 & \\cos^{2}{\\left(t \\right)} \\sinh^{2}{\\left(\\chi \\right)} & 0\\\\0 & 0 & 0 & \\sin^{2}{\\left(\\theta \\right)} \\cos^{2}{\\left(t \\right)} \\sinh^{2}{\\left(\\chi \\right)}\\end{matrix}\\right]$\n\n\n\n\n```python\nfind(\"sitter\")\n```\n\n\n\n\n ['AntiDeSitter', 'AntiDeSitterStatic', 'DeSitter']\n\n\n\n\n```python\nch = 
ChristoffelSymbols.from_metric(sch) # Compute Christoffel Symbols from Schwarzschild metric\nch.tensor()\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}\\left[\\begin{matrix}0 & \\frac{r_{s}}{2 r^{2} \\left(1 - \\frac{r_{s}}{r}\\right)} & 0 & 0\\\\\\frac{r_{s}}{2 r^{2} \\left(1 - \\frac{r_{s}}{r}\\right)} & 0 & 0 & 0\\\\0 & 0 & 0 & 0\\\\0 & 0 & 0 & 0\\end{matrix}\\right] & \\left[\\begin{matrix}- \\frac{r_{s} \\left(- \\frac{1}{2} + \\frac{r_{s}}{2 r}\\right)}{r^{2}} & 0 & 0 & 0\\\\0 & \\frac{r_{s} \\left(- \\frac{1}{2} + \\frac{r_{s}}{2 r}\\right)}{r^{2} \\left(1 - \\frac{r_{s}}{r}\\right)^{2}} & 0 & 0\\\\0 & 0 & 2 r \\left(- \\frac{1}{2} + \\frac{r_{s}}{2 r}\\right) & 0\\\\0 & 0 & 0 & 2 r \\left(- \\frac{1}{2} + \\frac{r_{s}}{2 r}\\right) \\sin^{2}{\\left(\\theta \\right)}\\end{matrix}\\right] & \\left[\\begin{matrix}0 & 0 & 0 & 0\\\\0 & 0 & \\frac{1}{r} & 0\\\\0 & \\frac{1}{r} & 0 & 0\\\\0 & 0 & 0 & - \\sin{\\left(\\theta \\right)} \\cos{\\left(\\theta \\right)}\\end{matrix}\\right] & \\left[\\begin{matrix}0 & 0 & 0 & 0\\\\0 & 0 & 0 & \\frac{1}{r}\\\\0 & 0 & 0 & \\frac{\\cos{\\left(\\theta \\right)}}{\\sin{\\left(\\theta \\right)}}\\\\0 & \\frac{1}{r} & \\frac{\\cos{\\left(\\theta \\right)}}{\\sin{\\left(\\theta \\right)}} & 0\\end{matrix}\\right]\\end{matrix}\\right]$\n\n\n\n\n```python\nRm1 = RiemannCurvatureTensor.from_christoffels(ch) # Compute Riemann Curvature Tensor from Christoffel Symbols\nRm1.tensor()\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}\\left[\\begin{matrix}0 & 0 & 0 & 0\\\\0 & - \\frac{r_{s}}{r^{2} \\left(r - r_{s}\\right)} & 0 & 0\\\\0 & 0 & \\frac{r_{s}}{2 r} & 0\\\\0 & 0 & 0 & \\frac{r_{s} \\sin^{2}{\\left(\\theta \\right)}}{2 r}\\end{matrix}\\right] & \\left[\\begin{matrix}0 & \\frac{r_{s}}{r^{2} \\left(r - r_{s}\\right)} & 0 & 0\\\\0 & 0 & 0 & 0\\\\0 & 0 & 0 & 0\\\\0 & 0 & 0 & 0\\end{matrix}\\right] & \\left[\\begin{matrix}0 & 0 & - \\frac{r_{s}}{2 r} & 0\\\\0 & 0 & 0 & 0\\\\0 & 0 & 0 & 0\\\\0 & 0 & 0 & 0\\end{matrix}\\right] & \\left[\\begin{matrix}0 & 0 & 0 & - \\frac{r_{s} \\sin^{2}{\\left(\\theta \\right)}}{2 r}\\\\0 & 0 & 0 & 0\\\\0 & 0 & 0 & 0\\\\0 & 0 & 0 & 0\\end{matrix}\\right]\\\\\\left[\\begin{matrix}0 & 0 & 0 & 0\\\\- \\frac{r_{s} \\left(r - r_{s}\\right)}{r^{4}} & 0 & 0 & 0\\\\0 & 0 & 0 & 0\\\\0 & 0 & 0 & 0\\end{matrix}\\right] & \\left[\\begin{matrix}\\frac{r_{s} \\left(r - r_{s}\\right)}{r^{4}} & 0 & 0 & 0\\\\0 & 0 & 0 & 0\\\\0 & 0 & \\frac{r_{s}}{2 r} & 0\\\\0 & 0 & 0 & \\frac{r_{s} \\sin^{2}{\\left(\\theta \\right)}}{2 r}\\end{matrix}\\right] & \\left[\\begin{matrix}0 & 0 & 0 & 0\\\\0 & 0 & - \\frac{r_{s}}{2 r} & 0\\\\0 & 0 & 0 & 0\\\\0 & 0 & 0 & 0\\end{matrix}\\right] & \\left[\\begin{matrix}0 & 0 & 0 & 0\\\\0 & 0 & 0 & - \\frac{r_{s} \\sin^{2}{\\left(\\theta \\right)}}{2 r}\\\\0 & 0 & 0 & 0\\\\0 & 0 & 0 & 0\\end{matrix}\\right]\\\\\\left[\\begin{matrix}0 & 0 & 0 & 0\\\\0 & 0 & 0 & 0\\\\\\frac{r_{s} \\left(r - r_{s}\\right)}{2 r^{4}} & 0 & 0 & 0\\\\0 & 0 & 0 & 0\\end{matrix}\\right] & \\left[\\begin{matrix}0 & 0 & 0 & 0\\\\0 & 0 & 0 & 0\\\\0 & - \\frac{r_{s}}{2 r^{2} \\left(r - r_{s}\\right)} & 0 & 0\\\\0 & 0 & 0 & 0\\end{matrix}\\right] & \\left[\\begin{matrix}- \\frac{r_{s} \\left(r - r_{s}\\right)}{2 r^{4}} & 0 & 0 & 0\\\\0 & \\frac{r_{s}}{2 r^{2} \\left(r - r_{s}\\right)} & 0 & 0\\\\0 & 0 & 0 & 0\\\\0 & 0 & 0 & - \\frac{r_{s} \\sin^{2}{\\left(\\theta \\right)}}{r}\\end{matrix}\\right] & \\left[\\begin{matrix}0 & 0 & 0 & 0\\\\0 & 0 & 0 & 0\\\\0 & 0 & 0 & \\frac{r_{s} \\sin^{2}{\\left(\\theta \\right)}}{r}\\\\0 & 0 & 0 & 
0\\end{matrix}\\right]\\\\\\left[\\begin{matrix}0 & 0 & 0 & 0\\\\0 & 0 & 0 & 0\\\\0 & 0 & 0 & 0\\\\\\frac{r_{s} \\left(r - r_{s}\\right)}{2 r^{4}} & 0 & 0 & 0\\end{matrix}\\right] & \\left[\\begin{matrix}0 & 0 & 0 & 0\\\\0 & 0 & 0 & 0\\\\0 & 0 & 0 & 0\\\\0 & - \\frac{r_{s}}{2 r^{2} \\left(r - r_{s}\\right)} & 0 & 0\\end{matrix}\\right] & \\left[\\begin{matrix}0 & 0 & 0 & 0\\\\0 & 0 & 0 & 0\\\\0 & 0 & 0 & 0\\\\0 & 0 & \\frac{r_{s}}{r} & 0\\end{matrix}\\right] & \\left[\\begin{matrix}- \\frac{r_{s} \\left(r - r_{s}\\right)}{2 r^{4}} & 0 & 0 & 0\\\\0 & \\frac{r_{s}}{2 r^{2} \\left(r - r_{s}\\right)} & 0 & 0\\\\0 & 0 & - \\frac{r_{s}}{r} & 0\\\\0 & 0 & 0 & 0\\end{matrix}\\right]\\end{matrix}\\right]$\n\n\n\n\n```python\nwt = WeylTensor.from_metric(sch) # Compute Weyl tensor from Schwarzschild metric\nwt.tensor()\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}\\left[\\begin{matrix}0 & 0 & 0 & 0\\\\0 & - \\frac{r_{s}}{r^{3}} & 0 & 0\\\\0 & 0 & \\frac{r_{s} \\left(r - r_{s}\\right)}{2 r^{2}} & 0\\\\0 & 0 & 0 & \\frac{r_{s} \\left(r - r_{s}\\right) \\sin^{2}{\\left(\\theta \\right)}}{2 r^{2}}\\end{matrix}\\right] & \\left[\\begin{matrix}0 & \\frac{r_{s}}{r^{3}} & 0 & 0\\\\0 & 0 & 0 & 0\\\\0 & 0 & 0 & 0\\\\0 & 0 & 0 & 0\\end{matrix}\\right] & \\left[\\begin{matrix}0 & 0 & - \\frac{r_{s} \\left(r - r_{s}\\right)}{2 r^{2}} & 0\\\\0 & 0 & 0 & 0\\\\0 & 0 & 0 & 0\\\\0 & 0 & 0 & 0\\end{matrix}\\right] & \\left[\\begin{matrix}0 & 0 & 0 & - \\frac{r_{s} \\left(r - r_{s}\\right) \\sin^{2}{\\left(\\theta \\right)}}{2 r^{2}}\\\\0 & 0 & 0 & 0\\\\0 & 0 & 0 & 0\\\\0 & 0 & 0 & 0\\end{matrix}\\right]\\\\\\left[\\begin{matrix}0 & 0 & 0 & 0\\\\\\frac{r_{s}}{r^{3}} & 0 & 0 & 0\\\\0 & 0 & 0 & 0\\\\0 & 0 & 0 & 0\\end{matrix}\\right] & \\left[\\begin{matrix}- \\frac{r_{s}}{r^{3}} & 0 & 0 & 0\\\\0 & 0 & 0 & 0\\\\0 & 0 & - \\frac{r_{s}}{2 r - 2 r_{s}} & 0\\\\0 & 0 & 0 & - \\frac{r_{s} \\sin^{2}{\\left(\\theta \\right)}}{2 r - 2 r_{s}}\\end{matrix}\\right] & \\left[\\begin{matrix}0 & 0 & 0 & 0\\\\0 & 0 & \\frac{r_{s}}{2 \\left(r - r_{s}\\right)} & 0\\\\0 & 0 & 0 & 0\\\\0 & 0 & 0 & 0\\end{matrix}\\right] & \\left[\\begin{matrix}0 & 0 & 0 & 0\\\\0 & 0 & 0 & \\frac{r_{s} \\sin^{2}{\\left(\\theta \\right)}}{2 \\left(r - r_{s}\\right)}\\\\0 & 0 & 0 & 0\\\\0 & 0 & 0 & 0\\end{matrix}\\right]\\\\\\left[\\begin{matrix}0 & 0 & 0 & 0\\\\0 & 0 & 0 & 0\\\\- \\frac{r_{s} \\left(r - r_{s}\\right)}{2 r^{2}} & 0 & 0 & 0\\\\0 & 0 & 0 & 0\\end{matrix}\\right] & \\left[\\begin{matrix}0 & 0 & 0 & 0\\\\0 & 0 & 0 & 0\\\\0 & \\frac{r_{s}}{2 \\left(r - r_{s}\\right)} & 0 & 0\\\\0 & 0 & 0 & 0\\end{matrix}\\right] & \\left[\\begin{matrix}\\frac{r_{s} \\left(r - r_{s}\\right)}{2 r^{2}} & 0 & 0 & 0\\\\0 & - \\frac{r_{s}}{2 r - 2 r_{s}} & 0 & 0\\\\0 & 0 & 0 & 0\\\\0 & 0 & 0 & r r_{s} \\sin^{2}{\\left(\\theta \\right)}\\end{matrix}\\right] & \\left[\\begin{matrix}0 & 0 & 0 & 0\\\\0 & 0 & 0 & 0\\\\0 & 0 & 0 & - r r_{s} \\sin^{2}{\\left(\\theta \\right)}\\\\0 & 0 & 0 & 0\\end{matrix}\\right]\\\\\\left[\\begin{matrix}0 & 0 & 0 & 0\\\\0 & 0 & 0 & 0\\\\0 & 0 & 0 & 0\\\\- \\frac{r_{s} \\left(r - r_{s}\\right) \\sin^{2}{\\left(\\theta \\right)}}{2 r^{2}} & 0 & 0 & 0\\end{matrix}\\right] & \\left[\\begin{matrix}0 & 0 & 0 & 0\\\\0 & 0 & 0 & 0\\\\0 & 0 & 0 & 0\\\\0 & \\frac{r_{s} \\sin^{2}{\\left(\\theta \\right)}}{2 \\left(r - r_{s}\\right)} & 0 & 0\\end{matrix}\\right] & \\left[\\begin{matrix}0 & 0 & 0 & 0\\\\0 & 0 & 0 & 0\\\\0 & 0 & 0 & 0\\\\0 & 0 & - r r_{s} \\sin^{2}{\\left(\\theta \\right)} & 0\\end{matrix}\\right] & \\left[\\begin{matrix}\\frac{r_{s} 
\\left(r - r_{s}\\right) \\sin^{2}{\\left(\\theta \\right)}}{2 r^{2}} & 0 & 0 & 0\\\\0 & - \\frac{r_{s} \\sin^{2}{\\left(\\theta \\right)}}{2 r - 2 r_{s}} & 0 & 0\\\\0 & 0 & r r_{s} \\sin^{2}{\\left(\\theta \\right)} & 0\\\\0 & 0 & 0 & 0\\end{matrix}\\right]\\end{matrix}\\right]$\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "8ff3dcbeaa8553a56f2864aa7a7c41ead277e305", "size": 45595, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Common Metrics and Spacetimes.ipynb", "max_stars_repo_name": "SheepWaitForWolf/General-Relativity", "max_stars_repo_head_hexsha": "e9eb0f8cc65be9368c6648c8afaa1f8e631516a8", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-06-04T11:01:54.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-04T11:01:54.000Z", "max_issues_repo_path": "Common Metrics and Spacetimes.ipynb", "max_issues_repo_name": "SheepWaitForWolf/General-Relativity", "max_issues_repo_head_hexsha": "e9eb0f8cc65be9368c6648c8afaa1f8e631516a8", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Common Metrics and Spacetimes.ipynb", "max_forks_repo_name": "SheepWaitForWolf/General-Relativity", "max_forks_repo_head_hexsha": "e9eb0f8cc65be9368c6648c8afaa1f8e631516a8", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 62.2032742156, "max_line_length": 3009, "alphanum_fraction": 0.1577585262, "converted": true, "num_tokens": 4045, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9603611597645271, "lm_q2_score": 0.8558511506439708, "lm_q1q2_score": 0.8219262036182488}} {"text": "```python\nfrom sympy import*\ninit_printing()\n```\n\n\n```python\nk,m,w0,w = symbols('k,m,\\omega_0,\\omega')\nm1,m2 = symbols('m_1,m_2')\n```\n\n\n```python\nW = w0**2*Matrix([[2,-1,0,0],[-1,2,-1,0],[0,-1,2,-1],[0,0,-1,2]])\n\ndisplay(W)\nw0=1\n```\n\n\n```python\nsolve(det(W - w**2*eye(4)),w**2)\n```\n\n\n```python\nW.eigenvals()\n```\n\n\n```python\nevecs = W.eigenvects()\n```\n\n\n```python\nfor i in range(0,len(evecs)):\n display(simplify(evecs[i][2][0]))\n```\n\n\n```python\nm2 = Matrix([[2*k/m1, k/m1],[k/m2,-k/m2]])\n```\n\n\n```python\nevecs2 = m2.eigenvects()\nfor i in range(0,len(evecs2)):\n display(simplify(evecs2[i][2][0]))\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "0f44ef8616d8569f4e16d80500e477c064f008e5", "size": 37012, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "teaching/PHY1110/DS/Code/.ipynb_checkpoints/DS10 - Four Masses and Five Springs-checkpoint.ipynb", "max_stars_repo_name": "dpcherian/dpcherian.github.io", "max_stars_repo_head_hexsha": "6110da8c20e05edc106882db8b1c3795ade37708", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "teaching/PHY1110/DS/Code/.ipynb_checkpoints/DS10 - Four Masses and Five Springs-checkpoint.ipynb", "max_issues_repo_name": "dpcherian/dpcherian.github.io", "max_issues_repo_head_hexsha": "6110da8c20e05edc106882db8b1c3795ade37708", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": 
"teaching/PHY1110/DS/Code/.ipynb_checkpoints/DS10 - Four Masses and Five Springs-checkpoint.ipynb", "max_forks_repo_name": "dpcherian/dpcherian.github.io", "max_forks_repo_head_hexsha": "6110da8c20e05edc106882db8b1c3795ade37708", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 120.1688311688, "max_line_length": 4996, "alphanum_fraction": 0.8415919161, "converted": true, "num_tokens": 250, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.953966093674472, "lm_q2_score": 0.8615382165412808, "lm_q1q2_score": 0.8218782469851571}} {"text": "A Pythagorean triplet is a set of three natural numbers, $a < b < c$, for which,\n\n\\begin{equation}\na^2 + b^2 = c^2\n\\end{equation}\n\nFor example, $3^2 + 4^2 = 9 + 16 = 25 = 5^2$.\n\nThere exists exactly one Pythagorean triplet for which $a + b + c = 1000$.\nFind the product $abc$.\n\n### Remark\n\nThis is a fairly straighforward constraint satisfaction problem (CSP) and is perhaps most easily solved in a CSP modelling language such as MiniZinc. However, to employ such tools would be to defeat the very purpose of the exercise, which is to give us practice with implementation.\n\n\n\n\n```python\nfrom six.moves import range, reduce\n```\n\n### Version 1: The Obvious\n\n\n```python\npair_sum_eq = lambda n, start=0: ((i, n-i) for i in range(start, (n>>1)+1))\n```\n\n\n```python\nlist(pair_sum_eq(21, 5))\n```\n\n\n\n\n [(5, 16), (6, 15), (7, 14), (8, 13), (9, 12), (10, 11)]\n\n\n\nNote that $3a < a + b + c = 1000$, so $a < \\frac{1000}{3} \\Leftrightarrow a \\leq \\lfloor \\frac{1000}{3} \\rfloor = 333$ so $1 \\leq a \\leq 333$. Therefore, we need only iterate up to 333 in the outermost loop. Now, $b + c = 1000 - a$, so $667 \\leq b + c \\leq 999$, so we look at all pairs $333 \\leq b < c$ such that $b + c = 1000 - a$ with the help of the function `pair_sum_eq`. Within the innermost loop, the $a, b, c$ now satisfy the constraints $a < b < c$ and $a + b + c = 1000$ so now we need only check that they indeed form a Pythagorean triplet, i.e. 
$a^2 + b^2 = c^2$, and yield it.\n\n\n\n\n```python\ndef pythagorean_triplet_sum_eq(n):\n for a in range(1, n//3+1):\n for b, c in pair_sum_eq(n-a, start=n//3):\n if a*a + b*b == c*c:\n yield a, b, c\n```\n\n\n```python\nlist(pythagorean_triplet_sum_eq(1000))\n```\n\n\n\n\n [(200, 375, 425)]\n\n\n\n\n```python\nprod = lambda iterable: reduce(lambda x,y: x*y, iterable)\n```\n\n\n```python\nprod(pythagorean_triplet_sum_eq(1000))\n```\n\n\n\n\n (200, 375, 425)\n\n\n\n### Version 2: Euclid's Formula\n\n\n```python\n# TODO\n```\n", "meta": {"hexsha": "d79f4203c6d3f30353fa9b63cf1997347ea0f5f0", "size": 4706, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "problem-9-special-pythagorean-triplet.ipynb", "max_stars_repo_name": "ltiao/project-euler", "max_stars_repo_head_hexsha": "94a5705f09c69271142b1d2ca5581a988cbc0e3c", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "problem-9-special-pythagorean-triplet.ipynb", "max_issues_repo_name": "ltiao/project-euler", "max_issues_repo_head_hexsha": "94a5705f09c69271142b1d2ca5581a988cbc0e3c", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "problem-9-special-pythagorean-triplet.ipynb", "max_forks_repo_name": "ltiao/project-euler", "max_forks_repo_head_hexsha": "94a5705f09c69271142b1d2ca5581a988cbc0e3c", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 21.787037037, "max_line_length": 610, "alphanum_fraction": 0.5055248619, "converted": true, "num_tokens": 669, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.894789457685656, "lm_q2_score": 0.9184802468145655, "lm_q1q2_score": 0.8218464419421926}} {"text": "**1**. Making wallpaper with `fromfunction`\n\nAdapted from [Circle Squared](http://igpphome.ucsd.edu/~shearer/COMP233/SciAm_Mandel.pdf)\n\nCreate a $400 \\times 400$ array using the function `lambda i, j: 0.27**2*(i**2 + j**2) % 1.5`. Use `imshow` from `matplotlib.pyplot` with `interpolation='nearest'` and the `YlOrBr` colormap to display the resulting array as an image.\n\n\n```python\n%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\n```\n\n\n```python\nxs = np.fromfunction(lambda i, j: 0.27**2*(i**2 + j**2) % 1.5, \n (400, 400),)\nplt.figure(figsize=(8,8))\nplt.imshow(xs, interpolation='nearest', cmap=plt.cm.YlOrBr)\nplt.xticks([])\nplt.yticks([])\npass\n```\n\n**2**. 
Find the least squares solution for $\\beta_0, \\beta_1, \\beta_2$ using the normal equations $\\hat{\\beta} = (X^TX)^{-1}X^Ty$.\n\n\\begin{align}\n10 &= \\beta_0 + 3 \\beta_1 + 7 \\beta_2 \\\\\n11 &= \\beta_0 + 2 \\beta_1 + 8 \\beta_2 \\\\\n9 &= \\beta_0 + 3 \\beta_1 + 7 \\beta_2 \\\\\n10 &= \\beta_0 + 1 \\beta_1 + 9 \\beta_2 \\\\\n\\end{align}\n\nYou can find the inverse of a matrix by using `np.linalg.inv` and the transpose with `X.T`\n\n\n```python\ny = np.array([10,11,9,10]).reshape(-1,1)\nX = np.c_[np.ones(4), [3,2,3,1], [7,8,8,9]]\n```\n\n\n```python\nX\n```\n\n\n\n\n    array([[1., 3., 7.],\n           [1., 2., 8.],\n           [1., 3., 8.],\n           [1., 1., 9.]])\n\n\n\n\n```python\nX.shape, y.shape\n```\n\n\n\n\n    ((4, 3), (4, 1))\n\n\n\nDirect translation of normal equations\n\n\n```python\nβ = np.linalg.inv(X.T @ X) @ X.T @ y\n```\n\n\n```python\nβ\n```\n\n\n\n\n    array([[23.66666667],\n           [-1.33333333],\n           [-1.33333333]])\n\n\n\nMore numerically stable version\n\n\n```python\nβ = np.linalg.solve(X.T @ X, X.T @ y)\n```\n\n\n```python\nβ\n```\n\n\n\n\n    array([[23.66666667],\n           [-1.33333333],\n           [-1.33333333]])\n\n\n\nCompare observed with fitted\n\n\n```python\nnp.c_[y, X @ β]\n```\n\n\n\n\n    array([[10.        , 10.33333333],\n           [11.        , 10.33333333],\n           [ 9.        ,  9.        ],\n           [10.        , 10.33333333]])\n\n\n\n\n```python\nfrom mpl_toolkits.mplot3d import Axes3D\n```\n\n\n```python\nfig = plt.figure()\nax = fig.add_subplot(111,projection='3d')\nu = np.linspace(0, 4, 2)\nv = np.linspace(6, 10, 2)\nU, V = np.meshgrid(u, v)\nZ = β[0] + U*β[1] +V*β[1]\nax.scatter(X[:,1] , X[:,2] , y, color='red', s=25)\nax.plot_surface(U, V, Z, alpha=0.2)\nax.view_init(elev=30, azim=-60)\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "0f81218851101689bf8e7e9df7be97e10ef99ca0", "size": 634870, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebook/T03_Exercises_Solutions.ipynb", "max_stars_repo_name": "ashnair1/sta-663-2019", "max_stars_repo_head_hexsha": "17eb85b644c52978c2ef3a53a80b7fb031360e3d", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 68, "max_stars_repo_stars_event_min_datetime": "2019-01-09T21:53:55.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-16T17:14:22.000Z", "max_issues_repo_path": "notebook/T03_Exercises_Solutions.ipynb", "max_issues_repo_name": "ashnair1/sta-663-2019", "max_issues_repo_head_hexsha": "17eb85b644c52978c2ef3a53a80b7fb031360e3d", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebook/T03_Exercises_Solutions.ipynb", "max_forks_repo_name": "ashnair1/sta-663-2019", "max_forks_repo_head_hexsha": "17eb85b644c52978c2ef3a53a80b7fb031360e3d", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 62, "max_forks_repo_forks_event_min_datetime": "2019-01-09T21:43:48.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-15T04:26:25.000Z", "avg_line_length": 2152.1016949153, "max_line_length": 580436, "alphanum_fraction": 0.9622174618, "converted": true, "num_tokens": 861, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.918480237330998, "lm_q2_score": 0.8947894604912848, "lm_q1q2_score": 0.821846436033311}} {"text": "# Lecture 8: Random Variables and Their Distributions\n\n## Two Basic Types of Random Variables\n\nThere are two basic types of random variables.\n\n* _Discrete random variables_ are random variables whose values can be enumerated, such as $a_1, a_2, \\dots , a_n$. Here, _discrete_ does not necessarily mean only integers. \n\n* _Continuous random variables_, on the other hand, can take on values in $\\mathbb{R}$.\n\nNote that there are other types that might mix the two definitions.\n\n\n### Cumulative distribution function\n\nGiven $X \\le x$ is an event, and a function $F(x) = P(X \\le x)$.\n\nThen $F$ is known as the __cumulative distribution function__ (CDF) of $X$. In contrast with the probability mass function, the CDF is more general, and is applicable for both discrete and continuous random variables.\n\n\n\n### Probability mass function\n\nThe __probability mass function__ (PMF) is $P(X=a_j)$, for all $\\text{j}$. A pmf must satisfy the following conditions:\n\n\\begin{align}\n P_j &\\ge 0 \\\\\n \\sum_j P_j &= 1\n\\end{align}\n\n\n\nIn practice, using a PMF is easier than a CDF, but a PMF is only applicable in the case of __discrete random variables__ only\n\n## Revisiting the Binomial Distribution $Bin(n,p)$\n\n$X \\sim Bin(n, p)$, where\n\n* $n$ is an integer greater than 0\n* $p$ is between 0.0 and 1.0\n\nHere are a few different ways to explain the Binomial distribution.\n\n### Explanation with a story\n\n$X$ is the number of successes in $n$ _independent_ $Bern(p)$ trials.\n\n### Explanation using sum of indicator random variables\n\n\\begin{align}\n &X = X_1 + X_2 + \\dots + X_j \n & &\\text{where } X_j =\n \\begin{cases}\n 1, &\\text{if } j^{\\text{th}} \\text{ trial succeeds} \\\\\n 0, &\\text{ otherwise }\n \\end{cases} \\\\\n & & &\\text{and } X_1, X_2, \\dots , X_j \\text{ are i.i.d Bern(n,p)}\n\\end{align}\n\n### Explanation using a probability mass function\n\n\\begin{align}\n P(X=k) = \\binom{n}{k} p^k q^{n-k} \\text{, where } q = 1 - p \\text{ and } k \\in \\{0,1,2, \\dots ,n\\}\n\\end{align}\n\nA probability mass function sums to 1. Note that\n\n\\begin{align}\n \\sum_{k=0}^n \\binom{n}{k} p^k q^{n-k} &= (p + q)^n & &\\text{by the Binomial Theorem} \\\\\n &= (p + 1 - p)^n \\\\\n &= 1^n \\\\\n &= 1\n\\end{align}\nso the explanation by PMF holds. By the way, it is this relation to the [Binomial Theorem](https://en.wikipedia.org/wiki/Binomial_theorem) that this distribution gets its name.\n\n## Sum of Two Binomial Random Variables\n\nRecall that we left Lecture 7 with this parting thought:\n \n\\begin{align}\n & X \\sim \\text{Bin}(n,p) \\text{, } Y \\sim \\text{Bin}(m,p) \\rightarrow X+Y \\sim \\text{Bin}(n+m, p)\n\\end{align}\n\nLet's see how this is true by using the three explanation approaches we used earlier.\n\n### Explanation with a story\n\n$X$ and $Y$ are functions, and since they both have the same sample space $S$ and the same domain. \n\n$X$ is the number of successes in $n \\text{ Bern}(p)$ trials.\n\n$Y$ is the number of successes in $m \\text{ Bern}(p)$ trials.\n\n$\\therefore X + Y$ is simply the sum of successes in $n+m \\text{ Bern}(p)$ trials.\n\n### Explanation using a sum of indicator random variables\n\n\\begin{align}\n &X = X_1 + X_2 + \\dots + X_n, ~~~~ Y = Y_1 + Y_2 + \\dots + Y_m \\\\\n &\\Rightarrow X+Y = \\sum_{i=1}^n X_j + \\sum_{j=1}^m Y_j \\text{, which is } n + m \\text{ i.i.d. 
Bern}(p) \\\\\n    &\\Rightarrow \\text{Bin}(n + m, p)\n\\end{align}\n\n### Explanation using a probability mass function\n\nWe start with a PMF of the _convolution_ $X+Y=k$\n\n\\begin{align}\n    P(X+Y=k) &= \\sum_{j=0}^k P(X+Y=k|X=j)P(X=j) & &\\text{wishful thinking, assume we know } x \\text{ and apply LOTP} \\\\\n    &= \\sum_{j=0}^k P(Y=k-j|X=j) \\binom{n}{j} p^j q^{n-j} & &\\text{... but since events }Y=k-j \\text{ and } X=j \\text{ are independent}\\\\\n    &= \\sum_{j=0}^k P(Y=k-j) \\binom{n}{j} p^j q^{n-j} & &P(Y=k-j|X=j) = P(Y=k-j) \\\\\n    &= \\sum_{j=0}^k \\binom{m}{k-j} p^{k-j} q^{m-k+j}~~~ \\binom{n}{j} p^j q^{n-j} \\\\\n    &= p^k q^{m+n-k} \\underbrace{\\sum_{j=0}^k \\binom{m}{k-j}\\binom{n}{j}}_{\\text{Vandermonde's Identity}} & &\\text{see Lecture 2, Story Proofs, Ex.3} \\\\\n    &= \\binom{m+n}{k} p^k q^{m+n-k} ~~~~ \\blacksquare\n\\end{align}\n\n\n## Hypergeometric Distribution\n\nA common mistake with the Binomial distribution is forgetting that\n\n* the trials are independent\n* they have the same probability of success\n\n#### Example: How Many Aces in a 5-card Hand?\n\nFor a 5-card hand dealt from a standard 52-card deck of playing cards, what is the distribution (PMF or CDF) of the number of aces?\n\nLet $X = (\\text{# of aces})$.\n\nFind $P(X=k)$ where $k \\in \\{0,1,2,3,4\\}$. At this point, we can conclude that the distribution is __not__ Binomial, _as the trials are not independent_.\n\nThe PMF is given by\n\\begin{align}\n    P(X=k) &= \\frac{\\binom{4}{k}\\binom{48}{5-k}}{\\binom{52}{5}}\n\\end{align}\n\n\n#### Example: Sampling White/Black Marbles\n\nThere are $b$ black and $w$ white marbles. Pick a random sample of $n$ marbles. What is the distribution on the # white marbles?\n\n\\begin{align}\n    P(X=k) &= \\frac{\\binom{w}{k}\\binom{b}{n-k}}{\\binom{w+b}{n}}\n\\end{align}\n\nThe distribution in the examples above is called the __Hypergeometric distribution__. In the hypergeometric distribution, we sample _without replacement_, and so the trials cannot be independent.\n\nNote that when the total number of items in our sample is exceedingly large, it becomes highly unlikely that we would choose the same item _more than once_. Therefore, it doesn't matter if you are sampling with replacement or without, suggesting that in this case the hypergeometric behaves as the binomial.\n\n#### Is the Hypergeometric distribution a valid PMF?\n\n_Is the hypergeometric non-negative?_\n\nYes, obviously.\n\n_Does the PMF sum to 1?_\n\n\\begin{align}\n    \\sum_{k=0}^w P(X=k) &= \\sum_{k=0}^w \\frac{\\binom{w}{k}\\binom{b}{n-k}}{\\binom{w+b}{n}} \\\\\n    &= \\frac{1}{\\binom{w+b}{n}} \\sum_{k=0}^w \\binom{w}{k}\\binom{b}{n-k} & &\\text{since }\\binom{w+b}{n} \\text{ is constant} \\\\\n    &= \\frac{1}{\\binom{w+b}{n}} \\binom{w+b}{n} & &\\text{by Vandermonde's Identity } \\\\\n    &= 1 ~~~~ \\blacksquare\n\\end{align}\n\n----\n\n## Appendix A: Independent, Identically Distributed\n\nA sequence of random variables is independent and identically distributed (i.i.d) if each r.v. has the same probability distribution and all are mutually independent.\n\nThink of the Binomial distribution where we consider a string of coin flips. 
The probability $p$ for head's is the *same across all coin flips*, and seeing a head (or tail) in a previous toss *does not affect any of the coin flips that come afterwards*.\n", "meta": {"hexsha": "f82a5b3cc847ab1a44ede28153614389c0ae3baa", "size": 9648, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Lecture_08.ipynb", "max_stars_repo_name": "dirtScrapper/Stats-110-master", "max_stars_repo_head_hexsha": "a123692d039193a048ff92f5a7389e97e479eb7e", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Lecture_08.ipynb", "max_issues_repo_name": "dirtScrapper/Stats-110-master", "max_issues_repo_head_hexsha": "a123692d039193a048ff92f5a7389e97e479eb7e", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Lecture_08.ipynb", "max_forks_repo_name": "dirtScrapper/Stats-110-master", "max_forks_repo_head_hexsha": "a123692d039193a048ff92f5a7389e97e479eb7e", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.5408560311, "max_line_length": 316, "alphanum_fraction": 0.5497512438, "converted": true, "num_tokens": 2075, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8947894520743981, "lm_q2_score": 0.9184802440252811, "lm_q1q2_score": 0.8218464342925407}} {"text": "```python\n# El amplificador puente es como se observa en la imagen.\nimport numpy as np\nimport sympy as sym \nimport matplotlib.pyplot as plt \nimport matplotlib.image as mpimg \nfrom IPython.display import Image \nsym.init_printing() \n#%matplotlib widget \n%matplotlib inline\nImage(filename='amp_puente.png',width=300) \n```\n\n\n```python\n# VERIFICAR QUE Vo RESPONDE A LA SIGUIENTE ECUACION\nsym.var('Va, Vb, Vc, Vd, Vo, Vp')\nsym.var('R1, R2, R3, R4, R5, R6')\nsym.var('Vo_')\ndisplay(sym.Eq(Vo_,sym.fu(((1+R6/R5)*(R2/(R1+R2)-R4/(R3+R4))*Vp))))\nVo_=sym.fu(((1+R6/R5)*(R2/(R1+R2)-R4/(R3+R4))*Vp))\n```\n\n\n```python\nfind=sym.Matrix(([Va],[Vb],[Vc],[Vd],[Vo])) #Incognitas\n#Se escriben tantas ecuacionenes como nodos haya\nec_nodo_0=sym.Eq(Vd,0)\nec_nodo_1=sym.Eq(Vb-Vc,Vp) \nec_nodo_2=sym.Eq((Vb-Vd)/R3+(Vc-Vd)/R4,0)\nec_nodo_3=sym.Eq(Va/R5+(Va-Vo)/R6,0)\nec_nodo_4=sym.Eq((Vb-Va)/R1+(Vb-Vd)/R3,(Va-Vc)/R2+(Vd-Vc)/R4)#Caso especial de superNodo\ndisplay(sym.Eq(Vo,sym.factor(sym.solve([ec_nodo_0,ec_nodo_1,ec_nodo_2,ec_nodo_3,ec_nodo_4],find)[Vo])))\nVo=sym.simplify(sym.factor(sym.solve([ec_nodo_0,ec_nodo_1,ec_nodo_2,ec_nodo_3,ec_nodo_4],find)[Vo]))\n```\n\n\n```python\nprint('Se valida la ecuaci\u00f3n?',np.invert(np.bool_(sym.simplify(Vo_-Vo))))\nsym.simplify(Vo_-Vo)\n```\n\n\n```python\nsym.var('Av,R, D, Vo_calc') # Si Av es la ganancia Av=(1+R6/R5) R1=R-D (Contrae) R2=R+D R1/R2=R4/R3\n\ndisplay(sym.Eq(Vo_calc,sym.simplify((Vo.subs({(R1,R-D),(R2,R+D),(R3,R+D),(R4,R-D),(R6,(Av-1)*R5)})))))\nVo_calc=sym.simplify((Vo.subs({(R1,R-D),(R2,R+D),(R3,R+D),(R4,R-D),(R6,(Av-1)*R5)})))\n```\n", "meta": {"hexsha": "c0c1f6d89663c7c29c748b88ecb108e5439306ca", "size": 28931, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "python/0/Amp_Puente.ipynb", "max_stars_repo_name": "WayraLHD/SRA21", "max_stars_repo_head_hexsha": "1b0447bf925678b8065c28b2767906d1daff2023", 
"max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-09-29T16:38:53.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-29T16:38:53.000Z", "max_issues_repo_path": "python/0/Amp_Puente.ipynb", "max_issues_repo_name": "WayraLHD/SRA21", "max_issues_repo_head_hexsha": "1b0447bf925678b8065c28b2767906d1daff2023", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-08-10T08:24:57.000Z", "max_issues_repo_issues_event_max_datetime": "2021-08-10T08:24:57.000Z", "max_forks_repo_path": "python/0/Amp_Puente.ipynb", "max_forks_repo_name": "WayraLHD/SRA21", "max_forks_repo_head_hexsha": "1b0447bf925678b8065c28b2767906d1daff2023", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 153.8882978723, "max_line_length": 13296, "alphanum_fraction": 0.8859355017, "converted": true, "num_tokens": 673, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9019206791658466, "lm_q2_score": 0.9111797015700343, "lm_q1q2_score": 0.8218118152821787}} {"text": "```python\nimport sympy as sm\nimport matplotlib.pyplot as plt\n```\n\n1. Write down a 2-D function, like z= f(x,y), it may be polynomials, sinusoidal, exponential, logarithm \u2026. \n\n\n```python\nx, y = sm.symbols('x, y')\nz = x**2 + y**2\n```\n\n2. Plot the 3-D figure , or 2-D figure (like an image) \n\n\n```python\n%matplotlib inline\nfig = sm.plotting.plot3d(z, x, y)\n```\n\n3. Find the maximal, minimal point and circle them \n\n\n```python\nimg = plt.imread('hw2_2.jpg')\nfig = plt.imshow(img)\n```\n\n4. Find the maximal and minimal point by solving \n\n\n```python\ndx = z.diff(x)\ndx\n```\n\n\n\n\n$\\displaystyle 2 x$\n\n\n\n\n```python\ndy = z.diff(y)\ndy\n```\n\n\n\n\n$\\displaystyle 2 y$\n\n\n\n\n```python\nsm.solve([dx, dy], x, y, dict=True)\n```\n\n\n\n\n [{x: 0, y: 0}]\n\n\n\n### Read the following blog post and practice through it and then write down a notebook and codes for presentation \n\n#### Case study 1: finding the general form of a function and its inverse \n\nWhat\u2019s the inverse of this little quadratic function? 
We could find it by hand, but let\u2019s just fire up SymPy:\n\n\n```python\nimport sympy\n\n# create a symbolic variable for each symbol in our equation\ny, x, k = sympy.symbols('y, x, k', real=True)\n\n# define the equation y = kx - (1-k)x^2\nfwd_equation = sympy.Eq(y, k*x - (k - 1)*x**2)\n\n# solve the equation for x and print solutions\ninverse = sympy.solve(fwd_equation, x)\nprint('found {} solutions for x:'.format(len(inverse)))\nprint('\\n'.join([str(s) for s in inverse]))\n```\n\n found 2 solutions for x:\n (k - sqrt(k**2 - 4*k*y + 4*y))/(2*(k - 1))\n (k + sqrt(k**2 - 4*k*y + 4*y))/(2*(k - 1))\n\n\nLet\u2019s see how that first solution looks when we substitute in k=1.5 or k=1.375:\n\n\n```python\nprint(inverse[0].subs(k, 1.5).simplify())\nprint(inverse[0].subs(k, 1.375).simplify())\n```\n\n 1.5 - 1.5*sqrt(1 - 0.888888888888889*y)\n 1.83333333333333 - 1.83333333333333*sqrt(1 - 0.793388429752066*y)\n\n\nLet\u2019s conclude this example with a nice trick for any users of LaTeX or MathJax: we\u2019ll ask SymPy to typeset the inverse formula for us.\n\n\n```python\nprint('x =', sympy.latex(inverse[0]))\n```\n\n x = \\frac{k - \\sqrt{k^{2} - 4 k y + 4 y}}{2 \\left(k - 1\\right)}\n\n\n#### Case study 2: solving systems of equations \n\nNote that taking the dot product of two planes\u2019 normal vectors gives the negative cosine of their dihedral angle, so the middle constraint above can be expressed as \u2113Q\u2219\u2113R=\u2212cosp. Here\u2019s the corresponding SymPy code:\n\n\n```python\np, q, r = sympy.symbols('p, q, r', real=True)\n\n# create some symbols for unknown elements of lQ\nx1, y1, z1 = sympy.symbols('x1, y1, z1')\n\n# define vectors we know so far\nP = sympy.Matrix([0, 0, 1])\nlR = sympy.Matrix([1, 0, 0])\nlQ = sympy.Matrix([x1, y1, z1])\n\nlQ_equations = [\n sympy.Eq(lQ.dot(P), 0), # lQ contains P\n sympy.Eq(lQ.dot(lR), -sympy.cos(p)), # angle at point P\n sympy.Eq(lQ.dot(lQ), 1) # lQ is a unit vector\n]\n\nS = sympy.solve(lQ_equations, x1, y1, z1, dict=True, simplify=True)\nprint('found {} solutions for lQ:'.format(len(S)))\nprint('\\n'.join([sympy.pretty(sln) for sln in S])) # ask for pretty output\n\nlQ = lQ.subs(S[1])\nprint('now lQ is {}'.format(lQ))\n```\n\n found 2 solutions for lQ:\n {x\u2081: -cos(p), y\u2081: -\u2502sin(p)\u2502, z\u2081: 0}\n {x\u2081: -cos(p), y\u2081: \u2502sin(p)\u2502, z\u2081: 0}\n now lQ is Matrix([[-cos(p)], [Abs(sin(p))], [0]])\n\n\nOne of SymPy\u2019s quirks is that it sometimes adds extraneous details to solutions like the absolute value above (not needed here because \u00b1sinp and \u00b1|sinp| are equivalent sets of values). Removing these is quick once you notice them. Here\u2019s the code to get rid of the stray Abs:\n\n\n```python\nlQ = lQ.subs(sympy.Abs(sympy.sin(p)), sympy.sin(p))\nprint('after subbing out abs, lQ is {}'.format(lQ))\n```\n\n after subbing out abs, lQ is Matrix([[-cos(p)], [sin(p)], [0]])\n\n\nAnyways, let\u2019s manually verify that the solution worked. This is absolutely unnecessary because sympy.solve doesn\u2019t return incorrect solutions, but it can be a nice sanity check, and the code is short, anyways:\n\n\n```python\nprint('checking our work:')\nprint(' lQ . P =', lQ.dot(P))\nprint(' lQ . lR =', lQ.dot(lR))\nprint(' lQ . lQ =', lQ.dot(lQ))\n```\n\n checking our work:\n lQ . P = 0\n lQ . lR = -cos(p)\n lQ . lQ = sin(p)**2 + cos(p)**2\n\n\nOops, looks like that last item could be simplified. Let\u2019s try that again:\n\n\n```python\nprint(' lQ . lQ =', lQ.dot(lQ).simplify())\n```\n\n lQ . 
lQ = 1\n\n\nAll the constraints were satisfied, and I got to show you sympy.simplify in action.\nNow we can go through a similar process to solve for \u2113P, except this time we will explicitly encode the unit-length constraint into its z-coordinate.2\n\n\n```python\nx2, y2 = sympy.symbols('x2, y2')\nz2 = sympy.sqrt(1 - x2**2 - y2**2)\n\nlP = sympy.Matrix([x2, y2, z2])\nprint('||lP||^2 =', lP.dot(lP))\n\nlP_equations = [\n sympy.Eq(lP.dot(lR), -sympy.cos(q)),\n sympy.Eq(lP.dot(lQ), -sympy.cos(r)),\n]\n\nS = sympy.solve(lP_equations, x2, y2, dict=True, simplify=True)\nprint('got {} solutions for lP'.format(len(S)))\nprint('\\n'.join([sympy.pretty(sln) for sln in S]))\n\nlP = lP.subs(S[0])\nprint('now lP is {}'.format(lP))\n```\n\n ||lP||^2 = 1\n got 1 solutions for lP\n \u23a7 -(cos(p)\u22c5cos(q) + cos(r)) \u23ab\n \u23a8x\u2082: -cos(q), y\u2082: \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u23ac\n \u23a9 sin(p) \u23ad\n now lP is Matrix([[-cos(q)], [-(cos(p)*cos(q) + cos(r))/sin(p)], [sqrt(-(cos(p)*cos(q) + cos(r))**2/sin(p)**2 - cos(q)**2 + 1)]])\n\n\nHere\u2019s the relvant GLSL snippet from the shader linked above, reflecting the full solution we obtained:\n```\nvec3 lr = vec3(1, 0, 0);\nvec3 lq = vec3(-cp, sp, 0);\nvec3 lp = vec3(-cq, -(cr + cp*cq)/sp, 0);\nlp.z = sqrt(1.0 - dot(lp.xy, lp.xy));\n\nvec3 P = normalize(cross(lr, lq));\nvec3 Q = normalize(cross(lp, lr));\nvec3 R = normalize(cross(lq, lp));\n```\n\n\n```python\n\n```\n\n#### Case study 3: Jacobians for nonlinear least squares \n\nWe start by just implementing the function f(\u03b8,x,y) defined above:\n\n\n```python\nx, y, u, v, h, s, t, l, rho, phi = sympy.symbols(\n 'x, y, u, v, h, s, t, l, rho, phi', real=True)\n\ncr = sympy.cos(rho)\nsr = sympy.sin(rho)\n\nxp = (x - u) * cr + (y - v) * sr\nyp = -(x - u) * sr + (y - v) * cr\n\nf = ( h * sympy.exp(-xp**2 / (2*s**2) - yp**2 / (2*t**2) ) *\n sympy.cos( 2 * sympy.pi * xp / l + phi ) )\n```\n\nNext, we can just print out the partial derivatives, one by one. 
Since we are targeting a C++ program, we can ask SymPy to directly emit C code:\n\n\n```python\ntheta = (u, v, h, s, t, l, rho, phi)\n\nfor i, var in enumerate(theta):\n deriv = f.diff(var)\n print('grad[{}]'.format(i), '=', sympy.ccode(deriv) + ';')\n```\n\n grad[0] = h*(-((u - x)*sin(rho) + (-v + y)*cos(rho))*sin(rho)/pow(t, 2) + ((-u + x)*cos(rho) + (-v + y)*sin(rho))*cos(rho)/pow(s, 2))*exp(-1.0/2.0*pow((u - x)*sin(rho) + (-v + y)*cos(rho), 2)/pow(t, 2) - 1.0/2.0*pow((-u + x)*cos(rho) + (-v + y)*sin(rho), 2)/pow(s, 2))*cos(phi + 2*M_PI*((-u + x)*cos(rho) + (-v + y)*sin(rho))/l) + 2*M_PI*h*exp(-1.0/2.0*pow((u - x)*sin(rho) + (-v + y)*cos(rho), 2)/pow(t, 2) - 1.0/2.0*pow((-u + x)*cos(rho) + (-v + y)*sin(rho), 2)/pow(s, 2))*sin(phi + 2*M_PI*((-u + x)*cos(rho) + (-v + y)*sin(rho))/l)*cos(rho)/l;\n grad[1] = h*(((u - x)*sin(rho) + (-v + y)*cos(rho))*cos(rho)/pow(t, 2) + ((-u + x)*cos(rho) + (-v + y)*sin(rho))*sin(rho)/pow(s, 2))*exp(-1.0/2.0*pow((u - x)*sin(rho) + (-v + y)*cos(rho), 2)/pow(t, 2) - 1.0/2.0*pow((-u + x)*cos(rho) + (-v + y)*sin(rho), 2)/pow(s, 2))*cos(phi + 2*M_PI*((-u + x)*cos(rho) + (-v + y)*sin(rho))/l) + 2*M_PI*h*exp(-1.0/2.0*pow((u - x)*sin(rho) + (-v + y)*cos(rho), 2)/pow(t, 2) - 1.0/2.0*pow((-u + x)*cos(rho) + (-v + y)*sin(rho), 2)/pow(s, 2))*sin(rho)*sin(phi + 2*M_PI*((-u + x)*cos(rho) + (-v + y)*sin(rho))/l)/l;\n grad[2] = exp(-1.0/2.0*pow((u - x)*sin(rho) + (-v + y)*cos(rho), 2)/pow(t, 2) - 1.0/2.0*pow((-u + x)*cos(rho) + (-v + y)*sin(rho), 2)/pow(s, 2))*cos(phi + 2*M_PI*((-u + x)*cos(rho) + (-v + y)*sin(rho))/l);\n grad[3] = h*pow((-u + x)*cos(rho) + (-v + y)*sin(rho), 2)*exp(-1.0/2.0*pow((u - x)*sin(rho) + (-v + y)*cos(rho), 2)/pow(t, 2) - 1.0/2.0*pow((-u + x)*cos(rho) + (-v + y)*sin(rho), 2)/pow(s, 2))*cos(phi + 2*M_PI*((-u + x)*cos(rho) + (-v + y)*sin(rho))/l)/pow(s, 3);\n grad[4] = h*pow((u - x)*sin(rho) + (-v + y)*cos(rho), 2)*exp(-1.0/2.0*pow((u - x)*sin(rho) + (-v + y)*cos(rho), 2)/pow(t, 2) - 1.0/2.0*pow((-u + x)*cos(rho) + (-v + y)*sin(rho), 2)/pow(s, 2))*cos(phi + 2*M_PI*((-u + x)*cos(rho) + (-v + y)*sin(rho))/l)/pow(t, 3);\n grad[5] = 2*M_PI*h*((-u + x)*cos(rho) + (-v + y)*sin(rho))*exp(-1.0/2.0*pow((u - x)*sin(rho) + (-v + y)*cos(rho), 2)/pow(t, 2) - 1.0/2.0*pow((-u + x)*cos(rho) + (-v + y)*sin(rho), 2)/pow(s, 2))*sin(phi + 2*M_PI*((-u + x)*cos(rho) + (-v + y)*sin(rho))/l)/pow(l, 2);\n grad[6] = h*(-1.0/2.0*((u - x)*sin(rho) + (-v + y)*cos(rho))*(2*(u - x)*cos(rho) - 2*(-v + y)*sin(rho))/pow(t, 2) - 1.0/2.0*(-2*(-u + x)*sin(rho) + 2*(-v + y)*cos(rho))*((-u + x)*cos(rho) + (-v + y)*sin(rho))/pow(s, 2))*exp(-1.0/2.0*pow((u - x)*sin(rho) + (-v + y)*cos(rho), 2)/pow(t, 2) - 1.0/2.0*pow((-u + x)*cos(rho) + (-v + y)*sin(rho), 2)/pow(s, 2))*cos(phi + 2*M_PI*((-u + x)*cos(rho) + (-v + y)*sin(rho))/l) - 2*M_PI*h*(-(-u + x)*sin(rho) + (-v + y)*cos(rho))*exp(-1.0/2.0*pow((u - x)*sin(rho) + (-v + y)*cos(rho), 2)/pow(t, 2) - 1.0/2.0*pow((-u + x)*cos(rho) + (-v + y)*sin(rho), 2)/pow(s, 2))*sin(phi + 2*M_PI*((-u + x)*cos(rho) + (-v + y)*sin(rho))/l)/l;\n grad[7] = -h*exp(-1.0/2.0*pow((u - x)*sin(rho) + (-v + y)*cos(rho), 2)/pow(t, 2) - 1.0/2.0*pow((-u + x)*cos(rho) + (-v + y)*sin(rho), 2)/pow(s, 2))*sin(phi + 2*M_PI*((-u + x)*cos(rho) + (-v + y)*sin(rho))/l);\n\n\nInstead, we will get SymPy to do the common subexpression elimination for us using sympy.cse:\n\n\n```python\nderivs = [ f.diff(var) for var in theta ]\n\nvariable_namer = sympy.numbered_symbols('sigma_')\nreplacements, reduced = sympy.cse(derivs, symbols=variable_namer)\n\nfor key, val in replacements:\n 
print('double', key, '=', sympy.ccode(val) + ';')\n\nprint()\n\nfor i, r in enumerate(reduced):\n print('grad[{}]'.format(i), '=', sympy.ccode(r) + ';')\n```\n\n double sigma_0 = cos(rho);\n double sigma_1 = 2*sigma_0;\n double sigma_2 = -u + x;\n double sigma_3 = sin(rho);\n double sigma_4 = -v + y;\n double sigma_5 = sigma_3*sigma_4;\n double sigma_6 = sigma_0*sigma_2 + sigma_5;\n double sigma_7 = M_PI/l;\n double sigma_8 = 2*sigma_7;\n double sigma_9 = phi + sigma_6*sigma_8;\n double sigma_10 = pow(s, -2);\n double sigma_11 = pow(sigma_6, 2);\n double sigma_12 = pow(t, -2);\n double sigma_13 = u - x;\n double sigma_14 = sigma_0*sigma_4;\n double sigma_15 = sigma_13*sigma_3 + sigma_14;\n double sigma_16 = pow(sigma_15, 2);\n double sigma_17 = exp(-1.0/2.0*sigma_10*sigma_11 - 1.0/2.0*sigma_12*sigma_16);\n double sigma_18 = h*sigma_17*sin(sigma_9);\n double sigma_19 = sigma_10*sigma_6;\n double sigma_20 = sigma_12*sigma_15;\n double sigma_21 = sigma_17*cos(sigma_9);\n double sigma_22 = h*sigma_21;\n double sigma_23 = sigma_18*sigma_8;\n double sigma_24 = sigma_2*sigma_3;\n \n grad[0] = sigma_1*sigma_18*sigma_7 + sigma_22*(sigma_0*sigma_19 - sigma_20*sigma_3);\n grad[1] = sigma_22*(sigma_0*sigma_20 + sigma_19*sigma_3) + sigma_23*sigma_3;\n grad[2] = sigma_21;\n grad[3] = sigma_11*sigma_22/pow(s, 3);\n grad[4] = sigma_16*sigma_22/pow(t, 3);\n grad[5] = 2*M_PI*sigma_18*sigma_6/pow(l, 2);\n grad[6] = sigma_22*(-1.0/2.0*sigma_19*(2*sigma_14 - 2*sigma_24) - 1.0/2.0*sigma_20*(sigma_1*sigma_13 - 2*sigma_5)) - sigma_23*(sigma_14 - sigma_24);\n grad[7] = -sigma_18;\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "3907021aee5f96e76390dcc1c471193af81118af", "size": 224807, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "ch02_HomeWork_2.ipynb", "max_stars_repo_name": "loucadgarbon/pattern-recognition", "max_stars_repo_head_hexsha": "2fd45db9597c01f8b76aba1cad9575c2b3a764ec", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "ch02_HomeWork_2.ipynb", "max_issues_repo_name": "loucadgarbon/pattern-recognition", "max_issues_repo_head_hexsha": "2fd45db9597c01f8b76aba1cad9575c2b3a764ec", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ch02_HomeWork_2.ipynb", "max_forks_repo_name": "loucadgarbon/pattern-recognition", "max_forks_repo_head_hexsha": "2fd45db9597c01f8b76aba1cad9575c2b3a764ec", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 346.3898305085, "max_line_length": 116734, "alphanum_fraction": 0.9267860876, "converted": true, "num_tokens": 4553, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.8902942348544447, "lm_q2_score": 0.9230391701259804, "lm_q1q2_score": 0.8217764517079913}} {"text": "# Fourier Series\nAny general waveform of finity energy $x(t)$ (square integrable) can be represented as weighted sum of trigonometric functions.\n\\begin{equation}\nf(x) \\simeq \\frac{a_0}{2} + \\sum_{n=1}^{\\infty}\\left\\{a_n\\cos(\\frac{n\\pi{x}}{p}) + b_n\\sin(\\frac{n\\pi{x}}{p})\\right\\}\n\\end{equation}\nwhere the Fourier coefficients $a_n, b_n$ are defined by\n\\begin{equation}\na_n = \\frac{1}{p}\\int_{-p}^{p}f(x)\\cos(\\frac{n\\pi{x}}{p})dx, n\\geq0\\\\\nb_n = \\frac{1}{p}\\int_{-p}^{p}f(x)\\sin(\\frac{n\\pi{x}}{p})dx, n\\geq1\n\\end{equation}\n\n## Square function (even)\n\\begin{equation}\nf(x) = \\left\\{ \\begin{array}{rcl}\nA, & -\\frac{1}{2}T_p + nT < x < \\frac{1}{2}T_p + nT \\\\ \n0, & \\frac{1}{2}T_p + nT < x < \\frac{3}{2}T_p + nT \n\\end{array}\\right.\n\\end{equation}\nwhere $n \\in R$, $T_p = \\pi$ and $T = 2\\pi$.\n\n\\begin{eqnarray}\na_0 &=& \\frac{1}{T}\\int_0^T f(t)dt \\\\\n&=& \\frac{1}{T}\\int_{-\\frac{T_p}{2}}^{\\frac{T_p}{2}} Adt \\\\ \n&=& \\frac{T_p}{T}A \\\\\n&=& \\frac{A}{2}\n\\end{eqnarray}\n\n\\begin{eqnarray}\na_n &=& \\frac{2}{T}\\int_0^T f(t)\\cos(\\omega_nt)dt \\\\\n&=& \\frac{2}{T}\\int_{-\\frac{T_p}{2}}^{\\frac{T_p}{2}} A\\cos(\\omega_nt)dt \\\\ \n&=& \\frac{2A}{\\omega_n T}\\int_{-\\frac{T_p}{2}}^{\\frac{T_p}{2}} \\cos(\\omega_nt)d\\omega_nt \\\\\n&=& \\frac{4A}{\\omega_n T}\\sin(\\frac{\\omega_n T_p}{2}) \\\\\n&=& \\frac{2A}{n\\pi}\\sin(\\frac{n\\pi}{2})\n\\end{eqnarray}\n\n\\begin{equation}\na_n = \\left\\{ \\begin{array}{lcl}\n0, & n & even, n\\neq 0 \\\\ \n(-1)^{\\frac{n-1}{2}}\\frac{2A}{n\\pi}, & n & odd\n\\end{array}\\right.\n\\end{equation}\n\n\\begin{eqnarray}\nb_n &=& \\frac{2}{T}\\int_0^T f(t)\\sin(\\omega_nt)dt \\\\\n&=& \\frac{2}{T}\\int_{-\\frac{T_p}{2}}^{\\frac{T_p}{2}} A\\sin(\\omega_nt)dt \\\\ \n&=& 0\n\\end{eqnarray}\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# Fourier coefficients\ndef a0(A, X):\n return A/2 * np.ones(len(X))\n\ndef an(A, n):\n if n%2==0:\n return 0\n return (-1)**((n-1)/2)*(2*A)/(n*np.pi)\n\ndef Plot(x, y, s, T):\n plt.figure(figsize=(10,5))\n ax = plt.gca()\n ax.grid(color='#b7b7b7', linestyle='-', linewidth=0.5, alpha=0.5)\n plt.plot(x,y,'-', color='black', lw=2,label=\"Signal\")\n plt.plot(x,s,'-', color='#d3d3d3', lw=2,label=\"Fourier series approximation\")\n plt.xticks([-5/2*np.pi, -3/2*np.pi, -1/2*np.pi, 0, 1/2*np.pi, 3/2*np.pi, 5/2*np.pi],\n [r'$-5/2\\pi$', r'$-3/2\\pi$', r'$-1/2\\pi$', r'$0$',r'$1/2\\pi$', r'$3/2\\pi$', r'$5/2\\pi$'])\n\n # the arrow\n ax.annotate('', xy=(-T/2,0.5), xytext=(-T/2,0),\n arrowprops={'arrowstyle': '-', 'ls': 'dashed', 'ec': '#333333'}, va='center')\n ax.annotate('', xy=(T/2,0.5), xytext=(T/2,0),\n arrowprops={'arrowstyle': '-', 'ls': 'dashed', 'ec': '#333333'}, va='center')\n ax.annotate('', xy=(T/2,0.5), xytext=(-T/2,0.5),\n arrowprops={'arrowstyle': '<->'}, va='center')\n ax.annotate('', xy=(T/4,1.5), xytext=(-T/4,1.5),\n arrowprops={'arrowstyle': '<->'}, va='center')\n plt.text(-0.2, 1.55, r'$T_p$', {'color': 'k', 'fontsize': 12})\n plt.text(-0.2, 0.55, r'$T$', {'color': 'k', 'fontsize': 12})\n plt.show()\n \n\ndef Series(n):\n # Initial parameters\n A = 2\n N = 1000\n T = 2*np.pi\n f = 1/T\n\n x = np.linspace(-5/2*np.pi,5/2*np.pi,N)\n y = A/2*(1 + np.sign(np.cos(2*np.pi*f*x)))\n\n s = a0(A, x) + sum(an(A, i) * np.cos(i*x) for i in range(1,n+1))\n Plot(x,y,s,T)\n \nSeries(19)\n```\n\n## Sawtooth function 
(odd)\n\\begin{equation}\nf(x) = \\frac{2A}{T}x \\quad (n - \\frac{1}{2})T < x < (n + \\frac{1}{2})T\n\\end{equation}\nwhere $n \\in R$, and $T = 2\\pi$.\n\n\\begin{eqnarray}\na_0 &=& \\frac{1}{T}\\int_0^T f(t)dt \\\\\n&=& \\frac{1}{T}\\int_{-\\frac{T}{2}}^{\\frac{T}{2}} \\frac{2A}{T}tdt \\\\ \n&=& 0\n\\end{eqnarray}\n\n\\begin{eqnarray}\na_n &=& \\frac{2}{T}\\int_0^T f(t)\\cos(\\omega_nt)dt \\\\\n&=& \\frac{2}{T}\\int_{-\\frac{T}{2}}^{\\frac{T}{2}} \\frac{2A}{T}t\\cos(\\omega_nt)dt \\\\ \n&=& 0\n\\end{eqnarray}\n\n\\begin{eqnarray}\nb_n &=& \\frac{2}{T}\\int_0^T f(t)\\sin(\\omega_nt)dt \\\\\n&=& \\frac{2}{T}\\int_{-\\frac{T}{2}}^{\\frac{T}{2}} \\frac{2A}{T}t\\sin(\\omega_nt)dt \\\\ \n&=& -\\frac{4A}{\\omega_nT^2}\\int_{-\\frac{T}{2}}^{\\frac{T}{2}}td\\cos(\\omega_nt) \\\\\n&=& -\\frac{4A}{\\omega_nT^2} \\left[ t\\cdot\\cos(\\omega_nt)\\big\\vert_{-\\frac{T}{2}}^{\\frac{T}{2}} - \\int_{-\\frac{T}{2}}^{\\frac{T}{2}} \\cos(\\omega_nt)dt \\right] \\\\\n&=& -\\frac{4A}{\\omega_nT^2} \\left[ T\\cos(\\frac{\\omega_n T}{2}) -\\frac{2}{\\omega_n}\\sin(\\frac{\\omega_n T}{2}) \\right]\n\\end{eqnarray}\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy import signal\n\n# Fourier coefficients\ndef bn(T, A, n):\n    w_n = 2*np.pi*n/T\n    return -(4*A)/(w_n*T**2)*(T*np.cos(w_n*T/2) - 2/(w_n)*np.sin(w_n*T/2))\n\ndef Plot(x,y,s,T):\n    plt.figure(figsize=(10,5))\n    ax = plt.gca()\n    ax.grid(color='#b7b7b7', linestyle='-', linewidth=0.5, alpha=0.5)\n    plt.plot(x,y,'-', color='black', lw=2,label=\"Signal\")\n    plt.plot(x,s,'-', color='#d3d3d3', lw=2,label=\"Fourier series approximation\")\n    plt.xticks([-5/2*np.pi, -3/2*np.pi, -1/2*np.pi, 0, 1/2*np.pi, 3/2*np.pi, 5/2*np.pi],\n               [r'$-5/2\\pi$', r'$-3/2\\pi$', r'$-1/2\\pi$', r'$0$',r'$1/2\\pi$', r'$3/2\\pi$', r'$5/2\\pi$'])\n\n    # the arrow\n    ax.annotate('', xy=(T/2,-1), xytext=(-T/2,-1),\n                arrowprops={'arrowstyle': '<->'}, va='center')\n    plt.text(-0.2, -0.95, r'$T$', {'color': 'k', 'fontsize': 12})\n    \ndef Series(n):\n    A = 2\n    T = 2*np.pi\n    f = 1/T\n    \n    x = np.linspace(-5/2*np.pi,5/2*np.pi,1000)\n    y = A * signal.sawtooth(x-T/2)\n\n    s = sum(bn(T, A, i) * np.sin(i*x) for i in range(1,n+1))\n    Plot(x,y,s,T)\n    \nSeries(19)\n```\n\n## Delta function\nThe delta function can be represented as a generalized form of the square function.\n\n\n```python\n# general form\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# Fourier coefficients\ndef a0(T, T_p, A, X):\n    return T_p/T *A * np.ones(len(X))\n\ndef an(T, T_p, A, n):\n    w_n = 2*np.pi*n/T\n    return (4*A)/(w_n * T)*np.sin((w_n * T_p)/2)\n\ndef Plot(x,y,s,T,T_p): \n    plt.figure(figsize=(10,5))\n    ax = plt.gca()\n    ax.grid(color='#b7b7b7', linestyle='-', linewidth=0.5, alpha=0.5)\n    plt.plot(x,y,'-', color='black', lw=2,label=\"Signal\")\n    plt.plot(x,s,'-', color='#d3d3d3', lw=2,label=\"Fourier series approximation\")\n    plt.xticks([-5/2*np.pi, -3/2*np.pi, -1/2*np.pi, 0, 1/2*np.pi, 3/2*np.pi, 5/2*np.pi],\n               [r'$-5/2\\pi$', r'$-3/2\\pi$', r'$-1/2\\pi$', r'$0$',r'$1/2\\pi$', r'$3/2\\pi$', r'$5/2\\pi$'])\n\n    # the arrow\n    ax.annotate('', xy=(-T/2,0.5), xytext=(-T/2,0),\n                arrowprops={'arrowstyle': '-', 'ls': 'dashed', 'ec': '#333333'}, va='center')\n    ax.annotate('', xy=(T/2,0.5), xytext=(T/2,0),\n                arrowprops={'arrowstyle': '-', 'ls': 'dashed', 'ec': '#333333'}, va='center')\n    ax.annotate('', xy=(T/2,0.5), xytext=(-T/2,0.5),\n                arrowprops={'arrowstyle': '<->'}, va='center')\n    ax.annotate('', xy=(T_p/2,1.5), xytext=(-T_p/2,1.5),\n                arrowprops={'arrowstyle': '<->'}, va='center')\n    plt.text(-0.2, 1.55, r'$T_p$', 
{'color': 'k', 'fontsize': 12})\n    plt.text(-0.2, 0.55, r'$T$', {'color': 'k', 'fontsize': 12})\n    plt.show()\n    \ndef Series(n, delta):\n    # Initial parameters\n    A = 2\n    T = 2*np.pi\n    T_p = np.pi / delta\n    f = 1/T\n\n    x = np.linspace(-5/2*np.pi,5/2*np.pi,1000)\n    y = []\n    for i in x:\n        if i > (-1/2*T_p) and i < (1/2*T_p) or i > (-1/2*T_p + T) and i < (1/2*T_p + T) or i > (-1/2*T_p - T) and i < (1/2*T_p - T):\n            y.append(A)\n        else:\n            y.append(0)\n\n    s = a0(T, T_p, A, x) + sum(an(T, T_p, A, i) * np.cos(i*x) for i in range(1,n+1))\n    Plot(x,y,s,T,T_p)\n    \nSeries(60, 4)\n```\n\n## Rectified Linear Unit function (aperiodic)\n\\begin{equation}\nf(x) = \\left\\{\\begin{array}{cl}\n0,&-\\pi < x \\leq0\\\\\nx,&0 < x < \\pi\\\\\n\\end{array}\\right.\\\n\\end{equation}\n\\begin{equation}\nf(x) \\simeq \\frac{a_0}{2} + \\sum_{n=1}^{\\infty}\\left\\{a_n\\cos(nx) + b_n\\sin(nx)\\right\\}\n\\end{equation}\n\\begin{eqnarray}\n\\frac{a_0}{2} &=& \\frac{1}{2\\pi}\\int_{-\\pi}^{\\pi}f(x)dx \\\\ \n&=& \\frac{1}{2\\pi}\\int_{0}^{\\pi}xdx \\\\\n&=& \\frac{\\pi}{4}\n\\end{eqnarray}\n\\begin{eqnarray}\na_n &=& \\frac{1}{\\pi}\\int_{-\\pi}^{\\pi}f(x)\\cos(nx)dx, n\\geq1\\\\\n&=& \\frac{1}{\\pi}\\int_{0}^{\\pi}x\\cos(nx)dx\\\\\n&=& \\frac{1}{n\\pi}\\int_{0}^{\\pi}xd\\sin(nx)\\\\\n&=& \\frac{1}{n\\pi}\\left\\{ \\left[x\\sin(nx)\\right]_0^\\pi - \\int_{0}^{\\pi}\\sin(nx)dx \\right\\}\\\\\n&=& -\\frac{1}{n^2\\pi}\\int_{0}^{\\pi}\\sin(nx)dnx \\\\\n&=& \\frac{1}{n^2\\pi}\\left[\\cos(nx)\\right]_0^\\pi \\\\\n&=& \\frac{-2}{n^2\\pi},(n\\geq1,odd)\n\\end{eqnarray}\n\\begin{eqnarray}\nb_n &=& \\frac{1}{\\pi}\\int_{-\\pi}^{\\pi}f(x)\\sin(nx)dx, n\\geq1\\\\\n&=& \\frac{1}{\\pi}\\int_{0}^{\\pi}x\\sin(nx)dx\\\\\n&=& -\\frac{1}{n\\pi}\\int_{0}^{\\pi}xd\\cos(nx)\\\\\n&=& -\\frac{1}{n\\pi}\\left\\{ \\left[x\\cos(nx)\\right]_0^\\pi - \\int_{0}^{\\pi}\\cos(nx)dx \\right\\}\\\\\n&=& -\\frac{1}{n\\pi}\\left[(-1)^n\\pi-\\frac{1}{n}\\int_{0}^{\\pi}\\cos(nx)dnx\\right] \\\\\n&=& -\\frac{1}{n\\pi}\\left\\{(-1)^n\\pi-\\frac{1}{n}\\left[\\sin(nx)\\right]_0^\\pi\\right\\} \\\\\n&=& \\frac{(-1)^{n+1}}{n},(n\\geq1)\n\\end{eqnarray}\n\\begin{equation}\nf(x) \\simeq \\frac{\\pi}{4} + \\sum_{n\\geq1,odd}\\frac{-2}{n^2\\pi}\\cos(nx) + \\sum_{n\\geq1}\\frac{(-1)^{n+1}}{n}\\sin(nx)\n\\end{equation}\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# Fourier coefficients\ndef a0(x):\n    return np.pi/4 * np.ones(len(x))\n\ndef an(n,x):\n    if n%2==0:\n        return 0\n    return (-2)/(n**2*np.pi) * np.ones(len(x))\n\ndef bn(n,x):\n    return (-1)**(n+1)/n * np.ones(len(x))\n\n\ndef Plot(x,y,s): \n    plt.figure(figsize=(10,5))\n    ax = plt.gca()\n    ax.grid(color='#b7b7b7', linestyle='-', linewidth=0.5, alpha=0.5)\n    plt.plot(x,y,'-', color='black', lw=2,label=\"RELU\")\n    plt.plot(x,s,'-', color='#b7b7b7', lw=1,label=\"f2\")\n    \n    plt.xticks([-np.pi, -1/2*np.pi, 0, 1/2*np.pi, np.pi], [r'$-\\pi$', r'$-1/2\\pi$', r'$0$',r'$1/2\\pi$', r'$\\pi$'])\n    plt.show()\n    \ndef Series(n):\n    # Initial parameters\n    N = 1000\n    x = np.linspace(-np.pi, np.pi, N)\n    y = np.where(x<0,0,x)\n\n    s = a0(x) + sum(an(i,x) * np.cos(i*x) + bn(i,x) * np.sin(i*x) for i in range(1,n+1))\n    Plot(x,y,s)\n    \nSeries(30)\n```\n\n## Reference:\n\n- [Mathematics of the discrete Fourier transform](https://ccrma.stanford.edu/~jos/st/)\n- [Fourier series - Wikipedia](https://en.wikipedia.org/wiki/Fourier_series)\n- [An Interactive Guide To The Fourier Transform](https://betterexplained.com/articles/an-interactive-guide-to-the-fourier-transform/)\n- [Fourier Series: Basic Results](http://www.sosmath.com/fourier/fourier1/fourier1.html)\n
", "meta": {"hexsha": "ac0ec10c6049dba32f699f1fda7125bc5e3af518", "size": 14635, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "002_fourier_series.ipynb", "max_stars_repo_name": "ValleyZw/hill", "max_stars_repo_head_hexsha": "51d4bc76d7c49b3dccf88659fade806906a9b4e2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-07-22T07:19:39.000Z", "max_stars_repo_stars_event_max_datetime": "2019-07-22T07:19:39.000Z", "max_issues_repo_path": "002_fourier_series.ipynb", "max_issues_repo_name": "ValleyZw/hill", "max_issues_repo_head_hexsha": "51d4bc76d7c49b3dccf88659fade806906a9b4e2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "002_fourier_series.ipynb", "max_forks_repo_name": "ValleyZw/hill", "max_forks_repo_head_hexsha": "51d4bc76d7c49b3dccf88659fade806906a9b4e2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-05-19T17:52:16.000Z", "max_forks_repo_forks_event_max_datetime": "2020-05-19T17:52:16.000Z", "avg_line_length": 38.3115183246, "max_line_length": 186, "alphanum_fraction": 0.4429108302, "converted": true, "num_tokens": 4329, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9230391579526935, "lm_q2_score": 0.8902942304882371, "lm_q1q2_score": 0.8217764368400036}} {"text": "# Solving AOC 19 with optimization\n\nDay 19 of this year's Advent of Code is a nice geometrical problem. We are given sets of points detected by different scanners in $3d$ space. Each scanner has its own coordinate system and does **not** know its own position with respect to the others. Based on some common detected points that scanners share, the task is to retrieve their relative positions in a single coordinate system. A longer and more accurate description can be found at https://adventofcode.com/2021/day/19.\n\nThis task can be split into several smaller steps:\n\n- identify the scanners which share common points\n- choose a reference coordinate system\n- determine the transformation needed to translate all the sets of points into the chosen coordinate system\n\nIn this post, I'm going to talk about how I solved this problem using some algorithms from convex optimization :) \n\n## Reading the input data\n\nFirst, we should read and analyze the data to know what we're dealing with. I saved the example data which is given in the problem description in a text file, which can be easily parsed as done below
\n\n\n```python\nimport numpy as np\n\n\ndef read_input(text_file):\n    scanners = list()\n    with open(text_file) as f:\n        for line in f:\n            if \"scanner\" in line:\n                scanners.append(list())\n            elif len(line.strip()) == 0:\n                scanners[-1] = np.array(scanners[-1])\n            else:\n                scanners[-1].append(tuple(int(i) for i in line.strip().split(\",\")))\n    scanners[-1] = np.array(scanners[-1])\n    return scanners\n\n\npoints = read_input(\"test.txt\")\n```\n\nWe can also take a look at some of the points detected by the scanners\n\n\n```python\npoints[0]\n```\n\n\n\n\n    array([[ 404, -588, -901],\n           [ 528, -643,  409],\n           [-838,  591,  734],\n           [ 390, -675, -793],\n           [-537, -823, -458],\n           [-485, -357,  347],\n           [-345, -311,  381],\n           [-661, -816, -575],\n           [-876,  649,  763],\n           [-618, -824, -621],\n           [ 553,  345, -567],\n           [ 474,  580,  667],\n           [-447, -329,  318],\n           [-584,  868, -557],\n           [ 544, -627, -890],\n           [ 564,  392, -477],\n           [ 455,  729,  728],\n           [-892,  524,  684],\n           [-689,  845, -530],\n           [ 423, -701,  434],\n           [   7,  -33,  -71],\n           [ 630,  319, -379],\n           [ 443,  580,  662],\n           [-789,  900, -551],\n           [ 459, -707,  401]])\n\n\n\n## Find the relations between different sets\n\nOnce we read the data, we should find the sets which share at least 12 points, as stated in the problem description. To do so, we can compute the pairwise distances among all points in the same set (according to some norm). Then, if two sets share at least 66 such distances, it means there are at least 12 points in common between them (since the number of pairwise distances between 12 points is 66, i.e., 12 choose 2). We call sets which share at least 12 points a \"couple\". Note that I'm using python sets, which could give wrong results. Indeed, if more than 2 points are at the same distance (for example (0,0,0), (1,1,1), (2,2,2)), only one distance would be counted, and we would potentially miss some related sets (if this causes the shared distances to be less than 66). Nevertheless, this is easy to fix, although I didn't do it and got the right answer anyway!\n\n\n```python\nfrom scipy.spatial.distance import pdist\nfrom itertools import combinations\n\n\ndef find_connected_sets(scanners):\n    connected = []\n    for cloud_1, cloud_2 in combinations(range(len(scanners)), 2):\n        dist_1 = pdist(scanners[cloud_1])\n        dist_2 = pdist(scanners[cloud_2])\n        shared_distances = set(dist_1) & set(dist_2)\n        if len(shared_distances) >= 66:\n            connected.append((cloud_1, cloud_2))\n            connected.append((cloud_2, cloud_1))\n    return connected\n\ncouples = find_connected_sets(points)\ncouples\n```\n\n\n\n\n    [(0, 1), (1, 0), (1, 3), (3, 1), (1, 4), (4, 1), (2, 4), (4, 2)]\n\n\n\nAnother problem we might encounter with the above approach is that two different sets of points which share the same distances might be considered overlapping even if they're not. For this advent of code puzzle, I'm not really sure this is a problem (how else could we detect that the sets share some points if not considering pairwise distances?), but in a real situation it could be.\n\nOnce we have all the connected \"couples\" of sets, we can build a graph. This graph will be used later, to determine the order to visit the sets of points. 
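\n\nAs a quick aside, and purely as an illustrative sketch (the helper name `build_adjacency` is made up here and is not code that the rest of the post relies on), the same relations could also be kept in a plain adjacency dictionary built from the `couples` list computed above:\n\n\n```python\nfrom collections import defaultdict\n\n\ndef build_adjacency(couples):\n    # couples already contains both (a, b) and (b, a), so a single pass is enough\n    adjacency = defaultdict(set)\n    for a, b in couples:\n        adjacency[a].add(b)\n    return adjacency\n\n\n# for the example data this should print something like\n# {0: {1}, 1: {0, 3, 4}, 3: {1}, 4: {1, 2}, 2: {4}}\nprint(dict(build_adjacency(couples)))\n```\n\n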
For example, the sets in the example data give rise to the following graph.\n\n\n```python\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport networkx as nx\n\n\nG=nx.Graph()\n# Add nodes and edges\nfor couple in find_connected_sets(points):\n G.add_edge(couple[0], couple[1])\n\nnx.draw(G, with_labels=True)\nplt.show()\n```\n\nAs already said, we need to determine the order to visit the sets of points. Indeed, in order to translate all the points in a single coordinate system, we can start from a set of points and iteratively transform the other sets into the system of the first set, following their relations in the graph we built. For example, in the example dataset, we can start from 0. From here, we can only visit 1 (since it's the only set which share at least 12 points with 0). From 1, we can either go to 4 or 3. Suppose we go to 3, then there are no other sets left. At this point we visit 4 and finally 2. This procedure corresponds to a Breadth First Search (BFS) visit of the graph of relations.\n\n\n```python\ndef get_visit_path(relations, path, visited=None, couple=(-1, 0)):\n couples = [i for i in relations if i[0] == couple[1]]\n for new_couple in couples:\n if new_couple[1] not in visited and new_couple != couple[::-1]:\n path.append(new_couple)\n visited.add(new_couple[1])\n # recursively find next scanners\n get_visit_path(relations, path, visited, couple=new_couple)\n\n\npath = []\nvis = set()\nget_visit_path(couples, path, vis)\npath\n```\n\n\n\n\n [(0, 1), (1, 3), (1, 4), (4, 2)]\n\n\n\n## Find the correct transformation\n\nUp to this point, we didn't describe how to translate a coordinate system to another one given that 2 sets share at least 12 points. We are going to do it next. However, first we need to identify common points between two sets which share at least 12 points. This is a bit boring: I do it by doing some manipulations on the indices in the (symmetric) matrix of pairwise distances. 
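\n\nThe rough idea is that every point can be given a \"fingerprint\": the set of (squared) distances between it and the other points seen by the same scanner. Two points coming from different scanners that share many of these distances are then very likely the same physical beacon. Below is a minimal, simplified sketch of that matching idea; the helper names `fingerprints` and `match_points` and the `min_shared=11` threshold are mine (11 because each of 12 shared beacons shares 11 distances with the other shared beacons), and this is not the function used in the rest of the post:\n\n\n```python\ndef fingerprints(points):\n    # squared distances from each point to every other point of the same scanner\n    return [\n        {int(((points[i] - points[j]) ** 2).sum()) for j in range(len(points)) if j != i}\n        for i in range(len(points))\n    ]\n\n\ndef match_points(points_1, points_2, min_shared=11):\n    fp_1 = fingerprints(points_1)\n    fp_2 = fingerprints(points_2)\n    # a point of the first scanner is matched to a point of the second one\n    # when their fingerprints share at least min_shared distances\n    return {\n        i: j\n        for i, f1 in enumerate(fp_1)\n        for j, f2 in enumerate(fp_2)\n        if len(f1 & f2) >= min_shared\n    }\n```\n\n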
If you're interested in the process, the code is given below:\n\n\n```python\nfrom scipy.spatial.distance import squareform\nfrom collections import defaultdict, Counter\nfrom itertools import product\n\n\n\ndef find_shared_points(points_1, points_2):\n    pairwise_dist_1 = pdist(points_1, 'sqeuclidean')\n    pairwise_dist_2 = pdist(points_2, 'sqeuclidean')\n    shared_distances = set(pairwise_dist_1) & set(pairwise_dist_2)\n    if len(shared_distances) < 66:\n        raise ValueError(\"At least 12 points are required!\")\n    dist_1 = squareform(pairwise_dist_1)\n    dist_1[np.tril_indices(len(dist_1))] = 0\n    dist_2 = squareform(pairwise_dist_2)\n    dist_2[np.tril_indices(len(dist_2))] = 0\n    graph_dict = defaultdict(Counter)\n    for dist in shared_distances:\n        first_group = np.where(dist == dist_1)\n        second_group = np.where(dist == dist_2)\n\n        for node1 in first_group:\n            for node2 in second_group:\n                try:\n                    graph_dict[node1.item()][node2.item()] += 1\n                except ValueError:\n                    # if there's more than 1 index from np.where\n                    for couple in product(node1, node2):\n                        graph_dict[couple[0]][couple[1]] += 1\n    graph_dict = {\n        node: graph_dict[node].most_common(1)[0][0]\n        for node in graph_dict.keys()\n        if graph_dict[node].most_common(1)[0][1] >= 11\n    }\n    return graph_dict\n\n\n\nfind_shared_points(points[0], points[1])\nfor key, value in find_shared_points(points[0], points[1]).items():\n    print(points[0][key], points[1][value])\n```\n\n    [ 528 -643  409] [-460  603 -452]\n    [-485 -357  347] [ 553  889 -390]\n    [-661 -816 -575] [729 430 532]\n    [-345 -311  381] [ 413  935 -424]\n    [-618 -824 -621] [686 422 578]\n    [ 404 -588 -901] [-336  658  858]\n    [ 459 -707  401] [-391  539 -444]\n    [ 544 -627 -890] [-476  619  847]\n    [-447 -329  318] [ 515  917 -361]\n    [ 423 -701  434] [-355  545 -477]\n    [ 390 -675 -793] [-322  571  750]\n    [-537 -823 -458] [605 423 415]\n\n\nHere comes the interesting part! Once we have the shared points, we can actually solve the problem. At this point, we have to find the transformation which transforms one point cloud to the other. This transformation is given by a rotation matrix and a translation vector. If we denote by $x$ and $x'$ the points in the two different coordinate systems, we will have: \n\n$$ x = R x' + t $$\n\nwhere $R$ is a $ 3 \\times 3 $ rotation matrix and $ t $ is a translation vector. We can actually focus on the rotation matrix, as the translation vector is easy to recover once the points are \"aligned\".\n\nThe problem of finding the rotation matrix can be formalized as an optimization problem. First, we center the point clouds by subtracting their centroids, i.e., their mean vectors. We define the centered points as $\\{y_i\\}_{i=1}^n$ and $\\{y_i' \\}_{i=1}^n$. Then we want to minimize (with respect to the rotation matrix) the sum of the distances between every couple of points. In mathematical terms, we have the following objective to minimize\n\n$$ f(A) = \\frac{1}{n} \\sum_{i=1}^n \\| y_i - A y_i' \\| $$\n\nwhere $A$ belongs to the set of real-valued 3 by 3 matrices. \n\nIs this a convex objective? It is easy to verify. Consider the function $g_i(A) = \\| y_i - A y_i' \\|$. From the definition of convexity:\n\n$$ \n\\begin{align}\ng_i(\\lambda A_1 + (1-\\lambda) A_2) &= \\| y_i - \\lambda A_1 y_i' - (1-\\lambda) A_2 y_i' \\| \\\\\n &= \\| \\lambda (y_i - A_1 y_i') + (1-\\lambda) (y_i - A_2 y_i') \\| \\\\\n &
Here comes the interesting part! Once we have the shared points, we can actually solve the problem. At this point, we have to find the transformation which transforms one point cloud into the other. This transformation is given by a rotation matrix and a translation vector. If we denote by $x$ and $x'$ the points in the two different coordinate systems, we will have: \n\n$$ x = R x' + t $$\n\nwhere $R$ is a $ 3 \\times 3 $ rotation matrix and $ t $ is a translation vector. We can actually focus on the rotation matrix, as the translation vector is easy to recover once the points are \"aligned\".\n\n
The problem of finding the rotation matrix can be formalized as an optimization problem. First, we center the point clouds by subtracting their centroids, i.e., their mean vectors. We define the centered points as $\\{y_i\\}_{i=1}^n$ and $\\{y_i' \\}_{i=1}^n$. Then we want to minimize (with respect to the rotation matrix) the sum of the distances between every couple of points. In mathematical terms, we have the following objective to minimize\n\n$$ f(A) = \\frac{1}{n} \\sum_{i=1}^n \\| y_i - A y_i' \\| $$\n\nwhere $A$ belongs to the set of real-valued 3 by 3 matrices. \n\n
Is this a convex objective? It is easy to verify. Consider the function $g_i(A) = \\| y_i - A y_i' \\|$. From the definition of convexity:\n\n$$ \n\\begin{align}\ng_i(\\lambda A_1 + (1-\\lambda) A_2) &= \\| y_i - (\\lambda A_1 + (1-\\lambda) A_2) y_i' \\| \\\\\n &= \\| \\lambda (y_i - A_1 y_i') + (1-\\lambda) (y_i - A_2 y_i') \\| \\\\\n &\\leq \\lambda \\| y_i - A_1 y_i' \\| + (1-\\lambda) \\| y_i - A_2 y_i' \\| = \\lambda g_i(A_1) + (1-\\lambda) g_i(A_2)\n\\end{align}\n$$\n\nwhere we used the triangle inequality in the third step. 
Note that $f(A) = \\frac{1}{n} \\sum_{i=1}^n g_i(A)$ and the sum of convex functions is still a convex function. Hence, we have that $f$ is a convex function and we are guaranteed to find a global optimum.\n\nWe didn't specify the norm used in $f$, but it could be any norm. For simplicity, we will consider the Euclidean norm. The correct matrix $R$ will be the one minimizing the objective function $f$,\n\n$$ R = \\arg\\min_{A \\in \\mathbb{R}^{3 \\times 3}} f(A) $$\n\nTo be more precise, the set over which we are minimizing should be the one of the rotation matrices in $3d$ space, i.e., orthogonal matrices 3 by 3 with determinant equal to 1. However, if we use a gradient-based method, this more accurate formulation would involve an expensive projection step onto the mentioned set in every step of the optimization process. Also, I found out that the minimization problem above worked well for the task at hand. \n\n
Now that we have our optimization problem formulated, we can try to solve it by using any optimization algorithm. To this aim, we can for example try any of the optimization algorithms contained in pytorch. One nice feature of pytorch is that it computes the gradients automatically, once we define the loss function. Below, we use the most famous optimization algorithm, Stochastic Gradient Descent (SGD). The condition we use to decide when to stop the iterations is to check whether the rounded matrix, multiplied by the points to be rotated, gives back the points we want to recover. If this does not happen after 10 thousand steps, an error is raised.\n\n\n
```python\nimport torch\nfrom functools import partial\n\n\ndef loss_f(x1, x2, model):\n    return torch.linalg.norm(x1 - x2 @ model, axis=1).sum() / len(x1)\n\n\ndef optimize(cloud_1, cloud_2, opt_algo, save_history=True):\n    centered_cloud_1 = cloud_1 - cloud_1.mean(axis=0)\n    centered_cloud_2 = cloud_2 - cloud_2.mean(axis=0)\n    centered_cloud_1 = torch.tensor(centered_cloud_1).float()\n    centered_cloud_2 = torch.tensor(centered_cloud_2).float()\n    history = [centered_cloud_1, centered_cloud_2]\n    model = torch.eye(3, requires_grad=True)\n    optimizer = opt_algo(params=[model])\n    iterations = 10000\n    cum_loss = 0\n    for step in range(iterations):\n        optimizer.zero_grad()\n        loss = loss_f(centered_cloud_1, centered_cloud_2, model)\n        loss.backward()\n        optimizer.step()\n        cum_loss += loss.item()\n        converged = torch.all(torch.isclose(centered_cloud_1, centered_cloud_2 @ torch.round(model.detach())))\n        if save_history:\n            history.append(centered_cloud_2 @ model)\n        if converged:\n            break\n\n    if not converged:\n        raise ValueError(\n            f\"Method has not converged!\\n\"\n            f\"Rotation matrix:\\n\"\n            f\"{torch.round(model.detach())}\\n\"\n            f\"Loss at step {step}: {loss.item()}\"\n        )\n    rot_mat = np.rint(model.detach().numpy()).astype(int)\n    transl_vec = cloud_1[0] - cloud_2[0] @ rot_mat\n    return rot_mat, transl_vec, step, history\n\n\n\ndef find_transf(points_1, points_2, algo, save_history=True):\n    # retrieve shared points between point clouds\n    graph_dict = find_shared_points(points_1, points_2)\n    # retrieve the transformation\n    first = points_1[list(graph_dict.keys())]\n    second = points_2[list(graph_dict.values())]\n    rot_mat, transl_vec, steps, history = optimize(first, second, algo, save_history=save_history)\n    return rot_mat, transl_vec, steps, history\n\n```\n\n
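As an aside (not used in the rest of the post), for the squared-Euclidean variant of this objective restricted to true rotations there is also a well-known closed-form solution, the orthogonal Procrustes / Kabsch construction based on the SVD. A minimal sketch, assuming `first` and `second` are the matched `(n, 3)` arrays extracted inside `find_transf`:\n\n\n```python\n# hypothetical closed-form alternative (orthogonal Procrustes / Kabsch)\nimport numpy as np\n\n\ndef kabsch_rotation(cloud_1, cloud_2):\n    # returns R such that (cloud_2 - its mean) @ R is close to (cloud_1 - its mean)\n    a = cloud_1 - cloud_1.mean(axis=0)\n    b = cloud_2 - cloud_2.mean(axis=0)\n    m = b.T @ a                          # 3x3 cross-covariance of the matched, centered points\n    u, _, vt = np.linalg.svd(m)\n    d = np.sign(np.linalg.det(u @ vt))   # guard against a reflection (det = -1)\n    return u @ np.diag([1.0, 1.0, d]) @ vt\n\n\nR = kabsch_rotation(first, second)\n```\n\n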
We can try to visualize the optimization process by plotting the transformed points as the objective function is minimized. 
We can see that as the optimal solution is approached, the transformed red points approach the original blue points. This only works in an interactive Jupyter notebook though. In the next cell we export the result to an HTML video (see below).\n\n\n
```python\n%matplotlib notebook\nfrom IPython.display import HTML\nimport matplotlib.animation as animation\n\n\ndef update(i, X, Y, Z, cloud1, cloud2):\n    cloud2[0].set_data(X[i], Y[i])\n    cloud2[0].set_3d_properties(Z[i])\n    return cloud1, cloud2\n\n\ndef make_animation(data):\n    data = np.array([i.detach().numpy() for i in data])\n    xs = data[:, :, 0]\n    ys = data[:, :, 1]\n    zs = data[:, :, 2]\n    fig = plt.figure()\n    fig.set_size_inches(10, 10)\n    ax = fig.add_subplot(111, projection='3d')\n    cloud_1 = ax.plot(xs[0], ys[0], zs[0], \"o\", color='blue', markersize=5)\n    cloud_2 = ax.plot([], [], [], \"o\", color='red', markersize=5)\n\n    # ax limits\n    x_max = max(xs.flatten())\n    x_min = min(xs.flatten())\n    y_max = max(ys.flatten())\n    y_min = min(ys.flatten())\n    z_max = max(zs.flatten())\n    z_min = min(zs.flatten())\n    ax.set_xlim(x_min - 10, x_max + 10)\n    ax.set_ylim(y_min - 10, y_max + 10)\n    ax.set_zlim(z_min - 10, z_max + 10)\n\n    ani = animation.FuncAnimation(\n        fig,\n        update,\n        frames=len(data) - 1,\n        fargs=(xs, ys, zs, cloud_1, cloud_2),\n        interval=100,\n        blit=True,\n        repeat=True,\n    )\n    # ani.save('rotation.gif', writer='imagemagick', fps=200)\n    return ani\n\n\nsgd = partial(torch.optim.SGD, lr=1e-4)\n_, _, _, hist = find_transf(points[0], points[1], sgd)\nani = make_animation(hist)\n```\n\n\n
```python\nHTML(ani.to_html5_video())\n```\n\n\n
One disadvantage of SGD is that it has a learning rate to be set, which can influence the speed of convergence of the optimization process. Even worse, if the learning rate is completely wrong, the solution will never converge to the optimal one (for example, you can [download the notebook](https://nicolaus93.github.io/assets/aoc19.ipynb) and try to use lr=1e-2)! For this reason, finding a good learning rate is a time-consuming process, since different values have to be tried until a good one is found. On the other hand, there are also other optimization algorithms which do not need parameters such as the learning rate to be tuned and can still converge to the optimal solution. One such algorithm is Cocob, which has also been used to train neural networks. Let's see how it performs on this problem.\n\n\n
```python\n%%capture\nfrom optimal_pytorch.coin_betting.torch import Cocob\n\n_, _, _, hist = find_transf(points[0], points[1], Cocob)\nani = make_animation(hist)\n```\n\nNote how we don't have to set any $lr$ parameter above!\n\n\n```python\nHTML(ani.to_html5_video())\n```\n\n\n
There's also another subtle difference which can be noticed by looking at the videos above. If you look at the speed of the moving points when using SGD and Cocob, you can see that it is not the same. In SGD it is constant, and the red points slowly approach the blue ones. On the other hand, with Cocob the red points start moving slowly but then their speed greatly increases as they approach the blue ones. What is going on here?\n\nThe explanation is that the learning rate in SGD is fixed (at least in the pytorch version), while in Cocob it is not! But... didn't we say that Cocob does not have any learning rate? Well, Cocob doesn't have any learning rate to be **set**, but that doesn't mean it has no learning rate at all. 
The truth is that this algorithm is able to automatically determine the optimal learning rate, which is not fixed in advance but changes depending on the data fed to it. In particular, it can be shown that finding the optimal learning rate is more or less equivalent to betting on the outcome of a coin in a game where the goal of the algorithm is to increase its total amount of money, hence the name Cocob (which stands for Continuous **Coin Betting**). Furthermore, if the objective function is fixed, the amount of money the algorithm is betting (which plays the role of the learning rate) will grow exponentially and the algorithm will approach the optimal solution exponentially fast! If you're interested in a more detailed explanation you can watch [this video](https://youtu.be/f4KTZoClfs4?t=948). Coin betting is a fascinating and theoretically grounded area of research in optimization!\n\n
## Solving the original problem\n\nNow that we have all the pieces needed, we can solve the original problem! In particular, there are 2 tasks:\n\n- Translate all the points to a single coordinate system and count the number of unique points\n- Find the couple of scanners which are at the largest Manhattan distance apart. What is that distance?\n\nBelow, we show how to use the functions described until now to solve the 2 tasks:\n\n\n
```python\ndef solve(data_file, algo, save_iterations=False):\n    point_clouds = read_input(data_file)\n    couples = find_connected_sets(point_clouds)\n    visit_path = []\n    vis = set()\n    get_visit_path(couples, visit_path, vis)  # modify path inplace\n    transl_vectors = []\n    tot_iterations = 0\n    for couple in visit_path:\n        first, second = couple\n        rot_mat, transl_vec, iterations, history = find_transf(\n            point_clouds[first], \n            point_clouds[second], \n            algo, \n            save_history=save_iterations\n        )\n        point_clouds[second] = point_clouds[second] @ rot_mat + transl_vec\n        transl_vectors.append(transl_vec)\n        tot_iterations += len(history) - 1\n    \n    common_points = set()\n    for cloud in point_clouds:\n        for point in cloud:\n            common_points.add(tuple(point))\n    \n    max_dist = max(pdist(transl_vectors, 'cityblock'))\n    if save_iterations:\n        print(f\"Total number of iterations: {tot_iterations}\")\n    return len(common_points), int(max_dist)\n    \n    \npart_1, part_2 = solve(\"test.txt\", Cocob)\nprint(f\"part1: {part_1}\")\nprint(f\"part2: {part_2}\")\n```\n\n part1: 79\n part2: 3621\n\n\n
```python\npart_1, part_2 = solve(\"test.txt\", partial(torch.optim.SGD, lr=1e-4))\nprint(f\"part1: {part_1}\")\nprint(f\"part2: {part_2}\")\n```\n\n part1: 79\n part2: 3621\n\n\nAnd now the real puzzle, both with Cocob and SGD:\n\n\n
```python\npart_1, part_2 = solve(\"input_puzzle.txt\", Cocob)\nprint(f\"part1: {part_1}\")\nprint(f\"part2: {part_2}\")\n```\n\n part1: 400\n part2: 12168\n\n\n
```python\npart_1, part_2 = solve(\"input_puzzle.txt\", partial(torch.optim.SGD, lr=1e-4))\nprint(f\"part1: {part_1}\")\nprint(f\"part2: {part_2}\")\n```\n\n part1: 400\n part2: 12168\n\n\nWe can also measure how long it takes for the optimization algorithms to find the solution in terms of time:\n\n\n
```python\n%%timeit\nsolve(\"input_puzzle.txt\", Cocob)\n```\n\n 508 ms \u00b1 4.33 ms per loop (mean \u00b1 std. dev. of 7 runs, 1 loop each)\n\n\n\n```python\n%%timeit\nsolve(\"input_puzzle.txt\", partial(torch.optim.SGD, lr=1e-4))\n```\n\n 958 ms \u00b1 37.3 ms per loop (mean \u00b1 std. dev. of 7 runs, 1 loop each)\n\n\nIt seems that Cocob is faster. 
However, the time taken to find a solution is not really meaningful since it is subject to many different variables. Instead, we can measure how many iterations the algorithms need:\n\n\n```python\nsolve(\"input_puzzle.txt\", Cocob, save_iterations=True)\n```\n\n Total number of iterations: 2069\n\n\n\n\n\n (400, 12168)\n\n\n\n\n```python\nsolve(\"input_puzzle.txt\", partial(torch.optim.SGD, lr=1e-4), save_iterations=True)\n```\n\n Total number of iterations: 4936\n\n\n\n\n\n (400, 12168)\n\n\n\n## Summary\n\nThis was definitely a cool problem to solve in this year's Advent of Code! I hope you enjoyed the explanation on optimization algorithms and especially **coin betting**. I didn't go into much detail, but if you're interested this is a very nice topic in optimization and machine learning. You can find many resources online, and there even was a [tutorial](https://www.youtube.com/watch?v=UT_ziU3nIwU&t=619s) at last year's ICML. Despite their [practical success](https://github.com/Arturus/kaggle-web-traffic/blob/master/how_it_works.md#training-and-validation), I believe these algorithms are still not very well known to the practicioners. One of the reason is that there are not many implementations available. For this reason, last year we started implementing some of them in pytorch, and currently a [library](https://github.com/Nicolaus93/coin_betting) on Github is available on my profile. The plan is to add more parameter-free algorithms to the library (not only in torch but also JAX) and extensively test them against other more famous optimization algorithms such as SGD or Adam.\n", "meta": {"hexsha": "c4bf896509cba6962b5b8656f706362a0c9fe06e", "size": 377372, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "assets/aoc19.ipynb", "max_stars_repo_name": "Nicolaus93/nicolaus93.github.io", "max_stars_repo_head_hexsha": "1abe6160ac00e9cb1c76a90d5c1edf1354d0d5fa", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "assets/aoc19.ipynb", "max_issues_repo_name": "Nicolaus93/nicolaus93.github.io", "max_issues_repo_head_hexsha": "1abe6160ac00e9cb1c76a90d5c1edf1354d0d5fa", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "assets/aoc19.ipynb", "max_forks_repo_name": "Nicolaus93/nicolaus93.github.io", "max_forks_repo_head_hexsha": "1abe6160ac00e9cb1c76a90d5c1edf1354d0d5fa", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 105.7953462293, "max_line_length": 133503, "alphanum_fraction": 0.8208584633, "converted": true, "num_tokens": 5858, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9458012747599251, "lm_q2_score": 0.8688267728417087, "lm_q1q2_score": 0.8217374692992401}} {"text": "```python\n\n```\n\n\n```python\nfrom sympy import *\nfrom math import *\nx, y = Symbol('x'), Symbol('y')\n\nfrom sympy.plotting import plot\n\nfrom sympy.vector import Vector\nfrom sympy.vector import CoordSys3D\n\nfrom sympy.geometry import Point\n\nN = CoordSys3D('N')\n\n```\n\n\n```python\nclass taylor_polys:\n def __init__(self):\n pass\n \n def taylor_eval(self, f, a, n):\n \"\"\"\n Evaluates the Taylor Polynomial of a given function n times centered around a\n\n A Taylor polynomial is defined as:\n n\n Pn(x) = \u03a3((deriv(f(a), i)(x-a)^n)/(n!))\n i=0\n \n f = function\n n = # of iterations\n a = center point\n\n Ex: P2(x) @ a = 3\n\n = ((deriv(f(a), 0)(x-3)^0)/(0!)) + ((deriv(f(a), 1)(x-3)^1)/(1!)) + ((deriv(f(a), 2)(x-3)^2)/(2!))\n = 0 + deriv(f(a), 1)(x-3) + ((deriv(f(a), 2)(x-3)^2)/6)\n \"\"\"\n return sum([(lambdify(x, diff(f, x, i))(a)*(x-a)**i)/(factorial(i)) for i in range(n+1)])\n \n def langr_error(self, f, a, n, x_est):\n \"\"\"\n Evaluates the Langrange Error of a function given an estimate of x \n \n The Langrange Error Bound for Pn(x) is defined as:\n \n abs(En(x)) = abs(f(x) - Pn(x)) <= (M*(abs(x-a))**n+1)/((n+1)!)\n \n M = max(deriv(f(abs(a)), n+1), deriv(f(abs(x_est)), n+1))\n \n M = is the bigger # between the n+1th derivative of the function of the absolute values of a and the estimate of x\n x_est = the estimate of x\n \n \"\"\"\n \n M = max(lambdify(x, diff(f, x, n+1))(abs(a)),lambdify(x, diff(f, x, n+1))(abs(x_est)))\n return (M*(abs(x_est-a)**(n+1)))/(factorial(n+1)) \n```\n\n\n```python\nmodel = taylor_polys()\nmodel.langr_error(e**x, 0, 3, -.1)\n```\n\n\n\n\n 4.6048788253152e-06\n\n\n\n\n```python\nans_eq = 2*x**3 + 4*x**2 + 1*x\napprox_eq = model.taylor_eval(ans_eq,2,2)\napprox_eq\n```\n\n\n\n\n$\\displaystyle 41 x + 16 \\left(x - 2\\right)^{2} - 48$\n\n\n\n\n```python\n\nplot(approx_eq ,ans_eq, xlim=[-10,10], ylim=[-10,10])\n```\n\n\n```python\ntesteq = lambdify([x,y],x**y)(1,1)\ntesteq\n```\n\n\n\n\n 1\n\n\n\n\n```python\neq1 = Eq(y**2, 2*x+3)\nsolve(eq1,y)\n```\n\n\n\n\n [-sqrt(2*x + 3), sqrt(2*x + 3)]\n\n\n\n\n```python\nn = Symbol('n')\ndef ratio_test(eq):\n \n ans = solveset(abs(limit(eq.subs(n,n+1)/eq, n, float('inf'))) < 1, x, S.Reals)\n return ans, (ans.end-ans.start)/2\n```\n\n\n```python\nratio_test(((1)**n*(2*x-3)**(n+1))/(2**n*(n+1)))\n```\n\n\n\n\n (Interval.open(1/2, 5/2), 1)\n\n\n\n\n```python\ndef projAtoB(a, b): return (a.dot(b)/(b.magnitude()**2)) * b\n```\n\n\n```python\na = 6*N.i - 3*N.j + 2*N.k\nb = 2*N.i + 1*N.j - 2*N.k\nprojAtoB(a, b)\n```\n\n\n\n\n$\\displaystyle (\\frac{10}{9})\\mathbf{\\hat{i}_{N}} + (\\frac{5}{9})\\mathbf{\\hat{j}_{N}} + (- \\frac{10}{9})\\mathbf{\\hat{k}_{N}}$\n\n\n\n\n```python\nt = Symbol('t')\ndef collision_test(p1, p2):\n # Collision Test\n \n col = dict((str(x), solve(Eq(p1[x], p2[x]))) for x in range(3))\n \n for i in col['0']:\n if i in col['1'] and i in col['2']:\n return Point([lambdify(t,p1[x])(i) for x in range(3)]),True\n \n return (None, False)\n```\n\n\n```python\npointA = Point(t, t**2, t**3)\npointB = Point(1 - t, (1-t)/2, (1-t)/4)\n\ncollision_test(pointA, pointB)\n```\n\n\n\n\n (Point3D(1/2, 1/4, 1/8), True)\n\n\n\n\n```python\ndef normalize(x):\n return x/x.magnitude()\n```\n\n\n```python\na = 1*N.i + 1*N.j - 1*N.k \nnormalize(a)\n```\n\n\n\n\n$\\displaystyle (\\frac{\\sqrt{3}}{3})\\mathbf{\\hat{i}_{N}} + (\\frac{\\sqrt{3}}{3})\\mathbf{\\hat{j}_{N}} + (- 
\\frac{\\sqrt{3}}{3})\\mathbf{\\hat{k}_{N}}$\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "c6a30e2c7e56cc6bfbf1326f1391fd82bc263789", "size": 23198, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Math/Math - Calculus II.ipynb", "max_stars_repo_name": "AlephEleven/awesome-projects", "max_stars_repo_head_hexsha": "871c9cd6ef12ad7b0ee9f1bf4b296e2d1ff78493", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Math/Math - Calculus II.ipynb", "max_issues_repo_name": "AlephEleven/awesome-projects", "max_issues_repo_head_hexsha": "871c9cd6ef12ad7b0ee9f1bf4b296e2d1ff78493", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Math/Math - Calculus II.ipynb", "max_forks_repo_name": "AlephEleven/awesome-projects", "max_forks_repo_head_hexsha": "871c9cd6ef12ad7b0ee9f1bf4b296e2d1ff78493", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 61.8613333333, "max_line_length": 14772, "alphanum_fraction": 0.7761876024, "converted": true, "num_tokens": 1275, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9458012701768144, "lm_q2_score": 0.8688267745399465, "lm_q1q2_score": 0.8217374669235061}} {"text": "# Symbolick\u00e9 po\u010dty\nV tomto tutori\u00e1lu je p\u0159edstaven modul [Sympy](http://www.sympy.org/en/index.html), kter\u00fd slou\u017e\u00ed k po\u010dt\u016fm se symbolickou prom\u011bnnou v Pythonu. K\u00f3d pro import Sympy a nastaven\u00ed Pylabu n\u00e1sleduje.\n\n\n```python\n# inline plots \n%matplotlib inline \n# import sympy\nimport sympy as sp \n```\n\nAby bylo mo\u017en\u00e9 zobrazit v\u00fdstup ze Sympy pomoc\u00ed Latex rovnic\u00ed v Jupyter Notebook, n\u00e1sleduj\u00edc\u00ed nastaven\u00ed mus\u00ed b\u00fdt provedeno.\n\n\n```python\nsp.init_printing(use_latex='mathjax')\n```\n\n***\n## Pr\u00e1ce se Sympy\nNa za\u010d\u00e1tku je pot\u0159eba si vytvo\u0159it symbolick\u00e9 prom\u011bnn\u00e9. P\u0159\u00edklad n\u00e1sleduje.\n\n\n```python\nx, y, z = sp.symbols('x y z')\n```\n\nOd te\u010f je mo\u017en\u00e9 prom\u011bnn\u00e9 `x, y, z` (Sympy objekty prom\u011bnn\u00e1) pou\u017e\u00edvat p\u0159i po\u010dtech.\n\n\n```python\nsp.Eq(x + y, z) # create equation\n```\n\n\n\n\n$$x + y = z$$\n\n\n\n\n```python\nsp.simplify(x**2 + y - 2*z / (x*z)) # simplify expression\n```\n\n\n\n\n$$x^{2} + y - \\frac{2}{x}$$\n\n\n\nN\u011bkter\u00e9 funkce Sympy dok\u00e1\u017e\u00ed tak\u00e9 pracovat s v\u00fdrazem/rovnic\u00ed zadanou jako text. V tomto p\u0159\u00edpad\u011b nen\u00ed pot\u0159eba prom\u011bnn\u00e9 vytv\u00e1\u0159et p\u0159edem. V n\u00e1sleduj\u00edc\u00edm p\u0159\u00edkladu u\u017eijeme prom\u011bnn\u00e9 *a* a *b*, kter\u00e9 p\u0159edt\u00edm nijak nedefinujeme!\n\n\n```python\nsp.simplify(\"a + b**2\")\n```\n\n\n\n\n$$a + b^{2}$$\n\n\n\nN\u011bkter\u00e9 funkce vrac\u00ed sv\u016fj v\u00fdsledek ve form\u011b objektu, kter\u00fd je mo\u017en\u00fd n\u00e1sledn\u011b pou\u017e\u00edt. 
Uk\u00e1z\u00e1no na p\u0159\u00edkladu.\n\n\n```python\nf = sp.simplify(\"a + b**2 / (a*b)\") # creation of Sympy object - expression\ntype(f)\n```\n\n\n\n\n sympy.core.add.Add\n\n\n\n\n```python\nf.cancel()\n```\n\n\n\n\n$$\\frac{1}{a} \\left(a^{2} + b\\right)$$\n\n\n\nPozn\u00e1mka: alternativn\u00ed pou\u017eit\u00ed podobn\u00fdch funkc\u00ed je mo\u017eno p\u0159\u00edmo s textem jako argument - p\u0159\u00edklad n\u00e1sleduje:\n\n\n```python\nsp.cancel(\"a + b**2 / (a*b)\")\n```\n\n\n\n\n$$\\frac{1}{a} \\left(a^{2} + b\\right)$$\n\n\n\n***\n## \u00dapravy v\u00fdraz\u016f\nN\u00e1sleduj\u00ed p\u0159\u00edklady \u00faprav v\u00fdraz\u016f na v\u00fdrazu: $\\frac{z+\\frac{x^3 + 1}{y^2}}{2(1-x)+x^2}$\n\n\n```python\nf = '(1+(x**3+1/x**2))/(2*(1-x)+x**2)'\n```\n\n### Zjednodu\u0161en\u00ed v\u00fdrazu\n\n\n```python\nsp.sympify(f)\n```\n\n\n\n\n$$\\frac{x^{3} + 1 + \\frac{1}{x^{2}}}{x^{2} - 2 x + 2}$$\n\n\n\n### P\u0159eveden\u00ed na kanonickou formu\n\n\n```python\nsp.cancel(f)\n```\n\n\n\n\n$$\\frac{x^{5} + x^{2} + 1}{x^{4} - 2 x^{3} + 2 x^{2}}$$\n\n\n\n### Rozlo\u017een\u00ed na faktory\n\n\n```python\nsp.factor(f)\n```\n\n\n\n\n$$\\frac{x^{5} + x^{2} + 1}{x^{2} \\left(x^{2} - 2 x + 2\\right)}$$\n\n\n\n### Rozklad\n\n\n```python\nsp.expand(f)\n```\n\n\n\n\n$$\\frac{x^{3}}{x^{2} - 2 x + 2} + \\frac{1}{x^{4} - 2 x^{3} + 2 x^{2}} + \\frac{1}{x^{2} - 2 x + 2}$$\n\n\n\n### Rozklad na parci\u00e1ln\u00ed zlomky\n\n\n```python\nfs = sp.simplify(f)\nsp.apart(fs)\n```\n\n\n\n\n$$x + \\frac{3 x - 5}{2 x^{2} - 4 x + 4} + 2 + \\frac{1}{2 x} + \\frac{1}{2 x^{2}}$$\n\n\n\n### Dosazen\u00ed hodnoty\n\n\n```python\nfs.subs(\"x\", 5)\n```\n\n\n\n\n$$\\frac{3151}{425}$$\n\n\n\n***\n## \u0158e\u0161en\u00ed rovnic\n### Ko\u0159eny rovnice\nN\u00e1sleduj\u00ed dva p\u0159\u00edklady jak z\u00edskat ko\u0159eny rovnice.\n\n\n```python\nf = \"x**2 +3*x -4\"\n```\n\n\n```python\nsp.roots(f)\n```\n\n\n\n\n$$\\left \\{ -4 : 1, \\quad 1 : 1\\right \\}$$\n\n\n\nNebo:\n\n\n```python\nsp.solve(f)\n\n```\n\n\n\n\n$$\\left [ -4, \\quad 1\\right ]$$\n\n\n\n### \u0158e\u0161en\u00ed soustavy rovnic\nPomoc\u00ed Sympy je mo\u017en\u00e9 zadat soustavu rovnic v\u00edce zp\u016fsoby. N\u00e1sleduje p\u0159\u00edklad jak vytvo\u0159it soustavu:\n\n$ a + b = 1 $\n\n$ a^4 = c $\n\n$ b - 5/2 = 3 $\n\npomoc\u00ed listu rovnic.\n\n\n```python\na, b, c = sp.symbols(\"a, b, c\")\nequations = [\n sp.Eq(a + b, 1),\n sp.Eq(a**4, c),\n sp.Eq(b - 5/2, 3),\n]\nsp.solve(equations)\n```\n\n\n\n\n$$\\left [ \\left \\{ a : -4.5, \\quad b : 5.5, \\quad c : 410.0625\\right \\}\\right ]$$\n\n\n\nPozn\u00e1mka: v\u0161im\u011bte si, \u017ee vr\u00e1cen je slovnik uvnit\u0159 listu.\n\n***\n## Kalkulus\n\n### Derivace\n\n\n```python\nfd = sp.Derivative('2*x*sqrt(1/x)',x)\nfd\n```\n\n\n\n\n$$\\frac{d}{d x}\\left(2 x \\sqrt{\\frac{1}{x}}\\right)$$\n\n\n\n\n```python\nfd.doit()\n```\n\n\n\n\n$$\\sqrt{\\frac{1}{x}}$$\n\n\n\n### Integr\u00e1l\n\n\n```python\nfi = sp.Integral('2*x*sqrt(1/x)',x)\nfi\n```\n\n\n\n\n$$\\int 2 x \\sqrt{\\frac{1}{x}}\\, dx$$\n\n\n\n\n```python\nfi.doit()\n```\n\n\n\n\n$$\\frac{4 x^{2}}{3} \\sqrt{\\frac{1}{x}}$$\n\n\n\n### Limity\n\n\n```python\nfl = sp.Limit('sin(x)/x', x, 0)\nfl\n```\n\n\n\n\n$$\\lim_{x \\to 0^+}\\left(\\frac{1}{x} \\sin{\\left (x \\right )}\\right)$$\n\n\n\n\n```python\nfl.doit()\n```\n\n\n\n\n$$1$$\n\n\n\n***\n## Kreslen\u00ed graf\u016f\nSympy pou\u017e\u00edv\u00e1 Matplotlib ke kreslen\u00ed graf\u016f. 
Samotn\u00fd Matplotlib a jeho pokro\u010dil\u00e9 mo\u017enosti jsou p\u0159edm\u011btem jin\u00e9ho tutori\u00e1l\u016f. N\u00e1sleduje n\u011bkolik jednoduch\u00fdch p\u0159\u00edklad\u016f, jak pou\u017e\u00edt Matplotlib skrze Sympy.\n\n\n```python\nsp.plot(x**2)\n```\n\n\n```python\nsp.plot(x**3)\n```\n\n***\n## Vektory a Matice\n\n\n```python\nA = sp.Matrix([[x, -1], [3, x], [1, 2]])\nA\n```\n\n\n\n\n$$\\left[\\begin{matrix}x & -1\\\\3 & x\\\\1 & 2\\end{matrix}\\right]$$\n\n\n\n\n```python\nB = sp.Matrix([[1, -1, 0], [3, 2, 4]])\nB\n```\n\n\n\n\n$$\\left[\\begin{matrix}1 & -1 & 0\\\\3 & 2 & 4\\end{matrix}\\right]$$\n\n\n\n\n```python\n(A * B)**2\n```\n\n\n\n\n$$\\left[\\begin{matrix}\\left(- x - 2\\right) \\left(3 x + 3\\right) + \\left(x - 3\\right)^{2} - 28 & \\left(- x - 2\\right) \\left(x - 3\\right) + \\left(- x - 2\\right) \\left(2 x - 3\\right) - 12 & 4 x \\left(- x - 2\\right) - 4 x - 20\\\\28 x + \\left(x - 3\\right) \\left(3 x + 3\\right) + \\left(2 x - 3\\right) \\left(3 x + 3\\right) & 12 x + \\left(- x - 2\\right) \\left(3 x + 3\\right) + \\left(2 x - 3\\right)^{2} & 4 x \\left(2 x - 3\\right) + 20 x - 12\\\\16 x + 44 & - x + 1 & 12 x + 36\\end{matrix}\\right]$$\n\n\n\n\n```python\n2.5 * (B * A)\n```\n\n\n\n\n$$\\left[\\begin{matrix}2.5 x - 7.5 & - 2.5 x - 2.5\\\\7.5 x + 25.0 & 5.0 x + 12.5\\end{matrix}\\right]$$\n\n\n\n### U\u017eite\u010dn\u00e9 konstruktory matic\n\n\n```python\nsp.eye(3)\n```\n\n\n\n\n$$\\left[\\begin{matrix}1 & 0 & 0\\\\0 & 1 & 0\\\\0 & 0 & 1\\end{matrix}\\right]$$\n\n\n\n\n```python\nsp.ones(3,2)\n```\n\n\n\n\n$$\\left[\\begin{matrix}1 & 1\\\\1 & 1\\\\1 & 1\\end{matrix}\\right]$$\n\n\n\n\n```python\nsp.zeros(2,3)\n```\n\n\n\n\n$$\\left[\\begin{matrix}0 & 0 & 0\\\\0 & 0 & 0\\end{matrix}\\right]$$\n\n\n\n### Determinant a charakteristick\u00fd polynom matice\nN\u00e1sleduje uk\u00e1zka jak z\u00edskat determinand matice.\n\n\n```python\nC = sp.Matrix([[1, -x, 0], [x, 2, 4], [-3, 2, -4]])\nC.det()\n```\n\n\n\n\n$$- 4 x^{2} + 12 x - 16$$\n\n\n\nN\u00e1sleduj\u00edc\u00ed k\u00f3d z\u00edska charakteristick\u00fd polynom matice *C*.\n\n\n```python\np = C.charpoly()\nsp.factor(p)\n```\n\n\n\n\n$$\\lambda^{3} + \\lambda^{2} + \\lambda x^{2} - 18 \\lambda + 4 x^{2} - 12 x + 16$$\n\n\n\n### Vlastn\u00ed \u010d\u00edsla, vektory\nN\u00e1sleduj\u00ed p\u0159\u00edklady jak spo\u010d\u00edtat vlastn\u00ed \u010d\u00edsla (*eigenvalues*) a vlastn\u00ed vektory (*eigenvectors*) matice.\n\n\n```python\nD = sp.Matrix([[1, -1], [x, 2]])\nD.eigenvals()\n```\n\n\n\n\n$$\\left \\{ - \\frac{1}{2} \\sqrt{- 4 x + 1} + \\frac{3}{2} : 1, \\quad \\frac{1}{2} \\sqrt{- 4 x + 1} + \\frac{3}{2} : 1\\right \\}$$\n\n\n\n\n```python\nD.eigenvects()\n```\n\n\n\n\n$$\\left [ \\left ( - \\frac{1}{2} \\sqrt{- 4 x + 1} + \\frac{3}{2}, \\quad 1, \\quad \\left [ \\left[\\begin{matrix}\\frac{1}{\\frac{1}{2} \\sqrt{- 4 x + 1} - \\frac{1}{2}}\\\\1\\end{matrix}\\right]\\right ]\\right ), \\quad \\left ( \\frac{1}{2} \\sqrt{- 4 x + 1} + \\frac{3}{2}, \\quad 1, \\quad \\left [ \\left[\\begin{matrix}\\frac{1}{- \\frac{1}{2} \\sqrt{- 4 x + 1} - \\frac{1}{2}}\\\\1\\end{matrix}\\right]\\right ]\\right )\\right ]$$\n\n\n\n### \u0158e\u0161en\u00ed soustavy rovnic\nHled\u00e1me $x$ a $y$ pro n\u00e1sleduj\u00edc\u00ed soustavu rovnic pro jak\u00e9koliv $z$:\n \n$ 5x -3y = z $\n\n$ -4x + 3y = 2 $\n\nPro \u0159e\u0161en\u00ed je soustava nejd\u0159\u00edve p\u0159eps\u00e1na maticov\u00e9 podoby:\n\n$AX = B$\n\nkde $X = [x, y]$. 
\u0158e\u0161en\u00ed je potom:\n\n$X = A^{-1}B$\n\nRealizace pomoc\u00ed Sympy n\u00e1sleduje:\n\n\n```python\nA = sp.Matrix([[5, -3], [-4, 3]])\nB = sp.Matrix([z, 2])\nX = A**-1 * B\nX\n```\n\n\n\n\n$$\\left[\\begin{matrix}z + 2\\\\\\frac{4 z}{3} + \\frac{10}{3}\\end{matrix}\\right]$$\n\n\n", "meta": {"hexsha": "38e7de86d64dc306d990dbd044cb8327d7ab91c8", "size": 65539, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "podklady/notebooks/symbolicke-pocty.ipynb", "max_stars_repo_name": "matousc89/PPSI", "max_stars_repo_head_hexsha": "f4e65d84048a10f4ebb19ae288027046013546e6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "podklady/notebooks/symbolicke-pocty.ipynb", "max_issues_repo_name": "matousc89/PPSI", "max_issues_repo_head_hexsha": "f4e65d84048a10f4ebb19ae288027046013546e6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "podklady/notebooks/symbolicke-pocty.ipynb", "max_forks_repo_name": "matousc89/PPSI", "max_forks_repo_head_hexsha": "f4e65d84048a10f4ebb19ae288027046013546e6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 43.721814543, "max_line_length": 17560, "alphanum_fraction": 0.6913440852, "converted": true, "num_tokens": 3174, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9458012648298514, "lm_q2_score": 0.8688267660487573, "lm_q1q2_score": 0.821737454246944}} {"text": "```python\nfrom sympy.abc import x,y,z\nfrom sympy import sin,Matrix\nfrom sympy import *\nimport numpy as np\n```\n\n### Jacobian\nUse sympy to replicate the MATLAB documentation https://uk.mathworks.com/help/symbolic/jacobian.html\n\n#### 1. Jacobian of Vector Function\n\n\n```python\nX = Matrix([x*y*z,y**2,x+z])\nY = Matrix([x,y,z])\nX.jacobian(Y)\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}y z & x z & x y\\\\0 & 2 y & 0\\\\1 & 0 & 1\\end{matrix}\\right]$\n\n\n\n#### 2. Jacobian of Scalar Function\n\n\n```python\nX = Matrix([2*x+3*y+4*z])\nY = Matrix([x,y,z])\nX.jacobian(Y)\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}2 & 3 & 4\\end{matrix}\\right]$\n\n\n\n#### 3. Jacobian with Respect to Scalar\n\n\n```python\nX = Matrix([x**2*y,x*sin(y)])\nY = Matrix([x])\na = X.jacobian(Y)\n```\n\n\n```python\nprint(a)\n```\n\n Matrix([[2*x*y], [sin(y)]])\n\n\n\n```python\nb = a.subs([(x,0),(y,0)])\n```\n\n\n```python\nprint(b)\n```\n\n Matrix([[0], [0]])\n\n\n\n```python\nnp.array(b).astype(int).squeeze()+1\n```\n\n\n\n\n array([1, 1])\n\n\n\n#### 4. 
Others\n\n\n```python\nx, y = symbols('x y')\nf = x + y\nf.subs([(x,10),(y,20)])\nf\n```\n\n\n\n\n$\\displaystyle x + y$\n\n\n\n\n```python\nx, y, z, t = symbols('x y z t')\nexpr = cos(x)+1\nexpr.subs(x,y)\n```\n\n\n\n\n$\\displaystyle \\cos{\\left(y \\right)} + 1$\n\n\n\n\n```python\na = expr.subs(x,0)\n```\n\n\n```python\nprint(a+1)\n```\n\n 3\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "1da54f51069522d53ac78f2db1f8eed605c1d0fe", "size": 5064, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Robotics/Jacobian/.ipynb_checkpoints/Jacobian-checkpoint.ipynb", "max_stars_repo_name": "zcemycl/ProbabilisticPerspectiveMachineLearning", "max_stars_repo_head_hexsha": "8291bc6cb935c5b5f9a88f7b436e6e42716c21ae", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 4, "max_stars_repo_stars_event_min_datetime": "2019-11-20T10:20:29.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-09T11:15:23.000Z", "max_issues_repo_path": "Capstone/Jacobian/Jacobian.ipynb", "max_issues_repo_name": "kasiv008/Robotics", "max_issues_repo_head_hexsha": "302b3336005acd81202ebbbb0c52a4b2692fa9c7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Capstone/Jacobian/Jacobian.ipynb", "max_forks_repo_name": "kasiv008/Robotics", "max_forks_repo_head_hexsha": "302b3336005acd81202ebbbb0c52a4b2692fa9c7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2020-05-27T03:56:38.000Z", "max_forks_repo_forks_event_max_datetime": "2021-05-02T13:15:42.000Z", "avg_line_length": 17.6445993031, "max_line_length": 112, "alphanum_fraction": 0.4370063191, "converted": true, "num_tokens": 462, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9381240142763574, "lm_q2_score": 0.8757869835428966, "lm_q1q2_score": 0.8215968006522444}} {"text": "# Euler Problem 203\n\n\n The binomial coefficients nCk can be arranged in triangular form, Pascal's triangle, like this:\n 1\t\n 1\t\t1\t\n 1\t\t2\t\t1\t\n 1\t\t3\t\t3\t\t1\t\n 1\t\t4\t\t6\t\t4\t\t1\t\n 1\t\t5\t\t10\t\t10\t\t5\t\t1\t\n 1\t\t6\t\t15\t\t20\t\t15\t\t6\t\t1\t\n 1\t\t7\t\t21\t\t35\t\t35\t\t21\t\t7\t\t1\n .........\n\n It can be seen that the first eight rows of Pascal's triangle contain twelve distinct numbers: \n 1, 2, 3, 4, 5, 6, 7, 10, 15, 20, 21 and 35.\n\n A positive integer n is called squarefree if no square of a prime divides n. Of the twelve distinct\n numbers in the first eight rows of Pascal's triangle, all except 4 and 20 are squarefree. The sum of \n the distinct squarefree numbers in the first eight rows is 105.\n\n Find the sum of the distinct squarefree numbers in the first 51 rows of Pascal's triangle.\n\n\n\n```python\nN = 51\n\ndef binom(n, k):\n if k > n // 2:\n k = n - k\n b = 1\n for i in range(k):\n b = (b * (n - i)) // (i + 1)\n return b\n```\n\n\n```python\nfrom sympy import primerange, nextprime\n\nMAXIMUM_ROW_NUMBER = 100\nsmallprimes = list(primerange(2, MAXIMUM_ROW_NUMBER + 1))\n```\n\n\n```python\ndef squarefree_binom(n, k):\n assert n <= MAXIMUM_ROW_NUMBER\n for p in smallprimes:\n if p > n:\n return True\n if binom_order(n, k, p) > 1:\n return False\n return True\n \n \n```\n\nThe exponent of a prime $p$ in the prime factorization of $n!$ is equal to $(n-s)/(p-1)$, where $s$ is the sum of the base-$p$ digits of $n$. This is a consequence of [Legendre's formula](https://en.wikipedia.org/wiki/Legendre%27s_formula). 
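(For reference, Legendre's formula gives the exponent of $p$ in $n!$ as $\\sum_{i \\geq 1} \\lfloor n/p^i \\rfloor$; writing $n$ in base $p$ and summing these terms as geometric series yields the equivalent closed form $(n - s)/(p - 1)$ used here.)\n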
We can use this formula to calculate the exponent of $p$ in the prime factorization of a binomial coefficient.\n\n\n```python\ndef sum_of_digits(n, p):\n return n if n < p else (n % p) + sum_of_digits(n // p, p)\n\ndef fact_order(n, p):\n return (n - sum_of_digits(n, p)) // (p - 1)\n\ndef binom_order(n, k, p):\n return fact_order(n, p) - fact_order(k, p) - fact_order(n-k, p)\n \n```\n\n\n```python\nresults = set()\nfor n in range(N):\n for k in range(n//2 + 1):\n if squarefree_binom(n, k):\n results.add(binom(n, k))\nprint(sum(results))\n```\n\n 34029210557338\n\n", "meta": {"hexsha": "3c03ae2380cb609d27397fd1c831fdbdcb267d97", "size": 4112, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Euler 203 - Squarefree Binomial Coefficients.ipynb", "max_stars_repo_name": "Radcliffe/project-euler", "max_stars_repo_head_hexsha": "5eb0c56e2bd523f3dc5329adb2fbbaf657e7fa38", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2016-05-11T18:55:35.000Z", "max_stars_repo_stars_event_max_datetime": "2019-12-27T21:38:43.000Z", "max_issues_repo_path": "Euler 203 - Squarefree Binomial Coefficients.ipynb", "max_issues_repo_name": "Radcliffe/project-euler", "max_issues_repo_head_hexsha": "5eb0c56e2bd523f3dc5329adb2fbbaf657e7fa38", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Euler 203 - Squarefree Binomial Coefficients.ipynb", "max_forks_repo_name": "Radcliffe/project-euler", "max_forks_repo_head_hexsha": "5eb0c56e2bd523f3dc5329adb2fbbaf657e7fa38", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 27.9727891156, "max_line_length": 357, "alphanum_fraction": 0.484922179, "converted": true, "num_tokens": 715, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9362850039701653, "lm_q2_score": 0.8774767906859264, "lm_q1q2_score": 0.8215683604511006}} {"text": "# POL280 Bayesian Modeling Homework 1\n\n### Gento Kato (May 2, 2017)\n\n---\n\n## Q1\n*Use the Gamma-Poisson conjugate speci\ufb01cation to analyze data on the number of presidential appointments from 1960 to 2000. The data are in the \ufb01le called **appointments.dta** in the Dropbox folder.*\n\n---\n\nTo start with, it is shown in the class that $gamma(\\alpha, \\beta)$ prior distribution and poisson likelihood produces $gamma(\\alpha + \\Sigma y, \\beta + n)$ posterior distribution, as follows: \n\n* **Gamma (Prior) - Poisson (Likelihood) $\\Rightarrow$ Gamma Posterior**\n\n\\begin{align}\n\\mbox{Prior Distribution} = \\mbox{Gamma}(\\alpha, \\beta) &= \\frac{\\beta^{\\alpha}}{\\Gamma (\\alpha)} \\theta^{\\alpha-1} e^{- \\beta \\theta} \\\\\n\\mbox{Poisson PMF } &= p(y | \\theta) = \\frac{e^{-\\theta} \\theta^{y_i}}{y_i !} \\\\\n\\mbox{Poisson Likelihood } &= \\mathit{L}(\\theta | y) = \\hat{\\Pi}_{i=1}^n \\frac{e^{-\\theta} \\theta^{y_i}}{y_i !} \\\\\n&= \\frac{e^{-\\theta n} \\theta^{\\sum_{i=1}^{n} y_i} }{y_1 ! y_2 ! \\dots y_n !} \\\\\n\\mbox{Posterior Distribution } \\pi(\\theta | y) &\\propto \\frac{\\beta^{\\alpha}}{\\Gamma (\\alpha)} \\theta^{\\alpha-1} e^{- \\beta \\theta} \\times \\frac{e^{-\\theta n} \\theta^{\\sum_{i=1}^{n} y_i} }{y_1 ! y_2 ! 
\\dots y_n !} \\\\\n&\\propto \\theta^{\\alpha - 1 + \\Sigma y} e^{- \\theta (\\beta + n)} \\\\\n&\\propto \\mbox{Gamma }(\\alpha + \\Sigma y, \\beta + n)\n\\end{align}\n\nNow, the observed posterior value from appointments.dta can be extracted as follows:\n\n\n```R\nlibrary(foreign)\nappdta <- read.dta(\"../data/POL280/appointments.dta\") ## Open Data\ny <- appdta$appoints ; y ## Store observed y\n```\n\n\n
    2  3  3  2  0  1  2  1  2  1
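(For reference, these ten observed counts give $\\Sigma y = 17$ and $n = 10$, which are exactly the two quantities that enter the Gamma posterior derived above.)\n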
    \n\n\n\nThen, using R function rgamma(alpha, beta), we can create function to produce posterior distribution:\n\n\n```R\nposterior <- function(sample, y, alpha, beta){\n n = length(y); sigmay = sum(y) \n # n is y's sample size, sigmay is the sum of all y values\n return(rgamma(sample, alpha+sigmay, beta+n)) \n # generate posterior distribution from prior's parameters\n}\n```\n\nI will use the above function in Q3 and Q4.\n\n## Q2\n*The posterior distribution for $\\theta$ is $gamma(\\delta_1, \\delta_2)$ according to some parameters $\\delta_1$ and $\\delta_2$, which of course depend on your choice of the parameters for the gamma prior. You should model $\\theta$ using two sets of priors: one which speci\ufb01es a great deal of certainty regarding your best guess as to the value of $\\theta$ and one that represents ignorance regarding this value. *\n\n---\n\nBefore turning to the analysis, for the gamma distribution specification in Q1, the mean and variance of the distribution are defined as follows:\n\n\\begin{align}\nMean\\left[gamma(\\alpha, \\beta)\\right] &= \\frac{\\alpha}{\\beta}\\\\\nVar\\left[gamma(\\alpha, \\beta)\\right] &= \\frac{\\alpha}{\\beta^2}\\\\\n\\end{align}\n\nNow, suppose that there are two prior belief gamma distribution with the same mean (let's say 5), but different variance (certainty), as follows:\n\n\\begin{align}\n\\mbox{Prior 1 (ignorant): Mean } &= \\frac{\\alpha}{\\beta} = \\frac{5}{1} = 5 \\\\\n\\mbox{Variance } &= \\frac{\\alpha}{\\beta^2} = \\frac{5}{1} = 5\\\\\n\\mbox{Prior 2 (certain): Mean } &= \\frac{\\alpha}{\\beta} = \\frac{50}{10} = 5 \\\\\n\\mbox{Variance } &= \\frac{\\alpha}{\\beta^2} = \\frac{50}{100} = \\frac{1}{2}\n\\end{align}\n\nWe can generate those two prior distributions in R as follows\n\n\n```R\n## Set Alpha and Betas\na1 = 5; b1 = 1 ## Ignorant\na2 = 50; b2 = 10 ## Certain\n\n## Generate Prior distribution\nset.seed(27674) # Make this replicable\nprior1 <- rgamma(10000, a1, b1) # Ignorant\nprior2 <- rgamma(10000, a2, b2) # Certain\n\n## Check Result\npaste(\"For prior 1, mean is \", round(mean(prior1),2), \n \", variance is \", round(var(prior1),2))\npaste(\"For prior 2, mean is \", round(mean(prior2),2), \n \", variance is \", round(var(prior2),2))\n```\n\nNow, the above two distributions can be plotted as follows:\n\n\n```R\nlibrary(ggplot2);\nsource(\"https://raw.githubusercontent.com/gentok/Method_Notes/master/sources/gktheme.R\")\n\nbayesdata <- data.frame(prior1 = prior1, prior2 = prior2) \n\n## Plot Result ##\nbgraph <- ggplot(bayesdata) + gktheme +\n geom_density(aes(prior1, fill=\"1\"), alpha = 0.5, size=0.5) + \n geom_density(aes(prior2, fill=\"2\"), alpha = 0.5, size=0.5) + \n scale_y_continuous(limits=c(0,0.6),breaks=c(0,0.2,0.4,0.6))+\n scale_x_continuous(limits=c(0,16.03),breaks=c(0,2.5,5,7.5,10,12.5,15))+\n scale_linetype_manual(name=\"Gamma Parameters\",values=c(1,2), \n labels = c(expression(paste(\"1. Ignorant: \", alpha == 5, \"; \" , beta == 1)),\n expression(paste(\"2. Certain: \", alpha == 50, \"; \" , beta == 10))))+\n scale_fill_manual(name=\"Gamma Parameters\",values=c(1,2), \n labels = c(expression(paste(\"1. Ignorant: \", alpha == 5, \"; \" , beta == 1)),\n expression(paste(\"2. 
Certain: \", alpha == 50, \"; \" , beta == 10))))+\n xlab(\"Prior Belief\")+\n ylab(\"Density\")+\n ggtitle(\"Prior Distributions by Different Parameters\")+\n theme(legend.position = \"right\")\n```\n\n\n```R\noptions(repr.plot.width=7, repr.plot.height=3.5)\nbgraph\n```\n\n## Q3\n*Generate a large number of values from this distribution in **R**, say 10, 000 or so, using the command:*\n\n**posterior.sample <- rgamma(10000,d1,d2)**\n\n---\n\nNote that in Q1, I created following objects:\n\n * y variable, which is number of appointments by each president.\n * posterior function, to generate posterior distribution from gamma prior parameters and poisson likelihood\n \nposterior function utilizes rgamma, so those two functions are essentially the same. Therefore, I use posterior function to generate posterior distribution, as follows:\n\n\n```R\nset.seed(8900786) # Make this replicable\n#posterior(y, alpha, beta) ## Alpha and Beta from prior distribution\nposterior1 <- posterior(10000, y, a1, b1)\nposterior2 <- posterior(10000, y, a2, b2)\n```\n\n## Q4\n\nSummarize the posteriors with quantities of interest such as means, medians, and\nvariances. Also supply plots of the density of the posterior distributions.\n\n---\n\nBefore starting the analysis, note that mean, median and variance of observed y is shown as follows:\n\n\n```R\npaste(\"For observed y, mean is \", round(mean(y),2), \n \", median is \", round(median(y),2), \n \", variance is \", round(var(y),2))\n```\n\n\n'For observed y, mean is 1.7 , median is 2 , variance is 0.9'\n\n\nNow the characteristics of posterior distribution can be extracted as follows:\n\n\n```R\n## Check Result\npaste(\"For posterior 1, mean is \", round(mean(posterior1),2), \n \", median is \", round(median(posterior1),2), \n \", variance is \", round(var(posterior1),2))\npaste(\"For posterior 2, mean is \", round(mean(posterior2),2), \n \", median is \", round(median(posterior2),2), \n \", variance is \", round(var(posterior2),2))\n```\n\n\n'For posterior 1, mean is 2 , median is 1.96 , variance is 0.18'\n\n\n\n'For posterior 2, mean is 3.35 , median is 3.33 , variance is 0.17'\n\n\nFrom the above result, the mean of posterior distrbution from more ignorant prior distribution (i.e., prior 1) is more strongly pulled by the observed y values than the mean of posterior distibution from certain prior distributon (i.e., prior 2). In other words, posterior distribution 1 has closer mean (i.e., mean is 2) to observed y (i.e., mean is 1.7) than posterior distribution 2 (i.e, mean is 3.35). The observed y has stronger influence on ignorant (high variance) prior belief than on certain (low variance) prior belief. Note that the variance for two posterior distributions are identical.\n\nNow the Result can be plotted as follows:\n\n\n```R\nbayesdata$posterior1 <- posterior1 \nbayesdata$posterior2 <- posterior2\n\n## Plot Result ##\nbgraph2 <- ggplot(bayesdata) + gktheme +\n geom_density(aes(posterior1, fill=\"1\"), alpha = 0.5, size=0.5) + \n geom_density(aes(posterior2, fill=\"2\"), alpha = 0.5, size=0.5) + \n scale_y_continuous(limits=c(0,1),breaks=c(0,0.25,0.5,0.75,1))+\n scale_x_continuous(limits=c(0,6),breaks=c(0,1,2,3,4,5,6))+\n scale_linetype_manual(name=\"Gamma Prior Parameters\",values=c(1,2), \n labels = c(expression(paste(\"1. Ignorant: \", alpha == 5, \"; \" , beta == 1)),\n expression(paste(\"2. Certain: \", alpha == 50, \"; \" , beta == 10))))+\n scale_fill_manual(name=\"Gamma Prior Parameters\",values=c(1,2), \n labels = c(expression(paste(\"1. 
Ignorant: \", alpha == 5, \"; \" , beta == 1)),\n expression(paste(\"2. Certain: \", alpha == 50, \"; \" , beta == 10))))+\n xlab(\"Posterior Belief\")+\n ylab(\"Density\")+\n ggtitle(\"Posterior Distributions by Prior Parameters\")+\n theme(legend.position = \"right\")\n```\n\n\n```R\noptions(repr.plot.width=7, repr.plot.height=3.5)\nbgraph2\n```\n\nThe above plot further confirmes the implication. While the shape of two posterior distributions are almost identical, the posterior distribution for ignorant prior is placed left of the posterior distribution for certain prior. Given that both prior distribution had the same mean of 5, ignorant prior holders are more strongly pulled by the observation of y (mean of 2) than certain prior holders. \n", "meta": {"hexsha": "989daf87961baa644cb9c458b326303ea1814ce7", "size": 38392, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/.ipynb_checkpoints/POL280_Bayes_HW1-checkpoint.ipynb", "max_stars_repo_name": "gentok/Method_Notes", "max_stars_repo_head_hexsha": "a7b60e50132fdda764efcfb1e163d1b31b2f99f7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notebooks/.ipynb_checkpoints/POL280_Bayes_HW1-checkpoint.ipynb", "max_issues_repo_name": "gentok/Method_Notes", "max_issues_repo_head_hexsha": "a7b60e50132fdda764efcfb1e163d1b31b2f99f7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/.ipynb_checkpoints/POL280_Bayes_HW1-checkpoint.ipynb", "max_forks_repo_name": "gentok/Method_Notes", "max_forks_repo_head_hexsha": "a7b60e50132fdda764efcfb1e163d1b31b2f99f7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 65.181663837, "max_line_length": 10232, "alphanum_fraction": 0.7577099396, "converted": true, "num_tokens": 2852, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9362850110816422, "lm_q2_score": 0.8774767778695834, "lm_q1q2_score": 0.8215683546915066}} {"text": "\n\n# Multivariate Gaussian with full covariance\n\nIn this reading you will learn how you can use TensorFlow to specify any multivariate Gaussian distribution.\n\n\n```python\nimport tensorflow as tf\nimport tensorflow_probability as tfp\ntfd = tfp.distributions\n\nprint(\"TF version:\", tf.__version__)\nprint(\"TFP version:\", tfp.__version__)\n```\n\n TF version: 2.7.0\n TFP version: 0.15.0\n\n\nSo far, you've seen how to define multivariate Gaussian distributions using `tfd.MultivariateNormalDiag`. This class allows you to specify a multivariate Gaussian with a diagonal covariance matrix $\\Sigma$. \n\nIn cases where the variance is the same for each component, i.e. $\\Sigma = \\sigma^2 I$, this is known as a _spherical_ or _isotropic_ Gaussian. This name comes from the spherical (or circular) contours of its probability density function, as you can see from the plot below for the two-dimensional case. 
\n\n\n```python\n# Plot the approximate density contours of a 2d spherical Gaussian\n\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nspherical_2d_gaussian = tfd.MultivariateNormalDiag(loc=[0., 0.])\n\nN = 100000\nx = spherical_2d_gaussian.sample(N)\nx1 = x[:, 0]\nx2 = x[:, 1]\nsns.jointplot(x1, x2, kind='kde', space=0, );\n```\n\nAs you know, a diagonal covariance matrix results in the components of the random vector being independent. \n\n## Full covariance with `MultivariateNormalFullTriL`\n\nYou can define a full covariance Gaussian distribution in TensorFlow using the Distribution `tfd.MultivariateNormalTriL`.\n\nMathematically, the parameters of a multivariate Gaussian are a mean $\\mu$ and a covariance matrix $\\Sigma$, and so the `tfd.MultivariateNormalTriL` constructor requires two arguments:\n\n- `loc`, a Tensor of floats corresponding to $\\mu$,\n- `scale_tril`, a a lower-triangular matrix $L$ such that $LL^T = \\Sigma$.\n\nFor a $d$-dimensional random variable, the lower-triangular matrix $L$ looks like this:\n\n\\begin{equation}\n L = \\begin{bmatrix}\n l_{1, 1} & 0 & 0 & \\cdots & 0 \\\\\n l_{2, 1} & l_{2, 2} & 0 & \\cdots & 0 \\\\\n l_{3, 1} & l_{3, 2} & l_{3, 3} & \\cdots & 0 \\\\\n \\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\\\n l_{d, 1} & l_{d, 2} & l_{d, 3} & \\cdots & l_{d, d}\n \\end{bmatrix},\n\\end{equation}\n\nwhere the diagonal entries are positive: $l_{i, i} > 0$ for $i=1,\\ldots,d$.\n\nHere is an example of creating a two-dimensional Gaussian with non-diagonal covariance:\n\n\n```python\n# Set the mean and covariance parameters\n\nmu = [0., 0.] # mean\nscale_tril = [[1., 0.],\n [0.6, 0.8]]\n\nsigma = tf.matmul(tf.constant(scale_tril), tf.transpose(tf.constant(scale_tril))) # covariance matrix\nprint(sigma)\n```\n\n tf.Tensor(\n [[1. 0.6]\n [0.6 1. ]], shape=(2, 2), dtype=float32)\n\n\n\n```python\n# Create the 2D Gaussian with full covariance\n\nnonspherical_2d_gaussian = tfd.MultivariateNormalTriL(loc=mu, scale_tril=scale_tril)\nnonspherical_2d_gaussian\n```\n\n\n\n\n \n\n\n\n\n```python\n# Check the Distribution mean\n\nnonspherical_2d_gaussian.mean()\n```\n\n\n\n\n \n\n\n\n\n```python\n# Check the Distribution covariance\n\nnonspherical_2d_gaussian.covariance()\n```\n\n\n\n\n \n\n\n\n\n```python\n# Plot its approximate density contours\n\nx = nonspherical_2d_gaussian.sample(N)\nx1 = x[:, 0]\nx2 = x[:, 1]\nsns.jointplot(x1, x2, kind='kde', space=0, color='r');\n```\n\nAs you can see, the approximate density contours are now elliptical rather than circular. This is because the components of the Gaussian are correlated.\n\nAlso note that the marginal distributions (shown on the sides of the plot) are both univariate Gaussian distributions.\n\n## The Cholesky decomposition\n\nIn the above example, we defined the lower triangular matrix $L$ and used that to build the multivariate Gaussian distribution. The covariance matrix is easily computed from $L$ as $\\Sigma = LL^T$.\n\nThe reason that we define the multivariate Gaussian distribution in this way - as opposed to directly passing in the covariance matrix - is that not every matrix is a valid covariance matrix. The covariance matrix must have the following properties:\n\n1. It is symmetric\n2. It is positive (semi-)definite\n\n_NB: A symmetric matrix $M \\in \\mathbb{R}^{d\\times d}$ is positive semi-definite if it satisfies $b^TMb \\ge 0$ for all nonzero $b\\in\\mathbb{R}^d$. 
If, in addition, we have $b^TMb = 0 \\Rightarrow b=0$ then $M$ is positive definite._\n\nThe Cholesky decomposition is a useful way of writing a covariance matrix. The decomposition is described by this result:\n\n> For every real-valued symmetric positive-definite matrix $M$, there is a unique lower-diagonal matrix $L$ that has positive diagonal entries for which \n>\n> \\begin{equation}\n LL^T = M\n \\end{equation}\n> This is called the _Cholesky decomposition_ of $M$.\n\nThis result shows us why Gaussian distributions with full covariance are completely represented by the `MultivariateNormalTriL` Distribution.\n\n### `tf.linalg.cholesky`\n\nIn case you have a valid covariance matrix $\\Sigma$ and would like to compute the lower triangular matrix $L$ above to instantiate a `MultivariateNormalTriL` object, this can be done with the `tf.linalg.cholesky` function. \n\n\n```python\n# Define a symmetric positive-definite matrix\n\nsigma = [[10., 5.], [5., 10.]]\n```\n\n\n```python\n# Compute the lower triangular matrix L from the Cholesky decomposition\n\nscale_tril = tf.linalg.cholesky(sigma)\nscale_tril\n```\n\n\n\n\n \n\n\n\n\n```python\n# Check that LL^T = Sigma\n\ntf.linalg.matmul(scale_tril, tf.transpose(scale_tril))\n```\n\n\n\n\n \n\n\n\nIf the argument to the `tf.linalg.cholesky` is not positive definite, then it will fail:\n\n\n```python\n# Try to compute the Cholesky decomposition for a matrix with negative eigenvalues\n\nbad_sigma = [[10., 11.], [11., 10.]]\n\ntry:\n scale_tril = tf.linalg.cholesky(bad_sigma)\nexcept Exception as e:\n print(e)\n```\n\n### What about positive semi-definite matrices?\n\nIn cases where the matrix is only positive semi-definite, the Cholesky decomposition exists (if the diagonal entries of $L$ can be zero) but it is not unique.\n\nFor covariance matrices, this corresponds to the degenerate case where the probability density function collapses to a subspace of the event space. This is demonstrated in the following example:\n\n\n```python\n# Create a multivariate Gaussian with a positive semi-definite covariance matrix\n\npsd_mvn = tfd.MultivariateNormalTriL(loc=[0., 0.], scale_tril=[[1., 0.], [0.4, 0.]])\npsd_mvn\n```\n\n\n\n\n \n\n\n\n\n```python\n# Plot samples from this distribution\n\nx = psd_mvn.sample(N)\nx1 = x[:, 0]\nx2 = x[:, 1]\nplt.xlim(-5, 5)\nplt.ylim(-5, 5)\nplt.title(\"Scatter plot of samples\")\nplt.scatter(x1, x2, alpha=0.5);\n```\n\nIf the input to the function `tf.linalg.cholesky` is positive semi-definite but not positive definite, it will also fail:\n\n\n```python\n# Try to compute the Cholesky decomposition for a positive semi-definite matrix\n\nanother_bad_sigma = [[10., 0.], [0., 0.]]\n\ntry:\n scale_tril = tf.linalg.cholesky(another_bad_sigma)\nexcept Exception as e:\n print(e)\n```\n\nIn summary: if the covariance matrix $\\Sigma$ for your multivariate Gaussian distribution is positive-definite, then an algorithm that computes the Cholesky decomposition of $\\Sigma$ returns a lower-triangular matrix $L$ such that $LL^T = \\Sigma$. This $L$ can then be passed as the `scale_tril` of `MultivariateNormalTriL`.\n\n## Putting it all together\n\nYou are now ready to put everything that you have learned in this reading together.\n\nTo create a multivariate Gaussian distribution with full covariance you need to:\n\n1. Specify parameters $\\mu$ and either $\\Sigma$ (a symmetric positive definite matrix) or $L$ (a lower triangular matrix with positive diagonal elements), such that $\\Sigma = LL^T$.\n\n2. 
If only $\\Sigma$ is specified, compute `scale_tril = tf.linalg.cholesky(sigma)`.\n\n3. Create the distribution: `multivariate_normal = tfd.MultivariateNormalTriL(loc=mu, scale_tril=scale_tril)`.\n\n\n```python\n# Create a multivariate Gaussian distribution\n\nmu = [1., 2., 3.]\nsigma = [[0.5, 0.1, 0.1],\n [0.1, 1., 0.6],\n [0.1, 0.6, 2.]]\n\nscale_tril = tf.linalg.cholesky(sigma)\n\nmultivariate_normal = tfd.MultivariateNormalTriL(loc=mu, scale_tril=scale_tril)\n```\n\n\n```python\n# Check the covariance matrix\n\nmultivariate_normal.covariance()\n```\n\n\n\n\n \n\n\n\n\n```python\n# Check the mean\n\nmultivariate_normal.mean()\n```\n\n\n\n\n \n\n\n\n## Deprecated: `MultivariateNormalFullCovariance`\n\nThere was previously a class called `tfd.MultivariateNormalFullCovariance` which takes the full covariance matrix in its constructor, but this is being deprecated. Two reasons for this are:\n\n* covariance matrices are symmetric, so specifying one directly involves passing redundant information, which involves writing unnecessary code. \n* it is easier to enforce positive-definiteness through constraints on the elements of a decomposition than through a covariance matrix itself. The decomposition's only constraint is that its diagonal elements are positive, a condition that is easy to parameterize for.\n\n### Further reading and resources\n* https://www.tensorflow.org/probability/api_docs/python/tfp/distributions/MultivariateNormalTriL\n* https://www.tensorflow.org/api_docs/python/tf/linalg/cholesky\n", "meta": {"hexsha": "88066f655f419079723ea4f93a2ca46d0daf18ec", "size": 123553, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Week1/Multivariate_Gaussian_with_full_covariance.ipynb", "max_stars_repo_name": "stevensmiley1989/Prob_TF2_Examples", "max_stars_repo_head_hexsha": "fa022e58a44563d09792070be5d015d0798ca00d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Week1/Multivariate_Gaussian_with_full_covariance.ipynb", "max_issues_repo_name": "stevensmiley1989/Prob_TF2_Examples", "max_issues_repo_head_hexsha": "fa022e58a44563d09792070be5d015d0798ca00d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Week1/Multivariate_Gaussian_with_full_covariance.ipynb", "max_forks_repo_name": "stevensmiley1989/Prob_TF2_Examples", "max_forks_repo_head_hexsha": "fa022e58a44563d09792070be5d015d0798ca00d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 159.6291989664, "max_line_length": 53250, "alphanum_fraction": 0.8744749217, "converted": true, "num_tokens": 2792, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9399133464597458, "lm_q2_score": 0.8740772351648677, "lm_q1q2_score": 0.821556859168093}} {"text": "# Numerical methods for 2nd-order ODEs\n\nWe've gone over how to solve 1st-order ODEs using numerical methods, but what about 2nd-order or any higher-order ODEs? 
We can use the same methods we've already discussed by transforming our higher-order ODEs into a **system of first-order ODEs**.\n\nMeaning, if we are given a 2nd-order ODE\n\\begin{equation}\n\\frac{d^2 y}{dx^2} = y^{\\prime\\prime} = f(x, y, y^{\\prime})\n\\end{equation}\nwe can transform this into a system of **two 1st-order ODEs** that are coupled:\n\\begin{align}\n\\frac{dy}{dx} &= y^{\\prime} = u \\\\\n\\frac{du}{dx} &= u^{\\prime} = y^{\\prime\\prime} = f(x, y, u)\n\\end{align}\nwhere $f(x, y, u)$ is the same as that given above for $\\frac{d^2 y}{dx^2}$.\n\nThus, instead of a 2nd-order ODE to solve, we have two 1st-order ODEs:\n\\begin{align}\ny^{\\prime} &= u \\\\\nu^{\\prime} &= f(x, y, u)\n\\end{align}\n\nSo, we can use all of the methods we have talked about so far to solve 2nd-order ODEs by transforming the one equation into a system of two 1st-order equations.\n\n## Higher-order ODEs\n\nThis works for higher-order ODEs too! For example, if we have a 3rd-order ODE, we can transform it into a system of three 1st-order ODEs:\n\\begin{align}\n\\frac{d^3 y}{dx^3} &= f(x, y, y^{\\prime}, y^{\\prime\\prime}) \\\\\n\\rightarrow y^{\\prime} &= u \\\\\nu^{\\prime} &= y^{\\prime\\prime} = w \\\\\nw^{\\prime} &= y^{\\prime\\prime\\prime} = f(x, y, u, w)\n\\end{align}\n\n## Example: mass-spring problem\n\nFor example, let's solve a forced damped mass-spring problem given by a 2nd-order ODE:\n\\begin{equation}\ny^{\\prime\\prime} + 5y^{\\prime} + 6y = 10 \\sin \\omega t\n\\end{equation}\nwith the initial conditions $y(0) = 0$ and $y^{\\prime}(0) = 5$.\n\nWe start by transforming the equation into two 1st-order ODEs. Let's use the variables $z_1 = y$ and $z_2 = y^{\\prime}$:\n\\begin{align}\n\\frac{dz_1}{dt} &= z_1^{\\prime} = z_2 \\\\\n\\frac{dz_2}{dt} &= z_2^{\\prime} = y^{\\prime\\prime} = 10 \\sin \\omega t - 5z_2 - 6z_1\n\\end{align}\n\n### Forward Euler\n\nThen, let's solve numerically using the forward Euler method. Recall that the recursion formula for forward Euler is:\n\\begin{equation}\ny_{i+1} = y_i + \\Delta x f(x_i, y_i)\n\\end{equation}\nwhere $f(x,y) = \\frac{dy}{dx}$.\n\nLet's solve using $\\omega = 1$ and with a step size of $\\Delta t = 0.1$, over $0 \\leq t \\leq 3$.\n\nWe can compare this against the exact solution, obtainable using the method of undetermined coefficients:\n\\begin{equation}\ny(t) = -6 e^{-3t} + 7 e^{-2t} + \\sin t - \\cos t\n\\end{equation}\n\n\n```matlab\n% plot exact solution first\nt = linspace(0, 3);\ny_exact = -6*exp(-3*t) + 7*exp(-2*t) + sin(t) - cos(t);\nplot(t, y_exact); hold on\n\nomega = 1;\n\ndt = 0.1;\nt = [0 : dt : 3];\n\nf = @(t,z1,z2) 10*sin(omega*t) - 5*z2 - 6*z1;\n\nz1 = zeros(length(t), 1);\nz2 = zeros(length(t), 1);\nz1(1) = 0;\nz2(1) = 5;\nfor i = 1 : length(t)-1\n z1(i+1) = z1(i) + dt * z2(i);\n z2(i+1) = z2(i) + dt * f(t(i), z1(i), z2(i));\nend\n\nplot(t, z1, 'o--')\nxlabel('time'); ylabel('displacement')\nlegend('Exact', 'Forward Euler', 'Location','southeast')\n```\n\n### Heun's Method\n\nFor schemes that involve more than one stage, like Heun's method, we'll need to implement both stages for each 1st-order ODE. 
For example:\n\n\n```matlab\nclear\n% plot exact solution first\nt = linspace(0, 3);\ny_exact = -6*exp(-3*t) + 7*exp(-2*t) + sin(t) - cos(t);\nplot(t, y_exact); hold on\n\nomega = 1;\n\ndt = 0.1;\nt = [0 : dt : 3];\n\nf = @(t,z1,z2) 10*sin(omega*t) - 5*z2 - 6*z1;\n\nz1 = zeros(length(t), 1);\nz2 = zeros(length(t), 1);\nz1(1) = 0;\nz2(1) = 5;\nfor i = 1 : length(t)-1\n % predictor\n z1p = z1(i) + z2(i)*dt;\n z2p = z2(i) + f(t(i), z1(i), z2(i))*dt;\n\n % corrector\n z1(i+1) = z1(i) + 0.5*dt*(z2(i) + z2p);\n z2(i+1) = z2(i) + 0.5*dt*(f(t(i), z1(i), z2(i)) + f(t(i+1), z1p, z2p));\nend\nplot(t, z1, 'o')\nxlabel('time'); ylabel('displacement')\nlegend('Exact', 'Heuns', 'Location','southeast')\n```\n\n### Runge-Kutta: `ode45`\n\nWe can also solve using `ode45`, by providing a separate function file that defines the system of 1st-order ODEs. In this case, we'll need to use a single **array** variable, `Z`, to store $z_1$ and $z_2$. The first column of `Z` will store $z_1$ (`Z(:,1)`) and the second column will store $z_2$ (`Z(:,2)`).\n\n\n```matlab\n%%file mass_spring.m\nfunction dzdt = mass_spring(t, z)\n omega = 1;\n dzdt = zeros(2,1);\n \n dzdt(1) = z(2);\n dzdt(2) = 10*sin(omega*t) - 6*z(1) - 5*z(2);\nend\n```\n\n Created file '/Users/kyle/projects/ME373/docs/mass_spring.m'.\n\n\n\n```matlab\n% plot exact solution first\nt = linspace(0, 3);\ny_exact = -6*exp(-3*t) + 7*exp(-2*t) + sin(t) - cos(t);\nplot(t, y_exact); hold on\n\n% solution via ode45:\n[T, Z] = ode45('mass_spring', [0 3], [0 5]);\n\nplot(T, Z(:,1), 'o')\nxlabel('time'); ylabel('displacement')\nlegend('Exact', 'ode45', 'Location','southeast')\n```\n\n\n```matlab\n\n```\n", "meta": {"hexsha": "2762c7243fd10f3358e18810416bc58557c38153", "size": 74907, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "docs/second-order-numerical.ipynb", "max_stars_repo_name": "kyleniemeyer/ME373", "max_stars_repo_head_hexsha": "db7e78ac21d7a2cc5bd9fc49cdc3614f2f0fe00e", "max_stars_repo_licenses": ["CC-BY-4.0", "MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-11-03T18:09:05.000Z", "max_stars_repo_stars_event_max_datetime": "2019-11-03T18:09:05.000Z", "max_issues_repo_path": "docs/second-order-numerical.ipynb", "max_issues_repo_name": "kyleniemeyer/ME373", "max_issues_repo_head_hexsha": "db7e78ac21d7a2cc5bd9fc49cdc3614f2f0fe00e", "max_issues_repo_licenses": ["CC-BY-4.0", "MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/second-order-numerical.ipynb", "max_forks_repo_name": "kyleniemeyer/ME373", "max_forks_repo_head_hexsha": "db7e78ac21d7a2cc5bd9fc49cdc3614f2f0fe00e", "max_forks_repo_licenses": ["CC-BY-4.0", "MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 278.4646840149, "max_line_length": 25192, "alphanum_fraction": 0.9181384917, "converted": true, "num_tokens": 1794, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8740772318846386, "lm_q2_score": 0.9399133468805266, "lm_q1q2_score": 0.8215568564527568}} {"text": "# Curvilinear coordinates\n\n\n\nShenfun automates curvilinear coordinates! To this end we make heavily use of [sympy](https://sympy.org) for symbolic mathematics in Python. 
First some background on curvilinear coordinates:\n\nThe position vector $\\mathbf{r}$ in three-dimensional Cartesian coordinates is given as\n\n$$\n\\mathbf{r} = x^1 \\mathbf{i}_1 + x^2 \\mathbf{i}_2 + x^3 \\mathbf{i}_3 \n$$\n\nwhere $\\mathbf{i}_1, \\mathbf{i}_2, \\mathbf{i}_3$ are the unit basisvectors for the three spatial dimensions. This is very often written as\n\n$$\n\\mathbf{r} = x \\mathbf{i} + y \\mathbf{j} + z \\mathbf{k},\n$$\n\nbut we will use the numbered version below, because it is numbered. \n\nWe write for a point $\\mathbf{x}$\n\n$$\n\\mathbf{x} = (x^1, x^2, x^3).\n$$\n\nA new curvilinear coordinate system with coordinate curves $q^1=q^1(\\mathbf{x}), q^2=q^2(\\mathbf{x})$ and $ q^3=q^3(\\mathbf{x})$ can be defined, and we can get back to the Cartesian coordinate system through the inverse maps\n\n$$\nx^1=x^1(\\mathbf{q}) \\quad x^2=x^2(\\mathbf{q}) \\quad x^3=x^3(\\mathbf{q}),\n$$\n\nusing notation\n\n$$\n\\mathbf{q} = (q^1, q^2, q^3).\n$$\n\nThe position vector can now be written as a function of the new coordinates\n\n$$\n\\mathbf{r} = x^i(\\mathbf{q}) \\,\\mathbf{i}_i,\n$$\n\nwith summation on repeated indices $i$.\n\nFor example, for cylindrical coordinates we have \n\n$$\n q^1, q^2, q^3 = r, \\theta, z,\n$$\n\nwhere $r, \\theta, z$ are the radial position, the azimuthal angle, and the position along the length of the cylinder, respectively. The position vector is given in terms of these new coordinates as\n\n$$\n\\mathbf{r} = r\\cos \\theta \\,\\mathbf{i}_1 + r\\sin \\theta \\,\\mathbf{i}_2 + z \\,\\mathbf{i}_2.\n$$\n\nFor spherical coordinates we have\n\n$$\nq^1, q^2, q^3 = r, \\theta, \\phi,\n$$\n\nand the position vector is\n\n$$\n\\mathbf{r} = r\\sin \\theta \\cos \\phi \\,\\mathbf{i}_1 + r\\sin \\theta \\sin \\phi \\,\\mathbf{i}_2 + r \\cos \\theta \\,\\mathbf{i}_3.\n$$\n\nBy defining this position vector as a map between curvilinear and Cartesian coordinates, we can now find new basis vectors that point along the curvilinear coordinate curves as\n\n$$\n\\mathbf{b}_i = \\frac{\\partial \\mathbf{r}}{\\partial q^{i}}, \\quad \\text{for}\\, i \\, \\in (1, 2, 3).\n$$\n\nThese basis vectors are the covariant basis vectors of the new coordinate system, and they are not normalized. The basis vectors are tangent to the coordinate curves. Along the coordinate curve $q^1$, both $q^2$ and $q^3$ are constant, and likewise for the other two coordinate curves $q^2$ and $q^3$. \n\nContravariant basis vectors are defined with a superscript index as\n\n$$\n\\mathbf{b}^{i} = \\nabla q^{i}.\n$$\n\nThe covariant and contravariant basis vectors satisfy\n\n$$\n\\mathbf{b}^{i} \\cdot \\mathbf{b}_{j} = \\delta_{j}^{i}.\n$$\n\nA vector $\\mathbf{v}$ can be given in either basis as\n\n$$\n\\mathbf{v} = v^{i} \\mathbf{b}_i = v_i \\mathbf{b}^{i}.\n$$\n\nwhere $v_{i}$ and $v^{i}$ are co- and contravariant vector components, respectively.\n\nIn shenfun we normally use the (natural) covariant basis vectors. The magnitude of the covariant basis vectors are the scaling factors\n\n$$\nh_i = |\\mathbf{b}_i|, \\quad \\text{for}\\, i \\, \\in (1, 2, 3).\n$$\n\nThe co- and contravariant metric tensors $g_{ij}$ and $g^{ij}$ are defined as\n\n\\begin{align*}\ng_{ij} &= \\mathbf{b}_i \\cdot \\mathbf{b}_j, \\\\\ng^{ij} &= \\mathbf{b}^{i} \\cdot \\mathbf{b}^{j}. 
\\\\\n\\end{align*}\n\nThe determinant of the matrix $G=\\{g_{ij}\\}_{i,j = 1,2,3}$ is termed $g$\n\n$$\ng = \\det(G).\n$$\n\nThe term $\\sqrt{g}$ is also commonly called the Jacobian, $J$, which is defined as the determinant of the matrix $\\partial x^{i}/ \\partial q^j$. \n\nThis is really all we need to define the linear operators in the new coordinate system. \n\nThe gradient of a scalar $f$ is termed $\\nabla f$ and equals\n\n\\begin{equation}\n\\nabla f = \\frac{\\partial f}{\\partial q^{i}}\\,\\mathbf{b}^{i} = \\frac{\\partial f}{\\partial q^{i}} g^{ij} \\,\\mathbf{b}_{j}.\n\\end{equation}\n\nThe divergence of a vector $\\mathbf{v}$ is termed $\\nabla \\cdot \\mathbf{v}$ and is given as\n\n\\begin{equation}\n\\nabla \\cdot \\mathbf{v} = \\frac{1}{\\sqrt{g}} \\frac{\\partial v^{i} \\sqrt{g}}{\\partial q^{i}}.\n\\end{equation}\n\nThis leads to the Laplace operator\n\n\\begin{equation}\n\\nabla^2 f = \\frac{1}{\\sqrt{g}}\\frac{\\partial}{\\partial q^{i}}\\left( g^{li} \\sqrt{g} \\frac{\\partial f}{\\partial q^{l}}\\right).\n\\end{equation}\n\nand the biharmonic operator\n\n\\begin{equation*}\n\\nabla^4 f= \\frac{1}{\\sqrt{g}}\\frac{\\partial}{\\partial q^{i}}\\left( g^{li}\\sqrt{g} \\frac{\\partial}{\\partial q^{l}} \\left( \\frac{1}{\\sqrt{g}}\\frac{\\partial}{\\partial q^{j}}\\left( g^{kj}\\sqrt{g} \\frac{\\partial f}{\\partial q^{k}}\\right) \\right)\\right).\n\\end{equation*}\n\nUnless otherwise stated there is summation on repeated indices. So the biharmonic equation can have at most 81 different terms!\n\n
**Note:** Everything needed to do curvilinear coordinates is achieved by taking derivatives of the position vector $\\mathbf{r}$! These derivatives are automated by Shenfun through Sympy.\n
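\nTo see what that automation amounts to, here is a minimal hand-rolled Sympy sketch (our own illustration, independent of Shenfun's internals) that builds the covariant basis, the metric tensor and $\\sqrt{g}$ for cylindrical coordinates directly from the position vector:\n\n\n```python\nimport sympy as sp\n\n# Cylindrical coordinates q = (r, theta, z); positive symbols keep the square roots simple\nr, theta, z = sp.symbols('r theta z', positive=True)\nq = (r, theta, z)\n\n# Position vector in Cartesian components\nrv = sp.Matrix([r*sp.cos(theta), r*sp.sin(theta), z])\n\n# Covariant basis vectors b_i = d r / d q^i\nb = [rv.diff(qi) for qi in q]\n\n# Covariant metric tensor g_ij = b_i . b_j, its determinant and sqrt(g)\ng = sp.simplify(sp.Matrix(3, 3, lambda i, j: b[i].dot(b[j])))\nsqrt_g = sp.sqrt(g.det())\n\nprint(g)       # diagonal matrix with entries (1, r**2, 1)\nprint(sqrt_g)  # r\n```\n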
    \n\nThe Cartesian domain is $\\Omega$ and the computational domain is $D$. The computational domain is a straight line, a rectangle or a box, whereas the Cartesian domain can be curved. An integral over the domain is computed with a change of variables as\n\n\\begin{equation}\n\\int_{\\Omega} u(\\mathbf{x}) d\\sigma = \\int_{D} u(\\mathbf{x}(\\mathbf{q})) \\sqrt{g} d\\mathbf{q}.\n\\end{equation}\n\nwhere $d\\mathbf{q}=\\prod_{i} dq^i$.\n\n
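\nAs a quick, self-contained check of this change-of-variables formula (our own example, not taken from the original text), the same derivative machinery gives $\\sqrt{g} = r^2 \\sin \\theta$ for spherical coordinates, and integrating $1$ over the unit ball then reproduces its volume $4\\pi/3$:\n\n\n```python\nimport sympy as sp\n\nr, theta, phi = sp.symbols('r theta phi', positive=True)\nrv = sp.Matrix([r*sp.sin(theta)*sp.cos(phi),\n                r*sp.sin(theta)*sp.sin(phi),\n                r*sp.cos(theta)])\n\nb = [rv.diff(qi) for qi in (r, theta, phi)]\ng = sp.simplify(sp.Matrix(3, 3, lambda i, j: b[i].dot(b[j])))\n\n# The positivity assumption on sin(theta) resolves the sign of the square root,\n# just like the sp.Q.positive(sp.sin(theta)) assumption passed to Shenfun below.\nsqrt_g = sp.refine(sp.simplify(sp.sqrt(g.det())), sp.Q.positive(sp.sin(theta)))\n\nvolume = sp.integrate(sqrt_g, (r, 0, 1), (theta, 0, sp.pi), (phi, 0, 2*sp.pi))\nprint(sqrt_g)   # expected: r**2*sin(theta)\nprint(volume)   # expected: 4*pi/3\n```\n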
**Note:** Strictly speaking, we should introduce transformed variables here, like\n\n$$\n\\tilde{u}(\\mathbf{q}) = u(\\mathbf{x}(\\mathbf{q})).\n$$\n
    \n\nThe spectral Galerkin approximation to the function $u$ in curvilinear coordinates will be\n\n\\begin{equation*}\nu(\\mathbf{x}(\\mathbf{q})) = \\sum_{i}\\sum_{j}\\sum_{k} \\hat{u}_{ijk} \\phi_i(q^1) \\psi_j(q^2) \\gamma_k(q^3)\n\\end{equation*}\n\nfor some basis functions $\\phi, \\psi$ and $\\gamma$. \n\n## Consider spherical coordinates\n\n$$\n\\mathbf{r} = r\\sin \\theta \\cos \\phi \\,\\mathbf{i}_1 + r\\sin \\theta \\sin \\phi \\,\\mathbf{i}_2 + r \\cos \\theta \\,\\mathbf{i}_3.\n$$\n\n\n```python\nfrom shenfun import *\nimport sympy as sp\nconfig['basisvectors'] = 'covariant'\n\nr, theta, phi = psi = sp.symbols('x,y,z', real=True, positive=True)\nrv = (r*sp.sin(theta)*sp.cos(phi), r*sp.sin(theta)*sp.sin(phi), r*sp.cos(theta))\nN = 6\nF = FunctionSpace(N, 'F', dtype='d')\nL0 = FunctionSpace(N, 'L', domain=(0, 1))\nL1 = FunctionSpace(N, 'L', domain=(0, np.pi))\nT = TensorProductSpace(comm, (L0, L1, F), coordinates=(psi, rv, sp.Q.positive(sp.sin(theta))))\nV = VectorSpace(T)\nu = TrialFunction(V)\n```\n\n\n```python\nT.coors.get_covariant_basis()\n```\n\nPretty-print\n\n\n```python\nfrom IPython.display import Math\nMath(T.coors.latex_basis_vectors(symbol_names={r: 'r', theta: '\\\\theta', phi: '\\\\phi'}))\n```\n\n\n```python\nT.coors.get_sqrt_det_g()\n```\n\n\n```python\nMath((div(u)).tolatex(funcname='u', symbol_names={r: 'r', theta: '\\\\theta', phi: '\\\\phi'}))\n```\n\nWe now want to make sure that the following vector identity holds\n\n$$\n\\nabla^2 \\mathbf{u} = \\nabla( \\nabla \\cdot \\mathbf{u}) - \\nabla \\times \\nabla \\times \\mathbf{u}\n$$\n\n\n```python\ndu = div(grad(u))\ndv = grad(div(u)) - curl(curl(u))\ndv.simplify()\ndw = du-dv\ndw.simplify()\nMath(dw.tolatex(funcname='u', symbol_names={r: 'r', theta: '\\\\theta', phi: '\\\\phi'}))\n```\n\nwhich is not bad considering the look of the vector Laplacian:\n\n\n```python\nMath(du.tolatex(funcname='u', symbol_names={r: 'r', theta: '\\\\theta', phi: '\\\\phi'}))\n```\n\nAll of this is automated in Shenfun. You just use the generic operators `div, grad` and `curl` as usual. Non-orthogonal coordinates are possible as well, but not highly tested. 
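\nAs another orthogonal example, a cylindrical coordinate system can be set up with exactly the same pattern as the spherical one above. The sketch below is written purely by analogy (the ordering of the directions and the space definitions are our choice, and it is not run here):\n\n\n```python\n# Hypothetical cylindrical variant, by analogy with the spherical setup above.\n# Coordinates are ordered (r, z, theta) so that the periodic Fourier direction comes last.\nr, z, theta = psi = sp.symbols('x,y,z', real=True, positive=True)\nrv = (r*sp.cos(theta), r*sp.sin(theta), z)\n\nL0 = FunctionSpace(N, 'L', domain=(0, 1))   # radial direction\nL1 = FunctionSpace(N, 'L', domain=(0, 1))   # axial direction\nF = FunctionSpace(N, 'F', dtype='d')        # periodic azimuthal direction\nT_cyl = TensorProductSpace(comm, (L0, L1, F), coordinates=(psi, rv))\nT_cyl.coors.get_sqrt_det_g()                # expected: r\n```\n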
\n\n\n```python\n\n```\n", "meta": {"hexsha": "c9f9b4176b3c496a08a731e43a4baa5421d40094", "size": 109450, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Shenfun-curvilinear.ipynb", "max_stars_repo_name": "spectralDNS/PresentationsYu", "max_stars_repo_head_hexsha": "e9d846bac536a3b0d04c53e47477bf4467ef17da", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2021-12-16T01:20:08.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-18T07:32:23.000Z", "max_issues_repo_path": "Shenfun-curvilinear.ipynb", "max_issues_repo_name": "spectralDNS/PresentationsYu", "max_issues_repo_head_hexsha": "e9d846bac536a3b0d04c53e47477bf4467ef17da", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Shenfun-curvilinear.ipynb", "max_forks_repo_name": "spectralDNS/PresentationsYu", "max_forks_repo_head_hexsha": "e9d846bac536a3b0d04c53e47477bf4467ef17da", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 292.6470588235, "max_line_length": 97276, "alphanum_fraction": 0.920493376, "converted": true, "num_tokens": 2508, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.919642526773001, "lm_q2_score": 0.8933094010836642, "lm_q1q2_score": 0.8215253148026571}} {"text": "Polynomial Regression\n==========\n\nIn this week's programming exercise, you are asked to implement the basic building blocks for polynomial regression step-by-step. We will do the following:\n\n\n- **a)** Load a very simple, noisy dataset and inspect it.\n- **b)** Construct a design matrix for polynomial regression of degree m: the Vandermonde matrix.\n- **c)** Calculate the Moore-Penrose pseudoinverse of the design matrix.\n- **d)** Calculate a vector of coefficients that minimizes the squared error of an n-degree polynomial on our given set of measurements (data).\n- **e)** Use this coefficient (weight) vector to construct a polynomial that predicts the underlying function the noisy data is drawn from.\n- **f)** With the work we have done before, we look at a polynomials of different degrees we fit using the provided data.\n\n\nBefore you start, make sure that you have downloaded the file *poly_data.csv* from stud.ip and put it in the same folder as this notebook! You are supposed to implement the functions yourself!\n\n\n```python\n# the usual imports.\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n```\n\na)\n------------\n1. Use the numpy function **loadtxt** to load the data from *poly_data.csv* into a variable data. Data should now be a $n\\times n$ **ndarray** matrix. You can check the type and size of data yourself using the **type** function and the **shape** attribute of the matrix.\n2. The first row and second row correspond to the [independent and dependent variable](https://en.wikipedia.org/wiki/Dependent_and_independent_variables) respectively. Store them in two different, new variables **X** (independent) and **Y** (dependent).\n3. Use a scatterplot to take a look at the data. 
It has been generated by sampling a function $f$ and adding Gaussian noise:\n\\begin{align}\ny_i &= f(x_i) + \\epsilon \\\\\n\\epsilon &\\sim \\mathcal{N}(0, \\sigma^2)\n\\end{align}\n\nYou can use execute the second cell below to take a look at $f(x)$.\n\n\n```python\n# ~~ your code for a) here ~~\ndata = np.loadtxt('poly_data.csv', delimiter = ',') #load data from csv data points are seperated by a \",\"\" \"\nX = data[0] #store independent variables in X\nY = data[1] #store dependent variable in Y\n\nplt.plot(X,Y,'o',color = \"royalblue\") \n# setting and labeling your axis\nplt.ylabel('dependent variable')\nplt.xlabel('independent variable')\nplt.xlim((0,10));\n```\n\n\n```python\n# Taking a loog at f(x)\ndef target_func(x):\n return 1.5*np.sin(x) - np.cos(x/2) + 0.5\n\nx = np.linspace(0,10, 101)\ny = target_func(x)\nplt.plot(x,y);\n```\n\nb) \n--------\n\nIn the lecture, you have derived the formula for linear regression with arbitrary basis functions and normal distributed residuals $\\epsilon$. Here, we choose polynomial basis functions and therefore will try and approximate the function above via a polynomial of degree $m$:\n$$y = \\alpha_0 + \\alpha_1x + \\alpha_2x^2 + \\alpha_3x^3 + \\dots + \\alpha_mx^m + \\epsilon$$\nDue to our choice of basis functions, this is called polynomial regression.\n\nThe simplest version of polynomial regression uses monomial basis functions $\\{1, x, x^2, x^3, \\dots \\}$ in the design matrix. Such a matrix is called the [Vandermonde matrix](https://en.wikipedia.org/wiki/Vandermonde_matrix) in linear algebra. Implement a function that takes the observed, independent variables $x_i$ stored in **X** and constrcuts a design matrix of the following form:\n\n$$ \\Phi = \\begin{bmatrix} 1 & x_1 & x_1^2 & \\dots & x_1^m \\\\ 1 & x_2 & x_2^2 & \\dots & x_2^m \\\\ 1 & x_3 & x_3^2 & \\dots & x_3^m \\\\ \\vdots & \\vdots & \\vdots & & \\vdots \\\\ 1 & x_n & x_n^2 & \\dots & x_n^m \\end{bmatrix}$$\n\nWe have provided the function's doc string as well as two quick lines that test your implementation in the notebook cell below.\n\n\n```python\ndef poly_dm(x, m):\n \"\"\"\n Generate a design matrix with monomial basis functions.\n \n Parameters\n ----------\n x : array_like\n 1-D input array.\n m : int\n degree of the monomial used for the last column.\n \n Returns\n -------\n phi : ndarray\n Design matrix.\n The columns are ``x^0, x^1, ..., x^m``.\n \"\"\"\n # create an array of shape length of input vector x m and fill it with zero\n #note: we use m + 1 because the amount of rows needs to be degree + the first row\n design_matrix = np.zeros((len(x),m+1)) \n \n for i in range(m+1): #fill all rows with the same data set (complete values each row)\n design_matrix[:,i] = x\n\n design_matrix[:,0] = 1 #change all entries of row 0 to 1\n\n for index, x in np.ndenumerate(design_matrix): # for all entries x with the index(column,row)\n exponent = index[1] # we set the variable exponent to entry 1 of the index -> row_nr or degree\n design_matrix[index] = x ** exponent #the entry at this index is now the original entry to the power of the row_nr\n \n return design_matrix #return the design matrix\ntry:\n print('poly_dm:',(lambda a=np.random.rand(10):'O.K.'if np.allclose(poly_dm(a,3),np.vander(a,4,True))else'Something went wrong! (Your result does not match!)')())\nexcept:\n print('poly_dm: Something went horribly wrong! 
(an error was thrown)')\nexample_array = np.array([1,2,3,4,5]) \npoly_dm(example_array,3)\n```\n\n poly_dm: O.K.\n\n\n\n\n\n array([[ 1., 1., 1., 1.],\n [ 1., 2., 4., 8.],\n [ 1., 3., 9., 27.],\n [ 1., 4., 16., 64.],\n [ 1., 5., 25., 125.]])\n\n\n\nc)\n--------\n\nAccording to the lecture, it is quite usefull to calculate the Moore-Penrose pseudoinverse $A^\\dagger$ of a matrix:\n$$ A^\\dagger = (A^T A)^{-1}A^T$$\nwhere $M^T$ means transpose of matrix $M$ and $M^{-1}$ denotes its inverse.\n\nAccording to the docstring in the cell below, implement a function that returns $A^\\dagger$ for a matrix $A$, and test your implementation against the small test that is included.\n\n\n```python\ndef pseudoinverse(A):\n \"\"\"\n Compute the (Moore-Penrose) pseudo-inverse of a matrix.\n \n Parameters\n ----------\n A : (M, N) array_like\n Matrix to be pseudo-inverted.\n \n Returns\n -------\n A_plus : (N, M) ndarray\n The pseudo-inverse of `a`.\n \"\"\"\n #print(\"Our original design-matrix:\")\n #print(A)\n \n #print(\"Transposed:\")\n A_transposed = A.transpose()\n #print(A_transposed)\n \n #Matrix product of A and A_transposed:\n A_to_be_inversed = np.matmul(A_transposed,A)\n #print(\"Matrix product of A and A_transposed:\")\n #print(A_to_be_inversed)\n \n #The inverse of the Matrix product\n A_inversed = np.linalg.inv(A_to_be_inversed)\n #print(\"The inverse of the Matrix product\")\n #print(A_inversed)\n \n #The Resulting pseudo_inverse of our given matrix\n pseudo_inverse = np.matmul(A_inversed, A_transposed)\n #print(\"The Resulting pseudo_inverse of our given matrix\")\n #print(pseudo_inverse)\n \n return pseudo_inverse\n\n# the lines below test the pseudo_inverse function\ntry:\n print('pseudo_inverse:',(lambda m=np.random.rand(9,5):'Good Job!'if np.allclose(pseudoinverse(m),np.linalg.pinv(m))else'Not quite! (Your result does not match!)')())\nexcept:\n print('pseudo_inverse: Absolutely not! (an error was thrown)')\n \n\n```\n\n pseudo_inverse: Good Job!\n\n\nd)\n-------\nTo estimate the parameters $\\alpha_i$ up to a chosen degree $m$, we use call the vector containing all the $\\alpha_i$ $w$ and solve the following formula presented in class:\n\\begin{align}\ny &= \\Phi w \\\\\nw &= \\Phi^\\dagger y\n\\end{align}\nwhere $\\Phi$ is the design matrix and $\\Phi^\\dagger$ its pseudoinverse and $y$ is the vector of dependent variables we observed in our dataset and stored in **Y**.\n\nImplement a function that calculates $w$ according to the docstring given below. 
Again, a short test of your implementation is provided.\n\n\n```python\ndef poly_regress(x, y, deg):\n \"\"\"\n Least squares polynomial fit.\n \n Parameters\n ----------\n x : array, shape (M,)\n x-coordinates of the M sample points.\n\n y : array, shape (M,)\n y-coordinates of the sample points.\n \n deg : int\n Degree of the fitting polynomial.\n \n Returns\n -------\n w : array, shape (deg+1,)\n Polynomial coefficients, highest power last.\n \"\"\"\n # variable to store our design matrix\n design_matrix = poly_dm(x, deg)\n \n #variavle to store our pseudo_inverse\n inverse_dm = pseudoinverse(design_matrix)\n \n # variable to store the parameters\n # parameters (beta-coefficients) are calculated by the Matrix product of our dm and it`s pseudo-inverse\n our_parameters = np.matmul(inverse_dm,y)\n \n #print(\"The Parameters are\")\n #print(our_parameters)\n \n return our_parameters\n\n# the lines below test the poly_regress function\ntry:\n print('poly_regress:',(lambda a1=np.random.rand(9),a2=np.random.rand(9):'Ace!'if \n np.allclose(poly_regress(a1,a2,2),np.polyfit(a1,a2,2)[::-1])else'Almost! (Your result does not match!)')())\nexcept:\n print('poly_regress: Not nearly! (an error was thrown)')\n```\n\n poly_regress: Ace!\n\n\ne)\n--------\nThe last function we will write will use the vector of coefficients we can now calculate to construct a polynomial function and evaluate it at any point we choose to. Remember, the form of this polynomial is given by:\n$$y = \\alpha_0 + \\alpha_1x + \\alpha_2x^2 + \\alpha_3x^3 + \\dots + \\alpha_mx^m$$\n\nThis is the model we assumed above, but we do not need to include the noise term here! Again, the function is specified in a docstring and tested in the little {try - catch} block below. *Hint:* The order of the polynomial you need to calculate is inherently given by the length of **w**, the number of coefficients. \n\n\n\n```python\ndef polynom(x, w):\n \"\"\" Evaluate a polynomial.\n \n Parameters\n ----------\n x : 1d array\n Points to evaluate.\n \n w : 1d array\n Coefficients of the monomials.\n \n Returns\n -------\n y : 1d array\n Polynomial evaluated for each cell of x.\n \"\"\"\n y_values = np.zeros(len(x)) #create an array containing with the same length as x \n for i in range(len(x)): # for every entry in x \n for j in range(len(w)): # for every value(beta-coefficient) in our vector w\n y_values[i] += w[j] * (x[i]**j) # the corresponding y value is the sum of all beta-coefficients times the x value to the power of j\n \n return y_values # retunr the y values\n\n# the lines below test the polynom function\ntry:\n print('polynom:',(lambda a1=np.random.rand(9),a2=np.random.rand(9):'OK'if np.allclose(polynom(a1,a2),np.polyval(a2[::-1],a1))else'Slight failure! (Your result does not match!)')())\nexcept:\n print('polynom: Significant failure! (an error was thrown)')\n```\n\n polynom: OK\n\n\nf)\n------\nf, as in finally. We can now use all the functions we have written above to investigate how well a polynomial of a degree $m$ fits the noisy data we are given. For $m \\in \\{1,2,10\\}$, estimate a polynomial function on the data. Evaluate the three functions on a vector of equidistant points between 0 and 10 (*linearly spaced*). Additionally, plot the original target function $f(x)$, as well as the scatter plot of the data samples. Make sure every graph and the scatter appear in the same plot. Label each graph by adding a label argument to the **plt.plot** function. 
This allows the use of the **legend()** function and makes the plot significantly more understandable!\n\n\n```python\nplt.figure(figsize = (14, 10))\n\ngenerated_numbers = np.linspace(0, 10, 1001) # generates 1000 evenly spaced number between 0 and 10\ntarget_function = target_func(generated_numbers) # variable to store given target function\n\nplt.scatter(X, Y, color = \"white\") #generate scatter plot with white dots\nplt.plot(generated_numbers, target_function, label = 'target function') #plot target function with our generated numbers\nax = plt.gca() #select graph axis (used for backgroundcolor)\nax.set_facecolor(\"black\") #set background color to black\n\nfor degree in [1,2,10]: # generate our polynomial given our data_set for degree 1, 2 and 10\n w = poly_regress(X, Y, degree) # call above defined functions\n y = polynom(generated_numbers, w)\n plt.plot(generated_numbers, y, label = 'degree: ' + str(degree)) # plot polynomials and add labels for the legend\n \n \"\"\"\n for degree in [5]:\n w = poly_regress(X, Y, degree)\n y = polynom(generated_numbers, w)\n plt.plot(generated_numbers, y, label = 'degree: ' + str(degree))\n \"\"\" \n \nplt.xlim((0, 10)) # zoom in\nplt.legend(); # show legend\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "aa644a929eaba017432e7cf01fe6e5c06ed876f6", "size": 115961, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Task_37-Group_26_Polynomial-Regression/Task 37 Group 26 Polynomial Regression.ipynb", "max_stars_repo_name": "Flexi187/sharing", "max_stars_repo_head_hexsha": "e742a3012d9b9990d8e1d0ffdb6c7a84e386e596", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Task_37-Group_26_Polynomial-Regression/Task 37 Group 26 Polynomial Regression.ipynb", "max_issues_repo_name": "Flexi187/sharing", "max_issues_repo_head_hexsha": "e742a3012d9b9990d8e1d0ffdb6c7a84e386e596", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Task_37-Group_26_Polynomial-Regression/Task 37 Group 26 Polynomial Regression.ipynb", "max_forks_repo_name": "Flexi187/sharing", "max_forks_repo_head_hexsha": "e742a3012d9b9990d8e1d0ffdb6c7a84e386e596", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 223.8629343629, "max_line_length": 68320, "alphanum_fraction": 0.9017169566, "converted": true, "num_tokens": 3300, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9416541561135441, "lm_q2_score": 0.8723473713594992, "lm_q1q2_score": 0.8214495278153977}} {"text": "---\nauthor: Nathan Carter (ncarter@bentley.edu)\n---\n\nThis answer assumes you have imported SymPy as follows.\n\n\n```python\nfrom sympy import * # load all math functions\ninit_printing( use_latex='mathjax' ) # use pretty math output\n```\n\nLet's assume we've defined a variable and created a formula, as covered\nin how to create symbolic variables.\n\n\n```python\nvar( 'x' )\nformula = x**2 + x\nformula\n```\n\n\n\n\n$\\displaystyle x^{2} + x$\n\n\n\nWe can substitute a value for $x$ using the `subs` function.\nYou provide the variable and the value to substitute.\n\n\n```python\nformula.subs( x, 8 ) # computes 8**2 + 8\n```\n\n\n\n\n$\\displaystyle 72$\n\n\n\nIf you had to substitute values for multiple variables, you can use\nmultiple `subs` calls or you can pass a dictionary to `subs`.\n\n\n```python\nvar( 'y' )\nformula = x/2 + y/3\nformula\n```\n\n\n\n\n$\\displaystyle \\frac{x}{2} + \\frac{y}{3}$\n\n\n\n\n```python\nformula.subs( x, 10 ).subs( y, 6 )\n```\n\n\n\n\n$\\displaystyle 7$\n\n\n\n\n```python\nformula.subs( { x: 10, y: 6 } )\n```\n\n\n\n\n$\\displaystyle 7$\n\n\n", "meta": {"hexsha": "140d951638e08ebbdca3a83391b792743e81e51b", "size": 3615, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "database/tasks/How to substitute a value for a symbolic variable/Python, using SymPy.ipynb", "max_stars_repo_name": "nathancarter/how2data", "max_stars_repo_head_hexsha": "7d4f2838661f7ce98deb1b8081470cec5671b03a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "database/tasks/How to substitute a value for a symbolic variable/Python, using SymPy.ipynb", "max_issues_repo_name": "nathancarter/how2data", "max_issues_repo_head_hexsha": "7d4f2838661f7ce98deb1b8081470cec5671b03a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "database/tasks/How to substitute a value for a symbolic variable/Python, using SymPy.ipynb", "max_forks_repo_name": "nathancarter/how2data", "max_forks_repo_head_hexsha": "7d4f2838661f7ce98deb1b8081470cec5671b03a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-07-18T19:01:29.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-29T06:47:11.000Z", "avg_line_length": 19.8626373626, "max_line_length": 99, "alphanum_fraction": 0.4871369295, "converted": true, "num_tokens": 302, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9416541577509316, "lm_q2_score": 0.8723473663814338, "lm_q1q2_score": 0.8214495245561525}} {"text": "```python\nimport sympy as sp\nsp.init_printing()\n```\n\n\n```python\nx = sp.symbols('x')\nsp.solve(x**2 - 8*x + 25)\n```\n\n\n```python\nx = 1 + 2*sp.I\n\nprint(f'x = {x}, Re(x) = {sp.re(x)}, Imag(x) = {sp.im(x)}, modulus is {sp.Abs(x)}, arg is {sp.arg(x)}')\n```\n\n\n```python\nx = sp.symbols('x')\nsp.integrate(sp.log(x),x)\n```\n\n\n```python\nsp.integrate(x**3 * sp.exp(-x),(x,0,sp.oo))\n```\n\n# Matrices\n\n\n```python\nA = sp.Matrix([[1, 2, -3], [4, 5, 6], [7, 8, 9]])\nA.inv()\n\n```\n\n\n```python\na = sp.symbols('a')\nA = sp.Matrix([[a, 0, 0], [1, a, 0], [0, 1, a]])\nA.inv()\n```\n\n\n```python\nC = sp.Matrix([[1, 2],[3, 4],[5, 6]])\nD = sp.Matrix([4, 2])\n# C * D.transpose()\nC\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "35e383ad2e7edba2eced77f8d17f0c6a7e6773f0", "size": 2012, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "SympyExamples.ipynb", "max_stars_repo_name": "fcooper8472/SympyExamples", "max_stars_repo_head_hexsha": "6df5b06722e9719c1631fca990439bcffc8fc46a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "SympyExamples.ipynb", "max_issues_repo_name": "fcooper8472/SympyExamples", "max_issues_repo_head_hexsha": "6df5b06722e9719c1631fca990439bcffc8fc46a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "SympyExamples.ipynb", "max_forks_repo_name": "fcooper8472/SympyExamples", "max_forks_repo_head_hexsha": "6df5b06722e9719c1631fca990439bcffc8fc46a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 17.649122807, "max_line_length": 109, "alphanum_fraction": 0.4378727634, "converted": true, "num_tokens": 281, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9597620573763841, "lm_q2_score": 0.8558511469672594, "lm_q1q2_score": 0.821413457621235}} {"text": "# Maxwell Boltzman distribution of speed \n\n\\begin{equation}\nf(v,T)=4\\pi v^2(\\frac{m}{2\\pi kT})^{3/2}e^{\\frac{-mv^2}{2 KT}}\n\\end{equation}\n\n\\begin{equation}\nRMS=\\sqrt{\\frac{3kT}{m}}\n\\end{equation}\n\n\\begin{equation}\nMPS=\\sqrt{\\frac{2kT}{m}}\n\\end{equation}\n\n\\begin{equation}\nMS=\\sqrt{\\frac{8kT}{\\pi m}}\n\\end{equation}\n\n\n```python\nimport math as mt\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline\n```\n\n\n```python\ndef maxwell(M,v, T):\n NA = 6.02e+23\n k = 1.38e-23\n m=M/NA\n a = m/(2*k)\n b =(a /(np.pi))**1.5\n prob =b*T**(-1.5) *4*np.pi*v**2*np.exp(-a*v**2/T) \n return prob\n```\n\n\n```python\ndef maxwell1(M,v, T):\n NA = 6.02e+23\n k = 1.38e-23\n m=M/NA\n a = m/(2*k)*v**2\n b =(a /(np.pi))**1.5\n prob2 =b*T**(-1.5) *4*np.pi*v**2 \n return prob2\n```\n\n\n```python\ndef maxwell2(M,v, T):\n NA = 6.02e+23\n k = 1.38e-23\n m=M/NA\n a = m/(2*k)*v**2\n b =(a /(np.pi))**1.5\n prob3 =np.exp(-a*v**2/T)\n return prob3\n```\n\n\n```python\nMo=16e-3 #mass of oxygen\nMh=2e-3 # mass of hydrogen\nMn=28e-3 # mass of nitrogen\nvelocity = np.arange(100, 750, 10) \n```\n\n\n```python\n# probability for oxygen at 200K,300K,400K\nprob200 = maxwell(Mo,velocity, 100.0)\nprob300 = maxwell(Mo,velocity, 200.0)\nprob400 = maxwell(Mo,velocity, 300.0)\n```\n\n\n```python\n# probability for oxygen at 200K,300K,400K\nprob2001 = maxwell1(Mo,velocity, 100.0)\nprob3001 = maxwell1(Mo,velocity, 200.0)\nprob4001 = maxwell1(Mo,velocity, 300.0)\n```\n\n\n```python\n# probability for oxygen at 200K,300K,400K\nprob2002 = maxwell2(Mo,velocity, 100.0)\nprob3002 = maxwell2(Mo,velocity, 200.0)\nprob4002 = maxwell2(Mo,velocity, 300.0)\n```\n\n\n```python\nplt.plot(velocity, prob200, 'g-',label='100k') \nplt.plot(velocity, prob300, 'b-',label='200k') \nplt.plot(velocity, prob400, 'r-',label='300k') \nplt.xlabel('Velocity(m/s)')\nplt.ylabel('Probability')\nplt.title('Maxwell Boltzman distribution of speed for Oxygen') \nplt.legend()\nplt.savefig(\"image/BoltmanO.png\", dpi = 600) # dpi dot per inch\nplt.show()\n```\n\n\n```python\nplt.plot(velocity, prob2001, 'g-',label='100k') \nplt.plot(velocity, prob3001, 'b-',label='200k') \nplt.plot(velocity, prob4001, 'r-',label='300k') \nplt.xlabel('Velocity(m/s)')\nplt.ylabel('Probability')\nplt.title('Maxwell Boltzman distribution of speed for Oxygen') \nplt.legend()\nplt.savefig(\"image/Boltman1.png\", dpi = 600) # dpi dot per inch\nplt.show()\n```\n\n\n```python\nplt.plot(velocity, prob2002, 'g-',label='100k') \nplt.plot(velocity, prob3002, 'b-',label='200k') \nplt.plot(velocity, prob4002, 'r-',label='300k') \nplt.xlabel('Velocity(m/s)')\nplt.ylabel('Probability')\nplt.title('Maxwell Boltzman distribution of speed for Oxygen') \nplt.legend()\nplt.savefig(\"image/Boltman2.png\", dpi = 600) # dpi dot per inch\nplt.show()\n```\n\n\n```python\nplt.semilogy(velocity, prob300, 'g-',label='original') \nplt.semilogy(velocity, prob3001, 'b-',label='v2') \nplt.semilogy(velocity, prob3002, 'r-',label='exp') \nplt.xlabel('Velocity(m/s)')\nplt.ylabel('Probability')\nplt.title('Maxwell Boltzman distribution of speed for Oxygen') \nplt.legend()\nplt.savefig(\"image/Boltman3.png\", dpi = 600) # dpi dot per inch\nplt.show()\n```\n\n\n```python\nM = {\"Velocity(m/s)\": velocity,\"Probability(100K)\": prob200,\"Probability(200K)\": prob300,\"Probability(300K)\": prob400}\n```\n\n\n```python\nDF = 
pd.DataFrame(M)\nDF\n```\n\n\n\n\n
        Velocity(m/s)  Probability(100K)  Probability(200K)  Probability(300K)\n    0             100           0.000612           0.000227           0.000126\n    1             110           0.000726           0.000272           0.000151\n    2             120           0.000845           0.000320           0.000178\n    3             130           0.000969           0.000371           0.000208\n    4             140           0.001094           0.000425           0.000239\n    ..            ...                ...                ...                ...\n    60            700           0.000295           0.001104           0.001319\n    61            710           0.000265           0.001061           0.001297\n    62            720           0.000237           0.001019           0.001274\n    63            730           0.000212           0.000977           0.001250\n    64            740           0.000189           0.000935           0.001225\n\n    [65 rows x 4 columns]\n
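\nAs a quick sanity check (our own addition, not part of the original notebook), the speed distribution should integrate to one. Evaluating `maxwell` on a wide speed grid, where the neglected tails are tiny, confirms this:\n\n\n```python\n# Check that the Maxwell-Boltzmann speed distribution is normalized:\n# the integral over all speeds should be (very close to) 1.\nv_wide = np.linspace(1, 2500, 5000)\narea = np.trapz(maxwell(Mo, v_wide, 300.0), v_wide)\nprint(area)   # should be very close to 1\n```\n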
    \n\n\n\n\n```python\nDF.to_csv(\"data/Boltzman.csv\")\n```\n\n\n```python\n# probability for hydrogen,nitrogen ,oxygen at 300K\nprobh = maxwell(Mh,velocity, 300) # for hydrogen\nprobn = maxwell(Mn,velocity, 300) # for nitrogen\nprobo = maxwell(Mo,velocity, 300) #for oxygen\n```\n\n\n```python\nplt.plot(velocity, probh, 'g-',label='Hydrogen') \nplt.plot(velocity, probn, 'b-',label='Nitrogen') \nplt.plot(velocity, probo, 'r-',label='Oxygen') \nplt.xlabel('Velocity(m/s)')\nplt.ylabel('Probability')\nplt.title('Maxwell Boltzman distribution of speed for gas at ') \nplt.legend()\nplt.savefig(\"image/Boltmanal.png\", dpi = 600) # dpi dot per inch\nplt.show()\n```\n\n\n```python\ndef rms(M,T):\n NA = 6.02e+23\n k = 1.38e-23\n m=M/NA\n vel =np.sqrt(3*k*T/m) \n return vel\n```\n\n\n```python\ntemp = np.arange(200, 500, 10) \n```\n\n\n```python\n# rms for gas \nrmsh = rms(Mh,temp) #for hydrogen\nrmso = rms(Mo,temp) #for oxygen\nrmsn = rms(Mn,temp) #for nitrogen\n```\n\n\n```python\nplt.plot(temp, rmsh, 'g-',label='Hydrogen') \nplt.plot(temp, rmso, 'b-',label='Oxygen') \nplt.plot(temp, rmsn, 'r-',label='Nitrogen') \nplt.xlabel('Temperature(K)')\nplt.ylabel('RMS(m/s)')\nplt.title('RMS speed for gas') \nplt.legend()\nplt.savefig(\"image/rms.png\", dpi = 600) # dpi dot per inch\nplt.show()\n```\n\n# Planck's formula of radiation\n\n\\begin{equation}\nf(\\lambda,T)=\\frac{2hc^2}{\\lambda^5}\\frac{1}{e^{\\frac{hc}{\\lambda KT}}-1}\n\\end{equation}\n\n\n```python\ndef planck(wav, T):\n h = 6.626e-34\n c = 3.0e+8\n k = 1.38e-23\n a = 2.0*h*c**2\n b = h*c/(wav*k*T)\n intensity = a/ ( (wav**5) * (np.exp(b) - 1.0) )\n return intensity\n```\n\n\n```python\nwavelengths = np.arange(6e-9, 2e-6, 1e-9) # wavelength in m\n```\n\n\n```python\n# intensity at 4000K, 5000K, 6000K, 7000K\nintensity4000 = planck(wavelengths, 4000)\nintensity5000 = planck(wavelengths, 5000)\nintensity6000 = planck(wavelengths, 6000)\nintensity7000 = planck(wavelengths, 7000)\n```\n\n\n```python\nplt.plot(wavelengths*1e9, intensity4000, 'r-',label='4000k') \nplt.plot(wavelengths*1e9, intensity5000, 'g-',label='5000k') \nplt.plot(wavelengths*1e9, intensity6000, 'k-',label='6000k') \nplt.xlabel('Wavelength(nm)')\nplt.ylabel('Spetral energy densiry(W/m$^3$)')\nplt.title('Black body radiation') \nplt.legend()\nplt.savefig(\"image/planck.png\", dpi = 600) # dpi dot per inch\nplt.show()\n```\n\n\n```python\nM1 = {\"wavelength(m)\": wavelengths,\"Intensity(4000K)\": intensity4000,\"Intensity(5000K)\": intensity5000,\"Intensity(6000K)\": intensity6000}\n```\n\n\n```python\nDF1 = pd.DataFrame(M1)\nDF1\n```\n\n\n\n\n
          wavelength(m)  Intensity(4000K)  Intensity(5000K)  Intensity(6000K)\n    0      6.000000e-09     3.391580e-236     4.586434e-184     2.603298e-149\n    1      7.000000e-09     2.704885e-199     1.305948e-154     8.037306e-125\n    2      8.000000e-09     1.173680e-171     1.471820e-132     1.711556e-106\n    3      9.000000e-09     3.428211e-150     1.945881e-115      2.873970e-92\n    4      1.000000e-08     4.822859e-133     9.161117e-102      6.521953e-81\n    ...             ...               ...               ...               ...\n    1989   1.995000e-06      7.428806e+11      1.165626e+12      1.618849e+12\n    1990   1.996000e-06      7.418241e+11      1.163810e+12      1.616189e+12\n    1991   1.997000e-06      7.407693e+11      1.161998e+12      1.613535e+12\n    1992   1.998000e-06      7.397163e+11      1.160190e+12      1.610887e+12\n    1993   1.999000e-06      7.386651e+11      1.158385e+12      1.608244e+12\n\n    [1994 rows x 4 columns]\n
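\nA quick check on these numbers (our own addition): the location of the 5000 K intensity peak should agree with Wien's displacement law, $\\lambda_{max} = b/T$ with $b \\approx 2.898 \\times 10^{-3} \\mathrm{m\\,K}$:\n\n\n```python\n# Compare the numerical peak of the 5000 K curve with Wien's displacement law.\ni_max = np.argmax(intensity5000)\nprint('numerical peak:', wavelengths[i_max], 'm')\nprint(\"Wien's law    :\", 2.898e-3/5000, 'm')\n```\n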
    \n\n\n\n\n```python\nDF1.to_csv(\"data/planck.csv\")\n```\n", "meta": {"hexsha": "777f04d10abbdab7cb52aaf5803297a96e5cfbeb", "size": 192883, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "maxwell.ipynb", "max_stars_repo_name": "joshidot/NPS", "max_stars_repo_head_hexsha": "0b5b7dde9b5a9769c8a437d193b210545f9344ca", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "maxwell.ipynb", "max_issues_repo_name": "joshidot/NPS", "max_issues_repo_head_hexsha": "0b5b7dde9b5a9769c8a437d193b210545f9344ca", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "maxwell.ipynb", "max_forks_repo_name": "joshidot/NPS", "max_forks_repo_head_hexsha": "0b5b7dde9b5a9769c8a437d193b210545f9344ca", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 238.4215080346, "max_line_length": 32588, "alphanum_fraction": 0.9057148634, "converted": true, "num_tokens": 3818, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9597620562254525, "lm_q2_score": 0.855851143290548, "lm_q1q2_score": 0.8214134531074407}} {"text": "```python\nfrom sympy import *\n\nhalf = S(1)/2\n\nB, q, l, xi = var(\"B, q, l, xi\")\n\nN1 = -xi**3 + 2*xi**2 - xi\nN2 = 2*xi**3 - 3*xi**2 + 1\nN3 = -xi**3 + xi**2\nN4 = -2*xi**3 + 3*xi**2\nN5 = xi**4/24 - xi**3/12 + xi**2/24\n\nA = Matrix([l*N1, N2, l*N3, N4, l**4/B*N5])\n\nA4 = Matrix([l*N1, N2, l*N3, N4])\ndA4 = diff(A4, xi) / l\n\npprint(\"\\nDistributed Load:\")\nq1 = 2*q*xi\nq2 = 2*q*(1 - xi)\npprint(\"q\u2081, q\u2082:\")\npprint(q1)\npprint(q2)\nfq = integrate(q1*A4*l, (xi, 0, half))\nfq += integrate(q2*A4*l, (xi, half, 1))\npprint(\"leads to fq / ql:\")\npprint(fq/(q*l))\n\npprint(\"\\nConcentrated Moment:\")\npprint(\"M at xi=1/2\")\nMxi = var(\"M\")\nfM = - Mxi * dA4.subs(xi, half)\npprint(\"leads to fM / M:\")\npprint(fM/M)\n\npprint(\"\\nConcentrated Force:\")\npprint(\"F at xi=1/2\")\nFxi = var(\"F\")\nfF = Fxi * A4.subs(xi, half)\npprint(\"leads to fF / F:\")\npprint(fF/F)\n\n# Distributed Load:\n# q\u2081, q\u2082:\n# 2\u22c5q\u22c5\u03be\n# 2\u22c5q\u22c5(-\u03be + 1)\n# leads to fq / ql:\n# \u23a1-5\u22c5l \u23a4\n# \u23a2\u2500\u2500\u2500\u2500\u2500\u23a5\n# \u23a2 96 \u23a5\n# \u23a2 \u23a5\n# \u23a2 1/4 \u23a5\n# \u23a2 \u23a5\n# \u23a2 5\u22c5l \u23a5\n# \u23a2 \u2500\u2500\u2500 \u23a5\n# \u23a2 96 \u23a5\n# \u23a2 \u23a5\n# \u23a3 1/4 \u23a6\n#\n# Concentrated Moment:\n# M at xi=1/2\n# leads to fM / M:\n# \u23a1-1/4\u23a4\n# \u23a2 \u23a5\n# \u23a2 3 \u23a5\n# \u23a2\u2500\u2500\u2500 \u23a5\n# \u23a22\u22c5l \u23a5\n# \u23a2 \u23a5\n# \u23a2-1/4\u23a5\n# \u23a2 \u23a5\n# \u23a2-3 \u23a5\n# \u23a2\u2500\u2500\u2500 \u23a5\n# \u23a32\u22c5l \u23a6\n#\n# Concentrated Force:\n# F at xi=1/2\n# leads to fF / F:\n# \u23a1-l \u23a4\n# \u23a2\u2500\u2500\u2500\u23a5\n# \u23a2 8 \u23a5\n# \u23a2 \u23a5\n# \u23a21/2\u23a5\n# \u23a2 \u23a5\n# \u23a2 l \u23a5\n# \u23a2 \u2500 \u23a5\n# \u23a2 8 \u23a5\n# \u23a2 \u23a5\n# \u23a31/2\u23a6\n\n```\n", "meta": {"hexsha": "86d582f74478c417a89f84aeb8d8928bfa3245f6", "size": 3624, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "ipynb/TM_A/TM_2/arbitrary-load.ipynb", "max_stars_repo_name": 
"kassbohm/tm-snippets", "max_stars_repo_head_hexsha": "5e0621ba2470116e54643b740d1b68b9f28bff12", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "ipynb/TM_A/TM_2/arbitrary-load.ipynb", "max_issues_repo_name": "kassbohm/tm-snippets", "max_issues_repo_head_hexsha": "5e0621ba2470116e54643b740d1b68b9f28bff12", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ipynb/TM_A/TM_2/arbitrary-load.ipynb", "max_forks_repo_name": "kassbohm/tm-snippets", "max_forks_repo_head_hexsha": "5e0621ba2470116e54643b740d1b68b9f28bff12", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 29.7049180328, "max_line_length": 57, "alphanum_fraction": 0.4042494481, "converted": true, "num_tokens": 840, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9805806478450307, "lm_q2_score": 0.8376199592797929, "lm_q1q2_score": 0.8213539223185077}} {"text": "# Revisiting error propagation with automatic differentiation\n\nBy Kyle Cranmer, March 2, 2020\n\nThis notebook is dedicated to Feeman Dyson, who died on February 28, 2020 in Princeton, NJ at the age of 96.\n\n\u201cNew directions in science are launched by new tools much more often than by new concepts. The effect of a concept-driven revolution is to explain old things in new ways. The effect of a tool-driven revolution is to discover new things that have to be explained.\u201d\n\n-- Freeman Dyson\n\n)\n\n## Reminder of propagation of errors \n\nThis notebook was made to investigate the propagation of errors formula.\nWe imagine that we have a function $q(x,y)$ and we want to propagate the\nuncertainty on $x$ and $y$ (denoted $\\sigma_x$ and $\\sigma_y$, respectively) through to the quantity $q$.\n\nThe most straight forward way to do this is just randomly sample $x$ and $y$, evaluate $q$ and look at it's distribution. This is really the definition of what we mean by propagation of uncertianty. It's very easy to do with some simply python code.\n\nThe calculus formula for the propagation of errors is really an approximation. This is the formula for a general $q(x,y)$\n\\begin{equation}\n\\sigma_q^2 = \\left( \\frac{\\partial q}{\\partial x} \\sigma_x \\right)^2 + \\left( \\frac{\\partial q}{\\partial y}\\sigma_y \\right)^2\n\\end{equation}\n\nIn the special case of addition $q(x,y) = x\\pm y$ we have $\\sigma_q^2 = \\sigma_x^2 + \\sigma_y^2$.\n\nIn the special case of multiplication $q(x,y) = x y$ and division $q(x,y) = x / y$ we have $(\\sigma_q/q)^2 = (\\sigma_x/x)^2 + (\\sigma_y/y)^2$, which we can rewrite as $\\sigma_q = (x/y) \\sqrt{(\\sigma_x/x)^2 + (\\sigma_y/y)^2}$\n\nLet's try out these formulas and compare the direct approach of making the distribution to the prediction from these formulas\n\n## Automatic Differentiation\n\n\n\nExcerpts from the Wikipedia article: https://en.wikipedia.org/wiki/Automatic_differentiation\n\nIn mathematics and computer algebra, automatic differentiation (AD), also called algorithmic differentiation or computational differentiation,[1][2] is a set of techniques to numerically evaluate the derivative of a function specified by a computer program. 
AD exploits the fact that every computer program, no matter how complicated, executes a sequence of elementary arithmetic operations (addition, subtraction, multiplication, division, etc.) and elementary functions (exp, log, sin, cos, etc.). By applying the chain rule repeatedly to these operations, derivatives of arbitrary order can be computed automatically, accurately to working precision, and using at most a small constant factor more arithmetic operations than the original program.\n\n\n\nUsually, two distinct modes of AD are presented, forward accumulation (or forward mode) and reverse accumulation (or reverse mode). Forward accumulation specifies that one traverses the chain rule from inside to outside (that is, first compute \n${\\displaystyle dw_{1}/dx}$ and then \n${\\displaystyle dw_{2}/dw_{1}}$ and at last \n${\\displaystyle dy/dw_{2}})$, while reverse accumulation has the traversal from outside to inside (first compute \n${\\displaystyle dy/dw_{2}}$ and then \n${\\displaystyle dw_{2}/dw_{1}}$ and at last \n${\\displaystyle dw_{1}/dx})$. More succinctly,\n\n * forward accumulation computes the recursive relation: \n${\\displaystyle {\\frac {dw_{i}}{dx}}={\\frac {dw_{i}}{dw_{i-1}}}{\\frac {dw_{i-1}}{dx}}}$ with \n${\\displaystyle w_{3}=y}$, and,\n\n * reverse accumulation computes the recursive relation: \n${\\displaystyle {\\frac {dy}{dw_{i}}}={\\frac {dy}{dw_{i+1}}}{\\frac {dw_{i+1}}{dw_{i}}}}$ with \n${\\displaystyle w_{0}=x}$.\n\n\n\n\nWe will use \nhttps://jax.readthedocs.io/en/latest/notebooks/autodiff_cookbook.html\n\n\n```python\nfrom jax import grad, jacfwd\nimport jax.numpy as np\n```\n\nNow here are 3 lines of code for the propagation of uncertainty formula\n\n\\begin{equation}\n\\sigma_q = \\sqrt{\\left( \\frac{\\partial q}{\\partial x} \\sigma_x \\right)^2 + \\left( \\frac{\\partial q}{\\partial y}\\sigma_y \\right)^2}\n\\end{equation}\n\n\n```python\ndef error_prop_jax_gen(q,x,dx):\n jac = jacfwd(q)\n return np.sqrt(np.sum(np.power(jac(x)*dx,2)))\n```\n\n## Setup two observations with uncertainties\n\nBelow I'll use $x$ and $y$ for symbols, but they will be stored in the array `x` so that `x[0]=`$x$ and `x[1]=$y$.\n\n\n```python\nx_ = np.array([2.,3.])\ndx_ = np.array([.1,.1])\n```\n\n /Users/cranmer/anaconda3/envs/jax-md/lib/python3.6/site-packages/jax/lib/xla_bridge.py:120: UserWarning: No GPU/TPU found, falling back to CPU.\n warnings.warn('No GPU/TPU found, falling back to CPU.')\n\n\n## Addition and Subtraction\n\n\nIn the special case of addition $q(x,y) = x\\pm y$ we have $\\sigma_q^2 = \\sigma_x^2 + \\sigma_y^2$.\n\n\n\n```python\ndef q(x):\n return x[0]+x[1]\n```\n\n\n```python\ndef error_prop_classic(x, dx):\n # for q = x[0]*x[1]\n ret = dx[0]**2 + dx[1]**2\n return np.sqrt(ret)\n```\n\n\n```python\nprint('q = ', q(x_), '+/-', error_prop_classic(x_, dx_))\n```\n\n q = 5.0 +/- 0.14142136\n\n\n\n```python\nprint('q = ', q(x_), '+/-', error_prop_jax_gen(q, x_, dx_))\n```\n\n q = 5.0 +/- 0.14142136\n\n\n## Multiplication and Division\n\nIn the special case of multiplication \n\\begin{equation}\nq(x,y) = x y\n\\end{equation}\nand division \n\\begin{equation}\nq(x,y) = \\frac{x}{y}\n\\end{equation}\n\n\\begin{equation}\n(\\sigma_q/q)^2 = (\\sigma_x/x)^2 + (\\sigma_y/y)^2\n\\end{equation}\nwhich we can rewrite as\n\\begin{equation}\n\\sigma_q = (x/y) \\sqrt{\\left(\\frac{\\sigma_x}{x}\\right)^2 + \\left(\\frac{\\sigma_y}{y}\\right)^2}\n\\end{equation}\n\n\n```python\ndef q(x):\n return x[0]*x[1]\n```\n\n\n```python\ndef error_prop_classic(x, dx):\n # for q = 
x[0]*x[1]\n ret = (dx[0]/x[0])**2 + (dx[1]/x[1])**2 \n return (x[0]*x[1])*np.sqrt(ret)\n```\n\n\n```python\nprint('q = ', q(x_), '+/-', error_prop_classic(x_, dx_))\n```\n\n q = 6.0 +/- 0.36055514\n\n\n\n```python\nprint('q = ', q(x_), '+/-', error_prop_jax_gen(q, x_, dx_))\n```\n\n q = 6.0 +/- 0.36055514\n\n\n\n```python\ndef q(x):\n return x[0]/x[1]\n```\n\n\n```python\ndef error_prop_classic(x, dx):\n # for q = x[0]*x[1]\n ret = (dx[0]/x[0])**2 + (dx[1]/x[1])**2 \n return (x[0]/x[1])*np.sqrt(ret)\n```\n\n\n```python\nprint('q = ', q(x_), '+/-', error_prop_classic(x_, dx_))\n```\n\n q = 0.6666667 +/- 0.040061682\n\n\n\n```python\nprint('q = ', q(x_), '+/-', error_prop_jax_gen(q, x_, dx_))\n```\n\n q = 0.6666667 +/- 0.040061682\n\n\n## Powers\n\n$q(x,y) = x^m y^n$ \nwe have \n\n\\begin{equation}\n(\\sigma_q/q)^2 = \\left(|m|\\frac{\\sigma_x}{x}\\right)^2 + \\left(|n|\\frac{\\sigma_y}{y}\\right)^2\n\\end{equation}\nwhich we can rewrite as \n\\begin{equation}\n\\sigma_q = x^m y^n \\sqrt{\\left(|m|\\frac{\\sigma_x}{x}\\right)^2 + \\left(|n|\\frac{\\sigma_y}{y}\\right)^2}\n\\end{equation}\n\n\n\n```python\ndef q(x, m=2, n=3):\n return np.power(x[0],m)*np.power(x[1],n)\n```\n\n\n```python\nx_ = np.array([1.5, 2.5])\ndx_ = np.array([.1, .1])\nq(x_)\n```\n\n\n\n\n DeviceArray(35.15625, dtype=float32)\n\n\n\n\n```python\ndef error_prop_classic(x, dx):\n # for q = x[0]*x[1]\n dq_ = q(x_)*np.sqrt(np.power(2*dx_[0]/x_[0],2)+np.power(3*dx_[1]/x_[1],2))\n return dq_\n```\n\n\n```python\nprint('q = ', q(x_), '+/-', error_prop_classic(x_, dx_))\n```\n\n q = 35.15625 +/- 6.3063865\n\n\n\n```python\nprint('q = ', q(x_), '+/-', error_prop_jax_gen(q, x_, dx_))\n```\n\n q = 35.15625 +/- 6.3063865\n\n\n## Misc Examples\n\nSee some examples here:\n\nhttp://www.geol.lsu.edu/jlorenzo/geophysics/uncertainties/Uncertaintiespart2.html\n\n\n\nExample: `w = (4.52 \u00b1 0.02) cm, A = (2.0 \u00b1 0.2), y = (3.0 \u00b1 0.6) cm`. Find\n\n\\begin{equation}\nz=\\frac{wy^2}{\\sqrt{A}}\n\\end{equation}\n\nThe second relative error, (Dy/y), is multiplied by 2 because the power of y is 2. \nThe third relative error, (DA/A), is multiplied by 0.5 since a square root is a power of one half.\n\nSo Dz = 0.49 (28.638 ) = 14.03 which we round to 14 \n\nz = (29 \u00b1 14) \nUsing Eq. 3b, \nz=(29 \u00b1 12) \nBecause the uncertainty begins with a 1, we keep two significant figures and round the answer to match.\n\n\n```python\ndef q(x):\n return x[0]*x[2]*x[2]/np.sqrt(x[1])\n```\n\n\n```python\nx_ = np.array([4.52, 2., 3.]) #w,A,y\ndx_ = np.array([.02, .2, .6])\n\nprint('q = ', q(x_), '+/-', error_prop_jax_gen(q, x_, dx_))\n```\n\n q = 28.765104 +/- 11.596283\n\n\n### Check with a plot\n\n\n```python\nimport numpy as onp #using jax as np right now\nw_ = onp.random.normal(x_[0], dx_[0], 10000)\nA_ = onp.random.normal(x_[1], dx_[1], 10000)\ny_ = onp.random.normal(x_[2], dx_[2], 10000)\nx__ = np.vstack((w_, A_, y_))\nz_ = q(x__)\nprint('mean =', np.mean(z_), 'std = ', np.std(z_))\n```\n\n mean = 30.050316 std = 11.813263\n\n\n\n```python\nimport matplotlib.pyplot as plt\n```\n\n\n```python\n_ = plt.hist(z_, bins=50)\n```\n\n## Example 2\n\nalso taken from http://www.geol.lsu.edu/jlorenzo/geophysics/uncertainties/Uncertaintiespart2.html\n\n\n`w = (4.52 \u00b1 0.02) cm, x = (2.0 \u00b1 0.2) cm, y = (3.0 \u00b1 0.6) cm. `\n\nFind \n\\begin{equation}\nz = w x +y^2\n\\end{equation}\n\n\nWe have v = wx = (9.0 \u00b1 0.9) cm. \nThe calculation of the uncertainty in is the same as that shown to the left. Then from Eq. 
1b \nDz = 3.7 \nz = (18 \u00b1 4) . \n\n\n```python\ndef q(x):\n # [w,x,y]\n return x[0]*x[1]+x[2]*x[2]\n```\n\n\n```python\nx_ = np.array([4.52, 2., 3.]) #w,x,y\ndx_ = np.array([.02, .2, .6])\n```\n\n\n```python\nprint(q(x_),'+/-', error_prop_jax_gen(q, x_, dx_))\n```\n\n 18.04 +/- 3.711983\n\n\n## An example with many inputs\n\nThe code we used for `error_prop_jax_gen` is generic and supports functions `q` on any number of variables\n\n\n```python\ndef q(x):\n return np.sum(x)\n```\n\n\n```python\nx_ = 1.*np.arange(1,101) #counts from 1-100 (and 1.* to make them floats)\ndx_ = 0.1*np.ones(100)\n```\n\nThe sum from $1 to N$ is $N*(N+1)/2$ (see [the story of Gauss](https://hsm.stackexchange.com/questions/384/did-gauss-find-the-formula-for-123-ldotsn-2n-1n-in-elementary-school)), so we expect q(x)=5050. And the uncertainty should be $\\sqrt{100}*0.1$ = 1.\n\n\n```python\nprint(q(x_),'+/-', error_prop_jax_gen(q, x_, dx_))\n```\n\n 5050.0 +/- 1.0\n\n\nanother toy example... product from 1 to 10\n\n\n```python\ndef q(x):\n return np.product(x)\n```\n\n\n```python\nx_ = 1.*np.arange(1,11) #counts from 1-100 (and 1.* to make them floats)\ndx_ = 0.1*np.ones(10)\nprint(q(x_),'+/-', error_prop_jax_gen(q, x_, dx_))\n```\n\n 3628800.0 +/- 451748.1\n\n\nChecking this is an exercise left to the reader :-)\n\n\n```python\n\n```\n", "meta": {"hexsha": "e6b4c145903e77ad0a24ba4636c0c7f1f7af6650", "size": 29536, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "book/error-propagation/error_propagation_with_jax.ipynb", "max_stars_repo_name": "willettk/stats-ds-book", "max_stars_repo_head_hexsha": "06bc751a7e82f73f9d7419f32fe5882ec5742f2f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 41, "max_stars_repo_stars_event_min_datetime": "2020-08-18T12:14:43.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T16:37:17.000Z", "max_issues_repo_path": "book/error-propagation/error_propagation_with_jax.ipynb", "max_issues_repo_name": "willettk/stats-ds-book", "max_issues_repo_head_hexsha": "06bc751a7e82f73f9d7419f32fe5882ec5742f2f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 7, "max_issues_repo_issues_event_min_datetime": "2020-08-19T04:22:24.000Z", "max_issues_repo_issues_event_max_datetime": "2020-12-22T15:18:24.000Z", "max_forks_repo_path": "book/error-propagation/error_propagation_with_jax.ipynb", "max_forks_repo_name": "willettk/stats-ds-book", "max_forks_repo_head_hexsha": "06bc751a7e82f73f9d7419f32fe5882ec5742f2f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 13, "max_forks_repo_forks_event_min_datetime": "2020-08-19T02:57:47.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-03T15:24:07.000Z", "avg_line_length": 36.1960784314, "max_line_length": 5972, "alphanum_fraction": 0.6235102925, "converted": true, "num_tokens": 4408, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9241418199787564, "lm_q2_score": 0.8887587905460026, "lm_q1q2_score": 0.8213391662173012}} {"text": "# Elliptic PDE: Illustrating the Maximum Principle\n\nCopyright (C) 2010-2020 Luke Olson
    \nCopyright (C) 2020 Andreas Kloeckner\n\n
    \nMIT License\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in\nall copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\nTHE SOFTWARE.\n
    \n\n\n```python\nfrom mpl_toolkits.mplot3d import Axes3D\nfrom matplotlib import cm\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nimport sympy as sym\n```\n\n## Checking the Solution Symbolically\n\n\n```python\nsx = sym.Symbol(\"x\")\nsy = sym.Symbol(\"y\")\nsr = sym.sqrt(sx**2 + sy**2)\nsphi = sym.atan2(sy, sx)\nssol = sr**2 * sym.cos(2*sphi)\n\nsym.simplify(sym.diff(ssol, sx, 2) + sym.diff(ssol, sy, 2))\n```\n\n\n\n\n$\\displaystyle 0$\n\n\n\n## Plotting the Solution\n\n\n```python\nstep = 0.04\nmaxval = 1.0\n\nr = np.linspace(0, 1.25, 50)\np = np.linspace(0, 2*np.pi, 50)\nR, P = np.meshgrid(r, p)\nX, Y = R*np.cos(P), R*np.sin(P)\n\nZ = R**2 * np.cos(2*P)\n\nfig = plt.figure()\nax = fig.add_subplot(111, projection='3d')\nax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap=cm.YlGnBu_r,\n linewidth=0, antialiased=False)\n```\n\nFor the domain (and any given subdomain):\n* Where are maximum and minimum attained?\n\n\n```python\n\n```\n", "meta": {"hexsha": "aaac9eb35e9c555bc8565748c58373b184a24e61", "size": 48500, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "demos/intro/Elliptic PDE Illustrating the Maximum Principle.ipynb", "max_stars_repo_name": "inducer/numpde-notes", "max_stars_repo_head_hexsha": "80952b692fc16f185042a64d91312b0e53fafe17", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": 10, "max_stars_repo_stars_event_min_datetime": "2020-05-31T23:00:01.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-25T15:08:14.000Z", "max_issues_repo_path": "demos/intro/Elliptic PDE Illustrating the Maximum Principle.ipynb", "max_issues_repo_name": "inducer/numpde-notes", "max_issues_repo_head_hexsha": "80952b692fc16f185042a64d91312b0e53fafe17", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "demos/intro/Elliptic PDE Illustrating the Maximum Principle.ipynb", "max_forks_repo_name": "inducer/numpde-notes", "max_forks_repo_head_hexsha": "80952b692fc16f185042a64d91312b0e53fafe17", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2020-08-14T22:49:30.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-25T15:08:34.000Z", "avg_line_length": 270.9497206704, "max_line_length": 44040, "alphanum_fraction": 0.9329484536, "converted": true, "num_tokens": 569, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8976952866333484, "lm_q2_score": 0.914900959053549, "lm_q1q2_score": 0.821302278678701}} {"text": "# Vector Multiplication\nVector multiplication can be performed in three ways:\n\n- Scalar Multiplication\n- Dot Product Multiplication\n- Cross Product Multiplication\n\n## Scalar Multiplication\nLet's start with *scalar* multiplication - in other words, multiplying a vector by a single numeric value.\n\nSuppose I want to multiply my vector by 2, which I could write like this:\n\n\\begin{equation} \\vec{w} = 2\\vec{v}\\end{equation}\n\nNote that the result of this calculation is a new vector named **w**. 
So how would we calculate this?\nRecall that **v** is defined like this:\n\n\\begin{equation}\\vec{v} = \\begin{bmatrix}2 \\\\ 1 \\end{bmatrix}\\end{equation}\n\nTo calculate 2v, we simply need to apply the operation to each dimension value in the vector matrix, like this:\n\n\\begin{equation}\\vec{w} = \\begin{bmatrix}2 \\cdot 2 \\\\ 2 \\cdot 1 \\end{bmatrix}\\end{equation}\n\nWhich gives us the following result:\n\n\\begin{equation}\\vec{w} = \\begin{bmatrix}2 \\cdot 2 \\\\ 2 \\cdot 1 \\end{bmatrix} = \\begin{bmatrix}4 \\\\ 2 \\end{bmatrix}\\end{equation}\n\nIn Python, you can apply these sort of matrix operations directly to numpy arrays, so we can simply calculate **w** like this:\n\n\n```python\n%matplotlib inline\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport math\n\nv = np.array([2,1])\n\nw = 2 * v\nprint(w)\n\n# Plot w\norigin = [0], [0]\nplt.grid()\nplt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))\nplt.quiver(*origin, *w, scale=10)\nplt.show()\n```\n\nThe same approach is taken for scalar division.\n\nTry it for yourself - use the cell below to calculate a new vector named **b** based on the following definition:\n\n\\begin{equation}\\vec{b} = \\frac{\\vec{v}}{2}\\end{equation}\n\n\n```python\nb = v / 2\nprint(b)\n\n# Plot b\norigin = [0], [0]\nplt.axis('equal')\nplt.grid()\nplt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))\nplt.quiver(*origin, *b, scale=10)\nplt.show()\n```\n\n## Dot Product Multiplication\nSo we've seen how to multiply a vector by a scalar. How about multiplying two vectors together? There are actually two ways to do this depending on whether you want the result to be a *scalar product* (in other words, a number) or a *vector product* (a vector).\n\nTo get a scalar product, we calculate the *dot product*. This takes a similar approach to multiplying a vector by a scalar, except that it multiplies each component pair of the vectors and sums the results. To indicate that we are performing a dot product operation, we use the • operator:\n\n\\begin{equation} \\vec{v} \\cdot \\vec{s} = (v_{1} \\cdot s_{1}) + (v_{2} \\cdot s_{2}) ... + \\; (v_{n} \\cdot s_{n})\\end{equation}\n\nSo for our vectors **v** (2,1) and **s** (-3,2), our calculation looks like this:\n\n\\begin{equation} \\vec{v} \\cdot \\vec{s} = (2 \\cdot -3) + (1 \\cdot 2) = -6 + 2 = -4\\end{equation}\n\nSo the dot product, or scalar product, of **v** • **s** is **-4**.\n\nIn Python, you can use the *numpy.**dot*** function to calculate the dot product of two vector arrays:\n\n\n```python\nimport numpy as np\n\nv = np.array([2,1])\ns = np.array([-3,2])\nd = np.dot(v,s)\nprint (d)\n```\n\n -4\n\n\nIn Python 3.5 and later, you can also use the **@** operator to calculate the dot product:\n\n\n```python\nimport numpy as np\n\nv = np.array([2,1])\ns = np.array([-3,2])\nd = v @ s\nprint (d)\n```\n\n -4\n\n\n### The Cosine Rule\nAn useful property of vector dot product multiplication is that we can use it to calculate the cosine of the angle between two vectors. 
We could write the dot products as:\n\n$$ \\vec{v} \\cdot \\vec{s} = \\|\\vec{v} \\|\\|\\vec{s}\\| \\cos (\\theta) $$ \n\nWhich we can rearrange as:\n\n$$ \\cos(\\theta) = \\frac{\\vec{v} \\cdot \\vec{s}}{\\|\\vec{v} \\|\\|\\vec{s}\\|} $$\n\nSo for our vectors **v** (2,1) and **s** (-3,2), our calculation looks like this:\n\n$$ \\cos(\\theta) = \\frac{(2 \\cdot -3) + (1 \\cdot 2)}{\\sqrt{2^{2} + 1^{2}} \\times \\sqrt{(-3)^{2} + 2^{2}}} $$\n\nSo:\n\n$$\\cos(\\theta) = \\frac{-4}{8.0622577483}$$\n\nWhich calculates to:\n\n$$\\cos(\\theta) = -0.496138938357 $$\n\nSo:\n\n$$\\theta \\approx 119.74 $$\n\nHere's that calculation in Python:\n\n\n```python\nimport math\nimport numpy as np\n\n# define our vectors\nv = np.array([2,1])\ns = np.array([-3,2])\n\n# get the magnitudes\nvMag = np.linalg.norm(v)\nsMag = np.linalg.norm(s)\n\n# calculate the cosine of theta\ncos = (v @ s) / (vMag * sMag)\n\n# so theta (in degrees) is:\ntheta = math.degrees(math.acos(cos))\n\nprint(theta)\n\n```\n\n 119.74488129694222\n\n\n## Cross Product Multiplication\nTo get the *vector product* of multiplying two vectors together, you must calculate the *cross product*. The result of this is a new vector that is at right angles to both the other vectors in 3D Euclidean space. This means that the cross-product only really makes sense when working with vectors that contain three components.\n\nFor example, let's suppose we have the following vectors:\n\n\\begin{equation}\\vec{p} = \\begin{bmatrix}2 \\\\ 3 \\\\ 1 \\end{bmatrix}\\;\\; \\vec{q} = \\begin{bmatrix}1 \\\\ 2 \\\\ -2 \\end{bmatrix}\\end{equation}\n\nTo calculate the cross product of these vectors, written as **p** x **q**, we need to create a new vector (let's call it **r**) with three components (r1, r2, and r3). The values for these components are calculated like this:\n\n\\begin{equation}r_{1} = p_{2}q_{3} - p_{3}q_{2}\\end{equation}\n\\begin{equation}r_{2} = p_{3}q_{1} - p_{1}q_{3}\\end{equation}\n\\begin{equation}r_{3} = p_{1}q_{2} - p_{2}q_{1}\\end{equation}\n\nSo in our case:\n\n\\begin{equation}\\vec{r} = \\vec{p} \\times \\vec{q} = \\begin{bmatrix}(3 \\cdot -2) - (1 \\cdot 2) \\\\ (1 \\cdot 1) - (2 \\cdot -2) \\\\ (2 \\cdot 2) - (3 \\cdot 1) \\end{bmatrix} = \\begin{bmatrix}-6 - 2 \\\\ 1 - -4 \\\\ 4 - 3 \\end{bmatrix} = \\begin{bmatrix}-8 \\\\ 5 \\\\ 1 \\end{bmatrix}\\end{equation}\n\nIn Python, you can use the *numpy.**cross*** function to calculate the cross product of two vector arrays:\n\n\n```python\nimport numpy as np\n\np = np.array([2,3,1])\nq = np.array([1,2,-2])\nr = np.cross(p,q)\nprint (r)\n```\n\n [-8 5 1]\n\n", "meta": {"hexsha": "80ae19db9fccc77d9dbb787aec6991684259b945", "size": 21656, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Essential_Math_for_Machine_Learning_Python_Edition/Module03/03-02-Vector Multiplication.ipynb", "max_stars_repo_name": "chandlersong/pythonMath", "max_stars_repo_head_hexsha": "f267f14b954327bea61485fe37590fefc0d45e65", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Essential_Math_for_Machine_Learning_Python_Edition/Module03/03-02-Vector Multiplication.ipynb", "max_issues_repo_name": "chandlersong/pythonMath", "max_issues_repo_head_hexsha": "f267f14b954327bea61485fe37590fefc0d45e65", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, 
"max_forks_repo_path": "Essential_Math_for_Machine_Learning_Python_Edition/Module03/03-02-Vector Multiplication.ipynb", "max_forks_repo_name": "chandlersong/pythonMath", "max_forks_repo_head_hexsha": "f267f14b954327bea61485fe37590fefc0d45e65", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 63.8820058997, "max_line_length": 5900, "alphanum_fraction": 0.7637606206, "converted": true, "num_tokens": 1877, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9532750400464604, "lm_q2_score": 0.8615382094310355, "lm_q1q2_score": 0.8212828710969261}} {"text": "# Modelo de motor DC\n\nEl par $\\tau_r$ en el eje de un motor DC con constante de par $K_{tr}$, momento de inercia del rotor $I_r$, y coeficiente de rozamiento viscoso $b_r$, cuando circula una corriente $i$ por sus bobinados y gira a una velocidad $\\omega_r$, ser\u00e1:\n\n$$\n\\begin{equation}\n \\tau_r = K_{tr}i - I_r\\dot\\omega_r - b_r\\omega_r\n\\end{equation}\n$$\n\nAl introducir una reductora con relaci\u00f3n de transmisi\u00f3n $n$ y rendimiento $\\eta$, la velocidad de giro a la salida $\\omega_g$ se reduce en un factor $n$. El par, por el contrario, es multiplicado por el mismo factor, adem\u00e1s de verse afectado por las p\u00e9rdidas mec\u00e1nicas en los engranajes, representadas a trav\u00e9s del rendimiento $\\eta$:\n\n$$\n\\begin{equation}\n \\omega_g = \\frac{\\omega_r}{n} \\qquad\n \\tau_g = \\eta n\\tau_r\n\\end{equation}\n$$\n\nSustituyendo ambas expresiones en la primera ecuaci\u00f3n, se obtiene la relaci\u00f3n entre par y velocidad a la salida de la reductora:\n\n$$\n\\begin{equation}\n \\tau_g = \\eta nK_{tr}i - \\eta n^2I_r\\dot\\omega_g - \\eta n^2b_r\\omega_g\n\\end{equation}\n$$\n\nEn esta expresi\u00f3n, es f\u00e1cil identificar los par\u00e1metros equivalentes del conjunto motor-reductora, visto desde el lado de la carga:\n\n$$\n\\begin{equation}\n K_{tg} = \\eta nK_{tr} \\qquad\n b_g = \\eta n^2b_r \\qquad\n I_g = \\eta n^2I_r\n\\end{equation}\n$$\n\nDel mismo modo, se puede proceder con la ecuaci\u00f3n del circuito el\u00e9ctrico. El voltaje $V$ en los bornes del motor es:\n\n$$\n\\begin{equation}\n V = L\\frac{\\mathrm{d}i}{\\mathrm{d}t} + Ri + K_{vr}\\omega_r\n\\end{equation}\n$$\n\ndonde $L$ y $R$ son la inductancia y la resistencia del bobinado, respectivamente, y $K_{vr}$ es la constante de velocidad del motor (relaci\u00f3n entre velocidad de giro y fuerza contraelectromotriz), que en un motor DC coincide con la constante de par $K_{tr}$. Si introducimos este \u00faltimo hecho en la ecuaci\u00f3n, junto con la relaci\u00f3n de velocidades:\n\n$$\n\\begin{equation}\n V = L\\frac{\\mathrm{d}i}{\\mathrm{d}t} + Ri + nK_{tr}\\omega_g\n\\end{equation}\n$$\n\npodemos deducir que la constante de velocidad equivalente, vista desde la salida de la reductora, es:\n\n$$\n\\begin{equation}\n K_{vg} = nK_{tr} = \\frac{K_{tg}}{\\eta}\n\\end{equation}\n$$\n\n## Identificaci\u00f3n de par\u00e1metros del motor [Pololu 2215](https://www.pololu.com/product/2215)\n\nCon estas ecuaciones, podemos proceder a identificar los par\u00e1metros del motor. El fabricante proporciona, en la hoja de caracter\u00edsticas del motor, unas curvas par-velocidad y par-corriente en r\u00e9gimen permanente para una tensi\u00f3n nominal de 6V, medidas en el eje de salida de la reductora. A partir de estas curvas, podemos obtener la mayor\u00eda de los par\u00e1metros del motor. 
Como los ensayos se han realizado en r\u00e9gimen permanente, los efectos de la inductancia $L$ y el momento de inercia equivalente del rotor $I_g$ no aparecen en los resultados, y por lo tanto no se pueden identificar. Si se eliminan estas dos constantes, las ecuaciones que sirven para extraer los par\u00e1metros de las curvas del fabricante ser\u00e1n:\n\n$$\n\\begin{align}\n \\tau_g &= K_{tg}i - b_g\\omega_g \\\\\n 6 &= Ri + K_{vg}\\omega_g\n\\end{align}\n$$\n\nEn estas ecuaciones aparecen las cuatro constantes que s\u00ed podremos identificar a partir de las curvas: $K_{tg}$, $K_{vg}$, $b_g$ y $R$.\n\nLas curvas par-velocidad y par-corriente correspondientes a los ensayos del fabricante para el modelo 2215 son las siguientes:\n\n$$\n\\begin{align}\n \\omega_g &= 410 - 32\\tau_g \\\\\n i &= 0.073 + 0.11\\tau_g\n\\end{align}\n$$\n\ndonde el par est\u00e1 en kg\u00b7mm, la velocidad en rpm, y la corriente en A. En estas dos ecuaciones hay cuatro constantes, as\u00ed que si se cogen las dos ecuaciones del motor, y se expresan tambi\u00e9n en forma par-velocidad y par-corriente, se pueden identificar los cuatro par\u00e1metros, que deber\u00e1n ser convertidos posteriormente a unidades del SI.\n\nLas dos constantes que faltan se pueden obtener a partir de ensayos din\u00e1micos, midiendo la respuesta a una entrada de voltaje determinada con una carga de inercia conocida. Respecto a la inductancia $L$, a partir de aqu\u00ed ignoraremos su efecto, dado que la din\u00e1mica de la parte el\u00e9ctrica es mucho m\u00e1s r\u00e1pida que la de la parte mec\u00e1nica, y no tiene un efecto relevante en el control. Para la inercia equivalente del rotor $I_g$ se usar\u00e1 el siguiente valor, estimado a partir de un ensayo b\u00e1sico:\n\n$$\n I_g = 5\u00b710^{-5} \\mathrm{kg\u00b7m}^2\n$$\n\n## Efecto de la reductora adicional del robot\n\nEl robot que vamos a controlar tiene una reductora adicional de relaci\u00f3n $n_r$ 41:25, as\u00ed que si queremos obtener un modelo del sistema motor-reductora completo, deberemos aplicar una segunda conversi\u00f3n a los par\u00e1metros obtenidos para la primera reductora. En este caso, consideraremos que las p\u00e9rdidas mec\u00e1nicas de la segunda reductora son despreciables frente a las de la primera, que tiene una relaci\u00f3n de velocidades mucho m\u00e1s elevada (38437:507):\n\n$$\n\\begin{equation}\n K_{tw} = n_rK_{tg} \\qquad\n K_{vw} = n_rK_{vg} \\qquad\n b_w = n_r^2b_g \\qquad\n I_w = n_r^2I_g\n\\end{equation}\n$$\n\nDe este modo, las ecuaciones del motor, considerando el par $\\tau_w$ y la velocidad $\\omega_w$ ya en la rueda, ser\u00e1n:\n\n$$\n\\begin{align}\n \\tau_w &= K_{tw}i - I_w\\dot\\omega_w - b_w\\omega_w \\\\\n V &= Ri + K_{vw}\\omega_w\n\\end{align}\n$$\n\n## Introducci\u00f3n del segundo motor\n\nHasta ahora hemos considerado un s\u00f3lo motor, pero nuestro robot tiene dos ruedas, con un motor independiente en cada una. Para evitar tener que desarrollar un modelo completo en tres dimensiones, en la primera fase de dise\u00f1o del controlador se simplificar\u00e1 el sistema, trat\u00e1ndolo como un modelo plano. Eso implica que el robot se mover\u00e1 en l\u00ednea recta, con las dos ruedas girando siempre a la misma velocidad $\\omega_m$, mientras ambos motores reciben el mismo voltaje de entrada. Bajo estos supuestos, se puede considerar que el par total $\\tau_m$ ser\u00e1 dos veces el par de una sola rueda $\\tau_w$. 
Como los motores van conectados en paralelo a la tensi\u00f3n de entrada $V$, la segunda ecuaci\u00f3n no cambia:\n\n$$\n\\begin{align}\n \\tau_m &= 2K_{tw}i - 2I_w\\dot\\omega_m - 2b_w\\omega_m \\\\\n V &= Ri + K_{vw}\\omega_m\n\\end{align}\n$$\n\nEn estas ecuaciones se pueden identificar los par\u00e1metros correspondientes al modelo completo, que ser\u00e1 el utilizado para el dise\u00f1o del controlador. Para mayor comodidad, se muestran ya los valores en funci\u00f3n de los par\u00e1metros identificados previamente a la salida de la primera reductora:\n\n$$\n\\begin{equation}\n K_t = 2n_rK_{tg} \\qquad\n K_v = n_rK_{vg} \\qquad\n b = 2n_r^2b_g \\qquad\n I_m = 2n_r^2I_g\n\\end{equation}\n$$\n\nAs\u00ed, las ecuaciones definitivas que usaremos para dise\u00f1ar el controlador resultan:\n\n$$\n\\begin{align}\n \\tau_m &= K_ti - I_m\\dot\\omega_m - b\\omega_m \\\\\n V &= Ri + K_v\\omega_m\n\\end{align}\n$$\n\n## Funcionamiento como generador\n\nTodo lo desarollado hasta ahora se basa en la suposici\u00f3n de que el motor produce una potencia positiva, que es transmitida a la rueda a trav\u00e9s de la reductora. Pero cuando el motor est\u00e1 frenando, la potencia fluye en sentido contrario, de la rueda al motor. En esta situaci\u00f3n, lo que antes era la salida de la reductora pasa a ser la entrada, y esto implica que las p\u00e9rdidas de par se producir\u00e1n en el lado opuesto. En general, en funci\u00f3n del signo de la potencia mec\u00e1nica, se cumplir\u00e1 que:\n\n$$\n\\begin{equation}\n \\left\\{\n \\begin{matrix}\n \\tau_g = \\eta n\\tau_r && \\dot W > 0 \\\\\n \\eta\\tau_g = n\\tau_r && \\dot W < 0\n \\end{matrix}\n \\right.\n\\end{equation}\n$$\n\nEsto quiere decir que los par\u00e1metros del sistema ser\u00e1n diferentes dependiendo de si el motor est\u00e1 trabajando como generador o como motor. Es f\u00e1cil demostrar que los par\u00e1metros del conjunto motor-reductora, cuando trabaja en modo generador, pasan a ser:\n\n$$\n\\begin{equation}\n K_t^G = \\frac{K_t}{\\eta^2} \\qquad\n K_v^G = K_v \\qquad\n b^G = \\frac{b}{\\eta^2} \\qquad\n I_m^G = \\frac{I_m}{\\eta^2}\n\\end{equation}\n$$\n\ndonde el rendimiento de la primera reductora $\\eta$ se puede obtener f\u00e1cilmente dividiendo $K_{tg}$ entre $K_{vg}$.\n\nSi los par\u00e1metros del modelo son diferentes en cada caso, lo correcto ser\u00eda dise\u00f1ar dos controladores, uno para modo motor y otro para modo generador. Durante el funcionamiento, habr\u00eda que detectar qu\u00e9 est\u00e1 haciendo el motor en cada momento, y aplicar las ganancias del controlador correspondiente. Para conocer el signo de la potencia mec\u00e1nica, basta con comparar el signo del par $\\tau_m$ con el de la velocidad de giro $\\omega_m$. Para poder estimar el par sin tener que medir la corriente, se puede despejar \u00e9sta en una ecuaci\u00f3n y sustituir el resultado en la otra:\n\n$$\n\\begin{equation}\n \\tau_m = \\frac{K_t}{R}V - \\left(\\frac{K_tK_v}{R} + b\\right)\\omega_m - I_m\\dot\\omega_m\n\\end{equation}\n$$\n\nSi hacemos lo mismo en modo generador, sustituyendo los valores correspondientes de las constantes, se demuestra que:\n\n$$\n\\begin{equation}\n \\tau_m^G = \\frac{\\tau_m}{\\eta^2}\n\\end{equation}\n$$\n\nEn nuestra aplicaci\u00f3n, el efecto del rendimiento de la reductora no es muy relevante, as\u00ed que no merece la pena todo el trabajo de dise\u00f1ar dos controladores e implementar en el microcontrolador la detecci\u00f3n del modo de funcionamiento. 
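Como comprobaci\u00f3n r\u00e1pida, el paso de los par\u00e1metros de modo motor a modo generador se puede tabular con unas pocas l\u00edneas de Python (esbozo ilustrativo: los valores num\u00e9ricos de $K_t$, $K_v$, $b$ e $I_m$ son supuestos, a modo de ejemplo, no los identificados para este robot):\n\n\n```python\n# Esbozo ilustrativo con valores supuestos (no identificados)\nK_t = 0.88   # N*m/A, constante de par equivalente (supuesta)\nK_v = 0.55   # V*s/rad, constante de velocidad equivalente (supuesta)\nb = 1.0e-3   # N*m*s/rad, rozamiento viscoso equivalente (supuesto)\nI_m = 1.0e-4 # kg*m^2, inercia equivalente (supuesta)\n\n# Rendimiento de la primera reductora: eta = K_tg/K_vg = K_t/(2*K_v)\neta = K_t/(2*K_v)\n\n# Par\u00e1metros equivalentes en modo generador (K_v no cambia)\nK_t_G = K_t/eta**2\nb_G = b/eta**2\nI_m_G = I_m/eta**2\n\nprint('eta =', round(eta, 3))\nprint('K_t =', K_t, '->', round(K_t_G, 4))\nprint('b =', b, '->', round(b_G, 6))\nprint('I_m =', I_m, '->', round(I_m_G, 6))\n```\n\n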
El controlador LQR es bastante robusto frente a errores en los par\u00e1metros, y se puede comprobar que las ganancias en modo motor y en modo generador son muy parecidas, as\u00ed que se podr\u00e1n utilizar los par\u00e1metros originales sin problema o, si se quiere afinar algo m\u00e1s, un valor promedio entre modo motor y modo generador.\n", "meta": {"hexsha": "92b03d4b47c6aa788995645bf989e9f302ed40ec", "size": 12020, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Motor.ipynb", "max_stars_repo_name": "ulugris/balboa", "max_stars_repo_head_hexsha": "97839ca001baeb27a33d16c1c0a0c4f22f36b60c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Motor.ipynb", "max_issues_repo_name": "ulugris/balboa", "max_issues_repo_head_hexsha": "97839ca001baeb27a33d16c1c0a0c4f22f36b60c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Motor.ipynb", "max_forks_repo_name": "ulugris/balboa", "max_forks_repo_head_hexsha": "97839ca001baeb27a33d16c1c0a0c4f22f36b60c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 49.8755186722, "max_line_length": 720, "alphanum_fraction": 0.6258735441, "converted": true, "num_tokens": 2872, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9433475794701961, "lm_q2_score": 0.8705972801594706, "lm_q1q2_score": 0.8212758369317728}} {"text": "```python\nimport sympy as sm\n```\n\n\n```python\nsm.init_printing()\n```\n\n# Symbols and Functions\n\n\n```python\na, b, th, gamma, x, t, y, z = sm.symbols('a, b, theta, gamma, x, t, y, z')\n```\n\n\n```python\n%whos\n```\n\n\n```python\na, b, th, gamma, x, t, y, z\n```\n\n\n```python\nf = sm.Function('f')\n```\n\n\n```python\nf(t)\n```\n\n\n```python\nf(x, y, z)\n```\n\n\n```python\na1, a2, a3 = sm.symbols('a1, a2, a3')\n```\n\n\n```python\na1, a2, a3\n```\n\n# Expressions\n\n\n```python\nexpr1 = a + b - x\nexpr1\n```\n\n\n```python\nexpr2 = f(t) + 2*f(x, y, z) + a/b\nexpr2\n```\n\n\n```python\nexpr3 = sm.sin(f(t)) - sm.tan(a/b)/sm.log(gamma)\nexpr3\n```\n\n# Printing\n\n\n```python\nprint(expr3)\n```\n\n\n```python\nrepr(expr3)\n```\n\n\n```python\nsm.srepr(expr1)\n```\n\n\n```python\nsm.pprint(expr3)\n```\n\n\n```python\nprint(sm.latex(expr3))\n```\n\n\n```python\nsm.ccode(expr1)\n```\n\n\n```python\nprint(sm.octave_code(expr3))\n```\n\n# Derivatives\n\n\n```python\nexpr3\n```\n\n\n```python\nsm.diff(expr3, a)\n```\n\n\n```python\npart1 = sm.diff(expr3, a)\n```\n\n\n```python\npart2 = sm.diff(part1, b)\n```\n\n\n```python\npart2\n```\n\n\n```python\nexpr3.diff(a)\n```\n\n\n```python\nexpr3.diff(t)\n```\n\n\n```python\nexpr3.diff(t, 2)\n```\n\n\n```python\nexpr3.diff(t).diff(t)\n```\n\n# Numerical Evaluation\n\n\n```python\nexpr1\n```\n\n\n```python\nrepl = {a: 5, b: -38, x: 102}\nrepl\n```\n\n\n```python\nexpr1.subs(repl)\n```\n\n\n```python\nexpr1.xreplace(repl)\n```\n\n\n```python\ntype(expr1.subs(repl))\n```\n\n\n```python\ntype(-135)\n```\n\n\n```python\ntype(int(expr1.subs(repl)))\n```\n\n\n```python\nexpr4 = sm.pi/4 + sm.sin(x*y)\nexpr4\n```\n\n\n```python\nexpr4.xreplace({x: 12, y: 24})\n```\n\n\n```python\nexpr4.evalf()\n```\n\n\n```python\nexpr4.evalf(subs={x: 12, 
y:24})\n```\n\n\n```python\ntype(expr4.evalf(subs={x: 12, y:24}))\n```\n\n\n```python\ntype(float(expr4.evalf(subs={x: 12, y:24})))\n```\n\n\n```python\nexpr4.evalf(subs={x: 12, y:24}, n=1000)\n```\n\n\n```python\nexpr1\n```\n\n\n```python\neval_expr1 = sm.lambdify((a, b, x), expr1)\n```\n\n\n```python\neval_expr1(12.0, 34.3, -2.0)\n```\n\n\n```python\ntype(eval_expr1(12.0, 34.3, -2.0))\n```\n\n# Matrices & Linear Algebra\n\n\n```python\nmat1 = sm.Matrix([[1, 2], [3, 4]])\nmat1\n```\n\n\n```python\nmat1.shape\n```\n\n\n```python\nmat1.det()\n```\n\n\n```python\nmat2 = sm.Matrix([[expr1, expr2], [expr3, expr4]])\nmat2\n```\n\n\n```python\nmat2.diff(t)\n```\n\n\n```python\nmat1 + mat2\n```\n\n\n```python\nmat1 * mat2\n```\n\n\n```python\nsm.hadamard_product(mat1, mat2)\n```\n\n\n```python\nmat1**2\n```\n\n\n```python\nmat1 * mat1\n```\n\n\n```python\nsm.eye(5)\n```\n\n\n```python\nsm.zeros(2,4)\n```\n\n# Linear systems\n\n\n```python\nlin_expr_1 = a*x + b**2*y + sm.sin(gamma)*z\nlin_expr_1\n```\n\n\n```python\nlin_expr_2 = sm.sin(f(t))*x + sm.log(f(t))*z\nlin_expr_2\n```\n\n\n```python\nsm.Eq(lin_expr_1, 0)\n```\n\n\n```python\nsm.Eq(lin_expr_2, 0)\n```\n\n\n```python\nres = sm.solve([lin_expr_1, lin_expr_2], x, z, dict=True)\nres\n```\n\n\n```python\nres_dict = res[0]\nres_dict\n```\n\n\n```python\nsm.Eq(x, res_dict[x])\n```\n\n\n```python\nlin_mat_exprs = sm.Matrix([lin_expr_1, lin_expr_2])\nlin_mat_exprs\n```\n\n\n```python\nA = lin_mat_exprs.jacobian([x, z])\nA\n```\n\n\n```python\nb = -lin_mat_exprs.xreplace({x: 0, z: 0})\nb\n```\n\n\n```python\nA.LUsolve(b)\n```\n\n# Simplification\n\n\n```python\nsm.simplify(A.LUsolve(b))\n```\n\n\n```python\nsm.cos(gamma)**2 + sm.sin(gamma)**2\n```\n\n\n```python\nsm.trigsimp(sm.cos(gamma)**2 + sm.sin(gamma)**2)\n```\n\n\n```python\nsub_exprs, simp_expr = sm.cse(A.LUsolve(b).diff(t))\n```\n\n\n```python\nsimp_expr\n```\n\n\n```python\nsub_exprs\n```\n", "meta": {"hexsha": "8bdb6837eca6569b4b95758f877201d7d731cfa2", "size": 14515, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "content/notebooks/sympy.ipynb", "max_stars_repo_name": "moorepants/me41055", "max_stars_repo_head_hexsha": "5c941d28f6f61fa1a02fbda4772d88010014eec0", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "content/notebooks/sympy.ipynb", "max_issues_repo_name": "moorepants/me41055", "max_issues_repo_head_hexsha": "5c941d28f6f61fa1a02fbda4772d88010014eec0", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 42, "max_issues_repo_issues_event_min_datetime": "2022-01-07T18:05:36.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-22T15:15:01.000Z", "max_forks_repo_path": "content/notebooks/sympy.ipynb", "max_forks_repo_name": "moorepants/me41055", "max_forks_repo_head_hexsha": "5c941d28f6f61fa1a02fbda4772d88010014eec0", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-01-24T17:18:40.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-24T17:18:40.000Z", "avg_line_length": 16.7997685185, "max_line_length": 80, "alphanum_fraction": 0.4664829487, "converted": true, "num_tokens": 1269, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9433475762847495, "lm_q2_score": 0.870597273444551, "lm_q1q2_score": 0.8212758278240285}} {"text": "# Contravariant & Covariant indices in Tensors (Symbolic)\n\n\n```python\nimport sympy\nfrom einsteinpy.symbolic import ChristoffelSymbols, RiemannCurvatureTensor\nfrom einsteinpy.symbolic.predefined import Schwarzschild\n\nsympy.init_printing()\n```\n\n### Analysing the schwarzschild metric along with performing various operations\n\n\n```python\nsch = Schwarzschild()\nsch.tensor()\n```\n\n\n```python\nsch_inv = sch.inv()\nsch_inv.tensor()\n```\n\n\n```python\nsch.order\n```\n\n\n```python\nsch.config\n```\n\n\n\n\n 'll'\n\n\n\n### Obtaining Christoffel Symbols from Metric Tensor\n\n\n```python\nchr = ChristoffelSymbols.from_metric(sch_inv) # can be initialized from sch also\nchr.tensor()\n```\n\n\n```python\nchr.config\n```\n\n\n\n\n 'ull'\n\n\n\n### Changing the first index to covariant\n\n\n```python\nnew_chr = chr.change_config('lll') # changing the configuration to (covariant, covariant, covariant)\nnew_chr.tensor()\n```\n\n\n```python\nnew_chr.config\n```\n\n\n\n\n 'lll'\n\n\n\n### Any arbitary index configuration would also work!\n\n\n```python\nnew_chr2 = new_chr.change_config('lul')\nnew_chr2.tensor()\n```\n\n### Obtaining Riemann Tensor from Christoffel Symbols and manipulating it's indices\n\n\n```python\nrm = RiemannCurvatureTensor.from_christoffels(new_chr2)\nrm[0,0,:,:]\n```\n\n\n```python\nrm.config\n```\n\n\n\n\n 'ulll'\n\n\n\n\n```python\nrm2 = rm.change_config(\"uuuu\")\nrm2[0,0,:,:]\n```\n\n\n```python\nrm3 = rm2.change_config(\"lulu\")\nrm3[0,0,:,:]\n```\n\n\n```python\nrm4 = rm3.change_config(\"ulll\")\nrm4.simplify()\nrm4[0,0,:,:]\n```\n\n#### It is seen that `rm` and `rm4` are same as they have the same configuration\n", "meta": {"hexsha": "304357d29cfc14afc534de66f60cbdbd454b3ed6", "size": 128230, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "docs/source/examples/Playing with Contravariant and Covariant Indices in Tensors(Symbolic).ipynb", "max_stars_repo_name": "iamhardikat11/einsteinpy", "max_stars_repo_head_hexsha": "7bf0ca0020b273e616b6e7c19aed7a5e13925444", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 485, "max_stars_repo_stars_event_min_datetime": "2019-02-04T09:15:22.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-19T13:50:17.000Z", "max_issues_repo_path": "docs/source/examples/Playing with Contravariant and Covariant Indices in Tensors(Symbolic).ipynb", "max_issues_repo_name": "iamhardikat11/einsteinpy", "max_issues_repo_head_hexsha": "7bf0ca0020b273e616b6e7c19aed7a5e13925444", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 570, "max_issues_repo_issues_event_min_datetime": "2019-02-02T10:57:27.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-26T16:37:05.000Z", "max_forks_repo_path": "docs/source/examples/Playing with Contravariant and Covariant Indices in Tensors(Symbolic).ipynb", "max_forks_repo_name": "iamhardikat11/einsteinpy", "max_forks_repo_head_hexsha": "7bf0ca0020b273e616b6e7c19aed7a5e13925444", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 250, "max_forks_repo_forks_event_min_datetime": "2019-01-30T14:14:14.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-28T21:18:18.000Z", "avg_line_length": 196.9738863287, "max_line_length": 35672, "alphanum_fraction": 0.8107307182, "converted": true, "num_tokens": 426, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9511422186079557, "lm_q2_score": 0.8633916047011594, "lm_q1q2_score": 0.8212082064229438}} {"text": "# 15.3. Analyzing real-valued functions\n\n\n```python\nfrom sympy import *\ninit_printing()\n```\n\n\n```python\nvar('x z')\n```\n\n\n```python\nf = 1 / (1 + x**2)\n```\n\n\n```python\nf.subs(x, 1)\n```\n\n\n```python\ndiff(f, x)\n```\n\n\n```python\nlimit(f, x, oo)\n```\n\n\n```python\nseries(f, x0=0, n=9)\n```\n\n\n```python\nintegrate(f, (x, -oo, oo))\n```\n\n\n```python\nintegrate(f, x)\n```\n\n\n```python\nfourier_transform(f, x, z)\n```\n", "meta": {"hexsha": "b49546fb88f6fedd31bb74d025daf8d036fedfae", "size": 2430, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "001-Jupyter/001-Tutorials/002-IPython-Cookbook/chapter15_symbolic/03_function.ipynb", "max_stars_repo_name": "jhgoebbert/jupyter-jsc-notebooks", "max_stars_repo_head_hexsha": "bcd08ced04db00e7a66473b146f8f31f2e657539", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "001-Jupyter/001-Tutorials/002-IPython-Cookbook/chapter15_symbolic/03_function.ipynb", "max_issues_repo_name": "jhgoebbert/jupyter-jsc-notebooks", "max_issues_repo_head_hexsha": "bcd08ced04db00e7a66473b146f8f31f2e657539", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "001-Jupyter/001-Tutorials/002-IPython-Cookbook/chapter15_symbolic/03_function.ipynb", "max_forks_repo_name": "jhgoebbert/jupyter-jsc-notebooks", "max_forks_repo_head_hexsha": "bcd08ced04db00e7a66473b146f8f31f2e657539", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-01-13T18:49:12.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-13T18:49:12.000Z", "avg_line_length": 15.5769230769, "max_line_length": 45, "alphanum_fraction": 0.4559670782, "converted": true, "num_tokens": 148, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9572778048911612, "lm_q2_score": 0.8577680977182187, "lm_q1q2_score": 0.8211223616893635}} {"text": "## Visualizing a space curve\n\n**Vector Functions on Intervals**\n\nWhen a particle moves through space during a time interval $I$, we think of the particle's\ncoordinates as functions defined on $I$:\n\n\\begin{equation}\n x = f(t), \\ y = g(t),\\ z = h(t) \\ \\ \\ \\mbox{for}\\ \\ t\\in I = [a, b].\n\\end{equation}\n\n The points $(x, y, z) = (f(t), g(t), h(t))$ make up the curve in space that we call a\n **particle's path**.\n \n \n\nA curve in space can also be represented in *vector form*:\n\\begin{equation}\n \\vec{r}(t) = f(t)\\vec{i}+g(t)\\vec{j}+h(t)\\vec{k} = \n \\end{equation}\nwhere $\\vec{i}, \\vec{j}$, and $\\vec{k}$ are *standard unit vectors*.\n\nEquation (2) defines $\\vec{r}$ as a vector function of the real variable $t$ \non the interval $I$. More\ngenerally, a **vector-valued function** or **vector function** on a domain set $D$\nis a rule that\nassigns a vector in space to each element in $D$. For now, the domains will be intervals of\nreal numbers resulting in a *space curve*. Domains can be regions\nin the plane. Vector functions will then represent surfaces in space. 
\nVector functions on a\ndomain in the plane or space also give rise to *vector fields*, \nwhich are important to the\nstudy of the flow of a fluid, gravitational fields, and electromagnetic phenomena.\n \n\nWe want to visualize vector functions on intervals using the Python package `Sympy`.\n\n\n```python\n# Import necessary packages\n%matplotlib inline\nfrom sympy import symbols, cos, sin\nfrom sympy.plotting import plot3d_parametric_line as ppl3\nfrom sympy.plotting import plot_parametric as ppl\nimport numpy as np\n\nt = symbols('t')\n```\n\n**Example 1** Visualize $\\vec{r}(t) = \\langle t^2-3t, t+2 \\rangle$ for $t\\in [-2\\pi, 3\\pi]$.\n\n\n```python\nppl(t**2-3*t, t+2, (t, -2*np.pi, 3*np.pi))\n```\n\n**Example 2** Visualize $\\vec{r}(t) = \\cos t\\,\\vec{i} + \\sin t\\,\\vec{j} + t\\,\\vec{k}$ for \n$t\\in [0, 6\\pi]$. \n\n\n```python\nppl3(cos(t), sin(t), t, (t, 0, 6*np.pi))\n```\n\n**Exercises**\n\nVisualize vector functions.\n\n1. $\\vec{r}(t) = (\\sin 3t)(\\cos t)\\vec{i} + (\\sin 3t)(\\sin t)\\vec{j} +t\\vec{k}$ \n2. $\\vec{r}(t) = (\\cos t)\\vec{i} + (\\sin t)\\vec{j} + (\\sin 2t)\\vec{k}$\n3. $\\vec{r}(t) = (4+\\sin 20t)(\\cos t)\\vec{i} + (4+\\sin 20t)(\\sin t)\\vec{j} + (\\cos 20t)\\vec{k}$\n\n\n```python\n\n```\n", "meta": {"hexsha": "fdc38266a0d14d3ef24b37267eb6e8978ab5abd0", "size": 78189, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "T1_2_visualizing_space_curves.ipynb", "max_stars_repo_name": "bkimo/Multivariable_Calculus_with_Python", "max_stars_repo_head_hexsha": "a6225d47d7246557efb54caeaeecfa55c9560856", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "T1_2_visualizing_space_curves.ipynb", "max_issues_repo_name": "bkimo/Multivariable_Calculus_with_Python", "max_issues_repo_head_hexsha": "a6225d47d7246557efb54caeaeecfa55c9560856", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "T1_2_visualizing_space_curves.ipynb", "max_forks_repo_name": "bkimo/Multivariable_Calculus_with_Python", "max_forks_repo_head_hexsha": "a6225d47d7246557efb54caeaeecfa55c9560856", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 405.1243523316, "max_line_length": 58380, "alphanum_fraction": 0.9398125056, "converted": true, "num_tokens": 758, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9615338035725359, "lm_q2_score": 0.8539127492339909, "lm_q1q2_score": 0.8210659736900403}} {"text": "\n# Demo - Integration of functions\n\n \n**Mikael Mortensen** (email: `mikaem@math.uio.no`), Department of Mathematics, University of Oslo.\n\nDate: **August 7, 2020**\n\n**Summary.** This is a demonstration of how the Python module [shenfun](https://github.com/spectralDNS/shenfun) can be used to\nintegrate over 1D curves and 2D surfaces in 3D space.\nWe make use of\ncurvilinear coordinates, and reproduce some integrals\nperformed by Behnam Hashemi with [Chebfun](http://www.chebfun.org/examples/approx3/SurfaceIntegral3D.html).\n\n**Notice.**\n\nFor all the examples below we could just as well\nuse Legendre polynomials instead of Chebyshev.\nJust replace 'C' with 'L' when creating function spaces.\nThe accuracy ought to be similar.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n## The inner product\n\nA lesser known fact about [shenfun](https://github.com/spectralDNS/shenfun) is\nthat it can be used to perform regular, unweighted, integrals with\nspectral accuracy. With the newly added curvilinear coordinates\nfeature, we can now also integrate over highly complex lines and surfaces\nembedded in a higher dimensional space.\n\nTo integrate over a domain in shenfun we use the\n[inner()](https://shenfun.readthedocs.io/en/latest/shenfun.forms.html#shenfun.forms.inner.inner)\nfunction, with a constant test function. The [inner()](https://shenfun.readthedocs.io/en/latest/shenfun.forms.html#shenfun.forms.inner.inner)\nfunction in shenfun is defined as an integral over the\nentire domain $\\Omega$ in question\n\n$$\n(u, v)_w = \\int_{\\Omega} u \\overline{v} w d\\Omega,\n$$\n\nfor trial function $u$, test function $v$ and weight $w$.\nAlso, $\\overline{v}$ represents the complex conjugate of $v$, in case\nwe are working with complex functions (like Fourier exponentials).\n\nThe functions and weights take on different form, but if\nthe test function $v$ is chosen to be a constant, e.g., $v=1$,\nthen the weight is also constant, $w=1$, and the inner product becomes\nan unweighted integral of $u$ over the domain\n\n$$\n(u, 1)_w = \\int_{\\Omega} u d\\Omega\n$$\n\n### Curve integrals\n\nFor example, if we create some function space on the line from\n0 to 1, then we can get the length of this domain using `inner`\n\n\n```\nfrom shenfun import *\nB = FunctionSpace(10, 'C', domain=(0, 1))\nu = Array(B, val=1)\nlength = inner(u, 1)\nprint('Length of domain =', length)\n```\n\nNote that we cannot simply do `inner(1, 1)`, because the\n`inner` function does not know about the domain, which is part\nof the [FunctionSpace](https://shenfun.readthedocs.io/en/latest/shenfun.forms.html#shenfun.forms.arguments.FunctionSpace). So to integrate `u=1`, we need to\ncreate `u` as an [Array](https://shenfun.readthedocs.io/en/latest/shenfun.forms.html#shenfun.forms.arguments.Array) with the constant value 1.\n\nSince the function space `B` is Cartesian the computed\nlength is simply the domain length.\nNot very impressive, but the same goes for multidimensional\ntensor product domains\n\n\n```\nF = FunctionSpace(10, 'F', domain=(0, 2*np.pi))\nT = TensorProductSpace(comm, (B, F))\narea = inner(1, Array(T, val=1))\nprint('Area of domain =', area)\n```\n\nStill not very impressive, but moving to curvilinear coordinates\nit all starts to become more interesting. 
Lets\nlook at a spiral $C$ embedded in $\\mathbb{R}^3$, parametrized\nby one single parameter $t$\n\n$$\n\\begin{align*}\nx(t) &= \\sin 2t \\\\ \ny(t) &= \\cos 2t \\\\ \nz(t) &= \\frac{t}{2} \\\\ \n0 \\le & t \\le 2\\pi\n\\end{align*}\n$$\n\nWhat is the length of this spiral? The spiral can be\nseen as the red curve in the figure a few cells below.\n\nThe integral over the parametrized curve $C$ can\nbe written as\n\n$$\n\\int_C ds = \\int_{t=0}^{2\\pi} \\sqrt{\\left(\\frac{d x}{d t}\\right)^2 + \\left(\\frac{d y}{d t}\\right)^2 + \\left(\\frac{d z}{d t}\\right)^2} dt.\n$$\n\nWe can find this integral easily using shenfun. Create\na function space in curvilinear coordinates, providing\nthe position vector $\\mathbf{r} = x(t)\\mathbf{i} + y(t) \\mathbf{j} + z(t) \\mathbf{k}$\nas input. Also, choose to work with covariant basis vectors, which\nis really not important unless you work with vector equations. The\nalternative is the default 'normal', where the basis vectors\nare normalized to unit length.\n\n\n```\nimport sympy as sp\nfrom shenfun import *\nconfig['basisvectors'] = 'covariant'\nt = sp.Symbol('x', real=True, positive=True)\nrv = (sp.sin(2*t), sp.cos(2*t), 0.5*t)\nC = FunctionSpace(100, 'C', domain=(0, 2*np.pi), coordinates=((t,), rv))\n```\n\nThen compute the arclength using [inner()](https://shenfun.readthedocs.io/en/latest/shenfun.forms.html#shenfun.forms.inner.inner), again by using a constant\ntestfunction 1, and a constant [Array](https://shenfun.readthedocs.io/en/latest/shenfun.forms.html#shenfun.forms.arguments.Array) `u=1`\n\n\n```\nlength = inner(1, Array(C, val=1))\nprint('Length of spiral =', length)\n```\n\nThe arclength is found to be slightly longer than $4 \\pi$. Looking at the\nspiral below, the result looks reasonable.\n\n\n```\n%matplotlib inline\n\nimport matplotlib.pyplot as plt\nfig = plt.figure(figsize=(4, 3))\nX = C.cartesian_mesh(uniform=True)\nax = fig.add_subplot(111, projection='3d')\np = ax.plot(X[0], X[1], X[2], 'r')\nhx = ax.set_xticks(np.linspace(-1, 1, 5))\nhy = ax.set_yticks(np.linspace(-1, 1, 5))\n```\n\nThe term $\\sqrt{\\left(\\frac{d x}{d t}\\right)^2 + \\left(\\frac{d y}{d t}\\right)^2 + \\left(\\frac{d z}{d t}\\right)^2}$\nis actually here a constant $\\sqrt{4.25}$, found in shenfun as\n\n\n```\nC.coors.sg\n```\n\nWe could also integrate a non-constant function over the spiral.\nFor example, lets integrate the function $f(x, y, z)= \\sin^2 x$\n\n$$\n\\int_C \\sin^2 x ds = \\int_{t=0}^{2\\pi} \\sin^2 (\\sin 2t) \\sqrt{\\left(\\frac{d x}{d t}\\right)^2 + \\left(\\frac{d y}{d t}\\right)^2 + \\left(\\frac{d z}{d t}\\right)^2} dt\n$$\n\n\n```\ninner(1, Array(C, buffer=sp.sin(rv[0])**2))\n```\n\nwhich can be easily verified using, e.g., Wolfram Alpha\n\n\n```\nfrom IPython.display import IFrame\nIFrame(\"https://www.wolframalpha.com/input/?i=integrate+sin%5E2%28sin%282t%29%29+sqrt%284.25%29+from+t%3D0+to+2pi\", width=\"500px\", height=\"350px\")\n```\n\n### Surface integrals\n\nConsider a 3D function $f(x,y,z) \\in \\mathbb{R}^3$ and\na 2D surface (not neccessarily plane) $S(u, v)$,\nparametrized in two new coordinates $u$ and $v$. 
A position\nvector $\\mathbf{r}$ can be used to parametrize $S$\n\n$$\n\\mathbf{r} = x(u, v) \\,\\mathbf{i} + y(u, v) \\,\\mathbf{j} + z(u, v) \\,\\mathbf{k},\n$$\n\nwhere $\\mathbf{i}, \\mathbf{j}, \\mathbf{k}$ are the Cartesian unit vectors.\nThe two new coordinates $u$ and $v$ are functions of $x, y, z$,\nand they each have a one-dimensional domain\n\n$$\nu \\in D_u \\quad v \\in D_v.\n$$\n\nThe exact size of the domain depends on the problem at hand. The computational\ndomain of the surface $S$ is $D=D_u \\times D_v$.\n\nA surface integral of $f$ over $S$ can now be written\n\n$$\n\\int_S f(x, y, z) dS = \\int_D f(x(u, v), y(u, v), z(u, v)) \\left|\\frac{\\partial \\mathbf{r}}{\\partial u} \\times \\frac{\\partial \\mathbf{r}}{\\partial v} \\right| dudv,\n$$\n\nwhere $dS$ is a surface area element. With shenfun such integrals\nare trivial, even for highly complex domains.\n\n## Example 1\n\nConsider first the surface integral of $f(x,y,z)=x^2$\nover the unit sphere. We use regular spherical coordinates,\n\n$$\n\\begin{align*}\n0 &\\le \\theta \\le \\pi \\\\ \n0 &\\le \\phi \\le 2\\pi \\\\ \nx(\\theta, \\phi) &= \\sin \\theta \\cos \\phi \\\\ \ny(\\theta, \\phi) &= \\sin \\theta \\sin \\phi \\\\ \nz(\\theta, \\phi) &= \\cos \\theta\n\\end{align*}\n$$\n\nThe straight forward implementation of a function space for\nthe unit sphere reads\n\n\n```\nimport sympy as sp\n\ntheta, phi = psi =sp.symbols('x,y', real=True, positive=True)\nrv = (sp.sin(theta)*sp.cos(phi), sp.sin(theta)*sp.sin(phi), sp.cos(theta))\n\nB0 = FunctionSpace(0, 'C', domain=(0, np.pi))\nB1 = FunctionSpace(0, 'F', dtype='d')\nT = TensorProductSpace(comm, (B0, B1), coordinates=(psi, rv, sp.Q.positive(sp.sin(theta))))\n```\n\nwhere `sp.Q.positive(sp.sin(theta))` is a restriction that\nhelps `Sympy` in computing the Jacobian required for the integral.\nWe can now approximate the function $f$ on this surface\n\n\n```\nf = Array(T, buffer=rv[0]**2)\n```\n\nand we can integrate over $S$\n\n\n```\nI = inner(1, f)\n```\n\nand finally compare to the exact result, which is $4 \\pi / 3$\n\n\n```\nprint('Error =', abs(I-4*np.pi/3))\n```\n\nNote that we can here achieve better accuracy by using\nmore quadrature points. For example by refining `f`\n\n\n```\nT = T.get_refined(2*np.array(f.global_shape))\nf = Array(T, buffer=rv[0]**2)\nprint('Error =', abs(inner(1, f)-4*np.pi/3))\n```\n\nNot bad at all:-)\n\nTo go a little deeper into the integral, we can get the\nterm $\\left|\\frac{\\partial \\mathbf{r}}{\\partial u} \\times \\frac{\\partial \\mathbf{r}}{\\partial v} \\right|$\nas\n\n\n```\nprint(T.coors.sg)\n```\n\nHere the printed variable is `x`, but this is because `theta`\nis named `x` internally by `Sympy`. This is because of the definition\nused above: `theta, phi = sp.symbols('x,y', real=True, positive=True)`.\n\nNote that $\\mathbf{b}_u = \\frac{\\partial \\mathbf{r}}{\\partial u}$ and\n$\\mathbf{b}_v = \\frac{\\partial \\mathbf{r}}{\\partial v}$ are the two\nbasis vectors used by shenfun for the surface $S$. 
The basis\nvectors are obtainable as `T.coors.b`, and can also be printed\nin latex using:\n\n\n```\nfrom IPython.display import Math\nMath(T.coors.latex_basis_vectors(symbol_names={theta: '\\\\theta', phi: '\\\\phi'}))\n```\n\nwhere we tell latex to print `theta` as $\\theta$, and not `x`:-)\n\nFrom the basis vectors it should be easy to see that $\\left| \\mathbf{b}_{\\theta} \\times \\mathbf{b}_{\\phi} \\right| = \\sin \\theta$.\n\n## Example 2\n\nNext, we solve [Example 5](http://www.math24.net/surface-integrals-of-first-kind.html)\nfrom the online resources at math24.net. Here\n\n$$\nf = \\sqrt{1+x^2+y^2}\n$$\n\nand the surface is defined by\n\n$$\n\\mathbf{r} = u \\cos v \\mathbf{i} + u \\sin v \\mathbf{j} + v \\mathbf{k}\n$$\n\nwith $0 \\le u \\le 2, 0 \\le v \\le 2\\pi$.\n\nThe implementation is only a few lines, and we end by comparing\nto the exact solution $14 \\pi /3$\n\n\n```\nu, v = psi =sp.symbols('x,y', real=True, positive=True)\nrv = (u*sp.cos(v), u*sp.sin(v), v)\nB0 = FunctionSpace(0, 'C', domain=(0, 2))\nB1 = FunctionSpace(0, 'C', domain=(0, np.pi))\nT = TensorProductSpace(comm, (B0, B1), coordinates=(psi, rv))\nf = Array(T, buffer=sp.sqrt(1+rv[0]**2+rv[1]**2))\nprint('Error =', abs(inner(1, f)-14*np.pi/3))\n```\n\nIn this case the integral measure is\n\n\n```\nprint(T.coors.sg)\n```\n\n## Example 3\n\nIn this third example we use a surface that\nlooks like a seashell. Again, the example is taken from\n[chebfun](http://www.chebfun.org/examples/approx3/SurfaceIntegral3D.html).\n\nThe surface of the seashell is parametrized with position\nvector\n\n$$\n\\begin{align*}\n\\mathbf{r} &= \\left(\\left(\\frac{5}{4}-\\frac{5 v}{8 \\pi}\\right) \\cos 2v(1+\\cos u) + \\cos 2v \\right) \\mathbf{i} \\\\ \n &+\\left(\\left(\\frac{5}{4}-\\frac{5 v}{8 \\pi}\\right) \\sin 2v (1+\\cos u) + \\sin 2v \\right) \\mathbf{j},\\\\ \n &+\\left(\\frac{10 v}{2 \\pi} + \\left(\\frac{5}{4}-\\frac{5 v}{8 \\pi}\\right) \\sin u + 15\\right) \\mathbf{k}\n\\end{align*}\n$$\n\nfor $0 \\le u \\le 2 \\pi, -2 \\pi \\le v \\le 2 \\pi$.\n\nThe function $f$ is now defined as\n\n$$\nf(x,y,z) = x+y+z\n$$\n\nThe implementation is\n\n\n```\nrv = (5*(1-v/(2*sp.pi))*sp.cos(2*v)*(1+sp.cos(u))/4 + sp.cos(2*v),\n 5*(1-v/(2*sp.pi))*sp.sin(2*v)*(1+sp.cos(u))/4 + sp.sin(2*v),\n 10*v/(2*sp.pi) + 5*(1-v/(2*sp.pi))*sp.sin(u)/4 + 15)\n\nB0 = FunctionSpace(100, 'C', domain=(0, 2*np.pi))\nB1 = FunctionSpace(100, 'C', domain=(-2*np.pi, 2*np.pi))\nT = TensorProductSpace(comm, (B0, B1), coordinates=(psi, rv, sp.Q.positive(v-2*sp.pi)))\n\nf = rv[0]+rv[1]+rv[2]\nfb = Array(T, buffer=f)\nI = inner(1, fb)\nprint(I)\n```\n\nwhich agrees very well with chebfun's result. The basis vectors\nfor the surface of the seashell are\n\n\n```\nMath(T.coors.latex_basis_vectors(symbol_names={u: 'u', v: 'v'}))\n```\n\nwhich, if nothing else, shows the power of symbolic\ncomputing in Sympy.\n\nWe can plot the\nseashell using either plotly or mayavi. 
Here we choose\nplotly since it integrates well with the executable\njupyter book.\n\n\n```\nimport plotly\nfig = surf3D(fb, colorscale=plotly.colors.sequential.Jet)\nfig.update_layout(scene_camera_eye=dict(x=1.6, y=-1.4, z=0))\nfig.show()\n```\n\n\n", "meta": {"hexsha": "3dbfe3eee9792ebfb2cba49843ae7f9d9ece92fe", "size": 21975, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "content/surfaceintegration.ipynb", "max_stars_repo_name": "mikaem/shenfun-demos", "max_stars_repo_head_hexsha": "c2ad13d62866e0812068673fdb6a7ef68ecfb7f2", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "content/surfaceintegration.ipynb", "max_issues_repo_name": "mikaem/shenfun-demos", "max_issues_repo_head_hexsha": "c2ad13d62866e0812068673fdb6a7ef68ecfb7f2", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-09-21T16:10:01.000Z", "max_issues_repo_issues_event_max_datetime": "2021-09-21T16:10:01.000Z", "max_forks_repo_path": "content/surfaceintegration.ipynb", "max_forks_repo_name": "mikaem/shenfun-demos", "max_forks_repo_head_hexsha": "c2ad13d62866e0812068673fdb6a7ef68ecfb7f2", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 27.1967821782, "max_line_length": 187, "alphanum_fraction": 0.533105802, "converted": true, "num_tokens": 3927, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9161096181702032, "lm_q2_score": 0.8962513821399044, "lm_q1q2_score": 0.8210645114767047}} {"text": "sergazy.nurbavliyev@gmail.com \u00a9 2021\n\n## Which game would you choose \n\nQuestion: Assume you and your friend decided to play the game in a casino. There are two games that you can play. \nGame 1. You roll two uniform fair dice. You can get the amount of dollars that is equivalent to the product of the numbers shown on both dies. \nGame 2. You roll one uniform fair die. You will get the dollar amount of the square of the value shown on the die. Which game will you choose? To be more clear: Which game has the higher expected value?\n\n## Answer\n\nWithout calculating the real expected values. (This part is left to the reader.) We can stil answer the question.\n \nLet $X$ be the random variable that shows the outcome of the uniform fair die. Then the game 1 is asking to find $\\mathbb{E}[X]*\\mathbb{E}[X]$. In other words, the product of the expectation of the two independent rolls. However the game 2 is asking to find $\\mathbb{E}[X^2]$. In other words the expectation of the square of the single roll. Remember the definition of the variance. We know it is always non-negative. That is \n$Var(X)\\geq 0$.\n\\begin{equation}\nVar(X)=\\mathbb{E}[X^2]-\\mathbb{E}[X]^2\\geq 0\n\\end{equation}\nThis implies that $\\mathbb{E}[X^2]\\geq \\mathbb{E}[X]^2$. Indeed, $\\mathbb{E}[X^2]> \\mathbb{E}[X]^2$ unless the two games are exactly the same. 
This implies that the second game has a higher expected value and we should choose that one.\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n## Python code for exact value\n\n\n```python\nimport sympy as S\nfrom sympy.stats import E, Die,variance\nx=Die('D1',6)\ny=Die('D2',6)\n```\n\n\n```python\nE(x),E(x**2),variance(x)\n```\n\n\n\n\n (7/2, 91/6, 35/12)\n\n\n\n\n```python\nE(y),E(y**2),variance(y)\n```\n\n\n\n\n (7/2, 91/6, 35/12)\n\n\n\n\n```python\nz =x*y\n```\n\n\n```python\nE(z),E(z**2),variance(z)\n```\n\n\n\n\n (49/4, 8281/36, 11515/144)\n\n\n\n\n```python\nE(z)**2>> curve = bezier.Curve(nodes, degree=2)\n>>> curve\n```\n", "meta": {"hexsha": "2f73086861205824fe3a7330321d81062c4d6b1b", "size": 220264, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Applied_Alg_sem_5_Interpolation.ipynb", "max_stars_repo_name": "GalinaZh/Appl_alg2021", "max_stars_repo_head_hexsha": "09761b56eb2bdfee4cd5f12cd96562ca146fcb15", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Applied_Alg_sem_5_Interpolation.ipynb", "max_issues_repo_name": "GalinaZh/Appl_alg2021", "max_issues_repo_head_hexsha": "09761b56eb2bdfee4cd5f12cd96562ca146fcb15", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Applied_Alg_sem_5_Interpolation.ipynb", "max_forks_repo_name": "GalinaZh/Appl_alg2021", "max_forks_repo_head_hexsha": "09761b56eb2bdfee4cd5f12cd96562ca146fcb15", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 162.1973490427, "max_line_length": 21992, "alphanum_fraction": 0.8833853921, "converted": true, "num_tokens": 7099, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9304582574225517, "lm_q2_score": 0.8824278556326344, "lm_q1q2_score": 0.82106228485306}} {"text": "# __Fundamentos de programaci\u00f3n__\n\nHecho por: Juan David Arg\u00fcello Plata\n\n## __1. Variables__\n\nUna variable es el nombre con el que se identifica informaci\u00f3n de inter\u00e9s.\n\n```\nnom_variable = contenido\n```\n\nEl contenido de una variable puede cambiar de naturaleza; por eso se dice que Python es un lenguaje din\u00e1mico.\n\n### __1.1. Naturaleza de las variables__\n\n| Naturaleza | Ejemplo |\n|----------|---|\n| Num\u00e9rico | `x = 5` |\n| Textual | `text = 'Esta es una frase'` |\n| Lista | `lista = [0,1,2,\"texto\"]` |\n| Tupla | `tupla = (0,1,2,\"texto\")` |\n| Diccionario | `dic = {\"num\":5, \"text\": \"hola\"}` |\n\n### __1.2. 
Variable num\u00e9rica__\n\nLa forma en como se define una variable num\u00e9rica y el tipo de operaciones b\u00e1sicas que se pueden emplear con ellas se muestra a continuaci\u00f3n.\n\n\n```python\n#Declarar una variable num\u00e9rica es igual que en el \u00e1lgebra...\nx = 1\nprint(x)\n```\n\n\n```python\nx = 5\nw = 10\nz = 20\nprint(\"x = \", x, \", w = \", w, \", z = \", z) #Podemos ser m\u00e1s espec\u00edficos a la hora de imprimir informaci\u00f3n\n```\n\nTambi\u00e9n se pueden hacer operaciones matem\u00e1ticas, pero _cuidado_: es importante escribir bien las ecuaciones.\n\nSi se quisiera resolver:\n\n$$\n\\begin{equation}\n y = \\frac{x}{w \\, z}\n\\end{equation}\n$$\n\nSe debe escribir el algoritmo as\u00ed:\n\n\n```python\ny = x/(w*z)\nprint(y)\n```\n\nPorque si se escribe y ejecuta as\u00ed:\n\n\n```python\ny = x/w*z\nprint(y)\n```\n\nSe estar\u00eda realmente resolviendo:\n\n$$\n\\begin{equation}\n y = \\frac{x}{w} z\n\\end{equation}\n$$\n\n

__Ejercicio:__

    \n\nResuelve la siguiente ecuaci\u00f3n:\n\n$$\n\\begin{equation}\n y = \\frac{m \\, n}{m ^{2}} \\frac{n +1}{ \\left(n^{-2} m \\right) ^{3}}\n\\end{equation}\n$$\n\nD\u00f3nde: \n\n* $n = 2$\n* $m = 10$\n\n\n\n```python\n\n```\n\n\n### __1.2. Variable de texto__\n\nA continuaci\u00f3n, se puede observar la naturaleza de las variables textuales.\n\n\n```python\nt = \"Esta es una oraci\u00f3n\" #De igual manera que la variable num\u00e9rica.\nprint(t)\n```\n\n\n```python\n#Es posible adicionar texto\nt2 = \", \u00bfo no?\"\nfrase_completa = t+t2\nprint(frase_completa)\n```\n\n\n```python\n#Podemos tambi\u00e9n acceder a las letras en un texto\nprint(frase_completa[0])\n```\n\n\n```python\n#Y a fragmentos de una oraci\u00f3n\nprint(frase_completa[2:])\n```\n\n### __1.3. Listas__\n\nVariables _din\u00e1micas_ con contenido de cualquier naturaleza.\n\n\n```python\n#Ejemplo de lista\nl = ['a','b','c', [0,1]]\nprint(l)\n```\n\n\n```python\n#\u00bfC\u00f3mo accedemos a la informaci\u00f3n?\nprint(l[0]) #Recuerda: el contenido de la lista empieza desde 0, 1, 2, ...\n```\n\n\n```python\n#Podemos redefinir el contenido de la siguiente manera:\nl[0] = 'z'\nprint(l) #De esta manera, la lista se cambia su valor\n```\n\n\n```python\nprint(l[3][0]) #Tambi\u00e9n podemos leer la informaci\u00f3n de una lista dentro de otra lista\n```\n\n### __1.4. Tuplas__\n\nVariables _est\u00e1ticas_ con contenido de cualquier naturaleza.\n\n\n```python\nt = ('a',0,20,'2', ('Hola', 'Adi\u00f3s')) #Similar a la lista\nprint(t)\n```\n\n\n```python\n#Tambi\u00e9n podemos acceder a su contenido... y jugar con \u00e9l\nprint('\u00bf' + t[4][0] + '?, ' + t[4][1])\n```\n\n\n```python\n#Pero si lo intentamos cambiar...\nt[0] = 1\n```\n\n### __1.5. Diccionarios__\n\nTipo de variable usada en programaci\u00f3n web. 
Facilita la lectura de c\u00f3digo al darle _\"nombres\"_ a su contenido.\n\n\n```python\n#Si vamos al s\u00faper mercado\nlista_mercado = {\n 'manzana':2,\n 'peras':3,\n 'uvas': 4\n}\nprint(lista_mercado)\n```\n\n\n```python\n#Podemos ser a\u00fan m\u00e1s espec\u00edficos...\nlista_mercado = {\n 'Frutas': {\n 'Manzanas': {'Unidades': 'Un', 'Cant': 2},\n 'Peras': {'Unidades': 'Un', 'Cant': 1},\n 'Uvas': {'Unidades': 'Lb', 'Cant': 4}\n }\n}\nprint(lista_mercado)\n```\n\n\n```python\n#Se accede a la informaci\u00f3n de la siguiente manera:\nprint(lista_mercado['Frutas']['Manzanas'])\n```\n", "meta": {"hexsha": "dc329134b24703d9dfeea065e5895506c183c7d6", "size": 9697, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Python/Colab/VariablesPython.ipynb", "max_stars_repo_name": "judrodriguezgo/DesarrolloWeb", "max_stars_repo_head_hexsha": "a020b1eb734e243114982cde9edfc3c25d60047a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-10-30T16:54:25.000Z", "max_stars_repo_stars_event_max_datetime": "2021-10-30T16:54:25.000Z", "max_issues_repo_path": "Python/Colab/VariablesPython.ipynb", "max_issues_repo_name": "judrodriguezgo/DesarrolloWeb", "max_issues_repo_head_hexsha": "a020b1eb734e243114982cde9edfc3c25d60047a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Python/Colab/VariablesPython.ipynb", "max_forks_repo_name": "judrodriguezgo/DesarrolloWeb", "max_forks_repo_head_hexsha": "a020b1eb734e243114982cde9edfc3c25d60047a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2021-11-23T22:24:15.000Z", "max_forks_repo_forks_event_max_datetime": "2021-12-31T23:51:47.000Z", "avg_line_length": 24.0620347395, "max_line_length": 150, "alphanum_fraction": 0.4274517892, "converted": true, "num_tokens": 1223, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9449947070591977, "lm_q2_score": 0.8688267898240861, "lm_q1q2_score": 0.8210367177349954}} {"text": "# Boltzmann distribution for a harmonic oscillator\n\n\n```python\nfrom sympy import *\ninit_printing()\n```\n\n\n```python\nk, beta, T, p, n, omega, hbar = symbols('k, beta, T, p, n, omega, hbar', positive=True)\n```\n\n\n```python\np = exp(-n*hbar*omega/k/T)*(1-exp(-hbar*omega/k/T))\n```\n\n\n```python\np\n```\n\n\n```python\np.subs(n,2)\n```\n\n\n```python\nsolveset(p.subs([(n,2),(omega,3.0e4*3.14),(k,1.38e-23),(hbar,1.05e-34)])-0.25,T)\n```\n\n\n```python\nx = symbols('x', positive=True)\n```\n\n\n```python\nsolveset(x**2-x**3-0.08,x)\n```\n\n\n```python\n(hbar*omega/k*log(1/0.42)).subs([(n,2),(omega,3.0e4*3.14),(k,1.38e-23),(hbar,1.05e-34)])\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "3547494503193914ac06194b2ff52614e4bb2421", "size": 16153, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Boltzmann.ipynb", "max_stars_repo_name": "corcoted/Phys475", "max_stars_repo_head_hexsha": "8fc0ee6bccda19e5afff14f0e8fda6aef7ee6d66", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-03-10T04:30:46.000Z", "max_stars_repo_stars_event_max_datetime": "2021-07-12T09:20:43.000Z", "max_issues_repo_path": "Boltzmann.ipynb", "max_issues_repo_name": "corcoted/Phys475", "max_issues_repo_head_hexsha": "8fc0ee6bccda19e5afff14f0e8fda6aef7ee6d66", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Boltzmann.ipynb", "max_forks_repo_name": "corcoted/Phys475", "max_forks_repo_head_hexsha": "8fc0ee6bccda19e5afff14f0e8fda6aef7ee6d66", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 57.4839857651, "max_line_length": 3116, "alphanum_fraction": 0.7666068223, "converted": true, "num_tokens": 257, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9449947132556618, "lm_q2_score": 0.8688267626522814, "lm_q1q2_score": 0.8210366974414376}} {"text": "# Einstein Tensor calculations using Symbolic module\n\n\n```python\nimport numpy as np\nimport pytest\nimport sympy\nfrom sympy import cos, simplify, sin, sinh, tensorcontraction\nfrom einsteinpy.symbolic import EinsteinTensor, MetricTensor, RicciScalar\n\nsympy.init_printing()\n```\n\n### Defining the Anti-de Sitter spacetime Metric\n\n\n```python\nsyms = sympy.symbols(\"t chi theta phi\")\nt, ch, th, ph = syms\nm = sympy.diag(-1, cos(t) ** 2, cos(t) ** 2 * sinh(ch) ** 2, cos(t) ** 2 * sinh(ch) ** 2 * sin(th) ** 2).tolist()\nmetric = MetricTensor(m, syms)\n```\n\n### Calculating the Einstein Tensor (with both indices covariant)\n\n\n```python\neinst = EinsteinTensor.from_metric(metric)\neinst.tensor()\n```\n", "meta": {"hexsha": "0adad0af4ca525c98f6ae1d18f27abf97e959176", "size": 100153, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "docs/source/examples/Einstein Tensor symbolic calculation.ipynb", "max_stars_repo_name": "bibek22/einsteinpy", "max_stars_repo_head_hexsha": "78bf5d942cbb12393852f8e4d7a8426f1ffe6f23", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-06-01T18:37:53.000Z", "max_stars_repo_stars_event_max_datetime": "2020-06-01T18:37:53.000Z", "max_issues_repo_path": "docs/source/examples/Einstein Tensor symbolic calculation.ipynb", "max_issues_repo_name": "bibek22/einsteinpy", "max_issues_repo_head_hexsha": "78bf5d942cbb12393852f8e4d7a8426f1ffe6f23", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2019-04-08T17:39:50.000Z", "max_issues_repo_issues_event_max_datetime": "2019-04-11T03:10:09.000Z", "max_forks_repo_path": "docs/source/examples/Einstein Tensor symbolic calculation.ipynb", "max_forks_repo_name": "bibek22/einsteinpy", "max_forks_repo_head_hexsha": "78bf5d942cbb12393852f8e4d7a8426f1ffe6f23", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 319.9776357827, "max_line_length": 77604, "alphanum_fraction": 0.7773606382, "converted": true, "num_tokens": 199, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9489172630429475, "lm_q2_score": 0.8652240947405564, "lm_q1q2_score": 0.8210260799000207}} {"text": "# Introduction and Motivation\n\nBinomial distribution plays an important role in real world applications. It measures the probability of having $k$ successes out of $n$ i.i.d. trials with each trial having a success rate $p$. The PDF of binomial distribution with success rate $p$, total number of trials $n$ and the number of successes $k$ is\n\\begin{equation}\nf(k,n,p)={n\\choose k}p^{k}(1-p)^{n-k}\n\\label{bino_pdf}\n\\end{equation}\n\nHowever, it may be difficult to directly use formula because it may contain large and small terms. Typically, ${n\\choose k}$ is very large, whereas $p^k$ and $(1-p)^{n-k}$ are very small. These may cause overflow or underflow when computing them. It would be more convenient if we could reformulate the formula that avoids this issue. Is it possible to do? Maybe we can observe an example of this distribution to get some ideas.\n\nGiven the success rate $p$ of i.i.d. trials and the total number of trials $n$, the above equation gives the relationship between $k$ and $f(k,n,p)$. 
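\n\nAs a practical aside on the overflow/underflow issue raised above (an added sketch, not part of the original text): the PMF can be evaluated in log space, summing logarithms of the factorial and power terms with `math.lgamma` and `math.log`, and exponentiating only at the end.\n\n```python\nfrom math import lgamma, log, exp\n\ndef binom_pmf(k, n, p):\n    # log f(k,n,p) = log n! - log k! - log (n-k)! + k*log(p) + (n-k)*log(1-p)\n    log_f = (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)\n             + k * log(p) + (n - k) * log(1 - p))\n    return exp(log_f)\n\nprint(binom_pmf(900, 3000, 0.3))  # value near the peak k = n*p, about 0.016; no overflow\n```\n\n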
Below is an example of binomial distribution PDF with $n=3000$ and $p=0.3$.\n\n\n\nThe above figure looks very similar to a normal distribution, which makes intuitive sense. The peak of the distribution should coorespond to $k=np$. Furthermore, it is also very likely that $k$ can deviate around $np$ because it's very possible that we may get some more or fewer successes than expected. However, it is unlikely that the $k$ we get deviates too far from $np$. All of these intuitions can be visualized on the graph. Therefore, one may ask, is it possible to have a normal distribution to approximate the binomial distribution? It turns out the answer is yes. In this project, we will study the relationships between binomial distributions their corresponding normal approximation.\n\n# Derivation\n\nWe begin by presenting the derivation from this paper[1].\nBefore we start, it is worth noting an approximation for factorials called Stirling's formula[2].\n$$\n\\begin{aligned}\nn!&=\\sqrt{2\\pi n}\\left(\\frac{n}{e}\\right)^n\\left(1+\\frac{1}{12n}+\\frac{1}{288n^2}-\\frac{1}{51840n^3}+...\\right)\\\\\n&=\\sqrt{2\\pi n}\\left(\\frac{n}{e}\\right)^n\\left(1+O\\left(\\frac{1}{n}\\right)\\right)\n\\end{aligned}\n$$\n\nEquipped with this formula, we can proceed on making the approximation.\n$$\n\\begin{aligned}\nf(k,n,p)&={n\\choose k}p^{k}(1-p)^{n-k}\\\\\n&=\\frac{n!}{k!(n-k)!}p^{k}(1-p)^{n-k}\\\\\n&=\\sqrt{\\frac{n}{2\\pi k(n-k)}}\\left(\\frac{np}{k}\\right)^{k}\\left(\\frac{n(1-p)}{n-k}\\right)^{n-k}\\left[1+O\\left(\\frac1n\\right)\\right]\n\\end{aligned}\n$$\n\nDefining $\\delta=k-np$, we have $k=\\delta+np$ and $n-k=n(1-p)-\\delta$. Expanding $\\textrm{ln}(1+k)=k-\\frac{1}{2}k^2+O(k^3)$, we have,\n$$\n\\textrm{ln}\\left[\\left(\\frac{np}{k}\\right)^k\\left(\\frac{n(1-p)}{n-k}\\right)^{n-k}\\right]=-\\frac{\\delta^2}{2np(1-p)}+O\\left(\\frac{\\delta^3}{n^2}\\right)\n$$\n\nExponentiating the result, we have\n$$\n\\begin{aligned}\n\\left(\\frac{np}{k}\\right)^k\\left(\\frac{n(1-p)}{n-k}\\right)^{n-k}&=\\textrm{exp}\\left(-\\frac{\\delta^2}{2np(1-p)}+O\\left(\\frac{\\delta^3}{n^2}\\right)\\right)\\\\\n&=\\textrm{exp}\\left(-\\frac{\\delta^2}{2np(1-p)}\\right)\\textrm{exp}\\left(O\\left(\\frac{\\delta^3}{n^2}\\right)\\right)\\\\\n\\end{aligned}\n$$\n\nBecause $\\delta$ is just the difference between observed $k$ and the expected value of $k$ which is $np$, $\\delta$ should be on the scale of a few standard deviations, which is $\\sqrt{np(1-p)}$. 
Therefore, we argue that $\\delta\\sim\\sqrt{n}$.\nTherefore,\n$$\nO\\left(\\frac{\\delta^3}{n^2}\\right)=O\\left(\\frac{n^{\\frac32}}{n^2}\\right)=O\\left(\\frac{1}{\\sqrt{n}}\\right).\n$$\nAs $n$ gets large, $\\frac{1}{\\sqrt{n}}$ gets small.\nUsing $\\textrm{exp}(x)=1+x+\\frac{x^2}{2}+...=1+x+O(x^2)$, the above equation becomes\n$$\n\\begin{aligned}\n\\left(\\frac{np}{k}\\right)^k\\left(\\frac{n(1-p)}{n-k}\\right)^{n-k}=\\textrm{exp}\\left(-\\frac{\\delta^2}{2np(1-p)}\\right)\\left[1+O\\left(\\frac{1}{\\sqrt{n}}\\right)\\right].\n\\end{aligned}\n$$\n\nFor the square root part, we have\n$$\n\\begin{aligned}\n\\sqrt{\\frac{n}{2\\pi k(n-k)}}&=\\sqrt{\\frac{n}{2\\pi(np+\\delta)(n(1-p)-\\delta)}}\\\\\n&=\\sqrt{\\frac{1}{2\\pi np(1-p)}}\\left[1+O\\left(\\frac{1}{\\sqrt{n}}\\right)\\right]\n\\end{aligned}\n$$\n\nMultiplying them together, we obtain\n$$\n\\begin{aligned}\nf(k,n,p)=\\sqrt{\\frac{1}{2\\pi np(1-p)}}\\textrm{exp}\\left(-\\frac{(k-np)^2}{2np(1-p)}\\right)\\left[1+O\\left(\\frac{1}{\\sqrt{n}}\\right)\\right]\n\\end{aligned}\n$$\n\nWe know that for binomial distribution $f(k)$ with fixed $n$ and $p$, its mean of $k$ is $\\mu=np$, and its variance is $\\sigma^2=np(1-p)$. Therefore, we can rewrite the formula above as\n$$\nf(k)=\\frac{1}{\\sqrt{2\\pi} \\sigma}\\textrm{exp}\\left(-\\frac{(k-\\mu)^2}{2\\sigma^2}\\right)\\left[1+O\\left(\\frac{1}{\\sqrt{n}}\\right)\\right]\n$$\n\nThis is exactly a normal distribution with mean $\\mu$ and variance $\\sigma^2$ with a correction term $O\\left(\\frac{1}{\\sqrt{n}}\\right)$. As $n\\rightarrow\\infty$, $O\\left(\\frac{1}{\\sqrt{n}}\\right)$ becomes a small value compared to $1$.\n\nThere's a caveat here. we arrived at this result by assuming that $\\delta\\sim\\sqrt{n}$, which means that this formula is applicable only when $k$ is within a few standard deviations to $np$. This formula may fail for significantly deviated $k$. This makes intuitive sense. After all, the domain of $f(k)$ is $[0,n]$ whereas the domain of normal distribution is $(-\\infty,\\infty)$. For $k$ smaller than $0$ or larger than $n$, our approximation returns a positive value whereas the true binomial formula returns $0$, and the ratio between them will be infinity. This is not to be worried about because our approximation disregards those exceptional cases where $\\delta\\sim n$ rather than $\\delta\\sim\\sqrt{n}$.\n\nWe can now define the approximation $g(k)$ of $f(k)$ by neglecting the $O\\left(\\frac{1}{\\sqrt{n}}\\right)$ term.\n$$\ng(k)=\\frac{1}{\\sqrt{2\\pi} \\sigma}\\textrm{exp}\\left(-\\frac{(k-\\mu)^2}{2\\sigma^2}\\right).\n$$\n\n# Approximation Error Visualizations\n\nAs previously discussed, our approximation works well when $n$ is large $k$ is within a few standard deviations to $\\mu$. We can visualize both the approximation and the exact function with varying $n$. Here are some examples. The x-axis limits are determined by $\\mu\\pm 5\\sigma$ for all of the plots. The blue and red curves represent the approximations $g(k)$ and the exact functions $f(k)$, respectively.\n\n\n\nAs $n$ gets larger, the differences between two plots becomes more similar. At $n=400$, they are essentially indistinguishable. However, we may still be interested in seeing how the differences decrease with increasing $n$. Defining $d(k)=g(k)-f(k)$, we can visualize $d(k)$.\n\n\n\nWe can see that as $n$ increases the magnitudes of $d(k)$ becomes smaller, but it gets wider. Moreover, its general structures are more or less preserved. 
Take $n = 400$ and $n=1000$ as a comparison, if we scale y-coordinates the $n=400$ plot down, but scale up its x-coordinates, we will get some results similar to the $n=1000$ plot. In other words, if we ignore the x and y scales, the $n=400$ plot and $n=1000$ plot look pretty much identical.\n\nWe should be aware that the correction is also a function of $k$, because otherwise $d(k)$ will be a constant function. Furthermore, the observations above suggest that on the x-axis, the pattern induced by the correction lengthens in accordance to $\\sigma$. For example, if $\\sigma$ doubles, we should see the distance between the two local minima around the center peak of $d(k)$ also doubles. That is, the plot becomes twice as wide. For a normal distribution, if $\\sigma$ doubles, we need to go $\\sqrt{2}$ as far from $\\mu$ to achieve the same percentile as before. Because $\\sigma$ scales with $\\sqrt{n}$, it implies that when $n$ is large, the contribution from the term $\\frac{k}{\\sqrt{n}}$ dominates those from other terms as a function $y\\left(\\frac{x}{\\sqrt{n}}\\right)$ is $\\sqrt{n}$ times as wide as $y(x)$, and the disproportionalities of the graphs caused by other terms, if they exist, are visually negligible.\n\nWe may also visualize the correction terms itself by defining a new function $r(k)$ such that\n$$\n\\begin{aligned}\nr(k)&=-\\frac{d(k)}{g(k)}\\\\\n&=-\\frac{\\frac{-1}{\\sqrt{2\\pi} \\sigma}\\textrm{exp}\\left(-\\frac{(k-\\mu)^2}{2\\sigma^2}\\right)}{\\frac{1}{\\sqrt{2\\pi} \\sigma}\\textrm{exp}\\left(-\\frac{(k-\\mu)^2}{2\\sigma^2}\\right)}O\\left(\\frac{1}{\\sqrt{n}}\\right)\\\\\n&=O\\left(\\frac{1}{\\sqrt{n}}\\right)\n\\end{aligned}\n$$\n\n\n\nThe plots above confirms the claims that for large $n$ and $k$ not too far from $\\mu$, the correction terms are small and therefore the approximations are very accurate.\n\nAn interesting observation of $r(k)$ is that its local maxima doesn't seem to be at $\\mu$, but rather at some distances away from $\\mu$. We can zoom in around $k=\\mu$ to see more details.\n\n\n\nIt looks like at $k=\\mu$, the correction $r(k)$ is negative, making $g(\\mu)>f(\\mu)$. As $k$ deviates from $\\mu$, the difference becomes smaller and eventually $g(k)f(k)$ again.\n\nScale wise, while we already know that $r(k)$ roughly widens as $\\sqrt{n}$, its amplitudes decrease roughly as $n^{-1}$. Take $n=100$ and $n=1000$ as a comparison, $r(\\mu)$ for $n=1000$ is roughly $10$ times smaller than that for $n=100$. \n\n# Conclusion\n\nWe have constructed a normal approximation to a binomial distribution. The error analysis indicates that the approximation errors tends to be insignificant if $n$ is large and $k$ is not too far from $\\mu$. More specifically, the corrections stays \"in sync\" with the approximation in a sense that the final graphs corresponding to different $n$'s are roughly proportional. Moreover, the amplitudes of the corrections decreases roughly as $n^{-1}$.\n\n# Generalizations\n\nWe may generalize our discussions about binomial distributions to multinomial distributions, using the same technique. In fact, the generalization has already been discussed. For a multinomial $f(\\mathbf{k},n,\\mathbf{p})$ such that\n$$\n\\begin{aligned}\n\\mathbf{k}&=[k_1,k_2,...k_m]\\\\\n\\mathbf{p}&=[p_1,p_2,...p_m]\\\\\nn&=\\sum_{i=1}^{m}k_i\\\\\n0&<=p_1,p_2,...,p_m<=1\n\\end{aligned}\n$$\nfor varibles $\\mathbf{X}=[X_1,X_2,...X_M]$, where $X_1,X_2,...,X_m$ are i.i.d. only within themselves. 
We can approximate this multinomial distribution PDF as\n$$\n\\mathcal{N}(n\\mathbf{p},n\\mathbf{M}),\n$$\nwhere $\\mathbf{M}=\\mathbf{P}-\\mathbf{p}\\mathbf{p}^\\top$\nwhere $\\mathbf{P}$ is a diagonal matrix with diagonal entries being elements of $\\mathbf{p}$.\n\n# References\n\n[1]The Normal Approximation to the Binomial Distribution. http://scipp.ucsc.edu/~haber/ph116C/NormalApprox.pdf\n\n[2]Stirling's approximation. In Wikipedia. https://en.wikipedia.org/wiki/Stirling%27s_approximation\n\n[3]Geyer, C. Stat 5151 Notes: Brand Name Distributions, http://www.stat.umn.edu/geyer/5102/notes/brand.pdf\n \n\n\n```julia\n\n```\n", "meta": {"hexsha": "a261f86ae5426c8f532bed63c7952196849475c3", "size": 15453, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "_notebooks/2020-11-22-Normal-Approximation.ipynb", "max_stars_repo_name": "TianqiT/MyFastpages", "max_stars_repo_head_hexsha": "e70f481e5df3e3619d890f5e57c7e985ce13dc30", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "_notebooks/2020-11-22-Normal-Approximation.ipynb", "max_issues_repo_name": "TianqiT/MyFastpages", "max_issues_repo_head_hexsha": "e70f481e5df3e3619d890f5e57c7e985ce13dc30", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2021-09-28T05:43:58.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-26T10:15:46.000Z", "max_forks_repo_path": "_notebooks/2020-11-22-Normal-Approximation.ipynb", "max_forks_repo_name": "TianqiT/MyFastpages", "max_forks_repo_head_hexsha": "e70f481e5df3e3619d890f5e57c7e985ce13dc30", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.3181818182, "max_line_length": 944, "alphanum_fraction": 0.5961302013, "converted": true, "num_tokens": 3290, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8757869884059267, "lm_q2_score": 0.9372107931567176, "lm_q1q2_score": 0.8207970180402516}} {"text": "Euler Problem 70\n================\n\nEuler's Totient function, \u03c6(n) [sometimes called the phi function], is used to\ndetermine the number of positive numbers less than or equal to n which are\nrelatively prime to n. For example, as 1, 2, 4, 5, 7, and 8, are all less than\nnine and relatively prime to nine, \u03c6(9)=6. 
The number 1 is considered to be\nrelatively prime to every positive number, so \u03c6(1)=1.\n\nInterestingly, \u03c6(87109)=79180, and it can be seen that 87109 is a permutation\nof 79180.\n\nFind the value of n, 1 < n < 10^7, for which \u03c6(n) is a permutation of n and the\nratio n/\u03c6(n) produces a minimum.\n\n\n```python\nfrom sympy import sieve, primepi\nN = 10 ** 7\nn = int(N ** 0.5)\nmin_ratio = 1.005\nbest_n = None\nprimes = list(sieve.primerange(1, N))\npi = primepi(n)\nnum_primes = len(primes)\nfor i in range(pi, -1, -1):\n p = primes[i]\n ratio = p / (p - 1)\n if ratio > min_ratio:\n break\n for j in range(i+1, num_primes):\n q = primes[j]\n n = p * q\n if n > N:\n break\n if p / (p - 1) > min_ratio:\n break\n if sorted(str(n)) == sorted(str(n - p - q + 1)):\n ratio = 1.0 * p * q / (p - 1) / (q - 1)\n if ratio < min_ratio:\n min_ratio = ratio\n best_n = n\nprint(best_n)\n```\n\n 8319823\n\n\n**Discussion:** The ratio n/\u03c6(n) is equal to the product of p/(p-1) for all distinct prime factors p of n.\nWe may assume that n has no repeated factors.\n\nIf n is prime then \u03c6(n) = n - 1, so the digits of \u03c6(n) cannot be a permutation of the digits of n.\n\nIf n is the product of three or more prime factors, then its smallest prime factor is less than 200, so n/\u03c6(n) > 1.005.\n\nSuppose that n is the product of two distinct prime factors p and q (p < q). Then n/\u03c6(n) = p/(p-1) * q/(q-1). If the minimum value realized in this case is less than 1.005, then we have found the optimal value of n.\n", "meta": {"hexsha": "cbbfa38232b143692e28897401fa95fe5d4e54c2", "size": 3101, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Euler 070 - Totient permutation.ipynb", "max_stars_repo_name": "Radcliffe/project-euler", "max_stars_repo_head_hexsha": "5eb0c56e2bd523f3dc5329adb2fbbaf657e7fa38", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2016-05-11T18:55:35.000Z", "max_stars_repo_stars_event_max_datetime": "2019-12-27T21:38:43.000Z", "max_issues_repo_path": "Euler 070 - Totient permutation.ipynb", "max_issues_repo_name": "Radcliffe/project-euler", "max_issues_repo_head_hexsha": "5eb0c56e2bd523f3dc5329adb2fbbaf657e7fa38", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Euler 070 - Totient permutation.ipynb", "max_forks_repo_name": "Radcliffe/project-euler", "max_forks_repo_head_hexsha": "5eb0c56e2bd523f3dc5329adb2fbbaf657e7fa38", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 30.4019607843, "max_line_length": 221, "alphanum_fraction": 0.5159625927, "converted": true, "num_tokens": 592, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9173026573249612, "lm_q2_score": 0.8947894618940992, "lm_q1q2_score": 0.8207927511418294}} {"text": "\n\n# Part 1 - Scalars and Vectors\n\nFor the questions below it is not sufficient to simply provide answer to the questions, but you must solve the problems and show your work using python (the NumPy library will help a lot!) Translate the vectors and matrices into their appropriate python representations and use numpy or functions that you write yourself to demonstrate the result or property. 
\n\n## 1.1 Create a two-dimensional vector and plot it on a graph\n\n\n```\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\n\ntwo_d_vector = [.4, 1]\n\nplt.arrow(0,0, two_d_vector[0], two_d_vector[1],head_width=.1, head_length=.1, color ='orange')\nplt.xlim(-.2,1) \nplt.ylim(-.2,1.2)\n```\n\n## 1.2 Create a three-dimensional vector and plot it on a graph\n\n\n```\nvectors = np.array([[0, 0, 0, 2, 3, 4], \n [0, 0, 0, 7, 4, 6],\n [0, 0, 0, 2, 8, 9]])\n\nX, Y, Z, U, V, W = zip(*vectors)\nfig = plt.figure()\nax = fig.add_subplot(111, projection='3d')\nax.quiver(X, Y, Z, U, V, W, length=.5)\nax.set_xlim([0, 4])\nax.set_ylim([0, 6])\nax.set_zlim([0, 5])\nax.set_xlabel('X')\nax.set_ylabel('Y')\nax.set_zlabel('Z')\nplt.show()\n```\n\n\n```\n\n```\n\n## 1.3 Scale the vectors you created in 1.1 by $5$, $\\pi$, and $-e$ and plot all four vectors (original + 3 scaled vectors) on a graph. What do you notice about these vectors? \n\n\n```\nfrom math import e, pi\n\ntwo_d_vector = [.4, 1]\nscaled_to_e = np.multiply(e, two_d_vector)\nscaled_to_pi = np.multiply(pi, two_d_vector)\nscaled_by_5 = np.multiply(5, two_d_vector)\n\n\nplt.arrow(0,0, scaled_by_5[0], scaled_by_5[1],head_width=.1, head_length=.1, color ='green')\nplt.arrow(0,0, scaled_to_pi[0], scaled_to_pi[1],head_width=.1, head_length=.1, color ='blue')\nplt.arrow(0,0, scaled_to_e[0], scaled_to_e[1],head_width=.1, head_length=.1, color ='red')\nplt.arrow(0,0, two_d_vector[0], two_d_vector[1],head_width=.1, head_length=.1, color ='orange')\n\n\n\n\nplt.xlim(-.2,3) \nplt.ylim(-.2,6)\n```\n\n\n```\n\n```\n\n## 1.4 Graph vectors $\\vec{a}$ and $\\vec{b}$ and plot them on a graph\n\n\\begin{align}\n\\vec{a} = \\begin{bmatrix} 5 \\\\ 7 \\end{bmatrix}\n\\qquad\n\\vec{b} = \\begin{bmatrix} 3 \\\\4 \\end{bmatrix}\n\\end{align}\n\n\n```\na = np.array([5,7])\nb = np.array([3,4])\n\nplt.arrow(0,0,a[0],a[1], head_width=.2, head_length=.2, color='b')\nplt.arrow(0,0,b[0],b[1], head_width=.2, head_length=.2, color='r')\n\nplt.xlim(0, 6)\nplt.ylim(0,8)\n\n```\n\n## 1.5 find $\\vec{a} - \\vec{b}$ and plot the result on the same graph as $\\vec{a}$ and $\\vec{b}$. Is there a relationship between vectors $\\vec{a} \\thinspace, \\vec{b} \\thinspace \\text{and} \\thinspace \\vec{a-b}$\n\n\n```\na = np.array([5,7])\nb = np.array([3,4])\nc = a - b\n\nplt.arrow(0,0,a[0],a[1], head_width=.2, head_length=.2, color='b')\nplt.arrow(0,0,b[0],b[1], head_width=.2, head_length=.2, color='r')\nplt.arrow(0,0,c[0],c[1], head_width=.2, head_length=.2, color='y')\n\n\nplt.xlim(0, 6)\nplt.ylim(0,8)\n```\n\n## 1.6 Find $c \\cdot d$\n\n\\begin{align}\n\\vec{c} = \\begin{bmatrix}7 & 22 & 4 & 16\\end{bmatrix}\n\\qquad\n\\vec{d} = \\begin{bmatrix}12 & 6 & 2 & 9\\end{bmatrix}\n\\end{align}\n\n\n\n```\nc = np.array([7, 22, 4, 16])\nd = np.array([12, 6, 2, 9])\n\nc_dot_d = (c*d).sum()\n\nc_dot_d\n```\n\n\n\n\n 368\n\n\n\n\n```\nnp.vdot(c, d )\n```\n\n\n\n\n 368\n\n\n\n## 1.7 Find $e \\times f$\n\n\\begin{align}\n\\vec{e} = \\begin{bmatrix} 5 \\\\ 7 \\\\ 2 \\end{bmatrix}\n\\qquad\n\\vec{f} = \\begin{bmatrix} 3 \\\\4 \\\\ 6 \\end{bmatrix}\n\\end{align}\n\n\n```\ne = np.array([5, 7, 2])\nf = np.array([3, 4, 6])\n\ne_cross_f = np.cross(e,f)\n\ne_cross_f\n```\n\n\n\n\n array([ 34, -24, -1])\n\n\n\n\n```\nnp.array\n```\n\n## 1.8 Find $||g||$ and then find $||h||$. 
Which is longer?\n\n\\begin{align}\n\\vec{g} = \\begin{bmatrix} 1 \\\\ 1 \\\\ 1 \\\\ 8 \\end{bmatrix}\n\\qquad\n\\vec{h} = \\begin{bmatrix} 3 \\\\3 \\\\ 3 \\\\ 3 \\end{bmatrix}\n\\end{align}\n\n\n```\ng = np.array([1, 1, 1, 8])\nh = np.array([3, 3, 3, 3])\n\ng_magnitude = np.sqrt((g**2).sum())\nh_magnitude = np.sqrt((h**2).sum())\n\nif g_magnitude > h_magnitude:\n print(f\"The magnitude of g ({g_magnitude}) is greater than the magnitude of h ({h_magnitude})\")\nelse:\n print(f\"The magnitude of h ({h_magnitude}) is greater than the magnitude of g ({g_magnitude})\")\n```\n\n The magnitude of g (8.18535277187245) is greater than the magnitude of h (6.0)\n\n\n# Part 2 - Matrices\n\n## 2.1 What are the dimensions of the following matrices? Which of the following can be multiplied together? See if you can find all of the different legal combinations.\n\\begin{align}\nA = \\begin{bmatrix}\n1 & 2 \\\\\n3 & 4 \\\\\n5 & 6\n\\end{bmatrix}\n\\qquad\nB = \\begin{bmatrix}\n2 & 4 & 6 \\\\\n\\end{bmatrix}\n\\qquad\nC = \\begin{bmatrix}\n9 & 6 & 3 \\\\\n4 & 7 & 11\n\\end{bmatrix}\n\\qquad\nD = \\begin{bmatrix}\n1 & 0 & 0 \\\\\n0 & 1 & 0 \\\\\n0 & 0 & 1\n\\end{bmatrix}\n\\qquad\nE = \\begin{bmatrix}\n1 & 3 \\\\\n5 & 7\n\\end{bmatrix}\n\\end{align}\n\n\n```\nA = np.array([[1, 2],\n [3, 4],\n [5,6]])\nB = np.array([2, 4, 6])\nC = np.array([[9, 6, 3],\n [4, 7, 11]])\nD = np.array([[1, 0, 0],\n [0, 1, 0],\n [0, 0, 1]])\nE = np.array([[1, 3],\n [5, 7]])\n```\n\nA - 3x2\n> A*B = No // A*C = Yes // A*D = No // A*E = Yes\n\nB - 1x3\n\n\n> BA = Yes // BC = No // BD = Yes // BE = No\n\n\nC - 2x3\n\n\n> CA = Yes // CB = No // CD = Yes // CE = No\n\nD - 3x3\n\n> DA = Yes // DB = No // DC = No // DE = No\n\nE - 2x2\n\n\n> EA = No // EB = No // EC = Yes // ED = No\n\n\n\n## 2.2 Find the following products: CD, AE, and BA. What are the dimensions of the resulting matrices? How does that relate to the dimensions of their factor matrices?\n\n\n```\nprint(np.matmul(C,D))\nprint(np.matmul(A,E))\nprint(np.matmul(B,A))\n\n```\n\n [[ 9 6 3]\n [ 4 7 11]]\n [[11 17]\n [23 37]\n [35 57]]\n [44 56]\n\n\nCD = 2x3\nAE = 3x2\nBA = 1x2\n\n## 2.3 Find $F^{T}$. How are the numbers along the main diagonal (top left to bottom right) of the original matrix and its transpose related? What are the dimensions of $F$? What are the dimensions of $F^{T}$?\n\n\\begin{align}\nF = \n\\begin{bmatrix}\n20 & 19 & 18 & 17 \\\\\n16 & 15 & 14 & 13 \\\\\n12 & 11 & 10 & 9 \\\\\n8 & 7 & 6 & 5 \\\\\n4 & 3 & 2 & 1\n\\end{bmatrix}\n\\end{align}\n\nThe main diagonal in F, and it's transpose, F.T are the same.\n\nF - 5x4\nF.T - 4x5\n\n\n```\nF = np.array([[20,19,18,17],\n [16,15,14,13],\n [12,11,10,9],\n [8,7,6,5],\n [4,3,2,1]])\n\nF.T\n```\n\n\n\n\n array([[20, 16, 12, 8, 4],\n [19, 15, 11, 7, 3],\n [18, 14, 10, 6, 2],\n [17, 13, 9, 5, 1]])\n\n\n\n# Part 3 - Square Matrices\n\n## 3.1 Find $IG$ (be sure to show your work) \ud83d\ude03\n\nYou don't have to do anything crazy complicated here to show your work, just create the G matrix as specified below, and a corresponding 2x2 Identity matrix and then multiply them together to show the result. 
You don't need to write LaTeX or anything like that (unless you want to).\n\n\\begin{align}\nG= \n\\begin{bmatrix}\n13 & 14 \\\\\n21 & 12 \n\\end{bmatrix}\n\\end{align}\n\n\n```\nG = np.array([[13,14],\n [21,12]])\n\nG_identity = np.array([[1,0],\n [0,1]])\n\n# If the product is equal to G, we have the proper IG\nnp.matmul(G, G_identity)\n```\n\n\n\n\n array([[13, 14],\n [21, 12]])\n\n\n\n## 3.2 Find $|H|$ and then find $|J|$.\n\n\\begin{align}\nH= \n\\begin{bmatrix}\n12 & 11 \\\\\n7 & 10 \n\\end{bmatrix}\n\\qquad\nJ= \n\\begin{bmatrix}\n0 & 1 & 2 \\\\\n7 & 10 & 4 \\\\\n3 & 2 & 0\n\\end{bmatrix}\n\\end{align}\n\n\n\n```\nH = np.array([[12,11],\n [7,10]])\n\nJ = np.array([[0,1,2],\n [7,10,4],\n [3,2,0]])\n```\n\n\n```\nnp.linalg.det(H)\n```\n\n\n\n\n 43.000000000000014\n\n\n\n\n```\nnp.linalg.det(J)\n```\n\n\n\n\n -19.999999999999996\n\n\n\n## 3.3 Find $H^{-1}$ and then find $J^{-1}$\n\n$H^{-1}$ = \\begin{bmatrix}\n10 & -11 \\\\\n-7 & 12 \n\\end{bmatrix}\n\n\n\n---\n\n\n $J^{-1}$ = No Determinant\n\n## 3.4 Find $HH^{-1}$ and then find $J^{-1}J$. Is $HH^{-1} == J^{-1}J$? Why or Why not? \n\nPlease ignore Python rounding errors. If necessary, format your output so that it rounds to 5 significant digits (the fifth decimal place).\n\nI would say no, as $J^{-1}$ doesn't exist\n\n# Stretch Goals: \n\nA reminder that these challenges are optional. If you finish your work quickly we welcome you to work on them. If there are other activities that you feel like will help your understanding of the above topics more, feel free to work on that. Topics from the Stretch Goals sections will never end up on Sprint Challenges. You don't have to do these in order, you don't have to do all of them. \n\n- Write a function that can calculate the dot product of any two vectors of equal length that are passed to it.\n- Write a function that can calculate the norm of any vector\n- Prove to yourself again that the vectors in 1.9 are orthogonal by graphing them. \n- Research how to plot a 3d graph with animations so that you can make the graph rotate (this will be easier in a local notebook than in google colab)\n- Create and plot a matrix on a 2d graph.\n- Create and plot a matrix on a 3d graph.\n- Plot two vectors that are not collinear on a 2d graph. Calculate the determinant of the 2x2 matrix that these vectors form. 
How does this determinant relate to the graphical interpretation of the vectors?\n\n\n", "meta": {"hexsha": "73962d1a82ce5bebe6ae6304259cb30bfb2ad5be", "size": 115641, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "module1-vectors-and-matrices/Shawn_Harrington_LS_DS20_131_Vectors_and_Matrices_Assignment.ipynb", "max_stars_repo_name": "Shawn2776/Build_sample", "max_stars_repo_head_hexsha": "d783717537b8618906f71a3306d918b571fe5f04", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "module1-vectors-and-matrices/Shawn_Harrington_LS_DS20_131_Vectors_and_Matrices_Assignment.ipynb", "max_issues_repo_name": "Shawn2776/Build_sample", "max_issues_repo_head_hexsha": "d783717537b8618906f71a3306d918b571fe5f04", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "module1-vectors-and-matrices/Shawn_Harrington_LS_DS20_131_Vectors_and_Matrices_Assignment.ipynb", "max_forks_repo_name": "Shawn2776/Build_sample", "max_forks_repo_head_hexsha": "d783717537b8618906f71a3306d918b571fe5f04", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 110.1342857143, "max_line_length": 46510, "alphanum_fraction": 0.8336835551, "converted": true, "num_tokens": 3272, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9173026505426831, "lm_q2_score": 0.894789464699728, "lm_q1q2_score": 0.820792747646729}} {"text": "```python\nfrom IPython.display import Image\nfrom IPython.core.display import HTML \nImage(url= \"https://i.imgur.com/NLtIf3P.png\")\n```\n\n\n\n\n\n\n\n\n\n```python\n# Start with a coordinate system\nfrom sympy.vector import CoordSys3D, Del\ndelop = Del()\nC = CoordSys3D('C')\n```\n\n\n```python\nC\n```\n\n\n\n\n$\\displaystyle CoordSys3D\\left(C, \\left( \\left[\\begin{matrix}1 & 0 & 0\\\\0 & 1 & 0\\\\0 & 0 & 1\\end{matrix}\\right], \\ \\mathbf{\\hat{0}}\\right)\\right)$\n\n\n\n\n```python\ndelop\n```\n\n\n\n\n$\\displaystyle Del\\left(\\right)$\n\n\n\n\n```python\nfrom sympy import symbols, Function\nv1, v2, v3, f = symbols('v1 v2 v3 f', cls=Function)\n```\n\n\n```python\n# Define the vector field as vfield and the scalar field as sfield.\nvfield = v1(C.x, C.y, C.z)*C.i + v2(C.x, C.y, C.z)*C.j + v3(C.x, C.y, C.z)*C.k\nffield = f(C.x, C.y, C.z)\n```\n\n\n```python\nvfield\n```\n\n\n\n\n$\\displaystyle (\\operatorname{v_{1}}{\\left(\\mathbf{{x}_{C}},\\mathbf{{y}_{C}},\\mathbf{{z}_{C}} \\right)})\\mathbf{\\hat{i}_{C}} + (\\operatorname{v_{2}}{\\left(\\mathbf{{x}_{C}},\\mathbf{{y}_{C}},\\mathbf{{z}_{C}} \\right)})\\mathbf{\\hat{j}_{C}} + (\\operatorname{v_{3}}{\\left(\\mathbf{{x}_{C}},\\mathbf{{y}_{C}},\\mathbf{{z}_{C}} \\right)})\\mathbf{\\hat{k}_{C}}$\n\n\n\n\n```python\nffield\n```\n\n\n\n\n$\\displaystyle f{\\left(\\mathbf{{x}_{C}},\\mathbf{{y}_{C}},\\mathbf{{z}_{C}} \\right)}$\n\n\n\n\n```python\n# Construct the expression for the LHS of the equation using Del()\nlhs = (delop.dot(ffield * vfield)).doit()\n```\n\n\n```python\nlhs\n```\n\n\n\n\n$\\displaystyle f{\\left(\\mathbf{{x}_{C}},\\mathbf{{y}_{C}},\\mathbf{{z}_{C}} \\right)} \\frac{\\partial}{\\partial \\mathbf{{x}_{C}}} \\operatorname{v_{1}}{\\left(\\mathbf{{x}_{C}},\\mathbf{{y}_{C}},\\mathbf{{z}_{C}} \\right)} + 
f{\\left(\\mathbf{{x}_{C}},\\mathbf{{y}_{C}},\\mathbf{{z}_{C}} \\right)} \\frac{\\partial}{\\partial \\mathbf{{y}_{C}}} \\operatorname{v_{2}}{\\left(\\mathbf{{x}_{C}},\\mathbf{{y}_{C}},\\mathbf{{z}_{C}} \\right)} + f{\\left(\\mathbf{{x}_{C}},\\mathbf{{y}_{C}},\\mathbf{{z}_{C}} \\right)} \\frac{\\partial}{\\partial \\mathbf{{z}_{C}}} \\operatorname{v_{3}}{\\left(\\mathbf{{x}_{C}},\\mathbf{{y}_{C}},\\mathbf{{z}_{C}} \\right)} + \\operatorname{v_{1}}{\\left(\\mathbf{{x}_{C}},\\mathbf{{y}_{C}},\\mathbf{{z}_{C}} \\right)} \\frac{\\partial}{\\partial \\mathbf{{x}_{C}}} f{\\left(\\mathbf{{x}_{C}},\\mathbf{{y}_{C}},\\mathbf{{z}_{C}} \\right)} + \\operatorname{v_{2}}{\\left(\\mathbf{{x}_{C}},\\mathbf{{y}_{C}},\\mathbf{{z}_{C}} \\right)} \\frac{\\partial}{\\partial \\mathbf{{y}_{C}}} f{\\left(\\mathbf{{x}_{C}},\\mathbf{{y}_{C}},\\mathbf{{z}_{C}} \\right)} + \\operatorname{v_{3}}{\\left(\\mathbf{{x}_{C}},\\mathbf{{y}_{C}},\\mathbf{{z}_{C}} \\right)} \\frac{\\partial}{\\partial \\mathbf{{z}_{C}}} f{\\left(\\mathbf{{x}_{C}},\\mathbf{{y}_{C}},\\mathbf{{z}_{C}} \\right)}$\n\n\n\n\n```python\n# Similarly, the RHS would be defined.\nrhs = ((vfield.dot(delop(ffield))) + (ffield * (delop.dot(vfield)))).doit()\n```\n\n\n```python\nrhs\n```\n\n\n\n\n$\\displaystyle \\left(\\frac{\\partial}{\\partial \\mathbf{{x}_{C}}} \\operatorname{v_{1}}{\\left(\\mathbf{{x}_{C}},\\mathbf{{y}_{C}},\\mathbf{{z}_{C}} \\right)} + \\frac{\\partial}{\\partial \\mathbf{{y}_{C}}} \\operatorname{v_{2}}{\\left(\\mathbf{{x}_{C}},\\mathbf{{y}_{C}},\\mathbf{{z}_{C}} \\right)} + \\frac{\\partial}{\\partial \\mathbf{{z}_{C}}} \\operatorname{v_{3}}{\\left(\\mathbf{{x}_{C}},\\mathbf{{y}_{C}},\\mathbf{{z}_{C}} \\right)}\\right) f{\\left(\\mathbf{{x}_{C}},\\mathbf{{y}_{C}},\\mathbf{{z}_{C}} \\right)} + \\operatorname{v_{1}}{\\left(\\mathbf{{x}_{C}},\\mathbf{{y}_{C}},\\mathbf{{z}_{C}} \\right)} \\frac{\\partial}{\\partial \\mathbf{{x}_{C}}} f{\\left(\\mathbf{{x}_{C}},\\mathbf{{y}_{C}},\\mathbf{{z}_{C}} \\right)} + \\operatorname{v_{2}}{\\left(\\mathbf{{x}_{C}},\\mathbf{{y}_{C}},\\mathbf{{z}_{C}} \\right)} \\frac{\\partial}{\\partial \\mathbf{{y}_{C}}} f{\\left(\\mathbf{{x}_{C}},\\mathbf{{y}_{C}},\\mathbf{{z}_{C}} \\right)} + \\operatorname{v_{3}}{\\left(\\mathbf{{x}_{C}},\\mathbf{{y}_{C}},\\mathbf{{z}_{C}} \\right)} \\frac{\\partial}{\\partial \\mathbf{{z}_{C}}} f{\\left(\\mathbf{{x}_{C}},\\mathbf{{y}_{C}},\\mathbf{{z}_{C}} \\right)}$\n\n\n\n\n```python\n# Now, to prove the product rule, we would just need to equate the expanded and simplified versions of the lhs and the rhs,\n# so that the SymPy expressions match.\nlhs.expand().simplify() == rhs.expand().doit().simplify()\n```\n\n\n\n\n True\n\n\n\n\n```python\nlhs.expand().simplify()\n```\n\n\n\n\n$\\displaystyle f{\\left(\\mathbf{{x}_{C}},\\mathbf{{y}_{C}},\\mathbf{{z}_{C}} \\right)} \\frac{\\partial}{\\partial \\mathbf{{x}_{C}}} \\operatorname{v_{1}}{\\left(\\mathbf{{x}_{C}},\\mathbf{{y}_{C}},\\mathbf{{z}_{C}} \\right)} + f{\\left(\\mathbf{{x}_{C}},\\mathbf{{y}_{C}},\\mathbf{{z}_{C}} \\right)} \\frac{\\partial}{\\partial \\mathbf{{y}_{C}}} \\operatorname{v_{2}}{\\left(\\mathbf{{x}_{C}},\\mathbf{{y}_{C}},\\mathbf{{z}_{C}} \\right)} + f{\\left(\\mathbf{{x}_{C}},\\mathbf{{y}_{C}},\\mathbf{{z}_{C}} \\right)} \\frac{\\partial}{\\partial \\mathbf{{z}_{C}}} \\operatorname{v_{3}}{\\left(\\mathbf{{x}_{C}},\\mathbf{{y}_{C}},\\mathbf{{z}_{C}} \\right)} + \\operatorname{v_{1}}{\\left(\\mathbf{{x}_{C}},\\mathbf{{y}_{C}},\\mathbf{{z}_{C}} \\right)} \\frac{\\partial}{\\partial \\mathbf{{x}_{C}}} 
f{\\left(\\mathbf{{x}_{C}},\\mathbf{{y}_{C}},\\mathbf{{z}_{C}} \\right)} + \\operatorname{v_{2}}{\\left(\\mathbf{{x}_{C}},\\mathbf{{y}_{C}},\\mathbf{{z}_{C}} \\right)} \\frac{\\partial}{\\partial \\mathbf{{y}_{C}}} f{\\left(\\mathbf{{x}_{C}},\\mathbf{{y}_{C}},\\mathbf{{z}_{C}} \\right)} + \\operatorname{v_{3}}{\\left(\\mathbf{{x}_{C}},\\mathbf{{y}_{C}},\\mathbf{{z}_{C}} \\right)} \\frac{\\partial}{\\partial \\mathbf{{z}_{C}}} f{\\left(\\mathbf{{x}_{C}},\\mathbf{{y}_{C}},\\mathbf{{z}_{C}} \\right)}$\n\n\n\n\n```python\nrhs.expand().doit().simplify()\n```\n\n\n\n\n$\\displaystyle f{\\left(\\mathbf{{x}_{C}},\\mathbf{{y}_{C}},\\mathbf{{z}_{C}} \\right)} \\frac{\\partial}{\\partial \\mathbf{{x}_{C}}} \\operatorname{v_{1}}{\\left(\\mathbf{{x}_{C}},\\mathbf{{y}_{C}},\\mathbf{{z}_{C}} \\right)} + f{\\left(\\mathbf{{x}_{C}},\\mathbf{{y}_{C}},\\mathbf{{z}_{C}} \\right)} \\frac{\\partial}{\\partial \\mathbf{{y}_{C}}} \\operatorname{v_{2}}{\\left(\\mathbf{{x}_{C}},\\mathbf{{y}_{C}},\\mathbf{{z}_{C}} \\right)} + f{\\left(\\mathbf{{x}_{C}},\\mathbf{{y}_{C}},\\mathbf{{z}_{C}} \\right)} \\frac{\\partial}{\\partial \\mathbf{{z}_{C}}} \\operatorname{v_{3}}{\\left(\\mathbf{{x}_{C}},\\mathbf{{y}_{C}},\\mathbf{{z}_{C}} \\right)} + \\operatorname{v_{1}}{\\left(\\mathbf{{x}_{C}},\\mathbf{{y}_{C}},\\mathbf{{z}_{C}} \\right)} \\frac{\\partial}{\\partial \\mathbf{{x}_{C}}} f{\\left(\\mathbf{{x}_{C}},\\mathbf{{y}_{C}},\\mathbf{{z}_{C}} \\right)} + \\operatorname{v_{2}}{\\left(\\mathbf{{x}_{C}},\\mathbf{{y}_{C}},\\mathbf{{z}_{C}} \\right)} \\frac{\\partial}{\\partial \\mathbf{{y}_{C}}} f{\\left(\\mathbf{{x}_{C}},\\mathbf{{y}_{C}},\\mathbf{{z}_{C}} \\right)} + \\operatorname{v_{3}}{\\left(\\mathbf{{x}_{C}},\\mathbf{{y}_{C}},\\mathbf{{z}_{C}} \\right)} \\frac{\\partial}{\\partial \\mathbf{{z}_{C}}} f{\\left(\\mathbf{{x}_{C}},\\mathbf{{y}_{C}},\\mathbf{{z}_{C}} \\right)}$\n\n\n\n\n```python\n# Thus, the general form of the third product rule mentioned above can be proven using sympy.vector.\n\n# source: https://docs.sympy.org/latest/modules/vector/examples.html\n```\n", "meta": {"hexsha": "cb0a4e6782b05dd69cf953c419f0f053e8ab6dbc", "size": 13248, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Personal_Projects/vector math calculus 2.ipynb", "max_stars_repo_name": "NSC9/Sample_of_Work", "max_stars_repo_head_hexsha": "8f8160fbf0aa4fd514d4a5046668a194997aade6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Personal_Projects/vector math calculus 2.ipynb", "max_issues_repo_name": "NSC9/Sample_of_Work", "max_issues_repo_head_hexsha": "8f8160fbf0aa4fd514d4a5046668a194997aade6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Personal_Projects/vector math calculus 2.ipynb", "max_forks_repo_name": "NSC9/Sample_of_Work", "max_forks_repo_head_hexsha": "8f8160fbf0aa4fd514d4a5046668a194997aade6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.3115727003, "max_line_length": 1313, "alphanum_fraction": 0.5012832126, "converted": true, "num_tokens": 2646, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9219218284193597, "lm_q2_score": 0.8902942319436397, "lm_q1q2_score": 0.8207816861446897}} {"text": "# Bayesian estimation\n\nA core concept in machine learning and related fields\n\nProbablistic views and concepts\n\nBayes classifiers\n\n## Conditional probability\n\n$p(A \\, | \\, B)$: the probability of event $A$ given $B$\n\nFrom basic probability:\n$$\n\\begin{align}\np(A \\, | \\, B)\n&=\n\\frac{p(A \\bigcap B)}{p(B)}\n\\end{align}\n$$\nwhere $p(A \\bigcap B)$ is the joint probability, of $A$ and $B$ both happen.\n\nAlternative representation:\n$$\n\\begin{align}\np(A \\bigcap B)\n&=\np(A, B)\n\\end{align}\n$$\n\n$$\n\\begin{align}\np(A \\, | \\, B)\n&=\n\\frac{p(A, B)}{p(B)}\n\\end{align}\n$$\n\nAssume $p(B) > 0$ as otherwise the question is ill-defined.\n\n### Bayes' rule\n\nApply conditional probability to the numerator again:\n$$\n\\begin{align}\np(A \\, | \\, B)\n&=\n\\frac{p(A, B)}{p(B)}\n\\\\\n&=\n\\frac{p(B \\, | \\, A) p(A)}{p(B)}\n\\end{align}\n$$\n\nTo help remember, consider symmetry:\n$$\n\\begin{align}\np(A, B)\n&=\np(A \\, | \\, B) p(B)\n\\\\\n&=\np(B \\, | \\, A) p(A)\n\\end{align}\n$$\n\n\n## Classification\n\n$p(C \\, | \\, \\mathbf{x})$\n\nFrom an observed data $\\mathbf{x}$ we want to predict probability of $C$, the class label.\n\nWe take a probabilistic view because real world is often non-deterministic.\n\n### Iris example\n\n$C$: type of flower\n\n$\\mathbf{x}$: flower features\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
*(Figure: example images of the three iris classes: Setosa, Versicolor, Virginica.)*\n
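\n\nFor concreteness, the same three classes and the flower measurements that make up $\mathbf{x}$ are available in scikit-learn (a small added peek at the data; the code example at the end of these notes loads the same dataset):\n\n```python\nfrom sklearn import datasets\n\niris = datasets.load_iris()\nprint(iris.feature_names)   # the four measurements used as features\nprint(iris.target_names)    # the classes C: setosa, versicolor, virginica\nprint(iris.data[0], iris.target_names[iris.target[0]])  # one labelled example\n```\n\n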
    \n\n\n### Banking example\n\nA bank decides whether to make a loan to a customer:\n* $\n\\mathbf{x} = \n\\begin{pmatrix}\nx_1 \\\\\nx_2\n\\end{pmatrix}\n$\n: customer income $x_1$ and asset $x_2$\n\n* $C$: 0/1 if the customer is unlike/likely to pay back the loan\n\nMake a loan if $P(C = 1 \\, | \\, \\mathbf{x}) > 0.5$ or other threshold value\n\n### Bayes' rule\n\nHow to compute $P(C \\, | \\, \\mathbf{x})$, which is unknown?\n\nFrom conditional probability:\n$$\n\\begin{align}\np\\left(C \\, | \\, \\mathbf{x}\\right)\n&= \n\\frac{p\\left(\\mathbf{x} \\, | \\, C\\right) p\\left(C\\right)}{p\\left(\\mathbf{x}\\right)}\n\\\\\n&\\propto\np\\left(\\mathbf{x} \\, | \\, C\\right) p\\left(C\\right)\n\\end{align}\n$$\n\n* $p(C \\, | \\, \\mathbf{x})$: posterior\n
the likelihood of $C$ given $\mathbf{x}$\n\n* $p(C)$: prior\nhow likely $C$ is before observing $\mathbf{x}$\n\n* $p(\mathbf{x} \, | \, C)$: likelihood\nhow likely $\mathbf{x}$ is if it belongs to $C$\n\n* $p(\mathbf{x})$: marginal/evidence\n
    \nconstant for given $\\mathbf{x}$ \n\nWe can compute $P(C \\, | \\, \\mathbf{x})$ (posterior), if given $p(C)$ (prior) and $p(\\mathbf{x} \\, | \\, C)$ (likelihood)\n\n### Rational decision making\n\nHumans tend to over focus on rare events\n\nFor example:\n\n$\\mathbf{x}$: get killed\n\n$C$: cause of death\n* $C_{car}$: car crash\n* $C_{plane}$: airplane crash\n\nPlane crash is much more deadly than car crash:\n$\np(\\mathbf{x} \\, | \\, C_{plane}) \\gg p(\\mathbf{x} \\, | \\, C_{car})\n$\n\nsay\n$\n\\begin{align}\np(\\mathbf{x} \\, | \\, C_{plane}) &= 1.0\n\\\\\np(\\mathbf{x} \\, | \\, C_{car}) &= 0.1\n\\end{align}\n$\n\nBut plane crash is much rarer than car crash:\n$\np(C_{plane}) \\ll p(C_{car})\n$\n\nsay\n$\n\\begin{align}\np(C_{plane}) &= 0.001 \\\\\np(C_{car}) &= 0.1\n\\end{align}\n$\n\nMultiply together:\n$\n\\begin{align}\np(\\mathbf{x} \\, | \\, C_{plane}) p(C_{plane}) &= 0.001\n\\\\\np(\\mathbf{x} \\, | \\, C_{car}) p(C_{car}) &= 0.01\n\\end{align}\n$\n\n$\n\\begin{align}\n\\frac{p(C_{plane} \\, | \\, \\mathbf{x})}{p(C_{car} \\, | \\, \\mathbf{x})}\n&=\n\\frac{p(\\mathbf{x} \\, | \\, C_{plane}) p(C_{plane})}{p(\\mathbf{x} \\, | \\, C_{car}) p(C_{car})}\n\\end{align}\n$\n\nThus plane travel is actually safter than car travel:\n$\np(C_{plane} \\, | \\, \\mathbf{x}) < p(C_{car} \\, | \\, \\mathbf{x})\n$\n\n# Breast cancer example\n\n$C$: has cancer\n\n$\\overline{C}$: no cancer\n\n$1/0$: positive/negative cancer screening result\n\n$p(C) = 0.01$\n\n$C$:\n* $p(1 | C) = 0.8$\n* $p(0 | C) = 0.2$\n\n$\\overline{C}$:\n* $p(1 | \\overline{C}) = 0.1$\n* $p(0 | \\overline{C}) = 0.9$\n\nWhat is $p(C | 1)$?\n\nhttps://betterexplained.com/articles/an-intuitive-and-short-explanation-of-bayes-theorem/\n\n## Other applications\n\n\n### Value \n\n$R(C_i\\, | \\mathbf{x})$: expected value (e.g. utility/loss/risk) for taking class $C_i$ given data $\\mathbf{x}$\n\n$R_{ik}$: value for taking class $C_i$ when the actual class is $C_k$\n\n$$\n\\begin{align}\nR(C_i \\, | \\mathbf{x})\n=\n\\sum_{k} R_{ik} p(C_k \\, | \\, \\mathbf{x})\n\\end{align}\n$$\n\nGoal: select $C_i$ to optimize $R(C_i \\, | \\, \\mathbf{x})$ given $\\mathbf{x}$\n\n\n### Association\n\n$\nX \\rightarrow Y\n$\n* $X$: antecedent\n* $Y$: consequent\n\nExample: basket analysis for shopping,\n$X$ and $Y$ can be sets of item(s).\n\nSupport:\n$$\np(X, Y)\n$$\n, the statistical significance of having $X$ and $Y$ together\n\nConfidence:\n$$\np(Y | X)\n$$\n, how likely $Y$ can be predicted from $X$\n\nLift:\n$$\n\\begin{align}\n\\frac{p(X, Y)}{p(X)p(Y)}\n&=\n\\frac{p(Y | X)}{p(Y)}\n\\end{align}\n$$\n, $> 1$, $< 1$, $=1$, $X$ makes $Y$ more, less, equally likely \n\n## Learning\n\nFrom training data $\\mathbf{X}$ we want to estimate model parameters $\\Theta$.\n\n$$\n\\begin{align}\np\\left(\\Theta | \\mathbf{X}\\right)\n&= \n\\frac{p\\left(\\mathbf{X} | \\Theta\\right) p\\left(\\Theta\\right)}{p\\left(\\mathbf{X}\\right)}\n\\\\\n&\\propto\np\\left(\\mathbf{X} | \\Theta\\right) p\\left(\\Theta\\right)\n\\end{align}\n$$\n\n* $p(\\Theta | \\mathbf{X})$: posterior\n
    \nthe likelihood of $\\Theta$ given $\\mathbf{X}$\n\n* $p(\\Theta)$: prior\n
    \nhow likely $\\Theta$ is before observing $\\mathbf{X}$\n\n* $p(\\mathbf{X} | \\Theta)$: likelihood\n
    \nhow likely $\\mathbf{X}$ is if the model parameters are $\\Theta$\n\n* $p(\\mathbf{X})$: marginal/evidence\n
    \nconstant for given $\\mathbf{X}$ \n\n### MAP (maximum a posteriori) estimation\n$$\n\\Theta_{MAP} = argmax_{\\Theta} p(\\Theta | \\mathbf{X})\n$$\n\nIf we don't have $p(\\Theta)$, it can be assumed to be flat\n$\np(\\Theta) = 1\n$\nand MAP is equivalent to ML:\n\n### ML (maximum likelihood) estimation\n$$\n\\Theta_{ML} = argmax_{\\Theta} p(\\mathbf{X} | \\Theta)\n$$\n\nUnder the often iid (identical and independently distributed) assumption:\n$$\np(\\mathbf{X} | \\Theta) = \\prod_{t=1}^{N} p(\\mathbf{x}^{(t)} | \\Theta)\n$$\n, where $\\{\\mathbf{x}^{t}\\}$ are the individual samples within $\\mathbf{X}$.\n($\\mathbf{X}$ is a matrix and $\\{\\mathbf{x}^{t}\\}$ are its individual columns.)\n\n### Bayes estimator\nThe expected value of the posterior density:\n$$\n\\begin{align}\n\\Theta_{Bayes} \n&=\nE(\\Theta | \\mathbf{X}) \n\\\\\n&= \n\\int \\Theta p(\\Theta | \\mathbf{X}) d\\Theta\n\\end{align}\n$$\n\nThe best estimate of a random variable/vector is its mean.\n\n\n## Gaussian example\n\n$$\n\\begin{align}\np(\\mathbf{X} | \\Theta)\n&=\n\\frac{1}{(2\\pi)^{N/2}\\sigma^N} \\exp\\left(-\\frac{\\sum_{t=1}^N (\\mathbf{x}^{(t)} - \\Theta)^2}{2\\sigma^2}\\right)\n\\\\\np(\\Theta)\n&=\n\\frac{1}{\\sqrt{2\\pi}\\sigma_0} \\exp\\left( -\\frac{(\\Theta - \\mu_0)^2}{2\\sigma_0^2} \\right)\n\\end{align}\n$$\n\n\n### ML (maximum likelihood) estimation \n$$\n\\Theta_{ML} = argmax_{\\Theta} p(\\mathbf{X} | \\Theta)\n$$\n\nCompute the data $\\mathbf{X}$ mean and variance:\n$$\n\\begin{align}\n\\Theta_{ML} &= \\mathbf{m} = \\frac{\\sum_{t=1}^N \\mathbf{x}^{(t)}}{N}\n\\\\\n\\sigma_{ML}^2 &= s^2 = \\frac{\\sum_{t=1}^N \\left(\\mathbf{x} - \\mathbf{m} \\right)^2}{N}\n\\end{align}\n$$\n\nThe $\\mathbf{m}$ part holds regardless of $\\sigma$ (constant or a variable to be optimized).\n\n### MAP estimation\n$$\n\\begin{align}\n\\Theta_{MAP} \n&= \nargmax_{\\Theta} p(\\Theta | \\mathbf{X})\n\\\\\n&=\nargmax_{\\Theta} p(\\mathbf{X} | \\Theta) p(\\Theta)\n\\end{align}\n$$\n\nIt can be shown that\n$$\n\\Theta_{MAP} =\n\\frac{N/\\sigma^2}{N/\\sigma^2 + 1/\\sigma_0^2} \\mathbf{m}\n+\n\\frac{1/\\sigma_0^2}{N/\\sigma^2 + 1/\\sigma_0^2} \\mu_0\n$$\n, i.e. weighted average of prior mean $\\mu_0$ and sample $\\mathbf{X}$ mean $\\mathbf{m}$, with weights inversely proportional to variances.\n\nNote that if we don't know $p(\\Theta)$, we can assume it is a constant distribution $p(\\Theta) = 1$, i.e. $\\sigma_0 = \\infty$.\nThis will give us $\\Theta_{MAP} = \\mathbf{m} = \\Theta_{ML}$, as expected.\n\n### Bayes estimation\n\n$$\n\\Theta_{Bayes} = E(\\Theta | \\mathbf{X}) =\n\\frac{N/\\sigma^2}{N/\\sigma^2 + 1/\\sigma_0^2} \\mathbf{m}\n+\n\\frac{1/\\sigma_0^2}{N/\\sigma^2 + 1/\\sigma_0^2} \\mu_0\n$$\n, i.e. same as $\\Theta_{MAP}$.\n\nThe math derivations are left as exercise.\n\n# Naive Bayes\n\nNaive Bayes assume the features are independent for the likelihood, i.e. for an $n$-dimensional data vector $\\mathbf{x}$\n$$\n\\begin{align}\np(\\mathbf{x} | \\Theta) \n&=\np(\\mathbf{x}_1, \\mathbf{x}_2, \\cdots \\mathbf{x}_n | \\Theta)\n\\\\\n&=\n\\prod_{k=1}^n p( \\mathbf{x}_k | \\Theta)\n\\end{align}\n$$\n\nWe can generalize from a single data item $\\mathbf{x}$ (a vector) to an entire data set $\\mathbf{X}$ (a matrix whose columns are data items) by considering columns of $\\mathbf{X}$ as features, i.e. 
$\\mathbf{X}_{(k)}$\n\nPut the above into our Bayesian rule:\n$$\n\\begin{align}\np(\\Theta | \\mathbf{X})\n&=\n\\frac{p(\\Theta) p\\left(\\mathbf{X} | \\Theta\\right)}{p(\\mathbf{X})}\n\\\\\n&=\n\\frac{p(\\Theta) \\prod_{k=1}^n p\\left(\\mathbf{X}_{(k)} | \\Theta\\right)}{p(\\mathbf{X})}\n\\end{align}\n$$\n\nThe main merit of naive Bayes is that the estimation/computation of individual \n$\np\\left(\\mathbf{X}_{(k)} | \\Theta\\right)\n$\nterms is easier/faster than the joint term\n$\np\\left(\\mathbf{X} | \\Theta \\right)\n$\n.\n\nThis feature independence is just an assumption, but tends to work well in practice.\n\nMore details can be found under:\n* http://scikit-learn.org/stable/modules/naive_bayes.html\n* https://en.wikipedia.org/wiki/Naive_Bayes_classifier\n\n# Code example\n\nscikit learn supports naive Bayes with different math functions for the likelihood, such as Gaussian, Bernoulli, and multinomial.\n\n\n```python\nfrom sklearn import datasets\nimport numpy as np\n\niris = datasets.load_iris()\n\nX = iris.data[:, [2, 3]]\ny = iris.target\n```\n\n\n```python\nfrom sklearn.cross_validation import train_test_split\n\n# splitting data into 70% training and 30% test data: \nX_train, X_test, y_train, y_test = train_test_split(\n X, y, test_size=0.3, random_state=0)\n```\n\n\n```python\nfrom sklearn.preprocessing import StandardScaler\n\nsc = StandardScaler()\nsc.fit(X_train)\nX_train_std = sc.transform(X_train)\nX_test_std = sc.transform(X_test)\n```\n\n\n```python\nfrom sklearn.metrics import accuracy_score\n```\n\n\n```python\nfrom sklearn.naive_bayes import GaussianNB\n\ngnb = GaussianNB()\n\n_ = gnb.fit(X_train_std, y_train)\n```\n\n\n```python\ny_pred = gnb.predict(X_test_std)\n\nprint('Accuracy: %.2f' % accuracy_score(y_test, y_pred))\n```\n\n Accuracy: 0.98\n\n", "meta": {"hexsha": "3229d2747b02c22e7952eb286a38800cc8fc44e1", "size": 19775, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "code/ch03/bayes.ipynb", "max_stars_repo_name": "1iyiwei/pyml", "max_stars_repo_head_hexsha": "9bc0fa94abd8dcb5de92689c981fbd9de2ed1940", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 27, "max_stars_repo_stars_event_min_datetime": "2016-12-29T05:58:14.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-17T10:27:32.000Z", "max_issues_repo_path": "code/ch03/bayes.ipynb", "max_issues_repo_name": "1iyiwei/pyml", "max_issues_repo_head_hexsha": "9bc0fa94abd8dcb5de92689c981fbd9de2ed1940", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "code/ch03/bayes.ipynb", "max_forks_repo_name": "1iyiwei/pyml", "max_forks_repo_head_hexsha": "9bc0fa94abd8dcb5de92689c981fbd9de2ed1940", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 34, "max_forks_repo_forks_event_min_datetime": "2016-09-02T04:59:40.000Z", "max_forks_repo_forks_event_max_datetime": "2020-10-05T02:11:37.000Z", "avg_line_length": 24.3834771887, "max_line_length": 229, "alphanum_fraction": 0.4660935525, "converted": true, "num_tokens": 3650, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9219218370002787, "lm_q2_score": 0.8902942188450159, "lm_q1q2_score": 0.8207816817083252}} {"text": "# 15.1. 
Diving into symbolic computing with SymPy\n\nhttps://ipython-books.github.io/151-diving-into-symbolic-computing-with-sympy/\n\n### Ref\n\n* Math constant - https://docs.sympy.org/0.7.1/modules/mpmath/functions/constants.html\n\n\n```python\nfrom sympy import *\ninit_printing()\n```\n\n\n```python\nvar('x y') # use in interactive session\n```\n\n\n```python\nx, y = symbols('x y') # use in script\n```\n\n\n```python\nexpr1 = (x + 1) ** 2\nexpr2 = x**2 + 2 * x + 1\n```\n\n\n```python\nexpr1, expr2\n```\n\n\n```python\nexpr1 == expr2\n```\n\n\n\n\n False\n\n\n\n\n```python\nsimplify(expr1 - expr2)\n```\n\n\n```python\nsimplify(expr2)\n```\n\n\n```python\nexpr1.subs(x, expr1)\n```\n\n\n```python\nexpr1.subs(x, pi)\n```\n\n\n```python\nv = expr1.subs(x, S(1) / 2)\n```\n\n\n```python\ntype(v)\n```\n\n\n\n\n sympy.core.numbers.Rational\n\n\n\n\n```python\nv.evalf()\n```\n\n\n```python\npi.evalf() # convert sympy expression to numerical value\n```\n\n\n```python\nexp(1).evalf()\n```\n\n\n```python\nf = lambdify(x, expr1) # convert sympy expression to python function\n```\n\n\n```python\nimport numpy as np\ny2 = f(np.linspace(-2., 2., 5))\n```\n\n\n```python\ntype(y2)\n```\n\n\n\n\n numpy.ndarray\n\n\n\n\n```python\nlen(y2)\n```\n\n\n```python\ntype(f)\n```\n\n\n\n\n function\n\n\n\n\n```python\nf(-1)\n```\n\n### Plot function\n\n\n```python\n%matplotlib inline\nvar('x y')\ny = x**2\nplot(y, (x, -4, 4))\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "ab33549d3fd8a30039de6d81d3d7a71fff1c0a52", "size": 38160, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "chapter15_symbolic/01_sympy_intro.ipynb", "max_stars_repo_name": "wgong/cookbook-2nd-code", "max_stars_repo_head_hexsha": "8ca2e5b3c90fee6605f4155e6b9dfb783ce46807", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapter15_symbolic/01_sympy_intro.ipynb", "max_issues_repo_name": "wgong/cookbook-2nd-code", "max_issues_repo_head_hexsha": "8ca2e5b3c90fee6605f4155e6b9dfb783ce46807", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapter15_symbolic/01_sympy_intro.ipynb", "max_forks_repo_name": "wgong/cookbook-2nd-code", "max_forks_repo_head_hexsha": "8ca2e5b3c90fee6605f4155e6b9dfb783ce46807", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 73.667953668, "max_line_length": 16528, "alphanum_fraction": 0.8409853249, "converted": true, "num_tokens": 436, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9284088045171237, "lm_q2_score": 0.8840392893839085, "lm_q1q2_score": 0.8207498598030821}} {"text": "# Analytical Solutions to 2nd-order ODEs\n\nNext, we'll focus on analytical solutions for 2nd-order ODEs, and specifically initial-value problems (IVPs). As before, let's categorize problems based on their solution approach.\n\n## 1. 
Solution by direct integration\n\nIf you have a 2nd-order ODE of this form:\n\\begin{equation}\n\\frac{d^2 y}{dx^2} = f(x)\n\\end{equation}\nthen you can solve by **direct integration**.\n\nFor example, let's say we are trying to solve for the deflection of a cantilever beam $y(x)$ with a force $P$ at the end, where $E$ is the modulus and $I$ is the moment of inertia, and the initial conditions are $y(0)=0$ and $y^{\\prime}(0) = 0$:\n\n\n\\begin{align}\n\\frac{d^2 y}{dx^2} &= \\frac{-P (L-x)}{EI} \\\\\n\\frac{d}{dx} \\left(\\frac{dy}{dx}\\right) &= \\frac{-P}{EI} (L-x) \\\\\n\\int d \\left(\\frac{dy}{dx}\\right) &= \\frac{-P}{EI} \\int (L-x) dx \\\\\ny^{\\prime} = \\frac{dy}{dx} &= \\frac{-P}{EI} \\left(Lx - \\frac{x^2}{2}\\right) + C_1 \\\\\n\\int dy &= \\int \\left( \\frac{-P}{EI} \\left(Lx - \\frac{x^2}{2}\\right) + C_1 \\right) dx \\\\\ny(x) &= \\frac{-P}{EI} \\left(\\frac{L}{2} x^2 - \\frac{1}{6} x^3\\right) + C_1 x + C_2\n\\end{align}\nThat is our general solution; we can obtain the specific solution by applying our two initial conditions:\n\\begin{align}\ny(0) &= 0 = C_2 \\\\\ny^{\\prime}(0) &= 0 = C_1 \\\\\n\\therefore y(x) &= \\frac{P}{EI} \\left( \\frac{x^3}{6} - \\frac{L x^2}{2} \\right)\n\\end{align}\n\n## 2. Solution by substitution\n\nIf we have a 2nd-order ODE of this form:\n\\begin{equation}\n\\frac{d^2 y}{dx^2} = f(x, y^{\\prime})\n\\end{equation}\nthen we can solve by **substitution**, meaning by substituting a new variable for $y^{\\prime}$. (Notice that $y$ itself does not show up in the ODE.)\n\nLet's substitute $u$ for $y^{\\prime}$ in the above ODE:\n\\begin{align}\nu &= y^{\\prime} \\\\\nu^{\\prime} &= y^{\\prime\\prime} \\\\\n\\rightarrow u^{\\prime} &= f(x, u)\n\\end{align}\nNow we have a 1st-order ODE! Then, we can apply the methods previously discussed to solve this; once we find $u(x)$, we can integrate that once more to get $y(x)$.\n\n### Example: falling object\n\nFor example, consider a falling mass where we are solving for the downward distance as a function of time, $y(t)$, that is experiencing the force of gravity downward and a drag force upward. It starts at some reference point so $y(0) = 0$, and has a zero initial downward velocity: $y^{\\prime}(0) = 0$. The governing equation is:\n\\begin{equation}\nm \\frac{d^2 y}{dt^2} = mg - c \\left( \\frac{dy}{dt} \\right)^2\n\\end{equation}\nwhere $m$ is the mass, $g$ is acceleration due to gravity, and $c$ is the drag proportionality constant.\n We can substitute $V$ for $y^{\\prime}$, which gives us a first-order ODE:\n\\begin{align}\n\\text{let} \\quad \\frac{dy}{dt} &= V \\\\\n\\rightarrow m \\frac{dV}{dt} &= mg - c V^2\n\\end{align}\nThen, we can solve this for $V(t)$ using our initial condition for velocity $V(0) = 0$. 
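\n\n(As a quick check of this step, the reduced first-order ODE can also be handed to a computer algebra system. Below is a sketch using SymPy's `dsolve`; the symbol names are our own, and the exact form SymPy returns may differ from the hand-derived expression that follows.)\n\n\n```python\n# Sketch: solve the reduced first-order ODE  m*dV/dt = m*g - c*V**2  symbolically.\n# Symbol names here are ours; dsolve returns the general solution with one\n# integration constant, which the initial condition V(0) = 0 then fixes.\nimport sympy as sp\n\nt = sp.symbols('t', nonnegative=True)\nm, g, c = sp.symbols('m g c', positive=True)\nV = sp.Function('V')\n\node = sp.Eq(m * V(t).diff(t), m*g - c*V(t)**2)\nprint(sp.dsolve(ode, V(t)))\n```\n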
Once we have that, we can integrate once more:\n\\begin{equation}\ny(t) = \\int V(t) dt\n\\end{equation}\nand apply our initial condition for position, $y(0) = 0$, to obtain $y(t)$.\n\nHere is the full process:\n\\begin{align}\n\\frac{dV}{dt} &= g - \\frac{c}{m} V^2 \\\\\n\\frac{dV}{g - \\frac{c}{m} V^2} &= dt \\\\\n\\frac{m}{c} \\int \\frac{dV}{a^2 - V^2} &= \\int dt = t + \\bar{c}, \\quad \\text{where} \\quad a = \\sqrt{\\frac{mg}{c}} \\\\\n\\frac{m}{c} \\frac{1}{a} \\tanh^{-1} \\left(\\frac{V}{a}\\right) &= t + c_1 \\\\\nV &= a \\tanh \\left( \\frac{a c}{m} t + c_1 \\right) \\\\\n\\therefore V(t) &= \\sqrt{\\frac{mg}{c}} \\tanh \\left(\\sqrt{\\frac{gc}{m}} t + c_1\\right)\n\\end{align}\nApplying the initial condition for velocity, $V(0) = 0$:\n\\begin{align}\nV(0) &= 0 = \\sqrt{\\frac{mg}{c}} \\tanh \\left(0 + c_1\\right) \\\\\n\\therefore c_1 &= 0 \\\\\nV(t) &= \\sqrt{\\frac{mg}{c}} \\tanh \\left(\\sqrt{\\frac{gc}{m}} t\\right)\n\\end{align}\nThen, to get $y(t)$, we just need to integrate once more:\n\\begin{align}\n\\frac{dy}{dt} = V(t) &= \\sqrt{\\frac{mg}{c}} \\tanh \\left(\\sqrt{\\frac{gc}{m}} t\\right) \\\\\n\\int dy &= \\sqrt{\\frac{mg}{c}} \\int \\tanh \\left(\\sqrt{\\frac{gc}{m}} t\\right) dt \\\\\ny(t) &= \\sqrt{\\frac{mg}{c}} \\sqrt{\\frac{m}{gc}} \\log\\left(\\cosh\\left(\\sqrt{\\frac{gc}{m}} t\\right)\\right) + c_2 \\\\\n\\rightarrow y(t) &= \\frac{m}{c} \\log\\left(\\cosh\\left(\\sqrt{\\frac{gc}{m}} t\\right)\\right) + c_2\n\\end{align}\nFinally, we can apply the initial condition for position, $y(0) = 0$, to get our solution:\n\\begin{align}\ny(0) &= 0 = \\frac{m}{c} \\log\\left(\\cosh\\left(0\\right)\\right) + c_2 = c_2 \\\\\n\\rightarrow c_2 &= 0 \\\\\ny(t) &= \\frac{m}{c} \\log\\left(\\cosh\\left(\\sqrt{\\frac{gc}{m}} t\\right)\\right)\n\\end{align}\n\n### Example: catenary problem\n\nThe catenary problem describes the shape of a hanging chain or rope fixed between two points. (It was also a favorite of one of my professors, [Joe Prahl](https://en.wikipedia.org/wiki/Joseph_M._Prahl), and I like to teach it as an example in his honor.) The downward displacement of the hanging string/chain/rope as a function of horizontal position, $y(x)$, is governed by the equation\n\\begin{equation}\ny^{\\prime\\prime} = \\sqrt{1 + (y^{\\prime})^2}\n\\end{equation}\n\n\n\nThis is actually a **boundary value problem**, with the boundary conditions for the displacement at one side $y(0) = 0$ and that the slope is zero in the middle: $\\frac{dy}{dx}\\left(\\frac{L}{2}\\right) = 0$. (Please note that I have skipped the derivation of the governing equation, and left some details out.)\n\nWe can solve this via substitution, by letting a new variable $u = y^{\\prime}$; then, $u^{\\prime} = \\frac{du}{dx} = y^{\\prime\\prime}$. 
This gives is a first-order ODE, which we can integrate:\n\\begin{align}\n\\frac{du}{dx} &= \\sqrt{1 + u^2} \\\\\n\\int \\frac{du}{\\sqrt{1 + u^2}} &= \\int dx \\\\\n\\sinh^{-1}(u) &= x + c_1, \\quad \\text{where } \\sinh(x) = \\frac{e^x - e^{-x}}{2} \\\\\nu(x) &= \\sinh(x + c_1)\n\\end{align}\n\nThen, we can integrate once again to get $y(x)$:\n\\begin{align}\n\\frac{dy}{dx} &= u(x) = \\sinh(x + c_1) \\\\\n\\int dy &= \\int \\sinh(x + c_1) dx = \\int \\left(\\sinh(x)\\cosh(c_1) + \\cosh(x)\\sinh(c_1)\\right)dx \\\\\ny(x) &= \\cosh(x)\\cosh(c_1) + \\sinh(x)\\sinh(c_1) + c_2 \\\\\n\\rightarrow y(x) &= \\cosh(x + c_1) + c_2\n\\end{align}\nThis is the general solution to the catenary problem, and applies to any boundary conditions.\n\nFor our specific case, we can apply the boundary conditions and find the particular solution, though it involves some algebra...:\n\\begin{align}\ny(0) &= 0 = \\cosh(c_1) + c_2 \\\\\n\\frac{dy}{dx}\\left(\\frac{L}{2}\\right) &= u(0) = \\sinh \\left(\\frac{L}{2} + c_1\\right) \\\\\n\\rightarrow c_1 &= -\\frac{L}{2} \\\\\n0 &= \\cosh \\left( -\\frac{L}{2} \\right) + c_2 \\\\\n\\rightarrow c_2 &= -\\cosh\\left( -\\frac{L}{2} \\right) = -\\cosh\\left(\\frac{L}{2} \\right)\n\\end{align}\nSo, the overall solution for the catenary problem with the given boundary conditions is\n\\begin{equation}\ny(x) = \\cosh \\left(x - \\frac{L}{2}\\right) - \\cosh\\left( \\frac{L}{2} \\right)\n\\end{equation}\n\nLet's see what this looks like:\n\n\n```matlab\nL = 1.0;\nx = linspace(0, 1);\ny = cosh(x - (L/2.)) - cosh(L/2.);\nplot(x, y)\n```\n\nPlease note that I've made some simplifications in the above work, and skipped the details of how the ODE is derived. In general, the solution for the shape is\n\\begin{equation}\ny(x) = C \\cosh \\frac{x + c_1}{C} + c_2\n\\end{equation}\nwhere you would solve for the constants $C$, $c_1$, and $c_2$ using the constraints:\n\\begin{align}\n\\int_{x_a}^{x_b} \\sqrt{1 + (y^{\\prime})^2} dx &= L \\\\\ny(x_a) &= y_a \\\\\ny(x_b) &= y_b \\;,\n\\end{align}\nwhere $L$ is the length of the rope/chain.\n\nYou can read more about the catenary problem here (for example): \n\n## 3. Homogeneous 2nd-order ODEs\n\nAn important category of 2nd-order ODEs are those that look like\n\\begin{equation}\ny^{\\prime\\prime} + p(x) y^{\\prime} + q(x) y = 0\n\\end{equation}\n\"Homogeneous\" means that the ODE is unforced; that is, the right-hand side is zero.\n\nDepending on what $p(x)$ and $q(x)$ look like, we have a few different solution approaches:\n\n- constant coefficients: $y^{\\prime\\prime} + a y^{\\prime} + by = 0$\n- Euler-Cauchy equations: $x^2 y^{\\prime\\prime} + axy^{\\prime} + by = 0$\n- Series solutions\n\nFirst, let's talk about the characteristics of linear, homogeneous 2nd-order ODEs:\n\n#### Solutions have two parts: \nSolutions have two parts: $y(x) = c_1 y_1 + c_2 y_2$, where $y_1$ and $y_2$ are each a basis of the solution.\n\n#### Linearly independent:\n\nThe two parts of the solution $y_1$ and $y_2$ are linearly independent.\n\nOne way of defining this is that $a_1 y_1 + a_2 y_2 = 0$ only has the trivial solution $a_1=0$ and $a_2=0$.\n\nAnother way of thinking about this is that $y_1$ and $y_2$ are linearly *dependent* if one is a multiple of the other, like $y_1 = x$ and $y_2 = 5x$. This *cannot* be solutions to a linear, homogeneous ODE.\n\n#### Both parts satisfy the ODE:\n\n$y_1$ and $y_2$ each satisfy the ODE. 
Meaning, you can plug each of them into the ODE for $y$ and obtain 0.\n\nHowever, we need both parts together to fully solve the ODE.\n\n#### Reduction of order: \n\nIf $y_1$ is known, we can get $y_2$ by **reduction of order**. Let $y_2 = u y_1$, where $u$ is some unknown function of $x$. Then, put $y_2$ into the ODE $y^{\\prime}{\\prime} + p(x) y^{\\prime} + q(x) y = 0$:\n\\begin{align}\ny_2 &= u y_1 \\\\\ny_2^{\\prime} &= u y_1^{\\prime} + u^{\\prime} y_1 \\\\\ny_2^{\\prime\\prime} &= 2 u^{\\prime} y_1^{\\prime} + u^{\\prime\\prime} y_1 + u y_1^{\\prime\\prime} \\\\\n\\rightarrow u^{\\prime\\prime} &= - \\left[ p(x) + \\left(\\frac{2 y_1^{\\prime}}{y_1}\\right) \\right] u^{\\prime}\n\\text{or, } u^{\\prime\\prime} &= - \\left( g(x) \\right) u^{\\prime}\n\\end{align}\nNow, we have an ODE with only $u^{\\prime\\prime}$, $u^{\\prime}$, and some function $g(x)$\u2014so we can solve by substitution! Let $u^{\\prime} = v$, and then we have $v^{\\prime} = -g(x) v$:\n\\begin{align}\n\\frac{dv}{dx} &= - \\left( p(x) + \\frac{2 y_1^{\\prime}}{y_1} \\right) v \\\\\n\\int \\frac{dv}{v} &= - \\int \\left(p(x) + \\frac{2 y_1^{\\prime}}{y_1} \\right) dx \\\\\n\\text{Recall } 2 \\frac{d}{dx} \\left( \\ln y_1 \\right) &= 2 \\frac{y_1^{\\prime}}{y_1} \\\\\n\\therefore \\int \\frac{dv}{v} &= - \\int \\left(p(x) + 2 \\frac{d}{dx} \\left( \\ln y_1 \\right) \\right) dx \\\\\n\\ln v &= -\\int p(x) dx - 2 \\ln y_1 \\\\\n\\rightarrow v &= \\frac{\\exp\\left( -\\int p(x)dx \\right)}{y_1^2}\n\\end{align}\n\nSo, the actual solution procedure is then:\n\n 1. Solve for $v$: $v = \\frac{\\exp\\left( -\\int p(x)dx \\right)}{y_1^2}$\n 2. Solve for $u$: $u = \\int v dx$\n 3. Get $y_2$: $y_2 = u y_1$\n \nHere's an example, where we know one part of the solution $y_1 = e^{-x}$:\n\\begin{align}\ny^{\\prime\\prime} + 2 y^{\\prime} + y &= 0 \\\\\n\\text{Step 1:} \\quad v = \\frac{\\exp \\left( -\\int 2dx \\right)}{ \\left(e^{-x}\\right)^2} = \\frac{e^{-2x}}{e^{-2x}} &= 1 \\\\\n\\text{Step 2:} \\quad u = \\int v dx = \\int 1 dx &= x \\\\\n\\text{Step 3:} \\quad y_2 &= x e^{-x}\n\\end{align}\nThen, the general solution to the ODE is $y(x) = c_1 e^{-x} + c_2 x e^{-x}$.\n\n### 3a. Equations with constant coefficents\n\nA common category of 2nd-order homogeneous ODEs are equations with constant coefficients, of the form:\n\\begin{equation}\ny^{\\prime\\prime} + a y^{\\prime} + by = 0\n\\end{equation}\nNote that these are unforced, and the right-hand side is zero.\n\nSolutions to these equations take the form $y(x) = e^{\\lambda x}$, and inserting this into the ODE gives us the characteristic equation\n\\begin{equation}\n\\lambda^2 + a \\lambda + b = 0\n\\end{equation}\nwhich we can solve to find the solution for given coefficients $a$ and $b$ and initial conditions. Depending on those coefficients and the solution to the characteristic equation, our solution can fall into one of three cases:\n\n* Real roots: $\\lambda_1$ and $\\lambda_2$. This is an **overdamped** system and the full solution takes the form\n\\begin{equation}\ny(x) = c_1 e^{\\lambda_1 x} + c_2 e^{\\lambda_2 x}\n\\end{equation}\n\n* Repeated roots: $\\lambda_1 = \\lambda_2 = \\lambda$. This is a **critically damped** system and the full solution is\n\\begin{equation}\ny(x) = c_1 e^{\\lambda x} + c_2 x e^{\\lambda x}\n\\end{equation}\n(Where does that second part come from, you might ask? Well, we know that $y_1$ is $e^{\\lambda x}$, but the second part cannot also be $e^{\\lambda x}$ because those are linearly dependent. 
So, we use *reduction of order* to find $y_2$, which is $x e^{\\lambda x}$.\n\n* Imaginary roots: $\\lambda = \\frac{-a}{2} \\pm \\beta i$, where $\\beta = \\frac{1}{2} \\sqrt{4b - a^2}$. This is an **underdamped** system and the solution takes the form\n\\begin{equation}\ny(x) = e^{-ax/2} \\left( c_1 \\sin \\beta x + c_2 \\cos \\beta x \\right)\n\\end{equation}\n\nSome examples:\n\n1. $y^{\\prime\\prime} + 3 y^{\\prime} + 2y = 0$\n\\begin{align}\n\\rightarrow \\lambda^2 + 3\\lambda + 2 &= 0 \\\\\n(\\lambda + 2)(\\lambda + 1) &= 0 \\\\\n\\lambda &= -2, -1 \\\\\ny(x) &= c_1 e^{-x} + c_2 e^{-2x}\n\\end{align}\nThen, we would use the initial conditions given for $y(0)$ and $y^{\\prime}(0)$ to find $c_1$ and $c_2$.\n\n2. $y^{\\prime\\prime} + 6 y^{\\prime} + 9y = 0$\n\\begin{align}\n\\rightarrow \\lambda^2 + 6\\lambda + 9 &= 0 \\\\\n(\\lambda + 3)(\\lambda + 3) &= 0 \\\\\n\\lambda &= -3 \\\\\ny(x) &= c_1 e^{-3x} + c_2 x e^{-3x}\n\\end{align}\n\n3. $y^{\\prime\\prime} + 6 y^{\\prime} + 25 y = 0$\n\\begin{align}\n\\rightarrow \\lambda^2 + 6\\lambda + 25 &= 0 \\\\\n\\lambda &= -3 \\pm 4i \\\\\ny(x) &= e^{-3x} \\left( c_1 \\sin 4x + c_2 \\cos 4x \\right)\n\\end{align}\n\n### 3b. Euler-Cauchy equations\n\nEuler-Cauchy equations are of the form\n\\begin{equation}\nx^2 y^{\\prime\\prime} + axy^{\\prime} + by = 0\n\\end{equation}\n\nSolutions take the form $y = x^m$, which when plugged into the ODE leads to a different characterisic equation to find $m$:\n\\begin{align}\ny &= x^m \\\\\ny^{\\prime} &= m x^{m-1} \\\\\ny^{\\prime\\prime} &= m (m-1) x^{m-2} \\\\\n\\rightarrow x^2 m (m-1) x^{m-2} + axmx^{m-1} + bx^m &= 0 \\\\\nm^2 + (a-1)m + b &= 0\n\\end{align}\nThis is our new characteristic formula for these problems, and solving for the roots of this equation gives us $m$ and thus our general solution.\n\nLike equations with constant coefficients, we have three solution forms depending on the roots of the characteristic equation:\n\n* Real roots: $y(x) = c_1 x^{m_1} + c_2 x^{m_2}$\n* Repeated roots: $y(x) = c_1 x^m + c_2 x^m \\ln x$\n* Imaginary roots: $m = \\alpha \\pm \\beta i$, and $y(x) = x^{\\alpha} \\left[c_1 \\cos (\\beta \\ln x) + c_2 \\sin (\\beta \\ln x)\\right]$\n\n## 4. Inhomogeneous 2nd-order ODEs\n\nInhomogeneous, or forced, 2nd-order ODEs with constant coefficients take the form\n\\begin{equation}\ny^{\\prime\\prime} + a y^{\\prime} + by = F(t)\n\\end{equation}\nwith initial conditions $y(0) = y_0$ and $y^{\\prime}(0) = y_0^{\\prime}$. Depending on the form of the forcing function $F(t)$, we can solve with techniques such as\n\n - the method of undetermined coefficients\n - variation of parameters\n - LaPlace transforms\n\nThe solution in general to inhomogeneous ODEs includes two parts:\n\\begin{equation}\ny(t) = y_{\\text{H}} + y_{\\text{IH}} = c_1 y_1 + c_2 y_2 + y_{\\text{IH}} \\;,\n\\end{equation}\nwhere $y_{\\text{H}}$ is the solution from the equivalent homogeneous ODE $y^{\\prime\\prime} + a y^{\\prime} + b y = 0$.\n\nThe forcing function $F(t)$ may be \n\n - continuous\n - periodic\n - aperiodic/discontinuous\n \n### 4a. Continuous $F(t)$: method of undetermined coefficients\n\nFor continuous forcing functions, we have two solution methods: the method of undetermined coefficients, and variation of parameters. \n\nGenerally you'll want to use the method of undetermined coefficients when possible, which depends on if $F(t)$ matches one of a set of functions. 
In that case, the form of the inhomogeneous solution $y_{\\text{IH}}(t)$ follows that of the forcing function $F(t)$, with one or more unknown constants:\n\n| $F(t)$ | $y_{\\text{IH}}(t)$ |\n| ------------- | :-------:|\n| constant | $K$ |\n| $\\cos \\omega t$ | $K_1 \\cos \\omega t + K_2 \\sin \\omega t$ |\n| $\\sin \\omega t$ | $K_1 \\cos \\omega t + K_2 \\sin \\omega t$ |\n| $e^{-at}$ | $K e^{-at}$ |\n| $(A) t$ | $K_0 + K_1 t$ |\n| $t^n$ | $K_0 + K_1 t + K_2 t^2 + \\ldots + K_n t^n$|\n\nFor combinations of these functions, we can combine functions; for example, given\n\\begin{align}\nF(t) &= e^{-at} \\cos \\omega t \\quad \\text{or} e^{-at} \\sin \\omega t \\\\\ny_{\\text{IH}} &= K_1 e^{-at} \\cos \\omega t + K_2 e^{-at} \\sin \\omega t\n\\end{align}\n(Note how in all the above cases how the inhomogeneous solution follows the functional form of the forcing function; for example, the exponential decay rate $a$ or the sinusoidal frequency $\\omega$ match.\n\nThe method of undetermined coefficients works by plugging the candidate inhomogeneous solutionn $y_{\\text{IH}}$ into the full ODE, and solving for the constants (e.g., $K$)\u2014but **not** from the initial conditions.\n\nFor example, let's solve\n\\begin{equation}\ny^{\\prime\\prime} + 2y^{\\prime} + y = e^{-x}\n\\end{equation}\nwith initial conditions $y(0) = y^{\\prime}(0) = 0$. First, we should find the solution to the homogeneous equation\n\\begin{equation}\ny^{\\prime\\prime} + 2y^{\\prime} + y = 0 \\;.\n\\end{equation}\nWe can do this by using the associated characteristic formula\n\\begin{align}\n\\lambda^2 + 2 \\lambda + 1 &= 0 \\\\\n(\\lambda + 1)(\\lambda + 1) &= 0 \\\\\n\\rightarrow y_{\\text{H}} &= c_1 e^{-x} + c_2 x e^{-x}\n\\end{align}\n\nTo find the inhomogeneous solution, we would look at the table above to find what matches the forcing function $e^{-x}$. Normally, we'd grab $K e^{-x}$, but that would not be linearly independent from the first part of the homogeneous solution $y_{\\text{H}}$. The same is true for $K x e^{-x}$, which is linearly dependent with the second part of $y_{\\text{H}}$, but $K x^2 e^{-x}$ works! Then, we just need to find $K$ by plugging this into the ODE:\n\\begin{align}\ny_{\\text{IH}} &= K x^2 e^{-x} \\\\\ny^{\\prime} &= K e^{-x} (2x - x^2) \\\\\ny^{\\prime\\prime} &= K e^{-x} (x^2 - 4x + 2) \\\\\n2 K &= 1 \\\\\n\\rightarrow K &= \\frac{1}{2} \\\\\ny_{\\text{IH}} &= \\frac{1}{2} x^2 e^{-x}\n\\end{align}\nThus, the overall general solution is \n\\begin{equation}\ny(x) = c_1 e^{-x} + c_2 x e^{-x} + \\frac{1}{2} x^2 e^{-x}\n\\end{equation}\nand we would solve for the integration constants $c_1$ and $c_2$ using the initial conditions.\n\nImportant points to remember:\n\n- The constants of the inhomogeneous solution $y_{\\text{IH}}$ come from the ODE, **not** the initial conditions.\n- Only solve for the integration constants $c_1$ and $c_2$ (part of the homogeneous solution) once you have the full general solution $y = c_1 y_1 + c_2 y_2 + y_{\\text{IH}}$.\n\n\n### 4b. Continuous $F(t)$: variation of parameters\n\nWe have the variation of parameters approach to solve for inhomogeneous 2nd-order ODEs that are more general:\n\\begin{equation}\ny^{\\prime\\prime} + p(x) y^{\\prime} + q(x) y = r(x)\n\\end{equation}\nIn this case, we can assume a solution $y(x) = y_1 u_1 + y_2 u_2$.\n\nThe solution procedure is:\n\n1. Obtain $y_1$ and $y_2$ by solving the homogeneous equation: $y^{\\prime\\prime} + p(x) y^{\\prime} + q(x) y = 0$\n\n2. 
Solve for $u_1$ and $u_2$:\n\\begin{align}\nu_1 &= - \\int \\frac{y_2 r(x)}{W} dx + c_1 \\\\\nu_2 &= \\int \\frac{y_1 r(x)}{W} dx + c_2 \\\\\nW &= \\begin{vmatrix}\ny_1 & y_2\\\\ y_1^{\\prime} & y_2^{\\prime}\\\\\n\\end{vmatrix} = y_1 y_2^{\\prime} - y_2 y_1^{\\prime} \\;,\n\\end{align}\nwhere $W$ is the Wronksian.\n\n3. Then, we have the general solution:\n\\begin{align}\ny &= u_1 y_1 + u_2 y_2 \\\\\n&= \\left( -\\int \\frac{y_2 r(x)}{W} dx + c_1 \\right) y_1 + \\left( \\int \\frac{y_1 r(x)}{W} dx + c_2 \\right) y_2 \\;,\n\\end{align}\nwhere we solve for $c_1$ and $c_2$ using the two initial conditions.\n\n#### Example 1: variation of parameters\n\nFirst, let's try the same example we used for the method of undetermined coefficients above:\n\\begin{equation}\ny^{\\prime\\prime} + 2 y^{\\prime} + y = e^{-x}\n\\end{equation}\nWe already found the homogeneous solution, so we know that $y_1 = e^{-x}$ and $y_2 = x e^{-x}$.\nNext, let's get the Wronksian, and then $u_1$ and $u_2$.\n\\begin{align}\nW &= \\begin{vmatrix} y_1 & y_2 \\\\ y_1^{\\prime} & y_2^{\\prime} \\end{vmatrix} = e^{-x} e^{-x}(1-x) - x e^{-x} (-e^{-x}) = e^{-2x} \\\\\n%\nu_1 &= -\\int \\frac{x e^{-x} e^{-x}}{e^{-2x}} dx + c_1 = -\\int x dx + c_1 = -\\frac{1}{2} x^2 + c_1 \\\\\nu_2 &= \\int \\frac{e^{-x} e^{-x}}{e^{-2x}} dx + c_2 = \\int dx + c_2 = x + c_2 \\\\\ny(x) &= \\left(-\\frac{1}{2} x^2 + c_1\\right) e^{-x} + (x + c_2) x e^{-x} \\\\\n\\end{align}\nAfter simplifying, we obtain the same solution as via the method of undetermined coefficients (but with a bit more work):\n\\begin{equation}\ny(x) = x_1 e^{-x} + c_2 x e^{-x} + \\frac{1}{2} x^2 e^{-x}\n\\end{equation}\n\n#### Example 2: variation of parameters\n\nNow let's try an example that we could *not* solve using the method of undetermined coefficients, with a forcing term that involves hyperbolic cosine (cosh); recall that $\\cosh(x) = \\frac{e^x + e^{-x}}{2}$.\n\\begin{equation}\ny^{\\prime\\prime} + 4 y^{\\prime} + 4y = \\cosh(x)\n\\end{equation}\nFirst, we need to find the homogeneous solution:\n\\begin{align}\ny^{\\prime\\prime} + 4 y^{\\prime} + 4y &= 0 \\\\\n\\lambda^2 + 4 \\lambda + 4 &= 0 \\\\\n\\rightarrow \\lambda &= -2\n\\end{align}\nSo our homogeneous solution involves repeated roots:\n\\begin{equation}\ny_H = c_1 e^{-2x} + c_2 x e^{-2x}\n\\end{equation}\nwhere $y_1 = e^{-2x}$ and $y_2 = x e^{-2x}$.\n\nThen, we need to find $u_1$ and $u_2$, so let's get the Wronksian and then solve\n\\begin{align}\nW &= \\begin{vmatrix} y_1 & y_2 \\\\ y_1^{\\prime} & y_2^{\\prime} \\end{vmatrix} = e^{-2x} (e^{-2x}) (1 - 2x) - x e^{-2x}(-2 e^{-2x}) = e^{-4x} \\\\\n%\nu_1 &= - \\int \\frac{x e^{-2x} \\cosh x}{e^{-4x}} dx + c_1 = -\\int \\frac{x \\frac{1}{2}(e^x + e^{-x})}{e^{-2x}} dx + c_1 \\\\\n &= -\\frac{1}{2} \\int x (e^{3x} + e^x) dx + c_1 = -\\frac{1}{2} \\left[ \\frac{1}{9} e^{3x}(3x-1) + e^x(x-1) \\right] + c_1 \\\\\nu_1 &= -\\frac{1}{18} e^{3x}(3x-1) - \\frac{1}{2} e^x (x-1) + c_1 \\\\\n%\nu_2 &= \\int \\frac{e^{-2x} \\cosh x}{e^{-4x}} dx + c_2 = \\frac{1}{2} \\int e^{2x}(e^x + e^{-x}) dx + c_2 = \\frac{1}{2} \\int (e^{3x} + e^x) dx + c_2 \\\\ \nu_2 &= \\frac{1}{6} e^{3x} + \\frac{1}{2} e^x + c_2\n\\end{align}\n\nThen, when we put these all together, we get the full (complicated) solution:\n\\begin{equation}\ny(x) = \\left[ -\\frac{1}{18} e^{3x} (3x-1) - \\frac{1}{2} e^x (x-1) + c_1 \\right] e^{-2x} + \\left( \\frac{1}{6} e^{3x} + \\frac{1}{2} e^x + c_2 \\right) x e^{-2x}\n\\end{equation}\n", "meta": {"hexsha": "9499aeb681f9896556f48da48cbfb52863a6c381", "size": 46083, 
"ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "docs/second-order-ODEs-analytical.ipynb", "max_stars_repo_name": "kyleniemeyer/ME373", "max_stars_repo_head_hexsha": "db7e78ac21d7a2cc5bd9fc49cdc3614f2f0fe00e", "max_stars_repo_licenses": ["CC-BY-4.0", "MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-11-03T18:09:05.000Z", "max_stars_repo_stars_event_max_datetime": "2019-11-03T18:09:05.000Z", "max_issues_repo_path": "docs/second-order-ODEs-analytical.ipynb", "max_issues_repo_name": "kyleniemeyer/ME373", "max_issues_repo_head_hexsha": "db7e78ac21d7a2cc5bd9fc49cdc3614f2f0fe00e", "max_issues_repo_licenses": ["CC-BY-4.0", "MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/second-order-ODEs-analytical.ipynb", "max_forks_repo_name": "kyleniemeyer/ME373", "max_forks_repo_head_hexsha": "db7e78ac21d7a2cc5bd9fc49cdc3614f2f0fe00e", "max_forks_repo_licenses": ["CC-BY-4.0", "MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 82.4382826476, "max_line_length": 17704, "alphanum_fraction": 0.70004123, "converted": true, "num_tokens": 8122, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9284088064979618, "lm_q2_score": 0.8840392832736084, "lm_q1q2_score": 0.8207498558813643}} {"text": "## Classical Mechanics - Week 9\n\n### Last Week:\n- We saw how a potential can be used to analyze a system\n- Gained experience with plotting and integrating in Python \n\n### This Week:\n- We will study harmonic oscillations using packages\n- Further develope our analysis skills \n- Gain more experience wtih sympy\n\n\n```python\n# Let's import packages, as usual\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport sympy as sym\nsym.init_printing(use_unicode=True)\n```\n\nLet's analyze a spring using sympy. It will have mass $m$, spring constant $k$, angular frequency $\\omega_0$, initial position $x_0$, and initial velocity $v_0$.\n\nThe motion of this harmonic oscillator is described by the equation:\n\neq 1.) $m\\ddot{x} = -kx$\n\nThis can be solved as \n\neq 2.) $x(t) = A\\cos(\\omega_0 t - \\delta)$, $\\qquad \\omega_0 = \\sqrt{\\dfrac{k}{m}}$\n\nUse SymPy below to plot this function. Set $A=2$, $\\omega_0 = \\pi/2$ and $\\delta = \\pi/4$. \n\n(Refer back to ***Notebook 7*** if you need to review plotting with SymPy.)\n\n\n```python\n# Plot for equation 2 here\nA, omega0, t, delta = sym.symbols('A, omega_0, t, delta')\nx=A*sym.cos(omega0*t-delta)\nx\n```\n\n\n```python\nx1 = sym.simplify(x.subs({A:2, omega0:sym.pi/2, delta:sym.pi/4}))\nx1\n```\n\n\n```python\nsym.plot(x1,(t,0,10), title='position vs time',xlabel='t',ylabel='x')\n```\n\n## Q1.) Calculate analytically the initial conditions, $x_0$ and $v_0$, and the period of the motion for the given constants. Is your plot consistent with these values?\n\n✅ Double click this cell, erase its content, and put your answer to the above question here.\n\n####### Possible answers #######\n\nInitial position $x_0=2\\cos(-\\pi/4)=\\sqrt{2}$.\n\nInitial velocity $v_0=-2(\\pi/2)\\sin(-\\pi/4)=\\pi/\\sqrt{2}$.\n\nThe period is $2\\pi/(\\pi/2)=4$.\n\nThe initial position and the period are easily seen to agree with these values. 
The initial velocity is\npositive and the initial slope looks like it agrees.\n\n####### Possible answers #######\n\n#### Now let's make plots for underdamped, critically-damped, and overdamped harmonic oscillators.\nBelow are the general equations for these oscillators:\n\n- Underdamped, $\\beta < \\omega_0$ : \n\neq 3.) $x(t) = A e^{-\\beta t}cos(\\omega ' t) + B e^{-\\beta t}sin(\\omega ' t)$ , $\\omega ' = \\sqrt{\\omega_0^2 - \\beta^2}$\n \n ___________________________________\n \n \n- Critically-damped, $\\beta = \\omega_0$:\n\neq 4.) $x(t) = Ae^{-\\beta t} + B t e^{-\\beta t}$\n\n ___________________________________\n \n- Overdamped, $\\beta > \\omega_0$:\n\neq 5.) $x(t) = Ae^{-\\left(\\beta + \\sqrt{\\beta^2 - \\omega_0^2}\\right)t} + Be^{-\\left(\\beta - \\sqrt{\\beta^2 - \\omega_0^2}\\right)t}$\n \n _______________________\n \nIn the cells below use SymPy to create the Position vs Time plots for these three oscillators. \n\nUse $\\omega_0=\\pi/2$ as before, and then choose an appropriate value of $\\beta$ for the three different damped oscillator solutions. Play around with the variables, $A$, $B$, and $\\beta$, to see how different values affect the motion and if this agrees with your intuition. \n\n\n```python\n# Put your code for graphing Underdamped here\nA, B, omega0, beta, t = sym.symbols('A, B, omega_0, beta, t')\nomegap=sym.sqrt(omega0**2-beta**2)\nx=sym.exp(-beta*t)*(A*sym.cos(omegap*t)+B*sym.sin(omegap*t))\nx\n```\n\n\n```python\nx1 = sym.simplify(x.subs({A:0, B:2, omega0:sym.pi/2, beta:sym.pi/40}))\nsym.plot(x1,(t,0,30), title='Underdamped Oscillator',xlabel='t',ylabel='x')\n```\n\n\n```python\n# Put your code for graphing Critical here\nA, B, omega0, beta, t = sym.symbols('A, B, omega_0, beta, t')\nx=sym.exp(-beta*t)*(A+B*t)\nx\n```\n\n\n```python\nx1 = sym.simplify(x.subs({A:0, B:2, omega0:sym.pi/2, beta:sym.pi/2}))\nsym.plot(x1,(t,0,30), title='Critically-damped Oscillator',xlabel='t',ylabel='x')\n```\n\n\n```python\n# Put your code for graphing Overdamped here\nA, B, omega0, beta, t = sym.symbols('A, B, omega_0, beta, t')\nbeta1=beta+sym.sqrt(beta**2-omega0**2)\nbeta2=beta-sym.sqrt(beta**2-omega0**2)\nx=A*sym.exp(-beta1*t)+B*sym.exp(-beta2*t)\nx\n```\n\n\n```python\nx1 = sym.simplify(x.subs({A:-2, B:2, omega0:sym.pi/2, beta:sym.pi}))\nsym.plot(x1,(t,0,30), title='Overdamped Oscillator',xlabel='t',ylabel='x')\n```\n\n## Q2.) How would you compare the 3 different oscillators?\n\n✅ Double click this cell, erase its content, and put your answer to the above question here.\n\n\n####### Possible answers #######\n\nUnderdamped: Oscillations, with amplititude decreasing over time\n\nCritical Damping: No oscillations. Curve dies the fastest of the three (for fixed $\\omega_0$).\n\nOverdamped: No oscillations. Curve dies slower than critically damped case.\n\n####### Possible answers #######\n\n# Here's another simple harmonic system, the pendulum. \n\nThe equation of motion for the pendulum is:\n\neq 6.) $ml\\dfrac{d^2\\theta}{dt^2} + mg \\sin(\\theta) = 0$, where $v=l\\dfrac{d\\theta}{dt}$ and $a=l\\dfrac{d^2\\theta}{dt^2}$\n\nIn the small angle approximation $\\sin\\theta\\approx\\theta$, so this can be written:\n\neq 7.) 
$\\dfrac{d^2\\theta}{dt^2} = -\\dfrac{g}{l}\\theta$\n\nWe then find the period of the pendulum to be $T = \\dfrac{2\\pi}{\\sqrt{g/l}} = 2\\pi\\sqrt{l/g}$ and the angle at any given time \n(if released from rest) is given by \n\n$\\theta = \\theta_0\\cos{\\left(\\sqrt{\\dfrac{g}{l}} t\\right)}$.\n\nLet's use Euler's Forward method to solve equation (7) for the motion of the pendulum in the small angle approximation, and compare to the analytic solution.\n\nFirst, let's graph the analytic solution for $\\theta$. Go ahead and graph using either sympy, or the other method we have used, utilizing these variables:\n\n- $t:(0s,50s)$\n- $\\theta_0 = 0.5$ radians\n- $l = 40$ meters\n\n\n```python\n# Plot the analytic solution here\nl=40\ng=9.81\n\n\ntheta0, omega0, t = sym.symbols('theta_0, omega_0, t')\ntheta = theta0*sym.cos(omega0*t)\ntheta1 = sym.simplify(theta.subs({omega0:(g/l)**0.5,theta0:0.5}))\nsym.plot(theta1,(t,0,50),title='Pendulum Oscillation, Small Angle Approximation',xlabel='time (s)',ylabel='Theta (radians)')\nplt.show()\n\n\n\n```\n\n\n```python\n# The same analytic plot, but now using matplotlib\n# This is easier for comparing with the Euler's method calculation\n\nti=0\ntf=50\ndt=0.001\nt=np.arange(ti,tf,dt)\n\ntheta0=0.5\nl=40\ng=9.81\n\nomega0=np.sqrt(g/l)\n\ntheta=theta0*np.cos(omega0*t)\nplt.grid()\nplt.xlabel(\"Time (s)\")\nplt.ylabel(\"Theta (radians)\")\nplt.title(\"Theta (radians) vs Time (s), small angle approximation\")\nplt.plot(t,theta)\nplt.show()\n```\n\nNow, use Euler's Forward method to obtain a plot of $\\theta$ as a function of time $t$ (in the small angle approximation). Use eq (7) to calculate $\\ddot{\\theta}$ at each time step.\nTry varying the time step size to see how it affects the Euler's method solution.\n\n\n```python\n# Perform Euler's Method Here\ntheta1=np.zeros(len(t))\ntheta1[0]=theta0\ndtheta=0\nddtheta=-omega0**2*theta1[0]\n\nfor i in range(len(t)-1):\n    theta1[i+1] = theta1[i] + dtheta*dt\n    dtheta += ddtheta*dt\n    ddtheta = -omega0**2*theta1[i+1]\n\nimport matplotlib.patches as mpatches\n\nplt.grid()\nplt.xlabel(\"Time (s)\")\nplt.ylabel(\"Theta (radians)\")\nplt.title(\"Theta (radians) vs Time (s), small angle approximation\")\nblue_patch = mpatches.Patch(color = 'b', label = 'analytic')\nred_patch = mpatches.Patch(color = 'r', label = 'Euler method')\nplt.legend(handles=[blue_patch,red_patch],loc='lower left')\nplt.plot(t,theta1,color='r')\nplt.plot(t,theta,color='b')\nplt.show()\n```\n\nYou should have found that if you chose the time step size small enough, then the Euler's method solution was\nindistinguishable from the analytic solution. \n\nWe can now trivially modify this, to solve for the pendulum **exactly**, without using the small angle approximation.\nThe exact equation for the acceleration is\n\neq 8.) 
$\\dfrac{d^2\\theta}{dt^2} = -\\dfrac{g}{l}\\sin\\theta$.\n\nModify your Euler's Forward method calculation to use eq (8) to calculate $\\ddot{\\theta}$ at each time step in the cell below.\n\n\n\n```python\ntheta2=np.zeros(len(t))\ntheta2[0]=theta0\ndtheta=0\nddtheta=-omega0**2*np.sin(theta2[0])\n\nfor i in range(len(t)-1):\n theta2[i+1] = theta2[i] + dtheta*dt\n dtheta += ddtheta*dt\n ddtheta = -omega0**2*np.sin(theta2[i+1])\n\nplt.grid()\nplt.xlabel(\"Time (s)\")\nplt.ylabel(\"Theta (radians)\")\nplt.title(\"Theta (radians) vs Time (s)\")\nblue_patch = mpatches.Patch(color = 'b', label = 'small angle approximation')\nred_patch = mpatches.Patch(color = 'r', label = 'EXACT')\nplt.legend(handles=[blue_patch,red_patch],loc='lower left')\nplt.plot(t,theta2,color='r')\nplt.plot(t,theta1,color='b')\nplt.show()\n```\n\n# Q3.) What time step size did you use to find agreement between Euler's method and the analytic solution (in the small angle approximation)? How did the exact solution differ from the small angle approximation? \n\n✅ Double click this cell, erase its content, and put your answer to the above question here.\n\n####### Possible answers #######\n\nA time step of 0.001 was sufficient so that Euler's method and the analytic formula were indistinguishable in the plots (for small angle approximation).\n(Different time steps could be found, depending on how closely one compared the plots.)\n\nThe exact pendulum solution has a slightly longer period than the small angle approximation (in agreement with what we learned last week.)\n\n####### Possible answers #######\n\n### Now let's do something fun:\n\nIn class we found that the 2-dimensional anisotropic harmonic motion can be solved as\n\neq 8a.) $x(t) = A_x \\cos(\\omega_xt)$\n\neq 8b.) $y(t) = A_y \\cos(\\omega_yt - \\delta)$\n\nIf $\\dfrac{\\omega_x}{\\omega_y}$ is a rational number (*i.e,* a ratio of two integers), then the trajectory repeats itself after some amount of time. The plots of $x$ vs $y$ in this case are called Lissajous figures (after the French physicists Jules Lissajous). If $\\dfrac{\\omega_x}{\\omega_y}$ is not a rational number, then the trajectory does not repeat itself, but it still shows some very interesting behavior.\n\nLet's make some x vs y plots below for the 2-d anisotropic oscillator. \n\nFirst, recreate the plots in Figure 5.9 of Taylor. (Hint: Let $A_x=A_y$. 
For the left plot of Figure 5.9, let $\\delta=\\pi/4$ and for the right plot, let $\\delta=0$.)\n\nNext, try other rational values of $\\dfrac{\\omega_x}{\\omega_y}$ such as 5/6, 19/15, etc, and using different phase angles $\\delta$.\n\nFinally, for non-rational $\\dfrac{\\omega_x}{\\omega_y}$, what does the trajectory plot look like if you let the length of time to be arbitrarily long?\n\n\\[For these parametric plots, it is preferable to use our original plotting method, *i.e.* using `plt.plot()`, as introduced in ***Notebook 1***.\\]\n\n\n```python\n# Plot the Lissajous curves here\nAx=1\nAy=1\nomegay=1\n\nti=0\ntf=10\ndt=0.1\nt=np.arange(ti,tf,dt)\n\nr=2\ndelta=np.pi/4\n\nomegax=r*omegay\n\nX=Ax*np.cos(omegax*t)\nY=Ay*np.cos(omegay*t-delta)\n\nplt.plot(X,Y)\n\n```\n\n\n```python\nti=0\ntf=24.1\ndt=0.1\nt=np.arange(ti,tf,dt)\n\nr=np.sqrt(2)\ndelta=0\n\nomegax=r*omegay\n\nX=Ax*np.cos(omegax*t)\nY=Ay*np.cos(omegay*t-delta)\n\nplt.plot(X,Y)\n```\n\n\n```python\nti=0\ntf=50\ndt=0.1\nt=np.arange(ti,tf,dt)\n\nr=5/6\ndelta=np.pi/2\n\nomegax=r*omegay\n\nX=Ax*np.cos(omegax*t)\nY=Ay*np.cos(omegay*t-delta)\n\nplt.plot(X,Y)\n```\n\n\n```python\nti=0\ntf=100\ndt=0.1\nt=np.arange(ti,tf,dt)\n\nr=19/15\ndelta=np.pi/2\n\nomegax=r*omegay\n\nX=Ax*np.cos(omegax*t)\nY=Ay*np.cos(omegay*t-delta)\n\nplt.plot(X,Y)\n```\n\n\n```python\nti=0\ntf=1000\ndt=0.1\nt=np.arange(ti,tf,dt)\n\nr=np.sqrt(2)\ndelta=0\n\nomegax=r*omegay\n\nX=Ax*np.cos(omegax*t)\nY=Ay*np.cos(omegay*t-delta)\n\nplt.plot(X,Y)\n```\n\n# Q4.) What are some observations you make as you play with the variables? What happens for non-rational $\\omega_x/\\omega_y$ if you let the oscillator run for a long time?\n\n\n✅ Double click this cell, erase its content, and put your answer to the above question here.\n\n####### Possible answers #######\n\nFor rational $\\omega_x/\\omega_y = n/m$ the curve closes. In general, the larger $n$ and $m$ (with no common factors) the longer it takes the curve to close.\n\nFor non-rational $\\omega_x/\\omega_y$, the curve essentially fills in the entire rectangle if you let it run to long enough time.\n\n####### Possible answers #######\n\n# Notebook Wrap-up. \nRun the cell below and copy-paste your answers into their corresponding cells.\n\n\n```python\nfrom IPython.display import HTML\nHTML(\n\"\"\"\n\n\"\"\"\n)\n```\n\n\n\n\n\n\n\n\n\n\n# Well that's that, another Notebook! It's now been 10 weeks of class\n\nYou've been given lots of computational and programing tools these past few months. These past two weeks have been practicing these tools and hopefully you are understanding how some of these pieces add up. Play around with the code and see how it affects our systems of equations. Solve the Schrodinger Equation for the Helium atom. Figure out the unifying theory. 
The future is limitless!\n", "meta": {"hexsha": "a73a1c51df951f48c17577fb37b8e487ad74b29b", "size": 529879, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "doc/AdminBackground/PHY321/CM_Jupyter_Notebooks/Answers/CM_Notebook9_Answers.ipynb", "max_stars_repo_name": "Shield94/Physics321", "max_stars_repo_head_hexsha": "9875a3bf840b0fa164b865a3cb13073aff9094ca", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 20, "max_stars_repo_stars_event_min_datetime": "2020-01-09T17:41:16.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-09T00:48:58.000Z", "max_issues_repo_path": "doc/AdminBackground/PHY321/CM_Jupyter_Notebooks/Answers/CM_Notebook9_Answers.ipynb", "max_issues_repo_name": "Shield94/Physics321", "max_issues_repo_head_hexsha": "9875a3bf840b0fa164b865a3cb13073aff9094ca", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2020-01-08T03:47:53.000Z", "max_issues_repo_issues_event_max_datetime": "2020-12-15T15:02:57.000Z", "max_forks_repo_path": "doc/AdminBackground/PHY321/CM_Jupyter_Notebooks/Answers/CM_Notebook9_Answers.ipynb", "max_forks_repo_name": "Shield94/Physics321", "max_forks_repo_head_hexsha": "9875a3bf840b0fa164b865a3cb13073aff9094ca", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 33, "max_forks_repo_forks_event_min_datetime": "2020-01-10T20:40:55.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-11T20:28:41.000Z", "avg_line_length": 524.6326732673, "max_line_length": 129732, "alphanum_fraction": 0.9453856446, "converted": true, "num_tokens": 3784, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9086178919837705, "lm_q2_score": 0.9032942001955142, "lm_q1q2_score": 0.8207492720228141}} {"text": "```python\nimport sympy as sp\nx,y,z = sp.symbols('x,y,z')\nsp.init_printing(use_unicode=False, wrap_line=False, no_global=True)\n\n%run ../display_helpers.py\n```\n\n# SymPy - Polynomials\n\nhttp://docs.sympy.org/latest/modules/polys/index.html\n\n----\n## Basics\nhttp://docs.sympy.org/latest/modules/polys/basics.html\n\n----\n\n### Polynomial Division\n\n\n```python\nsubhead(\"Polynomial Division - Rational (Q)\")\n\nf = 5*x**2 + 10*x +3\ng = 2*x + 2\nq,r = sp.div(f, g, domain='QQ')\n\nprint_eq(('{} = {},\\;r\\;{}', f/g, sp.factor(q), r))\n```\n\n\n```python\nsubhead(\"Polynomial Division - Integer (Z)\")\n\nq,r = sp.div(f, g, domain='ZZ')\n\nprint_eq(('{} = {},\\;r\\;{}', f/g, q, r))\n```\n\n\n```python\nsubhead(\"Polynomial Division - Multiple Variables\")\n\nf = x*y + y*z\ng = 3*x + 3*z\nq,r = sp.div(f, g, domain='QQ')\n\nprint_eq(('{} = {},\\;r\\;{}', f/g, q, r))\n```\n\n----\n\n### Greatest Common Divisor (GCD) / Lowest Common Multiple (LCM)\n\n\n```python\nsubhead(\"Greatest Common Divisor (GCD)\")\n\nf = (12*x + 12) * x\ng = 16*x**2\nq = sp.gcd(f, g)\n\nprint_eq(\n ('gcd \\Big( {}\\;,\\;{} \\Big) = {}', f, g, q)\n)\n```\n\n\n```python\nsubhead(\"Greatest Common Divisor (GCD)\")\ndesc('If the coefficients are rational, the polynomial answer is monic')\nf = 3 * x **2 /2\ng = 9 * x/4\nq = sp.gcd(f, g)\n\nprint_eq(\n ('gcd \\Big( {}\\;,\\;{} \\Big) = {}', f, g, q)\n)\n```\n\n\n```python\nsubhead(\"LCM with GCD\")\nf = x * y**2 + x**2 * y\ng = x**2 * y**2\nq = sp.gcd(f, g)\nr = sp.lcm(f, g)\n\nprint_eq(\n ('gcd \\Big( {}\\;,\\;{} \\Big) &= {}', f, g, q),\n ('lcm \\Big( {}\\;,\\;{} \\Big) &= {}', f, g, r)\n)\n```\n\n\n```python\nsubhead(\"LCM with GCD\")\nf = x * y**2 + x**2 * y\ng = x**2 * y**2\nq = sp.gcd(f, g)\nr = sp.lcm(f, g)\n\nprint_eq(\n 
('f &= {}', f),\n ('g &= {}', g),\n ('gcd \\Big( f\\;,\\;g \\Big) &= {}', q),\n ('lcm \\Big( f\\;,\\;g \\Big) &= {}', r),\n ('f.g &= {}', (f*g).expand() )\n)\n```\n\n----\n### Factorization\n\n\n\n```python\nsubhead(\"Square-Free Factorization (SQF)\")\ndesc('For univariate polynomials')\nf = 2 * x**2 + 5 * x**3 + 4 * x**4 + x**5\nq = sp.sqf_list(f)\nr = sp.sqf(f)\n\nprint_eq(\n ('f &= {}', f),\n ('&= {}', r),\n)\n\nprint('\\n\\nsqf_list(f) = ', q)\n```\n\n\n```python\nsubhead(\"Factorization\")\ndesc('For univariate & multivariate polynomials with rational coefficient')\nf = x**4/2 + 5*x**3/12 - x**2/3\nq = sp.factor(f)\n\nprint_eq(\n ('a)\\; f &= {}', f),\n ('&= {}', q),\n)\n\nprint('\\n\\n\\n')\n\nf = x**2 + 4*x*y + 4*y**2\nq = sp.factor(f)\nprint_eq(\n ('b)\\; f &= {}', f),\n ('&= {}', q),\n)\n```\n\n\n```python\nsubhead(\"Groebner Bases\")\ndesc('Buchberger\u2019s algorithm is implemented, supporting various monomial orders')\nf = [x**2 + 1, y**4*x + x**3]\nq = sp.groebner( f, x, y, order='lex')\n\nprint_eq(\n ('f &= {}', f)\n)\nprint(q)\n\nprint('\\n\\n')\n\nf = [x**2 + 1, y**4*x + x**3, x*y*z**3]\nq = sp.groebner( f, x, y, z, order='grevlex')\nprint_eq(\n ('f &= {}', f)\n)\nprint(q)\n```\n\n\n```python\nsubhead(\"Solve Equations\")\ndesc('solve')\nf = x**3 + 2*x + 3\nq = sp.solve( f, x )\n\nprint_eq(\n ('f &= {}', f),\n ('x &= {}', q)\n)\n\nprint('\\n\\n\\n')\nf = x**2 + y*x + z\nq = sp.solve( f, x )\n\nprint_eq(\n ('f &= {}', f),\n ('x &= {}', q)\n)\n```\n\n\n```python\ndesc('solve poly system')\nf = [y-x, x-5]\nq = sp.solve_poly_system(f, x, y)\n\nprint_eq(\n ('{}', f),\n ('x = {}', q)\n)\n\nprint('\\n\\n')\n\nf = [y**2 - x**3 + 1, y*x]\nq = sp.solve_poly_system(f, x, y)\n\nprint_eq(\n ('{}', f)\n)\nprint_eq(\n ('x = {}', q)\n)\n```\n\n----\n\n## Examples\n\nhttp://docs.sympy.org/latest/modules/polys/wester.html\n\nSimple univariate poylnomial factorization\nUnivariate GCD, resultant and factorization\nMultivarite GCD, and factorization\nSupport for symbols in exponents\nTesting if polynomials have common zeros\nNormalizing simple rational functions\nExpanding expressions and factoring back\nFactoring in terms of cuclotomic polynomials\nUnivarite factoring over Gaussian numbers\nComputing with automatic field extensions\nUnivariate factoring over various domains\nFactoring polynomials into linear factors\nAdvanced factoring over finite fields\nWorking with Expressions as polynomials\nComputing reduced Gr\u00f6bner bases\nMultivariate factoring over algebraic numbers\nPartial fraction decomposition\n\n\n- factor\n- primitive\n- expand\n- resultant\n- apart\n- cancel\n- solve\n- groebner\n- \n\n----\n\n- Examples\n- Polynomial Manipulation\n- AGCA (Algebraic Geometry * Cummutative Algebra Module)\n- Internals\n- Series Maniuplation\n- Literature\n\n### Glossary\n\n#### Associative\nAn expression is associative *if* $(a * b) * c = a * (b * c)$\n\n#### Commutative\nAn expression is commutative *if* $a * b = b * a$\n\n#### Distributive\nAn expression is distributive *if* $a \\times (b + c) = a \\times b + a \\times c$\n\n#### Group\nExamples - clock/modular arithmetic, symmetries, integer arithmetic\nDefinition\n- Set of elements (generally G)\n- Operations (such as + or $\\times$) (generall- y *)\n- Closed under operation (produces another value in the set) $x, y \\in G \\implies x*y \\in G$\n- Inverses $x^{-1}$ exists for all x, and $x.x^{-1} = e$\n- Identity x*e = e*x = x\n- Associative $(a * b) * c = a * (b * c)$\n- G may or may not be commutative $x*y \\ne y*x$ (symmetries are not 
commutative)\n - If G is commutative, then it is a Commutative/Abelian Group\n - Otherwise it is a Noncommutative/Non-Albian Group\n\n#### Identity\nThe identity of a value and operation will produce the same value. $n + 0 = n$, $n \\times 1 = n$\n\n#### Inverses\nWhen an inverse of a value/operation are applied to a value, the result is the identity. $n + (-n) = 0$ and $n.n^{-1} = 1$\n\n#### Monic\nA Univratiate polynomial, where the leading coefficient (non-zero) is 1. i.e. $x^3 + 2x^2 - 8x +4 $\n- Closed under multiplication (2 monic polys multiplied make another monic\n\n#### Ring\nSet of elements that allow + - and $\\times$ operations. $\\div$ and commutative $\\times$ are not necessary. Examples - integers, vectors, matricies, polynomials with integer coefficients. \n- Set of elements\n- 2 operations + and $\\times$ (subtraction can be performed with -ve numbers)\n- Addition is commutative\n- Multiplication is associative (but not necessarily commutative)\n- They're distributive\n\n#### Univariate Polynomial\nSingle-variable polynomial, i.e. $x^4 + 3x^3 - 7x + 1$\n", "meta": {"hexsha": "5a1a1473ce952738d04d8587a0fee59345e374e0", "size": 10750, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/python-data-science/sympy/sympy-polynomials.ipynb", "max_stars_repo_name": "sparkboom/my_jupyter_notes", "max_stars_repo_head_hexsha": "9255e4236b27f0419cdd2c8a2159738d8fc383be", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notebooks/python-data-science/sympy/sympy-polynomials.ipynb", "max_issues_repo_name": "sparkboom/my_jupyter_notes", "max_issues_repo_head_hexsha": "9255e4236b27f0419cdd2c8a2159738d8fc383be", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/python-data-science/sympy/sympy-polynomials.ipynb", "max_forks_repo_name": "sparkboom/my_jupyter_notes", "max_forks_repo_head_hexsha": "9255e4236b27f0419cdd2c8a2159738d8fc383be", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 25.0582750583, "max_line_length": 199, "alphanum_fraction": 0.4653953488, "converted": true, "num_tokens": 2063, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9353465080392795, "lm_q2_score": 0.8774767986961403, "lm_q1q2_score": 0.8207448595459206}} {"text": "## Variational Inference: Ising Model\n\nThis notebook focuses on Variational Inference (VI) for the Ising model in application to binary image de-noising. The Ising model is an example of a Markov Random Field (MRF) and it originated from statistical physics. The Ising model assumes that we have a grid of nodes, where each node can be in one of two states. In the case of binary images, you can think of each node as being a pixel with a black or white color. The state of each node depends on the neighboring nodes through interaction potentials. In the case of images, this translates to a smoothness constraint, i.e. a pixel prefers to be of the same color as the neighboring pixels. In the image denoising problem, we assume that we have a 2-D grid of noisy pixel observations of an underlying true image and we would like to recover the true image. 
Thus, we can model the image as a grid:\n\n\n\nIn the figure above, the shaded nodes are the noisy observations $y_i$ of binary latent variables $x_i \\in \\{-1, +1\\}$. We can write down the joint distribution as follows:\n\n\\begin{equation}\n p(x,y) = p(x)p(y|x) = \\prod_{(s,t)\\in E} \\Psi_{st}(x_s, x_t) \\prod_{i=1}^{n}p(y_i|x_i) = \\prod_{(s,t)\\in E} \\exp \\{x_s w_{st} x_t \\} \\prod_{i=1}^{N} N(y_i|x_i, \\sigma^2)\n\\end{equation}\n\nwhere the interaction potentials are represented by $\\Psi_{st}$ for every pair of nodes $x_s$ and $x_t$ in a set of edges $E$ and the observations $y_i$ are Gaussian with mean $x_i$ and variance $\\sigma^2$. Here, $w_{st}$ is the coupling strength and assumed to be constant and equal to $J>0$ indicating a preference for the same state as neighbors (i.e. potential $\\Psi(x_s, x_t) = \\exp\\{x_s J x_t\\}$ is higher when $x_s$ and $x_t$ are both either $+1$ or $-1$).\n\nThe basic idea behind variational inference is to choose an approximating disribution $q(x)$ which is close to the original distribution $p(x)$ where the distance is measured by KL divergence:\n\n\\begin{equation}\n KL(q||p) = \\sum_x q(x) \\log \\frac{q(x)}{p(x)}\n\\end{equation}\n\nThis makes inference into an optimization problem in which the objective is to minimize KL divergence or maximize the Evidence Lower BOund (ELBO). We can derive the ELBO as follows:\n\n\\begin{equation}\n \\log p(y) = \\log \\sum_{x} p(x,y) = \\log \\sum_x \\frac{q(x)}{q(x)}p(x,y) = \\log E_{q(x)}\\big[\\frac{p(x,y)}{q(x)} \\big] \\geq E_{q(x)}\\big[\\log \\frac{p(x,y)}{q(x)} \\big] = E_{q(x)}\\big[\\log p(x,y) \\big] - E_{q(x)}\\big[\\log q(x) \\big]\n\\end{equation}\n\nIn application to the Ising model, we have:\n\n\\begin{equation}\n \\mathrm{ELBO} = E_{q(x)}\\big[\\log p(x,y) \\big] - E_{q(x)}\\big[\\log q(x) \\big] = E_{q(x)}\\big[\\sum_{(s,t)\\in E}x_s w_{st}x_t + \\sum_{i=1}^{n} \\log N(x_i, \\sigma^2) \\big] - \\sum_{i=1}^{n} E_{q_i(x)}\\big[\\log q_i(x) \\big]\n\\end{equation}\n\nIn *mean-field* variational inference, we assume a *fully-factored* approximation q(x):\n\n\\begin{equation}\n q(x) = \\prod_{i=1}^{n} q(x_i; \\mu_i)\n\\end{equation}\n\nIt can be shown [1] that $q(x_i;\\mu_i)$ that minimizes the KL divergence is given by:\n\n\\begin{equation}\n q_i(x_i) = \\frac{1}{Z_i}\\exp \\big[E_{-q_i}\\{\\log p(x) \\} \\big]\n\\end{equation}\n\nwhere $E_{-q_i}$ denotes an expectation over every $q_j$ except for $j=i$. To compute $q_i(x_i)$, we only care about the terms that involve $x_i$, i.e. we can isolate them as follows:\n\n\\begin{equation}\n E_{-q_i}\\{\\log p(x)\\} = E_{-q_i}\\{x_i \\sum_{j\\in N(i)} w_{ij}x_j + \\log N(x_i,\\sigma^2) + \\mathrm{const} \\} = x_i \\sum_{j\\in N(i)}J\\times \\mu_j + \\log N(x_i, \\sigma^2) + \\mathrm{const}\n\\end{equation}\n\nwhere $N(i)$ denotes the neighbors of node $i$ and $\\mu_j$ is the mean of a binary random variable:\n\n\\begin{equation}\n \\mu_j = E_{q_j}[x_j] = q_j(x_j=+1)\\times (+1) + q_j(x_j=-1)\\times (-1)\n\\end{equation}\n\nIn order to compute this mean, we need to know the values of $q_j(x_j=+1)$ and $q_j(x_j=-1)$. 
Let $m_i = \\sum_{j\\in N(i)} w_{ij}\\mu_j$ be the mean value of neighbors and let $L_{i}^{+} = N(x_i=+1; \\sigma^2)$ and $L_{i}^{-} = N(x_i=-1; \\sigma^2)$, then we can compute the mean as follows:\n\n\\begin{equation}\n q_i(x_i=+1) = \\frac{\\exp\\{m_i + L_{i}^{+}\\}}{\\exp\\{m_i + L_{i}^{+}\\} + \\exp\\{-m_i + L_{i}^{-}\\}} = \\frac{1}{1+\\exp\\{-2m_i+L_{i}^{-}-L_{i}^{+}\\}} = \\frac{1}{1+\\exp\\{-2 a_i\\}} = \\sigma(2a_i)\n\\end{equation}\n\n\\begin{equation}\n q_i(x_i=-1) = 1 - q_i(x_i=+1) = 1 - \\sigma(2a_i) = \\sigma(-2a_i)\n\\end{equation}\n\n\\begin{equation}\n \\mu_i = E_{q_i}[x_i] = \\sigma(2a_i) - \\sigma(-2a_i) = \\tanh(a_i)\n\\end{equation}\n\nwhere $a_i = m_i + 1/2\\big(L_{i}^{+} - L_{i}^{-}\\big)$. In other words, our mean-field variational updates of the parameters $\\mu_i$ at iteration $k$ are computed as follows:\n\n\\begin{equation}\n \\mu_{i}^{(k)} = \\tanh \\bigg(\\sum_{j\\in N(i)}w_{ij}\\mu_{j}^{(k-1)} + \\frac{1}{2}\\bigg[\\log \\frac{N(x_i=+1, \\sigma^2)}{N(x_i=-1, \\sigma^2)} \\bigg] \\bigg) \\times \\lambda + (1-\\lambda)\\times \\mu_{i}^{(k-1)}\n \\end{equation}\n\nwhere we added a learning rate parameter $\\lambda$. The figure below shows the parametric form of our mean-field approximation of the Ising model:\n\n\n\nNow that we derived the variational updates and the ELBO, let's implement this in Python in application to binary image denoising!\n\n\n```python\n%matplotlib inline\nimport numpy as np\nimport pandas as pd\n\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n\nfrom PIL import Image\nfrom tqdm import tqdm\nfrom scipy.special import expit as sigmoid\nfrom scipy.stats import multivariate_normal\n\nnp.random.seed(0)\nsns.set_style('whitegrid')\n```\n\nLet's load a grayscale (single channel) image, add Gaussian noise and binarize it based on mean threshold. We can then define variational inference parameters such as the coupling strength, noise level, smoothing rate and max number of iterations:\n\n\n```python\n#load data\nprint \"loading data...\"\ndata = Image.open('./figures/bayes.bmp')\nimg = np.double(data)\nimg_mean = np.mean(img)\nimg_binary = +1*(img>img_mean) + -1*(img 0$, we were able to find the mean parameters for our approximating distribution $q_i(x_i)$ that maximized the ELBO objective and resulted in mostly denoised image. We can visualize the ELBO objective as a function of iterations as follows:\n\n\n```python\nplt.figure()\nplt.plot(ELBO, color='b', lw=2.0, label='ELBO')\nplt.title('Variational Inference for Ising Model')\nplt.xlabel('iterations'); plt.ylabel('ELBO objective')\nplt.legend(loc='upper left')\nplt.savefig('./figures/ising_vi_elbo.png')\n```\n\nNotice that the ELBO is monotonically increasing and flattening out after about 10 iterations. To get further insight into de-noising, we can plot the average entropy $\\frac{1}{n}\\sum_{i=1}^{n}H_q(x_i)$. 
We expect early entropy to be high due to random initialization, however, as the number of iterations increases, mean-field updates converge on binary values of $x_i$ that are consistent with observations and the neighbors resulting in a decrease in average entropy:\n\n\n```python\nplt.figure()\nplt.plot(Hx_mean, color='b', lw=2.0, label='Avg Entropy')\nplt.title('Variational Inference for Ising Model')\nplt.xlabel('iterations'); plt.ylabel('average entropy')\nplt.legend(loc=\"upper right\")\nplt.savefig('./figures/ising_vi_avg_entropy.png')\n```\n\nThe 2-D Ising model can be extended in multiple ways, for example: 3-D grids and K-states per node (aka Potts model).\n\n### References\n\n[1] K. Murphy, \"Machine Learning: A Probabilistic Perspective\", The MIT Press, 2012 \n[2] E. Sudderth, \"CS242: Probabilistic Graphical Models\", http://cs.brown.edu/courses/cs242/lectures/ \n\n\n", "meta": {"hexsha": "de844bc9d4ca7bf7f4e623f57b914272e009871d", "size": 503997, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "chp02/mean_field_mrf.ipynb", "max_stars_repo_name": "gerket/experiments_with_python", "max_stars_repo_head_hexsha": "5dd6dbd69deaaa318bfa7d2c3c9f7fae6220c460", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 382, "max_stars_repo_stars_event_min_datetime": "2017-08-22T13:14:54.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-28T17:56:59.000Z", "max_issues_repo_path": "chp02/mean_field_mrf.ipynb", "max_issues_repo_name": "gerket/experiments_with_python", "max_issues_repo_head_hexsha": "5dd6dbd69deaaa318bfa7d2c3c9f7fae6220c460", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2017-07-31T00:52:36.000Z", "max_issues_repo_issues_event_max_datetime": "2018-10-01T14:29:51.000Z", "max_forks_repo_path": "chp02/mean_field_mrf.ipynb", "max_forks_repo_name": "gerket/experiments_with_python", "max_forks_repo_head_hexsha": "5dd6dbd69deaaa318bfa7d2c3c9f7fae6220c460", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 280, "max_forks_repo_forks_event_min_datetime": "2017-08-23T08:08:32.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-09T07:04:01.000Z", "avg_line_length": 936.7973977695, "max_line_length": 380880, "alphanum_fraction": 0.9428369613, "converted": true, "num_tokens": 2382, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9353465080392797, "lm_q2_score": 0.8774767874818408, "lm_q1q2_score": 0.8207448490566649}} {"text": "```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n```\n\nTable of content:\n- Formal definition (of a perceptron)\n- Geometric interpretation (of a perceptron)\n\nTable of content:\n- Formal definition\n- Geometric interpretation\n\n## Formal definition\n\nFor a binary classification problem, a perceptron is formally defined as follows:\n\nFor a input feature vector of $I$ entries, $\\vec{\\phi} = \\{x_1, \\cdots, x_i, \\cdots, x_I\\}$, its classification, $y\\in\\{-1, 1\\}$, is given by\n\n$$y=\\text{sign}(w_0 + \\sum_{i=1}^I w_i \\phi_i)$$\n\nwhere\n- $\\sum_{i=1}^I w_i \\phi_i$ is a linear combination of the input variables.\n- $w_0$ is called the bias; $-w_0$ is called the threshold, the minimum value of $\\sum_{i=1}^I w_i \\phi_i$ for the $\\text{sign}$ function to output 1.\n- \n$\n\\text{sign}(a) = \n\\begin{cases}\n -1, & \\text{when } a < 0 \\\\\n +1, & \\text{when } a \\geq 0\n\\end{cases}\n$\n\nBy adding another entry, $x_0=1$, to $\\vec{\\phi}$, we can write the perceptron in a more convenient vector notation:\n\n$$\n\\begin{align}\ny&=\\text{sign}(w_0 + \\sum_{i=1}^I w_i \\phi_i) \\\\\n&=\\text{sign}(\\sum_{i=0}^I w_i \\phi_i) \\\\\n&=\\text{sign}(\\vec{w} \\cdot \\vec{\\phi}) \\text{, or } \\text{sign}(\\vec{w}^T \\vec{\\phi})\n\\end{align}\n$$\n\nBy adding another entry, $x_0=1$, to $\\vec{\\phi}$, we can write the perceptron in a more convenient vector notation (that will help us interpret the perceptron):\n\n$$\n\\begin{align}\ny&=\\text{sign}(w_0 + \\sum_{i=1}^I w_i \\phi_i) \\\\\n&=\\text{sign}(\\sum_{i=0}^I w_i \\phi_i) \\\\\n&=\\text{sign}(\\vec{w} \\cdot \\vec{\\phi}) \\text{, or } \\text{sign}(\\vec{w}^T \\vec{\\phi})\n\\end{align}\n$$\n\n## Geometric Interpretation\n\n$$y=\\text{sign}(\\vec{w} \\cdot \\vec{\\phi})$$\n\n$$y=\\text{sign}(\\color{red}{\\vec{w} \\cdot \\vec{\\phi}})$$\n\n$$y=\\color{red}{\\text{sign}}(\\vec{w} \\cdot \\vec{\\phi})$$\n\nLet's look at the vectorized form of the perceptron. What do you see here? (wait for 3 seconds.) For me, I see two procedures being applied to the input vector $\\vec{\\phi}$. First, I see $\\vec{\\phi}$ being projected onto $\\vec{w}$ through the dot product. Second, I see that the sign function checks whether that project is positive or negative. So, what's the consequence of these two procedures?\n\nLet's first focus on the interpretation of the first procedure, the dot product, in two dimensions:\n\n## Geometric interpretation of the dot product\n\n\n```python\nw = np.array([[3], [4]]) / 5\n```\n\n\n```python\nnp.linalg.norm(w)\n```\n\n\n\n\n 1.0\n\n\n\nLet's create an arbitrary weight vector. I made it length one. Why? Because \n\nWhen $||\\vec{w}||\\neq1$,\n\n$$\\vec{w}\\cdot\\vec{\\phi} = ||\\vec{w}|| ||\\vec{\\phi}|| \\cos(\\theta)$$\n\nWhen $||\\vec{w}=1||$,\n\n$$\\vec{w}\\cdot\\vec{\\phi} = ||\\vec{\\phi}|| \\cos(\\theta)$$\n\nAccording to the dot product formula above\n- When $||\\vec{w}||\\neq1$, the dot product is the project of $\\vec{\\phi}$ in the direction of $\\vec{w}$ scaled by the norm of $\\vec{w}$.\n- When $||\\vec{w}||=1$, the dot product is the projection of $\\vec{\\phi}$ in the direction of $\\vec{w}$.\n\nObviously, the second case is easier for interpretation. Note that in practice $||\\vec{w}||$ does not neccessarily have a norm of 1. 
\n\n### Create $\\vec{\\phi}$s with different norms at different angles from $\\vec{w}$, and compute dot products\n\n\n```python\nphi_initial = w \n```\n\nWe start from the input vector that points in the same direction as $\\vec{w}$. For simplicity, I used a copy of $\\vec{w}$.\n\n\n```python\ndef get_rotation_matrix(angle_in_rad):\n return np.array([\n [np.cos(angle_in_rad), -np.sin(angle_in_rad)], \n [np.sin(angle_in_rad), np.cos(angle_in_rad)]\n ])\n```\n\nTo obtain $\\vec{\\phi}$s at different angles from $\\vec{w}$, we define a rotation matrix function, which returns a matrix that can rotate any 2D vector by a specified angle.\n\n\n```python\nangles_in_rad = np.arange(0, np.pi * 2, 0.05)\nprojections = []\nfor angle_in_rad in angles_in_rad:\n \n rotation_matrix = get_rotation_matrix(angle_in_rad)\n phi = rotation_matrix @ (phi_initial * float(np.random.uniform(0.1, 1)))\n \n projection = float(w.T @ phi)\n projections.append(projection)\n\nprojections = np.array(projections)\n```\n\n\n```python\nangles_in_rad = np.arange(0, np.pi * 2, 0.05) # creates many angles\nprojections = [] # creates an accumulator for projections\n\nfor angle_in_rad in angles_in_rad:\n \n # rotate phi_initial by angle_in_rad, and change its norm\n rotation_matrix = get_rotation_matrix(angle_in_rad)\n phi = rotation_matrix @ phi_initial * float(np.random.uniform(0.1, 1))\n \n # compute dot product\n projection = float(w.T @ phi) \n \n projections.append(projection)\n\nprojections = np.array(projections) # for plotting\n```\n\n### Plot projection of $\\vec{\\phi}$ onto $\\vec{w}$ against the angle from $\\vec{\\phi}$ to $\\vec{w}$\n\n\n```python\nplt.figure(figsize=(7, 7))\nplt.polar(angles_in_rad, np.zeros(len(angles_in_rad)), label='The zero line')\nplt.polar(angles_in_rad[projections > 0], projections[projections > 0], 'r.', label='Positive projection')\nplt.polar(angles_in_rad[projections < 0], projections[projections < 0], 'b.', label='Negative projection')\nplt.legend()\nplt.show()\n```\n\nNow, let's plot the projection of $\\vec{\\phi}$ onto $\\vec{w}$ against the angle from $\\vec{\\phi}$ to $\\vec{w}$. \n\nThe circular angle axis around the circle represents the angle between $\\vec{\\phi}$ and $\\vec{w}$. For example, $\\vec{w}$ is represented by the zero degree mark since it is zero degree away from itself.\n\nThe axis here (move mouse over to the radar-like axis) represents the projection of $\\vec{\\phi}$ onto $\\vec{w}$.\n\n At ninety and two-seventy degrees, the projection is zero. From immediately after two-seventy degrees to immediately before ninety degrees counterclockwise, the projection is positive. From immediately after ninety degrees to immediately before two-seventy degrees counterclockwise, the projection is negative.\n\n## Geometric interpretation of the sign function\n\n\n```python\nplt.figure(figsize=(7, 7))\nplt.polar(angles_in_rad, np.zeros(len(angles_in_rad)), label='The zero line')\nplt.polar(angles_in_rad[projections >= 0], np.sign(projections[projections >= 0]), 'r.', label='Positive output')\nplt.polar(angles_in_rad[projections < 0], np.sign(projections[projections < 0]), 'b.', label='Negative output')\nplt.legend()\nplt.show()\n```\n\nNow, let's apply the sign function to the projections. \n\n## Summary\n\n\n```python\n\n```\n\n## Final\n\nThe vector $w$ is perpendicular to the decision boundary. When the weight vector is appropriately chose, the +1 -1 assignment of the perceptron will match the label provided by the training data. But how is a perceptron trained? 
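For the impatient, here is a bare-bones sketch of the classic perceptron learning rule (nudge $\vec{w}$ towards $y\,\vec{\phi}$ for every misclassified example); the toy data and the "true" weight vector used to label it are purely illustrative, and the full treatment is left to the next episode:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))                   # toy 2-D inputs
y = np.sign(X @ np.array([3, 4]) / 5)          # labels from an assumed "true" w

w = np.zeros(2)
for _ in range(20):                            # a few passes over the data
    for phi, label in zip(X, y):
        if np.sign(w @ phi) != label:          # misclassified -> update
            w = w + label * phi

print(np.mean(np.sign(X @ w) == y))            # training accuracy (1.0 if separable)
```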
In Episode 2 of the Perceptron series, I will discuss the Perceptron Learning Algorithm (PLA) in detail. \n\nIf you liked this video, please hit the like botton down below and subscribe for future videos. Good luck with your study in machine learning.\n\n\n```python\n\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "0a95019ef07d48577d82a9c6c50d3f7e3f6b14d1", "size": 199431, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "c3_single_layer_networks/perceptron/youtube/ep1/perceptron_math_def.ipynb", "max_stars_repo_name": "zhihanyang2022/bishop1995_notes", "max_stars_repo_head_hexsha": "7d428726b8fb1afac40c13bd43103a5d672ead91", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-01-22T08:29:58.000Z", "max_stars_repo_stars_event_max_datetime": "2021-12-22T02:12:33.000Z", "max_issues_repo_path": "c3_single_layer_networks/perceptron/youtube/ep1/perceptron_math_def.ipynb", "max_issues_repo_name": "zhihanyang2022/bishop1995_notes", "max_issues_repo_head_hexsha": "7d428726b8fb1afac40c13bd43103a5d672ead91", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "c3_single_layer_networks/perceptron/youtube/ep1/perceptron_math_def.ipynb", "max_forks_repo_name": "zhihanyang2022/bishop1995_notes", "max_forks_repo_head_hexsha": "7d428726b8fb1afac40c13bd43103a5d672ead91", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 323.226904376, "max_line_length": 91952, "alphanum_fraction": 0.9349900467, "converted": true, "num_tokens": 2000, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9465966671870766, "lm_q2_score": 0.8670357683915538, "lm_q1q2_score": 0.8207331686914309}} {"text": "```python\nfrom sympy import symbols, Matrix\n```\n\n\n```python\na, b, c, d = symbols('a b c d')\nYt_1, Yt_2, Gt_1 = symbols('Yt_1 Yt_2 Gt_1')\nC, I, G, Yt = symbols('C I G Yt')\nYt_1, Yt_2, Gt_1 = symbols('Yt_1 Yt_2 Gt_1')\ne, u, v = symbols('e u v')\n\n```\n\n\n```python\nA = Matrix([\n [1, 0, 0, 0],\n [0, 1, 0, 0],\n [0, 0, 1, 0],\n [-1, -1, -1, 1]\n])\n```\n\n\n```python\nB = Matrix([\n [-b, -a, 0, 0],\n [0, -c, c, 0],\n [0, 0, 0, -d],\n [0, 0, 0, 0]\n])\n```\n\n\n```python\nY = Matrix([\n [C],\n [I],\n [G],\n [Yt]\n])\n```\n\n\n```python\nX = Matrix([\n [1],\n [Yt_1],\n [Yt_2],\n [Gt_1]\n])\n```\n\n\n```python\nU = Matrix([\n [e],\n [u],\n [v],\n [0]\n])\n```\n\n\n```python\n# Y = M*X + A.inv() * U\n```\n\n\n```python\nY\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}C\\\\I\\\\G\\\\Yt\\end{matrix}\\right]$\n\n\n\n\n```python\nM = -A.inv() * B\nM\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}b & a & 0 & 0\\\\0 & c & - c & 0\\\\0 & 0 & 0 & d\\\\b & a + c & - c & d\\end{matrix}\\right]$\n\n\n\n\n```python\nX\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1\\\\Yt_{1}\\\\Yt_{2}\\\\Gt_{1}\\end{matrix}\\right]$\n\n\n\n\n```python\nA.inv()\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}1 & 0 & 0 & 0\\\\0 & 1 & 0 & 0\\\\0 & 0 & 1 & 0\\\\1 & 1 & 1 & 1\\end{matrix}\\right]$\n\n\n\n\n```python\nU\n```\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}e\\\\u\\\\v\\\\0\\end{matrix}\\right]$\n\n\n\n\n```python\n\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "9a437aba9aa3c53caf5be8a5cc6b12fab32eda36", "size": 5485, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Course III/ECONOMETRICS/solver.ipynb", "max_stars_repo_name": "GeorgiyDemo/FA", "max_stars_repo_head_hexsha": "641a29d088904302f5f2164c9b3e1f1c813849ec", "max_stars_repo_licenses": ["WTFPL"], "max_stars_count": 27, "max_stars_repo_stars_event_min_datetime": "2019-08-18T20:54:27.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-22T02:39:45.000Z", "max_issues_repo_path": "Course III/ECONOMETRICS/solver.ipynb", "max_issues_repo_name": "GeorgiyDemo/FA", "max_issues_repo_head_hexsha": "641a29d088904302f5f2164c9b3e1f1c813849ec", "max_issues_repo_licenses": ["WTFPL"], "max_issues_count": 217, "max_issues_repo_issues_event_min_datetime": "2019-09-22T14:43:25.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-30T13:49:18.000Z", "max_forks_repo_path": "Course III/ECONOMETRICS/solver.ipynb", "max_forks_repo_name": "GeorgiyDemo/FA", "max_forks_repo_head_hexsha": "641a29d088904302f5f2164c9b3e1f1c813849ec", "max_forks_repo_licenses": ["WTFPL"], "max_forks_count": 42, "max_forks_repo_forks_event_min_datetime": "2019-09-18T11:36:28.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-19T18:43:00.000Z", "avg_line_length": 18.7842465753, "max_line_length": 141, "alphanum_fraction": 0.3936189608, "converted": true, "num_tokens": 609, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9407897442783526, "lm_q2_score": 0.8723473647220787, "lm_q1q2_score": 0.8206954541787792}} {"text": "# Integer Programming\n\n* All decision variables are integers\n\n\\begin{align}\n\\text{maximize}\\ & \\mathbf{c}^T\\mathbf{x} \\\\\n\\text{subject to } & \\\\\n& A\\mathbf{x} &&\\leq \\mathbf{b} \\\\\n& \\mathbf{x} &&\\geq 0 \\\\\n& \\mathbf{x} \\in \\mathbb{Z}^n\n\\end{align}\n\n* Binary integer programming: Variables are restricted to be either 0 or 1\n\n# Binary Knapsack Problem\n* Combinatorial optimization problem\n* Problem of packing the most valuable or useful items without overloading the luggage. \n * A set of items ($N$ items), each with a weight($w$) and a value($v$)\n * Fixed capacity \n * Maximize the total value possible\n\n\n\n## Problem Formulation\n\\begin{align}\n\\text{maximize}\\ & \\sum_{i=0}^{N-1}v_{i}x_{i} \\\\\n\\text{subject to } & \\\\\n& \\sum_{i=0}^{N-1}w_{i}x_{i} & \\leq C \\\\\n& x_i \\in \\{0,1\\} & \\forall i=0,\\dots,N-1\n\\end{align}\n\n## Coding in Python\n\n## Creating the data (weights and values)\n\n\n```python\nw = [4,2,5,4,5,1,3,5]\nv = [10,5,18,12,15,1,2,8]\nC = 15\nN = len(w)\n```\n\n## Step 2: Importing docplex package\n\n\n```python\nfrom docplex.mp.model import Model\n```\n\n## Step 3: Create an optimization model\n\n\n```python\nknapsack_model = Model('knapsack')\n```\n\n## Step 4: Add multiple binary decision variables\n\nAdds a list of binary decision variables and stores them in the model.\n```python\nbinary_var_list(keys, # sequence of objects / an integer\n lb=None, # lower bound\n ub=None, # upper bound\n name=, # name\n key_format=None) # a format string or None for naming the variables\n```\n\n\n```python\nx = knapsack_model.binary_var_list(N, name=\"x\")\n```\n\n## Step 5: Add the constraints\n\n$$\\sum_{i=1}^{N} w_{i}x_{i} \\leq C$$\n\n\n```python\n# \\sum_{i=1}^{N} w_{i}*x_{i} <= C\nknapsack_model.add_constraint(sum(w[i]*x[i] for i in range(N)) <= C)\n```\n\n\n\n\n docplex.mp.LinearConstraint[](4x_0+2x_1+5x_2+4x_3+5x_4+x_5+3x_6+5x_7,LE,15)\n\n\n\n## Step 6: Define the objective function\n\n$$\\sum_{i=1}^{N} v_{i}x_{i}$$\n\n\n```python\n# \\sum_{i=1}^{N} v_{i}*x_{i}\nobj_fn = sum(v[i]*x[i] for i in range(N))\nknapsack_model.set_objective('max',obj_fn)\n\nknapsack_model.print_information()\n```\n\n Model: knapsack\n - number of variables: 8\n - binary=8, integer=0, continuous=0\n - number of constraints: 1\n - linear=1\n - parameters: defaults\n - objective: maximize\n - problem type is: MILP\n\n\n## Step 7: Solve the model and output the solution\n\n\n```python\nknapsack_model.solve()\nprint('Optimization is done. Objective Function Value: %.2f' % knapsack_model.objective_value)\n# Get values of the decision variables\nknapsack_model.print_solution()\n```\n\n Optimization is done. 
Objective Function Value: 46.00\n objective: 46\n x_2=1\n x_3=1\n x_4=1\n x_5=1\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "8923b57caf33dd9a2f2e9eca4d5ab733f2589eb0", "size": 5780, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "mathematicalProgramming/Video09/09_video.ipynb", "max_stars_repo_name": "codingperspective/videoMaterials", "max_stars_repo_head_hexsha": "8c9665466d8912c6f0c701c25ad9eb4802fb73a3", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "mathematicalProgramming/Video09/09_video.ipynb", "max_issues_repo_name": "codingperspective/videoMaterials", "max_issues_repo_head_hexsha": "8c9665466d8912c6f0c701c25ad9eb4802fb73a3", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "mathematicalProgramming/Video09/09_video.ipynb", "max_forks_repo_name": "codingperspective/videoMaterials", "max_forks_repo_head_hexsha": "8c9665466d8912c6f0c701c25ad9eb4802fb73a3", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2021-11-21T05:02:50.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-17T04:44:57.000Z", "avg_line_length": 23.4959349593, "max_line_length": 103, "alphanum_fraction": 0.4946366782, "converted": true, "num_tokens": 923, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9724147161743552, "lm_q2_score": 0.8438950966654772, "lm_q1q2_score": 0.82061601090489}} {"text": "# Spherical harmonic models 1\n\n\n```python\n# Import notebook dependencies\n\nimport os\nimport sys\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nfrom src import sha, mag, IGRF13_FILE\n```\n\n## Spherical harmonics and representing the geomagnetic field\n\nThe north (X), east (Y) and vertical (Z) (downwards) components of the internally-generated geomagnetic field at colatitude $\\theta$, longitude $\\phi$ and radial distance $r$ (in geocentric coordinates with reference radius $a=6371.2$ km for the IGRF) are written as follows:\n\n$$\\begin{align}\n&X= \\sum_{n=1}^N\\left(\\frac{a}{r}\\right)^{n+2}\\left[ g_n^0X_n^0+\\sum_{m=1}^n\\left( g_n^m\\cos m\\phi+h_n^m\\sin m\\phi \\right)X_n^m\\right]\\\\[6pt]\n&Y= \\sum_{n=1}^N\\left(\\frac{a}{r}\\right)^{n+2} \\sum_{m=1}^n \\left(g_n^m\\sin m\\phi-h_n^m\\cos m\\phi \\right)Y_n^m \\\\[6pt]\n&Z= \\sum_{n=1}^N\\left(\\frac{a}{r}\\right)^{n+2} \\left[g_n^0Z_n^0+\\sum_{m=1}^n\\left( g_n^m\\cos m\\phi+h_n^m\\sin m\\phi \\right)Z_n^m\\right]\\\\[6pt]\n\\text{with}&\\\\[6pt]\n&X_n^m=\\frac{dP_n^n}{d\\theta}\\\\[6pt]\n&Y_n^m=\\frac{m}{\\sin \\theta}P_n^m \\kern{10ex} \\text{(Except at the poles where $Y_n^m=X_n^m\\cos \\theta$.)}\\\\[6pt]\n&Z_n^m=-(n+1)P_n^m\n\\end{align}$$\n\n\nwhere $n$ and $m$ are spherical harmonic degree and order, respectively, and the ($g_n^m, h_n^m$) are the Gauss coefficients for a particular model (e.g. the IGRF) of maximum degree $N$.\n\nThe Associated Legendre functions of degree $n$ and order $m$ are defined, in Schmidt semi-normalised form by\n\n$$P^m_n(x) = \\frac{1}{2^n n!}\\left[ \\frac{(2-\\delta_{0m})(n-m)!\\left(1-x^2\\right)^m}{(n+m)!} \\right]^{1/2}\\frac{d^{n+m}}{dx^{n+m}}\\left(1-x^2\\right)^{n},$$\n\n\nwhere $x = \\cos(\\theta)$. 
\n\nReferring to Malin and Barraclough (1981), the recurrence relations\n\n$$\\begin{align}\nP_n^n&=\\left(1-\\frac{1}{2n}\\right)^{1/2}\\sin \\theta \\thinspace P_{n-1}^{n-1} \\\\[6pt]\nP_n^m&=\\left[\\left(2n-1\\right) \\cos \\theta \\thinspace P_{n-1}^m-\\left[ \\left(n-1\\right)^2-m^2\\right]^{1/2}P_{n-2}^m\\right]\\left(n^2-m^2\\right)^{-1/2},\\\\[6pt]\n\\end{align}$$\n\nand\n\n$$\\begin{align}\nX_n^n&=\\left(1-\\frac{1}{2n}\\right)^{1/2}\\left( \\sin \\theta \\thinspace X_{n-1}^{n-1}+ \\cos \\theta \\thinspace P_{n-1}^{n-1} \\right)\\\\[6pt]\nX_n^m&=\\left[\\left(2n-1\\right)\\left( \\cos \\theta \\thinspace X_{n-1}^m- \\sin \\theta \\thinspace P_{n-1}^m\\right) - \\left[ \\left(n-1\\right)^2-m^2\\right]^{1/2}X_{n-2}^m\\right]\\left(n^2-m^2\\right)^{-1/2}.\n\\end{align}$$\n\nmay be used to calculate the $X^m_n$, $Y^m_n$ and $Z^m_n$, given $P_0^0=1$, $P_1^1=\\sin(\\theta)$, $X_0^0=0$ and $X_1^1=\\cos(\\theta)$.\n
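As a sketch of how these recurrences translate into code (independent of the `sha` helper module used later in this notebook), the function below accumulates the Schmidt semi-normalised $P_n^m(\theta)$ in a dictionary keyed by $(n, m)$; the seed $P_1^0=\cos\theta$ follows from the second relation, and the missing $P_{n-2}^m$ term when $m=n-1$ is treated as zero:

```python
import numpy as np

def schmidt_pnm(nmax, colat_deg):
    """Schmidt semi-normalised P(n,m) at a single colatitude via the recurrences above."""
    theta = np.deg2rad(colat_deg)
    ct, st = np.cos(theta), np.sin(theta)
    P = {(0, 0): 1.0, (1, 0): ct, (1, 1): st}
    for n in range(2, nmax + 1):
        P[(n, n)] = np.sqrt(1 - 1 / (2 * n)) * st * P[(n - 1, n - 1)]
        for m in range(n):
            Pnm2 = P.get((n - 2, m), 0.0)          # absent when m = n-1
            P[(n, m)] = ((2 * n - 1) * ct * P[(n - 1, m)]
                         - np.sqrt((n - 1) ** 2 - m ** 2) * Pnm2) / np.sqrt(n ** 2 - m ** 2)
    return P

# sanity check: P(2,0) = (3*cos(theta)**2 - 1)/2, so it should be 1 at the pole
print(schmidt_pnm(2, 0.0)[(2, 0)])
```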

    \n\n## Plotting $P_n^m$ and $X_n^m$\nThe $P_n^m(\\theta)$ and $X_n^m(\\theta)$ are building blocks for computing geomagnetic field models given a spherical harmonic model. It's instructive to visualise these functions and below you can experiment by setting different values of spherical harmonic degree ($n$) and order ($m \\le n$). Note how the choice of $n$ and $m$ affects the number of zeroes of the functions. \n\nThe functions are plotted on a semi-circle representing the surface of the Earth, with the inner core added for cosmetic purposes only! Again, purely for cosmetic purposes, the functions are scaled to fit within $\\pm$10% of the Earth's surface.\n\n\n**>> USER INPUT HERE: Set the spherical harmonic degree and order for the plot**\n\n\n```python\ndegree = 13\norder = 7\n```\n\n\n```python\n# Calculate Pnm and Xmn values every 0.5 degrees\ncolat = np.linspace(0,180,361)\npnmvals = np.zeros(len(colat))\nxnmvals = np.zeros(len(colat))\n\nidx = sha.pnmindex(degree,order)\nfor i, cl in enumerate(colat):\n p,x = sha.pxyznm_calc(degree, cl)[0:2]\n pnmvals[i] = p[idx]\n xnmvals[i] = x[idx]\n \ntheta = np.deg2rad(colat)\nct = np.cos(theta)\nst = np.sin(theta)\n\n# Numbers mimicking the Earth's surface and outer core radii\ne_rad = 6.371\nc_rad = 3.485\n\n# Scale values to fit within 10% of \"Earth's surface\". Firstly the P(n,m),\nshell = 0.1*e_rad\npmax = np.abs(pnmvals).max()\npnmvals = pnmvals*shell/pmax + e_rad\nxp = pnmvals*st\nyp = pnmvals*ct\n\n# and now the X(n,m)\nxmax = np.abs(xnmvals).max()\nxnmvals = xnmvals*shell/xmax + e_rad\nxx = xnmvals*st\nyx = xnmvals*ct\n\n# Values to draw the Earth's and outer core surfaces as semi-circles\ne_xvals = e_rad*st\ne_yvals = e_rad*ct\nc_xvals = e_xvals*c_rad/e_rad\nc_yvals = e_yvals*c_rad/e_rad\n\n# Earth-like background framework for plots\ndef eplot(ax):\n ax.set_aspect('equal')\n ax.set_axis_off()\n ax.plot(e_xvals,e_yvals, color='blue')\n ax.plot(c_xvals,c_yvals, color='black')\n ax.fill_between(c_xvals, c_yvals, y2=0, color='lightgrey')\n ax.plot((0, 0), (-e_rad, e_rad), color='black')\n\n# Plot the P(n,m) and X(n,m)\nfig, axes = plt.subplots(nrows=1, ncols=2, figsize=(18, 8))\nfig.suptitle('Degree (n) = '+str(degree)+', order (m) = '+str(order), fontsize=20)\n \naxes[0].plot(xp,yp, color='red')\naxes[0].set_title('P('+ str(degree)+',' + str(order)+')', fontsize=16)\neplot(axes[0])\n\naxes[1].plot(xx,yx, color='red')\naxes[1].set_title('X('+ str(degree)+',' + str(order)+')', fontsize=16)\neplot(axes[1])\n```\n\n**Try again!**\n\n## The International Geomagnetic Reference Field\nThe latest version of the IGRF is IGRF13 which consists of a main-field model every five years from 1900.0 to 2020.0 and a secular variation model for 2020-2025. The main field models have (maximum) spherical harmonic degree (n) and order (m) 10 up to 1995 and n=m=13 from 2000 onwards. The secular variation model has n=m=8.\n\nThe coefficients are first loaded into a pandas database: \n\n\n```python\nigrf13 = pd.read_csv(IGRF13_FILE, delim_whitespace=True, header=3)\nigrf13.head() # Check the values have loaded correctly\n```\n\n### a) Calculating geomagnetic field values using the IGRF\nThe function below calculates geomagnetic field values at a point defined by its colatitude, longitude and altitude, using a spherical harmonic model of maximum degree _nmax_ supplied as an array _gh_. 
The parameter _coord_ is a string specifying whether the input position is in geocentric coordinates (when _altitude_ should be the geocentric distance in km) or geodetic coordinates (when altitude is distance above mean sea level in km). \n\n(It's unconventional, but I've chosen to include a monopole term, set to zero, at index zero in the _gh_ array.)
    \n\n\n```python\ndef shm_calculator(gh, nmax, altitude, colat, long, coord):\n RREF = 6371.2 #The reference radius assumed by the IGRF\n degree = nmax\n phi = long\n\n if (coord == 'Geodetic'):\n # Geodetic to geocentric conversion using the WGS84 spheroid\n rad, theta, sd, cd = sha.gd2gc(altitude, colat)\n else:\n rad = altitude\n theta = colat\n\n # Function 'rad_powers' to create an array with values of (a/r)^(n+2) for n = 0,1, 2 ..., degree\n rpow = sha.rad_powers(degree, RREF, rad)\n\n # Function 'csmphi' to create arrays with cos(m*phi), sin(m*phi) for m = 0, 1, 2 ..., degree\n cmphi, smphi = sha.csmphi(degree,phi)\n\n # Function 'gh_phi_rad' to create arrays with terms such as [g(3,2)*cos(2*phi) + h(3,2)*sin(2*phi)]*(a/r)**5 \n ghxz, ghy = sha.gh_phi_rad(gh, degree, cmphi, smphi, rpow)\n\n # Function 'pnm_calc' to calculate arrays of the Associated Legendre Polynomials for n (&m) = 0,1, 2 ..., degree\n pnm, xnm, ynm, znm = sha.pxyznm_calc(degree, theta)\n\n # Geomagnetic field components are calculated as a dot product\n X = np.dot(ghxz, xnm)\n Y = np.dot(ghy, ynm)\n Z = np.dot(ghxz, znm)\n\n # Convert back to geodetic (X, Y, Z) if required\n if (coord == 'Geodetic'):\n t = X\n X = X*cd + Z*sd\n Z = Z*cd - t*sd\n\n return((X, Y, Z))\n```\n\n**>> >> USER INPUT HERE: Set the input parameters**\n\n\n```python\nlocation = 'Erehwon'\nctype = 'Geocentric' # coordinate type\naltitude = 6371.2 # in km above the spheroid if ctype = 'Geodetic', radial distance if ctype = 'Geocentric'\ncolat = 35 # NB colatitude, not latitude\nlong = -3 # longitude\nNMAX = 13 # Maxiimum spherical harmonic degree of the model\ndate = 2020.0 # Date for the field estimates\n```\n\nNow calculate the IGRF geomagnetic field estimates.\n\n\n```python\n# Calculate the gh values for the supplied date\nif date == 2020.0:\n gh = igrf13['2020.0']\nelif date < 2020.0:\n date_1 = (date//5)*5\n date_2 = date_1 + 5\n w1 = date-date_1\n w2 = date_2-date\n gh = np.array((w2*igrf13[str(date_1)] + w1*igrf13[str(date_2)])/(w1+w2))\nelif date > 2020.0:\n gh =np.array(igrf13['2020.0'] + (date-2020.0)*igrf13['2020-25'])\n\ngh = np.append(0., gh) # Add a zero monopole term corresponding to g(0,0)\n\nbxyz = shm_calculator(gh, NMAX, altitude, colat, long, ctype)\ndec, hoz ,inc , eff = mag.xyz2dhif(bxyz[0], bxyz[1], bxyz[2])\n\nprint('\\nGeomagnetic field values at: ', location+', '+ str(date), '\\n')\nprint('Declination (D):', '{: .1f}'.format(dec), 'degrees')\nprint('Inclination (I):', '{: .1f}'.format(inc), 'degrees')\nprint('Horizontal intensity (H):', '{: .1f}'.format(hoz), 'nT')\nprint('Total intensity (F) :', '{: .1f}'.format(eff), 'nT')\nprint('North component (X) :', '{: .1f}'.format(bxyz[0]), 'nT')\nprint('East component (Y) :', '{: .1f}'.format(bxyz[1]), 'nT')\nprint('Vertical component (Z) :', '{: .1f}'.format(bxyz[2]), 'nT')\n```\n\n### b) Maps of the IGRF\nNow draw maps of the IGRF at the date selected above. The latitude range is set at -85 degrees to +85 degrees and the longitude range -180 degrees to +180 degrees and IGRF values for (X, Y, Z) are calculated on a 5 degree grid (this may take a few seconds to complete).\n\n**>> >> USER INPUT HERE: Set the element to plot**\n\nEnter the geomagnetic element to plot below:
    \nD = declination
    \nH = horizontal intensity
    \nI = inclination
    \nX = north component
    \nY = east component
    \nZ = vertical (downwards) component
    \nF = total intensity.)\n\n\n```python\nel2plot = 'H'\n```\n\n\n```python\ndef IGRF_plotter(el_name, vals, date):\n if el_name=='D':\n cvals = np.arange(-25,30,5)\n else:\n cvals = 15\n fig, ax = plt.subplots(figsize=(16, 8))\n cplt = ax.contour(longs, lats, vals, levels=cvals)\n ax.clabel(cplt, cplt.levels, inline=True, fmt='%d', fontsize=10)\n ax.set_title('IGRF: '+ el_name + ' (' + str(date) + ')', fontsize=20)\n ax.set_xlabel('Longitude', fontsize=16)\n ax.set_ylabel('Latitude', fontsize=16)\n```\n\n\n```python\nlongs = np.linspace(-180, 180, 73)\nlats = np.linspace(-85, 85, 35)\nBx, By, Bz = zip(*[sha.shm_calculator(gh,13,6371.2,90-lat,lon,'Geocentric') \\\n for lat in lats for lon in longs])\nX = np.asarray(Bx).reshape(35,73)\nY = np.asarray(By).reshape(35,73)\nZ = np.asarray(Bz).reshape(35,73)\nD, H, I, F = [mag.xyz2dhif(X, Y, Z)[el] for el in range(4)]\n\nel_dict={'X':X, 'Y':Y, 'Z':Z, 'D':D, 'H':H, 'I':I, 'F':F}\nIGRF_plotter(el2plot, el_dict[el2plot], date)\n```\n\n### Exercise\nProduce a similar plot for the secular variation from 2020 to 2025 using the IGRF.\n\n### References\n\nMalin, S. R. . and Barraclough, D., (1981). An algorithm for synthesizing the geomagnetic field, Computers & Geosciences. Pergamon, 7(4), pp. 401\u2013405. doi: 10.1016/0098-3004(81)90082-0.\n", "meta": {"hexsha": "704924f1ec1918fd175b851198409c3aa13da316", "size": 15809, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "geomag-obs-models/04a_SHA1.ipynb", "max_stars_repo_name": "MagneticEarth/book.magneticearth.org", "max_stars_repo_head_hexsha": "c8c1e3403b682a508a61053ce330b0e891992ef3", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "geomag-obs-models/04a_SHA1.ipynb", "max_issues_repo_name": "MagneticEarth/book.magneticearth.org", "max_issues_repo_head_hexsha": "c8c1e3403b682a508a61053ce330b0e891992ef3", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "geomag-obs-models/04a_SHA1.ipynb", "max_forks_repo_name": "MagneticEarth/book.magneticearth.org", "max_forks_repo_head_hexsha": "c8c1e3403b682a508a61053ce330b0e891992ef3", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.4620853081, "max_line_length": 450, "alphanum_fraction": 0.549560377, "converted": true, "num_tokens": 3862, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9504109756113862, "lm_q2_score": 0.863391599428538, "lm_q1q2_score": 0.820576852347552}} {"text": "# \"[Draft] Optimizer Code Supplement Using fastai\"\n> \"Building the Optimizers explained by Sebastian Ruder using fastai\"\n\n- toc: true\n- branch: master\n- badges: true\n- comments: true\n- author: Kevin Bird\n- categories: [fastai]\n\n## Introduction\n\nThis post is meant to be a supporting guide that can be used with the incredibly detailed [post from Sebastian Ruder](https://ruder.io/optimizing-gradient-descent/index.html). The goal is to implement each of these optimizers using fastai and run them on a small problem. \n \nThe sample problem that will be used is the one created by Jeremy and Rachel in the graddesc spreadsheet. 
\n\nBecause this post is generated using Jupyter Notebooks, you can also interact with any of these concepts and modify them as you see fit. In as many places as possible, I will try to connect the code back to the concepts by utilizing the greek letters Sebastian (and usually the paper) uses\n\n\n```python\n#hide\nfrom fastai.vision.all import *\n```\n\n\n```python\na_real = 2\nb_real = 30\n```\n\n`a_real` is a weight and `b_real` is a bias they form the equation $a\\_real*x + b\\_real= y$ \nBoth a and b are parameters and will be learned by our optimizer through the loss function. \nIn a real world problem, these values would be unknown and there would typically be a lot more weights, but the concept will remain the same whether there are 2 parameters or [175 billion](https://www.google.com/search?client=firefox-b-1-d&q=how+many+parameters+does+gpt+3+have). \n\n\n```python\n#hide\nclass MySimpleModel(torch.nn.Module):\n def __init__(self):\n super(MySimpleModel, self).__init__()\n self.linear = nn.Linear(1,1)\n self.linear.weight.data = nn.Parameter(torch.tensor([[1.0]]))\n self.linear.bias.data = nn.Parameter(torch.tensor([1.0]))\n self.a = self.linear.weight\n self.b = self.linear.bias\n \n def forward(self, x):\n x = x.unsqueeze(1)\n x = self.linear(x)\n return x\n```\n\n\n```python\n#hide\nclass OptimizerInsight(Callback):\n def before_batch(self):\n if self.training:\n self.a_guess+=[float(self.model.a)]\n self.b_guess+=[float(self.model.b)]\n #plt.plot(theta_real,b_real, 'ro', markersize=5)\n #print(self.model.theta.tolist())\n def before_epoch(self):\n try:\n self.a_guess = [self.a_guess[-1]]\n except:\n self.a_guess = []\n try:\n self.b_guess = [self.b_guess[-1]]\n except:\n self.b_guess = []\n \n def after_epoch(self):\n plt.plot(self.a_guess, self.b_guess)\n```\n\n\n```python\n#hide\nx_list = torch.tensor([14.0,86,28,51,28,29,72,62,84,15,42,62,47,35,9,38,44,99,13,21,28,20,8,64,99,70,27,17,8])\ny_list = torch.tensor([58.0,202,86,132,86,88,174,154,198,60,114,154,124,100,48,106,118,228,56,72,86,70,46,158,228,170,84,64,46])\nb_list = [(x,y) for x,y in zip(x_list, y_list)]\n```\n\n\n```python\n#hide\ndl = DataLoader(b_list, bs=4)\n#setting validation set equal to training dataset. 
For this example, we are only interested in the training dataset\ndls = DataLoaders(dl,dl)\n```\n\n## Gradient Descent Variants\n\n### [Batch Gradient Descent](https://ruder.io/optimizing-gradient-descent/index.html#batchgradientdescent)\n\nForumula: \n$\\theta = \\theta - \\eta \\cdot \\nabla_\\theta J( \\theta)$\n \nCode: \n```python\ndef sgd_step(p, lr, **kwargs):\n p.data.add_(p.grad.data, alpha=-lr)\n```\n\nTranslation: \n $\\theta$ = p \n $\\eta$ = lr \n $\\nabla_\\theta J( \\theta)$ = p.grad.data\n\n\n```python\ndef _do_epoch_BGD(self):\n self._do_epoch_train()\n self._step()\n self('after_step')\n self.opt.zero_grad()\n self._do_epoch_validate()\n```\n\n\n```python\ndef _do_one_batch_BGD(self):\n self.pred = self.model(*self.xb)\n self('after_pred')\n if len(self.yb): self.loss = self.loss_func(self.pred, *self.yb)\n self('after_loss')\n if not self.training or not len(self.yb): return\n self('before_backward')\n self._backward()\n self('after_backward')\n```\n\n\n```python\nmodel = MySimpleModel()\nloss_func = mse #F.mse_loss\n\ud835\udf02 = 0.00005\npartial_learner = partial(Learner,dls, model, loss_func, cbs=[OptimizerInsight])\nlearn = partial_learner(SGD, lr=\ud835\udf02)\nlearn._do_epoch = partial(_do_epoch_BGD,learn)\nlearn._do_one_batch = partial(_do_one_batch_BGD,learn)\n```\n\n\n```python\nlearn.fit(10)\n```\n\n### [Stochastic Gradient Descent](https://ruder.io/optimizing-gradient-descent/index.html#stochasticgradientdescent)\n\nFormula: \n$\\theta = \\theta - \\eta \\cdot \\nabla_\\theta J( \\theta; x^{(i)}; y^{(i)})$ \n \nCode: \n```python\ndef sgd_step(p, lr, **kwargs):\n p.data.add_(p.grad.data, alpha=-lr)\n```\n\nTranslation: \n $\\theta$ = p \n $\\eta$ = lr \n $\\nabla_\\theta J( \\theta; x^{(i)}; y^{(i)})$ = p.grad.data\n\n\n```python\ndef _do_epoch_SGD(self):\n self._do_epoch_train()\n self._do_epoch_validate()\n```\n\n\n```python\ndef _do_one_batch_SGD(self):\n #pdb.set_trace()\n for i,b in enumerate(zip(self.xb[0][None].T,self.yb[0][None].T)):\n x,y = b\n self.pred = self.model(x)\n self('after_pred')\n if len(y): self.loss = self.loss_func(self.pred, y)\n self('after_loss')\n if not self.training or not len(self.yb): return\n self('before_backward')\n self._backward()\n self('after_backward')\n self._step()\n self('after_step')\n self.opt.zero_grad()\n```\n\n\n```python\nmodel = MySimpleModel()\nloss_func = mse #F.mse_loss\n\ud835\udf02 = 0.0001\npartial_learner = partial(Learner,dls, model, loss_func, cbs=[OptimizerInsight])\nlearn = partial_learner(SGD, lr=\ud835\udf02)\nlearn._do_epoch = partial(_do_epoch_SGD,learn)\nlearn._do_one_batch = partial(_do_one_batch_SGD,learn)\n```\n\n\n```python\nlearn.fit(10)\n```\n\n### [Mini-batch Gradient Descent](https://ruder.io/optimizing-gradient-descent/index.html#minibatchgradientdescent)\n\n$\\theta = \\theta - \\eta \\cdot \\nabla_\\theta J( \\theta; x^{(i:i+n)}; y^{(i:i+n)})$\n\nComments about Batch Gradient Descent, Stochastic Gradient Descent, and Mini-batch Gradient Descent: \nAll three of these use itentical code for the optimizer itself. The huge difference between these three Optimizers is when the step actually occurs. 
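To see that distinction outside of the `Learner` plumbing, here is a tiny plain-PyTorch illustration (the toy data, model and learning rate are arbitrary); the optimizer object is the same in both cases, only the placement of `opt.step()` and `opt.zero_grad()` moves:

```python
import torch
import torch.nn.functional as F

xs = torch.tensor([[14.], [86.], [28.], [51.]])   # toy inputs
ys = 2 * xs + 30                                  # targets from a*x + b
model = torch.nn.Linear(1, 1)
opt = torch.optim.SGD(model.parameters(), lr=1e-5)

# Batch gradient descent: gradients accumulate over the whole dataset, one step per epoch.
opt.zero_grad()
for x, y in zip(xs, ys):
    F.mse_loss(model(x), y).backward()
opt.step()
opt.zero_grad()

# Stochastic / mini-batch gradient descent: a step after every example (or mini-batch).
for x, y in zip(xs, ys):
    opt.zero_grad()
    F.mse_loss(model(x), y).backward()
    opt.step()
```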
\n\n\n```python\ndef _do_epoch_MBGD(self):\n self._do_epoch_train()\n self._do_epoch_validate()\n```\n\n\n```python\ndef _do_one_batch_MBGD(self):\n self.pred = self.model(*self.xb)\n self('after_pred')\n if len(self.yb): self.loss = self.loss_func(self.pred, *self.yb)\n self('after_loss')\n if not self.training or not len(self.yb): return\n self('before_backward')\n self._backward()\n self('after_backward')\n self._step()\n self('after_step')\n self.opt.zero_grad()\n```\n\n\n```python\nmodel = MySimpleModel()\nloss_func = mse #F.mse_loss\n\ud835\udf02 = 0.0001\npartial_learner = partial(Learner,dls, model, loss_func, cbs=[OptimizerInsight])\nlearn = partial_learner(SGD, lr=\ud835\udf02)\nlearn._do_epoch = partial(_do_epoch_MBGD,learn)\nlearn._do_one_batch = partial(_do_one_batch_MBGD,learn)\n```\n\n\n```python\nlearn.fit(10)\n```\n\nThe SGD Optimizer can do all three SGD($\\theta$, $\\eta$) above with the same parameters. \n\n**Note**: `mom`, `wd`, and `decouple_wd` are all 0 or don't influence things with their default values. \n\n## Beyond Vanilla Gradient Descent\n\n### [SGD with Momentum](https://ruder.io/optimizing-gradient-descent/index.html#momentum)\n\n$\n\\begin{align} \n\\begin{split} \nv_t &= \\gamma v_{t-1} + \\eta \\nabla_\\theta J( \\theta) \\\\ \n\\theta &= \\theta - v_t \n\\end{split} \n\\end{align}\n$\n\n\n```python\n\n```\n\n### [Nesterov Accelerated Gradient](https://ruder.io/optimizing-gradient-descent/index.html#nesterovacceleratedgradient)\n\n$\n\\begin{align} \n\\begin{split} \nv_t &= \\gamma v_{t-1} + \\eta \\nabla_\\theta J( \\theta - \\gamma v_{t-1} ) \\\\ \n\\theta &= \\theta - v_t \n\\end{split} \n\\end{align}\n$\n\n\n```python\n\n```\n\n### [Adagrad](https://ruder.io/optimizing-gradient-descent/index.html#adagrad)\n\n$g_{t, i} = \\nabla_\\theta J( \\theta_{t, i} )$ \n$\\theta_{t+1, i} = \\theta_{t, i} - \\eta \\cdot g_{t, i}$ \n$\\theta_{t+1, i} = \\theta_{t, i} - \\dfrac{\\eta}{\\sqrt{G_{t, ii} + \\epsilon}} \\cdot g_{t, i}$ \n$\\theta_{t+1} = \\theta_{t} - \\dfrac{\\eta}{\\sqrt{G_{t} + \\epsilon}} \\odot g_{t}$\n\n\n```python\n\n```\n\n### [Adadelta](https://ruder.io/optimizing-gradient-descent/index.html#adadelta)\n\n$E[g^2]_t = \\gamma E[g^2]_{t-1} + (1 - \\gamma) g^2_t$ \n$\n\\begin{align} \n\\begin{split} \n\\Delta \\theta_t &= - \\eta \\cdot g_{t, i} \\\\ \n\\theta_{t+1} &= \\theta_t + \\Delta \\theta_t \\end{split} \n\\end{align}\n$ \n$\\Delta \\theta_t = - \\dfrac{\\eta}{\\sqrt{G_{t} + \\epsilon}} \\odot g_{t}$ \n$\\Delta \\theta_t = - \\dfrac{\\eta}{\\sqrt{E[g^2]_t + \\epsilon}} g_{t}$ \n$\\Delta \\theta_t = - \\dfrac{\\eta}{RMS[g]_{t}} g_t$ \n$E[\\Delta \\theta^2]_t = \\gamma E[\\Delta \\theta^2]_{t-1} + (1 - \\gamma) \\Delta \\theta^2_t$ \n$RMS[\\Delta \\theta]_{t} = \\sqrt{E[\\Delta \\theta^2]_t + \\epsilon}$ \n$\n\\begin{align} \n\\begin{split} \n\\Delta \\theta_t &= - \\dfrac{RMS[\\Delta \\theta]_{t-1}}{RMS[g]_{t}} g_{t} \\\\ \n\\theta_{t+1} &= \\theta_t + \\Delta \\theta_t \n\\end{split} \n\\end{align}\n$ \n\n\n```python\n\n```\n\n### [RMSprop](https://ruder.io/optimizing-gradient-descent/index.html#rmsprop)\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n### [Adam](https://ruder.io/optimizing-gradient-descent/index.html#adam)\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n### [AdaMax](https://ruder.io/optimizing-gradient-descent/index.html#adamax)\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n### [Nadam](https://ruder.io/optimizing-gradient-descent/index.html#nadam)\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n### 
[AMSGrad](https://ruder.io/optimizing-gradient-descent/index.html#amsgrad)\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n### AdamW\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n### QHAdam\n\n\n```python\n\n```\n\n\n```python\n\n```\n\n### AggMo\n\n\n```python\n\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "df49f251d7e759b1f1c1f150f0c1f0c598288317", "size": 84333, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "_notebooks/2020-09-07-Optimizer_Code_Supplement.ipynb", "max_stars_repo_name": "kevinbird15/Blog", "max_stars_repo_head_hexsha": "98cd08dc40dc5c7becd501e545920a7ccdd2d517", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "_notebooks/2020-09-07-Optimizer_Code_Supplement.ipynb", "max_issues_repo_name": "kevinbird15/Blog", "max_issues_repo_head_hexsha": "98cd08dc40dc5c7becd501e545920a7ccdd2d517", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2020-09-05T17:39:18.000Z", "max_issues_repo_issues_event_max_datetime": "2022-02-26T09:53:45.000Z", "max_forks_repo_path": "_notebooks/2020-09-07-Optimizer_Code_Supplement.ipynb", "max_forks_repo_name": "kevinbird15/PersonalBlog", "max_forks_repo_head_hexsha": "98cd08dc40dc5c7becd501e545920a7ccdd2d517", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 83.8300198807, "max_line_length": 23088, "alphanum_fraction": 0.8071691983, "converted": true, "num_tokens": 2979, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9111797051879431, "lm_q2_score": 0.9005297934592089, "lm_q1q2_score": 0.8205444717171213}} {"text": "# SymPy - Symbolic algebra in Python\n\nJ.R. Johansson (jrjohansson at gmail.com)\n\nThe latest version of this [IPython notebook](http://ipython.org/notebook.html) lecture is available at [http://github.com/jrjohansson/scientific-python-lectures](http://github.com/jrjohansson/scientific-python-lectures).\n\nThe other notebooks in this lecture series are indexed at [http://jrjohansson.github.io](http://jrjohansson.github.io).\n\n\n```python\n%matplotlib inline\nimport matplotlib.pyplot as plt\n```\n\n## Introduction\n\nThere are two notable Computer Algebra Systems (CAS) for Python:\n\n* [SymPy](http://sympy.org/en/index.html) - A python module that can be used in any Python program, or in an IPython session, that provides powerful CAS features. \n* [Sage](http://www.sagemath.org/) - Sage is a full-featured and very powerful CAS enviroment that aims to provide an open source system that competes with Mathematica and Maple. Sage is not a regular Python module, but rather a CAS environment that uses Python as its programming language.\n\nSage is in some aspects more powerful than SymPy, but both offer very comprehensive CAS functionality. The advantage of SymPy is that it is a regular Python module and integrates well with the IPython notebook. \n\nIn this lecture we will therefore look at how to use SymPy with IPython notebooks. 
If you are interested in an open source CAS environment I also recommend to read more about Sage.\n\nTo get started using SymPy in a Python program or notebook, import the module `sympy`:\n\n\n```python\nfrom sympy import *\n```\n\nTo get nice-looking $\\LaTeX$ formatted output run:\n\n\n```python\ninit_printing()\n\n# or with older versions of sympy/ipython, load the IPython extension\n#%load_ext sympy.interactive.ipythonprinting\n# or\n#%load_ext sympyprinting\n```\n\n## Symbolic variables\n\nIn SymPy we need to create symbols for the variables we want to work with. We can create a new symbol using the `Symbol` class:\n\n\n```python\nx = Symbol('x')\n```\n\n\n```python\n(pi + x)**2\n```\n\n\n```python\n# alternative way of defining symbols\na, b, c = symbols(\"a, b, c\")\n```\n\n\n```python\ntype(a)\n```\n\n\n\n\n sympy.core.symbol.Symbol\n\n\n\nWe can add assumptions to symbols when we create them:\n\n\n```python\nx = Symbol('x', real=True)\n```\n\n\n```python\nx.is_imaginary\n```\n\n\n\n\n False\n\n\n\n\n```python\nx = Symbol('x', positive=True)\n```\n\n\n```python\nx > 0\n```\n\n### Complex numbers\n\nThe imaginary unit is denoted `I` in SymPy. \n\n\n```python\n1+1*I\n```\n\n\n```python\nI**2\n```\n\n\n```python\n(x * I + 1)**2\n```\n\n### Rational numbers\n\nThere are three different numerical types in SymPy: `Real`, `Rational`, `Integer`: \n\n\n```python\nr1 = Rational(4,5)\nr2 = Rational(5,4)\n```\n\n\n```python\nr1\n```\n\n\n```python\nr1+r2\n```\n\n\n```python\nr1/r2\n```\n\n## Numerical evaluation\n\nSymPy uses a library for arbitrary precision as numerical backend, and has predefined SymPy expressions for a number of mathematical constants, such as: `pi`, `e`, `oo` for infinity.\n\nTo evaluate an expression numerically we can use the `evalf` function (or `N`). It takes an argument `n` which specifies the number of significant digits.\n\n\n```python\npi.evalf(n=50)\n```\n\n\n```python\ny = (x + pi)**2\n```\n\n\n```python\nN(y, 5) # same as evalf\n```\n\nWhen we numerically evaluate algebraic expressions we often want to substitute a symbol with a numerical value. In SymPy we do that using the `subs` function:\n\n\n```python\ny.subs(x, 1.5)\n```\n\n\n```python\nN(y.subs(x, 1.5))\n```\n\nThe `subs` function can of course also be used to substitute Symbols and expressions:\n\n\n```python\ny.subs(x, a+pi)\n```\n\nWe can also combine numerical evaluation of expressions with NumPy arrays:\n\n\n```python\nimport numpy\n```\n\n\n```python\nx_vec = numpy.arange(0, 10, 0.1)\n```\n\n\n```python\ny_vec = numpy.array([N(((x + pi)**2).subs(x, xx)) for xx in x_vec])\n```\n\n\n```python\nfig, ax = plt.subplots()\nax.plot(x_vec, y_vec);\n```\n\nHowever, this kind of numerical evaluation can be very slow, and there is a much more efficient way to do it: Use the function `lambdify` to \"compile\" a SymPy expression into a function that is much more efficient to evaluate numerically:\n\n\n```python\nf = lambdify([x], (x + pi)**2, 'numpy') # the first argument is a list of variables that\n # f will be a function of: in this case only x -> f(x)\n```\n\n\n```python\ny_vec = f(x_vec) # now we can directly pass a numpy array and f(x) is efficiently evaluated\n```\n\nThe speedup when using \"lambdified\" functions instead of direct numerical evaluation can be significant, often several orders of magnitude. 
Even in this simple example we get a significant speed up:\n\n\n```python\n%%timeit\n\ny_vec = numpy.array([N(((x + pi)**2).subs(x, xx)) for xx in x_vec])\n```\n\n 10 loops, best of 3: 28.2 ms per loop\n\n\n\n```python\n%%timeit\n\ny_vec = f(x_vec)\n```\n\n The slowest run took 8.86 times longer than the fastest. This could mean that an intermediate result is being cached \n 100000 loops, best of 3: 2.93 \u00b5s per loop\n\n\n## Algebraic manipulations\n\nOne of the main uses of an CAS is to perform algebraic manipulations of expressions. For example, we might want to expand a product, factor an expression, or simplify an expression. The functions for doing these basic operations in SymPy are demonstrated in this section.\n\n### Expand and factor\n\nThe first steps in an algebraic manipulation \n\n\n```python\n(x+1)*(x+2)*(x+3)\n```\n\n\n```python\nexpand((x+1)*(x+2)*(x+3))\n```\n\nThe `expand` function takes a number of keywords arguments which we can tell the functions what kind of expansions we want to have performed. For example, to expand trigonometric expressions, use the `trig=True` keyword argument:\n\n\n```python\nsin(a+b)\n```\n\n\n```python\nexpand(sin(a+b), trig=True)\n```\n\nSee `help(expand)` for a detailed explanation of the various types of expansions the `expand` functions can perform.\n\nThe opposite of product expansion is of course factoring. To factor an expression in SymPy use the `factor` function: \n\n\n```python\nfactor(x**3 + 6 * x**2 + 11*x + 6)\n```\n\n### Simplify\n\nThe `simplify` tries to simplify an expression into a nice looking expression, using various techniques. More specific alternatives to the `simplify` functions also exists: `trigsimp`, `powsimp`, `logcombine`, etc. \n\nThe basic usages of these functions are as follows:\n\n\n```python\n# simplify (sometimes) expands a product\nsimplify((x+1)*(x+2)*(x+3))\n```\n\n\n```python\n# simplify uses trigonometric identities\nsimplify(sin(a)**2 + cos(a)**2)\n```\n\n\n```python\nsimplify(cos(x)/sin(x))\n```\n\n### apart and together\n\nTo manipulate symbolic expressions of fractions, we can use the `apart` and `together` functions:\n\n\n```python\nf1 = 1/((a+1)*(a+2))\n```\n\n\n```python\nf1\n```\n\n\n```python\napart(f1)\n```\n\n\n```python\nf2 = 1/(a+2) + 1/(a+3)\n```\n\n\n```python\nf2\n```\n\n\n```python\ntogether(f2)\n```\n\nSimplify usually combines fractions but does not factor: \n\n\n```python\nsimplify(f2)\n```\n\n## Calculus\n\nIn addition to algebraic manipulations, the other main use of CAS is to do calculus, like derivatives and integrals of algebraic expressions.\n\n### Differentiation\n\nDifferentiation is usually simple. Use the `diff` function. 
The first argument is the expression to take the derivative of, and the second argument is the symbol by which to take the derivative:\n\n\n```python\ny\n```\n\n\n```python\ndiff(y**2, x)\n```\n\nFor higher order derivatives we can do:\n\n\n```python\ndiff(y**2, x, x)\n```\n\n\n```python\ndiff(y**2, x, 2) # same as above\n```\n\nTo calculate the derivative of a multivariate expression, we can do:\n\n\n```python\nx, y, z = symbols(\"x,y,z\")\n```\n\n\n```python\nf = sin(x*y) + cos(y*z)\n```\n\n$\\frac{d^3f}{dxdy^2}$\n\n\n```python\ndiff(f, x, 1, y, 2)\n```\n\n## Integration\n\nIntegration is done in a similar fashion:\n\n\n```python\nf\n```\n\n\n```python\nintegrate(f, x)\n```\n\nBy providing limits for the integration variable we can evaluate definite integrals:\n\n\n```python\nintegrate(f, (x, -1, 1))\n```\n\nand also improper integrals\n\n\n```python\nintegrate(exp(-x**2), (x, -oo, oo))\n```\n\nRemember, `oo` is the SymPy notation for inifinity.\n\n### Sums and products\n\nWe can evaluate sums using the function `Sum`:\n\n\n```python\nn = Symbol(\"n\")\n```\n\n\n```python\nSum(1/n**2, (n, 1, 10))\n```\n\n\n```python\nSum(1/n**2, (n,1, 10)).evalf()\n```\n\n\n```python\nSum(1/n**2, (n, 1, oo)).evalf()\n```\n\nProducts work much the same way:\n\n\n```python\nProduct(n, (n, 1, 10)) # 10!\n```\n\n## Limits\n\nLimits can be evaluated using the `limit` function. For example, \n\n\n```python\nlimit(sin(x)/x, x, 0)\n```\n\nWe can use 'limit' to check the result of derivation using the `diff` function:\n\n\n```python\nf\n```\n\n\n```python\ndiff(f, x)\n```\n\n$\\displaystyle \\frac{\\mathrm{d}f(x,y)}{\\mathrm{d}x} = \\frac{f(x+h,y)-f(x,y)}{h}$\n\n\n```python\nh = Symbol(\"h\")\n```\n\n\n```python\nlimit((f.subs(x, x+h) - f)/h, h, 0)\n```\n\nOK!\n\nWe can change the direction from which we approach the limiting point using the `dir` keywork argument:\n\n\n```python\nlimit(1/x, x, 0, dir=\"+\")\n```\n\n\n```python\nlimit(1/x, x, 0, dir=\"-\")\n```\n\n## Series\n\nSeries expansion is also one of the most useful features of a CAS. 
In SymPy we can perform a series expansion of an expression using the `series` function:\n\n\n```python\nseries(exp(x), x)\n```\n\nBy default it expands the expression around $x=0$, but we can expand around any value of $x$ by explicitly include a value in the function call:\n\n\n```python\nseries(exp(x), x, 1)\n```\n\nAnd we can explicitly define to which order the series expansion should be carried out:\n\n\n```python\nseries(exp(x), x, 1, 10)\n```\n\nThe series expansion includes the order of the approximation, which is very useful for keeping track of the order of validity when we do calculations with series expansions of different order:\n\n\n```python\ns1 = cos(x).series(x, 0, 5)\ns1\n```\n\n\n```python\ns2 = sin(x).series(x, 0, 2)\ns2\n```\n\n\n```python\nexpand(s1 * s2)\n```\n\nIf we want to get rid of the order information we can use the `removeO` method:\n\n\n```python\nexpand(s1.removeO() * s2.removeO())\n```\n\nBut note that this is not the correct expansion of $\\cos(x)\\sin(x)$ to $5$th order:\n\n\n```python\n(cos(x)*sin(x)).series(x, 0, 6)\n```\n\n## Linear algebra\n\n### Matrices\n\nMatrices are defined using the `Matrix` class:\n\n\n```python\nm11, m12, m21, m22 = symbols(\"m11, m12, m21, m22\")\nb1, b2 = symbols(\"b1, b2\")\n```\n\n\n```python\nA = Matrix([[m11, m12],[m21, m22]])\nA\n```\n\n\n```python\nb = Matrix([[b1], [b2]])\nb\n```\n\nWith `Matrix` class instances we can do the usual matrix algebra operations:\n\n\n```python\nA**2\n```\n\n\n```python\nA * b\n```\n\nAnd calculate determinants and inverses, and the like:\n\n\n```python\nA.det()\n```\n\n\n```python\nA.inv()\n```\n\n## Solving equations\n\nFor solving equations and systems of equations we can use the `solve` function:\n\n\n```python\nsolve(x**2 - 1, x)\n```\n\n\n```python\nsolve(x**4 - x**2 - 1, x)\n```\n\nSystem of equations:\n\n\n```python\nsolve([x + y - 1, x - y - 1], [x,y])\n```\n\nIn terms of other symbolic expressions:\n\n\n```python\nsolve([x + y - a, x - y - c], [x,y])\n```\n\n## Further reading\n\n* http://sympy.org/en/index.html - The SymPy projects web page.\n* https://github.com/sympy/sympy - The source code of SymPy.\n* http://live.sympy.org - Online version of SymPy for testing and demonstrations.\n\n## Versions\n\n\n```python\n%reload_ext version_information\n\n%version_information numpy, matplotlib, sympy\n```\n\n\n\n\n
    Software      Version
    ------------  ------------------------------------------------
    Python        2.7.10 64bit [GCC 4.2.1 (Apple Inc. build 5577)]
    IPython       3.2.1
    OS            Darwin 14.1.0 x86_64 i386 64bit
    numpy         1.9.2
    matplotlib    1.4.3
    sympy         0.7.6
    Sat Aug 15 11:37:37 2015 JST
    \n\n\n", "meta": {"hexsha": "b8ceade190497b6f196ba0f059e9671e256eed37", "size": 140858, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Lecture-5-Sympy.ipynb", "max_stars_repo_name": "hbayraktaroglu/scientific-python-lectures", "max_stars_repo_head_hexsha": "e9221b58153aad15c323e3af5f2fde999bf8b7a1", "max_stars_repo_licenses": ["CC-BY-3.0"], "max_stars_count": 2876, "max_stars_repo_stars_event_min_datetime": "2015-01-03T05:28:29.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T21:06:24.000Z", "max_issues_repo_path": "Lecture-5-Sympy.ipynb", "max_issues_repo_name": "hbayraktaroglu/scientific-python-lectures", "max_issues_repo_head_hexsha": "e9221b58153aad15c323e3af5f2fde999bf8b7a1", "max_issues_repo_licenses": ["CC-BY-3.0"], "max_issues_count": 26, "max_issues_repo_issues_event_min_datetime": "2015-04-12T06:17:06.000Z", "max_issues_repo_issues_event_max_datetime": "2021-02-02T09:57:19.000Z", "max_forks_repo_path": "Lecture-5-Sympy.ipynb", "max_forks_repo_name": "hbayraktaroglu/scientific-python-lectures", "max_forks_repo_head_hexsha": "e9221b58153aad15c323e3af5f2fde999bf8b7a1", "max_forks_repo_licenses": ["CC-BY-3.0"], "max_forks_count": 1728, "max_forks_repo_forks_event_min_datetime": "2015-01-05T00:48:26.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-21T05:51:01.000Z", "avg_line_length": 51.8432094222, "max_line_length": 11168, "alphanum_fraction": 0.7512175382, "converted": true, "num_tokens": 3338, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9005297861178929, "lm_q2_score": 0.9111797027760039, "lm_q1q2_score": 0.82054446285584}} {"text": "# Exerc\u00edcio 2.1\nRepetindo o meu conselho do cap\u00edtulo anterior, sempre que voc\u00ea aprender um recurso novo, voc\u00ea deve test\u00e1-lo no modo interativo e fazer erros de prop\u00f3sito para ver o que acontece.\n\nVimos que n = 42 \u00e9 legal. E 42 = n?\n\n\n```python\nn = 42\n```\n\n\n```python\nprint(n)\n```\n\n 42\n\n\n\n```python\n42 = n\n```\n\nOu x = y = 1?\n\n\n```python\nx = y = 1\n```\n\n\n```python\nprint(x)\nprint(y)\n```\n\n 1\n 1\n\n\nEm algumas linguagens, cada instru\u00e7\u00e3o termina em um ponto e v\u00edrgula ;. O que acontece se voc\u00ea puser um ponto e v\u00edrgula no fim de uma instru\u00e7\u00e3o no Python?\n\n\n```python\nn = 42;\nprint(type(n))\n```\n\n \n\n\n\n```python\nn = 42.\nprint(type(n))\n```\n\n \n\n\nE se puser um ponto no fim de uma instru\u00e7\u00e3o?\n\n\n```python\nn = '42'.\nprint(type(n))\n```\n\nEm nota\u00e7\u00e3o matem\u00e1tica \u00e9 poss\u00edvel multiplicar x e y desta forma: xy. O que acontece se voc\u00ea tentar fazer o mesmo no Python?\n\n\n```python\nx = 4\ny = 2\nprint(xy)\n```\n\n# Exerc\u00edcio 2.2\nPratique o uso do interpretador do Python como uma calculadora:\n\nO volume de uma esfera com raio r \u00e9: \n\\begin{align} \n\\frac{4}{3} \\pi r\u00b3\n\\end{align} Qual \u00e9 o volume de uma esfera com raio 5?\n\n\n```python\nfrom math import pi\nr = 5\nvolume_esfera = (4/3) * r * (pow(pi, 3))\nprint(f'{volume_esfera:.2f}')\n```\n\n 206.71\n\n\nSuponha que o pre\u00e7o de capa de um livro seja R\\\\$ 24,95, mas as livrarias recebem um desconto de 40 \\%. O transporte custa R$ 3,00 para o primeiro exemplar e 75 centavos para cada exemplar adicional. 
Qual \u00e9 o custo total de atacado para 60 c\u00f3pias?\n\n\n```python\npreco_de_capa = 24.95\ndesconto = .40\nexemplar_unitario = 3.\nexemplar_adicional = .75\n\nnumero_de_copias = 60\n\ntotal = (preco_de_capa * numero_de_copias * desconto) + (exemplar_unitario + (exemplar_adicional * numero_de_copias - 1))\n\nprint(f'{total:.2f}')\n```\n\n 645.80\n\n\nSe eu sair da minha casa \u00e0s 6:52 e correr 1 quil\u00f4metro a um certo passo (8min15s por quil\u00f4metro), ent\u00e3o 3 quil\u00f4metros a um passo mais r\u00e1pido (7min12s por quil\u00f4metro) e 1 quil\u00f4metro no mesmo passo usado em primeiro lugar, que horas chego em casa para o caf\u00e9 da manh\u00e3?\n\n\n```python\nvelocidade_1 = ((8 * 60) + 15) * 2\nvelocidade_2 = ((7 * 60) + 12) * 3\n\ntempo = (velocidade_1 + velocidade_2) / 60\nhorario_de_saida = (6 * 60) + 52\n\nhorario_de_chegada = (horario_de_saida + tempo) / 60\nh, m = f'{horario_de_chegada:.2f}'.split('.')\nprint(f'Hor\u00e1rio do caf\u00e9: {h}:{m}')\n```\n\n Hor\u00e1rio do caf\u00e9: 7:50\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "8252869bcde4da487b454d8e72e2f2c370a6877b", "size": 7856, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "PensePython2e/chapter-2.ipynb", "max_stars_repo_name": "carlos-moreno/algorithms", "max_stars_repo_head_hexsha": "1b202dc853f2e00982de0882a5af498c6cefad31", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "PensePython2e/chapter-2.ipynb", "max_issues_repo_name": "carlos-moreno/algorithms", "max_issues_repo_head_hexsha": "1b202dc853f2e00982de0882a5af498c6cefad31", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "PensePython2e/chapter-2.ipynb", "max_forks_repo_name": "carlos-moreno/algorithms", "max_forks_repo_head_hexsha": "1b202dc853f2e00982de0882a5af498c6cefad31", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 23.8784194529, "max_line_length": 583, "alphanum_fraction": 0.5260947047, "converted": true, "num_tokens": 847, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9425067244294588, "lm_q2_score": 0.8705972768020108, "lm_q1q2_score": 0.8205437876558701}} {"text": "# SymPy tutorial\n\n**SymPy** is a Python package for performing **symbolic mathematics**\nwhich can perform algebra, integrate and differentiate equations, \nfind solutions to differential equations, and *numerically solve\nmessy equations* -- along other uses.\n\nCHANGE LOG\n \n 2017-06-12 First revision since 2015-12-26.\n\nLet's import sympy and initialize its pretty print functionality \nwhich will print equations using LaTeX.\nJupyter notebooks uses Mathjax to render equations\nso we specify that option.\n\n\n```python\nimport sympy as sym\nsym.init_printing(use_latex='mathjax')\n\n# If you were not in a notebook environment,\n# but working within a terminal, use:\n#\n# sym.init_printing(use_unicode=True)\n```\n\n## Usage\n\nThese sections are illustrated with examples drawn from\n[rlabbe](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/Appendix-A-Installation.ipynb) from his appendix for Kalman Filters.\n\nIt is important to distinguish a Python variable\nfrom a **declared symbol** in sympy.\n\n\n```python\nphi, x = sym.symbols('\\phi, x')\n\n# x here is a sympy symbol, and we form a list:\n[ phi, x ]\n```\n\n\n\n\n$$\\left [ \\phi, \\quad x\\right ]$$\n\n\n\nNotice how we used a LaTeX expression for the symbol `phi`.\nThis is not necessary, but if you do the output will render nicely as LaTeX.\n\nAlso notice how $x$ did not have a numerical value for the list to evaluate.\n\nSo what is the **derivative** of $\\sqrt{\\phi}$ ?\n\n\n```python\nsym.diff('sqrt(phi)')\n```\n\n\n\n\n$$\\frac{1}{2 \\sqrt{\\phi}}$$\n\n\n\nWe can **factor** equations:\n\n\n```python\nsym.factor( phi**3 - phi**2 + phi - 1 )\n```\n\n\n\n\n$$\\left(\\phi - 1\\right) \\left(\\phi^{2} + 1\\right)$$\n\n\n\nand we can **expand** them:\n\n\n```python\n((phi+1)*(phi-4)).expand()\n```\n\n\n\n\n$$\\phi^{2} - 3 \\phi - 4$$\n\n\n\nYou can also use strings for equations that use symbols that you have not defined:\n\n\n```python\nx = sym.expand('(t+1)*2')\nx\n```\n\n\n\n\n$$2 t + 2$$\n\n\n\n## Symbolic solution\n\nNow let's use sympy to compute the **Jacobian** of a matrix. 
\nSuppose we have a function,\n\n$$h=\\sqrt{(x^2 + z^2)}$$\n\nfor which we want to find the Jacobian with respect to x, y, and z.\n\n\n```python\nx, y, z = sym.symbols('x y z')\n\nH = sym.Matrix([sym.sqrt(x**2 + z**2)])\n\nstate = sym.Matrix([x, y, z])\n\nH.jacobian(state)\n```\n\n\n\n\n$$\\left[\\begin{matrix}\\frac{x}{\\sqrt{x^{2} + z^{2}}} & 0 & \\frac{z}{\\sqrt{x^{2} + z^{2}}}\\end{matrix}\\right]$$\n\n\n\nNow let's compute the discrete process noise matrix $\\mathbf{Q}_k$ given the continuous process noise matrix \n$$\\mathbf{Q} = \\Phi_s \\begin{bmatrix}0&0&0\\\\0&0&0\\\\0&0&1\\end{bmatrix}$$\n\nand the equation\n\n$$\\mathbf{Q} = \\int_0^{\\Delta t} \\Phi(t)\\mathbf{Q}\\Phi^T(t) dt$$\n\nwhere \n$$\\Phi(t) = \\begin{bmatrix}1 & \\Delta t & {\\Delta t}^2/2 \\\\ 0 & 1 & \\Delta t\\\\ 0& 0& 1\\end{bmatrix}$$\n\n\n```python\ndt = sym.symbols('\\Delta{t}')\n\nF_k = sym.Matrix([[1, dt, dt**2/2],\n [0, 1, dt],\n [0, 0, 1]])\n\nQ = sym.Matrix([[0,0,0],\n [0,0,0],\n [0,0,1]])\n\nsym.integrate(F_k*Q*F_k.T,(dt, 0, dt))\n```\n\n\n\n\n$$\\left[\\begin{matrix}\\frac{\\Delta{t}^{5}}{20} & \\frac{\\Delta{t}^{4}}{8} & \\frac{\\Delta{t}^{3}}{6}\\\\\\frac{\\Delta{t}^{4}}{8} & \\frac{\\Delta{t}^{3}}{3} & \\frac{\\Delta{t}^{2}}{2}\\\\\\frac{\\Delta{t}^{3}}{6} & \\frac{\\Delta{t}^{2}}{2} & \\Delta{t}\\end{matrix}\\right]$$\n\n\n\n## Numerical solution\n\nYou can find the *numerical value* of an equation by substituting in a value for a variable:\n\n\n```python\nx = sym.symbols('x')\n\nw = (x**2) - (3*x) + 4\nw.subs(x, 4)\n```\n\n\n\n\n$$8$$\n\n\n\nTypically we want a numerical solution where the analytic solution is messy,\nthat is, we want a **solver**.\nThis is done by specifying a sympy equation, for example:\n\n\n```python\nLHS = (x**2) - (8*x) + 15\nRHS = 0\n# where both RHS and LHS can be complicated expressions.\n\nsolved = sym.solveset( sym.Eq(LHS, RHS), x, domain=sym.S.Reals )\n# Notice how the domain solution can be specified.\n\nsolved\n# A set of solution(s) is returned.\n```\n\n\n\n\n$$\\left\\{3, 5\\right\\}$$\n\n\n\n\n```python\n# Testing whether any solution(s) were found:\nif solved != sym.S.EmptySet:\n print(\"Solution set was not empty.\")\n```\n\n Solution set was not empty.\n\n\n\n```python\n# sympy sets are not like the usual Python sets...\ntype(solved)\n```\n\n\n\n\n sympy.sets.sets.FiniteSet\n\n\n\n\n```python\n# ... 
but can easily to converted to a Python list:\nl = list(solved)\nprint( l, type(l) )\n```\n\n ([3, 5], )\n\n\n\n```python\nLHS = (x**2)\nRHS = -4\n# where both RHS and LHS can be complicated expressions.\n\nsolved = sym.solveset( sym.Eq(LHS, RHS), x )\n# Leaving out the domain will include the complex domain.\n\nsolved\n```\n\n\n\n\n$$\\left\\{- 2 i, 2 i\\right\\}$$\n\n\n\n## Application to financial economics\n\nWe used sympy to deduce parameters of Gaussian mixtures\nin module `lib/ys_gauss_mix.py` and the explanatory notebook\nis rendered at https://git.io/gmix \n", "meta": {"hexsha": "5e9266c19ae2a3149b23ef250a800397c9b532df", "size": 12379, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "docs/fecon235-08-sympy.ipynb", "max_stars_repo_name": "maidenlane/five", "max_stars_repo_head_hexsha": "bf14dd37b0f14d6998893c2b0478275a0fc55a82", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-04-24T05:29:26.000Z", "max_stars_repo_stars_event_max_datetime": "2020-04-24T05:29:26.000Z", "max_issues_repo_path": "fecon235-master/docs/fecon235-08-sympy.ipynb", "max_issues_repo_name": "maidenlane/five", "max_issues_repo_head_hexsha": "bf14dd37b0f14d6998893c2b0478275a0fc55a82", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "fecon235-master/docs/fecon235-08-sympy.ipynb", "max_forks_repo_name": "maidenlane/five", "max_forks_repo_head_hexsha": "bf14dd37b0f14d6998893c2b0478275a0fc55a82", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-04-24T05:34:06.000Z", "max_forks_repo_forks_event_max_datetime": "2020-04-24T05:34:06.000Z", "avg_line_length": 22.7555147059, "max_line_length": 293, "alphanum_fraction": 0.4488246223, "converted": true, "num_tokens": 1530, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9425067228145365, "lm_q2_score": 0.8705972717658209, "lm_q1q2_score": 0.8205437815032803}} {"text": "# Solid Angle subtended by a sphere Approximation\n\nWe want to approximate the value of the solid angle subtended by a sphere as seen from any point in space outside of that sphere. This expression will be used in ToFu to compute the radiated power received by a particle of arbitrary radius (small vs plasma volume discretization) from the whole plasma. The expression will allow faster computation.\n\n\n\n## Notations\n\n\n\nLet\u2019s consider the case of a spherical particle of radius $r$, observed from point $M$ located at a distance $d$ from the center $C$ of the particle, as illustrated in the figure above. By definition, the solid angle $\\Omega = \\dfrac{S}{d^2}$ , where $S$ is the surface on the sphere of center $M$ intersecting the particle center $C$ and limited by its radius, as represented in the figure below.\n\n\n\n\n\n## Solid Angle approximation\nIn our case, we get\n\n$$\\Omega = 2\\pi \\left( 1 - \\sqrt{1-\\left(\\dfrac{r}{d}\\right)^2}\\right)$$\n\nHowever, the particle radius is almost always much smaller than the distance between the particle and the observation point $M$. 
Thus, often $$\\dfrac{r}{d} = X \\xrightarrow[]{} 0$$\n\nThe taylor series of the function $\\Omega(X) = 2\\pi \\left( 1 - \\sqrt{1-X^2}\\right)$ at $X=0$ is given by \n\n$$\\Omega(X) = \\Omega(0) + X\\Omega'(0) + \\dfrac{X^2}{2}\\Omega''(0) + \\dfrac{X^3}{6}\\Omega^{(3)}(0)+ \\dfrac{X^4}{24}\\Omega^{(4)}(0) + O(x^4)$$\n\nwhere\n\n$$\n\\begin{align}\n\\Omega(X) &= 2\\pi \\left( 1 - \\sqrt{1-X^2}\\right)\\\\\n\\Omega'(X) &= 2\\pi X \\left( 1 - X^2\\right)^{-\\dfrac{1}{2}}\\\\\n\\Omega''(X) &= 2\\pi \\left( 1 - X^2\\right)^{-\\dfrac{3}{2}}\\\\\n\\Omega^{(3)}(X) &= 6 \\pi X \\left( 1 - X^2\\right)^{-\\dfrac{5}{2}}\\\\\n\\Omega^{(4)}(X) &= 6 \\pi \\left(4X^2 + 1 \\right)\\left( 1 - X^2\\right)^{-\\dfrac{7}{2}}\n\\end{align}\n$$\n\n\nThus, we get\n\n$$ \\Omega(X) = \\pi x^2 + \\dfrac{x^4 \\pi}{4} + O(x^4) $$\n\nReplacing the variable back\n\n$$ \\Omega \\approx \\pi \\left(\\dfrac{r}{d}\\right)^2 + \\dfrac{\\pi}{4}\\left(\\dfrac{r}{d}\\right)^4$$\n\nAnd to the 9-th degree\n\n$$ \\Omega \\approx \\pi \\left(\\dfrac{r}{d}\\right)^2 + \\dfrac{\\pi}{4}\\left(\\dfrac{r}{d}\\right)^4 + \\dfrac{\\pi}{8}\\left(\\dfrac{r}{d}\\right)^6 + \\dfrac{5 \\pi}{64}\\left(\\dfrac{r}{d}\\right)^8$$\n\n## Computation\n\n\n```python\n%matplotlib widget\nimport ipywidgets as widgets\nimport matplotlib.pyplot as plt\nimport numpy as np\n```\n\n\n```python\n# set up plot\nfig, ax = plt.subplots(figsize=(6, 4))\nax.grid(True)\n \n\ndef exact(r, d):\n \"\"\"\n Return a sine for x with angular frequeny w and amplitude amp.\n \"\"\"\n return 2*np.pi*(1-np.sqrt(1-(r/d)**2))\n \ndef approx(r,d):\n \"\"\"\n Return a sine for x with angular frequeny w and amplitude amp.\n \"\"\"\n x = r/d\n return np.pi*(x**2 + x**4/4)\n\n# generate x values\nd = np.linspace(1, 10, 100)\n\nmaxdiff = 0.\nfor r in np.linspace(0.1,0.8,8):\n diff = abs(exact(r, d) - approx(r,d))\n if r < 0.5:\n maxdiff = max(np.max(diff), maxdiff)\n ax.plot(d, diff, label=str(r))\n\nax.set_ylim([0, maxdiff])\nax.legend()\nax.set_title(\"Error with respect to distance for different radius\")\n\n```\n\n\n Canvas(toolbar=Toolbar(toolitems=[('Home', 'Reset original view', 'home', 'home'), ('Back', 'Back to previous \u2026\n\n\n\n\n\n Text(0.5, 1.0, 'Error with respect to distance for different radius')\n\n\n", "meta": {"hexsha": "0f74ebd3a021c22e79298ffc9e2376ba09ccad98", "size": 5806, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Notes_Upgrades/SolidAngle/Solid_Angle_Sphere_Approx.ipynb", "max_stars_repo_name": "WinstonLHS/tofu", "max_stars_repo_head_hexsha": "c95b2eb6aedcf4bac5676752b9635b78f31af6ca", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 56, "max_stars_repo_stars_event_min_datetime": "2017-07-09T10:29:45.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T02:44:50.000Z", "max_issues_repo_path": "Notes_Upgrades/SolidAngle/Solid_Angle_Sphere_Approx.ipynb", "max_issues_repo_name": "WinstonLHS/tofu", "max_issues_repo_head_hexsha": "c95b2eb6aedcf4bac5676752b9635b78f31af6ca", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 522, "max_issues_repo_issues_event_min_datetime": "2017-07-02T21:06:07.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-02T08:07:57.000Z", "max_forks_repo_path": "Notes_Upgrades/SolidAngle/Solid_Angle_Sphere_Approx.ipynb", "max_forks_repo_name": "Didou09/tofu", "max_forks_repo_head_hexsha": "4a4e1f058bab8e7556ed9d518f90807cec605476", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 9, "max_forks_repo_forks_event_min_datetime": "2017-07-02T20:38:53.000Z", 
"max_forks_repo_forks_event_max_datetime": "2021-12-04T00:12:30.000Z", "avg_line_length": 29.7743589744, "max_line_length": 408, "alphanum_fraction": 0.5168790906, "converted": true, "num_tokens": 1089, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9425067163548471, "lm_q2_score": 0.8705972600147106, "lm_q1q2_score": 0.8205437648039919}} {"text": "## \u0417\u0430\u0434\u0430\u043d\u0438\u0435 1\n\n\u0414\u0430\u043d\u044b \u0434\u0432\u0430 \u0432\u0435\u043a\u0442\u043e\u0440\u0430 \u0432 \u0442\u0440\u0435\u0445\u043c\u0435\u0440\u043d\u043e\u043c \u043f\u0440\u043e\u0441\u0442\u0440\u0430\u043d\u0441\u0442\u0432\u0435: (10,10,10) \u0438 (0,0,-10)\n- \u041d\u0430\u0439\u0434\u0438\u0442\u0435 \u0438\u0445 \u0441\u0443\u043c\u043c\u0443. (\u043d\u0430 \u043b\u0438\u0441\u0442\u043e\u0447\u043a\u0435)\n- \u041d\u0430\u043f\u0438\u0448\u0438\u0442\u0435 \u043a\u043e\u0434 \u043d\u0430 Python, \u0440\u0435\u0430\u043b\u0438\u0437\u0443\u044e\u0449\u0438\u0439 \u0440\u0430\u0441\u0447\u0435\u0442 \u0434\u043b\u0438\u043d\u044b \u0432\u0435\u043a\u0442\u043e\u0440\u0430, \u0437\u0430\u0434\u0430\u043d\u043d\u043e\u0433\u043e \u0435\u0433\u043e \u043a\u043e\u043e\u0440\u0434\u0438\u043d\u0430\u0442\u0430\u043c\u0438. (\u0432 \u043f\u0440\u043e\u0433\u0440\u0430\u043c\u043c\u0435)\n\nlen = x1(10,10,10) + x2(0,0,-10) = (10,10,0)\n\n\n```python\ndef lvec(x: list):\n t = 0\n for i in x:\n t += i**2\n return t**0.5\n```\n\n\n```python\nlvec([3, 4])\n```\n\n\n\n\n 5.0\n\n\n\n## \u0417\u0430\u0434\u0430\u043d\u0438\u0435 2\n\n\u041f\u043e\u0447\u0435\u043c\u0443 \u043f\u0440\u044f\u043c\u044b\u0435 \u043d\u0435 \u043a\u0430\u0436\u0443\u0442\u0441\u044f \u043f\u0435\u0440\u043f\u0435\u043d\u0434\u0438\u043a\u0443\u043b\u044f\u0440\u043d\u044b\u043c\u0438?\n\n\n```python\nimport numpy as np\nfrom matplotlib import pyplot as plt\n```\n\n\n```python\nfrom pylab import rcParams\n```\n\n\n```python\nx = np.linspace(-5, 5, 21)\ny1 = 3*x+1\ny2 = (1/3)*x+1\nplt.plot(x, y1)\nplt.plot(x, y2)\nplt.xlabel('x')\nplt.ylabel('y')\nplt.show()\n```\n\n\u041e\u0442\u0432\u0435\u0442: \u0433\u0438\u043f\u043e\u0442\u0435\u0442\u0438\u0447\u0435\u0441\u043a\u0438, \u0438\u0437 \u0437\u0430 \u0440\u0430\u0437\u043d\u044b\u0445 \u043c\u0430\u0441\u0448\u0442\u0430\u0431\u043e\u0432 \u043e\u0441\u0435\u0439\n\n\n```python\nx = np.linspace(-5, 5, 21)\nrcParams[\"figure.figsize\"] = (6,6)\ny1 = 3*x + 1\ny2 = (1/3)*x + 1\nplt.axis('equal')\nplt.grid()\nplt.xlim((-5, 5))\nplt.ylim((-5, 5))\nplt.plot(x, y1)\nplt.plot(x, y2)\nplt.xlabel('x')\nplt.ylabel('y')\nplt.show()\n```\n\n\u0441 \u043e\u0441\u044f\u043c\u0438 \u0442\u0435\u043f\u0435\u0440\u044c \u0432\u0441\u0435 \u043e\u043a, \u043d\u043e \u043f\u0440\u044f\u043c\u044b\u0435 \u0432\u0441\u0435 \u0440\u0430\u0432\u043d\u043e \u043d\u0435 \u043f\u0435\u0440\u043f\u0435\u043d\u0434\u0438\u043a\u0443\u043b\u044f\u0440\u043d\u044b \u00af\\\\_(\u30c4)_/\u00af\n\n## \u0417\u0430\u0434\u0430\u043d\u0438\u0435 3\n\n\u041d\u0430\u043f\u0438\u0448\u0438\u0442\u0435 \u043a\u043e\u0434 \u043d\u0430 Python, \u0440\u0435\u0430\u043b\u0438\u0437\u0443\u044e\u0449\u0438\u0439 \u043f\u043e\u0441\u0442\u0440\u043e\u0435\u043d\u0438\u0435 \u0433\u0440\u0430\u0444\u0438\u043a\u043e\u0432:\n- \u043e\u043a\u0440\u0443\u0436\u043d\u043e\u0441\u0442\u0438,\n- \u044d\u043b\u043b\u0438\u043f\u0441\u0430,\n- \u0433\u0438\u043f\u0435\u0440\u0431\u043e\u043b\u044b.\n\n\n```python\nt = np.arange(0, 2*np.pi, 0.01)\nr = 4\nplt.grid()\nplt.plot(r*np.sin(t), r*np.cos(t), 
lw=3)\nplt.axis('equal')\nplt.show()\n```\n\n\n```python\nt = np.arange(0, 2*np.pi, 0.01)\nr = 4\nplt.grid()\nplt.plot(r*np.sin(t), 0.5*r*np.cos(t), lw=3)\nplt.axis('equal')\nplt.show()\n```\n\n\n```python\ndef hypY(x: int, a: int, b: int):\n return np.sqrt(np.abs(b**2*(x**2/a**2 - 1)))\n```\n\n\n```python\nx1 = np.linspace(-10, -1, 100)\nx2 = np.linspace(1, 10, 100)\nplt.grid()\nplt.plot(x1, hypY(x1, -1, 1))\nplt.plot(x1, -hypY(x1, -1, 1))\nplt.plot(x2, hypY(x2, -1, 1))\nplt.plot(x2, -hypY(x2, -1, 1))\n```\n\n## \u0417\u0430\u0434\u0430\u043d\u0438\u0435 4\n\n### 1. \u041f\u0443\u0441\u0442\u044c \u0437\u0430\u0434\u0430\u043d\u0430 \u043f\u043b\u043e\u0441\u043a\u043e\u0441\u0442\u044c: Ax + By + Cz + D = 0. \u041d\u0430\u043f\u0438\u0448\u0438\u0442\u0435 \u0443\u0440\u0430\u0432\u043d\u0435\u043d\u0438\u0435 \u043f\u043b\u043e\u0441\u043a\u043e\u0441\u0442\u0438, \u043f\u0430\u0440\u0430\u043b\u043b\u0435\u043b\u044c\u043d\u043e\u0439 \u0434\u0430\u043d\u043d\u043e\u0439 \u0438 \u043f\u0440\u043e\u0445\u043e\u0434\u044f\u0449\u0435\u0439 \u0447\u0435\u0440\u0435\u0437 \u043d\u0430\u0447\u0430\u043b\u043e \u043a\u043e\u043e\u0440\u0434\u0438\u043d\u0430\u0442\n\n\u041f\u043b\u043e\u0441\u043a\u043e\u0441\u0442\u044c: Ax + By + Cz = 0 ( D = 0 )\n\n### 2. \u041f\u0443\u0441\u0442\u044c \u0437\u0430\u0434\u0430\u043d\u0430 \u043f\u043b\u043e\u0441\u043a\u043e\u0441\u0442\u044c:\n\n$A_1x + B_1y + C_1z + D_1 = 0$\n\n\u0438 \u043f\u0440\u044f\u043c\u0430\u044f:\n\n\\begin{equation}\n\\frac{x - x_1}{x_2 - x_1} = \\frac{y-y_1}{y_2 - y_1} = \\frac{z-z_1}{z_2 - z_1}\n\\end{equation}\n\n\u041a\u0430\u043a \u0443\u0437\u043d\u0430\u0442\u044c, \u043f\u0440\u0438\u043d\u0430\u0434\u043b\u0435\u0436\u0438\u0442 \u043f\u0440\u044f\u043c\u0430\u044f \u043f\u043b\u043e\u0441\u043a\u043e\u0441\u0442\u0438 \u0438\u043b\u0438 \u043d\u0435\u0442?\n\n\u041e\u0442\u0432\u0435\u0442: \u043f\u043e\u0434\u0441\u0442\u0430\u0432\u0438\u0442\u044c \u043a\u043e\u043e\u0440\u0434\u0438\u043d\u0430\u0442\u044b \u043f\u0440\u044f\u043c\u043e\u0439 \u0432 \u0443\u0440\u0430\u0432\u043d\u0435\u043d\u0438\u0435 \u043f\u043b\u043e\u0441\u043a\u043e\u0441\u0442\u0438\n\n$A_1x_1 + B_1y_1 + C_1z_1 + D_1 = A_1x_2 + B_1y_2 + C_1z_2 + D_1$\n\n\u0415\u0441\u043b\u0438 \u0440\u0430\u0432\u0435\u043d\u0441\u0442\u0432\u043e \u0432\u044b\u043f\u043e\u043b\u043d\u044f\u0435\u0442\u0441\u044f, \u0437\u043d\u0430\u0447\u0438\u0442 \u043f\u0440\u044f\u043c\u0430\u044f \u043f\u0440\u0438\u043d\u0430\u0434\u043b\u0435\u0436\u0438\u0442 \u043f\u043b\u043e\u0441\u043a\u043e\u0441\u0442\u0438\n\n### \u0417\u0430\u0434\u0430\u043d\u0438\u0435 5\n\n- \u041d\u0430\u0440\u0438\u0441\u0443\u0439\u0442\u0435 \u0442\u0440\u0435\u0445\u043c\u0435\u0440\u043d\u044b\u0439 \u0433\u0440\u0430\u0444\u0438\u043a \u0434\u0432\u0443\u0445 \u043f\u0430\u0440\u0430\u043b\u043b\u0435\u043b\u044c\u043d\u044b\u0445 \u043f\u043b\u043e\u0441\u043a\u043e\u0441\u0442\u0435\u0439.\n- \u041d\u0430\u0440\u0438\u0441\u0443\u0439\u0442\u0435 \u0442\u0440\u0435\u0445\u043c\u0435\u0440\u043d\u044b\u0439 \u0433\u0440\u0430\u0444\u0438\u043a \u0434\u0432\u0443\u0445 \u043b\u044e\u0431\u044b\u0445 \u043f\u043e\u0432\u0435\u0440\u0445\u043d\u043e\u0441\u0442\u0435\u0439 \u0432\u0442\u043e\u0440\u043e\u0433\u043e \u043f\u043e\u0440\u044f\u0434\u043a\u0430.\n\n\n```python\nfrom pylab import *\nfrom mpl_toolkits.mplot3d import Axes3D\n```\n\n\n```python\nfig = figure()\nax = Axes3D(fig)\nX = np.arange(-5, 5, 0.5)\nY = np.arange(-5, 5, 0.5)\nX, Y = np.meshgrid(X, Y)\nZ1 = 2*X + 
5*Y\nZ2 = 2*X + 5*Y + 10\nax.plot_wireframe(X, Y, Z1)\nax.plot_wireframe(X, Y, Z2)\nshow()\n```\n\n\n```python\nfrom matplotlib import cm\n```\n\n\n```python\nfig = figure()\nax = Axes3D(fig)\nX = np.arange(-5, 5, 0.2)\nY = np.arange(-5, 5, 0.2)\nX, Y = np.meshgrid(X, Y)\nZ = 0.5*sin(0.1*X**2) + 0.09*sin(0.15*Y**2)\nax.plot_surface(X, Y, Z, cmap=cm.terrain)\nshow()\n```\n\n## \u0417\u0430\u0434\u0430\u043d\u0438\u0435 6\n\n\u041d\u0430\u0440\u0438\u0441\u0443\u0439\u0442\u0435 \u0433\u0440\u0430\u0444\u0438\u043a \u0444\u0443\u043d\u043a\u0446\u0438\u0438:\ny(x) = k\u2219cos(x \u2013 a) + b\n\u0434\u043b\u044f \u043d\u0435\u043a\u043e\u0442\u043e\u0440\u044b\u0445 (2-3 \u0440\u0430\u0437\u043b\u0438\u0447\u043d\u044b\u0445) \u0437\u043d\u0430\u0447\u0435\u043d\u0438\u0439 \u043f\u0430\u0440\u0430\u043c\u0435\u0442\u0440\u043e\u0432 k, a, b\n\n\n\n```python\nx = np.linspace(-5, 5, 100)\nplt.grid()\nplt.plot(x, np.cos(x - 1) + 1)\nplt.plot(x, 2*np.cos(x - 1) + 1)\nplt.plot(x, np.cos(x + 1) -1)\nplt.show()\n```\n\n## \u0417\u0430\u0434\u0430\u043d\u0438\u0435 7\n\n\u0414\u043e\u043a\u0430\u0436\u0438\u0442\u0435, \u0447\u0442\u043e \u043f\u0440\u0438 \u043e\u0440\u0442\u043e\u0433\u043e\u043d\u0430\u043b\u044c\u043d\u043e\u043c \u043f\u0440\u0435\u043e\u0431\u0440\u0430\u0437\u043e\u0432\u0430\u043d\u0438\u0438 \u0441\u043e\u0445\u0440\u0430\u043d\u044f\u0435\u0442\u0441\u044f \u0440\u0430\u0441\u0441\u0442\u043e\u044f\u043d\u0438\u0435 \u043c\u0435\u0436\u0434\u0443 \u0442\u043e\u0447\u043a\u0430\u043c\u0438\n\n\u00af\\\\_(\u30c4)_/\u00af\n\n## \u0417\u0430\u0434\u0430\u043d\u0438\u0435 8\n\n- \u041d\u0430\u043f\u0438\u0448\u0438\u0442\u0435 \u043a\u043e\u0434, \u043a\u043e\u0442\u043e\u0440\u044b\u0439 \u0431\u0443\u0434\u0435\u0442 \u043f\u0435\u0440\u0435\u0432\u043e\u0434\u0438\u0442\u044c \u043f\u043e\u043b\u044f\u0440\u043d\u044b\u0435 \u043a\u043e\u043e\u0440\u0434\u0438\u043d\u0430\u0442\u044b \u0432 \u0434\u0435\u043a\u0430\u0440\u0442\u043e\u0432\u044b.\n- \u041d\u0430\u043f\u0438\u0448\u0438\u0442\u0435 \u043a\u043e\u0434, \u043a\u043e\u0442\u043e\u0440\u044b\u0439 \u0431\u0443\u0434\u0435\u0442 \u0440\u0438\u0441\u043e\u0432\u0430\u0442\u044c \u0433\u0440\u0430\u0444\u0438\u043a \u043e\u043a\u0440\u0443\u0436\u043d\u043e\u0441\u0442\u0438 \u0432 \u043f\u043e\u043b\u044f\u0440\u043d\u044b\u0445 \u043a\u043e\u043e\u0440\u0434\u0438\u043d\u0430\u0442\u0430\u0445.\n- \u041d\u0430\u043f\u0438\u0448\u0438\u0442\u0435 \u043a\u043e\u0434, \u043a\u043e\u0442\u043e\u0440\u044b\u0439 \u0431\u0443\u0434\u0435\u0442 \u0440\u0438\u0441\u043e\u0432\u0430\u0442\u044c \u0433\u0440\u0430\u0444\u0438\u043a \u043e\u0442\u0440\u0435\u0437\u043a\u0430 \u043f\u0440\u044f\u043c\u043e\u0439 \u043b\u0438\u043d\u0438\u0438 \u0432 \u043f\u043e\u043b\u044f\u0440\u043d\u044b\u0445 \u043a\u043e\u043e\u0440\u0434\u0438\u043d\u0430\u0442\u0430\u0445.\n\n\n```python\ndef convPolar(r, a):\n return r*np.cos(a), r*np.sin(a)\n```\n\n\n```python\nconvPolar(1, np.pi/2)\n```\n\n\n\n\n (6.123233995736766e-17, 1.0)\n\n\n\n\n```python\nx = np.linspace(0, 2*np.pi, 100)\ny = np.empty(100)\ny.fill(0.5)\nplt.polar(x, y)\nr = np.array([0, 0.2])\na = np.array([1, 1])\nb = np.array([1+np.pi, 1+np.pi])\nplt.polar(a, r)\nplt.polar(b, r)\nplt.show()\n```\n\n## \u0417\u0430\u0434\u0430\u043d\u0438\u0435 9\n\n### 1. 
\u0420\u0435\u0448\u0438\u0442\u0435 \u0441\u0438\u0441\u0442\u0435\u043c\u0443 \u0443\u0440\u0430\u0432\u043d\u0435\u043d\u0438\u0439\n\n\\begin{equation}\n\\left\\{ \\begin{array}{ll}\ny = x^2 - 1\\\\\ne^x + x (1-y) = 1\n\\end{array} \\right.\n\\end{equation}\n\n\\begin{equation}\n\\left\\{ \\begin{array}{ll}\ny = x^2 - 1\\\\\ny = 1 - \\frac{1 - e^x}{x}\n\\end{array} \\right.\n\\end{equation}\n\n\n```python\ndef fy1(x):\n return x**2 - 1\n\ndef fy2(x):\n return 1 - (1 - np.exp(x))/x\n```\n\n\n```python\nxx1 = np.linspace(-5, -0.01, 100)\nxx2 = np.linspace(0.01, 5, 100)\nplt.plot(xx1, fy1(xx1))\nplt.plot(xx1, fy2(xx1))\nplt.plot(xx2, fy1(xx2))\nplt.plot(xx2, fy2(xx2))\nplt.grid()\nplt.show()\n```\n\n\n```python\nfrom scipy.optimize import fsolve\n```\n\n\n```python\ndef equations(p):\n x, y = p\n return (y - x**2 + 1, y-1+(1-np.exp(x))/x)\n```\n\n\n```python\nx1, y1 = fsolve(equations, (-4, 5))\nx2, y2 = fsolve(equations, (3, 3))\nx3, y3 = fsolve(equations, (5, 5))\nprint(x1, y1)\nprint(x2, y2)\nprint(x3, y3)\n```\n\n -1.5818353528959466 1.50220308367128\n 2.618145573085482 5.854686241867206\n 4.200105841166561 16.640889076993545\n\n\n\n```python\nxx1 = np.linspace(-5, -0.01, 100)\nxx2 = np.linspace(0.01, 5, 100)\nplt.plot(xx1, fy1(xx1))\nplt.plot(xx1, fy2(xx1))\nplt.plot(xx2, fy1(xx2))\nplt.plot(xx2, fy2(xx2))\nplt.scatter([x1, x2, x3],[y1, y2, y3])\nplt.grid()\nplt.show()\n```\n\n### 1. \u0420\u0435\u0448\u0438\u0442\u0435 \u0441\u0438\u0441\u0442\u0435\u043c\u0443 \u0443\u0440\u0430\u0432\u043d\u0435\u043d\u0438\u0439 \u0438 \u043d\u0435\u0440\u0430\u0432\u0435\u043d\u0441\u0442\u0432\n\n\\begin{equation}\n\\left\\{ \\begin{array}{ll}\ny = x^2 - 1\\\\\ne^x + x (1-y) > 1\n\\end{array} \\right.\n\\end{equation}\n\n\\begin{equation}\n\\left\\{ \\begin{array}{ll}\ny = x^2 - 1\\\\\ny > 1 - \\frac{1 - e^x}{x}\n\\end{array} \\right.\n\\end{equation}\n\n\u00af\\\\_(\u30c4)_/\u00af \u00af\\\\_(\u30c4)_/\u00af \u00af\\\\_(\u30c4)_/\u00af\n", "meta": {"hexsha": "f2fe22841b31c35e613043a675bf30ffd6852ff8", "size": 572132, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "homework03.ipynb", "max_stars_repo_name": "Selen34/vvm", "max_stars_repo_head_hexsha": "23fd25d759013d6a7a75c4de01072206c2c02218", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "homework03.ipynb", "max_issues_repo_name": "Selen34/vvm", "max_issues_repo_head_hexsha": "23fd25d759013d6a7a75c4de01072206c2c02218", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "homework03.ipynb", "max_forks_repo_name": "Selen34/vvm", "max_forks_repo_head_hexsha": "23fd25d759013d6a7a75c4de01072206c2c02218", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 700.2839657283, "max_line_length": 185312, "alphanum_fraction": 0.9533219607, "converted": true, "num_tokens": 2574, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.912436167620237, "lm_q2_score": 0.8991213772699435, "lm_q1q2_score": 0.8203908637016165}} {"text": "

    \nA normal distribution, sometimes called the bell curve, is a distribution that occurs naturally in many situations. For example, the bell curve is seen in tests like the SAT and GRE. The bulk of students will score the average (C), while smaller numbers of students will score a B or D. An even smaller percentage of students score an F or an A. This creates a distribution that resembles a bell (hence the nickname). The bell curve is symmetrical. Half of the data will fall to the left of the mean; half will fall to the right.\nMany groups follow this type of pattern. That\u2019s why it\u2019s widely used in business, statistics and in government bodies like the FDA:\n

    \n
      \n
• Heights of people.
• Measurement errors.
• Blood pressure.
• Points on a test.
• IQ scores.
• Salaries.
    \n\nThe standard deviation controls the spread of the distribution. \nA smaller standard deviation indicates that the data is tightly clustered around the mean; the normal distribution will be taller. A larger standard deviation indicates that the data is spread out around the mean; the normal distribution will be flatter and wider.\n\nProperties of a normal distribution\n
      \n
1. The mean, mode and median are all equal.
2. The curve is symmetric at the center (i.e. around the mean, μ).
3. Exactly half of the values are to the left of center and exactly half the values are to the right.
4. The total area under the curve is 1.
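These properties can be verified numerically; below is a minimal sketch (the choice of the standard normal, mean 0 and sigma 1, is an assumption made purely for illustration) using `scipy.stats.norm`:

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

mu, sigma = 0, 1   # standard normal, chosen only for illustration

# Property 4: the total area under the PDF is 1
area, _ = quad(lambda x: norm.pdf(x, mu, sigma), -np.inf, np.inf)

# Properties 2 and 3: symmetry about the mean gives equal tail mass on both sides
a = 1.5
left_tail = norm.cdf(mu - a, mu, sigma)
right_tail = 1 - norm.cdf(mu + a, mu, sigma)

print(round(area, 6))                             # 1.0
print(round(left_tail, 6), round(right_tail, 6))  # equal by symmetry
```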
    \n\nThe Gaussian distribution is a continuous family of curves, all shaped like a bell. In other words, there are endless possibilities for the number of possible distributions, given the limitless possibilities for standard deviation measurements (which could be from 0 to infinity). The standard Gaussian distribution has a mean of 0 and a standard deviation of 1. The larger the standard deviation, the flatter the curve. The smaller the standard deviation, the higher the peak of the curve.\n\nA Gaussian distribution function can be used to describe physical events if the number of events is very large (Central Limit Theorem(CLT)). In simple terms, the CLT says that while you may not be able to predict what one item will do, if you have a whole ton of items, you can predict what they will do as a whole. For example, if you have a jar of gas at a constant temperature, the Gaussian distribution will enable you to figure out the probability that one particle will move at a certain velocity.\n\nThe Probability Distribution Function (PDF) of Normal or Gaussian or Bell Shaped Distribution is given by:\n\n\\begin{equation}\nP(X=k) = \\frac{1}{\\sqrt{2\\pi}\\sigma} \\mathrm{e}^{\\frac{-(k-\\mu)^2}{2\\sigma^2}}\\\n\\end{equation}\n\nAlso called Normal PDF\n\n\n```python\nimport scipy.stats as S\nimport numpy as np\nimport seaborn as sns\n```\n\nWe are going to withdraw some numbers from Normal Distribution having mean = 65 and standard deviation, sigma = 10\n\nThe concept of probability being limiting case of relative frequency can be demonstrated here also. Let's first calculate the true probability (theoretical value) of probability of Random Variable, X taking value k=45 calculated from the Normal PDF given mean = 60 and sigma = 10. So, the theoretical value of probability of Random Variable, X taking value k=45 is given by:\n\n\\begin{equation}\nP(X=45) = \\frac{1}{\\sqrt{2\\pi} 10} \\mathrm{e}^{\\frac{-(45-60)^2}{2*10^2}}\\ = 0.0129\n\\end{equation}\n\nWhich means that the frequency of value 45 is 1.29 % of the size of population. \n\nLet's calculate the value of probability of value 60. \n\n\\begin{equation}\nP(X=60) = \\frac{1}{\\sqrt{2\\pi} 10} = 0.0399\n\\end{equation}\n\nWhich means that the frequency of value 60 is 3.99 % of the size of population. \n\nLet's generate a sample size of 100 observations. \n\n\n```python\nsigma = 10\nmean = 60\nsample = np.random.normal(loc=mean,scale=sigma,size=100)\n```\n\n\n```python\nsample.shape\n```\n\n\n\n\n (100,)\n\n\n\n\n```python\nsns.distplot(sample,bins='rice',kde=True)\n```\n\nLet's generate a sample size of 1000 observations. \n\n\n```python\nsample = np.random.normal(loc=mean,scale=sigma,size=1000)\n```\n\n\n```python\nsns.distplot(sample,bins='rice',kde=True)\n```\n\nNow, let's generate a sample size of 10000 observations. \n\n\n```python\nsample = np.random.normal(loc=mean,scale=sigma,size=10000)\n```\n\n\n```python\nsns.distplot(sample,bins='rice',kde=True)\n```\n\nNow, for one more last time, let's generate a sample size of 100000 observations.\n\n\n```python\nsample = np.random.normal(loc=mean,scale=sigma,size=100000)\n```\n\n\n```python\nsns.distplot(sample,bins='rice',kde=True)\n```\n\nAll of the plots which we can see above are frequency distributions. In the first plot of frequency distribution, the bulge is coming is coming nearby 60, approximately 55. \n\nIn the second frequency distribution where sample size is increased with respect to the previous case i.e. 
1000, the distribution is approaching towards the Normal Distribution with the mean of 60. Although, the bulge is still at the value of 55, approximately but this time it's more refined in shape. \n\nIn the third frequency distribution where the sample size is further increased i.e. 10000, the distribution is approaching more closely towards actual normal probability distribution with the mean=60 and sigma=10, getting finer. \n\nFinally, if we look at the last plotted frequency distribution with the sample size of 100000, we see that it's not that much different from the previous plotted frequency distribution and so, we can say that the frequency of each of the observation is stablizing and stopped changing much and hence approaching towards probability but how ????\n\nWe know that the relative frequency of Random Variable value, X=60 is:\n\nAccording to First Frequency Distribution, it's approximately 13/100 = 0.13. \n\nAccording to Second Frequency Distribution, It's approximately 260/1000 = 0.26. \n\nAccording to Third Frequency Distribution, it's approximately 3250/10000 = 0.3250. \n\nAccording to Fourth Frequency Distribution, it's approximately 35000/100000 = 0.35.\n", "meta": {"hexsha": "86ccf7f010fa41e5ab9c5db75eee6b69e1d48eb1", "size": 74027, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "4. Normal or Gaussian Distribution.ipynb", "max_stars_repo_name": "rarpit1994/Probability-and-Statistics", "max_stars_repo_head_hexsha": "1ae8b7428ec9a3072fce3089d7951b9f7bd06c26", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-03-24T10:29:56.000Z", "max_stars_repo_stars_event_max_datetime": "2019-03-24T10:29:56.000Z", "max_issues_repo_path": "4. Normal or Gaussian Distribution.ipynb", "max_issues_repo_name": "rarpit1994/Probability-and-Statistics", "max_issues_repo_head_hexsha": "1ae8b7428ec9a3072fce3089d7951b9f7bd06c26", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "4. Normal or Gaussian Distribution.ipynb", "max_forks_repo_name": "rarpit1994/Probability-and-Statistics", "max_forks_repo_head_hexsha": "1ae8b7428ec9a3072fce3089d7951b9f7bd06c26", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 182.3325123153, "max_line_length": 16404, "alphanum_fraction": 0.9080875897, "converted": true, "num_tokens": 1475, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9124361700013356, "lm_q2_score": 0.8991213691605412, "lm_q1q2_score": 0.8203908584432011}} {"text": "# The Problem of Overfitting\n\nConsider the problem of predicting $y$ from $x \\in R$. The leftmost figure below shows the result of fitting a $y = \\theta_0+\\theta_1x$ to a dataset. We see that the data doesn\u2019t really lie on straight line, and so the fit is not very good.\n\n\n\nInstead, if we had added an extra feature $x^2$ , and fit $y=\\theta_0+\\theta_1x+\\theta_2x^2$ , then we obtain a slightly better fit to the data (See middle figure). Naively, it might seem that the more features we add, the better. \n\nHowever, there is also a **danger** in adding too many features: The rightmost figure is the result of fitting a 5th order polynomial $y = \\sum_{j=0} ^5 \\theta_j x^j$. 
We see that even though the fitted curve passes through the data *perfectly*, we would not expect this to be a very good predictor of, say, housing prices (y) for different living areas (x). \n\nWithout formally defining what these terms mean, we\u2019ll say the figure on the left shows an instance of **underfitting**\u2014in which the data clearly shows structure not captured by the model\u2014and the figure on the right is an example of **overfitting**.\n\n**Underfitting**, or high bias, is when the form of our hypothesis function h maps poorly to the trend of the data. It is usually caused by a function that is too simple or uses too few features. \n\nAt the other extreme, **overfitting**, or high variance, is caused by a hypothesis function that fits the available data but does not generalize well to predict new data. It is usually caused by a complicated function that creates a lot of unnecessary curves and angles unrelated to the data.\n\nThis terminology is applied to both linear and logistic regression. There are two main options to address the issue of overfitting:\n\n### \u0e27\u0e34\u0e18\u0e35\u0e01\u0e32\u0e23\u0e41\u0e01\u0e49\u0e1b\u0e31\u0e0d\u0e2b\u0e32 Overfitting \u0e17\u0e33\u0e44\u0e14\u0e49\u0e42\u0e14\u0e22\n\n1) Reduce the number of features:\n- Manually select which features to keep.\n- Use a model selection algorithm (studied later in the course).\n\n2) Regularization\n- Keep all the features, but reduce the magnitude of parameters $\\theta_j$.\n- Regularization works well when we have a lot of slightly useful features.\n\n\n\n\n\n# Cost Function\n\nIf we have overfitting from our hypothesis function, we can reduce the weight that some of the terms in our function carry by increasing their cost.\n\nSay we wanted to make the following function more quadratic:\n\n$\\theta_0 + \\theta_1x + \\theta_2x^2 + \\theta_3x^3 + \\theta_4x^4$\n\nWe'll want to eliminate the influence of $\\theta_3x^3$ and $\\theta_4x^4$ . Without actually getting rid of these features or changing the form of our hypothesis, we can instead modify our **cost function**:\n\n$$min_\\theta\\ \\dfrac{1}{2m}\\sum_{i=1}^m (h_\\theta(x^{(i)}) - y^{(i)})^2 + 1000\\cdot\\theta_3^2 + 1000\\cdot\\theta_4^2$$\n\nWe've added two extra terms at the end to inflate the cost of $\\theta_3$ and $\\theta_4$. Now, in order for the cost function to get close to zero, we will have to reduce the values of $\\theta_3$ and $\\theta_4$ to near zero. This will in turn greatly reduce the values of $\\theta_3x^3$ and $\\theta_4x^4$ in our hypothesis function. As a result, we see that the new hypothesis (depicted by the pink curve) looks like a quadratic function but fits the data better due to the extra small terms $\\theta_3x^3$ and $\\theta_4x^4$.\n\n\n\nWe could also regularize all of our theta parameters in a single summation as:\n\n$$min_\\theta\\ \\dfrac{1}{2m}\\ \\sum_{i=1}^m (h_\\theta(x^{(i)}) - y^{(i)})^2 + \\lambda\\ \\sum_{j=1}^n \\theta_j^2$$\n\nThe $\\lambda$, or lambda, is the **regularization parameter**. It determines how much the costs of our theta parameters are inflated.\n\nUsing the above cost function with the extra summation, we can smooth the output of our hypothesis function to reduce overfitting. **If lambda is chosen to be too large, it may smooth out the function too much and cause underfitting.** Hence, what would happen if \u03bb=0 or is too small ? 
--> Overfitting\n\n> \u0e21\u0e31\u0e19\u0e40\u0e1b\u0e47\u0e19\u0e01\u0e32\u0e23 trade-off \u0e23\u0e30\u0e2b\u0e27\u0e48\u0e32\u0e07\u0e08\u0e30 underfitting \u0e2b\u0e23\u0e37\u0e2d overfitting \u0e16\u0e49\u0e32 $\\lambda$ \u0e43\u0e2b\u0e0d\u0e48\u0e08\u0e30 under \u0e15\u0e23\u0e07\u0e01\u0e31\u0e19\u0e02\u0e49\u0e32\u0e21\u0e16\u0e49\u0e32 $\\lambda$ \u0e40\u0e25\u0e47\u0e01\u0e01\u0e47\u0e08\u0e30 over\n\n# Regularized Linear Regression\nWe can apply regularization to both linear regression and logistic regression. We will approach linear regression first.\n\n### Gradient Descent\nWe will modify our gradient descent function to **separate out $\\theta_0$ from the rest of the parameters** because we do not want to penalize $\\theta_0$.\n\n$\\begin{align*} & \\text{Repeat}\\ \\lbrace \\newline & \\ \\ \\ \\ \\theta_0 := \\theta_0 - \\alpha\\ \\frac{1}{m}\\ \\sum_{i=1}^m (h_\\theta(x^{(i)}) - y^{(i)})x_0^{(i)} \\newline & \\ \\ \\ \\ \\theta_j := \\theta_j - \\alpha\\ \\left[ \\left( \\frac{1}{m}\\ \\sum_{i=1}^m (h_\\theta(x^{(i)}) - y^{(i)})x_j^{(i)} \\right) + \\frac{\\lambda}{m}\\theta_j \\right] &\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ j \\in \\lbrace 1,2...n\\rbrace\\newline & \\rbrace \\end{align*}$\n\n(\u0e25\u0e2d\u0e07 Prove \u0e14\u0e49\u0e27\u0e22 Calculus \u0e14\u0e39)\n\nThe term $\\frac{\\lambda}{m}\\theta_j$ performs our regularization. With some manipulation our update rule can also be represented as:\n\n$\\theta_j := \\theta_j(1 - \\alpha\\frac{\\lambda}{m}) - \\alpha\\frac{1}{m}\\sum_{i=1}^m(h_\\theta(x^{(i)}) - y^{(i)})x_j^{(i)}$\n\nThe first term in the above equation,** $1 - \\alpha\\frac{\\lambda}{m}$ will always be less than 1**. Intuitively you can see it as reducing the value of $\\theta_j$ by some amount on every update. Notice that the second term is now exactly the same as it was before.\n\n### Normal Equation\n\nNow let's approach regularization using the alternate method of the non-iterative normal equation.\n\nTo add in regularization, the equation is the same as our original, except that we add another term inside the parentheses:\n\n$\\begin{align*}& \\theta = \\left( X^TX + \\lambda \\cdot L \\right)^{-1} X^Ty \\newline& \\text{where}\\ \\ L = \\begin{bmatrix} 0 & & & & \\newline & 1 & & & \\newline & & 1 & & \\newline & & & \\ddots & \\newline & & & & 1 \\newline\\end{bmatrix} \\in \\mathbb{R}^{(n+1)x(n+1)}\\end{align*}$\n\nL is a matrix with 0 at the top left and 1's down the diagonal, with 0's everywhere else. It should have dimension (n+1)\u00d7(n+1). Intuitively, this is the identity matrix (though we are not including x0), multiplied with a single real number $\u03bb$.\n\nRecall that if m < n, then $X^TX$ is non-invertible. However, when we add the term $\u03bb\u22c5L$, then $X^TX + \u03bb\u22c5L$ becomes invertible.\n\n# Regularized Logistic Regression\nWe can regularize logistic regression in a similar way that we regularize linear regression. As a result, we can avoid overfitting. 
The following image shows how the regularized function, displayed by the pink line, is less likely to overfit than the non-regularized function represented by the blue line:\n\n\n\n### Cost Function\nRecall that our cost function for logistic regression was:\n\n$J(\\theta) = - \\frac{1}{m} \\sum_{i=1}^m \\large[ y^{(i)}\\ \\log (h_\\theta (x^{(i)})) + (1 - y^{(i)})\\ \\log (1 - h_\\theta(x^{(i)})) \\large]$\n\nWe can regularize this equation by adding a term to the end:\n\n$J(\\theta) = - \\frac{1}{m} \\sum_{i=1}^m \\large[ y^{(i)}\\ \\log (h_\\theta (x^{(i)})) + (1 - y^{(i)})\\ \\log (1 - h_\\theta(x^{(i)}))\\large] + \\frac{\\lambda}{2m}\\sum_{j=1}^n \\theta_j^2$\n\n*Note : Prove \u0e2b\u0e19\u0e48\u0e2d\u0e22*\n\nThe second sum, $\\sum_{j=1}^n \\theta_j^2$ **means to explicitly** exclude the bias term, $\\theta_0$. I.e. the $\\theta$ vector is indexed from 0 to n (holding n+1 values, $\\theta_0$ through $\\theta_n$), and this sum explicitly skips $\\theta_0$, by running from 1 to n, skipping 0. Thus, when computing the equation, we should continuously update the two following equations:\n\n\n\n### Prove + Code \u0e2a\u0e48\u0e27\u0e19\u0e19\u0e35\u0e49\u0e14\u0e49\u0e27\u0e22\n\n\n\n# ====================== CODE =========================\n\n \u0e02\u0e49\u0e2d\u0e21\u0e39\u0e25\u0e0a\u0e34\u0e1e\u0e17\u0e35\u0e48\u0e1c\u0e48\u0e32\u0e19 \u0e41\u0e25\u0e30\u0e44\u0e21\u0e48\u0e1c\u0e48\u0e32\u0e19 \u0e04\u0e38\u0e13\u0e20\u0e32\u0e1e\u0e01\u0e32\u0e23\u0e1c\u0e25\u0e34\u0e15\n\n\n```python\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport scipy.optimize as opt \nimport numpy as np\n\ndata2 = pd.read_csv('programing/machine-learning-ex2/ex2/ex2data2.txt',names=['Test 1','Test 2','Accepted'])\n\npositive = data2[data2['Accepted'].isin([1])] \nnegative = data2[data2['Accepted'].isin([0])]\n\nfig, ax = plt.subplots(figsize=(8,5)) \nax.scatter(positive['Test 1'], positive['Test 2'], s=50, c='b', marker='o', label='Accepted') \nax.scatter(negative['Test 1'], negative['Test 2'], s=50, c='r', marker='x', label='Rejected') \nax.legend() \nax.set_xlabel('Test 1 Score') \nax.set_ylabel('Test 2 Score') \nplt.show()\n```\n\n\u0e08\u0e32\u0e01\u0e23\u0e39\u0e1b \u0e14\u0e39\u0e17\u0e23\u0e07\u0e41\u0e25\u0e49\u0e27 Decision Boundary \u0e19\u0e48\u0e32\u0e08\u0e30\u0e40\u0e1b\u0e47\u0e19\u0e2a\u0e21\u0e01\u0e32\u0e23\u0e01\u0e33\u0e25\u0e31\u0e07\u0e40\u0e25\u0e02\u0e04\u0e39\u0e48 (2,4,6) \u0e16\u0e49\u0e32\u0e25\u0e2d\u0e07\u0e17\u0e35\u0e48\u0e01\u0e33\u0e25\u0e31\u0e07 6 \u0e08\u0e30\u0e44\u0e14\u0e49\n\n$$\n\\begin{align}\nz = \\theta_0 + \\theta_1x_1 + \\theta_2x_2 + \\theta_3x_1^2 + \\theta_4x_1x_2 + \\theta_5x_2^2 + \\theta_6x_1^3 + \\theta_7x_1^2x_2 + \\theta_8x_1x_2^2 + \\theta_9x_2^3 + \\theta_{10}x_1^4 + \\theta_{11}x_1^3x_2 + \\theta_{12}x_1^2x_2^2 + \\theta_{13}x_1x_2^3 + \\theta_{14}x_2^4 + \\theta_{15}x_1^5 + \\theta_{16}x_1^4x_2^1 + \\theta_{17}x_1^3x_2^2 + \\theta_{18}x_1^2x_2^3 + \\theta_{19}x_1x_2^4 + \\theta_{20}x_2^5 + \\theta_{21}x_1^6 + \\theta_{22}x_1^5x_2^1 + \\theta_{23}x_1^4x_2^2 + \\theta_{24}x_1^3x_2^3 + \\theta_{25}x_1^2x_2^4 + \\theta_{26}x_1x_2^5 + \\theta_{27}x_2^6\n\\end{align}\n$$\n\n\u0e08\u0e30\u0e40\u0e2b\u0e47\u0e19\u0e27\u0e48\u0e32\u0e2a\u0e21\u0e01\u0e32\u0e23\u0e21\u0e31\u0e19\u0e44\u0e21\u0e48 linear \u0e2d\u0e22\u0e39\u0e48 \u0e41\u0e1b\u0e25\u0e07\u0e43\u0e2b\u0e49\u0e40\u0e1b\u0e47\u0e19 linear \u0e08\u0e30\u0e44\u0e14\u0e49\n\n$$\n\\begin{align}\nz = \\theta_0 + \\theta_1x_1 + \\theta_2x_2 + \\theta_3x_3 + \\theta_4x_4 + \\theta_5x_5 + 
\\theta_6x_6 + \\theta_7x_7 + \\theta_8x_8 + \\theta_9x_9 + \\theta_{10}x_{10} + \\theta_{11}x_{11} + \\theta_{12}x_{12} + \\theta_{13}x_{13} + \\theta_{14}x_{14} + \\theta_{15}x_{15} + \\theta_{16}x_{16} + \\theta_{17}x_{17} + \\theta_{18}x_{18} + \\theta_{19}x_{19} + \\theta_{20}x_{20} + \\theta_{21}x_{21} + \\theta_{22}x_{22} + \\theta_{23}x_{23} + \\theta_{24}x_{24} + \\theta_{25}x_{25} + \\theta_{26}x_{26} + \\theta_{27}x_{27}\n\\end{align}\n$$\n\n\u0e14\u0e31\u0e07\u0e19\u0e31\u0e49\u0e19\u0e08\u0e32\u0e01 \u0e04\u0e48\u0e32 $x_1,x_2$ \u0e17\u0e35\u0e48\u0e40\u0e23\u0e32\u0e21\u0e35\u0e2d\u0e22\u0e39\u0e48\u0e41\u0e25\u0e49\u0e27 \u0e40\u0e23\u0e32\u0e15\u0e49\u0e2d\u0e07\u0e2b\u0e32\u0e04\u0e48\u0e32 $x_3 - x_{27}$ \u0e40\u0e1e\u0e34\u0e48\u0e21\u0e14\u0e49\u0e27\u0e22 \n\n\u0e2a\u0e23\u0e49\u0e32\u0e07\u0e1f\u0e31\u0e07\u0e01\u0e4c\u0e0a\u0e31\u0e48\u0e19\u0e2a\u0e33\u0e2b\u0e23\u0e31\u0e1a\u0e41\u0e1b\u0e25\u0e07 $x_1,x_2$ \u0e40\u0e1b\u0e47\u0e19 $x_1 - x_n$ (\u0e08\u0e33\u0e19\u0e27\u0e19 $n$ \u0e02\u0e36\u0e49\u0e19\u0e01\u0e31\u0e1a degree \u0e02\u0e2d\u0e07\u0e2a\u0e21\u0e01\u0e32\u0e23 \u0e40\u0e0a\u0e48\u0e19\u0e17\u0e35\u0e48 degree 6 $n$ \u0e04\u0e37\u0e2d 27)\n\n\n```python\ndef sigmoid(z): \n return 1 / (1 + np.exp(-z))\n```\n\n\n```python\ndef mapFeature(degree,x1,x2):\n # Returns a new feature array with more features, comprising of X1, X2, X1.^2, X2.^2, X1*X2, X1*X2.^2, etc.. \n # Inputs X1, X2 must be the same size\n \n df = pd.DataFrame()\n df['Ones'] = np.ones(len(x1))\n \n for i in range(1, degree+1):\n for j in range(0, i+1):\n df['F' + str(i) + str(j)] = np.power(x1, i-j) * np.power(x2, j)\n return df\n```\n\n\n```python\nx1 = data2['Test 1']\nx2 = data2['Test 2']\nfeatures = mapFeature(6,x1,x2)\nfeatures.head()\n```\n\n\n\n\n
    \n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
    [features.head() output: 5 rows × 28 columns, with columns Ones, F10, F11, F20, F21, F22, F30, F31, F32, F33, ..., F64, F65, F66]
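As a quick sanity check on the size of this expansion (a minimal sketch that assumes `mapFeature`, `x1` and `x2` from the cells above are in scope): a degree-`d` expansion of two features produces `(d+1)(d+2)/2` columns including the intercept, so degree 6 gives the 28 columns shown.

```python
# Count of monomials x1**i * x2**j with i + j <= d, including the constant term
degree = 6
expected_cols = (degree + 1) * (degree + 2) // 2   # = 28
print(expected_cols)

# Should match the frame built above (assumes mapFeature, x1, x2 from earlier cells)
print(mapFeature(degree, x1, x2).shape[1])         # 28
```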

    \n
    \n\n\n\n\u0e40\u0e21\u0e37\u0e48\u0e2d regularize cost function \u0e40\u0e1b\u0e47\u0e19\u0e41\u0e1a\u0e1a\u0e19\u0e35\u0e49\n\n$J(\\theta) = - \\frac{1}{m} \\sum_{i=1}^m \\large[ y^{(i)}\\ \\log (h_\\theta (x^{(i)})) + (1 - y^{(i)})\\ \\log (1 - h_\\theta(x^{(i)}))\\large] + \\frac{\\lambda}{2m}\\sum_{j=1}^n \\theta_j^2$\n\n\u0e08\u0e30\u0e44\u0e14\u0e49\n\n\n```python\ndef costReg(theta, X, y, learningRate): \n theta = np.matrix(theta)\n X = np.matrix(X)\n y = np.matrix(y)\n first = np.multiply(-y, np.log(sigmoid(X * theta.T)))\n second = np.multiply((1 - y), np.log(1 - sigmoid(X * theta.T)))\n reg = (learningRate / 2 * len(X)) * np.sum(np.power(theta[:,1:theta.shape[1]], 2))\n return np.sum(first - second) / (len(X)) + reg\n```\n\n\n```python\ndef gradientReg(theta, X, y, learningRate): \n theta = np.matrix(theta)\n X = np.matrix(X)\n y = np.matrix(y)\n\n parameters = int(theta.ravel().shape[1])\n grad = np.zeros(parameters)\n\n error = sigmoid(X * theta.T) - y\n\n for i in range(parameters):\n term = np.multiply(error, X[:,i])\n\n if (i == 0):\n grad[i] = np.sum(term) / len(X)\n else:\n grad[i] = (np.sum(term) / len(X)) + ((learningRate / len(X)) * theta[:,i])\n return grad\n```\n\n\u0e40\u0e15\u0e23\u0e35\u0e22\u0e21 Data \u0e43\u0e2b\u0e49 format \u0e16\u0e39\u0e01 \u0e25\u0e2d\u0e07\u0e43\u0e0a\u0e49 `costReg`\n\n\n```python\n# set X and y \nX2 = features.iloc[:,:] \ny2 = data2.iloc[:,2:3]\n\n# convert to numpy arrays and initalize the parameter array theta\nX2 = np.array(X2.values) \ny2 = np.array(y2.values) \ntheta2 = np.zeros(len(X2[0]))\n\nlearningRate = 1\n\ncostReg(theta2, X2, y2, learningRate) \n```\n\n\n\n\n 0.6931471805599454\n\n\n\n\u0e2b\u0e32\u0e1e\u0e32\u0e23\u0e32\u0e21\u0e34\u0e40\u0e15\u0e2d\u0e23\u0e4c\u0e02\u0e2d\u0e07 Decision Boundary \u0e08\u0e32\u0e01 `fmin_tnc`\n\n\n```python\nresult2 = opt.fmin_tnc(func=costReg, x0=theta2, fprime=gradientReg, args=(X2, y2, learningRate)) \ntheta_min = result2[0]\n```\n\n\u0e08\u0e32\u0e01\u0e1e\u0e32\u0e23\u0e32\u0e21\u0e34\u0e40\u0e15\u0e2d\u0e23\u0e4c\u0e17\u0e35\u0e48\u0e44\u0e14\u0e49\u0e21\u0e32 \u0e25\u0e2d\u0e07 plot \u0e14\u0e39 decision boundary \u0e14\u0e31\u0e07\u0e19\u0e35\u0e49 (\u0e17\u0e35\u0e48 power = 6)\n\n$$\n\\begin{align}\nz = \\theta_0 + \\theta_1x_1 + \\theta_2x_2 + \\theta_3x_1^2 + \\theta_4x_1x_2 + \\theta_5x_2^2 + \\theta_6x_1^3 + \\theta_7x_1^2x_2 + \\theta_8x_1x_2^2 + \\theta_9x_2^3 + \\theta_{10}x_1^4 + \\theta_{11}x_1^3x_2 + \\theta_{12}x_1^2x_2^2 + \\theta_{13}x_1x_2^3 + \\theta_{14}x_2^4 + \\theta_{15}x_1^5 + \\theta_{16}x_1^4x_2^1 + \\theta_{17}x_1^3x_2^2 + \\theta_{18}x_1^2x_2^3 + \\theta_{19}x_1x_2^4 + \\theta_{20}x_2^5 + \\theta_{21}x_1^6 + \\theta_{22}x_1^5x_2^1 + \\theta_{23}x_1^4x_2^2 + \\theta_{24}x_1^3x_2^3 + \\theta_{25}x_1^2x_2^4 + \\theta_{26}x_1x_2^5 + \\theta_{27}x_2^6\n\\end{align}\n$$\n\n\u0e14\u0e39\u0e08\u0e32\u0e01\u0e17\u0e23\u0e07\u0e21\u0e31\u0e19\u0e22\u0e32\u0e01\u0e17\u0e35\u0e48\u0e40\u0e23\u0e32\u0e08\u0e30\u0e27\u0e32\u0e14\u0e40\u0e2a\u0e49\u0e19 decision boundary \u0e08\u0e32\u0e01 \u0e01\u0e32\u0e23\u0e41\u0e01\u0e49\u0e2a\u0e21\u0e01\u0e32\u0e23\u0e19\u0e35\u0e49 \u0e04\u0e34\u0e14\u0e27\u0e48\u0e32\u0e17\u0e35\u0e48\u0e40\u0e1b\u0e47\u0e19\u0e44\u0e1b\u0e44\u0e14\u0e49\u0e01\u0e47\u0e04\u0e37\u0e2d \u0e41\u0e17\u0e19 x1,x2 \u0e44\u0e1b\u0e40\u0e25\u0e22\u0e43\u0e19\u0e0a\u0e48\u0e27\u0e07\u0e17\u0e31\u0e49\u0e07\u0e2b\u0e21\u0e14 \u0e41\u0e25\u0e49\u0e27\u0e19\u0e48\u0e32\u0e08\u0e30\u0e40\u0e2b\u0e47\u0e19\u0e40\u0e2a\u0e49\u0e19\u0e17\u0e35\u0e48 z = 0 
\u0e40\u0e2a\u0e49\u0e19\u0e19\u0e31\u0e49\u0e19\u0e41\u0e2b\u0e25\u0e30\u0e04\u0e37\u0e2d decision boundary\n\n\n```python\ndef plotDecisionBoundary(theta):\n # Here is the grid range\n test1 = np.arange(-1,1.5,0.1)\n test2 = np.arange(-1,1.5,0.1)\n\n z = np.zeros((len(test1),len(test2)))\n \n # Evaluate z = theta*x over the grid\n for t1 in range(len(test1)):\n for t2 in range(len(test2)):\n z[t1,t2] = mapFeature(6,np.array([test1[t1]]),np.array([test2[t2]]) ).values.dot(theta)[0]\n \n T1, T2 = np.meshgrid(test1, test2)\n fig, ax = plt.subplots(figsize=(8,5)) \n \n # Data Plot \n ax.scatter(positive['Test 1'], positive['Test 2'], s=50, c='b', marker='o', label='Accepted') \n ax.scatter(negative['Test 1'], negative['Test 2'], s=50, c='r', marker='x', label='Rejected') \n # Decision Boundary \n CS = plt.contour(T1, T2, z,0.00000000,colors='y')\n \n ax.legend()\n ax.set_xlabel('Test 1 Score') \n ax.set_ylabel('Test 2 Score') \n plt.show()\n```\n\n\n```python\nplotDecisionBoundary(theta_min)\n```\n\n\u0e2a\u0e23\u0e38\u0e1b\u0e04\u0e37\u0e2d \u0e17\u0e35\u0e48 lambda = 1 decision boundary \u0e08\u0e30\u0e40\u0e1b\u0e47\u0e19\u0e14\u0e31\u0e07\u0e23\u0e39\u0e1b\u0e02\u0e49\u0e32\u0e07\u0e1a\u0e19\n\n## Predict\n\u0e40\u0e21\u0e37\u0e48\u0e2d\u0e44\u0e14\u0e49 parameter \u0e02\u0e2d\u0e07 decision boundary \u0e21\u0e32\u0e41\u0e25\u0e49\u0e27 \u0e19\u0e33\u0e21\u0e32\u0e17\u0e14\u0e25\u0e2d\u0e07\u0e17\u0e33\u0e19\u0e32\u0e22\u0e1c\u0e25\u0e27\u0e48\u0e32\u0e08\u0e30\u0e40\u0e1b\u0e47\u0e19 0 \u0e2b\u0e23\u0e37\u0e2d 1 \u0e14\u0e31\u0e07\u0e19\u0e35\u0e49\n\n$\nh_{\\theta}(x) = g(z) = \\frac{1}{1 + e^{-z}}\n$\n\n\u0e16\u0e49\u0e32 $z>0$ \u0e08\u0e30\u0e44\u0e14\u0e49\u0e27\u0e48\u0e32 $g(z)$ \u0e25\u0e39\u0e48\u0e40\u0e02\u0e49\u0e32 1 \u0e15\u0e23\u0e07\u0e01\u0e31\u0e19\u0e02\u0e49\u0e32\u0e21 \u0e16\u0e49\u0e32 $z<0$ \u0e08\u0e30\u0e44\u0e14\u0e49\u0e27\u0e48\u0e32 $g(z)$ \u0e25\u0e39\u0e48\u0e40\u0e02\u0e49\u0e32 0 \n\n\n```python\ndef predict(theta, X): \n z = X.dot(theta.T)\n predict = (z>=0)\n return predict\n```\n\n\n```python\ntheta_min = np.matrix(result2[0]) \npredictions = predict(theta_min, X2) \ncorrect = (y2 == predictions)\naccuracy = sum(correct)[0,0]%len(correct) \nprint('accuracy = {0}%'.format(accuracy))\n```\n\n accuracy = 95%\n\n\n\u0e08\u0e32\u0e01\u0e42\u0e1b\u0e23\u0e41\u0e01\u0e23\u0e21\u0e02\u0e49\u0e32\u0e07\u0e1a\u0e19\u0e40\u0e23\u0e32\u0e17\u0e14\u0e25\u0e2d\u0e07\u0e17\u0e35\u0e48 lambda = 1 \u0e2d\u0e22\u0e48\u0e32\u0e07\u0e40\u0e14\u0e35\u0e22\u0e27 \u0e2d\u0e22\u0e32\u0e01\u0e23\u0e39\u0e49\u0e27\u0e48\u0e32 decision boundary \u0e08\u0e30\u0e40\u0e1b\u0e47\u0e19\u0e2d\u0e22\u0e48\u0e32\u0e07\u0e44\u0e23\u0e40\u0e21\u0e37\u0e48\u0e2d lambda \u0e40\u0e1b\u0e47\u0e19\u0e04\u0e48\u0e32\u0e2d\u0e37\u0e48\u0e19\u0e46\n\n\u0e01\u0e48\u0e2d\u0e19\u0e2d\u0e37\u0e48\u0e19\u0e40\u0e23\u0e32\u0e1f\u0e31\u0e07\u0e0a\u0e31\u0e48\u0e19\u0e40\u0e01\u0e48\u0e32\u0e21\u0e32\u0e42\u0e21\u0e01\u0e48\u0e2d\u0e19 \n\n\n```python\ndef plotDecisionBoundaryVaryLambda(X,y,lamb):\n theta2 = np.zeros(len(X[0]))\n \n result2 = opt.fmin_tnc(func=costReg, x0=theta2, fprime=gradientReg, args=(X, y, lamb)) \n theta_min = result2[0]\n \n # Here is the grid range\n test1 = np.arange(-1,1.5,0.1)\n test2 = np.arange(-1,1.5,0.1)\n\n z = np.zeros((len(test1),len(test2)))\n \n # Evaluate z = theta*x over the grid\n for t1 in range(len(test1)):\n for t2 in range(len(test2)):\n z[t1,t2] = mapFeature(6,np.array([test1[t1]]),np.array([test2[t2]]) ).values.dot(theta_min)[0]\n \n T1, T2 = 
np.meshgrid(test1, test2)\n fig, ax = plt.subplots(figsize=(8,5)) \n \n # Data Plot \n ax.scatter(positive['Test 1'], positive['Test 2'], s=50, c='b', marker='o', label='Accepted') \n ax.scatter(negative['Test 1'], negative['Test 2'], s=50, c='r', marker='x', label='Rejected') \n # Decision Boundary \n CS = plt.contour(T1, T2, z,0.00000000,colors='y')\n \n ax.legend()\n ax.set_xlabel('Test 1 Score') \n ax.set_ylabel('Test 2 Score') \n plt.show()\n```\n\n\u0e17\u0e35\u0e48 lambda = 0\n\n\n```python\nplotDecisionBoundaryVaryLambda(X2,y2,0)\n```\n\n\u0e08\u0e30\u0e40\u0e2b\u0e47\u0e19\u0e27\u0e48\u0e32 Overfitting \u0e2b\u0e19\u0e48\u0e2d\u0e22\u0e46\n\n\u0e17\u0e35\u0e48 lambda = 100\n\n\n```python\nplotDecisionBoundaryVaryLambda(X2,y2,100)\n```\n\n\u0e08\u0e30\u0e40\u0e2b\u0e47\u0e19\u0e27\u0e48\u0e32 Underfitting \u0e21\u0e32\u0e01\u0e46 \n", "meta": {"hexsha": "9ce9811dca6dcb486ad5cc1372b16935c99edca4", "size": 136865, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": ".ipynb_checkpoints/6 Regularization-checkpoint.ipynb", "max_stars_repo_name": "Kanbc/machine-learning", "max_stars_repo_head_hexsha": "3b2789b47d931f90c66724f4999f66e856e30bfd", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": ".ipynb_checkpoints/6 Regularization-checkpoint.ipynb", "max_issues_repo_name": "Kanbc/machine-learning", "max_issues_repo_head_hexsha": "3b2789b47d931f90c66724f4999f66e856e30bfd", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": ".ipynb_checkpoints/6 Regularization-checkpoint.ipynb", "max_forks_repo_name": "Kanbc/machine-learning", "max_forks_repo_head_hexsha": "3b2789b47d931f90c66724f4999f66e856e30bfd", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 164.6991576414, "max_line_length": 30144, "alphanum_fraction": 0.8497497534, "converted": true, "num_tokens": 7380, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8991213691605412, "lm_q2_score": 0.9124361688107864, "lm_q1q2_score": 0.820390857372753}} {"text": "## Importando uma fun\u00e7\u00e3o\n\n\n```python\nimport numpy as np\n```\n\n## Usando uma fun\u00e7\u00e3o/como gerar um array\n\n\n```python\na = np.array([1, 2, 3])\nprint(a)\n```\n\n [1 2 3]\n\n\n\n```python\na = np.array([[1, 2, 3], \n [4, 5, 6],\n [1, 1, 1]])\nprint(a)\n```\n\n [[1 2 3]\n [4 5 6]\n [1 1 1]]\n\n\n## Como inverter matriz\n\n\n```python\ni = np.linalg.inv(a)\nprint(i)\n```\n\n [[-1.2009599e+16 1.2009599e+16 -3.6028797e+16]\n [ 2.4019198e+16 -2.4019198e+16 7.2057594e+16]\n [-1.2009599e+16 1.2009599e+16 -3.6028797e+16]]\n\n\n## Arrendondando matriz\n\n\n```python\nb = np.arange(1, 10.1, 1.332)\nprint(b)\nb = np.around(b, 2)\nprint(b)\n```\n\n [1. 2.332 3.664 4.996 6.328 7.66 8.992]\n [1. 2.33 3.66 5. 6.33 7.66 8.99]\n\n\n## Arange vs Linspace\n\n\n```python\n# Vo\u00e7\u00ea decide o passo e a fun\u00e7\u00e3o escolhe o tamanho\nx = np.arange(step=2, stop=4, start=0)\nprint(x)\n```\n\n [0 2]\n\n\n\n```python\n# Vo\u00e7\u00ea decide o tamanho e a fun\u00e7\u00e3o escolhe o passo\ny = np.linspace(0, 155, 7)\nprint(y)\n```\n\n [ 0. 25.83333333 51.66666667 77.5 103.33333333\n 129.16666667 155. 
]\n\n\n## Plotando grafico\n\n\n```python\nimport matplotlib.pyplot as plt\n```\n\n\n```python\nx = np.arange(10)\ny = 2*x\nz = 3*x\nplt.plot(x, y, label='2x')\nplt.plot(x, z, label='3x')\nplt.xlabel('x')\nplt.legend()\nplt.show()\n#show() plota o grafico com tds os comandos plt acima dele at\u00e9 chegar num outro show()\nplt.plot(x, y, label='2x')\nplt.plot(x, z, label='3x')\nplt.xlabel('x')\nplt.legend()\nplt.show()\n```\n\n\n```python\na = np.array([1, 2, 3, 4, 5, 6])\nprint(f'{a[0]}\\n{a[-1]}\\n{a[2:5]}')\n```\n\n 1\n 6\n [3 4 5]\n\n\n\n```python\na = a.reshape([2, 3])\nprint(a)\n```\n\n [[1 2 3]\n [4 5 6]]\n\n\n\n```python\na = np.eye(3)\nprint(a)\n```\n\n [[1. 0. 0.]\n [0. 1. 0.]\n [0. 0. 1.]]\n\n\n\n```python\nfrom scipy.linalg import toeplitz\ne = toeplitz([1, 0, -1, -2], [1, 2, 3, 4]) #Escreve uma matriz([d, d-1, d-2, ...], [d(n), d+1, d+2, ...])\nprint(e)\n```\n\n [[ 1 2 3 4]\n [ 0 1 2 3]\n [-1 0 1 2]\n [-2 -1 0 1]]\n\n\n\n```python\na = np.array([[1, 2, 3],\n [2, 1, 2],\n [3, 2, 1]])\na = np.linalg.inv(a)\nprint(a)\n```\n\n [[-0.375 0.5 0.125]\n [ 0.5 -1. 0.5 ]\n [ 0.125 0.5 -0.375]]\n\n\n\n```python\na = np.array([[1, 1, 1],\n [2, 2, 2],\n [3, 3, 3]])\nb = np.transpose(a)\nprint(a)\nprint(b)\n```\n\n [[1 1 1]\n [2 2 2]\n [3 3 3]]\n [[1 2 3]\n [1 2 3]\n [1 2 3]]\n\n\n\n```python\na = np.array([[1, 2, 3],\n [2, 1, 2],\n [3, 2, 1]])\nb = np.array([[1, 1, 1],\n [2, 2, 2],\n [3, 3, 3]])\nc = np.dot(a, b)\nprint(c)\n```\n\n [[14 14 14]\n [10 10 10]\n [10 10 10]]\n\n\n\n```python\na = np.array([[2, 1],\n [3, 4]])\np = np.array([[10],\n [20]])\nfrom scipy.linalg import solve\n\ns = solve(a, p)\nprint(s)\n```\n\n [[4.]\n [2.]]\n\n\n\n```python\nfrom scipy.linalg import lu_factor, lu_solve\nlu, piv = lu_factor(a)\ns = lu_solve((lu, piv), p)\nprint(s)\n```\n\n [[4.]\n [2.]]\n\n\n\n```python\nfrom sympy import *\n```\n\n\n```python\nf = lambda x: 1/x\nx = symbols ('x')\n```\n\n\n```python\nl = limit(f(x), x, 0)\nprint(l)\n```\n\n oo\n\n\n\n```python\nprint(l + 1)\n```\n\n oo\n\n\n\n```python\nfrom scipy.misc import derivative\nd = derivative(f, 1, dx=1e-10, n=1)\nprint(d)\n```\n\n -1.000000082740371\n\n\n\n```python\n\nprint(f'{d:.2f}')\n```\n\n -1.00\n\n\n\n```python\ny = lambda x: x**2\nv = symbols('v')\ni = integrate(y(v), (v , 1, 2))\nprint(i)\nprint(type(i))\na = float(i)\nprint(a)\nprint(type(a))\n```\n\n 7/3\n \n 2.3333333333333335\n \n\n\n\n```python\ni = float(integrate(f(v), (v , 1, 2)))\nprint(i)\n```\n\n 0.6931471805599453\n\n\n\n```python\nprint(type(i))\n```\n\n \n\n\n\n```python\nimport matplotlib.pyplot as plt\nD = lambda x: 12 - 2*x\nS = lambda x: 20*x\nx = [i for i in range(0, 1000)]\ndx = [D(i) for i in range(0, 1000)]\nsx = [S(i) for i in range(1000, 0, -1)]\nplt.plot(dx, x)\nplt.plot(sx, x)\nplt.show()\n```\n\n\n```python\nimport matplotlib.pyplot as plt\nimport numpy as np\n \ndef f(x):\n return x\n\ndef g(x):\n return x**2\n\ndef h(x):\n return 2*x\n\nx = np.arange(-5, 6, 1)\ny = [f(i) for i in x]\nz = [g(i) for i in x]\nw = [h(i) for i in x]\n#cria uma lista com os valores de y para cada valor de x\n \nplt.plot(x, y, label='y = x')\nplt.plot(x, z, 'r--', label='y = x**2')\nplt.plot(x, w, 'g-o', label='y = 2x')\nplt.title('Titulo desejado')\nplt.xlabel('Eixo X')\nplt.ylabel('Eixo Y')\nplt.legend(loc='best')\nplt.tight_layout()\nplt.style.use('ggplot')\nplt.grid()\nplt.show()\n```\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n\npreco = np.array([8, 7, 6, 5, 4, 3, 2, 1, 0])\nquant = np.array([0, 1000, 2000, 3000, 4000, 5000, 6000, 7000, 
8000])\n\nplt.plot(quant, preco)\nplt.ylabel('Pre\u00e7o(R$)')\nplt.xlabel('Quantidade comprada')\nplt.title('Quantidade x Pre\u00e7o')\nplt.tight_layout()\nplt.style.use('ggplot')\nplt.show()\n```\n\n\n```python\npontos = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I']\n\n```\n\n\n```python\ndef elasticidade(P, PN, Q, QN):\n Num = abs((Q - QN)/((Q + QN)/2))\n Dem = abs((P - PN)/((P + PN)/2))\n return Num/Dem\n```\n\n\n```python\nfor i in range(len(pontos)-1):\n r = elasticidade(preco[i], preco[i+1], quant[i], quant[i+1])\n print(f'Entre os pontos {pontos[i]} e {pontos[i+1]} a elasticidade vale {r:.2f}')\n```\n\n Entre os pontos A e B a elasticidade vale 15.00\n Entre os pontos B e C a elasticidade vale 4.33\n Entre os pontos C e D a elasticidade vale 2.20\n Entre os pontos D e E a elasticidade vale 1.29\n Entre os pontos E e F a elasticidade vale 0.78\n Entre os pontos F e G a elasticidade vale 0.45\n Entre os pontos G e H a elasticidade vale 0.23\n Entre os pontos H e I a elasticidade vale 0.07\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "6b8117b1d8305f16f525f9d27d8b098ce0c5a214", "size": 104877, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Monitoria 1.ipynb", "max_stars_repo_name": "ViniciusRCortez/Monitoria-de-metodos-numericos-com-python", "max_stars_repo_head_hexsha": "85678fe8907752533d0dc97dc83550411ba079f0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Monitoria 1.ipynb", "max_issues_repo_name": "ViniciusRCortez/Monitoria-de-metodos-numericos-com-python", "max_issues_repo_head_hexsha": "85678fe8907752533d0dc97dc83550411ba079f0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Monitoria 1.ipynb", "max_forks_repo_name": "ViniciusRCortez/Monitoria-de-metodos-numericos-com-python", "max_forks_repo_head_hexsha": "85678fe8907752533d0dc97dc83550411ba079f0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 126.6630434783, "max_line_length": 23000, "alphanum_fraction": 0.8853514117, "converted": true, "num_tokens": 2310, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.912436153333645, "lm_q2_score": 0.8991213820004279, "lm_q1q2_score": 0.8203908551725012}} {"text": "# The 24 Game\n\nThe Python program below finds solutions to the 24 game: use four numbers and any of the four basic arithmetic operations (multiplication, division, addition and subtraction) to produce the number 24 (or any number you choose).\n\nExecute the program, choose four numbers (separated by commas) and the target number (24 by default).\n\n\n```python\nfrom sympy import symbols, parsing\nfrom sympy.parsing.sympy_parser import parse_expr\nfrom itertools import permutations, combinations_with_replacement\n# list the default numbers to use here\nnumbahs = [2,4,6,8]\n# enter the default number to calculate here\nansah = 24\n\nnumberlist = input(\"Enter a list of four integers, separated by commas [default: 2,4,6,8]: \")\nif numberlist:\n numbahs = [int(n) for n in numberlist.split(',')]\nanswer = input(\"Enter a number compute to [default: 24]: \")\nif answer:\n ansah = int(answer)\n\n# generate all possible arrangements of parentheses and operators\n# ways of combining arguments with parenthesis\npgroups = [\n \"{0}{1}{2}{3}{4}{5}{6}\",\n \"({0}{1}{2}){3}{4}{5}{6}\",\n \"({0}{1}{2}{3}{4}){5}{6}\",\n \"(({0}{1}{2}){3}{4}){5}{6}\",\n \"({0}{1}({2}{3}{4})){5}{6}\",\n \"{0}{1}({2}{3}{4}){5}{6}\",\n \"{0}{1}({2}{3}({4}{5}{6}))\",\n \"{0}{1}(({2}{3}{4}){5}{6})\",\n \"{0}{1}({2}{3}{4}{5}{6})\",\n \"{0}{1}{2}{3}({4}{5}{6})\"\n]\n\n# available operators\noperators = ['*','/','+','-']\n# available symbols\nvariables = ['a','b','c','d']\n# symbols for sympy to work with\na, b, c, d = symbols('a b c d')\n\n# does a file exist with unique expressions?\ntry:\n with open(\"uniqueexpressions.txt\") as f:\n expressions = [parse_expr(s) for s in f.readlines()]\n \n# no file.. 
generate one\nexcept:\n # collect unique expressions in a set\n expressions = set()\n for p in pgroups: # for every combination of parenthesis\n for combo in combinations_with_replacement(operators, 3): # and every combination of operators\n for o in permutations(combo): # permuted\n for v in permutations(variables): # and every permutation of variables\n s = p.format(v[0],o[0],v[1],o[1],v[2],o[2],v[3]) # construct the expression and...\n s = parse_expr(s)\n # add to expressions set -- this will drop equivalent expressions\n expressions.add(s) \n with open(\"uniqueexpressions.txt\", \"w\") as f:\n [f.write(str(s)+'\\n') for s in expressions]\n \n# for every unique expression, substitute the values and evaluate\nfor x in expressions:\n try:\n val = x.subs({a:numbahs[0],b:numbahs[1],c:numbahs[2],d:numbahs[3]})\n if ansah == val:\n s = str(x)\n for sym, num in zip(variables, numbahs):\n s = s.replace(sym, str(num))\n print(s)\n except:\n pass\n#print(expressions)\n```\n\n Enter a list of four integers, separated by commas [default: 2,4,6,8]: 1,2,3,4\n Enter a number compute to [default: 24]: \n\n\n 2*3*4/1\n 1*2*3*4\n 4*(1 + 2 + 3)\n\n", "meta": {"hexsha": "d9f898f5abf528b45cb643eb4a6df90023210fdf", "size": 4598, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "24i.ipynb", "max_stars_repo_name": "tiggerntatie/24", "max_stars_repo_head_hexsha": "49ad4d06230ee001518ff2bb41b24f2772b744c3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "24i.ipynb", "max_issues_repo_name": "tiggerntatie/24", "max_issues_repo_head_hexsha": "49ad4d06230ee001518ff2bb41b24f2772b744c3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "24i.ipynb", "max_forks_repo_name": "tiggerntatie/24", "max_forks_repo_head_hexsha": "49ad4d06230ee001518ff2bb41b24f2772b744c3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.5620437956, "max_line_length": 236, "alphanum_fraction": 0.5008699435, "converted": true, "num_tokens": 883, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9481545392102522, "lm_q2_score": 0.8652240877899775, "lm_q1q2_score": 0.8203661462721169}} {"text": "# Lecture 10: Expectation Continued\n\n## A Proof of Linearity (discrete case)\n\nLet $T = X + Y$, and show that $\\mathbb{E}(T) = \\mathbb{E}(X) + \\mathbb{E}(Y)$.\n\nWe will also show that $\\mathbb{E}(cX) = c \\mathbb{E}(X)$.\n\nIn general, we'd like to be in a position where\n\n\\begin{align}\n \\sum_{t} t P(T=t) \\stackrel{?}{=} \\sum_{x} x P(X=x) + \\sum_{y} y P(Y=y)\n\\end{align}\n\nso, let's try attacking this from the l.h.s.\n\n\n\nConsidering the image above of a discrete r.v. 
in Pebble World, note that\n\n\n\\begin{align}\n \\mathbb{E}(X) &= \\sum_{x} x P(X=x) & &\\text{grouping the pebbles per X value; weighted average} \\\\\n &= \\sum_{s}X(s)P(\\{s\\}) & &\\text{ungrouped; sum each pebble separately} \\\\\n \\\\\n \\\\\n \\Rightarrow \\mathbb{E}(T) &= \\sum_{s} (X+Y)(s)P(\\{s\\}) \\\\\n &= \\sum_{s}X(s)P(\\{s\\}) + \\sum_{s}Y(s)P(\\{s\\}) \\\\\n &= \\sum_{x} x P(X=x) + \\sum_{y} y P(Y=y) \\\\\n &= \\mathbb{E}(X) + \\mathbb{E}(Y) ~~~~ \\blacksquare \\\\\n \\\\\n \\\\\n \\Rightarrow \\mathbb{E}(cX) &= \\sum_{x} cx P(X=x) \\\\\n &= c \\sum_{x} x P(X=x) \\\\\n &= c \\mathbb{E}(X) ~~~~ \\blacksquare\n\\end{align}\n\n----\n\n## Negative Binomial Distribution\n\n### Description\n\nA misnomer: this distribution is actually non-negative, and not binomial, either.\n\nThe Negative Binomial is a generalization of the Geometric distribution, where we have a series of independent $Bern(p)$ trials and we want to know # failures before the $r^{\\text{th}}$ success.\n\nWe can codify this using a bit string:\n\n\\begin{align}\n & \\text{1000100100001001} & \\text{0 denotes failure, 1 denotes success} & \\\\\n & r = 5 \\\\\n & n = 11 & \\text{failures} \n\\end{align}\n\nNote that the very last bit position is, of course, a success.\n\nNote also that we can permutate the preceding $r-1$ successes amongst the $n+r-1$ slots that come before that final $r^{\\text{th}}$ success.\n\n### Notation\n\n$X \\sim NB(r,p)$\n\n### Parameters\n\n* $r$ - the total number of successes before we stop counting\n* $p$ - probability of success\n\n\n### Probability mass function\n\n\\begin{align}\n P(X=n) &= \\binom{n+r-1}{r-1} p^r (1-p)^n & &\\text{for } n = 0,1,2,\\dots\\\\\n &= \\binom{n+r-1}{n} p^r (1-p)^n & &\\text{or conversely}\\\\\n\\end{align}\n\n### Expected value\n\nLet $X_j$ be the # failures before the $(j-1)^{\\text{st}}$ and $j^{\\text{th}}$ success. Then we could write\n\n\\begin{align}\n \\mathbb{E}(X) &= \\mathbb{E}(X_1 + X_2 + \\dots + X_r) \\\\\n &= \\mathbb{E}(X_1) + \\mathbb{E}(X_2) + \\dots + \\mathbb{E}(X_r) & &\\text{by Linearity} \\\\\n &= r \\mathbb{E}(X_1) & &\\text{by symmetry} \\\\\n &= r \\frac{q}{p} ~~~~ \\blacksquare\n\\end{align}\n\n----\n\n## Revisting the Geometric: the First Success Distribution\n\n$X \\sim FS(p)$ is the geometric distribution that counts the trials until first success, *including that first success*.\n\nLet $Y = X - 1$. \n\nThen $Y \\sim Geom(p)$\n\nExpected value of $FS(p)$ is\n\n\\begin{align}\n \\mathbb{E}(X) &= E(Y) + 1 \\\\\n &= \\frac{q}{p} + 1 \\\\\n &= \\boxed{\\frac{1}{p}}\n\\end{align}\n\n----\n\n## Putnam Problem\n\nConsider a random permutation of $1, 2, 3, \\dots , n$, where $n \\ge 2$.\n\nFind expected # local maxima. For example, given the permuation $\\boxed{3} ~~ 2 ~~ 1 ~~ 4 ~~ \\boxed{7} ~~ 5 ~~ \\boxed{6}$ we have 3 local maxima:\n\n- $\\boxed{3} \\gt 2$\n- $4 \\lt \\boxed{7} \\gt 5$\n- $ 5 \\lt \\boxed{6}$\n\nNow, there are 2 kinds of cases we need to consider:\n\n- non-edge case: $4 ~~ \\boxed{7} ~~ 5$ has probability of $\\frac{1}{3}$ that the largest number is in the middle position\n- edge case: in both left-edge $\\boxed{3} ~~ 2$ and right-edge $5 ~~ \\boxed{6}$, the probability that the larger number is in the right position is $\\frac{1}{2}$\n\nLet $I_j$ be the indicator r.v. 
of position $j$ having a local maximum, $1 \\le j \\le n$.\n\nUsing Linearity, we can say that the expected number of local maxima is given by\n\n\\begin{align}\n \\mathbb{E}(I_j) &= \\mathbb{E}(I_1 + I_2 + \\dots + I_n) \\\\\n &= \\mathbb{E}(I_1) + \\mathbb{E}(I_2) + \\dots + \\mathbb{E}(I_n) & &\\text{by Linearity} \\\\\n &= (n-2) \\frac{1}{3} + 2 \\frac{1}{2} \\\\\n &= \\boxed{\\frac{n+1}{3}}\n\\end{align}\n\nIdiot-checking this, we have:\n\n\\begin{align}\n \\mathbb{E}(I_{n=2}) &= \\frac{2+1}{3} & &\\text{... case where } n=2 \\\\\n &= 1 \\\\\n \\\\\n \\\\\n \\mathbb{E}(I_{n=\\infty}) &= \\frac{\\infty+1}{3} & &\\text{... case where } n= \\infty \\\\\n &= \\infty \\\\\n\\end{align}\n\n----\n\n## St. Petersburg Paradox\n\nConsider a game of chance involving a fair coin. We will flip the coin until the very first heads shows (hypergeometric distribution). \n\n- If heads shows on the very first flip, you get $\\$2$.\n- If the first heads shows on the second flip, you get $\\$4$.\n- If the first heads shows on the third flip, you get $\\$8$.\n\nSo you will get $\\$2^n$ if the first heads shows up on the $n^\\text{th}$ trial, including the heads flip.\n\n_How much would you be willing to play this game?_\n\nLet's tackle this by thinking about the expected number of $\\$\\$\\$$ we stand to make. \n\nGiven $Y = 2^n$, find $\\mathbb{E}(Y)$:\n\n\\begin{align}\n \\mathbb{E}(Y) &= \\sum_{k=1}^\\infty 2^k \\frac{1}{2^{k-1}} ~ \\frac{1}{2}\\\\\n &= \\sum_{k=1}^\\infty 2^k \\frac{1}{2^k}\\\\\n &= \\sum_{k=1}^\\infty 1\\\\\n \\\\\n \\\\\n \\mathbb{E}(Y_{k=40}) &= \\sum_{k=1}^{40} 1 \\\\\n &= 40\n\\end{align}\n\nSo, the \"paradox\" here is that even if we capped the payout to $2^{40} \\approx \\$1000000000$, Linearity shows us we would only pay $40. It is very hard to grasp this, but the truth is that if you were offered this game at any price, you should take it.\n\n----\n", "meta": {"hexsha": "b3ef1145b93ba21ee4638cd77ae1491948c2aa8f", "size": 8364, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Lecture_10.ipynb", "max_stars_repo_name": "dirtScrapper/Stats-110-master", "max_stars_repo_head_hexsha": "a123692d039193a048ff92f5a7389e97e479eb7e", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Lecture_10.ipynb", "max_issues_repo_name": "dirtScrapper/Stats-110-master", "max_issues_repo_head_hexsha": "a123692d039193a048ff92f5a7389e97e479eb7e", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Lecture_10.ipynb", "max_forks_repo_name": "dirtScrapper/Stats-110-master", "max_forks_repo_head_hexsha": "a123692d039193a048ff92f5a7389e97e479eb7e", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.7258064516, "max_line_length": 265, "alphanum_fraction": 0.4766857963, "converted": true, "num_tokens": 1938, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.8856314617436728, "lm_q2_score": 0.926303732328411, "lm_q1q2_score": 0.8203637284806304}} {"text": "# Solving systems of linear equations\n\nConsider a general system of $m$ linear equations with $n$ unknowns (variables):\n\n$$ a_{11}x_1 + a_{12}x_2 + \\cdots + a_{1n}x_n = b_1 \\\\\na_{21}x_1 + a_{22}x_2 + \\cdots + a_{2n}x_n = b_2 \\\\\n \\vdots \\qquad \\qquad \\vdots \\\\\na_{m1}x_1 + a_{m2}x_2 + \\cdots + a_{mn}x_n = b_m, $$\n\nwhere $x_i$ are unknowns, $a_{ij}$ are the coefficients of the system and $b_i$ are the RHS terms. These can be real or complex numbers.\n\nAlmost every problem in linear algebra will come down to solving such a system. But what does it mean to solve a system of equations?\n\n**Solution of the system.** Solving a system of equations means to find *all* n-tuples $(x_1, x_2, ..., x_n)$ such that substituting each one back into the system gives exactly those values given on the RHS, in the correct order. We call each such tuple a solution of the system of equations.\n\nTo solve such a system we will use basic arithmetic operations to transform our problem into a simpler one. That is, we will transform our system into another **equivalent system**. We say that two systems of equations are equivalent if they have the same set of solutions. In other words, the transformed system is equivalent to the original one if the transformation does not cause a solution to be lost or gained.\n\nThree such transformations (sometimes called *elementary row operations*) of a system of linear equations that result in an equivalent system are:\n\n1. Swapping any two equations of the system\n2. Multiplying an equation of the system by any number different from 0\n3. Adding an equation of the system multiplied by a scalar to another equation of the system\n\nWe will aim to use these transformations to eliminate certain unknowns from some equations. Ideally, we would like to reduce one equation to have only one unknown, which we can then simply solve for. Then we plug this value in other equations and so on. Let us demonstrate this on a couple of simple examples. \n\n### Example: Unique solution\n\nConsider the following system of 3 linear equations involving 3 unknowns $x, y, z$:\n\n$$ x + z = 0 \\\\\ny - z = 1 \\\\\nx + 2y + z = 1 $$\n\nBeing a system of 3 equations and 3 unknowns, we should be able to solve it. We start by noticing that if we subtract the 1st equation from the 3rd we would be left with an equation involving only $y$. That is, we need to use the transformation rule number 3. After doing that, we get the following equivalent system:\n\n$$ x + z = 0 \\\\\ny - z = 1 \\\\\n2y = 1. $$\n\nNow we can easily see from the 3rd equation that $y = 1/2$. Now we plug this value of $y$ into the 2nd equation and solve it for $z$. Then we plug the value of $z$ into the 1st equation and solve it for $x$. After doing that, we find that the only solution to the problem is a triplet $(1/2, 1/2, -1/2)$.\n\n## Matrix equation\n\nWe can represent any system of linear equation in matrix form. The general $m \\times n$ system from the beginning of this notebook can be represented as:\n\n$$ A \\mathbf{x} = \\mathbf{b}, $$\n\nwhere $A \\in \\mathbb{c}^{m \\times n}$ is called a **coefficient matrix** with coefficients as entries $a_{ij}$ and $\\mathbf{x}$ and $\\mathbf{b}$ are vectors $\\in \\mathbb{R}^n$.\n\nThe same transformation rules from before still apply to a systems represented in matrix form, where each row is one equation. 
Since these transformations are performed on both the LHS and RHS of equations it is convenient to write the system using an **augmented matrix** which is obtained by appending $\\mathbf{b}$ to $A$:\n\n$$ (A | \\mathbf{b}) =\n\\left ( \\begin{array}{cccc|c}\na_{11} & a_{12} & \\cdots & a_{1n} & b_1 \\\\\na_{21} & a_{22} & \\cdots & a_{2n} & b_2 \\\\\n & \\vdots & & \\vdots & \\\\\na_{m1} & a_{m2} & \\cdots & a_{mn} & b_m \\end{array} \\right ).$$\n\nTo avoid confusing $A$ and $\\mathbf{b}$, they are often separated by a straight line, as shown above.\n\n### Example: No solution\n\nConsider a very similar problem to the one in the previous example, which only differs in the sign of $a_{33}$ in the 3rd equation:\n\n$$ x + z = 0 \\\\\ny - z = 1 \\\\\nx + 2y - z = 1 $$\n\nLet us first write it in matrix-form $A \\mathbf{x} = \\mathbf{b}$:\n\n$$ \\begin{pmatrix}\n1 & 0 & 1 \\\\\n0 & 1 & -1 \\\\\n1 & 2 & -1 \\end{pmatrix} \n\\begin{pmatrix} x \\\\ y \\\\ z \\end{pmatrix} = \n\\begin{pmatrix} 0 \\\\ 1 \\\\ 1 \\end{pmatrix} $$\n\nOr using an augmented matrix:\n\n$$ \\left ( \\begin{array}{ccc|c}\n1 & 0 & 1 & 0 \\\\\n0 & 1 & -1 & 1 \\\\\n1 & 2 & -1 & 1\n\\end{array} \\right ) $$\n\nWe will again aim to eliminate certain coefficients from equations. For example, we could eliminate the first coefficient in the 3rd row by subtracting the 1st row from the 3rd row. By doing that, we get the equivalent system:\n\n \n$$\\begin{aligned}\n \\left ( \\begin{array}{ccc|c}\n1 & 0 & 1 & 0 \\\\\n0 & 1 & -1 & 1 \\\\\n1 & 2 & -1 & 1\n\\end{array} \\right )\n \\hspace{-0.5em}\n \\begin{align}\n &\\phantom{I}\\\\\n &\\phantom{II} \\\\\n &L_3 - L_1 \\to L_3\n \\end{align}\n \\Rightarrow\n\\left ( \\begin{array}{ccc|c}\n1 & 0 & 1 & 0 \\\\\n0 & 1 & -1 & 1 \\\\\n0 & 2 & -2 & 1\n\\end{array} \\right )\n\\end{aligned} $$\n\nNow we want to eliminate the second coefficient in the 3rd equation. To do that, we use the third transformation rule again to subtract $2 \\times$ 2nd equation from the 3rd:\n\n$$\\begin{aligned}\n\\left ( \\begin{array}{ccc|c}\n1 & 0 & 1 & 0 \\\\\n0 & 1 & -1 & 1 \\\\\n0 & 2 & -2 & 1\n\\end{array} \\right )\n \\hspace{-0.5em}\n \\begin{align}\n &\\phantom{I}\\\\\n &\\phantom{II} \\\\\n & L_3 - 2L_1 \\to L_3\n \\end{align}\n \\Rightarrow\n\\left ( \\begin{array}{ccc|c}\n1 & 0 & 1 & 0 \\\\\n0 & 1 & -1 & 1 \\\\\n0 & 0 & 0 & -1\n\\end{array} \\right )\n\\end{aligned} $$\n\nLet us look at the third equation: $0 = -1$. What this equation is telling us is that if a solution exists, that solution would be such that $0 = -1$. Since this is obviously not true, we conclude that there is no solution of this system of equations. Or, more precisely, we found that the solution set of this system is an **[empty set](https://en.wikipedia.org/wiki/Empty_set)**.\n\n## Vector equation\n\nRemember that a product of matrix-vector multiplication is a linear combination of the columns of the matrix. We can therefore write $A\\mathbf{x} = \\mathbf{b}$ as:\n\n$$ x_1 \\begin{pmatrix} \\\\ a_1 \\\\ \\\\ \\end{pmatrix} \n+ x_2 \\begin{pmatrix} \\\\ a_2 \\\\ \\\\ \\end{pmatrix} \n+ \\cdots + x_n \\begin{pmatrix} \\\\ a_n \\\\ \\\\ \\end{pmatrix}\n= \\begin{pmatrix} \\\\ b \\\\ \\\\ \\end{pmatrix} $$\n\nTherefore, solving $A \\mathbf{x} = \\mathbf{b}$ can be thought of as finding weights $x_1, ..., x_n$ such that the above is true.\n\n## Triangular systems\n\nSome special types of systems of linear equations can be solved very easily. 
An especially important type is a triangular system.\n\nA square $n \\times n$ matrix $A$ is a **lower triangular matrix** if $a_{ij} = 0$ for $i < j$. Similarly, we say that it is **upper triangular** if $a_{ij} = 0$ for $i > j$. For example,\n\n$$\\begin{pmatrix} 1 & 0 & 0 \\\\ 1 & 1 & 0 \\\\ 1 & 1 & 1 \\end{pmatrix}, \\quad\n\\begin{pmatrix} 0 & 0 & 0 \\\\ 1 & 0 & 0 \\\\ 0 & 1 & 0 \\end{pmatrix}, \\quad\n\\begin{pmatrix} 1 & 1 & 0 \\\\ 0 & 1 & 1 \\\\ 0 & 0 & 1 \\end{pmatrix}.$$\n\nThe first two matrices above are lower-triangular because $a_{12}=a_{13}=a_{23} = 0$ and the third one is upper-triangular because $a_{21}=a_{31}=a_{32}=0$. \n\nA system of linear equations which has a triangular coefficient matrix is called a *triangular system*. If you look back at the examples above, you will see that the transformations performed were actually helping us reach a triangular form of the coefficient matrix. Indeed, often the easiest way to solve a system of linear equations will be to transform it into an equivalent triangular system.\n\n### Example: Upper-triangular system\n\nHere we will demonstrate why triangular systems of equations are very simple to solve. Consider the following upper-triangular system:\n\n$$ \\begin{pmatrix}\n1 & -1 & 2 \\\\\n0 & 2 & -1 \\\\\n0 & 0 & 2 \\end{pmatrix} \n\\begin{pmatrix} x \\\\ y \\\\ z \\end{pmatrix} = \n\\begin{pmatrix} -1 \\\\ 3 \\\\ 2 \\end{pmatrix} $$\n\nSolving an $n \\times n$ triangular system comes down to solving, in $n$ steps, one equation with one unknown. So we should be able to solve the above system in just 3 steps. We begin solving an upper-triangular system by solving the last equation, which in our case is the following equation with one unknown: $ 2z = 2 $. Clearly, $z=1$ and $z$ is no longer an unknown variable. Now we work our way up and solve the 2nd equation, plugging in our unique solution of $z$: $2y -z = 2y - 1 = 3$ which is again an equation with one unknown. We find that $y = 2$ which we then plug in the first equation: $ x - y + 2z = x - 2 + 2 = x = -1 $. We have successfully solved the system and we found that it has a unique solution $\\mathbf{x} = (-1, 2, 1)$.\n\nIf the system is lower-triangular, we start solving it from the 1st equation and work our way down to the last one.\n\n### Trapezoidal matrix\n\nWe can generalise the idea of triangular matrices to non-square matrices. A non-square matrix with zero entries below or above the diagonal is called an upper or lower **trapezoidal matrix**. For example:\n\n$$ \\begin{pmatrix} 1 & 2 & 2 & 2 \\\\ 0 & 1 & 0 & 0 \\\\ 0 & 0 & 1 & 2\\end{pmatrix}, \\quad\n\\begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\\\ 2 & 1 & 0 & 0 & 0 \\\\ 2 & 2 & 1 & 0 & 0 \\end{pmatrix} $$\n\nWhat can we say about such system of equations $A \\mathbf{x} = \\mathbf{b}$ where $A \\in \\mathbb{C}^{m \\times n}$ and $\\mathbf{x}$, $\\mathbf{b} \\in \\mathbb{C}^n$? If $m < n$ there are fewer equations than there are unknowns so the system is **undertermined**. If $m > n$ there are more equations than unknown and the system is **overdetermined**.\n\n#### Example: Underdetermined system\n\nLet us consider the following system of 2 equations and 3 unknowns:\n\n$$ x - y + 2z = -1 \\\\\n2y - z = 3 $$\n\nWe begin from the 2nd equation since it involves less unknowns than the 1st equation: $ 2y - z = 3$. Here we have a choice of which variable to solve for, but we shall solve it for $y$ since it is the *leading variable* (first non-zero in the row). We find $y = (z + 3)/2$. 
$z$ is a *free variable*, which we introduce formally bellow, but it essentially means that $z$ can be any number $z \\in \\mathbb{C}$, say $z = t$. Substituting $y = (t + 3)/2$ into the 1st equation:\n\n$$ x = y - 2z - 1 = (t + 3)/2 - 2t - 1 = -\\frac{3t}{2} + \\frac{1}{2} $$\n\nTherefore our solution set in terms of $z$ is $\\{(-\\frac{3t}{2} + \\frac{1}{2}, \\frac{t + 3}{2}, t), t \\in \\mathbb{c}\\}$.\n\n## Existence and uniqueness of a solution\n\nLet us think geometrically what it means for a system $A \\mathbf{x} = \\mathbf{b}$ to have a solution. Consider a simple system of linear equations:\n\n$$ \\begin{bmatrix} 2 & 3 \\\\ 1 & -4 \\end{bmatrix} \\begin{bmatrix} x \\\\ y \\end{bmatrix} = \\begin{bmatrix} 7 \\\\ 3 \\end{bmatrix}, $$\n\nor, equivalently,\n\n$$ 2x + 3y = 7 \\\\ x - 4y = 3. $$\n\nLet us plot these two lines using Python:\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n\nx1 = np.linspace(0, 8, 10)\ny1 = (7 - 2 * x1) / 3\n\nx2 = np.linspace(0, 8, 10)\ny2 = (x2 - 3) / 4\n\nplt.plot(x1, y1, label=r\"$2x + 3y = 7$\")\nplt.plot(x2, y2, label=r\"$x - 4y = 3$\")\nplt.xlim(1.5, 5)\nplt.ylim(-1, 1.5)\nplt.xlabel('x')\nplt.ylabel('y')\nplt.legend(loc='best')\nplt.show()\n```\n\nThe two lines cross at only one point, at point $ (37/11, 1/11)$. This point is the *unique solution* of the system, $\\mathbf{x} = (37/11, 1/11)$.\n\nLet's now consider two systems of two equations whose graphs are parallel lines:\n\n$$ 2x + 3y = 7 \\qquad 2x + 3y = 7 \\\\ 2x + 3y = 5 \\qquad 4x + 6y = 14$$\n\nand let us plot them.\n\n\n```python\nx2 = np.linspace(0, 2.5, 10)\ny2 = (5 - 2 * x2) / 3\n\nfig, ax = plt.subplots(1, 2, figsize=(10, 4), sharey=True)\n\nax[0].plot(x1, y1, label=r\"$2x + 3y = 7$\")\nax[0].plot(x2, y2, label=r\"$2x + 3y = 5$\")\nax[0].set_aspect('equal')\nax[0].set_xlabel('x')\nax[0].set_ylabel('y')\nax[0].legend(loc='best')\n\ny2 = (14 - 4 * x1) / 6\nax[1].plot(x1, y1, label=r\"$2x + 3y = 7$\")\nax[1].plot(x1, y2, 'y--', label=r\"$4x + 6y = 14$\")\nax[1].set_aspect('equal')\nax[1].set_xlabel('x')\nax[1].legend(loc='best')\n\nplt.setp(ax, xlim=(0, 3.7), ylim=(0, 2.5))\nplt.show()\n```\n\nIn the first case, the lines never cross because they are parallel. Therefore, the system of equations has no solution. We can write $\\mathbf{x} \\in \\emptyset$ (empty set).\n\nThe lines are parallel in the second case as well, but now they are on top of each other. There are an infinite number of solutions to that system - every point on the line $ y = (7-2x)/3 $ is a solution. We can then write that the solution set is $ \\{ (t, \\frac{7-2t}{3}), t \\in \\mathbb{R})\\} $.\n\nWe can conclude that the existence and uniqueness of the solution of a linear system depends on whether the lines are parallel or not. \n\nLet us test this by finding the cross products of the row-vectors of the coefficient matrices. The row-vectors are normal to the lines given by the equations, so if the equation graphs are parallel, their normals will be too. 
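\n\nAs a quick numerical cross-check of this parallel-or-not question — a minimal sketch, not part of the original notebook, assuming only NumPy (already imported in the cells above; the variable names are just illustrative) — we can compute the determinant of each $2\\times2$ coefficient matrix, whose absolute value is exactly the magnitude of that cross product:\n\n\n```python\n# Hypothetical sanity check: one coefficient matrix per system discussed above.\n# A nonzero determinant means the row-vectors (the normals) are not parallel,\n# so the two lines cross at exactly one point.\nimport numpy as np\n\nA_unique = np.array([[2, 3], [1, -4]])    # 2x + 3y = 7,  x - 4y = 3\nA_empty = np.array([[2, 3], [2, 3]])      # 2x + 3y = 7,  2x + 3y = 5\nA_infinite = np.array([[2, 3], [4, 6]])   # 2x + 3y = 7,  4x + 6y = 14\n\nfor name, A in [('unique', A_unique), ('no solution', A_empty), ('infinitely many', A_infinite)]:\n    print(name, np.linalg.det(A))\n```\n\nOnly the first determinant is nonzero, which matches the three cases above.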
\n\n\n```python\nfrom matplotlib.patches import Polygon\n\n\nx2 = np.linspace(0, 8, 10)\ny2 = (x2 - 3) / 4\n\nfig, ax = plt.subplots(1, 2, figsize=(10, 6))\n\nax[0].plot(x1, y1, label=r\"$2x + 3y = 7$\")\nax[0].plot(x2, y2, label=r\"$x - 4y = 3$\")\nax[0].quiver(37/11, 1/11, 2, 3, scale=15, angles='xy', color='b')\nax[0].quiver(37/11, 1/11, 1, -4, scale=15, angles='xy', color='orange')\nvertices = np.array([[0, 0], [2, 3], [3, -1], [1, -4]])/4.3 + np.array([37/11, 1/11])\nax[0].add_patch(Polygon(vertices , facecolor='lightblue', alpha=0.7))\nax[0].set_xlim(1.5, 5)\nax[0].set_ylim(-1, 1.5)\nax[0].set_aspect('equal')\nax[0].set_xlabel('x')\nax[0].set_ylabel('y')\nax[0].legend(loc='best')\n\n\ny2 = (5 - 2 * x2) / 3\nax[1].plot(x1, y1, label=r\"$2x + 3y = 7$\")\nax[1].plot(x2, y2, label=r\"$2x + 3y = 5$\")\nax[1].quiver(1.3, 4.4/3, 2, 3, scale=15, angles='xy', color='b')\nax[1].quiver(1.5, 2/3, 2, 3, scale=15, angles='xy', color='orange')\nax[1].set_xlim(0, 3.7)\nax[1].set_ylim(0, 2.5)\nax[1].set_aspect('equal')\nax[1].set_xlabel('x')\nax[1].legend(loc='best')\n\nplt.show()\n```\n\nIf $\\mathbf{a_1} = (a_{11}, a_{12})$ is the first row-vector of a coefficient matrix and $\\mathbf{a_2} = (a_{21}, a_{22})$, we can express their cross product as:\n\n$$ ( \\mathbf{a_1} \\times \\mathbf{a_2} ) = |\\mathbf{a_1}| |\\mathbf{a_2}| \\sin(\\vartheta) \\hat{n}, $$\n\nwhere $|\\cdot|$ denotes the magnitude of the vector, $\\vartheta$ is the angle between $\\mathbf{a_1}$ and $\\mathbf{a_2}$ and $\\hat{n}$ is the unit normal vector to both vectors. We can see that the magnitude of the vector is equal to the area of a parallelogram with sides $|\\mathbf{u}|$ and $|\\mathbf{v}|$, with $\\vartheta$ controlling how skewed it is. Such a parallelogram is marked by light-blue on the left figure. We also know from the previous notebook that the magnitude of the cross product of the two rows or columns of the matrix is given by the determinant of the matrix:\n\n$$ |(a_{11}, a_{12}) \\times (a_{21}, a_{22}) | = |(a_{11}, a_{21}) \\times (a_{12}, a_{22}) | = \\det \\begin{pmatrix} a_{11} & a_{12} \\\\ a_{21} & a_{22} \\end{pmatrix}.$$\n\nLet us then conclude, and we will justify this in the next notebooks, that a system of linear equations $A \\mathbf{x} = \\mathbf{b}$ will have a unique solution iff $\\det A \\neq 0$. If $\\det A = 0$, the system either has infinitely-many solutions or no solutions at all.\n\n# Gaussian elimination\n\nLet us finally formally introduce what we have been trying to achieve in most examples above. **Gaussian elimination** (or row reduction) uses the three transformations mentioned before to reduce a system to **row echelon form**. A matrix is in row echelon form if:\n\n- all zero rows (if they exist) are below all non-zero rows\n- the **leading coefficient** (or **pivot**; the first non-zero entry in a row) is always strictly to the right of the leading coefficient of the row above it. That is, for two leading elements $a_{ij}$ and $a_{kl}$: if $i < k$ then it is required that $j < l$.\n\nNotice that these conditions require reduced echelon form to be an upper-trapezoidal matrix. 
For example:\n\n$$ \\begin{pmatrix} 1 & 2 & 0 & 1 \\\\ 0 & 1 & 0 & 5 \\\\ 0 & 0 &2 & 2 \\end{pmatrix}, \\quad \n\\begin{pmatrix} 1 & 0 & 0 \\\\ 0 & 1 & 0 \\\\ 0 & 0 & 1 \\end{pmatrix}, \\quad\n\\begin{pmatrix} 1 & -2 & 0 & 1 \\\\ 0 & 0 & 1 & 3 \\\\ 0 & 0 & 0 & 0 \\\\ 0 & 0 & 0 & 0\\end{pmatrix}$$\n\nThe motivation behind performing Gaussian elimination is therefore clear, as we have demonstrated in the upper-triangular example why triangular systems are most convenient for solving systems of linear equations.\n\nLet us consider again a general coefficient matrix $A \\in \\mathbb{C}^{m \\times n}$ in $A \\mathbf{x} = \\mathbf{b}$:\n\n$$ \\begin{pmatrix} a_{11} & a_{12} & a_{13} & \\cdots & a_{1n} \\\\\na_{21} & a_{22} & a_{23} & \\cdots & a_{2n} \\\\ \na_{31} & a_{32} & a_{33} & \\cdots & a_{3n} \\\\\n & \\vdots & & \\vdots & \\\\\na_{m1} & a_{m2} & a_{m3} & \\cdots & a_{mn} \\end{pmatrix} $$\n\nWhat we want to achieve is to have all entries in the 1st column be 0 except for the first one. Then in the 2nd column all entires below the second entry should be 0. In the 3rd column all entries below the third entry should be 0, and so on. That means that we will have to use the 3rd transformation rule and subtract one equation from every one below it, multiplied such that the column entries will cancel:\n\n$$\n\\begin{aligned}\n &\\begin{pmatrix} a_{11} & a_{12} & a_{13} & \\cdots & a_{1n} \\\\\n a_{21} & a_{22} & a_{23} & \\cdots & a_{2n} \\\\ \n a_{31} & a_{32} & a_{33} & \\cdots & a_{3n} \\\\\n & \\vdots & & \\vdots & \\\\\n a_{m1} & a_{m2} & a_{m3} & \\cdots & a_{mn} \\end{pmatrix}\n \\hspace{-0.5em}\n \\begin{align}\n &\\phantom{L_1}\\\\\n &L_2 - ^{a_{21}}/_{a_{11}}L_1 \\to L_2 \\\\\n &L_3 - ^{a_{31}}/_{a_{11}}L_1 \\to L_3 \\\\\n &\\qquad \\cdots \\\\\n &L_m - ^{a_{m1}}/_{a_{11}}L_1 \\to L_1\n \\end{align} \\\\ \\\\\n \\Rightarrow \\quad\n &\\begin{pmatrix} a_{11} & a_{12} & a_{13} & \\cdots & a_{1n} \\\\\n 0 & \\tilde{a_{22}} & \\tilde{a_{23}} & \\cdots & \\tilde{a_{2n}} \\\\ \n 0 & \\tilde{a_{32}} & \\tilde{a_{33}} & \\cdots & \\tilde{a_{3n}} \\\\\n & \\vdots & & \\vdots & \\\\\n 0 & \\tilde{a_{m2}} & \\tilde{a_{m3}} & \\cdots & \\tilde{a_{mn}} \\end{pmatrix}\n \\hspace{-0.5em}\n \\begin{aligned}\n &\\phantom{L_1}\\\\\n &\\phantom{L_2} \\\\\n &L_3 - ^{a_{32}}/_{a_{12}}L_1 \\to L_3 \\\\\n &\\qquad \\cdots \\\\\n &L_m - ^{a_{m2}}/_{a_{12}}L_1 \\to L_1\n \\end{aligned} \\\\ \\\\\n \\Rightarrow \\quad\n &\\begin{pmatrix} a_{11} & a_{12} & a_{13} & \\cdots & a_{1n} \\\\\n 0 & \\tilde{a_{22}} & \\hat{a_{23}} & \\cdots & \\hat{a_{2n}} \\\\ \n 0 & 0 & \\hat{a_{33}} & \\cdots & \\hat{a_{3n}} \\\\\n & \\vdots & & \\vdots & \\\\\n 0 & 0 & \\hat{a_{m3}} & \\cdots & \\hat{a_{mn}} \\end{pmatrix} \\\\ \n & \\qquad \\dots\n\\end{aligned} $$\n\nAnd so on. Notice that these operations (transformations) are performed on entire rows, so the entries in the entire row change (here denoted by tilde and hat). 
That is why we need to be careful what row we add to other row since, for example, adding the first row to other rows later on would re-introduce non-zero values in the first column which we previously eliminated.\n\n### Example\n\nConsider the following system of 3 equations and 3 unknowns $x, y, z$:\n\n$$ x - y + 2z = -1 \\\\ x + 2y - z = 2 \\\\ -x + y + z = 0 $$\n\nLet us write it in augmented-matrix form and perform Gaussian eliminations.\n\n$$\\begin{aligned}\n &\\left ( \\begin{array}{ccc|c} \n 1 & -1 & 2 & -1 \\\\ \n 1 & 2 & -1 & 2 \\\\\n -1 & 1 & 1 & 0 \\end{array} \\right )\n \\hspace{-0.5em}\n \\begin{align}\n &\\phantom{L_1}\\\\\n &L_2 - L_1 \\to L_2 \\\\\n &L_3 + L_1 \\to L_3 \\\\\n \\end{align} \\\\ \\\\\n \\Rightarrow \\quad\n &\\left ( \\begin{array}{ccc|c} \n 1 & -1 & 2 & -1 \\\\ \n 0 & 3 & -3 & 3 \\\\\n 0 & 0 & 3 & -1 \\end{array} \\right )\n\\end{aligned}$$\n\nThe one transformation on the 3rd equation eliminated both $x$ and $y$ unknowns from it so we did not have to perform another transformation. Now the system is reduced to an upper-triangular one, which we have encountered before and know how to solve. We begin from the last equation and back-substitute found values as we work our way up. We leave it to the reader to confirm that there is a unique solution $\\mathbf{x} = (1/3, 2/3, -1/3)$.\n\n## Transformations as matrices\n\nRecall the example on permutations a few notebooks ago, where we wrote each permutation of rows or columns as a permutation matrix multiplying the original matrix. We can do the same thing with elementary row transformations. \n\nConsider the same example from above, where we performed the following 2 transformations on the square $3 \\times 3$ matrix $A$:\n\n1. subtracted row 1 from row 2; $L_2 - L_1 \\to L_2$\n2. added row 1 to row 3; $L_3 + L_1 \\to L_3$\n\nWe can write both of these transformations using **elementary matrices**:\n\n$$ E_1 = \\begin{pmatrix} 1 & 0 & 0 \\\\ -1 & 1 & 0 \\\\ 0 & 0 & 1 \\end{pmatrix}, \\quad E_2 = \\begin{pmatrix} 1 & 0 & 0 \\\\ 0 & 1 & 0 \\\\ 1 & 0 & 1 \\end{pmatrix} $$\n\nAs in the example of permutations, left-multiplication by an elementary matrix $E$ represents elementary row operations, while right-multiplication represents elementary column operations. Our transformations are row operations, so we left multiply our original matrix:\n\n$$ E_2 E_1 A = \n\\begin{pmatrix} 1 & -1 & 2 \\\\ 0 & 3 & -3 \\\\ 0 & 0 & 3 \\end{pmatrix},$$\n\nresulting in the same upper-triangular matrix as before. Note that the order in which we multiply is, in general, important.\n\n# Using the inverse matrix to solve a linear system\n\nRemember that an inverse of a square matrix $A$ is $A^{-1}$ such that $AA^{-1} = A^{-1}A = I$. Therefore, if we have a matrix equation $A \\mathbf{x} = \\mathbf{b}$, finding an inverse $A^{-1}$ would allow us to solve for all unknowns simultaneously, rather than in steps like in the examples above. Here is the idea:\n\n$$ A\\mathbf{x} = \\mathbf{b} \\\\\n\\mbox{multiply both sides by} A^{-1} \\\\\nI \\mathbf{x} = A^{-1}\\mathbf{b} \\\\ \n\\mathbf{x} = A^{-1}\\mathbf{b} $$\n\nFinding the solution $\\mathbf{x}$ is then reduced to a matrix-vector multiplication $A^{-1}\\mathbf{b}$.\n\n## Gauss-Jordan elimination\n\n**Gauss-Jordan elimination** is a type of Gaussian elimination that we can use to find the inverse of a matrix. This process is based on reducing a matrix whose inverse we want to find to **reduced row echelon form**. 
For a matrix to be in reduced row echelon form it must satisfy the two conditions written above for row echelon forms and an additional condition:\n\n- Each leading (pivot) element is 1 and all other entries in their columns are 0.\n\nExamples of such matrices are:\n\n$$ \\begin{pmatrix} 1 & 0 & 0\\\\ 0 & 1 & 0 \\\\ 0 & 0 & 1 \\end{pmatrix}, \\quad\n\\begin{pmatrix} 1 & 0 & 4\\\\ 0 & 1 & 3 \\end{pmatrix}, \\quad\n\\begin{pmatrix} 0 & 0 & 1 & 5 & 0 & 2 & 0 \\\\ 0 & 0 & 0 & 0 & 1 & 6 & 0 \\\\ 0 & 0 & 0 &0 &0 & 0 & 1 \\end{pmatrix} $$\n\nNow, let $A \\in \\mathbb{R}^{n \\times n}$ be a square matrix whose inverse we want to find. We use it to form an augmented block matrix $[A | I]$. We perform elementary row transformations on this matrix such that we reduce $A$ to reduced row echelon form, which we will denote as $A_R$. In this process, the identity matrix $I$ on the right will be transformed to some new matrix $B$. If $A$ is invertible, we will have:\n\n$$ [A | I] \\Rightarrow [A_R | B] = [I | A^{-1}] $$\n\nLet us think why this is. We showed above that elementary row operations can be represented as elementary matrices. Let there be $k$ such transformations needed to reduce $A$ to $A_R$. Then we can write:\n\n$$ [A_R | B] = E_k E_{k-1} \\dots E_2 E_1[A | I] $$\n\nand let us denote $S = E_k E_{k-1} \\dots E_2 E_1$, the product of all $k$ elementary matrices. Now,\n\n$$ [A_R | B] = S[A | I] = [SA | SI] = [SA | S]$$\n\nTherefore, if $A_R = I \\Rightarrow I = SA$, meaning that $S = B$ is indeed the inverse of $A$.\n\nIf $A$ is not invertible, remember that it means that it is not full-rank. What that means is that $A_R$ will have at least one zero-row. In that case, $B \\neq A^{-1}$.\n\n### Example: Calculating an inverse of a matrix\n\nLet us find the inverse matrix of:\n\n$$ A = \\begin{bmatrix} 2 & 1 & 3 \\\\ 0 & 2 & -1 \\\\ 3 & -1 & 2 \\end{bmatrix}. 
$$\n\nWe begin by forming an augmented matrix $[A|I]$ and begin reducing $A$ to reduced row echelon form.\n\n$$\\begin{aligned}\n {[A | I] =} \\quad &\\left [ \\begin{array}{ccc|ccc} \n 2 & 1 & 3 & 1 & 0 & 0 \\\\ \n 0 & 2 & -1 & 0 & 1 & 0\\\\ \n 3 & -1 & 2 & 0 & 0 & 1 \\end{array} \\right ]\n \\hspace{-0.5em}\n \\begin{aligned}\n &^1/_2 L_1 \\to L_1 \\\\\n &\\phantom{L} \\\\\n &\\phantom{L} \\\\\n \\end{aligned} \\\\ \\\\\n \\sim \\quad\n &\\left [ \\begin{array}{ccc|ccc} \n 1 & ^1/_2 & ^3/_2 & ^1/_2 & 0 & 0 \\\\ \n 0 & 2 & -1 & 0 & 1 & 0\\\\ \n 3 & -1 & 2 & 0 & 0 & 1 \\end{array} \\right ]\n \\hspace{-0.5em}\n \\begin{aligned}\n &\\phantom{L} \\\\\n &\\phantom{L} \\\\\n &L_3 -3L_1 \\to L_3 \\\\\n \\end{aligned} \\\\ \\\\\n \\sim \\quad\n &\\left [ \\begin{array}{ccc|ccc} \n 1 & ^1/_2 & ^3/_2 & ^1/_2 & 0 & 0 \\\\ \n 0 & 2 & -1 & 0 & 1 & 0\\\\ \n 0 & -^5/_2 & -^5/_2 & -^3/_2 & 0 & 1 \\end{array} \\right ]\n \\hspace{-0.5em}\n \\begin{aligned}\n &\\phantom{L} \\\\\n & ^1/_2 L_2 \\to L_2\\\\\n &\\phantom{L} \\\\\n \\end{aligned} \\\\ \\\\\n \\sim \\quad\n &\\left [ \\begin{array}{ccc|ccc} \n 1 & ^1/_2 & ^3/_2 & ^1/_2 & 0 & 0 \\\\ \n 0 & 1 & -^1/_2 & 0 & ^1/_2 & 0\\\\ \n 0 & -^5/_2 & -^5/_2 & -^3/_2 & 0 & 1 \\end{array} \\right ]\n \\hspace{-0.5em}\n \\begin{aligned}\n & L_1 -^1/_2 L_2 \\to L_1 \\\\\n & \\phantom{L}\\\\\n & L_3 +^5/_2 L_2 \\to L_3 \\\\\n \\end{aligned} \\\\ \\\\\n \\sim \\quad\n &\\left [ \\begin{array}{ccc|ccc} \n 1 & 0 & ^7/_4 & ^1/_2 & -^1/_4 & 0 \\\\ \n 0 & 1 & -^1/_2 & 0 & ^1/_2 & 0\\\\ \n 0 & 0 & -^{15}/_4 & -^3/_2 & ^5/_4 & 1 \\end{array} \\right ]\n \\hspace{-0.5em}\n \\begin{aligned}\n & \\phantom{L} \\\\\n & \\phantom{L}\\\\\n & -^4/_{15}L_3 \\to L_3 \\\\\n \\end{aligned} \\\\ \\\\\n \\sim \\quad\n &\\left [ \\begin{array}{ccc|ccc} \n 1 & 0 & ^7/_4 & ^1/_2 & -^1/_4 & 0 \\\\ \n 0 & 1 & -^1/_2 & 0 & ^1/_2 & 0\\\\ \n 0 & 0 & 1 & ^2/_5 & -^1/_3 & -^4/_{15} \\end{array} \\right ]\n \\hspace{-0.5em}\n \\begin{aligned}\n & L_1 -^7/_4 L_3 \\to L_1 \\\\\n & L_2 +^1/_2 L_3 \\to L_2\\\\\n & \\phantom{L} \\\\\n \\end{aligned} \\\\ \\\\\n \\sim \\quad\n &\\left [ \\begin{array}{ccc|ccc} \n 1 & 0 & 0 & -^1/_5 & ^1/_3 & ^7/_{15} \\\\ \n 0 & 1 & 0 & ^1/_5 & ^1/_3 & -^2/_{15}\\\\ \n 0 & 0 & 1 & ^2/_5 & -^1/_3 & -^4/_{15} \\end{array} \\right ]\n = [I | B]\n\\end{aligned}$$\n\nWe successfully found the inverse $A^{-1} = B$! Let us now use it to solve the system $A \\mathbf{x} = \\mathbf{b}$, where $\\mathbf{b} = (2, 0, 1)^T$. 
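\n\nBefore doing so, we can (optionally) verify the hand-computed inverse numerically — a minimal sketch, not part of the original worked example, assuming NumPy as used in the earlier cells:\n\n\n```python\nimport numpy as np\n\nA = np.array([[2, 1, 3],\n              [0, 2, -1],\n              [3, -1, 2]])\n\n# The candidate inverse B obtained by Gauss-Jordan elimination above\nB = np.array([[-1/5,  1/3,  7/15],\n              [ 1/5,  1/3, -2/15],\n              [ 2/5, -1/3, -4/15]])\n\n# If B really is the inverse of A, then A @ B is (numerically) the 3x3 identity\nprint(np.allclose(A @ B, np.eye(3)))  # expected: True\n```\n\n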
As explained before, $A$ on the LHS is eliminated by multiplying both sides by $A^{-1}$ on the left, leaving:\n\n$$ \\mathbf{x} = A^{-1}\\mathbf{b} =\n\\begin{pmatrix} \n-^1/_5 & ^1/_3 & ^7/_{15} \\\\ \n^1/_5 & ^1/_3 & -^2/_{15} \\\\ \n^2/_5 & -^1/_3 & -^4/_{15} \n\\end{pmatrix}\n\\begin{pmatrix} 2 \\\\ 0 \\\\ 1 \\end{pmatrix} =\n\\begin{pmatrix} 1/15 \\\\ 4/15 \\\\ 8/15 \\end{pmatrix} $$\n\nThe reader is encouraged to confirm this is the correct solution either through substitution into the original system or by using another solution method.\n\n\n```python\n\n```\n", "meta": {"hexsha": "943e1d728ba73d30017127013b40a49b9400eb02", "size": 83951, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "mathematics/linear_algebra/Linear_Systems.ipynb", "max_stars_repo_name": "jrper/thebe-test", "max_stars_repo_head_hexsha": "554484b1422204a23fe47da41c6dc596a681340f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "mathematics/linear_algebra/Linear_Systems.ipynb", "max_issues_repo_name": "jrper/thebe-test", "max_issues_repo_head_hexsha": "554484b1422204a23fe47da41c6dc596a681340f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "mathematics/linear_algebra/Linear_Systems.ipynb", "max_forks_repo_name": "jrper/thebe-test", "max_forks_repo_head_hexsha": "554484b1422204a23fe47da41c6dc596a681340f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 121.492040521, "max_line_length": 18100, "alphanum_fraction": 0.7869233243, "converted": true, "num_tokens": 9488, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.888758793492457, "lm_q2_score": 0.9230391643039739, "lm_q1q2_score": 0.8203591740130857}} {"text": "# Grafico funciones simbolicas\n\nEn este tutorial graficaremos una funci\u00f3n simbolica sencilla\n\n\n```python\n# Importar paquetes\nimport sympy as sym\nimport matplotlib.pyplot as plt\nimport numpy as np\n```\n\n\n```python\n# Configurar funcion #1\na = 0.2\nb = 0.2\nx = sym.Symbol('x')\nf_x = sym.sin(a*x) + sym.cos(b*x)\nprint(\"Funcion:\")\nf_x\n```\n\n Funcion:\n\n\n\n\n\n$\\displaystyle \\sin{\\left(0.2 x \\right)} + \\cos{\\left(0.2 x \\right)}$\n\n\n\n\n```python\n# Configurar funcion #2\n# Curva S-Shape\n\nt = sym.Symbol('t')\nf_t = 1/(1 + sym.exp(-t))\nprint(\"Funcion Sigmoide: \")\nf_t\n```\n\n Funcion Sigmoide: \n\n\n\n\n\n$\\displaystyle \\frac{1}{1 + e^{- t}}$\n\n\n\n## Grafico con Matplotlib\n\nA continuaci\u00f3n computaremos los valores numericos de la funci\u00f3n simbolica\npara crear un grafico usando matplotlib\n\n\n```python\n# Convertir la funci\u00f3n simbolica para usar Numpy\nnum_f_x = sym.lambdify(x, f_x, modules=['numpy'])\n# Crear un conjunto de valores entre [0, 100] en intervalos de 10\nx_vals = np.linspace(-np.pi, np.pi, 100)\n# Computar los valores de la funci\u00f3n y\ny_vals = num_f_x(x_vals)\n\n# Funcion #2\nnum_f_t = sym.lambdify(t, f_t, modules=['numpy'])\n# Crear un conjunto de valores entre [0, 100] en intervalos de 10\nx2_vals = np.linspace(-100, 100)\n# Computar los valores de la funci\u00f3n y\ny2_vals = num_f_t(x2_vals)\n```\n\n\n```python\n# Configurar el layout\nfig = plt.figure() # Figura principal\nlayout = fig.add_gridspec(1,2)\n\n# Configuraci\u00f3n del primer plano\ntrig = fig.add_subplot(layout[0,0])\ntrig.plot(x_vals, y_vals) # Graficar valores x, f(x) \ntrig.set_ylabel(f_x) # Configurar la leyenda del grafico\ntrig.set_xlabel('radianes')\ntrig.set_title('Funciones trigonmetricas')\n\n# Configuraci\u00f3n del segundo plano\ntrig2 = fig.add_subplot(layout[0,1])\ntrig2.plot(x2_vals, y2_vals) # Graficar valores x, f(x) \ntrig2.set_ylabel(f_t) # Configurar la leyenda del grafico\ntrig2.set_xlabel('Valores')\ntrig2.set_title('S-Shape')\n\n# Mostrar graficas\nplt.show()\n```\n", "meta": {"hexsha": "84d8f2fafb7afcf7350aaac99a5fd588509b2d9e", "size": 25192, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "matplot_practice.ipynb", "max_stars_repo_name": "ggonzr/matplotlib_practice", "max_stars_repo_head_hexsha": "cf22f468730cb6dab7f30206b67434ff534f5bed", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-11-03T15:38:05.000Z", "max_stars_repo_stars_event_max_datetime": "2020-11-03T15:38:05.000Z", "max_issues_repo_path": "matplot_practice.ipynb", "max_issues_repo_name": "ggonzr/matplotlib_practice", "max_issues_repo_head_hexsha": "cf22f468730cb6dab7f30206b67434ff534f5bed", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "matplot_practice.ipynb", "max_forks_repo_name": "ggonzr/matplotlib_practice", "max_forks_repo_head_hexsha": "cf22f468730cb6dab7f30206b67434ff534f5bed", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 131.8952879581, "max_line_length": 20824, "alphanum_fraction": 0.8914337885, "converted": true, "num_tokens": 619, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9441768604361741, "lm_q2_score": 0.8688267694452331, "lm_q1q2_score": 0.8203261314377038}} {"text": "## CS164 ASSIGNMENT 2: CONSTRAINED OPTIMIZATION & THE KKT CONDITIONS\n\n#### Vin\u00edcius Miranda\n\nThe assignment instructions are available [here](https://course-resources.minerva.kgi.edu/uploaded_files/mke/00087691-7466/assignment2.pdf).\n\nWe are concerned with the linear program\n\n$$\n\\begin{equation*}\n\\begin{aligned}\n& \\underset{x}{\\text{min}}\n& & c^Tx \\\\\n& \\text{subject to:} \n& & Ax \\leq b\n\\end{aligned}\n\\end{equation*}\n$$\n\nwhere $A \\in \\mathbb{R}^{m\\times n}, b \\in \\mathbb{R}^{m\\times 1}$ and $c\\in \\mathbb{R}^{n\\times 1}$. Its KKT conditions are\n\n$$\n\\begin{align}\n\\text{Primal Feasibility : } &A\\textbf{x}^* \\preceq b \\\\\n\\text{Dual Feasibility : } &\\begin{cases} c + A^T\\lambda = \\textbf{0} \\\\ \n \\lambda \\succeq \\textbf{0} \\end{cases} \\\\\n\\text{Complementary Slackness : } &\\lambda^T(A\\textbf{x}^*-b) = \\textbf{0}\n\\end{align}\n$$\n\nwhere dual feasilibity is also known as the stationary condition and $\\lambda \\in \\mathbb{R}^{m\\times 1}$. Note that $\\preceq$ and $\\succeq$ are component-wise (vectorized) versions of $\\leq$ and $\\geq$. There are three possibilities for the solution of this LP (Calafiore XXXX). They are\n1. If the intersection among the half-spaces defined by each of the $m$ constraints is empty, then the feasible set is the empty set and there is no solution.\n2. If the feasible set is nonempty and unbounded, then the problem may or may not attain a finite solution. The geometric interpretation is that if one direction of descent of the objective function is unbounded in the feasible set, the minimum is never found as the function decreases infinitely. \n3. If the feasible set is nonempty and bounded, the feasible is a polyhedron (or polytope) and the solution is attained at one of its vertices. The proof below follows from Calafiore (XXXX). \n\nLet $\\mathcal{X} = \\{x\\in\\mathbb{R}^n : Ax\\leq b\\}$. A point $x^*$ is a solution if and only if $c^Tx\\geq c^Tx^*, \\forall x \\in \\mathcal{X}$. In other words, if all other points in the feasible set have higher objectives. Similarly, we may say that if $x^*$ is optimal, then $c^T(x-x^*)\\geq0, \\forall x \\in \\mathcal{X}$, which is to say that all directions in the feasible set are non-descent directions. Notice that for $f(x)=c^Tx, \\nabla f=c$. If $x^*$ were not a boundary point of the feasible set, one would need only step in the direction $-c$ to reduce the objective. Furthermore, when $\\exists x \\in \\mathcal{X}$ such that $x \\neq x^*$ and $c^T(x-x^*)=0$, there is a collection of points that extremize the objective. These points are all in a face (or facets) of the polyhedron (or polytope), which occurs when the level curves (or sets) of the objective function are parallel to the face.\n\n\n\n## $l_1$ and $l_\\infty$ Regression\n\n\n\nThe $l_1$ and $l_\\infty$ norms can be used as cost functions to find the line-of-best-fit for a regression problem. We can thus define the problems\n\n$$\\min_{\\Theta}||Y-X\\Theta||_1$$\n\nand\n\n$$\\min_{\\Theta}||Y-X\\Theta||_{\\infty}$$\n\nwhere $X \\in \\mathbb{R}^{N\\times2}$ for a dataset of $N$ points in $\\mathbb{R}^2$, $Y \\in \\mathbb{R}^{N\\times1}$, and $\\Theta \\in \\mathbb{R}^{2\\times1}$.\n\n### $l_1$ norm\n\nThe $l_1$ norm computes the sum of the absolute values of each component of a vector. 
In other words, \n\n$$\\min_{\\Theta}||Y-X\\Theta||_1=\\min_{\\Theta}\\sum_{i=1}^N|Y_i-X_i\\Theta|.$$\n\nWe can then introduce slack variables $u_i$ which will provide bounds to the absolute values. By minimizing over the slack variables, we will always find the tighest nonnegative bound and hence the linear program remains the same. The LP then becomes\n\n$$\n\\begin{equation*}\n\\begin{aligned}\n& \\underset{\\Theta, u}{\\text{min}}\n& & \\sum_{i=1}^N u_i \\\\\n& \\text{subject to:} \n& & |Y_i-X_i\\Theta| \\leq u_i, i=1,\\dots,N \n\\end{aligned}\n\\end{equation*}\n$$\n\nwhich we can express in standard form as\n\n$$\n\\begin{equation*}\n\\begin{aligned}\n& \\underset{\\Theta, u}{\\text{min}}\n& & \\textbf{1}^T\\textbf{u} \\\\\n& \\text{subject to:} \n& & Y_i-X_i\\Theta -u_i \\leq 0, i=1,\\dots,N \\\\\n& & & X_i\\Theta-Y_i- u_i \\leq 0, i=1,\\dots,N\n\\end{aligned}\n\\end{equation*}\n$$\n\nWe thus introduce $N$ slack variables and $2\\times N$ constraints.\n\n### $l_\\infty$ norm\n\nWe proceed similarly with the $l_\\infty$ norm. However, we are now interested only in the largest absolute value of all the components of the vector over which we minimize. Therefore, only one slack variable is necessary to provide a bound on this value. As before, as we minimize the slack variable, we find the tighest bound possible, equivalent to the value of the infinity norm, and thus the LPs are equivalent. Therefore,\n\n$$\n\\begin{equation*}\n\\begin{aligned}\n& \\underset{\\Theta, t}{\\text{min}}\n& & t\\\\\n& \\text{subject to:} \n& & ||Y-X\\Theta||_{\\infty} \\leq t \n\\end{aligned}\n\\end{equation*}\n$$\n\nSince $t$ bounds the largest component of the vector, we also bounds all the other components. In other words,\n\n$$||Y-X\\Theta||_{\\infty} \\leq t \\iff \\max_{i=1,\\dots,N} |Y_i-X_i\\Theta|\\leq t \\iff |Y_i-X_i\\Theta|\\leq t, i = 1,\\dots, N$$\n\nIn standard form, the LP then becomes\n\n$$\n\\begin{equation*}\n\\begin{aligned}\n& \\underset{\\Theta, t}{\\text{min}}\n& & t\\\\\n& \\text{subject to:} \n& & Y_i-X_i\\Theta -t \\leq 0, i=1,\\dots,N \\\\\n& & & X_i\\Theta-Y_i- t \\leq 0, i=1,\\dots,N\n\\end{aligned}\n\\end{equation*}\n$$\n\nThus, we introduce one slack variable $t$ and $2\\times N$ constraints. This section is also heavily based on Calafiore (XXXX).\n\n\nSome code is given below to generate a synthetic dataset. Using CVX, solve two linear programs for computing the regression line for $l_1$ and $l_\\infty$ regression. 
Plot the lines over the data to evaluate the fit.\n\n\n\n\n```python\n# l_1 and l_infinity regression using cvxpy\nimport numpy as np\nimport cvxpy as cvx\nimport matplotlib.pyplot as plt\n\n# generate a synthetic dataset\n\n# actual parameter values\ntheta1_act = 2\ntheta2_act = 5\n\n# Number of points in dataset\nN = 200\n\n# Noise magnitude\nmag = 30\n\n# datapoints\nx = np.arange(0,N)\ny = theta1_act * x + theta2_act *np.ones([1,N]) + np.random.normal(0,mag,N)\n\nplt.figure()\n# Scatter plot of data\nplt.scatter(x,y)\nplt.show()\n\n```\n\n### $l_1$ norm\n\n\n```python\n# Arranging the variables and checking proper dimensionality\nX = np.array([x, np.ones_like(x)]).T\nY = y.T\nassert X.shape == (N, 2)\nassert Y.shape == (N, 1)\n\n# HINT: you will first want to declare a variable for the parameters of the line\ntheta_1 = cvx.Variable(2)\n\n# The slack variables\nu = cvx.Variable(N)\n\n# The constraints.\nconstraints = [Y.flatten() - X@theta_1 - u <= np.zeros(N),\n X@theta_1 - Y.flatten() - u <= np.zeros(N)]\n\n# Form objective.\nobj = cvx.Minimize(np.ones(N).T@u)\n\n# Form and solve problem.\nprob = cvx.Problem(obj, constraints)\nprob.solve(\"ECOS\") # Returns the optimal value.\nprint(\"status:\", prob.status)\nprint(\"optimal value:\", prob.value)\nprint(\"optimal \u0398:\", theta_1.value)\n```\n\n status: optimal\n optimal value: 5034.734558966026\n optimal \u0398: [1.97202379 9.75376663]\n\n\n### $l_\\infty$ norm\n\n\n```python\n# HINT: you will first want to declare a variable for the parameters of the line\ntheta_infty = cvx.Variable(2)\n\n# The slack variables\nt = cvx.Variable()\n\n# The constraints.\nconstraints = [Yi - Xi@theta_infty - t <= 0 for Yi, Xi in zip(y.flatten(), X)] +\\\n [Xi@theta_infty - Yi - t <= 0 for Yi, Xi in zip(y.flatten(), X)]\n\n #[Y.flatten() - X@theta_infty - t <= np.zeros(N),\n #X@theta_infty - Y.flatten() - t <= np.zeros(N)]\n\n# Form objective.\nobj = cvx.Minimize(t)\n\n# Form and solve problem.\nprob = cvx.Problem(obj, constraints)\nprob.solve(\"ECOS\") # Returns the optimal value.\nprint(\"status:\", prob.status)\nprint(\"optimal value:\", prob.value)\nprint(\"optimal \u0398:\", theta_infty.value)\n```\n\n status: optimal\n optimal value: 84.00028631278876\n optimal \u0398: [ 1.92832652 19.26261364]\n\n\n#### Visualizing the solutions...\n\n\n```python\nplt.figure(figsize=[12, 6])\nplt.scatter(x,y, alpha=0.5)\nplt.plot(x, theta_1.value[0] * x + theta_1.value[1] *np.ones(N), \n color='red', linewidth=3, alpha=0.7, label=r\"$l_1$ norm\")\nTheta2 = np.linalg.inv(X.T@X)@X.T@Y\nplt.plot(x, Theta2[0] * x + Theta2[1] *np.ones(N), \n color='green', linewidth=3, alpha=0.5, label=r\"$l_2$ norm\")\nplt.plot(x, theta_infty.value[0] * x + theta_infty.value[1] *np.ones(N), \n color='orange', linewidth=3, alpha=0.7, label=r\"$l_\\infty$ norm\")\n\nplt.title(\"Line of best fit for different lp-norm cost functions\")\nplt.legend()\nplt.show()\n```\n", "meta": {"hexsha": "bdbbe764e8e34742ff14557d77c466d6780bc1ec", "size": 71059, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Assignment 2 - Constrained Optimization & The KKT Conditions.ipynb", "max_stars_repo_name": "viniciusmss/CS164-Optimization-Methods", "max_stars_repo_head_hexsha": "2a97ad468f83bb6038411d8b6624f27ea374ff69", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Assignment 2 - Constrained Optimization & The KKT Conditions.ipynb", 
"max_issues_repo_name": "viniciusmss/CS164-Optimization-Methods", "max_issues_repo_head_hexsha": "2a97ad468f83bb6038411d8b6624f27ea374ff69", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Assignment 2 - Constrained Optimization & The KKT Conditions.ipynb", "max_forks_repo_name": "viniciusmss/CS164-Optimization-Methods", "max_forks_repo_head_hexsha": "2a97ad468f83bb6038411d8b6624f27ea374ff69", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 181.2729591837, "max_line_length": 44356, "alphanum_fraction": 0.8891625269, "converted": true, "num_tokens": 2661, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.8976952893703477, "lm_q2_score": 0.9136765234137297, "lm_q1q2_score": 0.8202031110767813}} {"text": "## Conditional Independence\n\nTwo random variable $X$ and $Y$ are conditiaonly independent given $Z$, denoted by $X \\perp \\!\\! \\perp Y \\mid Z$ if \n\n$$p_{X,Y\\mid Z} (x,y\\mid z) = p_{X\\mid Z}(x\\mid z) \\, p_{Y\\mid Z}(y\\mid z)$$\n\nIn general Marginal independence doesn't imply conditional independence and vice versa. \n\n### Example \n\nR: Red Sox Game
    \nA: Accident
    \nT: Bad Traffic \n\n\n\nFind the following probability\n\n(a) $\\mathbb{P}(R=1) = 0.5$\n\n(b) $\\mathbb{P}(R=1 \\mid T=1)$\n\n$$\\begin{align}p_{R,A}(r,a\\mid 1) \n&= \\frac{p_{T\\mid R,A}(1 \\mid r, a)\\, p_R(r) \\, p_A(a)}{p_T(1)}\\\\\n&= c\\cdot p_{T\\mid R,A}(1 \\mid r, a)\\end{align}$$\n\n(c) $\\mathbb{P}(R=1 \\mid T=1, A=1)= \\mathbb{P}(R=1 \\mid T=1)$\n\n### Practice Problem: Conditional Independence\n\nSuppose $X_0, \\dots , X_{100}$ are random variables whose joint distribution has the following factorization:\n\n$$p_{X_0, \\dots , X_{100}}(x_0, \\dots , x_{100}) = p_{X_0}(x_0) \\cdot \\prod _{i=1}^{100} p_{X_ i | X_{i-1}}(x_ i | x_{i-1})$$\n \nThis factorization is what's called a Markov chain. We'll be seeing Markov chains a lot more later on in the course.\n\nShow that $X_{50} \\perp \\!\\! \\perp X_{52} \\mid X_{51}$.\n\n**Answer:** \n$$\n\\begin{eqnarray}\n\t\tp_{X_{50},X_{51},X_{52}}(x_{50},x_{51},x_{52})\n &=& \\sum_{x_{0} \\dots x_{49}} \\sum_{x_{53} \\dots x_{100}} p_{X_0, \\dots , X_{100}}(x_0, \\dots , x_{100}) \\\\\n\t\t&=& \\sum_{x_{0} \\dots x_{49}} \\sum_{x_{53} \\dots x_{100}} \\left[p_{X_0}(x_{0}) \\prod_{i=0}^{50} p_{X_i\\mid X_{i-1}}(x_{i}|x_{i-1})\\right] \\\\\n && \\cdot \\prod_{i=51}^{52} p_{X_i\\mid X_{i-1}}(x_{i}|x_{i-1})\\cdot \\prod_{i=53}^{100} p_{X_i\\mid X_{i-1}}(x_{i}|x_{i-1}) \\\\\n &=& \\underbrace{\\sum_{x_{0} \\dots x_{49}} \\left[p_{X_0}(x_{0}) \\prod_{i=0}^{50} p_{X_i\\mid X_{i-1}}(x_{i}|x_{i-1})\\right]}_{=p_{X_{50}}(x_{50})} \\\\\n && \\cdot \\prod_{i=51}^{52} p_{X_i\\mid X_{i-1}}(x_{i}|x_{i-1})\\cdot \\underbrace{\\sum_{x_{53} \\dots x_{100}}\\prod_{i=53}^{100} p_{X_i\\mid X_{i-1}}(x_{i}|x_{i-1})}_{=1} \\\\[2ex]\n\t\t&=& p_{X_{50}}(x_{50}) \\cdot p_{X_{51}\\mid X_{50}}(x_{51}|x_{50}) \\cdot p_{X_{52}\\mid X_{51}}(x_{52}|x_{51}) \\\\[2ex]\n &=& p_{X_{50}\\mid X_{51}}(x_{50}|x_{51}) \\cdot p_{X_{52}\\mid X_{51}}(x_{52}|x_{51}) \\\\[2ex]\n \\frac{p_{X_{50},X_{51},X_{52}}(x_{50},x_{51},x_{52})}{p_{X_{51}}(x_{51})} \n &=& p_{X_{50}\\mid X_{51}}(x_{50}|x_{51}) \\cdot p_{X_{52}\\mid X_{51}}(x_{52}|x_{51}) \\\\[2ex]\n p_{X_{50},X_{52}\\mid X_{51}}(x_{50},x_{52}\\mid x_{51}) \n &=& p_{X_{50}\\mid X_{51}}(x_{50}|x_{51}) \\cdot p_{X_{52}\\mid X_{51}}(x_{52}|x_{51})\n\\end{eqnarray}\n$$\n\n\n```python\n\n```\n", "meta": {"hexsha": "233c7dc2b288c1f2a69cc16e242012c3fc7412b8", "size": 4493, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "week03/03 Conditional Independence.ipynb", "max_stars_repo_name": "infimath/Computational-Probability-and-Inference", "max_stars_repo_head_hexsha": "e48cd52c45ffd9458383ba0f77468d31f781dc77", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-04-04T03:07:47.000Z", "max_stars_repo_stars_event_max_datetime": "2019-04-04T03:07:47.000Z", "max_issues_repo_path": "week03/03 Conditional Independence.ipynb", "max_issues_repo_name": "infimath/Computational-Probability-and-Inference", "max_issues_repo_head_hexsha": "e48cd52c45ffd9458383ba0f77468d31f781dc77", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "week03/03 Conditional Independence.ipynb", "max_forks_repo_name": "infimath/Computational-Probability-and-Inference", "max_forks_repo_head_hexsha": "e48cd52c45ffd9458383ba0f77468d31f781dc77", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": 
"2021-02-27T05:33:49.000Z", "max_forks_repo_forks_event_max_datetime": "2021-02-27T05:33:49.000Z", "avg_line_length": 32.5579710145, "max_line_length": 202, "alphanum_fraction": 0.4845314934, "converted": true, "num_tokens": 1185, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9294404077216355, "lm_q2_score": 0.8824278757303677, "lm_q1q2_score": 0.8201641246037696}} {"text": "# 16 PDEs: Waves\n\n(See *Computational Physics* Ch 21 and *Computational Modeling* Ch 6.5.)\n\n## Background: waves on a string\n\nAssume a 1D string of length $L$ with mass density per unit length $\\rho$ along the $x$ direction. It is held under constant tension $T$ (force per unit length). Ignore frictional forces and the tension is so high that we can ignore sagging due to gravity.\n\n\n### 1D wave equation\nThe string is displaced in the $y$ direction from its rest position, i.e., the displacement $y(x, t)$ is a function of space $x$ and time $t$.\n\nFor small relative displacements $y(x, t)/L \\ll 1$ and therefore small slopes $\\partial y/\\partial x$ we can describe $y(x, t)$ with a *linear* equation of motion:\n\nNewton's second law applied to short elements of the string with length $\\Delta x$ and mass $\\Delta m = \\rho \\Delta x$: the left hand side contains the *restoring force* that opposes the displacement, the right hand side is the acceleration of the string element:\n\n\\begin{align}\n\\sum F_{y}(x) &= \\Delta m\\, a(x, t)\\\\\nT \\sin\\theta(x+\\Delta x) - T \\sin\\theta(x) &= \\rho \\Delta x \\frac{\\partial^2 y(x, t)}{\\partial t^2}\n\\end{align}\n\nThe angle $\\theta$ measures by how much the string is bent away from the resting configuration.\n\nBecause we assume small relative displacements, the angles are small ($\\theta \\ll 1$) and we can make the small angle approximation\n\n$$\n\\sin\\theta \\approx \\tan\\theta = \\frac{\\partial y}{\\partial x}\n$$\n\nand hence\n\n\\begin{align}\nT \\left.\\frac{\\partial y}{\\partial x}\\right|_{x+\\Delta x} - T \\left.\\frac{\\partial y}{\\partial x}\\right|_{x} &= \\rho \\Delta x \\frac{\\partial^2 y(x, t)}{\\partial t^2}\\\\\n\\frac{T \\left.\\frac{\\partial y}{\\partial x}\\right|_{x+\\Delta x} - T \\left.\\frac{\\partial y}{\\partial x}\\right|_{x}}{\\Delta x} &= \\rho \\frac{\\partial^2 y}{\\partial t^2}\n\\end{align}\n\nor in the limit $\\Delta x \\rightarrow 0$ a linear hyperbolic PDE results:\n\n\\begin{gather}\n\\frac{\\partial^2 y(x, t)}{\\partial x^2} = \\frac{1}{c^2} \\frac{\\partial^2 y(x, t)}{\\partial t^2}, \\quad c = \\sqrt{\\frac{T}{\\rho}}\n\\end{gather}\n\nwhere $c$ has the dimension of a velocity. 
This is the (linear) **wave equation**.\n\n### General solution: waves \n\nGeneral solutions are propagating waves:\n\nIf $f(x)$ is a solution at $t=0$ then\n\n$$\ny_{\\mp}(x, t) = f(x \\mp ct)\n$$\n\nare also solutions at later $t > 0$.\n\nBecause of linearity, any linear combination is also a solution, so the most general solution contains both right and left propagating waves\n\n$$\ny(x, t) = A f(x - ct) + B g(x + ct)\n$$\n\n(If $f$ and/or $g$ are present depends on the initial conditions.)\n\nIn three dimensions the wave equation is\n\n$$\n\\boldsymbol{\\nabla}^2 y(\\mathbf{x}, t) - \\frac{1}{c^2} \\frac{\\partial^2 y(\\mathbf{x}, t)}{\\partial t^2} = 0\\\n$$\n\n### Boundary and initial conditions \n\n* The boundary conditions could be that the ends are fixed \n\n $$y(0, t) = y(L, t) = 0$$\n \n* The *initial condition* is a shape for the string, e.g., a Gaussian at the center\n\n $$\n y(x, t=0) = g(x) = y_0 \\frac{1}{\\sqrt{2\\pi\\sigma}} \\exp\\left[-\\frac{(x - x_0)^2}{2\\sigma^2}\\right]\n $$ \n \n at time 0.\n* Because the wave equation is *second order in time* we need a second initial condition, for instance, the string is released from rest: \n\n $$\n \\frac{\\partial y(x, t=0)}{\\partial t} = 0\n $$\n\n (The derivative, i.e., the initial displacement velocity is provided.)\n\n### Analytical solution\nSolve (as always) with *separation of variables*.\n\n$$\ny(x, t) = X(x) T(t)\n$$\n\nand this yields the general solution (with boundary conditions of fixed string ends and initial condition of zero velocity) as a superposition of normal modes\n\n$$\ny(x, t) = \\sum_{n=0}^{+\\infty} B_n \\sin k_n x\\, \\cos\\omega_n t,\n\\quad \\omega_n = ck_n,\\ k_n = n \\frac{2\\pi}{L} = n k_0.\n$$\n\n(The angular frequency $\\omega$ and the wave vector $k$ are determined from the boundary conditions.)\n\nThe coefficients $B_n$ are obtained from the initial shape:\n\n$$\ny(x, t=0) = \\sum_{n=0}^{+\\infty} B_n \\sin n k_0 x = g(x)\n$$\n\nIn principle one can use the fact that $\\int_0^L dx \\sin m k_0 x \\, \\sin n k_0 x = \\pi \\delta_{mn}$ (orthogonality) to calculate the coefficients:\n\n\\begin{align}\n\\int_0^L dx \\sin m k_0 x \\sum_{n=0}^{+\\infty} B_n \\sin n k_0 x &= \\int_0^L dx \\sin(m k_0 x) \\, g(x)\\\\\n\\pi \\sum_{n=0}^{+\\infty} B_n \\delta_{mn} &= \\dots \\\\\nB_m &= \\pi^{-1} \\dots\n\\end{align}\n\n(but the analytical solution is ugly and I cannot be bothered to put it down here.)\n\n## Numerical solution\n\n1. discretize wave equation\n2. 
time stepping: leap frog algorithm (iterate)\n\nUse the central difference approximation for the second order derivatives:\n\n\\begin{align}\n\\frac{\\partial^2 y}{\\partial t^2} &\\approx \\frac{y(x, t+\\Delta t) + y(x, t-\\Delta t) - 2y(x, t)}{\\Delta t ^2} = \\frac{y_{i, j+1} + y_{i, j-1} - 2y_{i,j}}{\\Delta t^2}\\\\\n\\frac{\\partial^2 y}{\\partial x^2} &\\approx \\frac{y(x+\\Delta x, t) + y(x-\\Delta x, t) - 2y(x, t)}{\\Delta x ^2} = \\frac{y_{i+1, j} + y_{i-1, j} - 2y_{i,j}}{\\Delta x^2}\n\\end{align}\n\nand substitute into the wave equation to yield the *discretized* wave equation:\n\n$$\n\\frac{y_{i+1, j} + y_{i-1, j} - 2y_{i,j}}{\\Delta x^2} = \\frac{1}{c^2} \\frac{y_{i, j+1} + y_{i, j-1} - 2y_{i,j}}{\\Delta t^2}\n$$\n\nRe-arrange so that the future terms $j+1$ can be calculated from the present $j$ and past $j-1$ terms:\n\n$$\ny_{i,j+1} = 2(1 - \\beta^2)y_{i,j} - y_{i, j-1} + \\beta^2 (y_{i+1,j} + y_{i-1,j}), \\quad \n\\beta := \\frac{c}{\\Delta x/\\Delta t}\n$$\n\nThis is the time stepping algorithm for the wave equation.\n\n## Numerical implementation \n\n\n\n```python\n# if you have plotting problems, try \n# %matplotlib inline\n%matplotlib notebook\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\n\nplt.style.use('ggplot')\n```\n\n### Student version \n\n\n```python\nL = 0.5 # m\nNx = 50\nNt = 100\n\nDx = L/Nx\nDt = 1e-4 # s\n\nrho = 1.5e-2 # kg/m\ntension = 150 # N\n\nc = np.sqrt(tension/rho)\n\n# TODO: calculate beta\nbeta = c*Dt/Dx\nbeta2 = beta**2\n\nprint(\"c = {0} m/s\".format(c))\nprint(\"Dx = {0} m, Dt = {1} s, Dx/Dt = {2} m/s\".format(Dx, Dt, Dx/Dt))\nprint(\"beta = {}\".format(beta))\n\nX = np.linspace(0, L, Nx+1) # need N+1!\n\ndef gaussian(x, y0=0.05, x0=L/2, sigma=0.1*L):\n return y0/np.sqrt(2*np.pi*sigma) * np.exp(-(x-x0)**2/(2*sigma**2))\n\n# displacements at j-1, j, j+1\ny0 = np.zeros_like(X)\ny1 = np.zeros_like(y0)\ny2 = np.zeros_like(y0)\n\n# save array\ny_t = np.zeros((Nt+1, Nx+1))\n\n# boundary conditions\ny0[0] = y0[-1] = y1[0] = y1[-1] = 0\ny2[:] = y0\n\n# initial conditions: velocity 0, i.e. 
no difference between y0 and y1\ny0[1:-1] = y1[1:-1] = gaussian(X)[1:-1]\n\n# save initial\nt_index = 0\ny_t[t_index, :] = y0\nt_index += 1\ny_t[t_index, :] = y1\n\nfor jt in range(2, Nt):\n y2[1:-1] = 2*(1-beta2)*y1[1:-1] - y0[1:-1] + beta2*(y1[2:] + y1[:-2])\n y0[:], y1[:] = y1, y2\n \n t_index += 1\n y_t[t_index, :] = y2 \n print(\"Iteration {0:5d}\".format(jt), end=\"\\r\")\nelse:\n print(\"Completed {0:5d} iterations: t={1} s\".format(jt, jt*Dt))\n \n```\n\n c = 100.0 m/s\n Dx = 0.01 m, Dt = 0.0001 s, Dx/Dt = 100.0 m/s\n beta = 1.0\n Completed 99 iterations: t=0.0099 s\n\n\n### Fancy version \nPackage as a function and can use `step` to only save every `step` time steps.\n\n\n```python\ndef wave(L=0.5, Nx=50, Dt=1e-4, Nt=100, step=1,\n rho=1.5e-2, tension=150.):\n\n Dx = L/Nx\n\n #rho = 1.5e-2 # kg/m\n #tension = 150 # N\n\n c = np.sqrt(tension/rho)\n\n beta = c*Dt/Dx\n beta2 = beta**2\n\n print(\"c = {0} m/s\".format(c))\n print(\"Dx = {0} m, Dt = {1} s, Dx/Dt = {2} m/s\".format(Dx, Dt, Dx/Dt))\n print(\"beta = {}\".format(beta))\n\n X = np.linspace(0, L, Nx+1) # need N+1!\n\n def gaussian(x, y0=0.05, x0=L/2, sigma=0.1*L):\n return y0/np.sqrt(2*np.pi*sigma) * np.exp(-(x-x0)**2/(2*sigma**2))\n\n # displacements at j-1, j, j+1\n y0 = np.zeros_like(X)\n y1 = np.zeros_like(y0)\n y2 = np.zeros_like(y0)\n\n # save array\n y_t = np.zeros((int(np.ceil(Nt/step)) + 1, Nx+1))\n\n # boundary conditions\n y0[0] = y0[-1] = y1[0] = y1[-1] = 0\n y2[:] = y0\n\n # initial conditions: velocity 0, i.e. no difference between y0 and y1\n y0[1:-1] = y1[1:-1] = gaussian(X)[1:-1]\n\n # save initial\n t_index = 0\n y_t[t_index, :] = y0\n if step == 1:\n t_index += 1\n y_t[t_index, :] = y1\n\n for jt in range(2, Nt):\n y2[1:-1] = 2*(1-beta2)*y1[1:-1] - y0[1:-1] + beta2*(y1[2:] + y1[:-2])\n y0[:], y1[:] = y1, y2\n\n if jt % step == 0 or jt == Nt-1:\n t_index += 1\n y_t[t_index, :] = y2 \n print(\"Iteration {0:5d}\".format(jt), end=\"\\r\")\n else:\n print(\"Completed {0:5d} iterations: t={1} s\".format(jt, jt*Dt))\n \n return y_t, X, Dx, Dt, step\n\n```\n\n\n```python\ny_t, X, Dx, Dt, step = wave()\n```\n\n c = 100.0 m/s\n Dx = 0.01 m, Dt = 0.0001 s, Dx/Dt = 100.0 m/s\n beta = 1.0\n Completed 99 iterations: t=0.0099 s\n\n\n### 1D plot\nPlot the output in the save array `y_t`. 
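\n\nFor instance, a handful of saved snapshots can be plotted with explicit time labels; the short sketch below assumes the arrays `y_t`, `X`, and `Dt` returned by the `wave()` call above, and the snapshot indices are arbitrary choices.\n\n```python\n# Sketch: plot a few saved snapshots with time labels.\n# Assumes y_t, X, Dt from the wave() call above; the chosen indices are arbitrary.\nfig, ax = plt.subplots()\nfor jt in (0, 10, 20, 30, 40):\n    ax.plot(X, y_t[jt], label=\"t = {0:.1f} ms\".format(jt * Dt * 1e3))\nax.set_xlabel(\"x (m)\")\nax.set_ylabel(\"y (m)\")\nax.legend()\nplt.show()\n```\n\n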
Vary the time steps that you look at with `y_t[start:end]`.\n\nWe indicate time by color changing.\n\n\n```python\nax = plt.subplot(111)\nax.set_prop_cycle(\"color\", [plt.cm.viridis(i) for i in np.linspace(0, 1, len(y_t))])\nax.plot(X, y_t[:40].T, alpha=0.5);\n```\n\n\n \n\n\n\n\n\n\n### 1D Animation\nFor 1D animation to work in a Jupyter notebook, use\n\n\n```python\n%matplotlib notebook\n```\n\nIf no animations are visible, restart kernel and execute the `%matplotlib notebook` cell as the very first one in the notebook.\n\nWe use `matplotlib.animation` to look at movies of our solution:\n\n\n```python\nimport matplotlib.animation as animation\n```\n\nGenerate one full period:\n\n\n```python\ny_t, X, Dx, Dt, step = wave(Nt=100)\n```\n\n c = 100.0 m/s\n Dx = 0.01 m, Dt = 0.0001 s, Dx/Dt = 100.0 m/s\n beta = 1.0\n Completed 99 iterations: t=0.0099 s\n\n\nThe `update_wave()` function simply re-draws our image for every `frame`.\n\n\n```python\ny_limits = 1.05*y_t.min(), 1.05*y_t.max()\n\nfig1 = plt.figure(figsize=(5,5))\nax = fig1.add_subplot(111)\nax.set_aspect(1)\n\ndef update_wave(frame, data):\n global ax, Dt, y_limits\n ax.clear()\n ax.set_xlabel(\"x (m)\")\n ax.set_ylabel(\"y (m)\")\n ax.plot(X, data[frame])\n ax.set_ylim(y_limits)\n ax.text(0.1, 0.9, \"t = {0:3.1f} ms\".format(frame*Dt*1e3), transform=ax.transAxes)\n\nwave_anim = animation.FuncAnimation(fig1, update_wave, frames=len(y_t), fargs=(y_t,), \n interval=30, blit=True, repeat_delay=100)\n\n```\n\n\n \n\n\n\n\n\n\nIf you have ffmpeg installed then you can export to a MP4 movie file:\n\n\n```python\nwave_anim.save(\"string.mp4\", fps=30, dpi=300)\n```\n\nThe whole video can be incoroporated into an html page (uses the HTML5 `